A Workshop on:

Digitisation, Data and AI for Health




Ashoka University, 2021


Topics for discussion

  • Electronic health records and the theory of public good
  • Digital identity for health
  • Impact on privacy and other fundamental rights; considerations for design architecture
  • AI and data science techniques for individual and public health

 


Speakers

Avik Ghose
Principal Scientist, TCS

Sriram Lakshminarasimhan
Google Research

Partha Pratim Chakrabarti
Professor, CSE, IIT Kharagpur, and Advisor for CS@Ashoka

Divy Thakkar
Google Research, India

Subhashis Banerjee
Ashoka University, IIT Delhi

Ganesh Ramakrishnan
Professor-in-charge, KCDH, IIT Bombay; Institute Chair Professor, Dept of CSE


Abstracts

The Potential of Ubiquitous Sensing and Computing in Improving Public Health (Avik Ghose)
Public health authorities face the twin challenges of the socio-economic dynamics of health: preventing outbreaks of communicable diseases while managing the prevalence of lifestyle disorders. Ubiquitous sensing has been shown to have the potential for early detection of both infectious and non-communicable diseases. With wearable devices becoming more affordable every day, it is evident that in the near future every citizen will carry a wearable device to monitor their vitals, activity levels, etc. The question we ask is how public health can utilize this data alongside EMR/EHR data to deliver more effective therapy outcomes for patients and improved quality of life.
 

Google’s AI For Social Good (Sriram Lakshminarasimhan)
Lifestyle interventions are a first line of treatment for many important public health issues in India and around the world. Lifestyle changes, in addition to medication, can help control blood pressure, diabetes and other conditions, and deliver health benefits in a cost-effective manner. To help people adopt healthier habits, personalization has the potential to be an impactful tool. This talk highlights some aspects of designing such a platform, one that offers personalized coaching tailored to each individual through digital means.
 

Public Health and AI for Social Good (Divy Thakkar)
It is imperative to bring equity to the access and benefits of AI systems. I will discuss our work on bridging these gaps, describing how we brought together NGOs, academics and Google researchers to advance AI for Social Good for underserved communities. I will specifically discuss work aimed at improving the efficacy of public health programs through AI with ARMMAN, and my recent work on examining public health data through the lens of valuation to improve data quality and accountability.
 

A framework for privacy threat model analysis in national-scale health registries (Subhashis Banerjee)
Most attempts at building large digital public service applications, such as national identity systems and national-scale health registries, have been questioned on privacy and fairness grounds and have been difficult to operationalise; there are few successful such systems anywhere in the world. Imprecise articulation of both the theory of public good and the privacy threat models, along with untenable assumptions about privacy safeguards, has made analysis of proportionality difficult and has often generated large-scale mistrust. In this talk we will discuss a framework for analysing the privacy threat model in such large public service applications.
 

Efforts on Data Efficient Machine Learning and Healthcare Informatics at IIT Bombay (Ganesh Ramakrishnan)
State-of-the-art AI and deep learning are very data hungry. This comes at significant cost, including resource costs (multiple expensive GPUs and cloud costs), training times (often multiple days), and human labeling costs and time. In this talk we present an overview of our research efforts toward Data Efficient maChIne LEarning (DECILE) and the associated open-source platform (http://www.decile.org), in which we attempt to address the following questions. Can we train state-of-the-art deep models with only a sample (say 5 to 10%) of massive datasets, while having negligible impact on accuracy? Can we do this while reducing training time/cost by an order of magnitude, and/or significantly reducing the amount of labeled data required? I will also present an overview of the newly formed Koita Centre for Digital Health at IIT Bombay (https://www.kcdh.iitb.ac.in/).

Digitisation, Data and AI for Health
  • Location: Online
  • Date: Dec 8, 2021
  • Time: 10:45 - 12:45 IST

info@futurehealth.uci.edu

© Copyright 2021 UCI Institute for Future Health - All Rights Reserved