Privacy-aware Multimedia Analytics: Towards Digital Trust

In this talk, we will start with our current research on privacy-aware multimedia analytics, presenting three works that cover different aspects of privacy and analytics.

The first work is about privacy protection against machines. Powered by machine learning and big data, algorithms often act as tools for privacy violation by automatically selecting content with sensitive information, such as photos containing faces. To counter this, the key idea is to perturb images using adversarial machine learning so that sensitive image attributes are protected while the images themselves are not visibly degraded. We conducted an experimental study of the factors that influence human sensitivity to visual changes, which led to the concept of an image-specific human sensitivity map. Using this map, we developed an image perturbation model that can subtly alter an image so that sensitive attributes such as gender are misclassified.

The second work concerns privacy-preserving analytics on images. Attributes such as gender or age are important for many applications, and existing methods extract them from faces in images and videos. However, faces raise serious privacy concerns because they reveal people’s identity. We first conducted an eye-tracking-based human study of age, gender, and emotion prediction of people in images under various identity-preserving scenarios: obfuscating the eyes, the lower face, the head, or the full face. Motivated by this study, we developed a deep learning model for attribute prediction under privacy-preserving conditions and present its results.

The third work focuses on training machine learning models when data sets cannot be shared due to privacy regulations (e.g., data from medical studies). Anonymized data synthesis can enable third parties to benefit from such valuable data. We propose learning implicitly from visually unrealistic, task-relevant stimuli, which are synthesized by exciting the neurons of a trained neural network. Neuronal excitation serves as a pseudo-generative model, and it can be extended to inhibit representations associated with specific individuals, thus providing privacy. The synthesized stimuli are then used to train new classification models. Experiments on sleep apnea data show that models trained this way can provide privacy protection.

We will then give a brief summary of our work on fairness, explainability, and robustness in machine learning, and end the talk by discussing challenges and opportunities in trustworthy AI.
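To make the stimulus-synthesis idea concrete, here is a minimal sketch of activation maximization on a toy one-layer network. Everything in it is a hypothetical stand-in for illustration (the network, its random weights, the chosen unit, and the norm bound), not the model from the talk: a "stimulus" is obtained by gradient ascent on the input so as to excite a chosen unit of a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "trained network": a single layer of 16 ReLU units over a 64-dim input.
# (Stand-in for a real trained model; weights are random for illustration.)
W = rng.normal(size=(16, 64))

def activations(x):
    """Unit activations for input x."""
    return np.maximum(0.0, W @ x)

def synthesize(unit, steps=100, lr=0.1):
    """Gradient ascent on the input to excite one unit (activation maximization).

    The gradient of the unit's pre-activation W[unit] @ x w.r.t. x is simply
    W[unit], so each step moves the stimulus along that row of weights.
    """
    x = np.zeros(64)
    for _ in range(steps):
        x += lr * W[unit]          # ascend the unit's pre-activation
        n = np.linalg.norm(x)
        if n > 3.0:
            x *= 3.0 / n           # keep the stimulus bounded
    return x

stim = synthesize(unit=3)
```

The resulting stimulus strongly excites unit 3 but is visually meaningless, which is the point: it carries task-relevant signal rather than realistic content. In the talk's setting, the same ascent direction can additionally be combined with a term that inhibits units tied to specific individuals, so that the synthesized training data avoids individual-specific representations.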

Speaker’s bio
Mohan Kankanhalli is Provost’s Chair Professor of Computer Science at the National University of Singapore (NUS). He is also the Dean of the NUS School of Computing. Before becoming Dean in July 2016, he was the NUS Vice Provost (Graduate Education) during 2014–2016 and Associate Provost during 2011–2013. Mohan obtained his BTech from IIT Kharagpur and his MS and PhD from Rensselaer Polytechnic Institute. His research interests are in Multimedia Computing, Computer Vision, Information Security & Privacy, and Image/Video Processing. He has made many contributions in multimedia and vision – image and video understanding, data fusion, visual saliency – as well as in multimedia security – content authentication, privacy, and multi-camera surveillance. He directs N-CRiPT (NUS Centre for Research in Privacy Technologies), which conducts research on privacy for structured as well as unstructured (multimedia, sensor, IoT) data. N-CRiPT studies privacy at both the individual and organizational levels along the entire data life cycle. N-CRiPT, which has been funded by Singapore’s National Research Foundation, works with many industry, government, and academic partners. He earlier directed the SeSaMe (Sensor-enhanced Social Media) Centre during 2012–2018, which did fundamental exploration of social cyber-physical systems with applications in social sensing, sensor analytics, and smart systems. Mohan is a Fellow of IEEE.

16 Dec 2021

© Copyright 2021 UCI Institute for Future Health - All Rights Reserved