Overview
Despite their remarkable performance and wide range of applications, Machine Learning (ML) systems have many pitfalls. For example, Deep Neural Networks (DNNs) can be fooled into malfunctioning by applying a small perturbation to the input data, indicating that these techniques are not robust. Further, incidents have been reported in which ML decisions were unfair and harmed people in minority or historically disadvantaged groups. Troubleshooting such issues in some of the best-performing ML systems (e.g., DNNs) is also not possible because of their black-box nature; in other words, they are not human-interpretable. Furthermore, unlike standard software systems, the operation of an ML system cannot be formally verified against a user specification. To address these issues, the ML community has extensively researched techniques to make ML systems trustworthy, because human lives may depend on these systems when they are deployed in the real world. The umbrella covering interpretability, robustness, fairness, and verifiability of ML systems is commonly referred to as trustworthy ML.
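To make the robustness pitfall above concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), one standard way such small adversarial perturbations can be constructed. It is a minimal sketch assuming PyTorch; `model`, `image`, `label`, and `epsilon` are hypothetical placeholders for illustration only, not artifacts of the workshop.

```python
# Minimal FGSM sketch (assumes PyTorch). Placeholders: model, image, label, epsilon.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true label
    loss.backward()                               # gradient of the loss w.r.t. the input
    # Step in the direction that increases the loss, bounded elementwise by epsilon
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()         # keep pixel values in a valid range
```

Even a perturbation this simple can often flip the prediction of an otherwise accurate classifier, which is why robustness is one of the workshop's core themes.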
This workshop aims to raise awareness among practitioners and users of the possible pitfalls of existing ML algorithms and to emphasize the importance of developing trustworthy ML algorithms. To achieve this objective, the workshop will bring together international experts in ML interpretability, fairness, robustness, and verifiability to discuss the progress so far, open issues and challenges, and the path forward.
The workshop will be held virtually on 29th July 2022. REGISTRATION IS FREE.
Invited Speakers
David Bau
Assistant Professor, Northeastern University, USA
David Bau is an Assistant Professor at the Northeastern University Khoury College of Computer Sciences. He received his PhD from MIT and AB from Harvard, and he has previously worked at Google and Microsoft. He is known for his network dissection studies of individual neurons in deep networks and has published research on the interpretable structure of learned computations in large models in PNAS, CVPR, NeurIPS, and SIGGRAPH. Prof. Bau is also co-author of the textbook Numerical Linear Algebra.
Adrian Baldwin
Senior Researcher, HP Labs, United Kingdom
Adrian Baldwin is a senior researcher in the Security Lab within HP Labs, UK. Over the years, he has worked on and published in a range of security areas, including security analytics and using ML for security, modeling systems to understand security trade-offs, automating audits, and securing audit logs. He has a Ph.D. in neural networks and natural language understanding from Exeter University.
Ransalu Senanayake
Postdoctoral Researcher, Stanford University, USA
Ransalu Senanayake is a postdoctoral research scholar in the Machine Learning Group at the Department of Computer Science, Stanford University. Working at the intersection of modeling and decision-making, he focuses on making autonomous systems equipped with ML algorithms trustworthy. Prior to joining Stanford, Ransalu obtained a Ph.D. in Computer Science from the University of Sydney in 2019. He has been an Associate Editor for the IEEE International Conference on Intelligent Robots and Systems (IROS) since 2021.
Gilbert Lim
AI Scientist, SingHealth, Singapore
Gilbert is currently an AI Scientist with SingHealth and holds research appointments at the SingHealth Duke-NUS Ophthalmology & Visual Sciences Academic Clinical Programme and the Singapore Eye Research Institute. His past research involved the application of machine learning to healthcare, most prominently in ophthalmology, and has been published in journals such as JAMA, The Lancet Digital Health, and npj Digital Medicine. He obtained his doctorate in computer science from the National University of Singapore in 2016.
Jay Nandy
Postdoctoral Researcher, Google India, India
Jay Nandy is currently working as a visiting researcher at Google Research, India. He completed his Ph.D. at the School of Computing, National University of Singapore, in 2021. Before joining Google, he also worked as a research assistant at NUS. His research interests include robustness of deep learning models, predictive uncertainty estimation, and unsupervised and weakly supervised learning. He has published in premier AI conferences.
Workshop Schedule
All times are in Indian Standard Time (GMT+5:30).
Topics of the invited talks may be changed by the speakers.
Time | Session | Speaker/s |
---|---|---|
4.15 - 4.20 pm | Welcome | Sanka Rasnayaka |
4.20 - 5.00 pm | Machine Learning Risk Management Frameworks | Adrian Baldwin |
5.00 - 5.40 pm | Dimensions of Trust in Machine Learning for Healthcare | Gilbert Lim |
5.40 - 6.00 pm | Explainable AI for Smart City Applications | Sandareka Wickramanayake |
6.00 - 6.10 pm | Break | - |
6.10 - 6.50 pm | Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples | Jay Nandy |
6.50 - 7.30 pm | Direct Model Editing | David Bau |
7.30 - 8.10 pm | How Do We Fail? Stress Testing Vision-based Systems Using Reinforcement Learning | Ransalu Senanayake |
8.10 - 8.20 pm | Break | - |
8.20 - 9.20 pm | Panel Discussion | Moderator - Dileepa Fernando |
9.20 - 9.30 pm | Closing Remarks | Dileepa Fernando |
Register
Limited seats available. Register here: REGISTRATION LINK
Organizers
Sandareka Wickramanayake - University of Moratuwa
Dileepa Fernando - Nanyang Technological University
Sanka Rasnayaka - National University of Singapore
Program Committee
Ashraf Abdul - National University of Singapore