Machine learning systems have become ubiquitous in our lives, even if their presence goes largely unnoticed. These systems power everyday tools, such as search engines and social networks, and perform critical tasks like medical diagnostics and cyber defense. Machine learning and artificial intelligence are poised to play an even larger role in society as personal assistance, self-driving transportation, and other critical tasks are handed over to them. Despite the euphoria surrounding AI, a number of unanswered questions linger about its efficacy, robustness, explainability, and fairness. Recent research has demonstrated that ML systems remain vulnerable to adversarial manipulation, provide neither guarantees nor explanations for their decisions, and carry the biases of their training data over into their behavior. At the same time, the data-intensive nature of modern ML systems has negative implications for privacy, since data collected for one purpose often enables unintended inferences.
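To make "adversarial manipulation" concrete, the sketch below applies the fast gradient sign method (FGSM) of Goodfellow et al. (2015) to a toy logistic-regression classifier. The model, its weights, and the perturbation budget are illustrative assumptions, not any deployed system; the point is only that a small, targeted change to the input flips a confident prediction.

```python
import numpy as np

# Toy "trained" logistic-regression model on 2-D inputs.
# Weights and bias are illustrative assumptions.
w = np.array([2.0, -1.5])   # model weights (assumed already trained)
b = 0.1                     # model bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

# A benign input the model classifies confidently as class 1 (~0.95).
x = np.array([1.0, -0.5])
y = 1.0
print("clean prediction:", predict(x))

# FGSM: step the input in the direction that increases the loss.
# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w
epsilon = 1.0               # perturbation budget (illustrative value)
x_adv = x + epsilon * np.sign(grad_x)

# The perturbed input is now classified as class 0 (~0.34).
print("adversarial prediction:", predict(x_adv))
```

In real attacks the same idea is applied to deep networks with perturbations small enough to be imperceptible to humans, which is what makes the vulnerability so concerning.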
Data-protection regulations, such as the GDPR, mandate safeguards against indiscriminate data collection and misuse. Maintaining the utility of machine learning models under such regulations requires the development of new tools and techniques for model training. At the same time, preventing adversarial attacks that cause misclassification or leak private training data is necessary to earn users' trust in these systems. We are conducting research on developing robust, privacy-preserving, transparent, and fair machine learning systems.
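As one example of a training technique that limits what a model memorizes about individuals, the sketch below implements the core step of differentially private SGD (Abadi et al., 2016): clip each example's gradient, then add calibrated Gaussian noise before the parameter update. The toy batch, clipping norm, and noise multiplier are illustrative assumptions; a full implementation would also track the cumulative privacy budget across training steps.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))          # toy mini-batch of 8 examples
y = (X[:, 0] > 0).astype(float)      # toy labels
w = np.zeros(2)                      # logistic-regression weights

clip_norm = 1.0     # per-example gradient norm bound C (illustrative)
noise_mult = 1.1    # noise multiplier sigma, which sets the privacy cost
lr = 0.1            # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-example gradients of the logistic loss: (p_i - y_i) * x_i.
preds = sigmoid(X @ w)
per_example_grads = (preds - y)[:, None] * X

# Clip each example's gradient to norm <= clip_norm, bounding any
# single individual's influence on the update.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

# Sum the clipped gradients, add Gaussian noise scaled to the
# clipping bound, and average over the batch.
noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
noisy_grad = (clipped.sum(axis=0) + noise) / len(X)

w = w - lr * noisy_grad
print("updated weights:", w)
```

The clipping step is what gives each training example a bounded contribution, and the noise masks that contribution, so an observer of the trained model learns little about whether any one person's data was used.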