Microsoft introduces new tools for responsible AI

by Emma


(Image: Example of Error Analysis exposing the distribution of errors.)

Microsoft has announced new capabilities in its responsible AI (RAI) toolkits to help data scientists reduce bias in their machine learning models. Last May at Microsoft Build, the company announced three tools for the toolkit: InterpretML, Fairlearn, and SmartNoise.

SmartNoise, a collaboration between Microsoft and Harvard, uses differential privacy to protect personal data while still allowing researchers to gather insights from it. SmartNoise now offers the ability to generate synthetic data: an artificial sample derived from the original dataset.

Because analyses can be run against the synthetic dataset alongside the original, researchers can continue to work with the same data without increasing privacy risk. The synthetic data capability will allow for increased collaboration between research parties, democratized knowledge, and open dataset initiatives, the company explained.
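The differential privacy that underpins SmartNoise can be illustrated with the classic Laplace mechanism: a query result (here, a count) is released with calibrated noise so that no individual record can be inferred from the output. This is a minimal, hedged sketch of the general technique, not SmartNoise's actual API; the numbers are hypothetical.

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon is
    enough to mask any single individual's presence.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
true_count = 1000  # e.g. number of records matching some query
noisy_count = laplace_count(true_count, epsilon=1.0, rng=rng)
print(f"released count: {noisy_count:.1f}")
```

A smaller epsilon means more noise and stronger privacy; SmartNoise layers mechanisms like this (and synthesizers trained under the same guarantee) behind a higher-level interface.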

Microsoft has also announced the release of a new tool called Error Analysis. This tool will enable data scientists to understand the patterns in their errors, identify subgroups with higher inaccuracy, and visually diagnose root causes of the errors. 

According to Microsoft, Error Analysis can be used to dive deeper into questions such as: “Will the self-driving car recognition model still perform well even when it is dark and snowing outside?” or “Does the loan approval model perform similarly for population cohorts across ethnicity, gender, age, and education?”
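The core idea behind that kind of question is breaking a model's errors down by cohort rather than looking only at overall accuracy. A minimal sketch of such a breakdown in pandas (the dataset, cohorts, and predictions below are entirely hypothetical; Error Analysis itself provides an interactive dashboard on top of this idea):

```python
import pandas as pd

# Hypothetical evaluation data: true labels, model predictions,
# and a cohort attribute to slice errors by.
df = pd.DataFrame({
    "cohort":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label":     [1,   0,   1,   1,   0,   1,   0,   1],
    "predicted": [1,   0,   0,   1,   0,   0,   0,   0],
})

# Per-cohort error rate: the kind of subgroup comparison
# an error-analysis tool visualizes.
df["error"] = (df["label"] != df["predicted"]).astype(int)
error_by_cohort = df.groupby("cohort")["error"].mean()
print(error_by_cohort)
```

An overall error rate would hide the gap; grouping reveals that cohort B is misclassified more often than cohort A, which is the starting point for diagnosing a root cause.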

Error Analysis had already been in use within Microsoft. It started as a research project in 2018 as part of a collaboration between Microsoft Research and the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee. 

Going forward, Microsoft plans to add Error Analysis and other RAI tools to a larger model assessment dashboard, expected to be available in mid-2021 both as open source and in Azure Machine Learning.

“The work doesn’t stop here. We continue to expand the capabilities in Fairlearn, InterpretML, Error Analysis, and SmartNoise. We hope you’ll join us on GitHub and contribute directly to helping everyone build AI responsibly,” Sarah Bird, principal program manager at Microsoft; Besmira Nushi, principal researcher at Microsoft; and Mehrnoosh Sameki, senior program manager at Microsoft, wrote in a blog post.
