Microsoft introduces tools for the ethical use of machine learning

The tools will allow developers to better understand and control the behavior of machine learning models.

Microsoft has introduced tools for the responsible use of machine learning that are meant to fix common problems with neural networks, such as unrepresentative training samples, insufficient data privacy, and more. The company told TJ about this.

As Eric Boyd, vice president of Microsoft Azure AI, noted, developers are increasingly expected to build AI systems that are easy to explain and that comply with regulatory requirements. But even machine learning experts are not always able to assess the correctness of such systems on their own.

To address this, Microsoft has launched tools that help developers better understand the behavior of machine learning models, protect data privacy, and avoid unrepresentative sampling. The new features will be available in the Azure Machine Learning service as well as in open source on GitHub.

The InterpretML tool is responsible for model interpretability. Fairlearn assesses and improves the fairness of AI systems: it lets developers check how well a model performs across groups of people of different sexes, skin colors, ages, and other characteristics. The WhiteNoise tool provides differential privacy. All three tools can be experimented with as open source on GitHub and used in Azure Machine Learning; brief sketches of each follow.
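
For illustration, here is a minimal sketch of the kind of group-level check Fairlearn enables. MetricFrame and selection_rate are part of the fairlearn.metrics API; the toy labels and the "sex" feature are invented for the example:

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy predictions and a sensitive feature, invented for illustration.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)

print(mf.overall)       # metrics over the whole dataset
print(mf.by_group)      # the same metrics broken down per group
print(mf.difference())  # the largest gap between groups, per metric
```

InterpretML's headline feature is its "glassbox" models, which are interpretable by construction. A short sketch, using scikit-learn's bundled breast cancer dataset as a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# An Explainable Boosting Machine: accuracy close to gradient boosting,
# but every feature's contribution can be plotted and inspected.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Opens an interactive view of how much each feature drives predictions.
show(ebm.explain_global())
```

WhiteNoise's own API is not reproduced here; the sketch below only illustrates the core idea of differential privacy that such tools implement: answer aggregate queries with noise calibrated to how much any single record can change the result. The dp_mean helper and the toy ages are invented for the example:

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Epsilon-differentially-private mean via the Laplace mechanism."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    # Changing one of n records moves the clipped mean by at most
    # (upper - lower) / n, so Laplace noise at scale sensitivity / epsilon
    # hides any individual's contribution.
    sensitivity = (upper - lower) / len(clipped)
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

ages = np.array([23, 35, 41, 29, 52, 60, 37, 44])
print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))
```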

Microsoft gave examples of why the new tools are needed. In 2016, Marco Tulio Ribeiro, a senior researcher at the company's lab in Redmond, presented a research paper on explaining how computer vision classifiers arrive at their predictions.

To demonstrate the technique, he deliberately trained a classifier to predict "wolf" for any photo with a snowy background and "husky" for any photo without snow. He then ran the model on images of wolves in snowy landscapes and of huskies without snow, and showed the results to experts who did not know how the system had been trained.

The researcher asked the experts whether they trusted the model and how they thought it made its predictions. Fewer than half of the experts mentioned the background as a possible factor, and no one pointed to the snow. The rest were confident in the system, believing it was paying attention to features such as the animals' sharp teeth.

Then I showed them the explanations, and when they saw them, of course, almost everyone understood that the model was guided only by the background. This shows that even a bad model can mislead experts.

Marco Tulio Ribeiro
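
The experiment comes from the paper that introduced LIME (Local Interpretable Model-agnostic Explanations), which the open-source lime package grew out of. Below is a toy sketch of producing such an explanation; classify_fn is an invented stand-in for the biased husky/wolf model that scores bright ("snowy") images as wolves, not Ribeiro's actual classifier:

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))  # stand-in for an input photo

def classify_fn(images):
    # Assumed model interface: a batch of RGB images in [0, 1] in,
    # rows of [p_husky, p_wolf] out. Brightness plays the role of snow.
    brightness = images.mean(axis=(1, 2, 3))
    return np.stack([1.0 - brightness, brightness], axis=1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classify_fn, top_labels=2, num_samples=1000
)

# Keep only the superpixels that pushed the top prediction; in the
# husky/wolf study these turned out to be the snowy background.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img, mask)  # image with explanation outlined
```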
