Interpretable Artificial Intelligence: Four Key Industries

[Image: China’s first batch of fully automated subway trains in operation]

Interpretable AI enables people to understand how AI systems make decisions, and it will be key in medicine, manufacturing, insurance, and automotive. So what does this mean for organizations?

Suppose, for example, that Spotify, the streaming music service, recommends songs by Justin Bieber to a user who is not a Belieber. That is somewhat annoying, but it does not mean Spotify’s programmers must make their algorithm transparent and easy to understand: the user may notice that the recommendations are off target, but the consequences are clearly trivial.

This is a litmus test for interpretable AI: machine learning algorithms and other AI systems that produce results humans can easily understand and trace back to their origins. The higher the stakes of an AI-based result, the greater the need for interpretable AI. Conversely, a relatively low-risk AI system may be perfectly well served by a black-box model, even if its results are difficult to understand.
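
The distinction is easy to see in code. Below is a minimal sketch using scikit-learn on synthetic data (the feature names are invented for illustration): a random forest plays the black box, while a shallow decision tree yields rules a human can follow from input to outcome.

```python
# A minimal sketch of the black-box vs. interpretable contrast, using
# scikit-learn on synthetic data. Feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt", "tenure", "age"]  # hypothetical labels

# Black box: 200 trees vote, and the combined result is hard to trace.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(f"Black box: {len(black_box.estimators_)} trees voted on each answer.")

# Interpretable: a shallow tree whose decision path reads as plain rules,
# so any single prediction can be traced back to its origins.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=feature_names))
```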

People can tolerate an app misjudging their musical taste. But they may not tolerate an AI system making far more consequential decisions, such as recommending medical treatment or denying a mortgage application.

These are high-stakes situations in which, especially when the outcome is negative, people may need a clear explanation of how a specific result was reached. In many cases, auditors, lawyers, government agencies, and other interested parties will demand the same.

Costenaro says that the need for interpretability grows as responsibility for specific decisions or outcomes shifts from humans to machines.

As AI matures, however, we may see more and more new applications take over decision-making and responsibility from humans. A music recommendation engine carries little weight, but many other real and potential use cases carry significant responsibility.

IT leaders need to take steps to ensure that their organization’s AI use cases incorporate interpretability where necessary. Gaurav Deshpande, vice president of marketing at TigerGraph, says that many CIOs are already concerned about this issue, and that they often hesitate even when they understand the value of a particular AI technology or use case.

This is another way to think about how and why organizations adopt interpretable AI systems rather than operating black-box models: their business may depend on it. A claim of bias against a music recommender is easy to shrug off, but in a high-stakes situation a similar claim can be very serious. This is why interpretable AI is likely to become a focus of business applications of machine learning, deep learning, and related disciplines.

Moshe Kranc, chief technology officer of Ness Digital Engineering, considered which use cases call for interpretable AI. His answer is simple and far-reaching: “Any use case that affects people’s lives can be affected by bias,” he said.
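
One elementary way to probe for such bias is to compare a model’s decision rates across groups. The sketch below (the data, column names, and groups are all invented for illustration) computes a simple demographic-parity gap; a large gap is not proof of bias, but it is a signal worth investigating.

```python
# A minimal sketch of one elementary bias check. The data, column names,
# and groups are all invented; real audits require far more care.
import pandas as pd

def approval_rate_by_group(decisions: pd.DataFrame) -> pd.Series:
    """Share of positive outcomes per group; a large gap warrants scrutiny."""
    return decisions.groupby("group")["approved"].mean()

# Toy decisions standing in for a model's output on held-out cases.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0],
})
rates = approval_rate_by_group(decisions)
print(rates)
print("demographic parity gap:", rates.max() - rates.min())
```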

With this in mind, various AI experts and IT leaders have identified industries and use cases in which interpretable AI is indispensable. Banking is a good example: interpretable AI is well suited wherever machines play a key role in loan decisions and other financial services. In many cases, these uses extend to other industries as well; the details may vary, but the principles stay the same, so these examples may help in thinking through interpretable AI use cases in your own organization.
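
To make the banking example concrete: if a lender uses a linear model such as logistic regression, each decision decomposes into per-feature contributions that can be read directly. The sketch below is illustrative only; the features, weights, and applicant values are invented, not taken from any real lending system.

```python
# A minimal sketch, assuming the lender uses logistic regression so each
# decision decomposes into per-feature contributions. The features,
# weights, and applicant values below are invented for illustration.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments"]
coef = np.array([0.8, -1.1, -0.9])      # learned weights (made up here)
intercept = -0.2
applicant = np.array([1.2, 0.4, 2.0])   # standardized feature values

contributions = coef * applicant        # each feature's pull on the score
score = intercept + contributions.sum()
probability = 1 / (1 + np.exp(-score))  # logistic link: score -> probability

for name, c in zip(feature_names, contributions):
    print(f"{name:>14}: {c:+.2f}")
print(f"approval probability: {probability:.2f}")
```

With a decomposition like this, a loan officer can tell an applicant which factors drove the outcome, which is exactly what a black-box model cannot offer without additional explanation tooling.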
