It is early days for responsible artificial intelligence, but Microsoft aims to help companies avoid problems and improve the performance and quality of AI applications.
I have been asked many times over the past month whether the heightened pressure that enterprises now face as a result of the Covid-19 pandemic will cause them to short-cut aspects such as responsible machine learning in order to get pilots into production more quickly.
This is certainly a possibility, but in my opinion, memories of the actions that enterprises take now will run much deeper than memories of the better-planned projects that came before the pandemic or have yet to start. More organisations will therefore aim to get artificial intelligence (AI) right during the crisis as well.
As practitioners get going in this area, here are a few things to consider.
One global bank I spoke to recently has just put in place a policy that no AI model can move into production without some interpretability and bias controls built into the lifecycle of the application.
This is a fantastic approach. Embedding governance into the entire lifecycle of machine learning helps to reduce problems later on and, above all, engenders confidence and trust in the AI that gets built. This ultimately leads to faster deployments, wider adoption and more responsible innovation.
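The bank's actual controls are not public, but a minimal sketch of what such a pre-production gate might look like follows, assuming Microsoft's open-source Fairlearn package; the `max_disparity` threshold of 0.05 is a hypothetical value chosen for illustration, not a recommendation.

```python
# A minimal sketch of a pre-deployment fairness gate, assuming the open-source
# Fairlearn package. The threshold and metric choice are illustrative only;
# the bank's real controls are not described in this column.
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

def deployment_gate(y_true, y_pred, sensitive_features, max_disparity=0.05):
    """Block promotion to production if group-level disparity is too large."""
    # Accuracy broken down by sensitive group (e.g. gender or age band).
    by_group = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sensitive_features,
    )
    # Largest gap in selection rates between any two groups.
    disparity = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive_features
    )
    print("Accuracy by group:\n", by_group.by_group)
    print(f"Demographic parity difference: {disparity:.3f}")
    return disparity <= max_disparity  # True -> model may be promoted

```

Wiring a check like this into the deployment pipeline is what turns a policy document into an enforced step in the application's lifecycle.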
Kjersten Moody, chief data and analytics officer at insurer State Farm, perhaps captures this best when she says: “As we introduce AI into our processes, we have to hold ourselves to the highest standard and we have to hold AI to that high or higher standard that we would hold our people to.”
Although still in their infancy, tools that counter potential unfairness in data and improve explainability in models are getting better, and they are a good place to start with responsible AI. They will help to minimise any negative effects, not only on customers but also on business processes, employees and the surrounding technologies that support AI.
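The column does not name particular tools, but Microsoft's open-source InterpretML package is a representative example on the explainability side: its Explainable Boosting Machine is a glass-box model whose behaviour can be inspected directly. The sketch below uses synthetic data purely for illustration.

```python
# A hedged sketch of model explainability with the open-source InterpretML
# package; the dataset and feature names are synthetic stand-ins.
import pandas as pd
from sklearn.datasets import make_classification
from interpret.glassbox import ExplainableBoostingClassifier

# Synthetic stand-in for a real training set (illustrative only).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["income", "tenure", "balance", "age"])

# An Explainable Boosting Machine is interpretable by construction.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global view: which features drive predictions across the whole dataset.
global_expl = ebm.explain_global()

# Local view: why one specific case was scored the way it was.
local_expl = ebm.explain_local(X.iloc[:1], y[:1])
```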