“Reasonable AI” covers two ideas: artificial intelligence that can log every detail about itself – and thus becomes explainable (that is, understandable) – and AI that knows when it is not sure of the correct answer (Humble AI). Both approaches are essential in data science and together make up the overarching area of responsible AI. Here is where we currently stand with AI trends.
AI Trends For 2022: Auditable AI Methods Will Be Used More Frequently
Where do we stand with the AI trends: Auditable or interpretable machine learning models are an essential component of “auditable AI.” Initially, only a small group of specialists was interested in interpretable machine learning, but in recent months leading data science experts have also taken up the topic. The advantages of interpretable AI are now clearly recognized and applied in practice – for example, increasingly in lending.
It is nice to see that in this area, which many consider high-risk, interpretable AI is being used to achieve more transparency and fairness. That speaks strongly for explainable AI. Interpretable models can uncover, record, and monitor bias – and they do not exempt themselves: such checks also verify whether the models themselves were set up and are working without bias.
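To make this concrete, here is a minimal sketch of an interpretable "scorecard" lending model with a simple group-fairness check. The features, weights, threshold, and applicant records are all hypothetical illustrations, not a real credit-scoring scheme; the point is that every contribution to a decision is visible and the approval rates per group can be audited.

```python
# Hypothetical, fully transparent scorecard: every weight is visible,
# so each individual decision can be explained and audited.
SCORECARD = {
    "income_over_50k": 2,
    "no_prior_default": 3,
    "employed_2y_plus": 1,
}
THRESHOLD = 4  # approve if score >= THRESHOLD (illustrative value)

def score(applicant):
    """Sum the weights of the criteria the applicant fulfills."""
    return sum(w for feat, w in SCORECARD.items() if applicant[feat])

def approval_rate(applicants):
    """Share of applicants whose score clears the threshold."""
    approved = [a for a in applicants if score(a) >= THRESHOLD]
    return len(approved) / len(applicants)

# Made-up applicant data with a sensitive "group" attribute.
applicants = [
    {"group": "A", "income_over_50k": True,  "no_prior_default": True,  "employed_2y_plus": False},
    {"group": "A", "income_over_50k": False, "no_prior_default": True,  "employed_2y_plus": True},
    {"group": "B", "income_over_50k": True,  "no_prior_default": True,  "employed_2y_plus": True},
    {"group": "B", "income_over_50k": False, "no_prior_default": False, "employed_2y_plus": True},
]

# Bias check: compare approval rates across groups.
rate_a = approval_rate([a for a in applicants if a["group"] == "A"])
rate_b = approval_rate([a for a in applicants if a["group"] == "B"])
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")
```

A gap between the group rates does not prove unfairness by itself, but with an interpretable model one can trace exactly which criterion drives the gap and decide whether it is justified.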
Prediction: Data Scientists Lead The Way In Adopting “Modest AI”
Where do we stand now: A big step towards “Modest AI” is Machine Learning Model Operationalization Management (MLOps). In addition to proactive and continuous model monitoring, MLOps holds that no hard distinction should be made between developing a machine learning model and using it in practice. In the past, these were usually two distinct phases, often weeks or months apart.
In today’s regulatory environment, on the other hand, closer integration is expected. With continuous model monitoring, the alarm bells ring early if the assumptions underlying the model are violated – the complete opposite of a user blindly and naively accepting every result of the AI. That is what Humble AI is all about. MLOps shows how relevant this topic is today and how important initiatives by data scientists are for achieving the goals of Humble AI.
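One common way those alarm bells are wired up in practice is input-drift monitoring. The sketch below compares the live input distribution against the training distribution using the Population Stability Index (PSI) and raises an alert when drift exceeds a threshold; the bins, distributions, and the 0.2 threshold (a widely used rule of thumb) are illustrative assumptions, not a specific product's implementation.

```python
import math

def psi(expected, actual):
    """Population Stability Index over pre-binned distributions
    (each a list of fractions summing to 1)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Hypothetical share of applications per income bin:
# at training time vs. what the live system sees today.
training_dist = [0.25, 0.50, 0.25]
live_dist     = [0.10, 0.40, 0.50]   # the top bin has doubled

drift = psi(training_dist, live_dist)
ALERT_THRESHOLD = 0.2                # common rule of thumb for PSI

if drift > ALERT_THRESHOLD:
    print(f"ALERT: input drift detected (PSI = {drift:.2f})")
```

When the alert fires, the humble response is not to trust the old model but to investigate: has the population genuinely changed, and does the model need retraining or recalibration?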
Prediction: AI Transparency And “Ethics By Design” Will No Longer Be An Option
Where are we now: I’ve blogged a lot about the recommendations of the IEEE 7000 standard: “Create models that you can reliably communicate with and explain with sufficient transparency.” That thought is becoming more and more important as the regulation of AI comes into focus.
The Brookings Institution, for example, says that the European Union’s AI Act “aims to create the first comprehensive regulatory framework for artificial intelligence, but its implications will not stop at EU borders. Some EU politicians even consider setting a global standard a key goal of the AI Act, leading some to speak of a race to regulate AI.”
AI Trends: Discussion About High-Risk Artificial Intelligence
In my opinion, high-risk AI is not yet widely deployed, and the EU’s AI regulation is overshooting the mark. The intention is good, but not all AI is risky. In particular, interpretable machine learning can specify what influences the results. It is hoped that the regulators’ overreaction to AI will be limited as the year progresses.
My approach is to find a middle ground: let users carefully and responsibly choose an algorithm that was developed with ethical considerations in mind. It is also essential to question the model repeatedly while it is in use – because, in the words of the well-known statistician George Box: “Essentially, all models are wrong, but some are useful.”
Looking back, my outlook on AI trends for 2022 proved accurate. The responsible use of artificial intelligence is increasingly coming to the fore and is slowly becoming a matter of course in the data science world.