The European Union (EU) has established itself as the leader in regulating issues impacting EU companies and citizens. It was the first to introduce the General Data Protection Regulation (GDPR) to address data privacy, and it is now taking a similar approach to AI regulation, which will impact most, if not all, businesses. Should AI be regulated at all? The answer is yes, most definitely. Although limited in scope, some US states have already introduced regulation addressing consumer protection issues (see Consumer Protection and AI—7 Expert Tips To Stay Out Of Trouble).
Why Do You Need to Pay Attention?
As with the GDPR, the EU’s AI regulation will impact companies across the world. Firms looking to do business in the EU, or with EU-based businesses or EU-based consumers, will need to abide by these regulations. Simply put, these regulations will not impact you ONLY if you are willing to forgo the opportunity to serve the more than 450 million citizens of the EU. (According to the current plan, the EU AI regulation will be implemented in harmony with the GDPR and will not introduce any changes to data privacy regulations.)
According to McKinsey, “companies seeing the highest returns from AI are far more likely to report that they’re engaged in active risk mitigation.” To ensure the highest return on investment (ROI) from your AI initiatives and to ensure compliance with the coming AI regulations, you need to be proactive.
5 Things to Do Now
To start, the EU is planning to regulate AI systems that pose an “unacceptable risk” or a “high risk.” These regulations will apply without exception, both to companies designing and marketing AI systems and to companies buying AI systems from other AI companies.
Ensure that None of Your Initiatives Have an “Unacceptable Risk”
Some of the Characteristics of Unacceptable Risk Systems Include:
- Exploitative, subliminal, or unscrupulous techniques that will inevitably cause harm. An example of this is Facebook’s AI that encourages users to post incendiary content.
- All types of social scoring systems. China’s Social Credit System, which tracks the behavior and trustworthiness of all citizens, is an illustration of such systems.
- Biometric identification systems being used in public spaces, including but not limited to facial recognition systems. (see Why Are Technology Companies Quitting Facial Recognition?)
Ascertain if You Will Build or Use a “High-Risk” AI System
These “High-Risk” AI Systems May Involve:
- Critical infrastructure where the AI system may put people’s life and wellbeing at risk (e.g., transportation);
- Safety components of (essential) products (e.g., robot-assisted surgery);
- Systems used to make employment-related decisions (e.g., resume reviews and performance reviews),
- Some components of biometric identification; and
- Essential private services (e.g., loans).
High-risk AI systems will be subject to more thorough analysis and significantly stricter regulation.
An unfortunate consequence is that some completely harmless AI systems may be perceived as high-risk. An illustration of this is Duolingo (a popular language-learning platform). Since it is sometimes used by students to gain admission to educational institutions, it will be deemed high-risk. As an unintended outcome, Duolingo’s market value may decline.
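The tiering described above can be sketched as a simple classification helper. This is a hypothetical illustration only: the category names and use-case labels below paraphrase the proposal and are my assumptions, not official definitions from the regulation.

```python
# Hypothetical sketch of the EU risk-tier logic described above.
# Category labels are illustrative assumptions, not legal text.

UNACCEPTABLE_USES = {
    "subliminal_manipulation",
    "social_scoring",
    "public_biometric_identification",
}

HIGH_RISK_USES = {
    "critical_infrastructure",
    "product_safety_component",
    "employment_decisions",
    "biometric_components",
    "essential_private_services",  # e.g., loan approval
}

def risk_tier(use_case: str) -> str:
    """Map an AI system's use case to an assumed regulatory risk tier."""
    if use_case in UNACCEPTABLE_USES:
        return "unacceptable"        # prohibited outright
    if use_case in HIGH_RISK_USES:
        return "high"                # heavy compliance obligations
    return "limited_or_minimal"      # lighter transparency duties
```

A compliance team could run every planned initiative through such a check early, before procurement or development begins.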
Plan to Include Humans In the Decisions Made by the AI
The Involvement of Humans in AI Decisions Can Be Explained in Three Ways:
- Human-IN-the-loop: In such systems, a human is required to be part of the decision. An example of a Human-IN-the-loop system is the approval of a job candidate in the interview process. While AI may be capable of shortlisting great candidates, concerns about various types of bias may lead the EU to require that the AI be downgraded to an advisor-only role, with the human making the final decision.
- Human-OVER-the-loop: In Human-OVER-the-loop systems, humans may intervene in AI-based decisions. An example of such a system is automated driving directions. The human may follow the system’s chosen route most of the time but may choose to take a different route in some instances. It is currently not clear how these systems will be impacted by the EU regulations.
- Human-OUT-OF-the-loop: In Human-OUT-OF-the-loop systems, the AI generally runs without human interaction. A great example of a Human-OUT-OF-the-loop is a fully automated self-driving vehicle. These systems will be all the more challenging to introduce under the new regulations.
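The advisor-only pattern described for Human-IN-the-loop systems can be sketched in a few lines. This is a minimal, hypothetical sketch; the function names and the three-years-of-experience screening rule are invented for illustration.

```python
# Hypothetical sketch of a human-IN-the-loop hiring decision:
# the AI only recommends; a human must make the final call.

def ai_screen_candidate(resume: dict) -> bool:
    """Toy advisor model: recommends candidates with enough experience."""
    return resume.get("years_experience", 0) >= 3

def hire_decision(resume: dict, human_approves) -> bool:
    """The AI's output is advisory; the human decision is final."""
    recommendation = ai_screen_candidate(resume)
    return human_approves(resume, recommendation)

# Usage: the human may accept or override the AI's recommendation.
decision = hire_decision(
    {"name": "A. Candidate", "years_experience": 5},
    human_approves=lambda resume, rec: rec,  # human agrees with the AI here
)
```

The design point is that the AI's recommendation never flows directly into the outcome; it is always routed through the human callback.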
Prepare to Include Explainability as Part of Your AI System
Some loan applications made to the Ant Group (part of Alibaba) in China are approved in minutes because the AI involved in these decisions has access to thousands of data points gathered from Alibaba’s vast collection of data about that individual.
Because of the type of AI being used and the complexity of the algorithms involved, it is almost impossible to determine why the AI approved a loan for one person and rejected it for another. Such a system is neither transparent nor explainable.
EU AI regulations will most likely not allow such systems, since explainability is a cornerstone of these regulations, especially for high-risk AI systems. Because fair access to financial products is an area the EU is emphasizing, any supplier of AI loan-evaluation systems will need to show precisely how a decision was reached by the system, which may prove difficult.
One of the most far-reaching benefits of this regulation will be the reduction of bias, promoting fairness for all consumers and businesses. Nonetheless, it will inevitably slow down the loan approval process, reducing some of its benefits.
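One way to meet an explainability requirement is to use an inherently interpretable model, such as a linear scorecard, whose per-feature contributions can be reported back to the applicant. The sketch below is hypothetical: the features, weights, and approval threshold are invented for illustration and do not reflect any real lender's model.

```python
# Hypothetical sketch of an explainable loan decision: a linear
# scorecard whose per-feature contributions are auditable.
# Weights and threshold are illustrative assumptions only.

WEIGHTS = {"income_k": 0.5, "on_time_payments": 2.0, "open_defaults": -5.0}
THRESHOLD = 10.0

def score_with_explanation(applicant: dict):
    """Return (approved, contributions) so each factor can be itemized."""
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0)
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_with_explanation(
    {"income_k": 40, "on_time_payments": 12, "open_defaults": 1}
)
# 'why' itemizes exactly how much each factor moved the decision.
```

Unlike an opaque deep model, every decision made by such a scorecard can be decomposed into the exact contribution of each input, which is the kind of traceability the regulation appears to demand.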
Ensure Ongoing Monitoring of Systems
Because AI systems are designed to continuously learn and improve, it will not be enough for a system to comply with EU regulations at initial deployment. The regulations propose that “all providers should have a post-market monitoring system in place.”
Since the system is continuously evolving, it is important for the company to monitor it and ensure compliance on an ongoing basis. Simply put, compliance will be an ongoing and complex effort because of the frequency of AI-system updates. In essence, companies should view the EU regulations as an opportunity to proactively manage AI-related risks. Although compliance may seem cumbersome, it will most likely give you a competitive advantage if you can maintain compliance and garner the highest ROI.
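A minimal form of post-market monitoring is checking whether a system's live behavior has drifted from the behavior observed at its initial compliance review. The sketch below is a hypothetical illustration, assuming a binary approve/reject system and an invented 10% drift tolerance; real monitoring obligations will be broader than this.

```python
# Hypothetical sketch of post-market monitoring: compare the live
# approval rate against the rate seen at the initial compliance
# review, and flag drift beyond a tolerance for human re-review.

def approval_rate(decisions):
    """Fraction of approvals in a list of 1 (approve) / 0 (reject)."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline_decisions, live_decisions, tolerance=0.10):
    """Flag the system for re-review if its behavior has drifted."""
    drift = abs(
        approval_rate(live_decisions) - approval_rate(baseline_decisions)
    )
    return drift > tolerance

baseline = [1, 0, 1, 1, 0, 1, 0, 1]   # 62.5% approvals at launch
live     = [1, 1, 1, 1, 1, 1, 0, 1]   # 87.5% approvals after updates
# drift of 0.25 exceeds the 0.10 tolerance, so the system would be
# flagged and its compliance re-verified before it keeps operating
```

Because the model keeps learning, a check like this would run continuously, not once, mirroring the "ongoing effort" the article describes.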
This is a summary of the original Forbes article. To read it, click here.