The European Commission will next week release its long-awaited plan on how to proceed toward laws for artificial intelligence (AI) that ensure the technology is developed and used in an ethical way.
The latest leaked proposal suggests a few options that the Commission is still considering for regulating the use of AI, including a voluntary labelling framework for developers and mandatory risk-based requirements for “high-risk” applications in sectors such as health care, policing, or transport.
However, an earlier proposal to introduce a three-to-five-year moratorium on the use of facial recognition technologies has vanished, suggesting the Commission won’t proceed with this idea.
The bloc’s executive arm is expected to propose updating existing EU safety and liability rules to address new AI risks.
“Given how fast AI evolves, the regulatory framework must leave room for further developments,” the draft says.
AI requires a “European governance structure”, the paper says, potentially replicating the model of the EU’s network of national data protection authorities.
EU governments are beginning to move forward on AI, “risking a patchwork of rules” throughout the continent, the draft says. Denmark, for example, has launched a data ethics seal. Malta has introduced a certification system for AI.
Following the release of the Commission white paper next week, the EU will spend months collecting feedback from industry, researchers, civil society and governments. Hard laws are expected to be drafted in the autumn.
High-risk AI
The Commission’s thinking on AI – ordered by new President Ursula von der Leyen as one of the initiatives she wants to launch in her first 100 days in office – is part of a global debate about these new technologies. Several researchers have sounded the alarm that unregulated AI could undermine data privacy, enable rampant online hacking and financial fraud, and lead to wrong medical diagnoses or biased decisions on lending and insurance. Last year, leaders of the G20 nations agreed to a set of broad ethical principles for AI, but have not yet taken up the kind of specific measures being discussed by the Commission.
According to the Commission’s draft paper, the challenge in pinning rules onto AI is that many of the decisions made by algorithms will in the future be illegible to humans – “even the developers may not know why a certain decision is reached”. This has become known as AI’s “black box” decision making.
EU laws should in any case differentiate between “high-risk” and “low-risk” AI, with high-risk applications tested before they come into everyday use.
It will be necessary to set “appropriate requirements” on any data fed to AI algorithms, in order to ensure “traceability and compliance”, the paper says.
AI algorithms should be trained on data in Europe “if there is no way to determine the way data has been gathered.”
Responsibility for AI applications should be shared between “developer and deployer”. Accurate records on data collection will need to be maintained.