As of Feb. 2, 2025, the first few requirements of the E.U.’s AI Act are legally binding. Businesses operating in the region that do not abide by these requirements are at risk of a fine of up to 7% of their global annual turnover.
Certain AI use cases are now prohibited, including using AI to manipulate behaviour and cause harm, for example, to teenagers. However, Kirsten Rulf, co-author of the E.U. AI Act and partner at BCG, said these prohibitions apply to "very few" companies.
Other examples of now-prohibited AI practices include:
- AI “social scoring” that causes unjust or disproportionate harm.
- Risk assessment for predicting criminal behaviour based solely on profiling.
- Unauthorised real-time remote biometric identification by law enforcement in public spaces.
"For example, banks and other financial institutions using AI must carefully ensure that their creditworthiness assessments do not fall into the category of social scoring," Rulf said. Read the complete list of prohibited practices via the E.U.'s AI Act.
In addition, the Act now requires staff at companies that either provide or use AI systems to have "a sufficient level of AI literacy." This can be achieved either by training existing staff internally or by hiring staff with the appropriate skillset.
“Business leaders must ensure their workforce is AI-literate at a functional level and equipped with preliminary AI training to foster an AI-driven culture,” Rulf said in a statement.
SEE: TechRepublic Premium’s AI Quick Glossary
The next milestone for the AI Act will come at the end of April, when the European Commission will likely publish the final Code of Practice for General Purpose AI Models, according to Rulf. The code will become effective in August, as will the powers of member state supervisory authorities for enforcing the Act.
“Between now and then, businesses must demand sufficient information from AI model providers to deploy AI responsibly and work collaboratively with providers, policymakers, and regulators to ensure pragmatic implementation,” Rulf advised.
The AI Act is not stifling innovation but enabling it to scale, according to its co-author
While many have criticised the AI Act, as well as the strict approach the E.U. has towards regulating tech companies in general, Rulf said during a BCG roundtable for the press that this first phase of the legislation marks the “start of a new era in AI scaling.”
“(The Act) brings the guardrails and quality and risk management framework into place that it needs to scale up,” she said. “It’s not stifling innovation… it’s enabling the scaling of AI innovations that we all want to see.”
She added that AI inherently comes with risks, and that scaling it up without addressing them will erode the efficiency benefits and endanger the reputation of the business. "The AI Act provides you with a really good blueprint of how to tackle these risks, of how to tackle these quality issues, before they occur," she said.
According to BCG, 57% of European companies cite uncertainty surrounding AI regulations as an obstacle. Rulf acknowledged that the current definition of AI that falls under the AI Act “cannot be operationalized easily” because it’s so broad, and was written as such to be consistent with international guidelines.
“The difference in how you interpret that AI definition for a bank is the difference between 100 models falling under that regulation, and 1,000 models plus falling under that regulation,” she said. “That, of course, makes a huge difference both for capacity costs, bureaucracy, scrutiny, but also can even policy makers keep up with all of that?”
Rulf stressed that it is important for businesses to engage with the E.U. AI Office while the standards for the parts of the AI Act yet to be phased in are still being drawn up, so that policymakers can make them as practical as possible.
SEE: What is the EU’s AI Office? New Body Formed to Oversee the Rollout of General Purpose Models and AI Act
“As a regulator and policy maker, you don’t hear these voices,” she said. “You cannot deregulate if you don’t know where the big problems and stepping stones are… I can only encourage everyone to really be as blunt as possible and as industry-specific as possible.”
Despite the criticism, Rulf said the AI Act has "evolved into a global standard" and has been copied both in Asia and in certain U.S. states. This means many companies may not find compliance too taxing if they have already adopted a responsible AI program to comply with other regulations.
SEE: EU AI Act: Australian IT Pros Need to Prepare for AI Regulation
More than 100 organisations, including Amazon, Google, Microsoft, and OpenAI, have already signed the E.U. AI Pact and volunteered to start implementing the Act’s requirements ahead of legal deadlines.