
“It is difficult to regulate something we do not fully understand, especially when we do not yet know how it will affect our future,” explained innovation expert Norman Lewis, visiting fellow at MCC Brussels, during the March 26 edition of Transylvania Lectures. The event in Kolozsvár/Cluj-Napoca brought together guests who debated the European Union’s stance on artificial intelligence and the effectiveness of its current approach, thoroughly analyzing arguments for and against AI regulation.

When discussing AI regulation, experts generally fall into two camps. Some argue that AI is over-regulated, stifling innovation, creating barriers for startups, and pushing talent to less restrictive markets. The other side expresses concern that lax rules may lead to misuse, biased algorithms, and unethical practices. These opposing views were central to the discussions at the March 26 event hosted by Mathias Corvinus Collegium (MCC). The discussion was moderated by Balázs Len, a law student at Sapientia University and a participant in MCC's University Program.

The debate offered valuable insight into the challenges and opportunities of AI governance. Norman Lewis, an internationally recognized author and speaker specializing in innovation and technology, has focused his research on the EU's digital sovereignty strategy and the negative effects of regulation, including restrictions on online content and freedom of expression. He was joined in the discussion by Maria Ruxandra Bodea, a lawyer and lecturer at the Faculty of Law, University of Bucharest.

Lewis described himself as an AI enthusiast, seeing it not merely as a pioneering invention but as the most significant human advancement to date, and emphasized the importance of leveraging its potential as a tool for progress. Because of this view, he approaches EU regulation with skepticism. His main concern is that the EU is trying to regulate something that has not yet reached its final stage of development. "Although we have made great strides in AI development, its ultimate purpose and boundaries are still unclear, so how can we regulate something properly if we can't yet measure or understand it?" he asked. He stressed that premature regulation could stifle innovation and prevent Europe from becoming a competitive global player.

Lewis pointed to the historical example of penicillin: "If the lab that discovered penicillin had followed today's regulations, the medicine might never have been developed." He argued for greater freedom in innovation and a lighter regulatory burden on those working to advance the technology. Under current legislation, any AI activity that might endanger human life is prohibited. As a result, automotive companies developing driverless vehicles are relocating their technology to other regions, citing the lack of supportive EU frameworks. Inspired by the success of the GDPR, the EU aims to apply a similar model to AI regulation, but this could place the continent at a significant disadvantage compared to global powers. Lewis noted that AI is an integral part of the ongoing global transformation, and policy must reflect that.

Maria Ruxandra Bodea noted that the AI Act is unique in that it pleases no one, neither those advocating free rein in AI development nor those favoring a more human-centered approach. She agreed with Lewis that the EU's cautious stance deters innovation and scares off potential startups and investors, pointing out that even a legal expert may find the current framework opaque. The solution, she argued, lies in laws that are transparent and understandable to all, with regulation focused on outcomes rather than on the AI systems themselves. She also warned that the current legal ambiguity could lead companies to act illegally without even realizing it.

The conversation also addressed how other countries at the forefront of AI development approach regulation. Compared to Europe, both the US and China take more relaxed stances. However, these models are not without flaws—China, for example, grants the state full access to AI systems and the data they process, offering users no protection. The EU must find a middle ground between these extremes, creating a regulatory environment that both respects user rights and fosters innovation.