This page presents a handful of the many hundreds of analyses of the AI Act. We have selected these because, in our view, they contain constructive and thought-provoking ideas on how to improve the Act. The first analysis is our own.
Future of Life Institute
The Future of Life Institute (FLI), an independent nonprofit with the goal of maximising the benefits of technology and reducing its associated risks, shared its recommendations for the EU AI Act with the European Commission. It argues that the Act should ensure that AI providers consider the impact of their applications on society at large, not just the individual. AI applications that cause negligible harms to individuals could cause significant harms at the societal level. For example, a marketing application used to influence citizens’ voting behaviour could affect the results of elections. Read more of the recommendations here.
University of Cambridge institutions
The Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, two leading institutions at the University of Cambridge, provided feedback on the proposed EU AI law to the European Commission. They hope that the Act will help set international standards for enabling the benefits and reducing the risks of AI. One of their recommendations is to allow amendments to the list of restricted and high-risk systems to be proposed, increasing the flexibility of the regulation. Read the full reasoning here.
Access Now Europe
Access Now, an organisation that defends and extends the digital rights of users at risk, has also provided feedback on the EU AI Act. It worries that the law, in its current form, would fail to achieve the objective of protecting fundamental rights. Specifically, it argues that the proposal does not go far enough to protect fundamental rights in relation to biometric applications such as emotion recognition and AI polygraphs. The current draft of the AI Act imposes transparency obligations on these applications, but Access Now recommends stronger measures, such as bans, to reduce all the associated risks. Read its concrete suggestions here.
Michael Veale & Frederik Zuiderveen Borgesius
Michael Veale, Assistant Professor in Digital Rights and Regulation at University College London, and Frederik Zuiderveen Borgesius, Professor of ICT and Private Law at Radboud University in the Netherlands, provide a thorough analysis of some of the most sensitive parts of the EU AI Act. One of their article’s many surprising insights is that compliance with the Act would rest almost entirely on self-assessment: providers would verify their own compliance, with no third-party enforcement. Once standardisation bodies such as CEN and CENELEC have published their standards, third-party verification of compliance will no longer be required. The full article can be found here.
The Future Society
The Future Society, a nonprofit organisation registered in Estonia, which advocates for the responsible adoption of AI for the benefit of humanity, submitted its feedback to the European Commission on the EU AI Act. One of its suggestions is to ensure governance remains responsive to technological trends. This could be achieved by improving the flow of information between national and European institutions and systematically compiling and analysing incident reports from member states. Read the full feedback here.
Nathalie A. Smuha and colleagues
Nathalie A. Smuha, Researcher at the Faculty of Law at KU Leuven, Emma Ahmed-Rengers, PhD Researcher in Law and Computer Science at the University of Birmingham, and colleagues argue that the EU AI Act does not always accurately recognise the wrongs and harms associated with different kinds of AI systems, nor does it adequately allocate responsibility for them. They also state that the proposal does not provide an effective framework for the enforcement of legal rights and duties, neglecting to ensure meaningful transparency, accountability, and rights of public participation. Read the full article here.
The European DIGITAL SME Alliance
The European DIGITAL SME Alliance, a network of ICT small and medium-sized enterprises (SMEs) in Europe, welcomes harmonised AI regulation and the focus on ethical AI in the EU, but suggests many improvements to avoid overburdening SMEs. For instance, it contends that wherever conformity assessments are based on standards, SMEs should actively participate in developing those standards. Otherwise, standards may be written in a way that is impractical for SMEs. Many other recommendations can be read here.
The cost of the EU AI Act
The Center for Data Innovation, a nonprofit focused on data-driven innovation, published a report claiming that the EU AI Act will cost €31 billion over the next five years and reduce AI investment by almost 20%. Entrepreneur Meeri Haataja and academic Joanna Bryson published research arguing that the cost will likely be much lower, since the regulation covers mainly the small proportion of AI applications considered high-risk, and since the report’s cost analysis does not account for the benefits of regulation to the public. Finally, CEPS, a think tank and forum for debate on EU affairs, published its own analysis of the cost estimates and reached a similar conclusion to Haataja and Bryson.
Societal harm and the Act
Nathalie Smuha distinguishes societal harm from individual harm in the context of the AI Act. Societal harm concerns not the interests of any particular individual, but harm to society at large, over and above the sum of individual interests. She argues that the proposal remains imbued with concerns relating almost exclusively to individual harm and overlooks the need for protection against AI’s societal harms. The full paper can be read here.
The role of standards
Researchers at Oxford Information Labs discuss the role the EU AI Act gives to standards for AI. Their key point is that conformance with harmonised standards will create a presumption of conformity for high-risk AI applications and services. This can in turn increase confidence that such systems comply with the complex requirements of the proposed regulation, and it creates strong incentives for industry to comply with European standards. Find the lengthy analysis of the role of standards in the EU AI regulation here.