AI Legislation Needs to Remain Technology-Agnostic to Avoid Hampering Technological Advancement, Claims a Recent Study
The Center for Data Innovation, a leading think tank focused on technology policy, has urged the European Union to revise the definition of artificial intelligence (AI) in its AI Act to ensure regulatory clarity, technological alignment, effective risk management, and international leadership.
Currently, the AI Act defines an AI system as a machine-based system that operates with varying levels of autonomy, exhibits adaptiveness, and infers outputs based on input received. However, the Center for Data Innovation argues that this definition is broad and may inadvertently regulate systems that do not pose novel risks, such as linear regression models and statistical programs used in spreadsheets.
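To illustrate the Center's point, consider an ordinary least-squares linear regression, essentially the same computation behind a spreadsheet's trendline feature. This sketch is not from the report; it simply shows the kind of simple statistical model that, the Center argues, the current definition could inadvertently capture:

```python
# A minimal sketch (not from the report) of a simple statistical tool the
# Center argues the current AI Act definition could inadvertently cover:
# ordinary least-squares linear regression, as used in spreadsheet trendlines.

def fit_linear(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Fit y = a*x + b by ordinary least squares (closed-form solution)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x, give the slope directly.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# The fitted model "infers outputs based on input received" in the Act's
# broad sense, yet poses no novel risk beyond ordinary statistics.
slope, intercept = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])
print(slope, intercept)  # -> 2.0 0.0
```

Under the Act's current wording, even this two-parameter model arguably "infers outputs based on input received", which is precisely the over-breadth the Center's proposed definition is meant to exclude.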
To address this issue, the Center proposes a revised definition: "'Artificial intelligence system' (AI system) means a system that, based on parameters unknown to the provider or user, infers how to achieve a given set of objectives using machine learning and produces system-generated outputs." This definition emphasizes the need for a clearer, updated conceptualization of AI that accounts for the rapid evolution of AI technologies, including general-purpose AI models.
The Center emphasizes the importance of avoiding legislation that favors one technology over another while still addressing novel risks. It suggests that the AI Act's definition be narrowed to cover only AI technology that poses new risks not presented by non-AI systems, singling out uninterpretable machine learning as the technology that warrants regulation on those grounds.
The AI Act delineates and regulates "high-risk" uses of AI, requiring systems in this category to undergo conformity assessments, transparency requirements, and post-market monitoring obligations. However, the cost of these conformity assessments for high-risk AI systems may reach €400,000, a cost that will ultimately be borne by consumers and EU businesses.
The report, authored by Patrick Grady, a policy analyst at the Center for Data Innovation, urges EU policymakers to carefully consider the implications of the AI Act's current scope, warning that failing to address the scope problem could lead to a period of serious AI deterrence in Europe.
In conclusion, the Center for Data Innovation calls on EU policymakers to revise the AI Act's definition of AI so that the law remains adaptable, practical, and forward-looking: better aligned with technological advances, effective at managing risk, legally clear, and capable of positioning the EU as a global leader in AI regulation, all while fostering AI innovation within a safe, transparent framework.
- The Center for Data Innovation recommends revising the AI Act's definition of artificial intelligence to improve regulatory clarity, align with technological advancements, and ensure effective risk management for machine learning systems.
- The proposed definition by the Center for Data Innovation, a think tank focused on technology policy, emphasizes the need for a clearer conceptualization of AI that accounts for general-purpose AI models and the rapid evolution of AI technologies.
- To avoid legislation favoring one technology over another while addressing novel risks, the Center suggests that the EU revise the AI Act's definition to cover only AI technology with novel risks not posed by non-AI systems, such as uninterpretable machine learning.
- The AI Act delineates and regulates high-risk uses of AI, mandating systems in this category to undergo assessments, transparency requirements, and post-market monitoring obligations, but the cost of these assessments may impact consumers and EU businesses significantly.
- The Center for Data Innovation's report urges EU policymakers to carefully consider the implications of the current scope of the AI Act, as failing to address the scope issue could lead to stunted AI innovation and a period of deterrence in Europe.