
AI policy explored for hardware 


Science & Technology, UK (Commonwealth Union) – A central proposal in a major new report calls for a global registry to track the movement of chips destined for AI supercomputers, as part of a broader push to regulate “compute” – the fundamental hardware underlying all AI systems – in order to mitigate misuse and avert AI-related disasters. 

Among the other technical suggestions put forth in the report are “compute caps” – built-in limits on the number of chips each AI system can interface with – and the distribution of a “start switch” for AI training among multiple entities, creating a digital veto that could block risky AI deployments before they access data. 
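The report does not prescribe how a compute cap would be enforced, but one way to picture the idea is as an admission check applied when training jobs are scheduled. The sketch below is purely illustrative: the threshold value, the TrainingJob structure, and the admit function are hypothetical, not drawn from the report.

```python
from dataclasses import dataclass

# Hypothetical cap on the number of accelerator chips one training
# job may interconnect; a real threshold would be set by regulators.
MAX_CHIPS_PER_JOB = 10_000

@dataclass
class TrainingJob:
    job_id: str
    requested_chips: int

def admit(job: TrainingJob) -> bool:
    """Admit a job only if it stays within the compute cap."""
    return job.requested_chips <= MAX_CHIPS_PER_JOB

print(admit(TrainingJob("run-001", 8_192)))   # True: under the cap
print(admit(TrainingJob("run-002", 25_000)))  # False: exceeds the cap
```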

Researchers argue that AI chips and data centers represent more tangible targets for oversight and governance in AI safety efforts: they have a physical presence, unlike the other components of the “AI triad” – data and algorithms – which can in theory be replicated endlessly. 

Highlighting the concentrated nature of supply chains for powerful computing chips vital for driving generative AI models, experts underscore the hardware’s significance as a key intervention point for policies aimed at mitigating AI-related risks. 

Authored by nineteen experts and co-led by three University of Cambridge institutes – the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER), and the Bennett Institute for Public Policy – in collaboration with OpenAI and the Centre for the Governance of AI, the report provides critical insights into the governance of AI and proposes concrete measures for enhancing AI safety. 

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” says Haydn Belfield, who is a co-lead author of the report from Cambridge’s LCFI. 

The report presents potential avenues for governing computation, drawing parallels between AI training and uranium enrichment. 

“International regulation of nuclear supplies focuses on a vital input that undergoes a rigorous, resource-intensive process,” Belfield pointed out. “Prioritizing compute could afford AI governance a similar approach.” 

Policy proposals are categorized into three main approaches: enhancing global transparency regarding AI computation, optimizing compute allocation for societal benefit, and imposing constraints on computational power. 

For instance, one proposal entails establishing an internationally audited registry for AI chips, requiring producers, sellers, and resellers to document every transaction. Such a registry would give precise insight into the computational capacity held by nations and corporations, according to the report’s authors. 

Additionally, the report suggests integrating a unique identifier into each chip to deter industrial espionage and illicit chip trafficking. 
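Purely as an illustration of how a registry keyed to unique chip identifiers might be modeled, consider an append-only ledger of transfer records that auditors can replay. Everything here – the ChipTransfer record, the ChipRegistry class, the example names – is a hypothetical sketch, not a design from the report.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ChipTransfer:
    chip_id: str    # unique identifier embedded in the chip
    seller: str
    buyer: str
    timestamp: datetime

class ChipRegistry:
    """Append-only log of transfers; auditors replay the history."""

    def __init__(self) -> None:
        self._log: list[ChipTransfer] = []

    def record(self, transfer: ChipTransfer) -> None:
        self._log.append(transfer)

    def current_holder(self, chip_id: str) -> str | None:
        holder = None
        for t in self._log:  # the last recorded buyer holds the chip
            if t.chip_id == chip_id:
                holder = t.buyer
        return holder

registry = ChipRegistry()
registry.record(ChipTransfer("CHIP-0001", "FabCo", "CloudCorp",
                             datetime.now(timezone.utc)))
print(registry.current_holder("CHIP-0001"))  # CloudCorp
```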

Additional recommendations for improving visibility and accountability include requiring cloud computing providers to disclose large-scale AI training runs and implementing privacy-preserving “workload monitoring” mechanisms, so that computational capabilities cannot escalate without adequate transparency. 
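One minimal sketch of the privacy-preserving idea, assuming a scheme the report does not spell out: a provider discloses only coarse, bucketed totals for workloads above a size threshold, never customer identities or raw details. The threshold and bucketing below are assumptions for illustration.

```python
# Illustrative only: disclose approximate scale of large workloads,
# not who ran them. The threshold is a made-up figure, not one from
# the report.
REPORTING_THRESHOLD_CHIP_HOURS = 1_000_000

def quarterly_report(workloads: dict[str, float]) -> list[dict]:
    """Report only workloads large enough to matter for oversight,
    and only their approximate scale, never customer identities."""
    disclosed = []
    for _customer, chip_hours in workloads.items():
        if chip_hours >= REPORTING_THRESHOLD_CHIP_HOURS:
            disclosed.append({"approx_chip_hours": round(chip_hours, -5)})
    return disclosed

print(quarterly_report({"acme": 2_437_000.0, "smallco": 12_000.0}))
# [{'approx_chip_hours': 2400000.0}]
```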

According to Belfield, the users of computational resources will partake in a spectrum of activities ranging from advantageous and benign to potentially detrimental, and determined entities will seek means to bypass limitations. 

“Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.” 

These could encompass constraints such as physical limits on chip-to-chip communication, or cryptographic tools enabling the remote deactivation of AI chips under extreme conditions. 
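The report does not specify a cryptographic design, but the core pattern – a chip that acts only on an authentically signed command – can be shown in miniature. The toy below uses a shared-key HMAC for brevity; real hardware would more plausibly verify asymmetric signatures in a secure element, and the key and command names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key provisioned into the chip at manufacture;
# real hardware would likely verify asymmetric signatures instead.
TRUSTED_KEY = b"example-provisioned-secret"

def sign_command(key: bytes, command: bytes) -> bytes:
    return hmac.new(key, command, hashlib.sha256).digest()

def chip_handle(command: bytes, tag: bytes) -> str:
    """The chip acts only on authentically signed commands."""
    expected = sign_command(TRUSTED_KEY, command)
    if not hmac.compare_digest(expected, tag):
        return "rejected: bad authentication tag"
    if command == b"DEACTIVATE":
        return "chip deactivated"
    return "unknown command ignored"

tag = sign_command(TRUSTED_KEY, b"DEACTIVATE")
print(chip_handle(b"DEACTIVATE", tag))        # chip deactivated
print(chip_handle(b"DEACTIVATE", b"forged"))  # rejected
```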

An alternative proposal involves a system where the consent of multiple stakeholders is necessary to activate AI computing power for highly risky training operations, echoing protocols seen in nuclear weaponry. 
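In software terms, that multi-party consent resembles a k-of-n quorum check before compute is released. The sketch below is a hypothetical digital analogue of the two-person rule; the 2-of-3 quorum and party names are assumptions, not details from the report.

```python
# Illustrative k-of-n consent gate; the quorum size and the party
# names are hypothetical.
REQUIRED_APPROVALS = 2
RECOGNIZED_PARTIES = {"regulator", "provider", "auditor"}

def may_start_training(approvals: set[str]) -> bool:
    """Allow a high-risk run only with a quorum of distinct,
    recognized approvers -- a digital two-person rule."""
    return len(approvals & RECOGNIZED_PARTIES) >= REQUIRED_APPROVALS

print(may_start_training({"regulator"}))             # False: no quorum
print(may_start_training({"regulator", "auditor"}))  # True
```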

Policies aimed at mitigating AI risks might involve prioritizing computing resources for research endeavors with the highest potential societal benefits, spanning areas like renewable energy, healthcare, and education. This might manifest through large-scale international AI initiatives, pooling computational resources to address global challenges. 

The authors of the report emphasize that their policy suggestions are preliminary explorations rather than fully developed proposals. They acknowledge that each suggestion carries potential drawbacks, such as risks of data leaks and adverse economic effects, as well as potentially impeding positive AI progress. 

They outline five key considerations for regulating AI based on computational power, which include exempting small-scale and non-AI computing, regularly reassessing computational thresholds, and prioritizing the preservation of privacy. 

Belfield pointed out that attempting to regulate AI models at the deployment stage may prove as ineffective as chasing shadows: “Those advocating for AI regulation should instead focus upstream on compute, the foundational force driving the AI revolution.” 
