
Hong Kong can be a leader in mitigating the dangers of AI

Hong Kong Free Press

By Nate Sharadin

Artificial intelligence (AI) will be regulated. The European Union has moved the AI Act forward for negotiation. The Biden administration and the UK government are engaged. China has already begun to act.

The reasons to regulate are clear: advanced AI systems are positioned to cause serious, potentially catastrophic, harm. It’s easy to lose track of the specific risks in the flood of information about the different kinds of harms potentially caused by highly capable systems. It’s therefore helpful to think about concrete examples.

Visitors look at AI (artificial intelligence) security cameras with facial recognition technology at the 14th China International Exhibition on Public Safety and Security at the China International Exhibition Center in Beijing, October 24, 2018. Photo: NICOLAS ASFOURI / AFP.

Here is an important recent example of extraordinarily risky AI development. Researchers from Switzerland and the USA designed, developed, and then publicly released a large AI model with new capabilities in chemical synthesis. They did this by integrating a large language model (LLM) with expertly designed tools. In effect, they built a software program capable of autonomously planning and writing the code for synthesising arbitrary chemical compounds.

They then wrote a paper describing how they did this and released to the public the basic code needed to reproduce the results. In case that still doesn't register as dangerous, remember that VX and sarin gas are arbitrary chemical compounds, and that existing AI models have already proven adept at discovering new and potentially undetectable lethal chemical compounds.

Other examples of unsafe AI development are not difficult to find. This is to say nothing of how existing AI systems are already being used to cause harm, such as by enabling the spread of misinformation. How should regulators think about mitigating risk from the ongoing development of AI systems? This question is especially pressing given how quickly development is accelerating.

Dangers from unchecked, accelerated development are familiar and come in many different forms.

The aftermath of a landslide in Mid-Levels in 1972. Photo: Wikicommons.

More than fifty years ago, in June 1972, a series of landslides took the lives of more than 150 Hongkongers. These disasters, along with prior landslides and slope erosion, were the direct result of unfettered hillside development. That unregulated development had been accelerating since the 1950s, when Hong Kong began to grow at a furious pace. The inevitable result was a tragic loss of life.

The Hong Kong government’s reaction was to empanel experts tasked with understanding the problem and proposing solutions designed to mitigate the risks. The result was a government-led geotechnical control body — the Geotechnical Engineering Office (GEO). Among other things, the GEO is tasked with mitigating the risks posed by new development, with reducing the risks of existing development, and with minimising the consequences of potential accidents.

The GEO and the slope safety programme it pioneered are now recognised as world leaders in slope safety and landslide risk mitigation, with cities and countries around the world seeking the office's advice and following its lead.

It is staffed by dedicated civil servants, technical experts, and scientists tasked with reducing the risk of harm from landslips, educating the public about those risks, and developing the science around the problem in a way that addresses potential future risks, such as those emerging as a result of climate change.


Government officials are in a position right now to take meaningful action on the unchecked, accelerating development of AI models before the ground shifts beneath us, and before the harms of unregulated AI are realised.

Of course, there are differences between slope safety and model safety. AI models are not geological features of our environment. They are sociotechnical features: such models are technical tools that are developed and deployed within the context of a society, and therefore the risks from them should be understood and mitigated within that context. What is required is a Sociotechnical Engineering Office (SEO) aimed at mitigating the risks of development and deployment of advanced AI systems.

Like the GEO, the SEO should be staffed with civil servants, scientists and technical experts and empowered to advise the government on how best to protect people from the manifold risks associated with AI development. The Office of the Government Chief Information Officer has already taken steps toward standing up a framework for responsible and ethical AI development. That framework should be backed by the rigorous, transparent enforcement of safety standards.

Despite the obvious differences, there are safety interventions that a proposed SEO could sensibly borrow from the GEO's world-class slope safety system. For example, slopes are subject to registration: models with certain kinds of capabilities should be registered, too. Slopes are subject to inspection: models should be inspected, too. Liability for harms caused by negligent maintenance of slopes rests with landowners and developers; liability for harms caused by negligently developed or deployed AI systems should rest with model owners and developers. The list goes on.

A strong, independent, governance regime for advanced AI systems isn’t optional. Experts everywhere are urging action. Hong Kong is well-positioned to lead on safe, ethical AI development, rooted in its long-standing commitment to addressing society-scale risks with strong government oversight and expert scientific guidance. To borrow and adapt a familiar phrase: safe models save lives.


Nate Sharadin is a philosophy fellow at the Center for AI Safety. He is also an assistant professor in the Department of Philosophy at the University of Hong Kong.



https://hongkongfp.com/2023/07/05/hong-kong-can-be-a-leader-in-mitigating-the-dangers-of-ai/