Australian Experts Call for an AI Safety Institute


Australians for AI Safety and tech startup Harmony Intelligence have called for the creation of an AI Safety Institute. It was among a range of suggestions 40 Australian AI experts and organisations made in a joint submission to the Australian Government’s Senate Committee on Adopting AI.

“Too often, lessons are learned only after something goes wrong,” the submission reads. “With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”

In addition to creating an AI Safety Institute, the submission argued for new safeguards that keep pace with new AI capabilities and risks, and for ensuring that the regulation of high-risk AI systems covers those that could have catastrophic consequences.

Polling from the Lowy Institute shows that more than half of Australians think AI’s risks outweigh its benefits.

“The Government will fail to achieve its economic ambitions from AI unless it can satisfy Australians that it’s working to make AI safe,” said Greg Sadler, spokesperson for Australians for AI Safety.

Many countries, including the US, UK, Canada, Japan, Korea and Singapore, have moved to create AI safety institutes to advance technical efforts to ensure that next-generation AI models are safe.

In May 2024, participants in the Seoul Declaration on AI Safety, including Australia, committed to creating or expanding AI safety institutes. The Australian Government has yet to say how it will approach the issue.

Senator David Pocock expressed concern that Science and Industry Minister Ed Husic was creating temporary expert advisory bodies but had not taken steps to establish an enduring AI Safety Institute.

After hearing evidence about the funding Canada and the UK provide to their AI Safety Institutes, Pocock said, "That seems very doable to me." Microsoft suggested that Australia was at risk of falling behind other countries.

A recent report found that the global AI assurance industry could be worth US$276 billion by the decade's end. "What I see developing globally is the establishment of AI Safety Institutes," said Lee Hickin, AI Technology and Policy Lead for Microsoft Asia. "Obviously, the opportunity exists for Australia to also participate in that safety institute network, which has a very clear focus of investing in learning, development and skills. There is not just a need, but a value, to Australians and Australian businesses and Australian industry to have Australia represented on that global stage."

“The next generation of AI models could pose grave risks to public safety,” CEO of Harmony Intelligence Soroush Pour told the recent committee hearing. “Australian businesses and researchers have world-leading skills but receive far too little support from the Australian Government. If Australia urgently created an AI Safety Institute, it would help create a powerful new export industry and make Australia relevant on the global stage. If government fails to do the work necessary to take these risks off the table, the outcomes could be catastrophic.”

Research from the University of Queensland found that 80% of Australians think AI risk is a global priority. When asked what the Australian Government’s AI focus should be, most respondents said preventing dangerous and catastrophic outcomes.

“Australia has yet to position itself to learn from and contribute to growing global efforts,” the joint submission added. “To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.”

The Senate Committee on Adopting AI hearings concluded in Canberra on August 16, 2024. The committee is due to report to the Parliament on or before September 19.
