COVER STORY: What AI regulation might look like in Australia

AI regulation in Australia is coming, and it seems decision-makers and industry experts are welcoming the prospect with open arms.

As AI’s popularity continues to soar, business leaders and industry bodies have been calling for better regulation of the technology, particularly as concerns about its risks rise alongside its usage.

Currently in Australia, AI is governed through existing legislation such as the Australian Consumer Law, the Privacy Act, copyright law and data protection rules.

However, Australia does not have any AI-specific regulations. That is about to change.

In June, the federal government opened submissions over how it can support the safe and responsible use of AI.

The Albanese government will use the feedback from these submissions to inform consideration across government of any appropriate regulatory and policy responses.

With regulation now a “when” rather than an “if”, industry leaders and AI experts spoke to Digital Nation about how AI should be regulated in Australia and how regulation will affect decision-makers.

A spectrum of technologies

Kate Pounder, CEO of the Tech Council of Australia (TCA), said AI should be regulated “methodically and thoughtfully in a risk-based fashion”.

She said AI is a relatively new tool spanning a spectrum of technologies, each carrying different risks and practices.

“What is important in our regulatory framework is that we’re thinking about those, working out how we manage them as a society, what outcomes we want from them, and what outcomes we don’t want from them,” she said.

“That’s not an easy task because of the diversity of those applications, the diversity of their risk profile and the diversity of issues they raise.”

She added, “That’s why I say that methodical, balanced approach will also be necessary, not because we want to slow down regulation, but because we want to make sure the regulation in place works and has the outcomes that we need.”

Local insight

Australian regulation of AI will need specific local rules and specific oversight, according to Dana McKay, senior lecturer in innovative interactive technologies at the School of Computing Technologies, RMIT.

Asked what she would recommend for regulation in Australia, McKay said, “I would be looking for an Australian-based panel of digital ethicists and technology community members to essentially be an oversight body to regulate those technologies.

“I know that a lot of the Googles and Microsofts of this world might be uncomfortable with this, but we do need to have oversight of these technologies,” she said.

McKay suggested the panel could assess the technologies to ensure they’re safe to use and meet the regulatory requirements.

She said Australia needs to ensure that whatever generative AI technologies are used here take the Australian context into consideration.

“We’ve got an old culture in Australia that probably hasn’t been taken into account in any of these [AI] models. If these models are making suggestions or decisions, it probably won’t be taking those cultures into account,” she said.

“We have a different defence context in Australia, a different border context in Australia than many other countries in the world, we have a different climate context.”

One example is making sure the AI knows Australian conventions and cultures, such as which side of the road we drive on.

“If you ask ChatGPT without any context, it might well say the right, which in Australia doesn’t work,” she added.

Combatting AI risks

Regulating AI will most likely not be a smooth process, as the technology is complex.

Robert Tang, Australia counsel at global law firm Clifford Chance, said AI is unlike any existing technology, raising new challenges and risks.

“This makes regulating AI a complex issue and could take different forms, including potentially requiring international cooperation,” he said.

AI has also brought new issues to the forefront of leaders’ minds such as deepfakes and plagiarism.

Dr Catriona Wallace, founder of the Responsible Metaverse Alliance, said that with the constant threat of deepfakes on the internet, there need to be specific regulations around them.

“We should focus on the targeted influence and manipulation of individuals, particularly now with generative AI, there’s no specific regulation around generative AI,” she explained.

“The ability for this AI to be used, whether via avatars or chatbots or other applications, to target and manipulate people, particularly young people. This is an area that needs urgent regulation.”

Being safe and innovative

A spokesperson for the Department of Industry, Science and Resources told Digital Nation that the government must strike the right balance of creating safeguards and protections while fostering innovation and adoption.

“Australia has strong foundations to be a leader in safe and responsible AI. For example, Australia was one of the earliest countries to adopt a national set of AI Ethics Principles and established the world’s first eSafety Commissioner,” the spokesperson said.

“The central aim of the paper is to seek feedback on whether further regulatory and governance responses are required to ensure appropriate AI safeguards are in place.”

Wallace at the Responsible Metaverse Alliance told Digital Nation there needs to be a specific AI safety regulator.

“Ultimately, I believe we need an AI safety regulator, which is what Edward Santow, the previous human rights commissioner also called for, but I imagine that could be two to three years away,” she said.

The impact of regulation on business leaders

Australian business leaders are open to the idea of AI regulation, with a recent BCG X report highlighting that 78.8 percent of respondents believe this impending regulation is necessary.

Adam Whybrew, partner and director at BCG X, told Digital Nation that this level of consensus on regulation is almost unheard of.

“It’s rare that we see a technology on which there’s such universal agreement that we need regulation. Nearly 80 percent of the Australians we surveyed thought this, and so do all the businesses which are developing the tools,” he said.

When asked whether the impending regulation could affect business leaders that already have AI applications in place, a spokesperson for the Department of Industry, Science and Resources said any AI applications must comply with existing regulations.

“Including consumer protection law, privacy law and anti-discrimination law, as well as those specific to certain sectors such as the regulation of medical devices,” they said.

“Feedback on the discussion paper will inform consideration across government on any appropriate further regulatory and policy responses to ensure AI governance mechanisms are fit for purpose to manage emerging risks.”

Dr Yi Zhang, senior lecturer at the Australian Artificial Intelligence Institute at UTS, said that for companies that are AI-based or implement specific AI technologies, regulation will give a clearer picture of how to use AI within their organisations.

“AI regulation will give the company a clear research and development plan to know how they will develop AI technology and how to commercialise AI technology, rather than do whatever they want to do,” he said.

McKay at RMIT said leaders know that regulation is coming and are already preparing their organisations for it.

“Whether they want to be regulated or not, they know it’s coming. That’s why we saw the CEO of OpenAI having that conversation with Congress in the US,” she said.

“They of course want the regulations that they want, which is why we are seeing these CEOs trying to get ahead of the game, essentially trying to get ahead of the regulation and set up the terms of reference.”

McKay said businesses are no strangers to regulation.

“This is another one of those things. Given that it’s coming, the certainty the actual regulation affords is probably a good thing,” she added.

An evolution of society

AI regulation will never be a one-and-done exercise; it will constantly evolve to reflect current societal needs and structures, according to Pounder at the TCA.

As someone who has spent her whole career in tech policy, she said, “I don’t think [regulation] ever ends.”

“The technology is dynamic, our society is dynamic and our values get updated as well. To me, good regulation will help us keep constant the things that we value. If we value human rights, if we value improving our economy and growing it, we will make sure our regulatory framework is continuing to deliver those broader outcomes,” she explained.

Pounder said the technology will keep changing, which means the regulation will need regular reviews and updates.

“Sometimes it’ll present a new use case that we hadn’t contemplated, and we’ll have to look at whether that warrants some sort of specific regulation,” she added.

Ultimately, regulation will open doors, not close them, Wallace said.

“The huge benefits that AI is going to bring are extraordinary. Already we can see with generative AI the basic need to be researching a lot goes away,” she explained.

“It allows humans to take work and not have mental fatigue, take work that the machine has done, and then build more creativity, build more analytics, build more strategy and do more visioning.

“AI should always help humans be more human, and there’s a huge possibility to do that. However, without regulation, there are some significant risks at play,” she ended.
