Business is about to get a say on AI rules

   2024-04-22 19:04

Nick Bonyhady


Business will be handed a major say on the future of artificial intelligence in Australia, with the government planning to revamp the expert group helping set the direction of regulation on the pivotal technology.

Two visions of regulation for high-risk AI are being considered behind the scenes, according to sources familiar with the matter: a stringent European Union-style AI law or legislation relying on broad principles.

Whichever approach the government ultimately opts for will shape how Australian business approaches the technology that is already being deployed everywhere from Canva to the Commonwealth Bank.

Industry Minister Ed Husic will create voluntary guidelines and mandatory “guardrails” for high-risk AI. Alex Ellinghausen

The government will not announce broad legislation in the budget but is planning to signal the direction it is heading around that time and establish a permanent advisory group to guide the plans. Unlike the current body, business leaders will be directly represented.

Industry and Science Minister Ed Husic announced in January the government would create voluntary guidelines, mandatory “guardrails” for high-risk AI and possibly require AI-generated images to be labelled.

Those steps were the government’s interim response to a review conducted by the Department of Industry, Science and Resources, and led to the creation of a temporary 12-member expert panel on AI weighted heavily towards academics.

One topic under consideration by that body is the model for future AI laws, which could apply to a range of industries seen as higher risk, such as healthcare, finance and housing. They could either prohibit specific practices like creating social scores for customers, as the European Union has done, or apply general standards such as prohibiting discrimination.

The new advisory body, the people familiar with the government’s plans said, would be permanent and include more business representatives because companies will largely be the entities deploying AI tools. Business groups including the Business Council of Australia are only observers on the current body.

Industry and Science Minister Ed Husic declined to comment directly on a permanent expert group, but said the temporary advisory group was working on how to define “high-risk” artificial intelligence and identify the harms it could cause.

“It’s also investigating options for mandatory guardrails for high-risk AI which could include more testing and transparency, and whether some types of very high-risk AI may require an outright ban,” Mr Husic said. “I will have more to say on the findings of the group in the coming weeks and months.”

Proponents of artificial intelligence point to its capacity to boost productivity by automating white-collar tasks and spurring the creation of new services. But detractors see it as a possible vehicle for discrimination, or for the dissemination of political bias, especially when it is deployed in fields like health, finance, housing and the media.

A survey of 46 technology leaders conducted by Datacom and the Tech Council of Australia, which represents big tech companies and emerging start-ups, found artificial intelligence would be the top trend for the industry this year.

Industry groups including the TCA and Business Council of Australia will be pleased by the government’s plans to give business more influence on AI. Reshaping the advisory board is among the Tech Council’s requests for the Federal Budget, according to a submission seen by The Australian Financial Review that also urges the government to continue with its flexible approach to AI.

“It’s important to recognise that we are not starting from scratch,” said acting TCA chief Ryan Black. “Australia has a sound, existing legal framework relevant to AI, and expert regulators that regulate products using AI, including in areas such as health and financial services.”

Mr Black said building on existing standards and regulations would ensure Australia remains a competitive economy as AI adoption surges.

The European Union adopted its Artificial Intelligence Act in March. The law bans practices such as using AI to assign “social scores” to people based on their behaviour, as China has done.

The rules will take effect gradually, but they stand in contrast to the approach of countries like the United Kingdom, where Prime Minister Rishi Sunak has declared regulation will be “pro-innovation”. Part of that approach is applying existing anti-discrimination rules to AI, which avoids the problem of crafting specific standards that could be rapidly outmoded by the technology.

Australian Council of Trade Unions secretary Sally McManus said workers should get their fair share of any productivity gains generated by AI.

“Workers’ voices must be at the heart of any adoption with clear protections to ensure that it does not erode living standards or take workers’ rights backwards,” Ms McManus said in a statement.

“There is already demonstrated risk with AI being used and leading to discrimination and unfair decision-making by employers.”

BCA chief executive Bran Black (no relation) lauded the potential of AI.

“The use of artificial intelligence can make our businesses more competitive and productive and that’s good for growth, investment and more Australian jobs,” Mr Black said. He backed the government’s previously announced plans for a more principles-based approach.

Australia has not yet produced a breakout artificial intelligence company, but the technology is being widely applied by existing firms to help guide decisions, analyse data and answer customer service queries.

Luke Latham, the Australian general manager of fintech Airwallex, acknowledged the excitement about AI but cautioned that businesses should be pragmatic in deploying it.

“AI’s efficacy will ultimately be determined by the underlying data to inform these tools,” Mr Latham said in a statement.

“After all, its outputs will only ever be as good as its inputs, so investment in getting the foundations right is vital; without that, businesses will struggle to yield these aspirational results.”

There have already been several cases where AI firms have fallen foul of existing laws, showing both the potential risk posed and how current legislation can respond.

In one case, the facial recognition firm Clearview AI was found to have breached Australians’ privacy by harvesting images.

Travel website Trivago was ordered to pay $44.7 million in 2022 for using its algorithms to mislead users into booking hotels at higher prices.

Nick Bonyhady is a technology writer for the Australian Financial Review, based in Sydney. He is a former technology editor, industrial relations and politics reporter at the Sydney Morning Herald and Age. Connect with Nick on Twitter. Email Nick at [email protected]
