Australia Views OpenAI, Meta, and Google LLMs as High Risk

After an eight-month investigation into the nation’s adoption of AI, an Australian Senate Select Committee recently released a report sharply critical of large tech companies — including OpenAI, Meta, and Google — while calling for their large language model products to be classified as “high-risk” under a new Australian AI law.

The Senate Select Committee on Adopting Artificial Intelligence was tasked with examining the opportunities and challenges AI presents for Australia. Its inquiry covered a broad range of areas, from the economic benefits of AI-driven productivity to risks of bias and environmental impacts.

The committee’s final report concluded that global tech firms lacked transparency regarding aspects of their LLMs, such as their use of Australian training data. Its recommendations included introducing a dedicated AI law and requiring employers to consult employees when AI is used in the workplace.

Big tech firms and their AI models lack transparency, report finds

The committee said in its report that a significant amount of time was dedicated to discussing the structure, growth, and impact of the world’s “general-purpose AI models,” including the LLMs produced by large multinational tech companies such as OpenAI, Amazon, Meta, and Google.

The committee said concerns raised included a lack of transparency around the models, the market power these companies enjoy in their respective fields, “their record of aversion to accountability and regulatory compliance,” and “overt and explicit theft of copyrighted information from Australian copyright holders.”

The committee also listed “the non-consensual scraping of personal and private information,” the potential breadth and scale of the models’ applications in the Australian context, and “the disappointing avoidance of this committee’s questions on these topics” as areas of concern.

“The committee believes these issues warrant a regulatory response that explicitly defines general purpose AI models as high-risk,” the report stated. “In doing so, these developers will be held to higher testing, transparency, and accountability requirements than many lower-risk, lower-impact uses of AI.”

Report outlines additional AI-related concerns, including job loss due to automation

While acknowledging that AI would improve economic productivity, the committee noted the high likelihood of job losses through automation. These losses could disproportionately affect jobs with lower education and training requirements, as well as vulnerable groups such as women and people in lower socioeconomic groups.

The committee also expressed concern about the evidence provided to it regarding AI’s impacts on workers’ rights and working conditions in Australia, particularly where AI systems are applied to workforce planning, management, and surveillance in the workplace.

“The committee notes that such systems are already being implemented in workplaces, in many cases pioneered by large multinational companies seeking greater profitability by extracting maximum productivity from their employees,” the report said.


“The evidence received by the inquiry shows there is considerable risk that these invasive and dehumanising uses of AI in the workplace undermine workplace consultation as well as workers’ rights and conditions more generally.”

What should IT leaders take from the committee’s recommendations?

The committee recommended the Australian government:

  • Ensure the final definition of high-risk AI explicitly includes applications that impact workers’ rights.
  • Extend the existing work health and safety legislative framework to address the workplace risks associated with AI adoption.
  • Ensure that workers and employers “are thoroughly consulted on the need for, and best approach to, further regulatory responses to address the impact of AI on work and workplaces.”


The Australian government is not obliged to act on the committee’s report. Nonetheless, the report should prompt local IT leaders to keep weighing all aspects of applying AI technologies and tools within their organisations while pursuing the expected productivity benefits.

First, many organisations have already considered how adopting different LLMs affects them from a legal or reputational standpoint, based on the training data used to create those models. IT leaders should continue to weigh the underlying training data when applying any LLM within their organisation.

AI is expected to impact workforces significantly, and IT will be instrumental in rolling it out. IT leaders could encourage more “employee voice” initiatives in the introduction of AI, which could support both employee engagement with the organisation and the uptake of AI technologies and tools.
