Legal Guide to AI and ChatGPT for Australian Businesses (2024)

Last updated: 7 December 2024

Legal Guide to AI and ChatGPT for Australian Businesses – AI has taken the world by storm, particularly in online business. It is used in software coding, content delivery, medicine, law, digital imaging, facial recognition, and school and university education – everywhere! How are we managing it from a regulatory perspective?

Currently, there are no laws specific to AI in Australia to regulate or even guide AI use. Regulators will play ‘catch-up’ once they see how it is managed overseas and the issues it raises in Australia. That said, current general legislation and regulations still come into play.

Here are some of the AI legal issues discussed below:

  • Training AI models using copyrighted materials
  • Copyright ownership of AI-generated works
  • Mandating the labelling of AI-generated content
  • Misuse of AI-generated content for malicious purposes (e.g. deepfakes)
  • Liability for decisions made or advice offered by AI systems
  • Algorithmic bias or discrimination in AI responses or decisions generated
  • Right to an explanation of decisions made by algorithms that impact individuals

Read on to understand where we stand legally in Australia with this fast-moving technology.

TLDR: Quick Summary of this Legal Guide

  • Clearly label AI-generated content and imagery to maintain trust and avoid misleading your audience.
  • Regularly verify AI-generated information for accuracy and evaluate systems for algorithmic biases to prevent errors or discriminatory outcomes.
  • Ensure you have the right to use AI-generated material by reviewing the AI platform’s terms.
  • Opt out of AI indexing to safeguard your original content and use watermarks and copyright notices.
  • Use AI to amplify human creativity and expertise rather than relying on it to automate critical tasks fully.


If you still have questions after reading this legal guide, get in touch, as we’d love to keep adding your questions to this comprehensive guide.

For Original Content Creators (Writers, Artists & Film Makers)

Whether training AI models on publicly available content and data, including copyrighted content, violates copyright is still unsettled. In the United States, AI developers argue that training qualifies as fair use, but that argument is being tested in ongoing litigation – and, as explained below, Australia has no equivalent fair use exception.

However, if a human prompts an AI system to closely mimic a copyrighted design or work, this could be seen as copyright infringement, and the human, not the AI system, would be responsible.

For example, an image-generating AI system can be asked to create a photographic representation of a well-known person or scene in a famous photographer’s style. This could be considered copyright infringement, and AI providers are trying to crack down on this possible misuse.

If you prompt the AI system DALL·E with “Create a photograph-like image of Michael Jackson in the style of Annie Leibovitz”, the AI system generates this error message: “Due to content policy constraints, I’m unable to create images that closely resemble specific public figures or use the distinctive styles of contemporary artists like Annie Leibovitz”.

Does the use of copyrighted works to train AI qualify as fair use?

Australia does not have the broad “fair use” exception found in the US or the text-and-data mining exemptions available in the UK and EU. Instead, copyright owners in Australia have stronger protections.

The only potential legal defences for using copyrighted works to train AI are the narrow fair dealing exemptions for research and study or the temporary reproduction exemptions under sections 43A and 43B of the Copyright Act 1968. However, these exemptions were originally designed for purposes like caching and have not been tested in the context of AI training.

Can you sue an AI for copyright infringement?

No, you cannot sue an AI for copyright infringement. AI lacks legal personality and is not a legal entity. However, the person or entity prompting the AI system may be liable when it is used for copyright infringement. This is still a new, untested concept, so liability is yet to be determined.

How do I stop AI from stealing my art?

Since AI systems are trained on publicly available datasets (e.g. websites on the Internet), it is possible that they could unknowingly “steal” someone’s art. You can do several things to stop AI from stealing your art:

  • Include a copyright notice with your name, the copyright symbol, the year of creation, and a statement that all rights are reserved – e.g. “© 2024 [Your Name]. All rights reserved.”
  • Add a watermark to your images and only upload low-resolution photos online.
  • Use image recognition services or tools to identify instances where your work is being used without permission across online platforms, social networks, and websites.

You should also consider sending cease-and-desist notices to AI developers or other parties if you find that your artwork is being used illegally. Legal123 can help you.

How do I stop AI from stealing my content?

If you are worried about AI systems stealing your work that is already publicly available online, you can safeguard it by doing the following:

  • Remove your work from AI datasets: Opt out of any AI indexing and always read the fine print (Privacy Policy and Terms and Conditions) of any website or platform you use to get your work into the public eye.
  • Block AI bots and crawlers: If you have a website where you publish your work, you can use the standard robots meta tag in your HTML, <meta name="robots" content="noindex,nofollow">, to ask crawlers not to index a page – note there is currently no universally recognised AI-only meta tag. To target specific AI crawlers, edit your robots.txt file, placing each directive on its own line, e.g. “User-agent: GPTBot” followed by “Disallow: /images/”.
  • Register your personal or business brand: You should protect your brand by registering your trademark. Whether or not your content gets plagiarised, building your brand helps protect it from being copied and used in the market. Registering your trademark will provide strong evidence and rights to require an offender to remove your work.
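For website owners, the robots.txt approach mentioned above can be sketched as follows. The crawler names shown (GPTBot, Google-Extended, CCBot) are examples of AI-related user agents published by OpenAI, Google and Common Crawl at the time of writing – check each provider’s current documentation, as agent names change, and remember that compliance with robots.txt is voluntary:

```text
# robots.txt – ask AI training crawlers not to use your content
# Note: this is a polite request; compliant bots honour it, others may not.

# OpenAI's web crawler
User-agent: GPTBot
Disallow: /

# Google's opt-out token for AI training
User-agent: Google-Extended
Disallow: /

# Common Crawl, whose dataset is used to train many AI models
User-agent: CCBot
Disallow: /images/
```

Place the file at the root of your domain (e.g. yoursite.com/robots.txt). A Disallow of “/” covers the whole site, while “/images/” covers only that folder.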

Using AI-Generated Images & Video

Can I use AI-generated images on my website?

Yes, you can use AI-generated images on your website. However, always check the latest Google algorithm guidelines to ensure you use them correctly and are not penalised. Remember, Google is increasingly able to detect whether an image was AI-generated.

[Chart: AI workplace adoption statistics for Australia. Source: Avanade AI Readiness Survey, January 2024]

Google last publicly commented on AI-generated images in 2023 and, unfortunately, has not updated its guidance since.

Do I have to label AI-generated images?

There are currently no universal legal requirements for labelling AI-generated images. However, labelling AI-generated images is currently considered best practice and a way of preventing potential damage to your brand’s reputation.

Who owns the copyright to an AI-generated image?

No legislation exists on who owns the copyright to an AI-generated image. Of course, the AI system is not a person and cannot own the copyright to the images it generates. Always check the Terms of Use of the AI platform, as they may include terms on who owns the copyright.

Can I use AI-generated videos on my YouTube channel?

Yes, you can use AI-generated videos on your YouTube channel. However, YouTube now requires that AI-generated content be disclosed upon upload. The policy applies only to realistic-looking video content that could be mistaken for real people, places or events.

Using generative AI to create fantastical or unrealistic scenes or to apply special effects does not require disclosure. Still, adding a disclosure notice to all AI-generated video content is ‘best practice’.

Can I trademark an AI-generated logo?

Yes, you can trademark an AI-generated logo, provided it meets trademark requirements. You do not need to own the source code of the AI that generated the logo. Instead, the focus is on whether the logo meets the following criteria:

  1. Distinctiveness: The logo must be unique and distinctive, not generic or descriptive of the goods or services it represents.
  2. Ownership: You must own the rights to the logo. Ownership is typically determined by the terms of use of the AI platform, which may grant you full rights or only a limited license.
  3. Use in Trade: To qualify for trademark protection, you must actively use the logo in connection with the goods or services it represents.

Before applying, ensure that the AI platform’s terms grant you ownership or the rights to use the generated logo commercially.

Find out more about Legal123’s Trademark Registration Service.

Can I sell AI-generated art as my own?

It is unclear whether works created with AI will be protected by copyright. Still, if you can show you contributed ‘independent intellectual effort’ to the final work, you may have copyright protection and can therefore sell AI-generated art as your own. Several platforms currently allow the sale of AI-generated art.

Is it illegal to sell AI-generated art on Etsy?

Etsy’s policies allow the sale of AI-assisted art, but not purely AI-generated art with no human involvement. According to Etsy’s updated terms and conditions, items sold on the platform must be “made and/or designed by you, the seller”.

The line between AI-assisted and AI-generated art is still being debated, so keep up to date with Etsy’s changing rules on this. Australia only recognises copyright ownership of AI-generated art where the seller has contributed substantial independent intellectual effort (which may be difficult to prove).

Using AI-Generated Deepfakes


Definition: Deepfake

A deepfake is hyper-realistic, AI-generated fake media (video, image or audio) that mimics a person’s likeness, often for malicious or deceptive purposes. The defining characteristic of a deepfake is that it is extremely difficult to distinguish from genuine content.

Deepfakes can be used to create false narratives, impersonate individuals, or put words in someone’s mouth that they never actually said. The term “deepfake” is a portmanteau of “deep learning” and “fake”.

In Australia, there is no law explicitly allowing or prohibiting the use of deepfakes. However, deepfakes may violate various Australian laws:

  • Privacy Laws: Using someone’s image or likeness in a deepfake without consent could violate their privacy rights.
  • Defamation Laws: Australian defamation laws allow individuals to sue if they have been the subject of false and damaging statements. A deepfake that portrays someone in a false and defamatory light could lead to a defamation claim.
  • Criminal Code: Australia’s criminal code contains offences related to the non-consensual sharing of intimate images, which could apply to deepfake pornography.
  • Copyright Act: Using someone’s copyrighted material to create a deepfake without permission could infringe their copyright.
  • Australian Consumer Law (ACL): Section 18 of the ACL prevents misleading and deceptive conduct in trade or commerce.

Do you need permission to deepfake someone?

Not necessarily. Depending on where you obtain the original content, you may not need permission to create a deepfake version of another person – it is how you use the deepfake that may breach the law. For example, using deepfake images or videos to portray an individual in a false light or damage their reputation may constitute defamation under Australian law.

Can you sue someone for making a deepfake of you?

Yes, depending on where and how it is used, you can sue someone for making a deepfake of you. If the deepfake falsely portrays you in a way that damages your reputation, you can sue under Australian defamation laws. If the deepfake is used misleadingly or deceptively – for example, to falsely endorse a product or service – this may breach the Australian Consumer Law, and you can sue under it.

Using AI-Generated Content

Can I use ChatGPT content in my blog?

You can use ChatGPT-generated content in your blog, website, social media posts, etc. However, you should be aware that this might have various downsides:

  • Google Algorithm: Google’s latest algorithm update has penalised websites that auto-publish large amounts of AI-generated content that is not seen as “helpful”.
  • AI Hallucinations: ChatGPT can generate false information, particularly when asked to generate specific examples to substantiate a point. We have personally seen this when asking ChatGPT for legal precedents, and the responses have been fabricated.
  • ChatGPT Cut-off Date: AI-generated content is not necessarily up to date. As of April 2024, ChatGPT-3.5 had a knowledge cut-off of January 2022. That means ChatGPT has no knowledge of, and does not contain content from, anything published online since then – more than two years of material.
  • Limited Personalisation: ChatGPT cannot truly capture the personal experiences, opinions, and writing style of a human author, making it difficult to establish a genuine connection with readers.
  • Duplicate Content: ChatGPT may produce content similar or identical to another blog or site, as its output is generated from existing sites, and it may produce the same content for another user. Be aware that this could create duplicate-content problems and even copyright issues.

Do I have to label AI-generated content?

Facebook, Instagram, and YouTube require the labelling of AI-generated images and videos. X (formerly Twitter), LinkedIn, Pinterest, Reddit, and TikTok do not yet have an AI-labelling policy.

If you use AI-generated content on your website or blog, you can choose to label it. The Australian Government has developed a set of AI principles: Australia’s AI Ethics Principles, which, whilst not law, are being followed as ‘best practice’ behaviour for AI. It is best practice to be transparent with your audience about all your products and services.

Who owns the copyright to AI-generated content?

The general rule of content creation is that the creator is the copyright owner. With AI-generated content, however, ownership is ambiguous. Is it the AI itself or the individual who prompted the AI and generated the results?

Under Australian copyright law, you need to be able to show you contributed ‘independent intellectual effort’ to achieve the output.

One thing is for sure: AI systems themselves cannot own copyright. AI models and algorithms are not recognised as legal persons, so they cannot directly own the copyright to the content they generate. The copyright would have to be owned by a human or organisation.

If a person provided substantial creative direction, prompting, or editing during the AI-generation process, they might be considered the rightful copyright holder. However, if the person had little input into the creation process, the copyright ownership of AI-generated content may need clarification. This is still a legal grey area in Australia as in other countries.

Always check the licensing and terms of service posted by the AI platform. These agreements may specify how the copyright ownership is determined.

Can I legally publish a book written by ChatGPT?

The answer is: “It depends”. Copyright law in Australia recognises only human authors and grants moral rights only to them. You need to be able to show ‘independent intellectual effort’ for copyright recognition and protection.


Case Study: The Land of Machine Memories (2023)

Journalism professor Shen Yang from China’s Tsinghua University used 66 prompts to generate a Chinese-language novel called “The Land of Machine Memories” in just 3 hours. The novel went on to win 2nd prize in a popular science and sci-fi competition in Nanjing, China, out of 200 entries. Shen used AI to generate 43,000 words, which he edited to 6,000 for the competition submission.

While there may not be a specific law prohibiting a person from publishing a book using ChatGPT, you should remember there are licensing terms that could affect attribution and publication. If you decide to publish a book primarily written by ChatGPT, there would likely be an expectation of complete transparency and disclosure about the AI’s involvement.

The Australian Government developed “Australia’s AI Ethics Principles,” which are not law but are ‘best practice’ and suggest that you indicate when you use AI to create work. Failure to do so could raise ethical concerns and potentially legal issues around misrepresentation.

Is it illegal to sell AI-generated content?

It is not necessarily illegal to sell AI-generated content – images, art, designs, written copy, etc. Always check the latest terms and conditions of the platform where you sell your AI-generated content. And the more human input you add, the stronger your claim to the content becomes.

Using AI Algorithms in Business

Can AI models be patented?

Potentially, yes. AI-related inventions may be eligible for patent protection, although abstract algorithms and mathematical methods on their own are generally not patentable. Either way, AI models or algorithms cannot be named as inventors or owners of any patents.


Case Study: Australian Commissioner of Patents v Thaler

This 2022 case dealt with whether an AI system could be listed as an inventor in a patent application. Dr. Stephen Thaler filed a patent application for an invention created by DABUS, an AI system. The Commissioner of Patents denied the application, stating that the Australian Patents Act only allows humans to be inventors.

Following an appeal, the Full Federal Court ruled that AI could not be named as an inventor on a patent in Australia. However, the decision left open the question of whether Dr. Thaler himself could be named as the inventor of the invention DABUS generated.

What are some examples of algorithmic bias?

Algorithmic bias can result from limited AI training data, feedback loops or assumptions built into the AI code. For example:

  • Facial Recognition Bias: Many facial recognition systems perform poorly at accurately identifying individuals from demographic groups under-represented in the training data set.
  • Resume Screening Bias: Some resume screening algorithms used by employers have, in some instances, exhibited bias against female applicants or those with names associated with specific ethnic groups.
  • Credit Scoring Bias: Algorithms used to determine credit scores and loan eligibility have been found, in some cases, to discriminate against certain demographic groups.
  • Online Ad Targeting Bias: Algorithms that target online advertisements have been criticised for showing different job or housing ads to users based on their perceived race or gender.
  • Healthcare Algorithm Bias: AI systems used in healthcare decision-making, such as determining treatment plans or assessing risk, have been found to exhibit biases towards certain patient populations.

What are the dangers of algorithmic bias?

Algorithmic bias can perpetuate social stereotypes, flawed decision-making and erosion of trust. These are serious concerns that could slow AI adoption.


Case Study: UK A-Level Results (2020)

In the UK, during the pandemic, final-year A-level exams were suspended. Instead, the Government decided to use the results of “mock” exams taken six months prior and an algorithm based on the school’s track record to determine each student’s final grades.

According to lawmakers, the software would give students a “fairer” result. Instead, the model favoured students at private schools and in affluent neighbourhoods, leaving high achievers from state-funded schools marked down. The exam regulator Ofqual’s algorithm downgraded approximately 39% of A-level grades, replicating the inequalities in the UK education system. Due to the lower exam grades, many university admissions were revoked.

Ultimately, the algorithm-generated results were scrapped, and teachers’ assessments were used instead.

Who is responsible for AI decisions?

Artificial intelligence systems cannot be held responsible for decisions made using them. The ultimate responsibility for AI decisions rests with the individuals using the technology and those who wrote and trained the AI code.

In the UK exam case above, the grades were generated by an algorithm, with catastrophic effects on the reputation of the UK education system. Teachers’ real-world assessments were ignored in favour of artificially generated results. Senior government ministers and regulators were forced to resign, and the UK Government ultimately took responsibility for the AI decision-making mistake.

Who is liable for AI mistakes?

Most people assume that AI mistakes are not actionable, but this assumption is incorrect. Generally, liability for AI mistakes rests not with the AI but with those responsible for the AI code and training data, and with those using or interpreting the AI system’s outputs.

For example, if an AI-powered diagnostic system used in a hospital to identify cancerous growths incorrectly recommended chemotherapy for a patient, the hospital and the AI system’s developers may be liable for the error. Actual liability would depend on the terms of the developers’ agreement with the hospital, the hospital’s consent agreement with the patient, and whether the use of the AI system was disclosed.

Who is liable for damages caused by autonomous systems?

Autonomous systems, such as self-driving cars, may be involved in accidents, resulting in personal injury or even death. However, is the autonomous system responsible and liable for any damage caused?


Case Study: Tesla vs Family of Apple Engineer Walter Huang

Tesla settled a lawsuit brought by the family of Walter Huang, an Apple engineer who was killed in a 2018 crash while the Autopilot feature on his Tesla Model X was engaged.

Huang’s family filed a wrongful death lawsuit in 2019, accusing Tesla of negligence and exaggerating the capabilities of its self-driving technology, which they claim caused Huang to believe he didn’t have to remain alert while driving. The settlement amount was not disclosed.

Unfortunately, there is no easy answer. Before liability can be determined, the specific circumstances of each case need to be examined, including how the user used the AI, any scope for error, and any other inputs or circumstances particular to the incident. This is, and will remain, an evolving legal issue.

Recommendations: How to Use AI in Your Business Responsibly

Follow these six recommendations to integrate AI into your business legally and ethically:

  1. Be Transparent: Clearly label AI-generated content and imagery to maintain trust and avoid misleading your audience. This includes disclosing deepfake or realistic AI content.
  2. Monitor and Audit Outputs: Regularly verify AI-generated information for accuracy and evaluate systems for algorithmic biases to prevent errors or discriminatory outcomes.
  3. Understand Copyright: Ensure you have the right to use AI-generated material by reviewing the AI platform’s terms. Only humans can own copyright in Australia, so AI-generated content may lack copyright protection unless significant human input is involved.
  4. Protect Your Work: Opt out of AI indexing when possible to safeguard your original content. Also, remember to use watermarks and a strong copyright notice.
  5. Clarify Responsibilities: Create clear policies on ownership, liability, and accountability for using AI in your operations.
  6. Enhance, Don’t Replace: Use AI to amplify human creativity and expertise rather than relying on it to automate critical tasks fully.

We hope you found this Legal Guide to AI and ChatGPT for Australian businesses helpful.

References

  1. Copyright Act 1968 (Cth) URL
  2. Australia’s AI Ethics Principles URL
  3. Artificial Intelligence – Australia [2024] Statista URL
  4. Generative AI – Australia [2024] Statista URL
  5. Contractual Liability for Wrongs Committed by Autonomous Systems [2020] Cambridge University Press URL
  6. Copyright and AI [2023] American Library Association URL
  7. Figma Caught Stealing Other Designs [2024] Tech Times URL
  8. Suno Defends AI Training with Copyrighted Music [2024] CoinTelegraph URL

About the Author: Vanessa Emilio

Vanessa Emilio (BA Hons, LLB, ACIS, AGIA) is the Founder and CEO of Legal123.com.au and Practice Director of Legal123 Pty Ltd. Vanessa is a qualified Australian lawyer with 20+ years experience in corporate, banking and trust law. Click for her full bio or follow her on LinkedIn.

Disclaimer: We hope you found this article helpful, but please be aware that any information, comments or recommendations are general in nature, do not constitute legal advice and may not be suitable for your specific circumstances. Whilst we try our best to ensure that the information is accurate, sometimes there may be errors or new information that has yet to be included. Any decisions you take based on information on this website are made at your own risk and we cannot be held liable for any losses you suffer. Contact us directly before relying on any of this information.

Our quick and easy online template generates a customised Privacy Policy, Website Disclaimer and Terms & Conditions.

  • Customisable to your business needs
  • Choose from 13 different business types
  • Answer a few simple questions
  • Time to complete: Under 5 minutes
  • Lawyer drafted & legally binding
  • Easy to use with clear instructions
  • Email & telephone support
  • Plain English, easy to follow
  • Immediate download

Website Legal Package $199 +GST