Artificial intelligence (AI) is steadily gaining popularity, and its influence on legal systems has raised concerns. The potential dangers posed by AI-generated fake cases have been highlighted recently, and they are challenging the integrity of court systems worldwide.
AI is widely known for its ability to create content in various forms, including deepfakes and misinformation. Now it has infiltrated legal disputes, and the development is alarming. Beyond undermining trust in legal systems globally, it raises questions about legality and ethics.
AI is creating fake case law. The phenomenon, known as generative AI hallucination, arises from the technology's tendency to produce content by drawing on vast datasets. The content may appear convincing, yet it can be inaccurate, largely because of gaps in the AI's training data.
Several instances of AI-generated fake cases have come to light recently, a notable example being Mata v Avianca in the US. Lawyers unknowingly submitted fabricated extracts and citations to a court, with severe consequences: the case was dismissed and the lawyers were sanctioned.
Legal regulators and courts across the world have taken various measures to curb such issues. Some jurisdictions have issued guidance or orders on the responsible use of generative AI, while others have developed specific guidelines for both lawyers and the courts.
Major steps have been taken in Australia to promote responsible AI use within the legal profession. The NSW Bar Association, the Law Society of NSW and similar organizations have released guidance for lawyers, emphasizing the importance of exercising judgment and diligence when using AI tools.
However, more comprehensive measures are needed to prevent AI-generated fake cases from entering the legal system, such as setting clear requirements for the ethical use of AI and incorporating technology competence into legal education.