Time to change the narrative when negative signs emerge

Over the past several decades, the Korean judiciary has been criticized for making politically motivated decisions in cases that should have been decided solely on the basis of law. Criticized for violating political neutrality, the judiciary is now perceived as an institution in need of reform, effectively facing a serious crisis.

The prosecution service in South Korea has also begun to undergo reform and reorganization because of cases in which prosecutors indicted people in order to intimidate the public, or in which decisions to prosecute were based on political interests rather than the law.

Concerned about corruption in the judiciary, fueled by judges' behind-the-scenes bargaining and prosecutors' abuse of their power to indict, the Korean public is demanding that the entire legal system engage in self-reflection and reform. Against this backdrop, a Korean lawyer has been caught committing a serious offense that goes beyond a breach of legal ethics and undermines the very foundation of the judicial system, sparking public outrage.


"Some argue that AI-generated fake cases submissions warrant disciplinary action, while others argue that such action is too harsh." - published by a Korean media covering legal affairs.


According to a legal newspaper in South Korea, a judge caught a lawyer who had submitted an opinion citing fake legal cases created by artificial intelligence (AI). A district court criminal division searched for five court decisions cited in the lawyer's opinion and confirmed that none of them existed.

Before the upcoming hearing, the lawyer submitted a written opinion withdrawing the disputed precedents. During the trial, however, the court asked for the source of the fake precedents, and the lawyer, who had blindly relied on AI and submitted precedents it had invented, admitted that the source was AI.

We live in a world where AI can retrieve relevant legal provisions in seconds. However, the reality is that no one can guarantee the accuracy of the legal content retrieved by AI. 

"It is deeply disconcerting that a lawyer, a legal expert who should have been the first to anticipate and be vigilant about the risks in AI use, submitted its opinion without any verification process, citing the results of a search," said a judge in an interview.

Rather than dismissing this as a mere embarrassment, the lawyer who submitted an opinion based on a fake precedent should have his or her license revoked for this lack of ethics.

A partner at a mid-sized law firm reportedly said, "If the lawyer cited a precedent knowing that it was fake, it would constitute litigation fraud, deceiving the court." But could this have been an honest mistake, made without the lawyer's knowledge?

It is highly unlikely that a lawyer, who knows how to verify the existence of a precedent, would have included it in an opinion without realizing it was fake. In my opinion, therefore, this was likely a deliberate act constituting deception of the court.

The recent harmful effects of AI, evident in various fields including the legal profession, go beyond accidental incidents. The biggest problem is not whether AI saves time for time-strapped legal professionals, but that the need to thoroughly verify results from AI-powered search engines, given the possibility of inaccuracy, ultimately wastes even more time on searches.

"Seeing AI being used so indiscriminately, it feels like humans are fighting against AI. The added burden of reviewing all data to determine if it contains fabricated information effectively doubles or triples the workload, leaving me feeling exhausted."

Lawyers say that consultations are unnecessarily lengthened by the increasing number of clients asking, "I heard there was a ruling like this. Will the same ruling come out in my case?" based on incorrect information obtained through AI.

One lawyer recalled discovering a Supreme Court precedent he had never heard of in a written statement from the opposing lawyer. He searched for it repeatedly in order to read it, only to discover that it didn't exist.


"The lawyer who submitted a document citing a fake precedent created by artificial intelligence (AI) violated legal ethics and should be subject to disciplinary action and punishment. This is because it interfered with judicial functions and was against the interests of his clients."


AI can be helpful for parties representing themselves pro se. However, since even legal professionals, to say nothing of the general public, can easily accept fake precedents without realizing they are fake, clear guidelines on the scope and ethics of AI use in legal practice seem necessary.

Indeed, one Korean judge stated, "In litigation between parties without attorneys, the volume of documents submitted using AI programs has increased, making it more time-consuming to review all the submitted data than before, as we need to identify incorrect legal provisions or judicial precedents cited."


A Korean newspaper covering legal affairs reportedly asked the National Court Administration Service (NCAS) Korea for its opinion on a case involving a fake precedent invented by AI. The NCAS official's response and attitude were unexpected and shocking.

The problems that AI programs pose to the legal profession have been raised for years. The NCAS Korea acknowledged a growing number of cases in which AI-generated, non-existent precedents were being submitted in writing, yet relevant guidelines have still not been established.

The NCAS official, who responded only now that "we feel the need to review countermeasures," seemed extremely irresponsible and unprofessional.


An image depicting AI writing a verdict. (Photo - Google Gemini, provided by a Korean media outlet covering legal affairs.)


Some wealthy influencers loudly claim that we are entering the AI era and must prepare for it.

That is not true. Humans don't necessarily have to enter the age of artificial intelligence. Nor does every country need to invest a significant portion of its national budget in AI to reap the tangible benefits of AI programs owned by a select few. It is time to change the narrative.
AI programs sometimes pose a threat to human safety, and we must consider ways to minimize the potential risks AI poses to our world.

National leaders should recognize that the risks posed by AI, which could lead to a small number of wealthy individuals dominating society, outweigh the benefits, and invest more in agriculture and manufacturing than in AI to create a society that prioritizes human health and safety.




Arguing based on fake legal principles may violate the Attorney-at-Law Act and the Attorney-at-Law Code of Ethics in South Korea. The relevant laws are as follows:
  • Article 24 of the Attorney-at-Law Act stipulates, "A lawyer shall not engage in any conduct that damages his or her dignity (paragraph 1)." "A lawyer shall not conceal the truth or make false statements while performing his or her duties (paragraph 2)."
  • Article 2, paragraph 2 of the Attorney-at-Law Code of Ethics stipulates, "A lawyer shall not distort the truth or make false statements while performing his or her duties."
  • Article 5 stipulates, "A lawyer shall maintain his or her dignity and refrain from any conduct that damages his or her reputation."
  • Article 35 stipulates, "A lawyer shall respect the judicial authority and strive to ensure fair trials and due process."
  • Article 36, paragraph 1 stipulates, "A lawyer shall not intentionally assert falsehoods or submit fake evidence during trial proceedings."

