The story appears on Page A4, June 27, 2024

Opinion

AI-generated crimes: What can we do to stop them?

As speculated when ChatGPT, built on GPT-3.5, was released in late 2022, crimes committed by means of artificial intelligence have mushroomed around the world.

For the uninitiated, ChatGPT, developed by tech startup OpenAI, is a general-purpose chatbot that uses AI to generate text after a user enters a prompt.

Just last week two cases were reported in China.

The first involved a multi-channel network that used AI tools to generate fake news articles and publish them online. According to China Central Television, the network generated and published 4,000 to 7,000 articles a day, reaping a daily income of about 10,000 yuan (US$1,378).

The network used an AI tool called “Yizhuan,” which literally means “easy to forge,” to create fake news, such as a report of a non-existent explosion in Xi’an, capital of northwestern Shaanxi Province, in January.

The network has since been shut down and its head detained.

The other case caused more harm to individuals. A man living in Beijing, surnamed Bai, allegedly made nearly 7,000 AI deepfake nude photographs and sold them online for 1.5 yuan each. Victims included school teachers, students and even his own colleagues.

Bai has been prosecuted on charges of spreading obscene material.

It is no surprise that new tools and technologies are abused by bad actors — such abuse runs through the entire history of mankind. But it is fair to ask what can be done legally to deter would-be offenders.

In fairness, China has been issuing laws and regulations to govern AI-related practices since 2022, mostly targeting services that generate content with the technology.

For instance, a regulation issued last August says that online content, including graphics, text, audio and video, must be clearly labeled as AI-generated where that is the case.

However, the regulation is more of a guideline: there is no penalty for failing to label such content. Meanwhile, AI-related crimes are still prosecuted under traditional charges, such as fraud, slander and extortion, with commensurate punishments.

“AI does make it more difficult to detect some of the cases,” noted Zhu Xiahua, a lawyer with the Shanghai Walson Law Firm.

“For example, deepfake photos and videos are not easy to detect or identify, and they may spread fast and cause great harm. Meanwhile, spreading rumors through AI can involve various charges, such as slander, extortion or damaging business reputation. Such criminals usually face a sentence of at most three years in jail.”

Moreover, AI brings about new problems that current laws may not fully cover. Zhu gave two examples: Who owns the copyright of AI-generated content, such as paintings and music? Who is responsible for traffic accidents involving self-driving cars?

“The copyright status of AI-generated content has provoked much discussion in both the technology and legal fields, but no conclusions have been drawn yet,” said Zhu.

“As for accidents caused by self-driving cars, who should be responsible? Car manufacturers, self-driving software developers or users? These questions need to be answered to protect the rights of the victims as well as to promote self-driving technologies.”

It should be noted that solving AI-related problems is a global challenge, and one in everyone’s interest.

In April, Italy’s government mulled tougher penalties for crimes using AI tools, including market rigging and money laundering. A 25-article draft bill was issued to lay down principles “on research, experimentation, development, adoption and application” of AI in the country.

Earlier, in March, the United States Justice Department warned that companies and individuals who deliberately misuse AI technology to advance white-collar crimes, such as price fixing, fraud or market manipulation, risk harsher sentences.

All new technologies are double-edged swords, and AI may be among the sharpest. Countries need to collaborate more closely to define the rights and obligations of AI developers, service providers and users, so that mankind can step into this inevitable era with minimal risk.


Copyright © 1999- Shanghai Daily. All rights reserved.
