Update: AI banned from imitating voices? – First judgement in China

published on 21 May 2024 | reading time approx. 3 minutes
At the beginning of April, we reported on the ELVIS Act (in German) in the US state of Tennessee, a law to regulate AI voices. Two weeks later, the Internet Court in Beijing issued its first judgement on the infringement of personal rights by AI-generated voices. The court found that the use of AI voices that imitate the voices of natural persons may be unlawful. We explain the content of the judgement and examine the legal situation in Germany.

Use of AI-generated voices in China

The case before the Internet Court in Beijing involved a voice actress whose voice had been replicated by an artificial intelligence (AI)-based tool and used to narrate numerous audio books on the internet. The background: a media company that held the rights to the voice actress's audio recordings had commissioned an AI software company to train an AI application with these recordings. This was done without the knowledge or consent of the voice actress. When she learnt about the AI-generated voice, she sued both the media company and the AI software company for damages for her economic losses and immaterial damage.
It is the first case of its kind to be decided in China. Article 1023 of the Chinese Civil Code extends the protection of the right to one's own likeness to a natural person's voice, making the voice a protected personal right. According to the court in Beijing, a natural person's voice can be distinguished by its sound, tone and frequency. The voice is unique and constant and can reveal the character and appearance of a person by evoking thoughts or emotions in the listener.
Artificially generated voices could be associated with natural voices, even if the synthesis introduced slight deviations. Based on timbre, intonation and tone, an AI voice could make a natural person identifiable. Where this is the case, the listener associates the (artificially generated) sound recording with the natural person and identifies it as his or her voice; in addition, certain thoughts or emotions may be evoked. The natural person concerned must therefore be able to authorize or prohibit this use of their voice.
As the defendants did not have the necessary authorizations from the plaintiff for this use, the court ordered them, as jointly responsible parties, to pay compensation in the total amount of 250,000 Yuan (approx. 32,000 Euro or 34,500 US Dollar). In assessing the amount of compensation, the court took into account the infringements, the actual market price of the sound recordings and the number of sound recordings played on the same market. In addition, the defendants were obliged to apologize to the plaintiff within seven days.

And again: Protection against “voice theft” in Germany?

According to this Chinese case law, our own voice is a part of our personality by which we can be identified and which characterizes us. The judgement raises the question of what German courts would or should decide in a comparable case.
In our last article on protection against AI voice generators in Germany (in German), we explained that the voice is also covered by the general right of personality in Germany. This is a constitutionally guaranteed fundamental right under Article 1 (1) and Article 2 (1) of the German Constitution (Grundgesetz) and is also protected under civil law in accordance with Section 823 of the German Civil Code (Bürgerliches Gesetzbuch).
According to the German Federal Court of Justice (Bundesgerichtshof), the general right of personality and its special manifestations primarily serve to protect ideal interests, in particular the right to respect and recognition of one's personality. However, pecuniary interests are also covered by the right of personality. This is important given that the voice can have considerable economic value. (BGH, judgement of 1 December 1999, ref. I ZR 49/97)
If the voice of a natural person is used without their permission to generate an AI voice that imitates it, the person concerned can take legal action and claim injunctive relief and damages, as is also the case in China. Damages can cover both material and non-material harm. The German legal situation is thus quite comparable to that in China; if a German court had had to decide the case described, it would probably have come to a similar conclusion.
However, it remains to be seen how high awards of damages in Germany would be. Nevertheless, the Chinese judgement can be seen as a “warning shot” for German companies wishing to generate AI voices: if an AI is to be trained on existing sound recordings, it is essential to obtain the permission of the person whose voice is recorded.
The example of dubbing and audiobook voice actors, as well as artists such as singers or actors, shows why this legal situation is to be welcomed. Their work is characterized by the unique, often recognizable sound of their voice. If AI were used to imitate their voices without their permission and involvement, they could be deprived of a significant part of their future earning potential.
However, this problem does not only affect people whose voice is obviously an important part of their job. “Voice theft” can also pose a serious threat to managing directors, board members or other people in important (corporate) positions, as well as to public figures (e.g. politicians). Not only email addresses and signatures can be misused or forged with AI, but also voices: fake calls or fake voice messages can be created that appear to contain instructions from the person the voice belongs to, but which do not actually come from that person.
Fortunately, the Chinese judgement and the comparable German legal situation show that such conduct is unlawful. Nevertheless, caution is advised with voice messages and calls containing important information or instructions, such as a request to transfer money to an unknown account.


Contact persons:

Laura Münster
+49 221 9499 09682

Susanne Grimm
Associate Partner
+49 711 7819 144 10