Artists vs AI … what’s next?

The furore among artists whose work is being used to train AI models shows no sign of calming down anytime soon. Musicians who have poured tireless effort into producing and copyrighting their music cannot bear to see AI models nonchalantly picking it up to create new pieces. There have been numerous cases where AI-powered music generators have trained on millions of copyrighted songs without seeking permission from the original artists. Artists globally insist that they must give consent and be compensated before AI draws on their copyrighted work.

Yet, in the entire debate, the weight seems to tilt toward the generative AI model developers. Why so?

 

The case for fair use

In the United States, artists are generally protected by law to ensure they are fairly compensated when their original work is used by other entities. However, copyright protection can also stifle innovation, since others cannot build on the music to create something new. AI music-generation firms claim that they operate within the ambit of fair use: they do not reproduce the copyrighted music as it is, but create something new from it that is fundamentally different from the original piece of art.

Musicians, however, disagree. They contend that their work is still being consumed in its entirety, even if for the production of something different, and that this constitutes an infringement of their copyright. AI companies counter that training models is a “non-expressive use” and must therefore be exempt from copyright infringement, since copying the music is only an intermediate step in creating something entirely new that does not contain the original expression of the artists’ music.

Courts have so far largely agreed that the consumption of music by AI models to create new music falls under fair use.

Some experts argue that the work of AI must be seen through a different lens: if an AI’s output benefits and democratizes the market, eventually benefiting the common masses, then it should be allowed. For example, Amazon Kindle introduced an AI-powered text-to-speech feature so that books could be read aloud to blind users. Such a use case would be justifiable on humanitarian grounds, even though Kindle had not licensed the books for this feature (the feature was, however, shut down following lawsuits from rights holders).

 

Need for innovation leadership

Lawmakers believe that prioritizing the interests of artists over fair use risks stifling innovation and monopolizing the music industry. They advocate for democratization of the art industry, pointing out that generative AI models need enormous volumes of data for training.

If AI companies had to approach each artist to seek permission for their work, either the cost of model training would rise meteorically, or the companies would defer their innovation.

In the former case, only companies with deep pockets would survive, since they alone can afford the massive cost of licensing millions of pieces of music. That would choke the startup ecosystem and push out small companies that might otherwise have created better and more accessible AI models. Moreover, US policymakers also fear the migration of companies to other countries, such as the UK and Japan, where using copyrighted music for model training is broadly allowed. That would mean an exodus of innovation capital and talent, something no country wants to witness.

 

Work substitution debate

Some people argue that generative AI’s output should not drive away the audience of the original artists, leading to what is called the “substitution effect”.

Generative AI’s output should not leave the original artist redundant.

In essence, if generative AI creates a piece of music from Beyoncé’s original art, it should not draw the market away from Beyoncé by being expressively similar to her work. That is, the AI’s output should not substitute for the work produced by the original artist. If the AI engine creates altogether different music that sounds distant from Beyoncé’s style, that should be acceptable.

 

Labelling is what we need

Much of the furore could be calmed if AI-generated music were labelled as such, so that people know what kind of music they are consuming. The AI Labeling Act of 2023 in the United States requires media metadata to show whether AI has been used to create a piece of work, in addition to a conspicuous disclosure of AI-generated content to users. Any song submitted to a platform must carry an AI-generated label if it is the work of AI. Platform owners must mandate such labels, and AI model developers must ensure that their tools automatically label their outputs as AI-generated. Even amid a torrent of AI-generated music on digital platforms, consumers must know which piece is original and which is generated by technology.
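To make the idea concrete, the two-part requirement described above (a machine-readable metadata flag plus a conspicuous human-readable disclosure) could be sketched as follows. This is a minimal illustration assuming a simple dict-based metadata record; the field names and the `label_track` function are hypothetical, not taken from any real standard or platform API.

```python
def label_track(metadata, ai_generated, model_name=None):
    """Return a copy of the track metadata with an AI-generation label attached.

    Hypothetical sketch: 'ai_generated' is the machine-readable flag,
    'disclosure' is the conspicuous human-readable notice shown to listeners.
    """
    labelled = dict(metadata)  # do not mutate the caller's record
    labelled["ai_generated"] = ai_generated
    if ai_generated:
        labelled["disclosure"] = (
            "This track was generated by AI ({}).".format(model_name)
            if model_name
            else "This track was generated by AI."
        )
    return labelled


track = {"title": "Midnight Echoes", "artist": "Unknown"}
labelled = label_track(track, ai_generated=True, model_name="demo-model")
print(labelled["disclosure"])
```

A platform could run such a check at upload time, rejecting AI-generated submissions whose metadata lacks the flag, while model developers would write the flag automatically at generation time.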

Tomorrow Avatar

Arijit Goswami
