- The early stage of AI development also brings forth certain inherent legal complexities and hurdles that advertisers need to take into account.
- These include concerns regarding the ownership of AI-generated content, copyright issues, data security, potential AI bias, and so on.
- Obtaining proper authorisations and licences for uploaded materials, particularly copyrighted and trademarked content, is crucial.
However, despite the advantages in efficiency, cost savings, and productivity that generative AI offers, this early stage of AI development also brings forth certain inherent legal complexities and hurdles that advertisers need to take into account.
A report by law firm Khaitan & Co and the Advertising Standards Council of India (ASCI) has pointed out that legal ramifications such as copyright infringement need to be kept in mind more than ever in these changing times. There are also concerns regarding the ownership of AI-generated content, data security, potential AI bias, and more.
When it comes to copyright infringement, generative AI can get tricky. For example, a song featuring AI-generated, cloned voices of The Weeknd and Drake was uploaded and streamed over 15 million times before it was eventually taken down.
In another instance, photographer Boris Eldagsen declined a Sony World Photography Awards prize after disclosing that his winning image was generated using AI, a revelation that sparked a debate about the role of AI in photography.
The Copyright Act, 1957 provides copyright protection in India for original works and defines the 'author' as the creator. Because AI is not recognised as a legal entity, AI-generated content faces uncertainty regarding copyright protection, raising concerns about ownership and infringement. Advertisers and marketing agencies may encounter challenges in claiming legal ownership of AI-generated works.
Generative AI models rely on two types of data: training data and user input, both of which require due diligence to avoid copyright infringement. The unclear status of AI in copyright law necessitates careful consideration to ensure compliance and protection.
Training generative AI models using data from open sources or the internet can introduce biases, misinformation, and misleading information into the output. Limited diversity in datasets may result in underrepresentation of cultures, races, and genders, perpetuating historical stereotypes.
The effectiveness of AI tools relies on the quality of training data, but concerns arise from the dominance of English-language data and models, potentially disadvantaging non-English speakers and those outside the Global North. Global cooperation is crucial to avoid linguistic discrimination and exclusion in AI development.
The report points out that in India, AI is not recognised as a legal entity, leaving AI-generated works created without human involvement ineligible for copyright protection. Consequently, advertisers may find themselves without legal ownership of AI-generated content and with limited recourse in the event of infringement by third parties.
Moreover, marketing agencies may encounter challenges in fully transferring ownership of AI-created content to their clients if the agencies themselves are not deemed its rightful owners.
As per the report, a critical focus is on thoroughly reviewing AI platform terms, securing proper authorisations for copyrighted content, and carefully avoiding prohibited inputs. Implementing robust content review processes, setting up guidelines, and incorporating AI disclaimers also play a significant role in mitigating potential liabilities.
It is also vital to safeguard confidential data by enforcing non-disclosure agreements and implementing strong security measures. Upskilling human labour is recommended to maintain human control, ensure responsible AI use, and effectively minimise legal and ethical risks.
The Khaitan-ASCI white paper points out that stakeholder engagement is crucial in navigating AI complexities, involving developers, policymakers, experts, users, and the public. In India, the government's attention to AI is evident through initiatives by the Ministry of Electronics and Information Technology (MeitY) and NITI Aayog, as well as the proposed Digital India Act with its AI regulations.
Emphasising fairness, accountability, transparency, and ethics guides responsible AI development. Learning from global perspectives, such as the EU AI Act, helps harmonise regulatory standards. Effective policies balance innovation with rights, privacy, and workforce impact. A collaborative approach fosters trust, responsible AI practices, and societal benefits.