One of AI's breakthrough innovations is generative AI, which has become a valuable asset for industries. Given the technology's great potential, Microsoft made headlines in January with its ground-breaking investment of $10 billion in OpenAI, the creator of ChatGPT [1]. However, generative AI can be a double-edged sword: beneath its promising capabilities lie serious risks.
Art-ist or Art-ificial?
The advent of this state-of-the-art technology has ignited discussions about the evolving role of artists in an AI-driven world. While generative AI can produce impressive, realistic content, some argue that it could overshadow craftsmanship that artists took years to cultivate, reducing the value, effort, and progress associated with art creation to a few clicks. Furthermore, with 48.2% of marketers actively employing generative AI tools to generate content ideas [2], there is a legitimate apprehension that it could lead to a homogenization of creative output.
AI adoption in enterprises also raises concerns about job displacement and its impact on the workforce. McKinsey reports that the implementation of automation could displace work for around 800 million individuals by 2030, highlighting the scale of impact this technological shift may have on the workforce [3]. In a post published in the Harvard Business Review in December, three professors highlighted DALL-E as an AI tool capable of generating images within seconds, posing a potential disruption to the graphic design industry [4]. While AI brings significant economic gains through boosted productivity and efficiency, individuals who struggle to adapt to the changing landscape will face job displacement. Professionals in the creative industry will be particularly affected, with 74.3% of creatives agreeing that AI will impact their jobs in some way in the next decade [5].
While generative AI presents certain challenges, it also opens up new avenues that rely on human skills and expertise. Automation and AI hold the potential to enhance productivity and stimulate economic growth. Thus, businesses should grasp these technologies' broader implications and limitations when considering their impact on the job market.
Who owns my art?
Debates have emerged regarding the use of unlicensed content in training data and the ethical issues arising from users explicitly referencing trademarked works without obtaining permission from the creators. Consequently, certain image-hosting platforms have opted to prohibit AI-generated content due to concerns surrounding intellectual property rights, such as the unauthorized reproduction of copyrighted works and the creation of unauthorized derivative works.
According to current US copyright law, AI systems are not recognized as authors of the content they generate. Instead, their outputs are treated as derived from human-generated work, which may include copyrighted material drawn from the internet. In March 2023, the UK Intellectual Property Office (IPO) announced plans to establish a code of practice for generative AI companies, facilitating their access to copyrighted material [6]. To improve confidence in licensing and make copyrighted material more accessible as training input, AI companies will be given guidance on accessing copyrighted works, with safeguards such as labeling to protect the rights of copyright holders.
While the potential benefits of generative AI technology are significant, enterprises need to navigate the legal and regulatory landscape carefully to avoid potential legal issues. Companies that adopt third-party generative AI tools should carefully review the service provider's terms of service and intellectual property policies. This examination is necessary to ascertain the ownership of content generated by the AI system and the utilization of user-inputted content for training purposes. Additionally, enterprises should disclose this usage to customers and explicitly address matters related to intellectual property ownership and liability in their customer agreements.
Above all, it is crucial to stay transparent about creators, providing clear attribution or acknowledgment of the individuals or entities responsible for creating the content. This transparency promotes ethical practice and helps establish accountability and respect for original creators. Initiatives have also emerged to safeguard intellectual property against misuse by generative AI. Notably, the Content Authenticity Initiative (CAI) enables artists to get credit for their work wherever it goes by letting creators attach important attribution data to their content, such as their name, the creation date, and the tools used to create it [7].
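The attribution idea above can be illustrated with a minimal sketch. The function names and manifest fields below are hypothetical and greatly simplified — this is not the actual CAI/C2PA format, which uses signed, embedded manifests — but it shows the core mechanism: binding a creator's name, date, and tools to a specific piece of content via a cryptographic hash, so later tampering is detectable.

```python
import hashlib
import json
from datetime import date

def attach_attribution(content: bytes, creator: str, tools: list) -> dict:
    """Build an attribution manifest for a piece of content.

    Hypothetical, simplified format; real systems (e.g. C2PA) sign and
    embed the manifest in the asset itself.
    """
    return {
        "creator": creator,
        "created": date.today().isoformat(),
        "tools": tools,
        # The hash ties the manifest to this exact content,
        # so any alteration of the content breaks the link.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_attribution(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash in the manifest."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

# Example: a creator records attribution for an artwork.
artwork = b"...image bytes..."
manifest = attach_attribution(artwork, "Jane Artist", ["Photoshop", "DALL-E"])
print(json.dumps(manifest, indent=2))
print(verify_attribution(artwork, manifest))       # True: content unchanged
print(verify_attribution(b"altered", manifest))    # False: content was modified
```

A real deployment would also sign the manifest with the creator's private key so the attribution itself cannot be forged; the hash alone only detects that content and manifest have drifted apart.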
To take this a step further, in May 2023 the European Parliament advanced the EU's Artificial Intelligence Act. This first comprehensive AI regulation introduces transparency requirements for generative AI, including publishing summaries of copyrighted training material and implementing safeguards against illegal content. The legislation categorizes AI systems into four risk levels, and non-compliance can result in fines of up to 30 million euros or 6% of a company's annual global revenue [8].
To use or to abuse?
Because generative AI can produce highly realistic content, the fake news and misinformation it facilitates have become a concerning phenomenon in the digital age. According to a deep fake detection company, the number of deep fake videos shared on the internet tripled in 2023, with approximately 500,000 deep fake videos and voice recordings shared worldwide by the end of the year [9]. Videos of individuals engaging in actions or expressing themselves used to be solid proof of actual events. The emergence of deep fakes undermines this dependable norm and brings forth concerning outcomes.
One significant impact companies may experience from deep fakes is damage to their brand reputation. If a business becomes the target of deep fakes spreading false or defamatory content, the result can be eroded consumer trust, decreased loyalty, and negative publicity. These videos can also deceive customers, employees, and investors through fake endorsements, fabricated statements, or manipulated financial reports, and the resulting misinformation can lead to financial losses, legal complications, and erosion of stakeholder trust. Notably, a Tessian survey found that 74% of IT leaders consider deep fakes a threat to their organizations' security [10]. Misrepresentation of employees through deep fakes can likewise have reputational and financial consequences for businesses.
Therefore, the challenge lies in distinguishing between authentic and AI-generated content, as the lines between reality and fabrication become increasingly blurred. A survey found that 57% of global consumers claimed they could detect a deep fake video, while 43% said they could not tell the difference between a deep fake and a real video [11].
Addressing this issue requires a multi-faceted approach involving technological advancements, media literacy education, and responsible use of generative AI to ensure the integrity of information in the digital landscape. Specifically, 77% of American adults agree that measures should be taken to restrict altered videos and images intended to mislead [12].
Media literacy education has emerged as an essential approach to empowering individuals to critically evaluate and discern information produced by generative AI. Understanding AI systems' capabilities, limitations, and potential biases equips businesses with the tools to navigate the digital landscape effectively. With 71% of respondents admitting to being unfamiliar with deep fakes [13], media literacy programs should prioritize teaching individuals to evaluate information sources and fact-check claims. Finally, it is crucial for enterprises to establish robust policies to address misuse and be prepared to counter future attacks effectively.
Paving the way for fair use
Generative AI has proven to be an invaluable tool for enhancing art creation and expediting the creative process. Nevertheless, it is crucial to address the lingering concerns surrounding art ownership and deep fake content to ensure the responsible use of AI. This approach enables the harmonious integration of artificial intelligence capabilities with creators' unique talents and upholds the integrity of artistic expression.