Five Impacts of Artificial Intelligence in 2023

It seems scarcely believable that ChatGPT has been with us for little more than a year. In November 2022, OpenAI launched ChatGPT and it quickly garnered millions of users, raising alarm at Google and sparking an AI competition between tech giants. In the past year, AI has become a defining factor of the 'new industrial revolution', sparking debates about ethical considerations for the human race and gaining the attention of world governments due to its possible damaging effects, as well as its benefits. Without a doubt, AI has made an enormous impact in the past year.

The impact of AI is especially visible on the web, where its presence has provoked a range of reactions: fear and excitement, debates about ethics, the spread of deepfakes and other synthetic content, and, more recently, the threat of legal action by copyright holders against AI companies that used their content to train models on enormous datasets.

Let’s reflect on our experience with generative AI over 2023.

AI re-defined the term "hallucination"

In 2023, we discovered that computers can also be prone to hallucinations, albeit not in a pleasant or enlightening way. In this context, a hallucination is when a generative AI confidently presents an answer as a suitable response to a request, irrespective of whether its statements are actually true. We should perhaps not be surprised when we remember that the technology behind generative text AI is conceptually simple: based on its experience with indescribably large amounts of information, an LLM (Large Language Model) simply attempts to predict the next word. This leads to statements that are linguistically coherent, and may contain elements of reality, but are not necessarily true. Since the debut of ChatGPT, Bing Chat and Bard, the online world has been filled with the bizarre utterances of AI chatbots, whether volunteered or deliberately extracted. These have ranged from silly but harmless to defamatory and harmful. Google even experienced a mishap with its chatbot Bard, which presented inaccurate information in one of Google's own demo videos. All of this has caused people to question how truthful AI-generated content really is.
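The next-word-prediction idea can be illustrated with a toy sketch. This is not how a real LLM works internally (LLMs use neural networks over tokens, not word counts); it is a minimal bigram model, with an invented corpus and helper name, that shows why such a system produces fluent continuations without any notion of truth:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the web-scale data an LLM is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training, or None.

    Note: the model picks the statistically likeliest continuation; it has
    no way to check whether the resulting sentence is true -- which is the
    root of hallucination.
    """
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Scale the corpus up to much of the internet and the predictions become fluent paragraphs, but the underlying objective is still "most plausible next word", not "true statement".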

Deepfakes entered the mainstream

AI-generated media altered or fabricated to appear real has been a source of worry for some time. This year, however, thanks to the widespread availability of generative AI tools, creating realistic images, video and audio became simpler than ever. Generative AI is being used to create images from text prompts, with OpenAI's DALL-E 3, Google's Bard and its SGE image generator, Microsoft Copilot (formerly Bing Chat) and Meta's Imagine being some examples. Additionally, Shutterstock, Adobe and Getty Images have launched AI image generation tools.

A number of methods are available to address the legal and practical issues around AI-generated images. Watermarking images to identify them as AI creations, refusing to produce highly realistic renderings of famous people, trademarks and logos, and blocking potentially dangerous or offensive images are all strategies employed to guard against inappropriate use of the technology.

Despite these safeguards, people have still found ways around them. This year, a song that sounded like music from Drake and The Weeknd appeared across music streaming platforms before being taken down. In other examples, AI was used to make it appear that Tom Hanks was endorsing a dental plan on Instagram, and Scarlett Johansson's voice and likeness were used to promote a '90s yearbook AI app.

The US Congress proposed legislation to shield public figures from deepfakes, which have become a serious danger to their reputations and careers. President Biden addressed the risk by issuing an executive order calling for content produced via artificial intelligence to be watermarked.

The issue of how models are trained

What has allowed LLMs to become so proficient? They are trained on enormous swathes of the internet: Reddit posts, social media threads, Wikipedia articles, illegally downloaded books, news outlets, scholarly articles, YouTube captions, food blogs and memes, all feeding the models' never-ending thirst for data.

It is unclear whether scraping the internet to train AI models is legally permissible. OpenAI and Google have been served with class-action lawsuits for allegedly using personal data and copyrighted works without permission. Additionally, Meta and Microsoft are being sued for using pirated books from the Books3 database to train their models. The Books3 database was later removed following a DMCA complaint in August.

Author Jane Friedman encountered an obvious case of copyright violation when she found an accumulation of AI-generated publications with her name on them for sale on Amazon.

There is disagreement about the appropriateness of using data found online for machine learning. Some argue that it is fair to do so, while others contend that privacy and copyright laws should be amended to reflect the complexity of the issue. All sides recognise that the matter is far from resolved.

More of us encountered content created using AI

Generative AI can produce natural-sounding language that, at present, sometimes reads as though it was written by someone with limited knowledge and a somewhat robotic style. But language models are becoming more polished, making the automation of various forms of writing, such as articles, press releases, job postings and creative works, a tempting option for many.

AI-generated content has met with resistance when presented to consumers. CNET drew the wrath of its staff and readers when it chose to publish AI-generated articles that turned out to be inaccurate. Gizmodo was subsequently found to have done the same with a Star Wars story. Sports Illustrated went further, creating fictitious authors for its AI-generated articles. Meta has fully committed to generative AI, introducing celebrity-based "Personas", which are not the famous figures themselves, and building advertiser tools for creating AI-generated ads.

The music industry has also joined the AI trend. UMG, the label representing Drake, is reportedly exploring ways to sell singers' voices for AI-generated music and split the royalties with the artists. Drake has spoken out against AI mimicking his voice, while Grimes sees it as a new way to collaborate with her fans and will share the proceeds from AI creations with them.

The issue of who will benefit from the continued existence of AI-generated content and who may be disadvantaged by it is a pertinent one.

AI will profoundly affect human labour

This year, tech companies such as Microsoft, Google, Zoom, Slack and Grammarly have all highlighted the potential of AI tools to improve work productivity as a key selling point. According to an MIT study, the use of generative AI can considerably reduce the time required to complete tasks. Although these tools are at an early stage, and some are only accessible to paying customers, the effects are bound to be far-reaching.

It's clear that generative AI tools for work cannot yet be trusted without proper human supervision, which could itself be a hindrance to productivity. Consequently, it remains essential to double-check responses and be cautious about what is shared with LLMs such as ChatGPT. To illustrate the importance of this, Samsung experienced a data leak when its workers unwittingly shared sensitive information with the model, which could then be used for training.

OpenAI responded to privacy concerns by offering the option to opt out of data sharing in ChatGPT and by releasing enterprise versions. Even so, we should expect data breaches to continue and remain vigilant.
