Language-Based AI: Is Human Involvement Still Necessary?

By Sony T

According to a 2020 report by Fortune Business Insights, the global artificial intelligence market is expected to reach $267 billion by 2027. That momentum is likely why research from the International Data Corporation (IDC) predicts that worldwide spending on cognitive and AI systems will quadruple by 2021.


Artificial intelligence, machine learning, and augmented reality are reshaping industries, and AI-enabled systems are becoming an integral part of everyday life. Voice assistants like Alexa manage your shopping lists, Netflix serves personalized movie recommendations, and Spotify builds customized playlists.

It’s safe to say that artificial intelligence will become the backbone of more companies that want to provide users with personalized experiences. While AI is becoming good at providing customer service and even translating languages, its larger impact clearly lies in complementing human capabilities rather than replacing them.

Machines vs Humans

Machine-learning algorithms are only as good as the datasets they are fed. That is why several experts believe AI systems shouldn’t be trained on simple, typical scenarios alone. Many have suggested also training on extreme events from human history, such as the Great Depression of the 1930s and the financial crisis of 2007 to 2008.

While sophisticated AI models such as GPT-3 are popular for their ability to mimic human-like language, they still have major flaws. They often struggle to understand the intent behind human conversations. Moreover, they show biases that can be traced to their training datasets. GPT-3 is a highly complex machine-learning model trained on vast amounts of data, and beta testers have achieved impressive results using it for specific tasks such as essay writing and even machine translation.

However, according to Sam Altman, CEO of OpenAI, GPT-3 experiments are still riddled with errors: the system has serious weaknesses and can make silly mistakes at times. That is why humans need to stay involved in most practical AI applications, especially those dealing with human language. Consider Facebook’s possible role in spreading fake news during the 2016 US elections. After Senate proceedings brought the issue to light, many people learned to verify sources, and it was still up to humans to report fake news and stop the algorithm from spreading it.

AI Systems Barely Understand Context

Although the objective of AI is to match human intelligence, these models are trained on texts that are barely linked to real-world scenarios. While GPT-3 can tell you who the first president of the United States was, “it can’t tell if a toaster is heavier than a pencil.” The thing is, the “knowledge” of these language models is limited to the textual data used to train them.

AI-enabled language systems make judgments that are unrelated to how language is typically interpreted in context. For example, a model may treat the following two requests as unrelated even though a human instantly recognizes they share the same intent: “Input your credit card number” and “Could you provide your payment details?”
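To see why surface-level text comparison falls short of understanding intent, here is a minimal, hypothetical sketch. The `token_overlap` function is purely illustrative (a simple Jaccard word-overlap score, not any real intent model):

```python
# Minimal sketch: word-overlap similarity misses shared intent.
# token_overlap is illustrative only, not a production intent model.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets."""
    ta = set(a.lower().replace("?", "").split())
    tb = set(b.lower().replace("?", "").split())
    return len(ta & tb) / len(ta | tb)

request_a = "Input your credit card number"
request_b = "Could you provide your payment details?"

# Despite identical intent, the surface overlap is tiny:
# the two phrases share only the word "your".
print(token_overlap(request_a, request_b))  # → 0.1
```

A human reads both sentences as the same request; a system that leans on surface features alone sees almost nothing in common, which is one reason intent is hard for models to capture.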

Machines are Biased

According to a 2017 PwC survey, 76% of CEOs are heavily concerned about the potential for bias and lack of transparency in AI adoption. Bias in natural language processing can be traced to the pre-existing societal biases that shape how and what we speak, and those biases carry over into the text we use to train machine-learning models.

With biased data, we allow our own biases to be incorporated into, and confirmed by, these models. GPT-3 has shown shortcomings in preliminary research and analysis, including experiments that probed the model’s associations around race, gender, and religion. For ethical reasons, there has to be some form of human-AI collaboration: since models tend to reflect the perceptions and stereotypes in their training data, humans need to be involved in the development and training of these language models.
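One simple, partial form of that human involvement is auditing a training set before the model ever sees it. The following hedged sketch (the toy sentiment data is invented for illustration) counts label frequencies so a human reviewer can spot obvious skew:

```python
from collections import Counter

# Toy, invented (text, label) pairs for a sentiment task.
training_data = [
    ("great service", "positive"),
    ("loved it", "positive"),
    ("amazing experience", "positive"),
    ("terrible support", "negative"),
]

# Count how often each label appears in the dataset.
label_counts = Counter(label for _, label in training_data)
total = sum(label_counts.values())

# Flag any label that dominates the dataset (threshold is arbitrary)
# so a human can decide whether to rebalance before training.
skewed = {lbl: n / total for lbl, n in label_counts.items() if n / total > 0.6}
print(label_counts)  # Counter({'positive': 3, 'negative': 1})
print(skewed)        # {'positive': 0.75}
```

A frequency count like this won’t catch subtle societal bias, but it is the kind of cheap sanity check a human reviewer can run before a skewed dataset quietly becomes a skewed model.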

Why Human Involvement Is Necessary

Artificial intelligence is helping us interact with customers, amplify our cognitive strengths, and perform physical tasks. While AI assistants such as Cortana or Alexa are purely digital systems, AI-enabled robots now augment human workers in hotels, warehouses, and even laboratories. In factories, for example, these robots handle the heavy lifting while humans focus on the less repetitive tasks that require human judgment.

According to Harvard Business Review research involving 1,500 companies, firms achieve the most significant improvements when AI and humans work together. While many companies use AI to automate certain processes, those whose main goal is displacing employees may only see short-term productivity gains.
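One common pattern for this kind of collaboration is human-in-the-loop routing: let the model act when it is confident, and escalate to a person when it is not. Here is a minimal sketch; the function name, labels, and the 0.8 threshold are assumptions for illustration, not any specific product’s API:

```python
# Hedged sketch of a human-in-the-loop pattern: act on confident
# model outputs automatically, escalate uncertain ones to a person.
# The names and the threshold are illustrative assumptions.

def route(prediction: str, confidence: float, threshold: float = 0.8):
    """Return ('auto', prediction) when the model is confident enough,
    otherwise ('human_review', prediction) so a person decides."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("approve_refund", 0.95))  # → ('auto', 'approve_refund')
print(route("approve_refund", 0.55))  # → ('human_review', 'approve_refund')
```

The design choice matters: instead of asking whether AI should replace the human, the threshold makes the division of labor explicit, with the machine handling volume and the human handling ambiguity.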

Although the speed and scalability of AI systems make them highly coveted, you need the leadership, social, creative, and teamwork skills of humans to make the whole system function. Humans and AI need to actively complement each other’s strengths and capabilities. For example, Cortana, Microsoft’s AI assistant, required extensive hours of training with a poet, a playwright, and a novelist to develop the right personality. Human trainers were needed for Amazon’s Alexa and Apple’s Siri in the same way, which is probably why these assistants keep getting smarter and can display several human traits.

Conclusion

At best, AI models are only as good as the data we feed them, so they may end up reflecting our biases, stereotypes, and perceptions. They are machines and possess no form of empathy. That is why it is necessary to train AI models on disparate datasets while ensuring human oversight to maintain a balance.

Many people fear we are becoming slaves to AI, and some tech experts have encouraged that framing by describing the goal of human-level AI as “replicating human intelligence.” According to Peter Norvig, AI pioneer and director of research at Google, we shouldn’t see artificial intelligence as trying to “duplicate humans.” We are better off when humans and machines collaborate to complete tasks neither can do alone. In the end, AI is only a tool that we must learn to use the right way.
