Post-Hallucination, Would LLMs Have Communication Parity with Humans?

By Srikanth

By David Stephen


The direct objective of solving hallucination-confabulation in LLMs may have an indirect outcome: the possibility of communication parity with humans. If large language models become nearly or totally accurate, sections of the digital sphere would be elevated to a level of communication that has never happened among nonhuman species and non-species.

Communication can be an equalizer. Communication can also be discriminatory. People who speak the same language in a place may form a bond, while those who do not are marked as outsiders.

Some parents are cautious with babies in public spaces, hoping to avoid cries that might make others uncomfortable, since a baby who rejects soothing cannot simply be told to stop. Some pets are calm until they are hungry and food is in sight but not yet served, and nothing else communicated to them will placate them other than the food. There are also many instances where pets cannot be instructed not to be distracted.

Because of the limits of communication, the extent to which humans can collaborate with other organisms is limited. LLMs do not have many of the advantages that organisms have in their environments, but they may have an unmatched communication reach with humans.

There are communications that only those in a profession understand. Communication may be just one aspect of human intelligence, but it accounts for much of what makes humans different.

Generative AI has human languages. It can answer lots of questions in different areas. It can also output long essays, with errors. These errors, or hallucination-confabulation, still restrict its permeation into the center of the human sphere.

As soon as hallucination-confabulation is nearly eradicated and there is higher confidence in the outputs of LLMs, the world might become different from any point in history.

This will present a choice wherever anything digital is concerned. If a task, for productivity, is digitally possible, why use more humans than necessary? In some cases, why use humans at all? If there is a need for something for the mind that communication could solve, why would the choice not be generative AI, if it is more affordable, more available, and carries little to no emotional stress?

Accurate, capable LLMs may also compete for total attention in some human-rights and animal-rights situations: since LLMs are already useful, why fight for others if LLMs seem more capable than they are?

Solving hallucination-confabulation for LLMs may bring them a degree closer to the human mind. The human mind has many feedback properties, with constant checks that ensure misses are fewer.

An alarm may be heard and not feel threatening because the mind registers when or where it last sounded, showing there was no risk and preventing a flight reaction. The observations labeled predictive coding, predictive processing, and prediction error can also be described as feedback processes.

Once LLMs can have feedback, in a way that makes it appear they know what they are doing even if they do not, something within the digital sphere, where humans now spend most of their hours, may have emerged to compete with what would otherwise be contributed digitally by humans only.
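The feedback idea described above can be sketched in miniature: draft an output, check it against a verifier, and revise or abstain rather than answer wrongly. This is a hypothetical toy sketch, not any real model's API; `generate` and `verify` here are stand-in functions invented for illustration.

```python
def generate(question, feedback=None):
    """Toy stand-in for an LLM call; a real system would query a model."""
    if feedback is None:
        return "Paris is the capital of France?"  # uncertain first draft
    return "Paris is the capital of France."      # revised after feedback


def verify(answer):
    """Toy verifier: flag drafts that hedge with a trailing question mark."""
    if answer.endswith("?"):
        return False, "Answer is uncertain; state it plainly or abstain."
    return True, None


def answer_with_feedback(question, max_rounds=3):
    """Generate-check-revise loop: return only an answer that passes the check."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(question, feedback)
        ok, feedback = verify(draft)
        if ok:
            return draft
    return "I don't know."  # abstain rather than emit an unchecked answer


print(answer_with_feedback("What is the capital of France?"))
```

The design point is the loop itself: an external check feeding back into the next attempt, with abstention as the fallback, which is one way a system can appear to know what it is doing without genuine understanding.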

Basic digital, without AI, has already come between people. Basic digital has resulted in many jobs being replaced. People are on their phones all the time, in basic digital, without AI, going from app to app, trying not to get bored.

With AI no longer hallucinating, it would become the people within digital, rather than actual people being needed to fill digital. It would also make recommendations all the time for what to do, or what is trending, across apps, for pleasure overload. AI without hallucination may also be good enough to do away with many jobs where digital is involved.

This does not mean the hallucination-confabulation problem should not be solved; it means preparing for what might follow, rather than assuming the goal is simply to make LLMs better.
