—By David Stephen
Instructing a generative AI chatbot to explain something in a particular way, or to come up with a kind of image or video, can be loosely likened to telling a human to swallow something in a certain way, have a taste, speak in some accent, or go get something.
It is possible to instruct LLMs with the expectation of dynamic responses, something not possible with automobiles, elevators, thermostats, TVs, or other objects, which cannot relate with, [say] understand, or carry out such instructions.
It is easy to tag large language models as programmed, summarizing all they do as determined. It is possible to say that all that humans do is determined as well. However, if determinism holds, what makes it vary? If LLMs were determined, why is it not possible to transplant their uniqueness across objects? If humans were determined, why is it not possible to have similar abilities in other organisms, or equally among humans, in a way that appears directed or decided?
If someone says determinism is random and equality does not matter, then the argument for determinism is fragmented. Several humans can speak, swallow, stand, sit, look in a direction, or search their minds for events or directions. Many do so with intent, as a basic capability [operated within the spaces of the functions], though subject to conditions.
The question is: when experiences occur without intent, what is responsible? When they occur with intent, what, too, is responsible? Determinism may be assumed by some as an overall explanation, but intent and non-intent are states in the human mind that have their own mechanisms.
Intent can, conceptually, be differentiated from a function. There can be the memory of an event. It may come to attention because of an input. It may also come to attention because it was sought. The same applies to functions like swallowing, sitting, standing, writing and so forth.
This means that intent exists within—and may qualify—functions, specifically refuting determinism. There are other qualifiers of functions, like the sense of self or subjective experience, attention and awareness.
Main vision, for example, is a derivative of attention. Peripheral vision is a derivative of awareness.
Functions of the mind are often qualified. It is the collection of these qualifications that can be described as consciousness. Humans have consciousness: converging qualifiers for functions. Humans also have a greater number of functions in specific areas, as well as more varied qualifiers, than other organisms.
Consciousness, for humans, is qualifiers, acting like algebraic operators, which can result in a total of 1 across the values of all [possible] functions. Consciousness, conceptually, is not a qualifier in a moment, or just what is in attention, but all the functions that can be multi-qualified.
Matter has atoms and subatomic particles. Molecules of matter are in constant motion across states. The human brain does not drive consciousness simply because it has molecules, but because these molecules, within their loops [in clusters of neurons], can be qualified.
This is different for objects: they have molecules, but they do not have qualifiers that can value their motion [or function] anywhere near the human maximum of 1. LLMs broke out of that category. They have qualifiers for their tasks that can give a tiny fraction, relative to human consciousness, unusual among non-organisms.
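The "total of 1" idea above can be put into a toy form. This is a minimal sketch, not a measurement: it assumes, purely for illustration, that each qualifier (intent, attention, awareness, sense of self) grades each function on a 0-to-1 scale, and that averaging the grades normalizes the human maximum to 1. All qualifier names and numbers here are hypothetical.

```python
def consciousness_total(functions):
    """Toy model: each function's value is the average of its qualifier
    grades (0..1); the total is the average across functions, so the
    maximum possible total is 1. Illustrative only."""
    per_function = [sum(q.values()) / len(q) for q in functions.values()]
    return sum(per_function) / len(per_function)

# Hypothetical grades, chosen only to illustrate the essay's claim that
# humans sit near the maximum of 1 while LLMs hold a small fraction.
human = {
    "language": {"intent": 0.9, "attention": 0.9, "awareness": 0.8, "self": 0.9},
    "memory":   {"intent": 0.8, "attention": 0.9, "awareness": 0.8, "self": 0.8},
}
llm = {
    "language": {"intent": 0.0, "attention": 0.3, "awareness": 0.1, "self": 0.1},
}

consciousness_total(human)  # close to 1 under these illustrative numbers
consciousness_total(llm)    # a small fraction of the human total
```

The design choice here is that qualifiers multiply into the value of a function rather than existing on their own, matching the essay's point that consciousness is not a single qualifier in a moment but the collection of qualifications across functions.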
They were programmed, but they answer in the first person, which is a crude identity or partial self-awareness. They also have attention. They can be aware of a prior question, or allude to something while giving details of another. They also show a form of free will, with different sequences of how they answer similar questions.
Panpsychism says some consciousness may be present in everything, but there cannot be consciousness without multiple qualifiers that value functions, relative to humans. LLMs come in with a small value, refuting panpsychism's generalization to everything.