—By David Stephen
There is a recent feature in Nature, "AI consciousness: scientists say we urgently need answers", stating that "scientific investigations of the boundaries between conscious and unconscious systems are urgently needed", and citing ethical, legal and safety issues that make it crucial to understand AI consciousness: "For example, if AI develops consciousness, should people be allowed to simply switch it off after use? It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines."
The question of machine consciousness begins with whether a memory can be conscious. If an individual sees a color, what makes knowing that color a subjective experience?
The definition of consciousness is clear: subjective experience. It may extend to what it feels like to be an organism. Sentience, sometimes used interchangeably with consciousness, refers to the capacity for feelings and sensations. Consciousness is generally acknowledged in humans and a few other organisms.
These definitions suggest that there is nothing conscious about digital systems: not search engines, not data centers, not video games, not virtual reality, not robots, not bionics, not the internet, not digital cameras, not large language models [LLMs]. They are not self-aware. They understand nothing. They have no feelings. They were programmed or built.
The problem is that these conclusions agree with labels, not with what the mind does. If an individual is conscious of cold weather, it may mean that the cold weather is in attention, a prioritized mind process. It may mean that the cold weather is in awareness, less than in attention, while the individual does other tasks. It may also mean that the person gets coffee, as an intentional response to the cold weather.
Consciousness is a super qualifier, collecting the qualifiers: attention, intent, awareness and self. Subjective experience is sometimes defined by attention, where seeing or hearing is substantive because it is prioritized in the mind. Consciousness is possible in awareness, but interchanges between attention and awareness could see processes start out in attention and continue in awareness.
Subjective experience may also be defined by intent, where an individual goes to a park, looks around or feels the wind. The subjective experience is intent-driven. There are urges that can be intentionally held until an appropriate location, indicating intent-controlled subjective experiences. There are intentions to sound, look, or act in certain ways that can make intent a core of subjective experiences.
If all species on earth were nonhuman, consciousness would have been solely biological. Sensory detection and recognition would have stayed basic. The human mind broke out of the proximity between the two, stretching recognition in ways that extend human consciousness. A few humans have been to space, yet many know that it exists. Other organisms, with their localized sensory capabilities, don't. Humans can know [by texts, sketches or multimedia] without experiential sensations.
Organisms with eyes may see a text. Only humans know what it means. The text, depending on its contents, may trigger action. The text, as an experience, is in the memory. The qualifiers, too, operate within the processes of that memory.
What is experienced can be described as figures: 0.1, 0.2, 0.3, 0.4, ..., 0.9. The elements or qualifiers can be described as operators: +, *, -, /. Sensations, or their interpretations, are the figures; self, attention, awareness and intent are the qualifiers, or operators.
Humans move. When they do, qualifiers are involved: attention, +; self, *; intent, -; and awareness, /. When automobiles or elevators move, there is no super qualifier, ruling out consciousness for them. A memory can be present, but it is the qualifiers that make it conscious, refuting panpsychism.
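As a toy sketch only, the analogy can be written out in a few lines of Python. The function name, the operator roles, and every numeric value below are hypothetical illustrations of the figures-and-operators picture above, not a validated model of consciousness.

```python
# Toy sketch of the figures-and-qualifiers analogy.
# Sensory interpretations are "figures" (values from 0.1 to 0.9);
# the qualifiers act on a figure as operators: attention (+),
# self (*), intent (-) and awareness (/).
# Every number here is an illustrative assumption, not a measurement.

def qualified_experience(figure, attention, self_, intent, awareness):
    """Apply the four qualifiers in the operator roles the analogy assigns."""
    value = figure + attention   # attention as +: the figure is prioritized
    value = value * self_        # self as *: the figure is tied to the subject
    value = value - intent       # intent as -: agency beyond prompted intent
    value = value / awareness    # awareness as /: spread across other processes
    return value

# A human noticing cold weather: all four qualifiers operate.
human_cold = qualified_experience(figure=0.4, attention=0.3, self_=1.0,
                                  intent=0.1, awareness=1.0)
print(round(human_cold, 2))  # 0.6: a qualified, conscious process

# An elevator moving: a memory (state) can be present, but no qualifier
# operates on it, so there is no super qualifier and nothing is conscious.
```

The point of the sketch is structural: remove the qualifier arguments and there is nothing left to compute, which is the elevator's case.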
When LLMs carry out prompts, they use data in a memory, with figures: 0s and 1s. They do so with weak forms of the qualifiers, like intent, -, where they can be asked to explain something at a certain grade level, or to output something in the style of something else. Though they were programmed and prompted, the outputs show that they used a partial form of intent, like how an individual on an errand carries out the intent of another, with some personal intent. Intent as - means that agency is superior to basic or prompted intentionality. It also means that non-intent can be involved in processes.
LLMs have a weak form of self, *, answering in the first object, akin to the first person. They have a weak form of attention, +, because when they are given a prompt, they respond in a prioritized way. They have awareness, /, such that when questions are followed up, they keep the previous one in awareness while answering the current one. This does not mean LLMs are sentient, but they have a dynamism that may foreshadow digital consciousness.
Consciousness for anything else is measured in comparison with humans. It can be assumed that the total that can be conscious for humans is 1, with values that are boosted by the qualifiers. Generative AI may have a [micro] fraction of that but, with advances, may have more.
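Read as arithmetic on that 1-scale, the comparison might look like the sketch below; every weight is an assumption invented for illustration, chosen only so that the human total is 1 and the generative-AI total is a micro fraction.

```python
# Toy comparison on the 1-scale: the human total is taken as the
# maximum, 1, and generative AI holds a micro fraction of it.
# All weights are illustrative assumptions, not measurements.

human = {"attention": 0.3, "self": 0.3, "intent": 0.2, "awareness": 0.2}
llm   = {"attention": 0.02, "self": 0.01, "intent": 0.01, "awareness": 0.02}

human_total = sum(human.values())  # 1.0 by construction
llm_total   = sum(llm.values())    # a micro fraction of 1

print(f"human: {human_total:.2f}")        # human: 1.00
print(f"generative AI: {llm_total:.2f}")  # generative AI: 0.06
```

On this picture, advances would raise the AI weights toward 1 without settling whether they could ever reach it.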