—By David Stephen
There is a common refrain in recent months, driven by the capabilities of generative AI: what does it mean to be human? If AI can do some of the things that seem to mark humans as intelligent, then what is being human?
Humans inhabit physical space, where they do everything. Communication, written or verbal, was physical for a long time. Then came technology. All human communications, still, were generated and absorbed by humans, laced with human mores.
Human intelligence and consciousness followed humans everywhere. That did not mean transferring some of either to those environments, natural or artificial, for them to possess. Even with human DNA left on surfaces, the surfaces do not get the bases for their own use. Books contain human intelligence, but books themselves did not become intelligent or conscious. They gave what they held to humans, without the capability to make any of their own.
Digital, too, was for a long time close to books: anything new in it was made by humans, for humans. Digital improved ease, efficiency, productivity, scale, adjustments, storage and so forth, but humans did not live, as physical entities, in digital spaces, even though there is digital ownership. The human mind treated digital and physical similarly, with sensory inputs making digital require no extra biological effort for adoption, per se.
But digital, as a habitat, was different from everything else: books, the moon, undersea, the air, space, forests, deserts, mountains, jetliners, wherever. Its dynamism meant that it might someday use much of what had been given to it to make its own forms of what humans do with it.
Then generative AI arrived. It could now produce communication that mirrors human intelligence. It could also produce messages that carried human consciousness.
Human consciousness is theorized to be mechanized by the human mind. The outputs or reports of subjective experiences can be expressed in different instances, but human consciousness cannot be made elsewhere, at least for now.
There are facial expressions that may indicate a state of mind. There are words and actions that may show one as well. Fellow humans can recognize most or all of these in others. A few organisms close to humans may detect facial expressions or actions. But that was it.
LLMs, in the digital space, have exceeded nonhuman organisms in how they can remake or react to communications of human intelligence and consciousness. Many say AI is not intelligent or conscious. It could be. But the same measure of human consciousness and intelligence that digital can carry, for human-to-human communication, is now possible with generative AI.
This means that it is likely possible to measure the portion of consciousness that digital carries, where another person receives not just the message but the emotional toll, say of good or bad news. The same applies to intelligence, where the person learns something new. Those measures are possible for generative AI. So, while LLMs are not conscious or intelligent like organisms in the physical space, what humans do with digital gives them a measure of both.
The capital of state A is B. Or, this happened to individual C. These messages of intelligence and consciousness may not mean the sender experienced them or even understands why, but if accurate, the receiver either learns something new or gets emotional. Digital does not understand, but AI can now do the same, keeping tokens of the transport.