For the past 40 years, researchers have studied how the V4 visual area represents 2D shapes. Now a new paper in Current Biology details how neurons in this area of the brain can also represent fragments of 3D shapes. This means that the detection of these 3D shapes – spheres, shafts, hollows, bumps – happens in the early stages of natural object vision.
This discovery was soon followed by a new study from Johns Hopkins University, which found that artificial neurons show almost identical responses. The researchers centered their work on AlexNet, a deep artificial neural network built for computer vision – that is, its purpose is to recognize visual objects.
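To make the comparison concrete, here is an illustrative sketch – not the study's actual method – of the kind of early-stage response the article describes: a simple detector that reacts to the shading gradients that signal a 3D bump. Everything in it (the rendered bump, the light direction, the "gradient energy" measure) is a simplified stand-in, written with NumPy.

```python
import numpy as np

# Illustrative sketch only (not the study's method): early visual stages in
# both brains and CNNs respond to local shading gradients, a key image cue
# for 3D shape fragments such as bumps and hollows.

def shaded_bump(size=32):
    """Render a crude luminance image of a bump lit from one side."""
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    height = np.sqrt(np.clip(1 - xs**2 - ys**2, 0, 1))  # a hemisphere
    gy, gx = np.gradient(height)
    # Simplified shading: brightness tracks the slope facing the light
    return (gx + gy) / np.sqrt(2)

def gradient_energy(img):
    """A crude 'early-stage' detector: total shading-gradient energy."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx**2 + gy**2))

bump = shaded_bump()
flat = np.zeros_like(bump)  # a featureless surface carries no 3D shape cue
print(gradient_energy(bump) > gradient_energy(flat))  # True
```

The point of the toy detector is only that an image of a curved surface produces a strong response where a flat surface produces none – the same qualitative behavior reported for both V4 neurons and AlexNet units.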
The results clearly show that artificial and natural neural networks process the 3D shapes in visual images in the same way. Detecting these shapes at an early stage of vision helps with the interpretation of solid objects in the real world – meaning there are striking similarities in how our brains and computers see.
A Spooky Correspondence, with Foundation
Ed Connor, a professor of neuroscience, director of the Zanvyl Krieger Mind/Brain Institute, and head of the research, said he was more than surprised by the results. He called it a "spooky correspondence" between artificial intelligence and the brain – between something designed by scientists and something shaped by a lifetime of learning and evolution.
Despite Connor's amazement, this breakthrough doesn't come without foundation. AlexNet and similar AI networks were partly designed based on the brain's visual networks, so close similarities are not a total surprise. The most promising present-day models for understanding how the brain works are precisely these AI networks. At the same time, the best strategies for bringing AI closer to natural intelligence lie in the brain itself – meaning the correlation was there all along.
Replicating human vision is one of AI's long-established challenges, and scientists have been working toward object recognition for years. Today's major gains are largely built on high-capacity graphics processing units (GPUs), originally developed for gaming.
With global gaming revenue exceeding $150 billion last year, it's no wonder that the industry invests in research and the latest technology to keep up with the evolving demands of customers. Good graphics are no longer enough for gamers; they want games to resemble reality as closely as possible, so developers are using AI in all kinds of games, from slots to RPGs.
AI already plays a crucial role in game design, driving astonishing advances in 3D visualization techniques. The visually rich games we have today could hardly have been imagined just a decade ago.
The best example is Nvidia, a corporation that pioneered bringing AI to computer graphics. It has found a way for AI to produce realistic human facial animations as effectively as human artists, but in a fraction of the time. AI can also simulate the interaction of light with these surfaces, leading to even greater realism. And we're talking far beyond facial expressions – simulating the light reflecting from various surfaces in a virtual scene (ray tracing), combined with AI, leads to much quicker and more precise rendering of 3D graphics, turning whole digital worlds into lifelike images on our screens.
So the gaming industry has been mining the potential of computer vision for years, but there is no doubt that Johns Hopkins University's discovery will take those efforts to a whole different level. However, entertainment is not the only industry that stands to prosper from AI 3D object recognition in the modern world.
Shortcuts & Automation
Just imagine the revolution AI 3D object recognition will bring to architecture and interior design. It could create an incredible database of everything ever built and designed, providing numerous shortcuts in the creation process.
For example, haven't we all, at some point, wanted to find a lamp, table, or sofa based only on a picture we saw or took somewhere? There have been some successful attempts to build an image search engine for 3D objects in the past, but the process was extremely painstaking. The biggest problem is that the contours and shapes of objects depend on the viewpoint, so the same object can look completely different from different perspectives. With AI registering 3D objects the way a human brain does, this limitation of computer vision becomes a thing of the past.
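A minimal sketch of how such a search might sidestep the viewpoint problem: instead of matching raw contours (which change as the camera moves), each image is reduced to a viewpoint-insensitive descriptor and compared by similarity. Everything below is hypothetical – a real system would use a learned, 3D-aware embedding rather than the toy intensity histogram used here, and the "lamp" and "sofa" images are random stand-ins.

```python
import numpy as np

def descriptor(img, bins=16):
    """Reduce an image to a rotation-insensitive intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return hist / (np.linalg.norm(hist) + 1e-9)  # unit length for cosine

def cosine(a, b):
    """Cosine similarity between two unit-normalized descriptors."""
    return float(a @ b)

rng = np.random.default_rng(0)
lamp_view1 = rng.beta(2, 5, size=(64, 64))  # stand-in "photo" of a lamp
lamp_view2 = np.rot90(lamp_view1)           # the same lamp, new viewpoint
sofa = rng.beta(5, 2, size=(64, 64))        # a different object entirely

query = descriptor(lamp_view1)
same = cosine(query, descriptor(lamp_view2))   # high: same object, rotated
other = cosine(query, descriptor(sofa))        # lower: different object
print(same > other)  # True
```

Rotating the image leaves its histogram unchanged, so both views of the "lamp" map to nearby descriptors while the "sofa" does not – the viewpoint-invariance property the paragraph describes, in miniature.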
To illustrate automation, consider self-driving cars. Engineers have been working on 3D object detection for years, using both lidar-based and camera-based approaches. Lidar-based detection has proven more accurate, but the form in which its data is delivered increases processing time. The perception components of self-driving cars need to detect real-world objects in real time, which they'll finally be able to do with AI 3D object recognition that works like a human brain.
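The "form in which data is delivered" problem is easy to see in a sketch: lidar produces raw 3D point clouds whose sheer point count drives up processing time, so perception pipelines commonly begin by downsampling before any detection runs. Below is a minimal, assumption-laden example using a voxel grid – the simulated scan, its scale, and the cell size are all made up for illustration.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Reduce an (N, 3) point cloud to one representative point per
    occupied voxel of side length `voxel` (in the same units as points)."""
    keys = np.floor(points / voxel).astype(np.int64)  # 3D grid cell per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]  # keep the first point seen in each cell

rng = np.random.default_rng(1)
# A dense simulated scan: 10,000 points scattered in a 10 m cube
cloud = rng.uniform(0, 10, size=(10_000, 3))
small = voxel_downsample(cloud, voxel=1.0)
print(len(cloud), "->", len(small))  # far fewer points to process
```

Cutting the point count this way is what lets later, heavier detection stages keep up with real-time constraints; the trade-off is the loss of fine detail inside each cell.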
As we can see, the foundation for this discovery has been around for quite a while, but the research by Ed Connor's team has certainly opened new horizons. When you consider that GPUs are already changing the face of medicine, it's obvious that improvements will follow in every aspect of our lives.
Now is the time to leverage this correlation between artificial and natural intelligence and continue onward.