Artificial intelligence (AI) has undoubtedly revolutionised our lives and work processes. The advancements in large-scale machine learning and natural language processing have made operations more efficient and effective. Technologies like ChatGPT have even brought about a semblance of genuine “intelligence” that was once confined to the realms of science fiction.
However, there remains a critical challenge in bridging the gap between technical brilliance and practical application in the physical world. Although AI has made significant virtual strides, the integration of AI-powered general-purpose robots into the physical environment is still a formidable task. This article delves into the reasons behind this disparity and explores potential solutions.
One of the primary obstacles preventing the deployment of AI in real-world scenarios is energy efficiency. Break it down and a robot is essentially a self-propelled laptop. Anyone who has used a laptop on the go knows that even the most advanced models can barely operate for a few hours without a recharge; the screen and internal processes consume significant amounts of energy. A robot, however, must also navigate and interact with the physical environment, which for safety reasons demands freedom of movement without a tethered power connection. Optimal performance would therefore require battery life well beyond the roughly 90 minutes that is standard today.
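The arithmetic behind that constraint is simple: runtime is just on-board energy divided by average power draw. The figures below are purely illustrative assumptions, not measurements from any real robot, but they show why today's batteries translate into runtimes of an hour or two at best.

```python
# Illustrative only: runtime = battery energy / average power draw.
# Both numbers are hypothetical, chosen to show the order of magnitude.
battery_wh = 500.0   # assumed on-board battery capacity, watt-hours
avg_draw_w = 300.0   # assumed average draw: compute, sensors, actuation
runtime_h = battery_wh / avg_draw_w
print(round(runtime_h * 60), "minutes")  # -> 100 minutes
```

Doubling runtime means either doubling the battery (and hauling the extra weight, which itself costs energy) or halving the draw, which is why mechanical and computational efficiency dominate the discussion below.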
Presently, the mechanics of robots and autonomous devices are simply not energy-efficient enough to sustain prolonged operation: they require regular and extended charging periods to perform optimally. The first generation of industrial robots is already employed in manufacturing, but those machines remain tethered to a power source. Existing general-purpose robots, like Sanctuary AI's Phoenix, a humanoid driven by the company's Carbon AI control system, are still clunky and expensive. We will likely see five to ten iterations before a truly independent model emerges, capable of manoeuvring freely and performing tasks on its own.
To bridge this gap between AI and effective integration into the physical world, we must begin with smaller and simpler applications. Cobots (collaborative robots) are designed to perform a single, well-defined task: self-driving wheelchairs for physically impaired individuals, for example, or robots that climb building facades to clean windows. Autonomous technology can also be applied to more complex but still narrowly focused jobs, such as smoke-diving robots that search buildings for people or drones that repair faults in power lines. The key is the performance of a single duty, not only for energy efficiency but also to guarantee the highest standard of work.
Bringing AI into the physical environment poses additional challenges rooted primarily in complexity. Real-world navigation requires immense cognitive processing even for human beings, and explaining this intricate process to a robot is no easy feat. Sensors, however, offer a viable way in.
By employing 3D sensors like depth cameras to capture the geometry and texture of physical objects, AI algorithms can build a better model of the physical world. That model is crucial for reasoning about spatial relationships, tracking how objects move within a room, and interacting with humans; without it, robots would struggle to navigate and operate safely and efficiently. The development of AI-powered mapping and localisation systems, which use sensors and cameras to generate maps of the physical environment and track object movement, is therefore a pivotal step toward genuinely autonomous robotic assistants.
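The first step in most such pipelines is back-projecting a depth image into a 3D point cloud using the standard pinhole camera model. The sketch below is a minimal illustration of that step, not any particular vendor's API; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the 640x480 resolution are assumed values typical of consumer depth cameras.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into 3D points using the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (N, 3) array of XYZ points; drop invalid zero-depth pixels.
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Hypothetical example: a flat wall 2 m away seen by a 640x480 depth camera.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): one 3D point per pixel
```

Real mapping systems feed clouds like this into SLAM (simultaneous localisation and mapping) pipelines, which align successive frames to build a consistent map while estimating the robot's own position within it.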
Mechanical efficiency is another critical area of focus. By improving the way robots move, potentially by utilising artificial muscles and joints to mimic human motion, we can reduce the energy required to power them. However, this humanoid technology is still far from being fully realised.
In the tech industry, the pursuit of intelligent robots has been an ambition for decades. Nevertheless, a more nuanced and evolutionary approach is now necessary. Rather than relying solely on overarching solutions provided by industry giants, it is essential to adopt a methodology that fosters collaboration among specialised startup companies possessing the required expertise and knowledge.
These startups can develop individual components that address the multiple challenges faced by developers. Only once this collaborative effort occurs can we expect to create the ultimate efficient, functional, and affordable general-purpose robot.
So, what’s the future?
While AI has revolutionised various aspects of our lives, integrating it into the physical world poses unique challenges. Energy efficiency remains a critical barrier: robots must operate for prolonged periods without frequent recharges. By focusing on smaller applications and enhancing AI's understanding of the physical environment through sensors, however, we can gradually bridge the gap between AI and real-world deployment. Improving mechanical efficiency and fostering collaboration among specialised startups are also crucial to developing efficient, functional, and affordable general-purpose robots. As we tackle these barriers, AI will continue to redefine the possibilities of the physical world and reshape our future.
Developing AI-powered general-purpose robots that can effectively navigate and interact with the physical world requires a multidisciplinary approach. It is not solely a matter of improving energy efficiency and enhancing mechanical capabilities. Other factors such as safety, ethics, and social acceptance also come into play. As robots become more integrated into our daily lives, it becomes crucial to address concerns regarding their impact on human safety and wellbeing. Ensuring that AI-powered robots are programmed to prioritise human safety and adhere to ethical guidelines is essential. Additionally, fostering public trust and acceptance is crucial for the successful integration of robots into various industries and environments.
Furthermore, regulatory frameworks and standards need to be developed to govern the deployment and operation of AI-powered robots in the physical world. As these robots become more autonomous and capable of performing complex tasks, it becomes imperative to establish guidelines that ensure accountability, transparency, and responsible use. Regulatory bodies, industry experts, and policymakers must collaborate to define standards that address issues such as privacy, data security, liability, and the potential impact of AI on employment. By proactively addressing these concerns and establishing a robust regulatory framework, we can create an environment that fosters innovation while safeguarding the interests of individuals and society.
With a multidisciplinary approach and a proactive stance towards regulation, AI-powered robots can overcome the barriers of the physical world and reshape industries and societies in profound ways.
Jonas Angleflod, Group CEO at Theories Group, a startup accelerator on a mission to fund and build startups that automate the monetisation of high-intent digital traffic using technology and AI.