Researchers at the Singapore University of Technology and Design (SUTD) have reached a significant milestone: using reinforcement learning, they trained an AI capable of outperforming champion Street Fighter players. Inspired by AI successes in games such as Chess and Go, the SUTD team focused on a game known for its intricate combat mechanics.
Their primary objective was to develop an AI that could not just match but surpass the skills of human players. The team engineered unique movement-design software powered by reinforcement learning, a type of machine learning in which an algorithm improves through trial, error and feedback. The AI was tasked with learning and refining its movements through battles against the game's built-in opponents.
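The article does not describe SUTD's training method in detail, but the trial-and-error loop it refers to is commonly implemented with value-based methods such as tabular Q-learning. The sketch below is a generic illustration on a hypothetical toy "fight" environment, not the researchers' actual system: the agent learns, from reward feedback alone, that repeatedly advancing wins the bout.

```python
import random

random.seed(0)

# Minimal tabular Q-learning sketch (illustrative only; not SUTD's system).
# Hypothetical toy environment: states 0..3, reaching state 3 wins (+1).
def step(state, action):
    # Action 1 advances toward the win state; action 0 stalls
    # with a small penalty.
    next_state = min(state + 1, 3) if action == 1 else state
    reward = 1.0 if next_state == 3 else -0.01
    return next_state, reward, next_state == 3

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2):
    q = [[0.0, 0.0] for _ in range(4)]  # Q-table: 4 states x 2 actions
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy exploration: occasionally try a random action.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] >= q[state][1] else 1
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward
            # observed reward plus discounted best future value.
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next
                                         - q[state][action])
            state = next_state
    return q

q = train()
# After training, advancing (action 1) is preferred in every
# non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(3)))  # → True
```

The same loop scales conceptually to a fighting game: states become frames of game information, actions become moves, and winning exchanges supply the reward signal.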
The results were remarkable. Desmond Loke, associate professor at SUTD and lead investigator, said the achievement has far-reaching implications for diverse fields, including robotics, autonomous vehicles, collaborative robots and aerial drones.
The AI-driven approach also stands out for its energy efficiency: it consumes only 26 femtojoules (fJ) of hardware energy, 141 times less than existing GPU systems. That efficiency opens the door to ultra-low-energy movement design. The researchers credit the AI's success to decay-based algorithms, which enabled efficient and effective movements.
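The article does not specify what its "decay-based algorithms" entail. One common use of decay in reinforcement learning, shown below purely as a hypothetical illustration, is an exponentially decaying exploration rate that shifts the agent from random experimentation toward exploiting what it has learned.

```python
# Hypothetical illustration of a decay schedule; the article gives no
# detail on SUTD's decay-based algorithms, so this only demonstrates the
# general idea of decaying a quantity (here, an exploration rate).
def decayed_epsilon(step, start=1.0, end=0.05, decay_rate=0.995):
    # Exponential decay from `start` toward a floor of `end`.
    return max(end, start * (decay_rate ** step))

print(round(decayed_epsilon(0), 3))     # → 1.0 (fully exploratory at first)
print(round(decayed_epsilon(1000), 3))  # → 0.05 (mostly exploiting later)
```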
The researchers call the approach "effective movement design." Trained with this technique, the AI learned quickly and demonstrated exceptional in-game agility and decision-making. The team foresees a future in which such technology enables movements and actions previously considered impossible.