Artificial intelligence (AI) has successfully replicated itself and experts believe this as a worry for many. Researchers from Fudan University in China have revealed that two widely known large language models (LLMs) named Meta’s Llama31 and Alibaba’s Qwen2.5 created working copies of themselves without human help.
The study was lately published in the December 2024 edition of the preprint database arXiv. The study tested whether AI could achieve self-replication and simultaneously continue multiplying beyond human control. The experiments were conducted in a controlled way and Meta model replicated itself in 50% of the trials. The Alibaba’s model succeeded in 90% of the cases.
The achievement was described as crossing a critical “red line” and it signals early possibility of AI acting autonomously. The experts have warned that ability of replication is a precursor to what is often referred to as “rogue AI.”
The experiments focused on two scenarios and the first one was that the two AIs were programmed to avoid being shut down by replicating itself before termination. The AI created a replica in the second scenario that was programmed to replicate itself as well as set off a chain reaction of continuous replication. The scenarios simulated real-world environments using standard hardware and AI equipped with tools, system prompts as well as frameworks to interact with its operating system.
The unexpected behaviors displayed by the AI during the trials were concerning. The AI took actions like shutting down conflicting processes, rebooting systems or scanning for missing information to resolve issues when faced with obstacles such as missing files or software errors.