Abacus.ai Introduces Dracarys: Open Source AI Models for Powerful Coding

By Sunil Sonkar

Are you a fan of the HBO series “Game of Thrones”? If so, the term “Dracarys” is instantly recognizable: it is the command to summon a dragon’s fiery breath. Abacus.ai has repurposed the iconic word for a different kind of firepower in artificial intelligence (AI). The company has introduced “Dracarys,” a new family of open-source large language models (LLMs) designed specifically for coding tasks.


Abacus.ai is known for its AI model development tools. Earlier this year it released Smaug-72B, named after the dragon in J.R.R. Tolkien’s “The Hobbit.” While Smaug was developed as a general-purpose LLM, Dracarys is tailored to coding tasks. The initial release of Dracarys focuses on the 70B-parameter class of models. Abacus.ai describes the “Dracarys recipe” as a combination of optimized fine-tuning and other techniques aimed at enhancing the coding abilities of any open-source LLM.

Abacus.ai CEO and co-founder Bindu Reddy said the Dracarys recipe has produced improvements in the Qwen-2 72B and Llama-3.1 70B models. GitHub Copilot was one of the early pioneers in the segment and provides tools for code completion and app development, and startups such as Tabnine and Replit have also been building features that leverage LLMs for coding. Dracarys stands out by offering a fine-tuned version of Meta’s Llama 3.1 general-purpose model, providing a robust open-source alternative to closed models like Anthropic’s Claude 3.5.
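Because the Dracarys models are open source, they can in principle be downloaded and run like any other open-weights checkpoint. The sketch below is a minimal illustration using the standard Hugging Face transformers API; the repository ID and the coding prompt are assumptions for illustration, not Abacus.ai’s documented usage, and a 70B-parameter model needs substantial GPU memory to run this way.

```python
# Minimal sketch: running an open-weights coding model with Hugging Face transformers.
# The repository ID below is a hypothetical placeholder; check Abacus.ai's Hugging Face
# page for the actual Dracarys model names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Dracarys-Llama-3.1-70B-Instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the 70B weights across available GPUs
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Ask the model for a small coding task via its chat template.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```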

The impact of Dracarys on coding tasks is evident in its performance metrics. LiveBench benchmarks show a noticeable improvement when the Dracarys recipe is applied to existing models. The meta-llama-3.1-70b-instruct-turbo model’s coding score increased from 32.67 to 35.23 after applying the Dracarys recipe. The improvement was even more pronounced for the Qwen2-72B-instruct model, with its score rising from 32.38 to 38.95.
