Last updated: October 9th, 2025 at 17:56 UTC+02:00
Despite being a seven-million-parameter model, it outperforms many AI models that are 10,000 times larger.
Samsung has dedicated divisions that focus solely on developing new, groundbreaking technologies. One such technology is an AI model recently developed by the company's researchers. It outperforms much larger AI models from bigger brands while being far more efficient.
Alexia Jolicoeur-Martineau, Senior AI Researcher at Samsung’s Advanced Institute of Technology (SAIT), has developed a small AI model called the Tiny Recursion Model (TRM) (via VentureBeat). At just seven million parameters, TRM's neural network is significantly smaller than those of other models. Despite its size, TRM outperforms many cutting-edge large AI models, such as OpenAI’s o3-mini and Google’s Gemini 2.5 Pro, on some of the toughest benchmarks.
So, how does the AI model achieve such significantly better performance? It uses a recursive reasoning approach. In the words of Jolicoeur-Martineau, a model that is “pretrained from scratch, recursing on itself and updating its answers over time, can achieve a lot without breaking the bank.”
It improves on the technique introduced by the Hierarchical Reasoning Model (HRM) earlier this year. HRM uses two cooperating networks, one operating at a higher frequency and the other at a lower frequency. Jolicoeur-Martineau simplified this design by stripping away the dual-network setup and using a single two-layer model that keeps refining its own output (predictions) until the answer is stable enough. A lightweight halting mechanism decides when to stop the refinement.
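To make that refine-until-stable loop concrete, here is a minimal PyTorch sketch. It is not Samsung's actual TRM code; the class name, layer sizes, step limit, and halting rule below are illustrative assumptions, and the official implementation lives in the GitHub repository mentioned below.

```python
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    """Illustrative sketch of recursive refinement with a halting head.

    Not the official TRM implementation; all names and sizes here
    are assumptions made for the sake of the example.
    """

    def __init__(self, dim=64, max_steps=16, halt_threshold=0.9):
        super().__init__()
        # A single small network; the article describes a two-layer model
        self.refine = nn.Sequential(
            nn.Linear(dim * 2, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )
        # Lightweight halting head that scores whether the answer is stable enough
        self.halt = nn.Linear(dim, 1)
        self.max_steps = max_steps
        self.halt_threshold = halt_threshold

    def forward(self, x):
        # Start from a blank answer and repeatedly refine it against the input
        answer = torch.zeros_like(x)
        for _ in range(self.max_steps):
            answer = self.refine(torch.cat([x, answer], dim=-1))
            # Halting mechanism: stop early once the model is confident enough
            if torch.sigmoid(self.halt(answer)).mean() > self.halt_threshold:
                break
        return answer

# Example: recursively refine a random 64-dimensional input representation
model = TinyRecursiveSketch()
print(model(torch.randn(1, 64)).shape)  # torch.Size([1, 64])
```

The key point of the design is that the same tiny network is applied repeatedly, so depth of reasoning comes from iteration rather than from parameter count.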
TRM has a very small footprint, which means it doesn't need hardware as powerful as other models require. Its GitHub repository contains everything needed to reproduce the published results, including full training and evaluation scripts, dataset builders, and reference configurations. It also mentions that a $7,500 Nvidia L40S GPU was used for Sudoku training and an Nvidia H100 setup for the ARC-AGI experiments.
The code of this AI model is available on GitHub under the MIT License, which means anyone, including companies, can take it, use it, and modify it to suit their needs. Its main advantages are its small size and modest computing needs. It is an important development that defies the common philosophy that scale is all you need.
Asif is a computer engineer turned technology journalist. He has been using Samsung phones since 2004, and his current smartphone is the Galaxy S21 Ultra. He loves headphones, mechanical keyboards, and PC hardware. When not writing about technology, he likes watching crime and science fiction movies and TV shows.