The company says its mission is to make building AI models less like alchemy and more like a science. Sure, LLMs like ChatGPT and Gemini can do amazing things. But nobody knows exactly how or why they work, and that can make it hard to fix their flaws or block unwanted behaviors.
"We saw this widening gap between how well models were understood and just how widely they were being deployed," Goodfire's CEO, Eric Ho, tells MIT Technology Review in an exclusive interview ahead of Silico's launch. "I think the dominant feeling in every single major frontier lab today is that you just need more scale, more compute, more data, and then you get AGI [artificial general intelligence] and nothing else matters. And we're saying no, there's a better way."
Goodfire is one of a small handful of companies, along with industry leaders Anthropic, OpenAI, and Google DeepMind, pioneering a technique known as mechanistic interpretability, which aims to understand what goes on inside an AI model when it carries out a task by mapping its neurons and the pathways between them. (MIT Technology Review picked mechanistic interpretability as one of its 10 Breakthrough Technologies of 2026.)
Goodfire wants to use this approach not only to audit models (that is, to study ones that have already been trained) but to help design them in the first place.
"We want to remove the trial and error and turn training models into precision engineering," says Ho. "And that means exposing the knobs and dials so that you can actually use them during the training process."
Goodfire has already used its techniques and tools to tweak the behaviors of LLMs, for example by reducing the number of hallucinations they produce. With Silico, the company is now packaging up many of those in-house techniques and shipping them as a product.
The tool uses agents to automate much of the complex work. "Agents are now strong enough to do a lot of the interpretability work that we were doing using humans," says Ho. "That was sort of the gap that needed to be bridged before this was actually a viable platform that customers could use themselves."
Leonard Bereska, a researcher at the University of Amsterdam who has worked on mechanistic interpretability, thinks Silico looks like a useful tool. But he pushes back on Goodfire's loftier aspirations. "In reality, they're adding precision to the alchemy," he says. "Calling it engineering makes it sound more principled than it is."
