Goodfire raises $150M Series B at $1.25B valuation for AI interpretability tools

San Francisco-based Goodfire closed a $150M Series B led by B Capital, valuing the AI interpretability startup at $1.25B. The company builds tools that decode neural network internals, serving clients including Microsoft and Mayo Clinic. Total raised since its 2024 founding now sits at $209M.

The Deal

Goodfire announced a $150M Series B on February 5, 2026, led by B Capital with participation from existing investors Menlo Ventures, Lightspeed Venture Partners, and Anthropic. The round values the San Francisco startup at $1.25B and brings total capital raised to $209M across three rounds in under two years.

The company previously raised $7M in seed funding (August 2024, led by Lightspeed) and approximately $50M in Series A (2025, led by Menlo Ventures).

What They Do

Goodfire develops mechanistic interpretability tools that decode how neural networks make decisions. Its Ember platform lets teams debug models, edit internal behaviors, and improve performance by surfacing what's happening inside the black box.

This matters because enterprise AI deployments increasingly require explanations. SHAP values and LIME have become table stakes for compliance, but mechanistic interpretability goes deeper: understanding the model's internal reasoning, not just input-output correlations. The distinction matters for debugging edge cases and retraining models with specific behavioral improvements.
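For readers less familiar with the post-hoc side, here is a minimal sketch of the kind of SHAP workflow that has become standard, assuming the open-source shap and scikit-learn libraries and a toy dataset standing in for a real compliance pipeline:

```python
# A minimal post-hoc attribution pass: a hypothetical stand-in for a compliance check.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP wraps the already-trained model and scores how each input feature pushed
# a single prediction up or down -- it never inspects the model's internals.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[:1])
```

The output is a per-feature contribution score for one prediction; nothing about the model's internal computation is exposed, which is the gap mechanistic interpretability targets.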

The company counts Microsoft, Mayo Clinic, and Arc Institute as clients. The team includes former researchers from OpenAI and Google DeepMind.

The Context

Mechanistic interpretability sits adjacent to traditional explainability methods. Where SHAP calculates feature importance post-hoc, mechanistic interpretability aims to understand the model's internal computations. For production ML teams, this means potentially lower debugging costs and more targeted retraining strategies.
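To make the contrast concrete, the sketch below reads a network's internal activations rather than scoring its inputs and outputs. This is plain PyTorch forward hooks on a toy model, illustrative only, and not Goodfire's Ember API:

```python
# Illustrative only: not Goodfire's Ember API. Capture a hidden layer's activations.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
activations = {}

def capture(name):
    def hook(module, inputs, output):
        # Record the layer's internal state, not just the model's final output.
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(capture("hidden_relu"))

x = torch.randn(4, 16)
logits = model(x)
print(activations["hidden_relu"].shape)  # torch.Size([4, 32]): internal activations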

Anthropic's participation as an investor is notable. The AI safety lab has published extensively on mechanistic interpretability techniques and views the approach as critical for safely deploying frontier models. Anthropic CEO Dario Amodei has publicly supported Goodfire's mission.

What's Next

Goodfire plans to use the capital for compute infrastructure (likely GPU clusters for running interpretability analyses at scale), expanding engineering headcount, and supporting enterprise deployments. The company is structured as a public benefit corporation, which constrains profit maximization in favor of stated AI safety goals.

The $1.25B valuation reflects investor conviction that interpretability tools will become mandatory infrastructure as model capabilities increase. Whether that plays out depends on regulatory requirements and how many production incidents trace back to unexplainable model behavior.

History suggests AI safety tooling gets traction after high-profile failures, not before. Goodfire is betting enterprises will pay to avoid being the case study.