We train models that write code.
Morph does continued pre-training and owns its own RL stack and inference engines, end to end. Every model we ship is trained, evaluated, and deployed in-house.
Many have said the future is small, specialized models. We only partially agree.
The future is small, specialized models with specialized inference, and we're building both vertically. Our models don't just generate code. They apply edits, search codebases, compress context, and review PRs. Each task gets its own model, its own training data, its own evaluation suite.
The result: sub-second edit application at 10,500 tok/s. Codebase search in under 6 seconds. Context compaction that preserves every identifier. These aren't benchmarks on a leaderboard. They're production numbers from teams shipping code every day.
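To make the compaction guarantee concrete: "preserves every identifier" means no name from the original source goes missing in the compacted context. The sketch below is purely illustrative, not Morph's implementation — the function names (`identifiers`, `preserves_identifiers`) and the whole-word check are our own assumptions, using Python's standard `ast` module:

```python
import ast
import re

def identifiers(source: str) -> set[str]:
    """Collect every variable, function, class, argument, and attribute name in a Python file."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            names.add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.add(node.name)
        elif isinstance(node, ast.Attribute):
            names.add(node.attr)
        elif isinstance(node, ast.arg):
            names.add(node.arg)
    return names

def preserves_identifiers(original: str, compacted: str) -> bool:
    """True iff every identifier from `original` still appears as a whole word in `compacted`."""
    return all(re.search(rf"\b{re.escape(name)}\b", compacted)
               for name in identifiers(original))
```

A compaction that summarizes `def fetch_user(user_id): return db.query(user_id)` as `fetch_user(user_id) -> db.query result` passes the check; one that drops `db` or `query` fails it.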
Work with us.
If you're a passionate ML engineer looking to work on a very small team and push to production every day, we want to hear from you.
Y Combinator S23