JetBrains Releases Mellum: A Purpose-Built AI Coding Model Goes Open Source

JetBrains just open-sourced Mellum—an AI model built *only* for code completion. With 4B parameters and 4T tokens of training data, it’s focused, fine-tunable, and not trying to be ChatGPT. Just clean, sharp AI for developers.


JetBrains, the company behind tools like IntelliJ IDEA and PyCharm, has released its first open-source AI model for code completion. The model, called Mellum, is now available on Hugging Face under the Apache 2.0 license, reinforcing JetBrains’ commitment to developer-first, focused AI tooling.


A Model Made for Code Completion

Originally launched in 2023 to power JetBrains’ internal tools, Mellum is now open to the public.
It’s a 4-billion-parameter model, trained on over 4 trillion tokens—that’s more than 120 billion lines of code.

Built specifically for code completion, Mellum helps predict the next line or block of code based on the context provided, making it ideal for tasks like:

  • Intelligent code suggestions
  • Code understanding research
  • AI-powered dev tools
  • Educational programming aids
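
To make that concrete, here is a minimal completion sketch using the Hugging Face transformers library. The repo ID `JetBrains/Mellum-4b-base` and the plain-text prompt format are assumptions, not confirmed details from JetBrains, so check the model card before relying on them.

```python
# Minimal code-completion sketch with Hugging Face Transformers.
# The repository name below is assumed; confirm it on the Hugging Face model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JetBrains/Mellum-4b-base"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Partial code whose continuation we want the model to predict.
prompt = "def read_json(path):\n    "
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 64 tokens of completion; greedy decoding keeps it deterministic.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```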

JetBrains stresses that Mellum is not a general-purpose model. It’s designed for precision, not breadth, making it a strong fit for integrated development environments (IDEs).


Trained on Massive Infrastructure

JetBrains trained Mellum over 20 days using a 256-GPU cluster of Nvidia H200s.
Its training data includes:

  • Permissively licensed code from GitHub
  • Text sources like English-language Wikipedia

The goal? A clean, reliable, and secure training base for code-focused AI.


Fine-Tuning Required

Mellum isn’t plug-and-play out of the box: developers will need to fine-tune the base model for their specific use cases.
JetBrains has released a few pre-fine-tuned models for Python, primarily for research and evaluation—not production deployment.
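
As a rough illustration of what that fine-tuning step could look like, here is a hedged LoRA sketch built on transformers, peft, and datasets. The repo ID, dataset file, and hyperparameters are placeholders for illustration, not JetBrains’ actual recipe.

```python
# Illustrative LoRA fine-tuning sketch (placeholder settings, not JetBrains' recipe).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "JetBrains/Mellum-4b-base"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works during batching
model = AutoModelForCausalLM.from_pretrained(model_id)

# Train small LoRA adapters instead of updating all 4B parameters.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# "my_code.jsonl" is a placeholder for your own corpus, one {"text": ...} record per line.
dataset = load_dataset("json", data_files="my_code.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mellum-finetuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```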

⚠️ Like all AI models trained on public code, Mellum may carry biases and could generate insecure code.
A 2023 Snyk survey showed that over half of organizations have faced security risks with AI-generated code—something to keep in mind when deploying Mellum.


Not About Hype — About Focus

JetBrains isn’t trying to compete with massive general-purpose LLMs.
Their aim with Mellum is focused, collaborative innovation:

“We’re not chasing generality — we’re building focus. If Mellum sparks even one meaningful experiment, contribution, or collaboration, we would consider it a win.”

With Mellum, JetBrains has made a thoughtful entry into the AI coding world—offering a model that’s lean, specialized, and open.

As developers begin exploring it, Mellum might not dominate the headlines—but it could quietly shape the next wave of AI-assisted programming.