Meta’s new CWM model learns how code works, not just what it looks like

Meta’s AI research team has released a new large language model (LLM) for coding that enhances code understanding by learning not only what code looks like, but also what it does when executed. The model, named Code World Model (CWM), is trained on vast amounts of data showing how code interacts with its environment, allowing it to build an internal “world model” of how computational systems work.
In addition to learning the dynamics of its environment, CWM shows strong performance on standard coding and math benchmarks, opening a potential new direction for training AI agents that can handle more complex, dynamic software development tasks in enterprise environments. CWM is part of a broader set of efforts to push LLMs beyond next-token prediction into developing world models.
The limits of standard code generation
Despite recent advances in AI code generation, producing high-quality, reliable code remains a challenge even for the most advanced LLMs. The researchers at Meta suggest this is because the typical training paradigm is insufficient for mastering the complexities of programming.
Typically, a model learns to code by predicting the next instruction in a program, much like it would predict the next word in a sentence. However, the researchers argue that to truly master coding, a model must understand “not just what code looks like but what it does when executed.” This skill is fundamental for software engineers, who have an intuition for how a change will affect local variables or the overall behavior of their application. Programmers don’t think about code as a sequence of tokens but as a set of related components (variables, objects, functions, modules, etc.), which they then translate into a series of instructions. In other words, they develop a “world model” of their application as they build it or modify it.
This “world modeling” capability is usually not addressed until after an LLM’s main training is complete, a practice that the Meta team challenges.
How Code World Model works
CWM is a new LLM designed to address these challenges by training on extensive “code world modeling data.” Instead of waiting until the final fine-tuning stage, CWM is taught how code behaves during its “mid-training” phase. The hypothesis is that grounding the model’s predictions in the dynamics of computational environments early on provides a much stronger foundation for later training and reinforcement learning stages.
The researchers focused on two key types of data. The first is Python code execution traces: step-by-step records of how a program’s internal state, such as its variables, changes as each line of code runs (in contrast to the classic training scheme, which exposes models only to source code and final results). By training on these observation-action trajectories, CWM develops a deeper sense of how individual instructions affect overall program behavior. “Our premise here is that teaching CWM the semantics and not just syntax of programs should help with writing code as well as with reasoning tasks like verification, testing, and debugging,” the researchers write.
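To make the idea concrete, here is a minimal sketch of what a line-level execution trace might look like, built with Python’s standard sys.settrace hook. The traced function and the (line number, local variables) record format are illustrative assumptions, not CWM’s actual data pipeline or trace format.

```python
import sys

trace = []  # list of (line number, snapshot of local variables)

def tracer(frame, event, arg):
    # Record the local variables each time a line of the target function runs.
    if event == "line" and frame.f_code.co_name == "running_sum":
        trace.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def running_sum(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(tracer)
running_sum([1, 2, 3])
sys.settrace(None)

for lineno, local_vars in trace:
    print(lineno, local_vars)
# Each record shows how `total` and `v` evolve line by line: the kind of
# state-transition signal the article describes, rather than code text alone.
```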
The second data type consists of agentic interactions within Docker environments. The team created a synthetic data generator, called ForagerAgent, that simulates a software engineering agent performing tasks like fixing bugs or implementing new features. By observing these multi-step interactions at a large scale early in its training, CWM learns the dynamics of these environments before it is ever fine-tuned for specific tasks in the same environments.
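The paper’s exact data schema isn’t reproduced here, but an observation-action trajectory of the kind described above can be imagined as a simple list of steps. The field names, commands, and observations below are purely illustrative assumptions, not ForagerAgent’s actual format.

```python
# A purely illustrative sketch of a multi-step agentic trajectory in a
# containerized repository environment. Every field here is hypothetical.
trajectory = [
    {
        "action": "run_command",
        "command": "python -m pytest tests/test_parser.py",
        "observation": "1 failed, 12 passed: test_empty_input raises IndexError",
    },
    {
        "action": "edit_file",
        "file": "parser.py",
        "patch": "guard against an empty token list before indexing",
        "observation": "file saved",
    },
    {
        "action": "run_command",
        "command": "python -m pytest tests/test_parser.py",
        "observation": "13 passed",
    },
]
# Training on many such observation-action sequences teaches a model how its
# actions change the state of the environment across multiple steps.
```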
In practice, this allows CWM to reason about code in a way that mimics a human developer. For example, when tasked with a competitive programming problem, CWM can create an initial solution, then devise its own input-output tests to check for correctness, and finally compare its predicted output against the actual results of running the code. This self-verification loop is a direct result of its world model training.
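A rough sketch of that loop follows; the functions generate_solution, propose_tests, and predict_output stand in for calls to the model and are hypothetical placeholders, not CWM’s actual interface.

```python
# A hedged sketch of the self-verification loop described above.
# generate_solution, propose_tests, and predict_output are hypothetical
# placeholders for model calls; none of this is CWM's actual API.
def self_verify(problem, generate_solution, propose_tests, predict_output, max_rounds=3):
    code = None
    for _ in range(max_rounds):
        code = generate_solution(problem)   # draft a candidate program defining solve()
        tests = propose_tests(problem)      # model-devised (input, expected output) pairs
        namespace = {}
        exec(code, namespace)               # execute the draft (sandboxing omitted here)
        solve = namespace["solve"]
        # Accept the draft only if the model's predicted output matches both the
        # actual execution result and the expected output for every test case.
        if all(predict_output(code, x) == solve(x) == y for x, y in tests):
            return code
    return code  # fall back to the last draft if no round fully verifies
```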
CWM in action
The researchers at Meta used this data and training recipe to train a 32-billion-parameter model with a context window of up to 131,000 tokens. The model shows promising results on key industry benchmarks. On SWE-bench Verified, a benchmark that involves resolving real-world issues from GitHub repositories, CWM achieved a 65.8% pass rate, outperforming other open-weight models of a similar size. It also scored highly on LiveCodeBench (a benchmark for competitive programming), Math-500 and AIME 2024 (mathematical reasoning), and CruxEval (predicting Python code output).
Based on these results, the researchers believe that world models “can benefit agentic coding, enable step-by-step simulation of Python code execution, and show early results of how reasoning can benefit from the latter.” However, they also stress the model’s limitations. CWM is released as a research model under a noncommercial license and is not intended as a general-purpose assistant or chatbot. While it received some instruction-following data, it has not undergone the extensive optimization needed for conversational use.
While optimistic about the future of this approach, the Meta team notes that these “are only our first steps in this direction.” They see a significant opportunity for future work, stating that “robust ways to leverage world model knowledge to improve performance across a variety of tasks via prompting or fine-tuning is a ripe area for research.”
World models are key to intelligence
CWM comes against the backdrop of growing interest in imbuing LLMs with something more than the ability to predict the next token. Chain-of-thought (CoT) reasoning, the most popular such technique, has models write out their “thoughts” before producing the final answer. Reasoning models such as DeepSeek-R1 use reinforcement learning to push LLMs to generate longer CoTs, in which they reflect on and revise their answer before committing to a final response. But CoT is still a token-generation process, and a growing body of research suggests it can be an illusion of thinking rather than reliable evidence of reasoning.
World models are a more recent and advanced stab at this problem. Instead of framing the problem as a next-token prediction objective, they try to force the LLM to develop a model of the world in its latent space, which is not necessarily represented in the output tokens. Another recent paper combines the strengths of LLMs with JEPA, a deep learning architecture specifically designed for world modeling. Early results show that LLM-JEPA is more robust against changes to its environment and can learn new tasks more efficiently than models trained on pure next-token prediction.
It remains to be seen how well researchers can reconcile these different AI architectures. But what seems certain is that a solid world model makes AI systems more robust and reliable in the constantly changing environments of real-world applications.