MIT’s Recursive Language Models Improve Performance on Long-Context Tasks
Researchers at MIT’s CSAIL published a design for Recursive Language Models (RLM), a technique for improving LLM performance on long-context tasks. RLMs use a programming environment to recursively decompose and process inputs, and can handle prompts up to 100x longer than base LLMs.
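The core idea of recursive decomposition can be sketched in a few lines. This is a minimal illustration, not CSAIL's implementation: the `lm` function is a hypothetical stub standing in for an actual LLM call, and the halving strategy and `limit` threshold are assumptions for demonstration.

```python
def lm(prompt: str, context: str) -> str:
    # Stub standing in for a real LLM call (hypothetical).
    return f"summary({len(context)} chars)"

def rlm(prompt: str, context: str, limit: int = 1000) -> str:
    # If the context fits the model's window, answer directly.
    if len(context) <= limit:
        return lm(prompt, context)
    # Otherwise split the context, recurse on each half,
    # and combine the partial answers with one more call.
    mid = len(context) // 2
    left = rlm(prompt, context[:mid], limit)
    right = rlm(prompt, context[mid:], limit)
    return lm(prompt, left + "\n" + right)

# A 5,000-character context is processed despite the 1,000-character limit.
print(rlm("What is the main topic?", "x" * 5000))
```

Because each recursion level shrinks the context to a set of short partial answers, the effective input length the system can handle grows well beyond the base model's window.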

By Anthony Alford