When you call rlm.completion(prompt), your prompt becomes the context variable in a Python REPL. The LM can then write Python code to examine this context, decompose complex tasks, and recursively call itself via llm_query() to handle sub-problems.
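To make this concrete, here is a sketch of the kind of code the LM might write inside the REPL. The stubs for llm_query and FINAL_VAR stand in for the real REPL globals (documented below) so the sketch runs standalone; the splitting-on-"---" convention is illustrative, not part of the API.

```python
# Stand-ins for the real REPL globals, so this sketch is self-contained.
context = "Summarize each chapter: <chapter 1 text> --- <chapter 2 text>"

def llm_query(prompt, model=None):
    # Stub: the real function forwards the prompt to a sub-LM
    # and returns its completion string.
    return f"[summary of: {prompt[:40]}...]"

def FINAL_VAR(var_name):
    # Stub: the real function marks a REPL variable as the final answer.
    print(f"final answer is in {var_name!r}")

# Code the LM might emit: decompose the long context into chunks,
# summarize each chunk via a recursive sub-LM call, then combine.
chapters = context.split("---")
summaries = [llm_query(f"Summarize: {ch.strip()}") for ch in chapters]
answer = "\n".join(summaries)
FINAL_VAR("answer")
```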
Isolation Levels
RLM supports two types of environments based on their isolation level.

Non-Isolated Environments
- Faster execution with no network overhead
- Shared resources: filesystem, network, and memory
- Lower security: code runs with host process privileges
- Use cases: development, testing, trusted code
Isolated Environments
- Full isolation from host resources
- Higher security
- Network overhead via HTTP tunnels
- Use cases: production, untrusted code, sensitive data
Available Environments
REPL Globals
These variables and functions are available inside code executed in the REPL environment:

| Name | Description |
|---|---|
| context | Your input prompt, available as a variable in the REPL |
| llm_query(prompt, model=None) | Query a sub-LM from within the REPL. Returns the completion string. |
| llm_query_batched(prompts, model=None) | Concurrent sub-LM queries. Returns a list of completion strings. |
| FINAL_VAR(var_name) | Mark a variable as the final answer to return from the RLM. |
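The batched variant is useful when sub-problems are independent. The sketch below mimics the documented signature of llm_query_batched with a thread-pool stub to show the contract: completions come back as a list, in the same order as the input prompts. The stub bodies are illustrative, not the library's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def llm_query(prompt, model=None):
    # Stub: the real function returns a sub-LM completion string.
    return f"answer({prompt})"

def llm_query_batched(prompts, model=None):
    # Stub mirroring the documented signature: run the queries
    # concurrently and return completions in input order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda p: llm_query(p, model), prompts))

results = llm_query_batched(["q1", "q2", "q3"])
```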
Architecture
Non-Isolated (local, docker)
Direct TCP socket communication.

Isolated (modal)

HTTP broker pattern for cloud sandboxes: the sandbox queues llm_query() requests, the host polls for pending requests, forwards them to the LM Handler, and posts responses back.
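The queue-poll-respond cycle above can be sketched with an in-memory simulation. The real implementation tunnels these steps over HTTP; the function names and data structures here are illustrative, not the actual API.

```python
import queue

pending = queue.Queue()  # sandbox -> host: queued llm_query() requests
responses = {}           # host -> sandbox: completed answers, keyed by request id

def sandbox_llm_query(request_id, prompt):
    # Inside the sandbox: llm_query() enqueues a request for the host.
    pending.put((request_id, prompt))

def host_poll_once(handler):
    # On the host: poll a pending request, forward it to the LM Handler,
    # and post the response back where the sandbox can read it.
    request_id, prompt = pending.get()
    responses[request_id] = handler(prompt)

sandbox_llm_query("r1", "What is 2 + 2?")
host_poll_once(lambda prompt: "4")  # a trivial LM Handler stand-in
```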