Quick Example
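A minimal usage sketch. The import path and class name (`rlm.RLM`) are assumptions and may differ in the actual package; the constructor arguments and the `completion()` call come from the reference below.

```python
# Hypothetical import path and class name; parameter names are documented below.
from rlm import RLM

rlm = RLM(
    backend="openai",                          # one of the backends listed below
    backend_kwargs={"model_name": "gpt-4o"},   # assumed key name
    environment="local",                       # run REPL code in-process
    max_iterations=10,                         # cap on REPL iterations
)

result = rlm.completion(prompt="What is the capital of France?")
print(result.response)        # final answer
print(result.execution_time)  # seconds
```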
Constructor
backend
LM provider to use for completions.
| Value | Provider |
|---|---|
| "openai" | OpenAI API |
| "anthropic" | Anthropic API |
| "portkey" | Portkey AI gateway |
| "openrouter" | OpenRouter |
| "litellm" | LiteLLM (multi-provider) |
| "vllm" | Local vLLM server |
backend_kwargs
Provider-specific configuration (API keys, model names, etc.).
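Illustrative `backend_kwargs` shapes. The accepted keys are provider-specific; the key names used here (`model_name`, `api_key`, `base_url`) are assumptions, not the library's confirmed API.

```python
# Assumed backend_kwargs for an OpenAI backend (key names are guesses).
openai_kwargs = {
    "model_name": "gpt-4o",  # which model the backend should call
    "api_key": "sk-...",     # provider credential (placeholder)
}

# Assumed backend_kwargs for a local vLLM server.
vllm_kwargs = {
    "base_url": "http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    "model_name": "meta-llama/Llama-3.1-8B-Instruct",
}
```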
environment
Code execution environment for REPL interactions.
| Value | Description |
|---|---|
| "local" | Same-process with sandboxed builtins |
| "docker" | Docker container |
| "modal" | Modal cloud sandbox |
environment_kwargs
Environment-specific configuration.
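A sketch of what `environment_kwargs` might look like for the "docker" environment. The key names (`image`, `timeout`) are illustrative guesses; consult the library for the actual accepted keys.

```python
# Assumed environment_kwargs for environment="docker" (key names are guesses).
docker_env_kwargs = {
    "image": "python:3.11-slim",  # container image the REPL code runs in
    "timeout": 60,                # per-execution timeout in seconds
}
```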
max_iterations
Maximum REPL iterations before forcing a final answer.
max_depth
Maximum recursion depth. When the current depth reaches max_depth, the call falls back to a regular (non-recursive) LM completion.
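The depth guard can be sketched as follows. Only the fallback rule (depth >= max_depth means a plain completion) comes from this reference; the function names and the stand-in `lm_complete` are hypothetical.

```python
# Sketch of the documented max_depth fallback; names are hypothetical.
def recursive_completion(prompt: str, depth: int, max_depth: int) -> str:
    if depth >= max_depth:
        # At the depth limit: fall back to a single, non-recursive LM call.
        return lm_complete(prompt)
    # Below the limit: the REPL loop may spawn sub-LM calls at depth + 1.
    sub_answer = recursive_completion(f"sub-task of: {prompt}", depth + 1, max_depth)
    return lm_complete(f"{prompt}\ncontext: {sub_answer}")

def lm_complete(prompt: str) -> str:
    # Stand-in for a plain LM completion (echoes the first prompt line).
    return f"answer({prompt.splitlines()[0]})"
```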
custom_system_prompt
Override the default RLM system prompt.
other_backends
Additional backends available for sub-LM calls within the REPL.
other_backend_kwargs
Configurations for other_backends, given in the same order as the backends they configure.
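The two lists align by index, as in this sketch. The index-alignment requirement comes from the reference; the concrete backend names and config keys are illustrative.

```python
# other_backend_kwargs[i] configures other_backends[i]; values are illustrative.
other_backends = ["anthropic", "litellm"]
other_backend_kwargs = [
    {"model_name": "claude-3-5-sonnet-latest"},  # config for "anthropic" (assumed key)
    {"model_name": "ollama/llama3"},             # config for "litellm" (assumed key)
]
assert len(other_backends) == len(other_backend_kwargs)
paired = dict(zip(other_backends, other_backend_kwargs))
```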
logger
Logger for saving RLM execution trajectories to JSON-lines files.
verbose
Enable rich console output showing iterations, code execution, and results.
Methods
completion()
Main method for RLM completions. Executes the recursive loop and returns the final result.
Arguments
| Name | Type | Description |
|---|---|---|
| prompt | str \| dict | Input context (becomes the context variable in the REPL) |
| root_prompt | str \| None | Optional hint visible only to the root LM call |
Returns
An RLMChatCompletion object with the following attributes:
| Attribute | Type | Description |
|---|---|---|
| response | str | Final answer from the RLM |
| execution_time | float | Total execution time in seconds |
| usage_summary | UsageSummary | Aggregated token usage across all LM calls |
| root_model | str | Model name used for the root completion |
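A sketch of consuming the return value. The dataclass below is a stand-in that mirrors the documented attributes so the access pattern runs; in real use, `completion()` returns the library's own RLMChatCompletion.

```python
from dataclasses import dataclass

# Stand-in mirroring the documented RLMChatCompletion attributes.
@dataclass
class RLMChatCompletion:
    response: str
    execution_time: float
    usage_summary: object  # UsageSummary in the real library
    root_model: str

# result = rlm.completion(prompt="...")  # hypothetical call to the real library
result = RLMChatCompletion("42", 3.2, None, "gpt-4o")
print(f"{result.root_model} answered in {result.execution_time:.1f}s: {result.response}")
```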