# Examples
Ready-to-run code snippets for calling Routerly from any language. Every example connects to the same two endpoints:
| Protocol | Endpoint | When to use |
|---|---|---|
| OpenAI | http://localhost:3000/v1/chat/completions | Default — use with any OpenAI-compatible SDK |
| Anthropic | http://localhost:3000/v1/messages | Use with the Anthropic SDK or when targeting Claude models |
## Common pattern
Every integration needs two values:
- Base URL — http://localhost:3000/v1 (OpenAI) or http://localhost:3000 (Anthropic)
- API Key — your project token: sk-rt-YOUR_PROJECT_TOKEN
Swap these two values into your existing code; nothing else needs to change.
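To make the pattern concrete, here is a minimal raw-HTTP sketch against the OpenAI-protocol endpoint using only the Python standard library. The model name `gpt-4o` is a placeholder for whichever model your project routes to, and the token is the placeholder from above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:3000/v1"   # Routerly's OpenAI-compatible base URL
API_KEY = "sk-rt-YOUR_PROJECT_TOKEN"    # placeholder -- use your real project token

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o", "Hello, Routerly!")
# with urllib.request.urlopen(req) as resp:   # uncomment with Routerly running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same two lines (`BASE_URL` and `API_KEY`) are all that change when you move this code between providers.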
## Languages
| Language | OpenAI SDK | Anthropic SDK | Raw HTTP |
|---|---|---|---|
| JavaScript / TypeScript | openai npm | @anthropic-ai/sdk npm | fetch |
| Python | openai pip | anthropic pip | httpx |
| Java | — | — | java.net.http |
| Go | go-openai | — | net/http |
| C# / .NET | Azure.AI.OpenAI | — | HttpClient |
| PHP | openai-php/client | — | Guzzle |
| Ruby | ruby-openai gem | — | Net::HTTP |
| Rust | async-openai crate | — | reqwest |
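The raw-HTTP column works the same way for the Anthropic protocol, which authenticates with an `x-api-key` header and a dated `anthropic-version` header instead of a bearer token. A hedged sketch in standard-library Python (the model name `claude-sonnet-4` and the `max_tokens` value are placeholders):

```python
import json
import urllib.request

ANTHROPIC_URL = "http://localhost:3000/v1/messages"  # Routerly Anthropic-protocol endpoint
API_KEY = "sk-rt-YOUR_PROJECT_TOKEN"                 # placeholder project token

def build_messages_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an Anthropic-style /v1/messages request without sending it."""
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,  # required by the Anthropic Messages API
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        ANTHROPIC_URL,
        data=body,
        headers={
            "x-api-key": API_KEY,               # Anthropic protocol auth header
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_messages_request("claude-sonnet-4", "Hello, Routerly!")
# with urllib.request.urlopen(req) as resp:   # uncomment with Routerly running
#     print(json.load(resp)["content"][0]["text"])
```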
## Getting a project token
Create a token in the dashboard: Projects → select your project → Tokens → New Token.
Tokens start with sk-rt- and are shown once. Copy it before closing the dialog.
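Because the token is shown only once, keep it in your environment rather than hard-coding it. A small sketch; the variable name `ROUTERLY_API_KEY` is an assumption, not something the dashboard mandates:

```python
import os

def load_token(env_var: str = "ROUTERLY_API_KEY") -> str:
    """Fetch a Routerly project token from the environment.

    env_var is an assumed name -- use whatever your deploy tooling sets.
    """
    token = os.environ.get(env_var, "")
    if not token.startswith("sk-rt-"):
        raise ValueError(f"{env_var} must hold a token starting with sk-rt-")
    return token
```

Failing fast on a missing or malformed token surfaces configuration mistakes at startup instead of as opaque 401 responses later.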
## Next steps
- Read the full LLM Proxy API reference to see all supported request parameters.
- See Integrations for ready-made setup guides for tools like Cursor, Open WebUI, and LangChain.