OpenAI-compatible

Works everywhere
you already use AI

One API key. Every tool that accepts a custom OpenAI endpoint. No special plugins, no adapter code — just change the base URL.
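Every tool below speaks the same wire format: a request is an OpenAI-style POST to /chat/completions under the base URL. A minimal, stdlib-only Python sketch of that shape (the model ID and the ofa_ key are placeholders, as elsewhere on this page):

```python
import json
import urllib.request

BASE_URL = "https://getoneforall.com/api/v1"

def build_chat_request(model, messages, api_key):
    """Build a standard OpenAI-style /chat/completions request."""
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    req = build_chat_request(
        "claude-sonnet-4-6",
        [{"role": "user", "content": "Hello"}],
        "ofa_YOUR_KEY_HERE",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI SDK or client that lets you override the base URL produces an equivalent request; that is all "OpenAI-compatible" means here.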

Featured integration · GitHub ↗ · Docs ↗

OpenClaw

The open-source personal AI agent with 247k GitHub stars. Runs as a daemon on your machine and connects to WhatsApp, Slack, Discord, Telegram, iMessage, Teams, and 10+ other messaging apps. Add OneForAll as a provider and every agent task gets access to all supported models through one key.

Claude Opus / Sonnet / Haiku · GPT-4o · Gemini 2.5 Pro · o3-mini
1. Install OpenClaw — requires Node 22.16+ or Node 24

npm install -g openclaw@latest
2. Start the daemon — runs OpenClaw as an always-on background service

openclaw onboard --install-daemon
3. Export your API key as an environment variable

export ONEFORALL_API_KEY=ofa_YOUR_KEY_HERE

Add this line to your ~/.zshrc or ~/.bashrc to persist it across sessions, and replace the placeholder with your actual key from the API Keys page.

4. Add OneForAll to your OpenClaw config

~/.openclaw/openclaw.json

{
  "models": {
    "mode": "merge",
    "providers": {
      "oneforall": {
        "baseUrl": "https://getoneforall.com/api/v1",
        "apiKey": {
          "$secretRef": {
            "provider": "env",
            "key": "ONEFORALL_API_KEY"
          }
        },
        "api": "openai-completions",
        "models": [
          {
            "id": "claude-opus-4-6",
            "name": "Claude Opus 4.6",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 32000
          },
          {
            "id": "claude-sonnet-4-6",
            "name": "Claude Sonnet 4.6",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 64000
          },
          {
            "id": "claude-haiku-4-5",
            "name": "Claude Haiku 4.5",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 8096
          },
          {
            "id": "gpt-4o",
            "name": "GPT-4o",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 128000,
            "maxTokens": 16384
          },
          {
            "id": "gpt-4o-mini",
            "name": "GPT-4o Mini",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 128000,
            "maxTokens": 16384
          },
          {
            "id": "o3-mini",
            "name": "o3-mini",
            "reasoning": true,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 100000
          },
          {
            "id": "gemini-2.5-pro-preview-03-25",
            "name": "Gemini 2.5 Pro",
            "reasoning": true,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 1048576,
            "maxTokens": 65536
          },
          {
            "id": "gemini-2.0-flash",
            "name": "Gemini 2.0 Flash",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 1048576,
            "maxTokens": 8192
          },
          {
            "id": "gemini-2.0-flash-lite",
            "name": "Gemini 2.0 Flash Lite",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 1048576,
            "maxTokens": 8192
          },
          {
            "id": "gemini-1.5-pro",
            "name": "Gemini 1.5 Pro",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 2097152,
            "maxTokens": 8192
          },
          {
            "id": "gemini-1.5-flash",
            "name": "Gemini 1.5 Flash",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 1048576,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "oneforall/claude-sonnet-4-6"
      },
      "models": {
        "oneforall/claude-opus-4-6": {},
        "oneforall/claude-sonnet-4-6": {},
        "oneforall/claude-haiku-4-5": {},
        "oneforall/gpt-4o": {},
        "oneforall/gpt-4o-mini": {},
        "oneforall/o3-mini": {},
        "oneforall/gemini-2.5-pro-preview-03-25": {},
        "oneforall/gemini-2.0-flash": {},
        "oneforall/gemini-2.0-flash-lite": {},
        "oneforall/gemini-1.5-pro": {},
        "oneforall/gemini-1.5-flash": {}
      }
    }
  }
}

The config uses "mode": "merge" so it merges into any existing config without overwriting it. The $secretRef pattern reads your key from the environment variable set in step 3, so the key never appears in the config file itself.
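For illustration, the env secret provider resolves a reference roughly like the lookup below. This is a sketch of the behavior described above, not OpenClaw's actual code:

```python
import os

def resolve_secret_ref(secret_ref: dict) -> str:
    """Resolve a {"provider": "env", "key": ...} reference to its value.

    Illustrative sketch of the env-provider behavior; other providers
    (if any) would branch on the "provider" field here.
    """
    if secret_ref.get("provider") != "env":
        raise ValueError(f"unsupported secret provider: {secret_ref.get('provider')}")
    value = os.environ.get(secret_ref["key"])
    if not value:
        raise RuntimeError(f"environment variable {secret_ref['key']} is not set")
    return value
```

The practical consequence: the daemon must be started from a shell (or service environment) where ONEFORALL_API_KEY is actually exported.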

5. Verify the connection

openclaw doctor

Runs a full diagnostic — config syntax, provider connectivity, model availability, and auth status. All OneForAll models should show green. Also try openclaw models list --provider oneforall to confirm all supported models are registered.
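You can also probe the endpoint directly. Assuming OneForAll mirrors OpenAI's GET /models route (which OpenAI compatibility implies), a stdlib-only sketch:

```python
import json
import os
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build a GET request for the OpenAI-style model-listing endpoint."""
    return urllib.request.Request(
        "https://getoneforall.com/api/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    req = build_models_request(os.environ["ONEFORALL_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        # Each "id" should match a model ID from the config above.
        print([m["id"] for m in json.load(resp)["data"]])
```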

AI Coding Assistants

Cursor

docs ↗

AI-powered code editor. Set a custom OpenAI base URL in Settings → Models → OpenAI API Key.

Config
# Cursor Settings → Models → OpenAI API Key
Base URL: https://getoneforall.com/api/v1
API Key:  ofa_YOUR_KEY_HERE

# Then use any model ID in the model selector

Continue.dev

docs ↗

Open-source VS Code & JetBrains AI plugin. Add OneForAll as a custom LLM provider in config.yaml.

Config
# ~/.continue/config.yaml
models:
  - name: Claude Sonnet 4.6
    provider: openai
    model: claude-sonnet-4-6
    apiKey: ofa_YOUR_KEY_HERE
    apiBase: https://getoneforall.com/api/v1

Cline

Autonomous coding agent for VS Code. Select 'OpenAI Compatible' in the provider dropdown.

Config
# Cline Settings
Provider:  OpenAI Compatible
Base URL:  https://getoneforall.com/api/v1
API Key:   ofa_YOUR_KEY_HERE
Model:     claude-sonnet-4-6

Aider

CLI coding agent. Set environment variables before running.

Config
export OPENAI_API_KEY=ofa_YOUR_KEY_HERE
export OPENAI_API_BASE=https://getoneforall.com/api/v1

aider --model openai/claude-sonnet-4-6

Frameworks & SDKs

LangChain

docs ↗

Python and JavaScript agent framework. Use the ChatOpenAI class with a custom base URL.

Config
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="claude-sonnet-4-6",
    api_key="ofa_YOUR_KEY_HERE",
    base_url="https://getoneforall.com/api/v1",
)

Vercel AI SDK

docs ↗

Build AI-powered Next.js and React apps. Use the createOpenAI provider with a custom base URL.

Config
import { createOpenAI } from "@ai-sdk/openai";

const oneforall = createOpenAI({
  apiKey: process.env.ONEFORALL_API_KEY,
  baseURL: "https://getoneforall.com/api/v1",
});

const model = oneforall("claude-sonnet-4-6");

LlamaIndex

docs ↗

Data framework for LLM apps. Use the OpenAI class with a custom api_base.

Config
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="claude-sonnet-4-6",
    api_key="ofa_YOUR_KEY_HERE",
    api_base="https://getoneforall.com/api/v1",
)

Chat Interfaces

Open WebUI

docs ↗

Self-hosted ChatGPT alternative. Add a new OpenAI connection under Settings → Connections.

Config
# Settings → Admin → Connections → OpenAI API
API URL: https://getoneforall.com/api/v1
API Key: ofa_YOUR_KEY_HERE

LibreChat

docs ↗

Open-source ChatGPT clone. Add an endpoint in librechat.yaml.

Config
# librechat.yaml
endpoints:
  custom:
    - name: "OneForAll"
      apiKey: "ofa_YOUR_KEY_HERE"
      baseURL: "https://getoneforall.com/api/v1"
      models:
        default: ["claude-sonnet-4-6", "gpt-4o", "gemini-2.0-flash"]

Automation

n8n

No-code workflow automation. Use the OpenAI node with a custom credential.

Config
# n8n → Credentials → OpenAI API
API Key:  ofa_YOUR_KEY_HERE
Base URL: https://getoneforall.com/api/v1

# Then use any OpenAI node — select model by ID

Flowise

docs ↗

Drag-and-drop LLM agent builder. Use the ChatOpenAI node with a custom base path.

Config
# ChatOpenAI node settings
Model Name: claude-sonnet-4-6
OpenAI API Key: ofa_YOUR_KEY_HERE
BasePath: https://getoneforall.com/api/v1

Ready to start?

One API key. No juggling providers. Works in every tool above — and anything else that is OpenAI-compatible.