Introduction

Claude Adapter is a transparent proxy server that enables Claude Code to work seamlessly with any OpenAI-compatible API.

It intercepts Anthropic Messages API requests, translates them to OpenAI Chat Completions format, forwards them to your chosen provider, and converts responses back — all in real time.

Why Claude Adapter?

  • Provider Freedom — Use GPT-5, Grok, DeepSeek, Mistral, local models, or any OpenAI-compatible endpoint
  • Zero Modifications — Claude Code runs unmodified; the adapter handles everything transparently
  • Full Tool Support — Bidirectional tool/function calling translation, including XML mode for models without native support
  • Real-time Streaming — SSE translation preserves the interactive Claude Code experience
  • Simple Setup — Interactive CLI wizard handles all configuration automatically

How It Works

1. Interception — Claude Code sends requests to the adapter
2. Translation — The request is converted to OpenAI format
3. Proxying — The converted request is sent to your provider
4. Delivery — The response is translated back to Claude format
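The translation in steps 2 and 4 can be pictured as a pure mapping between the two wire formats. The sketch below is a simplified illustration only, not the adapter's actual implementation; it covers just the system-prompt relocation, one of the differences between the public Anthropic and OpenAI request shapes:

```typescript
// Simplified request translation (step 2). Real requests carry many
// more fields (tools, streaming, stop sequences, etc.).

interface AnthropicRequest {
  model: string;
  max_tokens: number;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

interface OpenAIRequest {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

function toOpenAI(req: AnthropicRequest, targetModel: string): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  // Anthropic keeps the system prompt in a separate top-level field;
  // OpenAI expects it as the first chat message.
  if (req.system) messages.push({ role: "system", content: req.system });
  messages.push(...req.messages);
  // The Claude model name is replaced with your mapped provider model.
  return { model: targetModel, max_tokens: req.max_tokens, messages };
}
```

Step 4 is the mirror image: the provider's response is wrapped back into Anthropic's `content` block structure.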

Installation

Prerequisites

  • Node.js — Version 20.0.0 or higher
  • npm — Comes with Node.js
  • Claude Code — Anthropic's CLI tool

Global Installation

Install Claude Adapter globally using npm:

$ npm install -g claude-adapter

Verify Installation

Confirm the installation was successful:

$ claude-adapter --version

Quick Start

1. Launch the Adapter

Start the adapter with the interactive setup wizard:

$ claude-adapter
2. Complete the Setup Wizard

The wizard guides you through configuration:

  • Enter your OpenAI-compatible API endpoint
  • Provide your API key
  • Map Claude model tiers to your preferred models
  • Select tool calling mode (Native or XML)
3. Use Claude Code

Open a new terminal and run Claude Code as usual:

$ claude

Claude Code now routes through the adapter using your configured models.
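If Claude Code does not pick up the adapter automatically, it can typically be pointed at a local proxy via the `ANTHROPIC_BASE_URL` environment variable (the port here assumes the adapter's default of 3080):

```shell
# Assumption: the adapter is running on its default port 3080
ANTHROPIC_BASE_URL=http://localhost:3080 claude
```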

CLI Options

Claude Adapter supports the following command-line options:

Option              Description                                   Default
-p, --port <port>   Port for the proxy server                     3080
-r, --reconfigure   Force the configuration wizard to run again   false
-V, --version       Display version number
-h, --help          Show help information

Environment Variables

Variable    Description
LOG_LEVEL   Set to DEBUG for verbose logging

Model Mapping

Claude Code operates with three model tiers. During setup, you map each tier to your preferred model:

Claude Tier   Typical Use Case                       Recommended Alternatives
Opus          Complex reasoning, detailed analysis   gpt-5.2-codex/codex-max, grok-4.1, deepseek-v3.2
Sonnet        Balanced general tasks                 gpt-5.2-codex, gpt-5.2, devstral-medium, deepseek-v3.2, mistral-large-3
Haiku         Fast, simple operations                gpt-5-nano, ministral-3, amazon-nova-micro

Tip: For best results, match model capabilities to their intended use case. Use your most capable model for Opus and faster/cheaper models for Haiku.
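The three-tier mapping mirrors the `models` section of `AdapterConfig` shown later under Programmatic Usage. As a sketch (the model IDs here are examples, not recommendations):

```typescript
// Example tier mapping; model IDs are illustrative.
const models = {
  opus: "gpt-5.2-codex",  // most capable: complex reasoning
  sonnet: "gpt-5.2",      // balanced default
  haiku: "gpt-5-nano"     // fast, cheap operations
} as const;

type Tier = keyof typeof models;

// A tier lookup with a safe fallback to the balanced tier.
function resolveModel(tier: string): string {
  return models[tier as Tier] ?? models.sonnet;
}
```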

Supported Providers

Claude Adapter works with any OpenAI-compatible API. Here are some popular providers:

Provider       Endpoint                                                         Notes
OpenAI         https://api.openai.com/v1                                        Full support
Azure OpenAI   https://{resource}.openai.azure.com/openai/deployments/{model}   Adjusted token limits
DeepSeek       https://api.deepseek.com/v1                                      Recommended: XML mode
Grok (xAI)     https://api.x.ai/v1                                              Full support
Mistral        https://api.mistral.ai/v1                                        Use XML mode
Ollama         http://localhost:11434/v1                                        Local inference
OpenRouter     https://openrouter.ai/api/v1                                     Multi-model gateway

Any other OpenAI-compatible endpoint will also work.

Tool Calling Modes

Claude Adapter supports two tool calling modes to accommodate different model capabilities.

Native Mode

Uses the standard OpenAI function calling format. Best for models with native tool support.

Best for: OpenAI, Grok, Azure OpenAI

XML Mode

Injects tool instructions into the system prompt and parses XML-formatted responses. Works with any model.

Best for: DeepSeek, Mistral, local models, models without function calling
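A rough illustration of what XML-mode parsing involves. The tag names below (`<tool_call>`, `<arg>`) are assumptions for the sake of the example; the adapter's actual prompt wording and tag format are not documented here:

```typescript
// Hypothetical XML tool-call format — the adapter's real tags may differ:
// <tool_call name="read_file"><arg name="path">src/index.ts</arg></tool_call>

interface ParsedToolCall {
  name: string;
  args: Record<string, string>;
}

// Extracts the first XML-formatted tool call from a model's text reply,
// or returns null if the reply contains no tool call.
function parseToolCall(text: string): ParsedToolCall | null {
  const call = text.match(/<tool_call name="([^"]+)">([\s\S]*?)<\/tool_call>/);
  if (!call) return null;
  const args: Record<string, string> = {};
  for (const m of call[2].matchAll(/<arg name="([^"]+)">([\s\S]*?)<\/arg>/g)) {
    args[m[1]] = m[2];
  }
  return { name: call[1], args };
}
```

Because the model emits the call as ordinary text, this works even for models with no function-calling API, at the cost of depending on the model following the injected format reliably.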

Choosing a Mode

Criterion                      Native       XML
Model has function calling     Use          Not needed
Model lacks function calling   Not usable   Use
More reliable parsing          Yes          Depends on model
Works with any model           No           Yes

Streaming

Claude Adapter provides full Server-Sent Events (SSE) streaming support, preserving the interactive Claude Code experience.

SSE Event Types

Event                 Description
message_start         Initial message metadata
content_block_start   New content block (text or tool_use)
content_block_delta   Content update/chunk
content_block_stop    Content block complete
message_delta         Final metadata with stop_reason
message_stop          Stream complete
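Put together, a minimal streamed text reply arrives roughly in this order (an illustrative excerpt following Anthropic's SSE framing; payloads abbreviated):

```text
event: message_start
data: {"type":"message_start","message":{...}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":1}}

event: message_stop
data: {"type":"message_stop"}
```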

API Reference

POST /v1/messages

Proxies Anthropic Messages API requests to your configured provider.

Request Body

TypeScript
{
  model: string;                  // Required
  max_tokens: number;             // Required
  messages: Message[];            // Required
  system?: string | SystemContent[];
  temperature?: number;           // 0-1
  top_p?: number;                 // 0-1
  stop_sequences?: string[];
  stream?: boolean;
  tools?: ToolDefinition[];
  tool_choice?: ToolChoice;
}
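A minimal valid body under this schema (model name and values are illustrative):

```json
{
  "model": "claude-sonnet-4-5",
  "max_tokens": 256,
  "messages": [
    { "role": "user", "content": "Summarize this repository." }
  ],
  "stream": true
}
```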

Response (Non-streaming)

TypeScript
{
  id: string;
  type: "message";
  role: "assistant";
  content: ContentBlock[];
  model: string;
  stop_reason: "end_turn" | "max_tokens" | "tool_use" | null;
  stop_sequence: string | null;
  usage: {
    input_tokens: number;
    output_tokens: number;
    cache_read_input_tokens?: number;
  };
}

GET /health

Health check endpoint for monitoring.

JSON
{
  "status": "ok",
  "adapter": "claude-adapter"
}

Error Responses

All errors follow Anthropic's error format:

JSON
{
  "error": {
    "type": "invalid_request_error",
    "message": "Description of the error"
  }
}

Status   Error Type
400      invalid_request_error
401      authentication_error
403      permission_error
404      not_found_error
429      rate_limit_error
500      api_error
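The mapping above is simple enough to express as a lookup. A sketch of how a client might classify adapter errors by status code (this mirrors the table, not the adapter's internal code):

```typescript
// Status → Anthropic error type, mirroring the table above.
const ERROR_TYPES: Record<number, string> = {
  400: "invalid_request_error",
  401: "authentication_error",
  403: "permission_error",
  404: "not_found_error",
  429: "rate_limit_error",
  500: "api_error"
};

function errorTypeFor(status: number): string {
  // Unlisted statuses fall back to the generic api_error.
  return ERROR_TYPES[status] ?? "api_error";
}
```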

Programmatic Usage

Claude Adapter can be used as a library in Node.js applications.

Creating a Server

TypeScript
import { createServer, AdapterConfig } from 'claude-adapter';

const config: AdapterConfig = {
  baseUrl: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
  models: {
    opus: 'gpt-5',
    sonnet: 'gpt-5-mini',
    haiku: 'gpt-5-nano'
  },
  toolFormat: 'native' // or 'xml'
};

const server = createServer(config);
await server.start(3080);

console.log('Proxy running on http://localhost:3080');

Using Conversion Functions

TypeScript
import {
  convertRequestToOpenAI,
  convertResponseToAnthropic
} from 'claude-adapter';

// Convert Anthropic request to OpenAI format
const openaiRequest = convertRequestToOpenAI(
  anthropicRequest,
  'gpt-5.2-codex',
  'native'
);

// Convert OpenAI response to Anthropic format
const anthropicResponse = convertResponseToAnthropic(
  openaiResponse,
  'claude-4.5-opus'
);

Type Exports

TypeScript
import {
  // Configuration
  AdapterConfig,
  ModelConfig,

  // Anthropic Types
  AnthropicMessageRequest,
  AnthropicMessageResponse,
  AnthropicMessage,
  AnthropicContentBlock,
  AnthropicToolDefinition,

  // OpenAI Types
  OpenAIChatRequest,
  OpenAIChatResponse,
  OpenAIMessage,
  OpenAITool
} from 'claude-adapter';

Troubleshooting

Port Already in Use

Use a different port:

$ claude-adapter --port 3000

Or kill the process using the port:

# Windows
$ netstat -ano | findstr :3080
$ taskkill /PID <pid> /F

# macOS/Linux
$ lsof -ti :3080 | xargs kill

Authentication Errors (401)

Reconfigure with a valid API key:

$ claude-adapter --reconfigure

Connection Refused

Ensure the adapter is running in a terminal before starting Claude Code. The adapter must be active to proxy requests.

Tool Calls Not Working

If your model lacks native function calling, switch to XML mode:

$ claude-adapter --reconfigure
# Select XML mode when prompted

Enable debug logging for more details:

$ LOG_LEVEL=DEBUG claude-adapter

Model Not Responding

  • Verify your API key is valid
  • Confirm the base URL is correct
  • Check your API provider's status page
  • Try a different model

Development

Setup

$ git clone https://github.com/shantoislamdev/claude-adapter.git
$ cd claude-adapter
$ npm install

Available Scripts

Script           Description
npm run dev      Start with ts-node (development)
npm run build    Compile TypeScript
npm start        Run compiled version
npm test         Run Jest test suite
npm run lint     ESLint check
npm run format   Prettier formatting

Project Architecture

Flow
Request Flow:
cli.ts → server/index.ts → server/handlers.ts
  → converters/request.ts → OpenAI API
  → converters/response.ts or converters/streaming.ts → Client

Running Tests

# Run all tests
$ npm test

# Run specific file
$ npm test -- tests/request.test.ts

# With coverage
$ npm test -- --coverage