OpenAI SDK Configuration Guide

Inkess provides an OpenAI-compatible interface, so you can access Claude and other models directly with the OpenAI SDK by changing only the base_url and api_key.

Endpoint

https://llm.starapp.net/api/llm/v1

Note: the OpenAI SDK's base_url must include the /v1 suffix.

Python

Installation

pip install openai

Code example

from openai import OpenAI

client = OpenAI(
    base_url="https://llm.starapp.net/api/llm/v1",
    api_key="your-token-here",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)

Streaming

stream = client.chat.completions.create(
    model="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True,
)

for chunk in stream:
    # Guard against chunks with no choices (e.g. usage-only chunks)
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Using environment variables

export OPENAI_BASE_URL=https://llm.starapp.net/api/llm/v1
export OPENAI_API_KEY=your-token-here

from openai import OpenAI

# The client reads both environment variables automatically
client = OpenAI()

Node.js / TypeScript

Installation

npm install openai

Code example

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://llm.starapp.net/api/llm/v1",
  apiKey: "your-token-here",
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-5",
  messages: [
    { role: "user", content: "Hello!" }
  ],
});
console.log(response.choices[0].message.content);

Streaming

const stream = await client.chat.completions.create({
  model: "claude-sonnet-4-5",
  messages: [{ role: "user", content: "Write a poem" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}

Testing with cURL

curl https://llm.starapp.net/api/llm/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-token-here" \
  -d '{
    "model": "claude-sonnet-4-5",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Notes

  • When calling Claude through the OpenAI-compatible interface, some Claude-specific features (such as vision's special content formats and extended thinking) may be unavailable
  • To use Claude's capabilities in full, the Anthropic SDK is recommended
  • See the model list for the complete set of model IDs