DeepSeek R1 API

DeepSeek R1 is an open reasoning model that competes with far more expensive proprietary models. Like o3, it works through an extended chain of thought before answering, at roughly 20× lower cost than o3.

Input      $0.55 / 1M tokens
Output     $2.19 / 1M tokens
Context    128K tokens
Vision     Text only

Top use cases

  • Cost-sensitive math and code reasoning
  • Open-source research workflows
  • Self-hostable fallback for reasoning tasks
  • Bulk evaluation and grading

Use DeepSeek R1 in 30 seconds

ModelServer is OpenAI-compatible. Point your existing OpenAI SDK at modelserver.dev/v1 and set the model name to deepseek-r1.

deepseek-r1.py
from openai import OpenAI

# Any OpenAI SDK client works -- only the base_url and model name change.
client = OpenAI(
    api_key="sk-modelserver-...",  # your ModelServer API key
    base_url="https://modelserver.dev/v1",
)

response = client.chat.completions.create(
    model="deepseek-r1",
    messages=[
        {"role": "user", "content": "Hello, DeepSeek R1!"}
    ],
)

print(response.choices[0].message.content)
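R1 returns its chain of thought along with the final answer. Some R1 deployments wrap the reasoning in a leading <think>...</think> block inside the message content; the helper below splits reasoning from answer under that assumption. This is a sketch, not documented ModelServer behavior, so check the actual response format before relying on it.

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    # Assumes the chain of thought, if present, arrives wrapped in a leading
    # <think>...</think> block -- a convention some R1 deployments use.
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()

raw = "<think>2 + 2 = 4, trivially.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # → The answer is 4.
```

If the tags are absent, the full message comes back unchanged as the answer.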

Frequently asked questions

How much does DeepSeek R1 cost?
DeepSeek R1 is priced at $0.55 per 1M input tokens and $2.19 per 1M output tokens via ModelServer. ModelServer adds a flat 5.5% platform fee on top — no markups on individual tokens, no monthly minimum.
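Putting those numbers together, a quick back-of-the-envelope estimate of a single request's cost (the token counts in the example are made up):

```python
INPUT_RATE = 0.55 / 1_000_000   # dollars per input token
OUTPUT_RATE = 2.19 / 1_000_000  # dollars per output token
PLATFORM_FEE = 0.055            # ModelServer's flat 5.5% fee

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    # Base token cost, with the platform fee applied on top.
    base = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return base * (1 + PLATFORM_FEE)

# A 2,000-token prompt producing 8,000 tokens of reasoning plus answer:
print(f"${estimate_cost(2_000, 8_000):.4f}")  # → $0.0196
```

Reasoning models spend most of their tokens on output, so the $2.19 output rate dominates the bill.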
What is the DeepSeek R1 context window?
DeepSeek R1 supports a 128K token context window. You can put roughly 96,000 words in a single prompt.
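The 96,000-word figure follows from the common rule of thumb of about 0.75 English words per token; the exact ratio varies with the text and the tokenizer.

```python
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75  # rough English-text heuristic, not exact

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(approx_words)  # → 96000
```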
Is DeepSeek R1 OpenAI-compatible via ModelServer?
Yes. Point your OpenAI SDK base_url to https://modelserver.dev/v1 and set model="deepseek-r1". Existing OpenAI-SDK code works without modification.
Who is DeepSeek R1 best for?
Reasoning workloads where price-per-token matters more than the brand.
Does DeepSeek R1 support vision input?
No. DeepSeek R1 is text-only. For multimodal use cases consider Claude Sonnet 4, GPT-4o, or Gemini 2.5 Pro.
