DeepSeek’s new AI model can generate 200K pages of training data daily on a single GPU

The launch of DeepSeek-OCR reflects the company’s continued focus on improving the efficiency of LLMs while driving down the costs of building them.

Some of DeepSeek's statements about its development costs and the technology it used have been questioned by U.S. companies and officials. (Image: Reuters)

Chinese AI startup DeepSeek has released a new multimodal AI model, which it said is capable of processing large and complex documents using significantly fewer tokens.

The Hangzhou-based company said that DeepSeek-OCR uses visual perception as a medium to compress text for large language models (LLMs) more efficiently. Both the source code and the weights of the model are publicly available via the online developer platforms Hugging Face and GitHub. In its research, DeepSeek found that using “vision encoders” to compress text for LLMs would enable them to process massive amounts of text at lower computing costs.
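
For developers who want to try the release, the weights can be pulled straight from Hugging Face. Below is a minimal loading sketch in Python; it assumes the checkpoint is published as deepseek-ai/DeepSeek-OCR and that the repository ships custom modelling code, so the exact prompt format and inference call should be taken from the model card rather than from this example.

# Minimal loading sketch (assumptions: the checkpoint id is "deepseek-ai/DeepSeek-OCR"
# and the repository ships custom modelling code loaded via trust_remote_code).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR"  # assumed Hugging Face repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model = model.eval().to("cuda" if torch.cuda.is_available() else "cpu")

# The prompt format and the image-to-text inference call are defined by the
# repository's own code; follow the usage instructions on the model card.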

“Through DeepSeek-OCR, we demonstrate that vision-text compression can achieve significant token reduction (7-20×) for different historical context stages, offering a promising direction for addressing long-context challenges in large language models,” the company said in a technical paper accompanying the model’s release.

The launch of DeepSeek-OCR reflects the company’s continued focus on improving the efficiency of LLMs while driving down the costs of building and using them. The company is said to have taken a similar approach in developing its breakthrough open-weight models V3 and R1, which made waves across the tech industry for achieving performance comparable to cutting-edge models like OpenAI’s o1 at only a fraction of the cost.


What are the components of the new AI model?

With DeepSeek-OCR, the company aims to tackle a key limitation of LLMs: handling long contexts without running into memory limits. Its core hypothesis is that processing text as images can be more computationally efficient than processing raw digital text. The new OCR model serves as a proof-of-concept for this idea.

The model comprises two parts: a 380-million-parameter DeepEncoder, which analyses each image and produces a compressed version of it, and a text generator with 570 million active parameters built on a three-billion-parameter mixture-of-experts (MoE) language model.
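
To make that division of labour concrete, here is a deliberately simplified toy in Python (PyTorch). It is not DeepSeek's architecture, and every dimension in it is invented for illustration; it only mimics the flow described above, in which an encoder squeezes a page's image patches into a small, fixed number of "vision tokens" that a language-model-style decoder then reads to produce text.

# Illustrative toy only; not DeepSeek's implementation. All sizes are made up.
import torch
import torch.nn as nn

class ToyDeepEncoder(nn.Module):
    """Toy stand-in for the encoder: compresses patch embeddings into a few vision tokens."""
    def __init__(self, patch_dim: int = 768, n_vision_tokens: int = 100):
        super().__init__()
        # Learned query vectors; each one "summarises" part of the page.
        self.queries = nn.Parameter(torch.randn(n_vision_tokens, patch_dim))
        self.attn = nn.MultiheadAttention(patch_dim, num_heads=8, batch_first=True)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, n_patches, patch_dim)
        queries = self.queries.unsqueeze(0).expand(patch_embeddings.size(0), -1, -1)
        vision_tokens, _ = self.attn(queries, patch_embeddings, patch_embeddings)
        return vision_tokens  # (batch, n_vision_tokens, patch_dim)

class ToyTextGenerator(nn.Module):
    """Toy stand-in for the text generator: turns vision tokens into text-token logits."""
    def __init__(self, dim: int = 768, vocab_size: int = 32000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:
        return self.lm_head(self.backbone(vision_tokens))

# A fake "page" of 1,024 patch embeddings is squeezed into 100 vision tokens
# before any text is generated.
page_patches = torch.randn(1, 1024, 768)
vision_tokens = ToyDeepEncoder()(page_patches)
logits = ToyTextGenerator()(vision_tokens)
print(vision_tokens.shape, logits.shape)

The point of the toy is the shape of the data: 1,024 patch embeddings go in, 100 vision tokens come out, and the decoder only ever sees those 100.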

[Infographic: DeepSeek-OCR's dual-component architecture: a 380M-parameter DeepEncoder for image analysis and compression, plus a text generator with 570M active parameters built on a 3-billion-parameter MoE language model foundation. Indian Express InfoGenIE]
[Infographic: DeepSeek-OCR benchmark performance on OmniDocBench: 100 vision tokens vs GOT-OCR2.0's 256 tokens/page, and under 800 vision tokens vs MinerU2.0's 6,000+ tokens/page, amounting to 61% fewer tokens than GOT-OCR2.0 and 87% fewer than MinerU2.0. Indian Express InfoGenIE]

DeepSeek’s researchers said that they trained the OCR model on 30 million PDF pages in roughly 100 languages, including 25 million in Chinese and English, along with 10 million synthetic diagrams, five million chemical formulae, and one million geometric figures.

How has DeepSeek performed on benchmarks?

The OCR model can compress text by up to a factor of ten while retaining 97 per cent of the original information, as per the technical paper. It can process a wide range of document types, including plain text, diagrams, chemical formulae, and geometric figures, and it can preserve the original formatting, output plain text, and even provide general image descriptions. However, the number of ‘vision tokens’ required is likely to vary with document size and image resolution.


In practical terms, DeepSeek-OCR can generate training data for LLMs and vision language models (VLMs) at a scale of more than 200,000 pages per day while running on a single Nvidia A100 GPU.

The OCR model was evaluated on two benchmarks: OmniDocBench, which measures a model’s document-parsing capabilities, and the Fox benchmark, which tests how well vision language models focus on dense PDF documents.

“On OmniDocBench, it surpasses GOT-OCR2.0 (256 tokens/page) using only 100 vision tokens, and outperforms MinerU2.0 (6000+ tokens per page on average) while utilising fewer than 800 vision tokens,” the paper read.
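
Those figures also explain the percentage reductions quoted in the infographic above; a quick back-of-the-envelope check in Python (using 6,000 as MinerU2.0's lower bound) reproduces them.

# Back-of-the-envelope check of the token savings implied by the quoted figures.
def pct_fewer(deepseek_tokens: int, baseline_tokens: int) -> float:
    return 100 * (1 - deepseek_tokens / baseline_tokens)

print(f"vs GOT-OCR2.0: {pct_fewer(100, 256):.0f}% fewer vision tokens")   # about 61%
print(f"vs MinerU2.0:  {pct_fewer(800, 6000):.0f}% fewer vision tokens")  # about 87%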
