INFO Reasoning model: DeepSeek is not a native reasoning model, defaulting to manual Chain-of-Thought reasoning

I initialize my agent with these settings:

agent = Agent(
    reasoning=True,
    reasoning_model=DeepSeek(
        id=self.agent_config.agent_model_config.model_name,
        base_url=self.agent_config.agent_model_config.base_url,
        api_key=self.agent_config.agent_model_config.api_key,
        timeout=self.agent_config.agent_model_config.timeout,
        max_tokens=self.agent_config.max_tokens,
        max_completion_tokens=self.agent_config.max_completion_tokens,
        extra_body={"thinking": {"type": "enabled"}},
    ),
    reasoning_agent=None,
    markdown=False,
    output_schema=EventObservationMetricVMLookOutputSchema,
    use_json_mode=False,
    stream_events=False,
    stream=False,
)

But I still get this error:

INFO Reasoning model: DeepSeek is not a native reasoning model, defaulting to manual Chain-of-Thought reasoning
ERROR API status error from OpenAI API: Error code: 400 - {'error': {'message': 'Missing `reasoning_content` field in the assistant message at message index 3. For more information, please refer to https://api-docs.deepseek.com/guides/thinking_mode#tool-calls', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
ERROR Reasoning error: Missing `reasoning_content` field in the assistant message at message index 3. For more information, please refer to https://api-docs.deepseek.com/guides/thinking_mode#tool-calls
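From the error message and the linked guide, it appears that when DeepSeek's thinking mode is enabled, every assistant message in a tool-call history must carry a `reasoning_content` field, which the manual CoT path apparently drops when it replays the conversation. A minimal sketch of the message shape the API seems to expect (roles and field names come from the error and the OpenAI-compatible chat format; the tool name and values are made up):

```python
# Sketch of a tool-call history in DeepSeek "thinking" mode.
# Per the 400 error above, each assistant message needs `reasoning_content`;
# the tool name and contents below are illustrative only.
history = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "Look up today's event metrics."},
    {
        "role": "assistant",
        "content": "",
        "reasoning_content": "I should call the metrics tool first.",  # required
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_metrics", "arguments": "{}"},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": '{"views": 42}'},
]

# Find assistant messages that would trigger the "Missing `reasoning_content`"
# error at their index in the list.
missing = [
    i for i, m in enumerate(history)
    if m["role"] == "assistant" and "reasoning_content" not in m
]
print(missing)  # → [] when every assistant message has the field
```

In the failing request, the assistant message at index 3 lacked that field, so the API rejected the whole request with the 400 above.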

But the official docs say:

What: Pre-trained models that natively think before answering (e.g. OpenAI gpt-5, Claude 4.5 Sonnet, Gemini 2.0 Flash Thinking, DeepSeek-R1).
How it works: The model generates an internal chain of thought before producing its final response. This happens at the model layer: you simply use the model and reasoning happens automatically.
Best for:
Single-shot complex problems (math, coding, physics)
Problems where you trust the model to handle reasoning internally
Use cases where you don’t need to control the reasoning process
Example:
o3_mini.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Setup your Agent using a reasoning model
agent = Agent(model=OpenAIChat(id="gpt-5-mini"))

# Run the Agent
agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)
Read more about reasoning models in the Reasoning Models Guide.
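For context on the fallback path named in the log: "manual Chain-of-Thought" means the framework injects the reasoning at the prompt level instead of relying on the model's native thinking. A rough sketch of the idea (helper name and prompt wording are entirely hypothetical, not agno's actual implementation):

```python
def with_manual_cot(question: str) -> list[dict]:
    """Wrap a question in an explicit think-step-by-step instruction.

    Hypothetical sketch of a manual Chain-of-Thought fallback;
    agno's real implementation differs.
    """
    system = (
        "Reason through the problem step by step inside <thinking> tags, "
        "then give the final answer after 'Answer:'."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = with_manual_cot("What is 17 * 24?")
print(msgs[0]["role"], len(msgs))  # → system 2
```

The catch in this bug is that the replayed assistant turns produced this way carry no `reasoning_content`, which DeepSeek's thinking mode rejects on tool-call turns.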