How to get the expected output from the model in a reasoning step

Hello,

I got this error when executing my agent. It happens when accessing PostgreSQL, and I think it comes from the “assistant” message. How can I make sure this error doesn’t occur anymore?
Thanks!

DEBUG *************************************  TOOL METRICS  *************************************              
DEBUG * Time:                        0.0864s                                                                  
DEBUG *************************************  TOOL METRICS  *************************************              
DEBUG ======================================= assistant ========================================              
DEBUG {"reasoning_steps":[{"title":"Identify Year with Highest Bookings","action":"I have analyzed the query  
      results to identify the year with the highest bookings.","result":"The year 2016 had the highest number 
      of bookings with 56,707 bookings.","reasoning":"The query results clearly show that 2016 had the highest
      count of bookings among the years listed. This gives us the primary data point needed to answer the     
      user's question.","next_action":"validate","confidence":0.98},{"title":"Understanding Why 2016 had the  
      Highest Bookings","action":"I will analyze the data further to understand why 2016 had the highest      
      bookings.","result":null,"reasoning":"Understanding the reason behind a peak in bookings involves       
      examining additional data such as market conditions, promotions, or changes in customer behavior during 
      that year.","next_action":"continue","confidence":0.9},{"title":"Evaluate Possible                      
      Explanations","action":"I will evaluate possible factors such as market segments, distribution channels,
      or any external events that may have influenced the high number of bookings in                          
      2016.","result":null,"reasoning":"By examining additional attributes like market segments or            
      distribution channels, I can find correlations that explain the increase in                             
      bookings.","next_action":"continue","confidence":0.85}]}                                                
DEBUG Tool Calls:                                                                                             
        - ID: 'call_J8vzx0v2YevKD1xYuxDaXJ5g'                                                                 
          Name: 'run_query'                                                                                   
          Arguments: 'query: SELECT market_segment, COUNT(*) as segment_count FROM hotel_booking_v2 WHERE     
      arrival_date_year = 2016 GROUP BY market_segment ORDER BY segment_count DESC'                           
DEBUG ***************************************  METRICS  ****************************************              
DEBUG * Tokens:                      input=3266, output=310, total=3576, cached=3200                          
DEBUG * Prompt tokens details:       {'cached_tokens': 3200}                                                  
DEBUG * Completion tokens details:   {'reasoning_tokens': 0}                                                  
DEBUG * Time:                        4.3023s                                                                  
DEBUG * Tokens per second:           72.0541 tokens/s                                                         
DEBUG ***************************************  METRICS  ****************************************              
DEBUG Running: run_query(query=...)                                                                           
INFO Running: SELECT market_segment, COUNT(*) as segment_count FROM hotel_booking_v2 WHERE arrival_date_year =
     2016 GROUP BY market_segment ORDER BY segment_count DESC                                                 
DEBUG Query result: market_segment,segment_count                                                              
      Online TA,27661                                                                                         
      Offline TA/TO,12473                                                                                     
      Groups,7857                                                                                             
      Direct,5663                                                                                             
      Corporate,2562                                                                                          
      Complementary,364                                                                                       
      Aviation,127                                                                                            
DEBUG ========================================== tool ==========================================              
DEBUG Tool call Id: call_J8vzx0v2YevKD1xYuxDaXJ5g                                                             
DEBUG market_segment,segment_count                                                                            
      Online TA,27661                                                                                         
      Offline TA/TO,12473                                                                                     
      Groups,7857                                                                                             
      Direct,5663                                                                                             
      Corporate,2562                                                                                          
      Complementary,364                                                                                       
      Aviation,127                                                                                            
DEBUG *************************************  TOOL METRICS  *************************************              
DEBUG * Time:                        0.1044s                                                                  
DEBUG *************************************  TOOL METRICS  *************************************              
ERROR    Error from OpenAI API: 1 validation error for ReasoningSteps                                         
           Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid,                           
         input_value='{"reasoning_steps":[{"ti...r","confidence":0.98}]}', input_type=str]                    
             For further information visit https://errors.pydantic.dev/2.11/v/json_invalid                    
WARNING  Attempt 1/1 failed: 1 validation error for ReasoningSteps                                            
           Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid,                           
         input_value='{"reasoning_steps":[{"ti...r","confidence":0.98}]}', input_type=str]                    
             For further information visit https://errors.pydantic.dev/2.11/v/json_invalid                    
ERROR    Failed after 1 attempts. Last error using OpenRouter(openai/gpt-4o)                                  
ERROR    Reasoning error: 1 validation error for ReasoningSteps                                               
           Invalid JSON: trailing characters at line 2 column 1 [type=json_invalid,                           
         input_value='{"reasoning_steps":[{"ti...r","confidence":0.98}]}', input_type=str]                    
             For further information visit https://errors.pydantic.dev/2.11/v/json_invalid  
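For what it’s worth, the “trailing characters” part of the error means the model emitted one complete JSON object and then kept writing. A minimal stdlib sketch (with a hypothetical payload, not the actual model output) reproduces the same decoder failure:

```python
import json

# One complete JSON object followed by extra text (e.g. a second object
# or stray tokens). The decoder stops at the end of the first object and
# rejects the leftover, which pydantic surfaces as "trailing characters".
raw = '{"reasoning_steps": []}\n{"note": "extra"}'

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    print(e.msg)  # Extra data
```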

Hey @eibednejo
thanks for reaching out and supporting Agno. I’ve shared this with the team; we’re working through all requests one by one and will get back to you soon.
If it’s urgent, please let us know. We appreciate your patience!

Hi @eibednejo! The team is looking into it and a fix will be out soon.

Thanks for reporting this bug!

Thanks! Looking forward to it!

Hi @eibednejo! We’d appreciate more information on the agent configuration / code used to reproduce this error. Just want to make sure we fix the bug reliably.

Sorry, it seems that code has since been replaced, but I’ll keep playing around with Agno since I’m finalizing something.
I’ll update this thread if it occurs again.
Appreciate your support!

Ok I got it hahahaha
It occurs when I set reasoning=True

from agno.agent import Agent
from postgres_sync import PostgresTools # I use my own PostgresTools
from agno.models.openrouter import OpenRouter
from dotenv import load_dotenv
import asyncio
import os

# Load environment variables
load_dotenv()

# Initialize PostgresTools with connection details
postgres_tools = PostgresTools(
    host="localhost",
    port=5432,
    db_name="datasets",
    user="user",
    password="password"
)

# Create a simple agent with reasoning enabled
sql_agent = Agent(
    model=OpenRouter(
        id="openai/gpt-4o", # same result using gpt-4.1-mini
        api_key=os.getenv("OPENROUTER_API_KEY"),
        temperature=0
    ),
    tools=[
        postgres_tools,
    ],
    instructions=[
        "You are a helpful and smart SQL executor.",
        "Your job is to convert the user query to SQL query and run it.",
        "Always use tools.",
        "You can only SELECT operation.",
        "Think step by step.",
        "Always inform the error message when there is."
    ],
    debug_mode=True,
    reasoning=True
)

async def main():
    await sql_agent.aprint_response("""
    which year has the highest number of bookings and why?
    """, stream=True, show_full_reasoning=True)

asyncio.run(main())

But here I’m using my own PostgresTools, basically just replacing psycopg2 with psycopg. Here’s the code:

from typing import Any, Dict, List, Optional

try:
    import psycopg
except ImportError:
    raise ImportError(
        "`psycopg` not installed. Please install using `pip install psycopg`. "
        "If you face issues, try `pip install psycopg[binary,pool]`."
    )

from agno.tools import Toolkit
from agno.utils.log import log_debug, log_info


class PostgresTools(Toolkit):
    """A synchronous tool to connect to a PostgreSQL database and perform read-only operations."""

    def __init__(
        self,
        connection: Optional[psycopg.Connection] = None,
        db_name: Optional[str] = None,
        user: Optional[str] = None,
        password: Optional[str] = None,
        host: Optional[str] = None,
        port: Optional[int] = None,
        run_queries: bool = True,
        inspect_queries: bool = False,
        summarize_tables: bool = True,
        export_tables: bool = False,
        table_schema: str = "public",
        **kwargs,
    ):
        self._connection: Optional[psycopg.Connection] = connection
        self.db_name = db_name
        self.user = user
        self.password = password
        self.host = host
        self.port = port
        self.table_schema = table_schema

        tools: List[Any] = []
        tools.append(self.show_tables)
        tools.append(self.describe_table)
        if inspect_queries:
            tools.append(self.inspect_query)
        if run_queries:
            tools.append(self.run_query)
        if summarize_tables:
            tools.append(self.summarize_table)
        if export_tables:
            tools.append(self.export_table_to_path)

        super().__init__(name="postgres_tools", tools=tools, **kwargs)

    def connection(self) -> psycopg.Connection:
        if self._connection is None:
            connection_kwargs: Dict[str, Any] = {
                "dbname": self.db_name,
                "user": self.user,
                "password": self.password,
                "host": self.host,
                "port": self.port,
                "options": f"-c default_transaction_read_only=on -c search_path={self.table_schema}",
            }
            self._connection = psycopg.connect(**connection_kwargs)
        return self._connection

    def run_query(self, query: str) -> str:
        formatted_sql = query.replace("`", "").split(";")[0]
        log_info(f"Running: {formatted_sql}")
        try:
            conn = self.connection()
            with conn.cursor() as cursor:
                cursor.execute(formatted_sql)
                rows = cursor.fetchall()

                if not rows:
                    return "No output"

                headers = [desc.name for desc in cursor.description]
                results = [",".join(headers)]
                for row in rows:
                    results.append(",".join(str(col) for col in row))

                result_output = "\n".join(results)
                log_debug(f"Query result: {result_output}")
                return result_output
        except Exception as e:
            return f"Query failed: {str(e)}"

    def show_tables(self) -> str:
        stmt = f"SELECT table_name FROM information_schema.tables WHERE table_schema = '{self.table_schema}';"
        return self.run_query(stmt)

    def describe_table(self, table: str) -> str:
        stmt = f"""SELECT column_name, data_type, character_maximum_length 
                   FROM information_schema.columns 
                   WHERE table_name = '{table}' AND table_schema = '{self.table_schema}';"""
        result = self.run_query(stmt)
        return f"{table}\n{result}"

    def summarize_table(self, table: str) -> str:
        # Step 1: Get column metadata
        column_query = f"""
            SELECT column_name, data_type
            FROM information_schema.columns
            WHERE table_name = '{table}' AND table_schema = '{self.table_schema}';
        """
        column_csv = self.run_query(column_query)

        if column_csv.startswith("Query failed") or column_csv == "No output":
            return f"Failed to get columns: {column_csv}"

        lines = column_csv.strip().split("\n")
        column_info = [line.split(",") for line in lines[1:]]  # Skip header

        if not column_info:
            return f"No columns found for table '{table}'"

        summary_lines = ["column,data_type,non_null_count,null_count"]
        for column_name, data_type in column_info:
            summary_query = f"""
                SELECT 
                    '{column_name}' AS column,
                    '{data_type}' AS data_type,
                    COUNT(*) FILTER (WHERE "{column_name}" IS NOT NULL) AS non_null_count,
                    COUNT(*) FILTER (WHERE "{column_name}" IS NULL) AS null_count
                FROM {self.table_schema}.{table};
            """
            result = self.run_query(summary_query)
            if not result.startswith("Query failed"):
                result_lines = result.strip().split("\n")
                if len(result_lines) > 1:
                    summary_lines.append(result_lines[1])  # Get data row only

        return "\n".join(summary_lines)

    def inspect_query(self, query: str) -> str:
        stmt = f"EXPLAIN {query}"
        return self.run_query(stmt)

    def export_table_to_path(self, table: str, path: Optional[str] = None) -> str:
        if path is None:
            path = f"{table}.csv"
        else:
            path = f"{path}/{table}.csv"

        try:
            conn = self.connection()
            # psycopg 3 has no copy_expert(); use cursor.copy() instead
            with conn.cursor() as cursor, open(path, "wb") as f:
                with cursor.copy(
                    f"COPY {self.table_schema}.{table} TO STDOUT WITH (FORMAT CSV, HEADER)"
                ) as copy:
                    for data in copy:
                        f.write(data)
            return f"Exported {table} to {path}"
        except Exception as e:
            return f"Export failed: {str(e)}"

Hope this helps!

I actually tried again using the latest version of Agno and that issue didn’t occur. Thanks team!
But I got this issue instead:

DEBUG Added RunResponse to Memory                                                                                                                  
WARNING  MemoryDb not provided.                                                                                                                    
WARNING  Failed to parse cleaned JSON: 1 validation error for ReasoningSteps                                                                       
           Invalid JSON: trailing characters at line 1 column 1021 [type=json_invalid,                                                             
         input_value='{"reasoning_steps":[{"ti...ue","confidence":1.0}]}', input_type=str]                                                         
             For further information visit https://errors.pydantic.dev/2.11/v/json_invalid                                                         
WARNING  Failed to parse as Python dict: Extra data: line 1 column 1021 (char 1020)                                                                
WARNING  Failed to convert response to response_model                                                                                              
ERROR    Reasoning error: 'str' object has no attribute 'reasoning_steps'
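In case it’s useful while the fix lands, one client-side workaround is to decode only the first JSON value and discard whatever trails it. This is just a sketch; `parse_first_json` is a hypothetical helper, not an Agno API:

```python
import json

def parse_first_json(raw: str):
    # Decode the first complete JSON value and ignore any trailing
    # text the model appended after it.
    obj, _end = json.JSONDecoder().raw_decode(raw)
    return obj

print(parse_first_json('{"reasoning_steps": []}\nstray tokens'))
```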

Hi @eibednejo! Thanks for bringing this up. The team is looking into it and we should have a fix out soon.