Logging LLM calls

Is there any way to log the calls made to the LLM and its responses? Something like tool_hooks or similar. I see quite a few errors like these (HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-04-17:generateContent “HTTP/1.1 400 Bad Request”) in my logs and I’d like to know why they are bad requests.

Yes, there are several ways to log LLM calls and responses in both Python and TypeScript. This can help you debug those "bad request" errors you're seeing. Here's a single-agent Team example:

from pydantic import BaseModel

from agno.agent import Agent
from agno.models.google import Gemini
from agno.team import Team

# SumTool and logger_hook are defined elsewhere in the project.


class OperationResponse(BaseModel):
    result: int


class AgentMathExpert:

    def configure(self):
        gemini_model_id = "gemini-2.5-flash-preview-04-17"
        gemini_api_key = "XXXXXXXXX"

        self.sum_expert = Agent(
            name="Sum expert",
            model=Gemini(id=gemini_model_id, api_key=gemini_api_key),
            role="A sum expert",
            instructions=[
                "you are a sum expert, you are given a list of numbers and you need to sum them",
                "if the tool fails, retry 5 times with exponential backoff",
            ],
            tools=[SumTool()],
            tool_hooks=[logger_hook],
            monitoring=True,
            add_datetime_to_instructions=True,
        )

        self.hn_team = Team(
            name="Math Team",
            mode="coordinate",
            monitoring=True,
            tools=[
                # ReasoningTools(add_instructions=True),
            ],
            model=Gemini(id=gemini_model_id, api_key=gemini_api_key),
            members=[self.sum_expert],
            instructions=[
                "use the members to execute the operations i will give you"
            ],
            response_model=OperationResponse,
            show_tool_calls=True,
            markdown=True,
            debug_mode=True,
            show_members_responses=True,
            enable_agentic_context=True,
            share_member_interactions=True,
            enable_team_history=True,
            num_of_interactions_from_history=2,
            success_criteria="the team has generated a valid result for the provided operation.",
            use_json_mode=True,
        )

    def execute_prompt(self, prompt: str):
        response = self.hn_team.run(prompt)
        return response
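For completeness, here is a minimal sketch of the `logger_hook` referenced above, following Agno's tool-hook pattern in which the hook receives the function name, the callable, and its arguments, and is responsible for invoking the tool itself. Note that tool hooks only capture tool invocations, not the raw payload sent to the LLM endpoint, so on their own they won't reveal the cause of the 400 errors:

```python
import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("tool_audit")


def logger_hook(function_name: str, function_call: Callable, arguments: Dict[str, Any]):
    """Log every tool invocation and its result.

    Agno-style tool hooks wrap the tool call, so the hook must invoke
    the callable itself and return its result.
    """
    logger.info("Tool %s called with %s", function_name, arguments)
    result = function_call(**arguments)
    logger.info("Tool %s returned %s", function_name, result)
    return result
```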

Hi @ivanf,
thanks for reaching out and supporting Agno!
We've shared this with the team and are working through requests one by one; we'll get back to you as soon as we can.

Hi @ivanf !

Just to make sure I fully understand the concern — we’re looking to log the complete payload being sent to the LLM endpoint exactly as it’s constructed, so we can easily identify the cause of 400 Bad Request errors. Is that correct?

If so, that’s definitely a valuable feature request. While we do currently log the inputs at the agent level, they aren’t formatted in the exact structure that gets sent to the API, which makes debugging harder.

Yes, it is, Yash. Thanks a lot!

Are there any updates on this?

Hi! Yes, we added a debug_level param on the Agent and Team. Here's a cookbook showing its use.

Hi, I am looking for something like a callback function to log the prompt, response, tools, etc. of each LLM call.
It is also needed for pricing and auditing.
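Until a built-in callback exists, a thin wrapper can serve that purpose: log the prompt, response, latency, and (if the response object exposes it) token usage as a structured JSON record for pricing and auditing. A minimal sketch, with `model_fn` standing in for whatever call your client exposes:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")


def audited_call(model_fn, prompt: str, **kwargs):
    """Call model_fn(prompt, **kwargs) and emit a structured audit record."""
    start = time.time()
    response = model_fn(prompt, **kwargs)
    record = {
        "prompt": prompt,
        "response": getattr(response, "content", str(response)),
        # Token usage enables pricing; None if the client doesn't expose it.
        "usage": getattr(response, "usage", None),
        "latency_s": round(time.time() - start, 3),
    }
    audit_log.info(json.dumps(record, default=str))
    return response
```

Because every call funnels through one function, the same record can later be shipped to a database or billing pipeline instead of the log.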