Abnormal tool-calling warnings cause repeated requests to the LLM while building an agent with Gemini

I am building an agent based on gemini-2.5-pro that uses the Tavily tool to search for and analyze information from the web.

Whenever the agent uses the Tavily tool for an online search, the following warning is triggered:

WARNING:google_genai.types:Warning: there are non-text parts in the response: ['function_call'], returning concatenated parsed result from text parts. Check the full candidates.content.parts accessor to get the full model response.

After this warning is triggered, the agent requests the LLM again to decide whether to call Tavily. This happens multiple times (around 5-8 times) during a single interaction with the agent.

This issue causes an intolerable delay in my agent's interactions with users. At the same time, I cannot tell whether the agent is actually able to use Tavily successfully to retrieve real web data.
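For context, the warning means the model's reply contained both text parts and a `function_call` part, and only the text parts were concatenated into the parsed result. A minimal sketch of what iterating `response.candidates[0].content.parts` looks like, using mock part objects (the Tavily function name below is hypothetical, purely for illustration):

```python
# Sketch: split a Gemini response's parts into text vs. function calls,
# as the warning suggests via the candidates[0].content.parts accessor.
# The part objects here are mocks standing in for google-genai Part objects.
from types import SimpleNamespace


def split_parts(parts):
    """Separate text parts from function-call parts of a model response."""
    texts = [p.text for p in parts if getattr(p, "text", None)]
    calls = [p.function_call for p in parts if getattr(p, "function_call", None)]
    return texts, calls


# Mock parts mirroring a mixed response: some text plus a tool call
# (function name "web_search_using_tavily" is an assumption, not from the logs).
parts = [
    SimpleNamespace(text="Searching for segment data...", function_call=None),
    SimpleNamespace(
        text=None,
        function_call=SimpleNamespace(
            name="web_search_using_tavily",
            args={"query": "Gen Z demographics"},
        ),
    ),
]

texts, calls = split_parts(parts)
print(texts)          # only the text parts are concatenated by the SDK
print(calls[0].name)  # the function call is still there, just not in .text
```

Inspecting the parts this way is one method to confirm whether the tool call is really being made, independent of the warning.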

=====

Here is how I set up Gemini:

gemini = Gemini(
    id="gemini-2.5-pro",
    vertexai=True,
    project_id="My valid project name",
    location="global",
    api_key="My valid google api key",
    temperature=0.5,
)

And here is how I build Agent:

from os import getenv

tavily = TavilyTools(api_key=getenv("TAVILY_API_KEY"), format="markdown")

segment_analysis_agent = Agent(
    name="SegmentAnalysisAgent",
    model=gemini,
    tools=[tavily],
    reasoning=True,
    reasoning_model=gemini,
    description="Market segmentation analysis expert, identifying and analyzing multiple target segments.",
    instructions="""
        You are a professional market segmentation expert. Please identify and deeply analyze at least 2 different market segments.

        **Important: The Importance of Proactive Search**
        - If user behavior data, demographics, or market segmentation research is needed, proactively search.
        - Search for information on the targeted user group's demographics, psychographics, behavioral patterns, etc.
        - Use real market research data to support your analysis.

        Analysis Requirements:
        1. Must identify at least 2 different market segments.
        2. For each market segment, perform the following actions:
           - **Use search tools** to research the demographic information of that segment (age, gender, income, education level, etc.)
           - **Use search tools** to research the psychographic information of that segment (lifestyle, values, needs, etc.)
           - **Use search tools** to research the behavioral characteristics of that segment (buying habits, spending power, preferences, etc.)
        3. Evaluate market size potential and product fit.

        Output Format: Return a list of MarketSegment objects.
    """,
    response_model=SegmentationMatrix,
    show_tool_calls=True,
    markdown=True,
)

Is there any effective solution to this problem? :smiling_face_with_tear:

By the way, Agno is the best agent-building framework I have ever used. :face_blowing_a_kiss:

Hey @Ran, thanks for reaching out and supporting Agno. I’ve shared this with the team, we’re working through all requests one by one and will get back to you soon.
If it’s urgent, please let us know. We appreciate your patience!

First of all, thanks for your attention.

After testing it myself, I found that the Tavily tool can be used while interacting with the agent.

But something in the structured data seems to go missing when Gemini tries to parse the web content retrieved by Tavily.

Hope the details I provided are helpful. :slightly_smiling_face:

Hey @Ran

This seems to be the expected behavior of the Gemini API right now when there is non-text content in a function-call response.

I see how this is not ideal and would make many flows slower. Based on this conversation: Warning: there are non-text parts in the response · Issue #850 · googleapis/python-genai · GitHub - the developers on the Gemini side also see this as problematic and are working on a fix.

In the meantime, maybe there is a way to make the Tavily tools you are using return only text content? That should stop the problematic log.
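If the log noise itself is the main pain point, another stopgap is to filter just that one message off the logger it is emitted on. This only hides the log line, it does not change the repeated-request behavior; the logger name `google_genai.types` is taken from the warning quoted above, so adjust it if your SDK version logs elsewhere:

```python
# Workaround sketch: drop only the "non-text parts" warning from the
# google_genai.types logger, leaving all other log records untouched.
import logging


class NonTextPartsFilter(logging.Filter):
    """Suppress the 'non-text parts in the response' warning, keep everything else."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record before it reaches any handler.
        return "non-text parts in the response" not in record.getMessage()


# Attach the filter to the logger named in the warning message.
logging.getLogger("google_genai.types").addFilter(NonTextPartsFilter())
```

Filtering on the message substring is narrower than raising the logger level to ERROR, so genuine warnings from the SDK still come through.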