Integrate AGNO with @modelcontextprotocol/sdk

Hi everybody,

I am learning MCP and I am trying to use AGNO in my solution.
I developed a small MCP server, basically the example provided on GitHub, using the @modelcontextprotocol/sdk framework for Node.js.

I am trying to write an agent with AGNO that uses the tools exposed by my MCP server.

Here is my index.js that implements the MCP server (basically the example from modelcontextprotocol/typescript-sdk, the official TypeScript SDK for Model Context Protocol servers and clients):



```javascript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from 'express';
import { z } from 'zod';

const port = 5555;

const server = new McpServer({
    name: 'demo-server',
    version: '1.0.0'
});

server.registerTool(
    'add',
    {
        title: 'Addition Tool',
        description: 'Add two numbers',
        inputSchema: { a: z.number(), b: z.number() },
        outputSchema: { result: z.number() }
    },
    async ({ a, b }) => {
        const result = a + b;
        return {
            content: [{
                type: 'text',
                text: `The sum of ${a} and ${b} is ${result}.`
            }],
            structuredContent: { result },
            isError: false
        };
    }
);

const app = express();
app.use(express.json());

// Stateless mode: a fresh transport per request, no session id
app.post('/mcp', async (req, res) => {
    const transport = new StreamableHTTPServerTransport({
        sessionIdGenerator: undefined,
        enableJsonResponse: true
    });

    res.on('close', () => {
        transport.close();
    });

    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
});

app.listen(port, '0.0.0.0', () => {
    console.log(`Demo MCP Server running on http://localhost:${port}/mcp`);
}).on('error', error => {
    console.error(error);
    process.exit(1);
});
```
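Before wiring up the agent, it can help to smoke-test the server with a raw JSON-RPC request and confirm the `add` tool answers correctly on its own. A minimal sketch of the request payload the client sends for the MCP `tools/call` method (the helper name `make_tools_call` is mine, not part of any SDK; the field names follow the JSON-RPC 2.0 / MCP conventions):

```python
import json


def make_tools_call(call_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request body for the MCP 'tools/call' method."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


# This is the kind of body the agent POSTs to http://127.0.0.1:5555/mcp
request = make_tools_call(1, "add", {"a": 2, "b": 3})
print(json.dumps(request, indent=2))
```

POSTing this body (with `Content-Type: application/json` and an `Accept` header that allows `application/json`) should come back with a result containing both the text content and `structuredContent: { result: 5 }` from the server above.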

And this is my Python code for the agent:

```python
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.mcp import MCPTools


async def run_agent(message: str) -> None:
    mcp_tools = MCPTools(
        transport="streamable-http",
        url="http://127.0.0.1:5555/mcp",
        refresh_connection=True
    )
    await mcp_tools.connect()

    agent = Agent(
        model=OpenAIChat(
            id="my-llama-server-instance",
            base_url="http://localhost:8123/v1",
            api_key="none",
        ),
        tools=[mcp_tools],
        markdown=True,
        debug_mode=True,
        debug_level=2,  # more detailed logs
    )

    await agent.aprint_response(input=message, stream=True, markdown=True)
    await mcp_tools.close()


if __name__ == "__main__":
    asyncio.run(run_agent("give me the sum of 2 and 3"))
```
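One framework-independent way to diagnose (and contain) an endless `tools/call` loop is a small guard that refuses the same tool call once it has been repeated too often; if the guard trips, the model is ignoring the tool result rather than the transport being broken. A hypothetical sketch (the `RepeatedToolCallGuard` class is mine, not part of AGNO):

```python
import json


class RepeatedToolCallGuard:
    """Refuse a (tool, arguments) pair once it has been requested too many times."""

    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        self.counts: dict[str, int] = {}

    def check(self, tool_name: str, arguments: dict) -> bool:
        """Return True while the call is allowed, False once the limit is hit."""
        key = tool_name + ":" + json.dumps(arguments, sort_keys=True)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.max_repeats


guard = RepeatedToolCallGuard(max_repeats=3)
for _ in range(5):
    allowed = guard.check("add", {"a": 2, "b": 3})
print(allowed)  # after 5 identical calls the guard reports False
```

AGNO's `Agent` may also expose a parameter to cap tool calls directly (e.g. `tool_call_limit`); check the version of the library you are running.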

I started my LLM with llama-server.exe this way:

```
.\llama-b7023-cuda\llama-server.exe --model .\models\llama-3.2-3b-instruct-q8_0.gguf --port 8123 --no-webui --chat-template chatml --jinja
```

If I understand correctly, that should be a bare minimum setup, but I am getting a behaviour I do not understand. My agent makes a first call that reaches the LLM, then the agent performs a call to the 'add' tool.
The problem is that the agent looks stuck in an endless loop calling the tool: I see an HTTP call from the agent with method 'tools/call' repeated indefinitely.

Am I missing something? Both AGNO and @modelcontextprotocol/sdk are fully MCP compliant, so they should "speak" the same language.

Thanks everyone for the help.

Hi @akirapix, thank you for reaching out and supporting Agno. I've shared this with the team; we're working through all queries one by one and will get back to you soon. If it's urgent, please let us know. We appreciate your patience!

After some struggling, it looks like llama-server (the standalone exe on Windows) is not compatible with AGNO, because it does not support the hidden fields/data that AGNO itself injects into the requests.
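If the root cause really is extra, framework-injected fields in the OpenAI-compatible request that llama-server rejects or mishandles, one possible workaround is to sanitize each message down to the fields the server understands before sending. A purely hypothetical sketch: the whitelist below is an assumption, and `agno_session_id` is an invented example of an injected field, not a real AGNO key:

```python
# Hypothetical sanitizer: keep only the fields a strict OpenAI-compatible
# server is likely to accept; any extra framework metadata is dropped.
# The whitelist is an assumption -- verify against your server's API.
ALLOWED_KEYS = {"role", "content", "name", "tool_calls", "tool_call_id"}


def sanitize_messages(messages: list[dict]) -> list[dict]:
    """Strip unknown keys from each chat message before the HTTP call."""
    return [{k: v for k, v in m.items() if k in ALLOWED_KEYS} for m in messages]


messages = [
    {"role": "user", "content": "give me the sum of 2 and 3",
     "agno_session_id": "abc123"},  # invented example of an injected field
]
clean = sanitize_messages(messages)
print(clean)
```

Whether this can be hooked into AGNO's request pipeline depends on the library version; capturing the raw HTTP traffic (e.g. with a logging proxy) would confirm which fields llama-server actually chokes on.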