Hey @manu
I tried a couple more changes, but the agents still do not run in parallel. I would be open to some suggestions here.
Thanks!
Hey @Sharan17
- The parser_model field is coming to Teams very soon, we are working on it right now!
- Streaming should work for teams, also with media involved. Are you seeing any errors? Can I see how you are configuring the team? Are you passing both stream and stream_intermediate_steps?
- I am not familiar right now with how GCP is serving LLMs. If the LLMs you are using are OpenAI-like, you can probably use the OpenAILike class to set up the connection. Otherwise we totally should add support; can you share a link to the docs of the product or feature you are using?
- The runs should happen in parallel! Are you using the async methods (e.g. arun)? If you are and it is still not parallel, can I see how you are configuring and calling the team?
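For reference, a minimal sketch of how the async call usually looks (the agent names and model ids here are placeholders, not your actual setup):

import asyncio
from agno.agent import Agent
from agno.models.google import Gemini
from agno.team import Team

# Placeholder members; your real agents go here
writer = Agent(name="Writer", model=Gemini(id="gemini-2.0-flash"))
editor = Agent(name="Editor", model=Gemini(id="gemini-2.0-flash"))
team = Team(members=[writer, editor], model=Gemini(id="gemini-2.5-pro"))

async def main():
    # arun returns a coroutine, so it has to be awaited
    response = await team.arun("your prompt")
    print(response.content)

asyncio.run(main())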
Hey @manu
Thanks for the reply!
- Great to know about the parser_model
- I reverted the change but I think I used both of them.
- I'm planning to deploy via Ollama on GCP or AWS, so it should be easy I suppose.
- My team lead is Gemini 2.5 with mode set to coordinate, and it has a reasoning tool, a description, instructions, expected output, a response model, monitoring, add_datetime_to_instructions, enable_agentic_context, show_member_responses, share_member_interactions=False, debug_mode=False.
- There are 4 team members that should work in parallel on different tasks; each is a Gemini 2.0 Flash model with a description, instructions, expected output and its own response_model (which is part of the final team lead response model). The sketch below shows the rough shape of the setup.
- I am doing team.arun(prompt, files/images), but they still run one after the other in the team.
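A condensed sketch of the setup (schemas, names and ids are simplified placeholders):

from pydantic import BaseModel
from agno.agent import Agent
from agno.models.google import Gemini
from agno.team import Team
from agno.tools.reasoning import ReasoningTools

class MemberResult(BaseModel):  # placeholder member schema
    summary: str

class TeamResult(BaseModel):  # placeholder team schema wrapping the member results
    results: list[MemberResult]

members = [
    Agent(
        name=f"member-{i}",
        model=Gemini(id="gemini-2.0-flash"),
        response_model=MemberResult,
    )
    for i in range(4)
]

team = Team(
    mode="coordinate",
    model=Gemini(id="gemini-2.5-pro"),
    members=members,
    tools=[ReasoningTools()],
    response_model=TeamResult,
    add_datetime_to_instructions=True,
    enable_agentic_context=True,
    show_member_responses=True,
    share_member_interactions=False,
)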
Thanks for the help!
Hey @manu
Just checking in here, do you think this is a Gemini issue, like the API calls can't be run in parallel or something?
Hey @Sharan17
About the parallelization problem, I see what happens. The agent runs won't be parallel in coordinate mode. The collaborate mode is probably what we want to use here.
Hi @manu
Thanks for the update.
Ran into another issue.
I have been using Vertex AI now. I wanted to switch from Gemini to Mistral, as I have Mistral models on my Vertex AI. However, despite setting everything right, it gives me a 404 error. All my GCP settings are correct.
model = Gemini(
    id="mistral-ocr-2505",
    name="Mistral",
    provider="mistral",  # also tried with provider="google"
    vertexai=True,
    project_id="",
    location="",
    temperature=0.0,
)
I also entered the right project_id and location. Basically, the provider doesn't change. I think it's because the path I get in Agno is projects/PROJECT_ID/locations/LOCATION/publishers/google/models/mistral-small-2503, and the google needs to change to mistral.
Please let me know your thoughts on this asap.
Best Regards,
Sharan
Hey @Sharan17, sorry about that. Our engineers will get back to you soon
Hey @Sharan17, sorry for the wait.
Your setup looks OK. Just double-check that the model id, project id and location are correct.
- Have you been able to interact with that Mistral model in any other way, without using Agno?
- What error are you seeing exactly?
Hi @manu
Thanks for getting back.
As mentioned, the URL it is trying to hit is:
(projects/PROJECT_ID/locations/LOCATION/publishers/google/models/mistral-small-2503)
While the URL Vertex AI is telling me needs to have mistral instead of google:
(projects/PROJECT_ID/locations/LOCATION/publishers/mistral/models/mistral-small-2503)
You are right @Sharan17! The problem is indeed with how the google genai library builds that URL. There is no way to pass the publisher on client initialization, so it is not considered when the URL is built.
You can try to manually set the base_url used by the client by setting the GOOGLE_VERTEX_BASE_URL env var. I seem to be able to edit the final URL by doing this.
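For example (a sketch only; the exact URL value is an assumption and depends on how the client joins it with the rest of the request path):

import os

# Override the base URL the google genai client uses for Vertex AI.
# LOCATION is a placeholder; set this before the client is created.
os.environ["GOOGLE_VERTEX_BASE_URL"] = "https://LOCATION-aiplatform.googleapis.com/"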
Let me know if that works.
Hi @manu
Thank you for the support. Unfortunately, I have multiple agents; for some I want to use Mistral and for some Gemini, so this probably won't work.
Might have to try to figure out different ways.
Best Regards,
Sharan
I see. In that case you can also use the client_params when initializing the Gemini class. It will look something like this:
client_params = {"http_options": {"base_url": ...}}
And use different Gemini instances depending on the URL you need to use:
gemini_agent = Agent(model=Gemini(...))
mistral_agent = Agent(model=Gemini(client_params=client_params, ...))
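Putting it together, a minimal sketch (the base_url value is an assumption built from the publisher path you shared, and the model ids are placeholders; adjust to your actual endpoint):

from agno.agent import Agent
from agno.models.google import Gemini

# Assumed Vertex endpoint pointing at the mistral publisher; LOCATION is a placeholder
mistral_client_params = {
    "http_options": {"base_url": "https://LOCATION-aiplatform.googleapis.com/"}
}

gemini_agent = Agent(model=Gemini(id="gemini-2.0-flash", vertexai=True))
mistral_agent = Agent(
    model=Gemini(
        id="mistral-small-2503",
        vertexai=True,
        client_params=mistral_client_params,
    )
)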
Hi @manu
Thanks. Will try this out.
In the meantime I tried working with Nebius AI Studio, since I had a couple of open-source models available there, but it seems the integration doesn't work with media.
I am running:
file = File(content=file_bytes, mime_type=mime_type)
result = self.classifier_agent.run(
    "Classify the document",
    files=[file],
)
I get this error:
{'type': 'literal_error', 'loc': ['body', 'messages', 1, 'content', 'list[union[ChatCompletionContentPartTextParam,ChatCompletionContentPartImageParam,ChatCompletionContentPartVideoParam]]', 0, 'ChatCompletionContentPartVideoParam', 'type'], 'msg': "Input should be 'video_url'", 'input': 'file', 'ctx': {'expected': "'video_url'"}}]}
The same is working with Gemini, OpenAI, Claude etc.
Would really appreciate your help here.
Best Regards,
Sharan
@manu It seems like none of the OpenAILike-related classes are built to handle files/images.
Hey @manu
I tried the client_params approach but am now getting 404 Not Found errors.
Hey @manu @Monali
I think I would need some advice here.
My use case is a team of agents that receives a PDF or image, which is provided to the Agno team.arun as bytes using the File or Image class from Agno media.
I see that most of the OpenAILike-related classes (AI/ML API, Nebius) don't support Agno media.
Additionally, the client_params don't help, as I keep getting 404 errors, and hence I can't use open-source models.
This is a huge blocker for me, and I would like to know the quickest possible way to fix this. Currently I am not in favour of deploying any models on my AWS, but if you feel that would be better then I'm open to hearing from you.
Regards,
Sharan
Hey @Sharan17, sorry for the difficulties you are facing!
- About the files: the implementations of OpenAILike should support receiving the media. It may be that the provider you are using does not support it, or there is a mismatch with accepted types or mime_types (you would see this in the logs), or something else. Can I see how you are setting up the model and running it, plus passing the file (in case it's not exactly as in the snippet you shared earlier)?
- About the client_params: I understand that we were using this just to adjust the URL the agent was hitting. Have you checked how that URL looks now? Are we able to update it using client_params, or not at all?
Hi @manu
Thanks for getting back to me
- For the OpenAILike, I am sending normal mime_types (application/pdf or image/jpg). Nothing new. It seems Nebius and AI/ML both expect a different parameter, which is blocking this.
My model setup is quite straightforward for Nebius:
model = Nebius(
    id="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    api_key=os.getenv("NEBIUS_API_KEY"),
)
For running, I just create the files or images list based on the mime_type and pass it to arun:
team.arun(prompt, files=[File(content=file_bytes, mime_type=mime_type)])
The error I get when running with Nebius or AI/ML:
{'type': 'literal_error', 'loc': ['body', 'messages', 1, 'content', 'list[union[ChatCompletionContentPartTextParam,ChatCompletionContentPartImageParam,ChatCompletionContentPartVideoParam]]', 0, 'ChatCompletionContentPartVideoParam', 'type'], 'msg': "Input should be 'video_url'", 'input': 'file', 'ctx': {'expected': "'video_url'"}}]}
This comes when I try to send a PDF.
- I wasn't able to get the URL printed so far, but it just kept giving 404 errors.
Best Regards,
Sharan