Support for LiteLLM

Hi All,

I believe that LiteLLM is currently not supported, as I could not find any references to it either here (in the community forums) or in the Agno docs.

Since LiteLLM allows calling all LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.), it is very easy and convenient to swap models during development and testing.
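To illustrate what I mean, here's a minimal sketch of how LiteLLM normalizes provider calls; the model strings are just examples, and the response follows the OpenAI shape regardless of provider:

```python
import litellm

# The same call works across providers; only the model string changes,
# e.g. "groq/llama3-8b-8192" or "bedrock/anthropic.claude-3-sonnet-20240229-v1:0".
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Responses come back in the OpenAI format, whichever backend served them.
print(response.choices[0].message.content)
```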

Additionally, LiteLLM Proxy Server (LLM Gateway) supports other advanced functionalities such as cost tracking, rate limiting, and more.
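Because the proxy exposes an OpenAI-compatible endpoint, you can point the standard OpenAI client at it. A rough sketch (assuming the proxy runs on its default local port, and `sk-...` stands in for a virtual key issued by the proxy):

```python
from openai import OpenAI

# Point the standard OpenAI client at a running LiteLLM Proxy instance.
client = OpenAI(
    api_key="sk-...",                  # virtual key from the proxy (placeholder)
    base_url="http://localhost:4000",  # default LiteLLM Proxy address (assumption)
)

response = client.chat.completions.create(
    model="gpt-4o",  # routed by the proxy, which handles cost tracking, rate limits, etc.
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```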

I was wondering whether support for LiteLLM has ever been considered and if there is a chance of it being added in the future.

Hi @Leonid
Thanks for reaching out and for using Agno! I’ve looped in the right engineers to help with your question. We usually respond within 24 hours, but if this is urgent, just let us know, and we’ll do our best to prioritize it.
Appreciate your patience; we'll get back to you soon! :smile:

Hey @Leonid, adding LiteLLM support is on our roadmap.
