Hi All,
I believe that LiteLLM is currently not supported, as I could not find any references to it either here (in the community forums) or in the Agno docs.
Since LiteLLM allows calling all LLM APIs using the OpenAI format (Bedrock, Hugging Face, Vertex AI, Together AI, Azure, OpenAI, Groq, etc.), it makes it very easy and convenient to swap models during development and testing.
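To illustrate the convenience I mean, here is a minimal sketch assuming `litellm` is installed and the relevant provider API keys are set in the environment. The helper names and the model strings in the loop are my own placeholders, not anything from Agno or LiteLLM docs:

```python
def build_request(model: str, prompt: str) -> dict:
    """Build the same OpenAI-format payload regardless of provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """Route the request through LiteLLM; only the model string changes."""
    from litellm import completion  # deferred import: requires `pip install litellm`
    response = completion(**build_request(model, prompt))
    return response.choices[0].message.content

if __name__ == "__main__":
    # Swapping providers is just a different model string:
    for model in ("gpt-4o-mini", "groq/llama-3.1-8b-instant", "azure/my-gpt4o-deployment"):
        print(model, "->", ask(model, "Reply with one word."))
```

The point is that the calling code never changes shape; only the `model` string does, which is why it is so handy for development and testing.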
Additionally, LiteLLM Proxy Server (LLM Gateway) supports other advanced functionalities such as cost tracking, rate limiting, and more.
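For context, a rough sketch of a LiteLLM Proxy config, where clients hit one gateway endpoint and the proxy maps a model alias to a real deployment. The alias names, deployment name, and env var here are placeholders of my own, not a tested config:

```yaml
model_list:
  - model_name: gpt-4o                       # alias that clients request
    litellm_params:
      model: azure/my-gpt4o-deployment       # placeholder deployment name
      api_key: os.environ/AZURE_API_KEY      # read key from the environment
```

Cost tracking and rate limiting are then handled centrally at the gateway rather than in each application.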
I was wondering whether support for LiteLLM has ever been considered, and if there is a chance it could be added in the future.