You just released Agno, which the ‘news’ release indicates is a rebranding of Phidata. But comparing the old codebase structure to the new Agno codebase on GitHub, there is a major reordering of where the code is stored, and you removed/retired a significant amount of code. Furthermore, you also released a new PyPI library for agno. It appears that there has been significant cleanup and improvement in the code base.
BUT, I can’t find any documentation that describes your recommended method(s) for migrating our code from Phidata to Agno. While the two are similar, there are differences… And for those of us who did code patching, we might need to redo some of that.
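To give a concrete (if simplified) picture of what we would need to migrate, our current Phidata-based code looks roughly like this; the exact imports and parameters below are from memory, so treat them as illustrative rather than exact:

```python
# Roughly what our current Phidata-based agent setup looks like (illustrative only).
from phi.agent import Agent
from phi.model.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Be concise."],
    markdown=True,
)
agent.print_response("Summarize our deployment checklist.")
```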
Is there a migration guide, and if not, are you going to create one? It would also be VERY helpful if, in that guide, you gave a philosophy statement on how and why the code was restructured and simplified. That would greatly help in finding, fixing, and adapting to your new code structure. LLM hallucinations are already a major challenge, and we’d like to mitigate any new ones.
You are correct: along with the rebrand, we have overhauled the framework to be more performant and to squeeze every ounce of performance out of the system. The end result is that Agno Agents instantiate 5000x faster than LangGraph (see our performance guide for the numbers).
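If you want a rough local sanity check of instantiation time, something like the sketch below works; note this is just an illustration, not the actual benchmark behind the numbers in the performance guide:

```python
# Rough sketch: time how long it takes to construct an Agent repeatedly.
# This is NOT the official benchmark; see the performance guide for real numbers.
import time

from agno.agent import Agent
from agno.models.openai import OpenAIChat

def avg_instantiation_seconds(n: int = 1000) -> float:
    """Return the average wall-clock seconds to construct an Agent, over n runs."""
    start = time.perf_counter()
    for _ in range(n):
        Agent(model=OpenAIChat(id="gpt-4o"))
    return (time.perf_counter() - start) / n

if __name__ == "__main__":
    print(f"avg Agent instantiation time: {avg_instantiation_seconds():.6f}s")
```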
We have added a migration guide under General in the docs. It covers the more tactical changes you need to make to migrate. That said, I will also write up a philosophy statement on why we refactored the code, what the advantages of the new structure are, and how to fix any issues.
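As a quick illustration of the kind of tactical change the guide covers (a simplified sketch rather than an exhaustive list), for most users the bulk of the work is the package rename from phi to agno:

```python
# Simplified sketch of the same agent after migrating from Phidata to Agno.
# Note the package rename (phi -> agno) and that the models module is now plural.
from agno.agent import Agent
from agno.models.openai import OpenAIChat

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Be concise."],
    markdown=True,
)
agent.print_response("Summarize our deployment checklist.")
```

The guide covers the remaining changes in detail.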
I would like to thank you again for building with Agno. I am here to help as needed, and thank you for bearing with us.
Thanks for the quick reply and for adding the new topic under General.
May I also suggest that you add a Release Notes category, where you identify what has been added, changed, and fixed, along with recommendations on how to take advantage of the changes.
Your new documentation is cleaner and more precise… and I realized that I missed some significant additions you made to Agent… which are great, but will require specific customizations depending on the LLM someone is using. As you know, there can be significant differences between working in a closed, offline environment and working online against public cloud facilities. We are strictly offline and secure…
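To illustrate what we mean, everything on our side has to run against locally hosted models rather than a public endpoint; here is a rough sketch of the kind of configuration we are after (the agno.models.ollama path is my assumption from skimming the new docs, so please correct me if it is wrong):

```python
# Sketch of an agent wired to a locally hosted model for an offline, secure environment.
# The agno.models.ollama import path is an assumption from the new docs;
# substitute whatever local backend you actually run.
from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id="llama3.1"),  # served by a local Ollama instance, no internet access
    markdown=True,
)
agent.print_response("Answer entirely from local knowledge: what model are you?")
```

Thanks…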