I am building an agent for our software. We have a large amount of data, 3000+ products. Users sometimes type their query in a different language and sometimes in English.
For that reason I wanted to load all the product data into a common memory, so that when a user asks anything the agent can find the actual data in that memory.
I stored the product information in a product.json file, and in the instructions I told the agent to search this file. For 50 products it worked fine.
But when I put my actual 3000+ products into the JSON file, I got a token limit exceeded error from OpenAI.
Is there any way I can store all the product information in some kind of memory, so the model can search that memory / knowledge without using too many tokens?
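For reference, this is roughly the pattern I have right now (a simplified sketch rather than my exact code; it assumes product.json is a list of objects and calls the openai Python package directly instead of going through the agent framework):

```python
import json
from openai import OpenAI

client = OpenAI()

# Load every product and paste the whole catalogue into the instructions.
# This is what hits the limit: 3000+ products serialised as JSON is far
# larger than the model's context window.
with open("product.json", encoding="utf-8") as f:
    all_products = json.load(f)

instructions = (
    "You are a product assistant. Answer only from this product data:\n"
    + json.dumps(all_products, ensure_ascii=False)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Do you have this product in blue?"},
    ],
)
print(response.choices[0].message.content)
```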
There will always be limits on how much data any program can handle at one time, and these issues are typically resolved through architecture changes rather than a bigger context. Given the number of products and supporting data you have, the products are presumably organized (or can be organized) by category. Group the product information by category, let the user (or the model) first identify the desired category, and then load only that category's data. That will still probably be too much for some categories, so break those down further into sub-categories. Your program would then start with some sort of AI-assisted category identification step.
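A rough sketch of that two-step flow is below. It assumes each product record has a "category" field, uses the openai Python package directly rather than any particular agent framework, and "gpt-4o-mini" is only a placeholder model name:

```python
import json
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

# Load the full catalogue once and index it by category.
with open("product.json", encoding="utf-8") as f:
    products = json.load(f)

by_category = defaultdict(list)
for product in products:
    by_category[product["category"]].append(product)


def identify_category(user_query: str) -> str:
    """Step 1: the model only sees the category names, not the 3000+ products."""
    category_names = sorted(by_category)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Pick the single best matching product category for "
                           "the user's question. Reply with the category name "
                           "only.\nCategories: " + ", ".join(category_names),
            },
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content.strip()


def answer_from_category(user_query: str) -> str:
    """Step 2: load only the chosen category's products into the context."""
    category = identify_category(user_query)
    subset = by_category.get(category, [])
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Answer using only this product data:\n"
                           + json.dumps(subset, ensure_ascii=False),
            },
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content


print(answer_from_category("Do you have waterproof hiking boots under $100?"))
```

The first call costs only a few hundred tokens (just the category names), and the second call carries only one category's products, so the full file never enters a single request. If one category is still too large, repeat the same step with sub-categories.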
Hey @rahat090255
Thanks for reaching out and supporting Agno. I've shared this with the team; we're working through all requests one by one and will get back to you soon.
If it’s urgent, please let us know. We appreciate your patience!