I am using SentenceTransformerEmbedder to embed a large MySQL database and add it to my knowledge_base, but I cannot use SentenceTransformerEmbedder in my agent: the Gemini embedder has a dimension mismatch with SentenceTransformerEmbedder, so it throws an error as well.
I cannot use GeminiEmbedder either, because its rate limit won't allow embedding a large database.
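For context on the mismatch: all-MiniLM-L6-v2 (the default SentenceTransformer model) emits 384-dimensional vectors, while Gemini's embedding models typically emit 768 (e.g. text-embedding-004), so a query embedded with one embedder cannot be scored against a collection built with the other. A minimal, self-contained illustration of why the search fails (pure Python, no services; the specific dimensions are the only model-tied assumption):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity, as Qdrant computes it for Distance.COSINE."""
    if len(a) != len(b):
        # This is effectively what happens when the agent's embedder and the
        # collection's stored vectors disagree on dimensionality.
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored_vector = [0.1] * 384  # from SentenceTransformerEmbedder (all-MiniLM-L6-v2)
query_vector = [0.1] * 768   # from a Gemini embedding model

try:
    cosine_similarity(stored_vector, query_vector)
except ValueError as e:
    print(e)  # dimension mismatch: 384 vs 768
```

The practical consequence: whichever embedder writes the collection must also be the one the agent's knowledge base uses at query time.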
# Imports assumed from agno; adjust module paths to your installed version
from agno.agent import Agent
from agno.embedder.sentence_transformer import SentenceTransformerEmbedder
from agno.knowledge.agent import AgentKnowledge
from agno.memory.v2.db.sqlite import SqliteMemoryDb
from agno.memory.v2.memory import Memory
from agno.models.google import Gemini
from agno.vectordb.qdrant import Qdrant

embedder = SentenceTransformerEmbedder(dimensions=384)
vector_db = Qdrant(collection="my_collection", url="http://localhost:6333", embedder=embedder)
knowledge_base = AgentKnowledge(vector_db=vector_db)

memory_db = SqliteMemoryDb(table_name="user_memories", db_file="tmp/memory.db")
memory = Memory(db=memory_db)

agent = Agent(
    model=Gemini(
        id="gemini-2.0-flash",
        api_key="***",
    ),
    knowledge=knowledge_base,
    memory=memory,
    markdown=True,
    enable_agentic_memory=True,
    enable_user_memories=True,
    add_history_to_messages=True,
)
Upsert code:
import logging
import uuid

import mysql.connector
from qdrant_client import QdrantClient, models
from qdrant_client.models import PointStruct

# Client assumed to point at the same local Qdrant instance
qdrant_client = QdrantClient(url="http://localhost:6333")

qdrant_client.recreate_collection(
    collection_name="my_collection",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

connection = mysql.connector.connect(
    host="***",
    user="***",
    password="***",
    database="***",
)
cursor = connection.cursor()
# post_content, post_title, post_excerpt
cursor.execute("SELECT post_content, post_title FROM jf8hky_posts")
rows = cursor.fetchall()
connection.close()
logging.info(f"Fetched {len(rows)} rows from the database.")

texts = [str(row[0]) for row in rows if row[0] is not None and str(row[0]).strip()]
titles = [row[1] for row in rows if row[0] is not None and str(row[0]).strip()]
logging.info(f"Prepared {len(texts)} texts for embedding.")
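A side note on the two comprehensions above: they repeat the same filter, so texts and titles only stay aligned as long as both conditions are kept identical. A single pass over the rows avoids that coupling (hypothetical helper, pure Python, illustrative data):

```python
def prepare_rows(rows):
    """Filter rows once so each kept text stays paired with its title."""
    pairs = []
    for content, title in rows:
        if content is not None and str(content).strip():
            pairs.append((str(content), title))
    return pairs

# Illustrative rows in the same (post_content, post_title) shape as the query
rows = [("hello world", "Post A"), (None, "Post B"), ("   ", "Post C"), ("body", "Post D")]
pairs = prepare_rows(rows)
texts = [text for text, _ in pairs]
titles = [title for _, title in pairs]
print(texts)   # ['hello world', 'body']
print(titles)  # ['Post A', 'Post D']
```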
# Prepare points for Qdrant
batch_size = 200
for i in range(0, len(texts), batch_size):
    logging.info(f"Processing batch {i // batch_size + 1}")
    batch_texts = texts[i:i + batch_size]
    batch_embeddings = [embedder.get_embedding(text) for text in batch_texts]
    points = [
        PointStruct(
            id=str(uuid.uuid4()),
            vector=embedding,
            payload={"text": text},
        )
        for text, embedding in zip(batch_texts, batch_embeddings)
    ]
    qdrant_client.upload_points(
        collection_name="my_collection",
        points=points,
        batch_size=batch_size,
        parallel=4,
    )
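The loop above slices the texts into fixed-size batches and embeds one text per call. The batching itself can be factored into a small reusable helper; here is a pure-Python sketch, where `embed` is a stand-in for `embedder.get_embedding` so it runs without any model:

```python
def batched(items, size):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def embed(text: str) -> list[float]:
    # Stand-in for embedder.get_embedding(text); returns a dummy 384-dim vector.
    return [float(len(text))] * 384

batches = list(batched(["a", "bb", "ccc"], 2))
print(batches)  # [['a', 'bb'], ['ccc']]

vectors = [embed(text) for text in batches[0]]
print(len(vectors[0]))  # 384
```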
On running this code I get an error in the console, and the response does not use my data:
agent.run("Query here")
Console logs:
2025-05-14 21:59:02,303 - INFO - AFC is enabled with max remote calls: 10.
2025-05-14 21:59:03,706 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent "HTTP/1.1 200 OK"
2025-05-14 21:59:03,712 - INFO - AFC remote call 1 is done.
2025-05-14 21:59:03,771 - INFO - Use pytorch device_name: mps
2025-05-14 21:59:03,771 - INFO - Load pretrained SentenceTransformer: sentence-transformers/all-MiniLM-L6-v2
Batches: 100%|██████████| 1/1 [00:00<00:00, 3.44it/s]
2025-05-14 21:59:09,380 - INFO - HTTP Request: GET http://localhost:6333 "HTTP/1.1 200 OK"
2025-05-14 21:59:09,412 - INFO - HTTP Request: POST http://localhost:6333/collections/my_collection/points/search "HTTP/1.1 200 OK"
ERROR Error searching for documents: 'name'
2025-05-14 21:59:09,421 - INFO - AFC is enabled with max remote calls: 10.
2025-05-14 21:59:11,118 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent "HTTP/1.1 200 OK"
2025-05-14 21:59:11,123 - INFO - AFC remote call 1 is done.
2025-05-14 21:59:11,147 - INFO - AFC is enabled with max remote calls: 10.
2025-05-14 21:59:12,590 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent "HTTP/1.1 200 OK"
2025-05-14 21:59:12,594 - INFO - AFC remote call 1 is done.
2025-05-14 21:59:14,027 - INFO - HTTP Request: POST https://api.agno.com/v1/telemetry/agent/run/create "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58265 - "POST /query HTTP/1.1" 200 OK
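One observation on the `Error searching for documents: 'name'` line: the upsert stores each payload as `{"text": ...}`, while the KeyError suggests the knowledge-base search expects a `name` key in every point's payload (agno's Qdrant integration appears to rebuild documents from keys like `name`/`content`/`meta_data`; treat the exact schema as an assumption to verify against the version you run). A sketch of a payload shaped that way, using plain dicts so it runs without qdrant_client:

```python
# Hypothetical payload shape for a search that reads payload["name"] and
# payload["content"]; the exact keys agno expects are an assumption to verify.
def build_payload(title: str, text: str) -> dict:
    return {
        "name": title,
        "content": text,
        "meta_data": {"source": "jf8hky_posts"},
    }

payload = build_payload("Post A", "hello world")
print(sorted(payload))  # ['content', 'meta_data', 'name']
```

If the schema checks out, the fix would be to build payloads this way in the upsert loop instead of `{"text": text}`, or to load the documents through the knowledge base itself so it writes its own payload format.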