File "C:\Users\jscol\OneDrive\Desktop\Projects\PhiData\venv\Lib\site-packages\ollama\_client.py", line 124, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama.
Ollama is up and running. I'm using basic.py from the GitHub ollama cookbook.
Hi
Thanks for reaching out and for using Agno! I’ve looped in the right engineers to help with your question. We usually respond within 24 hours, but if this is urgent, just let us know, and we’ll do our best to prioritize it.
Appreciate your patience—we’ll get back to you soon!
I hope you’re doing well. To address the ConnectionError you’re encountering, please ensure that the Ollama server is running by executing the following command in your terminal:
curl http://localhost:11434
If the server is active, you should receive a response indicating that “Ollama is running.”
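If you prefer to run the same check from Python, here is a minimal sketch (it only assumes Ollama is listening on the default http://localhost:11434):

# Minimal connectivity check: hits the same endpoint as the curl command above.
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print(resp.read().decode())  # expected: "Ollama is running"
except OSError as exc:
    print(f"Could not reach Ollama: {exc}")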
Also, if you could share the complete error log or a screenshot, it would really help us diagnose the problem more effectively. Thank you!
Here is the debug output:
basic.py
DEBUG *********** Agent ID: b343e97d-6beb-4cae-a312-e4a6f94dccfc ***********
DEBUG *********** Session ID: 9d9efd13-e0fa-49b7-8df6-a0d620c50aab ***********
DEBUG *********** Agent Run Start: e68857f8-25fb-412e-8535-71362dd704ea ***********
DEBUG ---------- Ollama Response Start ----------
DEBUG ============== system ==============
DEBUG <additional_information>
- Use markdown to format your answers.
</additional_information>
DEBUG ============== user ==============
DEBUG Share a 2 sentence horror story
▰▱▱▱▱▱▱ Thinking…
Traceback (most recent call last):
File "c:\Users\jscol\OneDrive\Desktop\Projects\agno\cookbook\models\ollama\basic.py", line 12, in <module>
agent.print_response("Share a 2 sentence horror story")
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 3381, in print_response
run_response = self.run(
^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 869, in run
return next(resp)
^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 592, in _run
model_response = self.model.response(messages=run_messages.messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\models\ollama\chat.py", line 413, in response
response: Mapping[str, Any] = self.invoke(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\models\ollama\chat.py", line 194, in invoke
return self.get_client().chat(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama\_client.py", line 333, in chat
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama\_client.py", line 178, in _request
return cls(**self._request_raw(*args, **kwargs).json())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama\_client.py", line 124, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS
Perhaps an environment variable problem? Is Ollama working for others now?
Ollama is known to misbehave occasionally. We use it in macOS, Windows, and Linux environments, and simply restarting it usually clears this problem. We suspect it is more likely to happen when there is an update waiting to be installed, but we have not confirmed that.
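On the environment variable question: when no host is passed explicitly, the ollama Python client falls back to the OLLAMA_HOST environment variable, so a stale or malformed value there can cause exactly this error. A minimal sketch to check for and pin the address (the URL below is the default; adjust it if your server listens elsewhere):

# Check whether OLLAMA_HOST is set to something unexpected, then pin it explicitly.
import os

print(os.environ.get("OLLAMA_HOST"))  # None means the client falls back to its default
os.environ["OLLAMA_HOST"] = "http://localhost:11434"  # must be set before the client is created

Depending on your Agno version, the Ollama model may also accept a host argument directly, which avoids relying on the environment variable at all.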
I just went through the "Your first Agent" examples with Ollama and the llama3.1:8b model, and everything worked fine.
The steps that I performed to replace OpenAI with Ollama are as follows:
pip install ollama
replace "from agno.models.openai import OpenAIChat" with "from agno.models.ollama import Ollama"
replace model=OpenAIChat(id="gpt-4o"), with model=Ollama(id="llama3.1:8b"),
have Ollama running and pull the model (llama3.1:8b in this case): ollama pull llama3.1:8b
Please NOTE that in my example I ran everything on the same machine, so I didn't have to specify the Ollama URL (http://localhost:11434) anywhere; I assume that is the default.
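Putting those steps together, this is roughly what the modified basic.py looks like; a sketch based on the steps above and the imports visible in the traceback, with markdown=True matching the system prompt shown in the debug output:

# Sketch: basic.py with OpenAIChat swapped out for Ollama.
# Prerequisites: pip install ollama, the Ollama server running, and: ollama pull llama3.1:8b
from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(
    model=Ollama(id="llama3.1:8b"),  # was: OpenAIChat(id="gpt-4o")
    markdown=True,
)
agent.print_response("Share a 2 sentence horror story")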