Ollama not working

File "C:\Users\jscol\OneDrive\Desktop\Projects\PhiData\venv\Lib\site-packages\ollama_client.py", line 124, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama.

Ollama is up and running. I'm using the basic.py example from the Ollama cookbook on GitHub.

Hi
Thanks for reaching out and for using Agno! I’ve looped in the right engineers to help with your question. We usually respond within 24 hours, but if this is urgent, just let us know, and we’ll do our best to prioritize it.
Appreciate your patience; we'll get back to you soon! :smile:

Hey @jscoltock

I hope you’re doing well. To address the ConnectionError you’re encountering, please ensure that the Ollama server is running by executing the following command in your terminal:

curl http://localhost:11434

If the server is active, you should receive a response indicating that “Ollama is running.”
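If you would rather check from Python (the same environment the agent runs in), a minimal reachability probe using only the standard library might look like the sketch below. The URL and timeout are assumptions based on Ollama's default port; this only confirms an HTTP server answers, not which server it is:

```python
import urllib.request
import urllib.error

def ollama_reachable(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url`.

    A running Ollama instance responds 200 with the body "Ollama is running".
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat all as "not reachable".
        return False

if __name__ == "__main__":
    print("Ollama reachable:", ollama_reachable())
```

Running this from the same virtualenv as the agent can also surface proxy or firewall differences between your shell and Python.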

Also, if you could share the complete error log or a screenshot, that would really help us diagnose the problem. Thank you!

thanks for the quick response. Here is the curl output:
PS C:\Users\jscol\OneDrive\Desktop\Projects\agno> curl http://localhost:11434

StatusCode : 200
StatusDescription : OK
Content : Ollama is running
RawContent : HTTP/1.1 200 OK
Content-Length: 17
Content-Type: text/plain; charset=utf-8
Date: Mon, 10 Feb 2025 19:56:25 GMT

                Ollama is running

Forms : {}
Headers : {[Content-Length, 17], [Content-Type, text/plain; charset=utf-8], [Date, Mon, 10 Feb 2025 19:56:25 GMT]}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 17

Here is the debug output:
basic.py
DEBUG *********** Agent ID: b343e97d-6beb-4cae-a312-e4a6f94dccfc ***********
DEBUG *********** Session ID: 9d9efd13-e0fa-49b7-8df6-a0d620c50aab ***********
DEBUG *********** Agent Run Start: e68857f8-25fb-412e-8535-71362dd704ea ***********
DEBUG ---------- Ollama Response Start ----------
DEBUG ============== system ==============
DEBUG <additional_information>
- Use markdown to format your answers.
</additional_information>
DEBUG ============== user ==============
DEBUG Share a 2 sentence horror story
▰▱▱▱▱▱▱ Thinking…
Traceback (most recent call last):
File "c:\Users\jscol\OneDrive\Desktop\Projects\agno\cookbook\models\ollama\basic.py", line 12, in <module>
agent.print_response("Share a 2 sentence horror story")
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 3381, in print_response
run_response = self.run(
^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 869, in run
return next(resp)
^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\agent\agent.py", line 592, in _run
model_response = self.model.response(messages=run_messages.messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\models\ollama\chat.py", line 413, in response
response: Mapping[str, Any] = self.invoke(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\agno\models\ollama\chat.py", line 194, in invoke
return self.get_client().chat(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama_client.py", line 333, in chat
return self._request(
^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama_client.py", line 178, in _request
return cls(**self._request_raw(*args, **kwargs).json())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jscol\OneDrive\Desktop\Projects\agno\venv\Lib\site-packages\ollama_client.py", line 124, in _request_raw
raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS

Perhaps it's an environment variable problem? Is Ollama working for others right now?

Ollama's behavior is known to change occasionally between releases. We use it in Mac, Windows, and Linux environments, and simply restarting it usually clears the problem. We suspect it is more likely to happen when there is an uninstalled update pending, but we have not confirmed that.

Hi,

I just went through the "Your first Agent" examples with Ollama and the llama3.1:8b LLM, and everything worked fine.

The steps that I performed to replace OpenAI with Ollama are as follows:

  1. pip install ollama
  2. replace "from agno.models.openai import OpenAIChat" with "from agno.models.ollama import Ollama"
  3. replace 'model=OpenAIChat(id="gpt-4o"),' with 'model=Ollama(id="llama3.1:8b"),'
  4. have Ollama running and pull the model (llama3.1:8b in this case): ollama pull llama3.1:8b

Please NOTE that in my example I ran everything on the same machine, so I didn't have to specify the Ollama URL (http://localhost:11434) anywhere; presumably that is the default.

I hope this helps.

Hello @jscoltock! Can you please try running: ollama run llama3.1:8b in your terminal?

Also, please consider upgrading to Agno from phidata if you have not done so yet. Agents are now faster and leaner than before.

Basic agent:

from agno.agent import Agent
from agno.models.ollama import Ollama

agent = Agent(model=Ollama(id="llama3.1:8b"), markdown=True)
agent.print_response("Share a 2 sentence horror story")

You can close this. An Ollama update seems to have fixed the issue.
