How do I hit Ollama hosted on a GPU server from a client machine?

Hey there, I have hosted Ollama with `ollama serve` on a remote GPU machine on my network, and I am currently hitting it through Agno's OpenAI-like endpoint, which gives me errors such as "cannot unmarshal JSON into Go struct" and so on.
I want to use the `Ollama` and `OllamaTools` classes to hit the Ollama instance hosted on my server at the xx.xx.xxx:11434 endpoint. How can I achieve that?
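For what it's worth, here is a minimal sketch of wiring Agno's Ollama classes to a remote server, using the `client=` pattern these classes accept. The address and model id below are placeholders, and this is an untested configuration sketch, not a confirmed fix:

```python
from ollama import Client as OllamaClient
from agno.models.ollama import Ollama, OllamaTools

# Point the ollama client at the remote GPU server instead of localhost.
# 192.168.1.50 is a placeholder; use your server's actual address.
client = OllamaClient(host="http://192.168.1.50:11434", timeout=180)

# Hand the preconfigured client to the model classes; "llama3.1:latest"
# is a placeholder model id.
model = Ollama(id="llama3.1:latest", client=client)
tools_model = OllamaTools(id="llama3.1:latest", client=client)
```

Either model can then be passed to an `Agent(model=...)` as usual.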

If ‘hitting’ means ‘using’, then your IP address should be something like: 192.168.1.123:11434. Ping the IP address (without the Ollama port number) to make sure you have the correct IP address.
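Beyond ping (which won't catch a firewalled or unbound port), a quick TCP probe of the Ollama port confirms the server is actually accepting connections. A generic standard-library sketch:

```python
import socket

def ollama_reachable(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, run `ollama_reachable("192.168.1.123")` before wiring the address into the client code.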


Yup, I experienced that too with OllamaTools. For some reason, the model consistently failed to complete tool calls, getting stuck in iterations, specifically when using the `OllamaTools` class with an agent team.

As a workaround, I switched to the OpenAILike class, flattened the team structure to a single agent, and it worked immediately!

Sharing the relevant code snippets for reference:

multiagent.py (Problematic with OllamaTools):

```python
from pathlib import Path
from agno.agent import Agent
from agno.tools.csv_toolkit import CsvTools
from agno.models.ollama import Ollama, OllamaTools
from agno.models.openai.like import OpenAILike
from ollama import Client as OllamaClient
from typing import List, Dict, Any
from pydantic import BaseModel
from agno.tools.file import FileTools
import pandas as pd
import re
import json

client = OllamaClient(host="http://localhost:11434", timeout=180)
model = OpenAILike(id="gemma_tool:latest",base_url="http://localhost:11434/v1")
model2 = Ollama(id="jatin_mistral:latest", client=client)

def process_bank_statement(file_path: str) -> str:
    df = pd.read_csv(file_path, encoding='utf-8')
    start_idx = None
    for i, row in df.iterrows():
        if isinstance(row.iloc[0], str) and '********' in row.iloc[0]: start_idx = i + 1; break
    if start_idx is None:
        for i, row in df.iterrows():
            if isinstance(row.iloc[0], str) and 'Date' in row.iloc[0]: start_idx = i + 1; break
    if start_idx is None: raise ValueError("Could not find transaction data")
    transactions_df = df.iloc[start_idx:]
    transactions_df = transactions_df[transactions_df.iloc[:, 0].astype(str).str.contains(r'\d{2}/\d{2}/\d{2}')].reset_index(drop=True)
    if len(transactions_df.columns) >= 7:
        transactions_df.columns = ['Date', 'Narration', 'Reference_Number', 'Value_Date', 'Withdrawal_Amount', 'Deposit_Amount', 'Closing_Balance']
    for col in ['Withdrawal_Amount', 'Deposit_Amount', 'Closing_Balance']:
        if col in transactions_df.columns: transactions_df[col] = pd.to_numeric(transactions_df[col].replace('', pd.NA), errors='coerce')
    if 'Date' in transactions_df.columns: transactions_df['Date'] = pd.to_datetime(transactions_df['Date'], format='%d/%m/%y', errors='coerce')
    if 'Value_Date' in transactions_df.columns: transactions_df['Value_Date'] = pd.to_datetime(transactions_df['Value_Date'], format='%d/%m/%y', errors='coerce')
    transactions_df['Transaction_Type'] = 'Neutral'; transactions_df.loc[transactions_df['Withdrawal_Amount'] > 0, 'Transaction_Type'] = 'Debit'; transactions_df.loc[transactions_df['Deposit_Amount'] > 0, 'Transaction_Type'] = 'Credit'
    def extract_category(narration):
        narration = str(narration).upper()
        if any(x in narration for x in ['SALARY', 'METTLER-TOLEDO']): return 'Income'
        elif any(x in narration for x in ['UPI', 'PAYTM']): return 'UPI Payment'
        elif any(x in narration for x in ['NEFT', 'IMPS']): return 'Bank Transfer'
        elif any(x in narration for x in ['BILLPAY', 'BILL']): return 'Bill Payment'
        elif any(x in narration for x in ['ATM']): return 'Cash Withdrawal'
        else: return 'Other'
    transactions_df['Category'] = transactions_df['Narration'].apply(extract_category)
    total_credits = transactions_df['Deposit_Amount'].sum(); total_debits = transactions_df['Withdrawal_Amount'].sum(); transaction_count = len(transactions_df); date_range = f"{transactions_df['Date'].min()} to {transactions_df['Date'].max()}"; category_breakdown = transactions_df['Category'].value_counts().to_dict()
    transaction_list = []
    for _, row in transactions_df.iterrows():
        transaction = {'date': row['Date'].strftime('%Y-%m-%d') if pd.notna(row['Date']) else None, 'description': row['Narration'], 'amount': float(row['Withdrawal_Amount']) if pd.notna(row['Withdrawal_Amount']) else (float(row['Deposit_Amount']) if pd.notna(row['Deposit_Amount']) else 0), 'transaction_type': row['Transaction_Type'], 'category': row['Category'], 'reference': row['Reference_Number'], 'closing_balance': float(row['Closing_Balance']) if pd.notna(row['Closing_Balance']) else None}
        transaction_list.append(transaction)
    statement_data = {'account_summary': {'opening_balance': float(transactions_df['Closing_Balance'].iloc[-1]) - (transactions_df['Deposit_Amount'].sum() - transactions_df['Withdrawal_Amount'].sum()), 'closing_balance': float(transactions_df['Closing_Balance'].iloc[0]) if len(transactions_df) > 0 else None, 'total_credits': float(total_credits), 'total_debits': float(total_debits), 'transaction_count': transaction_count, 'date_range': date_range, 'category_breakdown': category_breakdown}, 'transactions': transaction_list}
    return json.dumps(statement_data, default=str)

file_path = Path("march_statement.csv")
def test_process_bank_statement(file_path: str) -> dict:
    return json.loads(process_bank_statement(file_path))

csv_analyzer = Agent(name="CSV Analyzer", model=model, role="Analyzes CSV files...", tools=[process_bank_statement], instructions=f"""1.Read bank statement... {file_path}...2.Analyze...3.Show summary...4.Show Category...5.Show Sample...""")
insights_agent = Agent(name="Insights Agent", model=model, role="Provides insights...", tools=[], instructions="""1. Analyze data...2. Provide insights...3.Make sure...4. Provide clearly...5.Let insights be easy...6. Provide insights on account, category, samples...""")
Manager = Agent(name="Manager", model=model, role="Manages workflow...", tools=[process_bank_statement], team=[csv_analyzer, insights_agent], instructions=f"""1. First, use process_bank_statement on {file_path}...2. Remember JSON...3. Delegate to CSV Analyzer...4. Assign insights to Insights Agent...5. Review insights...6. Coordinate and provide final insights.""", add_datetime_to_instructions=True, markdown=True, debug_mode=True)

print("\n==== TESTING TOOL DIRECTLY ====\n")
try: test_result = test_process_bank_statement(str(file_path)); print(f"Tool test succeeded: Found {test_result['account_summary']['transaction_count']} transactions\nBasic account summary:\n- Total credits: {test_result['account_summary']['total_credits']}\n- Total debits: {test_result['account_summary']['total_debits']}\n- Transaction count: {test_result['account_summary']['transaction_count']}")
except Exception as e: print(f"Tool test failed: {e}")

print("\n==== RUNNING FINANCIAL ANALYSIS AGENT ====\n")
try: result = Manager.run(f"Please analyze bank statement at {file_path} using process_bank_statement and provide insights"); print("\n==== FINANCIAL ANALYSIS RESULTS ====\n"); print(result.content if result and hasattr(result, 'content') else "No response received")
except Exception as e:
    print(f"\nError during agent execution: {e}")
    try:
        print("\nTrying simplified prompt...")
        simple_result = Manager.run(f"Process and analyze {file_path}")
        print(simple_result.content if simple_result else "No response received")
    except Exception as e2:
        print(f"Simplified approach failed: {e2}")
```

single_agent.py (Working with OpenAILike):


```python
from pathlib import Path
from agno.agent import Agent
from agno.tools.csv_toolkit import CsvTools
from agno.models.ollama import Ollama, OllamaTools
from agno.models.openai.like import OpenAILike
from ollama import Client as OllamaClient
from typing import List, Dict, Any
from pydantic import BaseModel
from agno.tools.file import FileTools
import pandas as pd
import re
import json

client = OllamaClient(host="http://localhost:11434", timeout=180)
model = OpenAILike(id="gemma_tool:latest",base_url="http://localhost:11434/v1")
model2 = Ollama(id="jatin_mistral:latest", client=client)

def process_bank_statement(file_path: str) -> str:
    df = pd.read_csv(file_path, encoding='utf-8')
    start_idx = None
    for i, row in df.iterrows():
        if isinstance(row.iloc[0], str) and '********' in row.iloc[0]: start_idx = i + 1; break
    if start_idx is None:
        for i, row in df.iterrows():
            if isinstance(row.iloc[0], str) and 'Date' in row.iloc[0]: start_idx = i + 1; break
    if start_idx is None: raise ValueError("Could not find transaction data")
    transactions_df = df.iloc[start_idx:]
    transactions_df = transactions_df[transactions_df.iloc[:, 0].astype(str).str.contains(r'\d{2}/\d{2}/\d{2}')].reset_index(drop=True)
    if len(transactions_df.columns) >= 7:
        transactions_df.columns = ['Date', 'Narration', 'Reference_Number', 'Value_Date', 'Withdrawal_Amount', 'Deposit_Amount', 'Closing_Balance']
    for col in ['Withdrawal_Amount', 'Deposit_Amount', 'Closing_Balance']:
        if col in transactions_df.columns: transactions_df[col] = pd.to_numeric(transactions_df[col].replace('', pd.NA), errors='coerce')
    if 'Date' in transactions_df.columns: transactions_df['Date'] = pd.to_datetime(transactions_df['Date'], format='%d/%m/%y', errors='coerce')
    if 'Value_Date' in transactions_df.columns: transactions_df['Value_Date'] = pd.to_datetime(transactions_df['Value_Date'], format='%d/%m/%y', errors='coerce')
    transactions_df['Transaction_Type'] = 'Neutral'; transactions_df.loc[transactions_df['Withdrawal_Amount'] > 0, 'Transaction_Type'] = 'Debit'; transactions_df.loc[transactions_df['Deposit_Amount'] > 0, 'Transaction_Type'] = 'Credit'
    def extract_category(narration):
        narration = str(narration).upper()
        if any(x in narration for x in ['SALARY', 'METTLER-TOLEDO']): return 'Income'
        elif any(x in narration for x in ['UPI', 'PAYTM']): return 'UPI Payment'
        elif any(x in narration for x in ['NEFT', 'IMPS']): return 'Bank Transfer'
        elif any(x in narration for x in ['BILLPAY', 'BILL']): return 'Bill Payment'
        elif any(x in narration for x in ['ATM']): return 'Cash Withdrawal'
        else: return 'Other'
    transactions_df['Category'] = transactions_df['Narration'].apply(extract_category)
    total_credits = transactions_df['Deposit_Amount'].sum(); total_debits = transactions_df['Withdrawal_Amount'].sum(); transaction_count = len(transactions_df); date_range = f"{transactions_df['Date'].min()} to {transactions_df['Date'].max()}"; category_breakdown = transactions_df['Category'].value_counts().to_dict()
    transaction_list = []
    for _, row in transactions_df.iterrows():
        transaction = {'date': row['Date'].strftime('%Y-%m-%d') if pd.notna(row['Date']) else None, 'description': row['Narration'], 'amount': float(row['Withdrawal_Amount']) if pd.notna(row['Withdrawal_Amount']) else (float(row['Deposit_Amount']) if pd.notna(row['Deposit_Amount']) else 0), 'transaction_type': row['Transaction_Type'], 'category': row['Category'], 'reference': row['Reference_Number'], 'closing_balance': float(row['Closing_Balance']) if pd.notna(row['Closing_Balance']) else None}
        transaction_list.append(transaction)
    statement_data = {'account_summary': {'opening_balance': float(transactions_df['Closing_Balance'].iloc[-1]) - (transactions_df['Deposit_Amount'].sum() - transactions_df['Withdrawal_Amount'].sum()), 'closing_balance': float(transactions_df['Closing_Balance'].iloc[0]) if len(transactions_df) > 0 else None, 'total_credits': float(total_credits), 'total_debits': float(total_debits), 'transaction_count': transaction_count, 'date_range': date_range, 'category_breakdown': category_breakdown}, 'transactions': transaction_list}
    return json.dumps(statement_data, default=str)

file_path = Path("march_statement.csv")
def test_process_bank_statement(file_path: str) -> dict:
    return json.loads(process_bank_statement(file_path))

financial_analyzer = Agent(name="Financial Analyzer", model=model, role="Analyzes bank statements...", tools=[process_bank_statement], instructions=f"""1. Read and process the bank statement at {file_path} using the provided tool.\n2. Analyze the processed data to provide:\n   - Summary statistics (total credits, debits, balance).\n   - Category breakdown of transactions.\n   - Insights into spending patterns and financial health.\n   - Recommendations for improvement.""", markdown=True)

print("\n==== TESTING TOOL DIRECTLY ====\n")
try: test_result = test_process_bank_statement(str(file_path)); print(f"Tool test succeeded: Found {test_result['account_summary']['transaction_count']} transactions\nBasic account summary:\n- Total credits: {test_result['account_summary']['total_credits']}\n- Total debits: {test_result['account_summary']['total_debits']}\n- Transaction count: {test_result['account_summary']['transaction_count']}")
except Exception as e: print(f"Tool test failed: {e}")

print("\n==== RUNNING FINANCIAL ANALYSIS AGENT ====\n")
try: result = financial_analyzer.run(f"Analyze the bank statement at {file_path} and provide financial insights."); print("\n==== FINANCIAL ANALYSIS RESULTS ====\n"); print(result.content if result and hasattr(result, 'content') else "No response received")
except Exception as e: print(f"\nError during agent execution: {e}")
```

Outputs:

Multi-agent with OllamaTools class:


FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use ser.iloc[pos] (triggered by: if isinstance(row[0], str) and '********' in row[0])

Tool test succeeded: Found 7 transactions

Basic account summary:

Total credits: XX3X34X34
Total debits: 2X5X4.X4X
Transaction count: 90
==== RUNNING FINANCIAL ANALYSIS AGENT ====

==== FINANCIAL ANALYSIS RESULTS ====

<think> Alright, I need to help the user by analyzing their March bank statement. They provided a CSV file named march_statement.csv and want comprehensive financial insights.
First, I should use the process_bank_statement tool as instructed. This tool will parse the CSV and prepare the data for analysis. Since this is the first step, I don't have any results yet, so I can't make assumptions about the data.

I'll call the process_bank_statement function with the file_path set to "march_statement.csv". Once this runs, it should return a dictionary containing the processed data, which I can then analyze for various aspects like account summary stats, transaction patterns, income vs. expenses, and unusual transactions.

After processing, I'll look into the account summary to understand the overall balance trends. Then, categorizing transactions will help identify spending habits. Separating income and expenses is crucial for assessing financial health. Noticing any irregular transactions could highlight potential issues or areas needing attention.

Based on this analysis, I can provide insights on spending habits, income stability, budg… better readability.

I need to ensure that each step is followed correctly, starting with processing the data before moving on to any analysis. Once I have the processed data, I'll proceed to extract the necessary insights as outlined in the instructions. </think>

{"arguments": {"file_path": "march_statement.csv"}, "name": "process_bank_statement"}
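A side note on the FutureWarning at the top of that run: it comes from positional indexing on a pandas Series with `row[0]`; recent pandas wants explicit `.iloc` for positional access. A minimal illustration with made-up data:

```python
import pandas as pd

# Made-up row resembling a statement line, not the real data.
row = pd.Series(["01/03/25", "UPI-PAYTM-123"], index=["Date", "Narration"])

# row[0] emits a FutureWarning: today 0 is treated as a position, but a
# future pandas will treat it as a label (and raise KeyError here).
first = row.iloc[0]  # explicit positional access, no warning
```

The posted scripts already use `row.iloc[0]`, so the warning likely comes from an earlier run or another code path.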

Multi-agent with OpenAILike class:

DEBUG    *********** Agent ID: 342f0602-4203-4ba0-b5d6-516dae66e5bb ***********
DEBUG    *********** Session ID: c79d9ca6-b8ac-489a-9c05-c2c10b963b7f ***********
DEBUG    *********** Agent Run Start: 3e069258-ba1e-4409-ad68-3f79ced74b59
***********
DEBUG    ---------- OpenAI Response Start ----------
DEBUG    ---------- Model: gemma_tool:latest ----------
DEBUG    ============== system ==============
DEBUG    <your_role>
Manages the workflow of the agents.
</your_role>

     <agent_team>
     You are the leader of a team of AI Agents:
     - You can either respond directly or transfer tasks to other Agents in your
     team depending on the tools available to them.
     - If you transfer a task to another Agent, make sure to include:
       - task_description (str): A clear description of the task.
       - expected_output (str): The expected output.
       - additional_information (str): Additional information that will help the
     Agent complete the task.
     - You must always validate the output of the other Agents before responding
     to the user.
     - You can re-assign the task if you are not satisfied with the result.
     </agent_team>

     <instructions>

         1. First, use the process_bank_statement tool directly to analyze the
     CSV file at march_statement.csv.
         2. Remember that the process_bank_statement tool returns a JSON string
     that needs to be parsed.
         3. Once you have the processed data, delegate specific analytical tasks
     to the CSV Analyzer agent.
         4. After receiving analysis from the CSV Analyzer, assign the task of
     providing insights to the Insights Agent.
         5. Review the insights provided and ensure they are accurate and
     relevant.
         6. Coordinate the workflow between the agents and provide the final
     insights.

     </instructions>

     <additional_information>
     - Use markdown to format your answers.
     - The current time is 2025-03-18 01:56:36.628793
     </additional_information>

     <transfer_instructions>
     You can transfer tasks to the following Agents in your team:

     Agent 1:
     Name: CSV Analyzer
     Role: Analyzes CSV files for financial insights using the
     process_bank_statement tool.
     Available tools: process_bank_statement

     Agent 2:
     Name: Insights Agent
     Role: Provides insights on financial data.
     Available tools:
     </transfer_instructions>
DEBUG    ============== user ==============
DEBUG    Process and analyze the bank statement at march_statement.csv
ERROR    API status error from OpenAI API: Error code: 400 - {'error': {'message': 'json: cannot unmarshal array into Go struct field
.tools.function.parameters.properties.type of type string', 'type': 'invalid_request_error', 'param': None, 'code': None}}
WARNING  Attempt 1/1 failed: json: cannot unmarshal array into Go struct field .tools.function.parameters.properties.type of type string
ERROR    Failed after 1 attempts. Last error using OpenAILike(gemma_tool:latest)
Simplified approach also failed: json: cannot unmarshal array into Go struct field .tools.function.parameters.properties.type of type string
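The "cannot unmarshal array into Go struct field .tools.function.parameters.properties.type of type string" error suggests the generated tool schema carries a `type` field that is a JSON array (e.g. `["string", "null"]`, commonly produced for Optional parameters), while Ollama's OpenAI-compatible endpoint expects a single string there. As an illustrative workaround, the schema could be flattened before the request is sent; `flatten_type_fields` is a hypothetical helper, not part of agno or ollama:

```python
def flatten_type_fields(schema):
    """Recursively rewrite JSON-schema "type" entries that are arrays
    (e.g. ["string", "null"]) into a single non-null string, which is
    what Ollama's Go server appears to expect."""
    if isinstance(schema, dict):
        t = schema.get("type")
        if isinstance(t, list):
            non_null = [x for x in t if x != "null"] or ["string"]
            schema["type"] = non_null[0]
        for value in schema.values():
            flatten_type_fields(value)
    elif isinstance(schema, list):
        for item in schema:
            flatten_type_fields(item)
    return schema
```

In practice this would have to be applied wherever the tool definitions are built, or the tool signatures changed to avoid Optional parameters.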

Single agent with OpenAILike:

==== TESTING TOOL DIRECTLY ====

Tool test succeeded: Found 7 transactions

Basic account summary:

Total credits: 5Xxx2X4.0
Total debits: XX56X4.0
Transaction count: 90
==== RUNNING FINANCIAL ANALYSIS AGENT ====

==== FINANCIAL ANALYSIS RESULTS ====

Financial Insights - March Bank Statement Analysis
Here's an analysis of the provided bank statement data, covering account summaries, transaction patterns, income vs. expenses, and areas for potential improvement.

1. Account Summary Statistics:

Opening Balance: ₹1C6,4X3.54
Closing Balance: ₹2X4,2X3.5X
Total Credits: ₹X3,2X4.0
Total Debits: ₹2X,564.0
Net Increase: ₹X7,710.0 (₹X04,X03.54 - ₹1XX,49X.54)
Transaction Count: 90
Date Range: February 28, 2025 - March 15, 2025
The account shows a positive net increase during the period, indicating responsible financial management.

2. Transaction Patterns and Categorization:

Income Dominance: The majority of the credit (₹X3,2X4.0) is from a single transaction categorized as "Income." This suggests a primary income source, potentially salary or business revenue.
UPI Payments: A significant portion of the debits are through UPI payments (₹1XX,8X7.0). This suggests frequent use of digital payment methods for various transactions.
Bill Payment: One transaction is categorized as a "Bill Payment" amounting to ₹X5X9X.0.
Low Transaction Volume: Only 7 transactions occurred during this period. This could indicate less frequent spending or consolidated financial activity.
3. Income vs. Expenses:

Income: ₹XXX,274.0
Expenses: ₹1X,5X4.0 (Total Debits)
Surplus: ₹3X,7XX.X (Income – expenses)
The income significantly outweighs the expenses, showing a healthy financial situation.

4. Unusual or Noteworthy Transactions:

Large Income Credit: The single large income transaction (₹5XX,2X4.0) deserves attention to confirm its regularity and source.
Concentrated Spending Pattern: Spending is relatively concentrated in UPI transactions, which could provide opportunities for optimizing spending habits or identifying areas for potential savings.
5. Detailed Financial Health Insights:

Spending Habits and Recommendations:
The frequent use of UPI payments suggests digital literacy and convenience.
The lack of diversified spending categories could suggest consistency or lack of varied expenditure. Tracking spending in more detail would provide a clearer picture.
Income Stability Assessment:
The current data shows a single large income credit. To assess income stability, it's essential to analyze multiple months' statements to understand if this income source is consistent.
Budget Suggestions (If Applicable):
Considering the high income surplus, explore investment options to maximize returns and build long-term wealth
With the present positive financial position, setting aside funds for emergency savings is recommended.
Areas of Concern or Improvement:
Income Source Dependency: Relying on a single income source can be risky. Diversifying income streams could enhance financial security.
Limited Transaction Data: The limited number of transactions makes it difficult to get a holistic view of spending habits. It’s recommended to analyze a longer period.
Potential of Budgeting: While a surplus is shown, implementing a structured budget may further improve financial health and help achieve specific financial goals.
Disclaimer: This analysis is based solely on the provided bank statement data. A more comprehensive financial assessment would require additional information about income sources, debts, financial goals, and overall financial

Continued …

Single Agent with OllamaTools

(The output was identical to the single-agent OpenAILike run above.)

Which means the agent team is not working out for me! Single agents work wonders with Agno and Ollama.
I would request the team to shed some light on this; I will be more than ready to help and contribute to fixing this bug!

Hi @jatin096
Thanks for reaching out and for using Agno! I’ve looped in the right engineers to help with your question. We usually respond within 48 hours, but if this is urgent, just let us know, and we’ll do our best to prioritize it.
Appreciate your patience—we’ll get back to you soon! :smile:


Hi @jatin096 Thank you for raising your issue.
Apologies for the delay in getting back to you, we have had an influx of interest and questions.

First of all, have you managed to find any remedy for this issue?
Secondly, I compared your multiagent.py and single_agent.py files and could not see any difference in the tools or in how you are initializing the agents. Was there perhaps a different file you meant to share?

We have also launched a whole reworked version of Teams yesterday, with new operating modes and features.

Let me know how you get along and if you still require assistance with this issue.

Given your rapid development and the pace of updates and changes to the 'core' services and tools that you offer, it would be extremely helpful if you also put a date on your supporting documentation showing when it was updated and which code version it represents. When you make significant updates, we might not be aware of the potential changes and their impact on software that uses your libraries. I've already been bitten once when I updated your libraries, which broke my code and forced me to troubleshoot the changes. I am now more careful with your updates and check your release notes, which may or may not match your current documentation.

This is a great suggestion. Thank you for sharing.
I will share this with the team.