Structured output with Groq LLM failed

def get_plan_agent(llm_model, doc_ids, embedding_name):
    planner = Agent(
        name="Planner",
        role="Make plan",
        description="You are an API development engineer. "
        "You need to convert user needs into one or more API requests based on the API information in the knowledgebase to meet user expectations.",
        instructions=[
            "You will be provided with one or more APIs, but this is just for your reference. Do not try to use all the APIs. You need to use the necessary and as few APIs as possible to complete the corresponding requirements.",
            "Do not generate plans that are not relevant to the query message",
            "If multiple steps are required, please describe in detail what the before_request and after_request of each step are to do.",
            "If some property is required but the user does not provide it, please use the default value from the API",
            "Do not return comments starting with // in the return output",
        ],
        knowledge=KnowledgeBaseFactory(embedding_model=embedding_name).json_knowledge_base,
        add_context=True,
        retriever=functools.partial(get_docs_by_ids, doc_ids, embedding_name),
        response_model=PlanSteps,
        structured_outputs=True,
        add_datetime_to_instructions=True,
        debug_mode=True,
        user_id="123",
        storage=PgAgentStorage(table_name="personalized_agent_sessions", db_url=PG_URL),
    )
    if llm_model.startswith("gpt"):
        model_ = OpenAIChat(id=llm_model, api_key=OPEN_AI_KEY)
    else:
        # Groq path: structured output via response_model fails here, so
        # fall back to a plain-text expected_output format instead.
        model_ = Groq(id=llm_model, api_key=GROQ_AI_KEY)
        planner.structured_outputs = False
        planner.response_model = None
        planner.expected_output = format_steps_text
    planner.model = model_
    return planner

I have an agent defined as shown above. When I use a GPT-4 series LLM and provide a response_model, it runs normally. However, when I switch to a Groq LLM with the same code, an error occurs. The error message is given below.

So I have to branch on the model: if it is not a GPT-series model, I use the expected_output parameter to specify the format and leave response_model empty.
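One way to keep typed results on the Groq branch (a sketch, not a Phidata feature): when response_model is disabled, the agent returns raw JSON text, which can still be validated into the same Pydantic model manually. The StepItem/PlanSteps classes below are hypothetical mirrors of the models in the question; the field names follow the schema visible in the debug dump further down.

```python
from typing import List, Optional
from pydantic import BaseModel

# Hypothetical reconstruction of the question's StepItem/PlanSteps models,
# with field names taken from the JSON schema in the debug dump.
class StepItem(BaseModel):
    description: str
    doc_id: str
    url: str
    http_method: str
    before_request: Optional[str] = None
    payload: Optional[str] = None
    after_request: Optional[str] = None

class PlanSteps(BaseModel):
    steps: List[StepItem]

# With response_model unset, the agent returns raw JSON text that we
# can validate ourselves instead of relying on native structured output.
raw = ('{"steps": [{"description": "list controllers", "doc_id": "d1",'
       ' "url": "/controllers", "http_method": "GET"}]}')
plan = PlanSteps.model_validate_json(raw)
print(len(plan.steps), plan.steps[0].http_method)  # → 1 GET
```

This keeps the rest of the pipeline identical on both branches, at the cost of a manual validation step.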

My current version is 2.7.7, but I remember that Groq also supported structured output back when Agent was still the Assistant class, so please help me check whether my usage is wrong?


Hi @windrunnner
Thank you for reaching out and using Phidata! I’ve tagged the relevant engineers to assist you with your query. We aim to respond within 48 hours.
If this is urgent, please feel free to let us know, and we’ll do our best to prioritize it.
Thanks for your patience!

It's really urgent, because I am facing the same issue.
Please help us!
Thanks~~~

{
  "messages": [
    {
      "role": "system",
      "content": "You are an API development engineerYou need to convert user needs into one or more API requests based on the API information in the knowledgebase to meet user expectations.

Your role is: Make plan

Instructions

  • You will be provided with one or more APIs, but this is just for your reference. Do not try to use all the APIs. You need to use the necessary and as few APIs as possible to complete the corresponding requirements.
  • Do not generate plans that are not relevant to the query message
  • If multiple steps are required, please describe in detail what the before_request and after_request of each step are to do.
  • If some property is required but use not provide, please use default value from API
  • Do not return comments starting with // in the return output
  • The current time is 2025-01-16 15:38:30.821994

Provide your output as a JSON containing the following fields:
<json_fields>
[\"steps\"]
</json_fields>
Here are the properties for each field:
<json_field_properties>
{
  \"steps\": {
    \"items\": {
      \"$ref\": \"#/$defs/StepItem\"
    },
    \"type\": \"array\"
  },
  \"$defs\": {
    \"StepItem\": {
      \"description\": {
        \"description\": \"give a brief description what need to do in this step\",
        \"type\": \"string\"
      },
      \"doc_id\": {
        \"description\": \"doc_id from knowledgebase\",
        \"type\": \"string\"
      },
      \"url\": {
        \"description\": \"Provide the url need to call.\",
        \"type\": \"string\"
      },
      \"http_method\": {
        \"description\": \"Provide the needed http method, like GET POST…\",
        \"type\": \"string\"
      },
      \"before_request\": {
        \"description\": \"the action need to execute before request\",
        \"type\": \"string\"
      },
      \"payload\": {
        \"description\": \"http request body\",
        \"type\": \"string\"
      },
      \"after_request\": {
        \"description\": \"after request done, need to do somthing like format data or pick useful data\",
        \"type\": \"string\"
      }
    }
  }
}
</json_field_properties>
Start your response with { and end it with }.
Your output will be passed to json.loads() to convert it to a Python object.
Make sure it only contains valid JSON."
    },
    {
      "role": "user",
      "content": "get controller list"
    }
  ],
  "model": "llama-3.2-90b-vision-preview",
  "response_format": {
    "type": "json_object"
  },
  "tool_choice": "auto",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "search_knowledge_base",
        "description": "Use this function to search the knowledge base for information about a query.",
        "parameters": {
          "type": "object",
          "properties": {
            "query": {
              "type": "string",
              "description": "(None) The query to search for."
            }
          },
          "required": [
            "query"
          ]
        }
      }
    }
  ]
}

The above is the complete Groq client request I captured from the debug output.

After many tests, I found that this is related to using the knowledge base. If I don't pass knowledge, the call returns normally, but as soon as I pass the knowledge parameter, an error is reported.

Below are the error message and the code of the knowledge base I defined.

Error:

groq.BadRequestError: Error code: 400 - {'error': {'message': 'response_format` json_object cannot be combined with tool/function calling', 'type': 'invalid_request_error'}}

Code:
self.json_knowledge_base = JSONKnowledgeBase(
    path=self.doc_path,
    number_documents=number_documents,
    vector_db=PgVector(
        table_name=self.embedding_collection + "_json",
        db_url=const.PG_URL,
        embedder=self.embedder,
        search_type=SearchType.hybrid,
    ),
)
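My reading of the dump (an interpretation, not confirmed by the Phidata team): passing knowledge registers the search_knowledge_base tool, while response_model makes Phidata send response_format={"type": "json_object"}, and Groq rejects that combination in a single request. A minimal sketch of a pre-flight guard that drops the conflicting key (sanitize_groq_request is a hypothetical helper, not a Phidata or Groq API):

```python
# Hypothetical pre-flight guard: Groq returns a 400 when JSON mode is
# combined with tools, so strip response_format whenever tools are present
# and rely on the prompt-based JSON instructions instead.
def sanitize_groq_request(request: dict) -> dict:
    req = dict(request)
    if req.get("tools") and req.get("response_format", {}).get("type") == "json_object":
        req.pop("response_format")
    return req

# Abbreviated version of the request from the debug dump above.
request = {
    "model": "llama-3.2-90b-vision-preview",
    "messages": [{"role": "user", "content": "get controller list"}],
    "response_format": {"type": "json_object"},
    "tool_choice": "auto",
    "tools": [{"type": "function", "function": {"name": "search_knowledge_base"}}],
}
clean = sanitize_groq_request(request)
print("response_format" in clean)  # → False
```

The prompt already instructs the model to emit raw JSON ("Start your response with { and end it with }"), so dropping JSON mode still leaves the output parseable by json.loads() in most cases, just without the hard guarantee.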


@windrunnner @rainrunner
Remove the structured_outputs=True flag. This is only for models with native structured-output support; you only need to pass the response_model. This is a bad experience that we intend to rectify in coming releases.


Thank you for your answer. I have tried this, but there is still a problem.
Once I use the knowledge parameter, even with structured_outputs=False,
an error is still reported.
The error is the same as shown above:
groq.BadRequestError: Error code: 400 - {'error': {'message': 'response_format` json_object cannot be combined with tool/function calling', 'type': 'invalid_request_error'}}
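One workaround consistent with this error (a sketch under my own assumptions, not an official Phidata fix): split the interaction into two requests, since the restriction only applies within a single request. Phase 1 performs the knowledge search with tools enabled and no response_format; phase 2 formats the final answer in JSON mode with no tools. The helpers below only build the request payloads; the actual Groq calls are omitted.

```python
# Two-phase workaround sketch for the 400 error: never send tools and
# response_format json_object in the same request.
def phase1_request(model: str, messages: list, tools: list) -> dict:
    # Retrieval phase: tools are allowed, so response_format must be absent.
    return {"model": model, "messages": messages,
            "tools": tools, "tool_choice": "auto"}

def phase2_request(model: str, messages: list) -> dict:
    # Formatting phase: JSON mode is allowed because no tools are attached.
    return {"model": model, "messages": messages,
            "response_format": {"type": "json_object"}}

msgs = [{"role": "user", "content": "get controller list"}]
tools = [{"type": "function", "function": {"name": "search_knowledge_base"}}]
p1 = phase1_request("llama-3.2-90b-vision-preview", msgs, tools)
p2 = phase2_request("llama-3.2-90b-vision-preview", msgs)
assert "response_format" not in p1 and "tools" not in p2
```

In Phidata terms this roughly corresponds to what the question's author already does manually: calling the retriever (get_docs_by_ids) up front and injecting the documents as context, so the agent never needs the search tool in the JSON-mode request.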


And this only happens with the knowledge parameter added?

Yes. I will try other types of knowledge, such as text knowledge, to see whether it also errors. Currently I am using the JSON knowledge base.