
[Bug]: openai.BadRequestError when using GPTAssistantAgent in GroupChat #3284

Closed
LarsAC opened this issue Aug 2, 2024 · 2 comments · Fixed by #3555
Labels
bug Something isn't working

Comments


LarsAC commented Aug 2, 2024

Describe the bug

I have put together a small team of agents (user_proxy, two researchers, and a data analyst). The researchers are AssistantAgents, the data analyst is a GPTAssistantAgent with the code_interpreter tool.

Using a sequential chat mode (user_proxy.initiate_chats()) the conversation terminates fine. When I switch to using a GroupChat though, the chat aborts upon trying to talk to the data_analyst:

--------------------------------------------------------------------------------
Next speaker: data_analyst

Traceback (most recent call last):
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/azure-ml-ci01/code/Users/LarsAC/llm/my-agent/run_autogen.py", line 231, in <module>
    report = crew.run()
             ^^^^^^^^^^
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/azure-ml--ci01/code/Users/LarsAC/llm/my-agent/run_autogen.py", line 203, in run
    result = user_proxy.initiate_chat(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1019, in initiate_chat
    self.send(msg2send, recipient, silent=silent)
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 656, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 819, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1973, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/groupchat.py", line 1052, in run_chat
    reply = speaker.generate_reply(sender=self)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 1973, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/autogen/agentchat/contrib/gpt_assistant_agent.py", line 212, in _invoke_assistant
    self._openai_client.beta.threads.messages.create(
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/openai/resources/beta/threads/messages.py", line 88, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/azureuser/.cache/pypoetry/virtualenvs/my-agent-6V5iQAlW-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value: 'tool'. Supported values are: 'user' and 'assistant'.", 'type': 'invalid_request_error', 'param': 'role', 'code': 'invalid_value'}}
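For context on the 400 above: the Assistants API only accepts 'user' and 'assistant' as roles when creating a thread message, so a 'tool'-role message forwarded verbatim from the group chat history is rejected server-side. A minimal client-side check (the helper name is my own, not part of autogen or openai) illustrates the constraint without a network call:

```python
# Roles the OpenAI Assistants API accepts when creating a thread message,
# per the 400 error above; 'tool', 'system', 'function' are rejected.
VALID_THREAD_ROLES = {"user", "assistant"}

def check_thread_role(message: dict) -> None:
    """Hypothetical helper: fail fast client-side instead of waiting
    for the server's 400 invalid_value error on 'role'."""
    role = message.get("role")
    if role not in VALID_THREAD_ROLES:
        raise ValueError(
            f"Invalid value: {role!r}. Supported values are: 'user' and 'assistant'."
        )
```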

Steps to reproduce

No response

Model Used

Currently using gpt-4o, but the error does not seem model-related.

Expected Behavior

The conversation should run smoothly without error.

Screenshots and logs

No response

Additional Information

pyautogen==0.2.33
openai==1.37.1
Python 3.11.9

I went through issues #3164 and #960. While they seem somewhat related, I think this error has a different origin.

@LarsAC LarsAC added the bug Something isn't working label Aug 2, 2024
@PersonaDev

It looks like you’re running into an issue with the OpenAI API where the role parameter is set to 'tool', but the API only accepts 'user' or 'assistant' as valid values for this parameter. To resolve this, you’ll need to locate the part of your code where the API request is being made and make sure that the role is set to either 'user' or 'assistant' depending on what you’re trying to achieve.
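One possible workaround along those lines (a sketch only, untested against the actual fix in PR #3555; the helper name is my own) is to coerce unsupported roles before the message reaches threads.messages.create, e.g. mapping 'tool' replies to 'user' so the assistant sees them as incoming context:

```python
def coerce_role_for_thread(message: dict) -> dict:
    """Map roles the Assistants API rejects ('tool', 'function', ...)
    onto 'user'; leave 'user'/'assistant' messages untouched.
    Sketch of a workaround only -- the real fix landed in PR #3555."""
    if message.get("role") in ("user", "assistant"):
        return message
    fixed = dict(message)
    fixed["role"] = "user"  # treat tool output as incoming user-side context
    return fixed
```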

evandavid1 (Contributor) commented Aug 18, 2024

@LarsAC @PersonaDev I've encountered the same issue and can reproduce it. After enabling debug output, I found that the GPTAssistantAgent is in fact generating a message with role value 'tool' while using GroupChat. I'm not setting any messages explicitly; I'm simply calling initiate_chat for the group chat and then conversing. The tool my agent uses is the OpenAI Assistant file search, configured in my code as shown below. I have two GPTAssistant agents along with four conversable agents. I've noticed this error doesn't always occur for the same GPTAssistant agent in a given conversation, but it always occurs for one of them. Please let me know any thoughts on debugging; I'll dig into conversable_agent.py and gpt_assistant_agent.py in the meantime to find the role assignment issue. Based on the other related open issues, it may be unrelated to GPTAssistantAgent specifically and instead a general issue with tool-calling agents in GroupChat.

Thanks
Evan

initiate groupchat:

groupchat_result = user_proxy.initiate_chat(
    manager,
    message=task2,
)

I followed this guide for configuration:
https://microsoft.github.io/autogen/docs/topics/openai-assistant/gpt_assistant_agent

my config:

assistant_config = {
    "tools": [
        {"type": "file_search"},
    ],
    "tool_resources": {
        "file_search": {
            "vector_store_ids": ["$vector_store.id"]
        }
    }
}

content_manager = GPTAssistantAgent(
    name="content_manager",
    description="provide documents for strategy",
    llm_config={
        "config_list": llm_config_gpt_4o["config_list"],
        "assistant_id": "asst_wmOpxxxxxxxxxxxxxxx"
    },
    assistant_config=assistant_config,
)

ERROR logging:

DEBUG:openai._base_client:HTTP Response: POST https://api.openai.com/v1/threads/thread_EOMfr5UY2sOG7jrOAYn9CP27/messages "200 OK" Headers({'date': 'Sun, 18 Aug 2024 22:32:57 GMT', 'content-type': 'application/json', 'transfer-encoding': 'chunked', 'connection': 'keep-alive', 'openai-version': '2020-10-01', 'openai-organization': 'user-odczqxrlslggkjmvuya9yqaq', 'x-request-id': 'req_20f00ee86f9343181d7f642062e24f9d', 'openai-processing-ms': '134', 'strict-transport-security': 'max-age=15552000; includeSubDomains; preload', 'cf-cache-status': 'DYNAMIC', 'x-content-type-options': 'nosniff', 'server': 'cloudflare', 'cf-ray': '8b555cdfead72a9a-LAX', 'content-encoding': 'gzip', 'alt-svc': 'h3=":443"; ma=86400'})
DEBUG:openai._base_client:request_id: req_20f00ee86f934xxxxxxxx
DEBUG:openai._base_client:Request options: {'method': 'post', 'url': '/threads/thread_EOMfr5UY2sOG7jrOAYn9CP27/messages', 'headers': {'OpenAI-Beta': 'assistants=v2'}, 'files': None, 'json_data': {'content': '"Neuralink's
.....
Overall, Neuralink is at the forefront of the BCI industry, driving innovation and pushing the boundaries of what is possible with neurotechnology."', 'role': 'tool'}}
DEBUG:openai._base_client:Sending HTTP Request: POST https://api.openai.com/v1/threads/thread_EOMfr5UY2sOG7jrOAYn9CP27/messages.
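For anyone trying to capture the same evidence, debug output like the above can be enabled with stdlib logging, which makes the openai client print each outgoing request's json_data (including the offending 'role' field) before it is sent:

```python
import logging

# Turn on debug logging for the openai client so outgoing request
# payloads (json_data, including the 'role' field) appear in the log.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
```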
