Support for o1-mini and o1-preview #56

Open
jame25 opened this issue Nov 18, 2024 · 9 comments

Comments

@jame25

jame25 commented Nov 18, 2024

Since there is now API access to these models, could you please add support for them. I've tried generating a new API key, however I still get the following error: state=finished raised KeyError

@hibobmaster
Owner

Doc: https://platform.openai.com/docs/guides/reasoning

[screenshot: OpenAI reasoning models documentation]

I don't think there is a compatibility issue. Do you have the required usage tier to use these models?

@jame25
Author

jame25 commented Nov 19, 2024

Ah, that is probably the issue. I am only tier 1. However, I have recently become able to interact with the API via the playground:

[screenshot: playground access to the o1 models]

Thanks for taking the time to respond.

@jame25
Author

jame25 commented Nov 20, 2024

FYI, I got an email an hour ago confirming that I now have API access to both models (o1-mini and o1-preview). The related webpage has also been updated:

[screenshot: updated model access page]

I am still facing the same error: state=finished raised KeyError

@hibobmaster
Owner

hibobmaster commented Nov 20, 2024

Can you share your config with sensitive information redacted?

Also, what's in the container log?

@jame25
Author

jame25 commented Nov 20, 2024

The container log:

2024-11-20 12:15:07,453 - INFO - matrix chatgpt bot start.....
2024-11-20 12:15:10,841 - INFO - Successfully login via password
Mismatch in keys payload of device MatrixChatGPTBot (xxxxx) of user @bot:matrix.org (@bot:matrix.org).
2024-11-20 12:15:25,930 - INFO - Message received in room XXXX
User | !gpt hello
2024-11-20 12:15:30,729 - gpt - ERROR - RetryError[<Future at 0x7fb025e2f790 state=finished raised KeyError>]
2024-11-20 12:15:30,729 - ERROR - RetryError[<Future at 0x7fb025e2f790 state=finished raised KeyError>]
2024-11-20 12:15:32,003 - INFO - Message received in room XXXX
bot | > <@user:matrix.org> !gpt hello
Something went wrong, please try again or contact admin.

My config:

HOMESERVER="https://matrix.org"
USER_ID="@bot:matrix.org"
PASSWORD=xxxxx
DEVICE_ID="MatrixChatGPTBot"
ROOM_ID="!xxxxx:matrix.org"
OPENAI_API_KEY="xxxxxxxxxxxxxx"
GPT_API_ENDPOINT="https://api.openai.com/v1/chat/completions"
GPT_MODEL="o1-preview"
MAX_TOKENS=4000
TOP_P=1.0
PRESENCE_PENALTY=0.0
FREQUENCY_PENALTY=0.0
REPLY_COUNT=1
SYSTEM_PROMPT="You are ChatGPT, a large language model trained by OpenAI. Respond conversationally"
TEMPERATURE=0.8
TIMEOUT=120.0

@hibobmaster
Owner

hibobmaster commented Nov 20, 2024

Does gpt-3.5-turbo work well?

Can you also run a quick test with curl? If curl works, please paste the output here with sensitive info redacted.
https://platform.openai.com/docs/api-reference/chat/create?lang=curl

@jame25
Author

jame25 commented Nov 20, 2024

Yes, other models including gpt-3.5-turbo work fine. The curl request for o1-preview returned the following:

BadRequestError: Error code: 400 - {'error': {'message': "Unsupported value: 'messages[0].role' does not support 'system' with this model.", 'type': 'invalid_request_error', 'param': 'messages[0].role', 'code': 'unsupported_value'}}

That, in turn, led me to Stack Overflow and, subsequently, to the following Python script, which does work with the o1-mini and o1-preview models:

import os
import requests

# Retrieve your OpenAI API key from an environment variable
api_key = os.getenv("OPENAI_API_KEY")

# Check if the API key is available
if not api_key:
    raise ValueError("Please set the OPENAI_API_KEY environment variable.")

# Set up the URL and headers for the request
url = "https://api.openai.com/v1/chat/completions"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

# Define your prompt
prompt = "Can you tell me a joke?"

# Structure the messages accordingly
messages = [
    {
        "role": "user",
        "content": prompt  # Use the prompt directly as a string
    }
]

# Define the data payload for the request
data = {
    "model": "o1-preview",
    "messages": messages
}

# Send the POST request
response = requests.post(url, headers=headers, json=data)

# Check if the request was successful
if response.status_code == 200:
    # Extract the assistant's reply content
    reply_content = response.json()["choices"][0]["message"]["content"]

    # Print the assistant's reply (since it's already a string)
    print("Response from GPT:", reply_content)
else:
    # Print the error
    print(f"Status Code: {response.status_code}")
    print("Response:")
    print(response.json())
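
For the bot itself, one way to sidestep this would be to fold the configured SYSTEM_PROMPT into the user message for the o1 models and omit the sampling parameters they reportedly reject. The sketch below is just that, a sketch: the helper name and the exact parameter handling are assumptions, not the bot's actual code. (The KeyError in the container log is plausibly the bot indexing "choices" on an error response that has no such key.)

import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
# Models that, at the time of writing, reject a "system" message (assumption for this sketch)
O1_MODELS = {"o1-mini", "o1-preview"}

def build_payload(model: str, system_prompt: str, user_prompt: str) -> dict:
    """Hypothetical helper: build a chat-completions payload that also works with o1 models."""
    if model in O1_MODELS:
        # Fold the system prompt into the user message and skip sampling parameters
        # (temperature, top_p, penalties) that these models do not accept.
        messages = [{"role": "user", "content": f"{system_prompt}\n\n{user_prompt}"}]
        return {"model": model, "messages": messages}
    # Regular chat models accept a separate system message and sampling parameters.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
    return {"model": model, "messages": messages, "temperature": 0.8, "top_p": 1.0}

if __name__ == "__main__":
    payload = build_payload(
        "o1-preview",
        "You are ChatGPT, a large language model trained by OpenAI. Respond conversationally",
        "hello",
    )
    headers = {"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}
    resp = requests.post(API_URL, headers=headers, json=payload, timeout=120)
    print(resp.status_code, resp.json())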

@hibobmaster
Owner

Try: ghcr.io/hibobmaster/matrixchatgptbot:sha-c2d6ecfaf62f539f9e5ac0d321aa6c56002ec437

@jame25
Copy link
Author

jame25 commented Nov 21, 2024

That version works great with both o1 models. Thank you.
