Hi everyone! First, I'd like to thank OpenAI for their wonderful products and for pushing the research forward!
I'm writing because there is one thing I find painful when using OpenAI: testing.
Sometimes it makes no sense to call the OpenAI API during tests, but that means I have to build a dummy class by hand, which is time-consuming.
Before building it as a separate, standalone PyPI package, I was wondering if I could try adding it directly to the openai Python package.
My idea would be the following:
- add a flag, something like `client = openai.OpenAI(dummy=True)`
- add a method (for the seq2seq models) like `client.load_dummy_answers(default: str, answers: Optional[Dict[str, str]])`, where one sets a default answer and a dictionary mapping each prompt (key) to its dummy response (value)
- the other fields returned by the models would default to dummy values (except maybe the generation timestamp)

(Of course I would add this, for now, just to text2text models; in theory it could be extended to other model types too.)
My idea is to do everything in the package, but you could achieve something similar by adding a "dummy" model to the model list (though then people would still need an internet connection for testing).
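To make the proposal concrete, here is a minimal sketch of how the dummy client could behave. The class name `DummyChatClient` and the method `complete` are hypothetical stand-ins for illustration; only `load_dummy_answers` and the default/answers semantics come from the proposal above, and none of this is real openai-python API.

```python
from typing import Dict, Optional


class DummyChatClient:
    """Hypothetical stand-in client that answers from a local table
    instead of calling the OpenAI API (sketch of the proposal)."""

    def __init__(self) -> None:
        self._default: str = ""
        self._answers: Dict[str, str] = {}

    def load_dummy_answers(
        self, default: str, answers: Optional[Dict[str, str]] = None
    ) -> None:
        # `default` is returned for any prompt not found in `answers`;
        # `answers` maps prompt -> canned response, as described above.
        self._default = default
        self._answers = answers or {}

    def complete(self, prompt: str) -> str:
        # Look up the canned response locally; no network call is made.
        return self._answers.get(prompt, self._default)
```

In a test this would let you assert on deterministic responses, e.g. `client.load_dummy_answers("fallback", {"hello": "hi there"})` followed by `client.complete("hello")` returning `"hi there"` and any unknown prompt returning `"fallback"`.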