Replies: 1 comment
-
What you are observing is the "temperature fallback", which is also used in the original Whisper (cf. openai/whisper#81). See also #76. With the default options, this temperature fallback is triggered under the same conditions in both faster-whisper and openai-whisper, so if one implementation is non-deterministic for a given audio, the other should be non-deterministic as well. If that's not the case, please share the input audio if possible. If you are using different transcription or model options, it's possible that one implementation is deterministic and the other is not for the same audio.
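To illustrate the mechanism described above, here is a minimal sketch of a temperature-fallback loop. It assumes the openai-whisper default thresholds (`compression_ratio_threshold=2.4`, `log_prob_threshold=-1.0`) and a hypothetical `decode` callable standing in for the actual model; it is not the real implementation, just the retry logic in isolation.

```python
# Sketch of Whisper's temperature fallback (assumption: thresholds match
# openai-whisper defaults; `decode` is a hypothetical stand-in for the model).

TEMPERATURES = (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)

def transcribe_with_fallback(decode,
                             compression_ratio_threshold=2.4,
                             log_prob_threshold=-1.0):
    """Retry decoding at increasing temperatures until quality checks pass.

    `decode(temperature)` must return (text, avg_logprob, compression_ratio).
    Any temperature > 0 samples tokens stochastically, which is why a
    triggered fallback makes the overall transcription non-deterministic.
    """
    for temperature in TEMPERATURES:
        text, avg_logprob, compression_ratio = decode(temperature)
        if compression_ratio > compression_ratio_threshold:
            continue  # output looks repetitive: retry at a higher temperature
        if avg_logprob < log_prob_threshold:
            continue  # low-confidence output: retry at a higher temperature
        return text, temperature
    return text, temperature  # last attempt, even if it failed the checks
```

For example, a stubbed `decode` that produces a repetitive result at temperature 0 forces one fallback step:

```python
def decode(temperature):
    # degenerate repetitive output at t=0, clean output otherwise
    if temperature == 0.0:
        return ("aaaa aaaa aaaa", -0.2, 3.0)
    return ("hello world", -0.3, 1.1)

transcribe_with_fallback(decode)  # → ("hello world", 0.2)
```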
-
Could someone advise why the original Whisper Large V2 model gives deterministic responses (you can pass the same audio multiple times and the transcription is always the same), while with the Faster Whisper implementation the results are always/often different?