Whisper Wrapper (v6)
About this version
- Submitter: keighrim
- Submission Time: 2024-02-12T19:47:39+00:00
- Prebuilt Container Image: ghcr.io/clamsproject/app-whisper-wrapper:v6
Release Notes
v6 fixes a bug where the wrapper tried to load an English-only large model, which does not exist.
About this app (See raw metadata.json)
A CLAMS wrapper for Whisper-based ASR software originally developed by OpenAI.
- App ID: http://apps.clams.ai/whisper-wrapper/v6
- App License: Apache 2.0
- Source Repository: https://github.com/clamsproject/app-whisper-wrapper (source tree of the submitted version)
- Analyzer Version: 20231117
- Analyzer License: MIT
Inputs
(Note: “*” as a property value means that the property is required but can be any value.)
One of the following is required:
- http://mmif.clams.ai/vocabulary/AudioDocument/v1 (required) (of any properties)
- http://mmif.clams.ai/vocabulary/VideoDocument/v1 (required) (of any properties)
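For illustration, the snippet below builds a minimal MMIF payload containing a single AudioDocument. This is a hand-written sketch: the MMIF version string, document id, MIME type, and file location are placeholder assumptions, and in practice the mmif-python SDK is the usual way to construct MMIF.

```python
import json

# A minimal, hand-written MMIF payload with one AudioDocument.
# The "mmif" version string, document id, and file path below are
# placeholder values for illustration; adjust them to your environment.
mmif_input = {
    "metadata": {"mmif": "http://mmif.clams.ai/1.0.0"},
    "documents": [
        {
            "@type": "http://mmif.clams.ai/vocabulary/AudioDocument/v1",
            "properties": {
                "id": "d1",
                "mime": "audio/wav",
                "location": "file:///data/interview.wav",
            },
        }
    ],
    "views": [],
}

with open("input.mmif", "w") as f:
    json.dump(mmif_input, f, indent=2)
```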
Configurable Parameters
(Note: Multivalued means the parameter can have one or more values.)
- modelSize: optional, defaults to tiny
  - Type: string
  - Multivalued: False
  - Choices: tiny, t, base, b, small, s, medium, m, large, l, large-v2, l2, large-v3, l3
  The size of the model to use. When modelLang=en is given, English-only models are used instead of multilingual models for non-large sizes, for better speed and accuracy. (For large models, English-only variants are not available.)
- modelLang: optional, defaults to ""
  - Type: string
  - Multivalued: False

  Language of the model to use. Accepts two- or three-letter ISO 639 language codes; note that Whisper only supports a subset of languages, and an error is raised if the given language is not supported. For the full list of supported languages, see https://github.com/openai/whisper/blob/20231117/whisper/tokenizer.py. In addition to the language code, a two-letter region code can be appended, e.g. “en-US” for US English. The region code is kept only for compatibility and record-keeping purposes; Whisper neither detects regional dialects nor uses the region code for transcription. When no language code is given, Whisper runs in language-detection mode and uses the first few seconds of the audio to detect the language.
- pretty: optional, defaults to false
  - Type: boolean
  - Multivalued: False
  - Choices: false, true

  The JSON body of the HTTP response will be re-formatted with 2-space indentation.
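Parameters are passed as query-string arguments when POSTing MMIF to a running instance of the app. The sketch below assumes the prebuilt container is already running and serving HTTP on localhost:5000; the host, port, and the input.mmif file name are assumptions for illustration, not part of the metadata above.

```python
import requests

# Sketch of calling a running whisper-wrapper instance over HTTP.
# The host/port and "input.mmif" are assumptions; the query parameters
# correspond to the configurable parameters documented above.
with open("input.mmif") as f:
    mmif_data = f.read()

resp = requests.post(
    "http://localhost:5000/",
    params={"modelSize": "medium", "modelLang": "en", "pretty": "true"},
    data=mmif_data,
)
resp.raise_for_status()

with open("output.mmif", "w") as f:
    f.write(resp.text)
```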
Outputs
(Note: “*” as a property value means that the property is required but can be any value.)
(Note: Not all output annotations are always generated.)
- http://mmif.clams.ai/vocabulary/TextDocument/v1 (of any properties)
- http://mmif.clams.ai/vocabulary/TimeFrame/v2
  - timeUnit = “millisecond”
- http://mmif.clams.ai/vocabulary/Alignment/v1 (of any properties)
- http://vocab.lappsgrid.org/Token (of any properties)
- http://vocab.lappsgrid.org/Sentence (of any properties)
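To give a sense of how these output types fit together, the sketch below walks the returned MMIF as plain JSON and prints the text of any TextDocument annotations it finds. It does not use the mmif-python SDK, ignores the TimeFrame/Alignment/Token/Sentence annotations, and the view/annotation layout it assumes follows the general MMIF structure rather than anything specific to this app.

```python
import json

# Read the MMIF produced by the wrapper and pull out transcript text.
# Minimal sketch: only TextDocument annotations inside views are inspected.
with open("output.mmif") as f:
    mmif_out = json.load(f)

TEXT_DOC = "http://mmif.clams.ai/vocabulary/TextDocument/v1"

for view in mmif_out.get("views", []):
    for ann in view.get("annotations", []):
        if ann.get("@type") == TEXT_DOC:
            # TextDocument text is typically stored as {"@value": "..."}.
            text = ann.get("properties", {}).get("text", {})
            print(text.get("@value", ""))
```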