CLAMS docTR Wrapper (v1.1)
About this version
- Submitter: keighrim
- Submission Time: 2024-04-23T20:49:23+00:00
- Prebuilt Container Image: ghcr.io/clamsproject/app-doctr-wrapper:v1.1
Release Notes
Minor update with various fixes
- updated SDK version
- fixed container image dependency problem
- updated app metadata and made app description less verbose
About this app (See raw metadata.json)
This CLAMS app wraps docTR, an end-to-end OCR model. The model detects text regions in the input image and recognizes the text in those regions (via the parseq OCR model; only English is supported at the moment). The model organizes the text-localized regions hierarchically into “pages” > “blocks” > “lines” > “words”, and this CLAMS app translates them into `TextDocument`, `Paragraph`, `Sentence`, and `Token` annotations to represent the recognized text content. See the descriptions of the I/O types below for details on how annotations are aligned to each other.
- App ID: http://apps.clams.ai/doctr-wrapper/v1.1
- App License: Apache 2.0
- Source Repository: https://github.com/clamsproject/app-doctr-wrapper (source tree of the submitted version)
- Analyzer Version: 0.8.1
- Analyzer License: Apache 2.0
Inputs
(Note: “*” as a property value means that the property is required but can be any value.)
- http://mmif.clams.ai/vocabulary/VideoDocument/v1 (required) (any properties)
- http://mmif.clams.ai/vocabulary/TimeFrame/v5 (required)
The `TimeFrame` annotation that represents the video segment to be processed. When the `representatives` property is present, the app will process the video's still frames at the underlying `TimePoint` annotations that are referred to by the `representatives` property. Otherwise, the app will process the middle frame of the video segment.
- representatives = “?”
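The frame-selection logic described above can be sketched as follows; `pick_frames` is a hypothetical helper (not part of the app's API), and the frame numbers are simplified stand-ins for the actual `TimePoint` annotations.

```python
def pick_frames(start, end, representatives=None):
    """Pick which still frames of a video segment to run OCR on.

    `start`/`end` bound the TimeFrame segment (as frame numbers), and
    `representatives` is an optional list of representative frame numbers,
    standing in for the TimePoint annotations referred to by the
    `representatives` property.
    """
    if representatives:
        # Process the frames named by the `representatives` property.
        return list(representatives)
    # No representatives: fall back to the middle frame of the segment.
    return [(start + end) // 2]
```

For example, `pick_frames(0, 100)` yields the middle frame, while `pick_frames(0, 100, [10, 90])` yields the two representative frames.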
Configurable Parameters
(Note: Multivalued means the parameter can have one or more values.)
Name | Description | Type | Multivalued | Default | Choices |
---|---|---|---|---|---|
tfLabel | The label of the TimeFrame annotation to be processed. By default (`[]`), all TimeFrame annotations will be processed, regardless of their `label` property values. | string | Y | `[]` | |
pretty | The JSON body of the HTTP response will be re-formatted with 2-space indentation. | boolean | N | `false` | `false`, `true` |
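As a minimal sketch of how these parameters might be passed to the running app: CLAMS apps are HTTP services that take runtime parameters in the query string of a POST request, with multivalued parameters repeated. The label values (`slate`, `chyron`) and the `localhost:5000` address below are illustrative assumptions, not part of this app's metadata.

```python
from urllib.parse import urlencode

# `tfLabel` is multivalued, so it may appear more than once in the
# query string; the label values here are hypothetical examples.
params = [("tfLabel", "slate"), ("tfLabel", "chyron"), ("pretty", "true")]
query = urlencode(params)

# An MMIF file would then be POSTed to this URL (address assumed).
url = f"http://localhost:5000/?{query}"
```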
Outputs
(Note: “*” as a property value means that the property is required but can be any value.)
(Note: Not all output annotations are always generated.)
- http://mmif.clams.ai/vocabulary/TextDocument/v1
Fully serialized text content of the recognized text in the input images. Serialization is done by concatenating the `text` values of `Paragraph` annotations with two newline characters.
- @lang = “en”
- http://vocab.lappsgrid.org/Token
Translation of the recognized docTR “words” in the input images. The `text` and `word` properties store the string values of the recognized text. The duplication is for keeping backward compatibility and consistency with `Paragraph` and `Sentence` annotations.
- text = “*”
- word = “*”
- http://vocab.lappsgrid.org/Sentence
Translation of the recognized docTR “lines” in the input images. The `text` property stores the string value of space-joined words.
- text = “*”
- http://vocab.lappsgrid.org/Paragraph
Translation of the recognized docTR “blocks” in the input images. The `text` property stores the string value of newline-joined sentences.
- text = “*”
- http://mmif.clams.ai/vocabulary/Alignment/v1
Alignments between 1) `TimePoint` <-> `TextDocument`, 2) `TimePoint` <-> `Token`/`Sentence`/`Paragraph`, and 3) `BoundingBox` <-> `Token`/`Sentence`/`Paragraph`. (any properties)
- http://mmif.clams.ai/vocabulary/BoundingBox/v4
Bounding boxes of the detected text regions in the input images. No corresponding box is generated for the entire image (`TextDocument`) region.
- label = “text”
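As a rough illustration of how the output types above fit together, the sketch below rebuilds the serialized text from the docTR hierarchy (words space-joined into `Sentence` texts, sentences newline-joined into `Paragraph` texts, paragraphs concatenated with two newlines into the `TextDocument` text) and follows one `BoundingBox` <-> `Token` alignment. The sample strings, annotation ids, and plain-dict representation are hypothetical stand-ins, not real MMIF serialization.

```python
# Hypothetical docTR output: blocks -> lines -> words.
blocks = [
    [["DOG", "DAY"], ["AFTERNOON"]],   # block 1: two lines of words
    [["WNET/13", "NEW", "YORK"]],      # block 2: one line of words
]

# Sentence.text: space-joined words of each docTR "line".
sentences = [" ".join(words) for block in blocks for words in block]

# Paragraph.text: newline-joined sentences of each docTR "block".
paragraphs = ["\n".join(" ".join(words) for words in block) for block in blocks]

# TextDocument text: Paragraph texts concatenated with two newlines.
text_document = "\n\n".join(paragraphs)

# Following a BoundingBox <-> Token alignment; annotations are
# simplified to plain dicts keyed by made-up ids.
annotations = {
    "bb1": {"@type": "BoundingBox", "label": "text"},
    "tk1": {"@type": "Token", "text": "AFTERNOON", "word": "AFTERNOON"},
}
alignments = [{"source": "bb1", "target": "tk1"}]
region_text = {a["source"]: annotations[a["target"]]["text"] for a in alignments}
```

Note that the `Token` text appears once per level of the hierarchy: in the token itself, in its enclosing sentence and paragraph, and in the final document text.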