Scenes-with-text Detection (v4.1)

About this version

  • Submitter: keighrim
  • Submission Time: 2024-03-07T03:29:41+00:00
  • Prebuilt Container Image: ghcr.io/clamsproject/app-swt-detection:v4.1
  • Release Notes

    This version includes many bug fixes and clarifications over the previous version.

    • more informative, consistent, and Flask-friendly debug-level logging for future development
    • two additional pretrained models, including one based on the convnext_tiny backbone for quicker annotation
    • TimePoint annotations have been reworked
      • classification-related properties in TimePoint annotations are now all based on the “RAW” labels from the classifier
      • all image classification results are now recorded as TimePoint annotations, regardless of TimeFrame annotations
    • added two runtime parameters (see the usage sketch after this list)
      • useStitcher - when "false", the app will only generate TimePoint annotations, without stitching them into TimeFrame annotations
      • modelName - to pick one of the pre-built classifier models (by default, the app uses the best-performing model from the training experiments)
    • updated to the latest mmif-python and clams-python, and thus no longer generates MMIFs with a non-existent version
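
    As a quick illustration of the new runtime parameters, here is a minimal
    sketch (not taken from the app's documentation) of calling a running app
    container over HTTP. It assumes the container built from the image above
    is listening on local port 5000, and that input.mmif is a MMIF file
    pointing at a video:

        import requests

        # A MMIF file containing the video document to process
        with open("input.mmif") as f:
            mmif_in = f.read()

        # Runtime parameters are passed as URL query parameters; here we
        # skip the stitcher (TimePoint annotations only) and pick the
        # lighter convnext_tiny model for quicker annotation.
        resp = requests.post(
            "http://localhost:5000",
            data=mmif_in,
            params={
                "useStitcher": "false",
                "modelName": "20240212-131937.convnext_tiny.kfold_000",
            },
        )
        print(resp.text)  # the output MMIF with TimePoint annotations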

About this app (See raw metadata.json)

Detects scenes with text, like slates, chyrons and credits.

Inputs

(Note: “*” as a property value means that the property is required but can be any value.)

(any properties)

Configurable Parameters

(Note: Multivalued means the parameter can have one or more values.)

  • startAt: optional, defaults to 0

    • Type: integer
    • Multivalued: False

    Number of milliseconds into the video to start processing

  • stopAt: optional, defaults to 10000000

    • Type: integer
    • Multivalued: False

    Number of milliseconds into the video to stop processing

  • sampleRate: optional, defaults to 1000

    • Type: integer
    • Multivalued: False

    Milliseconds between sampled frames (see the sampling sketch after this parameter list)

  • minFrameScore: optional, defaults to 0.01

    • Type: number
    • Multivalued: False

    Minimum score for a still frame to be included in a TimeFrame

  • minTimeframeScore: optional, defaults to 0.5

    • Type: number
    • Multivalued: False

    Minimum score for a TimeFrame

  • minFrameCount: optional, defaults to 2

    • Type: integer
    • Multivalued: False

    Minimum number of sampled frames required for a TimeFrame (see the stitching sketch after this parameter list)

  • modelName: optional, defaults to 20240126-180026.convnext_lg.kfold_000

    • Type: string
    • Multivalued: False
    • Choices: 20240126-180026.convnext_lg.kfold_000, 20240212-132306.convnext_lg.kfold_000, 20240212-131937.convnext_tiny.kfold_000

    Model name to use for classification

  • useStitcher: optional, defaults to true

    • Type: boolean
    • Multivalued: False
    • Choices: false, true

    Use the stitcher after classifying the TimePoints

  • pretty: optional, defaults to false

    • Type: boolean
    • Multivalued: False
    • Choices: false, true

    The JSON body of the HTTP response will be re-formatted with 2-space indentation
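
To make the sampling parameters above concrete, here is a small illustrative
sketch (a hypothetical helper, not the app's actual code) of how startAt,
stopAt, and sampleRate determine which millisecond positions are classified:

    def sample_times(start_at=0, stop_at=10000000, sample_rate=1000):
        """Yield the millisecond positions to be sampled and classified.

        Illustration only; the app's real logic may additionally clamp
        stop_at to the video's duration.
        """
        t = start_at
        while t < stop_at:
            yield t
            t += sample_rate

    # e.g., process only the first minute, one frame every 2 seconds:
    times = list(sample_times(start_at=0, stop_at=60000, sample_rate=2000))
    # -> [0, 2000, 4000, ..., 58000]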
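
Likewise, the three score thresholds govern the stitcher. The following sketch
shows one plausible reading of their semantics (an assumption based on the
parameter descriptions above, not the app's actual stitching algorithm):

    def keep_timeframe(frame_scores, timeframe_score,
                       min_frame_score=0.01,
                       min_timeframe_score=0.5,
                       min_frame_count=2):
        """Decide whether a candidate TimeFrame survives filtering.

        frame_scores: classifier scores of the sampled frames inside the
            candidate; frames scoring below min_frame_score are not
            counted as members of the TimeFrame
        timeframe_score: the aggregate score of the candidate TimeFrame
        """
        members = [s for s in frame_scores if s >= min_frame_score]
        return (len(members) >= min_frame_count
                and timeframe_score >= min_timeframe_score)

    # A three-frame candidate with a strong aggregate score is kept:
    keep_timeframe([0.8, 0.9, 0.7], timeframe_score=0.8)   # -> True
    # The same frames with a weak aggregate score are dropped:
    keep_timeframe([0.8, 0.9, 0.7], timeframe_score=0.3)   # -> False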

Outputs

(Note: “*” as a property value means that the property is required but can be any value.)

(Note: Not all output annotations are always generated.)