
Sighthound Traffic Analytics

Pipeline Overview

TrafficAnalytics identifies traffic objects for use with sensors such as presence, counts, etc. This pipeline primarily processes an input video file. It is split into several sub-pipeline files; the main entry point is the TrafficAnalytics.yaml file.

The following modules are the specific entry points:

  • TrafficAnalyticsFile.yaml - runs the pipeline on a video or image input file
  • TrafficAnalyticsRTSP.yaml - runs the pipeline on an RTSP stream
  • TrafficAnalyticsGstFile.yaml - runs the pipeline on a video or image input file (GStreamer backend, only available on Jetson)
  • TrafficAnalyticsGstRTSP.yaml - runs the pipeline on an RTSP stream (GStreamer backend, only available on Jetson)
  • TrafficAnalyticsFolderWatch.yaml - runs the pipeline using a configured watched folder as input
  • TrafficAnalyticsDirect.yaml - pipeline used by applications embedding SIO as an SDK
  • TrafficAnalyticsImageLoop.yaml - pipeline feeds a pre-configured image in a loop; useful for performance testing.

How to Use

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsFile.yaml VIDEO_IN=<path to video file>

Running an RTSP feed with TrafficAnalyticsRTSP.yaml:

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsRTSP.yaml VIDEO_IN=<RTSP feed URI>


The main use of this pipeline is to generate JSON output, published via AMQP to other microservices. The file pipeline also has an option to render a video output with the generated boxes, labels, and tracker data.

Pipeline Parameters



  • detectionMinConfidence - detection confidence threshold. Default: 0.5
  • detectionOverlapThreshold - helps filter out detections whose bounding boxes overlap at least this much. Default: 0.8
  • detectionModel - specifies the detection model to use, which affects both performance and accuracy. Values match the models available in the ./models folder and take the form generation + [server or embedded] + [standard, fast, or best]; for example, the 6th-generation embedded best model is gen6eb. Default: gen5es
  • enabledClasses - list of classes enabled in the object detector. Default: [ "bicycle", "bus", "car", "motorbike", "person", "truck" ]
  • imageSaveDir - folder name and prefix where to save analyzed frames corresponding to the generated analytics output. Default: empty (do not generate images)
  • imageWriterFormat - if set, images will be saved in the specified format. Default: 'jpg'
  • imageSaveFrequencyMs - frequency in which to save images to the previously set directory. Default: 1000
  • sourceId - a "callback context": reported results will include this ID. Useful when gathering outputs from multiple cameras/pipelines at a single point. Default: traffic-analytics-1
  • useTracker - by default, the tracker is not used, so detected vehicles are assigned a new object ID in every frame unless they are correlated with a license plate. Setting this value to true enables the tracker, at a significant performance cost. We recommend never enabling it for a stream from a moving source. Default: false
  • trackerType - choose which algorithm of tracker to use. Options are [cpuBest, cpuFast, cpuFaster, cpuFastest]. Default: cpuFaster
  • debugSettings - comma-separated list that enables debugging in multiple modules. Default: "". Supported options:
    • eventAnalytics: print the input and output of event analytics.
    • OSD: enable OSD streamView events.
    • dumpJson: save frame_data to /data/tmp.
    • jsonPrint: print frameData to stdout.
    • json: add a debug section to frameData.
    • log: enable debug logging in Python modules.
    • filteringDebug: enable OutputFilter debug.
  • splitMakeModel - if true, the "value" of a car object will be represented as JSON dictionary with values for 'make', 'model', 'generation'. If false, it will be represented as a comma-separated string. Default: false
  • gpuDevice - index of GPU to use. Default: 1
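
These parameters can be passed on the command line in the same KEY=value form as VIDEO_IN. A hypothetical invocation (values are illustrative only):

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsFile.yaml \
    VIDEO_IN=<path to video file> \
    detectionMinConfidence=0.6 \
    detectionModel=gen6eb \
    sourceId=traffic-analytics-1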

Update Strategy

  • updateAllowEmptyJSON - if true, output an analytics result even if it contains no objects. When false, empty JSON may still be emitted when an update is forced by any of the other update-related parameters described here. Default: false
  • updateMinFrequencyMs - send an output refresh if no updates have been seen within the specified time since the last output. The update is sent even if the output is empty, so this can be used as a keep-alive mechanism for the client. Default: 0 (never force updates based on time)
  • updateOnlyOnChange - if true, only emit new or updated detections (that includes previously detected objects with an improved score). Default: true (except for FolderWatch pipeline, where it is false)
  • updateOnlyOnEvents - if true, only emit frameData with events ( "sensorEvents" ). Default: false
  • updateOnMetaclassCountChange - comma-separated list of metaclasses (person, vehicles) that trigger an update whenever the count of that metaclass changes in the output. The special value all means update on a count change for any class supported by the pipeline. Default: '' (no updates are triggered by count changes)
  • updateOnStart - send an update at the very start of processing, even if no objects are detected in that frame. Default: true
  • updateResetStateOnNewInput - relevant for FolderWatch pipeline only; if true, treat each new input as a separate stream. Default: true
  • updateOnImprovedScore - if true, score improvements will trigger updates for detected objects. Default: false
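
For example, to emit a keep-alive update every 5 seconds and update on any metaclass count change (a hypothetical invocation, assuming these parameters are passed as KEY=value like VIDEO_IN):

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsRTSP.yaml \
    VIDEO_IN=<RTSP feed URI> \
    updateMinFrequencyMs=5000 \
    updateOnMetaclassCountChange=all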

Box Filtering

Allows filtering out detection boxes based on bounding-box dimensions.

  • boxFilterConfig - specifies a path to the box filter configuration. Default: '' (no filtering). Example configuration:

          "name" : "myRoiFilter1",
          # ROI-specific config
          "type" : "roi",
          # The box coordinates are interpreted as `[ x, y, w, h ]`;
          # those coordinates may be absolute pixel coordinates, or relative to the size of the image, with each value being in range of `[0,1]`
          "region" : [ 0.1, 0.1, 0.2, 0.2 ],
          # Classes to apply this filter to
          "classes" : [  "car", "truck" ],
          # boxInRoi (box contained in ROI with minimum of requiredOverlap), roiInBox (opposite)
          "behavior": "boxInRoi",
          # Minimum overlap. Strict > is applied to 0, so specifying that value requires any overlap (Optional, default: 100)
          "requiredOverlap" : 0
          # When set to true, if this filter applies, the object won't pass BoxFilter (Optional, default: false)
          "exclude" : false,
          # Setting this to true will enable additional logging related to this filter
          "debug" : false
          "name" : "mySizeFilter1",
          # Size-specific configuration
          "type" : "size",
          # Value to filter on. Relevant parameters failling outside of constraints will cause the object to be removed from the output
          # Possible values: dimension (either width or height), width, height, area, aspectRatio
          "subtype" : "dimension",
          # Absolute value in pixels, or relative to the frame size, if in (0-1] range
          # 0 means no limit. Both values are optional (defaulting to 0, but if both aren't specified,
          # the filter has no effect)
          "max" : 0.1,
          "min" : 0.1,
          # Classes to apply this filter to
          "classes" : [ "car", "truck" ],
          # Setting this to true will enable additional logging related to this filter
          "debug" : false

This can also be useful to turn off unnecessary classifications. For example, setting the min for the person class to 5000 ensures that no person is ever reported by the detector, and thus never analyzed, effectively turning off person detection. Similarly, setting all the vehicle constraints to a similarly large value can be used to turn off MMC (make, model, color) analytics.
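
For instance, a single size filter along these lines (hypothetical filter name, same format as the example above) would suppress person detections entirely:

          [
              {
                  "name" : "disablePersonFilter",
                  "type" : "size",
                  "subtype" : "dimension",
                  # No real detection reaches 5000 pixels, so every person is filtered out
                  "min" : 5000,
                  "classes" : [ "person" ]
              }
          ]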


Extension Modules

  • extensionModules - comma-separated list of file paths pointing to Python modules used, in the order specified, to post-process and potentially modify the JSON output emitted by the pipeline. Default: ''
  • extensionConfigurations - a JSON config file for the extension module. Default: ''

Each module specified needs to implement two methods:

  • def configure(configPath) - called once, while loading the module. The path specified in extensionConfigurations is provided as the parameter.
  • def process(tick, data, frame) - called for each frame processed. The parameters are the sequential frame ID, the JSON output, and the RGB frame object. Must return the new value for data.
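
A minimal sketch of such an extension module (the field names added below are hypothetical, and this assumes data arrives as a Python dictionary):

```python
import json

_config = {}

def configure(configPath):
    # Called once when the module is loaded; configPath is the value of
    # extensionConfigurations (may be empty).
    global _config
    if configPath:
        with open(configPath) as f:
            _config = json.load(f)

def process(tick, data, frame):
    # Called once per processed frame with the sequential frame ID (tick),
    # the JSON output (data), and the RGB frame object (frame).
    # Hypothetical post-processing: annotate the output with the tick and
    # a tag taken from the extension configuration.
    data["extTick"] = tick
    data["extTag"] = _config.get("tag", "untagged")
    return data  # must return the new value for data
```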


Debugging

  • debugSettings - enables various debugging facilities. Default: ''. Formatted as a comma-separated list of the following values:
    • json: a debug section is added to the JSON output.
    • log: enable log output from the pipeline.
    • history: periodically (and on pipeline termination) log all the reports emitted so far.
  • debugRawResultsPrefix - when set to a path, will generate raw results useful for post-processing and tuning in the path specified. Default: '' (disabled)
  • performanceInfo - report performance metrics. Default: false
  • includeAverageHardware - report average hardware usage information. Default: false
  • performanceCSVFile - save path for the performance metrics CSV; if empty (default), no file is saved. Default: ''
  • debugOutputPath - Save path for debug files. Default: '/data/tmp'
  • jsonDumpFormat - save format for output JSONs. Default: 'json'. Options: 'json' or 'csv'
  • saveJSON - debug option to save encoded JSON upon flush of the TrafficAnalyticsOutputAggregator node. Default: false

For more detailed information, see the SIOPerformanceMonitor documentation.


Event Analytics

  • sensorsConfigFile - path to the sensors configuration file. If empty (default), the Event Analytics module is disabled and input is forwarded to output. Default: ''


AMQP Output

  • amqpUseSsl - whether SSL is used for the AMQP connection. Default: false
  • amqpHost - host address for the AMQP output. AMQP output generation will be skipped if this is empty. Default: ''
  • amqpPort - connection port. Host address and port number make a complete address. Default: 0
  • amqpMaxRetries - maximum number of connection retries before producing an error message. Default: 5
  • amqpUser - username that will be used to establish a connection with the host address. Default: ''
  • amqpPassword - password that will be used to establish a connection with the host address. Default: ''
  • amqpExchange - used to configure the AMQP Exchange. Default: ''
  • amqpRoutingKey - used to configure the AMQP Routing key. Default: "sio"
  • amqpErrorOnFailure - generate an error on AMQP connection/message post failure. Default: false

Pipeline Specific


TrafficAnalyticsFile

Used for processing video files.

Required Parameters: VIDEO_IN='Video_Path'

Pipeline Specific Options:

  • VIDEO_IN - file to process.
  • outputPrefix - save path and image filename prefix. When set, the pipeline will save images with detection boxes drawn. Default: ''
  • recordToFile - save path and video filename. When set, the pipeline will save rendered video output with detection boxes drawn. Default: ''


TrafficAnalyticsRTSP

Used for processing videos streamed using the Real Time Streaming Protocol (RTSP).

Required Parameters: VIDEO_IN='RTSP_URI'

Pipeline Specific Options:

  • VIDEO_IN - RTSP URI to process.
  • hardwareDecoderType - provides control over the hardware video decoder. The default value results in the use of the system default. Hardware decoding can be turned off entirely by setting the value to "None". Default: ''
  • resizeToSize - controls the size of the images the pipeline ingests. Larger images are more resource-intensive to process, and a larger source image may also result in more frames dropped during video processing. This value is used as the maximum height of the frame when it is resized. Default: 720
  • maxQueueSize - the maximum number of frames to hold in the frame queue. The default setting for the RTSP pipeline is lossy: a lossy queue drops frames rather than stopping input consumption when full. When set to 0, asynchronous reading of frames is disabled. Default: 200
  • fpsLimit - the maximum number of frames that are let through the frame queue. This limits incoming FPS to process to a specific value. This can be used to control resource usage in cases where several streams need to be processed simultaneously. Default: 20
  • recordTo - Folder to record the incoming video to. Default: '' (recording disabled)
  • recordInterval - Interval, in milliseconds, after which a new file will be opened for a recording (may take longer, depending on I-frame distance). Default: 5000
  • rtspTimeoutMs - RTSP timeout in milliseconds (actual timeout may be a double, if RTSP client attempts to reconnect with a different authentication mechanism). Default: 5000
  • rtspRetryCount - number of times the RTSP connection will be retried before giving up. Default: 3
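
For example, to reduce resource usage when several streams are processed simultaneously (hypothetical values, passed as KEY=value like VIDEO_IN):

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsRTSP.yaml \
    VIDEO_IN=<RTSP feed URI> \
    fpsLimit=10 \
    resizeToSize=480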


TrafficAnalyticsGstRTSP

Used for processing videos streamed using RTSP. This uses the GStreamer backend, which is only available on Jetson.

Required Parameters: VIDEO_IN='RTSP_URL'

Pipeline Specific Options:

  • fpsLimit - same as the TrafficAnalyticsRTSP pipeline fpsLimit option.
  • VIDEO_IN - RTSP URI to process.
  • rtspTimeoutMs - RTSP timeout in milliseconds (actual timeout may be a double, if RTSP client attempts to reconnect with a different authentication mechanism). Default: 5000
  • rtspRetryCount - number of times the RTSP connection will be retried before giving up. Default: 3


TrafficAnalyticsGstFile

Used for processing video files; same as TrafficAnalyticsFile, but using a GStreamer rather than ffmpeg-based intake.

Required Parameters: VIDEO_IN='Video_Path'

Pipeline Specific Options:

  • VIDEO_IN - file to process.


TrafficAnalyticsFolderWatch

Used for processing images using a configured "watched" folder as an input.

Required Parameters: folderPath='Valid_Folder_Path'

Pipeline Specific Options:

  • folderPath - folder path from which input images are read.
  • folderPollInterval - time interval, in milliseconds, to wait before checking for new files. Default: 200
  • folderPollAgeMin - time interval, in milliseconds, for which new files are left untouched, to ensure they're not still being written to storage. Default: 1000
  • folderPollPrefix - only use files with this prefix. Default: ''
  • folderPollExtensions - list of image extensions to consider. Default: ["jpg"]
  • folderGenerateAnalyticsFile - if true, creates a JSON file with the frame data results for each image/video processed in the "processed" subfolder. Default: true
  • folderRemoveSourceFiles - when true, removes source files upon completion; when false, moves them to the "processed" subfolder. Default: false
  • folderWatchTerminateOnIdle - when no new inputs can be found in the specified folder, the pipeline will terminate. Useful if, and only if, running on a specific contained data set. Default: false
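
A hypothetical invocation for processing a fixed, contained data set, terminating once the folder runs dry:

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsFolderWatch.yaml \
    folderPath=<Valid_Folder_Path> \
    folderWatchTerminateOnIdle=true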


TrafficAnalyticsDirect

This pipeline is used by applications which use SIO as an embedded SDK.

Required Parameters: none

Pipeline Specific Options: none


TrafficAnalyticsImageLoop

Used for feeding a pre-configured image in a loop. This is useful for performance testing.

Required Parameters: VIDEO_IN='Image_Path'

Pipeline Specific Options:

  • VIDEO_IN - path to image to feed as input.
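
Invoked the same way as the other file-based pipelines:

./bin/runPipeline share/pipelines/TrafficAnalytics/TrafficAnalyticsImageLoop.yaml \
    VIDEO_IN=<Image_Path>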