
Working with SIO framework

Installation

Docker

The preferred method of using SIO on Linux, both x86 and ARM/Jetson, is via a Docker container. The SIO Docker image can be obtained from our private registry once we've created an account for you and provided you with sighthound-keyfile.json.

Note

You may need to prepend sudo to each of the docker commands in this document, depending on your machine's configuration.

cat sighthound-keyfile.json | docker login -u _json_key --password-stdin us-central1-docker.pkg.dev

docker pull us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:r220822

You have a choice of either using the image as is, or inheriting from it. The latter approach is beneficial if you plan to modify the image in any way: embedding a license, installing additional software packages, etc.

The image uses Python 3.8, and it is recommended that no changes are made to the Python runtime. A small set of packages is installed with pip (numpy, etc.), and more can be added.

Important environment variables that can be used within the container:

  • SH_HOME - root of Sighthound software installation, normally /sighthound
  • SIO_HOME - root of SIO installation, normally /sighthound/sio

Additional variables that can be defined to modify the container behavior:

  • SIO_DATA_DIR - path at which SIO stores generated data, such as CUDA engines. It is recommended to point this at a volume shared from the host, to avoid re-creating that data with each container restart
  • SIO_INFERENCE_RUNTIME - overrides the default inference runtime (D3T - TensorRT, D3V - x86 CPU)
  • SIO_INFERENCE_PRELOAD_ENGINE - engine for which all available models will be preloaded at pipeline initialization. Without it, models are loaded on an ad hoc basis, which may cause undesirable delays once the pipeline has already started processing.
  • SIO_INFERENCE_PRELOAD_MODELS - comma-separated list of models to preload (see the models subfolder in the SIO installation for options), or all

NVIDIA Container Toolkit

If an NVIDIA GPU is available on your host, install the NVIDIA Container Toolkit.

Native installation

  • Windows
    • Unzip the supplied package into a folder, for example c:\sio
    • Install the supplied redistributable packages from c:\sio\redist (at the time of writing, VC_redist.x64.exe and w_dpcpp_cpp_runtime_p_2022.0.0.3663.exe)
    • The Windows installation comes with an embedded Python installation, with required packages such as numpy already present. It may be extended using c:\sio\bin\python.exe -m pip install [package]
  • Linux
    • It is highly recommended that the Docker image be used; there needs to be a very good reason to use a direct installation.
    • Install SIO in the desired location, for example /opt/sio
    • Ensure integration with the Python version of your choice (in the example, 3.9): ln -s /usr/lib/x86_64-linux-gnu/libpython3.9.so.1.0 /opt/sio/lib/libpython3.6m.so.1.0
    • Set LD_LIBRARY_PATH with LD_LIBRARY_PATH=/opt/sio/lib:${LD_LIBRARY_PATH}
    • Ensure required Python packages are present. At a minimum, have numpy, pillow, shapely.
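The package check above can be automated with a short Python snippet. This is a minimal sketch, not part of SIO itself; note that pillow is imported under the name PIL:

```python
import importlib.util

# Minimum packages required for a native SIO installation (per the list above).
# Maps the pip package name to its import name; "pillow" is imported as "PIL".
REQUIRED = {"numpy": "numpy", "pillow": "PIL", "shapely": "shapely"}

missing = [pkg for pkg, mod in REQUIRED.items()
           if importlib.util.find_spec(mod) is None]

if missing:
    print("Missing packages, install with pip:", ", ".join(missing))
else:
    print("All required packages are present.")
```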

License

To run SIO you need a license file provided by Sighthound. In the case of a native installation or an inherited Docker image, the license may be placed at ${SIO_HOME}/share/sighthound-license.json. Otherwise, the license file may be specified as one of the runPipeline parameters.


Running pipelines

SIO operates by executing pipelines. For the sake of this example, we'll concentrate on the VehicleAnalytics pipeline, available at ./share/pipelines/VehicleAnalytics.

An example of running a pipeline that processes images deposited into a watched folder:

./bin/runPipeline share/pipelines/VehicleAnalytics/VehicleAnalyticsFolderWatch.yaml folderPath=/tmp

The same, but using a Docker container, with the license file provided from a shared external volume:

docker run -it --rm -v /data:/data -e SIO_DATA_DIR=/data us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:r220822 /sighthound/sio/bin/runPipeline /sighthound/sio/share/pipelines/VehicleAnalytics/VehicleAnalyticsFolderWatch.yaml folderPath=/data/watchedFolder --license-path /data/sighthound-license.json
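For repeated runs, the Docker invocation above can be assembled programmatically. The sketch below simply rebuilds the command line from the example (image tag, mount paths, and license path are taken verbatim from it); the helper function name is our own:

```python
import subprocess

IMAGE = "us-central1-docker.pkg.dev/ext-edge-analytics/docker/sio:r220822"
DATA_DIR = "/data"  # host volume shared into the container

def sio_run_command(pipeline, **params):
    """Build a `docker run` command for the given pipeline YAML and key=value parameters."""
    cmd = [
        "docker", "run", "-it", "--rm",
        "-v", f"{DATA_DIR}:{DATA_DIR}",
        "-e", f"SIO_DATA_DIR={DATA_DIR}",
        IMAGE,
        "/sighthound/sio/bin/runPipeline",
        f"/sighthound/sio/share/pipelines/{pipeline}",
    ]
    cmd += [f"{key}={value}" for key, value in params.items()]
    cmd += ["--license-path", f"{DATA_DIR}/sighthound-license.json"]
    return cmd

cmd = sio_run_command("VehicleAnalytics/VehicleAnalyticsFolderWatch.yaml",
                      folderPath="/data/watchedFolder")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually launch the container
```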

VehicleAnalytics and TrafficAnalytics pipelines

SIO ships with two production-ready pipelines: VehicleAnalytics and TrafficAnalytics. The two are fairly similar, their purpose being the primary difference. VehicleAnalytics is intended for cases where identification and classification of traffic objects is the primary goal: it delivers information such as make, model, color, and generation for detected vehicles, and ALPR results for detected license plates. The TrafficAnalytics pipeline is primarily used to detect objects, without classifying them, and optionally track them across the frame.

Output

The pipeline generates its output in JSON format. The schema can be found in ./docs/schemas.
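Consuming the JSON output is straightforward in any language. Below is a minimal Python sketch; the output folder path is an assumption for illustration, and the actual record layout should always be taken from the schemas in ./docs/schemas:

```python
import json
from pathlib import Path

def load_records(output_dir):
    """Load every JSON record found in the given pipeline output folder.

    The folder path is supplied by the caller; field names inside each
    record are defined by the schemas shipped in ./docs/schemas.
    """
    records = []
    for path in sorted(Path(output_dir).glob("*.json")):
        with open(path) as f:
            records.append(json.load(f))
    return records
```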

Input adaptors

The pipeline can run with a single file, a watched folder, or an RTSP stream as input, depending on the entry point used. Each pipeline has a number of parameters that may be used to alter its behavior. For details, please refer to share/pipelines/VehicleAnalytics/README.md and share/pipelines/TrafficAnalytics/README.md.

Pipeline extensions

For minor changes to pipeline behavior (such as output filtering or alteration, specifying a different method of egress from the pipeline, etc), a pipeline extension mechanism can be used. It allows execution of user-specified Python code, without altering the core pipeline's behavior.

An example of executing a pipeline with an extension module looks like:

./bin/runPipeline share/pipelines/VehicleAnalytics/VehicleAnalyticsFile.yaml VIDEO_IN=examples/media/2lps.png extensionModules=examples/extensions/OutputLogger.py extensionConfigurations=examples/extensions/OutputLoggerCfg.json

In this example the extension intercepts the output, modifies one of its fields, and saves it to a file. The extension contract is fairly simple:

  • Define three methods, configure, process, and finalize, with signatures similar to those in the example.
  • process must always return the desired output JSON, whether modified or not.