Sighthound Redactor - Image and Video API - Docker¶
This document contains instructions on how to run the Sighthound Redactor API using Docker Compose. If you prefer to use your own docker run command or another orchestration method, please refer to the docker-compose.yml for the necessary environment variables and volume mounting examples. Contact support@redactor.com with any questions.
Requirements¶
Docker¶
Ensure your system has a recent version of Docker Engine and Docker Compose. You can either manually install it for your specific operating system by following the directions on Docker's site, or you can run the convenience script that Docker provides:
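For reference, Docker's convenience script can be fetched and run as shown below. This is a sketch based on Docker's published instructions; review the script before executing it and check Docker's site for the current recommended steps.

```shell
# Download and run Docker's convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Verify the installation
docker --version
docker compose version
```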
NVIDIA (optional)¶
If you have an NVIDIA GPU, install the latest drivers and install the NVIDIA Container Toolkit to give the Docker containers access to the GPU.
Edit docker-compose.yml and uncomment the runtime: nvidia line for both the videoapi and imageapi services to enable GPU acceleration.
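As a sketch, the relevant portion of docker-compose.yml would look similar to the following after uncommenting. Only the service names and the `runtime: nvidia` key come from this document; the surrounding structure is illustrative, so match it to the actual file rather than copying this fragment verbatim.

```yaml
services:
  videoapi:
    # Uncommented to enable GPU acceleration:
    runtime: nvidia
    # ... remaining videoapi configuration ...
  imageapi:
    runtime: nvidia
    # ... remaining imageapi configuration ...
```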
API Services Overview¶
The Sighthound Redactor API consists of several services defined in a Docker Compose file:

- Video API: This service runs the Redactor Docker image and is configured for video processing, with environment variables covering debugging, GPU configuration, and API settings. It exposes port 49001 on the host and is named `videoapi` in Docker Compose. The Videos API endpoints can be accessed at http://localhost:49001/api/v1/. This Redactor service also offers a User Interface that may be helpful during development and integration; it can be accessed at http://localhost:49001 with the default username `admin` and password `sighthound`. Credentials for that user can be changed in the Redactor Admin section.
- Image API: This service also runs the Redactor Docker image but is configured for image processing. It uses environment variables for debugging, GPU configuration, CV model selection, and API settings. It exposes port 49002 on the host and depends on the `videoapi` and `rabbitmq` services. It is named `imageapi` in Docker Compose. The Image API endpoints can be accessed at http://localhost:49002/api/v1/. There is no User Interface for this service.
- RabbitMQ: This service runs the `rabbitmq` Docker image for use by the Image API. Its ports are not mapped to the host and are accessible only from the API containers.
Project Folder Structure¶
Unzip the sighthound-redactor-api-YYMMDD.zip to the computer where the services will run. It will create a new folder called sighthound-redactor-api with the following contents:
docker-compose.yaml       # Docker Compose configuration
README.md                 # Instructions on how to run and use the API
config/                   # Various configuration files and pipelines
media/                    # Mapped to /media inside the Docker containers
    input/                # Input sources can be placed here
    output/               # Can be used for outputs
volumes/                  # Redactor state and log files are stored here
    imageapi/             # Files from the Image API service
    sio/                  # SIO model cache location
    videoapi/             # Files from the Video API service
Of particular interest is the media folder. It is mounted into both the Image and Video service containers and can be used for easy access to both input and output files. The ./media/input folder contains a couple of test images and videos that are used by the examples listed below. You can also modify the docker-compose.yml to map additional host folders into the containers as required.
Start API Services¶
Note

You may need to prepend sudo to the docker commands in this document, depending on your machine's configuration. Additionally, you may need to run docker-compose instead of docker compose if you have an older version installed.

Open a terminal, cd into the sighthound-redactor-api folder, and run the following command to start the services:
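With a current Docker Compose installation, that is typically:

```shell
# Start all services in the background
docker compose up -d

# Confirm the containers are running
docker compose ps
```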
API Requests¶
The following examples show how to make API calls to the videos:process and images:process endpoints. Keep in mind that these services run on separate ports, so target the port for the service you intend to use. In a production deployment, a reverse proxy could be placed in front of the services so that a single URL would serve both; Sighthound can provide an example, if desired.
If you would like to save the JSON detection data along with the redacted video, please use "renderConfig": {"exportMetadata": true} as shown below in your requests.
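As an illustration, the request body can also be built and sent with Python's standard library. This is a minimal sketch assuming the services are running locally on the default ports; the payload mirrors the Videos API example in this document, and the `submit` helper is not part of the Redactor API.

```python
import json
import urllib.request

# Build the videos:process request body, including the renderConfig flag
# that saves the JSON detection data alongside the redacted video.
payload = {
    "inputUri": "file:///media/input/small.mp4",
    "features": ["LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
    "videoContext": {
        "processConfig": {"profileName": "gen64StandardLight"},
        "renderConfig": {"exportMetadata": True},
    },
    "outputUri": "file:///media/output/small-redacted.mp4",
}

def submit(url: str = "http://localhost:49001/api/v1/videos:process") -> str:
    """POST the payload and return the response body as a string."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

The same pattern works for images:process; change the port to 49002 and replace `videoContext` with `imageContext`.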
Videos API¶
Request¶
POST http://localhost:49001/api/v1/videos:process
{
"inputUri": "file:///media/input/small.mp4",
"features": ["LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"videoContext": {"processConfig": {"profileName": "gen64StandardLight"}, "renderConfig": {"exportMetadata": true}},
"outputUri": "file:///media/output/small-redacted.mp4"
}
Response¶
See the Videos API reference for full detail on the available options. The Operations API reference covers how to use the string returned in the above response to view the operation's status and other information.
Images API¶
Request¶
POST http://localhost:49002/api/v1/images:process
{
"inputUri": "file:///media/input/ups.jpg",
"features": ["FACE_DETECTION", "LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"imageContext": {"renderConfig": {"exportMetadata": true}},
"outputUri": "file:///media/output/ups.jpg"
}
Response¶
See the Images API reference for full detail on the available options. The Operations API reference covers how to use the string returned in the above response to view the operation's status and other information.
Curl Examples¶
curl --location --request POST 'http://localhost:49002/api/v1/images:process' \
--header 'Content-Type: application/json' \
--data-raw '{
"inputUri": "file:///media/input/cadillac.jpg",
"features": ["FACE_DETECTION", "LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"imageContext": {"renderConfig": {"exportMetadata": true}},
"outputUri": "file:///media/output/cadillac.jpg"
}
'
curl --location --request POST 'http://localhost:49002/api/v1/images:process' \
--header 'Content-Type: application/json' \
--data-raw '{
"inputUri": "file:///media/input/ups.jpg",
"features": ["FACE_DETECTION", "LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"imageContext": {"renderConfig": {"exportMetadata": true}},
"outputUri": "file:///media/output/ups.jpg"
}
'
curl --location --request POST 'http://localhost:49001/api/v1/videos:process' \
--header 'Content-Type: application/json' \
--data-raw '{
"inputUri": "file:///media/input/small.mp4",
"features": ["LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"videoContext": {"processConfig": {"profileName": "gen64StandardLight"}, "renderConfig": {"exportMetadata": true}},
"outputUri": "file:///media/output/small-redacted.mp4"
}
'
Modifying Parameters¶
Certain parameters, such as detection confidence scores, can be modified for both the Image API and Video API, but their approaches are slightly different. Sighthound's Computer Vision team can help determine the best settings for your use case, so please reach out if you have any questions.
Video API¶
The Video API allows the following parameters to be specified as part of the REST request:
- profileName - The name of the predefined Redactor profile to use. The default value is standard, but we recommend explicitly specifying this in your API call with a value of gen64StandardLight, as shown in the examples above.
- objectsConfidenceThreshold - The confidence threshold for objects to be detected. The default value is 0.3; acceptable values are between 0 and 1.
- innerObjectsConfidenceThreshold - If a face or license plate is not detected inside a person or vehicle detection, this confidence score is used during a second pass with a different model. The default value is 0.2; acceptable values are between 0 and 1.
- faceBoxExpansionFactor - The factor by which the face bounding box is expanded. The default value is 1.5 (50% larger than the detection size); acceptable values are between 1.0 and 5.0.
See the following example for where to specify these parameters:
curl --location --request POST 'http://localhost:49001/api/v1/videos:process' \
--header 'Content-Type: application/json' \
--data-raw '{
"inputUri": "file:///media/input/small.mp4",
"features": ["LICENSE_PLATE_DETECTION", "MEDIA_RENDERING"],
"videoContext": {
"processConfig": {
"profileName": "gen64StandardLight",
"objectsConfidenceThreshold": 0.25,
"innerObjectsConfidenceThreshold": 0.15,
"faceBoxExpansionFactor": 1.0
},
"renderConfig": { "exportMetadata": true }
},
"outputUri": "file:///media/output/small-redacted.mp4"
}'
Image API¶
Unlike the Video API, which accepts different parameters with every REST request, the Image API is a long-running process and currently must have all of its parameters defined at startup. If any changes are made to its config file, the service must be restarted for them to take effect.
The config file is located at ./config/pipeline/profiles.json. The Image API uses the profile named gen6StandardLight, located at the bottom of the file inside the redactImage object. The Video API also uses the profiles at the top of the file (inside processVideo), so make sure you are editing the one in redactImage to avoid any issues. The default values are:
"gen6StandardLight": {
"blurType": "mosaic",
"confidenceThreshold": 0.3,
"detectionMode": "gen6eb",
"sioAdvancedChildMode": "medium",
    "sioAdvancedChildThreshold": 0.2
}
- blurType - The type of redaction style to apply. The default is "mosaic". Other options are "blur", "pixelate", and "color".
- confidenceThreshold - The confidence threshold for objects to be detected. The default value is 0.3; acceptable values are between 0 and 1.
- detectionMode - The detection model to use for the primary pass. We recommend "gen6eb".
- sioAdvancedChildMode - The detection model to use for the secondary pass. We recommend "medium".
- sioAdvancedChildThreshold - If a face or license plate is not detected inside a person or vehicle detection, this confidence score is used during a second pass with a different model. The default value is 0.2; acceptable values are between 0 and 1.
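Because these values live in a config file rather than in the request, edits can be scripted. Below is a minimal Python sketch that lowers confidenceThreshold in the image profile; the file path, object names, and key names come from this document, but the exact JSON layout (gen6StandardLight nested directly under redactImage) is an assumption, so verify it against your copy of profiles.json.

```python
import json
from pathlib import Path

def set_image_confidence(profiles_path: str, threshold: float) -> dict:
    """Set confidenceThreshold in the gen6StandardLight image profile.

    Assumes profiles.json contains a top-level "redactImage" object holding
    the "gen6StandardLight" profile. Returns the updated document; the caller
    writes it back and restarts the imageapi service for it to take effect.
    """
    profiles = json.loads(Path(profiles_path).read_text())
    profiles["redactImage"]["gen6StandardLight"]["confidenceThreshold"] = threshold
    return profiles
```

After writing the updated document back to ./config/pipeline/profiles.json, restart the imageapi service as described below.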
After making changes to the profiles.json, restart the Image API service:
# If the services are already running:
docker compose restart imageapi
# If the services are not running:
docker compose up -d
Obtaining Support¶
The Video and Image services each produce log files that are helpful during development and for support. They are located at the following paths inside the sighthound-redactor-api folder:

- Video API: ./volumes/videoapi/logs/main.log
- Image API: ./volumes/imageapi/redactor-cloud.log
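To watch either log while reproducing an issue, run something like the following from the sighthound-redactor-api folder:

```shell
# Follow the Video API log
tail -f ./volumes/videoapi/logs/main.log

# Or follow the Image API log
tail -f ./volumes/imageapi/redactor-cloud.log
```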
A support ticket can be opened by emailing support@redactor.com. Please include a description of the issue and attach the pertinent log file(s) for our team to review.