Making predictions from your models

The inference infer command offers an easy way to make predictions from your model on input images or video files by sending requests to an inference server.

Note

To see the full details of the command, run inference infer --help.

Command details

inference infer takes an input path or URL and a model version, and produces predictions (optionally visualised using supervision). You can also specify a host to run inference against the hosted inference server.

Note

If you decide to use a self-hosted inference server, make sure it is running first (started with the inference server start command). Your Roboflow API key can be provided via the ROBOFLOW_API_KEY environment variable.
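As a minimal sketch, the two setup steps mentioned above look like this (assumes the inference CLI is installed; replace the placeholder with your own key):

```shell
# Provide your Roboflow API key via an environment variable
export ROBOFLOW_API_KEY="{YOUR_API_KEY}"

# Start a local (self-hosted) inference server
inference server start
```

Once the server is running, the inference infer examples below can send requests to it.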

Examples

Predict On Local Image

inference infer -i ./image.jpg -m {your_project}/{version} --api-key {YOUR_API_KEY}

To display the visualised prediction, use the -D option. To save the prediction and visualisation to a local directory, use the -o {path_to_your_output_directory} option:

inference infer -i ./image.jpg -m {your_project}/{version} --api-key {YOUR_API_KEY} -D -o {path_to_your_output_directory}

Predict On Image URL

inference infer -i https://[YOUR_HOSTED_IMAGE_URL] -m {your_project}/{version} --api-key {YOUR_API_KEY}

Using Hosted API

inference infer -i ./image.jpg -m {your_project}/{version} --api-key {YOUR_API_KEY} -h https://detect.roboflow.com

Predict From Local Directory

inference infer -i {your_directory_with_images} -m {your_project}/{version} -o {path_to_your_output_directory} --api-key {YOUR_API_KEY}

Predict On Video File

inference infer -i {path_to_your_video_file} -m {your_project}/{version} -o {path_to_your_output_directory} --api-key {YOUR_API_KEY}

Configure The Visualization

The -c option can be provided with a path to a *.yml file configuring the supervision visualisation. There are a few pre-defined configs:

  • bounding_boxes -- with BoxAnnotator and LabelAnnotator annotators
  • bounding_boxes_tracing -- with ByteTrack tracking plus BoxAnnotator and LabelAnnotator annotators
  • masks -- with MaskAnnotator and LabelAnnotator annotators
  • polygons -- with PolygonAnnotator and LabelAnnotator annotators

A custom configuration can be created following this schema:

annotators:
  - type: "bounding_box"
    params:
      thickness: 2
  - type: "label"
    params:
      text_scale: 0.5
      text_thickness: 2
      text_padding: 5
  - type: "trace"
    params:
      trace_length: 60
      thickness: 2
tracking:
  track_activation_threshold: 0.25
  lost_track_buffer: 30
  minimum_matching_threshold: 0.8
  frame_rate: 30
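As a sketch of putting the custom configuration to use, save the YAML above to a file (my_config.yml is a hypothetical name) and pass its path via -c; combined with -D the result is displayed, and -o saves it:

```shell
# Run inference on a video with a custom visualisation config
# (my_config.yml is a hypothetical filename containing the schema above)
inference infer -i {path_to_your_video_file} -m {your_project}/{version} \
  -c my_config.yml -D -o {path_to_your_output_directory} --api-key {YOUR_API_KEY}
```

The tracking section of the config is only meaningful for video inputs, where detections persist across frames.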

Provide Inference Hyperparameters

The -mc parameter can be provided with a path to a *.yml file that specifies the model configuration (for example, confidence threshold or IoU threshold). If given, the configuration will be used to initialise an InferenceConfiguration object from the inference_sdk library. See the SDK configuration docs for the full list of configurable options.
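As a sketch, such a file might look like the fragment below. The parameter names shown are assumptions based on InferenceConfiguration fields; verify them against the SDK configuration docs for your version:

```yaml
# Hypothetical model configuration file, e.g. model_config.yml
# (parameter names are assumptions -- check the SDK configuration docs)
confidence_threshold: 0.5   # minimum confidence for a prediction to be kept
iou_threshold: 0.5          # IoU threshold used during non-maximum suppression
```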