# Inference CLI
Roboflow Inference CLI is the command-line interface for the `inference` ecosystem, providing an easy way to:

- Run and manage the `inference` server locally
- Process data with Workflows
- Benchmark `inference` performance
- Make predictions from your models
- Deploy the `inference` server in the cloud
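The capabilities above map to CLI subcommands. A minimal sketch follows; the subcommand names (`inference server start`, `inference infer`, `inference benchmark`) reflect the CLI's documented interface, but exact flags and options may differ between versions, and the model ID and API key below are placeholders:

```shell
# Start the inference server locally (requires Docker)
inference server start

# Run a prediction against a model hosted on Roboflow
# ("my-project/1" and the API key are placeholders)
inference infer -i image.jpg -m "my-project/1" --api-key "$ROBOFLOW_API_KEY"

# Benchmark inference performance against the running server
inference benchmark api-speed -m "my-project/1"

# Stop the server when done
inference server stop
```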
## Installation

```bash
pip install inference-cli
```
> **Note:** If you have installed the `inference` Python package, the CLI extension is already included.
## Supported Devices
Roboflow Inference CLI currently supports the following device targets:
- x86 CPU
- ARM64 CPU
- NVIDIA GPU (including Jetson)
For Jetson-specific inference server images, check out the Roboflow Inference package, or pull the images directly by following the instructions in the official Inference Server documentation.
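As an alternative to managing the server through the CLI, the server images can be pulled and run with Docker directly. A sketch, assuming the `roboflow/roboflow-inference-server-cpu` and `-gpu` image names on Docker Hub (the exact Jetson image tag depends on your JetPack version, so it is omitted here):

```shell
# CPU image: works on x86 and ARM64 hosts
docker pull roboflow/roboflow-inference-server-cpu
docker run -d -p 9001:9001 roboflow/roboflow-inference-server-cpu

# GPU image: requires an NVIDIA GPU and the NVIDIA Container Toolkit
docker pull roboflow/roboflow-inference-server-gpu
docker run -d --gpus all -p 9001:9001 roboflow/roboflow-inference-server-gpu
```

Both variants expose the server's HTTP API on port 9001 by default.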