What Devices Can I Use?

You can deploy Inference on the edge, in your own cloud, or using the Roboflow hosted inference option.

Supported Edge Devices

You can set up a server to use computer vision models with Inference on the following devices:

  • ARM CPU (macOS, Raspberry Pi)
  • x86 CPU (macOS, Linux, Windows)
  • NVIDIA GPU
  • NVIDIA Jetson (JetPack 4.5.x, JetPack 4.6.x, JetPack 5.x, JetPack 6.x)

Model Compatibility

The table below shows which devices support each model available in Inference.

See our installation guide for more information on how to deploy Inference on your device.
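As a quick sketch, the server is typically started from a Docker image. The image names below follow Roboflow's published Docker images; check the installation guide for the image matching your device:

```shell
# Run the CPU inference server (ARM or x86), exposing the API on port 9001
docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu

# For NVIDIA GPU hosts (requires the NVIDIA container toolkit):
docker run -it --rm -p 9001:9001 --gpus all roboflow/roboflow-inference-server-gpu
```

Once the container is running, the server accepts inference requests on http://localhost:9001.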

Table key:

  • Yes: Fully supported
  • No: Not supported

Model                    CPU  GPU  Jetson 4.5.x  Jetson 4.6.x  Jetson 5.x  Roboflow Hosted Inference
YOLOv8 Object Detection  Yes  Yes  No            No            Yes         Yes
YOLOv8 Classification    Yes  Yes  No            No            Yes         Yes
YOLOv8 Segmentation      Yes  Yes  No            No            Yes         Yes
YOLOv5 Object Detection  Yes  Yes  Yes           Yes           Yes         Yes
YOLOv5 Classification    Yes  Yes  Yes           Yes           Yes         Yes
YOLOv5 Segmentation      Yes  Yes  Yes           Yes           Yes         Yes
CLIP                     Yes  Yes  Yes           Yes           Yes         Yes
DocTR                    Yes  Yes  Yes           Yes           Yes         Yes
Gaze                     Yes  Yes  No            No            No          Yes
SAM                      Yes  Yes  No            No            No          No
ViT Classification       Yes  Yes  Yes           Yes           Yes         Yes
YOLACT                   Yes  Yes  Yes           Yes           Yes         Yes

Cloud Platform Support

You can deploy Inference on any cloud platform such as AWS, GCP, or Azure.

The installation and setup instructions are the same as for any edge device, once you have installed the relevant drivers on your cloud platform. We recommend deploying with an official "Deep Learning" image from your cloud provider if you are running inference on a GPU device. "Deep Learning" images should have the relevant drivers pre-installed so you can set up Inference without configuring GPU drivers manually.
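Before starting the GPU server on a cloud VM, it can help to confirm the driver is actually visible. A sketch, assuming an NVIDIA GPU instance with Docker and the NVIDIA container toolkit installed:

```shell
# Verify the NVIDIA driver is installed and working on the VM
nvidia-smi

# Confirm Docker can see the GPU (the CUDA base image here is one
# example tag; any CUDA image with nvidia-smi works)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If both commands print a GPU table, the drivers are ready and you can start the GPU inference server as on any edge device.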

Use Hosted Inference from Roboflow

You can also run your models in the cloud with Roboflow's hosted inference offering, which lets you deploy models without managing your own infrastructure. Note that the hosted offering does not support every feature of Inference that is available when you run it on your own infrastructure; see the table above for model availability.
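As a minimal sketch, the hosted endpoint can be called over plain HTTP. The model ID and API key below are placeholders, not values from this document; replace them with your own project/version ID and key:

```python
# Sketch of calling Roboflow's hosted inference endpoint over HTTP.
# API_KEY and MODEL_ID are placeholders you must replace.
API_KEY = "YOUR_API_KEY"
MODEL_ID = "your-project/1"  # hypothetical project/version id

url = f"https://detect.roboflow.com/{MODEL_ID}"

# With a real key, post an image using the requests library:
# import requests
# with open("image.jpg", "rb") as f:
#     r = requests.post(url, params={"api_key": API_KEY}, files={"file": f})
#     print(r.json())
print(url)
```

The response is JSON containing the model's predictions, so the same client code works regardless of which model type you deployed.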