Upload Custom Model Weights
Once you've completed training your custom model, upload your model weights back to your Roboflow project to take advantage of Roboflow Inference.
Model Support
Refer to the Supported Models table for details on weights upload compatibility.
- YOLOv8 models must be trained on ultralytics==8.0.196
- YOLOv9 models must be trained and uploaded using the ultralytics code from https://github.com/WongKinYiu/yolov9
- YOLOv10 models must be trained and uploaded using the ultralytics code from https://github.com/THU-MIG/yolov10
- YOLOv11 models must be trained on ultralytics<=8.3.40
- YOLOv12 models must be trained and uploaded using the ultralytics code from https://github.com/sunsmarterjie/yolov12
Larger model sizes generally produce better training results, but they also slow down both training and inference (model prediction). Choose a smaller model if you need real-time inference on fast-moving objects or video feeds; choose a larger model if you process data after it is collected and are more concerned with prediction accuracy.
Versioned vs. Versionless Models Upload
Roboflow provides two distinct approaches for deploying models to your projects, each serving different use cases and organizational needs.
- Versionless Deployments
- Tied to the workspace level
- Can be deployed to multiple projects simultaneously
- Ideal for sharing models across different projects within the same workspace
- Versioned Deployments
- Tied to specific project versions
- One model per dataset version
- Ideal for tracking model evolution alongside dataset versions
- Ideal for using the model with Label Assist
- Ideal for using the model as a checkpoint when training other models
Upload Custom Weights
First, make sure you have the latest roboflow Python package installed:
pip install --upgrade roboflow
To upload versionless custom weights, use the workspace.deploy_model() method:
workspace.deploy_model(
    model_type="yolov8",                   # Type of the model
    model_path="path/to/model",            # Path to model directory
    project_ids=["project1", "project2"],  # List of project IDs
    model_name="my-model",                 # Name for the model
    filename="weights/best.pt"             # Path to weights file (default)
)
Parameters
- model_type (str): The type of model being deployed (e.g., "yolov8", "yolov11")
- model_path (str): File path to the directory containing the model weights
- project_ids (list[str]): List of project IDs to deploy the model to
- model_name (str): Name to identify the model (must contain at least one letter; may also include numbers and dashes)
- filename (str, optional): Name of the weights file (defaults to "weights/best.pt")
Example
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
workspace = rf.workspace("YOUR_WORKSPACE")
workspace.deploy_model(
    model_type="yolov8",
    model_path="./runs/train/weights",
    project_ids=["project-1", "project-2", "project-3"],
    model_name="my-custom-model"
)
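For a versioned deployment, weights are attached to a single dataset version rather than to the workspace. A minimal sketch is shown below, assuming the `version.deploy()` method of the roboflow Python package; the API key, workspace/project IDs, and version number are placeholders you would replace with your own values.

```python
def deploy_versioned(api_key: str, workspace_id: str, project_id: str,
                     version_number: int, model_path: str) -> None:
    """Upload custom weights to one specific dataset version of a project.

    Sketch only: all identifiers passed in are placeholders. model_path
    should point at the training output directory containing the weights
    file (weights/best.pt by default).
    """
    # Imported here so the sketch can be read/imported even before
    # `pip install roboflow` has been run.
    from roboflow import Roboflow

    rf = Roboflow(api_key=api_key)
    project = rf.workspace(workspace_id).project(project_id)
    version = project.version(version_number)
    version.deploy(model_type="yolov8", model_path=model_path)
```

Because a versioned deployment is tied to one dataset version, uploading again to the same version replaces that version's model rather than creating a new one alongside it.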
Next Steps
- Check out your model in the "Models" tab of Roboflow.
- Run your model locally with Roboflow Inference Server.
- Deploy your model.
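Once the weights are uploaded, you can load the model locally through the `inference` package and run predictions against it. The sketch below assumes `pip install inference` has been run; the model ID (`"your-project/1"`, i.e. project ID plus version number) and the API key are placeholders.

```python
def run_local_inference(image_path: str):
    """Run a locally loaded Roboflow model on one image.

    Sketch only: the model ID and API key below are placeholders,
    not real credentials.
    """
    # Imported here so the sketch can be read/imported even before
    # `pip install inference` has been run.
    from inference import get_model

    model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")
    return model.infer(image_path)
```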