Endpoints
Create Instance
POST
https://api.oneinfer.ai/v1/developer/:developerId/create-instance

Provision a new GPU instance on a specific provider with custom resources and a Docker image.
Request Body
provider_name (string, required)
Name of the provider (e.g., 'novita', 'verda', 'runpod').

instance_name (string, required)
User-defined name for the instance.

gpu_id (string, required)
Identifier for the requested GPU type.

gpu_num (number, required)
Quantity of GPUs to provision.

disk_size (number, required)
Size of the root disk in GB.

image_url (string, required)
Docker image URL to deploy.

region (string, required)
Region identifier (e.g., 'us-east', 'eu-west').
Example Request Body
{
"provider_name": "novita",
"instance_name": "my-gpu-instance",
"gpu_id": "nvidia_a100_80gb",
"gpu_num": 1,
"disk_size": 100,
"image_url": "pytorch/pytorch:latest",
"region": "us-west-1"
}

Status Codes
| Code | Status | Description |
|---|---|---|
| 200 | OK | Instance provisioned successfully. |
| 400 | Bad Request | Invalid request body or unsupported provider. |
| 401 | Unauthorized | Missing or invalid Authorization header / Bearer token. |
| 403 | Forbidden | Insufficient credit balance or provider validation failed. |
| 422 | Unprocessable Entity | Request body failed schema validation. |
| 500 | Internal Server Error | Unexpected error during instance provisioning. |
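A minimal client-side sketch of this call in Python. The URL, required fields, and example values come from this page; the `Bearer` token header is inferred from the 401 description above, and `build_create_instance_request`, `dev_123`, and `TOKEN` are illustrative names, not part of the API.

```python
import json

API_BASE = "https://api.oneinfer.ai/v1"

# Required body fields and their expected types, per the Request Body table.
REQUIRED_FIELDS = {
    "provider_name": str,
    "instance_name": str,
    "gpu_id": str,
    "gpu_num": (int, float),
    "disk_size": (int, float),
    "image_url": str,
    "region": str,
}


def build_create_instance_request(developer_id, token, body):
    """Validate the body client-side and assemble the POST request parts.

    Catching a missing or mistyped field locally avoids a round trip that
    would end in a 400 or 422 from the server.
    """
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in body:
            raise ValueError(f"missing required field: {field}")
        if not isinstance(body[field], expected_type):
            raise TypeError(f"field {field!r} has the wrong type")

    url = f"{API_BASE}/developer/{developer_id}/create-instance"
    headers = {
        # Assumption: Bearer auth, inferred from the 401 description.
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body)


url, headers, payload = build_create_instance_request(
    "dev_123",  # placeholder :developerId
    "TOKEN",    # placeholder API token
    {
        "provider_name": "novita",
        "instance_name": "my-gpu-instance",
        "gpu_id": "nvidia_a100_80gb",
        "gpu_num": 1,
        "disk_size": 100,
        "image_url": "pytorch/pytorch:latest",
        "region": "us-west-1",
    },
)
```

The returned parts can then be sent with any HTTP client, e.g. `requests.post(url, headers=headers, data=payload)`, and the response checked against the status codes in the table above.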