Technical FAQ
Questions and answers about configuration, performance, and deployment.
What operating systems and drivers are supported?
We support recent Linux distributions with NVIDIA driver versions that match your GPU. On instance creation, you can select from several preset images (e.g. Ubuntu + latest CUDA drivers), or use a custom Docker image.
If you supply your own image, ensure the appropriate NVIDIA driver and CUDA version are installed so the GPU functions properly.
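As a quick sanity check for a custom image, you can verify that the driver's `nvidia-smi` utility is present and runs. This is a minimal sketch using only the standard library; whether the check passes depends entirely on what is installed in your image and on the host.

```python
import shutil
import subprocess

def gpu_visible() -> bool:
    """Return True if nvidia-smi is on PATH and exits cleanly,
    i.e. the NVIDIA driver utilities are installed and a GPU is reachable."""
    tool = shutil.which("nvidia-smi")
    if tool is None:
        return False  # driver utilities not installed in this image
    result = subprocess.run([tool], capture_output=True)
    return result.returncode == 0
```

Running this at container start lets you fail fast with a clear error instead of hitting an opaque CUDA initialization failure later.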
Can I use custom Docker containers or must I use the default images?
You can absolutely use custom Docker containers. oneinfer.ai allows users to supply their own Dockerfile or image — so long as the necessary drivers and libraries (CUDA, cuDNN, etc.) are included, your workloads should work the same as the default templates.
This flexibility lets you bring your own environment, dependencies, and configurations without being locked to preset templates.
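One common approach is to start a custom image from an official NVIDIA CUDA base image so the CUDA libraries are already in place. The sketch below is illustrative: the tag, `requirements.txt`, and `main.py` are placeholders for your own setup, and the CUDA tag you pick should match the driver on the host.

```dockerfile
# Start from an official CUDA runtime image (tag is illustrative;
# choose one compatible with the host driver).
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

# Install your own dependencies on top.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip3 install -r requirements.txt

COPY . /app
WORKDIR /app
CMD ["python3", "main.py"]
```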
How is storage handled? What volume options exist?
Each instance provides a base disk — you can configure disk size when deploying. Additionally, you may attach extra volumes or mount remote storage (if supported by the provider).
Remember: if you delete the instance and you did not persist data somewhere external, data on the attached disk will be lost.
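A simple way to persist results is to copy them to an attached volume or other external mount before deleting the instance. A minimal sketch (the directory names are hypothetical):

```python
import shutil
from pathlib import Path

def persist(results_dir: str, backup_dir: str) -> Path:
    """Copy results_dir into backup_dir (e.g. an attached persistent
    volume) so the data survives instance deletion."""
    src = Path(results_dir)
    dest = Path(backup_dir) / src.name
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest
```

For example, `persist("/workspace/results", "/mnt/persistent")` would leave a copy under `/mnt/persistent/results` that outlives the instance's local disk.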
What about performance — how do I optimize GPU and disk throughput?
- Choose a GPU type with sufficient VRAM and compute power.
- Use SSD-backed storage when possible (many providers offer NVMe/SSD disks).
- Avoid shared-disk I/O bottlenecks by using local disk or dedicated volumes.
- For heavy I/O workloads (data loading, disk I/O), consider splitting data preparation and GPU tasks.
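The last point above — overlapping data preparation with GPU work — can be sketched with a background prefetch thread. This is a generic illustration, not a specific library's API; `load_batch` stands in for real disk I/O.

```python
import queue
import threading

def prefetch(load_batch, num_batches, buffer_size=4):
    """Run load_batch(i) in a background thread so the consumer
    (e.g. the GPU step) does not wait on disk I/O for buffered batches."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))   # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is sentinel:
            break
        yield batch

# Example: a loader standing in for reading a batch from disk.
batches = list(prefetch(lambda i: [i] * 2, num_batches=3))
```

Frameworks ship their own versions of this (e.g. multi-worker data loaders); the point is simply that I/O and compute can proceed concurrently.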
What happens if an instance crashes or is force-stopped — is data preserved?
If the instance is terminated or force-stopped and no persistent external storage (snapshot / S3 / mounted volume) was configured, then the local disk and data are lost. To preserve important data, always back up to external storage before shutting down.
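To bound how much work a crash can cost, a common pattern is to checkpoint periodically so that a restart resumes from the last saved step rather than from zero. A toy sketch (the loop body and checkpoint path are placeholders for real work):

```python
import json
from pathlib import Path

def run(steps, ckpt_path, every=10):
    """Toy loop that saves its state every `every` steps; when restarted,
    it resumes after the last checkpointed step instead of step 0.
    Returns the step it started from."""
    path = Path(ckpt_path)
    start = json.loads(path.read_text())["step"] + 1 if path.exists() else 0
    for step in range(start, steps):
        # ... real work (training, processing) would happen here ...
        if step % every == 0 or step == steps - 1:
            path.write_text(json.dumps({"step": step}))
    return start
```

With `every=10`, a crash costs at most ten steps of repeated work — provided the checkpoint file itself lives on persistent storage, per the answer above.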
Is remote access to the instance and its GPU supported?
Yes — you can connect via SSH, Jupyter, web-terminal, or any remote-capable interface you install on the instance. As long as the network ports and security settings are configured properly, remote usage works just like on a local machine.
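Before connecting, you can confirm that a service port (e.g. SSH on 22, or Jupyter's default 8888) is actually reachable. A small sketch using only the standard library; the host and ports are examples:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, i.e. the
    service is listening and not blocked by a firewall rule."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If `port_open("my-instance.example.com", 22)` returns False, check the instance's firewall / security-group settings before debugging the service itself.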