TensorFusion Documentation

WorkloadProfile

WorkloadProfile is the Schema for the workloadprofiles API.

Resource Information

API Version: tensor-fusion.ai/v1
Kind: WorkloadProfile
Scope: Namespaced
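
Based on the resource information above, a minimal WorkloadProfile manifest looks like the following (the metadata name and namespace are illustrative, not prescribed by the API):

```yaml
apiVersion: tensor-fusion.ai/v1
kind: WorkloadProfile
metadata:
  name: example-profile   # illustrative name
  namespace: default      # the resource is Namespaced, so a namespace applies
```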

Spec

WorkloadProfileSpec defines the desired state of WorkloadProfile.

autoScalingConfig (object): Overrides the Pool's schedulingConfig. This field cannot be fully expressed via annotations; to enable auto-scaling through annotations, set the tensor-fusion.ai/auto-resources annotation instead.
gpuCount (integer<int32>): Number of GPUs used by the workload. Defaults to 1.
gpuModel (string): The required GPU model (e.g., "A100", "H100").
isLocalGPU (boolean): Schedules the workload onto the same GPU server that runs the vGPU worker for best performance. Defaults to false.
nodeAffinity (object): Node affinity requirements for the workload.
poolName (string)
qos (string): Quality-of-service level for the client. Allowed values: low, medium, high, critical.
replicas (integer<int32>): If not set, the replica count is determined dynamically from pending Pods. If isLocalGPU is true, replicas must be dynamic and this field is ignored.
resources (object)
sidecarWorker (boolean): When true, the worker runs in sidecar mode, which is always local GPU mode and hard-isolated via shared memory. Defaults to false, meaning the workload's embedded worker runs in the same process and is soft-isolated.
workerPodTemplate (object): Template for the worker pod. Only takes effect in remote vGPU mode.
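
Putting the spec fields above together, a sketch of a WorkloadProfile using only the documented fields might look like this (all values are illustrative; the pool name in particular is an assumption, not a default):

```yaml
apiVersion: tensor-fusion.ai/v1
kind: WorkloadProfile
metadata:
  name: inference-profile    # illustrative
spec:
  poolName: shared-pool      # assumed pool name
  gpuCount: 1                # defaults to 1
  gpuModel: "A100"           # e.g. "A100", "H100"
  qos: medium                # one of: low, medium, high, critical
  replicas: 2                # ignored when isLocalGPU is true
  isLocalGPU: false          # false = remote vGPU mode
  sidecarWorker: false       # false = embedded worker in same process, soft-isolated
```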

Status

WorkloadProfileStatus defines the observed state of WorkloadProfile.
