Interface: InferenceConfig
Defined in: src/api/client.ts:167
Inference configuration settings.
Properties
max_memory_per_model
max_memory_per_model: number
Defined in: src/api/client.ts:168
offload_threshold
offload_threshold: number
Defined in: src/api/client.ts:169
warmup_on_startup
warmup_on_startup: boolean
Defined in: src/api/client.ts:170
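Example

A minimal sketch of constructing an InferenceConfig value. The interface is redeclared here so the snippet is self-contained; the concrete values are hypothetical, and the units (e.g. whether max_memory_per_model is in megabytes, or whether offload_threshold is a fraction) are assumptions not stated in the source.

```typescript
// Mirrors the InferenceConfig interface documented above.
interface InferenceConfig {
  max_memory_per_model: number;
  offload_threshold: number;
  warmup_on_startup: boolean;
}

// Hypothetical values for illustration only; units are an assumption.
const config: InferenceConfig = {
  max_memory_per_model: 4096, // e.g. a memory budget per loaded model
  offload_threshold: 0.8,     // e.g. a fractional usage threshold
  warmup_on_startup: true,    // run model warmup when the service starts
};

console.log(config.warmup_on_startup);
```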