Module: torch_inference_workflow
Torch Inference Workflow
A class for loading & running inference on Torch models.
Models can be loaded in two ways:

- Preloading: The model is loaded in the `setup()` method if `model_id` is provided at class instantiation.
- On-demand: The model is loaded following an inference request. This happens if `model_id` is provided with the input (see the optional field in the `TorchInferenceInput` class) and the model is neither preloaded nor cached.
Loaded models are cached in-memory using an LRU cache. The cache size can be configured using the `TORCH_MODEL_LRU_CACHE_SIZE` environment variable.
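For example, a minimal sketch, assuming the variable is read when the workflow module is imported (the value `4` is illustrative):

```python
import os

# Illustrative: allow up to 4 models in the cache. Set this before the
# workflow module is imported, since the cache size is typically read
# at import time.
os.environ["TORCH_MODEL_LRU_CACHE_SIZE"] = "4"
```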
Additional Installations
Since this workflow uses some additional libraries, you'll need to install `infernet-ml[torch_inference]` (e.g. `pip install "infernet-ml[torch_inference]"`). Alternatively, you can install those packages directly. The optional dependencies `[torch_inference]` are provided for your convenience.
Example
```python
import torch

from infernet_ml.utils.codec.vector import RitualVector
from infernet_ml.workflows.inference.torch_inference_workflow import (
    TorchInferenceInput,
    TorchInferenceWorkflow,
)


def main():
    # Instantiate the workflow
    workflow = TorchInferenceWorkflow()

    # Setup the workflow
    workflow.setup()

    # Define the input
    input_data = TorchInferenceInput(
        model_id="huggingface/Ritual-Net/california-housing:california_housing.torch",
        input=RitualVector.from_tensor(
            tensor=torch.tensor(
                [[8.3252, 41.0, 6.984127, 1.02381, 322.0, 2.555556, 37.88, -122.23]],
                dtype=torch.float64,
            ),
        ),
    )

    # Run the model
    result = workflow.inference(input_data)

    # Print the result
    print(f"result: {result}")


if __name__ == "__main__":
    main()
```
TorchInferenceInput
Bases: BaseModel
Input data for Torch inference workflows. If `model_id` is provided with the input, the model is loaded on-demand. Otherwise, if the workflow class is instantiated with a `model_id`, the model is preloaded in the `setup()` method.
Input Format
Input is a `RitualVector`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `RitualVector` | Input tensor | *required* |
| `model_id` | `Optional[MlModelId \| str]` | Model to be loaded at instantiation | `None` |
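As a minimal sketch: since `model_id` defaults to `None`, it can be omitted from the input when the workflow has already preloaded a model (the tensor values here are illustrative):

```python
import torch

from infernet_ml.utils.codec.vector import RitualVector
from infernet_ml.workflows.inference.torch_inference_workflow import TorchInferenceInput

# The workflow falls back to its preloaded (or cached) model when the
# input omits model_id.
input_data = TorchInferenceInput(
    input=RitualVector.from_tensor(
        tensor=torch.tensor([[1.0, 2.0, 3.0]], dtype=torch.float64),
    ),
)
```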
TorchInferenceWorkflow
Bases: BaseInferenceWorkflow
Inference workflow for Torch-based models. By default, models are loaded with standard torch pickling (i.e. `torch.load()`).
__init__(model_id=None, use_jit=False, *args, **kwargs)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `Optional[MlModelId \| str]` | Model to be loaded | `None` |
| `use_jit` | `bool` | Whether to use JIT for loading the model | `False` |
| `*args` | `Any` | Additional arguments | `()` |
| `**kwargs` | `Any` | Additional keyword arguments | `{}` |
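For instance, a sketch of the preloading path described above (the model ID reuses the example from earlier on this page):

```python
from infernet_ml.workflows.inference.torch_inference_workflow import (
    TorchInferenceWorkflow,
)

# Passing model_id here means setup() will preload the model,
# rather than loading it on the first inference request.
workflow = TorchInferenceWorkflow(
    model_id="huggingface/Ritual-Net/california-housing:california_housing.torch",
    use_jit=False,
)
workflow.setup()
```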
do_run_model(inference_input)
Runs the model on the input data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `inference_input` | `TorchInferenceInput` | Input data for the inference workflow | *required* |

Returns:

| Name | Type | Description |
|---|---|---|
| `TorchInferenceResult` | `TorchInferenceResult` | Output of the model |
do_setup()
If `model_id` is provided, preloads the model & starts the session. Otherwise, does nothing; the model is loaded with an inference request.
do_stream(preprocessed_input)
Streaming inference is not supported for Torch models.
inference(input_data, log_preprocessed_data=True)
Inference method for the torch workflow. Overridden to add type hints.
load_torch_model(model_id, use_jit)
cached
Loads a torch model from the given source. Uses `torch.jit.load()` if `use_jit` is set, otherwise uses `torch.load()`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_id` | `MlModel` | Model to be loaded | *required* |
| `use_jit` | `bool` | Whether to use JIT for loading the model | *required* |

Returns:

| Type | Description |
|---|---|
| `Module` | `torch.nn.Module`: Loaded model |
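For intuition, a rough sketch of the load-and-cache behavior described above. This is not the library's actual implementation: the real function resolves `model_id` to a file and sizes the cache from `TORCH_MODEL_LRU_CACHE_SIZE`; the helper name, path argument, and cache size here are all illustrative.

```python
from functools import lru_cache

import torch


@lru_cache(maxsize=4)  # illustrative size; the workflow reads it from the environment
def _load_model_sketch(path: str, use_jit: bool) -> torch.nn.Module:
    # TorchScript archives require torch.jit.load(); plain pickled
    # modules are loaded with torch.load().
    model = torch.jit.load(path) if use_jit else torch.load(path)
    model.eval()  # inference mode (assumption: the workflow never trains)
    return model
```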