Module: `tgi_client_inference_workflow`

# TGI Client Inference Workflow

A class that uses the Hugging Face Text Generation Inference (TGI) client to run LLM inference on any TGI-compliant inference server.
## Additional Installations

Since this workflow uses some additional libraries, you'll need to install `infernet-ml[tgi_inference]` (e.g. `pip install "infernet-ml[tgi_inference]"`). Alternatively, you can install those packages directly; the optional dependencies `[tgi_inference]` are provided for your convenience.
## Example Usage

In the example below, we use an API key from Hugging Face to access the Mixtral-8x7B-Instruct-v0.1 model. You can obtain an API key by signing up on the Hugging Face website.
```python
import os

from infernet_ml.workflows.inference.tgi_client_inference_workflow import (
    TGIClientInferenceWorkflow,
    TgiInferenceRequest,
)


def main():
    server_url = (
        "https://api-inference.huggingface.co/models/"
        "mistralai/Mixtral-8x7B-Instruct-v0.1"
    )

    # Instantiate the workflow
    workflow: TGIClientInferenceWorkflow = TGIClientInferenceWorkflow(
        server_url,
        timeout=10,
        headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    )

    # Set up the workflow
    workflow.setup()

    # Run the inference
    res = workflow.inference(
        TgiInferenceRequest(text="Is the sky blue during a clear day?")
    )
    print(f"response: {res}")

    # Stream the inference
    collected_res = ""
    for r in workflow.stream(
        TgiInferenceRequest(text="Is the sky blue during a clear day?")
    ):
        collected_res += r.token.text
    print(f"streaming: {collected_res}")


if __name__ == "__main__":
    main()
```
Outputs:

response: Yes, the sky is blue during a clear day.

streaming: Yes, the sky is blue during a clear day.
## More Information

For more info, check out the reference docs below.

### TGIClientInferenceWorkflow

Bases: `BaseInferenceWorkflow`

Inference workflow for requesting LLM inference on TGI-compliant inference servers.

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
#### `__init__(server_url, timeout=30, headers=None, cookies=None, retry_params=None, **inference_params)`

Constructor. Any named arguments are passed to the LLM during inference.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `server_url` | `str` | URL of the inference server | required |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
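To illustrate how the constructor arguments fit together, here is a minimal construction sketch. The server URL is a hypothetical local TGI endpoint, and `max_new_tokens` and `temperature` are assumptions standing in for whatever TGI generation parameters you want forwarded to the LLM at inference time:

```python
from infernet_ml.workflows.inference.tgi_client_inference_workflow import (
    TGIClientInferenceWorkflow,
)

# Hypothetical local TGI endpoint; substitute your own server URL.
workflow = TGIClientInferenceWorkflow(
    "http://localhost:8080",
    timeout=30,
    headers=None,
    cookies=None,
    retry_params=None,  # keep the default retry behavior
    # Named arguments below are forwarded to the LLM at inference time
    # (assumed here to be standard TGI generation parameters):
    max_new_tokens=128,
    temperature=0.7,
)
workflow.setup()
```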
#### `do_generate_proof()`
#### `do_postprocessing(input_data, gen_text)`

Implement any postprocessing here. For example, you may need to return additional data. By default, returns a dictionary with a single `output` key.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `TgiInferenceRequest` | user input | required |
| `gen_text` | `str` | generated text from the model | required |

Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `str` | transformed LLM output |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
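If you do need additional data in the response, one option is to subclass and override this hook. The sketch below is a minimal illustration; the subclass name and the `prompt`/`output` keys in the returned dictionary are our own choices for the example, not a fixed schema:

```python
from typing import Any

from infernet_ml.workflows.inference.tgi_client_inference_workflow import (
    TGIClientInferenceWorkflow,
    TgiInferenceRequest,
)


class EchoingTGIWorkflow(TGIClientInferenceWorkflow):
    """Hypothetical subclass that returns the prompt alongside the output."""

    def do_postprocessing(
        self, input_data: TgiInferenceRequest, gen_text: str
    ) -> Any:
        # Return additional data next to the generated text.
        return {"prompt": input_data.text, "output": gen_text}
```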
#### `do_preprocessing(input_data)`

Implement any preprocessing of the raw input here. For example, you may want to append additional context. By default, returns the value associated with the `text` key of the input.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `TgiInferenceRequest` | user input | required |

Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `str` | transformed user input prompt |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
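Similarly, `do_preprocessing` is a natural hook for prompt templating. Below is a minimal sketch, assuming the `TgiInferenceRequest.text` field shown in the example above; the instruction wrapper is purely illustrative:

```python
from infernet_ml.workflows.inference.tgi_client_inference_workflow import (
    TGIClientInferenceWorkflow,
    TgiInferenceRequest,
)


class InstructTGIWorkflow(TGIClientInferenceWorkflow):
    """Hypothetical subclass that appends extra context to the prompt."""

    def do_preprocessing(self, input_data: TgiInferenceRequest) -> str:
        # Wrap the raw user input in an illustrative instruction template.
        return f"Answer concisely and factually.\n\nQuestion: {input_data.text}"
```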
#### `do_run_model(prompt)`

Run the model with the given prompt.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `str` | user prompt | required |

Returns:

| Name | Type | Description |
|---|---|---|
| `Any` | `str` | result of inference |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
#### `do_setup()`

No specific setup is needed.
#### `do_stream(_input)`

Stream results from the model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `_input` | `str` | user input | required |

Returns:

| Type | Description |
|---|---|
| `Iterator[StreamResponse]` | stream of results |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
#### `generate_inference(preprocessed_data)`

Uses the TGI client to generate inference.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `preprocessed_data` | `str` | input to TGI | required |

Returns:

| Name | Type | Description |
|---|---|---|
| `str` | `str` | output of TGI inference |

Source code in `src/infernet_ml/workflows/inference/tgi_client_inference_workflow.py`
#### `stream(input_data)`

Stream results from the model.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input_data` | `TgiInferenceRequest` | user input | required |

Returns:

| Type | Description |
|---|---|
| `Iterator[StreamResponse]` | stream of results |
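For reference, consuming the stream looks like the snippet below, mirroring the example at the top of this page; the server URL is hypothetical, and the `StreamResponse` objects are assumed to expose the generated text at `token.text`, as shown there:

```python
from infernet_ml.workflows.inference.tgi_client_inference_workflow import (
    TGIClientInferenceWorkflow,
    TgiInferenceRequest,
)

# Hypothetical local TGI endpoint; substitute your own server URL.
workflow = TGIClientInferenceWorkflow("http://localhost:8080")
workflow.setup()

# Print tokens as they arrive instead of waiting for the full response.
for r in workflow.stream(TgiInferenceRequest(text="Is the sky blue?")):
    print(r.token.text, end="", flush=True)
print()
```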