API
Integrate your flow as an API.
Once your flow is ready for production, you can deploy it as an API. Click Deploy to get a production-ready version of your flow.
Get your REST API
To obtain your API, go to the Export view and select API.
There you will find code snippets to call your flow via a POST request in Python, JavaScript, and cURL.
import requests

# Endpoint of your deployed flow (replace the placeholders with your own IDs)
API_URL = "https://stack-inference.com/inference/v0/run/<YOUR_ORG_ID>/<YOUR_FLOW_ID>"

headers = {
    'Authorization': 'Bearer YOUR_PUBLIC_KEY',
    'Content-Type': 'application/json',
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# You can add all your inputs here:
body = {'in-0': 'text for in-0', 'audio2text-0': 'base64 of audio to send',
        'string-0': 'long string to send', 'url-0': 'url of website to load'}

output = query(body)
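For example, to send audio through the audio2text-0 input, the file has to be base64-encoded first. A minimal sketch, assuming the input expects the raw file bytes as a standard base64 string (the file name and prompt text here are placeholders):

import base64

# Read a local audio file and base64-encode its raw bytes.
# 'recording.mp3' is a placeholder; use your own file.
with open('recording.mp3', 'rb') as f:
    audio_b64 = base64.b64encode(f.read()).decode('utf-8')

output = query({'in-0': 'Transcribe this audio.', 'audio2text-0': audio_b64})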
Some quick facts:
This request receives all the inputs to your flow in the body.
This request returns the values of all the flow's outputs as JSON (see the sketch after this list).
The API auto-scales to handle large volumes of requests.
Stack protects this API with your organization's token.
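Because the response is a JSON object keyed by output node IDs, it can help to fail loudly on HTTP errors before decoding. A minimal sketch building on the snippet above; the out-0 key is a hypothetical output node ID, so replace it with the keys from your own flow:

def query_safe(payload):
    # Same call as query() above, but raise on HTTP errors and set a timeout
    # instead of silently decoding an error page as JSON.
    response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()

result = query_safe(body)
# 'out-0' is a hypothetical output node ID; use your flow's actual output keys.
print(result.get('out-0'))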