# serverless.completions

## complete

Generate text based on the given text prompt.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.serverless.completions.complete(
        serverless_completions_body={
            "model": "meta-llama-3.1-8b-instruct",
            "prompt": "Say this is a test!",
            "stream": False,
        }
    )

    # Handle response
    print(res)
```

### Parameters

| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| `serverless_completions_body` | models.ServerlessCompletionsBody | ✔️ | N/A | { "model": "meta-llama-3.1-8b-instruct", "prompt": "Say this is a test!" } |
| `x_friendli_team` | OptionalNullable[str] | ➖ | ID of the team to run requests as (optional). | |
| `retries` | Optional[utils.RetryConfig] | ➖ | Configuration to override the default retry behavior of the client. | |
### Response

**models.ContainerCompletionsSuccess**

### Errors

| Error Type | Status Code | Content Type |
|---|---|---|
| models.SDKError | 4XX, 5XX | */* |
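The `retries` parameter overrides the client's default retry behavior. Purely as an illustration of the exponential-backoff-with-jitter idea such a configuration typically controls, here is a plain-Python sketch; it is not the SDK's actual `utils.RetryConfig` implementation, and the helper name is hypothetical:

```python
import random
import time


def call_with_retries(fn, max_attempts=3, initial_delay=0.01, factor=2.0):
    """Retry fn() with exponential backoff and jitter on any exception."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay * (0.5 + random.random() / 2))  # jittered wait
            delay *= factor  # grow the delay between attempts


# Example: a flaky operation that succeeds on the third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # succeeds after two retries
```

A real retry policy would typically retry only transient failures (connection errors, 5XX, 429) rather than every exception, which is the kind of distinction a retry configuration object lets you express.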
## stream

Generate text based on the given text prompt.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.serverless.completions.stream(
        serverless_completions_stream_body={
            "model": "meta-llama-3.1-8b-instruct",
            "prompt": "Say this is a test!",
            "stream": True,
        }
    )

    with res as event_stream:
        for event in event_stream:
            # Handle each streamed event
            print(event, flush=True)
```

### Parameters

| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| `serverless_completions_stream_body` | models.ServerlessCompletionsStreamBody | ✔️ | N/A | { "model": "meta-llama-3.1-8b-instruct", "prompt": "Say this is a test!" } |
| `x_friendli_team` | OptionalNullable[str] | ➖ | ID of the team to run requests as (optional). | |
| `retries` | Optional[utils.RetryConfig] | ➖ | Configuration to override the default retry behavior of the client. | |
### Errors

| Error Type | Status Code | Content Type |
|---|---|---|
| models.SDKError | 4XX, 5XX | */* |
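Each streamed event carries an incremental piece of the generated text, and the SDK decodes the wire format for you. Purely to illustrate the shape of the data, here is a stdlib-only sketch that accumulates text from OpenAI-style server-sent-event lines; the exact field names (`choices`, `text`) and the `data: [DONE]` sentinel are assumptions for illustration, not the SDK's documented schema:

```python
import json


def accumulate_sse_text(lines):
    """Collect completion text from 'data: {...}' SSE lines, stopping at [DONE]."""
    chunks = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments and keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        event = json.loads(payload)
        chunks.append(event["choices"][0]["text"])
    return "".join(chunks)


# Hypothetical raw event lines as they might arrive over the wire.
raw = [
    'data: {"choices": [{"text": "Hello"}]}',
    'data: {"choices": [{"text": ", world"}]}',
    "data: [DONE]",
]
print(accumulate_sse_text(raw))  # Hello, world
```

In practice you would iterate the SDK's `event_stream` as shown above and append each event's text, rather than parsing raw lines yourself.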