POST /api/chat

Chat Completion

Generate a chat completion using the local Falcon-H1 model.

Request Body

messages (array, required)

Array of message objects, each with a `role` and `content` field.

max_tokens (number)

Maximum number of tokens to generate.

temperature (number)

Sampling temperature, from 0 to 2.

stream (boolean)

Set to `true` to enable a streaming response.
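Putting the parameters together, a complete request body might look like this (the specific values are illustrative, not defaults from the API):

```json
{
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "max_tokens": 128,
  "temperature": 0.7,
  "stream": false
}
```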


Example Response

```json
{
  "id": "chat-1",
  "model": "falcon-h1-90m",
  "content": "Hello! How can I help you?",
  "tokens": 25,
  "finish_reason": "stop"
}
```

Code Examples

cURL

```shell
curl -X POST "/api/chat" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```
JavaScript

```javascript
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello' }]
  })
});
const data = await response.json();
```

Python

```python
import requests

response = requests.post(
    '/api/chat',
    headers={'Content-Type': 'application/json'},
    json={'messages': [{'role': 'user', 'content': 'Hello'}]},
)

print(response.json())
```
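The parameter constraints documented above (required `messages`, `temperature` in the 0-2 range) can also be validated client-side before sending a request. A minimal sketch follows; the helper name `build_chat_payload` is our own, not part of the API:

```python
def build_chat_payload(messages, max_tokens=None, temperature=None, stream=False):
    """Build and validate a request body for POST /api/chat.

    `messages` must be a non-empty list of dicts carrying at least
    "role" and "content" keys; `temperature`, when given, must lie in
    the documented 0-2 range. Optional fields are omitted when unset.
    """
    if not messages or not all({"role", "content"} <= set(m) for m in messages):
        raise ValueError("messages must be a non-empty list of role/content objects")
    if temperature is not None and not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")

    payload = {"messages": messages, "stream": stream}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    if temperature is not None:
        payload["temperature"] = temperature
    return payload
```

The resulting dict can be passed directly as the `json=` argument to `requests.post` in the Python example above.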