Chainlit Integration¶
Hayhooks provides optional integration with Chainlit, allowing you to embed a chat UI directly within your Hayhooks server. This provides a zero-configuration frontend for interacting with your deployed Haystack pipelines.
Overview¶
The Chainlit integration offers:
- Single Deployment: Run both backend and frontend in one process/container
- Zero Configuration: Works out-of-the-box with Hayhooks' OpenAI-compatible endpoints
- Streaming Support: Real-time streaming responses in the chat interface
- Pipeline Selection: Automatically discovers and lists deployed pipelines
Installation¶
Install Hayhooks with the chainlit extra:
```bash
pip install "hayhooks[chainlit]"
```
Quick Start¶
Using CLI¶
The simplest way to enable the Chainlit UI is via the --with-chainlit flag:
```bash
hayhooks run --with-chainlit
```
This starts Hayhooks with the embedded Chainlit UI available at http://localhost:1416/chat.
Custom UI Path¶
You can customize the URL path where the UI is mounted:
```bash
hayhooks run --with-chainlit --chainlit-path /my-chat
```
Now the UI will be available at http://localhost:1416/my-chat.
Using Environment Variables¶
You can also configure the UI via environment variables:
```bash
export HAYHOOKS_CHAINLIT_ENABLED=true
export HAYHOOKS_CHAINLIT_PATH=/chat
hayhooks run
```
Configuration¶
Environment Variables¶
| Variable | Description | Default |
|---|---|---|
| `HAYHOOKS_CHAINLIT_ENABLED` | Enable/disable the Chainlit UI | `false` |
| `HAYHOOKS_CHAINLIT_PATH` | URL path where the UI is mounted | `/chat` |
| `HAYHOOKS_CHAINLIT_APP` | Custom Chainlit app file path | (uses default) |
Additional Settings¶
| Variable | Description | Default |
|---|---|---|
| `HAYHOOKS_CHAINLIT_DEFAULT_MODEL` | Default pipeline to auto-select | (auto-select if only one) |
| `HAYHOOKS_CHAINLIT_REQUEST_TIMEOUT` | Timeout (seconds) for chat requests | `120.0` |
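For example, to pin the UI to a specific pipeline and allow longer-running chat requests (the values below are illustrative):

```bash
export HAYHOOKS_CHAINLIT_DEFAULT_MODEL=my_chat
export HAYHOOKS_CHAINLIT_REQUEST_TIMEOUT=300
hayhooks run --with-chainlit
```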
How It Works¶
```mermaid
flowchart LR
    subgraph server ["Hayhooks Server"]
        UI["Chainlit UI\n(mounted at /chat)"]
        API["/v1/chat/completions\n(OpenAI-compatible)"]
        Pipelines["Haystack Pipelines\n(with chat support)"]
        UI -- "HTTP streaming" --> API
        API --> Pipelines
    end
    Browser["🌐 Browser"] --> UI
```
The Chainlit UI is mounted as a FastAPI sub-application and communicates with your pipelines through Hayhooks' OpenAI-compatible endpoints. This means:
- Your pipelines must implement `run_chat_completion` or `run_chat_completion_async`
- The UI automatically discovers available pipelines via `/v1/models`
- Streaming is supported out-of-the-box
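Because the UI talks to the standard endpoints, you can exercise the exact path it uses with any OpenAI-compatible client. A minimal sketch with the official `openai` Python package (the pipeline name `my_chat` is illustrative):

```python
from openai import OpenAI

# Point the client at Hayhooks instead of api.openai.com;
# the API key is a placeholder unless you add your own auth layer.
client = OpenAI(base_url="http://localhost:1416/v1", api_key="unused")

print([m.id for m in client.models.list()])  # deployed pipelines, as the UI sees them

stream = client.chat.completions.create(
    model="my_chat",  # illustrative pipeline name
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```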
Example: Complete Setup¶
1. Create a Pipeline Wrapper¶
```python
# pipelines/my_chat/pipeline_wrapper.py
from typing import Generator

from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

from hayhooks import BasePipelineWrapper, streaming_generator


class PipelineWrapper(BasePipelineWrapper):
    def setup(self) -> None:
        self.system_message = ChatMessage.from_system("You are a helpful assistant.")
        chat_prompt_builder = ChatPromptBuilder()
        llm = OpenAIChatGenerator(model="gpt-4o-mini")

        self.pipeline = Pipeline()
        self.pipeline.add_component("chat_prompt_builder", chat_prompt_builder)
        self.pipeline.add_component("llm", llm)
        self.pipeline.connect("chat_prompt_builder.prompt", "llm.messages")

    def run_chat_completion(self, model: str, messages: list[dict], body: dict) -> Generator:
        chat_messages = [self.system_message] + [
            ChatMessage.from_openai_dict_format(msg) for msg in messages
        ]
        return streaming_generator(
            pipeline=self.pipeline,
            pipeline_run_args={"chat_prompt_builder": {"template": chat_messages}},
        )
```
2. Run Hayhooks with UI¶
```bash
hayhooks run --with-chainlit --pipelines-dir ./pipelines
```
3. Open the UI¶
Navigate to http://localhost:1416/chat in your browser. You'll see your deployed pipeline and can start chatting!
Custom Chainlit App¶
You can provide your own Chainlit app for more customization. Point Hayhooks at your app file via an environment variable and start the server as usual:

```bash
export HAYHOOKS_CHAINLIT_APP=/path/to/my_chainlit_app.py
hayhooks run --with-chainlit
```
Example Custom App¶
```python
# my_chainlit_app.py
import json
import os

import chainlit as cl
import httpx

HAYHOOKS_URL = os.getenv("HAYHOOKS_BASE_URL", "http://localhost:1416")


@cl.on_chat_start
async def start():
    await cl.Message(content="Welcome! How can I help you today?").send()


@cl.on_message
async def main(message: cl.Message):
    response_msg = cl.Message(content="")
    await response_msg.send()

    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            f"{HAYHOOKS_URL}/v1/chat/completions",
            json={
                "model": "my_pipeline",  # Your pipeline name
                "messages": [{"role": "user", "content": message.content}],
                "stream": True,
            },
            timeout=120.0,
        ) as response:
            # Each SSE line looks like "data: {...}"; the stream ends with "data: [DONE]".
            async for line in response.aiter_lines():
                if line.startswith("data: ") and line[6:] != "[DONE]":
                    chunk = json.loads(line[6:])
                    # "content" may be present but null in OpenAI-format chunks.
                    content = chunk["choices"][0].get("delta", {}).get("content") or ""
                    if content:
                        await response_msg.stream_token(content)

    await response_msg.update()
```
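The embedded UI builds its pipeline selector from `/v1/models`; a custom app can do the same. A sketch using Chainlit's chat profiles, appended to the app above (the profile wiring is an assumption, not something Hayhooks provides):

```python
@cl.set_chat_profiles
async def chat_profiles():
    # One selectable profile per deployed pipeline, discovered via /v1/models.
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{HAYHOOKS_URL}/v1/models")
        models = resp.json().get("data", [])
    return [
        cl.ChatProfile(name=m["id"], markdown_description=f"Hayhooks pipeline `{m['id']}`")
        for m in models
    ]
```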
Custom Elements¶
Chainlit supports custom elements -- JSX components that pipelines can push to the UI at runtime for rich, interactive widgets (charts, cards, maps, etc.).
The chainlit_weather_agent example includes a WeatherCard element that demonstrates this feature. To use your own elements, point Hayhooks at a directory of .jsx files:
Using CLI¶
```bash
hayhooks run --with-chainlit --chainlit-custom-elements-dir ./my_elements
```
Using Environment Variable¶
```bash
export HAYHOOKS_CHAINLIT_CUSTOM_ELEMENTS_DIR=./my_elements
hayhooks run --with-chainlit
```
At startup, Hayhooks copies every .jsx file from that directory into the Chainlit public/elements/ folder. The file name (minus .jsx) becomes the element name you reference from your pipeline:
```
my_elements/
  StockChart.jsx
  MapView.jsx
```

```python
# In your pipeline wrapper
from hayhooks.chainlit_events import create_custom_element_event

def on_tool_call_end(self, tool_name, arguments, result, error):
    return [create_custom_element_event(name="StockChart", props={"symbol": "AAPL", ...})]
```
JSX Requirements¶
Custom elements must:
- Be written in JSX (not TSX) and export a default component
- Access data via the global `props` object (not function arguments)
- Use Tailwind CSS classes for styling (shadcn + Tailwind environment)

Available imports include `@/components/ui/*` (shadcn), `lucide-react`, `react`, and `recharts`. See the Chainlit custom element docs for the full list.
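Putting those rules together, a minimal hypothetical `StockChart.jsx` (the markup is illustrative, not the element from the weather example):

```jsx
// my_elements/StockChart.jsx -- the file name becomes the element name.
// Data arrives via the global `props` object, not function arguments.
export default function StockChart() {
  return (
    <div className="rounded-lg border p-4">
      <h3 className="text-lg font-semibold">{props.symbol}</h3>
      <p className="text-sm text-muted-foreground">Rendered as a Chainlit custom element</p>
    </div>
  );
}
```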
Tip
If a custom file has the same name as a built-in element, the custom version takes precedence. A warning is logged when this happens.
Branding & Theme¶
The default Chainlit UI ships with Hayhooks logos, favicons, and a theme.json. To override any of these, place your own files in a public/ directory next to your custom Chainlit app:
```
my_chainlit/
  my_app.py          # Custom Chainlit app
  public/
    logo_dark.png    # Overrides the default dark-mode logo
    theme.json       # Overrides the default theme
```

```bash
export HAYHOOKS_CHAINLIT_APP=my_chainlit/my_app.py
hayhooks run --with-chainlit
```
When HAYHOOKS_CHAINLIT_APP is set, Hayhooks uses the parent directory of that file as the Chainlit app root. Built-in assets (logos, favicons, theme) are automatically seeded into the public/ directory, so you only need to provide the files you want to override -- anything you don't provide keeps the default.
The built-in public assets are:
| File | Description |
|---|---|
| `logo_dark.png` | Logo displayed in dark mode |
| `logo_light.png` | Logo displayed in light mode |
| `favicon.ico` | Browser tab icon (ICO) |
| `favicon.png` | Browser tab icon (PNG) |
| `favicon.svg` | Browser tab icon (SVG) |
| `apple-touch-icon.png` | iOS home screen icon |
| `theme.json` | Chainlit theme (colors, fonts, etc.) |
For theme configuration options, see the Chainlit theme documentation.
Why Embedded Chainlit¶
The embedded Chainlit UI is designed for simplicity and fast iteration:
- Single process deployment: No extra containers or services to manage
- Minimal setup: Just `pip install "hayhooks[chainlit]"` and `--with-chainlit`
- Zero configuration: Automatically discovers deployed pipelines via `/v1/models`
- Streaming out of the box: Real-time token streaming with tool call visualization
For production deployments requiring persistent conversation history, multi-user authentication, or advanced features, consider pairing Hayhooks with an external frontend like Open WebUI, which connects through the same OpenAI-compatible endpoints.
Troubleshooting¶
UI Not Loading¶
- Ensure Chainlit is installed: `pip install "hayhooks[chainlit]"`
- Check that the `--with-chainlit` flag is set or `HAYHOOKS_CHAINLIT_ENABLED=true`
- Verify the UI path in the logs (default: `/chat`)
No Pipelines Available¶
The UI requires at least one deployed pipeline with chat completion support:
- Ensure your pipeline implements `run_chat_completion` or `run_chat_completion_async`
- Check that pipelines are deployed: `curl http://localhost:1416/v1/models`
Streaming Not Working¶
- Ensure your pipeline returns a `Generator` or `AsyncGenerator`
- Use the `streaming_generator` or `async_streaming_generator` helpers
- Check the browser console for WebSocket errors
Assets Not Loading Behind a Reverse Proxy¶
When Hayhooks is served behind a reverse proxy with a path prefix (root_path), Chainlit assets (logos, theme, favicon) may fail to load because their URLs don't include the prefix. If you experience this:
- Verify that `HAYHOOKS_ROOT_PATH` is set correctly
- Check that your reverse proxy is forwarding requests to the correct paths
- As a workaround, consider serving static assets directly from the reverse proxy
Production Notes¶
Multiple Workers and Sticky Sessions¶
Chainlit uses WebSockets via socket.io, which keeps session state in memory. Running with `--workers > 1` requires sticky sessions (session affinity) on your load balancer so that all requests from the same client hit the same worker.
Even with sticky sessions, some load balancers struggle to consistently route WebSocket upgrades. If you experience intermittent disconnects, set `transports = ["websocket"]` in your `.chainlit/config.toml` to skip the HTTP long-polling fallback:
```toml
[project]
# Use WebSocket transport only (recommended behind load balancers with sticky sessions)
transports = ["websocket"]
```
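As an illustration, a minimal hypothetical nginx setup that combines session affinity with the WebSocket upgrade headers socket.io needs (adapt to your own load balancer):

```nginx
upstream hayhooks {
    ip_hash;  # session affinity: the same client IP always hits the same instance
    server 127.0.0.1:1416;
    server 127.0.0.1:1417;
}

server {
    listen 80;
    location / {
        proxy_pass http://hayhooks;
        # Required for WebSocket upgrades (socket.io)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```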
Authentication¶
The embedded Chainlit UI is public by default -- anyone with network access to the URL can use it. To restrict access:
- Set the `CHAINLIT_AUTH_SECRET` environment variable (generate one with `chainlit create-secret`)
- Implement an authentication callback in a custom Chainlit app via `HAYHOOKS_CHAINLIT_APP` (see the sketch below)
- Or place Hayhooks behind a reverse proxy with its own authentication layer
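A minimal sketch of the second option, using Chainlit's password auth callback (the credential check is a placeholder; back it with a real user store):

```python
# my_chainlit_app.py -- requires CHAINLIT_AUTH_SECRET to be set
import os
from typing import Optional

import chainlit as cl


@cl.password_auth_callback
def auth_callback(username: str, password: str) -> Optional[cl.User]:
    # Placeholder check; replace with a lookup against your user store.
    if username == "admin" and password == os.environ.get("ADMIN_PASSWORD"):
        return cl.User(identifier="admin")
    return None
```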
Conversation Persistence¶
By default, conversation history lives only in the server's memory and is lost on page refresh or server restart. Chainlit supports pluggable data layers for persistent storage:
- SQLite (file-based, zero infrastructure) or PostgreSQL via the community SQLAlchemy data layer
- DynamoDB for cloud-native deployments
- Custom implementations via Chainlit's `BaseDataLayer` API
To configure a data layer, provide a custom Chainlit app (HAYHOOKS_CHAINLIT_APP) that registers the data layer at startup.
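For example, a sketch that registers the community SQLAlchemy data layer in a custom app (the import path and connection string are assumptions; check the Chainlit data layer docs for your version):

```python
# my_chainlit_app.py
import chainlit as cl
from chainlit.data.sql_alchemy import SQLAlchemyDataLayer


@cl.data_layer
def get_data_layer():
    # File-based SQLite; swap in a PostgreSQL URL for production.
    return SQLAlchemyDataLayer(conninfo="sqlite+aiosqlite:///./chainlit.db")
```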
Note
Chainlit sessions are server-side only. Fully client-side storage (cookies, localStorage) is not supported by Chainlit's architecture.
Endpoint Ordering¶
`mount_chainlit()` must be called after all other FastAPI routes are registered -- it captures all unmatched URL space. Hayhooks handles this correctly in `create_app()`, but if you add custom middleware or routes, ensure they are registered before the Chainlit mount.
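In plain FastAPI terms, the ordering looks like this -- a sketch, not Hayhooks' actual `create_app()` internals:

```python
from chainlit.utils import mount_chainlit
from fastapi import FastAPI

app = FastAPI()


@app.get("/healthz")
def healthz():
    # Custom routes and middleware must be registered first...
    return {"status": "ok"}


# ...because the Chainlit mount captures all unmatched URL space.
mount_chainlit(app=app, target="my_chainlit_app.py", path="/chat")
```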
Limitations¶
- Single worker without sticky sessions: Use `--workers 1` (the default) when running with `--with-chainlit`, or configure sticky sessions on your load balancer. See Production Notes above.
- No conversation persistence out of the box: History is in-memory only. Configure a data layer for persistence.
- No built-in authentication: The UI is public by default. See Authentication above.
- No client-side session storage: Chainlit's architecture requires server-side sessions over WebSocket. Page refreshes create a new session.
Next Steps¶
- OpenAI Compatibility - Learn about chat completion implementation
- Open WebUI Integration - For a more feature-rich frontend
- Examples - Working pipeline examples