Fabricatio is a streamlined Python library for building LLM applications using an event-based agent structure. It leverages Rust for performance-critical tasks, Handlebars for templating, and PyO3 for Python bindings.
- Event-Driven Architecture: Robust task management through an EventEmitter pattern.
- LLM Integration & Templating: Seamlessly interact with large language models and generate dynamic content through Handlebars templates.
- Async & Extensible: Fully asynchronous execution with easy extension via custom actions and workflows.
```bash
# Install fabricatio with full capabilities.
pip install fabricatio[full]
# or with uv
uv add fabricatio[full]

# Install fabricatio with only the RAG and rule capabilities.
pip install fabricatio[rag,rule]
# or with uv
uv add fabricatio[rag,rule]
```
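After installing, you can quickly confirm that the package is importable and check which version was resolved. A minimal check using only the standard library:

```python
# Minimal sanity check: import the package and report the installed version.
import importlib.metadata

import fabricatio  # noqa: F401  # verifies the package imports cleanly

print("fabricatio", importlib.metadata.version("fabricatio"))
```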
You can download the templates from the GitHub release manually and extract them into the working directory:

```bash
curl -L https://github.com/Whth/fabricatio/releases/download/v0.19.1/templates.tar.gz | tar -xz
```

Or you can use the `tdown` CLI bundled with fabricatio to achieve the same result:

```bash
tdown download --verbose -o ./
```

Note:
fabricatio performs template discovery across multiple sources with filename-based identification. Template resolution follows a priority hierarchy in which working-directory templates override templates located in `<ROAMING>/fabricatio/templates`.
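As a rough illustration of that hierarchy, the sketch below resolves a template name against the working directory first and falls back to the roaming directory. The `resolve_template` helper, the `.hbs` extension, and the exact directory layout are hypothetical; the roaming location shown assumes a Windows-style `%APPDATA%` path with a Unix fallback:

```python
# Hypothetical sketch of the lookup order described above: the working
# directory is checked before <ROAMING>/fabricatio/templates.
import os
from pathlib import Path
from typing import Optional


def resolve_template(name: str) -> Optional[Path]:
    roaming = Path(os.environ.get("APPDATA", Path.home() / ".local" / "share"))
    candidates = [
        Path.cwd() / f"{name}.hbs",                            # working-directory override
        roaming / "fabricatio" / "templates" / f"{name}.hbs",  # roaming templates
    ]
    return next((p for p in candidates if p.is_file()), None)


print(resolve_template("hello"))
```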
"""Example of a simple hello world program using fabricatio."""
from typing import Any
# Import necessary classes from the namespace package.
from fabricatio import Action, Event, Role, Task, WorkFlow, logger
# Create an action.
class Hello(Action):
"""Action that says hello."""
output_key: str = "task_output"
async def _execute(self, **_) -> Any:
ret = "Hello fabricatio!"
logger.info("executing talk action")
return ret
# Create the role and register the workflow.
(Role()
.register_workflow(Event.quick_instantiate("talk"), WorkFlow(name="talk", steps=(Hello,)))
.dispatch())
# Make a task and delegate it to the workflow registered above.
assert Task(name="say hello").delegate_blocking("talk") == "Hello fabricatio!"For various usage scenarios, refer to the following examples:
- Simple Chat
- Retrieval-Augmented Generation (RAG)
- Article Extraction
- Propose Task
- Code Review
- Write Outline
(For full example details, see Examples)
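Building on the hello-world program above, actions can also be chained inside a single WorkFlow. The sketch below assumes that a step's return value is stored in the shared context under its `output_key` and handed to later steps as a keyword argument of the same name; the `Greet`/`Shout` actions and the "greet" event name are illustrative, not part of fabricatio's shipped examples:

```python
"""Sketch: chaining two actions in one workflow (assumed context passing via output_key)."""

from typing import Any

from fabricatio import Action, Event, Role, Task, WorkFlow, logger


class Greet(Action):
    """Produce a greeting and store it under `greeting`."""

    output_key: str = "greeting"

    async def _execute(self, **_) -> Any:
        return "hello"


class Shout(Action):
    """Read the previous step's output and upper-case it."""

    output_key: str = "task_output"

    async def _execute(self, greeting: str, **_) -> Any:
        logger.info("shouting the greeting")
        return greeting.upper() + "!"


(Role()
 .register_workflow(Event.quick_instantiate("greet"), WorkFlow(name="greet", steps=(Greet, Shout)))
 .dispatch())

print(Task(name="greet loudly").delegate_blocking("greet"))  # expected: "HELLO!"
```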
Fabricatio supports flexible configuration through multiple sources, with the following priority order:
Call Arguments > `./.env` > Environment Variables > `./fabricatio.toml` > `./pyproject.toml` > `<ROAMING>/fabricatio/fabricatio.toml` > Builtin Defaults.
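The environment-variable form shown below follows a `FABRICATIO_<SECTION>__<KEY>` naming convention, with a double underscore separating the section from the key. As a rough sketch of how such names correspond to the nested TOML tables (this illustrates the convention only, not fabricatio's actual configuration loader):

```python
# Illustration of the FABRICATIO_<SECTION>__<KEY> convention: collect prefixed
# environment variables into the nested {section: {key: value}} shape that the
# TOML forms below express directly.
import os


def collect_fabricatio_env() -> dict:
    nested: dict = {}
    for name, value in os.environ.items():
        if not name.startswith("FABRICATIO_") or "__" not in name:
            continue
        section, key = name.removeprefix("FABRICATIO_").split("__", 1)
        nested.setdefault(section.lower(), {})[key.lower()] = value
    return nested


os.environ["FABRICATIO_LLM__TEMPERATURE"] = "1.0"
print(collect_fabricatio_env())  # e.g. {'llm': {'temperature': '1.0'}}
```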
Below is a unified view of the same configuration expressed in different formats:
`./.env`:

```dotenv
FABRICATIO_LLM__API_ENDPOINT=https://api.openai.com
FABRICATIO_LLM__API_KEY=your_openai_api_key
FABRICATIO_LLM__TIMEOUT=300
FABRICATIO_LLM__MAX_RETRIES=3
FABRICATIO_LLM__MODEL=openai/gpt-3.5-turbo
FABRICATIO_LLM__TEMPERATURE=1.0
FABRICATIO_LLM__TOP_P=0.35
FABRICATIO_LLM__GENERATION_COUNT=1
FABRICATIO_LLM__STREAM=false
FABRICATIO_LLM__MAX_TOKENS=8192
FABRICATIO_DEBUG__LOG_LEVEL=INFO
```

`./fabricatio.toml`:

```toml
[llm]
api_endpoint = "https://api.openai.com"
api_key = "your_openai_api_key"
timeout = 300
max_retries = 3
model = "openai/gpt-3.5-turbo"
temperature = 1.0
top_p = 0.35
generation_count = 1
stream = false
max_tokens = 8192
[debug]
log_level = "INFO"[tool.fabricatio.llm]
api_endpoint = "https://api.openai.com"
api_key = "your_openai_api_key"
timeout = 300
max_retries = 3
model = "openai/gpt-3.5-turbo"
temperature = 1.0
top_p = 0.35
generation_count = 1
stream = false
max_tokens = 8192
[tool.fabricatio.debug]
log_level = "INFO"We welcome contributions from everyone! Before contributing, please read our Contributing Guide and Code of Conduct.
Fabricatio is licensed under the MIT License. See LICENSE for details.
Special thanks to the contributors and maintainers of: