[Download Python for your OS](https://www.python.org/downloads/)
```
pip install -r requirements.txt
```
Download an embedding LLM in GGUF format. A suggested starting source and model:
https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1/tree/main/gguf
Download an inference LLM in GGUF format. A suggested starting source and model:
https://huggingface.co/microsoft/phi-4-gguf/tree/main
Place both GGUF files in the `/models` folder and update their paths and filenames in `vector_config.json` and `chat_config.json`.
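The exact schema of the config files depends on this project; as an illustrative sketch only, the model entry in `vector_config.json` might look like the following (the key name and the GGUF filename are assumptions, not the project's actual schema):

```json
{
  "model_path": "models/mxbai-embed-large-v1.gguf"
}
```

Whatever the real key names are, make sure the path points at the file you actually placed in `/models`.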
Copy the files you want to index (accepted types: md, pdf, txt) into the `/documents` folder.
To start the embedding pipeline, go to the main folder and run the `chunk_embed_vector.py` script:

```
python chunk_embed_vector.py
```
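The script name suggests it chunks the documents, embeds each chunk, and stores the vectors. As a hedged illustration of the chunking step only (the actual script may split differently, e.g. by tokens or sentences), here is a minimal fixed-size character chunker with overlap:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character windows.

    Illustrative only; chunk_size and overlap values are
    assumptions, not the project's defaults.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window reached the end of the text
        start += chunk_size - overlap  # step back by the overlap
    return chunks

chunks = chunk_text("a" * 1200, chunk_size=500, overlap=50)
print(len(chunks))  # 3 windows: 0-500, 450-950, 900-1200
```

Overlap between adjacent chunks helps retrieval later, since a sentence cut at a chunk boundary still appears whole in one of the two neighboring chunks.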
To start the chat in the CLI, run the `chat_cli.py` script:

```
python chat_cli.py
```
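At query time, a retrieval-augmented chat like this one typically embeds the user's question and pulls the nearest stored chunks to build the prompt context. A minimal sketch of that retrieval step using cosine similarity over toy 2-D vectors (the actual retrieval logic in `chat_cli.py` may differ):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    # store: list of (chunk_text, vector) pairs; return the k
    # chunk texts most similar to the query vector.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy vector store; real embeddings would have hundreds of dimensions.
store = [
    ("chunk about cats", [1.0, 0.0]),
    ("chunk about dogs", [0.0, 1.0]),
    ("chunk about pets", [0.7, 0.7]),
]
results = top_k([0.9, 0.1], store, k=2)
print(results)  # ['chunk about cats', 'chunk about pets']
```

The retrieved chunks are then prepended to the user's question in the prompt sent to the inference LLM.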
To start the chat in the Chainlit web UI, run the `chat_webui.py` script:

```
chainlit run chat_webui.py
```