- Serving: Ollama
- Model: Mistral (via Ollama runtime)
- Embedding Model: all-MiniLM-L6-v2 (SentenceTransformers)
- Vector Store: FAISS
- Framework: LangGraph + LangChain
- Pipeline Type: Execution Graph-based RAG
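The retrieval core of this stack — embedding text and running nearest-neighbor search over the vectors — can be sketched with numpy alone. This is a minimal illustration of what `faiss.IndexFlatL2` computes; the `embed` function below is a hypothetical hash-seeded stand-in for `SentenceTransformer("all-MiniLM-L6-v2").encode`, kept only so the sketch is self-contained (the real model also emits 384-dim vectors):

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 384) -> np.ndarray:
    """Hypothetical stand-in for the all-MiniLM-L6-v2 encoder:
    deterministic random unit vector seeded from the text."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim).astype("float32")
    return v / np.linalg.norm(v)

# "Index" a small corpus, as FAISS IndexFlatL2 would store the embeddings.
corpus = [
    "Ollama serves local LLMs",
    "FAISS does vector search",
    "LangGraph builds execution graphs",
]
index = np.stack([embed(doc) for doc in corpus])  # shape (3, 384)

def search(query: str, k: int = 2):
    """Brute-force L2 nearest neighbors -- the operation IndexFlatL2.search performs."""
    q = embed(query)
    dists = np.linalg.norm(index - q, axis=1)
    top = np.argsort(dists)[:k]
    return [(corpus[i], float(dists[i])) for i in top]

print(search("vector search", k=2))
```

In the actual pipeline, the retrieved chunks would then be passed to Mistral (via Ollama) inside a LangGraph node rather than printed.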
- conda create -n graphrag python=3.10
- conda activate graphrag
- pip install -r requirements.txt
- ollama pull mistral
- ollama serve > /dev/null 2>&1 &
- ps aux | grep ollama # confirm the server is running
- nohup ollama run mistral > /dev/null 2>&1 &