Enterprise-grade LLM automated deployment tool that makes AI servers truly "plug-and-play".
A modern CLI tool for generating production-ready Model Context Protocol (MCP) servers
Deploy intelligence. Open-source infrastructure for AI agents in production.
Semester Project WS 2017/2018 - A Chrome extension for Twitch and a Python Flask backend that analyses streamers' emotions to find which ones are raging.
Ollama toCloud is a third-party Ollama server & cloud model protocol converter.
Run Hermes Agent autonomously on your Linux server using systemd and native cron scheduler. Production-ready, headless setup with Nous Portal integration.
HomeServer Automation Script ⚡ Build your own home server in minutes! Automated deployment and configuration for Docker, CasaOS, Jellyfin, Immich, Ollama, n8n, and Tailscale. One-command script to transform any Linux machine into a powerful self-hosted home server.
AWS_AI_Cloudflare_workshop, Kaohsiung, 2024-06-06
LLM for OpenVINO multi-model management server (for Intel NPU/GPU)
Server for Thunderbird AI Compose Extension
Deploy and run Hermes Agent autonomously on Linux servers using systemd and native cron for stable, headless AI task scheduling.
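The systemd-plus-cron pattern described in the two Hermes Agent entries above can be sketched as a minimal unit file. This is an illustrative sketch only; the unit name, user, and binary path are hypothetical and not taken from those repositories:

```ini
# /etc/systemd/system/hermes-agent.service  (hypothetical name and paths)
[Unit]
Description=Headless Hermes Agent run (illustrative sketch)
After=network-online.target
Wants=network-online.target

[Service]
# oneshot: the service runs one task to completion, so cron can re-trigger it
Type=oneshot
User=hermes
ExecStart=/opt/hermes/agent --headless
```

A crontab entry can then start the unit on a schedule, e.g. `0 * * * * systemctl start hermes-agent.service` to run it hourly; using a oneshot unit rather than a long-running daemon keeps cron scheduling and systemd supervision from conflicting.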