🚀 Feature Description
Add support for generating skills compatible with multiple LLM platforms (Gemini, ChatGPT, etc.), not just Claude AI.
📋 Problem Statement
Currently, Skill Seekers is tightly coupled to Claude AI:
- Upload endpoint: https://api.anthropic.com/v1/skills
- Skill format: Claude-specific YAML frontmatter + SKILL.md structure
- MCP integration: Claude Code only
- Documentation: Claude-focused
Impact: Users of other LLM platforms (Gemini, ChatGPT, etc.) cannot use the generated skills directly, even though the documentation scraping and content organization steps are already platform-agnostic.
💡 Proposed Solution
Create a multi-LLM adaptor system that allows users to generate skills in different formats for different LLM platforms.
Architecture
Skill Seekers (Core)
├── Scraping (universal) ✅ Already works
├── Content Organization (universal) ✅ Already works
└── Output Adaptors (NEW)
├── Claude Adaptor (current default)
├── Gemini Adaptor (NEW)
├── ChatGPT Adaptor (NEW)
└── Generic Markdown Adaptor (NEW)
🎯 Target Platforms
Priority 1: Google Gemini
- API: Google AI Studio / Vertex AI
- Format: Plain markdown with system instructions
- Context Upload: Files API or grounding with Google Search
- Implementation: cli/adaptors/gemini_adaptor.py
Priority 2: OpenAI ChatGPT
- API: OpenAI API with assistants
- Format: Assistant instructions + file search
- Context Upload: Vector store / file attachments
- Implementation: cli/adaptors/openai_adaptor.py
Priority 3: Generic Markdown
- Format: Pure markdown reference documentation
- Use Case: Any LLM, custom integrations, RAG systems
- Implementation: cli/adaptors/markdown_adaptor.py
Future: Other Platforms
- Mistral AI
- Cohere
- Local LLMs (Ollama, LM Studio)
- LangChain integration
🔧 Technical Implementation
1. Create Adaptor Interface
# cli/adaptors/base_adaptor.py
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, Any


class SkillAdaptor(ABC):
    """Base class for LLM-specific skill adaptors"""

    @abstractmethod
    def convert_skill(self, skill_dir: Path) -> Dict[str, Any]:
        """Convert universal skill format to platform-specific format"""
        pass

    @abstractmethod
    def package_skill(self, skill_dir: Path, output_path: Path) -> Path:
        """Package skill for platform-specific upload"""
        pass

    @abstractmethod
    def upload_skill(self, package_path: Path, api_key: str) -> str:
        """Upload skill to platform (returns skill ID or URL)"""
        pass
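To keep the `--target` flag thin, the CLI tools could resolve adaptors through a small registry. A minimal sketch, assuming the adaptor classes proposed below exist; the `ADAPTORS` dict and `get_adaptor()` helper are hypothetical, not existing code:

```python
# cli/adaptors/__init__.py (sketch)
from cli.adaptors.base_adaptor import SkillAdaptor
from cli.adaptors.claude_adaptor import ClaudeAdaptor      # refactored current code
from cli.adaptors.gemini_adaptor import GeminiAdaptor      # proposed below
from cli.adaptors.openai_adaptor import OpenAIAdaptor      # proposed below
from cli.adaptors.markdown_adaptor import MarkdownAdaptor  # proposed below

# Maps --target values to adaptor classes
ADAPTORS = {
    "claude": ClaudeAdaptor,
    "gemini": GeminiAdaptor,
    "openai": OpenAIAdaptor,
    "markdown": MarkdownAdaptor,
}


def get_adaptor(target: str) -> SkillAdaptor:
    """Return an adaptor instance for a --target value, or raise a clear error."""
    try:
        return ADAPTORS[target]()
    except KeyError:
        raise ValueError(f"Unknown target '{target}'. Choose from: {', '.join(ADAPTORS)}")
```

Adding a new platform then only requires writing one adaptor class and registering it here.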
2. Implement Gemini Adaptor
# cli/adaptors/gemini_adaptor.py
from pathlib import Path
from typing import Dict, Any

from cli.adaptors.base_adaptor import SkillAdaptor


class GeminiAdaptor(SkillAdaptor):
    """Adaptor for Google Gemini"""

    def convert_skill(self, skill_dir: Path) -> Dict[str, Any]:
        # Convert SKILL.md to Gemini system instructions
        # Strip YAML frontmatter
        # Format as plain markdown
        pass

    def package_skill(self, skill_dir: Path, output_path: Path) -> Path:
        # Package as individual markdown files
        # Create gemini_config.json with system instructions
        pass

    def upload_skill(self, package_path: Path, api_key: str) -> str:
        # Upload to Google AI Studio via API
        # Use Files API for context
        pass
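As a concrete illustration of what convert_skill needs to do, the Claude-style YAML frontmatter can be stripped with plain string handling. A rough sketch only; the `strip_yaml_frontmatter` helper is hypothetical and the SKILL.md location is an assumption:

```python
# Sketch of the frontmatter-stripping step (hypothetical helper, not existing code)
from pathlib import Path


def strip_yaml_frontmatter(skill_md: str) -> str:
    """Remove a leading '---' ... '---' YAML block and return plain markdown."""
    if skill_md.startswith("---"):
        end = skill_md.find("\n---", 3)  # closing delimiter after the opening line
        if end != -1:
            return skill_md[end + len("\n---"):].lstrip("\n")
    return skill_md


# Usage (assumes SKILL.md sits at the top of the skill directory)
skill_md = Path("output/react/SKILL.md").read_text(encoding="utf-8")
system_instruction = strip_yaml_frontmatter(skill_md)
```

The upload step could then hand the reference files to the Gemini Files API (for example, `upload_file` in the google-generativeai SDK) or pass the stripped SKILL.md as the model's `system_instruction`.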
3. Implement ChatGPT Adaptor
# cli/adaptors/openai_adaptor.py
from pathlib import Path
from typing import Dict, Any

from cli.adaptors.base_adaptor import SkillAdaptor


class OpenAIAdaptor(SkillAdaptor):
    """Adaptor for OpenAI ChatGPT"""

    def convert_skill(self, skill_dir: Path) -> Dict[str, Any]:
        # Convert to Assistant instructions
        # Prepare for vector store upload
        pass

    def package_skill(self, skill_dir: Path, output_path: Path) -> Path:
        # Package as OpenAI-compatible format
        pass

    def upload_skill(self, package_path: Path, api_key: str) -> str:
        # Create assistant with file search
        # Upload files to vector store
        pass
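For the upload step, the OpenAI side would roughly follow the Assistants API pattern of uploading files into a vector store and attaching it to an assistant with file search. A sketch only; the function name, model choice, and flow are illustrative, and the exact SDK surface has shifted between openai-python releases:

```python
# Rough sketch of what upload_skill could do for OpenAI (illustrative, not final)
from pathlib import Path
from openai import OpenAI


def upload_to_openai(markdown_files: list[Path], api_key: str, name: str) -> str:
    client = OpenAI(api_key=api_key)

    # Create a vector store and attach each packaged markdown file
    vector_store = client.beta.vector_stores.create(name=f"{name}-docs")
    for path in markdown_files:
        with path.open("rb") as fh:
            uploaded = client.files.create(file=fh, purpose="assistants")
        client.beta.vector_stores.files.create(
            vector_store_id=vector_store.id, file_id=uploaded.id
        )

    # Create an assistant that answers from the uploaded documentation
    assistant = client.beta.assistants.create(
        name=name,
        model="gpt-4o",
        instructions="Answer questions using the attached documentation.",
        tools=[{"type": "file_search"}],
        tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
    )
    return assistant.id
```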
4. Update CLI Tools
# Add --target flag to package_skill.py
python3 cli/package_skill.py output/react/ --target claude # Default
python3 cli/package_skill.py output/react/ --target gemini
python3 cli/package_skill.py output/react/ --target openai
python3 cli/package_skill.py output/react/ --target markdown
# Add --target flag to upload_skill.py
python3 cli/upload_skill.py react.zip --target gemini --api-key GOOGLE_API_KEY
python3 cli/upload_skill.py react.zip --target openai --api-key OPENAI_API_KEY
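Inside `package_skill.py`, the flag itself could be a plain argparse choice that feeds the registry sketched earlier (a minimal sketch; argument names other than `--target` are assumptions):

```python
# Sketch of the --target flag in cli/package_skill.py
import argparse

parser = argparse.ArgumentParser(description="Package a scraped skill for an LLM platform")
parser.add_argument("skill_dir", help="Directory produced by doc_scraper.py, e.g. output/react/")
parser.add_argument(
    "--target",
    choices=["claude", "gemini", "openai", "markdown"],
    default="claude",  # preserves current Claude-first behavior
    help="Platform to package the skill for",
)
args = parser.parse_args()
```

Defaulting to `claude` keeps every existing invocation working unchanged.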
5. Update MCP Server
{
"mcp__skill-seeker__package_skill": {
"target": "claude|gemini|openai|markdown"
},
"mcp__skill-seeker__upload_skill": {
"target": "claude|gemini|openai"
}
}
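On the server side, the new parameter would simply be threaded through to the adaptor layer. A sketch assuming the server is built on the FastMCP helper from the official Python MCP SDK; `get_adaptor` is the hypothetical registry helper sketched above:

```python
# Sketch of an extended MCP tool signature (assumes a FastMCP-based server)
from pathlib import Path

from mcp.server.fastmcp import FastMCP

from cli.adaptors import get_adaptor  # hypothetical registry helper

mcp = FastMCP("skill-seeker")


@mcp.tool()
def package_skill(skill_dir: str, target: str = "claude") -> str:
    """Package a skill for the chosen platform (claude|gemini|openai|markdown)."""
    adaptor = get_adaptor(target)
    output_path = Path(skill_dir).with_suffix(".zip")
    return str(adaptor.package_skill(Path(skill_dir), output_path))
```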
📦 File Structure
cli/
├── adaptors/
│ ├── __init__.py
│ ├── base_adaptor.py # Abstract base class
│ ├── claude_adaptor.py # Current implementation (refactored)
│ ├── gemini_adaptor.py # NEW - Google Gemini
│ ├── openai_adaptor.py # NEW - ChatGPT
│ └── markdown_adaptor.py # NEW - Generic markdown
├── package_skill.py # Updated with --target flag
└── upload_skill.py # Updated with --target flag
configs/
└── adaptors/
├── gemini_defaults.json # Gemini-specific settings
├── openai_defaults.json # OpenAI-specific settings
└── claude_defaults.json # Claude-specific settings (current)
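The per-platform defaults files would carry whatever settings an adaptor needs at packaging and upload time. No schema exists yet, so the keys below are purely illustrative; configs/adaptors/gemini_defaults.json might look something like:

```json
{
  "model": "gemini-1.5-pro",
  "system_instruction_file": "SKILL.md",
  "strip_frontmatter": true,
  "upload": {
    "method": "files_api",
    "mime_type": "text/markdown"
  }
}
```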
🧪 Testing Strategy
Unit Tests
# tests/test_adaptors/test_gemini_adaptor.py
def test_convert_skill_strips_yaml_frontmatter()
def test_package_creates_gemini_config()
def test_upload_to_gemini_api()
# tests/test_adaptors/test_openai_adaptor.py
def test_convert_to_assistant_instructions()
def test_package_for_vector_store()
def test_upload_to_openai_api()
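As an example of the first of these, the test could exercise the hypothetical `strip_yaml_frontmatter` helper sketched earlier; the real assertion target would be whatever GeminiAdaptor.convert_skill ends up returning:

```python
# tests/test_adaptors/test_gemini_adaptor.py (sketch)
from cli.adaptors.gemini_adaptor import strip_yaml_frontmatter  # hypothetical helper


def test_convert_skill_strips_yaml_frontmatter():
    skill_md = "---\nname: react\ndescription: React docs\n---\n# React\nSome content."
    result = strip_yaml_frontmatter(skill_md)
    assert result.startswith("# React")
    assert "name: react" not in result
```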
Integration Tests
# Test full workflow for each platform
python3 cli/doc_scraper.py --config configs/react.json
python3 cli/package_skill.py output/react/ --target gemini
python3 cli/upload_skill.py react-gemini.zip --target gemini --api-key $GEMINI_KEY
📚 Documentation Updates
New Documentation Files
- docs/GEMINI_INTEGRATION.md - Gemini setup guide
- docs/OPENAI_INTEGRATION.md - ChatGPT setup guide
- docs/MULTI_LLM_SUPPORT.md - Multi-LLM overview
- docs/ADAPTOR_DEVELOPMENT.md - Guide for adding new adaptors
README Updates
## Supported LLM Platforms
- ✅ **Claude AI** (Primary, full integration)
- ✅ **Google Gemini** (Full support with adaptor)
- ✅ **OpenAI ChatGPT** (Full support with adaptor)
- ✅ **Generic Markdown** (Universal format for any LLM)
🎯 Success Criteria
- ✅ Users can generate Gemini-compatible skills
- ✅ Users can generate ChatGPT-compatible skills
- ✅ Upload works for all platforms
- ✅ All existing Claude functionality preserved
- ✅ Comprehensive documentation for each platform
- ✅ Test coverage for all adaptors
🔗 Related Issues
- [FEATURE] can this be used for gemini #177 - User request for Gemini support (original trigger)
🚀 Implementation Phases
Phase 1: Foundation (Week 1-2)
- Create adaptor interface (base_adaptor.py)
- Refactor existing Claude code to use adaptor pattern
- Add --target flag to CLI tools
- Update tests
Phase 2: Gemini Support (Week 3-4)
- Implement gemini_adaptor.py
- Add Gemini API integration
- Test with real Gemini API
- Write documentation
Phase 3: ChatGPT Support (Week 5-6)
- Implement openai_adaptor.py
- Add OpenAI API integration
- Test with real OpenAI API
- Write documentation
Phase 4: Polish & Release (Week 7)
- Add generic markdown adaptor
- Update README and all docs
- Create migration guide
- Release v2.1.0
💬 Discussion Points
- Should we make Claude the default, or prompt users to choose a platform?
- How should we handle platform-specific features (e.g., Claude's context window vs Gemini's)?
- Should we support multiple platforms in a single package?
- API key management - environment variables vs config file?
🎉 Benefits
- 🌍 Broader Audience - Opens Skill Seekers to all LLM users
- 🔄 Platform Flexibility - Users not locked into one LLM
- 🚀 Future-Proof - Easy to add new platforms
- 📈 Growth - More users, more contributors
- 🎯 Use Case Expansion - RAG systems, custom integrations
Labels: type: feature, priority: medium, scope: large, good first issue (for adding new adaptors after the foundation is built)
Related to: #177