A privacy-focused offline AI translation application built with Transformers and Tkinter, providing secure local translation without sending data to external servers.
⚠️ IMPORTANT NOTICE: This repository contains placeholder model files. You must download the actual model files (~14GB) before running the application. See Installation section for details.
## Features

- 🤖 Local AI Translation - Complete privacy protection with offline processing
- 🌍 Multi-language Support - Bidirectional translation between 8 languages (Chinese, English, Spanish, French, German, Japanese, Korean, Russian)
- ⚡ GPU Acceleration - Automatic GPU detection with CUDA support for faster translation
- 🎨 Modern Interface - Clean, intuitive graphical user interface
- ⌨️ Keyboard Shortcuts - Rich hotkey support for efficient workflow
- 📋 One-click Copy - Easy result copying to clipboard
- 🔄 Quick Language Swap - Instant source/target language switching
- 📏 Smart Text Limits - 5000 character limit with helpful user guidance
## Requirements

- Python: 3.8 or higher
- PyTorch: Latest stable version
- Transformers: Hugging Face transformers library
- Accelerate: For optimized model loading
- Tkinter: GUI framework (usually included with Python)
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/your-username/offline-translator.git
  cd offline-translator
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
⚠️ **IMPORTANT: Download Model Files**

The repository contains placeholder model files. You MUST download the actual model before running the application.

- 📁 Navigate to the `models/` directory
- 📖 Read the `README.txt` file for detailed download instructions
- 🔗 Download the model files (~14GB) from the provided links
- 📋 Replace the placeholder `model.safetensors` (0KB) with the actual model file
Quick Setup:

```bash
# After cloning, check the models directory
cd models/
cat README.txt   # Read download instructions
# Download model files from the provided links
# Replace model.safetensors with the downloaded file (~14GB)
```
Verify Installation:

- Ensure `model.safetensors` is approximately 14GB (not 0KB)
- All JSON configuration files should contain actual data
- Run the application to test:

  ```bash
  python translator.py
  ```
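If you prefer to script this check, here is a minimal Python sketch; the path is assumed to match the layout described above:

```python
# Minimal sketch: verify the model weights are real, not the 0KB placeholder.
import os

size_gb = os.path.getsize("models/model.safetensors") / 1e9
print(f"model.safetensors: {size_gb:.2f} GB")
if size_gb < 1:
    raise SystemExit("Looks like the placeholder file; download the real weights first.")
```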
## Usage

- Launch the application:

  ```bash
  python translator.py
  ```

- Setup process:
  - Select your preferred device (GPU recommended for speed)
  - Click "Load Model" to initialize the translation engine
  - Choose source and target languages
  - Enter text (up to 5000 characters)
  - Click "Translate" or press `Ctrl+Enter`
## Keyboard Shortcuts

| Shortcut | Action |
|---|---|
| `Ctrl+Enter` | Translate text |
| `Ctrl+L` | Load model |
| `Ctrl+Shift+C` | Copy translation result |
| `Ctrl+Shift+X` | Clear input text |
| `Ctrl+Shift+S` | Swap source and target languages |
| `F1` | Show help information |
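In Tkinter, shortcuts like these are typically registered with `bind()`. A minimal sketch, assuming hypothetical handler names rather than the application's actual functions:

```python
# Minimal sketch of Tkinter key bindings; handler names are hypothetical.
import tkinter as tk

root = tk.Tk()

def translate(event=None): ...   # placeholder handlers
def load_model(event=None): ...
def show_help(event=None): ...

root.bind("<Control-Return>", translate)  # Ctrl+Enter
root.bind("<Control-l>", load_model)      # Ctrl+L
root.bind("<F1>", show_help)              # F1
```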
### Device Selection

- Automatic GPU Detection - Scans for available CUDA devices
- GPU Priority - Graphics cards listed first, CPU as fallback
- Device Information - Shows GPU names for easy identification
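A minimal sketch of how such detection can be done with PyTorch (the function name is illustrative, not the application's actual code):

```python
# Minimal sketch: enumerate CUDA devices with PyTorch, CPU as fallback.
import torch

def list_devices():
    devices = [
        f"cuda:{i} ({torch.cuda.get_device_name(i)})"
        for i in range(torch.cuda.device_count())
    ]
    devices.append("cpu")  # CPU is always available as a fallback
    return devices

print(list_devices())
```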
### Language Support

- 8 Language Support - Comprehensive language coverage
- Bidirectional Translation - Translate between any supported language pair
- Quick Swap Button (⇄) - Instantly reverse translation direction
- Smart Validation - Prevents selecting identical source/target languages
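The swap and validation logic can be as small as this sketch (variable and function names are hypothetical):

```python
# Minimal sketch: swap source/target and reject identical selections.
import tkinter as tk

def swap_languages(src_var: tk.StringVar, tgt_var: tk.StringVar) -> None:
    src, tgt = src_var.get(), tgt_var.get()
    src_var.set(tgt)
    tgt_var.set(src)

def validate_pair(src: str, tgt: str) -> bool:
    # Identical source and target languages make no sense to translate
    return src != tgt
```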
### Text Handling

- Scrollable Text Areas - Handle long texts comfortably
- 5000 Character Limit - Optimized for quality and performance
- Clear Warnings - Helpful messages when text exceeds limits
- Copy/Clear Functions - Easy text management
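The limit check itself is straightforward; a minimal sketch, with `MAX_CHARS` as an assumed constant name:

```python
# Minimal sketch: enforce the 5000-character input limit with a clear message.
from typing import Optional

MAX_CHARS = 5000

def check_length(text: str) -> Optional[str]:
    """Return a warning message if the text exceeds the limit, else None."""
    if len(text) > MAX_CHARS:
        return f"Input is {len(text)} characters; the limit is {MAX_CHARS}. Please shorten it."
    return None
```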
### Translation Quality

- Advanced Prompting - Optimized prompts for better translation quality
- Smart Output Cleaning - Removes artifacts and explanatory text
- Tuned Parameters - Optimized generation settings for accuracy
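As a hedged illustration of this flow (the actual prompt template and generation settings live in `translator.py`; everything below is an assumption, not the shipped code):

```python
# Minimal sketch of prompt-based translation with Transformers.
# The prompt wording and generation parameters here are assumptions.
def translate(model, tokenizer, text, src="English", tgt="German"):
    prompt = f"Translate the following text from {src} to {tgt}:\n{text}\nTranslation:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Strip the prompt tokens, keeping only the newly generated text
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
```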
Your `models/` directory should contain:

```
models/
├── config.json              # Model configuration
├── model.safetensors        # Model weights
├── tokenizer.json           # Tokenizer configuration
└── generation_config.json   # Generation parameters
```
### Performance Optimizations

- Efficient Loading - Uses the `accelerate` library for optimized memory usage
- Mixed Precision - `bfloat16` support to reduce VRAM consumption
- Smart Device Management - Automatic device allocation and detection
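A minimal sketch of loading a local model this way (the model class and path are assumptions; the application's code may differ):

```python
# Minimal sketch: local model loading with bfloat16 and accelerate's device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("models/")
model = AutoModelForCausalLM.from_pretrained(
    "models/",
    torch_dtype=torch.bfloat16,  # roughly halves VRAM vs. float32
    device_map="auto",           # accelerate places weights on GPU/CPU automatically
)
```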
### Error Handling

- Global Exception Handling - Comprehensive error catching
- Detailed Error Messages - Clear feedback for troubleshooting
- Automatic Logging - Errors saved to `error_log.txt`
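Global exception logging can be wired up with a sketch like this, assuming the `error_log.txt` path above:

```python
# Minimal sketch: route uncaught exceptions to error_log.txt.
import sys
import traceback
from datetime import datetime

def log_uncaught(exc_type, exc_value, exc_tb):
    with open("error_log.txt", "a", encoding="utf-8") as f:
        f.write(f"\n--- {datetime.now().isoformat()} ---\n")
        traceback.print_exception(exc_type, exc_value, exc_tb, file=f)

sys.excepthook = log_uncaught
```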
### User Experience

- Multi-threading - Non-blocking UI during translation
- Real-time Progress - Live status updates and progress indicators
- Responsive Design - Smooth interaction and feedback
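A minimal sketch of the non-blocking pattern (the callback and helper names are hypothetical): run the slow translation in a worker thread and hand the result back to the Tkinter main loop via `after()`:

```python
# Minimal sketch: keep the UI responsive by translating off the main thread.
import threading

def translate_async(root, text, on_done):
    def worker():
        result = translate_text(text)           # hypothetical slow call
        root.after(0, lambda: on_done(result))  # update widgets on the main thread
    threading.Thread(target=worker, daemon=True).start()
```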
## Troubleshooting

**Model Loading Fails**

- Verify all model files are present in the `models/` directory
- Check file permissions and integrity
- Ensure sufficient disk space
**CUDA Errors**
- Confirm PyTorch CUDA compatibility
- Update GPU drivers
- Check CUDA installation
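A quick way to check whether your PyTorch build sees CUDA at all:

```python
# Quick diagnostic: does this PyTorch build see CUDA?
import torch

print("PyTorch:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)       # None for CPU-only builds
print("CUDA available:", torch.cuda.is_available())
```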
**Memory Issues**
- Try CPU mode for large models
- Reduce input text length
- Close other GPU-intensive applications
**Poor Translation Quality**
- Verify model compatibility with selected language pair
- Check if model is specifically trained for translation
- Try shorter, simpler sentences
## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

If you encounter any issues or have questions, please open an issue on GitHub.
