Fine-tuned GPT-2 model for detecting malicious LLM prompts such as prompt injections, jailbreaks, and context manipulation. Designed to serve as a pre-processing layer for securing LLM-based applications.
This repository contains the training script, dataset samples, and inference logic for a GPT-2 that detects prompt injection and other LLM abuse patterns. Useful as a filter in agent-based systems, chat interfaces, and API frontends to reduce malicious prompt exposure.
๐จ Building LLM Prompt Defence โ Fine-Tuned GPT-2 for Malicious Prompt Detection ๐๐ค
Prompt injection, prompt leakage, and jailbreaking remain some of the most critical security threats facing LLM-based applications today. Iโve been working on a solution to proactively detect and block malicious prompts before they even reach a modelโs context.
โ What I Did
I fine-tuned GPT-2 on a custom-crafted cybersecurity dataset built specifically to detect LLM abuse scenarios, including:
-
Prompt injection
-
Jailbreak attempts
-
Context manipulation
๐ Why GPT-2?
It can be fine-tuned even on CPU (although slowly), making it a practical choice for experimentation and prototyping. For production, Iโd recommend a more modern models.
๐ Use Case
This model serves as a pre-processing filter โ an intelligent gatekeeper that flags or blocks potentially harmful inputs before they reach the main LLM. Ideal for agent-based systems, APIs, and LLMOps pipelines.
๐ Dataset Design
-
Custom malicious prompt dataset tailored to real-world LLM exploitation
-
Structure aligned with actual red team scenarios
-
Format inspired by tatsu-lab/alpaca
During testing, Some time benign structured inputs like:
use the db tool to insert a new customer into the database
Insert into customer_details: name=sample, number=1, address=sample
...were falsely flagged as malicious.
Why?
Because of surface-level tokens like "insert into" or "password" โ which often appear in attacks, but are also common in legitimate agent tool instructions within real-world LLM systems.
๐ Why This Matters
In agent-based LLMs that use tools (e.g., API calling, code execution), prompts like the above are normal. If your model overfits to syntax rather than intent, it results in false positives โ breaking valid tasks.
โ How We can Fix It
- Refined and restructured the dataset for fine tuning
This significantly reduced false positives and made the classifier more robust in real-world applications.
๐ Resources Iโm Sharing
โ My fine-tuning script
โ A sample from the dataset I used
โ Model test Script
Feel free to reuse or adapt it in your own security-focused LLM pipeline.
๐งฑ Tech Stack*
-
Hugging Face Transformers
-
PyTorch
-
GPT-2
Building secure AI systems isnโt just about big models โ itโs about thoughtful data design and robust intent classification.