prompt-injection-detector by Vishesh Agarwal
Detects hidden prompt injection instructions that might manipulate AI models like Copilot and Claude.
About this extension
AI assistants like GitHub Copilot, ChatGPT, and others read web page content when you ask them to help. Attackers can hide malicious instructions in that content — invisible to you, but visible to the AI — to hijack its behaviour, steal your data, or bypass safety filters.
PromptGuard detects:
- Hidden elements (display:none, visibility:hidden, zero opacity, sub-pixel fonts, same-colour text)
- HTML comments — invisible to humans but read by AI tools ingesting page source
- LLM-specific formats: [INST], system:, assistant: prompt injection patterns
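The hidden-element checks above can be sketched as a heuristic over an element's computed style. This is a minimal illustration, not PromptGuard's actual code; the function name and the sub-pixel threshold are assumptions:

```javascript
// Heuristic: is an element hidden from human readers but still present in the DOM?
// `style` stands in for the object returned by getComputedStyle(el).
// The 1px font-size threshold is an illustrative assumption.
function isVisuallyHidden(style) {
  if (style.display === "none" || style.visibility === "hidden") return true;
  if (parseFloat(style.opacity) === 0) return true;              // zero opacity
  if (parseFloat(style.fontSize) < 1) return true;               // sub-pixel font
  if (style.color && style.color === style.backgroundColor) return true; // same-colour text
  return false;
}
```

In a content script this would typically run against `getComputedStyle(el)` for each element whose text matches a suspicious pattern.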
Three sensitivity levels:
- 🟢 Normal — high-confidence imperative overrides only (low false positives)
- 🟠 High — adds jailbreak, DAN, developer-mode, bypass patterns
- 🔴 Ultra — adds roleplay, persona, exfiltration, and LLM prompt-format patterns
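Tiered sensitivity like this can be modelled as cumulative pattern sets, where each level adds regexes on top of the lower levels. The regexes below are assumptions about the kinds of phrases each tier targets, not the extension's actual rule set:

```javascript
// Illustrative sensitivity tiers; patterns are assumptions, not PromptGuard's rules.
const TIERS = {
  normal: [/ignore (all )?previous instructions/i, /disregard (the )?above/i],
  high:   [/jailbreak/i, /\bDAN\b/, /developer mode/i],
  ultra:  [/\[INST\]/i, /^\s*(system|assistant)\s*:/im, /pretend (you are|to be)/i],
};

// Return the patterns matching `text` at the given level;
// higher levels include all lower-level patterns.
function scan(text, level) {
  const order = ["normal", "high", "ultra"];
  const active = order.slice(0, order.indexOf(level) + 1).flatMap(l => TIERS[l]);
  return active.filter(re => re.test(text));
}
```

A design like this keeps the Normal tier conservative (imperative overrides only), which is how the low-false-positive claim above would be achieved.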
Click any finding to flash and scroll to the exact element on the page.
All scanning runs locally in your browser. Nothing is sent anywhere.
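The HTML-comment channel mentioned above can be scanned locally with a simple pass over the page source. A real extension would more likely walk the DOM for `Node.COMMENT_NODE` entries; this regex sketch over serialized HTML is an illustrative assumption:

```javascript
// Extract HTML comment bodies from raw page source.
// Regex-based sketch; a DOM TreeWalker over comment nodes is the
// more robust approach inside a content script.
function extractComments(html) {
  const out = [];
  const re = /<!--([\s\S]*?)-->/g;
  let m;
  while ((m = re.exec(html)) !== null) out.push(m[1].trim());
  return out;
}
```

Each extracted comment body would then be fed through the same pattern matching as visible text.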
More information
- Version: 1.0.0
- Size: 20.48 KB
- Last updated: 1 month ago (April 4, 2026)
- License: MIT License
- Privacy policy: read this add-on's privacy policy