
TrojAI Browser Extension by Andrew
Experimental
In-Flight LLM Safety Monitoring for web-based LLM applications. We monitor traffic from AI-enabled sites so your enterprise can safely enable them without worrying about data leakage and other issues.
Firefox is required to use this extension
Extension Metadata
About this extension
The emergence of AI and specifically Large Language Models such as ChatGPT has taken the world by storm. The landscape is changing daily with more and more powerful LLMs released by the month.
While these technologies offer unprecedented increases in productivity for their users, they are not without risk. It is not clear if the inputs sent into the models are properly sanitized or used for future training. Researchers are constantly identifying new attack vectors that can circumvent safety measures baked into the LLMs and expose users to harmful content or induce data leaks.
TrojAI's LLM Monitor extension sits between you and the LLM and monitors your inputs to ensure sensitive data is not exposed to the LLM. We've designed the extension to integrate seamlessly into your current flow. You can interact with LLMs directly in the browser like you normally would, and our extension will let you know if you are about to send sensitive information before the model is able to read it.
We are actively developing the extension and looking to roll out new features quickly. Our current capabilities include the following:
- Personally Identifiable Information (credit cards, SSNs, emails, etc.)
- Toxicity (words that could induce harmful content in a response from the LLM)
- Prompt Injection Detection
- Jailbreak detection
- DLP
- Multimodal protections and more.
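To illustrate the kind of pre-send check described above, here is a minimal sketch of a client-side PII scan over a prompt before it reaches the model. This is purely illustrative, assuming simple regular-expression matching; it is not TrojAI's actual detection logic, and the pattern names and thresholds are hypothetical.

```typescript
// Hypothetical pattern set for a pre-send PII scan (not TrojAI's real rules).
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/, // basic email shape
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,                            // US SSN in dashed form
  creditCard: /\b(?:\d[ -]?){13,16}\b/,                    // loose 13-16 digit card-like run
};

// Returns the names of all pattern categories found in the prompt text.
function findPII(prompt: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([name]) => name);
}

// Example: a card-like digit run triggers the creditCard category.
// findPII("my card is 4111 1111 1111 1111") → ["creditCard"]
```

In a real extension, a check like this would run in a content script on the input field of the LLM site, warning the user (or blocking submission) before the request is sent; production monitors typically combine such rules with ML-based classifiers to reduce false positives.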
Whether you're an individual looking to keep your own data safe, or an organization seeking to safely onboard LLMs into your developer workflow, TrojAI's cutting-edge LLM monitoring extension is for you!
Permissions
This add-on needs to:
- Access browser tabs
- Access browser activity during navigation
- Access your data for chatgpt.com
- Access your data for www.bing.com
- Access your data for copilot.microsoft.com
- Access your data for www.office.com
- Access your data for gemini.google.com
- Access your data for claude.ai
- Access your data for copilot.cloud.microsoft
- Access your data for m365.cloud.microsoft
This add-on also requests:
- Access stored data for all websites
More information
- Add-on links
- Version
- 0.0.55
- Size
- 412.91 KB
- Last updated
- a month ago (Apr 3, 2025)
- Related categories
- License
- All Rights Reserved
- Privacy policy
- Read the privacy policy for this add-on
- Version history
- Tags