TrojAI Browser Extension by Andrew
Experimental
In-flight LLM safety monitoring for web-based LLM applications. We monitor traffic from AI-enabled sites so your enterprise can safely enable them without worrying about data leakage and other risks.
You need Firefox to use this extension
Extension Metadata
About this extension
The emergence of AI, and specifically of Large Language Models (LLMs) such as ChatGPT, has taken the world by storm. The landscape changes daily, with ever more powerful LLMs released every month.
While these technologies offer unprecedented productivity gains for their users, they are not without risk. It is often unclear whether the inputs sent to these models are properly sanitized, or whether they are retained for future training. Researchers are constantly identifying new attack vectors that circumvent the safety measures baked into LLMs, exposing users to harmful content or inducing data leaks.
TrojAI's LLM Monitor extension sits between you and the LLM and monitors your inputs to ensure sensitive data is not exposed to the model. We've designed the extension to integrate seamlessly into your current workflow: you interact with LLMs directly in the browser as you normally would, and the extension warns you if you are about to send sensitive information, before the model is able to read it.
We are actively developing the extension and looking to roll out new features quickly. Our current capabilities include the following (a brief code sketch follows the list):
- Personally Identifiable Information (credit cards, SSNs, emails, etc.)
- Toxicity (Words that could induce harmful content in a response from the LLM)
- Prompt Injection Detection
- Jailbreak detection
- Data Loss Prevention (DLP)
- Multimodal protections and more.
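To make the detection idea concrete, here is a minimal sketch of how a content script might flag PII in a draft prompt before it is submitted. This is an illustration under assumptions, not TrojAI's actual implementation; the regex patterns, the `detectPII` helper, and the form/textarea selectors are all hypothetical.

```typescript
// Illustrative sketch only -- not TrojAI's implementation.
// A content script scans a draft prompt against simple PII patterns
// and lets the user cancel the send before the model sees the text.

const PII_PATTERNS: Record<string, RegExp> = {
  creditCard: /\b(?:\d[ -]?){13,16}\b/, // rough card-number shape
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,         // US SSN format
  email: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, // simple email match
};

// Return the names of any PII categories found in the prompt.
function detectPII(prompt: string): string[] {
  return Object.entries(PII_PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([name]) => name);
}

// Hypothetical hook: intercept the chat form's submit event and
// warn the user if sensitive data is detected.
document.querySelector("form")?.addEventListener("submit", (event) => {
  const draft = document.querySelector<HTMLTextAreaElement>("textarea")?.value ?? "";
  const findings = detectPII(draft);
  if (findings.length > 0 &&
      !confirm(`Possible sensitive data (${findings.join(", ")}). Send anyway?`)) {
    event.preventDefault(); // block the prompt before it leaves the page
  }
});
```

A production monitor would go well beyond this (model-based classifiers for toxicity, prompt injection, and jailbreaks rather than regexes), but the interception point is the same: inspect the input client-side before the request is sent.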
Whether you're an individual looking to keep your own data safe, or an organization seeking to safely onboard LLMs into your developer workflow, TrojAI's cutting-edge LLM monitoring extension is for you!
Permissions
This add-on needs to:
- Access browser activity during navigation
- Access your data for chatgpt.com
- Access your data for www.bing.com
- Access your data for copilot.microsoft.com
- Access your data for www.office.com
- Access your data for gemini.google.com
- Access your data for claude.ai
- Access your data for copilot.cloud.microsoft
- Access your data for m365.cloud.microsoft
This add-on may also ask to:
- Access your data for all websites
More information
- Add-on Links
- Version: 0.0.11
- Size: 134.5 KB
- Last updated: 20 days ago (Nov 7, 2024)
- Related Categories
- License: All Rights Reserved
- Privacy Policy: Read the privacy policy for this add-on
- Version History
- Tags