TrojAI Browser Extension by Andrew
In-Flight LLM Safety Monitoring for web based LLM applications. We monitor traffic coming from AI enabled sites to make sure your enterprise can safely enable these sites without the need to worry about data leakage and other issues.
Extension Metadata
About this extension
The emergence of AI, and specifically of Large Language Models such as ChatGPT, has taken the world by storm. The landscape changes daily, with more and more powerful LLMs released every month.
While these technologies offer unprecedented productivity gains for their users, they are not without risk. It is often unclear whether inputs sent to the models are properly sanitized, or whether they are retained for future training. Researchers are constantly identifying new attack vectors that circumvent the safety measures baked into LLMs, exposing users to harmful content or inducing data leaks.
TrojAI's LLM Monitor extension sits between you and the LLM and monitors your inputs to ensure sensitive data is not exposed to the model. We've designed the extension to integrate seamlessly into your current workflow: you interact with LLMs directly in the browser as you normally would, and the extension warns you if you are about to send sensitive information, before the model is able to read it.
We are actively developing the extension and aim to roll out new features quickly. Our current capabilities include:
- Personally identifiable information (credit cards, SSNs, email addresses, etc.)
- Toxicity (language that could induce harmful content in the LLM's response)
- Prompt injection detection
- Jailbreak detection
- Data loss prevention (DLP)
- Multimodal protections, and more
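As an illustrative sketch of what client-side PII screening of this kind can look like, the snippet below flags emails, SSNs, and Luhn-valid credit card numbers in a prompt before it is submitted. The patterns and function names here are hypothetical examples, not TrojAI's actual implementation:

```javascript
// Illustrative prompt screening, as might run in a content script.
// Patterns are examples only, not TrojAI's real detection logic.
const PII_PATTERNS = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
  creditCard: /\b(?:\d[ -]?){13,16}\b/,
};

// Luhn checksum to reduce false-positive card matches.
function luhnValid(digits) {
  let sum = 0;
  let alt = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (alt) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    alt = !alt;
  }
  return sum % 10 === 0;
}

// Returns the list of PII categories found in the prompt text.
function scanPrompt(text) {
  const findings = [];
  if (PII_PATTERNS.email.test(text)) findings.push("email");
  if (PII_PATTERNS.ssn.test(text)) findings.push("ssn");
  const card = text.match(PII_PATTERNS.creditCard);
  if (card && luhnValid(card[0].replace(/[ -]/g, ""))) {
    findings.push("creditCard");
  }
  return findings;
}
```

In a real extension, a non-empty result from a check like this would trigger a warning UI before the form submission reaches the LLM.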
Whether you're an individual looking to keep your own data safe or an organization seeking to safely onboard LLMs into your developer workflow, TrojAI's cutting-edge LLM monitoring extension is for you!
Rating: 1 (1 user)
Permissions and data
Required permissions:
- Block content on any page
- Access browser tabs
- Access browser activity during navigation
- Access your data for sites in the replit.com domain
- Access your data for sites in the replit.dev domain
- Access your data for sites in the repl.co domain
- Access your data for chatgpt.com
- Access your data for www.bing.com
- Access your data for copilot.microsoft.com
- Access your data for www.office.com
- Access your data for gemini.google.com
- Access your data for claude.ai
- Access your data for copilot.cloud.microsoft
- Access your data for m365.cloud.microsoft
- Access your data for app.devin.ai
- Access your data for devin.ai
- Access your data for aexp.devinenterprise.com
- Access your data for aexp-qa.devinenterprise.com
- Access your data for answers.capacity.com
- Access your data for ask.lucy.ai
- Access your data for insights.americanexpress.com
- Access your data for replit.com
Optional permissions:
- Access your data for all websites
Required data collection, according to the developer:
- Website activity
- Website content
More information
- Version
- 1.73.0
- Size
- 354.21 KB
- Last updated
- 10 days ago (May 6, 2026)
- License
- All Rights Reserved
- Privacy policy
- Read the privacy policy for this add-on
- Version history
- Add to collection