
TrojAI Browser Extension by Andrew
Experimental
In-flight LLM safety monitoring for web-based LLM applications. We monitor traffic from AI-enabled sites so your enterprise can safely enable them without worrying about data leakage and other issues.
About this extension
The emergence of AI, and specifically Large Language Models such as ChatGPT, has taken the world by storm. The landscape is changing daily, with more and more powerful LLMs released every month.
While these technologies offer unprecedented productivity gains for their users, they are not without risk. It is not always clear whether the inputs sent to these models are properly sanitized, or whether they are used for future training. Researchers are constantly identifying new attack vectors that circumvent the safety measures baked into LLMs, exposing users to harmful content or inducing data leaks.
TrojAI's LLM Monitor extension sits between you and the LLM and monitors your inputs to ensure sensitive data is not exposed to the model. We've designed the extension to integrate seamlessly into your current flow: you interact with LLMs directly in the browser as you normally would, and the extension warns you if you are about to send sensitive information, before the model is able to read it.
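To make that flow concrete, here is a minimal sketch of how this kind of in-browser interception can work in a WebExtension content script. Everything in it (the submit hook, the textarea selector, the two regex checks, the alert) is an illustrative assumption for this example, not TrojAI's actual detection logic:

```typescript
// Hypothetical content script sketch: check a prompt before it is submitted
// and hold it back if it appears to contain sensitive data. The selector and
// patterns below are illustrative assumptions, not TrojAI's implementation.

const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/;          // e.g. 123-45-6789
const EMAIL_PATTERN = /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/; // rough email match

function looksSensitive(text: string): boolean {
  return SSN_PATTERN.test(text) || EMAIL_PATTERN.test(text);
}

// A capture-phase listener runs before the page's own submit handler,
// so the prompt can be blocked before it ever leaves the browser.
document.addEventListener(
  "submit",
  (event) => {
    const form = event.target as HTMLFormElement;
    const prompt = form.querySelector("textarea")?.value ?? "";
    if (looksSensitive(prompt)) {
      event.preventDefault();
      event.stopImmediatePropagation();
      alert("Warning: this prompt appears to contain sensitive data.");
    }
  },
  true // capture phase
);
```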
We are actively developing the extension and looking to roll out new features quickly. Our current capabilities include the following:
- Personally identifiable information, or PII (credit cards, SSNs, emails, etc.); one such check is sketched after this list
- Toxicity (words that could induce harmful content in a response from the LLM)
- Prompt injection detection
- Jailbreak detection
- Data loss prevention (DLP)
- Multimodal protections and more
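As one concrete example of the PII checks above, a credit card detector can pair a digit-run pattern with the Luhn checksum to weed out random number strings. This is a generic sketch of the technique, not TrojAI's actual detector:

```typescript
// Illustrative PII check: validate candidate credit card numbers with the
// Luhn checksum to cut down on false positives from arbitrary digit runs.

function luhnValid(digits: string): boolean {
  let sum = 0;
  let double = false;
  // Walk the digits right to left, doubling every second one.
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Find 13-19 digit runs (spaces/dashes allowed) and keep Luhn-valid ones.
function findCardNumbers(text: string): string[] {
  const candidates = text.match(/\b(?:\d[ -]?){13,19}\b/g) ?? [];
  return candidates
    .map((c) => c.replace(/[ -]/g, ""))
    .filter(luhnValid);
}

console.log(findCardNumbers("Card: 4111 1111 1111 1111")); // ["4111111111111111"]
```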
Whether you're an individual looking to keep your own data safe or an organization seeking to safely onboard LLMs into your developer workflow, TrojAI's cutting-edge LLM monitoring extension is for you!
Permissions
This add-on needs to:
- Access browser tabs
- Access browser activity during navigation
- Access your data for chatgpt.com
- Access your data for www.bing.com
- Access your data for copilot.microsoft.com
- Access your data for www.office.com
- Access your data for gemini.google.com
- Access your data for claude.ai
- Access your data for copilot.cloud.microsoft
- Access your data for m365.cloud.microsoft
This add-on may also ask to:
- Access your data for all websites