Inhibitor Assist by AppliedAI Studio
Inhibitor Assist by appliedAIstudio. Spot, rethink, and correct risky AI prompts before you send them.
Extension metadata
About this extension
Assist is a quiet companion for AI use on campus. When you type a prompt into ChatGPT, Claude, or Gemini, Assist glances at it before you press send and gently flags anything that conflicts with your university's policies: student records, draft research, identifying classmates, exam material, and similar concerns. You stay in control. Assist only ever advises, asks for confirmation, or suggests a redacted rewrite; only for a small, clearly marked set of strict policies does it block the send.
WHAT MAKES ASSIST DIFFERENT
· No account, no institutional dashboard, no analytics SDKs. Your preferences and the rolling intervention log live only on your device, in browser local storage.
· One outbound check per send, to your institution's chosen model. When your university packages Assist with an LLM key, every send is checked by the institution's chosen model (the default build uses Anthropic Claude; the institution can swap it). The model receives your prompt and the policy bundle, decides whether to intervene, and writes the explanation. The provider's data-handling terms apply (Anthropic does not use API content for training). If the build was not packaged with a key, no inhibition runs and your prompts pass through the extension untouched.
· On-device policy bundle modelled on real university AI-use guidance: PII / FERPA, draft research, exam material, attributing AI work, financial aid, classmate identification, clinical data, and others. Each policy has a default mode (advise / rewrite) and you can override any of them in settings.
· Calm, not loud. When something matches, Assist shows a small Shadow-DOM toast in the corner of the page. It explains what matched and why, offers a redacted rewrite when one helps, and lets you confirm or cancel. The host page's send is deferred until you decide.
· Pause it whenever. One click in the toolbar popup pauses Assist on the current site. A second click brings it back. The pause state is per-host and persists.
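The per-host pause described above could be kept in `browser.storage.local` roughly as follows. This is an illustrative sketch, not Assist's actual source: the helper names are invented, and the storage object is stubbed with an in-memory Map so the sketch runs outside a browser (in an extension, `browser.storage.local` provides the same promise-based `get`/`set`).

```javascript
// Stand-in for browser.storage.local so this sketch runs outside a browser.
const storage = (() => {
  const data = new Map();
  return {
    async get(key) { return { [key]: data.get(key) }; },
    async set(obj) { for (const [k, v] of Object.entries(obj)) data.set(k, v); },
  };
})();

// Pause state is keyed by host, so pausing chatgpt.com leaves claude.ai active.
async function togglePause(host) {
  const { pausedHosts = {} } = await storage.get("pausedHosts");
  pausedHosts[host] = !pausedHosts[host];
  await storage.set({ pausedHosts });
  return pausedHosts[host]; // true = now paused
}

async function isPaused(host) {
  const { pausedHosts = {} } = await storage.get("pausedHosts");
  return Boolean(pausedHosts[host]);
}
```

Because the map persists in extension storage rather than page state, the pause survives reloads and stays scoped to the one site.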
POLICIES INCLUDED OUT OF THE BOX
· Student records & FERPA
· Draft research / unpublished manuscripts
· Exam questions and answer keys
· Identifying classmates by name
· Attributing AI work as your own
· Financial aid and account numbers
· Clinical / patient data
· Health information about minors
· Job application materials
· Personal contact details (yours and others')
· Login credentials / API keys
· Confidential institutional documents
EVERYTHING IS LOCAL
Your preferences, policy overrides, paused sites, and a rolling intervention log all live in browser.storage.local on your device. The intervention log is used only to render the 24-hour counts in the toolbar popup. Nothing is exported anywhere.
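The 24-hour popup counts described above could be derived from such a rolling log like this. The entry shape and function names are assumptions for illustration, not Assist's actual code.

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Assumed entry shape: { at: <epoch ms>, policy: "ferpa", action: "advise" }.
// Counts interventions per policy over the trailing 24 hours.
function countsLast24h(log, now = Date.now()) {
  const counts = {};
  for (const entry of log) {
    if (now - entry.at <= DAY_MS) {
      counts[entry.policy] = (counts[entry.policy] || 0) + 1;
    }
  }
  return counts;
}

// Dropping entries older than 24 hours keeps the log "rolling" and bounded.
function pruneLog(log, now = Date.now()) {
  return log.filter((entry) => now - entry.at <= DAY_MS);
}
```

Since only aggregate counts are rendered and old entries are pruned, nothing in the log needs to leave the device.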
TARGET AUDIENCE
Assist is for university students, staff, and IT teams piloting an institution-wide AI governance approach. It runs on chatgpt.com, claude.ai, and gemini.google.com.
Rated 0 by 0 reviewers
Permissions and data
Required permissions:
- Access your data for chatgpt.com
- Access your data for claude.ai
- Access your data for gemini.google.com
Optional permissions:
- Access your data for chatgpt.com
- Access your data for claude.ai
- Access your data for gemini.google.com
- Access your data for iaas.appliedai.studio
- Access your data for api.anthropic.com
Required data collection, according to the developer:
- Website content
More information
- Version: 2.9.0
- Size: 191.53 KB
- Last updated: 5 days ago (May 6, 2026)
- License: MIT License
- Privacy policy: read the privacy policy for this add-on