Local Manga Translator by Camekan

Local Manga Translator allows you to read raw Manga (Japanese), Manhwa (Korean), and Manhua (Chinese) by capturing text from your browser and translating it using a powerful AI server running locally on your computer.

Some features may require payment

About this extension
🦊 Firefox Add-on Description & Instructions

⚠️ IMPORTANT: REQUIRES COMPANION SCRIPT
This add-on is a "connector" tool. To perform the translation, you must run the free companion Python script (manga_server.py) on your computer.
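
The add-on and the script talk over plain HTTP on 127.0.0.1:5000 (the address listed under the optional permissions below). As a rough illustration of what such a companion server looks like, here is a minimal Flask sketch; the /translate route and the response fields are assumptions for illustration, not the actual routes defined in manga_server.py:

# Minimal sketch of a local companion server. The /translate route and the
# JSON fields are placeholders; manga_server.py defines its own endpoints.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/translate", methods=["POST"])
def translate():
    # The add-on would POST a captured image region here.
    image_bytes = request.files["image"].read()
    # ... run OCR and the local LLM on image_bytes ...
    return jsonify({"translation": "..."})

if __name__ == "__main__":
    # Listen only on localhost, matching the add-on's 127.0.0.1:5000 permission.
    app.run(host="127.0.0.1", port=5000)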

✨ Key Features
  • Universal Hardware Support: Works on NVIDIA (CUDA), AMD/Intel (Vulkan), or CPU.
  • Specialized Japanese OCR: Uses Manga-OCR to read vertical, handwritten, and messy manga text perfectly.
  • Advanced Bubble Detection: Now uses Comic-Text-Detector (specialized for Manga/Manhwa) to accurately split connected bubbles and ignore background noise.
  • Smart Korean Mode: Uses PaddleOCR for high-accuracy recognition of Korean webtoons.
  • Natural AI Translation: Connects to local LLMs (like Qwen, Llama 3) for human-quality translation.
  • 100% Private & Free: No API keys, no monthly fees. Everything runs offline.
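
The features above chain together as OCR followed by a local LLM pass. A minimal sketch of that chain using manga-ocr and llama-cpp-python (the model path, image file, and prompt wording are placeholders, not values taken from manga_server.py):

# Sketch: Japanese OCR followed by local LLM translation.
# Paths and the prompt are placeholders, not taken from manga_server.py.
from manga_ocr import MangaOcr
from llama_cpp import Llama

mocr = MangaOcr()  # downloads the Manga-OCR weights on first use
llm = Llama(model_path="Qwen2.5-14B-Instruct-Q4_K_M.gguf", n_ctx=2048)

japanese = mocr("bubble.png")  # read text from a cropped speech bubble
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "Translate Japanese manga dialogue into natural English."},
    {"role": "user", "content": japanese},
])
print(out["choices"][0]["message"]["content"])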



🛠️ Step-by-Step Installation Guide

Step 1: Install Visual Studio Build Tools (Windows Only)
  1. Download Visual Studio Build Tools 2022 from the Microsoft website.
  2. Run the installer.
  3. CRITICAL: Select the workload named "Desktop development with C++".
  4. Ensure the checklist on the right includes "Windows 10/11 SDK" and "MSVC... C++ x64/x86 build tools".
  5. Click Install and wait for it to finish.

Step 2: Install Python
  1. Download Python 3.10.11 from python.org.
  2. CRITICAL: During installation, check the box "Add Python to PATH".

Step 3: Download & Setup the Server Files

1. Get the Main Script
  • Download manga_server.py from https://github.com/Camekan/Manga_server.py/blob/main/manga_server.py and place it in a new folder (e.g., C:\MangaTranslator).

2. Get the Comic Detector (Critical Step)
  • Download this ZIP file: https://github.com/dmMaze/comic-text-detector/archive/refs/heads/master.zip
  • Extract the ZIP. You will see a folder named comic-text-detector-master.
  • RENAME that folder to: comic_text_detector
  • ⚠️ Important: Use underscores _, not dashes -.
  • MOVE this folder next to manga_server.py.

3. Get the Detector Model (Optional - Auto-downloads on first run)
  • The script will try to download this automatically. If it fails, do this:
  • Download the model file: https://github.com/zyddnys/manga-image-translator/releases/download/beta-0.2.1/comictextdetector.pt
  • Create a new folder named models inside your comic_text_detector folder if it isn't already there.
  • Place the .pt file there.
  • Correct Path: C:\MangaTranslator\comic_text_detector\models\comictextdetector.pt

Your folder must look exactly like this:

MangaTranslator/
├── manga_server.py
└── comic_text_detector/           <-- the folder you renamed
    ├── inference.py               <-- file inside
    ├── basemodel.py               <-- file inside
    └── models/                    <-- folder inside
        └── comictextdetector.pt
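
Before starting the server you can sanity-check the layout with a short Python script (assuming the example path C:\MangaTranslator used above):

# Quick sanity check for the folder layout described above.
from pathlib import Path

base = Path(r"C:\MangaTranslator")
expected = [
    base / "manga_server.py",
    base / "comic_text_detector" / "inference.py",
    base / "comic_text_detector" / "basemodel.py",
    # The model file may be missing until the first run downloads it.
    base / "comic_text_detector" / "models" / "comictextdetector.pt",
]
for path in expected:
    print(("OK      " if path.exists() else "MISSING ") + str(path))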

Step 4: Install Dependencies

Open Command Prompt (cmd) inside your MangaTranslator folder (e.g., C:\MangaTranslator) and run these commands:

1. Install Basic Tools:

pip install flask manga-ocr pytesseract pillow opencv-python numpy requests

2. Install Korean OCR (PaddleOCR):

pip install paddlepaddle paddleocr protobuf==3.20.3

3. Install AI Engine (Choose Your Hardware):
  • Option A: NVIDIA Users (Best Performance)

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
  • Option B: AMD / Intel Users (Vulkan Mode)
  • Download and install the Vulkan SDK.
  • Run this command:

set CMAKE_ARGS="-DGGML_VULKAN=on" && pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
  • Option C: CPU Only (Compatible but Slower)

pip install llama-cpp-python
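
To confirm the packages installed correctly before continuing, you can try importing each one from Python; anything that failed to install will show up as an ImportError:

# Import check for the dependencies installed above.
import importlib

for name in ["flask", "manga_ocr", "pytesseract", "PIL", "cv2",
             "numpy", "requests", "paddleocr", "llama_cpp"]:
    try:
        importlib.import_module(name)
        print(f"OK       {name}")
    except ImportError as e:
        print(f"MISSING  {name}: {e}")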

Step 5: Get an AI Model
  1. Download a .gguf model (Recommended: Qwen2.5-14B-Instruct-Q4_K_M.gguf) from HuggingFace.
  2. Open manga_server.py with a text editor (like Notepad) and set the MODEL_PATH to point to your downloaded file.
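
The exact line depends on the script version, but the edited setting will look roughly like this (example path only, yours will differ):

# Example only: point MODEL_PATH at the .gguf file you downloaded.
MODEL_PATH = r"C:\MangaTranslator\Qwen2.5-14B-Instruct-Q4_K_M.gguf"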

Step 6: Run & Read!
  1. Double-click manga_server.py to start the server.
  2. First run note: It will automatically download the detection model (~100MB). Wait for it to finish.
  3. Open a manga page in Firefox.
  4. Press Alt+Q (or your custom hotkey) and draw a box over the text!
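
If nothing happens when you press the hotkey, a quick way to check whether the server is actually listening is a small reachability probe against 127.0.0.1:5000 (the root path here is only a probe, not a documented endpoint; any response, even a 404, means the server is up):

# Reachability probe: is anything listening on 127.0.0.1:5000?
import requests

try:
    r = requests.get("http://127.0.0.1:5000/", timeout=3)
    print("Server is up, HTTP status:", r.status_code)
except requests.exceptions.ConnectionError:
    print("Nothing is listening on 127.0.0.1:5000 - is manga_server.py running?")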

🔗 Download the Script Here: https://github.com/Camekan/Manga_server.py/blob/main/manga_server.py
Bug Reports: Please report issues on the GitHub Issues page.
Permissions and data

Required permissions:

  • Access your data for all websites

Optional permissions:

  • Access your data for 127.0.0.1:5000
  • Access your data for 127.0.0.1
More information

Add-on links
  • Support site

Version: 2.8
Size: 16.04 KB
Last updated: 23 days ago (Jan 21, 2026)

Related categories
  • Language support
  • Photos, music & videos

License: MIT License
Privacy policy: read the privacy policy for this add-on

Version history
  • See all versions