Tauri Implementation
- Look for bridge.py in APPDATA (for downloaded Python)
- Look relative to the executable for the installed app
- Use PowerShell with -WindowStyle Hidden to prevent popup windows
- Copy source files to APPDATA after the Python download
Deface2 Development Setup
Overview
This guide explains how to set up your development environment for Deface2.
Prerequisites
- Windows 10/11 (primary development platform)
- NVIDIA GPU (for GPU acceleration testing)
- 8GB+ RAM (recommended for ML processing)
- 20GB+ free disk space
1. Install Core Tools
1.1 Python 3.11+
# Download from https://www.python.org/downloads/
# OR using winget (Windows 10+)
winget install Python.Python.3.11
# Verify installation
python --version
1.2 Node.js 20+ (LTS)
# Using winget
winget install OpenJS.NodeJS.LTS
# OR download from https://nodejs.org/
1.3 Rust Toolchain
# Download and run rustup-init.exe from https://rustup.rs/
# OR using winget
winget install Rustlang.Rustup
# Verify installation
rustc --version
cargo --version
1.4 Git
winget install Git.Git
2. Install Python Dependencies
Create a virtual environment and install ML dependencies:
# Create virtual environment
python -m venv venv
.\venv\Scripts\activate
# Install core dependencies
pip install --upgrade pip
pip install numpy opencv-python onnx
pip install onnxruntime  # use onnxruntime-gpu instead for CUDA support
# Install face detection libraries (we'll test which works best)
pip install mediapipe
# Install torch separately: --index-url replaces PyPI, and the PyTorch index
# only hosts torch packages, so mediapipe would not be found there.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install ultralytics # YOLOv8
# Install ORB-HD deface as reference
pip install deface
# Verify installations
python -c "import cv2; print(f'OpenCV: {cv2.__version__}')"
python -c "import mediapipe; print('MediaPipe OK')"
python -c "import onnxruntime; print(f'ONNX Runtime: {onnxruntime.__version__}')"
Note: The torch installation with cu118 is for CUDA 11.8. If you have a different CUDA version, check https://pytorch.org/get-started/locally/ for the correct command.
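The `cu118` suffix in the index URL is just the CUDA version with the dot removed; a tiny helper (hypothetical, for illustration) can derive the URL from a version number, though you should still confirm the exact command on pytorch.org:

```python
def torch_index_url(cuda_version: str) -> str:
    """Build the PyTorch wheel index URL for a given CUDA version.

    Follows the pattern of the cu118 example above, e.g.
    "11.8" -> ".../whl/cu118", "12.1" -> ".../whl/cu121".
    Always verify against pytorch.org/get-started/locally/.
    """
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"
```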
3. Install Tauri CLI
# Install Rust dependencies for Tauri
cargo install tauri-cli
# OR install via cargo-binstall (faster)
cargo install cargo-binstall
cargo-binstall tauri-cli
# Verify
cargo tauri --version
4. Set Up Node.js Dependencies for Tauri
# Navigate to project root
cd C:\Users\user\Downloads\Deface2
# Install dependencies (package.json will be created)
npm install
5. Project Structure
Deface2/
├── src/ # Source code
│ ├── core/ # Python ML core
│ │ ├── __init__.py
│ │ ├── detector.py # Face detection interface
│ │ ├── engines/ # Detection engine implementations
│ │ │ ├── mediapipe_engine.py
│ │ │ ├── yolov8_engine.py
│ │ │ └── deface_engine.py
│ │ └── anonymizer.py # Anonymization logic
│ │
│ ├── ui/ # Tauri frontend (Svelte)
│ │ ├── src/
│ │ │ ├── App.svelte
│ │ │ ├── lib/
│ │ │ └── main.js
│ │ ├── index.html
│ │ └── package.json
│ │
│ └── main.rs # Tauri entry point
│
├── tests/ # Tests
│ ├── unit/
│ └── integration/
│
├── scripts/ # Build and utility scripts
│ ├── build.bat
│ └── package.bat
│
├── resources/ # Icons, assets
│
├── .github/ # GitHub Actions CI/CD
│ └── workflows/
│
├── Cargo.toml # Rust dependencies
├── package.json # Node dependencies
├── pyproject.toml # Python dependencies
├── tauri.conf.json # Tauri configuration
└── README.md
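The `detector.py` / `engines/` split in the tree above implies a common detection interface with pluggable backends. A minimal sketch of that shape follows; only the file names come from the tree, while the class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Detection:
    # Bounding box in pixel coordinates plus a confidence score.
    x: int
    y: int
    w: int
    h: int
    score: float

class DetectionEngine(ABC):
    """Interface each engine module (mediapipe, yolov8, deface) would implement."""

    @abstractmethod
    def detect(self, frame) -> list[Detection]:
        """Return all face detections for one frame."""

class FaceDetector:
    """Facade in detector.py that delegates to a configured engine."""

    def __init__(self, engine: DetectionEngine):
        self.engine = engine

    def detect_faces(self, frame, min_score: float = 0.5) -> list[Detection]:
        # Filter out low-confidence detections before anonymization.
        return [d for d in self.engine.detect(frame) if d.score >= min_score]
```

Keeping the engines behind one interface makes the benchmark comparison in section 8 straightforward: each engine is swapped in without touching the anonymization logic.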
6. Run Development Server
Start Python ML Service (for development)
# Terminal 1: Start Python backend
cd Deface2
.\venv\Scripts\activate
python -m src.core.server
# Or run the face detection prototype
python tests/prototype/face_detection_test.py
Start Tauri Dev Server
# Terminal 2: Start Tauri UI
cd Deface2
cargo tauri dev
This will:
- Start the Tauri development window
- Connect to the Python backend via IPC
- Enable hot reload for UI changes
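The guide does not show the IPC protocol between the Tauri shell and the Python service. One common pattern is newline-delimited JSON over stdin/stdout; the sketch below assumes that pattern and illustrative command names, not the actual `src.core.server` implementation:

```python
import json
import sys

def handle_request(req: dict) -> dict:
    """Dispatch one JSON request to a handler (commands are illustrative)."""
    if req.get("cmd") == "ping":
        return {"ok": True, "result": "pong"}
    if req.get("cmd") == "detect":
        # A real handler would run the ML core on req["path"].
        return {"ok": True, "result": []}
    return {"ok": False, "error": f"unknown command: {req.get('cmd')}"}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """Read one JSON object per line, write one JSON response per line."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        resp = handle_request(json.loads(line))
        stdout.write(json.dumps(resp) + "\n")
        stdout.flush()

if __name__ == "__main__":
    serve()
```

The Rust side would spawn the Python process, write requests to its stdin, and read responses line by line.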
7. Build for Testing
Build Desktop App (Windows)
# Build production bundle
cargo tauri build
# Output will be in: src-tauri/target/release/bundle/
Build with Debug Info
cargo tauri build --debug
8. Testing GPU Acceleration
Verify CUDA/ONNX Runtime
.\venv\Scripts\activate
python -c "
import onnxruntime as ort
providers = ort.get_available_providers()
print(f'Available providers: {providers}')
print(f'Device: {ort.get_device()}')
"
Expected output should include CUDAExecutionProvider if your NVIDIA GPU is detected; this provider ships in the onnxruntime-gpu package, not the CPU-only onnxruntime package.
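When creating an `InferenceSession`, onnxruntime takes an ordered list of providers to try. A small helper (illustrative, not project code) that prefers CUDA when available and always keeps the CPU fallback:

```python
def pick_providers(available: list[str]) -> list[str]:
    """Order execution providers: CUDA first if present, CPU as fallback."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    picked = [p for p in preferred if p in available]
    # CPUExecutionProvider is always present in onnxruntime builds.
    return picked or ["CPUExecutionProvider"]
```

Usage would look like `ort.InferenceSession(model_path, providers=pick_providers(ort.get_available_providers()))`.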
Test Face Detection Engines
python tests/prototype/compare_engines.py --input tests/fixtures/sample.jpg
This will benchmark MediaPipe, YOLOv8, and ORB-HD/deface on the same image.
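A comparison script like this typically times each engine over repeated runs after a warmup pass (so lazy model loading doesn't skew the first sample). The actual `compare_engines.py` is not shown; a minimal stdlib-only timing harness might look like:

```python
import statistics
import time

def benchmark(fn, *args, runs: int = 10, warmup: int = 2) -> dict:
    """Time fn(*args) and report mean/median milliseconds per run."""
    for _ in range(warmup):
        fn(*args)  # warm caches and trigger lazy model loading
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "runs": runs,
        "mean_ms": statistics.fmean(samples),
        "median_ms": statistics.median(samples),
    }
```

Each engine's `detect` call would be passed in as `fn`, with the same decoded image as the argument, so all engines are measured on identical input.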
9. Troubleshooting
Python virtual environment not activating
# PowerShell execution policy may need adjustment
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
CUDA not detected
- Verify the NVIDIA driver: nvidia-smi
- Check the CUDA toolkit: nvcc --version
- Reinstall PyTorch with the correct CUDA version from pytorch.org
Tauri build fails
# Update Rust and Tauri
rustup update stable
cargo install tauri-cli --force
# Clean build
cargo clean
cargo tauri build
10. Useful Commands Summary
| Task | Command |
|---|---|
| Activate venv | .\venv\Scripts\activate |
| Install Python deps | pip install -r requirements.txt |
| Run Tauri dev | cargo tauri dev |
| Build production | cargo tauri build |
| Run tests | pytest tests/ |
| Lint Python | ruff check src/ |
| Format Python | black src/ |
Next Steps
- Complete environment setup (verify all tools work)
- Run python tests/prototype/face_detection_test.py to test the ML setup
- Run cargo tauri dev to launch the app
- Begin Phase 1.2: ML Core Prototype development