Your audio never leaves this building. All transcription and AI processing happens on a local GPU workstation, not on AWS, Google, or OpenAI. Nothing is transmitted to any external AI service.
🤖 Local AI Models
• Whisper large-v3 – speech recognition, running on a local NVIDIA GPU
• pyannote – speaker identification, running locally
• Llama 3.1 – note generation via Ollama, running locally
No data is sent to OpenAI, Anthropic, or any third-party AI provider.
🔐 AES-256 Encryption
Transcripts stored in the cloud database are encrypted with AES-256 (the same standard used by banks and the military). Only the local server holds the decryption key.
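As a minimal sketch of what AES-256 encryption at rest looks like (illustrative only; the key handling and field names here are assumptions, not this product's actual implementation), a transcript can be sealed with AES-256-GCM so that only the holder of the 32-byte key can read it:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encrypt seals plaintext with AES-256-GCM. A 32-byte key selects AES-256.
// The random nonce is prepended to the ciphertext so decrypt can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the ciphertext; it fails if the
// data was tampered with or the wrong key is supplied.
func decrypt(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // held only on the local server
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, err := encrypt(key, []byte("session transcript"))
	if err != nil {
		panic(err)
	}
	pt, err := decrypt(key, ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt))
}
```

Because GCM is an authenticated mode, decryption also detects any modification of the stored ciphertext, not just unauthorized reads.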
💻 How It Works
1. Your phone records audio and sends it over an encrypted tunnel to the local PC
2. The PC transcribes using on-device AI (no internet needed for processing)
3. Results return to your phone via the same encrypted connection
4. Audio files are backed up to encrypted cloud storage, then cleared locally
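Step 1 above can be sketched as an authenticated HTTPS upload. This is a hypothetical client-side illustration, not the app's real code: the URL, endpoint path, and header values are placeholders.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// buildUploadRequest assembles the kind of request the phone app would send
// through the encrypted tunnel to the local PC. The hostname and path are
// illustrative placeholders.
func buildUploadRequest(audio []byte, apiKey string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost,
		"https://scribe.example-tunnel.com/v1/sessions",
		bytes.NewReader(audio))
	if err != nil {
		return nil, err
	}
	// Every request must carry the API key (see "HIPAA Alignment" below).
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "audio/wav")
	return req, nil
}

func main() {
	req, err := buildUploadRequest([]byte("...wav bytes..."), "demo-key")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Host)
}
```

TLS on the tunnel protects the audio in transit; the bearer token ensures only the authorized app can reach the transcription endpoint.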
🛡️ HIPAA Alignment
This system is designed with clinical privacy in mind:
• No third-party AI processors
• No audio stored on external servers
• Encrypted data at rest
• Encrypted data in transit (Cloudflare Tunnel)
• Access controlled by API authentication
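The API-authentication point in the list above can be illustrated with a constant-time key check, a common pattern for avoiding timing side channels (a sketch of the general technique, not this system's specific code):

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// authorized compares a presented API key against the expected key in
// constant time. Hashing both values first means neither the key length
// nor the position of the first mismatch leaks through timing.
func authorized(presented, expected string) bool {
	a := sha256.Sum256([]byte(presented))
	b := sha256.Sum256([]byte(expected))
	return subtle.ConstantTimeCompare(a[:], b[:]) == 1
}

func main() {
	fmt.Println(authorized("secret-key", "secret-key"))
	fmt.Println(authorized("wrong-key", "secret-key"))
}
```

A naive `presented == expected` string comparison can return faster the earlier the mismatch occurs, which is why security-sensitive checks use constant-time comparison.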
Built by Michael A. RoBards, LCSW · The Human Equation
Powered by Whisper, pyannote, Llama 3.1 & Cloudflare