Select a note type – IOP Group, Clinical, Intake, Discharge, or Meeting Minutes
2️⃣
Enter client name(s) – for groups, separate with commas: John, Jane, Bob
3️⃣
Tap record & speak – Whisper AI transcribes on the local GPU; pyannote identifies speakers
4️⃣
Notes auto-generate – Qwen3 AI drafts Medicaid-compliant documentation with PHI protection
5️⃣
Review, edit, export – edit notes, download DOCX, email, or push to billing
🔒 Privacy: Audio never leaves this building. Transcription runs on the local GPU. Only de-identified text reaches Cloudflare AI for note generation. All storage is AES-256 encrypted.
00:00
Tap to record
Processing...
Uploading
📁 Upload audio files (mp3, m4a, wav...)
Recordings
Sun
Mon
Tue
Wed
Thu
Fri
Sat
System Dashboard
How it works · What's running · AI transparency
AI PIPELINE – HOW YOUR VOICE BECOMES A NOTE
🎙️
1. You Speak
Audio recorded on your phone, stored locally. Never leaves this building until step 3.
↓
📡
2. Encrypted Upload
Audio travels through a Cloudflare Tunnel (encrypted) to the local server PC in the building. Audio never goes to the cloud.
↓
🧠
3. Whisper AI (Local GPU)
OpenAI's Whisper large-v3 transcribes speech → text. Runs 100% on the GPU in this building. Nothing leaves the network.
↓
👥
4. Speaker Identification (Local GPU)
pyannote figures out WHO said WHAT. "Speaker 1 said X, Speaker 2 said Y." Also 100% local โ no cloud.
↓
🔒
5. PHI Redaction
Before anything goes to cloud AI, names, SSNs, dates, addresses, and phone numbers are stripped out and replaced with placeholders.
always on
✨
6. AI Note Generation (Cloud)
De-identified text goes to Cloudflare Workers AI (Qwen3 30B) to generate clinical notes. Only sees "[CLIENT_1]" not real names. Falls back to local Ollama if cloud is down.
↓
🔐
7. PHI Restored & Encrypted
Real names are put back into the notes. Everything is encrypted with AES-256 before being saved to the database. Only this server can decrypt it.
always on
📝
8. Ready for Review
Medicaid-compliant notes ready for your review and signature. Edit anything before finalizing. Export as DOCX or email.
↓
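Steps 5 and 7 above amount to a reversible placeholder substitution: strip identifiers before the cloud call, put them back afterward. A minimal sketch of the idea, assuming simple regex patterns and a `redact`/`restore` pair that are illustrative only, not the app's actual redactor:

```python
import re

# Hypothetical patterns; a real redactor covers many more PHI types
# (names, dates, addresses, MRNs, etc.).
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text, client_names):
    """Replace PHI with placeholders; return redacted text plus a restore map."""
    mapping = {}
    for i, name in enumerate(client_names, start=1):
        placeholder = f"[CLIENT_{i}]"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    for label, pattern in PHI_PATTERNS.items():
        for j, match in enumerate(pattern.findall(text), start=1):
            placeholder = f"[{label}_{j}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping

def restore(text, mapping):
    """Put real values back after the cloud model returns the note."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text
```

The restore map never leaves the local server, so the cloud model only ever sees the placeholders.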
🖥️ GPU
Loading...
✨ AI Engine
Loading...
💾 Database
Loading...
📥 Queue
Loading...
🔍 Truth in AI – What You Should Know
AI is a tool, not a clinician. Every note generated is a draft. It may contain errors, miss context, or misinterpret speech. You must review and edit before signing.
The AI doesn't "understand" your clients. It recognizes patterns in language and generates text that looks like clinical notes. It has no clinical judgment, empathy, or therapeutic relationship. That's YOUR job.
Transcription is not perfect. Whisper AI is ~95% accurate in good conditions. Background noise, accents, overlapping speech, and quiet voices reduce accuracy. Always verify key details.
Speaker identification can be wrong. The system guesses who's talking based on voice characteristics. It can confuse similar-sounding speakers. Check that quotes are attributed correctly.
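One reason attribution slips: the transcript and the speaker turns are separate streams merged by time overlap, so a sentence that straddles a turn boundary gets whichever speaker overlaps it most. A hypothetical sketch of that merge (the segment and turn shapes here are assumptions, not pyannote's real output format):

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def attribute(segments, turns):
    """Assign each transcript segment the speaker whose turn overlaps it most."""
    labeled = []
    for seg in segments:
        best = max(
            turns,
            key=lambda t: overlap(seg["start"], seg["end"], t["start"], t["end"]),
        )
        labeled.append({**seg, "speaker": best["speaker"]})
    return labeled
```

A segment from 4.0–6.0 s against turns SPEAKER_1 (0–5.5 s) and SPEAKER_2 (5.5–10 s) is credited to SPEAKER_1 by a one-second margin – which is why quotes near turn boundaries deserve a second look.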
Your recordings stay in this building. Audio is transcribed on the GPU in this room. Only de-identified text (no names, no SSNs) reaches the cloud for note generation. Everything stored is encrypted.
You are the author of record. When you sign a note, you are certifying its accuracy โ not the AI. You are clinically and legally responsible for the content.
📊 System Data
Transcript
No notes generated yet.
Generating notes...
Loading stats...
AI Chat
Ask questions about your sessions
Settings
Configure your VoiceScribe
Ask the AI to diagnose issues, check system health, query data, or explain errors. It has read-only access to system diagnostics and the database.
Type a question about the system below...
Privacy & Security
🔒 Local Audio Processing
Your audio never leaves this building. All transcription and speaker identification happen on a local GPU workstation – not AWS, not Google, not OpenAI. Audio is never transmitted to any external AI service; only de-identified text is sent out for note generation.
🤖 AI Models
Whisper large-v3 – speech recognition on a local NVIDIA GPU
pyannote – speaker identification, running locally
Qwen3 30B – note generation via Cloudflare Workers AI (de-identified text only)
Llama 3.1 – local fallback for note generation via Ollama
No audio or identifiable data is sent to OpenAI, Anthropic, or any other third-party AI provider.
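The cloud-first, local-fallback behavior described in the pipeline can be sketched as a small try/except wrapper. The generator callables here are stand-ins for the real Cloudflare and Ollama clients, not the app's actual API:

```python
def generate_note(prompt, cloud_generate, local_generate):
    """Try the cloud model first; fall back to the local model on any failure.

    cloud_generate / local_generate are injected callables (stand-ins for
    Cloudflare Workers AI and local Ollama clients). Returns the note text
    plus a tag saying which backend produced it.
    """
    try:
        return cloud_generate(prompt), "cloud"
    except Exception:
        # Tunnel down, timeout, rate limit, etc. -> use the local model.
        return local_generate(prompt), "local"
```

Because the prompt is already de-identified at this point, the same redacted text can be sent to either backend.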
🔐 AES-256 Encryption
Transcripts stored in the cloud database are encrypted with AES-256 (the same standard used by banks and the military). Only the local server holds the decryption key.
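AES-256 at rest can be illustrated with AES-GCM from the widely used Python `cryptography` package. This is a generic sketch of the technique, not the server's actual code:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def new_key() -> bytes:
    # 256-bit key; in the real system this lives only on the local server.
    return AESGCM.generate_key(bit_length=256)

def encrypt(key: bytes, plaintext: str) -> bytes:
    nonce = os.urandom(12)                      # unique per message
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ct                           # store nonce alongside ciphertext

def decrypt(key: bytes, blob: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()
```

Anything stored without the key is opaque ciphertext; GCM also authenticates the data, so tampering makes decryption fail rather than return garbage.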
💻 How It Works
1. Your phone records audio and sends it over an encrypted tunnel to the local PC
2. The PC transcribes using on-device AI (no internet needed for transcription)
3. Results return to your phone via the same encrypted connection
4. Audio files are backed up to encrypted cloud storage, then cleared locally
🛡️ HIPAA Alignment
This system is designed with clinical privacy in mind:
• No third-party AI processors
• No audio stored on external servers
• Encrypted data at rest
• Encrypted data in transit (Cloudflare Tunnel)
• Access controlled by API authentication
Built by Michael A. RoBards, LCSW · A Vision For You
Powered by Whisper, pyannote, Llama 3.1 & Cloudflare