Open-source Android assistant replaces Google Assistant with local-first voice control
A new open-source Android app lets developers replace Google Assistant with their own AI backend. OpenClaw Assistant, released on GitHub this week, registers as Android's system voice assistant and routes queries to any custom webhook.
The implementation matters because it solves a problem enterprise developers have faced: integrating custom AI models with Android's native voice interaction APIs. Most alternatives require cloud dependencies or proprietary SDKs. OpenClaw Assistant runs offline wake word detection using Vosk models and works with any backend that accepts POST requests.
Technical approach
The app uses Android's VoiceInteractionService to hook into the long-press Home button gesture. When activated, it transcribes speech via Android's built-in SpeechRecognizer, sends the text to a configured webhook, and speaks the response via text-to-speech. The wake word detection runs entirely on-device using Vosk's offline models.
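In outline, that loop is small. Below is a minimal Kotlin sketch of the transcribe-then-speak flow; the VoicePipeline class and askBackend callback are hypothetical names for illustration, not OpenClaw's actual code, and a production app would dispatch the backend call off the main thread.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import android.speech.tts.TextToSpeech

// Hypothetical helper sketching the recognize -> backend -> speak loop.
class VoicePipeline(private val context: Context, private val askBackend: (String) -> String) {

    private val tts = TextToSpeech(context) { /* init status ignored in this sketch */ }

    fun listenOnce() {
        val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                // Take the top transcription hypothesis.
                val text = results
                    ?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull() ?: return
                // NB: a real app must make this network call off the main thread.
                val reply = askBackend(text)
                tts.speak(reply, TextToSpeech.QUEUE_FLUSH, null, "reply")
            }
            // Remaining callbacks elided; a real implementation handles errors and retries.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
        recognizer.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        })
    }
}
```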
The stack: Kotlin with Jetpack Compose and Material 3 for the UI, Vosk for wake word detection, OkHttp for network calls, and Android's SpeechRecognizer and text-to-speech APIs for voice I/O.
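Vosk makes on-device keyword spotting straightforward because its recognizer accepts a restricted grammar. The sketch below, built on Vosk's Android bindings, assumes a wake phrase of "hey assistant"; the actual phrase, model loading, and listener wiring in OpenClaw Assistant may differ.

```kotlin
import org.vosk.Model
import org.vosk.Recognizer
import org.vosk.android.RecognitionListener
import org.vosk.android.SpeechService

// Grammar-constrained keyword spotting: only the wake phrase and [unk] can match,
// so everything else is rejected cheaply. Requires the RECORD_AUDIO permission.
fun startWakeWordListener(model: Model, onWake: () -> Unit): SpeechService {
    val recognizer = Recognizer(model, 16000.0f, """["hey assistant", "[unk]"]""")
    val service = SpeechService(recognizer, 16000.0f)
    service.startListening(object : RecognitionListener {
        override fun onResult(hypothesis: String) {
            // hypothesis is a small JSON blob like {"text": "hey assistant"};
            // a substring check keeps the sketch short.
            if (hypothesis.contains("hey assistant")) onWake()
        }
        override fun onPartialResult(hypothesis: String) {}
        override fun onFinalResult(hypothesis: String) {}
        override fun onError(e: Exception) {}
        override fun onTimeout() {}
    })
    return service
}
```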
The backend can be anything that accepts a JSON POST with a "message" field and returns a "response" field. The developer tested it with OpenClaw's existing backend, which supports Anthropic Claude, OpenAI, and local models via Ollama.
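That contract keeps the client side to a single OkHttp call. A minimal sketch, assuming the documented message/response fields; the URL and error handling are placeholders:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

private val client = OkHttpClient()

// POST {"message": "..."} to the configured webhook and read back {"response": "..."}.
fun askBackend(webhookUrl: String, message: String): String {
    val body = JSONObject().put("message", message).toString()
        .toRequestBody("application/json".toMediaType())
    val request = Request.Builder().url(webhookUrl).post(body).build()
    client.newCall(request).execute().use { resp ->
        // Any backend works as long as the response body carries a "response" field.
        return JSONObject(resp.body!!.string()).getString("response")
    }
}
```

Any server that honors those two fields works, which is what makes the webhook contract swappable across Claude, OpenAI, or an Ollama-hosted local model.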
Context: Local-first AI momentum
This release aligns with OpenClaw's broader platform evolution. In early 2026, OpenClaw launched agent-to-agent communication features where AI assistants share automation techniques across a dedicated network. The platform previously supported voice interaction on Android through ElevenLabs integration, but required manual activation via messaging apps.
The local-first architecture addresses privacy concerns with cloud-dependent assistants. Conversations stay on device unless explicitly sent to external APIs. OpenClaw claims efficiency gains of up to 180x in certain workflows through its automation capabilities, though that figure comes from vendor documentation.
Trade-offs
The app requires technical setup: webhook configuration, API keys if using external LLMs, and Android developer mode to install. That limits mainstream adoption but serves developers building custom voice interfaces.
Security considerations apply when granting system-level assistant access. The local-first model mitigates some risks, but any backend that receives voice transcripts takes on the responsibility of securing them.
The real test: whether enterprise teams building Android voice interfaces adopt this over cobbling together their own implementations. The code is available now. We'll see if it ships in production environments.