Voice Intelligence
Full voice pipeline on a single GPU. Speech recognition, intent routing, function execution, and spoken response in under 200ms with no cloud dependency.
Two models replace cloud voice stacks: LFM2.5-Audio-1.5B handles speech-to-text and text-to-speech, while LFM2-1.2B-Tool routes spoken intent to device functions. The full pipeline runs locally with no network round-trip, no per-query cost, and no data leaving the device. New voice commands deploy by updating the function schema.
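A minimal sketch of that loop, with hypothetical stt, route_intent, and tts stubs standing in for the two model runtimes (the real inference APIs will differ); only the dispatch logic is concrete:

```python
# Minimal sketch of the on-device loop. The three model calls are
# hypothetical stubs standing in for LFM2.5-Audio-1.5B (speech in/out)
# and LFM2-1.2B-Tool (intent routing); only the dispatch is concrete.

def stt(audio: bytes) -> str:
    """Speech -> text via LFM2.5-Audio-1.5B (stub)."""
    raise NotImplementedError

def route_intent(utterance: str, tools: list[dict]) -> dict:
    """Text -> tool call via LFM2-1.2B-Tool (stub), e.g.
    {"name": "set_temperature", "arguments": {"degrees": 72}}."""
    raise NotImplementedError

def tts(text: str) -> bytes:
    """Text -> speech via LFM2.5-Audio-1.5B (stub)."""
    raise NotImplementedError

# Device functions the model may call; keys match tool-schema names.
FUNCTIONS = {
    "set_temperature": lambda degrees: f"Temperature set to {degrees}.",
}

def handle_command(audio: bytes, tools: list[dict]) -> bytes:
    utterance = stt(audio)                 # 1. speech recognition
    call = route_intent(utterance, tools)  # 2. intent -> tool call
    result = FUNCTIONS[call["name"]](**call["arguments"])  # 3. execute
    return tts(result)                     # 4. spoken response
```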
2 specialist models
How It Works
Two specialist models in sequence, running the full voice pipeline on-device
From Spoken Command to Executed Action in Under 200ms
Cloud voice assistants serialize three API calls: speech recognition, intent processing, and speech synthesis. Each hop adds network latency and data exposure. Two specialist LFMs run the full pipeline locally: LFM2.5-Audio-1.5B converts speech to text and back, while LFM2-1.2B-Tool maps intent to one of 26 cockpit functions. Total latency stays under 200ms with zero cloud dependency.
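One way the routing step could look, using the Hugging Face transformers chat-template API. The "LiquidAI/" org prefix, the tool definition, and the handling of the generated call are illustrative assumptions; the exact tool-call output format is set by the model's chat template.

```python
# Sketch of the intent-routing step with Hugging Face transformers.
# The checkpoint path, the tool schema below, and the parsing of the
# generated tool call are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B-Tool"  # assumed Hub path for the model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

tools = [{
    "type": "function",
    "function": {
        "name": "set_temperature",
        "description": "Set the cabin temperature.",
        "parameters": {
            "type": "object",
            "properties": {"degrees": {"type": "integer"}},
            "required": ["degrees"],
        },
    },
}]

messages = [{"role": "user", "content": "it's freezing in here"}]
inputs = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=64)
# The completion contains a structured tool call (format defined by the
# model's chat template); parse it and dispatch to the device function.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```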
Open Vocabulary Across 26 Device Functions
Keyword spotters recognize 'set temperature to 72' but not 'it's freezing in here.' A tool-calling LFM understands natural language across climate control, media playback, navigation, window controls, and lighting. No rigid command grammar, no wake-word limitations. The function schema is the only constraint, and it updates without retraining.
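For example, the schema is plain data, so a new cockpit command ships as a config change rather than a model update. Every name and parameter below is illustrative, not a shipped schema:

```python
# The function schema is ordinary JSON-style data. Adding a voice
# command means appending an entry; no retraining is involved.
# All names and parameters here are illustrative.
COCKPIT_TOOLS = [
    {
        "name": "set_temperature",
        "description": "Set cabin temperature in degrees Fahrenheit.",
        "parameters": {
            "type": "object",
            "properties": {
                "degrees": {"type": "integer", "minimum": 60, "maximum": 85},
            },
            "required": ["degrees"],
        },
    },
    {
        "name": "set_window",
        "description": "Open or close a window.",
        "parameters": {
            "type": "object",
            "properties": {
                "position": {"type": "string", "enum": ["driver", "passenger"]},
                "state": {"type": "string", "enum": ["open", "closed"]},
            },
            "required": ["position", "state"],
        },
    },
]

# Deploying a new voice command: extend the schema, redeploy the config.
COCKPIT_TOOLS.append({
    "name": "set_ambient_lighting",
    "description": "Set interior ambient lighting color.",
    "parameters": {
        "type": "object",
        "properties": {"color": {"type": "string"}},
        "required": ["color"],
    },
})
```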
Automotive-Grade Voice AI Without Cloud Infrastructure
Connected-car voice systems fail in tunnels, parking garages, and rural areas. Cloud dependency means no voice control when connectivity drops. On-device LFMs provide consistent voice interaction regardless of network state. All audio processing stays on the vehicle hardware. No data transmitted, no privacy trade-offs, no per-query costs at fleet scale.
Try each model
All Demos
Ready to deploy in your environment?