You Think Faster Than You Type

One of the most underrated problems in working with AI is how much you have to type.

There is a low ceiling on what you can say. Not because the AI can't handle it — but because typing is slow, and your thoughts aren't.

Mouse and keyboard is low bandwidth

If you look at the bandwidth and density of human-computer interaction — how quickly and effectively we can supply a system with information — mouse and keyboard is actually quite low, with a very high contextual load.

Every word requires your fingers. Your fingers require your focus. That fine motor loop pulls you slightly out of thought and into execution. You start compressing ideas to fit what's comfortable to type. Context gets left out. Prompts get shorter than they should be.

Typing and talking use different parts of the brain. Most people find it easy to talk — to yap, a lot, without thinking too much about it. Typing is different: it recruits your hands and their fine motor control, which forces slower, more deliberate thought.

Your voice doesn't have that overhead.

The in-between we already have

The most direct form of control over a computer would be a direct connection to our minds. Neuralink and others are working toward this — and I'm skeptical. Not of the science, but of what it means to hand that kind of access over to a company, or to anyone. A stable, non-threatening version of that doesn't really exist yet, and I'm not sure I'd want it to.

But we don't have to wait for that. We have voice.

You can widen the stream of instruction to an agentic system just by talking. It gives you hands-free, untethered, unfettered thought — the ability to express ideas in their entirety, without the filter that typing creates.

That filter isn't always bad. Sometimes the act of writing helps you think. But when you're prompting AI, moving between tasks, adding context — the filter is mostly just friction.

Apple should be better at this

On my phone I use Apple dictation constantly. Speech-to-text gives me hands-free, untethered thought — and it changes how I communicate.

But on desktop, it's unreliable. Not fast enough. Not accurate enough. And most frustratingly — it doesn't use its own capabilities. Apple has screen context, OCR, window titles, open app data. None of it feeds into hot-word correction or vocabulary. The API access is there. They just don't connect it.

The result is a dictation tool that doesn't know what's on your screen, can't correct "Figma" from context, and misses words that would be obvious to anything with a little more awareness.
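The kind of context-aware correction described here can be sketched in a few lines: harvest a vocabulary from screen context (window titles, OCR, open apps), then fuzzy-match transcript words against it. This is an illustrative sketch of the idea, not Apple's or Resonant's implementation; the `screen_vocab` list and the similarity cutoff are assumptions.

```python
import difflib

def correct_with_context(words, screen_vocab, cutoff=0.75):
    """Replace transcript words with close matches from on-screen vocabulary."""
    lowered = [v.lower() for v in screen_vocab]
    corrected = []
    for word in words:
        # Fuzzy match against context terms, e.g. a misheard "sigma" -> "Figma"
        match = difflib.get_close_matches(word.lower(), lowered, n=1, cutoff=cutoff)
        if match:
            # Restore the canonical casing from the vocabulary
            corrected.append(screen_vocab[lowered.index(match[0])])
        else:
            corrected.append(word)
    return corrected

# A dictation tool with screen awareness could bias toward visible app names:
print(correct_with_context(["open", "sigma", "please"], ["Figma", "Slack"]))
```

A real system would bias the recognizer itself rather than post-process text, but even this post-hoc version shows how cheap the missing piece is once screen context is available.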

What I built

This is why I built Resonant — a private, on-device speech-to-text tool for talking into your computer. No cloud. Fast, accurate, available everywhere.

A few things I've learned:

  • It's much, much more work than just prompting a Whisper clone. The UX surface area for voice is enormous and mostly invisible until someone hits it.
  • People who enjoy using their voice really enjoy voice-first interfaces. It's not a small improvement for them. It's a different way of working.

That last one stays with me. Voice isn't right for every context. But for anyone whose thinking outruns their typing — which is most people — it removes a constraint they didn't know they were carrying.

