We've put real engineering work into running our servers as efficiently as possible. That means keeping them near full utilization, using hardware purpose-built for energy-efficient AI, and partnering with cloud providers that run industry-leading data centers.
The numbers
| Recording length | Energy | Water | Equivalent to a lightbulb running for | Equivalent to this many Google searches |
|---|---|---|---|---|
| 15 minutes | ~2 Wh | ~7 mL (about 1.5 teaspoons) | LED: 12 min; incandescent: 2 min | ~7 |
| 45 minutes | ~4.5 Wh | ~15 mL (about 1 tablespoon) | LED: 25 min; incandescent: 4 min | ~15 |
| 90 minutes | ~8 Wh | ~27 mL (about 2 tablespoons) | LED: 50 min; incandescent: 8 min | ~27 |
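The bulb and search columns are simple unit conversions from the energy figure. Here is a minimal sketch of that arithmetic, assuming a 10 W LED, a 60 W incandescent, and a commonly cited ~0.3 Wh per plain Google search; these wattages and the per-search figure are illustrative assumptions, not Twofold's published inputs:

```python
# Back-of-envelope equivalences for an AI note's energy cost.
# Assumed reference points (illustrative, not measured values):
LED_WATTS = 10            # typical LED bulb
INCANDESCENT_WATTS = 60   # typical incandescent bulb
WH_PER_SEARCH = 0.3       # commonly cited estimate for a plain web search

def equivalences(note_wh: float) -> dict:
    """Convert a note's energy cost in watt-hours into everyday comparisons."""
    return {
        "led_minutes": round(note_wh / LED_WATTS * 60),
        "incandescent_minutes": round(note_wh / INCANDESCENT_WATTS * 60),
        "google_searches": round(note_wh / WH_PER_SEARCH),
    }

# A 15-minute recording at ~2 Wh:
print(equivalences(2))
# → {'led_minutes': 12, 'incandescent_minutes': 2, 'google_searches': 7}
```

Small rounding differences against the table (e.g. 25 vs. 27 LED minutes at 4.5 Wh) come down to the exact bulb wattage assumed.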
How to read the Google search comparison
The numbers above compare to plain Google searches, the kind without an AI Overview at the top. AI Overviews use significantly more energy and water, much closer to what an AI note costs.
In other words: if you're comfortable searching the web today, you're already in the same ballpark as generating a Twofold note.
Why we're sharing this
Clinicians ask us about this more often as AI becomes a bigger part of healthcare workflows, and the question deserves a real answer with real numbers, not a vague reassurance. We'd rather show our work.
If the numbers ever change meaningfully (better hardware, more efficient models, different data center partners), we'll update this page.
These estimates are based on published research on LLM inference (Jegham et al., 2025) and our actual hardware profile.
