Why is this comparison even important?

Generative AI shifts search from “find pages” to “write me an answer.” For readers this means faster insight; for businesses, better processes; and for the environment, a new question: how much energy does such an answer actually consume compared to a classic Google search? Understanding this relationship helps us make informed decisions about when AI truly adds value and when it is better to stick with search or simpler models.

How spending is generated in classic search

When you search, the browser sends a short request to data centers, where it is matched against pre-built indexes and caches. Much of the work is done in advance (indexing the web), so the per-query part is relatively short and energy-efficient. Typically we are talking about a fraction of a watt-hour (roughly 0.3 Wh per query), with small variance: most queries are similarly “lightweight.”
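To get a feel for what the ~0.3 Wh figure means at scale, here is a minimal back-of-envelope sketch; the per-query value comes from the text above, while the query volume is a purely illustrative assumption:

```python
# Back-of-envelope arithmetic for the ~0.3 Wh per search query figure.
# The per-query value comes from the article; the daily query volume
# below is an illustrative assumption, not a measured number.

WH_PER_SEARCH = 0.3  # Wh per classic search query (figure from the text)

def searches_energy_kwh(num_queries: int, wh_per_query: float = WH_PER_SEARCH) -> float:
    """Total energy in kWh for a given number of search queries."""
    return num_queries * wh_per_query / 1000.0

# Example: 100 searches a day for a whole year comes to only ~11 kWh,
# roughly a few days of an average household's electricity use.
yearly_kwh = searches_energy_kwh(100 * 365)
```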

Why generative AI can be more expensive

An LLM does not return links; it generates text. This requires:

- loading and running a large model for every request;
- producing the answer token by token, so longer answers cost proportionally more;
- for “reasoning” tasks, multiple internal steps before the final answer;
- for multimodal tasks (images, video), far heavier computation than for text.

Taken together, this means consumption can rise quickly: sometimes it stays comparable to search, sometimes it is an order of magnitude higher.
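A very simple way to model this is to let energy grow with the number of generated tokens. The per-token constant below is an illustrative assumption, chosen so that a ~300-token answer lands near the ~0.3 Wh search figure; it is not a measured value:

```python
# Toy cost model: energy grows with the number of generated tokens.
# WH_PER_TOKEN is an illustrative assumption, calibrated so that a
# ~300-token answer costs roughly as much as one classic search.

WH_PER_TOKEN = 0.001  # assumed Wh per generated token (illustrative)

def response_energy_wh(output_tokens: int, wh_per_token: float = WH_PER_TOKEN) -> float:
    """Rough energy estimate for generating a text response."""
    return output_tokens * wh_per_token

short_answer = response_energy_wh(300)     # ~0.3 Wh, comparable to search
long_reasoning = response_energy_wh(5000)  # ~5 Wh, an order of magnitude more
```

The point of the sketch is the shape, not the constants: more tokens (longer answers, multi-step reasoning) scale consumption roughly linearly.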

Fair comparison across scenarios

Below are indicative ranges for energy consumption per response/query. The figures are rough (they do not necessarily include all infrastructure overheads), but they show the differences between tasks.

| Scenario | Typical consumption (Wh per response) | Note |
| --- | --- | --- |
| Classic search (Google) | ~0.3 | Short, standardized process via indexes/caches. |
| AI: short text prompt | ~0.24–0.3 | Today's optimized systems can be comparable to search. (Google Cloud) |
| AI: “reasoning”/longer answer | ~5 and up | The number of tokens and multi-step reasoning raise consumption. (iea.blob.core.windows.net) |
| AI: image → generation/analysis | ~1–2 | Images are significantly more expensive than text. (iea.blob.core.windows.net) |
| AI: short video → generation | ~100+ | Example: ~115 Wh for ~6 seconds of video. (iea.blob.core.windows.net) |

What to remember: for short text tasks, AI can be on a par with search; for complex and multimodal tasks, consumption easily jumps by 10× or more.
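The table's figures can be turned into ratios relative to classic search; the numbers below are taken from the table, with midpoints of the ranges as my own simplification:

```python
# Ratios relative to classic search, using the table's figures.
# Midpoints for ranges are a simplification, not additional data.

WH_SEARCH = 0.3  # Wh per classic search query

scenario_wh = {
    "short text prompt": 0.27,  # midpoint of 0.24–0.3
    "reasoning answer": 5.0,    # lower bound of "~5 and up"
    "image": 1.5,               # midpoint of 1–2
    "short video": 115.0,       # the ~6-second example
}

# How many "searches" each scenario costs
ratios = {name: wh / WH_SEARCH for name, wh in scenario_wh.items()}
```

Even the lower bound for a “reasoning” answer is more than 16 searches, and a short video is several hundred.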

What has the greatest impact on consumption in practice?

- Model size: larger models burn more energy per token.
- Output length: more tokens means proportionally more compute.
- Reasoning: multi-step answers multiply the work before the final text.
- Modality: images, and especially video, are far more expensive than text.
- Context length: long prompts and histories raise the cost of each call.

How to reduce your footprint

For users

- Use classic search when a list of links is enough.
- Keep prompts and requested answers concise; extra tokens cost energy.
- Be sparing with image and especially video generation.

For developers/products

- Choose the smallest model that solves the task.
- Limit context length and cache repeated answers.
- Prefer one well-designed longer call over many redundant small ones.
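One practical lever for products is routing: send each request to the cheapest path that can handle it and reserve the large model for genuinely complex tasks. A minimal sketch; the thresholds and path names are illustrative assumptions, not part of any real product:

```python
# Minimal routing sketch: cheapest adequate path per request.
# Thresholds and path names are illustrative assumptions.

def route(query: str, needs_generation: bool, multimodal: bool) -> str:
    """Pick an execution path for a request."""
    if multimodal:
        return "large-model"   # images/video need the expensive path
    if not needs_generation:
        return "search"        # a list of links is enough
    if len(query.split()) < 20:
        return "small-model"   # short text tasks fit a cheaper model
    return "large-model"       # long/complex prompts go to the big model
```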

For businesses/IT

- Measure consumption per feature, not just in aggregate.
- Route simple queries to search or smaller models.
- Greener energy helps, but efficiency at the source (model, call, architecture) remains key.

Frequently asked questions and myths

Is AI always 10× more power-hungry than Google?
No. With short text prompts it can today be comparable to search; the difference explodes with complexity (reasoning, images, video).

Do multiple smaller calls consume less than one longer one?
Not necessarily. If each call involves a large model and a long context, one well-designed longer call can be more efficient.
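The intuition can be sketched with a simple model in which every call pays a fixed overhead (prompt processing, routing) on top of a per-token cost; both constants below are illustrative assumptions:

```python
# Comparing N short calls vs one longer call under a toy model with a
# fixed per-call overhead. Both constants are illustrative assumptions.

def total_energy_wh(num_calls: int, tokens_per_call: int,
                    overhead_wh: float = 0.1, wh_per_token: float = 0.001) -> float:
    """Total energy: each call pays a fixed overhead plus a per-token cost."""
    return num_calls * (overhead_wh + tokens_per_call * wh_per_token)

many_small = total_energy_wh(5, 400)  # five 400-token calls
one_long = total_energy_wh(1, 2000)   # one 2000-token call, same total tokens
```

With the same total number of tokens, the five small calls pay the fixed overhead five times, so the single longer call comes out cheaper.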

Does “green” electricity solve everything?
It helps, but efficiency at the source (model, call, architecture) remains key. Consuming less is always better than offsetting afterwards.

Conclusion

AI is neither an inherent “energy hog” nor “free magic.” It is the task context that stretches the spectrum from consumption comparable to search (short text) to an order of magnitude higher (reasoning, images, video). If you want good results with a small footprint, optimize the model, the call, and the execution path, and consider whether a generative answer is really the best choice for the task at hand.

Sources (for key figures):

- Google Cloud: figures for short text prompts (~0.24 Wh).
- IEA (iea.blob.core.windows.net): figures for reasoning, image, and video generation.