Google's AI Overviews must create entirely new information in response to a search query. That costs an estimated *30 times* more energy than simply extracting information from a source through a traditional search. 🧪 www.scientificamerican.com/article/what... by @parshallison.bsky.social
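Purely for scale, a hedged back-of-envelope sketch of what that 30x multiplier could mean over a day of queries; the per-search wattage and query volume below are assumptions for illustration, not figures from the article.

```python
# Back-of-envelope only: what a 30x per-query energy multiplier looks like at scale.
# Both constants are assumptions for illustration, not numbers from the article.
WH_PER_TRADITIONAL_SEARCH = 0.3   # assumed ~0.3 Wh per classic search (a commonly cited estimate)
AI_OVERVIEW_MULTIPLIER = 30       # the estimate quoted above
QUERIES_PER_DAY = 1_000_000_000   # assumed round number, purely illustrative

extra_wh_per_query = WH_PER_TRADITIONAL_SEARCH * (AI_OVERVIEW_MULTIPLIER - 1)
extra_mwh_per_day = extra_wh_per_query * QUERIES_PER_DAY / 1e6
# ~8,700 MWh/day with these made-up inputs
print(f"Extra energy if every query got an AI answer: ~{extra_mwh_per_day:,.0f} MWh/day")
```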
It seems like an article from Scientific American shouldn’t be conflating “training” with “use,” like it does when it misrepresents the study on BLOOM. And yet
If you need terabytes of data and more energy than the world's fourth-largest economy to create a chatty, half-assed assistant that would probably be fired on its first day if it were human, the problem is not the data. The problem is the design.
Critical thinking skills require recalling the ongoing interaction.
AI always restarts from zero with each interaction.
Ergo, no critical thinking skills in AI.
Boondoggles have more substance.
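To make "restarts from zero" concrete: chat-style LLM APIs are generally stateless, so any memory of the ongoing interaction is just whatever prior messages the caller resends each turn. A minimal sketch, with a stand-in `fake_llm` instead of any real API:

```python
def fake_llm(messages: list[dict]) -> str:
    """Stand-in for a stateless model call: it only 'knows' what is in `messages`."""
    seen = " | ".join(m["content"] for m in messages)
    return f"(reply based only on: {seen})"

# Turn 1: one user message.
history = [{"role": "user", "content": "My name is Ada."}]
print(fake_llm(history))

# Turn 2, naive: a fresh call with only the new message -- nothing from turn 1
# persists inside the model, so it starts from zero.
print(fake_llm([{"role": "user", "content": "What is my name?"}]))

# Turn 2, in practice: the caller replays the whole history every time.
# The continuity lives in the application, not in the model.
history += [
    {"role": "assistant", "content": "Hi, Ada."},
    {"role": "user", "content": "What is my name?"},
]
print(fake_llm(history))
```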
I still maintain that generative AI, in and of itself, is a tool with many legitimate uses. But it’s downright maddening how every major company that offers it insists on speedrunning through all the worst and most destructive ways to use it.
Scholars call BS on LLMs:
“Applications of large language models (LLMs) have been plagued by persistent inaccuracies in their output; these are often called ‘AI hallucinations.’”
"We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit”
I wonder how the energy use of a 1998 query, on far less efficient machines not architected specifically for PageRank, compares to that 30x number.
Because the old PageRank approach should have gotten cheaper and more efficient with GPUs.