Max Kennerly @maxkennerly.bsky.social

LLM "AI" is notable as a technology that gets *less* efficient with each iteration. Silicon Valley can't figure out how to improve the math and is unwilling to pay for model training by experts, so they instead "innovate" by creating bigger piles of scraped content & internet detritus.

9 replies 41 reposts 149 likes


Zeb Larson @zeblarson.bsky.social

A while ago I summed up generative AI as trying to solve complex problems by throwing more complexity at them, but it’s not even that. It’s just throwing a plate of spaghetti at a basketball hoop.

0 replies 0 reposts 5 likes


David Bailey @drgdave.bsky.social

Pay to GET RID OF IT, maybe...

0 replies 0 reposts 5 likes


zenosAnalytic @zenosanalytic.bsky.social

Yeah. I keep seeing fans of it who also care about global warming telling themselves it'll get less energy- and water-intensive with time, but the whole idea is a "Big Data" application (it WORKS by crunching ever-more input to create its outputs), so I don't see how that's possible.

0 replies 0 reposts 0 likes
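For what it's worth, the scaling-law literature gives that intuition a concrete form: training compute is commonly approximated as roughly 6 × parameters × training tokens (Kaplan et al., 2020), so every bigger pile of input translates directly into a bigger compute and energy bill. A minimal Python sketch (the GPT-3 figures are the published ones; the doubled run is hypothetical):

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs: ~6 * N * D (Kaplan et al. 2020)."""
    return 6 * n_params * n_tokens

# Published GPT-3 figures (Brown et al. 2020): ~175B parameters, ~300B tokens.
print(f"GPT-3-scale run:    {train_flops(175e9, 300e9):.2e} FLOPs")
# Hypothetical next iteration: doubling both model and data
# quadruples the compute (and energy) bill rather than shrinking it.
print(f"2x params, 2x data: {train_flops(350e9, 600e9):.2e} FLOPs")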


ZeroFunctionalValue @rayleighscatter.bsky.social

There's this AI presumption of a natural organization to knowledge which can be defined by an equation. Sadly the equation is: {√(Peanut Butter - Jupiter)²²× (Jeff/mC²+Manganese Nodules) at noon on Thursday}

0 replies 0 reposts 0 likes


Daniel Goldman @dgoldman.bsky.social

Here's why I cannot take you guys seriously. In part as an effort to make LLM technology more efficient, an entirely new neural architecture is being developed and considered. It's a MAJOR innovation, one of the largest in decades, and it could transform the very core of digital neuroscience.

3 replies 0 reposts 2 likes


CrazyITGuy42 @crazyitguy42.bsky.social

You know, I think I've seen this issue before, with recommendation systems (like what Netflix had). They worked as long as they were small-scale and specialized. But as soon as you threw tons of data at them, they would fail, because there were too many data points to make a good recommendation.

0 replies 0 reposts 0 likes
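One mechanism often blamed for that failure mode is sparsity: as the user and item catalogs grow, observed ratings cover a vanishing fraction of all possible (user, item) pairs, so neighborhood-style recommenders get less signal per prediction, not more. A minimal Python sketch with invented numbers:

def density(n_users: int, n_items: int, n_ratings: int) -> float:
    """Fraction of all (user, item) pairs with an observed rating."""
    return n_ratings / (n_users * n_items)

# Hypothetical numbers: a small niche service vs. a huge general one.
print(f"small/specialized: {density(1_000, 500, 50_000):.2%} of pairs rated")
print(f"huge/general:      {density(100_000_000, 10_000, 1_000_000_000):.4%} of pairs rated")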


ChatGPTom @siropsalot.bsky.social

No amount of training will fix the problems with LLMs. The problems are in both the training data and the model design. Models are optimized to produce text that appears human-like, but don't account for the truthfulness or trustworthiness of the source, because training data isn't catalogued that way.

0 replies 1 repost 4 likes
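This is the crux of the objective-function point: standard language-model training minimizes next-token cross-entropy, a purely statistical loss. A toy Python sketch (the probabilities are invented for illustration) showing that a fluent falsehood can score better than an awkward truth:

import math

def token_loss(p_next: float) -> float:
    """Standard LM objective: cross-entropy = -log P(observed next token)."""
    return -math.log(p_next)

# Invented probabilities for completing "The moon is made of ___":
print(token_loss(0.30))  # "cheese": common in the corpus -> low loss (~1.20)
print(token_loss(0.05))  # "anorthosite": true but rare -> high loss (~3.00)
# The loss rewards whatever is statistically typical in the training
# text; no term anywhere measures whether the continuation is true.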


TC Parker (she/her) @tcparker.bsky.social

I am more than ready to pay for things without it. Would pay an additional premium to never hear about it again, and continue to pay in instalments thereafter to have it eradicated from this timeline

0 replies 0 reposts 2 likes


Gina @ginatb2.bsky.social

My new favorite phrase: internet detritus 🤓

0 replies 0 reposts 3 likes