a thoughtful web.
Good ideas and conversation. No ads, no tracking.
too much information
average title length
last 5 posts: 47.2
last 50 posts: 62.7

tagging statistics
no tags: 21.2%
one tag: 37.4%
two tags: 62.6%
community tagged: 21.8%

most popular poster
last 5 posts: kleinbl00
last 50 posts: NikolaiFyodorov

hubski style popularity:
clean (default): 98.3%
dark: 0.8%
snow: 0.1%
spring: 0.3%
office: 0.3%
ugly: 0.0%
d20: 0.0%

zen users
0.3%

last comment
kleinbl00  ·  1 hour ago  ·  post: Ed Zitron has lost all patience with your AI Boosterism

The first phrase that really contained my understanding and experience of LLMs was "stochastic parrot."

The parrot has context and analysis around what it says, but that context and analysis is very... parrot-centric. If I say that parrots mostly speak in non-sequiturs, very few people will argue with me. If I say that there's a certain randomness (stochasticity) to parrot speech, the chin-strokers will nod. But if I say an LLM has less of a handle on its outputs than a parrot, Blake Lemoine will tell me I'm a monster and the TESCREAL posse will laugh and point about how I won't be able to scream when I have no mouth.

The most recent phrase that gets to the heart of the problem is "bag of heuristics." I came across it in this piece, which talks about "world models" and "brittleness" - what are the LLMs solving for, and is that anything like what we're solving for?

    The OthelloGPT world-model story faced a new complication when, in mid-2024 a group of student researchers released a blog post entitled “OthelloGPT Learned a Bag Of Heuristics.” The authors were part of a training program created by DeepMind’s Neel Nanda, and their project was to follow up on Nanda’s own work, and do careful experiments to look more deeply into OthelloGPT’s internal representations. The students reported that, while OthelloGPT’s internal activations do indeed encode the board state, this encoding is not a coherent, easy-to-understand model like, say, an orrery, but rather a collection of “many independent decision rules that are localized to small parts of the board.” As one example, they found a particular neuron (i.e., neural network unit) at one layer whose activation represents a quite specific rule: “If the move A4 was just played AND B4 is occupied AND C4 is occupied, then update B4 and C4 and D4 to ‘yours’ [assuming the mine, yours, or empty classification labels]”. Another neuron’s activation represents the rule “if the token for B4 does not appear before A4 in the input string, then B4 is empty.”
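Read literally, the two neuron rules in that excerpt are just narrow conditionals. A minimal Python sketch (hypothetical names and board encoding, not the network's actual representation) makes the point: each "heuristic" is a tiny, local rule, not a coherent model of the game.

```python
# Illustrative only: the two decision rules the students describe,
# written out as explicit conditionals. Squares are keyed "A4"-style;
# cell states are "mine", "yours", or "empty".

def rule_a4_flip(board, last_move):
    # "If the move A4 was just played AND B4 is occupied AND C4 is
    #  occupied, then update B4 and C4 and D4 to 'yours'."
    if last_move == "A4" and board.get("B4") != "empty" and board.get("C4") != "empty":
        for sq in ("B4", "C4", "D4"):
            board[sq] = "yours"
    return board

def rule_b4_empty(board, move_sequence):
    # "If the token for B4 does not appear before A4 in the input
    #  string, then B4 is empty."
    if "A4" in move_sequence:
        before_a4 = move_sequence[:move_sequence.index("A4")]
        if "B4" not in before_a4:
            board["B4"] = "empty"
    return board
```

Hundreds of rules like these, each keyed to a small patch of the board, can collectively reproduce the board state without anything resembling an orrery sitting underneath.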

I think there's a real dividing line here: the credulous are all about "it gives me what I want." The incredulous are all about "it doesn't give me what I need." And the incredulous are mostly being castigated while the credulous are being pandered to - why block out "things that look like Miyazaki" if it's all over Twitter? I mean... that video was certainly Batman-adjacent, why harp on the fact that it's hammered ass?

My daughter's classmates are absolutely using AI to do their homework. She's 12. My daughter isn't, because the point isn't doing the homework, the point is in having fun with it. I think the real blessing and real danger of AI as it is currently envisioned is that it gives you the mediocre middle of everything you ask for. Which, if you're looking for mediocre friendships, a mediocre relationship, mediocre entertainment, mediocre prose, mediocre images? Saltman will hook you up for $8 a month. As you point out above, however, Google will do that shit for free, so... sorry, Sam. It was a former Google VP who first explained to me that Google's business model was to find a disruptable market and suck all the profit out of it so it could sell the eyeballs of everyone using it to advertisers, and if there's any company that can starve out some dipshit who wants a trillion dollars to plagiarize the Tower of Babel, it's Google.

    Socially, I think we need to move real fast to figure out guardrails or norms or anything before we get a perpetual Her situation for any mildly loner kid.

It's the mediocrity of the situation that gives me hope. The "holy shit this is fucking awesome" aspects of "AI" for me? Were all back here.

That shit is still there. Is still interesting. Still holds promise for all sorts of creative weirdness. It's a tool! But what we're sold is the mediocre middle. And the thing is? If being a passive potato makes you happy, the passive potato path now has more creativity to it. That's a big part of it for me - the use cases for AI, as envisioned by every douche in a suit, are some form of TPS report. The use cases for AI for every incel with a twitter account is some form of hentai waifu. I think you would have been fine if you were fifteen in 2025 because it would have taken you a few days to see there's no ghost in the machine.

I think the downfall of OpenAI will be that there's nothing it does that's worth the money. And that, really, is my basic beef: all these AI dipshits keep talking about how exciting it all is and it's boring as fuck, dude. It'll make a meme without you having to spend ten minutes in MS Paint. It'll vomit up a mediocre essay without you having to search for it. It'll give you a bunch of code that sort of works, it'll provide you with a bunch of forgettable fucking content.

I say "Loab" and that woman is staring right at you.

I think there's a helluva future for AI... once everyone lets go of LLMs and starts focusing on stuff that can build a world model instead of a bag of heuristics. I even think there's a helluva future for LLMs... once everyone lets go of paying Saltman $8 a month and starts focusing on training their own. Damn near every post here is an LLM stretched to breaking, and it's when stretched to breaking it gets interesting.

https://hubski.com/domain/aiweirdness.com

There's fucktons of cool video games out there... and there's Farmville. And yeah, everyone played Farmville and everyone played it a little too much and then everyone moved the fuck on. And that's kinda where I'm at right now. Ain't nothing wrong with video games but Farmville is fucking boring and I judge you if you play it. Ain't nothing wrong with artificial intelligence but mass-trained LLMs are fucking boring and I judge you if you think otherwise.

I think most kids aren't boring enough to get suckered into Farmville. I think the people who got suckered into Farmville are the ones who sucked at Zelda. Yeah, they should get to play video games too but it's really fucking stupid to act like Farmville is the second coming of video games just because all the dipshits who think Comic Sans is okay are impressed by it.


average comment length
last 5 comments: 2,583.2
last 50 comments: 787.5

most active commenter
last 50 comments: OftenBen
last 200 comments: kleinbl00

wordiest commenter
last 5 comments: kleinbl00
last 50 comments: kleinbl00

newest hubskier
jecams

annual best of lists
2013
2014
2015