okay.
hey, I just wanted to say, like, I'm a huge fan of your art and writing, and I don't really want to have a contentious conversation with you! I'm sorry for anything I said that might have come across as negative or rude. I feel dogpiled and overwhelmed, so I am disconnecting now.
1 replies
0 reposts
4 likes
sorry, hit enter too soon--to elaborate, I don't equate intelligence with sentience.
chickens are intelligent, but I still eat them.
1 replies
0 reposts
1 likes
Yes
1 replies
0 reposts
0 likes
I'm not "defending" or "advocating" for it. I am trying to describe it accurately.
I agree that people are treating me as though I'm defending it. I blame character limits. Too much assuming and not enough room to really talk.
1 replies
0 reposts
0 likes
Thank you, I will give this a read
0 replies
0 reposts
1 likes
So like... there's no reason to believe that silicon is inherently unable to replicate what meat and electricity can, right? When you break it down like that?
So then going up a few levels, you look at what is being done. AI is a black box so we're both trying to judge from the end result.
5 replies
0 reposts
0 likes
I mean, our common ape ancestor spontaneously developed fundamentally new capabilities and dominated the entire planet. So there is a precedent.
1 replies
0 reposts
0 likes
It's a hypothetical regarding potential future capabilities, not current ones
1 replies
0 reposts
0 likes
this I fully agree with. I don't think LLMs are the path to AGI.
0 replies
0 reposts
0 likes
look, I'm trying to have a real conversation here, but it's difficult with character limits. I really don't appreciate people talking down to me or ad hom-ing like this. what would I even be a "faithful believer" in here? I hate generative AI. I wish it didn't exist.
3 replies
0 reposts
0 likes
I mean, when you get down to the brass tacks of dopamine/serotonin/GABA signaling, neurons firing, etc., is this not algorithmic?
5 replies
0 reposts
0 likes
I am also associated with the rationalist movement.
At least we've uncovered the source of our disagreement!
1 replies
0 reposts
0 likes
No
1 replies
0 reposts
0 likes
Wouldn't more data to work with also mean more noise?
1 replies
0 reposts
1 likes
And if you are wrong in your assertion that AIs understand nothing (and, by implication, never will), you spread false hope, and people will be woefully unprepared for the arrival of actual AGI.
2 replies
0 reposts
0 likes
Hey, I just wanted to say I really appreciate your kind tone and that you're trying to take the time to understand what I'm saying.
This article is one of the kinds of things I've been reading: open.substack.com/pub/astralco...
1 replies
0 reposts
0 likes
If this is true then why is the AI text getting less funny and more sensical over time?
2 replies
0 reposts
1 likes
I think if an LLM is analogous to a human or a monkey or something, the calculator is closer to a grasshopper.
The calculator itself doesn't understand anything, but what it is doing is the building blocks of understanding
1 replies
0 reposts
0 likes
I mean, as far as I understand it brains work on algorithms too.
5 replies
0 reposts
1 likes
No, the person searching understands it
1 replies
0 reposts
0 likes
Kind of like chicken : human :: autocorrect : LLM
I'd need to look into it more
4 replies
0 reposts
0 likes
I don't know as much about how it works, but maybe?
4 replies
0 reposts
0 likes
I don't think I can have this discussion with the character limits here. Thank you for your time. I respect your input, I just disagree and don't have space to explain
1 replies
0 reposts
1 likes
Ok.
1 replies
0 reposts
0 likes
No, the table understands nothing. But the thing that consults the table to give you an answer does understand things.
1 replies
0 reposts
0 likes
They don't really rearrange it that nonsensically. There's a lot of sense there.
1 replies
0 reposts
1 likes
So I think the fundamental disconnect here is that I think that you need intelligence *in order to* figure out what words are more likely to go next to other words.
I don't think I commune with them. I hate them actually.
6 replies
0 reposts
0 likes
Ok.
0 replies
0 reposts
0 likes
I block people who insult me. Calling me non-serious is an insult. So, blocked.
0 replies
0 reposts
0 likes
I'm not anthropomorphizing it at all. I think it's incredibly alien.
1 replies
0 reposts
0 likes
Okay... Even if I grant that, they still know and understand things; the knowing and understanding is simply alien to ours and unhelpful.
9 replies
0 reposts
0 likes
Why do some people insist on being super rude when having a disagreement online?
I freely block if someone insults me for disagreement. Life is too short to deal with assholes.
Being short/terse is one thing; there are character limits. Being mean is uncalled for.
1 replies
0 reposts
0 likes
I don't think it has agency or selfhood. It *only* engages in abstracted reasoning because it has no sensory or conceptual framework of existence. It is utterly alien. It still understands and makes connections. You don't need to be an agent to do that.
1 replies
0 reposts
2 likes
I could describe neurons like that
1 replies
0 reposts
0 likes
I disagree, but we may define sentience differently. For example, I wouldn't say chickens are sentient but they can still understand things.
1 replies
0 reposts
0 likes
Yes, I agree.
They understand things in the context of language only, so their framework is fundamentally lacking. They connect ideas in the abstract but there's no embodied reality anchoring them to anything concrete.
1 replies
0 reposts
1 likes
I'm sorry, it's too frustrating to try and have this discussion with character limits. I appreciate your time and respect your opinion, I just disagree
1 replies
0 reposts
1 likes
I'm sorry, it's too frustrating to try and have this discussion with character limits. I appreciate your time and respect your opinion, I just disagree
0 replies
0 reposts
1 likes
I'm not pro-LLM fwiw. I'm scared of them.
1 replies
0 reposts
0 likes
How are you defining "understand"?
5 replies
0 reposts
0 likes
Informally? Abt 2-5 years. You?
1 replies
0 reposts
1 likes
Human brains are also statistical probability machines
1 replies
0 reposts
0 likes
They don't learn exactly like a human, but they do learn. They understand things. People keep moving the goalposts of what "understanding" means so they won't have to grapple with this. LLMs are not conscious, but they are intelligent.
4 replies
0 reposts
0 likes
Right, so they're learning that apples and blood are red, but because they lack senses, they don't know what red is.
6 replies
0 reposts
1 likes
It likely will once given some kind of sensory apparatus to use.
1 replies
0 reposts
0 likes
I already said that what LLMs lack is sensory experience.
0 replies
0 reposts
1 likes
Meaning IS associations
2 replies
0 reposts
1 likes
They demonstrably know what words mean
1 replies
0 reposts
0 likes
You HAVE to have some understanding to rearrange text sensically.
4 replies
0 reposts
1 likes
If anyone is looking at my skyline because of comments I make about AI, I want to be very clear.
I HATE LLMS.
I THINK THE WORLD WOULD BE BETTER WITHOUT GENERATIVE AI AND THE ALGORITHM.
But AI capabilities are improving at an alarming rate. Assuming they'll "never" do X seems like hubris.
0 replies
0 reposts
0 likes