[–] [email protected] 13 points 1 year ago (38 children)

LLMs are fundamentally different from human consciousness. It isn't a problem of scale, but of kind.

They are like your phone's autocomplete, but very, very good. But there's no level of "very good" that makes autocomplete human, gives it sentience, or lets it understand the words it suggests. It simply returns the next most likely word in a response.
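
To make that concrete, here's a toy sketch of the same operation (a bigram counter, nowhere near a real LLM's transformer architecture, but the same "emit the likeliest next word" loop at miniature scale):

```python
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model trained on a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, n=5):
    """Greedily emit the most likely next word, n times."""
    out = [word]
    for _ in range(n):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on the cat"
```

A real LLM swaps the bigram table for a transformer conditioned on thousands of prior tokens, and samples from the resulting probability distribution instead of always taking the top word, but the output is still produced one most-likely-next-token at a time.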

If we want computerized intelligence, LLMs are a dead end. They might be a good way for that intelligence to speak pretty sentences to us, but they will never be that intelligence themselves.

[–] [email protected] 4 points 1 year ago (4 children)

So for context, I am an applied mathematician, and I primarily work in neural computation. I have an essentially cursory knowledge of LLMs, their architecture, and the mathematics of how they work.

I hear this argument: that LLMs are glorified autocomplete and merely statistical inference machines, and are therefore completely divorced from anything resembling human thought.

I feel the need to point out that not only is there no compelling evidence that the neural computation humans do is anything other than statistical inference, there's actually quite a bit of evidence that statistical inference is exactly what real, biological neural networks do.
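
For one concrete, textbook illustration of a neural unit as a statistical inference machine: when two classes produce Gaussian-distributed inputs with equal variance and equal priors, a single sigmoid "neuron" computes exactly the Bayesian posterior probability of class membership. A minimal sketch:

```python
import math

# Two classes with Gaussian-distributed inputs (equal variance, equal priors).
mu0, mu1, sigma = -1.0, 1.0, 1.0

def posterior_bayes(x):
    """P(class 1 | x) computed directly from Bayes' rule."""
    p0 = math.exp(-(x - mu0) ** 2 / (2 * sigma ** 2))
    p1 = math.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
    return p1 / (p0 + p1)

def posterior_neuron(x):
    """The same posterior as a sigmoid 'neuron': sigmoid(w*x + b)."""
    w = (mu1 - mu0) / sigma ** 2                    # weight
    b = (mu0 ** 2 - mu1 ** 2) / (2 * sigma ** 2)    # bias
    return 1 / (1 + math.exp(-(w * x + b)))

for x in (-2.0, 0.0, 0.7):
    print(x, posterior_bayes(x), posterior_neuron(x))  # identical outputs
```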

Now, admittedly, real neurons and real neural networks are far more sophisticated than any deep learning module. They are extremely recurrent and extremely nonlinear, with some neural circuits devoted simply to changing how other circuits process signals, without processing those signals themselves. And in the case of humans, the whole network is several orders of magnitude larger than even the largest LLM.
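
Here's a toy sketch of that modulatory idea (purely hypothetical numbers, not a model of any specific biological circuit): one unit sets the gain on a recurrent population's feedback without injecting any signal of its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent network: state x, recurrent weights W, tanh nonlinearity.
n = 8
W = rng.normal(0, 0.5, (n, n))
x = rng.normal(0, 1, n)

def step(x, drive, modulator):
    """One update. The modulator adds no signal of its own; it only
    rescales how strongly the recurrent input drives the units."""
    gain = 1 / (1 + np.exp(-modulator))   # modulatory circuit sets gain
    return np.tanh(gain * (W @ x) + drive)

drive = rng.normal(0, 1, n)
low  = step(x, drive, modulator=-3.0)     # recurrence mostly suppressed
high = step(x, drive, modulator=+3.0)     # recurrence fully expressed
print(np.round(low, 2))
print(np.round(high, 2))
```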

All that said, it boils down to an insanely powerful statistical machine.

There are questions of motivation and input: we all want to stay alive (ish), avoid pain, and take in constant feedback from our sensory organs, while an LLM just produces what it was prompted to. But at a high enough level of abstraction, wants, needs, and rewards aren't substantively different from prompts.

Anyway. I agree that modern AI is a poor substitute for real human intelligence, but the fundamental reason is a matter of complexity, not method.

Some reading:

Large scale neural recordings call for new insights to link brain and behavior

A unifying perspective on neural manifolds and circuits for cognition

A comparison of neuronal population dynamics measured with calcium imaging and electrophysiology

[–] [email protected] 1 points 1 year ago (3 children)

If you truly believe humans are simply autocompletion engines, then I just don't know what to tell you. I think most reasonable people would disagree with you.

Humans have actual thoughts and emotions; LLMs do not. The neural networks that LLMs use, while based conceptually in biological neural networks, are not biological neural networks. It is not a difference of complexity, but of kind.

Additionally, no matter how much statistical machinery, CPU power, or data you give an LLM, it will not develop cognition, because it is not designed to mimic cognition. It is designed to link words together. It does that and nothing more.

A dog is more sentient than an LLM in the same way that a human is more sentient than a toaster.

[–] [email protected] 3 points 1 year ago

In a more diplomatic reading of your post, I'll say this: yes, I think humans are basically incredibly powerful autocomplete engines. The distinction is that an LLM has to autocomplete a single prompt at a time, with plenty of time between prompt and response to consider the best result, while living animals are autocompleting a continuous and endless barrage of multimodal, high-resolution prompts, and doing it quickly enough that we can manipulate the environment (the prompt generator) to some degree.
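
As a deliberately silly sketch of that distinction (everything here is hypothetical; model() just stands in for whatever statistical predictor you like), the animal case is the same predict-the-next-step operation run in a closed loop, where each action immediately reshapes the next prompt:

```python
import random

def model(prompt, context):
    """Stand-in for any statistical predictor: remembers recent inputs
    and picks the action it predicts brings the signal closest to zero
    (a crude 'avoid pain' objective)."""
    context = (context + [prompt])[-5:]              # short working memory
    action = min((-1, 0, 1), key=lambda a: abs(prompt + a))
    return action, context

# The "living animal" loop: a continuous stream of noisy sensory prompts,
# where every action immediately changes the next prompt.
environment, context = random.uniform(-5, 5), []
for t in range(10):                                  # stands in for "while alive"
    prompt = environment + random.gauss(0, 0.3)      # noisy sensory feedback
    action, context = model(prompt, context)         # fast, stateful "autocomplete"
    environment += action                            # acting on the prompt generator
    print(f"t={t} sense={prompt:+.2f} act={action:+d} env={environment:+.2f}")
```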

Yeah, biocomputers are fucking wild and put silicon to shame. The issue I have is with treating biocomputation as something that fundamentally cannot be done by any computational engine, when, as far as neural computation is understood, it's a really sophisticated statistical prediction machine.
