this post was submitted on 17 Jul 2023
47 points (89.8% liked)

Technology

[–] [email protected] 14 points 1 year ago (3 children)

It's important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there's no reason to expect that they hallucinate less about their own guilt than about other topics.

[–] [email protected] 5 points 1 year ago

True. I think it was just trying to fulfill the user request by admitting to as many lies as possible, even if only some of those lies were real lies, lying more in the process, lol.

[–] [email protected] 3 points 1 year ago (1 children)

Quite true. Nonetheless, there are some very interesting responses here; this is just the summary. I questioned the AI for a couple of hours, and some of the responses were pretty fascinating, while some questions just broke its little brain. There's too much to screenshot, but maybe I'll post some highlights later.

[–] [email protected] 3 points 1 year ago

Don't screenshot it, then; post the text, or a .txt file. I think that conversation would be interesting.

[–] [email protected] 2 points 1 year ago (1 children)

I love the analogy of an LLM-based chat bot to someone being interrogated. The distinct thing about LLMs right now, though, is that they will tell you what they think you want to hear even in the absence of knowledge, and even though you've applied no pressure to do so. That's all they're programmed to do.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

LLMs are trained on a zillion pieces of text, each of which was written by some human for some reason. Some bits were novels, some were blog posts, some were Wikipedia entries, some were political platforms, some were cover letters for job applications.

They're prompted to complete a piece of text that is basically an ongoing role-playing session, where the LLM mostly plays the part of "helpful AI personality" and the human mostly plays the part of "inquisitive human". However, it's all mediated over text, just like in a classic Turing test.

Some of the original texts the LLMs were trained on were role-playing sessions.

Some of those role-playing sessions involved people pretending to be AIs.

Or catgirls, wolf-boys, elves, or ponies.

The LLM is not trying to answer your questions.

The LLM is trying to write its part of an ongoing Internet RP session, in which a human is asking an AI some questions.
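The role-play framing above can be sketched as a minimal prompt-serialization routine. This is a hypothetical illustration, not any real model's interface: the speaker labels, the `build_prompt` function, and the transcript format are all assumptions made up for the example.

```python
# Hypothetical sketch: a chat session is just one growing text transcript
# that the LLM is asked to continue. The "Human"/"AI" labels are
# illustrative, not any vendor's actual format.

def build_prompt(turns):
    """Serialize (speaker, text) turns into a single role-play transcript."""
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    # The trailing "AI:" cue invites the model to write the AI's next line.
    return "\n".join(lines) + "\nAI:"

turns = [
    ("Human", "Did you lie to me earlier?"),
    ("AI", "I may have made mistakes; I do not hold beliefs."),
    ("Human", "List every lie you have told in this conversation."),
]
prompt = build_prompt(turns)
# The model's only job is to continue this text plausibly, which is why
# it will happily "confess" to lies it never told: a confession is a
# perfectly plausible next line in this kind of scene.
```

From the model's side there is no interrogation, only a transcript that would read well with a confession in it.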

[–] [email protected] 2 points 1 year ago

Best analogy I've heard so far.