this post was submitted on 18 Sep 2023
813 points (96.5% liked)

The actor told an audience in London that AI was a “burning issue” for actors.

[–] [email protected] 14 points 1 year ago (1 children)

Ask ChatGPT to do things a normal person can do, and it also fails. ChatGPT is a tool: a particularly dangerous Swiss Army chainsaw.

[–] [email protected] -4 points 1 year ago (1 children)

I use it all the time at work.

Getting it to summarize articles is a really useful way to use it.
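For example, here's a minimal sketch of automating that with the OpenAI Python client (the model name and prompt wording are my own placeholders):

```python
# Minimal summarization sketch, assuming the official OpenAI Python client.
# The model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    """Ask the model for a short summary of the given article text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model works
        messages=[
            {"role": "system", "content": "Summarize the article in three bullet points."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```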

It's also great at explaining concepts.

[–] [email protected] 10 points 1 year ago (2 children)

> It's also great at explaining concepts.

Is it? Or is it just great at making you think that? I've seen many ChatGPT outputs "explaining" something I'm knowledgeable about, and they were wildly wrong.

[–] [email protected] 4 points 1 year ago

I agree. I have very specialized knowledge in certain areas, and when I've tried to use ChatGPT to supplement my work, it often misses key points or gets them completely wrong. If it can't process the information, it errs on the side of inventing an answer, whether it is correct or not, and whether it is real or not. The creators call this "hallucination."

[–] [email protected] -2 points 1 year ago (2 children)

Yeah it is, if you prompt it correctly.

I basically use it instead of reading the docs when I'm learning new programming languages and frameworks.

[–] [email protected] 4 points 1 year ago (1 children)

A coworker tried to use it with a well-established Python library, and it responded with a solution involving a class that did not exist.

LLMs can be useful tools, but be careful about trusting them too much; they are great at what is best described as "bullshitting". It's not even "trust but verify", it's more "be skeptical of anything it says". I'd encourage you to actually read the docs, especially those for libraries, as they will give you a deeper understanding of what's actually happening and make debugging and innovating easier.
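To make the failure mode concrete, here's a hypothetical exchange (not the coworker's actual case): an LLM might confidently suggest a plausible-looking class like `requests.RetrySession`, which doesn't exist, when the documented way to get retries in `requests` is mounting an `HTTPAdapter` with a urllib3 `Retry` policy:

```python
# Hypothetical hallucination: a plausible-looking class requests doesn't have.
#   session = requests.RetrySession(total=3)   # AttributeError at runtime
#
# The documented approach: mount an HTTPAdapter configured with a
# urllib3 Retry policy onto a regular Session.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

response = session.get("https://example.com")  # retried on 502/503/504
```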

[–] [email protected] 3 points 1 year ago

I've had no problem using them. The more specific you get, the more likely they are to do that. You just have to learn how to use them.

I use them daily for refactoring and things like that without issue.

[–] [email protected] 4 points 1 year ago (1 children)

That's great, but it works until it doesn't, and you won't know when unless you're already knowledgeable from a real source.

[–] [email protected] 0 points 1 year ago

You know it doesn't work when you try it; tell it that it doesn't work and it'll usually correct itself.
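That loop can even be scripted. A rough sketch, assuming the OpenAI Python client; `run_snippet` and the prompts are my own illustration, and a real version would strip Markdown fences from the model's reply:

```python
# Rough sketch of the "try it, feed the error back" loop.
# Assumes the OpenAI Python client; run_snippet and prompts are illustrative.
import subprocess
import sys

from openai import OpenAI

client = OpenAI()

def run_snippet(code: str) -> str | None:
    """Run code in a subprocess; return stderr on failure, None on success."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    return result.stderr if result.returncode != 0 else None

def ask(messages: list[dict]) -> str:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content  # real code would strip code fences

messages = [{"role": "user", "content": "Write Python that parses an ISO 8601 date."}]
code = ask(messages)

for _ in range(3):  # give the model a few chances to correct itself
    error = run_snippet(code)
    if error is None:
        break
    messages += [
        {"role": "assistant", "content": code},
        {"role": "user",
         "content": f"That doesn't work, it fails with:\n{error}\nPlease fix it."},
    ]
    code = ask(messages)
```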