this post was submitted on 01 Nov 2023
145 points (88.4% liked)

Technology

Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media.

The models also picked up on subtler cues, like location-specific slang, and could estimate a salary range from a user's profession and location.

Reference:

arXiv DOI: 10.48550/arXiv.2310.07298

[–] [email protected] 13 points 10 months ago* (last edited 10 months ago) (2 children)

Explaining what happens inside a neural net is trivial. All it does is approximate (generally nonlinear) functions with a long series of matrix multiplications and some rectification operations.

That isn't the hard part; you can track all of the math at each step.
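The "just multiplications and rectifications" point can be sketched in a few lines of NumPy. The weights below are random placeholders, not a trained model, but the structure is the same: every intermediate value is ordinary arithmetic you can print and inspect.

```python
import numpy as np

# Toy two-layer network: each layer is a matrix multiplication
# followed by a rectification (ReLU). Weights are made up for
# illustration only.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first layer weights
W2 = rng.standard_normal((2, 4))   # second layer weights

def relu(z):
    # rectification: clip negative values to zero
    return np.maximum(z, 0.0)

x = np.array([1.0, -0.5, 2.0])     # input vector

h = relu(W1 @ x)                   # hidden activations -- fully inspectable
y = W2 @ h                         # output -- again, just arithmetic

print(h)  # the "math at each step" is all right here
print(y)
```

Tracking the numbers is easy; what's hard is saying what `h[2]` *means* in terms of the task being solved.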

The hard part is stating a simple explanation for the semantic meaning of each operation.

When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: "First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem", and so on.

Neural nets don't appear to solve problems that way; each atomic operation does not carry that kind of semantic meaning. That is the root of all the reporting about how they are such 'black boxes' and how researchers 'don't understand' how they work.

[–] [email protected] 5 points 10 months ago

When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: "First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem", and so on.

I wonder how our brain even comes to formulate these steps in a way we can comprehend. The number of neurons and zones firing on all cylinders seems tiring to imagine.

[–] [email protected] 4 points 10 months ago

Yeah, but most people don't know this and have never looked. It seems way more complex to the layman than it is, because instinctively we assume that anything that accomplishes great feats must be incredibly intricate.