[–] [email protected] 80 points 2 weeks ago* (last edited 2 weeks ago) (13 children)

This kind of seems like a non-article to me. LLMs are trained on the corpus of written text that exists out in the world, which is overwhelmingly standard English. American dialects effectively exist only in speech, whether a regional or city dialect, Black or Chicano English, etc. So how would LLMs learn them? Seems like not a bias in the AI models themselves but rather a reflection of the source material.

[–] [email protected] 27 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

Seems like not a bias in the AI models themselves but rather a reflection of the source material.

That's what is usually meant by AI bias: a bias in the material used to train the model that is then reflected in its behavior.

[–] [email protected] 19 points 2 weeks ago (7 children)

But why is it even mentioned then? It's FUCKING OBVIOUS. It's like saying "AIs are biased towards English and neglect Latin" or something, ffs.

[–] [email protected] 10 points 2 weeks ago

It’s FUCKING OBVIOUS

What is obvious to you is not always obvious to others. There are already countless examples of AI being used to do things like sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.

But it's also more insidious than that, because the far-reaching implications of this bias often cannot be predicted. For example, excluding all gender data from training ended up making sexism worse in a real-world example of AI-assisted financial lending, and the same was true for Apple's credit card. There are even full-blown articles showing how the removal of data can actually reinforce bias. It's not just what material is used to train the model, but also what data is not used or is explicitly removed.
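
To make that last point concrete, here is a toy sketch in Python of the mechanism (all data, feature names, and numbers are invented for illustration, not taken from the lending or Apple cases): a model is trained with the gender column deliberately excluded, but a correlated proxy feature lets the bias in the historical labels leak back into its decisions anyway.

```python
# A minimal sketch (hypothetical, synthetic data) of why dropping a protected
# attribute can fail: a correlated proxy lets the model reconstruct it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "gender" is the protected attribute we will exclude from training.
gender = rng.integers(0, 2, n)

# "proxy" (e.g. an occupation code) is strongly correlated with gender.
proxy = gender + rng.normal(0, 0.3, n)

# Creditworthiness here is independent of gender by construction.
income = rng.normal(50, 10, n)
approved = (income + rng.normal(0, 5, n) > 50).astype(int)

# Historical labels are biased: group 1 was denied 30% of the time
# even when they would otherwise have been approved.
biased_labels = approved & ~((gender == 1) & (rng.random(n) < 0.3))

# Train WITHOUT the gender column -- "fairness through unawareness".
X = np.column_stack([income, proxy])
model = LogisticRegression(max_iter=1000).fit(X, biased_labels)

# The model still approves group 1 less often, via the proxy feature.
preds = model.predict(X)
print("approval rate, group 0:", preds[gender == 0].mean())
print("approval rate, group 1:", preds[gender == 1].mean())
```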

This is so much more complicated than "this is obvious," and there are a lot of signs pointing toward the need for regulation of AI and ML models used in places where it really matters, such as decision making, until we understand them a lot better.
