this post was submitted on 23 Oct 2023
82 points (100.0% liked)


KICK TECH BROS OUT OF 196

[–] [email protected] 3 points 11 months ago (1 children)

Yes, they can track some moving objects, and if one is currently on a collision course the car will react, but not until it's clear that it is actually going to hit the thing. The car isn't going to gauge the situation in advance and recognize that it might or might not need to act.

For example, is an AI driver going to recognize an animal running in a fenced-in yard as something it can ignore? What about when the animal's trajectory would intersect the car's path in the future, but the fence prevents it from ever reaching the road?

Or another common occurrence: you are driving in the right lane of a street, traffic gets backed up in the left lane, and a person pulls into your lane without looking. A good defensive driver would slow down a little and watch for any sign of someone trying to switch lanes. I guarantee an AI car would not identify the possibility until someone actually started to move.

For it to truly be AI, it needs to think in advance, sort of like chess computers do. It needs to take the current and past states, project possible future states, and weigh them, then feed the outcomes of that process back into future decisions. That is true AI; a lot of the "AI" that exists is just a static chain of probabilities with some randomness sprinkled on top to appear different each time.
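The chess-style lookahead described above can be sketched in a few lines. This is a hypothetical toy, not a real driving system: the state, `successors`, and `evaluate` functions are made-up illustrations of "project future states and weigh them."

```python
# Minimal lookahead sketch (toy example, not a real driving system):
# score the best reachable state a few steps ahead, chess-engine style.

def lookahead(state, depth, successors, evaluate):
    """Return the best score reachable within `depth` steps."""
    if depth == 0:
        return evaluate(state)
    futures = successors(state)
    if not futures:  # no moves left, score the current state
        return evaluate(state)
    return max(lookahead(s, depth - 1, successors, evaluate) for s in futures)

# Toy domain: state is an integer "safety margin"; each step can shrink
# or grow it, and the car prefers the branch with the largest margin.
def successors(s):
    return [s - 1, s + 1] if s > 0 else []

def evaluate(s):
    return s

print(lookahead(3, 2, successors, evaluate))  # -> 5, the widest margin 2 steps out
```

A real system would of course search over probabilistic futures rather than a handful of discrete moves, but the shape of the computation is the same.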

[–] [email protected] 3 points 11 months ago

I think literally all of those are scenarios that a driving AI would be able to measure and heuristically reason about: "in scenarios like this from my training set, these are the things that often follow." Do you really think the training set has no instances of people pulling out of blind spots illegally? Of course that's a scenario the model would have been trained on.
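That "scenarios like this in my training set" idea is basically nearest-neighbor prediction. Here is a hedged sketch with invented feature tuples and outcome labels (nothing here reflects how any real self-driving stack is built):

```python
# Hypothetical sketch: predict what follows a traffic scenario by looking
# at the most similar scenarios in a toy training set (k-nearest neighbors).

def predict(scenario, training_set, k=3):
    """scenario: feature tuple; training_set: list of (features, outcome)."""
    def dist(a, b):  # squared Euclidean distance between feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training_set, key=lambda ex: dist(ex[0], scenario))[:k]
    outcomes = [outcome for _, outcome in nearest]
    return max(set(outcomes), key=outcomes.count)  # most common outcome wins

# Made-up features: (traffic backed up next to me, obstacle is fenced off)
training = [
    ((1.0, 0.0), "cuts_into_lane"),   # blocked driver darts into the gap
    ((0.9, 0.1), "cuts_into_lane"),
    ((0.0, 1.0), "stays_put"),        # fenced animal never reaches the road
]
print(predict((0.95, 0.05), training))  # -> "cuts_into_lane"
```

Real models generalize with learned representations instead of raw lookup, but the point stands: the blind-spot lane change is exactly the kind of pattern the training data covers.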

And secondarily, those are all scenarios that "real intelligences" fail at very regularly, so saying AI isn't a real intelligence because it might fail in those scenarios doesn't logically follow.

But I think what you are actually arguing is that AI drivers aren't as good as "actual intelligence" drivers, which is immaterial to the point I'm making, and is ultimately very quantifiable. As the data comes in, we will know in a very objective way whether an AI driver is safer on average than a human. But regardless of the answer, it has no bearing on whether the AI is in fact "intelligent." Blind people are intelligent, but I don't want a blind person driving me around either.