WaPo journalist verifies that robotaxis fail to stop for pedestrians in a marked crosswalk 7 out of 10 times. Waymo admitted that it follows “social norms” rather than laws.
The likely reason is to compete with Uber 🤦
Wapo article: https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/
Cross-posted from: https://mastodon.uno/users/rivoluzioneurbanamobilita/statuses/113746178244368036
A few points of clarity, as I have a family member who’s pretty high up at Waymo. First, they don’t want to compete with Uber. Waymo isn’t really concerned with driverless cars that you or I would own or use, and they don’t want (at this point, anyway) to try to start a new taxi service. Right now you order an Uber and a Waymo car might show up. They want the commercial side of the equation: how much would Uber pay to not have to pay drivers? How much would a shipping company fork over when they can jettison the $75k-150k drivers?
Second, I know for a fact that upper management was pushing for the cars to drive like this. I can nearly quote said family member opining that if the cars followed all the rules of the road, they wouldn’t perform well, couching it in the language of ‘efficiency.’ It was something like, “Being polite creates confusion in other drivers. They expect you to roll through the stop sign or turn right ahead of them even if they have the right of way.” So now the Waymo cars do the same thing. Yay, “social norms.”
A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow. Said family member spoke of when they switched to the machine learning model, and it was better than the highly complicated (and I’m dumbing down the description because I can’t describe it well) series of if-else statements. With that training comes the issue that the folks in charge don’t know exactly what is going on. One issue described to me was the cars driving right at the edge of the lane rather than in the center of it, and they couldn’t figure out why or (at that point, anyway) how to fix it.
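Just to make that contrast concrete (this is a toy sketch, not anything like Waymo’s actual stack, and all the names and numbers here are made up): the old approach is explicit rules you can read and audit; the new approach buries the “rule” in learned weights, which is exactly why nobody could point at the line of code causing the lane-edge thing.

```python
# Toy illustration of the shift being described (not Waymo's actual code).
# Old style: explicit, auditable rules you can point at when behavior is wrong.

def rule_based_steering(lane_offset_m: float) -> float:
    """Hand-written rule: steer back toward the lane center."""
    if lane_offset_m > 0.2:      # drifting right
        return -0.1              # steer left
    elif lane_offset_m < -0.2:   # drifting left
        return 0.1               # steer right
    return 0.0                   # close enough to center, hold course

# New style: a learned function approximator. The "rule" lives in the weights,
# so if the car hugs the lane edge, there is no single line of code to read
# that explains why.

import numpy as np

class LearnedSteeringPolicy:
    def __init__(self, weights: np.ndarray):
        self.weights = weights   # produced by training on human driving data

    def act(self, sensor_features: np.ndarray) -> float:
        # Opaque mapping from perception features to a steering command.
        return float(np.tanh(sensor_features @ self.weights))
```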
As an addendum to that third point, the training data is us, quite literally. They get and/or purchase people’s driving; I think at one time it was actual video, not sure now. So if 90% of drivers blast through the intersection the moment the light turns red when they can, it’s likely you’ll eventually see the same thing from Waymo. It’s a weakness that ties right into that ‘social norm’ thing. We’re not really training safer driving by having machine drivers; we’re just removing some of the human factors like fatigue or attention deficits. Again, I get frustrated with the language of said family member (and I’m paraphrasing): ‘How much do we really want to focus on low-percentage occurrences? Improving the miles-per-collision number is best done by tackling the big things.’
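A toy illustration of why that matters (made-up data, and real systems are far more sophisticated than this): if you naively clone whatever the majority of recorded drivers do, the majority behavior is what you get back.

```python
# Toy example (not real Waymo data): a policy cloned from human driving
# inherits whatever most humans do. If 90% of recorded drivers "blast through"
# as the light changes, the most likely learned action is to blast through too.

from collections import Counter

# Hypothetical labels: what drivers did when the light had just turned red.
human_actions = ["run_it"] * 90 + ["stop"] * 10

def behavior_cloning_baseline(actions):
    """The crudest possible imitation learner: predict the most common action."""
    return Counter(actions).most_common(1)[0][0]

print(behavior_cloning_baseline(human_actions))  # -> "run_it"
```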
Then maybe they should make sure to train them with footage and/or data of drivers who are following the traffic laws instead of just whatever drivers they happen to have data from.
Do they review all this training data to make sure data from people driving recklessly is not being included? If so, how? What process do they use to do that?
Hmmm, yeah, no surprises there, and I like how you articulated it all really well.
On the social norm thing, it’s still a conscious decision how much they invest in teaching their AI to distinguish good from bad behavior. In AI speak, you can absolutely mark adequate behavior with rewards and bad behavior with penalties, and then the car shifts its behavior in the right direction. You can’t predict how it fine-tunes specific behavior like the lane-edge thing unless you’re willing to start from scratch if necessary, but overall that’s how you teach it that running a red light is a big no-no: penalties, and if those aren’t enough, start over.
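Roughly what that looks like in practice (a sketch with made-up event names and penalty values, not anyone’s real training code): you score simulated behavior so the stuff you care about carries a big penalty, and the learner gets pushed away from it.

```python
# Sketch of the reward/penalty idea above (assumed names and values):
# score an episode so that red-light running and missed crosswalk stops are
# heavily penalized, then training shifts behavior toward avoiding them.

def shaped_reward(event: dict) -> float:
    reward = 0.0
    reward += 0.01 * event.get("meters_progressed", 0.0)   # small reward for making progress
    if event.get("ran_red_light"):
        reward -= 100.0                                     # big no-no: heavy penalty
    if event.get("failed_to_yield_to_pedestrian"):
        reward -= 100.0
    if event.get("collision"):
        reward -= 1000.0                                    # start-over territory
    return reward

# Example: an episode where the car made progress but blew a crosswalk.
print(shaped_reward({"meters_progressed": 300, "failed_to_yield_to_pedestrian": True}))
# -> 3.0 - 100.0 = -97.0, so this behavior gets pushed down during training
```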
Yeah, that makes sense. I was in SF a few months ago, and I was impressed with how the Waymos drove: not so much the driving quality (which seemed remarkably average) but how lifelike the driving was. They still seemed generally safer than the human-driven cars.
Given the nature of reinforcement learning algorithms, this attitude actually works pretty well. Obviously it’s not perfect, and the company should really program in some guardrails to override the decision algorithm when it makes an egregiously poor decision (like, y’know, not stopping at crosswalks for pedestrians), but it’s actually not as bad or ghoulish as it sounds.
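Something like this is what a guardrail could look like (purely illustrative, with made-up names and thresholds; not a claim about how Waymo’s system is actually structured): a hard-coded check that can veto whatever the learned policy proposes.

```python
# Minimal sketch of the "guardrail" idea: a non-negotiable safety rule layered
# on top of the learned decision. All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Perception:
    pedestrian_in_crosswalk: bool
    distance_to_crosswalk_m: float

def learned_policy_speed(perception: Perception) -> float:
    # Stand-in for the opaque learned model; maybe it decides to keep rolling.
    return 8.0  # m/s

def guarded_speed(perception: Perception) -> float:
    proposed = learned_policy_speed(perception)
    # Hard-coded rule that overrides the learned decision when it matters most:
    if perception.pedestrian_in_crosswalk and perception.distance_to_crosswalk_m < 30.0:
        return 0.0  # stop, regardless of what the model wanted
    return proposed

print(guarded_speed(Perception(pedestrian_in_crosswalk=True, distance_to_crosswalk_m=12.0)))  # -> 0.0
```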
We’ll have to agree to disagree on that one. I think decisions made solely to keep the company’s costs as low as possible, while actively choosing not to care about failures just because their chance is low (we’ve all seen Fight Club, right? If A > B, where A = cost of a recall and B = cost of paying out × chance of occurrence, then no recall), even when those failures are devastating, are ghoulish.
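Spelling that formula out with made-up numbers, just to show the arithmetic being criticized:

```python
# The recall formula from the comment above, with hypothetical numbers.
cost_of_recall = 50_000_000            # A (made up)
cost_of_paying_out = 2_000_000         # per incident (made up)
chance_of_occurrence = 0.00001         # per vehicle (made up)
vehicles_on_road = 1_000_000

A = cost_of_recall
B = cost_of_paying_out * chance_of_occurrence * vehicles_on_road  # expected payouts = 20,000,000

print("no recall" if A > B else "recall")  # -> "no recall"
```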