On Thursday, The New York Times Magazine ran a big feature on Tesla's clever, unreliable, and sometimes deadly self-driving technology, and on what its successes and failures might reveal about Elon Musk. I have to say: of all the subjects that might be illuminated by the presence on public roads of fleets of partially automated vehicles that tend to mow down pedestrians and ram stationary objects, "Elon Musk's personality and ethics" is not only the least important but also the absolute least interesting.
For most of its length, though, it's a fun read. Reporter Christopher Cox takes a surreal, blackly comic ride with a sweaty Californian Tesla enthusiast whose evangelizing about the groundbreaking, life-saving potential of self-driving cars is periodically interrupted by the urgent need to prevent the self-driving car from spontaneously killing somebody.
A minute later, the car warned Key to keep his hands on the wheel and his eyes on the road. "Tesla is kind of a nanny about it now," he complained. If Autopilot had once been dangerously permissive, that flaw, like the stationary-object bug, had since been fixed. "Between the steering wheel and the eye tracking, it's a solved problem," Key said.
[…]
Finally, Key told the FSD to take us back to the cafe. But as the car began a left turn, the steering wheel spasmed and the brake pedal juddered. Key muttered a tense "O.K. …"
A moment later, the car stopped in the middle of the road, with a line of cars bearing down on us from one side. Key hesitated for a second, then quickly took over and completed the turn. "It probably could have accelerated at that point, but I wasn't willing to cut it that close," he said. If he was wrong, of course, he could well have had his second AI-caused accident on the same mile of road.
There's also some good reporting on Tesla and Musk's habitual slipperiness about both what the company's cars can do (Musk being, by light-years, the planet's most reckless maker of overblown promises) and what those cars have actually been doing. On the latter front, Tesla has compared its self-driving technology's crash statistics to those of human-operated vehicles in ways that appear designed to obscure context, propping up the false claim that its AI outperforms human drivers on safety. That's not good.
The Times piece raises this subject in the context of utilitarianism and risk-reward calculations. Retreating into half-assed messianic futurism is the only thing Elon Musk does credibly when confronted with his own malice toward others, and the fact that his cars are killing people gives that crap some dispiriting weight. So Cox brings in the philosopher Peter Singer to analyze the ethics of Musk's willingness to flood the roads with cars that spontaneously decide it's time to flatten a small child.
Singer says that even if Autopilot and human drivers are equally deadly, we should prefer the AI, provided that the next software update, built on data from crash reports and near-misses, will make the system safer still. "It's a bit like a surgeon doing experimental surgery," he said. "Maybe the first few times they operate, they're going to lose patients, but the argument for it is that they will save more patients in the long run." Singer added that it was important for the surgeon to obtain the patients' informed consent.
And the top of the next paragraph is the exact moment at which my hair went up in flames.
Does Tesla have informed consent from its drivers?
In one sense, this is a good question: Tesla oversells its cars' capabilities and fudges its safety record, and many human Tesla drivers (or operators, or whatever) may not know exactly what they're buying and what they aren't. It is also the completely wrong question.
The salient question about informed consent, when the subject is unpredictable self-driving cars on public roads, is not whether the Tesla driver has sufficient information to consent to the risk to his own safety. The "patient" in Singer's analogy, whether he or Cox realizes it, is not the Tesla driver but every other person using the public roads, sidewalks, and crosswalks, any of whom could be killed or maimed at any moment by unproven technology being tested on them without their knowledge or consent. No reputable experimental surgical technique carries the risk of randomly killing innocent bystanders minding their own business elsewhere in the building. The Times article itself later details an incident in which a self-driving Tesla ran a red light at a Los Angeles intersection and slammed into a human-driven Honda, killing the people inside the Honda — people who never consented to stake their lives on the Autopilot system that killed them.
So in Singer's analogy, the Tesla owner availing himself of his car's self-driving system is not the patient but the surgeon — and not even a regular surgeon, really, but basically the Human Centipede guy, operating on randomly selected, unconscious strangers. Who gives a fuck whether he has given informed consent to the risks he's imposing on others, when those others don't get a choice at all?
It's like reporting on the dangers of assault weapons by focusing on the risk that the gun might blow up in the hands of some hilariously oblivious AR-15 owner while he sprays bullets at schoolkids. It's like granting that anti-vaxxers hold the right to decide what goes into their own bodies, and that this gives them unilateral power over what goes into everybody else's. You keep waiting for the piece to move toward considering the consent of the public — whose members might at least expect to be consulted on the question of whether they want to share their morning commute with mentally unstable two-ton killbots, exempted from normal safety review and road-tested on live highway traffic by volunteer corps of amateur fanboys — but it somehow never gets there.
And so, whether he means to or not, Cox blunders into adopting the libertarian, or just antisocial (granting, liberally, that these are not synonyms), framing of Musk and Tesla: that an inherently dangerous and unproven experimental technology, with a dataset built through trial and sometimes fatal error, might at some hypothetical point in the future allow that technology to live up to its manufacturer's marketing brags. Also: that this unfalsifiable claim about an idealized future version of self-driving artificial intelligence makes handing over public infrastructure for what amounts to a crash-test trial a self-evident and unalloyed good, as essential as curing cancer.
Lurking behind all of this, unexamined by the Times piece, is the question of who or what, in the end, is responsible for continuing to protect the public in any reasonable way from the dangers the unaccountable brain-poisoned hypercapitalist class and its sad personality cults impose on all the rest of us. Maybe that battle has already been lost: after all, the autonomy-incapable Teslas are already out there on the streets, spontaneously flipping over, creaming pedestrians, ambushing hapless human drivers at intersections, bursting into flames for no reason — with, seemingly, no civic authority possessing either the means or the political will to force them off the road, nor even a widely shared sense of which institution such authority would belong to.
Nevertheless: Imagine the ideal world implied by the Times piece's treatment of informed consent for the Tesla drivers whose cars are killing other people. In it, those drivers would know the risks they face when using the Full Self-Driving technology in their new cars. Great. And the rest of us would all just have to accept the possibility of being dragged a mile down the pavement by a marauding robocar as the price of walking from here to there in the open air. Does that sound like how things should be? At least we'd have been notified!
Dressing up that grim surrender in chin-stroking about the painful but perhaps necessary trade-offs of a coherent ethical philosophy is giving away the game, however well-intentioned the inquiry. It's easy to say that these things are broken death machines with no business on the streets, and that the choice of whether to put them there should not belong to shit-posting dipshits like Elon Musk. That the media keeps dancing around every part of that blindingly obvious truth is… well, let's keep it mellow. Morally unsound.