Jim Cramer, host of CNBC's Mad Money, recently defended Tesla's Autopilot and Full Self-Driving Beta technologies. He believes these systems are capable of saving lives, and we've seen evidence of them working to avoid accidents. Cramer also points out that "every time there's an autonomous driving accident," it's all over the news. "The press treats it like it's the end of the world," according to Cramer.

What we rarely get to see is how advanced driver-assist systems are saving lives. Honestly, there's no way to know what might have happened in an incident if the car's tech hadn't assisted. However, active safety features likely wouldn't be implemented in cars if they regularly killed people rather than saved them.

Sadly, that's not to say there won't be issues along the way. There's a high probability that someone will eventually be hurt or killed due to a car's safety features, but it's much less likely that such incidents will become the norm. People are occasionally hurt by seatbelts and airbags, but these features save lives every day, so the rare injury doesn't justify eliminating them. People have been hurt or killed while stopped at a stop sign or a red light, but that doesn't mean we remove traffic signals from our roadways.

Tens of thousands of people die in car crashes every year. So, what is the industry doing about it? Automakers have added a host of safety systems to cars over the years, and some are now required to come as standard.

Safety technologies, such as forward collision warning and automatic emergency braking, are actually included in the Insurance Institute for Highway Safety's (IIHS) vehicle safety testing. In fact, to earn the organization's top honors, a car must not only earn top scores in various crash tests, but also earn an "Advanced or Superior rating for available front crash prevention — vehicle-to-vehicle and vehicle-to-pedestrian evaluations."

Are these systems 100% safe and foolproof? Certainly not. However, they're designed to decrease the likelihood of a severe accident. There may be times when such features don't work correctly, and in the worst of circumstances, someone could die.

People die every day due to drunk drivers, folks texting while driving, vehicles that are improperly maintained and unfit for the road, and the list goes on. Since this has all become commonplace, we rarely see it in the news.

Just the other day, a school bus driver working for my child's school drove off the road and into a sign because he had been drinking. Thankfully, no students were hurt. However, if the bus had had lane keep assist and automatic emergency braking, the incident may have been avoided entirely. If it had technology that monitored the driver and detected his condition, it may not have let him continue driving in the first place.

With all of that said, the main reason Tesla is under scrutiny for its semi-autonomous driving technologies is related to the way the company is testing them. Tesla's Full Self-Driving system is still in beta form, and will likely be that way for a long time. However, the company is allowing owners to test it on public roads.

The argument here would be that the tech should be tested by "professionals" in a "closed environment," which is how many of Tesla's rivals are handling their self-driving tech. However, it's probably fair to say that no matter who tests the technology, how long it's tested in a closed environment, and how successful it becomes, there will almost certainly still be incidents once it's launched to the public. Would those incidents be attributed to human error, or would the manufacturer be held responsible?

There are many scenarios and edge cases that arguably cannot be tested unless the tech is used in real-world driving by "normal" drivers. This is how we teach our children to drive when they have their learner's permits. We don't just train them in a parking lot and then hope for the best when they're on public roads.

With any new technology, there are concerns. Tesla's FSD Beta is concerning since we've seen videos showing that if a driver hadn't taken over, they, or another innocent person, may have been hurt or killed. However, thus far, the Tesla owners testing the technology have intervened when necessary, and there have been no injuries or fatalities. There may have been a few fender benders, but on any given day, it's also likely that hundreds of people were killed in car crashes due to human error.

So, how do we handle this? Regulate the heck out of it and delay progress so that a few lives may be saved? No one wants anyone to die. If there's a genuine concern that Tesla's FSD Beta is killing people, it should be banned from public roadways. However, what if banning such advanced driver-assist systems leads to a whole host of deaths that could otherwise have been avoided?

We don't have the answer, and it seems no one really does. What do you think?
