Monday, April 24, 2017

Critics fear Trump will tap auto exec for NHTSA



President Donald Trump meets with Mary Barra in January.
(Photo: Pablo Martinez Monsivais / AP)






Washington — Car-safety advocates are worried that President Donald Trump might turn over the keys to the agency charged with regulating the safety of the nation’s automobiles to someone from within the industry’s ranks.
Rosemary Shahan, president of the Sacramento, Calif.-based Consumers for Auto Reliability and Safety group, said she would not be surprised if Trump reaches out to an auto executive to fill the position of National Highway Traffic Safety administrator, vacant since Trump took office in January.
“He has a penchant of appointing people who have been regulated and allowing them to dismantle agencies,” Shahan continued. “You have all these companies who have been under investigations for safety violations recently. I wouldn’t be surprised if he appointed somebody from one of them. It would be consistent with his other appointments.”
No candidate names appear to be circulating among industry and government insiders in Washington. Several have said filling the position does not appear to be a high priority for the president, who has yet to make numerous appointments across the government.
But Shahan speculates on one potential candidate: General Motors Co. Chairman and CEO Mary Barra.
“He seems to be very friendly with her,” Shahan said of Trump’s relationship with GM’s chief, noting he has named Barra to a Strategic and Policy Forum that advises him on economic issues and jobs growth, and met with her in Washington on at least two occasions.
The White House declined to comment on the president’s plans for filling the vacancy. GM would not comment on whether Barra would be interested in the regulatory job.
Barra, who became the first woman to lead an automaker in January 2014, is in a strong position at her company, which is posting record profits. She has assembled a cohesive team of executives who all stand to earn substantial bonuses if they remain with the company.
Trump has appointed other high-level business executives to serve in his Cabinet: Former Exxon Mobil CEO Rex Tillerson is U.S. secretary of state. Investor Wilbur Ross is commerce secretary. Additionally, Trump selected school-choice advocate Betsy DeVos, a West Michigan GOP mega-donor and philanthropist, to be his education secretary. World Wrestling Entertainment CEO Linda McMahon leads the Small Business Administration.
“If he appoints someone from the auto industry, there is going to be a lot of concern on the Hill and among groups like ours,” said former Public Citizen president Joan Claybrook, who was National Highway Traffic Safety administrator during the Carter administration in the late 1970s. “That’s a real conflict of interest. You need someone who is more even-minded about what needs to be done.”
Trump has signed an executive order that requires the federal government to cut two regulations for every one that’s enacted. He has proposed cutting $2.4 billion, or 13 percent, from the U.S. Department of Transportation’s current budget levels as part of his effort to cut non-military spending by $54 billion to support an increase in defense funding. NHTSA is an agency within the Transportation Department.
Shahan, the safety group president, expressed concern that an industry insider would target regulations that address auto safety.
“He’s on a deregulation kick,” she said. “That’s not comforting. That’s worrisome. Is the new administrator at NHTSA going to deregulate auto safety? He’s so fixated on threats from outside the U.S. that he doesn’t consider that there are threats to us domestically like auto crashes. When he talks about threats to our safety, he’s talking about ISIS.”
NHTSA and other federal agencies have career staffers who remain in place when presidential administrations change. It will likely be hard for Trump to make drastic changes to auto regulations before naming a new top highway safety cop.
Jeff Davis, senior fellow with the independent Eno Center for Transportation think tank in Washington, said Trump is not tardy with his NHTSA choice by recent historical standards. He noted that Obama did not nominate his first NHTSA administrator until nearly 11 months after taking office. President George W. Bush did not nominate his first until five months after moving into the White House. And President Bill Clinton did not nominate his first NHTSA administrator until 13 months after taking office.
Davis said the NHTSA vacancy is not impeding the Trump administration’s ability to police safety regulations.
“Legally, the authority to issue and revise motor vehicle safety standards ... is vested in the secretary of transportation,” he said. “The secretary can delegate or un-delegate that authority to the NHTSA administrator as they see fit, but the important thing is that the regulation-and-recall process can be carried out by the career staff of NHTSA and put into legal effect by the secretary in the absence of a confirmed NHTSA administrator.”
Claybrook, the former NHTSA administrator, said she started working at the agency three months after Jimmy Carter became president. “Agency heads are usually the last ones to get appointed,” she said.
But she said the highway safety agency needs a strong administrator because it has “always been a bit of a stepchild among agencies” and it is “desperately underfunded.”
“You need someone who is talented to fight those battles,” she said of the effort to convince Congress to spend more money on such things as hiring staff to monitor potential safety recalls. “There are quite a number of opportunities to save lives that there is no leadership on right now.”

http://www.detroitnews.com/story/business/autos/2017/04/24/trump-transportation/100830872/

Toyota Dealer Blasted For Giving Supra To Sales Manager’s Wife In Charity Raffle







Alanis King






To raise funds for a Texas nonprofit helping women and children, a Toyota dealership held a raffle for a restored twin-turbo 1994 Toyota Supra. But when they announced the results, it didn’t take long for people to realize the winner was the general sales manager’s wife. Now the dealership is taking a ton of heat.
According to local newspaper Blue Ribbon News, the Dallas-area Toyota of Rockwall dealership held the raffle with tickets selling for $20 apiece or six for $100. All ticket proceeds went to the Genesis Center of Kaufman County, which “offers counseling, shelter, resources for daily needs, parenting classes, job services, medical referrals and spiritual mentoring,” according to its website.
The car, according to Blue Ribbon News, was donated to the Genesis Center by a local resident battling cancer:
“The Supra was truly an answer to prayer,” said Pastor Nancy Schoenle of The Genesis Center. “I had been asking God for something different, something unique to help us touch more lives – then God put it on this donor’s heart to give his car to The Genesis Center.”
Schoenle said she called Toyota of Rockwall, and the folks there led efforts to refurbish the car. The dealership gave it new leather seats, new paint, new tires and new tint at no charge, Schoenle said. Five other companies were also involved in restoring the car; Blue Ribbon News reports they spent a total of $17,000, bringing the car’s restored value to around $30,000.
The dealership printed 4,000 raffle tickets, with the proceeds from each going to the Genesis Center. The dealership later announced that it raised more than $50,000 for the center.
It all sounds like a heartwarming story, but things went downhill soon after the dealership announced the winner on April 14.



The dealership’s general sales manager is Danny Rawls, whose name is close to the “Rebecca Rawl” announced by the dealership’s Facebook page.
That’s because they’re married, and the dealership claims to have made a typo in the original announcement.
The dealership later released a statement regarding the winner on its Facebook page, saying that the Genesis Center permitted employees—that permission also includes their spouses and family, presumably—to participate in the raffle. The statement said Rebecca Rawls and four friends bought tickets as a group, spending $1,500 total: 



The statement continued by saying that Rebecca Rawls planned to sell the car for additional money to be contributed to the Genesis Center. The statement came more than a week after the drawing and after much public backlash, so it isn’t clear whether selling the car for additional funds to be given to the center was the original intent.
Twelve hours before this post went up, a Facebook commenter shared a photo of an eBay listing for the car under seller “rawlsdanny09,” with bidding at $22,100. The listing could not be found at the time of this post.
As you might expect, recent reviews of the dealership on Facebook are pretty scathing, with many people calling the raffle a “scam.”



In response to the influx of comments on the Facebook statement above, the dealership’s page commented on the post with an additional statement:
My name is Charles Pankey and I am the general manager and I have to say a few things. There seems to be a lot of people on here trying to stir this up into what it isn’t. The winners of the car pledged to donate 100% of the funds back to The Genesis Center, hopefully another $25,000. That would put the total amount to nearly $75,000 to help fight domestic violence.
I spent a considerable amount of time to help The Genesis Center raise the money that they did and I was excited when I heard the winners were essentially giving back what they won to the Genesis Center, a place that helps those effected with domestic violence. Lets not forget that this is all for a charitable cause.
If selling the car was the winners’ original intent, the outcome is, of course, a lot more money going to a seemingly worthy cause. But it doesn’t change the fact that in most public contests of any kind, employees and their family members are not permitted to enter, precisely because of the conflict-of-interest concerns that arise when something like this occurs.
Jalopnik has reached out to Toyota of Rockwall for additional comment and will update if we hear back.
Update, April 23 at 4:11 p.m. ET: A reader found the Supra sale listing from eBay username “rawlsdanny09,” which had a starting bid of $15,000 and a price of $35,000. The message on the top of the eBay listing is: “This listing was ended by the seller because the item is no longer available.”

Sunday, April 23, 2017

Tesla Model X Owner Asks For $1 Million After Falcon Doors Failed To Open In Crash





APR 23, 2017 AT 7:06 PM

The owner of a Tesla Model X in China is asking the electric automaker for 8 million yuan (about $1 million) in compensation after claiming that a crash and fire caused the falcon doors to no longer function.

According to the owner of the Model X, Lee Tada, she was sitting in the second row of seats with her boyfriend while their chauffeur was driving. Tada claims that while travelling at approximately 75 km/h (46 mph), the electric crossover hit a concrete siderail, spun 180 degrees and was hit by a Ford Focus.

She claims that both the falcon doors refused to open after the crash and that she and her boyfriend were forced to exit through the front door after they started hearing batteries explode, Electrek reports.

Lee says that she suffered a cut on her lower lip and a broken nose while the chauffeur was apparently hospitalized for over 40 days.

However, Tesla China isn’t buying the story and has refused to pay, claiming that the crash took place at high speed, not 75 km/h.

In a statement translated from Chinese, Tesla said “First of all, the lives of the owner and passengers were not threatened. We are working closely with the department concerned. The distribution of the debris at the site and the damage all indicate that this was a high-speed crash – in this case, not just electric cars, but any vehicle can catch on fire. In fact, another car involved in the accident (a fuel-powered vehicle) also caught on fire. Fuel tank fire incidents happen much more often than the electric car fires.

“In addition, Tesla has consistently insisted on the disclosure and transparency of information, including other information about the incident, such as the owner is asking us for 8 million yuan, and we will not accept.”


http://www.carscoops.com/2017/04/tesla-model-x-owner-asks-for-1-million.html



Tesla owner asks for $1 million after Model X caught on fire in crash and Falcon Wing doors wouldn’t open


Tesla is currently investigating a February accident in Guangzhou in which a Model X crashed on the highway and caught on fire. The owner of the vehicle and her boyfriend were sitting in the second-row seats, and they claim the Falcon Wing doors would not open after the crash, leaving them stuck in the back seat as the car began to catch fire.
They managed to exit through the front door just as the vehicle went up in flames, but not without injuries and now they are asking Tesla for 8 million Chinese yuan (~$1 million) in compensation.
Tesla China issued a statement about the accident following the owner’s demand (translated from Chinese):
“First of all, the lives of the owner and passengers were not threatened. We are working closely with the department concerned. The distribution of the debris at the site and the damage all indicate that this was a high-speed crash – in this case, not just electric cars, but any vehicle can catch on fire. In fact, another car involved in the accident (a fuel-powered vehicle) also caught on fire. Fuel tank fire incidents happen much more often than the electric car fires.
In addition, Tesla has consistently insisted on the disclosure and transparency of information, including other information about the incident, such as the owner is asking us for 8 million yuan, and we will not accept.”
Here are a few pictures of the aftermath (credits to Steven Liu):







In an open letter published this week on Cartek’s WeChat, Lee Tada, the owner of the Model X, gave her account of the accident.
She explained that her chauffeur was driving her and her boyfriend at ~75 km/h northbound on the highway in Guangzhou when they hit the concrete siderails; the car spun 180 degrees and was hit front-first by a Ford Focus.
They then tried to open the Falcon Wing doors, but she says that they were both stuck.
Side note: There’s actually an emergency latch to open the Falcon Wing doors if the button doesn’t work. It’s hidden behind the speaker cover, but as we learned while researching this story, this information is surprisingly not in the owner’s manual but in the emergency response guide, which is intended for first responders:
Therefore, it’s understandable that an owner wouldn’t know about it.
Back to her story. They started to hear the battery cells explode and managed to exit through the front door. A few seconds later, the Model X went up in flames.
Here’s a video of the aftermath (warning it’s graphic – and vertical):
In the open letter, the owner says that she suffered a broken nose and a severe cut to her lower lip that needed a dozen stitches, but the driver got the worst of it. She wrote that “he was hospitalized for more than 40 days with internal injuries and fractures”.
Beyond the physical injuries, she added: “it brought us more serious mental harm, after the accident and still today, I often have nightmares about being burned to death inside the Tesla Model X.”
It’s apparently what led her to ask Tesla for 8 million Chinese yuan (~$1 million) in compensation, which Tesla China is refusing to pay, but they are still investigating the accident and collaborating with the local authorities.
Tesla has been under scrutiny before over several instances of vehicles catching on fire. The media made a big deal out of it despite the fact that almost every instance happened after a high-speed accident, like this one. Statistics showed that Tesla’s vehicles caught fire significantly less often than the national average and NHTSA eventually conducted an investigation and found no problem.

https://electrek.co/2017/04/23/tesla-model-x-fire-crash-falcon-wing-doors-stuck/



A few Tesla owners filed a class-action lawsuit over the rollout of Tesla Autopilot 2.0 [Updated]






Last month, we reported on Hagens Berman, one of the law firms leading a class action lawsuit against VW and Mercedes for the emissions-cheating software, attempting to start a class action against Tesla over the claims made for Autopilot 2.0 features: Enhanced Autopilot and Full Self-Driving capabilities.
They have now officially filed the class action, led by three Tesla owners.


In the actual class action, they seem to have focused on the ‘Enhanced Autopilot’ feature instead of the Full Self-Driving feature, which, as we pointed out when they were seeking participants for the suit, they didn’t seem to understand.
As for ‘Enhanced Autopilot’, they are calling it “essentially unusable and demonstrably dangerous”. Steve Berman, managing partner of Hagens Berman, which represents the plaintiffs, said:
“Tesla has endangered the lives of tens of thousands of Tesla owners across the country, and induced them to pay many thousands of dollars for a product that Tesla has not effectively designed. Tesla sold these vehicles as the safest sedan on the road. What consumers received were cars without standard safety enhancements featured by cars costing less than half the price of a new Tesla, and a purported ‘Enhanced Autopilot’ that operates in an erratic and dangerous manner.”
He continued by saying that “to this day, Tesla has not released truly functional software for its Standard Safety Features or Enhanced Autopilot.”
While the suit correctly states that Tesla missed a few deadlines in bringing AP2 cars to parity with first-generation vehicles, the notes for the Enhanced Autopilot option clearly read that the rollout is “dependent on extensive software validation and regulatory approval”.
Furthermore, the lawsuit incorrectly describes the current features of the Autopilot on vehicles with second generation hardware as only having “a dangerously defective Traffic Aware Cruise Control” and “the remaining features simply do not exist.”
Of course, that’s inaccurate. Tesla’s ‘Summon’ feature has been released on Autopilot 2.0, as well as several updated versions of Autosteer and now even Auto Lane Change. They should actually know that since I mention it in an article that they are using as a reference in their own lawsuit.
The suit is not only seeking for Tesla to buy back the vehicles, but they also want damages for “the conduct of Tesla related to the defective Standard Safety Features and Enhanced Autopilot” and what they describe as “Tesla’s knowing fraud that garnered it illicit profits for a product suite that does not exist and put drivers at risk.”
We asked Tesla for a comment on the class-action, but we didn’t get an answer.
Update: a Tesla spokesperson sent us the following statement:
This lawsuit is a disingenuous attempt to secure attorney’s fees posing as a legitimate legal action, which is evidenced by the fact that the suit misrepresents many facts. Many of the features this suit claims are “unavailable” are in fact available, with more updates coming every month. We have always been transparent about the fact that Enhanced Autopilot software is a product that would roll out incrementally over time, and that features would continue to be introduced as validation is completed, subject to regulatory approval. Furthermore, we have never claimed our vehicles already have functional “full self-driving capability”, as our website has stated in plain English for all potential customers that “it is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval.”  The inaccurate and sensationalistic view of our technology put forth by this group is exactly the kind of misinformation that threatens to harm consumer safety.
Here’s the suit in full if you want to read it:



Michael DeKort • April 20, 2017

It's unfortunate that Elon Musk's ego, which drove him to do some amazing things, has now driven him to be a dangerous, unethical charlatan.
Regarding regulations: when NASA first reviewed his SpaceX code, they rejected it for not being properly tested, especially negative testing, and for not doing nearly enough exception handling. That happened because NASA, unlike Commercial IT, actually uses best engineering practices. They were the regulators. In this case NHTSA has punted. They are in over their heads just as much as the Commercial IT folks who are making AP. These folks drank way too much of their own bathwater. They are way too impressed with their skills at making apps, games and websites. Just look at the folks in charge across many of these AP companies. They came from PayPal, Twitter, Swift, travel websites etc. It's a massive echo chamber.
In addition, RAND, MIT and Mobileye have all come out recently and said AI is valuable but way overestimated. The folks who use it do not really know how it works. Corner cases are not found, and it would take hundreds of billions of miles of driving to stumble on all the data they need. These engineers are using machine learning to be experts for them. Since they have almost no background in this domain or in actual best practices, they have no choice. What really should be happening is AI mixed with proper systems engineering, other existing data sources and simulation to create a Scenario Matrix: a tool that would ensure due diligence is done and everyone is on the same page and handles scenarios the same way. What happens if a Ford AP owner buys a Tesla? Are all the same scenarios handled? Are they handled the same way? Does a difference entice the driver to do or not do something they shouldn't because of an expectation from the previous car?
Just in case the echo chamber of folks who only know what they know from the press, and have no experience in any of this, chimes in that I am against AP: no, I am for it. So much so that I want to see it happen ASAP. That ONLY happens if it is done the right way. Tesla is not doing it the right way: putting people at risk needlessly, depending far too much on AI, and not using actual best systems engineering practices.
I provide more detail and cite links that support the comments I made above.
Lockheed Engineer/Whistleblower - Due Diligence Recommendations for Autonomous and Driverless Industry





Due Diligence Recommendations for the Mobile, Autonomous and Driverless Industry







Autopilot Issues and Weaponizing of Vehicles
Autonomous/Driverless Systems
Several companies are currently creating driverless or autopilot vehicles and are using their customers, the public around them and, in some cases, professional drivers to gather the data they need and to test those vehicles. This makes those people beta testers or guinea pigs. It puts them at risk for no reason. It also wastes time. There is a better way. This includes Tesla and comma.ai. Waymo uses professional drivers. (Ford is stopping that practice because the professional drivers are falling asleep.)
Tesla
Tesla has convinced NHTSA, the insurance companies and others that their autonomous vehicles, even though nowhere near complete, save lives by lowering human-caused front-end collisions by 40%. They use that as a justification to use the public as data-gathering and test subjects. While this statistic may be true (though not of their recently degraded "drunk" software version, which I explain below), it is misleading. The problem is that they are nowhere near able to handle most of the other, less common accident scenarios, and they actually need their customers and the public to experience them in order to get the data needed to handle those situations properly later. Said differently, they need people to experience near-accidents or accidents, to be hurt or worse, so they can get the data they need to improve the design. As I state in more detail below, my issue with this is that they should be doing this another way, one that does not put people at risk; a way that Commercial IT doesn't understand well, unlike DoD, NASA, Boeing etc. Said differently: if I am curing a widespread disease like cancer but avoidably give people less common diseases that can kill them, am I doing my due diligence?
https://www.wired.com/2017/01/probing-teslas-deadly-crash-feds-say-yay-self-driving/
Tesla has stated that they need their customers to drive 6 billion miles to gather all the data needed, and that only a fraction of those miles have been driven. That many miles are needed because the data they need involves non-standard or exception scenarios like accidents or near-accidents; in order to stumble on all of the various scenarios, Tesla believes 6 billion miles must be driven. (Notice in this article Tesla basically says the lost lives are unavoidable and help them save more lives as a result.) I wonder how regression testing is handled when an update is made? Drive a couple billion miles again until you stumble on the test cases again?
In the most recent update, Tesla's AP regressed so badly that owners described it as "driving drunk". This system is still in use, was not recalled or replaced by Tesla, and is a significant regression and risk to the public. The car can barely stay within the lines on an average road, in daylight and in good weather. Why can't that be engineered on a test track, or with simulation or a manned simulator, before going out to the public? How is it a good PR move to let people see that the system can't handle rudimentary driving? Why would any company, or NHTSA, let those cars on the road or allow the public to be beta testers? To make matters worse, an inside report was recently leaked from Uber showing that in spite of years of "AI", "machine" and "deep" learning, their vehicles can't average a mile of autonomous driving without disengaging. (Tesla was no better.) This is why an industry-wide Scenario Matrix is needed.
Why can't these folks use simulation and test tracks to get the most basic scenarios down before putting the public at risk? Data from traffic engineers, researchers, the insurance and auto companies would give you the majority of the information you need to create the base scenario matrix. Using systems engineers and experts, you could then add huge amounts of variation to those scenarios. Once you have that, you can design and test to it. The great majority of real-life scenarios would be covered by this process. While this is going on, you can continue to gather data from drivers not in AP.
Lastly, most of these companies follow normal Commercial IT engineering and project management practices, which are traditionally nowhere near best practices. Most have no idea what CMMI is, what systems engineering best practices are, or how to design, build or test a system nearly as complicated as what is required here, especially regarding exception handling, negative testing, sensor integration or simulation.
Key Best Practices Not Being Used
Sensor Systems and Integration
These vehicles are not using a broad enough array of sensors, and in many cases rely on only LIDAR or cameras. (Tesla is using one camera at this time.) That is extremely unwise, since every sensor has weaknesses. Aircraft manufacturers use multiple sensors as well as probability and priority filters to ensure the right data is being used at all times. That includes FLIR, several types of radar, GPS, cameras and inertial navigation. (Automobiles need to add sound detection to that list.) This is what needs to be done in vehicles. Sensors can provide incorrect data. They can contradict each other. An example may be changed road patterns that contradict a map, or signs that for whatever reason cannot be read correctly. Or sensors are broken or performing poorly in bad weather. (Many of the beta-test cars out there now can't handle driving in simple scenarios, or with minimal exception handling, without disengaging every mile or so. They aren't even scratching the surface yet.) Sensor conflicts can never be allowed to result in the vehicle doing the wrong thing. It is imperative to not just double-verify but triple-verify or more in many cases.
(Why am I not hearing about the use of inertial navigation?)
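To make the probability/priority filter idea concrete, here is a minimal sketch in Python of the kind of multi-sensor voting described above: degraded sensors are dropped, outright disagreement forces a fail-safe, and the remaining sensors vote by confidence weight. The sensor names, weights and thresholds are invented for illustration and are not taken from any real vehicle.

    # Minimal sketch: fuse redundant sensor estimates with confidence weights,
    # and refuse to act when the sensors disagree too much to trust any of them.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor: str        # e.g. "radar", "camera", "lidar", "inertial"
        value: float       # estimated distance to an obstacle, in meters
        confidence: float  # 0.0 (broken/degraded) to 1.0 (healthy)

    def fuse(readings, min_agreement=2, max_spread=2.0):
        """Return a fused estimate, or None when the data cannot be trusted."""
        usable = [r for r in readings if r.confidence > 0.3]  # drop degraded sensors
        if len(usable) < min_agreement:
            return None  # not enough independent confirmation: fail safe
        values = [r.value for r in usable]
        if max(values) - min(values) > max_spread:
            return None  # sensors contradict each other: fail safe, alert the driver
        total = sum(r.confidence for r in usable)
        return sum(r.value * r.confidence for r in usable) / total  # weighted vote

    # Example: the camera is blinded by sun glare, but radar and lidar agree.
    print(fuse([Reading("camera", 95.0, 0.1),
                Reading("radar", 41.8, 0.9),
                Reading("lidar", 42.3, 0.8)]))  # ~42.0 m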
Exception Handling
Exception handling is where the system does something unplanned or unexpected. Accidents would be exception cases. NASA, DoD and the airline industry spend more time identifying these, designing in responses and testing them than they spend on the normal or expected path. Commercial IT, on the other hand, rarely identifies these, let alone handles them. Their processes actually don't support most of what is needed to find them, let alone ensure proper designs are implemented and tested. While many Commercial IT products don't require as much rigor as an aircraft, weapon system or spacecraft, driverless vehicles surely do.
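As a minimal sketch of that discipline, the fragment below gives every anticipated failure mode an explicit, designed response instead of leaving it to a generic crash handler. All of the names here are hypothetical illustrations, not any vendor's actual API.

    # Minimal sketch: enumerate the failure modes and design a response for each.
    class SensorTimeout(Exception): pass
    class SensorConflict(Exception): pass

    def plan_next_action(sensors):
        """Decide the next action; every known failure mode has a designed response."""
        try:
            distance = sensors.read_obstacle_distance()  # may raise either exception
        except SensorTimeout:
            return ("alert_and_hold", "sensor timeout: coast, warn driver, retry")
        except SensorConflict:
            return ("hand_back_control", "sensors disagree: never guess")
        if distance < 0:  # negative test: a physically impossible reading
            return ("hand_back_control", "invalid sensor data")
        return ("continue", distance)

    # Exercising the unhappy path, which happy-path-only development never tests:
    class FailingSensors:
        def read_obstacle_distance(self):
            raise SensorTimeout()

    print(plan_next_action(FailingSensors()))  # ('alert_and_hold', 'sensor timeout: ...')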
Other Key Areas
  • Using text-based scope docs that do not build into a full system view. Use Cases and Stories are extremely poor ways to elicit and explain scope, especially exception handling. What is needed are diagrams. These facilitate visual flow where exception-handling points can be seen. This step is the most important: if you cannot see all of the combinations, you cannot design or test for them. Not doing this one step alone will cripple these companies. They will have zero visibility into the entire system. They will get lost, make design and coding mistakes, break things that used to work, and not be able to test complete threads. All they will literally have is a massive stack of text.
  • Most Commercial IT companies have many products and separate teams. They rarely perform mass system integrations. There is very little system design being accomplished, especially not at this size or complexity. There is also very little object-oriented or UML design going on. This is caused by how many people choose to practice Agile: they purposefully ignore what they can know up front and utilize Use Cases and Stories, not diagrams, from that point forward. Most of Commercial IT's design process is not based on a full systems-design approach. They build one step at a time, purposefully ignoring whole systems.
  • They lack proper tools that facilitate scope decomposition through design, code and testing, something like DOORS. Commercial IT rarely has separate tools, let alone an integrated one. Most won't even use a proper Requirements Traceability Verification Matrix (RTVM) in Excel (a minimal sketch of such a matrix follows this list). This will result in missing and incomplete scope, design and testing. Where this would show up most is in their inability to deal with making, designing to and testing the massive Scenario Matrix that is needed to develop autonomous vehicles. They simply cannot handle all the variations.
  • They rarely have chief architects that look across the whole system.
  • Full system testing is rarely done. Especially when there are third party interfaces. Simulators are rarely built to replace those systems if they are not connected in the test environment. Exception handling or negative testing is rarely done.
  • There are rarely any coding standards. Especially built from in depth testing and exception handling. Examples - http://caxapa.ru/thumbs/468328/misra-c-2004.pdf, http://lars-lab.jpl.nasa.gov/JPL_Coding_Standard_C.pdf, http://www.stroustrup.com/JSF-AV-rules.pdf
  • Commercial IT rarely creates a product-wide integrated software configuration management system. They have dozens or even hundreds of little teams who have their own CM. This will result in the wrong software versions being used, which will lead to defects. It will also lead to laying patches on top of patches, which will result in defects.
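To illustrate the RTVM point above, here is a minimal sketch; the requirement IDs, artifact names and test cases are invented. The idea is mechanical: every requirement must trace forward to design, code and a passed test, and anything that does not gets flagged.

    # Minimal sketch of a Requirements Traceability Verification Matrix (RTVM).
    rtvm = {
        "REQ-101": {"design": "SDD-4.2", "code": "lane_keep.py", "test": "TC-88", "result": "pass"},
        "REQ-102": {"design": "SDD-4.3", "code": "braking.py",   "test": "TC-91", "result": "fail"},
        "REQ-103": {"design": None,      "code": None,           "test": None,    "result": None},
    }

    def audit(matrix):
        """Flag requirements with missing traceability or failed verification."""
        for req, links in matrix.items():
            missing = [k for k, v in links.items() if v is None]
            if missing:
                print(f"{req}: no trace to {', '.join(missing)}")
            elif links["result"] != "pass":
                print(f"{req}: verification failed ({links['test']})")

    audit(rtvm)  # REQ-102: verification failed (TC-91)
                 # REQ-103: no trace to design, code, test, result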
Miles Driven and Disengagement
Miles driven is virtually meaningless, as is data on disengagements; both likely lead to false confidence. Exactly what scenarios were experienced in this driving? Most of it is repeated. I can do far more with a properly planned 50 miles than with a million miles driven by drivers stumbling on scenarios. (Tesla says they need 6 BILLION miles driven to get the data they need, much of it accident data.) And since the data I really need is exception handling or near/actual accident data, I should be using simulation and simulators, not overly trusting and unwitting human guinea pigs. What do you do when software needs to be changed and there is a big regression-test impact? Drive those 6 billion miles again? If the answer is simulation, then you could have gotten it that way in the first place.
https://www.driverless.id/news/2016-disengagement-reports-show-waymo-absolutely-crushing-competition-every-single-metric-0176110/
NHTSA has not done its due diligence
NHTSA has allowed Tesla, and others, to determine what the new best practices are. The problem with that is most of these engineers come from Commercial IT, where they rarely experience engineering at a scale and complexity anywhere close to this. Their processes literally don't support doing it, especially regarding exception handling. They are in way over their heads, and they cannot tell, because no one around them comes from places that have this experience and the proper tools, like NASA, DoD or Boeing.
NHTSA is allowing industry to determine what the Scenario Matrices look like, and there is no effort to make a single minimum acceptable set for design and testing. This will result in massive differences, gaps and confusion. Cars will work differently based on brand. This is a mistake.
They determined that Joshua Brown should have been paying better attention when he was killed in an accident while using his Tesla autopilot. Tesla admitted the system did not have radar integrated well and that the camera system mistook the trailer for the sky because the sun was shining on it. I believe this is another clear example of these cars not being ready to be on the road in autopilot, of how the public should not be guinea pigs, and of how the average person thinks "Autopilot" means the car can drive itself.
Regarding the term "autopilot": using the term "autopilot" rather than terms like "driver assist" well before the vehicles have fully functional driverless systems is misleading, confusing, reckless and unnecessary. NHTSA stipulated that since Tesla states in its fine print that the systems are not actually autopilot and the user should keep their hands on or near the wheel, there is no issue. I contend that the term is misleading and that Tesla, through its own actions, like videos and press releases, has sent mixed signals to its users and the public, thereby creating a significant level of false confidence. The German government made the exact same points. Other watchdog groups, like Consumer Watchdog, have had issues with the process. (I believe the reason Tesla misleads people is so their customers and the public will be comfortable being their beta testers.)
Video of Elon Musk using his system in ways he told others not to do - https://www.youtube.com/watch?v=gDv9TEXtHzw&list=FLcDGGGtllzLmeV_UCebqHUw&index=8
Remote Control – Weaponized Vehicles
Many of these companies have already released, or are releasing, either remote-control versions of their vehicles or the source code so the system can be modified. Not only does this put these neophytes, and the public around them, at risk of accidents; these vehicles can also be weaponized. The worst offender here is comma.ai. They not only use their customers as guinea pigs, they released the software source code to them for free. This will allow users to modify the code and change the way the vehicles perform.
Also, the vast majority of companies and government organizations can be easily hacked due to poor cybersecurity practices, especially around Privileged Account Security. This means the source code for remote control cannot be deemed safe. Given this, the potential for harm far outweighs the good. As such, I believe remote control should not be an option under any circumstance.
More on this here - https://www.linkedin.com/pulse/privileged-account-security-massive-hole-most-why-isnt-michael-dekort

 Recommendations
Autonomous Vehicles
The term "autopilot" should not be used until vehicles meet Level 5 criteria. That should include demonstrating the vehicle can pass an industry-wide scenario matrix: the minimum set of normal driving and exception or accident situations the vehicle should be able to handle properly. This would include variables like vehicle type and performance, weather, terrain, moving and stationary obstacles, sensor degradation or conflicts, driver-induced mistakes, driver takeover, time of day, signage, road changes/condition, handling of external data sources etc.
These systems should not be released to the public, and that includes to professional drivers, until a minimum set of scenarios has been tested. (Ford stopped using professional drivers because they were falling asleep.) Simulation and simulators, in combination with inputs from the participants below, should be used for primary data gathering and testing.
Suggested Method for Data Capture, Design and Testing
Scenario Matrix
  • The key to all of this is creating a complete and accurate industry-wide Scenario Matrix which would be used for design and testing. (That includes regression and repeat testing.) A minimal data-structure sketch follows this list.
  • That matrix should include any situation that a user of the system could reasonably be expected to experience as well as as many variations of those scenarios as possible
  • Design and Testing should include the boundaries of those combinations
  • This matrix will help inform where AI, Machine learning is needed to help fill in the gaps
  • Minimizes repeat data found by those gathering data via driving
  • Helps avoid wasting time on repeat scenarios or missing intricacy of existing scenarios
  • Provides a checklist to help ensure macro and even some micro scenarios are not missed or incomplete
  • This will also ensure drivers who select different brands of vehicle do not have to worry about changes in which scenarios are covered or how they are covered. What happens if someone moves to a different brand and a scenario that was covered in their former vehicle is not? Or the scenario is handled differently? That driver could take control, or fail to take control, at the wrong time.
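Here is the minimal data-structure sketch referenced above: a Scenario Matrix built as the cross-product of driving variables, with a trivial coverage check. The dimensions and values are purely illustrative; a real matrix would be far larger and populated from the data sources listed below.

    # Minimal sketch: enumerate scenario combinations so coverage can be designed,
    # tested and regression-tested deliberately instead of stumbled on by driving.
    from itertools import product

    DIMENSIONS = {
        "weather":  ["clear", "rain", "snow", "fog"],
        "lighting": ["day", "dusk", "night"],
        "road":     ["highway", "urban", "construction zone"],
        "obstacle": ["none", "stopped vehicle", "pedestrian", "debris"],
        "sensors":  ["all healthy", "camera degraded", "radar degraded"],
    }

    def scenarios():
        keys = list(DIMENSIONS)
        for combo in product(*DIMENSIONS.values()):
            yield dict(zip(keys, combo))

    matrix = list(scenarios())
    print(len(matrix))  # 4 * 3 * 3 * 4 * 3 = 432 scenarios before finer variations

    # Coverage check: which scenarios have never been run in simulation or on a track?
    covered = {tuple(s.values()) for s in matrix[:300]}  # stand-in for test records
    gaps = [s for s in matrix if tuple(s.values()) not in covered]
    print(f"{len(gaps)} scenarios still untested")       # the checklist described above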
Data Sources
  • Note on AI - AI for every aspect OTHER than those that needlessly put people at risk in "autopilot" systems that are not yet reliable is obviously encouraged. For mapping etc. there is really no other way. (Mapping, for example, would have to be constant, something that will have to be crowd-sourced.)
  • Drivers not in autonomous mode
  • Automobile Companies
  • Insurance Companies
  • Researchers – to include Social Engineering – Expectations of other human drivers
  • Traffic Engineering
  • Government Agencies – NHTSA etc.
  • Product Team Exception Handling Inputs – Folks trying to break the system
  • Vehicle Performance, Configuration Changes and Issues
  • Weather
  • Road and Terrain – Time of Day - Changes especially for temporary work
  • Signage – To include human gestures
  • Sensors – System Device Capabilities - Handling of Conflicts and Missing or Flawed Data - Priority and Probability Filters - LIDAR, Radar, FLIR, GPS, Cameras, Sound etc. Every V2X receipt will have to be treated as a separate sensor input.
  • Moving and Stationary Objects
  • External Data Sources – Other Vehicles, Objects and Systems - V2X
  • Route deviations based on interior and exterior changes – Include handling of latency
  • User Error – When to ignore, correct or notify
  • System Wide Conflicts, Missing Data or Errors
Use of Data, Design and Testing Approach
  • Create the Scenario Matrix in an object-oriented software system that represents the combination of all the various data and system types. Once the data areas and exception boundaries are known, the various combinations of them can be created, tuned, changed and tested.
  • "Business" Rules - The variations of rules the vehicles need to use is massive. Far, far more than in most Commercial IT systems. Those folks don't "what if" much. Unlike in NASA, DoD or Boeing for example. Virtually every rule will have to be broken in certain situations. And those themselves need rules.
  • Utilize unmanned and manned simulations/simulators to run through the various scenarios, as well as variations of those scenarios. How do you do this by driving around? How do you repeat scenarios, or regression test? Drive around billions of miles over and over until you stumble on them? That's simply lazy and reckless systems engineering.
  • Utilize real-world testing on test tracks and in controlled public driving to verify the simulations. The key is not to go into the public domain until the rudimentary scenarios are proved via test tracks, simulation and manned simulators. And when public driving is done, do it in a controlled environment first. So far most of the vehicles out there can't stay within the lines on the road. They need to get the basics right before they involve the public. (Most of these companies are saying they have to use the public domain and human beta testers to gather data, design and test. They are wrong, to the point of being reckless.) Tesla: why wouldn't all of the sensors be integrated in simulations and on test tracks for basic operations before the cars go into the public domain? Imagine if NASA, DoD or Boeing did things like this.
  • You cannot use Agile for a project like this. Bottom-up will not work. If people use Agile they will be constantly tripping over what they have already done, constantly breaking things that used to work, missing huge pieces only to tear things back apart later. They will miss a lot in regression testing. This whole thing is far, far too complicated for Commercial IT's engineering practices, tools and most of its engineers to handle. Using Stories or Use Cases alone, and not diagrams and a Scenario Matrix, will hold them back for years if not doom them overall. This is not a bottom-up Agile exercise. It is a massive top-down systems engineering effort around a scenario matrix and object-oriented design. It's about defining what all the objects or variables are, then filling in the types, ranges and combinations. What is needed is an Agile-Waterfall hybrid with the use of actual best engineering practices. https://www.linkedin.com/pulse/software-development-one-best-approach-michael-dekort
  • V2X - Every receipt of information a vehicle gets has to be treated like a separate sensor unless there is a regional system that transmits truth to every vehicle, and can do so in actual real-time.
  • Real-time - There are very few systems in Commercial IT that operate in actual real-time. That term is usually defined by the user community and its most demanding scenario(s); most often it is determined by whether someone thinks something was fast enough. Data retransmissions occur all the time and have no impact. Folks from that industry do not have an understanding of the critical system timing that mass driverless vehicles and V2X systems will need. They think that since networks, CPUs and memory are all fast and have tons of capacity, all is well. (That includes folks who make OSs and games, though folks who make networked games surely have some insight, as do the VR folks. However, I would bet that a primary reason for people getting sick is system lag and timing.) With driverless vehicles and V2X, the entire system has to be designed to accommodate the most timing-dependent action of any one vehicle, and then a thread of many vehicles having the same need. If every data need was plotted out on a sequence diagram, you would see certain things happen at certain rates and in a certain order, possibly hundreds if not thousands of times per second. If a frame or window is missed, something bad can happen; with many vehicles all dependent on each other's actions, there could easily be a catastrophic domino effect. Networked aircraft simulators, especially those that fly at high rates of speed in formation or even air refueling, have to deal with this. That industry has been using global memory and system architectures that most folks in Commercial IT are unaware of. The entire system, with V2X, is based on asynchronous data exchanges. In order for that system to work and meet very demanding timing needs that are synchronous, you have to transmit truth fast enough and often enough that the receiving systems see exactly what they need, exactly when they need it. While they can often dead-reckon, there are plenty of times that will be a mistake. In order to avoid this, every vehicle will have to look at all data sources and sensors, treat V2X transmissions as sensor inputs, calculate what is truth from a wide array of inputs, then take the right action, at the right time and in the right order. (For those of you who think satellites are an option, do the math comparing one hop to how far a car goes at 25 mph and 75 mph. This also needs to include electro-mechanical delay in steering and braking.) A minimal sketch of treating a V2X message as a sensor input follows this list.
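Here is the minimal sketch referenced above: a V2X message treated as one more sensor input with a hard freshness window, dead-reckoned forward over its known latency and discarded when stale. The field names, the 50 ms budget and the speeds are invented for illustration.

    # Minimal sketch: a V2X receipt is a sensor input with a hard timing window.
    FRESHNESS_BUDGET_S = 0.05  # 50 ms: at 75 mph (~33.5 m/s) a car moves ~1.7 m

    def v2x_position(msg, now):
        """Return a usable position estimate from a V2X message, or None if stale."""
        age = now - msg["sent_at"]
        if age > FRESHNESS_BUDGET_S:
            return None  # missed the window: treat as a sensor dropout, never assume
        # Dead-reckon forward over the known latency using the sender's velocity.
        return (msg["x"] + msg["vx"] * age,
                msg["y"] + msg["vy"] * age)

    msg = {"sent_at": 10.00, "x": 120.0, "y": 4.5, "vx": 33.5, "vy": 0.0}
    print(v2x_position(msg, now=10.02))  # ~(120.67, 4.5): usable, latency-corrected
    print(v2x_position(msg, now=10.20))  # None: stale, fall back to onboard sensors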
Remote Control
There is significant risk of these vehicles being weaponized. As such, they should not be allowed to be remote-controlled, at least not until the system is proven foolproof and only the right organizations can control them. This should include those organizations proving that they themselves cannot be hacked.
Who will actually be first?
I do not have insight into what every company is doing. If one of them is experienced enough, has patience and has the right funding, it could be in the lead on this in 5 years or so. It will come out ahead because the others will have exceeded their actual experience or be locked up in civil or even criminal court over wrongful-death lawsuits. You could build a system with a Scenario Matrix so complete that governments and everyone else would have no choice but to defer to you. You could license it, or move forward with the only viable product and watch the other folks try to play catch-up.
My Background – 15 Years - Systems Engineer, Program Manager and Engineering Manager for Lockheed Martin – Aircraft Simulation, NORAD and the Aegis Weapon System. Commercial IT Project Manager for 11 years. Post 9/11 DoD/DHS Whistleblower - IEEE Barus Ethics Award recipient - http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4468728

Update 3-17-2017
New reports on Uber show they are struggling to make improvements: the driver has to take over every MILE. This shows AI, machine learning, deep learning etc. are only as good as the data they are fed and the plan to get them there. Driving around for billions of miles waiting to stumble on new scenarios, as your primary data-gathering method, is not a good plan; unless you are very lucky, you will hit data-gathering plateaus. As I have said, these folks are in over their heads. What they actually know can only get them so far, and apparently that isn't even far enough to stay between the lines of most roads in daylight and in good weather. Almost nothing they have experience in to this point applies to this. Twitter, Uber, PayPal, games etc. are not training grounds for this. These folks are re-plowing fields plowed long ago in other industries. It looks like they are all finding this out. The problem is that, as a result, they are needlessly risking people's lives and delaying the final product, which will help people.

Update 3-18-2017 - 11 Billion Miles to prove AP is only 20% safer than human drivers
Here is an excellent paper by Nidhi Kalra at RAND. Notice she says autonomous vehicles would have to be driven more than 11 billion miles to demonstrate, with 95% confidence and 80% power, that their failure rate is 20% better than that of human drivers. With a fleet of 100 autonomous vehicles being test-driven 24 hours a day, 365 days a year, at an average speed of 25 miles per hour, this would take roughly 518 years, about half a millennium.
This is what we want to expose human beta testers in premature AP systems to?
http://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
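The fleet arithmetic in that passage is easy to check; a back-of-envelope version of the numbers quoted above gives roughly 500 years, the same order as the 518-year figure in the text (which reflects RAND's exact assumptions).

    # Back-of-envelope check of the RAND fleet math quoted above.
    miles_needed = 11e9                # miles to demonstrate the failure-rate difference
    fleet, hours_per_day, days = 100, 24, 365
    avg_speed_mph = 25

    miles_per_year = fleet * hours_per_day * days * avg_speed_mph  # 21,900,000
    print(f"{miles_needed / miles_per_year:.0f} years")            # ~502 years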

Update 3-28-2017
There is an excellent Mobileye video on YouTube. In the video Mobileye makes many of the points I have tried to make here.
https://www.youtube.com/watch?feature=em-subs_digest&v=b_lBL2yhU5A
At 53:45 there is a discussion on how simulation and simulators should be used
At 56:40 they mention that for driverless systems at least 2 different sensor types should be used
At 1:12:00 they mention all the hype around folks driving around billions of miles to stumble on data

Update 4-13-2017
A very interesting article from MIT came out discussing how the folks who are using machine learning to create autonomous cars do not know why it works. The author posits that until they do, that approach cannot be considered safe enough. I agree. More here:
ADAS AI or Machine Learning - A Dark Art? Is it the best option? Is it safe?

Update 4-18-2017
Bloomberg article released today on the need to augment AI with simulation and the reasons for doing so.
https://www.bloomberg.com/news/articles/2017-04-17/don-t-worry-driverless-cars-are-learning-from-grand-theft-auto
The reasons cited include that scenarios cannot be repeated with AI alone, greatly inhibiting data gathering, engineering, and primary and regression testing. In addition, far too much time is needed to gather all the data needed, and using human beta testers, especially in difficult conditions, puts them at risk unnecessarily. This clearly backs up the information I presented earlier from RAND, MIT and Mobileye.
Update 4-20-2017
Tesla owners sue saying AP is dangerous

Update 4-21-2017
Mercedes will no longer use the term "Autopilot" for a vehicle that is not fully autopilot, believing that using the term before then is misleading and leads to a false sense of confidence.
Interesting that this was done right on the back of Tesla being sued for an autopilot bait and switch.

Final note - I want to explain the tone of my posts on this subject. It is direct, even critical, because I believe the only way to break through the overwhelming thought pattern is through an intervention. I realize that could turn off the folks I am trying to reach. But as I have tried softer paths, and this topic has life-and-death ramifications, I believe the approach is warranted because it is, unfortunately, the most likely approach to get folks to re-examine what they are doing, and hopefully to change course. I am more than available to help in any way I can.
https://www.linkedin.com/pulse/due-diligence-recommendations-mobile-autonomous-industry-dekort