Quote:
Originally Posted by Wizzpop
Understand the philosophy, but I am a little dubious. The accidents with the Boeing 737 M8 were apparently due to the aircraft sensors and software countermanding the human pilots to the extent they couldn't regain control.
At the risk both of going wildly off-topic and of speculating solely on the basis of what's in the media, I feel the issue with the 737 Max was the way automation was implemented rather than automation itself. A system classified as 'hazardous' should never have been designed to act on a single sensor input (there were two sensors available on the aircraft whose readings could have been compared). Also, Boeing appears to have decided that no operator intervention/override would ever be required if the system failed (which is why they excluded any mention of MCAS from transition training and the manual).
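To illustrate the dual-sensor point: a minimal sketch of what a cross-check between the two angle-of-attack vanes might look like. Everything here is illustrative, not Boeing's actual logic; the function name and the disagreement threshold are assumptions for the sake of the example.

```python
# Hypothetical dual-sensor cross-check, the kind of comparison the media
# reports suggest MCAS lacked. Names and thresholds are illustrative only.

DISAGREE_LIMIT_DEG = 5.5  # made-up threshold, not a real certification value

def aoa_for_control_law(left_aoa, right_aoa, limit=DISAGREE_LIMIT_DEG):
    """Return (value, valid). If the two vanes disagree beyond the limit,
    flag the data as invalid so the automation can disengage and alert the
    crew instead of acting on one possibly failed sensor."""
    if abs(left_aoa - right_aoa) > limit:
        return None, False  # sensor disagreement: inhibit automatic trim
    return (left_aoa + right_aoa) / 2.0, True

# A failed vane reading 74.5 deg against a healthy one reading 4.2 deg
# is flagged invalid rather than being passed to the control law.
value, valid = aoa_for_control_law(74.5, 4.2)
```

The point isn't the code itself but the design principle: with two independent sources, gross disagreement is detectable, and the safe response is to hand control back to the operator rather than trust either reading.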
It's this last point that should be the focus of any successful introduction of automation (whether in aircraft or cars): what are the potential failure modes of the automated system? What level of operator (driver) intervention might be required? And is it reasonable to expect a driver to provide that intervention?
In another thread there has been discussion of driver monitoring of Level 2/Level 3 autonomous systems. The human factors issues around monitoring automation are very significant... I'd be happy with the Volvo drugs-and-alcohol nanny provided I was assured of the rigour of the safety assessments underpinning it.