
Making the Magic Work: Decoding Automation Before It Derails Your Flight

By James Williams
Reprinted with permission from FAA Safety Briefing

I proudly admit to being a technophile. I’ve been using computers since I can remember, and possibly longer according to photographic evidence. I have at least six computers in my home, along with smartphones, media consoles, set-top boxes, GPSs, iPods, etc. But despite my technical expertise, I’m still routinely flummoxed by automation. I know I’m not alone in thinking: “Why did that just do that?!” With more advanced systems and high levels of automation rapidly moving into general aviation (GA) aircraft, now is the time to figure out your automation management strategies. So here are a few pointers I’ve found useful.

Monitor the Magic

At some point in your aviation career, whether you fly for a living or for pleasure, you’ve probably heard about Eastern Airlines flight 401. Late on the night of December 29, 1972, the crew executed a missed approach at Miami International Airport after the nose gear position indicator light failed to illuminate. While attempting to troubleshoot the problem, the crew failed to notice that the aircraft was slowly descending into the Everglades. The National Transportation Safety Board (NTSB) concluded that the flight crew failed to monitor the flight instruments and detect the unexpected descent in time to prevent the accident.

Despite having a full crew of three properly qualified pilots (captain, first officer, and second officer) and a maintenance specialist in the jump seat, no one was monitoring the airplane’s flight path. Instead, everyone on the flight deck became completely consumed with what turned out to be a burnt-out bulb. They all assumed that the autopilot would hold the assigned altitude of 2,000 feet MSL, and no one noticed the autopilot disconnect or the radar altimeter warnings until it was too late.

This is a classic case of failure to monitor the automation. Even the best systems have their faults, and it’s never a good idea to trust them completely. Your life may be at stake, so keep your scan going even when the autopilot is engaged. Be vigilant about which automation modes are in use (e.g., NAV, heading, or VNAV). To keep your brain engaged, use verbal callouts anytime you make a change to airspeed, altitude, heading, frequency, or automation mode. You might also consider making callouts when you cross each waypoint along your route.

Know the Systems

The NTSB report observes that many factors were at play in the fatal Colgan 3407 accident in 2009. One was that the crew apparently lost track of a system they had activated and of how that system interacted with other aircraft systems. Early in the flight they turned on the anti-icing systems, which included selecting a switch that increased the reference speeds. This increases the margin above stall to compensate for any aerodynamic penalty the ice might impose. The crew discussed their experience with icing and noted ice on the airframe, but did not indicate any real concern (the NTSB agreed, concluding that icing did not adversely affect the handling characteristics of the accident flight). When the first officer set the performance data for landing, however, she did not account for the fact that the Vref increase system was active. This error created a conflict between how the aircraft was configured and the reference speed the systems recommended for the approach: the system called for 118 KIAS when, with the Vref increase system on, it should have been 138 KIAS. The alternative would have been to turn the system off, which would also have removed the conflict between the aircraft’s systems and the crew’s expectations.

As the captain slowed the aircraft toward the 118 KIAS reference speed, the stick shaker activated at 131 KIAS; with the ref speeds switch on, the stall warning system triggers at a higher airspeed than the crew was expecting. The surprised captain pulled back on the yoke while adding power. Pulling back increased the g-load, which in turn increased the stall speed. As the airspeed decreased through 125 KIAS, the aircraft exceeded its critical angle of attack (AOA) and stalled. Even after the stick pusher activated twice in an attempt to break the stall, the captain continued to pull back. Multiple crew misunderstandings about the information and system interactions played a role in the outcome.
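To see why pulling back makes matters worse, recall that stall speed rises with the square root of the load factor. The numbers below are purely illustrative assumptions, not figures from the NTSB report; they simply show the trend:

\[
V_{s}(n) = V_{s}(1g)\,\sqrt{n}
\]
\[
V_{s} \approx 105\ \text{KIAS} \times \sqrt{1.4} \approx 124\ \text{KIAS}
\]

In other words, an airplane decelerating through 125 KIAS can reach its critical angle of attack even though its wings-level, 1g stall speed is well below that, which is why stall recovery calls for reducing the angle of attack rather than hauling back on the yoke.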

While most GA aircraft systems are less sophisticated, we still have interdependent systems. Moreover, interdependent avionics will become more common. Radios are tied to displays, which are tied to course deviation indicators (CDIs) and moving maps. The point is that you need to know how each of those systems interacts with the others, and where there might be potential pitfalls.

Be Ready for Malfunctions

While automation can help reduce workload, pilots must be prepared in case it suddenly disappears. In 2005, a Cirrus SR-22 crashed following apparent pilot disorientation. According to the NTSB, the pilot was instrument rated and had more than 400 hours in type. However, he had only 15 hours of actual instrument experience. He became disoriented after his Primary Flight Display (PFD) failed.

An instructor who had previously flown with the pilot stated that they had practiced partial-panel flying less than a month before the accident, in addition to a number of earlier partial-panel practice sessions. Clearly, then, the accident pilot had considered the chance that his PFD could fail; in fact, it had malfunctioned in the past. But, as you might imagine, there is a world of difference between practicing in a situation where you are prepared for the failure and seeing your workload dramatically and unexpectedly increase in actual instrument conditions. That alone is a good reason to make training as realistic as reasonably possible. And, as always, have a reliable and workable contingency plan.

These are three good starting points for managing not only the magic (aka automation) in the cockpit, but also your overall flying, in a safe and professional manner.

What tips do you have?

James Williams is FAA Safety Briefing’s assistant editor and photo editor. He is also a pilot and ground instructor.

Better than Real

By Harlan Gray Sparrow III

You may have heard it is possible for a pilot to earn a type rating without ever having been in the real airplane. This is possible – and safe – because simulation technology these days is as real as it gets. In fact, simulators make it possible to conduct even more extensive training, because it is possible for the pilot in a simulator to experience realistic failures and malfunctions that would not be safe to simulate (much less perform) in the real airplane.

As you might imagine, someone in the FAA has to decide whether a simulator is sufficiently realistic to substitute for the actual airplane and meet training requirements. That “someone” is actually a group of people: the National Simulator Program (NSP), which is organizationally part of the Flight Standards Service’s Air Transportation Division. Established at FAA Headquarters in 1980, the NSP began with a staff of 12 and regulatory oversight responsibility for 92 simulators, both visual and non-visual. Since 1982, the NSP has been physically located in Atlanta, Georgia.

The NSP is charged with evaluating and qualifying more than 760 flight simulators and numerous flight training devices (FTDs), and with recommending them for approval for use in FAA-approved flight training curricula. It is through the efforts of the NSP that qualified flight simulators are available for approval and subsequent use in training airline crewmembers, commercial and private operators, and FAA inspectors.

The policies and procedures established by the NSP focus on evaluating the performance of the simulator in comparison to the performance of the aircraft, both objectively and subjectively. Any comparison other than simulator-to-aircraft introduces the possibility of comparison errors and requires detailed evaluation by the NSP’s technical staff in accordance with the applicable regulations.

The NSP is also responsible for setting criteria and standards (as defined in Title 14 Code of Federal Regulations [14 CFR] part 60) for initial qualification and recurrent evaluations for aircraft and rotorcraft simulators, as well as for Level 6 and 7 FTDs. The NSP provides initial evaluation of reference data for Level 4 and 5 FTDs, if required, and provides technical assistance to the Flight Standards District Office (FSDO) with responsibility for approval of the FTDs.

In addition, the NSP designates pilot simulator evaluation specialists to serve as operations members and active participants on the Flight Standardization Boards (FSB) and Flight Operations Evaluation Boards (FOEB) for new aircraft.

NSP inspectors and engineers travel throughout the world evaluating FAA-approved simulators and assisting foreign countries that have requested technical assistance through the U.S. State Department. Moreover, the NSP works with international organizations to improve simulation standardization worldwide. We are truly here to help.

Harlan Gray Sparrow III is the manager of the FAA’s National Simulator Program. For more information, please see www.faa.gov/about/initiatives/nsp.
