Friday, May 31, 2013

Variations and Training

Variations and Training

Since Flight Data Recorders are not required by regulation to be installed in Light Twin Engine Airplanes, Air Operators of these airplanes are not required to monitor and record altitude and airspeed during flight. However, the majority of light airplanes today are GPS equipped, and flight parameters are therefore available.

Common Cause variation corresponds to Normal Operations, and Special Cause variation corresponds to Emergency Operations.

There are two segments to the chart: the first covers normal flight conditions, and the second begins where the airplane enters a thunderstorm.
The departure airport elevation is 3500ft, the route crosses a mountainous area with peaks to 10000ft, and the destination airport elevation is 3500ft.
The climb to the cruising altitude of 12000ft is normal. At 12000ft the pilot levels off, maintaining an airspeed of 165kts.

The airplane enters a slow climb with a slight reduction in airspeed. At 12300ft the airplane suddenly drops back to 12000ft.
Shortly thereafter the airplane descends with a slight increase in airspeed. At 11700ft the airplane climbs back to 12000ft.

The flight enters a thunderstorm, the airspeed drops to zero, and the airplane climbs to 15000ft. Just as suddenly as it climbed, the airplane drops down to 11000ft. The airspeed suddenly comes back and increases to 190kts, and the airplane again climbs to 15000ft, where the airspeed decreases to 120kts; the airplane drops to 11000ft and then climbs back up to 12000ft.
Next comes another increase in airspeed and a climb to 15000ft, followed by a sudden drop in speed to 55kts and a drop in altitude from 15000ft to 11000ft. Shortly thereafter operations are back to normal, with a cruising altitude of 12000ft and 165kts.
The first segment, normal flight, shows Common Cause variation: the pilot is inattentive and allows the airplane to climb and descend beyond acceptable altitude limits. Training for Common Cause variation is training for Normal Operations, where an Air Operator may apply training to enhance pilot performance.

The second segment, the thunderstorm, shows Special Cause variation, where the airplane's performance is driven by the forces of the thunderstorm. Shortly after entering, the Pitot Tube ices over and the indicated airspeed goes to zero, followed by a force lifting the airplane to 15000ft and then dropping it down to 11000ft. The airspeed comes back, and the airplane is hit by an updraft with an increase in airspeed to 190kts. This is followed by a downdraft, and the pilot attempts to maintain altitude by pulling back on the control column, reducing the speed to 120kts.

A final blow to the flight comes when a severe gust causes one wing to stall at 55kts and the airplane enters a spin.

SMS is about using the tools available and moving forward; it is not about getting stuck in the past with old systems.
Altitude and speed variations beyond acceptable limits due to a thunderstorm are Special Cause variations and require different training than Common Cause variations. Special Cause training is training for Emergency Operations.

By applying GPS parameters, Air Operators have an SMS tool available to collect and analyze data, apply this knowledge in developing Training Programs, increase Safety Margins, and elevate the level of Customer Service.
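As an illustration, screening recorded GPS parameters for variation beyond acceptable limits could be sketched as below. This is a minimal, hypothetical example: the altitude samples and the 200ft tolerance band are assumptions for illustration, not values from any operator's program.

```python
# Minimal sketch: flag GPS altitude samples that fall outside an
# acceptable band around the cruising altitude. The samples and the
# 200 ft tolerance are hypothetical values for illustration.

CRUISE_ALT_FT = 12000
TOLERANCE_FT = 200  # assumed acceptable deviation band

def flag_excursions(altitudes, cruise=CRUISE_ALT_FT, tol=TOLERANCE_FT):
    """Return indices of samples deviating more than `tol` from cruise."""
    return [i for i, alt in enumerate(altitudes)
            if abs(alt - cruise) > tol]

# Cruise-segment samples (ft): normal drift, then thunderstorm upsets.
samples = [12000, 12100, 12300, 12000, 11700, 12000, 15000, 11000, 12000]
print(flag_excursions(samples))  # → [2, 4, 6, 7]
```

Flagged points would then be classified as Common Cause variation (normal operations, addressed by training) or Special Cause variation (such as the thunderstorm upsets).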


Wednesday, May 22, 2013

Why Are Some Accidents Recurring?

Why are some accidents recurring?

 In our aviation culture we are great at Analyzing Failure. Our focus seems to be on “What happened.” Even with this great emphasis on Failure Analysis, we still see the recurrence of similar accidents. The question is why! I recently participated in a discussion on the Aviation Safety Group. The discussion brought up several reasons for the recurrence of some accidents. Here they are:

  1. No resources to complete the job. Once a cause or causes have been identified, upper management does not support the efforts to implement the action necessary to eliminate or mitigate the causes. It seems that all Safety related issues can be traced back to upper management. Dr. W. Edwards Deming would not even work with a company on improving its quality unless the CEO on down were actively supporting the system.
  2. Investigation team with no experience. I always push the idea that Root Cause Analysis (RCA) teams must be “multidisciplinary.” In addition, there must be some experienced members who have participated in other RCA sessions. Not all members of the team need to be experienced, but there should be some facilitators who can lead the progress of the RCA.
  3. Solutions do not in fact reduce the risk. This occurs when the RCA team goes in the wrong direction. It is important to utilize tools for Root Cause Analysis that take away the “type A” personality and give each member of the team equal say. I have found that these tools are readily available, and I recommend a booklet: “SMS Memory Jogger II,” published by GOAL/QPC. There are many tools found in this book that will accomplish the equitable results of an RCA. (SMS Memory Jogger II can be found at

Tool book authored by Dennis and Sol Taboada
  4. Failure of management recommendations. Remember that the person who knows the most about a job is the person who actually does it. Management can recommend, but the recommendation must be analyzed and, in most cases, a risk assessment done to verify that the recommendation itself does not add to the risk.

  5. The root causes are not identified. Of course this is the most popular answer as to why accidents recur. It is important to follow the procedures for a standard RCA tool. First we must know the tools and how to use them. If we do not address the real Root Cause(s), the problem will recur.

It is important to educate yourself and your key employees on how to, first, pick the right tool for the job, and then know how to use it. I have found symposiums to be an excellent forum to discuss, identify and use different tools to help manage risk, failure, RCA and goals for our company. There is an excellent symposium coming up this September that I recommend: “The TOOLS for SMS and QA.” This symposium will be held at the Coronado Resort on the Walt Disney World property in Orlando, Florida... a beautiful location to bring the family!! The speakers will be excellent and the workshops very engaging, guaranteed to arm you with tools that will enhance your company's Management Systems, both QA and SMS. You can learn about the symposium by going to my website: click on symposium, or go to the GOAL/QPC website:

Your thoughts.....

Tuesday, May 21, 2013

SMS, QA and Diabetes

SMS, QA and Diabetes

Sometimes things don’t get done until they have to be done. Humans are great at procrastinating and, when faced with options, will often take the “least challenging road,” with less work and fewer responsibilities. So, more often than not, things don’t get done unless they have to be done.

In any process there are planned strategies. However, in a system with interacting processes the result might not be as expected. 

Diabetes is a life-threatening disease, and if it is not managed a person might not have more than a few years to live. Being diagnosed with diabetes is therefore a huge incentive to get started on a Carbohydrate Quality Assurance Program.
Diabetes is a condition in which the body is no longer conducting its own automatic safety checks and is not making corrective actions, even though its own Quality Assurance program is still detecting faulty processes. Young people with diabetes learn at an early age to document events, analyze results, analyze for common-cause or special-cause variation, then make corrective actions, and every three months do a QA check via the A1C test. (Since glucose stays attached to the hemoglobin for about three months, the test reveals a three-month average.) In a life with diabetes one has to plan for every activity, execute the plan, check the result and make corrective actions. Diabetes management is the Plan-Do-Check-Act (PDCA) cycle, and it’s a system young people learn and are capable of managing by the time they are 10-12 years old.
Diabetes is much like driving a car with a standard stick shift rather than an automatic transmission. With an automatic, all one needs to do is give it gas and it will go. With a stick shift one must plan and control.

When an SMS process is found to be a complete system failure, operations will in most cases cease, and a new and proven process will be established. When a process quits in diabetes, one would be injected with Glucagon (a hormone to raise blood sugar) and implement a new process, since the current one did not work.

So, where does diabetes fit in with SMS? Managing diabetes is much like SMS in that one has to manage the processes, be in control of the processes, and understand what is wrong when the processes don’t give the expected results. Just like SMS, diabetes involves managing intangible processes where knowledge and understanding are inputs and the output is a value which should be within established parameters.

Procrastination is to wish for results without planning and management. It’s like buying a lottery ticket and using it as collateral before the draw.

The incentive for a certificate holder to implement SMS is to be regulatory compliant and able to continue operations. By applying SMS processes, the organization has established the first step in conforming to regulatory requirements. However, there is no guarantee that the processes are always producing desired results. After this threshold has been established, an organization should implement Best Practices (BP) to ensure safety and to operate in an environment with zero tolerance for compromising aviation safety.

Humans are procrastinators and a Safety Management System must therefore be mandated by regulatory requirements to be effective. Further, a mandated regulatory requirement for SMS establishes uniform and similar SMS requirements in the aviation industry across international borders.



Wednesday, May 15, 2013

SMS, NDT and Human Errors

SMS, NDT and Human Errors

After major airplane accidents, pilot error is often determined to be the only cause, and calls for more automation immediately appear on the list of demands. Automation is necessary and helpful. The days are gone when a pilot must leave the flight controls to manually and mechanically lower the landing gear in an emergency, as is necessary on the WWII-veteran airplane PBY-5A.

However, when pilot error is determined to be the root cause, safety is compromised and the door is closed to treating human error as an operational link to organizational factors.

Making critical flight functions dependent on automation may at first impression appear to be the right thing to do. On the other side of the coin, removing human authority from the processes may lead to unexpected consequences.

In the aircraft production line human factors are applied in testing of material quality. 
Non-Destructive Testing (NDT) is a process that improves aviation safety through quality control for material defects and material fatigue. This method of testing does not destroy the product, so tested components can be used in the production line. NDT is used in almost any industry where a material performance quality control system is in place.

There are different types of NDT applications. Some of these are x-ray, ultrasound, fluorescent penetrant, magnetic particle, isotope and acid inspections. Each of these processes is applicable to specific parameters of the quality control process. X-ray is used to check for internal defects, ultrasound is a non-precision inspection for material flaws, fluorescent penetrant is used to inspect for surface pores or cracks, magnetic particle for defects in ferrous material, isotopes for coating thickness, and acid for material temperature variances.

An example of the fluorescent penetrant process is inspecting for pores and cracks on a compressor disk before assembly in a jet engine. A fluid visible under fluorescent light is applied, and any pores or cracks will show up under fluorescent light in a dark room.
Another example is the inspection of temperature control in the production of a turbine engine rotor shaft. A rotor shaft in production must be exposed to temperatures evenly and within tolerances. After production, a quality control test is conducted by submerging the shaft in a tub of acid for a period of time. When the shaft is lifted out, discoloration on the surface will indicate whether the material has experienced variations in temperature.

When pores are found and discoloration is identified, the investigation reviews the possibility of inspector human error, and the inspection and manufacturing processes are investigated. By identifying segments of the processes, it becomes possible to investigate human factors, the inspection processes and the manufacturing processes without ambiguity.

If automation replaces humans in critical stages of a process, the human-error factor is not eliminated, but transferred into an automation package. 

When human errors are concealed in automation, these errors may not be correctable, due to automation's lack of performance resilience.
People are resilient, with the ability to recover from mistakes. In an SMS world, it is people who make the difference in how errors are managed.


Monday, May 13, 2013

Safety Through Control

Safety through Control

An Event
A true event from the Civil Aviation Daily Occurrence Reporting System (CADORS) in Canada.
Two separate companies fly passengers between the same two cities in Canada, competing with the slogan “We’ll have you home before dinner time.” One company, which we shall call Dart Air, has two crews flying this route. Dart Air management pressures the crews to do all they can to reduce the flying time. The pressure appears to be working, since the times from wheels up to wheels down have been steadily decreasing. Dart Air records the flight times in the run chart below:

Things look good.
As evidenced by the run chart, the flight time is being reduced by the crews. Management's pressure is paying off. The pilots and crews were praised for the progress. The company's record against the competition also showed a steady reduction in flight times.
The Incident.
On the last Friday shown on the run chart, one of the flights ran off the end of the runway. The CADOR revealed that there were no injuries, but the plane sustained moderate damage to the landing gear. An internal investigation was initiated by Dart Air. Interviews of both pilot teams revealed that the two crews were engaged in a competition to see who could fly the route the fastest! The pressure from Dart Air management fostered and encouraged the competition, which ended in a near disaster. It was not until the overrun and near accident that the unsafe condition present on this route was revealed. You might say we have an excellent reactive process.
Swiss cheese
The model.
The Swiss Cheese model of accident causation is a model used in the risk analysis and risk management of human systems, commonly aviation, engineering, food industry and healthcare.  It likens human systems to multiple slices of swiss cheese, stacked together, side by side. The swiss cheese model is sometimes called the cumulative act effect. Reason hypothesizes that most accidents can be traced to one or more of four levels of failure: Organizational influences, unsafe supervision, preconditions for unsafe acts, and the unsafe acts themselves. In the Swiss Cheese model, an organization's defenses against failure are modeled as a series of barriers, represented as slices of swiss cheese. The holes in the cheese slices represent individual weaknesses in individual parts of the system, and are continually varying in size and position in all slices. The system as a whole produces failures when all of the holes in each of the slices momentarily align, permitting (in Reason's words) "a trajectory of accident opportunity", so that a hazard passes through all of the holes in all of the defenses, leading to a failure.

The Swiss Cheese Model to Accidents

The Swiss Cheese model suggests that several latent failures lined up to allow conditions to be right for an accident. The question can be posed: what if we can catch the latent failures before the alignment? In other words, close the hole in even just one of the Swiss Cheese barriers. The conclusion would have to be drawn that the accident could then be prevented. The use of “control” methods in each of the sub-processes would accomplish this same prevention.

Traditional Safety Management Systems
What’s wrong with the traditional safety management system? The answer is simply nothing. However, the problem occurs with the events we choose to “analyze.” Most safety management systems analyze problems discovered by the quality control system. The quality control system “discovers” problems that are then analyzed for root cause in the quality assurance system. This relegates the safety management system to a system of “failure analysis” acting on incidents and problems that have already occurred. 

The flowchart is a graphic of the traditional safety process. Some would argue that the safety management system does act “proactively” by identifying “potential hazards” in the proactive reporting system. Of course identifying hazards proactively is better. But what if we can have a system that identifies potential hazards before they become hazards? That is where the concept of “control” comes in.
Dr. Walter Shewhart stressed that bringing a production process into a state of Statistical control, where there is only chance cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood data from physical processes never produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell shaped curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times. It is the uncontrolled variation that must be identified and mitigated. This can be accomplished statistically, visually or experientially. 

Probably the greatest contributor to the concept of “control” is my hero Dr. W. Edwards Deming. In 1927, Deming was introduced to Walter Shewhart of the Bell Telephone Laboratories by Dr. C.H. Kunsman of the United States Department of Agriculture (USDA). Deming found great inspiration in the work of Shewhart, the originator of the concepts of statistical control of processes and the related technical tool of the control chart, as Deming began to move toward the application of statistical methods to industrial production and management. Shewhart's idea of common and special causes of variation led directly to Deming's theory of management. Deming saw that these ideas could be applied not only to manufacturing processes but also to the processes by which enterprises are led and managed. This key insight made possible his enormous influence on the economics of the industrialized world after 1950.
Deming edited a series of lectures delivered by Shewhart at USDA, Statistical Method from the Viewpoint of Quality Control, into a book published in 1939. One reason he learned so much from Shewhart, Deming remarked in a videotaped interview, was that, while brilliant, Shewhart had an "uncanny ability to make things difficult." Deming thus spent a great deal of time both copying Shewhart's ideas and devising ways to present them with his own twist.
Deming developed the sampling techniques that were used for the first time during the 1940 U.S. Census. In 1947, Deming was involved in early planning for the 1951 Japanese Census. The Allied Powers were occupying Japan, and he was asked by the United States Department of the Army to assist with the census. While in Japan, Deming's expertise in quality control techniques, combined with his involvement in Japanese society, led to his receiving an invitation from the Japanese Union of Scientists and Engineers (JUSE).
JUSE members had studied Shewhart's techniques, and as part of Japan's reconstruction efforts, they sought an expert to teach statistical control. During June–August 1950, Deming trained hundreds of engineers, managers, and scholars in Statistical Process Control (SPC) and concepts of quality. He also conducted at least one session for top management. Deming's message to Japan's chief executives: improving quality will reduce expenses while increasing productivity and market share. Perhaps the best known of these management lectures was delivered at the Mt. Hakone Conference Center in August 1950.
A number of Japanese manufacturers applied his techniques widely and experienced theretofore unheard of levels of quality and productivity. The improved quality combined with the lowered cost created new international demand for Japanese products.

Process Control: The Key to Safety
Dr. Deming states that everything is a process. All functions in our companies can be identified as processes. The process model shows the continuous improvement cycle. You could easily substitute the safety management system for the quality management system. The measurement and analysis portion of the QMS is the engine that runs control of processes. Instead of analyzing incidents and failures, the system monitors “variation” in the processes that were used to realize the product.

Product realization can also mean service realization. All processes have outputs. These outputs can be monitored numerically. Numerical outputs can then be monitored and improved by using control charts.
Control Charts
A control chart consists of:
  • Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times [the data]
  • The mean of this statistic using all the samples is calculated (e.g., the mean of the means, mean of the ranges, mean of the proportions)
  • A center line is drawn at the value of the mean of the statistic
  • The standard error (e.g., standard deviation/sqrt(n) for the mean) of the statistic is also calculated using all the samples
  • Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' and are drawn typically at 3 standard errors from the center line
The chart may have other optional features, including:
  • Upper and lower warning limits, drawn as separate lines, typically two standard errors above and below the center line
  • Division into zones, with the addition of rules governing frequencies of observations in each zone
  • Annotation with events of interest, as determined by the Quality Engineer in charge of the process's quality
Control Acts on Variation Rather than Events

The control chart is a powerful tool for monitoring variation in a process.  The chart allows you to determine when variation is simply due to random (common cause) variation or when the variation is due to special causes.  How does a control chart tell if only common cause variation is present? This is determined by the data itself.  Using the data, we compute a range of values we would expect if only common cause variation is present.  The largest number we would expect is called the upper control limit.  The smallest number we would expect is called the lower control limit.  In general, if all the results fall between the smallest and the largest number and there is no evidence of nonrandom patterns, the process is in statistical control, i.e., only common cause variation is present. 

A control chart tells you if your process is in statistical control (i.e., only random variation is present) or if your process is out of statistical control (i.e., special cause variation is also present).  To be able to determine which situation is present, you must be able to interpret the control chart.  The most important factor in finding the reason for a special cause is time.  The faster the out of control situation is detected, the better the odds of finding out what happened. This is why control charts are very powerful tools for all associates.  If associates are keeping the charts, they can immediately begin to look for the reasons for out of control points when they appear on the chart. Control charts are based on a general model.  The general model is that the control limits are ±3 standard deviations from the average. The control limits are sometimes called 3 sigma limits. 
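The computation described above can be sketched in a few lines. This is a simplified illustration with made-up numbers: it estimates the spread with the sample standard deviation of individual measurements, whereas a proper individuals chart would typically use the average moving range.

```python
import statistics

def control_limits(baseline, sigmas=3):
    """Estimate center line and control limits from in-control baseline data."""
    center = statistics.mean(baseline)
    spread = statistics.stdev(baseline)  # sample standard deviation
    return center - sigmas * spread, center, center + sigmas * spread

def out_of_control(points, limits):
    """Return indices of points falling outside the control limits."""
    lcl, _, ucl = limits
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

# Hypothetical measurements: a stable baseline, then new observations.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.0, 9.7]
limits = control_limits(baseline)        # roughly (9.4, 10.0, 10.6)
print(out_of_control([10.1, 9.6, 11.2], limits))  # → [2]
```

The key design point is that limits come from the data of a process already in statistical control; new points are then judged against those limits rather than against a target someone picked.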

Dart Air Revisited
The Incident Again.
Dart Air management was praising the crews for reducing the flight time between the cities. The run chart reveals a steady drop in flight times. It was obvious that the crews were coming up with ways to make the flight much more efficient. This trend continued until one of the flights ran off the runway. The CADOR report cited excess landing speed as a contributor to the incident. Analysis by the company's own personnel revealed that the two crews had a competition with each other over who could reduce the flight time the most. Unfortunately after the fact, we applied control limits using the data from the run chart.
By applying control limits to the run chart, we can see that on the Monday before the incident the chart went “out of control.” If Dart Air had monitored the flight time with a control chart rather than a run chart, the chart would have alerted company management that something was “abnormal” with the flight time. Yes, the abnormal situation would have appeared “before” the incident and thus could have prevented the plane from running off the end of the runway.
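The alerting could be sketched as follows. The flight times are hypothetical, invented to mirror the Dart Air story: limits are computed from a stable baseline period, and the final week's steadily shrinking times fall below the lower control limit.

```python
import statistics

# Hypothetical wheels-up-to-wheels-down times (minutes) for the route.
baseline_minutes = [62, 61, 63, 62, 60, 62, 61, 63, 62, 60]  # stable weeks
last_week = [58, 56, 54, 52, 50]  # Monday onward, steadily dropping

mean = statistics.mean(baseline_minutes)
sigma = statistics.stdev(baseline_minutes)
lcl = mean - 3 * sigma  # lower control limit, about 58.4 minutes

# Every flight in the final week is below the lower limit: the chart
# goes "out of control" on Monday, days before Friday's overrun.
alerts = [t for t in last_week if t < lcl]
print(alerts)  # → [58, 56, 54, 52, 50]
```

Note that a shorter flight time looks like "good news" on a run chart; on a control chart it is simply variation beyond the natural limits of the process, and therefore a signal worth investigating.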

Variation vs. Failures

The Dart Air case is a perfect illustration of the power of “control.” When a company uses control tools such as control charts, we see management acting on “variation” rather than failures. There is also an added benefit to the company's safety management system. When a process is monitored using statistical methods, that process can be exempted from the audit program, freeing up audit resources to audit other areas.

The question arises: what if a process does not produce numerical outputs? Remember, the key is “variation.” Processes must be governed by procedures that define what is normal. Anybody can see that something is “abnormal” if, and only if, they know what normal is in the process. Some examples of “variation” or “abnormality” are a door that doesn't close as usual, an unusual odor in an area, a discoloration that is not normal, a crew member who is acting differently, or an unusual reading on an instrument. It is important to note that these examples are not failures but simply “variations” from what is defined as normal. By acting on variation we prevent the incident or accident from occurring. It should be noted that no one can predict a truly catastrophic failure. But, looking at historical data, truly catastrophic failures are rare. Most accidents occur as a result of processes that went “out of control.”

Applying Controls and Swiss Cheese

Monitoring the variation in each process will effectively “close” the hole in that process, thus rendering the Swiss Cheese model more like provolone! There are many presentations of the controls that can be applied to processes. We will examine the main forces affecting a process. These forces were defined by Dr. Deming. Each of these forces has forces that act upon it. It is up to the analyst to decide what controls should be placed on these forces.

These interactional forces can only be controlled by the management of an organization. Often considerable monetary investments must be made to change or enhance these factors. These factors should not be changed without “Profound” knowledge of the process, and management alone does not have the profound knowledge to change them. As stated by Dr. Deming, “Only the person or persons doing the job have the knowledge needed to change it.” Management must consult the process stakeholders before making a change to a process. Doing otherwise is “Tampering” with the system. The team approach, using tools to direct the consensus, is a powerful process improvement activity.

When we say act on variation, what do we mean? Variation is any event, circumstance, action or routine that is “abnormal” as defined by the standard. The established process has already identified the inputs, e.g. methods, material, machines, environment, and people. In the day-to-day running of the process, look for evidence of abnormality or variation that is beyond the normal variation of the process. An illustrative example: You get into your car in the morning and it usually starts right up. But today it took 3 tries to get it started. Note that. Does it happen again? This may be an indication of a latent problem. A worker is always on time, but today he or she was late. Is it a one-time occurrence, or do you notice they are coming in late over several days? The oil pressure reading on an engine is lower than normal, but within tolerance. Record this and note it. Does it happen again? If so, analysis and corrective action may be needed. A door normally closes properly. Today you notice it was harder to close. Is this an isolated event? Does it happen again? Note it as variation.
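The note-and-watch-for-recurrence habit described above can be sketched as a simple variation log. The observation strings are hypothetical examples; the point is only that recurrence, not any single event, is what triggers analysis.

```python
from collections import Counter

# A minimal variation log: record each "abnormal but not a failure"
# observation, and flag any that recur, since recurrence may indicate
# a latent problem needing analysis and corrective action.
log = [
    "car needed 3 tries to start",
    "oil pressure low but within tolerance",
    "door harder to close than usual",
    "oil pressure low but within tolerance",
]

recurring = [obs for obs, n in Counter(log).items() if n > 1]
print(recurring)  # → ['oil pressure low but within tolerance']
```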
The establishment of a standard for all processes gives us limits of variation. These limits of variation enable us to define “control” within the process. Keep in mind that the establishment of limits, whether statistical or experiential, gives us the means to act on variation rather than analyze the failure(s) that have occurred to cause an accident.

Your Thoughts............

Tuesday, May 7, 2013

Safety is Common Sense and Accidents are Meaningless.

Safety is common sense and accidents are meaningless.

How often has it not been said that if common sense had been used the accident would not have happened, and that accidents are meaningless? How do common sense and meaninglessness function in an organization with a Safety Management System? They don’t.

If common sense and meaninglessness don’t function in an SMS world, how is it then possible to apply common sense to safety and meaninglessness to accidents?
If common sense is applied in SMS, the processes are not identifiable, and if accidents are meaningless, there are no operational reasons for them.

A common sense approach is to expect a person to make sound and practical judgements without specialized training or knowledge. When driving a car, it is common sense to stay on the road, not to exceed the speed limit, and to observe other drivers. These common sense processes exist to avoid hazards and are based on instincts to avoid being harmed.

The incident involved common sense, regular operating procedures and highly experienced personnel. 

Further, common sense is not always the same from one person to another. Most of us have experienced drivers who believe it is common sense to honk the horn instead of applying the brakes. Others have experienced drivers who believe they are not required to use their signal lights, because everyone else should know where they are going. Then there is the driver who believes it’s common sense to speed up when the light is about to change from green to red. Since no specialized training or knowledge is required to manage safety by common sense, all of the above should be expected as individual actions. Common sense is based on personal experience, knowledge and functional environment. It is not based on a systematic approach to managing safety. It is therefore not possible to manage safety by applying common sense.
SMS processes must ensure safety through individual training in regulatory requirements, technical skills and understanding of the desired outcome.

A meaningless event has no purpose or reason. An accident is emotionally meaningless, since there is no reason or purpose for people to be harmed or lives lost due to operational errors. However, if accidents are treated as meaningless in an SMS organization, it is not possible to establish the cause and integrated factors of an accident. When analyzing an accident, a purpose must be identified and a reason established to understand why the actions which led to the accident were taken.

An accident is an outcome which was not planned or desired. Cause and integrated factors must be identified for the organization to establish criteria of human performance as a test on which a judgment or decision can be based.

It is often said that an accident should be a wakeup call for others and a reminder to stay alert. Accidents should be investigated and analyzed, but they should not be accepted as Safety Management tools used to teach each other lessons, or as tools to remain focused on what should not have happened.

(SMS is to apply organizational accountability, training and resilience in human performance.)

In an SMS world accidents happen “because of the way things are done” and not “because of the way things are not done”. It therefore becomes essential to have organizational and operational accountability, specialized training and knowledge, and resilience in human performance. 

A business owner once said: “Don’t let anyone should on you. There are many who will tell you what you should not have done, but few with the courage to accept responsibility.”

