Monday, May 18, 2020

The Red Car
By Catalina9

Hazard identification is the foundation of a healthy Safety Management System. Events and occurrences are the consequences of hazards and are simple to identify. In the old days of aviation safety, incidents and accidents were defined as pilot error. Without any further analysis, pilot error became the standard explanation for past occurrences. After a major occurrence, new regulations were implemented, technical standards were changed, and new equipment was installed. Still, after decades of new and improved changes, accidents kept happening. In an attempt to overshadow the inherent hazards of flying, accidents were defined as meaningless and safety was defined as common sense. Hazards were trivialized and flying was promoted as the safest mode of transportation. After thousands of hours of accident investigations, hazards were brushed aside as an insignificant element of safety, since safety was common sense and accidents were meaningless.

It’s not always the change itself, but the process of change, that is opposed.
With the implementation of the Safety Management System (SMS), the aviation industry was required to actively identify hazards, implement a hazard registry and analyze hazards affecting its operations. This approach was new to the industry and was rejected with the explanation that hazard identification was already part of the pilot’s or airport personnel’s daily tasks and their duty to avoid. In addition, the lack of hazard reports was taken as a sign of complete safety in an operational environment without any hazards. In their own minds they had become as safe as possible, without the need for improvements.

One day, when you bought a new red car, you noticed how many other red cars were on the road. Not only were the other cars the same color as yours, but they were also the same make and model. It was not until you became aware of your own make and model that you noticed this. How often have you driven down the same road for several years, and then one day noticed a new house? In your own mind, the house was brand new. However, after a short review, you realized that it had always been there; you just had not noticed it before.

Identifying all hazards is as distracting as using a smartphone while taxiing
Hazard identification operates on the same principles. Unless hazards are actively identified, they will not be observed. A hazard is not only the airside vehicle that runs across the taxiway in front of you out of nowhere, but also the vehicle that waits for you to taxi or enters the taxiway behind you. Hazards are everywhere, but when the same hazard is observed regularly the tendency is to dismiss it as a hazard, since it has become part of normal operations. Some operators, whether airlines or airports, may demand that all hazards be reported. However, the answer to hazard management is not as simple as reporting everything. Reporting all hazards could in itself distract a pilot’s attention from priority tasks and become a contributing cause of an incident. For the airside vehicle operator, identifying all hazards could be a contributing factor to a runway incursion. Hazard management is hard work and extremely complex.

Hazards are an inherent risk of aviation for both airlines and airport operators. That a hazard repeats itself regularly and often does not eliminate it as a hazard; it becomes a common cause variation in hazard management. For an airline, constant airside vehicle operations are a hazard to its operations. For an airport operator, on the other hand, the constant taxiing of airplanes is a hazard to its operations. At some point in time, these two hazards are literally on a collision course. That the primary purpose of an airport is aircraft operations does not give an airline hazard priority. Airplanes and vehicles carry the same hazard priority, even though they operate under different rules. That an airplane has the right-of-way while a vehicle is required to yield does not imply that the vehicle is the sole hazard.

Both airlines and airports have access to a statistical process control tool to identify the effectiveness of their hazard reporting culture, or whether their hazard reporting system is in control.

In the control chart below, the hazard reporting culture is in control. Statistically, the results conform to the process, which sets the upper and lower control limits.



As shown in the chart below, if there were 10 times more hazards reported, the process would still show an in-control reporting culture.


There is a lot of data to be extracted from these control charts, but one fundamental observation is that the process conforms to its own environment to maintain an in-control state. That is, human behavior conforms to expectations, whether the expectation is to report everything or to report nothing. In an organization where zero hazards are reported, human behavior conforms to that expectation. With a hiring spree and several personnel beginning at the same time, the same process may show an out-of-control state, since these new eyes observe hazards without bias, or without prior exposure to the hazards.
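As an illustration of the idea (not part of the original post), here is a minimal sketch of how the centre line and three-sigma control limits of a count-based control chart (c-chart) could be computed from periodic hazard-report counts. The weekly counts below are invented for the example:

```python
import statistics

def c_chart_limits(counts):
    """Centre line and 3-sigma limits for a c-chart of hazard-report counts."""
    c_bar = statistics.mean(counts)        # centre line (average reports per period)
    sigma = c_bar ** 0.5                   # for Poisson-distributed counts, sigma = sqrt(c_bar)
    ucl = c_bar + 3 * sigma                # upper control limit
    lcl = max(0.0, c_bar - 3 * sigma)      # lower control limit, never below zero
    return c_bar, lcl, ucl

weekly_reports = [4, 6, 5, 3, 7, 5, 4, 6]            # hypothetical weekly counts
centre, lcl, ucl = c_chart_limits(weekly_reports)
special_causes = [c for c in weekly_reports if not lcl <= c <= ucl]
print(centre, lcl, ucl, special_causes)
```

Multiplying every count by ten moves the centre line and both limits up with it, which is the point made above: the process conforms to its own reporting environment and still shows as in control.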


Without comprehension of both airline and airport operations, this control chart may lead to incorrect mitigation of hazards. Since there are inherent risks involved in aviation, it is the special cause variations of hazards that must be reported. That new personnel are involved is not an indication of additional hazards, but an indication that operational management did not have a hazard reporting system in place.

A hazard reporting system exists when there are defined parameters for what is expected to be reported. For an airline this could be for the flight crew to report wildlife hazards while taxiing on Taxiway A; for the airport operator, the task for airside personnel could be to report wildlife hazards on the approach to RWY 27 during a specified time. When the parameters are established it becomes possible to capture special cause variations, populate the hazard registry, conduct a root-cause analysis and implement a corrective action plan. Operational hazard management must live by the principle of “The Red Car” and define hazard parameters.
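As a hypothetical sketch (the class, field names and time window are illustrative assumptions, not from any real procedure), such reporting parameters could be captured as simple structured records:

```python
from dataclasses import dataclass

@dataclass
class HazardReportingParameter:
    """Defines who is expected to report what, where and when."""
    reported_by: str   # e.g. "flight crew" or "airside personnel"
    hazard_type: str   # e.g. "wildlife"
    location: str      # e.g. "Taxiway A" or "approach RWY 27"
    time_window: str   # e.g. "0600-0900 local" (hypothetical)

parameters = [
    HazardReportingParameter("flight crew", "wildlife", "Taxiway A", "while taxiing"),
    HazardReportingParameter("airside personnel", "wildlife", "approach RWY 27", "0600-0900 local"),
]
```

With parameters like these on record, a report that falls outside them stands out as a candidate special cause variation for the hazard registry.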


Catalina9






Monday, May 4, 2020

CAP Complexity
By Catalina9

When a Corrective Action Plan (CAP) is too complex for the regulator to understand, they will dump it, reject it and, without any attempt to analyze it, trash it. A complex CAP is nothing more than a reflection of publicly available guidance material issued by the regulator. This guidance material comes in the form of an Advisory Circular (AC).

Guidance material is communication
The purpose of guidance material is to communicate the regulator’s expectations to the aviation industry. An AC for a root cause method is a document that explains the root cause analysis and corrective action process used to address internal audit findings or oversight inspection findings of non-compliance. A root cause AC incorporates ideas from experts in the field of causal analysis.

Each ICAO State may have different objectives, but their common goal is to ensure a level of safety in aviation that the flying public will accept. One goal a regulator publishes is to provide a safe and secure transportation system that moves people and goods across the world without loss of life, injury or damage to property. This is a goal of nice, positive and carefully selected words, but it is also an unattainable goal. In an environment of moving parts, equipment and people, damage is inevitable. A utopia of safety only exists in a regulatory and static environment. When the goal is utopia, safety is status quo, with no room for incremental safety improvements. Since no process exists that allows an operation to ensure zero damage, the regulator must exercise its opinions to enforce subjective compliance. If this subjective compliance is not adhered to, the regulator takes certificate action against an aviation document. In a world where no damage is acceptable, the regulator cannot issue a single operations certificate. In a world where no damage is acceptable, it would be foolish for an operator to implement a new process without the regulator first designing and approving the process with its corporate seal. When a corporate seal is attached, the regulator has a tool to micromanage an operator without operational responsibility. When the regulator applies an inspector’s opinions as regulatory compliance, its view is backwards looking, where new systems are incompatible with, and an obstruction to, that opinion.

Internal or external audit findings can be at a system level or at a process level. System level findings identify both the system and the specific technical regulation that failed, while process level findings identify the process that was not functioning. To develop an effective CAP, an operator, and more importantly the regulator, must understand the nature of the system or process deficiency that led to the finding. A finding must clearly identify which system or process allowed the non-compliance to occur. Without this clarification a corrective action plan cannot be developed.

To the untrained eye, a system may seem to be without aim or direction.
A system level finding is a finding of a process without oversight. System findings may relate to the safety management system, the quality assurance program, the operational control system or a training program. An operational control system is applicable to an aviation document in flight operations. An airport’s aviation document is the airport certificate, which is issued to the airport parcel itself. An operational control tool for an airport certificate is the airport zoning regulations.

A process level finding is a finding where at least one component of a system generated an undesired outcome. A process level finding applies to an operational task of any system, except for the oversight system of the affected process. Without oversight, or a Daily Rundown Quality Control, a process, or how things are done, continues to generate undesirable outcomes.

When a corrective action plan is developed, it is only as effective as the operational comprehension level of the person implementing it. An Accountable Executive may fully comprehend the CAP, while an inspector of the regulatory oversight body may not. It is normal for an inspector who is not involved in the daily operations to be at a level below full comprehension of the plan. This is the exact reason why an Advisory Circular so beautifully directed regulatory oversight inspectors to assess only the process used to develop the CAP and not the CAP itself.

There are four levels to comprehension of a system. The first level is data, the second level is information, the third level is knowledge and the fourth level is comprehension. Data is collected by several means and methods. This data is then formatted and analyzed into sounds, letters or images to provide information, which in turn is turned into knowledge for a person to absorb. The absorbed knowledge becomes comprehension of one system and of how multiple systems interact. It is unreasonable and unjust to expect a regulatory oversight inspector to comprehend the operational systems of airlines and airports.

A short-term corrective action plan is designed and implemented immediately. This immediate plan could be as simple as scheduling training to be completed within 30 days. A long-term corrective action plan is a change of policy, or a process change, designed to be implemented within a reasonable timeframe. A long-term winter operations CAP might take a year to implement, while a short-term one could be to clear the snow that day. Without defined timelines the long-term CAP does not exist, no matter how well the plan is written.
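A minimal sketch of how defined timelines might be attached to CAP items (the class, field names and dates are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    description: str
    term: str        # "short-term" or "long-term"
    due_date: date   # without a defined date, the CAP item effectively does not exist

winter_ops_cap = [
    CorrectiveAction("Clear snow from the affected taxiway", "short-term", date(2020, 5, 18)),
    CorrectiveAction("Revise winter operations plan and retrain crews", "long-term", date(2021, 5, 18)),
]
overdue = [a for a in winter_ops_cap if a.due_date < date.today()]  # simple oversight check
```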

Facts give you directions.
A root cause analysis is fundamental to the design of a corrective action plan. The questions to ask when developing a CAP are the 5-W’s and How: what, when, where, why, who and how.
The What question establishes the facts. The When question establishes a timeline. The Where question establishes a location. The Why question populates the events defined in the What question. The Who question defines a position within the organization, as defined in the Where question. The How question answers the events defined in the Why question. When asked correctly, the How question takes you backwards in the process to the Fork In The Road, where a different decision would have led down a different path. This does not ensure that an incident would not have happened if that path had been taken. All it does is take a different path than the one that led to the incident.

The 5-Why is a recognized root cause analysis method. However, if the Why question is asked incorrectly, the root cause statement becomes an incorrect answer. The Why question must be asked in terms of how it relates to the How question.

Another element to be analyzed within a root cause analysis is the four causal factors, or factors that affected the root cause statement. Depending on organizational operations and policies, these factors may be expanded to include other, operation-specific factors. The four are Human Factors, Organizational Factors, Supervision Factors and Environmental Factors. When analyzed in a root cause analysis, each factor is assigned a weight in a matrix against the 5-W’s and How. The factor with the highest weight then becomes the determining, and priority, factor in the root cause analysis.
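A sketch of that weighting matrix, with invented weights on a 0-3 scale purely for illustration: each causal factor is scored against the What/When/Where/Why/Who/How questions, and the factor with the highest total becomes the priority factor.

```python
# Hypothetical weights (0 = no contribution, 3 = strong contribution)
questions = ["What", "When", "Where", "Why", "Who", "How"]
factor_matrix = {
    "Human Factors":          [2, 1, 1, 3, 2, 3],
    "Organizational Factors": [1, 0, 1, 2, 1, 2],
    "Supervision Factors":    [1, 1, 0, 2, 2, 1],
    "Environmental Factors":  [2, 2, 3, 1, 0, 1],
}

totals = {factor: sum(weights) for factor, weights in factor_matrix.items()}
priority_factor = max(totals, key=totals.get)  # determining factor in the root cause analysis
print(totals)
print("Priority factor:", priority_factor)
```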

When applying this comprehensive approach to the CAP and root cause analysis, it should be expected that the process is too complex for someone who is not involved in daily operations. Supplementary information for the CAP could include a flowchart of how each item in the system affects other items, with an expected outcome. This design must be simple and directed to the specifics of the Fork In The Road where multiple options are available. When submitting a CAP to the regulatory oversight body, whether the regulator or the Accountable Executive, it is vital for operational success that the reasoning for the CAP is supported by data.
     


Catalina9

Friday, April 17, 2020

The Practical Compliance Gap
By Catalina9

When the Safety Management System was first implemented, one of the reasons was to keep the Regulator from interfering with the operations of both airlines and airports. It worked well in the beginning, but after a while the Regulator reverted to pre-SMS behavior, micromanaging safety compliance rather than regulatory compliance. This is best described in the Regulator’s own words: “To be direct, [the Department] has a number of concerns surrounding [Your Amazing Airport / Airline], not only from a regulatory perspective...”

You are in charge to the degree you can keep the glass ½ empty
There is a great danger to aviation safety when the Regulator believes it comprehends your operations better than you do yourself. In addition, the Regulator has, in its own words, turned away from a Safety Management System by being the rule-maker, enforcer and judge. Again, this can best be described in their own words: “Our role at [the Department] is to monitor for regulatory compliance which can take various forms, additionally we can provide regulatory interpretation.” What is crucial to safety in operations is that every operator develops its own plan. However, most operators, whether Airports or Airlines, succumb to the Regulator’s non-regulatory demands.

"...Regulators believe they comprehend your operations better than you do..."


Aviation safety blog posts write about airlines and airline safety, while airports and airport safety are often forgotten in this equation. The safety of each flight originates and terminates at an airport. It’s not just a pilot’s decision to take safety actions; it’s also the role and responsibility of an Accountable Executive at any airport. A memorable example is the Concorde accident in 2000, where the runway hazard was assessed as low and an acceptable risk. Airport compliance is based on ICAO Annex 14 Airport Standards, which contains about 200 standards. Airport standards and the Airport Certificate apply to the land the airport sits on, within surveyed limits. In addition, there is no certificate requirement for an airport manager or airport operator. Without a certificate attached to the operator, the Regulator’s option for certificate action against an airport operator does not exist.

The Regulatory Compliance Gap is to be bridged, not closed
It is impractical for an Enterprise to operate an airport or airline without a gap between operations and regulatory compliance. This is the Practical Compliance Gap. The Practical Compliance Gap exists for one reason: regulatory compliance is only available to static-state operations. Let’s take a moment and compare an airline pilot and an airport operator. When operating at night, both an airplane and an airport are required to have functioning lights. Should the lights go out in an airborne airplane, with an official inspector onboard, the flight crew is expected to return to the airport at that time. As long as the flight crew returns to the airport, a regulatory violation finding is not issued to the flight crew or airline. The inspector applied the Practical Compliance Gap, even though both flight crew and operator were in non-compliance with the lights out. Let’s then assume that this same airplane, now perfectly compliant, is returning to the airport as scheduled. Then, just a few seconds before the airplane touches down, all runway lights fail and the airfield goes dark. At this point the Practical Compliance Gap is not accepted by the inspector, who issues a finding to the airport operator for regulatory non-compliance. The difference is that an airport remains in a static state at all times, while an airplane is in motion. Compliance discriminates against static-state operations. When the Regulator doesn’t apply the Practical Compliance Gap, there is no room for continuous improvement, since any improvement could fail and status quo becomes the preferred state of operations.

Airport standards and regulations are comprehensive, with a million shades of gray. When Regulators and Airport Operators interpret the gray shades differently, there is a conflict, which the Regulator will always win, no matter how unsafe that makes airport operations.

The beauty of SMS is to accept the risk by reducing likelihood.
The beauty of the Safety Management System is that it addresses the Practical Compliance Gap. Should a Regulator observe a non-compliance, such as airplane lights or runway lights, there is no other option but to issue a finding. It’s beyond a Regulator’s mandate to overlook or ignore it. What the Safety Management System, or the SMS Voluntary Program, allows for is for the operator to develop a Corrective Action Plan to reduce the likelihood of these types of malfunctions happening again. Within an SMS, the CAP options must be made available to all Enterprises. The Regulator does not have the mandate to apply CAP options selectively and favor operators. In ICAO States without a regulatory SMS, the CAP options are not available, and malfunctions are also sometimes criminalized. The Safety Management System understands that there is no such thing as an incident that occurred simply due to a failure to comply with a process, procedure or directive. An SMS comprehends the fact that incidents happen due to the limitations within the likelihood. It’s the responsibility of an operator to take control of the likelihood. Remember: “If you don't design Your Amazing Airport / Airline plan, chances are you'll fall into someone else's plan. And guess what they have planned for you? Not much.”

Catalina9

Sunday, April 5, 2020

COVID-19
By Catalina9

Aviation came to a standstill when the COVID-19 virus spread globally. Airplanes are parked, taking up space on runways and taxiways that were previously used for landing, takeoff and taxiing. Every airport is an aviation ghost town. Every airplane is a liability to safety. A liability to safety is a hazard, where the parked airplanes become a higher risk to aviation than when they were flying. COVID-19 is a special cause variation, as opposed to a common cause variation, in an SPC control chart. A special cause variation requires a root-cause analysis with a Corrective Action Plan. A CAP requires data to establish factual causes, or the merit of the case itself. There is no such data available, only computer models, which are based on opinions and not facts about what the future holds.

COVID-19 created aeronautical obstructions globally.
The last time the world stopped flying was in 2001. Since then, aviation globally has operated normally. Applying data from normal operations when conducting a root-cause analysis of the COVID-19 hazard skews the picture. It’s not COVID-19 itself that is the hazard, but the effect of aviation safety surrendering to operational demands. Some time ago I wrote the following in a blog: “When there is a conflict between safety and on-time performance, on-time performance will always win.” This is what we are experiencing now, when non-professionals or non-aviation experts are making aviation decisions that adversely affect safety, unaware of how their decisions are hazardous to aviation safety. It’s as simple as this: ask any pilot who has flown an old airplane what happens to an airplane when it doesn’t fly. It breaks down, and airplanes have not changed since Orville’s first flight.

Now is the time for the global aviation industry to develop Safety Cases or Change Management Cases. This also applies to Regulators globally, who must assess the impact on aviation safety in their own countries. During this COVID-19 pandemic, some inspectors are conducting investigations of small airports with no traffic, since local flying schools or clubs are closed due to COVID-19. The Merriam-Webster definition of investigate is “to observe or study by close examination and systematic inquiry.” Aviation safety inspectors should focus on the future, on how to return safely to normal operations, rather than investigate lost manuals.

An airplane wants to be flown and not parked
Back in 2001 there was no Safety Management System (SMS) providing guidance on how to return to normal operations. Today the global aviation industry has the advantage of the SMS as a tool for proactive steps. The beauty of the SMS is that it’s an exceptional tool when accepted as a proactive tool rather than a regulatory burden. With 30-50% of the airline industry’s fleet parked, priorities and responsibilities may have changed. While airlines may have reduced operational demand, airports may have been assigned additional responsibilities and increased operational demand by the parking of thousands of airplanes around the world. An airport is not a long-term parking garage, but an airfield with short-term parking. An additional hazard is the conflict between airlines’ demand for parking space and airport design for short-term aircraft operations.

Airplanes are not designed to be parked, and airports are not designed to be parking areas. Airplanes that don’t fly break down, and airports with long-term parking deteriorate. There is a heavy load on runways and taxiways with airplanes parked for an unlimited or unspecified period of time. Depending on latitude, some locations may be extremely hot, softening the surfaces. At other locations the parked load may disrupt frost heaves, causing additional stress on runway or taxiway surfaces.

Airports are there to help your dream come true.
The aviation industry now has an opportunity to accept safety as paramount and not rely on the Regulator to tell it what to do next. This is not the time to request exemptions from regulatory requirements; it’s the time to prepare for the return to normal operations with Safety Cases and Change Management. It’s time to be Accountable. In countries where SMS is implemented by regulation, the aviation industry has a tool for Safety Cases to be made and, if requested, presented to the Regulator.

A Safety Case is somewhat like a NOTAM, where airports publish issues or facility failures of a short-term nature. A Safety Case is not for normal operations, but for how to initiate the return to them, and the first few steps of entering back into operations. A Safety Case is not about operating in non-compliance with regulatory requirements, but about having a plan for compliance where there are underlying special cause variations that could affect operations.

COVID-19 is in itself a hazard to the flying public. Safety in operations must always be viewed from the travelling passenger’s point of view. The future might require normal operating procedures to sterilize airplanes between flights and allow for social distancing. Seats may be required to be spaced farther apart, not by regulatory requirements, but by customer demands. COVID-19 could bring fundamental change to the aviation industry, including airports. Airports may be required to install sterilization equipment for passengers prior to boarding.

In some of my previous blogs I have posed one simple question to aviation professionals: “Why does the Global Aviation Industry, being Airlines or Airports, need a Safety Management System (SMS) today, when they were safe yesterday without an SMS?” COVID-19 is one of the answers.




Catalina9

Monday, March 23, 2020

Selecting SMS Software Programs
By Catalina9

When selecting an SMS software program, the task becomes to select a program that gives you an SMS workout. The program that lets you sit back and relax is not the one that will give you the most benefits. A supporting SMS software is there to tell you the story of how your operation is safe. It’s there so you don’t need to second-guess your decisions anymore. An effective SMS program is there to take the pressure off the Accountable Executive. A trap that’s easy to fall into is for an enterprise to rank instant gratification as a high priority when selecting the SMS software. Instant gratification is a hazard to aviation safety. Human nature follows the path of least resistance, and these facts are pushed in sales and marketing. Everyone wants something better, cheaper, faster, and we want it now.

A cloud-based SMS program is a force-multiplier
Understanding sales techniques and behaviors is a critical skill and tool for selecting a program. Sales are processes and process control. An excellent salesperson applies in-control processes in their presentations. This is a positive behavior, since that person is then filling the needs the enterprise has defined, rather than imposing their own opinions of what the enterprise’s needs are. The first step in selecting an SMS software, or cloud-based program, is to ask: What’s in it for me? The second question is: How does your SMS program improve on our existing SMS program? The third question is: “Why does the Global Aviation Industry, being Airlines or Airports, need a Safety Management System (SMS) today, when they were safe yesterday without an SMS?”

An Accountable Executive and Senior Management must be able to answer the question of why they need a Safety Management System. If the answer is not known, there is no need to investigate and research an SMS software program. Without knowing the answer, their decision about which program to purchase, or whether they should purchase at all, becomes a biased decision and not a decision based on facts and data. The salesperson should also be able to give a general answer to this question, but since they are not intimately familiar with your operations it is not reasonable to expect a direct answer applicable to your enterprise.

Making a choice is to pick one element and run with it
When selecting an SMS program service provider, it is vital to know that selling is not telling. The more a salesperson talks about themselves and their program, the less faith they have in the product or service they are selling. The more questions they ask about your enterprise, the better they know how their program functions. Sales is what makes businesses profitable, and when researching SMS programs the task also becomes to determine the level of a salesperson’s comprehension of their own program.

It is extremely labor-intensive to operate a Safety Management System manually, using spreadsheets and relying on memory. When open reports exceed three, oversight and monitoring are lost in the masses. Everything becomes a blur, it becomes unclear how the current safety level was reached, and operating with an SMS no longer makes sense. When the SMS becomes irrelevant, the path of least resistance is to revert to pre-SMS processes. This drift or change is not noticeable to the enterprise, and with the next incident or accident they are taken by surprise that the SMS didn’t fix the problem and prevent the occurrence. The reason for the surprise is not that they did not catch errors, but that they did not analyze how their processes performed while operations were in compliance. The path of least resistance is to assume that operations, or process inputs, are acceptable when the output is acceptable. It’s crucial to the integrity of the SMS to comprehend that input processes could be complete failures while the outcome is still acceptable.

The two options when operating an SMS are to increase the labor force so that each person is responsible for a maximum of three open reports at a time, or to operate with an online SMS program as a force-multiplier of operational effectiveness. When operating with an online system, it is crucial that Senior Management comprehend their own SMS. Comprehension comes from data, which is turned into information. Information is then transformed into knowledge, and knowledge is required for comprehension of one system or of multiple interacting systems.

When selecting an SMS program, the most important factor is the simplicity of submitting a report. An initial report should require a maximum of five fields and take less than a minute to complete. These fields should include the name of the contributor (or anonymous reporting), a date, a location, an aircraft or vehicle identification and a narrative. SMS is hard work, and the hard work begins after a report is received; it is not integrated in the reporting itself.
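A minimal sketch of such an initial report, assuming the five fields named above (the class name and example values are hypothetical):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class InitialHazardReport:
    """The five fields an initial report should be limited to."""
    contributor: Optional[str]   # None allows anonymous reporting
    report_date: date
    location: str
    aircraft_or_vehicle: str
    narrative: str

# Hypothetical example values
report = InitialHazardReport(None, date(2020, 3, 23), "Apron II", "C-ABCD", "Fuel spill near stand 4")
```

Everything after this record is received, such as analysis, root cause and corrective action, is where the hard work lives.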

When selecting an SMS program service provider, consider these three elements:
1. What is the impact, hazard or risk level if we don’t solve the challenges defined in occurrence or hazard reports;
2. How is the preliminarily selected supplier superior to other suppliers; and
3. Establish a timeline of three months to activate your SMS online program.
   a. Three months are required as follows:
      i. One month to analyze and research #1;
      ii. One month to analyze and research #2; and
      iii. One month to design, implement and train personnel.

An isotope may have a half-life of billions of years, while the half-life of a process is a split second.


Catalina9

Monday, March 9, 2020

Avoid The SMS Traps

By Catalina9


There are several traps, or hazards, associated with a Safety Management System (SMS) program. The three major traps both the regulator and operators fall into are expectations, non-punitive reporting and human factors. The first trap is the expectation trap: that applying opinion-based activities ensures regulatory compliance. The second trap is the non-punitive reporting policy trap: applying an expectation of punitive actions, or an expectation that punishment is acceptable for illegal activity, negligence or wilful misconduct. The third SMS trap is the human factors trap: that a root cause is the absence of a behavior, within a defined timeline, qualifying as either illegal activity, negligence or wilful misconduct.

Expectations applied correctly are for a desirable outcome.
Expectations:
Job performance compliance with expectations is necessary in our daily interactions, in any profession and in any area of airline or airport operations. Driving down the highway, there is an expectation that the operator of a private vehicle has a system in place to travel at or below the speed limit, stay on the correct side of the road and comply with the standards of highway travel. In the trucking industry there is an expectation that operators have systems in place to calculate the weight loaded on the truck or the height of the vehicle. There are expectations that commercial long-haul trucks, at any time during their travel on the highways, have operational systems such as brakes, steering, lights or tires that conform to the standards. Just as in any other industry, there are expectations established for compliance with the Safety Management System regulations. Anyone could fall into the expectation traps, or hazards. Organizational titles with roles and responsibilities within SMS, or within regulatory oversight, do not ensure that a person applies expectations equally, without bias, or suitably for the size and complexity of the operations. SMS expectations may be applied differently regionally, or differently based on a person’s background and experience. Expectations may be applied differently based on what is expected politically, what is expected by supervisors, management or the Accountable Executive, or based on comprehension of systems. The expectation trap becomes an extreme hazard to aviation when a finding, by an internal or external audit, is applied to the expectation itself, assuming that expectation to be the only acceptable behavior for regulatory compliance.

A job performance review is not a review of legal or illegal activities.
Non-Punitive Reporting Policy:
The second SMS trap is the application of a non-punitive reporting policy. A non-punitive policy provides immunity from disciplinary action for employees who report hazards, incidents or accidents, and ensures that the policy is widely understood within the organization. A non-punitive reporting policy is built on a just culture, where there is trust, learning, accountability and information sharing. Within a just culture, an enterprise operates with training systems that include learning areas about trust, learning processes, accountability and information sharing. Just-culture learning is about having documented processes in place to identify training requirements so that personnel are competent to perform their duties. With these simple steps any person may report hazards, incidents or accidents without fear of failure or punishment. The report is entered into the SMS system as data. Data is fair and unbiased. Data is processed into information to be absorbed by one of the five senses. As information is absorbed by a person, that person gains knowledge. With knowledge a person comprehends one system, and multiple interacting systems. Another expectation of a non-punitive policy is to establish conditions under which punitive disciplinary action would be considered. Some of the most common conditions are illegal activity, negligence or wilful misconduct. There is an expectation that the Accountable Executive of an organization clearly defines when punitive disciplinary actions would be considered. Including the words “illegal activity, negligence or wilful misconduct”, or making these words policies, does not clearly define reasons for punitive actions. When Accountable Executives behave in a manner that implements these options, they are falling into an expectation trap of unsafe behavior, since illegal activity, negligence or wilful misconduct are not within a job description or job-performance criteria.

An Accountable Executive implementing illegal activity, negligence or wilful misconduct as reasons for punishment contradicts a just culture, where there is trust, learning, accountability and information sharing. Illegal activity is behavior violating a regulation, and it is the courts’ role to establish guilt of illegal activity, not an Accountable Executive’s. For negligence to be applicable as a reason for punishment, what constitutes negligence must be established and defined before the behavior takes place. Negligence after the fact is a biased interpretation of the action. An Accountable Executive who applies a negligence policy as a reason for punishment has established a policy accepting that their own enterprise only accepts mediocrity. When wilful misconduct is implemented as a policy, it must be established prior to the event and clearly defined. An Accountable Executive who implements this policy contradicts the non-punitive reporting policy expectation itself.

Including illegal activity, negligence or wilful misconduct as reasons for punishment comes with good intentions. However, intentions are irrelevant in aviation safety; it’s actions that are relevant. With these policies implemented there is no reason to report, and only those who do not understand the policies will make reports. The expectation trap becomes an extreme hazard to aviation when illegal activity, negligence or wilful misconduct are applied as reasoning and leave no room for trust, learning, accountability and information sharing.

Human Factors is the domino effect of operations.
Human Factors:
The third SMS trap that is easy to fall into is the human factors trap. Aviation before SMS was simple. After an accident the pilot was blamed, and the problem was solved... at least in the eyes of regulatory oversight and public opinion. SMS was first implemented in Canada in 2006. Between 1903 and 2006, aviation safety was, generally speaking, a reactive process, with the flying public unaware of the hazards.

After decades of improving airplane design and technology, pilot training and expectations, flying was sold as the safest mode of transportation. In the 1970s human factors research gained interest and the SHELL model was developed. This model describes the interactions between Liveware (the human) and Liveware, Liveware and Hardware, Liveware and Software, and Liveware and Environment. The environment is the work environment and how user-friendly the workstation is. In addition, the environment is about climate, topography and weather. How a workstation is designed is a positive addition toward satisfying an expectation of the job description. However, the climate, topography and weather all impact the processes used to satisfy a job performance expectation. Human factors analysis is about how success is reached, or how flight crew or airport personnel behave to ensure that human factors expectations are met. That several performance parameters are met is not an indication that the crew followed processes or procedures. When a parameter is satisfactory to management, e.g. on-time departures, runway surface condition reporting or maintenance performed, it does not show how the task was met. It only shows that it was.

As an Accountable Executive, take a minute, if that is enough time, and write down all the rules a pilot or airport manager needs to be able to recall in their daily job. If an AE is not able to recall all the rules, the enterprise has just fallen into the third SMS trap, the Human Factors trap. It is important to know what an undesired event in operations is. However, it is not practical within an SMS world to rely only on what went wrong to improve safety. Operations go right most of the time. Pilots and airport personnel live by the GSD rule, or Get Stuff Done. Within any job, sacrifices must be made, and if rules and expectations are prohibiting job performance, the job falls into the GSD category. For an SMS to function, the AE must comprehend all systems and become an interactive part of operations. An AE is not there as a rule maker for downstream enforcement, but as an oversight body comprehending why things go right most of the time. When an enterprise finds out why things go right, that is when it has reached a level where continuous safety improvements are within reach. The expectation trap becomes an extreme hazard to aviation when Human Factors are analyzed for errors only and not for success. Within any success lies at least one failure.



Catalina9
