Monday, June 14, 2021

How To Do A Risk Analysis


By Catalina9

There is a difference between a risk analysis and a risk assessment. A risk assessment involves several steps and forms the platform of an overall risk management plan. A risk analysis is one of those steps: it defines a characteristic of each risk level and assigns a weight score as part of the risk assessment. Generally speaking, a risk assessment includes identifying the issues that contribute to risk, analyzing their significance, identifying options to manage or maintain oversight of the risk, determining which option is likely to be the best fit for the size and complexity of an organization, and delivering recommendations to decision-makers. A risk assessment also includes one or multiple risk analyses for both pre-mitigation and post-mitigation risk. A risk analysis is a single justification task that weighs the likelihood and severity of a hazard and communicates the result as a risk level, and it may be a standalone document or a supporting document in a risk assessment.

There are several guidance materials available on how to do a risk analysis, and they come with different designs. A risk analysis may focus on the likelihood of a hazard, or it may focus on the severity of a hazard as a determining factor. The combination of likelihood and severity is communicated as a risk level. One risk analysis tool is the risk matrix, which assigns a reaction to the risk by color. Red is normally an unacceptable risk level, yellow may be acceptable with mitigation, and green is acceptable without a reaction to the hazard. In a risk matrix, likelihood and severity are assigned classification letters and numbers. A low number could be assigned a high severity or a low severity depending on how the risk matrix is designed. The same is true for the likelihood level, where the letter “A” could represent a high likelihood or a low likelihood depending on the risk matrix design.
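As a minimal sketch of how such a matrix works, the lookup below assumes one particular design: letters A to E with “A” as most likely, severity numbers 1 to 5 with 5 as most severe, and illustrative color thresholds. As noted above, other designs reverse these scales.

```python
# Illustrative risk matrix lookup. The letter/number scheme and the color
# thresholds are assumptions for this sketch; matrix designs vary.
LIKELIHOOD = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # "A" = most likely here

def risk_color(likelihood: str, severity: int) -> str:
    """Combine likelihood and severity into a color-coded risk level."""
    score = LIKELIHOOD[likelihood] * severity  # 1 (lowest) .. 25 (highest)
    if score >= 15:
        return "red"     # unacceptable
    if score >= 8:
        return "yellow"  # acceptable with mitigation
    return "green"       # acceptable without a reaction to the hazard

print(risk_color("A", 5))  # red
print(risk_color("C", 3))  # yellow
print(risk_color("E", 1))  # green
```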

Level of exposure is a third component of a risk analysis, and to simplify the risk analysis it is normally assigned an exposure level of 1. An assigned exposure level falls between 0 and 1, or 0% to 100% certainty. With the exposure level assigned as 1, the certainty is definite and the hazard has appeared.

A current risk analysis level.

As an example, birds are a hazard to aviation. The exposure level to birds for an aircraft on approach to the same runway is quite different in January than in May. Due to common cause variation, the migratory bird seasons increase bird activity at airports during the spring and fall months. During the migratory season an airport may apply multiple mitigation processes by ATIS notification, direct ATC notification, or means to scare the birds away from the airport. Birds are attracted to food sources, and the black runway surface is an attraction for insects, which in turn attract birds. An exposure level for bird activity, without affecting flight operations, may be between 0.1 and 0.9, or up to 90%. An operator may decide to cancel a flight with an exposure level at 90%. However, this is an extreme operational decision, since passengers and freight depend on the airline in support of their own integrity and on-time commitments. Most often a scheduled or on-demand flight would continue as planned and rely on other aircraft or the airport to scare birds away from the approach or departure ends. By eliminating the exposure criterion and applying an exposure level of 1, a risk level may be assigned to each segment of the flight. Another common cause variation is thunderstorms, with an expectation that they are mitigated at the time of exposure.
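A worked sketch of the arithmetic, assuming the common formulation where exposure scales the product of likelihood and severity (the specific scores below are illustrative):

```python
def exposure_weighted_risk(likelihood: float, severity: float,
                           exposure: float = 1.0) -> float:
    """Scale a likelihood-times-severity score by exposure, where
    exposure ranges from 0.0 (never exposed) to 1.0 (hazard present)."""
    if not 0.0 <= exposure <= 1.0:
        raise ValueError("exposure must be between 0 and 1")
    return likelihood * severity * exposure

# Bird hazard on approach: same likelihood and severity scores,
# different seasonal exposure.
print(exposure_weighted_risk(4, 3, exposure=0.1))  # 1.2  (outside migratory season)
print(exposure_weighted_risk(4, 3, exposure=0.9))  # 10.8 (migratory season)
print(exposure_weighted_risk(4, 3))                # 12.0 (exposure simplified to 1)
```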

When conducting a risk analysis, one of the most important factors is to reduce subjective, wishful, biased, or opinion-based inputs. A common cause variation of a risk analysis is the individual assumption, which does not make it a faulty risk analysis, but an analysis whose assumptions or individual variations must be justified. One descriptor of a risk assessment is “Possible”, with a definition that “It is Possible to occur at least once a year”. When a risk analysis is conducted of the hazard of bush flying, or of flying into one of the most congested airports in the world, the Likelihood of Occurrence would not reach the level of “Possible”, since human behavior is to take the path of least resistance. A likelihood of “Possible” would increase the workload dramatically, and it could also restrict business, or flights, into areas of a high profit margin. If this is a new route or area of operations, the justification is based on a wish or opinion and might not be a true picture of the hazard. However, if the risk analysis justification is based on documented records from prior years, the risk analysis is based on data and paints a true picture. There is no one-size-fits-all answer in a risk assessment, and there are no correct or incorrect answers to a risk assessment, since it is the operator who accepts or rejects the risk. While this is true, it is also the customer who accepts or rejects the risk by choosing the services of one air carrier over another.

One of the principles of a Safety Management System is to operate within a Just Culture. A Just Culture is a culture where there are justifications for actions, both proactive and reactive. A risk analysis is just as much a part of justification as any other area of Just Culture operations. After a risk analysis, both likelihood and severity are processed through a justification process. The platform to build on for likelihood justification is times between intervals. The first level is when times between intervals are imaginary, theoretical, virtual, or fictional. This is a level with no data available, and it is unreasonable to expect the event to occur. An example would be the likelihood of a meteor landing in your back yard. The second level is when times between intervals are beyond the factors applied for calculation of problem-solving in operations. At this level, the likelihood cannot be reasonably calculated; it is just as impossible as reaching the last digit of pi. The third level is when times between intervals are separated by breaks, or spaced further apart than normal operations could foresee.

In a justification culture there is a justification why the scale is not balanced.
Level four is when times between intervals are without definite aim, direction, rule, or method. Incidents happen, but they are random and unpredictable. Level five is when times between intervals are indefinable. This is when it is impossible to predict an incident, but most likely one or more will occur during an established timeframe. Level six is when times between intervals are inconsistent. This is when incidents occur regularly, but they are not consistent with expectations. Level seven is when times between intervals are protracted and infrequent; events may last longer than expected, but the frequency is relatively low. Level eight is the foothills of a systemic likelihood, when times between intervals are reliable and dependable. Levels nine and ten are the systemic levels, when times between intervals are short, constant, and dependable, or methodical, planned, and dependable, without defining the operational system or processes involved.
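For illustration only, the ten likelihood levels can be encoded as a simple lookup so a justification can be attached to each analysis record (the short descriptors are paraphrased from the scale above):

```python
# Likelihood descriptors paraphrased from the ten-level scale above
# (1 = least likely, 10 = most likely).
LIKELIHOOD_LEVELS = {
    1: "imaginary, theoretical, virtual, or fictional (no data available)",
    2: "beyond the factors applied for operational problem-solving",
    3: "separated by breaks, spaced further apart than operations foresee",
    4: "without definite aim, direction, rule, or method (random)",
    5: "indefinable, yet one or more expected within an established timeframe",
    6: "inconsistent; incidents occur regularly but defy expectations",
    7: "protracted and infrequent",
    8: "reliable and dependable (foothills of systemic)",
    9: "short, constant, and dependable (systemic)",
    10: "methodical, planned, and dependable (systemic)",
}

def justify_likelihood(level: int) -> str:
    """Return the times-between-intervals justification for a likelihood level."""
    return f"Level {level}: times between intervals are {LIKELIHOOD_LEVELS[level]}"

print(justify_likelihood(4))
```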





After the likelihood level is justified, the justification process continues to the severity level. There are also ten severity levels, which are independent of the likelihood levels. The platform for classification of the severity levels is a platform of expectations. Building on the platform, level one is a severity that is not compatible with another fact or claim of the hazard. Level two is a severity with insignificant consequences. Level three is a severity inferior in importance, size, or degree. Level four is a severity that would attract attention to an operational process, cause operational inconvenience, or cause unscheduled events. Level five is a severity large in extent or degree. Level six is a severity involving an industry-standard defined risk, or a risk significant in size, amount, or degree. Level seven is a severity having the influence or effect of a noticeably or measurably large amount, caused by something other than mere chance or ignorance. Severity level eight is the foothills of the catastrophic levels, a severity having the influence or effect of irrevocable harm, damage, or loss. Severity levels nine and ten are the catastrophic levels: a severity at a turning point, with an abrupt change approaching a state of crisis and sufficient in size to sustain a chain reaction of undesirable events, occurrences, incidents, accidents, or disasters; or a severity where functions, movements, or operations cease to exist.
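Continuing the sketch above, the severity scale can be encoded the same way and paired with the likelihood justification into one analysis record:

```python
# Severity descriptors paraphrased from the ten-level scale above
# (1 = lowest severity, 10 = highest).
SEVERITY_LEVELS = {
    1: "not compatible with another fact or claim of the hazard",
    2: "insignificant consequences",
    3: "inferior in importance, size, or degree",
    4: "attracts attention, causes inconvenience or unscheduled events",
    5: "large in extent or degree",
    6: "industry-standard defined risk; significant in size, amount, or degree",
    7: "noticeably or measurably large effect, not mere chance or ignorance",
    8: "irrevocable harm, damage, or loss (foothills of catastrophic)",
    9: "turning point approaching crisis; sustains a chain reaction (catastrophic)",
    10: "functions, movements, or operations cease to exist (catastrophic)",
}

def justified_analysis(likelihood: int, severity: int) -> dict:
    """Pair both justifications into a single risk analysis record."""
    return {
        "likelihood": justify_likelihood(likelihood),  # from the sketch above
        "severity": f"Level {severity}: {SEVERITY_LEVELS[severity]}",
    }

print(justified_analysis(5, 6))
```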

With a justification risk analysis, the first action is defined based on the risk level. At level one, the initial task is to communicate. Level two is to communicate – monitor. Level three is to communicate – monitor – pause. Level four is to communicate – monitor – pause – suspend. And level five is to communicate – monitor – pause – suspend – cease. The beauty of a justification-based risk analysis is that after corrective action is implemented, and during the follow-up process, the tasks are completed in reverse order until the risk returns to the communicate task level.
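This cumulative ladder, and its reverse unwinding during follow-up, maps naturally to a short sketch:

```python
# The five-level action ladder described above: each risk level adds one task,
# and follow-up unwinds the tasks in reverse order back to "communicate".
ACTIONS = ["communicate", "monitor", "pause", "suspend", "cease"]

def tasks_for_level(risk_level: int) -> list:
    """Cumulative tasks for risk levels 1 through 5."""
    return ACTIONS[:risk_level]

def follow_up(risk_level: int) -> list:
    """Complete tasks in reverse order until the communicate level remains."""
    return tasks_for_level(risk_level)[:0:-1]  # reversed, stopping before index 0

print(tasks_for_level(4))  # ['communicate', 'monitor', 'pause', 'suspend']
print(follow_up(4))        # ['suspend', 'pause', 'monitor']
```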

Catalina9





Tuesday, June 1, 2021

What To Expect From An Audit

By Catalina9 

What we expect of an audit is that it is an unbiased and neutral report of the facts.

Everyone in the aviation industry needs to do audits for one reason or another. Audits might be done for regulatory compliance, for compliance with the enterprise’s safety policy, as a contract compliance agreement, at a customer’s request as a satisfaction evaluation, or after a major occurrence. An airport client must feel assured that operating out of one specific airport does not cause interruptions to passengers due to inadequate maintenance of nav-aids, visual aids, markings, runways, taxiways, or aprons, and that there are no surprises for aircraft, crew, or passengers.

Include in your SMS manual that audit results are not automatically implemented.
An airline or charter operator most often carefully researches new airports they are planning to operate out of, and when there is a tie between two or more airports, the one with the best customer service wins the draw. A passenger on an airliner must feel assured that the flight will be occurrence-free, and a shipper of goods must trust the carrier to ensure that their goods arrive at the destination airport in the same condition as when they were first shipped. There are a million considerations and reasons why audits are needed. Since there are several reasons for audits, there are also several expectations of the outcome of an audit. What these expectations are depends on which side of the audit you are on and on the scope of the audit.

Let’s take a few minutes and reflect on three different types of audits: the Regulatory compliance audit, the Safety Policy compliance audit, and the Customer Satisfaction compliance audit.

The Regulatory compliance audit is a static audit, where no movements or processes are required for the audit. When an operator’s certificate is issued to an airline, there are zero movements required for that certificate to be issued. However, there are conditions for operations attached to the certificate, which become the scope of regulatory audits. These conditions are management personnel, maintenance personnel, and flight crew. All these positions for an air carrier are certificated positions, and each person must comply with the roles, responsibilities, and privileges of their licenses for the operating certificate to remain valid. For a new certificate holder, at the time the first aircraft leaves the gate for a flight, the expectation of an audit is that pre-departure regulatory requirements are met and that all regulatory requirements are met at the closing of the flight upon arrival at the destination. When an audit of an airline is carried out, the first step is to review their operations manuals for regulatory compliance. At the time of issuance of the certificate they were compliant, but over time amendments are added and new regulatory requirements are implemented. One major implementation example is the Safety Management System (SMS), which had an enormous impact on airlines. Their compliance requirements went from a “job well done” to who did the job and how they did it. After manuals are reviewed, their operational records are reviewed for compliance. Records from their very first flight, or the first flight since the last audit, to the most current records are reviewed. Regulatory compliance audits are audits of pre-flight compliance, in-flight compliance, and post-flight compliance. Training records, operations records, maintenance records, and crew license records are all audited and assigned a compliance or non-compliance grade. The expectation of a regulatory audit is that any item audited is linked to a regulatory requirement.

A Safety Policy compliance audit is an audit of an enterprise’s Safety Management System. The audit process is the same as for a regulatory compliance audit, with the difference that the audit becomes a job-performance audit. A job-performance audit is about what the task was, when the task was performed, where in the operations the task was assigned, who did the task, why the task was necessary, and how the task was performed. The “how” audit is an overarching audit for the other five questions: what, when, where, who, and why. A safety policy audit must answer how a decision was reached for each of the five questions. E.g., how a decision was reached to select an airplane and crew, how the timeline for crew-pairing was selected, what the criteria for destinations were, how it was decided who makes the final decision, and why that person was selected.

A safety policy to be “safe” is a policy with undefined destinations.
A safety policy audit is the most comprehensive audit, since it involves all aspects of operations, each person in those operations, and a complete timeline of those operations. An in-flight example of a safety policy audit is the process for preparing for an emergency upon arrival. A person seated in the emergency exit row is asked prior to takeoff whether they are willing and able to assist the flight crew with opening the emergency exit. During the flight, alcohol may be served to that same person, who could be intoxicated upon arrival despite being, in effect, a temporary crew member with limited duties. A safety policy audit conducts interviews of operational personnel, crew members, and maintenance personnel. During these interviews, an auditor may discover that intoxicated personnel are expected to be frontline crew members during an emergency. For each task required by regulatory requirements, the same audit process is applied. Just one simple task may take hours to complete, and it becomes a resource impossibility and impracticability to conduct SMS audits of 100% of the requirements, 100% of the personnel, 100% of the time. An SMS audit must therefore apply random sampling and statistical process control (SPC) for a confidence level analysis. The industry standard is that there is a 95% confidence level for each element of an SMS to be present for an acceptable audit result.
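As a minimal sketch of how a 95% confidence figure can be attached to a sampled audit result, assuming simple random sampling and the normal approximation to a proportion (the sample numbers are illustrative):

```python
import math

def compliance_interval(passed: int, sampled: int, z: float = 1.96):
    """Normal-approximation confidence interval for a compliance proportion
    observed in a random sample of audit items (z = 1.96 gives ~95%)."""
    p = passed / sampled
    half = z * math.sqrt(p * (1 - p) / sampled)
    return max(0.0, p - half), min(1.0, p + half)

# E.g., 92 of 100 randomly sampled training records found compliant:
low, high = compliance_interval(92, 100)
print(f"observed 92% compliance, 95% CI: {low:.1%} to {high:.1%}")
```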

A customer satisfaction compliance audit is the simplest audit of all. A customer satisfaction audit is conducted against opinions, or against industry-standard expectations. A customer may conduct an audit as an opinion of regulatory compliance, as an opinion of safety policy compliance, or as an opinion of conformance to industry expectations. Customer satisfaction auditors are not required to be technical experts in regulatory interpretation, operational experts, or experts in airport operations, but they are experts in providing opinions of their observations based on their operational experience in aviation. A customer satisfaction audit does not issue findings, since the auditor is unqualified to issue findings against regulatory requirements or operational recommendations. They issue opinions and suggestions for operational changes or implementations as viewed from a customer’s point of view and on behalf of a customer. An operator, whether airline or airport, decides whether to implement these changes and how these changes could affect their operations. The criterion for change may be based solely on a customer’s wish, public opinion, or social media trends. An enterprise without a clause in their SMS manual stating that findings from any type of audit must first be assessed before being accepted or rejected for implementation may be compelled to make changes without knowing the effect.

An auditor has no responsibility for any occurrences an operator may experience in their operations after implementing audit recommendations. A newly implemented regulatory requirement may affect operational safety. A safety policy recommendation may affect safety, and the implementation of a customer suggestion may affect safety in operations. In any case, after an audit an operator, whether an airline or airport, must conduct a safety case, or change management assessment, prior to implementing changes, to evaluate the risk impact on their operations. Since there is an inherent hazard in aviation from the time an aircraft is moving under its own power, an operator must monitor what direction the implementation of audit suggestions or requirements is taking and, based on their assessment, continue the course or make operational changes to avoid or eliminate hazards on the horizon.


Catalina9





 

Monday, May 17, 2021

Training Works


By Catalina9

When applying the fact that training is associated with Human Performance, ongoing training becomes a tool to capture process deviations from performance parameters. Deviations from performance parameters are not a lack of knowledge; it is a human factor to take the path of least resistance and to deviate for effectiveness to reach a common goal. Most standardized processes are arbitrarily chosen based on opinions. This does not make the process wrong, bad, incorrect, or dangerous; it is just the fact that someone established the process based on their experience and personal view of what made sense to them. From these processes, rules and job performance expectations are derived to establish the lowest bar acceptable in aviation safety. One example of a new rule implemented after an accident is the sterile cockpit rule. This rule was implemented after one notable accident in which an aircraft conducting an instrument approach in dense fog crashed just short of the runway. Training is a tool to assess the effectiveness of standardized procedures, capture deviations, and excel in performance above the lowest acceptable safety bar.

Training is time sensitive
Training is to prepare the Safety Management System (SMS) for tomorrow, which will be different from what it is today. A future SMS enters into a commitment agreement with the flying public, the regulator, airports, and airlines to accept nothing less than excellence in operational processes. Excellence is not being perfect and operating in a virtual, or fantasy, world. Excellence is incremental improvement of safety processes. SMS is not to show that we always get everything right, but to show that we can build a portfolio of safety even when we get it wrong. An SMS operating at its full potential takes a businesslike approach to safety, where losses are accounted for and profits are rewarded.

SMS as we know it is taking a new course in the direction of professional SMS management. Just as an organization relies on lawyers and accountants, organizations have come to a point where they need to rely on SMS experts to oversee, administer, and manage their SMS. It is no longer enough to run an SMS organization simply because the SMS Manager wants to be safe. SMS has become more complex and needs to be managed by professional SMS experts. COVID-19 was the catalyst that moved SMS at rocket speed in a new direction. In the blink of an eye, the world changed from virtual fun and games to a virtual corporate culture as its businesslike approach platform.

One component that is critical for a successful SMS is training the Accountable Executive (AE) to comprehend SMS as a businesslike approach and, most important, to understand that the AE title does not qualify a person as a professional SMS expert. The regulations state that no person is to be appointed as the AE unless they have control of the financial and human resources that are necessary for the activities and operations authorized under the certificate. The brilliance of this regulation is that an AE is an SMS team member within the organization, responsible for operations and accountable on behalf of the enterprise for meeting the requirements. This requirement does not make the AE the sole expert, but leaves the door wide open for an AE to be surrounded by experts at any level in the organization. As the final authority, as opposed to the final decision-maker, the AE can with confidence sign off on the SMS for regulatory compliance and safety in operations. Just as the CEO of an enterprise signs off on legal and tax documents, the AE’s role is to sign off on SMS documents. SMS is the overarching umbrella, or the hub of a wheel where the processes lead a path of safety improvements.

Conventional wisdom is that training is required because of regulatory requirements, and that someone who does the same tasks daily should know how to do them without requiring training. Nothing could be farther from the facts than that this is the only reason for training. There are several elements to the training process, one of which is refresher training of personnel who do the same task daily. Refresher training has two main goals: 1) evaluate a skill; and 2) evaluate drift within an organization. Short-term corrective actions are applied to skill-test training findings, i.e., additional training of personnel who failed, while policy changes are applied to organizational drift findings. Under the old way of SMS training, if several of the pilots failed their missed approach task during training, each person who failed would require additional training until they successfully passed the missed approach task once. In the new SMS era, training of each pilot continues until their success becomes persistent. In addition, the missed approach training program would be reviewed for drift, or compared against the expected outcome of the missed approach training process. If the actual outcome of a process deviates from the expected outcome, drift is discovered and a change in training policy is required.
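A minimal sketch of the drift check, assuming a p-chart style comparison of the observed failure rate against the training program's expected rate (the 3-sigma limits and the 5% expected rate are illustrative assumptions):

```python
import math

def drift_detected(failures: int, trained: int,
                   expected_rate: float, sigmas: float = 3.0) -> bool:
    """Flag organizational drift when the observed failure rate falls outside
    control limits around the expected rate (p-chart style SPC check)."""
    observed = failures / trained
    limit = sigmas * math.sqrt(expected_rate * (1 - expected_rate) / trained)
    return abs(observed - expected_rate) > limit

# Expected 5% failure rate on missed approach refresher training:
print(drift_detected(4, 40, 0.05))   # False: within noise, retrain individuals
print(drift_detected(12, 40, 0.05))  # True: signals drift, review the program
```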

Training is to train for resilience.
Training serves several other functions, and one of them is to train for what is not expected to happen. It makes sense that glider pilots are trained to make off-airport landings on every flight. Even though the captain of a glider does not have engine power available, spoilers act as a form of power reversal: a glider can speed up or slow down by applying speed brakes. Glider training benefited the Captain of the Gimli Glider several years later. In 1983, a Boeing 767 was gliding from FL410 after fuel starvation. When the aircraft was fueled, the fuel quantity had been calculated in pounds when it should have been calculated in kilograms, with the effect that the 767’s fuel load was roughly half of the required load. The fuel supplier had drifted away from their expected outcome, or what number was expected when fully fueled. Resources from the Captain’s glider training kicked in, and they successfully landed at an abandoned airstrip. The same principle is true for the Hudson River landing in 2009. Resources from the Captain’s prior decision-making training kicked in and assisted in a successful outcome. Just a few weeks ago, a glider landed in a lake after the tow plane lost power shortly after takeoff. All of these are examples of events that were not expected to happen, and of training that works when unexpected events occur. Whether it is missed approach training, glider training, decision-making process training, or any other training, it is training for resilience with an expectation that resources become available when they are needed.

Training works when a candidate learns that they are consistently capable of completing a task successfully. During the old SMS era, a task completed once was all that was required, while in the new era of SMS, training is about the success of completing a task over and over again. SMS training is to build confidence as a resilience tool for when things go wrong and unexpected resources must become available to a person. Resilience training also includes training the Accountable Executive to comprehend the SMS without being overwhelmed with details.

  


Catalina9












Monday, May 3, 2021

Teamwork Simplified


By Catalina9

In aviation, for both airlines and airports, teamwork is the foundation for an organization to function within a Safety Management System (SMS). A common expectation is that everyone must unconditionally “take one for the team” for the team to win or succeed. When someone “takes one for the team”, they are expected to willingly undertake an unpleasant task or make a personal sacrifice for the collective benefit of their friends or colleagues. Should someone reject the notion that it is moral or necessary for them to sacrifice their emotions, they will more than likely be kicked off the team.

A scale is balanced by the SMS policy.
Conventional wisdom is that there is “no I in team”. This is as far from the facts as it could be. There will always be an “I” in a team. The “I” could be by position of authority, by vocabulary, by technical expertise, or simply by reputation within the organization. Until the Safety Management System came along, it was the “I” in the team who had control over the masses. They would use the “safety card” and imply that anyone who opposed their opinion of safety was against safety and should be silenced in the conversation. Playing the “safety card” is when someone makes references to safety as a tool to further their opinions and gain control of the conversation. An operational plan in aviation is called a safety management system for that exact reason. A safety management system in aviation is not about safety, but about process design, management, and oversight. The outcome of these SMS tasks is expected to reduce, or even eliminate, unexpected events, and therefore we are safe.

In 1912 an unsinkable ship left on a journey across the North Atlantic. A few years earlier, Captain Smith’s own words were that “When anyone asks me how I can best describe my experiences of nearly forty years at sea, I merely say uneventful. I have never been in an accident of any sort worth speaking about....I never saw a wreck and have never been wrecked, nor was I ever in any predicament that threatened to end in disaster of any sort.” Everyone’s opinion was correct, in that the Titanic could not sink, since the experts who designed it said so, and they were in the good hands of Captain Smith. As we all know, the Titanic went down, not because someone failed to complete a task, but because the system worked the way it was designed to work. NOTE: The system didn’t work as expected, but as designed. The team who designed the ship and the operational process were in agreement and could therefore not be wrong. A safety statement in advertising is meant to persuade the team that a million people cannot be wrong. Any person who does not accept this statement is shunned or rejected by the group. In the pre-SMS days, a team was a group of experts where the person with the best vocabulary or authority made an opinion-based decision and called it a team decision.

Behind every door is a virtual reality attendee with facts to be discovered.
Over time, virtual meetings and conferences have become the acceptable platform for meeting. Just a few months ago virtual meetings were infrequent and used as a last resort, but that changed very quickly. From small organizations to international-level conferences, meetings are today conducted via virtual attendance. The aviation industry also adapted quickly to this platform, where attendees are now placed in separate rooms or even separate locations across the globe. The transition from old-fashioned meetings to virtual attendance just happened, without conducting a safety case or change management analysis. Just as the Titanic was unsinkable, transitioning to virtual attendance was to be a flawless transition.

An analysis of the transition to virtual attendance shows that teamwork has become much more team-platform oriented and has reduced the “I” in the team. Attendees now have an opportunity to raise their concerns, opinions, or suggestions from their physical distance from the other team members. There is also an opportunity for everyone to make their voice heard by anonymous submissions. Virtual attendance has opened a new door to the Safety Management System where facts are forced to be analyzed, rather than someone needing to “take one for the team”. In a virtual conference environment, the only option other than accepting inputs from everyone on the team is to end the meeting. This unexpected change in personal involvement is a positive change for the aviation industry and hazard identification.

An opportunity is delivered on a blank sheet of paper.
An enterprise operating within an SMS world is required to implement a non-punitive policy, or a policy that differences of opinion cannot be punished. Since the beginning of SMS in 2006, when Canada became the first country to implement SMS regulations, a non-punitive policy was expected to be applied to airline or airport operations for hazard or incident reporting. This policy often came with a caveat that it would not be applied to illegal activity, negligence, or wilful misconduct. The intent, or expectation, of the non-punitive policy was to protect a person who was involved in, or observed, unexpected job-performance events. A non-punitive policy is integrated in the safety policy on which the SMS system is based. The non-punitive policy was not considered to apply to meetings or teamwork, since it was a flight crew member or airside worker who would fail their tasks, and not the management who designed the systems.

In a regulatory world, the Safety Management System is applicable to an air operator certificate and an airport certificate. Any person who is without a role or responsibility in the operations or management of these certificates may be excluded from the non-punitive policy. E.g., someone maintaining offices may be excluded, while someone maintaining an aircraft or the airfield must be included. The Accountable Executive, CEO, or President is included, all management levels are included, and all operational and support levels are included in the non-punitive policy. However, senior management was excluded from the caveat that a non-punitive policy would not be applied to illegal activity, negligence, or wilful misconduct.

The “I” in team still exists within an SMS system and cannot be removed or ignored. The “I” in team is the SMS policy, the SMS non-punitive policy, objectives, goals, and parameters. Virtual reality attendance meetings have improved the opportunity for attendees to assign data, information, knowledge and their comprehension of systems to policies and objectives and bypass the gatekeeper’s opinion. Virtual attendance has placed the “I” in team where it should be, which is in the Safety Policy Team. When the Safety Policy is the focus of the discussion, teamwork is simplified.      
   

Catalina9






Sunday, April 18, 2021

Diversity in Aviation


By Catalina9

Now and then there are newspaper articles and news stories about diversity in aviation, and discussions of whether diversity would improve safety or not. One of the arguments against diversity is that the captain of the aircraft should be the best, and diversity should not be considered. At the other end, the argument for diversity is that a diverse team stands stronger against bullies, or against the one with the best vocabulary who wins the deal. The news media portrays these opinions as opposite ends of a spectrum from safe to unsafe. Sometimes discussions are in favor of diversity, while other times diversity is unsafe. No matter who wants to be the safest candidate, both sides are using the safety card. The only reason to play the safety card is that there is no data to support their statements. Everyone falls for the safety card and becomes paralyzed in a discussion against safety. The answer to diversity is not whether it is safe or not, but to answer the question of what personal qualities a successful pilot has. The only difference between a pilot and other jobs is that everyone else does not fall to the ground when taking the wrong turn at the fork in the road. Pilot skills are not special or difficult skills that only certain pre-selected people can learn. They are skills anyone can learn with determination and focus. Diversity in aviation is to recognize people who are excellent at managing these human behavior skills.

Becoming a successful pilot requires a combination of skills. It is not all mathematical and technical; pilots have to think creatively, act under pressure, and adopt a mentality fitting a role of such great responsibility. Pilots not only require leadership qualities, but they also have to communicate and work well as part of a team. The aviation industry in general expects that a pilot develops several human behavior skills to be a successful pilot. These principles are true for single-pilot operations or as a member of a multi-crew operation. They are true for recreational, or general, aviation and for professional, or commercial, aviation. A successful pilot communicates clearly, not only verbally but also through behavioral communication. Clear communication closes the gap between expectations, assumptions, and anticipation and the immediate tasks to take place in the immediate future. Clear communication also closes the gap between these short-term actions and long-term objectives, which is to move the aircraft from one location to another without disruptions, unplanned, or unexpected events.

Pilots have developed a successful skill of situational awareness. Generally, situational awareness in aviation is that pilots know where they are and know where they are going. However, this definition assumes that the pilot is working within a flawless operational environment and that they are the only system that potentially will malfunction. Situational awareness is comprehension awareness, where data collected is turned into information, knowledge, and system comprehension or interactions.

The pilot of an aircraft maintained situational awareness of the Cali fix, but crashed into the high mountains.

Situational awareness is therefore more than just knowing where you are at a point in space over the surface of the earth. Situational awareness is to understand where in the process the automated system is, it is to understand what is coming next in the process, it is to understand the effect of flight control inputs, including long-term effects, it is to understand power plants, it is to understand human factors, it is to understand the environment, it is to understand topography, it is to understand the laws of physics, it is to understand aircraft systems, it is to understand navigation aid inputs, it is to understand display outputs, including visual navigation, it is to understand positions as points in space, it is to understand air traffic controller communication and intent, it is to understand outside visual clues, it is to understand the airport environment, and it is to overlay all these situational awareness clues in the correct order onto visual cockpit displays and instrument communication, with a mental picture of what effect they will have on continuing the flight. In addition, when in visual meteorological conditions, or on a visual approach, situational awareness is to transfer this virtual information onto the visual view ahead.

Successful pilots have developed teamworking skills. A successful airline several years ago operated on the principle that captains who had not developed teamworking skills were forced into a crew environment with major frictions. Eventually, this crew-pairing caused a fatal crash, but the airline also made its point: accidents will happen if the crew opposes teamwork. Teamworking skills are to develop forward-looking accountability, or to recognize and accept that current actions, or reactions, have consequences. Conventional wisdom is that there is no “I” in team, but there is. Within a team, the Captain still has, and must have, the final decision authority. Teamwork is not to accept the lowest common denominator, but to input data into the Captain’s decision-making process.

There is an “I” in TEAM.

A successful pilot has developed decisiveness and resilience. Time and resource constraints, as well as other pressure-adding factors, are decision-making challenges for pilots. A pilot’s resilience to unexpected situations, or special cause variations, is fully developed by self-experienced events. Resilience is continuous, or incremental, improvement of a pilot’s behavior during unexpected events, obstacle assessments, or emergencies. A pilot is expected to make sound judgements for the best possible outcome for their circumstance.

A successful pilot remains calm. Remaining calm is different from taking your time to assess or react to events. It is to react immediately based on the unexpected event. Depending on the emergency, there will always be a point of no return, where any sound decision, or reaction, no longer changes the outcome. Remaining calm is also a skill developed over time and matured to a point where a reaction to an emergency becomes a normal part of operations. Several years ago, a flight crew experienced a fire in the airplane just after takeoff. The captain initiated the emergency action by returning for landing, knowing that the aircraft needed to be on the ground within 8 minutes, before the point of no return was reached. The first officer panicked and froze on the controls, at which time the captain had to momentarily interrupt the person’s behavior, taking time away from the emergency. Calmness, a skill developed incrementally and by self-experience, is a quality of a successful pilot.
Self-discipline and time management are two other traits of a successful pilot. Discipline is to do what you know needs to be done to become the very best in your field as a successful pilot. Perhaps the best definition of self-discipline is the ability to make yourself do what you should do when you should do it, whether you feel like it or not. It is easy to do something when you feel like it. It’s when you don’t feel like it, and you force yourself to do it anyway, that you move your life and career onto the fast track. A successful pilot has become successful in self-discipline and time management.
  
Leadership motivation is another skill needed for a successful pilot. There are five general leadership styles that a pilot should comprehend. The first leadership style is Structural: everyone knows exactly what needs to be done, why it needs to be done, and to what standard. The next leadership style is Participative: this style makes your team feel that you really care about them by putting them first, and they are treated with the same respect, patience, and understanding. The third leadership style is Servant, which is a great style to start off with to gain respect, trust, and loyalty. The style also builds a strong culture, since it is tailored to the team’s needs. The next leadership style is Freedom, where people have an opportunity to perform. This style inspires an entrepreneurial spirit with a clear goal in your team members. The fifth leadership style is Transformational, which is a leadership style that affects people’s emotions by painting a big, exciting picture of the future. People are transformed by tapping into their hopes, dreams, and ideals. Personnel become motivated, leaders become motivated, and productivity is enhanced through high transparency and communication.

A successful pilot has the ability to understand technical information, and they need to comprehend how their aircraft works. They comprehend how decisions affect aircraft performance, regulatory compliance, and compliance with their company’s safety policy. A successful pilot’s technical expertise is limited to control inputs and comprehension of how these affect aircraft performance. The extent to which they are expected to repair a malfunction is limited to the checklist items. The days when a pilot was also a certified aircraft mechanic have passed. By understanding technical information, a successful pilot has a tool to communicate errors effectively.

Flying is more than recalling an infinity of numbers.

A successful pilot is more than a numbers person. Pilots need to know the numbers for the aircraft, with the capability to perform mental arithmetic calculations quickly on demand. These calculations are automated and performed by computers. However, as an auditor of these systems, a pilot needs to comprehend the what, when, where, why, who, and how of these calculations. A tragic example is how an airliner crashed in the South Atlantic several years ago when the pilot could not comprehend these auto-calculations.

It is also said that a successful pilot must know when it is acceptable to break the rules. Pilots have a strict set of rules to follow, laid out by regulating bodies and various other authoritative sources. Rules are often implemented from public pressure as a response to prevent known causes of accidents. Regulatory fuel reserves became a requirement because fuel-burn calculations were unreliable in the past, pilots were pressured to accept ATC approach-delay clearances, or incorrect weight calculations or changed winds aloft altered the fuel burn.

Many great inventions and safety improvements came about after major aviation accidents, but with all the rules implemented, they still did not prevent accidents. The conventional wisdom that a pilot must know when it is acceptable to break the rules is false and incorrect. The Captain is the final authority of an aircraft and responsible for the safety of that flight. As the final authority, the Captain must comply with one rule taking precedence over all other rules, which is the safety of that flight. Whatever a Captain does to ensure the safety of that flight does therefore not break any rule at all.

There are many examples in aviation history of a perfect pilot being involved in an aviation accident. The perfect pilot was involved in the first accident, on September 17, 1908, injuring Orville Wright. Two perfect pilots were operating the two airplanes that crashed over the Grand Canyon in 1956. An exemplary pilot, and a model for all other pilots, was in 1977 involved in the worst aviation accident to date. In 1978 a perfect pilot crashed when a thrust-reverser deployed during a missed approach. In 2017 the perfect pilot lined up their approach on a taxiway occupied by sequenced airplanes. The list could go on and on about how perfect pilots were involved in accidents.
 
When diversity is being discussed, the discussion revolves around safety, ensuring that the best pilot is flying the airplane, or claiming that there is only one type of pilot that is safe. Often that type is a pilot who fits all the checkbox answers. What is forgotten in this equation is that the past does not guarantee the future.

Diversity in aviation is about the Enterprise itself, the Accountable Executive, and their Project Solutions Leadership Motivation. When diversity is discussed on the public platform, these discussions take an emotional turn where diversity is no longer based on facts but on the comfort level of the participants. Public opinion, which is a trigger for new regulations, is swayed by the participant with the best vocabulary, as opposed to the facts providing directions at the fork in the road. The best example of this is that a regulator changed their aviation Safety Management System due to emotional pressure from inspectors and the public.

Diversity in aviation is about exposure to events and the environment. Exposure is more than training, since it is about personal experience, it is about the emotions when unexpected events occur, and it is about a Captain making the right decisions when everything else goes wrong and all odds are against you. It was exposure that saved a light twin in the Rockies that ran into severe icing, lost all instruments, and ended up in a spin. It was exposure that saved the MU2 with a dual flameout in the Rockies. It was exposure that saved the Hudson River aircraft, and it was exposure that saved the Gimli Glider from a major accident.

Catalina9





Tuesday, April 6, 2021

Predictive SMS


By Catalina9

There are three levels of a Safety Management System (SMS). Level 1 is the reactive level, Level 2 is the proactive level, and Level 3 is the predictive level. A fully developed SMS is an SMS at the level where predictions are applied. A predictive level is different from a proactive level, but these two levels also work in harmony. Level 1, the reactive level, stands out in its own class, since no actions are required until after the fact, or after the data is available in a data collection tool. Level 2 reacts to events to ensure that they do not occur again. Level 3, the predictive level, is when the system delivers predictions of future events.


A predictive level is the guide to excellence 
A predictive level is different from foreseeing the future. A predictive SMS is about comparing data and results collected in the past with currently collected data to predict the future. Without making any changes to the human factors system, organizational factors system, supervision system, or environmental system, the outcome from the past will repeat itself in the future. A predictive system is not designed for, or capable of, predicting a specific event in time (duration), space (location), and compass (direction), but it can predict an outcome when certain parameters are met. If a person is not trained to tow airplanes but is expected to park five airplanes in a hangar that normally holds only four, there is a high probability that at least one airplane will be damaged during one of the towing processes. This is predictable, but it is impossible to predict the date and time of the future incident.

A predictive SMS is also quite different from a proactive process. In a reactive SMS, corrective actions are generated and implemented in response to event analysis, to avoid future incidents or accidents. A predictive SMS is the opposite of a reactive system, in that a predictive level accepts that future incidents or accidents are inevitable. In a reactive system, hazards are captured and entered into a hazard register for analysis. After the proactive and hazard register process is completed, the predictive system goes into the hazard register to predict which hazards are next in line to cause an incident or accident. In short, the proactive system places hazards from the hazard register into boxes so they can no longer cause incidents, or unscheduled events, while the predictive system then removes some hazards and places them into another box of future known incidents. It is only when an enterprise accepts that incidents are inevitable that incremental safety improvements become available as a safety tool.

At first glance it also appears that the proactive and predictive systems oppose each other, since the proactive system generates corrective actions to ensure that certain incidents do not happen again, while the predictive system makes these same hazards a cause of the next incident. Yes, they are two different systems, but they complement each other.

Remember, a predictive system is not a system with the ability to pinpoint the next incident. After a hazard register is populated and corrective action plans (CAPs) are assigned, a predictive system takes over and monitors the CAP processes. A predictive system is a daily quality control system. Some predictions may be long-term, while other predictions are only available short-term. If an aircraft is still travelling at 100 KTS when reaching the threshold markings at the other end of the runway, a confident prediction is that within the next few seconds the aircraft will run off the runway. If this same airplane is touching down beyond the half-way point of the runway, a runway excursion may be predicted, but it is a prediction with less confidence. In a third event, the aircraft is on approach speed and slope for a touchdown point within the touchdown area, and a prediction can be made that this will be a successful landing. All this makes sense, but why even bother making such obvious short-term predictions? It is absolutely true that it is nonsense to make these short-term predictions, since they do not include processes to affect the outcome. A predictive SMS makes long-term predictions with a high level of confidence in an outcome by monitoring hazards.
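A minimal sketch of this long-term monitoring, assuming a hazard's daily job-performance deviations are tracked against the rate its corrective action plan assumed (the window size, thresholds, and counts are all illustrative):

```python
from collections import deque

class HazardMonitor:
    """Track daily deviations for one hazard from the register and predict
    an incident when the rolling rate drifts past the CAP's assumptions."""

    def __init__(self, cap_expected_rate: float, window: int = 30):
        self.expected = cap_expected_rate   # deviations/day the CAP assumed
        self.days = deque(maxlen=window)    # rolling window of daily counts

    def record_day(self, deviations: int) -> None:
        self.days.append(deviations)

    def prediction(self) -> str:
        rate = sum(self.days) / len(self.days)
        if rate > 2 * self.expected:
            return "incident likely: hazard drifting past CAP assumptions"
        if rate > self.expected:
            return "monitor: deviations above CAP expectation"
        return "on track: CAP holding"

monitor = HazardMonitor(cap_expected_rate=0.5)  # ~1 deviation per 2 days assumed
for count in [0, 1, 0, 2, 1, 3, 2]:             # daily quality control inputs
    monitor.record_day(count)
print(monitor.prediction())  # incident likely: hazard drifting past CAP assumptions
```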

A predictive level is the foundation of a sound marketing plan for SMS.
From a reactive system's point of view, each hazard is placed in a box labeled “Corrected Hazards”. What the predictive system does is pick up one hazard at a time and monitor it. Hazards are monitored daily within a quality control system and analyzed down to the level of job performance, or how the job is done. A job performance level is analyzed against four high-level factors: Human Factors (HF), Organizational Factors (OF), Supervision Factors (SF), and Environmental Factors (EF). Applying a daily rundown of airport operations tasks, these factors are monitored and recorded. Over time, a predictive SMS will paint a picture of each task, and if the picture mirrors expectations from the hazard register, incidents are bypassed.
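For illustration, one daily record of this kind could look like the sketch below (the scoring scale, field names, and expected values are assumptions, not a prescribed format):

```python
from dataclasses import dataclass
from datetime import date

FACTORS = ("HF", "OF", "SF", "EF")  # the four high-level factors named above

@dataclass
class DailyTaskRecord:
    """One daily quality-control record for an operations task, scored
    against each factor (illustrative 1..5 scale, 5 = fully as expected)."""
    task: str
    day: date
    scores: dict

    def mirrors_expectations(self, expected: dict) -> bool:
        """True when the picture painted by the scores meets the hazard
        register's expectation for every factor."""
        return all(self.scores[f] >= expected[f] for f in FACTORS)

record = DailyTaskRecord("runway inspection", date(2021, 4, 6),
                         {"HF": 4, "OF": 5, "SF": 3, "EF": 4})
print(record.mirrors_expectations({"HF": 3, "OF": 3, "SF": 3, "EF": 3}))  # True
```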

In a predictive SMS, the four factors are in a mastermind alliance where they work actively together, in perfect harmony, toward a common definite objective. This is similar to the S-H-E-L-L model, where Software, Hardware, Environment, Liveware (people), and Liveware interact with each other in a robust way. Human Factors, Organizational Factors, Supervision Factors, and Environmental Factors in a predictive SMS interact with each other in incremental improvements. A change to one of the factors requires a change to one or more of the other factors. Example: an organization may change their organizational factors to suit a Safety Management System, or a person may learn to recite the alphabet for the first time, but without making changes to supervision factors there are opposing internal forces acting for and against incremental improvements. A person may learn to recite the alphabet for the first time, but without harmony between HF, OF, SF, and EF, the opposing forces are destructive to learning.

A predictive SMS detects these opposing internal forces and can, with a high level of certainty, or probability, predict that by continuing down the same path without harmony, eventually the bubble will burst, or an incident will happen. When a paired flight crew, captain and first officer, are opposing each other, that aircraft is on its way towards an accident. A predictive SMS recognizes and displays these forces.


Catalina9


Monday, March 22, 2021

Where Is SMS Going


By Catalina9

The Safety Management System (SMS) in aviation has since its infancy taken many twists and turns to find a path forward. SMS started out as an idea of how aviation should manage safety and how the system could be integrated into a functional safety system in operations. Prior to SMS, safety was managed by the “safety card”, or an opinion-based safety solution process. With this in mind, the onset of SMS forced airlines and airport operators to revamp their safety structure and change their approach to safety by 180 degrees.

Without taking ownership of SMS direction, success is only random.
This new approach caused conflicts and confusion, and the path of least resistance was to reject the new Safety Management System. Rejection became apparent in news articles about how SMS had failed safety, and surveys were tailored to show this. However, over a short time SMS grew enough roots to resist being pulled out, and it grew stronger. One lesson the aviation industry learned quickly about SMS was that it could not fail, since it was a mirror of their operations and painted a true picture of their leadership. As a mirror, the SMS caused friction between SMS and its Accountable Executive. Since the aviation industry had developed a great safety record over the years, it was difficult to accept the fact that they might not be as safe as they thought they were.

SMS will go in whatever direction the aviation industry wants it to take. It is therefore crucial that high-level leaders understand and comprehend their SMS and the policies they are drafting. SMS is unlike older safety systems in that it does not force safety onto operators, but rather identifies to operators whether they drift away from, or remain on, the path towards their objectives and goals. SMS is designed to be a fluid system and adjust to operational needs. Regulatory oversight bodies and the aviation industry are both affected by external pressure from the public, from the industry itself, and from political policies on how to shape the SMS through regulation. Just a short time after SMS became a regulatory requirement for all operators, the smaller on-demand and charter-only operators were excluded from operating with an SMS. This was the first time the aviation industry mapped the SMS landscape and chose their path of least resistance.

The path to a successful SMS is a balancing act.
There are two different paths the SMS needs to take going forward. One is the regulatory path and the other is the operational path. These are two distinct and different paths, while they are still connected to the outcome of safety. Look at this as each rail of a railroad track. Regulations in themselves are not safety-in-operations requirements, but requirements for compliance in a static environment. This can best be described as the issuance of an airline or airport certificate, which is issued to a static environment with planned directions of travel. As soon as there is movement, it becomes operational and incremental safety improvement kicks in. The regulator must assess an SMS based on regulatory compliance, while an operator must assess their SMS in a fluid and operational state. Only by comprehending SMS is it possible to see the differences, and that these two paths are parallel and not opposing.

The path SMS needs to take is the system approach path, where the task becomes to design systems and processes to complete operational tasks without first assessing each task for regulatory compliance. This does not imply that systems are not assessed for regulatory compliance, but rather that the first task is to identify current operational processes, since they paint a true picture of operations. This is different from conducting a gap analysis, since it is a process tracking task. After systems and processes are identified, they are assigned a regulatory compliance component and integrated in a daily quality control system. Quality control of operations is a prerequisite for the Quality Assurance System.
Without a quality control path, the SMS is wavering

Where SMS is going is difficult to predict, since there are special cause variations that will affect its path. The path it must take is the path of incremental safety improvements for both airlines and airports. Over time it will be possible to identify drift away from the desired and projected path. When drift is identified, it becomes possible to make incremental corrections of operational processes to change course or move back onto the path. Drift in itself is not necessarily an undesirable or unsafe change, but often a change made because the planned systems and processes were impractical. The unsafe portion of drift is when the drift itself remains unidentified.

The first stage of drift happens at the operational level, where a process self-adjusts into a practical process; e.g. a pilot changing from an IFR approach to a visual approach in VFR conditions. In that example the drift was eventually identified and integrated as a standard process. The second stage of drift is at the management or organizational level, where complacency drives the processes. Social media also has a major impact on the SMS decision-making process. Social media is free advertising for special interest groups, including support groups for a healthy Safety Management System.

When assessing the future of Where SMS Is Going, one must reflect on the past path. It is reasonable to assess that the past path of SMS becomes forward-looking guidance for the path to come. By laying out the path from the past, drift can be monitored and adjusted if it is undesirable. SMS is a system which cannot fail, since it paints a true picture of an enterprise. For several reasons there was opposition to SMS from the industry and the regulator when the SMS regulations were first implemented. Some of the opposition was reasonable and relevant to the facts, while other opposition was emotional and irrelevant to the facts. Within a short time, surveys were designed to fail the SMS. An example is the CBC News article posted on April 14, 2014 about SMS: “A survey of Canada’s aviation inspectors shows they are increasingly concerned about aviation safety because of Transport Canada rules that leave responsibility for setting acceptable levels of risk up to the airlines. The survey, conducted by Abacus Data on behalf of the Canadian Federal Pilots Association (CFPA), indicates 67 per cent of Canadian aviation inspectors believe the current system increases the risk of a major aviation accident, up from 61 per cent in 2007.” Today is 2021, and we now know that the 67% were wrong at that time. There has not been a major aviation accident attributed to the SMS since the survey. Human factors have not changed, and it is reasonable to assume that when opinions about SMS are applied, as opposed to data and facts, 67% will be wrong in the future.

There was one simple reason why SMS was made a regulatory requirement some years ago. The reason was an understanding that the old aviation oversight system was not capable of preventing accidents. It was also understood that operators, both airlines and airports, did not have a regulatory tool available to prevent accidents until SMS became available. A friend of mine once said: “As long as the regulatory authorities don't receive feedback from operators (as it is now in many countries), and safety accountability is not practiced and not even understood or taken seriously, the SMS will still generate data, but I cannot imagine or would say if it would be worth data; i.e. proactive data.“ This is so true. Data will be pouring in and stored without assessment or consideration. The test today, to lay out the path for the NextGen SMS, is to apply the WINK test, or the What I Now Know test: if I had known then what I now know, what would I have done differently about SMS? Then apply this comprehension to the NextGen SMS path.

Comprehending SMS is a process.

The foundation of comprehension is data. When raw data is collected, it comes in all types, shapes, and forms. Some enterprises do not accept data, or reports, if they are not submitted in the proper format. When the report format is the primary tool to validate a report, the report itself becomes a support tool for the safety policy rather than a support tool for data collection of hazards. An enterprise should accept reports submitted in any format: via a report form, email, telephone, fax, verbally, in a news article, as a regulatory finding or even as hearsay. Look at the reports as the ballots in a small-town mayoral election, where the candidates are randomly submitted without preference until the count is completed. It is when data is analyzed that it is turned into information. Information is neutral, without bias or emotions. Information generates knowledge by being absorbed by one or more of the five senses. Absorbed knowledge then generates comprehension of one or more systems and their interactions.
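
To make the intake idea concrete, here is a minimal sketch, with invented names and fields, of accepting a report from any channel and reducing it to one common record. Validation of format is deliberately absent; analysis comes later, over all reports alike.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HazardReport:
    """A report reduced to a common record, whatever channel it arrived by."""
    received: date
    channel: str   # "form", "email", "telephone", "verbal", "news", "hearsay", ...
    text: str      # raw content, kept verbatim

def accept(channel, text):
    """Accept any submission; no format gatekeeping at intake."""
    return HazardReport(received=date.today(), channel=channel, text=text)

reports = [
    accept("verbal", "Birds near the threshold of runway 27 this morning"),
    accept("email", "Fuel truck blocked taxiway B during pushback"),
]
print(len(reports), "reports collected; analysis turns this data into information.")
```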

A Safety Management System is irrelevant to safety unless it operates with a daily rundown quality control system and daily incremental improvements derived from WINK, the What I Now Know test. For the Safety Management System to be effective, all levels in an organization must be able to answer the same question, asked over and over again: “Why does the global aviation industry, being airlines or airports, need a Safety Management System (SMS) today, when they were safe yesterday without an SMS?” Unless the reason is known, there is no motivation to improve. For the next ten years, the one major definite purpose and the greatest single reason for an SMS is for every single airline and airport employee to accept and take ownership of their Safety Management System. There is nothing else that matters on the path Where SMS Is Going.

Catalina9


Sunday, March 7, 2021

Complacency


By Catalina9

Complacency is a human behavior hazardous to aviation safety. Complacency has become the new root cause for accidents and has replaced pilot error. It is conventional wisdom that complacency is caused by the very things that should prevent accidents: factors like experience, training and knowledge. Complacency makes crews skip hurriedly through checklists, fail to monitor instruments closely or fail to utilize all navigational aids. It can cause a crew to use shortcuts and poor judgement and to resort to other malpractices that mean the difference between hazardous performance and professional performance. Complacency is also given as the reason when things go wrong flying the same route daily or doing the same job regularly. Complacency has just become another word for pilot error. However, this is all wrong. Complacency is not caused by experience, training, or knowledge. Complacency is all about organizational factors.

Complacency is to take the path of least resistance.
When conducting a root cause analysis within a Safety Management System (SMS) world, there are four factors to consider. These are human factors, organizational factors, supervision factors and environmental factors. It is also crucial to a root cause analysis to know that these factors do not cause complacency. In a healthy enterprise, complacency as a root cause does not exist.



Complacency is when you are no longer striving to do your best or perform with accountability, but just do the minimum to get by. Complacency is also when you are not staying up to date in your field as an airline or airport operator. Complacency is to wait for the regulator to find problems with operations, rather than operating with a Quality Assurance System. Even if the subject is not linked to the aviation industry, take a course or attend a conference. It is easy to drift into complacency, and it is not noticeable to yourself.

Complacency is also when you are not seeking or taking advantage of new opportunities but relying on yesterday's news. There are enterprises, both large and small, that believe training is busy-time, or a waste of time, since their personnel were already trained. Annual training that is not a regulatory requirement is discouraged by these so-called leaders. When you do not seek or take advantage of opportunities, your skills become stale. Doing the same thing over and over gets boring. You remain invisible. Key stakeholders and decision-makers do not know the value that you contribute, and that invisibility is a setup for an accident. Look for opportunities to work on new projects and maintain an active and curious mind.

Complacency is when you are not maintaining or building your network of business contacts or associating with the industry. When you do not build ongoing relationships at work or stay tuned to aviation news, you are not privy to critical information that can influence your daily job performance. Complacency is when you do not risk sharing your opinion or ideas. This is a high-risk factor, since when there is an inherent risk in sharing opinions, the enterprise is operating outside a just culture environment.

Complacency is to force the wrong piece to fit the puzzle.
Complacency is not a condition but a symptom of hazards within an enterprise and its lack of commitment to organizational factors. To perform at their best, individuals have two basic needs in the world of work, whether in the aviation industry or any other industry. The first is the autonomy need. This is the need to be seen and respected as an individual, and to stand out for one's personal performance. It is a need to be recognized for individual achievements. The second is the dependency need that each person has in the workplace. This is the need people have to feel part of something bigger than themselves. People want to be part of a team. It is the need to feel recognized and accepted as part of a group of people in the workplace.

Leaders create environments where people feel both autonomous and important, on the one hand, and have their dependency needs satisfied by making them feel as if they are part of a team; part of the whole organization. Using positive reinforcement at work is a key factor in personnel motivation. It is what takes place at the moment of contact or communication between the manager and personnel that is the key determinant of performance, effectiveness, productivity, output and profitability of an organization. The point at which the two people connect, whether positively or negatively, is where the past, present and future performance of the individual and the organization is determined. When this contact between the boss and the subordinate is positive, supportive, and encouraging of self-esteem and a positive self-image, then performance, productivity and output of the individual will reach its highest level.

When lightning strikes it’s best to play it safe.

Personnel satisfaction is lost when the point of contact between the manager and the managed is negative; for any reason at all, performance and output will then decline. A negative relationship with the boss will trigger fears of failure, rejection, and disapproval. When their boss is negative for any reason, people will play it safe and do only exactly what they need to do to avoid being fired. Almost everyone has worked in a low self-esteem environment. These are usually remembered as the worst jobs the person ever had. Everything you do to improve this intersection or contact improves the overall quality of your work life, no matter where you are on the ladder of management.

The more effective you become in eliciting peak performance from each of your staff members, the more and better people you will be given to manage. The top managers and leaders of today are those who are capable of eliciting extraordinary performance from ordinary people. Effective managers are intensely action-oriented. When they hear a good idea, they move quickly to implement it and put it into action. Therefore, if you hear about anything that you think can help you motivate your staff to a higher level, do not delay. Practice it immediately, that very day. You will be amazed at the results.

The Safety Management System (SMS) has all the tools an enterprise needs for project solutions, leadership and motivation. SMS has a just culture, where there is trust, learning, accountability and information sharing. In a successful SMS world, comprehension is derived from data (collected by hazard, incident or accident reports), information (data turned into information), knowledge (absorbed information) and comprehension (interacting systems). When comprehension is missing, either the system is faulty or the data is not analyzed; either way, system comprehension is faulty. This faulty system comprehension does not rest with pilots, mechanics, or airport crew, but with the enterprise. When a CEO or Accountable Executive wants to find out the reason for complacency in their organization, all they have to do is take a look in the mirror.

Catalina9





Tuesday, February 23, 2021

When Hazards Are Reactive


By Catalina9

It is a regulatory requirement that an airport or airline has a process in place for identifying hazards to aviation safety. It is also expected that an airport or airline has a proactive process or system that provides for the capture of information identified as hazards. At the time the Safety Management System (SMS) was implemented, both airlines and airports established a reactive process to capture operational hazards, as they were relying on organizational personnel to identify and report hazards. This process is in itself a hazard, but it was put in place without a risk assessment or change management analysis. The directive was simply for their personnel to head out to identify and report hazards.

Some activity is a hazard simply due to regulatory non-compliance.



Within the SMS regulations, hazard identification is defined as a proactive process. A proactive process is to recognize an opportunity and plan a change. It is also to test the change by carrying out a small-scale study or applying your SMS random sampling process. After testing is completed, the task is to review the test, analyze the results, and identify what you have learned. The next critical step, a step often assumed as an unwritten rule, is to make a decision. A decision is more than deciding which path to take; it is to identify and document hazards and make a risk analysis decision. The final step of the decision cycle is to take action based on what you learned in the study step. If the change did not work, go through the cycle again with a different plan. If you were successful, incorporate what you learned from the test into wider changes. Use what you learned to plan new improvements, beginning the cycle again.
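
As a sketch of this cycle with the decision step made explicit rather than assumed, the example below documents the hazards and the risk decision before acting. The function names and the shape of the findings are invented for illustration.

```python
def run_cycle(plan, small_scale_test, analyze):
    """Plan, test, study, decide (documented), then act."""
    results = small_scale_test(plan)   # carry out a small-scale study
    lessons = analyze(results)         # review the test, identify learning
    finding = {                        # the often-skipped decision step, documented
        "plan": plan,
        "hazards": lessons.get("hazards", []),
        "risk_acceptable": lessons.get("risk_acceptable", False),
    }
    if finding["risk_acceptable"]:
        return {"action": "incorporate into wider changes", **finding}
    return {"action": "repeat cycle with a different plan", **finding}

outcome = run_cycle(
    "new pushback route",
    small_scale_test=lambda p: {"observations": 12},
    analyze=lambda r: {"hazards": ["vehicle proximity"], "risk_acceptable": True},
)
print(outcome["action"])  # -> incorporate into wider changes
```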

At the time of SMS implementation, when airports and airlines made their decision for operational personnel to identify and report hazards, they had overlooked the decision step. Since the step was overlooked, or ignored, they unknowingly placed their personnel in a hazardous environment. It was understandable that no consideration was given to this issue at the time, since there were no changes to their current operational processes. Pilots were still flying airplanes the same way, ground personnel did their regular jobs, mechanics kept on fixing airplanes and airport personnel continued with the same tasks they had done for years. In their own minds, no change management analysis was required. However, if their analysis had included a decision process, a door would have opened to the fact that the SMS regulations were new and required a change management analysis, or a safety case. Organizations, small and large, are still sending their personnel out into the minefield of hazard identification.

At first glance it may not seem like a high risk to send personnel out looking for hazards, since they had worked in that same hazard environment prior to SMS implementation. To an extent this is true, except that SMS was a new regulation, was required to come with a proactive hazard approach, and personnel assigned duties are required to be trained. In addition, the assumption that all personnel were aware of the hazard environment they worked in produced an assumed, and untrue, risk level. When airlines or airports send personnel out looking for hazards without guidance, they are accepting a risk beyond their own imagination.

Identifying hazards is a process, and like any other process it requires training and a documented process to identify training requirements, so that personnel are competent to perform their duties. An Accountable Executive is responsible for operations and accountable for meeting the regulatory requirements. It only takes a label, or an organizational position, to be accepted as an accountable executive, without any knowledge of SMS processes. The accountable executives for both airports and airlines have a responsibility to identify hazards prior to assigning personnel in their operations to identify these hazards.

The task is to conduct a pre-hazard assessment and define the hazards as Safety Critical Areas (SCA) and Safety Critical Functions (SCF). The Safety Critical Function is a sub-category of the Safety Critical Area. It is assumed that any accountable executive has the knowledge and comprehension of their operations to develop their SCA and SCF. When a comprehensive list of SCA and SCF is developed, and personnel are trained, they are qualified to go looking for hazards and report how those hazards affect their operational tasks. An airport may assign its SCA to runways, taxiways, aprons, approaches, the runway strip etc., and assign SCF, or hazards, that are common within those areas. The same concept goes for airlines: establish SCA for ground operations, cockpit, cabin etc., and assign SCF to those areas.
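
As a minimal sketch of this structure, the mapping below records SCAs as areas and SCFs as the functions nested under each area. The example areas and functions are illustrative only; an operator would derive its own from its pre-hazard assessment.

```python
# Safety Critical Areas (keys) and their Safety Critical Functions (values).
AIRPORT_SCA = {
    "runway": ["surface condition", "wildlife activity", "lighting"],
    "taxiway": ["surface condition", "signage", "vehicle traffic"],
    "apron": ["fueling", "ground vehicle movement", "foreign object debris"],
}

AIRLINE_SCA = {
    "ground operations": ["pushback", "de-icing", "loading"],
    "cockpit": ["checklist discipline", "automation modes"],
    "cabin": ["emergency equipment", "passenger briefing"],
}

def functions_for(area, sca):
    """Return the SCFs defined under a Safety Critical Area."""
    return sca.get(area, [])

print(functions_for("runway", AIRPORT_SCA))
```

With such a list in hand, personnel are looking for defined hazards within defined areas, rather than wandering an undefined minefield.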

Some years ago, I climbed a tree to take a picture. There was an inherent risk by climbing while the true risk was waiting below.
   
One question I am often asked is whether a pilot or airport person should report the same hazard day after day, and the answer is no, they don't. Hazards which are present daily and regularly are inherent risks of aviation, or common cause variations, and are mitigated progressively. In addition, knowledge of these risks is learned by obtaining a pilot license, crew training, company flight training, an airport manager certificate or other operational training. Knowing what not to report is just as much a part of organizational hazard training as knowing what to report. This type of training is also commonly called Judgement Training.

Operators without a Judgement Training program are operating with a reactive hazard reporting system. A couple of examples: an aircraft leaving the gate may have to navigate different routes from time to time due to vehicle traffic or oncoming aircraft. These are hazards, but they are not expected to be reported.

However, if a vehicle moves in uncomfortable proximity to the aircraft, it becomes a reportable hazard. For airport operations, snow on the runway, while still reported as a runway surface condition, is also a common, or inherent, risk in aviation and is not to be reported as a hazard. On the other hand, if the snow arrives at a rate and quantity that require the airport to close, it becomes a reportable hazard.
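
As a minimal sketch of how such judgement criteria might be written down as explicit reporting rules, the conditions and the 10-metre threshold below are invented; an operator would define its own rules from its SCA/SCF list and Judgement Training program.

```python
def is_reportable(event):
    """Distinguish inherent, expected risks from reportable hazards."""
    if event["kind"] == "vehicle_near_aircraft":
        return event["proximity_m"] < 10       # uncomfortable proximity
    if event["kind"] == "snow_on_runway":
        return event["forces_closure"]         # routine snow is a surface condition
    return True                                # unknown events: report by default

print(is_reportable({"kind": "snow_on_runway", "forces_closure": False}))   # False
print(is_reportable({"kind": "vehicle_near_aircraft", "proximity_m": 5}))   # True
```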


Catalina9





