Saturday, July 8, 2023

Could SMS Have Prevented March 27th Disaster?


By OffRoadPilots

Whether the safety management system (SMS) of today could have prevented the worst aviation accident in history, the March 27, 1977, collision of two B-747s at Los Rodeos Airport on Tenerife, is a question without an answer. There is no answer because SMS is forward-looking, and an accident cannot be predicted until the last few seconds, when it is inevitable that it will occur. At the time of the accident, it was assumed that aviation operated with safe, fail-free systems, except for pilot errors as the bad apples in the box. Pilot error had become the industry-accepted root cause of any accident. It is unknown when pilot error became the popular root-cause solution, but accident reports since the late 60s and early 70s support this as a solution. However, after the June 30, 1956, Grand Canyon disaster, the probable cause of the mid-air collision was not allocated to pilot error, but to the pilots not seeing each other in time to avoid the collision due to multiple other factors. Human factors are not the same as human error. Human errors and other negatives are not useful for intervention to improve safety; they are symptoms of much deeper causes within systems.

Before we answer the question of whether an SMS could have saved both aircraft, let's look at what SMS is. SMS is an evolutionary, structured process that obligates organizations to manage safety with the same level of priority as other core business processes. SMS is a structured means of safety risk management decision making; it is a means of demonstrating safety management capability before system failures occur; it provides increased confidence in risk controls through structured safety assurance processes; it is an effective interface for knowledge sharing internally and between external organizations; and it is a safety promotion framework to support a sound safety culture and promote business strategies. An effective safety management system supports the business itself, just as several other systems are required to conform to regulatory compliance, to recognize competitors, to maintain business relations, and to evaluate processes for effectiveness against defined goals.

Four factors of the 1977 disaster stand out in the accident report: human factors, such as communication and observations; organizational factors, such as authority and decision-making; supervision factors, such as air traffic services, lights and signage; and environmental factors, such as weather and airport design. Combined, these factors played their roles in the designing, planning and execution of the disaster. The accident process was set in motion the moment the decision was made to divert all aircraft to Los Rodeos.

The first aircraft to taxi was backtracking runway 12 for departure from runway 30 and was instructed to exit the runway at the first taxiway and hold short of runway 30, but was later cleared to taxi to the button of runway 30 for takeoff. The second aircraft was also cleared to backtrack runway 12, but to clear the runway at the third taxiway. After lining up on runway 30, the first aircraft received its departure clearance. The second aircraft was still backtracking runway 12, looking for the third taxiway exit, when the first aircraft began its takeoff on runway 30, and an accident was inevitable. At the time of the accident, runway visibility varied between 300 meters and 1,500 meters (1,000 ft – 5,000 ft).

The first task in operating with an SMS is to appoint an accountable executive (AE) to be responsible for operations or activities authorized under the certificate and accountable on behalf of the certificate holder for meeting the requirements of the regulations. An SMS policy includes safety objectives, a commitment to fulfill those objectives, a policy for reporting safety hazards or issues, and a definition of unacceptable behavior. A safety policy must also be documented and communicated throughout the organization. What the safety policy does is establish the base and foundation on which to build an SMS, and plant the seed that safety is paramount. Human factors, organizational factors, supervision factors and environmental factors must be linked to the safety policy to instill process awareness and accountability in all personnel. With a mature SMS, it is expected that personnel have learned to consider special operations, such as the combination of hazards from an overcrowded airport, low visibility, and more aircraft on the movement area than the airport was designed to support. An SMS in 1977 would have included tools capable of affecting the outcome, since an accountable executive appointed by name takes pride in the role of being responsible for safety. In addition, between the airport operator, ATS and the two airlines, there would have been a tool to recognize that a combination of an overcrowded ramp and low visibility constitutes special operations, and that normal operations processes were therefore invalidated.

A regulatory requirement is that an SMS be adapted to the size, nature and complexity of the operations, activities, hazards and risks associated with the operations of the certificate holder. Since there was an abnormal number of heavy aircraft, and an abnormal number of aircraft overall that day, an SMS applicable to normal operations would have been scaled to much smaller operations. If an SMS had been in place on that day, the airport operator or ATS would have had a tool to recognize that their SMS was not designed for, or capable of, managing the increased traffic volume. Human factors would be affected by communication and observations, organizational factors by authority and decision-making processes, supervision factors by ATS overload compared to normal operations, and environmental factors by weather and airport design. An SMS designed for size and complexity is essential in a decision-making process to establish a limit at which the system becomes overloaded. Just as an electric cable is designed for a limited voltage, an SMS is designed for a limited load factor.
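The load-factor analogy can be sketched as a simple capacity check. This is a minimal, hypothetical illustration: the capacity limits and traffic figures below are invented for the example, not taken from the accident report or any regulation.

```python
# Hypothetical sketch: an SMS scaled to size and complexity can flag
# when observed traffic exceeds what the system was designed to manage,
# much like a cable rated for a limited voltage.

DESIGN_CAPACITY = {  # assumed normal-operations limits (illustrative only)
    "aircraft_on_movement_area": 8,
    "heavy_aircraft": 2,
}

def overload_triggers(observed):
    """Return the design limits exceeded by the observed traffic."""
    return [
        factor
        for factor, limit in DESIGN_CAPACITY.items()
        if observed.get(factor, 0) > limit
    ]

# An abnormal day: diverted traffic crowds the airport.
observed = {"aircraft_on_movement_area": 11, "heavy_aircraft": 5}
print(overload_triggers(observed))
# → ['aircraft_on_movement_area', 'heavy_aircraft']
```

The point of the sketch is that the limit is defined before the abnormal day arrives; the SMS then only has to compare observations against it to generate a trigger.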

A quality assurance program is required to be included in an SMS. A prerequisite for maintaining a quality assurance program is operational quality control. If a quality assurance program had been in place at that time, it would have included a daily quality control system where processes are linked to regulatory requirements and safety expectations. In a business, cash is counted daily, and the same principle applies to a safety management system. Process compliance with regulatory requirements, the safety policy and expected process outcomes must be accounted for daily to recognize drift, limitations and volume. An SMS that day would have included tools to capture the fact that runway capacity was overloaded.
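The daily-close analogy can be illustrated with a short sketch. The process names and compliance flags are invented for illustration; the idea is only that each day's processes are reconciled the way cash is counted at closing.

```python
# Hypothetical daily quality-control close: each operational process is
# reconciled against its requirement and expectation at the end of the
# day, so drift is recognized daily rather than after an occurrence.

def daily_close(process_log):
    """Partition the day's processes into compliant and drifting."""
    drifting = [p["name"] for p in process_log if not p["compliant"]]
    compliant = [p["name"] for p in process_log if p["compliant"]]
    return {"compliant": compliant, "drifting": drifting}

log = [
    {"name": "runway inspection", "compliant": True},
    {"name": "movement-area capacity check", "compliant": False},
]
result = daily_close(log)
print(result["drifting"])  # → ['movement-area capacity check']
```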

A safety management system is required to assign duties on the movement area, and on any other area set aside for the safe operation of aircraft, including obstacle limitation surfaces, only to personnel who have successfully completed a safety-related initial training course on human and organizational factors. Airside personnel that day would have been equipped with SMS tools to recognize the overload on human and organizational factors caused by the increased volume and aircraft size.

An SMS is required to include a policy for the internal reporting of hazards, incidents and accidents, including the conditions under which immunity from disciplinary action will be granted. If an SMS had been in place that day, the flight crew of any aircraft, not just the two involved in the accident, would have been equipped with a tool to recognize hazards and file hazard reports by telephone or fax. A report is an SMS tool to trigger a reaction to the overloaded airport operations that day.

The accountable executive is the person accountable on behalf of the certificate holder for meeting the requirements of the regulations and for compliance with their SMS policy. This is not a position in which the person is held accountable, or responsible, for past incidents, but one in which the person maintains oversight and communicates with workers and the regulator on issues and compliance. An SMS is required to include procedures for making progress reports to the accountable executive at intervals determined by the accountable executive, and other reports as needed in urgent cases. A report to the AE of low-visibility operations, traffic volume and aircraft size is an SMS tool to flag urgent issues, and when filed immediately it gives the AE a basis for action and for communication with their flight crews and the airport operator.

The quality assurance program required to be included in the SMS is a function of the SMS that establishes policies, processes, and procedures. These processes and procedures are then applied in an operational quality assurance program to perform specific required tasks. One of those tasks is to perform regular audits. For airports, audits are performed using checklists covering all activities controlled by the airport operations manual. An SMS on March 27th would have included a tool to recognize the excess volume and workload, and a trigger for the airport operator to review the activities controlled by their airport operations manual.

The person managing the SMS, which could be the position of an SMS Manager, Safety Officer, or Director of Safety, is required to determine the adequacy of the training required in their safety management system. This training includes indoctrination training, initial training, upgrade training and annual refresher training. Flight crew or airside personnel who had received this training would have had a tool to recognize a hazardous condition and report it via their SMS process.

The person managing the SMS is also required to monitor the concerns of the civil aviation industry in respect of safety and their perceived effect on the certificate holder, or SMS enterprise. Dispatchers for any of the airlines were monitoring diversions and weather conditions with the tools available at that time; with SMS training they would have been triggered to report this abnormal condition into their SMS, and someone would have been required to decide whether any actions were needed, and whether these combined hazards were incompatible with the safe operation of an airport or aircraft.

Within an SMS, the hazards of March 27th need to be analyzed without knowing, or considering, the outcome: analyzed as a future event based on the information available at the time. SMS is unable to establish whether an incident will occur in the future, and it is therefore impossible to determine whether an SMS would have prevented the March 27th accident. On January 13th, 2023, there was a similar incident at JFK airport, except there were no low clouds or fog. The tower could see an aircraft crossing directly in front of a departing aircraft, and the takeoff was aborted. On that day, both airlines involved were operating with an SMS, but the SMS did not prevent the incident, and an SMS by itself could not have prevented the Los Rodeos accident without applying the tools in the SMS toolbox.

However, on March 27th an SMS would have made available several triggers for flight crews, ATS and the airport operator to pause operations and assess their next step and the special cause variations that existed that day. A pause would, at a minimum, have generated a decision-making process for the airlines, ATS or the airport operator.

An SMS exposes the holes in the Swiss cheese. When the cheese is sliced, the holes within it become available to be assessed within the context of a system analysis and within observed special operating conditions. Without an SMS there were no triggers, and no person assigned to slice the Swiss cheese that day, and that is what was missing.

OffRoadPilots


Sunday, June 25, 2023

Safety Uphill Battle


By OffRoadPilots

Conventional wisdom is that safety means being protected from harm to a person or property, without knowing who the protector is, or how they are protecting a person or property from harm. Safety is a word with indefinite limits in protecting a person or property, and a word that encompasses all virtual events. When the word safety is used in communication, its meaning is unlimited, yet also restricted by imagination. A person looks forward to a safe flight, expecting to be protected by someone while airborne and to deplane without experiencing harm to their person or to the aircraft. Safety has become the responsibility of someone unknown rather than of the person who expects to be safe. Accepting risk is a way of life, and there is an inherent risk in flying, but when that risk is removed from the equation, safety in aviation becomes an uphill battle.

Safety in aviation, for airlines and airports alike, is not the absence of accidents or events, but the reliability of their processes and expected process outputs. A safety management system (SMS) is a businesslike approach to safety. What this entails is that an SMS includes a transaction system, an accounting system and a balance sheet with results. As a businesslike approach to safety, an SMS enterprise keeps up a daily quality control system and closes that system daily. In a business, cash flows in, expenses are paid, and the leftovers are for anything else. Cash is tangible, while safety is abstract, and that turns safety into an uphill battle.

When safety in aviation is turned into tangible cash, that is when a safety management system makes sense. It does not make sense to wait for a future accident that would have happened without an SMS, but now will not happen because of the SMS.

A safety management system is an excellent tool for airlines and airport operators, but they need to know how to use the tool and what there is to manage. Future accidents cannot be managed, past accidents cannot be managed, and abstract, or virtual, scenarios established in a risk analysis cannot be managed. Its name says management system, but we don't manage risks; we lead personnel, manage equipment and validate operational design for improved performance above the safety risk level bar. Simply said, it takes a leader as the accountable executive to operate with an SMS, it takes a leader who takes an active role in strategic planning, and most important, it takes a leader to accept bad news when performance takes unplanned turns. It is widely expected within the aviation industry, and communicated by the regulator, that safety management systems help companies identify safety risks before they become bigger problems, and that the aviation industry puts safety management systems in place as an extra layer of protection to help save lives. Assuming it is a fact that an SMS saves lives, a question to answer should be how this life-saving system saves lives, and what its proven track record in life saving is. A regulatory requirement is for an SMS enterprise to operate with a process for setting goals for the improvement of aviation safety and for measuring the attainment of those goals. Since SMS is published by the regulator as "an extra layer of protection to help save lives", the regulatory requirement for "measuring the attainment of those goals" should be measured in how many lives, and specifically what lives, were saved. An unspecified goal is not a goal but a wish, and a wish does not have any impact on operations. The moral of the story is that an SMS cannot be a system to save lives, since it does not include life-saving processes.
A first-aid process is a life-saving process, surgery is a life-saving process, digging a water well in the desert is a life-saving process, but operating with an SMS is not a life-saving process. The aviation industry has caught on to this misleading definition but is reluctant to oppose the regulators. When the safety card is played, it becomes an uphill battle to work within an SMS system. SMS, as a system in itself, is an exceptional system, and the more we learn about SMS, the more intelligent it becomes. However, it was presented and sold to the aviation industry as an excellent system, but when it was delivered, it came on the cover of a trash can.

Imagine for a minute that you are at the most beautiful restaurant together with your favorite person. It's a wonderful atmosphere, the place is spectacular, the staff are friendly, and everything is a million times better than expected. You are waiting for the meal to be served when you hear the rattling noise of falling trash cans. The next thing you know, your meal is served on the cover of a trash can, together with a note stating that you must consume this meal to avoid harm. You feel trapped and alone, with no place to go, and decide to accept the meal, but it is an uphill battle to consume. This is how SMS was presented. It is an excellent system, but it was presented on the cover of a trash can and had to be accepted for operators to remain in business.

If the SMS is a system to save lives, another question to answer is whether airports and airlines prior to SMS knowingly worked within systems that destroyed lives and property. When the safety card is used to promote a cause, airlines and airport operators recognize this as opinion, but they also know that it is not an appropriate response to disagree with safety, and they obey by default. Opinions are often used to spread ideas, information or rumor for the purpose of helping or injuring an institution, a cause, or a person. Working within a safety management system is an uphill battle for airlines and airports when they must conform to opinion messages and social media ratings.

Not long ago, an airliner was cleared for takeoff and had reached a speed of 100 KTS when another major airliner crossed the active runway a short distance in front of it. The tower cancelled the takeoff clearance and the departing aircraft aborted its takeoff. A collision was avoided and there were no physical injuries. Both airlines were operating with a safety management system, but that did not prevent the incident. The worst aviation accident is still a pre-SMS accident that happened on March 27, 1977. If SMS saves lives, the logic is that an SMS would have prevented this disaster. However, if the departing aircraft had continued its takeoff run after its very first power application for takeoff (which was aborted), the question to answer is whether a continued takeoff would have prevented the accident. SMS is not the system that saves lives. People are the system that saves lives.

SMS is an exceptionally well-designed system, and when used as intended it is a system where there is trust, learning, accountability, and information sharing. These are the four foundations for an SMS to function in a healthy SMS environment.

One of the most important, but also one of the most overlooked, requirements for an SMS enterprise is to monitor the concerns of the civil aviation industry in respect of safety and their perceived effect on the holder of the certificate. This requirement is also overlooked by the regulator, who does not inspect for compliance, or for how social media concerns affect operations.

There is a fine line to balance between obeying social media demands and assessing the facts before an action is initiated. It is a double-edged sword, since the aviation industry is dependent on high social media ratings but is also required to make changes, possibly unpopular ones. Target marketing toward perception is crucial to stay in business and fund the safety management system. When cashflow is reduced, the temptation is to eliminate safety measures, since safety is abstract and does not come with past tangible results. Safety results can only be assessed by process outputs and the number of times things go right. That an airline or airport operates without incidents cannot directly be attributed to its safety management system, since aviation was the safest mode of transportation, with very few major accidents, prior to SMS implementation. A dilemma in safety is selling safety to organizational management and the general public, since the perception is already that flying is safe. Social media solutions are quick to assign pilot error to accidents, but within an SMS enterprise there is no such thing as pilot error, or human error, when things go wrong; there are human factors considerations in process design.

Safety is an uphill battle to sell and accept when operations are already safe. When an airport operator focuses their SMS on the role of the accountable executive (AE), who is responsible on behalf of the certificate holder for compliance with the regulations, they are focusing on maintaining a solid foundation for the SMS. When airport operators divert their focus from the AE to airside operations, their processes become operational control compliance, or assumed compliance, or omission compliance when things go wrong, as opposed to oversight compliance explaining why things went right the first time. When focusing on airside operations itself and making changes, the inevitable trap is overcontrolling of processes.

Safety is an uphill battle for your SMS enterprise when the SMS portrays "life as it should be" and not "life as it is" during airline and airport service delivery operations. No matter what they tell you, there is an I in TEAM. Remove the uphill SMS battle by applying your SMS as the intended support tool that it is. Trust that your policies, processes, and procedures all come with built-in flaws. Trust that acceptable work practices have more value for safety in operations than written procedures, and finally, move away from the I in TEAM.

OffRoadPilots







Saturday, June 10, 2023

Decisions


By OffRoadPilots

Operational safety decisions in the aviation industry are based on internal or external pressure, social media ratings, customer opinions, or investor demands. Safety has become a fashionable word in the context of airline and airport operations. Conventional wisdom is that regulations are minimum safety standards, which justifies calls for more regulations to improve safety. Regulations are neutral and may not by themselves be compatible with the safe operation of an airport or aircraft. What makes regulations effective is how they allow for operational processes. It wasn't until accountability within a safety management system became a regulatory requirement in aviation that airlines and airports could assess their operational processes for conformance to regulatory requirements. The Air Commerce Act of 1926 is a prime example of how promoting new regulations supported efficient and safe air transportation by applying the latest technology to air navigation and airport improvements. This new regulation required licensing of pilots, aircraft airworthiness certification and a national air navigation system. The air navigation system used the latest technology for visual navigation and placed lighted towers on mountaintops to identify air routes. Some of these towers are still operational today. Making rules and regulations better improves the health of an organization.

Whatever method is applied, or whichever direction it is viewed from, every single decision made by authorities is made by one person only. Committee, or group, decisions are virtual realities, since at the end of the day it is the person with the best vocabulary who convinces the others and makes the decision. When a person wins a vote by improving and impressing others, it is done by their vocabulary and not by promises, and in the end it was only one person who made that decision. The same is true in aviation, for both airlines and airports: only one person makes decisions. This may be a person in authority, such as the CEO, president of a company, business owner, customer, or investor, or the accountable executive, while the responsibility for regulatory compliance still rests with the accountable executive.

There is a significant difference between a decision and a choice. A decision connects to the place of behavior, performance, and consequence, while a choice connects to the place of desired intention, value, and belief. Simplified, decisions are connected to causes, or expected outcomes, and choices are connected to reasons, or emotions. The difference between a justification and a reason is that a justification is objective and a reason is subjective. An SMS enterprise needs to operate with a system where decisions are made to instill behaviors through objective decisions and unbiased justifications.

Decisions are made to improve the health of an SMS enterprise. It is widely accepted that a safety management system is a businesslike approach to safety, and applying a businesslike approach has become the first step to a successful SMS. When applying a businesslike approach, an optimization approach to the decision-making process is also needed to improve its health. An optimized decision-making process within an SMS enterprise is a targeted approach to decisions. In marketing, a specific audience is targeted based on prior behaviors, perceived resources, purchase availability (e.g. physical or online), and demographics. Marketing optimization is all about reaching goals. It is the process of making adjustments to marketing efforts based on data collected, and making tune-ups using the marketing tools and tactics spelled out in the marketing strategy plan to align results with ambitions, or goals and objectives. In marketing, the consumer is targeted by a supplier. In aviation safety, for both airlines and airports, and within a safety management system, an optimized decision-making approach is a reversal of the marketing process.

In marketing, the objective is to move a product or service to a customer. In aviation safety, the objective is to move a customer, or personnel, to the product or service. The product is the process design, e.g. a cloud-based SMS as opposed to paper format, and the service is personnel accepting a cloud-based SMS. When marketing a new system, the task is not to enforce the new system, but to make sure it is user-friendly so that personnel accept it. A prerequisite to introducing a new system is to conduct a system analysis, test the system and communicate the reason for its purpose.

It is a myth that aviation safety operates with perfect safety systems, without flaws and without malfunctioning systems. When things go wrong, many fall into the hindsight-bias trap and place blame on the person who was in control of the last link. A root cause analysis is a tool to prepare for decisions. The 5-Why root cause analysis process is an accepted process within the aviation industry. When using the 5-Why method, the first answer to the Why question establishes the pathway to the root cause. Root cause analyses are generally associated with accidents but should also be applied to other special cause variations.

When deciding what the answer to the very first Why question is, the answer must be an action item without an explanation or reason. When an answer is given as an opinion, the path to the root cause deviates from a fact-finding path to an opinion path. The first question after an aircraft crash is often why the aircraft crashed. This question opens up answers in any direction, with the two primary directions being an opinion-based direction and a fact-finding direction. An answer that the aircraft crashed because the pilot did not follow procedure is an opinion-based answer. Scientific data does not conclusively establish that deviation from procedures is a prerequisite for accidents. Back in 1998, an aircraft crashed while the crew was completing their emergency procedure checklist. A statement of what did not happen is irrelevant to a solution, since tasks other than checklist tasks were also being conducted; time did not simply stop while the procedure was not done. When the first answer to the question of why the aircraft crashed is based on facts, an answer could be that it crashed because it touched down outside of the touchdown area.
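The distinction between a fact-finding answer and an opinion-based answer can be sketched as a 5-Why chain that rejects the latter. The opinion filter below is a crude, illustrative heuristic: it flags answers phrased as what did not happen, which the argument above says lead the analysis down an opinion path; the marker phrases are assumptions, not an established method.

```python
# Sketch of a 5-Why chain with a crude opinion filter. An answer stated
# as what did NOT happen ("the pilot did not follow procedure") is
# rejected, keeping the chain on a fact-finding path of observed events.

OPINION_MARKERS = ("did not", "failed to", "should have")

def five_whys(answers):
    """Walk up to five why-answers, rejecting opinion-based ones."""
    chain = []
    for answer in answers[:5]:
        if any(marker in answer.lower() for marker in OPINION_MARKERS):
            raise ValueError(f"Opinion-based answer rejected: {answer!r}")
        chain.append(answer)  # each answer states an observable event
    return chain

# Fact-finding path: each answer is an action item, not a judgement.
print(five_whys([
    "The aircraft touched down outside the touchdown area",
    "The approach was unstabilized below 500 ft",
]))
```

A real analysis would rely on trained judgement rather than keyword matching; the sketch only shows where in the process the filter belongs, at the very first answer.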

A hazard to a decision-making process is falling into the trap of assigning a solution, or reason, for an accident before all the facts are known. There is no rush to assign a solution or root cause to an accident. Assigning an incorrect root cause, such as pilot error, is a higher risk than waiting for facts to come in. After an accident, a risk analysis is conducted, which is different from a root cause analysis. The purpose of a risk analysis is for an operator, be it an airline or an airport, to justify a decision on their next action after an accident. The five basic actions are to communicate, to monitor, to pause operations (up to 48 hours), to suspend operations (beyond 48 hours), or to cease operations (until a new system is in place). There are times when it is justified to suspend or cease operations until a root cause has been determined. However, ceasing, suspending, or pausing operations after every accident until a root cause is established is without justification.
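The five basic actions form an escalation ladder, which can be sketched as a lookup keyed to an assessed severity. The durations follow the text; the numeric severity scale is an assumption made for the example, not part of any regulation.

```python
# Hypothetical mapping of the five basic post-occurrence actions to an
# assessed severity level (1 = lowest). The risk analysis justifies the
# least disruptive action; escalation is not the default after every event.

ACTIONS = [
    (1, "communicate"),
    (2, "monitor"),
    (3, "pause operations (up to 48 hours)"),
    (4, "suspend operations (beyond 48 hours)"),
    (5, "cease operations (until a new system is in place)"),
]

def next_action(severity):
    """Pick the least disruptive action justified by assessed severity."""
    for level, action in ACTIONS:
        if severity <= level:
            return action
    return ACTIONS[-1][1]  # anything beyond the scale: cease operations

print(next_action(2))  # → 'monitor'
```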

An operator with a conventional safety system, without a systematic approach to safety, operates with a safety decision-making process defined as common sense. Implying that safety is common sense relegates it to those areas that don't require much thought or close attention. When safety is treated as common sense, aviation safety becomes a mindless act. Common sense changes over time, and it is a learned behaviour. When common sense is applied, there are opposing views of how to improve aviation safety, which is an uphill struggle for the safety management system.

Common sense in one region might not be common sense in another region. A safety management system may function well in one region of the world, while facing opposing views in other regions. When exactly the same hazards are identified in different regions, decisions to improve safety may vary from one region to another when applying the accepted safety management system processes. SMS is a human behavior system, and acceptable human behaviors vary across regions. It is impossible to impose, or change, human behaviors to conform to a one-world acceptable human behavior. Decision-making processes must therefore be based on data collected to improve safety, and not on a common sense approach.

Humans are resilient, with the ability to bounce back or adapt. Safety decisions are often made to force adaptability where management finds it necessary to stay in business. Some of these changes involve automation and electronics. Aircraft systems are very different today from what they were just a few years ago. Pilots have become automation monitoring experts rather than pilots, operational managers, and aerodynamics experts.

Transferring aircraft operational control from pilots to automation does not improve safety in aviation, but moves human errors into the automation, as written in this 2013 post: "If automation replaces humans in critical stages of a process, the human-error factor is not eliminated, but transferred into an automation package." Cabin crew automation may not be as obvious as it is for pilots, but cabin crews have become social media influencers for their organization, tasked with maintaining the highest possible rating.

Decisions are necessary to maintain an acceptable level of safety in aviation. Policy, process, procedure, or acceptable-work-practice decisions do not necessarily improve safety. Decisions might be perceived as safety improvements, which is the purpose of operating a business, and this is not necessarily a hazard to the safe operation of aircraft or airports. Decision-making processes become hazardous to aviation when their outputs are accepted at face value and rubber-stamped as acceptable changes without the support of a system analysis. Operational risk analyses and decision-making reliability are judgement decisions to be made within a highly constrained time limit, but they are just as much part of a defined decision-making process as decisions made in the office.

OffRoadPilots



Saturday, May 27, 2023

SMS Most Wanted

SMS Most Wanted

By OffRoadPilots 

A safety management system includes a list of the ten most wanted fugitive hazards, and they are on the run. The most wanted hazards are identifiable hazards, but airports or airlines are unable to locate the whereabouts of their solutions. Hazards differ locally in the operational environment of airport or aircraft operations based on locations, destinations, or flight conditions, and require that an operating-environment-specific safety risk management system be applied.

Conventional wisdom is that a hazard is a condition that, when left unattended, becomes a risk that foreseeably could cause harm to personnel or contribute to an incident or accident. The person managing a safety management system (SMS) has an obligation to identify hazards and carry out risk management analyses of those hazards. When the whereabouts of hazards are unknown, there is no requirement to carry out risk management. Makes sense. The person managing the SMS is also responsible for implementing a reporting system to ensure the timely collection of information related to hazards, incidents, and accidents that may adversely affect safety. This responsibility does not include the collection of all hazards, but only hazards that may adversely affect safety. Whether a condition is an actual hazard to aviation safety is determined either by emotions or by data. When emotions are the determining factor, most activities relating to aviation are hazardous. When data is applied as the determining factor, only past occurrences are applied to hazard identification. Both these hazard identification systems come with one built-in flaw: they produce accepted hazards because someone reported them, or because of past results. What is missing is the identification, or the whereabouts, of the hazard itself.

That an aircraft did not slide off a runway when landing on a 100% ice-covered runway did not eliminate ice on the runway as a hazard because it went unreported, or because the aircraft arrived without an occurrence; rather, it became one of the most wanted hazards within the decision-making process used by an airport operator and aircraft operator. When an airport operator uses a safety data system to monitor and analyze trends in hazards, incidents, and accidents, the value of their trend analysis, or the return on their investment, is shaped by their decision-making process.

The ten SMS most wanted hazards are identified within an SMS enterprise's:

  1. Decision-making process;

  2. Hazard classification process;

  3. Risk level process;

  4. Root cause process;

  5. Differences identification process;

  6. Human factors process;

  7. Organizational factors process;

  8. Supervision factors process;

  9. Environmental factors process; and

  10. System analysis.

A decision-making process is a learned process, highly customized to specific tasks. A pilot may be responsible for the safety of a flight, but for large airlines the decision-making process rests with dispatch and management. A decision-making process to release an aircraft for departure is a learned process and must fall within approved parameters. Decision-making processes for airlines with operational dispatch may not necessarily be decision-making processes, but are performed based on internal compliance processes to conform to regulatory requirements.

An on-demand and smaller air operator, operating aircraft under 12,500 lbs, was using a similar method as their decision-making process. Since their routes were pre-established between the same airports and in the same sequence, they applied a standard time enroute and fuel consumption for each flight, and applied the same fuel weight for VFR and IFR conditions based on the most critical condition of flight. Without a regulated dispatch, this process was unacceptable. When a decision-making process becomes a product of compliance, as opposed to safety limits and parameters, one of the ten most wanted hazards is disguised within the process itself.

The hazard classification process is a process to establish safety critical areas and unacceptable behaviors while performing airside tasks at an airport or operating an aircraft. A safety critical area is an area of airport or airline operations which, for the purpose of safety or because of an immediate threat to aviation or personnel, should be fail-free. Conditions affecting safety critical areas and establishing unacceptable safety risk levels are unacceptable behaviors for continued operations. Hazard classifications are the safety critical area and the safety critical function. A safety critical function is the activity or task performed within the safety critical area. An aircraft taking off from a paved runway is in a safety critical area. As the aircraft rotates and transitions into a 3D environment, it performs a safety critical function of that area. Rotation becomes the function to focus on for both airlines and airport operators. Since both airlines and airports operate with declared distances, the point of rotation becomes the critical point of action for airlines, and the clearway the critical point for airport operators.

The purpose of a differences identification process is to identify hazards locally. A one-size-fits-all process does not support a safety management system. An airline may depart one airport within one set of hazard parameters, while those parameters may be invalid at their next departure point. Airport operators may assess a risk differently for each runway end with the same hazard classification. The most wanted hazard within differences is operational assumptions.

The risk level process is to analyse the probability of occurrence (the likelihood that a defined hazard will affect the outcome), severity (the harm caused by the occurrence), and exposure to an identified hazard (the level of exposure while performing a task). An aircraft is exposed to the same hazards through the entire flight, but one hazard may be more severe during a defined phase of flight. An engine failure may cause a more severe outcome if it happens on takeoff than if it happens in cruise flight. The most wanted hazard is hidden in the justification of the likelihood that a hazard will affect an operational task. When likelihood is the perfect-number probability, but so complex to calculate that it is unrealistic to use, the hazard lies within the likelihood itself. Calculating the likelihood of an occurrence with a statistical probability of 10^-7 to 10^-9 includes an analysis of indefinite factors within the affected systems with a probability to activate a hazard. Such an analysis would include the probability that an engine attachment bolt would shear off during takeoff due to an incorrect installation process. Without justification documented by mathematical calculations, the likelihood selection is invalid and a hazard in itself. It has been said that an aircraft is exposed to an engine failure at every takeoff. The hazard of an engine failure exists, but until the engine fails the flight crew is not exposed to an engine failure. The Titanic was not exposed to an iceberg until the iceberg approached its path. Airport operators are also affected by the hidden hazards within their risk level process. The most wanted hidden hazard in the airport operator's risk level process is to apply the number of times things went wrong in their calculation, as opposed to the reasons why things went right. When it is known why things go right, then drift and changes are based on a platform that can be analysed.
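The three factors above can be combined into a simple index. The Python sketch below is illustrative only: the ordinal scales, the multiplicative combination, and the exposure fraction are assumptions chosen for demonstration, not values from any regulatory risk matrix.

```python
# Illustrative risk-level sketch. The scales and the multiplicative
# combination are assumptions for demonstration, not a regulatory standard.
LIKELIHOOD = {"extremely improbable": 1, "improbable": 2, "remote": 3,
              "occasional": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "major": 3,
            "hazardous": 4, "catastrophic": 5}

def risk_index(likelihood: str, severity: str, exposure: float) -> float:
    """Combine likelihood, severity, and exposure into one index.

    exposure is the fraction of the task during which the identified
    hazard can affect the outcome (0.0 to 1.0).
    """
    if not 0.0 <= exposure <= 1.0:
        raise ValueError("exposure must be between 0.0 and 1.0")
    return LIKELIHOOD[likelihood] * SEVERITY[severity] * exposure

# Engine failure on takeoff: remote likelihood, catastrophic severity,
# full exposure during the takeoff roll and rotation.
index = risk_index("remote", "catastrophic", 1.0)  # 15.0
```

The point of writing the justification down, even in a form this simple, is that the likelihood selection is documented rather than left to emotion.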

The purpose of a root cause process is to establish an area, or factor, within operations to target corrective actions. Targeted corrective action plans are more successful in generating expected changes than randomly applied corrections to randomly selected areas. A root cause is allocated to human factors, organizational factors, supervision factors, or environmental factors. 

The first step in a root cause analysis is to determine if it is within the scope, control, and authority of the SMS enterprise. The litmus test is whether the Accountable Executive can freely apply human and financial resources to implement a corrective action plan. An AE at an airport has the authority to apply human and financial resources to airside operations, but does not have this same authority over a construction contractor doing work at the airport. An airline may use towing vendors to move their aircraft, but it is not within the airline's scope and control to determine the root cause within the towing contractor's operations system. Two commonly used root cause analysis processes are the 5-WHY process and the fishbone process. The 5-WHY process is most effective if analyzed within a 5x5 matrix.


When there is only one path to the answer in the 5-WHY process, the first question determines the root cause outcome whether the WHY is asked five times or 100 times. Within a 5x5 matrix there are five first questions asked, and each question is different. When applying the fishbone process, there are unlimited brainstorming opportunities. When a root cause is applied outside scope and control, only limited reasonable questions are answered, and unless the opportunities for a hazard to be activated are exhausted, the most wanted hazard in a root cause analysis is on the run within overcontrolled processes.
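The 5x5 variant described above, five different first questions each followed by its own chain of WHYs, can be sketched as a small data structure. This is a hypothetical illustration of the shape of the matrix; the `ask_why` callback and the example questions stand in for the analyst's work.

```python
def five_why_matrix(first_questions, ask_why):
    """Build a 5x5 5-WHY matrix: five different first questions, each
    followed by a chain of five WHY answers.

    ask_why(statement) returns the answer to "why did this happen?";
    the deepest answer in each row is a candidate root cause.
    """
    matrix = []
    for question in first_questions[:5]:
        row, current = [], question
        for _ in range(5):
            current = ask_why(current)
            row.append(current)
        matrix.append(row)
    return matrix

# With one path per question, the first question fixes the outcome no
# matter how many WHYs are asked; five different first questions give
# five independent paths to compare. (Hypothetical inputs.)
demo = five_why_matrix(
    ["runway excursion", "late go-around", "unstable approach",
     "tailwind landing", "late runway condition report"],
    lambda statement: "why(" + statement + ")",
)
```

Comparing the five deepest answers, rather than accepting the single chain a lone first question produces, is what keeps the matrix from being overcontrolled by the opening assumption.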

The most wanted hazards within Human Factors, Organizational Factors, Supervision Factors, and Environmental Factors are found within the answers to the WHAT-WHEN-WHERE-WHY-WHO (position) and HOW questions.

HUMAN FACTORS are human reactions triggered by eyesight, hearing, taste, touch, or smell. They are human behavior, personal attitude with respect to a situation, person, or thing, values, beliefs, or a just culture environment. Human factors are character and emotions, and other factors affecting the decision-making process and its output.

ORGANIZATIONAL FACTORS are the organizational environment a person works within, as it relates to the interactions defined in the SHELL model.

SUPERVISION FACTORS are direct supervision, remote supervision, or self-supervision. General types of supervision and leaders are structural, participative, servant-leader, freedom-thinking, and transformational.

ENVIRONMENTAL FACTORS are operational environment, topographical environment, climate environment, geo-environment, level of just-culture environment, or workstation environment.

WHAT
HUMAN FACTORS - Human behavior, performance, and reaction to an event.
ORGANIZATIONAL FACTORS - A framework to outline authority, accountability, roles, responsibilities, and communication processes.
SUPERVISION FACTORS - Function of leading, coordinating, and directing the work of others to accomplish the objective.
ENVIRONMENTAL FACTORS - Design and performance environment of design applicability for job performance and encouraging engagement or disengagement in task-result oriented activities.

WHEN
HUMAN FACTORS - Aviation safety process and decision making.
ORGANIZATIONAL FACTORS - Design of process and application of process in the operational environment.
SUPERVISION FACTORS - Daily, within the regular working hours of personnel, with result-oriented applications.
ENVIRONMENTAL FACTORS - Daily, within working hours in Operations, Maintenance, Flight Following or as assigned location.

WHERE
HUMAN FACTORS - Operations and within operational management personnel.
ORGANIZATIONAL FACTORS - Management policies and operational processes.
SUPERVISION FACTORS - Organizational management within an organizational hierarchy.
ENVIRONMENTAL FACTORS - Operations, Maintenance, Flight Following or as assigned.

WHY
HUMAN FACTORS - Human factors knowledge is used to optimize the fit between people and the system in which they work, to improve safety and performance.
ORGANIZATIONAL FACTORS - Establish an organizational culture for operational processes and expectations for the level of safety in operations.
SUPERVISION FACTORS - Establishing authority, accountability, roles, and decision authority within the operational processes.
ENVIRONMENTAL FACTORS - Establishing and maintaining an environment where personnel have access to design tools and encouragement of performance engagement.

WHO [position]
HUMAN FACTORS - Anyone with operational or SMS roles and responsibilities in operations, maintenance or flight following or other personnel when designing operational processes.
ORGANIZATIONAL FACTORS - Established, maintained, communicated, and assessed by all Directors and managers reporting to the Safety Management System on behalf of the Accountable Executive.
SUPERVISION FACTORS - The Accountable Executive is responsible for operations and activities on behalf of the certificate holder. All Directors and managers reporting to the Safety Management System are responsible for activities on behalf of the Accountable Executive.
ENVIRONMENTAL FACTORS - Applicable to all personnel, where the Accountable Executive leads with a Safety Policy and with objectives and goals for safe operation.

HOW
HUMAN FACTORS - Application of processes and tasks for both reactive management and proactive management.
ORGANIZATIONAL FACTORS - The delivery of structured processes within the organization.
SUPERVISION FACTORS - Processes within the basic types of supervision. General types of supervision and leaders are: Structural, Participative, Servant-Leader, Freedom-Thinking, and Transformational Leader.

ENVIRONMENTAL FACTORS - Safety operational systems designed for timely delivery within the SHELL model, designed to achieve user friendliness, and for personnel to stay informed during process application.

A system analysis is a comprehensive analysis of systems, their subsystems, departments and divisions, and on-demand processes. System analysis processes are processes to identify hazards within the context of the system analysis. A system analysis is applied when considering implementation of new systems, revision of existing systems, design and development of operational procedures, identification of hazards, ineffective risk controls found through the safety assurance processes, or change management. In addition to a system analysis of the entire safety management system, a system analysis includes operations or activities authorized under the certificate, and analysis of vendors who are performing tasks affecting how the aviation industry perceives the certificate holder's and accountable executive's performance. A system analysis is applicable to vendors and third-party contractors, limited to their tasks of operations. In the event of an incident, a vendor or third-party contractor may conduct their internal root cause analysis and submit it to the airline or airport operator. The inclusion of a system analysis of a vendor's or third-party contractor's operational process does not affect the scope, control, and authority of an airline or airport root cause analysis.

The most wanted hazard within a system analysis is hazards beyond the scope, control, and authority of a certificate holder and their accountable executive.

OffRoadPilots




Saturday, May 13, 2023

Elevated Runway Edge Lights By Inversion

 Elevated Runway Edge Lights By Inversion

By OffRoadPilots

When operating in the arctic, subarctic, mountainous areas, or sparsely settled areas, airlines and airports need a safety management system (SMS) that includes optical illusion by inversion, optical illusion by sun angle, and the optical illusion known as the black-hole effect. An optical illusion is real and the same as a mirage. A mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted from the false image. A mirage occurs when there is a temperature inversion. An inversion is when air at higher altitudes is warmer than the air below. When the air below the line of sight is colder than the air above it, light rays passing through the temperature inversion are bent down, and so the image appears above the true object. Mirages tend to be stable, as cold air has no tendency to move up and warm air has no tendency to move down. Mirages make objects below the horizon, or outside of a normal line of sight, visible at the horizon. A sun angle optical illusion is when the color of rocks in a mountain range, combined with the sun angle, makes a large mountain range impossible to see.

The black hole illusion is a nighttime illusion that occurs when only the runway is visible to pilots, without surrounding ground lights. With this illusion there is a tendency, or a trap, for pilots to estimate an incorrect required descent angle, causing the approach to be lower than required for the runway. Another illusion caused by black hole conditions on dark nights with no moon or starlight, or without a visible horizon, triggers pilots to believe that they are on the approach slope since they have a steady view of the runway in their windshield, causing them to fly a longer and shallower approach than needed to clear obstacles. Unless a pilot has up-to-date knowledge and is intimately familiar with the airfield, thorough pre-approach study and preparation is required to mitigate the black hole hazard. Today, there are online tools and maps available for pilots to become familiar with approaches and departures at most aerodromes and certified airports.



The same black hole illusion occurs during takeoff, when the acceleration g-force is applied to the pilot and the climbout angle appears steeper than normal. On a dark night, without moonlight or starlight, and without a view of the horizon due to the black hole illusion, the tendency is to reduce aircraft pitch, and the departure angle may be lower than required to clear obstacles, or could even be a negative angle. A contributing factor to a King Air accident in 2007, after a missed approach, was the illusion of a climb while the aircraft was descending.

It is a regulatory requirement for an airport operator to identify in their airport emergency plan potential emergencies within a critical rescue and fire-fighting access area that extends 1000 m beyond the ends of a runway and 150 m at 90° outwards from the centreline of the runway, including any part of that area outside the airport boundaries. It is also a regulatory requirement for an airport operator to identify emergencies that can reasonably be expected to occur at the airport or in its vicinity and that could be a threat to the safety of persons or to the operation of the airport. Optical illusions are real and are therefore reasonably expected to occur on arrivals and departures. The question to answer is how far away from the airport, beyond the 1000 m distance and the 150 m from centerline mark, an airport operator assesses it to be reasonable to initiate an emergency response. In 2017 an aircraft crashed and came to rest beyond a point 150 m from the extended centerline. Since the airport was operating with a safety management system, it was reasonable to expect that they would initiate their emergency response plan at that time.
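The rescue and fire-fighting area described above is simple enough to check programmatically. The sketch below assumes a runway-aligned coordinate frame (distance along the centreline from one threshold, plus a perpendicular offset), which is an assumption made for illustration; real assessments are done on surveyed coordinates, and the example point is hypothetical.

```python
def inside_crfaa(along_m: float, offset_m: float,
                 runway_length_m: float) -> bool:
    """Check whether a point lies within the critical rescue and
    fire-fighting access area: 1000 m beyond each runway end and
    150 m either side of the centreline.

    along_m:  distance along the centreline from one threshold
              (negative values lie before that threshold)
    offset_m: perpendicular distance from the centreline
    """
    within_length = -1000.0 <= along_m <= runway_length_m + 1000.0
    within_width = abs(offset_m) <= 150.0
    return within_length and within_width

# A hypothetical point 300 m before the threshold but 200 m off the
# centreline falls outside the area, even though it is close by:
outside_area = inside_crfaa(along_m=-300.0, offset_m=200.0,
                            runway_length_m=1500.0)  # False
```

A point can be well within 1000 m of the runway end and still sit outside the area laterally, which is exactly the gap the emergency plan has to address.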

It is also reasonable to expect that airports identify their outer identification surface as the outer limit of their primary responsibility, with a responsibility to assist upon request beyond that distance. In 2011 an airplane crashed about 3000 m from an airport and 280 m from the extended centerline, and the airport responded to the accident. In another accident in 2011, an aircraft crashed 1500 m from the centerline and the airport activated their response. A safety management system must be tailored specifically to each airport, and each airport emergency plan's definition of the distance in its vicinity will vary. Since the regulations are not broad enough to cover every detail of airline or airport operations, their SMS must include a practical application of their plan to address hazards and operational tasks. A rule of thumb for an effective SMS is that when the regulations do not require something, that becomes the very reason why it is incumbent on airlines or airports to do it.

There are several non-certified aerodromes and remote airports operating without vertical or lateral guidance to their runways. At night, a lighted object, e.g. a tower, may appear to be just a few miles ahead of an aircraft in cruise flight, while the actual distance could be 100 miles. When pilots rely on visual cues as their vertical and lateral guidance, there are times when their aircraft has drifted away from the extended centerline or is low or high on approach. In 1993 the crew of a twin engine aircraft approaching an airport at night had the runway in sight at 1200 feet, with flight visibility near minima. On final approach the crew descended below a virtual glidepath and the aircraft crashed in hilly and snowy terrain located 5 km short of runway 26. Other examples are major carriers flying low on approach to international airports or lined up on the taxiway for landing. Optical illusions can happen at any airport, but there is a higher probability that an aircraft will be low, high, or drifted away from the centerline on approach to airports without vertical and lateral guidance systems.

A guidance approach system installed at several airports is the Precision Approach Path Indicator (PAPI), which is a vertical guidance system for aircraft on final approach. Flying the glidepath of a PAPI keeps aircraft within the obstacle protected surface, as long as the airport operator is applying their safety management system processes to monitor for unknown, or new, obstacles. An optical illusion created by a PAPI system occurs when there is frost on the PAPI lenses and the lights are deflected.

A rule of thumb when flying approaches without a PAPI installed is to be 1000 feet above the runway at 3 NM and to maintain the runway edge lights in a fixed view. The illusion without guidance is that an aircraft is too high when the actual altitude may be below the safe approach angle. In Canada, airport standards are only applicable to airports serving scheduled service for the transport of passengers. An aerodrome serving large airlines, with hundreds of passengers onboard, is not required to comply with the standards of the Canadian Aviation Regulations. This is a flaw in the regulatory system, when the method by which tickets are purchased determines the monitoring of safety at destination or departure airports. If the same principle were applied to highway travel, speed limits would only be applicable to national bus carriers with paying passengers.
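The 1000 feet at 3 NM rule of thumb can be checked with basic trigonometry; it works out to approximately the standard 3° approach path. A minimal sketch:

```python
import math

FT_PER_NM = 6076.12  # feet per nautical mile

def approach_angle_deg(height_ft: float, distance_nm: float) -> float:
    """Approach path angle implied by a given height above the runway
    at a given distance from the threshold."""
    return math.degrees(math.atan2(height_ft, distance_nm * FT_PER_NM))

# 1000 ft at 3 NM works out to roughly a 3-degree approach path:
angle = approach_angle_deg(1000.0, 3.0)  # about 3.1 degrees
```

Holding that geometry mentally, roughly 300 feet of height per nautical mile, is what the rule of thumb replaces when no PAPI is available.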

A requirement for a certified airport to install a PAPI is that they conduct a risk assessment within their SMS to establish the need for one. One airport determined by their risk assessment that a PAPI was not required, since there were no data supporting low, high, or off-centre approaches to their airport. When such data is not collected, risk analyses become simple, but do not paint a true picture of operations. The absence of incidents is not an indication of a healthy safety management system, or of a healthy operational environment. Most times things go right because human factors come with built-in resilience, or the ability to correct errors and bounce back after an occurrence. An occurrence is not just that an aircraft crashes, but also when an approach is flown below the slope of a standard approach path. When occurrences go unreported, it is simple to fill in the SMS compliance checkboxes, but optical illusions are occurrences to be reported.

On a dark October night an aircraft was on approach to an airport in the Arctic. That night there was a temperature inversion causing an illusion that the runway edge lights were raised well above ground level. When runway lights are elevated, they can be seen from a farther distance and appear to be closer. This night the runway lights were raised by optical illusion to a height where they could be seen above a mountain range that normally would obscure the lights at this distance. Since the lights were visible, the position of the aircraft was determined to be inside the mountain range and safe from obstructions. However, within a few minutes the airplane crashed, since the viewed runway edge lights were an illusion, and the aircraft was still on the backside of the mountain.

In addition to natural optical illusions, there is a man-made optical illusion: at night, when an aircraft is parked on the runway facing in the same direction as an approaching aircraft, the parked aircraft becomes invisible.

Optical illusions are real. Only by knowing of their existence, learning about the nature of these phenomena, and verifying position by aircraft instruments can it be determined that they are illusions. When flying on visual cues, illusions are real: aircraft may be invisible, and runway edge lights may be elevated several feet above their actual ground level location.

OffRoadPilots





Saturday, April 29, 2023

Performance Is Exceptional

Performance Is Exceptional

By OffRoadPilots 

Since most tasks go right most of the time, everyday performance goes unnoticed and is viewed as unexceptional. Airports and airlines fall into the trap of accepting repetitious tasks as trivial, without considering their successful outcomes, and of viewing these tasks as less important than complex special task assignments. Conventional wisdom is that organizational drift in safety is a drift away from safe operational processes toward unsafe ones. However, drift is neutral and does not by itself improve or reduce safety in operations. Since drift is neutral, safety improvements or safety reductions are neutral. A concept of an effective safety management system (SMS) is to implement changes for incremental, or continuous, safety improvements. Continuous safety improvement is a statement that appeals to emotions. Such statements are often used in sales and marketing, describing a new product or service as new and improved, which implies that the prior product or service was old and inferior. Safety today does not become old and inferior tomorrow, but is fluid and adaptable to external changes.

Continuous safety improvements address the practical drift and the practical compliance gap. A practical drift is the difference between work as imagined and work as actually performed. Work as imagined is documented by organizational policies, processes, and procedures. The practical compliance gap is the difference between regulatory compliance in a static environment, where nothing moves, and regulatory non-compliance within a moving environment. When work as imagined becomes the compliance standard for safety in operations, it is with an assumption that the systems, processes, or procedures are perfect and fail-free. The practical drift is a common cause variation within the system itself. An airline or airport operator must identify these variations for their SMS to conform to regulatory requirements. Regulatory requirements for the person managing the safety management system are to monitor trends, monitor corrective actions, and monitor concerns of the civil aviation industry. Monitoring these tasks is to monitor the outcome, which is different from monitoring for compliance with work as imagined.

Continuous, or incremental, safety improvements are needed for operations to maintain oversight due to external and common cause variations in processes. A change that is made for the purpose of safety may or may not be an additional benefit to safety in operations. When the safety card is played, i.e. a change is implemented for safety reasons alone, it is more hazardous to aviation safety than continuing operations without changing anything.

Takeoff performance charts for gravel runways are different from performance charts for paved runways. Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting the motion when a body rolls on a surface. An aircraft rolling on a graveltop surface experiences higher resistance than an aircraft rolling on a blacktop surface. A new type of graveltop runway is the thin bituminous surface runway classification. Gravel runways have successfully been used for decades in places where runway pavement or concrete is expensive to build. When operating on gravel runways there are loadbearing restrictions, and there are aircraft performance restrictions. These restrictions are often viewed as a burden and a restriction to operations, rather than as an additional layer of safety. Gravel runways are located far away from emergency services should an incident occur, and there are airports where public emergency services do not exist. Operating with gravel runway restrictions is to adjust operations to geolocations, since hazards are identified locally. Aircraft weight restriction due to gravel operations is viewed as a trivial restriction affecting business revenue, rather than as the exceptional performance task it is. Classification as a thin bituminous surface runway is a change in classification only, from a gravel runway to a paved runway. With this change, the regulator allows aircraft maximum gross weight to be applied to takeoff and landing performance. Simplified, there are no changes or construction made to the gravel runway, so the runway remains the same as it was prior to reclassification. After reclassification, an airline is considered “safe” for takeoff with the stroke of a pen only. The classification criteria themselves acknowledge this flaw in performance requirements.
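The paper-only nature of the reclassification can be seen in a toy calculation. The 15% gravel correction below is a hypothetical number chosen for illustration; actual surface corrections come from the manufacturer's performance charts for each aircraft type.

```python
# Hypothetical surface correction for illustration only; real factors
# come from the aircraft flight manual performance charts.
GRAVEL_ROLL_FACTOR = 1.15

def takeoff_distance_ft(paved_distance_ft: float, surface: str) -> float:
    """Apply a surface correction to a paved-runway takeoff distance."""
    if surface == "gravel":
        return paved_distance_ft * GRAVEL_ROLL_FACTOR
    return paved_distance_ft

# Reclassifying a gravel runway as a thin bituminous surface runway
# changes only the label passed in here; the rolling resistance of the
# actual surface is unchanged:
as_gravel = takeoff_distance_ft(3000.0, "gravel")  # about 3450 ft
as_paved = takeoff_distance_ft(3000.0, "paved")    # 3000.0 ft
```

The stroke of a pen changes the `surface` argument, not the surface, which is the flaw the paragraph above describes.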

Thin bituminous surface runways are a broad class of surface treatments with a variety of performance characteristics. Newly built thin bituminous surface runways require sufficient curing time to provide a competent and durable operational surface. A class 3 pavement may be considered to meet the definition of a thin bituminous surface runway. That a runway “may be considered” is in itself an acknowledgement that there is no data available in support of compliance with all the requirements. In other words, since the runway only “may” qualify, there are no gravel runways that actually meet the most stringent standard requirement for reclassification as defined. In addition, performance data for the actual ground roll is not required to assess the validity of the reclassification.

Airport operators designate their most critical aircraft by aircraft group number, which is a numeric value of the characteristics of the critical aircraft the aerodrome is supporting. The aircraft group number is determined by the critical aircraft's wingspan or tail height. An aircraft group number is based on a paved runway surface, with a limited maximum gross weight capability. An airport operator may select an aircraft group number based on aircraft wingspan and tail height, but the standard lacks a method to verify how an airport operator selects an aircraft size based on runway surface performance, or aircraft landing and takeoff performance. The root cause hazard with reclassification to thin bituminous surface runways is the opportunity for regional compliance, where airport operators who wish to maintain gravel operations reclassify their runway so that airlines can operate out of their airport. The one airport operator who remains a gravel runway operator has the highest risk of losing business due to takeoff and landing restrictions. In addition, they are unable to provide friction characteristics of a runway surface for a runway serving turbojet aircraft, and there are still many unanswered questions. On the other side, the person who was the driving force behind reclassification to thin bituminous surface runways received a recognized award for the work. When exceptional performance of current processes remains unrecognized, and risk levels are established by emotions, checkbox syndrome, or social media likes, any change to processes becomes its own worst enemy.

Work well done often goes unrecognized as important; because everything is operating normally, the processes behind that performance are ignored and discarded as everyday routine tasks. Drift is usually understood as drift into unsafe conditions, but unrecognized drift into improved safety in operations is just as much a hazard to aviation safety as drifting into hazardous operating conditions. Hazards are predictable, while incidents and accidents are unpredictable.

For an incident to occur, three conditions must meet at the fork in the road. The first condition is that an aircraft, vehicle, or person is performing a task beyond the limits of its capabilities, e.g., an aircraft requiring 3,000 feet of takeoff distance is departing from a 2,000-foot runway. The second condition is reliance on past practices without recognizing special cause variations, e.g., an aircraft normally departs empty and flies to a longer runway where passengers and freight are loaded, but the effect of a partially loaded aircraft goes unrecognized. The third condition is operational drift to complete a task within a defined timeframe, e.g., daily departure performance is exceptional, but because it occurs daily it is not recognized as exceptional, and drift occurs to recover lost time for on-time task completion. Capability limits may be skewed by established operational requirements, just as in the thin bituminous surface runway scenario, where the change was justified as a safety improvement while the root cause of the change was to move operational limitations. Past practices may be skewed by an induced level of urgency to complete the task, and drift toward improvement goes unrecognized when emotions or external forces are applied to decisions.
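The three fork-in-the-road conditions above can be expressed as a simple checklist. The sketch below is an illustration of the idea only; the field names and the example numbers (the 3,000-foot takeoff requirement against a 2,000-foot runway) are taken from the text or invented for demonstration, not drawn from any operational tool.

```python
# Minimal sketch of the three fork-in-the-road conditions described above.
# All names and figures are illustrative assumptions, not a real checklist.
from dataclasses import dataclass
from typing import List

@dataclass
class DepartureCheck:
    required_takeoff_ft: float   # condition 1: task vs capability limits
    runway_length_ft: float
    past_practice_load: str      # condition 2: past practice vs today
    actual_load: str
    schedule_pressure: bool      # condition 3: drift to recover lost time

    def conditions_met(self) -> List[str]:
        """Return which of the three conditions are present."""
        met = []
        if self.required_takeoff_ft > self.runway_length_ft:
            met.append("task beyond capability limits")
        if self.actual_load != self.past_practice_load:
            met.append("special cause variation vs past practice")
        if self.schedule_pressure:
            met.append("operational drift under time pressure")
        return met

# The scenario from the text: 3,000 ft required, 2,000 ft available,
# a partial load where past practice was an empty departure, and
# pressure to recover lost time. All three conditions align.
check = DepartureCheck(3000, 2000, "empty", "partial", True)
```

Listing the conditions separately makes the point of the paragraph concrete: any one condition alone is a hazard that can be managed, but an incident becomes likely when all three meet at the same fork in the road.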

When performance is exceptional, such as operating out of a gravel runway with performance restrictions, drift into improved runway surface conditions is just as much a hazard as drift into unacceptable practices.

OffRoadPilots




