Saturday, May 27, 2023

SMS Most Wanted


By OffRoadPilots 

A safety management system includes a list of the ten most wanted fugitive hazards, and they are on the run. The most wanted hazards are identifiable hazards, but airports and airlines are unable to locate the whereabouts of their solutions. Hazards differ locally within the operational environment of an airport or aircraft operation based on locations, destinations, or flight conditions, and they require a safety risk management system specific to that operating environment.

Conventional wisdom is that a hazard is a condition that, when left unattended, becomes a risk that foreseeably could cause harm to personnel or contribute to an incident or accident. The person managing a safety management system (SMS) has an obligation to identify hazards and carry out risk management analyses of those hazards. When the whereabouts of hazards are unknown, there is no requirement to carry out risk management. That makes sense. The person managing the SMS is also responsible for implementing a reporting system to ensure the timely collection of information related to hazards, incidents, and accidents that may adversely affect safety. This responsibility does not include collection of all hazards, but only hazards that may adversely affect safety. Whether a condition is an actual hazard to aviation safety is determined either by emotions or by data. When emotions are the determining factor, most activities relating to aviation are hazardous. When data is the determining factor, only past occurrences are applied to hazard identification. Both hazard identification systems come with one built-in flaw: hazards are accepted because someone reported them, or because of past results. What is missing is the identification, or the whereabouts, of the hazard itself.

That an aircraft did not slide off a runway when landing on a 100% ice-covered runway did not eliminate ice on the runway as a hazard because it went unreported or the aircraft arrived without an occurrence; instead, it became one of the most wanted hazards within the decision-making process used by an airport operator and aircraft operator. When an airport operator uses a safety data system to monitor and analyze trends in hazards, incidents, and accidents, the value of their trend analysis, or the return on their investment, is shaped by their decision-making process.

The ten SMS most wanted hazards are identified within an SMS enterprise's:

  1. Decision-making process;

  2. Hazard classification process;

  3. Risk level process;

  4. Root cause process;

  5. Differences identification process;

  6. Human factors process;

  7. Organizational factors process;

  8. Supervision factors process;

  9. Environmental factors process; and

10. System analysis.

A decision-making process is a learned process, highly customized to specific tasks. A pilot may be responsible for the safety of a flight, but for large airlines the decision-making process rests with dispatch and management. A decision-making process to release an aircraft for departure is a learned process and must fall within approved parameters. For airlines with operational dispatch, decision-making processes may not be decision-making processes at all, but are performed as internal compliance processes to conform to regulatory requirements.

An on-demand and smaller air operator, operating aircraft under 12,500 lbs, used a similar method as their decision-making process. Since their routes were pre-established between the same airports and in the same sequence, they applied a standard time en route and fuel consumption for each flight, and applied the same fuel weight for VFR and IFR conditions based on the most critical condition of flight. Without a regulated dispatch, this process was unacceptable. When a decision-making process becomes a product of compliance, as opposed to safety limits and parameters, one of the ten most wanted hazards is disguised within the process itself.

The hazard classification process is a process to establish safety critical areas and unacceptable behaviors while performing airside tasks at an airport or operating an aircraft. A safety critical area is an area of airport or airline operations which, for the purpose of safety or immediate threat to aviation or personnel, should be fail-free. Conditions affecting safety critical areas and establishing unacceptable safety risk levels are unacceptable behaviors for continued operations. Hazard classification comprises the safety critical area and the safety critical function. A safety critical function is the activity or task performed within the safety critical area. An aircraft taking off from a paved runway is operating within a safety critical area. As the aircraft rotates and transitions into a 3D environment, rotation is a safety critical function of that area. Rotation becomes the function to focus on for both airlines and airport operators. Since both airlines and airports operate with declared distances, the point of rotation becomes the critical point of action for airlines, and the clearway the critical point for airport operators.

The purpose of a differences identification process is to identify hazards locally. A one-size-fits-all process does not support a safety management system. An airline may depart one airport within one set of hazard parameters, while those parameters may be invalid at its next departure point. Airport operators may assess a risk differently for each runway end with the same hazard classification. The most wanted hazard within differences is operational assumptions.

The risk level process is to analyse the probability of occurrence (the likelihood that a defined hazard will affect the outcome), the severity (of the harm caused by the occurrence), and the exposure to an identified hazard (the level of exposure while performing a task). An aircraft is exposed to the same hazards through the entire flight, but one hazard may be more severe during a defined phase of flight. An engine failure may cause a more severe outcome if it happens on takeoff than if it happens in cruise flight. The most wanted hazard is hidden in the justification of the likelihood that a hazard will affect an operational task. When likelihood is the perfect-number probability, but so complex to calculate that it is unrealistic to use, the hazard lies within the likelihood itself. Calculating the likelihood of an occurrence with a statistical probability of 10⁻⁷ to 10⁻⁹ includes an analysis of indefinite factors within the affected systems with a probability of activating a hazard. Such an analysis would include the probability that an engine attachment bolt would shear off during takeoff due to an incorrect installation process. Without justification documented by mathematical calculations, the likelihood selection is invalid and a hazard in itself. It has been said that an aircraft is exposed to an engine failure at every takeoff. The hazard of an engine failure exists, but until the engine fails the flight crew is not exposed to an engine failure. The Titanic was not exposed to an iceberg until the iceberg approached its path. Airport operators are also affected by the hidden hazards within their risk level process. The most wanted hidden hazard in the airport operator's risk level process is to apply the number of times things went wrong in their calculation, as opposed to the reasons why things went right. When it is known why things go right, drift and changes can be analysed against that platform.
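The three dimensions described above can be sketched as a simple scoring function. The 1-to-5 scales, the multiplicative combination, and the example scores are illustrative assumptions, not a prescribed standard:

```python
# Illustrative sketch of a risk level process combining probability of
# occurrence, severity, and exposure. The 1-5 scales and multiplication
# are assumptions for illustration, not a regulatory formula.

def risk_index(likelihood: int, severity: int, exposure: int) -> int:
    """Combine the three dimensions, each scored 1 (low) to 5 (high)."""
    for value in (likelihood, severity, exposure):
        if not 1 <= value <= 5:
            raise ValueError("each dimension must be scored 1-5")
    return likelihood * severity * exposure

# Same hazard (engine failure), different phase of flight: the severity
# and exposure scores differ, so the risk index differs.
takeoff_risk = risk_index(likelihood=1, severity=5, exposure=4)  # 20
cruise_risk = risk_index(likelihood=1, severity=3, exposure=2)   # 6
assert takeoff_risk > cruise_risk
```

The point of the sketch is only that the same hazard can carry different risk levels by phase of flight; an actual likelihood value would still need the documented mathematical justification the text calls for.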

The purpose of a root cause process is to establish an area, or factor, within operations to target corrective actions. Targeted corrective action plans are more successful in generating expected changes than randomly applied corrections to randomly selected areas. A root cause is allocated to human factors, organizational factors, supervision factors, or environmental factors. 

The first step in a root cause analysis is to determine if it is within the scope, control, and authority of the SMS enterprise. The litmus test is whether the accountable executive can freely apply human and financial resources to implement a corrective action plan. An AE at an airport has the authority to apply human and financial resources to airside operations, but does not have the same authority over a construction contractor doing work at the airport. An airline may use towing vendors to move their aircraft, but it is not within the airline's scope and control to determine the root cause within the towing contractor's operations system. Two commonly used root cause analysis processes are the 5-WHY process and the fishbone process. The 5-WHY process is most effective when analyzed within a 5x5 matrix.

When there is only one path to the answer in the 5-WHY process, the first question determines the root cause outcome whether the WHY is asked five times or 100 times. Within a 5x5 matrix there are five first questions asked, and each question is different. When applying the fishbone process, there are unlimited brainstorming opportunities. When a root cause is applied outside scope and control, only a limited set of reasonable questions are answered, and unless opportunities for a hazard to be activated are exhausted, the most wanted hazard in a root cause analysis is on the run within overcontrolled processes.
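The 5x5 variant described above, with five different first questions each starting its own chain of five WHYs, can be sketched as a small data structure. The questions and the answer function here are hypothetical placeholders:

```python
# Sketch of a 5x5 5-WHY matrix: five different first questions, each
# iterated through five WHY steps. All content is hypothetical.

def five_why_matrix(first_questions, ask):
    """Build the matrix: one row per first question, five WHYs deep.

    `ask(question)` returns an answer, which becomes the subject of the
    next WHY in that row.
    """
    matrix = []
    for question in first_questions:
        row, subject = [], question
        for _ in range(5):  # five WHY iterations per row
            answer = ask(subject)
            row.append(answer)
            subject = "Why: " + answer
        matrix.append(row)
    return matrix

# Placeholder answer function; a real analysis records investigator input.
def ask(question):
    return "answer to (" + question + ")"

questions = [
    "Why did the aircraft drift off the centreline?",
    "Why was the approach flown low?",
    "Why was the hazard unreported?",
    "Why did supervision not detect the drift?",
    "Why did the process allow the condition?",
]
matrix = five_why_matrix(questions, ask)
assert len(matrix) == 5 and all(len(row) == 5 for row in matrix)
```

The contrast with a single-path 5-WHY is structural: five independent first questions yield twenty-five answers instead of one chain whose outcome was fixed by the first question.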

The most wanted hazards within Human Factors, Organizational Factors, Supervision Factors, and Environmental Factors are found within the answers to the WHAT-WHEN-WHERE-WHY-WHO (position) and HOW questions.

HUMAN FACTORS are human reactions triggered by eyesight, hearing, taste, touch, or smell. They are human behavior; personal attitude with respect to a situation, person, or thing; values; beliefs; or a just-culture environment. Human factors are character and emotions, and other factors affecting the decision-making process and its output.

ORGANIZATIONAL FACTORS are the organizational environment a person works within, as it relates to the interactions defined in the SHELL model.

SUPERVISION FACTORS are direct supervision, remote supervision, or self-supervision. General types of supervision and leaders are structural, participative, servant-leader, freedom-thinking, and transformational.

ENVIRONMENTAL FACTORS are the operational environment, topographical environment, climate environment, geo-environment, level of just-culture environment, or workstation environment.

WHAT
HUMAN FACTORS - Human behavior, performance, and reaction to events.
ORGANIZATIONAL FACTORS - A framework to outline authority, accountability, roles, responsibilities, and communication processes.
SUPERVISION FACTORS - The function of leading, coordinating, and directing the work of others to accomplish the objective.
ENVIRONMENTAL FACTORS - The design and performance environment: the applicability of design to job performance and the encouragement of engagement or disengagement in task-result oriented activities.

WHEN
HUMAN FACTORS - Aviation safety processes and decision making.
ORGANIZATIONAL FACTORS - Design of processes and application of processes in the operational environment.
SUPERVISION FACTORS - Daily, within the regular working hours of personnel, with result-oriented applications.
ENVIRONMENTAL FACTORS - Daily, within working hours in Operations, Maintenance, Flight Following, or at an assigned location.

WHERE
HUMAN FACTORS - Operations and within operational management personnel.
ORGANIZATIONAL FACTORS - Management policies and operational processes.
SUPERVISION FACTORS - Organizational management within an organizational hierarchy.
ENVIRONMENTAL FACTORS - Operations, Maintenance, Flight Following, or as assigned.

WHY
HUMAN FACTORS - Human factors knowledge is used to optimize the fit between people and the systems in which they work, to improve safety and performance.
ORGANIZATIONAL FACTORS - To establish an organizational culture for operational processes and expectations for the level of safety in operations.
SUPERVISION FACTORS - To establish authority, accountability, roles, and decision authority within the operational processes.
ENVIRONMENTAL FACTORS - To establish and maintain an environment where personnel have access to design tools and encouragement of performance engagement.

WHO [position]
HUMAN FACTORS - Anyone with operational or SMS roles and responsibilities in operations, maintenance, or flight following, or other personnel when designing operational processes.
ORGANIZATIONAL FACTORS - Established, maintained, communicated, and assessed by all directors and managers reporting to the safety management system on behalf of the Accountable Executive.
SUPERVISION FACTORS - The Accountable Executive is responsible for operations and activities on behalf of the certificate holder. All directors and managers reporting to the safety management system are responsible for activities on behalf of the Accountable Executive.
ENVIRONMENTAL FACTORS - Applicable to all personnel, where the Accountable Executive leads with a safety policy and objectives and goals for safe operation.

HOW
HUMAN FACTORS - Application of processes and tasks for both reactive management and proactive management.
ORGANIZATIONAL FACTORS - The delivery of structured processes within the organization.
SUPERVISION FACTORS - Processes within the basic types of supervision. General types of supervision and leaders are: structural, participative, servant-leader, freedom-thinking, and transformational.
ENVIRONMENTAL FACTORS - Safety operational systems designed for timely delivery within the SHELL model, designed to achieve user friendliness, and for personnel to stay informed during process application.

A system analysis is a comprehensive analysis of systems, their subsystems, departments and divisions, and on-demand processes. System analysis processes are processes to identify hazards within the context of the system analysis. A system analysis is applied when considering the implementation of new systems, the revision of existing systems, the design and development of operational procedures, the identification of hazards or ineffective risk controls through the safety assurance processes, or change management. In addition to a system analysis of the entire safety management system, a system analysis includes operations or activities authorized under the certificate, and an analysis of vendors who are performing tasks affecting how the aviation industry perceives the certificate holder's and accountable executive's performance. A system analysis is applicable to vendors and third-party contractors, limited to their operational tasks. In the unlikely event of an incident, a vendor or third-party contractor may conduct their internal root cause analysis and submit it to the airline or airport operator. The inclusion of a system analysis of vendors' and third-party contractors' operational processes does not affect the scope, control, and authority of an airline or airport root cause analysis.

The most wanted hazards within a system analysis are hazards beyond the scope, control, and authority of a certificate holder and their accountable executive.


Saturday, May 13, 2023

Elevated Runway Edge Lights By Inversion


By OffRoadPilots

When operating in the arctic, subarctic, mountainous areas, or sparsely settled areas, airlines and airports need a safety management system (SMS) that includes optical illusion by inversion, optical illusion by sun angle, and the optical illusion known as the black-hole effect. An optical illusion is real, the same as a mirage. A mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted from the false image. A mirage occurs when there is a temperature inversion. An inversion is when air at higher altitudes is warmer than the air below. When the air below the line of sight is colder than the air above it, light rays passing through the temperature inversion are bent down, and so the image appears above the true object. Mirages tend to be stable, as cold air has no tendency to move up and warm air has no tendency to move down. Mirages make objects below the horizon, or outside of a normal line of sight, visible at the horizon. A sun angle optical illusion is when the color of rocks in a mountain range, combined with the sun angle, makes a large mountain range impossible to see.

The black hole illusion is a nighttime illusion that occurs when only the runway is visible to pilots, without surrounding ground lights. With this illusion there is a tendency, or a trap, for pilots to estimate an incorrect required descent angle, causing the approach to be lower than required for the runway. Another illusion caused by black hole conditions on dark nights with no moon or starlight, or without a visible horizon, triggers pilots to believe that they are on the approach slope since they have a steady view of the runway in their windshield, causing them to fly longer and shallower approaches than needed to clear obstacles. Unless a pilot has up-to-date knowledge and is intimately familiar with the airfield, thorough pre-approach study and preparation is required to mitigate the black hole hazard. Today, there are online tools and maps available for pilots to become familiar with approaches and departures at most aerodromes and certified airports.

The same black hole illusion occurs during takeoff, when the acceleration g-force applied to the pilot makes the climbout angle appear steeper than normal. On a dark night, without moonlight or starlight, and without a view of the horizon due to the black hole illusion, the tendency is to reduce aircraft pitch, and the departure angle may be lower than required to clear obstacles, or could even be a negative angle. A contributing factor to a 2007 King Air accident after a missed approach was the illusion of a climb while the aircraft was descending.

It is a regulatory requirement for an airport operator to identify in their airport emergency plan potential emergencies within a critical rescue and fire-fighting access area that extends 1000 m beyond the ends of a runway and 150 m at 90° outwards from the centreline of the runway, including any part of that area outside the airport boundaries. It is also a regulatory requirement for an airport operator to identify emergencies that can reasonably be expected to occur at the airport or in its vicinity and that could be a threat to the safety of persons or to the operation of the airport. Optical illusions are real and can therefore reasonably be expected to occur on arrival and departure. The question to answer is how far away from the airport, beyond the 1000 m distance and the 150 m mark from the centreline, an airport operator assesses it to be reasonable to initiate an emergency response. In 2017 an aircraft crashed and came to rest beyond a point 150 m from the extended centreline. Since the airport was operating with a safety management system, it was reasonable to expect that they would initiate their emergency response plan at that time.
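The critical rescue and fire-fighting access area described above (1000 m beyond each runway end, 150 m either side of the centreline) reduces to simple geometry. This sketch assumes a straight runway in local coordinates, with x along the centreline and y the lateral offset, both in metres:

```python
# Sketch of the critical rescue and fire-fighting access area check.
# Assumes local coordinates: x runs along the centreline from one
# threshold (x = 0) to the other (x = runway_length); y is the lateral
# offset from the centreline. Both are in metres.

def in_critical_access_area(x: float, y: float, runway_length: float) -> bool:
    """True if point (x, y) lies within 1000 m of either runway end
    along the centreline and within 150 m of the centreline laterally."""
    return -1000.0 <= x <= runway_length + 1000.0 and abs(y) <= 150.0

# A point on the airport near the centreline is inside the area; a
# wreckage location beyond 150 m laterally falls outside it.
assert in_critical_access_area(500.0, 100.0, 1800.0)
assert not in_critical_access_area(2200.0, 200.0, 1800.0)
```

Real runways are not axis-aligned, so an operational tool would first transform surveyed coordinates into this runway frame; the point here is only how sharply the regulatory boundary is defined compared with the open question of "in its vicinity."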

It is also reasonable to expect that airports identify their outer identification surface as the outer limit of their primary responsibility, with a responsibility to assist upon request beyond that distance. In 2011 an airplane crashed about 3000 m from an airport and 280 m from the extended centreline, and the airport responded to the accident. In another accident in 2011, an aircraft crashed 1500 m from the centreline and the airport activated their response. A safety management system must be tailored specifically to each airport, and each airport emergency plan's definition of distance in its vicinity will vary. Since the regulations are not broad enough to cover every detail of airline or airport operations, their SMS must include a practical application of their plan to address hazards and operational tasks. A rule of thumb for an effective SMS: when the regulations do not require it, that is the very reason it is incumbent on airlines and airports to do it.

There are several non-certified aerodromes and remote airports operating without vertical or lateral guidance to their runways. At night, a lighted object, e.g. a tower, may appear to be just a few miles ahead of an aircraft in cruise flight, while the actual distance could be 100 miles. When pilots rely on visual cues as their vertical and lateral guidance, there are times when their aircraft has drifted away from the extended centreline or is low or high on approach. In 1993 the crew of a twin engine aircraft on approach to an airport at night had the runway in sight at 1200 feet, with flight visibility near minima. On final approach the crew descended below a virtual glidepath and the aircraft crashed in hilly and snowy terrain 5 km short of runway 26. Other examples are major carriers flying low on approach to international airports, or lined up on the taxiway for landing. Optical illusions can happen at any airport, but there is a higher probability that an aircraft will be low, high, or drifted away from the centreline on approach to airports without vertical and lateral guidance systems.

A guidance approach system installed at several airports is the Precision Approach Path Indicator (PAPI), a vertical guidance system for aircraft on final approach. Flying the glidepath of a PAPI keeps aircraft within the obstacle protected surface, as long as the airport operator applies their safety management system processes to monitor for unknown or new obstacles. An optical illusion created by a PAPI system occurs when there is frost on the PAPI lenses and their lights are deflected.

A rule of thumb when flying approaches without a PAPI installed is to be 1000 feet above the runway at 3 NM and to maintain the runway edge lights in a fixed view. The illusion without guidance is that an aircraft is too high, when the actual altitude may be below the safe approach angle. In Canada, airport standards are only applicable to airports serving scheduled service for the transport of passengers. An aerodrome serving large airlines, with hundreds of passengers onboard, is not required to comply with the Canadian Aviation Regulations airport standards. This is a flaw in the regulatory system, when the method by which tickets are purchased determines the monitoring of safety at destination or departure airports. If the same principle were applied to highway travel, speed limits would only be applicable to national bus carriers with paying passengers.
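The 1000 feet at 3 NM rule of thumb corresponds closely to a standard 3° glidepath, which basic trigonometry confirms:

```python
import math

FEET_PER_NM = 6076.12  # feet in one nautical mile

def approach_angle_deg(height_ft: float, distance_nm: float) -> float:
    """Approach angle implied by a height above the runway at a distance out."""
    return math.degrees(math.atan2(height_ft, distance_nm * FEET_PER_NM))

# 1000 ft at 3 NM works out to roughly 3.1 degrees, close to the
# standard 3-degree glidepath a PAPI would provide.
angle = approach_angle_deg(1000.0, 3.0)
assert 3.0 < angle < 3.2
```

The same function shows why the illusion matters: an aircraft at 500 ft at 3 NM is on a roughly 1.6° path, well below a safe approach angle, even if the view of the runway feels high.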

The requirement for a certified airport to install a PAPI is that they conduct a risk assessment within their SMS to establish the need for one. One airport determined by risk assessment that a PAPI was not required, since there were no data supporting low, high, or off-centre approaches to their airport. When such data is not collected, risk analyses become simple, but they do not paint a true picture of operations. The absence of incidents is not an indication of a healthy safety management system, or of a healthy operational environment. Most times things go right because human factors come with built-in resilience: the ability to correct errors, or to bounce back after an occurrence. An occurrence is not just an aircraft crash, but also an approach flown below the slope of a standard approach path. When occurrences go unreported it becomes simple to fill in the SMS compliance checkboxes, but optical illusions are occurrences to be reported.

On a dark October night an aircraft was on approach to an airport in the Arctic. That night there was a temperature inversion causing an illusion that the runway edge lights were raised well above ground level. When runway lights are elevated, they can be seen from a farther distance and appear to be closer. This night the runway lights were raised by optical illusion to a height where they could be seen above a mountain range that would normally obscure the lights at that distance. Since the lights were visible, the position of the aircraft was determined to be inside the mountain range and clear of obstructions. However, within a few minutes the airplane crashed, since the viewed runway edge lights were an illusion, and the aircraft was still on the backside of the mountain.

In addition to naturally occurring optical illusions, there is a man-made optical illusion: at night, when an aircraft is parked on the runway facing the same direction as an approaching aircraft, the parked aircraft becomes invisible.

Optical illusions are real. Only by knowing of their existence, learning about the nature of these phenomena, and verifying position by aircraft instruments can it be determined that they are illusions. When flying on visual cues, illusions are real: aircraft may be invisible, and runway edge lights may be elevated several feet above their actual ground level location.


Saturday, April 29, 2023

Performance Is Exceptional


By OffRoadPilots 

Since most tasks go right most of the time, everyday performance goes unnoticed and is viewed as unexceptional. Airports and airlines fall into the trap of accepting repetitious tasks as trivial, without considering their successful outcomes, and of viewing these tasks as less important than complex special task assignments. Conventional wisdom is that organizational drift in safety is a drift away from safe operations toward unsafe processes. Drift, however, is neutral, and does not by itself improve or reduce safety in operations. Since drift is neutral, safety improvements or safety reductions are neutral. A concept of an effective safety management system (SMS) is to implement changes for incremental, or continuous, safety improvements. Continuous safety improvement is a statement that appeals to emotions. Such statements are often used in sales and marketing, describing a new product or service as new and improved, which implies that the prior product or service was old and inferior. Safety today does not become old and inferior tomorrow, but is fluid and adaptable to external changes.

Continuous safety improvements address the practical drift and the practical compliance gap. A practical drift is the difference between work as imagined and work as actually performed. Work as imagined is documented by organizational policies, processes, and procedures. The practical compliance gap is the difference between regulatory compliance in a static environment, where nothing moves, and regulatory non-compliance within a moving environment. When work as imagined becomes the compliance standard for safety in operations, it carries the assumption that the systems, processes, or procedures are perfect and fail-free. The practical drift is a common cause variation within the system itself. An airline or airport operator must identify these variations for their SMS to conform to regulatory requirements. Regulatory requirements for the person managing the safety management system are to monitor trends, monitor corrective actions, and monitor concerns of the civil aviation industry. Monitoring these tasks is to monitor the outcome, which is different from monitoring for compliance with work as imagined.

Continuous, or incremental, safety improvements are needed for operations to maintain oversight due to external and common cause variations in processes. A change made for the purpose of safety may or may not be an additional benefit to safety in operations. That the safety card is played, e.g. a change implemented for safety reasons alone, can be more hazardous to aviation safety than continuing operations without changing anything.

Takeoff performance charts for gravel runways are different from performance charts for paved runways. Rolling resistance, sometimes called rolling friction or rolling drag, is the force resisting motion when a body rolls on a surface. An aircraft rolling on a graveltop surface experiences higher resistance than an aircraft rolling on a blacktop surface. A new classification of graveltop runways is the thin bituminous surface runway. Gravel runways have been used successfully for decades in places where runway pavement or concrete is expensive to build. When operating on gravel runways there are loadbearing restrictions, and there are aircraft performance restrictions. These restrictions are often viewed as a burden and a restriction on operations, rather than as an additional layer of safety. Gravel runways are located far away from emergency services should an incident occur, and there are airports where public emergency services do not exist. Operating with gravel runway restrictions is to adjust operations to geolocations, since hazards are identified locally. Aircraft weight restriction due to gravel operations is viewed as a trivial restriction affecting business revenue, rather than as the exceptional performance task it is. Classification of thin bituminous surface runways is a change in classification only, from a gravel runway to a paved runway. With this change, the regulator allows aircraft maximum gross weight to be applied to takeoff and landing performance. Simplified: no changes or construction are made to the gravel runway, so the runway remains the same as it was prior to reclassification. After reclassification, an airline is considered "safe" for takeoff with the stroke of a pen only. The classification criteria themselves acknowledge this flaw in performance requirements.
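The rolling resistance the paragraph describes follows the standard relation F = C_rr · m · g. The coefficients below are rough illustrative assumptions; real values vary with gravel depth, tire pressure, and surface condition, and certified performance charts remain the authority:

```python
G = 9.81  # gravitational acceleration, m/s^2

def rolling_resistance_n(mass_kg: float, coefficient: float) -> float:
    """Rolling resistance force in newtons: F = C_rr * m * g."""
    return coefficient * mass_kg * G

# Illustrative coefficients only, not certified data:
C_RR_PAVED = 0.02   # assumed blacktop surface
C_RR_GRAVEL = 0.05  # assumed graveltop surface

mass_kg = 5670.0  # roughly a 12,500 lb aircraft
paved = rolling_resistance_n(mass_kg, C_RR_PAVED)
gravel = rolling_resistance_n(mass_kg, C_RR_GRAVEL)
assert gravel > paved  # extra drag is one reason gravel charts differ
```

The sketch illustrates why a paper reclassification changes nothing physical: the coefficient belongs to the surface, not to the classification, so the extra drag on the takeoff roll remains whatever the runway is called.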

Thin bituminous surface runways are a broad class of surface treatments with a variety of performance characteristics. Newly built thin bituminous surface runways require sufficient curing time to provide a competent and durable operational surface. A class 3 pavement may be considered to meet the definition of a thin bituminous surface runway. That a runway "may be considered" is in itself an acknowledgement that there is no data available in support of compliance with all the requirements. In other words, since the runway only "may" qualify, there are no gravel runways that actually meet the most stringent standard requirement for reclassification as defined. In addition, performance data for the actual ground roll is not required to assess the validity of the reclassification.

Airport operators designate their most critical aircraft by aircraft group number, a numeric value based on characteristics of the critical aircraft that the aerodrome supports. The aircraft group number is determined by the critical aircraft's wingspan or tail height. An aircraft group number is based on a paved runway surface, with a limited maximum gross weight capability. An airport operator may select an aircraft group number based on aircraft wingspan and tail height, but the standard lacks a method to verify how an airport operator selects an aircraft size based on runway surface performance, or on aircraft landing and takeoff performance. The root cause hazard of reclassification to thin bituminous surface runways is the opportunity it gives airport operators who wish to maintain gravel operations to reclassify their runway so airlines can operate out of their airport. The one airport operator who remains a gravel runway operator carries the highest risk of losing business due to takeoff and landing restrictions. In addition, they are unable to provide friction characteristics for a runway surface serving turbojet aircraft, and there are still many unanswered questions. On the other hand, the person who was the driving force behind reclassification to thin bituminous surface runways received a recognized award for the work. When exceptional performance of current processes remains unrecognized, and risk levels are established by emotions, checkbox syndrome, or social media likes, any change to processes becomes its own worst enemy.

Work well done often goes unrecognized as important work, since everything is operating normally and processes are ignored and discarded as everyday normal tasks. Drift is often recognized as drift into unsafe conditions, but unrecognized drift into improved safety in operations is just as much a hazard to aviation safety as drifting into hazardous operating conditions. Hazards are predictable, while incidents and accidents are unpredictable.

For an incident to occur, three conditions must meet at the fork in the road. The first condition is that an aircraft, vehicle, or person is performing a task beyond the limits of their capabilities, e.g., an aircraft requiring 3,000 feet of takeoff distance is taking off from a 2,000-foot runway. The second condition is recognition of past practices without recognizing special cause variations, e.g., an aircraft normally departs empty and flies to a longer runway for passengers and freight to be loaded, but the effect of a partially loaded aircraft goes unrecognized. The third condition is operational drift to complete a task within a defined timeframe, e.g., daily departure performance records are exceptional, but are not recognized as exceptional since they occur daily, and drift is occurring to recover lost time for on-time task completion. Capability limits may be skewed by established operational requirements, just as the thin bituminous surface runway scenario justified the change as a safety improvement in operations while the root cause of the change was to move operational limitations. Past practices may be skewed by an induced level of urgency to complete, and drift to improvement goes unrecognized when emotions or external forces are applied to decisions.
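The first condition lends itself to a simple sketch. This is a minimal, hypothetical check using the article's example numbers (3,000 ft required, 2,000 ft available); the function name is an invention for illustration, not any operator's tool:

```python
# Hypothetical capability check for the first incident condition: a task
# performed beyond the available limit. Numbers are the article's example.

def exceeds_capability(required_takeoff_ft: float, runway_length_ft: float) -> bool:
    """True when the required takeoff distance exceeds the runway available."""
    return required_takeoff_ft > runway_length_ft

print(exceeds_capability(3000, 2000))  # True: the first condition is met
```

The second and third conditions are behavioural and would need trend data rather than a single comparison, which is why they so often go unrecognized.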

When performance is exceptional, such as operating out of a gravel runway with performance restrictions, drift into an improved runway surface condition is just as much a hazard as drift into unacceptable practices.


Sunday, April 16, 2023

More is Less and Less is More


By OffRoadPilots

An accountable executive (AE) once said that they operate their safety management system (SMS) and airport operations by the principle that more is less and less is more. When operating by this principle, their regulatory compliance was essentially non-existent, and the regulator demanded the surrender of their airport certificate. The airport operator presented a corrective action plan to abolish the less-is-more principle, and the regulator accepted the plan. Their airport certificate was secured, but an enormous task lay ahead to establish regulatory compliance with all SMS and airport regulations. After the airport certificate was secured and a compliance level established, the airport operator abandoned their quality control system and reverted to their previous less-is-more principle.

A regulatory requirement is for the AE to be responsible for operations or activities authorized under the certificate and accountable on behalf of the certificate holder for meeting the requirements of the regulations. Traditionally, the airport manager (APM) was the certificate holder and would also remain the certificate holder after implementing the safety management system. As the certificate holder, the APM is the airport authority and the decision maker, and an AE is accountable to the APM to maintain compliance with the regulations. Compliance with all regulations and standards is a comprehensive task, with compliance established by a line-item audit. When operating by the less-is-more principle, airport operators take it upon themselves to exclude regulations they have decided are not applicable to their operations. These airport operators do not take into account that there is no, or minimal, scaling of the regulations to suit the size and complexity of airport operations. Scaling is a regulatory provision that applies to processes, not a licence to decline compliance with a regulatory part.

When the less-is-more principle is applied, it is applied laterally to any system without consideration of the issue at stake or the expected outcome. Statements such as "remember that less is more" are commonly used when undefined expectations are part of the outcome, when minor tasks are removed from regulatory requirements, when process comprehension is lacking, or when non-compliant tasks are excluded from the equation. There are times when the less-is-more principle holds true, but the less-is-more system is not a system to integrate into a safety management system.

The less-is-more system absolutely has its place within many systems, and advertising is one of them. Imagine for a minute that you are driving down the highway and you see a sign that says something like "Our breakfast menu has pancakes, toast, eggs, farmers gravy, bacon, and sausage, and costs between $10-15 per adult." By the time you get to pancakes, you have passed the sign and are wondering what it said. In a less-is-more system, the sign would say "Hungry? Next exit." With this sign the business gets more hungry visitors and generates more revenue. In addition, since there are fewer words, the sign costs less to make. The less-is-more system is a trigger for the imagination to fill in the blanks, and the blanks give positive, or happy, feelings about the imaginary outcome. Online advertising has also changed to the less-is-more system, shortening advertisements to five seconds to hit their target points and letting the imagination fill in the rest. Whenever there is a void, it will be filled with something. Other examples of less-is-more are lowering an item's price below the regular price to sell more units, offering 25-cent video machines to attract more plays, showing less of the neighbourhood when advertising a home to attract more customers, paying less for internet with a slower connection and spending more time on uploads and downloads, spending less money on personal improvement to assert more internal control of personnel, spending less money on training for a more uniform and conforming environment, or, possibly the most important reason to operate with a less-is-more system, playing ignorance after occurrences.

An AE once said that it is difficult to work with hazards that are unknown. The less-is-more system absolutely serves a purpose for an AE to operate with a less productive safety management system, and to some extent the regulator accepts the ignorance play. Just a few weeks ago, an airline gave this excuse for an aircraft that took off with contaminated surfaces, saying that the pilots did not follow safety rules, and the regulator accepted it without further investigation. In 1956 two airlines were operating in a less-is-more environment, causing a midair collision. Most people would not choose the less-is-more system when selecting medical treatment, but it is accepted in aviation safety. Ignorance is bliss: if you do not know about something, you do not worry about it.

The less-is-more system is destructive to a safety management system for both airport and airline operations. The role of an accountable executive includes maintaining compliance with recordkeeping. The regulatory requirements for recordkeeping are to maintain a record system in a manner that does not compromise the integrity of the records, to take measures to ensure that the records contained in the recording systems are protected against inadvertent loss or destruction and against tampering, and to ensure that a copy of the records contained in the recording systems can be printed on paper and provided to the regulator on notice given. It would take some imagination to make less-is-more out of these requirements, but it works when processes are combined to cover multiple requirements. This is only possible with a daily quality control system and user-friendly software that complies with all requirements. When a quality control process is established and determined to conform to regulatory requirements, minimal work is needed in daily operations. Without a proven daily quality control system, an airport or airline operator must complete the same tasks daily and start from the bottom every day to ensure compliance, e.g., using paper-format records without continuance to the next day or to the historical records.

The regulatory requirement for an AE is to be responsible for operations or activities authorized under the certificate and accountable on behalf of the certificate holder for meeting the requirements of the regulations. This is an enormous task, and it makes sense that an AE sets performance goals to minimize these tasks as much as possible. Exempting operations from the regulations is not the way to go. A small to medium airport operator may only receive a turbojet aircraft a few times a month and decide on their own that compliance with obligations is not justified since "this is how we always did it." This is the less-is-more system, in that complying with fewer regulations provides more options, or opinions, on how airport operations should run. With this approach the safety card is played, and any tasks or actions are justified by the word "safety." When the word safety is applied, there is very little opposition to the tasks, especially if airside personnel remain untrained and without knowledge of oversight requirements. Keeping workers in the dark is a prerequisite when operating with the less-is-more system. Only after a complete line-item audit of the operations has been completed, a daily quality control system is in place, and processes are assigned to regulatory requirements could the less-is-more system be applied, by monitoring drift and operations daily and making adjustments as required when personnel are drifting from designed operations. However, the AE who decided to change over to the less-is-more system also excluded the audit requirement from the compliance system.

The more is less and less is more system is incompatible with operation of an airport or aircraft, and the safety management system. The litmus test of systems compatible with airport and airline operations is in their daily quality control system.


Sunday, April 2, 2023

How to Capture Unknown Hazards


By OffRoadPilots 

There is a difference between an unknown hazard and a hidden hazard. Unknown hazards are unknown, but they are not hidden. An unknown hazard is a hazard without a hazard classification, it is a hazard defined by likelihood where times between intervals are imaginary, theoretical, virtual, or fictional.

Unknown hazards are incomprehensible to common sense but are still real hazards. An unknown hazard is in the open and in plain view but is not recognized as a hazard for the purpose of an immediate task to be performed. Unknown hazards also need to be assigned a scope and sequence to learn their whereabouts. A person may be exposed to unknown hazards without knowing it. Exposure to an unknown hazard is a higher risk to aviation safety than exposure to known and hidden hazards since they are unknown and cannot be mitigated.

Hidden hazards are known, but they are hidden and may become visible, or active, if triggered by human factors, organizational factors, supervision factors, or environmental factors. A hidden hazard is removed from operations in a 3D environment and measured in time (speed), space (location), and compass (direction). Hazards also become hidden in a remote management environment, since an immediate threat to aviation safety does not affect a remote location. A hidden hazard may be hidden for one person but still be active for another.

A widely accepted method to learn about hidden hazards is to ask personnel to search for and identify them in their workplace. One person may identify a condition as a hazard, while another person does not see the same condition as a hazard.

Hazards identified by personnel are often based on emotions, past experiences, public opinions, or expectations. There are as many reasons to identify a condition as a hazard as there are workers. Mandating a search for hidden hazards is in itself a hazard, since a worker's attention will be moved from their job activity to searching for hazards. Requesting voluntary hazard reporting as hazards affect job performance is different, since the workers at that time are focusing on their job tasks rather than identification of what is hidden. After hazards are identified, the role of an SMS enterprise is to analyze each hazard received, assign a classification, and enter it into a hazard register. Identifying a hidden hazard is not the same as identifying an unknown hazard, since hidden hazards are known, but the conditions for those hazards do not exist at this time. A prime example of a hidden hazard is when weather conditions are conducive to ice or frost formation on aircraft surfaces, although there is no observable precipitation or fog while an aircraft is on the ground.

Unknown hazards go unattended until there is an incident, an accident, or a social media post. An unknown hazard is also a special cause variation to aircraft operations, since exposure and likelihood have not been accounted for. However, the hazard may be a common cause variation within the process itself. Ice and snow accumulation is known to be a hazard to aviation safety, but at the time of conducting the task at hand the hazard is unknown to the flight crew until exposed by an incident, unstable flight, or a social media post. When this happens, airlines are quick to place blame on pilots, who were just doing their job as expected. A prime example is when an air operator suspends pilots pending investigation into a failure to follow de-icing procedures. In this particular true story, there was no de-icing policy or process established by the air operator to operate out of this airport during icing conditions.

An aircraft does not carry its own ground deicing equipment and fluids, and agreements with airports and contractors are required to deice prior to departure. Without a contract agreement between the airport and the airline to deice prior to departures when temperatures are below freezing, the pilots complied with management expectations and operated without deicing the aircraft. The aircraft departed without issues, but the hazard became known when posted on social media.

Capturing unknown hazards is an analysis task as opposed to an observation task. Asking workers to actively search for hazards is an observation task. Several hazards can be identified by this method, but the process in itself is reactive, since a mitigation plan, or control action, is pending on the hazard first being identified. A safety management system (SMS) is simple in concept: find the hazards and do something about them. Identifying unknown hazards is also a regulatory requirement. An SMS enterprise is required to operate with a process for identifying hazards to aviation safety and for evaluating and managing the associated risks. A requirement to identify hazards is for an SMS enterprise to find a hazard, name the hazard, assign a classification to the hazard, and record the hazard in the hazard register. When all this is done, they need a process for setting goals for the improvement of aviation safety and for measuring the attainment of those goals. Capturing unknown hazards is an invaluable tool for goal setting.
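The four-step requirement above (find, name, classify, record) can be sketched as a minimal hazard register. The class names and example entries are illustrative assumptions, not any regulator's schema:

```python
# Minimal hazard register sketch: name a hazard, assign a classification,
# and record it. The classifications are the article's three kinds.
from dataclasses import dataclass, field

@dataclass
class Hazard:
    name: str
    classification: str  # "known", "hidden", or "unknown"

@dataclass
class HazardRegister:
    entries: list = field(default_factory=list)

    def record(self, name: str, classification: str) -> Hazard:
        """Name, classify, and record a hazard in the register."""
        hazard = Hazard(name, classification)
        self.entries.append(hazard)
        return hazard

register = HazardRegister()
register.record("ice on runway", "known")
register.record("frost formation without visible precipitation", "hidden")
print(len(register.entries))  # 2
```

A real register would also carry likelihood, severity, and mitigation fields; the point here is only that each of the four named steps maps to a concrete action.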

The four areas in which to analyze and capture unknown hazards are human factors, organizational factors, supervision factors, and environmental factors. When searching for unknown hazards, these are the starting points; work backwards from there until hazards are identified. Applying the process inspection flowchart is the same system as the process to capture unknown hazards.


An SMS enterprise has an obligation pursuant to the regulations to assign duties on the movement area, and any other area set aside for the safe operation of aircraft, including obstacle limitation surfaces, at the airport, which are described in the airport operations manual, only to workers who have successfully completed a safety-related initial training course on human and organizational factors. A human factors training course includes identification of unknown hazards by recognizing that human factors are different from human error. Human factors are behaviors triggered by the five senses. Human error is to complete a task knowing that the task is completed by a non-standard process. This does not imply that human error is a direct hazard to the task at hand, but that unwritten processes are used to "get the job done". When unwritten processes, or shortcuts, are used, the foundation for operational safety analysis is based on unknown criteria, undocumented hazards, or unknown hazards. A shortcut to "get the job done" may actually be the preferred process, but it needs to be documented and unknown hazards identified within the process.

Human factors are vision, hearing, smell, taste, and touch. The SHELL model is the foundation of human factors interactions, as the five senses observe and interpret the components in the SHELL model.

• S=Software includes regulations, standards, policies, job descriptions, and expectations.

• H=Hardware includes electronic devices, documents, tools, and the airfield.

• E=Environment includes the designed environment, user-friendly environment, design and layout, accessibility, and task-flow.

  • Social Environment includes distancing, experiences, culture, and language.

  • Climate Environment includes geo location, weather, and temperature.

• L=Liveware is yourself, and L=Liveware is other workers within your environment.

Within these areas there are unknown hazards to search for, and how they affect operations must be learned. An example could be that a regulatory requirement induces stress and shortcuts, or that regulatory compliance increases a level of risk. The Tenerife airport disaster is a prime example of how requirements and compliance were contributing factors to the incident. In addition, several more components could be added to the SHELL model when searching for unknown hazards.


Organizational factors are strategy solutions, acceptable cultures, technology, regulatory compliance factors, and systems information flow within the organizational structure. When the CEO, President, or accountable executive of an organization makes a statement to the effect that an incident was caused by non-compliance with a process, it is an acknowledgement that the policies, processes, and procedures within the organization are perfect and without flaws. There are a multitude of organizations that are perfect and without flaws, and they operate very successfully. Just recently a large global carrier experienced an unsuccessful event for which they did not have a policy, process, or procedure in place to mitigate the hazard. The event was beyond what management expected, and the hazard was unknown until it became one of the most disastrous events they have experienced. Within an organizational structure, data is collected and turned into information, information is turned into knowledge, and knowledge is turned into comprehension. The triennial line-item audit is a tool to identify unknown organizational hazards. An example is audit line-item 34-0403 and the debriefing after an emergency response tabletop or full-scale exercise.

The organization placed the reason for the finding on the auditor who identified the non-compliance. Combined organizational observations, such as operating with an icy runway, a clearway across a highway, an airport vehicle without radio communication on the runway, a haying contractor with uncontrolled access to movement areas, construction operations with open trenches, and more, are examples of widespread unknown hazards within organizational factors. An accountable executive, or the regulator, would be unaware of this unless they monitor their daily quality control system. Without comprehension, and training to meet an acceptable comprehension level, unknown hazards will remain unknown.


Generally speaking, there are four types of supervision. In aviation, however, an additional supervision level is introduced. Some of these levels are air traffic services (ATS), air traffic controllers (ATC), flight planning, weather services, control towers, airport ground control, runway, taxiway and apron lights, runway status lights, approach lights, airside markings, markers, and signs. All of these items are supervisory tasks communicated by means other than words.

Autocratic or Authoritarian supervision:

• Under this type, the supervisor wields absolute power and expects complete obedience from subordinates. The supervisor wants everything to be done strictly according to their instructions and never likes any intervention from subordinates. This type of supervision is resorted to in order to tackle undisciplined subordinates.

Laissez-faire or free-rein supervision:
• This is also known as independent supervision. Under this type of supervision, maximum freedom is allowed to the subordinates. The supervisor does not interfere in the work of the subordinates. In other words, full freedom is given to workers to do their jobs. Subordinates are encouraged to solve their problems themselves.

Democratic supervision:
• Under this type, the supervisor acts according to mutual consent and discussion, or in other words consults subordinates in the process of decision making. This is also known as participative or consultative supervision. Subordinates are encouraged to give suggestions, take initiative, and exercise free judgment. This results in job satisfaction and improved morale of employees.

Bureaucratic supervision:
• Under this type certain working rules and regulations are laid down by the supervisor and all the subordinates are required to follow these rules and regulations very strictly. A serious note of the violation of these rules and regulations is taken by the supervisor. This brings about stability and uniformity in the organisation. But in actual practice it has been observed that there are delays and inefficiency in work due to bureaucratic supervision.

The task for an SMS enterprise is to conduct system analyses to find unknown hazards as they apply to operations. An unknown hazard may remain unknown to a ground crew or aircraft mechanics, but it is crucial that the hazard has been found and identified to the flight crew. An example of an unknown hazard is the non-punitive SMS policy, which is only applicable within the jurisdiction where the certificate holder is located.


Environmental factors are about the surroundings and their effect on the accountable executive, managers, workers, personnel, aircraft, cockpit, passenger cabin, aircraft operations, airport operations, or anything else that becomes a part of operations. Environmental factors are more than just the weather, or the environment itself; they are also about how work tasks are laid out to function effectively. Environmental factors are about toolboxes and marked tools, about checklists and user-friendly flow, and about the organizational culture that everyone works within. Airport operators are changing more slowly to comply with the safety management system environmental factors than airlines are. Airports like to do what they "always" did and not make any changes. In the pre-SMS era, an airport operator could place all blame on the pilot after an accident, as long as the airport had NOTAM'd an issue. This changed with the new airport standards and the safety management system.

Today, the role of an airport operator is to assist flight crews in maintaining compliance with their responsibility to ensure that the aerodrome is suitable for the intended operation. Airport operators still NOTAM 100% ice on runways and expect the flight crew to decide on a course of action. What airport operators are doing is preventing medevac or air ambulance flights from using the airport, since most flight crews would not use an icy runway. Environmental factors are also factors within the regulatory frameworks, which establish the basis for an environment. Regulations are not minimum safety requirements, but compliance factors to remain a certificate holder. An example of an unknown hazard within environmental factors is fear of failure.

It is critical for an SMS enterprise to accept that not all hazards can be mitigated to an acceptable risk level without ceasing operations. One such hazard is the probability that a flight crew could establish an aircraft on an unplanned course at any time during a flight; this hazard does not justify ceasing operations.



Sunday, March 19, 2023

The 95% Confidence Level


By OffRoadPilots 

A confidence level is the percentage of times you expect to get close to the same estimate if you run your experiment again or resample the population in the same way. A confidence interval consists of the upper and lower bounds of the estimate you expect to find at a given level of confidence. If an airport is estimating a 95% confidence interval around the mean proportion of daily tasks, based on a random sampling of reports, you might find an upper bound of 0.56 and a lower bound of 0.48. These are the upper and lower bounds of the confidence interval. The confidence level is 95%.
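The 0.48 and 0.56 bounds can be reproduced with the normal-approximation interval for a proportion. The sample proportion and sample size below are invented values chosen to match the example:

```python
# Approximate 95% confidence interval for a proportion (normal approximation).
# p_hat = 0.52 and n = 600 are assumed values that reproduce the example bounds.
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96):
    """Return the (lower, upper) bounds of the confidence interval."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

lower, upper = proportion_ci(0.52, 600)
print(round(lower, 2), round(upper, 2))  # 0.48 0.56
```

The z value of 1.96 is the standard normal quantile for a 95% confidence level; other levels swap in a different z.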

With the introduction of a safety management system (SMS) to aviation, two new terminologies were introduced to the global aviation industry: the confidence level, commonly known as a 95% confidence level, and the confidence interval. Prior to the introduction of a confidence level, trends were assessed by the criteria that fewer events are good and more events are bad. In this system, a trend was established when there were two events, or datapoints, in a row moving in the same direction. When an airport or airline accepts two datapoints as a trend, they are building their own overcontrolling trap. Determining their level of safety based on two data points, or events, is overcontrolling of processes. Overcontrolling happens when management does not comprehend the information contained in variations, with the result that overcontrolling actually increases variation and causes more unexpected consequences. The old carpenter's rule, when using one stick as a measuring tool for where to cut, is to use the same stick each time, or the last stick will be at an incorrect length. Overcontrolling after two data points requires a new target to measure from, and the last output will be incorrect. For airports and airlines to change over from a reactive SMS culture to a safety culture where they work within an SMS and its confidence level, their first task is to conduct a system analysis of the confidence level system.

A confidence level system is a forward-looking system for strategic planning and designing processes that conform to expected, or desired outcome. A confidence level system within an SMS provides for goal setting, planning, and measuring performance. It concerns itself with organizational safety rather than conventional health and safety at work concerns. An organization's SMS defines how it intends the management of airport and airline safety to be conducted as an integral part of their business management activities. An SMS is woven into the fabric of an organization and becomes part of the culture, or the way people do their jobs. Operating within a confidence level system is a comprehensive and systematic approach to the management of aviation safety, including the interfaces between the airports and airlines, and its suppliers, sub-contractors and business partners. A confidence level approach is also a regulatory requirement for airports to maintain procedures for the exchange of information in respect of hazards, incidents and accidents among all operators of aircraft and the airport operator.

When operating within a forward-looking system, or confidence level system, an SMS becomes a predictive SMS. A predictive SMS is when statistical analyses projects datapoints into the future, by applying process reliability. A process without special cause variations is an in-control process when outputs are as expected or are within the upper and lower confidence interval limits.

The confidence level is in the method, or process, itself, and not in a particular confidence interval. If the sampling method were repeated many times, 95% of the intervals constructed would capture the true population mean. As the sample size increases, the range of interval values narrows, meaning that with a larger sample size, or an increased number of data collected, the sample mean will generate a much more accurate estimate than a smaller sample, or fewer tasks completed.
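The narrowing effect can be seen directly from the interval width formula; this is a sketch under the same normal-approximation assumption, with illustrative inputs:

```python
# Interval width under the normal approximation for a proportion:
# more data means a narrower interval at the same confidence level.
import math

def ci_width(p_hat: float, n: int, z: float = 1.96) -> float:
    """Full width of an approximate 95% confidence interval."""
    return 2 * z * math.sqrt(p_hat * (1 - p_hat) / n)

print(ci_width(0.5, 100) > ci_width(0.5, 1000))  # True: larger sample, narrower interval
```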

A confidence level interval has upper control limits and lower control limits. These limits are based on statistical principles for assessing process performance. An SMS is not the old-fashioned occupational health and safety reactive system; it is about the health of organizational safety performance, measured by process reliability performance. A reliable, stable, in-control process may still produce unacceptable results based on the average and the calculated upper and lower control limits. An in-control process may be changed as desired to conform to expectations. As an example, if a call center has a policy to answer any call before the fourth ring, but most calls are answered on the fifth or sixth ring, the process is stable, falls within the upper and lower control limits, and is in-control. However, it is outside the customer service expectation to answer by the third ring. A change in process is then required to meet that goal. When applying the analysis to a predictive SMS, the expectation is that the majority of calls during the next 12-month period will not meet the third-ring goal unless the process is changed. When a process is changed, make one change at a time and monitor results.
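The call-center example can be sketched as a simple control check. The ring counts below are invented, and the 3-sigma limits are the conventional control-chart choice, not something prescribed by the text:

```python
# Stable-but-off-target sketch: rings before answering, per call (invented data).
import statistics

rings = [5, 6, 5, 5, 6, 5, 6, 5, 5, 6]         # most calls answered on ring 5 or 6
mean = statistics.mean(rings)
sigma = statistics.pstdev(rings)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # conventional 3-sigma control limits

in_control = all(lcl <= r <= ucl for r in rings)  # every point within the limits
meets_goal = all(r <= 3 for r in rings)           # policy: answer by the third ring
print(in_control, meets_goal)  # True False
```

The process is in-control yet fails the third-ring goal, which is exactly the distinction the paragraph draws: stability and acceptability are separate questions.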

It is a misconception that a 95% confidence level is an unacceptable safety goal for airports and airlines. A wise person once said that you will capture a more correct number of hazards by applying a 95% confidence level to your operations than you will by attempting to capture all hazards. An unknown hazard is also a hazard with an opportunity to affect an outcome in operations. Working with anything but a confidence level is an unmanageable task.

An airport operating outside a predictive SMS system assigns airport operations responsibilities to the captain of an intended arriving or departing flight. Conventional wisdom is that publishing NOTAMs releases an airport from all airport operations responsibilities, and any incidents are caused by an aircraft captain's own faulty judgement. A commonly applied airport operations manual (AOM) policy is that an airport is operational 24 hours per day, 7 days per week, supports both day and night VFR and IFR operations to non-precision approach limits, and supports departure visibility limits of 1⁄2 statute mile (SM) or greater. When a 24/7 policy is published in the aeronautical publications, the airport operator is required to have someone onsite 24/7, but they do not staff at night, on weekends, or on holidays.

Airports may be operating with a paper-documented SMS, but their operations are still reactive, without acceptable processes. Operating without incidents or accidents at an airport does not equal an operation with a healthy or successful SMS. A process may be in-control, but by the same token it may be performing in non-compliance with regulatory requirements, standard requirements, and an airport's SMS safety policy.

When operating a safety management system and applying the confidence level system, both airports and airlines have a golden opportunity to go above and beyond regulatory requirements in both safety in operations and customer service. An airline providing scheduled air service is required by the regulations to operate out of certified airports, but it is not required to use certified airports as alternate airports. A certified airport must comply with airport standards, while a non-certified airport, or registered aerodrome, is not required to comply with any airport standards at all. When an airline is using an aerodrome as its alternate destination, this alternate aerodrome may not be suitable for operations. All that the airline knows is what was published in the aeronautical publications and NOTAMs, and it is unable to verify current suitability until arrival. A registered (non-certified) aerodrome operates in a reactive culture without responsibility to ensure compliance. The only requirements for an aerodrome to be registered and publish its airfield in the aeronautical publications are that warning notices are published for low-flying aircraft, that a wind direction indicator is installed, that lights are installed if operating at night, and that no-entry signs and no smoking or open flame signs are installed. Everything else required for the safe operation of an airport or aircraft is a voluntary task. This includes NOTAMs, snow clearing, obstacle limitations on approach, runway, taxiway and apron aircraft size support, fuel availability, and more. There are also certified airports operating under the same reactive principle, believing that by publishing NOTAMs they transfer all responsibility to an aircraft operator. When working within a confidence level system with confidence limits established, both airports and airlines have an opportunity to analyze data to conduct an accept-or-reject risk assessment.

An airport operator is also required to conduct an airport inspection daily, or more often, depending on the type of operations and the cause of runway contamination. An airport inspection includes runways, taxiways, aprons, lights, signage, markings, markers, approaches, and items such as new obstacles outside of airport property. Let's assume that they are required to produce one report daily. Over a year 365 reports are generated, or 1,095 reports over a 36-month period. The first question to answer is whether the tasks were completed daily, with a yes or no answer. There is an expectation that over 36 months, 1,095 reports were generated. Let's assume that 1,095 reports were submitted for inspection. In a predictive SMS culture, or when working within a confidence level culture, the next step is to learn if the process generated expected results, or output, 1,095 times. What makes airports feel secure or safe is not so much objective security or safety in operations as a sense of confidence in their own ability to take care of themselves as they did in the past.
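The report arithmetic above can be checked in a few lines; the counts are the article's illustrative numbers, and the all-reports-submitted assumption mirrors the text:

```python
# Daily inspection reports: one per day over a 36-month period.
expected = 3 * 365                 # 1,095 reports expected over 36 months
submitted = 1095                   # assume every report was submitted
completion = submitted / expected  # proportion of days with a report
print(expected, completion)  # 1095 1.0
```

A completion proportion below 1.0 would answer the yes/no question in the negative, and the shortfall itself becomes data for the confidence level analysis.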

Airport and airline operators need to learn what to measure. An SMS is to analyze processes and the health of organizational operations, which can only be discovered by applying a confidence level system with confidence limits.

This is the second reason why the global aviation industry, be it airlines or airports, needs a safety management system today, when it was safe yesterday without an SMS.

