Monday, June 28, 2021

Illegal Activity, Negligence or Wilful Misconduct


By Catalina9

The Safety Management System (SMS) Safety Policy is the platform on which the SMS is built. The policy is built on an idea, a vision, and expectations of future achievements. A policy is a course or principle of action adopted or proposed by a government, party, business, or individual. An SMS policy follows these same principles and remains in force until the idea, vision, or expectations change. It is crucial to the success and integrity of a Safety Management System that the Safety Policy is designed to serve the major definite purpose of an enterprise. There is one quality a person must possess to win, and that is definiteness of purpose: the knowledge of what one wants, and a burning desire to possess it. A major definite purpose is the core purpose for the existence of an organization, and the hub from which goals, objectives, and processes are developed and designed. The more you think about the purpose of your SMS Policy, and how to achieve it, the more you attract people, opportunities, ideas, and resources that help you move rapidly toward your goal.


Negligence is drift, or abandonment, and is invisible in the daily operations.
A requirement of a safety policy is that it includes an anonymous reporting policy, a non-punitive reporting policy, and a confidential reporting policy. A purpose of these reporting systems is to preserve the integrity of the Safety Policy. A non-punitive reporting policy is a commitment by the Accountable Executive (AE), to the event itself and to the personnel involved in the event, that punitive actions are off the table. During the pre-SMS days, punitive actions were applied depending on the severity of the outcome, prior history, and expected reoccurrence. Punitive actions were subjective, biased, and based on one person’s opinion. A pilot involved in an incident expected to be terminated on the spot. Air operators expected a commercial pilot to have the knowledge and experience to get the job done. Back then, when the weather was low, the expectation was to go take a look: if you saw the runway, land, and if you did not, try again. A young pilot did just that, flew an approach to zero visibility, landed, and kept the job. Today, in an SMS world, a pre-takeoff hazard report could have been submitted and the flight cancelled. There are other examples, such as a pilot who was terminated so the operator would look good for its clients, after an aircraft on fire was recovered with the first officer frozen on the controls. Punitive actions were an integrated part of a system to improve pilot skills and remove the bad apples. In a non-punitive reporting system, the report goes to those who need to know and those who should know, and a summary of events is also disseminated throughout the organization for information purposes.

A confidential reporting system is one where the report only goes to the persons who need to know, or to the director level within the organization. At the director level, the report may go to others than those who need to know, but it remains confidential to that level. The purpose of a confidential reporting system is that the reporting process stays within a controlled system and that the contributor has confidence the report is not shared outside the director level, or beyond those who need to know. A contributor of a confidential report may allow the report to be shared within the organization, or may also contribute to the SMS with videos and clarification of how an incident happened.

A non-punitive policy is the most often applied policy since it applies to every single SMS report received, as a commitment to the contributor by the AE. There is an ongoing discussion in the aviation industry about what makes a non-punitive policy effective. One opinion is to view it from a contributor’s point of view, as a job-performance assessment where an incident becomes a learning tool for continuous safety improvements. This is defined in a statement of the conditions under which immunity from disciplinary action will be granted. Another opinion is to view it from an enterprise’s point of view, where the policy is only applied if an incident does not trigger litigation or legal action against the worker or the enterprise, and is defined in a statement that the non-punitive policy does not apply if a worker was involved in illegal activity, negligence, or wilful misconduct.

These two opposing regulatory requirements support the same common goal, which is safety in aviation, but they are opposing views. They are opposing because one requires definitions of unacceptable behaviors, while the other requires definitions of acceptable behaviors. There is a fine line, and often an invisible line, between accepting an event under the non-punitive policy and excluding the event from the policy.

A non-punitive policy is a tool for continuous learning.

The foundation of a Safety Management System is a just-culture. In a just-culture there is trust, learning, accountability, and information sharing. A just-culture is one where there are justifications for actions or reactions. For an enterprise to apply one or the other definition to its non-punitive policy, a safety case or change management case must be conducted, with a risk assessment of the justifications for applying either of these two definitions. Both the unacceptable behaviors for which punitive actions are necessary, and the acceptable behaviors for which immunity will be granted, must be defined pre-event in the Safety Policy, with detailed definitions published against the five W’s and How in the SMS Manual. The five W’s and How define the process of What, When, Where, Why, Who, and How, both for illegal activity, negligence, or wilful misconduct, and for when immunity from disciplinary action will be granted.

There is no expectation that an enterprise retains workers who show behaviors of illegal activity, negligence, or wilful misconduct. These behaviors could cause the destruction of a successful business. However, SMS is a job-performance review, not a legal-activity review. When an SMS policy states that illegal activity, negligence, or wilful misconduct are unacceptable, everything else becomes acceptable: until the level of these behaviors is reached, the AE makes a commitment to the worker that they may continue to work. In addition, in an enterprise that allows for any behavior short of illegal activity, negligence, or wilful misconduct, there is no room for training or continuous safety improvements. On the other hand, in an organization that defines the conditions under which immunity from disciplinary action will be granted, a list of job-performance safety critical areas can be defined and applied. It is crucial for an enterprise to comprehend that even if punitive actions are accepted, there is no regulatory requirement that they must be applied. However, when applied, they must be applied systematically and evenly to all workers, including senior management. The very first case where punitive action is applied pursuant to the SMS policy sets the bar for all future punitive actions.

When conducting a safety case for which definition to apply to a safety policy, the case must focus on how the policy affects the future of operations and, more importantly, how the policy affects an expanding business. A short-term non-punitive policy applied to a single-pilot, single-engine operator, or to a small regional airport, may restrict the operator from expanding into multi-crew and multi-engine aircraft, or restrict the airport from expanding to multiple runways and international traffic. A safety case applies the five W’s and How to processes rather than to the issue. As an example, the What question could be asked as "What is illegal activity, negligence or wilful misconduct?" or "What is the process to establish the baseline for illegal activity, negligence or wilful misconduct?" Asking a process question does not eliminate the fact that these behaviors must be clearly defined in the SMS manual.





Both scenarios require comprehensive, pre-defined, and published definitions. A concept of the SMS is to pre-define and clearly spell out job-performance expectations. When job-performance expectations are undefined until they reach the level of illegal activity, negligence, or wilful misconduct, the line at which these levels are reached must be clearly defined. Generally speaking, illegal activity is an act committed in violation of law where the consequence of conviction by a court is punishment, especially where the punishment is a serious one such as imprisonment. A definition of negligence is failure to use reasonable care, resulting in damage or injury to another, and a definition of wilful misconduct is any act, omission, or failure to act (whether sole, joint, or concurrent) by a person that was intended to cause harmful consequences to the safety or property of another person. In addition to general definitions, each sub-definition must be clearly defined. When job-performance expectations are defined by the conditions under which immunity from disciplinary action will be granted, these expectations must also be clearly defined. They are defined as Safety Critical Areas with a subcategory of Safety Critical Functions. A comprehensive list could include more than 500 events to consider.

An airport or airline operator must apply the regulatory requirement applicable to its operations. Within a just-culture, or a non-punitive environment, there must be justification for pre-defined actions or reactions. The four principles within a just-culture are trust, learning, accountability, and information sharing. As long as an operator is governed by these principles, it may apply any non-punitive policy tailored to the needs of its operations.

Catalina9




Monday, June 14, 2021

How To Do A Risk Analysis


By Catalina9

There is a difference between a risk analysis and a risk assessment. A risk assessment involves several steps and forms the platform of an overall risk management plan. A risk analysis is one of those steps: it defines a characteristic of each risk level and assigns it a weighted score as part of the risk assessment. Generally speaking, a risk assessment includes identifying the issues that contribute to risk, analyzing their significance, identifying options to manage or maintain oversight of the risk, determining which option is likely to be the best fit for the size and complexity of the organization, and assigning recommendations to decision-makers. A risk assessment also includes one or multiple risk analyses for both pre-mitigation and post-mitigation risk. A risk analysis is one single justification task of the likelihood and severity of a hazard, communicated as a risk level, and it may be a standalone document or a supporting document in a risk assessment.

There are several guidance materials available on how to do a risk analysis, and they come with different designs. A risk analysis may focus on the likelihood of a hazard, or it may focus on the severity of a hazard as a determining factor. The combination of likelihood and severity is communicated as a risk level. A common risk analysis tool is the risk matrix, which assigns a reaction to the risk by color. Red is normally an unacceptable risk level, while yellow may be acceptable with mitigation, and green is acceptable without a reaction to the hazard. In a risk matrix, likelihood and severity are assigned classification letters and numbers. A low number could be assigned a high severity or a low severity depending on how the risk matrix is designed. The same is true for the likelihood level, where the letter “A” could represent a high likelihood or a low likelihood depending on the risk matrix design.
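To make the matrix mechanics concrete, here is a minimal sketch in Python of how a risk matrix turns a likelihood letter and a severity number into a color band. The 5 x 5 size, the orientation (with “A” as the lowest likelihood and 1 as the lowest severity), and the score thresholds are assumptions for illustration only; as noted above, a real matrix may be designed the other way around.

# Minimal sketch of a 5 x 5 risk matrix lookup (orientation is an assumption).
LIKELIHOOD_SCORE = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

def risk_colour(likelihood: str, severity: int) -> str:
    """Combine a likelihood letter and a severity number (1-5) into a colour band."""
    score = LIKELIHOOD_SCORE[likelihood] * severity
    if score >= 15:
        return "red"      # unacceptable risk level
    if score >= 6:
        return "yellow"   # acceptable with mitigation
    return "green"        # acceptable without a reaction to the hazard

print(risk_colour("D", 4))  # -> red
print(risk_colour("B", 2))  # -> green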

A current risk analysis level.
Level of exposure is a third component of a risk analysis, and to simplify the risk analysis it is normally assigned an exposure level of 1. An assigned exposure level would be between 0 and 1, or 0% to 100% certainty. With the exposure level assigned as 1, the certainty is definite and the hazard has appeared. As an example, birds are a hazard to aviation. The exposure level to birds for an aircraft on approach to the same runway is quite different in January than in May. Due to a common cause variation, the migratory bird season increases bird activity during the spring and fall months at airports. During the migratory season an airport may apply multiple mitigation processes, such as ATIS notification, direct ATC notification, or means to scare the birds away from the airport. Birds are attracted to food sources, and the black runway surface is an attraction for insects, which in turn attract birds. An exposure level for bird activity, without affecting flight operations, may be between 0.1 and 0.9, or up to 90%. An operator may decide to cancel a flight with an exposure level at 90%. However, this is an extreme operational decision, since passengers and freight depend on the airline in support of their own integrity and on-time commitments. Most often a scheduled or on-demand flight will continue as planned and rely on other aircraft or the airport to scare birds away from the approach or departure ends. By eliminating the exposure criterion and applying an exposure level of 1, a hazard or risk level may be applied to each segment of the flight. Another common cause variation is thunderstorms, with an expectation that they are to be mitigated at the time of exposure.
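The arithmetic of the exposure component can be sketched as a simple product of likelihood, severity, and exposure. This is an illustrative assumption rather than a prescribed formula; the function name, the scale values, and the seasonal exposure figures below are hypothetical.

def exposure_weighted_risk(likelihood: int, severity: int, exposure: float = 1.0) -> float:
    """Combine likelihood, severity and exposure into a single risk index.

    Exposure is a certainty factor between 0 and 1 (0% to 100%). Assigning 1
    means the hazard has appeared, which is the simplification described above.
    """
    if not 0.0 <= exposure <= 1.0:
        raise ValueError("exposure must be between 0 and 1")
    return likelihood * severity * exposure

# Bird hazard on approach: same likelihood and severity, different seasonal exposure.
print(exposure_weighted_risk(4, 3, exposure=0.9))  # migratory season, about 10.8
print(exposure_weighted_risk(4, 3, exposure=0.1))  # off-season, about 1.2
print(exposure_weighted_risk(4, 3))                # exposure fixed at 1, equals 12.0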

When conducting a risk analysis, one of the most important factors is to reduce subjective, wishful, biased, or opinion-based inputs. A common cause variation in a risk analysis is the individual assumptions, which do not make it a faulty risk analysis, but an analysis with justification of assumptions, or individual variations. One descriptor of a risk assessment is “Possible”, with a definition that it is possible to occur at least once a year. When a risk analysis is conducted of the hazard of bush flying, or of flying into one of the most congested airports in the world, the likelihood of occurrence would not reach the level of “Possible”, since human behavior is to take the path of least resistance. A likelihood of “Possible” would increase the workload dramatically, and it could also restrict business, or flights, into areas with a high profit margin. If this is a new route or area of operations, the justification is based on a wish or opinion and might not be a true picture of the hazard. However, if the risk analysis justification is based on prior years’ documented records, the risk analysis is based on data and paints a true picture. There is no one-size-fits-all answer in a risk assessment, and there are no correct or incorrect answers, since it is the operator who accepts or rejects the risk. While this is true, it is also the customer who accepts or rejects the risk of using the services provided by one or the other air carrier.

One of the principles of a Safety Management System is to operate within a Just Culture. A Just Culture is a culture where there are justifications for actions, both proactive and reactive. A risk analysis is just as much a part of justification as any other area of Just Culture operations. After a risk analysis, both likelihood and severity are processed through a justification process. The platform to build on for likelihood justification is times between intervals. The first level is when times between intervals are imaginary, theoretical, virtual, or fictional. This is a level with no data available, and it is unreasonable to expect the likelihood to occur. An example would be the likelihood of a meteor landing in your back yard. The second level is when times between intervals are beyond the factors applied for calculating problem-solving in operations. At this level, the likelihood cannot be reasonably calculated. It is just as impossible as reaching the last digit of pi. The third level is when times between intervals are separated by breaks, or spaced farther apart than normal operations could foresee.

In a justification culture there is a justification why the scale is not balanced.
Level four is when times between intervals are without definite aim, direction, rule, or method. Incidents happen, but they are random and unpredictable. Level five is when times between intervals are indefinable. This is when it is impossible to predict an incident, but most likely one or more will occur during an established timeframe. Level six is when times between intervals are inconsistent. This is when incidents occur regularly, but they are not consistent with expectations. Level seven is when times between intervals are protracted and infrequent; they may last longer than expected, but the frequency is relatively low. Level eight is the foothills of a systemic likelihood, when times between intervals are reliable and dependable. Levels nine and ten are the systemic levels, when times between intervals are short, constant, and dependable, or methodical, planned, and dependable, without defining the operational system or processes involved.
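For reference, the ten likelihood levels above can be kept at hand as a simple lookup table. This is only a convenience sketch; the descriptors are paraphrased from the text and the dictionary itself is an illustrative data model, not a prescribed one.

# Convenience lookup of the ten likelihood levels, keyed by level number and
# described by their "times between intervals" justification.
LIKELIHOOD_JUSTIFICATION = {
    1: "imaginary, theoretical, virtual, or fictional (no data available)",
    2: "beyond factors applied for operational problem-solving",
    3: "separated by breaks, or spaced farther apart than operations could foresee",
    4: "without definite aim, direction, rule, or method (random)",
    5: "indefinable, but one or more events likely within an established timeframe",
    6: "inconsistent (events occur regularly, but not as expected)",
    7: "protracted and infrequent",
    8: "reliable and dependable (foothills of a systemic likelihood)",
    9: "short, constant, and dependable (systemic)",
    10: "methodical, planned, and dependable (systemic)",
}

print(LIKELIHOOD_JUSTIFICATION[7])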





After the likelihood level is justified, the justification process continues to the severity level. There are also ten severity levels, which are independent of the likelihood levels. The platform for classification of the severity levels is a platform of expectations. Building on that platform, level one is a severity that is not compatible with another fact or claim of the hazard. Level two is a severity with insignificant consequences. Level three is a severity inferior in importance, size, or degree. Level four is a severity that would attract attention to an operational process, cause operational inconvenience, or cause unscheduled events. Level five is a severity large in extent or degree. Level six is a severity involving an industry-standard defined risk, or a risk significant in size, amount, or degree. Level seven is a severity having the influence or effect of a noticeably or measurably large amount, caused by something other than mere chance or ignorance. Severity level eight is the foothills of the catastrophic levels, a severity having the influence or effect of irrevocable harm, damage, or loss. Severity levels nine and ten are the catastrophic levels: a severity at a turning point, with an abrupt change approaching a state of crisis and sufficient in size to sustain a chain reaction of undesirable events, occurrences, incidents, accidents, or disasters, or a severity where functions, movements, or operations cease to exist.
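The severity scale can be captured the same way as the likelihood scale. Again, the wording below is paraphrased from the text and the structure is only an illustrative sketch.

# Companion lookup of the ten severity levels, built on the platform of
# expectations described above.
SEVERITY_JUSTIFICATION = {
    1: "not compatible with another fact or claim of the hazard",
    2: "insignificant consequences",
    3: "inferior in importance, size, or degree",
    4: "attracts attention to a process, causes inconvenience or unscheduled events",
    5: "large in extent or degree",
    6: "industry-standard defined risk, significant in size, amount, or degree",
    7: "noticeably or measurably large effect, not caused by mere chance or ignorance",
    8: "irrevocable harm, damage, or loss (foothills of the catastrophic levels)",
    9: "turning point approaching a state of crisis; sustains a chain reaction",
    10: "functions, movements, or operations cease to exist",
}

print(SEVERITY_JUSTIFICATION[8])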

With a justification risk analysis, the first action is defined based on the risk level. At level one the initial task is to communicate. Level two is to communicate and monitor. Level three is to communicate, monitor, and pause. Level four is to communicate, monitor, pause, and suspend. Level five is to communicate, monitor, pause, suspend, and cease. The beauty of a justification-based risk analysis is that after corrective action is implemented, and during the follow-up process, the tasks are completed in reverse order until the risk reaches the communicate task level.
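The escalation ladder above is straightforward to express as a small sketch: actions accumulate as the justified risk level rises, and during follow-up the tasks are unwound in reverse order until only the communicate task remains. The function names are illustrative assumptions.

ACTIONS = ["communicate", "monitor", "pause", "suspend", "cease"]

def actions_for(risk_level: int) -> list:
    """Cumulative actions for a justified risk level from 1 to 5."""
    if not 1 <= risk_level <= 5:
        raise ValueError("risk level must be between 1 and 5")
    return ACTIONS[:risk_level]

def follow_up(risk_level: int) -> list:
    """After corrective action, complete the tasks in reverse order."""
    return list(reversed(actions_for(risk_level)))

print(actions_for(4))  # ['communicate', 'monitor', 'pause', 'suspend']
print(follow_up(4))    # ['suspend', 'pause', 'monitor', 'communicate']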

Catalina9





Tuesday, June 1, 2021

What To Expect From An Audit

By Catalina9 

What we expect of an audit is that it is an unbiased and neutral report of the facts.

Everyone in the aviation industry needs to do audits for one reason or another. Audits might be done for regulatory compliance, for compliance with the enterprise’s safety policy, as a contract compliance agreement, at a customer’s request as a satisfaction evaluation, or after a major occurrence. An airport client must feel assured that operating out of one specific airport does not cause interruptions to passengers due to inadequate maintenance of nav-aids, visual aids, markings, runways, taxiways, or aprons, or cause any surprises for aircraft, crew, or passengers.

Include in your SMS manual that audit results are not automatically implemented
An airline or charter operator most often carefully researches new airports it plans to operate out of, and when there is a tie between two or more airports, the one with the best customer service wins the draw. A passenger on an airliner must feel assured that the flight will be occurrence-free, and a shipper of goods must trust the carrier to ensure that the goods arrive at the destination airport in the same condition they were in when first shipped. There are a million considerations and reasons why audits are needed. Since there are several reasons for audits, there are also several expectations of the outcome of an audit. What these expectations are depends on which side of the audit you are on and the scope of the audit.

Let’s take a few minutes and reflect on three different types of audits: the Regulatory compliance audit, the Safety Policy compliance audit, and the Customer Satisfaction compliance audit.

The Regulatory compliance audit is a static audit, where no movements or processes are required for the audit. When an operator’s certificate is issued to an airline, there are zero movements required for that certificate to be issued. However, there are conditions for operations attached to the certificate, which become the scope of regulatory audits. These conditions are management personnel, maintenance personnel, and flight crew. All these positions for an air carrier are certificated positions, and each person must comply with the roles, responsibilities, and privileges of their licenses for the operating certificate to remain valid. For a new certificate holder, at the time the first aircraft leaves the gate for a flight, there is an expectation of an audit that pre-departure regulatory requirements are met and that all regulatory requirements are met at the closing of the flight upon arrival at the destination. When an audit of an airline is carried out, the first step is to review its operations manuals for regulatory compliance. At the time of issuance of the certificate they were compliant, but over time amendments are added and new regulatory requirements are implemented. One major implementation example is the Safety Management System (SMS), which had an enormous impact on airlines. Their compliance requirements went from a “job well done” to who did the job and how they did it. After manuals are reviewed, operational records are reviewed for compliance. Records from the very first flight, or the first flight since the last audit, through the most current records are reviewed. Regulatory compliance audits are audits of pre-flight compliance, in-flight compliance, and post-flight compliance. Training records, operations records, maintenance records, and crew license records are all audited and assigned a compliance or non-compliance grade. The expectation of a regulatory audit is that any item audited is linked to a regulatory requirement.

A Safety Policy compliance audit is an audit of an enterprise’s Safety Management System. The audit process is the same as for a regulatory compliance audit, with the difference that the audit becomes a job-performance audit. A job-performance audit is about what the task was, when the task was performed, where in the operations the task was assigned, who did the task, why the task was necessary, and how these tasks were performed. The “how” audit is an overarching audit for the other five questions: what, when, where, who, and why. A safety policy audit must answer how a decision was reached for each one of the five questions. For example, how was the decision reached to select an airplane and crew, how was the timeline for crew-pairing selected, what were the criteria for destinations, how was it decided who makes the final decision, and why was this person selected.

A safety policy to be “safe” is a policy with undefined destinations
A safety policy audit is the most comprehensive audit, since it involves all aspects of operations, each person in those operations, and a complete timeline of those operations. An in-flight example of a safety policy audit is the process for preparing for an emergency upon arrival. A person seated at the emergency exit is asked prior to takeoff if they are willing and able to assist the flight crew with opening the emergency exit. During the flight, alcohol is also served to that person, who, as a temporary crew member with limited duties, could be intoxicated upon arrival. A safety policy audit conducts interviews of operational personnel, crew members, and maintenance personnel. During these interviews, an auditor may discover that intoxicated personnel are expected to be frontline crew members during an emergency. For each task required by regulatory requirements, the same audit process is applied. Just one simple task may take hours to complete, and it becomes a resource impossibility and impracticability to conduct SMS audits of 100% of the requirements, 100% of the personnel, and 100% of the time. An SMS audit must therefore apply random sampling and statistical process control (SPC) for a confidence-level analysis. The industry standard is a 95% confidence level that each element of an SMS is present for an acceptable audit result.
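The 95% confidence figure invites a sampling calculation. Below is a minimal sketch, assuming the classic sample-size formula for a proportion with a finite-population correction; the record names, the population of 1,200 records, and the margin of error are hypothetical, and this is only one generic way to size a random audit sample, not the only SPC technique an auditor might apply.

import math
import random

def sample_size(population: int, z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Sample-size estimate for a proportion, with finite-population correction.

    The defaults approximate a 95% confidence level with a 5% margin of error.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Hypothetical scope: 1,200 training records; audit a random sample of them.
records = ["record-%04d" % i for i in range(1200)]
n = sample_size(len(records))
audit_sample = random.sample(records, n)
print(n)                 # about 292 records for this population
print(audit_sample[:3])  # first three sampled record identifiers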

A customer satisfaction compliance audit is the simplest audit of all. A customer satisfaction audit is audited against opinions, or against industry-standard expectations. A customer may conduct an audit as an opinion of regulatory compliance, as an opinion of safety policy compliance, or as an opinion of conformance to industry expectations. Customer satisfaction auditors are not required to be technical experts in regulatory interpretation, operational experts, or experts in airport operations, but they are experts in providing opinions of their observations based on their operational experience in aviation. A customer satisfaction audit does not issue findings, since the auditor is unqualified to issue findings against regulatory requirements or operational recommendations. They issue opinions and suggestions for operational changes or implementations as viewed from a customer’s point of view and on behalf of a customer. An operator, whether airline or airport, makes a decision whether to implement these changes and how these changes could affect its operations. The criterion for change may be based solely on a customer’s wish, public opinion, or social media trends. An enterprise without a clause in its SMS manual stating that any findings from any type of audit must first be assessed by the enterprise before being accepted or rejected for implementation may be compelled to make changes without knowing the effect.

An auditor has no responsibility for any occurrences an operator may experience in its operations after implementing audit recommendations. A new regulatory requirement, once implemented, may affect operational safety; a safety policy recommendation may affect safety; and the implementation of a customer suggestion may affect safety in operations. In any case, after an audit an airline or airport must, prior to implementing changes, conduct a safety case, or change management assessment, to evaluate the risk impact on its operations. Since there is an inherent hazard in aviation from the time an aircraft is moving under its own power, an operator must monitor what direction the implementation of audit suggestions or requirements is taking and, from its assessment, continue the course or make operational changes to avoid or eliminate hazards on the horizon.


Catalina9





 
