Monday, June 14, 2021

How To Do A Risk Analysis


By Catalina9

There is a difference between a risk analysis and a risk assessment. A risk assessment involves several steps and forms the platform of an overall risk management plan. A risk analysis is one of those steps: it defines a characteristic of each risk level and assigns it a weight score as part of the risk assessment. Generally speaking, a risk assessment includes identifying the issues that contribute to risk, analyzing their significance, identifying options to manage or maintain oversight of the risk, determining which option is likely to be the best fit for the size and complexity of an organization, and delivering recommendations to decision-makers. A risk assessment also includes one or more risk analyses, both pre-mitigation and post-mitigation. A risk analysis is a single justification task of the likelihood and severity of a hazard, communicated as a risk level, and it may be a standalone document or a supporting document in a risk assessment.

There are several guidance materials available on how to do a risk analysis, and they come with different designs. A risk analysis may focus on the likelihood of a hazard, or it may focus on the severity of a hazard as the determining factor. The combination of likelihood and severity is communicated as a risk level. One risk analysis tool is the risk matrix, which assigns a reaction to the risk by color. Red is normally an unacceptable risk level, yellow may be acceptable with mitigation, and green is acceptable without a reaction to the hazard. In a risk matrix, likelihood and severity are assigned classification letters and numbers. A low number could represent a high severity or a low severity depending on how the risk matrix is designed. The same is true for the likelihood level, where the letter “A” could represent a high likelihood or a low likelihood depending on the matrix design.
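The color-coded matrix described above can be sketched in a few lines of code. This is a hypothetical 5x5 matrix: the letter-to-score mapping (here “A” is most likely) and the color cut-offs are illustrative assumptions, not taken from any particular standard or regulator.

```python
# Hypothetical 5x5 risk matrix. The likelihood letters, severity numbers,
# and color thresholds below are illustrative design choices only.

# Likelihood letters A..E; in this design "A" is the highest likelihood.
LIKELIHOOD = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def risk_color(likelihood_letter: str, severity: int) -> str:
    """Combine likelihood and severity into a color-coded risk level."""
    score = LIKELIHOOD[likelihood_letter] * severity  # range 1..25
    if score >= 15:
        return "red"     # unacceptable risk level
    if score >= 6:
        return "yellow"  # acceptable with mitigation
    return "green"       # acceptable without a reaction to the hazard

print(risk_color("A", 5))  # red
print(risk_color("C", 2))  # yellow
print(risk_color("E", 1))  # green
```

Flipping the `LIKELIHOOD` mapping is all it takes to model a matrix where “A” is the lowest likelihood, which is exactly why the design of the matrix must be stated before its letters and numbers can be read.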

Level of exposure is a third component of a risk analysis, and to simplify the analysis it is normally assigned an exposure level of 1. An assigned exposure level lies between 0 and 1, or 0% to 100% certainty. With the exposure level assigned as 1, the certainty is definite and the hazard has appeared. As an example, birds are a hazard to aviation. The exposure level to birds for an aircraft on approach in January is quite different from that for an aircraft on approach to the same runway in May.

A current risk analysis level.

As a common-cause variation, the migratory bird season increases bird activity at airports during the spring and fall months. During the migratory season an airport may apply multiple mitigation processes by ATIS notification, direct ATC notification, or means to scare the birds away from the airport. Birds are attracted to food sources, and the black runway surface attracts insects, which in turn attract birds. An exposure level for bird activity, without affecting flight operations, may be between 0.1 and 0.9, or up to 90%. An operator may decide to cancel a flight with an exposure level at 90%. However, this is an extreme operational decision, since passengers and freight depend on the airline in support of their own integrity and on-time commitments. Most often a scheduled or on-demand flight continues as planned and relies on other aircraft or the airport to scare birds away from the approach or departure ends. By eliminating the exposure criteria and applying an exposure level of 1, a hazard, or risk level, for each segment of the flight may be applied. Another common-cause variation is thunderstorms, with an expectation that they are mitigated at the time of exposure.
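One way to picture the exposure component is as a probability weight on the risk score. This is a minimal sketch under that assumption; the function name and the example likelihood/severity values are illustrative, not from any published method.

```python
# Sketch of folding an exposure level (0.0-1.0, the certainty of meeting
# the hazard) into a risk score. The formula and example values are
# illustrative assumptions only.

def weighted_risk(likelihood: int, severity: int, exposure: float = 1.0) -> float:
    """Risk score with exposure applied as a probability weight.

    The default exposure of 1 matches the simplification in the text:
    certainty is definite and the hazard has appeared.
    """
    if not 0.0 <= exposure <= 1.0:
        raise ValueError("exposure must be between 0 and 1")
    return likelihood * severity * exposure

# Bird hazard on approach: same likelihood and severity, different seasons.
print(round(weighted_risk(4, 3, exposure=0.9), 1))  # migratory season: 10.8
print(round(weighted_risk(4, 3, exposure=0.1), 1))  # off season: 1.2
```

Setting `exposure=1.0` collapses the calculation back to the plain likelihood-times-severity score, which is the simplification the text describes.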

When conducting a risk analysis, one of the most important factors is to reduce subjective, wishful, biased, or opinion-based inputs. A common-cause variation of a risk analysis is the individual assumptions involved, which do not make it a faulty risk analysis, but an analysis with justification of assumptions or individual variations. One descriptor of a risk assessment is “Possible”, with a definition that it is possible to occur at least once a year. When a risk analysis is conducted of the hazard of bush flying, or of flying into one of the most congested airports in the world, the likelihood of occurrence would not reach the level of “Possible”, since human behavior is to take the path of least resistance. A likelihood of “Possible” would increase the workload dramatically, and it could also restrict business, or flights, into areas of a high profit margin. If this is a new route or area of operations, the justification is based on a wish or opinion and might not be a true picture of the hazard. However, if the risk analysis justification is based on prior years' documented records, the risk analysis is based on data and paints a true picture. There is no one-size-fits-all answer in a risk assessment, and there are no correct or incorrect answers, since it is the operator who accepts or rejects the risk. While this is true, it is also the customer who accepts or rejects the risk of using the services provided by one or the other air carrier.

One of the principles of a Safety Management System is to operate within a Just Culture. A Just Culture is a culture where there are justifications for actions, both proactive and reactive. A risk analysis is just as much a part of justification as any other area of a Just Culture operation. After a risk analysis, both likelihood and severity are processed through a justification process. The platform to build on for likelihood justification is times between intervals. The first level is when times between intervals are imaginary, theoretical, virtual, or fictional. This is a level with no data available, and it is unreasonable to expect the likelihood to occur. An example would be the likelihood of a meteor landing in your back yard. The second level is when times between intervals are beyond the factors applied for calculating problem-solving in operations. At this level, the likelihood cannot be reasonably calculated. It is just as impossible as reaching the last digit of pi. The third level is when times between intervals are separated by breaks, or spaced greater than normal operations could foresee.

In a justification culture there is a justification why the scale is not balanced
Level four is when times between intervals are without definite aim, direction, rule, or method. Incidents happen, but they are random and unpredictable. Level five is when times between intervals are indefinable. This is when it is impossible to predict an incident, but most likely one or more will occur during an established timeframe. Level six is when times between intervals are inconsistent. This is when incidents occur regularly, but they are not consistent with expectations. Level seven is when times between intervals are protracted and infrequent; they may last longer than expected, but the frequency is relatively low. Level eight is the foothills of a systemic likelihood, when times between intervals are reliable and dependable. Levels nine and ten are the systemic levels, when times between intervals are short, constant, and dependable, or methodical, planned, and dependable, without defining the operational system or processes involved.
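The ten likelihood descriptors can be kept as a simple lookup table so that every assigned level carries its justification with it. The level numbers follow the text; the condensed wording of each descriptor is my own paraphrase, and any numeric thresholds an operator attaches to these levels would be their own assumptions.

```python
# The ten likelihood levels as a justification lookup table. Descriptor
# wording is paraphrased from the text; it is not an official scale.

LIKELIHOOD_LEVELS = {
    1: "imaginary, theoretical, virtual, or fictional",
    2: "beyond the factors applied for operational problem-solving",
    3: "separated by breaks greater than normal operations foresee",
    4: "without definite aim, direction, rule, or method (random)",
    5: "indefinable, but expected within an established timeframe",
    6: "inconsistent with expectations, though occurring regularly",
    7: "protracted and infrequent",
    8: "reliable and dependable (foothills of systemic)",
    9: "short, constant, and dependable (systemic)",
    10: "methodical, planned, and dependable (systemic)",
}

def justify_likelihood(level: int) -> str:
    """Return the times-between-intervals justification for a level."""
    return f"Level {level}: times between intervals are {LIKELIHOOD_LEVELS[level]}"

print(justify_likelihood(4))
```

Recording the justification string alongside the number is what keeps the analysis inside a Just Culture: the level is never asserted without its reason.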





After the likelihood level is justified, the justification process continues to the severity level. There are also ten severity levels, which are independent of the likelihood levels. The platform for classification of the severity levels is a platform of expectations. Building on that platform, severity level one is a severity not compatible with another fact or claim of the hazard. Level two is a severity with insignificant consequences. Level three is a severity inferior in importance, size, or degree. Level four is a severity that would attract attention to an operational process, cause operational inconvenience, or cause unscheduled events. Level five is a severity large in extent or degree. Level six is a severity involving an industry-standard defined risk, or a risk significant in size, amount, or degree. Level seven is a severity having the influence or effect of a noticeably or measurably large amount, caused by something other than mere chance or ignorance. Level eight is the foothills of the catastrophic levels: a severity having the influence or effect of irrevocable harm, damage, or loss. Levels nine and ten are the catastrophic levels: a severity at a turning point, with an abrupt change approaching a state of crisis and sufficient in size to sustain a chain reaction of undesirable events, occurrences, incidents, accidents, or disasters, or a severity where functions, movements, or operations cease to exist.
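The severity scale above has a natural banding: level eight is called the foothills of the catastrophic levels and nine and ten are catastrophic. A small sketch can enforce that banding; the band boundaries follow the text, but the labels "minor" and "major" for the lower levels are my own assumed names.

```python
# Banding the ten severity levels. The level-8 and level-9/10 bands come
# from the text; "minor" and "major" are assumed labels for the rest.

def severity_band(level: int) -> str:
    """Map a justified severity level (1-10) to a coarse band."""
    if not 1 <= level <= 10:
        raise ValueError("severity level must be 1-10")
    if level >= 9:
        return "catastrophic"
    if level == 8:
        return "foothills of catastrophic"
    if level >= 5:
        return "major"
    return "minor"

print(severity_band(3))   # minor
print(severity_band(8))   # foothills of catastrophic
print(severity_band(10))  # catastrophic
```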

With a justification-based risk analysis, the first action is defined by the risk level. At level one the initial task is to communicate. Level two is to communicate – monitor. Level three is to communicate – monitor – pause. Level four is to communicate – monitor – pause – suspend. And level five is to communicate – monitor – pause – suspend – cease. The beauty of a justification-based risk analysis is that after corrective action is implemented, and during the follow-up process, the tasks are completed in reverse order until the risk reaches the communicate task level.
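The five-step reaction ladder and its reversal can be sketched as a stack: level N triggers the first N tasks, and the follow-up unwinds them in reverse until only the communicate task remains. The task names come from the text; the stack-based implementation is one illustrative way to model "reversed order".

```python
# The five reaction tasks from the text, escalated by risk level and
# unwound in reverse during follow-up. The data structure is illustrative.

ACTIONS = ["communicate", "monitor", "pause", "suspend", "cease"]

def escalate(risk_level: int) -> list[str]:
    """Tasks triggered at a risk level: level N yields the first N tasks."""
    if not 1 <= risk_level <= 5:
        raise ValueError("risk level must be 1-5")
    return ACTIONS[:risk_level]

def de_escalate(tasks: list[str]) -> list[str]:
    """Follow-up order after corrective action: unwind every task in
    reverse until only the communicate task level remains."""
    return list(reversed(tasks[1:]))

current = escalate(4)
print(current)              # ['communicate', 'monitor', 'pause', 'suspend']
print(de_escalate(current)) # ['suspend', 'pause', 'monitor']
```

The symmetry is the point: nothing is switched off in a different order than it was switched on, which keeps the follow-up process as justifiable as the escalation.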

Catalina9





Tuesday, June 1, 2021

What To Expect From An Audit

By Catalina9 

What we expect of an audit is that it is an unbiased and neutral report of the facts.

Everyone in the aviation industry needs to do audits for one reason or another. Audits might be done for regulatory compliance, for compliance with the enterprise’s safety policy, as a contract compliance agreement, at a customer’s request as a satisfaction evaluation, or after a major occurrence. An airport client must feel assured that operating out of one specific airport does not cause interruptions to passengers due to inadequate maintenance of nav-aids, visual aids, markings, runways, taxiways, or aprons, and that there are no surprises for aircraft, crew, or passengers.

Include in your SMS manual that audit results are not automatically implemented
An airline or charter operator most often carefully researches new airports it is planning to operate out of, and when there is a tie between two or more airports, the one with the best customer service wins the draw. A passenger on an airliner must feel assured that the flight will be occurrence-free, and a shipper of goods must trust the carrier to ensure that their goods arrive at the destination airport in the same condition they were in when first shipped. There are a million considerations and reasons why audits are needed. Since there are several reasons for audits, there are also several expectations of the outcome of an audit. What these expectations are depends on which side of the audit you are on and the scope of the audit.

Let’s take a few minutes and reflect on three different types of audits: the Regulatory compliance audit, the Safety Policy compliance audit, and the Customer Satisfaction compliance audit.

The Regulatory compliance audit is a static audit, where no movements or processes are required for the audit. When an operator’s certificate is issued to an airline, there are zero movements required for that certificate to be issued. However, there are conditions for operations attached to the certificate, which become the scope of regulatory audits. These conditions are management personnel, maintenance personnel, and flight crew. All these positions for an air carrier are certificated positions, and each person must comply with the roles, responsibilities, and privileges of their licenses for the operating certificate to remain valid. For a new certificate holder, at the time the first aircraft leaves the gate for a flight, there is an expectation of an audit that pre-departure regulatory requirements are met, and that all regulatory requirements are met at the closing of the flight upon arrival at the destination. When an audit of an airline is carried out, the first step is to review its operations manuals for regulatory compliance. At the time of issuance of the certificate it was compliant, but over time amendments are added and new regulatory requirements are implemented. One major implementation example is the Safety Management System (SMS), which had an enormous impact on airlines. Their compliance requirements went from a “job well done” to who did the job and how they did it. After manuals are reviewed, operational records are reviewed for compliance. Records from the very first flight, or the first flight since the last audit, to the most current records are reviewed. Regulatory compliance audits are audits of pre-flight compliance, in-flight compliance, and post-flight compliance. Training records, operations records, maintenance records, and crew license records are all audited and assigned a compliance or non-compliance grade. The expectation of a regulatory audit is that any items audited are linked to a regulatory requirement.

A Safety Policy compliance audit is an audit of an enterprise’s Safety Management System. The audit process is the same as for a regulatory compliance audit, with the difference that the audit becomes a job-performance audit. A job-performance audit is about what the task was, when the task was performed, where in the operations the task was assigned, who did the task, why the task was necessary, and how these tasks were performed. The “how” audit is an overarching audit for the other five questions: what, when, where, who, and why. A safety policy audit must answer how a decision was reached for each one of the five questions. E.g., how was a decision reached to select an airplane and crew, how was the timeline for crew-pairing selected, what were the criteria for destinations, and how was it decided who makes the final decision and why was this person selected?

A safety policy to be “safe” is a policy with undefined destinations
A safety policy audit is the most comprehensive audit, since it involves all aspects of operations, each person in those operations, and a complete timeline of those operations. An in-flight example of a safety policy audit is the process for preparing for an emergency upon arrival. A person seated in the emergency exit row is asked prior to takeoff if they are willing and able to assist the flight crew with opening the emergency exit. During the flight, alcohol is also served, so that person could be intoxicated upon arrival as a temporary crew member with limited duties. A safety policy audit conducts interviews of operational personnel, crew members, and maintenance personnel. During these interviews, an auditor may discover that intoxicated personnel are expected to be frontline crew members during an emergency. For each task required by regulatory requirements, the same audit process is applied. Just one simple task may take hours to complete, and it becomes a resource impossibility and impracticality to conduct SMS audits of 100% of the requirements, 100% of the personnel, and 100% of the time. An SMS audit must therefore apply random sampling and statistical process control (SPC) for a confidence level analysis. The industry standard is a 95% confidence level for each element of an SMS to be present for an acceptable audit result.
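A quick way to size a random sample at the 95% confidence level is the statistical "rule of three": if n randomly sampled records show zero nonconformities, the approximate 95% upper confidence bound on the true nonconformance rate is 3/n. The function name and the example bounds are illustrative; the rule itself is a standard approximation, not the article's method.

```python
# Sample sizing for an SMS audit via the "rule of three": with zero
# nonconformities in n random records, the ~95% upper confidence bound
# on the nonconformance rate is 3/n. Example bounds are illustrative.

import math

def sample_size_for_bound(max_rate: float) -> int:
    """Records to sample (all found conforming) to claim, with ~95%
    confidence, that the nonconformance rate is below max_rate."""
    if not 0 < max_rate < 1:
        raise ValueError("max_rate must be between 0 and 1")
    return math.ceil(3 / max_rate)

print(sample_size_for_bound(0.05))  # 60 clean records for a 5% bound
print(sample_size_for_bound(0.01))  # 300 clean records for a 1% bound
```

The rule also shows why 100% auditing is unnecessary: a few hundred well-randomized records already support a strong confidence statement about an element of the SMS.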

A customer satisfaction compliance audit is the simplest audit of all. A customer satisfaction audit is audited against opinions, or industry-standard expectations. A customer may conduct an audit as an opinion of regulatory compliance, as an opinion of safety policy compliance, or as an opinion of conformance to industry expectations. Customer satisfaction auditors are not required to be technical experts in regulatory interpretation, operational experts, or experts in airport operations, but they are experts in providing opinions of their observations based on their operational experience in aviation. A customer satisfaction audit does not issue findings, since the auditor is unqualified to issue findings against regulatory requirements or operational recommendations. They issue opinions and suggestions for operational changes or implementations, as viewed from a customer’s point of view and on behalf of a customer. An operator, be it an airline or airport, makes a decision about whether to implement these changes and how these changes could affect their operations. The criterion for change may be based solely on a customer’s wish, public opinion, or social media trends. An enterprise without a clause in its SMS manual stating that any findings from any type of audit must first be assessed by the enterprise before being accepted or rejected for implementation may be compelled to make changes without knowing the effect.

An auditor has no responsibility for any occurrences an operator may experience in their operations after implementing audit recommendations. A new regulatory requirement may affect operational safety, a safety policy recommendation may affect safety, and the implementation of a customer suggestion may affect safety in operations. In any case, after an audit an operator, be it an airline or airport, must conduct a safety case, or change management assessment, prior to implementing changes, to evaluate the risk impact on their operations. Since there is an inherent hazard in aviation from the time an aircraft is moving under its own power, an operator must monitor what direction the implementation of audit suggestions or requirements is taking and, from their assessment, continue the course or make operational changes to avoid or eliminate hazards on the horizon.


Catalina9





 
