Tuesday, February 22, 2022

THE MINI BLACKBOX


By Catalina9

BACKGROUND

Data recorders are primary tools in aircraft accident investigation. Recordings of an aircraft's condition and operation provide additional data to reconstruct the flight profile during all stages of flight, enhancing the investigation's ability to understand the events leading up to an accident or incident. This information is a critical tool in preventing future accidents or incidents.

 

Flight data recorders (FDR) and cockpit voice recorders (CVR) are considered comprehensive methods of capturing flight data and can assist investigators in determining the reasons for an accident. FDRs record information such as altitude, airspeed, heading, and other aircraft parameters, many times per second. CVRs record radio transmissions and ambient cockpit sounds, such as pilot voices, alarms, and engine noises.


A blackbox is a special gift to be opened after an occurrence



LIGHTWEIGHT BLACKBOX

A lightweight blackbox is a unit which collects cockpit audio and video, or frames of images, and in addition records data against established event-trigger parameters. The unit comes with an internal hard drive for storage and a removable storage card for increased storage space and for uploading into the analysis system. As an aftermarket unit (not installed when the aircraft was manufactured), it is mounted to the bulkhead behind the captain to view and record the pilots' activities, the performance instruments, and the engine instruments. The regulators are working with the industry to implement a requirement to install lightweight flight data recorders in aircraft not currently required to carry cockpit voice recorders or flight data recorders. 

 

The lightweight blackbox is an autonomous unit without direct links to aircraft operational parameters such as yaw, roll, pitch, or engine performance. The unit relies on satellite reception to calculate geoposition, altitude, elevation, rate of descent, groundspeed, yaw, roll, and pitch. Engine performance is not monitored directly but is viewed through the integrated camera. In essence, a lightweight blackbox is a smartphone nailed to the wall. 
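
For illustration only, the sketch below shows how such a satellite-only unit might derive groundspeed and vertical speed from two consecutive GNSS fixes. The field names and the equirectangular distance shortcut are assumptions made for this sketch, not any vendor's implementation, and attitude values can only be inferred from how the track changes, never measured directly.

```python
import math

def derived_parameters(fix1, fix2):
    """Approximate groundspeed and vertical speed from two GNSS fixes.

    Each fix is a dict with latitude/longitude in degrees, altitude in
    feet, and a timestamp in seconds. A satellite-only unit has no direct
    link to the aircraft, so attitude (yaw, roll, pitch) can only be
    inferred from how the track and climb change between fixes.
    """
    dt = fix2["t"] - fix1["t"]                       # seconds between fixes
    if dt <= 0:
        raise ValueError("fixes must be in chronological order")

    # Equirectangular approximation: adequate over a few seconds of flight
    lat1, lat2 = math.radians(fix1["lat"]), math.radians(fix2["lat"])
    dlon = math.radians(fix2["lon"] - fix1["lon"]) * math.cos((lat1 + lat2) / 2)
    dlat = lat2 - lat1
    dist_nm = math.hypot(dlon, dlat) * 3440.065      # mean earth radius in NM

    groundspeed_kts = dist_nm / dt * 3600            # NM per second to NM per hour
    vertical_speed_fpm = (fix2["alt"] - fix1["alt"]) / dt * 60
    return groundspeed_kts, vertical_speed_fpm

# Two illustrative fixes one second apart
fix_a = {"lat": 51.1000, "lon": -114.0200, "alt": 4500, "t": 0}
fix_b = {"lat": 51.1004, "lon": -114.0195, "alt": 4495, "t": 1}
gs, vs = derived_parameters(fix_a, fix_b)
print(f"groundspeed {gs:.0f} kts, vertical speed {vs:.0f} ft/min")
```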

 

A digital animation of the collected data is available for viewing in flight data analysis software, with an overlay on Google Maps or other map suppliers. The data itself can be downloaded to a spreadsheet for analysis. When this spreadsheet data is analyzed correctly, it becomes a valuable tool for continuous safety improvements in operations. However, when data is analyzed only to the exceedance level of established event-trigger parameters, it becomes safety's worst enemy. 

 

SELECTING PARAMETERS

A flight data recorder or cockpit voice recorder is only available for discovery after an accident, or after an incident reportable to the Transportation Safety Board. This data is then used as a tool to learn what events led up to the crash. Exceedance limits analyzed after an incident are aircraft design limits, aircraft performance limits, engine performance limits, and crew comprehension limits. None of these limits are analyzed prior to an accident when operating with a regulatory required FDR and CVR, or a comprehensive blackbox. 

 

With a regulated mini-blackbox, or flight data recorder, several parameters are established and defined as operational exceedances. The exceedances may be set at a lower value than safety in flight operations requires, they may be set at a lower value than what the flight crew is trained for, or they might be set at a lower limit than the manufacturer’s recommendation.

 

There are hundreds of parameters, or exceedances which may be set as event triggers, but some common triggers are excessive roll, excessive roll below 400 feet after takeoff, maximum approach speed below 300 feet, maximum descent rate, maximum G-force, maximum pitch, stabilized approach descent rate, or stabilized approach roll.  

 

As examples, event exceedances could be an excessive roll greater than 30° of bank, excessive roll below 400 feet greater than 5° of bank, maximum approach speed below 300 feet for a light twin greater than 130 KTS airspeed, a maximum descent rate greater than 2,000 FT/min, maximum G-force of 4.0 G or greater, maximum pitch greater than 20° of nose-up pitch, stabilized approach descent rate greater than 1,000 FT/min, or stabilized approach roll at 10° of bank. 
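
To illustrate how an operator might check recorded samples against such event triggers, here is a minimal sketch. The threshold values mirror the examples above, while the field names and data layout are assumptions made for the sketch rather than any vendor's format.

```python
# Illustrative event-trigger check against recorded flight samples.
# Thresholds follow the examples in the text; field names are assumptions.

TRIGGERS = [
    ("excessive roll",             lambda s: abs(s["roll_deg"]) > 30),
    ("excessive roll below 400'",  lambda s: s["agl_ft"] < 400 and abs(s["roll_deg"]) > 5),
    ("approach speed below 300'",  lambda s: s["agl_ft"] < 300 and s["speed_kts"] > 130),
    ("descent rate",               lambda s: s["vs_fpm"] < -2000),
    ("g-force",                    lambda s: s["g_load"] >= 4.0),
    ("pitch up",                   lambda s: s["pitch_deg"] > 20),
]

def flag_events(samples):
    """Return (sample index, trigger name) for every exceedance found."""
    hits = []
    for i, sample in enumerate(samples):
        for name, exceeded in TRIGGERS:
            if exceeded(sample):
                hits.append((i, name))
    return hits

samples = [
    {"roll_deg": 12, "agl_ft": 250,  "speed_kts": 136, "vs_fpm": -800, "g_load": 1.1, "pitch_deg": 3},
    {"roll_deg": 35, "agl_ft": 2500, "speed_kts": 150, "vs_fpm": -500, "g_load": 1.3, "pitch_deg": 5},
]
for index, name in flag_events(samples):
    print(f"sample {index}: {name}")
```

Counting hits this way says nothing about whether an exceedance reflects a common cause or a special cause, which is the distinction the sections below turn to.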


Boundaries have consequences, except for the lightweight blackbox's arbitrary event triggers.


These are examples of exceedances available for an operator to view and assess the flight behavior of their company flight crew. A flight crew could be dinged with an exceedance violation for excessive airspeed while maintaining an IAS of 120 KTS on a high, hot, and light quartering-tailwind day, since 120 KTS indicated could correspond to a groundspeed of 135-140 KTS. A flight crew on a regular run to such an airport could easily collect more red flags than crews arriving at lower and cooler elevations. When this type of data, where the input is groundspeed but the analysis treats it as indicated airspeed, is assessed only to its level of exceedance, the air operator has passed the fork in the road and already taken the wrong turn.

When data is analyzed only to the level of exceedance, the criterion for success is to see a reduction of the bars in the graph. What is not understood with this approach is that fewer events does not equal an improved level of safety in flying. It is a real temptation when conducting these analyses to overcontrol all processes, since an exceedance in itself is treated as an unacceptable level of hazard, independent of which event trigger is exceeded. The only way to run and operate a successful mini blackbox is to apply statistical process control, for example with spcforexcel.com, a highly recommended and user-friendly statistical process control software. It is impossible to run and operate a true safety management system without a statistical process control system and without establishing safety critical areas and safety critical functions to scale hazards within human factors, supervision factors, organizational factors, and environmental factors. 
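
The arithmetic behind the high, hot, and light tailwind example can be sketched as follows, using the common rule of thumb that true airspeed increases by roughly 2% per 1,000 feet of density altitude. The numbers are illustrative assumptions, not performance data for any particular aircraft.

```python
def groundspeed_from_ias(ias_kts, density_altitude_ft, tailwind_kts):
    """Rule-of-thumb sketch: TAS is roughly IAS plus 2% per 1,000 ft of
    density altitude, and groundspeed is TAS plus the tailwind component."""
    tas_kts = ias_kts * (1 + 0.02 * density_altitude_ft / 1000)
    return tas_kts + tailwind_kts

# 120 KIAS on a hot day at a high-elevation airport with a quartering tailwind
print(round(groundspeed_from_ias(120, 5000, 8)))   # roughly 140 kts over the ground
```

A satellite-only recorder sees the 140, not the 120, which is why an indicated-airspeed trigger applied to groundspeed data can flag a perfectly flown approach.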

 

    

OVERCONTROLLING A PROCESS  

When a policy, system, objective, or process is not comprehended, the only reaction is to overcontrol the process to ensure, in one's own mind, that everything possible was done to improve safety for the public and the flight crew. Take this to a practical level where the speed limit is 50: any exceedance is unacceptable, since it is the limit that is exceeded, and any associated issues become irrelevant. An operator may feel pressure, or a temptation, to overcontrol the process to ensure that no values exceed 50. If the number 50 is assigned as a level of safety, the next question to answer is what makes a speed of 51 unsafe, which is also impossible to answer, and overcontrolling, or enforcing the 50 limit, becomes even more forceful, or the limit could be raised to 100. 

 

The same holds true for arbitrarily established event triggers in a flight data recording system. When analyzing only to the event-trigger exceedance level, the three options available to an operator are to enforce, or overcontrol, flight crew behaviors; to ignore the exceedances; or to establish event-trigger parameters beyond any reasonably expected flight maneuver, i.e. to set the parameters to a level that scares the flight crew. 

 

WHEN VARIATIONS ARE NOT UNDERSTOOD

If an operator or management does not understand the information contained in variation, the operator will overcontrol the process and increase the variation itself. 
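
A hedged illustration of that point is Deming's funnel experiment: if every observed deviation from target is "corrected" by adjusting the aim, the spread of results grows instead of shrinking. The simulation below uses arbitrary numbers and a purely random, stable process.

```python
import random
import statistics

random.seed(1)

def run(n, overcontrol):
    """Simulate a stable process producing only common cause variation.
    With overcontrol=True, the aim is adjusted by the full deviation of
    every result (rule 2 of Deming's funnel experiment)."""
    aim, results = 0.0, []
    for _ in range(n):
        value = aim + random.gauss(0, 1)    # target plus common cause noise
        results.append(value)
        if overcontrol:
            aim -= value                    # chase every deviation back to target
    return statistics.stdev(results)

print("left alone    :", round(run(5000, overcontrol=False), 2))   # about 1.0
print("overcontrolled:", round(run(5000, overcontrol=True), 2))    # about 1.4
```

The overcontrolled run spreads roughly 40% wider even though nothing about the underlying process changed.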

 

To understand variation, it is essential to understand the difference between a process that is in statistical control (stable) and a process that is out of control (unstable). A process is in statistical control if there are no special causes present, only the causes required within the process for the process to function as intended. An example is the migratory bird season: seasonal migratory bird activity is a common cause variation and part of an in-control process, but it can easily be mistaken for an out-of-control process and a special cause variation. 

 

A major fault, which exists in many places today, is the assumption that every event is attributable to someone or is related to one event. The fact is, however, that most troubles with aeronautical violations lie in the system and are not attributable to one person or one event.

 

There are two types of variation. Common cause variation is variation due to the system. Reducing common cause variation is the responsibility of management. Special cause variation is variation due to sporadic, unnatural events. Special cause variation is the responsibility of the flight crew or front-line personnel to correct. 

 

There are two types of mistakes we can make when looking at data. One is to assume that a data point is due to a special cause when it is in fact due to a common cause; the other is to assume that a data point is due to a common cause when it is in fact due to a special cause. As an example, the migratory bird season is a common cause variation, and expected within the process, but could easily be assumed to be a special cause variation since birds are a hazard to aircraft operations. An exceedance of an event trigger could be viewed as a special cause variation, but when analyzed within a statistical process control system, it is actually a common cause variation. 
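
One way to make that distinction operational, sketched here with made-up monthly exceedance counts, is an individuals (XmR) control chart: the limits sit at the mean plus or minus 2.66 times the average moving range, and only points outside those limits are treated as candidate special causes.

```python
def xmr_limits(values):
    """Individuals (XmR) chart: control limits from the average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Made-up monthly counts of event-trigger exceedances
counts = [7, 6, 8, 7, 7, 6, 8, 19, 7, 6, 8, 7]
lcl, mean, ucl = xmr_limits(counts)

for month, count in enumerate(counts, start=1):
    verdict = "candidate special cause" if count > ucl or count < lcl else "common cause"
    print(f"month {month:2d}: {count:3d}  ({verdict})")
```

Eleven of the twelve months sit inside the limits and belong to the system; only the spike stands out as worth a special cause investigation, which is the opposite of treating every bar on the graph as a violation.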

 

WHEN SAFETY BECOMES A HAZARD

When a safety reaction to an event is to overcorrect, the correction is not at all useful, even if it includes the word safety. When safety is used as an argument for safety itself, the safety card is being played. Safety becomes a hazard when a person in authority makes event-trigger decisions based on random sampling or emotional criteria. Within an SMS enterprise, if two event triggers are disagreed on, it is tempting to split the numbers and meet in the middle. This is beyond an acceptable process for establishing triggers, and it is also an out-of-control process, since a special cause variation, i.e. how the number was selected, became the determining factor in selecting an operational safety exceedance trigger. When the selection process is out of control, the process is only in control to the level at which it conforms to the selected parameters, not as the process relates to safety itself.   

 

 

Catalina9

Friday, February 4, 2022

Anniversary


By Catalina9

This month it is nine years since I wrote the first post, and 235 posts later there is no end to writing Safety Management System (SMS) stories. An SMS is a tremendous system, and an asset to an enterprise, when applied as an oversight tool, but it can also become its own worst enemy when applied as a compliance tool. A high compliance score is often associated with safety in operations, and if decisions are based solely on a compliance score, there is a risk of overlooking, or assuming away, the inherent risks in aviation. An inherent risk is also a common cause variation. On the other hand, when SMS is applied as an oversight tool, it is applied to the outcome or results, with an analysis of occurrences, or special cause variation. A line in one of the first posts is still true: one grassroots method to evaluate the effectiveness of an SMS is to learn how many hazard reports an AE (accountable executive) has submitted. In an enterprise where the AE actively submits hazard reports, there is hope that other personnel will also voluntarily submit reports. In an enterprise where the AE does not submit hazard reports, the organizational culture may come to reflect that behavior. That very first post is an interesting post about aviation safety and the “trial-and-error” method. 


Hazard comprehension has evolved over the years.


In the early years of aviation, hazard identification was short-sighted. If a hazard was not an immediate threat, it was not of any concern. In a later post, an aircraft was departing from a grass strip with the general public watching from a breathtakingly close view. An on-the-fly risk assessment at that time was that as long as the propeller is pointing away from me, I am safe. The physics of a departing taildragger was excluded from the risk assessment, since the public did not know about the hazards involved. When humans adapt to hazards, we establish subjective standards in hazard mitigation. Airports, airlines, pilots, mechanics, and aviation leaders also have the ability to adapt to hazards. When humans adapt, subjective standards are applied to hazard management, which is a scenario for hazard escalation. This is also true in the SMS world today: without leadership, subjective, or opinion-based, standards are applied to hazard mitigation. An example of an opinion-based hazard assessment is that the Titanic was an unsinkable ship. In 1911, Shipbuilder magazine published an article on the White Star Line’s sister ships Titanic and Olympic. The article described the construction of the ships and concluded that the Titanic was practically unsinkable. 


SMS is to know the next move to achieve the desired result.

One of the most difficult decisions for a brand-new SMS enterprise is how to manage interpersonal conflicts, or the reporting of another person for making mistakes. Everyone wants to be safe, and “arguing with safety” is considered unacceptable behavior. When a person states that something is unsafe, or that an action must be initiated for the safety of a person, an aircraft, or the public, they have the upper hand, and many will agree to avoid being viewed as being against safety. A previous post was written about being SMS’d. When someone is SMS’d, the person submitting the report expects that the other person will be blamed for any occurrences related to the issue. Using a hazard report to get someone SMS’d is just as ineffective as an attempt to eliminate all hazards. It is not the correct tool, it does not improve aviation safety, and the desired result is not reached. Even if a fully competent person is included in the equation, the outcome does not improve safety. A process may work great, and all the pieces of the puzzle may fit, but the design inputs are incorrect when placing blame on a person. An effective SMS root cause analysis considers at least human factors, organizational factors, supervision factors, and environmental factors. When an SMS enterprise turns these factors against one single person who has been SMS’d, the outcome still reflects their organizational safety culture. One of the greatest hazards within an SMS is to assume that a person reporting an issue as a safety issue is correct.   


One of the few accidents where a speculative, or subjective cause still exists.

A prior post was about training and how training has evolved. Training is the foundation of a successful SMS. In the pre-SMS days, a pilot was expected to know everything there was to know. If there was a question, the answer was that you should know since you have a license. Back then, training was only a function of a regulatory requirement and was also viewed as a waste of time. It is a misconception that training has only one function, learning, and that this function is to become qualified. Human culture associates training with learning, where learning begins in preschool, graduates to kindergarten, then elementary, and finally to high school. Each step is required as a level of learning to qualify for the next level. These are building blocks of learning, moving from unknown to known: instilling knowledge in someone who did not have that knowledge. Training has several other functions and cannot only be associated with learning, or a lack of knowledge. The functions of training are associated with human performance, which in turn has multiple subsections. Some of these subsections are human behavior, organizational performance, human factors, medical performance, aviation performance, optimal operational design, interaction modeling, and more. In the SMS world, the concept of training has evolved to a high level of importance and is a component of both regulatory compliance and safety in operations. 

 

There are several interesting posts in this blog. Most of the posts are about the Safety Management System, and how aviation safety evolved into SMS. Several posts were used as reference when building one of the most successful SMS manuals. Several times a post would also ask a relevant and everlasting question: “Why does the Global Aviation Industry, being Airlines or Airports, need a Safety Management System (SMS) today, when they were safe yesterday without an SMS?” Take a minute and review the posts to learn how SMS has evolved within your enterprise. 

 

 

 

Catalina9

