Safety Performance Indicators: Turning Data into Meaning

  • Margrét Hrefna Pétursdóttir
  • Oct 31
  • 4 min read


In the last article, we discussed the purpose and value of Safety Performance Indicators (SPIs) and how they show whether safety controls are working as intended. Today, let’s take that one step further: how to build and calculate SPIs that actually reflect your operation.

[Figure: Digital aviation dashboard showing the ten Safety Performance Indicators discussed below, with colored bars representing safety performance levels.]
Ten key indicators — monitoring what truly matters in aviation safety

Choosing the Right Safety Performance Indicators

There are many ways to structure SPIs, but I’ve always believed in starting with the basics: the events that truly define operational safety.


Years ago, my aviation authority asked me to build our indicators based on what was called the “Significant Seven.” These originated from the UK Civil Aviation Authority and were later adopted by other authorities as a guideline for the main safety risks facing flight operations:

1️⃣ Runway Excursion

2️⃣ Runway Incursion

3️⃣ Airborne Conflict

4️⃣ Loss of Control

5️⃣ Controlled Flight Into Terrain (CFIT)

6️⃣ Ground Operations

7️⃣ Fire

These seven categories became my foundation: always monitored, always reported.

But as our operation evolved, I added three more, which I still consider essential today:

8️⃣ Technical Operations

9️⃣ SACA/SAFA Ratio

🔟 Reporting Culture

Together, these ten indicators provided a balanced view of flight operations, ground handling, and organizational health.

If you are developing your own SPIs, I recommend using the European Plan for Aviation Safety (EPAS) as a reference point, since it reflects the most up-to-date priorities and safety concerns in Europe.

In addition to these core indicators, it’s also valuable to develop temporary SPIs when specific issues arise, either within your own operation or as part of broader industry focus. For example, if there is an increase in a certain type of event, or a new risk trend identified in EPAS or through safety data sharing networks, a focused SPI can be created to monitor it. Once the issue stabilizes or improvement is confirmed, that SPI can be retired. This keeps your monitoring both consistent and adaptable.


Selecting What to Measure

Of course, we hope that a runway excursion, airborne conflict, or CFIT never occurs in an operator’s lifetime. But SPIs aren’t only about major outcomes; they’re about the precursors that could lead there.

In aviation safety, we often talk about two categories of indicators: lagging and leading.

  • Lagging indicators measure what has already happened, such as an unstable approach, a hard landing, or a maintenance-related delay. They show the outcome of system performance.

  • Leading indicators focus on what could happen: conditions or behaviors that point to potential risks, such as incomplete checklists, late risk assessments, or deferred maintenance tasks.

A mature SPI program maintains a balance of both. Lagging indicators confirm where risk has already materialized, while leading indicators help anticipate where the next weakness might appear. Together, they tell the full safety story.

For example:

  • Unstable approaches and deep landings can be precursors to runway excursions.

  • Flight path deviations and altitude deviations can be precursors to airborne conflicts.

  • Maintenance errors or technical write-ups can reveal trends before they become findings in a SAFA inspection.

By tracking the contributing events, not just the end result, SPIs become truly predictive.
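One simple way to operationalize this precursor tracking is to map each reported event type to the outcome-based SPI category it feeds. The event-type names below are hypothetical placeholders, not a standard taxonomy; a minimal sketch might look like this:

```python
# Map precursor (leading) events to the SPI category they feed.
# Event-type names here are illustrative assumptions, not a standard taxonomy.
PRECURSOR_TO_SPI = {
    "unstable_approach": "Runway Excursion",
    "deep_landing": "Runway Excursion",
    "flight_path_deviation": "Airborne Conflict",
    "altitude_deviation": "Airborne Conflict",
    "maintenance_error": "Technical Operations",
    "technical_write_up": "Technical Operations",
}

def spi_category(event_type: str) -> str:
    # Unmapped events go to a review bucket rather than being silently dropped.
    return PRECURSOR_TO_SPI.get(event_type, "Unclassified - review")

print(spi_category("unstable_approach"))  # Runway Excursion
```

Keeping unmapped events in a review bucket matters: new precursor types should trigger a taxonomy update, not disappear from the data.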


Normalizing the Data

One of the most important steps in SPI development is normalizing the data. That simply means putting numbers into context.

Depending on the size of your operation, you can divide your total number of occurrences by flight hours or flight sectors (cycles). I’ve always preferred using sectors, since they give a direct reflection of operational activity.

Why normalize? Because flight volume changes seasonally. A spike in reports during summer doesn’t necessarily mean the operation became less safe; it often means you were simply flying more. By dividing occurrences by the number of flights, you can compare performance across months or seasons fairly.
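The normalization step is a one-line calculation. Here is a minimal sketch, using made-up monthly figures to show how two months with different raw counts can have the same normalized rate:

```python
def rate_per_100_sectors(occurrences: int, sectors: int) -> float:
    """Occurrence rate normalized per 100 flight sectors."""
    if sectors == 0:
        raise ValueError("No sectors flown; rate is undefined.")
    return occurrences / sectors * 100

# Hypothetical figures: a busy summer month vs. a quiet winter month.
summer = rate_per_100_sectors(occurrences=12, sectors=800)  # 1.5
winter = rate_per_100_sectors(occurrences=6, sectors=400)   # 1.5
# Raw counts doubled in summer, but the normalized rate is identical.
```

Seen side by side, the raw counts would suggest safety degraded in summer; the normalized rates show performance was steady.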


Weighting the Risk

Every occurrence report is risk-assessed. This means not all reports carry the same significance, and that difference should be reflected in your SPI.


Here’s the system I’ve used:

🟩 Green (low severity) = 0.25

🟨 Yellow (medium severity) = 1.0

🟥 Red (high severity) = 2.0


By weighting the data, you can calculate a more meaningful safety performance score for each category.


The Formula

The monthly SPI formula I’ve used looks like this:

SPI = (0.25 × sum of Green + 1.0 × sum of Yellow + 2.0 × sum of Red) / Flight Sectors × 100

This produces a simple percentage that shows the weighted rate of occurrences per 100 sectors. It’s practical, scalable, and easy to visualize in dashboards or trend charts.

Even smaller operators can apply this method effectively. It provides enough granularity to spot patterns and prioritize resources, without creating unnecessary administrative load.
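Putting the weighting and normalization together, the monthly calculation can be sketched in a few lines. The occurrence counts below are hypothetical; the weights and formula follow the scheme described above:

```python
# Severity weights from the risk-assessment scheme described above.
WEIGHTS = {"green": 0.25, "yellow": 1.0, "red": 2.0}

def monthly_spi(green: int, yellow: int, red: int, sectors: int) -> float:
    """Weighted occurrence rate per 100 flight sectors."""
    if sectors == 0:
        raise ValueError("No sectors flown; SPI is undefined.")
    weighted = (WEIGHTS["green"] * green
                + WEIGHTS["yellow"] * yellow
                + WEIGHTS["red"] * red)
    return weighted / sectors * 100

# Hypothetical month: 8 green, 3 yellow, 1 red report across 500 sectors.
# Weighted sum = 0.25*8 + 1.0*3 + 2.0*1 = 7.0 -> 7.0 / 500 * 100 = 1.4
print(monthly_spi(green=8, yellow=3, red=1, sectors=500))  # 1.4
```

Computed per category and per month, this yields the trend lines that feed a dashboard like the one pictured at the top of the article.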


Making It Work

What matters most is consistency. Review your SPIs monthly or quarterly, depending on your flight activity, but do it regularly. If the numbers start to shift, don’t jump to conclusions. Look for the story behind the trend.

Each data point represents an opportunity to ask:

  • What changed operationally?

  • Was there a new route, procedure, or aircraft type introduced?

  • Are human factors or workload influencing performance?

That’s where the real learning begins.


Final Thought

Safety Performance Indicators don’t need to be complicated; they need to be relevant. When SPIs are designed around real operational risks, normalized for activity, and supported by consistent review, they become one of the most powerful tools in your Safety Management System.

They don’t just measure safety. They help you manage it.
