Attend any conference on aviation security and you will hear speaker after speaker, including those representing technology manufacturers, speak about the need to invest in human factors.
Likewise, you will hear appeals for a more intelligent approach to aviation security, whereby not all passengers are screened as if they pose an equal threat.
Airports and airlines alike, it seems, seek differentiation, yet the regulators are reluctant to embrace a passenger screening solution where ‘gut feeling’ is as important a decision-making tool as an alarm from an explosive detection system.
So, is there a place for passenger profiling in the aviation security system of the future and, if so, how do we move forward to incorporate it effectively?
For far too long the industry has simply accepted the criticism that it is reactive in nature. Indeed, it would appear that many just feel that it is the nature of the beast. The reality is somewhat different, however: we can be proactive, provided we are prepared to invest in human beings rather than place our entire faith in machines that can identify prohibited items but have zero capability of sounding the alarm for a passenger with negative intent.
The three key arguments against the deployment of profiling are firstly, that decisions will end up being made on racial grounds, secondly, that we simply can’t treat people differently, and thirdly, that it is impossible to test that the system is working.
These negative comments may have their place on television chat shows, but for industry professionals to be citing them simply demonstrates their lack of imagination or sense of reality.
I would even go as far as to say that this resistance to change, and blind acceptance of the fallibilities of our existing system, are as much our enemy as those terrorists we are supposed to be guarding ourselves from.
It could be argued that airport security hasn’t done too badly over the decade since the terrorist attacks of 9/11. I would not disagree, but there have also been some glaring failures to identify terrorist attacks, where the result has been the loss of aircraft (as in Russia in 2004) or where we have been lucky that the devices have not detonated as planned, or where we can be thankful that heroic crew members saved the day.
Deterrent factor aside, the airport screening checkpoint is simply ‘not fit for purpose’, unless we can inject the ability to detect the individual with negative intent.
Maybe we should be clear as to what that purpose is. It has to be accepted that aviation security is not about preventing the next al-Qaeda attack; it’s about preventing any unlawful attack against civil aviation.
Suicidal terrorists hijacking planes and flying them into buildings or infiltrating devices onto aircraft in shoes or underwear may be the big story for the mass media, but we can, and ought to, be less sensational about the issue.
In training courses, we need to start emphasising the threat of the criminal or psychologically disturbed individual, rather than focusing excessively on the likes of Messrs Reid, Abdulmutallab and Atta. This both addresses the issue and helps prevent decisions being made on racial grounds.
Furthermore, it creates a system that has value around the globe, regardless of the degree of terrorist threat to which an airport or State may be exposed.
Immigration and Customs authorities do differentiate at airports, and successfully too, so the argument that it’s not deployable carries little weight unless, of course, the powers-that-be simply don’t want it to work.
And maybe that is the crux of the matter – regulators really don’t want a system that they can’t effectively test. For example, X-ray operators can be evaluated by TIP (threat image projection) images during routine operations and can sit batteries of exams using computer-based training solutions in the classroom, but how can regulators test whether a screener can identify intent?
To make an analogy with the medical profession, whilst there are campaigns for routine screening for certain conditions, in the majority of cases we only visit doctors when we feel that something might be wrong.
Is the medical profession criticised for failing to have an effective test that we could all take each year to ensure that we are fit and healthy? Of course it isn’t. And there are two reasons for this. Firstly, such a test would not necessarily identify all diseases and, secondly, it would be regarded as a waste of resources.
So, too, with airport security – there is little value in identifying many of the prohibited items that regulators claim demonstrate the success of the system, as they were never going to be used against the industry.
Like doctors, our screeners are more useful to us if they focus on indicators of mal-intent rather than on blindly screening everyone.
There is much talk in the industry of differentiating passengers based on pre-screening, a process potentially carried out by a government agency, whereby passengers about whom sufficient information exists will be deemed to be ‘trusted’ and passengers about whom no information exists may be regarded as posing an ‘elevated risk’ to a flight.
The analysis of data is certainly one of the building blocks for an effective profiling solution, but if we build a system that is based on data analysis rather than behavioural analysis, we have not improved the process at all.
Indeed, we have arguably made it worse as it offers the cleanskin the opportunity to be perceived as ‘trusted’.
I do not believe in the concept of trusted passengers. One only has to look at the significant incidents on board aircraft in February and March of this year to see why I draw this conclusion.
On February 11, it is alleged that an employee of TAM Airlines attacked the flight crew en route from Uruguay to Brazil, causing the aircraft to dive as the pilots struggled to regain control of the aircraft whilst the flight attendants restrained the assailant.
And in March, we saw an incident involving an American Airlines flight attendant having to be restrained on board a flight in Dallas whilst it was taxiing for departure, and a JetBlue pilot having to be restrained en route from New York to Las Vegas after “going berserk”, forcing the flight to make a ‘medical’ emergency landing in Amarillo.
In all these cases, I am pretty sure that the perpetrators would have been classified as ‘trusted’ as they were industry insiders.
So, if we can’t trust them, how do we begin to trust those we know nothing about aside, perhaps, from whether or not they have a job or possess a credit card?
Behavioural analysis is a process that needs to take place on the day of travel and must be performed by a human being. As keen as we are to automate the check-in and screening processes in order to enhance passenger facilitation, in my opinion, excessively automating processes is counter-productive as we end up only being able to identify prohibited items and reduce the opportunity to identify negative intent.
Part of the challenge has been one of semantics and the negative connotations associated with the word ‘profiling’. Some prefer phrases such as behavioural analysis, yet this implies that we are hoping to identify a potentially threatening passenger based on their appearance and behaviour (at the airport) alone.
The beauty of profiling, or to use the phrase I prefer, passenger risk assessment, is that it combines behavioural analysis with data analysis.
It is a solution that looks at the baseline expectations of passengers; those baselines will vary depending on the airport, the airline, the day of the week, the time of year and the destination.
To deploy the system effectively requires the right personnel, and that – with or without profiling – is the first hurdle we need to overcome. States can run trials, but the solution will only really work once the right calibre screeners are in place.
Then again, airport security will only be able to identify the threats of the future once we recognise that, in many parts of the world, it’s not only the process that is not ‘fit for purpose’, it’s the manpower, who are just not up to the task of understanding how terrorists, criminals, unruly passengers and psychologically disturbed individuals act.
Profiling works and has proven itself more effective than most screening technologies, but the debate we need to have now is not how we create the legal framework for it to operate, or how we can test screeners, but rather how we can professionalise the industry by employing the right people to perform the techniques effectively.
That will cost, but the cost of not doing so will be higher still.