The infamous “red-shirt drill” is a test of a lifeguard’s vigilance. It works like this: the aquatic facility manager or head lifeguard hands a swimmer a red shirt (or a red cap, a red ball, or a dark silhouette of a submerged victim; such props are sometimes collectively known as VATs, or vigilance awareness tests). The swimmer brings the prop into the water close to the lifeguard about to be tested and, upon receiving a signal from the office, puts on the red shirt or cap, releases the red ball, or drops the silhouette to the bottom of the pool. Inside the office, a stopwatch is clicked on. The test has begun. Only this is not a fair test of a lifeguard’s surveillance capacity; it is a biased test that can lead the tester to draw a false conclusion.
I have long argued against these types of drills and tests. I was subjected to them in the 1970s when I began lifeguarding, and I learned very soon that they were not an accurate indicator of surveillance effectiveness.
Putting this issue aside for a moment, there are other reasons why this type of drill or test is ill-advised. First and foremost, conducting this drill is a violation of the RID factor, specifically the intrusion of a secondary duty (a supervisor-imposed drill or test) while the lifeguard being tested is engaged in a primary duty (supervising the public and looking for real victims to rescue). Second, this drill puts the public at risk because the lifeguard leaves the station without a real victim to rescue, and the drill is conducted with the public in the way, where they can be jumped on, swum over, struck with a rescue tube, and so on. In the case of the red-shirt drills I remember, the participant wearing the shirt had to be rescued, subjecting the “victim” to still more risk.
The red-shirt drill and all of its variations are an old-school method of testing lifeguard efficiency. In more modern times, it was revived by organizations like Ellis and Associates, Poseidon Technologies, and others as a way of proving that something is wrong with lifeguarding and must be improved. For example, consider an article written by Joshua L. Brener of Poseidon Technologies in 2002: “Your Lifeguards Watch, but Do They See?”
This article mentions a test conducted by Ellis and Associates, who dropped 500 manikins in pools across the United States without telling the lifeguards beforehand and then timed each lifeguard’s response. The problem with a test like this is that the results are far from scientific, and they do not take into account the operation of each lifeguard’s thought process with regard to situational awareness, principles of attention capture, sensorimotor response, and the like.
On the very surface of the issue (no pun intended), a lifeguard actively looking for real hazards and victims, and unaware that a test was being conducted, might react with confusion at the sudden appearance of a manikin in the water. Or he/she might overlook it entirely, because honed situational awareness and sensorimotor skills may discount a detected object that is not part of the set of targets being sought. And this decision to overlook or discount the manikin may initially occur at less than a conscious level. To testers and observers, it simply looks like the lifeguard is not focused and not “really looking.”
(Ironically, Mr. Brener’s company, Poseidon Technologies, which uses cameras and sensors to detect unmoving victims and signal the lifeguards, has had the opposite problem. An article in 2005 reported that Poseidon was reacting to the shadows cast into a pool by birds and clouds and sounding the alarm when no one was in danger. Although devices like Poseidon can work tirelessly, they do not have the capacity to be situationally aware: to make a judgment and throw out false stimuli that resemble a drowning victim but are not one.)
This means that a lifeguard who is focused and doing his/her job may not have scored well on the Ellis and Associates test. Obviously, lifeguards who are not focused and watching would do poorly as well. The trouble with this test is that you cannot know why a lifeguard scored poorly or even why a lifeguard scored well.
For example, if lifeguards know beforehand that they may be tested, and a lifeguard scores well, that may mean the lifeguard was tipped off about the test, or that he/she was looking for the test to the exclusion of looking for real victims and hazards. At the very least, these tests, when announced beforehand, compete with the lifeguard’s focus on real emergencies.
In my first experience with a red-shirt drill, I was accidentally tipped off by the 12-year-old they sent in with the red shirt. I remember that I was scanning the pool when I saw her. She was looking at me with a devilish smile, the only person in the water looking at me and smiling. It seemed strange, but I didn’t think much of it until I looked toward the office. There, I saw the pool manager’s silhouette in the window, staring out and holding an object in his hand. Others in the office were standing and looking out the window as well.
When my visual sweep came around again, I saw the little girl underwater wearing a red shirt. It still took a few seconds for me to process this and switch from scanning mode to “this is a test” mode. Realizing I was being tested, I blew my whistle and rescued my “victim.” I got a really great time, which was included in my personnel file. If the girl hadn’t stared at me with her big smile, I might never have noticed her in that red shirt!
Some of my fellow guards scored poorly in later tests, and a few were even warned that they had to improve. All this time, I was thinking how unfair and biased such a test was.
Today, I still think of these tests as unfair, biased, and hazardous to the surveillance process (as an intrusion of a secondary duty). Every now and then, I read something about a supervisor dropping a silhouette or a manikin on the bottom of a pool to test lifeguard awareness. “Back to the Stone Age,” I think, and I worry about all the ways these tests, designed to improve lifeguard attentiveness, do the opposite and may even cause good lifeguards to be falsely evaluated.