No evidence for adverse effects of text-messaging

Mobile telephone with message screen. Image: Mark Diamond.

A paper in the journal Bioelectromagnetics [1] recently reported a study (part of a longer-term project called MoRPhEUS) of the effects of mobile telephone usage on the cognitive function of young adolescents. Surprisingly, given the obscurity of the journal, it took only a few days before Google News was showing that over 1000 sites, including newspapers such as the Telegraph, had picked up the story. Under headlines like “Predictive text makes teens prone to making mistakes in life”, “How predictive text messaging takes it’s toll on a child’s brain”, and “Text first, think later: Predictive text makes teens even more impulsive”, journalists presented a doom-and-gloom view of adolescent cognitive development and referred to the study as “groundbreaking”.

I’d be surprised, given the extraordinary publicity that the story has attracted, if the claims of the researchers did not enter into that vast primordial soup of “things everyone knows to be true” … an especially worrying prospect given that there was no evidence at all for the claims!

A detailed critique of the study would take several pages, but the most glaring defects are quickly recognized by looking at the two things that were most highlighted by journalists — “impulsiveness”, and “predictive text messaging”.

Speed does not imply impulsiveness

Consider the following task. A complex scene is shown on a computer screen in front of you. Your job is to decide whether Wally is hidden in the picture or not. You have to answer “yes” or “no”. If you answer correctly, you get paid $1.00. You get nothing if you’re wrong. Either way, the next picture flashes up immediately you answer. If Wally is hidden in half of the pictures and not in the other half, then what strategy should you follow to maximize your winnings? Answer: you should not waste your time looking for Wally! You should just say “yes” or “no” as quickly as you can, without regard to whether you are correct or not. That way you will maximize the number of scenes that you are shown in the time available and, on average, you will earn 50 cents per scene.
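The expected-value argument can be checked with a quick simulation. The accuracy levels and per-picture times below are illustrative assumptions of mine, not figures from any study:

```python
import random

def earnings_per_second(trials, accuracy, seconds_per_trial, payoff=1.00):
    """Average dollars earned per second for a given speed/accuracy strategy."""
    random.seed(0)  # reproducible illustration
    correct = sum(random.random() < accuracy for _ in range(trials))
    return correct * payoff / (trials * seconds_per_trial)

# Guess instantly: 50% accuracy, roughly 1 second per picture (assumed)
guess = earnings_per_second(10_000, accuracy=0.50, seconds_per_trial=1.0)

# Search carefully: 95% accuracy, roughly 20 seconds per picture (assumed)
search = earnings_per_second(10_000, accuracy=0.95, seconds_per_trial=20.0)

print(f"guessing: ${guess:.2f}/s, searching: ${search:.2f}/s")
```

With these assumed numbers, blind guessing pays about ten times better per second than careful searching, which is exactly why an instruction that rewards only speed invites guessing.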

Now consider a very slight variant: same setup, same display, same pictures. The only difference is that this time there is no money to win and instead I tell you that I will cut off one of your fingers each time you make a mistake. What strategy will maximize your profits (i.e., your remaining fingers) this time? I’ll let you in on a secret — you should go slowly and carefully, taking all day over one picture if you need to so as to make sure that you answer correctly.

My purpose in presenting the preceding scenarios is to make it starkly apparent that speed and accuracy in many tests cannot both be maximized. That is, you cannot go as fast as possible and simultaneously be as accurate as possible. Rather, speed and accuracy trade-off against one another; greater speed reduces accuracy, and conversely, aiming for greater accuracy will slow you down. The critical thing, however, is that in some circumstances speed is better than accuracy, and vice versa. Which choice is better depends on the task, the actual instructions, the instructions as they are understood by the person being tested, the payoffs, and the threats.

The Stroop task in the Bioelectromagnetics paper

Now consider the task in the Bioelectromagnetics paper that led to the claim that predictive text messaging was making adolescents more impulsive. The task was the Stroop [2] test which requires the person being tested to identify the colour in which words are printed on the computer screen. It sounds easy but it is not. What makes it hard is that the words on the screen are colour names, like RED, GREEN, BLUE, YELLOW but they are printed in colours that don’t match the word. For example, you might be shown a list like RED BLUE GREEN YELLOW YELLOW GREEN BLUE and you have to call out the print colour.

As you might expect, speed and accuracy cannot both be maximized in the Stroop test and, as with the “Where’s Wally” game, they trade off against one another. If I offer to pay you $1 for each colour that you answer correctly, then you are likely to do best just by saying “green”, no matter what is on the computer screen. That way, if you can talk as fast as I can, you will be able to answer about four items every second and on average get one of them correct. Being paid $1 per second isn’t a bad rate. On the other hand, if your fingers are at stake, you’d do well to take your time. The standard, well-researched instructions for the Stroop test are explicit about speed and accuracy. In Ridley Stroop’s original experiment, he told his test subjects to read out 100 words as quickly as they could but to correct any errors that they made. The last part of those instructions is vitally important because it tells the subjects that they must not go “as fast as possible”. Instead, the instructions require that the subjects moderate their speed. If they try to go too fast then they will make mistakes which they will have to correct, and overall they will be slower than if they had simply been more careful to begin with.
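The pay-off arithmetic for the “always say green” strategy can be made explicit; the four-guesses-per-second rate and the four equiprobable colours are the figures used in the text:

```python
# Expected pay rate for answering "green" to every item, using the
# figures from the text: four equiprobable print colours, roughly four
# spoken guesses per second, and $1.00 per correct answer.
n_colours = 4
guesses_per_second = 4
p_correct = 1 / n_colours          # a blind guess is right 1 time in 4
dollars_per_second = guesses_per_second * p_correct * 1.00
print(dollars_per_second)  # 1.0
```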

Which brings me to the instructions given to the adolescents in the Bioelectromagnetics study, namely “Answer as fast as you can.” No mention of accuracy, just an imperative for speed. And (making the doubtful assumption that the results are correct), the kids who sent more text messages also responded more quickly. What does that say about “impulsiveness”? Absolutely nothing. It might say something about how adolescents who don’t use SMS are apparently incapable of following a simple instruction like “Answer as fast as you can” … but even that seems unlikely.

T9 predictive text messaging — phantom data and ghostly explanations

Now let’s turn to the matter of predictive text messaging. To quote one of the researchers as reported in the Telegraph, “We suspect that using mobile phones a lot, particularly tools like predictive texts for SMS, is training them to be fast but inaccurate.” That statement might lead you to think that the researchers had some data relating to differences between the adolescent participants in the extent to which they used predictive text messaging! But that is not the case. The researchers did not collect any data about predictive text messaging. None. Zero. Zippo. The kids completed a crude questionnaire about the number of text messages that they estimated they sent and received each week … but they were not asked anything at all about predictive text messaging. In fact, the word “predictive” appears in the paper for the first time in the final discussion, not in the Methods section, nor in relation to any of the analyses.

There are three other things worth touching on briefly. First, the paper gives no explanation of why the researchers thought that the standard T9 predictive text system should encourage errors or sloppiness. My initial response on reading the assertion that T9 could encourage speed in preference to accuracy was to accept it without question, but a moment’s reflection suggests that the opposite is more likely to be true — that T9 rewards accuracy and punishes inaccuracy. Word processors allow one to type a complete word incorrectly and then correct it at the end of the word, or even days later. See the image below.

Microsoft Word screen showing a selection of alternatives for a misspelled word

T9 doesn’t allow either of those possibilities. If you press the wrong key using T9 then you will be led down a path from which you will have to backtrack step by step in order to correct your mistake. In fact, the only (but useful) advantage that T9 does confer is to permit the message sender to type a single key-stroke for each letter in the word that they are writing (much like a typewriter, or even handwriting!) rather than having to tap each key multiple times to select a single wanted letter!
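The keystroke saving can be sketched with the standard 12-key phone keypad. The snippet below is only an illustration of the counting argument, not an implementation of T9’s dictionary lookup, and it ignores any extra presses needed to pick between candidate words:

```python
# Standard 12-key phone keypad letter groups (digits 2-9).
KEYPAD = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
LETTER_TO_KEY = {c: k for k, letters in KEYPAD.items() for c in letters}

def multitap_presses(word):
    """Classic multi-tap entry: tap a key until the wanted letter appears."""
    return sum(KEYPAD[LETTER_TO_KEY[c]].index(c) + 1 for c in word)

def t9_presses(word):
    """T9 entry: one press per letter (word-disambiguation presses ignored)."""
    return len(word)

print(multitap_presses("hello"))  # 13 presses (h=2, e=2, l=3, l=3, o=3)
print(t9_presses("hello"))        # 5 presses
```

The saving is real, but notice that nothing in this scheme tolerates a wrong key: every keystroke feeds the disambiguation, which is why an error forces the step-by-step backtracking described above.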

Second, despite the length of the paper and five tables of numbers, there is surprisingly little information in the paper about the data. Various “coefficients” are reported, but there is no explanation of what they mean beyond the fact that they have something to do with a regression analysis. Are they actually regression weights, and if so, why not call them that? And if they are regression weights, are they standardized or unstandardized? It is important for a reader to know, because the interpretation of standardized and unstandardized weights is quite different.
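The distinction matters because the two kinds of weight answer different questions. Here is a minimal sketch with made-up data (nothing below comes from the MoRPhEUS study) showing how the same fitted line yields two differently interpreted slopes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: weekly text-message count predicting response time (ms)
messages = rng.normal(100, 40, 500)
response_ms = 600 - 0.5 * messages + rng.normal(0, 50, 500)

# Unstandardized weight: change in response time (ms) per extra message/week
b, intercept = np.polyfit(messages, response_ms, 1)

# Standardized weight: change in SD units of y per one-SD change in x
beta = b * messages.std() / response_ms.std()

print(f"b = {b:.2f} ms per message")    # meaningful only with units attached
print(f"beta = {beta:.2f} (unitless)")  # comparable across predictors
```

A reader can sanity-check an unstandardized weight only if the units are given, and a standardized weight only against the spread of the variables; reporting a bare “coefficient” permits neither.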

Third, there is no mention of the size of the purported effects, just a report of the probability (p) values associated with the enigmatic “coefficients”, yet it is surely the size of the purported differences that is most relevant to determining whether they matter.

The Australian National Health and Medical Research Council reports that in 2009, the MoRPhEUS project (Application number 545927) will receive AUD$531,000 in funding.

Contributors: Mark R. Diamond