No evidence for adverse effects of text-messaging

Mobile telephone with message screen. Image: Mark Diamond.

A paper in the journal Bioelectromagnetics [1] recently reported a study (part of a longer-term project called MoRPhEUS) of the effects of mobile telephone usage on the cognitive function of young adolescents. Surprisingly, given the obscurity of the journal, it took only a few days before Google News was showing that over 1000 sites, including newspapers such as the Telegraph, had picked up the story. Under headlines like “Predictive text makes teens prone to making mistakes in life”, “How predictive text messaging takes it’s toll on a child’s brain”, and “Text first, think later: Predictive text makes teens even more impulsive”, journalists presented a doom-and-gloom view of adolescent cognitive development and referred to the study as “groundbreaking”.

I’d be surprised, given the extraordinary publicity that the story has attracted, if the claims of the researchers did not enter into that vast primordial soup of “things everyone knows to be true” … an especially worrying prospect given that there was no evidence at all for the claims!

A detailed critique of the study would take several pages, but the most glaring defects are quickly recognized by looking at the two things that were most highlighted by journalists — “impulsiveness”, and “predictive text messaging”.

Speed does not imply impulsiveness

Consider the following task. A complex scene is shown on a computer screen in front of you. Your job is to decide whether Wally is hidden in the picture or not. You have to answer “yes” or “no”. If you answer correctly, you get paid $1.00. You get nothing if you’re wrong. Either way, the next picture flashes up immediately you answer. If Wally is hidden in half of the pictures and not in the other half, then what strategy should you follow to maximize your winnings? Answer: you should not waste your time looking for Wally! You should just say “yes” or “no” as quickly as you can, without regard to whether you are correct or not. That way you will maximize the number of scenes that you are shown in the time available and, on average, you will earn 50 cents per scene.

Now consider a very slight variant: same setup, same display, same pictures. The only difference is that this time there is no money to win and instead I tell you that I will cut off one of your fingers each time you make a mistake. What strategy will maximize your profits (i.e., your remaining fingers) this time? I’ll let you in on a secret — you should go slowly and carefully, taking all day over one picture if you need to so as to make sure that you answer correctly.

My purpose in presenting the preceding scenarios is to make it starkly apparent that speed and accuracy in many tests cannot both be maximized. That is, you cannot go as fast as possible and simultaneously be as accurate as possible. Rather, speed and accuracy trade-off against one another; greater speed reduces accuracy, and conversely, aiming for greater accuracy will slow you down. The critical thing, however, is that in some circumstances speed is better than accuracy, and vice versa. Which choice is better depends on the task, the actual instructions, the instructions as they are understood by the person being tested, the payoffs, and the threats.

The Stroop task in the Bioelectromagnetics paper

Now consider the task in the Bioelectromagnetics paper that led to the claim that predictive text messaging was making adolescents more impulsive. The task was the Stroop [2] test which requires the person being tested to identify the colour in which words are printed on the computer screen. It sounds easy but it is not. What makes it hard is that the words on the screen are colour names, like RED, GREEN, BLUE, YELLOW but they are printed in colours that don’t match the word. For example, you might be shown a list like RED BLUE GREEN YELLOW YELLOW GREEN BLUE and you have to call out the print colour.

As you might expect, speed and accuracy cannot both be maximized in the Stroop test and, as with the “Where’s Wally” game, they trade off against one another. If I offer to pay you $1 for each colour that you answer correctly, then you are likely to do best just by saying “green”, no matter what is on the computer screen. That way, if you can talk as fast as I can, you will be able to answer about four items every second and, on average, get one of them correct. Being paid $1 per second isn’t a bad rate. On the other hand, if your fingers are at stake, you’d do well to take your time. The standard, well-researched instructions for the Stroop test are explicit about speed and accuracy. In Ridley Stroop’s original experiment, he told his test subjects to read out 100 words as quickly as they could but to correct any errors that they made. The last part of those instructions is vitally important because it tells the subjects that they must not go “as fast as possible”. Instead, the instructions require that the subjects moderate their speed: if they try to go too fast then they will make mistakes which they will have to correct, and overall they will be slower than if they had simply been more careful to begin with.
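
The expected-value arithmetic behind the guessing strategy is simple enough to put in a few lines of code. This is purely an illustration of the argument above, using the made-up payoffs from the “Where’s Wally” and Stroop examples, not anything from the paper:

```python
# Expected earnings for a "pure guessing" strategy.
# All numbers are the illustrative ones from the text, not real data.
def expected_rate(items_per_interval, pay_per_correct, p_correct):
    """Expected dollars earned per interval when answering at random."""
    return items_per_interval * pay_per_correct * p_correct

# Where's Wally: one scene per answer, $1 per correct, 50% chance.
wally = expected_rate(1, 1.00, 0.5)    # 0.50 dollars per scene

# Stroop guessing: say "green" four times a second; with four colour
# names in use, roughly one answer in four will be right.
stroop = expected_rate(4, 1.00, 0.25)  # 1.00 dollar per second

print(wally, stroop)
```

The point of the sketch is only that the payoff structure, not the subject’s character, dictates the optimal speed.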

Which brings me to the instructions given to the adolescents in the Bioelectromagnetics study, namely “Answer as fast as you can.” No mention of accuracy, just an imperative for speed. And (making the doubtful assumption that the results are correct), the kids who sent more text messages also responded more quickly. What does that say about “impulsiveness”? Absolutely nothing. It might say something about how adolescents who don’t use SMS are apparently incapable of following a simple instruction like “Answer as fast as you can” … but even that seems unlikely.

T9 predictive text messaging — phantom data and ghostly explanations

Now let’s turn to the matter of predictive text messaging. To quote one of the researchers as reported in the Telegraph, “We suspect that using mobile phones a lot, particularly tools like predictive texts for SMS, is training them to be fast but inaccurate.” That statement might lead you to think that the researchers had some data relating to differences between the adolescent participants in the extent to which they used predictive text messaging. But that is not the case. The researchers did not collect any data about predictive text messaging. None. Zero. Zippo. The kids completed a crude questionnaire about the number of text messages that they estimated they sent and received each week … but they were not asked anything at all about predictive text messaging. In fact, the word “predictive” appears in the paper for the first time in the final discussion, not in the Methods section, nor in relation to any of the analyses.

There are a few other things that are worth touching on briefly. First, the paper gives no explanation of why the researchers thought that the standard T9 predictive text system should encourage errors or sloppiness. My initial response on reading the assertion that T9 could encourage speed in preference to accuracy was to accept it without question, but a moment’s reflection suggests that the opposite is more likely to be true — that T9 rewards accuracy and punishes inaccuracy. Word processors allow one to type a complete word incorrectly and then correct it at the end of the word, or even days later. See the image below.

Microsoft Word screen showing a selection of alternatives for a misspelled word

T9 doesn’t allow either of those possibilities. If you press the wrong key using T9 then you will be led down a path from which you will have to backtrack step by step in order to correct your mistake. In fact, the only (but useful) advantage that T9 does confer is to permit the message sender to type a single key-stroke for each letter in the word that they are writing (much like a typewriter, or even handwriting!) rather than having to tap each key multiple times to select a single wanted letter!

Second, despite the length of the paper and its five tables of numbers, there is surprisingly little information in the paper about the data. Various “coefficients” are reported, but there is no explanation of what they mean beyond the fact that they have something to do with a regression analysis. Are they actually regression weights, and if so, why not call them that? And if they are regression weights, are they the standardized or the unstandardized weights? It’s important for a reader to know, because the interpretation of standardized and unstandardized weights is quite different.
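
To see why the distinction matters, here is a toy illustration (invented data, nothing to do with the paper) of how far apart the two kinds of weight can be for the same relationship:

```python
import numpy as np

# Invented data: x might be texts sent per week, y a response-time score.
rng = np.random.default_rng(0)
x = rng.normal(50, 20, size=200)
y = 0.01 * x + rng.normal(0, 0.5, size=200)

# Unstandardized weight: change in y per one-unit change in x.
b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Standardized weight: change in SDs of y per one-SD change in x.
# For simple regression this equals the correlation coefficient.
beta = b * np.std(x, ddof=1) / np.std(y, ddof=1)

print(b, beta)  # numerically very different, same underlying relationship
```

A reader handed only a bare “coefficient” has no way of knowing which of these two numbers they are looking at.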

Third, there is no mention of the size of the purported effects, just a report of the probability (p) values associated with the enigmatic “coefficients”, yet it is surely the size of the purported differences that is of most relevance to determining whether they matter.

The Australian National Health and Medical Research Council reports that in 2009, the MoRPhEUS project (Application number 545927) will receive AUD$531,000 in funding.

Contributors: Mark R. Diamond

Honours theses — time to start on the data

IBM key punch and verifier, model 029. Photo: en.wikipedia.org

If you are an honours research student in Australia then you are probably nearing the end of your data collection. What comes next?

Although starting on the analyses is going to be high on your agenda, I would encourage you not to rush into it. First, ensure that the records of your data are accurate. If you have been copying data from paper records into a database then it would be surprising if you had made no errors in the data entry.

One well-tried method of checking for errors in data entry is to enter the data twice and then to compare the two sets. No one seems to do this anymore, at least not in universities, but in the days when punched Hollerith cards were used for data storage, instead of the now ubiquitous magnetic discs and flash memory sticks, double entry was considered almost mandatory. The process was referred to as “key punch” and “verification” in keeping with the terminology used by IBM. Originally, one machine, the IBM Model 026, was used for punching the holes in the Hollerith cards, and another machine, the Model 056, was used for verification. Although I have occasionally used both models, I used the IBM Model 029 much more. It could be used both for punching and for verification and was even able to print directly onto the cards, obviating the need to read one’s data by interpreting the holes!

But back to the matter of how a modern honours student should verify their data. If a single mistake in your data entry could lead to catastrophe, then I would suggest that double entry still has its place. You could enter your data into two spreadsheets, or even into two plain text files, and then use some simple software (such as the Unix diff command) to check for differences between the two entered sets.
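
As a sketch of what that comparison might look like (the records here are invented for illustration), a few lines of Python will do the same line-by-line job as diff:

```python
# Double-entry verification: the same records typed in twice, then
# compared line by line. Data are invented for illustration.
first_entry  = ["12,22.0,48", "13,21.5,47", "14,23.1,46"]
second_entry = ["12,22.0,48", "13,21.5,74", "14,23.1,46"]  # one typo

mismatches = [
    (lineno, a, b)
    for lineno, (a, b) in enumerate(zip(first_entry, second_entry), start=1)
    if a != b
]
for lineno, a, b in mismatches:
    print(f"line {lineno}: first entry {a!r}, second entry {b!r}")
```

Every reported mismatch is a line where at least one of the two typings must be wrong, so each one gets checked against the original paper record.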

The next step is to ensure that there are no impossible values in your data set, by which I mean checking for values that cannot represent real measurements. For example, if you have been measuring the temperature of liquid water in drains, then values below zero or over 100 °C indicate that you have made an error. Similarly, if you are a psychologist measuring marks on a visual analogue scale that is 12 centimetres long, then negative values or values exceeding 12 will be erroneous. One of the easiest ways of finding these sorts of errors is to have your data analysis package print the minimum and maximum of each of the variables in your data set.
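
A minimal range check, using the two examples just given (column names, limits, and values are all illustrative), might look like this:

```python
# Print the minimum and maximum of each variable and flag any value
# outside its physically possible range. All numbers are invented.
data = {
    "temp_c": [22.0, 21.5, 23.1, 104.0],  # liquid water: 0-100 degrees C
    "vas_cm": [3.5, 11.9, -1.0, 7.2],     # 12 cm visual analogue scale
}
limits = {"temp_c": (0.0, 100.0), "vas_cm": (0.0, 12.0)}

suspect = {}
for name, values in data.items():
    lo, hi = limits[name]
    print(name, "min =", min(values), "max =", max(values))
    suspect[name] = [v for v in values if not lo <= v <= hi]

print(suspect)  # {'temp_c': [104.0], 'vas_cm': [-1.0]}
```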

In your hunt for impossible values, look next for numbers that indicate spurious accuracy. For example, if you are measuring water temperature with a standard alcohol thermometer, then it is unlikely that you should have more than one decimal place in your measurements. Numbers like 22.048 will indicate that something has gone wrong. Sometimes you will discover that you have just hit an extra key by mistake; more often, you will find that the digits 4 and 8 actually belonged to the next column of data and that you forgot to type a delimiter between the first valid datum (22.0) and the next datum (possibly 48).

It is impossible to be exhaustive in describing what sorts of things to check in your data because so much depends on the specific context of the data collection. What I can say is that you should think carefully about the kinds of numbers that are impossible, and ensure that you do not have any.

Related to impossible data is the problem of illogical data. Illogical data are those where two items are jointly nonsensical even if they would make sense on their own. For example, a person can have a 1989 birthday, and a person can be 50 years old, but if the year is 2009, then it is not possible to be 50 and to have a 1989 birthday. Similarly, being born in 1989 and being enrolled at primary school in 2009 are unlikely co-occurrences. Again, it is impossible to be exhaustive but if you can discover mutual dependencies in your data set (such as year of birth and age), then you can cross-tabulate the two variables to discover whether you have illogical data.
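
Using the birth-year and age example above, a cross-tabulation check can be sketched in a few lines (the records and the one-year birthday tolerance are my own illustrative choices):

```python
from collections import Counter

# Consistency check between two mutually dependent variables:
# year of birth and reported age, with the survey assumed to be in 2009.
records = [
    {"birth_year": 1989, "age": 20},
    {"birth_year": 1959, "age": 50},
    {"birth_year": 1989, "age": 50},  # jointly impossible
]

SURVEY_YEAR = 2009
crosstab = Counter((r["birth_year"], r["age"]) for r in records)

# A (birth_year, age) pair is suspect if the two values cannot both be
# true in 2009, allowing one year either way for birthdays not yet reached.
illogical = [
    pair for pair in crosstab
    if abs(SURVEY_YEAR - pair[0] - pair[1]) > 1
]
print(illogical)  # [(1989, 50)]
```

Each flagged pair then has to be traced back to the original records to decide which of the two values was mis-entered.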

Data checking is vital. Do it carefully and thoroughly and you will be more confident of being able to rely on the results of your later analyses. Do it badly and you might embarrass yourself by claiming to have discovered the cause of global climate change when what you actually had was a fly-speck on your thermometer scale — and possibly a mote in your own eye.

Contributors: Mark R. Diamond

Swine Flu vaccination and consent — you read it here first

Image: Mark Diamond

The Age newspaper this morning is carrying a story [1] on the new Panvax ® H1N1 influenza vaccine manufactured by CSL Ltd. About two thirds of the way through the article, there is a paragraph reading

“Concerns include the development of a consent form that each recipient must sign. There is not yet enough data from trials of the drug to inform patients of the risks involved.”

It’s five days since I first wrote about the difficulties of patient consent in relation to the new vaccine so it’s pleasing to see that the issue will now get a wider airing.

There are some interesting difficulties implied in the quoted comment. The suggestion is that the information in which patients would be interested would be information that relates to the trials alone. I’m not convinced that that would be true, for four reasons. Briefly, they are: (i) the frequency of rare events is difficult to estimate from small samples such as those used in the trials; (ii) the risks of cross-contamination are largely related to human factors, and we know that human performance is frequently better when it is being watched (as in the trials) than when it is not; (iii) the population that made up the trial sample is different from the population that is likely to be vaccinated first; and (iv) some of the adverse consequences might take a long time to show up.

Estimating the frequency of rare events

For the sake of argument, assume that the probability of a vial of vaccine being contaminated is 1 per 100,000 doses administered. If there are only 400 people in the trials then the chance that there will be an actual contamination is relatively small, and the chance of the contamination being detected is even smaller. Once a vial is contaminated and the contamination goes undetected, then given a 10-dose vial and assuming a 100 percent infection rate from contaminated needles, one can expect 4–5 people to become infected. So cross-contamination might be rare but, when it does occur, the infections cluster.
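
The small-sample problem is easy to make concrete. Using the assumed 1-in-100,000 figure from the argument above (not a measured rate), the chance of the trial ever seeing a contamination event is tiny, while at the scale of the planned two million doses it becomes a near certainty:

```python
# Illustrative calculation with the assumed rate from the text,
# not with any trial data.
P_DOSE = 1 / 100_000  # assumed contamination probability per dose

def p_at_least_one(n_doses, p=P_DOSE):
    """Probability of at least one contamination event in n doses."""
    return 1 - (1 - p) ** n_doses

print(p_at_least_one(400))        # roughly 0.004: almost never seen in a trial
print(p_at_least_one(2_000_000))  # effectively certain at rollout scale
```

A trial of 400 people is therefore close to uninformative about a risk of this magnitude, which is precisely why the consent problem does not go away when the trial reports no adverse events.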

Patient differences

The people that the government is proposing to vaccinate are an unusual subset of the whole population. They include, at a higher rate than average, people whose health or resistance has in some way already been compromised and who are, I think, more likely than the average person to have an infection of some sort. As a consequence, by estimating the probability of cross-infection from the trial data, one is likely to under-estimate the actual risk in the mass vaccination program.

Human performance

If you are a health practitioner and you know that you are part of a study on the risks of using multi-dose vials, then you are likely to perform better than if you were just giving vaccinations without being studied. The likely change in your behaviour, which is known as the Hawthorne effect, will result in the risk of cross-infection being underestimated.

Delayed consequences

If the cross-infection that you get is Creutzfeldt–Jakob disease (CJD), then your infection is unlikely to show up for a long time and will also result in the risks being underestimated. Nonetheless, harking back to the Rogers v. Whitaker decision that I referred to in the earlier posting, I expect that patients would consider the CJD risk to be “material” even if it is very very very very small.

[1] Miller, N. (2009). Doctors urge delay on vaccine. The Age. 29 August 2009. http://www.theage.com.au/national/doctors-urge-delay-on-vaccine-20090828-f2k6.html

Contributors: Mark R. Diamond

Swine Flu vaccine and patient consent

Influenza virus vaccine, Fluzone®. Photo: United States Centres for Disease Control and Prevention

The Australian Health Minister Nicola Roxon announced on Thursday, 20 August 2009 that, before the end of the month, the Australian government will take delivery of two million doses of the new CSL vaccine (Panvax ®) against the novel H1N1 (“Swine Flu”) virus, and that the vaccine would be supplied by CSL in multi-dose vials. The Australasian Society for Infectious Diseases reacted to the announcement by writing to the Commonwealth Chief Medical Officer, Professor Jim Bishop, urging him to delay distribution of the vaccine until it could be supplied in single dose vials. Multi-dose vials, they said, pose too great a risk of infection (as a result of contamination) to the people being vaccinated. Professor Bishop, responded by saying that the government’s pandemic plan had always included the use of multi-dose vials because they were more efficient and could be distributed more quickly.

I’ll probably return in a later posting to the matter of how one might rationally assess the risks of the proposed vaccinations. Here, I want only to highlight an interesting side issue related to patient consent to medical procedures.

In 1992, in its judgment in the case of Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479 (19 November 1992), the High Court of Australia made it clear that medical professionals have a responsibility to explain the material risks of proposed medical procedures to their patients. In particular, the Court said that “a risk is material if, in the circumstances of the particular case, a reasonable person in the patient’s position, if warned of the risk, would be likely to attach significance to it or if the medical practitioner is or should reasonably be aware that the particular patient, if warned of the risk, would be likely to attach significance to it.” It might be a cumbersome way of saying things, but it strongly suggests that if you are a doctor or a nurse about to give me a vaccination then it is incumbent upon you to explain to me, in a way that I can understand, what the material risks are of having the vaccination. It is not up to me to know all about the possible risks; nor is it up to me to formulate questions about things that I did not know I needed to know. That is the responsibility of the health professional.

It’s worth noting that, in the view of the High Court, probability per se is not of much relevance! The fact that a possible adverse consequence of treatment has only a small probability of occurring is not important in determining whether I should be told about the possibility of that consequence. Indeed, in the original trial of Whitaker v. Rogers, evidence was presented on behalf of Dr Rogers to show that the consequence that Whitaker suffered (sympathetic ophthalmia) occurred on average only once in 14,000 procedures (0.007 percent).

So, to return to the present matter of the multi-dose vials. I expect that most people being vaccinated would attach significance to the fact that multi-dose vials are more likely than single-dose vials to lead to fatal infection by a contaminant. I also think people would attach significance to each of the following: (i) that the usual Australian practice has been to use single-dose vials, (ii) that any vaccinations a person has previously had in Australia would most likely have been from single-dose vials, and (iii) that the use of multi-dose vials was part of a pandemic plan developed for dealing with infection by virulent avian influenza — an infection with a 50 percent case-fatality rate, as opposed to the current H1N1 virus, which has a case-fatality rate of less than 0.4 percent.

I am not suggesting that every (or even any) person informed of the facts in the preceding paragraph would necessarily decide against being vaccinated. But then the High Court made it pretty clear that what I think, or what a nurse thinks, or what a doctor thinks is of no consequence when it is you who is being vaccinated. Your right is to be given the information and to make your decision in the way that you think best.

So, one can imagine the conversations that should ensue. But will they? Will Professor Bishop direct his staff to remind health-practitioners that they should warn patients about the increased risks associated with multi-dose vials?

Contributors: Mark R. Diamond

A test of fundamental economics

Photo: en.wikipedia.org

No, not something that will win you the Bank of Sweden Prize but rather a simple honours research project crossing the domains of economics and human behaviour. The subject is toilet paper.

You might have noticed that the quality of toilet paper in large office blocks, universities, schools and sporting complexes usually isn’t a patch on what you might have at home. A quick check of warehouse prices for bulk buys of toilet paper suggests that you could spend anything from AUD$0.40 to AUD$1.60 per roll, so my guess is that purchasers believe that buying a lower quality product will save money overall. But does it, or does usage increase as quality decreases, more than compensating for any of the original cost-per-item saving? I’m assuming that price and quality are highly correlated, but they might not be.

It couldn’t be too hard to create nicely controlled experiments to answer both the question about the relationship between price and quality, and the question about quality and usage. Using my own mythical numbers, I estimate that a building of 1000 people could save around AUD$20,000 annually by answering the questions.
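
In the spirit of those mythical numbers, here is a back-of-the-envelope sketch (every figure below is invented) of the quantity a student would actually be estimating, namely the usage increase at which cheap paper stops being cheaper:

```python
# Back-of-the-envelope sketch with made-up numbers. The only inputs
# taken from the text are the warehouse price range quoted above.
cheap_price, quality_price = 0.40, 1.60  # AUD per roll

# Cheap paper remains cheaper until usage rises by more than the price
# ratio; here, until people use more than 4x as many cheap rolls.
break_even_multiplier = quality_price / cheap_price
print(break_even_multiplier)  # 4.0

# Example annual cost for a building, assuming (hypothetically) 25,000
# rolls of quality paper per year and a 50% usage increase with cheap:
quality_cost = 25_000 * quality_price
cheap_cost = 25_000 * 1.5 * cheap_price
print(quality_cost - cheap_cost)  # cheap paper still wins in this scenario
```

The experiment's job would be to replace the two guessed quantities, the baseline usage and the usage multiplier, with measured ones.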

Contributors: Mark R. Diamond