Monthly Archives: October 2009

Thesis writing: tell a story not a statistic

Self-portrait of Joseph Ducreux yawning and stretching. Photo: wikipedia.org

Worrying about how to analyze your data can get you so focussed on statistics that, by the time you come to write up your results, you might forget that writing a good thesis is all about telling a good story. No journalist has ever been persuaded to pay attention to a new discovery because the scientist started shouting about the size of their F statistic or the smallness of their p values. Rather, it is by telling a good story that you will capture the curiosity and imagination of your intended audience.

So when you come to write up your results, remember to start with your ideas and only thereafter turn your attention to your analyses. A good approach is something like the following. First, state the basic idea or hypothesis in an informal but interesting way (e.g., I think that A is probably the underlying cause of B because of C). Second, describe the formal test or measure of the idea (e.g., I don’t have a powerful way of testing the idea directly, so I am going to use the weaker measure of correlation to see whether there is at least some sort of association between A and B). Third, report the results of the formal measure (e.g., I found a correlation between A and B of 0.94). And finally, draw a conclusion about the idea based on the results of the formal measure (e.g., the surprisingly high correlation shows that there is a strong linear relationship between A and B, but the possibility of a causal link will need further investigation).
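To make the distinction concrete: the "formal measure" in that example boils down to a single calculation. The sketch below (in Python, using numbers invented purely for illustration and unrelated to any real study) shows just how little of the story the statistic itself carries; everything else has to come from you.

```python
# Purely illustrative: the "formal measure" step from the example above,
# computed on invented data for hypothetical variables A and B.
import numpy as np
from scipy import stats

a = np.array([1.2, 2.3, 2.9, 4.1, 5.0, 6.2])   # hypothetical measurements of A
b = np.array([2.0, 4.4, 5.9, 8.3, 9.8, 12.5])  # hypothetical measurements of B

r, p = stats.pearsonr(a, b)
print(f"Correlation between A and B: r = {r:.2f} (p = {p:.3f})")
```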

That particular formula is neither the only one you could use nor necessarily the best one … but it is far from the worst approach, and many, many times better than simply reporting, in succession, the fact that you calculated the correlation coefficient between A and B, did a regression analysis of C, D and E against F, and so forth. Remember, a weak study that is told in an interesting way will almost always get a better examination result than a stronger study that bores your reader to death.

Contributors: Mark R. Diamond

That nice new doctor might kill you

Cover of the British Medical Journal 10 October 2009. Photo: BMJ.com

If you go to hospital at the start of a new rotation of resident medical officers, your chances of being harmed are markedly higher than at other times.

In a paper [1] in the British Medical Journal published today, Haller et al. report that the overall rate at which adverse events occur at the beginning of the academic year is 137 per 1000 patient-hours, compared with 107 per 1000 patient-hours at other times. The likelihood of adverse events decreases steadily over the succeeding months, and by the time four months have elapsed, the risk to patients has returned to baseline levels. Interestingly, the rate of adverse events is higher at the beginning of the academic year across all trainee doctors, irrespective of their level of seniority (i.e., experience), suggesting that there is something about a doctor commencing at a new hospital or in a new environment that leads them into error, rather than a simple lack of medical knowledge. Haller et al. suggest that unfamiliarity with the work environment and with hospital procedures, problems with teamwork in newly formed teams, and communication problems might be the root causes of the adverse events.
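For readers who like to see the arithmetic, here is a rough, back-of-the-envelope restatement of the two headline rates quoted above. It is not a re-analysis of the paper’s data, just the published figures put side by side.

```python
# Back-of-the-envelope arithmetic on the headline rates quoted above
# (137 vs 107 adverse events per 1000 patient-hours); not a re-analysis of the study.
rate_start = 137 / 1000   # adverse events per patient-hour at the start of the academic year
rate_rest = 107 / 1000    # adverse events per patient-hour at other times

rate_ratio = rate_start / rate_rest
absolute_excess = (rate_start - rate_rest) * 1000

print(f"Rate ratio: {rate_ratio:.2f} (about {100 * (rate_ratio - 1):.0f}% more adverse events)")
print(f"Absolute excess: about {absolute_excess:.0f} extra events per 1000 patient-hours")
```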

On the bright side, the results also suggest a solution to the problem of increased risk. Specifically, it might be sufficient to have new trainees spend more time getting to know the hospital routine before they take over responsibility for patients from the outgoing rotation of doctors.

References

[1] Haller, G., Myles, P.S., Taffé, P., Perneger, T.V., & Wu, C.L. (2009). Rate of undesirable events at beginning of academic year: retrospective cohort study. British Medical Journal, 339, b3974. doi: 10.1136/bmj.b3974

Contributors: Mark R. Diamond

No advice to patients about the risks of multi-dose vaccine vials

Influenza virus vaccine, Fluzone®. Photo: United States Centers for Disease Control and Prevention

The Australian Government has commenced its H1N1 influenza vaccination program and the Department of Health and Ageing (DoHA) has published the consent form that patients are being asked to sign. You can look at a copy of the consent form here.

Vaccine delivery and consent

The Panvax® influenza vaccine (manufactured by CSL Ltd) is being supplied in multi-dose vials, the use of which is known to be associated with a risk of contamination of the vaccine and of cross-infection of vaccinated patients. In August, the Australasian Society for Infectious Diseases raised its concerns about the use of multi-dose vials with the Commonwealth Chief Medical Officer, Dr Jim Bishop, and urged the Government to abandon its vaccination plan until single-dose vials became available. Despite those concerns, DoHA has pressed ahead with its vaccination plan.

In a couple of earlier postings I asked how the public would be informed about the risks associated with multi-dose vials so that each patient would be able to make an informed choice about whether to get vaccinated now or to wait until single-dose pre-filled syringes are available. If you look at the consent form you will see that that issue has now been settled. Patients will not be told anything!

The Bolam Principle

After seeing the consent form for the first time yesterday, I wrote to DoHA earlier today. I asked (1) why there is no mention on the consent form of the risks of using multi-dose vials, and (2) why DoHA has not heeded the very specific admonishments of the High Court in its 1992 judgement in the medical negligence case of Rogers v Whitaker [1].

The reply that I received did not address either of those questions. In fact, it didn’t even try to answer them. Instead (and revealingly) the response simply quoted an unrelated and irrelevant earlier media release: “The vaccine will be distributed in multi-dose vials. This is consistent with countries around the world also delivering mass vaccination programs. Guidelines for the safe use of the vials have been developed by Australia’s specialist immunisation reference group, the Australian Technical Advisory Group on Immunisation in consultation with the Royal Australian College of General Practitioners and the Australian Nursing Federation.”

In other words, “doctor knows best”. Patients do not need to be told about the risks because DoHA is delivering the vaccine in a form that is consistent with medical opinion about the best method of delivery. On the humorous side, the reference to the “specialist immunisation reference group” reminded me of the famous line at the end of the film, Raiders of the Lost Ark, when Harrison Ford asks what has been done with the Ark. He is told, “I assure you, top men are working on it right now.” Harrison Ford then asks naively, “Who?” The answer: “Top men”.

More seriously, I remarked (above) that the response from DoHA was revealing. How so? Readers who are familiar with the law of medical negligence will recognize that the approach taken by DoHA is entirely consistent with a legal doctrine known as the “Bolam Principle”. To quote from the judgement in Rogers v Whitaker, “The Bolam principle may be formulated as a rule that a doctor is not negligent if he acts in accordance with a practice accepted at the time as proper by a responsible body of medical opinion even though other doctors adopt a different practice.” The problem with that approach for doctors, nurses and for DoHA is that the High Court refused to accept the Bolam principle as the basis on which to determine medical negligence cases! Instead, the High Court decided that the law, “should recognize that a doctor has a duty to warn a patient of a material risk inherent in the proposed treatment” and added that “a risk is material if, in the circumstances of the particular case, a reasonable person in the patient’s position, if warned of the risk, would be likely to attach significance to it …”

In other words, with regard to vaccinations, the issue is not about how likely it is that a risk of contamination or cross-infection will materialize; it is not about how dangerous such contamination is likely to be; it is not about whether the doctor is doing everything as well as they know how; it is not about whether having a vaccination is more likely to protect you from a serious H1N1 infection than it is to create a problem of its own. It is simply about patient autonomy—the right of each competent individual to decide for themselves whether or not to be vaccinated.

How do the risks with multi-dose vials arise?

A multi-dose vial is a bottle that contains enough vaccine for about 12 people. The bottle is sealed with a rubber cap through which a needle can be pushed. The doctor or nurse attaches a needle to the end of a syringe, sticks the needle through the cap, and draws up enough of the vaccine for a single dose. The patient then gets injected with the vaccine. What could possibly go wrong?

Scenario 1: The person doing the vaccinations could accidentally reuse the syringe and needle for more than one patient. If the first patient to receive the vaccine has a disease that is transmissible by needle stick, then the second person to be vaccinated with the same needle might become infected with that disease. Sure, the guidelines say that a new needle and a new syringe must be used; but the nature of accidents is that they are accidental. If they were not, we would assume that some sort of deliberate malfeasance was involved! The nature of a mistake is that something that should have been done is not done, or something that should not have been done is done, without the person doing it having acted deliberately. By refusing to acknowledge the risk of a mistake, the DoHA consent form tacitly implies that in the history of vaccination no needle or syringe has ever been accidentally reused when the guidelines said it should not be, and, furthermore, that such an accident could never happen.

If the form had spelled out those tacit implications, would you believe them? Do you believe in Santa Claus? Interestingly, in contrast to the implications of the DoHA consent form, the United States Centers for Disease Control specifically acknowledges that things can happen contrary to safety guidelines.

Scenario 2: Person A has an infection and is vaccinated with a new syringe and a new needle. The person giving the vaccinations accidentally begins to reuse the equipment and realises their mistake, but only after they have put the used needle back through the cap of the bottle of vaccine and transferred infective material from the used needle to the vaccine. The remaining vaccine in the multi-dose vial might then infect up to 11 more people, even if new needles and syringes are used for each of them.

Scenario 3: The vaccine becomes contaminated after the vial is opened, and the contamination occurs independently of any particular person being vaccinated.

As I remarked in an earlier post, none of the risks associated with multi-dose vials will necessarily lead a person to refuse the vaccine, but that is a decision for the patient and not for the doctor.

How could DoHA go so wrong?

My guess—but that is all it is—is that DoHA sought medical advice about how to phrase the consent form but did not seek legal advice from someone familiar with the law of medical negligence. Traditions take a long time to change and the idea that the medical profession knows what is best is well entrenched. If you think that you know what is best for your patients and you are convinced that the risk associated with the use of multi-dose vials is small, then in your presumptive though unintended arrogance you might just neglect to mention those risks! For a discussion of similar problems that arise in psychological practice, you might like to have a look at a paper [2] that Angela O’Brien-Malone and I wrote for Australian Psychologist.

References

[1] Rogers v. Whitaker, F.C. 92/04 (High Court of Australia 1992).

[2] O’Brien-Malone, A., & Diamond, M. R. (2006). Tell your patients you might hurt them. Australian Psychologist, 41(3), 160–167. doi: 10.1080/00050060600776366

Contributors: Mark R. Diamond, Angela O’Brien-Malone

The Price is Right—giving yourself a little bit extra

Logo of the television game show “The Price is Right”

Some years ago, Daniel Reidpath and I wrote a paper on the optimal strategy for players in the television game show, “The Price is Right”. We didn’t publish the paper and it’s been sitting in a drawer for over a decade, but I started to think about it again while reading “Freakonomics”, the book by Steven Levitt and Stephen Dubner. “Freakonomics” describes several games with economic consequences, and I was reminded of two particular features of game shows that make them close to ideal as research settings for the behavioural economist or social scientist. First, they frequently involve large sums of money—far greater than one could ever offer a participant in a laboratory study. Second, they are constrained both by time and by formal rules in a way that most worldly decisions are not.

The advantage of the first feature—big money—is that you’d expect people to be more engaged with the TV game than they would be with a university laboratory game that pays peanuts. The advantage of the second feature—constraint—is that it is easier for the researcher to figure out what is going on than it is with decisions like buying a house or a car. It is also easier to determine objectively whether, by how much, and in what ways the participants’ behaviour deviates from the optimum. With those thoughts in mind, I decided to resurrect the paper and to make it available here.

Contributors: Daniel D. Reidpath, Mark R. Diamond

Focus groups and missing species

Focus groups are frequently conducted in a roundtable format. Photo: United States Department of Veterans Affairs.

Focus groups have a venerable history. They appear to have been first mentioned in a letter of 2 March 1938 from the English diplomat, Sir Harold George Nicolson. He wrote “I went to such an odd luncheon yesterday. It is called ‘The Focus Group’, and is one of Winston’s things.” Whether Winston Churchill invented focus groups, I don’t know, but since his time they’ve become ubiquitous amongst hucksters of all kinds—marketers, politicians, sociologists, psychologists, and the like.

If you’re planning focus group research, you’ll probably want to know how many groups you should run. Sometimes the question will be answered simply and quickly by your available budget; if you can’t afford to run more than two groups, then that is all you’re going to plan for. But what if you can afford more? Is there a rational basis for settling on any particular number of groups?

Most of the suggestions that I have seen are variants on a single theme — sample until you stop hearing anything new. Lunt and Livingstone [1], for example, say “…one should continue to run new groups until the last group has nothing new to add, but merely repeats previous contributions.” That sounds simple enough, but what does it really mean? If you run two groups which proffer essentially the same three opinions, does that mean you should stop? Or should you assume that a third group might come up with some as-yet-unstated opinions, and continue running more groups? Would it make any difference if each group gave exactly the same 20 different opinions, as compared with each group offering only two opinions?

One way of looking at the decision problem is to try to create a model of the process that is assumed to underlie the formation of opinions and their subsequent revelation in the focus group. The closest that I have found to an explicit statement of the model that underlies the previously described rule of “sampling to saturation” is in another remark by Lunt and Livingstone, namely, “A useful rule of thumb holds that for any given category of people discussing a particular topic there are only so many stories to be told.” That might not sound like a description of a model, but here is my attempt at enlarging upon their statement.

Regarding any particular topic, there are a finite number of stories (or beliefs, or opinions) floating around in the ether. Each person acts as a “story trap”: stories that approach too closely to the trap are caught, and when the person is subsequently interviewed, in a focus group for example, the contents of the trap are revealed.
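If you prefer something more concrete than ether and traps, here is a toy simulation of that model, written in Python with numbers I have simply made up. It runs group after group and stops the moment a group adds nothing new, which is the “saturation” rule described above; notice that stopping at saturation does not guarantee that every story in the pool has actually been heard.

```python
# A toy simulation of the "story trap" model sketched above (all numbers invented).
# A finite pool of stories exists; each participant independently "traps" each story
# with some small probability; groups are run until a group adds nothing new.
import random

random.seed(1)

TOTAL_STORIES = 20   # stories "floating around in the ether"
GROUP_SIZE = 8       # participants per focus group
P_TRAP = 0.10        # chance that a given person has trapped a given story

def run_group():
    """Return the set of stories revealed by one focus group."""
    revealed = set()
    for _ in range(GROUP_SIZE):
        revealed |= {s for s in range(TOTAL_STORIES) if random.random() < P_TRAP}
    return revealed

heard = set()
groups_run = 0
while True:
    groups_run += 1
    new = run_group() - heard
    heard |= new
    if not new:          # the "saturation" stopping rule
        break

print(f"Stopped after {groups_run} groups, having heard {len(heard)} of {TOTAL_STORIES} stories")
```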

The reason that I have phrased the model in these unusual terms is that it then becomes obvious how the activity of running a focus group to discover opinions is similar to the activity of a biologist who is trying to discover how many species of animal there are in a particular environment. The biologist sets special traps which capture, mark and release the animals that are ensnared [for example, 2], and at the end of the day the biologist has information on how many species were captured only once and how many were captured multiple times. Statisticians have developed various methods [for example, 3] for estimating, from the capture data, the number of undiscovered or untrapped species.

Similar approaches could be taken, first, to determining how many opinions one has not managed to tap with the focus groups one has run so far, and, second, to estimating how many more focus groups one should run in order to capture those opinions with some specified probability. I know of only one paper [4] that touches on the first problem, and I know of no research that has attempted to tackle the second. Given the very large sums of money that are devoted to market research, both problems seem to me to be worthy of more attention than they have so far been given.
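To give a flavour of what such an approach might look like, here is a small sketch using the Chao1 estimator, a standard capture-recapture estimator of species richness. It is simpler than, and not the same as, the method of Efron and Thisted [3], and the opinion counts below are entirely invented; the point is only to show how counts of opinions heard once and opinions heard twice can be turned into an estimate of the opinions not yet heard.

```python
# A minimal sketch of the species-richness analogy, using invented data.
# Each key is an opinion; the value is the number of focus groups in which it was raised.
from collections import Counter

opinion_counts = {
    "too expensive": 5, "hard to use": 3, "good support": 2,
    "poor documentation": 2, "love the colour": 1, "privacy worries": 1,
    "wish it were faster": 1,
}

observed = len(opinion_counts)          # opinions heard at least once
freq = Counter(opinion_counts.values())
f1 = freq[1]   # opinions raised in exactly one group ("singletons")
f2 = freq[2]   # opinions raised in exactly two groups ("doubletons")

# Bias-corrected Chao1 estimate of the total number of opinions, heard and unheard.
chao1 = observed + f1 * (f1 - 1) / (2 * (f2 + 1))

print(f"Opinions observed: {observed}")
print(f"Estimated total (Chao1): {chao1:.1f}, i.e. roughly {chao1 - observed:.1f} still unheard")
```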

References

[1] Lunt, P., & Livingstone, S. (1996). Rethinking the focus group in media and communications research. Journal of Communication, 46(2), 79–98.

[2] Karanth, K. U. (1995). Estimating tiger Panthera tigris populations from camera-trap data using capture–recapture models. Biological Conservation, 71(3), 333–338.

[3] Efron, B., & Thisted, R. (1976). Estimating the number of unseen species: How many words did Shakespeare know? Biometrika, 63(3), 435–447.

[4] Griffin, A., & Hauser, J.R. (1993). The voice of the customer. Marketing Science, 12(1), 1–27.

Contributors: Mark R. Diamond