Statistical Fallacies in the Abortion Debate: Part 1/3
This blog post is the first in a short three-part series on using statistics in the pro-life debate. This week we look at some common statistical fallacies people make when discussing abortion and how you can avoid them, whether in a formal debate, a friendly discussion or an argument on the internet. Next week we will continue with more fallacies, followed by a final post explaining what to do instead.
Today we are going to discuss an element of the pro-life debate that often gets overlooked by pro-lifers: fallacies involving statistics. Many of you may look at the image below and think that statistics are terrifying and too difficult for ordinary pro-lifers to use, but hopefully this post will convince you that you can argue persuasively and accurately without knowing anything particularly advanced.
Although cubic polynomial regression really is as bad as it sounds if statistics isn’t something you deal with a lot. Image via Wikipedia.
Here are several fallacies that you can easily avoid making in a debate without needing to study statistics (although there is no harm in doing this). We will start with the least egregious errors and finish with the worst.
Using extreme cases to make a point
One fallacy of which both pro-life and pro-choice people are often guilty is arguing a position on abortion purely from extreme cases, without explaining why the argument also works in general. For example, it is very common to see pro-lifers implicitly argue that we should ban all abortions because of extreme cases such as abortions for minor birth defects like cleft lip and palate. The problem is that while such cases are deeply troubling, they are a tiny proportion of abortions overall, accounting for about 157 of the 922,460 abortions performed from 2006-2010, or roughly 0.017%.1 A much more common variation of this fallacy is to regularly cite cases of very late-term abortions, even though most abortions (89% or more) occur during the first trimester, with 52.5% happening at 6 weeks from conception or earlier.
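As a quick sanity check on the arithmetic above (using only the figures already quoted in this post), a couple of lines of Python confirm the proportion:

```python
# Figures quoted above: 157 abortions due to cleft lip and palate,
# out of 922,460 total abortions from 2006-2010.
cleft_cases = 157
total_abortions = 922_460

proportion = cleft_cases / total_abortions
print(f"{proportion:.5%}")  # roughly 0.017%
```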
This occurs once the pre-born baby has reached around this level of development. Remember, an image speaks a thousand words. Image via PMC Canada.
A further example of this fallacy, which many of you will have encountered before, is when people argue that abortion should be legal in general and then fall back on the case of rape when asked to justify the general claim. How to respond to this gracefully needs a whole blog post of its own, and you should never be anything other than compassionate when discussing this topic; but it is worth noting that this can be a fallacious pro-choice argument if it is not suitably qualified, given that abortions due to rape account for around 0.3% of all abortions in the US.2
That said, these arguments do not always amount to fallacies if you are careful when using them. In the third post of this series we will explain how to use these sorts of extreme cases correctly and honestly without misleading people.
Using small samples
Another common mistake to watch out for is the use of overly small samples underlying abortion statistics. This might not seem like an immediate issue, but it can lead to problems where seemingly strong results turn out not to be as significant as they first appear. To explain why, we need to briefly discuss two concepts: the null hypothesis and the p-value.
A null hypothesis is the default claim that you wish to test against some evidence, typically that there is no underlying effect. If your data provides strong enough evidence against it, you reject it in favour of an alternative hypothesis. This idea underlies much of the modern scientific method. Note that a null hypothesis is never something you can prove per se; you can only fail to reject it, or reject it when the evidence against it is strong enough.
The p-value of a result is the probability of obtaining data at least as extreme as that observed, assuming the null hypothesis (for instance, that there is no underlying effect) is true. It is not the probability that the result was due to chance. Typically, a result is not considered significant unless p < 0.05, with results such as p < 0.01 or p < 0.005 being considered much stronger.
One common mistake is to assume that a result with p > 0.05 is nonsense and a result with p < 0.05 is conclusive evidence. This is another error that is easy to make if you are careless; instead, think of p as a rough measure of how sceptical you should be of a result: the smaller p is, the stronger the evidence. For a fuller discussion of abuses of p-values, see here.
How does this connect to sample sizes? The larger your sample, the less extreme your data needs to be, relative to the null hypothesis, for the result to count as significant; conversely, a small study can easily throw up an extreme-looking result by chance. Furthermore, if you run a lot of studies, there is a good chance that at least one of them will show a significant result purely by luck. Be wary of citing a single study by itself, particularly when the sample size is small, and always give priority to literature reviews and meta-analyses.
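To make the sample-size point concrete, here is a minimal sketch in pure Python (no external libraries; the numbers are illustrative, not from any real study). It runs an exact two-sided binomial test on the same observed proportion, 60% "successes" against a null hypothesis of 50%, in a small study and a large one:

```python
from math import comb

def pmf(k: int, n: int, p0: float) -> float:
    """Binomial probability of exactly k successes in n trials under the null."""
    return comb(n, k) * p0**k * (1 - p0) ** (n - k)

def binomial_p_value(successes: int, n: int, p0: float = 0.5) -> float:
    """Exact two-sided binomial test: the probability, assuming the null
    hypothesis (true proportion = p0), of an outcome at least as unlikely
    as the one actually observed."""
    observed = pmf(successes, n, p0)
    # Sum the probability of every outcome no more likely than the observed one
    # (small tolerance added to handle floating-point ties).
    return sum(pmf(k, n, p0) for k in range(n + 1) if pmf(k, n, p0) <= observed + 1e-12)

# The same 60% observed proportion, with only the sample size differing:
print(f"12/20 successes:   p = {binomial_p_value(12, 20):.3f}")    # not significant at the 5% level
print(f"300/500 successes: p = {binomial_p_value(300, 500):.2e}")  # highly significant
```

The small study gives p around 0.5, so its "60% effect" is entirely compatible with pure chance, while the large study gives a tiny p-value for exactly the same observed proportion.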
One example which may invite controversy from the pro-life side is the claimed abortion-breast cancer link (discussed at length here). If the studies with large samples suggest there is no link while those with small samples suggest there is, many people will be highly sceptical that such a link exists, including pro-lifers! It is therefore best not to use this argument unless you have convincing data from large studies.
Next week we will continue discussing statistical fallacies in the abortion debate, talking about biased samples, false causality and push polls.
If you have any questions about anything we have discussed, or about pro-life issues generally, please leave a comment below and we will try to respond quickly.
Dane Rogers is a third-year DPhil student in the Department of Statistics based at Merton College, currently working on Chinese restaurant processes and Lévy processes.
1 It is worth noting that official statistics suggest the number of abortions due to cleft lip and palate from 2006-2010 was actually 14, which, if true, only reinforces the point being made.
2 Note that there are issues with the quality and accuracy of the data, so there is quite a bit of uncertainty around the true value here.