Misleading evidence? Show me the data!
One of the trickiest questions I get asked is ‘how big a deal is the misinterpretation of forensic science evidence, really?’ We all like to get a sense of the scale of a challenge, problem, or risk, because that kind of information tells us how much weight to give it in the decisions we make (should I take that opportunity to do a skydive? Just for the record, the answer is almost always ‘yes’!).
It’s this that makes the study we published earlier this year so interesting (to us at least). Before this study we didn’t have any tangible data to offer in answer to that question, and the research led by my PhD student Nadine Smit was the first study to extract the kind of data that could start us on the journey to answering it. We wanted to demonstrate the extent of misleading evidence, but we had to start small and focus on just one place, the Court of Appeal of England and Wales. In the 7-year period between 2010 and 2016, 10,859 cases were identified as having been heard in that court. Through a systematic content analysis approach that drew on information from the case transcripts, 996 cases were identified which involved ‘criminal evidence’, and rulings in 218 of those cases (22%) were argued unsafe because they contained ‘misleading evidence’.
Across these 218 successful appeals, 235 incidents of misleading evidence were identified. The majority (76%) of successful appeals were based upon the same materials available at the original trial, rather than the presentation of new relevant information. Witness (39%), forensic science (32%), and character evidence (19%) were the most commonly observed evidence types, with the validity of witnesses (26%), the probative value of forensic evidence (12%), and the relevance of character evidence (10%) being the most prevalent issues. Additionally, the majority (66%) of misleading evidence types related to interpretations addressing activity level propositions (establishing answers to questions of ‘how’ and ‘when’, as opposed to source attribution, questions of ‘what’ or ‘who’).
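For readers who like to see how the headline proportions fall out of the raw counts, here is a minimal sketch that recomputes them. The counts are taken directly from the text above; the variable names are my own, and the percentages are rounded to whole numbers as in the study summary.

```python
# Counts reported in the text (Court of Appeal of England and Wales, 2010-2016).
total_cases = 10_859        # cases heard in the court over the 7-year period
evidence_cases = 996        # cases identified as involving 'criminal evidence'
misleading_cases = 218      # rulings argued unsafe due to misleading evidence
misleading_incidents = 235  # incidents of misleading evidence across those cases

# Share of 'criminal evidence' cases ruled unsafe for misleading evidence.
share_unsafe = misleading_cases / evidence_cases
print(f"Unsafe rulings: {share_unsafe:.0%}")  # → 22%

# Some cases contained more than one incident of misleading evidence.
incidents_per_case = misleading_incidents / misleading_cases
print(f"Incidents per successful appeal: {incidents_per_case:.2f}")  # → 1.08
```

Note how small the final slice is relative to the court’s full caseload: 218 of 10,859 cases is about 2%, which is part of why these identified cases are best read as a lower bound rather than the full picture.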
The study identified that:
the relevance, probative value, and validity of evidence are often misunderstood or miscommunicated when expressing beliefs in (competing) hypotheses, and that this has led to wrongful convictions.
many of these misleading aspects could have been prevented by providing more transparency in the relationship between evidence and hypotheses.
a challenge remains to prioritise research that can be used to underpin the testing of hypotheses, and to continuously assess the magnitude and consequences of misleading evidence in criminal cases.
there is a need for an important debate about improving access to case documents post-conviction (and if you’re interested in this, check out the great work from the Centre for Criminal Appeals at: https://www.crowdfunder.co.uk/show-us-the-evidence).
The cases identified in this study are likely to be only the tip of the iceberg, because we looked solely at successful appeals in the Court of Appeal. The presence of misleading evidence can no longer be simply attributed to individual ‘bad apples’ in the system. These cases indicate that there is a ‘dark figure of misinterpreted evidence’, in a similar vein to the ‘dark figure of crime’ (the number of crimes that are never reported or discovered), and this needs to be addressed.
So, for good or for ill, it helps to have some numbers when thinking about how to give a tangible answer to that question of ‘how big a deal is the misinterpretation of forensic science evidence, really?’. But getting the numbers is just the beginning. It’s what we do with them, and the insights that they offer, that is so important.