The scientific evidence used to sway the opinions of jury members and judges in US courtrooms may not be all that reliable, according to the findings of a new study in the journal Psychological Science in the Public Interest.
The analysis found that some 60 percent of the psychological assessments admitted as evidence appear to be based on junk science, yet only about 5 percent of these dodgy testimonies are ever challenged by lawyers.
The study authors began by pooling data from 22 separate surveys of forensic mental health practitioners, who were found to use a total of 364 different psychological assessment tools when acting as experts in legal cases. These tools serve a variety of purposes, such as determining a defendant’s competence to stand trial or indicating whether a parent should be granted custody of a child.
A team of coders was then employed to scan the scientific literature for references to each of these 364 assessment tools and to determine whether they were generally accepted as reliable by the scientific community.
The results indicated that only 67 percent of the psychological tests used by forensic experts in court cases are generally accepted as valid by the scientific community. Fewer still, just 40 percent, received favorable reviews in the Mental Measurements Yearbook, which is regarded as an authority on the effectiveness of psychological testing.
The researchers then narrowed their focus to 30 of these 364 assessment tools, which were used in a total of 372 court cases in the US between 2016 and 2018. Even though only 40 percent of these tests are regarded as sound by the scientific community, lawyers challenged their admissibility in just 19 of those cases.
This means the assessment tools went unchallenged in 94.9 percent of cases (353 of 372). What’s more, only six of the 19 challenges were successful.
The team notes limitations to the study, primarily regarding its scope. "We did not conduct a comprehensive survey of the case law regarding the admissibility of psychological tools; rather, we conducted a limited but organized investigation into a sample of legal cases citing a sample of psychological tools. Our methods provide us a rough nonparametric sense of the population of cases."
In their write-up, the study authors explain that lawyers can’t be blamed for letting so much junk science into the courtroom, since they are not trained psychologists and are therefore unable to identify the weaknesses of an assessment tool unless an expert happens to alert them.
Furthermore, because precedent carries so much weight in legal proceedings, and because many of these questionable tools have been ruled admissible for so long, a de facto precedent now exists for allowing bad science to stand as evidence in US courtrooms.