About 40% of the psychological tests used to assess those party to litigation are favorably rated by experts for their scientific merit, according to research explored during a news briefing at the 2020 AAAS Annual Meeting on Feb. 15. Release of the underlying study by the journal Psychological Science in the Public Interest coincided with the news briefing.
Tess Neal, a forensic psychologist, shared the study’s findings alongside psychologists Ira Hyman and Sarah Brown-Schmidt, who also presented their research at a related AAAS Annual Meeting symposium, “Psychological Science: Lessons for the Law,” held after the news briefing.
Neal’s study examined these tests, which are used to determine parental fitness in child custody disputes, gauge the mental health of defendants mounting insanity defenses, or establish the disability status of those seeking Social Security benefits, outcomes that influence many kinds of legal proceedings. Yet while 60% of these tests receive mixed or negative reviews in the scientific literature, according to the study, lawyers challenge their use in only 5.4% of cases.
“This is a problem because bad psychological evidence may contribute to unfair legal processes and unjust verdicts,” said Neal, first author of the paper “Psychological Assessment in Legal Contexts: Are Courts Keeping ‘Junk Science’ Out of the Courtroom?”
Expert witnesses, lawyers, and judges who introduce evidence that lacks a scientific foundation threaten the legitimacy of the legal system, the study’s authors contend.
“Courts are not separating the good from the bad,” said Neal. “Even though courts are required to screen out ‘junk science,’ one of the major findings of our paper is that nearly all psychological assessment evidence is admitted into court without even being screened.”

Neal and a team of psychologists and law professors conducted the study in two parts. They identified 364 tests used by psychologists who serve as expert witnesses in legal cases. Such tests are sold by for-profit companies, sometimes for hundreds of dollars, with the costs often passed on to clients or the state.
Researchers found that 67% of the tests are accepted by other experts in the field. Yet publications that conduct scientific reviews of the tests, such as the Mental Measurements Yearbook, rated only 40% as “scientifically favorable.” Neal said the discrepancy can be explained: older, less accurate tests continue to be used, while newer, more reliable tests have yet to gain popularity among psychologists.
Hyman, a professor of psychology at Western Washington University, shared research at the press briefing examining how unreliable eyewitness testimony can be. His team found that most eyewitnesses are unaware of a crime or accident as it unfolds. When such witnesses are questioned about the start of the incident, they unconsciously fill in details they did not actually observe, research found.
“We’re not sitting around watching for crimes or accidents to happen,” said Hyman. “What we’ve been interested in our research recently is how not being prepared, and probably doing something else, is going to impact the awareness and potential memories of eyewitnesses.”

Hyman’s research seeks to inform the judicial system about how to effectively weigh eyewitness testimony and how to improve techniques for questioning eyewitnesses.
Brown-Schmidt of Vanderbilt University presented findings examining how well witnesses recall conversations and in what contexts memories are better or worse. She discussed how interactions with online content can influence witness memories and presented findings on the value of contemporaneous notetaking on later recall of conversations.
“If you go on to Instagram and browse through an Instagram feed, you’re way more likely to remember a post you commented on as opposed to one you just viewed,” said Brown-Schmidt. “So, if you were in a dispute with somebody and a conversation was relevant, it’s totally possible that you would have different memories of what was said.”

The researchers on Neal’s team also studied specific court cases in which a sample of poorly reviewed assessments were introduced by lawyers and admitted as evidence by judges. In this group, lawyers challenged the assessments only 5.4% of the time; just half of those challenges concerned the evidence’s scientific validity, and only a third of the challenges succeeded.
U.S. Supreme Court rulings on evidence require courts to consider four factors in determining whether expert evidence is scientifically reliable: whether the methods an expert witness relies upon have been empirically tested; whether the error rate of the method is known; whether the method has been reviewed in a peer-reviewed journal; and whether the method is generally accepted by the scientific community. Judges are then to analyze the methods and act as “gatekeepers,” excluding those that perform poorly. The study found that courts often fail to adhere to these standards.
While Neal does not expect the study’s data to reverse past cases on its own, she said it could inspire defense lawyers to seek to overturn verdicts that rested on weak assessments. Neal hopes the team’s research will motivate lawyers and judges to approach evidence from psychological assessments more skeptically and empower individuals involved in legal cases to question their assessment results.
“We believe all these things have the potential to make the legal cases in which psychologists are involved more fair and just,” said Neal.