Forensic Science Needs Greater Judicial Scrutiny
Apr. 29, 2016 (Mimesis Law) — The myth of General Cincinnatus remains a powerful one. There is something appealingly romantic about a farmer leaving his plow and leading an army to victory. Moreover, Cincinnatus was named Dictator of Rome twice, and each time, he promptly resigned from the Republic’s most powerful office when the threat had passed. The story of a humble farmer successfully leading an army and the Roman Republic was familiar to the Founders, who often drew explicit comparisons between George Washington and Cincinnatus.
In contrast, modern readers might find this story interesting, but they probably find it about as believable as the Roman myth about the infant Romulus and Remus being nursed by a she-wolf. Today, our idea of an army is one that is logistically and technologically complex, which in turn requires specialized professionals to manage people and equipment on the move. Amateurism is increasingly giving way to professional specialization.
This notion is not limited to the military. In all aspects of life, we usually expect that experts will be in charge of anything more complex than a shovel. At least one contributing cause has been the overwhelming success of science at solving problems—even problems we had failed to recognize as problems.
The idea that science was (is) the driving force of social progress is at least as old as the Victorian Age. The idea that scientifically-minded experts could do everything better really picked up steam, eventually leading to the Progressive conclusion that these experts can make everything better, including humans and culture. Ultimately, the 20th century proved that science-wielding experts do not necessarily improve the human condition. But the idea lingers that with just the right social engineering by experts, all human ills, ranging from world poverty to sickness, can be overcome.
The law, like any other social institution, gives scientific experts special deference. But even before turning to the field of law, there is ample evidence to make us suspicious that far too many of these credentialed experts are simply confidence men.
Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.
Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it.
Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. * * *
But, and there is no putting it nicely, deliberate fraud is far more widespread than the scientific establishment is generally willing to admit. One way we know that there’s a great deal of fraud occurring is that if you phrase your question the right way, scientists will confess to it.
But, you may say to yourself, those are all softer sciences, surely physics is without such negligence or fraud. Sadly, you’d be wrong:
Even in physics, supposedly the hardest and most reliable of all sciences, Wilson points out that “two of the most vaunted physics results of the past few years — the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border — have now been retracted, with far less fanfare than when they were first published.”
And while some error may indeed be innocent, much of it is not:
Then there is outright fraud. In a 2011 survey of 2,000 research psychologists, over half admitted to selectively reporting those experiments that gave the result they were after.
The survey also concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in “less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.”
Then there’s everything in between human error and outright fraud: rounding out numbers the way that looks better, checking a result less thoroughly when it comes out the way you like, and so forth. * * *
All of this suggests that the current system isn’t just showing cracks, but is actually broken, and in need of major reform. There is very good reason to believe that much scientific research published today is false, there is no good way to sort the wheat from the chaff, and, most importantly, that the way the system is designed ensures that this will continue being the case.
If this is the state of science in academia, then what, pray tell, is the state of the science in the area of law, which has real consequences, like imprisonment, death, or the possible bankruptcy of a company? Not good either:
The Washington Post published a story so horrifying this weekend that it would stop your breath: “The Justice Department and FBI have formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.”
What went wrong? The Post continues: “Of 28 examiners with the FBI Laboratory’s microscopic hair comparison unit, 26 overstated forensic matches in ways that favored prosecutors in more than 95 percent of the 268 trials reviewed so far.”
The shameful, horrifying errors were uncovered in a massive, three-year review by the National Association of Criminal Defense Lawyers and the Innocence Project. Following revelations published in recent years, the two groups are helping the government with the country’s largest ever post-conviction review of questioned forensic evidence.
Chillingly, as the Post continues, “the cases include those of 32 defendants sentenced to death.” Of these defendants, 14 have already been executed or died in prison.
Fault Lines managing editor Scott Greenfield puts it like this:
That this meant that decades of convictions were based upon knowingly false junk science was similarly presumed.
But this review of actual cases puts a price tag on the fraud. Of the cases reviewed, 32 defendants were sentenced to death. Death. No doubt the crimes for which these defendants were convicted were horrible, but were these the perpetrators of those crimes? Fourteen of those defendants have been executed, fast or slow.
So yeah, I hope they were guilty as sin, because if they weren’t, the fraud perpetrated by the forensic science industry is a crime of enormous magnitude. Want to talk about mass murder? Are 14 enough bodies for you?
If you are a careful reader, then you might have noticed that the 20-year period during which the FBI lab was apparently making some stuff up to send people to jail includes the roughly seven-year period after the game-changing decision, Daubert v. Merrell Dow Pharmaceuticals.
Under Daubert, expert witnesses were required to do more than utter arcane words and give their divine insight into the case at hand; their scientific opinions were now required to be reliable. In retrospect, it is a little puzzling that it took courts so long to affirmatively require reliability. Because jurors are highly influenced by experts, the problem with unreliable expert testimony is that it leads to unreliable verdicts.
Daubert directed trial courts to undertake a gatekeeper role, screening out scientifically unreliable opinions in advance of the jury trial. This was in contrast to the previous regime, under which practically anything went. Despite a series of cases and a revision to the federal rule, trial judges proved resistant to abandoning the “admit anything” default. What this looks like in real numbers is as follows:
The Federal Judicial Center conducted surveys in 1991 and 1998 asking federal judges and attorneys about expert testimony. In the 1991 survey, seventy-five percent of the judges reported admitting all proffered expert testimony. By 1998, only fifty-nine percent indicated that they admitted all proffered expert testimony without limitation. Furthermore, sixty-five percent of plaintiff and defendant counsel stated that judges are less likely to admit some types of expert testimony since Daubert.
In essence, we have untrustworthy or fraudulent experts peddling their junk science, willing advocates who want hired guns to testify, and judges who have been traditionally disposed to admitting all expert testimony—no matter how unreliable.
The bottom line is simple: In a number of forensic science disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions, and the courts have been utterly ineffective in addressing this problem.
So, despite Daubert and Federal Rule of Evidence 702, unreliable expert testimony has consistently been used to convict defendants.
Jurors trust that experts are testifying with scientific authority. Daubert and the federal rules sensibly require that such testimony meet a minimum standard of scientific rigor. More rigor would go a long way toward avoiding the sort of tragedy associated with the FBI lab scandal. This is even more prudent in light of the recent revelations that researchers and academics themselves have proceeded without sufficient rigor.