In 2005, Congress appropriated funds – P.L. No. 109-108, 119 Stat. 2302 – and asked the National Academy of Sciences (NAS) to execute certain tasks identified at page 46 of Senate Report No. 109-88. One of those tasks was to “disseminate best practices and guidelines concerning the collection and analysis of forensic evidence[.]” Id.

With that statutory mandate, in 2009, a NAS committee published a groundbreaking report entitled Strengthening Forensic Science in the United States: A Path Forward. That report was discussed at a Congressional hearing where it was said that the NAS had “found that many of the techniques and technologies used in forensic science lack rigorous scientific discipline.” Congress concurred with the report’s recommendation that “a new agency, separate from the legal and law enforcement communities, be created to provide oversight to correct these inconsistencies which impact the accuracy, reliability, and validity of forensic evidence.” Id.

Accordingly, the task of reforming the practice of forensic science in this country was entrusted to an agency of the Department of Commerce: the National Institute of Standards and Technology (NIST). In turn, in 2015, NIST chartered the Organization of Scientific Area Committees (OSAC) for Forensic Science, which is a multi-disciplinary body made up of over 500 forensic science practitioners.  OSAC’s job is to facilitate the development of forensic science standards for the nation.

Towards that end, last month, one of OSAC’s committees, the Legal Resource Committee (LRC), issued a memorandum entitled Question on the Hypothesis Testing in ASTM 2926-13 and the legal principle that false convictions are worse than false acquittals. That document is published by the Harvard Law Review at 130 Harv. L. Rev. F. 137 (2017), and goes straight to the heart of how forensic evidence finds its way into American courtrooms. The LRC’s memorandum establishes that a forensic scientist does not have to adopt the conventions of the legal forum he or she serves.  Accordingly, when working on a criminal matter, an analyst does not have to presume innocence and does not have to apply a beyond-a-reasonable-doubt standard of proof to his or her conclusions.

The LRC’s memorandum starts by acknowledging that the criminal justice system, with its presumption of innocence and high burden of proof, is structured to assume that evidence does not inculpate an accused. In contrast, the standard forensic practice is to assume samples “match” (incriminating the accused) unless the relevant sample can be excluded. Id. at 140-141. The LRC discounts this difference in approach by declaring that “the law prizes neutral experts” and also “contemplates” that experts will perform their work without “favoring one party over the other.” Id. at 139.

The LRC then opines that, when reaching their conclusions, analysts do not need to use a standard that equates to proof beyond a reasonable doubt.  Indeed, the memorandum advises:

A liberal matching rule can be legally acceptable.

Id. at 142 (italics in original).  The LRC reaches this conclusion even while acknowledging that under “a broad matching rule . . . as the uncertainty of the measurement method used for comparison increases, the false match rate increases.” Id. However, the opinion continues:

Although this incongruity [between a liberal matching rule and the increased risk of false matches] has very serious implications for how a finding of ‘no significant difference,’ ‘indistinguishable,’ ‘matching,’ or ‘consistent’ should be presented in court, it does not necessarily render the finding inadmissible.

Id.

However, the memorandum’s approval of lesser standards of proof comes with two important caveats.  First, whatever standard is employed, forensic analysts must “provide evidence using methods and analyses that are no less rigorous than the norm for scientific inquiry and publication.” Id.  Second, experts, whatever standard they apply, must take care to ensure that any conclusion they issue is “reported with a suitable description of its probative value.” Id.  On that point, the LRC explained, “[i]f this is done, the fact that it is harder for a laboratory whose measurements are imprecise to reject [the match] is balanced by the fact that the laboratory must report a less impressive estimate of the probability of a false inclusion.” Id.  Further, the LRC cautions forensic experts that:

[A] report of a match without more information about the probability of a match to other [sources] in the relevant population would not fulfill the expert’s role of impartiality and adequately educating the trier of fact of what the scientific measurements establish.

Id. at 143-144.
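The LRC’s “balancing” point is easier to see with numbers. The following is a minimal sketch (hypothetical values, not drawn from any actual laboratory or case) of why a laboratory with imprecise measurements, which must use a wide “match window,” will sweep in more innocent sources – and therefore must report a larger, less impressive probability of a false inclusion:

```python
# Hypothetical illustration of the LRC's point: as measurement
# uncertainty grows, the "match window" widens, the false match rate
# rises, and the lab must report that larger false-inclusion
# probability to the factfinder.

import random

random.seed(0)

def false_match_rate(window, trials=100_000):
    """Fraction of unrelated sources (modeled as values uniform on
    [0, 1]) that fall within `window` of a questioned sample at 0.5."""
    questioned = 0.5
    hits = sum(abs(random.random() - questioned) <= window
               for _ in range(trials))
    return hits / trials

for window in (0.01, 0.05, 0.20):  # precise lab -> imprecise lab
    fmr = false_match_rate(window)
    print(f"match window ±{window:.2f}: "
          f"probability an innocent source 'matches' ≈ {fmr:.3f}")
```

On this toy model, the imprecise laboratory declares a “match” against roughly forty percent of innocent sources – a figure that, if candidly reported, carries far less weight than the precise laboratory’s two percent.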

Though only a month old, the LRC’s memorandum has already become the subject of considerable academic discussion.  A kind reader brought to my attention a commentary entitled Hypothesis Testing in Law and Forensic Science: A Memorandum authored by Penn State Law’s Associate Dean for Research and Distinguished Professor of Law, David H. Kaye.  Dean Kaye’s commentary is published by the Harvard Law Review at 130 Harv. L. Rev. F. 127 (2017) and, as an introduction to the LRC’s memorandum, explains some of that document’s more esoteric allusions in greater detail.  Specifically, he explains the three main statistical approaches that forensic scientists use to evaluate evidence – the “frequentist, likelihood, and Bayesian” approaches.  Id. at 132. The frequentist’s “matching” approach is the one described in the LRC’s memorandum and, according to Dean Kaye, has traditionally “dominated thinking about trace evidence.”  Id. at 136.  However, he posits that “other schools of statistical thought [exist which] hold that a better measure of probative value is available and that there is no fundamental reason [for a forensic analyst] to make an inherently arbitrary match/no-match decision.” Id. at 134.

A second commentary entitled The Burden of Proof and the Presentation of Forensic Results by Professor Edward K. Cheng of the Vanderbilt University Law School is published by the Harvard Law Review after the LRC’s memorandum at 130 Harv. L. Rev. F. 154 (2017).  Professor Cheng expounds upon the LRC’s admonition that experts must ensure their conclusions are accompanied by sufficient information to allow factfinders to understand the value of those conclusions.  He opines that the contextual information an expert must provide the factfinder along with a conclusion is a “likelihood ratio associated with the ‘match’ or ‘no match[.]’” Id. at 155.  So long as the expert is required to articulate a likelihood ratio, Professor Cheng believes the expert should be permitted to use whatever standard he or she prefers. Fairness to the accused will be protected by the fact that, if an expert employs a less stringent standard, the resulting likelihood ratio will be less impressive.  Id. at 158-159.
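Professor Cheng’s self-correcting mechanism can be sketched with arithmetic. The likelihood ratio is the probability of observing a “match” if the samples share a source, divided by the probability of observing one if they do not. The numbers below are purely hypothetical, chosen only to show the direction of the effect – a more liberal standard declares matches more readily against innocent sources, so its likelihood ratio is smaller:

```python
# Sketch (hypothetical numbers) of why a liberal matching standard
# yields a less impressive likelihood ratio:
#   LR = P(declared match | same source) / P(declared match | different source)

def likelihood_ratio(p_match_same, p_match_diff):
    """Likelihood ratio for a reported 'match'."""
    return p_match_same / p_match_diff

# Stringent standard: rarely matches an innocent source, but
# occasionally fails to match the true source.
strict = likelihood_ratio(p_match_same=0.90, p_match_diff=0.001)

# Liberal standard: almost never misses the true source, but declares
# a "match" against one innocent source in fifty.
liberal = likelihood_ratio(p_match_same=0.99, p_match_diff=0.02)

print(f"strict standard:  LR = {strict:.0f}")   # -> 900
print(f"liberal standard: LR = {liberal:.1f}")  # -> 49.5
```

On these assumed figures, the liberal laboratory’s “match” is roughly eighteen times less probative than the strict laboratory’s, which is precisely the discount Professor Cheng would have the factfinder apply.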

A third commentary entitled No Room for Error: Clear-Eyed Justice in Forensic Science Oversight by New York University Professor of Law Erin Murphy is also published by the Harvard Law Review after the LRC’s memorandum at 130 Harv. L. Rev. F. 145 (2017).  Professor Murphy disagrees with allowing analysts to employ less stringent proof standards to reach their conclusions. However, she does not dispute that:

It is an axiom of evidence law that “a brick is not a wall.” Any single piece of evidence need not prove the ultimate fact conclusively; it can simply be a component of a larger whole that supports a finding of guilt beyond a reasonable doubt.

Id. at 146 (string citation omitted).  Indeed, she agrees that, legally, it is “probably right” that “a forensic analyst may declare a ‘match’ using a standard that does not require confidence beyond a reasonable doubt.”  She agrees that statement is legally sound, at least, “so long as the significance of the particular margin of error tolerated is adequately conveyed to legal factfinders.” Id.

But Professor Murphy’s concern lies, like the devil, in the details.  The LRC’s latter premise – that experts will adequately convey to factfinders the significance of the margin of error associated with their conclusions – is what Professor Murphy doubts can be achieved.  As she puts it:

[E]ven with [] qualifying statement[s], there are reasons to doubt whether a factfinder can truly distinguish between ‘match’ testimony offered by an analyst applying generous margins of error and ‘match’ testimony offered by an analyst applying much stricter standards, as well as whether the forensic analyst has sufficient comprehension to offer such explanations.

Id. at 147.

Articulating her concerns further, Professor Murphy first argues that “factfinders will be largely incapable of assigning meaningful weight to evidence according to the subtle methodological differences described.” Id. at 148.  She notes that:

[S]tudies of how jurors incorporate error rate information generally indicate that they do so poorly, even when efforts are made to help them assimilate a probability and its associated confidence level.  Thus, if an expert may testify that the suspect’s sample ‘matches’ or ‘could have come from the source,’ then research suggests that the jury will internalize these ideas, either minimizing or dismissing outright any added qualifying statements as to the statistical uncertainty underlying those assertions.  In short, what matters most to jurors is the ‘match’ statement – whether qualitative or quantitative; jurors do not seem to mire themselves in the details of the scientific standards applied to determine what constitutes a match.

Id. at 149.  Professor Cheng’s commentary shares this concern.  He notes that the idea that factfinders will discount the probative value of forensic conclusions that are derived by liberal standards presents an “important practical issue[.]” The Burden of Proof and the Presentation of Forensic Results at 160.  Specifically:

[T]he solution requires that juries or other legal actors comprehend and properly use likelihood ratios.  An increasingly complex literature has emerged on lay understanding of likelihood ratios and how such quantitative information is best presented.  Research thus far has yielded no easy answers, with Professor William C. Thompson and Eryn J. Newman recently concluding that the best presentation method may depend on context and the specific forensic discipline, and even then, the definition of “best” is debatable.

Id. at 161 (referencing Professors Thompson & Newman’s empirical research, published at Lay Understanding of Forensic Statistics: Evaluation of Random Match Probabilities, Likelihood Ratios, and Verbal Equivalents, 39 Law & Hum. Behav. 332 (2015) (UC eScholarship)).

But Professor Murphy is not only skeptical of whether jurors and judges have the capacity to give weakly founded forensic conclusions their proper weight.  She also posits that:

[G]iven the realities of our system, it is not really factfinders who will undertake this difficult task [of weighing the probative value of a ‘match’ given the methodology behind that match], but rather defense lawyers and prosecutors, yet the culture of forensic evidence in the criminal justice system does not currently encourage the clear transmission of this information.

No Room for Error: Clear-Eyed Justice in Forensic Science Oversight at 148.

Professor Murphy notes that “a logical response to this concern would be to cite to that great engine of truth: cross-examination by defense counsel.”  Id. at 149. “The trouble with that . . . is that the actual practice of criminal defense does not always live up to its ideals[.]” Id. She posits that those ideals are often left unfulfilled because many indigent defense lawyers face excessive workloads and “lack basic competencies, much less sophisticated scientific expertise.” Id.

Professor Murphy also notes that even skilled defense counsel often are not in a position to challenge forensic conclusions because “[f]orensic reports often do not even make it to defense counsel before a plea is negotiated; live testimony is rarer still.” Id. at 150.  As a result, the LRC’s premise that the probative value of expert opinion will be “tested through an adversarial crucible is pure fantasy in the overwhelming number of cases.” Id.  “In this respect, the LRC Memo’s demand for a ‘suitable description’ of the evidence’s probative value feels tepid.” Id.  Professor Murphy’s suggested change to the LRC’s guidance:

Rather than nest within the Memo a single line encouraging greater clarity, the Committee could have explicitly conditioned tolerance for relaxed error margins on reports that contain express, clear statements as to the meaning and impact of different margins.

Id.

2 Responses to “Scholarship Saturday: Forensic science standards beginning to take form”

  1. DCGoneGalt says:

    Awesome, thank you Isaac Kennen!

  2. Dew_Process says:

    Exceptionally well done, thank you!
     
    FYI, Prof. Kaye was a scientist before becoming a lawyer, with the following academic credentials: J.D., Yale Law School;  M.A., Harvard University (astronomy); and B.S., MIT (physics).
     
    Prof. Erin Murphy should be a name familiar to anyone involved with forensic DNA issues, as she is one of the true legal scholars in this area. Starting with a J.D. from Harvard with Honors, she clerked for Judge Merrick Garland on the U.S. Court of Appeals for the D.C. Circuit, and then was a Public Defender in DC prior to entering academia. Her book, Inside the Cell: The Dark Side of Forensic DNA (Nation Books, 2015), is a “must read” for any litigator with DNA issues.
     
    Prof. Cheng is likewise a force to be appreciated. From Vanderbilt Law’s website:
     

    He holds a B.S.E. (summa cum laude, Phi Beta Kappa) in electrical engineering from Princeton University, where he also earned a certificate from the Woodrow Wilson School for Public and International Affairs; an M.Sc. in information systems (with distinction) from the London School of Economics and Political Science, where he was a Fulbright Scholar; and a J.D. (cum laude) from Harvard Law School, where he was the articles, book reviews, and commentaries chair of the Harvard Law Review. He is currently pursuing a Ph.D. in statistics at Columbia University.
     

    Again, a great piece of work for everyone reading CAAFlog!