As more states try to reform the bail system, studies continue to surface highlighting concerns with pretrial risk assessment tools, which are used to determine whether an individual can be safely released. Recently, a signed statement was publicly released, endorsed by researchers from several prestigious universities, including Harvard Law School, Columbia, MIT, and Princeton. The statement notes that risk assessment tools have technical flaws that make the scores they generate inaccurate. In summary, it states that “no tool today can adequately distinguish one person’s risk of violence from another”.

The statement goes on to say that, in order to generate predictions, the tools rely on flawed data, which leads to distorted results. Garbage in, garbage out, if you will. The statement reviewed records from one of the most popular tools in use today, the Public Safety Assessment (PSA), and noted that risk assessments generate substantially more false positives than true positives… 92% of people labeled as high risk for violence did not commit a violent crime while released. This tracks with other examples we have previously blogged about… The tools overestimate pretrial violence, reoffending, and the like, which will lead to a higher number of defendants in jail, the opposite of what pretrial reformists desire.
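
To see how a figure like that 92% can arise, here is a minimal back-of-the-envelope sketch in Python. The rates below are purely hypothetical, chosen only to illustrate the arithmetic (they are not figures from the statement or the PSA): when actual pretrial violence is rare, even a tool with reasonable-sounding accuracy ends up flagging far more innocent people than guilty ones.

```python
# Hypothetical illustration of base-rate arithmetic; none of these
# numbers come from the statement or the PSA itself.
population = 10_000
base_rate = 0.05            # assume 5% of released defendants commit a violent offense
sensitivity = 0.60          # assume the tool catches 60% of those who do
false_positive_rate = 0.35  # assume it also flags 35% of those who don't

violent = population * base_rate     # 500 people who will offend
nonviolent = population - violent    # 9,500 people who will not

true_positives = violent * sensitivity              # 300 correctly flagged
false_positives = nonviolent * false_positive_rate  # 3,325 wrongly flagged

flagged = true_positives + false_positives
print(f"{false_positives / flagged:.0%} of people flagged high-risk were false positives")
# -> 92% of people flagged high-risk were false positives
```

Even under these charitable hypothetical assumptions, roughly nine out of ten “high risk” labels land on people who would not have committed a violent crime, simply because the non-violent group is so much larger to begin with.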

Proponents of criminal justice reform point out that there is a disproportionate number of minorities in our criminal justice system… With all of these talking points circulating around the country, many groups think the solution is to eliminate the bail system and implement a “non-biased” risk assessment tool… Well, according to this statement, backed by researchers at some of the most prestigious and reputable universities in the nation, the data used to build these tools are racially biased… The statement notes that decades of research show that, for the same conduct, African Americans and Hispanics are more likely than whites to be arrested, prosecuted, convicted, and sentenced. This same data will be used to generate predictive scores… Do we want this distorted, inaccurate data to be magnified even further when these tools generate so many false positives?