How Do (Human) Child Welfare Workers Respond to Machine-Generated Risk Scores?

Martin Eiermann, Maria Fitzpatrick, Katharine Sadowski, Christopher Wildeman

Sociological Science January 6, 2026
10.15195/v13.a1


Algorithmic risk scoring tools have been widely incorporated into governmental decision making, yet little is known about how human decision makers interact with machine-generated risk scores at the street level. We examined such human–machine interactions in the child welfare system, a high-stakes setting where caseworkers ascertain whether government interventions in family life are warranted. Using novel data—verbatim transcripts of caseworker discussions—we found that decision makers (1) disregarded scores in the middle of the distribution while paying attention to extremely high or low risk scores and (2) rationalized divergences between human decisions and machine-generated scores by highlighting the algorithm’s overemphasis on historical data and specific risk factors and its lack of contextual knowledge. This meant that caseworkers were unlikely to modify their decisions so that they aligned with risk scores. However, we did not find evidence of principled resistance to algorithmic tools. Our findings advance research on such tools by specifying how human perceptions of the utility and limitations of novel technologies shape discretionary decision making by state officials, and they help to explain these tools' uneven and potentially modest impact on the bureaucratic management of social vulnerability.
This work is licensed under a Creative Commons Attribution 4.0 International License.

Martin Eiermann: Department of Sociology, University of Wisconsin-Madison.
E-mail: meiermann@wisc.edu.
Maria Fitzpatrick: Brooks School of Public Policy, Cornell University; National Bureau of Economic Research.
E-mail: maria.d.fitzpatrick@cornell.edu.
Katharine Sadowski: Graduate School of Education, Stanford University.
E-mail: ksadow@stanford.edu.
Christopher Wildeman: Department of Sociology, Duke University; Sanford School of Public Policy, Duke University; ROCKWOOL Foundation Research Unit.
E-mail: christopher.wildeman@duke.edu.

Acknowledgments: The authors are grateful to Ruby Richards and Nicole Adams for feedback on earlier drafts of this manuscript and the Douglas County Department of Human Services for providing data throughout this project.

No supplemental materials.

Reproducibility Package: The terms of our Data Use Agreement with the Douglas County Department of Human Services (DCDHS) legally prohibit us from sharing the original data, which are temporarily stored on a secure Cornell University research server, cannot be shared externally, and must be destroyed at the end of the agreement period. These restrictions reflect the presence of highly sensitive child welfare data in verbatim transcripts of caseworker discussions. All analysis code and documentation of qualitative coding workflows are publicly available at OSF. Researchers with questions about Douglas County Decision Aide (DCDA) data that were generated during the randomized controlled trial may contact: Ruby Richards, Director of Human Services, Douglas County (303-688-4825).

  • Citation: Eiermann, Martin, Maria Fitzpatrick, Katharine Sadowski, and Christopher Wildeman. 2025. “How Do (Human) Child Welfare Workers Respond to Machine-Generated Risk Scores?” Sociological Science 13: 1-21.
  • Received: September 3, 2025
  • Accepted: November 14, 2025
  • Editors: Ari Adut, Jeremy Freese
  • DOI: 10.15195/v13.a1

