The Government Wants to Hear Your Creepy Plan to Predict Behavior With AI and Ankle Monitors


Have a plan to track and predict the behavior of ex-cons using artificial intelligence? The U.S. Department of Justice is all ears.

Last week, the Justice Department announced it was awarding up to $3 million for artificial intelligence projects that promote “the successful reentry of offenders under community supervision.” The solicitation for R&D proposals characterizes the program as a way “to reduce crime and to protect police, other public safety personnel, and the public.” That might sound innocent enough—until you hear the Justice Department’s examples.

While presented as measures to keep former convicts from being rearrested and sent back to prison, some of the “potential applications” of AI listed in the document are disturbingly invasive.

One of the three offered examples is more innocuous than the others. Categorized as “Mobile Service Delivery,” this application is described as using AI to offer offenders personalized resources through their phones, such as information about suitable reentry programs or remote peer and mentor support.

The other ways in which the U.S. government hopes to use AI to reduce recidivism are far less concerned with developing personal resources for offenders, and instead signal a future in which corrections agencies lean on technology to track and predict offenders’ behavior in real time. One such example, called “Situationally Dependent, Real-time Updates to an Offender’s Risk-need-responsivity Assessment,” involves using AI to identify when an offender is in a high-risk situation and to send real-time updates to both supervision officers and the offenders themselves.

The most unsettling idea detailed in the document, however, is referred to as “Intelligent Offender Tracking,” which would use ankle monitor data to detect “and possibly predict” an offender’s behavior. The Justice Department suggests this data could be used to autonomously alert a supervising officer, a mentor, or the offender whenever the offender spends time in places associated with risky behavior. Alerts sent to the offender could come from a “chat bot system” intended to deescalate the situation through the phone.

“AI-initiated actions may also include notifying the offender through their mobile device to suggest a cooling-off period in a safe space, or to promote behavior modification techniques,” the document states.
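To make concrete what this kind of geofence-driven alerting could look like under the hood, here is a minimal, purely hypothetical sketch. Nothing in it comes from the DOJ solicitation: the flagged locations, dwell threshold, and alert wording are all invented for illustration, and real systems would presumably be far more elaborate (and far more opaque).

```python
# Hypothetical sketch of location-based "risky behavior" alerting.
# All zones, thresholds, and alert text are invented for illustration.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    lat: float
    lon: float
    timestamp: float  # seconds since epoch, assumed time-ordered

# Invented example of a location an agency might flag as "high risk."
FLAGGED_ZONES = [
    {"name": "example bar district", "lat": 40.7410, "lon": -73.9896, "radius_m": 150},
]
DWELL_THRESHOLD_S = 15 * 60  # alert after 15 minutes inside a flagged zone

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def check_dwell(pings):
    """Return alert strings for any flagged zone the wearer lingers in.

    Simplification: treats all in-zone pings as one continuous visit.
    """
    alerts = []
    for zone in FLAGGED_ZONES:
        inside = [p for p in pings
                  if haversine_m(p.lat, p.lon, zone["lat"], zone["lon"]) <= zone["radius_m"]]
        if inside and inside[-1].timestamp - inside[0].timestamp >= DWELL_THRESHOLD_S:
            minutes = (inside[-1].timestamp - inside[0].timestamp) / 60
            alerts.append(f"ALERT: wearer in '{zone['name']}' for {minutes:.0f} min")
    return alerts

# One ping per minute for 15 minutes inside the flagged zone triggers an alert.
pings = [Ping(40.7411, -73.9897, t) for t in range(0, 16 * 60, 60)]
print(check_dwell(pings))  # ["ALERT: wearer in 'example bar district' for 15 min"]
```

Even in this toy form, the design choice is visible: the system treats presence in a place as a proxy for intent, which is exactly the leap that makes the real proposals so fraught.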

Not all of these suggestions are inherently bad; offering ex-convicts personalized, accessible resources to independently (and non-invasively) help themselves is a worthwhile goal. But using AI to predict offenders’ behavior and send that information in real time to corrections officers is a strategy that serves only the ones doing the policing.

Aside from the obvious invasion of privacy, there is plenty of evidence that criminal justice AI is still deeply biased, and it’s these biased systems that corrections officers and mentors might blindly trust to predict risky behavior that hasn’t even happened yet. Of course, the winning proposals have yet to be selected, but the solicitation indicates the type of intrusive projects the department might be most interested in. According to the document, all of the training data will also be given to the Justice Department, “along with detailed implementation instructions.”

[h/t Logan Koepke]
