SAN FRANCISCO—On July 1, San Francisco will become the first city to implement a new “open-source bias mitigation tool” that uses artificial intelligence (AI) to remove racial bias from prosecutors’ charging decisions.

The new tool was developed by the Stanford Computational Policy Lab at no cost to the San Francisco District Attorney’s Office. According to a press release from the SFDA’s Office, Sharad Goel, assistant professor in Stanford’s Department of Management Science & Engineering, is leading the lab’s effort to remove implicit racial bias from police reports.

“The Stanford Computational Policy Lab is pleased that the District Attorney’s office is using the tool in its efforts to limit the potential for bias in charging decisions and to reduce unnecessary incarceration,” said Goel.

There are two phases to the bias mitigation review. Phase 1 will use artificial intelligence to review the police incident report and will automatically remove any information indicating the race of the parties involved. This includes names—of both officers and suspects—specific locations and neighborhoods, hair and eye color, and officer star numbers. After reviewing the redacted incident report, prosecutors will record a preliminary charging decision.

Phase 2 will give prosecutors access to the full, unredacted incident report, and the final charging decision may change after they review other information, such as body camera footage. Prosecutors will then document the additional evidence that led them to change their charging decision.
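The phase 1 redaction step could, in principle, resemble a rule-based pass like the following minimal sketch. The placeholder tokens and patterns below are illustrative assumptions based on the categories named in the article (names, neighborhoods, hair and eye color, star numbers), not the Stanford lab’s actual implementation:

```python
import re

# Illustrative redaction categories, loosely following the article's list.
# These patterns are simplified assumptions, not the lab's actual tool,
# which reportedly uses AI rather than fixed regular expressions.
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Officer|Suspect)\s+[A-Z][a-z]+\b"),
    "[NEIGHBORHOOD]": re.compile(r"\b(?:Bayview|Tenderloin|Mission District)\b"),
    "[DESCRIPTOR]": re.compile(
        r"\b(?:black|brown|blond|blue|green)\s+(?:hair|eyes)\b", re.IGNORECASE
    ),
    "[STAR NO.]": re.compile(r"\bstar\s+(?:number|no\.?)\s*\d+\b", re.IGNORECASE),
}

def redact(report: str) -> str:
    """Replace race-correlated details with neutral placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        report = pattern.sub(placeholder, report)
    return report

report = "Officer Smith (star number 1234) stopped a man with brown hair in the Tenderloin."
print(redact(report))
# → [NAME] ([STAR NO.]) stopped a man with [DESCRIPTOR] in the [NEIGHBORHOOD].
```

In practice a tool of this kind would need far broader coverage (e.g., named-entity recognition rather than word lists), which is presumably why an AI-based approach was chosen over simple pattern matching.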

“SFDA will collect and review these metrics to identify the volume and types of cases where charging decisions changed from phase 1 to phase 2 in order to refine the tool and to take steps to further remove the potential for implicit bias to enter our charging decisions,” said District Attorney George Gascón.

To read the full release, visit: