Computers, Privacy & the Constitution

The intersection of AI and policing

-- By GorbeaF - 29 Feb 2024

There is growing concern about the use of artificial intelligence (AI) technologies in everyday life. Part of the general population believes that AI will eliminate jobs and ultimately replace individuals through automation. Others believe that AI is a useful tool that will revolutionize the world. I believe AI can be used as a tool; however, in the specific context of law enforcement, I worry that it may erase the thin line between ethical policing and the abuse of police power. As technology advances, so do both its positive and negative uses. The individual user of AI may not perceive a threat to their confidential information or data and consequently may not care about, or even notice, the harm. But when a government agency, which has access to and generates millions of data points in its databases, authorizes the use of AI technology, it poses a great risk to the whole community. Can there be a balance between the positive and negative effects? The International Association of Chiefs of Police (IACP) has argued that “AI will be a game-changer for law enforcement…”. The use of AI, machine learning (ML), and large language models (LLMs) in policing threatens to erode the limited public trust in police forces; transparency, accountability, and bias are among the most pressing issues.

Policing with AI

Efficiency and workload are the fundamental arguments behind police agencies' adoption of technology, and that adoption has expanded in waves. According to the EDPT Report from 1998, the workload crime imposes on the police has increased fivefold since 1960, while police resources have not kept pace. The police have used every technological advancement available to do their jobs: fingerprinting in the 1900s, crime laboratories in the 1920s, and the two-way radio and patrol car in the 1930s marked breakthroughs that allowed faster response times and more comprehensive investigations. According to the EDPT Report, the 1970s ushered in a new age with the introduction of computers into policing. The following decades brought CCTV, DNA profiling (alongside the nation's automated fingerprint identification database), and cell phones. Now, police agencies are leveraging facial recognition software, AI, and ML to sort, review, and summarize colossal amounts of data and "deduct patterns of behavior". Police departments partner with third-party vendors that supply an array of technologies. For example, according to Police1 public information, the most widely used programs on the software front are Cellebrite Pathfinder and LEMA, which focus on managing digital investigations.

As these examples show, the pairing of technology and police work has grown steadily for decades, and this rapid growth brings new concerns about how data is used and safeguarded. Police agencies enter into agreements with third-party vendors for top-of-the-line products that may not be ready or safe to use. For instance, PredPol was recently discontinued over discrimination problems. We must therefore ask whether the benefits of these technologies outweigh the harms they bring, such as privacy violations, accountability gaps, and discriminatory outcomes.

Privacy concerns

The question posed above seems simple to answer: absolutely! The police should and must use all available resources to prevent, investigate, and solve crime. However, we must delve into the specifics and address the very real privacy concerns attached to these technologies (which I will refer to collectively as "tech"). By allowing the police to rely heavily on tech that manages investigations and gathers intelligence, we place an immeasurable amount of confidential and private information into the algorithms and databases of corporate entities about which we may know nothing. Granted, police agencies will most likely use the tech for a good cause. But, presumably under the contract terms, the entities behind the curtain will be granted the power to use, manipulate, sell, and profit from all the data and information that runs through their software, including any crime and evidence statistics generated along the way. Even assuming that all parties act in good faith, it seems far-fetched to believe the information will remain secure and untouched.

Accountability and transparency

The second issue we face is accountability and transparency. Who will be responsible for the AI in real-time crime centers that sorts, reviews, and analyzes information and ultimately generates predictions and suspects? The use of AI in these scenarios produces nothing more than unintelligent probability guesses based on the information fed to it. "This poses a serious risk of automation and an accountability gap". How will these algorithms be evaluated? How will they be trained? How will oversight work? The only way for police agencies to build public trust in these technologies is to disclose how they work. Yet public disclosure of their functionality and decision process (showing how innocent individuals become suspects by probability) would only be detrimental to their main goal: crime prevention.
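To make the accountability gap concrete, consider a minimal sketch, in Python, of the kind of probability-based scoring described above. Every feature name, weight, and threshold here is invented for illustration and corresponds to no specific vendor's product; the structural point is what the pipeline does not contain: any record of why a given person was flagged.

    import numpy as np

    # Hypothetical "risk score" in the style of a vendor black box.
    # Feature names, weights, and the threshold are all invented.
    rng = np.random.default_rng(0)

    FEATURES = ["prior_contacts", "lives_in_flagged_area", "flagged_social_ties"]
    WEIGHTS = np.array([0.8, 1.5, 1.1])   # proprietary, never published
    BIAS = -2.0
    THRESHOLD = 0.5

    def risk_score(x):
        # Logistic model: returns P(suspect) for one person's feature vector.
        # Note what is absent: no explanation, no audit trail, no way for
        # the scored person to contest the inputs.
        return 1.0 / (1.0 + np.exp(-(x @ WEIGHTS + BIAS)))

    population = rng.integers(0, 2, size=(1000, 3))   # simulated residents
    scores = np.array([risk_score(x) for x in population])
    suspects = np.nonzero(scores > THRESHOLD)[0]

    print(f"{len(suspects)} of 1000 residents flagged as 'suspects'")
    # The output is a list of people; nothing in the pipeline records *why*
    # any individual crossed the threshold -- the accountability gap.

Nothing prevents such a system from running end to end: the missing audit trail is not an error the software reports, which is precisely why oversight must be designed in from the outside.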

Bias and discrimination

The most serious problem agencies will encounter is training their models. For AI and ML to function as tools, they must be trained appropriately, and training relies on past data to produce accurate or predictive results. The issue is that the police's past record on civil liberties and rights has not been good, so models trained on that record inherit its flaws. Ultimately, the cleanest approach would be to start from scratch and feed only new cases and faces to the AI so it can detect suspects through CCTV; in practice, however, that starts to sound like a surveillance state.
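A minimal sketch, again in Python with invented numbers, shows why training on recorded crime is not the same as training on crime. Suppose two neighborhoods have identical underlying crime rates, but one has historically been patrolled twice as heavily; a model that allocates tomorrow's patrols in proportion to yesterday's records amplifies the imbalance rather than predicting anything.

    import numpy as np

    # Hypothetical feedback loop: all numbers invented for illustration.
    rng = np.random.default_rng(1)

    TRUE_CRIME_RATE = 0.05                  # identical in both neighborhoods
    patrol_hours = {"A": 2000, "B": 1000}   # A has been over-policed

    # Recorded crime is what patrols happen to observe, not what occurs.
    recorded = {n: rng.binomial(h, TRUE_CRIME_RATE)
                for n, h in patrol_hours.items()}

    # A naive "predictive" model allocates future patrols in proportion
    # to past recorded crime -- the core logic of hotspot prediction.
    total = sum(recorded.values())
    allocation = {n: c / total for n, c in recorded.items()}

    print("recorded incidents:", recorded)
    print("next patrol allocation:", allocation)
    # A draws roughly two-thirds of future patrols despite identical
    # underlying crime; more patrols in A yield more records in A,
    # which skews the next round further: a feedback loop.

The point is not that any department runs this exact code, but that any model trained on enforcement records rather than ground truth embeds this loop.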

It is only a matter of time before government agencies deploy the real power of AI and related technologies in their everyday functions. There may be some good-faith uses and ideas; however, the ultimate benefits may not outweigh the privacy, accountability, and bias concerns. Agencies should resist the temptation to use the technology until every one of those concerns is fully addressed.

It seems to me that the route to improvement is a clearer point of your own. "Artificial intelligence" here is really a synonym for "software." Your point is that when software is the most important constituent of society, rather than petrochemicals or steel, policing is also different because all of society is different. The evaluations we apply to policing (accuracy, fairness, avoidance of unnecessary intrusion or use of force, effectiveness at maintaining public security and order) will be affected by the transformations policing goes through, and the changes in societal values that will also happen.

All this is basic. The current draft uses many more words to say as much. Now we want a specific relevant idea of your own, which you can describe, analytically evaluate, and enable the reader to put to her own use. You have much room to reduce the introductory material. Levels of technical description are only necessary where the specifics rather than the general importance of software and data analytics are directly involved.


