Why Artificial Intelligence, Machine Learning May Be First Response’s Next Big Technologies

In late 2016, the National Aeronautics and Space Administration (NASA) and Department of Homeland Security (DHS) quietly unveiled a project named AUDREY, otherwise known as the Assistant for Understanding Data through Reasoning, Extraction, and sYnthesis. Combined with NASA’s sterling public image, the experimental technology’s lofty goal and cutting-edge approach made it inherently newsworthy. In short, the program aimed to make firefighters safer and more effective in the field, using artificial intelligence (AI) and machine learning techniques to provide real-time positioning and hazard information, among other data, to personnel throughout the fireground.

Of course, this is not the first time NASA’s advancements have shown promise in the terrestrial realm of first response. Even so (and despite NASA’s uncharacteristically quiet stance on the technology since its formal unveiling), AUDREY is notable for what it forecast: a future where firefighters, emergency medical personnel, law enforcement professionals, and correctional staff make critical decisions based on input from AI-based computer programs and machine learning.

The foundations of that future are largely in place. Institutions across the public sector, responder organizations naturally included, have long bolstered human tasks and decision-making processes with digitally derived insights. Looking at advancements like AUDREY, however, it is fair to assume AI, machine learning, and related technologies will continue to grow in scope and capability, further enhancing the abilities of skilled personnel by carrying out tasks only computers can handle.

By the numbers: For responder orgs, untapped data equates to massive AI, machine learning potential

The past few decades have produced a long list of technologies with nearly limitless potential. Personal computers, the internet, smartphones, the cloud, and numerous others have provided an open-ended path to the future, with teams of tech-industry workers continually developing newer and better ways to harness their power. For AI and machine learning, this equates to a list of use cases limited only by the number of organizations requiring their capabilities. The Chinese government’s controversial face-scanning, fugitive-tracking CCTV network is only possible because of these technologies, as are more benign solutions designed to determine ambulance routes or prisoner placement.

Despite the surface-level variety, all of these use cases offer the same basic functionality: parsing a large set of provided data (video camera feeds or the geographic distribution of 911 calls, for instance) and providing insight based on what the system finds (e.g., an alert when a fugitive is spotted or an ideal distribution of squad car patrols). Above other factors, this is AI’s defining attribute for responder organizations. Public entities generate troves of information as a matter of course, making any tool that can pluck workable insights from that information immensely valuable in the right context.

AI’s data-crunching value only grows when machine-learning solutions enter the picture: tools and technologies that allow the system to teach itself. Here, solutions are presented with raw data and a desired outcome, and not much else. While it has nothing to do with response on its face, a popular video of a computer program playing through a video game level it mastered on its own illustrates the concept quite well. Where a human player’s previous experiences, assumptions, and cognitive limitations would undoubtedly color the outcome, the computer devised an unorthodox but extremely effective method to master the content simply by attempting to reach the end, failing, and recording what worked and what did not, thousands of times over.
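
For the curious, here is a minimal sketch of that trial-and-error loop, a technique formally known as reinforcement learning (tabular Q-learning, in this case). The corridor “level,” the pit, the jump action, and every reward value below are invented for illustration; the program in the video relied on more sophisticated machinery, but the core cycle of acting, failing, and recording what worked is the same:

```python
import random

# Toy "level": a one-dimensional corridor. The agent starts at cell 0 and
# must reach the goal; stepping into the pit ends the attempt in failure.
N_CELLS = 10
GOAL, PIT = N_CELLS - 1, 4
ACTIONS = (-1, 1, 2)  # step left, step right, jump right

# Q-table: the learned value of taking each action from each cell.
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_CELLS - 1, state + action))
    if nxt == GOAL:
        return nxt, 1.0, True   # reached the end of the level
    if nxt == PIT:
        return nxt, -1.0, True  # fell in; attempt over
    return nxt, 0.0, False

# Attempt, fail, and record what worked -- thousands of times over.
for _ in range(3000):
    state = 0
    for _ in range(100):  # cap steps so early random wandering terminates
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)  # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# The greedy policy after training; it should jump the pit from cell 3.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)])
```

No one tells the agent to jump at cell 3; the rule emerges purely from repeated failure, which is what makes the approach appealing for problems humans find hard to enumerate.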

What’s more, machine learning’s utility already extends far past high-concept, ultimately frivolous applications like gameplay. Indeed, it has already been used to productive ends in response settings. Most potential uses come down to scale at this point in the technology’s development, since a single computer can analyze certain data faster and more thoroughly than whole teams of humans. For instance, machine-learning systems in one Midwestern prison processed enough calls to help officials break an inmate phone code. The software, self-taught through recordings of Congressional hearings, noticed disproportionate use of the term “three-way” and alerted investigators, who discovered inmates engaging in illicit conversation with unapproved callers. By calling an approved number and requesting a conference call with someone else, inmates were able to call whomever they wished, flouting prison phone policy in the process.
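
The mechanics behind such a flag can be surprisingly simple. The sketch below uses invented transcripts and arbitrary thresholds, and makes no claim to be the prison system’s actual method, but it shows the basic pattern: compare word frequencies in monitored calls against a baseline corpus and surface terms that appear disproportionately often:

```python
from collections import Counter

# Hypothetical, heavily abbreviated transcripts; the real system worked
# from speech-to-text output over enormous call volumes.
baseline_docs = [
    "the committee will come to order and the witness may proceed",
    "thank you chairman i yield the balance of my time",
    "the record will reflect the testimony of the witness",
]
monitored_docs = [
    "hey can you do a three-way with my cousin real quick",
    "set up the three-way again like last time",
    "yeah the three-way worked call him after nine",
    "tell them three-way only no direct calls",
]

def frequencies(docs):
    words = [w for doc in docs for w in doc.split()]
    return Counter(words), len(words)

base_counts, base_total = frequencies(baseline_docs)
mon_counts, mon_total = frequencies(monitored_docs)

# Flag terms disproportionately common in monitored calls relative to
# ordinary speech. The minimum count, smoothing, and 3x threshold are
# all arbitrary choices sized to this toy corpus.
for word, count in mon_counts.items():
    if count < 3:
        continue  # ignore words too rare to mean anything
    mon_rate = count / mon_total
    base_rate = (base_counts[word] + 1) / (base_total + 1)  # smoothed
    if mon_rate / base_rate > 3:
        print(f"flag {word!r}: {mon_rate / base_rate:.0f}x its baseline rate")
```

Run against these toy inputs, only “three-way” clears the bar, which is exactly the kind of statistical oddity that sent investigators digging.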

While not particularly flashy (eventually, some astute human screener would have made the connection between “three-way” and “three-way calling”), the episode illustrates a simple fact in the U.S. and abroad: responders may have expansive data, but they tend to lack the tools needed to capitalize on it. In an era where the right information can be every bit as valuable as the priciest physical goods, AI and machine learning are a natural fit in this regard.

How do first responders use AI today?

The technologies are not at the point where they can make qualified decisions on their own. Considering the complexity and variability public-facing institutions are designed to encompass, they may never reach that point. However, the tools can be a great aid for enhancing human decision-making or providing useful insights in situations where personnel must make decisions based on large, complex datasets.

Take the trend toward predictive policing touched on above. By combining human insights with the scale and speed of computing, law enforcement organizations can now make important patrol and staffing decisions with the support of advanced, automated data analysis: the computer tells them where crime is most statistically likely to happen based on historical data, and stakeholders are free to interpret that output as they see fit. In one study of the Los Angeles Police Department’s AI-backed hotspot mapping, researchers found computer-based predictive tools outperformed human analysts, lowered crime rates, and, if used at larger scale for longer periods, could save the department millions of dollars per year.
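
In spirit, the hotspot half of that analysis is spatial bookkeeping at machine scale. The following sketch, with made-up coordinates and no claim to the LAPD tool’s actual methodology, bins historical incidents into grid cells and ranks the cells:

```python
from collections import Counter

# Hypothetical historical incidents as (x, y) map coordinates. A production
# system would use geocoded reports, timestamps, and far richer features.
incidents = [
    (1.2, 3.4), (1.5, 3.1), (1.8, 3.9), (1.1, 3.7),  # dense cluster
    (6.4, 0.2), (6.9, 0.8),                          # smaller cluster
    (4.0, 7.5), (0.3, 9.1),                          # scattered one-offs
]

CELL = 1.0  # grid cell size, in the same (arbitrary) units as the data

def cell_of(x, y):
    """Snap a coordinate to its containing grid cell."""
    return (int(x // CELL), int(y // CELL))

counts = Counter(cell_of(x, y) for x, y in incidents)

# Rank cells by historical incident count: the crudest possible hotspot map.
# Real predictive-policing tools layer on timing and near-repeat effects,
# but the output has the same shape -- a ranked list of places to patrol.
for cell, n in counts.most_common(3):
    print(f"patrol priority: grid cell {cell} with {n} past incidents")
```

The human half of the arrangement stays intact: the ranking is a recommendation, and commanders remain free to weigh it against everything the data cannot see.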

With its focus on community health and data-driven initiatives, healthcare is another natural candidate for hotspot solutions. Beyond this function, emerging technologies will soon help ambulance services make better care decisions in less time. In research settings, teams have already built self-learning AIs that outperform trained professionals in detecting various cardiology issues. In the real world, dispatchers in Denmark currently have access to phone-monitoring AI that detects nonverbal signs of cardiac arrest (rattling breath among them) and catches common dispatch mistakes such as relaying the wrong address to field personnel. The technology is so accurate it recognized cardiac arrest after a victim’s wife called to report he had fallen off a roof; responding professionals found that he fell because he had suffered the arrest, and that the computer had made its assessment from his rattling breath in the background of the call.
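
At its core, such a dispatch monitor is a classifier: it maps features extracted from call audio to a probability of cardiac arrest. The sketch below is a toy stand-in, not the Danish system’s actual model; the three features and every training example are synthetic, and a real system would learn from thousands of labeled calls:

```python
import math
import random

# Synthetic per-call features a monitoring model might use:
# (agonal-breathing score, caller speech rate, silence ratio).
# Label 1 = confirmed cardiac arrest, 0 = not.
calls = [
    ((0.9, 0.2, 0.7), 1), ((0.8, 0.3, 0.6), 1), ((0.7, 0.1, 0.8), 1),
    ((0.1, 0.8, 0.2), 0), ((0.2, 0.7, 0.1), 0), ((0.1, 0.9, 0.3), 0),
]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5

def predict(x):
    """Probability of cardiac arrest for one call's features."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

# Plain logistic regression trained by stochastic gradient descent.
for _ in range(2000):
    x, y = random.choice(calls)
    err = predict(x) - y
    for i in range(len(w)):
        w[i] -= lr * err * x[i]
    b -= lr * err

# Score a new call: strong agonal-breathing signal, little normal speech.
print(f"arrest probability: {predict((0.85, 0.15, 0.75)):.2f}")
```

The roof-fall anecdote makes sense in this framing: the caller’s words said “fall,” but the acoustic features said “arrest,” and the model only listens to the features.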

Returning to firefighting, AUDREY’s immense potential is offset by the dearth of information on the tool since its initial unveiling, but the NASA/DHS-built tool is not the only option. Soon, firefighters will don protective gear that monitors their risk of cardiovascular injury and other undesirable outcomes based on recorded environmental parameters and personal statistics like age and weight. On the other end of fire safety, current-day programs like WIFIRE use a supercomputer and vast collections of data (such as wind movement and fire trajectory) to make calculated predictions about an active wildfire’s next moves. The data is then transmitted to emergency squads, giving them another tool to help gauge the blaze’s destructive path.
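
WIFIRE’s actual models assimilate live sensor data and real fire physics, but the underlying idea, simulating many plausible futures and reporting where they agree, can be shown in miniature. Every number in this cellular-automaton sketch is invented:

```python
import random

# Toy landscape grid: 0 = unburned fuel, 1 = burning, 2 = burned out.
SIZE = 12
start = [[0] * SIZE for _ in range(SIZE)]
start[6][2] = 1  # ignition point

# Per-neighbor spread probability each step, biased by an easterly wind.
BASE_P = 0.25
WIND = {(0, 1): 0.35, (0, -1): -0.15}  # boost eastward, damp westward

def step(grid):
    nxt = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] != 1:
                continue
            nxt[r][c] = 2  # burning cells burn out after one step
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < SIZE and 0 <= cc < SIZE and grid[rr][cc] == 0:
                    if random.random() < BASE_P + WIND.get((dr, dc), 0.0):
                        nxt[rr][cc] = 1
    return nxt

# Run many simulations and tally how often each cell burns: a crude
# probabilistic forecast of the fire's next moves.
RUNS, STEPS = 200, 8
burn_tally = [[0] * SIZE for _ in range(SIZE)]
for _ in range(RUNS):
    g = [row[:] for row in start]
    for _ in range(STEPS):
        g = step(g)
    for r in range(SIZE):
        for c in range(SIZE):
            if g[r][c] != 0:
                burn_tally[r][c] += 1

for row in burn_tally:
    print(" ".join(f"{n / RUNS:.1f}" for n in row))
```

The printed map skews east of the ignition point, mirroring the wind bias; rendered as a heat map for crews, it becomes one more data point for gauging where the blaze heads next.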

An undeniable enhancement — but concerns linger

In some of these examples, computers build on human research, calculating known trends at a rate and scale teams of people would not be able to match. In others, such as WIFIRE, the machines examine the data for new trends that would be highly difficult or impossible for human eyes to spot. Even if an individual could review and quickly retain information from untold pages of fire- and wind-movement data, they would likely be unable to make the statistical connections a computer can in mere minutes or hours. The same computer, meanwhile, can bring a connection to human researchers and let them determine its ultimate validity. By applying data to known connections and examining it for new ones, these tools allow responder organizations to make better use of their data than ever, including data that does not appear to be useful or relevant.

The widespread implementation of AI does not come without conceptual concerns and real-world obstacles, which are the focus of intense debate even today. First is the growing concern that ostensibly impartial predictive systems within criminal justice are susceptible to the same biases as human-run systems. Automated, AI-backed risk assessment tools used to gauge a suspect’s risk of flight or re-offense have drawn particular criticism on this front. Similarly, hotspot programs that use previous incidences of crime as a predictive factor may subject members of the assessed community to an inherent bias within the data; in other words, if the data is biased, a program that presents it as mathematically correct will carry that bias by proxy.

Then there is the idea that statistics are a useful-but-imperfect way to gauge reality, not the arbiter thereof. If decision-makers sacrifice too much agency because they view the AI system feeding them predictive information as somehow infallible, they give up the useful human ability to consider emotional context and other data beyond a computer’s reach.

These are all valid concerns, and ones that will need to be addressed as AI’s role within the public sector grows. Critics have suggested several potential fixes, such as enhanced transparency and public visibility into the algorithms governing the systems. Moreover, as technologies mature and intertwine, it is easy to see AI taking on a more active role than a consultative one. If a system that assigns a danger score to a suspect or neighborhood holds potential for bias, perhaps a CCTV system that automatically recognizes the signs of violence and dispatches police, or a drone that automatically attacks minor fires, can take a more objective approach (even if they do open the door to questions regarding privacy and overreach).

In any event, AI’s role in many areas of response is certain. As with most high-profile technologies, it is hard to imagine a future where its capabilities improve but its presence diminishes. Even if they never move past their consultative roles, AI and machine learning will continue to give fire, law enforcement, medical, and other agencies in the response sphere more actionable intelligence — and more innovative ways to acquire it.

Additional Links

Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?

Using Machine Learning to Predict Patterns of Crime

Inside the Algorithm That Tries to Predict Gun Violence in Chicago

US Police Use Machine Learning to Curb Their Own Violence


Posted on Oct 4, 2018