Harnessing a Tweet Storm: Using Fairness-aware Artificial Intelligence and Social Media to Improve Hurricane Resilience, and More
October 23, 2019
Mingxuan Sun, assistant professor in the LSU Division of Computer Science and Engineering, has received a $300,000 NSF EAGER grant to develop fairness-aware artificial intelligence and machine learning models that use Twitter and other data to improve search-and-rescue efforts and enhance community disaster resilience during hurricanes, earthquakes, floods, and fires.
“For this project, I’m collaborating with Professor Nina Lam in the LSU College of the Coast & Environment. She is an expert on spatial analysis and disaster resilience, while my expertise is in artificial intelligence and machine learning. Our project is interdisciplinary, and I am very excited about the collaboration!
“Our central question is how we can use artificial intelligence for social good, that is, to make fair decisions in predicting and planning for large-scale rescue events. Artificial intelligence, or AI, can help us make decisions, but one of the biggest concerns is the bias problem. As an example, in predictive policing, AI algorithms trained on arrest records predict which areas have a higher risk of crime, and the police department can then use the system to send more police to patrol those areas. The problem is that if the police only look in one area, they will likely make more arrests there than in other areas, and those biased arrest records are then amplified through the feedback loop. What we’re working on is different. Our goal is to come up with prediction models for natural disaster events that compensate for any possible bias, so there can be an equal opportunity for every person to be rescued and get help.
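A toy sketch can make that feedback loop concrete. In the hypothetical simulation below (not the project's actual model, and with made-up numbers), two areas share the same underlying incident rate, but patrols always go to the area whose records already show the most arrests, and new arrests can only be recorded where patrols go, so the gap in the records widens on its own.

```python
# Hypothetical toy simulation of the feedback loop described above.
# Both areas have identical underlying risk; only their records differ.

true_rate = 0.3            # same underlying incident rate in both areas
recorded = [6, 5]          # slightly uneven historical arrest records
patrols_per_step = 20      # patrol budget allocated each round

for step in range(5):
    # Greedy policy: send every patrol to the area that "looks" riskier on paper.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Arrests are only observed where patrols actually go.
    recorded[target] += int(patrols_per_step * true_rate)
    print(f"step {step}: recorded arrests = {recorded}")
```

After five rounds the first area's record keeps climbing while the second area's never moves, even though both areas are equally at risk; the disaster-response analogue would be routing rescue resources only toward the neighborhoods that already generate the most reports.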
“To develop our algorithms, we’re using historical data from Houston during Hurricane Harvey and social media data from Twitter. Tweets can include geotags that show the user’s latitude and longitude, and that location information is really important. Previous research has found that people who use Twitter heavily to report disasters tend to belong to communities of higher social and economic status. During Harvey, it wasn’t necessarily the people who most needed help who were posting and reporting on Twitter, so an AI system based only on Twitter data could carry that socioeconomic bias into forecasts of future events. We propose to leverage this data and investigate how to balance our algorithms: because we also have maps of elevation, socioeconomic status, and many other variables, we can combine traditional data with streaming social media data. Our goal is to revise and adjust our artificial intelligence prototype so we can create an emergency informatics system that’s fair for everyone. Then, of course, we hope to use our framework to monitor and predict other disasters in other areas in the future and to help state and local government agencies allocate resources and direct rescue teams in response.
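The balancing idea can be sketched in a few lines of Python. Everything in the snippet is assumed for illustration only: the tweets, the neighborhood lookup, and the reporting rates are made up, and the project's real pipeline would rely on GIS layers, historical Harvey data, and streaming feeds rather than hard-coded values. The core move shown here is a simple inverse-propensity-style adjustment: each area's raw report count is divided by an estimate of how likely its residents are to report on Twitter, so quieter neighborhoods are not automatically ranked as lower priority.

```python
from collections import defaultdict

# Hypothetical geotagged tweets (lat/lon plus text), assumed for illustration.
tweets = [
    {"lat": 29.76, "lon": -95.37, "text": "water rising fast, need rescue"},
    {"lat": 29.76, "lon": -95.37, "text": "street flooded"},
    {"lat": 29.70, "lon": -95.50, "text": "family trapped on roof"},
]

# Assumed fraction of residents likely to report on Twitter; in practice this
# would be estimated from historical data and socioeconomic maps.
reporting_rate = {"downtown": 0.20, "eastside": 0.05}

def neighborhood_of(lat, lon):
    """Toy lookup standing in for a real point-in-polygon GIS join."""
    return "downtown" if lon > -95.45 else "eastside"

# Count raw reports per neighborhood.
raw_counts = defaultdict(int)
for t in tweets:
    raw_counts[neighborhood_of(t["lat"], t["lon"])] += 1

# Inverse-propensity-style adjustment: divide observed reports by the
# estimated reporting rate to approximate the underlying level of need.
adjusted = {area: raw_counts[area] / reporting_rate[area] for area in raw_counts}

print("raw reports:     ", dict(raw_counts))   # downtown looks like the priority
print("adjusted demand: ", adjusted)           # eastside now ranks higher
```

With the assumed numbers, downtown produces more tweets, but after adjustment the quieter, lower-reporting eastside ranks as the area with greater estimated need.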
“By investigating statistical learning problems in which event data are noisy, biased, and incomplete, and by comparing approaches with and without fairness adjustment, our project will reveal patterns of disparities, if any, and add new knowledge on disaster resilience and emergency management, as well as on how to use artificial intelligence for social good!”
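One concrete way to compare approaches with and without a fairness adjustment is to measure the gap in recall, the share of people who truly needed rescue that the model actually flags, across socioeconomic groups; a smaller gap is one reading of “equal opportunity to be rescued.” The sketch below uses invented labels and predictions purely for illustration and is not the project's evaluation protocol.

```python
# Illustrative comparison with made-up data: does a fairness adjustment
# shrink the recall gap between two socioeconomic groups?

def recall(y_true, y_pred):
    """Fraction of true positives that the model flags."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return hits / positives if positives else 0.0

def recall_gap(y_true, y_pred, groups):
    """Per-group recall and the spread between best- and worst-served groups."""
    by_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        by_group[g] = recall([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return max(by_group.values()) - min(by_group.values()), by_group

# Hypothetical data: 1 = actually needed rescue; groups label each case's area.
y_true   = [1, 1, 1, 1, 0, 1, 1, 0]
groups   = ["high", "high", "low", "low", "low", "low", "high", "high"]
baseline = [1, 1, 0, 0, 0, 1, 1, 0]   # misses most low-income cases
adjusted = [1, 1, 1, 0, 0, 1, 1, 0]   # fairness-adjusted predictions

for name, pred in [("baseline", baseline), ("fairness-adjusted", adjusted)]:
    gap, per_group = recall_gap(y_true, pred, groups)
    print(f"{name}: per-group recall = {per_group}, gap = {gap:.2f}")
```

With these made-up numbers, the baseline reaches everyone in the high-income group but only a third of those in need in the low-income group, while the adjusted predictions narrow that gap.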