Game Changer in Emergency Response: AI System Learns to Detect Natural Disasters from Social Media Images

An international research team has developed a deep learning system that can identify natural disasters from images shared on social media. Using computer vision models trained on 1.7 million images, the researchers demonstrated that the system can analyze, filter, and detect real incidents. The team, led by the Massachusetts Institute of Technology (MIT), included Àgata Lapedriza, leader of the AIWELL research group on artificial intelligence for human well-being at the eHealth Center and a member of the Faculty of Computer Science, Multimedia and Telecommunications at the Universitat Oberta de Catalunya (UOC).

Study: Incidents1M: A Large-Scale Dataset of Images With Natural Disasters, Damage, and Incidents. Image Credit: metamorworks / Shutterstock

As climate change progresses, disasters such as floods, tornadoes, and wildfires are becoming more frequent and more destructive. Since there are no tools to predict where or when such events will occur, it is crucial that emergency services and international aid organizations can respond quickly and effectively to save lives. "Fortunately, in these scenarios, technology can serve a vital role. The information from social media posts can be leveraged as a near real-time data source to grasp the evolution and impact of a disaster," Lapedriza stated.

Previous studies focused on analyzing text posts, but this new research, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, goes further. During her stay at the MIT Computer Science and Artificial Intelligence Laboratory, Lapedriza helped build the incident taxonomy and the training database for the deep learning models, and ran experiments to validate the technology.

The team defined a taxonomy of 43 incident categories, covering natural disasters (avalanches, dust storms, earthquakes, volcanic eruptions, droughts, etc.) and accidents involving human activity (aircraft crashes, construction accidents, etc.). Together with 49 place categories, this taxonomy enabled the researchers to label the images used to train the system.

The team created a database named Incidents1M, comprising 1,787,154 images for training the incident detection model. Of these, 977,088 carried at least one positive label linking them to an incident type, while 810,066 carried negative incident labels. For the place categories, 764,124 images carried positive labels and 1,023,030 carried negative labels.
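A key property of these annotations is that they are partial: for any given image, a class can be explicitly positive, explicitly negative, or simply unlabeled. As a rough illustration, the Python sketch below shows one way such partial multi-label annotations could be encoded; the class names and the 1 / 0 / -1 convention for positive / negative / unlabeled are our illustrative assumptions, not the dataset's actual release format.

```python
import numpy as np

# Illustrative subsets of the 43 incident and 49 place categories
# (assumed names; the full taxonomies are defined in the paper).
INCIDENT_CLASSES = ["flood", "wildfire", "earthquake", "landslide"]
PLACE_CLASSES = ["coast", "forest", "street", "mountain"]

def encode_partial_labels(positives, negatives, classes):
    """Encode partial multi-label annotations as a vector:
    1 = explicitly positive, 0 = explicitly negative, -1 = unlabeled."""
    vec = np.full(len(classes), -1, dtype=np.int8)  # default: unlabeled
    for name in positives:
        vec[classes.index(name)] = 1
    for name in negatives:
        vec[classes.index(name)] = 0
    return vec

# Example: an image confirmed to show a flood, confirmed NOT to show
# a wildfire, and unlabeled for every other class.
incident_vec = encode_partial_labels(["flood"], ["wildfire"], INCIDENT_CLASSES)
print(incident_vec)  # [ 1  0 -1 -1]
```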

Avoiding False Positives

These negative labels were used during training to reduce false positives: an image of a fireplace, for example, does not mean a house is on fire, however visually similar the two may be. Once the database was assembled, the team trained a model to detect incidents "employing a multi-task learning approach and a convolutional neural network (CNN)."
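A minimal PyTorch sketch of this kind of setup appears below: a shared CNN backbone feeds two classification heads, one for the 43 incident categories and one for the 49 place categories, and a masked binary cross-entropy loss counts only the explicitly positive and negative labels while ignoring unlabeled classes. The backbone choice, head design, and masking scheme are our assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_INCIDENTS, NUM_PLACES = 43, 49

class MultiTaskIncidentModel(nn.Module):
    """Shared CNN backbone with separate incident and place heads."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()               # drop the classifier layer
        self.backbone = backbone
        self.incident_head = nn.Linear(feat_dim, NUM_INCIDENTS)
        self.place_head = nn.Linear(feat_dim, NUM_PLACES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.incident_head(feats), self.place_head(feats)

def masked_bce_loss(logits, targets):
    """Binary cross-entropy over explicitly labeled classes only.
    targets: 1 = positive, 0 = negative, -1 = unlabeled (ignored)."""
    mask = targets >= 0
    if mask.sum() == 0:
        return (logits * 0.0).sum()  # no labeled entries in this batch
    return nn.functional.binary_cross_entropy_with_logits(
        logits[mask], targets[mask].float())

# One illustrative training step on random stand-in data.
model = MultiTaskIncidentModel()
images = torch.randn(4, 3, 224, 224)
incident_t = torch.randint(-1, 2, (4, NUM_INCIDENTS))
place_t = torch.randint(-1, 2, (4, NUM_PLACES))
inc_logits, plc_logits = model(images)
loss = masked_bce_loss(inc_logits, incident_t) + masked_bce_loss(plc_logits, place_t)
loss.backward()
```

Computing the loss only over explicitly labeled classes is what lets the negative examples (the fireplace, in the example above) directly push the model away from false positives, while the vast majority of unlabeled image-class pairs contribute nothing.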

After training the deep learning model to detect incidents in images, the team ran a series of tests on a large collection of images downloaded from social media platforms such as Flickr and Twitter. "Our model effectively used these images to identify incidents and we verified that they correlated with certain recorded incidents, like the 2015 earthquakes in Nepal and Chile," Lapedriza explained.
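For inference on such downloaded images, a model of this kind would typically score each image against every incident class and keep the detections above a confidence threshold. Continuing the hypothetical sketch above (the sigmoid scoring and the 0.5 threshold are assumptions):

```python
# Continuing the sketch above: score an image and report likely incidents.
def detect_incidents(model, image, threshold=0.5):
    """Return (class_index, probability) pairs above the threshold."""
    model.eval()
    with torch.no_grad():
        inc_logits, _ = model(image.unsqueeze(0))  # add batch dimension
        probs = torch.sigmoid(inc_logits).squeeze(0)
    return [(i, p.item()) for i, p in enumerate(probs) if p.item() >= threshold]

detections = detect_incidents(model, torch.randn(3, 224, 224))
```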

Using real-world data, the authors demonstrated the utility of a deep learning-based tool for gathering information from social media about natural disasters and situations requiring humanitarian aid. "This can enhance the efficiency of aid organizations during disasters and streamline the management of humanitarian assistance when necessary," she added.

Building on this success, a natural next challenge could be to use the same images of floods, fires, or other incidents to automatically estimate their severity, or to track them more effectively over time. The authors also suggested that future research could combine image analysis with the accompanying text to achieve more precise classification.
