GeoAI Challenge Workshop: A Modern-day Moonshot for a Trillion-Pixel View of Earth

Experts across varied technology fields gathered at the Department of Energy's Oak Ridge National Laboratory to collaborate on the future of geospatial systems at the Trillion-Pixel GeoAI Challenge workshop. The third iteration of this event focused on multimodal advances in the field, including progress in artificial intelligence, cloud infrastructure, high-performance computing and remote sensing. These capabilities, when combined, can help solve problems in national and human security, such as disaster response and land-use planning.

Trillion Pixel Challenge attendees included interdisciplinary experts from image science, computer vision, high-performance computing, architecture, machine learning, advanced workflows, and end-user communities who came together to discuss geospatial AI challenges. Credit: Carlos Jone/ORNL, U.S. Dept. of Energy

The event is named for the number of pixels it would take to visually represent the surface of the Earth each day: 100 trillion pixels at 5-meter resolution. Since the first event in 2019, organizers have watched AI and machine learning techniques for analyzing geospatial data progress dramatically.

"With the increasing abundance of data from small satellites, we are poised to collect more than 100 trillion pixels every day, and we have 24 hours to gain insights from those 100 trillion pixels until the next 100 trillion arrive," said Budhu Bhaduri, director of the Geospatial Science and Human Security Division at ORNL. "That is a daunting data analysis challenge for the community."
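The scale Bhaduri describes can be sanity-checked with back-of-the-envelope arithmetic. The sketch below (a simplification, using an assumed standard estimate of roughly 510 million km² for Earth's surface area) computes how many pixels a single global snapshot requires at a given ground sample distance, and the sustained processing rate implied by the workshop's figure of 100 trillion pixels arriving every 24 hours:

```python
# Back-of-the-envelope sketch: pixel counts for a global snapshot at a
# given ground sample distance (GSD), and the sustained analysis rate
# implied by a 24-hour refresh cycle. Figures are rough estimates.

EARTH_SURFACE_KM2 = 510e6  # ~510 million km^2, a commonly cited estimate


def pixels_for_gsd(gsd_m: float) -> float:
    """Pixels needed to cover Earth's surface once at gsd_m meters per pixel."""
    surface_m2 = EARTH_SURFACE_KM2 * 1e6  # km^2 -> m^2
    return surface_m2 / (gsd_m ** 2)      # each pixel covers gsd_m^2 square meters


def pixels_per_second(daily_pixels: float) -> float:
    """Sustained rate needed to finish a day's pixels before the next day's arrive."""
    return daily_pixels / 86_400  # seconds in 24 hours


if __name__ == "__main__":
    for gsd in (1.0, 5.0, 10.0):
        print(f"{gsd:>4} m GSD: {pixels_for_gsd(gsd):.2e} pixels per global snapshot")
    # The workshop's stated daily volume of 100 trillion pixels implies:
    print(f"Required throughput: {pixels_per_second(100e12):.2e} pixels/s")
```

A single 5-meter snapshot works out to roughly 20 trillion pixels; repeated revisits and multiple spectral bands from growing satellite constellations plausibly push the daily total toward the 100 trillion figure, which in turn demands a sustained analysis rate on the order of a billion pixels per second.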

To address the challenge, leaders from industry, academia and research organizations took part in six concentrated panel sessions over the two-day event.

Dalton Lunga, ORNL GeoAI group lead and senior R&D scientist, described the multimodal workshop as a "modern-day moonshot" for geospatial science and technology. Lunga organized the event with NASA's Rahul Ramachandran and a team from ORNL, Maxar Technologies, the National Geospatial-Intelligence Agency and several universities.

The event kicked off with a panel of government leaders who teed up existing use cases where GeoAI could be applied over the next 10 years. The panel unpacked topics such as transferring knowledge to state and local emergency managers, as well as large conversational, spatially aware models for interacting with both the natural environment and emergency management systems.

The panel discussions highlighted where current AI tools may fall short and what new AI methods may be needed.

"To address these grand challenges, we need not only to advance generative AI tools but also to think about domain-specific GeoAI spatial processors: do we even have the right hardware to solve our spatial problems? We are faced with needs that require us to reduce the latency from data generation to decision-making.

"That's a hard problem given the amount of geospatial information to process," Lunga said.

The question of investments continued over the next few panel sessions as attendees discussed data management capabilities. The event's theme of multimodality came into the spotlight during a panel on geospatial data infrastructure. Next-generation geospatial programs will incorporate not only imagery but also text data from social media, creating a complex data system. This will require technology that can handle the data volume and intricacies to deliver actionable information. And these systems should be able to deliver it remotely.

"What if we could escape the bounds of on-premise?" said panelist Erwin Gilmore, a senior AI and machine learning specialist at Amazon Web Services.

A later panel discussed edge computing. As Lunga pointed out, national laboratories have great capacity to process large amounts of data, but the people affected by the challenges GeoAI aims to solve often lack access to high-performance computing resources. Edge computing explores devices such as smartphones and drones that can operate at the last mile of a challenge and transmit data in real time.

But technology alone can't address GeoAI challenges.

"We need people," Lunga said. "As much as we talk about designing the next generation of supercomputing machines, we need to think about the next generation of the workforce that is going to create these solutions moving forward."

The final panel, including academic and industry leaders, discussed the need to prepare society for the future of geospatial research and development. This includes training, collaborative partnerships and where investments should be made.

Panelists, including Orhun Aydin, of St. Louis University, discussed the need for GeoAI experts to better communicate the "why" behind the science.

"I think we have to articulate what we do, how it matters, how it impacts your life," he said. "If you want society to be scientifically literate, we in the technical realm have to be socially literate to bridge that gap to communicate what we do and why it matters."

Bhaduri wrapped up the workshop by announcing the event would transition from every two years to an annual event, with the next iteration hosted by NASA in Huntsville, Alabama, in the summer of 2024.
