Advancing Urban Acoustic Management through Binaural Sensing and Cloud Processing

In a paper published in the Journal of Sensor and Actuator Networks, researchers proposed leveraging advanced technologies such as cloud computing, artificial intelligence, and data science to enhance acoustic monitoring processes for real-world applications.

Study: Advancing Urban Acoustic Management through Binaural Sensing and Cloud Processing. Image credit: Anatolii Stoiko/Shutterstock

While current noise monitoring systems rely on single measured values, this study introduces a binaural system aligned with human auditory perception. The system prototype, which resembles a human head, transmits and processes acoustic data via the cloud, enabling noise monitoring through binaural hearing. This approach also allows the extraction of spatial acoustic indicators and the detection of source location in the azimuthal plane. The paper presents a significant step towards more comprehensive noise analysis.
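The article does not reproduce the authors' localization method. Purely as an illustration of azimuthal localization from binaural cues, a simple free-field two-microphone model relates the interaural time difference (ITD) to the azimuth angle through ITD ≈ d·sin(θ)/c, where d is the microphone spacing (a hypothetical value below) and c is the speed of sound:

```python
import numpy as np

def azimuth_from_itd(itd_s: float, mic_spacing_m: float = 0.18, c: float = 343.0) -> float:
    """Estimate source azimuth (degrees) from an interaural time difference.

    Uses the simple free-field model ITD = d * sin(theta) / c; a real artificial
    head requires a head-related model, so treat this as a first-order sketch.
    """
    # Clamp to the physically reachable range before inverting the sine.
    s = np.clip(itd_s * c / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Example: a 0.3 ms lead at one ear corresponds to roughly 35 degrees of azimuth.
print(azimuth_from_itd(0.0003))
```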

Introduction

The growth of the urban population presents challenges in managing urban environmental quality. The United Nations predicts that by 2030, over 80% of people in Europe and the Americas will reside in cities. This urbanization leads to elevated noise emissions stemming from traffic, industrial, and commercial activities, posing health risks and diminishing residents' well-being.

Urban noise management primarily employs noise control methods complemented by the emerging soundscape concept. Noise control monitors and regulates noise emissions, with regulations defining permissible sound pressure levels based on land use and time of day. Traditional monitoring systems provide energetic acoustic data through descriptors such as the A-weighted equivalent continuous sound level (LAeq), maximum/minimum sound pressure levels, and statistical percentile levels such as L10 and L90. These descriptors correlate with annoyance and play a pivotal role in studying noise impacts. However, such systems lack spatial and temporal analysis, and their monaural approach differs from human binaural sound perception.
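As a point of reference (not taken from the paper), these energetic descriptors can be computed from a sequence of short-term A-weighted levels; the sketch below assumes the levels are already available as a NumPy array in dB(A):

```python
import numpy as np

def energetic_descriptors(la_fast_db: np.ndarray) -> dict:
    """Compute LAeq, LAmax, LAmin, L10 and L90 from short-term A-weighted levels (dB).

    LAeq is an energy average: levels are converted back to squared-pressure
    ratios, averaged, and converted to dB again. L10/L90 are exceedance percentiles.
    """
    p_sq = 10.0 ** (la_fast_db / 10.0)          # relative squared pressure
    laeq = 10.0 * np.log10(p_sq.mean())
    return {
        "LAeq": laeq,
        "LAmax": float(la_fast_db.max()),
        "LAmin": float(la_fast_db.min()),
        "L10": float(np.percentile(la_fast_db, 90)),  # level exceeded 10% of the time
        "L90": float(np.percentile(la_fast_db, 10)),  # level exceeded 90% of the time
    }

# Example with synthetic levels between 45 and 75 dB(A).
rng = np.random.default_rng(0)
print(energetic_descriptors(rng.uniform(45, 75, size=600)))
```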

Related work

Drawing from past studies, the soundscape concept has revolutionized acoustic management by incorporating environmental perception, transcending conventional comfort and annoyance factors. This approach emphasizes sound quality and its relevance to specific tasks, improving urban acoustic control to meet user needs. Integrating the soundscape concept in assessing acoustic environments, especially amidst heightened noise, is pivotal.

The binaural sensing system (RBS) plays a vital role here, providing spatial acoustic descriptors aligned with human perception for comprehensive soundscape studies. These insights can help governments and research centers improve urban noise strategies and, ultimately, the overall quality of life. Recent advances in binaural systems, building on previous research, have transformed acoustic sensing: geometrically generated artificial heads have proven effective, surpassing natural alternatives, with applications such as measuring sound attenuation in helmets and improving acoustic quality in industries like automotive and aerospace.

Earlier work has introduced continuous noise monitoring that incorporates binaural psychoacoustic parameters, as well as acoustic sensor devices that synthesize binaural signals to feed Wireless Acoustic Sensor Networks (WASNs) for urban acoustic assessment. Despite the lack of specific guidelines, ongoing projects, building on these findings, aim to create binaural monitoring systems that capture both energetic and spatial attributes. The architecture envisioned here supports multiple stations, processes large volumes of acoustic data efficiently, and drives advances in noise management.

Proposed method

This article introduces two fundamental elements: a binaural data sensing system and a cloud-based data processing architecture. The binaural sensing system employs an artificial head with embedded microphones, designed using computer-aided design (CAD) software and 3D printed to resemble an average adult human head. Acoustic signals are captured through omnidirectional condenser microphones, and measurements are conducted in an anechoic chamber using pink noise as an excitation signal. The prototype's performance is compared against a class 1 sound level meter in real-world field measurements.
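The paper's measurement code is not included in the article. As a minimal sketch of the excitation side, pink noise (a roughly 1/f power spectral density, i.e., equal energy per octave) can be synthesized by shaping white noise in the frequency domain:

```python
import numpy as np

def pink_noise(n_samples: int, seed: int = 0) -> np.ndarray:
    """Generate pink (1/f) noise by scaling the spectrum of white noise.

    Each positive-frequency bin is divided by sqrt(f) so the power spectral
    density falls off as 1/f, giving equal energy per octave.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])        # leave the DC bin untouched
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))          # normalize to full scale

# Example: ten seconds of excitation at 48 kHz, ready to write to a WAV file.
signal = pink_noise(48_000 * 10)
```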

The data processing segment involves developing acoustic parameter software for energetic and spatial indicators, with the algorithm's accuracy validated against established tools. For cloud-based processing, an event-driven architecture is implemented using AWS Elastic Beanstalk and Amazon Simple Queue Service (SQS), with S3 buckets storing results in JavaScript Object Notation (JSON) format. Raspberry Pi devices collect the audio data, which is then processed in the cloud because of the devices' resource constraints, enabling efficient and scalable data processing and storage. This integrated approach showcases the potential for comprehensive acoustic monitoring in urban environments.
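The article names the AWS services but not the integration code. The following is a minimal sketch, assuming hypothetical bucket and queue names, of how a Raspberry Pi station could upload a recording to S3 and notify the processing workers through SQS (boto3 is the standard AWS SDK for Python):

```python
import json
import time
import boto3

# Hypothetical resource names -- replace with the deployment's actual bucket/queue.
AUDIO_BUCKET = "binaural-station-audio"
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/binaural-processing"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def publish_recording(wav_path: str, station_id: str) -> None:
    """Upload a binaural WAV file to S3 and enqueue a processing job in SQS."""
    key = f"{station_id}/{int(time.time())}.wav"
    s3.upload_file(wav_path, AUDIO_BUCKET, key)           # raw audio to object storage
    sqs.send_message(                                     # event that triggers a worker
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": AUDIO_BUCKET, "key": key, "station": station_id}),
    )

# A worker behind Elastic Beanstalk would poll the queue, compute the acoustic
# indicators, and write the results back to S3 as JSON, e.g.:
# s3.put_object(Bucket="binaural-results", Key=key.replace(".wav", ".json"),
#               Body=json.dumps(indicators))
```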

Experimental results

The study discusses the outcomes of experiments conducted in an anechoic chamber and field settings using a binaural monitoring prototype. The frequency response graphs of the prototype's microphones reveal particular resonances, leading to the application of an inverse filter during data processing. Analysis of the prototype's frequency response in different angular positions indicates variations attributed to resonances within the ear structure. Comparative assessments with a commercial head system exhibit differences due to varying technical characteristics, design objectives, and ear simulation standards.
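The inverse filter itself is not given in the article. One common construction, shown here as a sketch that assumes the microphone's complex frequency response has been measured on an FFT grid, is regularized spectral inversion:

```python
import numpy as np

def inverse_filter(measured_response: np.ndarray, n_taps: int = 1024,
                   regularization: float = 1e-3) -> np.ndarray:
    """Build an FIR filter that flattens a measured microphone response.

    measured_response: complex frequency response on an rfft grid of length n_taps // 2 + 1.
    The regularization term keeps the gain bounded where the response has deep notches.
    """
    h = measured_response
    inv = np.conj(h) / (np.abs(h) ** 2 + regularization)   # regularized inverse spectrum
    taps = np.fft.irfft(inv, n=n_taps)
    return np.roll(taps, n_taps // 2)                       # centre the impulse (adds delay)

# Applying the correction is a convolution with the recorded channel, e.g.:
# corrected = np.convolve(recording, taps, mode="same")
```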

This approach also demonstrates the successful application of correlation functions to compute interaural time differences and spatial acoustic indicators for different source positions. Field measurements show differences in equivalent continuous levels between the prototype and a sound level meter, with variations attributed to the prototype's design and effects such as wind. Spatial parameters, including the interaural cross-correlation (IACC), its time lag (τIACC), and the associated peak width (WIACC), reflect the prototype's ability to capture spatial characteristics of the sound environment. Overall, the research highlights the capabilities and challenges of the binaural monitoring system across controlled and real-world scenarios.
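In the literature these indicators are typically defined from the normalized interaural cross-correlation function (IACF) over lags of ±1 ms: IACC is its maximum, τIACC the lag at which that maximum occurs, and WIACC the width of the peak within 0.1 of the maximum. A minimal sketch of that computation (array names and sample rate are illustrative, not from the paper):

```python
import numpy as np

def iacc_indicators(left: np.ndarray, right: np.ndarray, fs: int = 48_000) -> dict:
    """Compute IACC, tau_IACC and W_IACC from a binaural signal pair.

    The normalized interaural cross-correlation function (IACF) is evaluated
    over lags of +/- 1 ms, as is customary for spatial-impression indicators.
    """
    max_lag = int(1e-3 * fs)                       # +/- 1 ms in samples
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    full = np.correlate(left, right, mode="full")  # cross-correlation at all lags
    centre = len(right) - 1                        # index of zero lag
    iacf = full[centre - max_lag: centre + max_lag + 1] / norm
    lags = np.arange(-max_lag, max_lag + 1)

    iacc = float(iacf.max())
    tau = float(lags[iacf.argmax()] / fs)          # lag of the maximum, in seconds
    above = np.flatnonzero(iacf >= iacc - 0.1)     # samples within 0.1 of the peak
    w_iacc = float((above[-1] - above[0]) / fs)    # width of that interval, in seconds
    return {"IACC": iacc, "tau_IACC": tau, "W_IACC": w_iacc}
```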

Conclusion

To sum up, the study presents a binaural acoustic monitoring prototype built around a 3D-printed model of a human head. Anechoic chamber measurements show frequency response variations at different angles, attributed to pinna and ear canal effects. While following the trends seen in commercial heads, adjustments to the prototype's physical design, such as an increased microphone-to-ear distance, could enhance its performance. Field measurements comparing temporal acoustic parameters with a class 1 sound level meter underscore the prototype's ability to simulate the human head's influence on sound perception. By transmitting and processing data in the cloud, the prototype aims to provide spatial indicators, offering valuable insights for urban soundscape evaluations. Ongoing work encompasses the development of additional stations, AI-based source classification, and potential edge computing solutions for regions with limited internet access. Ultimately, the project seeks to support urban planning strategies aimed at enhancing citizens' overall quality of life.


Written by

Susha Cheriyedath

Susha is a scientific communication professional holding a Master's degree in Biochemistry, with expertise in Microbiology, Physiology, Biotechnology, and Nutrition. After a two-year tenure as a lecturer from 2000 to 2002, where she mentored undergraduates studying Biochemistry, she transitioned into editorial roles within scientific publishing. She has accumulated nearly two decades of experience in medical communication, assuming diverse roles in research, writing, editing, and editorial management.

