In a paper published in the Journal of Sensor and Actuator Networks, researchers proposed leveraging advanced technologies such as cloud computing, artificial intelligence, and data science to enhance acoustic monitoring processes for real-world applications.
Whereas current noise monitoring systems rely on single measured values, this study introduces a binaural system aligned with human auditory perception. The system prototype, which resembles a human head, transmits and processes acoustical data via the cloud, enabling noise monitoring through binaural hearing. This approach also allows the extraction of spatial acoustic indicators and the detection of source locations in the azimuthal plane. The paper represents a significant step towards more comprehensive noise analysis.
Introduction
The growth of the urban population presents challenges for managing cities' environmental quality. The United Nations predicts that by 2030, over 80% of people in Europe and the Americas will reside in cities. This urbanization leads to elevated noise emissions from traffic, industrial, and commercial activities, posing health risks and diminishing residents' well-being.
Urban noise management primarily employs noise control methods, complemented by the emerging soundscape concept. Noise control monitors and regulates noise emissions, with regulations defining permissible sound pressure levels based on land use and time of day. Traditional monitoring systems provide energetic acoustic data through descriptors such as LAeq, maximum and minimum sound pressure levels, and percentile levels such as L10 and L90. These descriptors correlate with annoyance and play a pivotal role in studying noise impacts. However, these systems lack spatial and temporal analysis, and their monaural approach differs from human binaural sound perception.
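As an illustration of these energetic descriptors, the sketch below computes LAeq and the L10/L90 percentile levels from a series of short-term A-weighted levels; the 1-second sampling interval and the data are hypothetical and not taken from the paper.

```python
import numpy as np

def laeq(levels_db):
    """Equivalent continuous level from short-term A-weighted levels (dB)."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

def percentile_level(levels_db, n):
    """Ln: the level exceeded n% of the measurement time (e.g. L10, L90)."""
    return float(np.percentile(levels_db, 100 - n))

# Hypothetical example: one hour of 1-second LAeq samples.
rng = np.random.default_rng(0)
samples = 55 + 10 * rng.random(3600)
print(f"LAeq = {laeq(samples):.1f} dB(A)")
print(f"L10  = {percentile_level(samples, 10):.1f} dB(A)")
print(f"L90  = {percentile_level(samples, 90):.1f} dB(A)")
```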
Related work
Building on past studies, the soundscape concept has reshaped acoustic management by incorporating how people perceive their environment, going beyond conventional comfort and annoyance factors. This approach emphasizes sound quality and its relevance to specific tasks, improving urban acoustic control to meet user needs. Integrating the soundscape concept into the assessment of acoustic environments, especially amid heightened noise levels, is therefore pivotal.
The RBS plays a vital role here, providing spatial acoustic descriptors aligned with human perception for comprehensive soundscape studies. These insights enable governments and research centers to enhance urban noise strategies and, ultimately, the overall quality of life. Recent advances in binaural systems, building on previous research, have transformed acoustic sensing: artificial heads generated from geometric models have performed well, surpassing alternatives based on natural head shapes, with applications such as measuring the sound attenuation of helmets and improving acoustic quality in industries like automotive and aerospace.
Innovative efforts have introduced continuous noise monitoring that incorporates binaural psychoacoustic parameters. Acoustic sensor devices that synthesize binaural signals feed Wireless Acoustic Sensor Networks (WASNs) for urban acoustic assessment. Although specific guidelines for binaural monitoring are still lacking, ongoing projects, inspired by prior findings, strive to create binaural monitoring systems that capture both energetic and spatial attributes. The envisioned architecture supports multiple stations, processes extensive acoustic data efficiently, and drives advances in noise management.
Proposed method
This article introduces two fundamental elements: a binaural data sensing system and a cloud-based data processing architecture. The binaural sensing system employs an artificial head with embedded microphones, designed using computer-aided design (CAD) software and 3D printed to resemble an average adult human head. Acoustic signals are captured with omnidirectional condenser microphones, and measurements are conducted in an anechoic chamber using pink noise as the excitation signal. The prototype's performance is compared against a class 1 sound level meter in real-world field measurements.
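The paper does not detail how the pink-noise excitation is generated; the sketch below shows one common approach, shaping white noise with a 1/√f amplitude slope in the frequency domain. The sampling rate, duration, and function names are illustrative assumptions.

```python
import numpy as np

def pink_noise(n_samples, fs=48000, seed=0):
    """Approximate pink (1/f power) noise by spectrally shaping white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)               # -3 dB/octave amplitude slope
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))       # normalize to full scale

excitation = pink_noise(10 * 48000)          # 10 s of excitation at 48 kHz
```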
The data processing segment involves developing acoustic parameter software for energetic and spatial indicators, whose accuracy is validated against established tools. For cloud-based processing, an event-driven architecture is implemented using AWS Elastic Beanstalk and Amazon Simple Queue Service (SQS), with S3 buckets storing results in JavaScript Object Notation (JSON) format. Raspberry Pi devices collect the audio data, which is processed in the cloud because of the devices' limited computing resources, enabling efficient and scalable processing and storage. This integrated approach showcases the potential for comprehensive acoustic monitoring in urban environments.
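A minimal sketch of how a station might feed such an event-driven pipeline is shown below, using boto3 to upload a recording to S3 and enqueue a processing job in SQS; the bucket name, queue URL, and key layout are hypothetical and not taken from the paper.

```python
import json
import boto3

# Hypothetical resource names, for illustration only.
BUCKET = "binaural-recordings"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/binaural-jobs"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def publish_recording(wav_path: str, station_id: str) -> None:
    """Upload a binaural WAV file and notify the cloud workers via SQS."""
    key = f"raw/{station_id}/{wav_path.rsplit('/', 1)[-1]}"
    s3.upload_file(wav_path, BUCKET, key)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": BUCKET, "key": key, "station": station_id}),
    )

def store_indicators(indicators: dict, key: str) -> None:
    """Persist the computed acoustic indicators as JSON next to the raw audio."""
    s3.put_object(
        Bucket=BUCKET,
        Key=key.replace("raw/", "results/", 1) + ".json",
        Body=json.dumps(indicators).encode("utf-8"),
        ContentType="application/json",
    )
```

In such a setup, a worker deployed behind Elastic Beanstalk would poll the queue, fetch the referenced audio object, compute the indicators, and write the JSON result back with a call like store_indicators.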
Experimental results
The study reports experiments conducted in an anechoic chamber and in field settings using the binaural monitoring prototype. The frequency response of the prototype's microphones reveals particular resonances, which are compensated by applying an inverse filter during data processing. Analysis of the prototype's frequency response at different angular positions indicates variations attributed to resonances within the ear structure. Comparative assessments with a commercial head system show differences due to varying technical characteristics, design objectives, and ear simulation standards.
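The paper does not specify how the inverse filter is designed; one common approach is a regularized frequency-domain inversion of the measured response, sketched below. The function names, FFT size, and regularization constant are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def inverse_filter(measured_ir, n_fft=4096, reg=1e-3):
    """Regularized frequency-domain inverse of a measured impulse response.

    The regularization term limits the gain at deep notches so the
    compensation filter does not amplify noise excessively.
    """
    H = np.fft.rfft(measured_ir, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(H_inv, n_fft)

def equalize(recording, measured_ir):
    """Compensate microphone and ear-structure resonances in a recording."""
    return fftconvolve(recording, inverse_filter(measured_ir), mode="same")
```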
The study also demonstrates the successful application of correlation functions to compute interaural time differences and spatial acoustic indicators for different source positions. Field measurements show differences in equivalent continuous levels between the prototype and a sound level meter, with the variations attributed to the prototype's design and environmental effects such as wind. Spatial parameters, including the interaural cross-correlation (IACC), temporal differences (τIACC), and apparent width (WIACC), reflect the prototype's ability to capture spatial characteristics of the sound environment. Overall, the research highlights the capabilities and challenges of the binaural monitoring system in both controlled and real-world scenarios.
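The paper does not publish its implementation of these indicators; the sketch below follows the standard definitions, computing the normalized interaural cross-correlation function over ±1 ms of lag and deriving IACC, τIACC, and WIACC from its peak. The 0.1 threshold used for WIACC is the conventional choice and an assumption here.

```python
import numpy as np

def iacc_indicators(left, right, fs, max_lag_ms=1.0):
    """IACC, tau_IACC (ms) and W_IACC (ms) from a binaural signal pair."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    lags = np.arange(-max_lag, max_lag + 1)
    # Normalized interaural cross-correlation function over +/- max_lag samples.
    iacf = np.array([
        np.sum(left[max(0, -l):len(left) - max(0, l)] *
               right[max(0, l):len(right) - max(0, -l)]) / norm
        for l in lags
    ])
    peak = int(np.argmax(iacf))
    iacc = float(iacf[peak])
    tau = 1000 * lags[peak] / fs                 # interaural delay of the peak
    # Width of the peak where the IACF stays within 0.1 of its maximum.
    lo = hi = peak
    while lo > 0 and iacf[lo - 1] >= iacc - 0.1:
        lo -= 1
    while hi < len(iacf) - 1 and iacf[hi + 1] >= iacc - 0.1:
        hi += 1
    w = 1000 * (lags[hi] - lags[lo]) / fs
    return iacc, tau, w
```

The lag τIACC approximates the interaural time difference, from which a source direction in the azimuthal plane can be estimated, in line with the localization capability described above.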
Conclusion
To sum up, the study presents a binaural acoustic monitoring prototype based on a 3D-printed replica of a human head. Anechoic chamber measurements reveal frequency response variations at different angles, attributed to pinna and ear canal effects. While following the trends seen in commercial heads, adjustments to the prototype's physical design, such as an increased microphone-to-ear distance, could enhance its performance. Field measurements comparing temporal acoustic parameters with a class 1 sound level meter underscore the prototype's ability to simulate the human head's influence on sound perception. By transmitting and processing data in the cloud, the prototype aims to provide spatial indicators, offering valuable insights for urban soundscape evaluations. Ongoing work includes the development of additional stations, AI-based source classification, and potential Edge Computing solutions for regions with limited internet access. Ultimately, the project seeks to support urban planning strategies aimed at enhancing citizens' overall quality of life.