Privacy-Preserving Training for Fisheye Camera Images in Autonomous Vehicles

In an article recently submitted to the arXiv* preprint server, researchers proposed a novel framework for training privacy-preserving models that anonymize license plates and faces in fisheye camera images.

Study: Privacy-Preserving Training for Fisheye Camera Images in Autonomous Vehicles. Image credit: DZiegler/Shutterstock

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence research.

Background

The ever-increasing amount of data collected by autonomous vehicles on public roads worldwide, particularly images of surrounding vehicles' license plates and pedestrians' faces, has significantly heightened data privacy concerns and the need for data protection.

Moreover, several data privacy regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), must be followed when collecting data on public roadways. These regulations mandate the protection of participants' personally identifiable information and its deletion upon request.

Thus, effective solutions must be developed to identify and anonymize nearby vehicle license plates and pedestrian faces in real road-driving scenarios. Several commercial products have been introduced that de-identify collected data by obscuring sensitive content in images captured by fisheye and normal cameras.

Models such as UAI Anonymizer, Facebook Mapillary, and Brighter AI are also used to anonymize license plates and faces. However, fisheye camera images are rarely used for testing and training privacy-preserving models, unlike regular/normal images. Thus, these models display poor performance on fisheye images, which necessitated the development of a novel training method to enable these models to adapt to fisheye camera data effectively.
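
In practice, anonymization of this kind typically means blurring or masking the image regions that a detector flags. The minimal sketch below illustrates that step, assuming the bounding boxes come from some upstream face or license plate detector; only OpenCV's GaussianBlur is a real API here, and the rest is illustrative.

```python
# Minimal anonymization sketch: blur every detected region of an image.
# The (x1, y1, x2, y2) boxes are assumed to come from an upstream detector.
import cv2


def anonymize(image, boxes, ksize: int = 51):
    """Blur each (x1, y1, x2, y2) region of an image to hide faces or plates."""
    out = image.copy()
    h, w = out.shape[:2]
    for x1, y1, x2, y2 in boxes:
        # Clamp box coordinates to the image bounds and skip degenerate boxes.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 <= x1 or y2 <= y1:
            continue
        out[y1:y2, x1:x2] = cv2.GaussianBlur(out[y1:y2, x1:x2], (ksize, ksize), 0)
    return out
```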

A novel privacy-preserving framework

In this paper, researchers proposed a novel framework to train a model for license plate and face anonymization through model distillation from multiple teacher models. A fisheye transformation was proposed to convert both the images and the pseudo labels obtained from the teacher models into fisheye-like data for training the student model.

The fisheye transformation included several kinds of realistic distortions to improve adaptation, and the resulting anonymization model was trained for use in autonomous vehicles. The major components of the proposed framework were the multiple teacher models, Privacy-Preserving for Autonomous Driving (PP4AV) pseudo-label preprocessing, the fisheye transformation, and the student model.

Researchers leveraged several models trained on other tasks as teacher models for license plate and face detection due to the lack of ground truth. These teacher models taught license plate and face detection to the student model through pseudo-label generation. After pseudo-label generation, the pseudo-label preprocessing step from PP4AV was applied during batch training to aggregate the pseudo labels and confidence scores from the different teacher models into a single set.

This step exploited the strength of combining several models to produce confident, high-quality pseudo labels for training. Subsequently, the fisheye transformation converted the images and pseudo labels into fisheye-like counterparts, which were used to train the student model.
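
The exact aggregation procedure is not reproduced here, but the sketch below shows one plausible way pseudo labels from several teachers could be merged: keep only confident detections and de-duplicate overlapping boxes. The box format, confidence threshold, and IoU-based de-duplication are illustrative assumptions rather than the actual PP4AV preprocessing.

```python
# Sketch of merging pseudo labels from multiple teacher detectors.
# Boxes are (x1, y1, x2, y2, confidence); thresholds are illustrative only.
from typing import List, Tuple

Box = Tuple[float, float, float, float, float]


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes (the confidence field is ignored)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def aggregate_pseudo_labels(teacher_outputs: List[List[Box]],
                            conf_thresh: float = 0.5,
                            iou_thresh: float = 0.5) -> List[Box]:
    """Merge detections from several teachers into a single pseudo-label set."""
    # Keep only confident detections from every teacher.
    candidates = [b for boxes in teacher_outputs for b in boxes if b[4] >= conf_thresh]
    # Greedy de-duplication: keep the highest-scoring box among overlapping ones.
    candidates.sort(key=lambda b: b[4], reverse=True)
    merged: List[Box] = []
    for box in candidates:
        if all(iou(box, kept) < iou_thresh for kept in merged):
            merged.append(box)
    return merged
```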

The fisheye image transformation is defined as a function of a set of distortion transformations. In this study, four distortion functions, namely tangential, radial, rectangular, and circular transformations, were applied to convert the normal data into fisheye-like data.
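
To make the idea concrete, the sketch below applies a simple radial, barrel-style warp to pixel coordinates and to the corners of a pseudo-label box, so that image content and labels stay aligned after distortion. The arctangent formula and its strength parameter are assumptions chosen for readability, not the study's actual distortion functions.

```python
# Illustrative radial (barrel-style) distortion for points and pseudo-label boxes.
# The arctan warp and its strength are assumptions, not the paper's exact functions.
import numpy as np


def radial_distort_points(points: np.ndarray, width: int, height: int,
                          strength: float = 1.0) -> np.ndarray:
    """Warp an (N, 2) array of pixel coordinates with a fisheye-like distortion."""
    cx, cy = width / 2.0, height / 2.0
    # Normalize coordinates to roughly [-1, 1] around the image center.
    x = (points[:, 0] - cx) / cx
    y = (points[:, 1] - cy) / cy
    r = np.maximum(np.sqrt(x ** 2 + y ** 2), 1e-9)
    # The arctan mapping pulls points near the edges toward the center,
    # mimicking the compression of a fisheye lens.
    scale = np.arctan(strength * r) / (strength * r)
    return np.stack([x * scale * cx + cx, y * scale * cy + cy], axis=1)


def distort_box(box, width: int, height: int, strength: float = 1.0):
    """Warp an (x1, y1, x2, y2) pseudo-label box by distorting its four corners."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=float)
    warped = radial_distort_points(corners, width, height, strength)
    return (warped[:, 0].min(), warped[:, 1].min(),
            warped[:, 0].max(), warped[:, 1].max())
```

Warping the box corners and re-fitting an axis-aligned rectangle is only an approximation; the study's transformation handles the labels in its own way.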

RetinaFace, YOLO5Face, and UAI Anonymizer were selected as teacher models for pedestrian face detection, UAI Anonymizer was selected as the teacher model for detecting license plates, and YOLOX was selected as the student model.
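
The sketch below puts these pieces together as one conceptual training step, assuming PyTorch-style loss and optimizer objects and reusing the aggregate_pseudo_labels helper from the earlier sketch. The teachers, fisheye_transform, and student_loss callables are placeholders, not the real RetinaFace, YOLO5Face, UAI Anonymizer, or YOLOX APIs.

```python
# Conceptual sketch of one teacher-to-student distillation step; all callables
# are hypothetical placeholders, not the actual APIs of the models named above.
from typing import Callable, Sequence


def distillation_step(images,
                      teachers: Sequence[Callable],  # each returns boxes for a batch
                      fisheye_transform: Callable,   # warps images and labels together
                      student_loss: Callable,        # detection loss on (images, labels)
                      optimizer) -> float:
    """Run one training step of the teacher-to-student pipeline."""
    # 1) No ground truth is available, so every teacher produces candidate boxes.
    teacher_boxes = [teacher(images) for teacher in teachers]
    # 2) PP4AV-style preprocessing merges them into a single pseudo-label set.
    pseudo_labels = aggregate_pseudo_labels(teacher_boxes)
    # 3) The fisheye transformation warps both the images and the pseudo labels.
    fisheye_images, fisheye_labels = fisheye_transform(images, pseudo_labels)
    # 4) The student detector is optimized on the fisheye-like data as usual.
    loss = student_loss(fisheye_images, fisheye_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```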

Evaluation of the method

Researchers built the model training dataset from publicly available autonomous driving datasets. Overall, 62,927 images were used for training and 10,250 for validation, drawn from six open datasets: KITTI, LeddarPixSet, Comma2K19, BDD100K, Bosch, and Cityscapes.

They used the fisheye data and its annotation format from the PP4AV dataset to evaluate the student model's performance on fisheye images. The fisheye data comprised 244 annotated fisheye camera images of license plates and faces sourced from WoodScape.

PP4AV was the first open benchmark dataset for evaluating privacy-preserving models in autonomous driving, comprising 3,447 driving images of license plates and pedestrian faces captured using regular and fisheye cameras.

In the PP4AV study, researchers provided a baseline model and thoroughly compared its performance on the PP4AV dataset with that of several other pre-trained models to demonstrate the limitations of privacy-preserving models in the autonomous driving domain.

In this study, the student model's performance was compared with that of the PP4AV baseline model on the fisheye images of the PP4AV dataset. Average precision (AP50) and average recall (AR50), both computed at an intersection-over-union (IoU) threshold of 0.5, were used as evaluation metrics.
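
For reference, the sketch below computes precision and recall for a single image at that 0.5 IoU threshold, reusing the iou helper from the aggregation sketch; it is a simplified stand-in for the full COCO-style AP50/AR50 computation, which also ranks detections by confidence and averages over images.

```python
# Simplified per-image precision/recall at IoU >= 0.5, as a stand-in for the
# COCO-style AP50/AR50 metrics; reuses the iou() helper defined earlier.
def precision_recall_at_50(pred_boxes, gt_boxes, iou_thresh: float = 0.5):
    """Greedily match predictions to ground truth and score the matches."""
    matched_gt = set()
    true_positives = 0
    # Consider the most confident predictions first.
    for pred in sorted(pred_boxes, key=lambda b: b[4], reverse=True):
        best_iou, best_idx = 0.0, None
        for idx, gt in enumerate(gt_boxes):
            if idx in matched_gt:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_idx = overlap, idx
        if best_iou >= iou_thresh:
            true_positives += 1
            matched_gt.add(best_idx)
    precision = true_positives / max(len(pred_boxes), 1)
    recall = true_positives / max(len(gt_boxes), 1)
    return precision, recall
```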

Significance of the study

The student model of this study outperformed the baseline PP4AV model in both AP50 and AR50 on fisheye images. Specifically, it improved the AP50 and AR50 scores for face detection on fisheye images by 1.89% and 1.21%, respectively, compared to the PP4AV baseline, and the AP50 and AR50 scores for license plate detection by 0.24% and 1.94%, respectively.

Additionally, the student model trained on fisheye-like data successfully detected most of the clearly visible, deformed license plates in real-world fisheye images, failing only on one distant, unclear plate. Moreover, the student model also correctly recognized the strongly warped shape of a human face positioned at the boundary of a significantly distorted image.

To summarize, the findings of this study demonstrated the feasibility of using the novel framework to train privacy-preserving models to improve their adaptability to fisheye camera images in autonomous vehicles.


