A 3D Dataset of Diverse Surgical Instruments for Machine Learning and Mixed Reality Applications

In a recent publication in the journal Scientific Data, researchers introduced a publicly accessible repository containing a three-dimensional (3D) dataset of 103 clinically used, 3D-scanned surgical instruments. The objective is to accelerate progress in medical machine learning (MML) and medical mixed reality (MMR).

Study: Advancing Surgical Data Science: A 3D Dataset of Diverse Surgical Instruments for Machine Learning and Mixed Reality Applications. Image credit: Generated using DALL·E 3

Background

The historical roots of surgical instruments date back to the Bronze Age, evident in archaeological findings and depictions on stone, walls, and papyrus. The evolution of instruments such as forceps and retractors has been particularly notable since the 18th century. Modern surgical specialization has produced a diverse array of purpose-built instruments, although a core set of scalpels, scissors, retractors, and forceps remains common across procedures. Dentistry, as an interventional discipline, has its own set of instruments tailored to dental treatments.

Recent strides in deep learning have empowered computer-assisted surgery systems, which rely on extensive datasets for tool detection and movement tracking. The limitations of current datasets underscore the need for synthetic data in computer vision, which has proven valuable for training machine learning models. In this vein, a 3D dataset of medical instruments enables the creation of realistic synthetic surgical scenes for training algorithms in detection, segmentation, and marker-less 3D registration and tracking. Beyond surgical data science, such 3D models find application in medical simulation, virtual reality training scenarios, and medical mixed reality.

Diversifying surgical instrument datasets

Data collection proceeded in four stages: instrument preparation, 3D scanning with structured light scanners, post-processing in the scanners' proprietary software, and model adaptation to increase diversity. The dataset comprises 103 surgical instruments. A white scanning spray (AESUB) was applied to prepare the instruments for scanning. Two structured light scanners, the Autoscan Inspec and the Artec Leo, captured the 3D models, which were then post-processed in their respective proprietary software. The resulting Stereolithography (STL) files underwent additional visual inspection in Microsoft 3D Viewer and Blender 3.4.1.

A Python-based Blender add-on facilitated model adaptation, offering transformations such as bending, twisting, and scaling, which enabled the generation of diverse models from the originals. The dataset's integrity was maintained through visual inspections, and all processing scripts, including the Blender add-on, are included in the data repository. The instruments, categorized into 27 classes, serve various purposes in surgery, including providing a clear view of the operative field, applying controlled force, clamping blood vessels, manipulating tissue, and examining teeth and gums. Because the instruments are primarily polished stainless steel, their reflective surfaces pose challenges for structured light 3D scanners. To address this, the AESUB spray was used, adding a layer approximately 0.007 millimeters thick.
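The authors' actual add-on ships with the data repository; the snippet below is only a minimal sketch, assuming Blender 3.x and its bundled Python API (bpy), of how such bend, twist, and scale deformations can be scripted. The file paths, angles, and scale factors are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' add-on): bend, twist, and scale an STL
# model inside Blender 3.x using the bpy API. All paths and values are
# hypothetical placeholders.
import math
import bpy

INPUT_STL = "/data/original/forceps_01.stl"        # hypothetical path
OUTPUT_STL = "/data/modified/forceps_01_bend.stl"  # hypothetical path

# Start from an empty scene so only the imported instrument is exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the scanned instrument (Blender 3.x STL importer).
bpy.ops.import_mesh.stl(filepath=INPUT_STL)
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Apply a gentle bend via the Simple Deform modifier.
bend = obj.modifiers.new(name="Bend", type='SIMPLE_DEFORM')
bend.deform_method = 'BEND'
bend.angle = math.radians(20.0)
bpy.ops.object.modifier_apply(modifier=bend.name)

# Apply a slight twist the same way.
twist = obj.modifiers.new(name="Twist", type='SIMPLE_DEFORM')
twist.deform_method = 'TWIST'
twist.angle = math.radians(10.0)
bpy.ops.object.modifier_apply(modifier=twist.name)

# Scale the instrument non-uniformly and bake the transform into the mesh.
obj.scale = (1.0, 1.05, 0.95)
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)

# Export the deformed model as a new STL.
bpy.ops.export_mesh.stl(filepath=OUTPUT_STL, use_selection=True)
```

Run headlessly (for example, with `blender --background --python deform_sketch.py`), a script along these lines could batch-generate deformed variants of each original scan, which is the general idea behind the add-on described in the paper.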

The scanning process involved two scanners, the Artec Leo and the Autoscan Inspec, each contributing distinct scans and configurations. The data collection comprises 114 STL files from the Artec Leo and 120 from the Autoscan Inspec, giving users multiple post-processing variants to choose from. Reproducibility was confirmed through scans performed by different individuals. Additionally, the Blender add-on demonstrated the creation of multiple models through deformations and transformations, adding variability to the dataset that is crucial for deep learning applications. Visual inspection, alignment assessments, and measurements using the Trimesh library ensured submillimeter precision.
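The article does not reproduce the measurement code itself; the following is a small sketch, assuming the open-source Trimesh library and a hypothetical file path, of the kind of geometric checks described, namely watertightness, volume, and bounding-box dimensions.

```python
# Sketch of basic geometric checks with the Trimesh library.
# The file path is a hypothetical placeholder.
import trimesh

mesh = trimesh.load("/data/original/scalpel_03.stl")

# Integrity check: a watertight mesh encloses a well-defined volume.
print("watertight:", mesh.is_watertight)

# Volume in the mesh's native units (cubic millimeters for these scans).
print("volume:", mesh.volume)

# Axis-aligned bounding-box dimensions (length, width, height in millimeters).
print("dimensions:", mesh.bounding_box.extents)

# Surface area, another quick sanity check against the physical instrument.
print("surface area:", mesh.area)
```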

The presented data collection serves as a valuable resource for advancing research in surgical data science, offering realistic 3D models for training algorithms and applications in virtual and mixed realities. The detailed methodology and inclusion of processing scripts contribute to the dataset's transparency and usability in various research domains.

Dataset

The final dataset, stored in the repository, comprises 103 surgical instruments. The Artec Leo scanned 49 instruments, yielding STL files with both sharp and smooth fusion settings, while the Autoscan Inspec scanned 55 instruments, providing STL models with and without the "remove highlight" function. Eleven instruments were scanned in multiple configurations, and the provided add-on generated 6,263 modified models.

A Python script produced 5,380 STL models for two instrument classes. Measurements of the original 3D models, including volume and dimensions, are available in the repository. The original models will also be shared on MedShapeNet for wider accessibility within the medical imaging and computing community.

Technical validation

The primary challenge in validating the 3D models of medical instruments lies in the spatial geometry reconstructed by the Artec Leo and Autoscan Inspec scanners, whose manufacturers specify resolutions of 0.2 millimeters and 0.05 millimeters, respectively. Submillimeter accuracy is anticipated when the 3D scanning spray and recommended settings are used. Deviations between the 3D models and the physical instruments are most likely due to manual post-processing, such as unintentional removal of point cloud data or misalignment during fusion.
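The validation procedure is described only at a high level. As an illustration, the sketch below, which assumes two already-aligned scans of the same instrument and hypothetical file paths, estimates surface deviation by sampling points on one mesh and measuring their distance to the nearest surface of the other with Trimesh.

```python
# Sketch: estimate surface deviation between two aligned scans of the same
# instrument (e.g., Artec Leo vs. Autoscan Inspec). Paths are hypothetical
# and both meshes are assumed to share the same coordinate frame.
import numpy as np
import trimesh

scan_a = trimesh.load("/data/leo/retractor_07.stl")
scan_b = trimesh.load("/data/inspec/retractor_07.stl")

# Sample points uniformly on the surface of scan A.
points, _ = trimesh.sample.sample_surface(scan_a, count=20000)

# For each sampled point, find the distance to the closest point on scan B.
_, distances, _ = trimesh.proximity.closest_point(scan_b, points)

# Summarize the deviation; submillimeter values would support the accuracy
# expected from the spray-coated, structured light scans.
print("mean deviation (mm):", float(np.mean(distances)))
print("95th percentile (mm):", float(np.percentile(distances, 95)))
print("max deviation (mm):", float(np.max(distances)))
```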

The loss of real texture information due to the use of 3D scanning spray is a limiting factor. However, the presented STL models are compatible with a wide range of visualization and processing software, allowing applications in computer vision, medical mixed-reality simulation, and, with some adaptation, 3D printing. The data are freely accessible, adaptable, and redistributable.

Conclusion

In summary, researchers presented a robust dataset comprising 103 surgical instruments, meticulously curated through a four-stage process. Incorporating synthetic elements via a Python-based Blender add-on ensures diversity, which is crucial for advancing deep learning applications. The dataset's transparency, comprehensive methodology, and technical validation establish it as a valuable resource for enhancing research in surgical data science, with broad applications in virtual and mixed realities.

Journal reference:
Advancing Surgical Data Science: A 3D Dataset of Diverse Surgical Instruments for Machine Learning and Mixed Reality Applications. Scientific Data.

Written by

Dr. Sampath Lonka

Dr. Sampath Lonka is a scientific writer based in Bangalore, India, with a strong academic background in Mathematics and extensive experience in content writing. He has a Ph.D. in Mathematics from the University of Hyderabad and is deeply passionate about teaching, writing, and research. Sampath enjoys teaching Mathematics, Statistics, and AI to both undergraduate and postgraduate students. What sets him apart is his unique approach to teaching Mathematics through programming, making the subject more engaging and practical for students.

