Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects, and then react to what they "see."
Researchers from the UK, Germany, USA, and Canada unveiled a quantum-enhanced cybersecurity analytics framework built on hybrid quantum machine learning algorithms. The approach leverages quantum computing to efficiently detect malicious domain names generated by domain generation algorithms (DGAs), showing superior speed, accuracy, and stability compared with traditional methods and marking a significant step for proactive cybersecurity analytics.
Researchers present a groundbreaking T-Max-Avg pooling layer for convolutional neural networks (CNNs), introducing adaptability in pooling operations. This innovative approach, demonstrated on benchmark datasets and transfer learning models, outperforms traditional pooling methods, showcasing its potential to enhance feature extraction and classification accuracy in diverse applications within the field of computer vision.
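The idea can be illustrated with a minimal NumPy sketch. Assuming T-Max-Avg pooling averages the T largest activations in each pooling window (so it interpolates between max pooling at T = 1 and average pooling when T covers the whole window), a plausible version looks like this; it is an illustration, not the authors' implementation:

```python
import numpy as np

def t_max_avg_pool(x, pool=2, t=2):
    """Pool non-overlapping pool x pool windows by averaging the t largest values."""
    h, w = x.shape
    out = np.empty((h // pool, w // pool))
    for i in range(h // pool):
        for j in range(w // pool):
            window = x[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool].ravel()
            top_t = np.sort(window)[-t:]   # the t largest activations in the window
            out[i, j] = top_t.mean()
    return out

feature_map = np.array([[1, 2, 3, 4],
                        [5, 6, 7, 8],
                        [9, 10, 11, 12],
                        [13, 14, 15, 16]], dtype=float)
```

With `t=1` this reduces to standard max pooling, and with `t=4` (the full 2x2 window) to average pooling; intermediate values of T trade off the two behaviors.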
Researchers from Beijing University introduce Oracle-MNIST, a challenging dataset of 30,222 ancient Chinese characters, providing a realistic benchmark for machine learning (ML) algorithms. The Oracle-MNIST dataset, derived from oracle-bone inscriptions of the Shang Dynasty, surpasses traditional MNIST datasets in complexity, serving as a valuable tool not only for advancing ML research but also for enhancing the study of ancient literature, archaeology, and cultural heritage preservation.
This article introduces LC-Net, a novel convolutional neural network (CNN) model designed for precise leaf counting in rosette plants, addressing challenges in plant phenotyping. Leveraging SegNet for superior leaf segmentation, LC-Net incorporates both original and segmented leaf images, showcasing robustness and outperforming existing models in accurate leaf counting, offering a promising advancement for agricultural research and high-throughput plant breeding efforts.
This research pioneers the use of acoustic emission and artificial neural networks (ANN) to detect partial discharge (PD) in ceramic insulators, crucial for electrical system reliability. With a focus on defects caused by environmental factors, the study achieved a 96.03% recognition rate using ANNs, further validated by support vector machine (SVM) and K-nearest neighbor (KNN) algorithms, showcasing a significant advancement in real-time monitoring for electrical power network safety.
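To give a flavor of the validation classifiers mentioned, a K-nearest neighbor vote fits in a few lines of NumPy. The feature vectors below are hypothetical stand-ins for acoustic-emission features, not the study's data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points (Euclidean)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest_labels = y_train[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest_labels, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical 2-D acoustic-emission features: 0 = healthy insulator, 1 = partial discharge
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
```

A query near either cluster is assigned that cluster's label; in practice the features would be descriptors extracted from the acoustic-emission signals.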
Researchers proposed a cost-effective solution to address the escalating issue of wildlife roadkill, focusing on Brazilian endangered species. Leveraging machine learning-based object detection, particularly You Only Look Once (YOLO)-based models, the study evaluated various architectures, introducing data augmentation and transfer learning to enhance model training with limited data.
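Data augmentation of the kind described can be as simple as generating geometric variants of each scarce training image; the sketch below is a generic NumPy example, not the study's augmentation pipeline:

```python
import numpy as np

def augment(image):
    """Yield simple geometric variants of an (H, W, C) image to stretch a small dataset."""
    yield image
    yield np.fliplr(image)        # horizontal mirror
    for k in (1, 2, 3):
        yield np.rot90(image, k)  # 90-, 180-, 270-degree rotations
```

Each labeled roadkill photograph thus yields five training samples; real pipelines typically add color jitter, random crops, and scaling as well.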
Canadian researchers at Western University and the Vector Institute unveil a groundbreaking method employing deep neural networks to predict the memorability of face photographs. Outperforming previous models, this innovation demonstrates near-human consistency and versatility in handling different face shapes, with potential applications spanning social media, advertising, education, security, and entertainment.
Duke University researchers present a groundbreaking dataset of Above-Ground Storage Tanks (ASTs) using high-resolution aerial imagery from the USDA's National Agriculture Imagery Program. The dataset, with meticulous annotations and validation procedures, offers a valuable resource for diverse applications, including risk assessments, capacity estimations, and training object detection algorithms in the realm of remotely sensed imagery and ASTs.
This paper unveils FaceNet-MMAR, an advanced facial recognition model tailored for intelligent university libraries. By optimizing the traditional FaceNet algorithm with innovative features, including MobileNet, Mish activation, an attention module, and a receptive field module, the model showcases superior accuracy and efficiency, garnering high satisfaction rates from both teachers and students in real-world applications.
Researchers introduce machine learning-powered stretchable smart textile gloves, featuring embedded helical sensor yarns and IMUs. Overcoming the limitations of camera-based systems, these gloves provide accurate and washable tracking of complex hand movements, offering potential applications in robotics, sports training, healthcare, and human-computer interaction.
This paper delves into the transformative role of attention-based models, including transformers, graph attention networks, and generative pre-trained transformers, in revolutionizing drug development. From molecular screening to property prediction and molecular generation, these models offer precision and interpretability, promising accelerated advancements in pharmaceutical research. Despite challenges in data quality and interpretability, attention-based models are poised to reshape drug discovery, fostering breakthroughs in human health and pharmaceutical science.
Researchers from the University of Birmingham unveil a novel 3D edge detection technique using unsupervised learning and clustering. This method, offering automatic parameter selection, competitive performance, and robustness, proves invaluable across diverse applications, including robotics, augmented reality, medical imaging, automotive safety, architecture, and manufacturing, marking a significant leap in computer vision capabilities.
Researchers delve into the challenges of protein crystallography, discussing the hurdles in crystal production and structure refinement. In their article, they explore the transformative potential of deep learning and artificial neural networks, showcasing how these technologies can revolutionize various aspects of the protein crystallography workflow, from predicting crystallization propensity to refining protein structures. The study highlights the significant improvements in efficiency, accuracy, and automation brought about by deep learning, paving the way for enhanced drug development, biochemistry, and biotechnological applications.
Researchers unveil PLAN, a groundbreaking Graph Neural Network, transforming earthquake monitoring by seamlessly integrating phase picking, association, and location tasks for multi-station seismic data. Demonstrating superiority over existing methods, PLAN's innovative architecture excels in accuracy and adaptability, paving the way for the next generation of automated earthquake monitoring systems.
Researchers propose an AI-powered posture classification system, employing MoveNet and machine learning, to address ergonomic challenges faced by agricultural workers. The study demonstrates the feasibility of leveraging AI for precise posture detection, offering potential advancements in safety practices and worker health within the demanding agricultural sector.
Researchers have unveiled innovative methods, utilizing lidar data and AI techniques, to precisely delineate river channels' bankfull extents. This groundbreaking approach streamlines large-scale topographic analyses, offering efficiency in flood risk mapping, stream rehabilitation, and tracking channel evolution, marking a significant leap in environmental mapping workflows.
Researchers introduce a groundbreaking optical tomography method employing multi-core fiber-optic cell rotation (MCF-OCR). This innovative system overcomes limitations in traditional optical tomography by utilizing an AI-driven reconstruction workflow, demonstrating superior accuracy in 3D reconstructions of live cells. The MCF-OCR system offers precise control over cell rotation, while the autonomous reconstruction workflow, powered by computer vision technologies, significantly enhances efficiency and accuracy in capturing detailed cellular morphology.
Researchers focus on improving pedestrian safety within intelligent cities using AI, specifically a support vector machine (SVM) model. Leveraging machine learning and authentic pedestrian behavior data, the SVM model outperforms alternatives in predicting crossing probabilities and speeds, demonstrating its potential for enhancing road traffic safety and integrating with intelligent traffic simulations. The study emphasizes the value of SVM in accurately predicting real-time pedestrian behaviors, contributing to refined decision models for safer road designs.
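The classification step can be sketched with a from-scratch linear SVM trained by subgradient descent on the hinge loss. The two features below (approaching-vehicle speed and time gap) and the toy labels are hypothetical stand-ins for the study's behavior data, and this generic sketch is not the authors' model:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Fit w, b by subgradient descent on the regularized hinge loss (labels in {-1, +1})."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: hinge subgradient step
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                              # only the regularizer contributes
                w -= lr * lam * w
    return w, b

# Hypothetical features: [vehicle speed (m/s), time gap (s)]; +1 = pedestrian crosses
X = np.array([[1.0, 6.0], [0.5, 5.0], [1.5, 7.0],
              [4.0, 1.0], [5.0, 2.0], [3.5, 1.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
predictions = np.sign(X @ w + b)
```

On this linearly separable toy set the learned hyperplane recovers the labels; the study's model additionally regresses crossing speed, which a kernelized SVM or support vector regression would handle.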
This study introduces a deep learning-based Motor Assessment Model (MAM) designed to automate General Movement Assessment (GMA) in infants, predicting the risk of cerebral palsy (CP). The MAM, utilizing 3D pose estimation and Transformer architecture, demonstrated high accuracy, sensitivity, and specificity in identifying fidgety movements, essential for CP risk assessment. With interpretability, the model aids GMA beginners and holds promise for streamlined, accessible, and early CP screening, potentially transforming video-based diagnostics for infant motor abnormalities.
This article covers breakthroughs and innovations in natural language processing, computer vision, and data security. From addressing logical reasoning challenges with the discourse graph attention network to advancements in text classification using BERT models, lightweight mask detection in computer vision, sports analytics employing network graph theory, and data security through image steganography, the authors showcase the broad impact of AI across various domains.