Feature extraction is the process in machine learning of deriving relevant, informative features from raw data. It transforms the input into a more compact representation that captures the characteristics essential to a given task. Feature extraction is often performed to reduce the dimensionality of the data, suppress noise, and highlight relevant patterns, improving the performance and efficiency of machine learning models. Techniques such as Principal Component Analysis (PCA), wavelet transforms, and deep learning-based methods are commonly used for feature extraction.
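As a rough illustration of the concept, the sketch below (assuming NumPy and scikit-learn, with synthetic data standing in for real measurements) projects noisy, high-dimensional samples onto a few principal components, producing a compact feature representation and reporting how much variance it retains.

```python
# Minimal PCA feature-extraction sketch (synthetic data, scikit-learn assumed;
# illustrative only, not drawn from any of the studies summarized below).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 5))                          # 5 underlying factors
mixing = rng.normal(size=(5, 64))
X_raw = latent @ mixing + 0.1 * rng.normal(size=(200, 64))  # 200 noisy 64-dimensional samples

X_scaled = StandardScaler().fit_transform(X_raw)            # standardize each measurement
pca = PCA(n_components=10)
X_features = pca.fit_transform(X_scaled)                    # compact 10-dimensional features

print(X_features.shape)                                     # (200, 10)
print(pca.explained_variance_ratio_.sum())                  # fraction of variance retained
```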
Researchers have investigated geographic biases in text-to-image generative models, revealing disparities in image outputs across different regions. They introduced three indicators to evaluate these biases, providing a comprehensive analysis to promote fairer AI-generated content.
Researchers developed a neural network (NN) architecture based on You Only Look Once (YOLO) to automate the detection, classification, and quantification of mussel larvae from microscopic water samples.
Researchers applied a novel AI method combining RGB orthophotos and digital surface models to improve building footprint extraction from aerial and satellite imagery, achieving higher accuracy and efficiency.
Researchers applied deep learning (DL) models, including ResNet-34, to segment canola plants from other species in the field, treating non-canola plants as weeds. Using datasets containing 3799 canola images, the study demonstrated that ResNet-34 achieved superior performance, highlighting its potential for precision agriculture and innovative weed control strategies.
Researchers compared traditional feature-based computer vision methods with CNN-based deep learning for weed classification in precision farming, emphasizing the former's effectiveness with smaller datasets.
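For readers unfamiliar with the "traditional" route, the hedged sketch below pairs simple hand-crafted features (per-channel color statistics, standing in for the texture and shape descriptors such studies typically use) with an SVM classifier. The images and labels are synthetic placeholders; nothing here reproduces the cited study's actual pipeline.

```python
# Hand-crafted features + SVM: a minimal stand-in for feature-based weed
# classification. Synthetic image patches; real descriptors would differ.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
images = rng.random(size=(120, 32, 32, 3))                          # 120 small RGB patches
labels = (images[:, :, :, 1].mean(axis=(1, 2)) > 0.5).astype(int)   # toy label: green-channel dominance

def extract_features(batch):
    """Hand-crafted features: per-channel mean and standard deviation."""
    means = batch.mean(axis=(1, 2))                                 # (N, 3)
    stds = batch.std(axis=(1, 2))                                   # (N, 3)
    return np.hstack([means, stds])                                 # (N, 6) feature vectors

X = extract_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("Held-out accuracy:", clf.score(X_te, y_te))
```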
A study published in Food Control explored machine learning's effectiveness in predicting quality attributes of Prunoideae fruits such as peaches, apricots, and cherries. The researchers utilized XGBoost, LightGBM, CatBoost, and random forest algorithms alongside hyperspectral denoising and feature extraction techniques, achieving notable results in estimating soluble solids content (SSC) and titratable acidity (TA).
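A hedged sketch of that general pipeline follows, using Savitzky-Golay smoothing for denoising, univariate feature selection on the spectra, and an XGBoost regressor for SSC. It assumes the SciPy, scikit-learn, and xgboost packages; the synthetic spectra and target values are placeholders rather than the study's data or hyperparameters.

```python
# Hedged sketch: spectral denoising + feature extraction + gradient-boosted
# regression of a fruit quality attribute (e.g., SSC). Synthetic data only.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
spectra = rng.normal(size=(300, 200))                              # 300 samples x 200 wavelengths (synthetic)
ssc = spectra[:, :20].mean(axis=1) + 0.05 * rng.normal(size=300)   # synthetic target values

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)   # denoise each spectrum
features = SelectKBest(f_regression, k=25).fit_transform(smoothed, ssc)    # keep the 25 most informative bands

X_tr, X_te, y_tr, y_te = train_test_split(features, ssc, random_state=0)
model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr)
print("Held-out R^2:", r2_score(y_te, model.predict(X_te)))
```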
A study introduces advanced deep learning models integrating DenseNet with multi-task learning and attention mechanisms for superior English accent classification. MPSA-DenseNet, the standout model, achieved remarkable accuracy, outperforming previous methods.
A systematic review in the journal Sensors analyzed 77 studies on facial and pose emotion recognition using deep learning, highlighting methods like CNNs and Vision Transformers. The review examined trends, datasets, and applications, providing insights into state-of-the-art techniques and their effectiveness in psychology, healthcare, and entertainment.
Researchers introduced CIMNet, a novel network for crop disease image recognition, excelling in noisy environments. Featuring a non-local attention module and multi-scale critical information fusion, CIMNet outperformed traditional models in accuracy and applicability, significantly enhancing crop disease detection and improving agricultural productivity.
Researchers developed an advanced automated system for early plant disease detection using an ensemble of deep-learning models, achieving superior accuracy on the PlantVillage dataset. The study introduced novel image processing and data balancing techniques, significantly enhancing model performance and demonstrating the system's potential for real-world agricultural applications.
Researchers introduced an RS-LSTM-Transformer hybrid model for flood forecasting, combining random search optimization, LSTM networks, and transformer architecture. Tested in the Jingle watershed, this model outperformed traditional methods, offering enhanced accuracy and robustness, particularly for long-term predictions.
A study in Heliyon introduced a machine learning-based approach for predicting defects in BLDC motors used in UAVs. Researchers compared KNN, SVM, and Bayesian network models, with SVM demonstrating superior accuracy in fault classification, highlighting its potential for improving UAV operational safety and predictive maintenance.
Researchers developed a deep learning and particle swarm optimization (PSO) based system to enhance obstacle recognition and avoidance for inspection robots in power plants. This system, featuring a convolutional recurrent neural network (CRNN) for obstacle recognition and an artificial potential field method (APFM)-based PSO algorithm for path planning, significantly improves accuracy and efficiency.
Researchers presented a novel dual-branch selective attention capsule network (DBSACaps) for detecting kiwifruit soft rot using hyperspectral images. This approach, detailed in Nature, separates spectral and spatial feature extraction, then fuses them with an attention mechanism, achieving a remarkable 97.08% accuracy.
Researchers introduce a novel electronic tongue (E-tongue), the multichannel triboelectric bioinspired E-tongue (TBIET), engineered with advanced triboelectric components on a single glass slide chip. Through comprehensive classification studies across medical, environmental, and beverage samples, the TBIET demonstrates exceptional taste classification accuracy, promising significant advancements in on-site liquid sample detection and analysis.
This study introduces MST-DeepLabv3+, a novel model for high-precision semantic segmentation of remote sensing images. By integrating MobileNetV2, SENet, and transfer learning, the model achieves superior accuracy while maintaining a compact parameter size, revolutionizing remote sensing image analysis and interpretation.
Researchers present an innovative ML-based approach, leveraging GANs for synthetic data generation and LSTM for temporal patterns, to tackle data scarcity and temporal dependencies in predictive maintenance. Despite challenges, their architecture achieves promising results, underlining AI's potential in enhancing maintenance practices.
Researchers introduced a multi-stage progressive detection method utilizing a Swin transformer to accurately identify water deficit in vertical greenery plants. By integrating classification, semantic segmentation, and object detection, the approach significantly improved detection accuracy compared to traditional methods like R-CNN and YOLO, offering promising solutions for urban greenery management.
In a recent paper published in Scientific Reports, researchers introduced a novel image denoising approach that combines dense block architectures and residual learning frameworks. The Sequential Residual Fusion Dense Network efficiently handles Gaussian and real-world noise by progressively integrating shallow and deep features, demonstrating superior performance across diverse datasets.
Researchers proposed a fusion algorithm merging the Lightning Search Algorithm (LSA) with Support Vector Machine (SVM) technology to form an advanced Power Network Security Risk Evaluation Model (PNSREM) that achieves high accuracy, low error rates, and rapid convergence. Empirical validation demonstrated its superiority, enabling preemptive threat identification and uninterrupted power system operation, and highlighted its potential for real-world application in enhancing power network security.