Big Data refers to extremely large datasets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. It's characterized by its high volume, velocity, and variety (the "3 Vs"), and requires specific tools and methods for storage, processing, and analysis.
Researchers apply intelligent systems to large volumes of traffic data from suburban roads in northern Iran to improve traffic state prediction. By combining principal component analysis and genetic algorithms for feature selection with cyclic time features and machine learning models such as long short-term memory (LSTM) networks and support vector machines (SVM), the study reports notable gains in prediction accuracy and efficiency, offering practical insights for transportation management and traffic prediction methodology.
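To make the idea of cyclic features concrete, the following Python sketch (not the authors' code) encodes hour-of-day as sine/cosine pairs, compresses synthetic sensor readings with PCA, and feeds the result to an SVM traffic-state classifier; the genetic-algorithm feature selection and LSTM components are omitted, and all data and variable names are hypothetical.

```python
# Illustrative sketch: cyclic time encoding plus PCA-reduced features
# feeding an SVM traffic-state classifier (synthetic data only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical observations: hour of day plus raw sensor features.
hours = rng.integers(0, 24, size=500)
sensors = rng.normal(size=(500, 20))      # e.g. speeds/volumes from loop detectors
labels = rng.integers(0, 3, size=500)     # traffic state: free / dense / congested

# Cyclic encoding keeps 23:00 and 00:00 close together in feature space.
hour_sin = np.sin(2 * np.pi * hours / 24)
hour_cos = np.cos(2 * np.pi * hours / 24)
X = np.column_stack([hour_sin, hour_cos, sensors])

# PCA compresses the sensor block; an SVM predicts the traffic state.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.score(X, labels))
```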
Researchers present a method for predicting the manufacturing quality of industrial products. Combining the Synthetic Minority Oversampling Technique (SMOTE), Extreme Gradient Boosting (XGBoost), and edge computing, the active-control approach tackles the imbalanced-data problem in quality prediction and introduces a flexible framework for handling industrial data. Applied to brake disc production, the proposed SMOTE-XGboost_t method outperformed other classifiers, demonstrating its effectiveness in a real-world industrial environment.
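As a rough illustration of the general SMOTE-plus-XGBoost idea (the paper's SMOTE-XGboost_t variant adds its own modifications), the sketch below oversamples a hypothetical imbalanced defect dataset and trains an XGBoost classifier; it assumes the imbalanced-learn and xgboost packages are installed.

```python
# Illustrative sketch of a generic SMOTE + XGBoost pipeline for imbalanced
# quality data; synthetic stand-in data, not the paper's brake disc dataset.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical imbalanced data: roughly 5% defective parts.
X, y = make_classification(n_samples=2000, n_features=15,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set stays realistic.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```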
Published in Humanities and Social Sciences Communications, this paper explores the impact of language style congruity in AI voice assistants (VAs) on user experience. By aligning VAs with utilitarian or hedonic service contexts and adapting language styles accordingly, the study reveals a congruity effect that significantly influences users' evaluations, providing valuable insights for technology providers to enhance continuous usage intention.
This article presents a three-tiered machine learning approach for assessing Division-1 women's basketball performance at the player, team, and conference levels. With predictive models achieving over 90% accuracy, the analysis offers insights that coaches can use to refine training strategies and improve overall performance, illustrating how multi-level, data-driven methods can support athlete development and strategic team planning.
This research paper introduces an ensemble learning model that combines extreme gradient boosting (XGBoost) and random forest (RF) algorithms to optimize bank marketing strategies. Trained on financial datasets, the model achieves 91% accuracy, outperforming the other algorithms tested, and is credited with substantial gains in sales (25.67%) and customer satisfaction (20.52%). The study offers banking decision-makers practical guidance for improving marketing precision and customer relationships.
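A minimal sketch of such an ensemble, using a synthetic stand-in for a bank marketing dataset and soft voting between XGBoost and random forest (the paper's exact combination scheme may differ):

```python
# Illustrative sketch: soft-voting ensemble of XGBoost and random forest
# for predicting campaign response; data here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Hypothetical stand-in for a bank marketing dataset (e.g. term-deposit uptake).
X, y = make_classification(n_samples=3000, n_features=20,
                           weights=[0.8, 0.2], random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=1)),
    ],
    voting="soft",  # average predicted probabilities from both models
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean())
```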
In this article, researchers present a gearbox fault diagnosis method based on transfer learning and a lightweight channel attention mechanism. The proposed EfficientNetV2-LECA model achieves over 99% classification accuracy on both gear and bearing samples, advancing intelligent fault diagnosis for mechanical equipment under limited samples and varying working conditions.
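The sketch below shows a generic version of this setup, assuming torchvision 0.13 or later and a hypothetical number of fault classes: a pretrained EfficientNetV2-S backbone is frozen for transfer learning and followed by a simple squeeze-and-excitation style channel attention block; the paper's LECA module differs in its details, and the pretrained weights download on first use.

```python
# Illustrative sketch: frozen EfficientNetV2-S backbone + generic SE-style
# channel attention + new classification head (not the paper's LECA module).
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    """SE-style channel attention: global pooling -> bottleneck -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

backbone = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
for p in backbone.features.parameters():   # freeze pretrained features (transfer learning)
    p.requires_grad = False

num_fault_classes = 6                      # hypothetical number of gearbox fault types
model = nn.Sequential(
    backbone.features,
    ChannelAttention(1280),                # EfficientNetV2-S features end with 1280 channels
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(1280, num_fault_classes),
)
print(model(torch.randn(2, 3, 224, 224)).shape)   # torch.Size([2, 6])
```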
Researchers assess critically low forest cover using Sentinel-2 satellite imagery and machine learning algorithms. Artificial neural network (ANN) and random forest (RF) classifiers achieve 97.75% and 96.98% overall accuracy, respectively, demonstrating their potential for precise land cover classification. The authors recommend integrating hyperspectral satellite imagery for higher accuracy and exploring deep learning algorithms to further advance forest cover assessment.
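As an illustration of the random forest half of this workflow (not the study's code), the sketch below classifies hypothetical per-pixel Sentinel-2 band values into land cover classes; the data are random placeholders, so the printed accuracy is meaningless.

```python
# Illustrative sketch: random forest classification of per-pixel Sentinel-2
# band reflectances into land cover classes (dummy data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical training pixels: 10 Sentinel-2 bands, 4 land cover classes
# (forest, cropland, water, built-up) sampled from labeled polygons.
X = rng.random((5000, 10))
y = rng.integers(0, 4, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```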
This study introduces innovative unsupervised machine-learning techniques to analyze and interpret high-resolution global storm-resolving models (GSRMs). By leveraging variational autoencoders and vector quantization, the researchers systematically break down massive datasets, uncover spatiotemporal patterns, identify inconsistencies among GSRMs, and even project the impact of climate change on storm dynamics.
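A minimal sketch of this two-stage idea, with an assumed toy architecture rather than the authors' model: a small convolutional VAE encodes 2-D storm/cloud patches into latent vectors, which are then vector-quantized with k-means (the classic VQ algorithm) to group them into regimes; the training loop and reconstruction loss are omitted.

```python
# Illustrative sketch: tiny convolutional VAE + k-means vector quantization
# of latent vectors into storm "regimes" (random placeholder fields).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class SmallVAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(), nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(32 * 8 * 8, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

# Hypothetical batch of 32x32 precipitation patches; in practice these would
# come from global storm-resolving model output.
fields = torch.randn(64, 1, 32, 32)
vae = SmallVAE()
recon, mu, logvar = vae(fields)

# Vector quantization of the latent space: k-means assigns each patch to a regime.
regimes = KMeans(n_clusters=5, n_init=10).fit_predict(mu.detach().numpy())
print(regimes[:10])
```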
Researchers introduced the MDCNN-VGG, a novel deep learning model designed for the rapid enhancement of multi-domain underwater images. This model combines multiple deep convolutional neural networks (DCNNs) with a Visual Geometry Group (VGG) model, utilizing various channels to extract local information from different underwater image domains.
Researchers introduced Relay Learning, a deep-learning framework designed to keep clinical data physically isolated from external intruders. This secure multi-site deep learning approach strengthens data privacy and security while delivering strong performance across a range of multi-site clinical settings, setting a new standard for AI-aided medical solutions and cross-site data sharing in healthcare.
A recent research publication explores the profound impact of artificial intelligence (AI) on urban sustainability and mobility. The study highlights the role of AI in supporting dynamic and personalized mobility solutions, sustainable urban mobility planning, and the development of intelligent transportation systems.
Tenchijin, a Japanese startup, is utilizing deep learning and satellite data to address issues with satellite internet, particularly the impact of weather on ground stations. Their AI system accurately predicts suitable ground stations, providing more reliable internet connectivity, and their COMPASS service has applications in renewable energy, agriculture, and city planning by optimizing land use decisions using a variety of data sources.
This review explores the applications of artificial intelligence (AI) in studying fishing vessel (FV) behavior, emphasizing the role of AI in monitoring and managing fisheries. The paper discusses data sources for FV behavior research, AI techniques used to monitor FV behavior, and the uses of AI in identifying vessel types, forecasting fishery resources, and analyzing fishing density.
Researchers have applied Vision Transformers (ViT) to fashion image classification and recommendation. Their ViT-based models outperformed CNN and pre-trained baselines, achieving high accuracy in classifying fashion images and delivering efficient, accurate recommendations, underscoring the potential of ViTs in the fashion industry.
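A hedged sketch of ViT-based fashion classification via transfer learning, assuming torchvision 0.13 or later and a hypothetical 10-class label set rather than the authors' setup:

```python
# Illustrative sketch: fine-tuning a pretrained ViT-B/16 as a fashion
# image classifier (frozen backbone, new classification head).
import torch
import torch.nn as nn
from torchvision import models

num_fashion_classes = 10   # hypothetical: dress, shirt, sneaker, ... (dataset-dependent)

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
for p in vit.parameters():                 # freeze the pretrained transformer
    p.requires_grad = False
vit.heads.head = nn.Linear(vit.heads.head.in_features, num_fashion_classes)  # trainable head

optimizer = torch.optim.AdamW(vit.heads.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_fashion_classes, (4,))
loss = criterion(vit(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```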
In a recent study, AI-driven data analysis predicts Greco-Roman wrestlers' competitive success with an error rate of just 11%. The approach could inform athlete selection and training across a range of sports, offering valuable insights for coaches and athletes alike.
Researchers conducted a comprehensive bibliometric exploration of non-destructive testing techniques for assessing fruit quality. Leveraging Web of Science data, they unveiled evolving research trends, hotspots, and the promising integration of advanced technologies like machine vision and deep learning, offering valuable insights for the fruit industry's competitiveness and quality assurance.
Researchers introduce the "general theory of data, artificial intelligence, and governance," offering fresh insights into the complexities of the data economy and its implications for digital governance. Their model, which incorporates data flows, knowledge concentration, and data sharing, provides a foundation for addressing the challenges of data capitalism and shaping equitable and innovative data policies in the digital age.
Researchers have successfully employed the MegaDetector open-source object detection model to automate cross-regional wildlife and visitor monitoring using camera traps. This innovation not only accelerates data processing but also ensures accurate and privacy-compliant monitoring of wildlife-human interactions.
Researchers use artificial neural networks (ANN) to classify UNESCO World Heritage Sites (WHS) and evaluate the impact of input variables on classification outcomes. The study compares multilayer perceptron (MLP) and radial basis function (RBF) neural networks, highlighting the significance of feature selection and the trade-off between evaluation time and accuracy.
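The sketch below illustrates the MLP-with-feature-selection idea on a synthetic stand-in for a WHS attribute table; scikit-learn has no built-in RBF network, so only the MLP side of the comparison is shown here.

```python
# Illustrative sketch: multilayer perceptron with simple feature selection
# on a synthetic tabular dataset standing in for WHS attributes.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical WHS feature table: e.g. region, selection criteria, area, year inscribed.
X, y = make_classification(n_samples=1200, n_features=12, n_informative=6,
                           n_classes=3, random_state=7)

mlp = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=8),                 # keep the most informative inputs
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=7),
)
print(cross_val_score(mlp, X, y, cv=5).mean())
```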
This paper explores how the fusion of big data and artificial intelligence (AI) is reshaping product design in response to heightened consumer preferences for customized experiences. The study highlights how these innovative methods are breaking traditional design constraints, providing insights into user preferences, and fostering automation and intelligence in the design process, ultimately driving more competitive and intelligent product innovations.