Explainable AI (XAI) refers to methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI is crucial for building trust in AI systems and for their ethical and fair use.
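One widely used post-hoc explanation technique is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below is a minimal, self-contained illustration on a made-up "black box" and toy data (the model, features, and values are hypothetical, not drawn from any study summarized here):

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=20, seed=0):
    """Average drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        acc = sum(model(row) == label for row, label in zip(shuffled, y)) / len(y)
        drops.append(baseline - acc)
    return sum(drops) / n_repeats

# Toy "black box": predicts 1 when feature 0 exceeds a threshold;
# feature 1 is ignored by the model, so shuffling it cannot hurt accuracy.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.2, 0.1], [0.8, 0.4], [0.9, 0.7], [0.3, 0.2], [0.7, 0.6]]
y = [model(row) for row in X]

print(permutation_importance(model, X, y, feature_idx=0))  # clearly positive drop
print(permutation_importance(model, X, y, feature_idx=1))  # 0.0: feature 1 is irrelevant
```

Because the importance score is computed only from the model's inputs and outputs, this style of explanation applies to any opaque model without access to its internals.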
A study in Decision Support Systems finds that explainable artificial intelligence (XAI) significantly improves decision-making in supply chains by enhancing transparency and enabling agile responses to cyber threats. Experimental results and a post hoc analysis of tweets emphasize XAI's role in making AI processes more interpretable and trustworthy.
Researchers utilized machine learning algorithms to predict life satisfaction with high accuracy (93.80%) using data from a Danish government survey. By identifying 27 key questions and employing models such as KNN, SVM, and Bayesian networks, the study highlighted the significant impact of health conditions on life satisfaction and made the best predictive model publicly available.
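To illustrate one of the model families named above, here is a minimal k-nearest-neighbors (KNN) classifier in plain Python: it predicts a label by majority vote among the k closest training points. The "survey" features and labels below are invented toy values, not the Danish survey data:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(row, query), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical survey responses: [self-rated health (1-5), weekly social contacts]
train_X = [[5, 6], [4, 5], [5, 4], [1, 1], [2, 2], [1, 3]]
train_y = ["satisfied", "satisfied", "satisfied",
           "unsatisfied", "unsatisfied", "unsatisfied"]

print(knn_predict(train_X, train_y, [4, 4]))  # -> "satisfied"
print(knn_predict(train_X, train_y, [2, 1]))  # -> "unsatisfied"
```

KNN is a natural fit for survey data because it makes predictions directly from similar respondents, which also keeps its reasoning easy to inspect.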
Researchers explore the application of AI and ML in volatility forecasting, revealing their promise in improving accuracy and informing financial decisions. The review underscores the need for further exploration in explainable AI, uncertainty quantification, and alternative data sources to advance forecasting capabilities.
Despite expectations, incorrect AI-generated advice consistently led to performance decrements in personnel selection tasks, indicating overreliance. While both the advice source and its explainability influenced participants' reliance on inaccurate guidance, the effectiveness of visual explanations in preventing overreliance remained inconclusive, highlighting the complexity of human-AI interaction and the need for robust regulatory standards in human resource management (HRM).
Researchers propose an AI-driven approach for predicting and managing water quality, crucial for environmental sustainability. Utilizing explainable AI models, they showcase the significance of transparent decision-making in classifying drinkable water, emphasizing the potential of their methodology for real-time monitoring and proactive risk mitigation in water management practices.
This study provides an in-depth exploration of the advancements, challenges, and future prospects of digital twins in various industrial applications. It covers the theoretical frameworks, technological implementations, and practical considerations essential for understanding and leveraging digital twins effectively across different sectors.
In a study published in Scientific Reports, advanced AI techniques dissected the social media activity of 1358 VK users, unveiling correlations between behavior and personality traits. Through meticulous analysis of 753,252 posts and reposts alongside Big Five traits and intelligence assessments, the research highlighted the influence of emotional tone and engagement metrics on psychological attributes, advocating for behavior-based diagnostic models in the digital realm.
This article explores the ramifications of the European Union (EU) Artificial Intelligence Act (AIA) on high-risk AI systems, focusing on decision-support systems with human control, particularly in the context of DeepFake detection. By delving into requirements under the AIA and proposing an adapted evaluation scheme, the paper contributes to the design and evaluation of high-risk AI systems. It emphasizes the critical role of human oversight, qualitative feedback, and explainability in ensuring the efficacy and ethicality of AI applications, especially in forensic scenarios.
This paper explores the dynamic integration of artificial intelligence/machine learning (AI/ML) in biomedical research, emphasizing its pivotal role in predictive analysis across diverse domains. While acknowledging this transformative potential, the paper highlights challenges such as inclusivity, synergy between computational models and human expertise, and standardization of clinical data, framing each as an opportunity for innovation in optimizing human health through AI/ML.
This research delves into the synergy of Artificial Intelligence (AI) and Internet of Things (IoT) security. The study evaluates and compares various AI algorithms, including machine learning (ML) and deep learning (DL), for classifying and detecting IoT attacks. It introduces a novel taxonomy of AI methodologies for IoT security and identifies LSTM as the top-performing algorithm, emphasizing its potential applications in diverse fields.
The paper published in the journal Electronics explores the crucial role of Artificial Intelligence (AI) and Explainable AI (XAI) in Visual Quality Assurance (VQA) within manufacturing. While AI-based Visual Quality Control (VQC) systems are prevalent in defect detection, the study advocates for broader applications of VQA practices and increased utilization of XAI to enhance transparency and interpretability, ultimately improving decision-making and quality assurance in the industry.
This study presents an innovative system for business purchase prediction that combines Long Short-Term Memory (LSTM) neural networks with Explainable Artificial Intelligence (XAI). The system is designed to predict future purchases in a medical drug company, offering transparent explanations for its predictions, fostering user trust, and providing valuable insights for business decision-making.
This research article underscores the importance of aligning AI outputs with human expectations in decision support systems and introduces the concept of Explainable AI (XAI). A systematic review yields a taxonomy of interaction patterns, emphasizing the need for more interactive functionality in AI systems.
Researchers outlined six principles for the ethical use of AI and machine learning in Earth and environmental sciences. These principles emphasize transparency, intentionality, risk mitigation, inclusivity, outreach, and ongoing commitment. The study also highlights the importance of addressing biases, data disparities, and the need for transparency initiatives like explainable AI (XAI) to ensure responsible and equitable AI-driven research in these fields.
This study employed an innovative approach combining Artificial Neural Networks (ANN) and Explainable AI (XAI) to match ungauged catchments with gauged ones in the Great Barrier Reef. The research successfully improved dissolved inorganic nitrogen (DIN) modeling by considering both biotic and abiotic factors, providing valuable insights for water quality management and conservation efforts in the region.
This research introduces TabNet-IDS, an innovative Intrusion Detection System for IoT networks. The model leverages deep learning and attentive mechanisms to enhance security in IoT systems, achieving high accuracy rates on various datasets while maintaining model interpretability, thus serving as a promising tool for safeguarding networked devices.
This research delves into the growing influence of artificial intelligence (AI) and machine learning (ML) on financial markets. Through a mixed-methods approach, it examines AI's applications in trading, risk management, and financial operations, highlighting adoption trends, challenges, and ethical considerations.
This article delves into the intricate relationship between causality and eXplainable Artificial Intelligence (XAI) from three perspectives. It examines the limitations of current XAI, explores how XAI can contribute to causal inquiry, and advocates for the integration of causality to enhance XAI.
Researchers have developed an open-source Python tool that integrates explainable artificial intelligence (XAI) with Google Earth Engine to improve land cover mapping and monitoring. The tool provides feature importance metrics and supports land cover classification and change detection workflows, making it a valuable resource for remote sensing applications with transparent machine learning.
Researchers have developed a novel approach that combines ResNet-based deep learning with Grad-CAM visualization to enhance the accuracy and interpretability of medical text processing. This innovative method provides valuable insights into AI model decision-making processes, making it a promising tool for improving healthcare diagnostics and decision support systems.
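Grad-CAM highlights the input regions that drive a prediction by weighting each convolutional feature map by the average gradient of the class score with respect to that map, then applying a ReLU. Below is a minimal numeric sketch of that core step; the 2x2 feature maps and gradients are made-up toy values, not outputs of the authors' ResNet:

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM core: heatmap = ReLU(sum_k alpha_k * A_k),
    where alpha_k is the mean gradient over feature map k."""
    heatmap = [[0.0] * len(feature_maps[0][0]) for _ in feature_maps[0]]
    for A_k, grad_k in zip(feature_maps, gradients):
        # alpha_k: global average pooling of this map's gradients
        alpha = sum(sum(row) for row in grad_k) / (len(grad_k) * len(grad_k[0]))
        for i, row in enumerate(A_k):
            for j, a in enumerate(row):
                heatmap[i][j] += alpha * a
    # ReLU keeps only regions that push the class score up
    return [[max(0.0, v) for v in row] for row in heatmap]

# Two toy 2x2 feature maps and their gradients w.r.t. the class score
feature_maps = [[[1.0, 0.0], [0.0, 2.0]],
                [[0.0, 3.0], [1.0, 0.0]]]
gradients = [[[0.4, 0.4], [0.4, 0.4]],      # alpha_0 = 0.4
             [[-0.2, -0.2], [-0.2, -0.2]]]  # alpha_1 = -0.2

print(grad_cam(feature_maps, gradients))  # -> [[0.4, 0.0], [0.0, 0.8]]
```

The resulting heatmap is upsampled and overlaid on the input in practice, which is what makes Grad-CAM output directly readable by clinicians reviewing a model's decision.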