AI is used in social media to analyze user behavior, personalize content recommendations, and detect trends or sentiment. It employs machine learning algorithms and natural language processing to understand and categorize user-generated content, optimize ad targeting, and enhance user experiences on social media platforms.
Canadian researchers at Western University and the Vector Institute unveil a groundbreaking method employing deep neural networks to predict the memorability of face photographs. Outperforming previous models, this innovation demonstrates near-human consistency and versatility in handling different face shapes, with potential applications spanning social media, advertising, education, security, and entertainment.
Researchers present CrisisViT, a novel transformer-based model designed for automatic image classification in crisis response scenarios. Leveraging in-domain learning with the Incidents1M crisis image dataset, CrisisViT outperforms conventional models, offering enhanced accuracy in disaster type, image relevance, humanitarian category, and damage severity classification. This innovation provides an efficient solution for crisis responders, enabling rapid image analysis through smartphones and social media, thereby aiding timely decision-making during emergencies.
This study from South Korea delves into the factors influencing user satisfaction and loyalty in Algorithmic News Recommendation Services (ANRS). By proposing a research model based on loyalty theory, service quality, and personal factors, the authors offer insights for service providers to enhance user experience and manage potential challenges like privacy concerns and biased perspectives.
This paper emphasizes the crucial role of machine learning (ML) in detecting and combating fake news amid the proliferation of misinformation on social media. The study reviews various ML techniques, including deep learning, natural language processing (NLP), ensemble learning, transfer learning, and graph-based approaches, highlighting their strengths and limitations in fake news detection. The researchers advocate for a multifaceted strategy, combining different techniques and optimizing computational strategies to address the complex challenges of identifying misinformation in the digital age.
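The blurb above names the technique families but not any concrete system. As a minimal, invented illustration of the supervised side of such pipelines (not a method from the study — the headlines, labels, and model choice here are all toy assumptions), a character-n-gram TF-IDF plus logistic-regression baseline looks like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy headlines (invented for illustration); 1 = fake, 0 = genuine.
headlines = [
    "Miracle cure erases all disease overnight, doctors stunned",
    "Celebrity secretly replaced by clone, insider claims",
    "City council approves budget for new public library",
    "Local hospital opens expanded cardiology wing",
]
labels = [1, 1, 0, 0]

# Character n-grams are robust to the creative spellings common in fake posts.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(headlines, labels)

print(model.predict(["Aliens endorse miracle cure, experts stunned"])[0])
```

Real detectors reviewed in the paper (deep, ensemble, graph-based) replace this linear model, but the fit/predict layout of the task is the same.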
This study delves into the influence of exposure to social bots on individuals' perceptions and policy preferences regarding these automated accounts on popular platforms like Twitter, Facebook, and Instagram. The research reveals that even minimal exposure distorts perceptions of bot prevalence and self-efficacy, triggering reactive policy sentiments among social media users.
This article discusses bioRxiv's collaboration with ScienceCast, an AI startup, to use large language models for multi-level summaries of scientific preprints. While aiming to enhance accessibility, the pilot reveals challenges in accurately summarizing complex technical content, with scientists noting inaccuracies. The future outlook suggests potential benefits as AI capabilities advance, but concerns around precision and the need for a balance between automation and human oversight persist.
Researchers propose essential prerequisites for improving the robustness evaluation of large language models (LLMs) and highlight the growing threat of embedding space attacks. This study emphasizes the need for clear threat models, meaningful benchmarks, and a comprehensive understanding of potential vulnerabilities to ensure LLMs can withstand adversarial challenges in open-source models.
This paper presents MULTITuDE, a benchmark dataset designed for multilingual machine-generated text (MGT) detection. The study evaluates various detection methods across 11 languages, demonstrating that fine-tuning detectors with multilingual language models is an effective approach, and the linguistic similarity between languages plays a significant role in the generalization of detectors.
This review explores the landscape of social robotics research, addressing knowledge gaps and implications for business and management. It highlights the need for more studies on social robotic interactions in organizations, trust in human-robot relationships, and the impact of virtual social robots in the metaverse, emphasizing the importance of balancing technology integration with societal well-being.
Researchers present the "SCALE" framework, which evaluates the impact of AI on the mortgage market, with a focus on promoting homeownership inclusivity for marginalized communities. The framework encompasses societal values, contextual integrity, accuracy, legality, and expanded opportunity, aiming to address concerns about bias and discrimination in AI applications within the mortgage industry while advancing fair lending practices and social equity in homeownership.
In a groundbreaking study, AI-driven data analysis accurately predicts Greco-Roman wrestlers' competitive success, with just an 11% error rate. This research has the potential to revolutionize athlete selection and training in various sports, offering valuable insights for coaches and athletes alike.
Researchers have introduced a groundbreaking approach to AI learning in social environments, where agents actively interact with humans. By combining reinforcement learning with social norms, the study demonstrated a 112% improvement in recognizing new information, highlighting the potential of socially situated AI in open social settings and human-AI interactions.
This study delves into the transformative potential of data science in African healthcare and research, emphasizing the critical role of ethical governance. It highlights ongoing initiatives, investments, and challenges while stressing the need for collaboration and investment in ethical oversight to drive impactful research on the continent.

A recent study delves into the automated classification of short texts from social media, crucial for social science research. The research compares lexicon-based and supervised machine learning approaches, highlighting the significance of traditional ML algorithms in short text classification and their efficiency compared to deep neural architectures, especially in cases with limited data resources.
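To make the two compared approaches concrete, here is a toy, invented sketch (not the study's actual lexicons, data, or models): a hand-made word-list classifier next to a bag-of-words Naive Bayes classifier, one of the traditional ML algorithms the study favors for limited-data settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# --- Lexicon-based: count hits against hand-made word lists (toy lexicon). ---
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "sad"}

def lexicon_label(text: str) -> str:
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "pos" if score >= 0 else "neg"

# --- Supervised: bag-of-words + multinomial Naive Bayes on toy labeled posts. ---
posts = ["love this great phone", "awful battery hate it",
         "excellent screen so happy", "terrible service sad day"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(posts, labels)

print(lexicon_label("love the camera"))          # lexicon route
print(clf.predict(["great camera love it"])[0])  # supervised route
```

The lexicon route needs no training data but is frozen at its word lists; the supervised route adapts to whatever labeled posts it is given, which is the trade-off the study examines.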
Researchers have developed a robust web-based malware detection system that utilizes deep learning, specifically a 1D-CNN architecture, to classify malware within portable executable (PE) files. This innovative approach not only showcases impressive accuracy but also bridges the gap between advanced malware detection technology and user accessibility through a user-friendly web interface.
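The researchers' actual 1D-CNN architecture is not reproduced here. As a dependency-light numpy sketch of just its core operation — a 1D convolution sliding over raw PE byte values, followed by ReLU and max pooling — with an invented filter and a stand-in byte sequence:

```python
import numpy as np

def conv1d(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 1D convolution: slide the kernel along the byte sequence."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

# Toy stand-in for the first bytes of a PE file (values 0-255).
pe_bytes = np.array([77, 90, 144, 0, 3, 0, 0, 0, 4, 0], dtype=float)
pe_bytes /= 255.0  # scale into [0, 1], a common CNN input convention

# A hypothetical filter; in the real model these weights are learned.
kernel = np.array([0.5, -1.0, 0.5])

feature_map = conv1d(pe_bytes, kernel)    # length: len(signal) - len(kernel) + 1
activated = np.maximum(feature_map, 0.0)  # ReLU
pooled = activated.max()                  # global max pooling

print(feature_map.shape, float(pooled))
```

A trained classifier stacks many such filters and layers, then feeds the pooled features into a dense layer that outputs a malware/benign score.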
Researchers delve into the intricacies of user intent modeling in conversational recommender systems, revealing symbiotic relationships between models and features. Through systematic literature reviews and real-world case studies, they present a structured decision model that emphasizes practical adaptability and promotes collaboration, equipping academics and practitioners to innovate in the realm of AI-driven conversations.
Researchers propose a hybrid model that integrates sentiment analysis using Word2vec and Long Short-Term Memory (LSTM) for accurate exchange rate trend prediction. By incorporating emotional weights from Weibo data and historical exchange rate information, combined with CNN-LSTM architecture, the model demonstrates enhanced prediction accuracy compared to traditional methods.
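The paper's Word2vec and CNN-LSTM components are heavyweight; as a dependency-light sketch of only the feature-fusion step (toy synthetic series, and a linear model standing in for the CNN-LSTM purely to show the input/target layout), pairing a daily sentiment score with the lagged rate might look like:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy daily series (invented): an exchange rate random walk and a
# Weibo-style sentiment score in [-1, 1] for each day.
rates = 7.0 + np.cumsum(rng.normal(0, 0.01, 60))
sentiment = np.clip(rng.normal(0, 0.3, 60), -1, 1)

# Feature fusion: predict tomorrow's rate from today's rate and sentiment.
X = np.column_stack([rates[:-1], sentiment[:-1]])
y = rates[1:]

model = LinearRegression().fit(X, y)
next_rate = model.predict([[rates[-1], sentiment[-1]]])[0]
print(round(float(next_rate), 4))
```

The hybrid model replaces the linear stand-in with a CNN-LSTM and derives the sentiment column from Word2vec-based emotional weights, but the shape of the fused input is the same.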
Researchers unveil the MedMine initiative, a pioneering effort that harnesses the power of pre-trained language models like Med7 and Clinical-XLM-RoBERTa for medication mining in clinical texts. By systematically evaluating these models and addressing challenges, the initiative lays the groundwork for a transformative shift in healthcare practices, promising accurate medication extraction, improved patient care, and advancements in medical research.
Recent advancements in Natural Language Processing (NLP) have revolutionized various fields, yet concerns about embedded biases have raised ethical and fairness issues. To address this challenge, a team of researchers presents Nbias, a framework described in an arXiv preprint that detects and mitigates biases in textual data, targeting both the explicit and implicit biases that can perpetuate stereotypes and inequalities.
This comprehensive review explores the integration of machine learning (ML) techniques in forest fire science. The study highlights the significance of early fire prediction and detection for effective fire management. It discusses various ML methods applied in forest fire detection, prediction, fire mapping, and data evaluation. The review identifies challenges and research priorities while emphasizing the potential benefits of ML in improving forest fire resilience and enabling more efficient data analysis and modeling.