Predicting Concrete Strength with Machine Learning Models

In a paper published in the journal Materials, researchers investigated an effective machine-learning (ML) model to predict the compressive strength of concrete by creating an experimental dataset of 228 samples and testing six different algorithms.

Study: Predicting Concrete Strength with Machine Learning Models. Image Credit: Fit Ztudio/Shutterstock.com

The evaluation showed that the extreme gradient boosting (XGBoost) model delivered the highest accuracy. The team interpreted the model through key-feature, trend-evolution, single-sample, and correlation analyses, which showed that each factor's contribution to compressive strength aligned with conventional theory. Overall, the XGBoost model effectively predicted concrete's compressive strength.

Background

Past work has shown that predicting concrete's compressive strength is challenging due to its complex composition and variable curing conditions. Conventional methods, such as rebound testing and core drilling, are limited in precision and consistency. At the same time, machine-learning algorithms such as decision trees (DT), naive Bayes (NB), random forest (RF), adaptive boosting (AdaBoost), gradient boosting decision trees (GBDTs), and XGBoost have shown promise in enhancing prediction accuracy.

Modeling Concrete Strength 

This study selected six algorithms to predict concrete's compressive strength and compared their performance to identify the best model. The algorithms included K-nearest neighbors (KNN), DT, RF, GBDT, AdaBoost, and XGBoost. KNN predicts target values from the distance between data points, using the labels or averages of the K nearest neighbors. DT summarizes decision rules from feature and label data in a tree structure. RF integrates multiple decision trees, reducing overfitting and improving generalization. Boosting algorithms turn weak learners into strong learners through iterative processes. XGBoost, an optimized distributed gradient boosting library, improves on GBDT by accounting for tree complexity, fitting second-order derivative expansions, and using multi-threading for faster processing.
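As a concrete illustration of the KNN idea described above, the following minimal sketch averages the targets of the K closest training points. The toy feature values and strengths are invented for illustration and are not taken from the study's dataset; in practice, features would also be normalized so that no single variable (such as age) dominates the distance.

```python
# Minimal sketch of KNN regression: predict a target as the average
# of the K nearest training samples under Euclidean distance.
from math import dist

def knn_predict(train_X, train_y, query, k=3):
    """Average the targets of the k training points closest to `query`."""
    # Rank training samples by distance to the query point.
    ranked = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))
    nearest = ranked[:k]
    return sum(y for _, y in nearest) / k

# Toy data: (water-binder ratio, age in days) -> strength in MPa (invented).
X = [(0.35, 28), (0.45, 28), (0.55, 28)]
y = [52.0, 44.0, 36.0]
print(knn_predict(X, y, (0.40, 28), k=2))  # mean of the two nearest: 48.0
```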

The team evaluated the performance of the prediction models through four key regression metrics: R-squared (R²), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Together, these metrics provided a comprehensive assessment of model accuracy and reliability. The Shapley additive explanations (SHAP) method enhanced the best-performing model's interpretability by assigning a contribution to each feature, providing both global and local interpretations. The modeling process began with establishing a database of 228 concrete samples.
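The four regression metrics above follow directly from their standard definitions; a small pure-Python sketch (the example values are invented, not results from the study):

```python
# R², RMSE, MAE, and MAPE computed from their standard definitions.
from math import sqrt

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    ss_res = sum(e * e for e in errs)               # residual sum of squares
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    return {
        "R2": 1 - ss_res / ss_tot,
        "RMSE": sqrt(ss_res / n),
        "MAE": sum(abs(e) for e in errs) / n,
        "MAPE": sum(abs(e / t) for e, t in zip(errs, y_true)) / n * 100,
    }

# Invented strengths in MPa, purely to show the calculation.
m = regression_metrics([30.0, 40.0, 50.0], [28.0, 41.0, 49.0])
print(m)  # R2 = 0.97, RMSE ≈ 1.414, MAE ≈ 1.333, MAPE ≈ 3.72 %
```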

The database was divided into training (80%) and testing (20%) sets, and model parameters were tuned to prevent overfitting or underfitting. The prediction models were then evaluated for performance, with XGBoost emerging as the best model. The credibility of the XGBoost model was further verified through key-feature analysis, trend evolution, single-sample interpretation, and correlation analysis. The SHAP method enabled a detailed understanding of each feature's contribution to the model's output, ensuring the reliability and interpretability of the predictions.
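The 80/20 partition described above can be sketched as a seeded shuffle-and-cut; the study does not specify its splitting procedure, so the fixed seed and the shuffle approach here are assumptions made for reproducibility of the example:

```python
# Sketch of an 80/20 train-test split over the 228-sample database,
# using a fixed seed so the partition is reproducible.
import random

def train_test_split(samples, train_frac=0.8, seed=42):
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)        # deterministic shuffle
    cut = int(len(samples) * train_frac)    # 228 * 0.8 -> 182 training rows
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

data = list(range(228))   # stand-in indices for the 228 concrete samples
train, test = train_test_split(data)
print(len(train), len(test))  # 182 46
```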

Database and Results 

The experimental database used in this study consists of 228 concrete samples, with parameters including the water–binder ratio, water content, sand ratio, superplasticizer, air-entraining agent, slump, air content, and age. The concrete specimens are standard 150 × 150 × 150 mm cubes, all cured under standard conditions. Key features include variations in the water–binder ratio (0.35 to 0.55), water content (124 kg/m³ to 154 kg/m³), sand ratio (37% to 43%), superplasticizer (0.50% to 0.75%), and air-entraining agent (0.20% to 1.5%). The slump ranges from 163 mm to 202 mm, air content from 1.2% to 6.4%, and ages are 7, 28, and 90 days.

The compressive strength values span from 17.1 MPa to 61.4 MPa, with these metrics depicted in the accompanying figures. Six algorithms—KNN, DT, RF, GBDT, AdaBoost, and XGBoost—were assessed for their effectiveness. Among them, XGBoost outperformed the others, showing the lowest RMSE, MAE, and MAPE in the training and testing phases, underscoring its exceptional generalization capability.

The scatter plot and bar graph analysis further demonstrated XGBoost's higher accuracy relative to the other models. Despite its high performance, the prediction process of XGBoost remains opaque, necessitating the interpretative analysis in the following section, which uses SHAP values to elucidate how input features influence predictions.

Feature Insights 

The study used SHAP values to analyze the importance and impact of various input features on concrete's compressive strength. The relative importance of features such as age, air content, and slump was assessed, with age having the greatest influence. Trends in feature evolution showed that age and slump positively affect compressive strength, while air content and the water–binder ratio have a negative impact. Single-sample analysis further decomposed individual predictions into contributions from each variable, validating the XGBoost model's accuracy. Correlation analysis confirmed that age, water–binder ratio, air content, and slump influence compressive strength, aligning with the feature importance findings.
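The single-sample decomposition described above relies on the additive property of SHAP: one prediction equals a base value plus one contribution per feature. The sketch below illustrates that bookkeeping with invented numbers; the base value and per-feature contributions are hypothetical, not values reported in the study.

```python
# SHAP is additive: prediction = base value + sum of per-feature contributions.
base_value = 42.0   # hypothetical mean model output over the training set
contributions = {   # hypothetical SHAP values for one sample, in MPa
    "age": +6.3,
    "water-binder ratio": -2.1,
    "air content": -1.5,
    "slump": +0.8,
}
prediction = base_value + sum(contributions.values())

# Sorting by |contribution| gives the single-sample feature ranking.
ranked = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)
print(round(prediction, 1), ranked[0])  # 45.5 age
```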

Conclusion 

To sum up, this study demonstrated that ML algorithms, particularly XGBoost, effectively predicted concrete's compressive strength. The analysis addressed the "black box" issue by providing interpretable insights into how input parameters influence the predictions. Despite this, limitations included not exploring aggregate types, excluding other concrete properties, and inaccuracies in compressive strength calculation.

Journal reference:
  • Wang, W., et al. (2024). Prediction of Compressive Strength of Concrete Specimens Based on Interpretable Machine Learning. Materials, 17(15), 3661. DOI: 10.3390/ma17153661. https://www.mdpi.com/1996-1944/17/15/3661

Written by

Silpaja Chandrasekar

Dr. Silpaja Chandrasekar has a Ph.D. in Computer Science from Anna University, Chennai. Her research expertise lies in analyzing traffic parameters under challenging environmental conditions. Additionally, she has gained valuable exposure to diverse research areas, such as detection, tracking, classification, medical image analysis, cancer cell detection, chemistry, and Hamiltonian walks.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Chandrasekar, Silpaja. (2024, July 30). Predicting Concrete Strength with Machine Learning Models. AZoAi. Retrieved on December 22, 2024 from https://www.azoai.com/news/20240730/Predicting-Concrete-Strength-with-Machine-Learning-Models.aspx.


