In an article published in the journal Nuclear Engineering and Technology, researchers used machine learning algorithms to improve the quantitative interpretation of uranium spectral gamma-ray logging. By addressing low counting statistics and spectral drift, the authors demonstrated that a backpropagation (BP) neural network achieved high accuracy in quantifying uranium in standard model wells, even at increased logging speeds, enhancing traditional spectral gamma-ray logging methods.
Background
Uranium spectral gamma-ray logging is a crucial method for studying well profiles and evaluating uranium resources based on the gamma rays emitted by natural radionuclides in rocks. Traditional quantitative interpretation methods, such as manual calculation, stripping analysis, and least-squares inverse-matrix methods, face significant challenges, including errors from overlapping peaks and low statistical accuracy. Despite advancements like weighted least-squares and direct demodulation methods, complex calculations and quantification errors persist.
With the advent of machine learning, novel spectral analysis methods have emerged, offering potential improvements in accuracy and efficiency. Early studies have shown promising results using neural networks and other artificial intelligence (AI) algorithms for qualitative and quantitative spectrum analysis.
This research addressed the limitations of traditional methods by employing machine learning algorithms—BP neural network, generalized regression neural network (GRNN), and support vector machine (SVM)—to enhance the quantitative interpretation of uranium in high-speed gamma-ray spectra, even under conditions of spectral drift.
Data Acquisition and Methodology
The researchers utilized experimental gamma-ray spectra from model wells at the Shijiazhuang Aerial Survey Remote Sensing Center, focusing on uranium logging in sandstone models. Data collection covered various well diameters and detector placements. Preprocessing included simulating spectral drift with normally distributed channel shifts and data interpolation, implemented with MATLAB's “normrnd” and “interp1” functions. By adjusting the mean and variance of the shift distribution, the simulation kept the drift realistic, primarily within 10 channels.
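As a minimal sketch of this preprocessing step, the MATLAB fragment below shifts a synthetic spectrum by a normally distributed number of channels and resamples it with “interp1”; the spectrum shape, mean, and standard deviation are illustrative assumptions, not the authors' reported settings.

```matlab
% Sketch of drift simulation on a synthetic spectrum (illustrative values).
nChannels = 1024;
channels  = 1:nChannels;

% Hypothetical measured spectrum: one photopeak on a flat background.
spectrum = 500*exp(-((channels - 300).^2) / (2*15^2)) + 20;

% Draw a random channel shift from a normal distribution; mu and sigma are
% assumed here, chosen so most shifts stay within about 10 channels.
shift = normrnd(0, 3);

% Resample the spectrum onto the shifted channel axis; counts falling
% outside the recorded range are set to zero.
drifted = interp1(channels, spectrum, channels - shift, 'linear', 0);
```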
A total of 5000 measured spectra were expanded to 8500 through preprocessing and split into training and testing sets in an 80:20 ratio (6800 and 1700 spectra, respectively). Machine learning models were then constructed in MATLAB to interpret spectra with low counting statistics, employing the BP neural network, GRNN, and SVM. Performance indicators included the mean square error (MSE), mean absolute error (MAE), and correlation coefficient (R), reflecting the accuracy, stability, and correlation of the uranium quantitative interpretation.
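A rough MATLAB sketch of the 80:20 split and the three indicators follows; the grade vectors here are random placeholders, not study data.

```matlab
% Sketch of the 80:20 split and the three performance indicators.
nSpectra = 8500;
idx      = randperm(nSpectra);               % random shuffle
nTrain   = round(0.8 * nSpectra);            % 6800 training spectra
trainIdx = idx(1:nTrain);
testIdx  = idx(nTrain+1:end);                % 1700 testing spectra

% Hypothetical predicted and reference uranium grades for the test set.
yTrue = rand(1, numel(testIdx));
yPred = yTrue + 0.02*randn(1, numel(testIdx));

mse = mean((yPred - yTrue).^2);              % mean square error
mae = mean(abs(yPred - yTrue));              % mean absolute error
C   = corrcoef(yPred, yTrue);
r   = C(1, 2);                               % correlation coefficient
```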
Model optimization involved adjusting hyperparameters: the BP network's hidden layers, neurons, activation function, training algorithm, and learning rate; the GRNN's smoothing factor and kernel function; and the SVM's kernel function coefficient and penalty coefficient. Model performance was analyzed across these parameter settings to determine the optimal configuration, with results shown in supplementary figures and tables.
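To make the BP configuration concrete, the following MATLAB sketch sets up a feedforward network with the Levenberg-Marquardt training function reported for the BP model; the hidden-layer size, activation function, stopping criteria, and data are illustrative assumptions rather than the paper's optimized values.

```matlab
% Sketch of a BP network in MATLAB's Deep Learning Toolbox; only the
% Levenberg-Marquardt training function is taken from the study itself.
X = rand(1024, 200);                         % placeholder spectra (columns)
T = rand(1, 200);                            % placeholder uranium grades

net = feedforwardnet(20, 'trainlm');         % hidden size 20 is an assumption
net.layers{1}.transferFcn = 'tansig';        % hidden activation (assumed)
net.trainParam.epochs = 1000;                % max training iterations (assumed)
net.trainParam.goal   = 1e-5;                % target MSE (assumed)
net.divideParam.trainRatio = 0.8;            % mirror the paper's 80:20 split
net.divideParam.valRatio   = 0;
net.divideParam.testRatio  = 0.2;

[net, tr] = train(net, X, T);                % fit grades to spectra
yPred = net(X);                              % network predictions
```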
The models were then applied to high-speed gamma-ray spectra, investigating the effects of different logging speeds, well models, and degrees of spectral drift on uranium interpretation accuracy. This machine learning approach aimed to enhance traditional spectral analysis, providing robust quantitative interpretation despite high logging speeds and spectral drift.
Comparative Analysis of Machine Learning Models for Gamma-Ray Spectral Interpretation
The BP neural network demonstrated superior accuracy, with the lowest errors in both the training and testing phases, an advantage attributed to its Levenberg-Marquardt training algorithm. Relative errors in interpreting medium- to high-grade uranium spectra were generally below 20%, with the BP neural network producing the most accurate results. The coefficient of determination (R²) values for the BP, GRNN, and SVM models were 0.99026, 0.95808, and 0.94020, respectively, indicating high accuracy and small error ranges.
The authors also explored the impact of logging speed on model performance. As logging speed decreased, the interpretation errors of all models fell, with the BP neural network maintaining the best performance. The accuracy of the BP model in interpreting medium- to high-grade uranium at logging speeds of 6 meters per minute (m/min) and 0.6 m/min was 93.487% and 98.489%, respectively.
All three models showed small quantitative errors even under varying degrees of spectral drift, demonstrating the robustness of machine learning models in analyzing gamma-ray spectra across different logging speeds and drift conditions. The researchers concluded that machine learning models, particularly the BP neural network, were effective for the quantitative interpretation of gamma-ray spectra, achieving high accuracy and stability across conditions.
Conclusion
In conclusion, the study demonstrated the efficacy of machine learning algorithms, particularly the BP neural network, in enhancing the quantitative interpretation of uranium spectral gamma-ray logging. By addressing challenges such as spectral drift and low counting statistics, the BP model achieved the highest accuracy, with an R² value of 0.99026. The researchers confirmed that machine learning methods can effectively analyze gamma-ray spectra at high logging speeds, maintaining accuracy across varying conditions.
Journal reference:
- Zhang, Y., Ye, Y., Qiu, J., Fu, C., Huang, H., Wang, R., & Tang, B. (2024). Study on quantitative interpretation of uranium spectral gamma-ray logging based on machine learning algorithm. Nuclear Engineering and Technology. ISSN 1738-5733. DOI: 10.1016/j.net.2024.07.004. https://www.sciencedirect.com/science/article/pii/S173857332400319X