Core sampling is an expensive yet essential part of the exploration and production of oil and gas reservoirs. Core samples are the only direct evidence available for reducing uncertainty in reservoir characterization. Because of the expense, cores are not collected from every well, unlike well logs, which are collected en masse. Machine learning allows upstream teams to estimate core properties from measured well logs at significant cost savings. This allows scientists and engineers to overcome problems of scale and harness the power of detailed fluid analysis and granular geomechanics during the drilling and completion of future wells.
Measurements from core analysis include characterization of lithology, fluid, stress, and formation damage. Each measurement is associated with a specific depth and can therefore be matched to any set of logs for a given well. The relationship between existing logs (e.g., gamma ray, neutron, sonic, density, caliper, PEF) and each of these core measurements at a given depth can be determined using a variety of machine learning algorithms, such as random forests or artificial neural networks (ANNs). Once a machine learning model is trained, it can be used to estimate core properties at any location where there is a log but no core sample.
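The workflow above can be sketched in a few lines. This is a minimal illustration, not OAG's implementation: the log names, the synthetic porosity relationship, and all numeric values are hypothetical, chosen only to show logs as model inputs and a core-measured property as the target.

```python
# Sketch: train a random forest that maps well-log readings at a depth
# to a core-measured property (here, porosity). All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for a cored well: each row is a depth with log
# readings (gamma ray, neutron, density) and a lab-measured porosity.
n = 500
gamma = rng.uniform(20, 150, n)       # gamma ray, API units
neutron = rng.uniform(0.05, 0.40, n)  # neutron porosity, v/v
density = rng.uniform(2.0, 2.7, n)    # bulk density, g/cc
# Hypothetical relationship plus measurement noise.
core_porosity = 0.6 * neutron + 0.1 * (2.71 - density) + rng.normal(0, 0.01, n)

X = np.column_stack([gamma, neutron, density])
X_train, X_test, y_train, y_test = train_test_split(
    X, core_porosity, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score on held-out depths, then predict where only logs exist.
r2 = model.score(X_test, y_test)
predicted = model.predict(X_test)
```

In practice the same pattern applies to any core measurement with enough depth-matched log data: swap the target column and retrain.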
The degree of success in predicting core properties depends on a number of factors, including the quality of the data, the volume of data, and whether physical relationships exist between the log properties and core properties. The physics-based approach to predicting core properties, while less dependent on data volume, is restricted in its ability to overcome issues of scale. Scale is an inherent problem during modeling because the representative volume of reservoir rock and fluid that a core measurement observes (inches to a foot) is much smaller than what a log measurement observes (several feet). The machine learning approach to predicting core properties allows us to build “empirical” scaling rules quickly to work around this.
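One simple way to reconcile the two support volumes is to upscale the fine core measurements to the log's sampling volume before training. The sketch below is an assumed approach with hypothetical depths, spacings, and window size; it is not a description of any specific operator's rule.

```python
# Sketch: upscale inch-scale core plug measurements to the coarser
# support volume of a log by averaging all plugs that fall within
# each log sample's depth window. All values are hypothetical.
import numpy as np
import pandas as pd

# Core plugs on a fine depth grid (a plug every 3 inches).
core = pd.DataFrame({
    "depth_ft": np.arange(10000.0, 10010.0, 0.25),
    "porosity": np.linspace(0.10, 0.20, 40),
})

# Log samples every 0.5 ft; assume each reading averages ~2 ft of rock.
log_depths = np.arange(10000.0, 10010.0, 0.5)
window_ft = 2.0

# For each log depth, average the core plugs inside its window --
# a simple "empirical" scaling rule matching core to log support.
upscaled = []
for d in log_depths:
    mask = core["depth_ft"].between(d - window_ft / 2, d + window_ft / 2)
    upscaled.append(core.loc[mask, "porosity"].mean())
upscaled = np.array(upscaled)
```

A data-driven model trained on such upscaled pairs learns the log-to-core relationship at a consistent scale instead of mixing support volumes.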
Petrophysicists and geologists are using OAG’s AI and machine learning platform to create core property estimation models. Careful quality control of the data is an essential step for this data-driven analysis and can include tasks such as re-aggregation of data and calibration or normalization of the log data. This data workflow is automated, allowing for reliable and consistent data processing as the operator adds additional data. The OAG platform then allows subject matter experts to quickly build and assess machine learning models without having to learn Python or R. These core property estimation models are used to make more informed decisions regarding the placement of stages and perforations during completions. Ultimately, this process promotes collaboration and increases returns by using granular geologic detail to maximize contact with quality reservoir.
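As one concrete example of the normalization step mentioned above, logs from different wells are often shifted onto a common field reference before being pooled for training. The percentile-matching rule and all numbers below are assumptions for illustration, not the platform's actual method.

```python
# Sketch: normalize one well's gamma-ray log to a field reference by
# matching its 5th/95th percentiles -- a common QC step before pooling
# training data from many wells. Values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
well_gr = rng.uniform(30, 180, 300)   # raw gamma ray for one well, API units
ref_p5, ref_p95 = 25.0, 150.0         # field reference percentiles

# Affine rescale so this well's percentiles line up with the reference.
p5, p95 = np.percentile(well_gr, [5, 95])
normalized = ref_p5 + (well_gr - p5) * (ref_p95 - ref_p5) / (p95 - p5)
```

Automating steps like this is what keeps the processing consistent as new wells are added.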
Interested in doing more with your core data? Contact OAG
This article is also available on LinkedIn.