RESERVOIR MODELING BASED ON DEEP LEARNING
Patent abstract:
Embodiments of the present technology for deep learning based reservoir modeling receive input data including information associated with one or more well logs in a region of interest. The present technology determines, based at least in part on the input data, an input feature for a first deep neural network (DNN) for predicting a value of a property at a location within the region of interest. In addition, the present technology trains the first DNN using the input data and based at least in part on the input feature. The present technology predicts, using the first DNN, the property value at the location in the region of interest. The present technology uses a second DNN that classifies the facies based on the property predicted in the region of interest.

Publication number: FR3069330A1
Application number: FR1854604
Filing date: 2018-05-30
Publication date: 2019-01-25
Inventors: Yogendra Narayan Pandey; Keshava Prasad Rangarajan; Jeffrey Marc Yarus; Naresh Chaudhary; Nagaraj Srinivasan; James Etienne
Applicant: Landmark Graphics Corp
IPC main class:
Patent description:
RESERVOIR MODELING BASED ON DEEP LEARNING

TECHNICAL FIELD

This description generally relates to reservoir modeling, including three-dimensional ("3D") reservoir modeling based on deep learning.

BACKGROUND

Geological models can be used to represent underground volumes of the earth. In some geological modeling systems, an underground volume can be divided into a grid made up of cells or blocks, and geological properties can be defined or predicted for the cells or blocks.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 illustrates a block diagram of an example of a deep learning method for developing full-scale 3D static reservoir models, together with a training process and a prediction process, in accordance with certain embodiments.

FIG. 2A illustrates a flow diagram of an exemplary method for predicting a petrophysical property and classifying facies according to certain embodiments.

FIG. 2B illustrates a flow diagram of an exemplary method for displaying a 3D reservoir model using data from well logs in accordance with certain embodiments.

FIG. 3 illustrates schematic representations of exemplary spherical variogram models for a vertical variogram and a horizontal variogram in accordance with certain embodiments.

FIG. 4 is a diagram illustrating the subdivision of a region of interest into layers using the range of a vertical variogram in accordance with certain embodiments.

2017-IPM-100997-U1-FR

FIG. 5 illustrates a diagram of an example architecture of a deep neural network (DNN) used for predicting a petrophysical property in accordance with certain embodiments. In one or more implementations, the DNN can be a deep feedforward network regressor (also often called a feedforward neural network or a multilayer perceptron).

FIG. 6 illustrates a diagram of an example architecture of a DNN used for facies classification in accordance with certain embodiments.
In one or more implementations, the DNN can be a deep feedforward network classifier (also often called a feedforward neural network or a multilayer perceptron).

FIG. 7 illustrates a perspective view of an example point-cloud representation of a model result for a petrophysical property (e.g., porosity) according to certain embodiments.

FIG. 8 illustrates a perspective view of an example point-cloud representation of a model result for facies classification according to certain embodiments.

FIG. 9 illustrates a diagram of a set of general components of an exemplary computing device according to certain embodiments.

FIG. 10 illustrates a diagram of an example environment for implementing aspects according to certain embodiments.

In one or more implementations, not all of the components illustrated in each figure are required, and one or more implementations may include additional components that are not shown in a figure. Variations in the arrangement and type of the components can be made without departing from the spirit or scope of this disclosure. Additional components, different components, or fewer components may be used within the scope of this disclosure.

DETAILED DESCRIPTION

The detailed description presented below is intended as a description of various implementations and is not intended to represent the only implementations in which the described technology can be practiced. As will be understood by those skilled in the art, the described implementations can be modified in a number of different ways without departing from the scope of this disclosure. Consequently, the figures and the description should be considered illustrative in nature and not restrictive.

Geological volume modeling is used in various industries and technological fields.
One objective of such modeling is to organize existing information on a geological volume and to predict the nature and distribution of descriptive attributes and/or quantitative values within the geological volume, thus facilitating studies of and actions on these volumes. Modeling can be done in different ways, for example by generating maps or volume sections directly from the information. A map can refer to a two-dimensional projection onto a horizontal planar surface of a representation of characteristics of a volume. A section can likewise refer to a graphic representation of the volume projected onto a vertical plane cutting the volume. The modeling of a geological volume can be based on the assembly of known or conceptual data, extrapolated data, and data interpolated across the entire modeled volume. Once the model has been constructed, displays such as maps, cross sections, and statistical information can be derived from it.

Modeling the Earth's crust, including map and section generation, involves complex geological and geophysical relationships and several types of data and observations. Geological volumes of rocks or sedimentary volumes are of particular interest, since gas and oil, mineral deposits, and groundwater are generally found in sedimentary deposits, which are generally porous deposits such as clastic, secreted, and/or precipitated deposits. Such deposits generally exist in layers (e.g., strata, beds) formed over geological time periods by various physical, chemical, and biological processes. The deposits could have been formed by rivers depositing sediments inside their channels or deltas, by sediments transported by the wind, by the action of waves and the sea, by the action of tides, by precipitation from a solution, by secretions from living organisms, or by other mechanisms.
Deposits may have been modified by exposure to weathering, erosion, diagenesis, burial, and/or structural movement. A current formation or layer of rocks or sedimentary deposits was originally deposited on a depositional surface (e.g., a time line) that was either essentially horizontal or at an angle or slope (e.g., a depositional slope) relative to a horizontal plane (e.g., sea level). The deposited layer could have undergone vast changes in position and configuration over time. Burial forces, compaction, distortion, lateral and vertical movement, weathering, and so on could have caused fracturing, faulting, bending, or shearing of the formation, or significantly modified it. A geological volume can be a complex arrangement of rock layers that can extend thousands of feet below the surface of the earth toward the Earth's mantle. A given geological volume can involve many layers of superimposed sediments, which were originally deposited on a horizontal or sloping depositional surface and which could have been laterally tilted, fractured, folded, pierced, overturned, faulted, altered, eroded, or otherwise modified in different ways.

Significant efforts are made when studying any given geological volume to obtain as much volume data as possible. Even if several wells are drilled and many geological surveys are carried out, it is common practice to interpolate and extrapolate important data throughout the volume. However, interpretation by manual interpolation and extrapolation of data is tedious and time-consuming, and may be subject to logic errors. It can also be difficult because the geological layers, strata, or beds may not lie one above the other in clean, coherent, horizontal, and laterally extended sequences. Formations vary in their lateral extent, position, and spatial attitude, and the interpolation and extrapolation must take these variations into account.
To address at least some of the above considerations, geological modeling can include techniques for three-dimensional "meshing" (e.g., "3D meshing"). Meshing, in one example, refers to the division of an underground region into cells, a tiling, or another form of mesh, in which petrophysical properties, parameters, or attributes (e.g., lithology, porosity, water saturation, permeability, density, oil/water ratio, geological information, paleodata, etc.) are assigned to each cell. Each geological volume of interest can thus be modeled by a set of cells. In a three-dimensional geological volume for the 3D mesh, each of the cells has a respective shape, which can be a cube, a regular-volume polygon, an irregular-volume polygon, an ellipsoid, an irregularly curved volume, a pebble-like shape, or any other three-dimensional shape. However, creating such models using 3D grids can be technically difficult and laborious. A model using a mesh can also have one or more geometric constraints, in that the model can fall short in representing the geological volume, since the arrangement of the cells can be a rough approximation of the volume. In particular, the present position of the layers of a geological sequence is rarely in a perfectly horizontal orientation. Even if sedimentary layers are normally formed on a depositional surface that is essentially horizontal, or on a sloping surface, this condition rarely persists after a significant period of geological time. Therefore, a pattern or stratigraphic style can vary within the sequences, can differ significantly between them, and may not be precisely modeled using the mesh approach for the geological volume. In contrast to the preceding discussion, the embodiments described here are meshless, and therefore provide several advantages, such as avoiding the computational and storage requirements involved in creating and storing a huge mesh for geological modeling.
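The contrast between a meshed model and a meshless representation can be sketched as follows; the grid dimensions, coordinate ranges, and the choice of porosity as the stored property are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

# A meshed model allocates a value for every cell of a regular 3D grid,
# even where nothing is known (here: 100 x 100 x 50 = 500,000 cells).
nx, ny, nz = 100, 100, 50
grid_porosity = np.full((nx, ny, nz), np.nan)

# A meshless model stores only the points of interest, each carrying its
# own coordinates and property value: (x, y, z, porosity).
rng = np.random.default_rng(0)
n_points = 10_000
point_cloud = np.column_stack([
    rng.uniform(0.0, 1000.0, n_points),   # x
    rng.uniform(0.0, 1000.0, n_points),   # y
    rng.uniform(0.0, 500.0, n_points),    # z
    rng.uniform(0.05, 0.35, n_points),    # porosity
])

print(grid_porosity.size, point_cloud.shape)
```

The point cloud here costs 40,000 stored values against the grid's 500,000 cells, regardless of how irregular the sampled locations are.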
The technology of the present invention uses deep neural network models of one or more point distributions within a geological volume, called a point cloud. The point-cloud representation can provide a very high resolution representation of a reservoir model, where the reservoir model can refer to a computer model of an oil reservoir that can be used, in some examples, to improve the estimation of reserves, make decisions regarding the development of the field, predict future production, plan the placement of additional wells, and evaluate alternative reservoir-management scenarios. Through the use of the deep learning algorithms applied in the embodiments described herein, it is possible to have a distributed-architecture implementation of such embodiments in order to obtain highly optimized performance. The technology of the present invention, as described in more detail here, formulates the simulation of rock facies as a classification using simulated petrophysical properties, thus establishing a direct relationship between the simulated petrophysical properties and the facies.

Artificial intelligence (AI) is a technical area with practical applications and active research themes, including those applied to real-world problems (e.g., image or object recognition, voice recognition, robotics, automated driving, finance, etc.). A source of difficulty in some real-world artificial intelligence applications may be the factors of variation that can influence every single piece of data that is observed. A computer device or machine using AI techniques may therefore have difficulty extracting high-level or abstract characteristics from the raw data. Deep learning algorithms can address this difficulty by breaking a complicated desired mapping into a series of simple mappings, each described by a different layer of a model.
The implementations of the present technology describe a static reservoir modeling method based on deep learning algorithms. More specifically, the implementations described here receive, as input, a point cloud with spatial properties, which is used to provide a prediction of the properties and to classify data points into buckets that represent respective facies. A point cloud as described here can be a set of data points in a coordinate system, such as the X, Y, and Z coordinates in a three-dimensional coordinate system, or a set of data points in any other coordinate system. A point cloud for a geological model can correspond to a geological volume. Point cloud data can refer to data organized so that one or more spatial coordinates (e.g., locations) of each point in a point cloud are included with other data related to that point. In the case of geological modeling, each point cloud may include point cloud data for one or more petrophysical properties or other attributes for a given geological volume, such as geomechanical and geochemical data. As described in the present embodiments, the point cloud data, derived from one or more well logs, can be used to develop 3D static reservoir models. In one example, such 3D static reservoir models can be used to provide a static description of the reservoir before production. The processes for developing a full-scale 3D static reservoir model are illustrated in a flowchart in Figure 1, described in more detail below. The embodiments described in more detail here use a deep neural network trained to predict a petrophysical property (e.g., porosity, lithology, water saturation, permeability, density, oil/water ratio, geological information, paleodata, etc.) at the point level in a point cloud, and reuse this predicted petrophysical property as input to a second deep neural network trained to predict facies at the point level.
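As a rough sketch of this two-stage arrangement (not the patent's actual architecture or data), two small networks can be chained on synthetic data: the first regresses a property from point coordinates, and the second classifies facies from the predicted property. All sizes, the synthetic relationships, and the use of scikit-learn are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

# Synthetic stand-in data: porosity increases with depth coordinate z,
# and two synthetic facies are defined by a porosity threshold.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 1, size=(500, 3))                  # point coordinates
porosity = 0.1 + 0.2 * xyz[:, 2] + rng.normal(0, 0.01, 500)
facies = (porosity > 0.2).astype(int)

# First network: coordinates -> petrophysical property.
prop_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(xyz, porosity)
# Second network: (predicted) property -> facies class.
facies_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(porosity.reshape(-1, 1), facies)

# Prediction at new, unsampled locations (the "random point cloud").
new_xyz = rng.uniform(0, 1, size=(10, 3))
pred_prop = prop_net.predict(new_xyz)
pred_facies = facies_net.predict(pred_prop.reshape(-1, 1))
print(pred_prop.shape, sorted(set(pred_facies.tolist())))
```

The key point of the chaining is that the classifier never sees coordinates directly; it consumes the first network's output, mirroring the direct relationship between simulated property and facies described above.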
The following description covers example steps ranging from preprocessing input data from well logs, to training the aforementioned deep neural networks, to using those networks to predict petrophysical properties and facies, respectively. Even though, for the purpose of explanation, the prediction of a petrophysical property is described here, it is understood that the present technology can predict any appropriate volume-mappable property (e.g., geochemical properties, geomechanical properties, etc.).

FIG. 1 illustrates a block diagram of an example of a deep learning method 100 for developing full-scale 3D static reservoir models, with a training process 125 and a prediction process 150. Not all of the illustrated processes would be necessary, however, and one or more implementations may include additional components that are not shown in the figure. Variations in the layout and type of the processes can be made without departing from the spirit or scope of the claims described herein.

Oil well "logging" can describe the collection of information regarding the properties of earth formations traversed by a wellbore for drilling and oil production operations. For example, in a wireline logging method, a probe is lowered into the borehole after some or all of the well has been drilled, and is used to determine certain properties of the formations traversed by the borehole. In one example, various properties of earth formations are measured and correlated with the position of the probe in the borehole as the probe is raised toward the top of the hole. These properties can be stored in one or more well logs.

In one embodiment as illustrated, the deep learning method 100 uses well logs as input 103. In one example, the well logs provide one or more petrophysical properties, facies, and other related attributes along the well trajectory.
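A well log of this kind might be held in memory as depth-indexed records; the field names, units, and values below are illustrative assumptions rather than a real log format (such as LAS):

```python
import numpy as np

# Each record pairs a measured depth with property readings at that
# station, mirroring the correlation of measurements with probe position.
well_log = np.array(
    [(1500.0, 0.18, 2.45),
     (1500.5, 0.17, 2.47),
     (1501.0, 0.21, 2.41)],
    dtype=[("depth_m", "f8"), ("porosity", "f8"), ("density_gcc", "f8")],
)

print(well_log["depth_m"].tolist(), float(well_log["porosity"].mean()))
```

A structured array like this keeps each measurement station's values together while still allowing column-wise access per property, which is convenient for the preprocessing steps that follow.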
The properties available in well logs are used to train a deep neural network (DNN) 110, such as a deep feedforward network, to predict petrophysical properties at a random location in a region of interest far from the location of the wells. In one embodiment, the DNN 110 used for predicting facies is a deep neural network classifier that uses the properties available in the well logs to train a classifier to predict facies based on available petrophysical property values, such as porosity, permeability, etc., at a given location in the region of interest. In the present context, a facies can describe a rock body with specified characteristics. Further details on the development of the 3D static reservoir model using deep learning algorithms are described in Figure 2 below.

Initially, the input data 103 is read from one or more well logs. A well log may include data corresponding to at least one petrophysical property (e.g., porosity, lithology, water saturation, permeability, density, oil/water ratio, geochemical information, paleodata, etc.) along the trajectory of a given well used for drilling oil wells. In another example, the input data 103 may also include other input features and/or advanced conceptual data (e.g., provided by a geoscientist based on prior knowledge and experience), as described in more detail below.

Data from the well logs are then preprocessed using one or more techniques. In one example, a mapping of XYZ (e.g., Cartesian) coordinates to UVW coordinates 105 is performed to eliminate discontinuities in the data in the horizontal directions due to faults, or to large valleys and folds present in the current structural space. Further details of UVW coordinate mapping and other preprocessing techniques are presented in the description of Figure 2 below.

During a training process 125, a DNN 107 (for example, a deep feedforward network) is trained for modeling a petrophysical property, and DNN 110 is trained for modeling facies.
Further details of training DNN 107 for modeling the petrophysical property and DNN 110 for modeling facies are presented in Figure 2 below. Following training and testing of the DNNs for petrophysical property prediction and facies classification, during a prediction process 150, a predefined number of random points is generated in the 3D region of interest containing the well logs (hereinafter called a "random point cloud"). The trained DNN 107 for predicting the petrophysical property is used to predict a petrophysical property at each point in the random point cloud. Once the predicted petrophysical properties are available, DNN 110 is used to predict the facies at each point in the random point cloud based on the predicted petrophysical property at each point. Further discussion of the prediction of the petrophysical property, and of the prediction of facies using the predicted petrophysical property, is presented in Figure 2 below.

The following discussion describes, in more detail, example flowcharts for a method performing petrophysical property prediction and facies classification, and for the display of a 3D reservoir model. The embodiments described in more detail here use a deep neural network trained to predict a petrophysical property (e.g., porosity, lithology, water saturation, permeability, density, oil/water ratio, geological information, paleodata, etc.) at the point level in a point cloud, and the use of this predicted petrophysical property as input to a second deep neural network trained to predict facies at the point level. In particular, the following description covers steps ranging from preprocessing input data from well logs, to training the aforementioned deep neural networks, to using them to predict the petrophysical property and the facies, respectively, at the point level in a point cloud. FIG.
2A conceptually illustrates a flow diagram of an exemplary method 200 for the prediction of the petrophysical property and the classification of facies. Although this figure, as well as other process illustrations contained in this disclosure, may describe the functional steps in a given sequence, the processes are not necessarily limited to this given order or to the illustrated steps. The various steps illustrated in this figure and in other figures can be modified, rearranged, performed in parallel, or adapted in various ways. In addition, it should be understood that certain steps or sequences of steps can be added to or omitted from the process without departing from the scope of the various embodiments. The method 200 can be implemented by one or more computer devices or systems in certain embodiments.

At block 202, input data is received. In one example, the input data includes information associated with one or more well logs in a region of interest, and the region of interest corresponds to a geological volume. The information included in the input data may include Cartesian coordinates (e.g., XYZ coordinates) corresponding to locations in the region of interest. In one example, the well logs provide one or more petrophysical properties, facies, and other related attributes along the trajectory of the wells. These properties available in the well logs are used to train a deep neural network to predict petrophysical properties at a random location in a region of interest remote from the location of the wells. In another example, the input data 103 may also include other input features and/or advanced conceptual data (e.g., provided by a geoscientist based on prior knowledge and experience), as described in more detail below. The input data from the well logs are then preprocessed using one or more of the following techniques.
At block 204, the Cartesian coordinates of each point are mapped to UVW coordinates, and/or other preprocessing is performed on the received input data. As discussed here, UVW mapping is performed to provide a different representation (e.g., a "shoebox" or "flat" space) of the original stratigraphic system. A mapping of XYZ (e.g., Cartesian) coordinates to UVW coordinates is performed to eliminate discontinuities in the data in the horizontal directions due to faults, or to large depressions and folds present in the current structural space. A substitute representation of the original stratigraphic system, called a "shoebox" or "flat" space and described by the UVW transformation coordinates, is generated. It can be approximated using a flattening algorithm when faults and folds are present. If the input data have stratigraphic layers that are almost horizontally aligned, the generation of the substitute representation in UVW coordinates may not be required, and the original Cartesian coordinates can be used for the development of the model. In the case where the UVW conversion is initially carried out, the calculations maintain a UVW representation through the training and prediction processes, and a point cloud generated with one or more predicted properties can be mapped back from UVW coordinates to XYZ coordinates.

The facies are treated as classes, and the prediction of the facies is formulated as a classification problem in at least some of the embodiments described here. In some cases, the number of data points belonging to one or more facies may be significantly higher or lower than for the other facies, causing a population imbalance (e.g., the amount of limestone is higher than the amount of sandstone based on data from a well log). To train a DNN classifier to classify facies, a synthetic minority oversampling technique (SMOTE) can be used to balance the distribution of the training sample population across the different facies.
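The interpolation at the heart of SMOTE-style oversampling can be sketched in a few lines of numpy; this is an assumption of how the balancing step might look, with each synthetic minority sample interpolated between a minority point and one of its nearest minority neighbors (production code would more likely use a library such as imbalanced-learn):

```python
import numpy as np

def smote_like(X_minority, n_synthetic, k=3, rng=None):
    """Generate n_synthetic samples by interpolating between each chosen
    minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_minority))
        # Distances from sample i to every minority sample (self included).
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]       # k nearest, excluding self
        j = rng.choice(neighbours)
        lam = rng.uniform()                       # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)

# Illustrative minority class: 20 samples with 4 features each.
minority = np.random.default_rng(1).normal(size=(20, 4))
new_samples = smote_like(minority, n_synthetic=30, rng=1)
print(new_samples.shape)
```

Because each synthetic point lies on a segment between two real minority samples, the method densifies the minority region of feature space rather than merely duplicating observations.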
In a case where one or more facies dominate the training data and other facies rarely occur, the population imbalance can negatively affect the training of the DNN classifier. Therefore, as part of the preprocessing at block 204, it is important to ensure a population balance across the facies classes using the SMOTE technique before training the DNN facies classifier. Even though the SMOTE technique is mentioned, it is understood that any suitable oversampling or undersampling technique can be used while remaining within the scope of the present technology. For example, an adaptive synthetic sampling technique (e.g., an ADASYN algorithm) can be used for oversampling, or a random undersampling technique can be used to balance the distribution of classes by randomly removing samples from the majority class. In one or more implementations, during the preprocessing at block 204, normalization of the input and output data can be performed so that the values of the different input and output variables lie within an acceptable range, to provide numerical stability.

At block 206, a vertical variogram and a horizontal variogram of a petrophysical property in each stratigraphic interval of the region of interest are determined. Examples of variograms are presented in more detail with respect to Figure 3 below. Note that instead of variograms, other spatial models, including multiple-point models, explicit vectors, and spatial models with various azimuths, can be applied.

At block 208, using at least the vertical and horizontal variograms, an input feature is determined to provide to a first deep neural network (DNN), such as a deep feedforward network, for predicting a petrophysical property (e.g., porosity, lithology, water saturation, permeability, density, oil/water ratio, geochemical information, paleodata, etc.).
In one example, in order to determine the input feature, the region of interest can be divided into layers using the range of a given vertical variogram. Further details of this approach are described in Figure 4 below. Other types of (advanced) input features are also described in more detail below.

At block 210, the received input data is divided into a training data set, a validation data set, and a test data set. The training data set, the validation data set, and the test data set can be mutually exclusive subsets of the received input data. In one example, the input and output data set is randomized and divided into training, validation, and test data sets. Data corresponding to a predefined number of wells is kept separate to validate and test the performance (e.g., accuracy) of the trained DNNs. Even though three different data sets for training, testing, and validation are presented above, in at least one embodiment, for a given fixed set of input variables, different (and mutually exclusive) sampled data sets can be taken using a k-fold cross-validation algorithm. In the k-fold cross-validation algorithm, the original sample is randomly partitioned into k subsamples of equal size. Among the k subsamples, a single subsample is used as validation data to test the model, and the remaining k - 1 subsamples are used as training data. The cross-validation process is then repeated k times (the "folds"), with each of the subsamples used exactly once as validation data. The k results from the folds can then be averaged to produce a single estimate. An advantage of the k-fold cross-validation algorithm over repeated random subsampling may be that all observations are used for both training and validation, and each observation is used for validation exactly once. In some examples, 10-fold cross-validation can be used, but in general k can be an unfixed parameter.
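The k-fold split described above can be sketched as follows; the sample count and k = 10 are illustrative assumptions:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train, validation) index arrays for each of the k folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]                                        # fold i validates
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

n, k = 100, 10
for train_idx, val_idx in k_fold_indices(n, k):
    # Every observation appears in exactly one validation fold.
    assert len(val_idx) == n // k
    assert len(train_idx) + len(val_idx) == n
print("each of the", k, "folds validates on", n // k, "samples")
```

Because the permutation is drawn once and then partitioned, the folds are mutually exclusive by construction, matching the property stated above that each observation is used for validation exactly once.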
In an example of 10-fold cross-validation, the input data can be divided into 10 different and mutually exclusive data sets. One of the 10 divided data sets can be selected to be the validation data set, and the remaining nine (9) data sets are used for training.

At block 212, using the input feature, the first DNN is trained to predict a value of the petrophysical property at one or more arbitrary locations in the region of interest. Training the first DNN uses the training, validation, and test data sets (as described in more detail here) to minimize the mean square error between the predicted property values and the observed property values. In one example, a DNN with a predefined architecture is trained so that the prediction error on the validation data set (e.g., the wells selected for validation) is minimized. For training the DNN for modeling the petrophysical property, the root mean square error (RMSE) between the predicted property values and the observed property values is minimized. Even though the RMSE is mentioned, it is understood that any suitable technique for measuring the error can be used, for example, the mean absolute error (MAE), the mean absolute percentage error (MAPE), the mean absolute scaled error (MASE), the mean error (ME), the mean percentage error (MPE), etc. Domain-specific attributes and/or domain-specific parameters can also be used. In yet another example, a cross-correlation coefficient can be used in combination with the RMSE.

At block 214, after training the first DNN, a second DNN, such as a deep feedforward network, is trained to classify a type of facies at the arbitrary location(s) in the region of interest. Properties that are available in the well logs can be used to train the second DNN. In one example, during training, the cross-entropy between the predicted facies and the observed facies is minimized.
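Several of the error measures mentioned above, plus the cross-entropy minimized for the facies classifier, can be computed directly in numpy; the data values are illustrative assumptions:

```python
import numpy as np

# Illustrative observed and predicted porosity values at four points.
y_true = np.array([0.10, 0.20, 0.30, 0.25])
y_pred = np.array([0.12, 0.18, 0.33, 0.24])

rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))           # root mean square error
mae = np.mean(np.abs(y_pred - y_true))                    # mean absolute error
mape = np.mean(np.abs((y_pred - y_true) / y_true)) * 100  # mean absolute % error
corr = np.corrcoef(y_true, y_pred)[0, 1]                  # cross-correlation coeff.

# Cross-entropy for the facies classifier (block 214): mean negative log
# of the predicted probability assigned to each sample's observed class.
p_observed = np.array([0.9, 0.7, 0.95])
xent = -np.mean(np.log(p_observed))

print(round(rmse, 4), round(mae, 4), round(mape, 2), round(xent, 4))
```

Minimizing RMSE penalizes large individual errors more heavily than MAE does, while the cross-entropy term drives the classifier to assign high probability to the observed facies.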
The first DNN and the second DNN are not authorized to access the test data (e.g., the test data remains unseen during the training steps in blocks 212 and 214). Once training is completed with reasonable minimization of the validation error, the performance of the trained DNNs can be measured on data from the wells set aside for testing (e.g., the test data set mentioned above).

At block 216, a random point cloud is generated in the region of interest. The random point cloud includes multiple randomly determined points corresponding to different locations in the region of interest.

At block 218, for each point in the random point cloud, a value of the petrophysical property at the point level is predicted using the trained first DNN. Thus, respective (predicted) values of the petrophysical property at the points corresponding to the different locations in the region of interest are provided. Even though the prediction of a petrophysical property is described here, it is understood that the present technology can predict any appropriate volume-mappable property (e.g., geochemical properties, geomechanical properties, etc.) for each point in the random point cloud.

At block 220, the facies classification is carried out on the points in the random point cloud using the trained second DNN. Respective values of the facies classification at the points corresponding to the different locations in the region of interest are provided.

At block 222, a second point cloud, representing a 3D reservoir model of the region of interest, is generated using at least the respective values of the petrophysical property and/or the facies classification. More specifically, the second point cloud comprises and/or represents information corresponding to the respective values of the petrophysical property and/or of the facies classification for each point in the second point cloud. The second point cloud can then be used to display a 3D reservoir model of the region of interest.
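The sequence of blocks 216-222 can be sketched as follows: a random point cloud is drawn inside the region of interest, then an output point cloud is assembled with one predicted property value and one facies label per point. The bounding box, the depth-dependent porosity rule standing in for the first trained DNN, and the threshold rule standing in for the second are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
bounds_min = np.array([0.0, 0.0, 1500.0])       # x, y, depth (assumed metres)
bounds_max = np.array([2000.0, 2000.0, 1800.0])
n_points = 5000

# Block 216: random point cloud inside the region of interest.
cloud_xyz = rng.uniform(bounds_min, bounds_max, size=(n_points, 3))

# Block 218: property per point (stand-in for the trained first DNN).
porosity = 0.30 - 0.0004 * (cloud_xyz[:, 2] - 1500.0)

# Block 220: facies per point (stand-in for the trained second DNN).
facies = np.where(porosity > 0.25, "sand", "shale")

# Block 222: second point cloud bundling coordinates, property, and facies.
model_cloud = {"xyz": cloud_xyz, "porosity": porosity, "facies": facies}
print(model_cloud["xyz"].shape, model_cloud["porosity"].shape)
```

Note that the output cloud carries everything needed for display: each point's location, its predicted property value, and its facies class.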
In another example, two separate point clouds can be provided. A first point cloud can be generated using the petrophysical property values from the random point cloud, and a second point cloud can be generated using the facies classification values from the random point cloud.

FIG. 2B illustrates a flow diagram of an exemplary method 250 for displaying a 3D reservoir model using data coming from the well logs. The method 250 can be implemented by one or more computer devices or systems in certain embodiments. More specifically, method 250 represents operations that can be performed by a client-side computer device to receive and display a 3D reservoir model generated by the method 200 described with reference to FIG. 2A above. At block 252, input data including information corresponding to one or more well logs in a region of interest is sent (e.g., to a server executing instructions performing the method 200). At block 254, a point cloud is received comprising information corresponding to at least one petrophysical property and/or a facies classification. At block 256, based on at least the information included in the point cloud, a 3D reservoir model can be provided for display. Examples of such models are presented in more detail below with respect to Figures 7 and 8.

The following discussion concerns variograms, which were introduced above in connection with Figure 2A and are used to determine, in part, an input feature for the DNN for the prediction of the petrophysical property (as presented in more detail below in Figures 3 and 4). In an example of a geostatistical approach, a semivariogram (also called a "variogram") of a property Z can be defined in the form of the following equation (1):

γ(h) = 1/(2n(h)) Σ [Z(u + h) - Z(u)]²,    (1)

where n(h) represents the number of pairs of points that are separated by the distance h (also called the lag), and the sum runs over those pairs.
In light of equation (1), considering the variation of a property Z along a single well, the vertical variogram γV(h) can be calculated in the form of the following equation (2):

γV(h) = (1 / (2n(h))) Σ [Z(z + h) − Z(z)]²,          (2)

in which z is the depth measured along the trajectory of a vertical (or horizontal) well. The final γV(h) is obtained by accumulating the histograms from the individual wells and calculating the actual variogram (e.g., for each geological stratigraphic interval separately). In one example, the experimental variograms calculated using equation (1) are adjusted using an analytical expression such as a spherical variogram model. Other variogram models could also be used, including nested models (the integration of multiple models). FIG. 3 illustrates schematic representations of examples of spherical variogram models for a vertical variogram 300 and a horizontal variogram 350 in accordance with certain embodiments (previously mentioned in block 206 in FIG. 2A). In Figure 3, rv represents the extent of the vertical variogram. This value indicates the vertical separation from a given point over which the data are correlated, i.e., the extent of the variogram. Beyond the extent of correlation, the data are not correlated. Using the separation rv, the vertical depth extent of the region of interest is subdivided into vertically stacked layers (as shown in Figure 4, described below). The points in each of these layers can be considered for the calculation of the horizontal variogram γH(h) using the following equation (3):

γH(h) = (1 / (2n(h))) Σ [Z(r + h) − Z(r)]²,          (3)

in which r represents the 2D representation of the given points in the horizontal plane. The final γH(h) is obtained by accumulating the histograms from the horizontal layers and calculating the actual variogram.
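As one illustration of the analytical adjustment step, the standard spherical variogram model rises from zero to a sill c at the extent r and is flat beyond it: γ(h) = c·(1.5·h/r − 0.5·(h/r)³) for h < r, and γ(h) = c otherwise. A minimal implementation (the textbook form, not code from the present disclosure):

```python
def spherical_variogram(h, sill, extent):
    """Spherical model: rises to the sill at the extent, flat beyond it."""
    if h >= extent:
        return sill
    x = h / extent
    return sill * (1.5 * x - 0.5 * x ** 3)
```

Fitting amounts to choosing the sill and extent that best match the experimental values from equations (2) and (3); the fitted extents rv and rH are the quantities reused below.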
Similar to the vertical variogram 300, Figure 3 illustrates a schematic representation of an example of a horizontal variogram 350, which results in an estimate of rH, representing the Euclidean distance in a horizontal plane beyond which the correlations in the property values vanish. FIG. 4 is a diagram 400 which illustrates an example of the subdivision of a region of interest 405 into layers using the extent of a vertical variogram. As previously mentioned in block 208 of Figure 2A, this subdivision of the region of interest can be used to determine, in part, an input characteristic for the DNN to predict a petrophysical property. As previously discussed, using the separation rv corresponding to the extent of the vertical variogram, the vertical depth extent of the region of interest 405 is subdivided into vertically stacked layers (as illustrated by the horizontal dotted lines in Figure 4). Vertical wells 410, 411, 412, 415, 416 and 417 are considered as neighboring wells in the region of interest 405. The black dots and the overlaid arrows in one or more of the layers 435, 437 and/or 440 illustrate the use of the respective property values at the neighboring wells for the prediction of the property at the point 450, distant from the wells. After calculating the vertical and horizontal variograms and dividing the region of interest into vertically stacked layers, the next step is to form an input characteristic for modeling the petrophysical property (e.g., for the first DNN presented above). The input characteristic for each sample point in the training data set (e.g., a subset of the observed or captured data set) is based on the neighboring points located at the nearest neighboring wells. In one example, the dimensions of the weight matrix for the first hidden layer depend on the number of characteristics (n) in the input. Therefore, n must be set before the start of training in such an example.
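The subdivision of the vertical depth extent into stacked layers of thickness rv (Figure 4) amounts to an integer binning of each point's depth. A minimal sketch, assuming a known minimum depth and a fitted vertical extent `r_v` (names are illustrative):

```python
def assign_layers(points, z_min, r_v):
    """Assign each 3D point (x, y, z) to a vertically stacked layer of
    thickness r_v, counted downward from the minimum depth z_min."""
    return [int((z - z_min) // r_v) for (_x, _y, z) in points]

# Three points in a region whose vertical extent is r_v = 10 units:
# depths 0 and 5 fall in layer 0, depth 12 falls in layer 1.
layers = assign_layers([(0.0, 0.0, 0.0), (0.0, 0.0, 5.0), (0.0, 0.0, 12.0)],
                       z_min=0.0, r_v=10.0)
```

The points sharing a layer index are then the ones paired up when the horizontal variogram of equation (3) is accumulated.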
For each point on the neighboring well, the following attributes can be considered to formulate the input characteristic:

prop = the value of the petrophysical property at the neighboring point
prop_h = a coarse estimate of the property based on the horizontal variogram: prop + √(2γH(d_xy))
prop_v = a coarse estimate of the property based on the vertical variogram: prop + √(2γV(d_z))

in which:

d_xy = the Euclidean distance to the neighboring point in the horizontal direction
γH(d_xy) = the horizontal semi-variance at the distance d_xy
d_z = the vertical separation from the neighboring point
γV(d_z) = the vertical semi-variance at the separation d_z

Similar characteristics can be obtained along each well at two or more points offset above and below by a depth ε·rv, in which ε is a small number such as ε = 0.075. In one example, the contribution to the input characteristic from a neighboring point at well i can be described by the following equation (4):

Fi = (prop_i, prop_h,i, prop_v,i, prop_i+ε, prop_h,i+ε, prop_v,i+ε, prop_i−ε, prop_h,i−ε, prop_v,i−ε),          (4)

in which (i + ε) and (i − ε) are the points obtained by shifting the neighboring point i along the trajectory of the well by a depth ε·rv upwards and downwards. The property estimates at the points (i + ε) and (i − ε) are obtained by generalized regression neural network (GRNN) interpolation. A predefined number of nearest neighboring points (Nnbr), based on the calculated distances, is considered for the formulation of the characteristic. In one example, each row in the matrix of basic characteristics for an example as illustrated in FIG. 4 comprises 9 × Nnbr characteristics, and is schematically represented as R = [F1, F2 ... FNnbr]. A randomly selected subset or batch of input feature rows, formed based on the training data set, is passed in to train the DNN for the prediction of the petrophysical property.
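A hedged sketch of assembling one feature row R = [F1 ... FNnbr] per equation (4): each neighboring point contributes the triplet (prop, prop_h, prop_v) at its own depth and at the two points shifted by ε·rv, giving 9 values per neighbor. The callables `gamma_h` and `gamma_v` stand in for the fitted semi-variogram models, and all names and orderings here are illustrative reconstructions, not the patent's exact layout.

```python
import math

def neighbor_attributes(prop, d_xy, d_z, gamma_h, gamma_v):
    """The (prop, prop_h, prop_v) triplet for one neighboring point."""
    prop_h = prop + math.sqrt(2 * gamma_h(d_xy))   # horizontal coarse estimate
    prop_v = prop + math.sqrt(2 * gamma_v(d_z))    # vertical coarse estimate
    return (prop, prop_h, prop_v)

def feature_row(neighbors, gamma_h, gamma_v):
    """One row of R: 9 values per neighbor, i.e. the triplet at the point
    itself and at the two points shifted by +/- eps * r_v along the well.

    neighbors: for each of the Nnbr wells, a 3-tuple of (prop, d_xy, d_z)
    entries for the point and its two shifted companions.
    """
    row = []
    for shifted_points in neighbors:
        for prop, d_xy, d_z in shifted_points:
            row.extend(neighbor_attributes(prop, d_xy, d_z, gamma_h, gamma_v))
    return row
```

With Nnbr = 8 neighbors this yields the 9 × 8 = 72 input characteristics mentioned in the hyperparameter example below.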
The characteristic definition illustrated by equation (4) above can represent a basic set of input characteristics which can be used to train the first DNN (previously mentioned in block 208 of Figure 2A). In addition, as also mentioned in block 208 of Figure 2A, the following additional advanced characteristics can also be considered in the input characteristic for the simulation of petrophysical properties, in which other types of data (e.g., other than those from well logs) are used as an input characteristic. First, anisotropy data can be considered an advanced feature. In this case, the horizontal variogram calculated by equation (3) does not account for anisotropy. Anisotropic variogram models, for example, are useful for capturing direction-dependent spatial variations in petrophysical properties. An anisotropic variogram model provides information on spatial variability along its major axis and its minor axis of variation. If an anisotropic variogram model is available, the characteristic prop_h is divided into two components, prop_h,major and prop_h,minor, representing rough estimates of the property along the major and minor axes of the variation, respectively. In addition, spatial continuity and trend model data can be considered an advanced characteristic. Spatial continuity and underlying trend models are also considered as input characteristics. In addition, data from the geomechanical stratigraphy can be considered an advanced feature. The information obtained from the geomechanical stratigraphy model, based on the stress field, is also considered to be part of the input characteristic. Geochemical stratigraphy (chemostratigraphy) data can be considered an advanced feature. Geochemical stratigraphy, which provides a detailed stratigraphy based on the geochemistry of the rock, is also considered to be part of the input characteristic. Seismic stratigraphy data can be considered an advanced feature.
In this example, a 3D model developed based on the seismic data is also considered to be part of the input characteristic. Chronostratigraphy data can be considered an advanced characteristic. In this example, a chronostratigraphic model is also considered to be part of the input characteristic. Maps of the sedimentary environment can be considered an advanced feature. In this example, interpreted maps of the lithology and the sedimentary environment are also considered to be part of the input characteristic. Sequence stratigraphy data can be considered an advanced feature. In this example, a 3D model developed based on outcrop, well, seismic and other geological data is also considered to be part of the input characteristic. Geodynamic and tectonic data can also be considered an advanced feature. In this example, the geodynamic and tectonic information that provides information about the deformation and kinematic history of the basement is also considered to be part of the input characteristic. Paleoclimatic data and derived maps of risk of occurrence can be considered an advanced characteristic. In this example, data derived from numerical simulations of atmospheric, oceanic and tidal conditions in the geological past, and derived maps determining the risk of occurrence of elements of petroleum systems, are also considered to be part of the input characteristic. It is understood that other types of advanced characteristics can be used as input characteristics in addition to those presented above. For example, conceptual data can be provided by a geoscientist based on prior knowledge and experience. In one example, a sequence stratigraphic model of a region of interest can be used, and a geoscientist can provide one or more additional input characteristics based on their prior knowledge regarding the location of a given type of rock without having to use seismic data.
The geoscientist may therefore have knowledge of trends in how a petrophysical property can change across different locations of rock layers in the region of interest. As presented here, a deep neural network such as an acyclic (feedforward) deep network can be implemented to approximate a function f. Models of this kind are called acyclic models because information flows through the function being evaluated from an input x, through one or more intermediate computations used to define f, and finally to an output y. Deep neural networks (DNNs) are called networks because they can be represented by linking different functions together. A DNN model can be represented as a graph describing how the functions are linked together from an input layer, through one or more hidden layers and finally to an output layer, and each layer can have one or more nodes. Furthermore, even if a deep neural network such as the acyclic deep network is presented in the examples of the present invention, it is understood that other types of neural networks can be used by the present technology. For example, a convolutional neural network, a radial basis function network, a recurrent neural network, a modular neural network, an instantaneously trained neural network, a spiking neural network, a regulatory feedback network, a dynamic neural network, a neuro-fuzzy network, a compositional pattern-producing network, a memory network and/or any other suitable type of neural network can be used. FIG. 5 illustrates a diagram of an example architecture of a deep neural network (DNN) 500, such as the acyclic deep network, used for the prediction of the petrophysical property (previously mentioned as the first DNN in Figure 2A). Not all of the components shown may be necessary, however, and one or more implementations may include additional components which are not shown in the figure.
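The "linked functions" view of an acyclic network can be made concrete with a tiny dependency-free sketch in which the network f is literally a composition of layer functions. The weights and sizes below are arbitrary illustrative numbers, not parameters from the present disclosure.

```python
import math

def layer(weights, biases, activation):
    """One fully connected layer: x -> activation(W x + b)."""
    def apply(x):
        return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
                for row, b in zip(weights, biases)]
    return apply

tanh = math.tanh
identity = lambda v: v

# f(x) = output(hidden2(hidden1(x))): an acyclic (feedforward) chain
# from an input layer through hidden layers to a linear output layer.
hidden1 = layer([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1], tanh)
hidden2 = layer([[1.0, -1.0]], [0.0], tanh)
output  = layer([[2.0]], [0.5], identity)

def f(x):
    return output(hidden2(hidden1(x)))
```

The graph view simply records which `layer` feeds which; no cycle ever appears, which is what distinguishes this architecture from a recurrent network.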
Variations in the arrangement and type of components can be made without departing from the spirit or scope of the claims described herein. Additional components, different components or fewer components may be used. The input characteristic, formed as previously described, is transferred to an input layer. The input layer transfers the input values to the Nhidden stacked fully connected hidden layers, each of which has Nnodes nodes. It is understood that all of the hidden layers may have the same number of nodes, or the number of nodes may vary from one hidden layer to another. In one example, the nodes in the hidden layers use a hyperbolic tangent activation function to perform non-linear transformations on the weighted sum of the values from the previous layer. An activation function can be used at a hidden layer to calculate output values for the values transferred from the previous layer. Even if the previous example uses the hyperbolic tangent activation function (tanh), other activation functions can be used and are still within the scope of this technology. For example, one or more other geoscience-specific activation functions can be used in place of the tanh function. In addition, a custom activation function can be used. A domain-specific activation function can also be used. Additionally, an activation function using unit step (e.g., threshold), sigmoid, piecewise linear and/or Gaussian techniques can be used. Even though, for illustration purposes, several hidden layers are illustrated in Figure 5, it is understood that the number of hidden layers supported by the architecture of the DNN 500 can include any suitable number of hidden layers. Following the hidden layers in Figure 5, a linear output layer with a single node sums the activations from the last hidden layer to provide an estimated property value at the 3D point represented by the feature row.
The learning step optimizes the weights and biases in the hidden layers and the output layer so that the estimation error between the estimated property values and the observed property values from the well log(s) can be minimized. The error estimate can be the RMSE, a composite of RMSE and cross-correlation, or other geoscience-specific error parameters. In order to avoid overfitting during training, the estimation error is regularized by adding the L2 norms of the weights in the hidden layers to the RMSE. An optimization method then applies a stochastic gradient descent algorithm (or any other suitable optimization algorithm), which can use one or more iterative optimization techniques and/or use a small subset of the training data set with m training samples randomly selected at a time. The variances calculated based on the horizontal and vertical semi-variograms are included in the input characteristic. The optimization process can optimize the weights and biases associated with the vertical and horizontal semi-variances, and the other input characteristics, so that the error in the property estimates relative to the observed property values can be minimized. The learning method described here can not only minimize the error in the property estimates, but can also incorporate the spatial variance of the properties, as described by equations (2) and (3) above. The overall learning process also includes the optimization of hyperparameters. These hyperparameters include hyperparameters specific to the machine learning algorithm, e.g., the learning rate (η) and possible learning rate decay parameters, the batch size (m), the regularization parameter (λ), Nhidden, Nnodes, etc., and geoscience-specific hyperparameters, e.g., Nnbr, ε, etc. An example of a hyperparameter set is η = 0.000125, m = 352, λ = 0.0000525, Nhidden = 5, Nnodes = {108, 72, 48, 32, 21}, Nnbr = 8 (i.e., 72 input characteristics), ε = 0.075.
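The training objective described above (RMSE plus an L2 weight penalty, minimized over randomly selected mini-batches) can be sketched as follows. This is a simplified illustration: the gradient computation itself is omitted, and the λ default merely echoes the example hyperparameter set rather than a recommended value.

```python
import math
import random

def rmse(y_pred, y_true):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(y_pred, y_true))
                     / len(y_true))

def l2_penalty(weight_matrices, lam):
    """lambda times the sum of squared hidden-layer weights."""
    return lam * sum(w * w
                     for mat in weight_matrices
                     for row in mat
                     for w in row)

def training_loss(y_pred, y_true, weight_matrices, lam=0.0000525):
    """Estimation error regularized by the L2 norms of the weights."""
    return rmse(y_pred, y_true) + l2_penalty(weight_matrices, lam)

def minibatch(rows, m, seed=0):
    """Randomly select m training samples at a time, as in SGD."""
    return random.Random(seed).sample(rows, m)
```

Each SGD iteration would evaluate `training_loss` on one `minibatch` and nudge the weights down the loss gradient.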
Following the completion of learning, which can be determined by the estimation error on the validation data set passing below a threshold value, the test data set is used to determine the performance of the trained DNN on unseen well logs (e.g., logs which were not used for learning). The trained DNN provides the ability to predict the values of the petrophysical property at random 3D points in the region of interest based on the nearest neighboring points. FIG. 6 illustrates a diagram of an example architecture of a DNN 600, such as an acyclic deep network, used for the classification of facies (previously mentioned as the second DNN in FIG. 2A). Not all of the components shown may be necessary, however, and one or more implementations may include additional components which are not shown in the figure. Variations in the arrangement and type of components can be made without departing from the spirit or scope of the claims described herein. Additional components, different components or fewer components may be used. In at least one embodiment, the petrophysical properties (e.g., porosity, lithology, water saturation, permeability, density, oil/water ratio, geochemical information, paleodata, etc.) closely related to the facies type are used to classify a given point in a well log (or at a random location) as corresponding to a given type of facies. Such petrophysical properties can be called facies-guiding properties. The input characteristic for facies classification can include the Cartesian coordinates (x, y, z) of a point and the facies-guiding properties at this point. In one example, the DNN 600 has an input layer and stacked hidden layers with a non-linear activation function (e.g., the rectified linear unit, "ReLU"), followed by a linear output layer. The number of nodes in the output layer for facies classification is equal to the number of facies classes in one example.
The values obtained from the nodes of the hidden layers with the ReLU activation function are subject to dropout with a probability of 0.5 to avoid overfitting. In addition, the weights of the hidden layers are regularized, also to avoid overfitting. The transformed values from the output layer, also called logits, are passed into a softmax function (e.g., a normalized exponential function). For a data set containing K facies, with the output values from the output layer identified by l = (l1, l2 ... lK), the softmax function can be defined in the form of the following equation (5):

σ(l)_i = e^(l_i) / Σ_{k=1..K} e^(l_k), for all i ∈ [1, K],          (5)

In one embodiment, the softmax function provides the probability that a given sample point belongs to a given facies. The observed facies values are transformed into one-hot coded values according to the "winner takes all" principle (e.g., in which the nodes in a layer compete with each other for activation, and only the node with the highest activation remains active while all the other nodes are deactivated). The facies labels in the one-hot coded format contain binary indicators, which are 1 to indicate the presence of a specific facies at the location and 0 otherwise. In the present context, one-hot coding can refer to a group of bits among which the only valid combinations of values are those with a single high bit (1) while all the others are low (0). As an example, for a data set containing 3 facies, the one-hot coded values for the 3 facies can be facies 1 = (1, 0, 0), facies 2 = (0, 1, 0) and facies 3 = (0, 0, 1). The softmax output probabilities can then be evaluated against the one-hot coded values of the facies labels at the sample points to calculate the cross-entropy loss C given by the following equation (6):

C = −(1/m) Σ_m Σ_{j=1..K} [t_j ln(y_j) + (1 − t_j) ln(1 − y_j)],          (6)
in which K represents the number of facies in the input data, t represents the one-hot coded values for the observed facies, y represents the probability that the output belongs to a given facies, calculated using the softmax function, and m represents the number of training samples. Regularization of the calculated cross-entropy loss can be achieved by adding the L2 norms of the weights in the hidden layers to the cross-entropy loss C. The hyperparameters related to machine learning remain similar to those defined in the previous section for the modeling of the petrophysical property, described above with reference to FIG. 5. An example of a set of hyperparameters is η = 0.0002 (with exponential decay based on a decay rate of 0.86 applied every 1000 training iterations), m = 256, λ = 0.05, Nhidden = 3, Nnodes = {1024, 512, 256}. A sufficiently trained DNN classifier minimizes the cross-entropy loss C, and it is therefore subsequently used to predict the facies at any random 3D point in the region of interest using the pre-calculated facies-guiding properties at this random 3D point. FIG. 7 illustrates a perspective view of an example of a point cloud representation 700 of an output of a model for a petrophysical property (e.g., porosity) (e.g., corresponding to the first DNN in Figure 2A) in accordance with certain embodiments. As illustrated, the example point cloud representation 700 is a graphic representation of a point cloud with 500,000 points. The different colors in Figure 7 correspond to different respective values of the petrophysical property. It is understood that the number of points in the point cloud is given for illustration purposes only, and the development of a model with a significantly larger number of points may be possible in a distributed memory architecture implementation of the present technology.
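Equations (5) and (6), together with the one-hot coding of the facies labels, can be sketched as follows for a single sample (the batch average over m is omitted, and the max-subtraction in the softmax is a standard numerical-stability detail, not something stated in the disclosure):

```python
import math

def softmax(logits):
    """Equation (5): sigma(l)_i = exp(l_i) / sum_k exp(l_k)."""
    mx = max(logits)                      # subtract the max for stability
    exps = [math.exp(l - mx) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def one_hot(facies_index, n_classes):
    """Binary indicators: 1 at the observed facies, 0 elsewhere."""
    return [1 if i == facies_index else 0 for i in range(n_classes)]

def cross_entropy(t, y, eps=1e-12):
    """Equation (6) for one sample: -sum_j [t_j ln(y_j) + (1-t_j) ln(1-y_j)]."""
    return -sum(tj * math.log(yj + eps) + (1 - tj) * math.log(1 - yj + eps)
                for tj, yj in zip(t, y))

probs = softmax([2.0, 1.0, 0.1])   # logits for a 3-facies data set
target = one_hot(0, 3)             # facies 1 -> (1, 0, 0)
loss = cross_entropy(target, probs)
```

Training drives `loss` toward zero, which happens exactly when the softmax probability of the observed facies approaches 1.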
Figure 8 illustrates a perspective view of an example of a point cloud representation 800 of an output of a classification model (e.g., corresponding to the second DNN in Figure 2A) in accordance with certain embodiments. As illustrated, the example point cloud representation 800 is a graphical representation of a point cloud with 500,000 points. The different colors in Figure 8 correspond to different respective values of the facies type. Furthermore, it is understood that the number of points in the point cloud is given for illustration purposes only, and the development of a model with a significantly larger number of points may be possible in a distributed memory architecture implementation of the present technology. The embodiments described here support computation in a distributed computing environment, which may include a distributed shared memory. In at least one embodiment, the DNN architecture described here can be highly scalable and perform well in a system based on a distributed computing architecture. FIG. 9 illustrates a diagram of a set of general components of an exemplary computer device 900. In this example, the computer device 900 comprises a processor 902 for executing instructions which can be stored in a memory device or element 904. The computing device 900 can comprise several types of memory, data storage or non-transitory computer-readable storage media, such as a first data storage for program instructions for execution by the processor 902, separate storage for images or data, removable memory for sharing information with other devices, etc. The computing device 900 can generally include certain types of display elements 906, such as a touch screen or a liquid crystal display (LCD). As discussed, the computing device 900 in several embodiments will include at least one input element 910 which can receive conventional input from a user.
This conventional input may include, for example, a push button, a touchpad, a touchscreen, a scroll wheel, a joystick, a keyboard, a mouse, a numeric keypad, or any other such device or element with which a user can enter a command into the device. In some embodiments, however, such a computing device 900 may not include any keys, and can be controlled only through a combination of visual and auditory commands, so that a user can control the computing device 900 without being in direct contact with it. In certain embodiments, the computing device 900 of FIG. 9 can comprise one or more network interface elements 908 for communicating over different networks, such as Wi-Fi, Bluetooth, RF, and wired and wireless communication systems. The computing device 900 in several embodiments can communicate with a network, such as the Internet, and may be able to communicate with other computing devices of this type. As presented here, different approaches can be implemented in different environments according to the described embodiments. For example, Figure 10 illustrates a diagram of an example of an environment 1000 for implementing aspects in accordance with various embodiments. As will be understood, even if a client-server based environment is used for explanatory purposes, different environments can be used, if desired, to implement various embodiments. The system includes an electronic client device 1002, which may include any suitable device which can operate to send and receive requests, messages or information over an appropriate network 1004 and return information to a user of the device. Examples of such client devices include personal computers, cell phones, portable messaging devices, portable computers, set-top boxes, personal digital assistants, electronic book readers, etc.
Network 1004 may include any suitable network, including an intranet, the Internet, a cellular network, a local area network or any other such network or a combination thereof. The network 1004 could be a "push" network, a "pull" network, or a combination of these. In a push network, one or more servers push data to the client device. In a pull network, one or more of the servers send data to the client device when the client device requests the data. The components used for such a system may depend at least in part on the type of network and/or the environment chosen. The protocols and components for communicating across such a network are well known and will not be discussed here in detail. Communication over the network 1004 can be enabled through wired or wireless connections and combinations thereof. In this example, the network includes the Internet, since the environment includes a server 1006 to receive requests and serve content in response to them, although for other networks an alternative device having the same utility could be used, as would be apparent to a person skilled in the art. The server 1006 can store and provide DNN models to predict a petrophysical property and a facies classification as described above. In one example, the server 1006 can run one or more applications, including those written using TensorFlow and/or other machine learning software libraries, to run the DNN models. One or more of the client devices in FIG. 10 can communicate with the server 1006 in order to execute the DNN models to generate 3D static reservoir models in accordance with the embodiments described here. Server 1006 will generally include an operating system which provides executable program instructions for the general administration and operation of that server, and will generally include computer-readable media storing instructions which, when executed by a processor of the server, allow the server to perform its intended functions.
Appropriate implementations for the operating system and general functionality of the servers are known or commercially available and are readily implemented by those of skill in the art, particularly in light of the present disclosure. The environment in one embodiment is a distributed computing environment using multiple computer systems and components which are interconnected through communication links, using one or more computer networks or direct connections. However, it will be understood by those skilled in the art that such a system could work just as well in a system having fewer or more components than those illustrated in Figure 10. Therefore, the illustration of the system 1000 in Figure 10 should be taken as illustrative in nature and not limiting to the scope of the disclosure. As presented above, the various embodiments can be implemented in a wide variety of operating environments, which, in some cases, can include one or more user computers, computing devices or processing devices which can be used to run any number of applications. User or client devices can include any number of general-purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless and portable devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system can also include a number of workstations running any of a variety of commercially available operating systems and other applications for purposes such as database development and management. These devices may also include other electronic devices, such as dumb terminals, thin clients, and other devices capable of communicating across a network. Most embodiments use at least one network to support communications using a variety of commercially available protocols, such as TCP/IP, FTP, UPnP, NFS and CIFS.
The network can, for example, be a local area network, a wide area network, a virtual private network, the Internet, an intranet, an extranet, a public telephone network, an infrared network, a wireless network, or any combination of these. The server(s) may also be able to execute programs or scripts in response to requests from user devices, such as, for example, by running one or more applications which can be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C#, or C++, or any scripting language, such as Perl, Python, or TCL, or combinations thereof. The server(s) may also include database servers, including, without limitation, those commercially available from Oracle®, Microsoft®, Sybase® and IBM®. Storage media and other non-transitory computer-readable media for containing code, or portions of code, may include any storage media used in the art, such as, without limitation, volatile and non-volatile media, removable and non-removable media, implemented in any process or technology for storing information such as computer-readable instructions, data structures, program modules, or other data, including RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other media that can be used to store the desired information and that a device in the system can access. Based on the disclosure and the teachings provided herein, a person skilled in the art will appreciate other ways and/or methods for implementing the various embodiments. Various aspects of the disclosure are described below as convenience clauses. These are provided as examples, and do not limit the present technology. Clause 1.
A method comprising: receiving input data comprising information associated with one or more well logs in a region of interest; determining, based at least in part on the input data, an input characteristic associated with a first deep neural network (DNN) for predicting a value of a property at a location within the region of interest; training, using the input data and based at least in part on the input characteristic, the first DNN; and predicting, using the first DNN, the value of the property at a location in the region of interest. Clause 2. The method according to clause 1, further comprising: training a second DNN to classify a type of facies at a location in the region of interest based at least in part on the predicted value of the property at the location in the region of interest. Clause 3. The method according to clause 2, further comprising: predicting, using the first DNN, property values for a plurality of points in a point cloud, each of the plurality of points corresponding to a different location in the region of interest; and classifying, using the second DNN, the facies types for the plurality of points in the point cloud based at least in part on the predicted property values for the plurality of points in the point cloud. Clause 4. The method according to clause 3, further comprising: generating, using the property values and the facies types for the plurality of points in the point cloud, a second point cloud representing the region of interest. Clause 5.
The method according to clause 1, wherein determining, based at least in part on the input data, the input characteristic further comprises: determining a vertical variogram and a horizontal variogram of a property in each stratigraphic interval of the region of interest based at least in part on the input data; and determining, based at least in part on the vertical and horizontal variograms, the input characteristic for provision to the first DNN. Clause 6. The method according to clause 5, further comprising: dividing, using the vertical variogram, the region of interest into a plurality of layers, wherein the input characteristic is based on a plurality of neighboring points chosen from at least one layer of the plurality of layers. Clause 7. The method according to clause 1, further comprising: generating a point cloud in the region of interest, the point cloud comprising a plurality of points corresponding to different locations in the region of interest. Clause 8. The method according to clause 1, wherein the region of interest corresponds to a geological volume, and the first DNN comprises a deep feedforward network. Clause 9. The method according to clause 3, further comprising: mapping a set of coordinates corresponding to each point of the plurality of points in the region of interest, the set of coordinates being in a first coordinate system, to a second set of coordinates in a second coordinate system, wherein the first coordinate system comprises a Cartesian coordinate system and the second coordinate system comprises a UVW coordinate system. Clause 10. The method according to clause 1, further comprising: generating, using the received input data, a training data set, a validation data set and a test data set, wherein the training data set, the validation data set and the test data set are mutually exclusive subsets of the received input data. Clause 11.
The method according to clause 1, wherein the property comprises at least one of a petrophysical property, a geochemical property or a geomechanical property. Clause 12. A system comprising: at least one processor; and a memory comprising instructions which, when executed by the at least one processor, cause the at least one processor to: receive input data comprising information associated with one or more well logs in a region of interest; determine, based at least in part on the input data, an input characteristic associated with a first deep neural network (DNN) for predicting a value of a petrophysical property at a location within the region of interest; predict, using the first DNN, values of a petrophysical property for a plurality of points in a point cloud, each of the plurality of points corresponding to a different location in the region of interest; and classify, using a second DNN, the facies types for the plurality of points in the point cloud based at least in part on the predicted values of the petrophysical property for the plurality of points in the point cloud. Clause 13. The system according to clause 12, wherein the instructions further cause the at least one processor to: generate, using the petrophysical property values and facies types for the plurality of points in the point cloud, a second point cloud representing the region of interest. Clause 14. The system according to clause 12, wherein determining, based at least in part on the input data, the input characteristic further comprises: determining a vertical variogram and a horizontal variogram of a petrophysical property in the region of interest based at least in part on the input data; and determining, based at least in part on the vertical and horizontal variograms, the input characteristic for provision to the first DNN. Clause 15.
The system according to clause 14, wherein the instructions further cause the at least one processor to: divide, using the vertical variogram, the region of interest into a plurality of layers, wherein the input characteristic is based on a plurality of neighboring points chosen from at least one layer of the plurality of layers. Clause 16. The system according to clause 12, wherein the instructions further cause the at least one processor to: generate a point cloud in the region of interest, the point cloud comprising a plurality of points corresponding to different locations in the region of interest. Clause 17. The system according to clause 12, wherein the instructions further cause the at least one processor to: map a set of coordinates corresponding to each point of the plurality of points in the region of interest, the set of coordinates being in a first coordinate system, to a second set of coordinates in a second coordinate system, wherein the first coordinate system comprises a Cartesian coordinate system and the second coordinate system comprises a UVW coordinate system. Clause 18. The system according to clause 12, wherein the instructions further cause the at least one processor to: generate, using the received input data, a training data set, a validation data set and a test data set, wherein the training data set, the validation data set and the test data set are mutually exclusive subsets of the received input data. Clause 19. The system according to clause 12, wherein the petrophysical property comprises porosity, lithology, water saturation, permeability, density, oil/water ratio, geochemical information or paleodata. Clause 20.
A non-transient computer-readable medium comprising instructions stored thereon which, when executed by at least one computing device, cause the at least one computing device to: send input data to a server, the input data comprising information associated with one or more well logs in a region of interest, the region of interest corresponding to a geological volume, wherein values of a petrophysical property for a plurality of points of a point cloud, each of the plurality of points corresponding to a different location in the region of interest, are determined using a first deep neural network (DNN), and wherein, based at least in part on the values of the petrophysical property for the plurality of points in the point cloud, facies types for the plurality of points in the point cloud are determined using a second DNN; receive, from the server, a second point cloud corresponding to the region of interest, the second point cloud comprising information for at least the petrophysical properties and the facies classification of each point included in the second point cloud; and provide for display a 3D reservoir model based on information from the second point cloud.

A reference to an element in the singular is not intended to mean "one and only one" unless specifically stated, but rather "one or more". For example, "a" module may describe one or more modules. An element preceded by "a", "an", "the" or "said" does not, without further constraint, preclude the existence of additional like elements. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The phrase "for example" is used to mean an example or illustration. To the extent that the term "include", "have", or the like is used, such a term is intended to be inclusive in a manner similar to the term "comprise", as "comprise" is interpreted when used as a transitional word in a claim.
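The two-stage workflow recited in the clauses above — a first DNN that predicts a property value at each point of a cloud, and a second DNN that classifies facies from those predictions — can be sketched as below. This is a minimal NumPy illustration only: the layer sizes, the four facies classes, and the untrained random weights are assumptions for demonstration, not the architecture of the disclosed system, and training is omitted (forward pass only).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random weights/biases for a plain feedforward (acyclic) network."""
    weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]
    return weights, biases

def forward(x, weights, biases):
    """Forward pass: ReLU hidden layers, linear output layer."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ w + b, 0.0)
    return x @ weights[-1] + biases[-1]

# Point cloud: 100 (x, y, z) locations in the region of interest.
points = rng.uniform(0.0, 1.0, (100, 3))

# First DNN: location -> predicted property value (e.g. porosity).
w1, b1 = init_mlp([3, 16, 16, 1], rng)
prop = forward(points, w1, b1)                 # shape (100, 1)

# Second DNN: [location, predicted property] -> facies class scores.
w2, b2 = init_mlp([4, 16, 4], rng)             # 4 hypothetical facies types
scores = forward(np.hstack([points, prop]), w2, b2)
facies = scores.argmax(axis=1)                 # one class id per point
```

In a real implementation each network would be trained on the well-log data before prediction; the point here is only the data flow from coordinates to property values to facies classes.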
Relational terms such as "first" and "second" and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual relationship or order of that kind between such entities or actions. Phrases such as "an aspect", "the aspect", "another aspect", "some aspects", "one or more aspects", "an implementation", "the implementation", "another implementation", "some implementations", "one or more implementations", "an embodiment", "the embodiment", "another embodiment", "some embodiments", "one or more embodiments", "a configuration", "the configuration", "another configuration", "some configurations", "one or more configurations", "the present technology", "the disclosure", "the present disclosure", other variations thereof, and the like are used for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the present technology or that such disclosure applies to all configurations of the present technology. A disclosure relating to such phrase(s) may apply to all configurations, or to one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as "an aspect" or "some aspects" may refer to one or more aspects and vice versa, and this applies similarly to the other foregoing phrases. The phrase "at least one of" preceding a series of items, with the terms "and" or "or" separating any of the items, modifies the list as a whole rather than each member of the list. The phrase "at least one of" does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items.
As an example, each of the phrases "at least one of A, B and C" and "at least one of A, B or C" refers to only A, only B, or only C; any combination of A, B and C; and/or at least one of each of A, B and C. It is understood that the specific order or hierarchy of the disclosed steps, operations or processes is an illustration of example approaches. Unless specifically stated otherwise, it is understood that the specific order or hierarchy of steps, operations or processes may be rearranged, and some of the steps, operations or processes may be carried out simultaneously. The accompanying method claims, if any, present elements of the various steps, operations or processes in an example order and are not intended to be limited to the specific order or hierarchy presented; these may be performed in serial, linear, parallel or a different order. It should be understood that the described instructions, operations and systems can generally be integrated together into a single software/hardware product or packaged into multiple software/hardware products. In one aspect, the term "coupled" or the like may describe a direct coupling; in another aspect, it may describe an indirect coupling. Terms such as "top", "bottom", "front", "back", "side", "horizontal", "vertical" and the like describe an arbitrary frame of reference rather than the ordinary gravitational frame of reference; such a term may therefore extend upwardly, downwardly, diagonally or horizontally in a gravitational frame of reference. This disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. In some cases, well-known structures and components are shown in block-diagram form in order to avoid obscuring the concepts of the present technology. This disclosure provides various examples of the present technology, and the present technology is not limited to these examples.
Various modifications of these aspects will be readily apparent to those skilled in the art, and the principles described herein may be applied to other aspects. All structural and functional equivalents to the elements of the various aspects described throughout the disclosure that are known or later come to be known to those skilled in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for". The title, background, brief description of the figures, abstract and figures are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. They are submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and that the various features are grouped together in various implementations for the purpose of streamlining the disclosure. This method of disclosure is not to be construed as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language of the claims and to encompass all legal equivalents. Notwithstanding, none of the claims is intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
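Clauses 10 and 18 require the training, validation and test sets to be mutually exclusive subsets of the received input data. A minimal sketch of such a split follows; the 70/15/15 fractions and the helper name are illustrative assumptions, since the clauses do not fix the proportions.

```python
import numpy as np

def split_indices(n, frac_train=0.7, frac_val=0.15, seed=0):
    """Shuffle indices 0..n-1 and cut them into mutually exclusive
    training / validation / test index sets (test takes the remainder)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(n * frac_train)
    n_val = int(n * frac_val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g. 1000 well-log samples -> 700 / 150 / 150 disjoint subsets
train, val, test = split_indices(1000)
```

Because the three slices are taken from a single permutation, no sample can appear in more than one subset, which is exactly the mutual-exclusivity property the clauses recite.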
Claims (10)

What is claimed is:

1. A method comprising: receiving input data comprising information associated with one or more well logs in a region of interest; determining, based at least in part on the input data, an input characteristic associated with a first deep neural network (DNN) for predicting a value of a property at a location within the region of interest; training, using the input data and based at least in part on the input characteristic, the first DNN; and predicting, using the first DNN, the value of the property at the location in the region of interest.

2. The method according to claim 1, further comprising: training a second DNN to classify a facies type at a location in the region of interest based at least in part on the predicted value of the property at the location in the region of interest, optionally further comprising: predicting, using the first DNN, property values for a plurality of points in a point cloud, each of the plurality of points corresponding to a different location in the region of interest; and classifying, using the second DNN, the facies types for the plurality of points in the point cloud based at least in part on the predicted property values for the plurality of points in the point cloud.

3. The method according to claim 2, further comprising: generating, using the property values and facies types for the plurality of points in the point cloud, a second point cloud representing the region of interest, and/or mapping a set of coordinates corresponding to each point of the plurality of points in the region of interest, the set of coordinates being in a first coordinate system, to a second set of coordinates in a second coordinate system, wherein the first coordinate system comprises a Cartesian coordinate system and the second coordinate system comprises a UVW coordinate system.
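Claim 3 maps each point from a Cartesian system to a UVW coordinate system. As a hedged illustration, the sketch below assumes the simplest stratigraphic flattening, in which u and v keep the areal coordinates and w is the relative position between a base horizon (w = 0) and a top horizon (w = 1); production UVW transforms also unfold u and v along paleo-depositional directions, which is omitted here, and the two horizon functions are hypothetical.

```python
def to_uvw(x, y, z, top, base):
    """Map Cartesian (x, y, z) to simplified stratigraphic (u, v, w):
    u, v keep the areal coordinates, and w is the relative position
    between the base (w = 0) and top (w = 1) horizons at that (x, y)."""
    zt, zb = top(x, y), base(x, y)
    return x, y, (z - zb) / (zt - zb)

# Hypothetical dipping horizons for illustration (depths in meters).
top = lambda x, y: 100.0 - 0.1 * x
base = lambda x, y: 60.0 - 0.1 * x

# A point halfway between the horizons maps to w = 0.5.
u, v, w = to_uvw(50.0, 10.0, 75.0, top, base)
```

Flattening into UVW makes points at the same relative stratigraphic position comparable even where the layers dip, which is why the claim pairs it with the point-cloud representation.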
4. The method according to any one of claims 1 to 3, wherein determining, based at least in part on the input data, the input characteristic further comprises: determining a vertical variogram and a horizontal variogram of a property in each stratigraphic interval of the region of interest based at least in part on the input data; and determining, based at least in part on the vertical and horizontal variograms, the input characteristic for provision to the first DNN, optionally further comprising: dividing, using the vertical variogram, the region of interest into a plurality of layers, wherein the input characteristic is based on a plurality of neighboring points chosen from at least one layer of the plurality of layers.

5. The method according to any one of claims 1 to 4, further comprising: generating a point cloud in the region of interest, the point cloud comprising a plurality of points corresponding to different locations in the region of interest.

6. The method according to any one of claims 1 to 5, wherein the region of interest corresponds to a geological volume, and the first DNN comprises a deep feedforward network.

7. The method according to any one of claims 1 to 6, further comprising: generating, using the received input data, a training data set, a validation data set and a test data set, wherein the training data set, the validation data set and the test data set are mutually exclusive subsets of the received input data.

8.
A system comprising: at least one processor; and a memory comprising instructions which, when executed by the at least one processor, cause the at least one processor to: receive input data comprising information associated with one or more well logs in a region of interest; determine, based at least in part on the input data, an input characteristic associated with a first deep neural network (DNN) for predicting a value of a petrophysical property at a location within the region of interest; predict, using the first DNN, petrophysical property values for a plurality of points in a point cloud, each of the plurality of points corresponding to a different location in the region of interest; and classify, using a second DNN, facies types for the plurality of points in the point cloud based at least in part on the predicted values of the petrophysical property for the plurality of points in the point cloud.

9. The system of claim 8, wherein the instructions further cause the at least one processor to perform the method of any one of claims 2 to 7.

10.
A non-transient computer-readable medium comprising instructions stored thereon which, when executed by at least one computing device, cause the at least one computing device to: send input data to a server, the input data comprising information associated with one or more well logs in a region of interest, the region of interest corresponding to a geological volume, wherein values of a petrophysical property for a plurality of points of a point cloud, each of the plurality of points corresponding to a different location in the region of interest, are determined using a first deep neural network (DNN), and wherein, based at least in part on the petrophysical property values for the plurality of points in the point cloud, facies types for the plurality of points in the point cloud are determined using a second DNN; receive, from the server, a second point cloud corresponding to the region of interest, the second point cloud comprising information for at least the petrophysical properties and the facies classification of each point included in the second point cloud; and provide for display a 3D reservoir model based on information from the second point cloud.
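Claim 4 determines a vertical variogram and uses it to divide the region of interest into layers. The sketch below computes an empirical vertical semivariogram from a synthetic well log and then assigns layer indices using an assumed vertical range of 20 m; in practice the range would be read from a spherical (or other) model fitted to the empirical points, a step omitted here, and the log itself is fabricated for illustration.

```python
import numpy as np

def empirical_variogram(z, values, lag_edges):
    """Empirical semivariance gamma(h) = 0.5 * mean((v_i - v_j)^2) over all
    sample pairs whose vertical separation falls in each lag bin.
    (The zero-lag diagonal lands in the first bin and contributes zero.)"""
    d = np.abs(z[:, None] - z[None, :])
    sq = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic well log: depths every 0.5 m, smooth trend plus noise.
rng = np.random.default_rng(1)
z = np.linspace(0.0, 99.5, 200)
vals = np.sin(z / 15.0) + 0.1 * rng.normal(size=z.size)

gamma = empirical_variogram(z, vals, np.linspace(0.0, 50.0, 11))

# Assumed vertical range (would come from a model fitted to `gamma`).
vertical_range = 20.0
layer_index = np.floor((z - z.min()) / vertical_range).astype(int)
n_layers = layer_index.max() + 1  # here: 5 layers of ~20 m each
```

Points within one layer are then within one variogram range of each other vertically, which is what makes them useful as the "neighboring points" input characteristic the claim recites.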
Patent family:
US20200160173A1 | 2020-05-21
NO20191478A1 | 2019-12-12
CA3067013A1 | 2019-01-24
GB201918289D0 | 2020-01-29
GB2577437A | 2020-03-25
AU2017424316A1 | 2020-01-02
WO2019017962A1 | 2019-01-24
Legal status:
2019-05-23 | PLFP | Fee payment | Year of fee payment: 2
2021-02-12 | ST | Notification of lapse | Effective date: 20210105
Priority application:
PCT/US2017/043228 | WO2019017962A1 | 2017-07-21 | Deep learning based reservoir modeling