
Open Access

Peer-reviewed

Research Article

Earthquake prediction model using support vector regressor and hybrid neural networks

Khawaja M. Asim, Adnan Idris, Talat Iqbal, Francisco Martínez-Álvarez

Affiliations: Centre for Earthquake Studies, National Centre for Physics, Islamabad, Pakistan; Department of Computer Sciences and IT, The University of Poonch, Rawalakot, Pakistan; Department of Computer Science, Pablo de Olavide University, Seville, Spain

* Corresponding author e-mail: [email protected]


  • Published: July 5, 2018
  • https://doi.org/10.1371/journal.pone.0199004

Earthquake prediction has been a challenging research area, in which a future occurrence of the devastating catastrophe is predicted. In this work, sixty seismic features are computed by employing seismological concepts such as the Gutenberg-Richter law, seismic rate changes, foreshock frequency, seismic energy release, and total recurrence time. The Maximum Relevance and Minimum Redundancy (mRMR) criterion is then applied to extract the relevant features. A classification system based on a Support Vector Regressor (SVR) and a Hybrid Neural Network (HNN) is built to obtain earthquake predictions. The HNN is a stepwise combination of three different neural networks, supported by Enhanced Particle Swarm Optimization (EPSO), which offers weight optimization at each layer. The newly computed seismic features, in combination with the SVR-HNN prediction system, are applied to the Hindukush, Chile, and Southern California regions. The obtained numerical results show improved prediction performance for all considered regions compared to previous prediction studies.

Citation: Asim KM, Idris A, Iqbal T, Martínez-Álvarez F (2018) Earthquake prediction model using support vector regressor and hybrid neural networks. PLoS ONE 13(7): e0199004. https://doi.org/10.1371/journal.pone.0199004

Editor: Xiangtao Li, Northeast Normal University, CHINA

Received: April 18, 2017; Accepted: May 10, 2018; Published: July 5, 2018

Copyright: © 2018 Asim et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: This study is based upon earthquake catalogs. The authors downloaded catalogs from United States Geological Survey (USGS, https://earthquake.usgs.gov/earthquakes/search/ ). The results can be reproduced using the publicly available catalog and proposed Latitudes and Longitudes in the article. All the relevant codes and computed seismic features are available at the following link: https://figshare.com/articles/Earthquake_Prediction_using_SVR_and_HNN/6406814

Funding: This work was supported by the Spanish Ministry of Economy and Competitiveness, Junta de Andalucia under projects TIN2014-55894-C2-R and P12-TIC-1728, respectively.

Competing interests: There are no competing interests.

1 Introduction

Earthquakes are among the most destructive natural catastrophes, and their unpredictability makes them even more damaging in terms of human life and financial losses. There has been a serious debate about the predictability of earthquakes, with two competing points of view: one school of thought considers the phenomenon impossible to predict, while the other has invested resources and effort to achieve this task. It is an undeniable fact that the seismological community has been unsuccessful in developing methods to predict earthquakes despite more than a century of effort. Earthquake prediction has remained an unachieved objective for several reasons. One is the lack of technology for accurately monitoring stress changes, pressure, and temperature variations deep beneath the crust with scientific instruments, which results in the unavailability of comprehensive data about seismic features. A second probable cause is the gap between seismologists and computer scientists in exploring the various avenues of technology for this challenging task. With the advent of modern intelligent algorithms from computer science, significant results have been achieved in different fields of research, such as weather forecasting [1], churn prediction [2], and disease diagnosis [3]. Therefore, by bridging the gap between computer science and seismology, substantial outcomes may be achieved.

Contemporary studies show that scientists have approached earthquake prediction through diverse techniques. Retrospective studies of various earthquake precursory phenomena show anomalous trends corresponding to earthquakes, such as sub-soil radon gas emission [4], total electron content of the ionosphere [5], the vertical electric field, the magnetic field of the earth [6], and so forth. The bright aspect of recent work is the encouraging earthquake prediction results achieved through Computational Intelligence (CI) and Artificial Neural Networks (ANN) in combination with seismic parameters [7-12], which have initiated new lines of research and ideas to explore. There is a slight difference between earthquake prediction and earthquake forecasting, a continuously evolving distinction with various definitions stated in the literature [13]. In the authors' view, earthquake forecasting refers to assigning a certain probability to a future earthquake occurrence, while prediction refers to a Yes or No statement about a future earthquake, without any associated probability factor, irrespective of confidence in the prediction.

The earthquake prediction problem was initially treated as a time series prediction problem [14]. Later, seismic parameters were mathematically calculated on the basis of well-known seismic laws and facts, corresponding to every target earthquake (E_t). The calculation of seismic parameters corresponding to every E_t provides a feature vector related to E_t. Prediction is thus carried out on the basis of computed features in place of the time series of earthquakes, thereby converting a time series prediction problem into a classification problem. The mathematically calculated seismic parameters are meant to represent the internal geological state of the ground before earthquake occurrence. This research work employs the known mathematical methods and computes all the seismic features in a bid to retain maximum information, which leads to sixty seismic features corresponding to every earthquake occurrence (E_t). After acquiring the maximum available seismic features, Maximum Relevance and Minimum Redundancy (mRMR) based feature selection is applied to select the most relevant features carrying maximum information. Support Vector Regression (SVR) followed by a Hybrid Neural Network (HNN) model and Enhanced Particle Swarm Optimization (EPSO) is applied to model the relationship between feature vectors and their corresponding E_t, thereby generating a robust earthquake prediction model (SVR-HNN). The distinctive contributions of this research methodology are summarized as:

  • Sixty seismic features, calculated on the principle of retaining maximum information.
  • A unique application of SVR in combination with HNN, on mRMR selected seismic features.

Hindukush, Chile, and Southern California are three of the most seismically active regions in the world and are therefore considered in this study of the earthquake prediction problem. Earthquake prediction studies based on Artificial Neural Networks (ANN) have previously been performed on these regions [9, 11, 15]. The proposed prediction methodology is applied separately to each region and the results are evaluated. The suggested earthquake prediction model shows improved results compared to the other models proposed for these regions.

The rest of the manuscript is structured as follows: Section 2 contains the literature survey. Section 3 details the calculation of the dataset, while Section 4 explains the SVR-HNN based prediction model. Results are evaluated and discussed in Section 5.

2 Related literature

Numerous researchers have studied earthquakes from prediction and forecasting perspectives and from different angles. During the earthquake preparation process beneath the surface, different geophysical and seismological processes occur. These are believed to cause changes in sub-soil emission, the vertical electric field, and the ionosphere. Such precursory changes have been studied and mapped retrospectively against major earthquakes [4, 5]. Earthquake prediction has also been studied by observing behavioral changes in animals [16]: using motion-triggered cameras at Yanachaga National Park, Peru, a decline in animal activity was observed prior to the magnitude 7.0 Contamana earthquake of 2011. However, the focus of this research is earthquake prediction through computational intelligence and machine learning based methods.

Algorithm M8, which aims to forecast earthquakes of magnitude 8.0 and above, has been tested successfully in different regions of the globe and has increased the efficiency of intermediate-term earthquake forecasting. The algorithm analyzes the earthquake catalog and designates an alarm area, in a circular region, for the next five years [17, 18]. Several studies have been conducted based on this algorithm and its advanced stabilized version, the M8S algorithm, to forecast seismic events of magnitude 5.5 and above [19].

The three-dimensional Pattern Informatics (PI) approach has also been applied, aiming to forecast earthquakes with natural and synthetic data sets [20]. Considering the regional seismic environment, the method efficiently characterizes the spatial and temporal patterns of seismic activity occurring within the associated volume. This technique improves on the two-dimensional PI approach [21] in the sense that it resolves vertically distributed seismic anomalies in the presence of complex tectonic structures. Moreover, it produces forecasts by systematically analyzing anomalous behaviors in the seismicity at the regional level.

In another research work, earthquake prediction for the regions of Southern California and the San Francisco bay area has been studied. Eight seismic parameters are mathematically calculated from the temporal sequence of past seismicity. The seismic parameters are then used in combination with Recurrent Neural Networks (RNN), Back Propagation Neural Networks (BPNN), and Radial Basis Functions (RBF), separately. RNN yielded better results compared to the other two applied neural networks [9]. Later, the Probabilistic Neural Network (PNN) was applied to the same regions in combination with the same seismic parameters [8]; PNN is reported to have produced better results than RNN for earthquakes of magnitude less than 6.0. A similar approach with the same eight seismic parameters has also been used to perform earthquake prediction for the Hindukush region [11], where the Pattern Recognition Neural Network (PRNN) is reported to have outperformed other classifiers, such as RNN, random forest, and the LPBoost ensemble of trees.

Earthquake magnitude prediction for the northern Red Sea area is carried out in [22]. The methodology is based on feature extraction from the past available earthquake records, followed by a feedforward neural network. These features include the sequence number of past earthquakes, their respective locations, magnitude, and depth. Similar features have also been used for earthquake prediction in the Pakistan region using the BAT-ANN algorithm [23]. These features do not involve any seismological facts or laws; rather, direct modelling of the earthquake sequence number, magnitude, depth, and location against future earthquakes is proposed.

Alexandridis et al. used RBF networks to estimate the inter-event time between large earthquakes in the California earthquake catalog [24]. Aftershocks and foreshocks are removed from the catalog through the Reasenberg declustering technique before processing with the neural network. Seismicity rates are taken as input to the neural network, whereas the inter-event time between major earthquakes is taken as the output. Training of the RBF network is carried out through the Fuzzy Means algorithm.

The authors of [12, 15] proposed seven newly computed seismic parameters to be used in combination with ANN to predict earthquakes in Chile and the Iberian Peninsula. The methodology is capable of predicting seismic events of magnitude 5.0 and above over a horizon of 15 days. Further, in [25], the results were improved by performing feature selection through Principal Component Analysis. Similarly, Zamani et al. [26] carried out retrospective studies for the September 10th, 2008 Qeshm earthquake in Southern Iran, performing a spatio-temporal analysis of eight seismic parameters through RBF networks and the Adaptive Neural Fuzzy Inference System (ANFIS). A sensitivity analysis of different geophysical and seismological parameters is performed in [27], where earthquake prediction results are obtained for the regions of Chile through varying combinations of parameters along with variations in training and testing samples.

Last et al. [10] performed earthquake prediction for Israel and its neighboring countries. The historical earthquake data are first cleaned of foreshocks and aftershocks, and seismic parameters are then calculated. The computed parameters are employed for prediction in combination with the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm. The proposed prediction system is capable of predicting the maximum earthquake magnitude and the total number of seismic events for the next year.

All the aforementioned methodologies study earthquake prediction while focusing on only one region; the prediction models are neither applied and tested on other earthquake-prone regions, nor compared with the results of other research studies. In this research, the prediction model is applied to more than one region and comparisons are drawn with the results available in the literature.

3 Regions selection and feature calculation

3.1 Region selection and earthquake catalog.

In this study, three different regions, namely Hindukush, Chile, and Southern California, have been selected for the prediction of earthquakes of magnitude 5.0 and above. The same regions selected in precedent studies are considered for this research [9, 11, 15]; the advantage of selecting the same regions is that results can be compared in the end, so as to demonstrate the merit of the suggested methodology.

Earthquake catalogs of these regions have been obtained from the United States Geological Survey (USGS) [28] for the period from January 1980 to December 2016. These catalogs are initially evaluated for cut-off magnitude, the magnitude above which the catalog is complete and no seismic event is missing. Completeness depends upon the level of instrumentation: dense instrumentation in a region leads to a more complete catalog with a lower cut-off magnitude. The cut-off magnitude is found to be less than 2.6 for Southern California, 3.4 for Chile, and 4.0 for Hindukush, reflecting the density of instrumentation in these regions. Different methodologies are proposed in the literature for evaluating cut-off magnitude [29]; in this study, it is determined through the Gutenberg-Richter law, with the point where the curve deviates from exponential behavior selected as the cut-off magnitude. All events reported below the cut-off magnitude are removed from the catalog before it is used for parameter calculation. Earthquake magnitudes and frequencies of occurrence for each region are plotted in Fig 1. The curves follow a decreasing exponential behavior, which confirms that each catalog is complete down to its respective cut-off magnitude.
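The cut-off magnitude selection described above can be sketched in a few lines. This is an illustrative maximum-curvature-style heuristic (the magnitude bin holding the most events marks where the frequency-magnitude curve stops decaying exponentially), not the paper's exact visual procedure; the function name is ours.

```python
import numpy as np

def cutoff_magnitude(mags, bin_width=0.5):
    """Estimate the completeness (cut-off) magnitude of a catalog as the
    magnitude bin with the highest event count, below which the
    Gutenberg-Richter curve deviates from exponential decay."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=edges)
    return round(float(edges[np.argmax(counts)]), 1)
```

Events below the returned magnitude would then be dropped before parameter calculation.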


Fig 1. (a) The Hindukush catalog is complete up to M = 4.0; (b) the Chile catalog is complete up to M = 3.4; (c) the Southern California catalog is complete up to M = 2.6.

https://doi.org/10.1371/journal.pone.0199004.g001

After parameter calculation, a feature vector is obtained corresponding to every target earthquake (E_t). In this study, the earthquake prediction problem is modeled as a binary classification problem: every earthquake magnitude is converted to Yes or No (1, 0) by applying a threshold at magnitude 5.0. It is too early in this field of research to predict the actual magnitudes of future earthquakes; however, endeavors are under way to predict the categories of future events.
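The thresholding step can be written directly; a minimal sketch (the helper name is ours):

```python
import numpy as np

def label_events(magnitudes, threshold=5.0):
    """Convert target-event magnitudes to binary labels:
    1 if the event reaches the threshold (M >= 5.0 here), else 0."""
    return (np.asarray(magnitudes) >= threshold).astype(int)
```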

Fig 2 shows the overall flow of the research methodology. The earthquake catalog is the starting point of this process; therefore, the quality of the catalog directly affects the prediction results. The further processes involved are feature calculation, feature selection, and model training, and finally predictions are obtained on the unseen part of the dataset. In the end, the performance of the prediction model is evaluated and comparisons are drawn.


https://doi.org/10.1371/journal.pone.0199004.g002

3.2 Parameter calculation

Features are the most important part of a classification problem. In this study, features are also referred to as seismic parameters. These seismic parameters are calculated mathematically and are based upon well-known geophysical and seismological facts. Different geophysical and seismic parameters are suggested in the literature for earthquake prediction studies employing computational intelligence [9, 15, 26]. Discovering new geophysical and seismological facts leading to earthquake prediction is another aspect of such studies, which is not included in the scope of this research work. The seismic parameters calculated in this research are broadly classified, from the calculation perspective, into two main categories:

  • The seismic features whose calculation is not dependent upon any variable parameter are called non-parametric seismic features.
  • The seismic features whose calculation is dependent upon any variable parameter, such as a threshold, are called parametric seismic features.

The important contributions of this research from the perspective of seismic parameters are:

  • All the available geophysical and seismological features employed for earthquake prediction in contemporary literature are taken into account simultaneously, which has never been done before.
  • Multiple values of parametric seismic features have been calculated based upon different variations of a variable parameter, in order to retain maximum available information about the internal geological state of the ground.

All the seismic features are calculated using the 50 seismic events before the event of interest (E_t), which is to be predicted using the feature vector. The number of features reaches 60 for every instance under the suggested methodology. Later, in order to handle issues related to the curse of dimensionality, feature selection based on Maximum Relevance and Minimum Redundancy (mRMR) is employed to choose the features carrying the most relevant and discriminating information.
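Assembling instances from the 50 preceding events might look as follows. This is a sketch with a single stand-in feature (mean magnitude) in place of the 60 features described below; the names and the magnitude-only catalog representation are ours.

```python
import numpy as np

def build_instances(catalog_mags, window=50):
    """For each target event E_t, gather the `window` preceding magnitudes;
    feature functions are then applied to each window (placeholder here:
    mean magnitude only, standing in for the 60 features of the paper)."""
    X, y = [], []
    for t in range(window, len(catalog_mags)):
        past = catalog_mags[t - window:t]
        X.append([np.mean(past)])          # real pipeline: 60 seismic features
        y.append(1 if catalog_mags[t] >= 5.0 else 0)
    return np.array(X), np.array(y)
```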

3.2.1 Non-parametric seismic features.

As the calculation of non-parametric seismic features does not depend on any variable parameter, each such feature has a single value for every instance.

  • a and b value

These values are directly based on the well-known Gutenberg-Richter law. According to this law, the number of earthquakes increases exponentially with decreasing magnitude, as shown mathematically in Eq 1, where N_i is the total number of seismic events corresponding to magnitude M_i, b is the slope of the curve, and a is the y-intercept.

log10(N_i) = a − b·M_i    (Eq 1)

The values of a and b are calculated numerically through two different methods: Eqs 2 and 3 represent the linear least-squares regression (lsq) method, while Eqs 4 and 5 show the maximum likelihood (mlk) method. The least-squares method was proposed in an earthquake prediction study for Southern California [9], while the maximum likelihood method is preferred for earthquake prediction in Chile [15].

a_lsq = (Σ log10(N_i) + b_lsq · Σ M_i) / n    (Eq 2)

b_lsq = (n · Σ(M_i · log10(N_i)) − Σ M_i · Σ log10(N_i)) / ((Σ M_i)² − n · Σ M_i²)    (Eq 3)

a_mlk = log10(N) + b_mlk · M_min    (Eq 4)

b_mlk = log10(e) / (M_mean − M_min)    (Eq 5)

where the sums run over the n magnitude bins, N is the total number of events, M_min is the cut-off magnitude, and M_mean is the mean magnitude.
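Assuming the common formulations of these two estimators (a least-squares fit of the Gutenberg-Richter line to cumulative counts, and the Aki-Utsu maximum-likelihood b value), the calculation can be sketched as follows; the paper's exact equations may differ in detail, and the function names are ours.

```python
import numpy as np

def gr_values_lsq(mags, bin_width=0.5):
    """Least-squares fit of log10 N = a - b*M over cumulative counts N(M)."""
    levels = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    logN = np.log10([np.sum(mags >= m) for m in levels])
    slope, intercept = np.polyfit(levels, logN, 1)
    return intercept, -slope              # a, b (b is the negated slope)

def gr_values_mlk(mags, m_min):
    """Aki-Utsu maximum-likelihood b value; a follows from the total count."""
    b = np.log10(np.e) / (mags.mean() - m_min)
    a = np.log10(len(mags)) + b * m_min   # so that N(m_min) matches the catalog
    return a, b
```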

  • Seismic energy release

Seismic energy (dE) is continuously released from the ground in the form of small earthquakes, as shown in Eq 6. If this release stops, the phenomenon is known as quiescence, and the accumulated energy may later be released in the form of a major event. A state of quiescence may also reduce the seismic rate of the region, thereby decreasing the b value.

dE^(1/2) = Σ E_i^(1/2) / T,  where E_i = 10^(11.8 + 1.5·M_i) ergs    (Eq 6)
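A sketch of this computation, assuming the commonly used magnitude-energy relation E = 10^(11.8 + 1.5M) ergs and the rate-of-square-root form over the window's duration; the function name and the normalization by elapsed days are our assumptions.

```python
import numpy as np

def energy_release_rate(mags, days):
    """Rate of the square root of seismic energy released by the window's
    events, with E = 10**(11.8 + 1.5*M) ergs for each event of magnitude M."""
    sqrt_E = np.sqrt(10 ** (11.8 + 1.5 * np.asarray(mags)))
    return sqrt_E.sum() / days
```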

  • Time of n events

Time (T), in days, during which the n seismic events preceding E_t occurred, as shown in Eq 7. In this study, n is selected to be 50.

T = t_n − t_1    (Eq 7)

where t_1 and t_n are the times of the first and last of the n events.

  • Mean Magnitude

Mean magnitude (M_mean) refers to the mean magnitude of the n events, as shown in Eq 8. Usually the magnitude of seismic events rises before a larger earthquake.

M_mean = (Σ M_i) / n    (Eq 8)

  • Seismic rate changes

Seismic rate change is the overall increase or decrease in the seismic activity of a region between two different intervals of time. Two ways of calculating seismic rate changes have been proposed. The z value, shown in Eq 9, measures the seismic rate change as proposed by [30], where R_1 and R_2 are the seismic rates in the two intervals, S_1 and S_2 are the standard deviations of those rates, and n_1 and n_2 are the numbers of seismic events in the two intervals.

z = (R_1 − R_2) / sqrt(S_1²/n_1 + S_2²/n_2)    (Eq 9)

The other way of calculating seismic rate change, the β value, is suggested in [31] and given in Eq 10, where n represents the total number of events in the whole earthquake dataset, t is the total time duration, and δ is the normalized duration of interest. M(t, δ) is the number of events observed in the interval of interest, defined by the end time t and the interval δ. The z and β values have opposite signs and are independent of each other.

β(t, δ) = (M(t, δ) − n·δ) / sqrt(n·δ·(1 − δ))    (Eq 10)
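Both rate-change statistics are direct formula translations of the z and β definitions described above; a sketch (function names ours):

```python
import numpy as np

def z_value(r1, r2, s1, s2, n1, n2):
    """z test of the seismic rate change between two intervals: rates r1, r2,
    rate standard deviations s1, s2, event counts n1, n2."""
    return (r1 - r2) / np.sqrt(s1**2 / n1 + s2**2 / n2)

def beta_value(m_obs, n_total, delta):
    """beta statistic: excess of observed events M(t, delta) over the count
    expected in a normalized sub-interval delta of the whole catalog."""
    expected = n_total * delta
    return (m_obs - expected) / np.sqrt(n_total * delta * (1 - delta))
```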

  • Maximum magnitude in last seven days

The maximum magnitude recorded in the seven days preceding E_t is also considered an important seismic parameter, as reported in [12, 15], and is represented as x_6i; this notation is kept the same as in the literature for continuity. It is mathematically represented in Eq 11.

x_6i = max{ M_j : event j occurred within the seven days preceding E_t }    (Eq 11)

Thus the total number of seismic parameters obtained from the non-parametric features is 10.

3.2.2 Parametric features.

The formulae for parametric features contain a varying parameter, such as an earthquake magnitude threshold or the b value. All these features are calculated for the multiple available values of the varying parameter. The details of the parametric features are given below.

  • Probability of earthquake occurrence

The probability of occurrence of an earthquake of magnitude greater than or equal to 6.0 is also taken as an important seismic feature. It is represented by x_7i and calculated through Eq 12; its inclusion indirectly incorporates the Gutenberg-Richter law. The value of x_7i depends upon the b value, so b_lsq and b_mlk are used separately to calculate x_7i, giving two different values for this seismic feature.


  • Deviation from Gutenberg-Richter law

It is the deviation η of the actual data from the Gutenberg-Richter inverse law, as shown in Eq 13. This feature indicates how closely the actual data follow the inverse law of distribution. Its calculation depends upon the a and b values, which in turn gives two values for η.

η = Σ ( log10(N_i) − (a − b·M_i) )² / (n − 1)    (Eq 13)

  • Standard deviation of b value

The standard deviation of the b value, σb, is calculated using Eq 14. This feature is parametric because it is based upon the b value, which has two estimates, thereby adding two values of σb.

σb = 2.3 · b² · sqrt( Σ (M_i − M_mean)² / (n·(n − 1)) )    (Eq 14)

  • Magnitude deficit

Magnitude deficit (M_def) is the difference between the maximum observed earthquake magnitude and the maximum expected earthquake magnitude (Eq 16). The maximum expected magnitude is calculated through the Gutenberg-Richter law, as given in Eq 15. The two sets of a and b values are used separately to calculate M_def.

M_expected = a / b    (Eq 15)

M_def = M_observed,max − M_expected    (Eq 16)

  • Total recurrence time

Total recurrence time is also known as probabilistic recurrence time (T_r). It is defined as the expected time between two earthquakes of magnitude greater than or equal to M′ and is calculated using Eq 17. This parameter is another interpretation of the Gutenberg-Richter law: as is evident from the inverse law, T_r takes a different value for every value of M′, increasing with increasing magnitude. The available literature does not specify which value of M′ should be selected in such a scenario; therefore, following the principle of retaining maximum available information, T_r is calculated for every M′ from magnitude 4.0 to 6.0. The two sets of a and b values together with the varying M′ thus add 42 seismic features to the dataset.

T_r = T / 10^(a − b·M′)    (Eq 17)
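Assuming T_r follows from the Gutenberg-Richter relation as T_r = T / 10^(a − b·M′) and M′ is stepped by 0.1 from 4.0 to 6.0 (21 thresholds, which with the two (a, b) estimates gives the 42 features; the 0.1 step size is our assumption), the feature block can be sketched as:

```python
import numpy as np

def recurrence_times(a, b, T_days):
    """Probabilistic recurrence time T_r = T / 10**(a - b*M') for each
    threshold M' from 4.0 to 6.0 in 0.1 steps (21 values per (a, b) pair)."""
    m_primes = np.arange(4.0, 6.05, 0.1)
    return T_days / 10 ** (a - b * m_primes)
```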

4 Earthquake prediction model

Unlike the simpler earthquake prediction models previously proposed in the literature, a multistep prediction model (SVR-HNN) is suggested in this paper. It is a combination of various machine learning techniques, with every technique complementing the others through the knowledge acquired during learning; every step in the model thus adds robustness, resulting in an improved final prediction model. The layout of the overall prediction model is given in Fig 2. The dataset obtained for each of the three regions is divided into training and testing sets: 70% of the dataset is selected for training and validation, while testing is performed on the remaining 30% held-out dataset. The final results shown in Section 5 are the prediction results obtained on the test dataset for each region separately. The configurations and setups used to train the model are kept the same for all three regions' datasets; however, training has been performed separately for each region. The reason for separate training is that every region has different properties and can be classified tectonically into different categories, such as thrusting tectonics, strike-slip tectonics, and so forth. Every type of region therefore exhibits different behaviors and relations to earthquakes, and separate training for every region is meant to learn and model the relationship between seismic features and earthquakes for that particular region.

The proposed methodology includes a two-step feature selection process: features are selected after relevancy and redundancy checks, to make sure that only useful features are employed for earthquake prediction. The selected set of features is then passed to Support Vector Regression (SVR). The trend predicted by SVR is used in combination with the seismic features as input to the next stage of the prediction model, the ANN; the inclusion of the SVR output as a feature passes on the information learnt by SVR. After SVR, three different layers of neural networks are applied to the dataset in combination with Enhanced Particle Swarm Optimization (EPSO). The output of each ANN is used as an input to the next ANN, in place of the SVR output, along with the feature set. The weight adjustments of each ANN layer are also passed to the next ANN, so that the next ANN does not start learning from scratch. The purpose of including EPSO is to optimize the weights of the ANNs, which have a tendency to get trapped in local minima; if an ANN becomes stuck in a local minimum during training, EPSO plays a vital role. A similar approach has also been used successfully in other fields, such as wind power prediction [32, 33]. The flowchart of the earthquake prediction methodology is provided in Fig 3.
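A toy sketch of this cascade on synthetic data, using scikit-learn stand-ins (SVR for the first stage, MLPClassifier in place of the three EPSO-trained networks; EPSO itself and the weight hand-off between stages are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                # mRMR-selected features (toy)
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # binary target (M >= 5.0 or not)

aux = SVR().fit(X, y).predict(X)             # stage 1: SVR magnitude trend
for _ in range(3):                           # stages 2-4: chained ANNs
    stage_input = np.column_stack([X, aux])  # previous output as extra feature
    net = MLPClassifier(max_iter=500, random_state=0).fit(stage_input, y)
    aux = net.predict_proba(stage_input)[:, 1]
pred = net.predict(stage_input)
```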


https://doi.org/10.1371/journal.pone.0199004.g003

4.1 Feature selection

A total of 60 seismic features is computed for every instance. This approach to feature calculation is useful in that it gathers the maximum obtainable information; however, a reduced feature set can be selected for the learning process instead of utilizing the complete set of 60 features, since features carrying little discriminating information, or redundant information, can be excluded. To deal with this, a two-step feature selection based on maximum relevance and minimum redundancy is applied: after all available features are calculated, the mRMR method selects those having the most relevant and discriminating information. This step also helps avoid the curse of dimensionality, a major issue in machine learning algorithms. The feature selection step is embedded as part of the prediction model because it is applied separately to each dataset. The different seismic regions considered in this study exhibit varying seismic properties, recorded in the calculated features, and thus yield different seismic datasets. A feature carrying insignificant information in the Hindukush dataset may show the opposite trend for Chile or Southern California, which can actually be observed when feature selection is applied to all these regions. It is therefore inappropriate to declare the features selected for a specific region to be best for all the other regions.

4.1.1 Maximum relevance.

The irrelevancy filter is the first of the two feature selection steps. It removes all features that are irrelevant, i.e., carry too little information content to be useful for prediction. It was proposed in [34], and certain modifications to its formulation are suggested in [35]. This technique has already been used for feature selection in different classification problems, such as medical image processing and cancer classification; here it is used for the first time for seismic feature selection. The methodology involves calculating the mutual information (MI) of every feature with respect to the target earthquakes (E_t) in binary form. A suitable threshold is applied to the MI, and all features with MI less than the threshold are discarded. The value of the threshold is kept fixed in this model for all the considered earthquake datasets.
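A sketch of this step using scikit-learn's mutual information estimator; the threshold value is illustrative rather than the one used in the paper, and the function name is ours.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def relevance_filter(X, y, threshold=0.1):
    """Keep only the feature columns whose estimated mutual information with
    the binary target exceeds a fixed threshold (the irrelevancy filter)."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.where(mi >= threshold)[0]
```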

The irrelevancy criterion filters out different features for each of the three regions. For the Hindukush region, 5 features are filtered out at this step and the remaining 55 features are passed to the next step. Similarly, for the dataset of Chile, 4 features are excluded by the irrelevancy filter and 56 are passed on. In the Southern California dataset, 55 features are considered fit for the next step after leaving out 5 features.

4.1.2 Minimum redundancy.

Some features carry redundant information in the dataset, and their inclusion for earthquake prediction is of no use; removing them is the second phase of feature selection. The idea of the feature redundancy filter is proposed in [34], and certain changes to the implementation are given in [32]. The basic idea behind the technique is that MI is calculated between all pairs of features, and any two features possessing high mutual MI are considered to carry redundant information. A redundancy criterion (RC) is therefore selected empirically and kept fixed in the learning process for all three regions.

Like the irrelevancy filter, this technique may yield different results for each region. In the Hindukush dataset, 32 of the 55 relevant features are found to be redundant and discarded, leaving 23 effective features. Similarly, the Chile dataset contains 42 redundant features out of 56 relevant ones, leaving 14 useful features, and for the Southern California dataset 25 useful features are selected after excluding 30 redundant ones.
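The redundancy step can be sketched as a greedy pairwise-MI filter (the RC value below is illustrative, and discretised feature columns are assumed):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def redundancy_filter(X, rc=0.5):
    """Greedily drop any feature whose pairwise mutual information with
    an already-kept feature exceeds the redundancy criterion `rc`;
    return the indices of the retained columns."""
    kept = []
    for j in range(X.shape[1]):
        if all(mutual_info_score(X[:, k], X[:, j]) <= rc for k in kept):
            kept.append(j)
    return kept
```

An exact duplicate of a kept column exceeds any reasonable RC and is removed, while an independent column passes through.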

4.2 Support vector regression

SVR is a machine learning technique that learns in a supervised manner. It was first proposed in [ 36 ]; the implementation here is carried out using LIBSVM (a library for support vector machines) [ 37 ]. Support vector methods have a wide range of applications in both classification and regression problems.

The model generated through training of SVR gives predictions of an estimated earthquake magnitude corresponding to the feature vectors. SVR then imparts its knowledge of predicted earthquake magnitudes to the HNN in the next step through auxiliary predictions, used as part of the feature set. Experiments show that the auxiliary predictions from SVR, when used in combination with the features, add a distinctive classification capability to the prediction model.
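The SVR stage can be sketched with scikit-learn's `SVR` in place of LIBSVM (both implement epsilon-SVR; the kernel and parameters below are illustrative, not the paper's tuned values): train on estimated magnitudes, then append the magnitude predictions to the feature matrix as an auxiliary predictor.

```python
import numpy as np
from sklearn.svm import SVR

def add_svr_auxiliary(X_train, mag_train, X_test):
    """Fit an epsilon-SVR on estimated magnitudes and append its
    predictions to both feature matrices as one extra column."""
    svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    svr.fit(X_train, mag_train)
    aux_tr = svr.predict(X_train).reshape(-1, 1)
    aux_te = svr.predict(X_test).reshape(-1, 1)
    return np.hstack([X_train, aux_tr]), np.hstack([X_test, aux_te])
```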

4.3 Enhanced Particle Swarm Optimization

Different nature-inspired optimization algorithms exist in the literature [ 38 – 44 ]; however, EPSO is employed in this study for weight optimization of the ANNs. EPSO is an evolutionary algorithm based on particle swarm optimization (PSO), which was proposed in [ 45 ]. In this methodology, the search space is explored for the best possible solution or position, much as a bird or an organism searches for food. Different factors affect the hunt for the best solution, such as the current position and velocity, and records of the best local position, the best global position and the worst global position are kept to generate an optimized solution. EPSO is a well-established optimization methodology used in different application fields and is explained in detail in [ 32 ].
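The plain PSO update underlying EPSO can be sketched as follows (a minimal maximiser; the inertia and acceleration coefficients are illustrative, and the paper's EPSO enhancements, such as tracking the worst global position, are omitted):

```python
import numpy as np

def pso_maximise(fitness, dim, n_particles=20, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: each particle remembers its own
    best position, the swarm tracks the global best, and velocities blend
    the pull towards both."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        g = pbest[np.argmax(pbest_fit)].copy()
    return g, pbest_fit.max()
```

In the SVR-HNN context the fitness function would score a candidate weight vector by the MCC it achieves on the training data.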

4.4 Hybrid Neural Networks

The next step of the prediction model is to train the hybrid neural network. This step combines three different ANNs with optimization support from EPSO. Training is carried out through the first ANN and the weights are then passed to EPSO for further optimization. If the ANN is stuck in a local minimum, EPSO has the capability to guide it out of this situation. The optimization measure for EPSO is the Matthews correlation coefficient (MCC). If the ANN has already learnt the best possible relation between features and earthquakes, EPSO returns the same weight matrix. Thus, EPSO is included in this methodology to save the ANNs from being trapped in local minima.

Training in this step addresses a binary classification problem, the intention being to predict earthquakes of magnitude 5.0 and above. Initially, the dataset, along with the auxiliary predictions from SVR, is passed to a Levenberg-Marquardt neural network. The network trains by backpropagation, where the error at the output layer is propagated back in every epoch. When training stops, the weight matrices are passed to EPSO along with the training features and targets. EPSO optimizes the weights in terms of MCC and returns them to the ANN. The predictions from the EPSO-optimized weights then replace the SVR output as auxiliary predictions, to be used along with the features for the next ANN.

A BFGS quasi-Newton backpropagation neural network (BFGSNN) is initialized with the weights of the previously trained network; in this way, the information already learnt is transferred to the next network along with the auxiliary predictors. The network is trained similarly, and the weights and training data are passed to EPSO for optimization via MCC. The optimized weights are then used to initialize a Bayesian regularization neural network (BRNN), and the training data is passed in combination with the BFGSNN auxiliary predictors. EPSO is again employed to optimize the network weights. The pseudo-code of the procedure is given below:

Step-by-step procedure of the SVR-HNN model:

Inputs: T, N, RF, SVR, NN, EPSO, HNN

[T = training dataset; N = number of layers of the neural network; RF = relevant features; SVR = Support Vector Regressor; NN = neural network; EPSO = Enhanced Particle Swarm Optimization; HNN = hybrid neural network (combined package of NN + EPSO)]

    RF = mRMR(T)
    [SVR_Model, SVR_Predictions] = SVR[RF]
    for j = 1 to 3                      (j indexes the neural network and EPSO layer)
        if (j == 1)                     (the first NN takes SVR_Predictions as auxiliary input)
            [NN_j_Model, NN_j_Predictions] = NN_j[RF, SVR_Predictions]
            (EPSO optimizes NN_j if it is trapped in a local minimum)
            [NN_j_Model, NN_j_Predictions] = EPSO_j[RF, NN_j_Model, SVR_Predictions]
        else
            (the predictions of the previous NN become input to the next NN)
            [NN_j_Model, NN_j_Predictions] = NN_j[RF, NN_(j-1)_Predictions]
            [NN_j_Model, NN_j_Predictions] = EPSO_j[RF, NN_j_Model, NN_(j-1)_Predictions]
        endif
    endfor
    SVR_HNN_Model = NN_3_Model          (the model obtained after SVR and all NN layers supported by EPSO)
    Performance_Evaluation = SVR_HNN_Model[Test_Dataset, Actual_Labels]

In this work, the output of SVR is treated as its own opinion about earthquake occurrence. Thus, in a bid to improve earthquake prediction performance, a combination of three ANNs and EPSO, called the hybrid neural network (HNN), is formulated. Seismic features are passed to the HNN to yield the predictions. At this stage, the results obtained through SVR are considered auxiliary predictions and passed on to the HNN along with the other features. The opinion of SVR also holds importance in discriminating between earthquake and non-earthquake occurrences; coupled with the other seismic features, it greatly improves the discriminating power of the model, resulting in improved prediction performance of the HNN. In the HNN, the weights and outputs of each ANN are passed to the next layer, so that the subsequent layer starts learning ahead of a certain point. Thus, the feedback of SVR or of the ANN at a specific layer enhances the learning of the HNN, leading to improved earthquake prediction. The role of EPSO is to rescue an ANN trapped in a local minimum by optimizing its weights; when the ANN is performing well, EPSO refrains from updating them. The model is trained using 70% of the feature instances and independent testing is later performed on the unseen 30% of the data. Within the training data, two-thirds is used for training the algorithm and one-third for model validation. The evaluation of the model is performed on unseen feature instances using well-known evaluation measures.
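The 70/30 train-test and two-thirds/one-third train-validation splits described above can be sketched as follows (the stratification and the array shapes are assumptions for illustration; the paper does not state how the splits are drawn):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative feature matrix and binary targets (shapes only; not real data)
X = np.random.rand(100, 23)
y = np.r_[np.ones(30), np.zeros(70)]

# 70% training / 30% unseen test, preserving the imbalanced class ratio
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Within the training data: two-thirds fit / one-third validation
X_fit, X_val, y_fit, y_val = train_test_split(
    X_tr, y_tr, test_size=1 / 3, stratify=y_tr, random_state=0)
```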

5 Results and discussion

There are in total 7656 feature instances for Chile, of which 2067 correspond to "Yes" (earthquake) and 5589 to "No" (no earthquake). The total for Southern California is 33543 instances, of which 7671 belong to "Yes" and 25872 to "No". Similarly, for the Hindukush dataset, 1379 of the 4350 instances correspond to "Yes" and 2971 to "No". The data distributions of the considered regions are therefore highly imbalanced, as shown in Fig 4 .


Fig 4. Distribution of feature vectors corresponding to earthquakes and non-earthquakes in the datasets for (a) Hindukush, (b) Chile, (c) Southern California.

https://doi.org/10.1371/journal.pone.0199004.g004

5.1 Performance evaluation criteria

Well-known performance metrics are available for the evaluation of binary classification results, such as sensitivity, specificity, positive predictive value or precision (P1), negative predictive value (P0), Matthews correlation coefficient (MCC) and R score. Whenever predictions are made by an algorithm, they fall into four categories of outputs, namely true positive (TP), false positive (FP), true negative (TN) and false negative (FN). TP and TN are the correct predictions made by the prediction model, while FP and FN are its wrong predictions.

Sensitivity (Sn) is the rate of true positive predictions among all positive instances, while specificity (Sp) is the rate of true negative predictions among all negative instances. P1 is the ratio of correct positive predictions to all positive predictions made by the model, whereas P0 refers to the correct negative predictions out of all negative predictions. P1 relates inversely to false alarms: a higher P1 means fewer false alarms, and vice versa. R score and MCC are proposed as balanced measures for binary classification evaluation. They are calculated from all four basic measures (TP, FP, TN, FN) and vary between -1 and +1: values approaching +1 correspond to perfect classification, 0 to totally random behavior of the prediction algorithm, and -1 to behavior opposite to the targets. These two measures can be considered benchmark measures for comparison, because the four basic measures are incorporated in them, as shown in Eqs 22 and 23 . Accuracy is a very general evaluation criterion used in many contexts; it states the overall fraction of correct predictions made by the algorithm.

The purpose of analyzing the same results through all of these criteria is that each performance metric highlights a certain aspect of the results, so together they expose all the merits and demerits of the proposed prediction models. The formulas for all the mentioned criteria are given in Eqs 18 to 24 .

Sn = TP / (TP + FN)  (18)
Sp = TN / (TN + FP)  (19)
P1 = TP / (TP + FP)  (20)
P0 = TN / (TN + FN)  (21)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))  (22)
R score = TP / (TP + FN) − FP / (FP + TN)  (23)
Accuracy = (TP + TN) / (TP + FP + TN + FN)  (24)
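These measures can be computed directly from the four basic counts; a minimal sketch follows (the R score is written here in the hit-rate-minus-false-alarm-rate form consistent with the description in the text):

```python
import math

def metrics(tp, fp, tn, fn):
    """Evaluation measures from the four basic counts of a binary classifier."""
    sn = tp / (tp + fn)                       # sensitivity
    sp = tn / (tn + fp)                       # specificity
    p1 = tp / (tp + fp)                       # positive predictive value
    p0 = tn / (tn + fn)                       # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)     # accuracy
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / den if den else 0.0
    r = sn - fp / (fp + tn)                   # R score: POD minus false alarm rate
    return dict(Sn=sn, Sp=sp, P1=p1, P0=p0, Acc=acc, MCC=mcc, R=r)
```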

5.2 Earthquake prediction results

The SVR-HNN model is separately trained for all three regions, using 70% of each dataset, and the prediction results are evaluated for all regions. Accuracy is widely used for performance evaluation, but in imbalanced classification problems it may not be the best choice on its own. A dataset in which one class is abundant and the other scarce is said to be imbalanced. For example, in a dataset of 100 instances, if only 10 correspond to earthquake occurrence while the remaining 90 belong to no-earthquake, then a prediction algorithm with no real predictive capability could predict all of them as no-earthquake. Such an algorithm has no knowledge of predictability, yet its accuracy would still be 90%, misrepresenting the overall capability of the model. MCC and R score, however, would both yield 0 in this scenario, giving better insight into the competence of the prediction model. This is why different aspects of a prediction model are evaluated for better analysis. Earthquake prediction is a highly delicate issue: false alarms may lead to financial loss and cause panic, and therefore cannot be tolerated. A prediction model with even less than 50% sensitivity but a better P1 is preferable to one with around 90% sensitivity but a lower P1. Considering that no operational earthquake prediction system exists to date, the results obtained through the SVR-HNN model are encouraging.
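The 90%-accuracy scenario described above can be reproduced in a few lines (the counts are the hypothetical 10/90 example from the text):

```python
# A zero-skill classifier that always answers "no earthquake" on a test
# set with 10 earthquake and 90 no-earthquake instances
tp, fp, tn, fn = 0, 0, 90, 10

accuracy = (tp + tn) / (tp + fp + tn + fn)        # 0.9, despite zero skill

# MCC's denominator vanishes when a whole row or column of the confusion
# matrix is empty; it is conventionally set to 0 in that case
den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
mcc = (tp * tn - fp * fn) / den if den else 0.0   # 0.0: no predictive power
```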

5.2.1 Predictions for Hindukush region.

Asim et al. [ 11 ] carried out earthquake prediction studies for the Hindukush region, where considerable prediction results were obtained through different machine learning techniques. The pattern recognition neural network (PRNN) yielded better results than the other methods discussed, so PRNN is chosen here for comparison with the SVR-HNN prediction methodology. Table 1 shows that the SVR-HNN predictions outperform the PRNN-based predictions by a wide margin in all aspects except sensitivity. MCC improves considerably from 0.33 to 0.60 and the R score from 0.27 to 0.58. P1 improves from 61% to above 75%, thus reducing false alarm generation for the Hindukush region from 39% to less than 25%. The decreased sensitivity is acceptable given the notable improvement in P1, MCC, R score and accuracy: a model with higher sensitivity may also sensitize to false earthquakes, generating false alarms, so a model robust to false alarms may show lower sensitivity, which is acceptable provided the other performance criteria improve.


https://doi.org/10.1371/journal.pone.0199004.t001

5.2.2 Predictions for Chile region.

The results improve even further for the Chile region with the SVR-HNN prediction model. Previously, Reyes et al. [ 15 ] carried out earthquake prediction for four Chilean regions using different machine learning techniques, with an ANN achieving the best results. The ANN-based results for all the Chilean regions are averaged (averaging the TPs, FPs, TNs and FNs) in this study for comparison with the SVR-HNN results. The proposed SVR-HNN approach outperforms the ANN-based Chilean results in all aspects by considerable margins, with only a marginal difference in specificity, as evident from Table 2 . False alarm generation is reduced from 39% to less than 27%, along with a 5% increase in accuracy. A considerable difference can be observed in MCC and R score, which increase from 0.39 to 0.61 and from 0.34 to 0.60, respectively.


https://doi.org/10.1371/journal.pone.0199004.t002

5.2.3 Predictions for Southern California region.

The Southern California region was considered earlier for earthquake prediction using ANNs by Panakkat and Adeli [ 9 ], where a recurrent neural network (RNN) was reported to produce the best results in terms of R score. Their results were evaluated in terms of false alarm ratio (FAR), probability of detection (POD, i.e. sensitivity), frequency bias (FB) and R score. To discuss the results under the same evaluation criteria, the basic performance parameters (TP, TN, FP and FN) are recovered from the set of four equations for FAR, POD, FB and R score; the other evaluation criteria are then calculated and given in Table 3 . The results generated through the SVR-HNN methodology are better than the RNN-based results of [ 9 ]. False alarm generation is decreased considerably from 29% to less than 7%, and a noteworthy increase in MCC and R score is observed, from 0.51 to 0.722 and from 0.51 to 0.62, respectively, showing the SVR-HNN prediction methodology to be better than the previously available prediction models. SVR-HNN outperforms previous models because it is a multilayer model, with every layer adding to its robustness: SVR provides an initial estimate of the earthquake predictions, which is further refined by three different ANNs with EPSO supporting them through optimization.


https://doi.org/10.1371/journal.pone.0199004.t003

5.2.4 Interregional comparison of earthquake prediction.

The SVR-HNN prediction model shows the best results for Southern California, with MCC, R score and accuracy of 0.722, 0.623 and 90.6%, respectively. The Chilean region holds second position, with an MCC of 0.613, an R score of 0.603 and an accuracy of 84.9%, while the lowest results are obtained for the Hindukush region, with MCC, R score and accuracy of 0.60, 0.58 and 82.7%, respectively. Fig 5 graphically compares the prediction results for the three regions, scaled between 0 and 1. This inter-regional comparison provides practical evidence that a lower cut-off magnitude and better completeness of the earthquake catalog lead to improved earthquake prediction results. The completeness magnitude is 2.6 for Southern California, 3.4 for Chile and 4.0 for the Hindukush, demonstrating that earthquake prediction results are inversely related to the cut-off magnitude. In other words, it can be inferred that dense instrumentation for earthquake monitoring plays a key role in better earthquake prediction.


https://doi.org/10.1371/journal.pone.0199004.g005

The overall achievements of this study are summarized as follows:

  • The SVR-HNN prediction methodology generates considerably improved results for Hindukush, Chile and Southern California, outperforming the other prediction results available in the literature.
  • The inter-regional comparison of prediction results shows that a region with a better-maintained earthquake catalog and a lower cut-off magnitude is capable of generating better prediction results, highlighting the need for better instrumentation for earthquake catalog maintenance.

5.2.5 Comparison between individual techniques and SVR-HNN.

To demonstrate the superiority of the SVR-HNN model, earthquake prediction performance is also computed separately for SVR and for HNN, each in an independent arrangement. The performance of SVR and of HNN is compared with the combined SVR-HNN results, and a notable difference between the individual techniques and their combination can be observed. MCCs of 0.43 and 0.41 are obtained for the Hindukush region using SVR and HNN, respectively; combining the two techniques by including SVR as an auxiliary predictor for HNN improves the MCC to 0.58. A similar trend of improvement is observed in the other performance measures, as shown in Table 4 , for the three considered regions, proving the superiority of SVR-HNN over its separate counterparts.


https://doi.org/10.1371/journal.pone.0199004.t004

The role of EPSO is to optimize the weights of the neural networks. Artificial neural networks have a tendency to become trapped in local minima; if this happens during training, EPSO helps the ANN escape and guides it towards the optimum solution. This does not happen in every training run, only occasionally. If the ANN is already performing well and learning in the right direction, the inclusion of EPSO does not affect its weights. The term HNN stands for the combination of three different neural networks coupled with EPSO: on some training occasions, HNN without EPSO shows results similar to HNN with EPSO, while on others it shows lower performance.

5.2.6 Performance stability.

The performance of SVR, HNN and EPSO depends on the selection of their respective parameters, so the collective performance of the SVR-HNN model is governed by the selection of appropriate parameter values. Extensive experimentation was performed to empirically select parameter values that obtain good results for each of the algorithms used (SVR, HNN and EPSO). The proposed model shows consistent performance for the three regions, which supports the appropriateness of the selected values. Ten simulation runs of the SVR-HNN model show little variation, strengthening the claim of appropriate parameter selection and a stable earthquake prediction performance in the three regions. The graphs in Fig 6 show the performance stability of the SVR-HNN model for the three considered regions.


https://doi.org/10.1371/journal.pone.0199004.g006

Conclusions

In this study, interdisciplinary research has been carried out for earthquake prediction through the interaction of seismology-based earthquake precursors and computer-science-based computational intelligence techniques. A robust multilayer prediction model is generated in combination with the computation of the maximum obtainable seismic features. Sixty seismic features are computed for the Hindukush, Chile and Southern California regions. Separate feature selection is performed for every region through the maximum relevance and minimum redundancy (mRMR) approach, and the selected features are employed to train an earthquake prediction model. The prediction model consists of a Support Vector Regressor (SVR) followed by Hybrid Neural Networks (HNN) supported by Enhanced Particle Swarm Optimization (EPSO). SVR provides an initial estimate for earthquake prediction, which is passed to the HNN as an auxiliary predictor in combination with the features; three different neural networks are then employed along with EPSO weight optimization. The SVR-HNN prediction model is thus trained and tested successfully, with encouraging and improved results for all three regions.

Acknowledgments

The authors would like to thank friends and colleagues for their enormous support and valuable discussions; in particular, Mr. Najeebullah Khan was very helpful in the technical discussion of the prediction model. The authors would also like to thank the Spanish Ministry of Economy and Competitiveness and the Junta de Andalucía for support under projects TIN2014-55894-C2-R and P12-TIC-1728, respectively.

  • 28. Quaternary fault and fold database for the United States [Internet]. USGS; 2006 [cited June 2016]. Available from: http://earthquake.usgs.gov/hazards/ .
  • 45. Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95); 1995. IEEE.

  • Abayon, R.C.; Apilado, J.R.; Pacis, D.B.; Chua, M.G.; Aguilar, A.V.; Calim, J.; Padilla, S.M.A.; Puno, J.C.S.; Apsay, M.R.B.; Bustamante, R. A Weather Prediction and Earthquake Monitoring System. In Proceedings of the 2018 IEEE Conference on Systems, Process and Control (ICSPC), Malacca, Malaysia, 14–15 December 2018; pp. 203–208. [ Google Scholar ]
  • Shibli, M. A novel approach to predict earthquakes using adaptive neural fuzzy inference system and conservation of energy-angular momentum. Int. J. Comp. Inf. Syst. Ind. Manag. Appl. 2011 , 2150 , 371–390. [ Google Scholar ]
  • Torres, V.M.; Castillo, O. A Type-2 Fuzzy Neural Network Ensemble to Predict Chaotic Time Series. In Studies in Computational Intelligence ; Springer: Berlin/Heidelberg, Germany, 2015; Volume 601, pp. 185–195. [ Google Scholar ]
  • Tosunoğlu, N.G.; Apaydin, A. A New Spatial Algorithm Based on Adaptive Fuzzy Neural Network for Prediction of Crustal Motion Velocities in Earthquake Research. Int. J. Fuzzy Syst. 2018 , 20 , 1656–1670. [ Google Scholar ] [ CrossRef ]
  • Kamath, R.S.; Kamat, R.K. Earthquake Magnitude Prediction for Andaman-Nicobar Islands: Adaptive Neuro Fuzzy Modeling with Fuzzy Subtractive Clustering Approach. J. Chem. Pharm. Sci. 2017 , 10 , 1228–1233. [ Google Scholar ]
  • Asim, K.M.; Moustafa, S.S.; Niaz, I.A.; Elawadi, E.A.; Iqbal, T.; Martínez-Álvarez, F. Seismicity analysis and machine learning models for short-term low magnitude seismic activity predictions in Cyprus. Soil Dyn. Earthq. Eng. 2020 , 130 , 105932. [ Google Scholar ] [ CrossRef ]
  • Vasti, M.; Dev, A. Classification and Analysis of Real-World Earthquake Data Using Various Machine Learning Algorithms. In Lecture Notes in Electrical Engineering ; Springer: Singapore, 2019; pp. 1–14. [ Google Scholar ]
  • Mukhopadhyay, U.K.; Sharma, R.N.K.; Anwar, S.; Dutta, A.D. Correlating Thermal Anomaly with Earthquake Occurrences Using Remote Sensing ; Springer: Berlin/Heidelberg, Germany, 2019; pp. 863–875. [ Google Scholar ]
  • Karimzadeh, S.; Matsuoka, M.; Kuang, J.; Ge, L. Spatial Prediction of Aftershocks Triggered by a Major Earthquake: A Binary Machine Learning Perspective. ISPRS Int. J. Geo-Inf. 2019 , 8 , 462. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zhou, Z.; Lin, Y.; Zhang, Z.; Wu, Y.; Johnson, P. Earthquake Detection in 1D Time-Series Data with Feature Selection and Dictionary Learning. Seism. Res. Lett. 2019 , 90 , 563–572. [ Google Scholar ] [ CrossRef ]
  • Corbi, F.; Sandri, L.; Bedford, J.; Funiciello, F.; Brizzi, S.; Rosenau, M.; Lallemand, S. Machine Learning Can Predict the Timing and Size of Analog Earthquakes. Geophys. Res. Lett. 2019 , 46 , 1303–1311. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Kong, Q.; Trugman, D.T.; Ross, Z.E.; Bianco, M.; Meade, B.J.; Gerstoft, P. Machine Learning in Seismology: Turning Data into Insights. Seism. Res. Lett. 2018 , 90 , 3–14. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Galkina, A.; Grafeeva, N. Machine Learning Methods for Earthquake Prediction: A Survey. In Proceedings of the Fourth Conference on Software Engineering and Information Management (SEIM-2019), Saint Petersburg, Russia, 13 April 2019; full papers. p. 25. [ Google Scholar ]
  • Gitis, V.G.; Derendyaev, A. Machine Learning Methods for Seismic Hazards Forecast. Geosciences 2019 , 9 , 308. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Al-Najjar, H.A.H.; Kalantar, B.; Pradhan, B.; Saeidi, V. Conditioning factor determination for mapping and prediction of landslide susceptibility using machine learning algorithms. In Earth Resources and Environmental Remote Sensing/GIS Applications X ; International Society for Optics and Photonics: California, CA, USA, 2019; Volume 11156, p. 111560K. [ Google Scholar ]
  • Ganter, T.; Sundermier, A.; Ballard, S. Alternate Null Hypothesis Correlation: A New Approach to Automatic Seismic Event Detection. Bull. Seism. Soc. Am. 2018 , 108 , 3528–3547. [ Google Scholar ] [ CrossRef ]
  • Mosavi, A.; Salimi, M.; Ardabili, S.F.; Rabczuk, T.; Shamshirband, S.; Várkonyi-Kóczy, A.R. State of the Art of Machine Learning Models in Energy Systems, a Systematic Review. Energies 2019 , 12 , 1301. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Mosavi, A.; Ozturk, P.; Chau, K.-W. Flood Prediction Using Machine Learning Models: Literature Review. Water 2018 , 10 , 1536. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Dineva, A.; Mosavi, A.; Ardabili, S.F.; Vajda, I.; Shamshirband, S.; Rabczuk, T.; Chau, K.-W. Review of Soft Computing Models in Design and Control of Rotating Electrical Machines. Energies 2019 , 12 , 1049. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Nosratabadi, S.; Mosavi, A.; Shamshirband, S.; Zavadskas, E.K.; Rakotonirainy, A.; Chau, K.-W. Sustainable Business Models: A Review. Sustainability 2019 , 11 , 1663. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Zhang, L.; Si, L.; Yang, H.; Hu, Y.; Qiu, J. Precursory Pattern Based Feature Extraction Techniques for Earthquake Prediction. IEEE Access 2019 , 7 , 30991–31001. [ Google Scholar ] [ CrossRef ]
  • Kayastha, P.; Bijukchhen, S.; Dhital, M.R.; De Smedt, F. GIS based landslide susceptibility mapping using a fuzzy logic approach: A case study from Ghurmi-Dhad Khola area, Eastern Nepal. J. Geol. Soc. India 2013 , 82 , 249–261. [ Google Scholar ] [ CrossRef ]
  • Lu, J.; Hu, S.; Niu, Z.; You, R. The Application of Fuzzy Comprehensive Evaluation Model in Landslide Prediction. In Proceedings of the 2010 3rd International Conference on Information Management, Innovation Management and Industrial Engineering, Kunming, China, 26–28 November 2010; Volume 4, pp. 612–615. [ Google Scholar ]
  • Mallick, J.; Singh, R.K.; Alawadh, M.A.; Islam, S.; Khan, R.A.; Qureshi, M.N. GIS-based landslide susceptibility evaluation using fuzzy-AHP multi-criteria decision-making techniques in the Abha Watershed, Saudi Arabia. Environ. Earth Sci. 2018 , 77 , 276. [ Google Scholar ] [ CrossRef ]
  • Mohsin, S.; Azam, F. Computational seismic algorithmic comparison for earthquake prediction. Int. J. Geol. 2011 , 5 , 53–59. [ Google Scholar ]
  • Sengar, S.S.; Kumar, A.; Ghosh, S.K.; Wason, H.R.; Raju, P.L.N.; Murthy, Y.V.N.K. Earthquake-induced built-up damage identification using fuzzy approach. Geomat. Nat. Hazards Risk 2013 , 4 , 320–338. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Sun, D.; Sun, B. Rapid prediction of earthquake damage to buildings based on fuzzy analysis. In Proceedings of the 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery, Yantai, China, 10–12 August 2010; Volume 3, pp. 1332–1335. [ Google Scholar ]
  • Huang, L.; Xiang, L.-Y. Method for Meteorological Early Warning of Precipitation-Induced Landslides Based on Deep Neural Network. Neural Process. Lett. 2018 , 48 , 1243–1260. [ Google Scholar ] [ CrossRef ]
  • Li, W.; Narvekar, N.; Nakshatra, N.; Raut, N.; Sirkeci, B.; Gao, J. Seismic Data Classification Using Machine Learning. In Proceedings of the 2018 IEEE Fourth International Conference on Big Data Computing Service and Applications (BigDataService), Bamberg, Germany, 26–29 March 2018; pp. 56–63. [ Google Scholar ]
  • Asim, K.M.; Idris, A.; Iqbal, T.; Martínez-Álvarez, F. Earthquake prediction model using support vector regressor and hybrid neural networks. PLoS ONE 2018 , 13 , e0199004. [ Google Scholar ] [ CrossRef ]
  • Asencio–Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez–Álvarez, F. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure. Comput. Geosci. 2018 , 115 , 198–210. [ Google Scholar ] [ CrossRef ]
  • Hoang, N.-D.; Bui, D.T.T. Predicting earthquake-induced soil liquefaction based on a hybridization of kernel Fisher discriminant analysis and a least squares support vector machine: A multi-dataset study. Bull. Int. Assoc. Eng. Geol. 2016 , 77 , 191–204. [ Google Scholar ] [ CrossRef ]
  • Gitis, V.G.; Derendyaev, A. Web-Based GIS platform for automatic prediction of earthquakes. In Proceedings of the International Conference on Computational Science and Its Applications, Melbourne, Australia, 2–5 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 268–283. [ Google Scholar ]
  • Thomas, S.; Pillai, G.; Pal, K. Prediction of peak ground acceleration using ϵ-SVR, ν-SVR and Ls-SVR algorithm. Geomat. Nat. Hazards Risk 2016 , 8 , 177–193. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Rouet-Leduc, B.; Hulbert, C.; Lubbers, N.; Barros, K.; Humphreys, C.J.; Johnson, P.A. Machine Learning Predicts Laboratory Earthquakes. Geophys. Res. Lett. 2017 , 44 , 9276–9282. [ Google Scholar ] [ CrossRef ]
  • Rafiei, M.H.; Adeli, H. NEEWS: A novel earthquake early warning model using neural dynamic classification and neural dynamic optimization. Soil Dyn. Earthq. Eng. 2017 , 100 , 417–427. [ Google Scholar ] [ CrossRef ]
  • Asencio–Cortés, G.; Scitovski, S.; Scitovski, R.; Martínez–Álvarez, F. Temporal analysis of Croatian seismogenic zones to improve earthquake magnitude prediction. Earth Sci. Inform. 2017 , 10 , 303–320. [ Google Scholar ] [ CrossRef ]
  • Rahmani, M.E.; Amine, A.; Hamou, R.M. A Novel Bio Inspired Algorithm Based on Echolocation Mechanism of Bats for Seismic States Prediction. Int. J. Swarm Intell. Res. 2017 , 8 , 1–18. [ Google Scholar ] [ CrossRef ]
  • Asencio-Cortés, G.; Martínez-Álvarez, F.; Troncoso, A.; Morales-Esteban, A. Medium–large earthquake magnitude prediction in Tokyo with artificial neural networks. Neural Comput. Appl. 2015 , 28 , 1043–1055. [ Google Scholar ] [ CrossRef ]
  • Asim, K.M.; Idris, A.; Martinez-Alvarez, F.; Iqbal, T. Short Term Earthquake Prediction in Hindukush Region Using Tree Based Ensemble Learning. In Proceedings of the 2016 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 19–21 December 2016; pp. 365–370. [ Google Scholar ]
  • Yang, D.; Yang, K. Multi-step prediction of strong earthquake ground motions and seismic responses of SDOF systems based on EMD-ELM method. Soil Dyn. Earthq. Eng. 2016 , 85 , 117–129. [ Google Scholar ] [ CrossRef ]
  • Vahaplar, A.; Tezel, B.T.; Nasiboglu, E.; Nasibov, E. A monitoring system to prepare machine learning data sets for earthquake prediction based on seismic-acoustic signals. In Proceedings of the 2015 9th International Conference on Application of Information and Communication Technologies (AICT), Rostov on Don, Russia, 14–16 October 2015; pp. 44–47. [ Google Scholar ]
  • Buscema, P.M.; Massini, G.; Maurelli, G. Artificial Adaptive Systems to predict the magnitude of earthquakes. Bollettino di Geofisica Teorica ed Applicata 2015 , 56 , 227–256. [ Google Scholar ]
  • Kamogawa, M.; Nanjo, K.; Izutsu, J.; Orihara, Y.; Nagao, T.; Uyeda, S. Nucleation and Cascade Features of Earthquake Mainshock Statistically Explored from Foreshock Seismicity. Entropy 2019 , 21 , 421. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Reyes, J.; Morales-Esteban, A.; Martínez-Álvarez, F. Neural networks to predict earthquakes in Chile. Appl. Soft Comput. 2013 , 13 , 1314–1328. [ Google Scholar ] [ CrossRef ]
  • Farrokhzad, F.; Choobbasti, A.; Barari, A. Liquefaction microzonation of Babol city using artificial neural network. J. King Saud Univ.-Sci. 2012 , 24 , 89–100. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Gu, T.-F.; Wang, J.-D. Application of fuzzy neural networks for predicting seismic subsidence coefficient of loess subgrade. In Proceedings of the 2010 Sixth International Conference on Natural Computation, Yantai, China, 10–12 August 2010; Volume 3, pp. 1556–1559. [ Google Scholar ]
  • Korkmaz, K.A.; Demir, F. Ground Motion Data Profile of Western Turkey with Intelligent Hybrid Processing. Pure Appl. Geophys. 2016 , 174 , 293–303. [ Google Scholar ] [ CrossRef ]
  • Dutta, P.K.; Mishra, O.P.; Naskar, M.K. Decision analysis for earthquake prediction methodologies: Fuzzy inference algorithm for trust validation. Int. J. Comput. Appl. 2012 , 45 , 13–20. [ Google Scholar ]
  • Mishra, O.P. Seismological research in India. Proc. Indian Natl. Sci. Acad. 2012 , 76 , 361–375. [ Google Scholar ]
  • Pandit, A.R.; Biswal, K.C. Prediction of earthquake magnitude using adaptive neuro fuzzy inference system. Earth Sci. Inform. 2019 , 12 , 513–524. [ Google Scholar ] [ CrossRef ]
  • Pham, B.T.; Bui, D.T.T.; Pham, H.V.; Le, H.Q.; Prakash, I.; Dholakia, M.B. Landslide Hazard Assessment Using Random SubSpace Fuzzy Rules Based Classifier Ensemble and Probability Analysis of Rainfall Data: A Case Study at Mu Cang Chai District, Yen Bai Province (Viet Nam). J. Indian Soc. Remote. Sens. 2016 , 45 , 673–683. [ Google Scholar ] [ CrossRef ]
  • Polykretis, C.; Chalkias, C.; Ferentinou, M. Adaptive neuro-fuzzy inference system (ANFIS) modeling for landslide susceptibility assessment in a Mediterranean hilly area. Bull. Int. Assoc. Eng. Geol. 2017 , 78 , 1173–1187. [ Google Scholar ] [ CrossRef ]
  • Razifard, M.; Shoaei, G.; Zaré, M. Application of fuzzy logic in the preparation of hazard maps of landslides triggered by the twin Ahar-Varzeghan earthquakes (2012). Bull. Int. Assoc. Eng. Geol. 2018 , 78 , 223–245. [ Google Scholar ] [ CrossRef ]
  • Pourghasemi, H.R.; Pradhan, B.; Gokceoglu, C. Application of fuzzy logic and analytical hierarchy process (AHP) to landslide susceptibility mapping at Haraz watershed, Iran. Nat. Hazards 2012 , 63 , 965–996. [ Google Scholar ] [ CrossRef ]
  • Pradhan, B. Use of GIS-based fuzzy logic relations and its cross application to produce landslide susceptibility maps in three test areas in Malaysia. Environ. Earth Sci. 2010 , 63 , 329–349. [ Google Scholar ] [ CrossRef ]
  • Zhang, Y.; Oldenburg, C.M.; Finsterle, S. Percolation-theory and fuzzy rule-based probability estimation of fault leakage at geologic carbon sequestration sites. Environ. Earth Sci. 2009 , 59 , 1447–1459. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Meten, M.; Bhandary, N.P.; Yatabe, R. Application of GIS-based fuzzy logic and rock engineering system (RES) approaches for landslide susceptibility mapping in Selelkula area of the Lower Jema River Gorge, Central Ethiopia. Environ. Earth Sci. 2015 , 74 , 3395–3416. [ Google Scholar ] [ CrossRef ]
  • Thomas, S.; Pillai, G.N.; Pal, K.; Jagtap, P. Prediction of ground motion parameters using randomized ANFIS (RANFIS). Appl. Soft Comput. 2016 , 40 , 624–634. [ Google Scholar ] [ CrossRef ]
  • Wu, A. Design and practice of a digital seismic waveform analyzing tool. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 2299–2303. [ Google Scholar ]
  • Ismail, N.; Khattak, N. Reconnaissance Report on the Mw 7.5 Hindu Kush Earthquake of 26th October 2015 and the Subsequent Aftershocks ; United Arab Emirates University: Al Ain, UAE, 2015. [ Google Scholar ]
  • Mignan, A. Retrospective on the Accelerating Seismic Release (ASR) hypothesis: Controversy and new horizons. Tectonophysics 2011 , 505 , 1–16. [ Google Scholar ] [ CrossRef ]
  • Keilis-Borok, V. Earthquake prediction: State-of-the-art and emerging possibilities. Annu. Rev. Earth Planet. Sci. 2002 , 30 , 1–33. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Mignan, A. The debate on the prognostic value of earthquake foreshocks: A meta-analysis. Sci. Rep. 2014 , 4 , 4099. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Wyss, M. Evaluation of proposed earthquake precursors. EOS Trans. Am. Geophys. Union 1991 , 72 , 411. [ Google Scholar ] [ CrossRef ]
  • Mignan, A. A preliminary text classification of the precursory accelerating seismicity corpus: Inference on some theoretical trends in earthquake predictability research from 1988 to 2018. J. Seism. 2019 , 23 , 771–785. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Cicerone, R.D.; Ebel, J.E.; Britton, J. A systematic compilation of earthquake precursors. Tectonophysics 2009 , 476 , 371–396. [ Google Scholar ] [ CrossRef ]
  • Azimi, Y.; Khoshrou, S.H.; Osanloo, M. Prediction of blast induced ground vibration (BIGV) of quarry mining using hybrid genetic algorithm optimized artificial neural network. Measurement 2019 , 147 , 106874. [ Google Scholar ] [ CrossRef ]
  • Mehrgini, B.; Izadi, H.; Memarian, H. Shear wave velocity prediction using Elman artificial neural network. Carbonates Evaporites 2017 , 34 , 1281–1291. [ Google Scholar ] [ CrossRef ]
  • Udegbe, E.; Morgan, E.; Srinivasan, S. Big data analytics for seismic fracture identification using amplitude-based statistics. Comput. Geosci. 2019 , 23 , 1277–1291. [ Google Scholar ] [ CrossRef ]
  • Sharma, S.; Venkateswarlu, H.; Hegde, A. Application of Machine Learning Techniques for Predicting the Dynamic Response of Geogrid Reinforced Foundation Beds. Geotech. Geol. Eng. 2019 , 37 , 4845–4864. [ Google Scholar ] [ CrossRef ]
  • Fanos, A.M.; Pradhan, B. A Novel Hybrid Machine Learning-Based Model for Rockfall Source Identification in Presence of Other Landslide Types Using LiDAR and GIS. Earth Syst. Environ. 2019 , 3 , 491–506. [ Google Scholar ] [ CrossRef ]
  • Břizová, L.; Kříž, J.; Studnička, F.; Slegr, J. Methods for the Evaluation of the Stochastic Properties of the Ionosphere for Earthquake Prediction—Random Matrix Theory. Atmosphere 2019 , 10 , 413. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Andrén, M.; Stockmann, G.; Skelton, A.; Sturkell, E.; Mörth, C.; Guðrúnardóttir, H.R.; Keller, N.S.; Odling, N.; Dahrén, B.; Broman, C.; et al. Coupling between mineral reactions, chemical changes in groundwater, and earthquakes in Iceland. J. Geophys. Res. Solid Earth 2016 , 121 , 2315–2337. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Sarlis, N.; Skordas, E.S.; Varotsos, P. Natural Time Analysis: Results Related to Two Earthquakes in Greece during 2019. Proceedings 2019 , 24 , 20. [ Google Scholar ]
  • Tareen, A.D.K.; Asim, K.M.; Kearfott, K.; Rafique, M.; Nadeem, M.S.A.; Iqbal, T.; Rahman, S.U. Automated anomalous behaviour detection in soil radon gas prior to earthquakes using computational intelligence techniques. J. Environ. Radioact. 2019 , 203 , 48–54. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Orihara, Y.; Kamogawa, M.; Nagao, T. Preseismic Changes of the Level and Temperature of Confined Groundwater related to the 2011 Tohoku Earthquake. Sci. Rep. 2014 , 4 , 6907. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Skelton, A.; Andrén, M.; Kristmannsdóttir, H.; Stockmann, G.; Mörth, C.-M.; Sveinbjörnsdóttir, A.; Jónsson, S.; Sturkell, E.; Guðrúnardóttir, H.R.; Hjartarson, H.; et al. Changes in groundwater chemistry before two consecutive earthquakes in Iceland. Nat. Geosci. 2014 , 7 , 752–756. [ Google Scholar ] [ CrossRef ] [ Green Version ]


| Research Question (RQ) | RQ Statement | Motivation |
| --- | --- | --- |
| RQ 1 | What are the key bibliometric facts of expert system (ES) based earthquake prediction publications? | |
| RQ 1.1 | How many studies have been contributed from January 2010 to January 2020? | The intention of this research question is to determine the number of publications contributed in the selected time period and the main venues where the studies have been published. |
| RQ 1.2 | What are the venues where these studies have been published? | |
| RQ 2 | Which research type facets do the identified publications address? | |
| RQ 2.1 | What is the type of research conducted in the publication? | The main intention is to categorize the selected publications using the schema established by [ , ]; we therefore use the research type facets given by Zhang et al. [ ]. Based on these facets, we identify multiple research contexts, including the type of research, its empirical type, the approach used, and the areas targeted for data extraction. |
| RQ 2.2 | What is the empirical type of the research conducted in the publication? | |
| RQ 2.3 | What approach has been used by the researcher? | |
| RQ 2.4 | Which area has been targeted by the research for data collection? | |
| RQ 3 | What are the types and other key aspects of the expert systems (ES) proposed in the classified publications? | |
| RQ 3.1 | What type of expert system has been proposed in the selected studies? | The main aim is to determine the types of ES proposed for earthquake prediction in articles published from January 2010 to January 2020. This question also helps highlight other parameters of the proposed ES, such as the input domain, the number and type of input attributes, the prediction logic, and the tools and techniques used. |
| RQ 3.2 | Which input domain does the proposed ES address? | |
| RQ 3.3 | How many input attributes are passed to the proposed ES? | |
| RQ 3.4 | What is the type of the input attributes passed to the proposed ES? | |
| RQ 3.5 | Which type of prediction logic has been used by the proposed ES? | |
| RQ 3.6 | Which tool or technique has been used to develop the proposed ES? | |
| AND Terms | OR Terms |
| --- | --- |
| Earthquake | Rule based, Fuzzy, Frame based |
| | Machine Learning, Deep learning, Expert system |
| | Seismic, Tremor |
| Indicator | Precursor, Feature |
| Prediction | Predict* (* means wildcard) |
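For illustration only (the grouping below is an assumption, and the exact query syntax depends on the database searched), the AND/OR terms above combine into one boolean search string by OR-ing each AND term with its alternatives and then AND-ing the groups together:

```python
# Hypothetical reconstruction of the search string from the AND/OR terms.
# Each AND term is OR-ed with its alternatives; the groups are AND-ed.
search_terms = {
    "Earthquake": ["Seismic", "Tremor"],
    "Indicator": ["Precursor", "Feature"],
    "Prediction": ["Predict*"],  # '*' is a wildcard
}

def build_query(terms):
    """OR the synonyms inside each group, AND the groups together."""
    groups = ["(" + " OR ".join([key] + synonyms) + ")"
              for key, synonyms in terms.items()]
    return " AND ".join(groups)

query = build_query(search_terms)
# "(Earthquake OR Seismic OR Tremor) AND (Indicator OR Precursor OR Feature)
#  AND (Prediction OR Predict*)"
```

The same helper accepts any other term grouping, so alternative readings of the table (for example, attaching the expert system terms to their own AND group) only change the dictionary, not the code.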

| ID | Inclusion Criteria |
| --- | --- |
| IC1 | Articles in which an expert system has been developed for earthquake prediction |
| IC2 | Articles in which earthquake precursors have been analyzed |
| IC3 | Articles presenting unique and new ideas |
| IC4 | Literature published as book chapters and technical reports on earthquake prediction |
| IC5 | Articles with identical abstracts (on the basis of the Kappa coefficient) |
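IC5 relies on the Kappa coefficient to judge agreement between abstracts. As a minimal sketch (the reviewer decisions below are invented for illustration, not taken from the review), Cohen's kappa for two screening passes can be computed as:

```python
# Illustrative sketch only: Cohen's kappa between two screeners'
# include/exclude decisions (IC5 refers to the Kappa coefficient).
def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    p_e = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
              for label in labels)
    if p_e == 1.0:  # degenerate case: chance agreement is already total
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions for six abstracts.
reviewer_1 = ["inc", "inc", "exc", "inc", "exc", "exc"]
reviewer_2 = ["inc", "exc", "exc", "inc", "exc", "inc"]
kappa = cohens_kappa(reviewer_1, reviewer_2)  # 1/3 for this toy data
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.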

| ID | Exclusion Criteria |
| --- | --- |
| EC1 | Duplicates and identical titles |
| EC2 | Papers not written in English |
| EC3 | Theses (covering several different aspects) |
| EC4 | Papers with unclear methodology |
| EC5 | Papers not satisfying the quality criteria |
Quality Ranking

| Sr. | Criteria | Type | Weight |
| --- | --- | --- | --- |
| 1 | Study presents contribution | Yes | 1 |
| | | No | 0 |
| | | Partially | 0.5 |
| 2 | Study presents solution | Yes | 1 |
| | | No | 0 |
| | | Partially | 0.5 |
| 3 | Study presents empirically validated results | Yes | 1 |
| | | No | 0 |
| | | Partially | 0.5 |
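The quality ranking is simple arithmetic: each criterion contributes its weight and the quality score is the sum, giving the 0 to 3 scale used in the classification table. A minimal sketch (the function and answer labels are illustrative, not from the source):

```python
# Each quality criterion is answered Yes (1), No (0) or Partially (0.5);
# a study's quality score is the sum over the three criteria.
WEIGHTS = {"yes": 1.0, "no": 0.0, "partially": 0.5}

def quality_score(contribution, solution, validation):
    """Sum the weights of the answers to the three quality criteria."""
    return sum(WEIGHTS[answer.lower()]
               for answer in (contribution, solution, validation))

score = quality_score("Yes", "Partially", "Partially")  # 2.0 (1 + 0.5 + 0.5)
```

This reproduces the per-study scores in the classification table, e.g. a row with (a) = 1, (b) = 0.5, (c) = 0.5 totals 2.0.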
| RQ | Sub-RQ | Data Extracted |
| --- | --- | --- |
| RQ 1 | RQ 1.1 | Number of publications contributed in the given time period. |
| | RQ 1.2 | The main venue where each study has been published. |
| RQ 2 | RQ 2.1 | Research type (solution, evaluation, experience). |
| | RQ 2.2 | Empirical type (experiment, survey, case study). |
| | RQ 2.3 | The approach used (model, method, guideline, framework, tool). |
| | RQ 2.4 | Seismic zone (global, regional) focused on by the study. |
| RQ 3 | RQ 3.1 | Type of the proposed expert system (fuzzy ES, rule-based ES, neuro-fuzzy ES). |
| | RQ 3.2 | The input domain, i.e., quake or precursive. |
| | RQ 3.3 | Number of input attributes (single or multiple) passed to the proposed ES. |
| | RQ 3.4 | Type of the input attributes (numeric or discrete). |
| | RQ 3.5 | Prediction logic (inductive or deductive) used by the proposed ES. |
| | RQ 3.6 | Tools and techniques used to develop the proposed ES. |
| Ref. | Publication Channel | Publication Year | Research Type | Empirical Type | Approach | Target Area | Proposed ES Type | Input Domain | Input Attribute | Input Attribute Type | Data Type | Prediction Logic | Tools and Techniques | (a) | (b) | (c) | Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [ ] | Journal | 2014 | Eva | CS | Mod | RL | FES | PR | ML | DV | Dis | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Eva | Ext | Met | RL | FES | QE | SL | DV | Num | IN | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2017 | Eva | Ext | Mod | RL | Other | QE | ML | DV | Num | DD | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2016 | Eva | CS | Met | RL | NFES | PR | SL | DV | Dis | IN | AM | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2018 | Eva | Ext | Met | RL | FES | QE | ML | DV | Num | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2015 | Eva | Ext | Mod | RL | FES | QE | ML | PE | Num | IN | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Conference | 2012 | Sol | Ext | Met | RL | FES | PR | ML | DV | Dis | IN | Oth | 0.5 | 1 | 1 | 2.5 |
| [ ] | Journal | 2014 | Eva | Ext | Mod | RL | FES | PR | ML | DV | Dis | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2017 | Eva | CS | Met | RL | Other | QE | ML | DV | Num | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2017 | Eva | CS | Mod | RL | FES | PR | SL | DV | Dis | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Eva | CS | Mod | RL | FES | QE | ML | DV | Num | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2018 | Sol | Ext | Mod | RL | FES | QE | SL | DV | Num | IN | SW | 0.5 | 1 | 1 | 2.5 |
| [ ] | Book | 2017 | Eva | CS | Met | RL | FES | PR | SL | DV | Dis | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Exp | Sur | Met | RL | FES | QE | ML | PE | Dis | IN | Oth | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2013 | Exp | Sur | Mod | GL | RBES | PR | SL | PE | Dis | DD | SW | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2013 | Eva | Ext | Mod | RL | RBES | PR | ML | PE | Dis | IN | Oth | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2016 | Eva | Sur | Met | RL | FES | QE | ML | PE | Num | DD | AM | 1 | 0.5 | 0 | 1.5 |
| [ ] | Conference | 2015 | Eva | Ext | Mod | GL | RBES | QE | ML | DV | Dis | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2014 | Eva | Ext | Mod | GL | RBES | QE | ML | PE | Dis | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2018 | Eva | Ext | Met | GL | FES | QE | ML | DV | Dis | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2018 | Eva | Ext | Met | RL | NFES | QE | ML | DV | Dis | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2015 | Exp | CS | Met | RL | FES | QE | SL | PE | Dis | IN | Oth | 0.5 | 0.5 | 0.5 | 1.5 |
| [ ] | Conference | 2016 | Exp | CS | Met | RL | FES | QE | ML | DV | Dis | IN | Oth | 0.5 | 0.5 | 0.5 | 1.5 |
| [ ] | Conference | 2010 | Eva | Ext | Met | RL | Other | PR | SL | PE | Num | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2017 | Sol | CS | Mod | RL | FES | PR | SL | PE | Dis | IN | SW | 0.5 | 1 | 0.5 | 2.0 |
| [ ] | Journal | 2018 | Eva | Ext | Mod | GL | RBES | PR | SL | PE | Dis | IN | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2012 | Sol | Sur | Mod | GL | FES | PR | ML | PE | Num | IN | AM | 0.5 | 1 | 0 | 1.5 |
| [ ] | Journal | 2015 | Exp | Sur | Gle | GL | NFES | PR | ML | PE | Num | IN | SW | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2015 | Eva | Ext | Mod | RL | FES | QE | ML | PE | Num | IN | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2018 | Eva | CS | Mod | RL | NFES | PR | SL | DV | Dis | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2014 | Exp | Sur | Gle | GL | NFES | QE | ML | DV | Num | DD | SW | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2020 | Eva | Ext | FW | RL | Ml | QE | ML | PE | Num | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Conference | 2020 | Eva | Ext | Mod | RL | Ml | QE | ML | DV | Num | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2020 | Exp | CS | Met | RL | Ml | PR | SL | DV | Dis | DD | SW | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2019 | Exp | Ext | Met | RL | Ml | QE | ML | PE | Num | IN | AM | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Conference | 2019 | Exp | Ext | Met | GL | Ml | PR | SL | PE | Dis | DD | SW | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Conference | 2019 | Sol | Ext | Mod | GL | Ml | QE | ML | PE | Num | DD | AM | 0.5 | 1 | 1 | 2.5 |
| [ ] | Conference | 2019 | Exp | Sur | Gle | GL | Ml | PR | ML | DV | Dis | IN | AM | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Conference | 2019 | Exp | Sur | Gle | GL | Ml | PR | ML | PE | Dis | DD | AM | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2019 | Eva | Ext | Met | RL | Ml | QE | ML | DV | Num | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Conference | 2019 | Eva | Ext | Mod | GL | Ml | PR | ML | DV | Dis | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2018 | Sol | Ext | Met | GL | Ml | QE | ML | DV | Num | IN | AM | 0.5 | 1 | 1 | 2.5 |
| [ ] | Journal | 2019 | Eva | Ext | Met | RL | NFES | PR | ML | DV | Num | DD | AM | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2013 | Exp | Ext | Met | RL | FES | PR | ML | PE | Num | IN | Oth | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Conference | 2010 | Sol | CS | Mod | RL | FES | PR | SL | DV | Dis | DD | AM | 0.5 | 1 | 0.5 | 2.0 |
| [ ] | Journal | 2018 | Eva | Ext | Mod | RL | FES | PR | ML | PE | Dis | IN | Oth | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2011 | Sol | Sur | Mod | RL | Other | QE | SL | DV | Num | DD | AM | 0.5 | 1 | 0 | 1.5 |
| [ ] | Journal | 2019 | Exp | CS | Gle | RL | Other | PR | SL | PE | Num | DD | AM | 0.5 | 0.5 | 0.5 | 1.5 |
| [ ] | Conference | 2010 | Sol | Ext | Met | RL | FES | PR | SL | DV | Dis | IN | Oth | 0.5 | 1 | 1 | 2.5 |
| [ ] | Conference | 2018 | Exp | Ext | FW | GL | NN | QE | ML | DV | Num | DD | AM | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Conference | 2018 | Exp | Sur | Met | GL | Ml | PR | ML | PE | Num | IN | AM | 0.5 | 0.5 | 0 | 1.0 |
| [ ] | Journal | 2018 | Exp | Ext | Met | RL | NN | QE | ML | PE | Num | DD | AM | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Journal | 2018 | Sol | CS | Met | RL | Ml | QE | ML | PE | Num | DD | SW | 0.5 | 1 | 0.5 | 2.0 |
| [ ] | Journal | 2018 | Eva | Ext | Met | GL | Ml | QE | ML | DV | Num | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Conference | 2018 | Sol | Ext | Met | RL | Ml | QE | ML | PE | Num | DD | SW | 0.5 | 1 | 1 | 2.5 |
| [ ] | Journal | 2017 | Eva | CS | Mod | RL | Ml | QE | ML | PE | Num | DD | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Conference | 2017 | Exp | Ext | Mod | GL | Ml | PR | SL | PE | Dis | IN | AW | 0.5 | 0.5 | 1 | 2.0 |
| [ ] | Journal | 2017 | Sol | CS | Mod | RL | NN | PR | ML | DV | Num | IN | SW | 0.5 | 1 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Eva | CS | Met | RL | Ml | QE | ML | DV | Num | DD | AW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Sol | CS | Met | RL | Ml | PR | SL | PE | Dis | IN | SW | 0.5 | 1 | 0.5 | 2.0 |
| [ ] | Journal | 2017 | Eva | CS | Met | RL | NN | PR | ML | DV | Num | DD | AW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Conference | 2017 | Eva | CS | Mod | RL | Ml | QE | ML | PE | Num | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Conference | 2015 | Eva | CS | Met | RL | Ml | QE | ML | DV | Num | IN | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2015 | Eva | Ext | Mod | GL | Ml | QE | ML | PE | Num | IN | SW | 1 | 0.5 | 1 | 2.5 |
| [ ] | Journal | 2013 | Eva | CS | Mod | RL | Ml | PR | ML | DV | Dis | DD | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2013 | Eva | CS | Met | RL | Ml | QE | ML | DV | Num | DD | SW | 1 | 0.5 | 0.5 | 2.0 |
| [ ] | Journal | 2012 | Exp | CS | Met | RL | Ml | PR | SL | DV | Dis | DD | SW | 0.5 | 0.5 | 0.5 | 1.5 |
| [ ] | Journal | 2016 | Sol | Ext | Met | GL | Ml | QE | ML | DV | Num | IN | SW | 0.5 | 1 | 1 | 2.5 |
| Source | Channel | Reference |
| --- | --- | --- |
| International Conference on Natural Computation (ICNC) | Conference | [ ] |
| Pure and Applied Geophysics | Journal | [ ] |
| Expert Systems with Applications | Journal | [ , ] |
| IEEE Access | Journal | [ ] |
| International Journal of Computer Applications | Journal | [ ] |
| Proceedings of the Indian National Science Academy | Journal | [ ] |
| Earth Science Informatics | Journal | [ , ] |
| Journal of the Indian Society of Remote Sensing | Journal | [ ] |
| Bulletin of Engineering Geology and the Environment | Journal | [ , , , ] |
| Natural Hazards | Journal | [ , , , , , , ] |
| Knowledge-Based Systems | Journal | [ , ] |
| Journal of Environmental Radioactivity | Journal | [ ] |
| International Journal of Coal Geology | Journal | [ ] |
| Computer-Aided Civil and Infrastructure Engineering | Journal | [ ] |
| Applied Sciences | Journal | [ ] |
| International Journal of Disaster Risk Reduction | Journal | [ ] |
| Tunnelling and Underground Space Technology | Journal | [ ] |
| PLoS ONE | Journal | [ , ] |
| Environmental Earth Sciences | Journal | [ , , , , ] |
| International Journal of Fuzzy Systems | Journal | [ ] |
| Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications | Journal | [ ] |
| Applied Soft Computing | Journal | [ ] |
| Journal of Intelligent Information Systems | Journal | [ ] |
| Geodesy and Geodynamics | Journal | [ ] |
| Environmental Monitoring and Assessment | Journal | [ ] |
| Earth Science Informatics | Journal | [ , ] |
| International Journal of Computer Information Systems and Industrial Management Applications | Journal | [ ] |
| Biostatistics and Biometrics | Journal | [ ] |
| International Journal of Engineering Research & Technology | Journal | [ ] |
| Journal of the Geological Society of India | Journal | [ ] |
| Journal of Sustainability Science and Management | Journal | [ ] |
| Journal of Chemical and Pharmaceutical Sciences | Journal | [ ] |
| Acta Geophysica | Journal | [ ] |
| International Conference on Fuzzy Systems and Knowledge Discovery (FSKD) | Conference | [ , , ] |
| International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | Journal | [ , ] |
| International Conference on Information Management, Innovation Management and Industrial Engineering | Conference | [ ] |
| Analysis & Computation Specialty Conference | Conference | [ ] |
| Soil Dynamics and Earthquake Engineering | Journal | [ , , ] |
| Lecture Notes in Electrical Engineering | Conference | [ ] |
| Advances in Intelligent Systems and Computing | Journal | [ ] |
| ISPRS International Journal of Geo-Information | Journal | [ ] |
| Seismological Research Letters | Conference | [ , ] |
| Geophysical Research Letters | Conference | [ , ] |
| CEUR Workshop Proceedings | Conference | [ ] |
| Geosciences | Journal | [ , ] |
| Proceedings of SPIE, the International Society for Optical Engineering | Conference | [ ] |
| Bulletin of the Seismological Society of America | Journal | [ ] |
| Neural Processing Letters | Conference | [ ] |
| Proceedings of the IEEE 4th International Conference on Big Data Computing Service and Applications | Conference | [ ] |
| Lecture Notes in Computer Science | Conference | [ ] |
| Geomatics, Natural Hazards and Risk | Journal | [ ] |
| International Journal of Swarm Intelligence Research | Journal | [ ] |
| Neural Computing and Applications | Journal | [ ] |
| Proceedings of the 14th International Conference on Frontiers of Information Technology | Conference | [ ] |
| Proceedings of the 9th International Conference on Application of Information and Communication Technologies | Conference | [ ] |
| Bollettino di Geofisica Teorica ed Applicata | Journal | [ ] |
| Applied Soft Computing | Journal | [ ] |
| Journal of King Saud University | Journal | [ ] |
| Target Area | Geographic Dimension | Ref. |
| --- | --- | --- |
| Global | | [ , , , , , , , , , , , , , , , , , , , , , ] |
| | North American plate + Pacific plate + Philippine Sea plate | [ , ] |
| | North American plate + Pacific plate | [ , , , , , , ] |
| | Eurasian plate + Indian plate + Philippine Sea plate | [ , , , , , , , , , ] |
| | Eurasian plate + Philippine Sea plate | [ ] |
| | Eurasian plate + Indian plate | [ , , , , , ] |
| | Eurasian plate + African plate | [ , , , ] |
| | Eurasian plate + Arabian plate | [ ] |
| | | [ ] |
| | African plate | [ ] |
| | Indian plate + African plate | [ , ] |
| | Indian plate | [ , , , , ] |
| | Iranian plate | [ , , , , , , , , ] |
| | Arabian plate | [ ] |
| | Arabian plate + Somali plate + Nubian plate | [ ] |
| | Philippine Sea plate | [ ] |
| | Somali plate | [ , ] |
| | Nazca plate | [ , , , , ] |
| Republic of Croatia | Apulian plate | [ ] |
| Cyprus | African plate + Eurasian plate + Arabian plate | [ ] |
Tools and Techniques | % | Reference
MATrix Laboratory (MATLAB) | 41 | [ , , , , , , , , , , , , , , , , , , , , , ]
Database index normalization | 4 | [ , ]
Generalized Langevin equation (GLE) | 1.8 | [ ]
Subsidence coefficient calculator | 1.8 | [ ]
Predicate (PRED) in C++ | 1.8 | [ ]
Annealing, Sparse-spike | 1.8 | [ ]
Classification and regression trees (CART) | 1.8 | [ ]
Fuzzy C-means | 4 | [ , ]
Upgraded IF-THEN-ELSE | 4 | [ , ]
Normalized fuzzy peak ground acceleration (FPGA) | 1.8 | [ ]
Predicate logic | 7 | [ , , , ]
Mean absolute error (MAE), Root mean square error (RMSE) | 1.8 | [ ]
Earth Resources Data Analysis System (ERDAS) model maker | 1.8 | [ ]
3-dimensional seismic tomography | 1.8 | [ ]
Mean square error (MSE) | 4 | [ , ]
RapidMiner software, frequent pattern growth algorithm | 1.8 | [ ]
Adobe | 1.8 | [ ]
Geological carbon storage (GCS) analyzer - Monte Carlo | 1.8 | [ ]
Fuzzy probabilistic seismic hazard analyzer (FPSHA) | 1.8 | [ ]
FURIA | 1.8 | [ ]
ArcGIS | 1.8 | [ ]
SAGA | 1.8 | [ ]
Aeronautical Reconnaissance Coverage Geographic Information System (ARC/INFO GIS) | 1.8 | [ ]
Geographic information system (GIS), Multi-criteria decision analysis (MCDA) | 4 | [ , ]
Multilayer Perceptron - Rule Based (MLP-RB) | 1.8 | [ ]
Nearest neighbor affine-invariant Riemannian metric (AIRM) | 1.8 | [ ]
Weighted index (WI) | 1.8 | [ ]
Knowledge Extraction based on Evolutionary Learning (KEEL) | 1.8 | [ ]
Particle Swarm Optimization (PSO) | 1.8 | [ ]
Apache Spark | 1.8 | [ ]
Kernel Fisher Discriminant Algorithm (KFDA) | 1.8 | [ ]
Novel earthquake early warning system (NEEWS) | 1.8 | [ ]
Reference | Number of Records (EQ) | Accuracy | Magnitude Range | Data Set
[ ] | 60 | 78% | 5.2–7.7 | TS
[ ] | 343 seismograms | 99.71% | ≥5.0 | TS
[ ] | 47 | 93.54% | ≥5.5 | TS
[ ] | 12 indices | 91% | ≥4.5 | TS
[ ] | 9531 | 69.8% | ≥2.0 | ITS
[ ] | 677245 | 87.85% | 3.6–9.1 | TS
[ ] | 12690 | 50.14% | ≥3.0 | ITS
[ ] | 522 | 95.8% | ≥4.0 | TS
[ ] | 1773 | 85.73% | ≥3.5 | TS
[ ] | 337 | 63% | ≥3.0 | ITS
[ ] | 1000 | 80.1% | <5.5 | TS
[ ] | 227 | 70% | <5.0 | TS
[ ] | 77 | 80.11% | ≥5.0 | TS
[ ] | 26481 | 78% | 2.5–7.5 | TS
[ ] | 10567 | 40% | 0.1–5.9 | ITS
[ ] | 100 | 99.99% | 5.5–7.7 | TS
[ ] | 476 | 87.2% | ≥5.0 | TS
[ ] | 248 | 84% | ≥5.5 | TS
[ ] | 78 | 88% | ≥5.0 | TS
[ ] | 1059846 | 86.28% | ≥1.5 | TS
Reference | Score | Total | %
(Average score = 1.5)
[ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] | Above average | 55 | 79
[ , , , , , , ] | Average | 7 | 10
[ , , , , , , , ] | Below average | 8 | 11
Method | Basis | Parameters studied | Techniques | Ref.
Deterministic | Characteristic earthquake | Rupture length, Modified Mercalli, Return period | Classification, Clustering, Machine Learning (ML), Neural network (NN) | [ , , , , , , , , , , , , , , , , , , , , , , , , , , , , ]
Probabilistic | Precursor | Animal behavior | Predicate logic, Aggregated Indices Randomization Method (AIRM), Regression, Comparison, Clustering, ML, NN | [ , ]
Probabilistic | Precursor | Seismic velocity | | [ , , ]
Probabilistic | Precursor | Seismic resistivity | | [ ]
Probabilistic | Precursor | Topography uplift | | [ , , , , , , , , ]
Probabilistic | Precursor | Radon emission | | [ , ]
Probabilistic | Precursor | Seismic electric signal | | [ ]
Probabilistic | Precursor | Electromagnetic signals | | [ , , , ]
Probabilistic | Precursor | Ground water elevation | | [ , ]
Probabilistic | Precursor | Land sliding | | [ , , , ]
Probabilistic | Earthquake physics | Earthquake light, Ionosphere disorder | ML, NN, Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) | [ , ], [ ]
Probabilistic | Elastic rebound | Seismic gap, Seismic pattern | Pattern recognition, Clustering, ML | [ , , , ], [ , ]
Method | Comparison
Neural networks vs. Expert systems
An expert system captures and encodes (often manually) the rules that experts use, so as to develop a program that mimics their behavior in a very specific domain; it typically involves chaining these rules together. With an ANN, the rules are encoded automatically by presenting examples, good and bad, to the network; the network adjusts its weights over many iterative cycles, honing its output toward the correct value. Feed-forward neural networks can predict long-term and short-term earthquakes, but they cannot feed the output of multiple layers back as input, and back-propagation neural networks are often trapped in local minima during the training phase on earthquake data sets. However, the probability of obtaining the desired output rises when the network is tested with well-designed inputs.
Machine learning vs. Expert systems
Machine learning (ML) focuses on modeling the data statistically, and the expert is involved only at the time of decision; supervised learning algorithms are used to reproduce the expert's final decisive behavior. Expert systems, by contrast, are based on a set of rules prescribed by a human expert and learn by directly injecting the expert's domain-level knowledge. The knowledge obtained from the expert is converted into membership functions and used in decision making. An explanation facility is also available, as the expert describes every step up to the decision, its basis, and the exception-handling procedures. The result is a rigid system that follows exactly the rules described by the expert; this rigidness of the expert system makes it the most suitable of all these techniques for predicting future earthquakes.
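The contrast drawn above can be sketched in code. The toy below is illustrative only and is not taken from any of the mapped studies: `expert_rule` hard-codes a single hand-written rule (the precursor names and threshold values are invented), while `train_perceptron` shows a network encoding its "rules" automatically, adjusting weights over many iterative cycles as good and bad examples are presented.

```python
def expert_rule(b_value_drop, foreshocks):
    """Rigid hand-written rule as an expert might prescribe it
    (precursor names and thresholds are hypothetical)."""
    return 1 if (b_value_drop > 0.3 and foreshocks >= 5) else 0

def train_perceptron(samples, epochs=50, lr=0.1):
    """Adjust weights over many iterative cycles, honing the output
    toward the correct label for each presented example."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred              # error drives the weight update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# good and bad examples presented to the network (here: logical AND,
# standing in for "alarm only when both conditions hold")
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Here the perceptron learns a logical-AND toy set from examples alone, whereas the expert system reaches a comparable decision only through a rule an expert wrote by hand.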
Method | Ref. | Prediction Approach (Deterministic / Probabilistic / Characteristic Earthquake / Precursors) | Algorithm Defined (Analytical Work / Global Approximation / Numerical Experiment / Exploration with actual forecasts / Success Achieved) | Application Area (Zone Studied) | Dataset
[ ] China
[ ] Iran
[ ] Taiwan
[ ] Iran
[ ] California
[ ] China
[ ] Caraga
[ ] Turkey
[ ] Nepal
[ ]
[ ] China
[ ] India
[ ] India
[ ] Nepal
[ ] China
[ ] Saudi Arabia
[ ] Malaysia
[ ] Iran
[ ] Iran
[ ] Malaysia
[ ] Ethiopia
[ ] Iran
[ ] Chile
[ ]
[ ]
[ ] Iran
[ ]
[ ] Iran
[ ]
[ ] Turkey
[ ]
[ ] China
[ ]
[ ] Greece
[ ] Cyprus
[ ] India
[ ] India
[ ] Iran
[ ]
[ ]
[ ]
[ ] California
[ ]
[ ]
[ ]
[ ] California
[ ] California
[ ]
[ ] Japan
[ ]
[ ]
[ ]
[ ] California
[ ] Croatia
[ ]
[ ] Pakistan
[ ] Greece
[ ] Turkey
[ ] Iran
[ ] Chile
[ ]

Tehseen, R.; Farooq, M.S.; Abid, A. Earthquake Prediction Using Expert Systems: A Systematic Mapping Study. Sustainability 2020, 12, 2420. https://doi.org/10.3390/su12062420
PERSPECTIVE article

Precursor-based earthquake prediction research: proposal for a paradigm-shifting strategy.

Alexandru Szakács

  • Department of Endogene Processes, Natural Hazard and Risk, Romanian Academy, Institute of Geodynamics, Bucharest, Romania

The article discusses the controversial topic of precursor-based earthquake prediction from a personal perspective, intending to stir the still waters of the issue twenty years after the influential debate on earthquake prediction hosted by Nature in 1999. The article challenges the currently dominant pessimistic view on precursor-based earthquake prediction resting on the “impossible in principle” paradigm. Instead, it suggests that a concept-based innovative research strategy is the key to obtaining significant results, i.e., a possible paradigm shift, in this domain. The basic concept underlying such a strategy is the “precursory fingerprint” of individual seismic structures, derived from the uniqueness of the structures themselves. The aim is to find as many unique fingerprints as possible for different seismic structures worldwide, covering all earthquake typologies. To achieve this, a multiparameter approach involving all possible sensor types (physical, chemical, and biological) of the highest available sensitivity, together with artificial intelligence, could be used. The findings would then be extrapolated to other similar structures. One key issue is the emplacement of the sensor array at privileged “sensitive” Earth-surface sites (such as volcanic conduits) where the signal-to-noise ratio is maximized, as suggested in the article. The strategy envisages three stages: an experimental phase, validation, and implementation. It would inherently be a costly, multidisciplinary, international, and long-term (i.e., multidecade) endeavor with no guaranteed success, but one less adventurous and societally more significant than the currently running and well-funded SETI Project.

Introduction

“Short-term earthquake prediction is the only useful and meaningful form for protecting human lives and social infrastructures” from the effects of disastrous seismic events ( Hayakawa, 2018 ).

More than twenty years have passed since the Nature debate on earthquake prediction (introduced and concluded by Main, 1999a ; Main, 1999b ). The time passed since then apparently seems to justify the most “pessimistic (or skeptical) party” of that debate, according to which earthquake prediction based on precursory signals is “impossible in principle” because of the chaotic and nonlinear nature of the seismic phenomenon (e.g., Geller et al., 1996 ; Matthews, 1997 ) or because “it is likely that an earthquake has no preparatory stage” ( Kagan, 1997 ). As Uyeda and Nagao (2018) put it recently, “…because they could not identify reliable precursors, seismologists maintained a negative attitude toward earthquake prediction.” This style of reasoning penetrated the consciousness of the scientific community so profoundly that it is explicitly expressed in Predicting the Unpredictable —the title of a book ( Hough, 2010 ). Meanwhile, a number of large-magnitude earthquakes struck worldwide without being “predicted”, causing numerous victims and incommensurable economic losses, such as the 2004 Sumatra earthquake (227,898 victims and US$15 billion total damage; Telford and Cosgrave, 2006 ), the 2010 Haiti earthquake (>100,000 deaths and USD 7.8–8.5 billion economic loss; U.S. Geological Survey, 2013 ), and the 2011 Tohoku earthquake (15,900 victims and USD 360 billion economic loss; Bachev, 2014 ), events that apparently confirmed the pessimistic view on earthquake prediction reinforced by a number of post-1999 papers. This pessimism has essentially lasted until today ( Uyeda and Nagao, 2018 ).

However, there are still a few alternative expert views around (e.g., “there are increased amounts of data, new theories and powerful computer programs and scientists are using those to explore ways that earthquakes might be predicted in the future.”, Blanpied, 2008 ). Developments in the domain of earthquake prediction research during the last few decades, prompted by the occurrence of devastating seismic events worldwide, seem to confirm such an optimistic view, as mentioned by Uyeda and Nagao (2018) referring to “the recent remarkable revival of seismology in earthquake prediction research (…) emerged from the shadows of electromagnetic research.” Martinelli (2020) also noted that “some recent projects on earthquake precursors have produced interesting data recognized by the whole scientific community.” Likewise, Hayakawa (2018) is “very optimistic about the future of earthquake prediction.”

On the other hand, one may question why all attempts at “predicting” earthquakes have failed so far or were not validated by the international scientific community: is it just because earthquake prediction is “impossible in principle,” as most pessimists claim? Or is “impossible in principle” the final and unquestionable answer to the precursor-based earthquake prediction problem? If not, then what might an alternative solution look like?

This article intends to discuss such questions and proposes a radically new approach to the issue of precursor-based earthquake prediction research strategy.

A Short Summary of the State of the Art in Earthquake Prediction Research

Jordan et al. (2011) evaluated the known “diagnostic precursors” (i.e., strain-rate changes, seismic velocity changes, electrical conductivity changes, radon emission, hydrogeological changes, electromagnetic signals, thermal anomalies, anomalous animal behavior, seismic patterns, and proxies for accelerating strain) individually, one-by-one, and found that none of them is universally valid concluding that “the search for diagnostic precursors has thus far been unsuccessful.”

Crampin (2012) claimed that “in one case when seismic data from Iceland was being monitored online, the time, magnitude, and fault break of a M = 5 earthquake in Iceland was successfully stress-forecast three days before it occurred. ” However, this claimed prediction success “in one case,” based on a single monitoring method, cannot be generalized as a universally valid solution applicable to all types of seismic events and all geodynamic environments.

As a consequence, the need for multiparameter monitoring of potential earthquake precursors emerged. It has been increasingly invoked over the last two decades, and researchers have started coupling two or more monitored parameters in order to gain better confidence in their prediction efforts. Ryabinin et al. (2011) , for example, jointly studied chlorine-ion concentration variations and geoacoustic emission in boreholes within the same seismic zone of the Kamchatka peninsula, claiming that they obtained significant anomalies “70 to 50 days before the earthquake for the hydrogeochemical data and at 29 and 6 days in advance for the geoacoustic data.”

A recently (2018) published book (Pre-Earthquake Processes: A Multidisciplinary Approach to Earthquake Prediction Studies), edited by Dimitar Ouzounov, Sergey Pulinets, Katsumi Hattori, and Patrick Taylor, summarizes excellently the encouraging progress achieved in the research domain of earthquake prediction. However, the invoked positive results were rather disparate, reflecting the research efforts of individuals, small groups of researchers or, in the best case, national programs, such as those in China ( Wang et al., 2018 ) or Taiwan (the iSTEP-1, 2, and 3 programs following the 1999 Chi-Chi earthquake, Tsai et al., 2018 ; Fu and Lee, 2018 ); they are essentially based on the most common approach of looking for universally valid precursors and consider only a small number of premonitory phenomena (different from country to country) in their respective multiparameter monitoring systems. Symptomatically, for instance, although biological sensors are mentioned as potential recorders of preseismic signals (e.g., Ouzounov et al., 2018a ; Tramutoli et al., 2018b ), none of the invoked monitoring systems considers them in its research program. A common global research strategy concept is clearly lacking because, among other reasons, governmental opinions differ and change over time. For instance, Iceland, Taiwan, China, the Russian Federation, and Japan support research oriented toward possible earthquake forecasting, whereas the USA appears contradictory and Europe does not have a unified research policy.

Despite the encouraging results obtained in the last few decades in the field of earthquake prediction research, including a few alleged successful a priori predictions (e.g., using the CN seismicity pattern prediction algorithm, Peresan et al., 2012 , Peresan, 2018 , or using atmospheric-ionospheric precursors, Ouzounov et al., 2018b ), no fully credible, validated, and generally accepted method emerged, as Jordan et al. (2011) put it: “the search for diagnostic precursors has not yet produced a successful short-term prediction scheme.” Reviewing geofluid monitoring results, Martinelli (2020) also concluded that “earthquake prediction research based on parameters believed to be precursors of earthquakes is still controversial and still appear to be premature for the practical purposes demanded by governmental standards.”

Most reported “successes” were “a posteriori” statements (i.e., “postpredictions”) based on the post-factum recognition or retrospective testing of precursory signals related to particular seismic events (e.g., Shebalin et al., 2006 ; Papadopoulos et al., 2018 ; Fu and Lee, 2018 ; Zafrir et al., 2020 ), including some of the most devastating recent ones (e.g., Peresan, 2018 ; Tramutoli et al., 2018b ).

Ouzounov et al. (2018b) presented noticeable results in devising a sound methodology to check the predictive potential of preearthquake signals based on a sensor web of several physical and environmental parameters (satellite thermal infrared radiation, electron concentration in the ionosphere, air temperature, and relative humidity). They claim success in the validation of different anomalous preearthquake signals in both retrospective (3 M > 6 events in the US, Taiwan, and Japan) and prospective (22 M > 5.5 events in Japan) modes with a success rate of 21 out of 22 for the latter mode. However, one may question whether this methodology using just a small number of parameters registered by a few ground-based and satellite-held instruments can be generalized and considered valid for all types of earthquakes and all regional or local geodynamic environments.

Taking into consideration the above state of the art, this perspective article does not propose to review the burgeoning literature exhaustively on the subject of earthquake prediction. A number of recently published review articles (e.g., Martinelli, 2020 ) and books (e.g., Dimitar Ouzounov, Sergey Pulinets, Katsumi Hattori, and Patrick Taylor, eds, 2018) did that successfully. Rather, it focuses on the presentation of a possible strategic research approach based on a novel concept.

Challenging the Pessimistic View on the Earthquake Prediction Problem

Science is about discovery. Discovering unknown features of nature is the foremost task of the natural sciences. Most scientific endeavors start by identifying unsolved problems. The scientists enrolled in such an adventure are driven, at least by genuine curiosity, to understand the unknown or unexplained. Many unknowns addressed by science were not solved or understood for a long time, or during the lifetime of the generation that identified the problem. However, they remained in the collective scientific consciousness as something to be solved in the future: a challenge.

The history of science is rife with examples of universally accepted paradigms, equivalent to the “impossible in principle” statement, challenged by individuals and later recognized as viable. In Earth sciences, Wegener's hypothesis on the migration of continents was considered “impossible in principle” (although not formulated in those words). Likewise, flight with machines heavier than air was explicitly declared “impossible in principle” just one hundred years ago, even by leading scientists of the epoch.

The pessimists always argue that effort and money should not be spent on precursor-based earthquake prediction, given that all such efforts were unsuccessful in the past and, more importantly, because this is “impossible in principle”; rather, money should be spent on hazard mitigation programs. Leaving aside the fact that the two approaches are not mutually exclusive, one may wonder how other large-scale and costly research programs, with uncertainties about their outcome comparable to those of a possible earthquake prediction research program, were accepted for funding and are still running after decades with no positive results. NASA's SETI Program (run by the SETI Institute since 1994), for example, spent more than USD 110 M in the 1980–2005 period ( https://phys.org/news/2015-08-seti-unprecedented.html ) and is currently spending USD 2.5 M yearly ( https://geeknewscentral.com/2011/05/02/the-real-cost-of-seti/ ) with no relevant results. One may wonder, for good reasons, whether the chances of identifying extraterrestrial intelligence are higher than those of devising a reliable precursor-based earthquake prediction methodology. And what is the relevance of each of them to society?

I conclude that precursor-based earthquake prediction should be viewed as a challenge rather than an insolvable (in principle) problem. Wyss (2001) expressed a similar view: “as a physical phenomenon, earthquakes must be predictable to a certain degree.” Addressing the earthquake prediction problem as a challenge for science mobilizes intelligence, effort, time, and money, whereas looking at it as an “impossible in principle” task is demobilizing. And so, “perhaps, now is the time to discard the long-held pessimism and combine all our forces to venture toward transforming precursor information into practical earthquake prediction” ( Uyeda and Nagao, 2018 ).

Why Was Precursor-Based Earthquake Prediction Unsuccessful So Far?

Despite a large number of (mostly post-factum) claims of successful earthquake prediction based on precursory phenomena such as radon anomalies (e.g., Crockett et al., 2006 ) or anomalous behavior of living creatures (e.g., Polyakov et al., 2015 ), the scientific community has not validated them so far. A classic example of claimed but not validated success is the 1975 Haicheng earthquake in China, claimed by Chinese scientists ( Wang et al., 2006 ) as a successful prediction that saved many lives; however, that success was called into question just one year later by the devastating, unpredicted Tangshan earthquake (>240,000 victims, USGS, 2013 ). The major lesson to be drawn is that no two earthquakes are alike. Therefore, the most frequently undertaken approach to predicting earthquakes from precursory signals, namely looking at, or monitoring, one single (or a few) parameter(s) of a presupposed precursory phenomenon, such as VAN, which uses merely electromagnetic parameters ( Varotsos et al., 1986 ), does not work. There is no Holy Grail of a single, or a few, universally valid prediction signal(s) to be surveyed, at least because “it is practically impossible (…) to collect the large set of data for all parameters in real-time globally” ( Pulinets et al., 2018 ).

Another reason is that individuals or small groups of researchers have addressed the challenge of precursor-based earthquake prediction on their own, detached from a broader national or international systemic approach. As Wyss (2001) puts it, “no real program for earthquake prediction research exists in the United States (…) but motivated individuals are active”. Also, “research connected with earthquake prediction has been characterized by the absence of great projects” ( Martinelli, 2018 ). And this is, in my opinion, the cornerstone of the failure: the lack of a long-term strategy. Long ago, Frank Press (1968) complained that there was no research strategy in the US in the domain of earthquake prediction. Japan's investigation strategy, given as an example, was short-lived (10 years, Press, 1968 ), far less than what would have been necessary to obtain significant results. More recent successive short-term programs in Japan following the 1995 Kobe earthquake yielded remarkable results by retrospectively identifying electromagnetic precursors associated with ground movements (e.g., in the case of the 2011 M 9 Tōhoku megaearthquake); however, no long-term program is currently funded ( Hayakawa, 2018 ).

It is true that multisensor-/multiparameter-based research strategies are currently implemented in a number of earthquake disaster-prone countries, such as Turkey ( Yuce et al., 2010 ), Russia ( Pulinets et al., 2016 ), Japan ( Hayakawa, 2018 ), China ( Wang et al., 2018 ), Taiwan ( Tsai et al., 2018 ), and Italy ( Peresan, 2018 ); however, they are 1) part of national programs, 2) unconnected to each other, hence lacking a common strategic concept, and 3) partial, i.e., considering only a limited number of precursor types and corresponding parameters and sensors. The spectrum of “preearthquake phenomena” considered in China for its current multidisciplinary earthquake monitoring system, for instance, includes crustal deformation, seismicity, geoelectricity and geomagnetism, and the behavior of crustal fluids ( Wang et al., 2018 ), but no biological response. In Taiwan, the components of the multidisciplinary research on earthquake prediction include monitoring of microearthquake activities, crustal deformation, microgravity, geomagnetic total intensity, and geothermal water changes, complemented with ionospheric data and statistical studies ( Fu and Lee, 2018 ; Tsai et al., 2018 ). Pulinets et al. (2018) considered using only two groups of precursors, thermal and ionospheric, “in order to simplify” the investigations.

Some limited-participation international projects were also initiated recently, such as the PRE-EARTHQUAKES project (EU-FP7, cordis.europa.eu/result/rcn/57410_en.html) involving research institutions from Italy, Germany, Turkey, and Russia ( Ouzounov et al., 2018b ).

All of the research initiatives and strategies mentioned above are, however, different in breadth, philosophy, underlying concept, and international significance from the strategic approach proposed in this article.

To summarize, despite some notable recent advancements, precursor-based earthquake prediction research as a whole is generally considered unsuccessful so far (e.g., Wang et al., 2006 ; Uyeda and Nagao, 2018 ). This is, in my opinion, due to 1) the lack of a long-term research strategy and related funding, 2) the lack of large-scale international cooperation, 3) the individualism of researchers and groups aiming to find the Holy Grail of earthquake prediction based on a single (or a few) signal(s) of a single (or a few) precursory phenomenon(a), and, perhaps, 4) the lack of high-level technical prerequisites (e.g., computing facilities and sensor technology). Therefore, any further approach to the problem has to be based on a strategy. A strategy, in turn, has to be based on a concept. A possible shift of paradigm from today's dominant pessimistic “impossible in principle” to an optimistic “yes, we can” needs a new concept.

Outlines of a Possible Paradigm Shift in Precursory-Based Earthquake Prediction Research

Conceptual framework.

The basic principle of a possible new paradigm is the uniqueness of seismogenic structures. This seemingly trivial statement needs some explanation. Seismogenic structures are most commonly defined as active faults or fault segments. However, there are other structures that cannot be equated with faults, such as the Vrancea seismic zone in Romania (e.g., Radulian et al., 2000 ), which is rather a seismogenic volume of rocks of ca. 280,000 km³ having a surface-projected area of 70 × 40 km. Some “diffuse” seismogenic structures, such as those located in deep intraplate settings, are difficult to define, in the sense that their geometrical parameters (volume and outline) cannot be determined.

Irrespective of their nature, well-defined or not, those geological structures are “seismogenic” because they produce earthquakes. And they are unique. Each of them has its own particular geotectonic setting, unique mutual relationships with neighboring structures, unique internal composition and structure, unique seismic history, and a particular stress field.

As a consequence of their uniqueness, the seismogenic structures produce particular seismic events with typical features and parameters. Moreover, reequilibration after major events will cause modifications of the structure itself, so that the next events will take place in somewhat modified local conditions. However, one may suppose that seismogenic structures are stable enough in time (at least on the scale of human history) and that their basic features do not change and their general behavior is preserved.

Another consequence of the seismogenic structures’ uniqueness and their consistent behavior in time is that any precursory phenomenology to be expected is also unique. Therefore, one should not expect the same precursory signal to be received from different seismogenic structures, not to mention any universally valid signals.

Although questioned, the concept of precursory phenomena is generally considered valid in the scientific community (e.g., Geller, 1991 ; Wyss, 2001 ). Theoretically, the sudden rupture/slide produced by/in the seismogenic structure is preceded by stress accumulation and escalation, which, in turn, triggers modifications of the physical fields and chemical components (e.g., fluids) in the neighboring medium that propagate out from the critical zone in the form of geophysical and/or geochemical signals of various kinds. Those signals are, in principle, receivable at Earth's surface by adequately designed, tuned, and located sensors. Moreover, those propagating changes may trigger, by induction, modifications in other fields, with which they interact, hence generating secondary signals, for instance, in the atmosphere, ionosphere, and even the magnetosphere, through a complex coupling mechanism with the lithosphere, as Pulinets et al. (2018) and Hayakawa et al. (2018) convincingly demonstrated. As a consequence, an impending major seismic event may be preceded by a number of precursory signals of various kinds (physical, chemical, and biological), primary or induced.

Indeed, current research in China on “preearthquake phenomena” has resulted in important findings ( Wang et al., 2018 ). However, the use of those findings for actual preevent prediction (as opposed to postprediction) and warning meets enormous challenges because of the complexity of the precursory phenomenology, since event location, time, and magnitude all have to be “predicted”. As Wang et al. (2018) put it, “this complexity may be due to differences in the tectonic environments around seismogenic zones”. And, even more significantly, “the characteristics of the preearthquake phenomena preceding each event [of those monitored] differed,” and “different geological structures and crustal environments are likely to produce different spatiotemporal patterns of pre-earthquake phenomena” ( Wang et al., 2018 ). In other words, in the terminology used in this article, this complexity and these differences arise because of the uniqueness of the seismogenic structures. Martinelli and Dadomo (2018) also arrived at the idea that not all seismogenic structures behave in the same manner as reflected in fluid-related precursors: “Not all earthquakes seem to be preceded by detectable crustal strain changes in the epicentral area and this could explain the lack of fluid-related precursors.” Hayakawa et al. (2018) , searching for preseismic ionospheric perturbations, found that “with earthquake depths of > 40 km (…) there is no clear precursory signal evident.” Parrot and Li (2018) also emphasized that “it cannot be excluded that a [precursory] mechanism could be efficient in a given seismic area and not in another one.” Ouzounov et al. (2018a) explicitly recognized that “no solitary existing method (…) can provide successful and consistent short-term forecasting on a global scale. This is most likely because of the local geology….” Furthermore, “it is difficult to determine the location of the epicenter of a major event based only on recorded observations of pre-earthquake phenomena” ( Wang et al., 2018 ). Under the concept proposed here (i.e., addressing “preearthquake phenomena” at/for particular individual seismogenic structures), this latter type of shortcoming is automatically eliminated.

The common sense statements, and the copiously cited examples, presented above all converge toward acceptance of the uniqueness of seismogenic structures, which, in turn, leads to the derived concept of the precursory fingerprint. Each seismogenic structure, in particular a well-defined one (in terms of nature, stress field, and size/volume), might have its unique assemblage of precursory phenomena, each of them associated with a particular type of signal propagating through the surrounding medium. As a consequence, an earthquake prediction researcher may consider a particular assemblage of precursory signals for every particular seismogenic structure: the unique precursory fingerprint of that unique seismogenic structure. The task is to find the precursory fingerprint of the studied seismogenic structure. What would a strategy that takes this task seriously look like?
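The fingerprint concept can be made concrete as a simple data structure. The sketch below is purely hypothetical (the structure names and signal assemblages are invented for illustration): each seismogenic structure carries its own assemblage of precursory signal types, and observed anomalies are scored against that structure's own fingerprint rather than against a universal precursor list.

```python
# Hypothetical precursory fingerprints: one signal assemblage per
# unique seismogenic structure (names and signal lists are invented).
FINGERPRINTS = {
    "Vrancea": {"radon", "electromagnetic", "geoacoustic"},
    "StructureB": {"strain_rate", "groundwater_level", "thermal_infrared"},
}

def fingerprint_score(structure, observed_anomalies):
    """Fraction of this structure's own fingerprint that is currently
    anomalous; signals outside the fingerprint are ignored."""
    fp = FINGERPRINTS[structure]
    return len(fp & set(observed_anomalies)) / len(fp)
```

A universal-precursor scheme would score every structure against the same list; here a radon anomaly raises the score for the Vrancea entry but is simply ignored for StructureB, mirroring the claim that precursory phenomenology is unique to each structure.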

Outlines of a Possible Internationally Coordinated Research Strategy

The conceptual framework of the envisaged strategy involves two postulates: 1) precursory signals do exist and are, in principle, detectable; 2) the concept of the precursory fingerprint of individual seismogenic structures is valid. Instead of looking for universally valid precursors, the strategy targets a less ambitious goal: identifying the precursory fingerprint of individual seismogenic structures, which (as a starting assumption) has merely local validity. The precursory fingerprint has to be sought at as many individual seismogenic structures as possible, ideally covering all types of tectonic regimes and stress fields. This is achievable by monitoring selected well-known structures worldwide at purposefully designed and adequately equipped observatories hosting a wide range of the highest-resolution sensors currently available, covering all possible types of precursory signals (seismic, physical, chemical, and biological), in order to assure a multisensor/multiparameter monitoring system. It is worth noting that because “a majority of the reported earthquake precursor data found during the past few decades have been proven to be nonseismological (mainly electromagnetic)” (Hayakawa, 2018), physical precursors must be adequately represented in the research programs through both ground-based and satellite-borne instruments (e.g., those on board the French DEMETER satellite; Parrot and Li, 2018), in order to understand the effects of lithosphere-atmosphere-ionosphere-magnetosphere coupling (e.g., Hattori and Han, 2018; Hayakawa, 2018; Hayakawa et al., 2018; Pulinets et al., 2018) at the local scale. Preseismic atmospheric thermal anomalies are among the signals that can be effectively detected by satellite-held instruments (Ouzounov et al., 2018a; Tramutoli et al., 2018a).
Fu and Lee (2018) also advocate a “systematic characterization of all possible precursors” that “may help us.” Tramutoli et al. (2018b), drawing on a rich literature, listed a large number of precursors identified (mostly post factum!) at various locations as preceding strong earthquakes during the many-decade-long modern history of earthquake prediction research: deformation, geochemical, thermal infrared, latent heat, earthquake clouds and lights, air temperature and humidity, atmospheric pressure, VHF and VLF signals, and GPS-associated total electron content; interestingly, biological precursors are missing from that list. The potential benefits of “geofluid monitoring” (including hydrogeologic measurements and geochemical analyses) of earthquake-prone areas were recently discussed in great detail by Martinelli (2020) as part of the research arsenal in the quest for diagnostic precursors. Ongoing geofluid monitoring research is mentioned by the same author at test sites located in China, Iceland, Japan, the Russian Federation, Taiwan, and the USA. However, he warns about the inherent limitations of that type of research: “in principle, all earthquakes occurring in compressional tectonic regimes cannot be forecasted by geofluid monitoring.”

Therefore, there is an extremely rich “offer” of potential preearthquake phenomena, and related parameters, to be observed, measured, and monitored, from which an n-sized sensor matrix can be assembled.

Once installed, a matrix of n (say, 50) different sensors, measuring many more (say, 80) parameters, will monitor each selected structure, trying to capture precursory signals preceding a potentially destructive earthquake. One may suppose that only a few of the sensors (say, four of the 50), with eight measured parameters, will be activated before imminent seismic events, and only above a certain magnitude threshold (also characteristic of the monitored structure) depending on the sensors’ sensitivity. The number and type of activated sensors and above-the-threshold parameters would provide the precursory fingerprint of the individual seismogenic structure. Experts on each precursory phenomenon may establish the significant threshold values of the monitored parameters (e.g., following Shebalin et al., 2006) to distinguish signal from noise and anomalous behavior from background activity. Artificial intelligence and machine learning involving pattern-recognizing algorithms (Shebalin et al., 2006, and references within) can also be implemented to evaluate sensor activity. Such extremely powerful modern computing tools are able not only to process and evaluate the responses of individual sensors but also to point out complex correlation patterns across sensor responses. Boxberger et al. (2017), for instance, concluded that the innovative “multi-Parameter Wireless Sensing system allows different sensor types to be combined with high-performance computing and communication components.”
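To make the fingerprint idea concrete, the following minimal sketch represents a fingerprint as the set of monitored parameters exceeding structure-specific significance thresholds. All sensor names, readings, and threshold values are hypothetical, and the normalized-reading framing is an assumption for illustration, not a method prescribed in the literature cited above:

```python
# Illustrative sketch: a precursory "fingerprint" as the set of monitored
# parameters whose readings exceed structure-specific significance thresholds.
# All sensor names, readings, and thresholds below are hypothetical.

def precursory_fingerprint(readings, thresholds):
    """Return the sorted list of parameters whose reading exceeds its threshold."""
    return sorted(p for p, value in readings.items()
                  if p in thresholds and value > thresholds[p])

# Hypothetical significance thresholds for one monitored structure,
# e.g., established per parameter following the spirit of Shebalin et al. (2006).
thresholds = {"radon": 3.0, "soil_CO2": 2.5, "ULF_magnetic": 4.0, "well_level": 1.8}

# Hypothetical normalized readings (e.g., scaled against background activity).
readings = {"radon": 4.2, "soil_CO2": 1.1, "ULF_magnetic": 5.3, "well_level": 2.0}

print(precursory_fingerprint(readings, thresholds))
# ['ULF_magnetic', 'radon', 'well_level']
```

In this toy run, three of the four parameters are above threshold, so the structure's fingerprint would be that three-element set; a real system would of course involve far more parameters and expert-calibrated thresholds.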

Such an endeavor involves large-scale international effort, leadership, coordination, and funding of decades-long observations, measurements, and experiments (Wyss, 1997: “long-term data sets are needed to make progress in earthquake prediction research”). As Wyss (2001) envisaged, “leadership is necessary to raise the funding to an adequate level and to involve the best minds in this promising, potentially extremely rewarding, but controversial research topic.”

The leadership could be assumed by IUGG's IASPEI Commission, which has already taken some sparse initiatives in this direction, as follows.

Resolution 1 of IASPEI RESOLUTIONS adopted at the closing plenary meeting in Santiago, Chile (October 2005), on an International Active-Monitoring Network expressed the need for international cooperation in this domain with the following words: “IASPEI encourages the formation of an International Network of Active Monitoring Test Sites in order to facilitate collaborative seismic and geoelectrical studies of crustal deformation; active monitoring of seismically active zones, and exchange of technical information, data and personnel” (IASPEI, 2020).

Of the 14 IASPEI Resolutions and Statements in the period 1991–2017 (IASPEI, 2020), two explicitly address earthquake prediction issues by recommending the “establishment of a global network of Test Areas for Earthquake Prediction corresponding to the major types of geotectonic settings: Kamchatka (plate subduction), Iceland (plate spreading), Yunnan, China (intercontinental strike-slip), Gulf of Corinth, Greece (continental rifting), and Beijing (intra-continental),” urging “all nations to collaborate to extend coverage to the full globe” and recommending “its Commissions and Committees to pursue the task in the years ahead” ( ftp://ftp.iaspei.org/pub/resolutions/resolutions_1997_thessaloniki.pdf ).

Likewise, of the 14 ESC (European Seismological Commission) business meetings (1996–2015) ( http://www.esc-web.org/minutes-of-esc-meetings.html ), a few (Reykjavik, 1996: http://www.esc-web.org/minutes-of-esc-meetings/79-european-seismological-commission/88-esc-buisness-meeting-reykjavik-iceland-september-12-1996.html ; Tel Aviv, 1998: http://www.esc-web.org/minutes-of-esc-meetings/79-european-seismological-commission/90-esc-buisness-meeting-tel-aviv-1998.html ) explicitly addressed earthquake prediction issues, expressing the need for international cooperation.

Although disparate so far and not based on a unified strategic concept, such initiatives are valuable precedents worth following and enhancing in a more consistent manner, to ensure international professional guidance and leadership for the implementation of a global earthquake precursor research strategy such as that proposed here.

The long-term strategy involves three phases: 1) experimental, 2) validation/extension, and 3) implementation.

The experimental phase (or “learning stage,” acc. to Peresan, 2018) aims at checking the validity of the precursory fingerprint concept by setting up a small number of observatories at/near the best-studied seismogenic structures worldwide, each equipped with a matrix of as many kinds of sensors as possible, consistent with Birkhäuser's (2004) statement: “progress in earthquake science and prediction over the next few decades will require increased monitoring in several active areas.” Sensors designed to capture primary and/or induced precursory signals will measure a high number of parameters, combined with an array of seismographs detecting changes in background seismicity (Sammis and Sornette, 2002; Shebalin et al., 2006; Peresan, 2018) to recognize foreshock activity (Papadopoulos et al., 2018). Other sensors are intended to detect subtle changes in the physical parameters (e.g., temperature, mass-flux, and gas flow rate fluctuations) and composition of fluids (dissolved ions, dissolved gases, soil gas, CO2, CH4, He, H, radon, and thoron) (Zoran et al., 2012; Oh and Kim, 2015; Martinelli, 2020) circulating in the crust (e.g., Tsunogai and Wakita, 1995; Claesson et al., 2004; Hartmann et al., 2005; Fu and Lee, 2018; Martinelli and Dadomo, 2018). Ground deformation and other space-monitorable atmospheric and ionospheric signals (e.g., Sgrigna et al., 2007; Hayakawa et al., 2018; Tramutoli et al., 2018a) might be considered to complete the ground-based monitoring system. Still other sensors will monitor the behavior of living creatures under stress conditions induced by changes in their physical and chemical environment due to an impending megaseismic event. Biological sensors may include all levels of organization across the biosphere, from bacteria to the human sensor (e.g., Polyakov et al., 2015), including vegetal life.

In the experimental phase, laboratory investigations are also needed in specialized high-performance labs in order to devise and test adequate sensors, i.e., to assess the capability of various instruments and methods to be used as seismic sensors, including living organisms as potential biological sensors. Innovative approaches are welcome. A worldwide network of laboratories performing experimental work on precursor-sensitive instruments and methods would be required. New enhanced-sensitivity sensors resulting from the lab investigations will be implemented and tested at the monitoring observatories.

Another set of experiments aims at identifying the most suitable sensor emplacement sites for certain types of parameters to be monitored. It might be based on the recognition that not all Earth surface points are equivalent in terms of signal-receiver capability. In other words, certain types of sensors have to be emplaced at locations where the signal/noise ratio is the highest in the vicinity of the targeted seismogenic structure. One may speculate that those most “sensitive sites” are located at the endpoints of signal transmission trajectories along which the energy/information loss of the precursory signal is minimal. For instance, crust-crossing volcanic conduits with no intervening magma-chambers may serve as upside-down antennas (waveguides) for signal transmission ( Szakács, 2011 ), given that any possible geophysical signal will travel faster and with less loss of information energy along such a more homogenous medium than along any other crustal trajectory. Likewise, deep crustal fractures are privileged transmission paths for fluids-carrying geochemical signals. For example, in recent years, several multiparameter continuous soil gas and gamma-ray monitoring stations have been deployed in Taiwan, “strategically located near active faults” ( Tsai et al., 2018 ). Likewise, Fu and Lee (2018) found that “the Rn precursory anomalies were not observed at all the stations because the crust was not homogeneous” (i.e., some of the stations are located in “sensitive” sites, whereas others are not). 
Martinelli and Dadamo (2018) also state, citing a number of previous works, that “possible geochemical and hydrogeologic precursors have been observed hours to months before some strong earthquakes in ‘sensitive’ monitoring sites among many insensitive sites.” Martinelli (2020) reiterated the idea of monitoring location sensitivity in his review article: “sensitive locations [for geofluid monitoring] are generally found along active faults, in thermal springs, or in deep wells that reach confined reservoirs capable of acting as natural strain meters.”
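The site-selection logic sketched above reduces, in its simplest form, to ranking candidate emplacement sites by estimated signal-to-noise ratio. The following toy sketch illustrates only that ranking step; the candidate site names and signal/noise figures are invented for demonstration and do not come from any of the cited studies:

```python
# Illustrative sketch: rank candidate sensor emplacement sites by their
# estimated signal-to-noise ratio. All site names and figures are hypothetical.

def rank_sites(candidates):
    """Return candidates sorted from highest to lowest signal/noise ratio."""
    return sorted(candidates, key=lambda s: s["signal"] / s["noise"], reverse=True)

candidates = [
    {"name": "volcanic_conduit", "signal": 8.0, "noise": 1.0},  # SNR 8.0
    {"name": "deep_fracture",    "signal": 6.0, "noise": 1.5},  # SNR 4.0
    {"name": "generic_site",     "signal": 2.0, "noise": 2.0},  # SNR 1.0
]

print([s["name"] for s in rank_sites(candidates)])
# ['volcanic_conduit', 'deep_fracture', 'generic_site']
```

In practice, of course, the hard problem is estimating the signal and noise figures in the first place, which is exactly what the purpose-oriented field experiments described here would have to establish.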

Therefore, in the experimental phase, purpose-oriented and interdisciplinary investigations are also needed to identify and map the most suitable sensor emplacement sites.

The duration of the experimental phase depends on the seismic activity of the monitored structures: at least one high-magnitude event has to occur in order to evaluate the effectiveness of the monitoring system and to find out whether the observed structure produced significant precursory signals detected by the sensor matrix or not. In other words, can that particular seismic structure be characterized by a specific precursory fingerprint or not?

In the most optimistic scenario, the expected outcome of the experimental phase would be the emergence of a reliable methodology to identify the precursory fingerprint of at least part of the monitored structures. If no such result is obtained for any of the observed structures, one has to evaluate whether the precursory fingerprint project should be abandoned or continued, at least until the next megaseismic event occurs.

In the validation/extension phase (following the experimental phase only if the latter is considered successful or, at least, meaningful), the experience gained during the first phase will be extended to more seismogenic structures worldwide in order to 1) validate the results at other structures similar to those where the experiments were successful and 2) enhance and refine the multiparameter sensor matrix for those structures where negative results were obtained in the experimental phase, maintaining their monitoring observatories instead of dismantling them. Again, this phase's duration depends on the occurrence of major earthquakes.

The implementation stage will consider only those seismogenic structures where the first two stages provided positive results, i.e., where a characteristic precursory fingerprint was readily identified. As a result, a worldwide network of multiparameter monitoring stations will be operational at a number of well-known seismogenic structures, including some of those posing the highest hazard and risk. The multiparameter monitoring system will be rationalized and optimized by eliminating the inert (i.e., nonresponsive) infrastructure from the sensor matrix. Meanwhile, the sensitivity of the remaining sensors will be continuously improved through further onsite experimental work, and the results will be shared with all active monitoring stations worldwide.

Attempts at setting up monitoring systems to detect seismic precursory signals are not without precedent, as Martinelli (2020) has shown. However, they were limited territorially to particular countries, such as the Soviet Union and China, and to particular periods, e.g., 1970–1990 in the Soviet Union (Martinelli, 2020), all prompted by the occurrence of damaging earthquakes in the surveyed areas. Such efforts were basically national endeavors, internationally uncoordinated and based on no underlying strategic concept other than the desire to identify universally valid individual precursors, or a group of “key” (i.e., diagnostic, acc. to Jordan et al., 2011) precursors.

It is possible that the final outcome of the multidecadal research effort based on the strategy sketched above will be a small number of seismogenic structures whose precursory fingerprints are readily identified and where a reliable monitoring system is implemented based on an optimized sensor matrix. The worst-case scenario implies that no such case will be found; in that case, the whole project will be abandoned and no more money invested in it. In the most optimistic scenario, the precursory fingerprints of a significant number of seismogenic structures will be found. Moreover, one may envisage that some kind of regularity among the identified precursory fingerprints will be revealed. For instance, it might turn out that a particular kind of stress regime, or a particular genetic type of earthquake, manifests itself via a particular and recognizable type of precursory fingerprint, allowing the generalization of the findings to other structures belonging to the same class. Pattern-recognizing artificial intelligence would help in sorting and evaluating the results in the most optimistic outcome scenario. More innovative approaches, such as the machine learning that Rouet-Leduc et al. (2017) reported for laboratory earthquakes, might be applied to the time-series datasets gathered at monitoring stations for information evaluation.
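As a minimal illustration of the kind of time-series screening such computing tools might automate, the sketch below flags departures of a monitored parameter from its recent background activity. The trailing-window z-score rule and the synthetic data are assumptions chosen for demonstration; this is not the method of Rouet-Leduc et al. (2017) or of any other cited study:

```python
# Illustrative sketch: flag points in a monitored parameter's time series that
# depart from the trailing background window by more than k standard deviations.
# The rule and the data are hypothetical, for demonstration only.
from statistics import mean, stdev

def anomalous_points(series, window=5, k=3.0):
    """Return indices where a value exceeds k trailing-window standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        background = series[i - window:i]          # trailing background window
        mu, sigma = mean(background), stdev(background)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Hypothetical monitored parameter: quiet background with one abrupt excursion.
series = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 6.0, 1.0, 1.1]
print(anomalous_points(series))
# [7]
```

Note that once the excursion enters the background window, later quiet values are no longer flagged; any real screening pipeline would need far more robust statistics and expert-set thresholds, per parameter and per structure.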

The final outcome of the proposed scientific endeavor, with its benefits in terms of new knowledge and research methodology, is comparable to other large-scale scientific adventures of humankind (such as the SETI program), at a similar or lower cost and with a similar, if not higher, chance of success. In contrast, the pessimistic approach to the earthquake prediction puzzle (i.e., the “impossible in principle” postulate, which posits that any effort to solve it is futile) is of no benefit to science.

These conclusions are fully consistent with those of Wyss (2001) who stated that “earthquake prediction is difficult but not impossible,” “we must exercise patience and not expect spectacular success quickly,” and any expectations are unrealistic “unless the field of prediction research is reformed and well-funded.”

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Ágnes Gál is thanked for her help in acquiring up-to-date literature on the subject. Simona Szakács helped to improve the English expression of the text. Two reviewers are acknowledged for their thoughtful comments on the early version of the manuscript. Giovanni Martinelli, Antonella Peresan, and Ying Li contributed to improving the final version of the manuscript with their valuable comments and recommendations.

Bachev, H. (2014). Impacts of March 2011 earthquake, tsunami and Fukushima nuclear accident in Japan. SSRN Electron. J . 125. doi:10.2139/ssrn.2538949


Birkhäuser, B. (2004). Rethinking earthquake prediction. Pure Appl. Geophys . 155 (2–4), 207–232. doi:10.1007/978-1-4020-4399-4_106

Blanpied, M. (2008). Can we predict earthquakes? USGS CoreCast Available at: http://www.usgs.gov/corecast/details.asp?ID=76 (Accessed May 20, 2008).


Boxberger, T., Fleming, K., Pittore, M., Parolai, S., Pilz, M., and Mikulla, S. (2017). The multi-parameter wireless sensing system (MPwise): its description and application to earthquake risk mitigation. Sensors . 17, 10. doi:10.3390/s17102400

Claesson, L., Skelton, A., Graham, C., Dietl, C., Mörth, M., Torssander, P., et al. (2004). Hydrogeochemical changes before and after a major earthquake. Geology . 32 (8), 641–644. doi:10.1130/G20542.1

Crampin, S. (2012). Comment on the report “operational earthquake forecasting” by the international commission on earthquake forecasting for civil protection. Ann. Geophys . 55, 1. doi:10.4401/ag-5516

Crockett, R. G., Gillmore, G. K., Phillips, P. S., Denman, A. R., and Groves-Kirkby, C. J. (2006). Radon anomalies preceding earthquakes which occurred in the UK, in summer and autumn 2002. Sci. Total Environ . 364 (1–3), 138–148. doi:10.1016/j.scitotenv.2005.08.003


Fu, Ch.-Ch., and Lee, L.‐Ch. (2018). “Continuous monitoring of fluid and gas geochemistry for seismic study in Taiwan,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 199–218.

Geller, R. J. (1991). Unpredictable earthquakes. Nature . 353, 612.

Geller, R. J., Jackson, D. D., Kagan, Y. Y., and Mulargia, F. (1996). Earthquakes cannot be predicted. Science . 275 (5306), 1616. doi:10.1126/science.275.5306.1616

Hartmann, J., Berner, Z., Stüben, D., and Henze, N. (2005). A statistical procedure for the analysis of seismotectonically induced hydrochemical signals: a case study from the Eastern Carpathians, Romania. Tectonophysics . 405, 77–98. doi:10.1016/j.tecto.2005.05.014

Hattori, K., and Han, P. (2018). “Statistical analysis and assessment of ultralow frequency magnetic signals in Japan as potential earthquake precursors,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 229–240.

Hayakawa, M., Asano, T., Rozhnoi, A., and Solovieva, M. (2018). “Very‐low‐ to low‐frequency sounding of ionospheric perturbations and possible association with earthquakes,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 277–304.

Hayakawa, M. (2018). “Earthquake precursor studies in Japan,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 7–18.

Hough, S. (2010). Predicting the unpredictable. The tumultuous science of earthquake prediction . Princeton: Princeton University Press , 280.

IASPEI (2020). Resolutions & statements. Available at: http://www.iaspei.org/documents/resolutions-statements (Accessed October 2020).

Jordan, T. H., Chen, Y.-T., Gasparini, P., Madariaga, R., Main, I., Marzocchi, W., Papadopoulos, G., et al. (2011). Operational earthquake forecasting: state of knowledge and guidelines for utilization. Report by the International Commission on Earthquake Forecasting for Civil Protection, Istituto Nazionale di Geofisica e Vulcanologia. Ann. Geophys . 54 (4), 391.

Kagan, Y. Y. (1997). Are earthquakes predictable?. Geophys. J. Int . 131 (3), 505–525. doi:10.1111/j.1365-246X.1997.tb06595.x

Kelman, I. (2019). Axioms and actions for preventing disasters. Prog. Disaster Sci . 2, 100008. doi:10.1016/j.pdisas.2019.100008

Main, I. (1999a). Is the reliable prediction of individual earthquakes a realistic scientific goal? Nature . doi:10.1038/nature28107

Main, I. (1999b). Earthquake prediction: concluding remarks. Nature . doi:10.1038/nature28133

Martinelli, G. (2020). Previous, current, and future trends in research into earthquake precursors in geofluids. Geosciences . 10, 189. doi:10.3390/geosciences10050189

Martinelli, G. (2018). “Contributions to a history of earthquake prediction research,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 67–76.

Martinelli, G., and Dadomo, A. (2018). “Geochemical and fluid‐related precursors of earthquakes: previous and ongoing research trends,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 219–228.

Matthews, R. A. J. (1997). Decision-theoretic limits on earthquake prediction. Geophys. J. Int . 131 (3), 526–529. doi:10.1111/j.1365-246X.1997.tb06596.x

Oh, H. Y., and Kim, G. (2015). A radon-thoron isotope pair as a reliable earthquake precursor. Sci. Rep . 5, 13084. doi:10.1038/srep13084

Ouzounov, D., Pulinets, S., Kafatos, M. C., and Taylor, P. (2018a). “Thermal radiation anomalies associated with major earthquakes,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 259–274.

Ouzounov, D., Pulinets, S., Liu, J.-Y., Hattori, K., and Han, P. (2018b). “Multiparameter assessment of pre‐earthquake atmospheric signals,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 339–359.

Papadopoulos, G., Minadakis, G., and Orfanogiannaki, K. (2018). “Short‐term foreshocks and earthquake prediction,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 127–147.

Parrot, M., and Li, M. (2018). “Statistical analysis of the ionospheric density recorded by the DEMETER satellite during seismic activity,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 319–328.

Peresan, A., Kossobokov, V., and Panza, G. F. (2012). Operational earthquake forecast/prediction. Rend. Fis. Acc. Lincei . 23, 131–138. doi:10.1007/s12210-012-0171-7

Peresan, A. (2018). “Recent developments in the detection of seismicity patterns for the Italian region,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 149–171.

Polyakov, Y. S., Ryabinin, G. V., Solovyeva, A. B., and Timashev, S. F. (2015). Is it possible to predict strong earthquakes? Pure Appl. Geophys . 172 (7), 1945–1957.

Press, F. (1968). A strategy for an earthquake prediction research program. Tectonophysics . 6 (1), 11–15. doi:10.1016/0040-1951(68)90022-X

Pulinets, S., Ouzounov, D., Karelin, A., and Davidenko, D. (2018). “Lithosphere–atmosphere–ionosphere–magnetosphere coupling—a concept for pre‐earthquake signals generation,” in Pre-earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 79–98.

Pulinets, S., Ouzounov, D., and Petrukhin, A. (2016). Multiparameter monitoring of short-term earthquake precursors and its physical basis. implementation in the Kamchatka region. E3S Web Conf . 11, 00019. doi:10.1051/e3sconf/20161100019

Radulian, M., Mandrescu, N., Panza, G. F., Popescu, E., and Utale, A. (2000). Characterization of seismogenic zones of Romania. Pure Appl. Geophys . 157, 57–77. doi:10.1007/PL00001100

Rouet-Leduc, B., Hulbert, C., Lubbers, N., Barros, K., Humphreys, C. J., and Johnson, P. A. (2017). Machine learning predicts laboratory earthquakes. Geophys. Res. Lett . 44, 9276–9282. doi:10.1002/2017GL074677

Ryabinin, G. V., Polyakov, Yu. S., Gavrilov, V. A., and Timashev, S. F. (2011). Identification of earthquake precursors in the hydrogeochemical and geoacoustic data for the Kamchatka peninsula by flicker-noise spectroscopy. Nat. Hazards Earth Syst. Sci . 11, 541–548. doi:10.5194/nhess-11-541-2011

Sammis, C. G., and Sornette, D. (2002). Positive feedback, memory, and the predictability of earthquakes. Proc. Natl. Acad. Sci. U.S.A . 99 (Suppl. 1), 2501–2508. doi:10.1073/pnas.012580999

Sgrigna, V., Buzzi, A., Conti, L., Picozza, P., Stagni, C., and Zilpimiani, D. (2007). Seismo-induced effects in the near-earth space: combined ground and space investigations as a contribution to earthquake prediction. Tectonophysics . 431 (1–4), 153–171. doi:10.1016/j.tecto.2006.05.034

Shebalin, P., Keilis-Borok, V., Gabrielov, A., Zaliapin, I., and Turcotte, D. (2006). Short-term earthquake prediction by reverse analysis of lithosphere dynamics. Tectonophysics . 413, 63–75. doi:10.1016/j.tecto.2005.10.033

Szakács, A. (2011). Earthquake prediction using extinct monogenetic volcanoes: a possible new research strategy. J. Volcanol. Geotherm. Res . 201, 404–411. doi:10.1016/j.jvolgeores.2010.06.015

Telford, J., and Cosgrave, J. (2006). Tsunami evaluation coalition: synthesis report. London: TEC Available at: https://www.sida.se/contentassets/f3e0fbc0f97c461c92a60f850a35dadb/joint-evaluation-of-the-international-response-to-the-indian-ocean-tsunami_3141.pdf (Accessed March 2020).

Tramutoli, V., Filizzola, C., Genzano, N., and Lisi, M. (2018a). “Robust satellite techniques for detecting preseismic thermal anomalies,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 243–258.

Tramutoli, V., Genzano, N., Lisi, M., and Pergola, N. (2018b). “Significant cases of preseismic thermal infrared anomalies,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 331–338.

Tsai, Y.-B., Liu, J. Y., Shin, T.-C., Yen, H.-Y., and Chen, C.-H. (2018). “Multidisciplinary earthquake precursor studies in Taiwan: a review and future prospects,” in Pre‐earthquake processes: a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 41–65.

Tsunogai, U., and Wakita, H. (1995). Precursory chemical changes in ground water: Kobe earthquake, Japan. Science . 269, 61–63. doi:10.1126/science.269.5220.61

U.S. Geological Survey (2013). Earthquakes with 50,000 or more deaths . Archive Available at: http://earthquake.usgs.gov/earthquakes/world/most_destructive.php (Accessed March 2013).

Uyeda, S., and Nagao, T. (2018). “International cooperation in pre‐earthquake studies: history and new directions,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 3–6.

Varotsos, P., Alexopoulos, K., Nomicos, K., and Lazaridou, M. (1986). Earthquake prediction and electric signals. Nature . 322, 120. doi:10.1038/322120a0

Wang, H., Zhang, Y., Liu, J., Shen, X., Yu, H., Jiang, Z., et al. (2018). “Pre‐earthquake observations and their application in earthquake prediction in China: a review of historical and recent progress,” in Pre‐earthquake processes. a multidisciplinary approach to earthquake prediction studies . Editors D. Ouzounov, S. Pulinets, K. Hattori, and P. Taylor ( John Wiley & Sons ), 19–39.

Wang, K., Chen, Q. F., Shihong, S., and Wang, A. (2006). Predicting the 1975 Haicheng earthquake. Bull. Seismol. Soc. Am . 96 (3), 757–795. doi:10.1785/0120050191

Wyss, M. (1997). Second round of evaluations of proposed earthquake precursors. Pure Appl. Geophys . 149, 3–16. doi:10.1007/BF00945158

Wyss, M. (2001). Why is earthquake prediction research not progressing faster? Tectonophysics . 338 (3–4), 217–223. doi:10.1016/S0040-1951(01)00077-4

Yuce, G., Ugurluoglu, D., Adar, N., and Oeser, V. (2010). Monitoring of earthquake precursors by multi-parameter stations in Eskisehir region (Turkey). Appl. Geochem . 25 (4), 572–579. doi:10.1016/j.apgeochem.2010.01.013

Zafrir, H., Barbosa, S., Levintal, E., Weisbrod, N., Horin, Y. B., and Zalevsky, Z. (2020). The impact of atmospheric and tectonic constraints on radon-222 and carbon dioxide flow in geological porous media—a dozen-year research summary. Front. Earth Sci . 8, 433. doi:10.3389/feart.2020.559298

Zoran, M., Savastru, R., Savastru, D., Chitaru, C., Baschir, L., and Tautan, M. (2012). Monitoring of radon anomalies in South-Eastern part of Romania for earthquake surveillance. J. Radioanal. Nucl. Chem . 293, 769–781. doi:10.1007/s10967-012-1780-4.

Keywords: earthquake prediction, precursor signal, paradigm shift, strategy, sensors, experiment

Citation: Szakács A (2021) Precursor-Based Earthquake Prediction Research: Proposal for a Paradigm-Shifting Strategy. Front. Earth Sci. 8:548398. doi: 10.3389/feart.2020.548398

Received: 02 April 2020; Accepted: 03 December 2020; Published: 15 January 2021.



Title: A Prospect of Earthquake Prediction Research

Abstract: Earthquakes occur because of abrupt slips on faults due to accumulated stress in the Earth's crust. Because most of these faults and their mechanisms are not readily apparent, deterministic earthquake prediction is difficult. For effective prediction, complex conditions and uncertain elements must be considered, which necessitates stochastic prediction. In particular, a large amount of uncertainty lies in identifying whether abnormal phenomena are precursors to large earthquakes, as well as in assigning urgency to the earthquake. Any discovery of potentially useful information for earthquake prediction is incomplete unless quantitative modeling of risk is considered. Therefore, this manuscript describes the prospect of earthquake predictability research to realize practical operational forecasting in the near future.
Comments: Published in Statistical Science by the Institute of Mathematical Statistics
Subjects: Methodology (stat.ME); Geophysics (physics.geo-ph)
Report number: IMS-STS-STS439
Journal reference: Statistical Science 2013, Vol. 28, No. 4, 521-541



  • Open access
  • Published: 20 September 2024

Self-evolving artificial intelligence framework to better decipher short-term large earthquakes

  • In Ho Cho
  • Ashish Chapagain

Scientific Reports volume 14, Article number: 21934 (2024)

  • Computational science
  • Natural hazards

Large earthquakes (EQs) occur at surprising loci and timings, and their description remains a long-standing enigma. Finding answers via traditional approaches or the recently emerging machine learning (ML)-driven approaches is formidably difficult due to data scarcity, interwoven multiple physics, and absent first principles. This paper develops a novel artificial intelligence (AI) framework that can transform raw observational EQ data into ML-friendly new features via basic physics and mathematics, and that can self-evolve in a direction to better reproduce short-term large EQs. An advanced reinforcement learning (RL) architecture is placed at the highest level to achieve self-evolution. It incorporates transparent ML models to reproduce the magnitude and spatial location of large EQs ( \(M_w \ge 6.5\) ) weeks before the failure. Verifications with 40 years of EQs in the western U.S. and comparisons against a popular EQ forecasting method are promising. This work adds a new dimension of AI technologies to large EQ research. The developed AI framework will help establish a new database of all EQs in terms of ML-friendly new features and will continue to self-evolve in a direction of better reproducing large EQs.


Introduction

Predicting large earthquakes (EQs) within a practical time window and at a specific location remains a long-standing enigma. A quote from 1 well reflects our limit: “Short-term deterministic earthquake prediction remains elusive and is perhaps impossible...” Despite notable advances in the research communities 2,3,4,5,6,7,8,9,10,11,12,13,14,15,16, the capability of existing methods to predict an “individual” large EQ’s specific location and magnitude is limited 17,18. Recently, geophysics communities have actively applied advanced machine learning (ML) methods 19,20,21,22,23,24,25,26,27, advancing our knowledge about EQs and improving EQ forecasting. However, there is also doubt about whether greater sophistication of ML methods necessarily increases prediction accuracy 28,29. In broad computational science, recent rule-learning ML methods 30,31,32,33,34,35,36 show some promise, but purely EQ data-oriented, ML-driven exploration of hidden mechanisms is in its infancy. When it comes to large EQs, obtaining reliable, informative large data sets is formidably difficult, and internal data are intrinsically inaccessible due to various technological limits. We still lack precise descriptions of the earth’s lithosphere before large EQs. Fundamental limitations regarding large EQs (i.e., lack of large quality data, multi-physics interactions, and unknown principles) hinder the direct adoption of existing ML methods as recently done in geophysics, e.g., deep learning for EQ focal mechanisms 37, logistic regression for EQ-induced landslide susceptibility mapping 38, and extreme gradient boosting for explosion/mining-induced EQ classification 39. The proposed artificial intelligence (AI) framework seeks to generate ML-friendly new data and let ML-driven prediction rules gradually evolve, for which reinforcement learning (RL) offers a well-established architecture. The proposed AI framework is inclusive in the sense that promising ML models (e.g., 37,38,39) can be incorporated as prediction rules, with their evolution managed by the RL architecture.

To add a new research dimension to existing attempts, this paper develops a novel AI framework with self-evolving capability that can transform raw observational EQ data into ML-friendly features and that can evolve in a direction to better reproduce short-term large EQs. Contrary to existing EQ forecasting/prediction methods, this paper focuses on deterministic reproduction of large EQs weeks before the failures. Figure 1B compares the proposed framework against existing EQ forecasting approaches 13,14,15,16,17. Although the ranges and scopes of EQ forecasting approaches are broad, and some overlaps may exist 2, Fig. 1B intentionally draws the separation line to emphasize the notable differences of the proposed framework from forecasting: this framework pursues (i) deterministic reproductions (not probabilistic average rates of EQs), (ii) of large events with \(M_w > 6.5\) (not wide ranges of magnitudes), (iii) in terms of specific loci and magnitudes of the peaks (not probabilistic counts in discrete spatial cells and subdivided magnitude bins), (iv) primarily a few weeks or a month before the main EQ (not including long-term events). Undoubtedly, EQ forecasting 13,14,15,16,17 plays an important role in understanding and mitigating probabilistic seismic hazards based on robust statistics and models with long histories (e.g., 40,41,42) and international collaborations (e.g., the Collaboratory for the Study of Earthquake Predictability). To some extent, Fig. 1B illustrates how the proposed framework will contribute to the existing EQ forecasting research paradigm by providing additional angles with new AI-driven technologies. This paper hypothesizes that available EQ catalogs conceal rules behind the occurrence of large EQs, which are hitherto unknown but can be learned by a fusion of data, geophysics, and ML. The author’s initial works offer proofs of concept that this hypothesis is promising 43,44.
The proposed framework transforms decades-long EQ catalogs into ML-friendly new features via multi-layered data transformation, leveraging basic physics and mathematics 44. The transparent ML can then distinguish and remember individual large EQs with the aid of the new features, supported by the “uniqueness” of the new features of individual large EQs 43. This framework’s prediction pursues a threefold objective: magnitudes, three-dimensional (3D) loci, and short-term timing of large EQs, aiming at deterministic, short-term reproductions 44. The framework seeks self-evolving capability by harnessing an RL architecture at the highest level so that consistent improvement can be achieved autonomously by RL with new data (Fig. 1). This paper presents the architecture and primary cores of the self-evolving AI framework for short-term large EQs. After initial training with the past 40 years of EQ data in the western U.S. region, this paper conducts pseudo-prospective short-term predictions/reproductions of large EQs. Comparison results support the promising performance of this framework relative to UCERF3-ETAS, one of the well-established EQ forecasting methods.

Seismogenesis agent and environment

Figure 1

( A ) The highest level analogy between the real-world seismogenesis and the proposed reinforcement learning (RL); ( B ) Conceptual difference between the EQ forecasting methods and the EQ prediction used in this paper. ( C ) Overall architecture of the proposed AI framework with reinforcement learning (RL) schemes for autonomous improvement in reproduction of large EQs: Core (1) transforms EQs data into ML-friendly new features ( \(\mathcal{I}\mathcal{I}\) ); Core (2) uses ( \(\mathcal{I}\mathcal{I}\) ) to generate a variety of new features in terms of pseudo physics ( \(\mathcal {U}\) ), Gauss curvatures ( \(\mathbb {K}\) ), and Fourier transform ( \(\mathbb {F}\) ), giving rise to unique “state” \(S \in \mathcal {S}\) at t ; Core (3) selects “action” ( \(A \in \mathcal {A}\) )—a prediction rule—according to the best-so-far “policy” \(\pi \in \mathcal {P}\) ; Core (4) calculates “reward” ( R )—accumulated prediction accuracy; Core (5) improves the previous policy \(\pi \) ; Core (6) finds a better prediction rule (action), enriching the action set \(\mathcal {A}\) .

There are two analogous entities between the real-world seismogenesis and the proposed RL: the seismogenesis agent and the environment (Fig. 1A). In general RL, given the present state ( \(S^{(t)}\) ), an “agent” takes an action ( \(A^{(t)}\) ), obtains a reward ( \(R^{(t+1)}\) ), and affects the environment, leading to the next state ( \(S^{(t+1)}\) ). These terminologies are defined in the typical RL setting, and their analogies to the present framework are summarized in Fig. S1. The “environment” resides outside of the agent, interacts with the agent, and generates the states. In this study, a “seismogenesis agent” is introduced as the virtual entity that takes actions according to a hidden policy and determines all future EQs. The seismogenesis agent’s choice of action constantly determines the next EQ’s location, magnitude, and timing given the present state, and thereby affects the surrounding environment. In this study, the “environment” is assumed to include all geophysical phenomena and conditions in the lithosphere and the Earth, e.g., plate motions, strain and stress accumulation processes near/on faults, and so on. We can consider the Markov decision process (MDP) in terms of a sequence (or trajectory) and the so-called four-argument probability \(p:\mathcal {S}\times \mathcal {R}\times \mathcal {S}\times \mathcal {A}\rightarrow \mathbb {R}[0,1]\) as

\(p(s', r \mid s, a) := \Pr \left\{ S^{(t+1)}=s', R^{(t+1)}=r \mid S^{(t)}=s, A^{(t)}=a \right\} \)

According to the “Markov property” 45, the present state \(S^{(t)}\) is assumed to contain sufficient and complete information to enable accurate prediction of the next state \(S^{(t+1)}\) , as would be done using the full past history. As described in detail below, the reward is rooted in the prediction accuracy of future EQs in terms of loci, magnitudes, and timing; thus the true seismogenesis agent always receives the maximum possible reward, since it always reproduces the real EQ. The “true” seismogenesis agent remains unknown, and so does its policy. The primary goal of the proposed RL is to learn the hidden policy and to evolve the agent so that it imitates the true seismogenesis agent as closely as possible. Thus, as evolution continues, the RL’s agent pursues a higher reward (i.e., the long-term return) and gradually approaches the true seismogenesis agent. There are notable benefits of the adopted RL. First, with incoming new data, the improvement of the new features and transparent ML methods is done autonomously by RL, without human intervention. Second, RL searches over and allows many state-action pairs (i.e., current observation-future prediction), so researchers may obtain, with uncertainty measures, many prediction rules that are “customized” to different locations, rather than a single unified prediction model. In essence, the proposed global RL framework acts as a virtual scientist who keeps improving many pre-defined control parameters of the ML methods, keeps searching for a better prediction rule, and keeps customizing the best prediction rule for each location and new time.
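The agent-environment interaction described above can be sketched as a generic RL loop. The toy environment, its integer counter state, and the dummy reward below are hypothetical stand-ins for illustration only, not the paper's seismogenesis environment:

```python
class ToyEnv:
    """Hypothetical stand-in environment; the state is a simple counter,
    not the geophysical environment of the paper."""
    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        # advance the state and return (next state S, reward R)
        self.s += 1
        r = 1.0 if a == self.s % 2 else 0.0  # dummy reward for illustration
        return self.s, r

def run_episode(env, policy, T=5):
    """Generic S/A/R interaction loop: observe state S, take action
    A = policy(S), receive reward R, and move to the next state."""
    S, total = env.reset(), 0.0
    for _ in range(T):
        A = policy(S)
        S, R = env.step(A)
        total += R
    return total
```

In the framework, the policy and reward are far richer (tabular Q-values, accumulated prediction accuracy), but the control flow follows this template.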

Information fusion core

As shown in Fig. 1 C, the framework’s first core is the information fusion core that can transform raw EQ catalog data into ML-friendly new features—convolved information index (II) which can quantify and integrate the spatio-temporal information of all past EQs. The term “convolved” II is used since this core mainly utilizes a sort of convolution process in the spatial and temporal domains of the observed raw EQ data in the lithosphere. By extending convolution to the time domain, this core quantifies and incorporates cumulative information from past EQs, creating spatio-temporal convolved II ( \(\overline{II}_{ST}^{(t)}\) ), which is denoted as “New Feature 1” in this framework. Detailed derivation procedure is given in Supplementary Materials and Table 1 .
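As a rough illustration of the idea (the paper's actual kernels and derivation are given in its Supplementary Materials and Table 1), a spatio-temporal convolution of a catalog can be sketched as follows; the Gaussian spatial kernel, exponential temporal decay, and magnitude weighting here are assumptions of this sketch, not the paper's formulas:

```python
import numpy as np

def convolved_information_index(catalog, grid_xy, t_now,
                                sigma_km=25.0, tau_days=365.0):
    """Toy spatio-temporal convolved information index (II).

    `catalog` rows are (x_km, y_km, t_days, magnitude). Only past
    events (t < t_now) contribute; each event spreads information in
    space via a Gaussian kernel and decays in time exponentially.
    """
    grid_xy = np.asarray(grid_xy, dtype=float)
    ii = np.zeros(len(grid_xy))
    for x, y, t, m in catalog:
        if t >= t_now:                          # only past events contribute
            continue
        d2 = np.sum((grid_xy - (x, y)) ** 2, axis=1)
        spatial = np.exp(-d2 / (2.0 * sigma_km ** 2))
        temporal = np.exp(-(t_now - t) / tau_days)
        ii += (10.0 ** (0.5 * m)) * spatial * temporal  # magnitude-scaled weight
    return ii
```

The output plays the role of "New Feature 1": each grid cell accumulates a magnitude-weighted memory of nearby past seismicity.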

State generation core

The second core of the framework generates and determines “states” (Fig. 1C). For RL to understand, distinguish, and remember all the different EQs across large spatio-temporal domains, the state should be based on physics and on unique signatures of the spatio-temporal information of past EQs. An effective state should also help pinpoint a specific location’s past, present, and future events. To meet these criteria, this core defines a “point-wise state” at each spatial location \(\xi _i\) in terms of the pseudo physics ( \(\mathcal {U}\) ), the Gauss curvatures of the pseudo physics ( \(\mathbb {K}\) ), and the Fourier transform-based features of the Gauss curvatures ( \(\mathbb {F}\) ), as defined in Eq. (3). Since \(n(\mathcal {U})=4\) , \(n(\mathbb {K})=8\) , and \(n(\mathbb {F})=160\) , a state vector \(S_j\) has \(n(S_j) = 172\) . “New Feature 2” is defined as the high-dimensional pseudo physics quantities (denoted \(\mathcal {U}\) ) computed from the spatio-temporal convolved IIs (i.e., New Feature 1) of the information fusion core. “New Feature 3” is defined as the Gauss curvature-based signatures ( \(\mathbb {K}\) ) that treat the distributions of the pseudo physics quantities \(\mathcal {U}\) at each depth as surfaces. “New Feature 4” is defined as the Fourier transform-based signatures ( \(\mathbb {F}\) ) that quantify the time-varying nature of the Gauss curvature-based signatures \(\mathbb {K}\) . Table 1 summarizes the key equations and formulas of the four-layer data transformations. With these multi-layered new features, we can define a “state” in the RL context as

\(S_j^{(t)} = \left[ \mathcal {U}_j^{(t)}, \mathbb {K}_j^{(t)}, \mathbb {F}_j^{(t)} \right] \in \mathbb {R}^{172}\)

The primary hypothesis is that the “states” can be used as unique indicators of individual large EQs before the events, in hopes of enabling ML methods to distinguish and remember EQs. This core in essence infuses basic physics terms into the new features. It should be noted that these physics-infused quantities are all “pseudo” quantities, derived from data, not from any known first principles. The author’s prior works 43,44 fully describe the generation of all the New Features; a compact summary is presented in Supplementary Materials.
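Given the stated dimensions ( \(n(\mathcal {U})=4\) , \(n(\mathbb {K})=8\) , \(n(\mathbb {F})=160\) ), assembling a point-wise state reduces to a simple concatenation. A minimal sketch with placeholder arrays (the helper name is hypothetical):

```python
import numpy as np

N_U, N_K, N_F = 4, 8, 160          # feature counts stated in the text

def make_state(u, k, f):
    """Concatenate pseudo-physics (U), Gauss-curvature (K), and
    Fourier-based (F) features into one point-wise state vector S_j."""
    u, k, f = np.atleast_1d(u), np.atleast_1d(k), np.atleast_1d(f)
    assert u.shape == (N_U,) and k.shape == (N_K,) and f.shape == (N_F,)
    return np.concatenate([u, k, f])   # n(S_j) = 4 + 8 + 160 = 172
```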

Prediction core

The main objective of the Prediction Core (Fig. 1C) is to select the best “prior” action ( \(A^*\in \mathcal {A}^{(t)}\) ) out of the up-to-date action set \(\mathcal {A}^{(t)}\) by using the policy ( \(\pi \) ) given the present state ( \(S^{(t)}\) ). The best action predicts the location and magnitude of the EQ before the event (e.g., 30 days or a week ahead). This core leverages the so-called Q-value function approximation 45. The Q-value function (denoted \(Q_\pi (S,A)\) ) is the collection of future returns (e.g., the accumulated prediction accuracy) of all state-action pairs following the given policy. In this paper, the Q-value function stores the error-based reward: the smaller the error, the higher the return. One difficulty is that the space of the state set \(\mathcal {S}\) is a high-dimensional continuous domain, unlike the discrete action space, while the adopted tabular Q-value function contains “discrete” state-action pairs. To resolve this difficulty and to leverage the efficiency of the tabular Q-value function, it is important to determine whether the present state \(S^{(t)}\) is similar to or different from all existing states \( S^{(\forall \tau < t)} \in \mathcal {S}\) . For this purpose, the adopted policy uses the L2 norm (i.e., Euclidean distance) between states, e.g., \(S^{*}(S^{(t)}) :=\text {argmin}_{S^{(\tau )}} \Vert S^{(\tau )} - S^{(t)} \Vert _2, \forall \tau < t\) . The L2 norm includes the state’s \(\mathcal {U}\) and \( \mathbb {K}\) parts but excludes \(\mathbb {F}\) , for computational brevity. Preliminary simulations confirm that the use of \(\mathcal {U}\) and \( \mathbb {K}\) works favorably in finding the “closest” prior state \(S^*(S^{(t)})\) to the present state \(S^{(t)}\) . For the policy \(\pi \) (i.e., the probability of selecting an action for the given state), this core leverages the so-called “greedy policy,” in which the policy chooses the action associated with the maximum Q-value. Details about the formal expressions of the adopted policy and specialized schemes for this policy are presented in Supplementary Materials.
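The nearest-state lookup and greedy selection can be sketched as follows. This is a minimal illustration; the array layout (the \(\mathcal {U}\) and \(\mathbb {K}\) parts occupying the first 12 of 172 entries) and the function names are assumptions of this sketch:

```python
import numpy as np

N_UK = 4 + 8   # the L2 distance uses only the U and K parts, excluding F

def closest_prior_state(S_now, prior_states):
    """Index of the stored state nearest to the present one
    (Euclidean distance over the first 12 entries only)."""
    prior = np.asarray(prior_states, dtype=float)
    d = np.linalg.norm(prior[:, :N_UK] - np.asarray(S_now, dtype=float)[:N_UK],
                       axis=1)
    return int(np.argmin(d))

def greedy_action(q_row):
    """Greedy policy: choose the action with the maximum Q-value."""
    return int(np.argmax(q_row))
```

Given a new state, the framework would first find its closest stored state, then read that state's row of the tabular Q-value function and pick the highest-reward action.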

Figure 2

Internal searching process with the tabular Q-value functions used for the pseudo-prospective prediction by this framework 14 days before 2019/7/6 ( \(M_w 7.1\) ) Ridgecrest EQ: ( A ) The entire state sets stored in memory; ( B ) A tabular Q-value function identified to contain the closest state of the potential peaks. Vertical axis shows the reward of each state-action pair and outstanding spikes stand for the best actions of corresponding states. The last action is the dummy action that has the dummy reward as given by Eq. (14); ( C ) One specific Q-value function of actions and the state that is considered to be the potential peak. This process is used for all the pseudo-prospective predictions; ( D ) General illustration of a Jacob’s ladder for large EQ reproduction consisting of the state set with increasingly many pseudo physics terms and the action set (inspired by the Jacob’s ladder within the density functional theory 46 ). Raw EQs data are complex multi-physics vectors in space and time.

Action update core

The Action Update Core seeks to find a new action \(A^{*(t)}\) (i.e., a new prediction rule) when there are expanded states \(S^{(t)} \in \varvec{S}^{(ID)}, ID \ge 18\) . This new action will differ from the existing actions ( \(A^{*(t)} \ne \forall A_i\in \mathcal {A}^{(t-1)}\) ) already used in the Prediction Core. Since the state \(S^{(t)}\) is assumed to be “unique,” this core seeks a “customized” prediction rule (action) for each new state. In general, there is no restriction on the new action (i.e., the new prediction rule/model). In the present framework, the Action Update Core leverages the author’s existing work, the Glass-Box Physics Rule Learner (GPRL), to find the new best action (GPRL’s full details are available in 43). In essence, GPRL is built upon two pillars: flexible link functions (LFs) for exploring general rule expressions, and the Bayesian evolutionary algorithm for free-parameter searching. An LF ( \(\mathcal {L}\) ) can take (i) a simple two-parameter exponential form or (ii) a cubic regression spline (CRS)-based flexible form. As shown in 34,35,36, the exponential LF is useful when a physical rule of interest is likely monotonically increasing or decreasing with a concave or convex shape. In this framework, the pseudo released energy \(E_r\) takes the exponential LF due to its favorable performance 43. GPRL can also utilize CRS-based LFs for higher flexibility 47, which is effective when the shapes of the target physics rules are highly nonlinear or complex. In this framework, the actions (prediction rules) take CRS-based LFs for generality and accuracy. The space of the LFs’ parameters (denoted \(\varvec{\uptheta }\) ) is vast; finding the “best-so-far” free parameters \(\varvec{\uptheta }^*\) is done by the Bayesian evolutionary algorithm, as successfully done in 34,35,36. Table 2 presents the salient steps of the Bayesian evolutionary algorithm.
All the new features (e.g., \(\overline{II}, \mathcal {U}, \mathbb {K}, \mathbb {F}\) ) generated by the State Generation Core are used and explored as potential candidates in the prediction rules (i.e., actions). The best-so-far prediction rule’s expression identified by this framework is presented in Supplementary Materials. As shown in 44, the inclusion of the Fourier transform-based new features \(\mathbb {F}\) sharpens the accuracy of each prediction rule for large EQs ( \(M_w \ge 6.5\) ). It is noteworthy that the action (prediction rule) shares the same spirit as the so-called Jacob’s ladder within density functional theory 46, in the sense that EQ reproduction accuracy gradually improves with the addition of more physics and mathematical terms (see Fig. 2D). Since the developed AI framework holds self-evolving capability, whenever new large EQs occur, the old sets (State and Action) and Q-value functions expand autonomously. The details about the autonomous expansion of state, action, and policy are presented in Supplementary Materials.
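A minimal sketch of the two pillars, assuming a simple elitist mutation scheme as a stand-in for the paper's Bayesian evolutionary algorithm (Table 2), and fitting a synthetic target rule with the two-parameter exponential LF:

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_lf(x, th):
    """Two-parameter exponential link function L(x) = th[0] * exp(th[1] * x)."""
    return th[0] * np.exp(th[1] * x)

def evolve_parameters(x, y, pop=60, gens=40, sigma=0.1):
    """Toy elitist evolutionary search for LF parameters; a simplified
    stand-in for the Bayesian evolutionary algorithm, for illustration."""
    theta = rng.normal(0.0, 1.0, size=(pop, 2))
    for _ in range(gens):
        err = np.array([np.mean((exponential_lf(x, th) - y) ** 2) for th in theta])
        elite = theta[np.argsort(err)[:pop // 5]]            # keep the best 20%
        children = (elite[rng.integers(0, len(elite), pop - len(elite))]
                    + rng.normal(0.0, sigma, size=(pop - len(elite), 2)))
        theta = np.vstack([elite, children])
    err = np.array([np.mean((exponential_lf(x, th) - y) ** 2) for th in theta])
    return theta[int(np.argmin(err))]
```

The exponential form suits monotone concave/convex rules, as noted above; the CRS-based LFs used for the actions would replace `exponential_lf` with a spline evaluated at learned knots.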

Figure 3

Pseudo-Prospective Predictions by UCERF3-ETAS and Reproductions by the Present AI framework 14 days before 2019/7/6 ( \(M_w 7.1\) ) Ridgecrest EQ: ( A ) Real Observed EQ; ( B ) This AI framework’s best-so-far reproduction by the pre-trained RL policy; ( C ) ComCat probability spatial distribution from UCERF3-ETAS predictions of \(M\ge 5\) events with probability of 0.3% within 1 month time window; ( D ) ComCat magnitude-time functions plot, i.e., magnitude versus time probability function since prediction simulation starts. Time 0 means the prediction starting time. Probabilities above the minimum simulated magnitude (2.5) are shown. Favorable Evolution of New Action Learning: ( E ) Distribution of the entire rewards of “new” best actions given a new state set. The maximum reward \(\text {max}[r(S,A)]\) of the best-so-far prediction of the peak increases to 67.13 from 47.60 in ( F ) which shows the distribution of the entire rewards of “old” best actions given a new state set; Predicted magnitudes by using the new best action ( G ) whereas ( I ) is using the old best action. Compared to the real EQs ( H ), the new best action appears to favorably evolve ( E , F ).

Feasibility test results of pseudo-prospective short-term predictions

After training with the full EQ catalog of the past 40 years, from 1990 through 2019, in the western U.S. region (i.e., longitude in (−132.5, −110) [deg], latitude in (30, 52.5) [deg], and depth in (−5, 20) [km]), this paper applied the trained AI framework to large EQs ( \(M_w \ge 6.5\) ). The objectives of the feasibility tests are to confirm (1) whether the initial version of the AI framework can “remember and distinguish” individual large EQs and take the best actions according to the up-to-date RL policy, (2) whether the best action can accurately reproduce the location and magnitude 14 days before the failure, and (3) whether the RL of the framework can self-evolve by expanding its experiences and memory autonomously.

This feasibility test is “pseudo-prospective” since the framework has been pre-trained with the past 40 years’ data, has identified unique states of individual large EQs, and has found the best-so-far prediction rules. Each of the best prediction rules proved successful in reproducing a large EQ’s location and magnitude 30 days before the event, as shown in the author’s prior work 44. The framework is thus expected to have gained “experience” during training, by which it can distinguish individual events and determine how to reproduce them using the best-so-far action out of its “memory,” the tabular Q-value function at this stage of the framework. The objective of this feasibility test is to demonstrate this potential before applications and extensions to “true” prospective predictions in future research. Figure 2 schematically explains the process behind the RL-based pseudo-prospective prediction, starting from a new state, through the Q-value function, to the best-so-far action. Figure S29 presents selected cases that confirm the promising performance of this framework. All other comparative investigations with large EQs ( \(M_w \ge 6.5\) ) during the past 40 years in the western U.S. are presented in Figs. S16 through S24. All cases confirm this framework’s promising performance compared to the EQ forecasting method. In particular, in all pseudo-prospective predictions, RL can remember the states (i.e., the quantified information around the spatio-temporal vicinity of the large EQs during training) and select the best actions (i.e., the prediction rules customized to the individual large EQs) from the stored policy.

As a reference comparison, the well-established EQ forecasting method UCERF3-ETAS 13,14,15,16 is adopted to conduct short-term predictions 14 days before the large EQs. This may not be an apples-to-apples comparison, since EQ forecasting is not primarily designed for such short-term predictions of specific large EQs before the events. Still, the comparison meaningfully provides a relative standing of the framework, showing its role and its differences from existing EQ forecasting approaches. Detailed settings of UCERF3-ETAS are presented in Supplementary Materials. Figure 3 compares the pseudo-prospective predictions by UCERF3-ETAS and this framework. UCERF3-ETAS predicts large EQs in the range \(M_w \in [5,6)\) with 0.3% probability 30 days before the onset. Despite the low probability and the magnitude error, the spatial proximity of the prediction to the real EQ is noteworthy, i.e., the distance between the dashed box and the real EQ (Fig. 3C). All EQs ( \(M_w>3.5\) ) of the past year are used as self-triggering sources of UCERF3-ETAS (Fig. 3D). In contrast, Fig. 3B confirms that this AI framework can remember the unique states of the reference volumes near the hypocenter of the Ridgecrest EQ, select the best-so-far action from the up-to-date policy, and successfully reproduce the peak of the real EQ 14 days before the failure. As expected, this framework’s reproduction of large EQs appears successful in achieving the threefold objective, i.e., accuracy in magnitude, location, and short-term timing. It should be noted that this framework does not remember any information on specific dates or loci of past EQs for training and prediction. Only the point-wise states near the large EQs are remembered, and the best actions associated with the states are stored as policy in terms of relative selection probabilities. At the time of the feasibility test, the framework had accumulated about 110 important state sets and the associated 110 best actions. Thus, the feasibility test used the up-to-date RL policy stored in 110 tabular Q-value functions.

This performance of pseudo-prospective predictions is promising, since searching for a similar (or identical) state in storage is not a trivial task given a new state. At each time step (here, 1 day), there are \(n_s = \) 253,125 reference volumes for the present resolution of 0.1 degrees of latitude and longitude and 5 km of depth. If we use all the point-wise states of the past 10 years, this amounts to \(10\times 365\times 253{,}125\) , i.e., approximately 0.92 billion states. Figure 2 illustrates such a long RL searching sequence, starting from a new state, through the Q-value function, to the best-so-far action.

In some cases, UCERF3-ETAS shows promising prediction accuracy for epicenters’ loci, as can also be found for the 1992/4/25 \(M_w 7.2\) EQ (Fig. S17C), the 1992/6/28 \(M_w 7.3\) EQ (Fig. S17C), and the 2010/4/4 \(M_w 7.0\) EQ (Fig. S20C). The common aspect of these cases is that they have relatively large prior EQs right before the prediction begins (time = 0), i.e., \(M_w \approx 6\) about 3 months earlier (Fig. S20D), \(M_w \approx 6\) about 50 days earlier (Fig. S17D), and \(M_w \approx 5.2\) about 40 days earlier (Fig. S16D). This may play favorably for the “self-triggering” mechanism of UCERF3-ETAS, but a general conclusion is not available since other cases do not support consistent accuracy in the loci of epicenters.

Self-evolution is the central feature of the present framework. Fig. 3 E–I show how the best-action update can evolve in a positive direction. Given a new state (i.e., when a new large EQ occurs; Fig. 3 H), the “old” best action is used to predict the magnitudes, which may not be accurate enough (Fig. 3 I). Then, the Action Update Core seeks to learn a “new” best action for the new state (Fig. 3 G). Comparison of Fig. 3 G and I clearly demonstrates the positive evolution of the new action, and Fig. 3 E and F quantitatively compare the maximum reward over all state-action pairs. This example underpins that favorable evolution can take place by learning new actions.

Impact of new features on prediction rules

One of the key contributions of this paper is the generation of new ML-friendly features that are based on basic physics and generic mathematics. It is informative to touch upon the varying contributions of the new features to the prediction rules. We conducted three separate trainings with three prediction rules:

Model A utilizes the pseudo released energy and its power, whereas Model B further harnesses the pseudo vorticity and Laplacian terms. Model C, the prediction rule of this paper, additionally uses the Gauss curvature and FFT-based new features, as fully explained in Eq. (10) of the Supplementary Materials. The detailed definitions and explanations of the terms in the above models are presented in the Supplementary Information. Figure 4 shows how the new features contribute to the prediction accuracy. All predictions are made 28 days before the real EQ event (Cape Mendocino EQ, \(M_w=7.2\) , April 25, 1992) by using the best-so-far prediction rules. As clearly seen in Fig. 4 B–D, the incremental addition of new features boosts the accuracy of the large EQ reproduction. Interestingly, the seemingly poor reproductions with Model A (i.e., pseudo released energy and power) and Model B (i.e., all four pseudo physics terms) can still capture the peak’s location. With the Gauss curvature and FFT-based new features (Model C, Fig. 4 D), both location and magnitude are sharply reproduced. Naturally, the AI framework favors Model C over Models A and B. This comparison demonstrates that the new ML-friendly features have a significant impact on the accuracy of large EQ reproduction in magnitude, location, and short-term timing. It also suggests that future extensions with more new features may substantially improve the accuracy of the framework; thus the positive evolution of this framework appears possible, which warrants further investigation into new ML-friendly features.
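As a rough illustration of the incremental feature sets (not the actual definitions, which are given in Eq. (10) of the paper's Supplementary Materials), one can assemble Models A–C from a toy 2D field, with simple finite-difference and FFT stand-ins for the derivative- and frequency-based terms:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.random((32, 32))  # toy pseudo released-energy field on a lat/lon grid

def laplacian(f):
    """5-point finite-difference Laplacian with edge padding (unit spacing)."""
    g = np.pad(f, 1, mode="edge")
    return g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * f

# Model A: pseudo released energy and its power
feats_A = [E, E**2]
# Model B: adds spatial-derivative (vorticity/Laplacian-type) terms
feats_B = feats_A + [laplacian(E)]
# Model C: adds frequency-domain (FFT-based) terms; the paper also adds
# Gauss curvature, omitted here for brevity
feats_C = feats_B + [np.abs(np.fft.fft2(E))]

print(len(feats_A), len(feats_B), len(feats_C))
```

Each model simply widens the feature stack handed to the rule learner, which mirrors the A-to-C comparison in Fig. 4.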

Figure 4

Comparison of prediction rules with different new features: ( A ) Real observed EQ [Cape Mendocino EQ, \(M_w=7.2\) , April 25, 1992]; ( B – D ) Reproduced EQ peaks by using the best-so-far prediction rules. ( B ) Model A uses the pseudo released energy and power whereas ( C ) Model B additionally uses vorticity and Laplacian terms (i.e., all four pseudo physics terms). ( D ) Model C is the rule presented by this paper in Eq. (10). All prediction rules are trained with the same hyperparameters and settings.

Conclusions and outlook

It is instructive to touch upon the optimality of the adopted approach. One of the primary goals of RL is to find the best policy that leads to the “optimal” values of the states, i.e., \( v^*(s) :=\underset{\pi }{\text {max}}\,v_{\pi }(s), \forall s\in \mathcal {S}\) . The “Bellman optimality equation” 45 formally states

\( v^*(s) = \underset{a}{\text {max}}\, q_{\pi ^*}(s,a) = \underset{a}{\text {max}} \sum _{s',r} \text {Prob}(s',r|s,a) \left[ r + \gamma \, v^*(s') \right] \qquad (6) \)

where \(\pi ^*\) is the “global” optimal policy, q ( s ,  a ) is the action-value function, i.e., the expected return of the state-action pair following the policy, \(\gamma \) is the discount factor of future return, and \(s'\) is the next state. In view of Eq. (6), this paper’s algorithm is partially aligned with the optimality equation, achieving “local optimality.” To explain this, it is necessary to note what we do not know and what we do know. On one hand, the transition probability \(\text {Prob}(s',r|s,a)\) from the present state \(S^{(t)}=s\) to the next state \(S^{(t+1)}=s'\) is assumed to remain unknown and completely random (i.e., the hidden transition from past EQs to a future EQ at a specific location). Also, the “global” optimal policy \(\pi ^*\) is not known at the early stage of learning. On the other hand, this paper seeks to use the best action for the given state (by Eqs. 12 and 13), thereby satisfying \({\text {max}}\,q_{\pi ^{(t)}}(s,a)\) of the first line of Eq. (6). Therefore, the present algorithm will achieve at least “local optimality” with the present policy \(\pi ^{(t)}\) , facilitating a gradual evolution toward the globally optimal policy (i.e., \(\pi ^{(t)} \rightarrow \pi ^*\) as \(t\rightarrow \infty \) ). This framework can also embrace the existing geophysical approaches. For instance, the RL’s “state-transition probability” \(p:\mathcal {S}\times \mathcal {S}\times \mathcal {A}\rightarrow \mathbb {R}[0,1]\) is given by

\( p(s' \mid s, a) := \text {Prob}\left( S^{(t+1)}=s' \mid S^{(t)}=s, A^{(t)}=a \right). \)

Such state-transition probabilities are widely used in various forms in seismology, since geophysicists derived statistical rules such as ETAS 41 , 48 or the Gutenberg-Richter law 42 from persistent observations of EQ transitions over long times. In future extensions, the existing statistical laws may aid the proposed framework in the form of state-transition probabilities. The adopted GPRL is similar in spirit to symbolic regression 49 , 50 , which may provide more general (also more complicated) forms of the prediction rules. For such general forms, a future extension of the Action Update Core may harness a deep neural network that takes all the aforementioned new features (e.g., \(\overline{II}, \mathcal {U}, \mathbb {K}, \mathbb {F}\) ) as input and generates a scalar action value as output. Since the policy determines actions for the given state, achieving a consistently improving policy is vital for the reliable evolution of prediction models. The proposed learning framework is a “continuing” process, not an “episodic” one. Also, the state transition \(S^{(t)}\rightarrow S^{(t+1)}\) (i.e., the transition from present EQ to future EQ at a location) is assumed to be stochastic and remains unknown. Therefore, future extensions may leverage general policy approximations and policy update methods 51 . The present state-action pair is a simple one-to-one mapping, and the present Q-value function is the simplest tabular form with clear interpretability. To accommodate complex relationships between states and actions, a future extension may adopt a general policy parameterized by \(\varvec{\uptheta }_{\pi }\) , denoted \(\pi (A^{(t)}|S^{(t)}; \varvec{\uptheta }_{\pi })\) . Another promising possibility is the concurrent approximation and improvement of both the Q-value function \(q(S,A;\varvec{\uptheta }_q)\) and the policy \(\pi (A|S; \varvec{\uptheta }_{\pi })\) , i.e., updating the policy (actor) and the value (critic) together, known as the actor-critic method 45 .
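For readers less familiar with the Bellman optimality recursion, its interplay with greedy action selection can be illustrated on a toy tabular MDP. Unlike the paper's setting, the transition probabilities here are fully known and purely hypothetical; the sketch only shows the "max over q" step that the local-optimality argument relies on.

```python
import numpy as np

# Toy MDP with 3 states and 2 actions: P[s, a, s'] are transition
# probabilities and R[s, a] immediate rewards (hypothetical numbers;
# in the paper's setting P is unknown, hence the tabular Q-value approach).
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.0, 0.9]],
              [[0.0, 1.0, 0.0], [0.5, 0.0, 0.5]],
              [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
gamma = 0.9  # discount factor of future return

# Value iteration: repeatedly apply the Bellman optimality recursion
# v*(s) = max_a q(s,a) = max_a sum_{s'} P(s'|s,a) [R(s,a) + gamma v*(s')]
v = np.zeros(3)
for _ in range(1000):
    q = R + gamma * (P @ v)   # q[s, a] for all state-action pairs
    v_new = q.max(axis=1)     # the max over actions (first line of Eq. 6)
    if np.max(np.abs(v_new - v)) < 1e-12:
        v = v_new
        break
    v = v_new

policy = q.argmax(axis=1)  # greedy ("locally optimal") action per state
print(v, policy)
```

The greedy `argmax` over the current q-values is the tabular analogue of the framework's best-so-far action selection; here, because P is known, repeated application converges to the globally optimal values.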

Figure 5

Potential intellectual benefit from this paper’s outcomes.

The proposed AI seismogenesis agent will continue to evolve with clear interpretability, and the machine learning-friendly data generated by this framework will accumulate. Toward short-term deterministic large EQ predictions, this approach will add a meaningful dimension to our endeavors, empowered by data and AI. This AI framework will continue to enrich the new-feature database, which will accelerate data- and AI-driven research and discovery about short-term large EQs. As depicted in Fig. 5 , the proposed database will help diverse ML methods explore and discover meaningful models of short-term EQs, importantly from data. Since the database will always preserve the physical meaning of each feature, the resultant AI-driven models will offer (mathematically and physically) clear interpretations to researchers. The best-so-far AI-driven large EQ model will tell us which set of physics terms plays a decisive role in reproducing an individual large EQ. The relative importance of the selected physics terms will be provided in terms of weights (e.g., in deep learning models) or mathematical expressions (e.g., in rule-learning models). These will substantially complement the probabilistic and statistical models available in the EQ forecasting/prediction methods of the existing research communities.

Another future research direction concerns the generality of the proposed AI framework. Can the learned prediction rules apply to large EQ events in untrained spatio-temporal ranges? If so, how reliable could the predictions be? Answering the question of general applicability is of fundamental importance, since it will hint at the rise of a practical large EQ prediction capability. This AI framework aims to establish a foundation for such a bold capability. Currently, the present research is dedicated to training the AI framework with the past four decades of data, generating the new ML-friendly database, and accumulating new prediction rules. Still, we can glimpse the general applicability of the AI framework. Figure 6 shows examples of reasonably promising predictions over different temporal ranges (i.e., never used for training) with the best-so-far prediction rules. Some noisy false peaks are noticeable (Fig. 6 B and D), but the overall prediction appears to be meaningful. It should be noted, however, that the majority of such blind predictions over other time ranges have failed to accurately predict the location and magnitude of large EQs ( \(M_w\ge 5.5\) ). This could be attributed to the early phase of the AI framework. As of August 2024, only 6.67% of the total available EQ catalog between 1980 and 2023 has been transformed into new ML-friendly features, mainly due to the expensive computation cost even with high-performance computing. If the AI framework continues expanding its coverage to all the available EQ catalog and learns more prediction rules, thereby enriching the state and action sets, it may achieve practically meaningful generality in the not-too-distant future.
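A blind prediction of this kind can be scored with a simple hit test on location and magnitude. The 200 km radius (echoing the RDR scale) and the 0.5-magnitude tolerance below are illustrative assumptions, not the paper's evaluation criteria, and the epicenter coordinates are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points (km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

def is_hit(pred, real, max_dist_km=200.0, max_dmag=0.5):
    """Count a blind prediction as a hit only if the predicted peak lies
    within an RDR-like radius AND the magnitude error is small."""
    d = haversine_km(pred["lat"], pred["lon"], real["lat"], real["lon"])
    return d <= max_dist_km and abs(pred["mag"] - real["mag"]) <= max_dmag

# Hypothetical epicenters for illustration only
real = {"lat": 40.5, "lon": -125.5, "mag": 6.6}
pred = {"lat": 40.3, "lon": -125.2, "mag": 6.4}
print(is_hit(pred, real))
```

Requiring both the distance and the magnitude criteria simultaneously is what makes "majority of blind predictions failed" a meaningful (and demanding) statement.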

Figure 6

Examples of the generality of the AI framework applied to different temporal ranges: ( A ) Real observed EQ [Ferndale2 EQ, \(M_w=6.6\) , February 19, 1995] and ( B ) the predicted EQ peaks by using the best-so-far prediction rules 16 days before the EQ. ( C ) Real observed EQ [Cape Mendocino EQ, \(M_w=7.2\) , April 25, 1992] and ( D ) the predicted EQ peaks by using the best-so-far prediction rules 28 days before the EQ.

This paper holds transformative potential in several aspects. This paper’s AI framework will (1) help transform decades-long EQ catalogs into ML-friendly new features in diverse forms while preserving their basic physical and mathematical meanings, (2) enable transparent ML methods to distinguish and remember individual large EQs via the new features, (3) advance our capability of reproducing large EQs with sufficiently detailed magnitudes, loci, and short time ranges, (4) offer a database to which geophysics experts can facilely apply advanced ML methods and validate the practical meaning of what AI finds, and (5) serve as a virtual scientist that keeps expanding the database and improving the AI framework. This paper will add a new dimension to existing EQ forecasting/prediction research.

Materials and methods

Bayesian evolutionary algorithm with threefold objective.

Table 2 briefly summarizes the threefold error function that is critical for the short-term deterministic EQ prediction capability. It also summarizes the salient steps of the Bayesian evolutionary algorithm used for searching the free parameters of all the LFs.

\( R(S_j, A_i) := \text {exp}\left( \frac{ {M^*}_{real} }{ c_{Mr} } \right) \, \omega _{dist} \left[ 1 - \text {erf}\left( \frac{ \left| M_{pred}(\varvec{\xi }_j) - {M^*}_{real} \right| }{ {M^*}_{real} } \right) \right] \qquad (9) \)

where \(M_{real}(\varvec{\xi }_j)\) is the observed magnitude at \(\varvec{\xi }_j\) ; \(\text {erf}(.)\) is the Gauss error function; \({M^*}_{real}\) is the maximum observed magnitude within a distance range, denoted the “reward distance range” (RDR), centered at \(\varvec{\xi }_j\) ; and \(\varvec{\xi ^*}\) is the spatial location vector of \({M^*}_{real}\) (Fig. S10 ). Physically, the RDR is the spatial range over which a prediction is meaningful: even if the peak magnitudes are nearly identical, a prediction may be considered valid only when the predicted and observed peaks are close enough (Fig. S10 ). \(\omega _{dist}\in \mathbb {R}(0,1]\) is a distance-dependent discount factor given as \(\omega _{dist}:=\text {exp} \left( \frac{-\Vert \varvec{\xi }_j - \varvec{\xi ^*} \Vert _2}{ c_{Dr} } \right) .\) In essence, the reward is discounted via \(\omega _{dist}\) when the distance between the predicted and observed peaks becomes large, whereas the reward is scaled up when the observed real EQ is large, i.e., the larger the EQ, the larger the reward, as shown in Fig. S25 . For instance, suppose that the current state-action pair \((S_j, A_i)\) predicts a magnitude, the closest real peak magnitude within the RDR is \(M^*_{real}=7.0\) , and their distance is 100 km. Then the distance-dependent discount occurs by a factor of 0.606 (i.e., exp(− 100 km/200 km)) while the magnitude-dependent scale-up occurs by a factor of 106.342 (i.e., exp(7/1.5)). From preliminary investigations, RDR = 200 km, \(c_{Dr}\) = 200 km, and \(c_{Mr} = 1.5\) are recommended (see Fig. S11 ).
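The worked numbers above can be checked with a few lines. The erf-based magnitude term below is taken as a relative magnitude error (an assumption, consistent with the quoted value of about 0.1573 at 100% error), and the constants follow the recommended RDR, \(c_{Dr}\), and \(c_{Mr}\) values.

```python
from math import exp, erf

C_DR = 200.0   # distance-discount constant c_Dr (km), recommended value
C_MR = 1.5     # magnitude scale-up constant c_Mr, recommended value

def type2_reward(m_pred, m_real_peak, dist_km):
    """Type 2 reward: magnitude-dependent scale-up, times distance-dependent
    discount, times an erf-based magnitude-error term (assumed relative)."""
    scale_up = exp(m_real_peak / C_MR)   # larger real EQ -> larger reward
    discount = exp(-dist_km / C_DR)      # omega_dist in the text
    mag_term = 1.0 - erf(abs(m_pred - m_real_peak) / m_real_peak)
    return scale_up * discount * mag_term

# The text's example: M*_real = 7.0, 100 km between predicted and real peaks
print(exp(-100 / 200))   # distance discount, ~0.606
print(exp(7.0 / 1.5))    # magnitude scale-up, ~106.34
# 100% magnitude error with zero distance error: non-zero reward floor (~16.7)
print(type2_reward(0.0, 7.0, 0.0))
```

The product structure makes the trade-off explicit: an exact magnitude at 100 km still earns about 60% of the maximum, while a completely wrong magnitude at zero distance retains the ~16.7 floor discussed in the reward-core section.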

Reward calculation core

This core calculates the “reward” (denoted \(R_{\pi }(A_k^{(t)}, S^{(t)})\) ), defined by the prediction accuracy of the chosen actions \(A_k^{(t)}\) \((k=1,...,n(\mathcal {A}^{(t)}))\) according to the best-so-far policy \(\pi \) given the state \(S^{(t)}\) (Fig. 1 C). In concept, the reward is inversely proportional to the error of individual event predictions: \(R \propto \mathcal {J}^{-1}\) , where \(\mathcal {J}\) is a generic error function in terms of loci, magnitudes, and timings (explained in Table 2 ). It is similar to, but more general than, the collective measures used in existing statistical EQ forecasting methods (e.g., 18 ). For comparison, the reward core calculates two types of reward: Type 1 depends on the error of peak magnitudes, while Type 2 depends on the error of both magnitude and distance. The Type 2 reward turns out to be more favorable for the present framework than Type 1. It is important to note the non-zero lower bound of the proposed reward (Type 2). According to the Type 2 reward definition (Eq. 9 ), the magnitude-dependent scale-up factor \(\text {exp}(M^*_{real}/c_{Mr})\) of larger EQs can grow beyond 100.0, whereas the distance-dependent discount factor \(\omega _{dist}\in \mathbb R(0.5, 1.0]\) and the magnitude error term (1-erf(.)) \(\in \mathbb R(0,1]\) , as shown in Fig. S25 . When the magnitude error is 100% with \(M^*_{real}=7.0\) , the magnitude error term (1-erf(.)) \(\approx 0.1573\) whereas \(\text {exp}(M^*_{real}/c_{Mr})\) \(\approx \) 106.342. Therefore, even with no distance error (i.e., \(\omega _{dist}=1.0\) ), an entirely incorrect prediction may still end up with a reward of 16.727.

High-performance computing

All training and simulations are conducted on a high-performance computing (HPC) facility. A one-time update of state, action, and policy with the past 10 years’ EQ catalog takes more than 40 hours on 144 cores. A one-time pseudo-prospective prediction takes about 70 minutes on 36 cores. The specification of the HPC is as follows: 36 cores per node (each node has two 18-core Intel Skylake 6140 processors), 384 GB of memory per node, a 100 Gb InfiniBand interconnect, and a 1.5 TB local hard drive. In future extensions, higher resolutions, larger domains, and faster computation can be achieved by advanced parallel computing algorithms such as 53 .

Simulation setting of UCERF3-ETAS for pseudo-prospective predictions

To conduct pseudo-prospective predictions using UCERF3-ETAS, we include all the foreshocks available in the Comprehensive Earthquake Catalog (ComCat) of the Advanced National Seismic System. In particular, all past EQs within a one-year window up to about 14 days before the main shock ( \(M_w\ge 6.5\) ) are fetched as sources for the self-triggering mechanism. In this way, it is anticipated that UCERF3-ETAS may predict the main shock ( \(M_w\ge 6.5\) ) two weeks before the event by using the prior one year of EQs. For the data fetching, we used a 100–200 km radius centered at the main-shock location. We specified “--min-mag 3.5” to force the ComCat evaluation plots to use this specified magnitude-of-completeness value. We used the ShakeMap surfaces option, looking for all ruptures with \(M_w \ge 5\) via “--finite-surf-shakemap-min-mag 5”. In total, 1000 simulations are conducted for each pseudo-prospective prediction. Figure S13 presents the actual UCERF3-ETAS configuration commands used to conduct the pseudo-prospective predictions of the large EQs presented in this paper.

Data availability

The processed 40-year data sets, consisting of the month-based epochs and the refined day-based epochs, are shared on a cloud storage (available upon request). Other supplementary data and parallel programs supporting the findings of this paper will be available upon request to the corresponding author.

Beroza, G. C., Segou, M. & Mousavi, S. M. Machine learning and earthquake forecasting-next steps. Nat. Commun. 12 , 4761 (2021).

Bayona, J. A., Savran, W. H., Rhoades, D. A. & Werner, M. J. Prospective evaluation of multiplicative hybrid earthquake forecasting models in California. Geophys. J. Int. 229 , 1736 (2022).

Wang, L. & Barbot, S. Excitation of San Andreas tremors by thermal instabilities below the seismogenic zone. Sci. Adv. 6 , eabb2057 (2020).

Bletery, Q. & Nocquet, J.-M. The precursory phase of large earthquakes. Science 381 , 6655 (2023).

Ross, Z. E., Cochran, E. S., Trugman, D. T. & Smith, J. D. 3D fault architecture controls the dynamism of earthquake swarms. Science 368 , 1357–1361 (2020).

Jiang, J. & Lapusta, N. Deeper penetration of large earthquakes on seismically quiescent faults. Science 352 (6291), 1293–1297 (2016).

Allison, K. L. & Dunham, E. M. Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity. Tectonophysics 733 (9), 232–256 (2018).

Zhu, W., Allison, K. L., Dunham, E. M. & Yang, Y. Fault valving and pore pressure evolution in simulations of earthquake sequences and aseismic slip. Nat. Commun. 11 , 4833 (2020).

Mitchell, E. K., Fialko, Y. & Brown, K. M. Velocity-weakening behavior of Westerly granite at temperature up to 600 C. J. Geophys. Res. Solid Earth 121 (9), 6932–6946 (2016).

Xu, X. et al. Surface deformation associated with fractures near the 2019 Ridgecrest earthquake sequence. Science 370 (6516), 605–608. https://doi.org/10.1126/science.abd1690 (2020).

Simons, M. et al. The 2011 magnitude 9.0 Tohoku-Oki earthquake: mosaicking the megathrust from seconds to centuries. Science 332 , 1421–1425. https://doi.org/10.1126/science.1206731 (2011).

Toda, S. & Stein, R. S. Long- and short-term stress interaction of the 2019 ridgecrest sequence and coulomb-based earthquake forecasts. Bull. Seismol. Soc. Am. 110 (4), 1765–1780 (2020).

Field, E. H. et al. Long-term time-dependent probabilities for the third uniform California earthquake rupture forecast (UCERF3). Bull. Seismol. Soc. Am. 105 (2A), 511–543 (2015).

Field, E. H. et al. A spatiotemporal clustering model for the third Uniform California Earthquake Rupture Forecast (UCERF3-ETAS): toward an operational earthquake forecast. Bull. Seismol. Soc. Am. 107 (3), 1049–1081 (2017).

Milner, K. R., Field, E. H., Savran, W. H., Page, M. T. & Jordan, T. H. Operational earthquake forecasting during the 2019 Ridgecrest, California, Earthquake sequence with the UCERF3-ETAS Model. Seismol. Res. Lett. 91 , 1567–1578 (2020).

Page, M. T., Field, E. H., Milner, K. R. & Powers, P. M. The UCERF3 grand inversion: Solving for the long-term rate of ruptures in a fault system. Bull. Seismol. Soc. Am. 104 (3), 1184–1204 (2014).

Shcherbakov, R., Zhuang, J., Zöller, G. & Ogata, Y. Forecasting the magnitude of the largest expected earthquake. Nat. Commun. 10 , 4051 (2019).

Nandan, S., Ram, S. K., Ouillon, G. & Sornette, D. Is seismicity operating at a critical point?. Phys. Rev. Lett. 126 , 128501 (2021).

Tan, Y. J. et al. Machine-learning-based high-resolution earthquake catalog reveals how complex fault structures were activated during the 2016–2017 Central Italy sequence. Seismic Rec. 1 , 11–19 (2021).

Bergen, K. J., Johnson, P. A., de Hoop, M. V. & Beroza, G. C. Machine learning for data-driven discovery in solid earth geoscience. Science 363 , 1299 (2019).

Mousavi, S. M., Zhu, W., Ellsworth, W. & Beroza, G. C. Unsupervised clustering of seismic signals using deep convolutional autoencoders. IEEE Geosci. Remote Sens. Lett. 16 , 11 (2019).

Yang, L., Liu, X., Zhu, W., Zhao, L. & Beroza, G. C. Toward improved urban earthquake monitoring through deep-learning-based noise suppression. Sci. Adv. 8 , eabl3564 (2022).

Mousavi, S. M. & Beroza, G. C. Deep-learning seismology. Science 377 , 725 (2022).

Rouet-Leduc, B. et al. Machine learning predicts laboratory earthquakes. Geophys. Res. Lett. 44 , 9276–9282 (2017).

Hulbert, C. et al. Similarity of fast and slow earthquakes illuminated by machine learning. Nat. Geosci. 12 , 69–74 (2019).

Rouet-Leduc, B., Hulbert, C. & Johnson, P. A. Continuous chatter of the Cascadia subduction zone revealed by machine learning. Nat. Geosci. 12 , 75–79 (2019).

DeVries, P. M. R., Viégas, F., Wattenberg, M. & Meade, B. J. Deep learning of aftershock patterns following large earthquakes. Nature 560 , 632–634 (2018).

Mignan, A. & Broccardo, M. Neural network applications in earthquake prediction (1994–2019): Meta-analytic and statistical insights on their limitations. Seismol. Res. Lett. 91 , 4 (2020).

Mignan, A. & Broccardo, M. One neuron versus deep learning in aftershock prediction. Nature 574 , E1–E3 (2019).

Raissi, M., Yazdani, A. & Karniadakis, G. E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 367 , 1026–1030 (2020).

Karpatne, A. et al. Theory-guided data science: A new paradigm for scientific discovery from data. IEEE Trans. Knowl. Data Eng. 29 (10), 2318–2331 (2017).

Champion, K., Lusch, B., Kutz, J. N. & Brunton, S. L. Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. 116 (45), 22445–22451 (2019).

Cho, I., Li, Q., Biswas, R. & Kim, J. A framework for putting scientists’ eyes on glass-box physics rule learner and its application to nano-scale phenomena. Nat. Commun. Phys. 3 , 78. https://doi.org/10.1038/s42005-020-0339-x (2020).

Bazroun, M., Yang, Y. & Cho, I. Flexible and interpretable generalization of self-evolving computational materials framework. Comput. Struct. 260 , 106706. https://doi.org/10.1016/j.compstruc.2021.106706 (2021).

Cho, I., Yeom, S., Sarkar, T. & Oh, T. Unraveling hidden rules behind the wet-to-dry transition of bubble array by glass-box physics rule learner. Nat. Sci. Rep. 12 , 3191 (2022).

Cho, I. A framework for self-evolving computational material models inspired by deep learning. Int. J. Numer. Methods Eng. 120 (10), 1202–1226. https://doi.org/10.1002/nme.6177 (2019).

Kuang, W., Yuan, C. & Zhang, J. Real-time determination of earthquake focal mechanism via deep learning. Nat. Commun. 12 , 1432 (2021).

Pokharel, B., Alvioli, M. & Lim, S. Assessment of earthquake-induced landslide inventories and susceptibility maps using slope unit-based logistic regression and geospatial statistics. Nat. Sci. Rep. 11 , 21333 (2021).

Wang, T., Bian, Y., Zhang, Y. & Hou, X. Classification of earthquakes, explosions and mining-induced earthquakes based on XGBoost algorithm. Comput. Geosci. 170 (C), 105242 (2023).

Omori, F. On the aftershocks of earthquakes. J. Coll. Sci. Imp. Univ. Tokyo 7 , 111–200 (1894).

Helmstetter, A. & Sornette, D. Importance of direct and indirect triggered seismicity in the ETAS model of seismicity. Geophys. Res. Lett. 30 , 1576 (2003).

Gutenberg, B. & Richter, C. F. Seismicity of the Earth and Associated Phenomena (Princeton Univ. Press, 1954).

Cho, I. Gauss curvature-based unique signatures of individual large earthquakes and its implications for customized data-driven prediction. Nat. Sci. Rep. 12 , 8669. https://doi.org/10.1038/s41598-022-12575-w (2022).

Cho, I. Sharpen data-driven prediction rules of individual large earthquakes with aid of Fourier and Gauss. Nat. Sci. Rep. 13 , 16009. https://doi.org/10.1038/s41598-023-43181-z (2023).

Sutton, R. S. & Barto, A. G. Introduction to Reinforcement Learning (MIT Press, 2017).

Huang, B., von Rudorff, G. F. & von Lilienfeld, O. A. The central role of density functional theory in the AI age. Science 381 , 170–175 (2023).

Wood, S. Generalized Additive Models: An Introduction with R (CRC Press, 2006).

Helmstetter, A., Kagan, Y. Y. & Jackson, D. D. Comparison of short-term and time-independent earthquake forecast models for Southern California. Bull. Seismol. Soc. Am. 96 (1), 90–106. https://doi.org/10.1785/0120050067 (2006).

Ma, H., Narayanaswamy, A., Riley, P. & Li, L. Evolving symbolic density functionals. Sci. Adv. 8 , eabq0279 (2022).

Udrescu, S. M. & Tegmark, M. AI Feynman: A physics-inspired method for symbolic regression. Sci. Adv. 6 , eaay2631 (2020).

Ciosek, K. & Whiteson, S. Expected policy gradients for reinforcement learning. J. Mach. Learn. Res. 21 , 1–51 (2020).

United States Geological Survey (USGS). Earthquake Catalog. https://earthquake.usgs.gov/earthquakes/search/ (accessed April 2022).

Cho, I. & Porter, K. Multilayered grouping parallel algorithm for multiple-level multiscale analyses. Int. J. Numer. Meth. Eng. 100 , 914–932 (2014).

Acknowledgements

The authors are grateful for the various research supports. This work was supported, in part, by the National Science Foundation (NSF) of the U.S.A. under grants CSSI-1931380 and CMMI-2129796. High-performance computing (HPC) for this study is partially supported by the HPC@ISU equipment at Iowa State University, some of which has been purchased through funding provided by the NSF CNS-2018594.

Author information

Authors and Affiliations

CCEE Department, Iowa State University, Ames, IA, 50011, USA

In Ho Cho & Ashish Chapagain

Contributions

I.C. is responsible for all algorithms and programs presented as well as writing of the manuscript. A.C. is responsible for conducting earthquake forecasting simulations for feasibility tests.

Corresponding author

Correspondence to In Ho Cho .

Ethics declarations

Competing interests.

The author declares no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Cho, I.H., Chapagain, A. Self-evolving artificial intelligence framework to better decipher short-term large earthquakes. Sci Rep 14 , 21934 (2024). https://doi.org/10.1038/s41598-024-72667-7

Download citation

Received : 26 July 2024

Accepted : 09 September 2024

Published : 20 September 2024

DOI : https://doi.org/10.1038/s41598-024-72667-7

Assessing the Effectiveness of ML Algorithms in Earthquake Damage Prediction

  • Conference paper
  • First Online: 18 September 2024

  • Avinash Bhandiya
  • Kapil Pandey

Part of the book series: Algorithms for Intelligent Systems ((AIS))

Included in the following conference series:

  • International Conference on Deep Learning and Visual Artificial Intelligence

In this research paper, we examine various machine-learning algorithms for predicting earthquake damage, aiming to improve predictive performance and overall stability. We investigate the application of Random Forest, Decision Tree, SVM, KNN, and Naive Bayes algorithms and conduct a comparative study to determine the most suitable method for predicting seismic damage cost. In this investigation, we employ an extensively engineered dataset featuring several relevant features, harmonized by preprocessing methods. Key performance metrics, such as accuracy and the area under the receiver operating characteristic curve, are considered during the evaluation of the examined machine-learning algorithms. This study aims to highlight the ideal approach to machine-learning algorithm selection to meet specific predictive requirements. The outcomes of this study can serve as a valuable reference for disaster management authorities when deciding on suitable algorithms to ensure precise and timely earthquake damage prediction, thereby enhancing disaster response strategies.


Author information

Authors and affiliations

Avinash Bhandiya & Kapil Pandey, Bikaner Technical University, Bikaner, India

Corresponding author

Correspondence to Avinash Bhandiya.

Editor information

Editors and affiliations

Vishal Goar, Engineering College Bikaner, Bikaner Technical University, Bikaner, Rajasthan, India

Aditi Sharma, Department of Computer Science and Engineering, Symbiosis Institute of Technology, Symbiosis International University, Pune, Maharashtra, India

Jungpil Shin, Pattern Processing Lab, School of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu, Fukushima, Japan

M. Firoz Mridha, Department of Computer Science, American International University, Dhaka, Bangladesh

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Bhandiya, A., Pandey, K. (2024). Assessing the Effectiveness of ML Algorithms in Earthquake Damage Prediction. In: Goar, V., Sharma, A., Shin, J., Mridha, M.F. (eds) Deep Learning and Visual Artificial Intelligence. ICDLAI 2024. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-97-4533-3_24

DOI: https://doi.org/10.1007/978-981-97-4533-3_24

Published: 18 September 2024

Publisher: Springer, Singapore

Print ISBN: 978-981-97-4532-6

Online ISBN: 978-981-97-4533-3

