Research into current methods, technologies, and techniques applied to solving applied machine-learning problems for predictive protection

Authors

  • Rodion Anatoliyovych Ivchenko
  • Andriy Ivanovich Kupin

DOI:

https://doi.org/10.34185/1562-9945-2-127-2020-05

Keywords:

artificial neural networks, neural network approximation, neural network modeling, regression analysis, Data Mining

Abstract

This paper surveys current methods, technologies, and techniques used to solve applied machine-learning problems, drawing on articles by foreign researchers in highly rated journals, analytical and review notes from open sources, and the technical documentation and press releases of hardware and software solutions. The search for new methods of model selection, cross-validation, and evolutionary and analytical selection of training algorithms is of both scientific and practical interest. The development of machine-learning technologies will only accelerate in the near future. We are currently witnessing progress in automated search methods for constructing effective learning models for data analysis, applicable to many practical data-mining problems. This review of modern trends in machine learning identified promising areas of fundamental and applied research in the field.
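The automated model-selection theme above can be illustrated with a minimal sketch: cross-validation compares candidate models and hyperparameters and keeps the best, in the spirit of the AutoML tools surveyed (TPOT, Auto-WEKA). The library (scikit-learn), the synthetic data, and the parameter grids are illustrative assumptions, not the tooling used in the reviewed works.

```python
# Minimal automated model selection via 5-fold cross-validation:
# each candidate model gets a small hyperparameter grid, and the
# configuration with the best CV score wins.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=5, noise=5.0, random_state=0)

# Candidate models with illustrative hyperparameter grids.
candidates = {
    "ridge": (Ridge(), {"model__alpha": [0.1, 1.0, 10.0]}),
    "mlp": (MLPRegressor(max_iter=2000, random_state=0),
            {"model__hidden_layer_sizes": [(10,), (30,)]}),
}

best_name, best_score = None, -float("inf")
for name, (model, grid) in candidates.items():
    pipe = Pipeline([("scale", StandardScaler()), ("model", model)])
    search = GridSearchCV(pipe, grid, cv=5, scoring="r2")
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_name, best_score = name, search.best_score_

print(best_name, round(best_score, 3))
```

Real AutoML systems extend this loop with evolutionary search over whole pipelines (TPOT) or Bayesian optimization over algorithm/hyperparameter spaces (Auto-WEKA), but the evaluate-by-cross-validation core is the same.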
Development of a process model based on neural networks. Neural networks have been used successfully to synthesize control systems for dynamic objects, and they have several properties that make them promising as the analytical core of such systems. In the context of the problem considered here, the foremost is the ability to learn from examples: large volumes of monitoring data, containing interrelated measurements of both the inputs and the outputs of the studied system, provide the network with representative training samples. Other important properties are the network's ability to adapt to changes in the control object and its environment, and its high tolerance to "failures" of individual elements, owing to the parallelism built into its architecture. A neural network's ability to predict follows directly from its ability to generalize and to uncover hidden relationships between input and output data: after training, the network can "predict" future output values from several previous values and current monitoring data. Within the framework of the ongoing research, the use of counterpropagation networks appears most promising.
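The "predict the next output from several previous values" scheme described above can be sketched as a sliding window over a monitoring signal. This is a minimal illustration under assumed conditions: the signal is synthetic, and an MLP stands in for the network (the counterpropagation architecture named in the text is not implemented here).

```python
# Sliding-window one-step-ahead prediction: the network sees the last
# `window` measurements and learns to output the value that follows.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)  # noisy process output

window = 10  # number of previous values fed to the network
X = np.array([signal[i:i + window] for i in range(signal.size - window)])
y = signal[window:]  # the value that follows each window

split = int(0.8 * len(X))  # train on the past, test on the "future"
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print(round(rmse, 3))
```

The chronological train/test split mirrors how such a model would be deployed: it is trained on historical monitoring data and asked to extrapolate forward.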
Among the neural network architectures considered, multilayer perceptrons and radial basis function networks are applicable to the problems of approximation and regression analysis. Both types have advantages and disadvantages in dependency-recovery tasks, and each effectively approximates complex functions while learning from noisy data. Multilayer perceptrons perform well on experimental data, including multidimensional data, and can model the patterns hidden in it. A three-layer perceptron with a linear activation function on the output neuron and a hyperbolic tangent activation function in the hidden layers showed the best result in terms of training accuracy and prediction accuracy.
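The reported best configuration, tanh hidden layers feeding a linear output neuron, can be sketched as follows. The target function, network sizes, and noise level are illustrative assumptions; scikit-learn's `MLPRegressor` uses a linear (identity) output unit by default, matching the described setup.

```python
# A perceptron with tanh hidden layers and a linear output neuron,
# approximating a noisy one-dimensional function.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sinc(X).ravel() + 0.02 * rng.standard_normal(400)  # noisy samples

# Two tanh hidden layers; the output activation is linear by default.
net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)

rmse = float(np.sqrt(np.mean((net.predict(X) - y) ** 2)))
print(round(rmse, 3))
```

A fit RMSE close to the injected noise level indicates the network has recovered the underlying dependency rather than memorizing the noise.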

References

Koroteev, M.V. "A review of some current trends in machine learning technology". Distributed under Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0/).

Oleynik, A.G. and Kovalev, L.P. "Scheme of operative forecasting of production processes of ore enrichment", Proceedings of the Kola Scientific Center of the Russian Academy of Sciences: Information Technology, Apatity: Publishing House of the KSC RAS, 4/2011 (7), No. 2, pp. 211–219.

Chernodub, A.N. and Dzyuba, D.A. "Review of neurocontrol methods", Problems in Programming, 2011, No. 2, pp. 79–94.

Kohonen, T. Self-Organization and Associative Memory, 2nd ed., New York: Springer-Verlag, 1988, 312 p.

Grossberg, S. "Some networks that can learn, remember and reproduce any number of complicated space-time patterns", Journal of Mathematics and Mechanics, 1969, Vol. 19, No. 1, pp. 53–91.

Khokhlova, D. (2016), "Neural network boom: who makes neural networks, why they need it and how much money they can make", 06/12/2016, available at: https://vc.ru/16843-neural-networks (accessed June 9, 2018).

Molnar, C. (2018), "Interpretable machine learning", available at: https://christophm.github.io/interpretable-ml-book/ (accessed September 6, 2018).

Olson, R. (2016), "TPOT: A Python tool for automating data science", available at: https://www.kdnuggets.com/2016/05/tpot-python-automating-data-science.html/2 (accessed September 6, 2018).

Thornton, C. et al. (2013), "Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms", ACM, pp. 847–855.

Olson, R.S. and Moore, J.H. (2016), "TPOT: A tree-based pipeline optimization tool for automating machine learning", pp. 66–74.

Zoph, B. and Le, Q.V. (2016), "Neural architecture search with reinforcement learning", available at: https://arxiv.org/abs/1611.01578.

Gulakov, K.V. "The choice of neural network architecture for solving the problems of approximation and regression analysis of experimental data", Bulletin of the Bryansk State Technical University, 2013, No. 2 (38).

Published

2020-02-24