Bibliographic description:

Чжун Ж. Numerical integration algorithm based on cosine basis neural network // Молодой ученый. — 2016. — No. 8. — pp. 30–34.



In this paper, we present a neural network approach based on cosine basis functions for evaluating highly oscillatory integrals. The main idea is to approximate a function by a Fourier cosine series, obtained by training the weights of the neural network, and then use the integral of that series to approximate the integral of the unknown function. The paper also gives examples of numerical integration based on this neural network and compares the results with those of other numerical integration algorithms.

1. Introduction

In practice, we usually consider an entire family of integrals rather than a single integral, so we take

I[f] = ∫_a^b f(x) e^{iωg(x)} dx  (1.1)

as the general form of a highly oscillatory integral, where f(x) and g(x) are non-oscillating functions and the parameter ω represents the oscillation frequency of the integrand; f(x) is called the amplitude function and e^{iωg(x)} the oscillation factor.

When the frequency ω is large, classical integration methods fail. Methods of Newton–Cotes or Gaussian type are based on polynomial interpolation and are not suited to approximating oscillating functions, because applying a classical method to the integral (1.1) always leads to a complicated computation [1]. To suppress the influence of the oscillation, the number of subintervals of the integration interval [a, b] has to be commensurate with the oscillation frequency ω. This means the number of integration nodes grows with the frequency, so the larger the frequency, the more computational work is required, and in addition the method accumulates more error. We therefore have to look for other ways to compute highly oscillatory integrals. In this paper, we present a neural network approach based on cosine basis functions. We then compare the integral values produced by this method with those of classical algorithms such as the Composite Trapezoidal Rule and Composite Simpson's Rule [2–5].
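This failure is easy to reproduce. The sketch below (in Python rather than the MATLAB used later; the integrand cos(ωx) on [0, 1] is an illustrative choice) fixes the number of nodes and lets the frequency grow:

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return h * s

# Exact value of the test integral: ∫_0^1 cos(ωx) dx = sin(ω)/ω.
# With n fixed at 20, the error grows sharply as ω rises.
for omega in (1, 10, 100):
    approx = composite_trapezoid(lambda x: math.cos(omega * x), 0.0, 1.0, 20)
    print(omega, abs(approx - math.sin(omega) / omega))
```

Once ωh is no longer small, the rule effectively samples the oscillation at random phases and its answer becomes meaningless.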

2. Neural network model of cosine basis function

The main characteristic of an artificial neural network is its capacity for nonlinear mapping, which allows it to approximate any continuous function. From the application point of view, an artificial neural network is structured around a 'goal', usually a multivariate or univariate function called the 'objective function'; by learning the weights, the network output is made to approach the desired function as closely as possible. Based on this idea, the model of the cosine basis neural network is shown in Figure 1.

Figure 1. The model of the neural network

Here w_i (i = 0, 1, …, n − 1) are the weights of the cosine basis neural network, cos(ix) (i = 0, 1, …, n − 1) are the activation functions of the hidden-layer neurons, and x ∈ [0, π]. Assume the weight matrix of the network is W = [w_0, w_1, …, w_{n−1}]^T and the activation matrix is C(x) = [1, cos x, cos 2x, …, cos((n−1)x)]^T. Then the output of the network is

y(x) = Σ_{i=0}^{n−1} w_i cos(ix),  (2.1)

and the error cost function is

e_j = f(x_j) − y(x_j),  j = 1, 2, …, m,

where m is the number of training samples and f(x_j) is the non-oscillating function of the integrand.

Assume the error vector is e = [e_1, e_2, …, e_m]^T. Then the performance index is

J = (1/2) ‖e‖₂²,

where ‖·‖₂² is the square of the Euclidean norm. According to the learning rule of the gradient descent method, the formula for adjusting the weights is

w_i(k+1) = w_i(k) + u e_j cos(i x_j),  i = 0, 1, …, n − 1,

where u is the learning rate and 0 < u < 1.
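A single update of this rule can be sketched as follows (a minimal illustration; the network size n, the node x_j = 0.5, and the target value are placeholder choices, not values fixed by the paper):

```python
import numpy as np

n = 8                    # number of hidden-layer neurons (assumed)
u = 1.0 / n              # learning rate, 0 < u < 1
w = np.random.randn(n)   # current weights w_0 .. w_{n-1}

def update(w, x_j, target):
    """One gradient-descent (delta-rule) step of the cosine network."""
    c = np.cos(np.arange(n) * x_j)   # activations [1, cos x_j, ..., cos (n-1)x_j]
    e = target - w @ c               # error e_j = f(x_j) - y(x_j)
    return w + u * e * c             # w_i <- w_i + u * e_j * cos(i x_j)

# One step toward fitting the sample (0.5, cos(2 * 0.5)):
w = update(w, 0.5, np.cos(2 * 0.5))
```

Because u‖c‖² stays below 2 with this choice of u, each step reduces the error at the visited sample.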

3. The convergence of the cosine basis neural network algorithm

Lemma 1 [6]:

Assume u is the learning rate; when 0 < u < 2/n, the algorithm of the neural network is convergent, where n is the number of hidden-layer neurons.

In the learning process of the neural network, the choice of the learning rate strongly affects the convergence speed of the algorithm: a rate that is too big or too small makes convergence slow. Practical experience shows that with the learning rate u = 1/n the convergence speed is fastest.

Theorem 1 [7]:

Assume a, b are the lower and upper limits of the integral, a, b ∈ [0, π], and w_i is a weight of the neural network.

Then

I = ∫_a^b f(x) dx ≈ w_0 (b − a) + Σ_{i=1}^{n−1} (w_i / i)(sin(ib) − sin(ia)).

Proof:

I = ∫_a^b f(x) dx ≈ ∫_a^b Σ_{i=0}^{n−1} w_i cos(ix) dx

= w_0 (b − a) + Σ_{i=1}^{n−1} w_i ∫_a^b cos(ix) dx

= w_0 (b − a) + Σ_{i=1}^{n−1} (w_i / i)(sin(ib) − sin(ia)).
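The closed form in Theorem 1 can be verified numerically for arbitrary weights: a fine midpoint quadrature of the cosine series must reproduce it (a sketch; the weights and limits below are arbitrary test values, not ones from the paper):

```python
import math

w = [0.3, -0.7, 0.2, 0.05]   # arbitrary test weights w_0 .. w_3
a, b = 0.4, 2.0              # limits inside [0, pi]

def series(x):
    """The fitted cosine series: sum_i w_i cos(ix)."""
    return sum(wi * math.cos(i * x) for i, wi in enumerate(w))

# Closed form from Theorem 1
I_closed = w[0] * (b - a) + sum(
    w[i] / i * (math.sin(i * b) - math.sin(i * a)) for i in range(1, len(w)))

# Brute-force midpoint quadrature of the same series
m = 100_000
h = (b - a) / m
I_num = h * sum(series(a + (k + 0.5) * h) for k in range(m))

print(abs(I_closed - I_num))   # negligible difference
```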

According to Theorem 1, we can obtain three inferences as following:

Inference 1

If a = 0 and b ∈ (0, π], then

I = b w_0 + Σ_{i=1}^{n−1} (w_i / i) sin(ib).

Inference 2

If a = 0 and b = π, then

I = π w_0.

Inference 3

If a = 0 and b = π/2, then

I = (π/2) w_0 + Σ_{i=1}^{n−1} (w_i / i) sin(iπ/2).

Attention: if the limits of integration a, b lie beyond [0, π], that is, [a, b] ⊄ [0, π], then the variable x needs to be transformed: x = a + (b − a)t/π, with t ∈ [0, π]. The function f(x) is corrected accordingly, so that ∫_a^b f(x) dx = ((b − a)/π) ∫_0^π f(a + (b − a)t/π) dt.
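The substitution can be checked on any smooth integrand; the sketch below (with the illustrative choice f = sin on [0, 10]) compares the integral before and after mapping onto [0, π]:

```python
import math

a, b = 0.0, 10.0   # limits beyond [0, pi]
f = math.sin       # illustrative integrand

# After the substitution x = a + (b - a) t / pi, the integrand over [0, pi] is:
g = lambda t: f(a + (b - a) * t / math.pi) * (b - a) / math.pi

def midpoint(fun, lo, hi, m=200_000):
    """Fine midpoint rule, accurate enough to compare the two forms."""
    h = (hi - lo) / m
    return h * sum(fun(lo + (k + 0.5) * h) for k in range(m))

print(abs(midpoint(f, a, b) - midpoint(g, 0.0, math.pi)))   # negligible
```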

4. Algorithm Steps

The goal of neural network learning is the learning of the weights. Following the learning process of an ordinary neural network, the steps of the numerical integration algorithm for oscillating functions based on the cosine basis neural network are as follows:

Step 1. Obtain the training sample set of the neural network: {(x_j, f(x_j))}, j = 1, 2, …, m.

Set the learning rate u = 1/n. Randomly generate the weight matrix of the network, W = randn(n, 1). Choose the error accuracy ε. Calculate the activation matrix C(x_j) = [1, cos x_j, cos 2x_j, …, cos((n−1)x_j)]^T. Let the performance index J = 0.

Step 2. Calculate the output of the neural network: y(x_j) = W^T C(x_j).

Step 3. Calculate the error: e_j = f(x_j) − y(x_j).

Step 4. Update the performance index: J = J + e_j²/2.

Step 5. Update the weights: W = W + u e_j C(x_j).

Step 6. If the samples are not yet fully trained, return to Step 2; otherwise, if J > ε, let J = 0, go back to Step 2 and repeat the above steps. If J ≤ ε, stop training and output the network weights W = [w_0, w_1, w_2, …, w_{n−1}].
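Steps 1–6 can be sketched compactly in NumPy (an illustration, not the authors' MATLAB code; the network size n = 16, sample count m = 200, and the test integrand are assumed values):

```python
import numpy as np

def integrate_cosnet(f, a, b, n=16, m=200, eps=1e-8, max_epochs=2000):
    """Steps 1-6: train a cosine-basis network on f over [a, b] within [0, pi],
    then integrate the fitted cosine series in closed form (Theorem 1)."""
    u = 1.0 / n                              # learning rate (Section 3)
    w = np.random.randn(n)                   # Step 1: random initial weights
    xs = np.linspace(a, b, m)                # training samples x_j
    C = np.cos(np.outer(xs, np.arange(n)))   # activation rows [1, cos x_j, ...]
    fx = f(xs)
    for _ in range(max_epochs):
        J = 0.0
        for c, t in zip(C, fx):
            e = t - w @ c                    # Steps 2-3: output and error
            J += 0.5 * e * e                 # Step 4: performance index
            w = w + u * e * c                # Step 5: delta-rule update
        if J <= eps:                         # Step 6: stop when J <= eps
            break
    i = np.arange(1, n)
    return w[0] * (b - a) + np.sum(w[1:] / i * (np.sin(i * b) - np.sin(i * a)))

# The test integrand is exactly representable in the basis; on [0, pi] only
# the w_0 term survives in the closed form, so the exact integral is pi/2.
f = lambda x: 0.5 + np.cos(2 * x) - 0.3 * np.cos(5 * x)
approx = integrate_cosnet(f, 0.0, np.pi)
```

Because the target lies in the span of the basis, training can drive J below ε, and the closed-form integral from Theorem 1 is then accurate.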

5. Examples of Numerical Integration

Example 1. We calculated three integrals in MATLAB; their intervals of integration are [0, 1], and their integrands are three oscillatory functions. We then compared the integral values obtained by classical rules with those of the method proposed in this paper, together with the numbers of subintervals (n). Table 1 follows:

Table 1

Method | Integrand 1 | Integrand 2 | Integrand 3
Exact value | 0.90933067 | 0.00073815 | 0.07154431
Composite Trapezoidal Rule | 0.90818527 (n=18) | 0.00164847 (n=21) | 0.06878611 (n=58)
Composite Simpson's Rule | 0.90933064 (n=15) | 0.00073803 (n=57) | 0.07150236 (n=120)
Method proposed | 0.90933017 (n=16) | 0.00072176 (n=16) | 0.07151104 (n=18)

Several conclusions follow from Table 1. When the integrand is oscillatory, the error of the Composite Trapezoidal Rule is quite large, so that rule is not suitable for integrals with oscillatory integrands. Moreover, as the integrand becomes more oscillatory, the number of subintervals required by Composite Simpson's Rule grows much faster than for the method proposed: Simpson's rule needs far more computation to maintain its accuracy, so it is not an efficient algorithm here.
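The growth in the Simpson subinterval count can be reproduced independently (a sketch; cos(ωx) on [0, 1] stands in for the unspecified test integrands, and the tolerance 1e-6 is an assumed accuracy target):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's Rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def subintervals_needed(omega, tol=1e-6):
    """Smallest even n for which Simpson's rule meets tol on ∫_0^1 cos(ωx) dx."""
    exact = math.sin(omega) / omega
    n = 2
    while abs(composite_simpson(lambda x: math.cos(omega * x), 0.0, 1.0, n) - exact) > tol:
        n += 2
    return n

for omega in (5, 20, 80):
    print(omega, subintervals_needed(omega))   # n grows rapidly with omega
```

Since Simpson's error term scales like h⁴ω⁴, the required n grows roughly in proportion to ω.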

Example 2. Compute an approximation to the integral whose integrand, shown in Figure 2, is highly oscillatory.


Figure 2

We used Composite Simpson's Rule and the proposed method to compute this integral in MATLAB, then compared their values and numbers of subintervals (n) in Table 2:

Table 2

Method | Value
Exact value | 0.07073553
Composite Simpson's Rule | -0.3247595 (n=108)
Method proposed | 0.07067063 (n=17)

The result is obvious: Composite Simpson's Rule is not suitable for integrals with highly oscillatory integrands, while the proposed method based on the cosine basis neural network solves them with an effective amount of computation.

6. Conclusion

In this paper, an algorithm for computing numerical integrals with a neural network was studied. Its main idea is to make the network output fit the integrand by training the weights of a neural network based on the cosine basis functions with a gradient descent algorithm. Since the quadrature of the cosine basis functions is very easy, the numerical integral of any function can be approximated by the integral of its cosine series. The method effectively handles the numerical integration of functions that are difficult to model, or of unknown functions.

References:

  1. Li Yi-fu. A new type of highly efficient method for the numerical integration of oscillating functions [J]. Mathematica Numerica Sinica, 14(4): 506–512, 1992.
  2. Wang Neng-chao. Numerical Analysis. Higher Education Press, Beijing, China, pages 66–96, 1997.
  3. Richard L. Burden, J. Douglas Faires. Numerical Analysis (Seventh Edition). Thomson Learning, Inc., pages 186–226, 772, 2001.
  4. Shen Jian-hua. Foundation of Numerical Method. TongJi University Press, Shanghai, China, pages 73–109, 1999.
  5. Liu Cheng-zhi, Cui Dou-xing. An Efficient Step-Size Control Method in Numerical Integration for Astrodynamical Equations [J]. ACTA Astronomica Sinica, 43(4): 387–390, 2002.
  6. Zeng Zhe-Zhao, Wang Yao-Nan, Wen Hui. "Numerical Integration Based on a Neural Network Algorithm", Computing in Science & Engineering, vol. 8, no. 4, pp. 42–48, July/August 2006, doi:10.1109/MCSE.2006.73.
  7. Luo Yu-xiong, Wen Hui. A Numerical Integration Approach Using Neural Network Algorithm [J]. Chinese Journal of Sensors and Actuators, 19(4): 1187–1194, 2006.
  8. Zhang Yan-hong, Chen Shan-yang, Hu Xiao. A Numerical Integration Method for Structural Analysis. Engineering Mechanics, 22(3): 39–45, 2005.
  9. M. S. Al-Haik, H. Garmestani, I. M. Navon. Truncated-Newton training algorithm for neurocomputational viscoplastic model. Computer Methods in Applied Mechanics and Engineering, Vol. 192, 2249–2267, 2003.
  10. Cao Chunhong, Zhang Yongjian, Li Wenhui. A new method for geometric constraint solving: a hybrid genetic algorithm based on conjugate gradient
