SeriesAcceleration

Convergence-acceleration methods for numerical series: given the summands or partial sums of a slowly converging series, this package constructs improved estimates of its limit, e.g. via Richardson extrapolation or the Shanks transformation.

Index

SeriesAcceleration.Richardson (Type)
Richardson <: SumHelper

Description

Richardson method helper. This struct is initialized with a range of integers dom and a list of exponents exponents, which are used internally. Using more exponents can improve convergence at the cost of amplifying noise. Two methods are available for fitting the internal weights: :bender and :rohringer; see C. Bender, S. Orszag 1999, p. 375 and G. Rohringer, A. Toschi 2016 for the derivations. Due to the different fit methods, method=:bender will not use

Usage

Fields

  • indices : AbstractVector{Int} of indices of the partial-sum array that have been fitted
  • weights : Matrix{Float64} fit weights
source
SeriesAcceleration.build_M_matrix (Method)
build_M_matrix(dom::AbstractArray{Int,1}, exponents::AbstractArray{Int,1})

Helper function that builds the matrix $M$ used to fit data at the indices dom to a $\sum_{n\in \text{exponents}} c_n/i^n$ tail.

dom specifies the indices of the partial sums that are to be fitted. exponents specifies the exponents $p$ of the tail $\sum_{p \in \text{exponents}} c_p/i^p$ to which the partial sums are fitted.

The coefficients $c_n$ are obtained by solving $M c = b$. $b$ can be constructed from the data using build_weights_rohringer.
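As a hedged illustration of the fit this matrix supports (not the package's own implementation; tail_fit_limit is a hypothetical name), one can fit the partial sums $S_i$ for $i$ in dom to $\sum_{p} c_p/i^p$ by least squares and read off the coefficient of $1/i^0$ as the limit estimate:

```julia
# Illustrative sketch of the tail fit: fit partial sums S[i], i in dom,
# to sum_{p in exponents} c_p / i^p and return c_0 as the limit estimate.
# (Assumed formulation; tail_fit_limit is not a SeriesAcceleration function.)
function tail_fit_limit(S, dom, exponents)
    A = [Float64(i)^(-p) for i in dom, p in exponents]  # design matrix
    c = A \ S[dom]                                      # least-squares solve
    return c[findfirst(==(0), exponents)]               # coefficient of i^0
end

S = cumsum(1 ./ (1:30) .^ 2)        # partial sums of ζ(2) = π²/6
est = tail_fit_limit(S, 10:30, 0:3) # ≈ π²/6
```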

source
SeriesAcceleration.build_weights_bender (Method)
build_weights_bender(dom::AbstractArray{Int,1}, exponents::AbstractArray{Int,1})

Builds the weight matrix in closed form; see C. Bender, S. Orszag 1999, p. 375. Fit coefficients are obtained by multiplying $W$ with the data: $a_k = W_{kj} g_j$.
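The closed form in Bender & Orszag is that of classical Richardson extrapolation. As an illustrative sketch (not the package's code; richardson_closed_form is a hypothetical name), the $N$-th Richardson extrapolant of partial sums $A_n$ is $\sum_{k=0}^{N} A_{n+k} (n+k)^N (-1)^{k+N} / (k!\,(N-k)!)$:

```julia
# Classical Richardson extrapolation in closed form (Bender & Orszag 1999):
# A ≈ Σ_{k=0}^N A[n+k] (n+k)^N (-1)^(k+N) / (k! (N-k)!)
# Illustrative only; not SeriesAcceleration's implementation.
function richardson_closed_form(A::AbstractVector{<:Real}, n::Int, N::Int)
    s = 0.0
    for k in 0:N
        s += A[n + k] * float(n + k)^N * (-1.0)^(k + N) /
             (factorial(k) * factorial(N - k))
    end
    return s
end

S = cumsum(1 ./ (1:20) .^ 2)            # partial sums of ζ(2) = π²/6
est = richardson_closed_form(S, 1, 10)  # ≈ π²/6
```

For a sequence with a pure $c/n$ tail, e.g. $A_n = L + c/n$, the $N=1$ extrapolant $(n+1)A_{n+1} - nA_n$ already recovers $L$ exactly.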

source
SeriesAcceleration.build_weights_rohringer (Method)
build_weights_rohringer(dom::AbstractArray{Int,1}, exponents::AbstractArray{Int,1})

Builds the weight matrix $W = M^{-1} R$, with $M$ from build_M_matrix and $R_{kj} = \frac{1}{j^k}$. Fit coefficients are obtained by multiplying $W$ with the data: $a_k = W_{kj} g_j$.
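The key point is that the fit reduces to one fixed matrix applied to the data. A hedged sketch of this idea (assumed formulation, using the pseudoinverse rather than the package's $M^{-1}R$ construction):

```julia
using LinearAlgebra  # for pinv

# Weights as a linear map on the data: the least-squares tail fit
# S[i] ≈ sum_p c_p / i^p gives c = pinv(A) * S[dom], so each coefficient
# is a fixed weighted sum of the partial sums. Illustrative sketch only.
dom, exponents = 10:30, 0:3
A = [Float64(i)^(-p) for i in dom, p in exponents]
W = pinv(A)                  # row k holds the weights for coefficient c_k
S = cumsum(1 ./ (1:30) .^ 2) # partial sums of ζ(2) = π²/6
est = (W * S[dom])[1]        # row for exponent 0 ≈ series limit
```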

source
SeriesAcceleration.esum (Method)
esum(arr::AbstractArray{T1,1}, type::T2) where {T1 <: Number, T2 <: SumHelper}

Description

Computes an improved estimate of the limit of an infinite series from its summands arr. The method is selected via type. In cases where the cumulative sum cannot be computed naively, a function csum_f can be specified to construct the one-dimensional array of partial sums.

Usage

esum(arr, r)

Arguments

  • arr : AbstractVector of summands.
  • type : Instance of SumHelper used to construct the improved estimate of the limit.
  • csum_f : Optional. Function that constructs the partial-sum array from arr.

See also

esum_c, Richardson, Shanks.

Examples

using SeriesAcceleration

r = Richardson(1:5,0:3)
arr = 1 ./ (1:100) .^ 2
limit = esum(arr, r)
source
SeriesAcceleration.esum_c (Method)
esum_c(carr::AbstractArray{T1,1}, type::T2) where {T1 <: Number, T2 <: SumHelper}

Description

Computes an improved estimate of the limit of an infinite series from the array of partial sums carr. The method is selected via type. See also Richardson, Shanks. For technical reasons, the algorithm uses partial sums internally; this method therefore provides a faster and more flexible interface than esum, which constructs the partial sums itself.

Usage

esum_c(carr, r)

Arguments

  • carr : AbstractVector of partial sums up to each index.
  • type : Instance SumHelper for construction of improved estimation of the limit.

See also

esum, Richardson, Shanks.

Examples

using SeriesAcceleration

r = Richardson(1:5,0:3)
carr = cumsum(1 ./ (1:100) .^ 2)
limit = esum_c(carr, r)
source
SeriesAcceleration.rateOfConv (Method)
rateOfConv(arr::AbstractArray{T,1})

Estimates the rate of convergence $\alpha_n = \frac{\log |(x_{n+1} - x_n)/(x_n - x_{n-1})|}{\log |(x_n - x_{n-1})/(x_{n-1}-x_{n-2})|}$ from the array. trace can be set to true to obtain the rate of convergence for all partial sums.
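The formula above can be transcribed directly. A minimal sketch (illustrative only; rate_of_conv is a hypothetical name, not the package function):

```julia
# Direct transcription of the rate-of-convergence estimate at index n:
# α_n = log|(x[n+1]-x[n])/(x[n]-x[n-1])| / log|(x[n]-x[n-1])/(x[n-1]-x[n-2])|
# Needs valid indices n-2 .. n+1. Illustrative sketch, not the package code.
function rate_of_conv(x::AbstractVector{<:Real}, n::Int)
    num = log(abs((x[n + 1] - x[n]) / (x[n] - x[n - 1])))
    den = log(abs((x[n] - x[n - 1]) / (x[n - 1] - x[n - 2])))
    return num / den
end

# Linearly convergent sequence x_n = 2^(-n): successive difference ratios
# are constant (1/2), so the estimated rate is α ≈ 1.
x = [2.0^(-n) for n in 1:10]
alpha = rate_of_conv(x, 5)
```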

source