
Sample L-moments

Estimation of (trimmed) L-moments from sample data.

L-moment Estimators

Unbiased sample estimators of the L-moments and the (generalized) trimmed L-moments.

lmo.l_moment(a, r, /, trim=0, *, axis=None, dtype=np.float64, fweights=None, aweights=None, sort=True, cache=None)

l_moment(
    a: lnpt.AnyArrayFloat,
    r: AnyOrder,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_moment(
    a: lnpt.AnyMatrixFloat | lnpt.AnyTensorFloat,
    r: AnyOrder | AnyOrderND,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_moment(
    a: lnpt.AnyVectorFloat,
    r: AnyOrder,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_moment(
    a: lnpt.AnyArrayFloat,
    r: AnyOrderND,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int | None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]

Estimates the generalized trimmed L-moment \(\lambda^{(s, t)}_r\) from the samples along the specified axis. By default, this will be the regular L-moment, \(\lambda_r = \lambda^{(0, 0)}_r\).

Parameters:

Name Type Description Default
a AnyArrayFloat

Array containing numbers whose L-moments are desired. If a is not an array, a conversion is attempted.

required
r AnyOrder | AnyOrderND

The L-moment order(s), non-negative integer or array.

required
trim AnyTrim

Left- and right-trim orders \((s, t)\), non-negative ints or floats that are bound by \(s + t < n - r\). A single scalar \(t\) can be provided as well, as an alias for \((t, t)\).

Some special cases include:

  • \((0, 0)\) or \((0)\): The original L-moment, introduced by Hosking in 1990.
  • \((t, t)\) or \((t)\): TL-moment (Trimmed L-moment) \(\lambda_r^{(t)}\), with symmetric trimming. First introduced by Elamir & Seheult in 2003, and refined by Hosking in 2007. Generally more robust than L-moments. Useful for fitting pathological distributions, such as the Cauchy distribution.
  • \((0, t)\): LL-moment (Linear combination of Lowest order statistics), introduced by Bayazit & Onoz in 2002. Assigns more weight to smaller observations.
  • \((s, 0)\): LH-moment (Linear combination of Higher order statistics), as described by Wang in 1997. Assigns more weight to larger observations.
0
axis int | None

Axis along which to calculate the moments. If None (default), all samples in the array will be used.

None
dtype _DType[_SCT_f]

Floating type to use in computing the L-moments. Default is numpy.float64.

float64
fweights AnyFWeights | None

1-D array of integer frequency weights; the number of times each observation vector should be repeated.

None
aweights AnyAWeights | None

An array of weights associated with the values in a. Each value in a contributes to the average according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of a along the given axis) or of the same shape as a. If aweights=None (default), then all data in a are assumed to have a weight equal to one.

All aweights must be >=0, and the sum must be nonzero.

The algorithm is similar to that for weighted quantiles.

None
sort True | False | 'quick' | 'heap' | 'stable'

Sorting algorithm, see numpy.sort. Set to False if the array is already sorted.

True
cache bool | None

Set to True to speed up future L-moment calculations that have the same number of observations in a, equal trim, and equal or smaller r. By default, it will cache iff the trim is integral and \(r + s + t \le 24\). Set to False to always disable caching.

None

Returns:

Name Type Description
l _Vectorized[_SCT_f]

The L-moment(s) of the input. This is a scalar iff a is 1-D and r is a scalar. Otherwise, this is an array with np.ndim(r) + np.ndim(a) - 1 dimensions and shape like (*np.shape(r), *(d for d in np.shape(a) if d != axis)).

Examples:

Calculate the L-location and L-scale from student-T(2) samples, for different (symmetric) trim-lengths.

>>> import lmo
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> x = rng.standard_t(2, 99)
>>> lmo.l_moment(x, [1, 2])
array([-0.01412282,  0.94063132])
>>> lmo.l_moment(x, [1, 2], trim=1)
array([-0.0124483 ,  0.40120115])
>>> lmo.l_moment(x, [1, 2], trim=(1, 1))
array([-0.0124483 ,  0.40120115])

The theoretical L- and TL-location is 0, the L-scale is 1.1107, and the TL-scale is 0.4165, respectively.
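For the untrimmed case, these estimates can also be reproduced by hand from the first two sample probability-weighted moments (PWMs), via \(\lambda_1 = b_0\) and \(\lambda_2 = 2 b_1 - b_0\) (Hosking, 1990). A minimal NumPy-only sketch (not Lmo's actual implementation, which uses a projection matrix):

```python
import numpy as np

rng = np.random.default_rng(12345)
x = np.sort(rng.standard_t(2, 99))  # same samples as in the example above, sorted
n = len(x)
i = np.arange(n)  # 0-based ranks

# Sample probability-weighted moments b_0 and b_1
b0 = x.mean()
b1 = (i / (n - 1) * x).mean()

l1, l2 = b0, 2 * b1 - b0
print(l1, l2)  # should match lmo.l_moment(x, [1, 2]) above
```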

See Also
References

lmo.l_ratio(a, r, s, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

l_ratio(
    a: lnpt.AnyVectorFloat,
    r: AnyOrder,
    s: AnyOrder,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int | None,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_ratio(
    a: lnpt.AnyMatrixFloat | lnpt.AnyTensorFloat,
    r: AnyOrder,
    s: AnyOrder,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_ratio(
    a: lnpt.AnyArrayFloat,
    r: AnyOrder,
    s: AnyOrder,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_ratio(
    a: lnpt.AnyArrayFloat,
    r: AnyOrder,
    s: AnyOrderND,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int | None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_ratio(
    a: lnpt.AnyArrayFloat,
    r: AnyOrderND,
    s: AnyOrder | AnyOrderND,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int | None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]

Estimates the generalized L-moment ratio.

\[ \tau^{(s, t)}_{r,s} = \frac {\lambda^{(s, t)}_r} {\lambda^{(s, t)}_s} \]

Equivalent to lmo.l_moment(a, r, *, **) / lmo.l_moment(a, s, *, **).

The L-moment with r=0 is 1, so l_ratio(a, r, 0, *, **) is equivalent to l_moment(a, r, *, **).

Notes

Often, when referring to the \(r\)th L-ratio, the L-moment ratio with denominator order 2 is implied, i.e. \(\tau^{(s, t)}_r\) is shorthand notation for \(\tau^{(s, t)}_{r,2}\).

The L-variation (L-moment Coefficient of Variation, or L-CV) is another special case of the L-moment ratio, \(\tau^{(s, t)}_{2,1}\). It is sometimes denoted in the literature by dropping the subscript indices: \(\tau^{(s, t)}\). Note that this should only be used with strictly positive distributions.

Examples:

Estimate the L-location, L-scale, L-skewness and L-kurtosis simultaneously:

>>> import lmo
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> x = rng.lognormal(size=99)
>>> lmo.l_ratio(x, [1, 2, 3, 4], [0, 0, 2, 2])
array([1.53196368, 0.77549561, 0.4463163 , 0.29752178])
>>> lmo.l_ratio(x, [1, 2, 3, 4], [0, 0, 2, 2], trim=(0, 1))
array([0.75646807, 0.32203446, 0.23887609, 0.07917904])
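The untrimmed estimates above can be reproduced with a NumPy-only sketch, using the classic PWM representation \(\lambda_1 = b_0\), \(\lambda_2 = 2b_1 - b_0\), \(\lambda_3 = 6b_2 - 6b_1 + b_0\), and \(\lambda_4 = 20b_3 - 30b_2 + 12b_1 - b_0\) (Hosking, 1990); note that this is not how Lmo computes them internally:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(12345)
x = np.sort(rng.lognormal(size=99))  # same samples as in the example above
n = len(x)

# Sample probability-weighted moments b_0 .. b_3
b = [np.mean([comb(i, k) / comb(n - 1, k) * x[i] for i in range(n)]) for k in range(4)]

l1 = b[0]
l2 = 2 * b[1] - b[0]
l3 = 6 * b[2] - 6 * b[1] + b[0]
l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]

# L-location, L-scale, L-skewness, and L-kurtosis
print([l1, l2, l3 / l2, l4 / l2])  # should match the untrimmed l_ratio output above
```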
See Also

lmo.l_stats(a, /, trim=0, num=4, *, axis=None, dtype=np.float64, **kwds)

Calculates the L-loc(ation), L-scale, L-skew(ness) and L-kurtosis.

Equivalent to lmo.l_ratio(a, [1, 2, 3, 4], [0, 0, 2, 2], *, **) by default.

Examples:

>>> import lmo, scipy.stats
>>> x = scipy.stats.gumbel_r.rvs(size=99, random_state=12345)
>>> lmo.l_stats(x)
array([0.79014773, 0.68346357, 0.12207413, 0.12829047])

The theoretical L-stats of the standard Gumbel distribution are [0.577, 0.693, 0.170, 0.150].
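The quoted theoretical values follow from the closed-form L-stats of the standard Gumbel distribution: \(\lambda_1 = \gamma\) (the Euler–Mascheroni constant), \(\lambda_2 = \ln 2\), \(\tau_3 = 2 \ln 3 / \ln 2 - 3\), and \(\tau_4 = 16 - 10 \ln 3 / \ln 2\):

```python
import math

gamma = 0.57721566490153286  # Euler–Mascheroni constant
log2_3 = math.log(3) / math.log(2)

l_stats_theory = [gamma, math.log(2), 2 * log2_3 - 3, 16 - 10 * log2_3]
print([round(v, 3) for v in l_stats_theory])  # [0.577, 0.693, 0.17, 0.15]
```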

See Also

Shorthand aliases

Some of the commonly used L-moments and L-moment ratios have specific names, analogous to the named raw, central, and standardized product-moments:

\(\lambda_r / \lambda_s\)  \(s = 0\)  \(s = 1\)  \(s = 2\)
\(r = 1\)  \(\lambda_1\) – “L-loc[ation]”
\(r = 2\)  \(\lambda_2\) – “L-scale”  \(\tau\) – “L-variation” or “L-CV”  \(1\) – “L-one” or “L-unit”
\(r = 3\)  \(\lambda_3\)  –  \(\tau_3\) – “L-skew[ness]”
\(r = 4\)  \(\lambda_4\)  –  \(\tau_4\) – “L-kurt[osis]”

Note

The “L-” prefix often refers to untrimmed L-moments, i.e. \((s, t) = (0, 0)\).

For some trim-lengths, specific alternative prefixes are used:

\(\lambda_r^{(s, t)}\) \(t = 0\) \(t = 1\)
\(s = 0\) L-moment LL-moment
\(s = 1\) LH-moment TL-moment

The “L-” prefix refers to “Linear”, i.e. an L-moment is a “Linear combination of order statistics” [1]. Usually, “TL-moments” describe symmetrically Trimmed L-moments, in most cases those with a trim-length of 1 [2][3]. Similarly, “LH-moments” describe “linear combinations of higher-order statistics” [4], and “LL-moments” those of the lowest order statistics [5].

Lmo supports all possible trim-lengths. Technically, these are the “generalized trimmed L-moments”. But for the sake of brevity, Lmo abbreviates this to “L-moments”.

lmo.l_loc(a, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

l_loc(
    a: lnpt.AnyMatrixFloat | lnpt.AnyTensorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_loc(
    a: lnpt.AnyVectorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_loc(
    a: lnpt.AnyArrayFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f

L-location (or L-loc): unbiased estimator of the first L-moment, \(\lambda^{(s, t)}_1\).

Alias for lmo.l_moment(a, 1, *, **).

Examples:

The first moment (i.e. the mean) of the Cauchy distribution does not exist. This means that estimating the location of a Cauchy distribution from its samples cannot be done using the traditional average (i.e. the arithmetic mean). Instead, a robust location measure should be used, e.g. the median, or the TL-location.

To illustrate, let’s start by drawing some samples from the standard Cauchy distribution, which is centered around the origin.

>>> import lmo
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> x = rng.standard_cauchy(200)

The mean and the untrimmed L-location (which are equivalent) give wrong results, so don’t do this:

>>> np.mean(x)
-3.6805
>>> lmo.l_loc(x)
-3.6805

Usually, the answer to this problem is to use the median. However, the median only considers one or two samples (depending on whether the number of samples is odd or even, respectively). So the median ignores most of the available information.

>>> np.median(x)
0.096825
>>> lmo.l_loc(x, trim=(len(x) - 1) // 2)
0.096825

Luckily for us, Lmo knows how to deal with long tails as well, by trimming them (specifically, by skipping the first \(s\) and last \(t\) expected order statistics).

Let’s try the TL-location:

>>> lmo.l_loc(x, trim=1)  # equivalent to `trim=(1, 1)`
0.06522
Notes

The trimmed L-location naturally unifies the arithmetic mean, the median, the minimum and the maximum. In particular, the following are equivalent, given n = len(x):

  • l_loc(x, trim=0) / statistics.mean(x) / np.mean(x)
  • l_loc(x, trim=(n-1) // 2) / statistics.median(x) / np.median(x)
  • l_loc(x, trim=(0, n-1)) / min(x) / np.min(x)
  • l_loc(x, trim=(n-1, 0)) / max(x) / np.max(x)

Note that numerical noise might cause slight differences between their results.
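These equivalences can be checked directly: the \((s, t)\)-trimmed L-location is the sample estimate of \(\mathbb{E}[X_{s+1:s+t+1}]\), whose unbiased estimator weighs the \(i\)-th order statistic by \(\binom{i-1}{s} \binom{n-i}{t} / \binom{n}{s+t+1}\). A NumPy-only sketch (tl_loc is a hypothetical helper, not part of Lmo's API):

```python
import numpy as np
from math import comb

def tl_loc(x, s, t):
    """Unbiased estimate of the (s, t)-trimmed L-location, E[X_{s+1 : s+t+1}]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # weight of the (i+1)-th order statistic (0-based i)
    w = np.array([comb(i, s) * comb(n - 1 - i, t) for i in range(n)])
    return (w * x).sum() / comb(n, s + t + 1)

x = [3.0, -1.0, 4.0, 1.0, 5.0]
n = len(x)
print(tl_loc(x, 0, 0))                  # == np.mean(x)
print(tl_loc(x, (n-1)//2, (n-1)//2))    # == np.median(x)
print(tl_loc(x, 0, n - 1))              # == min(x)
print(tl_loc(x, n - 1, 0))              # == max(x)
```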

Even though Lmo is built with performance in mind, the equivalent numpy functions are always faster, as they don’t need to sort all samples. Specifically, the time complexity of lmo.l_loc (and l_moment in general) is \(O(n \log n)\), whereas that of numpy.{mean,median,min,max} is \(O(n)\).

See Also

lmo.l_scale(a, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

l_scale(
    a: lnpt.AnyMatrixFloat | lnpt.AnyTensorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_scale(
    a: lnpt.AnyVectorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_scale(
    a: lnpt.AnyArrayFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f

L-scale: unbiased estimator of the second L-moment, \(\lambda^{(s, t)}_2\).

Alias for lmo.l_moment(a, 2, *, **).

Examples:

>>> import lmo, numpy as np
>>> x = np.random.default_rng(12345).standard_cauchy(99)
>>> x.std()
72.87715244
>>> lmo.l_scale(x)
9.501123995
>>> lmo.l_scale(x, trim=(1, 1))
0.658993279
Notes

If trim = (0, 0) (default), the L-scale is equivalent to half the Gini mean difference (GMD).
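This equivalence is easy to verify numerically: half the mean absolute difference over all \(\binom{n}{2}\) sample pairs equals the sample L-scale. A NumPy-only sketch:

```python
import numpy as np

rng = np.random.default_rng(12345)
x = rng.standard_normal(50)
n = len(x)

# Half the Gini mean difference: mean |x_i - x_j| over all pairs, divided by 2
gmd_half = np.abs(x[:, None] - x[None, :]).sum() / (2 * n * (n - 1))

# Sample L-scale via its PWM representation: l2 = 2*b1 - b0
xs = np.sort(x)
i = np.arange(n)
l2 = 2 * (i / (n - 1) * xs).mean() - xs.mean()

print(np.isclose(gmd_half, l2))  # True
```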

See Also

lmo.l_variation(a, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

l_variation(
    a: lnpt.AnyMatrixFloat | lnpt.AnyTensorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> onpt.Array[Any, _SCT_f]
l_variation(
    a: lnpt.AnyVectorFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: int,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f
l_variation(
    a: lnpt.AnyArrayFloat,
    /,
    trim: lmt.AnyTrim = ...,
    *,
    axis: None = ...,
    dtype: _DType[_SCT_f] = np.float64,
    **kwds: Unpack[lmt.LMomentOptions],
) -> _SCT_f

Unbiased sample estimator of the coefficient of L-variation (or L-CV):

\[ \tau^{(s, t)} = \frac {\lambda^{(s, t)}_2} {\lambda^{(s, t)}_1} \]

Alias for lmo.l_ratio(a, 2, 1, *, **).

Examples:

>>> import lmo, numpy as np
>>> x = np.random.default_rng(12345).pareto(4.2, 99)
>>> x.std() / x.mean()
1.32161112
>>> lmo.l_variation(x)
0.59073639
>>> lmo.l_variation(x, trim=(0, 1))
0.55395044
Notes

If trim = (0, 0) (default), this is equivalent to the Gini coefficient, and lies within the interval \((0, 1)\).
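For strictly positive data this can be checked numerically: the L-CV equals the (bias-corrected) Gini coefficient, i.e. the Gini mean difference divided by twice the mean. A NumPy-only sketch:

```python
import numpy as np

rng = np.random.default_rng(12345)
x = rng.pareto(4.2, 99)  # strictly positive samples
n = len(x)

# Bias-corrected Gini coefficient: GMD / (2 * mean)
gmd = np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))
gini = gmd / (2 * x.mean())

# L-CV = l2 / l1 via the PWM representation
xs = np.sort(x)
i = np.arange(n)
l1 = xs.mean()
l2 = 2 * (i / (n - 1) * xs).mean() - l1

print(np.isclose(gini, l2 / l1))  # True
```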

See Also

lmo.l_skew(a, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

Unbiased sample estimator for the L-skewness coefficient.

\[ \tau^{(s, t)}_3 = \frac {\lambda^{(s, t)}_3} {\lambda^{(s, t)}_2} \]

Alias for lmo.l_ratio(a, 3, 2, *, **).

Examples:

>>> import lmo, numpy as np
>>> x = np.random.default_rng(12345).standard_exponential(99)
>>> lmo.l_skew(x)
0.38524343
>>> lmo.l_skew(x, trim=(0, 1))
0.27116139
See Also

lmo.l_kurt = l_kurtosis module-attribute

Alias for lmo.l_kurtosis.

lmo.l_kurtosis(a, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

L-kurtosis coefficient; the 4th sample L-moment ratio.

\[ \tau^{(s, t)}_4 = \frac {\lambda^{(s, t)}_4} {\lambda^{(s, t)}_2} \]

Alias for lmo.l_ratio(a, 4, 2, *, **).

Examples:

>>> import lmo, numpy as np
>>> x = np.random.default_rng(12345).standard_t(2, 99)
>>> lmo.l_kurtosis(x)
0.28912787
>>> lmo.l_kurtosis(x, trim=(1, 1))
0.19928182
Notes

The L-kurtosis \(\tau_4\) lies within the interval \([-\frac{1}{4}, 1)\), and is bounded from below by the L-skewness \(\tau_3\), as \(5 \tau_3^2 - 1 \le 4 \tau_4\).

See Also

L-moment Accuracy

lmo.l_moment_cov(a, r_max, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

Non-parametric auto-covariance matrix of the generalized trimmed L-moment point estimates, with orders r = 1, ..., r_max.

Returns:

Name Type Description
S_l NDArray[_SCT_f]

Variance-covariance matrix/tensor of shape (r_max, r_max, ...)

Examples:

Fitting of the Cauchy distribution with TL-moments. The location is equal to the TL-location, and the scale should be \(0.698\) times the TL(1)-scale; see Elamir & Seheult (2003).

>>> import lmo, numpy as np
>>> rng = np.random.default_rng(12345)
>>> x = rng.standard_cauchy(1337)
>>> lmo.l_moment(x, [1, 2], trim=(1, 1))
array([0.08142405, 0.68884917])

The L-moment estimates seem to make sense. Let’s check their standard errors by taking the square root of the variances (the diagonal of the covariance matrix):

>>> lmo.l_moment_cov(x, 2, trim=(1, 1))
array([[ 4.89407076e-03, -4.26419310e-05],
       [-4.26419310e-05,  1.30898414e-03]])
>>> np.sqrt(_.diagonal())
array([0.06995764, 0.03617989])
See Also
References
Todo
  • Use the direct (Jacobi) method from Hosking (2015).

lmo.l_ratio_se(a, r, s, /, trim=0, *, axis=None, dtype=np.float64, **kwds)

Non-parametric estimates of the Standard Error (SE) in the L-ratio estimates from lmo.l_ratio.

Examples:

Estimate the values and errors of the TL-loc, scale, skew and kurtosis for Cauchy-distributed samples. The theoretical values are [0.0, 0.698, 0.0, 0.343] (Elamir & Seheult, 2003), respectively.

>>> import lmo, numpy as np
>>> rng = np.random.default_rng(12345)
>>> x = rng.standard_cauchy(42)
>>> lmo.l_ratio(x, [1, 2, 3, 4], [0, 0, 2, 2], trim=(1, 1))
array([-0.25830513,  0.61738638, -0.03069701,  0.25550176])
>>> lmo.l_ratio_se(x, [1, 2, 3, 4], [0, 0, 2, 2], trim=(1, 1))
array([0.32857302, 0.12896501, 0.13835403, 0.07188138])
See Also
References

lmo.l_stats_se(a, /, trim=0, num=4, *, axis=None, dtype=np.float64, **kwds)

Calculates the standard errors (SEs) of the L-stats.

Equivalent to lmo.l_ratio_se(a, [1, 2, 3, 4], [0, 0, 2, 2], *, **) by default.

Examples:

>>> import lmo, scipy.stats
>>> x = scipy.stats.gumbel_r.rvs(size=99, random_state=12345)
>>> lmo.l_stats(x)
array([0.79014773, 0.68346357, 0.12207413, 0.12829047])
>>> lmo.l_stats_se(x)
array([0.12305147, 0.05348839, 0.04472984, 0.03408495])

The theoretical L-stats of the standard Gumbel distribution are [0.577, 0.693, 0.170, 0.150]. The corresponding relative z-scores are [-1.730, 0.181, 1.070, 0.648].
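The quoted z-scores follow from the theoretical Gumbel L-stats (\(\lambda_1 = \gamma\), \(\lambda_2 = \ln 2\), \(\tau_3 = 2\ln 3/\ln 2 - 3\), \(\tau_4 = 16 - 10\ln 3/\ln 2\)) as \((\text{theoretical} - \text{estimate}) / \text{SE}\):

```python
import numpy as np

log2_3 = np.log(3) / np.log(2)
theory = np.array([0.57721566, np.log(2), 2 * log2_3 - 3, 16 - 10 * log2_3])
estimate = np.array([0.79014773, 0.68346357, 0.12207413, 0.12829047])  # lmo.l_stats(x) above
se = np.array([0.12305147, 0.05348839, 0.04472984, 0.03408495])        # lmo.l_stats_se(x) above

print(np.round((theory - estimate) / se, 3))  # the z-scores quoted above
```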

See Also

Sensitivity & Robustness

Wikipedia describes the empirical influence function (EIF) as follows:

The empirical influence function is a measure of the dependence of the estimator on the value of any one of the points in the sample. It is a model-free measure in the sense that it simply relies on calculating the estimator again with a different sample.

Tip

The EIF can be used to calculate some useful properties related to the robustness of the estimate.

lmo.l_moment_influence(a, r, /, trim=0, *, sort=True, tol=1e-08)

Calculate the Empirical Influence Function (EIF) for a sample L-moment estimate.

Notes

This function is not vectorized, and can only be used for a single L-moment order r. However, the returned (empirical influence) function is vectorized.

Parameters:

Name Type Description Default
a AnyVectorFloat

1-D array-like containing observed samples.

required
r AnyOrder

L-moment order. Must be a non-negative integer.

required
trim AnyTrim

Left- and right- trim. Can be scalar or 2-tuple of non-negative int (or float).

0

Other Parameters:

Name Type Description
sort True | False | 'quick' | 'heap' | 'stable'

Sorting algorithm, see numpy.sort. Set to False if the array is already sorted.

tol float

Zero-roundoff absolute threshold.

Returns:

Name Type Description
influence_function Callable[[_T_x], _T_x]

The (vectorized) empirical influence function.

Raises:

Type Description
ValueError

If a is not 1-D array-like.

TypeError

If a is not a floating-point type.
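To build intuition for what an influence function measures, it can be approximated model-free by \(\epsilon\)-contaminating the sample statistic: \(\mathrm{IF}(\xi) \approx \big(T((1-\epsilon)F_n + \epsilon\,\delta_\xi) - T(F_n)\big)/\epsilon\). For the sample mean (the untrimmed L-location) this gives exactly \(\xi - \bar{x}\), which is unbounded: a single outlier can move the estimate arbitrarily far. A sketch of the idea (eif_mean is a hypothetical helper, not Lmo's implementation):

```python
import numpy as np

def eif_mean(x, xi, eps=1e-6):
    """Finite-difference influence of a point xi on the sample mean."""
    x = np.asarray(x, dtype=float)
    t0 = x.mean()
    t1 = (1 - eps) * x.mean() + eps * xi  # mean of the eps-contaminated mixture
    return (t1 - t0) / eps

x = [1.0, 2.0, 3.0, 4.0]
print(eif_mean(x, 100.0))  # ~97.5, i.e. 100.0 - mean(x)
```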

lmo.l_ratio_influence(a, r, s=2, /, trim=0, *, sort=True, tol=1e-08)

Calculate the Empirical Influence Function (EIF) for a sample L-moment ratio estimate.

Notes

This function is not vectorized, and can only be used for a single L-moment order r. However, the returned (empirical influence) function is vectorized.

Parameters:

Name Type Description Default
a AnyVectorFloat

1-D array-like containing observed samples.

required
r AnyOrder

L-moment ratio order. Must be a non-negative integer.

required
s AnyOrder

Denominator L-moment order, defaults to 2.

2
trim AnyTrim

Left- and right- trim. Can be scalar or 2-tuple of non-negative int or float.

0

Other Parameters:

Name Type Description
sort True | False | 'quick' | 'heap' | 'stable'

Sorting algorithm, see numpy.sort. Set to False if the array is already sorted.

tol float

Zero-roundoff absolute threshold.

Returns:

Name Type Description
influence_function Callable[[_T_x], _T_x]

The (vectorized) empirical influence function.

L-moment sample weights

lmo.l_weights(r_max, n, /, trim=0, *, dtype=np.float64, cache=None)

Projection matrix of the first \(r\) (T)L-moments for \(n\) samples.

For integer trim, the matrix is a linear combination of the Probability Weighted Moment (PWM) weights (the sample estimator of \(\beta_r\)) and the shifted Legendre polynomials.

If the trimmings are nonzero integers, a linearized (and corrected) adaptation of the recurrence relations from Hosking (2007) is applied as well.

\[ (2k + s + t - 1) \lambda^{(s, t)}_k = (k + s + t) \lambda^{(s - 1, t)}_k + \frac{1}{k} (k + 1) (k + t) \lambda^{(s - 1, t)}_{k+1} \]

for \(s > 0\), and

\[ (2k + s + t - 1) \lambda^{(s, t)}_k = (k + s + t) \lambda^{(s, t - 1)}_k - \frac{1}{k} (k + 1) (k + s) \lambda^{(s, t - 1)}_{k+1} \]

for \(t > 0\).

If the trim values are floats instead, the weights are calculated directly from the (generalized) order statistics. At the time of writing (07-2023), these “generalized trimmed L-moments” have not been discussed in the literature, nor implemented in any of the R packages. It’s probably a good idea to publish this…

TLDR

This matrix (linearly) transforms \(x_{i:n}\) (i.e. the sorted observation vector(s) of size \(n\)), into (an unbiased estimate of) the generalized trimmed L-moments, with orders \(\le r\).

Parameters:

Name Type Description Default
r_max _OrderT

The number of L-moment orders.

required
n _SizeT

The number of samples.

required
trim AnyTrim

A scalar or 2-tuple with the trim orders. Defaults to 0.

0
dtype _DType[_SCT_f]

The datatype of the returned weight matrix.

float64
cache bool | None

Whether to cache the weights. By default, it’s enabled iff the trim values are integers and r_max + sum(trim) < 24.

None

Returns:

Name Type Description
P_r Array[tuple[_OrderT, _SizeT], _SCT_f]

2-D array of shape (r_max, n), readonly if cache=True

Examples:

>>> import lmo
>>> lmo.l_weights(3, 4)
array([[ 0.25      ,  0.25      ,  0.25      ,  0.25      ],
       [-0.25      , -0.08333333,  0.08333333,  0.25      ],
       [ 0.25      , -0.25      , -0.25      ,  0.25      ]])
>>> _ @ [-1, 0, 1 / 2, 3 / 2]
array([0.25      , 0.66666667, 0.        ])
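For the untrimmed case, this projection matrix can be reconstructed from the shifted Legendre coefficients \(p^{*}_{r,k} = (-1)^{r-k} \binom{r}{k} \binom{r+k}{k}\) and the PWM sample weights \(\binom{i-1}{k} / \big(n \binom{n-1}{k}\big)\). A NumPy-only sketch (assuming trim=0; not Lmo's actual implementation) that reproduces the matrix above:

```python
import numpy as np
from math import comb

def l_weights_untrimmed(r_max, n):
    """Projection matrix mapping sorted samples to L-moments 1..r_max (trim=0)."""
    # PWM sample weights: B[k, i] is the weight of the (i+1)-th order statistic in b_k
    B = np.array([[comb(i, k) / (n * comb(n - 1, k)) for i in range(n)]
                  for k in range(r_max)])
    # Shifted Legendre coefficients: lambda_{r+1} = sum_k P[r, k] * b_k
    P = np.array([[(-1) ** (r - k) * comb(r, k) * comb(r + k, k) if k <= r else 0
                   for k in range(r_max)] for r in range(r_max)])
    return P @ B

W = l_weights_untrimmed(3, 4)
print(W)                        # same as lmo.l_weights(3, 4) above
print(W @ [-1, 0, 1 / 2, 3 / 2])  # same as the doctest above
```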
References

References


  1. Jonathan R. M. Hosking. L-moments: analysis and estimation of distributions using linear combinations of order statistics. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 52(1):105–124, 1990.

  2. Elsayed A. H. Elamir and Allan H. Seheult. Trimmed L-moments. Computational Statistics & Data Analysis, 43(3):299–314, 2003.

  3. J. R. M. Hosking. Some theory and practical uses of trimmed L-moments. Journal of Statistical Planning and Inference, 137(9):3024–3039, 2007.

  4. Qi J. Wang. LH moments for statistical analysis of extreme events. Water Resources Research, 33(12):2841–2848, 1997.

  5. Mehmetcik Bayazit and Bihrat Önöz. LL-moments for estimating low flow quantiles. Hydrological Sciences Journal, 47(5):707–720, 2002.