
The Marshall–Olkin distribution has the exogenous shock model as a natural stochastic representation. In this model, independent exponential random times can cause the failure of multiple components at once.
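
Throughout, \(\lambda_{I} \geq 0\), \(\emptyset \neq I \subseteq {\{ 1, \ldots, d \}}\), denote the shock intensities of a \(d\)-variate Marshall–Olkin distribution. In the exogenous shock model, see, e.g., Mai and Scherer (2017), the components can be written as \(\tau_i = \min{\{ E_I : i \in I \}}\) with independent shock arrival times \(E_I \sim \mathrm{Exp}(\lambda_I)\), using the convention that a shock with intensity zero never arrives. The corresponding survival function, which is used for the verifications below, is

\[ \mathbb{P}{\left( \tau_1 > t_1, \ldots, \tau_d > t_d \right)} = \exp{\left( - \sum_{\emptyset \neq I \subseteq \{ 1, \ldots, d \}}{ \lambda_{I} \max_{i \in I}{ t_i } } \right)} , \quad t_1, \ldots, t_d \geq 0 . \]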

This article describes several ways to combine Marshall–Olkin distributed random vectors into another Marshall–Olkin distributed random vector. The techniques described are useful for building complex, hierarchical distributions from simpler Marshall–Olkin distributions.

All results presented in this article can be verified with simple calculations.
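
As a complement to the analytic calculations, the exogenous shock representation recalled above can also be simulated directly. The following is a minimal Python sketch, assuming NumPy; the function name `sample_mo` and the subset-keyed `rates` dictionary (with zero-based component indices) are illustrative choices only.

```python
import numpy as np
from itertools import combinations


def sample_mo(d, rates, size=1, rng=None):
    """Sample a d-variate Marshall-Olkin distributed vector via the
    exogenous shock model: tau_i = min{E_I : i in I}, where the E_I are
    independent and E_I ~ Exp(rates[I]).

    `rates` maps non-empty subsets of {0, ..., d-1} (as frozensets) to
    their shock intensities lambda_I; subsets that are missing or have
    intensity zero correspond to shocks that never arrive.
    """
    rng = np.random.default_rng(rng)
    tau = np.full((size, d), np.inf)
    for r in range(1, d + 1):
        for I in combinations(range(d), r):
            lam = rates.get(frozenset(I), 0.0)
            if lam <= 0.0:
                continue  # intensity zero: arrival time is infinity
            arrival = rng.exponential(scale=1.0 / lam, size=size)
            for i in I:
                tau[:, i] = np.minimum(tau[:, i], arrival)
    return tau


# Example: bivariate Marshall-Olkin with individual and joint shocks.
rates = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset({0, 1}): 0.5}
print(sample_mo(2, rates, size=5, rng=1))
```

Component-wise minima, concatenations, and scalings of such samples can then be compared empirically with the parameter formulas derived below.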

Independent exponentials

The simplest example of such a combination is a vector of independent, exponentially distributed random variables. This corresponds to setting the arrival times of all shocks associated with sets of cardinality larger than one to infinity, i.e., to giving these shocks intensity zero in the exogenous shock model.
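
In terms of the parameters, this means \(\lambda_{\{ i \}} > 0\) for the individual shocks and \(\lambda_{I} = 0\) whenever \({\lvert I \rvert} > 1\), so that the survival function factorises into its exponential margins:

\[ \mathbb{P}{\left( \tau_1 > t_1, \ldots, \tau_d > t_d \right)} = \exp{\left( - \sum_{i=1}^{d}{ \lambda_{\{ i \}} t_i } \right)} = \prod_{i=1}^{d}{ e^{- \lambda_{\{ i \}} t_i} } . \]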

Independent concatenations

Another very simple example is the concatenation of an arbitrary number of independent Marshall–Olkin distributed random vectors. For example, let \(\boldsymbol{\tau}^{(1)}, \ldots, \boldsymbol{\tau}^{(k)}\) be independent Marshall–Olkin distributed random vectors of dimensions \(d_{1}, \ldots, d_{k}\) with parameters \(\lambda_{I}^{(j)}\), \(\emptyset \neq I \subseteq {\{ 1, \ldots, d_{j} \}}\), \(1 \leq j \leq k\), and define

\[ \boldsymbol{\tau} = \begin{pmatrix} \tau_{1}^{(1)} & \cdots & \tau_{d_1}^{(1)} & \cdots & \tau_{1}^{(k)} & \cdots & \tau_{d_k}^{(k)} \end{pmatrix} . \]

Then \(\boldsymbol{\tau}\) has a \((d_{1} + \cdots + d_{k})\)-variate Marshall–Olkin distribution with parameters

\[ \lambda_{I} = \begin{cases} \lambda_{I - \sum_{l=1}^{j-1}{ d_l }}^{(j)} & \emptyset \neq I \subseteq {\{ \sum_{l=1}^{j-1}{ d_l } + 1 , \ldots , \sum_{l=1}^{j}{ d_l } \}} \ \text{for some} \ j \in {\{ 1, \ldots, k \}} , \\ 0 & \text{else} , \end{cases} \]

where \(I - m = \{ i_{1} - m , \ldots , i_{{\lvert I \rvert}} - m \}\) for \(I = \{ i_{1} , \ldots , i_{{\lvert I \rvert}} \}\) and \(m \in \mathbb{N}_{0}\).
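
This can be verified with the survival function: by independence of the blocks,

\[ \mathbb{P}{\left( \tau_1 > t_1, \ldots, \tau_{d_1 + \cdots + d_k} > t_{d_1 + \cdots + d_k} \right)} = \prod_{j=1}^{k}{ \exp{\left( - \sum_{\emptyset \neq I \subseteq \{ 1, \ldots, d_j \}}{ \lambda_{I}^{(j)} \max_{i \in I}{ t_{i + \sum_{l=1}^{j-1}{ d_l }} } } \right)} } , \]

and collecting all exponents into a single sum gives exactly the Marshall–Olkin survival function with the block-diagonal parameters above.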

Note that a vector of independent exponential random variables can be represented this way if \(\tau^{(1)}, \ldots, \tau^{(k)}\) are all univariate Marshall–Olkin distributed, i.e., exponentially distributed, random variables (\(d_1 = \cdots = d_k = 1\)).

Comonotone concatenations

A very similar example is the concatenation of \(k\) identical, and hence perfectly dependent, copies of a Marshall–Olkin distributed random vector. For this, consider a \(d\)-variate Marshall–Olkin distributed random vector \(\tilde{\boldsymbol{\tau}}\) with parameters \(\tilde{\lambda}_{I}\), \(\emptyset \neq I \subseteq {\{1, \ldots, d\}}\), and define the \(d k\)-variate random vector \(\boldsymbol{\tau}\) by

\[ \boldsymbol{\tau} := \begin{pmatrix} \tilde{\tau}_1 & \cdots & \tilde{\tau}_d & \cdots & \tilde{\tau}_1 & \cdots & \tilde{\tau}_d \end{pmatrix} . \]

The corresponding parameters are defined by

\[ \lambda_I = \begin{cases} \tilde{\lambda}_{J} & I = \bigcup_{l=0}^{k-1}{ (J + l d) } \ \text{for some} \ \emptyset \neq J \subseteq {\{ 1, \ldots, d \}} , \\ 0 & \text{else} , \end{cases} \quad \emptyset \neq I \subseteq {\{ 1, \ldots, d k \}} , \]

where \(J + m = \{ j_{1} + m , \ldots , j_{{\lvert J \rvert}} + m \}\) for \(J = \{ j_{1} , \ldots , j_{{\lvert J \rvert}} \}\).
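
This, too, follows from a short calculation: each shock with intensity \(\tilde{\lambda}_{J}\) hits all \(k\) copies of the components in \(J\) simultaneously, so

\[ \mathbb{P}{\left( \tau_1 > t_1, \ldots, \tau_{d k} > t_{d k} \right)} = \mathbb{P}{\left( \tilde{\tau}_i > \max_{l = 0, \ldots, k-1}{ t_{i + l d} } , \ 1 \leq i \leq d \right)} = \exp{\left( - \sum_{\emptyset \neq J \subseteq \{ 1, \ldots, d \}}{ \tilde{\lambda}_{J} \max_{m \in \bigcup_{l=0}^{k-1}{ (J + l d) }}{ t_m } } \right)} . \]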

Scalar multiplication

As with the univariate exponential distribution, a Marshall–Olkin distributed random vector scaled by a positive number is again Marshall–Olkin distributed. For this, let \(\tilde{\boldsymbol{\tau}}\) be a \(d\)-variate Marshall–Olkin distributed random vector with parameters \(\tilde{\lambda}_{I}\), \(\emptyset \neq I \subseteq {\{ 1, \ldots, d \}}\), let \(a > 0\), and define \(\boldsymbol{\tau}\) by

\[ \boldsymbol{\tau} = a \cdot \tilde{\boldsymbol{\tau}} . \]

The corresponding parameters are defined by

\[ \lambda_I = \frac{\tilde{\lambda}_{I}}{a} , \quad \emptyset \neq I \subseteq {\{ 1 , \ldots , d \}} . \]
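
This is verified exactly as in the univariate case:

\[ \mathbb{P}{\left( a \tilde{\tau}_1 > t_1, \ldots, a \tilde{\tau}_d > t_d \right)} = \mathbb{P}{\left( \tilde{\tau}_1 > \frac{t_1}{a} , \ldots , \tilde{\tau}_d > \frac{t_d}{a} \right)} = \exp{\left( - \sum_{\emptyset \neq I \subseteq \{ 1, \ldots, d \}}{ \frac{\tilde{\lambda}_{I}}{a} \max_{i \in I}{ t_i } } \right)} . \]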

Component-wise minima

The component-wise minimum of two independent Marshall–Olkin distributed random vectors of the same dimension is again Marshall–Olkin distributed with added rates: let \(\tilde{\boldsymbol{\tau}}^{(1)}\) and \(\tilde{\boldsymbol{\tau}}^{(2)}\) be independent \(d\)-variate Marshall–Olkin distributed random vectors with parameters \(\tilde{\lambda}_{I}^{(j)}\), \(\emptyset \neq I \subseteq {\{ 1, \ldots, d \}}\), \(j \in {\{ 1 , 2 \}}\), and define \(\boldsymbol{\tau}\) by

\[ \boldsymbol{\tau} = \begin{pmatrix} \min{\{ \tilde{\tau}_1^{(1)} , \tilde{\tau}_1^{(2)} \}} & \cdots & \min{\{ \tilde{\tau}_d^{(1)} , \tilde{\tau}_d^{(2)} \}} \end{pmatrix} . \]

The corresponding parameters are defined by

\[ \lambda_{I} = \tilde{\lambda}_I^{(1)} + \tilde{\lambda}_I^{(2)} , \quad \emptyset \neq I \subseteq {\{ 1, \ldots, d\}} . \]
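
This follows directly from the independence of the two vectors:

\[ \mathbb{P}{\left( \tau_1 > t_1, \ldots, \tau_d > t_d \right)} = \prod_{j=1}^{2}{ \exp{\left( - \sum_{\emptyset \neq I \subseteq \{ 1, \ldots, d \}}{ \tilde{\lambda}_{I}^{(j)} \max_{i \in I}{ t_i } } \right)} } = \exp{\left( - \sum_{\emptyset \neq I \subseteq \{ 1, \ldots, d \}}{ \left( \tilde{\lambda}_{I}^{(1)} + \tilde{\lambda}_{I}^{(2)} \right) \max_{i \in I}{ t_i } } \right)} . \]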

This can be generalised by allowing components of \(\tilde{\boldsymbol{\tau}}^{(1)}\) and \(\tilde{\boldsymbol{\tau}}^{(2)}\) to be infinite as long as \(\boldsymbol{\tau}\) has finite margins. Specifically, \(\tilde{\lambda}_I^{(j)} = 0\) is allowed as long as the summed intensities satisfy \(\sum_{I \ni i}{ \left( \tilde{\lambda}_I^{(1)} + \tilde{\lambda}_I^{(2)} \right) } > 0\) for every component \(i\).

References

Mai, Jan-Frederik, and Matthias Scherer. 2017. Simulating Copulas: Stochastic Models, Sampling Algorithms and Applications. 2nd ed. Series in Quantitative Finance. World Scientific. https://doi.org/10.1142/10265.