minimc 0.5.1
Estimators

Phase Space

All steady-state particle transport occurs inside a multidimensional phase space which is characterized by a position \( \boldsymbol{x} \), direction-of-flight \( \hat{\boldsymbol{\Omega}} \), energy \( E \), and reaction label \( r \). Together, these form a random vector \( X \) for which one realization looks like

\[ x_{i} = \begin{bmatrix} \boldsymbol{x}_{i} \\ \hat{\boldsymbol{\Omega}}_{i} \\ E_{i} \\ r_{i} \end{bmatrix} \,\mathrm{.} \]

A realization of \( X_{i} \) (here denoted \( x_{i} \)) and a realization of position (here denoted \( \boldsymbol{x}_{i} \)) must not be confused with each other.

Particle

Fundamentally, a Monte Carlo radiation transport code is a simulation of a discrete random process \( \Omega = \left( X_{0}, X_{1}, X_{2}, \ldots \right) \) for which we are interested in estimating the expected value of random variables \( S: \Omega \rightarrow \mathbb{R} \). At each step of a history \( \omega \in \Omega \), a Particle takes on some definite state in phase space \( X_{i} = x_{i} \). For this reason, the Particle class is heavily encapsulated and internally performs most of the operations required to update its state. Other classes that accept a Particle object as a function parameter should do so using a const qualifier. For instance, Material::GetMicroscopicTotal accepts a const Particle reference as a parameter to look up relevant cross sections. Two notable exceptions, the Interaction class and the TransportMethod class, take non-const Particle arguments but still update the Particle state through its public methods.
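
The following is a minimal sketch of this const-qualification convention. All names, members, and signatures here are illustrative assumptions, not minimc's actual interfaces:

```cpp
// Sketch only: a heavily encapsulated particle whose state changes go
// through public methods.
class Particle {
public:
  double GetEnergy() const noexcept { return energy; }
  void SetEnergy(double e) noexcept { energy = e; }  // public state update
private:
  double energy = 1.0e6;  // eV, arbitrary initial value for illustration
};

// Read-only consumers take a const reference...
double GetMicroscopicTotal(const Particle& p) {
  return 1.0 + 1.0 / p.GetEnergy();  // placeholder cross section lookup
}

// ...while an interaction-like routine takes a non-const reference but
// still updates the Particle through its public methods.
void Scatter(Particle& p) { p.SetEnergy(0.5 * p.GetEnergy()); }
```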

Estimation

An Estimator \( S: \Omega \rightarrow \mathbb{R} \) is a random variable which maps from a history \( \omega \in \Omega \) to a score \( s \in \mathbb{R} \). The expected value of an estimator is given by

\begin{equation} \label{eq:estimator-expected-value} \mathbb{E} \left[ S \right] = \int_{\Omega} S\left( \omega \right) p_{\Omega}(\omega) \mathrm{d}\omega \,\mathrm{.} \end{equation}

When only \( N \) sampled values from \( \Omega \) are available, an estimate of the expected value is given by

\[ \mathbb{E} \left[ S \right] \approx \sum_{n=1}^{N} S\left( \omega_{n} \right) \frac{1}{N} \,\mathrm{.} \]
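
As a minimal sketch (hypothetical names, not minimc's API), this sample-mean estimate is a single accumulation loop over the sampled scores:

```cpp
#include <vector>

// Estimate E[S] as the mean of N sampled scores S(omega_n).
double EstimateExpectedValue(const std::vector<double>& scores) {
  double sum = 0.0;
  for (const double s : scores) sum += s;          // accumulate S(omega_n)
  return sum / static_cast<double>(scores.size()); // divide by N
}
```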

Scoring Functions

In general, an estimator is a function of all states that a Particle takes on during transport, \( S(\omega) = S(x_{0}, x_{1}, \ldots) \). However, a special class of estimators can be expressed as

\[ S(\omega) = \sum_{i} f(x_{i}) \]

where the scoring function \( f: X \rightarrow \mathbb{R} \) is only a function of the \( i \)-th state. One example of a scoring function is

\[ f(x_{i}) = \delta_{r_{i}, \text{scatter}} \]

which scores \( 1 \) if the particle underwent a scatter in step \( i \), zero otherwise. Using this scoring function results in an estimator for the total scatter rate. Another scoring function is

\[ f(x_{i}) = \frac {\delta_{r_{i}, \text{capture}} + \delta_{r_{i}, \text{scatter}}} {\Sigma_{t}(x_{i})} \]

which scores \( \Sigma^{-1}_{t}(x_{i}) \) whenever a Particle collides. Using this scoring function results in an estimator for the scalar flux.
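
Both scoring functions translate directly into code. The sketch below assumes a hypothetical reduced view of one phase-space state; minimc's actual types will differ:

```cpp
#include <string>

// One phase-space state x_i, reduced to the fields used for scoring.
struct State {
  std::string reaction;  // reaction label r_i: "scatter", "capture", ...
  double sigma_total;    // macroscopic total cross section at x_i
};

// Scatter-rate estimator: scores 1 if step i was a scatter, 0 otherwise.
double ScatterScore(const State& x) {
  return x.reaction == "scatter" ? 1.0 : 0.0;
}

// Collision estimator of scalar flux: scores 1/Sigma_t at each collision.
double FluxCollisionScore(const State& x) {
  const bool collided = x.reaction == "scatter" || x.reaction == "capture";
  return collided ? 1.0 / x.sigma_total : 0.0;
}
```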

Differential Operator Sampling

When a perturbation to a Monte Carlo problem is introduced, the expected value of an Estimator \( \mathbb{E} [S] \) will generally change. Examples of perturbations include shifts in a geometric configuration and changes in a Nuclide cross section.

If the perturbed system parameter is denoted \( \lambda \), the change in the expected value can be expressed as

\[ \frac{\mathrm{d} \mathbb{E} \left[ S \right]}{\mathrm{d}\lambda} = \int_{\Omega} \left[ \frac{1}{S\left( \omega \right)} \frac{\mathrm{d}S\left( \omega \right)}{\mathrm{d}\lambda} + \frac{1}{p_{\Omega}(\omega)} \frac{\mathrm{d}p_{\Omega}(\omega)}{\mathrm{d}\lambda} \right] S\left( \omega \right) p_{\Omega}(\omega) \mathrm{d}\omega \,\mathrm{.} \]

This form is identical to \( \eqref{eq:estimator-expected-value} \) if one makes the substitution

\[ S\left( \omega \right) \rightarrow \left[ \frac{1}{S\left( \omega \right)} \frac{\mathrm{d}S\left( \omega \right)}{\mathrm{d}\lambda} + \frac{1}{p_{\Omega}(\omega)} \frac{\mathrm{d}p_{\Omega}(\omega)}{\mathrm{d}\lambda} \right] S\left( \omega \right) \,\mathrm{.} \]

The first term in the square brackets is called the direct effect and the second term is called the indirect effect.

Direct Effect

The direct effect

\[ \frac{1}{S\left( \omega \right)} \frac{\mathrm{d}S\left( \omega \right)}{\mathrm{d}\lambda} \]

reflects the change in how \( S \) maps from the history space \( \Omega \) to the score space \( \mathbb{R} \). It must be provided as a function specific to the particular type of perturbation.

Indirect Effect

The indirect effect

\[ \frac{1}{p_{\Omega}(\omega)} \frac{\mathrm{d}p_{\Omega}(\omega)}{\mathrm{d}\lambda} \]

reflects the change in the probability of sampling a history \( \omega \). If \( \omega = \left( x_{0}, x_{1}, x_{2}, \ldots \right) \) is Markovian, the probability of sampling a history can be expressed

\[ p_{\Omega}(\omega) = p_{X_{0}} \left( x_{0} \right) p_{X_{1} \mid X_{0}} \left( x_{1} \mid x_{0} \right) p_{X_{2} \mid X_{1}} \left( x_{2} \mid x_{1} \right) \ldots \]

so repeated application of the product rule gives for the indirect effect

\[ \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{\Omega}(\omega)} {p_{\Omega}(\omega)} = \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{X_{0}}(x_{0})} {p_{X_{0}}(x_{0})} + \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{X_{1} \mid X_{0}}(x_{1} \mid x_{0})} {p_{X_{1} \mid X_{0}}(x_{1} \mid x_{0})} + \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{X_{2} \mid X_{1}}(x_{2} \mid x_{1})} {p_{X_{2} \mid X_{1}}(x_{2} \mid x_{1})} + \ldots \]

The transition probabilities are conventionally decomposed

\begin{align*} \label{eq:transmission-emergence-kernel} p_{X_{i+1} \mid X_{i}} \left( x_{i+1} \middle| x_{i} \right) &= p \left( \boldsymbol{x}_{i+1}, \hat{\boldsymbol{\Omega}}_{i+1}, E_{i+1}, r_{i+1} \middle| \boldsymbol{x}_{i}, \hat{\boldsymbol{\Omega}}_{i}, E_{i}, r_{i} \right) \\ &= p \left( \boldsymbol{x}_{i+1} \middle| \boldsymbol{x}_{i}, \hat{\boldsymbol{\Omega}}_{i}, E_{i}, r_{i} \right) \\ &\quad\times p \left( r_{i+1} \middle| \boldsymbol{x}_{i+1}, \hat{\boldsymbol{\Omega}}_{i}, E_{i}, r_{i} \right) \\ &\quad\times p \left( E_{i+1} \middle| \boldsymbol{x}_{i+1}, \hat{\boldsymbol{\Omega}}_{i}, E_{i}, r_{i+1} \right) \\ &\quad\times p \left( \hat{\boldsymbol{\Omega}}_{i+1} \middle| \boldsymbol{x}_{i+1}, \hat{\boldsymbol{\Omega}}_{i}, E_{i+1}, r_{i+1} \right) \\ &= \mathcal{T}_{i} \times \mathcal{R}_{i} \times \mathcal{E}_{i} \times \mathcal{A}_{i} \end{align*}

where the four terms on the righthand side are, in order,

  1. \( \mathcal{T}_{i} \): The probability of streaming from \( \boldsymbol{x}_{i} \) then colliding at \( \boldsymbol{x}_{i+1} \),
  2. \( \mathcal{R}_{i} \): The probability that a particle colliding at \( \boldsymbol{x}_{i+1} \) undergoes reaction \( r_{i+1} \),
  3. \( \mathcal{E}_{i} \): The probability that a particle undergoing reaction \( r_{i+1} \) emerges with energy \( E_{i+1} \), and
  4. \( \mathcal{A}_{i} \): The probability that a particle emerging with energy \( E_{i+1} \) emerges with direction \( \hat{\boldsymbol{\Omega}}_{i+1} \).

Although the conditional probabilities could be selected in any order, the particular choice above corresponds to the order that each portion of phase space is actually sampled. Again using repeated application of the product rule, the indirect effect at step \( i \) is

\begin{equation} \label{eq:step-i-indirect-effect} \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{X_{i+1} \mid X_{i}}(x_{i+1} \mid x_{i})} {p_{X_{i+1} \mid X_{i}}(x_{i+1} \mid x_{i})} = \frac { \frac{\mathrm{d}}{\mathrm{d}\lambda} \mathcal{T}_{i} } { \mathcal{T}_{i} } + \frac { \frac{\mathrm{d}}{\mathrm{d}\lambda} \mathcal{R}_{i} } { \mathcal{R}_{i} } + \frac { \frac{\mathrm{d}}{\mathrm{d}\lambda} \mathcal{E}_{i} } { \mathcal{E}_{i} } + \frac { \frac{\mathrm{d}}{\mathrm{d}\lambda} \mathcal{A}_{i} } { \mathcal{A}_{i} } \,\mathrm{.} \end{equation}
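
In code, a history can carry a running sum of these four log-derivative terms. A hedged sketch (the term values would be supplied by whichever routine performed each sampling; the struct and its interface are assumptions):

```cpp
// Accumulates d/d-lambda of log p_Omega(omega) over the steps of a history.
struct IndirectEffect {
  double sum = 0.0;

  // Each argument is one log-derivative term from the step's kernel:
  // (dT/dl)/T, (dR/dl)/R, (dE/dl)/E, and (dA/dl)/A, respectively.
  void AddStep(double streaming, double reaction,
               double energy, double angle) {
    sum += streaming + reaction + energy + angle;
  }
};
```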

Thermal Neutron Scattering Law

When using partitioned thermal neutron scattering law data compressed using proper orthogonal decomposition, the parameters available for perturbation are \( u_{mr} \), \( \sigma_{r} \), and \( v_{nr} \). The following discussion considers perturbations of parameters that are used to sample \( \alpha \).

Given a sampled value of \( \beta \), a target temperature \( T \), and uniformly sampled \( \xi \in \left[ 0, 1 \right) \), the sampling scheme (based on ENDF Law 4) first identifies indices \( k^{\prime} \), \( l^{\prime} \), and \( m^{\prime} \) such that

\[ \begin{alignat*}{3} & \beta_{k^{\prime}-1} && \leq \lvert \beta \rvert && < \beta_{k^{\prime}} \,\mathrm{,}\quad && k^{\prime} \in \left\{ 1, \ldots, k \right\} \\ & T_{l^{\prime} - 1} && \leq T && < T_{l^{\prime}} \,\mathrm{,}\quad && l^{\prime} \in \left\{ 1, \ldots, l \right\} \\ & \hat{H}_{m^{\prime}-1} && \leq \xi && < \hat{H}_{m^{\prime}} \,\mathrm{,}\quad && m^{\prime} \in \left\{ 1, \ldots, m \right\} \end{alignat*} \]

are satisfied. Then, \( \tilde{k} \) is randomly assigned to either \( k^{\prime} - 1 \) or \( k^{\prime} \). Defining the flattened index

\[ n^{\prime} \equiv \left( \tilde{k} - 1 \right) l + l^{\prime} \]

and interpolation fractions

\[ f_{T} = \frac {T - T_{l^{\prime} - 1}} {T_{l^{\prime}} - T_{l^{\prime} - 1}} \quad\mathrm{and}\quad f_{\hat{H}} = \frac {\xi - \hat{H}_{m^{\prime} - 1}} {\hat{H}_{m^{\prime}} - \hat{H}_{m^{\prime} - 1}} \,\mathrm{,} \]

the sampled value of \( \alpha \) is obtained through bilinear interpolation in temperature and CDF:

\[ \alpha_{\tilde{k}} = \begin{bmatrix} 1-f_{\hat{H}} & f_{\hat{H}} \end{bmatrix} \begin{bmatrix} \alpha_{m^{\prime} - 1, n^{\prime} - 1} & \alpha_{m^{\prime} - 1, n^{\prime}} \\ \alpha_{m^{\prime}, n^{\prime} - 1} & \alpha_{m^{\prime}, n^{\prime}} \end{bmatrix} \begin{bmatrix} 1-f_{T} \\ f_{T} \end{bmatrix} \equiv \vec{f}_{\hat{H}}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T} \]

where \( A_{m^{\prime}n^{\prime}} \) is a \( 2 \times 2 \) matrix dependent on \( m^{\prime} \) and \( n^{\prime} \) and should not be mistaken for a single element. Linear interpolation in CDF implies that the PDF is a histogram

\[ p\left( \alpha_{\tilde{k}} \right) = \frac {\Delta \hat{H}} {\Delta \alpha} = \frac {\hat{H}_{m^{\prime}} - \hat{H}_{m^{\prime}-1}} { \vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T} } \]

where \( \vec{d}^{T} = \begin{bmatrix} -1 & +1 \end{bmatrix} \) is a row vector which takes the difference between the second and first elements of \( A_{m^{\prime}n^{\prime}} \vec{f}_{T} \). Setting \( \lambda \) to any of \( u_{m^{\prime}r^{\prime}} \), \( \sigma_{r^{\prime}} \), or \( v_{n^{\prime}r^{\prime}} \) will only affect the terms inside \( A_{m^{\prime}n^{\prime}} \), so the derivative becomes

\begin{equation} \label{eq:derivative-alpha-probability} \frac {\mathrm{d}p\left( \alpha_{\tilde{k}} \right)} {\mathrm{d}\lambda} = -p\left( \alpha_{\tilde{k}} \right) \frac { \vec{d}^{T} \left( \frac {\mathrm{d}} {\mathrm{d}\lambda} A_{m^{\prime}n^{\prime}} \right) \vec{f}_{T} } { \vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T} } \,\textrm{.} \end{equation}
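
The sampling scheme up to this point can be sketched as follows. The storage layout is an assumption: H_hat holds the ascending CDF grid \( \hat{H}_{0} \ldots \hat{H}_{m} \), T_grid holds \( T_{0} \ldots T_{l} \) (so \( l \) is T_grid.size() - 1), alphas is a row-major table indexed by (CDF index, flattened index), and the caller has already assigned \( \tilde{k} \):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct AlphaTable {
  std::vector<double> H_hat;   // CDF grid
  std::vector<double> T_grid;  // temperature grid
  std::vector<double> alphas;  // row-major (CDF x flattened) table
  std::size_t cols;            // number of flattened (beta, T) columns

  double alpha(std::size_t m, std::size_t n) const {
    return alphas[m * cols + n];
  }
};

// Sample alpha_{k_tilde} by bilinear interpolation in temperature and CDF.
double SampleAlpha(const AlphaTable& tbl, std::size_t k_tilde, double T,
                   double xi) {
  // m' and l' such that H_{m'-1} <= xi < H_{m'} and T_{l'-1} <= T < T_{l'}.
  const std::size_t mp =
      std::upper_bound(tbl.H_hat.begin(), tbl.H_hat.end(), xi) -
      tbl.H_hat.begin();
  const std::size_t lp =
      std::upper_bound(tbl.T_grid.begin(), tbl.T_grid.end(), T) -
      tbl.T_grid.begin();

  // Flattened index n' = (k_tilde - 1) * l + l'.
  const std::size_t l = tbl.T_grid.size() - 1;
  const std::size_t np = (k_tilde - 1) * l + lp;

  // Interpolation fractions f_H and f_T.
  const double fH =
      (xi - tbl.H_hat[mp - 1]) / (tbl.H_hat[mp] - tbl.H_hat[mp - 1]);
  const double fT =
      (T - tbl.T_grid[lp - 1]) / (tbl.T_grid[lp] - tbl.T_grid[lp - 1]);

  // f_H^T * A_{m'n'} * f_T with A = [[a(m'-1,n'-1), a(m'-1,n')],
  //                                  [a(m',  n'-1), a(m',  n') ]].
  return (1.0 - fH) * ((1.0 - fT) * tbl.alpha(mp - 1, np - 1) +
                       fT * tbl.alpha(mp - 1, np)) +
         fH * ((1.0 - fT) * tbl.alpha(mp, np - 1) + fT * tbl.alpha(mp, np));
}
```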

The last steps in sampling direction cosine \( \mu \) are unit base interpolation followed by conversion from dimensionless momentum transfer \( \alpha \) to \( \mu \). Unit base interpolation is the transformation

\[ \alpha(\beta) = \alpha_{\min}(\beta) + \frac {\alpha_{\tilde{k}} - \alpha_{\tilde{k}, \min}} {\alpha_{\tilde{k}, \max} - \alpha_{\tilde{k}, \min}} \left( \alpha_{\max}(\beta) - \alpha_{\min}(\beta) \right) \]

mapping \( \alpha_{\tilde{k}} \in \left[ \alpha_{\tilde{k}, \min}, \alpha_{\tilde{k}, \max} \right] \) to \( \alpha \left( \beta \right) \in \left[ \alpha_{\min}(\beta), \alpha_{\max}(\beta) \right] \) so that the thresholds at the actual value of \( \beta \) are preserved. The transformation

\[ \mu = \frac {E + E^{\prime} - \alpha A k_{\mathrm{B}} T} {2 \sqrt{E E^{\prime}}} \]

maps from \( \alpha \left( \beta \right) \in \left[ \alpha_{\min}(\beta), \alpha_{\max}(\beta) \right] \) to \( \mu \in \left[ -1, +1 \right] \), the scattering cosine used in transport. Using appropriate Jacobians, the probability of sampling \( \mu \) is

\[ p \left( \mu \right) = p \left( \alpha_{\tilde{k}} \right) \left| \frac {\mathrm{d} \alpha_{\tilde{k}}} {\mathrm{d} \alpha \left( \beta \right)} \right| \left| \frac {\mathrm{d} \alpha \left( \beta \right)} {\mathrm{d} \mu} \right| = p \left( \alpha_{\tilde{k}} \right) \frac {\alpha_{\tilde{k}, \max} - \alpha_{\tilde{k}, \min}} {\alpha_{\max}(\beta) - \alpha_{\min}(\beta)} \frac {2 \sqrt{E E^{\prime}}} {A k_{\mathrm{B}} T} \,\mathrm{.} \]
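
A sketch of these two transformations, with argument names such as awr (the mass ratio \( A \)) and kT (\( k_{\mathrm{B}} T \) in the same units as the energies) chosen here for illustration:

```cpp
#include <cmath>

// Unit-base interpolation: map alpha from the tabulated bracket
// [alpha_k_min, alpha_k_max] onto the range [alpha_min, alpha_max]
// at the actual value of beta.
double UnitBase(double alpha_k, double alpha_k_min, double alpha_k_max,
                double alpha_min, double alpha_max) {
  const double fraction =
      (alpha_k - alpha_k_min) / (alpha_k_max - alpha_k_min);
  return alpha_min + fraction * (alpha_max - alpha_min);
}

// Convert dimensionless momentum transfer to scattering cosine:
// mu = (E + E' - alpha * A * kB * T) / (2 * sqrt(E * E')).
double AlphaToMu(double alpha, double E, double E_out, double awr,
                 double kT) {
  return (E + E_out - alpha * awr * kT) / (2.0 * std::sqrt(E * E_out));
}
```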

If the Jacobians are independent of the perturbed quantity \( \lambda \), we have

\[ \frac {\mathrm{d} p \left( \mu \right)} {\mathrm{d} \lambda} = \frac {\mathrm{d} p \left( \alpha_{\tilde{k}} \right)} {\mathrm{d} \lambda} \left| \frac {\mathrm{d} \alpha_{\tilde{k}}} {\mathrm{d} \alpha \left( \beta \right)} \right| \left| \frac {\mathrm{d} \alpha \left( \beta \right)} {\mathrm{d} \mu} \right| \]

so \( \eqref{eq:derivative-alpha-probability} \) can be expressed

\begin{equation} \label{eq:log-derivative-alpha-probability} \frac {1} {p\left( \mu \right)} \frac {\mathrm{d}p\left( \mu \right)} {\mathrm{d}\lambda} = \frac {1} {p\left( \alpha_{\tilde{k}} \right)} \frac {\mathrm{d}p\left( \alpha_{\tilde{k}} \right)} {\mathrm{d}\lambda} = - \frac { \vec{d}^{T} \left( \frac {\mathrm{d}} {\mathrm{d}\lambda} A_{m^{\prime}n^{\prime}} \right) \vec{f}_{T} } { \vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T} } \,\mathrm{.} \end{equation}

The following discussions on perturbations in \( u_{mr} \), \( \sigma_{r} \), or \( v_{nr} \) will refer to the matrix in the numerator of \( \eqref{eq:log-derivative-alpha-probability} \)

\begin{equation} \label{eq:derivative-alpha-matrix} \frac{\mathrm{d}}{\mathrm{d}\lambda} A_{m^{\prime}n^{\prime}} = \begin{bmatrix} \frac{\mathrm{d}}{\mathrm{d}\lambda} \alpha_{m^{\prime}-1,n^{\prime}-1} & \frac{\mathrm{d}}{\mathrm{d}\lambda} \alpha_{m^{\prime}-1,n^{\prime}} \\ \frac{\mathrm{d}}{\mathrm{d}\lambda} \alpha_{m^{\prime},n^{\prime}-1} & \frac{\mathrm{d}}{\mathrm{d}\lambda} \alpha_{m^{\prime},n^{\prime}} \end{bmatrix} \end{equation}

where each matrix element in \( \eqref{eq:derivative-alpha-matrix} \) is

\begin{equation} \label{eq:derivative-alpha} \frac {\mathrm{d}} {\mathrm{d}\lambda} \alpha_{m^{\prime}n^{\prime}} = \sum_{r^{\prime}=1}^{r} \frac {\mathrm{d}} {\mathrm{d}\lambda} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} \,\mathrm{.} \end{equation}
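
This element derivative differentiates the underlying rank-\( r \) reconstruction \( \alpha_{m^{\prime}n^{\prime}} = \sum_{r^{\prime}} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} \), which as a sketch (row layouts assumed) is simply

```cpp
#include <cstddef>
#include <vector>

// alpha_{m'n'} = sum over r' of u_{m'r'} * sigma_{r'} * v_{n'r'}, where
// u_row is the m'-th row of U and v_row is the n'-th row of V.
double ReconstructAlpha(const std::vector<double>& u_row,
                        const std::vector<double>& sigma,
                        const std::vector<double>& v_row) {
  double result = 0.0;
  for (std::size_t rp = 0; rp < sigma.size(); ++rp) {
    result += u_row[rp] * sigma[rp] * v_row[rp];
  }
  return result;
}
```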

Perturbing Σ

Setting \( \lambda = \sigma_{r} \), \( \eqref{eq:derivative-alpha} \) becomes

\[ \frac {\mathrm{d}} {\mathrm{d}\sigma_{r}} \alpha_{m^{\prime}n^{\prime}} = \sum_{r^{\prime}=1}^{r} \frac {\mathrm{d}} {\mathrm{d}\sigma_{r}} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} = \sum_{r^{\prime}=1}^{r} u_{m^{\prime}r^{\prime}} v_{n^{\prime}r^{\prime}} \delta_{r,r^{\prime}} = u_{m^{\prime}r} v_{n^{\prime}r} \,\textrm{.} \]

Defining

\[ \vec{u}_{m^{\prime}r} \equiv \begin{bmatrix} u_{m^{\prime}-1,r} \\ u_{m^{\prime}, r} \end{bmatrix} \quad\mathrm{and}\quad \vec{v}_{n^{\prime}r} \equiv \begin{bmatrix} v_{n^{\prime}-1,r} \\ v_{n^{\prime}, r} \end{bmatrix} \,\mathrm{,} \]

\( \eqref{eq:derivative-alpha-matrix} \) becomes

\[ \frac{\mathrm{d}}{\mathrm{d}\sigma_{r}} A_{m^{\prime}n^{\prime}} = \begin{bmatrix} u_{m^{\prime}-1,r}v_{n^{\prime}-1,r} & u_{m^{\prime}-1,r}v_{n^{\prime},r} \\ u_{m^{\prime},r}v_{n^{\prime}-1,r} & u_{m^{\prime},r}v_{n^{\prime},r} \end{bmatrix} = \vec{u}_{m^{\prime}r} \vec{v}_{n^{\prime}r}^{T} \]

so \( \eqref{eq:log-derivative-alpha-probability} \) becomes

\[ \frac {1} {p\left( \mu \right)} \frac {\mathrm{d}p\left( \mu \right)} {\mathrm{d}\sigma_{r}} = - \frac {\vec{d}^{T} \vec{u}_{m^{\prime}r} \vec{v}_{n^{\prime}r}^{T} \vec{f}_{T} } {\vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T} } \,\mathrm{.} \]
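
A sketch of this ratio using 2-element vectors. All helper names are assumptions; Vec2, Mat2, and the helpers are reused by the later sketches:

```cpp
#include <array>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;  // row-major 2x2

double Dot(const Vec2& a, const Vec2& b) { return a[0] * b[0] + a[1] * b[1]; }

// d^T x with d^T = [-1 +1]: second element minus first.
double DiffDot(const Vec2& x) { return x[1] - x[0]; }

Vec2 MatVec(const Mat2& A, const Vec2& x) {
  return {Dot(A[0], x), Dot(A[1], x)};
}

// (1/p(mu)) dp(mu)/d sigma_r = -(d^T u)(v^T f_T) / (d^T A f_T), using the
// rank-one form dA/d sigma_r = u v^T with u = (u_{m'-1,r}, u_{m',r}) and
// v = (v_{n'-1,r}, v_{n',r}); fT = (1 - f_T, f_T).
double LogDerivSigma(const Vec2& u, const Vec2& v, const Vec2& fT,
                     const Mat2& A) {
  return -(DiffDot(u) * Dot(v, fT)) / DiffDot(MatVec(A, fT));
}
```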

Perturbing U

Setting \( \lambda = u_{mr} \), \( \eqref{eq:derivative-alpha} \) becomes

\[ \frac {\mathrm{d}} {\mathrm{d}u_{mr}} \alpha_{m^{\prime}n^{\prime}} = \sum_{r^{\prime}=1}^{r} \frac {\mathrm{d}} {\mathrm{d}u_{mr}} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} = \sum_{r^{\prime}=1}^{r} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} \delta_{m,m^{\prime}} \delta_{r,r^{\prime}} = \sigma_{r} v_{n^{\prime}r} \delta_{m,m^{\prime}} \,\textrm{.} \]

The only nonzero cases of \( \eqref{eq:derivative-alpha-matrix} \) that get used are

\[ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}u_{mr}} A_{m,n^{\prime}} &= \begin{bmatrix} 0 & 0 \\ \sigma_{r} v_{n^{\prime}-1,r} & \sigma_{r} v_{n^{\prime},r} \end{bmatrix} \,\mathrm{,} \quad \hat{H}_{m-1} \leq \xi < \hat{H}_{m} \\ \frac{\mathrm{d}}{\mathrm{d}u_{mr}} A_{m+1,n^{\prime}} &= \begin{bmatrix} \sigma_{r} v_{n^{\prime}-1,r} & \sigma_{r} v_{n^{\prime},r} \\ 0 & 0 \end{bmatrix} \,\mathrm{,} \quad \hat{H}_{m} \leq \xi < \hat{H}_{m+1} \,\mathrm{.} \end{aligned} \]

Recalling that \( \vec{d}^{T} = \begin{bmatrix} -1 & +1 \end{bmatrix} \), and defining

\[ C \left( \xi \right) \equiv \begin{cases} +1 & \hat{H}_{m-1} \leq \xi < \hat{H}_{m} \\ -1 & \hat{H}_{m} \leq \xi < \hat{H}_{m+1} \\ 0 & \mathrm{otherwise,} \end{cases} \]

we have

\[ \vec{d}^{T} \left( \frac{\mathrm{d}}{\mathrm{d}u_{mr}} A_{m^{\prime}n^{\prime}} \right) = C \left( \xi \right) \sigma_{r} \vec{v}_{n^{\prime}r}^{T} \]

so \( \eqref{eq:log-derivative-alpha-probability} \) becomes

\[ \frac {1} {p\left( \mu \right)} \frac {\mathrm{d}p\left( \mu \right)} {\mathrm{d}u_{mr}} = -C \left( \xi \right) \frac {\sigma_{r} \vec{v}_{n^{\prime}r}^{T} \vec{f}_{T}} {\vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T}} \,\textrm{.} \]
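
Reusing the hypothetical Vec2/Mat2 helpers from the previous sketch, the extra ingredient is the sign factor \( C \left( \xi \right) \) restated in index form:

```cpp
#include <cstddef>

// C(xi) in index form: +1 when H_{m-1} <= xi < H_m (i.e. m' = m),
// -1 when H_m <= xi < H_{m+1} (i.e. m' = m + 1), and 0 otherwise.
int SignFactorC(std::size_t m, std::size_t m_prime) {
  if (m_prime == m) return +1;
  if (m_prime == m + 1) return -1;
  return 0;
}

// (1/p(mu)) dp(mu)/d u_{mr} = -C(xi) * sigma_r * (v^T f_T) / (d^T A f_T).
double LogDerivU(int C, double sigma_r, const Vec2& v, const Vec2& fT,
                 const Mat2& A) {
  return -C * sigma_r * Dot(v, fT) / DiffDot(MatVec(A, fT));
}
```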

Perturbing V

Setting \( \lambda = v_{nr} \), \( \eqref{eq:derivative-alpha} \) becomes

\[ \frac {\mathrm{d}} {\mathrm{d}v_{nr}} \alpha_{m^{\prime}n^{\prime}} = \sum_{r^{\prime}=1}^{r} \frac {\mathrm{d}} {\mathrm{d}v_{nr}} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} v_{n^{\prime}r^{\prime}} = \sum_{r^{\prime}=1}^{r} u_{m^{\prime}r^{\prime}} \sigma_{r^{\prime}} \delta_{r,r^{\prime}} \delta_{n,n^{\prime}} = u_{m^{\prime}r} \sigma_{r} \delta_{n,n^{\prime}} \,\textrm{.} \]

The only nonzero cases of \( \eqref{eq:derivative-alpha-matrix} \) that get used are

\[ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}v_{nr}} A_{m^{\prime},n} &= \begin{bmatrix} 0 & u_{m^{\prime}-1,r} \sigma_{r} \\ 0 & u_{m^{\prime},r} \sigma_{r} \end{bmatrix} \,\mathrm{,} \quad T_{l^{\prime} - 1} \leq T < T_{l^{\prime}} \\ \frac{\mathrm{d}}{\mathrm{d}v_{nr}} A_{m^{\prime},n+1} &= \begin{bmatrix} u_{m^{\prime}-1,r} \sigma_{r} & 0 \\ u_{m^{\prime},r} \sigma_{r} & 0 \end{bmatrix} \,\mathrm{,} \quad T_{l^{\prime}} \leq T < T_{l^{\prime} + 1} \end{aligned} \]

with the additional condition that \( n = \left( \tilde{k} - 1 \right) l + l^{\prime} \). In other words, the \( n \) index of the perturbed \( v_{nr} \) must also coincide with the randomly sampled \( \beta \) index \( \tilde{k} \).

Recalling that \( \vec{f}_{T}^{T} = \begin{bmatrix} 1-f_{T} & f_{T} \end{bmatrix} \), and defining

\[ B_{n^{\prime}} \left( T \right) \equiv \begin{cases} f_{T} & T_{l^{\prime} - 1} \leq T < T_{l^{\prime}} \quad\mathrm{and}\quad \left\lfloor \frac{n^{\prime} - 1}{l} \right\rfloor + 1 = \tilde{k} \\ 1-f_{T} & T_{l^{\prime}} \leq T < T_{l^{\prime} + 1} \quad\mathrm{and}\quad \left\lfloor \frac{n^{\prime} - 1}{l} \right\rfloor + 1 = \tilde{k} \\ 0 & \mathrm{otherwise,} \end{cases} \]

we have

\[ \left( \frac{\mathrm{d}}{\mathrm{d}v_{nr}} A_{m^{\prime}n^{\prime}} \right) \vec{f}_{T} = B_{n^{\prime}} \left( T \right) \sigma_{r} \vec{u}_{m^{\prime}r} \]

so \( \eqref{eq:log-derivative-alpha-probability} \) becomes

\[ \frac {1} {p\left( \mu \right)} \frac {\mathrm{d}p\left( \mu \right)} {\mathrm{d}v_{nr}} = -B_{n^{\prime}} \left( T \right) \frac {\sigma_{r} \vec{d}^{T} \vec{u}_{m^{\prime}r}} {\vec{d}^{T} A_{m^{\prime}n^{\prime}} \vec{f}_{T}} \,\textrm{.} \]
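
Again reusing the hypothetical helpers, \( B_{n^{\prime}} \left( T \right) \) can be restated in index form: the two temperature brackets in the cases above correspond to the perturbed column being \( n^{\prime} \) or \( n^{\prime} - 1 \):

```cpp
#include <cstddef>

// B_{n'}(T) in index form: zero unless the flattened index falls in the
// sampled beta bracket, floor((n' - 1) / l) + 1 == k_tilde; then f_T when
// the perturbed column is n' and (1 - f_T) when it is n' - 1.
double WeightFactorB(std::size_t n, std::size_t n_prime, double f_T,
                     std::size_t l, std::size_t k_tilde) {
  if ((n_prime - 1) / l + 1 != k_tilde) return 0.0;  // wrong beta bracket
  if (n == n_prime) return f_T;
  if (n == n_prime - 1) return 1.0 - f_T;
  return 0.0;
}

// (1/p(mu)) dp(mu)/d v_{nr} = -B * sigma_r * (d^T u) / (d^T A f_T).
double LogDerivV(double B, double sigma_r, const Vec2& u, const Vec2& fT,
                 const Mat2& A) {
  return -B * sigma_r * DiffDot(u) / DiffDot(MatVec(A, fT));
}
```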

Implementation

When only perturbations in \( u_{mr} \), \( \sigma_{r} \), or \( v_{nr} \) for \( \alpha \) CDF data are considered, the only nonzero term in \( \eqref{eq:step-i-indirect-effect} \) is the angular kernel

\[ \mathcal{A}_{i} = p \left( \hat{\boldsymbol{\Omega}}_{i+1} \middle| \boldsymbol{x}_{i+1}, \hat{\boldsymbol{\Omega}}_{i}, E_{i+1}, r_{i+1} \right) = \frac{1}{2\pi} p \left( \mu_{i+1} \middle| \boldsymbol{x}_{i+1}, E_{i+1}, r_{i+1} \right) \]

so we have

\[ \frac {\frac{\mathrm{d}}{\mathrm{d}\lambda} p_{X_{i+1} \mid X_{i}}(x_{i+1} \mid x_{i})} {p_{X_{i+1} \mid X_{i}}(x_{i+1} \mid x_{i})} = \frac {1} {p\left( \mu \right)} \frac {\mathrm{d}p\left( \mu \right)} {\mathrm{d}\lambda} \,\mathrm{.} \]
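
Continuing the hypothetical sketches above, the whole scheme therefore reduces to scoring one angular term per collision:

```cpp
// Under an alpha-CDF perturbation only the angular kernel term survives,
// so each collision contributes (1/p(mu)) dp(mu)/d-lambda and nothing else.
void ScoreCollision(IndirectEffect& effect, double log_deriv_mu) {
  effect.AddStep(/*streaming=*/0.0, /*reaction=*/0.0, /*energy=*/0.0,
                 /*angle=*/log_deriv_mu);
}
```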