5.6. Bodoff’s Percentile Layer Capital Method
Objectives: Compare Bodoff with the natural allocation and show how to compute both in aggregate.
Audience: Those interested in current allocation methods and CAS Exam 9 candidates.
Prerequisites: Background on allocation and Bodoff’s paper.
5.6.2. Introduction
The abstract to Bodoff [2007], Capital Allocation by Percentile Layer reads:
This paper describes a new approach to capital allocation; the catalyst for this new approach is a new formulation of the meaning of holding Value at Risk (VaR) capital. This new formulation expresses the firm’s total capital as the sum of many granular pieces of capital, or “percentile layers of capital.” As a result, one must allocate capital separately on each layer and perform the capital allocation across all layers. The resulting capital allocation procedure, “capital allocation by percentile layer,” exhibits several salient features. First, it allocates capital to all losses, rather than allocating capital only to extreme losses in the tail of the distribution. Second, despite allocating capital to this broad range of loss events, the proposed procedure does not allocate in proportion to average loss; rather, it allocates disproportionate capital to severe losses. Third, it allocates capital by relying neither upon esoteric parameters nor upon elusive risk preferences. Ultimately, on the practical plane, capital allocation by percentile layer produces allocations that are different from many other methods. Concomitantly, on the theoretical plane, capital allocation by percentile layer leads to new continuous formulas for risk load and utility.
Bodoff’s paper is an important contribution to capital allocation and actuarial science. Its key insight is that layers of capital respond to a range of loss events and not just tail events and so it is not appropriate to focus solely on default states when allocating capital. Bodoff takes capital to mean total claims paying ability, comprised of equity and premium. Bodoff allocates capital by considering loss outcomes and assumes that expected loss, margin, premium, and equity all have the same allocation within each layer.
Less favorably, Bodoff blurs the distinction between events and outcomes. He allocates to identifiable events (wind-only loss, etc.) rather than to outcomes. In his examples, events are distinguished by their outcome amounts. In the Lee diagram, events lie on the horizontal axis and outcomes on the vertical axis.
5.6.3. Assumptions and Notation
The examples model two independent units \(X_1\) and \(X_2\), usually wind and quake, with total \(X = X_1 + X_2\). \(F\) and \(S\) denote the distribution and survival functions of \(X\), and \(q\) its lower quantile function. The capital (asset) requirement \(a\) is set equal to the lower \(p=0.99\)-VaR, \(a := q(p)\).
5.6.4. Three Possible Allocation Methods
Consider three allocations:

1. Conditional VaR: the coVaR method allocates using\[a=\mathsf E[X\mid X=a] = \mathsf E[X_1\mid X=a] + \mathsf E[X_2\mid X=a].\]
2. Alternative conditional VaR: the alt coVaR method allocates using\[a = a\,\mathsf E\left[\frac{X_1}{X}\mid X\ge a \right] + a\,\mathsf E\left[\frac{X_2}{X}\mid X\ge a \right].\]
3. Naive conditional TVaR: the naive coTVaR method allocates \(a\) proportional to \(\mathsf E[X_1\mid X \ge a]\) and \(\mathsf E[X_2\mid X \ge a]\).
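These three methods can be checked by direct enumeration on a small discrete portfolio. The sketch below is pure Python, not part of the aggregate library; it assumes a two-unit example (wind loss of 99 with probability 0.2, quake loss of 100 with probability 0.05, independent) with \(a=100\), the numbers behind the exhibits later in this section.

```python
from itertools import product

# wind loss 0 or 99 (0.2 chance of a loss), quake loss 0 or 100
# (0.05 chance), independent; a = 100 is the 0.99 lower VaR of the total
wind = [(0.0, 0.8), (99.0, 0.2)]
quake = [(0.0, 0.95), (100.0, 0.05)]
a = 100.0

# the four joint outcomes as (x1, x2, probability)
events = [(w, q, pw * pq) for (w, pw), (q, pq) in product(wind, quake)]

def co_var(a):
    """coVaR: allocate using a = E[X1 | X = a] + E[X2 | X = a]."""
    p = sum(pr for x1, x2, pr in events if x1 + x2 == a)
    e1 = sum(pr * x1 for x1, x2, pr in events if x1 + x2 == a) / p
    e2 = sum(pr * x2 for x1, x2, pr in events if x1 + x2 == a) / p
    return e1, e2

def alt_co_var(a):
    """alt coVaR: allocate a * E[X_i / X | X >= a]."""
    p = sum(pr for x1, x2, pr in events if x1 + x2 >= a)
    r1 = sum(pr * x1 / (x1 + x2) for x1, x2, pr in events if x1 + x2 >= a) / p
    r2 = sum(pr * x2 / (x1 + x2) for x1, x2, pr in events if x1 + x2 >= a) / p
    return a * r1, a * r2

def naive_co_tvar(a):
    """naive coTVaR: allocate a in proportion to E[X_i | X >= a]."""
    p = sum(pr for x1, x2, pr in events if x1 + x2 >= a)
    e1 = sum(pr * x1 for x1, x2, pr in events if x1 + x2 >= a) / p
    e2 = sum(pr * x2 for x1, x2, pr in events if x1 + x2 >= a) / p
    return a * e1 / (e1 + e2), a * e2 / (e1 + e2)

print(co_var(a))         # (0.0, 100.0): only the quake-only event has X = a
print(alt_co_var(a))     # wind gets about 9.95, quake about 90.05
print(naive_co_tvar(a))  # wind gets about 16.53, quake about 83.47
```

Note how little the first two methods give to wind: only events at or above \(a\) matter, even though wind losses consume capital in almost every adverse year.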
Bodoff’s principal criticism of these methods is that they all ignore the possibility of outcomes \(<a\).

- coVaR allocates based on the proportion of losses by unit on the events \(\{X=a\}\) of exact size \(a\). It ignores other events near \(X=a\) and all events \(X<a\), which seems unreasonable. The allocation is also not numerically stable: in simulation output \(\{X=a\}\) is often only a single event.
- alt coVaR allocates based on the proportion of losses by unit on the events \(\{X \ge a\}\). It still ignores all events \(<a\). It relies on the relationship\[\begin{split}a &= a\,\left(\mathsf E\left[\frac{X_1}{X}\mid X\ge a\right] + \mathsf E\left[\frac{X_2}{X}\mid X\ge a\right]\right) \\ &= a\,\alpha_1(a) + a\,\alpha_2(a).\end{split}\]
- naive coTVaR resorts to a pro rata kludge because \(\mathsf E[X\mid X \ge x]\ge x\) and is usually \(>x\). Pro rata adjustments signal the lack of a rigorous rationale and should be avoided. Note: what Bodoff calls TVaR is usually known as CTE.
- Alternative conditional TVaR: the coTVaR method (not considered by Bodoff but introduced by Mango, Venter, Kreps, and Major) solves \(a=\mathsf{TVaR}(p^*)\) for \(p^*\le p\) (we shall see below that we really need to use expected shortfall, not TVaR). Then determine \(a^*=q(p^*)\), the \(p^*\)-VaR, and allocate using \(a=\mathsf E[X\mid X\ge a^*] =\mathsf E[X_1\mid X\ge a^*] + \mathsf E[X_2\mid X\ge a^*]\).
In addition, all of these methods can be criticized as actuarial allocation exercises without an economic motivation. They do not consider premium: additional assumptions are needed to derive a premium from an asset or capital allocation, such as a target return on allocated capital. They provide only an allocation of premium plus capital, i.e., assets, and not a split between the two.
5.6.5. Percentile Layer Allocation: Definition
Bodoff introduces the percentile layer of capital (plc) allocation method to address the criticism that methods 1-4 all ignore events causing losses below the level of capital, whereas capital is certainly used to pay such losses. It allocates capital in the same proportion as losses for each layer.
In a one-dollar, all-or-nothing cover that attaches with probability \(s=1-p\) at \(x=q(p)\) (\(=p\)-\(\mathsf{VaR}\)), under equal priority unit \(i\) receives a proportion \(\alpha_i(x):=\mathsf E\left[\dfrac{X_i}{X}\mid X > x\right]\) of assets, conditional on a loss. Therefore, unconditional expected loss recoveries equal \(\alpha_i(x)S(x)\), part of total layer losses \(S(x)\). Allocating each layer of capital between 0 and \(a\) in the same way gives the percentile layer of capital (plc) allocation\[a_i := \int_0^a \alpha_i(x)\,dx.\]
By construction, \(\sum_i a_i=a\). The plc allocation can be understood better by decomposing\[a_i = \int_0^a \alpha_i(x)\,dx = \int_0^a \alpha_i(x)S(x)\,dx + \int_0^a \alpha_i(x)F(x)\,dx.\]The first integral equals unit \(i\)'s expected loss recovery. The second splits unfunded assets (assets in excess of expected losses) in the same proportion as losses in each asset layer, using \(\alpha_i(x)\). plc says nothing about how to split the allocated unfunded capital \(\int_0^a \alpha_i(x)F(x)\, dx\) into margin and equity. This is not surprising, since there are no pricing assumptions. The natural allocation introduces a pricing distortion to compute an allocation of premium, and hence margin.
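For a discrete distribution, the plc integral and its funded/unfunded decomposition can be computed by summing layers. The sketch below is a pure-Python illustration, not the aggregate implementation; it assumes the wind/quake numbers used in the thought experiments below (wind loss 99 with probability 0.2, quake loss 100 with probability 0.05, independent, \(a=100\)).

```python
# plc sketch on a discrete example; outcomes are ((x1, x2), probability)
events = [((0.0, 0.0), 0.76), ((99.0, 0.0), 0.19),
          ((0.0, 100.0), 0.04), ((99.0, 100.0), 0.01)]
a = 100

def S(x):
    """Survival function of the total X."""
    return sum(pr for xs, pr in events if sum(xs) > x)

def alpha(x, i):
    """alpha_i(x) = E[X_i / X | X > x]."""
    return sum(pr * xs[i] / sum(xs) for xs, pr in events if sum(xs) > x) / S(x)

# all atoms are integers, so unit-width layers evaluated at their lower
# endpoints integrate the piecewise-constant integrands exactly
plc = [sum(alpha(x, i) for x in range(a)) for i in (0, 1)]
funded = [sum(alpha(x, i) * S(x) for x in range(a)) for i in (0, 1)]
unfunded = [t - f for t, f in zip(plc, funded)]

print(plc)   # about [80.53, 19.47]; sums to a = 100
# the funded part equals each unit's expected recovery under equal
# priority with assets a: for wind, 0.19 * 99 + 0.01 * 100 * 99 / 199
print(funded[0], 0.19 * 99 + 0.01 * 100 * 99 / 199)
```

The unfunded part, \(\int_0^a \alpha_i(x)F(x)\,dx\), is what plc leaves unsplit between margin and equity.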
There are six allocations considered by Bodoff, with the following allocations of assets to unit 1.

1. pct EX: \(a\,\mathsf E[X_1] / \mathsf E[X]\)
2. coVaR: \(\mathsf E[X_1\mid X=a]\)
3. alt coVaR: \(a\,\mathsf E\left[\dfrac{X_1}{X}\mid X\ge a \right]\)
4. naive coTVaR: \(a\,\dfrac{\mathsf E[X_1\mid X \ge a]}{\mathsf E[X\mid X \ge a]}\)
5. coTVaR: \(\mathsf E[X_1\mid X > a^*]\), where \(a=\mathsf{TVaR}(p^*)\) and \(a^*=q(p^*)\)
6. plc: \(\displaystyle \int_0^a \alpha_1(x)\,dx\), where \(\alpha_i(x):=\mathsf E\left[\dfrac{X_i}{X}\mid X > x\right]\)
5.6.6. Thought Experiments
Bodoff introduces four thought experiments:

1. Wind and quake, with wind losses of 0 or 99, quake losses of 0 or 100, a 0.2 probability of a wind loss, and a 0.05 probability of a quake loss.
2. Wind and quake, wind 0 or 50, quake 0 or 100, same probabilities.
3. Wind and quake, wind 0 or 5, quake 0 or 100, same probabilities.
4. A compound Poisson / exponential distribution (see Bodoff Example 4).

The units are independent. The next block of code sets up and validates Portfolio objects for each. The Bodoff portfolios are part of the base library and can be extracted with build.qlist.
In [1]: import numpy as np; import pandas as pd
In [2]: import matplotlib.pyplot as plt; from collections import OrderedDict
In [3]: from aggregate import build, qd
In [4]: from aggregate.extensions import bodoff_exhibit
In [5]: bodoff = list(build.qlist('.*Bodoff').program)
In [6]: ports = OrderedDict()
In [7]: for s in bodoff:
...: port = build(s)
...: port.name = port.name.replace('L.', '')
...: ports[port.name] = port
...:
In [8]: for port in ports.values():
...: if port.name != 'Bodoff4':
...: port.update(bs=1, log2=8, remove_fuzz=True, padding=1)
...: else:
...: port.update(bs=1/8, log2=16, remove_fuzz=True, padding=2)
...: port.density_df = port.density_df.apply(lambda x: np.round(x, 14))
...: qd(port)
...: print(port.name)
...: print('='*80 + '\n')
...:
E[X] Est E[X] Err E[X] CV(X) Est CV(X) Skew(X) Est Skew(X)
unit X
wind1 Freq 1 0
Sev 19.8 19.8 -2.2204e-16 2 2 1.5 1.5
Agg 19.8 19.8 -2.2204e-16 2 2 1.5 1.5
quake1 Freq 1 0
Sev 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
Agg 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
total Freq 2 0
Sev 12.4 12.4 0 2.6458 2.2679
Agg 24.8 24.8 0 1.8226 1.8226 1.4715 1.4715
log2 = 8, bandwidth = 1, validation: not unreasonable.
Bodoff1
================================================================================
E[X] Est E[X] Err E[X] CV(X) Est CV(X) Skew(X) Est Skew(X)
unit X
wind2 Freq 1 0
Sev 10 10 -2.2204e-16 2 2 1.5 1.5
Agg 10 10 -2.2204e-16 2 2 1.5 1.5
quake2 Freq 1 0
Sev 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
Agg 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
total Freq 2 0
Sev 7.5 7.5 2.2204e-16 2.8087 2.8984
Agg 15 15 2.2204e-16 1.972 1.972 2.1153 2.1153
log2 = 8, bandwidth = 1, validation: not unreasonable.
Bodoff2
================================================================================
E[X] Est E[X] Err E[X] CV(X) Est CV(X) Skew(X) Est Skew(X)
unit X
wind3 Freq 1 0
Sev 1 1 -2.2204e-16 2 2 1.5 1.5
Agg 1 1 -2.2204e-16 2 2 1.5 1.5
quake3 Freq 1 0
Sev 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
Agg 5 5 8.8818e-16 4.3589 4.3589 4.1295 4.1295
total Freq 2 0
Sev 3 3 6.6613e-16 5.2015 5.9989
Agg 6 6 8.8818e-16 3.6477 3.6477 4.079 4.079
log2 = 8, bandwidth = 1, validation: not unreasonable.
Bodoff3
================================================================================
E[X] Est E[X] Err E[X] CV(X) Est CV(X) Skew(X) Est Skew(X)
unit X
a Freq 0.25 2 2
Sev 4 3.9998 -4.0689e-05 1 1.0001 2 1.9995
Agg 1 0.99996 -4.0689e-05 2.8284 2.8286 4.2426 4.2426
b Freq 0.05 4.4721 4.4721
Sev 20 20 -1.6276e-06 1 1 2 2
Agg 1 1 -1.6276e-06 6.3246 6.3246 9.4868 9.4868
c Freq 0.05 4.4721 4.4721
Sev 100 100 -6.5104e-08 1 1 2 2
Agg 5 5 -6.5104e-08 6.3246 6.3246 9.4868 9.4868
total Freq 0.35 1.6903 1.6903
Sev 20 20 -6.0917e-06 2.5467 5.3022
Agg 7 7 -6.0918e-06 4.6247 4.6247 8.9162 8.9162
log2 = 16, bandwidth = 1/8, validation: not unreasonable.
Bodoff4
================================================================================
5.6.7. Thought Experiment Number 1
There are four possible events \(\omega\), leading to the loss outcomes \(X(\omega)\) laid out next.

event             X1      X2       X    probability
no loss            0       0       0           0.76
wind only         99       0      99           0.19
quake only         0     100     100           0.04
wind and quake    99     100     199           0.01
Compute the allocation using all the methods. In the next block, EX shows expected unlimited loss by unit; sa VaR and sa TVaR show stand-alone 0.99 VaR and TVaR. The remaining rows display results for the methods just described. The apparent issue with the coTVaR method is caused by the probability mass at 100. A co ES allocation would re-scale the coTVaR allocation shown.
In [9]: port = ports['Bodoff1']
In [10]: reg_p = 0.99
In [11]: a = port.q(reg_p, 'lower')
In [12]: print(f'VaR assets = {a}')
VaR assets = 100.0
In [13]: basic = bodoff_exhibit(port, reg_p)
In [14]: qd(basic, col_space=10)
wind1 quake1 total
method
EX 19.8 5 24.8
sa VaR 99 100 100
sa TVaR 99 100 199
pct EX 79.839 20.161 100
coVaR 0 100 100
alt coVaR 9.9497 90.05 100
naive coTVaR 16.528 83.472 100
coTVaR 82.5 20.833 103.33
plc 80.527 19.473 100
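The coTVaR row can be reproduced by hand. The sketch below assumes the discrete outcome distribution above: while \(q(p)=0\), expected shortfall is \(\mathsf{ES}(p)=\mathsf E[X]/(1-p)\), so \(\mathsf{ES}(p^*)=a\) solves to \(p^*=1-\mathsf E[X]/a\); the allocation then conditions on \(X>0\) and totals \(\mathsf E[X\mid X>0]\), not \(a\).

```python
# reproduce the coTVaR row by hand; outcomes are ((x1, x2), probability)
events = [((0.0, 0.0), 0.76), ((99.0, 0.0), 0.19),
          ((0.0, 100.0), 0.04), ((99.0, 100.0), 0.01)]
a = 100.0
mean = sum(pr * sum(xs) for xs, pr in events)

# while q(p) == 0, ES(p) = E[X] / (1 - p); solve ES(p*) = a in that range
p_star = 1 - mean / a
assert p_star <= 0.76          # so q(p_star) = 0 and we condition on X > 0

p_pos = sum(pr for xs, pr in events if sum(xs) > 0)
alloc = [sum(pr * xs[i] for xs, pr in events if sum(xs) > 0) / p_pos
         for i in (0, 1)]
print(p_star)   # about 0.752
print(alloc)    # about [82.5, 20.83]; the total 103.33 exceeds a = 100,
                # which is why a co ES version would rescale to a
```

This is the "apparent issue" noted above: the mass at 100 means no threshold \(a^*\) makes \(\mathsf E[X\mid X>a^*]\) exactly \(a\).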
Graphs of the survival and allocation functions for Bodoff Example 1. Top row: survival functions, bottom row: \(\alpha_i(x)\) allocation functions. Left side shows full range of \(0\le x\le 200\) and right side highlights the functions around the loss points, \(96\le x \le 103\).
In [15]: fig, axs = plt.subplots(2, 2, figsize=(2 * 3.5, 2 * 2.45), constrained_layout=True)
In [16]: ax0, ax1, ax2, ax3 = axs.flat
In [17]: df = port.density_df
In [18]: for ax in axs.flat[:2]:
....: (1 - df.query('(S>0 or p_total>0) and loss<=210').filter(regex='p_').cumsum()).\
....: plot(drawstyle="steps-post", ax=ax, lw=1)
....: ax.lines[1].set(lw=2, alpha=.5)
....: ax.lines[2].set(lw=3, alpha=.5)
....: ax.grid(lw=.25)
....: ax.legend(loc='upper right')
....:
In [19]: ax0.set(ylim=(-0.025, .25), xlim=(-.5, 210), xlabel='Loss', ylabel='Survival function');
In [20]: ax1.set(ylim=(-0.025, .3), xlim=[96,103], xlabel='Loss (zoom)', ylabel='Survival function');
In [21]: for ax in axs.flat[2:]:
....: df.query('(S>0) and loss<=210').filter(regex='exi_xgta_[wq]').plot(drawstyle="steps-post", lw=1, ax=ax)
....: ax.lines[1].set(lw=2, alpha=.5)
....: ax.grid(lw=.25)
....: ax.legend(loc='upper right')
....:
In [22]: ax2.set(ylim=(-0.025, 1.025), xlabel='Loss', ylabel='$E[X_i/X | X]$');
In [23]: ax3.set(ylim=(-0.025, 1.025), xlim=(96,103), xlabel='Loss (zoom)', ylabel='$E[X_i/X | X]$');
Expected Shortfall (usually called TVaR) differs from Bodoff’s Tail Value at Risk (generally called CTE) for a discrete distribution. TVaR/CTE is a jump function. ES is a continuous, increasing function taking all values between the mean and maximum value of \(X\). The graph illustrates the functions for Bodoff Example 1.
In [24]: fig, ax = plt.subplots(1, 1, figsize=(3.5, 2.45), constrained_layout=True)
In [25]: ps = np.linspace(0, 1, 101)
In [26]: tp = port.tvar(ps)
In [27]: ax.plot(ps, tp, lw=1, label='ES');
In [28]: ax.plot(df.F, port.density_df.exgta_total, lw=1, label='TVaR', drawstyle='steps-post');
In [29]: ax.plot([0, .76], [port.ex/.24, port.ex/.24, ], c='C1', lw=1, label=None);
In [30]: ax.grid();
In [31]: ax.legend();
In [32]: ax.set(ylim=[-5, 205], xlabel='p', ylabel='ES or TVaR/CTE');
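The distinction can be replicated without the library. The sketch below is a pure-Python illustration assuming the Example 1 total distribution: ES integrates the piecewise-constant quantile function, while CTE is \(\mathsf E[X\mid X>q(p)]\); the two agree at probabilities in the range of \(F\) and differ in between.

```python
import bisect

# total loss for Bodoff Example 1: atoms, probabilities, cdf values
xs = [0.0, 99.0, 100.0, 199.0]
ps = [0.76, 0.19, 0.04, 0.01]
F = [0.76, 0.95, 0.99, 1.00]

def q(p):
    """Lower p-quantile (VaR)."""
    return xs[bisect.bisect_left(F, p)]

def es(p):
    """Expected shortfall: average of q(s) for s in (p, 1); continuous in p."""
    total, lo = 0.0, p
    for x, f in zip(xs, F):
        if f > lo:
            total += x * (f - lo)   # quantile function equals x on (lo, f)
            lo = f
    return total / (1 - p)

def cte(p):
    """Bodoff's TVaR, i.e. CTE: E[X | X > q(p)]; a jump function of p."""
    v = q(p)
    pt = sum(pr for x, pr in zip(xs, ps) if x > v)
    return sum(x * pr for x, pr in zip(xs, ps) if x > v) / pt

print(es(0.95), cte(0.95))   # equal (about 119.8): 0.95 is in the range of F
print(es(0.97), cte(0.97))   # about 133.0 vs 199.0: ES averages across the atom
```

At \(p=0.97\), ES mixes the outcomes 100 and 199, whereas CTE jumps straight to 199; this is exactly why the coTVaR method needs ES rather than CTE to hit the capital level exactly.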
5.6.8. Bodoff Examples 1-3
Example 2 illustrates that plc can produce an answer different from expected losses. Example 3 illustrates the fungibility of pooled capital, with losses from \(X_1\) covered by the total premium. coTVaR suffers the same issues in Examples 2 and 3 as it does in Example 1.
In [33]: basic1 = bodoff_exhibit(ports['Bodoff1'], reg_p)
In [34]: basic2 = bodoff_exhibit(ports['Bodoff2'], reg_p)
In [35]: basic3 = bodoff_exhibit(ports['Bodoff3'], reg_p)
In [36]: basic_all = pd.concat((basic1, basic2, basic3), axis=1,
....: keys=[f'Ex {i}' for i in range(1,4)])
....:
In [37]: qd(basic_all, col_space=7)
Ex 1 Ex 2 Ex 3
wind1 quake1 total wind2 quake2 total wind3 quake3 total
method
EX 19.8 5 24.8 10 5 15 1 5 6
sa VaR 99 100 100 50 100 100 5 100 100
sa TVaR 99 100 199 50 100 150 5 100 105
pct EX 79.839 20.161 100 66.667 33.333 100 16.667 83.333 100
coVaR 0 100 100 -0 100 100 0 100 100
alt coVaR 9.9497 90.05 100 6.6667 93.333 100 0.95238 99.048 100
naive coTVaR 16.528 83.472 100 9.0909 90.909 100 0.9901 99.01 100
coTVaR 82.5 20.833 103.33 10 100 110 1 100 101
plc 80.527 19.473 100 43.611 56.389 100 4.873 95.127 100
5.6.9. Bodoff Example 4
The next table recreates the exhibit in Section 9.1 of Bodoff’s paper. There are three units, labelled a, b, and c. It shows the percent allocation of capital to each unit across different methods. The breakeven percentile is the percentile corresponding to expected losses. Bodoff’s calculation uses 10,000 simulations; the table shown here uses FFTs to obtain a close-to-exact answer. The exponential distribution is borderline thick-tailed, and so is quite hard to work with for both simulation and FFT methods.
In [38]: p4 = ports['Bodoff4']
In [39]: df91 = pd.DataFrame(columns=list('abc'), dtype=float)
In [40]: tv = p4.var_dict(.99, 'tvar')
In [41]: df91.loc['sa TVaR 0.99'] = np.array(list(tv.values())[:-1]) / sum(list(tv.values())[:-1])
In [42]: pbe = float(p4.cdf(p4.ex))
In [43]: for p in [.99, .95, .9, pbe]:
....: tv = p4.cotvar(p)
....: df91.loc[f'naive TVaR {p:.3g}'] = tv[:-1] / tv[-1]
....:
In [44]: v = ((p4.density_df.filter(regex='exi_xgta_[abc]').
....: shift(1).cumsum() * p4.bs).loc[p4.q(.99)]).values
....:
In [45]: df91.loc['plc'] = v / v.sum()
In [46]: df91.index.name = 'line'
In [47]: qd(df91, col_space=10, float_format=lambda x: f'{x:.1%}')
a b c
line
sa TVaR 0.99 5.5% 15.8% 78.8%
naive TVaR 0.99 0.4% 0.7% 98.9%
naive TVaR 0.95 1.3% 11.3% 87.5%
naive TVaR 0.9 6.7% 14.7% 78.6%
naive TVaR 0.876 9.0% 14.7% 76.3%
plc 4.5% 9.6% 85.8%
5.6.9.1. Pricing for Bodoff Example 4
Bodoff Example 4 is based on a three-unit portfolio. Each unit has a Poisson frequency and exponential severity:

- Unit a has expected claim count 0.25 and mean severity 4
- Unit b has expected claim count 0.05 and mean severity 20
- Unit c has expected claim count 0.05 and mean severity 100

Units a and b have unlimited expected losses of 1.0 and unit c of 5.0, giving a portfolio total of 7.0.
Bodoff does not consider pricing per se. His allocation can be considered as \(P_i+Q_i\), with no opinion on the split between margin and equity. Making additional assumptions, we can compare the plc capital allocation with other methods. Assume a total ROE of 0.1 at a 0.99-VaR capital standard.
Set up the target return, premium, and regulatory capital threshold (99% VaR):
In [48]: roe = 0.1
In [49]: reg_p = 0.99
In [50]: v = 1 / (1 + roe)
In [51]: d = 1 - v
In [52]: port = ports['Bodoff4']
In [53]: a = port.q(reg_p)
In [54]: el = port.density_df.at[a, 'lev_total']
In [55]: premium = v * el + d * a
In [56]: q = a - premium
In [57]: margin = premium - el
In [58]: roe, a, el, port.ex, premium, el / premium, q, margin / q
Out[58]:
(0.1,
164.875,
5.97612830432361,
6.999957357424143,
20.42148027665783,
0.29263932992920433,
144.45351972334217,
0.10000000000000003)
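The quantities above follow from simple arithmetic, and the identity margin/equity \(= d/v =\) roe holds by construction, since margin \(= d(a - \mathsf E[X\wedge a])\) and equity \(= v(a - \mathsf E[X\wedge a])\). A standalone check using the values from the output:

```python
# CCoC arithmetic: P = v * E[X ∧ a] + d * a, with v = 1/(1 + roe), d = 1 - v
roe = 0.1
v = 1 / (1 + roe)
d = 1 - v
a = 164.875                    # 0.99-VaR assets from the output above
el = 5.97612830432361          # limited expected loss E[X ∧ a]

premium = v * el + d * a
margin = premium - el          # = d * (a - el)
equity = a - premium           # = v * (a - el)

print(premium)                 # about 20.4215, as in the output above
print(margin / equity)         # d / v = roe = 0.1 by construction
```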
Calibrate pricing distortions to required return.
In [59]: port.calibrate_distortions(ROEs=[roe], Ps=[reg_p], strict='ordered');
In [60]: qd(port.distortion_df)
S L P PQ Q COC param error
a LR method
164.875 292.639m ccoc 0.0099961 5.9761 20.421 0.14137 144.45 0.1 0.1 0
ph 0.0099961 5.9761 20.421 0.14137 144.45 0.1 0.60427 1.7693e-10
wang 0.0099961 5.9761 20.421 0.14137 144.45 0.1 0.69321 3.3953e-06
dual 0.0099961 5.9761 20.421 0.14137 144.45 0.1 3.8032 -2.7904e-08
tvar 0.0099961 5.9761 20.421 0.14137 144.45 0.1 0.70736 4.3553e-06
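To see mechanically what a distortion does: a distortion \(g\) (increasing, concave, \(g(0)=0\), \(g(1)=1\)) is applied to the survival function, and the risk-adjusted premium for losses capped at \(a\) is \(\int_0^a g(S(x))\,dx\). The sketch below is an illustration on the small discrete Example 1 total, not the FFT-based Example 4 calibration above, using a proportional hazard distortion \(g(s)=s^\theta\).

```python
# distortion pricing sketch on the discrete Example 1 total distribution:
# rho = integral_0^a g(S(x)) dx with proportional hazard g(s) = s**theta
xs = [0.0, 99.0, 100.0, 199.0]
ps = [0.76, 0.19, 0.04, 0.01]
a = 100

def S(x):
    return sum(pr for y, pr in zip(xs, ps) if y > x)

def rho(theta):
    # S is constant on unit-width layers between the integer atoms,
    # so a unit step integrates exactly
    return sum(S(x) ** theta for x in range(a))

print(rho(1.0))   # theta = 1: no loading, just E[X ∧ a] = 23.81
print(rho(0.5))   # a concave distortion loads the premium, still below a
```

Smaller \(\theta\) lifts \(g(S)\) more in the tail, which is how a calibrated distortion converts an asset level into a premium with positive margin.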
Allocate premium plus equity to each unit across different pricing methods. All methods except percentile layer capital are calibrated to the same total premium and capital level. Distortions that price tail loss allocate the most to unit c, the most volatile; more bowed distortions allocate the most to a. Units a and b have the same expected loss. covar denotes the covariance method and coVaR conditional VaR; agg corresponds to the PIR approach and bod to Bodoff’s methods. Only additive methods are shown. Methods are ordered by allocation to unit a, the least skewed unit; c is the most skewed.
In [61]: ad_ans = port.analyze_distortions(p=reg_p, kind='lower')
In [62]: basic = bodoff_exhibit(port, reg_p)
In [63]: qd(basic, col_space=10)
a b c total
method
EX 1 1 5 7
sa VaR 14 32.5 162.5 164.88
sa TVaR 18.42 52.989 264.94 267.26
pct EX 23.554 23.554 117.77 164.88
coVaR 1.0864 2.7956 160.99 164.88
alt coVaR 0.74288 1.3108 162.82 164.87
naive coTVaR 0.66867 1.1279 163.08 164.88
coTVaR 1.1127 7.0366 156.86 165.01
plc 7.4527 15.899 141.52 164.87
In [64]: ans = pd.concat((ad_ans.comp_df.xs('P', 0, 1) + ad_ans.comp_df.xs('Q', 0, 1),
....: basic.rename(columns=dict(X='total')).iloc[3:]), keys=('agg', 'bod'))
....:
In [65]: if port.name[-1] in list('123'):
....: ans = ans.sort_values('X1')
....: bit = ans.query(' abs(total - @a) < 1e-3 and abs(X1 + X2 - total) < 1e-3 ').dropna()
....:
In [66]: if port.name[-1] not in list('123'):
....: ans = ans.sort_values('a')
....: bit = ans.query(' abs(total - @a) < 1e-2 and abs(a + b + c - total) < 1e-2 ')
....:
In [67]: bit.index.names =['approach', 'method']
In [68]: qd(bit, col_space=10)
a b c total
approach method
bod naive coTVaR 0.66867 1.1279 163.08 164.88
alt coVaR 0.74288 1.3108 162.82 164.87
coVaR 1.0864 2.7956 160.99 164.88
plc 7.4527 15.899 141.52 164.87
pct EX 23.554 23.554 117.77 164.88
Premium for the PIR and Bodoff methods, sorted by premium for unit a. All methods produce the same total premium by calibration, but very considerable differences are evident across the methods.
In [69]: basic.loc['EXa'] = \
....: port.density_df.filter(regex='exa_[abct]').loc[a].rename(index=lambda x: x.replace('exa_', ''))
....:
In [70]: premium_df = basic.drop(index=['EX', 'sa TVaR', 'coTVaR'])
In [71]: premium_df = premium_df.loc['EXa'] * v + d * premium_df
In [72]: ans = pd.concat((ad_ans.comp_df.xs('P', 0, 1), premium_df),
....: keys=('agg', 'bod')).sort_values('a')
....:
In [73]: bit = ans.query(' abs(total - @premium) < 1e-2 and abs(a + b + c - total) < 1e-2 ')
In [74]: bit.index.names =['approach', 'method']
In [75]: qd(bit, col_space=10, sparsify=False)
a b c total
approach method
bod naive coTVaR 0.96674 1.0069 18.448 20.421
bod alt coVaR 0.97349 1.0236 18.424 20.421
bod coVaR 1.0047 1.1585 18.258 20.421
agg Dist ph 1.5177 2.1557 16.756 20.429
bod plc 1.5835 2.3497 16.488 20.421
agg Dist wang 1.8859 2.5978 15.944 20.428
agg Dist dual 2.6728 3.2827 14.471 20.426
bod pct EX 3.0472 3.0456 14.329 20.421
agg Dist tvar 3.4054 3.3995 13.621 20.426
Corresponding loss ratios (remember, these are cat lines).
In [76]: bit_lr = premium_df.loc['EXa'] / bit
In [77]: qd(bit_lr, col_space=10, sparsify=False,
....: float_format=lambda x: f'{x:.1%}')
....:
a b c total
approach method
bod naive coTVaR 103.1% 98.8% 21.6% 29.3%
bod alt coVaR 102.4% 97.2% 21.6% 29.3%
bod coVaR 99.2% 85.9% 21.8% 29.3%
agg Dist ph 65.7% 46.1% 23.8% 29.3%
bod plc 62.9% 42.3% 24.2% 29.3%
agg Dist wang 52.8% 38.3% 25.0% 29.3%
agg Dist dual 37.3% 30.3% 27.5% 29.3%
bod pct EX 32.7% 32.7% 27.8% 29.3%
agg Dist tvar 29.3% 29.3% 29.3% 29.3%
5.6.10. Bodoff Summary
Bodoff’s method allocates all capital in proportion to losses and does not distinguish between expected loss, margin, and equity; it does not produce a price. It is event-centric, nominally allocating to events, but in effect allocating to perils, i.e., to units. Premium is not mentioned until Section 7 (of 10), where it relies on the basic CCoC formula \(P=vL + da\) (eq. 8.2).
5.6.11. CAS Exam Question: Spring 2018 Question 15
An insurer has exposure to two independent perils, wind and earthquake:
Wind has a 15% chance of a $5 million loss, and an 85% chance of no loss.
Earthquake has a 1% chance of a $15 million loss, and a 99% chance of no loss.
Using the capital allocation by percentile layer methodology with a 99.5% VaR capital requirement, determine how much capital should be allocated to each peril.
Solution.
The last row gives the percentile layer capital.
In [78]: cas15 = build('port CASq15 '
....: 'agg X1 1 claim dsev [0, 5] [0.85, 0.15] fixed '
....: 'agg X2 1 claim dsev [0, 15] [0.99, 0.01] fixed ')
....:
In [79]: qd(cas15)
E[X] Est E[X] Err E[X] CV(X) Est CV(X) Skew(X) Est Skew(X)
unit X
X1 Freq 1 0
Sev 0.75 0.75 2.2204e-16 2.3805 2.3805 1.9604 1.9604
Agg 0.75 0.75 2.2204e-16 2.3805 2.3805 1.9604 1.9604
X2 Freq 1 0
Sev 0.15 0.15 8.8818e-16 9.9499 9.9499 9.8494 9.8494
Agg 0.15 0.15 8.8818e-16 9.9499 9.9499 9.8494 9.8494
total Freq 2 0
Sev 0.45 0.45 2.2204e-16 3.7168 4.7835
Agg 0.9 0.9 2.2204e-16 2.5856 2.5856 3.4839 3.4839
log2 = 16, bandwidth = 1/128, validation: not unreasonable.
In [80]: cas15.density_df = cas15.density_df.apply(lambda x: np.round(x, 10))
In [81]: basic = bodoff_exhibit(cas15, reg_p=.995)
In [82]: qd(basic, col_space=10)
X1 X2 total
method
EX 0.75 0.15 0.9
sa VaR 5 15 15
sa TVaR 5 15 16.5
pct EX 12.5 2.5 15
coVaR 0 15 15
alt coVaR 0.5625 14.438 15
naive coTVaR 0.71429 14.286 15
coTVaR 0.75 15 15.75
plc 5.0714 9.9286 15
In [83]: df = cas15.density_df.query('S > 0 or p_total > 0')
The calculation of plc as the integral of \(\alpha_1\) for unit 1 is simply:
In [84]: df.exi_xgta_X1.shift(1, fill_value=0).cumsum().loc[15] * cas15.bs
Out[84]: 5.071372239500204
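The same number can be obtained from first principles by summing unit-width layers of \(\alpha_1\). The sketch below is a pure-Python check of the library calculation, assuming the exact discrete distribution of the question.

```python
from itertools import product

# CAS Spring 2018 Q15 by hand: wind loss 5 with probability 0.15, quake
# loss 15 with probability 0.01, independent; a = 15 is the 0.995 lower
# VaR of the total
wind = [(0.0, 0.85), (5.0, 0.15)]
quake = [(0.0, 0.99), (15.0, 0.01)]
a = 15

events = [((w, q), pw * pq) for (w, pw), (q, pq) in product(wind, quake)]

def alpha(x, i):
    """alpha_i(x) = E[X_i / X | X > x]."""
    p = sum(pr for xs, pr in events if sum(xs) > x)
    return sum(pr * xs[i] / sum(xs) for xs, pr in events if sum(xs) > x) / p

# atoms sit at multiples of 5, so unit-width layers evaluated at their
# lower endpoints integrate alpha exactly
plc = [sum(alpha(x, i) for x in range(a)) for i in (0, 1)]
print(plc)   # about [5.0714, 9.9286], matching the exhibit above
```

Layers 0-5 split \(0.93927\) to wind and the rest to quake; layers 5-15 give wind only its equal-priority share \(0.0375\) of the both-events outcome, which is where quake earns most of its allocation.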