In [1]:

```
from demos.setup import np, plt, demo
from compecon import DDPmodel, gridmake, getindex
%matplotlib inline
```

Assume a maximum asset age of 5 years, asset replacement cost $c = 75$, cost of servicing $k = 10$, and annual discount factor $\delta = 0.9$.

In [2]:

```
maxage = 5
repcost = 75
mancost = 10
delta = 0.9
```

This is an infinite horizon, deterministic model with time $t$ measured in years. The state variables

$a \in \{1, 2, 3, \dots, n\}$

$s \in \{0, 1, 2, \dots, n − 1\}$

are the age of the asset in years and the number of servicings it has undergone, respectively.

In [3]:

```
s1 = np.arange(1, 1 + maxage) # asset age
s2 = s1 - 1 # servicings
S = gridmake(s1,s2) # combined state grid
S1, S2 = S
n = S1.size # total number of states
```
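For readers without CompEcon installed, the combined state grid can be reproduced with plain NumPy. The sketch below assumes `gridmake` enumerates all $(a, s)$ pairs with the first argument varying fastest, as in the MATLAB CompEcon toolbox:

```python
import numpy as np

maxage = 5
s1 = np.arange(1, maxage + 1)   # asset age a = 1..5
s2 = s1 - 1                     # servicings s = 0..4

# gridmake-style Cartesian product: first variable varies fastest
S1 = np.tile(s1, s2.size)       # ages, cycling fastest
S2 = np.repeat(s2, s1.size)     # servicings, cycling slowest
n = S1.size                     # 25 combined states
```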

The action variable

$x \in \{\text{no action}, \text{service}, \text{replace}\}$

is the decision to hold, service, or replace the asset.

In [4]:

```
X = ['no action', 'service', 'replace'] # vector of actions
m = len(X) # number of actions
```

The reward function is \begin{equation} f(a, s, x) = \begin{cases} p(a, s), &x = \text{no action}\\ p(a, s + 1) - k, &x = \text{service}\\ p(0, 0) - c, &x = \text{replace} \end{cases} \end{equation}

The profit contribution $p(a) = 50 - 2.5a - 2.5a^2$ is a function of the asset age $a$ in years; in the code below it is scaled by the factor $\min\{1, 1 - (a - s)/\bar{a}\}$, where $\bar{a}$ is the maximum age, to capture the effect of the servicing history.

Here, the rows of the reward matrix, which correspond to the three admissible decisions (no action, service, replace), are computed individually.

In [5]:

```
f = np.zeros((m, n))
q = 50 - 2.5 * S1 - 2.5 * S1 ** 2
f[0] = q * np.minimum(1, 1 - (S1 - S2) / maxage)
f[1] = q * np.minimum(1, 1 - (S1 - S2 - 1) / maxage) - mancost
f[2] = 50 - repcost
```

The state transition function is \begin{equation} g(a, s, x) = \begin{cases} (a + 1, s), &x = \text{no action}\\ (a + 1, s + 1), &x = \text{service}\\ (1, 0), &x = \text{replace} \end{cases} \end{equation}

Here, the routine `getindex` is used to find the index of the following period's state.

In [6]:

```
g = np.empty_like(f)
g[0] = getindex(np.c_[S1 + 1, S2], S)
g[1] = getindex(np.c_[S1 + 1, S2 + 1], S)
g[2] = getindex(np.c_[1, 0], S)
```
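The lookup that `getindex` performs can be sketched in plain NumPy as an exact row match against the state grid. Below, `find_state_index` is a hypothetical stand-in (not the CompEcon implementation), assuming every queried point exists in the grid:

```python
import numpy as np

def find_state_index(points, grid):
    """For each row of `points` (p x k), return the index of the
    identical row in `grid` (n x k). Assumes an exact match exists."""
    matches = (grid[None, :, :] == points[:, None, :]).all(axis=2)
    return matches.argmax(axis=1)
```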

The value of an asset of age $a$ that has undergone $s$ servicings satisfies the Bellman equation \begin{equation} V(a,s) = \max\{p(a,s) + \delta V(a+1,s),\quad p(a,s+1) - k + \delta V(a+1,s+1),\quad p(0,0) - c + \delta V(1,0)\} \end{equation}

where we set $p(n, s) = −\infty$ for all $s$ to enforce replacement of an asset of age $n$. The Bellman equation asserts that if the manufacturer replaces an asset of age $a$ with servicings $s$, he earns $p(0,0) − c$ over the coming year and begins the subsequent year with an asset worth $V(1,0)$; if he services the asset, he earns $p(a, s + 1) − k$ over the coming year and begins the subsequent year with an asset worth $V(a + 1, s + 1)$. As with the previous example, the value $V(a, s)$ measures not only the current and future net earnings of the asset, but also the net earnings of all future assets that replace it.
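As a sketch of what solving the model involves, the Bellman equation above can be attacked by simple value-function iteration over the reward matrix `f` and next-state index matrix `g`. This is a minimal illustration under those assumptions, not the `DDPmodel` implementation:

```python
import numpy as np

def value_iteration(f, g, delta, tol=1e-8, maxit=1000):
    """Solve V(s) = max_x [f(x, s) + delta * V(g(x, s))] by successive
    approximation, for a deterministic model with reward matrix f (m x n)
    and next-state index matrix g (m x n)."""
    v = np.zeros(f.shape[1])
    for _ in range(maxit):
        Q = f + delta * v[g.astype(int)]   # action value for every (x, s)
        vnew = Q.max(axis=0)               # Bellman maximization
        if np.max(np.abs(vnew - v)) < tol:
            v = vnew
            break
        v = vnew
    x = (f + delta * v[g.astype(int)]).argmax(axis=0)  # optimal policy
    return v, x
```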

To solve and simulate this model, use the CompEcon class `DDPmodel`.

In [7]:

```
model = DDPmodel(f, g, delta)
model.solve()
```

The paths are computed by performing a deterministic simulation of 12 years in duration using the `simulate()` method.

In [8]:

```
sinit = 0
nyrs = 12
t = np.arange(nyrs + 1)
spath, xpath = model.simulate(sinit, nyrs)
```
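For a deterministic model, `simulate()` amounts to iterating the optimal policy through the transition index matrix. The sketch below uses a hypothetical helper `simulate_path`, assuming a policy vector `x` and next-state index matrix `g` as constructed above:

```python
import numpy as np

def simulate_path(x, g, sinit, nyrs):
    """Roll the deterministic model forward: at each step take the
    optimal action x[s] and jump to the next state g[x[s], s]."""
    spath, xpath = [sinit], []
    for _ in range(nyrs):
        s = spath[-1]
        a = int(x[s])                # optimal action in current state
        xpath.append(a)
        spath.append(int(g[a, s]))   # next period's state index
    return spath, xpath
```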

The asset is replaced every four years, ...

In [9]:

```
demo.figure('Optimal State Path', 'Year', 'Age of Asset', [0, 12])
plt.plot(t, S1[spath])
plt.show()
```

...and is serviced twice, at the beginning of the second and third years of operation.

In [10]:

```
demo.figure('Optimal State Path', 'Year', 'Number of Servicings', [0, 12], [0, 2.25])
plt.plot(t, S2[spath])
plt.show()
```