Sun Yat-Sen Management Review  1993/9
Vol. 1, No.1 p.153-175
Graduate School of Resources Management National Defense Management College
This study examines optimal inspection-repair-replacement policies for discrete-time partially observable Markov decision processes over an infinite horizon, in which the state space is finite and the action space consists of "no action, inspection at the beginning, instantaneous repair at the beginning, and replacement at the beginning." An additional cost is incurred for each inspection, which determines the precise state of the system. It is noted that repair cannot return the system to an as-good-as-new state. First, we construct the recursion that maximizes the expected total discounted reward. Useful results are derived under partial-order conditions, namely stochastic dominance and the monotone likelihood ratio, as well as total positivity of order two. Consequently, we show that the optimal policies have a structure that partitions the space of state probability vectors into at most five regions. Next, alternative modeling results are set forth for two different action spaces: "no action, instantaneous inspection and repair at the end, and instantaneous replacement at the beginning"; and "no action, inspection at the beginning, and instantaneous repair and replacement at the end." Finally, several relevant topics are presented for further study.
Keywords: partially observable Markov decision processes (POMDPs), stochastic dominance, monotone likelihood ratio, total positivity of order two, inspection-repair-replacement policy.