Article ID: iaor20031784
Country: Netherlands
Volume: 142
Issue: 3
Start Page Number: 548
End Page Number: 576
Publication Date: Nov 2002
Journal: European Journal of Operational Research
Authors: Bloch-Mercier Sophie
Keywords: Markov processes
We consider a repairable system that deteriorates while running according to a continuous-time Markov process, which eventually leads to failure. The degree of deterioration is measured on a finite discrete scale; repair times follow general distributions; failures are detected instantaneously. The system is subject to a preventive maintenance policy with a sequential checking procedure: the up-states are divided into two parts, the ‘good’ up-states and the ‘degraded’ up-states. Instantaneous (and perfect) inspections are performed on the running system: when it is found in a degraded up-state, it is stopped and maintained (for a random duration that depends on its degree of degradation); when it is found in a good up-state, it is left as it is. The next inspection epoch is then chosen randomly and depends on the degree of degradation observed at the time of inspection. We compute the long-run availability of the maintained system and give sufficient conditions under which the preventive maintenance policy improves the long-run availability. We also study the optimization of the long-run availability with respect to the distributions of the inter-inspection intervals, and show that under specific assumptions (which are often satisfied), the optimal distributions are non-random, i.e. deterministic. Numerical examples are studied.
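To illustrate the kind of policy described in the abstract, the following sketch (not taken from the paper) simulates one small instance under purely illustrative assumptions: three up-states with a hypothetical deterioration generator Q, lognormal preventive- and corrective-repair durations, and exponential inter-inspection delays whose mean depends on the state observed at inspection. It estimates the long-run availability by Monte Carlo; all numerical parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deterioration generator on up-states 0 (good), 1, 2 (degraded);
# column 3 is the failure state. All rates are illustrative, not from the paper.
Q = np.array([
    [-0.30, 0.25, 0.04, 0.01],   # good up-state 0
    [ 0.00,-0.40, 0.30, 0.10],   # degraded up-state 1
    [ 0.00, 0.00,-0.50, 0.50],   # degraded up-state 2
])
DEGRADED, FAILED = {1, 2}, 3

def next_deterioration_jump(state):
    """Sojourn time in an up-state and the next state (CTMC dynamics)."""
    rate = -Q[state, state]
    sojourn = rng.exponential(1.0 / rate)
    probs = Q[state].copy()
    probs[state] = 0.0
    return sojourn, rng.choice(4, p=probs / rate)

def preventive_repair_time(state):
    """Random preventive-maintenance duration, longer for higher degradation."""
    return rng.lognormal(mean=np.log(0.5 * state), sigma=0.3)

def corrective_repair_time():
    """Random corrective-repair duration after a failure."""
    return rng.lognormal(mean=np.log(2.0), sigma=0.4)

def next_inspection_delay(state):
    """Inter-inspection delay drawn as a function of the state just observed."""
    return rng.exponential({0: 4.0, 1: 1.5, 2: 0.5}[state])

def simulate_availability(horizon=2e5):
    t, up_time, state = 0.0, 0.0, 0
    t_inspect = next_inspection_delay(state)
    while t < horizon:
        sojourn, nxt = next_deterioration_jump(state)
        if t + sojourn < t_inspect:      # deterioration jump occurs before inspection
            up_time += sojourn
            t += sojourn
            state = nxt
            if state == FAILED:          # failure is detected instantly, repair as new
                t += corrective_repair_time()
                state = 0
                t_inspect = t + next_inspection_delay(state)
        else:                            # inspection occurs first
            up_time += t_inspect - t
            t = t_inspect
            if state in DEGRADED:        # found degraded: stop and maintain to as-new
                t += preventive_repair_time(state)
                state = 0
            t_inspect = t + next_inspection_delay(state)
    return up_time / t

print(f"estimated long-run availability: {simulate_availability():.4f}")
```

Replacing `rng.exponential(...)` in `next_inspection_delay` with fixed, state-dependent delays gives a deterministic inspection schedule, which is the type of schedule the paper's optimization result points to.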