Article ID: | iaor20164022 |
Volume: | 32 |
Issue: | 8 |
Start Page Number: | 2925 |
End Page Number: | 2943 |
Publication Date: | Dec 2016 |
Journal: | Quality and Reliability Engineering International |
Authors: | Shatnawi Omar |
Keywords: | statistics: distributions, simulation, computers |
In the software reliability engineering literature, few attempts have been made to study the fault debugging environment using discrete‐time modelling. Most endeavours assume that a detected fault is either immediately removed or perfectly debugged. Such discrete‐time models can be applied to any debugging environment and may be termed black‐box models, because they are used without prior knowledge of the nature of the fault being debugged. However, to develop a white‐box model, one needs to be cognizant of the debugging environment. During debugging, numerous factors affect the debugging process. These factors include internal ones, such as fault density and fault debugging complexity, and external ones that originate in the debugging environment itself, such as the skills of the debugging team and the debugging effort expenditure. Hence, in such an environment, fault removal may take a long time after a fault has been detected. It is therefore imperative to clearly understand the testing and debugging environment, and hence there is an urgent need to develop a model that takes into account the fault debugging complexity and incorporates the learning phenomenon of the debugger in an imperfect debugging environment. This objective dictates developing a framework through an integrated modelling approach based on the nonhomogeneous Poisson process that incorporates these realistic factors during the fault debugging process. Actual software reliability data have been used to demonstrate the applicability of the proposed integrated framework.
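The paper's own integrated framework is not reproduced in this record, but the abstract's building blocks can be illustrated. The following sketch shows two classic discrete-time NHPP mean value functions from the literature: the discrete exponential model (no learning, immediate removal) and the discrete delayed S-shaped model, whose two-stage detection/removal structure captures a debugger learning effect. The parameter names `a` (expected total fault content) and `b` (per-test-occasion detection rate) are conventional but assumed here, not taken from the paper.

```python
# Illustrative sketch only -- NOT the paper's proposed model. These are two
# standard discrete-time NHPP mean value functions m(n): the expected
# cumulative number of faults detected by test occasion n.

def m_exponential(n, a, b):
    """Discrete exponential (discrete analogue of Goel-Okumoto):
    assumes each detected fault is removed immediately and perfectly."""
    return a * (1.0 - (1.0 - b) ** n)

def m_delayed_s(n, a, b):
    """Discrete delayed S-shaped model: detection is followed by a
    separate isolation/removal stage, producing an S-shaped growth
    curve often interpreted as a learning phenomenon."""
    return a * (1.0 - (1.0 + b * n) * (1.0 - b) ** n)

if __name__ == "__main__":
    a, b = 100.0, 0.1  # hypothetical parameter values for illustration
    for n in (0, 5, 10, 50):
        print(n, round(m_exponential(n, a, b), 2), round(m_delayed_s(n, a, b), 2))
```

Both curves start at zero and saturate at `a`; the S-shaped curve grows more slowly early on, reflecting the delay between detecting a fault and actually removing it, which is the kind of non-immediate, imperfect debugging behaviour the abstract argues a realistic model must capture.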