Response-adaptive designs for clinical trials: Simultaneous learning from multiple patients
Article ID: iaor201527853
Volume: 248
Issue: 2
Start Page Number: 619
End Page Number: 633
Publication Date: Jan 2016
Journal: European Journal of Operational Research
Authors: ,
Keywords: health services, design, learning
Abstract:

Clinical trials have traditionally followed a fixed design, in which the randomization probabilities of patients to the various treatments remain fixed throughout the trial at the values specified in the protocol. The primary goal of this static design is to learn about the efficacy of treatments. Response‐adaptive designs, on the other hand, allow clinicians to use what is learned about treatment effectiveness to dynamically adjust the randomization probabilities as the trial progresses. An ideal adaptive design is one in which patients are treated as effectively as possible without sacrificing the potential learning or compromising the integrity of the trial. We propose such a design, termed Jointly Adaptive, that uses forward‐looking algorithms to fully exploit learning from multiple patients simultaneously. Compared to the best existing implementable adaptive design, which employs a multi‐armed bandit framework in a setting where multiple patients arrive sequentially, our proposed design improves the expected health outcomes of patients in the trial by up to 8.6 percent under the set of scenarios considered. Further, we demonstrate our design's effectiveness using data from a recently conducted stent trial. This paper also adds to the general understanding of such models by showing the value and nature of improvements over heuristic solutions for problems with short delays in observing patient outcomes. We do this by comparing the performance of these schemes under both maximum-expected-patient-health and maximum-expected-learning objectives, and by demonstrating the value of a restricted‐optimal‐policy approximation in a practical example.
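To make the bandit framing of response-adaptive randomization concrete, the sketch below simulates a two-arm Bernoulli trial using Thompson sampling, a standard myopic bandit heuristic. This is an illustration of the general mechanism only, not the paper's forward-looking Jointly Adaptive design; the response rates and patient count are hypothetical.

```python
import random


def thompson_randomize(successes, failures, rng):
    """Sample each arm's response rate from its Beta(1+s, 1+f) posterior
    and assign the next patient to the arm with the highest draw."""
    draws = [rng.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=lambda arm: draws[arm])


def simulate_trial(true_rates, n_patients=200, seed=1):
    """Run a response-adaptive trial: allocation probabilities shift
    toward the arm that appears more effective as outcomes accrue."""
    rng = random.Random(seed)
    successes = [0] * len(true_rates)
    failures = [0] * len(true_rates)
    for _ in range(n_patients):
        arm = thompson_randomize(successes, failures, rng)
        if rng.random() < true_rates[arm]:  # observe patient outcome
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures


if __name__ == "__main__":
    s, f = simulate_trial([0.3, 0.6])
    for arm in range(2):
        print(f"arm {arm}: {s[arm] + f[arm]} patients, {s[arm]} responses")
```

Unlike a fixed design's constant 50/50 split, the allocation here drifts toward the better arm, which is the sense in which patients inside the trial are treated more effectively while learning continues.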
