Dr. Boris Freidlin conducts research on the methodology of clinical trials. With Dr. Simon, he developed the adaptive signature design (Clinical Cancer Research, 2005), which combines prospective development of a classifier identifying treatment-sensitive patients with a properly powered test of overall effect in a single pivotal trial. More recently, Dr. Freidlin and colleagues developed a cross-validation extension of the adaptive signature design that optimizes the efficiency of both the classifier-development and validation components. The cross-validation approach has been shown to considerably improve the performance of the adaptive signature design, and it also provides an estimate of the treatment effect in the identified sensitive subpopulation.
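The cross-validated design can be illustrated with a minimal sketch. The data model, the toy cutoff classifier, and all parameter values below are assumptions for illustration only, not the classifier or settings used in the published design; the key idea shown is that each patient is classified by a signature developed on the other folds, so the whole trial contributes to both development and validation.

```python
import random
import statistics

def simulate_patient(rng):
    """One patient: (biomarker, arm, response). Hypothetical data model:
    only biomarker-high patients benefit from the experimental arm."""
    biomarker = rng.random()                 # uniform biomarker score
    arm = rng.randint(0, 1)                  # 1 = experimental, 0 = control
    sensitive = biomarker > 0.6              # true (unknown) sensitivity
    p = 0.3 + (0.4 if (arm == 1 and sensitive) else 0.0)
    return biomarker, arm, int(rng.random() < p)

def learn_cutoff(train):
    """Toy classifier: choose the biomarker cutoff (on a grid) that
    maximizes the treatment-vs-control response difference above it."""
    best_cut, best_diff = 0.5, -1.0
    for cut in [i / 10 for i in range(1, 10)]:
        trt = [r for b, a, r in train if b > cut and a == 1]
        ctl = [r for b, a, r in train if b > cut and a == 0]
        if len(trt) >= 5 and len(ctl) >= 5:
            diff = statistics.mean(trt) - statistics.mean(ctl)
            if diff > best_diff:
                best_cut, best_diff = cut, diff
    return best_cut

def cv_adaptive_signature(patients, k=10):
    """Each patient is classified as 'sensitive' by a classifier
    developed on the other k-1 folds; the pooled CV-sensitive
    subgroup is then used for the validation comparison."""
    sensitive = []
    for fold in range(k):
        test = patients[fold::k]
        train = [p for i, p in enumerate(patients) if i % k != fold]
        cut = learn_cutoff(train)
        sensitive.extend(p for p in test if p[0] > cut)
    return sensitive

rng = random.Random(0)
patients = [simulate_patient(rng) for _ in range(400)]
subgroup = cv_adaptive_signature(patients)
trt = [r for _, a, r in subgroup if a == 1]
ctl = [r for _, a, r in subgroup if a == 0]
print(f"CV-sensitive subgroup: n={len(subgroup)}, "
      f"estimated effect={statistics.mean(trt) - statistics.mean(ctl):.2f}")
```

Because every patient's classification comes from a model that never saw that patient, the treatment-effect estimate in the pooled subgroup avoids the resubstitution bias of developing and testing the signature on the same data.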
Dr. Freidlin, together with BRB and CTEP colleagues, conducted a comprehensive review of randomized designs for evaluating clinical biomarker tests, which will play an important role in achieving personalized treatment for cancer patients. Definitive evaluation of the clinical utility of these biomarkers requires large randomized clinical trials (RCTs), so efficient RCT design is crucial for timely introduction of these medical advances into clinical practice; a variety of designs have been proposed for this purpose. To guide the design and interpretation of RCTs evaluating biomarkers, Dr. Freidlin and colleagues performed an in-depth comparison of the advantages and disadvantages of commonly used designs. Key aspects of the evaluation included efficiency comparisons and the special interim-monitoring issues that arise from the complexity of these RCTs, with important ongoing and completed trials used as examples. They concluded that, in most settings, randomized biomarker-stratified designs (which use the biomarker to guide analysis but not treatment assignment) should be used to obtain a rigorous assessment of a biomarker's clinical utility.
Drs. Freidlin and Korn evaluated the impact of stopping RCTs early for efficacy on estimation of the treatment effect. It has been suggested that the well-known bias of treatment-effect estimators, due to the possibility of early stopping for positive results, is a major concern with interim monitoring. Drs. Freidlin and Korn developed approaches for comparing the inflation of the treatment-effect estimator when a trial is stopped early for positive results with the inflation that would be seen in a comparable set of positive fixed-sample-size trials with no interim monitoring, and for quantifying the relative inflation of monitored trials against that of the corresponding subset of positive fixed-sample-size trials. Via simulation for some O'Brien-Fleming and Haybittle-Peto monitoring boundaries, they found that although the inflation of the treatment-effect estimator when a trial is stopped early can be considerable, only at very early interim analyses (≤25% of information) is this inflation much larger than the inflation that would be seen for an appropriate subset of similar positive fixed-sample-size trials.
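The comparison can be sketched in a small simulation. The boundary value (an approximate two-look O'Brien-Fleming efficacy bound), the effect size, and the sample sizes below are assumptions for illustration, not the configurations studied in the paper; the point shown is that positive fixed-sample-size trials are also an inflated subset, which is the relevant benchmark for judging the inflation from early stopping.

```python
import random
import statistics

# Approximate two-look O'Brien-Fleming efficacy bound at 50% information
# (an assumption for this sketch), and the usual fixed-sample cutoff.
INTERIM_Z, FIXED_Z = 2.96, 1.96
TRUE_EFFECT, N = 0.2, 200   # standardized effect; total sample size

def monitored_trial(rng):
    """One monitored trial: returns (estimate, stopped_early)."""
    data = [rng.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    interim_mean = statistics.mean(data[: N // 2])
    if interim_mean * (N // 2) ** 0.5 >= INTERIM_Z:   # early efficacy stop
        return interim_mean, True
    return statistics.mean(data), False

rng = random.Random(1)
early = []       # estimates from trials stopped at the interim analysis
fixed_pos = []   # estimates from positive fixed-sample-size trials
for _ in range(4000):
    est, stopped = monitored_trial(rng)
    if stopped:
        early.append(est)
    data = [rng.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]  # fixed-sample trial
    est = statistics.mean(data)
    if est * N ** 0.5 >= FIXED_Z:                           # positive result
        fixed_pos.append(est)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean estimate, trials stopped early:   {statistics.mean(early):.3f}")
print(f"mean estimate, positive fixed trials:  {statistics.mean(fixed_pos):.3f}")
```

Both conditional means exceed the true effect; the question raised by the paper is not whether the early-stopped estimate is inflated, but how much more inflated it is than the corresponding subset of positive fixed-sample-size trials.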
Dr. Freidlin and colleagues also collected information about treatment efficacy on NCI Cooperative Group trials that were stopped early for positive results (information both at the time the trial was stopped/released and at times of further follow up). Twenty-seven such trials were located. For 17 of the 18 trials with sufficient follow-up information, the treatment effect was very similar or only slightly smaller at last follow up as compared to the stopping/release time. Reasons why one might be concerned about early stopping for positive results were critically evaluated. They concluded that for trials with well designed interim-monitoring plans, the ability to stop early for positive results is an important component of the trial design, allowing the public to benefit as soon as possible from the study conclusions.
Dr. Freidlin and colleagues have proposed a new approach to futility (inefficacy) monitoring of RCTs, comparing it in a simulation study to commonly used futility monitoring rules. Some of the commonly used inefficacy rules are suboptimal with respect to the strength of evidence required for stopping throughout the trial: too conservative in the middle and/or too aggressive at the end. Relative to common inefficacy rules, the new procedure is shown to result in potentially fewer treated patients and shorter study duration under the null hypothesis, with only a minor loss of power under the alternative hypothesis. By decreasing average stopping times relative to the commonly used boundaries, the new rule lessens patient exposure to inactive treatments, improves resource utilization, and accelerates dissemination of important clinical information. At the same time, the proposed rule provides a clear benchmark for compelling evidence that the new therapy is not beneficial.
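The trade-off driving futility monitoring can be illustrated with a deliberately simple rule. The rule below (stop at 50% information if the interim z-statistic falls below zero) and all parameters are assumptions for this sketch, not the rule proposed by Freidlin, Korn, and Gray; it only demonstrates the operating characteristics one compares: expected sample size under the null versus power retained under the alternative.

```python
import random
import statistics

# Illustrative futility rule (an assumption, not the authors' proposed
# rule): stop at 50% information if the interim z-statistic is below 0.
FUTILITY_Z, FINAL_Z, N = 0.0, 1.96, 200

def trial(rng, effect, monitor):
    """Return (sample size used, trial positive?) for one trial."""
    data = [rng.gauss(effect, 1.0) for _ in range(N)]
    half = N // 2
    z_interim = statistics.mean(data[:half]) * half ** 0.5
    if monitor and z_interim < FUTILITY_Z:
        return half, False                      # stopped early for futility
    z_final = statistics.mean(data) * N ** 0.5
    return N, z_final >= FINAL_Z

def operating_characteristics(effect, monitor, sims=3000, seed=2):
    """Average sample size and empirical power over simulated trials."""
    rng = random.Random(seed)
    results = [trial(rng, effect, monitor) for _ in range(sims)]
    return (statistics.mean(n for n, _ in results),
            statistics.mean(pos for _, pos in results))

n_null, _ = operating_characteristics(effect=0.0, monitor=True)
power_mon = operating_characteristics(effect=0.2, monitor=True)[1]
power_fix = operating_characteristics(effect=0.2, monitor=False)[1]
print(f"avg sample size under the null with monitoring: {n_null:.0f} (vs {N} fixed)")
print(f"power: monitored {power_mon:.3f} vs fixed {power_fix:.3f}")
```

Under the null about half the trials stop at the halfway point, cutting the expected sample size substantially, while under the alternative the interim statistic rarely falls below zero, so almost no power is lost; a well-calibrated rule tunes this balance across the whole course of the trial rather than at a single look.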
As a member of the endpoint subcommittee of the Head and Neck Steering Committee, Dr. Freidlin is participating in a comprehensive project to standardize clinical endpoint definitions and the corresponding statistical analysis methodology for head and neck clinical trials.
Dodd LE, Korn EL, Freidlin B, Rubinstein LV, Mooney MM, Jaffe CC, Dancey J. Onsite image evaluations and independent image blinded reads: Close cousins or distant relatives? Reply. J Clin Oncol 2009:27;2104-5.
Dodd LE, Korn EL, Freidlin B, Rubinstein L, Dancey J, Jaffe CC, Mooney M. Are onsite image evaluations the solution or are we trading one problem for another? Reply. J Clin Oncol 2009:27;E265.
Dodd LE, Korn EL, Freidlin B, Gray R, Bhattacharya S. An audit strategy for progression-free survival. Biometrics (In press). http://www.ncbi.nlm.nih.gov/pubmed/21210772
Freidlin B, Jiang W, Simon R. The cross-validated adaptive signature design. Clin Cancer Res 2010:16;691-8. http://www.ncbi.nlm.nih.gov/pubmed/20068112
Freidlin B, Korn EL. Stopping clinical trials early for benefit: Impact on estimation. Clin Trials 2009:6;119-25. http://www.ncbi.nlm.nih.gov/pubmed/19342463
Freidlin B, Korn EL. Monitoring for lack of benefit: a critical component of a randomized clinical trial. J Clin Oncol 2009:27;629-33. http://www.ncbi.nlm.nih.gov/pubmed/19064977
Freidlin B, Korn EL. Biomarker-adaptive clinical trial designs. Pharmacogenomics 2010:11;1679-82. http://www.ncbi.nlm.nih.gov/pubmed/21142910
Freidlin B, McShane LM, Korn EL. Randomized clinical trials with biomarkers: Design issues. J Natl Cancer Inst 2010:102;152-60. http://www.ncbi.nlm.nih.gov/pubmed/20075367
Freidlin B, Korn EL, Gray R. A general inefficacy interim monitoring rule for randomized clinical trials. Clinical Trials 2010:7;197-208. http://www.ncbi.nlm.nih.gov/pubmed/20423925
Korn EL, Freidlin B, Mooney M. Stopping or reporting early for positive results in randomized clinical trials: The National Cancer Institute Cooperative Group experience from 1990 to 2005. J Clin Oncol 2009:27;1712-21. http://www.ncbi.nlm.nih.gov/pubmed/19237631
Korn EL, Freidlin B, Mooney M. Stopping trials early for positive results: The need to know how much. Reply. J Clin Oncol 2009:27;E30.
Korn EL, Freidlin B, Mooney M, Abrams JS. Accrual experience of NCI cooperative group phase III trials activated in 2000-2007. J Clin Oncol 2010:28;5197-201. http://www.ncbi.nlm.nih.gov/pubmed/21060029
Korn EL, Freidlin B. Inefficacy monitoring procedures in randomized clinical trials: The need to report. Am J Bioeth 2011:11;2-10. http://www.ncbi.nlm.nih.gov/pubmed/21400374
Korn EL, Freidlin B. Outcome-adaptive randomization: Is it useful? J Clin Oncol 2011:29;771-6. http://www.ncbi.nlm.nih.gov/pubmed/21172882
Korn EL, Freidlin B. Causal inference for definitive clinical endpoints in a randomized clinical trial with intervening nonrandomized treatments. J Clin Oncol 2010:28;3800-2. http://www.ncbi.nlm.nih.gov/pubmed/20660828
Korn EL, Dodd LE, Freidlin B. Measurement error in the timing of events: Effect on survival analyses in randomized clinical trials. Clin Trials 2010:7;626-33. http://www.ncbi.nlm.nih.gov/pubmed/20819840