Early stopping is a widely used regularization technique for avoiding overfitting in iterative algorithms. In particular, Split Linearized Bregman Iteration methods are often equipped with an early stopping rule to achieve model selection consistency, i.e., to recover the structural sparsity of parameters. However, the theoretical early stopping rule with model selection consistency requires an incoherence condition, which is unknown in applications. In this work, we propose a data-adaptive early stopping rule for False Discovery Rate (FDR) control under the framework of Knockoff methods. An inflated FDR bound is proved under a relaxation of the exchangeability condition in traditional Knockoff methods. The effectiveness of the proposed method is demonstrated by both simulations and two real-world applications: Alzheimer’s Disease (AD) and partial-order ranking of basketball teams.
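For readers unfamiliar with the Knockoff framework the talk builds on, the following is a minimal sketch of the standard knockoff+ selection threshold, which picks the smallest cutoff at which an estimate of the false discovery proportion falls below the target level q. The statistics `W` here are illustrative placeholder values, not from the speaker's work; the talk's data-adaptive early stopping rule operates within this general framework.

```python
import numpy as np

def knockoff_threshold(W, q):
    """Knockoff+ threshold: smallest t with estimated FDP <= q.

    W : knockoff statistics, one per variable (large positive => likely signal).
    q : target FDR level.
    """
    # Candidate thresholds are the nonzero magnitudes of the statistics.
    ts = np.sort(np.abs(W[W != 0]))
    for t in ts:
        # Estimated false discovery proportion at threshold t.
        fdp = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp <= q:
            return t
    return np.inf  # no threshold achieves the target level

# Illustrative statistics: four clear signals, four near-zero/negative values.
W = np.array([5.0, 4.2, -0.3, 3.8, 0.1, -1.2, 2.9, 0.05])
T = knockoff_threshold(W, q=0.25)
selected = np.where(W >= T)[0]  # indices 0, 1, 3, 6 are selected
```

Selection then simply keeps every variable whose statistic exceeds the threshold; the +1 in the numerator is what yields finite-sample FDR control under the exchangeability condition that the talk relaxes.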

2 May 2022
11:00am - 12:00pm
Where
https://hkust.zoom.us/j/94401529969 (Passcode: hkust)
Speakers/Performers
Miss Wenqi ZENG
HKUST
Organizer(s)
Department of Mathematics
Audience
Alumni, Faculty and staff, PG students, UG students
Language(s)
English