
Changjian Shui

Alumni

Publications

Information Gain Sampling for Active Learning in Medical Image Classification
Fair Representation Learning through Implicit Path Alignment
Qi CHEN
Jiaqi Li
Boyu Wang
Evolving Domain Generalization
Wei Wang
Gezheng Xu
Ruizhi Pu
Jiaqi Li
Fan Zhou
Charles Ling
Boyu Wang
On Learning Fairness and Accuracy on Multiple Subgroups
Gezheng Xu
Qi CHEN
Jiaqi Li
Charles Ling
Boyu Wang
We propose an analysis in fair learning that preserves the utility of the data while reducing prediction disparities under the criterion of group sufficiency. We focus on the scenario where the data contain multiple, or even many, subgroups, each with a limited number of samples. We present a principled method for learning a fair predictor for all subgroups by formulating it as a bilevel objective: in the lower level, the subgroup-specific predictors are learned from a small amount of subgroup data together with the fair predictor; in the upper level, the fair predictor is updated to be close to all subgroup-specific predictors. We further prove that this bilevel objective can effectively control group sufficiency and the generalization error. We evaluate the proposed framework on real-world datasets; the empirical evidence shows consistently improved fair predictions and accuracy comparable to the baselines.
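As a rough illustration of the bilevel scheme described in the abstract, the sketch below (a hypothetical rendering, not the authors' code) adapts a per-subgroup predictor in the lower level with a proximal term that keeps it near a shared fair predictor, then moves the fair predictor toward all subgroup predictors in the upper level. The model, data loaders, and hyperparameters are assumptions.

```python
# Minimal sketch of the bilevel fair-learning objective (assumed details, not the paper's code).
import copy
import torch
import torch.nn as nn

def bilevel_fair_update(fair_model, subgroup_loaders, inner_steps=5,
                        inner_lr=1e-2, outer_lr=1e-2, prox=1.0):
    # Assumes the model outputs one logit per example and labels are 0/1 floats.
    loss_fn = nn.BCEWithLogitsLoss()
    subgroup_params = []

    for loader in subgroup_loaders:
        # Lower level: clone the fair predictor and adapt it on the subgroup's small
        # dataset, with a proximal penalty pulling it toward the fair predictor.
        local = copy.deepcopy(fair_model)
        opt = torch.optim.SGD(local.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(local(x).squeeze(-1), y)
                prox_term = sum(((p - q.detach()) ** 2).sum()
                                for p, q in zip(local.parameters(), fair_model.parameters()))
                (loss + prox * prox_term).backward()
                opt.step()
        subgroup_params.append([p.detach().clone() for p in local.parameters()])

    # Upper level: pull the fair predictor toward the subgroup-specific predictors.
    with torch.no_grad():
        for i, p in enumerate(fair_model.parameters()):
            mean_p = torch.stack([params[i] for params in subgroup_params]).mean(dim=0)
            p.add_(outer_lr * (mean_p - p))
```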
On the benefits of representation regularization in invariance based domain generalization
Deep Active Learning: Unified and Principled Method for Query and Training
Fan Zhou
Boyu Wang
In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning. We provide theoretical insights by modeling the interactive procedure in active learning as distribution matching under the Wasserstein distance. From this analysis we derive a new training loss, which decomposes into optimizing the deep neural network parameters and selecting the query batch through alternating optimization. In addition, the loss for training the deep neural network is naturally formulated as a min-max optimization problem that leverages the unlabeled data. Moreover, the proposed principles reveal an explicit uncertainty-diversity trade-off in query batch selection. Finally, we evaluate the proposed method on several benchmarks, consistently showing better empirical performance and a more time-efficient query strategy than the baselines.
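The uncertainty-diversity trade-off in batch selection can be sketched as a greedy scoring rule over the unlabeled pool, as below. This is a simplified, hypothetical illustration only: the paper's Wasserstein-based min-max training loss is omitted, and the entropy-plus-distance score and the `trade_off` weight are assumptions.

```python
# Hypothetical sketch of an uncertainty-diversity batch query rule (not the paper's method).
import numpy as np

def select_batch(probs, feats, batch_size, trade_off=0.5):
    """probs: (N, C) predicted class probabilities on the unlabeled pool.
       feats: (N, D) feature embeddings of the unlabeled pool."""
    # Uncertainty: predictive entropy of each unlabeled point.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    selected = []
    for _ in range(batch_size):
        if selected:
            # Diversity: distance to the closest already-selected point.
            d = np.linalg.norm(feats[:, None, :] - feats[selected][None, :, :], axis=-1)
            diversity = d.min(axis=1)
        else:
            diversity = np.zeros(len(feats))
        score = (1 - trade_off) * entropy + trade_off * diversity
        score[selected] = -np.inf  # never pick the same point twice
        selected.append(int(score.argmax()))
    return selected
```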