Alice submits a number of papers to a machine learning conference and knows the quality of her papers. Given raw grades assigned by independent reviewers, can Bob obtain accurate estimates of the papers' grades by asking Alice questions about the ground truth? In this talk, we address this problem in the case where the raw grades are generated by an additive model and Alice's payoff is an additive convex utility over all her papers. First, we prove that if Alice is truth-telling, in the sense that her payoff is maximized when she truthfully answers the questions about the underlying grades, then the questions must be formulated as pairwise comparisons between her papers. Next, we show that if Alice is required to provide a ranking of her papers, which is the most refined question expressible via pairwise comparisons, she would be truth-telling. Moreover, by leveraging the true ranking provided by Alice, we demonstrate that the adjusted grades produced by Bob achieve the optimal estimation performance in certain regimes among all truth-telling ways of eliciting information. In particular, the adjusted grades are significantly more accurate than the raw grades when the number of papers is large and the raw grades are very noisy. Finally, we conclude the talk with several extensions and refinements for practical considerations. This talk is based on ongoing work and arXiv:2110.14802.
University of Pennsylvania
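The abstract does not spell out how Bob computes the adjusted grades. The sketch below assumes the owner-assisted mechanism of arXiv:2110.14802, in which the adjustment is the least-squares (isotonic) projection of the raw grades onto the set of grade vectors consistent with Alice's reported ranking, computed here with the pool-adjacent-violators algorithm. The function names and the toy data are illustrative, not taken from the talk.

```python
import numpy as np

def pava_nondecreasing(y):
    """Pool Adjacent Violators: least-squares fit of a nondecreasing
    sequence to y (unit weights), via a stack of pooled blocks."""
    sums, counts = [], []
    for v in y:
        s, c = float(v), 1
        # pool with the previous block while its mean exceeds ours
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

def adjusted_grades(raw, ranking):
    """Project raw grades onto the cone consistent with the owner's ranking.
    ranking[i] is the index of the i-th best paper, so the adjusted grades
    must be nonincreasing along `ranking`."""
    ordered = np.asarray(raw, dtype=float)[ranking]
    fit = -pava_nondecreasing(-ordered)   # nonincreasing isotonic fit
    out = np.empty_like(fit)
    out[ranking] = fit                    # scatter back to original order
    return out

# Toy example: noisy raw grades, with Alice truthfully ranking her papers.
rng = np.random.default_rng(0)
true = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
raw = true + rng.normal(scale=2.0, size=true.size)
ranking = np.argsort(-true)
adj = adjusted_grades(raw, ranking)
print("raw MSE:     ", np.mean((raw - true) ** 2))
print("adjusted MSE:", np.mean((adj - true) ** 2))
```

Consistent with the claim in the abstract, the gain from the projection is most pronounced when the number of papers is large and the reviewer noise is heavy relative to the gaps between true grades.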