Optimal Kernel Quantile Learning with Random Features

Published: 2024-10-12

Title: Optimal Kernel Quantile Learning with Random Features

Time: 9:00 a.m., October 13, 2024

Venue: Room 104, Teaching and Research Building, Nanhu Campus

Host: School of Mathematics and Statistics / Office of Scientific Research

Speaker: Xingdong Feng (冯兴东)

Speaker bio: Xingdong Feng received his Ph.D. from the University of Illinois at Urbana-Champaign. He is currently Dean of the School of Statistics and Management at Shanghai University of Finance and Economics, where he is a professor of statistics and a doctoral supervisor. His research interests include dimension reduction, robust methods, quantile regression and its applications to economic problems, statistical computing for big data, and reinforcement learning. He has published extensively in leading statistics and econometrics journals (JASA, AoS, JRSSB, Biometrika, JoE) as well as in top artificial intelligence journals and conferences (JMLR, NeurIPS, ICML). He was named an Elected Member of the International Statistical Institute in 2018. He served as a member of the 7th National Committee for Statistics Textbook Review (Data Science and Big Data Applications group) from 2019, a member of the 8th Discipline Evaluation Group of the State Council (Statistics) from 2020, a member of the National Steering Committee for the Professional Master's Degree in Applied Statistics from 2022, and, from 2023, Vice President of the National Industrial Statistics Teaching and Research Association and Standing Director of the Probability and Statistics Society of the Chinese Mathematical Society. Since 2022 and 2023, respectively, he has served as an Associate Editor of the leading international statistics journals Annals of Applied Statistics and Statistica Sinica, and as an editorial board member of the leading domestic statistics journal 《统计研究》 (Statistical Research).

Abstract: The random feature (RF) approach is a well-established and efficient tool for scalable kernel methods, but the existing literature has focused primarily on kernel ridge regression with random features (KRR-RF), which is limited in handling heterogeneous data with heavy-tailed noise. This paper presents a generalization study of kernel quantile regression with random features (KQR-RF), accounting for the non-smoothness of the check loss by introducing a refined error decomposition and establishing a novel connection between KQR-RF and KRR-RF. Our study establishes capacity-dependent learning rates for KQR-RF under mild conditions on the number of RFs, and these rates are minimax optimal up to logarithmic factors. Importantly, our theoretical results, which utilize a data-dependent sampling strategy, extend to the agnostic setting where the target quantile function may not lie in the assumed kernel space. By slightly modifying our assumptions, the capacity-dependent error analysis also applies to Lipschitz continuous losses, enabling broader applications in the machine learning community. Simulation experiments and a real-data application validate our theoretical findings.
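For readers unfamiliar with the method, below is a minimal sketch (not the speaker's implementation; all function and parameter names are hypothetical) of the two ingredients the abstract combines: random Fourier features, which replace a Gaussian kernel with an explicit finite-dimensional feature map, and quantile regression, which minimizes the ridge-penalized check loss ρ_τ(r) = r(τ − 1{r < 0}) by subgradient descent.

```python
import numpy as np

def random_fourier_features(X, n_features=200, sigma=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
    via z(x) = sqrt(2/D) * cos(W'x + b), W ~ N(0, I/sigma^2), b ~ U(0, 2*pi)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def check_loss_subgrad(residual, tau):
    # Subgradient of the check (pinball) loss rho_tau(r) = r * (tau - 1{r < 0}).
    return np.where(residual >= 0, tau, tau - 1.0)

def kqr_rf(X, y, tau=0.5, n_features=200, lam=1e-3,
           lr=0.5, n_iter=2000, sigma=1.0):
    """Fit the tau-th conditional quantile on random features by minimizing
    (1/n) * sum_i rho_tau(y_i - z_i'theta) + lam * ||theta||^2."""
    Z = random_fourier_features(X, n_features, sigma)
    theta = np.zeros(n_features)
    n = len(y)
    for _ in range(n_iter):
        r = y - Z @ theta
        g = -(Z.T @ check_loss_subgrad(r, tau)) / n + 2.0 * lam * theta
        theta -= lr * g
    return theta, Z

# Toy usage: heteroscedastic data, where the 0.9-quantile differs from the mean.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(X[:, 0]) + (0.5 + 0.5 * np.abs(X[:, 0])) * rng.standard_normal(500)
theta, Z = kqr_rf(X, y, tau=0.9)
print("fitted 0.9-quantile at first 3 points:", (Z @ theta)[:3])
```

Replacing the check loss with any other Lipschitz continuous loss in the subgradient step recovers the broader setting the abstract mentions; the number of random features n_features is the quantity the paper's learning rates constrain.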

