no code implementations • CCL 2020 • Yuchen Wang, Miaozhe Lin, Jiefan Zhan
word2vec is one of the most important word-embedding algorithms in natural language processing. To address the vanishing sample contribution that can arise when random negative sampling is used as the optimization objective, we propose hard negative sampling methods based on cosine distance that can be applied to the CBOW and Skip-gram frameworks: HNS-CBOW and HNS-SG. The original random negative sampling process is split into two steps: first, compute the cosine distance between the random negative samples and the target word; then, update the parameters using only the hard negatives with the smallest distances. Using English Wikipedia as the experimental corpus, we quantitatively evaluate the optimized algorithms on public semantic-syntactic datasets. Experiments show that the quality of the optimized embeddings is significantly better than that of the original method. Moreover, compared with publicly released pre-trained word vectors such as GloVe, higher accuracy can be achieved on a smaller corpus.
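The two-step procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the candidate/hard-negative counts, and the use of a plain NumPy embedding matrix are all assumptions made for clarity.

```python
import numpy as np

def hard_negative_sample(target_vec, embeddings, n_candidates=20, n_hard=5, rng=None):
    """Sketch of two-step hard negative sampling:
    (1) draw random negative candidates from the vocabulary,
    (2) keep the candidates closest to the target word by cosine distance.
    """
    if rng is None:
        rng = np.random.default_rng()
    vocab_size = embeddings.shape[0]
    # Step 1: random negative candidates (as in ordinary negative sampling).
    candidates = rng.choice(vocab_size, size=n_candidates, replace=False)
    cand_vecs = embeddings[candidates]
    # Cosine similarity between the target word and each candidate.
    sims = cand_vecs @ target_vec / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(target_vec) + 1e-9
    )
    # Step 2: "hard" negatives are those with the smallest cosine distance,
    # i.e. the largest cosine similarity; only these update the parameters.
    return candidates[np.argsort(-sims)[:n_hard]]
```

In a full training loop these hard negatives would replace the uniformly drawn negatives in the CBOW or Skip-gram gradient update.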
1 code implementation • 7 Dec 2021 • Yichen Huang, Yuchen Wang, Yik-Cheung Tam
Our model ranks second in the official evaluation on the object coreference resolution task with an F1 score of 73.3% after model ensembling.
no code implementations • 13 Oct 2021 • Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.
1 code implementation • 25 Mar 2021 • Chuhua Wang, Yuchen Wang, Mingze Xu, David J. Crandall
We propose to predict the future trajectories of observed agents (e.g., pedestrians or vehicles) by estimating and using their goals at multiple time scales.
Ranked #1 on Trajectory Prediction on ETH/UCY
no code implementations • 2 Mar 2021 • Javier Rubio-Herrero, Yuchen Wang
The present paper introduces a data-driven framework for describing the time-varying nature of an SIRD model in the context of COVID-19.
1 code implementation • 1 Dec 2020 • Yuchen Wang, Matthieu Chan Chee, Ziyad Edher, Minh Duc Hoang, Shion Fujimori, Sornnujah Kathirgamanathan, Jesse Bettencourt
Black Sigatoka disease severely decreases global banana production, and climate change aggravates the problem by altering fungal species distributions.
no code implementations • 8 Oct 2020 • Yuchen Wang, Mingze Xu, John Paden, Lora Koenig, Geoffrey Fox, David Crandall
Understanding the structure of Earth's polar ice sheets is important for modeling how global warming will impact polar ice and, in turn, the Earth's climate.
no code implementations • 30 Jul 2020 • Yuchen Wang, Zixuan Hu, Barry C. Sanders, Sabre Kais
A qudit is a multi-level computational unit that serves as an alternative to the conventional two-level qubit.
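As a minimal, hypothetical illustration of the d-level idea (not taken from the paper): a pure qudit state can be represented as a normalized vector in a d-dimensional complex Hilbert space, with the qubit being the special case d = 2.

```python
import numpy as np

d = 3  # a qutrit: the d = 3 qudit
# Equal superposition of the three basis levels |0>, |1>, |2>.
state = np.ones(d, dtype=complex) / np.sqrt(d)
# A valid pure state must be normalized: <psi|psi> = 1.
assert np.isclose(np.vdot(state, state).real, 1.0)
```

Measuring this state in the computational basis would yield each of the three levels with probability 1/3, whereas a qubit is limited to two outcomes.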
Quantum Physics
3 code implementations • 2 Mar 2019 • Yu Yao, Mingze Xu, Yuchen Wang, David J. Crandall, Ella M. Atkins
Recognizing abnormal events such as traffic violations and accidents in natural driving scenes is essential for successful autonomous driving and advanced driver assistance systems.
Ranked #1 on Traffic Accident Detection on A3D
no code implementations • ECCV 2018 • Mingze Xu, Chenyou Fan, Yuchen Wang, Michael S. Ryoo, David J. Crandall
In this paper, we wish to solve two specific problems: (1) given two or more synchronized third-person videos of a scene, produce a pixel-level segmentation of each visible person and identify corresponding people across different views (i.e., determine who in camera A corresponds with whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a mobile or wearable camera, segment and identify the camera wearer in the third-person videos.