no code implementations • CCL 2021 • Chenlin Zhang, Mingwen Wang, Yiming Tan, Ming Yin, Xinyi Zhang
"This paper takes Chinese euphemisms as its research object. Based on large-scale manual annotation and supervised machine-learning classification, we achieve high-accuracy automatic recognition of euphemisms, and on this basis conduct a quantitative statistical analysis of the diachronic development of euphemisms in the People's Daily (《人民日报》) from 1946 to 2017. From the perspective of large-scale data, we explore the diachronic evolution of euphemisms and the covariation between euphemisms and society, verifying Gresham's law of language and the law of linguistic renewal."
1 code implementation • 16 Feb 2022 • Chenlin Zhang, Jianxin Wu, Yin Li
Self-attention based Transformer models have demonstrated impressive results for image classification and object detection, and more recently for video understanding.
Ranked #2 on Audio-Visual Event Localization on UnAV-100