no code implementations • 23 Dec 2020 • Tian Huang, Tao Luo, Joey Tianyi Zhou
We use models of the same precision for both the forward and backward passes in order to reduce memory usage during training.
no code implementations • 28 Jan 2021 • Huan Wang, Tian Huang, Steve Granick
With raw NMR spectra available in a public repository, we confirm boosted mobility during the click chemical reaction (Science 2020, 369, 537) regardless of the ordering of the magnetic field gradient (linearly increasing, linearly decreasing, or a random sequence).
Soft Condensed Matter
no code implementations • 19 Mar 2021 • Tian Huang, Siong Thye Goh, Sabrish Gopalakrishnan, Tao Luo, Qianxiao Li, Hoong Chuin Lau
In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters with fewer calls to the QUBO solver.
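To make the role of the penalty parameter concrete, here is a hedged sketch of the setting this entry addresses: a constrained problem posed as a QUBO, where a constraint is folded into the objective via a penalty term weighted by `lam`. The toy one-item-selection problem, the brute-force solver, and all names are illustrative, not from the paper.

```python
# Toy QUBO: pick exactly one of n items with values c, maximising total value.
# The constraint "exactly one item" becomes the penalty term lam*(sum x - 1)^2.
from itertools import product

def qubo_energy(x, c, lam):
    value = -sum(ci * xi for ci, xi in zip(c, x))  # objective: maximise value
    penalty = lam * (sum(x) - 1) ** 2              # constraint: pick exactly one
    return value + penalty

def solve(c, lam):
    """Brute-force minimiser standing in for a real QUBO solver."""
    return min(product([0, 1], repeat=len(c)),
               key=lambda x: qubo_energy(x, c, lam))

c = [3.0, 2.0, 2.5]
print(solve(c, lam=0.5))   # penalty too weak: optimum violates the constraint
print(solve(c, lam=10.0))  # penalty strong enough: exactly one item chosen
```

Too small a `lam` makes constraint violations cheaper than the value they gain, while too large a `lam` can swamp the objective on noisy hardware, which is why choosing the penalty parameter per instance, with few solver calls, matters.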
no code implementations • 26 Mar 2021 • Tian Huang, Tao Luo, Ming Yan, Joey Tianyi Zhou, Rick Goh
For example, the quantisation-aware training (QAT) method involves two copies of the model parameters, which is usually beyond the capacity of on-chip memory in edge devices.
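A minimal sketch of why standard QAT needs those two parameter copies: a full-precision "master" copy receives gradient updates, while a quantised copy is used in the forward pass (gradients flow through via the straight-through estimator). The toy linear-regression task and all names are illustrative, not from the paper.

```python
import numpy as np

def fake_quantise(w, bits=8, w_max=1.0):
    """Round weights onto a uniform grid (copy #2, used in the forward pass)."""
    levels = 2 ** (bits - 1) - 1
    scale = w_max / levels
    return np.clip(np.round(w / scale), -levels, levels) * scale

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 4))
true_w = np.array([0.5, -0.25, 0.75, 0.1])
y = x @ true_w

w_fp = rng.normal(scale=0.1, size=4)   # copy #1: full-precision master weights
lr = 0.1
for _ in range(200):
    w_q = fake_quantise(w_fp)          # copy #2: quantised weights
    err = x @ w_q - y                  # forward pass uses the quantised copy
    grad = x.T @ err / len(x)          # straight-through estimator: gradient
    w_fp -= lr * grad                  #   w.r.t. w_q is applied to w_fp

print(np.round(fake_quantise(w_fp), 3))
```

Holding both `w_fp` and `w_q` in memory is exactly the overhead that the entry above identifies as problematic for on-chip training at the edge.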
no code implementations • 24 Aug 2022 • Yugeng Huang, Haitao Liu, Tian Huang
We also provide a novel strategy for determining the kernel width which ensures that our method can efficiently exploit information redundancy supplied by relative motions in the presence of many outliers.
no code implementations • 9 May 2023 • Myat Thu Linn Aung, Daniel Gerlinghoff, Chuping Qu, Liwei Yang, Tian Huang, Rick Siow Mong Goh, Tao Luo, Weng-Fai Wong
Brain-inspired spiking neural networks (SNNs) replace the multiply-accumulate operations of traditional neural networks with integrate-and-fire neurons, aiming at greater energy efficiency.
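A minimal sketch of the integrate-and-fire neuron mentioned above: instead of one multiply-accumulate per input, the neuron only adds a weight when the corresponding input spikes, and emits a spike of its own when its membrane potential crosses a threshold. Names, parameter values, and the reset-to-zero rule are illustrative assumptions, not details from the paper.

```python
def integrate_and_fire(weights, spike_train, threshold=1.0):
    """Return the neuron's output spikes, one per timestep.

    weights     : one weight per presynaptic neuron
    spike_train : list of timesteps, each a list of 0/1 input spikes
    """
    v = 0.0                      # membrane potential
    out = []
    for spikes in spike_train:
        # accumulate-only update: weights gated by binary spikes (no multiply)
        v += sum(w for w, s in zip(weights, spikes) if s)
        if v >= threshold:       # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

w = [0.6, 0.3, 0.2]
train = [[1, 0, 0], [0, 1, 1], [1, 1, 0], [0, 0, 0]]
print(integrate_and_fire(w, train))  # -> [0, 1, 0, 0]
```

Because the inner loop is pure accumulation gated by binary spikes, hardware implementations can skip the multipliers that dominate the energy cost of conventional layers.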
no code implementations • 23 Nov 2023 • Tian Huang, Ke Li
Finally, we apply our method to a practical problem, protein structure prediction (PSP).
no code implementations • CCL 2022 • Tian Huang, Yanqiu Shao, Wei Li
Semantic dependency graphs are a deep semantic analysis method in NLP, capable of analysing the semantic relations between the words of a sentence. Targeting the characteristics of Classical Chinese, and building on an annotation scheme we designed for Classical Chinese semantic dependency graphs, we annotated a 3,000-sentence Classical Chinese semantic dependency graph bank drawn from the Twenty-Four Histories (《二十四史》), with an inter-annotator agreement (kappa) of 78.83%. Through comparison with a Modern Chinese semantic dependency graph bank, we report basic corpus statistics and analyse the semantic characteristics and regularities of Classical Chinese. The statistics show that the semantic distribution of Classical Chinese broadly follows Zipf's law, and that its event descriptions carry strongly historical-narrative and formal stylistic features: they centre on biographical accounts of individuals, describe peripheral roles such as time and place in detail, narrate in a calm and objective voice, and largely lack modifiers expressing modality, mood, degree, or temporal state.