no code implementations • WMT (EMNLP) 2021 • Hengchao Shang, Ting Hu, Daimeng Wei, Zongyao Li, Jianfei Feng, Zhengzhe Yu, Jiaxin Guo, Shaojun Li, Lizhi Lei, Shimin Tao, Hao Yang, Jun Yao, Ying Qin
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 Efficiency Shared Task.
no code implementations • 22 Oct 2024 • Haoran Lin, Xianzhi Yu, Kang Zhao, Lu Hou, Zongyuan Zhan, Stanislav Kamenev, Han Bao, Ting Hu, Mingkai Wang, Qixin Chang, Siyue Sui, Weihao Sun, Jiaxin Hu, Jun Yao, Zekun Yin, Cheng Qian, Ying Zhang, Yinfei Pan, Yu Yang, Weiguo Liu
In this work, we propose FastAttention, which pioneers the adaptation of the FlashAttention series to NPUs and low-resource GPUs to boost LLM inference efficiency.
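For orientation, a minimal NumPy sketch of the blockwise online-softmax idea that the FlashAttention series is built on (the generic algorithm only, not the FastAttention NPU/GPU kernels described in the paper):

```python
import numpy as np

def tiled_attention(Q, K, V, block=64):
    """Blockwise attention with online softmax (FlashAttention-style).

    Numerically equivalent to softmax(Q K^T / sqrt(d)) V, but K/V are
    visited in tiles so the full attention matrix is never materialized.
    """
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((n, d))
    row_max = np.full(n, -np.inf)   # running max per query row
    row_sum = np.zeros(n)           # running softmax denominator

    for start in range(0, K.shape[0], block):
        k, v = K[start:start + block], V[start:start + block]
        s = (Q @ k.T) * scale                       # scores for this tile
        new_max = np.maximum(row_max, s.max(axis=1))
        correction = np.exp(row_max - new_max)      # rescale old accumulators
        p = np.exp(s - new_max[:, None])
        row_sum = row_sum * correction + p.sum(axis=1)
        out = out * correction[:, None] + p @ v
        row_max = new_max

    return out / row_sum[:, None]
```

The output matches the naive `softmax(Q @ K.T / sqrt(d)) @ V` up to floating-point error, while only ever holding one `block`-sized tile of scores in memory.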
no code implementations • 12 Jun 2024 • Ryan Zhou, Jaume Bacardit, Alexander Brownlee, Stefano Cagnoni, Martin Fyvie, Giovanni Iacca, John McCall, Niki van Stein, David Walker, Ting Hu
Artificial intelligence methods are being increasingly applied across various domains, but their often opaque nature has raised concerns about accountability and trust.
Explainable Artificial Intelligence (XAI)
1 code implementation • 31 May 2024 • Yang Chen, Tian He, Junfeng Fu, Ling Wang, Jingcai Guo, Ting Hu, Hong Cheng
To address these challenges, we introduce a novel skeleton-based training framework (C$^2$VL) based on cross-modal contrastive learning, which uses progressive distillation to learn task-agnostic human skeleton action representations from vision-language knowledge prompts.
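The generic building block here is a symmetric cross-modal contrastive loss. A minimal PyTorch sketch of that objective (the standard InfoNCE form, not C$^2$VL's exact loss; the progressive distillation schedule is omitted):

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(skel_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning skeleton and text embeddings.

    skel_emb, text_emb: (batch, dim) features from the two encoders;
    matching rows are positive pairs, all other rows are negatives.
    """
    skel = F.normalize(skel_emb, dim=-1)
    text = F.normalize(text_emb, dim=-1)
    logits = skel @ text.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(skel.size(0), device=skel.device)
    # Average the skeleton-to-text and text-to-skeleton directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```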
no code implementations • 13 Sep 2023 • Ting Hu, Christoph Meinel, Haojin Yang
Increasingly large language models (LLMs) demonstrate stronger language understanding and generation capabilities, while the memory demand and computation cost of fine-tuning them on downstream tasks are non-negligible.
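One representative way to cut that fine-tuning cost is to freeze the pretrained weights and train only a low-rank update, as in LoRA. A minimal sketch of that idea (illustrative only; the abstract snippet does not name the paper's own method):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA-style).

    A representative parameter-efficient fine-tuning module; only the
    small matrices A and B receive gradients.
    """
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False   # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale
```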
no code implementations • 12 Aug 2023 • Zhendong Sha, Yuanzhu Chen, Ting Hu
Through genome-wide association studies (GWAS), disease-susceptibility genetic variants can be identified by comparing the genetic data of individuals with and without a specific disease.
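A textbook GWAS baseline for that comparison is a per-SNP chi-square test on case/control genotype counts. A hedged sketch (a generic baseline, not the paper's specific pipeline; assumes both cases and controls are present):

```python
import numpy as np
from scipy.stats import chi2_contingency

def snp_association(genotypes, phenotype):
    """Per-SNP case/control association via a chi-square test.

    genotypes: (n_individuals, n_snps) array of allele counts {0, 1, 2}
    phenotype: (n_individuals,) binary disease status {0, 1}
    Returns one p-value per SNP.
    """
    pvals = []
    for snp in genotypes.T:
        # 2x3 contingency table: disease status vs. genotype class
        table = np.array([[np.sum((phenotype == y) & (snp == g))
                           for g in (0, 1, 2)] for y in (0, 1)])
        table = table[:, table.sum(axis=0) > 0]  # drop empty genotype columns
        _, p, _, _ = chi2_contingency(table)
        pvals.append(p)
    return np.array(pvals)
```

SNPs with p-values below a multiple-testing-corrected threshold are the candidate disease-associated variants.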
no code implementations • 23 Jun 2023 • Ryan Zhou, Ting Hu
We then focus on how evolutionary computing can be used in XAI/XML, and review some approaches which incorporate EC techniques.
Explainable Artificial Intelligence (XAI)
no code implementations • 15 Nov 2022 • Ting Hu, Gabriela Ochoa, Wolfgang Banzhaf
Genotype-to-phenotype mappings translate genotypic variations such as mutations into phenotypic changes.
no code implementations • 29 Oct 2022 • Ting Hu, Christoph Meinel, Haojin Yang
We further explore the limit of quantization bit-width and show that OCS can quantize BERT-Base and BERT-Large to 3 bits while retaining 98% and 96% of the performance on the GLUE benchmark, respectively.
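To make "3-bit" concrete: below is a minimal sketch of symmetric uniform weight quantization to k bits (the basic operation only; the paper's OCS scheme adds its own machinery on top of ideas like this):

```python
import torch

def quantize_weights(w: torch.Tensor, bits: int = 3):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Maps weights onto 2**bits integer levels, then dequantizes; the
    rounding error is what k-bit methods must keep from hurting accuracy.
    """
    qmax = 2 ** (bits - 1) - 1                     # e.g. 3 for signed 3-bit
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                               # dequantized weights
```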
no code implementations • 24 Feb 2022 • Zengfu Hou, Siyuan Cheng, Ting Hu
In hyperspectral imagery, high-quality spectral signals convey the subtle spectral differences that distinguish similar materials, providing a unique advantage for anomaly detection.
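The classic baseline in this setting is the Reed-Xiaoli (RX) detector, which scores each pixel by its Mahalanobis distance from the background statistics. A minimal NumPy sketch (the standard baseline, not the method proposed in the paper):

```python
import numpy as np

def rx_detector(cube):
    """Reed-Xiaoli (RX) anomaly detector for a hyperspectral cube.

    cube: (rows, cols, bands). Each pixel is scored by its Mahalanobis
    distance from the global mean/covariance; high scores are anomalies.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv for stability
    centered = X - mu
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(h, w)
```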
no code implementations • 27 Jan 2022 • Ting Hu
To learn the spatial response function and the point spread function from the image pairs to be fused, we propose a Dirichlet network, where both functions are properly constrained.
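As one illustration of the kind of constraint involved: a point spread function can be kept non-negative and sum-to-one by parameterizing it on the probability simplex, which is exactly the support of a Dirichlet distribution. The sketch below uses a softmax parameterization as a stand-in (illustrative only; the paper's Dirichlet network is not reproduced here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexPSF(nn.Module):
    """Learnable point spread function constrained to the simplex.

    A softmax over the kernel entries keeps them >= 0 and summing to 1,
    so the learned blur conserves image intensity.
    """
    def __init__(self, size=7):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(size, size))

    def forward(self, img):
        kernel = F.softmax(self.logits.flatten(), dim=0).view(
            1, 1, *self.logits.shape)
        pad = self.logits.shape[0] // 2
        # img: (batch, 1, H, W); returns a same-size blurred image
        return F.conv2d(img, kernel, padding=pad)
```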
no code implementations • 29 Sep 2021 • Ryan Zhou, Christian Muise, Ting Hu
A key property of this representation is that there are multiple representations of a network which can be obtained by permuting the order of the neurons.
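This permutation symmetry is easy to verify directly: reordering the hidden neurons (rows of the first weight matrix and bias, columns of the second) leaves the network's function unchanged. A small self-contained check:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # hidden layer
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # output layer

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Permute the hidden neurons: reorder rows of W1/b1 and columns of W2.
perm = rng.permutation(4)
x = rng.normal(size=3)
print(np.allclose(mlp(x, W1, b1, W2, b2),
                  mlp(x, W1[perm], b1[perm], W2[:, perm], b2)))
# True: the permuted network computes exactly the same function.
```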
no code implementations • LREC 2020 • Jonathan Sauder, Ting Hu, Xiaoyin Che, Goncalo Mordido, Haojin Yang, Christoph Meinel
Recently, various approaches based on Generative Adversarial Networks (GANs) have also been proposed.
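For readers unfamiliar with the adversarial setup, a minimal sketch of one generic GAN training step (the standard objective on toy tensors, not the specific architecture used in the paper):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 8)      # stand-in for a batch of real samples
z = torch.randn(64, 16)        # noise input to the generator

# Discriminator step: push real -> 1, fake -> 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: make the discriminator label fakes as real.
loss_g = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```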
no code implementations • 3 Feb 2019 • Yunwen Lei, Ting Hu, Guiying Li, Ke Tang
While the behavior of SGD is well understood in the convex learning setting, the existing theoretical results for SGD applied to nonconvex objective functions are far from mature.
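For context, the setting this line refers to, stated in hedged form (the SGD update and a typical nonconvex guarantee of the standard Ghadimi-Lan type, not the paper's specific theorem):

```latex
% SGD update with stochastic gradient g_t and step size \eta_t:
x_{t+1} = x_t - \eta_t g_t, \qquad \mathbb{E}[g_t \mid x_t] = \nabla f(x_t).
% For smooth nonconvex f, typical results control the gradient norm
% rather than the optimality gap, e.g. with \eta_t \propto 1/\sqrt{t}:
\min_{1 \le t \le T} \mathbb{E}\big[\|\nabla f(x_t)\|^2\big]
  = O\!\left(1/\sqrt{T}\right).
```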
no code implementations • 17 Dec 2014 • Jun Fan, Ting Hu, Qiang Wu, Ding-Xuan Zhou
The error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is shown to be always true if the bandwidth parameter tends to 0 at an appropriate rate.
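For reference, one standard form of the empirical minimum error entropy criterion with a Parzen window of bandwidth $h$ (notation illustrative; see the paper for the exact kernels and rate conditions):

```latex
% Empirical MEE risk: Renyi's quadratic entropy of the errors
% e_i = y_i - f(x_i), estimated with a Gaussian Parzen window G_h:
\widehat{H}_h(f) = -\log \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m}
  G_h\!\big(e_i - e_j\big), \qquad
  G_h(t) = \exp\!\big(-t^2 / (2h^2)\big).
% Error entropy consistency asks that \widehat{H}_h of the learned f
% approach the minimum error entropy as m \to \infty and h \to 0
% at an appropriate rate.
```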