no code implementations • 18 Apr 2024 • Qian Li, Cheng Ji, Shu Guo, Yong Zhao, Qianren Mao, Shangguang Wang, Yuntao Wei, JianXin Li
Existing methods are limited by their neglect of the fact that multiple entity pairs in one sentence share very similar contextual information (i.e., the same text and image), which increases the difficulty of the MMRE task.
no code implementations • 23 Feb 2024 • Yang Deng, Yong Zhao, Moxin Li, See-Kiong Ng, Tat-Seng Chua
Despite the remarkable abilities of Large Language Models (LLMs) to answer questions, they often display a considerable level of overconfidence even when the question does not have a definitive answer.
no code implementations • 4 Feb 2024 • Zhengqiu Zhu, Yong Zhao, Bin Chen, Sihang Qiu, Kai Xu, Quanjun Yin, Jincai Huang, Zhong Liu, Fei-Yue Wang
The transition from CPS-based Industry 4.0 to CPSS-based Industry 5.0 brings new requirements and opportunities to current sensing approaches, especially in light of recent progress in Chatbots and Large Language Models (LLMs).
no code implementations • 29 Sep 2023 • Yong Zhao, Runxin He, Nicholas Kersting, Can Liu, Shubham Agrawal, Chiranjeet Chetia, Yu Gu
The SHAP package is a leading implementation of Shapley values for explaining neural networks implemented in TensorFlow or PyTorch, but it lacks cross-platform support and one-shot deployment, and is highly inefficient.
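The SHAP package referenced above computes Shapley-value attributions; the core idea can be illustrated independently of any framework. Below is a minimal, model-agnostic Monte-Carlo sketch (not the paper's method or the SHAP API): each feature's attribution is its average marginal contribution over random feature orderings, interpolating from a baseline input to the actual input.

```python
import numpy as np

def sampled_shapley(model, x, baseline, n_perm=100, rng=None):
    """Monte-Carlo Shapley attributions for a black-box model f: 1-D array -> float.
    For each random feature ordering, switch features from `baseline` to `x`
    one at a time and credit each feature with the change in model output."""
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        z = baseline.astype(float).copy()
        prev = model(z)
        for j in perm:
            z[j] = x[j]            # reveal feature j
            cur = model(z)
            phi[j] += cur - prev   # marginal contribution of feature j
            prev = cur
    return phi / n_perm
```

For a linear model the estimate is exact: the attribution of each feature equals its coefficient times its deviation from the baseline.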
no code implementations • 17 Feb 2023 • Mufan Sang, Yong Zhao, Gang Liu, John H. L. Hansen, Jian Wu
The proposed models achieve 0.75% EER on the VoxCeleb1 test set, outperforming the previously proposed Transformer-based models and CNN-based models such as ResNet34 and ECAPA-TDNN.
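The EER (equal error rate) metric reported above is the operating point where the false-accept rate equals the false-reject rate. A minimal sketch of how it can be computed from raw verification scores (not the paper's evaluation code):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate from same-speaker (target) and different-speaker
    (nontarget) trial scores. Sweeps every observed score as a threshold
    (accept if score >= threshold) and returns the rate where FAR == FRR."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    frr = np.array([(target_scores < t).mean() for t in thresholds])     # false rejects
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds]) # false accepts
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```

Perfectly separated score distributions yield 0% EER; a 0.75% EER means the two error rates cross at 0.75%.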
no code implementations • CVPR 2023 • Haoliang Zhao, Huizhou Zhou, Yongjun Zhang, Jie Chen, Yitong Yang, Yong Zhao
In the field of binocular stereo matching, remarkable progress has been made by iterative methods like RAFT-Stereo and CREStereo.
no code implementations • 25 Nov 2022 • Jucai Zhai, Pengcheng Zeng, Chihao Ma, Yong Zhao, Jie Chen
The proposed method consists of a learnable blur kernel that estimates the defocus map in an unsupervised manner and, for the first time, a single-image defocus deblurring generative adversarial network (DefocusGAN).
1 code implementation • 27 Mar 2022 • Yong Zhao, Edirisuriya M. Dilanga Siriwardane, Zhenyao Wu, Nihang Fu, Mohammed Al-Fahdi, Ming Hu, Jianjun Hu
Discovering new materials is a challenging task in materials science that is crucial to the progress of human society.
1 code implementation • 12 Dec 2021 • Daniel Gleaves, Edirisuriya M. Dilanga Siriwardane, Yong Zhao, Nihang Fu, Jianjun Hu
For synthesizability prediction, our model significantly increases the baseline PU learning's true positive rate from 87.9% to 97.9% using 1/49 of the model parameters.
no code implementations • 7 Dec 2021 • Yong Zhao, Edirisuriya MD Siriwardane, Jianjun Hu
Deep-learning-based generative models such as deepfakes have been able to generate impressive images and videos.
no code implementations • 9 Sep 2021 • Jianjun Hu, Stanislav Stefanov, Yuqi Song, Sadman Sadeed Omee, Steph-Yves Louis, Edirisuriya M. D. Siriwardane, Yong Zhao
The availability and easy access of large scale experimental and computational materials data have enabled the emergence of accelerated development of algorithms and models for materials property prediction, structure prediction, and generative design of materials.
no code implementations • 7 Jul 2021 • Sihai Guan, Qing Cheng, Yong Zhao
In this paper, a family of novel diffusion adaptive estimation algorithms is proposed from the asymmetric cost function perspective by combining the diffusion strategy with the linear-linear cost (LLC), quadratic-quadratic cost (QQC), and linear-exponential cost (LEC) at all distributed network nodes; the resulting algorithms are named diffusion LLCLMS (DLLCLMS), diffusion QQCLMS (DQQCLMS), and diffusion LECLMS (DLECLMS), respectively.
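The diffusion strategy described above follows a standard adapt-then-combine pattern: each node adapts its local estimate using its own data, then combines the intermediate estimates of its neighbors. A minimal sketch in the spirit of DLLCLMS (the exact cost parameters and combination rule are assumptions, not the paper's specification), where the linear-linear cost yields a sign-type gradient with different slopes for positive and negative errors:

```python
import numpy as np

def diffusion_llc_lms(U, D, A, mu=0.02, alpha=1.0, beta=1.0):
    """Adapt-then-combine diffusion LMS with a linear-linear (asymmetric) cost.
    U: (n_nodes, T, m) regressors per node; D: (n_nodes, T) desired signals;
    A: (n_nodes, n_nodes) row-stochastic combination matrix over neighbors."""
    n, T, m = U.shape
    W = np.zeros((n, m))
    for t in range(T):
        psi = np.empty_like(W)
        for k in range(n):
            e = D[k, t] - U[k, t] @ W[k]
            # LLC gradient: slope alpha for positive errors, beta for negative,
            # so over- and under-estimation errors are penalized asymmetrically
            g = alpha if e >= 0 else -beta
            psi[k] = W[k] + mu * g * U[k, t]  # adapt step
        W = A @ psi                           # combine step across neighbors
    return W
```

With a uniform combination matrix and noise-free data generated from a common parameter vector, all node estimates converge near the true vector.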
2 code implementations • 28 Feb 2021 • Rui Xin, Edirisuriya M. D. Siriwardane, Yuqi Song, Yong Zhao, Steph-Yves Louis, Alireza Nasiri, Jianjun Hu
Our experiments show that while active learning itself may sample chemically infeasible candidates, these samples help to train effective screening models for filtering out materials with desired properties from the hypothetical materials created by the generative model.
no code implementations • 11 Feb 2021 • Xiang Gao, Nikhil Karthik, Swagato Mukherjee, Peter Petreczky, Sergey Syritsyn, Yong Zhao
We study the form factor at the physical point with a lattice spacing $a=0.076$ fm.
High Energy Physics - Lattice • High Energy Physics - Experiment • High Energy Physics - Phenomenology • Nuclear Theory
1 code implementation • 2 Feb 2021 • Jianjun Hu, Yong Zhao, Wenhui Yang, Yuqi Song, Edirisuriya MD Siriwardane, Yuxin Li, Rongzhi Dong
To our knowledge, AlphaCrystal is the first neural network based algorithm for crystal structure contact map prediction and the first method for directly reconstructing crystal structures from materials composition, which can be further optimized by DFT calculations.
Protein Structure Prediction • Materials Science
no code implementations • 27 Jan 2021 • Xiang Gao, Nikhil Karthik, Swagato Mukherjee, Peter Petreczky, Sergey Syritsyn, Yong Zhao
We present an exploratory lattice QCD investigation of the differences between the valence quark structure of pion and its radial excitation $\pi(1300)$ in a fixed finite volume using the leading-twist factorization approach.
High Energy Physics - Lattice • High Energy Physics - Experiment • High Energy Physics - Phenomenology • Nuclear Theory
no code implementations • 1 Jan 2021 • Zilin Yu, Chao Wang, Xin Wang, Yong Zhao, Xundong Wu
This work studies intragroup sparsity, a fine-grained structural constraint on network weight parameters.
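Intragroup sparsity, as studied above, constrains how many weights may be nonzero inside each small group of parameters rather than pruning unstructured individual weights. A minimal magnitude-based sketch (the grouping scheme and selection rule here are illustrative assumptions, not the paper's exact constraint):

```python
import numpy as np

def apply_intragroup_sparsity(W, group_size, keep):
    """Zero all but the `keep` largest-magnitude weights inside each
    contiguous group of `group_size` entries (weights flattened row-major).
    The result satisfies a fine-grained structural sparsity constraint:
    every group has exactly `keep` surviving weights."""
    flat = W.reshape(-1, group_size)
    # indices of the (group_size - keep) smallest-magnitude entries per group
    drop = np.argsort(np.abs(flat), axis=1)[:, : group_size - keep]
    out = flat.copy()
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out.reshape(W.shape)
```

Such group-regular patterns are easier to exploit in hardware than fully unstructured sparsity, since every group carries the same number of nonzeros.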
no code implementations • 16 Dec 2020 • Yuqi Song, Edirisuriya M. Dilanga Siriwardane, Yong Zhao, Jianjun Hu
Two dimensional (2D) materials have emerged as promising functional materials with many applications such as semiconductors and photovoltaics because of their unique optoelectronic properties.
1 code implementation • 5 Sep 2020 • Fangfang Zhou, Yong Zhao, Wenjiang Chen, Yijing Tan, Yaqi Xu, Yi Chen, Chao Liu, Ying Zhao
Reverse-engineering bar charts extracts textual and numeric information from the visual representations of bar charts to support application scenarios that require the underlying information.
no code implementations • 17 Mar 2020 • Yong Zhao, Kunpeng Yuan, Yinqiao Liu, Steph-Yves Louis, Ming Hu, Jianjun Hu
Extensive benchmark experiments over 2,170 Fm-3m face-centered-cubic (FCC) materials show that our ECD based CNNs can achieve good performance for elasticity prediction.
1 code implementation • 11 Mar 2020 • Steph-Yves Louis, Yong Zhao, Alireza Nasiri, Xiran Wong, Yuqi Song, Fei Liu, Jianjun Hu
Machine learning (ML) methods have gained increasing popularity in exploring and developing new materials.
no code implementations • 26 Feb 2020 • Yuqi Song, Joseph Lindsay, Yong Zhao, Alireza Nasiri, Steph-Yves Louis, Jie Ling, Ming Hu, Jianjun Hu
Noncentrosymmetric materials play a critical role in many important applications such as laser technology, communication systems, quantum computing, and cybersecurity.
no code implementations • 10 Dec 2019 • Takuya Yoshioka, Igor Abramovski, Cem Aksoylar, Zhuo Chen, Moshe David, Dimitrios Dimitriadis, Yifan Gong, Ilya Gurvich, Xuedong Huang, Yan Huang, Aviv Hurvitz, Li Jiang, Sharon Koubi, Eyal Krupka, Ido Leichter, Changliang Liu, Partha Parthasarathy, Alon Vinnikov, Lingfeng Wu, Xiong Xiao, Wayne Xiong, Huaming Wang, Zhenghao Wang, Jun Zhang, Yong Zhao, Tianyan Zhou
This increases marginally to 1.6% when 50% of the attendees are unknown to the system.
no code implementations • 12 Nov 2019 • Yabo Dan, Yong Zhao, Xiang Li, Shaobo Li, Ming Hu, Jianjun Hu
The percentage of chemically valid (charge-neutral and electronegativity-balanced) samples out of all generated ones reaches 84.5% for our GAN when trained with materials from the ICSD, even though no such chemical rules are explicitly enforced in the GAN model, indicating its capability to learn implicit chemical composition rules.
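The charge-neutrality criterion mentioned above can be checked by asking whether any assignment of common oxidation states balances the composition to zero net charge. A minimal brute-force sketch (the oxidation-state tables and the exact validity rules used in the paper are assumptions here; libraries such as pymatgen provide full implementations):

```python
from itertools import product

def is_charge_neutral(composition, oxidation_states):
    """Return True if some choice of oxidation states (one per element, from
    the given candidate lists) makes the total charge of the composition zero.
    composition: {element: count}; oxidation_states: {element: [candidate states]}."""
    elems, counts = zip(*composition.items())
    for states in product(*(oxidation_states[e] for e in elems)):
        if sum(s * c for s, c in zip(states, counts)) == 0:
            return True
    return False
```

For example, Fe2O3 balances with Fe in the +3 state (2·3 + 3·(−2) = 0), while a hypothetical NaO cannot balance with Na(+1) and O(−2).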
no code implementations • 26 Oct 2019 • Zhilin Yu, Chao Wang, Xin Wang, Qing Wu, Yong Zhao, Xundong Wu
Modern deep neural networks rely on overparameterization to achieve state-of-the-art generalization.
no code implementations • ICCV 2019 • Canmiao Fu, Wenjie Pei, Qiong Cao, Chaopeng Zhang, Yong Zhao, Xiaoyong Shen, Yu-Wing Tai
Typical methods for supervised sequence modeling are built upon the recurrent neural networks to capture temporal dependencies.
no code implementations • 25 Aug 2019 • Weida Yang, Xindong Ai, Zuliu Yang, Yong Xu, Yong Zhao
To improve the performance in ill-posed regions, this paper proposes an atrous granular multi-scale network based on a depth edge subnetwork (Dedge-AGMNet).
no code implementations • 25 Jun 2019 • Li Zhang, Quanhong Wang, Haihua Lu, Yong Zhao
To tackle this problem, we propose a network for disparity estimation based on abundant contextual details and semantic information, called Multi-scale Features Network (MSFNet).
no code implementations • 29 Apr 2019 • Zhong Meng, Yong Zhao, Jinyu Li, Yifan Gong
The use of deep networks to extract embeddings for speaker recognition has proven successful.
no code implementations • 28 Apr 2019 • Zhong Meng, Jinyu Li, Yong Zhao, Yifan Gong
To overcome this problem, we propose a conditional T/S learning scheme, in which a "smart" student model selectively chooses to learn from either the teacher model or the ground truth labels conditioned on whether the teacher can correctly predict the ground truth.
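The conditional teacher/student selection described above can be expressed as a per-example switch on the training targets: distill from the teacher's soft posterior only when the teacher predicts the ground truth correctly, otherwise train on the one-hot label. A minimal sketch of that target-selection step (the surrounding T/S training loop is omitted):

```python
import numpy as np

def conditional_ts_targets(teacher_probs, labels):
    """Conditional teacher/student targets.
    teacher_probs: (N, C) teacher posteriors; labels: (N,) ground-truth ids.
    Rows where the teacher's argmax matches the label keep the soft teacher
    distribution; rows where the teacher is wrong fall back to one-hot labels."""
    n, c = teacher_probs.shape
    one_hot = np.eye(c)[labels]
    teacher_correct = teacher_probs.argmax(axis=1) == labels
    return np.where(teacher_correct[:, None], teacher_probs, one_hot)
```

The student then minimizes cross-entropy against the selected targets, so it never imitates the teacher on examples the teacher gets wrong.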
1 code implementation • ICCV 2019 • Yong Zhao, Shibiao Xu, Shuhui Bu, Hongkai Jiang, Pengcheng Han
SLAM technology has recently seen many successes and attracted the attention of high-tech companies.
no code implementations • 21 Feb 2019 • Guiying Zhang, Yuxin Cui, Yong Zhao, Jianjun Hu
State-of-the-art face recognition algorithms are able to achieve good performance when sufficient training images are provided.
no code implementations • 4 Jan 2019 • Ke Li, Jinyu Li, Yong Zhao, Kshitiz Kumar, Yifan Gong
We propose two approaches for speaker adaptation in end-to-end (E2E) automatic speech recognition systems.
Automatic Speech Recognition (ASR) +2
no code implementations • 2 Apr 2018 • Zhong Meng, Jinyu Li, Zhuo Chen, Yong Zhao, Vadim Mazalov, Yifan Gong, Biing-Hwang Juang
We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system.
no code implementations • 26 Mar 2018 • Haihua Lu, Hai Xu, Li Zhang, Yong Zhao
Firstly, we propose a new multi-scale matching cost computation sub-network, in which two different sizes of receptive fields are implemented in parallel.
no code implementations • 18 Jan 2018 • Liwen Zheng, Canmiao Fu, Yong Zhao
Single Shot MultiBox Detector (SSD) is one of the fastest algorithms in the current object detection field; it uses a fully convolutional neural network to detect objects at all scales in an image.
1 code implementation • 2 Aug 2017 • Di Wu, Wenbin Zou, Xia Li, Yong Zhao
Visual tracking is intrinsically a temporal problem.
no code implementations • 3 Jan 2017 • Shi-Xiong Zhang, Zhuo Chen, Yong Zhao, Jinyu Li, Yifan Gong
A new type of End-to-End system for text-dependent speaker verification is presented in this paper.
no code implementations • 28 Nov 2016 • Meshia Cédric Oveneke, Mitchel Aliosha-Perez, Yong Zhao, Dongmei Jiang, Hichem Sahli
The omnipresence of deep learning architectures such as deep convolutional neural networks (CNNs) is fueled by the synergistic combination of ever-increasing labeled datasets and specialized hardware.
1 code implementation • 11 Apr 2016 • Xundong Wu, Yong Wu, Yong Zhao
We trained Binarized Neural Networks (BNNs) on the high-resolution ImageNet ILSVRC-2012 classification task and achieved good performance.
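In a BNN like the one trained above, the forward pass uses weights binarized to {−1, +1}, while a real-valued copy of the weights is retained for gradient updates (the straight-through estimator passes gradients through the sign function). A minimal sketch of the forward step for one linear layer (not the paper's implementation):

```python
import numpy as np

def binarized_linear_forward(x, W_real):
    """Forward pass of a binarized linear layer: the real-valued weights
    W_real are snapped to {-1, +1} by sign (with sign(0) taken as +1) and
    only the binary weights participate in the matrix multiply. W_real is
    kept separately for the straight-through gradient update."""
    W_bin = np.where(W_real >= 0, 1.0, -1.0)
    return x @ W_bin
```

Because every multiply reduces to an addition or subtraction, such layers can be implemented with bitwise operations, which is the main efficiency appeal of BNNs.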