no code implementations • CCL 2021 • Wei Hu, Maoxi Li, Bailian Qiu, Mingwen Wang
Automatic evaluation of machine translation plays an important role in advancing the development and application of machine translation; it typically measures translation quality by computing the similarity between the machine translation and a human reference. This paper uses the cross-lingual pre-trained language model XLM to map the source sentence, the machine translation, and the human reference into the same semantic space, combines hierarchical attention and intra-attention to extract difference features between the source sentence and the machine translation, between the machine translation and the reference, and between the source sentence and the reference, and incorporates these features into a Bi-LSTM-based neural automatic evaluation method. Experimental results on the WMT'19 metrics dataset show that incorporating XLM word representations significantly improves the method's correlation with human judgments.
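A minimal sketch (not the paper's code) of the kind of pairwise difference features the abstract describes, assuming `src`, `mt`, and `ref` are XLM sentence vectors; the concatenation pattern and the downstream Bi-LSTM scorer are illustrative assumptions.

```python
import torch

def pair_features(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # standard interaction features for comparing two sentence vectors
    return torch.cat([a, b, torch.abs(a - b), a * b], dim=-1)

def quality_features(src: torch.Tensor, mt: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # difference features for the source-MT, MT-reference, and source-reference pairs,
    # which would then be fed to a Bi-LSTM-based scorer
    return torch.cat([pair_features(src, mt), pair_features(mt, ref), pair_features(src, ref)], dim=-1)
```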
no code implementations • COLING 2022 • Daizong Liu, Wei Hu
Then, we develop a self-supervised coarse-to-fine paradigm to learn to locate the most query-relevant patch in each frame and aggregate them among the video for final grounding.
no code implementations • Findings (ACL) 2022 • Yao Zhao, Jiacheng Huang, Wei Hu, Qijin Chen, Xiaoxia Qiu, Chengfu Huo, Weijun Ren
In this paper, we propose an implicit RL method called ImRL, which links relation phrases in NL to relation paths in KG.
1 code implementation • 8 Sep 2024 • Huan Zhang, Wei Cheng, Yuhan Wu, Wei Hu
The Driver follows the guidance of Navigator to undertake initial code generation, code testing, and refinement.
no code implementations • 9 Aug 2024 • Lingbei Meng, Bi'an Du, Wei Hu
During training, we design a structure-aware masking strategy to further improve the model's robustness against sparse inputs and noise. Experimental results on the MipNeRF360 and OmniObject3D datasets demonstrate that the proposed method achieves state-of-the-art performance for sparse input views in both perceptual quality and efficiency.
1 code implementation • 23 Jul 2024 • Yang Liu, Xiaobin Tian, Zequn Sun, Wei Hu
Traditional knowledge graph (KG) completion models learn embeddings to predict missing facts.
no code implementations • 17 Jul 2024 • QiHao Zhao, Yalun Dai, Shen Lin, Wei Hu, Fan Zhang, Jun Liu
In real-world scenarios, knowledge distributions exhibit a long-tail pattern.
1 code implementation • 15 Jul 2024 • Zhoutian Shao, Yuanning Cui, Wei Hu
Among existing models, graph neural networks (GNNs) based ones have shown promising performance for this task.
1 code implementation • 10 Jul 2024 • Daizong Liu, Mingyu Yang, Xiaoye Qu, Pan Zhou, Yu Cheng, Wei Hu
Compared to traditional Large Language Models (LLMs), LVLMs present great potential and challenges due to their closer proximity to multi-resource real-world applications and the complexity of multi-modal processing.
no code implementations • 4 Jul 2024 • Zhigen Li, Jianxiang Peng, Yanmeng Wang, Tianhao Shen, Minghui Zhang, Linxi Su, Shang Wu, Yihang Wu, Yuqian Wang, Ye Wang, Wei Hu, Jianfeng Li, Shaojun Wang, Jing Xiao, Deyi Xiong
To bridge this gap, we propose a new framework for planning-based conversational agents (PCA) powered by large language models (LLMs), which only requires humans to define tasks and goals for the LLMs.
1 code implementation • 9 Jun 2024 • Daizong Liu, Yang Liu, Wencan Huang, Wei Hu
In this survey, we attempt to provide a comprehensive overview of the T-3DVG progress, including its fundamental elements, recent research advances, and future research directions.
1 code implementation • 30 May 2024 • Yi Liu, Xiangyu Liu, Xiangrong Zhu, Wei Hu
We alleviate the issue of imbalanced attribute correlations during training by using counterfactual feature vectors, obtained via disentanglement, in the attribute latent space.
1 code implementation • 30 May 2024 • Wei Cheng, Yuhan Wu, Wei Hu
Recent years have witnessed the deployment of code language models (LMs) in various code intelligence tasks such as code completion.
1 code implementation • 16 May 2024 • Jianhao Chen, Haoyuan Ouyang, Junyang Ren, Wentao Ding, Wei Hu, Yuzhong Qu
In addition, we evaluate the performance of LLMs for direct temporal fact extraction and get unsatisfactory results.
no code implementations • 26 Mar 2024 • Yongyi Yang, Jiaming Yang, Wei Hu, Michał Dereziński
In this paper, we propose HERTA: a High-Efficiency and Rigorous Training Algorithm for Unfolded GNNs that accelerates the whole training process, achieving a nearly-linear time worst-case training guarantee.
1 code implementation • 22 Mar 2024 • Xindi Luo, Zequn Sun, Jing Zhao, Zhe Zhao, Wei Hu
Parameter-efficient finetuning (PEFT) is a key technique for adapting large language models (LLMs) to downstream tasks.
no code implementations • 18 Mar 2024 • Haolan Chen, Jinhua Hao, Kai Zhao, Kun Yuan, Ming Sun, Chao Zhou, Wei Hu
In particular, we develop a cascaded controllable diffusion model that aims to optimize the extraction of information from low-resolution images.
1 code implementation • 15 Mar 2024 • Qianjiang Hu, Zhimin Zhang, Wei Hu
Autonomous driving demands high-quality LiDAR data, yet the cost of physical LiDAR sensors presents a significant scaling-up challenge.
1 code implementation • 12 Mar 2024 • Yutong Wang, Rishi Sonthalia, Wei Hu
Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|{\boldsymbol{\beta}}\|_{2}^{2}] = \Omega(n^{\alpha})$ where $n$ is the number of samples and $\alpha > 1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$.
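A hypothetical numerical check (not from the paper) of the setting this sentence describes: draw data whose covariance eigenvalues decay as $i^{-\alpha}$, compute a near-interpolating ridge solution, and watch its squared norm grow with $n$; the dimensions, targets, and ridge parameter are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, d = 2.0, 2000                        # eigendecay exponent and ambient dimension
eigvals = np.arange(1, d + 1) ** (-alpha)   # lambda_i(Sigma) ~ i^{-alpha}

for n in [100, 200, 400, 800]:
    X = rng.standard_normal((n, d)) * np.sqrt(eigvals)  # rows ~ N(0, Sigma)
    y = rng.standard_normal(n)                           # noisy targets
    lam = 1e-6                                           # tiny ridge -> near-interpolation
    beta = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)
    train_mse = np.mean((X @ beta - y) ** 2)
    print(f"n={n:4d}  train_mse={train_mse:.2e}  squared_norm={beta @ beta:.3f}")
```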
no code implementations • CVPR 2024 • QiHao Zhao, Yalun Dai, Hao Li, Wei Hu, Fan Zhang, Jun Liu
Long-tail recognition is challenging because it requires the model to learn good representations from tail categories and address imbalances across all categories.
1 code implementation • CVPR 2024 • Bi'an Du, Xiang Gao, Wei Hu, Renjie Liao
Subsequently, we transform the point cloud using the latent poses, feeding it to the part encoder for aggregating super-part information and reasoning about part relationships to predict all part poses.
no code implementations • 8 Jan 2024 • Zhimin Zhang, Xiang Gao, Wei Hu
The convenience of 3D sensors has led to an increase in the use of 3D point clouds in various applications.
1 code implementation • 5 Jan 2024 • Jincen Jiang, Lizhi Zhao, Xuequan Lu, Wei Hu, Imran Razzak, Meili Wang
Recent works attempt to extend Graph Convolution Networks (GCNs) to point clouds for classification and segmentation tasks.
1 code implementation • 19 Dec 2023 • Xiangyu Liu, Yang Liu, Wei Hu
Knowledge graphs (KGs) often contain various errors.
1 code implementation • 18 Dec 2023 • Jianhao Chen, Junyang Ren, Wentao Ding, Haoyuan Ouyang, Wei Hu, Yuzhong Qu
Temporal facts, which are used to describe events that occur during specific time periods, have become a topic of increased interest in the field of knowledge graph (KG) research.
2 code implementations • 8 Dec 2023 • Xiaobin Tian, Zequn Sun, Wei Hu
In this paper, we present the first framework that can generate explanations for understanding and repairing embedding-based EA results.
1 code implementation • 30 Nov 2023 • Kaifeng Lyu, Jikai Jin, Zhiyuan Li, Simon S. Du, Jason D. Lee, Wei Hu
Recent work by Power et al. (2022) highlighted a surprising "grokking" phenomenon in learning arithmetic tasks: a neural net first "memorizes" the training set, resulting in perfect training accuracy but near-random test accuracy, and after training for sufficiently longer, it suddenly transitions to perfect test accuracy.
1 code implementation • 6 Nov 2023 • Peng Wang, Xiao Li, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu
To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.
no code implementations • 6 Nov 2023 • Xi Chen, Wei Hu, Jingru Yu, Ding Wang, Shengyue Yao, Yilun Lin, Fei-Yue Wang
This paper introduces a novel approach, aiming to enable cities to evolve and respond more effectively to such dynamic demand.
1 code implementation • 24 Oct 2023 • Zitao Wang, Xinyi Wang, Wei Hu
We study continual event extraction, which aims to extract incessantly emerging event information while avoiding forgetting.
no code implementations • 16 Oct 2023 • Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai
Through extensive probing and a new pasting experiment, we further reveal several mechanisms within the trained transformers, such as concrete copying behaviors on both the inputs and the representations, linear ICL capability of the upper layers alone, and a post-ICL representation selection mechanism in a harder mixture setting.
no code implementations • 4 Oct 2023 • Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
Second, they can undergo a period of classical, harmful overfitting -- achieving a perfect fit to training data with near-random performance on test data -- before transitioning ("grokking") to near-optimal generalization later in training.
1 code implementation • 13 Sep 2023 • Gaotang Li, Jiarui Liu, Wei Hu
Neural networks produced by standard training are known to suffer from poor accuracy on rare subgroups despite achieving high accuracy on average, due to the correlations between certain spurious features and labels.
no code implementations • 5 Sep 2023 • Wencan Huang, Daizong Liu, Wei Hu
Localizing objects in 3D scenes according to the semantics of a given natural language is a fundamental yet important task in the field of multimedia understanding, which benefits various real-world applications such as robotics and autonomous driving.
1 code implementation • ICCV 2023 • QiHao Zhao, Chen Jiang, Wei Hu, Fan Zhang, Jun Liu
In the analysis and ablation study, we demonstrate that, compared with previous work, our method can effectively increase the diversity of experts, significantly reduce the variance of the model, and improve recognition accuracy.
Ranked #5 on Long-tail Learning on CIFAR-10-LT (ρ=50)
no code implementations • ICCV 2023 • Yunbo Tao, Daizong Liu, Pan Zhou, Yulai Xie, Wei Du, Wei Hu
With the maturity of depth sensors, the vulnerability of 3D point cloud models has received increasing attention in various applications such as autonomous driving and robot navigation.
no code implementations • 13 Jul 2023 • Wei Hu, Xuhong Wang, Ding Wang, Shengyue Yao, Zuqiu Mao, Li Li, Fei-Yue Wang, Yilun Lin
In the realm of software applications in the transportation industry, Domain-Specific Languages (DSLs) have enjoyed widespread adoption due to their ease of use and various other benefits.
no code implementations • 29 Jun 2023 • Yongyi Yang, Jacob Steinhardt, Wei Hu
This appears to suggest that the last-layer representations are completely determined by the labels, and do not depend on the intrinsic structure of input distribution.
1 code implementation • 5 Jun 2023 • Zequn Sun, Jiacheng Huang, Jinghao Lin, Xiaozhou Xu, Qijin Chen, Wei Hu
We pre-train a large teacher KG embedding model over linked multi-source KGs and distill knowledge to train a student model for a task-specific KG.
1 code implementation • 5 Jun 2023 • Zequn Sun, Jiacheng Huang, Xiaozhou Xu, Qijin Chen, Weijun Ren, Wei Hu
In this paper, we provide a similarity flooding perspective to explain existing translation-based and aggregation-based EA models.
1 code implementation • 1 Jun 2023 • Can Yaras, Peng Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu
Second, it allows us to better understand deep representation learning by elucidating the linear progressive separation and concentration of representations from shallow to deep layers.
1 code implementation • 24 May 2023 • Jianhao Ma, Rui Ray Chen, Yinghui He, Salar Fattahi, Wei Hu
This paper presents a simple mean estimator that overcomes both challenges under moderate conditions: it runs in near-linear time and memory (both with respect to the ambient dimension) while requiring only $\tilde O(k)$ samples to recover the true mean.
1 code implementation • 11 May 2023 • Xinyi Wang, Zitao Wang, Wei Hu
Continual few-shot relation extraction (RE) aims to continuously train a model for new relations with few labeled training data, of which the major challenges are the catastrophic forgetting of old relations and the overfitting caused by data sparsity.
1 code implementation • 11 May 2023 • Wenzheng Zhao, Yuanning Cui, Wei Hu
To address this issue, we propose a novel continual extraction model for analogous relations.
1 code implementation • 24 Apr 2023 • QiHao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu
TransMix uses unreliable attention maps to compute mixed attention labels that can affect the model.
Ranked #1 on Data Augmentation on ImageNet
1 code implementation • CVPR 2023 • Qianjiang Hu, Daizong Liu, Wei Hu
Recently, a few works have attempted to tackle the domain gap in objects, but they still fail to adapt to the gap of varying beam densities between the two domains, which is critical for mitigating the characteristic differences of LiDAR collectors.
1 code implementation • 10 Apr 2023 • Jiacheng Huang, Zequn Sun, Qijin Chen, Xiaozhou Xu, Weijun Ren, Wei Hu
With deep learning, it learns the embeddings of entities, relations and classes, and jointly aligns them in a semi-supervised manner.
no code implementations • 4 Feb 2023 • Xiangrong Zhu, Guangyao Li, Wei Hu
To cope with the drift between local optimization and global convergence caused by data heterogeneity, we propose mutual knowledge distillation to transfer local knowledge to global, and absorb global knowledge back.
no code implementations • 6 Jan 2023 • Haoyan Wei, C. T. Wu, Wei Hu, Tung-Huan Su, Hitoshi Oura, Masato Nishi, Tadashi Naito, Stan Chung, Leo Shen
In this work, we present a machine learning-based multiscale method by integrating injection molding-induced microstructures, material homogenization, and Deep Material Network (DMN) in the finite element simulation software LS-DYNA for structural analysis of SFRC.
no code implementations • 9 Dec 2022 • Yuxin Wang, Jieru Lin, Zhiwei Yu, Wei Hu, Börje F. Karlsson
Storytelling and narrative are fundamental to human experience, intertwined with our social and cultural engagement.
1 code implementation • 29 Nov 2022 • Yuanning Cui, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, Wei Hu
We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch.
no code implementations • 14 Nov 2022 • Xiang Gao, Wei Hu, Renjie Liao
The decoder takes the latent variable and the feature from the encoder as an input and predicts the per-point part distribution at the top level.
1 code implementation • 5 Nov 2022 • Xiaobin Tian, Zequn Sun, Guangyao Li, Wei Hu
Towards a critical evaluation of embedding-based entity alignment methods, we construct a new dataset with heterogeneous relations and attributes based on event-centric KGs.
2 code implementations • 19 Oct 2022 • Botao Yu, Peiling Lu, Rui Wang, Wei Hu, Xu Tan, Wei Ye, Shikun Zhang, Tao Qin, Tie-Yan Liu
A recent trend is to use Transformer or its variants in music generation, which is, however, suboptimal, because the full attention cannot efficiently model the typically long music sequences (e.g., over 10,000 tokens), and the existing models have shortcomings in generating musical repetition structures.
no code implementations • 13 Oct 2022 • Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro, Wei Hu
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations when the training data are nearly-orthogonal, a common property of high-dimensional data.
1 code implementation • 23 Aug 2022 • Kexuan Xin, Zequn Sun, Wen Hua, Wei Hu, Jianfeng Qu, Xiaofang Zhou
Therefore, in this work, we propose a scalable GNN-based entity alignment approach to reduce the structure and alignment loss from three perspectives.
2 code implementations • 22 Aug 2022 • Yuanning Cui, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, Wei Hu
We propose a walk-based inductive reasoning model to tackle the new setting.
1 code implementation • 21 Aug 2022 • Yang Liu, Zequn Sun, Guangyao Li, Wei Hu
To this end, we propose CoLE, a Co-distillation Learning method for KG Embedding that exploits the complementarity of graph structures and text information.
no code implementations • 18 Aug 2022 • Gusi Te, Xiu Li, Xiao Li, Jinglu Wang, Wei Hu, Yan Lu
We present a novel paradigm of building an animatable 3D human representation from a monocular video input, such that it can be rendered in any unseen poses and views.
no code implementations • 27 Jul 2022 • Daizong Liu, Wei Hu, Xin Li
Instead, we propose point cloud attacks from a new perspective -- the graph spectral domain attack, aiming to perturb graph transform coefficients in the spectral domain that corresponds to varying certain geometric structure.
no code implementations • 27 Jul 2022 • Daizong Liu, Wei Hu
SLP consists of a Skimming-and-Locating (SL) module and a Bi-directional Perusing (BP) module.
no code implementations • 27 Jul 2022 • Daizong Liu, Xiaoye Qu, Wei Hu
In this paper, we study the above issue of selection biases and accordingly propose a Debiasing-TSG (D-TSG) model to filter and remove the negative biases in both vision and language modalities for enhancing the model generalization ability.
1 code implementation • 23 Jul 2022 • Xindi Luo, Zequn Sun, Wei Hu
It is useful for a thorough comparison and analysis of various embedding models and tasks.
1 code implementation • 23 Jul 2022 • Xinyi Wang, Zitao Wang, Weijian Sun, Wei Hu
Document-level relation extraction (RE) aims to identify the relations between entities throughout an entire document.
Ranked #23 on Relation Extraction on DocRED
1 code implementation • 23 Jul 2022 • Yuxin Wang, Yuanning Cui, Wenqiang Liu, Zequn Sun, Yiqiao Jiang, Kexin Han, Wei Hu
To avoid retraining an entire model on the whole KGs whenever new entities and triples come, we present a continual alignment method for this task.
no code implementations • 19 Apr 2022 • Qianjiang Hu, Wei Hu
The gradient field is the gradient of the log-probability function of the noisy point cloud, based on which we perform gradient ascent so as to converge each point to the underlying clean surface.
no code implementations • 10 Apr 2022 • Zhimin Zhang, Zheng Wang, Wei Hu
In the past few years, there has been a dramatic growth in e-manga (electronic Japanese-style comics).
no code implementations • 2 Apr 2022 • Daochang Wang, Fan Zhang, Fei Ma, Wei Hu, Yu Tang, Yongsheng Zhou
As a result, deep learning methods have not been fully used in airport detection tasks.
1 code implementation • 12 Mar 2022 • Kexuan Xin, Zequn Sun, Wen Hua, Bing Liu, Wei Hu, Jianfeng Qu, Xiaofang Zhou
We also design a conflict resolution mechanism to resolve the alignment conflict when combining the new alignment of an aligner and that from its teacher.
no code implementations • 11 Mar 2022 • Alexander Wei, Wei Hu, Jacob Steinhardt
On the other hand, we find that the classical GCV estimator (Craven and Wahba, 1978) accurately predicts generalization risk even in such overparameterized settings.
no code implementations • 8 Mar 2022 • Shentong Mo, Daizong Liu, Wei Hu
Secondly, since some predicted frames (i.e., boundary frames) are relatively coarse and exhibit similar appearance to their adjacent frames, we propose a coarse-to-fine contrastive learning paradigm to learn more discriminative frame-wise representations for distinguishing the false positive frames.
no code implementations • 6 Mar 2022 • Daizong Liu, Xiang Fang, Wei Hu, Pan Zhou
Temporal sentence grounding aims to localize a target segment in an untrimmed video semantically according to a given sentence query.
no code implementations • 28 Feb 2022 • Jin Zeng, Yang Liu, Gene Cheung, Wei Hu
Specifically, based on a spectral analysis of multilayer GCN output, we derive a spectrum prior for the graph Laplacian matrix $\mathbf{L}$ to robustify the model expressiveness against over-smoothing.
no code implementations • 18 Feb 2022 • Simiao Ren, Wei Hu, Kyle Bradbury, Dylan Harrison-Atlas, Laura Malaguzzi Valeri, Brian Murray, Jordan M. Malof
These include the opportunity to extend the methods beyond electricity to broader energy systems and wider geographic areas; and the ability to expand the use of these methods in research and decision making as satellite data become cheaper and easier to access.
1 code implementation • 15 Feb 2022 • Qianjiang Hu, Daizong Liu, Wei Hu
Instead, we propose point cloud attacks from a new perspective -- Graph Spectral Domain Attack (GSDA), aiming to perturb transform coefficients in the graph spectral domain that corresponds to varying certain geometric structure.
1 code implementation • 21 Jan 2022 • Jiacheng Huang, Yao Zhao, Wei Hu, Zhen Ning, Qijin Chen, Xiaoxia Qiu, Chengfu Huo, Weijun Ren
In this paper, we propose a new trustworthy method that exploits facts for a KG based on multi-sourced noisy data and existing facts in the KG.
1 code implementation • 2 Jan 2022 • Kexuan Xin, Zequn Sun, Wen Hua, Wei Hu, Xiaofang Zhou
Entity alignment is a crucial step in integrating knowledge graphs (KGs) from multiple sources.
1 code implementation • 15 Dec 2021 • Ehsan Imani, Wei Hu, Martha White
We then highlight why alignment between the top singular vectors and the targets can speed up learning and show in a classic synthetic transfer problem that representation alignment correlates with positive and negative transfer to similar and dissimilar tasks.
no code implementations • 22 Nov 2021 • Daizong Liu, Wei Hu
Although many efforts have been made into attack and defense on the 2D image domain in recent years, few methods explore the vulnerability of 3D models.
no code implementations • 3 Nov 2021 • Haolan Chen, Bi'an Du, Shitong Luo, Wei Hu
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications including immersive telepresence, autonomous driving, surveillance, etc.
no code implementations • 21 Oct 2021 • Lingbing Guo, Zequn Sun, Mingyang Chen, Wei Hu, Qiang Zhang, Huajun Chen
Embedding-based entity alignment (EEA) has recently received great attention.
no code implementations • 21 Oct 2021 • Wei Hu, Dian Xu, Zimeng Fan, Fang Liu, Yanxiang He
Vis-TOP summarizes the characteristics of all visual Transformer models and implements a three-layer and two-level transformation structure that allows the model to be switched or changed freely without changing the hardware architecture.
1 code implementation • EMNLP 2021 • Kailong Hao, Botao Yu, Wei Hu
Distantly supervised relation extraction (RE) automatically aligns unstructured text with relation instances in a knowledge base (KB).
no code implementations • 23 Jul 2021 • Yu Jing, Xiaogang Li, Yang Yang, Chonghang Wu, Wenbing Fu, Wei Hu, Yuanyuan Li, Hua Xu
With the rapid growth of qubit numbers and coherence times in quantum hardware technology, implementing shallow neural networks on the so-called Noisy Intermediate-Scale Quantum (NISQ) devices has attracted a lot of interest.
2 code implementations • ICCV 2021 • Shitong Luo, Wei Hu
Since $p * n$ is unknown at test-time, and we only need the score (i.e., the gradient of the log-probability function) to perform gradient ascent, we propose a neural network architecture to estimate the score of $p * n$ given only noisy point clouds as input.
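A minimal sketch of the gradient-ascent denoising step this abstract refers to, assuming `score_net` is some trained network mapping a noisy point cloud of shape (N, 3) to per-point score estimates of the same shape; the step size and iteration count are illustrative, not the paper's settings.

```python
import torch

def denoise_by_gradient_ascent(points: torch.Tensor, score_net, steps: int = 30, step_size: float = 0.2) -> torch.Tensor:
    """Move each point along the estimated gradient of the log-density."""
    x = points.clone()
    for _ in range(steps):
        with torch.no_grad():
            score = score_net(x)       # estimated score of p * n at x, shape (N, 3)
        x = x + step_size * score      # gradient ascent toward the underlying clean surface
    return x
```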
no code implementations • 5 Jul 2021 • Bi'an Du, Xiang Gao, Wei Hu, Xin Li
Point clouds have attracted increasing attention.
1 code implementation • 29 Jun 2021 • Nikunj Saunshi, Arushi Gupta, Wei Hu
An effective approach in meta-learning is to utilize multiple "train tasks" to learn a good initialization for model parameters that can help solve unseen "test tasks" with very few samples by fine-tuning from this initialization.
no code implementations • 23 Jun 2021 • Qi Lei, Wei Hu, Jason D. Lee
Transfer learning is essential when sufficient data comes from the source domain, with scarce labeled data from the target domain.
1 code implementation • ACL 2021 • Zequn Sun, Muhao Chen, Wei Hu
Since KGs possess different sets of entities, there could be entities that cannot find alignment across them, leading to the problem of dangling entities.
Ranked #1 on Entity Alignment on DBP2.0 zh-en
1 code implementation • 25 May 2021 • Xiang Gao, Wei Hu, Guo-Jun Qi
We formalize the proposed model from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations.
1 code implementation • 21 Apr 2021 • Jidong Ge, Yunyun Huang, Xiaoyu Shen, Chuanyi Li, Wei Hu
We believe that learning fine-grained correspondence between each individual fact and the relevant law articles is crucial for an accurate and trustworthy AI system.
3 code implementations • CVPR 2021 • Shitong Luo, Wei Hu
We present a probabilistic model for point cloud generation, which is fundamental for various 3D vision tasks such as shape completion, upsampling, synthesis and data augmentation.
no code implementations • 1 Mar 2021 • Xiang Gao, Wei Hu, Guo-Jun Qi
Then, we self-train a representation to capture the intrinsic 3D object representation by decoding 3D transformation parameters from the fused feature representations of multiple views before and after the transformation.
no code implementations • 18 Jan 2021 • Gusi Te, Wei Hu, Yinglu Liu, Hailin Shi, Tao Mei
Face parsing infers a pixel-wise label to each facial component, which has drawn much attention recently.
Ranked #3 on Face Parsing on CelebAMask-HQ
no code implementations • 1 Jan 2021 • Xiang Gao, Wei Hu, Guo-Jun Qi
We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations.
no code implementations • 1 Jan 2021 • Shitong Luo, Wei Hu
Point cloud generation thus amounts to learning the reverse diffusion process that transforms the noise distribution to the distribution of a desired shape.
no code implementations • 1 Jan 2021 • Lingbing Guo, Zequn Sun, Mingyang Chen, Wei Hu, Huajun Chen
In this paper, we define a typical paradigm abstracted from the existing methods, and analyze how the representation discrepancy between two potentially-aligned entities is implicitly bounded by a predefined margin in the scoring function for embedding learning.
no code implementations • 7 Dec 2020 • Jeremy Blackstone, Wei Hu, Alric Althoff, Armaiti Ardeshiricham, Lu Zhang, Ryan Kastner
To justify our model, we prove that Precise Hardware IFT is equivalent to gate level X-propagation and imprecise fault propagation.
Hardware Architecture
no code implementations • 2 Dec 2020 • Yiming Gan, Yu Bo, Boyuan Tian, Leimeng Xu, Wei Hu, Shaoshan Liu, Qiang Liu, Yanjun Zhang, Jie Tang, Yuhao Zhu
We develop and commercialize autonomous machines, such as logistics robots and self-driving cars, around the globe.
Self-Driving Cars • Hardware Architecture
2 code implementations • CVPR 2021 • Qianjiang Hu, Xiao Wang, Wei Hu, Guo-Jun Qi
Contrastive learning relies on constructing a collection of negative examples that are sufficiently hard to discriminate against positive queries when their representations are self-trained.
no code implementations • ICLR 2021 • Jiaqi Yang, Wei Hu, Jason D. Lee, Simon S. Du
For the finite-action setting, we present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit.
1 code implementation • EMNLP 2020 • Zequn Sun, Muhao Chen, Wei Hu, Chengming Wang, Jian Dai, Wei zhang
Capturing associations for knowledge graphs (KGs) through entity alignment, entity type inference and other related tasks benefits NLP applications with comprehensive knowledge representations.
Ranked #28 on Entity Alignment on DBP15k zh-en
1 code implementation • EMNLP 2020 • Difeng Wang, Wei Hu, Ermei Cao, Weijian Sun
Relation extraction (RE) aims to identify the semantic relations between named entities in text.
Ranked #39 on Relation Extraction on DocRED
1 code implementation • 14 Sep 2020 • Wei Hu, QiHao Zhao, Yangyu Huang, Fan Zhang
Learning a deep neural network (DNN) classifier with noisy labels is a challenging task because the DNN can easily overfit on these noisy labels due to its high capacity.
1 code implementation • 9 Sep 2020 • Xinze Lyu, Guangyao Li, Jiacheng Huang, Wei Hu
However, existing work that incorporates KGs cannot capture the explicit long-range semantics between users and items while also considering the various connectivity between items.
no code implementations • 5 Aug 2020 • Wei Hu, Jiahao Pang, Xian-Ming Liu, Dong Tian, Chia-Wen Lin, Anthony Vetro
Geometric data acquired from real-world scenes, e.g., 2D depth images, 3D point clouds, and 4D dynamic point clouds, have found a wide range of applications including immersive telepresence, autonomous driving, surveillance, etc.
1 code implementation • 27 Jul 2020 • Shitong Luo, Wei Hu
Afterwards, the decoder infers the underlying manifold by transforming each sampled point along with the embedded feature of its neighborhood to a local surface centered around the point.
1 code implementation • ECCV 2020 • Gusi Te, Yinglu Liu, Wei Hu, Hailin Shi, Tao Mei
Specifically, we encode a facial image onto a global graph representation where a collection of pixels ("regions") with similar features are projected to each vertex.
Ranked #4 on Face Parsing on CelebAMask-HQ
no code implementations • NeurIPS 2020 • Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington
Modern neural networks are often regarded as complex black-box functions whose behavior is difficult to understand owing to their nonlinear dependence on the data and the nonconvexity in their loss landscapes.
1 code implementation • 15 Jun 2020 • Cheng Yang, Gene Cheung, Wei Hu
Given a convex and differentiable objective $Q(\mathbf{M})$ for a real symmetric matrix $\mathbf{M}$ in the positive definite (PD) cone -- used to compute Mahalanobis distances -- we propose a fast general metric learning framework that is entirely projection-free.
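For reference, a small sketch of the Mahalanobis distance that the objective $Q(\mathbf{M})$ is defined over; the function and its inputs are illustrative, not part of the paper's framework.

```python
import numpy as np

def mahalanobis(x: np.ndarray, y: np.ndarray, M: np.ndarray) -> float:
    # distance induced by a symmetric positive definite metric matrix M
    d = x - y
    return float(np.sqrt(d @ M @ d))
```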
no code implementations • 10 Jun 2020 • Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu
Though errors in past actions may affect the future, we are able to bound the number of particles needed so that the long-run reward of the policy based on particle filtering is close to that based on exact inference.
1 code implementation • 22 Apr 2020 • Zequn Sun, Jiacheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, Yuzhong Qu
We refer to such contextualized representations of a relation as edge embeddings and interpret them as translations between entity embeddings.
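The paper's edge embeddings are relation-contextualized; the snippet below only illustrates the basic translational scoring idea that this sentence alludes to, with hypothetical embedding vectors.

```python
import numpy as np

def translation_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # a relation vector r "translates" the head h toward the tail t;
    # a triple is plausible when h + r lands close to t
    return -float(np.linalg.norm(h + r - t))
```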
1 code implementation • 18 Mar 2020 • Farahnaz Akrami, Mohammed Samiul Saeef, Qingheng Zhang, Wei Hu, Chengkai Li
A more fundamental defect of these models is that the link prediction scenario, given such data, is non-existent in the real-world.
no code implementations • 17 Mar 2020 • Wei Hu, Qianjiang Hu, Zehua Wang, Xiang Gao
In particular, we define a manifold-to-manifold distance and its discrete counterpart on graphs to measure the variation-based intrinsic distance between surface patches in the temporal domain, provided that graph operators are discrete counterparts of functionals on Riemannian manifolds.
1 code implementation • 10 Mar 2020 • Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, Chengkai Li
Recent advancement in KG embedding impels the advent of embedding-based entity alignment, which encodes entities in a continuous embedding space and measures entity similarities based on the learned embeddings.
1 code implementation • 21 Feb 2020 • Jiacheng Huang, Wei Hu, Zhifeng Bao, Yuzhong Qu
Knowledge bases (KBs) store rich yet heterogeneous entities and facts.
no code implementations • ICLR 2021 • Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, Qi Lei
First, we study the setting where this common representation is low-dimensional and provide a fast rate of $O\left(\frac{\mathcal{C}\left(\Phi\right)}{n_1T} + \frac{k}{n_2}\right)$; here, $\Phi$ is the representation function class, $\mathcal{C}\left(\Phi\right)$ is its complexity measure, and $k$ is the dimension of the representation.
1 code implementation • 15 Feb 2020 • Ermei Cao, Difeng Wang, Jiacheng Huang, Wei Hu
In this paper, we propose a full-fledged approach to knowledge enrichment, which predicts missing properties and infers true facts of long-tail entities from the open Web.
no code implementations • 28 Jan 2020 • Cheng Yang, Gene Cheung, Wei Hu
We propose a fast general projection-free metric learning framework, where the minimization objective $\min_{\textbf{M} \in \mathcal{S}} Q(\textbf{M})$ is a convex differentiable function of the metric matrix $\textbf{M}$, and $\textbf{M}$ resides in the set $\mathcal{S}$ of generalized graph Laplacian matrices for connected graphs with positive edge weights and node degrees.
no code implementations • ICLR 2020 • Wei Hu, Lechao Xiao, Jeffrey Pennington
The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance.
no code implementations • 4 Dec 2019 • Yiming He, Wei Hu
To this end, we propose a regularized graph representation learning under a conditional adversarial learning framework for 3D hand pose estimation, aiming to capture structural inter-dependencies of hand joints.
1 code implementation • 20 Nov 2019 • Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei zhang, Yuzhong Qu
As the direct neighbors of counterpart entities are usually dissimilar due to the schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures.
Ranked #29 on Entity Alignment on DBP15k zh-en
1 code implementation • CVPR 2020 • Xiang Gao, Wei Hu, Guo-Jun Qi
Recent advances in Graph Convolutional Neural Networks (GCNNs) have shown their efficiency for non-Euclidean data on graphs, which often require a large amount of labeled data with high cost.
no code implementations • 3 Nov 2019 • Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora
An exact algorithm to compute CNTK (Arora et al., 2019) yielded the finding that classification accuracy of CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (best figure being around 78%), which is interesting performance for a fixed kernel.
1 code implementation • 11 Sep 2019 • Jiaxiang Tang, Wei Hu, Xiang Gao, Zongming Guo
In particular, we cast the graph optimization problem as distance metric learning to capture pairwise similarities of features in each layer.
no code implementations • IJCNLP 2019 • Jiwei Ding, Wei Hu, Qixin Xu, Yuzhong Qu
Formal query generation aims to generate correct executable queries for question answering over knowledge bases (KBs), given entity and relation linking results.
no code implementations • 22 Jul 2019 • Wei Hu, Xiang Gao, Gene Cheung, Zongming Guo
In this work, we assume instead the availability of a relevant feature vector $\mathbf{f}_i$ per node $i$, from which we compute an optimal feature graph via optimization of a feature metric.
1 code implementation • NeurIPS 2019 • Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge
Mode connectivity is a surprising phenomenon in the loss landscape of deep nets.
1 code implementation • 6 Jun 2019 • Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, Yuzhong Qu
Furthermore, we design some cross-KG inference methods to enhance the alignment between two KGs.
1 code implementation • NeurIPS 2019 • Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo
Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity."
no code implementations • ICLR 2020 • Wei Hu, Zhiyuan Li, Dingli Yu
Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of data.
1 code implementation • 13 May 2019 • Lingbing Guo, Zequn Sun, Wei Hu
Moreover, triple-level learning is insufficient for the propagation of semantic information among entities, especially for the case of cross-KG embedding.
no code implementations • 28 Apr 2019 • Wei Hu, Qianjiang Hu, Zehua Wang, Xiang Gao
Finally, based on the spatial-temporal graph learning, we formulate dynamic point cloud denoising as the joint optimization of the desired point cloud and underlying spatio-temporal graph, which leverages both intra-frame affinities and inter-frame consistency and is solved via alternating minimization.
2 code implementations • NeurIPS 2019 • Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang
An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width.
no code implementations • 23 Apr 2019 • Zeqing Fu, Wei Hu, Zongming Guo
With the development of 3D laser scanning techniques and depth sensors, 3D dynamic point clouds have attracted increasing attention as a representation of 3D objects in motion, enabling various applications such as 3D immersive tele-presence, gaming and navigation.
no code implementations • 23 Apr 2019 • Xiang Gao, Wei Hu, Zongming Guo
In this paper, we propose Graph Learning Neural Networks (GLNNs), which exploit the optimization of graphs (the adjacency matrix in particular) from both data and tasks.
2 code implementations • CVPR 2019 • Wei Hu, Yangyu Huang, Fan Zhang, Ruirui Li
Benefiting from large-scale training datasets, deep Convolutional Neural Networks (CNNs) have achieved impressive results in face recognition (FR).
2 code implementations • 28 Feb 2019 • Wei Hu, Kyle Bradbury, Jordan M. Malof, Boning Li, Bohao Huang, Artem Streltsov, K. Sydny Fujita, Ben Hoen
Our findings suggest that traditional performance evaluation of the automated identification of solar PV from satellite imagery may be optimistic due to common limitations in the validation process.
no code implementations • 24 Jan 2019 • Simon S. Du, Wei Hu
We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde\Omega (L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3 )$, where $r$ and $\kappa$ are the rank and the condition number of the input data, and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate.
no code implementations • 24 Jan 2019 • Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang
This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al. ICLR'17].
no code implementations • 29 Dec 2018 • Junkun Qi, Wei Hu, Zongming Guo
With the development of 3D sensing technologies, point clouds have attracted increasing attention in a variety of applications for 3D object representation, such as autonomous driving, 3D immersive tele-presence and heritage reconstruction.
no code implementations • 29 Nov 2018 • Xiang Gao, Wei Hu, Jiaxiang Tang, Jiaying Liu, Zongming Guo
In this paper, we represent skeletons naturally on graphs, and propose a graph regression based GCN (GR-GCN) for skeleton-based action recognition, aiming to capture the spatio-temporal variation in the data.
Ranked #2 on Skeleton Based Action Recognition on Florence 3D
no code implementations • 28 Nov 2018 • Wei Hu, Gusi Te, Ju He, Dong Chen, Zongming Guo
Face anti-spoofing plays a crucial role in protecting face recognition systems from various attacks.
no code implementations • 6 Nov 2018 • Lingbing Guo, Zequn Sun, Ermei Cao, Wei Hu
We consider the problem of learning knowledge graph (KG) embeddings for entity alignment (EA).
1 code implementation • 30 Oct 2018 • Lingbing Guo, Qingheng Zhang, Weiyi Ge, Wei Hu, Yuzhong Qu
Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of $(subject, relation, object)$.
no code implementations • ICLR 2019 • Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu
We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network (parameterized as $x \mapsto W_N W_{N-1} \cdots W_1 x$) by minimizing the $\ell_2$ loss over whitened data.
no code implementations • 28 Sep 2018 • Zeqing Fu, Wei Hu, Zongming Guo
Hence, leveraging on recent advances in graph signal processing, we propose an efficient point cloud inpainting method, exploiting both the local smoothness and the non-local self-similarity in point clouds.
no code implementations • 24 Sep 2018 • Wei Hu, Weining Shen, Hua Zhou, Dehan Kong
We propose a novel linear discriminant analysis approach for the classification of high-dimensional matrix-valued data that commonly arises from imaging studies.
1 code implementation • 8 Jun 2018 • Gusi Te, Wei Hu, Zongming Guo, Amin Zheng
Leveraging spectral graph theory, we treat the features of points in a point cloud as signals on a graph, and define the convolution over the graph by Chebyshev polynomial approximation.
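A sketch of the classical Chebyshev-polynomial graph filtering that this sentence refers to; `L_norm` is assumed to be the rescaled graph Laplacian $2L/\lambda_{\max} - I$ and `theta` a vector of at least two filter coefficients, both supplied by the caller.

```python
def cheb_filter(x, L_norm, theta):
    """Apply sum_k theta[k] * T_k(L_norm) @ x via the Chebyshev recurrence."""
    Tx_prev, Tx = x, L_norm @ x                        # T_0(L)x = x,  T_1(L)x = Lx
    out = theta[0] * Tx_prev + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2 * (L_norm @ Tx) - Tx_prev  # T_k = 2 L T_{k-1} - T_{k-2}
        out = out + theta[k] * Tx
    return out
```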
no code implementations • NeurIPS 2018 • Simon S. Du, Wei Hu, Jason D. Lee
Using a discretization argument, we analyze gradient descent with positive step size for the non-convex low-rank asymmetric matrix factorization problem without any regularization.
no code implementations • 29 Apr 2018 • Kai Yue, Lei Yang, Ruirui Li, Wei Hu, Fan Zhang, Wei Li
For the task of subdecimeter aerial imagery segmentation, fine-grained semantic segmentation results are usually difficult to obtain because of complex remote sensing content and optical conditions.
no code implementations • NeurIPS 2018 • Elad Hazan, Wei Hu, Yuanzhi Li, Zhiyuan Li
We revisit the question of reducing online learning to approximate optimization of the offline problem.
1 code implementation • 17 Mar 2018 • Wei Hu, Yangyu Huang, Fan Zhang, Ruirui Li, Wei Li, Guodong Yuan
Deep convolutional neural networks (CNNs) have greatly improved the Face Recognition (FR) performance in recent years.
Ranked #1 on Face Verification on YouTube Faces DB
no code implementations • 5 Mar 2018 • Sanjeev Arora, Wei Hu, Pravesh K. Kothari
A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable.
no code implementations • 5 Feb 2018 • Simon S. Du, Wei Hu
We consider the convex-concave saddle point problem $\min_{x}\max_{y} f(x)+y^\top A x-g(y)$ where $f$ is smooth and convex and $g$ is smooth and strongly convex.
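An illustrative-only gradient descent-ascent baseline for this saddle-point problem (the paper analyzes faster methods); here $f(x) = \frac{1}{2}\|x\|^2$, $g(y) = \frac{1}{2}\|y\|^2$, and the step size are hand-picked assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y = 5, 3
A = rng.standard_normal((d_y, d_x))
f_grad = lambda x: x              # gradient of f(x) = 0.5 ||x||^2 (smooth, convex)
g_grad = lambda y: y              # gradient of g(y) = 0.5 ||y||^2 (smooth, strongly convex)

x, y, eta = np.ones(d_x), np.ones(d_y), 0.1
for _ in range(2000):
    gx = f_grad(x) + A.T @ y      # d/dx [ f(x) + y^T A x - g(y) ]
    gy = A @ x - g_grad(y)        # d/dy [ f(x) + y^T A x - g(y) ]
    x, y = x - eta * gx, y + eta * gy
print(np.linalg.norm(x), np.linalg.norm(y))   # both approach the saddle point at the origin
```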
no code implementations • 1 Feb 2018 • Wei Hu, Zhao Song, Lin F. Yang, Peilin Zhong
We consider the $k$-means clustering problem in the dynamic streaming setting, where points from a discrete Euclidean space $\{1, 2, \ldots, \Delta\}^d$ can be dynamically inserted to or deleted from the dataset.
2 code implementations • 1 Sep 2017 • Ruirui Li, Wenjie Liu, Lei Yang, Shihao Sun, Wei Hu, Fan Zhang, Wei Li
Semantic segmentation is a fundamental research in remote sensing image processing.
1 code implementation • 16 Aug 2017 • Zequn Sun, Wei Hu, Chengkai Li
Our experimental results on real-world datasets show that this approach significantly outperforms the state-of-the-art embedding approaches for cross-lingual entity alignment and could be complemented with methods based on machine translation.
no code implementations • NeurIPS 2017 • Zeyuan Allen-Zhu, Elad Hazan, Wei Hu, Yuanzhi Li
We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball.
no code implementations • 17 Jul 2017 • Meng Wang, Jiaheng Zhang, Jun Liu, Wei Hu, Sen Wang, Xue Li, Wenqiang Liu
Electronic medical records contain multi-format electronic medical data that consist of an abundance of medical knowledge.
no code implementations • NeurIPS 2016 • Wei Chen, Wei Hu, Fu Li, Jian Li, Yu Liu, Pinyan Lu
Our framework enables a much larger class of reward functions such as the $\max()$ function and nonlinear utility functions.
no code implementations • 10 Apr 2014 • Wei Hu, Wei Li, Fan Zhang, Qian Du
Decolorization is the process to convert a color image or video to its grayscale version, and it has received great attention in recent years.