1 code implementation • 28 Mar 2025 • Kaiyuan Yang, Huang Ouyang, Xinyi Wang, Bingjie Lu, Yanbo Wang, Charith Abhayaratne, Sizhao Li, Long Jin, Tiantai Deng
This paper introduces Natural-Level Synthesis, an innovative approach for generating hardware with generative artificial intelligence at both the system level and the component level.
1 code implementation • 17 Dec 2024 • Long Jin, Han Nong, Liangming Chen, Zhenming Su
The insufficient generalization of adaptive moment estimation (Adam) has hindered its broader application.
no code implementations • 21 Oct 2024 • Dong-Ho Lee, Adam Kraft, Long Jin, Nikhil Mehta, Taibai Xu, Lichan Hong, Ed H. Chi, Xinyang Yi
In this paper, we propose a Simple Training-free Approach for Recommendation (STAR), a framework that utilizes LLMs and can be applied to various recommendation tasks without fine-tuning, while maintaining high-quality recommendation performance.
no code implementations • 22 Jul 2024 • Alicia Y. Tsai, Adam Kraft, Long Jin, Chenwei Cai, Anahita Hosseini, Taibai Xu, Zemin Zhang, Lichan Hong, Ed H. Chi, Xinyang Yi
Recent advancements have showcased the potential of Large Language Models (LLMs) in executing reasoning tasks, particularly facilitated by Chain-of-Thought (CoT) prompting.
1 code implementation • 27 Jun 2024 • Wen Zhang, Long Jin, Yushan Zhu, Jiaoyan Chen, Zhiwei Huang, Junjie Wang, Yin Hua, Lei Liang, Huajun Chen
Furthermore, we demonstrate the potential of our method for more general QA tasks: QA over mixed structured data and QA across structured data.
1 code implementation • 20 Dec 2023 • Weigang Lu, Ziyu Guan, Wei Zhao, Yaming Yang, Long Jin
However, due to the uneven location distribution of labeled nodes in the graph, labeled nodes are only accessible to a small portion of unlabeled nodes, leading to the "under-reaching" issue.
no code implementations • 10 Nov 2023 • Huan Gui, Ruoxi Wang, Ke Yin, Long Jin, Maciej Kula, Taibai Xu, Lichan Hong, Ed H. Chi
We identify two key challenges in applying the vanilla Transformer architecture to web-scale recommender systems: (1) the Transformer architecture fails to capture heterogeneous feature interactions in the self-attention layer; (2) the serving latency of the Transformer architecture may be too high for deployment in web-scale recommender systems.
1 code implementation • 19 Oct 2023 • Zhiwei Huang, Juan Li, Long Jin, Junjie Wang, Mingchen Tu, Yin Hua, Zhiqiang Liu, Jiawei Meng, Wen Zhang
Specifically, for each conference, we first organize academic conference data in a tree-structured format through a semi-automated method.
no code implementations • 24 Aug 2023 • Ziqi Yang, Zhongyu Li, Chen Liu, Xiangde Luo, Xingguang Wang, Dou Xu, Chaoqun Li, Xiaoying Qin, Meng Yang, Long Jin
To make full use of pixel-level and cell-level features dynamically, we propose an asymmetric co-training framework combining a deep graph convolutional network and a convolutional neural network for multi-class histopathological image classification.
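As a rough sketch of such a two-branch design (all module names and the fusion scheme below are hypothetical illustrations, not the paper's exact architecture), a CNN branch can produce pixel-level logits while a small GCN operates on cell-graph features:

```python
# Hypothetical two-branch co-training sketch: a CNN for pixel-level
# features and a GCN over a cell graph. Illustrative, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: row-normalized adjacency matrix of the cell graph
        return F.relu(self.lin(adj @ x))

class CoTrainingModel(nn.Module):
    def __init__(self, num_classes, cell_feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(  # pixel-level branch
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes))
        self.gcn = SimpleGCNLayer(cell_feat_dim, num_classes)  # cell-level branch

    def forward(self, image, cell_feats, adj):
        logits_cnn = self.cnn(image)
        logits_gcn = self.gcn(cell_feats, adj).mean(dim=0, keepdim=True)  # pool cells
        return logits_cnn, logits_gcn

# Each branch is supervised on labeled data and can additionally learn from
# the other branch's confident predictions, which is the co-training signal.
```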
1 code implementation • 15 Aug 2023 • Long Jin, Zhen Yao, Mingyang Chen, Huajun Chen, Wen Zhang
Though KGE models' capabilities over different relational patterns have been analyzed in theory, and a rough connection has been drawn between better modeling of relational patterns and better KGC performance, a comprehensive quantitative analysis of KGE models over relational patterns remains absent; it is therefore unclear how a model's theoretical support for a relational pattern contributes to its performance on triples associated with that pattern.
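To make the idea of a quantitative per-pattern analysis concrete, here is an illustrative probe assuming a TransE-style scorer (our example, not the paper's protocol): for a symmetric relation, the scores of a triple and its inverse should be close, and TransE can only achieve this when r is near zero.

```python
# Illustrative symmetry probe for a TransE-style scorer (not the paper's
# exact protocol): compare the scores of (h, r, t) and its inverse (t, r, h).
import torch

def transe_score(h, r, t):
    # Higher is better: negative L2 distance ||h + r - t||
    return -torch.norm(h + r - t, p=2, dim=-1)

dim = 50
h, t = torch.randn(dim), torch.randn(dim)
r = torch.randn(dim)

print("score(h, r, t):", transe_score(h, r, t).item())
print("score(t, r, h):", transe_score(t, r, h).item())
# Aggregating this gap over all triples of a relation gives a per-pattern
# measure of how well the embedding respects symmetry.
```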
1 code implementation • journal 2023 • Chuan Qin, Liangming Chen, Zangtai Cai, Mei Liu, Long Jin
As the number of long short-term memory (LSTM) layers increases, vanishing/exploding gradient problems are exacerbated, degrading the performance of the LSTM.
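A quick way to observe this effect (an illustrative experiment, not the paper's) is to stack LSTM layers in PyTorch and inspect the per-layer gradient norms after a backward pass:

```python
# Measure per-layer gradient norms in a stacked LSTM. As depth grows,
# the norms of early layers often shrink (vanishing) or blow up
# (exploding); the hyperparameters here are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)
num_layers = 6
lstm = nn.LSTM(input_size=32, hidden_size=32, num_layers=num_layers)

x = torch.randn(100, 8, 32)  # (seq_len, batch, features)
out, _ = lstm(x)
out.sum().backward()

for k in range(num_layers):
    g = getattr(lstm, f"weight_hh_l{k}").grad
    print(f"layer {k}: grad norm = {g.norm():.4f}")
```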
1 code implementation • 27 Jun 2022 • Liangming Chen, Long Jin, Mingsheng Shang
We first give the interpretation of zero stability in the context of deep learning and then investigate the performance of existing first- and second-order CNNs under different zero-stable circumstances.
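The standard correspondence underlying this line of work reads a residual block as one forward-Euler step, x_{k+1} = x_k + h * f(x_k); the block below shows that reading generically and is not the paper's exact architecture:

```python
# A residual block read as one forward-Euler step of an ODE:
# x_{k+1} = x_k + h * f(x_k). This is the standard setting in which
# numerical-stability notions such as zero stability are discussed.
import torch
import torch.nn as nn

class EulerResBlock(nn.Module):
    def __init__(self, channels, step_size=1.0):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.h = step_size  # Euler step size

    def forward(self, x):
        return x + self.h * self.f(x)  # x_{k+1} = x_k + h * f(x_k)
```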
1 code implementation • NeurIPS 2021 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a design principle to decouple the depth and scope of GNNs: to generate the representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph.
Ranked #3 on Node Classification on Reddit
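A minimal sketch of this depth/scope decoupling, assuming PyTorch Geometric is available (the helper below is illustrative, not the authors' released code):

```python
# Decoupled depth and scope: extract a bounded-size localized subgraph
# around the target node, then run a GNN of arbitrary depth on it.
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph

class DeepGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden, depth):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim, hidden)] +
            [GCNConv(hidden, hidden) for _ in range(depth - 1)])

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        return x

def embed_target(model, x, edge_index, target, scope_hops=2):
    # Scope: a bounded 2-hop subgraph around the target node.
    subset, sub_edge_index, mapping, _ = k_hop_subgraph(
        target, scope_hops, edge_index, relabel_nodes=True)
    # Depth: the GNN can be arbitrarily deep, but messages never
    # leave the extracted subgraph.
    h = model(x[subset], sub_edge_index)
    return h[mapping]  # representation of the target node
```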
2 code implementations • 9 Jul 2021 • Mei Liu, Liangming Chen, Xiaohao Du, Long Jin, Mingsheng Shang
The experimental results also demonstrate that the proposed method can be incorporated into various deep neural networks to improve their performance.
no code implementations • IEEE Transactions on Systems, Man, and Cybernetics: Systems 2021 • Dongyang Fu, Haoen Huang, Lin Wei, Xiuchun Xiao, Long Jin, Shan Liao, Jialiang Fan, Zhengtai Xie
The Newton-Raphson iterative algorithm has been extensively employed in the fields of basic research and engineering.
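For reference, the iteration in question is x_{n+1} = x_n - f(x_n)/f'(x_n); a minimal implementation:

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n),
# shown here finding sqrt(2) as a root of f(x) = x**2 - 2.
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.41421356..., i.e. sqrt(2)
```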
2 code implementations • 2 Dec 2020 • Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, Ren Chen
We propose a simple "deep GNN, shallow sampler" design principle to improve both GNN accuracy and efficiency: to generate the representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph.
2 code implementations • NeurIPS 2021 • Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin
In this paper, we provide a theory of using graph neural networks (GNNs) for multi-node representation learning (where we are interested in learning a representation for a set of more than one node, such as a link).
Ranked #1 on Link Property Prediction on ogbl-citation2
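As an illustration of the object of study (a sketch, not the paper's construction): a naive link representation pools two independently computed node embeddings, whereas a labeling-trick-style approach first marks the target nodes so the GNN can distinguish them:

```python
# Contrast between a naive pairwise pooling and a labeling-trick-style
# input augmentation. Details here are illustrative.
import torch

def naive_link_repr(node_emb, u, v):
    # Hadamard product of independently computed node embeddings
    return node_emb[u] * node_emb[v]

def add_target_labels(x, u, v):
    # Append an indicator feature marking the two target nodes;
    # GNN embeddings are then recomputed on the augmented features,
    # making the representation depend on the node set as a whole.
    label = torch.zeros(x.size(0), 1)
    label[u] = 1.0
    label[v] = 1.0
    return torch.cat([x, label], dim=1)
```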
no code implementations • 28 Sep 2020 • Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, Long Jin
Graph neural networks (GNNs) have achieved great success in recent years.
no code implementations • 14 Sep 2020 • Liangming Chen, Long Jin, Xiujuan Du, Shuai Li, Mei Liu
With visualizations of loss landscapes, we evaluate the flatness of the minima obtained on CIFAR-100 by both the original optimizers and optimizers enhanced with VDMs.
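A minimal sketch of such a flatness check, in the spirit of these visualizations rather than the authors' exact procedure: sweep the weights along a random direction and record the loss at each offset.

```python
# 1-D loss-landscape slice: perturb the weights along a random direction
# and evaluate the loss at each offset. A flat minimum shows a wide,
# shallow bowl around alpha = 0.
import copy
import torch

def loss_slice(model, loss_fn, data, target, radius=1.0, steps=21):
    direction = [torch.randn_like(p) for p in model.parameters()]
    base = copy.deepcopy(model.state_dict())
    losses = []
    for alpha in torch.linspace(-radius, radius, steps):
        model.load_state_dict(base)  # reset, then step along the direction
        with torch.no_grad():
            for p, d in zip(model.parameters(), direction):
                p.add_(alpha * d)
            losses.append(loss_fn(model(data), target).item())
    model.load_state_dict(base)  # restore the original weights
    return losses
```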
no code implementations • 24 Jul 2020 • Liangming Chen, Long Jin, Xiujuan Du, Shuai Li, Mei Liu
Furthermore, flatter minima can be obtained by exploiting the proposed deformation functions; this is verified on CIFAR-100 with visualizations of the loss landscapes near the critical points reached by both the original optimizer and the optimizer enhanced with deformation functions.
no code implementations • ICCV 2017 • Justin Lazarow, Long Jin, Zhuowen Tu
We study unsupervised learning by developing a generative model built from progressively learned deep convolutional neural networks.
no code implementations • NeurIPS 2017 • Long Jin, Justin Lazarow, Zhuowen Tu
We propose introspective convolutional networks (ICN) that emphasize the importance of having convolutional neural networks empowered with generative capabilities.
no code implementations • 25 Apr 2017 • Justin Lazarow, Long Jin, Zhuowen Tu
We study unsupervised learning by developing introspective generative modeling (IGM) that attains a generator using progressively learned deep convolutional neural networks.
no code implementations • 28 Nov 2016 • Long Jin, Zeyu Chen, Zhuowen Tu
Instance segmentation has attracted recent attention in computer vision, and most existing methods in this domain include an object detection stage.