Search Results for author: Hao Kong

Found 9 papers, 4 papers with code

Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements?

no code implementations • 25 Jun 2023 Haohan Zhang, Fengrui Hua, Chengjin Xu, Jian Guo, Hao Kong, Ruiting Zuo

The rapid advancement of Large Language Models (LLMs) has led to extensive discourse regarding their potential to boost the returns of quantitative stock trading strategies.
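As a rough illustration of the idea behind the paper, sentiment extracted from news text can be mapped to a trading signal. The sketch below is hypothetical: `score_sentiment` is a stand-in for an actual LLM call, and the keyword lists and threshold rule are made up, not the paper's pipeline.

```python
# Hypothetical sketch: mapping sentiment scores on news headlines to a
# long/flat/short signal. Not the paper's actual pipeline.

def score_sentiment(headline: str) -> float:
    """Stand-in for an LLM call that returns sentiment in [-1, 1].
    A real system would prompt an LLM and parse its answer."""
    positive, negative = ("beats", "surge", "record"), ("miss", "probe", "slump")
    score = sum(w in headline.lower() for w in positive)
    score -= sum(w in headline.lower() for w in negative)
    return max(-1.0, min(1.0, score / 2))

def to_signal(headlines: list[str], threshold: float = 0.2) -> int:
    """Aggregate daily sentiment into a position: +1 long, 0 flat, -1 short."""
    if not headlines:
        return 0
    mean = sum(map(score_sentiment, headlines)) / len(headlines)
    return 1 if mean > threshold else -1 if mean < -threshold else 0

print(to_signal(["Company beats earnings, shares surge"]))          # 1
print(to_signal(["Regulator opens probe into accounting miss"]))    # -1
```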

PosterLayout: A New Benchmark and Approach for Content-aware Visual-Textual Presentation Layout

1 code implementation CVPR 2023 HsiaoYuan Hsu, Xiangteng He, Yuxin Peng, Hao Kong, Qing Zhang

Content-aware visual-textual presentation layout aims to arrange pre-defined elements, including text, logo, and underlay, in the spatial space of a given canvas, which is key to automatic, template-free creative graphic design.

Generative Adversarial Network
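To make the task concrete, a layout can be represented as labeled boxes on a normalized canvas, and quality metrics often check pairwise overlap. The element classes and the overlap check below are illustrative, not the benchmark's actual data schema.

```python
from dataclasses import dataclass

# Illustrative representation of a visual-textual layout: each element
# is a labeled box on a fixed canvas. Not PosterLayout's actual schema.

@dataclass
class Element:
    cls: str   # "text", "logo", or "underlay"
    x: float   # left edge, normalized to [0, 1]
    y: float   # top edge, normalized to [0, 1]
    w: float
    h: float

def overlap(a: Element, b: Element) -> float:
    """Intersection area of two boxes; layout metrics typically penalize
    text/logo collisions while allowing underlay beneath text."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    return ix * iy

layout = [Element("text", 0.1, 0.05, 0.8, 0.15),
          Element("logo", 0.4, 0.75, 0.2, 0.2)]
print(overlap(layout[0], layout[1]))  # 0.0 -> no collision
```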

You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms

1 code implementation • 30 Aug 2022 Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, Weichen Liu

Benefiting from its search efficiency, differentiable neural architecture search (NAS) has emerged as the dominant approach for automatically designing competitive deep neural networks (DNNs).

Neural Architecture Search
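The core trick in differentiable NAS is relaxing the discrete choice among candidate operations into a softmax-weighted mixture, so architecture parameters can be trained by gradient descent. Below is a minimal DARTS-style sketch; the operation set and tensor sizes are made up and do not reflect this paper's search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal DARTS-style mixed operation: architecture weights `alpha` are
# learned jointly with network weights; after search, argmax(alpha)
# selects the discrete operation. Illustrative only.

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),                      # skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)  # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

op = MixedOp(channels=8)
out = op(torch.randn(1, 8, 16, 16))
print(out.shape, op.alpha.softmax(0))
```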

SymNMF-Net for The Symmetric NMF Problem

no code implementations • 26 May 2022 Mingjie Li, Hao Kong, Zhouchen Lin

Furthermore, we analyze the constraints on the inversion layer to guarantee, to a certain extent, the stability of the network's output.

Clustering
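For reference, symmetric NMF factorizes a nonnegative similarity matrix as A ≈ HHᵀ with H ≥ 0, which is why it is tied to clustering. A plain projected-gradient sketch of the classical iteration that such unrolled networks build on is below; the step size and iteration count are arbitrary, and this does not reproduce the paper's unrolled architecture.

```python
import numpy as np

# Symmetric NMF: given a nonnegative symmetric matrix A, find H >= 0
# minimizing ||A - H H^T||_F^2, via projected gradient descent.

def symnmf(A: np.ndarray, rank: int, steps: int = 500, eta: float = 1e-3):
    rng = np.random.default_rng(0)
    H = rng.random((A.shape[0], rank))
    for _ in range(steps):
        grad = 4 * (H @ H.T - A) @ H         # gradient of ||A - HH^T||_F^2
        H = np.maximum(0.0, H - eta * grad)  # project onto H >= 0
    return H

A = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
H = symnmf(A, rank=2)
print(np.round(H @ H.T, 2))  # approximately recovers A's block structure
```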

Fast and Differentiable Matrix Inverse and Its Extension to SVD

no code implementations • 1 Jan 2021 Xingyu Xie, Hao Kong, Jianlong Wu, Guangcan Liu, Zhouchen Lin

First, for matrix inversion, we provide a differentiable yet efficient method named LD-Minv, a learnable deep neural network (DNN) in which each layer is an $L$-th order matrix polynomial.
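The classical, non-learned analogue of such a layer is the Newton–Schulz iteration: each step is a fixed matrix polynomial and the whole recursion is differentiable end-to-end. The sketch below is the textbook iteration, not LD-Minv itself, which makes the polynomial coefficients learnable.

```python
import numpy as np

# Newton-Schulz iteration: X_{k+1} = X_k (2I - A X_k) converges
# quadratically to A^{-1} when ||I - A X_0|| < 1. Each step is a matrix
# polynomial, hence differentiable.

def newton_schulz_inverse(A: np.ndarray, steps: int = 20) -> np.ndarray:
    n = A.shape[0]
    # Standard convergent initialization: X_0 = A^T / (||A||_1 ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(steps):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.round(newton_schulz_inverse(A) @ A, 6))  # ~ identity
```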

Bringing AI To Edge: From Deep Learning's Perspective

no code implementations • 25 Nov 2020 Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam

To bridge the gap, a plethora of deep learning techniques and optimization methods have been proposed in recent years: lightweight deep learning models, network compression, and efficient neural architecture search.

Edge-computing • Hardware Aware Neural Architecture Search +2
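Of the three technique families the snippet lists, network compression is the easiest to show in a few lines. Below is a hedged sketch of global magnitude pruning; the layer sizes and sparsity level are arbitrary, and this is a toy illustration rather than any method from the survey.

```python
import numpy as np

# Global magnitude pruning: zero out the smallest-magnitude weights
# across all layers; the resulting sparse model is then fine-tuned.

def magnitude_prune(weights: list[np.ndarray], sparsity: float):
    """Zero the bottom `sparsity` fraction of weights by |magnitude|."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)
    return [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 32)), rng.standard_normal((32, 10))]
pruned = magnitude_prune(layers, sparsity=0.9)
kept = sum((w != 0).sum() for w in pruned) / sum(w.size for w in layers)
print(f"weights kept: {kept:.1%}")  # ~10%
```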

Dynamic Anticipation and Completion for Multi-Hop Reasoning over Sparse Knowledge Graph

1 code implementation EMNLP 2020 Xin Lv, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Wei Zhang, Yichi Zhang, Hao Kong, Suhui Wu

On the one hand, sparse KGs contain less information, which makes it difficult for the model to choose correct paths.
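To make "choosing correct paths" concrete: multi-hop reasoning walks relation edges outward from a query entity, and in a sparse KG many of those edges are missing. The toy KG and hop limit below are made up; this only illustrates the search space, not the paper's model.

```python
from collections import defaultdict

# Toy multi-hop path enumeration over a knowledge graph stored as
# (head, relation, tail) triples.

triples = [("Paris", "capital_of", "France"),
           ("France", "part_of", "Europe"),
           ("Paris", "located_in", "Europe")]

graph = defaultdict(list)
for h, r, t in triples:
    graph[h].append((r, t))

def paths(entity: str, max_hops: int, prefix=()):
    """Yield all relation paths of length <= max_hops from `entity`."""
    for r, t in graph[entity]:
        yield prefix + ((r, t),)
        if max_hops > 1:
            yield from paths(t, max_hops - 1, prefix + ((r, t),))

for p in paths("Paris", max_hops=2):
    print(p)
```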

Maximum-and-Concatenation Networks

1 code implementation ICML 2020 Xingyu Xie, Hao Kong, Jianlong Wu, Wayne Zhang, Guangcan Liu, Zhouchen Lin

While successful in many fields, deep neural networks (DNNs) still suffer from some open problems such as bad local minima and unsatisfactory generalization performance.
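The title suggests layers built from a maximum operation combined with concatenation. The sketch below is only a guess at that shape, pairing a maxout unit with a skip concatenation; it is not the paper's actual layer definition.

```python
import torch
import torch.nn as nn

# Illustrative guess at a "maximum + concatenation" layer: elementwise
# max of two linear maps (a maxout unit), with the layer input
# concatenated as a skip path. NOT the paper's exact formulation.

class MaxConcatLayer(nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.a = nn.Linear(dim_in, dim_out)
        self.b = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        return torch.cat([torch.maximum(self.a(x), self.b(x)), x], dim=-1)

layer = MaxConcatLayer(8, 4)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 12])
```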

Tensor Q-Rank: New Data Dependent Definition of Tensor Rank

no code implementations • 26 Oct 2019 Hao Kong, Canyi Lu, Zhouchen Lin

Recently, the Tensor Nuclear Norm (TNN) regularization based on t-SVD has been widely used in various low tubal-rank tensor recovery tasks.
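For reference, the t-SVD-based TNN of a 3-way tensor is the sum of the matrix nuclear norms of its frontal slices after an FFT along the third mode; some definitions include a 1/n₃ normalization, which the sketch below adopts. The low-tubal-rank test tensor is a toy construction.

```python
import numpy as np

# Tensor Nuclear Norm (TNN) under t-SVD: FFT along the third mode, then
# sum the matrix nuclear norms of the frontal slices (normalized by n3).

def tnn(X: np.ndarray) -> float:
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)  # frontal slices in the Fourier domain
    return sum(np.linalg.norm(Xf[:, :, k], ord="nuc")
               for k in range(n3)) / n3

rng = np.random.default_rng(0)
# Each frontal slice is a rank-1 outer product -> tubal rank 1.
low_rank = rng.standard_normal((10, 1, 4)) * rng.standard_normal((1, 10, 4))
print(f"TNN of a low-tubal-rank tensor: {tnn(low_rank):.3f}")
```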
