no code implementations • 7 Nov 2017 • Liang Shen, Zihan Yue, Fan Feng, Quan Chen, Shihao Liu, Jie Ma
In this paper, a low-light image enhancement model based on a convolutional neural network and Retinex theory is proposed.
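For context, the Retinex idea behind such models decomposes an observed image S into reflectance R and illumination L, with S = R ∘ L; enhancement then brightens the illumination and recombines. The sketch below is a hand-crafted, single-scale illustration of that decomposition only, not the paper's CNN; the Gaussian-blur illumination estimate and the gamma value are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(img, sigma=30.0, gamma=0.6, eps=1e-3):
    """Toy single-scale Retinex-style enhancement.

    img: float array in [0, 1] with shape (H, W, 3).
    A CNN-based model would learn this decomposition instead of
    hand-crafting the illumination estimate as done here.
    """
    illum = gaussian_filter(img.max(axis=2), sigma=sigma)   # coarse illumination map L
    illum = np.clip(illum, eps, 1.0)
    reflectance = img / illum[..., None]                    # S = R * L  =>  R = S / L
    enhanced_illum = illum ** gamma                          # brighten dark illumination
    out = reflectance * enhanced_illum[..., None]
    return np.clip(out, 0.0, 1.0)

# Usage: enhanced = retinex_enhance(low_light_image_float01)
```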
no code implementations • 21 Jan 2018 • Liang Shen, Zihan Yue, Quan Chen, Fan Feng, Jie Ma
On the other hand, the accumulation of rain streaks over long distances makes the rainy image appear veiled by haze.
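This haze-like veil from accumulated distant rain is commonly described with the standard atmospheric scattering model from the dehazing literature. The formulation below is that generic model, not necessarily the exact one used in this paper; B is the clean scene radiance, A the atmospheric light, t the transmission, β the scattering coefficient and d the scene depth.

```latex
% Generic atmospheric scattering model (dehazing literature)
O(x) = B(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```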
2 code implementations • 15 Nov 2019 • Qi She, Fan Feng, Xinyue Hao, Qihan Yang, Chuanlin Lan, Vincenzo Lomonaco, Xuesong Shi, Zhengwei Wang, Yao Guo, Yimin Zhang, Fei Qiao, Rosa H. M. Chan
Yet robotic vision poses unique challenges for applying visual algorithms developed on these standard computer vision datasets, because such datasets implicitly assume non-varying distributions over a fixed set of tasks.
no code implementations • 26 Apr 2020 • Qi She, Fan Feng, Qi Liu, Rosa H. M. Chan, Xinyue Hao, Chuanlin Lan, Qihan Yang, Vincenzo Lomonaco, German I. Parisi, Heechul Bae, Eoin Brophy, Baoquan Chen, Gabriele Graffieti, Vidit Goel, Hyonyoung Han, Sathursan Kanagarajah, Somesh Kumar, Siew-Kei Lam, Tin Lun Lam, Liang Ma, Davide Maltoni, Lorenzo Pellegrini, Duvindu Piyasena, ShiLiang Pu, Debdoot Sheet, Soonyong Song, Youngsung Son, Zhengwei Wang, Tomas E. Ward, Jianwen Wu, Meiqing Wu, Di Xie, Yangsheng Xu, Lin Yang, Qiaoyong Zhong, Liguang Zhou
This report summarizes the IROS 2019 Lifelong Robotic Vision Competition (Lifelong Object Recognition Challenge), with methods and results from the top 8 finalists (out of over 150 teams).
no code implementations • 6 May 2020 • Yiting Li, Haiyue Zhu, Sichao Tian, Fan Feng, Jun Ma, Chek Sing Teo, Cheng Xiang, Prahlad Vadakkepat, Tong Heng Lee
Incremental few-shot learning is highly desirable for practical robotics applications.
no code implementations • 9 Feb 2021 • Fan Feng, Daniel Duffy, John S. Biggins, Mark Warner
In contraction/elongation systems such as LCEs, we find an infinite set of compatible interfaces between any pair of patterns along which the metric is discontinuous, and a finite number across which the metric is continuous.
Soft Condensed Matter
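For readers less familiar with the metric language used in the result above, a brief sketch of the textbook 2D setup is given below; the forms shown are the standard ones and may differ in detail from this paper. Here λ is the contraction along the director n̂, ν the opto-thermal Poisson ratio, n̂⊥ the in-plane normal to the director, and t̂ a unit tangent to the interface.

```latex
% Textbook metric induced by a planar director pattern (illustrative, not the paper's derivation)
\bar{a} = \lambda^{2}\,\hat{n}\otimes\hat{n} + \lambda^{-2\nu}\,\hat{n}_{\perp}\otimes\hat{n}_{\perp},
\qquad
\text{compatibility across an interface:}\quad
\hat{t}\cdot\bar{a}_{1}\hat{t} \;=\; \hat{t}\cdot\bar{a}_{2}\hat{t}.
```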
no code implementations • 30 Apr 2021 • Chi-Man Wong, Fan Feng, Wen Zhang, Chi-Man Vong, Hui Chen, Yichi Zhang, Peng He, Huan Chen, Kun Zhao, Huajun Chen
We first construct a billion-scale conversation knowledge graph (CKG) from information about users, items and conversations, and then pretrain the CKG with a knowledge graph embedding method and a graph convolutional network to encode its semantic and structural information, respectively. To make the CTR prediction model aware of the users' current state and the relationship between dialogues and items, we introduce user-state and dialogue-interaction representations based on the pretrained CKG and propose K-DCN. In K-DCN, we fuse the user-state representation, the dialogue-interaction representation and other standard feature representations via a deep cross network, which produces the ranking of candidate items to recommend. Experiments show that our proposal significantly outperforms baselines, and we demonstrate its real-world application in AliMe.
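As an illustration of the deep cross network fusion step described above, the sketch below shows a generic DCN-style ranker; the module names, dimensions and layer counts are made up for the example and this is not the authors' K-DCN implementation.

```python
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One cross layer: x_{l+1} = x_0 * (w^T x_l) + b + x_l (explicit feature crossing)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0, xl):
        return x0 * (xl @ self.w).unsqueeze(-1) + self.b + xl

class DCNRanker(nn.Module):
    """Fuses user-state, dialogue-interaction and other feature vectors, then
    scores a candidate item for CTR-style ranking (hypothetical dimensions)."""
    def __init__(self, dim=64, n_cross=3):
        super().__init__()
        in_dim = dim * 3                                 # concat of the three representations
        self.cross = nn.ModuleList([CrossLayer(in_dim) for _ in range(n_cross)])
        self.deep = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 64), nn.ReLU())
        self.out = nn.Linear(in_dim + 64, 1)

    def forward(self, user_state, dialogue_repr, other_feats):
        x0 = torch.cat([user_state, dialogue_repr, other_feats], dim=-1)
        xl = x0
        for layer in self.cross:                         # explicit crossing branch
            xl = layer(x0, xl)
        score = self.out(torch.cat([xl, self.deep(x0)], dim=-1))
        return score.squeeze(-1)                         # higher score = ranked higher

# Usage (toy): DCNRanker()(torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 64))
```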
no code implementations • 22 Jun 2021 • Jin Zhang, Fan Feng, Pere Marti-Puig, Cesar F. Caiafa, Zhe Sun, Feng Duan, Jordi Solé-Casals
Empirical mode decomposition (EMD) has developed into a prominent tool for adaptive, scale-based signal analysis in various fields like robotics, security and biomedical engineering.
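To make the sifting idea behind EMD concrete, here is a heavily simplified sketch (fixed number of sifting passes, cubic-spline envelopes, no proper stopping criteria or boundary handling); production implementations such as PyEMD do considerably more, and the thresholds below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def extract_imfs(signal, n_imfs=3, n_sift=10):
    """Toy empirical mode decomposition: repeatedly sift out oscillatory modes (IMFs)."""
    t = np.arange(len(signal))
    residual = signal.astype(float).copy()
    imfs = []
    for _ in range(n_imfs):
        h = residual.copy()
        for _ in range(n_sift):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:       # too few extrema: treat h as the trend
                break
            upper = CubicSpline(maxima, h[maxima])(t)    # upper envelope through maxima
            lower = CubicSpline(minima, h[minima])(t)    # lower envelope through minima
            h = h - (upper + lower) / 2.0                # subtract the local mean
        imfs.append(h)
        residual = residual - h
    return imfs, residual

# Usage (toy two-tone signal):
# n = np.arange(1000)
# x = np.sin(2 * np.pi * 0.05 * n) + 0.5 * np.sin(2 * np.pi * 0.005 * n)
# imfs, trend = extract_imfs(x)
```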
1 code implementation • ICLR 2022 • Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang
We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.
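Below is a rough, heavily hedged sketch of the general idea of adapting only a low-dimensional change encoding while keeping the policy fixed; the names, dimensions, network shapes and training loop are assumptions for illustration and do not reproduce the paper's method.

```python
import torch
import torch.nn as nn

# Assumed given from source-domain training: a dynamics model and a policy,
# both conditioned on a low-dimensional change factor theta.
dynamics = nn.Sequential(nn.Linear(8 + 2 + 4, 64), nn.ReLU(), nn.Linear(64, 8))  # (s, a, theta) -> s'
policy   = nn.Sequential(nn.Linear(8 + 4, 64), nn.ReLU(), nn.Linear(64, 2))      # (s, theta) -> a

theta = nn.Parameter(torch.zeros(4))          # compact encoding of what changed in the target domain
opt = torch.optim.Adam([theta], lr=1e-2)      # only theta is optimized; all network weights stay frozen

def adapt(few_shot_transitions, steps=200):
    """Fit theta to a handful of target-domain transitions (s, a, s')."""
    s, a, s_next = few_shot_transitions       # tensors of shape (N, 8), (N, 2), (N, 8)
    for _ in range(steps):
        pred = dynamics(torch.cat([s, a, theta.expand(len(s), -1)], dim=-1))
        loss = ((pred - s_next) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()

def act(state, theta):
    # Reuse the frozen policy conditioned on the adapted theta; no further RL training.
    return policy(torch.cat([state, theta], dim=-1))
```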
no code implementations • 27 Jul 2021 • Alessio Xompero, Santiago Donaher, Vladimir Iashin, Francesca Palermo, Gökhan Solak, Claudio Coppola, Reina Ishikawa, Yuichi Nagao, Ryo Hachiuma, Qi Liu, Fan Feng, Chuanlin Lan, Rosa H. M. Chan, Guilherme Christmann, Jyun-Ting Song, Gonuguntla Neeharika, Chinnakotla Krishna Teja Reddy, Dinesh Jain, Bakhtawar Ur Rehman, Andrea Cavallaro
In this paper, we present a range of methods and an open framework to benchmark acoustic and visual perception for the estimation of the capacity of a container, and the type, mass, and amount of its content.
no code implementations • 30 Mar 2022 • Fan Feng, Biwei Huang, Kun Zhang, Sara Magliacane
Dealing with non-stationarity in environments (e.g., in the transition dynamics) and objectives (e.g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL).
no code implementations • 4 Aug 2022 • Qihan Yang, Fan Feng, Rosa Chan
Finally, a practical solution for selecting replay methods for various data distributions is provided.
no code implementations • 5 May 2023 • Yuanxing Liu, Weinan Zhang, Baohua Dong, Yan Fan, Hang Wang, Fan Feng, Yifan Chen, Ziyu Zhuang, Hengbin Cui, Yongbin Li, Wanxiang Che
In this paper, we construct a user needs-centric E-commerce conversational recommendation dataset (U-NEED) from real-world E-commerce scenarios.
1 code implementation • 23 Oct 2023 • Yuanxing Liu, Wei-Nan Zhang, Yifan Chen, Yuchi Zhang, Haopeng Bai, Fan Feng, Hengbin Cui, Yongbin Li, Wanxiang Che
This paper investigates the effectiveness of combining LLM and CRS in E-commerce pre-sales dialogues, proposing two collaboration methods: CRS assisting LLM and LLM assisting CRS.
no code implementations • 22 Dec 2023 • Yin Luo, Qingchao Kong, Nan Xu, Jia Cao, Bao Hao, Baoyu Qu, Bo Chen, Chao Zhu, Chenyang Zhao, Donglei Zhang, Fan Feng, Feifei Zhao, Hailong Sun, Hanxuan Yang, Haojun Pan, Hongyu Liu, Jianbin Guo, Jiangtao Du, Jingyi Wang, Junfeng Li, Lei Sun, Liduo Liu, Lifeng Dong, Lili Liu, Lin Wang, Liwen Zhang, Minzheng Wang, Pin Wang, Ping Yu, Qingxiao Li, Rui Yan, Rui Zou, Ruiqun Li, Taiwen Huang, Xiaodong Wang, Xiaofei Wu, Xin Peng, Xina Zhang, Xing Fang, Xinglin Xiao, Yanni Hao, Yao Dong, Yigang Wang, Ying Liu, Yongyu Jiang, Yungan Wang, Yuqi Wang, Zhangsheng Wang, Zhaoxin Yu, Zhen Luo, Wenji Mao, Lei Wang, Dajun Zeng
As one of the latest advances in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation in many real-world tasks, and have even been regarded as a potential path to artificial general intelligence.