no code implementations • 30 Dec 2024 • Xingjian Tao, Yiwei Wang, Yujun Cai, Zhicheng Yang, Jing Tang
Large language models (LLMs) have shown promise as knowledge bases, yet they often struggle with question-answering tasks and are prone to hallucinations.
1 code implementation • 26 Nov 2024 • Xinhao Liu, Jintong Li, Yicheng Jiang, Niranjan Sujay, Zhicheng Yang, Juexiao Zhang, John Abanes, Jing Zhang, Chen Feng
This work shows the potential of using abundant online video data to develop robust navigation policies for embodied agents in dynamic urban settings.
no code implementations • 26 Sep 2024 • Zhangpu Li, Changhong Zou, Suxue Ma, Zhicheng Yang, Chen Du, YouBao Tang, Zhenjie Cao, Ning Zhang, Jui-Hsin Lai, Ruei-Sung Lin, Yuan Ni, Xingzhi Sun, Jing Xiao, Jieke Hou, Kai Zhang, Mei Han
In our online medical consultation scenario, a doctor responds to the texts and images provided by a patient in multiple rounds to diagnose her/his health condition, forming a multi-turn multimodal medical dialogue format.
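Such a multi-turn multimodal dialogue can be represented as an ordered list of messages, each carrying a role, text, and optional images. The structure below is a minimal illustrative sketch, not the paper's actual data schema; all field names are assumptions.

```python
# Hypothetical sketch of a multi-turn multimodal medical dialogue record.
# Field names and structure are illustrative, not the paper's actual schema.
dialogue = [
    {"role": "patient", "text": "I have had a rash on my arm for a week.",
     "images": ["rash_photo_1.jpg"]},   # the patient may attach images
    {"role": "doctor", "text": "Is the area itchy or painful? Please send a closer photo.",
     "images": []},
    {"role": "patient", "text": "It itches mostly at night.",
     "images": ["rash_photo_2_closeup.jpg"]},
    {"role": "doctor", "text": "This may be contact dermatitis; I recommend ...",
     "images": []},
]

# Each training sample pairs the dialogue history with the doctor's next reply.
for turn_idx, msg in enumerate(dialogue):
    if msg["role"] == "doctor":
        history, target = dialogue[:turn_idx], msg["text"]
```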
1 code implementation • 13 Jul 2024 • Zhicheng Yang, Yiwei Wang, Yinya Huang, Zhijiang Guo, Wei Shi, Xiongwei Han, Liang Feng, Linqi Song, Xiaodan Liang, Jing Tang
Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ReSocratic.
2 code implementations • 4 Jun 2024 • Jianqiao Lu, Yingjia Wan, Zhengying Liu, Yinya Huang, Jing Xiong, Chengwu Liu, Jianhao Shen, Hui Jin, Jipeng Zhang, Haiming Wang, Zhicheng Yang, Jing Tang, Zhijiang Guo
Autoformalization, the conversion of natural language mathematics into formal languages, offers significant potential for advancing mathematical reasoning.
no code implementations • 28 May 2024 • Zihui Wang, Zheng Wang, Lingjuan Lyu, Zhaopeng Peng, Zhicheng Yang, Chenglu Wen, Rongshan Yu, Cheng Wang, Xiaoliang Fan
Second, to implement the BCF, we design a submodel allocation module with a theoretical guarantee of fairness.
1 code implementation • 23 May 2024 • Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang, Jianqiao Lu, Zhicheng Yang, Jing Tang, Jian Yin, Zhenguo Li, Xiaodan Liang
This approach allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels.
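The level-by-level idea can be illustrated with a toy proof skeleton: the first level states the theorem and defers an intermediate conjecture, and a deeper level then discharges that conjecture. This is a minimal sketch in Lean 4 for illustration only; the paper's proof assistant, tactics, and lemma names may differ.

```lean
-- Level 1: outline the overall theorem, deferring the intermediate
-- conjecture with `sorry` so the proof skeleton type-checks first.
theorem outline (a b : Nat) : a + b = b + a := by
  have key : ∀ n m : Nat, n + m = m + n := by
    sorry  -- intermediate conjecture, to be solved at the next level
  exact key a b

-- Level 2: the deferred conjecture is proved separately and can then
-- replace the `sorry` in the level-1 outline.
theorem key_solved (n m : Nat) : n + m = m + n := Nat.add_comm n m
```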
no code implementations • 5 May 2024 • Xiaohan Lin, Qingxing Cao, Yinya Huang, Zhicheng Yang, Zhengying Liu, Zhenguo Li, Xiaodan Liang
We conduct extensive experiments to investigate whether current LMs can generate theorems for the library and whether these generated theorems benefit downstream problem-theorem proving.
1 code implementation • 29 Nov 2023 • Yinya Huang, Ruixin Hong, Hongming Zhang, Wei Shao, Zhicheng Yang, Dong Yu, ChangShui Zhang, Xiaodan Liang, Linqi Song
In this study, we delve into the realm of counterfactual reasoning capabilities of large language models (LLMs).
1 code implementation • 22 Nov 2023 • Zhicheng Yang, Yinya Huang, Jing Xiong, Liang Feng, Xiaodan Liang, Yiwei Wang, Jing Tang
Prompting large language models (LLMs), for example with in-context demonstrations, is a mainstream technique for eliciting strong and reliable complex reasoning (e.g., mathematical reasoning, commonsense reasoning), and it holds potential for human-machine collaborative scientific discovery.
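As a concrete illustration of prompting with in-context demonstrations, a prompt is typically assembled by prepending solved examples to the target question. The snippet below is a generic sketch, not the paper's method; the demonstration texts and the `call_llm` function are placeholders.

```python
# Generic few-shot prompting sketch (placeholder demonstrations and LLM call).
demonstrations = [
    ("Q: Tom has 3 apples and buys 2 more. How many apples does he have?",
     "A: 3 + 2 = 5. The answer is 5."),
    ("Q: A train travels 60 km in 1 hour. How far does it go in 3 hours?",
     "A: 60 * 3 = 180. The answer is 180."),
]

def build_prompt(question: str) -> str:
    """Concatenate in-context demonstrations with the new question."""
    shots = "\n\n".join(f"{q}\n{a}" for q, a in demonstrations)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = build_prompt("Lena reads 12 pages a day. How many pages does she read in 7 days?")
# answer = call_llm(prompt)  # hypothetical LLM API call
```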
1 code implementation • 4 Oct 2023 • Jing Xiong, Zixuan Li, Chuanyang Zheng, Zhijiang Guo, Yichun Yin, Enze Xie, Zhicheng Yang, Qingxing Cao, Haiming Wang, Xiongwei Han, Jing Tang, Chengming Li, Xiaodan Liang
Dual Queries first query the LLM to obtain LLM-generated knowledge such as a chain of thought (CoT), and then query the retriever with both the question and this knowledge to obtain the final exemplars.
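A minimal sketch of the dual-query idea described above: first query the LLM for a chain of thought, then use the question together with that generated knowledge to retrieve exemplars. The `call_llm`, `embed`, and `exemplar_pool` names are placeholders, not the paper's actual implementation.

```python
import numpy as np

def dual_query_retrieve(question, exemplar_pool, call_llm, embed, k=4):
    """Retrieve k exemplars using both the question and LLM-generated knowledge.

    exemplar_pool: list of (exemplar_text, exemplar_embedding) pairs.
    call_llm, embed: placeholder functions for the LLM and the text encoder.
    """
    # Query 1: ask the LLM for intermediate knowledge (e.g., a chain of thought).
    knowledge = call_llm(f"{question}\nLet's think step by step.")

    # Query 2: retrieve exemplars with the question plus the generated knowledge.
    query_vec = embed(question + "\n" + knowledge)
    scores = [float(np.dot(query_vec, e_vec)) for _, e_vec in exemplar_pool]
    top = np.argsort(scores)[::-1][:k]
    return [exemplar_pool[i][0] for i in top]
```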
no code implementations • 21 Jun 2023 • Zheng Wang, Xiaoliang Fan, Zhaopeng Peng, Xueheng Li, Ziqi Yang, Mingkuan Feng, Zhicheng Yang, Xiao Liu, Cheng Wang
Federated learning (FL) has found numerous applications in healthcare, finance, and IoT scenarios.
no code implementations • 9 Jun 2023 • Zepeng Liu, Zhicheng Yang, Mingye Zhu, Andy Wong, Yibing Wei, Mei Han, Jun Yu, Jui-Hsin Lai
Image dehazing is a fundamental low-level computer vision task that can be applied in a variety of contexts.
no code implementations • 23 Jun 2022 • Zhicheng Yang, Jui-Hsin Lai, Jun Zhou, Hang Zhou, Chen Du, Zhongcheng Lai
The Agriculture-Vision Challenge at CVPR is one of the most prominent and competitive challenges for researchers worldwide to bridge the computer vision and agriculture sectors, targeting agricultural pattern recognition from aerial images.
2 code implementations • Findings (NAACL) 2022 • Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Xiaodan Liang
However, current solvers exhibit solving bias, which consists of data bias and learning bias caused by biased datasets and improper training strategies.
2 code implementations • 17 May 2022 • Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, Xiaodan Liang
To address this issue and take a step towards interpretable MWP solving, we first construct a high-quality MWP dataset named InterMWP, which consists of 11,495 MWPs and annotates each solution equation with an interpretable logical formula based on algebraic knowledge as its grounded linguistic logic.
no code implementations • CVPR 2021 • Yuxing Tang, Zhenjie Cao, Yanbo Zhang, Zhicheng Yang, Zongcheng Ji, Yiwei Wang, Mei Han, Jie Ma, Jing Xiao, Peng Chang
Starting with a fully supervised model trained on the data with pixel-level masks, the proposed framework iteratively refines the model itself using the entire weakly labeled data (image-level soft label) in a self-training fashion.
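The iterative self-training loop described above can be sketched as: train on the strongly labeled set, generate pseudo labels for the weakly labeled images, filter them against the image-level soft labels, and retrain. The helper functions below (`train`, `predict_masks`, `filter_with_soft_labels`) are placeholders; this is not the paper's exact procedure.

```python
# Hedged sketch of an iterative self-training loop with weak (image-level) labels.
def self_training(model, strong_set, weak_set, rounds=3):
    """strong_set: images with pixel-level masks; weak_set: (image, soft_label) pairs."""
    model = train(model, strong_set)                       # fully supervised warm start
    for _ in range(rounds):
        pseudo_masks = predict_masks(model, weak_set)      # model labels the weak data
        pseudo_set = filter_with_soft_labels(pseudo_masks, weak_set)  # keep consistent ones
        model = train(model, strong_set + pseudo_set)      # retrain on combined data
    return model
```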
no code implementations • 31 Jan 2021 • Yunkai Yu, Yuyang You, Zhihong Yang, Guozheng Liu, Peiyao Li, Zhicheng Yang, Wenjing Shan
The variations of SROP are synchronized with UI variations across various randomized and sufficiently trained model structures.
3 code implementations • 4 Aug 2020 • Zhuotao Tian, Hengshuang Zhao, Michelle Shu, Zhicheng Yang, Ruiyu Li, Jiaya Jia
It consists of novel designs of (1) a training-free prior mask generation method that not only retains generalization power but also improves model performance (see the sketch below) and (2) a Feature Enrichment Module (FEM) that overcomes spatial inconsistency by adaptively enriching query features with support features and prior masks.
Ranked #71 on Few-Shot Semantic Segmentation on COCO-20i (1-shot)
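For the few-shot segmentation entry above, here is a minimal sketch of a training-free prior mask, assuming it is computed as the per-pixel maximum cosine similarity between query features and masked support features; the tensor shapes and normalization are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prior_mask(query_feat, support_feat, support_mask, eps=1e-7):
    """Training-free prior: per-pixel max cosine similarity between query
    features and masked support features, min-max normalized.
    query_feat, support_feat: (C, H, W); support_mask: (H, W) binary."""
    C, H, W = query_feat.shape
    q = F.normalize(query_feat.view(C, -1), dim=0)                  # (C, HW_q)
    s = F.normalize((support_feat * support_mask).view(C, -1), dim=0)  # (C, HW_s)
    sim = q.t() @ s                                                  # (HW_q, HW_s)
    prior = sim.max(dim=1).values                                    # best-matching support pixel
    prior = (prior - prior.min()) / (prior.max() - prior.min() + eps)
    return prior.view(H, W)
```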
no code implementations • 13 Jun 2019 • Pengyuan Lyu, Zhicheng Yang, Xinhang Leng, Xiao-Jun Wu, Ruiyu Li, Xiaoyong Shen
Irregular scene text, which has complex layouts in 2D space, is challenging for most previous scene text recognizers.