no code implementations • 5 Feb 2025 • Fan Wang, Pengtao Shao, Yiming Zhang, Bo Yu, Shaoshan Liu, Ning Ding, Yang Cao, Yu Kang, Haifeng Wang
We introduce OmniRL, a highly generalizable in-context reinforcement learning (ICRL) model that is meta-trained on hundreds of thousands of diverse tasks.
no code implementations • 27 Jan 2025 • Xing Zhang, Jiaheng Wen, Fangkai Yang, Pu Zhao, Yu Kang, Junhao Wang, Maoquan Wang, Yufan Huang, Elsie Nallipogu, Qingwei Lin, Yingnong Dang, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
The advancement of large language models has intensified the need to modernize enterprise applications and migrate legacy systems to secure, versatile languages.
no code implementations • 23 Jan 2025 • Linghao Zhang, Junhao Wang, Shilin He, Chaoyun Zhang, Yu Kang, Bowen Li, Jiaheng Wen, Chengxing Xie, Maoquan Wang, Yufan Huang, Elsie Nallipogu, Qingwei Lin, Yingnong Dang, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
Large Language Models have advanced automated software development; however, it remains challenging to correctly infer dependencies, namely, to identify the internal components and external packages required for a repository to run successfully.
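As a rough illustration of what dependency inference involves (not the paper's actual pipeline), the sketch below statically scans a Python repository and splits imports into internal modules versus external packages; the stdlib check and all names are assumptions of this toy example.

```python
# Illustrative only: a naive static scan of a Python repository, splitting
# imports into internal modules and external (third-party) packages.
# This is a toy sketch, not the dependency-inference method from the paper.
import ast
import pathlib
import sys

def infer_dependencies(repo_root: str) -> tuple[set[str], set[str]]:
    root = pathlib.Path(repo_root)
    # Names defined inside the repository itself count as internal components.
    local = {p.stem for p in root.rglob("*.py")} | {p.parent.name for p in root.rglob("__init__.py")}
    internal, external = set(), set()
    for path in root.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8", errors="ignore"))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                top = name.split(".")[0]
                if top in local:
                    internal.add(top)
                elif top not in sys.stdlib_module_names:  # Python 3.10+
                    external.add(top)
    return internal, external

if __name__ == "__main__":
    internal, external = infer_dependencies(".")
    print("internal components:", sorted(internal))
    print("external packages:", sorted(external))
```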
no code implementations • 16 Dec 2024 • Yu Kang, Xianghui Sun, Liangyu Chen, Wei Zou
Generating a Chain-of-Thought (CoT) before deriving the answer can effectively improve the reasoning capabilities of large language models (LLMs) and significantly increase the accuracy of the generated answer.
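To make the idea concrete, here is a minimal, hedged sketch of Chain-of-Thought prompting; `complete` is a hypothetical placeholder for any LLM completion call and is not part of the paper.

```python
# Minimal sketch of Chain-of-Thought prompting. `complete` is a hypothetical
# stand-in for any LLM completion call, not a specific vendor API.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

QUESTION = "A pen costs $2 and a notebook costs 3 times as much. How much do both cost together?"

# Direct prompting: ask for the answer immediately.
direct_prompt = f"Question: {QUESTION}\nAnswer:"

# CoT prompting: ask the model to reason step by step before the final answer,
# which is the behavior the abstract credits with higher accuracy.
cot_prompt = (
    f"Question: {QUESTION}\n"
    "Let's think step by step, and give the final line as 'Answer: <value>'."
)
```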
1 code implementation • 27 Nov 2024 • Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Guyue Liu, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
This has paved the way for a new generation of LLM-brained GUI agents capable of interpreting complex GUI elements and autonomously executing actions based on natural language instructions.
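The sketch below shows the generic perceive-plan-act skeleton such GUI agents follow; every name in it (Action, observe, plan_next_action, execute) is hypothetical, and real systems add grounding, safety checks, and error recovery on top of this skeleton.

```python
# Schematic perceive-plan-act loop for an LLM-driven GUI agent (all names hypothetical).
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "done"
    target: str = ""   # GUI element identifier
    text: str = ""     # text to type, if any

def observe() -> str:
    """Would capture and describe the current GUI state (screenshot + element tree)."""
    raise NotImplementedError

def plan_next_action(instruction: str, screen: str, history: list[Action]) -> Action:
    """Would ask the LLM for the next action given the instruction, GUI state, and history."""
    raise NotImplementedError

def execute(action: Action) -> None:
    """Would drive the real GUI, e.g. through an OS automation/accessibility API."""
    raise NotImplementedError

def run_agent(instruction: str, max_steps: int = 20) -> None:
    history: list[Action] = []
    for _ in range(max_steps):
        action = plan_next_action(instruction, observe(), history)
        if action.kind == "done":
            break
        execute(action)
        history.append(action)
```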
no code implementations • 19 Nov 2024 • Yu Kang, Junwei Pan, Jipeng Jin, Shudong Huang, Xiaofeng Gao, Lei Xiao
Modeling feature interactions plays a crucial role in accurately predicting click-through rates (CTR) in advertising systems.
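As one classic, generic way of modeling feature interactions for CTR prediction (not the method proposed in the paper), the sketch below computes the factorization-machine pairwise interaction term using the standard O(n·k) identity.

```python
# Factorization-machine style pairwise interaction term: sum_{i<j} <v_i, v_j> x_i x_j.
# Generic illustration only, not the interaction module proposed in the paper.
import numpy as np

def fm_pairwise_interactions(x: np.ndarray, v: np.ndarray) -> float:
    """x: (n_features,) feature values; v: (n_features, k) latent embeddings."""
    xv = x @ v                       # (k,) sum of embeddings weighted by feature values
    x2v2 = (x ** 2) @ (v ** 2)       # (k,) sum of squared terms
    return 0.5 * float(np.sum(xv ** 2 - x2v2))

x = np.array([1.0, 0.0, 1.0, 1.0])                       # e.g. binarized user/ad features
v = np.random.default_rng(0).normal(size=(4, 8))
print(fm_pairwise_interactions(x, v))
```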
no code implementations • 30 Sep 2024 • Huilin Deng, Hongchen Luo, Wei Zhai, Yang Cao, Yu Kang
Furthermore, we introduce Real Industrial Anomaly Detection (RIAD), a comprehensive IAD dataset with detailed anomaly descriptions and analyses, offering a valuable resource for MLLM-based IAD development.
no code implementations • 19 Aug 2024 • Jun Lu, David Li, Bill Ding, Yu Kang
This paper presents an approach to improve text embedding models through contrastive fine-tuning on small datasets augmented with expert scores.
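A minimal sketch of what such contrastive fine-tuning might look like, assuming an InfoNCE-style loss with in-batch negatives weighted by expert scores; the loss form, weighting scheme, and dimensions are assumptions, not the paper's recipe.

```python
# Illustrative contrastive objective for embedding fine-tuning: positive pairs are
# pulled together, in-batch negatives pushed apart, each pair weighted by an
# expert relevance score in [0, 1]. Generic sketch, not the paper's exact loss.
import torch
import torch.nn.functional as F

def weighted_infonce(anchor: torch.Tensor,
                     positive: torch.Tensor,
                     expert_scores: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """anchor, positive: (batch, dim) embeddings; expert_scores: (batch,)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / temperature            # (batch, batch) similarities
    targets = torch.arange(anchor.size(0))                # diagonal entries are positives
    per_pair = F.cross_entropy(logits, targets, reduction="none")
    return (expert_scores * per_pair).mean()

emb = torch.randn(8, 384)
pos = emb + 0.1 * torch.randn(8, 384)
scores = torch.rand(8)
print(weighted_infonce(emb, pos, scores).item())
```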
no code implementations • 31 Jul 2024 • Zichen Zhang, Hongchen Luo, Wei Zhai, Yang Cao, Yu Kang
To address this, we propose a novel model, PEAR (Phrase-Based Hand-Object Interaction Anticipation), which jointly anticipates interaction intention and manipulation.
no code implementations • 20 Jul 2024 • Shusen Ma, Yu Kang, Peng Bai, Yun-Bo Zhao
Technically, we first extract the temporal features of the input variables through an embedding layer and then compute the dependencies among the input variables via the fast-attention module.
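A rough sketch of that pipeline is given below, with standard multi-head attention standing in for the paper's fast-attention module; layer sizes and shapes are arbitrary assumptions.

```python
# Sketch of the described pipeline: embed each input variable's time series, then
# attend across variables to capture their dependencies. Standard scaled dot-product
# attention stands in for the paper's fast-attention module.
import torch
import torch.nn as nn

class VariableAttention(nn.Module):
    def __init__(self, seq_len: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(seq_len, d_model)          # temporal features per variable
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, n_vars, seq_len) -> (batch, n_vars, d_model)"""
        h = self.embed(x)                                 # embed each variable's series
        out, _ = self.attn(h, h, h)                       # dependencies among variables
        return out

model = VariableAttention(seq_len=96)
y = model(torch.randn(2, 7, 96))
print(y.shape)  # torch.Size([2, 7, 64])
```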
no code implementations • 17 Jul 2024 • Jike Wang, Jianwen Feng, Yu Kang, Peichen Pan, Jingxuan Ge, Yan Wang, Mingyang Wang, Zhenxing Wu, Xingcai Zhang, Jiameng Yu, Xujun Zhang, Tianyue Wang, Lirong Wen, Guangning Yan, Yafeng Deng, Hui Shi, Chang-Yu Hsieh, Zhihui Jiang, Tingjun Hou
Within 11 days, AMP-Designer enabled the de novo design of 18 novel candidates with broad-spectrum potency against Gram-negative bacteria.
1 code implementation • 10 Jul 2024 • Jike Wang, Rui Qin, Mingyang Wang, Meijing Fang, Yangyang Zhang, Yuchen Zhu, Qun Su, Qiaolin Gou, Chao Shen, Odin Zhang, Zhenxing Wu, Dejun Jiang, Xujun Zhang, Huifeng Zhao, Xiaozhe Wan, Zhourui Wu, Liwei Liu, Yu Kang, Chang-Yu Hsieh, Tingjun Hou
This model encodes all molecular information, including 2D and 3D structures as well as molecular property data, into tokens, transforming classification and regression tasks in drug discovery into probabilistic prediction problems and thereby enabling learning under a unified paradigm.
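To illustrate the regression-as-token-prediction idea (bin count and property range below are arbitrary assumptions, not the paper's tokenization), a continuous property can be discretized into bins that act as tokens:

```python
# Sketch of recasting regression as token prediction: a continuous property
# (e.g. logP) is discretized into bins, each bin becomes a token, and the model
# predicts a distribution over those tokens.
import numpy as np

N_BINS = 64
LO, HI = -5.0, 10.0                                   # assumed property range
edges = np.linspace(LO, HI, N_BINS + 1)

def property_to_token(value: float) -> int:
    """Map a continuous property value to a discrete token id."""
    return int(np.clip(np.digitize(value, edges) - 1, 0, N_BINS - 1))

def token_probs_to_estimate(probs: np.ndarray) -> float:
    """Turn a predicted distribution over bins back into a point estimate."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(probs @ centers)

token = property_to_token(2.3)
uniform = np.full(N_BINS, 1.0 / N_BINS)
print(token, token_probs_to_estimate(uniform))
```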
1 code implementation • 27 May 2024 • Fan Wang, Chuan Lin, Yang Cao, Yu Kang
In-context learning (ICL) empowers generative models to address new tasks effectively and efficiently on the fly, without relying on any artificially crafted optimization techniques.
no code implementations • 9 May 2024 • Zichen Zhang, Hongchen Luo, Wei Zhai, Yang Cao, Yu Kang
Building upon this relationship, we establish a novel Bidirectional prOgressive Transformer (BOT), which introduces a Bidirectional Progressive mechanism into the anticipation of interaction intention.
no code implementations • 15 Mar 2024 • Odin Zhang, Yufei Huang, Shichen Cheng, Mengyao Yu, Xujun Zhang, Haitao Lin, Yundian Zeng, Mingyang Wang, Zhenxing Wu, Huifeng Zhao, Zaixi Zhang, Chenqing Hua, Yu Kang, Sunliang Cui, Peichen Pan, Chang-Yu Hsieh, Tingjun Hou
Most earlier 3D structure-based molecular generation approaches follow an atom-wise paradigm, incrementally adding atoms to a partially built molecular fragment within protein pockets.
no code implementations • 27 Feb 2024 • Kaikai An, Fangkai Yang, Junting Lu, Liqun Li, Zhixing Ren, Hao Huang, Lu Wang, Pu Zhao, Yu Kang, Hua Ding, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
Effective incident management is pivotal for the smooth operation of enterprise-level cloud services.
no code implementations • 18 Feb 2024 • Ku Du, Yu Kang
Within the fast-switching framework, we first examine the necessary conditions, beginning by transforming the consensus problem into a stability problem and introducing a new variable; the coupled system achieves cluster synchronization provided the system is controllable, the communication topology switches fast enough, and the coupling strength is sufficiently strong.
1 code implementation • 8 Feb 2024 • Chaoyun Zhang, Liqun Li, Shilin He, Xu Zhang, Bo Qiao, Si Qin, Minghua Ma, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
We introduce UFO, an innovative UI-Focused agent to fulfill user requests tailored to applications on Windows OS, harnessing the capabilities of GPT-Vision.
no code implementations • 24 Jan 2024 • Xuchao Zhang, Supriyo Ghosh, Chetan Bansal, Rujia Wang, Minghua Ma, Yu Kang, Saravan Rajmohan
The results reveal that our in-context learning approach outperforms previously fine-tuned large language models such as GPT-3 by an average of 24.8% across all metrics, with an impressive 49.7% improvement over the zero-shot model.
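Operationally, such an in-context learning approach amounts to retrieving a few similar past cases and prepending them as worked examples before the new incident; the retrieval and prompt format below are illustrative, not the paper's exact setup.

```python
# What "in-context learning" means operationally: prepend a few retrieved
# (incident, root cause) examples before the new incident. Illustrative only.
def build_icl_prompt(new_incident: str, retrieved: list[tuple[str, str]]) -> str:
    """retrieved: list of (incident description, root cause) pairs."""
    parts = []
    for description, root_cause in retrieved:
        parts.append(f"Incident: {description}\nRoot cause: {root_cause}\n")
    parts.append(f"Incident: {new_incident}\nRoot cause:")
    return "\n".join(parts)

examples = [
    ("API latency spiked after deployment of build 1234.", "Regression in connection pooling."),
    ("Storage nodes reporting disk-full alerts.", "Log retention job failed to rotate files."),
]
print(build_icl_prompt("Login requests time out intermittently.", examples))
```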
no code implementations • 19 Dec 2023 • YuXuan Jiang, Chaoyun Zhang, Shilin He, Zhihao Yang, Minghua Ma, Si Qin, Yu Kang, Yingnong Dang, Saravan Rajmohan, Qingwei Lin, Dongmei Zhang
This paper presents a thorough empirical study of queries written in KQL, a domain-specific language (DSL) employed for incident management in a large-scale cloud management system at Microsoft.
1 code implementation • 29 Nov 2023 • Bo Qiao, Liqun Li, Xu Zhang, Shilin He, Yu Kang, Chaoyun Zhang, Fangkai Yang, Hang Dong, Jue Zhang, Lu Wang, Minghua Ma, Pu Zhao, Si Qin, Xiaoting Qin, Chao Du, Yong Xu, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
TaskWeaver provides support for rich data structures, flexible plugin usage, and dynamic plugin selection, and leverages LLM coding capabilities for complex logic.
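The snippet below is a generic illustration of dynamic plugin selection via a registry of natural-language plugin descriptions; it is emphatically not TaskWeaver's actual plugin API, and all names and the keyword-overlap selector are placeholders.

```python
# Generic sketch of dynamic plugin selection: plugins register a description,
# and the orchestrator picks the one that best matches the request.
from typing import Callable

PLUGINS: dict[str, tuple[str, Callable[..., object]]] = {}

def register(name: str, description: str):
    """Register a plugin function together with a natural-language description."""
    def wrap(fn: Callable[..., object]):
        PLUGINS[name] = (description, fn)
        return fn
    return wrap

@register("sql_pull", "pull rows from a database table")
def sql_pull(table: str) -> list[dict]:
    return [{"table": table, "rows": 0}]                      # placeholder body

@register("anomaly_detect", "detect anomalies in a numeric series")
def anomaly_detect(series: list[float]) -> list[int]:
    mean = sum(series) / len(series)
    return [i for i, v in enumerate(series) if abs(v - mean) > 3.0]  # crude placeholder rule

def select_plugin(request: str) -> str:
    """Pick the plugin whose description overlaps the request most (an LLM would do this in practice)."""
    words = set(request.lower().split())
    return max(PLUGINS, key=lambda name: len(set(PLUGINS[name][0].split()) & words))

print(select_plugin("please detect anomalies in this latency series"))
```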
no code implementations • 4 Aug 2023 • Haotian Zhang, Huifeng Zhao, Xujun Zhang, Qun Su, Hongyan Du, Chao Shen, Zhe Wang, Dan Li, Peichen Pan, Guangyong Chen, Yu Kang, Chang-Yu Hsieh, Tingjun Hou
Drug discovery is a highly complicated process, and it is infeasible to entrust it fully to recently developed molecular generation methods.
1 code implementation • 22 Jun 2023 • Tianyue Wang, Xujun Zhang, Odin Zhang, Peichen Pan, Guangyong Chen, Yu Kang, Chang-Yu Hsieh, Tingjun Hou
Protein loop modeling is a highly non-trivial task and remains among the most challenging problems in protein structure prediction.
1 code implementation • 21 Mar 2023 • Saizhe Ding, Jinze Chen, Yang Wang, Yu Kang, Weiguo Song, Jie Cheng, Yang Cao
Event cameras, such as dynamic vision sensors (DVS), are biologically inspired vision sensors that offer advantages over conventional cameras in high dynamic range, low latency, and low power consumption, showing great application potential in many fields.
1 code implementation • 10 Apr 2022 • Yu Kang, Tianqiao Liu, Hang Li, Yang Hao, Wenbiao Ding
Our pre-training framework consists of the following components: (1) Intra-modal Denoising Auto-Encoding (IDAE), which is able to reconstruct input text (audio) representations from a noisy version of itself.
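A minimal sketch of an intra-modal denoising auto-encoding step under assumed dimensions and a Gaussian corruption scheme (the paper's IDAE details may differ):

```python
# Corrupt an input representation, encode-decode it, and train to reconstruct
# the clean version. Dimensions and the corruption scheme are illustrative.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        noisy = reps + 0.1 * torch.randn_like(reps)       # corrupt the input
        return self.decoder(self.encoder(noisy))          # reconstruct clean reps

model = DenoisingAE()
reps = torch.randn(16, 256)                               # text (or audio) representations
loss = nn.functional.mse_loss(model(reps), reps)
loss.backward()
print(loss.item())
```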
4 code implementations • 24 Feb 2022 • Liangsheng Lu, Wei Zhai, Hongchen Luo, Yu Kang, Yang Cao
In this paper, we explore perceiving affordance from a vision-language perspective and consider the challenging phrase-based affordance detection problem, i.e., given a set of phrases describing action purposes, all the object regions in a scene with the same affordance should be detected.
no code implementations • 23 Nov 2021 • Yifan Chang, Wenbo Li, Jian Peng, Bo Tang, Yu Kang, Yinjie Lei, Yuanmiao Gui, Qing Zhu, Yu Liu, Haifeng Li
Unlike previous reviews that mainly focus on the catastrophic forgetting phenomenon in CL, this paper surveys CL from a more macroscopic perspective based on the stability-versus-plasticity mechanism.
no code implementations • 29 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang
While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly focused on the gaps between ANNs and natural neural networks (NNNs).
2 code implementations • 8 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang
In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating the neural connections based on the inputs, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.
1 code implementation • EMNLP 2021 • Hang Li, Yu Kang, Tianqiao Liu, Wenbiao Ding, Zitao Liu
Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms.
1 code implementation • 15 Jul 2021 • Hang Li, Yu Kang, Yang Hao, Wenbiao Ding, Zhongqin Wu, Zitao Liu
The quality of vocal delivery is one of the key indicators for evaluating teacher enthusiasm, which is widely accepted as being connected to overall course quality.
no code implementations • 1 Nov 2019 • Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, Yunji Chen
Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers.
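As a toy illustration of the core operation (bit widths are arbitrary, and the paper's adaptive precision schedule is not reproduced), gradients can be rounded to a signed fixed-point grid:

```python
# Toy fixed-point quantization of a gradient tensor, the basic operation behind
# training with fixed-point numbers.
import numpy as np

def quantize_fixed_point(x: np.ndarray, int_bits: int = 4, frac_bits: int = 11) -> np.ndarray:
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** int_bits - 1.0 / scale               # largest representable magnitude
    return np.clip(np.round(x * scale) / scale, -max_val - 1.0 / scale, max_val)

grad = np.random.default_rng(0).normal(scale=0.01, size=5)
print(grad)
print(quantize_fixed_point(grad))
```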
no code implementations • 22 Oct 2019 • Hang Li, Yu Kang, Wenbiao Ding, Song Yang, Songfan Yang, Gale Yan Huang, Zitao Liu
The experimental results demonstrate the benefits of our approach for learning attention-based neural networks from multimodal classroom data, and show that our approach outperforms state-of-the-art baselines on various evaluation metrics.
no code implementations • 3 Oct 2017 • Hui Xu, Yangfan Zhou, Yu Kang, Michael R. Lyu
On the other hand, the performance requirement for model-oriented obfuscation approaches is too weak to develop practical program obfuscation solutions.
Cryptography and Security • Software Engineering
no code implementations • CVPR 2017 • Jing Zhang, Yang Cao, Shuai Fang, Yu Kang, Chang Wen Chen
Then, we propose a simple but effective image prior, the maximum reflectance prior, to estimate the varying ambient illumination.
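A rough sketch of how such a prior can be turned into an illumination estimate, assuming a channel-wise local maximum filter and an arbitrary patch size; this is an interpretation of the idea, not the paper's full algorithm.

```python
# Sketch of the maximum reflectance prior idea: within a local patch, the
# per-channel maximum of the image is close to the local ambient illumination,
# so a channel-wise local maximum filter gives an illumination estimate.
import numpy as np
from scipy.ndimage import maximum_filter

def estimate_ambient_illumination(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """image: (H, W, 3) float array in [0, 1]; returns a per-channel illumination map."""
    return np.stack(
        [maximum_filter(image[..., c], size=patch) for c in range(3)], axis=-1
    )

img = np.random.default_rng(0).uniform(size=(64, 64, 3))
illum = estimate_ambient_illumination(img)
print(illum.shape, float(illum.min()), float(illum.max()))
```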