1 code implementation • 10 Apr 2020 • Yihong Dong, Xiaohan Jiang, Huaji Zhou, Yun Lin, Qingjiang Shi
This paper proposes a zero-shot learning (ZSL) framework, signal recognition and reconstruction convolutional neural networks (SR2CNN), to address relevant problems in this situation.
no code implementations • 18 Mar 2021 • Yihong Dong, Lunchen Xie, Qingjiang Shi
While a sufficient optimality condition is available in the literature, there is a lack of a fast convergent algorithm to achieve stationary points.
no code implementations • 5 Jun 2021 • Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi
Deep neural networks have proven to be a useful class of tools for addressing signal recognition problems in recent years, especially for identifying the nonlinear feature structures of signals.
no code implementations • 22 Aug 2022 • Sijie Shen, Xiang Zhu, Yihong Dong, Qizhi Guo, Yankun Zhen, Ge Li
However, in some domain-specific scenarios, building such a large paired corpus for code generation is difficult: no paired data is directly available, and substantial manual effort is required to write code descriptions for a high-quality training dataset.
no code implementations • 22 Aug 2022 • Yihong Dong, Ge Li, Xue Jiang, Zhi Jin
To evaluate the effectiveness of our proposed loss, we implement and train an Antecedent Prioritized Tree-based code generation model called APT.
1 code implementation • 2 Nov 2022 • Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin
CodePAD can leverage existing sequence-based models, and we show that it can achieve a 100% grammatical correctness percentage on these benchmark datasets.
no code implementations • 27 Jan 2023 • Xiaolong Xu, Lingjuan Lyu, Yihong Dong, Yicheng Lu, Weiqiang Wang, Hong Jin
With the frequent occurrence of privacy leaks and the enactment of privacy laws across different countries, data owners are reluctant to directly share their raw data and labels with any other party.
no code implementations • 19 Aug 2023 • Yihong Dong, Kangcheng Luo, Xue Jiang, Zhi Jin, Ge Li
Large language models (LLMs) have showcased remarkable potential across various tasks by conditioning on prompts.
no code implementations • 12 Jan 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Zhi Jin, Hao Zhu, Huanyu Liu, Kaibo Liu, Lecheng Wang, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yihong Dong, Yuqi Zhu, Bin Gu, Mengfei Yang
Compared to previous benchmarks, DevEval aligns with practical projects in multiple dimensions, e.g., real program distributions, sufficient dependencies, and sufficiently large project contexts.
1 code implementation • 24 Feb 2024 • Yihong Dong, Xue Jiang, Huanyu Liu, Zhi Jin, Ge Li
CDD requires only the sampled texts to detect data contamination, by identifying the peakedness of the LLM's output distribution.
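The abstract gives only the intuition: if a model's sampled outputs for a prompt collapse onto one text, the output distribution is sharply peaked, which may indicate memorized (contaminated) data. Below is a minimal sketch of one crude way to quantify that peakedness from samples alone; the `peakedness` function and the 0.9 threshold are illustrative assumptions, not the paper's actual CDD algorithm.

```python
from collections import Counter

def peakedness(samples):
    """Crude peakedness score: the fraction of samples equal to the
    most frequent sample. A score near 1.0 means the sampled output
    distribution is concentrated on a single text."""
    if not samples:
        raise ValueError("need at least one sample")
    mode_count = Counter(samples).most_common(1)[0][1]
    return mode_count / len(samples)

def likely_contaminated(samples, threshold=0.9):
    """Hypothetical decision rule: flag a prompt as possibly
    contaminated when the sampled outputs are highly peaked."""
    return peakedness(samples) >= threshold
```

In practice one would sample many completions per prompt at a nonzero temperature; diverse completions yield a low score, while near-identical completions yield a score close to 1.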
no code implementations • 29 Feb 2024 • Xue Jiang, Yihong Dong, Zhi Jin, Ge Li
Specifically, SEED involves identifying error code generated by LLMs, employing Self-revise for code revision, optimizing the model with revised code, and iteratively adapting the process for continuous improvement.
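The four steps named above form a loop: generate, keep the failures, self-revise them, and optimize the model on the revisions. The sketch below wires that loop together with hypothetical callables standing in for the paper's components; none of these names come from the SEED implementation.

```python
def seed_round(generate, passes, revise, fine_tune, prompts):
    """One round of a SEED-style adaptation loop.

    Hypothetical stand-ins for the paper's components:
      generate(prompt) -> code          # LLM code generation
      passes(prompt, code) -> bool      # test-based error identification
      revise(prompt, bad_code) -> code  # the Self-revise step
      fine_tune(pairs) -> None          # optimize model on revised code
    Returns the (prompt, revised_code) pairs used for optimization.
    """
    revised_pairs = []
    for prompt in prompts:
        code = generate(prompt)
        if passes(prompt, code):
            continue                    # keep only erroneous generations
        fixed = revise(prompt, code)    # ask the model to repair its error
        if passes(prompt, fixed):       # accept only verified revisions
            revised_pairs.append((prompt, fixed))
    fine_tune(revised_pairs)            # update the model, then iterate
    return revised_pairs
```

Iterating `seed_round` with the updated model corresponds to the "iteratively adapting the process for continuous improvement" step.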
1 code implementation • 31 Mar 2024 • Jia Li, Ge Li, Xuanming Zhang, Yihong Dong, Zhi Jin
Existing benchmarks demonstrate poor alignment with real-world code repositories and are insufficient to evaluate the coding abilities of LLMs.