no code implementations • SemEval (NAACL) 2022 • Junyu Lu, Hao Zhang, Tongyue Zhang, Hongbo Wang, Haohao Zhu, Bo Xu, Hongfei Lin
For Subtask B, which is framed as a multi-label classification problem, we employ several improved multi-label cross-entropy loss functions and analyze the performance of our method.
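The snippet above mentions improved multi-label cross-entropy losses without giving their form. As a baseline reference point, a minimal sketch of the plain (unimproved) multi-label cross-entropy is shown below; the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Plain multi-label binary cross-entropy over independent labels.

    logits:  (batch, num_labels) raw scores
    targets: (batch, num_labels) 0/1 ground truth
    """
    probs = 1.0 / (1.0 + np.exp(-logits))   # per-label sigmoid
    eps = 1e-12                              # numerical safety
    loss = -(targets * np.log(probs + eps)
             + (1 - targets) * np.log(1 - probs + eps))
    return loss.mean()

logits = np.array([[2.0, -1.0, 0.5]])
targets = np.array([[1.0, 0.0, 1.0]])
print(round(multilabel_bce(logits, targets), 4))  # → 0.3048
```

The "improved" variants in the paper presumably modify this baseline (e.g. reweighting or rank-based terms); those details are not reproduced here.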
no code implementations • 26 Jan 2024 • XiaoJun Wu, Dixiang Zhang, Ruyi Gan, Junyu Lu, Ziwei Wu, Renliang Sun, Jiaxing Zhang, Pingjian Zhang, Yan Song
Recent advancements in text-to-image models have significantly enhanced image generation capabilities, yet a notable gap persists in open-source models with bilingual or Chinese language support.
no code implementations • 28 Dec 2023 • Dixiang Zhang, Junyu Lu, Pingjian Zhang
To solve this issue, we propose a Unified Lattice Graph Fusion (ULGF) approach for Chinese NER.
Tasks: Chinese Named Entity Recognition, Named Entity Recognition, +2
1 code implementation • 25 Dec 2023 • Yucong Luo, Mingyue Cheng, Hao Zhang, Junyu Lu, Qi Liu, Enhong Chen
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework aimed at further boosting the explanation quality by employing LLMs.
no code implementations • 8 Dec 2023 • Junyu Lu, Ruyi Gan, Dixiang Zhang, XiaoJun Wu, Ziwei Wu, Renliang Sun, Jiaxing Zhang, Pingjian Zhang, Yan Song
During the instruction fine-tuning stage, we introduce semantic-aware visual feature extraction, a crucial method that enables the model to extract informative features from concrete visual objects.
Ranked #1 on Image Captioning on nocaps (entire)
no code implementations • 7 Dec 2023 • Ruyi Gan, XiaoJun Wu, Junyu Lu, Yuanhe Tian, Dixiang Zhang, Ziwei Wu, Renliang Sun, Chang Liu, Jiaxing Zhang, Pingjian Zhang, Yan Song
However, there are few specialized models in certain domains, such as interior design, which is attributed to the complex textual descriptions and detailed visual elements inherent in design, alongside the necessity for adaptable resolution.
no code implementations • 6 Nov 2023 • Ruyi Gan, Ziwei Wu, Renliang Sun, Junyu Lu, XiaoJun Wu, Dixiang Zhang, Kunhao Pan, Ping Yang, Qi Yang, Jiaxing Zhang, Yan Song
Although many such issues are addressed along this line of research on LLMs, an important yet practical limitation remains: many studies overly pursue larger model sizes without comprehensively analyzing and optimizing the use of pre-training data in the learning process, or appropriately organizing and leveraging such data to train LLMs under cost-effective settings.
no code implementations • 12 Oct 2023 • Junyu Lu, Dixiang Zhang, XiaoJun Wu, Xinyu Gao, Ruyi Gan, Jiaxing Zhang, Yan Song, Pingjian Zhang
Recent advancements expand the capabilities of large language models (LLMs) in zero-shot image-to-text generation and understanding by integrating multi-modal inputs.
no code implementations • 10 Jul 2023 • Junyu Lu, Hongfei Lin, Xiaokun Zhang, Zhaoqing Li, Tongyue Zhang, Linlin Zong, Fenglong Ma, Bo Xu
Our framework jointly optimizes the self-supervised and the supervised contrastive learning losses to capture span-level information beyond the token-level emotional semantics used in existing models, particularly for detecting speech that contains abusive and insulting words.
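The joint objective above combines a self-supervised and a supervised contrastive term. As a rough illustration of the supervised term only, here is a minimal NumPy sketch of a standard supervised contrastive loss (same-label samples as positives); the function name, temperature, and averaging scheme are assumptions, not the paper's exact formulation.

```python
import numpy as np

def sup_con_loss(features, labels, tau=0.1):
    """Minimal supervised contrastive loss (Khosla-style).

    features: (n, d) L2-normalised embeddings
    labels:   (n,) integer class labels
    """
    n = features.shape[0]
    sim = features @ features.T / tau
    mask_self = np.eye(n, dtype=bool)
    sim = np.where(mask_self, -np.inf, sim)      # exclude self-contrast
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    # average log-prob over positives, then over anchors with >= 1 positive
    num = np.where(pos, log_prob, 0.0).sum(axis=1)
    per_anchor = num / np.maximum(pos.sum(axis=1), 1)
    has_pos = pos.any(axis=1)
    return -per_anchor[has_pos].mean()
```

In the joint setting, this term would typically be added to a self-supervised (augmentation-based) contrastive term with a weighting coefficient; that combination is hedged here since the paper's weighting is not quoted in the snippet.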
no code implementations • 17 May 2023 • Ping Yang, Junyu Lu, Ruyi Gan, Junjie Wang, Yuxiang Zhang, Jiaxing Zhang, Pingjian Zhang
We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis.
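The snippet above describes one schema-compatible input format serving many IE tasks. The toy function below sketches that idea only; the template string, task names, and field layout are hypothetical and not the paper's actual schema format.

```python
def to_unified_prompt(task, schema, text):
    """Render any IE task instance as one schema-conditioned prompt string.

    task:   task name, e.g. "NER" or "relation extraction"
    schema: list of target types the extractor should fill
    text:   the input sentence
    """
    return f"[{task}] schema: {', '.join(schema)} | text: {text}"

p = to_unified_prompt("NER", ["person", "location"],
                      "Marie Curie was born in Warsaw.")
print(p)
```

With a single input shape like this, named entity recognition, relation extraction, event extraction, and sentiment analysis can all be posed to one model; only the schema list changes per task.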
1 code implementation • 8 May 2023 • Junyu Lu, Bo Xu, Xiaokun Zhang, Changrong Min, Liang Yang, Hongfei Lin
In addition, introducing lexical knowledge to detect the toxicity of posts is crucial, yet it has remained a challenge for researchers.
1 code implementation • 7 Sep 2022 • Jiaxing Zhang, Ruyi Gan, Junjie Wang, Yuxiang Zhang, Lin Zhang, Ping Yang, Xinyu Gao, Ziwei Wu, Xiaoqun Dong, Junqing He, Jianheng Zhuo, Qi Yang, Yongfeng Huang, Xiayu Li, Yanghan Wu, Junyu Lu, Xinyu Zhu, Weifeng Chen, Ting Han, Kunhao Pan, Rui Wang, Hao Wang, XiaoJun Wu, Zhongshen Zeng, Chongpei Chen
We hope that this project will be the foundation of Chinese cognitive intelligence.
no code implementations • COLING 2022 • Junyu Lu, Dixiang Zhang, Pingjian Zhang
Then, we transform the fine-grained semantic representation of the vision and text into a unified lattice structure and design a novel relative position encoding to match different modalities in Transformer.
no code implementations • 24 Jun 2022 • Junyu Lu, Ping Yang, Ruyi Gan, Jing Yang, Jiaxing Zhang
Although pre-trained language models share a semantic encoder, natural language understanding suffers from a diversity of output schemas.
no code implementations • 30 Dec 2020 • Wazir Ali, Jay Kumar, Zenglin Xu, Congjian Luo, Junyu Lu, Junming Shao, Rajesh Kumar, Yazhou Ren
Word segmentation is a fundamental and unavoidable prerequisite for processing many languages.
no code implementations • LREC 2020 • Wazir Ali, Junyu Lu, Zenglin Xu
We introduce SiNER, a named entity recognition (NER) dataset for the low-resource Sindhi language, with quality baselines.
no code implementations • 28 Nov 2019 • Wazir Ali, Jay Kumar, Junyu Lu, Zenglin Xu
Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe compared to SdfastText word representations.
1 code implementation • ACL 2019 • Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, Zenglin Xu
Response selection plays an important role in fully automated dialogue systems.