1 code implementation • 25 Oct 2024 • Zelin Zang, Yuhao Wang, Jinlin Wu, Hong Liu, Yue Shen, Stan Z. Li, Zhen Lei
DMT-HI enhances DR accuracy by leveraging hyperbolic embeddings to represent the hierarchical nature of data, while also improving interpretability by explicitly linking input data, embedding outcomes, and key features through the MOE structure.
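A minimal sketch of why hyperbolic space suits hierarchical data, assuming the standard Poincaré-ball metric (the entry does not say which hyperbolic model DMT-HI uses, so the model choice and the example points are illustrative assumptions):

```python
import math

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball.

    Hierarchies embed with low distortion here because volume grows
    exponentially with radius, mirroring the exponential growth of tree levels.
    """
    nu = sum(x * x for x in u)
    nv = sum(x * x for x in v)
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    # Poincare-ball metric: d(u,v) = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))
    arg = 1 + 2 * duv / max((1 - nu) * (1 - nv), eps)
    return math.acosh(arg)

root = [0.0, 0.0]    # near the origin: "top" of the hierarchy
child = [0.5, 0.0]
leaf = [0.9, 0.0]    # near the boundary: deep, fine-grained points

# Distances blow up toward the boundary, so points deep in the tree stay
# far apart even when their Euclidean gap is smaller than root-to-child.
print(poincare_distance(root, child))
print(poincare_distance(child, leaf))
```

Here the child-to-leaf distance exceeds the root-to-child distance despite the smaller Euclidean gap, which is the property that lets deep, fine-grained structure spread out near the boundary.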
no code implementations • 5 Sep 2024 • Wei Gao, Jingqiang Wang, Xinv Zhu, Jun Zhong, Yue Shen, Youshuang Ding
To scale up the deployment, our DSL provides: 1) an easily accessible interface to construct and solve a sub-class of Task and Motion Planning (TAMP) problems that are important in practical applications; and 2) a mechanism to implement flexible control flow to perform integration and address customized requirements of distinct industrial applications.
no code implementations • 22 Aug 2024 • Xiaohan Wang, Xiaoyan Yang, Yuqi Zhu, Yue Shen, Jian Wang, Peng Wei, Lei Liang, Jinjie Gu, Huajun Chen, Ningyu Zhang
Large Language Models (LLMs) like GPT-4, MedPaLM-2, and Med-Gemini achieve performance competitive with human experts across various medical benchmarks.
1 code implementation • 20 Jun 2024 • Junjie Wang, Mingyang Chen, Binbin Hu, Dan Yang, Ziqi Liu, Yue Shen, Peng Wei, Zhiqiang Zhang, Jinjie Gu, Jun Zhou, Jeff Z. Pan, Wen Zhang, Huajun Chen
LLMs fine-tuned with this data have improved planning capabilities, better equipping them to handle complex QA tasks that involve retrieval.
no code implementations • 6 Jun 2024 • Lei Liu, Xiaoyan Yang, Junchi Lei, Xiaoyang Liu, Yue Shen, Zhiqiang Zhang, Peng Wei, Jinjie Gu, Zhixuan Chu, Zhan Qin, Kui Ren
This survey provides a comprehensive overview of Medical Large Language Models (Med-LLMs), outlining their evolution from the general to the medical-specific domain (i.e., Technology and Application), as well as their transformative impact on healthcare (e.g., Trustworthiness and Safety).
no code implementations • 30 May 2024 • Chunjing Gan, Dan Yang, Binbin Hu, Hanxiao Zhang, Siyuan Li, Ziqi Liu, Yue Shen, Lin Ju, Zhiqiang Zhang, Jinjie Gu, Lei Liang, Jun Zhou
In recent years, large language models (LLMs) have made remarkable achievements in various domains.
no code implementations • 25 Mar 2024 • Lei Liu, Xiaoyan Yang, Fangzhou Li, Chenfei Chi, Yue Shen, Shiwei Lyu, Ming Zhang, Xiaowei Ma, Xiangguo Lyu, Liya Ma, Zhiqiang Zhang, Wei Xue, Yiran Huang, Jinjie Gu
Applying this paradigm, we construct an evaluation benchmark in the field of urology, including an LCP, an SPs dataset, and an automated RAE.
1 code implementation • 10 Mar 2024 • Xiaohan Wang, Shengyu Mao, Ningyu Zhang, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen
Recently, there has been a growing interest in knowledge editing for Large Language Models (LLMs).
1 code implementation • 5 Mar 2024 • Yuqi Zhu, Shuofei Qiao, Yixin Ou, Shumin Deng, Ningyu Zhang, Shiwei Lyu, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges, especially when interacting with environments through generating executable actions.
2 code implementations • 5 Feb 2024 • Xiang Chen, Chenxi Wang, Yida Xue, Ningyu Zhang, Xiaoyan Yang, Qiang Li, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen
Despite significant strides in multimodal tasks, Multimodal Large Language Models (MLLMs) are plagued by the critical issue of hallucination.
3 code implementations • 9 Jan 2024 • Junjie Wang, Dan Yang, Binbin Hu, Yue Shen, Wen Zhang, Jinjie Gu
To stimulate the reasoning ability of LLMs, chain-of-thought (CoT) prompting is widely used, but existing methods still have limitations in our scenario: (1) previous methods either use the simple "Let's think step by step" spell or provide fixed examples in demonstrations without considering compatibility between prompts and concrete questions, making LLMs ineffective when the marketers' demands are abstract and diverse.
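The prompt-question compatibility idea can be sketched as follows; the token-overlap scoring, the demonstration structure, and the example questions are our own illustrative simplifications, not the paper's method:

```python
def select_demo(question, demos):
    """Pick the demonstration whose wording best overlaps the question.

    A stand-in for compatibility between prompts and concrete questions:
    instead of a fixed demonstration or a bare "Let's think step by step"
    spell, match the example to the marketer's actual request.
    """
    q_tokens = set(question.lower().split())

    def overlap(demo):
        return len(q_tokens & set(demo["question"].lower().split()))

    return max(demos, key=overlap)

# Hypothetical marketing demonstrations with worked rationales.
demos = [
    {"question": "find users likely to churn this month",
     "rationale": "Step 1: define the churn window. Step 2: filter by inactivity."},
    {"question": "select coupons for high value shoppers",
     "rationale": "Step 1: rank users by spend. Step 2: match coupon tiers."},
]

query = "which users will churn next month"
best = select_demo(query, demos)
# The selected demonstration is prepended to the CoT prompt for this query.
prompt = f"Q: {best['question']}\nA: {best['rationale']}\n\nQ: {query}\nA:"
print(best["question"])
```

In a real system the overlap score would be replaced by an embedding similarity, but the control flow (score, select, prepend) is the same.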
1 code implementation • 15 Dec 2023 • Shiwei Lyu, Chenfei Chi, Hongbo Cai, Lei Shi, Xiaoyan Yang, Lei Liu, Xiang Chen, Deng Zhao, Zhiqiang Zhang, Xianguo Lyu, Ming Zhang, Fangzhou Li, Xiaowei Ma, Yue Shen, Jinjie Gu, Wei Xue, Yiran Huang
We introduce RJUA-QA, a novel medical dataset for question answering (QA) and reasoning with clinical evidence, helping to bridge the gap between general large language models (LLMs) and medical-specific LLM applications.
no code implementations • 8 Dec 2023 • Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Jun Zhou, Guannan Zhang
In this paper, we seek to carefully prompt a Large Language Model (LLM) with domain-level knowledge as a better marketing-oriented knowledge miner for marketing-oriented knowledge graph construction. This is non-trivial, however, suffering from several inevitable issues in real-world marketing scenarios: uncontrollable relation generation by LLMs, the insufficient prompting ability of a single prompt, and the unaffordable deployment cost of LLMs.
no code implementations • 2 Dec 2023 • Qiang Li, Xiaoyan Yang, Haowen Wang, Qin Wang, Lei Liu, Junjie Wang, Yang Zhang, Mingyuan Chu, Sen Hu, Yicheng Chen, Yue Shen, Cong Fan, Wangshu Zhang, Teng Xu, Jinjie Gu, Jing Zheng, Guannan Zhang (Ant Group)
(3) Specifically for multiple-choice questions in the medical domain, we propose a novel Verification-of-Choice approach for prompt engineering, which significantly enhances the reasoning ability of LLMs.
no code implementations • 15 Nov 2023 • Lei Liu, Xiaoyan Yang, Yue Shen, Binbin Hu, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang
Memory-augmented Large Language Models (LLMs) have demonstrated remarkable performance in long-term human-machine interactions, relying on iteratively recalling and reasoning over history to generate high-quality responses.
no code implementations • 5 Oct 2023 • Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang
The model outperformed the best-known benchmark by up to 42% for predicting the Mean Ribosome Loading, and by up to 60% for predicting the Translation Efficiency and the mRNA Expression Level.
no code implementations • 21 Aug 2023 • Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y. Zhang, Sheng Li
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs.
no code implementations • 21 Aug 2023 • Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Y. Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
In this paper, we propose a novel approach that leverages large language models (LLMs) to construct personalized reasoning graphs.
no code implementations • 30 May 2023 • Dan Yang, Binbin Hu, Xiaoyan Yang, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang
At the online stage, the system offers the ability of user targeting in real-time based on the entity graph from the offline stage.
no code implementations • 11 Jun 2022 • Sujia Zhu, Yue Shen, Zihao Zhu, Wang Xia, Baofeng Chang, Ronghua Liang, Guodao Sun
To fill the absence of combined-cause discovery on temporal event sequence data, eliminating and recruiting principles are defined to balance the effectiveness and controllability of cause combinations.
no code implementations • 21 Apr 2022 • Yue Shen, Maxim Bichuch, Enrique Mallada
We define a set to be $\tau$-recurrent (resp.
no code implementations • 8 Feb 2022 • Guhong Nie, Lirui Xiao, Menglong Zhu, Dongliang Chu, Yue Shen, Peng Li, Kang Yang, Li Du, Bo Chen
For binary neural networks (BNNs) to become the mainstream on-device computer vision algorithm, they must achieve a better speed-vs-accuracy tradeoff than 8-bit quantization and establish a similar degree of general applicability in vision tasks.
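The speed side of that tradeoff comes from replacing multiply-accumulate with XNOR and popcount over {-1, +1} values; a self-contained sketch of the standard trick (bits kept unpacked for clarity, not this paper's specific method):

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via sign (zero treated as +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    With bits packed into machine words, this replaces multiply-accumulate
    with bitwise ops, which is where BNNs get their speed advantage.
    Agreement count minus disagreement count equals the dot product.
    """
    agree = int(np.sum(a_bits == w_bits))  # XNOR = 1 where signs match
    n = a_bits.size
    return 2 * agree - n                   # popcount-style reduction

rng = np.random.default_rng(0)
a = rng.standard_normal(8)
w = rng.standard_normal(8)
ab, wb = binarize(a), binarize(w)
assert binary_dot(ab, wb) == int(ab @ wb)  # matches the ordinary dot product
print(binary_dot(ab, wb))
```

8-bit quantization keeps 256 levels per value and needs integer multipliers; the binary version trades that precision away for single-bit storage and bitwise arithmetic, which is exactly the tradeoff the abstract is weighing.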
no code implementations • 27 Aug 2020 • Yu-Ching Chen, Xin Liu, Wei-Ting Liao, A. Miguel Holgado, Hengxiao Guo, Robert A. Gruendl, Eric Morganson, Yue Shen, Kaiwen Zhang, Tim M. C. Abbott, Michel Aguena, Sahar Allam, Santiago Avila, Emmanuel Bertin, Sunayana Bhargava, David Brooks, David L. Burke, Aurelio Carnero Rosell, Daniela Carollo, Matias Carrasco Kind, Jorge Carretero, Matteo Costanzi, Luiz N. da Costa, Tamara M. Davis, Juan De Vicente, Shantanu Desai, H. Thomas Diehl, Peter Doel, Spencer Everett, Brenna Flaugher, Douglas Friedel, Joshua Frieman, Juan García-Bellido, Enrique Gaztanaga, Karl Glazebrook, Daniel Gruen, Gaston Gutierrez, Samuel R. Hinton, Devon L. Hollowood, David J. James, Alex G. Kim, Kyler Kuehn, Nikolay Kuropatkin, Geraint F. Lewis, Christopher Lidman, Marcos Lima, Marcio A. G. Maia, Marisa March, Jennifer L. Marshall, Felipe Menanteau, Ramon Miquel, Antonella Palmese, Francisco Paz-Chinchón, Andrés A. Plazas, Eusebio Sanchez, Michael Schubnell, Santiago Serrano, Ignacio Sevilla-Noarbe, Mathew Smith, Eric Suchyta, Molly E. C. Swanson, Gregory Tarle, Brad E. Tucker, Tamas Norbert Varga, Alistair R. Walker
We present a systematic search for periodic light curves in 625 spectroscopically confirmed quasars with a median redshift of 1.8 in a 4.6 deg$^2$ overlapping region of the Dark Energy Survey Supernova (DES-SN) fields and the Sloan Digital Sky Survey Stripe 82 (SDSS-S82).
High Energy Astrophysical Phenomena • Astrophysics of Galaxies
4 code implementations • 19 Aug 2020 • Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Benjamin Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, Cong-Cong Li, Dragomir Anguelov
Our key insight is that for prediction within a moderate time horizon, the future modes can be effectively captured by a set of target states.
no code implementations • 27 Feb 2020 • Ziqi Liu, Dong Wang, Qianyu Yu, Zhiqiang Zhang, Yue Shen, Jian Ma, Wenliang Zhong, Jinjie Gu, Jun Zhou, Shuang Yang, Yuan Qi
In this paper, we present a graph representation learning method atop transaction networks for merchant incentive optimization in mobile payment marketing.
no code implementations • 18 Feb 2020 • Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen
The network first introduces a High-Order Representation module to extract the contextual high-order information from all stages of the backbone.
1 code implementation • 3 Feb 2020 • Haoling Zhang, Chao-Han Huck Yang, Hector Zenil, Narsis A. Kiani, Yue Shen, Jesper N. Tegner
Using RET, two types of approaches -- NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments such as logic gates, Cartpole, and Lunar Lander, and tested against classical NEAT and FS-NEAT as baselines.
no code implementations • 26 Nov 2019 • E. A. Huerta, Gabrielle Allen, Igor Andreoni, Javier M. Antelis, Etienne Bachelet, Bruce Berriman, Federica Bianco, Rahul Biswas, Matias Carrasco, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Maya Fishbach, Francisco Förster, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Robert Gruendl, Anushri Gupta, Roland Haas, Sarah Habib, Elise Jennings, Margaret W. G. Johnson, Erik Katsavounidis, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Zsuzsa Marka, Kenton McHenry, Jonah Miller, Claudia Moreno, Mark Neubauer, Steve Oberlin, Alexander R. Olivas, Donald Petravick, Adam Rebei, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard F. Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Leo Singer, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
Multi-messenger astrophysics is a fast-growing, interdisciplinary field that combines data, which vary in volume and speed of data processing, from many different instruments that probe the Universe using different cosmic messengers: electromagnetic waves, cosmic rays, gravitational waves and neutrinos.
no code implementations • 1 Feb 2019 • Gabrielle Allen, Igor Andreoni, Etienne Bachelet, G. Bruce Berriman, Federica B. Bianco, Rahul Biswas, Matias Carrasco Kind, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Anushri Gupta, Roland Haas, E. A. Huerta, Elise Jennings, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Kenton McHenry, J. M. Miller, M. S. Neubauer, Steve Oberlin, Alexander R. Olivas Jr, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
We discuss key aspects to realize this endeavor, namely (i) the design and exploitation of scalable and computationally efficient AI algorithms for Multi-Messenger Astrophysics; (ii) cyberinfrastructure requirements to numerically simulate astrophysical sources, and to process and interpret Multi-Messenger Astrophysics data; (iii) management of gravitational wave detections and triggers to enable electromagnetic and astro-particle follow-ups; (iv) a vision to harness future developments of machine and deep learning and cyberinfrastructure resources to cope with the scale of discovery in the Big Data Era; (v) and the need to build a community that brings domain experts together with data scientists on equal footing to maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics.
1 code implementation • 11 Sep 2015 • John J. Ruan, Scott F. Anderson, Sabrina L. Cales, Michael Eracleous, Paul J. Green, Eric Morganson, Jessie C. Runnoe, Yue Shen, Tessa D. Wilkinson, Michael R. Blanton, Tom Dwelly, Antonis Georgakakis, Jenny E. Greene, Stephanie M. LaMassa, Andrea Merloni, Donald P. Schneider
By leveraging the >10 year baselines for objects with repeat spectroscopy, we uncover two new changing-look quasars, and a third discovered previously.
High Energy Astrophysical Phenomena • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies
no code implementations • 13 Aug 2014 • Yang Liu, Bo He, Diya Dong, Yue Shen, Tianhong Yan, Rui Nian, Amaury Lendasse
Second, an adaptive selective ensemble framework for online learning is designed to balance the robustness and complexity of the algorithm.