Search Results for author: Yue Shen

Found 31 papers, 10 papers with code

DMT-HI: MOE-based Hyperbolic Interpretable Deep Manifold Transformation for Unsupervised Dimensionality Reduction

1 code implementation · 25 Oct 2024 · Zelin Zang, Yuhao Wang, Jinlin Wu, Hong Liu, Yue Shen, Stan Z. Li, Zhen Lei

DMT-HI enhances dimensionality reduction (DR) accuracy by leveraging hyperbolic embeddings to represent the hierarchical nature of data, while also improving interpretability by explicitly linking input data, embedding outcomes, and key features through the MOE structure.
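
The paper's MOE-based model is not reproduced here, but the core property that makes hyperbolic embeddings suit hierarchies can be illustrated with the standard Poincaré-ball distance (a minimal sketch independent of DMT-HI's actual code; the function name is illustrative):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    Distances grow rapidly near the boundary, so a small ball can host deep
    hierarchies: roots sit near the center, leaves spread out near the rim.
    """
    sq = lambda x: sum(xi * xi for xi in x)
    diff = sq([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq(u)) * (1.0 - sq(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# Two pairs with comparable Euclidean separation: the pair near the rim
# is much farther apart in hyperbolic distance than the pair near the center.
near_center = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_rim = poincare_distance([0.9, 0.0], [0.9, 0.05])
```

This boundary behavior is the usual argument for why tree-like data embeds with low distortion in hyperbolic rather than Euclidean space.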

Dimensionality Reduction

KiloBot: A Programming Language for Deploying Perception-Guided Industrial Manipulators at Scale

no code implementations · 5 Sep 2024 · Wei Gao, Jingqiang Wang, Xinv Zhu, Jun Zhong, Yue Shen, Youshuang Ding

To scale up the deployment, our DSL provides: 1) an easily accessible interface to construct & solve a sub-class of Task and Motion Planning (TAMP) problems that are important in practical applications; and 2) a mechanism to implement flexible control flow to perform integration and address customized requirements of distinct industrial applications.

Industrial Robots · Motion Planning +1

RuleAlign: Making Large Language Models Better Physicians with Diagnostic Rule Alignment

no code implementations · 22 Aug 2024 · Xiaohan Wang, Xiaoyan Yang, Yuqi Zhu, Yue Shen, Jian Wang, Peng Wei, Lei Liang, Jinjie Gu, Huajun Chen, Ningyu Zhang

Large Language Models (LLMs) like GPT-4, MedPaLM-2, and Med-Gemini achieve performance competitive with human experts across various medical benchmarks.

A Survey on Medical Large Language Models: Technology, Application, Trustworthiness, and Future Directions

no code implementations · 6 Jun 2024 · Lei Liu, Xiaoyan Yang, Junchi Lei, Xiaoyang Liu, Yue Shen, Zhiqiang Zhang, Peng Wei, Jinjie Gu, Zhixuan Chu, Zhan Qin, Kui Ren

This survey provides a comprehensive overview of Medical Large Language Models (Med-LLMs), outlining their evolution from the general to the medical-specific domain (i.e., Technology and Application), as well as their transformative impact on healthcare (e.g., Trustworthiness and Safety).

Fairness

Editing Conceptual Knowledge for Large Language Models

1 code implementation · 10 Mar 2024 · Xiaohan Wang, Shengyu Mao, Ningyu Zhang, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen

Recently, there has been a growing interest in knowledge editing for Large Language Models (LLMs).

knowledge editing

KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents

1 code implementation · 5 Mar 2024 · Yuqi Zhu, Shuofei Qiao, Yixin Ou, Shumin Deng, Ningyu Zhang, Shiwei Lyu, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen

Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges, especially when interacting with environments through generating executable actions.

Hallucination · Self-Learning

Unified Hallucination Detection for Multimodal Large Language Models

2 code implementations · 5 Feb 2024 · Xiang Chen, Chenxi Wang, Yida Xue, Ningyu Zhang, Xiaoyan Yang, Qiang Li, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen

Despite significant strides in multimodal tasks, Multimodal Large Language Models (MLLMs) are plagued by the critical issue of hallucination.

Hallucination

Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs

3 code implementations · 9 Jan 2024 · Junjie Wang, Dan Yang, Binbin Hu, Yue Shen, Wen Zhang, Jinjie Gu

To stimulate the LLMs' reasoning ability, the chain-of-thought (CoT) prompting method is widely used, but existing methods still have some limitations in our scenario: (1) Previous methods either use simple "Let's think step by step" spells or provide fixed examples in demonstrations without considering compatibility between prompts and concrete questions, making LLMs ineffective when the marketers' demands are abstract and diverse.
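The two baseline prompting styles the excerpt contrasts can be sketched in a few lines (a toy illustration of zero-shot vs. fixed few-shot CoT, not the paper's analogical-reasoning method; all prompt wording here is assumed):

```python
def zero_shot_cot(question):
    # Zero-shot CoT: append a generic reasoning trigger to the question.
    return f"{question}\nLet's think step by step."

def few_shot_cot(question, demos):
    # Few-shot CoT: prepend fixed worked examples, regardless of whether they
    # match the incoming question -- the prompt/question mismatch the paper targets.
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in demos)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = few_shot_cot(
    "Which users should see the coupon?",
    [("Which users bought twice?", "Filter orders by user, count, keep count >= 2.")],
)
```

The limitation noted above is visible in the sketch: the demonstrations are fixed at call time, so abstract or unusual marketer demands get exemplars chosen without regard to the concrete question.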

Language Modelling · Large Language Model

RJUA-QA: A Comprehensive QA Dataset for Urology

1 code implementation · 15 Dec 2023 · Shiwei Lyu, Chenfei Chi, Hongbo Cai, Lei Shi, Xiaoyan Yang, Lei Liu, Xiang Chen, Deng Zhao, Zhiqiang Zhang, Xianguo Lyu, Ming Zhang, Fangzhou Li, Xiaowei Ma, Yue Shen, Jinjie Gu, Wei Xue, Yiran Huang

We introduce RJUA-QA, a novel medical dataset for question answering (QA) and reasoning with clinical evidence, helping to bridge the gap between general large language models (LLMs) and medical-specific LLM applications.

Question Answering

Making Large Language Models Better Knowledge Miners for Online Marketing with Progressive Prompting Augmentation

no code implementations · 8 Dec 2023 · Chunjing Gan, Dan Yang, Binbin Hu, Ziqi Liu, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Jun Zhou, Guannan Zhang

In this paper, we seek to carefully prompt a Large Language Model (LLM) with domain-level knowledge as a better marketing-oriented knowledge miner for marketing-oriented knowledge graph construction, which is, however, non-trivial, suffering from several inevitable issues in real-world marketing scenarios, i.e., uncontrollable relation generation by LLMs, the insufficient prompting ability of a single prompt, and the unaffordable deployment cost of LLMs.

graph construction · Language Modelling +3

From Beginner to Expert: Modeling Medical Knowledge into General LLMs

no code implementations · 2 Dec 2023 · Qiang Li, Xiaoyan Yang, Haowen Wang, Qin Wang, Lei Liu, Junjie Wang, Yang Zhang, Mingyuan Chu, Sen Hu, Yicheng Chen, Yue Shen, Cong Fan, Wangshu Zhang, Teng Xu, Jinjie Gu, Jing Zheng, Guannan Zhang (Ant Group)

(3) Specifically for multi-choice questions in the medical domain, we propose a novel Verification-of-Choice approach for prompt engineering, which significantly enhances the reasoning ability of LLMs.

Language Modelling · Large Language Model +3

Think-in-Memory: Recalling and Post-thinking Enable LLMs with Long-Term Memory

no code implementations · 15 Nov 2023 · Lei Liu, Xiaoyan Yang, Yue Shen, Binbin Hu, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang

Memory-augmented Large Language Models (LLMs) have demonstrated remarkable performance in long-term human-machine interactions, which relies on iteratively recalling and reasoning over history to generate high-quality responses.

A 5' UTR Language Model for Decoding Untranslated Regions of mRNA and Function Predictions

no code implementations · 5 Oct 2023 · Yanyi Chu, Dan Yu, Yupeng Li, Kaixuan Huang, Yue Shen, Le Cong, Jason Zhang, Mengdi Wang

The model outperformed the best-known benchmark by up to 42% for predicting the Mean Ribosome Loading, and by up to 60% for predicting the Translation Efficiency and the mRNA Expression Level.

Language Modelling · Translation

Who Would be Interested in Services? An Entity Graph Learning System for User Targeting

no code implementations · 30 May 2023 · Dan Yang, Binbin Hu, Xiaoyan Yang, Yue Shen, Zhiqiang Zhang, Jinjie Gu, Guannan Zhang

At the online stage, the system offers the ability of user targeting in real-time based on the entity graph from the offline stage.

graph construction · Graph Learning

VAC2: Visual Analysis of Combined Causality in Event Sequences

no code implementations · 11 Jun 2022 · Sujia Zhu, Yue Shen, Zihao Zhu, Wang Xia, Baofeng Chang, Ronghua Liang, Guodao Sun

To address the absence of combined-cause discovery on temporal event sequence data, eliminating and recruiting principles are defined to balance effectiveness and controllability over cause combinations.

Causal Discovery · Decision Making +2

Binary Neural Networks as a general-purpose compute paradigm for on-device computer vision

no code implementations · 8 Feb 2022 · Guhong Nie, Lirui Xiao, Menglong Zhu, Dongliang Chu, Yue Shen, Peng Li, Kang Yang, Li Du, Bo Chen

For binary neural networks (BNNs) to become the mainstream on-device computer vision algorithm, they must achieve a speed-vs-accuracy tradeoff superior to 8-bit quantization and establish a similar degree of general applicability in vision tasks.

Quantization · Super-Resolution

Candidate Periodically Variable Quasars from the Dark Energy Survey and the Sloan Digital Sky Survey

no code implementations · 27 Aug 2020 · Yu-Ching Chen, Xin Liu, Wei-Ting Liao, A. Miguel Holgado, Hengxiao Guo, Robert A. Gruendl, Eric Morganson, Yue Shen, Kaiwen Zhang, Tim M. C. Abbott, Michel Aguena, Sahar Allam, Santiago Avila, Emmanuel Bertin, Sunayana Bhargava, David Brooks, David L. Burke, Aurelio Carnero Rosell, Daniela Carollo, Matias Carrasco Kind, Jorge Carretero, Matteo Costanzi, Luiz N. da Costa, Tamara M. Davis, Juan De Vicente, Shantanu Desai, H. Thomas Diehl, Peter Doel, Spencer Everett, Brenna Flaugher, Douglas Friedel, Joshua Frieman, Juan García-Bellido, Enrique Gaztanaga, Karl Glazebrook, Daniel Gruen, Gaston Gutierrez, Samuel R. Hinton, Devon L. Hollowood, David J. James, Alex G. Kim, Kyler Kuehn, Nikolay Kuropatkin, Geraint F. Lewis, Christopher Lidman, Marcos Lima, Marcio A. G. Maia, Marisa March, Jennifer L. Marshall, Felipe Menanteau, Ramon Miquel, Antonella Palmese, Francisco Paz-Chinchón, Andrés A. Plazas, Eusebio Sanchez, Michael Schubnell, Santiago Serrano, Ignacio Sevilla-Noarbe, Mathew Smith, Eric Suchyta, Molly E. C. Swanson, Gregory Tarle, Brad E. Tucker, Tamas Norbert Varga, Alistair R. Walker

We present a systematic search for periodic light curves in 625 spectroscopically confirmed quasars with a median redshift of 1.8 in a 4.6 deg$^2$ overlapping region of the Dark Energy Survey Supernova (DES-SN) fields and the Sloan Digital Sky Survey Stripe 82 (SDSS-S82).

High Energy Astrophysical Phenomena · Astrophysics of Galaxies

Graph Representation Learning for Merchant Incentive Optimization in Mobile Payment Marketing

no code implementations · 27 Feb 2020 · Ziqi Liu, Dong Wang, Qianyu Yu, Zhiqiang Zhang, Yue Shen, Jian Ma, Wenliang Zhong, Jinjie Gu, Jun Zhou, Shuang Yang, Yuan Qi

In this paper, we present a graph representation learning method atop of transaction networks for merchant incentive optimization in mobile payment marketing.

Graph Representation Learning · Marketing

High-Order Paired-ASPP Networks for Semantic Segmentation

no code implementations · 18 Feb 2020 · Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen

The network first introduces a High-Order Representation module to extract the contextual high-order information from all stages of the backbone.

Semantic Segmentation · Vocal Bursts Intensity Prediction

Evolving Neural Networks through a Reverse Encoding Tree

1 code implementation · 3 Feb 2020 · Haoling Zhang, Chao-Han Huck Yang, Hector Zenil, Narsis A. Kiani, Yue Shen, Jesper N. Tegner

Using RET, two types of approaches -- NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments such as logic gates, Cartpole, and Lunar Lander, and tested against classical NEAT and FS-NEAT as baselines.
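One reading of "binary search encoding" in Bi-NEAT is that candidate genomes are sampled at recursive midpoints between the best-performing parents. The toy 1-D sketch below illustrates that idea only; it is my interpretation for illustration, not the authors' RET implementation:

```python
def bisect_offspring(parents, fitness, depth=2):
    """Generate candidate genomes at recursive midpoints of the interval
    spanned by the two fittest parents (toy 1-D genome).
    """
    best = sorted(parents, key=fitness, reverse=True)[:2]
    lo, hi = sorted(best)
    points = []

    def split(a, b, d):
        if d == 0:
            return
        mid = (a + b) / 2.0
        points.append(mid)          # candidate at the interval midpoint
        split(a, mid, d - 1)        # recurse into the left half
        split(mid, b, d - 1)        # recurse into the right half

    split(lo, hi, depth)
    return points

# Maximize f(x) = -(x - 3)^2: the two fittest parents are 2.0 and 4.0,
# so candidates are sampled inside [2, 4], including the midpoint 3.0.
cands = bisect_offspring([0.0, 2.0, 4.0, 10.0], lambda x: -(x - 3.0) ** 2, depth=2)
```

The appeal of midpoint sampling is that offspring stay inside the region bracketed by strong parents, rather than being scattered by unstructured mutation.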

Deep Learning for Multi-Messenger Astrophysics: A Gateway for Discovery in the Big Data Era

no code implementations · 1 Feb 2019 · Gabrielle Allen, Igor Andreoni, Etienne Bachelet, G. Bruce Berriman, Federica B. Bianco, Rahul Biswas, Matias Carrasco Kind, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Anushri Gupta, Roland Haas, E. A. Huerta, Elise Jennings, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Kenton McHenry, J. M. Miller, M. S. Neubauer, Steve Oberlin, Alexander R. Olivas Jr, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao

We discuss key aspects to realize this endeavor, namely (i) the design and exploitation of scalable and computationally efficient AI algorithms for Multi-Messenger Astrophysics; (ii) cyberinfrastructure requirements to numerically simulate astrophysical sources, and to process and interpret Multi-Messenger Astrophysics data; (iii) management of gravitational wave detections and triggers to enable electromagnetic and astro-particle follow-ups; (iv) a vision to harness future developments of machine and deep learning and cyberinfrastructure resources to cope with the scale of discovery in the Big Data Era; (v) and the need to build a community that brings domain experts together with data scientists on equal footing to maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics.

Astronomy · Management +1

Towards an Understanding of Changing-Look Quasars: An Archival Spectroscopic Search in SDSS

1 code implementation · 11 Sep 2015 · John J. Ruan, Scott F. Anderson, Sabrina L. Cales, Michael Eracleous, Paul J. Green, Eric Morganson, Jessie C. Runnoe, Yue Shen, Tessa D. Wilkinson, Michael R. Blanton, Tom Dwelly, Antonis Georgakakis, Jenny E. Greene, Stephanie M. LaMassa, Andrea Merloni, Donald P. Schneider

By leveraging the >10 year baselines for objects with repeat spectroscopy, we uncover two new changing-look quasars, and a third discovered previously.

High Energy Astrophysical Phenomena · Cosmology and Nongalactic Astrophysics · Astrophysics of Galaxies

Robust OS-ELM with a novel selective ensemble based on particle swarm optimization

no code implementations · 13 Aug 2014 · Yang Liu, Bo He, Diya Dong, Yue Shen, Tianhong Yan, Rui Nian, Amaury Lendase

Second, an adaptive selective ensemble framework for online learning is designed to balance the robustness and complexity of the algorithm.

General Classification
