Search Results for author: Yujin Tang

Found 17 papers, 13 papers with code

VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting

1 code implementation · 25 Mar 2024 · Yujin Tang, Peijie Dong, Zhenheng Tang, Xiaowen Chu, Junwei Liang

Combining CNNs or ViTs with RNNs for spatiotemporal forecasting has yielded unparalleled results in predicting temporal and spatial dynamics.

Evolutionary Optimization of Model Merging Recipes

1 code implementation · 19 Mar 2024 · Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, David Ha

Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks.

Evolutionary Algorithms · Math

Evolution Transformer: In-Context Evolutionary Optimization

1 code implementation · 5 Mar 2024 · Robert Tjarko Lange, Yingtao Tian, Yujin Tang

Given a trajectory of evaluations and search distribution statistics, Evolution Transformer outputs a performance-improving update to the search distribution.

Large Language Models As Evolution Strategies

no code implementations · 28 Feb 2024 · Robert Tjarko Lange, Yingtao Tian, Yujin Tang

Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms.

In-Context Learning

LEVI: Generalizable Fine-tuning via Layer-wise Ensemble of Different Views

no code implementations · 7 Feb 2024 · Yuji Roh, Qingyun Liu, Huan Gui, Zhe Yuan, Yujin Tang, Steven Euijong Whang, Liang Liu, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao

By combining two complementary models, LEVI effectively suppresses problematic features in both the fine-tuning data and the pre-trained model, and preserves useful features for new tasks.

NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications

1 code implementation · NeurIPS 2023 · Robert Tjarko Lange, Yujin Tang, Yingtao Tian

Recently, the Deep Learning community has become interested in evolutionary optimization (EO) as a means to address hard optimization problems, e.g. meta-learning through long inner loop unrolls or optimizing non-differentiable operators.

Benchmarking · Meta-Learning

PostRainBench: A comprehensive benchmark and a new model for precipitation forecasting

1 code implementation · 4 Oct 2023 · Yujin Tang, Jiaming Zhou, Xiang Pan, Zeying Gong, Junwei Liang

To address these limitations, we introduce PostRainBench, a comprehensive multi-variable NWP post-processing benchmark consisting of three datasets for NWP post-processing-based precipitation forecasting.

NWP Post-processing · Precipitation Forecasting

PatchMixer: A Patch-Mixing Architecture for Long-Term Time Series Forecasting

2 code implementations · 1 Oct 2023 · Zeying Gong, Yujin Tang, Junwei Liang

Although the Transformer has been the dominant architecture for time series forecasting tasks in recent years, a fundamental challenge remains: the permutation-invariant self-attention mechanism within Transformers leads to a loss of temporal information.

Time Series · Time Series Forecasting
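The abstract's claim about lost temporal information can be illustrated with a minimal NumPy sketch (not PatchMixer code; a single attention head with identity projections and no positional encoding, chosen for brevity): permuting the input sequence merely permutes the outputs, so attention by itself cannot encode order.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 4                       # sequence length, feature dimension
x = rng.normal(size=(T, d))       # toy input sequence

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # single head, identity Q/K/V projections, no positional encoding
    scores = softmax(x @ x.T / np.sqrt(x.shape[1]))
    return scores @ x

perm = rng.permutation(T)
y = self_attention(x)
y_perm = self_attention(x[perm])

# permutation-equivariance: reordering inputs just reorders outputs,
# so any order information must come from positional encodings
assert np.allclose(y[perm], y_perm)
```

Adding positional encodings to `x` before the attention call breaks this symmetry, which is exactly why Transformer forecasters rely on them.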

DEIR: Efficient and Robust Exploration through Discriminative-Model-Based Episodic Intrinsic Rewards

1 code implementation · 21 Apr 2023 · Shanchuan Wan, Yujin Tang, Yingtao Tian, Tomoyuki Kaneko

Exploration is a fundamental aspect of reinforcement learning (RL), and its effectiveness is a deciding factor in the performance of RL algorithms, especially when facing sparse extrinsic rewards.

Reinforcement Learning (RL)

Collective Intelligence for 2D Push Manipulations with Mobile Robots

1 code implementation · 28 Nov 2022 · So Kuroki, Tatsuya Matsushima, Jumpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang

While natural systems often present collective intelligence that allows them to self-organize and adapt to changes, the equivalent is missing in most artificial systems.

Robot Manipulation

Learning to Generalize with Object-centric Agents in the Open World Survival Game Crafter

1 code implementation · 5 Aug 2022 · Aleksandar Stanić, Yujin Tang, David Ha, Jürgen Schmidhuber

We show that current agents struggle to generalize, and introduce novel object-centric agents that improve over strong baselines.

Meta-Learning

Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention

2 code implementations · 13 Apr 2022 · Federico Pigozzi, Yujin Tang, Eric Medvet, David Ha

We show experimentally that the evolved robots are effective in the task of locomotion: thanks to self-attention, instances of the same controller embodied in the same robot can focus on different inputs.

Inductive Bias

EvoJAX: Hardware-Accelerated Neuroevolution

1 code implementation · 10 Feb 2022 · Yujin Tang, Yingtao Tian, David Ha

Evolutionary computation has been shown to be a highly effective method for training neural networks, particularly when employed at scale on CPU clusters.
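The kind of evolution strategy that EvoJAX accelerates can be sketched in a few lines of NumPy (this is a generic illustration, not the EvoJAX API; `simple_es`, the hyperparameters, and the toy objective are all invented for the example):

```python
import numpy as np

def simple_es(fitness, dim, pop_size=64, sigma=0.1, lr=0.05, steps=200, seed=0):
    """Minimal evolution strategy with mirrored sampling and rank shaping."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)                         # parameters being evolved
    for _ in range(steps):
        eps = rng.normal(size=(pop_size // 2, dim))
        eps = np.concatenate([eps, -eps])         # antithetic perturbation pairs
        scores = np.array([fitness(theta + sigma * e) for e in eps])
        # map raw fitnesses to ranks in [-0.5, 0.5] for robustness to outliers
        ranks = scores.argsort().argsort() / (len(eps) - 1) - 0.5
        theta += lr / (len(eps) * sigma) * eps.T @ ranks
    return theta

# toy objective: maximize -||x - 3||^2, optimum at x = 3 in every dimension
best = simple_es(lambda x: -np.sum((x - 3.0) ** 2), dim=5)
```

Each update is an embarrassingly parallel batch of fitness evaluations, which is why mapping the population onto accelerators (as EvoJAX does with JAX's `vmap`/`pmap`) yields large speedups over CPU clusters.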

Collective Intelligence for Deep Learning: A Survey of Recent Developments

no code implementations · 29 Nov 2021 · David Ha, Yujin Tang

In this review, we will provide a historical context of neural network research's involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its current capabilities.

The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning

3 code implementations · NeurIPS 2021 · Yujin Tang, David Ha

In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture.

Reinforcement Learning (RL)

Learning Agile Locomotion via Adversarial Training

no code implementations · 3 Aug 2020 · Yujin Tang, Jie Tan, Tatsuya Harada

In contrast to prior works that used only one adversary, we find that training an ensemble of adversaries, each of which specializes in a different escaping strategy, is essential for the protagonist to master agility.

Reinforcement Learning (RL)

Neuroevolution of Self-Interpretable Agents

3 code implementations · 18 Mar 2020 · Yujin Tang, Duong Nguyen, David Ha

Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight.

Reinforcement Learning (RL)
