no code implementations • 24 Nov 2024 • Enea Monzio Compagnoni, Tianlin Liu, Rustem Islamov, Frank Norbert Proske, Antonio Orvieto, Aurelien Lucchi
Despite the vast empirical evidence supporting the efficacy of adaptive optimization methods in deep learning, their theoretical understanding is far from complete.
1 code implementation • 7 Nov 2024 • AmirEhsan Khorashadizadeh, Tobías I. Liaudat, Tianlin Liu, Jason D. McEwen, Ivan Dokmanić
Neural fields or implicit neural representations (INRs) have attracted significant attention in computer vision and imaging due to their efficient coordinate-based representation of images and 3D volumes.
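A minimal sketch of the coordinate-based idea, assuming a plain MLP with Fourier features; the class name `TinyINR`, the hidden sizes, and the feature count are illustrative, not the cited paper's architecture:

```python
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """Map 2-D coordinates to intensities; the image lives in the weights."""
    def __init__(self, n_freqs=16, hidden=128):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))
        self.mlp = nn.Sequential(
            nn.Linear(4 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):                    # xy: (N, 2), coords in [-1, 1]
        ang = xy[..., None] * self.freqs      # (N, 2, n_freqs)
        feats = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.mlp(feats)                # (N, 1) predicted intensities

coords = torch.rand(1024, 2) * 2 - 1          # random query locations
values = TinyINR()(coords)                    # the field can be queried anywhere
```

Because the signal is stored in network weights rather than on a grid, such a field can be evaluated at arbitrary, even off-grid, coordinates.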
1 code implementation • 21 Oct 2024 • Tianlin Liu, Jannes Münchmeyer, Laura Laurenti, Chris Marone, Maarten V. de Hoop, Ivan Dokmanić
We introduce the Seismic Language Model (SeisLM), a foundational model designed to analyze seismic waveforms -- signals generated by Earth's vibrations, such as those originating from earthquakes.
1 code implementation • 24 Jun 2024 • Jiangshu Du, Yibo Wang, Wenting Zhao, Zhongfen Deng, Shuaiqi Liu, Renze Lou, Henry Peng Zou, Pranav Narayanan Venkit, Nan Zhang, Mukund Srinath, Haoran Ranran Zhang, Vipul Gupta, Yinghui Li, Tao Li, Fei Wang, Qin Liu, Tianlin Liu, Pengzhi Gao, Congying Xia, Chen Xing, Jiayang Cheng, Zhaowei Wang, Ying Su, Raj Sanjay Shah, Ruohao Guo, Jing Gu, Haoran Li, Kangda Wei, ZiHao Wang, Lu Cheng, Surangika Ranathunga, Meng Fang, Jie Fu, Fei Liu, Ruihong Huang, Eduardo Blanco, Yixin Cao, Rui Zhang, Philip S. Yu, Wenpeng Yin
This study focuses on LLMs assisting NLP researchers, examining in particular how effectively an LLM can assist with paper (meta-)reviewing and how recognizable its assistance is.
no code implementations • 7 Feb 2024 • Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel
Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy.
no code implementations • 5 Feb 2024 • Tianlin Liu, Shangmin Guo, Leonardo Bianco, Daniele Calandriello, Quentin Berthet, Felipe Llinares, Jessica Hoffmann, Lucas Dixon, Michal Valko, Mathieu Blondel
Aligning language models with human preferences is crucial for reducing errors and biases in these models.
1 code implementation • 5 Feb 2024 • Shengyi Huang, Quentin Gallouédec, Florian Felten, Antonin Raffin, Rousslan Fernand Julien Dossa, Yanxiao Zhao, Ryan Sullivan, Viktor Makoviychuk, Denys Makoviichuk, Mohamad H. Danesh, Cyril Roumégous, Jiayi Weng, Chufan Chen, Md Masudur Rahman, João G. M. Araújo, Guorui Quan, Daniel Tan, Timo Klein, Rujikorn Charakorn, Mark Towers, Yann Berthelot, Kinal Mehta, Dipam Chakraborty, Arjun KG, Valentin Charraut, Chang Ye, Zichen Liu, Lucas N. Alegre, Alexander Nikulin, Xiao Hu, Tianlin Liu, Jongwook Choi, Brent Yi
As a result, it is usually necessary to reproduce the experiments from scratch, which can be time-consuming and error-prone.
no code implementations • 29 Jan 2024 • Tianlin Liu, Mathieu Blondel, Carlos Riquelme, Joan Puigcerver
Routers for sparse MoEs can be further grouped into two variants: Token Choice, which matches experts to each token, and Expert Choice, which matches tokens to each expert.
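The two routing rules differ only in which axis of the router-score matrix the top-$k$ is taken over. A toy sketch, with shapes and helper names chosen purely for illustration:

```python
import torch

def token_choice(scores, k=1):
    """Token Choice: each token selects its top-k experts."""
    _, idx = scores.topk(k, dim=1)          # (n_tokens, k) expert ids
    return idx

def expert_choice(scores, capacity=2):
    """Expert Choice: each expert selects its top-`capacity` tokens."""
    _, idx = scores.topk(capacity, dim=0)   # (capacity, n_experts) token ids
    return idx.t()                          # (n_experts, capacity)

scores = torch.randn(8, 4)    # router scores for 8 tokens and 4 experts
print(token_choice(scores))   # every token is routed; expert load may be uneven
print(expert_choice(scores))  # expert load is fixed; some tokens may be dropped
```

Token Choice guarantees every token is routed but lets popular experts overflow; Expert Choice balances expert load exactly but may leave some tokens unprocessed.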
1 code implementation • 1 Jan 2024 • AmirEhsan Khorashadizadeh, Valentin Debarnot, Tianlin Liu, Ivan Dokmanić
Deep learning is the current de facto state of the art in tomographic imaging.
no code implementations • 30 Sep 2022 • Tianlin Liu, Joan Puigcerver, Mathieu Blondel
The smoothness of the objectives increases as $k$ increases, giving rise to a trade-off between convergence speed and sparsity of the optimal plan.
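To make the sparsity side of this trade-off concrete, here is a toy hard-thresholding step that keeps at most $k$ nonzeros per column of a transport plan; it only illustrates the cardinality constraint and is not the smoothed solver analyzed in the paper:

```python
import numpy as np

def topk_column_sparsify(plan, k):
    """Zero out all but the k largest entries of each column, then
    rescale to preserve total mass -- a hard-thresholding toy, not
    the paper's projection operator."""
    out = np.zeros_like(plan)
    for j in range(plan.shape[1]):
        top = np.argpartition(plan[:, j], -k)[-k:]  # indices of k largest entries
        out[top, j] = plan[top, j]
    return out * (plan.sum() / max(out.sum(), 1e-12))

plan = np.random.dirichlet(np.ones(6), size=5).T  # dense 6x5 toy transport plan
print(topk_column_sparsify(plan, k=2))            # at most 2 nonzeros per column
```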
no code implementations • ICLR 2022 • Anastasis Kratsios, Behnoosh Zamanlooy, Tianlin Liu, Ivan Dokmanić
Many practical problems need the output of a machine learning model to satisfy a set of constraints, $K$.
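One standard way to enforce such constraints is to compose the model with a Euclidean projection onto $K$. A sketch for the common case where $K$ is the probability simplex, using the sort-based projection of Duchi et al. (2008); the cited paper's construction is different and comes with universal-approximation guarantees:

```python
import torch

def project_onto_simplex(y):
    """Euclidean projection onto the probability simplex -- one common
    choice of constraint set K."""
    u, _ = torch.sort(y, descending=True)
    css = torch.cumsum(u, dim=-1) - 1.0
    ks = torch.arange(1, y.numel() + 1, dtype=y.dtype)
    rho = ((u - css / ks) > 0).sum()     # number of active coordinates
    tau = css[rho - 1] / rho
    return torch.clamp(y - tau, min=0.0)

y = torch.tensor([0.9, 1.4, -0.3])
p = project_onto_simplex(y)
print(p, p.sum())  # nonnegative entries summing to 1
```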
2 code implementations • 25 Nov 2020 • Tianlin Liu, Anadi Chaman, David Belius, Ivan Dokmanić
To close the performance gap, we thus propose a multiscale convolutional dictionary structure.
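A toy version of the multiscale synthesis idea: codes at coarse scales are upsampled before being convolved with their filters, so each scale contributes atoms with a different effective receptive field. The upsampling-based formulation here is an assumed simplification, not the exact structure proposed in the paper:

```python
import torch
import torch.nn.functional as F

def multiscale_synthesis(codes, filters):
    """Synthesize an image as a sum of convolutional terms at several
    scales, with coarse codes upsampled to the finest resolution."""
    target = codes[0].shape[-2:]              # spatial size of the finest scale
    img = 0.0
    for z, d in zip(codes, filters):          # z: (1, C, h, w); d: (1, C, k, k)
        if z.shape[-2:] != target:
            z = F.interpolate(z, size=target, mode="bilinear", align_corners=False)
        img = img + F.conv2d(z, d, padding=d.shape[-1] // 2)
    return img

codes = [torch.randn(1, 4, 32, 32), torch.randn(1, 4, 8, 8)]  # fine + coarse
filters = [torch.randn(1, 4, 3, 3), torch.randn(1, 4, 5, 5)]  # odd kernel sizes
print(multiscale_synthesis(codes, filters).shape)  # torch.Size([1, 1, 32, 32])
```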
1 code implementation • ICML 2020 • Tianlin Liu, Friedemann Zenke
Deep neural networks have dramatically transformed machine learning, but their memory and energy demands are substantial.
1 code implementation • 24 Nov 2019 • Zekun Yang, Tianlin Liu
Distributional representations of words, also known as word vectors, have become crucial to modern natural language processing tasks owing to their broad applicability.
no code implementations • 28 May 2019 • Tianlin Liu
We empirically show that, by harnessing slow dynamics, spiking neural networks on analog neuromorphic systems can gain non-trivial performance boosts on a battery of real-time signal processing tasks.
1 code implementation • NAACL 2019 • Tianlin Liu, Lyle Ungar, João Sedoc
Distributed representations of sentences have become ubiquitous in natural language processing tasks.
1 code implementation • 17 Nov 2018 • Tianlin Liu, Lyle Ungar, João Sedoc
Word vectors are at the core of many natural language processing tasks.
no code implementations • 11 Aug 2018 • Tianlin Liu
In the traditional framework of spectral learning of stochastic time series models, model parameters are estimated based on trajectories of fully recorded observations.
no code implementations • 2 May 2018 • Tianlin Liu, Arvid Kappas
In this paper, we describe our approach for the OMG-Emotion Challenge 2018.
no code implementations • 30 Jan 2018 • Tianlin Liu, Dae Gwan Lee
We present a Compressive Sensing algorithm for reconstructing binary signals from their linear measurements.
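As a rough illustration of the problem setup, assuming $\{0,1\}$-valued signals, here is a generic box-relaxation baseline; it is a standard heuristic, not the algorithm proposed in the paper:

```python
import numpy as np

def recover_binary(A, y, n_iters=500):
    """Box-relaxation baseline for binary compressive sensing: minimize
    ||Ax - y||^2 over x in [0, 1]^n by projected gradient descent, then
    round to {0, 1}."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    x = np.full(A.shape[1], 0.5)
    for _ in range(n_iters):
        x = np.clip(x - lr * A.T @ (A @ x - y), 0.0, 1.0)
    return (x > 0.5).astype(int)

rng = np.random.default_rng(0)
x_true = rng.integers(0, 2, size=50)      # a binary ground-truth signal
A = rng.standard_normal((30, 50))         # 30 measurements, 50 unknowns
print((recover_binary(A, A @ x_true) == x_true).mean())  # fraction recovered
```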