Search Results for author: Danlu Chen

Found 8 papers, 4 papers with code

LogogramNLP: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for NLP

no code implementations · 8 Aug 2024 · Danlu Chen, Freda Shi, Aditi Agarwal, Jacobo Myerston, Taylor Berg-Kirkpatrick

Standard natural language processing (NLP) pipelines operate on symbolic representations of language, which typically consist of sequences of discrete tokens.
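A minimal sketch of the symbolic pipeline the abstract refers to: text is split into discrete tokens and mapped to integer ids (the whitespace tokenizer, toy corpus, and vocabulary below are illustrative, not taken from the paper):

```python
def tokenize(text):
    """Toy whitespace tokenizer; real pipelines use subword tokenizers."""
    return text.lower().split()

def build_vocab(corpus):
    """Assign each distinct token an integer id, reserving 0 for unknowns."""
    vocab = {"<unk>": 0}
    for sentence in corpus:
        for tok in tokenize(sentence):
            vocab.setdefault(tok, len(vocab))
    return vocab

corpus = ["ancient writing systems", "writing systems for NLP"]
vocab = build_vocab(corpus)

# A sentence becomes a sequence of discrete symbol ids:
ids = [vocab.get(tok, 0) for tok in tokenize("ancient NLP systems")]
print(ids)  # [1, 5, 3]
```

Logographic scripts strain exactly this step, since many signs have no standardized digital transcription to tokenize in the first place.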

Pixel Sentence Representation Learning

2 code implementations · 13 Feb 2024 · Chenghao Xiao, Zhuoxu Huang, Danlu Chen, G Thomas Hudson, Yizhi Li, Haoran Duan, Chenghua Lin, Jie Fu, Jungong Han, Noura Al Moubayed

To our knowledge, this is the first representation learning method devoid of traditional language models for understanding sentence and document semantics, marking a stride closer to human-like textual comprehension.

Natural Language Inference · Representation Learning +3
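The core move in pixel-based sentence representation is to rasterize a sentence into an image and let a visual encoder consume pixels instead of token ids. A hedged sketch of that rendering step, using Pillow (the canvas size and font are arbitrary choices here, not the paper's settings):

```python
import numpy as np
from PIL import Image, ImageDraw

def render_sentence(text, height=16, width=256):
    """Rasterize a sentence to a grayscale pixel array.

    A visual encoder (e.g. a ViT-style model) would then embed these
    pixels directly, with no tokenizer or vocabulary involved.
    """
    img = Image.new("L", (width, height), color=255)  # white canvas
    ImageDraw.Draw(img).text((2, 2), text, fill=0)    # default bitmap font
    return np.asarray(img)

pixels = render_sentence("Pixels instead of tokens.")
print(pixels.shape)  # (16, 256)
```

Because the input is an image, the same encoder handles any script the renderer can draw, which is what makes the approach vocabulary-free.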

VizSeq: A Visual Analysis Toolkit for Text Generation Tasks

1 code implementation · IJCNLP 2019 · Changhan Wang, Anirudh Jain, Danlu Chen, Jiatao Gu

Automatic evaluation of text generation tasks (e.g., machine translation, text summarization, image captioning, and video description) usually relies heavily on task-specific metrics, such as BLEU and ROUGE.

Benchmarking · Image Captioning +5
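To make the metric concrete, here is a minimal sentence-level BLEU: the geometric mean of clipped n-gram precisions times a brevity penalty. This is a bare-bones sketch (single reference, no smoothing, pre-tokenized input), not the standardized implementation a toolkit like VizSeq or sacreBLEU would use:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions, times brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())  # clipped matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes the score
    log_mean = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_mean)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(round(sentence_bleu(cand, ref, max_n=2), 3))  # 0.707
```

The unsmoothed 4-gram score for this pair would be 0.0 (no 4-gram overlap), which is one reason production metrics add smoothing.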

Predictive Ensemble Learning with Application to Scene Text Detection

no code implementations · 12 May 2019 · Danlu Chen, Xu-Yao Zhang, Wei Zhang, Yao Lu, Xiuli Li, Tao Mei

Taking scene text detection as the application, where no suitable ensemble learning strategy exists, PEL significantly improves performance compared to both individual state-of-the-art models and the fusion of multiple models by non-maximum suppression.

Classification · Ensemble Learning +5
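The NMS-fusion baseline mentioned above pools boxes from several detectors and greedily keeps the highest-scoring ones while suppressing overlaps. A minimal sketch of that baseline (the boxes, scores, and threshold are made up for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, thresh=0.5):
    """Greedy NMS over pooled (box, score) detections from several models."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, k[0]) < thresh for k in kept):
            kept.append((box, score))
    return kept

# Overlapping boxes from two hypothetical detectors, plus a distinct one:
dets = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((20, 20, 30, 30), 0.7)]
print(len(nms(dets)))  # 2
```

PEL's point is that such score-agnostic fusion discards information a learned predictor over the ensemble's outputs can exploit.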

Memory-Efficient Implementation of DenseNets

6 code implementations · 21 Jul 2017 · Geoff Pleiss, Danlu Chen, Gao Huang, Tongcheng Li, Laurens van der Maaten, Kilian Q. Weinberger

A 264-layer DenseNet (73M parameters), which previously would have been infeasible to train, can now be trained on a single workstation with 8 NVIDIA Tesla M40 GPUs.
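The memory saving comes from not storing the cheap concatenation/normalization activations for backprop and recomputing them in the backward pass instead. The paper implements this with pre-allocated shared memory buffers; the sketch below shows the same recomputation idea via PyTorch's generic `torch.utils.checkpoint` (the layer sizes and structure here are simplified assumptions, not the paper's code):

```python
import torch
from torch.utils.checkpoint import checkpoint

class DenseLayer(torch.nn.Module):
    """One dense layer: concat all earlier features, then BN-ReLU-Conv.

    With efficient=True, the concat/BN/ReLU intermediates are not kept
    for the backward pass; they are recomputed when gradients are needed,
    trading a little compute for a large drop in activation memory.
    """

    def __init__(self, in_ch, growth=32, efficient=True):
        super().__init__()
        self.bn = torch.nn.BatchNorm2d(in_ch)
        self.conv = torch.nn.Conv2d(in_ch, growth, 3, padding=1, bias=False)
        self.efficient = efficient

    def _bottleneck(self, *features):
        x = torch.cat(features, dim=1)  # quadratic memory if stored per layer
        return self.conv(torch.relu(self.bn(x)))

    def forward(self, features):
        if self.efficient and any(f.requires_grad for f in features):
            return checkpoint(self._bottleneck, *features, use_reentrant=False)
        return self._bottleneck(*features)

x = torch.randn(2, 64, 8, 8, requires_grad=True)
out = DenseLayer(64)([x])
print(tuple(out.shape))  # (2, 32, 8, 8)
```

One caveat worth noting: recomputation re-runs BatchNorm, so a careful implementation (as in the paper) must avoid updating running statistics twice.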
