no code implementations • Findings (ACL) 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin
Weighted decoding methods, which combine a pretrained language model (LM) with a controller, have achieved promising results for controllable text generation.
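The core idea of weighted decoding can be sketched as follows. This is a hypothetical illustration, not the paper's method: a controller scores each vocabulary token for the desired attribute, and its weighted log-probabilities are added to the LM's before picking the next token (greedy decoding here for simplicity; the function name and weighting scheme are assumptions).

```python
import math

def weighted_decode_step(lm_logits, controller_logits, weight=2.0):
    # Hypothetical sketch of one weighted-decoding step: combine the
    # pretrained LM's log-probabilities with the controller's (scaled by
    # `weight`) and return the index of the highest-scoring token.
    def log_softmax(xs):
        z = math.log(sum(math.exp(x) for x in xs))
        return [x - z for x in xs]

    lm_lp = log_softmax(lm_logits)
    ctrl_lp = log_softmax(controller_logits)
    combined = [l + weight * c for l, c in zip(lm_lp, ctrl_lp)]
    return max(range(len(combined)), key=combined.__getitem__)
```

With a strong controller preference, the combined score can override the LM's top choice, which is how the attribute is steered at decode time.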
no code implementations • 29 Aug 2024 • Attila Lischka, Jiaming Wu, Morteza Haghir Chehreghani, Balázs Kulcsár
We can use such a trained GREAT model to produce sparse TSP graph instances, keeping only the edges GREAT finds promising.
no code implementations • 25 Mar 2024 • Attila Lischka, Jiaming Wu, Rafael Basso, Morteza Haghir Chehreghani, Balázs Kulcsár
Furthermore, we propose ensembles of different sparsification levels, letting models focus on the most promising parts while still allowing information flow between all nodes of a TSP instance.
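The sparsification step described in these two abstracts can be sketched roughly as follows. This is an assumption-laden illustration, not the authors' implementation: in the papers a learned model (e.g. GREAT) scores edges, whereas here negative distance stands in as the "promising" score, keeping each node's k nearest neighbours.

```python
def sparsify_tsp_graph(dist, k):
    # Hypothetical sketch: keep only the k most promising edges per node.
    # `dist` is a symmetric distance matrix; edges are stored as (i, j)
    # with i < j so each undirected edge appears once.
    n = len(dist)
    kept = set()
    for i in range(n):
        neighbours = sorted((j for j in range(n) if j != i),
                            key=lambda j: dist[i][j])
        for j in neighbours[:k]:
            kept.add((min(i, j), max(i, j)))
    return kept
```

An ensemble over several values of k would then run models on graphs of different density, trading off focus on promising edges against full information flow.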
no code implementations • 8 Dec 2023 • Pablo Laso, Stefano Cerri, Annabel Sorby-Adams, Jennifer Guo, Farrah Mateen, Philipp Goebl, Jiaming Wu, Peirong Liu, Hongwei Li, Sean I. Young, Benjamin Billot, Oula Puonti, Gordon Sze, Sam Payabavash, Adam DeHavenon, Kevin N. Sheth, Matthew S. Rosen, John Kirsch, Nicola Strisciuglio, Jelmer M. Wolterink, Arman Eshaghi, Frederik Barkhof, W. Taylor Kimberly, Juan Eugenio Iglesias
Brain atrophy and white matter hyperintensity (WMH) are critical neuroimaging features for ascertaining brain injury in cerebrovascular disease and multiple sclerosis.
no code implementations • CVPR 2022 • Zhongyun Bao, Chengjiang Long, Gang Fu, Daquan Liu, Yuanzhen Li, Jiaming Wu, Chunxia Xiao
Specifically, we first apply a physically-based rendering method to construct a large-scale, high-quality dataset (named IH) for our task, which contains various types of foreground objects and background scenes under different lighting conditions.