no code implementations • 7 Nov 2023 • Su Wang, Roberto Morabito, Seyyedali Hosseinalipour, Mung Chiang, Christopher G. Brinton
Our optimization methodology aims to select the best combination of sampled nodes and data offloading configuration to maximize FedL training accuracy while minimizing data processing and D2D communication resource consumption, subject to realistic constraints on the network topology and device capabilities.
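A minimal sketch of the flavor of this joint selection problem, with hypothetical utility and cost functions standing in for the paper's actual optimization model:

```python
# Illustrative sketch only: a greedy heuristic for the kind of joint
# node-sampling/offloading selection described above. The utility and cost
# functions are hypothetical stand-ins, not the paper's optimization model.
def greedy_node_sampling(nodes, budget, utility, cost):
    """Pick nodes by utility-per-cost until the resource budget is exhausted."""
    chosen, spent = [], 0.0
    for n in sorted(nodes, key=lambda n: utility(n) / cost(n), reverse=True):
        if spent + cost(n) <= budget:
            chosen.append(n)
            spent += cost(n)
    return chosen

picked = greedy_node_sampling(
    range(10), budget=5.0,
    utility=lambda n: 1.0 + (n % 3),   # hypothetical data-quality score
    cost=lambda n: 0.5 + 0.1 * n,      # hypothetical processing/D2D cost
)
print(picked)
```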
no code implementations • 4 Nov 2023 • Su Wang, Shini Zhang, Xuchong Qiu
Based on the extracted 2D-3D point pairs, we further propose an occlusion-guided point-matching method that improves the calibration accuracy and reduces computation costs.
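A minimal sketch of calibration from 2D-3D point pairs with a crude occlusion filter, assuming OpenCV and NumPy; keeping only the nearest 3D point per image cell is an illustrative stand-in for the paper's occlusion-guided matching:

```python
# Synthetic demo: filter occluded correspondences, then estimate the
# camera pose from the surviving 2D-3D pairs with PnP.
import numpy as np
import cv2

def filter_occluded(pts3d, pts2d, cell=8):
    """Within each image cell, keep only the closest (smallest-depth) point."""
    best = {}
    for p3, p2 in zip(pts3d, pts2d):
        key = (int(p2[0] // cell), int(p2[1] // cell))
        if key not in best or p3[2] < best[key][0][2]:
            best[key] = (p3, p2)
    kept3, kept2 = zip(*best.values())
    return np.float32(kept3), np.float32(kept2)

# Synthetic correspondences via an ideal pinhole projection.
pts3d = np.random.rand(200, 3).astype(np.float32) * 10 + [0, 0, 1]
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], np.float32)
pts2d = pts3d[:, :2] / pts3d[:, 2:3] * 500 + [320, 240]

p3, p2 = filter_occluded(pts3d, pts2d.astype(np.float32))
ok, rvec, tvec = cv2.solvePnP(p3, p2, K, distCoeffs=None)
print(ok, rvec.ravel(), tvec.ravel())
```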
no code implementations • 27 Oct 2023 • Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, Su Wang
With extensive experimentation and human evaluation on a range of model configurations (LLM, VQA, and T2I), we empirically demonstrate that DSG addresses the challenges noted above.
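A sketch of a DSG-style scoring loop, with hypothetical `llm_to_questions` and `vqa_answer` stand-ins; the actual pipeline derives a dependency graph of atomic questions from the prompt and skips children whose parent questions fail:

```python
# Illustrative only: score an image against a prompt by answering
# LLM-generated questions with a VQA model, respecting question dependencies.
def dsg_score(prompt, image, llm_to_questions, vqa_answer):
    # questions: list of (question_id, question_text, parent_id or None)
    questions = llm_to_questions(prompt)
    answers, score, total = {}, 0, 0
    for qid, text, parent in questions:
        if parent is not None and not answers.get(parent, False):
            answers[qid] = False          # parent failed: child counts as "no"
        else:
            answers[qid] = vqa_answer(image, text)
        total += 1
        score += answers[qid]
    return score / max(total, 1)
```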
no code implementations • 8 Jun 2023 • Su Wang, Rajeev Sahay, Adam Piaseczny, Christopher G. Brinton
In this work, we first reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
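A toy illustration of why such attacks work in federated averaging: a single malicious client can return a scaled, sign-flipped update and skew the aggregate. The shapes and values below are hypothetical, not the paper's specific attack:

```python
import numpy as np

def honest_update(w, lr=0.01):
    return w - lr * np.sign(w)            # stand-in for a local SGD step

def poisoned_update(w, boost=10.0):
    return w + boost * np.sign(w)         # push weights away from the optimum

global_w = np.full(4, 0.5)
updates = [honest_update(global_w) for _ in range(9)] + [poisoned_update(global_w)]
print(np.mean(updates, axis=0))           # one attacker skews the whole average
```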
no code implementations • 24 Apr 2023 • Su Wang, Seyyedali Hosseinalipour, Christopher G. Brinton
Our methodology, Source-Target Determination and Link Formation (ST-LF), optimizes both (i) classification of devices into sources and targets and (ii) source-target link formation, in a manner that considers the trade-off between ML model accuracy and communication energy efficiency.
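A hypothetical sketch of the two decisions the abstract describes: label-rich devices become sources, and each target links to the source maximizing an accuracy-gain-minus-energy score. `acc_gain` and `energy` are illustrative stand-ins:

```python
def st_lf_links(devices, acc_gain, energy, lam=0.1, label_threshold=100):
    sources = [d for d in devices if d["labels"] >= label_threshold]
    targets = [d for d in devices if d["labels"] < label_threshold]
    return {
        t["id"]: max(sources, key=lambda s: acc_gain(s, t) - lam * energy(s, t))["id"]
        for t in targets
    }

devices = [{"id": i, "labels": 50 * i} for i in range(5)]
links = st_lf_links(
    devices,
    acc_gain=lambda s, t: s["labels"] / 500,       # hypothetical
    energy=lambda s, t: abs(s["id"] - t["id"]),    # hypothetical
)
print(links)
```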
no code implementations • 15 Mar 2023 • Su Wang, Seyyedali Hosseinalipour, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Weifeng Su, Mung Chiang
Federated learning (FL) has been promoted as a popular technique for training machine learning (ML) models over edge/fog networks.
no code implementations • 22 Feb 2023 • Tianhe Yu, Ted Xiao, Austin Stone, Jonathan Tompson, Anthony Brohan, Su Wang, Jaspiar Singh, Clayton Tan, Dee M, Jodilyn Peralta, Brian Ichter, Karol Hausman, Fei Xia
Specifically, we make use of state-of-the-art text-to-image diffusion models and perform aggressive data augmentation on top of our existing robotic manipulation datasets via inpainting various unseen objects for manipulation, backgrounds, and distractors with text guidance.
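A sketch of text-guided inpainting for this kind of augmentation, using the open-source `diffusers` Stable Diffusion inpainting pipeline as a stand-in for the model used in the paper; the file paths are hypothetical:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("episode_frame.png")   # a frame from a robot episode
mask = Image.open("table_mask.png")       # white where new content is painted

augmented = pipe(
    prompt="a wooden table with a blue ceramic mug and a metal can",
    image=frame,
    mask_image=mask,
).images[0]
augmented.save("episode_frame_aug.png")
```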
no code implementations • 21 Jan 2023 • Su Wang, Rajeev Sahay, Christopher G. Brinton
In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
no code implementations • CVPR 2023 • Su Wang, Chitwan Saharia, Ceslee Montgomery, Jordi Pont-Tuset, Shai Noy, Stefano Pellegrini, Yasumasa Onoe, Sarah Laszlo, David J. Fleet, Radu Soricut, Jason Baldridge, Mohammad Norouzi, Peter Anderson, William Chan
Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
no code implementations • CVPR 2023 • Aishwarya Kamath, Peter Anderson, Su Wang, Jing Yu Koh, Alexander Ku, Austin Waters, Yinfei Yang, Jason Baldridge, Zarana Parekh
Recent studies in Vision-and-Language Navigation (VLN) train RL agents to execute natural-language navigation instructions in photorealistic environments, as a step towards robots that can follow human instructions.
Ranked #1 on Vision and Language Navigation on RxR (using extra training data)
no code implementations • 7 Feb 2022 • Seyyedali Hosseinalipour, Su Wang, Nicolo Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Mung Chiang
PSL considers the realistic scenario where global aggregations are conducted with idle times in between them to improve resource efficiency, and incorporates data dispersion and model dispersion with local model condensation into FedL.
no code implementations • CVPR 2022 • Su Wang, Ceslee Montgomery, Jordi Orbay, Vighnesh Birodkar, Aleksandra Faust, Izzeddin Gur, Natasha Jaques, Austin Waters, Jason Baldridge, Peter Anderson
We study the automatic generation of navigation instructions from 360-degree images captured on indoor routes.
1 code implementation • 8 Nov 2021 • Su Wang, Zhiliang Wang, Tao Zhou, Xia Yin, Dongqi Han, Han Zhang, Hongbin Sun, Xingang Shi, Jiahai Yang
Recent studies propose leveraging the rich contextual information in data provenance to detect threats in a host.
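A minimal sketch of provenance-based detection, assuming audit events parsed into (subject, action, object) triples and a simple allow-list check; this is illustrative only, not the paper's detector:

```python
import networkx as nx

def build_provenance(events):
    """Build a provenance graph from (subject, action, object) audit triples."""
    g = nx.DiGraph()
    for subj, action, obj in events:
        g.add_edge(subj, obj, action=action)
    return g

def flag_suspicious(g, allowed):
    """Flag edges whose (subject, action, object) triple is not allow-listed."""
    return [
        (u, v, d["action"]) for u, v, d in g.edges(data=True)
        if (u, d["action"], v) not in allowed
    ]

events = [("bash", "exec", "curl"), ("curl", "write", "/tmp/x"),
          ("bash", "exec", "ls")]
allowed = {("bash", "exec", "ls")}
print(flag_suspicious(build_provenance(events), allowed))
```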
1 code implementation • 23 Sep 2021 • Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin
Experimental results show that DeepAID can provide high-quality interpretations for unsupervised DL models while meeting the special requirements of security domains.
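A sketch of reference-based interpretation in the same spirit: find the closest "normal" sample to the anomaly and report the features that differ most. DeepAID's actual search procedure is simplified away here:

```python
import numpy as np

def interpret(anomaly, normal_bank, k=3):
    """Return the k features where the anomaly deviates most from its
    nearest normal reference, as (index, anomalous value, reference value)."""
    ref = normal_bank[np.argmin(np.linalg.norm(normal_bank - anomaly, axis=1))]
    top = np.argsort(np.abs(anomaly - ref))[::-1][:k]
    return [(int(i), float(anomaly[i]), float(ref[i])) for i in top]

bank = np.random.rand(100, 8)
x = bank[0].copy(); x[2] += 5.0   # perturb one feature to create an anomaly
print(interpret(x, bank))
```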
no code implementations • 29 Jun 2021 • Su Wang, Seyyedali Hosseinalipour, Maria Gorlatova, Christopher G. Brinton, Mung Chiang
The presence of time-varying data heterogeneity and computational resource inadequacy among device clusters motivate four key parts of our methodology: (i) stratified UAV swarms of leader, worker, and coordinator UAVs, (ii) hierarchical nested personalized federated learning (HN-PFL), a distributed ML framework for personalized model training across the worker-leader-core network hierarchy, (iii) cooperative UAV resource pooling to address computational inadequacy of devices by conducting model training among the UAV swarms, and (iv) model/concept drift to model time-varying data distributions.
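A toy sketch of the nested aggregation in part (ii): worker models average within each swarm at the leader level, then leader models average at the core. Shapes and uniform weighting are hypothetical:

```python
import numpy as np

def aggregate(models):
    return np.mean(models, axis=0)

swarms = [[np.random.rand(4) for _ in range(3)] for _ in range(2)]
leader_models = [aggregate(s) for s in swarms]   # intra-swarm aggregation
core_model = aggregate(leader_models)            # inter-swarm aggregation
print(core_model)
```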
no code implementations • EACL 2021 • Ming Zhao, Peter Anderson, Vihan Jain, Su Wang, Alexander Ku, Jason Baldridge, Eugene Ie
Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions.
no code implementations • 4 Jan 2021 • Su Wang, Mengyuan Lee, Seyyedali Hosseinalipour, Roberto Morabito, Mung Chiang, Christopher G. Brinton
The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.
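For concreteness, a minimal FedAvg-style round matching the conventional FedL loop described above: local training on each worker, then a data-size-weighted average at the server. A toy quadratic objective stands in for real local SGD:

```python
import numpy as np

def local_train(w, n_steps=5, lr=0.1, target=1.0):
    """Gradient descent on the toy loss (w - target)^2."""
    for _ in range(n_steps):
        w = w - lr * 2 * (w - target)
    return w

global_w = np.zeros(3)
sizes = [50, 30, 20]                               # local dataset sizes
local_models = [local_train(global_w.copy()) for _ in sizes]
global_w = np.average(local_models, axis=0, weights=sizes)
print(global_w)
```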
no code implementations • 1 Jan 2021 • Xiaolei Hua, Su Wang, Lin Zhu, Dong Zhou, Junlan Feng, Yiting Wang, Chao Deng, Shuo Wang, Mingtao Mei
However, due to complex correlations and various temporal patterns of large-scale multivariate time series, building a general unsupervised anomaly detection model that achieves both a high F1-score and high Timeliness remains a challenging task.
no code implementations • 17 Aug 2020 • Su Wang, Greg Durrett, Katrin Erk
We propose a method for controlled narrative/story generation where we are able to guide the model to produce coherent narratives with user-specified target endings by interpolation: for example, we are told that Jim went hiking and at the end Jim needed to be rescued, and we want the model to incrementally generate steps along the way.
no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong
Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off the costs of processing, offloading, and discarding data points at each device.
Distributed, Parallel, and Cluster Computing
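A sketch of this convex trade-off, assuming cvxpy; each device splits its data among local processing, offloading to a neighbor, and discarding, and all costs and capacities below are hypothetical:

```python
import cvxpy as cp
import numpy as np

n = 3
data = np.array([100.0, 60.0, 40.0])                 # points generated per device
cap = np.array([40.0, 80.0, 120.0])                  # per-device compute capacity
c_proc = np.array([1.0, 2.0, 4.0])                   # processing cost per point
c_off = np.array([[0, .5, .9], [.5, 0, .4], [.9, .4, 0]])  # D2D cost i -> j
c_drop = 3.0                                         # penalty per discarded point

proc = cp.Variable(n, nonneg=True)                   # points processed at i
off = cp.Variable((n, n), nonneg=True)               # points offloaded i -> j
drop = cp.Variable(n, nonneg=True)                   # points discarded at i

constraints = [
    # conservation: processed = generated - sent out - dropped + received
    proc == data - cp.sum(off, axis=1) - drop + cp.sum(off, axis=0),
    proc <= cap,
]
cost = c_proc @ proc + cp.sum(cp.multiply(c_off, off)) + c_drop * cp.sum(drop)
cp.Problem(cp.Minimize(cost), constraints).solve()
print(proc.value.round(1), drop.value.round(1))
```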
no code implementations • IJCNLP 2019 • Su Wang, Greg Durrett, Katrin Erk
The news coverage of events often contains not one but multiple incompatible accounts of what happened.
no code implementations • 31 Oct 2018 • Su Wang, Rahul Gupta, Nancy Chang, Jason Baldridge
Paraphrasing is rooted in semantics.
no code implementations • EMNLP 2018 • Su Wang, Eric Holgate, Greg Durrett, Katrin Erk
During natural disasters and conflicts, information about what happened is often confusing, messy, and distributed across many sources.
1 code implementation • NAACL 2018 • Su Wang, Greg Durrett, Katrin Erk
Distributional data tells us that a man can swallow candy, but not that a man can swallow a paintball, since this is never attested.
1 code implementation • IJCNLP 2017 • Su Wang, Elisa Ferracane, Raymond J. Mooney
We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution.
no code implementations • IJCNLP 2017 • Su Wang, Stephen Roller, Katrin Erk
We test whether distributional models can do one-shot learning of definitional properties from text only.