Search Results for author: Tarek Abdelzaher

Found 31 papers, 11 papers with code

On the Efficiency and Robustness of Vibration-based Foundation Models for IoT Sensing: A Case Study

no code implementations 3 Apr 2024 Tomoyoshi Kimura, Jinyang Li, Tianshi Wang, Denizhan Kara, Yizhuo Chen, Yigong Hu, Ruijie Wang, Maggie Wigness, Shengzhong Liu, Mani Srivastava, Suhas Diggavi, Tarek Abdelzaher

This paper demonstrates the potential of vibration-based Foundation Models (FMs), pre-trained with unlabeled sensing data, to improve the robustness of run-time inference in (a class of) IoT applications.

SudokuSens: Enhancing Deep Learning Robustness for IoT Sensing Applications using a Generative Approach

no code implementations 3 Feb 2024 Tianshi Wang, Jinyang Li, Ruijie Wang, Denizhan Kara, Shengzhong Liu, Davis Wertheimer, Antoni Viros-i-Martin, Raghu Ganti, Mudhakar Srivatsa, Tarek Abdelzaher

To incorporate sufficient diversity into the IoT training data, one therefore needs to consider a combinatorial explosion of training cases, multiplicative in the number of objects considered and in the possible environmental conditions under which such objects may be encountered.

Contrastive Learning

InfoPattern: Unveiling Information Propagation Patterns in Social Media

no code implementations 27 Nov 2023 Chi Han, Jialiang Xu, Manling Li, Hanning Zhang, Tarek Abdelzaher, Heng Ji

Social media play a significant role in shaping public opinion and influencing ideological communities through information propagation.

Stance Detection

FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space

1 code implementation NeurIPS 2023 Shengzhong Liu, Tomoyoshi Kimura, Dongxin Liu, Ruijie Wang, Jinyang Li, Suhas Diggavi, Mani Srivastava, Tarek Abdelzaher

Existing multimodal contrastive frameworks mostly rely on the shared information between sensory modalities, but do not explicitly consider the exclusive modality information that could be critical to understanding the underlying sensing physics.
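
For intuition, here is a minimal sketch of one way to factorize shared and exclusive (private) modality information: an InfoNCE term that aligns the shared factors across modalities plus an orthogonality penalty on the private ones. The function name, loss weighting, and factorization are illustrative assumptions, not FOCAL's actual implementation.

```python
import torch
import torch.nn.functional as F

def factorized_contrastive_loss(shared_a, shared_b, private_a, private_b, tau=0.1):
    """Align shared embeddings of two modalities (InfoNCE) while keeping each
    modality's private embedding orthogonal to its shared one. Illustrative only."""
    za = F.normalize(shared_a, dim=-1)
    zb = F.normalize(shared_b, dim=-1)
    logits = za @ zb.t() / tau                       # (N, N) cross-modal similarities
    targets = torch.arange(za.size(0))               # matching windows are positives
    align = F.cross_entropy(logits, targets)

    # Penalize overlap between shared and private factors within each modality.
    ortho = (F.cosine_similarity(shared_a, private_a, dim=-1).pow(2).mean()
             + F.cosine_similarity(shared_b, private_b, dim=-1).pow(2).mean())
    return align + ortho
```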

Contrastive Learning, Time Series

Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting

1 code implementation 20 Oct 2023 Chenkai Sun, Jinning Li, Yi R. Fung, Hou Pong Chan, Tarek Abdelzaher, ChengXiang Zhai, Heng Ji

Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury.

Language Modelling, Large Language Model

LM-Switch: Lightweight Language Model Conditioning in Word Embedding Space

no code implementations 22 May 2023 Chi Han, Jialiang Xu, Manling Li, Yi Fung, Chenkai Sun, Nan Jiang, Tarek Abdelzaher, Heng Ji

Since pre-training and fine-tuning are costly and might negatively impact model performance, it is desirable to efficiently adapt an existing model to different conditions, such as styles, sentiments, or narratives, when facing different audiences or scenarios.

Language Modelling, Word Embeddings

Mutually-paced Knowledge Distillation for Cross-lingual Temporal Knowledge Graph Reasoning

no code implementations 27 Mar 2023 Ruijie Wang, Zheng Li, Jingfeng Yang, Tianyu Cao, Chao Zhang, Bing Yin, Tarek Abdelzaher

This paper investigates the cross-lingual temporal knowledge graph reasoning problem, which aims to facilitate reasoning on Temporal Knowledge Graphs (TKGs) in low-resource languages by transferring knowledge from TKGs in high-resource ones.

Knowledge Distillation, Knowledge Graphs +1

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning

1 code implementation 6 Nov 2022 Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han

In this work, we study few-shot learning with PLMs from a different perspective: we first tune an autoregressive PLM on the few-shot samples and then use it as a generator to synthesize a large number of novel training samples that augment the original training set.
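
A rough sketch of this tune-then-generate recipe is below (not the authors' released code; the base model, prompt format, and sampling settings are placeholder assumptions):

```python
# Sketch: fine-tune a causal PLM on the few-shot samples, then sample synthetic
# training data from it. Model name and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# (1) Fine-tuning on the few-shot set would happen here (e.g., with the Trainer API).
# (2) Use the tuned model as a generator of novel, label-conditioned samples.
prompt = "Review (positive): "                     # hypothetical label-conditioned prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=40,
                         num_return_sequences=8, pad_token_id=tokenizer.eos_token_id)
synthetic = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
# 'synthetic' would then be filtered and added to the original training set.
```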

Few-Shot Learning

Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs

no code implementations 16 Oct 2022 Ruijie Wang, Zheng Li, Dachun Sun, Shengzhong Liu, Jinning Li, Bing Yin, Tarek Abdelzaher

Second, the potentially dynamic distributions from the initially observable facts to future facts call for explicitly modeling the evolving characteristics of new entities.

Knowledge Graphs, Meta-Learning

Phy-Taylor: Physics-Model-Based Deep Neural Networks

no code implementations 27 Sep 2022 Yanbing Mao, Lui Sha, Huajie Shao, Yuliang Gu, Qixin Wang, Tarek Abdelzaher

To do so, the PhN augments neural network layers with two key components: (i) monomials of Taylor series expansion of nonlinear functions capturing physical knowledge, and (ii) a suppressor for mitigating the influence of noise.
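
As an illustration of the monomial-augmentation idea (a generic reading of the snippet, not the paper's PhN; the noise suppressor is omitted and layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class TaylorAugmentedLayer(nn.Module):
    """Augment the input with second-order monomials (x_i and x_i*x_j terms)
    before a linear map, so polynomial physics terms are directly representable."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        n_monomials = in_dim + in_dim * (in_dim + 1) // 2
        self.linear = nn.Linear(n_monomials, out_dim)

    def forward(self, x):                                   # x: (batch, in_dim)
        quad = torch.einsum("bi,bj->bij", x, x)             # all pairwise products
        idx = torch.triu_indices(x.size(1), x.size(1))      # keep each product once
        feats = torch.cat([x, quad[:, idx[0], idx[1]]], dim=-1)
        return self.linear(feats)

y = TaylorAugmentedLayer(in_dim=3, out_dim=2)(torch.randn(4, 3))   # y: (4, 2)
```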

Self-Contrastive Learning based Semi-Supervised Radio Modulation Classification

no code implementations 29 Mar 2022 Dongxin Liu, Peng Wang, Tianshi Wang, Tarek Abdelzaher

This paper presents a semi-supervised learning framework whose novelty lies in being designed specifically for automatic modulation classification (AMC).

Classification, Contrastive Learning

RETE: Retrieval-Enhanced Temporal Event Forecasting on Unified Query Product Evolutionary Graph

no code implementations 12 Feb 2022 Ruijie Wang, Zheng Li, Danqing Zhang, Qingyu Yin, Tong Zhao, Bing Yin, Tarek Abdelzaher

Meanwhile, RETE autoregressively accumulates retrieval-enhanced user representations from each time step to capture evolutionary patterns for joint query and product prediction.

Product Recommendation, Retrieval

Unsupervised Belief Representation Learning with Information-Theoretic Variational Graph Auto-Encoders

1 code implementation 1 Oct 2021 Jinning Li, Huajie Shao, Dachun Sun, Ruijie Wang, Yuchen Yan, Jinyang Li, Shengzhong Liu, Hanghang Tong, Tarek Abdelzaher

Inspired by total correlation in information theory, we propose the Information-Theoretic Variational Graph Auto-Encoder (InfoVGAE) that learns to project both users and content items (e.g., posts that represent user views) into an appropriate disentangled latent space.

Representation Learning, Stance Detection

Controllable and Diverse Text Generation in E-commerce

no code implementations 23 Feb 2021 Huajie Shao, Jun Wang, Haohong Lin, Xuezhou Zhang, Aston Zhang, Heng Ji, Tarek Abdelzaher

The algorithm is injected into a Conditional Variational Autoencoder (CVAE), allowing Apex to control both (i) the order of keywords in the generated sentences (conditioned on the input keywords and their order), and (ii) the trade-off between diversity and accuracy.

Text Generation

Scheduling Real-time Deep Learning Services as Imprecise Computations

no code implementations 2 Nov 2020 Shuochao Yao, Yifan Hao, Yiran Zhao, Huajie Shao, Dongxin Liu, Shengzhong Liu, Tianshi Wang, Jinyang Li, Tarek Abdelzaher

The paper presents an efficient real-time scheduling algorithm for intelligent real-time edge services, defined as those that perform machine intelligence tasks, such as voice recognition, LIDAR processing, or machine vision, on behalf of local embedded devices that are themselves unable to support extensive computations.
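
For background only, a toy sketch of the imprecise-computation view the title alludes to: each job has a mandatory part that must complete by its deadline and an optional part (e.g., deeper network layers) that runs only if slack remains. This is a generic illustration, not the scheduling algorithm proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: float     # arrival time
    deadline: float
    mandatory: float   # compute time that must always run (e.g., early-exit pass)
    optional: float    # extra compute time that improves quality (deeper layers)

def schedule(jobs):
    """Toy earliest-deadline-first pass: run mandatory parts in deadline order,
    then spend any remaining slack on optional parts. Purely illustrative."""
    t, plan = 0.0, []
    for job in sorted(jobs, key=lambda j: j.deadline):
        t = max(t, job.release) + job.mandatory
        extra = min(job.optional, max(0.0, job.deadline - t))
        t += extra
        plan.append((job.name, extra))
    return plan

print(schedule([Job("lidar", 0, 4, 1, 3), Job("vision", 0, 3, 1, 2)]))
```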

Scheduling

ControlVAE: Tuning, Analytical Properties, and Performance Analysis

4 code implementations 31 Oct 2020 Huajie Shao, Zhisheng Xiao, Shuochao Yao, Aston Zhang, Shengzhong Liu, Tarek Abdelzaher

ControlVAE is a new variational autoencoder (VAE) framework that combines automatic control theory with the basic VAE to stabilize the KL-divergence of VAE models at a specified value.
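
The snippet describes feedback control of the KL term toward a set point; a minimal sketch of that idea is below (a generic PI controller with illustrative gains, not the paper's exact tuning rule):

```python
class KLWeightController:
    """Toy PI controller: increase the KL weight (beta) when the observed KL
    exceeds the set point and relax it otherwise. Gains are illustrative."""
    def __init__(self, kl_target, kp=0.01, ki=0.001):
        self.kl_target, self.kp, self.ki = kl_target, kp, ki
        self.integral = 0.0

    def step(self, observed_kl):
        error = observed_kl - self.kl_target
        self.integral += error
        return max(0.0, self.kp * error + self.ki * self.integral)

controller = KLWeightController(kl_target=30.0)
# Inside a training loop:
#   beta = controller.step(kl_divergence.item())
#   loss = reconstruction_loss + beta * kl_divergence
```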

Disentanglement, Image Generation +1

DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning

no code implementations 15 Sep 2020 Huajie Shao, Haohong Lin, Qinmin Yang, Shuochao Yao, Han Zhao, Tarek Abdelzaher

Existing methods, such as β-VAE and FactorVAE, assign a large weight to the KL-divergence term in the objective function, leading to high reconstruction errors for the sake of better disentanglement.
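
For reference, the β-VAE objective the snippet refers to, where a large fixed β on the KL term trades reconstruction quality for disentanglement:

```latex
\mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),
  \qquad \beta > 1 .
```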

Disentanglement

Semi-supervised Hypergraph Node Classification on Hypergraph Line Expansion

1 code implementation 11 May 2020 Chaoqi Yang, Ruijie Wang, Shuochao Yao, Tarek Abdelzaher

Previous hypergraph expansions are carried out solely at either the vertex level or the hyperedge level, thereby missing the symmetric nature of data co-occurrence and resulting in information loss.
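
A small sketch of a line-expansion-style construction: one node per (vertex, hyperedge) incidence pair, with two such nodes connected when they share either the vertex or the hyperedge. This is an illustrative reading of the idea, not necessarily the paper's exact definition.

```python
from itertools import combinations

def line_expansion(hyperedges):
    """Build nodes from (vertex, hyperedge) incidence pairs and connect pairs
    that share either the vertex or the hyperedge. Illustrative sketch."""
    nodes = [(v, e) for e, members in hyperedges.items() for v in members]
    edges = [(a, b) for a, b in combinations(nodes, 2)
             if a[0] == b[0] or a[1] == b[1]]
    return nodes, edges

# Toy hypergraph: two hyperedges sharing vertex "b".
nodes, edges = line_expansion({"e1": {"a", "b"}, "e2": {"b", "c"}})
print(len(nodes), len(edges))
```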

Classification, Graph Learning +1

Analyzing the Design Space of Re-opening Policies and COVID-19 Outcomes in the US

1 code implementation 30 Apr 2020 Chaoqi Yang, Ruijie Wang, Fangwei Gao, Dachun Sun, Jiawei Tang, Tarek Abdelzaher

We further compare policies that rely on partial venue closure to policies that espouse widespread periodic testing instead (i.e., in lieu of social distancing).

Physics and Society, Computers and Society, Social and Information Networks

paper2repo: GitHub Repository Recommendation for Academic Papers

no code implementations 13 Apr 2020 Huajie Shao, Dachun Sun, Jiahao Wu, Zecheng Zhang, Aston Zhang, Shuochao Yao, Shengzhong Liu, Tianshi Wang, Chao Zhang, Tarek Abdelzaher

Motivated by this trend, we describe a novel item-item cross-platform recommender system, paper2repo, that recommends relevant repositories on GitHub that match a given paper in an academic search system such as Microsoft Academic.

Recommendation Systems

ControlVAE: Controllable Variational Autoencoder

no code implementations ICML 2020 Huajie Shao, Shuochao Yao, Dachun Sun, Aston Zhang, Shengzhong Liu, Dongxin Liu, Jun Wang, Tarek Abdelzaher

Variational Autoencoders (VAE) and their variants have been widely used in a variety of applications, such as dialog generation, image generation and disentangled representation learning.

Image Generation, Language Modelling +1

Revisiting Over-smoothing in Deep GCNs

no code implementations 30 Mar 2020 Chaoqi Yang, Ruijie Wang, Shuochao Yao, Shengzhong Liu, Tarek Abdelzaher

Oversmoothing has been assumed to be the major cause of performance drop in deep graph convolutional networks (GCNs).

Node Classification

Macross: Urban Dynamics Modeling based on Metapath Guided Cross-Modal Embedding

no code implementations 28 Nov 2019 Yunan Zhang, Heting Gao, Tarek Abdelzaher

As rapid urbanization continues at an ever-increasing pace, fully modeling urban dynamics becomes more and more challenging, yet also a necessity for socioeconomic development.

STFNets: Learning Sensing Signals from the Time-Frequency Perspective with Short-Time Fourier Neural Networks

1 code implementation 21 Feb 2019 Shuochao Yao, Ailing Piao, Wenjun Jiang, Yiran Zhao, Huajie Shao, Shengzhong Liu, Dongxin Liu, Jinyang Li, Tianshi Wang, Shaohan Hu, Lu Su, Jiawei Han, Tarek Abdelzaher

IoT applications, however, often measure physical phenomena, where the underlying physics (such as inertia, wireless signal propagation, or the natural frequency of oscillation) are fundamentally a function of signal frequencies, offering better features in the frequency domain.
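
As a simple illustration of moving sensing signals into the frequency domain (not the STFNet architecture itself; sampling rate and window length are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import stft

fs = 100.0                                         # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)   # 5 Hz tone + noise

# Short-time Fourier transform: a time-frequency view of the sensor stream.
freqs, times, Z = stft(x, fs=fs, nperseg=64)
features = np.abs(Z)                               # magnitude spectrogram (freq x time)
print(features.shape, freqs[features.mean(axis=1).argmax()])    # dominant bin near 5 Hz
```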

Speech Recognition

FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices

no code implementations 19 Sep 2018 Shuochao Yao, Yiran Zhao, Huajie Shao, Shengzhong Liu, Dongxin Liu, Lu Su, Tarek Abdelzaher

We show that changing neural network size does not proportionally affect performance attributes of interest, such as execution time.

RDeepSense: Reliable Deep Mobile Computing Models with Uncertainty Estimations

no code implementations 9 Sep 2017 Shuochao Yao, Yiran Zhao, Huajie Shao, Aston Zhang, Chao Zhang, Shen Li, Tarek Abdelzaher

Recent advances in deep learning have led to unprecedented achievements across various applications, which could potentially bring higher intelligence to a broad spectrum of mobile and ubiquitous applications.

DeepIoT: Compressing Deep Neural Network Structures for Sensing Systems with a Compressor-Critic Framework

1 code implementation 5 Jun 2017 Shuochao Yao, Yiran Zhao, Aston Zhang, Lu Su, Tarek Abdelzaher

It is thus able to shorten execution time by 71.4% to 94.5%, and decrease energy consumption by 72.2% to 95.7%.
