Search Results for author: Yingjie Li

Found 21 papers, 8 papers with code

Improving Stance Detection with Multi-Dataset Learning and Knowledge Distillation

1 code implementation EMNLP 2021 Yingjie Li, Chenye Zhao, Cornelia Caragea

To address these challenges, first, we evaluate multi-target and multi-dataset training settings by training one model on each dataset and on datasets of different domains, respectively.

Knowledge Distillation Stance Detection
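
The excerpt above only hints at the method, so as a rough illustration of the knowledge-distillation part of the title, here is a minimal sketch of distilling a teacher stance classifier's soft labels into a student. The temperature, loss weighting, and three-way label space are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge distillation: blend hard-label cross-entropy with
    KL divergence to the teacher's temperature-softened distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft

# Toy usage: 3-way stance logits (favor / neutral / against) for a batch of 4.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)   # produced by a frozen (e.g., multi-dataset) teacher
labels = torch.tensor([0, 2, 1, 0])
distillation_loss(student_logits, teacher_logits, labels).backward()
```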

A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution

no code implementations 2 Apr 2024 Bowen Ding, Qingkai Min, Shengkun Ma, Yingjie Li, Linyi Yang, Yue Zhang

Based on Pre-trained Language Models (PLMs), event coreference resolution (ECR) systems have demonstrated outstanding performance in clustering coreferential events across documents.

coreference-resolution counterfactual +3

BoolGebra: Attributed Graph-learning for Boolean Algebraic Manipulation

no code implementations 19 Jan 2024 Yingjie Li, Anthony Agnesina, Yanqing Zhang, Haoxing Ren, Cunxi Yu

Boolean algebraic manipulation is at the core of logic synthesis in Electronic Design Automation (EDA) design flow.

Graph Learning

Verilog-to-PyG -- A Framework for Graph Learning and Augmentation on RTL Designs

1 code implementation 9 Nov 2023 Yingjie Li, Mingju Liu, Alan Mishchenko, Cunxi Yu

The complexity of modern hardware designs necessitates advanced methodologies for optimizing and analyzing modern digital systems.

Data Augmentation Graph Learning

XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners

1 code implementation 9 Oct 2023 Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang

Active learning (AL), which aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, has been widely used in low-resource tasks.

Active Learning text-classification +1
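
As background for the abstract sentence above, the sketch below shows a generic uncertainty-sampling active-learning loop. It is not the XAL method (which adds an explanation-based selection signal); the classifier, query size, and synthetic data are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, y_pool, X_seed, y_seed, rounds=5, query_size=10):
    """Repeatedly label the pool items the current model is least confident
    about, add them to the training set, and retrain."""
    X_train, y_train = X_seed.copy(), y_seed.copy()
    pool_idx = np.arange(len(X_pool))
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_train, y_train)
        probs = model.predict_proba(X_pool[pool_idx])
        uncertainty = 1.0 - probs.max(axis=1)              # least-confident scores
        picked = pool_idx[np.argsort(-uncertainty)[:query_size]]
        # In practice these labels come from human annotators; here we read
        # them from y_pool to keep the sketch self-contained.
        X_train = np.vstack([X_train, X_pool[picked]])
        y_train = np.concatenate([y_train, y_pool[picked]])
        pool_idx = np.setdiff1d(pool_idx, picked)
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 5))
y = (X[:, 0] > 0).astype(int)
clf = active_learning_loop(X[20:], y[20:], X[:20], y[:20])
```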

Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information

1 code implementation 8 Oct 2023 Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie Zhou, Yue Zhang

However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences.

Rubik's Optical Neural Networks: Multi-task Learning with Physics-aware Rotation Architecture

no code implementations 25 Apr 2023 Yingjie Li, Weilu Gao, Cunxi Yu

Recently, there have been increasing efforts to advance optical neural networks (ONNs), which bring significant advantages for machine learning (ML) in terms of power efficiency, parallelism, and computational speed.

Autonomous Driving Multi-Task Learning +1

RESPECT: Reinforcement Learning based Edge Scheduling on Pipelined Coral Edge TPUs

1 code implementation 10 Apr 2023 Jiaqi Yin, Yingjie Li, Daniel Robinson, Cunxi Yu

Deep neural networks (DNNs) have substantial computational and memory requirements, and the compilation of their computational graphs has a great impact on the performance of resource-constrained (e.g., computation-, I/O-, and memory-bound) edge computing systems.

Edge-computing reinforcement-learning +2

Physics-aware Roughness Optimization for Diffractive Optical Neural Networks

no code implementations 4 Apr 2023 Shanglin Zhou, Yingjie Li, Minhan Lou, Weilu Gao, Zhijie Shi, Cunxi Yu, Caiwen Ding

As a representative next-generation device/circuit technology beyond CMOS, diffractive optical neural networks (DONNs) have shown promising advantages over conventional deep neural networks due to their extremely fast computation speed (the speed of light) and low energy consumption.

Physics-aware Differentiable Discrete Codesign for Diffractive Optical Neural Networks

no code implementations 28 Sep 2022 Yingjie Li, Ruiyang Chen, Weilu Gao, Cunxi Yu

Diffractive optical neural networks (DONNs) have attracted considerable attention as they bring significant advantages in terms of power efficiency, parallelism, and computational speed compared with conventional deep neural networks (DNNs), which have intrinsic limitations when implemented on digital platforms.

Quantization

QuickSkill: Novice Skill Estimation in Online Multiplayer Games

no code implementations 15 Aug 2022 Chaoyun Zhang, Kai Wang, Hao Chen, Ge Fan, Yingjie Li, Lifang Wu, Bingchao Zheng

However, the skill rating of a novice is usually inaccurate, as current matchmaking rating algorithms require a considerable number of games to learn the true skill of a new player.

Fairness
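
The claim above, that conventional rating systems need many games before a newcomer's rating reflects their true skill, can be illustrated with a toy Elo update. This is not the QuickSkill model, which instead predicts skill from a player's early matches with a learned model; the default rating and K-factor below are assumptions.

```python
def elo_update(rating, opp_rating, score, k=32):
    """One Elo update; score is 1 for a win, 0 for a loss, 0.5 for a draw."""
    expected = 1.0 / (1.0 + 10 ** ((opp_rating - rating) / 400))
    return rating + k * (score - expected)

# A genuinely strong newcomer starts at the default 1500 and keeps beating
# 1800-rated opponents, yet the rating is still below 1800 after 10 straight wins.
rating = 1500.0
for _ in range(10):
    rating = elo_update(rating, 1800.0, score=1.0)
print(round(rating))
```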

Differentiable Discrete Device-to-System Codesign for Optical Neural Networks via Gumbel-Softmax

no code implementations 29 Sep 2021 Yingjie Li, Ruiyang Chen, Weilu Gao, Cunxi Yu

Specifically, Gumbel-Softmax with a novel complex-domain regularization method is employed to enable differentiable one-to-one mapping from discrete device parameters into the forward function of DONNs, where the physical parameters in DONNs can be trained by simply minimizing the loss function of the ML task.

Quantization Scheduling
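
The sentence above describes the core trick; below is a minimal, generic Gumbel-Softmax sketch mapping a trainable categorical choice over a few discrete device levels into a differentiable forward pass. The level values, temperature, toy forward function, and real-valued (rather than complex-domain) formulation are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical discrete device levels (e.g., the quantized values a device can realize).
levels = torch.tensor([0.0, 0.5, 1.0, 1.5])

# Trainable logits: one categorical distribution per device "pixel" (8 of them here).
logits = torch.nn.Parameter(torch.zeros(8, len(levels)))

def forward(x, tau=1.0):
    # Gumbel-Softmax yields a (nearly) one-hot yet differentiable selection per pixel,
    # so the chosen discrete level enters the forward function smoothly.
    sel = F.gumbel_softmax(logits, tau=tau, hard=True)   # shape (8, 4), one-hot rows
    params = sel @ levels                                 # shape (8,), selected levels
    return x * params                                     # toy stand-in for the device forward pass

x = torch.randn(8)
loss = forward(x).pow(2).mean()      # stand-in for the ML task loss
loss.backward()                      # gradients flow back to the selection logits
print(logits.grad.shape)             # torch.Size([8, 4])
```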

Stance Detection in COVID-19 Tweets

no code implementations ACL 2021 Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, Cornelia Caragea

The prevalence of the COVID-19 pandemic in day-to-day life has yielded large amounts of stance detection data on social media sites, as users turn to social media to share their views regarding various issues related to the pandemic, e.g., stay-at-home mandates and wearing face masks when out in public.

Domain Adaptation Stance Detection

Target-Aware Data Augmentation for Stance Detection

no code implementations NAACL 2021 Yingjie Li, Cornelia Caragea

The goal of stance detection is to identify whether the author of a text is in favor of, neutral toward, or against a specific target.

Data Augmentation Language Modelling +3
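
For concreteness, the task definition above is commonly framed as three-way classification over (text, target) pairs; the sketch below uses a generic pretrained encoder and is not the target-aware data augmentation method this paper proposes. The model name and label order are assumptions, and the classification head is untrained until fine-tuned on stance data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["against", "neutral", "favor"]   # assumed label order, for illustration only

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

text = "Masks should be required in all indoor public spaces."
target = "face masks"

# Standard sentence-pair encoding: the target is passed as the second segment.
inputs = tokenizer(text, target, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])   # prediction from the (untrained) head
```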

Real-time Multi-Task Diffractive Deep Neural Networks via Hardware-Software Co-design

no code implementations 16 Dec 2020 Yingjie Li, Ruiyang Chen, Berardi Sensale-Rodriguez, Weilu Gao, Cunxi Yu

Deep neural networks (DNNs) have substantial computational requirements, which greatly limit their performance in resource-constrained environments.

Multi-Task Learning
