Search Results for author: Junjie Yang

Found 31 papers, 13 papers with code

AI-Based Fully Automatic Analysis of Retinal Vascular Morphology in Pediatric High Myopia

no code implementations30 Sep 2024 Yinzheng Zhao, Zhihao Zhao, Junjie Yang, Li Li, M. Ali Nasseri, Daniel Zapp

Results: There were 279 (12.38%) images in the normal group and 384 (16.23%) images in the high myopia group.

KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation

no code implementations19 Sep 2024 Zhihao Zhao, Yinzheng Zhao, Junjie Yang, Kai Huang, Nassir Navab, M. Ali Nasseri

To better optimize the coordinate positions of deformable convolution, we employ the Kalman filter to enhance the perception of vascular structures in linear deformable convolution.

Image Segmentation Retinal Vessel Segmentation +2

Deep Sketched Output Kernel Regression for Structured Prediction

1 code implementation13 Jun 2024 Tamim El Ahmad, Junjie Yang, Pierre Laforgue, Florence d'Alché-Buc

By leveraging the kernel trick in the output space, kernel-induced losses provide a principled way to define structured output prediction tasks for a wide variety of output modalities.

Cross-Modal Retrieval regression +1
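The kernel-induced loss mentioned in the snippet above has a compact closed form via the kernel trick: the squared RKHS distance between two outputs can be computed from kernel evaluations alone. A minimal sketch, assuming a Gaussian output kernel for illustration (the function names and `gamma` value are not from the paper):

```python
import numpy as np

def rbf(y1, y2, gamma=1.0):
    """Gaussian kernel on output vectors."""
    return float(np.exp(-gamma * np.sum((np.asarray(y1) - np.asarray(y2)) ** 2)))

def kernel_induced_loss(y_pred, y_true, k=rbf):
    # Squared RKHS distance between the feature maps of the two outputs,
    # computed with the kernel trick (no explicit feature map needed):
    # ||phi(y) - phi(y')||^2 = k(y, y) - 2 k(y, y') + k(y', y')
    return k(y_pred, y_pred) - 2.0 * k(y_pred, y_true) + k(y_true, y_true)
```

Because the loss depends on outputs only through kernel evaluations, it applies uniformly to any output modality for which a kernel is available.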

Async Learned User Embeddings for Ads Delivery Optimization

no code implementations9 Jun 2024 Mingwei Tang, Meng Liu, Hong Li, Junjie Yang, Chenglin Wei, Boyang Li, Dai Li, Rengan Xu, Yifan Xu, Zehua Zhang, Xiangyu Wang, Linfeng Liu, Yuelei Xie, Chengye Liu, Labib Fawaz, Li Li, Hongnan Wang, Bill Zhu, Sri Reddy

In recommendation systems, high-quality user embeddings can capture subtle preferences, enable precise similarity calculations, and adapt to changing preferences over time to maintain relevance.

Graph Learning Recommendation Systems +1

Better YOLO with Attention-Augmented Network and Enhanced Generalization Performance for Safety Helmet Detection

no code implementations4 May 2024 Shuqi Shen, Junjie Yang

Safety helmets play a crucial role in protecting workers from head injuries in construction sites, where potential hazards are prevalent.

SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters

1 code implementation2 May 2024 Shengsheng Lin, Weiwei Lin, Wentai Wu, Haojun Chen, Junjie Yang

This paper introduces SparseTSF, a novel, extremely lightweight model for Long-term Time Series Forecasting (LTSF), designed to address the challenges of modeling complex temporal dependencies over extended horizons with minimal computational resources.

Time Series Time Series Forecasting

Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization

1 code implementation22 Feb 2024 Xuxi Chen, Zhendong Wang, Daouda Sow, Junjie Yang, Tianlong Chen, Yingbin Liang, Mingyuan Zhou, Zhangyang Wang

Our study starts from an empirical strategy for the light continual training of LLMs using their original pre-training data sets, with a specific focus on selective retention of samples that incur moderately high losses.
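The selection rule described above, retaining samples with moderately high losses, can be sketched as a quantile band over per-sample losses. This is a hedged illustration only; the specific thresholds and the band-based criterion are assumptions, not the paper's exact selection rule:

```python
import numpy as np

def select_moderately_hard(losses, low_q=0.5, high_q=0.9):
    """Return indices of samples whose loss is moderately high:
    above the low_q quantile (not already mastered) but below the
    high_q quantile (excluding extreme outliers / noisy labels)."""
    losses = np.asarray(losses)
    lo, hi = np.quantile(losses, [low_q, high_q])
    return np.where((losses >= lo) & (losses <= hi))[0]
```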

Any2Graph: Deep End-To-End Supervised Graph Prediction With An Optimal Transport Loss

no code implementations19 Feb 2024 Paul Krzakala, Junjie Yang, Rémi Flamary, Florence d'Alché-Buc, Charlotte Laclau, Matthieu Labeau

We propose Any2graph, a generic framework for end-to-end Supervised Graph Prediction (SGP), i.e., a deep learning model that predicts an entire graph for any kind of input.

A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models

1 code implementation8 Jan 2024 Junjie Yang, Jiajun Jiang, Zeyu Sun, Junjie Chen

Specifically, we target the widely-used application scenario of image classification, and utilize three different datasets and five commonly-used performance metrics to assess in total 13 methods from diverse categories.

Fairness Image Classification

Rethinking PGD Attack: Is Sign Function Necessary?

1 code implementation3 Dec 2023 Junjie Yang, Tianlong Chen, Xuxi Chen, Zhangyang Wang, Yingbin Liang

Based on that, we further propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
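The contrast between the standard PGD step and the proposed sign-free update can be sketched on raw numpy arrays. A minimal illustration; the step size and the L-infinity projection are generic choices, not the paper's exact setup:

```python
import numpy as np

def pgd_step(x, grad, x0, alpha=0.01, eps=0.1):
    """Standard PGD step: move along the *sign* of the gradient,
    discarding magnitude, then project back into the L-infinity
    ball of radius eps around the clean input x0."""
    x = x + alpha * np.sign(grad)
    return np.clip(x, x0 - eps, x0 + eps)

def rgd_step(x, grad, x0, alpha=0.01, eps=0.1):
    """Raw gradient descent step: use the unmodified gradient,
    preserving its magnitude information, then project."""
    x = x + alpha * grad
    return np.clip(x, x0 - eps, x0 + eps)
```

Both updates stay inside the same perturbation ball; the difference is whether coordinate magnitudes in the gradient influence the step.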

Meta ControlNet: Enhancing Task Adaptation via Meta Learning

1 code implementation3 Dec 2023 Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang

However, vanilla ControlNet generally requires extensive training of around 5000 steps to achieve a desirable control for a single task.

Edge Detection Image Generation +1

EyeLS: Shadow-Guided Instrument Landing System for Intraocular Target Approaching in Robotic Eye Surgery

no code implementations15 Nov 2023 Junjie Yang, Zhihao Zhao, Siyuan Shen, Daniel Zapp, Mathias Maier, Kai Huang, Nassir Navab, M. Ali Nasseri

Robotic ophthalmic surgery is an emerging technology to facilitate high-precision interventions such as retina penetration in subretinal injection and removal of floating tissues in retinal detachment depending on the input imaging modalities such as microscopy and intraoperative OCT (iOCT).

Exploiting Edge Features in Graphs with Fused Network Gromov-Wasserstein Distance

no code implementations28 Sep 2023 Junjie Yang, Matthieu Labeau, Florence d'Alché-Buc

Pairwise comparison of graphs is key to many applications in Machine learning ranging from clustering, kernel-based classification/regression and more recently supervised graph prediction.

M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation

1 code implementation28 Feb 2023 Junjie Yang, Xuxi Chen, Tianlong Chen, Zhangyang Wang, Yingbin Liang

This data-driven procedure yields L2O that can efficiently solve problems similar to those seen in training, that is, drawn from the same "task distribution".

Learning to Generalize Provably in Learning to Optimize

1 code implementation22 Feb 2023 Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, DaCheng Tao, Yingbin Liang, Zhangyang Wang

While the optimizer generalization has been recently studied, the optimizee generalization (or learning to generalize) has not been rigorously studied in the L2O context, which is the aim of this paper.

Embedded Silicon-Organic Integrated Neuromorphic System

no code implementations18 Oct 2022 Shengjie Zheng, Ling Liu, Junjie Yang, Jianwei Zhang, Tao Su, Bin Yue, Xiaojian Li

The development of artificial intelligence (AI) and robotics is based on the tenet that "science and technology are people-oriented", and both fields need to achieve efficient communication with the human brain.

APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking

4 code implementations12 Jun 2022 Yuxiang Yang, Junjie Yang, Yufei Xu, Jing Zhang, Long Lan, DaCheng Tao

Based on APT-36K, we benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.

Animal Pose Estimation Domain Generalization +1

Generalizable Learning to Optimize into Wide Valleys

no code implementations29 Sep 2021 Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, DaCheng Tao, Yingbin Liang, Zhangyang Wang

Learning to optimize (L2O) has gained increasing popularity in various optimization tasks, since classical optimizers usually require laborious, problem-specific design and hyperparameter tuning.

Provably Faster Algorithms for Bilevel Optimization

1 code implementation NeurIPS 2021 Junjie Yang, Kaiyi Ji, Yingbin Liang

Bilevel optimization has been widely applied in many important machine learning applications such as hyperparameter optimization and meta-learning.

Bilevel Optimization Hyperparameter Optimization +1

Neural Network Training Techniques Regularize Optimization Trajectory: An Empirical Study

no code implementations13 Nov 2020 Cheng Chen, Junjie Yang, Yi Zhou

Specifically, we find that the optimization trajectories of successful DNN trainings consistently obey a certain regularity principle that regularizes the model update direction to be aligned with the trajectory direction.

Bilevel Optimization: Convergence Analysis and Enhanced Design

2 code implementations15 Oct 2020 Kaiyi Ji, Junjie Yang, Yingbin Liang

For the AID-based method, we orderwisely improve the previous convergence rate analysis due to a more practical parameter selection as well as a warm start strategy, and for the ITD-based method we establish the first theoretical convergence rate.

Bilevel Optimization Hyperparameter Optimization +1
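For context, the AID-style hypergradient that such convergence analyses study can be written down exactly on a toy quadratic bilevel problem. This is a sketch under invented assumptions: the problem instance below (matrix `A`, vector `b`) is purely illustrative and not from the paper:

```python
import numpy as np

# Toy bilevel problem:
#   lower level: y*(x) = argmin_y g(x, y), g(x, y) = 0.5 * ||y - A x||^2
#   upper level: Phi(x) = f(x, y*(x)),    f(x, y) = 0.5 * ||y - b||^2
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])

def hypergradient(x):
    """AID-style hypergradient: since grad_yy g = I here, the linear
    system [grad_yy g] v = grad_y f is solved by v = grad_y f, and the
    chain rule gives grad Phi(x) = (dy*/dx)^T v = A^T (A x - b)."""
    y_star = A @ x              # exact lower-level solution
    v = y_star - b              # grad_y f evaluated at (x, y*(x))
    return A.T @ v
```

In practice the lower-level solution and the linear system are only approximated (warm starts, truncated solvers), which is exactly where the improved convergence-rate analysis applies.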

Provably Faster Algorithms for Bilevel Optimization and Applications to Meta-Learning

no code implementations28 Sep 2020 Kaiyi Ji, Junjie Yang, Yingbin Liang

For the AID-based method, we orderwisely improve the previous finite-time convergence analysis due to a more practical parameter selection as well as a warm start strategy, and for the ITD-based method we establish the first theoretical convergence rate.

Bilevel Optimization Hyperparameter Optimization +1

Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning

2 code implementations18 Feb 2020 Kaiyi Ji, Junjie Yang, Yingbin Liang

As a popular meta-learning approach, the model-agnostic meta-learning (MAML) algorithm has been widely used due to its simplicity and effectiveness.

Meta-Learning

Retrospective Reader for Machine Reading Comprehension

2 code implementations27 Jan 2020 Zhuosheng Zhang, Junjie Yang, Hai Zhao

Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question, and yields an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction.

Machine Reading Comprehension Question Answering

Deepening Hidden Representations from Pre-trained Language Models

no code implementations5 Nov 2019 Junjie Yang, Hai Zhao

Transformer-based pre-trained language models have proven to be effective for learning contextualized language representation.

Natural Language Understanding

An Optimization Principle Of Deep Learning?

no code implementations25 Sep 2019 Cheng Chen, Junjie Yang, Yi Zhou

In particular, we observe that the trainings that apply the training techniques achieve accelerated convergence and obey the principle with a large $\gamma$, which is consistent with the $\mathcal{O}(1/\gamma K)$ convergence rate result under the optimization principle.

SGD Converges to Global Minimum in Deep Learning via Star-convex Path

no code implementations ICLR 2019 Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh

Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks.
