no code implementations • 11 Sep 2023 • Chien-Chih Wang, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Bryan Wang
First, an outer SNN is trained using both labeled and unlabeled data.
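The excerpt above names only the first stage, so as a rough illustration of training a network on labeled plus unlabeled data, here is a minimal pseudo-labeling sketch in PyTorch. The model, confidence threshold, and batching are illustrative assumptions, not the paper's actual pipeline.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, opt, x_lab, y_lab, x_unlab, threshold=0.95):
    """One hypothetical training step on labeled + unlabeled batches."""
    opt.zero_grad()
    # Supervised loss on the labeled batch.
    loss = F.cross_entropy(model(x_lab), y_lab)
    # Pseudo-label confident unlabeled examples (one common recipe;
    # the paper's actual procedure may differ).
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold
    if mask.any():
        loss = loss + F.cross_entropy(model(x_unlab[mask]), pseudo[mask])
    loss.backward()
    opt.step()
    return loss.item()
```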
no code implementations • 7 Dec 2021 • Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, Yi Sun
In this paper, we propose the Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new framework that unifies two types of cross-modality attention, sequence-wise attention and modality-wise attention, to effectively fuse information from image-text pairs.
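As a rough sketch of the modality-wise attention idea described above, the following hypothetical PyTorch module learns a relevance score per modality embedding and fuses the weighted embeddings; the dimensions and naming are assumptions, not the CMA-CLIP implementation.

```python
import torch
import torch.nn as nn

class ModalityWiseAttention(nn.Module):
    """Hypothetical fusion layer: weight image/text embeddings by a
    learned relevance score, then sum. Not the official CMA-CLIP code."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one relevance logit per modality

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor):
        # Stack the two modality embeddings: (batch, 2, dim).
        mods = torch.stack([img_emb, txt_emb], dim=1)
        # Softmax over the modality axis gives attention weights.
        weights = torch.softmax(self.score(mods), dim=1)  # (batch, 2, 1)
        return (weights * mods).sum(dim=1)  # fused embedding (batch, dim)

# Usage with dummy 512-dim embeddings:
fused = ModalityWiseAttention(512)(torch.randn(4, 512), torch.randn(4, 512))
```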
1 code implementation • 15 Oct 2019 • Tianyu Li, Chien-Chih Wang, Yukun Ma, Patricia Ortal, Qifang Zhao, Bjorn Stenger, Yu Hirate
Existing algorithms aiming to learn a binary classifier from positive (P) and unlabeled (U) data generally require estimating the class prior or label noise before building a classification model.
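For context on the prior-dependent approach this paper contrasts with, here is a minimal sketch of the non-negative PU risk estimator (Kiryo et al., 2017); `class_prior` is exactly the quantity such methods must estimate in advance, and all names here are illustrative.

```python
import torch

def nnpu_risk(scores_pos, scores_unlab, class_prior,
              loss=torch.nn.functional.softplus):
    """Non-negative PU risk (Kiryo et al., 2017) -- illustrative sketch.
    scores_*: raw model outputs, positive class corresponds to score > 0.
    class_prior: P(y=+1), which must be estimated beforehand -- the
    requirement the paper above aims to remove."""
    # Positives treated as positive / as negative.
    risk_pos = class_prior * loss(-scores_pos).mean()
    risk_pos_as_neg = class_prior * loss(scores_pos).mean()
    # Unlabeled data treated as negative.
    risk_unlab_as_neg = loss(scores_unlab).mean()
    # Negative-class risk, clamped at zero to stay non-negative.
    risk_neg = torch.clamp(risk_unlab_as_neg - risk_pos_as_neg, min=0.0)
    return risk_pos + risk_neg
```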
no code implementations • 14 Nov 2018 • Chien-Chih Wang, Kent Loong Tan, Chih-Jen Lin
Deep learning involves a difficult non-convex optimization problem, which is often solved by stochastic gradient (SG) methods.
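As a minimal illustration of the SG baseline the paper discusses, the sketch below runs plain minibatch SGD in PyTorch; the model and data loader are placeholders.

```python
import torch

def sgd_train(model, loader, lr=0.01, epochs=1):
    """Plain stochastic gradient descent -- the first-order baseline.
    Illustrative only; not the paper's experimental setup."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:          # one noisy gradient per minibatch
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```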
no code implementations • 1 Feb 2018 • Chien-Chih Wang, Kent Loong Tan, Chun-Ting Chen, Yu-Hsiang Lin, S. Sathiya Keerthi, Dhruv Mahajan, S. Sundararajan, Chih-Jen Lin
First, to reduce the communication cost, we propose a diagonalization method such that an approximate Newton direction can be obtained without communication between machines.
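A minimal sketch of the diagonalization idea: if each machine keeps a diagonal approximation of its local Hessian block, the Newton system reduces to an elementwise division, so the direction can be formed from local quantities without cross-machine exchange. The NumPy names and the damping term are assumptions, not the paper's exact method.

```python
import numpy as np

def diag_newton_direction(grad, hess_diag, damping=1e-4):
    """Approximate Newton direction with a diagonal Hessian:
    solving H d = -g becomes elementwise division, so each machine
    computes its block of d locally, with no communication.
    The damping keeps the division well-posed (an assumed safeguard)."""
    return -grad / (hess_diag + damping)

# Hypothetical local computation on one machine:
g_local = np.array([0.4, -1.2, 0.05])   # local gradient block
h_local = np.array([2.0, 0.5, 0.1])     # local diagonal Hessian block
d_local = diag_newton_direction(g_local, h_local)
```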