no code implementations • 19 Jan 2024 • Edward Y. Chang
In the knowledge generation phase, the moderator defines the debate topic and contentiousness level, prompting the agents to formulate supporting arguments for their respective stances.
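A minimal sketch of what such a knowledge-generation loop could look like; the `query_llm` helper, prompt wording, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the knowledge-generation phase of a multi-agent debate.
def query_llm(prompt: str) -> str:
    """Assumed helper wrapping any chat-completion API."""
    raise NotImplementedError("plug in an LLM client here")

def knowledge_generation(topic: str, contentiousness: float,
                         stances: list[str]) -> dict[str, str]:
    """Moderator fixes topic and contentiousness; each agent argues its stance."""
    arguments = {}
    for stance in stances:
        prompt = (
            f"Debate topic: {topic}\n"
            f"Contentiousness (0 = conciliatory, 1 = adversarial): {contentiousness}\n"
            f"You argue for the stance: {stance}\n"
            "Formulate your strongest supporting arguments."
        )
        arguments[stance] = query_llm(prompt)
    return arguments
```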
no code implementations • 17 Feb 2023 • Edward Y. Chang
This paper presents a systematic approach to using the Socratic method in developing prompt templates that effectively interact with large language models, including GPT-3.
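As an illustration only, a Socratic-style template might chain definition, elenchus (refutation), and maieutics steps; the exact wording below is an assumption, not one of the paper's templates.

```python
# Hypothetical Socratic prompt template; step names follow the classical method.
SOCRATIC_TEMPLATE = (
    "Claim: {claim}\n"
    "1. Definition: define every key term in the claim.\n"
    "2. Elenchus: propose counterexamples that could refute the claim.\n"
    "3. Maieutics: refine the claim so it survives those counterexamples.\n"
    "Answer each step in order."
)

prompt = SOCRATIC_TEMPLATE.format(claim="Larger models always generalize better.")
```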
no code implementations • 27 Dec 2022 • Edward Y. Chang
The success of deep learning is largely due to the availability of large amounts of training data covering a wide range of examples of a given concept or meaning.
3 code implementations • ICCV 2019 • Po-Wei Wu, Yu-Jing Lin, Che-Han Chang, Edward Y. Chang, Shih-wei Liao
Our method can modify images by continuously changing particular attributes of interest while preserving the other attributes.
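A hedged sketch of the continuous-editing idea: if the generator takes a relative attribute vector, scaling that vector interpolates the edit. The generator interface `G(image, rel_attr)` is an assumption for illustration.

```python
import torch

# Hypothetical generator interface: G(image, relative_attribute_vector).
# Scaling the attribute vector by alpha in [0, 1] yields a partial edit.
def continuous_edit(G: torch.nn.Module, image: torch.Tensor,
                    rel_attr: torch.Tensor, alpha: float) -> torch.Tensor:
    """Apply a fraction `alpha` of the requested attribute change."""
    return G(image, alpha * rel_attr)

# e.g. a half-strength edit: edited = continuous_edit(G, img, smile_dir, 0.5)
```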
no code implementations • 30 May 2019 • Yang-En Chen, Kai-Fu Tang, Yu-Shao Peng, Edward Y. Chang
Effective medical test suggestions help both patients and physicians conserve time and improve diagnostic accuracy.
no code implementations • 29 May 2019 • Fu-Chieh Chang, Hao-Jen Wang, Chun-Nan Chou, Edward Y. Chang
Performing supervised learning on data synthesized by Generative Adversarial Networks (GANs), dubbed GAN-synthetic data, has two important applications.
no code implementations • 29 May 2019 • Che-Han Chang, Chun-Hsien Yu, Szu-Ying Chen, Edward Y. Chang
Can generative adversarial networks (GANs) generate roses of various colors given only roses of red petals as input?
no code implementations • CVPR 2019 • Yu-Hsun Lin, Chun-Nan Chou, Edward Y. Chang
In this paper, we propose the macroblock scaling (MBS) algorithm, which can be applied to various CNN architectures to reduce their model size.
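A minimal sketch under the assumption that a "macroblock" is a group of layers sharing a channel-width plan and that MBS shrinks each group by its own scaling factor; the data layout here is illustrative, not the paper's algorithm.

```python
# Hedged sketch: shrink the channel widths of each macroblock (a group of
# layers) by a per-block scaling factor to reduce model size.
def scale_macroblocks(widths: list[list[int]],
                      factors: list[float]) -> list[list[int]]:
    """Apply one scaling factor per macroblock of channel widths."""
    return [[max(1, int(w * f)) for w in block]
            for block, f in zip(widths, factors)]

# scale_macroblocks([[64, 64], [128, 128]], [0.75, 0.5]) -> [[48, 48], [64, 64]]
```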
no code implementations • 16 Jul 2018 • Yu-Hsun Lin, Chun-Nan Chou, Edward Y. Chang
This paper proposes BRIEF, a backward reduction algorithm that explores compact CNN-model designs from the information flow perspective.
no code implementations • 19 Feb 2018 • Sheng-Wei Chen, Chun-Nan Chou, Edward Y. Chang
For training fully-connected neural networks (FCNNs), we propose a practical approximate second-order method that includes 1) an approximation of the Hessian matrix and 2) a conjugate gradient (CG) based solver.
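A sketch of the generic CG-based second-order recipe this describes: approximate the Newton direction by solving H d = -g with conjugate gradients, using Hessian-vector products instead of materializing H. The specific Hessian approximation is the paper's; the code below is only the standard pattern.

```python
import torch

def hvp(loss, params, v):
    """Hessian-vector product via double backprop (flat-vector form)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat @ v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def conjugate_gradient(matvec, b, iters=10):
    """Approximately solve A x = b without forming A explicitly."""
    x = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Newton-like direction: d = conjugate_gradient(lambda v: hvp(loss, params, v), -g_flat)
```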
no code implementations • 10 Aug 2017 • Shang-Xuan Zou, Chun-Yen Chen, Jui-Lin Wu, Chun-Nan Chou, Chia-Chin Tsao, Kuan-Chieh Tung, Ting-Wei Lin, Cheng-Lung Sung, Edward Y. Chang
Scale of data and scale of computation infrastructures together enable the current deep learning renaissance.
no code implementations • 25 Jul 2017 • Chun-Nan Chou, Chuen-Kai Shie, Fu-Chieh Chang, Jocelyn Chang, Edward Y. Chang
Deep learning owes its success to three key factors: scale of data, enhanced models to learn representations from data, and scale of computation.
no code implementations • CVPR 2017 • Che-Han Chang, Chun-Nan Chou, Edward Y. Chang
The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps.
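To make the inverse compositional step concrete, here is a minimal single-step version for a pure-translation warp on a (C, H, W) feature map; the actual layer handles richer warps, so treat this as a simplified sketch.

```python
import numpy as np

def ic_lk_step(template: np.ndarray, warped: np.ndarray) -> np.ndarray:
    """One inverse-compositional LK update for a translation-only warp.

    Returns the increment (dx, dy); its inverse is composed into the warp.
    """
    # Steepest-descent images come from the *template* gradient, so the
    # Gauss-Newton Hessian below can be precomputed once.
    gy, gx = np.gradient(template, axis=(1, 2))
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (C*H*W, 2)
    H = J.T @ J                                     # 2x2 Hessian
    error = (warped - template).ravel()
    return np.linalg.solve(H, J.T @ error)
```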
no code implementations • CVPR 2015 • Zhizhong Li, Deli Zhao, Zhouchen Lin, Edward Y. Chang
In the line search step, R3MC approximates the minimum point on the search curve by minimizing along the line tangent to the curve.
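A hedged sketch of that line-search shortcut: rather than minimizing the objective along the (curved) search path c(t), minimize it along the tangent line c(0) + t c'(0) and reuse the resulting t as the step size. Function names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tangent_line_step(f, x0: np.ndarray, tangent: np.ndarray) -> float:
    """Step size t* that minimizes f along the line x0 + t * tangent,
    used as a cheap proxy for minimizing along the search curve itself."""
    return minimize_scalar(lambda t: f(x0 + t * tangent)).x
```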
no code implementations • 17 Nov 2014 • Miao Fan, Deli Zhao, Qiang Zhou, Zhiyuan Liu, Thomas Fang Zheng, Edward Y. Chang
The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features.
no code implementations • NeurIPS 2007 • Kaihua Zhu, Hao Wang, Hongjie Bai, Jian Li, Zhihuan Qiu, Hang Cui, Edward Y. Chang
Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time.