Search Results for author: Shou-De Lin

Found 28 papers, 5 papers with code

Explainable and Sparse Representations of Academic Articles for Knowledge Exploration

no code implementations COLING 2020 Keng-Te Liao, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, PoChun Chen, Kuansan Wang, Shou-De Lin

Provided with the interpretable concepts and knowledge encoded in a pre-trained neural model, we investigate whether the tagged concepts can be applied to a broader class of applications.

Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs

no code implementations ACL 2020 Hong-You Chen, Sz-Han Yu, Shou-De Lin

Chinese NLP applications that rely on large corpora often involve huge vocabularies whose words appear only sparsely in the corpus.

Multiple Text Style Transfer by using Word-level Conditional Generative Adversarial Network with Two-Phase Training

no code implementations IJCNLP 2019 Chih-Te Lai, Yi-Te Hong, Hong-You Chen, Chi-Jen Lu, Shou-De Lin

The objective of non-parallel text style transfer, or controllable text generation, is to alter specific attributes (e.g., sentiment, mood, tense, or politeness) of a given text while preserving its remaining attributes and content.

Style Transfer · Text Style Transfer

Controlling Sequence-to-Sequence Models - A Demonstration on Neural-based Acrostic Generator

no code implementations IJCNLP 2019 Liang-Hsin Shen, Pei-Lun Tai, Chao-Chung Wu, Shou-De Lin

An acrostic is a form of writing in which the first token of each line (or another recurring feature of the text) forms a meaningful sequence.

Text Generation

MetricGAN: Generative Adversarial Networks based Black-box Metric Scores Optimization for Speech Enhancement

5 code implementations · 13 May 2019 Szu-Wei Fu, Chien-Feng Liao, Yu Tsao, Shou-De Lin

Adversarial loss in a conditional generative adversarial network (GAN) is not designed to directly optimize evaluation metrics of a target task, and thus, may not always guide the generator in a GAN to generate data with improved metric scores.

Speech Enhancement
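The core MetricGAN idea, as described above, is to let the discriminator act as a learned, differentiable surrogate for a black-box metric the generator cannot backpropagate through. Below is a minimal numeric sketch of that surrogate-and-ascend loop; the quadratic "metric", the least-squares "discriminator", and the scalar "generator" parameter are illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box evaluation metric we cannot backpropagate through
# (a stand-in for PESQ/STOI in the speech-enhancement setting).
def metric(x):
    return -(x - 3.0) ** 2

# "Discriminator": a differentiable surrogate Q_w(x) = w0 + w1*x + w2*x^2,
# fit by least squares to the black-box metric on the generator's outputs.
def fit_surrogate(xs):
    X = np.stack([np.ones_like(xs), xs, xs ** 2], axis=1)
    w, *_ = np.linalg.lstsq(X, metric(xs), rcond=None)
    return w

theta = 0.0  # "generator" parameter (think: an enhancement gain)
for _ in range(200):
    xs = theta + rng.normal(scale=0.5, size=32)  # sample generator outputs
    w = fit_surrogate(xs)                        # D learns to score like the metric
    grad = w[1] + 2.0 * w[2] * theta             # dQ_w/dtheta at the current theta
    theta += 0.1 * grad                          # ascend the *surrogate* score

# theta has climbed toward the metric's optimum near 3.0
```

The point of the sketch: the generator never touches `metric` directly; it only ever follows gradients of the fitted surrogate, which is refit on each batch of generator outputs.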

Deep-Trim: Revisiting L1 Regularization for Connection Pruning of Deep Network

no code implementations ICLR 2019 Chih-Kuan Yeh, Ian E. H. Yen, Hong-You Chen, Chun-Pei Yang, Shou-De Lin, Pradeep Ravikumar

State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones.
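The L1-regularization route to connection pruning mentioned in the title is usually realized via the soft-thresholding (proximal) operator, which zeroes out small weights exactly. A minimal sketch of that operator on a toy weight vector (illustrative only, not the paper's training procedure):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: shrink every weight toward zero
    and set the small ones exactly to zero (the connections to prune)."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

# Toy weight vector standing in for one layer of a trained DNN.
w = np.array([0.8, -0.05, 0.3, -0.9, 0.02])
w_pruned = soft_threshold(w, 0.1)
sparsity = float(np.mean(w_pruned == 0.0))  # fraction of pruned connections
```

Applied inside proximal gradient descent, this step is what makes an L1 penalty produce exact zeros rather than merely small weights.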

A Regulation Enforcement Solution for Multi-agent Reinforcement Learning

no code implementations · 29 Jan 2019 Fan-Yun Sun, Yen-Yu Chang, Yueh-Hua Wu, Shou-De Lin

If artificially intelligent (AI) agents make decisions on behalf of human beings, we would hope they can also follow established regulations while interacting with humans or other AI agents.

Multi-agent Reinforcement Learning

MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization

no code implementations NeurIPS 2018 Ian En-Hsu Yen, Wei-Cheng Lee, Kai Zhong, Sung-En Chang, Pradeep K. Ravikumar, Shou-De Lin

We consider a generalization of mixed regression where the response is an additive combination of several mixture components.
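In my own notation (not necessarily the paper's), the generalized mixed regression described above can be written as

```latex
y_i \;=\; \sum_{k=1}^{K} z_{ik}\, x_i^{\top} w_k \;+\; \varepsilon_i ,
```

where classical mixed regression restricts each response to a single component ($z_{ik} \in \{0,1\}$, $\sum_k z_{ik} = 1$), while the additive generalization lets several components contribute to one response; the atomic-norm penalty in the title is what replaces the combinatorial search over component assignments with a convex problem.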

Attribute-aware Collaborative Filtering: Survey and Classification

no code implementations · 20 Oct 2018 Wen-Hao Chen, Chin-Chi Hsu, Yi-An Lai, Vincent Liu, Mi-Yen Yeh, Shou-De Lin

Attribute-aware CF models aim at rating prediction given not only the historical ratings from users to items, but also the information associated with users (e.g., age), items (e.g., price), or even ratings (e.g., rating time).

Classification · General Classification

A Memory-Network Based Solution for Multivariate Time-Series Forecasting

2 code implementations · 6 Sep 2018 Yen-Yu Chang, Fan-Yun Sun, Yueh-Hua Wu, Shou-De Lin

Inspired by Memory Network proposed for solving the question-answering task, we propose a deep learning based model named Memory Time-series network (MTNet) for time series forecasting.

Multivariate Time Series Forecasting · Question Answering +1

ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models

no code implementations · 6 Sep 2018 Yueh-Hua Wu, Fan-Yun Sun, Yen-Yu Chang, Shou-De Lin

This work provides a thorough study on how reward scaling can affect performance of deep reinforcement learning agents.

A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents

1 code implementation · 12 Dec 2017 Yueh-Hua Wu, Shou-De Lin

This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent with the capability of behaving ethically.

Towards a More Reliable Privacy-preserving Recommender System

no code implementations · 21 Nov 2017 Jia-Yun Jiang, Cheng-Te Li, Shou-De Lin

This paper proposes a privacy-preserving distributed recommendation framework, Secure Distributed Collaborative Filtering (SDCF), to preserve the privacy of value, model and existence altogether.

Recommendation Systems

Latent Feature Lasso

no code implementations ICML 2017 Ian En-Hsu Yen, Wei-Cheng Lee, Sung-En Chang, Arun Sai Suggala, Shou-De Lin, Pradeep Ravikumar

The latent feature model (LFM), proposed in (Griffiths & Ghahramani, 2005), but possibly with earlier origins, is a generalization of a mixture model, where each instance is generated not from a single latent class but from a combination of latent features.
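In standard latent feature model notation (an illustration, not a quote from the paper), each observation is generated from a binary combination of feature vectors:

```latex
x_n \;=\; \sum_{k=1}^{K} z_{nk}\, a_k \;+\; \varepsilon_n ,
\qquad z_{nk} \in \{0, 1\},
```

so a mixture model is recovered as the special case $\sum_k z_{nk} = 1$ (exactly one active feature per instance), while the LFM allows any subset of features to be active.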

Toward Implicit Sample Noise Modeling: Deviation-driven Matrix Factorization

no code implementations · 28 Oct 2016 Guang-He Lee, Shao-Wen Yang, Shou-De Lin

Specifically, by modeling and learning the deviation of data, we design a novel matrix factorization model.

A Dual Augmented Block Minimization Framework for Learning with Limited Memory

no code implementations NeurIPS 2015 Ian En-Hsu Yen, Shan-Wei Lin, Shou-De Lin

In the past few years, several techniques have been proposed for training a linear Support Vector Machine (SVM) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method was used to balance the cost spent on I/O and computation.

Sparse Random Feature Algorithm as Coordinate Descent in Hilbert Space

no code implementations NeurIPS 2014 Ian En-Hsu Yen, Ting-Wei Lin, Shou-De Lin, Pradeep K. Ravikumar, Inderjit S. Dhillon

In this paper, we propose a Sparse Random Feature algorithm, which learns a sparse non-linear predictor by minimizing an $\ell_1$-regularized objective function over the Hilbert space induced by the kernel function.
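The random-feature starting point of the algorithm above can be illustrated with the classic random Fourier feature map, which approximates an RBF kernel by an explicit finite-dimensional feature vector; an $\ell_1$ penalty on a linear model over these features then selects a sparse subset of them. A minimal sketch of the feature map itself (the sparse-learning step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 5000   # input dimension, number of random features
gamma = 0.5      # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)

# Random Fourier feature map: phi(x) . phi(y) approximates k(x, y).
W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = float(phi(x) @ phi(y))
exact = float(np.exp(-gamma * np.sum((x - y) ** 2)))
# approx is within O(1/sqrt(D)) of exact
```

The approximation error shrinks as $O(1/\sqrt{D})$, which is why a large feature budget plus an $\ell_1$ penalty (keep many candidates, select few) is an attractive combination.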
