no code implementations • EMNLP 2021 • Mieradilijiang Maimaiti, Yang Liu, Yuanhang Zheng, Gang Chen, Kaiyu Huang, Ji Zhang, Huanbo Luan, Maosong Sun
Moreover, the robustness of previous neural methods is limited by their reliance on large-scale annotated data.
1 code implementation • COLING 2022 • Yangjun Wu, Han Wang, Dongxiang Zhang, Gang Chen, Hao Zhang
Specifically, we design 5 types of templates as instructional prompts; each template includes a question that acts as the driver to teach UGEN to grasp the paradigm, options that list the candidate intents or slots to reduce the answer search space, and the context, which denotes the original utterance.
1 code implementation • ACL 2022 • Jue Wang, Ke Chen, Gang Chen, Lidan Shou, Julian McAuley
In this paper, we propose SkipBERT to accelerate BERT inference by skipping the computation of shallow layers.
no code implementations • 18 May 2023 • Gang Chen, Victoria Huang
Armed with these technical developments, we propose a new policy gradient algorithm that learns to minimize the absolute divergence in the Riemannian manifold as an important regularization mechanism, allowing the Riemannian manifold to smooth its policy gradient vector field.
no code implementations • 15 May 2023 • Wentao Ye, Mingfeng Ou, Tianyi Li, Yipeng Chen, Xuetao Ma, Yifan Yanggong, Sai Wu, Jie Fu, Gang Chen, Haobo Wang, Junbo Zhao
With most of the related literature in the era of LLM uncharted, we propose an automated workflow that copes with an upscaled number of queries/responses.
no code implementations • 10 May 2023 • Haobo Wang, Shisong Yang, Gengyu Lyu, Weiwei Liu, Tianlei Hu, Ke Chen, Songhe Feng, Gang Chen
In partial multi-label learning (PML), each data example is equipped with a candidate label set, which consists of multiple ground-truth labels and other false-positive labels.
1 code implementation • 11 Apr 2023 • Jianan Yang, Haobo Wang, Ruixuan Xiao, Sai Wu, Gang Chen, Junbo Zhao
Recent large-scale generative modeling has attained unprecedented performance, especially in producing high-fidelity images driven by text prompts.
no code implementations • 24 Mar 2023 • Hanyu Zhou, Yi Chang, Gang Chen, Luxin Yan
In motion adaptation, we utilize flow consistency knowledge to align the cross-domain optical flows into a motion-invariant common space, where the optical flow from clean weather is used as guidance knowledge to obtain a preliminary optical flow for adverse weather.
no code implementations • 13 Feb 2023 • Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen
In order to address this issue, we propose GAS, an approach that can successfully adapt existing robust AGRs to non-IID settings.
no code implementations • 9 Feb 2023 • Hexiang Pan, Quang-Trung Ta, Meihui Zhang, Yeow Meng Chee, Gang Chen, Beng Chin Ooi
Consequently, it improves both the response time and throughput, and effectively handles nodes distributed across the Internet where crash and network failures might occur.
no code implementations • 18 Jan 2023 • Jixing Li, Xiaozhou Guo, Benzhe Dai, Guoliang Gong, Min Jin, Gang Chen, Wenyu Mao, Huaxiang Lu
This is usually ignored and restricts quantization performance.
no code implementations • 18 Dec 2022 • Gang Chen
Diffusion probabilistic models have recently achieved high-quality image synthesis.
1 code implementation • 27 Nov 2022 • Gang Chen, Jiawei Chen, Fuli Feng, Sheng Zhou, Xiangnan He
Traditional solutions first train a full teacher model from the training data, and then transfer its knowledge (i.e., soft labels) to supervise the learning of a compact student model.
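The soft-label transfer described here is classic knowledge distillation. Below is a minimal sketch of a temperature-softened distillation loss; this is the standard formulation, not necessarily this paper's exact objective:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature yields a softer,
    # more informative distribution over classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution (soft labels)
    # and the student's softened prediction.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))
```

By Gibbs' inequality, the loss is minimized when the student's softened distribution matches the teacher's, which is what drives the student toward the teacher's behavior.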
1 code implementation • 3 Oct 2022 • Huanzhou Zhu, Bo Zhao, Gang Chen, Weifeng Chen, Yijie Chen, Liang Shi, Yaodong Yang, Peter Pietzuch, Lei Chen
Yet, current distributed RL systems tie the definition of RL algorithms to their distributed execution: they hard-code particular distribution strategies and only accelerate specific parts of the computation (e.g., policy network updates) on GPU workers.
no code implementations • 29 Sep 2022 • Gang Chen, Victoria Huang
In this paper, we propose a new technique to train an ensemble of base learners based on an innovative multi-step integration method.
1 code implementation • 21 Sep 2022 • Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao
Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of a single ground-truth label.
no code implementations • 28 Jun 2022 • Haitao Meng, Changcai Li, Gang Chen, Alois Knoll
In the experiments, we develop a system with a less powerful stereo matching predictor and adopt the proposed refinement schemes to improve the accuracy.
no code implementations • 14 Jun 2022 • Beng Chin Ooi, Gang Chen, Mike Zheng Shou, Kian-Lee Tan, Anthony Tung, Xiaokui Xiao, James Wei Luen Yip, Meihui Zhang
In the Metaverse, the physical space and the virtual space co-exist and interact simultaneously.
1 code implementation • 15 Apr 2022 • Gang Chen, Yu Lu, Rong Su, Zhaodan Kong
Machine learning-based methods have achieved successful applications in machinery fault diagnosis.
1 code implementation • 22 Jan 2022 • Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.
no code implementations • 13 Nov 2021 • Gang Chen
For example, we need a Gaussian policy with high variance to explore objects of interest in a large image, which may cause randomized search and unstable learning.
no code implementations • 29 Sep 2021 • Gang Chen
For example, we need a Gaussian policy with high variance to explore objects of interest in a large image, which may cause randomized search and unstable learning.
1 code implementation • ICLR 2022 • Haobo Wang, Ruixuan Xiao, Sharon Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.
no code implementations • 9 Sep 2021 • Lei Zhu, Zhaojing Luo, Wei Wang, Meihui Zhang, Gang Chen, Kaiping Zheng
In multimedia analysis, domain adaptation studies the problem of cross-domain knowledge transfer from a label-rich source domain to a label-scarce target domain, thus potentially alleviating the annotation requirement for deep learning models.
no code implementations • 3 Aug 2021 • Naili Xing, Sai Ho Yeung, ChengHao Cai, Teck Khim Ng, Wei Wang, Kaiyuan Yang, Nan Yang, Meihui Zhang, Gang Chen, Beng Chin Ooi
Specifically, in terms of usability, it is challenging for non-experts to implement deep learning models, obtain the right settings for the entire machine learning pipeline, manage models and datasets, and exploit external data sources all together.
no code implementations • 15 Jul 2021 • Guanqun Ai, Xingquan Zuo, Gang Chen, Binglin Wu
Building on an existing method for calculating the carrying capacity, we develop a new technique to enhance the matching degree at each bus station.
1 code implementation • 5 Jul 2021 • Shaofeng Cai, Kaiping Zheng, Gang Chen, H. V. Jagadish, Beng Chin Ooi, Meihui Zhang
The key idea is to model feature interactions with cross features selectively and dynamically, by first transforming the input features into exponential space, and then determining the interaction order and interaction weights adaptively for each cross feature.
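One plausible reading of "transforming the input features into exponential space" is that a weighted sum in log space becomes a product of feature powers, so learned weights set each feature's contribution to a cross feature. The sketch below is a hypothetical illustration of that idea (both the function and its interpretation are assumptions, not the paper's actual operator):

```python
import math

def cross_feature(features, weights):
    # A weighted sum of log-features, exponentiated back:
    #   exp(sum_i w_i * log(x_i)) == prod_i x_i ** w_i
    # so the weights effectively control the interaction order of each
    # feature in the cross term. Assumes strictly positive features.
    return math.exp(sum(w * math.log(x) for x, w in zip(features, weights)))
```

With unit weights this reduces to a plain multiplicative cross feature; fractional or zero weights attenuate or drop individual features.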
no code implementations • 17 Jun 2021 • Jake Zhao, Mingfeng Ou, Linji Xue, Yunkai Cui, Sai Wu, Gang Chen
Most, if not all, modern deep learning systems restrict themselves to a single dataset for neural network training and inference.
no code implementations • 8 Jun 2021 • Jimin Tan, Jianan Yang, Sai Wu, Gang Chen, Jake Zhao
These split protocols are based on two assumptions: (i) the dataset is fixed and eternally static, so that we can evaluate different machine learning algorithms or models; (ii) a complete set of annotated data is available to researchers or industrial practitioners.
1 code implementation • AAAI 2021 • Jue Wang, Ke Chen, Lidan Shou, Sai Wu, Gang Chen
By using some particular weakly-labeled data, namely the plain phrases included in sentences, we propose a weakly-supervised slot filling approach.
no code implementations • 21 Apr 2021 • Zeyu Chen, Xinhang Zhang, Juan Li, Jingxuan Ni, Gang Chen, Shaohua Wang, Fangfang Fan, Changfeng Charles Wang, Xiaotao Li
Our experimental results on multiple metrics show that our framework captures typical, micro, and dynamic facial features along spatiotemporal dimensions, contributing to mild fatigue detection in the wild.
no code implementations • 30 Mar 2021 • Can Cui, Wei Wang, Meihui Zhang, Gang Chen, Zhaojing Luo, Beng Chin Ooi
In this paper, we introduce a new class of alphas to model scalar, vector, and matrix features which possess the strengths of these two existing classes.
no code implementations • 10 Mar 2021 • Xin Qian, Jungwoo Shin, Yaodong Tu, James Han Zhang, Gang Chen
This work also presents a comprehensive model with a coupled analysis of mass transfer and reaction kinetics in a porous electrode that can accurately capture the flow rate dependence of power density and energy conversion efficiency.
Applied Physics Classical Physics
no code implementations • 15 Feb 2021 • Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama
To enhance adversarial robustness, adversarial training learns deep neural networks on the adversarial variants generated by their natural data.
no code implementations • 6 Feb 2021 • Victoria Huang, Gang Chen, Qiang Fu
Recently, distributed controller architectures have been quickly gaining popularity in Software-Defined Networking (SDN).
no code implementations • 21 Jan 2021 • Bruno Lorenzi, Paolo Mariani, Andrea Reale, Aldo Di Carlo, Gang Chen, Dario Narducci
The model results showed efficiency gains in all three cases, with a maximum of +3.1% for perovskites (from 16.4% to 19.5%).
Applied Physics
no code implementations • 31 Dec 2020 • Zhixing Tan, Shuo Wang, Zonghan Yang, Gang Chen, Xuancheng Huang, Maosong Sun, Yang Liu
Machine translation (MT) is an important sub-field of natural language processing that aims to translate natural languages using computers.
no code implementations • 25 Dec 2020 • Gang Chen, Maosong Sun, Yang Liu
In this work, we propose a method for building a continuous knowledge base (CKB) that can store knowledge imported from multiple, diverse neural networks.
no code implementations • 16 Dec 2020 • Chao-Kai Li, Xu-Ping Yao, Gang Chen
Magnetic skyrmions are topological spin textures that can be used as information carriers for next-generation information storage and processing.
Strongly Correlated Electrons Mesoscale and Nanoscale Physics Materials Science
no code implementations • 12 Dec 2020 • Gang Chen
Most of the neural networks (NNs) learned via state-of-the-art machine learning techniques are black-box models.
no code implementations • 23 Nov 2020 • Hong Lin, Lidan Shou, Ke Chen, Gang Chen, Sai Wu
When NFL recovery is required, the framework adapts the federated model to each client's local data by learning a Layer-wise Intertwined Dual-model.
no code implementations • 18 Nov 2020 • Yayuan Qin, Yao Shen, ChangLe Liu, Hongliang Wo, Yonghao Gao, Yu Feng, Xiaowen Zhang, Gaofeng Ding, Yiqing Gu, Qisi Wang, Shoudong Shen, Helen C. Walker, Robert Bewley, Jianhui Xu, Martin Boehm, Paul Steffens, Seiko Ohira-Kawamura, Naoki Murai, Astrid Schneidewind, Xin Tong, Gang Chen, Jun Zhao
We report thermodynamic and neutron scattering measurements of the triangular-lattice quantum Ising magnet TmMgGaO4 in longitudinal magnetic fields.
Strongly Correlated Electrons Materials Science
no code implementations • 9 Nov 2020 • Zhebin Zhang, Sai Wu, Dawei Jiang, Gang Chen
In this work, we propose a novel BERT-enhanced NMT model called BERT-JAM which improves upon existing models from two aspects: 1) BERT-JAM uses joint-attention modules to allow the encoder/decoder layers to dynamically allocate attention between different representations, and 2) BERT-JAM allows the encoder/decoder layers to make use of BERT's intermediate representations by composing them using a gated linear unit (GLU).
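The gated linear unit (GLU) used to compose BERT's intermediate representations is a standard building block: an elementwise product of one input with a sigmoid gate computed from another. A minimal sketch follows; the `compose` helper is a hypothetical illustration of gated composition, not BERT-JAM's exact module:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def glu(values, gates):
    # Gated linear unit: elementwise values * sigmoid(gates); the gate
    # decides how much of each channel passes through.
    return [v * sigmoid(g) for v, g in zip(values, gates)]

def compose(representations, gates):
    # Hypothetical composition of several intermediate representations:
    # gate each one, then sum elementwise into a single vector.
    dim = len(representations[0])
    out = [0.0] * dim
    for rep, gate in zip(representations, gates):
        for i, v in enumerate(glu(rep, gate)):
            out[i] += v
    return out
```

A gate value of 0 passes half the signal (sigmoid(0) = 0.5); large positive gates pass it almost unchanged, and large negative gates suppress it.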
no code implementations • 17 Oct 2020 • Zhaojing Luo, Sai Ho Yeung, Meihui Zhang, Kaiping Zheng, Lei Zhu, Gang Chen, Feiyi Fan, Qian Lin, Kee Yuan Ngiam, Beng Chin Ooi
In this paper, we identify two main challenges that arise during the deployment of machine learning pipelines, and address them with the design of versioning for an end-to-end analytics system MLCask.
no code implementations • 14 Aug 2020 • Yuncheng Wu, Shaofeng Cai, Xiaokui Xiao, Gang Chen, Beng Chin Ooi
Federated learning (FL) is an emerging paradigm that enables multiple organizations to jointly train a model without revealing their private data to each other.
1 code implementation • 11 Aug 2020 • Gang Chen, Yi Ding, Hugo Edwards, Chong Hin Chau, Sai Hou, Grace Johnson, Mohammed Sharukh Syed, Haoyuan Tang, Yue Wu, Ye Yan, Gil Tidhar, Nir Lipovetzky
Planimation is a modular and extensible open source framework to visualise sequential solutions of planning problems specified in PDDL.
no code implementations • 8 Jul 2020 • Qiang Wei, Xuewei Zhang, Weiyin Deng, Jiuyang Lu, Xueqin Huang, Mou Yan, Gang Chen, Zhengyou Liu, Suotang Jia
Here we report the realization of a second-order topological Weyl semimetal in a 3D-printed acoustic crystal, which possesses Weyl points in 3D momentum space, 2D Fermi arc states on surfaces and 1D gapless states on hinges.
Mesoscale and Nanoscale Physics
2 code implementations • ACL 2020 • Jue Wang, Lidan Shou, Ke Chen, Gang Chen
Its hidden state at layer l represents an l-gram in the input text, which is labeled only if its corresponding text region represents a complete entity mention.
Ranked #1 on Nested Named Entity Recognition on NNE
no code implementations • 15 Jun 2020 • Chen Chen, Jingfeng Zhang, Anthony K. H. Tung, Mohan Kankanhalli, Gang Chen
We argue that the key to Byzantine detection is monitoring the gradients of clients' model parameters.
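As a simple illustration of gradient monitoring for Byzantine detection, one can flag clients whose gradient norm deviates far from the median norm across clients. This is a generic sketch of the idea, not the detection rule proposed in the paper:

```python
def median(xs):
    # Median of a list of numbers.
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def filter_clients(gradients, threshold=2.0):
    # Keep clients whose gradient L2 norm is within `threshold` times the
    # median norm; clients far outside are treated as suspicious.
    norms = [sum(g * g for g in grad) ** 0.5 for grad in gradients]
    med = median(norms)
    return [i for i, n in enumerate(norms) if n <= threshold * med]
```

A client submitting a wildly scaled gradient stands out against the median, while honest clients with similar norms are retained.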
no code implementations • 12 Jun 2020 • Gang Chen
Q-learning with value function approximation may perform poorly because of overestimation bias and imprecise estimates.
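A standard remedy for the overestimation bias mentioned here is double Q-learning, which decouples action selection from action evaluation across two estimators. The tabular sketch below shows that known baseline technique, not necessarily this paper's method:

```python
def double_q_update(q1, q2, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # Double Q-learning step for one transition (s, a, r, s_next):
    # select the greedy action with q1, but evaluate it with q2,
    # which reduces the overestimation bias of plain Q-learning.
    a_star = max(q1[s_next], key=q1[s_next].get)
    target = r + gamma * q2[s_next][a_star]
    q1[s][a] += alpha * (target - q1[s][a])
```

In the full algorithm, the roles of `q1` and `q2` are swapped at random on each step so both tables are updated symmetrically.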
2 code implementations • 23 Mar 2020 • Pingcheng Ruan, Dumitrel Loghin, Quang-Trung Ta, Meihui Zhang, Gang Chen, Beng Chin Ooi
For evaluation, we implement our method in two blockchains respectively, FabricSharp on top of Hyperledger Fabric, and FastFabricSharp on top of FastFabric.
Distributed, Parallel, and Cluster Computing Databases Performance
2 code implementations • 4 Mar 2020 • Cong Yue, Zhongle Xie, Meihui Zhang, Gang Chen, Beng Chin Ooi, Sheng Wang, Xiaokui Xiao
We establish the worst-case guarantees of each index in terms of these five metrics, and we experimentally evaluate all indexes in a large variety of settings.
Databases
4 code implementations • 9 Feb 2020 • Razvan V. Marinescu, Neil P. Oxtoby, Alexandra L. Young, Esther E. Bron, Arthur W. Toga, Michael W. Weiner, Frederik Barkhof, Nick C. Fox, Arman Eshaghi, Tina Toni, Marcin Salaterski, Veronika Lunina, Manon Ansart, Stanley Durrleman, Pascal Lu, Samuel Iddi, Dan Li, Wesley K. Thompson, Michael C. Donohue, Aviv Nahon, Yarden Levy, Dan Halbersberg, Mariya Cohen, Huiling Liao, Tengfei Li, Kaixian Yu, Hongtu Zhu, Jose G. Tamez-Pena, Aya Ismail, Timothy Wood, Hector Corrada Bravo, Minh Nguyen, Nanbo Sun, Jiashi Feng, B. T. Thomas Yeo, Gang Chen, Ke Qi, Shiyang Chen, Deqiang Qiu, Ionut Buciuman, Alex Kelner, Raluca Pop, Denisa Rimocea, Mostafa M. Ghazi, Mads Nielsen, Sebastien Ourselin, Lauge Sorensen, Vikram Venkatraghavan, Keli Liu, Christina Rabe, Paul Manser, Steven M. Hill, James Howlett, Zhiyue Huang, Steven Kiddle, Sach Mukherjee, Anais Rouanet, Bernd Taschler, Brian D. M. Tom, Simon R. White, Noel Faux, Suman Sedai, Javier de Velasco Oriol, Edgar E. V. Clemente, Karol Estrada, Leon Aksman, Andre Altmann, Cynthia M. Stonnington, Yalin Wang, Jianfeng Wu, Vivek Devadas, Clementine Fourrier, Lars Lau Raket, Aristeidis Sotiras, Guray Erus, Jimit Doshi, Christos Davatzikos, Jacob Vogel, Andrew Doyle, Angela Tam, Alex Diaz-Papkovich, Emmanuel Jammeh, Igor Koval, Paul Moore, Terry J. Lyons, John Gallacher, Jussi Tohka, Robert Ciszek, Bruno Jedynak, Kruti Pandya, Murat Bilgel, William Engels, Joseph Cole, Polina Golland, Stefan Klein, Daniel C. Alexander
TADPOLE's unique results suggest that current prediction algorithms provide sufficient accuracy to exploit biomarkers related to clinical diagnosis and ventricle volume, for cohort refinement in clinical trials for Alzheimer's disease.
no code implementations • ICLR 2020 • Shaofeng Cai, Yao Shu, Wei Wang, Gang Chen, Beng Chin Ooi
Recent years have witnessed growing interest in designing efficient neural networks and neural architecture search (NAS).
no code implementations • 5 Dec 2019 • Gang Chen, Shengyu He, Haitao Meng, Kai Huang
Over the last few years, deep neural networks (DNNs) have achieved great success in computer vision and other fields.
no code implementations • 5 Dec 2019 • Gang Chen, Yang Liu, Huanbo Luan, Meng Zhang, Qun Liu, Maosong Sun
While the use of neural networks has proven effective in improving story generation, how to learn to generate an explainable high-level plot still remains a major challenge.
no code implementations • 24 Nov 2019 • Gang Chen
Deep reinforcement learning (DRL) on Markov decision processes (MDPs) with continuous action spaces is often approached by directly training parametric policies along the direction of estimated policy gradients (PGs).
no code implementations • 11 Nov 2019 • Gang Chen, Dingcheng Li, Ran Xu
Then, given the selected samples, we propose adaptive multi-step TD, which generalizes TD($\lambda$) but adaptively switches on/off the backups from future returns of different steps.
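For context, TD($\lambda$), which the adaptive multi-step method generalizes, mixes $n$-step returns with exponentially decaying weights. A minimal sketch of the targets follows (standard definitions over a finite trajectory, not the paper's adaptive switching rule):

```python
def n_step_return(rewards, values, t, n, gamma=0.99):
    # n-step TD target from time t: discounted rewards for n steps,
    # then bootstrap from the value estimate at t + n.
    g, discount = 0.0, 1.0
    for k in range(n):
        g += discount * rewards[t + k]
        discount *= gamma
    return g + discount * values[t + n]

def lambda_return(rewards, values, t, lam=0.9, gamma=0.99):
    # TD(lambda) target: exponentially weighted mixture of n-step returns,
    # with the remaining weight mass assigned to the final full return.
    horizon = len(rewards) - t
    total = 0.0
    for n in range(1, horizon + 1):
        w = (1 - lam) * lam ** (n - 1) if n < horizon else lam ** (n - 1)
        total += w * n_step_return(rewards, values, t, n, gamma)
    return total
```

With `lam=0` the target reduces to the one-step TD target; with `lam=1` it becomes the full Monte Carlo return over the trajectory.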
no code implementations • 23 Oct 2019 • Shaojin Cai, Yuyang Xue, Qinquan Gao, Min Du, Gang Chen, Hejun Zhang, Tong Tong
It is not necessary for an expert to pick a representative reference slide in the proposed TAN method.
no code implementations • 21 Oct 2019 • Gang Chen
Guided by this framework and the maximum-entropy learning technique, we will first train agents' policies with shared global component to foster coordinated and effective learning.
no code implementations • 13 Oct 2019 • Gang Chen, Hongzhe Yu, Wei Dong, Xinjun Sheng, Xiangyang Zhu, Han Ding
While training an end-to-end navigation network in the real world is usually of high cost, simulation provides a safe and cheap environment in this training stage.
1 code implementation • 3 Oct 2019 • Pingcheng Ruan, Gang Chen, Tien Tuan Anh Dinh, Qian Lin, Dumitrel Loghin, Beng Chin Ooi, Meihui Zhang
As blockchain evolves into another data management system, the natural question is how it compares against distributed database systems.
Databases Performance
no code implementations • 6 Sep 2019 • Dumitrel Loghin, Shaofeng Cai, Gang Chen, Tien Tuan Anh Dinh, Feiyi Fan, Qian Lin, Janice Ng, Beng Chin Ooi, Xutao Sun, Quang-Trung Ta, Wei Wang, Xiaokui Xiao, Yang Yang, Meihui Zhang, Zhonghua Zhang
With 5G on the verge of being adopted as the next mobile network, there is a need to analyze its impact on the landscape of computing and data management.
Networking and Internet Architecture Databases Distributed, Parallel, and Cluster Computing
no code implementations • 21 Jun 2019 • Wei Wang, Meihui Zhang, Gang Chen, H. V. Jagadish, Beng Chin Ooi, Kian-Lee Tan
Deep learning has recently become very popular on account of its incredible success in many complex data-driven applications, such as image classification and speech recognition.
no code implementations • 19 Jun 2019 • Chen Wang, Hui Ma, Gang Chen, Sven Hartmann
The objective of this problem is to find a solution with optimized or near-optimized overall QoS and QoSM within polynomial time over a service request.
2 code implementations • 16 May 2019 • Dumitrel Loghin, Gang Chen, Tien Tuan Anh Dinh, Beng Chin Ooi, Yong Meng Teo
Motivated by the massive energy usage of blockchain, on the one hand, and by significant performance improvements in low-power, wimpy systems, on the other hand, we perform an in-depth time-energy analysis of blockchain systems on low-power nodes in comparison to high-performance nodes.
Distributed, Parallel, and Cluster Computing Databases Emerging Technologies Performance
no code implementations • 6 Apr 2019 • Shaofeng Cai, Yao Shu, Gang Chen, Beng Chin Ooi, Wei Wang, Meihui Zhang
However, many recent works show that the standard dropout is ineffective or even detrimental to the training of CNNs.
1 code implementation • 3 Apr 2019 • Shaofeng Cai, Gang Chen, Beng Chin Ooi, Jinyang Gao
Model slicing could be viewed as an elastic computation solution without requiring more computational resources.
no code implementations • 18 Feb 2019 • Chen Wang, Hui Ma, Gang Chen, Sven Hartmann
We also found that the use of the proper neighborhood structure can enhance the effectiveness of our approach.
no code implementations • 14 Feb 2019 • Gang Chen, Yiming Peng
We propose a new policy iteration theory as an important extension of soft policy iteration and Soft Actor-Critic (SAC), one of the most efficient model-free algorithms for deep reinforcement learning.
no code implementations • 14 Feb 2019 • Victoria Huang, Gang Chen, Qiang Fu, Elliott Wen
In comparison to communication delay, existing literature on the CPP assumes that the influence of controller workload distribution on network performance is negligible.
no code implementations • 26 Jan 2019 • Soheila Sadeghiram, Hui Ma, Gang Chen
Data-intensive Web services, which manipulate and process such data, are of great interest for implementing data-intensive processes, such as distributed Data-intensive Web Service Composition (DWSC).
no code implementations • 16 Jan 2019 • Soheila Sadeghiram, Hui Ma, Gang Chen
As a fundamental challenge for service developers, service composition must fulfil functional requirements and optimise Quality of Service (QoS) attributes, simultaneously.
no code implementations • 2 Sep 2018 • Gang Chen, Yiming Peng, Mengjie Zhang
With the aim of improving sample efficiency and learning performance, we develop a new DRL algorithm in this paper that seamlessly integrates entropy-induced and bootstrap-induced techniques for efficient and deep exploration of the learning environment.
no code implementations • 26 Apr 2018 • Jinyang Gao, Wei Wang, Meihui Zhang, Gang Chen, H. V. Jagadish, Guoliang Li, Teck Khim Ng, Beng Chin Ooi, Sheng Wang, Jingren Zhou
In many complex applications such as healthcare, subject matter experts (e.g., clinicians) are the ones who appreciate the importance of features that affect health, and their knowledge together with existing knowledge bases is critical to the end results.
no code implementations • 17 Apr 2018 • Gang Chen, Yiming Peng, Mengjie Zhang
While PPO is inspired by the same learning theory that justifies trust region policy optimization (TRPO), PPO substantially simplifies algorithm design and improves data efficiency by performing multiple epochs of \emph{clipped policy optimization} from sampled data.
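The clipped policy optimization mentioned here refers to PPO's clipped surrogate objective, which removes the incentive to move the policy probability ratio outside a small interval around 1. A minimal per-sample sketch:

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    # PPO clipped objective for one sample:
    #   min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    # where r is the new-to-old policy probability ratio and A the
    # advantage estimate. Taking the min makes the bound pessimistic.
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

In training, this quantity is averaged over a batch and maximized for multiple epochs over the same sampled data, which is where PPO's data efficiency comes from.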
1 code implementation • PVLDB (The Proceedings of the VLDB Endowment) 2018 • Wei Wang, Sheng Wang, Jinyang Gao, Meihui Zhang, Gang Chen, Teck Khim Ng, Beng Chin Ooi
Second, expert knowledge is required to optimize the training and inference procedures in terms of efficiency and effectiveness, which imposes a heavy burden on system users.
no code implementations • 14 Feb 2018 • Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Wanzeng Fu, Beng Chin Ooi, Pingcheng Ruan
By integrating the core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort.
Databases Cryptography and Security Distributed, Parallel, and Cluster Computing
1 code implementation • 18 Dec 2017 • Johannes Albrecht, Antonio Augusto Alves Jr, Guilherme Amadio, Giuseppe Andronico, Nguyen Anh-Ky, Laurent Aphecetche, John Apostolakis, Makoto Asai, Luca Atzori, Marian Babik, Giuseppe Bagliesi, Marilena Bandieramonte, Sunanda Banerjee, Martin Barisits, Lothar A. T. Bauerdick, Stefano Belforte, Douglas Benjamin, Catrin Bernius, Wahid Bhimji, Riccardo Maria Bianchi, Ian Bird, Catherine Biscarat, Jakob Blomer, Kenneth Bloom, Tommaso Boccali, Brian Bockelman, Tomasz Bold, Daniele Bonacorsi, Antonio Boveia, Concezio Bozzi, Marko Bracko, David Britton, Andy Buckley, Predrag Buncic, Paolo Calafiura, Simone Campana, Philippe Canal, Luca Canali, Gianpaolo Carlino, Nuno Castro, Marco Cattaneo, Gianluca Cerminara, Javier Cervantes Villanueva, Philip Chang, John Chapman, Gang Chen, Taylor Childers, Peter Clarke, Marco Clemencic, Eric Cogneras, Jeremy Coles, Ian Collier, David Colling, Gloria Corti, Gabriele Cosmo, Davide Costanzo, Ben Couturier, Kyle Cranmer, Jack Cranshaw, Leonardo Cristella, David Crooks, Sabine Crépé-Renaudin, Robert Currie, Sünje Dallmeier-Tiessen, Kaushik De, Michel De Cian, Albert De Roeck, Antonio Delgado Peris, Frédéric Derue, Alessandro Di Girolamo, Salvatore Di Guida, Gancho Dimitrov, Caterina Doglioni, Andrea Dotti, Dirk Duellmann, Laurent Duflot, Dave Dykstra, Katarzyna Dziedziniewicz-Wojcik, Agnieszka Dziurda, Ulrik Egede, Peter Elmer, Johannes Elmsheuser, V. Daniel Elvira, Giulio Eulisse, Steven Farrell, Torben Ferber, Andrej Filipcic, Ian Fisk, Conor Fitzpatrick, José Flix, Andrea Formica, Alessandra Forti, Giovanni Franzoni, James Frost, Stu Fuess, Frank Gaede, Gerardo Ganis, Robert Gardner, Vincent Garonne, Andreas Gellrich, Krzysztof Genser, Simon George, Frank Geurts, Andrei Gheata, Mihaela Gheata, Francesco Giacomini, Stefano Giagu, Manuel Giffels, Douglas Gingrich, Maria Girone, Vladimir V. Gligorov, Ivan Glushkov, Wesley Gohn, Jose Benito Gonzalez Lopez, Isidro González Caballero, Juan R. 
González Fernández, Giacomo Govi, Claudio Grandi, Hadrien Grasland, Heather Gray, Lucia Grillo, Wen Guan, Oliver Gutsche, Vardan Gyurjyan, Andrew Hanushevsky, Farah Hariri, Thomas Hartmann, John Harvey, Thomas Hauth, Benedikt Hegner, Beate Heinemann, Lukas Heinrich, Andreas Heiss, José M. Hernández, Michael Hildreth, Mark Hodgkinson, Stefan Hoeche, Burt Holzman, Peter Hristov, Xingtao Huang, Vladimir N. Ivanchenko, Todor Ivanov, Jan Iven, Brij Jashal, Bodhitha Jayatilaka, Roger Jones, Michel Jouvin, Soon Yung Jun, Michael Kagan, Charles William Kalderon, Meghan Kane, Edward Karavakis, Daniel S. Katz, Dorian Kcira, Oliver Keeble, Borut Paul Kersevan, Michael Kirby, Alexei Klimentov, Markus Klute, Ilya Komarov, Dmitri Konstantinov, Patrick Koppenburg, Jim Kowalkowski, Luke Kreczko, Thomas Kuhr, Robert Kutschke, Valentin Kuznetsov, Walter Lampl, Eric Lancon, David Lange, Mario Lassnig, Paul Laycock, Charles Leggett, James Letts, Birgit Lewendel, Teng Li, Guilherme Lima, Jacob Linacre, Tomas Linden, Miron Livny, Giuseppe Lo Presti, Sebastian Lopienski, Peter Love, Adam Lyon, Nicolò Magini, Zachary L. Marshall, Edoardo Martelli, Stewart Martin-Haugh, Pere Mato, Kajari Mazumdar, Thomas McCauley, Josh McFayden, Shawn McKee, Andrew McNab, Rashid Mehdiyev, Helge Meinhard, Dario Menasce, Patricia Mendez Lorenzo, Alaettin Serhan Mete, Michele Michelotto, Jovan Mitrevski, Lorenzo Moneta, Ben Morgan, Richard Mount, Edward Moyse, Sean Murray, Armin Nairz, Mark S. Neubauer, Andrew Norman, Sérgio Novaes, Mihaly Novak, Arantza Oyanguren, Nurcan Ozturk, Andres Pacheco Pages, Michela Paganini, Jerome Pansanel, Vincent R. 
Pascuzzi, Glenn Patrick, Alex Pearce, Ben Pearson, Kevin Pedro, Gabriel Perdue, Antonio Perez-Calero Yzquierdo, Luca Perrozzi, Troels Petersen, Marko Petric, Andreas Petzold, Jónatan Piedra, Leo Piilonen, Danilo Piparo, Jim Pivarski, Witold Pokorski, Francesco Polci, Karolos Potamianos, Fernanda Psihas, Albert Puig Navarro, Günter Quast, Gerhard Raven, Jürgen Reuter, Alberto Ribon, Lorenzo Rinaldi, Martin Ritter, James Robinson, Eduardo Rodrigues, Stefan Roiser, David Rousseau, Gareth Roy, Grigori Rybkine, Andre Sailer, Tai Sakuma, Renato Santana, Andrea Sartirana, Heidi Schellman, Jaroslava Schovancová, Steven Schramm, Markus Schulz, Andrea Sciabà, Sally Seidel, Sezen Sekmen, Cedric Serfon, Horst Severini, Elizabeth Sexton-Kennedy, Michael Seymour, Davide Sgalaberna, Illya Shapoval, Jamie Shiers, Jing-Ge Shiu, Hannah Short, Gian Piero Siroli, Sam Skipsey, Tim Smith, Scott Snyder, Michael D. Sokoloff, Panagiotis Spentzouris, Hartmut Stadie, Giordon Stark, Gordon Stewart, Graeme A. Stewart, Arturo Sánchez, Alberto Sánchez-Hernández, Anyes Taffard, Umberto Tamponi, Jeff Templon, Giacomo Tenaglia, Vakhtang Tsulaia, Christopher Tunnell, Eric Vaandering, Andrea Valassi, Sofia Vallecorsa, Liviu Valsan, Peter Van Gemmeren, Renaud Vernet, Brett Viren, Jean-Roch Vlimant, Christian Voss, Margaret Votava, Carl Vuosalo, Carlos Vázquez Sierra, Romain Wartel, Gordon T. Watts, Torre Wenaus, Sandro Wenzel, Mike Williams, Frank Winklmeier, Christoph Wissing, Frank Wuerthwein, Benjamin Wynne, Zhang Xiaomei, Wei Yang, Efe Yazgan
Particle physics has an ambitious and broad experimental programme for the coming decades.
Computational Physics High Energy Physics - Experiment
1 code implementation • 17 Aug 2017 • Tien Tuan Anh Dinh, Rui Liu, Meihui Zhang, Gang Chen, Beng Chin Ooi, Ji Wang
Blockchain technologies are gaining massive momentum in the last few years.
Databases Cryptography and Security
2 code implementations • 12 Mar 2017 • Tien Tuan Anh Dinh, Ji Wang, Gang Chen, Rui Liu, Beng Chin Ooi, Kian-Lee Tan
However, there is a clear lack of a systematic framework with which different systems can be analyzed and compared against each other.
Databases Cryptography and Security Distributed, Parallel, and Cluster Computing
1 code implementation • 4 Dec 2016 • Gang Chen, Yawei Li, Sargur N. Srihari
Our model is a 3-pathway deep architecture with a hidden-layer representation which is shared by multi-inputs and outputs, and each branch can be composed of a multi-layer deep model.
1 code implementation • 4 Dec 2016 • Gang Chen, Yawei Li, Sargur N. Srihari
On the other hand, word recognition is a sequential problem where we need to model the correlation between characters.
1 code implementation • 8 Oct 2016 • Gang Chen
We describe recurrent neural networks (RNNs), which have attracted great attention for sequential tasks such as handwriting recognition, speech recognition, and image-to-text.
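The basic recurrence these notes describe can be sketched with a one-dimensional Elman cell, where each hidden state folds in the current input and the previous state:

```python
import math

def rnn_forward(inputs, w_x, w_h, b):
    # Minimal Elman RNN over a 1-D sequence of scalars:
    #   h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
    # Returns the hidden state at every time step.
    h = 0.0
    states = []
    for x in inputs:
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states
```

Real sequence models use vector states and weight matrices, but the scalar form already shows how information from earlier inputs persists through the recurrent term `w_h * h`.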
no code implementations • 25 Mar 2016 • Wei Wang, Gang Chen, Haibo Chen, Tien Tuan Anh Dinh, Jinyang Gao, Beng Chin Ooi, Kian-Lee Tan, Sheng Wang
The other is scalability, that is, the deep learning system must be able to provision for a huge demand of computing resources for training large models with massive datasets.
no code implementations • 18 Sep 2015 • Gang Chen, Mikel L. Forcada
This paper describes a free/open-source implementation of the light sliding-window (LSW) part-of-speech tagger for the Apertium free/open-source machine translation platform.
no code implementations • 26 Mar 2015 • Gang Chen, Sargur N. Srihari
In this paper, we propose a K-fan deep structure model, which can handle multi-input and multi-output learning problems effectively.
no code implementations • 26 Jan 2015 • Gang Chen
Thus, our model unifies transductive learning, feature learning and maximum margin techniques in the semi-supervised clustering framework.
1 code implementation • 13 Jan 2015 • Gang Chen
As an unsupervised method, our model first leverages the advantages of deep learning for feature representation and dimension reduction.
no code implementations • 10 Dec 2014 • Gang Chen, Ran Xu, Sargur Srihari
Deep learning has attracted great attention recently and yielded the state of the art performance in dimension reduction and classification problems.
no code implementations • 21 Oct 2014 • Ran Xu, Gang Chen, Caiming Xiong, Wei Chen, Jason J. Corso
The focus of the action understanding literature has predominantly been classification; however, there are many applications demanding richer action understanding, such as mobile robotics and video search, with solutions to classification, localization and detection.
no code implementations • 13 Jun 2014 • Gang Chen, Sargur H. Srihari
We propose a hierarchical correlated RBM for classification problems, which generalizes the classification RBM by sharing information among different classes.
no code implementations • 21 Sep 2013 • Gang Chen
In this paper, we overcome this limitation and propose a latent variable Fisher discriminant analysis model.