no code implementations • EMNLP 2020 • Fei Tan, Yifan Hu, Changwei Hu, Keqian Li, Kevin Yen
In this work, we present a new language pre-training model TNT (Text Normalization based pre-training of Transformers) for content moderation.
no code implementations • 16 Dec 2024 • Zhenyu Wang, Yifan Hu, Peter Bühlmann, Zijian Guo
To address both challenges, we focus on the additive intervention regime and propose nearly necessary and sufficient conditions for ensuring that the invariant prediction model matches the causal outcome model.
1 code implementation • 16 Dec 2024 • Rui Liu, Shuwei He, Yifan Hu, Haizhou Li
The multi-modal component aims to use both the RGB and Depth spaces of the spatial image to learn more comprehensive spatial information, while the multi-scale component seeks to model local and global spatial knowledge simultaneously.
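A minimal sketch of this kind of fusion (module names, channel widths, and the fusion rule below are illustrative assumptions, not the paper's architecture): RGB and depth features are concatenated, then read out at a fine spatial scale and a pooled global scale.

```python
import torch
import torch.nn as nn

class MultiModalMultiScaleEncoder(nn.Module):
    """Toy fusion of RGB and depth features at a local (spatial) and global (pooled) scale."""
    def __init__(self, dim=64):
        super().__init__()
        self.rgb_enc = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.depth_enc = nn.Conv2d(1, dim, kernel_size=3, padding=1)
        self.local_head = nn.Conv2d(2 * dim, dim, kernel_size=3, padding=1)
        self.global_head = nn.Linear(2 * dim, dim)

    def forward(self, rgb, depth):  # rgb: (N, 3, H, W), depth: (N, 1, H, W)
        feats = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)
        local_feat = self.local_head(feats)                      # fine-grained spatial cues
        global_feat = self.global_head(feats.mean(dim=(2, 3)))   # pooled global descriptor
        return local_feat, global_feat
```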
no code implementations • 12 Oct 2024 • Rui Liu, Zhenqi Jia, Jie Yang, Yifan Hu, Haizhou Li
In this paper, we propose a novel Emphasis Rendering scheme for the CTTS model, termed ER-CTTS, that includes two main components: 1) we simultaneously take into account textual and acoustic contexts, with both global and local semantic modeling to understand the conversation context comprehensively; 2) we deeply integrate multi-modal and multi-scale context to learn the influence of context on the emphasis expression of the current utterance.
1 code implementation • 6 Oct 2024 • Peiyuan Liu, Beiliang Wu, Yifan Hu, Naiqi Li, Tao Dai, Jigang Bao, Shu-Tao Xia
Eliminating non-stationarity is essential for avoiding spurious regressions and capturing local dependencies in short-term modeling, while preserving it is crucial for revealing long-term cointegration across variates.
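One common way to separate these two concerns, shown here as a generic instance-normalization sketch rather than the paper's specific module, is to strip per-window statistics before short-term modeling and restore them afterwards so long-term level and trend information is not lost.

```python
import numpy as np

def normalize_window(x, eps=1e-8):
    """Remove per-variate mean/std of a window x with shape (time, variates)."""
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True) + eps
    return (x - mean) / std, (mean, std)

def denormalize_forecast(y_norm, stats):
    """Restore the removed statistics on the forecast."""
    mean, std = stats
    return y_norm * std + mean

window = np.cumsum(np.random.randn(96, 4), axis=0)    # toy non-stationary series
x_norm, stats = normalize_window(window)
forecast_norm = np.tile(x_norm[-1], (24, 1))          # stand-in for any forecaster
forecast = denormalize_forecast(forecast_norm, stats)
```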
no code implementations • 25 Sep 2024 • Peng Zhu, Yuante Li, Yifan Hu, Sheng Xiang, Qinyuan Liu, Dawei Cheng, Yuqi Liang
Existing approaches also generally struggle to effectively capture unobservable latent market states, such as market sentiment and expectations, microstructural factors, and participant behavior patterns, leading to an inadequate understanding of market dynamics and, in turn, reduced prediction accuracy.
no code implementations • 25 Sep 2024 • Xin Chen, Yifan Hu, Minda Zhao
Policy gradient methods are widely used in reinforcement learning.
1 code implementation • 26 Aug 2024 • Peng Zhu, Yuante Li, Yifan Hu, Qinyuan Liu, Dawei Cheng, Yuqi Liang
However, existing methods mostly focus on the short-term dynamic relationships of stocks and directly integrate relationship information with temporal information.
no code implementations • 20 Aug 2024 • Yifan Hu, Jie Wang, Xin Chen, Niao He
This setting captures various optimization paradigms, such as conditional stochastic optimization, distributionally robust optimization, shortfall risk optimization, and machine learning paradigms, such as contrastive learning.
1 code implementation • 31 Jul 2024 • Rui Liu, Yifan Hu, Yi Ren, Xiang Yin, Haizhou Li
After that, the expressive conversational speech is synthesized by the conversation-enriched VITS to deliver feedback to the user. Furthermore, we propose a large-scale Natural CSS Dataset called NCSSD, which includes both naturally recorded conversational speech in improvised styles and dialogues extracted from TV shows.
no code implementations • 22 Jun 2024 • Christopher Burger, Yifan Hu, Thai Le
The location of knowledge within Generative Pre-trained Transformer (GPT)-like models has seen extensive recent investigation.
1 code implementation • 6 Jun 2024 • Yifan Hu, Peiyuan Liu, Peng Zhu, Dawei Cheng, Tao Dai
Transformer-based and MLP-based methods have emerged as leading approaches in time series forecasting (TSF).
Ranked #14 on Time Series Forecasting on ETTh1 (336) Multivariate
1 code implementation • 3 Jun 2024 • Vinzenz Thoma, Barna Pasztor, Andreas Krause, Giorgia Ramponi, Yifan Hu
We empirically demonstrate the performance of our algorithm for reward shaping and tax design.
2 code implementations • 30 May 2024 • Shyam Sundhar Ramesh, Yifan Hu, Iason Chaimalas, Viraj Mehta, Pier Giuseppe Sessa, Haitham Bou Ammar, Ilija Bogunovic
Our approach builds upon reward-free direct preference optimization methods, but unlike previous approaches, it seeks a robust policy which maximizes the worst-case group performance.
no code implementations • 29 May 2024 • Xuxing Chen, Abhishek Roy, Yifan Hu, Krishnakumar Balasubramanian
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional stochastic optimization problem.
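As a sketch of this viewpoint (generic notation, not necessarily the paper's), least-squares instrumental variable regression can be written with the instrument in the outer expectation and the conditional data moment in the inner expectation:

```latex
\min_{\theta}\; \mathbb{E}_{Z}\Big[\big(\mathbb{E}_{X,Y\mid Z}\,[\,Y - f_\theta(X)\,]\big)^{2}\Big]
```

Here the instrument $Z$ plays the role of the outer variable, the pair $(X, Y)$ the conditional variable, the inner function is the residual moment $Y - f_\theta(X)$, and the outer function is the square, matching the conditional stochastic optimization template.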
no code implementations • 25 Mar 2024 • Huifeng Yin, Hanle Zheng, Jiayi Mao, Siyuan Ding, Xing Liu, Mingkun Xu, Yifan Hu, Jing Pei, Lei Deng
By designing and evaluating several variants of the classic model, we systematically investigate the functional roles of key modelling components (leakage, reset, and recurrence) in leaky integrate-and-fire (LIF) based SNNs.
1 code implementation • 19 Dec 2023 • Rui Liu, Yifan Hu, Yi Ren, Xiang Yin, Haizhou Li
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
no code implementations • 5 Sep 2023 • Shyam Sundhar Ramesh, Pier Giuseppe Sessa, Yifan Hu, Andreas Krause, Ilija Bogunovic
Three major challenges in reinforcement learning are the complex dynamical systems with large state spaces, the costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
1 code implementation • ICCV 2023 • Qiaoyi Su, Yuhong Chou, Yifan Hu, Jianing Li, Shijie Mei, Ziyang Zhang, Guoqi Li
Spiking neural networks (SNNs) are brain-inspired energy-efficient models that encode information in spatiotemporal dynamics.
no code implementations • 2 Feb 2023 • Meng Zhao, Yifan Hu, Ruixuan Jiang, Yuanli Zhao, Dong Zhang, Yan Zhang, Rong Wang, Yong Cao, Qian Zhang, Yonggang Ma, Jiaxi Li, Shaochen Yu, Wenjie Li, Ran Zhang, Yefeng Zheng, Shuo Wang, Jizong Zhao
Conclusions: The proposed deep learning algorithms can be an effective tool for early identification of hemorrhage etiologies based on NCCT scans.
no code implementations • 16 Jan 2023 • Thai Le, Ye Yiran, Yifan Hu, Dongwon Lee
CRYPTEXT is a data-intensive application that provides the users with a database and several tools to extract and interact with human-written perturbations.
1 code implementation • 14 Jan 2023 • Teham Bhuiyan, Linh Kästner, Yifan Hu, Benno Kutschank, Jens Lambrecht
Industrial robots are widely used in various manufacturing environments due to their efficiency in performing repetitive tasks such as assembly or welding.
1 code implementation • 11 Dec 2022 • Kailin Liang, Bin Liu, Yifan Hu, Rui Liu, Feilong Bao, Guanglai Gao
Text-to-Speech (TTS) synthesis for low-resource languages has attracted growing research interest in both academia and industry.
1 code implementation • 27 Oct 2022 • Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li
Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, which learns fine- and coarse-grained context dependencies simultaneously during speech generation.
no code implementations • 28 Sep 2022 • Man Yao, Guangshe Zhao, Hengyu Zhang, Yifan Hu, Lei Deng, Yonghong Tian, Bo Xu, Guoqi Li
On ImageNet-1K, we achieve top-1 accuracy of 75.92% and 77.08% on single/4-step Res-SNN-104, which are state-of-the-art results in SNNs.
1 code implementation • 22 Sep 2022 • Yifan Hu, Pengkai Yin, Rui Liu, Feilong Bao, Guanglai Gao
This paper introduces a high-quality open-source text-to-speech (TTS) synthesis dataset for Mongolian, a low-resource language spoken by over 10 million people worldwide.
no code implementations • 7 Aug 2022 • Yifan Hu, Yu Wang
However, due to the inconsistent frequency of human activities, the amount of data for each activity in the human activity dataset is imbalanced.
no code implementations • 13 Jun 2022 • Xiaoqi Wang, Kevin Yen, Yifan Hu, Han-Wei Shen
A few existing methods have attempted to develop flexible solutions for optimizing different aesthetic aspects as measured by different aesthetic criteria.
no code implementations • 28 May 2022 • Siqi Zhang, Yifan Hu, Liang Zhang, Niao He
We further study the algorithm-dependent generalization bounds via stability arguments of algorithms.
1 code implementation • Findings (ACL) 2022 • Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness, i.e., being indistinguishable from human writing and hence harder to flag as suspicious.
no code implementations • 9 Mar 2022 • Keqian Li, Yifan Hu, Logan Palanisamy, Lisa Jones, Akshay Gupta, Jason Grigsby, Ili Selinger, Matt Gillingham, Fei Tan
Accurate understanding of users in terms of predictive segments plays an essential role in the day-to-day operation of modern internet enterprises.
no code implementations • 9 Mar 2022 • Keqian Li, Yifan Hu
We study the problem of user segmentation: given a set of users and one or more predefined groups or segments, assign users to their corresponding segments.
1 code implementation • 2 Mar 2022 • Yihan Lin, Yifan Hu, Shijie Ma, Guoqi Li, Dongjie Yu
In this work, a new SNN training paradigm is proposed by combining the concepts of the two different training methods with the help of a pretraining technique and a BP-based deep SNN training mechanism.
1 code implementation • 15 Dec 2021 • Yifan Hu, Lei Deng, Yujie Wu, Man Yao, Guoqi Li
Despite the rapid progress of neuromorphic computing, inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice.
no code implementations • 9 Dec 2021 • Yifan Hu, Yujie Wu, Lei Deng, Guoqi Li
In this paper, we identify the crux and then propose a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any slight degradation problem.
no code implementations • NeurIPS 2021 • Yifan Hu, Xin Chen, Niao He
We consider stochastic optimization when one only has access to biased stochastic oracles of the objective, and obtaining stochastic gradients with low biases comes at high costs.
no code implementations • EMNLP 2021 • Fei Tan, Yifan Hu, Kevin Yen, Changwei Hu
Text moderation for user generated content, which helps to promote healthy interaction among users, has been widely studied and many machine learning models have been proposed.
no code implementations • 18 Aug 2021 • Shaunak Mishra, Changwei Hu, Manisha Verma, Kevin Yen, Yifan Hu, Maxim Sviridenko
To realize this opportunity, we propose an ad text strength indicator (TSI) which: (i) predicts the click-through-rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and (iii) compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak.
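A minimal sketch of that three-step pipeline (the embedder, CTR predictor, neighborhood size, and decision rule below are placeholder assumptions, not the paper's components):

```python
import numpy as np

def ad_text_strength(ad_text, corpus_texts, corpus_ctrs, embed, predict_ctr, k=5):
    """(i) predict CTR for the input ad, (ii) fetch its k nearest existing ads,
    (iii) compare the prediction against the neighborhood to label strong/weak."""
    query_vec = embed(ad_text)
    corpus_vecs = np.stack([embed(t) for t in corpus_texts])
    dists = np.linalg.norm(corpus_vecs - query_vec, axis=1)
    neighbors = np.argsort(dists)[:k]
    input_ctr = predict_ctr(ad_text)
    neighborhood_ctr = np.median([corpus_ctrs[i] for i in neighbors])
    label = "strong" if input_ctr >= neighborhood_ctr else "weak"
    return label, input_ctr, neighborhood_ctr

# Toy stand-ins so the sketch runs end to end.
embed = lambda t: np.array([len(t), t.count("free"), t.count("!")], dtype=float)
predict_ctr = lambda t: 0.01 + 0.002 * t.count("!")
corpus = ["Free shipping today!", "Big sale on shoes", "New arrivals!!", "Limited offer", "Shop now!"]
ctrs = [0.013, 0.009, 0.015, 0.008, 0.012]
print(ad_text_strength("Free shoes sale today!", corpus, ctrs, embed, predict_ctr, k=3))
```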
no code implementations • 27 Jun 2021 • Xiaoqi Wang, Kevin Yen, Yifan Hu, Han-Wei Shen
In this paper, we propose a Convolutional Graph Neural Network based deep learning framework, DeepGD, which can draw arbitrary graphs once trained.
no code implementations • 15 Apr 2021 • Zhao Wang, Yifan Hu, Jun Xiao, Chao Wu
A novel ring FL topology as well as a map-reduce based synchronizing method are designed in the proposed RDFL to improve decentralized FL performance and bandwidth utilization.
no code implementations • 7 Feb 2021 • Yifan Hu, Changwei Hu, Thanh Tran, Tejaswi Kasturi, Elizabeth Joseph, Matt Gillingham
Gender information is no longer a mandatory input when registering for an account at many leading Internet companies.
no code implementations • 19 Dec 2020 • Xuan Qin, Meizhu Liu, Yifan Hu, Christina Moo, Christian M. Riblet, Changwei Hu, Kevin Yen, Haibin Ling
In this paper, we propose a method that efficiently utilizes appearance features and text vectors to accurately classify political posters from other similar political images.
2 code implementations • 29 Oct 2020 • Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, Guoqi Li
To this end, we propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation, termed "STBP-tdBN", enabling direct training of a very deep SNN and the efficient implementation of its inference on neuromorphic hardware.
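A minimal PyTorch sketch of the threshold-dependent normalization idea: per-channel statistics are computed over the batch, time, and spatial dimensions, and the normalized activations are rescaled toward the firing threshold V_th. The normalization axes and the role of the alpha and V_th constants follow the common description of tdBN; details of the released implementation may differ.

```python
import torch
import torch.nn as nn

class ThresholdDependentBN(nn.Module):
    """Sketch of threshold-dependent batch norm for inputs of shape (T, N, C, H, W)."""
    def __init__(self, channels, v_th=1.0, alpha=1.0, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))
        self.v_th, self.alpha, self.eps = v_th, alpha, eps

    def forward(self, x):
        dims = (0, 1, 3, 4)                       # all dims except the channel dim
        mean = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, unbiased=False, keepdim=True)
        x_hat = self.alpha * self.v_th * (x - mean) / torch.sqrt(var + self.eps)
        return self.gamma.view(1, 1, -1, 1, 1) * x_hat + self.beta.view(1, 1, -1, 1, 1)
```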
no code implementations • 21 Oct 2020 • Yifan Hu, YuHang Zhou, Jun Xiao, Chao Wu
Federated learning (FL) is a rapidly growing field and many centralized and decentralized FL frameworks have been proposed.
no code implementations • EMNLP 2020 • Thanh Tran, Yifan Hu, Changwei Hu, Kevin Yen, Fei Tan, Kyumin Lee, Serim Park
HABERTOR inherits BERT's architecture, but differs in four aspects: (i) it generates its own vocabularies and is pre-trained from scratch using the largest-scale hatespeech dataset; (ii) it consists of Quaternion-based factorized components, resulting in a much smaller number of parameters, faster training and inference, and less memory usage; (iii) it uses our proposed multi-source ensemble heads with a pooling layer for separate input sources, to further enhance its effectiveness; and (iv) it uses regularized adversarial training with our proposed fine-grained and adaptive noise magnitude to enhance its robustness.
no code implementations • EMNLP 2020 • Bang An, Jie Lyu, Zhenyi Wang, Chunyuan Li, Changwei Hu, Fei Tan, Ruiyi Zhang, Yifan Hu, Changyou Chen
The neural attention mechanism plays an important role in many natural language processing applications.
1 code implementation • 15 Jul 2020 • Hong-Yu Zhou, Shuang Yu, Cheng Bian, Yifan Hu, Kai Ma, Yefeng Zheng
In deep learning era, pretrained models play an important role in medical image analysis, in which ImageNet pretraining has been widely adopted as the best way.
no code implementations • 10 Jun 2020 • Jiuwen Zhu, Yuexiang Li, Yifan Hu, S. Kevin Zhou
To this end, self-supervised learning (SSL), as a potential solution for deficient annotated data, attracts increasing attention from the community.
1 code implementation • 2 May 2020 • Bingyu Xin, Yifan Hu, Yefeng Zheng, Hongen Liao
We use the synthesized modalities by TC-MGAN to boost the tumor segmentation accuracy, and the results demonstrate its effectiveness.
no code implementations • NeurIPS 2020 • Yifan Hu, Siqi Zhang, Xin Chen, Niao He
Conditional stochastic optimization covers a variety of applications ranging from invariant learning and causal inference to meta-learning.
no code implementations • 2 Jan 2020 • Changwei Hu, Yifan Hu, Sungyong Seo
The seasonality component is approximated via a non-linear function of a set of Fourier terms, and the event components are learned by a simple linear function of regressors encoding the event dates.
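A small illustrative construction of these two components (the particular non-linearity, harmonics, and fitting procedure below are stand-ins, not the model's exact specification): seasonality is built from a non-linear transform of Fourier terms, and events enter through a linear term on date-indicator regressors.

```python
import numpy as np

rng = np.random.default_rng(0)
T, period, K = 365, 7.0, 3
t = np.arange(T)

# Fourier terms for the seasonal period (K harmonics of sin and cos).
fourier = np.column_stack(
    [f(2 * np.pi * k * t / period) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
# A simple non-linear function of the Fourier terms (random-feature tanh expansion).
seasonal_features = np.tanh(fourier @ rng.normal(size=(2 * K, 8)))

# Event regressors: 0/1 indicators encoding the event dates.
event_dates = [10, 100, 250]
events = np.zeros((T, len(event_dates)))
for j, d in enumerate(event_dates):
    events[d, j] = 1.0

# Toy target series with a weekly season, event spikes, and noise.
y = 2.0 * np.sin(2 * np.pi * t / period) + events @ np.array([5.0, -3.0, 4.0]) + 0.1 * rng.normal(size=T)

# Fit the seasonal (non-linear-feature) and event (linear) components by least squares.
X = np.column_stack([seasonal_features, events, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coef
```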
no code implementations • 2 Jan 2020 • Yao Zhan, Changwei Hu, Yifan Hu, Tejaswi Kasturi, Shanmugam Ramasamy, Matt Gillingham, Keith Yamamoto
In this paper, we propose graph based and deep learning models for age and gender predictions, which take into account user activities and content features.
1 code implementation • 3 Nov 2019 • Lei Deng, Yujie Wu, Yifan Hu, Ling Liang, Guoqi Li, Xing Hu, Yufei Ding, Peng Li, Yuan Xie
As is well known, the huge memory and compute costs of both artificial neural networks (ANNs) and spiking neural networks (SNNs) greatly hinder their efficient deployment on edge devices.
no code implementations • 5 Oct 2019 • Xinrui Zhuang, Yuexiang Li, Yifan Hu, Kai Ma, Yujiu Yang, Yefeng Zheng
With the development of deep learning, an increasing number of studies have tried to build computer-aided diagnosis systems for 3D volumetric medical data.
no code implementations • 5 Jun 2019 • Yifan Hu, Yefeng Zheng
Computer-aided diagnosis (CADx) systems have been shown to assist radiologists by providing classifications of various kinds of medical images, such as computed tomography (CT) and magnetic resonance (MR) images.
no code implementations • 28 May 2019 • Yifan Hu, Xin Chen, Niao He
In this paper, we study a class of stochastic optimization problems, referred to as the \emph{Conditional Stochastic Optimization} (CSO), in the form of $\min_{x \in \mathcal{X}} \mathbb{E}_{\xi}\, f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$, which finds a wide spectrum of applications including portfolio selection, reinforcement learning, robust learning, causal inference and so on.
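A worked toy estimator for such an objective (the specific f, g, and sampling routines below are illustrative, not from the paper): the inner conditional expectation is replaced by an m-sample average, which makes the plug-in estimate biased, with the bias shrinking as the inner sample size m grows.

```python
import numpy as np

def cso_estimate(x, f, g, sample_xi, sample_eta_given_xi, n_outer=2000, m_inner=20, rng=None):
    """Plug-in estimate of E_xi[ f_xi( E_{eta|xi}[ g_eta(x, xi) ] ) ]."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_outer):
        xi = sample_xi(rng)
        # Inner conditional expectation approximated with m_inner samples (source of bias).
        inner = np.mean([g(sample_eta_given_xi(xi, rng), x, xi) for _ in range(m_inner)])
        total += f(inner, xi)
    return total / n_outer

# Toy instance: f is a square loss, g depends on x, xi, and conditional noise eta.
f = lambda v, xi: v ** 2
g = lambda eta, x, xi: x * xi + eta
sample_xi = lambda rng: rng.normal()
sample_eta_given_xi = lambda xi, rng: rng.normal(loc=0.0, scale=abs(xi) + 0.1)

print(cso_estimate(x=0.5, f=f, g=g, sample_xi=sample_xi, sample_eta_given_xi=sample_eta_given_xi))
```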
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. 
Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
1 code implementation • 25 Aug 2017 • Junting Ye, Shuchu Han, Yifan Hu, Baris Coskun, Meizhu Liu, Hong Qin, Steven Skiena
Through our analysis of 57M contact lists from a major Internet company, we are able to design a fine-grained nationality classifier covering 39 groups representing over 90% of the world population.
3 code implementations • 23 Jun 2017 • Haochen Chen, Bryan Perozzi, Yifan Hu, Steven Skiena
We present HARP, a novel method for learning low dimensional embeddings of a graph's nodes which preserves higher-order structural features.
Social and Information Networks
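A compact sketch of the coarsen-embed-prolong idea behind hierarchical embedding methods of this kind (the greedy coarsening rule and the random initial embedding below are simplifications; the actual method refines each level with a base embedder such as DeepWalk or LINE):

```python
import networkx as nx
import numpy as np

def coarsen(G):
    """One coarsening level: greedily merge matched neighbour pairs (edge collapsing)."""
    matched, merge_map = set(), {}
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))
            merge_map[v] = u                      # v is absorbed into u
    rep = lambda n: merge_map.get(n, n)
    H = nx.Graph()
    H.add_nodes_from({rep(n) for n in G.nodes()})
    H.add_edges_from((rep(u), rep(v)) for u, v in G.edges() if rep(u) != rep(v))
    return H, merge_map

def hierarchical_embed(G, dim=16, levels=3, seed=0):
    rng = np.random.default_rng(seed)
    graphs, maps = [G], []
    for _ in range(levels):                       # build G_0 (finest) ... G_L (coarsest)
        H, m = coarsen(graphs[-1])
        graphs.append(H)
        maps.append(m)
    emb = {n: rng.normal(size=dim) for n in graphs[-1].nodes()}   # embed the coarsest graph
    for G_fine, m in zip(reversed(graphs[:-1]), reversed(maps)):
        # Prolongation: each fine node starts from the vector of the node it was merged into.
        emb = {n: emb[m.get(n, n)].copy() for n in G_fine.nodes()}
        # (A real pipeline would now refine emb on G_fine with the base embedding method.)
    return emb

emb = hierarchical_embed(nx.karate_club_graph())
```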