no code implementations • 14 Dec 2024 • Zeyu Zhang, Jianxun Lian, Chen Ma, Yaning Qu, Ye Luo, Lei Wang, Rui Li, Xu Chen, Yankai Lin, Le Wu, Xing Xie, Ji-Rong Wen
In this paper, we propose TrendSim, an LLM-based multi-agent system to simulate trending topics in social media under poisoning attacks.
no code implementations • 9 Nov 2024 • Leo Li, Ye Luo, Tingyou Pan
The Orion-1 model by OpenAI is claimed to have more robust logical reasoning capabilities than previous large language models.
no code implementations • 4 Nov 2024 • Jin Li, Ye Luo, Xiaowei Zhang
This paper examines how spillover effects in A/B testing can impede organizational progress and develops strategies for mitigating these challenges.
no code implementations • 1 Oct 2024 • Hanzhe Li, Jin Li, Ye Luo, Xiaowei Zhang
AI's interpretability influences how doctors attribute these sources and their willingness to change their minds.
no code implementations • 16 Apr 2023 • Jiahao Xie, Ye Luo, Jianwei Lu
In this paper, we propose a random-patch based defense strategy to robustly detect physical attacks for Face Recognition System (FRS).
1 code implementation • 14 Mar 2023 • Hikaru Ibayashi, Taufeq Mohammed Razakh, Liqiu Yang, Thomas Linker, Marco Olguin, Shinnosuke Hattori, Ye Luo, Rajiv K. Kalia, Aiichiro Nakano, Ken-ichi Nomura, Priya Vashishta
Specifically, Allegro-Legato exhibits much weaker dependence of time-to-failure on the problem size, $t_{\textrm{failure}} \propto N^{-0.14}$ (where $N$ is the number of atoms), compared to the SOTA Allegro model $\left(t_{\textrm{failure}} \propto N^{-0.29}\right)$, i.e., a systematically delayed time-to-failure that allows much larger and longer NNQMD simulations without failure.
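The practical impact of the two exponents can be sketched with a small calculation; the helper function and the concrete system sizes below are illustrative, not from the paper:

```python
def relative_ttf(n_small: int, n_large: int, exponent: float) -> float:
    """Relative time-to-failure when scaling a system from n_small to
    n_large atoms, given the power law t_failure ∝ N**exponent."""
    return (n_large / n_small) ** exponent

# Scaling the system up 10x: Allegro-Legato (exponent -0.14) retains
# roughly 72% of the time-to-failure, while Allegro (-0.29) retains
# only about 51%.
legato = relative_ttf(10_000, 100_000, -0.14)
allegro = relative_ttf(10_000, 100_000, -0.29)
```

The weaker exponent compounds: at each further 10x increase in $N$, Allegro-Legato loses a smaller fraction of its expected failure-free simulation time.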
1 code implementation • 11 Oct 2022 • Bo Li, Yongqiang Yao, Jingru Tan, Xin Lu, Fengwei Yu, Ye Luo, Jianwei Lu
Specifically, our framework contains an object detection task (consisting of an instance-classification task and a localization task) and an image-classification task, which are responsible for utilizing the two types of supervision.
1 code implementation • CVPR 2022 • Bo Li, Yongqiang Yao, Jingru Tan, Gang Zhang, Fengwei Yu, Jianwei Lu, Ye Luo
The conventional focal loss balances the training process with the same modulating factor for all categories, thus failing to handle the long-tailed problem.
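For reference, the conventional focal loss for a positive example can be sketched as follows; note that the modulating factor $(1-p)^\gamma$ is identical for every category, which is the limitation on long-tailed data that the entry above describes (this is a generic sketch of standard focal loss, not the paper's proposed loss):

```python
import math

def focal_loss(p: float, gamma: float = 2.0) -> float:
    """Focal loss for a positive example with predicted probability p.

    The modulating factor (1 - p)**gamma down-weights easy examples,
    but it applies the same gamma to all categories regardless of
    their frequency, so rare (tail) classes get no extra emphasis.
    """
    return -((1.0 - p) ** gamma) * math.log(p)

# A confident prediction (p = 0.9) is down-weighted far more
# strongly than an uncertain one (p = 0.5).
easy = focal_loss(0.9)
hard = focal_loss(0.5)
```

A category-aware variant would make `gamma` (or an additional weight) depend on class frequency, which is the direction long-tailed extensions typically take.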
no code implementations • 28 Aug 2021 • Jin Li, Ye Luo, Xiaowei Zhang
This paper identifies and addresses dynamic selection problems in online learning algorithms with endogenous data.
no code implementations • 26 Aug 2021 • Xiaoang Shen, Guokai Zhang, Huilin Lai, Jihao Luo, Jianwei Lu, Ye Luo
The application of deep learning to medical image segmentation has been hampered by the lack of abundant pixel-level annotated data.
no code implementations • 6 Mar 2021 • Jin Li, Ye Luo, Zigan Wang, Xiaowei Zhang
We model this as a Markov decision process and show that the dynamic interaction between data generation and data analysis leads to a new type of bias -- reinforcement bias -- that exacerbates the endogeneity problem in standard data analysis.
no code implementations • 3 Mar 2021 • Shiqing Fan, Ye Luo
We then conducted a motion-blur image generation experiment on a general face dataset, used the resulting pairs of blurred and sharp face images to train and test the proposed GAN, and present visual results.
no code implementations • 3 Mar 2021 • Shiqing Fan, Liu Liying, Ye Luo
Convolutional neural networks (CNNs) have been used in many machine learning fields.
no code implementations • 12 Feb 2021 • Ye Luo, Shiqing Fan
We present a new model of neural networks called Min-Max-Plus Neural Networks (MMP-NNs) based on operations in tropical arithmetic.
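In tropical arithmetic, addition is replaced by min (or max) and multiplication by ordinary addition. A minimal sketch of the resulting "layers" is below; this illustrates the underlying tropical operations only, and is not the paper's exact MMP-NN architecture:

```python
from typing import List

def maxplus_layer(x: List[float], W: List[List[float]]) -> List[float]:
    """Tropical (max-plus) matrix-vector product: the usual sum over
    inputs becomes a max, and each product x_i * w_ij becomes x_i + w_ij."""
    return [max(xi + wij for xi, wij in zip(x, row)) for row in W]

def minplus_layer(x: List[float], W: List[List[float]]) -> List[float]:
    """Dual min-plus layer: same structure with min in place of max."""
    return [min(xi + wij for xi, wij in zip(x, row)) for row in W]

# Composing min-plus and max-plus layers yields piecewise-linear maps,
# which is what makes tropical network models expressive.
h = maxplus_layer([1.0, 2.0], [[0.0, 0.0], [3.0, -1.0]])
y = minplus_layer(h, [[0.0, 0.0]])
```

Each such layer computes a piecewise-linear function of its input, so stacks of min-plus and max-plus layers trade the smooth nonlinearities of conventional networks for combinatorial, tropical structure.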
no code implementations • 11 Feb 2021 • Jiahao Xie, Sheng Zhang, Jianwei Lu, Ye Luo
Coarse-to-fine models and cascade segmentation architectures are widely adopted to handle the large variations in scale encountered in medical image segmentation.
no code implementations • 8 Nov 2020 • Guokai Zhang, Xiaoang Shen, Ye Luo, Jihao Luo, Zeju Wang, Weigang Wang, Binghui Zhao, Jianwei Lu
In this paper, we develop a cross-modal self-attention distillation network that fully exploits the encoded information of the intermediate layers from different modalities; the extracted attention maps of the different modalities enable the model to transfer significant spatial information in greater detail.
1 code implementation • 30 Dec 2019 • Xi Chen, Ye Luo, Martin Spindler
In this paper we develop a data-driven smoothing technique for high-dimensional and non-linear panel data models.
no code implementations • 4 Sep 2018 • Xi Chen, Victor Chernozhukov, Iván Fernández-Val, Scott Kostyshak, Ye Luo
A common problem in econometrics, statistics, and machine learning is to estimate and make inference on functions that satisfy shape restrictions.
no code implementations • 31 Dec 2017 • Jannis Kueck, Ye Luo, Martin Spindler, Zigan Wang
In this paper, we provide results for valid inference after post- or orthogonal $L_2$-Boosting is used for variable selection.
no code implementations • 10 Feb 2017 • Ye Luo, Martin Spindler
In recent years, more and more high-dimensional data sets, in which the number of parameters $p$ is comparable to or even larger than the number of observations $n$, have become available to applied researchers.
no code implementations • HLT 2016 • Linqing Liu, Yao Lu, Ye Luo, Renxian Zhang, Laurent Itti, Jianwei Lu
Spammer detection on social networks is a challenging problem.
no code implementations • 29 Feb 2016 • Ye Luo, Martin Spindler, Jannis Kück
Finally, we present simulation studies and applications to illustrate the relevance of our theoretical results and to provide insights into the practical aspects of boosting.
1 code implementation • 17 Dec 2015 • Victor Chernozhukov, Ivan Fernandez-Val, Ye Luo
They are as convenient and easy to report in practice as the conventional average partial effects.
Methodology • Econometrics
no code implementations • ICCV 2015 • Ye Luo, Loong-Fah Cheong, An Tran
We elicit from a fundamental definition of action low-level attributes that can reveal agency and intentionality.