no code implementations • 27 Oct 2024 • Yunhui Liang, Jianwen Gan, Yan Chen, Peng Zhou, Liang Du
Comparisons of DMRR with three original unsupervised feature selection algorithms and two unsupervised feature selection post-processing algorithms confirm that the importance information of different samples and the dual relationship between samples and features are beneficial for achieving better feature selection.
no code implementations • 27 Oct 2024 • Lei Wang, Liang Du, Peng Zhou
Multiple kernel learning (MKL) aims to find an optimal, consistent kernel function.
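The entry above states the MKL goal only at a high level. As illustration, here is a minimal numpy sketch (not the authors' algorithm) of the standard MKL idea of combining base kernels with nonnegative weights on the simplex; the RBF base kernels and the similarity-based weighting heuristic are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of an RBF kernel."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Consensus kernel K = sum_m beta_m * K_m with beta on the simplex."""
    weights = np.clip(np.asarray(weights, dtype=float), 0, None)
    weights = weights / weights.sum()
    return sum(b * K for b, K in zip(weights, kernels))

# toy usage: three RBF base kernels with different bandwidths
X = np.random.default_rng(0).normal(size=(50, 5))
base = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)]
# toy weighting heuristic: cosine similarity of each kernel with the first one
weights = [np.sum(K * base[0]) / (np.linalg.norm(K) * np.linalg.norm(base[0])) for K in base]
K_consensus = combine_kernels(base, weights)
print(K_consensus.shape)  # (50, 50)
```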
no code implementations • 20 Oct 2024 • Liang Du, Xin Ren, Haiying Zhang, Peng Zhou
It captures the local structure of kernel data and employs kernel regression on the local region to predict the clustering results.
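The snippet above mentions kernel regression on local regions to predict clustering results. The following is only a minimal sketch of that general idea (a kernel ridge regressor fit on each sample's neighborhood to predict soft cluster indicators from an initial k-means), not the method proposed in the paper; the neighborhood size, regularization, and use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import NearestNeighbors

def local_kernel_regression_labels(X, n_clusters=3, n_neighbors=15, gamma=1.0, alpha=1e-2):
    """Refine an initial clustering via kernel ridge regression on each local region."""
    # initial soft indicators from k-means (one-hot)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    Y = np.eye(n_clusters)[km.labels_]

    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nn.kneighbors(X)

    refined = np.empty_like(Y)
    for i, neigh in enumerate(idx):
        # fit a local kernel regressor from features to cluster indicators
        krr = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
        krr.fit(X[neigh], Y[neigh])
        refined[i] = krr.predict(X[i:i + 1])[0]
    return refined.argmax(axis=1)

X = np.random.default_rng(0).normal(size=(120, 4))
print(local_kernel_regression_labels(X)[:10])
```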
no code implementations • 20 Oct 2024 • Lei Wang, Liang Du, Peng Zhou, Peng Wu
We propose a symmetric nonnegative matrix factorization algorithm based on self-paced learning to improve the clustering performance of the model.
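To make the combination of symmetric NMF and self-paced learning concrete, here is a minimal numpy sketch: symmetric NMF via damped multiplicative updates on a similarity matrix, with soft self-paced weights that favor easy samples and gradually admit harder ones. The weighting scheme, update rule, and all constants are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def symmetric_nmf_self_paced(W, k, n_outer=10, n_inner=50, growth=1.5, seed=0):
    """Sketch: symmetric NMF (W ~= H H^T, H >= 0) with soft self-paced sample weights.

    Samples whose current reconstruction error is below the threshold lam get
    weight 1; harder samples are down-weighted, and lam grows each outer round.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    H = rng.random((n, k))
    err = np.linalg.norm(W - H @ H.T, axis=1) ** 2
    lam = np.median(err)

    for _ in range(n_outer):
        v = np.clip(lam / (err + 1e-12), 0.0, 1.0)   # soft self-paced weights in (0, 1]
        Wv = (v[:, None] * v[None, :]) * W           # down-weight hard samples
        for _ in range(n_inner):
            num = Wv @ H
            den = H @ (H.T @ H) + 1e-10
            H *= 0.5 + 0.5 * num / den               # damped multiplicative update
        err = np.linalg.norm(W - H @ H.T, axis=1) ** 2
        lam *= growth                                # admit harder samples next round
    return H

# toy two-block similarity matrix
A = np.kron(np.eye(2), np.ones((10, 10))) + 0.05 * np.random.default_rng(1).random((20, 20))
W = (A + A.T) / 2
print(symmetric_nmf_self_paced(W, k=2).argmax(axis=1))
```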
no code implementations • 20 Oct 2024 • Xiaolin Lv, Liang Du, Peng Zhou, Peng Wu
Feature selection is a key technique for data dimensionality reduction.
no code implementations • 15 Oct 2024 • Tengfei Ma, Xuan Lin, Tianle Li, Chaoyi Li, Long Chen, Peng Zhou, Xibao Cai, Xinyu Yang, Daojian Zeng, Dongsheng Cao, Xiangxiang Zeng
In addition, Y-Mol offers a set of LLM paradigms that can autonomously execute downstream tasks across the entire drug development process, including virtual screening, drug design, pharmacological property prediction, and drug-related interaction prediction.
no code implementations • 10 Oct 2024 • Zhinuo Zhou, Peng Zhou, Xiaoyong Pan
In this paper, we propose a Few-shot Language Image model Embedded with latent Representations (FLIER) for image recognition. By introducing a latent encoder jointly trained with CLIP's image encoder, FLIER incorporates the pre-trained vision-language knowledge of CLIP and the latent representations from Stable Diffusion.
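As a rough illustration of that description, here is a minimal PyTorch sketch of a small latent encoder trained alongside a frozen image encoder, with the two embeddings fused before classification against class text embeddings. The module sizes, concatenation-based fusion, and the stand-in encoders are assumptions, not the FLIER architecture.

```python
import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    """Small encoder for Stable-Diffusion-style latents (here: a 4x64x64 tensor)."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, z):
        return self.net(z)

class FewShotHead(nn.Module):
    """Fuse frozen image features with latent features, classify against text embeddings."""
    def __init__(self, image_encoder, dim=512, n_classes=5):
        super().__init__()
        self.image_encoder = image_encoder.eval()            # frozen CLIP-style encoder
        for p in self.image_encoder.parameters():
            p.requires_grad_(False)
        self.latent_encoder = LatentEncoder(dim)              # trained jointly
        self.fuse = nn.Linear(2 * dim, dim)
        # stand-in class text embeddings (in practice produced by a text encoder)
        self.text_embeddings = nn.Parameter(torch.randn(n_classes, dim), requires_grad=False)

    def forward(self, image, latent):
        with torch.no_grad():
            img_feat = self.image_encoder(image)
        fused = self.fuse(torch.cat([img_feat, self.latent_encoder(latent)], dim=-1))
        fused = nn.functional.normalize(fused, dim=-1)
        text = nn.functional.normalize(self.text_embeddings, dim=-1)
        return fused @ text.t()                                # cosine-similarity logits

# stand-in for a frozen image encoder: any module returning a 512-d feature
dummy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))
model = FewShotHead(dummy_encoder)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 4, 64, 64))
print(logits.shape)  # torch.Size([2, 5])
```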
no code implementations • 18 Sep 2024 • Zhaxizhuoma, Pengan Chen, Ziniu Wu, Jiawei Sun, Dong Wang, Peng Zhou, Nieqing Cao, Yan Ding, Bin Zhao, Xuelong Li
To validate the effectiveness of AlignBot, experiments are conducted in real-world household environments, which are constructed within the laboratory to replicate typical household settings.
1 code implementation • 11 Sep 2024 • Yu Zhang, Songlin Yang, Ruijie Zhu, Yue Zhang, Leyang Cui, Yiqiao Wang, Bolun Wang, Freda Shi, Bailin Wang, Wei Bi, Peng Zhou, Guohong Fu
Linear attention Transformers and their gated variants, celebrated for enabling parallel training and efficient recurrent inference, still fall short in recall-intensive tasks compared to traditional Transformers and demand significant resources for training from scratch.
no code implementations • 3 Sep 2024 • Jianhai Chen, Yanlin Wu, Dazhong Rong, Guoyao Yu, Lingqi Jiang, Zhenguang Liu, Peng Zhou, Rui Shen
The experimental results show that our proposed incentive mechanism can attract clients with superior training data to engage in federated recommendation at a lower cost, increasing the economic benefit of federated recommendation by 54.9% while improving recommendation performance.
1 code implementation • 20 Aug 2024 • Peng Zhou, Yongdong Liu, Lixun Ma, Weiye Zhang, Haohan Tan, Zhenguang Liu, Butian Huang
The escalating prevalence of encryption protocols has led to a concomitant surge in the number of malicious attacks that hide in encrypted traffic.
1 code implementation • 4 Jun 2024 • Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, Jason K. Eshraghian
Our experiments show that our proposed MatMul-free models achieve performance on par with state-of-the-art Transformers that require far more memory during inference, at scales up to at least 2.7B parameters.
no code implementations • CVPR 2024 • Sizhe Zheng, Pan Gao, Peng Zhou, Jie Qin
In order to achieve better stylization, we design a content feature extractor and a style feature extractor, based on which pure content and style images can be fed to the transformer.
no code implementations • 12 May 2024 • Jing Xu, Wentao Shi, Sheng Ren, Pan Gao, Peng Zhou, Jie Qin
In the field of transportation, it is of paramount importance to address and mitigate illegal actions committed by both motor and non-motor vehicles.
6 code implementations • 8 Apr 2024 • Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Xingjian Du, Teddy Ferdinan, Haowen Hou, Przemysław Kazienko, Kranthi Kiran GV, Jan Kocoń, Bartłomiej Koptyra, Satyapriya Krishna, Ronald McClelland Jr., Jiaju Lin, Niklas Muennighoff, Fares Obeid, Atsushi Saito, Guangyu Song, Haoqin Tu, Cahya Wirawan, Stanisław Woźniak, Ruichong Zhang, Bingchen Zhao, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu
We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV (RWKV-4) architecture.
1 code implementation • CVPR 2024 • Yang Luo, Zhineng Chen, Peng Zhou, Zuxuan Wu, Xieping Gao, Yu-Gang Jiang
The results demonstrate that LTRP outperforms both supervised and other self-supervised methods due to the fair assessment of image content.
1 code implementation • 20 Mar 2024 • Peng Zhou, Jianmin Wang, Chunyan Li, Zixu Wang, Yiping Liu, Siqi Sun, Jianxin Lin, Leyi Wei, Xibao Cai, Houtim Lai, Wei Liu, Longyue Wang, Yuansheng Liu, Xiangxiang Zeng
While various models and computational tools have been proposed for structure and property analysis of molecules, generating molecules that conform to all desired structures and properties remains a challenge.
no code implementations • 29 Jan 2024 • Ketul Shah, Robert Crandall, Jie Xu, Peng Zhou, Marian George, Mayank Bansal, Rama Chellappa
We report state-of-the-art results on the NTU-60, NTU-120 and ETRI datasets, as well as in the transfer learning setting on NUCLA, PKU-MMD-II and ROCOG-v2 datasets, demonstrating the robustness of our approach.
1 code implementation • 24 Nov 2023 • Jens E. Pedersen, Steven Abreu, Matthias Jobst, Gregor Lenz, Vittorio Fra, Felix C. Bauer, Dylan R. Muir, Peng Zhou, Bernhard Vogginger, Kade Heckel, Gianvito Urgese, Sadasivan Shankar, Terrence C. Stewart, Sadique Sheik, Jason K. Eshraghian
By abstracting away assumptions around discretization and hardware constraints, NIR faithfully captures the computational model, while bridging differences between the evaluated implementation and the underlying mathematical formalism.
no code implementations • 30 Oct 2023 • Jialin Liu, Xinyan Su, Peng Zhou, Xiangyu Zhao, Jun Li
Mitigation of the survivor bias is achieved through counterfactual consistency.
no code implementations • 21 Aug 2023 • Peng Zhou, Alexander J. Edwards, Frederick B. Mancoff, Sanjeev Aggarwal, Stephen K. Heinrich-Barna, Joseph S. Friedman
Neuromorphic computing aims to mimic both the function and structure of biological neural networks to provide artificial intelligence with extreme efficiency.
no code implementations • 21 Aug 2023 • Yuhan Li, Yishun Dou, Yue Shi, Yu Lei, Xuanhong Chen, Yi Zhang, Peng Zhou, Bingbing Ni
While text-3D editing has made significant strides in leveraging score distillation sampling, emerging approaches still fall short in delivering separable, precise and consistent outcomes that are vital to content creation.
no code implementations • 17 Jun 2023 • Zhaolong Ling, Enqi Xu, Peng Zhou, Liang Du, Kui Yu, Xindong Wu
Fair feature selection for classification decision tasks has recently garnered significant attention from researchers.
7 code implementations • 22 May 2023 • Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Stella Biderman, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Jiaju Lin, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Guangyu Song, Xiangru Tang, Bolun Wang, Johan S. Wind, Stanislaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Qinghua Zhou, Jian Zhu, Rui-Jie Zhu
This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
Ranked #22 on Natural Language Inference on WNLI
1 code implementation • 29 Dec 2022 • Bingchen Huang, Zhineng Chen, Peng Zhou, Jiayin Chen, Zuxuan Wu
The dynamic expansion architecture is becoming popular in class incremental learning, mainly due to its advantages in alleviating catastrophic forgetting.
no code implementations • 26 Jun 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Wei D. Lu, Sung-Mo Kang
We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs).
no code implementations • 2 Mar 2022 • Peng Zhou, Dong-Uk Choi, Jason K. Eshraghian, Sung-Mo Kang
We present a fully memristive spiking neural network (MSNN) consisting of physically realizable memristive neurons and memristive synapses to implement an unsupervised spike-timing-dependent plasticity (STDP) learning rule.
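As background for the unsupervised STDP rule mentioned above, here is a minimal numpy sketch of a generic pair-based STDP weight update (potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise). The time constants, amplitudes, and clipping are illustrative assumptions and not the memristive implementation from the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: strengthen if pre fires before post, weaken otherwise.

    t_pre and t_post are spike times in ms; w is the current synaptic weight.
    """
    dt = t_post - t_pre
    if dt >= 0:                      # causal pairing -> potentiation
        w += a_plus * np.exp(-dt / tau_plus)
    else:                            # anti-causal pairing -> depression
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # pre before post: w increases
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # post before pre: w decreases
print(round(w, 4))
```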
no code implementations • 2 Mar 2022 • Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Sung-Mo Kang
The natural spiking dynamics of the MIF neuron model are fully differentiable, eliminating the need for gradient approximations that are prevalent in the spiking neural network literature.
no code implementations • 10 Dec 2021 • Peng Zhou, Julie A. Smith, Laura Deremo, Stephen K. Heinrich-Barna, Joseph S. Friedman
The use of analog resistance states for storing weights in neuromorphic systems is impeded by fabrication imprecision and device stochasticity that limit the precision of synapse weights.
no code implementations • 9 Dec 2021 • Peng Zhou, Alexander J. Edwards, Fred B. Mancoff, Dimitri Houssameddine, Sanjeev Aggarwal, Joseph S. Friedman
We present the first experimental demonstration of a neuromorphic network with magnetic tunnel junction (MTJ) synapses, which performs image recognition via vector-matrix multiplication.
no code implementations • 19 Nov 2021 • Peng Zhou, Fangyi Li
A fund's net value is affected by both its performance and the market, and researchers try to quantify these effects by building different models to predict the future net value.
no code implementations • 19 Nov 2021 • Peng Zhou, Jingling Tang
With the application of artificial intelligence to finance, quantitative trading is widely considered to be profitable.
no code implementations • 19 Nov 2021 • Yizhuo Li, Peng Zhou, Fangyi Li, Xiao Yang
The authors combined the deep Q-network from reinforcement learning with the quantitative sentiment indicator ARBR to build a high-frequency stock trading model for the stock market.
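The AR/BR sentiment indicators referenced above have standard textbook definitions; the sketch below computes them from OHLC data over a rolling window and appends them to the state vector of a small Q-network. The window length, network sizes, and the exact way the indicator enters the state are assumptions for illustration, not the authors' trading model.

```python
import numpy as np
import torch
import torch.nn as nn

def ar_br(open_, high, low, close, window=26):
    """Textbook AR/BR sentiment indicators over the last `window` bars."""
    o, h, l = (np.asarray(x, float)[-window:] for x in (open_, high, low))
    prev_close = np.asarray(close, float)[-window - 1:-1]
    ar = 100.0 * np.sum(h - o) / max(np.sum(o - l), 1e-8)
    br = 100.0 * np.sum(np.maximum(h - prev_close, 0)) / max(np.sum(np.maximum(prev_close - l, 0)), 1e-8)
    return ar, br

class QNet(nn.Module):
    """Tiny Q-network over [recent prices..., AR, BR] with 3 actions: hold/buy/sell."""
    def __init__(self, n_features, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, state):
        return self.net(state)

# toy usage with random OHLC bars
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 60))
open_ = close + rng.normal(0, 0.2, 60)
high = np.maximum(open_, close) + rng.random(60)
low = np.minimum(open_, close) - rng.random(60)
ar, br = ar_br(open_, high, low, close)
state = torch.tensor(list(close[-5:]) + [ar, br], dtype=torch.float32)
q_values = QNet(n_features=state.numel())(state)
print(q_values.detach().numpy())
```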
1 code implementation • 19 Oct 2021 • Peng Zhou, Lingxi Xie, Bingbing Ni, Qi Tian
The style-based GAN (StyleGAN) architecture achieved state-of-the-art results for generating high-quality images, but it lacks explicit and precise control over camera poses.
Ranked #1 on 3D-Aware Image Synthesis on FFHQ 256 x 256
no code implementations • 16 Mar 2021 • Alexander J. Edwards, Dhritiman Bhattacharya, Peng Zhou, Nathan R. McDonald, Walid Al Misba, Lisa Loomis, Felipe Garcia-Sanchez, Naimul Hassan, Xuan Hu, Md. Fahim Chowdhury, Clare D. Thiem, Jayasimha Atulasimha, Joseph S. Friedman
We therefore propose a reservoir that meets all of these criteria by leveraging the passive interactions of dipole-coupled, frustrated nanomagnets.
no code implementations • 26 Jan 2021 • Peng Zhou, Ning Yu, Zuxuan Wu, Larry S. Davis, Abhinav Shrivastava, Ser-Nam Lim
This paper studies video inpainting detection, which localizes an inpainted region in a video both spatially and temporally.
3 code implementations • ICCV 2021 • Peng Zhou, Lingxi Xie, Bingbing Ni, Cong Geng, Qi Tian
The conditional generative adversarial network (cGAN) is a powerful tool for generating high-quality images, but existing approaches mostly suffer from unsatisfactory performance or the risk of mode collapse.
Ranked #8 on Conditional Image Generation on ImageNet 128x128
1 code implementation • 25 Jun 2020 • Peng Zhou, Lingxi Xie, Xiaopeng Zhang, Bingbing Ni, Qi Tian
To learn the sampling policy, a Markov decision process is embedded into the search algorithm and a moving average is applied for better stability.
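To illustrate the moving-average stabilization mentioned above, here is a minimal numpy sketch: an exponential moving average of recent rewards serves as a baseline when updating a sampling policy's preference scores. The decay rate, softmax policy, and REINFORCE-style update rule are assumptions for illustration, not the paper's search algorithm.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def update_sampling_policy(prefs, baseline, action, reward, lr=0.1, decay=0.9):
    """REINFORCE-style update with an exponential-moving-average reward baseline."""
    baseline = decay * baseline + (1 - decay) * reward       # moving average for stability
    probs = softmax(prefs)
    grad = -probs
    grad[action] += 1.0                                      # d log pi(a) / d prefs
    prefs = prefs + lr * (reward - baseline) * grad
    return prefs, baseline

prefs, baseline = np.zeros(4), 0.0
rng = np.random.default_rng(0)
for _ in range(200):
    a = rng.choice(4, p=softmax(prefs))
    r = 1.0 if a == 2 else 0.0                                # toy reward: action 2 is best
    prefs, baseline = update_sampling_policy(prefs, baseline, a, r)
print(softmax(prefs).round(2))                                # mass shifts toward action 2
```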
no code implementations • CVPR 2020 • Peng Zhou, Brian Price, Scott Cohen, Gregg Wilensky, Larry S. Davis
In this paper, we target refining the boundaries in high resolution images given low resolution masks.
no code implementations • 12 May 2020 • Hui Ding, Peng Zhou, Rama Chellappa
Recognizing the expressions of partially occluded faces is a challenging computer vision problem.
Facial Expression Recognition (FER)
1 code implementation • ECCV 2020 • Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz
Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images.
3 code implementations • ACL 2020 • Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, Qi Ju
Pre-trained language models like BERT have proven to be highly performant.
no code implementations • 25 Mar 2020 • Peng Zhou, Brian Price, Scott Cohen, Gregg Wilensky, Larry S. Davis
In this paper, we target refining the boundaries in high resolution images given low resolution masks.
no code implementations • 24 Mar 2020 • Peng Zhou, Nathan R. McDonald, Alexander J. Edwards, Lisa Loomis, Clare D. Thiem, Joseph S. Friedman
Reservoir computing is an emerging methodology for neuromorphic computing that is especially well-suited for hardware implementations in size, weight, and power (SWaP) constrained environments.
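To make the reservoir computing methodology above concrete, here is a minimal echo state network sketch in numpy: a fixed random recurrent reservoir plus a trained linear readout. This is a generic software reservoir, not the nanomagnet hardware described in the paper, and the sizes, spectral radius, and toy task are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_steps = 1, 200, 1000

# fixed random reservoir, rescaled to spectral radius 0.9 (echo state property)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# toy task: predict the next value of a noisy sine wave
u = np.sin(np.linspace(0, 40 * np.pi, n_steps))[:, None] + 0.05 * rng.normal(size=(n_steps, 1))
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W @ x)   # reservoir dynamics; only the readout is trained
    states[t] = x

# ridge-regression readout mapping reservoir state -> next input value
X, y = states[:-1], u[1:, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", round(float(np.mean((pred - y) ** 2)), 5))
```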
no code implementations • ICLR 2020 • Peng Zhou, Bingbing Ni, Lingxi Xie, Xiaopeng Zhang, Hang Wang, Cong Geng, Qi Tian
In the field of Generative Adversarial Networks (GANs), how to design a stable training strategy remains an open problem.
no code implementations • 26 Nov 2019 • Kekai Sheng, Wei-Ming Dong, Menglei Chai, Guohui Wang, Peng Zhou, Feiyue Huang, Bao-Gang Hu, Rongrong Ji, Chongyang Ma
In this paper, we revisit the problem of image aesthetic assessment from the self-supervised feature learning perspective.
2 code implementations • arXiv 2019 • Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang
For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge.
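As a toy illustration of the knowledge-injection step described above, the sketch below expands entity mentions in a sentence with triples from a small knowledge graph to form an enriched input. The dictionary format and string-level injection are simplified assumptions; K-BERT's actual sentence-tree construction, soft-position indices, and visibility matrix are not reproduced here.

```python
# toy knowledge graph: entity -> list of (relation, object) triples
KG = {
    "Tim Cook": [("CEO_of", "Apple")],
    "Beijing": [("capital_of", "China"), ("is_a", "City")],
}

def inject_triples(sentence, kg):
    """Append matching triples right after each entity mention (string-level toy version)."""
    enriched = sentence
    for entity, triples in kg.items():
        if entity in enriched:
            knowledge = " ".join(f"[{rel} {obj}]" for rel, obj in triples)
            enriched = enriched.replace(entity, f"{entity} {knowledge}", 1)
    return enriched

print(inject_triples("Tim Cook is visiting Beijing now", KG))
# Tim Cook [CEO_of Apple] is visiting Beijing [capital_of China] [is_a City] now
```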
1 code implementation • 12 Jun 2019 • Xinli Cai, Peng Zhou, Shuhan Ding, Guoyang Chen, Weifeng Zhang
Finally, through this easy-to-use specification language, we are able to build a full testing specification which leverages LLVM TableGen to automatically generate unit tests for ONNX operators with much larger coverage.
no code implementations • 3 Apr 2019 • Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry S. Davis
Instead of sequentially distilling knowledge only from the last model, we directly leverage all previous model snapshots.
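Here is a minimal PyTorch sketch of the idea stated above: distilling from all previous model snapshots at once by averaging a KL-divergence term over their softened outputs, rather than using only the last model. The temperature, the equal weighting of snapshots, and the stand-in logits are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def multi_snapshot_distill_loss(student_logits, snapshot_logits_list, targets,
                                temperature=2.0, alpha=0.5):
    """Cross-entropy on new data plus a KL term averaged over every previous snapshot."""
    ce = F.cross_entropy(student_logits, targets)
    t = temperature
    kd = 0.0
    for teacher_logits in snapshot_logits_list:
        kd = kd + F.kl_div(F.log_softmax(student_logits / t, dim=1),
                           F.softmax(teacher_logits.detach() / t, dim=1),
                           reduction="batchmean") * (t * t)
    kd = kd / max(len(snapshot_logits_list), 1)
    return alpha * ce + (1 - alpha) * kd

# toy usage: student logits and two frozen snapshots, 10 classes
student = torch.randn(8, 10, requires_grad=True)
snapshots = [torch.randn(8, 10), torch.randn(8, 10)]
targets = torch.randint(0, 10, (8,))
loss = multi_snapshot_distill_loss(student, snapshots, targets)
loss.backward()
print(float(loss))
```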
1 code implementation • 24 Nov 2018 • Peng Zhou, Bor-Chun Chen, Xintong Han, Mahyar Najibi, Abhinav Shrivastava, Ser Nam Lim, Larry S. Davis
The advent of image sharing platforms and the easy availability of advanced photo editing software have resulted in large quantities of manipulated images being shared on the internet.
no code implementations • CVPR 2018 • Jinxian Liu, Bingbing Ni, Yichao Yan, Peng Zhou, Shuo Cheng, Jianguo Hu
On the other hand, in addition to the conventional discriminator of GAN (i.e., distinguishing between REAL/FAKE samples), we propose a novel guider sub-network which encourages the generated sample (i.e., with a novel pose) to better satisfy the ReID loss (i.e., the cross-entropy ReID loss and the triplet ReID loss).
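To unpack the ReID losses named above, here is a minimal PyTorch sketch combining a cross-entropy identity loss with a batch-hard triplet loss on embeddings. The margin, batch layout, and weighting are illustrative assumptions; this is not the guider sub-network itself.

```python
import torch
import torch.nn.functional as F

def reid_losses(embeddings, logits, labels, margin=0.3, w_triplet=1.0):
    """Cross-entropy ID loss + batch-hard triplet loss over an embedding batch."""
    ce = F.cross_entropy(logits, labels)

    dist = torch.cdist(embeddings, embeddings)               # pairwise L2 distances
    same = labels[:, None].eq(labels[None, :])
    eye = torch.eye(len(labels), dtype=torch.bool)
    # hardest positive: farthest sample with the same identity (excluding self)
    pos = dist.masked_fill(~same | eye, float("-inf")).max(dim=1).values
    # hardest negative: closest sample with a different identity
    neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    triplet = F.relu(pos - neg + margin).mean()

    return ce + w_triplet * triplet

# toy batch: 8 samples, 4 identities, 16-d embeddings, 4-way ID logits
emb = F.normalize(torch.randn(8, 16), dim=1)
logits = torch.randn(8, 4)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(float(reid_losses(emb, logits, labels)))
```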
no code implementations • CVPR 2018 • Peng Zhou, Bingbing Ni, Cong Geng, Jianguo Hu, Yi Xu
The scale problem lies at the heart of object detection.
2 code implementations • CVPR 2018 • Peng Zhou, Xintong Han, Vlad I. Morariu, Larry S. Davis
Image manipulation detection is different from traditional semantic object detection because it pays more attention to tampering artifacts than to image content, which suggests that richer features need to be learned.
no code implementations • 29 Mar 2018 • Peng Zhou, Xintong Han, Vlad I. Morariu, Larry S. Davis
We propose a two-stream network for face tampering detection.
2 code implementations • ACL 2017 • Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, Bo Xu
Joint extraction of entities and relations is an important task in information extraction.
Ranked #3 on Relation Extraction on NYT-single
3 code implementations • COLING 2016 • Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, Bo Xu
To integrate the features on both dimensions of the matrix, this paper explores applying 2D max pooling operation to obtain a fixed-length representation of the text.
Ranked #6 on Text Classification on TREC-6
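For the COLING 2016 entry above, here is a minimal PyTorch sketch of the core operation it describes: treating the BLSTM output as a 2D matrix (time × feature) and applying 2D max pooling to obtain a fixed-length text representation. The dimensions, pooling window, fixed sequence length, and classifier head are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BLSTM2DPooling(nn.Module):
    """BLSTM encoder followed by 2D max pooling over the (time, feature) matrix."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden=100, n_classes=6, pool=(2, 2)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.blstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.pool = nn.MaxPool2d(pool)
        # the pooled matrix is flattened into a fixed-length vector for classification;
        # the size below assumes sequences padded/truncated to 40 tokens
        self.seq_len = 40
        pooled = (self.seq_len // pool[0]) * (2 * hidden // pool[1])
        self.fc = nn.Linear(pooled, n_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.blstm(self.emb(token_ids))    # (batch, seq_len, 2*hidden)
        m = self.pool(h.unsqueeze(1))             # pool over both time and feature dims
        return self.fc(m.flatten(start_dim=1))

model = BLSTM2DPooling()
logits = model(torch.randint(0, 5000, (4, 40)))
print(logits.shape)  # torch.Size([4, 6])
```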