1 code implementation • 21 Sep 2023 • Jiaxin Zhang, Shiyuan Chen, Haoran Yin, Ruohong Mei, Xuan Liu, Cong Yang, Qian Zhang, Wei Sui
The recent development of online static map element (a.k.a.
1 code implementation • 12 Sep 2023 • Jingcan Duan, Pei Zhang, Siwei Wang, Jingtao Hu, Hu Jin, Jiaxin Zhang, Haifang Zhou
Finally, the model is refined using only reliable normal nodes as input and learns a more accurate estimate of normality, so that anomalous nodes can be more easily distinguished.
1 code implementation • 31 Aug 2023 • Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, Bei Chen, Jichuan Ji, Yafen Yao, Yongji Wang, Qianxiang Wang
Results demonstrate that programming languages can significantly improve each other.
2 code implementations • ICCV 2023 • Mingxin Huang, Jiaxin Zhang, Dezhi Peng, Hao Lu, Can Huang, Yuliang Liu, Xiang Bai, Lianwen Jin
To this end, we introduce a new model named Explicit Synergy-based Text Spotting Transformer framework (ESTextSpotter), which achieves explicit synergy by modeling discriminative and interactive features for text detection and recognition within a single decoder.
no code implementations • 27 Jul 2023 • Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, Yuenan Guo, Qianxiang Wang
In this paper, we propose a novel RRTF (Rank Responses to align Test&Teacher Feedback) framework, which can effectively and efficiently boost pre-trained large language models for code generation.
Ranked #9 on Code Generation on HumanEval
no code implementations • 20 Jun 2023 • Ruohong Mei, Wei Sui, Jiaxin Zhang, Qian Zhang, Tao Peng, Cong Yang
To improve the efficiency of RoMe in large-scale environments, a novel waypoint sampling method is introduced.
no code implementations • 9 Jun 2023 • Jiaxin Zhang, Bangdong Chen, Hiuyi Cheng, Fengjun Guo, Kai Ding, Lianwen Jin
Furthermore, considering the importance of fine-grained elements in document images, we present a detail-recurrent refinement module to enhance the output in a high-resolution space.
no code implementations • 15 May 2023 • Hiuyi Cheng, Peirong Zhang, Sihang Wu, Jiaxin Zhang, Qiyuan Zhu, Zecheng Xie, Jing Li, Kai Ding, Lianwen Jin
Document layout analysis is a crucial prerequisite for document understanding, including document retrieval and conversion.
no code implementations • 11 Apr 2023 • Yue Cui, Syed Irfan Ali Meerza, Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
In this paper, we seek to reconcile utility and privacy in FL by proposing a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes while obtaining significant improvements in utility over traditional defenses.
no code implementations • 21 Feb 2023 • Zhuohang Li, Jiaxin Zhang, Jian Liu
Distributed machine learning paradigms, such as federated learning, have been recently adopted in many privacy-critical applications for speech analysis.
3 code implementations • 4 Jan 2023 • Yuliang Liu, Jiaxin Zhang, Dezhi Peng, Mingxin Huang, Xinyu Wang, Jingqun Tang, Can Huang, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin
Within the context of our SPTS v2 framework, our experiments suggest a potential preference for single-point representation in scene text spotting when compared to other representations.
Ranked #14 on Text Spotting on ICDAR 2015
1 code implementation • CVPR 2023 • Hiuyi Cheng, Peirong Zhang, Sihang Wu, Jiaxin Zhang, Qiyuan Zhu, Zecheng Xie, Jing Li, Kai Ding, Lianwen Jin
Document layout analysis is a crucial prerequisite for document understanding, including document retrieval and conversion.
1 code implementation • 8 Dec 2022 • Jiaxin Zhang, Wei Sui, Qian Zhang, Tao Chen, Cong Yang
In this paper, we introduce a novel approach for ground plane normal estimation of wheeled vehicles.
1 code implementation • 2 Dec 2022 • Jiaxin Zhang, Sirui Bi, Victor Fung
In the scope of "AI for Science", solving inverse problems is a longstanding challenge in materials and drug discovery, where the goal is to determine the hidden structures given a set of desirable properties.
1 code implementation • 18 Oct 2022 • Jiaxin Zhang, Yashar Moshfeghi
Numerical reasoning over text is a challenging task of Artificial Intelligence (AI), requiring reading comprehension and numerical reasoning abilities.
Ranked #1 on Math Word Problem Solving on MathQA
1 code implementation • 27 Jul 2022 • Victor Fung, Shuyi Jia, Jiaxin Zhang, Sirui Bi, Junqi Yin, P. Ganesh
These methods would help identify or, in the case of generative models, even create novel crystal structures of materials with a set of specified functional properties to then be synthesized or isolated in the laboratory.
1 code implementation • 23 Jul 2022 • Jiaxin Zhang, Canjie Luo, Lianwen Jin, Fengjun Guo, Kai Ding
To address this issue, we propose a novel approach called Marior (Margin Removal and Iterative Content Rectification).
1 code implementation • CVPR 2022 • Zhuohang Li, Jiaxin Zhang, Luyang Liu, Jian Liu
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems by allowing multiple clients to participate in a learning task under the coordination of a central server without exchanging their private data.
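To make the FL coordination pattern concrete, here is a minimal FedAvg-style sketch — a generic illustration only, not the attack or defense studied in the paper; the least-squares toy model and the names `local_update` and `fedavg_round` are illustrative assumptions:

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient steps on a least-squares toy objective."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_update(global_w, data))
        sizes.append(len(data[1]))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # close to true_w; raw data never leaves a client
```

Each round, clients take gradient steps on their own data and only the resulting model weights reach the server, which is exactly the surface that gradient-inversion attacks and defenses in this line of work target.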
no code implementations • 16 Dec 2021 • Wei Sui, Teng Chen, Jiaxin Zhang, Jiao Lu, Qian Zhang
The Depth-CNN and Pose-CNN estimate the dense depth map and ego-motion, respectively, solving SfM, while the Pose-CNN and Ground-CNN, followed by a homography layer, solve the ground plane estimation problem.
1 code implementation • 15 Dec 2021 • Dezhi Peng, Xinyu Wang, Yuliang Liu, Jiaxin Zhang, Mingxin Huang, Songxuan Lai, Shenggao Zhu, Jing Li, Dahua Lin, Chunhua Shen, Xiang Bai, Lianwen Jin
For the first time, we demonstrate that training scene text spotting models can be achieved with an extremely low-cost annotation of a single-point for each instance.
Ranked #2 on Text Spotting on SCUT-CTW1500
1 code implementation • 6 Dec 2021 • Alexander Lavin, David Krakauer, Hector Zenil, Justin Gottschlich, Tim Mattson, Johann Brehmer, Anima Anandkumar, Sanjay Choudry, Kamil Rocki, Atılım Güneş Baydin, Carina Prunkl, Brooks Paige, Olexandr Isayev, Erik Peterson, Peter L. McMahon, Jakob Macke, Kyle Cranmer, Jiaxin Zhang, Haruko Wainwright, Adi Hanuka, Manuela Veloso, Samuel Assefa, Stephan Zheng, Avi Pfeffer
We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence.
no code implementations • NeurIPS 2021 • Ján Drgoňa, Sayak Mukherjee, Jiaxin Zhang, Frank Liu, Mahantesh Halappanavar
Deep Markov models (DMMs) are scalable and expressive generative generalizations of Markov models for representation, learning, and inference problems.
no code implementations • 29 Sep 2021 • Yu Wang, Jan Drgona, Jiaxin Zhang, Karthik Somayaji NS, Frank Y Liu, Malachi Schram, Peng Li
Although various flow models based on different transformations have been proposed, there is still no quantitative analysis of the performance-cost trade-offs between different flows, nor a systematic way of constructing the best flow architecture.
no code implementations • 3 Jul 2021 • Zhuohang Li, Luyang Liu, Jiaxin Zhang, Jian Liu
Federated Learning (FL) enables multiple distributed clients (e.g., mobile devices) to collaboratively train a centralized model while keeping the training data local to each client.
1 code implementation • 6 Jun 2021 • Victor Fung, Jiaxin Zhang, Guoxiang Hu, P. Ganesh, Bobby G. Sumpter
The ability to readily design novel materials with chosen functional properties on-demand represents a next frontier in materials discovery.
no code implementations • 18 Mar 2021 • Jiaxin Zhang, Wei Sui, Xinggang Wang, Wenming Meng, Hongmei Zhu, Qian Zhang
Second, the poses predicted by CNNs are further improved by minimizing photometric errors via gradient updates of poses during inference phases.
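Stated generically, the inference-time refinement idea is to treat the CNN prediction as an initialization and run gradient descent on a differentiable error. A toy 1-D sketch (our own illustration, with a translation standing in for the pose and a central-difference gradient standing in for backpropagation):

```python
import numpy as np

def photometric_error(pose, ref, target):
    # Toy "warp": here a pose is a 1-D translation applied to a sampled signal.
    xs = np.arange(len(ref))
    warped = np.interp(xs + pose, xs, ref)
    return np.mean((warped - target) ** 2)

def refine(pose0, ref, target, lr=50.0, steps=200, eps=1e-3):
    """Refine an initial (e.g. CNN-predicted) pose by gradient descent
    on the photometric error, mimicking inference-time refinement."""
    pose = pose0
    for _ in range(steps):
        # central-difference gradient of the error w.r.t. the pose
        g = (photometric_error(pose + eps, ref, target)
             - photometric_error(pose - eps, ref, target)) / (2 * eps)
        pose -= lr * g
    return pose

xs = np.arange(200)
ref = np.sin(xs / 15.0)
target = np.interp(xs + 3.0, xs, ref)   # ground-truth shift of 3 pixels
pose = refine(2.5, ref, target)
print(round(pose, 2))  # ≈ 3.0
```

The real system refines a full 6-DoF camera pose against image intensities, but the mechanism is the same: a coarse network prediction is polished by minimizing a photometric residual at test time.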
no code implementations • 14 Mar 2021 • Jiaxin Zhang, Sirui Bi, Guannan Zhang
However, the approach requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit models.
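To see what a pathwise (reparameterization) gradient is, and why it needs an explicit sampling path x = g(theta, eps), consider a Gaussian toy example (our illustration, unrelated to the paper's MI bound); an implicit model yields samples but no such closed-form g, so this estimator cannot be formed:

```python
import numpy as np

def pathwise_grad(theta, n=100_000, seed=0):
    """Pathwise gradient of E[x^2] for x ~ N(theta, 1): reparameterize
    x = theta + eps with eps ~ N(0, 1), so d/dtheta E[x^2] = E[2(theta + eps)].
    This is only possible because the path x = g(theta, eps) is known."""
    eps = np.random.default_rng(seed).normal(size=n)
    return float(np.mean(2 * (theta + eps)))

print(round(pathwise_grad(1.5), 1))  # ≈ 3.0, the true value 2*theta
```

An implicit simulator exposes only `sample(theta)` with no differentiable map from noise to output, which is precisely why the paper must work around the pathwise requirement.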
no code implementations • 14 Mar 2021 • Jiaxin Zhang, Sirui Bi, Guannan Zhang
However, the approach in Kleinegesse et al., 2020 requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a path is usually inaccessible for implicit models.
1 code implementation • 24 Jan 2021 • Jiapeng Wang, Chongyu Liu, Lianwen Jin, Guozhi Tang, Jiaxin Zhang, Shuaitao Zhang, Qianying Wang, Yaqiang Wu, Mingxiang Cai
Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education.
no code implementations • 28 Nov 2020 • Jiaxin Zhang, Congjie Wei, Chenglin Wu
In this paper, we propose a thermodynamic consistent neural network (TCNN) approach to build a data-driven model of the TSR with sparse experimental data.
no code implementations • 28 Nov 2020 • Sirui Bi, Jiaxin Zhang, Guannan Zhang
Unlike the existing studies of DL for TO, our framework accelerates TO by learning the iterative history data and simultaneously training on the mapping between the given design and its gradient.
1 code implementation • 7 Feb 2020 • Jiaxin Zhang, Hoang Tran, Dan Lu, Guannan Zhang
Standard ES methods with $d$-dimensional Gaussian smoothing suffer from the curse of dimensionality due to the high variance of Monte Carlo (MC) based gradient estimators.
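The estimator in question, and the growth of its error with dimension, can be sketched as follows (a plain single-sided MC estimator for illustration; practical ES methods typically use antithetic sampling and baselines to reduce variance):

```python
import numpy as np

def es_gradient(f, x, sigma=0.1, n_samples=100, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothed gradient
    grad f_sigma(x) = E_u[f(x + sigma*u) u] / sigma, with u ~ N(0, I_d)."""
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.normal(size=(n_samples, len(x)))
    fvals = np.array([f(x + sigma * ui) for ui in u])
    return (fvals[:, None] * u).mean(axis=0) / sigma

f = lambda x: 0.5 * np.sum(x ** 2)      # the true gradient at x is x itself
rng = np.random.default_rng(0)
mean_err = {}
for d in (2, 200):
    x = np.ones(d)
    errs = [np.linalg.norm(es_gradient(f, x, rng=rng) - x) for _ in range(20)]
    mean_err[d] = float(np.mean(errs))
    print(d, round(mean_err[d], 1))
# the mean estimation error grows sharply with the dimension d
```

With a fixed sample budget, the error of this estimator blows up as d grows — the curse of dimensionality the paper sets out to mitigate.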
no code implementations • 10 Aug 2019 • Jiaxin Zhang, Xianglin Liu, Sirui Bi, Junqi Yin, Guannan Zhang, Markus Eisenbach
In this study, a robust data-driven framework based on Bayesian approaches is proposed and demonstrated on the accurate and efficient prediction of configurational energy of high entropy alloys.
no code implementations • NeurIPS 2019 • Guannan Zhang, Jiaxin Zhang, Jacob Hinkle
We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data.
no code implementations • 26 Feb 2018 • Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng Jiang, Yiyu Shi
This work explores the binarization of the deconvolution-based generator in a GAN for memory saving and speedup of image construction.
no code implementations • 29 Oct 2014 • Xiangyang Zhou, Jiaxin Zhang, Brian Kulis
Despite strong performance on a number of clustering tasks, spectral graph cut algorithms still suffer from several limitations: first, they require the number of clusters to be specified in advance, yet this information is often unknown a priori; second, they tend to produce clusters of uniform size.
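The first limitation is visible even in a minimal spectral clustering sketch (our own illustration, not the paper's method): the number of clusters k enters both the spectral embedding and the k-means step, so it must be chosen before anything runs.

```python
import numpy as np

def spectral_clusters(A, k, n_iter=20):
    """Unnormalized spectral clustering: embed nodes with the k smallest
    eigenvectors of the graph Laplacian, then run a tiny k-means.
    Note that k must be supplied up front."""
    L = np.diag(A.sum(axis=1)) - A                  # Laplacian L = D - A
    _, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    emb = vecs[:, :k]                               # spectral embedding
    idx = [0]                                       # farthest-point init
    for _ in range(k - 1):
        d2 = ((emb[:, None, :] - emb[idx]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d2)))
    centers = emb[idx]
    for _ in range(n_iter):                         # Lloyd iterations
        labels = ((emb[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        centers = np.stack([emb[labels == c].mean(axis=0) for c in range(k)])
    return labels

# Two disconnected 3-cliques: an unambiguous 2-cluster graph.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
labels = spectral_clusters(A, k=2)
print(labels)  # nodes 0-2 share one label, nodes 3-5 the other
```

Even on this trivially separable graph, nothing in the pipeline discovers that k = 2; choosing k = 3 would silently split one clique, which is the gap the paper addresses.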