1 code implementation • 23 Jan 2024 • Ki Hyun Tae, Hantian Zhang, Jaeyoung Park, Kexin Rong, Steven Euijong Whang
Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness.
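The selection policy below is only a hedged illustration, not Falcon's actual algorithm: it filters an unlabeled pool down to a hypothetical target group (attribute="female", proxy label = positive) and ranks candidates by a stand-in informativeness score (model uncertainty). The function name, the `gender` column, and the uncertainty heuristic are all assumptions made for the sketch.

```python
# Minimal sketch (not Falcon's policy): rank unlabeled samples in a target
# group by how uncertain the current model is about them.
import numpy as np
import pandas as pd

def select_from_target_group(df, proba, attribute_col, attribute_val, budget):
    """Return indices of the `budget` most uncertain samples whose sensitive
    attribute matches the target group and whose proxy label is positive."""
    in_group = (df[attribute_col] == attribute_val).to_numpy() & (proba >= 0.5)
    uncertainty = 1.0 - 2.0 * np.abs(proba - 0.5)   # peaks where the model is unsure
    scores = np.where(in_group, uncertainty, -np.inf)  # exclude out-of-group samples
    return np.argsort(-scores)[:budget]

# Toy usage with synthetic data
rng = np.random.default_rng(0)
df = pd.DataFrame({"gender": rng.choice(["female", "male"], size=100)})
proba = rng.uniform(size=100)   # model's positive-class scores for the unlabeled pool
print(select_from_target_group(df, proba, "gender", "female", budget=5))
```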
1 code implementation • 15 Sep 2022 • Hantian Zhang, Ki Hyun Tae, Jaeyoung Park, Xu Chu, Steven Euijong Whang
We then propose an approximate linear programming algorithm and provide theoretical guarantees on how close its result is to the optimal solution in terms of the number of label flips.
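As a rough illustration of the idea (not the paper's exact formulation or guarantees), the sketch below relaxes binary flipped-label variables to [0, 1], minimizes the number of flips subject to a demographic-parity-style constraint on the flipped labels, solves the relaxation with scipy's `linprog`, and rounds naively. The choice of fairness constraint and the rounding step are assumptions made for the sketch.

```python
# Hedged sketch of an LP relaxation for minimal label flipping.
import numpy as np
from scipy.optimize import linprog

def min_label_flips(y, group, eps=0.02):
    """y: 0/1 labels, group: 0/1 sensitive attribute, eps: allowed rate gap."""
    n = len(y)
    # |z_i - y_i| = z_i when y_i = 0, and 1 - z_i when y_i = 1, so the
    # objective is linear (the constant sum(y == 1) is dropped).
    c = np.where(y == 0, 1.0, -1.0)
    a = np.where(group == 1, 1.0 / max(group.sum(), 1),
                 -1.0 / max((1 - group).sum(), 1))
    A_ub = np.vstack([a, -a])            # |rate_group1 - rate_group0| <= eps
    b_ub = np.array([eps, eps])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    z = np.round(res.x).astype(int)      # naive rounding of the LP relaxation
    return z, int(np.abs(z - y).sum())

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=200)
y = (rng.uniform(size=200) < np.where(group == 1, 0.7, 0.4)).astype(int)
z, flips = min_label_flips(y, group)
print("flips needed:", flips)
```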
no code implementations • 13 Mar 2021 • Hantian Zhang, Xu Chu, Abolfazl Asudeh, Shamkant B. Navathe
Existing techniques for producing fair ML models either are limited to the type of fairness constraints they can handle (e.g., preprocessing) or require nontrivial modifications to downstream ML training algorithms (e.g., in-processing).
no code implementations • 21 May 2019 • Jialin Ding, Umar Farooq Minhas, Jia Yu, Chi Wang, Jaeyoung Do, Yi-Nan Li, Hantian Zhang, Badrish Chandramouli, Johannes Gehrke, Donald Kossmann, David Lomet, Tim Kraska
The original work by Kraska et al. shows that a learned index beats a B+Tree by a factor of up to three in search time and by an order of magnitude in memory footprint.
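A minimal sketch of the learned-index idea, assuming a single linear model rather than the recursive or adaptive structures studied by Kraska et al. and in this paper: fit a key-to-position model on the sorted keys, predict a slot for a lookup key, and correct within the model's worst-case error bound.

```python
# Hedged sketch: one linear model as a learned index over a sorted array.
import numpy as np

class LinearLearnedIndex:
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys))
        pos = np.arange(len(self.keys), dtype=float)
        self.slope, self.intercept = np.polyfit(self.keys.astype(float), pos, 1)
        pred = self.slope * self.keys + self.intercept
        self.max_err = int(np.ceil(np.max(np.abs(pred - pos))))  # worst-case model error

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err - 1)
        hi = min(len(self.keys), guess + self.max_err + 2)
        # binary search only within the model's error bound
        i = lo + np.searchsorted(self.keys[lo:hi], key)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LinearLearnedIndex(np.random.default_rng(2).integers(0, 10**6, 10**4))
print(idx.lookup(idx.keys[123]))   # position of an existing key
```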
1 code implementation • 8 Mar 2019 • Zeke Wang, Kaan Kara, Hantian Zhang, Gustavo Alonso, Onur Mutlu, Ce Zhang
Learning from the data stored in a database is an important function increasingly available in relational engines.
no code implementations • 23 Mar 2018 • Dominic Stark, Barthelemy Launet, Kevin Schawinski, Ce Zhang, Michael Koss, M. Dennis Turp, Lia F. Sartori, Hantian Zhang, Yiru Chen, Anna K. Weigel
We test the method on Sloan Digital Sky Survey (SDSS) r-band images with artificial AGN point sources added, which are then removed both with the GAN and with parametric fitting using GALFIT.
Astrophysics of Galaxies • Data Analysis, Statistics and Probability
no code implementations • ICML 2017 • Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang
We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees?
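The sketch below illustrates one ingredient behind such guarantees, assuming a simple unbiased stochastic quantizer applied to SGD gradients for linear regression; it is not the paper's full framework, and the bit width, learning rate, and per-step scaling are choices made only for the sketch.

```python
# Hedged sketch: stochastic rounding keeps the quantized gradient unbiased,
# so plain SGD on the quantized gradients still converges in expectation.
import numpy as np

def stochastic_quantize(x, bits=4, scale=1.0):
    """Unbiased quantizer: E[quantize(x)] == x for x within [-scale, scale]."""
    levels = 2 ** bits - 1
    z = np.clip(x / scale, -1.0, 1.0) * levels
    low = np.floor(z)
    # round up with probability equal to the fractional part -> unbiasedness
    up = (np.random.uniform(size=z.shape) < (z - low)).astype(float)
    return (low + up) / levels * scale

# SGD for linear regression with quantized gradients
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=1000)
w = np.zeros(10)
for step in range(2000):
    i = rng.integers(0, len(X))
    grad = (X[i] @ w - y[i]) * X[i]
    w -= 0.01 * stochastic_quantize(grad, bits=4, scale=np.abs(grad).max() + 1e-12)
print("error:", np.linalg.norm(w - w_true))
```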
no code implementations • 29 Jul 2017 • Yu Liu, Hantian Zhang, Luyuan Zeng, Wentao Wu, Ce Zhang
We then use mlbench to compare the performance of the top winning code available from Kaggle with that of the machine learning clouds from Azure and Amazon.
no code implementations • 1 Feb 2017 • Kevin Schawinski, Ce Zhang, Hantian Zhang, Lucas Fowler, Gokula Krishnan Santhanam
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data.
1 code implementation • 16 Nov 2016 • Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang
When applied to linear models together with double sampling, we save up to another 1.7x in data movement compared with uniform quantization.
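A hedged sketch of the double-sampling idea: the linear-regression gradient (x·w − y)·x uses x twice, so substituting the same quantized copy of x in both positions introduces bias (E[q(x)·q(x)] ≠ x·x), whereas drawing two independent quantized copies and using one in each position keeps the stochastic gradient unbiased. The quantizer, step size, and data below are illustrative, not the paper's exact scheme.

```python
# Hedged sketch of double sampling for quantized linear-regression SGD.
import numpy as np

def stochastic_quantize(x, levels=15, scale=1.0):
    z = np.clip(x / scale, -1.0, 1.0) * levels
    low = np.floor(z)
    up = (np.random.uniform(size=z.shape) < (z - low)).astype(float)
    return (low + up) / levels * scale

def quantized_gradient(x, y, w, scale):
    q1 = stochastic_quantize(x, scale=scale)   # first independent quantized copy
    q2 = stochastic_quantize(x, scale=scale)   # second independent quantized copy
    # using q1 in both positions would be biased; q1/q2 independence removes that
    return (q1 @ w - y) * q2

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))
w_true = rng.normal(size=5)
y = X @ w_true
w = np.zeros(5)
for step in range(5000):
    i = rng.integers(0, len(X))
    w -= 0.005 * quantized_gradient(X[i], y[i], w, scale=np.abs(X).max())
print("error:", np.linalg.norm(w - w_true))
```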