no code implementations • 7 Jun 2024 • Amira Alakhdar, Barnabas Poczos, Newell Washburn
We also cover conditional generation on physical properties, conformation generation, and fragment-based drug design.
no code implementations • 2 May 2024 • Dhananjay Ashok, Barnabas Poczos
While most research on controllable text generation has focused on steering base Language Models, the emerging instruction-tuning and prompting paradigm offers an alternate approach to controllability.
no code implementations • 30 Aug 2023 • Hai Pham, Young Jin Kim, Subhabrata Mukherjee, David P. Woodruff, Barnabas Poczos, Hany Hassan Awadalla
The mixture-of-experts (MoE) architecture has proven to be a powerful method for training deep models across diverse tasks and applications.
no code implementations • 24 Aug 2023 • Chenghui Zhou, Barnabas Poczos
In this paper, we explore applying a multi-stage VAE approach, which can improve manifold recovery on a synthetic dataset, to the field of drug discovery.
no code implementations • 6 Dec 2022 • Chenghui Zhou, Barnabas Poczos
The variational autoencoder (VAE) is a popular method for drug discovery, and a great many architectures and pipelines have been proposed to improve its performance.
1 code implementation • NeurIPS 2020 • Mariya Toneva, Otilia Stretcu, Barnabas Poczos, Leila Wehbe, Tom M. Mitchell
These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
no code implementations • 18 Aug 2020 • Hai Pham, Amrith Setlur, Saket Dingliwal, Tzu-Hsiang Lin, Barnabas Poczos, Kang Huang, Zhuo Li, Jae Lim, Collin McCormack, Tam Vu
Despite the advent of deep learning in computer vision, the general handwriting recognition problem is far from solved.
no code implementations • 25 Jul 2020 • Amrith Setlur, Barnabas Poczos, Alan W. black
This paper extends recent work on nonlinear Independent Component Analysis (ICA) by introducing a theoretical framework for nonlinear Independent Subspace Analysis (ISA) in the presence of auxiliary variables.
1 code implementation • ICML Workshop LifelongML 2020 • Amrith Setlur, Saket Dingliwal, Barnabas Poczos
Based on this model we propose a computationally feasible meta-learning algorithm by introducing meaningful relaxations in our final objective.
1 code implementation • ACL 2020 • Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W. black, Shrimai Prabhumoye
This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.
no code implementations • NeurIPS 2020 • Ananya Uppal, Shashank Singh, Barnabas Poczos
We study minimax convergence rates of nonparametric density estimation in the Huber contamination model, in which a proportion of the data comes from an unknown outlier distribution.
no code implementations • 20 Feb 2020 • Ilqar Ramazanli, Han Nguyen, Hai Pham, Sashank J. Reddi, Barnabas Poczos
It often leads to the dependence of the convergence rate on the maximum Lipschitz constant of gradients across the devices.
no code implementations • 6 Feb 2020 • Ilqar Ramazanli, Barnabas Poczos
We study the problem of exact completion of an $m \times n$ matrix of rank $r$ with the adaptive sampling method.
no code implementations • 27 Jan 2020 • Chenghui Zhou, Chun-Liang Li, Barnabas Poczos
However, they struggle with the inherent sparsity of meaningful programs in the search space.
no code implementations • 8 Dec 2019 • Austin Dill, Songwei Ge, Eunsu Kang, Chun-Liang Li, Barnabas Poczos
The typical approach for incorporating this creative process is to interpolate in a learned latent space so as to avoid the problem of generating unrealistic instances by exploiting the model's learned structure.
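A minimal sketch of this interpolation idea, assuming hypothetical pretrained `encoder` and `decoder` functions (stand-ins, not the authors' actual model):

```python
import numpy as np

def interpolate_latent(encoder, decoder, x_a, x_b, n_steps=8):
    """Decode points along the line segment between two latent codes.

    encoder/decoder are hypothetical stand-ins for a trained generative
    model; interpolating in latent space keeps intermediate samples near
    the learned data manifold rather than blending raw inputs.
    """
    z_a, z_b = encoder(x_a), encoder(x_b)
    return [decoder((1.0 - t) * z_a + t * z_b)
            for t in np.linspace(0.0, 1.0, n_steps)]
```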
1 code implementation • NeurIPS 2019 • Emre Yolcu, Barnabas Poczos
We present an approach to learn SAT solver heuristics from scratch through deep reinforcement learning with a curriculum.
no code implementations • 18 Nov 2019 • Kai Hu, Barnabas Poczos
We further use a noise analysis method to interpret the difference between RotationOut and Dropout in co-adaptation reduction.
no code implementations • 22 Oct 2019 • Adarsh Dave, Jared Mitchell, Kirthevasan Kandasamy, Sven Burke, Biswajit Paria, Barnabas Poczos, Jay Whitacre, Venkatasubramanian Viswanathan
Innovations in batteries take years to formulate and commercialize, requiring extensive experimentation during the design and optimization phases.
no code implementations • 20 Aug 2019 • Songwei Ge, Austin Dill, Eunsu Kang, Chun-Liang Li, Lingyao Zhang, Manzil Zaheer, Barnabas Poczos
We explore the intersection of human and machine creativity by generating sculptural objects through machine learning.
1 code implementation • 5 Aug 2019 • Ksenia Korovina, Sailun Xu, Kirthevasan Kandasamy, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric P. Xing
In applications such as molecule design or drug discovery, it is desirable to have an algorithm which recommends new candidate molecules based on the results of past tests.
1 code implementation • 20 Jun 2019 • Haiguang Liao, Wentai Zhang, Xuliang Dong, Barnabas Poczos, Kenji Shimada, Levent Burak Kara
At the heart of the proposed method is a deep reinforcement learning agent that, by leveraging conjoint optimization, learns an optimal routing policy from the variety of problems it is presented with.
1 code implementation • NAACL 2019 • Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, Tom M. Mitchell
In this paper, we propose a curriculum learning framework for NMT that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in overall better performance.
1 code implementation • 15 Mar 2019 • Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing
We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that when the above methods are integrated, they enable significant improvements in the performance of BO.
no code implementations • 26 Feb 2019 • Michelle Ntampaka, Camille Avestruz, Steven Boada, Joao Caldeira, Jessi Cisewski-Kehe, Rosanne Di Stefano, Cora Dvorkin, August E. Evrard, Arya Farahi, Doug Finkbeiner, Shy Genel, Alyssa Goodman, Andy Goulding, Shirley Ho, Arthur Kosowsky, Paul La Plante, Francois Lanusse, Michelle Lochner, Rachel Mandelbaum, Daisuke Nagai, Jeffrey A. Newman, Brian Nord, J. E. G. Peek, Austin Peel, Barnabas Poczos, Markus Michael Rau, Aneta Siemiginowska, Dougal J. Sutherland, Hy Trac, Benjamin Wandelt
In recent years, machine learning (ML) methods have remarkably improved how cosmologists can interpret data.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics
1 code implementation • 21 Feb 2019 • Michael Andrews, John Alison, Sitong An, Patrick Bryant, Bjorn Burkle, Sergei Gleyzer, Meenakshi Narain, Manfred Paulini, Barnabas Poczos, Emanuele Usai
We describe the construction of end-to-end jet image classifiers based on simulated low-level detector data to discriminate quark- vs. gluon-initiated jets with high-fidelity simulated CMS Open Data.
1 code implementation • 15 Feb 2019 • Matthew Ho, Markus Michael Rau, Michelle Ntampaka, Arya Farahi, Hy Trac, Barnabas Poczos
Our first model, CNN$_\text{1D}$, infers cluster mass directly from the distribution of member galaxy line-of-sight velocities.
Cosmology and Nongalactic Astrophysics
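To make the kind of model named above concrete, here is a minimal 1D CNN mapping a binned line-of-sight velocity distribution to a scalar mass estimate; this is a PyTorch sketch, not the authors' architecture, and all layer sizes are placeholder choices:

```python
import torch.nn as nn

class VelocityCNN1D(nn.Module):
    """Toy 1D CNN: binned LOS velocity histogram -> scalar (log) cluster mass."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pool: works for any bin count
        )
        self.head = nn.Linear(32, 1)  # scalar regression output

    def forward(self, v_hist):        # v_hist: (batch, 1, n_bins)
        return self.head(self.features(v_hist).squeeze(-1))
```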
1 code implementation • 31 Jan 2019 • Willie Neiswanger, Kirthevasan Kandasamy, Barnabas Poczos, Jeff Schneider, Eric Xing
Optimizing an expensive-to-query function is a common task in science and engineering, where it is beneficial to keep the number of queries to a minimum.
2 code implementations • 19 Dec 2018 • Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, Barnabas Poczos
Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input.
no code implementations • 13 Nov 2018 • Chun-Liang Li, Eunsu Kang, Songwei Ge, Lingyao Zhang, Austin Dill, Manzil Zaheer, Barnabas Poczos
Our approach extends DeepDream from images to 3D point clouds.
1 code implementation • 13 Oct 2018 • Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, Ruslan Salakhutdinov
In this paper, we first show that a straightforward extension of existing GAN algorithms is not applicable to point clouds, because the constraint required for discriminators is undefined for set data.
no code implementations • ICLR 2019 • Simon S. Du, Xiyu Zhai, Barnabas Poczos, Aarti Singh
One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth.
1 code implementation • 31 Jul 2018 • Michael Andrews, Manfred Paulini, Sergei Gleyzer, Barnabas Poczos
This paper describes the construction of novel end-to-end image-based classifiers that directly leverage low-level simulated detector data to discriminate signal and background processes in pp collision events at the Large Hadron Collider at CERN.
no code implementations • WS 2018 • Hai Pham, Thomas Manzini, Paul Pu Liang, Barnabas Poczos
Multimodal machine learning is a core research area spanning the language, visual and acoustic modalities.
1 code implementation • 28 Jun 2018 • Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich
Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.
1 code implementation • 25 May 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Reed Zhang, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal.
no code implementations • 13 Feb 2018 • Yifan Wu, Barnabas Poczos, Aarti Singh
A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models.
1 code implementation • NeurIPS 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
no code implementations • ICML 2018 • Simon S. Du, Jason D. Lee, Yuandong Tian, Barnabas Poczos, Aarti Singh
We consider the problem of learning a one-hidden-layer neural network with non-overlapping convolutional layer and ReLU activation, i.e., $f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_j a_j\sigma(\mathbf{w}^T\mathbf{Z}_j)$, in which both the convolutional weights $\mathbf{w}$ and the output weights $\mathbf{a}$ are parameters to be learned.
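A direct numpy rendering of the model exactly as written, taking $\sigma$ to be ReLU and stacking the non-overlapping patches $\mathbf{Z}_j$ as rows (shapes are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def f(Z, w, a):
    """f(Z, w, a) = sum_j a_j * sigma(w^T Z_j), with sigma = ReLU.

    Z : (k, p) array whose row j is the j-th non-overlapping patch Z_j
    w : (p,)   shared convolutional filter
    a : (k,)   output-layer weights
    """
    return a @ relu(Z @ w)

# toy usage: 4 patches of dimension 3
rng = np.random.default_rng(0)
print(f(rng.normal(size=(4, 3)), rng.normal(size=3), rng.normal(size=4)))
```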
no code implementations • 6 Nov 2017 • Siamak Ravanbakhsh, Junier Oliva, Sebastien Fromenteau, Layne C. Price, Shirley Ho, Jeff Schneider, Barnabas Poczos
A major approach to estimating the cosmological parameters is to use the large-scale matter distribution of the Universe.
1 code implementation • ICCV 2017 • J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan
While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks.
no code implementations • 5 Sep 2017 • Sashank J. Reddi, Manzil Zaheer, Suvrit Sra, Barnabas Poczos, Francis Bach, Ruslan Salakhutdinov, Alexander J. Smola
A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points.
no code implementations • 30 May 2017 • Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff Schneider
Afterwards, an RNN is used to compute the conditional distributions of the latent covariates.
no code implementations • NeurIPS 2017 • Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas Poczos, Aarti Singh
Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape.
1 code implementation • 25 May 2017 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design and analyse variations of the classical Thompson sampling (TS) procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive, but can be performed in parallel.
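A sketch of one sequential Thompson-sampling step over a discrete candidate set with a GP surrogate; the RBF kernel, length scale, and noise level are placeholder assumptions, and the paper's parallel variants dispatch such posterior draws to multiple workers:

```python
import numpy as np

def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def thompson_step(X_obs, y_obs, X_cand, noise=1e-3):
    """Draw one GP posterior sample on X_cand and evaluate at its argmax."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    cov = rbf(X_cand, X_cand) - Ks.T @ np.linalg.solve(K, Ks)
    sample = np.random.multivariate_normal(mu, cov + 1e-8 * np.eye(len(X_cand)))
    return X_cand[np.argmax(sample)]  # next point to query
```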
no code implementations • 23 May 2017 • Wei-Cheng Chang, Chun-Liang Li, Yiming Yang, Barnabas Poczos
Large-scale kernel approximation is an important problem in machine learning research.
2 code implementations • 29 Mar 2017 • J. H. Rick Chang, Chun-Liang Li, Barnabas Poczos, B. V. K. Vijaya Kumar, Aswin C. Sankaranarayanan
On the other hand, traditional methods using signal priors can be used in all linear inverse problems but often have worse performance on challenging tasks.
no code implementations • ICML 2017 • Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabas Poczos
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design.
5 code implementations • NeurIPS 2017 • Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola
Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong.
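The characterization in question is the sum-decomposition form $f(X) = \rho\big(\sum_{x \in X} \phi(x)\big)$; a toy sketch with placeholder $\phi$ and $\rho$ (in practice both are learned networks):

```python
import numpy as np

def phi(x):   # placeholder per-element embedding (learned in practice)
    return np.stack([x, x ** 2])

def rho(s):   # placeholder readout applied to the pooled sum
    return s[0] + 0.5 * s[1]

def deep_set(X):
    """f(X) = rho(sum_x phi(x)): invariant to any reordering of X."""
    return rho(sum(phi(x) for x in X))

assert np.isclose(deep_set([1.0, 2.0, 3.0]), deep_set([3.0, 1.0, 2.0]))
```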
1 code implementation • 8 Mar 2017 • Francois Lanusse, Quanbin Ma, Nan Li, Thomas E. Collett, Chun-Liang Li, Siamak Ravanbakhsh, Rachel Mandelbaum, Barnabas Poczos
We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band LSST exposures.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies
2 code implementations • ICML 2017 • Junier B. Oliva, Barnabas Poczos, Jeff Schneider
Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications.
1 code implementation • ICML 2017 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We propose to study equivariance in deep neural networks through parameter symmetries.
no code implementations • NeurIPS 2017 • Simon Shaolei Du, Jayanth Koushik, Aarti Singh, Barnabas Poczos
We consider the Hypothesis Transfer Learning (HTL) problem where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain.
1 code implementation • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
no code implementations • NeurIPS 2016 • Kumar Avinava Dubey, Sashank J. Reddi, Sinead A. Williamson, Barnabas Poczos, Alexander J. Smola, Eric P. Xing
In this paper, we present techniques for reducing variance in stochastic gradient Langevin dynamics, yielding novel stochastic Monte Carlo methods that improve performance by reducing the variance in the stochastic gradient.
no code implementations • NeurIPS 2016 • Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alexander J. Smola
We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex.
no code implementations • 14 Nov 2016 • Siamak Ravanbakhsh, Jeff Schneider, Barnabas Poczos
We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set.
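One common parameter-sharing form of such a layer, sketched below: every element is transformed by the same weights and coupled only through a pooled statistic (mean pooling here; the specific weights and nonlinearity are illustrative assumptions):

```python
import numpy as np

def equivariant_layer(X, Lam, Gam):
    """Permutation-equivariant set layer.

    X   : (n, d) set of n elements
    Lam : (d, k) per-element weights;  Gam : (d, k) pooled-statistic weights
    """
    pooled = X.mean(axis=0, keepdims=True)          # (1, d), order-independent
    return np.maximum(X @ Lam + pooled @ Gam, 0.0)  # ReLU; output (n, k)

# permuting the input rows permutes the output rows identically
rng = np.random.default_rng(1)
X, Lam, Gam = rng.normal(size=(5, 3)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
perm = rng.permutation(5)
assert np.allclose(equivariant_layer(X, Lam, Gam)[perm],
                   equivariant_layer(X[perm], Lam, Gam))
```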
no code implementations • 11 Nov 2016 • Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos
Due to the numerical stability and quantifiability of its likelihood, the RBM is commonly used with Bernoulli units.
no code implementations • 19 Sep 2016 • Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, Barnabas Poczos
To this end, we study the application of deep conditional generative models in generating realistic galaxy images.
no code implementations • 27 Jul 2016 • Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola
Finally, we show that the faster convergence rates of our variance reduced methods also translate into improved convergence rates for the stochastic setting.
no code implementations • 23 May 2016 • Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola
This paper builds upon our recent series of papers on fast stochastic methods for smooth nonconvex optimization [22, 23], with a novel analysis for nonconvex and nonsmooth functions.
1 code implementation • 20 Mar 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
no code implementations • 19 Mar 2016 • Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, Alex Smola
We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them.
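A minimal SVRG sketch for $\min_x \frac{1}{n}\sum_i f_i(x)$; `grad_i` is a hypothetical component-gradient oracle, and the step size and inner-loop length are placeholders:

```python
import numpy as np

def svrg(x0, grad_i, n, step=0.01, epochs=10, inner=None, seed=0):
    """SVRG: anchor a full gradient at a snapshot, then take cheap steps
    with the variance-reduced direction g_i(x) - g_i(x_snap) + full_grad.

    grad_i(i, x) returns the gradient of the i-th component f_i at x.
    """
    rng = np.random.default_rng(seed)
    x, inner = x0.copy(), inner or n
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = sum(grad_i(i, x_snap) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            x -= step * (grad_i(i, x) - grad_i(i, x_snap) + full_grad)
    return x
```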
no code implementations • 19 Mar 2016 • Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola
We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form $\min_x \sum_i f_i(x)$.
no code implementations • 1 Jan 2016 • Siamak Ravanbakhsh, Barnabas Poczos, Jeff Schneider, Dale Schuurmans, Russell Greiner
We propose a Laplace approximation that creates a stochastic unit from any smooth monotonic activation function, using only Gaussian noise.
no code implementations • NeurIPS 2015 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins
We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence and other favorable theoretical properties. We apply this framework to derive estimators for several popular information-theoretic quantities, and via empirical evaluation, show the advantage of this approach over existing estimators.
no code implementations • 28 Sep 2015 • Siamak Ravanbakhsh, Barnabas Poczos, Russell Greiner
Boolean matrix factorization and Boolean matrix completion from noisy observations are desirable unsupervised data-analysis methods due to their interpretability, but are hard to perform due to their NP-hardness.
no code implementations • 4 Aug 2015 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.
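For concreteness, the unbiased estimate of $\mathrm{MMD}^2$ with a Gaussian kernel on which the gMMD test is built (a sketch; the bandwidth is a placeholder choice):

```python
import numpy as np

def gaussian_gram(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def mmd2_unbiased(X, Y, bw=1.0):
    """Unbiased MMD^2 between samples X ~ P and Y ~ Q: off-diagonal means
    of the within-sample Gram matrices minus twice the cross-sample mean."""
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = (gaussian_gram(X, X, bw), gaussian_gram(Y, Y, bw),
                     gaussian_gram(X, Y, bw))
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())
```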
no code implementations • 29 Jun 2015 • Junier Oliva, Avinava Dubey, Andrew G. Wilson, Barnabas Poczos, Jeff Schneider, Eric P. Xing
In this paper we introduce Bayesian nonparametric kernel-learning (BaNK), a generic, data-driven framework for scalable learning of kernels.
no code implementations • 15 May 2015 • Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman
For larger $\sigma$, the "unflattening" of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise appears to be beneficial.
no code implementations • 5 Mar 2015 • Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos
We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions.
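Here "additive" means functions of the form $f(x) = \sum_{j=1}^{M} f^{(j)}(x^{(j)})$, where the $x^{(j)}$ are disjoint low-dimensional groups of the $D$ coordinates, so each component can be optimised far more cheaply than the full $D$-dimensional function.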
no code implementations • 23 Nov 2014 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
The current literature is split into two kinds of tests: those which are consistent without any assumptions about how the distributions may differ (general alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (mean-shift alternatives).
2 code implementations • 17 Nov 2014 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins
We propose and analyze estimators for statistical functionals of one or more distributions under nonparametric assumptions.
1 code implementation • 8 Nov 2014 • Zoltan Szabo, Bharath Sriperumbudur, Barnabas Poczos, Arthur Gretton
In this paper, we study a simple, analytically computable, ridge regression-based alternative to distribution regression, where we embed the distributions to a reproducing kernel Hilbert space, and learn the regressor from the embeddings to the outputs.
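A sketch of the two-stage scheme described: represent each bag of samples by its empirical kernel mean embedding, then run kernel ridge regression with a linear second-level kernel on those embeddings (bandwidth and ridge penalty are placeholder choices):

```python
import numpy as np

def base_kernel(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def embedding_inner(bag_a, bag_b, bw=1.0):
    """<mu_P, mu_Q>: inner product of empirical kernel mean embeddings."""
    return base_kernel(bag_a, bag_b, bw).mean()

def fit_distribution_ridge(bags, y, lam=1e-2, bw=1.0):
    """Ridge regression from bags (lists of (n_i, d) sample arrays) to y."""
    G = np.array([[embedding_inner(bi, bj, bw) for bj in bags] for bi in bags])
    alpha = np.linalg.solve(G + lam * np.eye(len(bags)), y)

    def predict(new_bag):
        g = np.array([embedding_inner(new_bag, b, bw) for b in bags])
        return g @ alpha

    return predict
```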
no code implementations • 30 Oct 2014 • Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman
We give a comprehensive theoretical characterization of a nonparametric estimator for the $L_2^2$ divergence between two continuous distributions.
no code implementations • 27 Oct 2014 • Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff Schneider
Function to function regression (FFR) covers a large range of interesting applications including time-series prediction problems, and also more general tasks like studying a mapping between two separate types of distributions.
no code implementations • 2 Oct 2014 • Michelle Ntampaka, Hy Trac, Dougal J. Sutherland, Nicholas Battaglia, Barnabas Poczos, Jeff Schneider
In the conventional method, we use a standard $M(\sigma_v)$ power-law scaling relation to infer cluster mass, $M$, from line-of-sight (LOS) galaxy velocity dispersion, $\sigma_v$.
Cosmology and Nongalactic Astrophysics
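Concretely, such a relation has the power-law form $M = M_0\,(\sigma_v / \sigma_0)^{\alpha}$ for calibrated constants $M_0$ and $\alpha$, so $\log M$ is linear in $\log \sigma_v$; virial arguments suggest $\alpha \approx 3$, though the fitted values are calibration-dependent.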
no code implementations • 12 Feb 2014 • Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman
We consider nonparametric estimation of $L_2$, Renyi-$\alpha$ and Tsallis-$\alpha$ divergences between continuous distributions.
no code implementations • 7 Feb 2014 • Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur
To the best of our knowledge, the only existing method with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which suffers from slow convergence issues in high dimensions), and the domain of the distributions to be compact Euclidean.
no code implementations • 10 Nov 2013 • Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff Schneider, Fang-Cheng Yeh, Wen-Yih Tseng
We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against.
no code implementations • 10 Nov 2013 • Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric Xing
We study the problem of distribution to real-value regression, where one aims to regress a mapping $f$ that takes in a distribution input covariate $P\in \mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and outputs a real-valued response $Y=f(P) + \epsilon$.
no code implementations • 5 Mar 2013 • Xiaoying Xu, Shirley Ho, Hy Trac, Jeff Schneider, Barnabas Poczos, Michelle Ntampaka
We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties.
Cosmology and Nongalactic Astrophysics