
Instrument variable detection with graph learning: an application to high dimensional GIS-census data for house pricing

Endogeneity bias and instrumental variable validation have always been important topics in statistics and econometrics. In the era of big data, such issues typically combine with dimensionality issues and hence require even more attention. In this paper, we merge two well-known tools from machine learning and biostatistics---variable selection algorithms and probabilistic graphs---to estimate house prices and the corresponding causal structure using 2010 data on Sydney. The estimation uses a 200-gigabyte ultrahigh-dimensional database consisting of local school data, GIS information, census data, house characteristics and other socio-economic records. Using "big data", we show that it is possible to perform data-driven instrument selection efficiently and to purge invalid instruments. Our approach improves the sparsity, stability and robustness of variable selection in the presence of high dimensionality, complicated causal structures and the consequent multicollinearity, and recovers a sparse and intuitive causal structure. The approach is also efficient and effective at endogeneity detection, instrument validation, weak-instrument pruning and the selection of valid instruments. From the machine learning perspective, the estimation results align with and confirm known facts about the Sydney housing market, classical economic theories and previous findings from simultaneous equations modeling. Moreover, the estimation results are consistent with and supported by classical econometric tools such as two-stage least squares regression and a range of instrument tests. All the code may be found at \url{https://github.com/isaac2math/solar_graph_learning}.
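The abstract leans on two-stage least squares (2SLS) as the classical benchmark for validating selected instruments. As a hedged illustration only (the data, coefficients and variable names below are simulated and are not from the paper), this minimal NumPy sketch shows the endogeneity bias that plain OLS suffers when a regressor is correlated with the error, and how 2SLS with valid instruments removes it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: z1, z2 are valid instruments (correlated with x,
# independent of the confounder u); u makes x endogenous in the y equation.
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)  # true causal effect of x on y is 2.0

def ols(X, y):
    """Least-squares coefficients via numpy.linalg.lstsq."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased upward, because cov(x, u) > 0.
beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]

# 2SLS stage 1: project the endogenous x onto the instruments.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)

# 2SLS stage 2: regress y on the fitted values; bias is purged.
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)[1]

print(f"OLS estimate:  {beta_ols:.3f}")   # noticeably above 2.0
print(f"2SLS estimate: {beta_2sls:.3f}")  # close to 2.0
```

In practice one would use a dedicated IV routine (e.g. `linearmodels.iv.IV2SLS` in Python) rather than hand-rolled projections, since the second-stage standard errors above would be wrong without the usual 2SLS correction; the sketch only isolates the point-estimate logic.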
