I'm an Associate Professor in the Department of Statistics at The Ohio State University. I completed my Ph.D. in Statistics at the University of California, Berkeley with Prof. Bin Yu. From 2009 until 2012, I worked with Prof. Rob Kass as a postdoctoral researcher in the Department of Statistics at Carnegie Mellon University.
My research interests lie primarily in high-dimensional data analysis and statistical inference. I work on both theoretical and applied aspects, and I'm often motivated by concrete problems arising in the analysis of neuroscientific data — from neural spike trains (electrophysiological recordings) to neuroimaging (such as fMRI). In my past life (before entering graduate school), I was a software engineer in Silicon Valley for over six years, so I'm very keen on opportunities for statistical theory and computation to come together to solve interesting scientific and engineering problems.
One topic that I am currently investigating is computational sufficiency. The main question is: "Given a collection of potential procedures for the analysis of a data set, what summaries of the data contain sufficient information for computing all of the procedures under consideration?"
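As a small illustration of the flavor of this question (my own sketch, not taken from the papers below): for the family of ridge regression fits indexed by the penalty level, the Gram matrix X'X and the vector X'y are enough to compute every fit, so those two summaries are sufficient, in the computational sense above, for that whole collection of procedures.

```python
import numpy as np

# Simulated data, purely for illustration.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# The only summaries of the data we retain.
XtX = X.T @ X
Xty = X.T @ y

def ridge_from_summaries(XtX, Xty, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y, computed from the summaries alone."""
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), Xty)

for lam in (0.1, 1.0, 10.0):
    beta = ridge_from_summaries(XtX, Xty, lam)
    # Agrees with the estimate computed directly from the full data (X, y).
    beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    assert np.allclose(beta, beta_direct)
```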
Here are some recent papers: