Host: Norm Matloff
When: Thursday, November 10th, 2016 at 3:10 pm
Where: 1131 Kemper Hall
Abstract: When the methods of bagging or random forests are used for classification, an ensemble of randomized classifiers is generated, indexed by t = 1, 2, …, and the predictions of the classifiers are aggregated by voting. Due to the randomization in these methods, there is a natural tradeoff between statistical performance and computational cost. On one hand, as t increases, the (random) prediction error of the ensemble tends to decrease and stabilize. On the other hand, larger ensembles require greater computational cost for training and making new predictions. In this talk, I will discuss some recent methods and theoretical results that quantify this tradeoff in a precise sense.
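As a rough illustration of the tradeoff described in the abstract (this sketch is not from the talk itself), consider a toy model in which each randomized classifier in the ensemble is independently correct on a given test point with some probability p > 1/2; the value p = 0.7 below is an arbitrary assumption. Majority voting over t classifiers then drives the ensemble error down as t grows, while training and prediction cost grow linearly in t:

```python
import random

def vote_error(t, p=0.7, trials=20000):
    """Monte Carlo estimate of the majority-vote error for an
    ensemble of t classifiers, each independently correct with
    probability p (a simplifying assumption for illustration)."""
    rng = random.Random(0)  # fixed seed for reproducibility
    wrong = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(t))
        # Majority vote is wrong if correct votes do not exceed
        # incorrect votes (ties counted as errors).
        if correct_votes <= t - correct_votes:
            wrong += 1
    return wrong / trials

# Error decreases and stabilizes as t grows, but each additional
# classifier adds the same computational cost.
for t in (1, 5, 25, 125):
    print(t, round(vote_error(t), 3))
```

Under this independence assumption the decrease is governed by binomial tail bounds; the talk concerns quantifying the analogous tradeoff precisely for actual bagging and random forest ensembles, where the classifiers are not independent.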
Bio: Miles Lopes joined the UC Davis Department of Statistics as an assistant professor in 2015, shortly after completing his PhD in statistics, along with an MS in computer science, at UC Berkeley. His main research interests are in high-dimensional statistics and machine learning, with particular emphasis on the topics of bootstrap methods, randomized algorithms for data analysis, and compressed sensing.