BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARC - ECPv5.1.5//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARC
X-ORIGINAL-URL:https://arc.m3hosting.www.umich.edu
X-WR-CALDESC:Events for ARC
BEGIN:VTIMEZONE
TZID:America/Detroit
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20120311T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20121104T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Detroit:20121130T130000
DTEND;TZID=America/Detroit:20121130T140000
DTSTAMP:20220528T230248Z
CREATED:20121128T050000Z
LAST-MODIFIED:20121128T050000Z
UID:15800-1354233600-1354233600@arc.m3hosting.www.umich.edu
SUMMARY:BigData: Probabilistic Methods for Efficient Search and Statistical Learning in Extremely High-Dimensional Data - November 30
DESCRIPTION:Ping Li\, Cornell University\n1:00 – 2:00 pm\, Friday\, November 30\nNorth Quad 3100 (Ehrlicher Room)\nAbstract: This talk will present a series of works on probabilistic hashing methods\, which typically transform a challenging (or infeasible) massive-data computational problem into a probability and statistical estimation problem. For example\, fitting a logistic regression (or SVM) model on a dataset with a billion observations and a billion (or billion-squared) variables would be difficult. Searching for similar documents (or images) in a repository of a billion web pages (or images) is another challenging example. In certain important applications in the search industry\, a web page is often represented as a binary (0/1) vector in billion-squared (2^64) dimensions. For such data\, both data reduction (i.e.\, reducing the number of nonzero entries) and dimensionality reduction are crucial for achieving efficient search and statistical learning. The talk will present two closely related probabilistic methods: (1) b-bit minwise hashing and (2) one permutation hashing\, which simultaneously perform effective data reduction and dimensionality reduction on massive\, high-dimensional\, binary data. For example\, training an SVM for classification on a 24GB text dataset took only 3 seconds after reducing the dataset to merely 70MB using our probabilistic methods. Experiments on close to 1TB of data will also be presented. Several challenging probability problems remain open. Key references: [1] P. Li\, A. Owen\, C-H Zhang\, One Permutation Hashing\, NIPS 2012; [2] P. Li\, A. C. König\, Theory and Applications of b-Bit Minwise Hashing\, Research Highlights in Communications of the ACM 2011.\nBio: Ping Li is an Assistant Professor in the Department of Statistical Science at Cornell University. His research interests include BigData\, randomized algorithms\, boosting and trees\, information retrieval\, etc. Ping Li won a prize in the Yahoo! 2010 Learning to Rank Grand Challenge. He is also a recipient of the ONR (Office of Naval Research) Young Investigator Award in 2009.\n
URL:https://arc.m3hosting.www.umich.edu/event/bigdata-probabilistic-methods-for-efficient-search-and-statistical-learning-in-extremely-high-dimensional-data-november-30/
END:VEVENT
END:VCALENDAR