Geoff Webb

2017–2019 Distinguished Speaker

Professor Webb is the Director of the Monash University Centre for Data Science. He was editor-in-chief of the premier data mining journal, Data Mining and Knowledge Discovery, from 2005 to 2014. He has been Program Committee Chair of the two top data mining conferences, ACM SIGKDD and IEEE ICDM, as well as General Chair of ICDM. He is a Technical Advisor to BigML Inc, which has incorporated his best-of-class association discovery software, Magnum Opus, as a core component of its cloud-based machine learning service. He developed many of the key mechanisms of support-confidence association discovery in the late 1980s. His OPUS search algorithm remains the state of the art in rule search. He has pioneered research areas as diverse as black-box user modeling, interactive data analytics, and statistically-sound pattern discovery. He has developed many best-of-class machine learning algorithms that are widely deployed.

Professor of Information Technology Research
Monash University
URL: http://i.giwebb.com
geoff.webb@monash.edu

DVP term expires December 2019


Presentations

Learning from non-stationary distributions

Abstract: The world is dynamic – in a constant state of flux – but most learned models are static. Models learned from historical data are likely to decline in accuracy over time. This talk presents some theoretical tools for analyzing non-stationary distributions and discusses insights that they provide. Shortcomings of standard approaches to learning from non-stationary distributions are discussed together with strategies for developing more effective techniques.
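As a toy illustration of the phenomenon (background only, not material from the talk), the Python sketch below generates a synthetic stream whose true decision boundary rotates over time: a model trained once on historical data decays in accuracy, while one retrained on the most recent data tracks the drift. The drift process and all names here are illustrative assumptions.

  # Illustrative sketch: accuracy decay of a static model under gradual
  # concept drift, versus retraining on the most recent batch of data.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)

  def make_batch(t, n=500):
      """Binary problem whose true decision boundary rotates with time t."""
      X = rng.normal(size=(n, 2))
      w = np.array([np.cos(0.15 * t), np.sin(0.15 * t)])  # drifting boundary
      return X, (X @ w > 0).astype(int)

  X0, y0 = make_batch(0)
  static = LogisticRegression().fit(X0, y0)   # trained once, never updated

  prev_X, prev_y = X0, y0
  for t in range(1, 11):
      X, y = make_batch(t)
      windowed = LogisticRegression().fit(prev_X, prev_y)  # retrain on newest data
      print(f"t={t}: static={static.score(X, y):.2f} "
            f"windowed={windowed.score(X, y):.2f}")
      prev_X, prev_y = X, y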

Scaling log-linear analysis to datasets with thousands of variables

Abstract: Association discovery is a fundamental data mining task. The primary statistical approach to discovering associations between variables is log-linear analysis. Classical approaches to log-linear analysis do not scale beyond about ten variables. By melding the state of the art in statistics, graphical modeling, and data mining research, we have developed efficient and effective algorithms that perform log-linear analysis of datasets with thousands of variables in seconds, providing a powerful, statistically-sound method for creating compact models of complex high-dimensional multivariate distributions.
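To make the scaling problem concrete (this sketch is background, not the algorithms from the talk), the classical building block of log-linear analysis is model fitting over the full contingency table, which for d binary variables has 2**d cells. The sketch below shows only the two-variable base case, a chi-squared test of independence on a 2x2 table, using synthetic data.

  # Background sketch: the two-variable base case of log-linear analysis is
  # a test of (in)dependence on a contingency table. The full table over d
  # binary variables has 2**d cells, which is why classical approaches
  # stall at around ten variables.
  import numpy as np
  from scipy.stats import chi2_contingency

  rng = np.random.default_rng(1)
  a = rng.integers(0, 2, size=10_000)         # variable A
  noise = rng.random(10_000) < 0.1
  b = np.where(noise, 1 - a, a)               # B agrees with A, 10% flips

  table = np.zeros((2, 2), dtype=int)
  np.add.at(table, (a, b), 1)                 # 2x2 contingency table

  chi2, p, dof, expected = chi2_contingency(table)
  print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")  # tiny p: A, B associated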

Scalable learning of Bayesian network classifiers

Abstract: I present our work on highly scalable, out-of-core techniques for learning well-calibrated Bayesian network classifiers. Our techniques are based on a novel hybrid generative and discriminative learning paradigm. These algorithms:

  • provide straightforward mechanisms for managing the bias-variance trade-off,
  • have training time that is linear with respect to training set size,
  • require as few as one and at most four passes through the training data,
  • allow for incremental learning,
  • are embarrassingly parallelisable,
  • support anytime classification,
  • provide direct well-calibrated prediction of class probabilities,
  • can learn using arbitrary loss functions,
  • support direct handling of missing values, and
  • exhibit robustness to noise in the training data.

Despite their computational efficiency, the new algorithms deliver classification accuracy that is competitive with state-of-the-art in-core discriminative learning techniques.
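The hybrid algorithms themselves are beyond a short example, but the single-pass, count-based flavour of generative Bayesian network learning can be sketched with plain naive Bayes, the simplest Bayesian network classifier. This is an illustrative stand-in, not the talk's methods; it shows why count-based training is linear-time, incremental, and trivially parallelisable (counts from data shards simply add).

  # Minimal sketch: plain naive Bayes as the simplest Bayesian network
  # classifier (an illustrative stand-in, not the talk's hybrid methods).
  # Training is a single pass that accumulates counts.
  from collections import defaultdict
  import math

  class NaiveBayes:
      def __init__(self):
          self.n = 0
          self.class_counts = defaultdict(int)   # N(y)
          self.feat_counts = defaultdict(int)    # N(y, i, v)

      def update(self, x, y):
          """Incremental update from one example with categorical features."""
          self.n += 1
          self.class_counts[y] += 1
          for i, v in enumerate(x):
              self.feat_counts[(y, i, v)] += 1

      def predict_proba(self, x):
          """Laplace-smoothed P(y | x); the +2 denominator assumes binary
          feature values."""
          log_post = {}
          for y, cy in self.class_counts.items():
              lp = math.log((cy + 1) / (self.n + len(self.class_counts)))
              for i, v in enumerate(x):
                  lp += math.log((self.feat_counts[(y, i, v)] + 1) / (cy + 2))
              log_post[y] = lp
          m = max(log_post.values())
          unnorm = {y: math.exp(lp - m) for y, lp in log_post.items()}
          z = sum(unnorm.values())
          return {y: p / z for y, p in unnorm.items()}

  model = NaiveBayes()
  for x, y in [((0, 1), "a"), ((0, 0), "a"), ((1, 1), "b"), ((1, 0), "b")]:
      model.update(x, y)                         # one pass over the data
  print(model.predict_proba((1, 1)))             # favours class "b"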

Finding Interesting Patterns

Abstract: Association discovery is one of the most studied tasks in the field of data mining. However, far more attention has been paid to how to discover associations than to what associations should be discovered. This talk highlights shortcomings of the dominant frequent-pattern paradigm, illustrates the benefits of the alternative top-k paradigm, and presents the self-sufficient itemsets approach to identifying potentially interesting associations.
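As a toy illustration of the core statistical idea (not the full self-sufficient itemsets procedure), one can ask whether a pair of items co-occurs significantly more often than independence would predict, here via Fisher's exact test. The counts below are invented.

  # Toy sketch of the core statistical idea behind self-sufficient itemsets
  # (not the full procedure): does the co-occurrence of items A and B
  # significantly exceed what independence would predict?
  from scipy.stats import fisher_exact

  n = 10_000       # transactions
  n_a = 2_000      # transactions containing A
  n_b = 1_500      # transactions containing B
  n_ab = 600       # containing both; independence predicts about 300

  table = [[n_ab, n_a - n_ab],
           [n_b - n_ab, n - n_a - n_b + n_ab]]
  odds, p = fisher_exact(table, alternative="greater")
  print(f"odds ratio = {odds:.2f}, p = {p:.3g}")  # tiny p: productive association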
