
Active Bayesian Optimization


In optimal experiment design (or active learning), one seeks an online strategy for function approximation (or system identification). It is particularly useful when each sample is costly to obtain. But what if the goal is to optimize a certain target instead of learning the entire function? In problems that require tuning parameters for maximum efficacy, for example drug combinations, neural micro-stimulation parameters, or aircraft design, one is often not interested in recovering the full system response, but only in the optimal set of parameters. It therefore makes sense to direct active learning at the location of the optimal parameters rather than at the full function.

So we decided to work on the problem under a Bayesian inference framework, and named it Active Bayesian Optimization (ABO). The main difficulty is the complexity of the posterior over the minimizer that we want to learn. Our approximation-based effort is briefly presented in an arXiv paper [1]. Unfortunately, however, we were not the first to think of the ABO problem: Villemonteix and colleagues [2] had posed it in a similar setup, using sampling techniques instead of approximation. We learned this at the NIPS Bayesian optimization workshop (2011), where the referees pointed us to the earlier work. At the workshop, we also found another recent solution to the ABO problem by Hennig and Schuler [3], who use a clever approximation to the multi-modal posterior of the minimizer based on expectation propagation (EP). Approximate Bayesian inference techniques or clever prior design are definitely needed for ABO; the initial solutions in [1-3] are somewhat slow and can be computationally intractable. This is an exciting area with great potential to grow.
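To make the central object concrete, here is a minimal sketch of the quantity these methods reason about: a Monte Carlo estimate of the posterior over the minimizer, and its entropy, under a Gaussian process prior. This is illustrative code of my own, not the algorithm from [1], [2], or [3]; the squared-exponential kernel, the grid discretization, and all hyperparameters are assumptions chosen for the example.

```python
import numpy as np

def minimizer_distribution(X_obs, y_obs, X_grid, n_samples=2000,
                           ell=0.5, sf=1.0, noise=1e-4, seed=0):
    """Monte Carlo estimate of p(x* = argmin f | data) under a GP prior.

    Illustrative sketch (not the authors' code): draw functions from the
    GP posterior on a grid, and histogram where each draw attains its
    minimum. Returns the estimated distribution and its entropy.
    """
    rng = np.random.default_rng(seed)

    def k(A, B):
        # Squared-exponential kernel with lengthscale ell, signal sf.
        d = A[:, None] - B[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

    # Standard GP posterior mean and covariance on the grid.
    K = k(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = k(X_grid, X_obs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = k(X_grid, X_grid) - v.T @ v + 1e-8 * np.eye(len(X_grid))

    # Sample posterior functions; count where each one is minimized.
    samples = rng.multivariate_normal(mu, cov, size=n_samples)
    counts = np.bincount(samples.argmin(axis=1), minlength=len(X_grid))
    p = counts / n_samples

    # Minimizer entropy: the quantity an ABO strategy tries to reduce.
    H = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return p, H
```

For instance, observing f(x) = x² at a handful of points concentrates the minimizer distribution near x = 0 and drives the entropy well below that of a uniform distribution over the grid. The multi-modality mentioned above shows up naturally: with sparse or noisy observations, `p` can have mass in several separated regions, which is exactly what makes the posterior hard to handle analytically.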

  1. Il Memming Park, Marcel Nassar, Mijung Park. Active Bayesian Optimization: Minimizing Minimizer Entropy. arXiv:1202.2143v1 [stat.ME]
  2. Julien Villemonteix, Emmanuel Vazquez, Eric Walter. An Informational Approach to the Global Optimization of Expensive-to-Evaluate Functions. arXiv:cs/0611143v2 [cs.NA] (published in Journal of Global Optimization, 2008)
  3. Philipp Hennig, Christian J. Schuler. Entropy Search for Information-Efficient Global Optimization. arXiv:1112.1217, December 2011
One Comment
  1. memming (2012/07/16):

    Hennig & Schuler’s paper appeared in JMLR.
