Advice on Multi-Target Optimization (MultiFidelitySingleTaskGP + qMultiFidelityKnowledgeGradient) #2172

Answered by Balandat
AlexanderMouton asked this question in Q&A

Hmm, very interesting setting - I haven't seen this particular one before. The approach makes sense to me (although I haven't spent a ton of time thinking through all of the details).

The overall problem appears to be related to a contextual optimization problem (in your case the particular set of enhancements are the contexts), in which one can leverage the context observations to increase the model quality (i.e., borrow strength across contexts), but the goal is to find a policy or a set of policies that works well for all or most contexts. @AlexanderMouton would that be a fair characterization?
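
For concreteness, here is a minimal sketch of the model/acquisition combination named in the title, loosely following the pattern of the BoTorch multi-fidelity knowledge gradient tutorial. The toy objective, dimensions, fidelity values, and all hyperparameters below are placeholders, not the asker's actual setup. Note that current BoTorch names the model `SingleTaskMultiFidelityGP` and takes the fidelity column via `data_fidelities` (older releases used `data_fidelity`):

```python
import torch
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.models.transforms.outcome import Standardize
from botorch.fit import fit_gpytorch_mll  # fit_gpytorch_model in older releases
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition import PosteriorMean
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.acquisition.knowledge_gradient import qMultiFidelityKnowledgeGradient
from botorch.acquisition.utils import project_to_target_fidelity
from botorch.optim import optimize_acqf, optimize_acqf_mixed

torch.set_default_dtype(torch.double)

# Hypothetical toy problem: 2 design dims + 1 fidelity dim, all in [0, 1].
d = 3
fidelity_dim = 2
target_fidelities = {fidelity_dim: 1.0}
bounds = torch.stack([torch.zeros(d), torch.ones(d)])

# Synthetic training data as a stand-in for real observations.
train_X = torch.rand(16, d)
train_Y = (
    train_X[:, :2].sin().sum(dim=-1, keepdim=True)
    + 0.1 * train_X[:, [fidelity_dim]]
)

# Multi-fidelity GP; `data_fidelities` marks which column is the fidelity.
model = SingleTaskMultiFidelityGP(
    train_X,
    train_Y,
    data_fidelities=[fidelity_dim],
    outcome_transform=Standardize(m=1),
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Project candidates to the target fidelity when evaluating the KG value.
def project(X):
    return project_to_target_fidelity(X=X, target_fidelities=target_fidelities)

# Best posterior-mean value at full fidelity, used as the KG baseline.
curr_val_acqf = FixedFeatureAcquisitionFunction(
    acq_function=PosteriorMean(model),
    d=d,
    columns=[fidelity_dim],
    values=[1.0],
)
_, current_value = optimize_acqf(
    acq_function=curr_val_acqf,
    bounds=bounds[:, :fidelity_dim],  # design dims only; fidelity is fixed
    q=1,
    num_restarts=5,
    raw_samples=128,
)

mfkg = qMultiFidelityKnowledgeGradient(
    model=model,
    num_fantasies=32,
    current_value=current_value,
    project=project,
)

# Optimize over designs with the fidelity restricted to a discrete set.
candidates, _ = optimize_acqf_mixed(
    acq_function=mfkg,
    bounds=bounds,
    q=1,
    num_restarts=5,
    raw_samples=128,
    fixed_features_list=[{fidelity_dim: f} for f in (0.5, 0.75, 1.0)],
)
print(candidates)
```

The `project` callable is what lets the knowledge gradient score candidates by their expected effect at the target fidelity while still proposing cheap low-fidelity evaluations; a cost model (e.g., `AffineFidelityCostModel` with `InverseCostWeightedUtility`) can additionally be passed via `cost_aware_utility` to trade off information gain against evaluation cost.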

Answer selected by AlexanderMouton