I put in the "prefix" argument so that you can recurse through Gibbs components and keep their hyperparameters distinguishable, e.g. something like
```julia
function log_hparams(lg::TBLogger, sampler::Gibbs, prefix="")
    for (i, alg) in enumerate(sampler.algs)
        # Extend the prefix so each component's hyperparameters are distinguishable.
        log_hparams(lg, alg, "$(prefix)component_$(i)/")
    end
    return nothing
end
```
But with all that said, it is not immediately clear to me where you can hook this into the current Turing callback structure. One possibility is to hack it by having a toggle on whether the hparams have been logged? e.g. make `TensorBoardCallback` mutable with a `has_logged_hparams` field, then in the callback https://github.com/torfjelde/TuringCallbacks.jl/blob/master/src/callbacks/tensorboard.jl#L100 you check the toggle, something like the sketch below...
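A minimal sketch of that toggle idea, assuming a stripped-down callback (the real `TensorBoardCallback` in TuringCallbacks.jl has more fields and a different constructor, and the exact callback signature depends on the AbstractMCMC version); `log_hparams` is the hypothetical function from above:

```julia
using TensorBoardLogger

# Stripped-down stand-in for TuringCallbacks.jl's TensorBoardCallback;
# made mutable purely so the toggle can be flipped after the first call.
mutable struct TensorBoardCallback
    logger::TBLogger
    has_logged_hparams::Bool
end

TensorBoardCallback(logger::TBLogger) = TensorBoardCallback(logger, false)

# Signature mirrors the AbstractMCMC callback convention; the exact
# argument list may differ across versions.
function (cb::TensorBoardCallback)(rng, model, sampler, transition, state, iteration; kwargs...)
    if !cb.has_logged_hparams
        # First call: log the sampler's hyperparameters exactly once.
        log_hparams(cb.logger, sampler)
        cb.has_logged_hparams = true
    end
    # ... usual per-iteration logging of `transition` goes here ...
    return nothing
end
```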
Custom scalars are just a way to define ad hoc how scalars are visualised.
Basically you log scalars (`log_scalar`) as you want, and then with `log_custom_scalar` you specify the layout of your plots and (especially useful) the x and y axes.
`log_custom_scalar` must be called only once, because it defines the layout; it doesn't actually log anything (maybe we should rename it...).
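A rough sketch of that flow, assuming the layout structure below (the exact format of the argument to `log_custom_scalar`, and the `tb_multiline` chart type, are assumptions; check the explicit-interface docs linked later in this thread for the precise form):

```julia
using TensorBoardLogger

lg = TBLogger("runs/custom_scalar_demo")

# Log the raw scalars as usual.
for step in 1:100
    log_value(lg, "hmc/step_size", 0.1 / sqrt(step); step=step)
    log_value(lg, "hmc/n_leapfrog", 5.0; step=step)
end

# Called once: defines how the tags above are grouped into charts.
# NOTE: the layout structure here is an assumption based on the docs.
layout = Dict(
    "Sampler" => Dict(
        "HMC hyperparameters" => (tb_multiline, ["hmc/step_size", "hmc/n_leapfrog"]),
    ),
)
log_custom_scalar(lg, layout)
```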
The code was contributed by @fissoreg, who knows much more about them than I do.
I 100% agree this would be super-nice, so thank you for the suggestion! But I think it would be better to put the effort towards implementing the plugin on TensorBoardLogger.jl's side, since there's no good way of doing it with the current functionality (AFAIK).
@torfjelde I agree completely. Better to wait and do it right. It sounds like @PhilipVinc doesn't have capacity right now to add in that plugin support. It is out of my skillset at this point, but if you have any bored grad students maybe they would like to play with it?
See JuliaLogging/TensorBoardLogger.jl#77
In one form or another, it would be great to have hyperparameters visible in the TensorBoard output. See https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter.add_hparams
Maybe add support so the samplers themselves could do this? For example, have a dispatch to log hyperparameters for each sampler type, something like the sketch below.
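A minimal sketch, assuming a hypothetical `MySampler` with `stepsize` and `n_leapfrog` fields (stand-ins for a real Turing sampler struct, whose field names differ), and logging the hyperparameters as plain scalars via `log_value` pending real hparam-plugin support:

```julia
using TensorBoardLogger

# Hypothetical sampler type; stands in for e.g. Turing's HMC/NUTS.
struct MySampler
    stepsize::Float64
    n_leapfrog::Int
end

# One dispatch per sampler type: log its hyperparameters under
# (optionally prefixed) tags, once, at step 0.
function log_hparams(lg::TBLogger, sampler::MySampler, prefix="")
    log_value(lg, prefix * "stepsize", sampler.stepsize; step=0)
    log_value(lg, prefix * "n_leapfrog", sampler.n_leapfrog; step=0)
    return nothing
end
```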
And until hparams are supported, maybe it could be done with custom scalars? https://philipvinc.github.io/TensorBoardLogger.jl/dev/explicit_interface/#Custom-Scalars-plugin Though I am not sure how that works.