
Commit

Generated gh-pages for commit 58cbb5a
Merge: 252fc58 a024fbe
Author: John Jakeman <[email protected]>

    Merge remote-tracking branch 'origin/devel'
jdjakem committed Feb 22, 2024
1 parent 564a91f commit 5ed8f10
Showing 361 changed files with 102,772 additions and 70,999 deletions.
@@ -0,0 +1,68 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\rvset{{\\mathcal{Z}}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\text{d}#1}\\newcommand{\\mat}[1]{{\\boldsymbol{\\mathrm{#1}}}}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Multioutput Approximate Control Variates\n\nThis tutorial demonstrates how computing statistics for multiple outputs simultaneoulsy can improve the accuracy of ACV estimates of individual statistics when compared to ACV applied to each output separately.\n\nThe optimal control variate weights are obtained by minimizing the estimator covariance [RM1985]_.\n"
]
},
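{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the weight optimization concrete, recall the classical single-control-variate identity that multioutput ACV generalizes (a standard textbook result, stated here for orientation): given an unbiased estimator $Q_0$ of $\\mean{f_0}$ and a second estimator $Q_1$ with known mean, the weighted estimator\n\n\\begin{align}Q^{\\eta} = Q_0 + \\eta\\left(Q_1 - \\mean{Q_1}\\right)\\end{align}\n\nhas variance $\\var{Q_0}+\\eta^2\\var{Q_1}+2\\eta\\covar{Q_0}{Q_1}$, which is minimized by $\\eta^\\star=-\\covar{Q_0}{Q_1}/\\var{Q_1}$, yielding $\\var{Q^{\\eta^\\star}}=\\left(1-\\corr{Q_0}{Q_1}^2\\right)\\var{Q_0}$. Estimating the statistics of all outputs jointly enlarges the covariance information available to this minimization, which is why the multioutput estimator below can outperform single-output ACV for an individual statistic.\n"
]
},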
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom pyapprox import multifidelity as mf\nfrom pyapprox.benchmarks import setup_benchmark\nfrom pyapprox.util.visualization import mathrm_labels, mathrm_label\n\nnp.random.seed(1)\nbenchmark = setup_benchmark(\"multioutput_model_ensemble\")\ncosts = np.array([1, 0.01, 0.001])\nnmodels = 3\n\ncov = benchmark.covariance\n\nlabels = ([r\"$(f_{0})_{%d}$\" % (ii+1) for ii in range(benchmark.nqoi)] +\n [r\"$(f_{2})_{%d}$\" % (ii+1) for ii in range(benchmark.nqoi)] +\n [r\"$(f_{2})_{%d}$\" % (ii+1) for ii in range(benchmark.nqoi)])\nax = plt.subplots(1, 1, figsize=(8, 6))[1]\n_ = mf.plot_correlation_matrix(\n mf.get_correlation_from_covariance(cov), ax=ax, model_names=labels,\n label_fontsize=20)\n\ntarget_cost = 10\nstat = mf.multioutput_stats[\"mean\"](benchmark.nqoi)\nstat.set_pilot_quantities(cov)\nest = mf.get_estimator(\"gmf\", stat, costs)\nest.allocate_samples(target_cost)\n\n# get covariance of just first qoi\nqoi_idx = [0]\ncov_0 = stat.get_pilot_quantities_subset(\n nmodels, benchmark.nqoi, [0, 1, 2], qoi_idx)[0]\nstat_0 = mf.multioutput_stats[\"mean\"](benchmark.nqoi)\nstat_0.set_pilot_quantities(cov_0)\nest_0 = mf.get_estimator(\"gmf\", stat_0, costs)\nest_0.allocate_samples(target_cost)\n\nest_labels = mathrm_labels([\"MOACV\", \"SOACV\"])\n\n# only works if qoi_idx = [0]\nfrom pyapprox.multifidelity.factory import ComparisonCriteria\nclass CustomComparisionCriteria(ComparisonCriteria):\n def __call__(self, est_covariance, est):\n return est_covariance[0, 0]\n\n\nax = plt.subplots(1, 1, figsize=(8, 6))[1]\n_ = mf.plot_estimator_variance_reductions(\n [est, est_0], est_labels, ax, criteria=CustomComparisionCriteria())"
]
},
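{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative aside (an editor's sketch, independent of the pyapprox API): the variance-reduction identity above can be checked with plain NumPy on a hypothetical toy model pair, $f_0(\\rv)=\\rv^2$ and $f_1(\\rv)=\\rv$ with $\\rv\\sim\\mathcal{U}(0,1)$, for which $\\mean{f_1}=1/2$ is known exactly.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# toy control-variate demo: nreps repetitions of an nsamples-point estimator\nnreps, nsamples = 100, 10000\nz = np.random.uniform(0, 1, (nreps, nsamples))\nf0, f1 = z**2, z\n# plug-in optimal weight eta = -cov(f0, f1)/var(f1) from the first repetition\nC = np.cov(f0[0], f1[0])\neta = -C[0, 1]/C[1, 1]\nmc_est = f0.mean(axis=1)\ncv_est = mc_est + eta*(f1.mean(axis=1)-0.5)\n# the empirical ratio should be close to 1-corr(f0, f1)**2 = 1/16\nprint(\"variance ratio CV/MC:\", cv_est.var()/mc_est.var())"
]
},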
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Video\nClick on the image below to view a video tutorial on multi-output approximate control variate Monte Carlo quadrature\n\n<img src=\"file://../../figures/multi-output-acv-thumbnail.png\" target=\"https://youtu.be/astvKKFh2yA?si=8vgmKRbjdhJYeUfq\">\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n.. [RM1985] [Reuven Y. Rubinstein and Ruth Marcus. Efficiency of multivariate control variates in monte carlo simulation. Operations Research, 33(3):661\u2013677, 1985.](https://doi.org/10.48550/arXiv.2310.00125)\n\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
Binary file not shown.
@@ -8,7 +8,7 @@
},
"outputs": [],
"source": [
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\mathrm{d}#1}$$"
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\rvset{{\\mathcal{Z}}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\text{d}#1}\\newcommand{\\mat}[1]{{\\boldsymbol{\\mathrm{#1}}}}$$"
]
},
{
@@ -62,7 +62,7 @@
},
"outputs": [],
"source": [
"from pyapprox.util.configure_plots import plt\nplt.plot(mc_samples[0, :], mc_samples[1, :], 'ko', label=\"MC\")\nplt.plot(sobol_samples[0, :], sobol_samples[1, :], 'rs', label=\"Sobol\")\nplt.legend()\nplt.show()\n\n#\n#Halton Sequences\n#================\n#Pyapprox also supports Halton Sequences\nhalton_samples = expdesign.halton_sequence(\n benchmark.variable.num_vars(), nsamples, variable=benchmark.variable)\nvalues = benchmark.fun(halton_samples)\nprint_statistics(halton_samples, values)"
"import matplotlib.pyplot as plt\nplt.plot(mc_samples[0, :], mc_samples[1, :], 'ko', label=\"MC\")\nplt.plot(sobol_samples[0, :], sobol_samples[1, :], 'rs', label=\"Sobol\")\nplt.legend()\nplt.show()\n\n#\n#Halton Sequences\n#================\n#Pyapprox also supports Halton Sequences\nhalton_samples = expdesign.halton_sequence(\n benchmark.variable.num_vars(), nsamples, variable=benchmark.variable)\nvalues = benchmark.fun(halton_samples)\nprint_statistics(halton_samples, values)"
]
},
{
@@ -89,7 +89,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.10.13"
}
},
"nbformat": 4,
@@ -45,7 +45,7 @@
#
#Low-discrepancy sequences are typically more evenly spaced over the parameter
#space. This can be seen by comparing the Monte Carlo and Sobol sequence samples
- from pyapprox.util.configure_plots import plt
+ import matplotlib.pyplot as plt
plt.plot(mc_samples[0, :], mc_samples[1, :], 'ko', label="MC")
plt.plot(sobol_samples[0, :], sobol_samples[1, :], 'rs', label="Sobol")
plt.legend()
@@ -8,7 +8,7 @@
},
"outputs": [],
"source": [
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\mathrm{d}#1}$$"
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\rvset{{\\mathcal{Z}}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\text{d}#1}\\newcommand{\\mat}[1]{{\\boldsymbol{\\mathrm{#1}}}}$$"
]
},
{
@@ -26,7 +26,7 @@
},
"outputs": [],
"source": [
"import networkx as nx\nimport numpy as np\nfrom scipy import stats\nimport scipy\n\nfrom pyapprox.util.configure_plots import plt\nfrom pyapprox.bayes.laplace import (\n laplace_posterior_approximation_for_linear_models\n)\nfrom pyapprox.bayes.gaussian_network import (\n get_total_degree_polynomials, plot_1d_lvn_approx, GaussianNetwork,\n cond_prob_variable_elimination,\n convert_gaussian_from_canonical_form,\n plot_peer_network_with_data\n)\n\nnp.random.seed(2)"
"import networkx as nx\nimport numpy as np\nimport scipy\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n\nfrom pyapprox.bayes.laplace import (\n laplace_posterior_approximation_for_linear_models)\nfrom pyapprox.bayes.gaussian_network import (\n get_total_degree_polynomials, plot_1d_lvn_approx, GaussianNetwork,\n cond_prob_variable_elimination,\n convert_gaussian_from_canonical_form,\n plot_peer_network_with_data)\n\nnp.random.seed(2)"
]
},
{
@@ -332,7 +332,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### References\n.. [GJGEIJUQ2020] [A. Gorodetsky et al. MFNets: Multi-fidelity data-driven networks for bayesian learning and prediction, International Journal for Uncertainty Quantification, 2020.](https://www.alexgorodetsky.com/static/papers/gorodetsky_jakeman_geraci_eldred_mfnets_2020.pdf)\n\n.. [GJGJCP2020] [A. Gorodetsky et al. MFNets: Learning network representations for multifidelity surrogate modeling, 2020.](https://res.arxiv.org/abs/2008.02672)\n\n### Appendix\nThere is a strong connection between the mean of the Bayes posterior distribution of linear-Gaussian models with least squares regression. Specifically the mean of the posterior is equivalent to linear least-squares regression with a regulrization that penalizes deviations from the prior estimate of the parameters. Let the least squares objective function be\n\n\\begin{align}f(\\theta)=\\frac{1}{2}(y-A\\theta)^\\top\\Sigma_\\epsilon^{-1}(y-A\\theta)+\\frac{1}{2}(\\mu_\\theta-\\theta)^\\top\\Sigma_\\theta^{-1}(\\mu_\\theta-\\theta),\\end{align}\n\nwhere the first term on the right hand side is the usual least squares objective and the second is the regularization term. This regularized objective is minimized by setting its gradient to zero, i.e.\n\n\\begin{align}\\nabla_\\theta f(\\theta)=A^\\top\\Sigma_\\epsilon^{-1}(y-A\\theta)+\\Sigma_\\theta^{-1}(\\mu_\\theta-\\theta)=0,\\end{align}\n\nthus\n\n\\begin{align}A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\theta=A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\end{align}\n\nand so\n\n\\begin{align}\\theta=\\left(A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\right)^{-1}\\left(A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\right).\\end{align}\n\nNoting that $\\left(A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\right)^{-1}$ is the posterior covariance we obtain the usual expression for the posterior mean\n\n\\begin{align}\\mu^\\mathrm{post}=\\Sigma^\\mathrm{post}\\left(A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\right)\\end{align}\n\n"
"### References\n.. [GJGEIJUQ2020] [A. Gorodetsky et al. MFNets: Multi-fidelity data-driven networks for bayesian learning and prediction, International Journal for Uncertainty Quantification, 2020.](https://www.alexgorodetsky.com/static/papers/gorodetsky_jakeman_geraci_eldred_mfnets_2020.pdf)\n\n..\n .. [GJGJCP2020] [A. Gorodetsky et al. MFNets: Learning network representations for multifidelity surrogate modeling, 2020.](https://res.arxiv.org/abs/2008.02672)\n\n### Appendix\nThere is a strong connection between the mean of the Bayes posterior distribution of linear-Gaussian models with least squares regression. Specifically the mean of the posterior is equivalent to linear least-squares regression with a regulrization that penalizes deviations from the prior estimate of the parameters. Let the least squares objective function be\n\n\\begin{align}f(\\theta)=\\frac{1}{2}(y-A\\theta)^\\top\\Sigma_\\epsilon^{-1}(y-A\\theta)+\\frac{1}{2}(\\mu_\\theta-\\theta)^\\top\\Sigma_\\theta^{-1}(\\mu_\\theta-\\theta),\\end{align}\n\nwhere the first term on the right hand side is the usual least squares objective and the second is the regularization term. This regularized objective is minimized by setting its gradient to zero, i.e.\n\n\\begin{align}\\nabla_\\theta f(\\theta)=A^\\top\\Sigma_\\epsilon^{-1}(y-A\\theta)+\\Sigma_\\theta^{-1}(\\mu_\\theta-\\theta)=0,\\end{align}\n\nthus\n\n\\begin{align}A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\theta=A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\end{align}\n\nand so\n\n\\begin{align}\\theta=\\left(A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\right)^{-1}\\left(A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\right).\\end{align}\n\nNoting that $\\left(A^\\top\\Sigma_\\epsilon^{-1}A\\theta+\\Sigma_\\theta^{-1}\\right)^{-1}$ is the posterior covariance we obtain the usual expression for the posterior mean\n\n\\begin{align}\\mu^\\mathrm{post}=\\Sigma^\\mathrm{post}\\left(A^\\top\\Sigma_\\epsilon^{-1}y+\\Sigma_\\theta^{-1}\\mu_\\theta\\right)\\end{align}\n\n"
]
}
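,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An editor's aside, not part of this commit: the appendix identity (the regularized least-squares minimizer equals the posterior mean) is easy to verify numerically. The sketch below assumes nothing beyond NumPy and SciPy.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom scipy.optimize import minimize\n\nnp.random.seed(0)\nnobs, npar = 20, 3\nA = np.random.normal(0, 1, (nobs, npar))\ny = A @ np.random.normal(0, 1, npar) + np.random.normal(0, 0.1, nobs)\nSigma_eps_inv = np.eye(nobs)/0.01   # observation noise precision\nSigma_theta_inv = np.eye(npar)      # prior precision\nmu_theta = np.zeros(npar)           # prior mean\n\n# closed-form posterior mean from the appendix\nSigma_post = np.linalg.inv(A.T @ Sigma_eps_inv @ A + Sigma_theta_inv)\nmu_post = Sigma_post @ (A.T @ Sigma_eps_inv @ y + Sigma_theta_inv @ mu_theta)\n\n# minimize the regularized least-squares objective directly\ndef f(theta):\n    r, d = y - A @ theta, mu_theta - theta\n    return 0.5*r @ Sigma_eps_inv @ r + 0.5*d @ Sigma_theta_inv @ d\n\ntheta_opt = minimize(f, np.zeros(npar)).x\nprint(\"max abs difference:\", np.abs(theta_opt - mu_post).max())"
]
}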
],
@@ -352,7 +352,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.10.13"
}
},
"nbformat": 4,
@@ -8,7 +8,7 @@
},
"outputs": [],
"source": [
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\mathrm{d}#1}$$"
"Add latex macros$$\\newcommand{\\V}[1]{{\\boldsymbol{#1}}}\\newcommand{mean}[1]{{\\mathbb{E}\\left[#1\\right]}}\\newcommand{var}[1]{{\\mathbb{V}\\left[#1\\right]}}\\newcommand{covar}[2]{\\mathbb{C}\\text{ov}\\left[#1,#2\\right]}\\newcommand{corr}[2]{\\mathbb{C}\\text{or}\\left[#1,#2\\right]}\\newcommand{argmin}{\\mathrm{argmin}}\\def\\rv{z}\\def\\reals{\\mathbb{R}}\\def\\rvset{{\\mathcal{Z}}}\\def\\pdf{\\rho}\\def\\rvdom{\\Gamma}\\def\\coloneqq{\\colon=}\\newcommand{norm}{\\lVert #1 \\rVert}\\def\\argmax{\\operatorname{argmax}}\\def\\ai{\\alpha}\\def\\bi{\\beta}\\newcommand{\\dx}[1]{\\;\\text{d}#1}\\newcommand{\\mat}[1]{{\\boldsymbol{\\mathrm{#1}}}}$$"
]
},
{
@@ -222,7 +222,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
"version": "3.10.13"
}
},
"nbformat": 4,
