From 27a0490de3744f0c6bbdcf6793ad9c2c3e77a730 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sun, 13 Mar 2022 16:13:31 -0400
Subject: [PATCH 01/21] taking exp() for log_sigma

---
 stan_intro/stan_intro.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stan_intro/stan_intro.Rmd b/stan_intro/stan_intro.Rmd
index 0e22a05..3d7c9ff 100644
--- a/stan_intro/stan_intro.Rmd
+++ b/stan_intro/stan_intro.Rmd
@@ -2590,7 +2590,7 @@ parameters {
 model {
   mu ~ normal(0, 1);
   log_sigma ~ ???;
-  y ~ normal(mu, log_sigma);
+  y ~ normal(mu, exp(log_sigma));
 }
 ```

From 8393f55fc22d19b67a2393cf825dfc5632db6535 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sun, 13 Mar 2022 16:26:45 -0400
Subject: [PATCH 02/21] fixed typo: identify function -> identity function

---
 probability_theory/probability_theory.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index 63dc957..29cff13 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -904,7 +904,7 @@ corresponding interval in the full real line.
 
 ![](figures/embeddings/interval/interval.png)
 
-If our target space is itself the real line then the identify function serves
+If our target space is itself the real line then the identity function serves
 as an appropriate embedding.
 
@@ -1653,7 +1653,7 @@ norm_prob(B_union_min, B_union_max, mu, sigma)
 
 ### Computing Expectations
 
-The real line has a unique embedding into the real line -- the identify
+The real line has a unique embedding into the real line -- the identity
 function -- so means and variances are well-defined for the Gaussian family
 of probability density functions. In line with their names, the mean of any
 member is given by the location parameter,

From d88fbe8f2c3d3a95d5777350e201258e41727ccd Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Mon, 14 Mar 2022 19:03:01 -0400
Subject: [PATCH 03/21] fixed a small typo

---
 variate_covariate_modeling/variate_covariate_modeling.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/variate_covariate_modeling/variate_covariate_modeling.Rmd b/variate_covariate_modeling/variate_covariate_modeling.Rmd
index 747f56f..5f20b78 100644
--- a/variate_covariate_modeling/variate_covariate_modeling.Rmd
+++ b/variate_covariate_modeling/variate_covariate_modeling.Rmd
@@ -136,7 +136,7 @@ knitr::include_graphics("figures/covariation/covariation.png")
 
 If we can learn this covariation from complete observations then we might be
-able apply it to predicting missing variates.
+able to apply it to predicting missing variates.
 Mathematically the covariation between $y$ and $x$ is captured in the
 conditional observational model

From 299f1816f22da35233068c502d70e5f89b439932 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Mon, 14 Mar 2022 19:05:40 -0400
Subject: [PATCH 04/21] fixed a typo in the Markov transition subscript

---
 markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd b/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
index d78ea32..d982039 100644
--- a/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
+++ b/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
@@ -442,7 +442,7 @@ points(c(q0[1], q1[1], q2[1]), c(q0[2], q1[2], q2[2]), col=c_light, pch=16, cex=
 _Iterating_ Markov transitions,
 $$
 \begin{align*}
-\tilde{q}_{1} &\sim T(q_{2} \mid q_{0})
+\tilde{q}_{1} &\sim T(q_{1} \mid q_{0})
 \\
 \tilde{q}_{2} &\sim T(q_{2} \mid \tilde{q}_{1})
 \\

From c122854a04bcfa5b6d3fc08fd48ab9202aac20ec Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Mon, 14 Mar 2022 19:06:57 -0400
Subject: [PATCH 05/21] fixed board to broad

---
 probabilistic_computation/probabilistic_computation.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probabilistic_computation/probabilistic_computation.Rmd b/probabilistic_computation/probabilistic_computation.Rmd
index 50e1bb8..b7a9158 100644
--- a/probabilistic_computation/probabilistic_computation.Rmd
+++ b/probabilistic_computation/probabilistic_computation.Rmd
@@ -2037,7 +2037,7 @@ rapidly disperses. In the best case this will only inflate the estimation error
 but in the worst case it can render $w(q) \, f(q)$ no longer square integrable
 and invalidating the importance sampling estimator entirely!
 
-In low-dimensional problems typical sets are board. Constructing a good
+In low-dimensional problems typical sets are broad. Constructing a good
 auxiliary probability distribution whose typical set strongly overlaps the
 typical set of the target distribution isn't trivial but it is often feasible.

From 82285529edc1eee1a471cb0e9b14299efb2dbcda Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Mon, 14 Mar 2022 21:33:02 -0400
Subject: [PATCH 06/21] typo?

---
 probabilistic_computation/probabilistic_computation.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probabilistic_computation/probabilistic_computation.Rmd b/probabilistic_computation/probabilistic_computation.Rmd
index b7a9158..1ddb1a4 100644
--- a/probabilistic_computation/probabilistic_computation.Rmd
+++ b/probabilistic_computation/probabilistic_computation.Rmd
@@ -1732,7 +1732,7 @@ richness of the variational family and the structure of the divergence function.
 Quantifying estimator errors in a general application is typically infeasible,
 and we once again have to be weary of fragility. Moreover that fragility can be
 amplified when the variational family is specified by a family of
-probability density functions...in a given parameterization. While the
+probability density functions in a given parameterization. While the
 variational construction is invariant, its implementation might not be.
 
 Variational methods are relatively new to statistics and both the theory and

From aff53bec2cb7ab1130dfd0757e4c3e7afd12de38 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Wed, 16 Mar 2022 18:33:02 -0400
Subject: [PATCH 07/21] fixed typo

---
 principled_bayesian_workflow/principled_bayesian_workflow.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
index b549ff4..8d56807 100644
--- a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
+++ b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
@@ -1081,7 +1081,7 @@ Consequently any modification to the phenomena, environment, or experimental
 probe spanned by the model will in general invalidate a Bayesian calibration.
 For example even if the latent phenomena are the same, varying environments and
 experimental probes can lead to very different utilities. What is good enough
-to answer questions in one particular comtext may not be sufficient to answer
+to answer questions in one particular context may not be sufficient to answer
 questions in different contexts!
 
 Because at least some aspect of the phenomena, environment, and probe are unique

From b17ec3ff52129344300ba0d7617ba3f5e5518ece Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Thu, 17 Mar 2022 21:55:07 -0400
Subject: [PATCH 08/21] fixed typo

---
 variate_covariate_modeling/variate_covariate_modeling.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/variate_covariate_modeling/variate_covariate_modeling.Rmd b/variate_covariate_modeling/variate_covariate_modeling.Rmd
index 5f20b78..909ec7b 100644
--- a/variate_covariate_modeling/variate_covariate_modeling.Rmd
+++ b/variate_covariate_modeling/variate_covariate_modeling.Rmd
@@ -3239,7 +3239,7 @@ plot_pred_res_by_index(x2, reverse_conditional_samples$x2_pred,
 We can see the source of this discrepancy already in the behavior of the
 location function and the predictive distribution that concentrates around the
 location function. The configuration of the conditional variate model for the
-complete observation does not generalize to the incomplete observation becuase
+complete observation does not generalize to the incomplete observation because
 of the heterogeneity induced by the confounding parameters.
 ```{r}

From 89db0a97ce32a424f841d44587dd9606bde0cc45 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Fri, 18 Mar 2022 22:07:39 -0400
Subject: [PATCH 09/21] fixed typos

---
 principled_bayesian_workflow/principled_bayesian_workflow.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
index 8d56807..6aea238 100644
--- a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
+++ b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
@@ -1825,7 +1825,7 @@ can construct a collection of powerful visualizations by projecting to subspaces
 of the observational space that isolating particular consequences of our
 retrodictions that highlight potential limitations. Fortunately we've already
 considered how to isolate the features of the observation space relevant to our
-scientific questions when we were motivating summary statistics we for prior
+scientific questions when we were motivating summary statistics for prior
 predictive checks! In other words we can reuse those summary statistics to
 construct _posterior retrodictive checks_ that visually compare the pushforwards
 of the posterior predictive distribution, $\pi_{t(Y) \mid Y}(t \mid \tilde{y})$,
@@ -1953,7 +1953,7 @@ knitr::include_graphics("figures/posterior_checks/posterior_retrodictive_heldout
 
 
-This might, for example, but due to an overly flexible model overfitting to
+This might, for example, be due to an overly flexible model overfitting to
 $\tilde{y}_{1}$. At the same time it could also be a consequence of
 $\tilde{y}_{1}$ manifesting misfit less clearly than $\tilde{y}_{2}$, or even
 $\tilde{y}_{2}$ being an unlikely tail event. Identifying which requires

From ea27284a2836f4e2ed08539f0d3b26ba8c54e2fb Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Fri, 18 Mar 2022 22:09:46 -0400
Subject: [PATCH 10/21] fixed a typo

---
 principled_bayesian_workflow/principled_bayesian_workflow.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
index 6aea238..e770eb2 100644
--- a/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
+++ b/principled_bayesian_workflow/principled_bayesian_workflow.Rmd
@@ -2369,7 +2369,7 @@ anticipated.
 Importantly the outcomes of this step should be only informal, conceptual
 narratives of the measurement process. All we're trying to do is sit down with
 the domain experts, whether ourselves or our colleagues, and ask
-_"How is are data being generated?"_.
+_"How is our data being generated?"_.
 
 **Requirements:** Domain Expertise

From e5f36eedff1a5b07f4d6db153d2585f8712261b5 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Fri, 1 Apr 2022 10:08:13 -0400
Subject: [PATCH 11/21] added sigma in the normalization constant for 1D
 Gaussian

---
 probability_theory/probability_theory.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index 29cff13..3dac753 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -1523,7 +1523,7 @@ To demonstrate a probability density function consider the ubiquitous
 _Gaussian_ probability density functions which allocate probabilities across
 real line, $X = \mathbb{R}$,
 $$
-\pi(x \mid \mu, \sigma) = \frac{1}{\sqrt{2 \pi}}
+\pi(x \mid \mu, \sigma) = \frac{1}{\sqrt{2 \pi} \sigma}
 \exp \left( - \frac{1}{2} \left(\frac{x - \mu}{\sigma} \right)^{2} \right).
 $$
 Each Gaussian probability density function is specified by a location parameter,

From 0bade5b91a553ceabd9da0137063e88c46782f3b Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Fri, 1 Apr 2022 10:15:50 -0400
Subject: [PATCH 12/21] fixed typo

---
 probability_theory/probability_theory.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index 3dac753..2b5b786 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -2105,7 +2105,7 @@ This is consistent with the exact computation,
 ```{r}
 poisson_prob(A1, l)
 ```
-And we an readily visualize how the Monte Carlo estimator converges to the exact
+And we can readily visualize how the Monte Carlo estimator converges to the exact
 value as the size of the sample increases. The bands here in red cover the Monte
 Carlo estimator plus/minus 1, 2, and 3 standard errors to demonstrate the
 variation expected from the Monte Carlo Central Limit Theorem.
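A quick numerical check of the normalization constant restored in PATCH 11/21: with the $\sigma$ in the denominator the Gaussian density integrates to one, while the pre-patch density, missing the $\sigma$, integrates to $\sigma$ instead. A minimal sketch in Python (the values of `mu` and `sigma` are arbitrary stand-ins; nothing here comes from the patched case studies):

```python
import math

def gaussian_density(x, mu, sigma):
    # Corrected density from PATCH 11: note the sigma in the normalization
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def trapezoid(f, a, b, n):
    # Composite trapezoidal quadrature over [a, b] with n subintervals
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

mu, sigma = 1.0, 2.0  # arbitrary stand-ins

# With sigma restored the density integrates to (numerically) one...
total_correct = trapezoid(lambda x: gaussian_density(x, mu, sigma),
                          mu - 10 * sigma, mu + 10 * sigma, 100000)

# ...while the pre-patch density, exp(...) / sqrt(2 pi), integrates to sigma
total_typo = trapezoid(lambda x: sigma * gaussian_density(x, mu, sigma),
                       mu - 10 * sigma, mu + 10 * sigma, 100000)
```

Truncating at ten standard deviations loses probability of order $10^{-23}$, far below the quadrature error, so the check is insensitive to the integration limits.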
From d659657f39edf3f0b3edbe071bc7b38015e10963 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sat, 2 Apr 2022 17:44:07 -0400
Subject: [PATCH 13/21] fixed section reference

---
 probability_theory/probability_theory.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index 29cff13..d0a5b27 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -51,7 +51,7 @@ confuse the reader, but rather a consequence of the fact that we cannot
 explicitly construct abstract probability distributions in any meaningful
 sense. Instead we must utilize problem-specific _representations_ of abstract
 probability distributions which means that concrete examples will have to wait
-until we introduce these representations in Section 3.
+until we introduce these representations in Section 4.
 
 # Setting A Foundation {#sec:foundation}

From 43d3b1fc6c690beebb5c937cfa4e8d15cbf5df6b Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sat, 2 Apr 2022 17:46:14 -0400
Subject: [PATCH 14/21] changed from 0 to empty set

---
 probability_theory/probability_theory.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index d0a5b27..be30766 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -533,7 +533,7 @@ $$
 > \mathbb{P}_{\pi} [ \cup_{n = 1}^{N} \mathfrak{A}_{n} ],
 $$
-even when $\mathfrak{A}_{n} \cap \mathfrak{A}_{m} = 0, n \ne m$. We can also
+even when $\mathfrak{A}_{n} \cap \mathfrak{A}_{m} = \emptyset, n \ne m$. We can also
 combine a finite number of different non-constructible subsets to achieve
 _super-additivity_,
 $$
@@ -569,7 +569,7 @@ $$
 \sum_{n = 1}^{\infty} \mathbb{P}_{\pi} [ A_{n} ],
 $$
 $$
-A_{n} \cap A_{m} = 0, \, n \ne m.
+A_{n} \cap A_{m} = \emptyset, \, n \ne m.
 $$
 
 The more familiar rules of probability theory can all be derived from these

From 7f953326661e256d95214487ba4e6540f9c99fda Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sun, 3 Apr 2022 15:25:20 -0400
Subject: [PATCH 15/21] changed to cumulative density functions for consistency

---
 probability_theory/probability_theory.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index be30766..f0bfc1c 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -1590,7 +1590,7 @@ plot_norm_probs(mu, sigma, -8, B1_min)
 plot_norm_probs(mu, sigma, B1_max, 8)
 ```
 
-We can compute it using the cumulative probability function,
+We can compute it using the cumulative distribution function,
 ```{r}
 (1 - pnorm(B1_max, mu, sigma)) + pnorm(B1_min, mu, sigma)
 ```

From bbefd480c9cda81a9680dfd031eff3ad1ebd82ba Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sun, 3 Apr 2022 21:48:55 -0400
Subject: [PATCH 16/21] fixed the index of \varpi

---
 markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd b/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
index d78ea32..16bb7fa 100644
--- a/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
+++ b/markov_chain_monte_carlo/markov_chain_monte_carlo.Rmd
@@ -2859,7 +2859,7 @@ $$
 \pi(q) =
 \text{normal}(\varpi_{1}(q) ; 1, 1)
 \cdot
-\text{normal}(\varpi_{1}(q) ; -1, 1).
+\text{normal}(\varpi_{2}(q) ; -1, 1).
 $$
 One advantage of this example is that the component means and variances are
 given immediately by the locations and scales, which allows us to compare Markov

From b0e573d292d383f749a1a2a14fe77a712f3c7fd3 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Wed, 13 Apr 2022 21:44:55 -0400
Subject: [PATCH 17/21] removed the name 'horseshoe' in the normal population
 models

---
 modeling_sparsity/stan_programs/normal_narrow.stan | 4 ++--
 modeling_sparsity/stan_programs/normal_wide.stan   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/modeling_sparsity/stan_programs/normal_narrow.stan b/modeling_sparsity/stan_programs/normal_narrow.stan
index bd27f90..dc0226a 100644
--- a/modeling_sparsity/stan_programs/normal_narrow.stan
+++ b/modeling_sparsity/stan_programs/normal_narrow.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Location parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Prior model
   theta ~ normal(0, 0.1);
 
   // Observational model
diff --git a/modeling_sparsity/stan_programs/normal_wide.stan b/modeling_sparsity/stan_programs/normal_wide.stan
index f7f2c2f..a7fae0a 100644
--- a/modeling_sparsity/stan_programs/normal_wide.stan
+++ b/modeling_sparsity/stan_programs/normal_wide.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Location parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Prior model
   theta ~ normal(0, 10);
 
   // Observational model

From b9112c7702559b2c3829105301d62085934c791d Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Wed, 13 Apr 2022 23:05:20 -0400
Subject: [PATCH 18/21] replaced 'horseshoe' with 'Laplace' in the comments

---
 modeling_sparsity/stan_programs/hier_laplace_cp.stan | 4 ++--
 modeling_sparsity/stan_programs/laplace_narrow.stan  | 4 ++--
 modeling_sparsity/stan_programs/laplace_wide.stan    | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/modeling_sparsity/stan_programs/hier_laplace_cp.stan b/modeling_sparsity/stan_programs/hier_laplace_cp.stan
index b2fe0b5..19172e8 100644
--- a/modeling_sparsity/stan_programs/hier_laplace_cp.stan
+++ b/modeling_sparsity/stan_programs/hier_laplace_cp.stan
@@ -7,13 +7,13 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Laplace parameters
   vector[K] theta;
   real<lower=0> tau;
 }
 
 model {
-  // Horseshoe prior model
+  // Laplace prior model
   theta ~ double_exponential(0, tau);
   tau ~ normal(0, 10);
 
diff --git a/modeling_sparsity/stan_programs/laplace_narrow.stan b/modeling_sparsity/stan_programs/laplace_narrow.stan
index d6ab939..686034e 100644
--- a/modeling_sparsity/stan_programs/laplace_narrow.stan
+++ b/modeling_sparsity/stan_programs/laplace_narrow.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Laplace parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Laplace prior model
   theta ~ double_exponential(0, 0.1);
 
   // Observational model
diff --git a/modeling_sparsity/stan_programs/laplace_wide.stan b/modeling_sparsity/stan_programs/laplace_wide.stan
index 07a6ecf..5967c8a 100644
--- a/modeling_sparsity/stan_programs/laplace_wide.stan
+++ b/modeling_sparsity/stan_programs/laplace_wide.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Laplace parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Laplace prior model
   theta ~ double_exponential(0, 10);
 
   // Observational model

From 6dc52ca801938afb00bb3aae374f7db9d97fa99b Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Wed, 13 Apr 2022 23:30:06 -0400
Subject: [PATCH 19/21] fixed typos on the inferred scale

---
 modeling_sparsity/modeling_sparsity.Rmd | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modeling_sparsity/modeling_sparsity.Rmd b/modeling_sparsity/modeling_sparsity.Rmd
index 431dd63..13ac2cb 100644
--- a/modeling_sparsity/modeling_sparsity.Rmd
+++ b/modeling_sparsity/modeling_sparsity.Rmd
@@ -391,7 +391,7 @@ hist(samples$tau, breaks=seq(0, 50, 0.5), main="",
      col=c_dark, border=c_dark_highlight, add=T)
 ```
 
-This balance, however, is still too much large for the parameters with small
+This balance, however, is still too large for the parameters with small
 true values and a bit too small for the parameters with large true values.
 Because the balance favors the larger scale the over-regularization isn't too
 bad.
 
@@ -1310,7 +1310,7 @@ for (k in 1:9) {
 }
 ```
 
-Unfortunately the inferred scale is too large small enough to narrow the
+Unfortunately the inferred scale is too large to narrow the
 marginal posterior distributions of the small parameters below $\sigma = 0.5$.
 
 ```{r}

From 844bfc403575646c9a4832e986e3c6c4b1e344d0 Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Wed, 13 Apr 2022 23:43:27 -0400
Subject: [PATCH 20/21] replaced 'horseshoe' to 'Cauchy' in the comments

---
 modeling_sparsity/stan_programs/cauchy_narrow.stan  | 4 ++--
 modeling_sparsity/stan_programs/cauchy_wide.stan    | 4 ++--
 modeling_sparsity/stan_programs/hier_cauchy_cp.stan | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/modeling_sparsity/stan_programs/cauchy_narrow.stan b/modeling_sparsity/stan_programs/cauchy_narrow.stan
index ad9d5da..817985e 100644
--- a/modeling_sparsity/stan_programs/cauchy_narrow.stan
+++ b/modeling_sparsity/stan_programs/cauchy_narrow.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Cauchy parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Cauchy prior model
   theta ~ cauchy(0, 0.1);
 
   // Observational model
diff --git a/modeling_sparsity/stan_programs/cauchy_wide.stan b/modeling_sparsity/stan_programs/cauchy_wide.stan
index d297ebe..027eb35 100644
--- a/modeling_sparsity/stan_programs/cauchy_wide.stan
+++ b/modeling_sparsity/stan_programs/cauchy_wide.stan
@@ -7,12 +7,12 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Cauchy parameters
   vector[K] theta;
 }
 
 model {
-  // Horseshoe prior model
+  // Cauchy prior model
   theta ~ cauchy(0, 10);
 
   // Observational model
diff --git a/modeling_sparsity/stan_programs/hier_cauchy_cp.stan b/modeling_sparsity/stan_programs/hier_cauchy_cp.stan
index 732bc2f..0540f5e 100644
--- a/modeling_sparsity/stan_programs/hier_cauchy_cp.stan
+++ b/modeling_sparsity/stan_programs/hier_cauchy_cp.stan
@@ -7,13 +7,13 @@ data {
 }
 
 parameters {
-  // Horseshoe parameters
+  // Cauchy parameters
   vector[K] theta;
   real<lower=0> tau;
 }
 
 model {
-  // Horseshoe prior model
+  // Cauchy prior model
   theta ~ cauchy(0, tau);
   tau ~ normal(0, 10);

From 5a790de657e34ab716a6f7cebae1957ddd23943a Mon Sep 17 00:00:00 2001
From: Tatsuo Okubo
Date: Sat, 16 Apr 2022 15:23:21 -0400
Subject: [PATCH 21/21] fixed typo

---
 probability_theory/probability_theory.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/probability_theory/probability_theory.Rmd b/probability_theory/probability_theory.Rmd
index 6f2b020..f807eaa 100644
--- a/probability_theory/probability_theory.Rmd
+++ b/probability_theory/probability_theory.Rmd
@@ -2138,7 +2138,7 @@ plot_mc_evo <- function(iter, mc_stats, truth) {
 plot_mc_evo(iter, mc_stats, poisson_prob(A1, l))
 ```
 
-Now we can apply this machinery to any desired probabilist computation. The
+Now we can apply this machinery to any desired probabilistic computation. The
 probability of the complement of $A_{1}$?
 ```{r}
 pushforward_samples = sapply(stan_samples, function(x) 1 - indicator(x, A1))
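The pushforward construction in the final hunk, which estimates the probability of a complement by pushing samples through `1 - indicator(x, A1)`, can be sketched outside of R as well. Below is a hedged Python analogue: the Poisson sampler, `A1 = {0, 1, 2}`, and `lam = 5` are assumed stand-ins, since the case study's actual `stan_samples`, `A1`, and `l` are not defined anywhere in these patches.

```python
import math
import random

def indicator(x, A):
    # 1 if the sample x lands in the subset A, 0 otherwise
    return 1.0 if x in A else 0.0

def sample_poisson(rng, lam):
    # Knuth's multiplicative method; adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(8675309)
A1 = set(range(3))   # stand-in for the case study's A1
lam = 5.0            # stand-in for the case study's l
samples = [sample_poisson(rng, lam) for _ in range(100000)]

# Monte Carlo estimate of the probability of A1...
est_A1 = sum(indicator(x, A1) for x in samples) / len(samples)

# ...and of its complement, via the pushforward 1 - indicator(x, A1)
est_not_A1 = sum(1.0 - indicator(x, A1) for x in samples) / len(samples)
```

Because the two pushforwards are exact complements of each other, the two estimates sum to one by construction, while each individually fluctuates around the corresponding exact probability at the scale set by the Monte Carlo Central Limit Theorem.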