
Given an input dataset, sample new compositions that are consistent with it. Specifically, this draws from a multinomial whose mean is \(\phi^{-1}(Bx)\). The default sequencing depth is 5e4; modify the "depth" argument to change this.

Usage

# S4 method for class 'lnm'
sample(x, size = 1, depth = 50000, newdata = NULL, ...)

Arguments

x

An object of class lnm, with fitted parameters \(\hat{B}\), from which we want to simulate new samples.

size

The number of samples to generate.

depth

The multinomial depth (total count) to use for each simulated sample.

newdata

New samples on which to form predictions. Defaults to NULL, in which case predictions are made at the same design points as those used during the original training.

...

Additional arguments, included for consistency with R's predict generic (currently unused).

Value

A matrix with size rows and n_outcomes columns, where each row represents one draw from the posterior predictive of the fitted logistic-normal multinomial model. Each row sums to the depth argument, which defaults to 5e4.
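Conceptually, each returned row is produced by applying the inverse link to a linear predictor and drawing one multinomial at the requested depth. The sketch below illustrates this for a single sample; it is not the package's internal code, and it assumes \(\phi^{-1}\) is a softmax-style inverse link. The vector eta stands in for a hypothetical \(Bx\).

```r
set.seed(1)
eta <- c(2, 0.5, -1, 0, 1)        # hypothetical linear predictor Bx
p <- exp(eta) / sum(exp(eta))     # softmax-style inverse link: probabilities summing to 1
counts <- rmultinom(1, size = 5e4, prob = p)  # one multinomial draw at the default depth
sum(counts)                       # each draw sums to the depth, here 5e4
```

Increasing depth reduces the relative multinomial noise around \(\phi^{-1}(Bx)\), while size controls how many such rows are returned.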

Examples

example_data <- lnm_data(N = 50, K = 10)
xy <- dplyr::bind_cols(example_data[c("X", "y")])
fit <- lnm(
    starts_with("y") ~ starts_with("x"), xy, 
    iter = 25, output_samples = 25
)
#> Chain 1: ------------------------------------------------------------
#> Chain 1: EXPERIMENTAL ALGORITHM:
#> Chain 1:   This procedure has not been thoroughly tested and may be unstable
#> Chain 1:   or buggy. The interface is subject to change.
#> Chain 1: ------------------------------------------------------------
#> Chain 1: 
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Gradient evaluation took 0.001756 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 17.56 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Begin eta adaptation.
#> Chain 1: Iteration:   1 / 250 [  0%]  (Adaptation)
#> Chain 1: Iteration:  50 / 250 [ 20%]  (Adaptation)
#> Chain 1: Iteration: 100 / 250 [ 40%]  (Adaptation)
#> Chain 1: Iteration: 150 / 250 [ 60%]  (Adaptation)
#> Chain 1: Iteration: 200 / 250 [ 80%]  (Adaptation)
#> Chain 1: Success! Found best value [eta = 1] earlier than expected.
#> Chain 1: 
#> Chain 1: Begin stochastic gradient ascent.
#> Chain 1:   iter             ELBO   delta_ELBO_mean   delta_ELBO_med   notes 
#> Chain 1: Informational Message: The maximum number of iterations is reached! The algorithm may not have converged.
#> Chain 1: This variational approximation is not guaranteed to be meaningful.
#> Chain 1: 
#> Chain 1: Drawing a sample of size 25 from the approximate posterior... 
#> Chain 1: COMPLETED.
#> Warning: Pareto k diagnostic value is Inf. Resampling is disabled. Decreasing tol_rel_obj may help if variational algorithm has terminated prematurely. Otherwise consider using sampling instead.
head(sample(fit))
#>       y1   y2    y3   y4   y5    y6   y7  y8  y9 y10
#> [1,] 104  451  2145   57  107 46157  182 152 375 270
#> [2,]   1    0 49378    0    5   609    0   2   4   1
#> [3,] 295  787 18146  799 9126 18705 1065 496 359 222
#> [4,]   7   50 44643    6   33  5224   15   0  10  12
#> [5,] 517 4354 13614 3405  422 25693  986 450 287 272
#> [6,]   1   20 42403    0    9  7557    9   0   0   1