
Different relative effect results in each run, but with the exact same settings #41

Open
ZixiaoChenMcK opened this issue Jun 2, 2024 · 1 comment


ZixiaoChenMcK commented Jun 2, 2024

Hi @dmphillippo,

I was running releff_age_stage_hidden <- relative_effects(fit_re_age_stage_hidden, all_contrasts = TRUE), where fit_re_age_stage_hidden is fitted with one trial excluded (hidden), so I can check whether the relative effect predicted by the model matches the true relative effect of the hidden trial.
However, I noticed that the predicted relative effect for that hidden trial is different on each run, even with the exact same settings. Could you please help me understand why this is the case? Is this normal?
Thank you!

here is my code set up for more context:
"""
#excluding one trial NCT00021060 in input data
data_demographic_hidden <- data_demographic %>% filter(trial_identifier!="NCT00021060")
#setup network
network_demographic_hidden <- set_agd_arm(
data_demographic_hidden,
study = trial_identifier,
trt = treatment,
y = NULL,
se = NULL,
r = response_count,
n = person_count,
E = NULL,
sample_size = person_count,
trt_class = NULL
)

fit_re_age_stage_hidden <- nma(network_demographic_hidden,
trt_effects = "fixed",
regression = ~(age_group+disease_stage+biomarker_group)*.trt,
prior_intercept = normal(scale = 100),
prior_trt = normal(scale = 100),
prior_het = half_normal(scale = 5))

releff_age_stage_hidden <- relative_effects(fit_re_age_stage_hidden, all_contrasts = TRUE)
"""

dmphillippo added a commit that referenced this issue Jul 30, 2024
Fixes predict() error "no parameter beta_aux" #41
dmphillippo (Owner) commented

Hi @ZixiaoChenMcK, yes this is expected.

Since the models are estimated using MCMC sampling, the results will differ slightly on every run due to Monte Carlo error. An estimate of the Monte Carlo error is given in the se_mean column shown in the model output.
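As a minimal illustration (not from the thread, using simulated draws rather than a fitted model): the Monte Carlo standard error of a posterior mean scales roughly as sd / sqrt(number of effective draws), so quadrupling the number of independent draws about halves it.

```r
set.seed(1)
draws_2k  <- rnorm(2000,  mean = 0.5, sd = 1)   # stand-in for 2,000 posterior draws
draws_32k <- rnorm(32000, mean = 0.5, sd = 1)   # stand-in for 32,000 posterior draws

# Monte Carlo standard error of the mean (assuming independent draws)
mcse <- function(x) sd(x) / sqrt(length(x))

mcse(draws_2k)    # around 0.02
mcse(draws_32k)   # around 0.006 -- smaller with more draws
```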

You can make the Monte Carlo error arbitrarily small by increasing the number of iterations; the default is iter = 2000, which is usually reasonable. If you provide a larger iter in the nma() function, this will be passed along to Stan, which does the sampling.
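If you also want bit-for-bit identical results between runs, you can fix the random seed: nma() passes additional arguments through to Stan's sampler, including seed. A sketch based on the call above (the iter and seed values are illustrative, not recommendations):

```r
fit_re_age_stage_hidden <- nma(
  network_demographic_hidden,
  trt_effects = "fixed",
  regression = ~ (age_group + disease_stage + biomarker_group) * .trt,
  prior_intercept = normal(scale = 100),
  prior_trt = normal(scale = 100),
  prior_het = half_normal(scale = 5),
  iter = 10000,  # more iterations -> smaller Monte Carlo error (default is 2000)
  seed = 123     # fixed seed -> identical draws on every run
)
```

Note that a fixed seed makes runs reproducible but does not reduce the Monte Carlo error itself; only more (effective) draws do that.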
