[statnet_help] Basis for simulate and GOF

Adam Haber adamhaber at gmail.com
Sun Aug 9 01:16:00 PDT 2020


Hi,

I’m trying to understand the default behaviour of ergm’s simulate; specifically, the use of the original connectivity data (the observed network) as the default “basis”, i.e. the starting state of the simulation (as far as I understand it; please correct me if I’ve got this wrong).

Some background: I’ve fitted a series of models of increasing complexity using ergm, and estimated the “mean connectivity” (the marginal probability of each edge in the network) implied by each of them:

model.i <- ergm(g ~ ...)                                         # fit model i
samples <- lapply(simulate(model.i, nsim=500), FUN=as.matrix)    # 500 simulated networks as adjacency matrices
mean_samp <- as.matrix(Reduce("+", samples) / length(samples))   # element-wise mean = estimated edge marginals
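For concreteness, the comparison with the observed network that I describe below is roughly the following (the exact discrepancy measure isn’t important; this is just to illustrate what I’m looking at):

obs <- as.matrix(g)          # observed adjacency matrix
mean(abs(mean_samp - obs))   # average absolute difference per edge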

I also ran a similar GOF comparison using the gof function. For some models, the mean of the samples (as well as the distribution of the statistics reported by gof) was suspiciously similar to the original network. Specifically, the models seemed to capture “inhomogeneous”/“symmetry breaking” properties of the original network that they shouldn’t be able to, given their terms/covariates. My current understanding is that this resulted from the default behaviour of “simulate”, which started from the original data matrix and simply “didn’t get far enough” away from it.
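In case it helps to make this concrete, I assume the relevant knobs here are the burn-in and the thinning interval of the sampler (please correct me if these aren’t the right control parameters), i.e. something like:

control.i <- control.simulate.ergm(MCMC.burnin = 1e6, MCMC.interval = 1e5)
samples_long <- lapply(simulate(model.i, nsim=500, control = control.i), FUN=as.matrix)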

I’d really appreciate your answers to the following two questions (sketches of what I mean follow below):
1. Is there any quantitative way to check that the samples generated by “simulate” (or “gof”) are “independent enough” for reliable estimation of single-edge marginals or other statistics of interest?
2. Are there any obvious drawbacks to supplying a “naive” basis (such as a Bernoulli graph with the same density) to simulate and/or gof?
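For question 1, I imagine something along the lines of looking at the autocorrelation or effective sample size of the simulated statistics, though I’m not sure this is the recommended diagnostic:

stats <- simulate(model.i, nsim=500, output = "stats")   # sampled network statistics
coda::effectiveSize(coda::as.mcmc(stats))                # effective sample size per statistic

For question 2, by a “naive” basis I mean something like a Bernoulli graph with the observed density, passed via the basis argument (assuming I’m reading the documentation correctly):

p <- network.density(g)
bern <- simulate(g ~ edges, coef = log(p / (1 - p)))     # Bernoulli graph with the same expected density
samples_bern <- lapply(simulate(model.i, nsim=500, basis = bern), FUN=as.matrix)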

Respectfully,
Adam Haber