From nme.workshop at gmail.com Tue Mar 1 16:44:18 2022
From: nme.workshop at gmail.com (NME Workshop)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] Network Modeling for Epidemics Workshop: Applications now open for 2022
In-Reply-To: 
References: 
Message-ID: 

Network Modeling for Epidemics
Summer short course at the University of Washington
8-12 August, 2022
Apply online at https://forms.gle/rgm4Nxfv5ZtweyQo9

------------------------------
Network Modeling for Epidemics (NME) is a 5-day short course at the University of Washington that provides training in stochastic network models for infectious disease transmission dynamics. This is a "hands-on" course, using the EpiModel software package in R (www.epimodel.org). EpiModel provides a unified framework for statistically based modeling of dynamic networks from empirical data, and simulation of epidemic dynamics on these networks. It is a flexible open-source platform for learning and building epidemic models (including deterministic compartmental, stochastic individual-based, and stochastic network models). Resources include simple models that run in a browser window, built-in generic models that provide basic control over population contact patterns, pathogen properties and demographics, and templates for user-programmed modules that allow EpiModel to be extended for advanced research to the full range of pathogens, hosts, and disease dynamics. We use a mix of lectures, tutorials, and labs with students working in small groups. On the final day, students work to develop an EpiModel prototype model (either individually or in groups based on shared research interests), with input from the instructors, including the lead EpiModel software developer, Dr. Samuel Jenness.

------------------------------
Prerequisites: We assume students have some previous background in epidemic modeling and are comfortable using R.
Examples include:
- Research-level (post-classroom) experience with epidemic modeling of any kind
- A clearly defined modeling project, ideally with an associated network dataset
- Previous experience with EpiModel (for example, having taken NME before, or worked through our online training materials on your own)

------------------------------
Dates and location: The course will be taught from Monday, August 8 to Friday, August 12, in person on the University of Washington Seattle Campus.

------------------------------
Costs: Course fee is $1000. We offer a limited number of fee waivers for attendees from low-income countries.

------------------------------
Deadlines:
- April 15: Application deadline.
- May 1: Decisions will be announced.
- June 15: Registration deadline. Late registration is possible through July 15 with a late fee of $250.
A waitlist will be established along with rolling admission through June 15 as space allows.

------------------------------
Application: Apply online at https://forms.gle/rgm4Nxfv5ZtweyQo9

------------------------------
Course website and more information: http://statnet.github.io/nme

Please feel free to share widely!

Best,
Martina Morris, Steve Goodreau and Samuel Jenness

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chlai at g2.nctu.edu.tw Wed Mar 23 21:01:27 2022
From: chlai at g2.nctu.edu.tw (Chih-Hui Lai)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] About the error message when running GOF
Message-ID: <2B6271AE-D62F-4FBF-B993-34D9235323F5@g2.nctu.edu.tw>

Dear Statnet people,

Sorry, I'm quite new to using Statnet to run ERGMs, and the question might sound very silly... but I'd appreciate any advice or input.

The problem I encountered is that when I ran gof, this error message popped up: "Error in 0:nb2 : NA/NaN argument". What does that mean? Is there any solution to fix this error?

Thank you very much!
-Chih-Hui

Chih-Hui LAI (???), PhD
Associate Research Fellow
Research Center for Humanities and Social Sciences (RCHSS)
Academia Sinica
Taipei, Taiwan, 115
Email: imchlai@gate.sinica.edu.tw
Tel: 886-02-27898130
Mobile: 0966-192-712
Web: http://chihhui.wix.com/chihhuilai

From goodreau at uw.edu Thu Mar 24 08:54:17 2022
From: goodreau at uw.edu (Steven Goodreau)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] About the error message when running GOF
In-Reply-To: <2B6271AE-D62F-4FBF-B993-34D9235323F5@g2.nctu.edu.tw>
References: <2B6271AE-D62F-4FBF-B993-34D9235323F5@g2.nctu.edu.tw>
Message-ID: <970c11a4-77da-046d-0edc-6ee8c4df9c9d@uw.edu>

Hello Chih-Hui -

Could you provide us with a minimal reproducible example? Also, your package versions/session info, which you can obtain from the command sessionInfo().

Thanks,
Steve

On 3/23/2022 9:01 PM, Chih-Hui Lai wrote:
> Dear Statnet people,
> Sorry, I'm quite new to using Statnet to run ERGMs, and the question might sound very silly... but I'd appreciate any advice or input.
>
> The problem I encountered is that when I ran gof, this error message popped up: "Error in 0:nb2 : NA/NaN argument". What does that mean? Is there any solution to fix this error?
>
> Thank you very much!
>
> [...]

--
*****************************************************************
Steven M. Goodreau / Professor / Dept. of Anthropology
(STEE-vun GOOD-roe) / he-him
Physical address: Denny Hall M236
Mailing address: Campus Box 353100 / 4216 Memorial Way NE
Univ. of Washington / Seattle WA 98195
st?x??ug?i?, dzidz?lali?, x???l?
1-206-685-3870 (phone) / 1-206-543-3285 (fax)
https://faculty.washington.edu/goodreau
*****************************************************************

From michal2992 at gmail.com Thu Mar 24 09:14:22 2022
From: michal2992 at gmail.com (Michał Bojanowski)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] About the error message when running GOF
In-Reply-To: <970c11a4-77da-046d-0edc-6ee8c4df9c9d@uw.edu>
References: <2B6271AE-D62F-4FBF-B993-34D9235323F5@g2.nctu.edu.tw> <970c11a4-77da-046d-0edc-6ee8c4df9c9d@uw.edu>
Message-ID: 

Chih-Hui, Steve,

I believe this problem (https://github.com/statnet/ergm/issues/424) is already fixed on GitHub. Please install from there and you should be good to go.

Best,
Michał

On Thu, 24 Mar 2022 at 16:56, Steven Goodreau wrote:
> Hello Chih-Hui -
>
> Could you provide us with a minimal reproducible example? Also, your
> package versions/session info, which you can obtain from the command
> sessionInfo()
>
> Thanks,
> Steve
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nme.workshop at gmail.com Wed Mar 30 15:01:52 2022
From: nme.workshop at gmail.com (NME Workshop)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] Network Modeling for Epidemics Workshop: Applications open for 2022 [May 1 deadline]
In-Reply-To: 
References: 
Message-ID: 

Network Modeling for Epidemics
Summer short course at the University of Washington
8-12 August, 2022
Apply online at https://forms.gle/rgm4Nxfv5ZtweyQo9

------------------------------
Network Modeling for Epidemics (NME) is a 5-day short course at the University of Washington that provides training in stochastic network models for infectious disease transmission dynamics.
This is a "hands-on" course, using the EpiModel software package in R (www.epimodel.org). EpiModel provides a unified framework for statistically based modeling of dynamic networks from empirical data, and simulation of epidemic dynamics on these networks. It is a flexible open-source platform for learning and building epidemic models (including deterministic compartmental, stochastic individual-based, and stochastic network models). Resources include simple models that run in a browser window, built-in generic models that provide basic control over population contact patterns, pathogen properties and demographics, and templates for user-programmed modules that allow EpiModel to be extended for advanced research to the full range of pathogens, hosts, and disease dynamics. We use a mix of lectures, tutorials, and labs with students working in small groups. On the final day, students work to develop an EpiModel prototype model (either individually or in groups based on shared research interests), with input from the instructors, including the lead EpiModel software developer, Dr. Samuel Jenness.

------------------------------
Prerequisites: We assume students have some previous background in epidemic modeling and are comfortable using R. Examples include:
- Research-level (post-classroom) experience with epidemic modeling of any kind
- A clearly defined modeling project, ideally with an associated network dataset
- Previous experience with EpiModel (for example, having taken NME before, or worked through our online training materials on your own)

------------------------------
Dates and location: The course will be taught from Monday, August 8 to Friday, August 12, in person on the University of Washington Seattle Campus.

------------------------------
Costs: Course fee is $1000. We offer a limited number of fee waivers for attendees from low-income countries.

------------------------------
Deadlines:
- May 1: Application deadline.
- May 15: Decisions will be announced.
- July 1: Registration deadline. Late registration is possible through July 15 with a late fee of $250.
A waitlist will be established along with rolling admission through July 15 as space allows.

------------------------------
Application: Apply online at https://forms.gle/rgm4Nxfv5ZtweyQo9

------------------------------
Course website and more information: http://statnet.github.io/nme

Please feel free to share widely!

Yours,
Martina Morris, Steve Goodreau and Samuel Jenness

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chenj.chenjing at outlook.com Thu May 19 02:17:22 2022
From: chenj.chenjing at outlook.com (Jing Chen)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] standardized coefficients in ERGMs
Message-ID: 

Dear Statnet community,

I am trying to compare the strength of an edge attribute across ERGMs with different outcome objects. I am wondering how we talk about effect sizes in the ERGM world. Are the "regression" coefficients generated from ERGMs standardized? If not, is there a way to do so?

Any information would be appreciated. Thank you!

Jing Chen, Ph.D.
Assistant professor
Shanghai Jiao Tong University

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zagibson at syr.edu Thu Jun 9 14:38:24 2022
From: zagibson at syr.edu (Zachary Gibson)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] Appropriate terms for bipartite ERGM?
Message-ID: 

Hello,

I'm currently working on a project with bipartite networks. We're interested in understanding what drives actors in the first mode to connect with the same node in the second mode. We hypothesized:

1. Actors who are similar in certain traits would be more likely to connect with the same node.
2. Actors who are geographically proximal would be more likely to connect with the same node.
I'm unsure which ERGM terms aptly capture these hypotheses, except for b1nodematch, which doesn't seem intended for continuous covariates. I considered building terms using the ergm.userterms package, but that appears to be offline at the moment? Any advice would be greatly appreciated!

Thank you!

Zachary Gibson, Ph.D.
Research Associate (Community Services & Technology)
D'Aniello Institute for Veterans and Military Families
T 847.440.6930
zagibson@syr.edu
National Veterans Resource Center
Daniel & Gayle D'Aniello Building
101 Waverly Ave, Syracuse, NY 13244
ivmf.syracuse.edu

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 16365 bytes
Desc: image001.png
URL: 

From 15927173853 at 163.com Wed Jun 15 18:19:40 2022
From: 15927173853 at 163.com (霍龙霞)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] error in ergm-term triangle
Message-ID: <1b72ea0.c57.1816a183403.Coremail.15927173853@163.com>

Hi all,

I use ergm to model the formation of a network, but it is always degenerate. There are 43 nodes, 492 edges, and 3562 triangles. I use the following code:

ergm1 <- ergm(net1 ~ edges + triangle, control = control.ergm(seed = 2345))

This error message pops up:

Model statistics 'triangle' are not varying. This may indicate that the observed data occupies an extreme point in the sample space or that the estimation has reached a dead-end configuration.

I also tried using gwesp to address the model degeneracy, with the following code:

ergm1 <- ergm(net1 ~ edges + gwesp(0.5, fixed = TRUE), control = control.ergm(seed = 2345))

However, it doesn't work either.
The error is as follows:

Iteration 2 of at most 60:
Error in ergm.MCMLE(init, nw, model, initialfit = (initialfit <- NULL), :
  Unconstrained MCMC sampling did not mix at all. Optimization cannot continue.
In addition: Warning message:
In ergm_MCMC_sample(s, control, theta = mcmc.init, verbose = max(verbose - :
  Unable to reach target effective size in iterations alotted.

Is there any solution to fix this error? Thank you very much!

Best,
Longxia Huo
Ph.D. Student

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From buttsc at uci.edu Wed Jun 15 22:21:36 2022
From: buttsc at uci.edu (Carter T. Butts)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] error in ergm-term triangle
In-Reply-To: <1b72ea0.c57.1816a183403.Coremail.15927173853@163.com>
References: <1b72ea0.c57.1816a183403.Coremail.15927173853@163.com>
Message-ID: <23c0488e-95d5-841a-5e5f-ffe7da5ba9e5@uci.edu>

Hi, Longxia -

This is not a bug: the model families you are fitting are usually degenerate in social networks. The edge-triangle family is particularly infamous in that regard - you can find the first results on that one in Strauss's 1986 SIAM paper, and it has been the target of many other theoretical studies. We love to work with it as a theoretical and didactic tool for understanding how ERGMs work, but it is rarely useful in empirical models. (It can have uses when the triangle parameter is /negative/, but that doesn't happen in most social networks. /Local/ triangle terms can also work, but only when confined to very small groups. So, while there are exceptions, one should not be surprised to find that an edge-triangle model fails when modeling social networks.)

Without belaboring the point on why this goes bad (as noted, there's a whole literature on that), the basic intuition is that a positive triangle term "says" that every shared partner of an i,j pair should increase the conditional log-odds of an i,j tie by a constant unit.
That sounds harmless, but in practice it leads to runaway triangle formation: once triangles form, they don't break, and triangles beget more triangles without limit. (This is the "density explosion" route to degeneracy.) In the end, the MCMC simulation that is being used to fit your model tells you that something has gone horribly wrong, and quits. That's not a bug: what this is telling you is that the model you are positing is incompatible with your data.

As you correctly observe, GWESP is an alternative parameterization that is less prone to the density explosion, and it often works very well. Very approximately speaking, it works by implementing diminishing marginal effects for the impact of shared partners on tie formation: the first matters more than the second, the second matters more than the third, etc. This fall-off is controlled by the decay parameter. If the decay parameter is too large, then the decay will be too slow, and you will wind up right back in the same regime you were in with the triangle term; how large is too large is case-specific, but if you find that the model stops mixing, it's a good idea to reduce the parameter and try again. Although there are no (and cannot be) general rules, 0.25 is often a reasonable starting value for many social networks. I would try much smaller values, and see if that helps.

Another general issue is that your model is depending on a homogeneous clustering effect to explain all of the excess triangulation in the graph. In real-world social networks, much triangulation usually results from inhomogeneities (e.g., non-uniform mixing due to demographics, shared social settings, exogenous group memberships, etc.), which produce a "patchy" and uneven distribution of triangles that is not similar to what is generated by homogeneous terms (e.g., triangles, GWESP, ESPs).
Fitting such networks with edges + GWESP forces the model to struggle between putting too few triangles in the parts of the network that need them, and putting too many in the parts that don't - the results are often disappointing. The solution here is to use covariate effects to capture differential mixing, and then to add GWESP and other dependence terms to "mop up" what cannot be captured via covariate effects. This is something that we used to cover in our ergm workshop, so you may want to check our online workshop materials. I'm not certain if it's still in the latest version (since various topics and examples get rotated in and out over time), but if not, it should be in some of the older ones.

So, in sum:

1. Avoid triangle terms, unless you know that you are in one of those special exceptional cases where they are useful (and if they aren't working, you aren't in one of those situations!);
2. When in doubt, try starting GWESP with small decay parameters (0.25 or less is often, though not always, reasonable when trying to get a model to fit the first time);
3. Start by visualizing your network and trying to understand its sources of triangulation; account for those first using covariate effects, and then add dependence terms if/as needed to mop up what those couldn't capture.

Hope that helps,

-Carter

On 6/15/22 6:19 PM, 霍龙霞 wrote:
> Hi all,
> I use the ergm to model the formation of network. but it is always
> degeneracy. there are 43 nodes, 492 edges, 3562 triangles. i use the
> code as follow:
>
> ergm1<-ergm(net1~edges+triangle,control=control.ergm(seed=2345))
>
> The error message popped up:
>
> Model statistics 'triangle' are not varying. This may indicate that
> the observed data occupies an extreme point in the sample space or
> that the estimation has reached a dead-end configuration.
>
> I also try to use gwesp to solve model degeneracy.
> the code as follow
>
> ergm1<-ergm(net1~edges+gwesp(0.5, fixed = TRUE),control=control.ergm(seed=2345))
>
> However, it doesn't work, either. And the error as follows:
>
> Iteration 2 of at most 60:
> Error in ergm.MCMLE(init, nw, model, initialfit = (initialfit <- NULL), :
>   Unconstrained MCMC sampling did not mix at all. Optimization cannot continue.
> In addition: Warning message:
> In ergm_MCMC_sample(s, control, theta = mcmc.init, verbose = max(verbose - :
>   Unable to reach target effective size in iterations alotted.
>
> I want to know is there any solution to fix this error?
> Thank you very much!
>
> Best,
> Longxia Huo
> Ph.D. Student
>
> _______________________________________________
> statnet_help mailing list
> statnet_help@u.washington.edu
> http://mailman13.u.washington.edu/mailman/listinfo/statnet_help

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chenj.chenjing at outlook.com Thu Jun 16 04:43:57 2022
From: chenj.chenjing at outlook.com (Jing Chen)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] standardized coefficients in ERGMs
In-Reply-To: 
References: 
Message-ID: 

Dear Statnet community,

I am posting this question again - not sure if the previous one was sent successfully.

I am trying to compare the strength of an edge effect across ERGMs (the identical model specification, but the outcome objects are different). I am wondering how we talk about effect sizes in the ERGM world. Are the "regression" coefficients presented in the ERGM output standardized? If not, is there a way to do so?

Any information would be appreciated. Thank you!

Jing Chen, Ph.D.
Assistant professor
Shanghai Jiao Tong University

-------------- next part --------------
An HTML attachment was scrubbed...
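As a concrete sketch of the advice in Carter's reply above (small fixed GWESP decay, covariate terms first): the following is an editor-added illustration in R, not code from the thread. Since the poster's net1 is not available, ergm's built-in florentine data and its wealth attribute stand in; on one's own data, substitute the network and covariates accordingly.

```r
library(ergm)
data(florentine)   # built-in example data; flomarriage stands in for net1

# A small fixed decay (0.25 or less) is a safer starting point than 0.5,
# and a covariate term (here, nodecov on wealth) absorbs some of the
# structure before GWESP has to account for the remaining clustering.
fit <- ergm(flomarriage ~ edges + nodecov("wealth") +
              gwesp(0.25, fixed = TRUE),
            control = control.ergm(seed = 2345))
summary(fit)
```

If the sampler still fails to mix on a real network, reduce the decay further and recheck, per the advice above.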
URL: 

From robert.w.krause at fu-berlin.de Thu Jun 16 08:29:55 2022
From: robert.w.krause at fu-berlin.de (Krause, Robert)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] standardized coefficients in ERGMs
In-Reply-To: 
References: 
Message-ID: 

Dear Jing,

In general terms, it is extremely difficult to compare parameters across models. Carina Mood (2010) showed this in detail for simple logistic regression models. Her results very much hold for ERGMs, but are much more pronounced here due to the increased complexity and the connectedness of, well, everything.

If you go to the mail that Carter wrote this morning and take the example he used: you might have subgroups based on some covariate which have high levels of transitive closure within the groups but relatively fewer ties (and thus also fewer transitive ties) between groups. Now compare this to a network where the same covariates exert a far weaker influence on homophilic clustering. If the transitivity (gwesp) parameters are the same across the two networks (but density is different, so that overall degree is the same), for most people the real contribution to the probability of creating a tie due to shared partners will be different.

As for odds ratios, which I often see as a reviewer: they are (as Mood 2010 shows!!) not a good idea. Say you have a reciprocity/mutual effect of '2'. This effect will matter very differently for a dyad that shares several covariates and has many friends in common: having the incoming tie will increase the probability, but the real effect might be relatively small, given that the probability to connect was already high. On the other hand, two nodes that do not share any covariates or friends in common might be much more strongly affected if one sends a tie to the other.
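Robert's point - that the same coefficient shifts tie probability by very different amounts depending on the baseline - can be made concrete by converting conditional log-odds to probabilities with base R's plogis(). This is an editor-added sketch; the coefficient and baseline values are illustrative, not taken from any poster's data.

```r
# A reciprocity/mutual effect of 2 on the conditional log-odds scale:
theta.mutual <- 2

# Dyad whose baseline conditional log-odds are already high (+2, p ~ 0.88):
plogis(2 + theta.mutual) - plogis(2)    # about +0.10 in probability

# Dyad whose baseline conditional log-odds are low (-3, p ~ 0.05):
plogis(-3 + theta.mutual) - plogis(-3)  # about +0.22 in probability
```

The identical coefficient more than quadruples the low-baseline dyad's tie probability while barely moving the high-baseline one, which is exactly why raw coefficients and odds ratios resist comparison across networks with different densities.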
If a certain parameter now primarily occurs together with other parameters in one network or one part of a network (e.g., where there are subgroups there is also clustering - but no interaction between groups and clustering), then this is very difficult to compare to another network.

So, in short: no, the parameters are not standardized, and there is no easy way to do this. Average marginal effects would be great to have, but no one has, as far as I know, implemented them or something similar for ERGMs, and the computational power required will probably be very large in all but trivial models or very small graphs (SAOMs have Relative Importance, see Indlekofer & Brandes 2013, and something similar could probably be implemented in ergm). One option would be to create a few example cases and vary the statistics for the parameters to see the effects on tie probability - something like a Marginal Effect at Representative values (MER)?

Sorry :/ I do not have a better answer - and sorry for the late response, I saw the mail last month but forgot to reply...

Cheers from a (way too) sunny Berlin,
Robert

________________________________
From: statnet_help on behalf of Jing Chen
Sent: Thursday, 16 June 2022 13:43:57
To: statnet_help@u.washington.edu
Subject: [statnet_help] standardized coefficients in ERGMs

Dear Statnet community,

I am posting this question again - not sure if the previous one was sent successfully.

I am trying to compare the strength of an edge effect across ERGMs (the identical model specification, but the outcome objects are different). I am wondering how we talk about effect sizes in the ERGM world. Are the "regression" coefficients presented in the ERGM output standardized? If not, is there a way to do so?

Any information would be appreciated. Thank you!

Jing Chen, Ph.D.
Assistant professor
Shanghai Jiao Tong University

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From buttsc at uci.edu Thu Jun 16 20:58:22 2022
From: buttsc at uci.edu (Carter T. Butts)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] standardized coefficients in ERGMs
In-Reply-To: 
References: 
Message-ID: <1d81aaf7-becf-ee44-c8ad-78a36ccd8d66@uci.edu>

Hi, Jing -

Robert's email pointed out some of the complexities here, but I think one might even more usefully turn the question around. What does one /want to mean/ by an effect size? By a standardized coefficient? Once that is decided, one can determine (1) if such a quantity is well-defined, and (2) if so, how it can be calculated. As Robert observed, many ideas that analysts have about effect sizes and standardization are actually special properties of linear models, and do not generalize to the non-linear world. Such ideas are of limited use in understanding even logistic regression, much less complex systems with "feedback" among the elements (which is what ERGMs are representing). But, on the other hand, there are lots of ways to talk about effects and relative effects that can be useful for comparative purposes. For instance:

- Trivially, the ERGM coefficients tell you about how the conditional log odds of an edge vary as a function of the covariates, and the rest of the graph. This provides an effect that has a consistent meaning across networks, and indeed is about as comparable as anything gets in a nonlinear model. (The meaning is /local/, in the sense that the change in log odds is taken relative to a particular state, but that's the price of living in a nonlinear world.) For quantitative covariate effects, one can pick any scale one wants, so if you wanted to partially "standardize" by scaling the covariates to have unit variance, one could do this. (I tend to think that this type of practice usually causes more problems than it solves in the long run, but there's nothing illegitimate about it.)

- Via conditional simulation, one can evaluate e.g.
the expected change in one or more target statistics, as a function of change in one (or a combination of) parameters. This is local with respect to the parameter vector, and may or may not be useful to compare across graphs depending on one's choice of statistics, but it does capture (local) "net" effects due to feedback between statistics, etc. If one has a concrete substantive question relating to some aspect of network structure, computing such effects may be insightful. One can, further, partially "standardize" these effects by scaling them by the standard deviation of the target statistics (or a function thereof) at the base parameter vector. This gives you the expected change in statistics per unit change in parameters (local to the current base value), in units of the standard deviation in those statistics. As with the above case, whether this is a good idea depends on what you want to know, but it can be helpful in giving you a sense of the extent to which small changes in the parameters are making a large difference in network structure, relative to the variation that you would naturally expect to see in that structure.

- I'm a fan these days of scenario evaluations, and things like virtual "knock-out" experiments: in the latter case, we compare the expectation of some target statistic (or some other distributional statistic) for particular parameters (e.g., the MLE of a fitted model) with what we get if one or more terms in the model are set to zero. That is, we reach in and "turn off" the term, and see what it does to the graph (as is done physically in a knock-out experiment, where one might e.g. "turn off" expression of some gene in a mouse and see what effect it has). This can be useful as a probe to better understand how particular mechanisms are contributing to the overall behavior of the model, and can be used to construct a certain type of "effect size" based on how the target changes when specific effects are removed.
(Whether that is useful depends on what you want to know, of course, but it can be insightful.) We can of course do knock-down/knock-in/knock-up versions as well, along with versions where we modify e.g. covariates rather than parameters - all boil down to trying to understand the implications of model terms on substantive behavior by comparing across hypothetical scenarios (whether of empirically plausible or entirely conceptual nature).

- In some physical settings, the ERGM parameters have a pretty concrete meaning as effective forces: setting aside things like contributions from the reference measure, a given parameter is the energy per unit change in the statistic, times -1/(kT), where T is the system temperature and k is Boltzmann's constant. So, what we see is the (additive inverse of the) energetic cost of changing a given statistic by one unit, relative to the size of typical energy fluctuations (i.e., kT). Admittedly, this is not as immediately helpful for social networks, but there are other settings where T is known, and the adjusted parameters then have a very direct and absolute interpretation. From this vantage point, our usual coefficients are already standardized, in the sense that they reflect (to gloss it a bit) costs of changing the graph relative to available resources. I think this can be pushed a bit further even in the social case, but I think it is safe to say that this is still something that is being worked out. We'll have to see where it leads.

Anyway, those are just a few of the examples of things that folks are doing in this area. I agree with Robert that we're not going to have some simple, generic recipe for how to think about effects that is ideal in all cases, but that doesn't exist even for linear models. By turns, we have quite a lot of powerful ways to use and interpret the coefficients that we have, and I think that folks will continue to come up with new ones as the number of applications grows.
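The virtual "knock-out" experiment described above can be sketched in R with simulate() on a fitted ergm object. This is an editor-added illustration under explicit assumptions: the florentine data stand in for a real application, the coefficient name "gwesp.fixed.0.25" is what recent ergm versions assign to a fixed-decay GWESP term, and the coef argument of simulate() may be named differently in older package versions.

```r
library(ergm)
data(florentine)   # stand-in example network

fit <- ergm(flomarriage ~ edges + gwesp(0.25, fixed = TRUE))

# "Knock out" the GWESP term by zeroing its coefficient, then compare
# expected triangle counts under the fitted and knocked-out parameters.
theta.ko <- coef(fit)
theta.ko["gwesp.fixed.0.25"] <- 0

sims.fit <- simulate(fit, nsim = 200)
sims.ko  <- simulate(fit, nsim = 200, coef = theta.ko)

mean(sapply(sims.fit, function(g) summary(g ~ triangle)))
mean(sapply(sims.ko,  function(g) summary(g ~ triangle)))
```

The gap between the two means is one version of the "effect size" Carter describes: how much of the expected triangulation the GWESP term is responsible for, feedback included.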
What needs to be in the driver's seat, in my view, are the substantive questions.? If folks know and can clearly articulate what they are trying to learn, then they are likely to be able to come up with ways to measure the right quantities. Hope that is helpful, -Carter On 6/16/22 4:43 AM, Jing Chen wrote: > > Dear Statnet community, > > I am posting this question again ? not sure if the previous one was > sent successfully. > > I am trying to compare the strength of an edge effect across ERGMs > (the identical model specification, but the outcome objects are > different). I am wondering how do we talk about effect sizes in the > ERGM world? Are the ?regression? coefficients presented in the ERGM > output standardized? If not, is there a way to do so? > > Any information would be appreciated. Thank you! > > Jing Chen, Ph.D. > > Assistant professor > > Shanghai Jiao Tong University > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > https://urldefense.com/v3/__http://mailman13.u.washington.edu/mailman/listinfo/statnet_help__;!!CzAuKJ42GuquVTTmVmPViYEvSg!M_zeRwLHiLMJLk_Nxn_dg32pF8ECLVLcU2Sed2IKAaJFtBJa4hT_afhwHhFydSu0iOdUHi2jERtRUQqE9FnFwOxq$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenj.chenjing at outlook.com Fri Jun 17 01:23:06 2022 From: chenj.chenjing at outlook.com (Jing Chen) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] standardized coefficients in ERGMs In-Reply-To: <1d81aaf7-becf-ee44-c8ad-78a36ccd8d66@uci.edu> References: <1d81aaf7-becf-ee44-c8ad-78a36ccd8d66@uci.edu> Message-ID: A lot of thanks to Carter and Robert for your thoughtful responses! They have been very helpful! What I am trying to compare is the strengths of a few edge effects, which basically are the associations between pairs of network objects. Carter?s idea of turning the question around is inspiring. Instead of diving deep methodologically into the ?effect size? 
issue, Jaccard index and QAP may be good enough for my purpose, given my reviewers are probably not familiar with network analysis. Thanks again, Jing From: statnet_help On Behalf Of Carter T. Butts Sent: Friday, June 17, 2022 11:58 AM To: statnet_help@u.washington.edu Subject: Re: [statnet_help] standardized coefficients in ERGMs Hi, Jing - Robert's email pointed out some of the complexities here, but I think one might even more usefully turn the question around. What does one want to mean by an effect size? By a standardized coefficient? Once that is decided, one can determine (1) if such a quantity is well-defined, and (2) if so, how it can be calculated. As Robert observed, many ideas that analysts have about effect sizes and standardization are actually special properties of linear models, and do not generalize to the non-linear world. Such ideas are of limited use in understanding even logistic regression, much less complex systems with "feedback" among the elements (which is what ERGMs are representing). But, on the other hand, there are lots of ways to talk about effects and relative effects that can be useful for comparative purposes. For instance: - Trivially, the ERGM coefficients tell you about how the conditional log odds of an edge vary as a function of the covariates, and the rest of the graph. This provides an effect that has a consistent meaning across networks, and indeed is about as comparable as anything gets in a nonlinear model. (The meaning is local, in the sense that the change in log odds is taken relative to a particular state, but that's the price of living in a nonlinear world.) For quantitative covariate effects, one can pick any scale one wants, so if you wanted to partially "standardize" by scaling the covariates to have unit variance, one could do this. (I tend to think that this type of practice usually causes more problems than it solves in the long run, but there's nothing illegitimate about it.) 
- Via conditional simulation, one can evaluate e.g. the expected change in one or more target statistics, as a function of change in one (or a combination of) parameters. This is local with respect to the parameter vector, and may or may not be useful to compare across graphs depending on one's choice of statistics, but it does capture (local) "net" effects due to feedback between statistics, etc. If one has a concrete substantive question relating to some aspect of network structure, computing such effects may be insightful. One can, further, partially "standardize" these effects by scaling them by the standard deviation of the target statistics (or a function thereof) at the base parameter vector. This gives you the expected change in statistics per unit change in parameters (local to the current base value), in units of the standard deviation in those statistics. As with the above case, whether this is a good idea depends on what you want to know, but it can be helpful in giving you a sense of the extent to which small changes in the parameters are making a large difference in network structure, relative to the variation that you would naturally expect to see in that structure. - I'm a fan these days of scenario evaluations, and things like virtual "knock-out" experiments: in the latter case, we compare the expectation of some target statistic (or some other distributional statistic) for particular parameters (e.g., the MLE of a fitted model) with what we get if one or more terms in the model are set to zero. That is, we reach in and "turn off" the term, and see what it does to the graph (as is done physically in a knock-out experiment, where one might e.g., "turn off" expression of some gene in a mouse and see what effect it has). 
This can be useful as a probe to better understand how particular mechanisms are contributing to the overall behavior of the model, and can be used to construct a certain type of "effect size" based on how the target changes when specific effects are removed. (Whether that is useful depends on what you want to know, of course, but it can be insightful.) We can of course do knock-down/knock-in/knock-up versions, as well, as well as versions where we modify e.g. covariates rather than parameters - all boil down to trying to understand the implications of model terms on substantive behavior by comparing across hypothetical scenarios (whether of empirically plausible or entirely conceptual nature). - In some physical settings, the ERGM parameters have a pretty concrete meaning as effective forces: setting aside things like contributions from the reference measure, a given parameter is the energy per unit change in the statistic, times -1/(kT) where T is the system temperature and k is Boltzmann's constant. So, what we see is the (additive inverse of the) energetic cost of changing a given statistic by one unit, relative to the size of typical energy fluctuations (i.e., kT). Admittedly, this is not as immediately helpful for social networks, but there are other settings where T is known, and the adjusted parameters then have a very direct and absolute interpretation. From this vantage point, our usual coefficients are already standardized, in the sense that they reflect (to gloss it a bit) costs of changing the graph relative to available resources. I think this can be pushed a bit further even in the social case, but I think it is safe to say that this is still something that is being worked out. We'll have to see where it leads. Anyway, those are just a few of the examples of things that folks are doing in this area. 
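[Editorial note: Carter's conditional-simulation and virtual "knock-out" ideas above can be sketched in a few lines of R. This is a hedged illustration, not code from the thread; the bundled florentine data and the nodecov("wealth") term are stand-ins chosen only because they ship with the ergm package.]

```r
# Sketch of a virtual "knock-out" experiment with the ergm package.
library(ergm)
data(florentine)

# Fit a simple illustrative model.
fit <- ergm(flomarriage ~ edges + nodecov("wealth"))

# Expected target statistics at the fitted coefficients...
base <- simulate(fit, nsim = 500, output = "stats")

# ...and with the wealth term "knocked out" (its coefficient set to zero).
theta.ko <- coef(fit)
theta.ko["nodecov.wealth"] <- 0
ko <- simulate(fit, nsim = 500, coef = theta.ko, output = "stats")

# Compare expected statistics with and without the mechanism; the difference
# in the means is one way to express the term's "net" effect on structure.
colMeans(base)
colMeans(ko)
```

One could similarly perturb a single coefficient by a small amount (rather than zeroing it) and scale the resulting change in mean statistics by their simulated standard deviations, matching the partial "standardization" Carter describes.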
I agree with Robert that we're not going to have some simple, generic recipe for how to think about effects that is ideal in all cases, but that doesn't exist even for linear models. By turns, we have quite a lot of powerful ways to use and interpret the coefficients that we have, and I think that folks will continue to come up with new ones as the number of applications grows. What needs to be in the driver's seat, in my view, are the substantive questions. If folks know and can clearly articulate what they are trying to learn, then they are likely to be able to come up with ways to measure the right quantities. Hope that is helpful, -Carter On 6/16/22 4:43 AM, Jing Chen wrote: Dear Statnet community, I am posting this question again - not sure if the previous one was sent successfully. I am trying to compare the strength of an edge effect across ERGMs (the identical model specification, but the outcome objects are different). I am wondering how do we talk about effect sizes in the ERGM world? Are the "regression" coefficients presented in the ERGM output standardized? If not, is there a way to do so? Any information would be appreciated. Thank you! Jing Chen, Ph.D. Assistant professor Shanghai Jiao Tong University _______________________________________________ statnet_help mailing list statnet_help@u.washington.edu https://urldefense.com/v3/__http://mailman13.u.washington.edu/mailman/listinfo/statnet_help__;!!CzAuKJ42GuquVTTmVmPViYEvSg!M_zeRwLHiLMJLk_Nxn_dg32pF8ECLVLcU2Sed2IKAaJFtBJa4hT_afhwHhFydSu0iOdUHi2jERtRUQqE9FnFwOxq$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From 15927173853 at 163.com Wed Jun 22 00:28:39 2022 From: 15927173853 at 163.com (=?UTF-8?B?6ZyN6b6Z6Zye?=) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] question about standardized coefficients in MR-QAP Message-ID: <7f44c74.3d63.1818a502d41.Coremail.15927173853@163.com> Hi all, I have a question relating to MR-QAP coefficients. 
I have two different undirected collaborative networks: N1 (35*35) and N2 (41*41). And I want to run MR-QAP to (1) examine factors predicting collaboration (homophily, sender effect, and transitivity; the model specifications are the same); (2) explore the relative contribution of the above effects within each network; (3) and compare the strength of certain effects across QAPs (e.g., is the homophily effect in N1 stronger than in N2). For goal (3), could I compare the standardized coefficients of the N1 model and the N2 model? Thank you! Best, Longxia Huo Ph.D. Student -------------- next part -------------- An HTML attachment was scrubbed... URL: From buttsc at uci.edu Wed Jun 22 01:00:14 2022 From: buttsc at uci.edu (Carter T. Butts) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] question about standardized coefficients in MR-QAP In-Reply-To: <7f44c74.3d63.1818a502d41.Coremail.15927173853@163.com> References: <7f44c74.3d63.1818a502d41.Coremail.15927173853@163.com> Message-ID: <9b8c557a-0a8d-b8a4-d87c-de9b4eb44050@uci.edu> Hi, Longxia Huo - OLS network regression coefficients (as obtained e.g., from netlm) can be interpreted in the same fashion as any other linear regression coefficients - they are, in fact, exactly equivalent to vectorizing the dependent network and regressing it on the vectorized versions of the independent networks. So, in that regard, effect sizes and such have their usual linear meanings (provided that one interprets them in terms of edge variables, and sticks to descriptive statements - some inferential ones can also be justified, but beware of anything that requires an independence assumption). By turns, the QAP null hypothesis tests are just that (tests), and you would not want to use QAP quantiles as a proxy for effect sizes, or otherwise compare them across models. (At least, not without some guiding theory, and I'm not aware of anything apposite.) Hope that helps! -Carter On 6/22/22 12:28 AM, ??? 
wrote: > > Hi all, > > I have a question relating to MR-QAP coefficients. I have two different > undirected collaborative networks: N1 (35*35) and N2 (41*41). And I want > to run MR-QAP to (1) examine factors predicting collaboration > (homophily, sender effect, and transitivity; the model specifications > are the same); (2) explore the relative contribution of the above effects > within each network; (3) and compare the strength of certain effects across > QAPs (e.g., is the homophily effect in N1 stronger than in N2). For > goal (3), could I compare the standardized coefficients of the N1 model and > the N2 model? > > Thank you! > > Best, > > Longxia Huo > > Ph.D. Student > > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > https://urldefense.com/v3/__http://mailman13.u.washington.edu/mailman/listinfo/statnet_help__;!!CzAuKJ42GuquVTTmVmPViYEvSg!MfEHpuDk-Bh78ovdq-bOxkwiDC4SivhxSS4BNxc_6gmEicM6kd8SK_IsIa6Q8rqyWMmqrUNCuB0IdZ6VCcQ$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From piombo at usc.edu Mon Aug 1 14:26:00 2022 From: piombo at usc.edu (Sarah Piombo) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] ERGM constraints and subgroups Message-ID: Hi, I have a question about the best way to specify ERGM constraints for my data. I have a network of ~1500 nodes and 8500 edges. The individuals live in 8 different villages, ~7000 edges are within villages while 1500 edges are between villages. I initially used a block diagonal model to only allow edges within the same village and this worked great, but without that constraint I have difficulty getting the model to converge with dyad dependent terms. Ideally I would like the between village ties to be possible but less probable than the within village ties. 1. What is the best way to do this? Is it possible with the blocks operator or another approach? 2. Any other suggestions or considerations (should I consider using ergm.multi?) 
Any help or advice would be much appreciated. Thank you! -- *Sarah Piombo, PhD(c), MPH* Doctoral Candidate Department of Population and Public Health Sciences Keck School of Medicine | University of Southern California https://sarahpiombo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From goodreau at uw.edu Mon Aug 1 14:28:17 2022 From: goodreau at uw.edu (Steven M. Goodreau) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] ERGM constraints and subgroups In-Reply-To: References: Message-ID: <35298f2f-3053-158f-6da3-7afd3e0a3e2e@uw.edu> Hi Sarah, based on your description it sounds like all you need is a term in your model: nodematch("village") Best, Steve On 8/1/2022 2:26 PM, Sarah Piombo wrote: > Hi, > > I have a question about the best way to specify ERGM constraints for > my data. > I have a network of ~1500 nodes and 8500 edges. > The individuals live in 8 different villages, ~7000 edges are within > villages while 1500 edges are between villages. > I initially used a block diagonal model to only allow edges within the > same village and this worked great, but without that constraint I have > difficulty getting the model to converge with dyad dependent terms. > Ideally I would like the between village ties to be possible but less > probable than the within village ties. > 1. What is the best way to do this? Is it possible with the blocks > operator or another approach? > 2. Any other suggestions or considerations (should I consider using > ergm.multi?) > Any help or advice would be much appreciated. Thank you! 
> > -- > *Sarah Piombo, PhD(c), MPH* > Doctoral Candidate > > Department of Population and Public Health Sciences > > Keck School of Medicine | University of Southern California > > https://sarahpiombo.com > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > http://mailman13.u.washington.edu/mailman/listinfo/statnet_help -- ***************************************************************** Steven M. Goodreau / Professor / Dept. of Anthropology (STEE-vun GOOD-roe) / he-him (universal they also good) Physical address: Denny Hall M236 Mailing address: Campus Box 353100 / 4216 Memorial Way NE Univ. of Washington / Seattle WA 98195 st?x??ug?i?, dzidz?lali?, x???l? 1-206-685-3870 (phone) /1-206-543-3285 (fax) https://faculty.washington.edu/goodreau ***************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From fmugheir at mix.wvu.edu Sun Sep 4 12:42:06 2022 From: fmugheir at mix.wvu.edu (Fadi Mugheirbi) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] GOF for a bipartite network Message-ID: <465F97C8-FBA1-469A-9151-E9DFDFF03373@hxcore.ol> An HTML attachment was scrubbed... URL: From steffentriebel at icloud.com Mon Oct 17 14:32:29 2022 From: steffentriebel at icloud.com (Steffen Triebel) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A couple of questions on modeling two-mode networks Message-ID: Greetings statnet-users! As this is my first message on this list, a few words about myself to start with: I am a PhD student working on my final PhD paper in which I apply ERGMs. My primary research area is interorganizational networks. So far my dissertation has focused on network dynamics. As such, I have grown pretty confident in using SAOMs, but have had limited touching points with ERGMs. 
The other thing I'm unfortunately not that well-versed in (yet), but which I will analyze in this project, are two-mode networks - I don't know why, but I just find them difficult to wrap my head around, so please forgive me if some of the questions in this mail are worded poorly or seem obvious to you. This will be a rather long e-mail with many questions that have come up for me so far, so apologies up front and thanks to anyone answering some of the points below. I have two general questions and then questions about specific effects: My network specification is somewhat odd. On the first mode (actors), I have about 9.5k nodes. On the second mode (groups), I have about 500. Most of the actors are only connected to one group, while there are some who are connected to 2 groups and very few that are connected to 3+. This is not a data error - it's just that these connections hold a lot of weight and that the actors basically connect the groups with each other and that these connections are meaningful. I am wondering if I should respect this in the model somehow by fixing certain effects (something like b1degree(1))? In a very basic specification, the model converges without issues and shows no degeneracy. About data specification: I have an edgelist with actor attributes in the columns next to it. When I read this as a network, R rightfully turns the actors into unique nodes, omitting some rows. Is there an elegant way to keep the attributes while reading in the network or should I extract the unique actors after reading in the network and then match them with the respective attributes in an additional step? I am unsure how to interpret the by-option in some effects. Would this just mean if I have a categorical variable (let's say sex), I could set this to by=sex and will get different estimates based on each categorical value? 
Now, a couple of questions to see if I understood certain effects right / understand how I should specify certain research interests: I am very interested in single actors connecting multiple groups and I reckon this is portrayed by b1star(). Am I correct to assume that b1star(2) would mean likelihood of 2 groups being connected through 1 actor and that b1star(3) would mean the same for 3 groups respectively? And adding a covariate (again, let's say sex) in the attribute-option would mean "likelihood of 3 groups being connected through 1 actor is increased for male actors"? Let's say I want to model that a female actor is more likely to be part of a certain group the more females are part of the group: Is b1nodematch(sex) correct? Notwithstanding attributes or theory, I am structurally interested in two very basic things. First: General popularity of groups (basically something like inPop in Siena models). My understanding is that b2sociality is picking out the role of single groups, so I assume it is b2degree. Is b2degree(1, levels=1) correct? Stupid addendum: How would I interpret a significant positive parameter here (just so I can get a good feel of wording and logic)? Second: I want to understand how much more likely certain groups are in choosing actors who are already part of other groups. I am absolutely unsure how to specify this, especially when attributes come into play (such as: the higher attribute value X, the more likely the group is to recruit actors already active in other groups). Thanks to anybody taking the time to read this far. Steffen Triebel -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal2992 at gmail.com Tue Oct 18 03:23:38 2022 From: michal2992 at gmail.com (=?UTF-8?Q?Micha=C5=82_Bojanowski?=) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A couple of questions on modeling two-mode networks In-Reply-To: References: Message-ID: Hi Steffen and welcome to the list! 
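[Editorial note: one possible R workflow for the data-specification question above is sketched here. It is a hypothetical example - the toy edgelist and the column names actor/group/sex are invented - not a prescription from the thread.]

```r
# Hedged sketch: reading a two-mode edgelist with actor attributes into a
# bipartite network object, keeping node and edge information separate.
library(network)

el <- data.frame(actor = c("a1", "a1", "a2", "a3"),
                 group = c("g1", "g2", "g1", "g2"),
                 sex   = c("F", "F", "M", "F"))

# Extract the unique actors together with their attributes...
actors <- unique(el[, c("actor", "sex")])

# ...build an actor-by-group incidence matrix and a bipartite network
# (mode-1 vertices are the rows, i.e. the actors)...
inc <- unclass(table(el$actor, el$group))
net <- network(inc, bipartite = TRUE, directed = FALSE)

# ...and attach the actor attribute, matched by vertex name so the
# attribute order cannot drift out of sync with the vertex order.
v1 <- seq_len(nrow(inc))
set.vertex.attribute(net, "sex",
                     actors$sex[match(network.vertex.names(net)[v1],
                                      actors$actor)],
                     v = v1)
```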
My reply will not be exhaustive (I reply below to some of your questions), but hopefully others will chip in: > My network specification is somewhat odd. On the first mode (actors), I have about 9.5k nodes. On the second mode (groups), I have about 500. Most of the actors are only connected to one group, while there are some who are connected to 2 groups and very few that are connected to 3+. This is not a data error - it's just that these connections hold a lot of weight and that the actors basically connect the groups with each other and that these connections are meaningful. I am wondering if I should respect this in the model somehow by fixing certain effects (something like b1degree(1))? In a very basic specification, the model converges without issues and shows no degeneracy. I don't think there is a straightaway answer to this question because it depends on the context. What you write about "meaningful connections" is convincing to me, i.e. actors have limited resources and being a member of a group (any group) consumes them all, thus it is not viable to be "spread too thin". When modeling, I think I would first try to find a model that contains all the theory-guided effects, including those of node attributes etc., and then add a b1degree term as necessary to fit the degree distribution better. This seems justified, as you can find ERGM applications in which, in a similar fashion, people add degree(0) to account for an unexplainably large ("unexplained" by other terms in the model) number of isolates in the data. > About data specification: I have an edgelist with actor attributes in the columns next to it. When I read this as a network, R rightfully turns the actors into unique nodes, omitting some rows. Is there an elegant way to keep the attributes while reading in the network or should I extract the unique actors after reading in the network and then match them with the respective attributes in an additional step? 
Cultures vary, but my personal workflow is to always keep two files: (1) nodes and their attributes and (2) edges and their attributes. I don't think I have any bipartite network data example on a side to explain it better, but will try to cook one up and reply separately. > I am unsure how to interpret the by-option in some effects. Would this just mean if I have a categorical variable (let's say sex), I could set this to by=sex and will get different estimates based on each categorical value? By and large that's exactly what `by=` arguments are for. > I am very interested in single actors connecting multiple groups and I reckon this is portrayed by b1star(). Am I correct to assume that b1star(2) would mean likelihood of 2 groups being connected through 1 actor and that b1star(3) would mean the same for 3 groups respectively? And adding a covariate (again, let's say sex) in the attribute-option would mean "likelihood of 3 groups being connected through 1 actor is increased for male actors"? Yes and no. You are correct about the `k` argument, i.e. b1star(3) counts actors who are connected to exactly 3 groups. However, the `attr` argument serves a different purpose, as it counts only those stars for which all the nodes have the same value of the attribute specified, i.e. b1star(2, attr="value") needs a `value` attribute defined on all the nodes and counts actors connected to two groups such that the actor and the two groups have identical values on "value". > Let's say I want to model that a female actor is more likely to be part of a certain group the more females are part of the group: Is b1nodematch(sex) correct? Yes. > Notwithstanding attributes or theory, I am structurally interested in two very basic things. First: General popularity of groups (basically something like inPop in Siena models). My understanding is that b2sociality is picking out the role of single groups, so I assume it is b2degree. Is b2degree(1, levels=1) correct? 
Stupid addendum: How would I interpret a significant positive parameter here (just so I can get a good feel of wording and logic)? b2sociality() will give you as many statistics as there are groups, and the parameters will give you a per-group indication of which groups, ceteris paribus other terms, attract more ties than other groups. I don't think we have at this moment a term that would be a bipartite analogue of gwdegree. I hope this helps, Michal From steffentriebel at icloud.com Tue Oct 18 09:25:11 2022 From: steffentriebel at icloud.com (Steffen Triebel) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A couple of questions on modeling two-mode networks In-Reply-To: References: Message-ID: <0534762D-6B35-4B69-9C6B-48B2D49DC7CF@icloud.com> Hi Michal, thanks for the detailed and well-explained response! This already helped a lot and it's great to see that the statnet list is equally friendly and welcoming as the Siena list. Reading your comments, I think I need to force myself to wrap my head around the concept of "what explains *this* network at this time" instead of "what shapes the network going forward" a bit more. To that end, I do want to probe your responses a bit if you don't mind: - The b1degree term works very well, this is great. Do I understand correctly that, given the cross-sectional nature of the data, I should try to think less in terms of tendencies ("Actors who already have a lot of ties will likely form more ties in the future.") but rather try to model observed network features ("Most actors will only have one tie.") and then leave out other features (for reference, only 14 actors out of the 9.5k have 4 ties to groups)? - My actors do not share any attributes with the groups. So if I want to model "likelihood of being part of two firms (groups) from the same industry" I should rather use b1starmix and probably neglect b1star altogether? - The help page has a gwb2degree, which I assume is the bipartite version of gwdegree. 
Say I use this with an interval-scale attribute (values 1-10) which reflects regulation of groups (I am basically modelling institutional oversight), then a positive parameter would mean that: "The more subject a group is to regulation, the higher the likelihood of having actors as members who are also present in other groups."? Thank you so much for your time already. This is incredibly helpful to me. I am very excited to dive deeper into ERGMs, Steffen On 18.10.22, 12:23, "Michał Bojanowski" wrote: Hi Steffen and welcome to the list! My reply will not be exhaustive (I reply below to some of your questions), but hopefully others will chip in: > My network specification is somewhat odd. On the first mode (actors), I have about 9.5k nodes. On the second mode (groups), I have about 500. Most of the actors are only connected to one group, while there are some who are connected to 2 groups and very few that are connected to 3+. This is not a data error - it's just that these connections hold a lot of weight and that the actors basically connect the groups with each other and that these connections are meaningful. I am wondering if I should respect this in the model somehow by fixing certain effects (something like b1degree(1))? In a very basic specification, the model converges without issues and shows no degeneracy. I don't think there is a straightaway answer to this question because it depends on the context. What you write about "meaningful connections" is convincing to me, i.e. actors have limited resources and being a member of a group (any group) consumes them all, thus it is not viable to be "spread too thin". When modeling, I think I would first try to find a model that contains all the theory-guided effects, including those of node attributes etc., and then add a b1degree term as necessary to fit the degree distribution better. 
This seems justified, as you can find ERGM applications in which, in a similar fashion, people add degree(0) to account for an unexplainably large ("unexplained" by other terms in the model) number of isolates in the data. > About data specification: I have an edgelist with actor attributes in the columns next to it. When I read this as a network, R rightfully turns the actors into unique nodes, omitting some rows. Is there an elegant way to keep the attributes while reading in the network or should I extract the unique actors after reading in the network and then match them with the respective attributes in an additional step? Cultures vary, but my personal workflow is to always keep two files: (1) nodes and their attributes and (2) edges and their attributes. I don't think I have any bipartite network data example on a side to explain it better, but will try to cook one up and reply separately. > I am unsure how to interpret the by-option in some effects. Would this just mean if I have a categorical variable (let's say sex), I could set this to by=sex and will get different estimates based on each categorical value? By and large that's exactly what `by=` arguments are for. > I am very interested in single actors connecting multiple groups and I reckon this is portrayed by b1star(). Am I correct to assume that b1star(2) would mean likelihood of 2 groups being connected through 1 actor and that b1star(3) would mean the same for 3 groups respectively? And adding a covariate (again, let's say sex) in the attribute-option would mean "likelihood of 3 groups being connected through 1 actor is increased for male actors"? Yes and no. You are correct about the `k` argument, i.e. b1star(3) counts actors who are connected to exactly 3 groups. However, the `attr` argument serves a different purpose, as it counts only those stars for which all the nodes have the same value of the attribute specified, i.e. 
b1star(2, attr="value") needs a `value` attribute defined on all the nodes and counts actors connected to two groups such that the actor and the two groups have identical values on "value". > Let's say I want to model that a female actor is more likely to be part of a certain group the more females are part of the group: Is b1nodematch(sex) correct? Yes. > Notwithstanding attributes or theory, I am structurally interested in two very basic things. First: General popularity of groups (basically something like inPop in Siena models). My understanding is that b2sociality is picking out the role of single groups, so I assume it is b2degree. Is b2degree(1, levels=1) correct? Stupid addendum: How would I interpret a significant positive parameter here (just so I can get a good feel of wording and logic)? b2sociality() will give you as many statistics as there are groups, and the parameters will give you a per-group indication of which groups, ceteris paribus other terms, attract more ties than other groups. I don't think we have at this moment a term that would be a bipartite analogue of gwdegree. I hope this helps, Michal From michal2992 at gmail.com Tue Oct 18 10:08:23 2022 From: michal2992 at gmail.com (=?UTF-8?Q?Micha=C5=82_Bojanowski?=) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A couple of questions on modeling two-mode networks In-Reply-To: <0534762D-6B35-4B69-9C6B-48B2D49DC7CF@icloud.com> References: <0534762D-6B35-4B69-9C6B-48B2D49DC7CF@icloud.com> Message-ID: Hi Steffen, > - The b1degree term works very well, this is great. Do I understand correctly that, given the cross-sectional nature of the data, I should try to think less in terms of tendencies ("Actors who already have a lot of ties will likely form more ties in the future.") but rather try to model observed network features ("Most actors will only have one tie.") and then leave out other features (for reference, only 14 actors out of the 9.5k have 4 ties to groups)? 
I think in the cross-sectional ERGM context an interpretation/narrative referring to tendencies is still valid: 1. ERGM is a probabilistic model so it specifies that certain ties are more likely than others in the sense of conditional probabilities. Thus, a statement "girls tend to befriend girls" refers to the fact that, ceteris paribus, tie probability in girl-girl dyads is higher than in, say, girl-boy and so on. But, as you wrote, the statement does not refer to any observed future. In particular, it does not imply that: should we keep observing the network all the girls will befriend all the other girls. 2. Cross-sectional ERGM, like a dynamic SAOM, does have a micro-process behind it, but: (1) the dynamics is tie-based rather than node-based, and (2) the focus is on the equilibrium of that process rather than change. In that sense over time the ties will form and dissolve but with probabilities such that on average, say, the proportion of connected girl-girl dyads will be stable at a model-specified level. See also Chapter 11 of Lusher-Koskinen-Robins book and the dynamic ERGMs such as TERGM and the `tergm` package. 3. Things are similar with node-related network characteristics, e.g. number of nodes with specific degree, say 1. Given a model, according to the ERGM micro-process the number of nodes with degree 1 will fluctuate stochastically but it should be stable in the long run around the number specified by the model parameter(s). 4. All of the above assuming the model is not degenerate... > - My actors do not share any attributes with the groups. So if I want to model "likelihood of being part of two firms (groups) from the same industry" I should rather use b1starmix and probably neglect b1star altogether? Yes, exactly. > - The help page has a gwb2degree, which I assume is the bipartite version of gwdegree. 
Say I use this with an interval-scale attribute (values 1-10) which reflects regulation of groups (I am basically modelling institutional oversight), then a positive parameter would mean that: "The more subject a group is to regulation, the higher the likelihood of having actors as members who are also present in other groups."? Of course! I missed that term, sorry! However, for the above mechanism I think you rather want the b2cov term, the parameter of which will essentially have a regression-like interpretation: a positive value will mean that the higher the oversight, the more attractive the group is to the actors (greater group degree). The gwb2degree() term would rather represent a kind of "cumulative attractiveness of groups" mechanism, namely: the more members a group has, the more members it tends to attract (where "tendency" has the meaning as above). Best, Michał From yguo.hit at gmail.com Sun Oct 23 23:36:54 2022 From: yguo.hit at gmail.com (Yu Guo) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Appropriate ergm-terms in R for ? Message-ID: Dear all, I hope this email finds you well. I am currently working on a project with one-mode networks. We're interested in how to capture the following hypothesis with ergm-terms in R. We hypothesized: 1. Actors with the attribute "female" (for example) would be more likely to act as a bridge between two actors with the attribute "male". I'm unsure if there are any ERGM-terms in R that can aptly capture this hypothesis, and any advice would be greatly appreciated! Thank you! Best, Yu Guo, Ph.D. Harbin Institute of Technology -------------- next part -------------- An HTML attachment was scrubbed... URL: From buttsc at uci.edu Mon Oct 24 00:48:30 2022 From: buttsc at uci.edu (Carter T. Butts) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Appropriate ergm-terms in R for ?
In-Reply-To: References: Message-ID: <63674263-92b0-97fa-d446-b6dbf4f93fc1@uci.edu> Hi, Yu Guo - You may want to take a look at the graphlet orbit terms in the ergm.graphlets package (https://github.com/CarterButts/ergm.graphlets). It is certainly possible to model the propensity for those with particular attributes to serve as bridges (for many different kinds of local bridges, though the one you probably have in mind is orbit 2), though there is not a specific form that selects for the full set of brokerage roles (e.g., you can model the propensity for those in group A to serve as brokers in general, but not the propensity for those in group A to broker relations between members of group B per se). This is also distinct from the inhomogeneous 2-stars, since the lack of closure here is important (which the graphlet terms do respect). If you need precisely this term, then the most direct approach is to code it up oneself using ergm.userterms. As terms go, this one is not too hard, though some C is involved. You might first want to see if you can get the observed level of brokerage to emerge from other factors, in any event, rather than start by positing a generative force for itinerant brokerage. Hope that helps, -Carter On 10/23/22 11:36 PM, Yu Guo wrote: > > Dear all, > > I hope this email finds you well. > > > I am currently working on a project with one-mode networks. We're > interested in how to capture the following hypothesis with ergm-terms in R. > We hypothesized: > > 1. Actors with the attribute "female" (for example) would be more > likely to act as a bridge between two actors with the > attribute "male". > > I'm unsure if there are any ERGM-terms in R that can aptly capture this > hypothesis, and any advice would be greatly appreciated! Thank you! > > Best, > > Yu Guo, Ph.D.
> > Harbin Institute of Technology > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > https://urldefense.com/v3/__http://mailman13.u.washington.edu/mailman/listinfo/statnet_help__;!!CzAuKJ42GuquVTTmVmPViYEvSg!LX_vxsD1tQDE-h6vttocpeTewPu_CsFb8a1IpwKV2WIktI71zX1UZId-kV504NINNdT-0H1iGWl8l1s$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffentriebel at icloud.com Mon Nov 14 04:08:21 2022 From: steffentriebel at icloud.com (Steffen Triebel) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A question about AIC/BIC in ERGMs Message-ID: Greetings statnet-users, I have read Hunter et al. 2008 (specifically p. 256ff.) about how AIC may not be the best criterion to evaluate ERGMs and that this is even more true for BIC. However, while I am also reporting and discussing the statistics/visual representations estimated through the gof-function, it is common in my field to report AIC/BIC values in ERGMs and discuss them. I am modeling a large two-mode network and am a bit puzzled about the AIC/BIC values, as they are very large. My assumption is that the size of these values is due to the large network (about 9000 "actors" and 500 "groups"). My main question is what to make of the differences in AIC values. In Kim et al. (2016), the AIC value of 1196 compared to 1264 is interpreted as "substantially smaller". The AIC values in my models are 123352 versus 125383. I am unsure if the absolute or relative difference matters: If the absolute difference matters, then a difference of 2031 would also mean the AIC is "substantially smaller". If the relative difference matters, then the AIC in my models will have been reduced by around 1/60th, versus roughly 1/20th in the work I referred to in this paragraph. Thanks for any advice on this, Steffen -------------- next part -------------- An HTML attachment was scrubbed...
URL: From buttsc at uci.edu Mon Nov 14 17:32:21 2022 From: buttsc at uci.edu (Carter T. Butts) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] A question about AIC/BIC in ERGMs In-Reply-To: References: Message-ID: Hi, Steffen - On 11/14/22 4:08 AM, Steffen Triebel wrote: > > Greetings statnet-users, > > I have read Hunter et al. 2008 (specifically p. 256ff.) about how AIC > may not be the best criterion to evaluate ERGMs and that this is even > more true for BIC. However, while I am also reporting and discussing > the statistics/visual representations estimated through the > gof-function, it is common in my field to report AIC/BIC values in > ERGMs and discuss them. > I think that the 2008 papers sounded an appropriately cautious note, given what we knew at that time. (I would count myself heavily on the "cautious" end in that case - we had very little theory on the matter, and not much experience.) At this point, however, we know a lot more. I'll give a particular shout out to Michael Schweinberger, who has done a lot of important theoretical work in showing that the asymptotic and finite-N concentration behavior we'd hope to see in ERGMs does (so far) seem to be present in reasonable cases. There's a lot more to be done there, but we now know that the dependence problem is less of a barrier to using conventional approximations than might have been feared. On the practical side, we also by now have a lot of simulation results (done by various people in various papers) that again show that the frequentist properties of the ERGM MLE seem to be pretty good for reasonable models of the type that people use. This is also encouraging. With respect to the AIC and BIC, our own simulation studies have so far indicated that using the BIC based on nominal degrees of freedom is annoyingly and unreasonably good for typical ERGMs (or at least, the ones we have looked at).
(I say "annoyingly and unreasonably" because BIC selection often beats alternatives even for outcomes for which it is not technically designed, including alternatives lovingly crafted to be superior for particular model selection goals. My experience to date has been that it is very hard to beat the BIC for the sorts of relatively low-dimensional models that we typically use in the field.) I'm unfortunately unaware of a good published comparison among model selection schemes (what I am describing above is unpublished), but that has been our experience so far. > I am modeling a large two-mode network and am a bit puzzled about the > AIC/BIC values, as they are very large. My assumption is that the size > of these values is due to the large network (about 9000 "actors" and > 500 "groups"). > The AIC and BIC are both penalized deviance metrics. Their underlying rationales are different, but the actual metrics differ only in the penalty applied to the deviance. For the AIC, this is 2 per model degree of freedom, and for the BIC it is the number of model degrees of freedom multiplied by the log of the data degrees of freedom. We do not know the effective degrees of freedom in typical ERGM settings, but can use the nominal degrees of freedom (i.e., the number of edge variables) as a proxy; one can show that this approximation is unlikely to matter much in practice, and indeed it seems not to in my experience with typical models. Since the "core" of the metric in both cases is the deviance, you will see the values become larger when the network is large. Exactly how much larger will depend on a lot of things, but at constant density you would usually expect to see the deviance scale roughly with the square of the number of vertices. (Of course, the density won't be constant in real life - it will usually fall as 1/N - but that at least gives you a sense of why it grows.) > My main question is what to make of the differences in AIC values. In > Kim et al.
(2016), the AIC value of 1196 compared to 1264 is > interpreted as "substantially smaller". The AIC values in my models > are 123352 versus 125383. I am unsure if the absolute or relative > difference matters: If the absolute difference matters, then a > difference of 2031 would also mean the AIC is "substantially smaller". > If the relative difference matters, then the AIC in my models will > have been reduced by around 1/60th, versus roughly 1/20th in the work I > referred to in this paragraph. > Generally, it is difficult to talk about "big" or "small" differences absent some additional context. But some heuristics are helpful. Under the AIC, a deviance improvement of 2 units is needed to justify adding another degree of freedom to your model, so you can heuristically think of the (deviance change)/2 as a very rough unit of improvement - under appropriate assumptions, that's how many "noise predictors worth of improvement" you are seeing. If I add a single parameter and it improves the deviance by 10, then that's about (under AIC asymptotics) five "minimal parameters' worth" of improvement. For the BIC, you could use the log of the data degrees of freedom in a similar way. It should be stressed that these are /heuristics/, and should not be taken too seriously, but can be helpful. One can also consider the fractional improvements in the deviance (as one does in the case of the R^2), and some folks do...but these can be tricky to interpret in practice for binary models. Metrics have been proposed for such things, but I am not sure that they are all that useful. In the end, the deviance is important as an objective function, and penalized deviance metrics are very useful model /selection/ tools, but usually you'll be better off actually /assessing/ models by looking at how well they do at reproducing behaviors that are substantively important.
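The penalty arithmetic described above can be made concrete in R; this is an illustration only, using the small flomarriage network that ships with ergm and the nominal dyad count as the data degrees of freedom:

```r
library(ergm)
data(florentine)  # loads flomarriage, a 16-node network with a "wealth" attribute

fit <- ergm(flomarriage ~ edges + absdiff("wealth"))

k   <- length(coef(fit))               # model degrees of freedom
n   <- network.dyadcount(fit$network)  # nominal data df: number of dyads
dev <- -2 * as.numeric(logLik(fit))    # model deviance

dev + 2 * k       # AIC: 2-unit penalty per parameter
dev + k * log(n)  # BIC: log(data df) penalty per parameter
```

The hand-computed values should track what AIC(fit) reports; the point is only to make the two penalty terms visible.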
For ERGMs, the gof() function is a starting point for that (though, in any given application, one may want to use other tests). Hope that helps, -Carter > Thanks for any advice on this, > > Steffen > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > https://urldefense.com/v3/__http://mailman13.u.washington.edu/mailman/listinfo/statnet_help__;!!CzAuKJ42GuquVTTmVmPViYEvSg!OApk5DNxXp5qh_yjXWz1Vig3IveoxpTNy_P5SoENL-pUzh1j3G7u9cRKe7t5VTEjcYTSJ1MIGrR70_BFNVhg9wU$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From vvseva at u.northwestern.edu Fri Nov 18 12:57:55 2022 From: vvseva at u.northwestern.edu (Vsevolod Suschevskiy) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] ERGMs for multilevel/multilayer networks Message-ID: Dear statnet-users, I am trying to study a set of networks of 3-4 agents of two types: humans and robots. Humans could have direct ties (enjoy working with j, j is useful) to other humans, or robots, while robots could not initiate ties. Humans and robots have different attributes. Is there an obvious way to approach such a network? I am aware of Wang, P., Robins, G., Pattison, P., & Lazega, E. (2013). Exponential random graph models for multilevel networks. Social Networks, 35(1), 96-115. implementation in MPnet, but my network has ties only at the micro (human to human) and the meso (human to robot) levels, and the macro level ties are impossible. Currently, I am looking at ergm.multi, but I have not come up with a legal way to limit human-to-human ties at the first level and human-to-robot ties at the second level with constraints. My initial idea was to assign human attributes to robots randomly, but this seems to introduce too much noise to the data. Kind regards, Vsevolod Suschevskiy 1 year PhD student at Northwestern University -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal2992 at gmail.com Wed Nov 23 04:58:11 2022 From: michal2992 at gmail.com (Michał Bojanowski) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] ERGMs for multilevel/multilayer networks In-Reply-To: References: Message-ID: Dear Vsevolod, Were you able to resolve your question? With constraints you should be able to fix an arbitrary set of dyads. Am I reading correctly that you have networks of 3 to 4 nodes? Best wishes, Michał On Fri, Nov 18, 2022 at 9:58 PM Vsevolod Suschevskiy wrote: > > Dear statnet-users, > > I am trying to study a set of networks of 3-4 agents of two types: humans and robots. Humans could have direct ties (enjoy working with j, j is useful) to other humans, or robots, while robots could not initiate ties. Humans and robots have different attributes. Is there an obvious way to approach such a network? > > I am aware of Wang, P., Robins, G., Pattison, P., & Lazega, E. (2013). Exponential random graph models for multilevel networks. Social Networks, 35(1), 96-115. implementation in MPnet, but my network has ties only at the micro (human to human) and the meso (human to robot) levels, and the macro level ties are impossible. > > Currently, I am looking at ergm.multi, but I have not come up with a legal way to limit human-to-human ties at the first level and human-to-robot ties at the second level with constraints. My initial idea was to assign human attributes to robots randomly, but this seems to introduce too much noise to the data.
> > Kind regards, > Vsevolod Suschevskiy > 1 year PhD student at Northwestern University > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > http://mailman13.u.washington.edu/mailman/listinfo/statnet_help From michal2992 at gmail.com Thu Nov 24 03:43:05 2022 From: michal2992 at gmail.com (Michał Bojanowski) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] ERGMs for multilevel/multilayer networks In-Reply-To: References: Message-ID: Hi! > I resolved my problem partially. I was recommended to ditch the multilayer approach and to use fixallbut to achieve the combination of blockdiagonal constraints and removing robot-to-human ties. Great! > At this point, I know how to remove the evaluation of robots' factor attributes with levels in nodematch and nodemix terms, but I am still trying to figure out how to remove the estimation of robots' numeric attributes with terms such as absdiff. For terms such as 'absdiff' you should be able to create a matrix of abs(x_i - x_j) values and then put 0 everywhere but the dyads of interest (e.g. only between humans) and use that matrix as input for the 'dyadcov' term. ~Michal From steffentriebel at icloud.com Mon Nov 28 13:22:57 2022 From: steffentriebel at icloud.com (Steffen Triebel) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Decay & Cutoff in gwesp parameters (& convergence issues) Message-ID: <278E63B0-6F19-4D9B-808D-FFBDD1137FF2@icloud.com> Hi Statnet-list, I am pretty happy with a two-mode ERGM I estimated for a recent project. It converges well, the fit appears to be good, and the results make a lot of sense. However, it came up in discussions with colleagues that it would be great to cater more to the strengths of ERGMs by including more controls for network closure/structure. My perspective on this is a bit more lenient, and I think that the strength of ERGMs already lies in including dyad-dependent effects.
In any case, the amount of structural network effects in my model is on the low end and I would like to at least try to include more (perhaps to improve fit or just guard myself against surprises). Currently, I only control for b1degree(1) because most actors only have affiliations to one group and b1star(2) as a generalized way of controlling for groups that are connected. There are various b1star(2, attr)-effects, but other than that only dyad-independent effects. I am wondering about 3 things currently: Is it a reasonable argument for excluding effects such as gwb1dsp from the analysis that there are no specific theoretical arguments for their inclusion and that their inclusion leads to trouble with model convergence? In the same vein: Would the inclusion of several b1star-effects be considered sufficient as controls for dyad-dependencies? As I understand it, the geometrically weighted structural effects improve fit, but my fit is quite good already. I am unsure about what to do with the decay- and cutoff-parameters. Is decay similar to the alpha-parameter in the SAOM gwesp effect, where a=0 is similar to transitive ties and the higher the parameter, the closer it is to transitive triplets? As for cutoff, I have no idea what that does. I toyed around with various parameters for gwb1dsp, gwb2dsp, gwb1degree and gwb2degree and was only able to get a model to converge that includes gwb1degree with a decay of 0.25. The result is non-significant (by a long shot). I am not sure if I'm just incredibly bad at picking these parameters or if this is a good indication that I should just keep these effects excluded. I tend to think it's the latter. Short last question: If my model does not converge and I re-estimate with init=coef(prior_model) and reiterate this 3 times with no notable improvements, am I correct to assume there is not much hope and I should abandon this specification?
Cheers & Thanks, Steffen -------------- next part -------------- An HTML attachment was scrubbed... URL: From kylequarles at gmail.com Wed Nov 30 05:47:28 2022 From: kylequarles at gmail.com (Kyle Quarles) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] question about custom frame slicing for ndtv Message-ID: Hello list, I'm a composer with an interest in using dynamic networks to understand and model musical syntax. I have a little tool I'm building for this purpose here: https://github.com/KyleQuarles/MIDI_Dynamic_Network This question concerns the ndtv package, specifically frame slicing. In the documentation, slice.par is always an attribute list, with a consistent interval and aggregate.dur for every slice; in other words, it appears the slice.par list can only generate slices of a single size for each animation. However, I'm interested in slicing at musical phrase boundaries, which are not very consistent or regular. So my wish is for the ability to send just an arbitrary list of frames to the compute.animation function, instead of an attribute list which sets them at regular intervals. Is such a thing possible somehow? I checked the tutorial/documentation and couldn't find anything about this, but perhaps I'm not looking for the right thing. Apologies if this is a duplicate question; the archive of questions here: https://mailman13.u.washington.edu/mailman/private/statnet_help is searchable, but none of the 'browse by' or download features work for me. Thank you, Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From JIMI.ADAMS at UCDENVER.EDU Fri Dec 2 08:28:44 2022 From: JIMI.ADAMS at UCDENVER.EDU (Adams, Jimi) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] question about custom frame slicing for ndtv In-Reply-To: References: Message-ID: Hi Kyle, You can also define windows with beginning and ending stamps on the time spell. 
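That idea can be sketched directly with networkDynamic (a toy example; the attribute name and spell boundaries here are made up for illustration):

```r
library(networkDynamic)

# Start from a small static network and promote it to a dynamic one.
nd <- as.networkDynamic(network.initialize(3))

# Activate vertex 1 over the spell [0, 10) ...
activate.vertices(nd, onset = 0, terminus = 10, v = 1)

# ... and attach a time-varying attribute over the same window.
activate.vertex.attribute(nd, "phrase", "A", onset = 0, terminus = 10, v = 1)

# Inspect the resulting activity spells.
get.vertex.activity(nd, v = 1)
```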
I've typically used the activate.vertex.attribute(onset=, terminus=) options (there are corresponding ones for edges) to accomplish this. There are other ways as well, but this is the one that made the most intuitive sense to me. Hope that helps! jimi jimi adams Professor & Director of Graduate Studies Health & Behavioral Sciences University of Colorado Denver o. https://ucdenver.zoom.us/my/jimiadams e. jimi.adams@ucdenver.edu w. jimiadams.com From: Kyle Quarles Sent: Wednesday, November 30, 2022 6:47 AM To: statnet_help@uw.edu; skyebend@skyeome.net Subject: [statnet_help] question about custom frame slicing for ndtv Hello list, I'm a composer with an interest in using dynamic networks to understand and model musical syntax. I have a little tool I'm building for this purpose here: https://github.com/KyleQuarles/MIDI_Dynamic_Network This question concerns the ndtv package, specifically frame slicing. In the documentation, slice.par is always an attribute list, with a consistent interval and aggregate.dur for every slice; in other words, it appears the slice.par list can only generate slices of a single size for each animation. However, I'm interested in slicing at musical phrase boundaries, which are not very consistent or regular. So my wish is for the ability to send just an arbitrary list of frames to the compute.animation function, instead of an attribute list which sets them at regular intervals. Is such a thing possible somehow? I checked the tutorial/documentation and couldn't find anything about this, but perhaps I'm not looking for the right thing. Apologies if this is a duplicate question; the archive of questions here: https://mailman13.u.washington.edu/mailman/private/statnet_help is searchable, but none of the 'browse by' or download features work for me. Thank you, Kyle -------------- next part -------------- An HTML attachment was scrubbed...
URL: From linzhu1 at stanford.edu Sun Dec 4 12:43:12 2022 From: linzhu1 at stanford.edu (Lin Zhu) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Error when using simulate Message-ID: Hi group, I've checked this in the history but it seems there's no information about it. So I use the 'simulate' function with a networkDynamic object. In some circumstances, it always stopped with an error message: Error in if (!is.null(am) && am[nrow(am), 2] == times[1]) { : missing value where TRUE/FALSE needed I'm trying to identify what caused this error but View(simulate) doesn't provide any original code and View(simulate.networkDynamic) doesn't exist. I'm using an older version of the packages (to keep my code stable) right now:

               Installed ReposVer Built
ergm           "3.11.0"  "4.3.2"  "4.0.3"
ergm.count     "3.4.0"   "4.1.1"  "4.0.3"
ndtv           "0.13.2"  "0.13.3" "4.0.2"
network        "1.17.1"  "1.18.0" "4.0.2"
networkDynamic "0.11.0"  "0.11.2" "4.0.2"
sna            "2.6"     "2.7"    "4.0.2"
statnet.common "4.5.0"   "4.7.0"  "4.0.2"
tergm          "3.7.0"   "4.1.1"  "4.0.3"

Any idea what could cause the error or how I can find the function code so I can try to identify the error? Thank you so much! Best, Lin Lin Zhu | Research Engineer Department of Health Policy | School of Medicine Center for Health Policy | Freeman Spogli Institute for International Studies Stanford University 615 Crothers Way Encina Commons, Office 113 Stanford, CA 94305-6006 650.736.8139 | linzhu1@stanford.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 6796 bytes Desc: image001.png URL: From michal2992 at gmail.com Sun Dec 4 15:08:36 2022 From: michal2992 at gmail.com (Michał Bojanowski) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Error when using simulate In-Reply-To: References: Message-ID: Hi Lin Zhu, This seems like a technical issue with 'tergm' or 'networkDynamic'. Can you please open an issue at https://github.com/statnet/tergm/issues with minimal reproducible example code that we could use to trace where the potential bug is? Best, Michal On Sun, Dec 4, 2022 at 9:48 PM Lin Zhu wrote: > Hi group, > > > > I've checked this in the history but it seems there's no information about > it. > > > > So I use the 'simulate' function with a networkDynamic object. In some > circumstances, it always stopped with an error message: > > > > Error in if (!is.null(am) && am[nrow(am), 2] == times[1]) { : > > missing value where TRUE/FALSE needed > > > > I'm trying to identify what caused this error but View(simulate) doesn't > provide any original code and View(simulate.networkDynamic) doesn't exist. > > > > I'm using an older version of the packages (to keep my code stable) right > now: > > > > Installed ReposVer Built > > ergm "3.11.0" "4.3.2" "4.0.3" > > ergm.count "3.4.0" "4.1.1" "4.0.3" > > ndtv "0.13.2" "0.13.3" "4.0.2" > > network "1.17.1" "1.18.0" "4.0.2" > > networkDynamic "0.11.0" "0.11.2" "4.0.2" > > sna "2.6" "2.7" "4.0.2" > > statnet.common "4.5.0" "4.7.0" "4.0.2" > > tergm "3.7.0" "4.1.1" "4.0.3" > > > > Any idea what could cause the error or how I can find the function code so > I can try to identify the error? > > > > Thank you so much!
> > > > Best, > > Lin > > > > *Lin Zhu * | Research Engineer > > Department of Health Policy | School of Medicine > > Center for Health Policy | Freeman Spogli Institute for International > Studies > > Stanford University > > 615 Crothers Way > > Encina Commons, Office 113 > > Stanford, CA 94305-6006 > > 650.736.8139 | linzhu1@stanford.edu > > [image: signature_3152782964] > > > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > http://mailman13.u.washington.edu/mailman/listinfo/statnet_help > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6796 bytes Desc: not available URL: From jmoody77 at duke.edu Fri Dec 16 07:13:22 2022 From: jmoody77 at duke.edu (James Moody) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Any way to bump out when error detected? Message-ID: Hey Folks - Hope you are well. Is there a control parameter or similar that would change the behavior when the model detects an error? I.e. I'm seeing this: Error in check.objfun.output(out, minimize, d) : objfun returned value that is NA or NaN MLE could not be found. Trying Nelder-Mead... Then it goes on and iterates/searches for a (in my case) hopeless solution, which is *very* slow. Is there any way to get it to just bump out when this (or similar) happens? Would like to set it up so if it detects anything like this, it just stops trying. If it were just one case, I would just interrupt it by hand ... but this is embedded in a set of 100s of such models, so running unobserved, and when it hits one of these it slows the process down to a crawl... Thanks in advance, Peaceful Thoughts, Jim James Moody Professor of Sociology Director, Duke Network Analysis Center -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal2992 at gmail.com Fri Dec 16 07:27:51 2022 From: michal2992 at gmail.com (Michał Bojanowski) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Any way to bump out when error detected? In-Reply-To: References: Message-ID: Hello Jim, AFAIK we don't have a control parameter like that at this moment. However, there is a harmless hack that could essentially stop ergm() at the very point of estimation you are referring to, namely before it falls back to alternative methods of optimizing the log-likelihood when the first method fails. Would that work for you? Best, Michał On Fri, Dec 16, 2022 at 4:13 PM James Moody wrote: > > Hey Folks - > > Hope you are well. > > Is there a control parameter or similar that would change the behavior when the model detects an error? I.e. I'm seeing this: > > > > Error in check.objfun.output(out, minimize, d) : > > objfun returned value that is NA or NaN > > MLE could not be found. Trying Nelder-Mead... > > > > Then it goes on and iterates/searches for a (in my case) hopeless solution, which is *very* slow. > > > > Is there any way to get it to just bump out when this (or similar) happens? Would like to set it up so if it detects anything like this, it just stops trying. > > > > If it were just one case, I would just interrupt it by hand ... but this is embedded in a set of 100s of such models, so running unobserved, and when it hits one of these it slows the process down to a crawl... > > > > Thanks in advance, > > Peaceful Thoughts, > > Jim > > > > > > James Moody > > Professor of Sociology > > Director, Duke Network Analysis Center > > > > > > > > _______________________________________________ > statnet_help mailing list > statnet_help@u.washington.edu > http://mailman13.u.washington.edu/mailman/listinfo/statnet_help From buttsc at uci.edu Fri Dec 16 13:09:05 2022 From: buttsc at uci.edu (Carter T. Butts) Date: Mon Mar 25 10:47:50 2024 Subject: [statnet_help] Any way to bump out when error detected?
In-Reply-To: References: Message-ID: <2c0e3b77-9ba5-e5a6-860a-1b8b6012e48b@uci.edu> Not a direct fix, but if you are fitting that many models (I presume, in a simulation study?), you may want to consider using stochastic approximation instead of the default Geyer-Thompson-Hummel scheme. (Use main.method="Stochastic" in the control() list.) SA converges more readily in non-optimal cases, and fails more gracefully. I've now started using that as a go-to for simulation studies, or other unsupervised settings where a small fraction of cases with very slow convergence can cause problems. (Just be sure to turn off the bridge sampler if you don't need it, because its settings make it become very slow if you tune up your MCMC settings with SA (which you should).) Hope that helps! -Carter On 12/16/22 7:13 AM, James Moody wrote: > > Hey Folks - > > Hope you are well. > > Is there a control parameter or similar that would change the behavior > when the model detects an error? I.e. I'm seeing this: > > Error in check.objfun.output(out, minimize, d) : > > objfun returned value that is NA or NaN > > MLE could not be found. Trying Nelder-Mead... > > Then it goes on and iterates/searches for a (in my case) hopeless > solution, which is **very** slow. > > Is there any way to get it to just bump out when this (or similar) > happens? Would like to set it up so if it detects anything like this, > it just stops trying. > > If it were just one case, I would just interrupt it by hand ... but this > is embedded in a set of 100s of such models, so running unobserved, > and when it hits one of these it slows the process down to a crawl...
> Thanks in advance,
> Peaceful Thoughts,
> Jim
>
> James Moody
> Professor of Sociology
> Director, Duke Network Analysis Center
>
> _______________________________________________
> statnet_help mailing list
> statnet_help@u.washington.edu
> http://mailman13.u.washington.edu/mailman/listinfo/statnet_help

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chenbc20 at mails.tsinghua.edu.cn  Thu Dec 22 17:52:34 2022
From: chenbc20 at mails.tsinghua.edu.cn (Chen Bochuan)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] Screenshot 2022-12-23 9.51.33 AM
Message-ID: <2357350d.53090.1853cae9d6d.Coremail.chenbc20@mails.tsinghua.edu.cn>

Hi there,

Thanks for taking the time to read this email. I'm a freshman at Tsinghua University and I'm writing to ask about the error:

"Error: `multiple` is `FALSE`, but `x` contains parallel edges. The following rows in `x` are duplicated:"

I cannot see what the problem is, so I have attached screenshots of the data.

Thanks for your time and patience; I hope to get help soon!

Sincerely,
Chen Bochuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2022-12-23 9.51.33 AM.png
Type: image/png
Size: 718009 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2022-12-23 9.51.50 AM.png
Type: image/png
Size: 731795 bytes
Desc: not available
URL: 

From chenbc20 at mails.tsinghua.edu.cn  Thu Dec 22 17:59:30 2022
From: chenbc20 at mails.tsinghua.edu.cn (Chen Bochuan)
Date: Mon Mar 25 10:47:50 2024
Subject: [statnet_help] statnet_help mailing list submissions
Message-ID: <6f8fd883.530ae.1853cb4f64d.Coremail.chenbc20@mails.tsinghua.edu.cn>

I would like to submit to the statnet_help mailing list.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chenbc20 at mails.tsinghua.edu.cn  Thu Dec 22 18:12:54 2022
From: chenbc20 at mails.tsinghua.edu.cn (Chen Bochuan)
Date: Mon Mar 25 10:47:51 2024
Subject: [statnet_help] Question about duplicated variable
Message-ID: <1de9e981.530fa.1853cc13bc2.Coremail.chenbc20@mails.tsinghua.edu.cn>

Hi there,

Thanks for taking the time to read this email. I'm a freshman at Tsinghua University and I'm writing to ask about the error:

"Error: `multiple` is `FALSE`, but `x` contains parallel edges. The following rows in `x` are duplicated:"

I cannot see what the problem is, so I have attached screenshots of the data.

Thanks for your time and patience; I hope to get help soon!

Sincerely,
Chen Bochuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2022-12-23 10.12.29 AM.png
Type: image/png
Size: 733988 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2022-12-23 10.12.48 AM.png
Type: image/png
Size: 727253 bytes
Desc: not available
URL: 

From daniel.gotthardt at uni-hamburg.de  Fri Dec 23 05:55:10 2022
From: daniel.gotthardt at uni-hamburg.de (Daniel Gotthardt)
Date: Mon Mar 25 10:47:51 2024
Subject: [statnet_help] Question about duplicated variable
In-Reply-To: <1de9e981.530fa.1853cc13bc2.Coremail.chenbc20@mails.tsinghua.edu.cn>
References: <1de9e981.530fa.1853cc13bc2.Coremail.chenbc20@mails.tsinghua.edu.cn>
Message-ID: <890f431f-8374-727a-a59c-3880388fe59b@uni-hamburg.de>

Hello Chen Bochuan,

you do have duplicates, but you could not see them because you tried to subset the whole matrix with a row indicator: df[duplicated(df)] should be df[duplicated(df), ]. For example, rows 1251 and 1258 are duplicates of each other, with the values (FKZL26-1, HBL1191-1, 1).

Best Regards,
Daniel

On 23.12.2022 at 03:12, Chen Bochuan wrote:
> Hi there,
> Thanks for your time to read the email.
> I'm a freshman at Tsinghua University and I'm writing to ask about the
> error:
> "Error: `multiple` is `FALSE`, but `x` contains parallel edges. The
> following rows in `x` are duplicated:"
> I cannot see what the problem is, so I have attached screenshots of the
> data.
> Thanks for your time and patience; I hope to get help soon!
> Sincerely,
> Chen Bochuan
>
> _______________________________________________
> statnet_help mailing list
> statnet_help@u.washington.edu
> http://mailman13.u.washington.edu/mailman/listinfo/statnet_help

-- 
Daniel Gotthardt, M.A.
Wissenschaftlicher Mitarbeiter / Research Associate

Universität Hamburg
Fakultät für Wirtschafts- und Sozialwissenschaften / Faculty of Business, Economics and Social Sciences
Fachbereich Sozialwissenschaften / Department of Social Sciences
Soziologie, insb. Digitale Sozialwissenschaft / Sociology, esp.
Digital Social Science

Max-Brauer-Allee 60
22765 Hamburg
www.uni-hamburg.de

From steffentriebel at icloud.com  Tue Dec 27 08:17:01 2022
From: steffentriebel at icloud.com (Steffen Triebel)
Date: Mon Mar 25 10:47:51 2024
Subject: [statnet_help] Interpreting attribute-based geometrically weighted effects in two-mode networks
Message-ID: <88B51891-4696-426C-BD2B-284EB92E21FE@icloud.com>

Dear all,

I hope you had enjoyable holidays.

I'm currently working on a large two-mode network (500 events, 9500 actors). My current specification includes mostly effects such as nodematch or b2star in combination with attributes, and gives theoretically meaningful results. I initially thought that the fit (both upon visual inspection and based on p-values) was no reason for concern, but upon closer inspection it seems that the MCMC diagnostics point to model degeneracy. As far as I can tell, the trace and density plots are all fine. The Geweke diagnostics are all far from zero on all individual sample statistics, but the auto-correlation, while constantly decreasing, takes quite a while to get close to zero (my understanding is that any statistic after lag 0 should be close to zero).

To improve this, I tried adding geometrically weighted effects (gwb1degree, gwb1dsp, ...). It seems that no matter what combination I try here (effects, decay, and cutoff), I can't get the models to converge. My guess is that this is because the network is highly formalized and not as organic as, for example, a school class.

Now, I tried substituting the attribute-based b2star effects with gwb2degree(attr="covar_name", decay=0.5, fixed=T), and these work fine, which is good. However, I am at a loss as to how to interpret this, as each gwb2degree effect produces two significant parameters (the attributes are dummy covariates, so one for each value) with basically the same parameter estimate. The estimated parameters are also very large.
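Concretely, the substitution described above looks roughly like this (a sketch only, with placeholder names: `net` stands for my bipartite network object and `"covar_name"` for one of the dummy event attributes):

```r
# Rough sketch with placeholder names (net, covar_name), not a full model:
# a fixed-decay, attribute-based geometrically weighted b2-degree term,
# estimated with the control settings listed below.
library(ergm)

fit <- ergm(
  net ~ edges + gwb2degree(0.5, fixed = TRUE, attr = "covar_name"),
  control = control.ergm(
    MCMLE.maxit     = 200,
    MCMC.burnin     = 3000,
    MCMC.interval   = 5000,
    MCMC.samplesize = 10000
  )
)
summary(fit)  # one gwb2degree parameter per level of the dummy attribute
```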
In sum, I have four questions:

1. Are the MCMC diagnostics as described here actually a reason for concern?
2. How do I interpret the parameters that gwb2degree produces?
3. Anything else I can do to improve my model? My controls currently are: (MCMLE.maxit=200, MCMC.burnin=3000, MCMC.interval=5000, MCMC.samplesize=10000)
4. How can I specify the constraint that actors have a minimum degree of 1 and events have a maximum degree of 40? This should also help with further estimations, I hope.

Cheers & enjoy the rest of the year,
Steffen

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
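For the degree constraints asked about above, one common starting point in ergm is the bd() ("bound degree") constraint. The sketch below is illustrative only: `net` is a placeholder, and because bd() as written here applies its bounds to all nodes, whether the bounds can be made mode-specific (actors >= 1, events <= 40) in the installed ergm version should be checked against ?ergmConstraints before relying on it.

```r
# Illustrative sketch only (net is a placeholder network object).
# bd() bounds node degrees during estimation; as written, the bounds apply
# to every node, so a truly mode-specific constraint (actors vs. events)
# may need a different formulation -- see ?ergmConstraints.
library(ergm)

fit <- ergm(
  net ~ edges + gwb2degree(0.5, fixed = TRUE),
  constraints = ~ bd(minout = 1, maxout = 40)
)
```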