Andrew Hayes discusses "Modern Integration of Mediation and Moderation Analysis"

Just a touch on his background for a few minutes: his real research is in quantitative methodology, particularly linear models, and that's the reason we are here today. What is very interesting about his work is that he has made this type of analysis accessible to all researchers. He is very productive and very active; if you look at his web page you'll see that he will be traveling a lot in the next few months, and we are grateful that he made time to come visit us. Without any further introduction, let me welcome Dr. Hayes.

Thank you for the invitation. It's a pleasure to be here today. I'm going to give you a very broad and very fast overview of the integration of mediation and moderation analysis. This will require about two-thirds of my time to be dedicated to the fundamentals, so it really isn't until the last half hour or so, as I start to run out of time, that things get most exciting. But we do have to spend some time going over the fundamentals, and along the way you're going to be introduced to a statistical tool that I've created that makes the kind of work I'm going to describe very, very easy. Those of you who will be around after lunch for my workshop will get lots of hands-on experience working with this tool.

So here's where we're going. We're going to talk about mediation analysis; I'm going to give you a fairly modern orientation to it, because things have changed a lot since 1986, when people first started thinking and writing about mediation analysis in the literature most frequently. Then you'll get a little review of the fundamentals of moderation analysis. Then we'll combine them together, and that's where things get most exciting: I'm going to integrate them into a procedure that I've called conditional process modeling.
That's not a term you have probably heard before, and that's because I made it up, but it's something I think you will hear more about in the years to come. And again, I'm going to describe a tool that I've invented for making all of this very easy to do, in software that you no doubt are already using.

For today I'm going to be relying on data provided to me by a gentleman at a business school, Jeff Pollack; I don't want to get this wrong on camera, but I believe it's the University of Richmond. These data, although the article was published recently, are several years old at this point, and they come from a time in which the US economy was in pretty bad shape, at least in terms of recent history. Things are improving quite a bit now, in spite of Congress's attempts to thwart our progress, but the context of this study is a bad economy. We've got data from a number of people who are entrepreneurs, generally business owners trying to make their way in this economy. These entrepreneurs, all members of a social networking site focused primarily on small business, were asked a series of questions relating to how much stress they feel with respect to the economic climate they're in and how it's affecting their business. They were also asked a series of questions tapping into their business-related depressed affect: how despondent or hopeless do you get, how concerned are you about your well-being with respect to your business. The primary outcome of interest we're going to use is this thing called withdrawal intentions, intentions to withdraw from entrepreneurship. This taps into people's projection as to whether they feel like maybe they're going to get out of this, that they're going to go and do something else instead of trying to weather a difficult economy. And then we have a measure of business-related social network size. So participants in this study
were asked a series of questions to tap into how deep or rich their social ties are in this social networking site. Primarily it's a measure of how many contacts they have on a day-to-day basis regarding their business or business-related matters. So these are the four variables we're going to focus on primarily; there are a couple of control variables that I will mention in a second.

This is where we're headed: toward this model, a model which contains both a moderation and a mediation component, something I call a conditional process model. And this is a conceptual model. When you see boxes and arrows you start thinking about path analysis and structural equation modeling, and that's not what this is representing in a formal sense. This is a conceptual model; it outlines a process in visual terms, but as we'll see when we go to actually estimate this model, it's going to look different, in the form of what we call the statistical model. So here's our question: does business-related depressed affect function as a mechanism through which business-related economic stress influences an entrepreneur's intention to withdraw from entrepreneurship? That's a mouthful. And is the strength of this mechanism contingent on the extent to which one feels interpersonally connected to a network of peers in the business community? What we'll do with this model is break it up into its parts. There's what we call the mediation component, just this triangle; there's a moderation component, which is here; and when you tie them together you have something called a conditional process model, where we can model the contingent nature of mechanisms.

So let's start with mediation. A mediation model is a model in which we are linking some kind of construct, something we believe is the causal agent, to its influence on some outcome variable Y, and that influence of cause on effect occurs at least in part through some kind of an intermediary variable M. Now, those intermediary variables, what are sometimes called mediators, could be anything. It could be a psychological state of some kind, it could be some kind of a cognitive process, it
could be a biological response to something, it could be emotional; it could be anything. It's the thing we conceive of as the mechanism through which X is influencing Y. In effect, it's an attempt to answer the how question: how is X influencing Y? It's occurring through this mechanism, this mechanism being what we'll call an indirect pathway from X to Y. So this intermediary, or what is called a mediator, is something that has to be thought of as causally located between X and Y. In other words, it's the middle variable in a causal chain: X causes M, and that influence on M then goes on to influence Y. I say this because now and then I'll get questions from people about mediation analysis, and when you probe further you realize their M is not between X and Y; it's somewhere else out in some conceptual space. And that's not a mediation process, if we can't at least make this assumption that M is causally between X and Y.

Now, these are causal models. They're going to carry with them all the standard criteria we use for making cause-effect claims, and we know that very often, typically in fact, it's very difficult to meet these conditions. Covariation: it turns out covariation is not a requisite condition for causation, even though we often teach it that way, covariation generally being thought of as one of those criteria. The other is temporal precedence: cause precedes effect. And the third is that you've ruled out alternative explanations for the association. Often we don't have the ability to meet those conditions in the data we have. Sometimes our theory may be the strongest thing we've got to hang our causal claims on. That's okay: if that's the best you've got, and you can make the argument, and you can make the argument convincingly, that's sort of what science is about, partly, making that argument. This is important because I get questions all the time from people with data of a purely correlational nature, which this example I'm going to be using is; it's a purely correlational study. And they want to know: can I do this stuff, can I do this kind of analysis? They're looking for permission, I think, and the answer is yes, of course. We shouldn't limit ourselves; we shouldn't limit the mathematical tools that we can bring to understanding our data just because we can't meet certain conditions we would prefer to meet, if we could, for making these kinds of claims. As long as we recognize the limitations of our data, we can do anything we want with them mathematically that helps us to better understand, so long as we couch our interpretations with the appropriate caveats. I like to say that inferences are products of mind, not math. The inferences are up here; they're not in the mathematics. They're what we do with the information the mathematics gives us. The math doesn't even know where the data come from or anything about the design. Our inferences will always be design-bound to some extent; the inferences are up here, not in the math.

Okay, this is a simple model, a simple mediation model, that we're going to talk about. That's not to say, however, that it is trivial, even though maybe it's an oversimplification of more complex processes. This is not at all uncommon; you see these kinds of analyses being done all the time, and here are a few. These are just screenshots of figures in various journals in the marketing area. It's not difficult to find these; I've got a catalogue of hundreds of these images. Here are a few from social psychology. Social psychologists love this stuff (that's my background), and they are doing this all the time. I've done content analyses of the social psychology literature to show that you really can't read the typical issue of a social psych journal, like the Journal of Personality and Social Psychology, without encountering at least a couple of these. In fact, I just did a content analysis of Psychological Science, which is a broad journal spanning the entire discipline of psychology, and every single issue over a two-year period had at least one of these kinds of analyses in it, and typically more; as many as seven in one issue did I find. I don't know if there are any people from the
health area in the room, but health researchers do this as well, as do people in public health, communication, and sociology. So people are doing this all the time, and no doubt you will as well.

I want to spend some time going over some of the mechanics of path analysis. This is important because, when we get to the end, where we start integrating mediation and moderation, we're going to have to play around with regression coefficients that are estimated in a variety of ways. So, a very rudimentary primer on path analysis. Here's our simple mediation model, and up here I've placed an even simpler model without a mediator in it at all. We have three variables, and we're going to quantify one variable's effect on another. Now again, remember, I condition that claim of "effect" always on the quality of my data and my research design. I'll use the term effect here maybe when it's not technically warranted, but you get the idea: in the end, the claims I make have to be based on the design, not the math. So I'll use the word effect, but I'll use it loosely. How do we quantify the effect of one variable on another? We can do that a lot of different ways, but I think regression analysis is a very simple and general way of thinking about quantifying effects, and everything I'm going to talk about today applies to OLS regression, and mostly applies when you do this kind of analysis with continuous outcomes, which is how I'm going to be doing all of this. This is generally true in structural equation modeling as well, as long as we have an M and a Y that are treated as continuous; it doesn't matter whether they actually are, you're just going to be analyzing them as if they are. So let's say we want to quantify X's effect on Y. Let's just regress Y on X; that gives us a regression coefficient, and I'm going to speak in unstandardized metrics here.
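As a quick aside, the regression logic just described is easy to see in code. This is a minimal sketch with simulated data (not the entrepreneur data from the talk): the OLS slope from regressing Y on X is the coefficient that quantifies X's effect on Y.

```python
import numpy as np

# Minimal sketch with simulated data (the entrepreneur data from the talk
# are not reproduced here). Regressing Y on X gives the coefficient c: the
# estimated difference in Y between two cases that differ by one unit on X.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)  # true intercept 1.0, true slope 0.5

# Ordinary least squares: solve for [intercept, c] in Y = i1 + c*X + error.
design = np.column_stack([np.ones(n), x])
intercept, c = np.linalg.lstsq(design, y, rcond=None)[0]
print(intercept, c)  # estimates should land near the true values 1.0 and 0.5
```

Any regression program (SPSS, SAS, R) does exactly this; the code is just to make the estimator concrete.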
That's the metric I prefer, but this applies to standardized metrics as well. That regression coefficient, c, quantifies X's effect on Y. More formally, it tells us how much two cases that differ by one unit on X are estimated to differ on Y. So that regression coefficient we'll just call c, and here is the equation for that simple regression line: we're just estimating Y from an intercept plus X, which is given some weight, the coefficient c. Let's do the same thing for M: regress M on X, and you get a regression coefficient, a, which estimates how much two cases that differ by one unit on X are estimated to differ on M. That's the second equation. In the third equation we estimate Y from both X and M. That gives us two regression coefficients, one for X and one for M, which I've labeled b and c' (c-prime). So b estimates the amount by which two cases that differ by one unit on M, but that are equal on X, are estimated to differ on Y. And c' has the reverse interpretation, reversing the roles of X and M: how much two cases that differ by one unit on X, but that are equal on M, are estimated to differ on Y. So now we have these four regression coefficients, and it turns out there's a relationship among them, expressed here: c, which we call the total effect of X on Y, cleanly partitions into two components, the direct effect of X on Y, which is c', plus the indirect effect of X on Y through M, which is the product of a and b. That's this equation here: c = c' + ab. The total effect of X on Y is c, the direct effect of X on Y is c', and the product of a and b is the indirect effect of X on Y through M. A little algebra shows that the indirect effect can be thought of as the difference between the total and the direct effect: ab = c - c'. Those of you who have done mediation analysis know that people sometimes make a big deal of how much an effect changes when you introduce a mediator, and that's reflected here in that difference; it's just the indirect effect.

Okay, so this relationship between the direct, total, and indirect effects applies to any data you can give to an OLS program. It doesn't make any difference; it will always be true. As long as you're treating M and Y as continuous, this relationship will hold. You can't give me an example where that wouldn't be
true, because it won't be true. It's not very often that I'm comfortable speaking in absolutes, but this is one of them. Now, we do research, and often we're doing research with data that don't lend themselves to unequivocal causal interpretation; we have to deal with confounds, right? Relationships can exist for a number of reasons, among them that they are spurious: all kinds of shared influences, shared causes. It's nice to be able to get rid of those statistically, and what I want to show you here is that this relationship holds even after you account for those confounding variables. All I've done here is take my model and add (here I've added only one, but you could have many of these) a potential confounder variable into each of these equations. We give it some weight, we let an OLS regression program figure out how to weight these things to produce the best-fitting model, and this decomposition still holds, so long as you partial these potential confounders out of both M and Y. The example we'll use today does include three potential confounding variables.

Okay, if you don't want to take my word for it, let me illustrate. Here I'm predicting, from our entrepreneurs' reported feelings of economic stress, their withdrawal intentions. These are all scaled such that higher values represent more: higher stress, higher withdrawal intentions. I've also got three variables that I'm partialling out: the sex of the entrepreneur; their tenure, which is essentially how long they've been in this business; and ESE, entrepreneurial self-efficacy. So when we regress withdrawal intentions on economic stress we get c of 0.019; you can see it in green, but there it is. That comes from this regression where I'm predicting withdrawal intentions from economic stress plus these confounders. Here's an SPSS command that generates that output using these data, so
that's the total effect of economic stress on withdrawal intentions, controlling for sex, tenure, and entrepreneurial self-efficacy. We do the same thing estimating depressed affect from economic stress, as well as the potential confounders, and we get 0.159 as our regression coefficient a.

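The decomposition described above, c = c' + ab with confounders partialled out of every equation, can be verified numerically. Here is a sketch using simulated stand-in data; the variable names and coefficient values below are made up for illustration and are not the real estimates from the talk.

```python
import numpy as np

# A sketch verifying the decomposition c = c' + a*b with a covariate
# partialled out of every equation. The data are simulated stand-ins for
# stress (X), depressed affect (M), and withdrawal intentions (Y); none of
# the numbers here are the real estimates from the talk.
rng = np.random.default_rng(1)
n = 1000
cov = rng.normal(size=n)                                 # e.g., self-efficacy
x = 0.3 * cov + rng.normal(size=n)                       # "economic stress"
m = 0.16 * x + 0.2 * cov + rng.normal(size=n)            # "depressed affect"
y = 0.7 * m - 0.1 * x + 0.1 * cov + rng.normal(size=n)   # "withdrawal intentions"

def slopes(outcome, *predictors):
    """OLS coefficients for the predictors (intercept dropped)."""
    design = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(design, outcome, rcond=None)[0][1:]

c_total = slopes(y, x, cov)[0]         # total effect of X on Y
a = slopes(m, x, cov)[0]               # X -> M
b, c_prime = slopes(y, m, x, cov)[:2]  # M -> Y, and the direct effect of X

# The identity holds exactly, whatever the data, as long as the same
# covariates appear in every equation.
print(c_total, c_prime + a * b)  # the two numbers agree
```

This is the point made in the talk: the partition is an algebraic property of OLS, not something that depends on the data behaving.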
One more regression, predicting withdrawal intentions from all five of these variables, gives us b of 0.707. That same model, the same regression but a different coefficient, also gives us the direct effect, c'. So let's carry it all together: our direct effect is c', our indirect effect is the product of a and b, and our total effect is c. And what you can see (the whole point of this is to show you I wasn't lying to you before) is that it works: the total effect partitions perfectly into those two components, the direct effect of X and the indirect effect of X.

So how do we interpret this? Here is a generic way of thinking about the direct, indirect, and total effects, and then I'll give you a more specific interpretation relative to these findings. We can say that two people who differ by one unit on X are estimated to differ by (total effect) units on Y on average; that's c. They differ by (indirect effect) units on average as a result of the effect of X on M, which in turn affects Y. And then there's something left over, and that's the direct effect: the rest of that difference, a difference of (direct effect) units, is due to the effect of X on Y that's independent of M. Substitute our variables into this generic interpretation and you get this: two entrepreneurs who differ by one unit in their economic stress (one unit on whatever metric is being used to measure stress), but who are the same in sex, tenure, and entrepreneurial self-efficacy, because we're holding those constant, partialling them out in the model, are estimated to differ by 0.019 units in withdrawal intentions, again in whatever metric withdrawal intentions is measured in. The person with one unit more stress is estimated to be 0.113 units higher in withdrawal intentions as a result of the positive effect of stress on depressed affect, which in turn affects withdrawal intentions. So that's 0.113 units due to stress positively influencing depressed affect, which in turn positively influences withdrawal intentions. And then, holding depressed affect, tenure, sex, and entrepreneurial self-efficacy constant, an entrepreneur one unit higher in stress is estimated to be 0.094 units lower in withdrawal intentions: lower, because the direct effect is negative; higher X is paired with lower Y, holding M constant.

So let's shift to inference. We have two effects that are primarily of interest to us. Some people care about the total effect, but focus on the total effect in twenty-first-century mediation analysis is not as common as it was in the twentieth century. So there are primarily two effects we care about, the direct effect and the indirect effect; we get the total effect from the two of those. Inference focuses on: can we make anything of this in a statistical sense? We need to deal with alternative explanations; statistical control can help us there, but the most parsimonious of all alternative explanations for a relationship is chance, right? That's primarily what we're doing when we deal with inference: how plausible is chance as an explanation for this obtained result? Now, if you're a confidence-interval advocate, chance is not necessarily something you focus much on; we care more about inference about where in the realm of possibilities the effect resides, rather than whether zero is among those possibilities. Either way, whether you're a confidence-interval advocate or a hypothesis-testing advocate, inference about the direct effect is for the most part not controversial, and we get an inferential test from any regression analysis; any regression program will give that to us. So here's our direct effect of -0.094, and we see from our standard regression table that the p-value attached to it is 0.077, or a confidence interval which straddles zero. So if you want to take the .05 criterion seriously and literally, we would say, well, that's no evidence of an effect of stress on withdrawal intentions after you account for differences between these entrepreneurs in how depressed they are. Again, there's nothing controversial about that, nor anything particularly interesting about it either.

Where things get fun, from my perspective at least, is dealing with the indirect effect, because that's where the most controversy is; that's where most of the literature in statistical mediation analysis has been over the last ten to fifteen years. The indirect effect estimates the influence of X on Y through the mechanism represented by the causal sequence X to M to Y, and twenty-first-century mediation analysis bases a claim of mediation on evidence that the indirect effect is different from zero. We can quantify that indirect effect; I showed you how, it's the product of these two coefficients. Well, there is a popular, sort of twentieth-century, approach to inference called the Sobel test, which is pretty much a standard inferential procedure: take the estimate of the indirect effect and divide it by its standard error. There are a variety of standard-error estimators out there in the literature, differing primarily in what happens with a third term, and we use the letter Z for that ratio because we are using the normal distribution as a representation of the sampling distribution. That is, we invoke the assumption that this sampling distribution is normal under the null hypothesis, and with that ratio you can calculate a p-value for the attained result. It's a standard approach; there's nothing particularly insightful about it, but all the information you need is available in any regression program: we've got estimates of the regression coefficients, which we square or don't; we've got standard errors of those regression coefficients, which we square; and we take the square root. Nothing terribly fun about doing this computation by hand, but it's not difficult, and it's available in any regression analysis that you
got. So you can plug away and get something, and what this shows us is that that indirect effect, by this calculation, is statistically significant, that it's different from zero. But this test has serious problems, and I don't recommend using it. The problem with the Sobel test is that we're making this assumption of normality of the sampling distribution of the product of a and b; that assumption is what gives rise to the use of the standard normal distribution for generating the p-value. It's not a bad assumption to make in big samples, although even then you could make the case that the assumption of normality tends not to be true; it's just not as bad a violation in big samples. But in small to moderate samples it's just flat-out false. We know that the product of two random normal variables is itself not normal; it can be shown analytically that that's the case, and it's easy enough to show through simulation as well. So the problem is that the normal distribution is not an appropriate reference distribution for generating these probabilities, and in part as a result of this, the Sobel test is among the lowest in power of methods for detecting indirect effects. So experts in this area are not recommending this test, although it does remain popular; you see people doing it all the time still. I think eventually researchers are going to get the message, and I see evidence of that happening already. Some work that I've done with colleagues, like Kris Preacher, has helped to make the alternative a little bit more popular, because it's a little bit easier to do than it used to be. So the bootstrap confidence interval is what I'm recommending these days. Now, there is some literature out, or coming out, showing that this can be improved upon, so maybe in ten years we'll be doing something different. But right now this
is what people are doing, based on the advice being given by people like me and others. Bootstrapping is an inferential method that's been around for quite some time, but this is a place where it seems particularly well suited, because we're dealing with a situation in which the sampling distribution of this product can have a pretty irregular shape. That shape has been analytically derived, and it's relatively difficult to work with. This is a good situation in which bootstrapping is great, because bootstrapping is what we call a resampling method, which allows us to make inferences based on what we call an empirical approximation of the unknown sampling distribution. I say empirical because we're going to actually generate a representation of the sampling distribution through repeated sampling of the data.

Here's how this works. We have a sample; we'll call that a sample of size n. That's the original data set of size n: n rows in a data file. Let's take a random sample of our sample: a random sample of size n, so we're going to create a new sample from our original sample that is of the same size. And we can do that, without it being silly, because we're doing this with replacement. When we build a new data set from our original data set, sampling with replacement, if we draw a particular row from the data file as we're building the new sample, we put that row back in to be potentially redrawn. If we didn't do that, obviously, each time you did this you'd get exactly the same data set, so you have to do this with replacement. So: take a random sample of the original sample, of size n, sampling with replacement, and then we'll just say, well, this is our new data set, and we'll calculate the indirect effect in that new data set. And we'll do this over and over again: sampling n cases, again with replacement, from the original data set, calculating the indirect effect in that new, what's called a bootstrap, sample. Repeat this many, many times; I recommend at least 5,000. So after this, imagine you've got 5,000 of these estimates of the indirect effect. In effect, this is a data set with 5,000 rows. There's a distribution there; you could imagine visualizing it with a histogram or something like that. That's an empirical representation of the sampling distribution of the indirect effect when sampling from the original population. So we use that distribution of the indirect effect over multiple resamples, multiple bootstrap estimates, as an approximation of the sampling distribution of the indirect effect in the original data. With this distribution, we can generate an empirical
estimate, an empirical confidence interval for the indirect effect. Just imagine sorting these from low to high, throwing out the lowest two and a half percent and the highest two and a half percent, and taking the two end points that are now the lowest and the highest in that distribution; those are the endpoints of a 95% confidence interval. It makes no difference what the shape of the distribution is; it doesn't matter what's happening in the tails or what it looks like in the center, because you're basing your inference only on those two end points after throwing out the extreme two and a half percent of the data on each side. There are variations on this; what I've described is the percentile approach to calculating a bootstrap confidence interval. There are also things you can do called bias correction, or bias correction and acceleration, which in theory should be better, although in practice that turns out not always to be the case. Now, this seems like a difficult thing to do, and it obviously requires a computer; it's not something you're going to do by hand. But it's a perfect task for a computer; this is what computers are for. So here's an empirical representation of the sampling distribution of the indirect effect in this model. I actually did ten thousand, so ten thousand bootstrap estimates of the indirect effect; here is a representation of that sampling distribution, those ten thousand bootstrap estimates. We threw out the lower and the upper two and a half percent, and the two values that remain at the bottom and the top are 0.056 and 0.17. Those are the endpoints of a 95% confidence interval for the indirect effect, and because zero is not in that interval, we use that as evidence that the indirect effect is different from zero, with 95% confidence. It's sort of like saying p < .05, although not technically that, because hypothesis testing is conditioned on a true null hypothesis, whereas this is not. I always get questions (and I'm happy to take questions later) about the rationale for this: is this cheating, are we making up data? No, but it's worth spending some time talking about those issues for people who are asking.

Okay, so do you need special programming skills to do this? Well, I have some programming skills, but I wouldn't want to be doing this from scratch each time I have a data set to analyze. Do you recognize this? You've got to be probably around my age or older to recognize it. What is this? BASIC, yes. Can you be more specific? Recollection is difficult, but I remember this very well: this is the screen that comes up when you turn on a Commodore VIC-20. This was a big deal in the eighties, when I was in high school; I think it came out in 1982, and it was really the first computer available to people at a reasonably good price. This machine is credited, in part, with bringing personal computing into the home; you could say that maybe Apple stakes a claim to that as well. My father bought me one of these when I was young, and I loved this thing. I spent so much time on it; I taught myself to program on this machine. I'd find myself in my room, just sitting there, and they'd have to call me to come out for dinner. My first publication, actually, was when I was 14, in a magazine that supported this home computing platform; I published a number of things over the years, and this particular cover actually shows my listing on the screen. You can't see it, but I wrote a math learning utility for the VIC-20. So I've been programming and publishing since early high school, and I credit my father in many ways for where I've ended up in that sense, because I think that really influenced where I went. I've spent a lot of time in the last ten years writing software, call them add-ons for existing software, to make modern methods of data analysis a little bit easier for people.

So, we were talking about the bootstrap confidence interval and how it would require some kind of computer programming skill. But that's what programmers do; you as a user don't have to think about that if you've got software that does it. And so I've spent some time with a colleague of mine, Kris Preacher, writing about mediation analysis, and in particular describing some of the principles I've just gone through, and others, while also providing computational aids that make this easy for people to do. Before 2012, these are the types of procedures you might have been using. We published something in 2004, a tool with a really bad name, called SOBEL, that does some of this bootstrap stuff in simple models like I've just described. We did a follow-up paper in 2008 for more complex models, with a macro called INDIRECT. These are both available for SPSS and SAS, and you'll get some experience using them if you hang around with me later today and tomorrow. They're fairly easy to use; in fact, for SPSS you can integrate them right into the SPSS menus, so it's point-and-click, not difficult at all. But these are pretty much obsolete now, because at the beginning of last year I released a new tool. I have been writing tools like this for mediation and moderation analysis for the last seven or eight years, and when I started teaching this more formally at Ohio State and elsewhere, I realized, boy, it's kind of tough to keep track of which tool does what. So I took a sabbatical a couple of years ago and thought, one thing I'm going to do on this sabbatical is write a new tool that does everything all my other tools do, all in one nice package. And that's what PROCESS is. It does the kind of analysis I just described and a whole lot more; it combines all the features of the other macros I've written over the years. I didn't know how this would be received, but I liked it; it made my teaching a lot easier. People seem to enjoy it; I've heard people call it addictive, in some ways. Somebody down the hall says, you know, I don't get much sleep anymore; I stay up at night using PROCESS, analyzing my data all these different ways. It was so easy to do, with this tool, things that normally are difficult.
to be on write code or do all kinds of juggling a data file around in various ways process makes pretty easy anyway this is freely available you can find it for my home page so here is an example of process output can you see this in the back some of these are okay okay so this is I know some of the text is a little bit like so here is a super mediation analysis in process
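As an aside on what a tool like this is doing under the hood: a simple mediation analysis is just two regressions plus resampling. The sketch below is only an illustration of the percentile-bootstrap idea on simulated data; it is not the PROCESS macro, and every variable and coefficient in it is made up.

```python
import random

random.seed(12345)

# Simulate data with a known indirect effect: X -> M -> Y
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]          # true a = 0.5
y = [0.4 * mi + 0.2 * xi + random.gauss(0, 1)
     for xi, mi in zip(x, m)]                             # true b = 0.4, c' = 0.2

def mean(v):
    return sum(v) / len(v)

def slope_a(x, m):
    # OLS slope of M on X (simple regression): the a path
    mx, mm = mean(x), mean(m)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    return sxm / sxx

def slope_b(x, m, y):
    # OLS coefficient on M from regressing Y on X and M: the b path
    mx, mm, my = mean(x), mean(m), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (sxx * smy - sxm * sxy) / det

def indirect(x, m, y):
    return slope_a(x, m) * slope_b(x, m, y)

# Percentile bootstrap: resample cases, recompute a*b each time
boot = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([x[i] for i in idx],
                         [m[i] for i in idx],
                         [y[i] for i in idx]))

boot.sort()
lo, hi = boot[24], boot[974]   # approximate 2.5th and 97.5th percentiles
print(indirect(x, m, y), lo, hi)
```

If the interval from `lo` to `hi` does not straddle zero, the indirect effect is deemed different from zero, which is exactly the inference described above.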

So here's our conceptual model, and this is the statistical model. In a mediation analysis the statistical and the conceptual model are nearly identical; when we do moderation analysis, you'll see there's a big difference between the statistical and the conceptual model. This is the model we're going to estimate, and this is a more formal path diagram. You can see it has the error terms and so forth, the things you would see if you were drawing it as a structural equation model in a program like Amos. PROCESS is used as syntax, and here is the SPSS code that would generate this analysis; I have a version for SPSS and a version for SAS as well, down here. You'll learn after lunch today how all this works and what each piece does. The point is that it's just a simple command you type into an SPSS syntax window, and once you've defined PROCESS, it will do this analysis. The SPSS version does have a dialog box, so if you can install it (you need administrative access to the machine) you can actually integrate it right into the menus, which makes it a little bit easier, and I know a lot of people like that. What I'm passing around is output from PROCESS. I don't want to spend lots of time dwelling on this; we'll go over it after lunch, and for those of you who won't be with me, I think you'll find it very straightforward to interpret. This is the first page, and this is the second page of the handout. What you see is that PROCESS automatically does all these regressions. Here's our model of depressed affect, which includes all these predictors, and there's our a coefficient, the a path. Then it automatically does the second regression, withdrawal intentions on its predictors, and there are b and c′. And then, because I requested it with an option, we get this thing called the total effect model, which gives us the sum of the direct and indirect effects; that's what we call c, the total effect. Here's where all the fun is: a summary where we get the total effect of X on Y, there's that number I showed you earlier, the sum of the direct and the indirect effect; there's the direct effect; and then the indirect effect, and because I asked for it, a bootstrap confidence interval, the lower and upper bounds of a 95% confidence interval, which doesn't straddle zero. It generates the product for us, so that's our indirect effect, a times b. It will even generate an estimate of the standard error, though the inference here comes from the bootstrapping rather than from something like the Sobel test. I failed to mention at the very beginning, and I say this because I'm going to skip a couple of things but I don't want you to feel cheated, that these slides are all available right now: if you have a computer with a wireless connection you can download all the files from my webpage. It's easy enough, slash public and then slash concordia; point your browser or your iPad at it right now. So I'm going to skip this for now and come back to it later if time allows, because I want to get into moderation. Moderation sounds like mediation, but it is conceptually very different and statistically totally different. Even though the two words are similar and often used interchangeably, they shouldn't be interchanged; they are very, very different. So we talk about some variable X's effect on Y. That effect is moderated if its size or direction is dependent on some third variable. It could be more than one variable, but in this case I'm talking about a single variable, and I'm going to use M here, not to confuse you: we were using M for mediators, and here I'm using it as a moderator. Context determines whether M is a mediator or a moderator, and we're now talking about moderation, not mediation. So moderation refers to the conditions that facilitate or inhibit an effect; people talk about it as addressing questions of who and when: for whom, or under what circumstances, is this

effect large versus small, present versus absent? So here's a graphical, conceptual depiction of moderation, where X's effect on Y is itself influenced by something else; that's the arrow from M pointing at the path from X to Y. M is depicted here as moderating the size of the effect of X on Y, so that effect depends on M, and in that case we would say that M is a moderator of the X-Y relationship. We also use the term interaction; interaction and moderation are the same thing, different terms for the same idea. X we call the focal predictor, and M the moderator. It's nice to use those terms, because when we talk about these models it helps to be clear about which variable is the causal agent you're interested in (that's the focal predictor) and which is the moderator. Now, a very quick overview of one property of partial regression coefficients: they are unconditional effects. Consider a simple regression model with only two predictors, X and M. Here's an arbitrary example; I just pulled some coefficients out of a hat. So here's a model of Y from two variables, X and M; here are various values of X and M and the values of Ŷ the model generates; and here's a graphical depiction of that model, with X on the x-axis, Y here, and lines for various values of M showing what Ŷ is for those combinations of X and M. The thing to point out is that regardless of which value of M we choose, a one-unit difference in X is associated with the same expected difference in Y. Between any two values of X that differ by one unit, the difference in Ŷ is the same no matter which value of M you pick; that's the property of parallelism apparent in the figure. It makes no difference which line you pick: as you move from left to right, changing X by one unit (and in this case b1, the coefficient for X, is 0.1), Ŷ increases by 0.1 regardless of the value of M. So we would say X's effect is independent of M. b1 is an unconditional effect; it's not conditioned on the value of M or any other variable in the model. That's a bad property if you're interested in moderation, because moderation means the opposite: that a variable's effect is contingent on another variable in the model. If you've ever done moderation analysis, you'll recognize where this is going. Let X's effect be a function of M: I've substituted a function f(M) for the partial regression coefficient b1, and just for simplicity let's make that function linear. We could choose other functions, but linear functions are the most commonly used. Substitute that function in, rewrite with a little algebra (distribute the X across the two terms), and you wind up with this: by letting X's effect be a linear function of M, I end up with a model in which the product of X and M has been added as a predictor, and in that model X's effect is a linear function of M. The strength of that moderation is determined by the size of b3. Here's an example of how that works. I've added XM as a new predictor and given it some weight; same data set, different estimates for Y of course, because it's a different model. But now two cases that differ by one unit on X differ on Ŷ by a different amount depending on which value of M you choose, as reflected in the slopes, which are not parallel. How far they diverge from parallel is determined by the size of b3. If you set b3 to 0, you are back to the model in which X's effect is unconditional.
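That algebra is easy to check numerically. A minimal sketch, with coefficients made up for illustration rather than taken from the example data:

```python
# Unconditional model: Yhat = b0 + b1*X + b2*M            (parallel lines)
# Conditional model:   Yhat = b0 + b1*X + b2*M + b3*X*M   (slope of X depends on M)
b0, b1, b2, b3 = 1.0, 0.1, 0.3, 0.2   # made-up coefficients

def yhat(x, m, w3=b3):
    return b0 + b1 * x + b2 * m + w3 * x * m

def slope_of_x(m, w3=b3):
    # Expected change in Yhat for a one-unit change in X at a given M;
    # algebraically this is b1 + b3*M
    return yhat(1, m, w3) - yhat(0, m, w3)

# With b3 != 0, X's effect changes with M (the lines fan out) ...
print([round(slope_of_x(m), 10) for m in (0, 1, 2)])     # [0.1, 0.3, 0.5]

# ... and with b3 = 0, X's effect is the same at every M (parallel lines)
print([round(slope_of_x(m, w3=0), 10) for m in (0, 2)])  # [0.1, 0.1]
```

The difference between any two slopes one unit of M apart is exactly b3, which is why b3 carries the moderation.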

With b3 equal to zero the lines are parallel; as b3 deviates from zero the lines become nonparallel, and the larger b3 is, the less parallel they are. b3 is doing the work. Here's another way of representing this, and I'll be using this notation throughout the rest of the day, so let me go over it. Here's our model with the product of X and M as an additional predictor. We saw that we can rewrite it this way: group the terms involving X, then isolate X by factoring it out. That's an equivalent representation of the model. Now let's call that function θX→Y, the conditional effect of X on Y. θX→Y is a function of M, defined by b1 + b3M. There's a nice way of representing this conceptually, where our focal predictor is X, our moderator is M, and the effect of X on Y is this function θX→Y = b1 + b3M. So b3 plays an important role in moderation, because if b3 is 0, the whole function reduces to just b1, and X's effect is no longer dependent on M; it's unconditional. So we test for moderation by testing the null hypothesis that b3 = 0 against some alternative. Here's the example again: three variables from our original data set. I've changed my variable labels around, for reasons you'll see later. X is still economic stress; the outcome we're now calling depressed affect, Y (before, we were calling that M); and now I've got another variable I'm calling M, the social ties variable, the strength of their business-related social networks. We want to know: do social ties influence the relationship between stress and depressed affect? Do social ties somehow buffer, or amplify, the effect of stress on depression? Here's the model in conceptual form, where my focal predictor is economic stress, the variable whose effect on the outcome Y I'm interested in quantifying, and I propose that this effect may be moderated by social ties. And then I've got a bunch of covariates that I'm including; I did this earlier, so I'm going to stick with that. Here's the model in statistical form. This is what it would look like as a structural equation model, although I've left off the error terms, and you would often let all these predictors covary if you were drawing it out in Amos or another program. The important point is that this conceptual model translates into a single equation, and this is a graphical representation of that equation. Our focus is on b3: we want to test whether b3 is different from zero. It's the coefficient for the product of X and M in this model. Here's simple SPSS code. All I'm doing is creating a new variable, the product of economic stress and social ties, and then including it as a predictor in the model. And here's the output I get. Here is our equation, here's a graphical representation of it, and my regression coefficient for the product term is minus 0.212, and it's statistically different from zero; the p-value is smaller than 0.05. From that, all we can conclude is that the effect of economic stress on depressed affect seems to depend on, is moderated by, social ties. That's all we've got so far. This is a very abstract mathematical representation of the data, and how to make sense of it is anything but clear. We have an estimate, we know it's different from zero, but what does that mean exactly, except that the relationship between stress and affect is dependent on social ties? A picture is always helpful, and I would argue in fact mandatory before you start talking about these things: you have to draw a picture. Here is a picture of the moderation. What am I doing here? At the top we see the regression model with the six regression coefficients now included, and there's b3, the regression coefficient for the product term. There are two terms involving X in this model, so I rewrite the equation.
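To make that concrete: the conditional effect is just b1 + b3M evaluated at moderator values of interest. A small sketch using the reported product coefficient of minus 0.212; the value of b1 below is hypothetical, since only b3 is quoted here:

```python
# Conditional effect of economic stress (X) on depressed affect (Y):
#   theta = b1 + b3*M,  where M is social ties
b1 = 0.586           # hypothetical; the talk reports only b3
b3 = -0.212          # coefficient for the X*M product term

def theta(m):
    return b1 + b3 * m

# Evaluate at a few values of the social-ties moderator
for m in (1.0, 1.5, 2.0, 2.5):
    print(m, round(theta(m), 3))
```

With these numbers the effect is clearly positive at low social ties and shrinks toward zero as ties increase, which is the buffering pattern described next.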

Grouping the terms involving X and factoring X out gives the function I've been calling θX→Y: the conditional effect of X on Y is b1 + b3M. Here I've plugged some values of social ties from the data into this function, and you get θX→Y at those values, and those correspond to the slopes of these lines. What seems to be happening is that social ties buffer the effect of economic stress on depressed affect. Among people who report relatively few social ties (and 1 is the lowest value in the data; remember, these are members of a social networking site, so they've all got at least one person in their network), the relationship between stress and depression seems to be positive. But don't think of these as absolute numbers; this isn't how many friends you have. It's an index based on frequency of contact with members of the group, so those with more frequent contact have stronger ties. Among those with greater ties the relationship is weaker, if anything zero, and maybe even slightly negative for those with a much stronger social network. That's a lot easier to see than trying to interpret that minus 0.212. So a picture is helpful, and this picture lets us see how these slopes map onto the function θX→Y. We have a number of tools available for doing this, one of them being the pick-a-point approach; you might also hear this called an analysis of simple slopes. Same thing, different label. When we probe an interaction this way, we pick values of the moderator and then test a hypothesis about the effect of X on Y at those values. So: select a value of the moderator at which you'd like to estimate the focal predictor's effect, then derive a standard error and proceed as usual. We can use that information to test a hypothesis about, or generate a confidence interval for, the conditional effect θX→Y. We already know how to compute θX→Y for any value of M we choose; we need a standard error, and it turns out that's not a difficult problem, not even all that difficult to calculate by hand. You get most of what you need from standard regression output: standard errors and the coefficients themselves. But you don't get one piece, the covariance between the regression coefficients, automatically; you have to ask for it, and then know where to find it. And with any luck you plug the right things into the right locations and get a number that is almost certainly going to be a little wrong anyway, because rounding error creeps in. You're working to two or three decimal places, probably, and that can matter when you start squaring small numbers and taking square roots. So you could do this by hand, but you're likely, like me, to make mistakes along the way. There are online tools that do it for you, and PROCESS will also do it, so there's no need to go digging through a regression textbook to find the formulas. You'll see output for that in a moment. I've implemented the pick-a-point approach here using those values of M I chose earlier, probing the conditional effect of X on Y, with a t and p-value from this method, and here's what you get. For those who are pretty low in social ties, arbitrarily 1 and 1.5, the relationship between stress and depression is statistically different from zero, but at the other values I chose, 2 and 2.5, it is not. Of course, the problem with this is: how do you choose M? What do we use to guide that decision? And certainly the value of M you choose is going to influence the result you get.
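The hand calculation being warned about here is short enough to sketch. The standard error of θ = b1 + b3M combines var(b1), var(b3), and the cov(b1, b3) term you have to request specially. All the numbers below are hypothetical stand-ins for real regression output, chosen only to reproduce the qualitative pattern (significant at low M, not at high M):

```python
import math

# Pick-a-point: t-ratio for the conditional effect theta = b1 + b3*M,
# using var(b1), var(b3), and cov(b1, b3) from the regression output.
# All numbers are hypothetical stand-ins.
b1, b3 = 0.60, -0.212
v11, v33, v13 = 0.040, 0.010, -0.015   # var(b1), var(b3), cov(b1, b3)

def conditional_effect(m):
    return b1 + b3 * m

def se_conditional(m):
    # se(theta) = sqrt(var(b1) + 2*M*cov(b1,b3) + M^2*var(b3))
    return math.sqrt(v11 + 2 * m * v13 + m * m * v33)

for m in (1.0, 1.5, 2.0, 2.5):
    t = conditional_effect(m) / se_conditional(m)
    print(m, round(conditional_effect(m), 3),
          round(se_conditional(m), 3), round(t, 2))
```

Comparing each t-ratio to a critical value near 2 reproduces the kind of table described above, and it also shows why hand calculation invites rounding trouble: the squares and square roots involve small numbers.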

Often there's no principled basis for choosing values of the moderator; instead you rely on convention. The most common convention is to choose the mean of the moderator and one standard deviation below and above the mean, and use those as operationalizations of relatively low, moderate, and relatively high on the moderator. Those are arbitrary, of course; there's nothing magical about those values, and you could choose others, but regardless, you're not going to get around the arbitrariness of the selection most of the time. Depending on the variable and how you've measured it, there might be some practical or applied meaning to certain values, but most likely you'll find yourself in a situation where the decision is arbitrary. There is a better method that gets around this problem entirely, and that's what we call the Johnson-Neyman technique. The Johnson-Neyman technique has been around since the 1930s; it comes out of the analysis of covariance literature. When you do an analysis of covariance, you want to compare differences between two groups on some outcome, but those groups might differ on some other variable whose effect you want to partial out. Analysis of covariance makes an assumption of homogeneity of regression: you have to assume that the relationship between the covariate and the outcome is the same in the two groups. What if you can't buy that assumption? The Johnson-Neyman technique came out of that question: how do you make an inference about differences between groups when you can't meet the homogeneity-of-regression assumption in analysis of covariance? The technique has since been generalized to any regression model with interactions; that was done in 2005 or 2006, I believe, by a group at the University of North Carolina. Since then, and since I published some work implementing it in SPSS and SAS in a 2009 paper, it's caught on, and I think that once you learn how easy it is, and once you come to appreciate what it gives you, this will probably be what you do in the future. Since somewhere around 2005 to 2009 we've seen more and more people using it. Anyway, let me describe it, and then we can get excited about it. It essentially reverses the problem. With the pick-a-point approach we have to choose M and then find out whether the conditional effect of X on Y is significantly different from 0 at those values. But how do you choose M? Let's reverse it: let's find the values of M for which the effect of X on Y is significantly different from 0, and the values for which it is not. It turns out you don't have to do a whole lot of computation for this, because you can conceptualize the problem in terms of identifying what are called the regions of significance, and do it mathematically. Let me show you what we're doing. Here's our conditional effect. With the pick-a-point approach we choose a value of M, generate a standard error for the conditional effect at that value, compute t, and from that a p-value. Now, we don't care so much about t itself; it's really just a way of getting to p, and the p comes from the t distribution. We don't even really care exactly what the t value is, so long as it's at least the value that gives us a p-value of 0.05: the so-called critical value of t. So let's reverse the problem and decide in advance what we'd like the critical value to be. It depends on the alpha level you choose for the test; it's generally around 2, sometimes a little less or a little more depending on the sample size. Decide what we want the critical t to be, and then figure out what values of M produce a t exactly equal to that. Now, that's important because those values define what are called the regions of significance; they are points of transition. Think about the moderator's continuum: as you move from left to right along it, you can calculate the conditional effect of X and put a p-value on it, and that p-value changes depending on where on M you are. There will be some point, or points, at which the effect goes from significant to nonsignificant as you increase M; then perhaps it stays nonsignificant for every larger value, or it becomes significant again as M changes further, or it is nonsignificant at the extremes and significant in between. These points of transition are important because they define the regions of

significance. So what we're doing with the Johnson-Neyman technique is finding the values of M that define those points of transition. Now, you do some mathematics from about grade 9 or 10, maybe grade 8 if you were advanced, and if you work through it, what you end up with is a quadratic equation: isolate M, set the expression to zero, and you have a quadratic, and you use the quadratic formula to generate its roots. So this ends up as a quadratic with two roots, but not all of those roots are going to be valuable. Some will be values of M outside the range of the data, and some may be imaginary numbers, but there may be some, if they exist, that are within the range of the data. That's all we're doing; I'll go more into it after lunch for those of you who are hanging around. As I mentioned, this method has become more popular because it's now pretty easy to do in software you're already using. A colleague of mine, Jörg Matthes, now at the University of Vienna, and I wrote a paper where we talk about these methods of probing interactions and provide a tool called MODPROBE, and it's getting quite a bit of use, though PROCESS renders MODPROBE obsolete, because PROCESS does all of this too. So here it is: I'm going to implement this moderation analysis in PROCESS. This is the conceptual model we're estimating, here is its statistical representation, and here is the PROCESS command that does the analysis. The first thing you'll find as you look at the output is just the standard regression. Our outcome is depressed affect, and it's got six predictors, one of them a product; see, here it says int_1, and that's the product term. Notice that I didn't generate that product: there's no product term in this command. PROCESS is doing that for you; it knows what to do and generates the product itself. So this is just standard least-squares regression. You'd see b1, b2, and b3 in this output just as if you had done it in SPSS yourself, though there you would have to construct the product on your own. Then PROCESS has a certain intelligence built into it for deciding, absent any other instruction from you, which values of M to choose when it implements the pick-a-point approach. In this case it has chosen these three values of the moderator, social ties, and it has calculated θX→Y, the conditional effect of X on Y, at those three values, with standard errors, t values, p-values, and confidence intervals. The intelligence works like this: if you haven't said anything otherwise and it sees that your moderator has more than two values, it assumes you're treating it as a continuum of some kind, and it automatically implements the old mean plus and minus one standard deviation convention. If it sees a dichotomous moderator, with only two values in the data, it just picks those two values, because it assumes they define groups and you probably want the conditional effect for each group. In this case it goes even further, because it chooses 1 for the bottom value of social ties, and here's the reason. The sample mean of social ties is about 1.3, and 2.0934 is a standard deviation above the mean; but if you go the same distance below the mean, you go beyond the range of the data, because 1 is the lowest value in the data. PROCESS knows that, and it takes 1 instead of giving you a number that's outside the range of your data. I don't know how often you've done the pick-a-point approach and ended up in a situation where that low value falls outside your data, but PROCESS picks that up for you and doesn't let you make that mistake. So that's the pick-a-point approach; PROCESS does that for you whether you want it or not, and you can't stop it. What is an option is the Johnson-Neyman technique. You have to ask for that, but it's simply four characters in the command, and

this is the output you get. It finds two values of M that define the regions of significance of the effect of X on Y; those values are 1.89 and 2.81. That by itself is not very helpful, though; we need more information. So it also generates a table in which it slices the moderator's distribution into 21 values, gives you those values, calculates the conditional effect at each of them, and inserts into the table the two values it identified as defining the regions of significance. From this you look at the p-values, and the point is not to interpret each of these as a separate little hypothesis test; the point is to see what's happening at the points of transition. Notice: here's one of the points of transition, 2.81, and you see the p-value there is exactly 0.05, and everywhere beyond it the effect is significant. At 1.89, the other point of transition it found, the p-value is 0.05, and everywhere below that value the effect is also significant; but anywhere between those two values, you see, the effect of X on Y is not significant. [Audience question: are the 21 values of the moderator drawn from the data, or does it just take the range and slice it up?] It just takes the range and slices it up evenly; they're not actual groups, and they're not necessarily actual values. You may not have anyone in the data at those exact values. This is really more of a kind of tabular picture. It would be nice to produce the actual figure automatically, but that's quite different: it's easy to do once you know how, but writing code to generate the graph automatically, and relatively error-free, would create a lot of email for me, and that's hard. So this table is my alternative: a tabular representation of a figure like this, a plot of the regions of significance.
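The grade-school algebra mentioned above can be sketched directly: set θ(M)² equal to t²crit times se(M)² and solve the resulting quadratic in M. The coefficients and (co)variances below are hypothetical; they are not the values that produced 1.89 and 2.81 in this example.

```python
import math

# Johnson-Neyman: find M where theta(M) / se(M) equals the critical t.
#   theta(M) = b1 + b3*M
#   se(M)^2  = v11 + 2*M*v13 + M^2*v33
# Setting theta^2 = tcrit^2 * se^2 gives A*M^2 + B*M + C = 0.
# All numbers below are hypothetical stand-ins for real output.
b1, b3 = 0.60, -0.212
v11, v33, v13 = 0.040, 0.010, -0.015   # var(b1), var(b3), cov(b1, b3)
tcrit = 1.98

A = b3 ** 2 - tcrit ** 2 * v33
B = 2 * (b1 * b3 - tcrit ** 2 * v13)
C = b1 ** 2 - tcrit ** 2 * v11

disc = B ** 2 - 4 * A * C
roots = []
if disc >= 0 and A != 0:
    roots = sorted(((-B - math.sqrt(disc)) / (2 * A),
                    (-B + math.sqrt(disc)) / (2 * A)))
# Complex roots, or roots outside the observed range of the moderator,
# are set aside; any that remain are the points of transition.
print(roots)
```

With these particular numbers one root lands in a plausible range and the other falls far above it, illustrating the point that not every root of the quadratic is a usable point of transition.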
so normally you see this would be like an outcome variable like Y but no this is an effect this is this estate an X Y here on the y axis here’s our modern maker the red line is that cushion say s1 and then I’ve placed confidence dance I know whether you can see it with the gray but 1.89 3d visit here’s a line going up there and then 2.80 there’s a line moving up there what you should notice is that the confidence interval includes zero between those two times that is the confidence interval for theta X Y for values of social ties between those two conferences will include 0 for values to the left of that 1.89 the confidence interval isn’t happening above zero so zero is here and four values is the writer 248 the conference animals actually encountered the most now there’s almost no one in data is high so I really wouldn’t interpret this as a negative association that’s person because I know the data there’s really not enough data up here to make much of this there’s plenty of data here though so what I see happen here is that among those who are thought to be less connected in the business community they’re feeling these positive effects of stress the people who are more connected are okay so that’s pretty neat it’s gets slices to watch our hair the tide of your awareness of the choice of M instead you just analytic and figure out what are those regions of significance as you’ll see after lunch it may be that there are no regions of significance or no points of transition and there are a number of possible things that can occur to make no points of transition all right so now here we are finally where things get exciting and as I promised I start running at a time and I certainly want to leave some time to take your questions ok so combining moderation mediation analysis what I call conditional process model we’ve essentially all we’re doing is were merging two concepts process analysis or mediation oz the same thing process with you analysis was the original term 
It goes back to the early days. And moderation analysis, the notion of interaction. We're putting these two things together to model the contingent nature of mechanisms: if indirect effects quantify mechanisms, and interactions quantify contingencies, then combining them allows us to model the contingencies of mechanisms. That's all this is. People have been talking about this for a while; I'm certainly not the first, although I am the first to use the term conditional process modeling. How long it's been around depends on your perception of history. Actually, it hasn't been that long: it's been eight or nine years now, though maybe it was the 1980s when

people first started talking about this, but nobody was really systematic about it until around 2005. Then there were a bunch of people doing this, myself included, and here we are today with conditional process modeling. Here's the gist of it. We know that in a model like this we can quantify the indirect effect: it's the product of two paths. But what if the path from X to M is moderated? Or perhaps the path from M to Y is moderated? Or both could be moderated. If that's the case, then, as we saw earlier, in those situations X's effect on M is a function of the moderator, or M's effect on Y is a function of the moderator, and that means our indirect effect is no longer just a single number. It's a function. In other words, the magnitude of the indirect effect will depend on at least one other variable in the model. In a model like this, that's what we call moderated mediation: the mediation, the indirect effect, is moderated. In that situation it becomes sensible to ask, okay, what is the value of the indirect effect conditioned on a value of the moderator? These are what we call conditional indirect effects. How do you quantify them? How do you make inferences about them? Direct effects can also be conditional: you can have a situation in which the effect of X on Y independent of M is itself moderated. Here are some of the possibilities, and this is just a very small sampling of the possibilities you can imagine, even with a very simple mediation component, a simple triangle. And yes, people are doing this. Here are a few examples: at the top, examples in which the X-to-M path is moderated by W, something called first-stage moderated mediation, meaning the first stage of the mediation component is moderated; at the bottom, two examples of what we call second-stage moderated mediation, in which the path from M to Y is moderated. So let's go back to our example. Here's the model that I started with. I said this is where we're working toward, and here we are.
This is the model that I showed you at the beginning of this talk. It's a conditional process model. It proposes that the indirect effect of economic stress on withdrawal intentions through depressed affect is moderated, moderated at the first stage by social ties, but we're not allowing the direct effect of stress on withdrawal intentions to be moderated. That's a decision we're making in specifying the model. Okay, so here's the conceptual model, and here is its translation into statistical form. Making that link from conceptual to statistical is probably the hardest part of learning this: how do I take my conceptual model and turn it into a set of equations? This is the model in statistical form. It's represented with two equations, because we have two outcomes: the mediator, depressed affect, is an outcome, and withdrawal intentions is an outcome. So we have two equations. Let's break this up so you can see where it all comes from. Okay, so here's our conceptual model. One component of this model is a moderation component, where X's effect on M is moderated by W; think of that as just a moderation model, plus a couple of controls, because there are control variables in the model. So that part of the conceptual model translates into this part of the statistical model, this part of the equation for M. Now notice: you might look at this and think, okay, X's effect on M is a1. Well, no, it's not, because remember this path is a function of W, as you can see by grouping the terms involving X and factoring out X. That path is actually a1 + a3W.

Another way of writing this part of the model is to substitute θX→M, where I've defined θX→M as that function of W, a1 + a3W. Does that make sense? Okay, what about this component, the mediation component? We have the effect of M on Y, we have the direct component down here, and we have the covariates. This corresponds to the parts in red in the statistical model: M's effect on Y, holding everything else constant, is b1 in this case. It's not moderated; it's not a function of anything. So that allows us to talk about the notion of a conditional indirect effect. b1 is the effect of M on Y, and there's no moderation there, so the indirect effect of X on Y through M isn't a1 times b1; it's the function θX→M times b1. Doing all the substitution, expressing it purely in terms of variables and coefficients, that's the indirect effect of X on Y through M, and it depends on W. So that's our conditional indirect effect. What if, as in this model, there is also a direct effect and it's not moderated? It's merely the effect of X on Y independent of M, and that's just c'. Okay, so now you've seen how this all works without the numbers. Let's put in the numbers from the analysis itself. For a1 and a3, we've already done this: we did it earlier, in the moderation part of this talk, and I'm just repeating it here. This is the same model we used when we were modeling the moderation of the effect of stress on depressed affect; you've seen this output before. You have to bring in the coefficient on X as well as the coefficient on the product, so a1 and a3 are those two values. And b1 we've seen before too: there's the 0.707 that we saw in the simple mediation analysis. So put those together: the conditional indirect effect of X on Y through M is the product of the conditional effect of X on M, a1 + a3W, and the effect of M on Y, b1.
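Putting the pieces just described together, this is only a compact restatement of what's described above, with the covariates abbreviated:

```latex
\begin{aligned}
M &= i_M + a_1 X + a_2 W + a_3 XW + (\text{covariates}) + e_M \\
Y &= i_Y + c'X + b_1 M + (\text{covariates}) + e_Y \\
\theta_{X \to M} &= a_1 + a_3 W && \text{(conditional effect of $X$ on $M$)} \\
\theta_{X \to M}\, b_1 &= (a_1 + a_3 W)\, b_1 && \text{(conditional indirect effect of $X$ on $Y$ through $M$)} \\
\text{direct effect} &= c' && \text{(unmoderated in this model)}
\end{aligned}
```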
There it is in generic form, and there it is with the numbers from the regression analysis plugged in. Now we can proceed much as with the pick-a-point approach: just choose some values of the moderator and figure out what the conditional indirect effect of X on Y through M is at those values. So here I just arbitrarily chose some values of the moderator, social ties. We know that different values will produce a different conditional effect of stress on depressed affect; that effect depends on social ties. The effect of depressed affect on withdrawal intentions does not depend on social ties, at least not in our model. It's not that it couldn't; we just haven't built the model in a way that allows it to. Those two numbers, multiplied together, give us the conditional indirect effect of stress on withdrawal intentions through depressed affect, and it depends on social ties. And it seems that the indirect effect decreases with increasing social ties.
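The pick-a-point arithmetic just described is nothing more than plugging moderator values into (a1 + a3W)b1. A minimal sketch, with stand-in coefficients rather than the talk's actual estimates:

```python
# Pick-a-point probing of a conditional indirect effect: (a1 + a3*w) * b1.
# These coefficients are illustrative stand-ins, not the estimates from the talk.
a1, a3 = 0.57, -0.21   # X -> M: coefficients on X and on the X*W product
b1 = 0.71              # M -> Y: coefficient on M, holding X constant

def conditional_indirect_effect(w):
    """Indirect effect of X on Y through M at moderator value w."""
    return (a1 + a3 * w) * b1

for w in (1.0, 2.0, 3.0, 4.0):
    print(f"W = {w}: indirect effect = {conditional_indirect_effect(w):+.3f}")
```

With a negative a3, the indirect effect shrinks (and eventually changes sign) as the moderator grows, the same qualitative pattern as in the social-ties example.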

Now, these are just descriptions, mathematical descriptions of point estimates. You would like to do inference as well. Just as in the pick-a-point approach, where we can estimate a conditional effect and then make an inference about it, here we can estimate the conditional indirect effect and then make an inference about it, and bootstrap confidence intervals are our great friend here, because, again, this involves products of normally distributed regression coefficients. It's not as obvious here, but there are products of regression coefficients involved, so the sampling distribution is going to be irregular, and a bootstrap confidence interval would be a great inferential tool for that. I'll get to that in a second; let me just talk briefly about estimating the direct effect. Fortunately, it comes right out of the model for Y: it's just that c' path, and that's minus 0.094, which is not statistically different from zero. You saw that at the start of the talk. Here's a visual representation of the direct and indirect effects in this model. What I've done here is plot the effect as a line, with social ties on the horizontal axis and the effect on the vertical axis, and it's either the direct or the indirect effect depending on the color; the gray is the direct effect. Then I've added the two endpoints of a confidence interval. The direct effect is not a function of W in my model, so it's the same anywhere you look: whatever value of social ties you pick, the estimate is minus 0.094, and the confidence interval includes zero regardless of that value. For these relatively low values of social ties, though, the bootstrap confidence interval for the indirect effect does not include zero; it's positive. For these more moderate and higher values of social ties, both the direct and the conditional indirect effects are not different from zero. So the mechanism at work here, through depressed affect, seems to be operating primarily for people with relatively few social ties.
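A bootstrap confidence interval for a conditional indirect effect works just like the ordinary bootstrap for a1b1: resample rows, re-estimate both regressions, recompute (a1 + a3w)b1 at the moderator value of interest, and take percentiles. The sketch below runs on simulated data; the sample size, effect sizes, and probed value of W are all made up, not the study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data standing in for the real sample; all effect sizes are made up.
n = 200
x = rng.normal(size=n)                             # e.g., economic stress
wmod = rng.uniform(0.0, 5.0, size=n)               # moderator, e.g., social ties
m = 0.6 * x - 0.2 * x * wmod + rng.normal(size=n)  # mediator, e.g., depressed affect
y = 0.7 * m - 0.1 * x + rng.normal(size=n)         # outcome, e.g., withdrawal intentions

def indirect_at(w0, idx):
    """Conditional indirect effect (a1 + a3*w0) * b1 estimated from rows idx."""
    Xm = np.column_stack([np.ones(len(idx)), x[idx], wmod[idx], x[idx] * wmod[idx]])
    a = np.linalg.lstsq(Xm, m[idx], rcond=None)[0]   # a[1] = a1, a[3] = a3
    Xy = np.column_stack([np.ones(len(idx)), x[idx], m[idx]])
    b = np.linalg.lstsq(Xy, y[idx], rcond=None)[0]   # b[2] = b1
    return (a[1] + a[3] * w0) * b[2]

w0 = 1.0                                             # probe the effect at W = 1
boot = [indirect_at(w0, rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])            # 95% percentile bootstrap CI
print(f"conditional indirect effect at W={w0}: CI [{lo:.3f}, {hi:.3f}]")
```

PROCESS does exactly this kind of resampling for you; the point of the sketch is only that no normality assumption about the product's sampling distribution is needed.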
There's no evidence of a direct effect anywhere, and no evidence of an indirect effect anywhere except among those with relatively few social ties. Now, where do those endpoints come from? I talked about bootstrapping. This is a 2007 paper that I wrote with Kris Preacher and a colleague at the business school at Northwestern, Derek Rucker, in which we introduced these ideas of conditional indirect effects and talked about different inferential methods for them. We talked about a normal-theory, Sobel-type approach to this, which I don't recommend. We also derived Johnson-Neyman results for conditional indirect effects, which seemed like a natural extension, but I actually don't think that was such a good idea anymore, and I think Kris and I are in agreement on that. Sometimes you have what seems like a good idea, and then you realize later on, maybe not. I don't really recommend it, because the derivation of the regions of significance for conditional indirect effects invokes an assumption that doesn't help us: the assumption of normality of the sampling distribution. So I'm not so comfortable with that; bootstrap confidence intervals are what I prefer. PROCESS does all the stuff that the tool we described in that article, MODMED, does, so MODMED isn't even useful anymore; PROCESS does everything it did except the Johnson-Neyman stuff. So here is this model implemented in PROCESS. Here's our conceptual model, here's our statistical model, and here's the code that generates the output you see in front of you. What you'll see here is our model of depressed affect. This is just standard least-squares regression output, and there are our coefficients a1, a3, and a2; we don't need a2, but there it is. So there are a1 and a3. Notice that PROCESS is generating that XW product for us automatically; of course, you don't have to do that yourself. It knows, when I specify the model number, that it needs that product.
And so there's our output for the withdrawal intentions model, which doesn't have a product in it. There's b1, and there's c'. Then PROCESS gives us this summary, and again, it knows what this model looks like; it knows there's an unmoderated direct effect, so we just get that in the summary. There's that minus 0.094. And it knows the conditional indirect effect is a function, so it has chosen values of the moderator and calculated the conditional indirect effect at those values of the moderator.

Now, the endpoints for that figure I actually generated not with that command. There's an option in PROCESS that allows you to specify whatever value of the moderator you want, and it will then generate the conditional indirect effect at that value. That's how I used PROCESS to generate the endpoints for the figure. Okay, the last thing I want to talk about, and I'm only going to give you a hint of this (if you're going to be with me at the end of the day tomorrow, you'll get more details), is my next work in progress. The paper on this is about a third of the way done, but I'll give you a preview. Here's the issue: we have not really formally tested for evidence of moderated mediation. We've done it through a sort of piecemeal approach, a logical argument that says, well, one of those paths is moderated, and therefore the indirect effect must be moderated. But it's possible to be more formal than that, and it relies on this insight: we can think of mediation as being moderated if the size of the indirect effect depends on a moderator, so that pattern of results implies moderated mediation. We've seen evidence of moderation of one of the paths, and we've seen that for some values of the moderator we have evidence of an indirect effect different from zero, but not for other values of the moderator. We haven't formally tested whether the indirect effect itself is moderated, though. We tested whether one of the paths is moderated, but the indirect effect is not that one path; it's the product of paths. But look at this: this line is that function. The conditional indirect effect for this model is a line. Let's make that clear just by rewriting it in this form, where I've distributed the b1 among the terms and plugged in the two coefficients, and that's the equation for this line. The slope of this line is minus 0.150. The question is: is that slope statistically different from zero? That becomes a formal test of moderation of the indirect effect.
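That "slope of the line" argument can be made concrete. Because the conditional indirect effect (a1 + a3W)b1 = a1b1 + (a3b1)W is linear in the moderator, the single number a3b1, its slope (in later terminology, the index of moderated mediation), carries all of the moderation of the indirect effect, and testing it against zero is the formal test being described. The coefficients below are illustrative stand-ins, not the talk's estimates:

```python
# The conditional indirect effect f(w) = (a1 + a3*w)*b1 = a1*b1 + (a3*b1)*w
# is a line in w; its slope a3*b1 is the quantity to test against zero.
a1, a3, b1 = 0.57, -0.21, 0.71   # illustrative stand-in coefficients

def f(w):
    """Conditional indirect effect at moderator value w."""
    return (a1 + a3 * w) * b1

index = a3 * b1                  # slope of the line: moderation of the indirect effect
print(f"slope of the conditional indirect effect = {index:.3f}")
# equal steps in w change f(w) by exactly index * step, confirming linearity
```

In practice the inference on this slope is done with a bootstrap confidence interval, just like the conditional indirect effects themselves.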
PROCESS does this for you; well, it can give you the information you need, and then you have to take it one step further, with the information PROCESS gives you, to do this test. I will show you how to do that with an example toward the end of the day tomorrow; otherwise, there's a paper coming out on this topic. All right, so here's where we've been. All effects function through some kind of mechanism; I think we can all probably agree that's true. Effects don't just magically come to be; there's some kind of mechanism at work. And mediation analysis has changed since 1986: we're not doing things Baron-and-Kenny style anymore, although I haven't told you why that is. All effects are contingent on something; I think we can probably all agree that's true too, although we don't always model effects as contingent, and probably there are contingencies we haven't modeled that are there. And if that's true, then any analysis which focuses only on one or the other, but not both, is going to be incomplete in some way. We can combine moderation and mediation analytically, we can do it in so-called modern ways, and it can be done fairly easily with software you're already using. If you want to learn more about this, I have a home page with links to all my work on this topic, as well as all the statistical tools I've written over the years; you'll also find PROCESS there. Today you received the fast version of this material; if you want the slow-motion version, you can take a whole class from me in the summer. I'm teaching a five-day course on moderation and mediation analysis with Kris Preacher in Philadelphia, July 15th to 19th. I've hidden the price from you here, but it's going to set you back about $1,700. I'm not sure I'm worth that much, and fortunately I don't even get much of it, just a small chunk, but it's where we go into all kinds of detail, all kinds of different models, all kinds of fun stuff that you can do with this.
And then, finally, I have a book coming out on the topic, which was mentioned in my introduction, and this is where you can learn more about moderation, mediation, and conditional process modeling all in one place. Here are some of the things people have said about the book.

You can actually order this book now, although it isn't available until June; you can order it through the publisher's website, or through Amazon, or other places. So go ahead and spend your money now if you want, but wait until summer before you get it. I appreciate your attention. Have fun, and get some sleep; don't lose sleep using PROCESS. And as always, I try to answer emails from people. I get a lot of email, and I always give priority to people who have attended my talks and my workshops, so if you have a question for me, identify yourself as having come to this and I will put you higher on the list.