# Mod-01 Lec-10 Markov Chain

In general, what is a Markov chain? We said a Markov process is a stochastic process with the Markov property. A Markov chain is a Markov process with a discrete set of states. The number of customers in a system, for example, is a discrete, countably infinite set. That is what is technically called a Markov chain. The condition is on the state space: the state space has to be discrete for something to be called a Markov chain, but time can be continuous or discrete. So we have two options: the discrete time Markov chain (DTMC) or the continuous time Markov chain (CTMC). By time I generally mean the parameter, so a discrete parameter or a continuous parameter.

So far we have looked at these two classes, and now, given a system that can be expressed as a DTMC or a CTMC, we want to see what kinds of properties, what kinds of performance measures, can be derived from the system; that is where we are trying to evolve towards.

As a simple example of a discrete time Markov chain: suppose I measure the queue of a system at every time instant t, say every 1 microsecond. Every microsecond we measure the number of customers in the system. These measurements represent the states of the system; then we start talking about probabilities over this state space. Given that I am in state 0 at some time instant, what is the probability that I will go to state 1, that is, have one customer, at the next time instant? I am only looking at the system at these discrete time instants. The number of customers is the state, and I am now trying to represent the system using a Markov chain, where I talk about probabilities of moving between states.
What is the probability of going from 0 to 1? Given that I was in state 0 at some point in time, what is the probability I will go to state 1? I could also go directly to state 2: if I started with no customers and two customers arrived within 1 microsecond, I would move to the state with 2 customers in the system. In this way we represent a Markov chain in terms of these transition probabilities. From state 1 I could go to 0, from 1 I could go to 2, from 1 I could go to 3, and so on. You can also have self-loops: with some non-zero probability the system continues to stay in the same state, for example if no customers arrive. Say at some instant t0 there are 0 customers in the system; then at the next instant, 1 microsecond later, you look at the state of the system again. At every time instant I am looking at the probability of shifting from one state to another. It is possible that no packets arrived in that microsecond, so the system stays in the same state; or if there is a service completion, the system goes from state 1 to state 0. If there was 1 customer in the system and that customer has been serviced and departed, you go back to state 0, and likewise in each of these cases you have the reverse transitions. So now we can start talking about the so-called state transition probabilities.
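As a rough sketch of how these one-step transition probabilities might be written down for the queue example: the probabilities `a` and `d` below are hypothetical placeholders (a real model would measure them), and for simplicity the sketch allows at most one arrival and one departure per microsecond slot, even though, as noted above, two arrivals in one slot are possible in general.

```python
# Hypothetical one-step transitions for the queue chain (state = number of
# customers). `a` = chance of one arrival per slot, `d` = chance of a
# service completion per slot; both values are illustrative placeholders.
a, d = 0.3, 0.4

def transitions(state):
    """Return {next_state: probability}, allowing at most one arrival
    and one departure per slot (a simplification of the general case)."""
    if state == 0:
        return {0: 1 - a, 1: a}                      # self-loop, or one arrival
    return {
        state - 1: d * (1 - a),                      # departure, no arrival
        state:     a * d + (1 - a) * (1 - d),        # self-loop
        state + 1: a * (1 - d),                      # arrival, no departure
    }
```

Each row of probabilities sums to 1, as any set of outgoing transition probabilities must.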
We now need to define the probability of going from state i to state j in a certain number of steps; we will get to that. Let us first set up the DTMC with some definitions for the derivations. The parameter space is discrete. A whole series of definitions will now start cropping up.

Let p_j(n) be the probability that at time instant n the system is in state j. The system starts at time 0 in some state that we may not know; as the system evolves, at time n it will be in a particular state with some probability. Then we define p_jk: we are interested in starting in one state and asking, some number of steps later, what state we will be in and with what probability. The diagram I showed earlier was the single-step, 1-step transition: the probability that from the state at time n I move to a given state at time n + 1. But now I am looking at a series of steps; I can go through several steps before going from one state to another, so let us define that here. Given that I started in state j at time m, then for n greater than or equal to m, what is the probability that at time n I am in state k? This is a conditional probability.

Can there be intermediate transitions? Yes, there can be other transitions in between; it is not just n = m + 1. Since n is greater than or equal to m, it can be m, m + 1, m + 2, and so on. Can the chain stay in state j for more than one time instant? Yes, it can; that is also why k can equal j.
The chain can stay in the same state for several time instants before it goes to some other state. As an example, look at the number of customers at time 4: say there is 1 customer in the system and that customer is being serviced by the server. Then for the next five instants, if no other customer arrives, my state is always the same, the number of customers stays equal to 1, and there is no change of state. Again, this is just a definition: the conditional probability of being in state k at time n, given the system was in state j at time m. This p_jk(m, n) is called the transition probability function. Can we compose these across steps? Yes; that will come up later with the Chapman-Kolmogorov equation.

Next, the 0-step transition probability: the probability of going from state j to state k in 0 steps. This is again just a baseline definition: if j equals k it is 1, and 0 otherwise. The probability is self-evident, but we define it for notational convenience; we will need it later. Basically, I can go from one state to another in 0 steps only if both j and k are the same state; otherwise the probability is 0.

Then we can define the 1-step transition probability. Given a system, the 1-step transition probability is: given that at time instant n − 1 I was in state j, what is the probability that at the next time instant I am in some state k? This is a given of the system; it somehow has to be determined, for example by measurement. We will see how we can use this.
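Collecting these definitions in symbols (writing X_n for the state at time n, the notation used later in the lecture):

```latex
p_j(n) = P[X_n = j] \qquad \text{(state probability at time } n\text{)}

p_{jk}(m, n) = P[X_n = k \mid X_m = j], \quad n \ge m
\qquad \text{(transition probability function)}

p_{jk}(n, n) = \begin{cases} 1, & j = k \\ 0, & j \ne k \end{cases}
\qquad \text{(0-step transition)}

p_{jk} = P[X_n = k \mid X_{n-1} = j]
\qquad \text{(1-step transition probability)}
```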
Now let us take the questions that have been coming up on this transition probability function. Suppose there are multiple paths from one state to another, each with its individual probability; will this be a cumulative over those possibilities? Yes, it will. If you go from i to j through several intermediate states, in m steps and then n steps, we define the total as the summation of all those path probabilities. The total probability of going from i to j in (m + n) steps does not depend on how you break it up: m and n can be whatever combination; if the total is 3, it can be 1 plus 2 or 2 plus 1, and so on.

But what about the special case where j equals k, that is, we return to the same state? Is it possible that the chain takes a path and returns to the same state? Yes, it is: you can start in a state, go through several sequences of transitions, and come back to the same state. Might it also take that path multiple times over, and can we capture that information? It is certainly possible to go from one state and come back to the same state after several other transitions, multiple times. So the question is: suppose the chain moves from i to some intermediate state k and then on to j, and this sequence through k happens multiple times over; will that not affect the probability?
The number of steps is what varies: for a given number of steps m + n, different numbers of such repetitions can fit in. The probability does not depend on enumerating them; you are only asking, if I start in state i, what is the probability that I come back to state i in some k steps. That is what this quantity is trying to tell you. One thing is to list the actual sequences of states; the other is the probability. The probability captures the overall chance. If you ran a simulation, then yes, you would find that there are several ways in which this can happen; but this is simply the probability of going from one state to another, not the number of ways in which you can go, or the number of instances of such sequences occurring. If you run this as a simulation, at each step you randomly toss a coin and the chain moves to one of the states, and you might indeed come back at different points in time. What we are looking at is the long-term, steady-state probability of coming back to the same state; that is what I want to find out. For example, if you were to monitor the system for 1000 seconds, you might find that in five instances you started from state i and actually came back to state i, each time running through several sequences of time instants as you keep on monitoring.
Each run will give you a different flow of states; you will have a different state transition sequence in each of these simulations. The probability tells you that with some finite probability you will come back to the same state.

Another question: can the multi-step probability be calculated from 1-step transitions, for example going 0 to 1, then 1 to 2, then 2 to 3, accumulating the probabilities along the way? Yes. The probability of going from i to j in m + n steps can take some path, possibly several paths, and it will finally be expressed in terms of sequences of 1-step transitions, cumulatively. The single steps are the 1-step transitions; from these we have to start building the rest, which is what I am trying to get to but have not got to yet. With the 1-step transitions, I can figure out the 2-step, 3-step probabilities and so on, from which I can compute the probability of going from any state to any state in x plus y steps if I want to.

Now let me give an example. Say I have a two-state system: this is the state of a channel, either good or bad. If it is in state 0, it stays in the same state with probability 0.5 and leaves with probability 0.5, and likewise for state 1. This is an example of such a system. Suppose I measured the system and found this. Now, the question is: so what? We are representing the system's behavior with this Markov chain; we would now like to derive some properties of the system using this representation.
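As a quick sanity check of what representing the system as a Markov chain buys us, here is a small simulation sketch of such a two-state channel. The first row (stay in state 0 with probability 0.5) is the one stated in the example; the second row uses the corrected values 0.6, 0.4 that come up later in the lecture, so treat the numbers as illustrative.

```python
import random

# Two-state channel: state 0 = good, state 1 = bad.
P = [[0.5, 0.5],
     [0.6, 0.4]]

def simulate(steps, start=0, seed=1):
    """Run the chain for `steps` transitions and return the fraction of
    time spent in each state."""
    rng = random.Random(seed)
    state = start
    visits = [0, 0]
    for _ in range(steps):
        visits[state] += 1
        # Move to state 0 with probability P[state][0], else to state 1.
        state = 0 if rng.random() < P[state][0] else 1
    return [v / steps for v in visits]

fractions = simulate(100_000)
```

For a long run, the fractions of time spent in each state approach the steady-state probabilities derived later (about 0.545 and 0.455 for this matrix).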
The first question is: in the long term (each transition here takes one time instant), what is the probability that the system is in state 0, and what is the probability that it is in state 1? That is something we want to calculate, for whatever reason; for example, for computing the average number of customers in the system, or some other metric, or we could have some sort of reward function for each state, a reward r(0) for state 0, r(1) for state 1, and so on, and we want to compute the expected reward of the system as it progresses. So this is the procedure from here: we need to find, in the steady state, in the long run, the probability that the system is in state 0 and the probability that it is in state 1. How do we compute that? Assume for now that the 1-step transition probabilities are all known; that is the given in this problem.

A student proposes: the probability that X_n = i equals the sum over j of the probability that X_{n−1} = j times the probability that X_n = i given X_{n−1} = j; and when n is large, the distribution of X_{n−1} equals the distribution of X_n, so we can solve for it. To get to that, we will first represent the 1-step probabilities as a matrix, which we will call P; P appears in several different roles, but here it is the 1-step transition matrix. Instead of manipulating the probabilities individually, we can use this matrix itself to do our computation. The probability of going from state 0 to state 0 (basically staying there), and so on, I can represent as the entries of this matrix.
Remember that the state space can be infinite too; it just has to be discrete. So the matrix can in principle extend all the way; if it is a finite set of states, you can do some easier computation, but for an infinite set we cannot do this computation directly. And if you look at a queueing system, it will be an infinite system, because the number of customers can be infinite in the case of an M/M/1 queue; only for M/M/1/B is it finite.

Now some notation. p_ij(n) is the probability of going from state i to state j in n steps, where by a step I mean one time interval. The system moves at every time interval from one state to another according to the 1-step transition probabilities we have seen. Let P(n) denote this matrix: P(n) is the n-step transition probability matrix, whose (i, j) entry is the probability of going from state i to state j in n steps; P itself corresponds to going in 1 step. As n approaches infinity, if the system has a steady-state distribution, you can derive that the probability of going from state i to state j in n steps converges, and that will help us finally derive the so-called steady-state probabilities; we will come to that in a second. This P(n) is nothing but P raised to the power n: the matrix P multiplied by itself n times. We will see how to use this in a second.

For the example we saw, P had 0.5, 0.5 in the first row; the second row I first wrote down as 0.3, 0.7, but I had read it the other way around. So, what is P squared?
There was some back and forth over the values: the second row of the matrix I had written was wrong. The correct matrix has 0.5, 0.5 in the first row and 0.6, 0.4 in the second; let me correct it and repeat the calculation with those values. So this is P squared, the so-called matrix of probabilities of going from one state to another in two steps: the (0, 0) entry is the probability of going from state 0 to state 0 in 2 steps, and so on. That is the meaning of this matrix.

Now, let p(0) be the initial state, or initial probability, vector. Say, for example, that you always start the system in state 0 with probability 1: I always start the system in the good state, state 0, and never in state 1. That is my initial probability vector p(0). Or it could be something more generic that we never know; it is like when you wake up in the morning, whether you are in a good mood or a bad mood, you never know, so it could be 50-50; it is not always that you wake up in a good mood. This is the initial state of the system, and it could be some other distribution too, again based on measuring the system and estimating its behavior from measurements. So we want some known initial vector; that is the starting point. For the two-state Markov process we saw last class, P is the transition matrix of going from one state to another, and the initial state could be anything depending on the system: it could be simply always starting in state 0 or state 1, and so on. These two together specify the initial state of the system; then, as the system progresses with time, the state of the system also changes.
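Putting the pieces so far together as a sketch: start from the initial vector p(0) and push it through the chain one step at a time via the recursion p(n) = p(n−1) P, which is exactly the student's sum over j above. The matrix values are the corrected example's.

```python
# Evolve the state distribution one step at a time: p(n) = p(n-1) * P.
P = [[0.5, 0.5],
     [0.6, 0.4]]

def evolve(dist, p):
    """One step of P[X_n = i] = sum_j P[X_{n-1} = j] * p_ji."""
    n = len(p)
    return [sum(dist[j] * p[j][i] for j in range(n)) for i in range(n)]

p0 = [1.0, 0.0]        # always start in the good state, state 0
p1 = evolve(p0, P)     # distribution after one step
p2 = evolve(p1, P)     # after two steps: the first row of P squared
```

Starting from a different p(0), for example [0.5, 0.5], changes the early distributions but, as discussed next, not the long-run behavior.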
There are some cases in which we actually end up with steady-state probabilities, where after some 8 or 10 iterations, or 10 steps, or 100 steps and so on, the system settles into a steady state that tells you the probability of being in state 0 is x, in state 1 is y, and so on. There are some cases where we can actually derive that and it exists; we will talk a little bit about that.

This is the expression we saw last class: the probability distribution of the states at time step n is obtained simply by raising the matrix P to the power n. We computed P squared last time; its first row works out to about 0.55, 0.45. One way to find the steady-state probabilities (steady state meaning that as n approaches infinity all these probabilities converge, the rows becoming identical) is just to use that.

So let us define what is called the limiting, or visiting, probability: V_j is the limit of p_j(n) as n approaches infinity. This is what we want to figure out: for a given Markov chain, does such a set of limiting state probabilities exist, and if so, what is it? For certain types of Markov chains it does exist; we will define those later. Let V be the vector of these limiting state probabilities. One way to calculate V is simply to keep multiplying the matrix P by itself until at some point it converges. Let us look at the example we did last time.
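A minimal sketch of the "keep multiplying P by itself" approach, assuming the corrected example matrix; the loop stops once the entries stop changing:

```python
# Raise P to higher and higher powers until the entries stop changing;
# the (now identical) rows of the limit give the limiting probabilities.
P = [[0.5, 0.5],
     [0.6, 0.4]]

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(100):
    nxt = mul(Pn, P)
    if all(abs(nxt[i][j] - Pn[i][j]) < 1e-12
           for i in range(2) for j in range(2)):
        break
    Pn = nxt

V = Pn[0]   # both rows converge to the same vector, about [0.5455, 0.4545]
```

For this 2x2 example the powers converge within a handful of iterations, matching the "5 or 6 steps" remark below.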
If you write a small program to do the matrix multiplication, or use MATLAB or whatever, after 5 or 6 iterations, 5 or 6 steps, this will simply converge. So for the above example the steady-state probability does exist, and V is the limiting row: the probabilities of the different states remain unchanged from step to step, which is exactly the statement V = V P.

This gives us another way of determining the vector V: given P, I can find the steady-state probabilities by simply solving V = V P. We saw one case where we simply raised P to powers of n; this is the other way of getting the steady-state probabilities. There are some additional conditions: V = V P, together with all V_j greater than or equal to 0, and the sum of the V_j equal to 1. Otherwise V = V P can be satisfied by all V_j equal to 0, a trivial solution we do not care about: all zeros satisfy V = V P (0 = 0) regardless of P. So the V_j should sum up to 1; if all V_j were 0, this normalization condition would not be satisfied, and what we want is the solution with non-zero probability of being in each of those states. If we simply solve V = V P, one solution will always be V_j = 0 for all j; to get the unique solution we want, we have to satisfy the normalization condition too.

Now let us go back to our previous example. If I were to solve it, my P is as above and V = (V_0, V_1). From our equation, we simply have to solve for this.
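For the two-state case, V = V P plus the normalization can be solved by hand; here is a sketch in exact arithmetic, again assuming the corrected example matrix:

```python
from fractions import Fraction

# Corrected two-state example matrix, in exact fractions.
P = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(3, 5), Fraction(2, 5)]]

# From v0 = v0*p00 + v1*p10 and p00 + p01 = 1 we get the balance
# equation v0*p01 = v1*p10, so the ratio v0/v1 = p10/p01.
ratio = P[1][0] / P[0][1]          # v0 / v1
v1 = 1 / (1 + ratio)               # using v0 + v1 = 1
v0 = 1 - v1
V = (v0, v1)                       # works out to (6/11, 5/11)
```

The trivial all-zero solution is excluded automatically here, because the normalization v0 + v1 = 1 is built into the computation.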
So you will have a set of linear equations to solve. Technically it is V = V P, but I prefer to write it as V P = V, because the equations come out looking better when we start. That is what we have to solve to find the steady state. So what does this equation work out to? Some of you already worked it out on the board last time; let me just see if I can squeeze it in here. With the 0.5's and the other entries filled in (check whether I made a mistake anywhere), that is your set of equations. We can now solve them; we can pick any of those equations, together with the normalization, and this will end up giving you the steady-state vector. So in general, as long as your matrix is finite, you can end up solving these equations. Now, there are some cases where this would not hold, where we would not be able to; we will come to those.
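Writing out the equations for the example (a sketch assuming the corrected matrix with rows 0.5, 0.5 and 0.6, 0.4; the board values in the lecture may have differed):

```latex
V P = V, \qquad V = (v_0, v_1), \qquad
P = \begin{pmatrix} 0.5 & 0.5 \\ 0.6 & 0.4 \end{pmatrix}

0.5\,v_0 + 0.6\,v_1 = v_0, \qquad
0.5\,v_0 + 0.4\,v_1 = v_1, \qquad
v_0 + v_1 = 1

\Rightarrow\ 0.5\,v_0 = 0.6\,v_1
\ \Rightarrow\ v_0 = \tfrac{6}{11} \approx 0.545, \quad
v_1 = \tfrac{5}{11} \approx 0.455
```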