Lecture 38 – Introduction to Floquet Theory

In general it would be something like this. Here is a 45-degree line; in general it would be a structure something like this. If you have a map like this, we have seen that even though there is no fixed point, there can still be a periodic orbit, a closed bounded orbit. This is something very peculiar to discontinuous maps, and in order to find the different orbits, the essential methodology we followed was: first find out whether a period-one orbit exists or not, which depends on whether there is an intersection with the 45-degree line. If not, then a period-two orbit may exist. The logic was simple: you have got the state space, there is one point to the left and one point to the right, so this point comes here and this point comes there. Start from here, apply the left-hand map, go here; apply the right-hand map, come here; and then set x_{n+2} = x_n. When you solve that, you obtain the position of this particular point. Similarly you obtain this particular point, and the condition of existence of the period-two orbit is that this one lies on the left-hand side (x coordinate negative) and this one on the right-hand side (x coordinate positive). Its stability is given by the Jacobian of the left-hand map times the Jacobian of the right-hand map: you get a matrix, and you look at its eigenvalues. If the eigenvalues are inside the unit circle, it is stable; else not. Similarly you will probe the possibility of a period-three orbit, and there can be two types of period-three orbit: LLR and LRR. An LLR orbit will be like this: one point here, another point here, one more point here, and that one maps back. In this case also you can obtain, in a similar manner, this position, this position and this position, but it is clear that it is not necessary to consider this position. Why? Because the possibility that this one violates the condition of being on the negative side before this one does is practically none.
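The period-two calculation just described can be carried out explicitly for a one-dimensional piecewise-linear map. The following is a minimal sketch assuming a normal form x → ax + μ for x < 0 and x → bx + μ − l for x ≥ 0; the parameter names a, b, μ, l are illustrative assumptions, not notation fixed by the lecture.

```python
# Sketch: LR (period-two) orbit of an assumed 1-D discontinuous normal form
#   x_{n+1} = a*x + mu        if x < 0    (left map, L)
#   x_{n+1} = b*x + mu - l    if x >= 0   (right map, R)

def period2_orbit(a, b, mu, l):
    """Solve x_{n+2} = x_n for an LR orbit; report existence and stability."""
    # Compose R after L:  xL -> a*xL + mu -> b*(a*xL + mu) + mu - l = xL
    denom = 1.0 - a * b
    if denom == 0.0:
        return None
    xL = (b * mu + mu - l) / denom
    xR = a * xL + mu
    exists = (xL < 0.0) and (xR >= 0.0)   # one point on each side of the border
    stable = abs(a * b) < 1.0             # product of the two slopes (Jacobians)
    return xL, xR, exists, stable

xL, xR, exists, stable = period2_orbit(a=0.5, b=0.5, mu=0.5, l=1.0)
```

For a = b = 0.5, μ = 0.5, l = 1 this gives the stable LR orbit {−1/3, +1/3}; stability is governed by the product of the two slopes, exactly the Jacobian-product criterion stated above.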
We will consider this particular point being on the negative side and this particular point being on the positive side as the two extremum conditions under which this LLR orbit exists. Similarly, for the LRR orbit you will have one point here, another point here, another point here, and then it goes back. In a similar manner you can find this position, this position and this position, but ultimately the parameter range over which this orbit actually occurs will depend on this position being on the left-hand side and this position being on the right-hand side. We can ignore this position because, so long as this one is on the right-hand side, this one is obviously on the right-hand side. This orbit will have a specific range of existence, with the extremum conditions given by this position becoming equal to 0 and this position becoming equal to 0. Similarly this particular orbit will exist for a specific range of the parameters, with this particular point becoming equal to 0 and this particular point becoming equal to 0 as the two extremum conditions. Similarly you can obtain the higher periodic orbits and their conditions also. In general,

if you want to work that out algebraically, it may become a bit cumbersome to do by hand. In those cases you might take recourse to any of the standard programs that allow symbolic computation; for example Maple does, and MATLAB has a symbolic computation toolbox. You can use those to obtain the exact range over which each orbit occurs. You can see, then, that the LLR orbit and the LRR orbit are the two possibilities for period-three orbits. A period-four orbit can happen in the combinations LLLR, LLRR and LRRR. Out of these, LLRR — two points on the L side and two points on the R side — is essentially the same pattern as the period-two orbit, so when we consider period four we have to consider the other two orbits. Similarly, what are the possibilities for a period-five orbit? LLLLR, LLLRR, LLRRR and LRRRR: four possibilities. Similarly, if you try to work out the period-six orbits, you will find that there are… how many? No, it will not be 5, because in between there would be one combination, LLLRRR, which reduces to the earlier pattern; we discount that and take the others. Similarly we will see that these orbits occur for specific ranges of the parameters. Now, today, instead of considering each case one by one, we will simply talk about the various possibilities and the general methodologies; otherwise the talk might become a little boring for you, and if I kept considering every case it would be somewhat difficult for you to digest. What I will do instead is present the general framework, with which it is possible to obtain the parameter range in which a specific orbit can occur. It is not necessary for us to consider each particular case — for example, the slope on the a side becoming greater than or less than one, the slope on the b side becoming greater than or less than one, and so on.
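The counting of candidate L/R blocks described above can be reproduced in a few lines. This tiny helper is a sketch that follows the lecture's own convention: list the blocks L^m R^(n−m) for a period-n orbit and discount the balanced block (LLRR, LLLRRR, …), which the lecture identifies with the period-two pattern.

```python
def block_sequences(n):
    """L/R symbol blocks of the form L^m R^(n-m) for a period-n orbit,
    following the lecture's counting: the balanced block L^(n/2) R^(n/2)
    (LLRR, LLLRRR, ...) is discounted as a repeat of the period-two pattern."""
    seqs = []
    for m in range(n - 1, 0, -1):      # from L-heavy down to R-heavy
        if 2 * m == n:                 # the balanced block, discounted above
            continue
        seqs.append("L" * m + "R" * (n - m))
    return seqs

# block_sequences(3) -> ['LLR', 'LRR']
# block_sequences(5) -> ['LLLLR', 'LLLRR', 'LLRRR', 'LRRRR']
```

For period six this gives four sequences, matching the count arrived at above.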
Because with this framework I can work out whether a period-two orbit will exist, a period-three orbit will exist, a period-four orbit will exist, and ultimately we can talk about every condition. But there are specific situations which we have already talked about — for example like this, where you do not have any fixed point, yet periodic orbits occur. In this condition we can see from logic that both the sides are stable: their slopes are less than unity, and therefore intervals do not expand, and if they do not expand you cannot have a chaotic orbit. So all orbits will be periodic orbits, all these sequences will occur, and as a result the bifurcation diagram will look like what is shown on this computer screen. Here you notice that there is a period-two orbit, there are a couple of period-three orbits, and so on and so forth, as anticipated from the theory. On this side there is a period-one orbit, and on this side there is a period-one orbit. Notice that this structure of the bifurcation diagram is not generally found in other types of systems; it is typical of discontinuous maps. The purpose of today's lecture, at least of the first part, is essentially to give you a visual impression of the kind of bifurcation diagrams that come from discontinuous maps, so that if you encounter that kind of bifurcation diagram in any given system, you would be able to say intuitively: okay, I do anticipate a discontinuity in this case; even if I do not know what the map is, it must be a discontinuous map if the bifurcation diagram is something like this. I will not go through each and every condition individually; again, I will just try to give you the flavour of things. I will make it a little smaller. Here is another bifurcation diagram. Take a look

at it. Here, obviously, it is not the case that we just considered, because I can see a chaotic orbit. Now, in order for a chaotic orbit to occur, it is necessary that one of the sides has slope greater than unity. The other side has slope less than unity; that is why periodic orbits also occur. If both the sides have slopes greater than unity, then you do not anticipate periodic orbits at all. As you can see, as you increase the parameter in this case, more and more higher periodic orbits occur, and then finally it goes into chaos. Notice one thing: in that case, even from common sense you might say that on the side where the slope is greater than unity you are having a larger and larger number of points, so that beyond a certain periodicity the slope of that high-periodic orbit exceeds unity and you no longer have a stable periodic orbit. In this part you see a chaotic orbit. Again you would see that these high-periodic orbits occur and accumulate at certain points. Can you see the hand-like thing here? Yes, they accumulate at certain points, so that if you decrease the parameter you will find that the periodicity is increasing, and at this point the periodicity is actually infinity, even though the orbit is not chaotic. This is another typical structure of a bifurcation diagram. Let us go to some other cases. Can you see the structure, then? Here you would see that, according to the structure of things I have just explained, there is a range over which a period-one orbit occurs, a range over which a period-two orbit occurs, a range over which a period-three orbit occurs, and so on and so forth.
I can see a clean period-adding cascade here, but notice that the range over which the period-two orbit occurs overlaps the range of the period-three orbit. There is a range over which there is a coexistence of attractors, and again you have this accumulation of orbits up to this point. This is another typical case, where you have the accumulation of orbits into a chaotic orbit. I am not describing under which conditions and so on, because I do not want to go into each individual case today — you can work them out yourself with this framework. For example, there would be a condition where you have a particular range in which only a chaotic orbit occurs, and the bifurcation diagram will look very boring; nothing much is happening. Why? Because in this range the behaviour is only chaotic. This happens in the case where — let me show you that situation — both the slopes are greater than unity, and as a result only under a certain condition will there be a situation where the chaotic orbit can be stable. That will happen when — you see, this is one extremum point and this is one extremum point; let me make it a little larger, it will be easier for you to see. When this point maps to a point outside this point, the orbit goes to infinity; when this point maps outside this range, again it goes to infinity; else it remains inside. So there would be a range of parameters where this chaotic attractor will actually be stable, and as a result you will see a chaotic orbit, which is something like this. Basically, the point is that in most cases these can be worked out from the structure that I have just explained.
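The kinds of bifurcation diagrams being shown can be generated numerically in a few lines. A sketch, again using the assumed piecewise-linear normal form (parameter names a, b, μ, l as before): sweep the parameter, drop a transient, and record the remaining iterates. For 0 < μ < l this map has no fixed point, yet bounded periodic orbits exist — the situation described above.

```python
# Sketch: data for a bifurcation diagram of the assumed normal form
#   x -> a*x + mu       (x < 0)
#   x -> b*x + mu - l   (x >= 0)

def iterate_map(a, b, mu, l, x0=0.0, n_transient=500, n_keep=64):
    """Post-transient iterates of the piecewise-linear discontinuous map."""
    f = lambda x: a * x + mu if x < 0 else b * x + mu - l
    x = x0
    for _ in range(n_transient):
        x = f(x)
    out = []
    for _ in range(n_keep):
        x = f(x)
        out.append(x)
    return out

# Scatter-plotting (mu, x) for mu in [0, 1) with a = b = 0.5, l = 1
# reproduces the period-adding staircase; with one slope above unity
# (in a range where the orbit stays bounded) chaotic bands appear instead.
```

For example, at a = b = 0.5, μ = 0.5, l = 1 the recorded iterates settle onto the period-two orbit {−1/3, +1/3}.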
Here also you see period one to period two to period three to period four, ultimately landing in a chaotic orbit; or you can also have a transition to a chaotic orbit directly. Now let us turn our attention to the paper. In general, you have found that in these cases there are cascades, but these cascades are different from the period-doubling cascades that you have found in smooth maps. These are often period-adding cascades, or what are known as

period-incrementing cascades, where there are three possibilities. Suppose there is period adding, so that there is a range over which period two occurs, a range over which period three occurs, a range over which period four occurs, and suppose the ranges are like this: there is a range for period two, there is a range for period three, and so on and so forth. This is one possibility. Another possibility is where you have period two and then period three, and in between these ranges… Here you can have μ or τ_L — or in this case a or b — as the parameter on the x axis, but you can easily find out the range up to which the period-two orbit occurs and the range from which the period-three orbit starts. In this case also you can find that. The difference between the two situations is that in one case the two ranges overlap, while in the other there is a gap in between. When they overlap, what happens we have seen. When there is a gap in between, in that in-between range you find a chaotic orbit, because the orbit is bounded: if a periodic orbit is not there, the only thing that can happen is a chaotic orbit. It will be period two, chaos, period three, chaos, period four, chaos, and so on and so forth. In the other case it will be period two overlapping period three, period three overlapping period four, and so on and so forth. These are the two very standard period-incrementing sequences that you find in many physical systems. The critical case between these two is where, exactly where the period-two orbit ends, the period-three orbit starts; exactly where the period-three orbit ends, the period-four orbit starts; and so on and so forth. That would be the critical case between these two, but these two are the generic situations that you find most often in physical systems.
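Whether successive existence ranges overlap or leave a gap can be checked directly once the orbit positions are known in closed form. A sketch for the L^(n−1)R family of the assumed normal form (parameter names again illustrative); for the contracting slopes chosen here, the scan finds a gap between the period-3 and period-2 ranges, which other sequences — or, when a slope exceeds unity, chaos — then fill.

```python
def orbit_points(mu, a, b, l, n):
    """Points of the candidate L^(n-1) R orbit of the assumed normal form
    x -> a*x + mu (x < 0), x -> b*x + mu - l (x >= 0), in closed form."""
    S = sum(a**j for j in range(n - 1))                 # 1 + a + ... + a^(n-2)
    x1 = (mu * (b * S + 1) - l) / (1 - b * a**(n - 1))  # closure x1 = b*xn + mu - l
    xs = [x1]
    for _ in range(n - 1):
        xs.append(a * xs[-1] + mu)
    return xs                                           # xs[-1] is the R point

def exists(mu, a, b, l, n):
    xs = orbit_points(mu, a, b, l, n)
    return all(x < 0 for x in xs[:-1]) and xs[-1] >= 0  # n-1 points L, one R

# Scan mu to compare the existence ranges of the LR and LLR orbits:
mus = [i / 1000 for i in range(1, 1000)]
r2 = [mu for mu in mus if exists(mu, 0.5, 0.5, 1.0, 2)]
r3 = [mu for mu in mus if exists(mu, 0.5, 0.5, 1.0, 3)]
```

With a = b = 0.5 and l = 1, the LLR range ends (near μ = 2/7) before the LR range begins (near μ = 1/3): the ranges do not overlap for these slopes.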
That explanation has to come from the actual computation of the parameter range over which each periodic orbit occurs; I am not going into the details of it. The point is that there are a large number of physical systems in which you do anticipate a discontinuous map. Many people have investigated physical systems where they have obtained experimental or simulated bifurcation diagrams that look like this. Then, looking at that, one should be able to say: now I know that this particular system must in some way be giving rise to a discontinuous map, and the investigation then has to go in the direction of how — in what way, by what logic — this system gives rise to a discontinuity. Similarly, one can also probe the bifurcation behaviour in systems of higher dimension, in the sense that you have a three-dimensional system which, when you place a Poincaré section, becomes two-dimensional; and if in that two-dimensional system there is, in some physical way, a discontinuity, you have got a two-dimensional discontinuous map. Similarly you can visualize that there can be higher-dimensional — three-dimensional, four-dimensional — discontinuous maps also. Unfortunately those theories are not yet so well developed, and I am not presenting them in detail. But what was the essential logic in probing this? Notice again the essential logic: we said that our investigation of the various types of non-smooth bifurcations essentially goes this way. We know that in the system there is a border; as a parameter changes, the fixed point of the physical system moves, and at some point it hits the border and possibly goes to the other side. Then we locally linearize, and we investigate only the character of the locally linearized map — that was the essential method. Then we said that the whole bifurcation theory relevant for these systems will depend

on the Jacobian matrix here and the Jacobian matrix there. We had then normalized those Jacobian matrices into a matrix containing only the trace and the determinant, but essentially it is J1 and J2 that matter. You might ask how to obtain J1 and J2 for any given physical system. Remember, any given physical system means you have some kind of orbit; you place, say, a Poincaré section here, and we are talking about the Jacobian matrix around this particular point. How to obtain it? It is not a trivial problem. If you are lucky — if you are able to obtain the map in closed form — yes, you can compute the Jacobian matrix. But if it is not available in closed form, then what? Can you then obtain this J1 and J2, or can you not? If you cannot obtain them, obviously you cannot apply the whole structure of the theory; it is necessary to obtain the Jacobian matrices on the two sides, and only then will you be able to apply it. Why did we need the Jacobian matrix in the first place? We needed it because, what does the Jacobian matrix tell me? It tells me the character of the fixed point: if there is a fixed point here, and you start from an initial condition that is away from it, then the way the subsequent iterates go — either towards the fixed point or away from it — depends on the Jacobian matrix. That means, when we consider the local linear neighbourhood of the fixed point, the subsequent iterates depend on the Jacobian matrix. Essentially, then, the Jacobian matrix tells us about its stability, and the problem is to find out that stability. Now, in history this idea has developed in certain stages. First the question arose in the general run of control theory — most of you have attended some course on control theory, that is why you more or less know it, and those of you from a science background have dealt with the same thing in mathematics courses.
There we are essentially dealing with equilibrium points of a set of differential equations. We have ẋ = f(x), where these are vectors, as the given system description; by putting ẋ = 0 we solve this and thereby obtain the equilibrium point, and then the whole gamut of the theory is essentially geared towards the stability of this equilibrium point. The whole theoretical structure is for that. As you know, when we talk about only the local linear neighbourhood, it becomes far more convenient to convert this time-domain behaviour into Laplace-domain behaviour: we take a Laplace transform and do the whole thing — the design and everything — in the Laplace domain. All these things are well developed, but those ideas relate only to equilibrium points in the state space of a continuous-time dynamical system. Here, however, we are talking about closed loops — orbits like this, a limit cycle — and the stability of the limit cycle. Now, when we are talking about the stability of the limit cycle, obviously it is not the same theory: the stability of a limit cycle cannot be obtained by the same kind of logical structure as you applied to the equilibrium point of a continuous-time system. But there, what was the logic? The logic was that if I perturb it — that means if I start the initial condition from somewhere else — then the way that initial condition either homes in on the equilibrium point or goes away from it gives you the stability. In the time-domain description you have the eigenvalues at that equilibrium point, and we said they have to be in the left half of the plane: this is the real part, this is the imaginary part, and if the eigenvalues are on the left-hand side, we said it is stable. But in this case the same idea will not work, because here we are not

talking about a single point. Whose neighbourhood are we talking about? Obviously, then, we have to talk about the neighbourhood of this whole orbit; it is not the same problem. Now, the way to deal with that problem is to give a perturbation here and see how the perturbed trajectory evolves. Does it come closer and closer to the original trajectory, or does it move away from it? Accordingly you will get the answer. Essentially the question is how the perturbed trajectory evolves, and the way the question has been probed is something like this — I will bring the other one into the picture. Can you see? No, I don't want to show it because it is too small; I will write it here on this paper, and it is not necessary for you to see the screen, because the font there is a little small. What are my starting points? First, we start from an initial condition somewhere here — say this is my state space — and the trajectory evolves from that initial condition by some set of equations; let that set of equations be ẋ = f(x, t). It will depend on t for non-autonomous systems and not depend on t for autonomous systems, but in general it will be like this, where x is a vector. So in future, whenever I write x, just remember that it is a vector. In the case of a simple pendulum it will consist of the vector of the position and the momentum; in the case of an electrical circuit, the vector will consist of the voltages across the capacitors and the currents through the inductors, and depending on the number of these storage elements in the circuit, it will have n state variables.
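The left-half-plane test for equilibrium points recalled above can be sketched concretely. Assuming, for illustration, a damped pendulum ẋ1 = x2, ẋ2 = −sin x1 − c·x2, with its equilibrium at the origin:

```python
import cmath

# Sketch of the left-half-plane test: linearize x' = f(x) at an equilibrium
# and check the real parts of the Jacobian's eigenvalues. The example system
# (a damped pendulum, x1' = x2, x2' = -sin(x1) - c*x2) is an assumption;
# its equilibrium is at (0, 0), where the Jacobian is [[0, 1], [-1, -c]].

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

c = 0.2
lam1, lam2 = eig2(0.0, 1.0, -1.0, -c)      # Jacobian at the origin
stable = lam1.real < 0 and lam2.real < 0   # all eigenvalues in left half plane?
```

For 0 < c < 2 the eigenvalues form a complex pair with real part −c/2, so the equilibrium is stable: perturbations spiral back in, exactly the homing-in behaviour described above.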
So these are the x. The starting condition is x at t0 equal to x0; that is the starting point, or the initial condition. Now, starting from here it goes through a certain trajectory, and if that trajectory is designated by the symbol φ, then you can write φ as a function of time, of the initial time, and of the initial condition. This is written as

φ(t, t0, x0) = x0 + ∫ from t0 to t of f(φ(τ, t0, x0), τ) dτ,

where I write the integration variable as τ because I am integrating over τ. Essentially what I am writing is that my starting point is x0, and by integrating the derivative function I get the actual solution. This is the starting point; this is how the orbit is defined. Then what are we doing? We give some perturbation to the initial condition, say this much, and then we try to find out how the perturbed trajectory evolves. Essentially we are asking: if I now take the initial condition and move it, by how much will the trajectory be perturbed? That is the question we are asking. If I perturb it but the trajectory is not perturbed much — if after some time the trajectory loses that perturbation and homes in on the original trajectory — then it is stable. That is the concept. In order to do that, this is the trajectory, and I am asking the question: how sensitive is my trajectory here to perturbations of the initial condition? How will I quantify that? By taking a derivative with respect to x0. We take the derivative of this equation with respect to x0. Now, on the right-hand side there was x0, and I am taking a derivative with respect to x0; but x0 is a vector, so that term yields an identity matrix. So it will be I, the identity matrix, plus the integral from t0 to t of the derivative of the integrand, which we take in two parts: first the derivative of f with respect to φ, evaluated along the trajectory, and then the derivative of φ with respect to x0. That is,

∂φ(t, t0, x0)/∂x0 = I + ∫ from t0 to t of [∂f/∂x at φ(τ, t0, x0)] · [∂φ(τ, t0, x0)/∂x0] dτ.

I will write the first factor as A, a function of τ and x0: A(τ, x0) = ∂f/∂x evaluated at φ(τ, t0, x0). So ultimately what we have is

∂φ(t, t0, x0)/∂x0 = I + ∫ from t0 to t of A(τ, x0) · [∂φ(τ, t0, x0)/∂x0] dτ.

So far so good: we have differentiated this and obtained this. Now, what is this term saying? It says: if I perturb the initial condition, by how much does φ change? φ is this function, and naturally I have to ask: is it here, or here, or here? The derivative of φ with respect to x0 computed at this point is something; computed at this point it is something else; computed at this point it is something else again. So what I mean to say is that this derivative is actually a time-varying quantity — it is not a fixed thing; φ is a function of t, so this term keeps on varying. In order to find out how it varies, we need to take a derivative of this with respect to time.
What we will do is take d/dt of this expression. On the right-hand side there was I, which vanishes when differentiated with respect to t; and there is an integral over time which, when differentiated with respect to time, goes away, leaving only the integrand. It is actually a large simplification:

d/dt [∂φ(t, t0, x0)/∂x0] = A(t, x0) · [∂φ(t, t0, x0)/∂x0].

What is this? Here is a term which also appears on the right-hand side; here it is inside a derivative, here it is not. What does that mean? It means it is a differential equation. Imagine the whole quantity ∂φ/∂x0 is called, say, P; then it reads dP/dt = A·P. It is a differential equation, but the terms here are matrices — it is a matrix differential equation — yet conceptually you can still solve it, and as a result you can obtain this term. What does this term say physically? It says how the solution changes with time as you change the initial condition. The solution goes around, so this term is a function of time, and when you solve the equation you obtain it as a function of time. Once it has been obtained, everything is simple, because now we can say that this distance is, say, δφ — here is φ, and the perturbation itself is called δφ. Then the perturbation, as it travels through time, can be written as

δφ(t, t0, x0) = [∂φ(t, t0, x0)/∂x0] · δx0 + higher-order terms.

This term times the initial perturbation δx0 gives the final perturbation, plus higher-order terms; so this is nothing but a local linearization of the evolution of the perturbation around the original trajectory. What have we done? We have obtained a matrix differential equation whose solution, as a function of time, is called the sensitivity function.
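The matrix differential equation for the sensitivity function can be integrated numerically right alongside the state. A minimal Euler sketch — the pendulum vector field, damping value, and step count are assumptions for illustration:

```python
import math

# Sketch: integrate the sensitivity matrix P = dphi/dx0, which obeys
# dP/dt = A(t, x0) P with P(t0) = I, together with the state itself.
# Assumed vector field: damped pendulum x1' = x2, x2' = -sin(x1) - C*x2.

C = 0.2  # damping

def f(x):
    return [x[1], -math.sin(x[0]) - C * x[1]]

def jac(x):
    return [[0.0, 1.0], [-math.cos(x[0]), -C]]

def flow_and_sensitivity(x0, T, n=20000):
    """Euler steps for the state x and the sensitivity matrix P."""
    x, P = list(x0), [[1.0, 0.0], [0.0, 1.0]]
    h = T / n
    for _ in range(n):
        A, fx = jac(x), f(x)
        P = [[P[i][j] + h * sum(A[i][k] * P[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        x = [x[i] + h * fx[i] for i in range(2)]
    return x, P
```

Perturbing x0 by a small δx0, re-integrating, and comparing against P·δx0 confirms the first-order prediction δφ ≈ (∂φ/∂x0)·δx0.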
This says that if I have an initial perturbation, I can multiply it by the sensitivity function to obtain the final perturbation, in the first-order approximation. There are higher-order terms also — I do not want to go into that — but in general the idea is that I have my initial perturbation, the perturbation travels through time, and that travel through time is represented simply by multiplication by a factor, which is the sensitivity function. Now, if that sensitivity function goes to 0, you know that the original trajectory and the perturbed trajectory converge. That is the concept of sensitivity. This sensitivity idea is general and applicable to any given system. The only issue is that in the case of your switching systems, where you have an orbit going like this, then going some other way, and then going like that, we need to do something about this process.

That means we will need to figure out how the sensitivity function goes across the switching surface; we will do that later. Presently, the basic idea is that in the case of orbits like this — say I have got a closed-loop orbit, as happens for any oscillator — I have the state space like this, and then how do I figure out the stability of this orbit? We say: perturb the initial condition, say to this point, and try to figure out how it goes. The simplest possible way is to find the sensitivity function, so that if I know the initial perturbation I can find the perturbation after any length of time, because the final perturbation is nothing but the sensitivity function at that time multiplied by the initial perturbation. That is the concept from which it all came. Now, it is clear that it is somewhat difficult to actually pin down the sensitivity function as a collection of numbers. Conceptually, writing things like this is fine, but ultimately, if I ask you how these numbers are obtained — this is nothing but a matrix of numbers; how would you obtain them? — it is not easy in a general nonlinear system. We need some kind of method to do that. At this stage it is good to look at Floquet's idea, so let us go directly to it. Remember, Floquet worked about a hundred years back — a very old piece of work, but a masterpiece. He said: instead of considering the perturbed trajectory and its evolution, consider the perturbation itself — this vector itself. Earlier, what were we doing?
We started from an initial condition plus a perturbation and tried to figure out how the perturbed trajectory moves. Instead, he said, let us consider the perturbation as a vector and follow its evolution. The perturbation is the perturbed trajectory minus the original trajectory:

δx(t) = x(t) − x_p(t),

where x_p(t) is the original periodic orbit. I am using the subscript p because now we are talking about periodic orbits — there is a point in talking about the stability of periodic orbits only; if it is not a periodic orbit, obviously the question of stability in this sense does not arise. We are trying to figure out the stability of periodic orbits, and that is why I am using the subscript p. The perturbation itself is just this, and in terms of it, if you write the linearized equation, it obviously takes the form

d/dt δx(t) = [∂f/∂x evaluated at x_p(t)] · δx(t),

which you can write as A(t) — a matrix that depends on the trajectory itself — times δx. Now, what does this equation say? It says that the derivative of the perturbation with respect to time is some factor times the perturbation itself. This is also a differential equation. Had this A been a constant, it would be trivial to solve for how the perturbation moves, but unfortunately this A is a function of time. See what it is saying: suppose I have a trajectory which is a periodic orbit, x_p; I am considering a perturbation from there and trying to figure out how this perturbation evolves as time progresses. I am not talking in this case about the perturbed trajectory; I am talking about the perturbation itself — how is it evolving? We are writing a differential equation in terms of the perturbation. We have linearized about the periodic orbit, and as a result we have obtained this equation, which is fine; but this A is a function of time, and there we have a difficulty. We cannot really solve it directly, and people before Floquet were stopped at that stage. Floquet offered the great insight — without proof; he did not actually prove it, but later it was proved — that this A (δx here is a vector, and A is a matrix) will be time-varying, but its time variation is periodic in time. That means, as the solution goes around the periodic orbit, A comes back to the same value. Imagine that A is a three-by-three matrix: there will be something in each of these positions, and all of these entries will be time-varying quantities, but all of them will be periodic in time — these numbers will change, but with the same periodicity as that of the original orbit.
As a result of which, he said, if you observe it once every cycle, all these will be constants. It is a great insight: these numbers are all time-varying, but don't be disturbed by that — you only need to observe them once every cycle, and then they are constants; every time the orbit comes back to the same position, they take the same values. When you probe that idea further — here you have a periodic orbit which, according to him, you observe once every cycle — it becomes the same as a Poincaré section. What you learnt as a Poincaré section, and the concept of the stability of the fixed point on the Poincaré section, is the same as the stability of that periodic orbit as per Floquet. Essentially, then, the problem becomes how to obtain this quantity once every cycle — it is actually time-varying, but observed each time on this particular plane it is constant. So forget about the time variation and try to figure out what it will be once every cycle. From this point of view, the crux of the problem becomes to find, for t spanning a whole cycle,

δx(T + t0) = Φ(T + t0, t0, x0) · δx0,

where T + t0 is the final time, t0 is the initial time, and δx0 is the initial perturbation.
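This one-period propagation can be computed numerically by integrating the perturbation matrix around the cycle, exactly as the sensitivity function was integrated earlier. A sketch using an oscillator with a known circular limit cycle — the example system is an assumption, chosen so that the answer can be checked:

```python
import math

# Sketch: monodromy matrix of a limit cycle, obtained by Euler-integrating
# the perturbation matrix M (with M(t0) = I) over one full period together
# with the state. Assumed example system:
#   x' = x - y - x(x^2 + y^2),  y' = x + y - y(x^2 + y^2)
# whose unit circle is a periodic orbit with period T = 2*pi.

def f(v):
    x, y = v
    r2 = x * x + y * y
    return [x - y - x * r2, x + y - y * r2]

def jac(v):
    x, y = v
    return [[1 - 3 * x * x - y * y, -1 - 2 * x * y],
            [1 - 2 * x * y, 1 - x * x - 3 * y * y]]

def monodromy(x0, T, n=100000):
    """Integrate state and perturbation matrix around one cycle."""
    x, M = list(x0), [[1.0, 0.0], [0.0, 1.0]]
    h = T / n
    for _ in range(n):
        A, fx = jac(x), f(x)
        M = [[M[i][j] + h * sum(A[i][k] * M[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        x = [x[i] + h * fx[i] for i in range(2)]
    return M

M = monodromy([1.0, 0.0], 2 * math.pi)
```

The eigenvalues of M are the Floquet multipliers: for this system one is close to 1 (a perturbation along the orbit itself neither grows nor decays) and the other is close to e^(−4π), deep inside the unit circle, so the cycle is strongly stable.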

The essential problem becomes to figure out this quantity. You start from an initial perturbation and go over the whole cycle: say you start from an initial perturbation here, and after the whole cycle you arrive there. The initial perturbation is this much; the final perturbation is that much. You then have to express the final perturbation as something times the initial perturbation, and that something has to be found. Since these are vectors, that something is a matrix. The problem boils down to finding this matrix, which is called the monodromy matrix. The monodromy matrix is nothing but the matrix by which the initial perturbation has to be multiplied in order to get the perturbation after a whole period. It essentially embodies the evolution of the perturbation over a whole cycle; that is the monodromy matrix. Now, you have come across the concept of the state transition matrix; those who have done some course in control theory have seen it. What is the state transition matrix? Write the solution of the set of differential equations as Phi(t, t0, x0). That solution can be written as a linear combination of linearly independent solutions, say Phi1(t, t0, x0), Phi2(t, t0, x0), and so on. Out of these possible solutions Phii(t, t0, x0), there is only one that satisfies the following condition: substitute t0 in place of t. What should you get? The final condition is this matrix times the initial condition, and here the final time is the initial time, so this matrix must be the identity: Phi(t0, t0, x0) = I.
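To make the monodromy matrix concrete, here is a numerical sketch, again using a made-up 2x2 periodic A(t) rather than any system from the lecture. By linearity, the i-th column of the monodromy matrix is simply the unit perturbation e_i propagated over one full period; the eigenvalues of the resulting matrix (the Floquet multipliers) then decide stability:

```python
import math

T = 2.0 * math.pi  # period of the (hypothetical) orbit

def A(t):
    # Illustrative periodic matrix, A(t + T) = A(t).
    return [[0.0, 1.0],
            [-(1.0 + 0.2 * math.cos(t)), -0.1]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def rk4_step(t, v, h):
    k1 = matvec(A(t), v)
    k2 = matvec(A(t + h / 2), [v[i] + h / 2 * k1[i] for i in range(2)])
    k3 = matvec(A(t + h / 2), [v[i] + h / 2 * k2[i] for i in range(2)])
    k4 = matvec(A(t + h), [v[i] + h * k3[i] for i in range(2)])
    return [v[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def evolve(v, t0, t1, n=1000):
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        v = rk4_step(t, v, h)
        t += h
    return v

# Each column of the monodromy matrix M is a unit perturbation
# propagated over one whole period.
c0 = evolve([1.0, 0.0], 0.0, T)
c1 = evolve([0.0, 1.0], 0.0, T)
M = [[c0[0], c1[0]],
     [c0[1], c1[1]]]

# Eigenvalues of a 2x2 matrix from its trace and determinant.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = tr * tr - 4.0 * det
if disc >= 0.0:   # real eigenvalues
    mags = [abs((tr + math.sqrt(disc)) / 2.0),
            abs((tr - math.sqrt(disc)) / 2.0)]
else:             # complex pair: |lambda| = sqrt(det)
    mags = [math.sqrt(det), math.sqrt(det)]

stable = max(mags) < 1.0  # all multipliers inside the unit circle
```

A useful sanity check is Liouville's formula, det M = exp of the integral of trace A(t) over one period, which for this particular A(t) is exp(-0.1 T).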
There is only one out of these that satisfies this condition, and that matrix is called the state transition matrix. In general, the state transition matrix has the following property. Suppose the state travels from one point to another via a third: call them xA, xB, and xC. As the state travels through the state space, we can write xB = PhiAB xA. This is the role of the state transition matrix: it takes the state at one time to the state at a later time. If you can write that, then you can also write delta xB = PhiAB delta xA; that is, if you perturb the state and the perturbation travels along, then the perturbation at this point is also related

to the perturbation at the earlier point by the same state transition matrix. The state transition matrix not only relates the initial state to the final state; it also relates a perturbation of the initial state to the resulting perturbation of the final state. Then, if the state travels further, you can also write xC = PhiBC xB, and likewise delta xC = PhiBC delta xB. If these are true, then you can also write delta xC = PhiAC delta xA, and in that case PhiAC = PhiBC PhiAB. Note the order: since the matrices act on the vector from the left, the matrix for the later leg of the journey appears first in the product. If you travel continuously from xA to xB to xC, then how the final perturbation evolves from A to C is obtained by simply multiplying the two state transition matrices. Let us stop here, and we will continue in the next class.
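The composition property can also be checked numerically. The sketch below, using the same kind of made-up periodic A(t) as before (again my own illustrative choice, not a system from the lecture), builds the state transition matrix over [0, T/2], over [T/2, T], and over [0, T], and verifies that the product of the two half-cycle matrices, taken in the right order, reproduces the full-cycle matrix:

```python
import math

T = 2.0 * math.pi

def A(t):
    # Illustrative periodic matrix, A(t + T) = A(t).
    return [[0.0, 1.0],
            [-(1.0 + 0.2 * math.cos(t)), -0.1]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def rk4_step(t, v, h):
    k1 = matvec(A(t), v)
    k2 = matvec(A(t + h / 2), [v[i] + h / 2 * k1[i] for i in range(2)])
    k3 = matvec(A(t + h / 2), [v[i] + h / 2 * k2[i] for i in range(2)])
    k4 = matvec(A(t + h), [v[i] + h * k3[i] for i in range(2)])
    return [v[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def evolve(v, t0, t1, n=1000):
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        v = rk4_step(t, v, h)
        t += h
    return v

def stm(t0, t1):
    # State transition matrix over [t0, t1]: propagate the
    # columns of the identity matrix.
    c0 = evolve([1.0, 0.0], t0, t1)
    c1 = evolve([0.0, 1.0], t0, t1)
    return [[c0[0], c1[0]],
            [c0[1], c1[1]]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

phi_AB = stm(0.0, T / 2)   # first leg:  A -> B
phi_BC = stm(T / 2, T)     # second leg: B -> C
phi_AC = stm(0.0, T)       # whole trip: A -> C

# Later leg multiplies on the left: Phi_AC = Phi_BC * Phi_AB.
prod = matmul(phi_BC, phi_AB)
```

Up to the small numerical error of the integrator, prod agrees with phi_AC entry by entry, which is exactly the composition property stated above.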