Running Docker and Rancher Locally on Dev Machines – May 2016 Online Meetup

Hi everyone, this is Shannon Williams from Rancher Labs. Thanks a lot for joining us for today's Rancher meetup in May. We've got a great session today: we're going to go through some very interesting content about running Docker and Rancher locally on dev machines, and luckily we've got a big audience here for this topic, so clearly it must be something interesting.

Just for those of you who haven't joined before, I'm Shannon Williams, one of the co-founders here at Rancher Labs, and I run our marketing and field team. You can find me on Twitter at @smw355, and I'd love to connect with anyone who wants to talk about the meetups or has ideas for meetups, so please feel free to reach out to me that way. Today I'm joined by Will Chan, who is one of my co-founders here. Darren, my usual collaborator on these events, is off doing a road trip with his kids, so he's not able to join, but Will is going to be filling his shoes. Will is our VP of engineering. Will, do you want to say hi? "Hey everyone." If you've got questions about features and roadmap and all sorts of things, Will is the man, so feel free to ask and I'm sure he'll do his best to answer.

We're also joined today by our two guest speakers. Chris Owen is Rancher's DevOps lead in the UK; he's going to be sharing an overview of how to run Rancher locally and some of the work he's done to enable that. Chris, do you want to say hi? "Hi everyone." Chris is joining us from Leeds in the UK. And Mark Matthews will be sharing a story about how he and his organization have been using this. Mark is the principal at ARKM Enterprise Consulting, and he works at one of the largest healthcare organizations in the world, where they've been implementing this approach, so he's going to talk a little bit about that. Mark, thanks for joining us, say hello. "Hi everybody." Fantastic.

All right, for everyone who hasn't attended before: we run these sessions every month, and we really do try to do them a little differently than your typical company webinar. It's meant to be an interactive meetup, not a webinar, and that means a couple of things. First of all, everything we're doing here is live, and we demo every element using live machines on real servers, so things won't always get done on time, and you'll certainly see at times that services won't work exactly as we're expecting. So expect that this will be a very live, real demo, and we're very much open to your questions. It's meant to be a group conversation that gives you a chance to ask questions as we go. If you haven't seen it, there's a little questions tab in the GoToMeeting bar on your right, so feel free to fire off any questions you have about the topic we're discussing, or other related topics around containers, Docker, Rancher, Kubernetes, Mesos, anything you're interested in learning about, and we'll do our best to answer them. We typically get lots and lots of questions over the course of these. We'll answer some in the chat and share them with the whole audience if they're relevant and interesting and we think a lot of people will want to hear them, and we'll also ask a lot of them live. There aren't any bad questions, so feel free to ask; we'll do our best to answer, and we may ask a clarifying question here and there, but it's very easy to get involved, so please do.

Like I said before, things will break, so be patient. We're recording this entire session and it will go up on YouTube in the next day or so. So if you have to leave, if you've only got an hour and we're running over, don't worry too much; you can always come back and watch the rest of the show later. We get enormous audiences for these meetups, and people join us from literally every country around the world. The biggest downside to a virtual meetup versus a real one is that we don't get to see each other, say hello and become friends. A lot of people come back over and over again, but if you want to get to know each other and get involved, the conversation usually really happens on Twitter, so feel free to jump on there. There's a hashtag, #RancherMeetup, that you can use to share a picture of yourself, show us what you're actually doing out there, show how your team watches it. There are so many fun ones that, as I see them, I try to grab them and post them on this page.

So share what you're doing and meet other people; there are a lot of great people out there in the community.

I do want to ask a couple of quick polls as we look at topics to cover and some of the things we're thinking about doing with the meetup going forward. The first question is a really simple one: have you ever attended one of these Rancher meetups before? I'm curious what percentage of the audience has attended or hasn't, so I've put it out there; please vote and let us know if you've made it to one of these in the past or if this is your first time. We'll leave it open for a moment. It looks like more than half have answered, so I'm going to close the poll now and share it, and hopefully you can see the results. It looks like about 40% have attended in the past and 60% haven't, which is a pretty typical ratio; people tend to come back over and over again, and new people show up all the time.

The other poll is really for anyone who has attended, and it's about what topics you'd like us to start covering in the future. It's multiple choice: you can pick three, four or five of these and say you want to hear all of them, but I've shared four or five options that I'd love your feedback on as we think about scheduling the next few months. From conversations with users, I think there are a lot of questions about networking, DNS, load balancing and SSL, and how to manage all of that around containers. There have been tons of questions about monitoring and logging; we've done two meetups in the past on monitoring and logging, but it tends to be a topic people always want to dive into more. Persistent storage for containers seems to be a big topic we get a lot of questions about, so it's probably worth doing a meetup on that soon as well. And then, recently, as Rancher has grown to support Kubernetes, Swarm, Mesos and Cattle, obviously a lot of people are asking which of these makes sense for them. I think I've got enough answers here, so I'm going to close the poll and share it, and it looks like networking is the most popular topic.

We do try to weave user stories into as many of these as we can. We recently had Sony join us for a meetup, and we had the guys from Don and Tom talking about how they're using containers in the last meetup. Today Mark is joining us to share how, in a healthcare setting, they're using Rancher and containers on their laptops. We try to introduce a broader topic and then layer user stories on top, and that seems to be the way people like it. All right, thank you all very much for sharing all that.

The agenda for today's talk is to start by talking a little bit about how we've gotten to this point of running Rancher and Docker locally on dev machines. It seems like an interesting transition: containers have moved from the developer to the DevOps team, and now we see it almost going full circle, where the DevOps team is taking concepts like orchestration and bringing them back to the developers. So I'll talk a little bit about that and get us kicked off. Then Chris is going to dive into his session. He's going to go through some slides to set the context for running Docker and Rancher locally on dev machines and how to do that, and he'll be demoing the whole time on this concept of basically how to build a micro data center on your laptop, so that you can do all of the build, test and deploy of complex applications locally, consistently, based on immutable objects coming out of your git repo. Then Mark is going to talk about how they're using basically that same approach at his organization and what it's meant for them, and we'll go through some of the benefits. And finally we'll do our usual Rancher update; Darren won't be here this month, so it'll be Will talking about some of the latest features for a couple of minutes. If there are questions that are more generic about Rancher, I may hold them for later and we'll answer them in the second half; so if I haven't answered a question right away, either we're about to cover that topic or we're going to push it back a little into the second half, to keep things on target as we talk through the different topics today.

Okay, really quickly, for those of you who aren't familiar with Rancher: Rancher Labs is an open-source software company, and we build two main products.

Rancher is a container management platform. It's open source, and it's used to deploy containerized environments for organizations that want to run containers in production; it's a very popular open source project. RancherOS is our other main open source project: a tiny Linux distro, now about 30 MB, that runs Docker as PID 1 and makes it very easy to run a tiny OS with just enough operating system to deploy Docker. The two are not mutually exclusive, and you can use one without the other: you can run Rancher on top of any Linux distro that supports Docker, and you can run RancherOS on its own without needing any access to Rancher. Those are the two projects we put a lot of resources into; both are open source, available on GitHub, and you can learn about them there.

But I want to talk really quickly about the concept behind Rancher, because it will be critical to what we're going to discuss today. Rancher enables you to consume any Linux machine, whether it's something running in the cloud, a virtual machine, or a physical box, in a standardized way, and it makes it very easy to deploy containers on it. If you've seen something like Amazon's EC2 Container Service, think of Rancher as a software-based version of something like that: it allows you to build and deploy containers anywhere. The problem it's basically solving is how to pull together the whole stack of technology necessary to run containers in production. If you've tried to do this yourself, you'll know it is a relatively large challenge to put together the networking capabilities, the security approaches, managing the actual engine and the registry, plugging it into your user directories, deploying and managing monitoring, building your clusters, building your schedulers, running an app catalog; all of these things are independent projects at the moment. When we launched Rancher last year, it was really the first time there was an open source product that gave you a turnkey approach: a single framework, a management server, that allows you to spin up multiple clusters. You deploy Rancher as a single container, and when it launches it allows you to create what we call an environment, a group of physical or virtual resources (VMs or Linux hosts) that you isolate into these clusters, and then for each cluster you pick a container orchestration and scheduling approach. You can deploy it with Cattle, or Swarm, or Kubernetes, all of which make it very easy to deploy your applications, so you have a framework for deploying applications. Across the board, Rancher implements all of the infrastructure services you need as well: we implement container-based networking, we have storage services you can deploy, and we have a full application catalog out of which you can launch almost any kind of software you can imagine, such as CI/CD tools, logging tools, monitoring tools, databases, content management systems, and big data platforms. The catalog differs depending on whether you're using Kubernetes, Swarm or Cattle. So Rancher basically gives you this enterprise, organizational management platform. For a lot of people, the use case for Rancher is to deploy it as the thing that's going to run all their containers in Amazon, or run the containers in their data center, and when we started Rancher that was really the initial goal we set out to address.

Today's topic, though, is about how people have taken that approach and brought it back to the laptop. I was thinking about how to describe what we're doing today, and it reminded me of the Google stickers you used to see: "my other computer is a data center", or the cloud; I think "my other computer is a data center" is what it used to say. I loved that idea, because now that we've put in place immutable deployments and moved to using templates like compose to describe our applications, we're looking back at the laptop and thinking: how, as a developer, can I leverage all of that technology and all of that approach and bring it to the laptop, so that I can use my development resources to work on code that is identical to what's running in production? That's really the goal of today's session: to take all the DevOps approaches we now use to push code into production, to standardize deployments, to leverage things like service discovery and container-level networking to link together services, and then run it on a local host and see what that experience is like. That's really what we're going to cover. Chris is going to kick us off.

He's going to dive into this concept and we'll go from there. I'm sure there are lots of questions; I'll take a look at them as Chris gets started. Chris, I'm now going to make you the presenter, so give me one second and you should see an option to present your screen. There you go, I can see your screen. All right, can you see it now? "Yes, I'm assuming you can now see my slides." I can hear you fine, you're good to go. Fantastic.

Okay. I'm doing this in a slightly strange way, because I haven't really got a laptop that's powerful enough to run both my presentation and the virtual machines for this presentation, so I'm actually terminal-serving into my laptop from a desktop; it is still technically running on a laptop. This is something I've been thinking about for probably about a year; it's been in conception in my head and working away in the background, and this is where it came from. In a previous role I was a DevOps lead, and I was having problems in that I wanted to be able to get code from my developers through to production in a systematic way. The problem statement is this: I want my developers and testers to be able to build and test quality code independently, preferably using the same toolsets as production uses. I want something that's reliable, repeatable, and tested consistently, with minimal compromises. In the last ten years (and I've been in the infrastructure space now for twenty years) things have started to move. Moving from physical machines to virtual machines allowed you to do a little bit more of this, in that it was cheaper to spin up a separate environment for devs to test against, and virtual machines made it very quick to reset. But there are still challenges, in that every time you spin up a virtual machine you're still running a full-blown operating system. This is where Rancher and Docker came to life for me: I joined them together and thought, this is actually something that is now technically possible.

So, some of the problems we're trying to solve. Multiple developers all trying to update applications at the same time: this is where you've got a shared development environment, you might have 30-plus developers sharing 15 servers, all working on different feature branches, trying to update parts of the service independently and trampling all over each other's toes. I'm sure it's a scenario people have come across quite regularly, and it just causes pain. Then you get scheduling conflicts: you've got an environment that developers are working against, and the product owner comes along and says "I want a demo", so they've got to get it into a state where it all works, which stops your devs working and slows down the production of quality code. You may have no SSL or proxy servers in the environment you're working in; it's usually cobbled together from old pieces of kit that are lying around, because no one wants to spend money on testing this sort of stuff. Then there are data quality and consistency issues, because developers tend to treat that environment as a sandpit. You end up with problems where someone has spun up a new version of a service, edited the data schema in your database, and rolled out their service, but that schema change is still there and causes a problem with another service that's running; or the data they've put in breaks primary keys or whatever across your database, and everything just starts going wrong. Then there's network contention: you may have thirty developers all stuck at the thin end of a piece of string, because they're testing against a virtual environment that sits in a remote data center, and that again can cause problems. Finally there's environment consistency: it's very, very rare that you have the ability to spin up an exact replica of a lifelike environment in a repeatable way for devs to test against. These are the problems I was facing as a DevOps lead, and I wanted a way around them. So, a potential solution.

Docker. What I'm running in the data center I can now run locally, either on a local hypervisor or natively, depending on what operating system you're running, and I can have a common set of data. This sort of solution is useful for developers and testers, and there are a couple of things I'd like to pull out of the slide.

The data set is one of the key things for me. If you've got a bunch of services that are all being built by a big team of devs, and you've got your testers working independently, and there's a database sitting at the back end of all this: have some scripts that can build your database in a consistent way, dockerize that database, and deploy it out to all of your users. That way, at least you've got a common data set that everyone is testing against. The great thing with Docker is that if you don't like what's there, you can blow it away and start again; if you put some bad data in, you trash the container, restart it, and you're back where you started. A big lesson we took away from this was that you can reuse this data over and over again, and everyone is testing against it consistently. Also, have representative data: don't just stick in dummy data with Mickey Mouse names. One of the things you find when moving from a development environment into a production environment is that you don't experience certain issues until you hit your live data set. If you can have something that's not necessarily at the same scale, but certainly has all of the same variable combinations in there, then at least you're going to hit your production systems with a much higher level of confidence than if you're just testing against dummy data all the time. The other thing that's important is to keep your apps as close as possible in version to what you're running in your data center. You could have Postgres 9.4 on your laptop and be running 9.5 in your data center; if that's the case, update your laptop. It brings you closer to what you're running in production and allows fewer discrepancies and fewer errors to creep in between the transition from development to production.

So what does it look like? I wrote a blog that was published last week about this "micro data center", as Shannon termed it (I really liked the name), and it describes setting this up in a very basic way. It gives you step-by-step instructions on how to get a VirtualBox machine up and running Rancher, talking to another VirtualBox machine, which is the actual development machine you're going to deploy your services to. What I'm going to do is draw a little diagram, talk about the various bits, and then go into the first part of the demo and show how easy it is to spin up one of these machines.

So you've got a desktop, and it can be anything: a Mac, a Windows box, or a Linux box. If it's a Linux box you don't need the virtualization layer, but typically, in the places I've been, this has been done on Macs or Windows machines. I'm assuming people are using the Docker Toolbox, because it's very easy to download and install; it's got some really nice click-through wizards to get you started on the path to exploring Docker, and as part of that it currently installs the Oracle VirtualBox product as its virtualization layer. The scenario I'm going to cover here uses three virtual machines, three independent machines within a single VirtualBox installation. The first one is just a Rancher server, and that can be relatively small: Rancher's minimal install recommends a gig of RAM, but I run mine quite happily on half a gig, so if you're not doing a data-center-scale deployment, half a gig works fine. I've also got a mirror, and the mirror allows me to cache Docker images, so when I'm doing builds repeatedly and I don't destroy my mirror, it keeps a cache of the images I'm pulling. That again reduces some of the network contention and lets you build slightly faster. This is all about speed and efficiency for a developer: you don't want something that's going to take absolutely hours to stand up and get running, and adding in these little things can help speed it along.
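Chris doesn't show the exact commands for the mirror, but a common way to build one (an assumption on my part, not taken from his setup) is to run the official registry image as a pull-through cache of Docker Hub and point each engine at it when the VM is created; the IP address below is just a placeholder for the mirror VM's static address.

    # on the mirror VM: run a registry configured as a pull-through cache of Docker Hub
    docker run -d --name mirror --restart=always -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # when creating another engine with docker-machine, point it at the mirror
    docker-machine create -d virtualbox \
      --engine-registry-mirror http://192.168.99.98:5000 \
      dev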

If people are going to go down this route and start running multiple virtual machines, one of the things I would recommend is looking at the boot2docker profile, if you're using boot2docker as the operating system, which is the default the Docker Toolbox will use. It does have some limitations with the Docker Toolbox, and I will go into those, but boot2docker has this concept of a profile file that persists between reboots. A lot of the data doesn't persist, but some of it does, and this profile does, so within this profile file you can set some consistent parameters that you want. In there I would recommend setting static IP addresses, so that when you're communicating between these services you know where everything is.

On my development VM I'm going to run a Rancher agent, and that Rancher agent basically makes my host deployable to from my Rancher server: I'm going to stand things up on my Rancher server, and those things will get deployed down to my development VM; the Rancher agent is what allows that. The other thing I'm going to spin up is Jenkins, because I want to show how easily an automated deployment can be integrated into a developer's workflow, so as part of the demo I've got a Jenkins box there as well. Then I've got some shared folders. The way I'm going to provision the code into the environment is by sharing a folder containing my git repo from my desktop; it gets shared into the development VM and into the Jenkins VM as well. This is not necessarily the way you'd want to do it, and it just depends on your situation: if you wanted this to run in a data center in a completely isolated way, then obviously you would have your Jenkins box do git checkouts directly into your Jenkins container, or into a sidekick data container, but in this instance, just for ease and speed, I've mapped it through as a shared folder. It does have some benefits on a developer's laptop, in that I can then kick off builds locally without having to do a commit. Finally, once my Jenkins job has completed, it will have created a bunch of Docker images that will just be sitting on my development VM, and I'm then going to use the Rancher server to stand up my application stack and show it running.

So, on to the first demo. This is where it all starts to go wrong; hopefully not in this instance. "Hey Chris, could you increase the size a little bit? The text is a little hard to read." Is that better? "Perfect, that's much better." All right. The first thing I'm going to do is show, from the blog, how easy it is to spin up a single virtual machine. All I'm doing here is using docker-machine, which gets installed as part of the Docker Toolbox, to create a VirtualBox machine. I'm going to give it a single CPU, I'm going to call this machine "machine", give it an 8 GB disk and half a gig of memory, and I'm going to tell it to pull down the 1.10.3 version of Docker. The reason I'm using 1.10.3 is that currently we only officially support 1.10.3 for Rancher; by default it would pull down the latest version of boot2docker, but for this I'm pinning 1.10.3. If you wanted to use a different operating system you can specify any ISO here, for example RancherOS; you just need to point it at the ISO and it will download it. So, hitting enter, it starts downloading that ISO image. As you can see here, I've got my three hosts, the three VMs that are going to be part of this, plus the machine I named "machine", and it's spinning up. If I look at the details it's a little bit tiny, but you can see the Docker whale coming up in the picture. What this does is spin up the boot2docker operating system, a very cut-down Linux installation that doesn't persist between reboots apart from a couple of files and some container persistence, but it's a very small, lightweight operating system, so it's very quick for getting things up and running. At the moment it's just waiting for an IP address; it's already created some SSH keys, and it's going to start the operating system very quickly.
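For reference, the docker-machine command Chris is running looks roughly like this; the sizes and the 1.10.3 pin are the ones he describes, while the exact boot2docker ISO URL is my assumption.

    # create a VirtualBox VM called "machine": 1 CPU, 8 GB disk, 512 MB RAM,
    # pinned to boot2docker 1.10.3 (the Docker version Rancher supported at the time)
    docker-machine create --driver virtualbox \
      --virtualbox-cpu-count 1 \
      --virtualbox-disk-size 8000 \
      --virtualbox-memory 512 \
      --virtualbox-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v1.10.3/boot2docker.iso \
      machine

    # point the local docker client at the new VM and check it
    eval "$(docker-machine env machine)"
    docker ps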

Right, so I've now got a machine that I can go into; excellent, that's just what you want. I'm now in the VM I've just spun up, and I can run all of the usual docker commands. Obviously there's nothing running, so if I just pull down Redis (this one I've not done my usual tweaks on to point it at my local mirror, so this is going to pull the Redis image direct from Docker Hub into this machine) it should start fairly quickly. There we go. If I now run docker ps I can see I've got a Redis instance running, and I can go into it and run redis-cli: ping, PONG, there we go. That's how quick it is to go from nothing to running a virtual machine that's running Docker. Again, this is really quick for testing things out: say you didn't know anything about Redis; in three or four minutes there I've managed to pull down a fully functioning Redis installation that I can then go away and play with. I'm going to terminate this machine because I don't actually need it as part of my demo.

What I'm going to look at instead is what I've already got running here: I've got my mirror, as I described before, I've got a Rancher server, and I've got a dev machine. The Rancher server has half a gig of RAM, 512 MB, and the dev machine I've given a couple of gig of RAM. Again, this is going to depend on what applications you're running and how you want to scale these things, but it's sufficient for this test and this demo. So what have I got in here? At this point all I've got is a Rancher server with one host added; I've not got any application stacks or anything running, it's just a bare machine, and as you can see there are no containers running on the dev machine.

What I'm going to do is run the docker commands I've wrapped up in the compose files I'm going to use. As part of a Rancher deployment you get the option to use rancher-compose, so you get a docker-compose.yml and a rancher-compose.yml in which you can define your services. I'm just going to quickly run this in and get it going. What this does is create my Jenkins stack. The first thing it spins up is the network agent, which is part of the Rancher system; this is what communicates with the network and the Rancher server. Then we've got a Jenkins machine spinning up. If I quickly pop over to the applications tab, it shows that all I've got spinning up in here is a single Jenkins container, and we can have a look at the config, which is the exact config I've just thrown into it: I've got a localhost image called jenkins-local, I map through the docker socket (for a reason I'll explain later on), and I'm publishing it on port 8000 on the external port, which maps through to port 8080 on the internal port. Now that's up and running we should be able to go in and see it; it does take a minute or so to come up, so this is where I'm keeping my fingers crossed that the network works for me. "So the engine is running on your laptop, right? You're sizing everything pretty small; you can run it all in about three gig of memory on the laptop?" Yeah, it's not particularly intensive for this sort of scale of app; however, depending on the scale of the application that people are using, it can get quite intensive.
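Chris doesn't paste the compose files on screen, but based on what he describes (a locally built Jenkins image, the Docker socket mapped through, and external port 8000 mapping to 8080 inside) the pair would look something like this; the image tag and any registry prefix are assumptions.

    # docker-compose.yml -- the service definition
    jenkins:
      image: localhost:5000/jenkins-local          # hypothetical tag for the custom Jenkins build
      ports:
        - "8000:8080"                              # external 8000 -> Jenkins' 8080 inside the container
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock   # lets Jenkins drive the host's Docker engine

    # rancher-compose.yml -- Rancher-specific settings, here just the scale
    jenkins:
      scale: 1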

In my Jenkins image I've pre-installed a task to do my build, and what I've got is this voting app. While this spins up, I'm just going to show you that it's actually doing something, building and pulling things down, and it takes about five minutes to do all this: it's pulling in some Docker images and it's going to build some code. So I'm going to quickly slip back to my presentation.

One of the things I wanted to cover was how Rancher helps in this instance, rather than just running Docker locally. Rancher gives us visibility of the application stack: if you've got a developer who's not particularly comfortable with infrastructure, Rancher really helps them visualize what it looks like. It's a lot easier for them to see that something's not working when it turns red than it is to go through a docker command and try to figure out what the text actually means. Rancher helps by giving access; it makes all of this more accessible to people who are less technical. This is where you can say, okay, I've got my complete system working, and maybe you want to go to a remote site and show it off, or you're going out to a conference and you want to show off what you've done: this allows people to run the full system, potentially on their laptop, and share it all running, and they don't have to be wizards at Docker or anything else, because Rancher takes care of all of that for them. There's consistency, everyone running the same thing: because it can be scripted, everyone can run the same scripts, and it looks the same on every single person's laptop and installation. It's got an API and a GUI, so for the people out there who want to do things programmatically from the command line, Rancher gives a fully featured API that you can run curl commands against, or you can interact with the rancher-compose client utility, or you can do things in the GUI. This covers simple things like scaling up your services, all the way up to standing up a complete stack; it's all accessible via the API and the GUI.

Rancher also gives you the load balancing that you don't necessarily get out of the box. As part of the Rancher server you get the HAProxy load balancer, which gives you a single entry point at your front door. One of the problems you encounter developing locally on a laptop is that if you're trying to run multiple instances of, say, Tomcat, you've got multiple Java instances running in your Docker virtual machine, and each one of them has to bind to a different external port for you to hit it. We can take that pain away by sticking a load balancer in front: you only access it on a single port, and then, based on the URI, it knows which service it needs to go to, which again takes you a step closer to how your production system is running. You typically wouldn't run your Tomcat instances on multiple different ports in a production environment; they'd all run on the same port. Because the load balancer handles the port binding, it gives you that flexibility to run closer to your live production setup.

And then the catalog. The catalog is a great way of publishing your applications out to a wider audience. We've got the public catalog, which has a bunch of applications in it that you don't really need to know anything about: you can click a couple of buttons and have, say, a multi-instance Redis cluster up and running, or Ghost, or whatever else; there's a huge array of things, and I really would advise people to go and look at what's in the public catalog just to see what's there. But it also allows you to define a private catalog for your own people to use. As your developers go through and publish these things, if you combine a Docker registry on your network with the catalog functionality of Rancher, it becomes really, really easy for people to run up your application stack very quickly with very little knowledge. So those are the bits I wanted to bring out to show why Rancher really helps in this instance.

The demo application I've picked (if anyone's ever been to any of the Docker presentations, this will be very familiar) is the Docker voting app. They developed this application for doing demos, and it combines multiple different types of technology just to show the flexibility and the power of Docker. You've got a front-end voting app written in Python: the user interacts with it, presses a button, and it pops something onto a Redis queue.

There's then a Java worker process that sits there and polls the Redis key, waiting for something to be in it. It doesn't do anything fancy: it basically picks up the vote and sticks it into a Postgres database. Then you've got a results app that runs in Node.js. It's very simple, but it's very effective at showing how you get multiple different technology stacks all working together in a single Docker setup. Now, I've taken this one step further because I wanted to show off the power of Rancher, so I've scaled my voting app and stuck a load balancer in front of it, just to show that one of the benefits of having Rancher in there is that it allows you to go through and start scaling these services; you can check for session persistence and things within your application stack before you actually hit production. You can start scaling things and figure out whether they work or not.

Okay, so if I go back to the demo, this should be nearly completed. There we go, we've got a success; let me try and zoom in. As you can see, this has built a bunch of containers, it's pulled down all the Java, it's created my Docker containers, and at the end Jenkins has given me a success. Now if I go back to the job I've got a little green ball, and it tells me how long my build took: four minutes and fifty-odd seconds. So from start to finish, from standing up that Jenkins instance and it polling my git repo, it took four or five minutes to go through and compile all my Docker containers. So what I'm going to do now is quickly run... "Chris, before you move on from this, there was one question that came in from Jack Twilly. He was asking about the step between spinning up the local machine and registering it with Rancher, and how you then launched Jenkins; there was a bit of a gap he wasn't quite clear on. Can you walk through that one more time? You spun up the local VirtualBox machines, created those, and then how did you register them and launch Jenkins? Jack, I hope this answers your question."

Yes, okay. So I cheated a little bit: I wanted to show how quick it is to spin up a VirtualBox machine, so the machine I created called "machine" was just a dummy virtual machine, spun up to show how quickly you can get a Docker instance up and running and pull down an image. I actually had a pre-built Rancher server and host already there. If you go and have a look on the Rancher blog at the micro data center post, it walks you step by step through what you need to do to get to the state I started at; hopefully that answers your question. "So at a high level: your first step, once you create that first Rancher VM, is to launch the Rancher server container; that then gives you the command for the container you want to run on your dev machine, so you launch the Rancher agent container on the dev machine. The agent container runs on the dev machine, the server container runs on the Rancher machine, and now you've got the pair of those pieces working together; and once you're there, you can launch the Jenkins component." Yes. "Okay, great."

So, with the Jenkins stuff: like I said, I embedded my own tasks into it, so it's a slightly custom build of Jenkins. When I mentioned earlier about mapping through the docker socket, what that allows me to do is have the Jenkins machine run my docker commands. If we go and look at the actual commands my voting app job is running (the config isn't pretty), what it runs maps through the docker socket, maps in my git repository from my laptop, and uses the docker 1.10.3 image to mount everything and do a docker build. Typically you'd have to be on the Docker host to do a docker build; exposing that socket lets you map the docker socket into the Jenkins machine, so you can do a docker build directly against the Docker host, effectively from within that Jenkins machine.
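A rough sketch of the pattern Chris is describing here, with the paths and image tags as assumptions rather than taken from his actual Jenkins job: the Jenkins container is started with the host's /var/run/docker.sock mapped in, and a shell build step then uses the docker 1.10.3 client image to build against the dev VM's engine.

    # run inside a Jenkins "execute shell" build step; the socket and the shared
    # git checkout were mapped into the Jenkins container when it was started
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /path/to/voting-app:/workspace \
      docker:1.10.3 \
      docker build -t localhost:5000/vote-app /workspace/vote
    # note: /path/to/voting-app is the checkout path as seen by the dev VM's
    # Docker engine (a placeholder), and vote/ is one sub-app of the voting demo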

It's just a really neat way of getting your build to go through.

Okay, so I've still got just my Jenkins application stack up and running. Now what I'm going to do is run my voting application, and again this is just rancher-compose wrapped in a make file. This will stand up the pieces of the application I detailed in the slide a minute ago. As you can see, that's already started to appear, and we can watch it: rancher-compose will wait for some of these images to come down. Redis and Postgres are being pulled from my mirror, because I've not run them before on this Rancher host, and because the voting app was already built on my development VM it doesn't have to pull anything for that. Okay, so it's pulled down Redis; the Postgres image is slightly larger, so it should just take a second. As you can see here, I've exposed my external load balancer on port 5000, and when my result app comes up I've exposed that on a different port, 5001. All right, there we go, we've got a full stack up and running.

Let me show you what this very simple app is. I'm going to open this in a new window and push it over to the left of the screen, and then I'll open the result app in another window. Right, the voting app is very, very simple: you can choose one or the other, blue or green, and as soon as you vote you'll see the result change in the result app. The result app is on the left and the voting app is on the right, so when I change this it just flips between blue and green. Like I said, it's a very simple app, but it shows a quick way of having multiple technologies all combined to show something.

What I want to do next is edit this application. I'm going to change it from blue versus green to something else. Shannon, give me two things I can compare. "Stay in the Euro or leave the Euro?" We'll do In or Out; I don't want to emphasize anything there, because people might think I'm voting one way. See, this is all live. All right, so if I now look in my git repo, those are the two files I've just changed, so I just need to commit them. The way I've got it set up, Jenkins monitors the check-in; it doesn't actually pull the code into the Jenkins box, it just uses the folder I've mapped through. So if I do the commit... and I always forget this and sit there waiting, wondering why my change hasn't been picked up, and it's because I always forget to do the push. Okay. Now, if I go back to my Jenkins dashboard, we should see it: I've got this set up so that every two minutes it polls my git repository and checks for changes. If I go into the actual job I can look at the git polling log; it last polled at 53, so in just a few seconds we should see it pull this down. I'm just going to wait for this.
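Chris's "rancher-compose wrapped in a make file" isn't shown on screen, but the command the make target effectively runs would look something like this; the server address, project name and the environment variables holding the API keys are assumptions.

    # run from the directory containing docker-compose.yml and rancher-compose.yml
    rancher-compose --url http://192.168.99.100:8080 \
      --access-key "$RANCHER_ACCESS_KEY" \
      --secret-key "$RANCHER_SECRET_KEY" \
      --project-name voting-app \
      up -d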

"There was a question that came in a couple of minutes ago, Chris, while we're waiting for that to pull down, about the number of virtual machines you could potentially run with this. When you've done this in the past, in your previous role, what kind of scale did you typically run a virtual machine at for a more complex application? Anything you can share about that would be really useful."

Yeah. So you really are only constrained by the physical limitations of the machine. To do this stuff comfortably you probably want a machine with a minimum of 16 gig; the more memory you've got the better, and the faster the disks the better. Typically what we were working with was a dev machine running probably up to 30-odd Java applications, plus load balancers, a database, and we even had a PKI and a directory stuck in there, and that was running reasonably comfortably in 10 gig of RAM, 10 gig dedicated solely to the development virtual machine, and then, again, I was just running with a 512 MB Rancher server and a 512 MB mirror. So it really does depend on how resource-intensive your application is. One of the things you have to watch with things like Java is the amount of memory it tries to grab, so you might have to specify some Java opts to reduce the memory and trim that down; otherwise what you'll find is that a lot of your earlier services start fine, but your later services fail to start because they can't get enough memory. There's always going to be some level of compromise; you don't have hundreds of gigs of memory in a laptop like you do in a server, so it just depends on what your application looks like, but there are definitely ways to get around some of those limitations by tweaking things.

"Right, but it was a pretty traditional application you're talking about: databases, Java. It's not like you were running some microservices-style thing exclusively; it was a pretty standard existing deployment." Yeah, absolutely. We dockerized a commercially available PKI and a commercially available directory, and some of these things aren't trivial to put into Docker, but the dividends you can get from having all of this running locally for a developer can be massive.

If I come back to this build: as you can see, this build only took 6.9 seconds, compared to the 4 minutes 53 earlier, and that's because a lot of the material is already cached as you do the build. This is where developers can really benefit from having a git repository that's monitored and something like this automatically running in the background. And while this is great on a local laptop, you may also have a tester paired with a developer: the tester can set their Jenkins build monitor to watch that feature branch, so whenever the developer pushes something up to that feature branch it automatically downloads, checks out, and rebuilds all of the Docker instances, and allows that tester to go and test that feature. It's a really neat way to collaborate and to ensure you're running with the same set of data and the same codebase.

Okay, I'm just going to quickly flip back to a slide. The flow we went through was: I changed some files and checked them into my git repository; Jenkins was monitoring that repository; Jenkins then compiles the code and spins up the Docker containers; and all of that, from the git check-in through to the Docker piece, is automated. The final piece is to swing back into Rancher and do the upgrade of those services. There's nothing stopping you, as part of your Jenkins job, from having it automatically upgrade those services within your Rancher instance; the only reason I didn't is that I want to highlight a couple of the things you can do with Rancher that help a developer. But if you didn't need that, and you just wanted something that simply runs, you could change the Jenkins job so that it automatically spins up these new containers. So, back to my demo: as you can see I've got four containers running now, and when I did my Jenkins build I built all of my containers with the same names as before; I didn't change the version tag or anything.

If the image had a different version number, this is where you would change the image mapping through, but because I didn't, I'm just going to click upgrade and it will restart the services. What I want to do while that's happening is show that I'm still clicking in the voting app, and you can see the result flicking between blue and green as the old containers go away and the new containers come up; once it's finished, everything is running on the new containers. This is one of the great things we found when we used this as a lab within Rancher: you can upgrade while people are still clicking through the application, without timeouts or outages; it stays clean and available, and it let us dynamically add and remove parts of our applications. And with anything you do in Rancher, you can say "actually, I preferred how it was working before", roll back, and check that it's the same as it was.

"Chris, I don't know if it's just me, but your audio is a little bit distorted; it's getting broken up. Maybe your voice-over-IP is breaking up a little; let's give it a second." Okay, and I've just rolled back, so everything's back to how it was. Is it getting any better? "It's still very distorted, and it's not just me; someone mentioned it in the questions, and everyone is having a hard time hearing you. It started breaking up about two minutes ago. Do you want to switch and dial in on a regular phone? Maybe that will clear it up; we'll give you a second to do that." Is that better? "Yeah, that's definitely better. I think you're good." Excellent. You've got to love Wi-Fi, right?

Okay, so now I'm back to the state where I was, with my blue and green. All I really wanted to do there was highlight the ability to upgrade an application while people are still using it, and then roll it back and get back to the state you were working in before. So what I'm going to go ahead and do now is upgrade both my voting... "Would you mind just going back? When your audio was distorted it was a little bit hard to follow, so a couple of people wouldn't mind you covering again what you did just a second ago: how you did the upgrade and how you rolled it back."

Okay, so I'll show it on the result container. Because the way I've named my image is the same, it's very quick to upgrade the application, since I don't need to change the image. I've got this image here, the localhost result app, and, as with all of the Rancher services, I can just go in and click this upgrade button, and it asks which image you want to upgrade to. What this does is stop the image that's currently running and spin up a new one. When you've got multiple containers running you can determine the batch size and how long it waits between spinning up the additional ones, but in this instance I've only got one, so the batch size is going to be one. When I click upgrade on this one it will actually cause an outage, because it will stop the existing container and then spin up a new one; there was an option to start the new one before stopping the old one, but in this instance I didn't. So if I now refresh this, I should have "In" and "Out". The reason it still shows 100% is that I haven't refreshed the database, or the Redis queue, or the worker process; all I did was make a change to the front end and the back end, so it's just what's being displayed that has changed, and the actual data itself hasn't.
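The same upgrade Chris is clicking through in the UI can also be driven from rancher-compose, which is how you would wire it into the Jenkins job he mentioned earlier. A hedged sketch, assuming the service is called result-app and the stack voting-app; the flag names are from my memory of rancher-compose at the time, not from the demo.

    # upgrade one container at a time, waiting ~2 seconds between batches
    rancher-compose -p voting-app up -d --upgrade --batch-size 1 --interval 2000 result-app

    # once you're happy, keep the new containers...
    rancher-compose -p voting-app up -d --confirm-upgrade result-app

    # ...or instead go back to the previous containers
    rancher-compose -p voting-app up -d --rollback result-app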

my other application they should be all in and all out there we go so that’s how simple it is to actually perform an upgrade on an application stack with him around sure and then what you can do is I can go and I can go finish the upgrade and say yeah I’m happy that that’s not working and and that’s the the version of the container that I want to keep okay and then my voting app stacks to turn the screen again and everything’s back and running as it was okay so if I just go back to my slides ok so just of wanting to reiterate what the benefits of local dev are and it’s more representative of production like a certain sort having been in the industry for quite a while and docker and Rancher gives you that ability to say ok well I’m running Rancher in my production system I can never and run free locally I’m running darker on my production system and I’m now running it locally so it sort of gets you closer to running the same and stuff it might not be to the same scale but potentially you can be running the same number of services as you’ve got in your production service just maybe with not as much memory and at least it’s going to allow you to tease out some of the problems if not all of the problems before you actually roll it into your production environment and so just give you that extra level of confidence and the distributes a build system so obviously if you’re using something like get every developer has already got a copy of that that git repository on that laptop so your codes already sort of distributed out anyway but what this does is if you include your build system such as Jenkins or something excuse me then every every laptop effectively becomes a build part of your build system and has the ability to be able to build your system and potentially push it into production and so you saw you get more resiliency you’re no longer dependent on a central Jenkins server doing all your builds for you and independent development so this is the one that for me was absolutely key it it allows people to work without breaking other people and and you’ll see the softer the payoff in and productivity is really quite significant and move developers more to developers so when you’ve got a traditional dev house and you’ve got people and all they’ve ever really done is write Java code and this sort of method of working it makes sort of the dev office life easier because it allows the developers to start start using the tools that the dev offers are doing and sort of start moving towards the the management and sort deployment of a service and getting a better understanding of how sort of how it works in production and not just going in blind and going alright okay well I only know the job so and it’s that familiarity with the tools so you’ve got your developers when they go to a production system they’re looking at the same system as what they’ve got on their local laptop so again if that ease of sort of transition from supporting and developing and allows people pick things up quickly and sort of process things a lot more quickly because they’re already familiar with it so you can extend this stack to not only just include in the build system you could extend this to include your your login system or your um sort of your search elasticsearch or whatever it is that use in for doing sort of log query and everything else and then it’s just the consistency if everyone’s building from the same thing because this can all be scripted and it’s of its infrastructure is code everybody’s running from the 
So everybody experiences the same pain, and when one person fixes something, everyone benefits straight away.

Chris, just a real quick thing: I got an email from someone attending saying their questions bar is missing. If anyone else is missing the ability to ask questions, you can send a message through the chat window if you have that, or raise your hand and I can unmute your mic so you can ask verbally. For some reason something is going on with the questions bar in GoToMeeting today; it's turned on, so you should see it, but clearly some people see it and some don't. Anyhow, just an FYI on that.

OK. And limitations, which is the last slide I've got. The physical limitations of the machine are probably the biggest constraint you're going to have: you haven't got a data center in a laptop, you're only mimicking one.

All I would say is: the more memory you can get in the laptop, the better the performance and the closer you'll get to production parity. The other thing, especially with I/O-intensive applications, is to get the fastest disks you can. To be honest it's not even worth attempting this unless you've got some form of SSD, because when you start running multiple services all hitting the disk at the same time, the performance of your system gets too sluggish to actually use, and that negates the benefits: people get frustrated, they start stopping and starting things trying to get them working, and it just doesn't work. Basically, the faster the disk the better. The NVMe disks, the PCI Express ones, are the fastest thing you can get in a laptop at the moment and they're absolutely tremendous for this sort of thing. I'm only running on a standard SSD here and it's fine for this level of development, but the more advanced your application, the more benefit the faster disks give you.

The other limit is your imagination. There's lots you can do with this; all I've really done here is scratch the surface and show a starting point of what's possible. If you can dockerize your entire environment and give it to a developer on a laptop, it makes their life a lot easier, and it actually makes developers happier, because they know they're not breaking others, they can take their laptop away, and it doesn't matter if your network goes down because they can still work and try things out. It's just a really powerful thing to do within an organization.

One thing I wanted to pick up from earlier: we had a question, from Michael I think, about overlay networking in boot2docker. With this approach you can easily run multiple virtual machines on your laptop, and if the guest operating system supports the overlay network, containers running in different virtual machines can communicate over that network, which the Rancher agents manage for you. Unfortunately boot2docker is missing some kernel modules it isn't compiled with, and that stops the overlay networking from working. I think we're going to try and put in a pull request to get that changed, but at the moment boot2docker doesn't support it.

Fantastic, Chris, thank you for demoing all of that live and showing how to run this locally. A couple of questions came in. One was from Michael, the same person who asked about overlay networking: what's the best way of adding a Rancher host to a Rancher server in a scripted way, is there an automated approach you'd recommend? At the moment it's not straightforward. Once your Rancher server has started, you have to go in, create your API key, and set the host registration URL. You can retrieve that registration URL programmatically, but at the moment you physically have to click through those steps once. Bill, who works for Rancher, pointed me at a Go application he wrote that does some of this in an automated way, so it is possible, but I haven't scripted it myself yet; I'm sure we can get some of that code from Bill and show how it does it. It's on my list.
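As a rough sketch of what that automation can look like, here is the shape of it in shell. The endpoint and field names (/v1/projects, /v1/registrationtokens, the "command" field) are from memory of the Rancher 1.x API and may differ in your version, and the server address and keys are hypothetical; treat this as a sketch rather than a recipe.

```sh
#!/bin/sh
# Sketch: scripting host registration against a Rancher 1.x server.
RANCHER_URL="http://192.168.99.100:8080"   # hypothetical server address
ACCESS_KEY="xxxx"                          # API key pair created once in the UI
SECRET_KEY="yyyy"

# Find the default project/environment id.
PROJECT_ID=$(curl -s -u "$ACCESS_KEY:$SECRET_KEY" "$RANCHER_URL/v1/projects" \
  | jq -r '.data[0].id')

# Ask the server for a registration token for that project.
curl -s -u "$ACCESS_KEY:$SECRET_KEY" -X POST \
  "$RANCHER_URL/v1/registrationtokens?projectId=$PROJECT_ID" > /dev/null
sleep 2   # the token takes a moment to become active

# The token exposes the full `docker run ... rancher/agent ...` command;
# run that on the machine you want to register as a host.
AGENT_CMD=$(curl -s -u "$ACCESS_KEY:$SECRET_KEY" \
  "$RANCHER_URL/v1/registrationtokens?projectId=$PROJECT_ID" | jq -r '.data[0].command')
echo "$AGENT_CMD"    # or: eval "$AGENT_CMD" to register this machine
```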
There was another question, from Michael Walton, asking what the benefits are of using a Makefile for wrapping rancher-compose. Really it's for consistency and to stop me fat-fingering the keyboard: running a 12 or 15 character command line rather than a 50 or 60 character one means I'm not going to make as many typos. That's really it.

By the way, it looks like the questions bar is back; I think I reset it and it seems everyone has it now, so of course the questions are rolling in.

The first one is from Israel: he's curious about how Chris is referencing images like localhost:5000/my-image in the docker-compose files. Is that a way to access an image that has been built on the Rancher server host from the development VM? Yes. By default, if you don't specify a registry when you try to run an image within Rancher, it assumes the image is on Docker Hub and goes away and tries to fetch it, and if you built that image locally it will never find it. A nice tweak to get around that is to tag the image with the localhost registry prefix; then it finds the local image and lets you start it up.

There's another question, from Shirley Yang: with the current setup it looks like it takes about two minutes before you can test your changes. Is there a way to shorten that window, maybe using a webhook or something so the build happens as soon as you push to git? Yes. Like I say, I set my git check to poll every two minutes, but you can set it faster than that, down to seconds, and you can manually run a build whenever you want. You could probably curl that build trigger and have it run as soon as you've done your check-in. The reason I do it with the git folder mapped through is purely so you can click the build manually: if I go in here and hit Build Now, it builds straight away, and because the code is all local it builds in six or seven seconds. To make it even faster, you would update your Jenkins job to refresh the actual services within Rancher as well, so if you don't care about the rolling upgrade you could have it tear the stack down and start it up fresh every single time.
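For reference, a minimal sketch of the "curl that build trigger" idea. The Jenkins URL, job name and token here are hypothetical; the endpoints themselves (the Git plugin's notifyCommit and remote build triggers) are standard Jenkins features.

```sh
#!/bin/sh
# Sketch: kicking Jenkins immediately instead of waiting for the 2-minute poll.
JENKINS_URL="http://localhost:8081"

# 1) Tell the Git plugin that a repository changed; any job watching it will
#    poll and build right away. Works nicely from a git post-commit hook.
curl -s "$JENKINS_URL/git/notifyCommit?url=file:///code/voting-app"

# 2) Trigger one job directly (requires "Trigger builds remotely" plus a token
#    on the job; with CSRF protection enabled you would also need a crumb).
curl -s -X POST "$JENKINS_URL/job/voting-app/build?token=MY_TOKEN"

# 3) Parameterised variant, e.g. letting a tester pick a feature branch.
curl -s -X POST \
  "$JENKINS_URL/job/voting-app/buildWithParameters?token=MY_TOKEN&BRANCH=feature/new-title"
```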
Great, here's another one from Mikael, kind of a follow-up: is it possible to create the key through a docker exec or similar? Off the top of my head I don't know; Will may know the answer to that one. Will, are you unmuted? Yeah, the question was whether it's possible to create the key through a docker exec or similar; I'm assuming you're talking about creating the Rancher agent key? No, I think what he wants to know is this: when you spin up a fresh Rancher server you don't get an API key created by default, so it would be nice if we could pass one in as a parameter when the server spins up, just for dev. At the moment I don't think there's a way to do that. Right, we don't have a way for you to generate your own keys prior to installing Rancher; you have to generate them afterwards.

Excellent. Another question, from Dennis: is there an example of the Makefile on the blog, and if not, could you post one of the Makefile examples? Yeah, we can put one together. Like I say, it's fairly crude; in fact I'll show you the actual Makefile, it's very basic. The command I'm running is this one here. Literally it removes some YAML, my Jenkins compose and whatever, it just removes the YAML that you pass in as a parameter, and then it calls rancher-compose. This rancher script that it uses just sets the environment variables with the API key, so the secret and access key, and then runs against the URL for the actual Rancher server, which again is a hard-coded IP address for that server, which makes life a lot simpler. And then it just specifies the two YAML files. That's it; like I say, it was laziness, so that I can type the short command rather than having to type all of the long one.
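As a rough sketch of what that wrapper does (Chris uses a Makefile; this is the equivalent written as a shell script, with hypothetical file names, keys and server IP):

```sh
#!/bin/sh
# Shell equivalent of Chris's Makefile target. rancher-compose reads these
# three environment variables, so the "rancher" helper only has to export them.
export RANCHER_URL="http://192.168.99.100:8080"    # hard-coded dev server IP
export RANCHER_ACCESS_KEY="xxxx"                   # created once in the Rancher UI
export RANCHER_SECRET_KEY="yyyy"

STACK=${1:-voting-app}

# Optionally clear out stale generated YAML for the stack being rebuilt
# (mirrors the rm in the Makefile), then bring the stack up from the two files:
# docker-compose.yml for the services, rancher-compose.yml for scale/LB/health checks.
rm -f "${STACK}-generated.yml"
rancher-compose -p "$STACK" \
  -f docker-compose.yml \
  -r rancher-compose.yml \
  up -d
```

The whole point is simply that typing `make voting-app` (or `./up.sh voting-app` in this sketch) replaces the 50-or-60-character rancher-compose invocation.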

A question related to Shirley Yang's earlier one: what if we don't want to commit to GitHub but still want to test our changes; any thoughts on that? That's what I just demoed, really, just clicking Build Now. Let's quickly go in: I'll change the title of this page and save it. Because my local Jenkins is talking directly to my local git repo, if I grab that window, and I'll make it a bit smaller so you can see, it currently says "in versus out". That's built, so if I now upgrade my voting app it's just going to pull in and build from my local repository, and as soon as I hit that, one of the new containers comes up. Hmm, did I definitely save that? Obviously not: it should have updated my title here. Oh, you're right, it's the results page, not the question page, good job someone's awake; trust me, about six people had the answer. So if I upgrade my results page instead, it should come through in a moment.

While we wait for that to pop up, a ton of questions have come rolling in, including some on the IRC side in the #rancher channel. One was: is it possible to share the git repo you're using? Yeah, I created it on GitHub so anyone can access it, and we can put it on the blog. Basically it's identical to the Docker example voting app; the only difference is that I've added the rancher-compose.yml that adds in the load balancer and the scale for the actual voting app. If you go into voting, there you go, that's it, and it's open so anyone can take it. Great, when we post the video after this we'll include that in the blog post.

Michael Walton asked if there's a way to run the Rancher load balancer container in a stack locally with docker-compose. Sorry, I don't quite understand the question. I think the question is: when you bring up your docker-compose file, can you also launch a Rancher load balancer as part of it? The answer should be yes. The only change I've made from the Docker voting app is exactly that: if we go into the actual stack here and look at the config, what I've done is set up this load balancer, that's really all I've added, plus the scale for the voting app.
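For anyone recreating that, a minimal sketch of what those two extra pieces can look like. It is written from memory of the Rancher 1.x compose format (the rancher/load-balancer-service image name and the load_balancer_config key may differ between versions), and it uses the localhost:5000 image prefix trick described earlier; service names are examples.

```sh
#!/bin/sh
# Sketch of the only additions to the stock Docker voting app: a Rancher load
# balancer in docker-compose.yml and scale/LB settings in rancher-compose.yml.
cat > docker-compose.yml <<'EOF'
vote:
  image: localhost:5000/vote        # built locally by Jenkins, not pulled from Docker Hub
votelb:
  image: rancher/load-balancer-service
  ports:
    - "8080:80"                     # laptop port 8080 -> container port 80
  links:
    - vote:vote
# (other voting-app services omitted for brevity)
EOF

cat > rancher-compose.yml <<'EOF'
vote:
  scale: 2                          # two front-end containers behind the LB
votelb:
  scale: 1
  load_balancer_config: {}
EOF
```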

There was a question from Ian: would you advise against mapping volumes so that changes in code are reflected in containers instantly? OK, let me repeat the question: would I advise against mapping volumes so that code changes are reflected in the container instantly, in other words mapping the actual code through into a running Docker container. I think it's whatever gets the job done, really; I like to look at these things quite pragmatically. If you've got code that doesn't require compilation and can just pick up changes on the fly, I don't see any reason why you couldn't run up a Docker container, map the folder from your git repository into the correct place, and have it run; I don't see any issue with that. This is all about making devs more effective and more efficient, so if that's a way to make it more efficient for you, go for it.

Got it. Someone asked me to post the link to the earlier blog; I just sent that out in the questions tab, so if you're looking for it you can pull up the blog Chris posted earlier this week. We probably should have sent that out earlier. Will also asked a question, and I'll read it out: would you say this setup is more for local CI/CD or just for local dev? It seems like there are other options for just doing dev, like plain old docker-compose. Any thoughts? That's where I started, with docker-compose, and what it didn't give you was things like the load balancer, and the acceptance from the devs, because it's a lot harder to troubleshoot, especially when you start having to orchestrate lots of different containers; it just became easier to pull in something like Rancher. You don't need the Jenkins piece at all, that was just to show the art of the possible; you can take this and do what you want with it, your imagination is the only limitation. I liked using Rancher because it gave me visibility, and it gave me adoption from the devs, who didn't adopt when all we had was a set of compose files and some scripts. It's whatever works within the organization. I know teams doing this use the catalog really heavily, so the catalog becomes a central place people pull from and deploy with: a private catalog where, as a starting point when a new user gets their laptop, they join the project and deploy a number of services from it.

OK, a couple more questions and then we're going to jump over to Mark; I think these are probably the last two. This one is from David: so the developer's git repo is mapped into the Jenkins container home folder to allow changes and manual builds to be applied before pushing to GitHub for the automated build, did I read that right? Yes, that's exactly it: it is mapped into the Jenkins container so you can just run that manual build, as I showed by changing the title of the results page.
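A minimal sketch of that bind-mount pattern, covering both Ian's case (code mapped straight into a running container) and David's (the repo mapped into the Jenkins home); the images, paths and ports here are hypothetical.

```sh
#!/bin/sh
# Sketch: bind-mounting the local git checkout so edits show up without a rebuild.

# Interpreted code (Node, Python, ...) can run straight off the mounted folder:
# edit locally, refresh the browser, no image rebuild needed.
docker run -d --name result-dev \
  -v "$HOME/code/voting-app/result:/app" \
  -p 5001:80 \
  node:4 sh -c "cd /app && npm install && node server.js"

# Same idea for the local Jenkins: the checkout is mounted into the Jenkins
# home, so "Build Now" builds whatever is on disk, committed or not.
docker run -d --name jenkins-dev \
  -p 8081:8080 \
  -v "$HOME/code/voting-app:/var/jenkins_home/workspace/voting-app" \
  jenkins
```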
Then there was another question, from Ray: during an upgrade you spin up a new container, right? Does that mean there's any downtime, and what if the new container takes a long time to spin up? This again is where the Rancher load balancer can take care of some of that pain, in that, based on how you've set things up, it defines how you spin up the new container. What happens is it takes a container down, spins up a new one, and waits for that one to pass its health checks and start receiving traffic before it moves on to the next one. You can set the size of those batches, whether it takes down two at a time or three at a time. Obviously if you've only got a single instance of a service it can't do a lot for you, but the load balancer gives you the ability to do a rolling upgrade with zero outage. Correct.
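A sketch of the rolling-upgrade knobs being described. The flag and key names here are from memory of rancher-compose 0.x as used with Rancher 1.x (batch size, interval in milliseconds, confirm, and the start-first strategy), so confirm them with `rancher-compose up --help` on your version before relying on them.

```sh
#!/bin/sh
# Sketch: rolling upgrade of the vote service, one container at a time, with a
# 2000 ms pause between batches; the LB only sends traffic once health checks pass.
rancher-compose -p voting-app up -d --upgrade --batch-size 1 --interval 2000 vote

# When you're happy, keep the new containers (the "Finish upgrade" step in the UI):
rancher-compose -p voting-app up -d --upgrade --confirm-upgrade vote

# Alternatively, declare start-first behaviour so the replacement comes up
# before the old container is stopped (avoids the brief outage seen in the demo):
cat >> rancher-compose.yml <<'EOF'
vote:
  upgrade_strategy:
    start_first: true
EOF
```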

All right, I think now we need to keep moving and switch everything over to Mark. Chris, thank you again for sharing so much; this was great, and I really appreciate you taking the time to put the slides together and tell this story, because it's something a lot of people are starting to think about. Marc, it looks like you're on mute if you want to unmute. All right, I'm making you the presenter, so you should now see a big button to share your screen in the middle of the page; as soon as you do that I'll let you know when we can see it.

And just to introduce him again: Mark is a consultant who works at one of the largest healthcare organizations in the world, where they've been implementing this approach, and he's speaking here in his role as the head of his own consulting company, which makes it a little easier to share this without going through all of the approvals to get it signed off by the healthcare organization. So Marc, do you want to introduce yourself and walk us through your experience?

Can everybody see my screen? Great. I'm the DevOps lead, and I was working with Chris in his previous incarnation. In this rather large organization we've had a lot of teams trying to get what I would now call dev-in-a-box working really well. We've also been trying to work towards the Spotify model, which in many organizations is a bit of an ideal that isn't realised very well, or there are problems along the way. We've got a lot of services and apps we're trying to run, and just like Chris was saying, we've been trying to get the devs, testers and QA folks all having the same stuff available to them. Whether we're developing, testing, or deploying to path-to-live environments and production, we want everybody working on the same tech stack, because that's one of those goals everyone has been chasing for a few years: if the tech stack is the same, we can hit problems early. I've been working on this for quite a long time now and we've found big gains, so I was very happy to talk to everybody about what we've found and a couple of key things we've learned along the way.

Mark, you mentioned the Spotify model there; can you elaborate on that a little, because I'm not sure everyone knows what it is. Sure. It's what people talk about these days where you have a set of people, including some developers, maybe an architect, maybe a QA person, closely coupled with a product owner and a couple of your platform or infrastructure people, who together can take a feature from inception all the way through to production. In a bigger team of say 40 or 50 people like mine, you've got a number of such teams within your Scrum or Kanban or whatever you're calling it these days, all trying to get features through to production without blockages. For example, in a previous job we had to cut some code, test it locally, and then hand that code over to a server team who would set it up on the path-to-live servers for us; we couldn't do it ourselves, and it was a slightly different tech stack because they had a slightly different configuration of Oracle or this or that. The Spotify model is really about getting from inception all the way through to production without any blockages, given a small number of people: you enable people to pick up the user stories and features you want to develop and push them all the way through by being tightly coupled together. I've found in many places I've worked that this has been a great idea but a bit tricky to do, and for us, funnily enough, dev and Rancher in a box has been part of solving that problem of getting the dev and the DevOps co-located and sharing the knowledge.
That's great, thanks Marc. OK, so again this reiterates, I think, the experience of many other people. We wanted to get our tech stack the same for everybody, because if you're using the same technologies everywhere you can hit problems early. Something particularly relevant for us is session management: where you've got multiple servers behind the load balancer, like Chris showed, it has historically been quite difficult for me to prove that it will work before putting it onto a bigger, beefier server, then finding problems that are very hard to debug locally.

This has really been transformative for me: I can scale things with a click of a button in the Rancher GUI, and straight away I'm testing my session stickiness, whether it works, and whether or not I've managed to get away from needing sticky sessions at all. Starting off with Docker scripts is great, being able to run a few small containers and link them together, but I soon found that gets really out of hand. Eventually you want to move to docker-compose to integrate your services, to be able to say this one talks to that one and so on, but that still doesn't deal with scaling. We really wanted to manage Docker properly, which for us means: we need ten of these containers, eight of those and seven of those, in a clustered, ready state, and how do we make sure all of that is going to behave? Then, thanks to people like Chris, we discovered Rancher and we really haven't looked back. It's been transformative to have a really easy way for everybody to manage these services. Lots of our testers are quite technical, but they're not DevOps and they're not developers, nor should they be, and this technology has really helped us get the same thing out to everybody: a tester can easily say, I'm going to turn that project off, turn this one on, scale it out a bit and see what happens, and because it's a UI a lot more people can really get hold of it.

This is my own reiteration of Chris's diagram. We've got Alex the admin setting all of this up: our local dev files and scripts come into shares inside our VirtualBox virtualization, which lets us set up the Rancher server VM Chris was describing and the Rancher agent VM, which is the host. Our scripts start all of this up, then we go into the UI, create an API key, cut and paste it into the command line, and that gets written into the agent command used to join the host up; because it's a one-time operation it's really quite easy. We've also got what we're calling our Rancher mirror, though really, as Chris said, it's a Docker registry image running in another VM; by changing the Docker profile file to say "here is a registry mirror", every image pull comes through that mirror. So it's caching Docker Hub for us, not the whole of Docker Hub, only the particular images we need, say Ubuntu 14.04, CentOS 6, a specific Redis version, and so on. This has greatly reduced contention across our internal network: the first time a dev sets this up you can feel a bit of a tug on the network, especially if two or three people do it at once, but thereafter you've got everything locally. And because we tend to pin our versions, specifying the versions of all the software we want rather than rolling on latest, once I've got an image it's good to go. We might then play a story to have one developer try the latest of everything, see if it works, and if it does, re-pin to the new versions. This has massively reduced the problem. For example, we've amended those makefile scripts Chris was showing, and we've got some of ours that say "pull this list of images through the mirror", so I can then go home and build it all out tonight: it doesn't matter if my network is working or not, or is a bit slower at home, because I've already got all those images pulled.
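A minimal sketch of that mirror-plus-pinned-versions setup. The IP address and the image list are examples only; running the registry image in proxy mode and the registry-mirrors daemon setting are standard Docker features.

```sh
#!/bin/sh
# Sketch of the "Rancher mirror" idea: a registry container acting as a
# pull-through cache of Docker Hub, plus a pre-pull of pinned image versions.

# 1) Run a registry in mirror mode on one VM (a standard registry:2 option):
docker run -d --name mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# 2) Point each Docker daemon at it, either with a daemon flag:
#      docker daemon --registry-mirror=http://192.168.99.102:5000
#    or in /etc/docker/daemon.json:
#      { "registry-mirrors": ["http://192.168.99.102:5000"] }

# 3) Pre-pull the pinned versions so evening/offline work is unaffected:
for img in ubuntu:14.04 centos:6 redis:3.0 postgres:9.4; do
  docker pull "$img"
done
```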
And this makes it really easy for the user, Bob, to come in through the load balancer, like Chris was showing, and run my application. I'm just going to reiterate some parts of that in the next few slides, so feel free to jump in if there's anything you want me to go back over. We started with bash scripts building and running Docker containers, we moved to docker-compose, and then the icing on the cake is that rancher-compose scales: it lets us link stacks together really easily and scale them independently behind the load balancers. For me, composition means more reliable scripting, so we use rancher-compose and docker-compose, and that also means we get health checks added in our rancher-compose files so we can tell the health of our services. That, combined with the fact that the Rancher GUI gives us a lot of information up front, means we're well ahead of where we used to be.
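A sketch of the kind of health check being described, added through rancher-compose.yml. The key names (port, interval in milliseconds, thresholds, request_line) are from memory of the Rancher 1.x schema, so check your version's documentation before copying.

```sh
#!/bin/sh
# Sketch: an HTTP health check on a front-end service via rancher-compose.yml.
cat >> rancher-compose.yml <<'EOF'
vote:
  scale: 2
  health_check:
    port: 80
    interval: 2000                   # probe every 2 seconds
    response_timeout: 2000
    healthy_threshold: 2
    unhealthy_threshold: 3           # 3 failed probes and the container is recycled
    request_line: GET / HTTP/1.0     # omit for a plain TCP check
EOF
```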

So, the mirror on the laptop: this has been one of the single best things I've seen in a long time and it has really helped us. Everybody's machine, in my team and in the other teams around me, can now build out this entire system like Chris just did, very quickly, because even those few minutes it took Chris to download the images are gone the next time he runs the demo; it's just local I/O over a fast SSD and, hopefully, a big chunk of memory. We like laptops with 32 or 16 gigs of memory, because any less makes it difficult for us: we've got something like 35 separate containers in a single stack in our system, and lots of those are Java-based, so anything less than about 16 gigs really isn't going to fly. For other systems it's a lot less; we've got one Node.js system with a tiny footprint, and we can use a really small VM for that one.

For testers especially, we've got Jenkins in there, so the CI gets its fingers in all the pies. Jenkins as a job runner just runs bash scripts for us and monitors git for us, which means we can very quickly and easily give the testers buttons to push: a button to pull in the latest code changes, a button to rebuild the stacks, and a button to rerun the Selenium tests. We spin up a Selenium debug container so we can even VNC into it and watch what it does as it does it, and our testers absolutely love this, because they can spend their time having fun, which for most of my testers means destroying the system in unusual and new ways, instead of repeating the old tests they used to have to run time and time again; we can show them it's working, they can VNC in and see exactly what's going on.

Again, this repeats what other people are doing, but we can link the host file system into the dev box and the Rancher box and share files between them, and, as one of the questions was saying, yes, of course you can link your code directly into a container running Node.js or Python, get your just-in-time compilation happening and get pretty much instant results, in the same way that once upon a time we all connected Eclipse to our source code and it compiled on the fly; now we can do it in a full-stack environment where the code is also talking to our other containers, Postgres, Redis and so on, which makes the whole thing very smooth. Sidekicking a data volume into Jenkins has been a real big win, because then its state is persistent and that sidekick can have our code linked into it. That means, for example, I can have code I'm testing one way locally, and if I want to I can choose to commit it, drop it into the mainline, build it up in Jenkins and run it that way, or, as one of the questions said, run it instantly straight from the code. All of these things are possible, and this has been a real facilitator for us.
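A sketch of that sidekick pattern: a small data container scheduled alongside Jenkins, whose volumes the Jenkins container mounts. The io.rancher.sidekicks and start_once labels are the Rancher 1.x conventions as I remember them; image and path names are hypothetical.

```sh
#!/bin/sh
# Sketch: "sidekicking a data volume into Jenkins" in Rancher 1.x compose terms.
cat > docker-compose.yml <<'EOF'
jenkins:
  image: jenkins
  ports:
    - "8081:8080"
  labels:
    io.rancher.sidekicks: jenkins-data   # deploy and upgrade the pair together
  volumes_from:
    - jenkins-data                       # Jenkins home survives container upgrades
jenkins-data:
  image: busybox
  command: "true"
  volumes:
    - /var/jenkins_home
    # the local code checkout can be bind-mounted here too, e.g.
    # - /home/dev/code:/var/jenkins_home/workspace/code
  labels:
    io.rancher.container.start_once: "true"
EOF
```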
Our testers are also slowly learning that we can easily switch between projects: we don't have to build this system, run it, then build that system; we can build them all in Rancher, and even if my laptop won't cope with all of them at once, I can turn off the stacks I don't want today. Project one can be tested today, then I can turn those stacks off, turn the other ones on tomorrow and come back again, and that means lots of projects can be up and ready in my containers. We're also using Alpine for more and more things as the base of our containers to make them smaller and save disk, which all really helps, and again there's the easy redeploy from the latest code. So, Rancher plus plus: that's really how we feel about it; it has added a lot on top of what we had before. Managing containers is a pain; we can solve that, so we give it to everybody. And again, the testers, the slightly less technical people, really like the UI, because the default Rancher UI shows us memory, I/O and network traffic on every container.

you’ve got no i/o happening and all the memory of that container is through the roof right okay let’s drop that stack and recreate it and the problems generally gone away or maybe actually no there is a bug in the code there and you’ve just helped us find it really quickly and we can very easily test multi stack systems so we can scale out all our containers to two and then test all the sexual behavior is right and that’s something that in the past again I found many systems didn’t deal with well until you put it in past the life kit which were bigger and running four or five of your server so we can now hit that early and then past the live and production releases again smoother and smoother and quicker and quicker because we’ve hit these problems early and I got in in this slide a specific mention of Chris’s localhost cone on five thousand slash trick so all our dev in a box docker compose files have this is the registry address because we’ve got one dev host we build all the images directly into it and therefore this allows Rancher server to tell the agent here is your image use this one because that then means we don’t need the overhead of running a register the full registry locally you can but we’ve don’t have to and then we just push certified builds that our testers had to look at into our path the library registry and then that’s the goes up stack and so again littering what other people are said but I’m really really on this we’ve got the same deployment and management tools as Peter Lorre production and before rancher and doctor I never really had that always got close but it was never quite the same couldn’t scale stuff and these VMs to me all have a low overhead you take the low overhead of docker ization containerization and for us now also you take the low disk space of say images based off of Alpine and we’re getting a really really neat solution to these sorts of problems have any questions coming on this stuff or shall I hand the and they’re controlling back I think there was there was a question they came in a bit earlier I think would be would be helpful if you would talk about it a bit it was really about kind of you know the the testing element you were talking about there so so when you when you transition this over to testing how much you know how much training did you have to give the teams you know kind of your QA teams and others on using this was it may be just across the board was there a lot of training required or was a pretty intuitive given that you’d already authorized your applications people to pick this up very quickly well the main knowledge gap has been that lots of the testers don’t understand and don’t fill out the need to understand containerization I’m kind of with them on that they want a running application and they want to poke it and destroy it and tell you what you missed and for me that’s where some of my the testers that I work with a particular exer their skills makes the application but they don’t know how to tear it apart so for us giving them an easy way to say I want to turn this application on I want to hit a button in the Jenkins UI and have it update to the latest code on this feature branch because we’ve parameterised our Jenkins jobs with the branch name that it wants to check out rebuild the stack can have a look it’s made it really easy for them to hop in different feature branches the different developers work and even as I said by turning certain stacks on and off they’ve got a button where they can say stop that projects that 
And, as I said, by turning certain stacks on and off they've got a button where they can say "stop that project's stack, start this project's stack", and then they know they can run off and test the new project; that has been really useful. Lots of testers are used to using quite complicated tools for some of this, and I've found the dividing line is often that the testers want to use certain tools, the developers want to use some code, and the DevOps folks want to use a command line. Well, all three sets of people can still use their preferred way of working, but it all joins together with one tech stack. We check it out on each person's machine in a few different ways, we run a couple of scripts, and then that person is up and running with the whole lot. It's very much like what Chris just showed: you have a script that creates your initial virtual machines with a few tricks in there, and we update the Docker profiles so they all talk to each other on fixed IP addresses.

We change someone's host file so it's got some fake domain names in it, hit a button in Jenkins to put the stack up, and bingo, they're testing a project for you. The testers don't care how that worked; they just care that they can go into Jenkins, change the feature branch name they've just been emailed or Slacked by somebody else, hit a button, and test it, while we know they're testing off the same tech stack. And those images are effectively an idempotent product that goes up to path-to-live and then up to production.

Then a question came in from Michael: what challenges have you faced with using Alpine as the container image base? It's so skinny it's anaemic, it has nothing in it. Something I like about some of the images you get on Docker Hub, say the Debian-based ones, is that you've got that Debian toolkit starting point from which you can easily install Java, because it's got bash and curl and wget. For me personally, as the DevOps lead rather than a hardcore platform person, Alpine goes a bit too far the opposite way: I don't want to know anything about microkernels, I just want it to work, so plain Alpine is a bit too lightweight. Lots of people have therefore gone for an Alpine-based image that has half a dozen dev tools in it, so you can then install the other bits you need, and it still ends up very lightweight. I think, given how Docker's use of Alpine has settled over the last six months or so, we're now in a really nice place where there are a couple of really handy images with just enough in them that you're not installing everything from scratch every time. So that's been the thing for me: I do like having the platform people help you get the right tools into a base image, and from that base image I can then easily write a Dockerfile that puts in Tomcat or some of the other software we use often.
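A minimal sketch of that "Alpine plus just enough tooling" base image. The package choices are examples only, and a JRE package may need the community repository depending on the Alpine version.

```sh
#!/bin/sh
# Sketch: a tiny team base image with the basics, so app Dockerfiles
# (Tomcat and friends) don't start from scratch every time.
cat > Dockerfile <<'EOF'
FROM alpine:3.3
RUN apk add --no-cache bash curl wget ca-certificates
EOF
docker build -t localhost:5000/team-base:1 .
# App images then start FROM localhost:5000/team-base:1 and add Tomcat,
# a JRE, or whatever else the service needs.
```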
Fantastic. I don't know if there are more questions right now. Mark, are there more slides you wanted to cover, or shall I take the presenter role back? Yeah, you can just take it back, I'll pass it over. Well, first of all, thank you so much Mark, I really appreciate you sharing all of that. It sounds like you've really got this going across your team, and from what you were saying it seems to be spreading inside your organization as teams understand the use case and how it's working, moving from project to project, so really great work. And Chris, you were telling me that a couple of the other guys who worked on this with you, helping to put it all together, are on the call as well, if you want to say thanks; I know this didn't take just one or two people. Yeah, thanks to Ian Pattinson, if he's still on the call, and to Marshall, who was obviously involved in the Rancher side of things as well; without those guys I probably would never have got to where I got to.

Fantastic. All right, we're almost at two hours on this meetup, so we're definitely going to try and wrap it up soon. The last thing we want to hit on really quickly is an update on some of the latest Rancher enhancements. The most recent release is Rancher 1.0.1; if you're not using that already, definitely upgrade, as it adds a bunch of improvements around dev support and support for multi-node deployment. Just this week we released 1.1-dev2, which I think should be the final dev release for 1.1, which is scheduled for next month, or maybe there will be one more. Will, are you still on the line, do we still have you on audio? There will be one more: there's a 1.1-dev3 coming, probably today actually, and then there will be one more major release. Gotcha. So, do you want to walk through 1.1-dev2? There are quite a few new things in it people may want to play with. And just in case people aren't clear, there's the stable release branch, which is the one without the dev suffix, and there's the development release branch; because it's all open source we do it all out in the open, so you can choose to pull either of these, whether you want the stable one or the latest, most bleeding-edge stuff, which happens on the dev side. But go ahead, Will, do you want to walk us through dev2?

Sure, so just quickly going down the list for dev2. We've added additional Mesos support; dev1 actually already had Mesos support, but dev2 continues to improve it. We've done things like making sure ZooKeeper runs in clustered mode, so when you launch Mesos through the environment it will be correctly HA if you happen to take down one of the ZooKeeper instances running on the hosts. We've also made changes to improve some of the frameworks, so Mesos is a little better, and we'll continue to improve it until we go GA. The same goes for Kubernetes: when we launched 1.0 we had initial Kubernetes support, but there were quite a few features missing when you launched Kubernetes through Rancher, most notably things like persistent storage and the ingress controller. We've now added the ingress controller and persistent storage; the on-prem registry support is almost there but didn't quite make it into dev2 and will certainly be in the next dev release, and the ability to upgrade the Kubernetes that's already deployed in your environment will also be in the next dev release. Right now in dev2 we have initial support for the ingress controller; I don't believe there's a UI for it just yet, but you can certainly create one through kubectl directly against Kubernetes, and it will leverage the Rancher load balancer services we have today. The last two bullet points: we've internationalized our UI framework, and I believe there's going to be a Chinese translation pretty soon; if you look at the rancher/ui GitHub project there's a readme on this, so if you want to add a language like German or French, the framework is ready and we just need people to help contribute translations, and the UI will support it. And lastly, Bill, one of our developers, put in initial Vault integration for secrets management; it's very experimental right now, Bill tells me, but we'd love feedback if you could play with it and let us know whether it works well. Great, thanks Will.
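For anyone wanting to try the ingress controller before the UI lands, a sketch of creating one with kubectl. This is a generic Kubernetes ingress manifest of the 1.2 era (extensions/v1beta1); the service name and host are hypothetical, and per Will the resulting ingress is backed by Rancher's load balancer service.

```sh
#!/bin/sh
# Sketch: creating an ingress directly with kubectl while there's no Rancher UI for it.
cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vote-ingress
spec:
  rules:
  - host: vote.local
    http:
      paths:
      - path: /
        backend:
          serviceName: vote
          servicePort: 80
EOF
kubectl get ingress vote-ingress
```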
A couple of questions rolled in. One was about the releases: are the -dev releases guaranteed to always upgrade to production releases, or should we always test the upgrade beforehand when moving from a -dev release to, say, 1.1? So, our dev releases will always upgrade to the stable release; the stable releases are basically anything without the -dev suffix. All stable releases will upgrade to the next stable release, and all dev releases will also upgrade to the stable release. What we won't support, or won't heavily test, is going from a stable release to a dev release; that's the only upgrade path we won't have time to test ourselves. So stable to stable, and dev to stable, are the cases we will ensure work.

Cool. There's another question about the Rancher-to-F5 integration that recently came out in the catalog, which Alena blogged about a couple of weeks ago. Michael was asking if there's any news on adding support for AWS ELB as well, so both F5 and ELB integration; can you comment on that? Yeah, I think we actually have an engineer already looking at this. I can't give an exact deadline, but it's definitely in the works right now; that's the next thing we want to add. Do you expect that would be in the 1.1 timeframe, or, since it's a catalog item, is it not really tied to a release timeframe? Exactly, it's a catalog item, so it's not really tied to a release. Great, so the feature will just show up when it's done; Michael, if you shoot us a note we'll let you know a more exact time as it gets closer.

OK, I think that's all the questions we have; thanks for walking through that, Will, and thank you everyone for attending. If you're new to Rancher and haven't seen how to get started, feel free to go to our GitHub page or to rancher.com, where you can find all sorts of information about how to set up and download these services. You'll also find links to our forums, GitHub issues and IRC channel, all different avenues to get help if you're running into challenges, whether you're using the product or just getting set up. We also run a monthly training webinar that usually gets a very large crowd.

It basically covers the same topics every month, so whereas these meetups are always covering new topics and progressing the discussion, if you have team members who just want to get trained and go through a good hour and a half of getting started with Rancher, you can find the link to training on our home page and just sign up; it's free and it's a great way to get yourself educated. I should also mention we're hiring a lot, really all over the world: we're hiring field engineers, people to join our DevOps team, developers, and people in sales and marketing, so there are lots of opportunities. If any of you love Rancher and think working on it might be a fun job, feel free to reach out to us, either on our careers page, in IRC or Slack, or by emailing info@rancher.com; we'd love to chat, and we're always looking for great people to join the team. Lastly, if you're looking to get your feet wet, there's a really great ebook that Usman and Bilal developed on using continuous integration; that one is more about production deployment and talks a lot about how to build a fully immutable CI process to push containers into prod. It's on our homepage as well and you can pull a copy; it's a great starting point for understanding the flows if you're not already there.

Otherwise, thanks again to Will, to Chris, to Mark, to everyone who spoke today and to everyone who asked questions. Sorry for all the confusion with the questions tab, but it eventually cleaned up and we had our normal flood of awesome questions from all of you. Next month's meetup will be about Mesos: an introduction to running Mesos and deploying workloads leveraging Mesos and Rancher, so we'll be jumping into that, and then hopefully we'll cover some of the topics that came up in our earlier ask and get the rest of them scheduled. If you're ever interested in presenting or speaking, these meetups are meant to be open; we've had the folks from Sony and a bunch of other people come on and share what they're doing, and we'd love to have guests join us, so if you've got a cool project or something you could share with the community, please suggest it. Otherwise, thanks to everyone, and we'll hopefully see you next month. Bye, and thanks again everyone.