How to Build and Run Node Apps with Docker and Compose

(soft upbeat music) >> Right, hello DockerCon! My name is Kathleen Juell and I am a developer at DigitalOcean. Primarily I work on our Community web property, doing a mixture of full-stack application development and managing our pipeline and deploy process via Concourse to Kubernetes. So today I'm here to talk to you about how to build and run Node applications with Docker and Compose.

But first, just a bit of background on this talk and its topic, by way of DigitalOcean's Community. Some of you may be familiar with the Community platform already: we have a wide range of tutorials, questions and answers, and developer tools designed to spread awareness and knowledge of open-source technologies. The basis for this talk is actually a series that lives on Community called "From Containers to Kubernetes with Node.js." The motivation for this series was my sense that there's often a divide between guides that discuss containerization and deployment and those that discuss application development, so I was interested in how a full-stack series would do: a series that brought those two things together. And just quickly, I want to mention that we now have a wonderful ebook version of the series that you can download for free from Community. Finally, one more resource: I have a series called "Rails on Containers," also on Community, that aims to do much the same thing as the Node series, basically taking a full-stack approach to application development and containerization.

Okay, so today I'm here to talk to you about building and running Node apps with Docker and Compose. First I'll talk a bit about what goes into building an image, then I'll talk about how to wire up a development setup with Compose, and then finally some things that you'll want to think about when you're setting up for a deployment.

But first, some hot takes. In all seriousness, before launching into any specifics, it's worth spending a minute getting reacquainted with the problem containers are designed to address. This slide, for example, shows a Flickr-like application that has a user management piece, a photo management piece, a database adapter, and a front-end piece. The application gets loaded as a whole onto a virtual machine, so scaling involves provisioning more machines, and if any part of the application changes, the whole thing needs to be reloaded. Things like application-level upgrades become tedious and error prone, with lots of "works on my machine" ambiguity thrown in for fun.

In a microservice-based architecture, however, we split the app up into microservices, which are collections of loosely coupled services that each perform a single function. Containers are the underlying foundation for this architecture: they make it possible to manage groups of identical workloads, or deployments, and endpoints that expose groups of containers, or services. As you've probably gathered from this image, these are key concepts when you're working with container orchestrators like Kubernetes.

Okay, so today we're going to talk a bit about what you can do with containers specifically. Before we get into any code, it's worth taking a minute to talk about containers and virtual machines together. A good example of a virtual machine would be a DigitalOcean Droplet, a remote server. These servers allow you to run multiple full systems on a single physical host, and a hypervisor manages the multiple running machines and shares hardware resources between them. This is great because it allows for application sandboxing and versioning, and it's way more efficient than running several physical hosts, but you still have some bloat, right? Like a full operating system.

Containers are like virtual machines, but they provide some additional advantages: they accomplish the goals of sandboxing applications and providing consistent, reproducible runtime environments much more efficiently. Running containers means that you don't need a full operating system just to contain a runtime, so Docker container image files are generally much smaller than virtual machine files, the spin-up time for containers is generally much quicker, and they tend to be more performant than virtual machines. And finally, there are lots and lots of prebuilt, preconfigured images available that you can use that are officially maintained: nginx, Python, Node.js, et cetera.

So now that we've covered some meta-level stuff on containers, let's drill into how you might build an image for a Node application in particular. First you want to think about building your base and some of the choices that go with that. You can find a list of images, along with explanatory resources for officially maintained images, on Docker Hub. In our case, we're going to use an Alpine image for our Node base, and using an Alpine image or a slim image is a great way to minimize the size of your final image. Some things worth keeping in mind when using Alpine images are that package availability and compatibility with other systems might be different from what you expect. Alpine uses the musl library, while many other distros like Ubuntu use glibc, so depending on your needs, that could complicate things for you. There have also been differences in how these libraries handle DNS resolution, which can really matter in a Kubernetes environment, for example. For this image, we're going to use the Alpine Node base because none of those things affect us.

So first, once we set our base with the FROM instruction, we can add our container-level dependencies. The application that we're building here doesn't have any additional requirements, but let's say we wanted to add a package so we can interact with our application files on the container; we could then add a RUN instruction to add that package. In cases where we needed multiple packages, we can chain these dependencies into a single RUN instruction. This will help us keep our image layers to a minimum and decrease our overall image size. Chaining the package index update to the add instruction, as we've done here, will also prevent unintended consequences. Let's say our update image layer is cached, for example, but a new package is added to our application: that could cause us problems, but if we chain things together, it'll let us bust the cache appropriately. The --no-cache flag here prevents the package index from being cached locally, so we don't need to add additional instructions to clean it up following our package installation.

Next we're going to set our working directory and user. Here we get to take advantage of the fact that our image base has a node user that we can use to avoid running our container as root. In the same way that you'd want to avoid running processes on a virtual machine as root, it's a good idea to restrict privileges in a dockerized environment by running processes as a non-root user where possible.
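Pulled together, the opening of a Dockerfile along these lines might look something like this. This is a sketch of what's being described, not necessarily the exact file from the talk: the nano package is purely illustrative, the node:10-alpine tag is an assumption, and /home/node/app is one conventional choice of path.

```Dockerfile
# Small Alpine-based Node base image
FROM node:10-alpine

# Hypothetical container-level dependency; chaining the index update
# into one RUN with --no-cache keeps layers and image size down
RUN apk add --no-cache nano

# Create the app directory up front and hand it to the non-root
# node user that the official image provides
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app
USER node
```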
Our next step will be copying our application code over and installing our project-level dependencies. When copying your code, again, it's a good idea to think about cache busting: what we don't want is for our node modules to be rebuilt any time we make a change to our application code, unless we're actually changing our dependencies. If we separate out a COPY of package.json and package-lock.json from our application code COPY, we can avoid that situation. This is a great pattern to follow with other stacks too; for example, if this were a Rails app, you would definitely want to do that with your Gemfile. Here we can also use the --chown flag when we're copying our application code, to set the appropriate permissions on the code for our node user. And finally, you can see that we're copying the application code from the root of our project over to the working directory that we've specified above.

Next we'll add our EXPOSE and CMD instructions. With EXPOSE we are indicating what port the app will be listening on for connections, as shown in the sketch below.

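Continuing the Dockerfile sketch, the dependency-caching COPY pattern and the closing instructions might look like this. Again this is an approximation of what's being described; the port and the app.js filename follow the talk.

```Dockerfile
# Copy the dependency manifests separately so the npm install layer
# is only rebuilt when the dependencies actually change
COPY package*.json ./
RUN npm install

# Copy the rest of the application code, owned by the non-root node user
COPY --chown=node:node . .

# The app listens on 8080; run it with the project's app.js
EXPOSE 8080
CMD [ "node", "app.js" ]
```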
And then as our last instruction, we're going to specify a command to run the application with. In this case it will be node app.js, which uses our project's app.js file. If we needed to provide a greater level of specificity here, for example if we had some tasks that we needed to perform once our dependencies have been installed and our code copied, we could also add an entrypoint script, or an ENTRYPOINT instruction that points to a script that could accomplish those tasks for us. To build out your scripts, you can always look at the source code for any image to see what the default command for that image is, and then write your scripts and commands accordingly. In this case, the image's default command is node.

Okay, so we can interact with containers in many ways. Some things that we're going to do: we're going to build the image using the docker build command. Any time you build images, you can always look at them with the docker images command, which can be very useful if, say, you were experimenting with different bases or you wanted to implement build stages to minimize your final application image size. We're going to run our image with docker run. We'll be using the -d flag, which will run the container in the background, and the -p flag, which will publish the port on the container and map it to the port that we specify on the host, which in our case will be 80. We can always look at all our containers with docker ps, and the -a flag will give us everything, even things that have stopped. Then docker logs gives us our logs, and we can always exec into a container using docker exec. Here is an example of the command that we would use if we wanted to exec into our Node application container and get a running shell to it: because we're using that Alpine image, it uses the Bourne shell, so this is the command that we would use to get that shell.

All right, so demo time. I'm going to clear all that, and you can see that in our directory we have a Dockerfile already. It has the default name, so when we build we're not going to use the -f flag, which we could use to specify a differently named Dockerfile. So we're going to build, we're going to tag this, we're going to call it docker-demo, and we're going to specify the current directory as the build context. You can see that I've cleared my build cache, so we're getting everything fresh. All right, cool. So now we can run this: docker run, we're calling this docker-demo, I'm going to run it in the background, publish container port 8080 on port 80 on the host, and we're using the docker-demo image. Okay, so now we can go over to localhost, and we can see our shark application. Looking good. All right, one final stop: we're going to look at our containers and we're going to stop the one that we just built, now that we know it's working, and we'll remove that as well. Okay, cool.

All right, so we now have our application running, or we've seen how we can do that. But as you probably saw on our application form, we have what looks like input fields, so we're going to need to persist some data. In order to do this, we're going to have to add a database service to our setup. We can do this using multiple containers with a tool called Compose, which allows us to define multi-container setups. So we're going to walk through how we could wire up a database service with MongoDB to persist our precious shark application data.
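Before we move on, here is roughly what that single-container workflow from the demo looks like as shell commands. This is a recap sketch: the image tag matches the demo, and the container ID placeholder is illustrative.

```sh
# Build and tag the image, using the current directory as the build context
docker build -t docker-demo .

# Inspect local images
docker images

# Run in the background, mapping host port 80 to container port 8080
docker run -d -p 80:8080 docker-demo

# List all containers (including stopped ones), view logs, get a shell
docker ps -a
docker logs <container_id>
docker exec -it <container_id> /bin/sh

# Stop and remove the container once we know it works
docker stop <container_id>
docker rm <container_id>
```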

First though, we want to take care of a few things on the application side to ensure that things run smoothly when we add Compose to our setup. Before we add any code, it's worth thinking about what Compose is doing and how that might affect our application code. A service in Compose is an abstraction that allows us to point to a running container, so using Compose and architecting our application as a collection of services will bring it in line with twelve-factor principles. Before we set up a Compose file, it's worth thinking about the work that we need to do on the application side with that definition of a service in mind, and with those twelve-factor principles in mind. The principles that matter to us here are storing the config in the environment, separating it from our code, and treating backing services as attached resources. We're not running Mongo locally, right? We're not working on a virtual machine that's running Mongo, and we're not working with an assigned database host, like a separate virtual machine running Mongo, so we need to make sure that our code can work with a database host that's dynamically assigned.

For example, if we had a Node application that was already using Mongo on, let's say, the same host, we would probably already have database connection information and methods defined in a db.js file. What we need to do is go in, pull those values out of that file, and find a way to pass them in dynamically. Here, for example, is a db.js file that hard codes some connection constants. We can make this dynamic by using Node's process.env property, which returns an object that contains the user environment. So instead of hard coding the Mongo connection details, we can read them from the environment. This means we'll need to set them elsewhere. A local hidden file can be a good way to store secrets apart from your code; be sure, if you take this approach, that you add that file to your .gitignore and .dockerignore files. Here is a .env file with those variables that we saw earlier in db.js. If you're working in an orchestrated environment, I highly recommend a credentials manager like Vault, which can obviously provide way more security than a local hidden file. Okay, so in db.js we're also going to make sure that we add resiliency to our code by specifying some parameters around connection attempts: here we're defining some code that will allow us to set how we try connections and deal with successes and failures. And then finally, in package.json we want to make sure we have nodemon, which will allow us to automatically restart our application when we make changes, because certainly in a development setup we don't want to have to do that manually on the container.

Okay, so now we get to write our Compose file. The first thing that we're going to do is tell Compose what image we're using for our Node.js service. Here, because we're working in development, we're going to build the image locally using that Dockerfile, which is located in the context of our current directory. Next we're going to tell Compose which environment file to use to load that information; in this case, it's just the .env file that we just saw. We also want to make sure that we're specifying the Mongo hostname correctly: in our case it's going to point to our db database service, which we're going to create next. You'll notice that we've added some volumes here as well; bind mounts like the first one are a key part of developing with Compose, because this mount will mount the code in our project directory on the host to the working directory on the container, as in the sketch below.
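A sketch of what that service definition might look like in docker-compose.yml. The service name, variable names, and paths here follow the talk's description and are assumptions; the exact file may differ.

```yaml
version: '3'

services:
  nodejs:
    # Build the image locally from the Dockerfile in this directory
    build:
      context: .
      dockerfile: Dockerfile
    # Load connection settings from the hidden .env file
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      # Point the app at the db service defined later in the file
      - MONGO_HOSTNAME=db
    volumes:
      # Bind mount: work locally, see changes immediately in the container
      - .:/home/node/app
      # Named volume: keep the image's installed node_modules visible
      - node_modules:/home/node/app/node_modules
```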
So we can still work in this local directory, and the work that we do will be available immediately and accessible in the container. However, working with bind mounts can lead to some confusion, so it's worth keeping a few points in mind. Whatever is on the host will hide what's on the container, in cases where they're not identical, right? So the specific changes that you've baked into your container, introducing only what's necessary, or an experimental environment that you're spinning up to see how it'll do, could be overwritten by anything that you have locally. In these cases a named volume can be a really handy tool. Specifically, here we want only the node modules that we've specified for this version of the project to be present on the container.

Or let's say that instead of having a longer-running project where we run into those dangers, we have just cloned this repo and we haven't installed anything locally; we want to make sure that we don't have an empty directory overriding what's installed on the container. This named volume will persist the node modules that we installed with our Dockerfile instruction and mount those contents to the container, which will hide the bind mount. It's helpful to think about doing this with any dependency that you want to version or avoid rebuilding on boot.

Okay, and then a final word about volumes for Mac users. Thanks to the fact that Macs are not running a Linux kernel natively, file system mounts do not have the same guarantees as they do on a Linux system. Fine-tuning your mount consistency between container and host, using things like delegated mounts, is one way to deal with this, and it will make your load times a lot faster.

Okay. We can then add our ports option to map port 80 on the host to port 8080 on the container, and a command that will override the command we specified in the image. In this case, we're running the app with nodemon to ensure those automatic reloads happen after we make changes, and we're also using the wait-for tool, which is a wrapper script that uses netcat to poll whether or not a specific host and port are accepting TCP connections. This is to avoid any unintended consequences if, say, our application were to try to connect to our database before our database startup tasks are complete. Compose also has a depends_on option to ensure order of dependency when starting services, but that order is based on whether or not a container is running rather than on its readiness. The ports and command options are sketched below.
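Continuing the nodejs service from the earlier sketch, the ports and command options might look like this. The wait-for script name and location, and the default Mongo port, are assumptions based on the description.

```yaml
    ports:
      # Map host port 80 to the container port the app listens on
      - "80:8080"
    # Override the image's CMD: wait until the db service is accepting TCP
    # connections on 27017, then run the app under nodemon for auto-reloads
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
```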
Next, we're going to build the db database service. For our database image, we can use the official Mongo image, rather than building our own locally and then pushing it up and pulling it back down, because we don't need to do anything specific to the image. In our environment option, we're making use of the variables that we're loading from the environment file, but also making use of some of the variables that Mongo provides for us out of the box. MONGO_INITDB_ROOT_USERNAME will create our user with root privileges, defined in the admin authentication database, and MONGO_INITDB_ROOT_PASSWORD will set that user's password. In cases where we wanted a user with restricted permissions, for example, we would want to create a script to accomplish that and then mount that script into the docker-entrypoint-initdb.d directory on the container. Finally, we're going to add a named volume to persist our application data so that it's not lost between container restarts, and then at the end of the file we're going to add a top-level volumes key for these named volumes, as in the sketch below.

Okay, again, there are many different ways that we can interact with our containers and services. We're going to use docker-compose up -d to build our containers and services and run the containers in the background. We can always list everything with docker-compose ps, we can get our logs with docker-compose logs, and again we can exec into our containers with docker-compose exec; depending on the base that you're using, keep in mind the point that I mentioned earlier about the Alpine shell. And then finally, docker-compose down will take down our containers and our defined or default network. In our case, we're using a default bridge network.
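A sketch of the db service and the top-level volumes key. The Mongo image tag and the volume names are illustrative assumptions; the MONGO_INITDB_* variables are the ones the official Mongo image supports.

```yaml
  db:
    # Official image: nothing custom to build here
    image: mongo:4.1.8-jessie
    env_file: .env
    environment:
      # Variables the official Mongo image reads on first startup
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      # Named volume so application data survives container restarts
      - dbdata:/data/db

volumes:
  dbdata:
  node_modules:
```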

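And the Compose workflow as commands, as a quick recap; the service name here follows the sketch above.

```sh
# Build (if needed) and start all services in the background
docker-compose up -d

# List services, tail logs, and get a shell in the Alpine-based app container
docker-compose ps
docker-compose logs nodejs
docker-compose exec nodejs /bin/sh

# Tear down the containers and the default network
# (named volumes are kept unless you add the -v flag)
docker-compose down
```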
All right, back we go to the demo. Let's try that again: we're going to run docker-compose up -d. Awesome. Now pull this up fresh, localhost again, and what do you know? Got our sharks, and we can add some sharks. So I'm going to go with this megalodon shark, which is an ancient shark, and we're going to submit that, and cool, see it? Create a new shark, do a whale shark; those are large. Awesome, okay. Actually, let's test our persistence by taking this down. Okay, so our containers are down, right? But we want to make sure that our application data has persisted. So I'm going to bring these back up; I'm not destroying the volume. Cool, okay. So now, reload, and my data has persisted. We can also check that by going to /sharks/getshark, and there we go. All right, cool.

So far we've talked about some things that are specific to development setups with Compose, along with some things that are generally applicable, like named volumes and decoupling credentials from application code. Building on this information, I want to briefly touch on some of the things that you will want to implement when you're getting ready to deploy to production. When you're working in production, you're typically going to be building and pushing application images to some type of container registry. So instead of building your application locally as part of your Compose workflow, you will likely build and push a versioned image to a repository, which you will then pull down to run the app. With volumes, you're definitely not going to want any bind mounts between your local application code and your container running in production. Instead, you can use a named volume, which will allow you to mount the code that you've deployed to other containers for reuse; this is really effective if you need to share your application code between containers.

You will likely want to add a web server to your setup. With a web server you can do a few things, including adding specificity to how your application handles requests, and you can also get certs for your application. You have a few choices when you're adding your config to your web server container: a bind mount on a config directory or file will work, or you can build the image locally and push it up, copying your config over as part of that process; in that case, you can also use a named volume to persist the config. In order to get your certs using, say, a certificate authority like Let's Encrypt, you have a few choices. You can always obtain them on a virtual machine, as you would without containers. However, you can also work with a Certbot service, using the officially supported certbot image, and this would allow you to go through the entire process of obtaining your certs with containers: you would get the certs, mount them as volumes, and then mount those to your web server container. And you can see how the application code, the app code volume here, is being shared between all three of those services: the app, the web server, and Certbot. If you'd like to learn more about that process, because I'm glossing over it a bit, I have an article on Community called "How To Secure a Containerized Node Application with Let's Encrypt" that walks through the process in way more detail than I have done here.

Okay, so that's it for me. Thank you for listening, I look forward to engaging with questions, and yeah, enjoy the rest of DockerCon, folks. (soft music)