AWS Partner Webcast – Improving Your AWS Cost Efficiency with Cloudability

Hello and welcome to the Amazon Web Services partner webinar series. Our topic today is improving your AWS cost efficiency with Cloudability. I'm your host, Maya, with the partner marketing team here in Seattle. Before I introduce the presenters I'd like to start with a couple of housekeeping items. We are recording today's webinar, and all lines are muted. You can submit questions at any time during the webcast; we will take time for questions at the close of the webinar. The presentation will be available on SlideShare and YouTube, so there's no need to take down the URLs: you will receive an email early next week with the URLs included in the email body.

Next, let's meet our presenters. J.R. Storment is a co-founder of Cloudability, an AWS partner. Our co-presenter Andrew Bridgely is a software testing lead from REA Group; he is joining us from Australia, where it's very early, I believe. And Scott Ward is a solutions architect from Amazon Web Services. On the agenda, we will start with an overview of AWS tools for cost and utilization management and a case study of REA Group's journey toward AWS cost and usage control using Cloudability. Then we will have a demo of Cloudability's cost management tools for AWS, and we will conclude with a Q&A. So without further delay, let me pass it on to Scott.

Thanks, Maya. Good day, everybody, and thank you for taking the time to come out and meet with us and learn more about this great product and about AWS. One of the primary reasons businesses are moving so quickly to AWS is agility: the cloud increases agility. So what exactly is it that enables that increased agility? How are businesses succeeding on AWS? Over these next few slides I'm going to talk about what AWS offers businesses and how it allows them to be more agile. In today's cloud-based, quick-paced world, enterprises cannot afford to be slow, to be surpassed by their competition, or to lose out with their customers.

With a traditional enterprise operating an on-premise data center, it takes weeks, maybe months, to justify, order, rack, cable, power, and configure your infrastructure. I'm certain many of us have been in the situation where you need infrastructure now, or you have to order early without even having a full understanding of what you really need. With AWS, infrastructure can be made available to the business within minutes. You can add environments not only to support your production applications but also to support your test and development environments. You can add new environments in a different part of the world as your business grows and you need to support new regions and new workloads specific to that part of the world. You can add multiple pieces of infrastructure all at the same time, one, hundreds, or even thousands of servers in one shot, and when you don't need those servers anymore, you turn them off and you stop paying for them. Your data warehousing can grow as you need to grow and analyze your business: as you have more and more data to look at, you can deploy a massive petabyte-scale data warehouse in a few minutes, and when you're done analyzing your data, you can turn the systems off and you're done paying for them.

AWS offers solutions that are global in nature. Our data center footprint is global, spanning five continents, with highly redundant clusters of data centers in each region. Our footprint is expanded continuously to increase capacity and redundancy, and we add locations to meet the needs of our customers around the world. Each region represents a geographic location that's designed to optimize the needs of our customers in that area, and each region is designed to be independent of all the other regions in the AWS global infrastructure. Availability zones represent one or many standalone compute locations within a region, and each availability zone is also designed to be independent of all the other availability zones. Our edge locations support CloudFront, our content delivery network, and Route 53, our Domain Name System service; these offer low latency for routing internet traffic and for serving your content to your customers. A key thing to note about these regions is that your data never leaves the region without you enabling it; moving the data from one
region to another is something fully controlled by the customer, either through configuration within AWS or by executing a copy of the data themselves. This gives you flexibility when dealing with disaster recovery scenarios, or when ensuring that you're complying with regional regulatory requirements.

Through the use of AWS, enterprises are able to foster a culture of innovation which maybe they haven't been able to achieve before. With traditional on-premise setups, teams could experiment only infrequently: there's a lack of infrastructure to experiment on, or they're involved in the lengthy process of having to procure, justify, and configure new infrastructure, which takes time away from other tasks. Experimentation can also be expensive. Let's say you decide you want to experiment on something: you get the justification and approval to use hardware of a particular size and configuration, you experiment on that hardware, and then you realize things are not working as expected. In order to take things further you need to order different, maybe bigger, or more infrastructure, so to continue any sort of successful experiment you're now in a position where you need to further justify and spend more, or you have to stop the experiment because of costs, because nobody will allow you to acquire more infrastructure. At that point you may be stuck with infrastructure that you no longer need and that maybe your business can't even use. All of this results in less innovation: an inability to experiment on a regular basis can kill innovation within an enterprise, as the barriers to innovation become too high when it comes to infrastructure.

With Amazon Web Services, you can experiment often. You no longer need to go through the long process of justifying and procuring infrastructure; with a few clicks of a button you're able to launch the infrastructure and services to meet your experiment's needs. You can fail quickly at low cost. Once your experiment is done and you have the results, you can shut down and even terminate the infrastructure you launched for it, and as a result you're no longer paying for it. You then have the ability to spin up more infrastructure with a different configuration or a different size to continue your experiments, in order to achieve what you hope will be the most optimal result. All this results in more innovation: as the barriers to entry for experimentation drop, enterprises have a greater opportunity to innovate, to focus on the pieces that are part of their mission and that really help them differentiate their business.

AWS offers a lot of cost savings and flexibility to enterprises, resulting in a lower TCO, or total cost of ownership. So how can you achieve a lower total cost of ownership with AWS? There are several ways. First, you replace upfront capital expense with lower variable cost: you no longer need to invest in purchasing infrastructure which may take weeks to arrive, and which, by the time it arrives and you rack, cable, power, and configure it, may not really be what you need anymore for your current business. With AWS you launch the infrastructure you need within a few minutes, so you can immediately take advantage of it. Second, economies of scale allow AWS to continually lower costs: AWS operates at a massive scale, we are continuously working to squeeze every bit of efficiency out of our infrastructure and our tools, and due to the volume at which AWS works in the infrastructure space, we achieve great economies of scale through all the work we do to make our infrastructure as efficient as possible. We pass these efficiencies on to the customers in the form of lower costs: since 2006, AWS has lowered prices 42 times, with the most recent price drop happening in April of 2014. Third, there is pricing model choice to support variable and stable workloads: AWS offers the pricing models that best help you meet your business needs. There are four different pricing models, On-Demand, Reserved Instances, Spot, and Dedicated, and we'll go into those in a little more detail on the next slide. And finally, you save more money as you grow bigger: AWS offers multiple pricing models that allow customers to see increased savings as they use more of the AWS platform, and these come in the form of tiered pricing, volume discounts, and custom pricing for certain use cases.

So let's talk more about the pricing models that exist on AWS. We have On-Demand Instances, which allow you to start and stop EC2 instances and pay only for the time the instance is actually running. This is great for the initial experimentation phase we talked about a few slides ago, as well as for systems that may be spiky, that do not need high availability, that may just be there to provide a little bit of business support on an occasional basis, or that are sitting there waiting to be turned on for a disaster recovery situation. Reserved Instances offer you the lowest hourly rates in exchange for a one-time upfront fee, and because the Reserved Instances are always there, they are a way to drive value for customers who have a more steady-state load on their instances.
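That one-time fee only pays off if the instance actually runs enough hours. As a rough sketch of the break-even arithmetic in Ruby, using made-up placeholder prices rather than actual AWS rates:

```ruby
# Break-even estimate between On-Demand and Reserved Instance pricing.
# All prices here are hypothetical placeholders, not actual AWS rates.

# Number of running hours after which the reservation becomes cheaper
# than paying the On-Demand rate for the same instance.
def ri_break_even_hours(on_demand_hourly:, upfront_fee:, ri_hourly:)
  raise ArgumentError, "RI hourly rate must be lower" unless ri_hourly < on_demand_hourly
  (upfront_fee / (on_demand_hourly - ri_hourly)).ceil
end

# Example with made-up numbers: $0.10/hr On-Demand versus a $500 upfront
# fee plus $0.04/hr for the reservation.
hours = ri_break_even_hours(on_demand_hourly: 0.10, upfront_fee: 500.0, ri_hourly: 0.04)
puts "Break-even after #{hours} hours (~#{hours / 24} days of continuous running)"
```

The shape of the calculation is why Reserved Instances suit steady-state load: an instance that runs around the clock crosses the break-even point quickly, while a mostly idle one may never recoup the upfront fee.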
Reserved Instances are available in one- or three-year terms, and they have pricing models to meet what we call Light, Medium, and Heavy Utilization workloads. Then we have Spot Instances, which are an opportunity to take advantage of unused capacity within AWS. Since we have a large amount of infrastructure at AWS, it stands to reason that at any point in time there could be some boxes sitting idle, waiting for customers. The Spot market lets users take advantage of market-set pricing on these instances, which can result in an incredibly deep discount. Two points to keep in mind about Spot Instances: the market is constantly fluctuating, with no guarantee of capacity availability or price, and when someone is willing to pay more for your instance than you are, we will abruptly terminate that instance and sell it to them. On the other hand, you are not locked in for any specific duration. The other model we mentioned on the previous slide was Dedicated Instances. These are instances launched within your AWS VPC for workloads where a customer requires that they run on their own hardware, dedicated just to that customer. These are great for customers with heavily regulated workloads or compliance needs that require them to be on their own piece of hardware.

So how do you build a strategy to maximize agility and minimize spend? Well, here to talk about that is Andrew Bridgely from REA Group.

Hi Andrew, this is Maya, just a reminder to unmute yourself.

Am I unmuted now? Okay, no worries. Thank you, everyone, and thanks, Scott. I'd like to start by introducing myself and the company I work for. My name is Andrew Bridgely, and I work for a company called REA Group. REA Group is a public company that's listed on the Australian Stock Exchange. We have a number of online businesses around the world; the best-known one is realestate.com.au, which is based here in Australia, but we also have other online businesses in locations including Italy and Hong Kong. In terms of cloud, we started an aggressive strategy towards AWS in 2010. It's been a very interesting journey with many ups and downs. One thing I will say is that the people involved in the early decision-making had a lot of foresight to see the potential that AWS had. As for myself, I've typically been involved in the software testing space, but importantly I've been involved in this adoption of the cloud pretty much since the beginning, so I've really got to see the journey myself.

I'll start by talking about how we began that journey. Our adoption started in the dev/test space; we took our time to get to production. I believe this is a fairly common model, and I think it worked quite well for us. The way we got started was that we had one AWS account shared across quite a number of business units. We had shared tooling that made everyone's lives easier in those early days, but everyone was lumped together: there was no way to separate the different business units. The interesting thing was that everyone was given access; there was no real access control, and pretty much anyone in the company could do more or less what he or she wanted.

In terms of managing costs, we did this in a really simple way: by adjusting our AWS resource limits, such as our EC2 node counts. I imagine this is fairly common too. One thing I will say is that it's a very blunt tool for managing costs; it doesn't really take into account the whos, the whats, and the whys, and I'll talk a bit more about that in a moment. One of the things we saw when using resource limits as our main control method was that it created really clear anti-patterns. The most obvious anti-pattern we had was that in the evening, instances would tend to get stopped or destroyed, so you'd come in in the morning and there would be enough capacity, but as the day went on we would run low, and by late afternoon there was a good chance that all the capacity was
gone and you could no longer start or create instances. So the pattern we saw was that people would come in first thing in the morning and start or create their instances, even if they didn't really need them then, or even at all that day, just to make sure they had the instances for the day: when the afternoon came around and there was a chance they might not be able to get them, they would already have them. As you can imagine, this exacerbated the problem. We were reaching our limits earlier and earlier in the day, and we were spending more money than we needed to. So that's one thing to keep in mind as we went through the process of controlling costs.

Some of the early wins we had came from looking quite closely at the size and type of instances we were using. The shared tooling we had created for our teams had some defaults; we looked at those and said, well, we're defaulting to large instances, maybe we can default to medium, that kind of thing. We were then able to go back to the business and say, look, we've changed our sizes and saved this much money, can we now increase our resource limits again? And management would say, yes, you've done a good job, let's increase your limits. People would be happy for a moment, and for maybe a few weeks we would have no dramas, but then the cycle would kick in again: sometime late in the afternoon we would be hitting our resource limits, and it would come earlier and earlier in the day.

The next major win we had, and something I'd recommend people look at doing, particularly in dev/test, was deploying something we call the Stopinator. If you think about your working week, you're typically only in the office twenty to twenty-five percent of the total week once you consider the evenings and the weekends. So what we created was a Ruby script that used the AWS SDK, and every evening at a certain time it would effectively go through every instance we had and work out whether it could be stopped or not. Typically eighty to ninety percent of our instances could actually be stopped in the evenings, completely overnight, and also on the weekends. Suddenly we had these huge savings, somewhere up to say seventy percent, and again we were able to go back to the business and say, look, we're making sensible savings, can we please up our resource limits again? This sort of cycle happened quite a bit in the early days, and one thing I will say is that even today the Stopinator has been really handy.

That worked well for a while, but over time our cloud adoption really grew. We started moving into production and using different AWS products, the business units got more serious about it being a strategic platform for us, and it just became less manageable from a cost reporting point of view. As I said earlier, by this stage we probably had one or two, or maybe even a couple more, AWS accounts, but they were all shared amongst the business units, so there was no way for any individual business unit to actually know: what am I spending? Am I spending too much? Is there capacity for me to actually spend a bit more? We couldn't answer those kinds of questions. The other thing we couldn't answer was: what does my usage profile really look like? Which applications were using the most resources, that kind of thing. We just didn't really know how we were using the cloud. So we sat down and said, well, we need to do something about this, and we came up with a two-tiered approach to dealing with it. The very first thing we did was say we need a way to tag our resources, and we sat down and discussed the way to do this.
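The Stopinator idea can be sketched in a few lines of Ruby. The data shape and the exclusion flag below are illustrative assumptions, and the actual AWS SDK call is left as a comment so the sketch stays self-contained:

```ruby
# A minimal "Stopinator" sketch: each evening, stop every running instance
# that is not explicitly excluded. The instance hashes and the
# exclude_from_stopinator flag are illustrative; REA's real script may differ.

# Pure selection logic: given instance descriptions, pick the IDs that
# should be stopped for the night.
def stoppable_instance_ids(instances)
  instances
    .select { |i| i[:state] == "running" && !i[:exclude_from_stopinator] }
    .map { |i| i[:id] }
end

# With the aws-sdk-ec2 gem the result could be applied roughly like this
# (needs credentials, so it is left commented out):
#
#   require "aws-sdk-ec2"
#   ec2 = Aws::EC2::Client.new(region: "ap-southeast-2")
#   ids = stoppable_instance_ids(fleet)
#   ec2.stop_instances(instance_ids: ids) unless ids.empty?

fleet = [
  { id: "i-aaa", state: "running", exclude_from_stopinator: false },
  { id: "i-bbb", state: "running", exclude_from_stopinator: true },
  { id: "i-ccc", state: "stopped", exclude_from_stopinator: false },
]
puts stoppable_instance_ids(fleet).inspect # => ["i-aaa"]
```

The savings figure is plausible on the back of an envelope: a fleet needed only during office hours is used for roughly 40 to 45 of the 168 hours in a week, so stopping most instances overnight and on weekends can cut their running time, and hence their On-Demand cost, by well over half.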
The very first thing we came up with was a mandatory business-unit tag, and this tag had a canonical set of values: the business unit could be residential, it could be commercial, it could equally be media, for example. Then we said, well, as well as having this mandatory tag, let's look at what we would recommend to the business units. Once resources have been tagged with a business unit, teams might want to go a step further, so we defined two extra recommended tags: an environment tag and an application tag. In our dev/test space, an environment tag made a lot of sense: the environment could be something like Andrew's environment, or project X's environment, that kind of thing. The application tag was more appropriate for our production accounts: in that case your application might be realestate.com.au, and all the different resources that make up that site, your databases, your web apps, that kind of stuff, would all be tagged with the application name. As part of going through this, we came up with our tag-or-terminate policy.
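The tagging scheme and the tag-or-terminate policy can be sketched as a simple classifier. The canonical business-unit values come from the talk; the exact tag key and the seven-day grace period below are assumptions:

```ruby
# Hedged sketch of a tag-or-terminate check. The canonical business-unit
# values match the scheme described; the "business_unit" key name and the
# grace period are illustrative assumptions.
BUSINESS_UNITS = %w[residential commercial media].freeze

# Returns :ok for a correctly tagged resource, :stop for an untagged one,
# and :terminate for an untagged one that has sat stopped past the grace period.
def tag_or_terminate_action(tags, stopped_days: 0, grace_days: 7)
  return :ok if BUSINESS_UNITS.include?(tags["business_unit"])
  stopped_days > grace_days ? :terminate : :stop
end

puts tag_or_terminate_action({ "business_unit" => "residential" }) # => ok
puts tag_or_terminate_action({})                                   # => stop
puts tag_or_terminate_action({}, stopped_days: 10)                 # => terminate
```

In practice a script like this would walk the instance list from the AWS SDK, apply the classification, and email the owning teams before anything is actually stopped or terminated.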
What it required was that all of our resources, to begin with EC2 resources, but as time went on more resource types, had to carry this business-unit tag. We put a line in the sand and said: by this date you must have your resources tagged with the business-unit tag; if you do not have your instances tagged by then, they will be stopped and then terminated. The process went like this. Prior to that date we had reports running on what wasn't tagged, and those reports were sent out to the teams so they were aware that if they didn't do something about it, they were going to lose those resources. As you can imagine, it was a bit of a scramble towards the end for the teams to make sure all their resources were tagged. When that date came around, we basically pressed the red button. We didn't actually terminate instances initially: what would happen was that instances would be stopped, emails would be sent to the teams to let them know those instances had been stopped, and if the instances stayed stopped for a certain period of time, they would then be terminated. As you can imagine, fairly quickly all the resources running in our accounts were tagged with the business-unit tag.

Then came the second tier of our approach. The tagging was the first tier; the second tier was introducing Cloudability, and this is where the data really came to life. We would have been nowhere without the tagging, but we would have been struggling without Cloudability. The first thing Cloudability gave us was satisfying our need to give management visibility into our spending. Up to that point management had been asking: why are we spending, what are we spending, can you tell me what my teams are spending? Straight away we were able to give them reports they could look at and understand, and they knew what the teams were spending at a macro level and also at a project level. This was a bit of a hallelujah moment.

There was one quite interesting story. As I was preparing these reports, I saw that one particular project team had, over the previous couple of weeks, spent about five thousand dollars on a bunch of really heavy resources that were running around the clock. I knew the team responsible, and it just seemed unlikely they really needed equipment that heavy, so I said to them, what are you guys doing, do you need this running around the clock, it's cost five thousand dollars over the last couple of weeks. They were kind of in shock; they said, oh, well, we didn't realize this was costing so much money, and they were effectively able to save that money in the future. They weren't really to blame: they just hadn't been aware of what it was costing them. Interestingly, we also found one business unit that was actually under-utilizing AWS. They had been a little timid; they were concerned that maybe they were spending too much, because they just didn't know. Once they had real clarity on what they were spending, they actually started really making use of it, so that was a really good moment as well.

So we had people picking up Cloudability as a tool, but we realized we needed our engineers on board before using it, so we ran workshops. Our intranet at REA is very well used, so we wrote blog posts, and pretty quickly we had a lot of engineers signed up and able to make use of it.

Okay, so this first report I'm going to show you is probably the first typical sort of report we started creating at REA. This is for the residential business unit, showing their costs for a particular month, and you can see that for that month they spent seven thousand dollars. But the power of this report is really in the table below. In the actual screenshot you can't see the entire table, but you can see the top-spending components, and here you can see, really interestingly, how the tags are being used. Looking through that table, I can see that the most expensive resource we had was in our Gandalf account, which is our dev/test account, and it was for an environment called CN, which spent about six hundred and forty-five dollars. As you go through the table you can see similar data, and the very last row is for our production account and our Neighbourhoods application. So straight away we can now see what we're spending and where. But we needed to take this a little bit further: we needed, for example, to understand what an environment, which actually corresponds to a team, was really spending.
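A check like the one that surfaced that five-thousand-dollar surprise can be automated. Here is a hedged sketch in Ruby: the data shape, the ratio, and the dollar floor are illustrative, and in practice the figures would come from Cloudability's reports or its JSON API rather than hard-coded hashes:

```ruby
# Hedged sketch: flag environments whose spend jumped week over week, the
# kind of automated check that would catch a $5,000 surprise early.
# Thresholds and figures are illustrative assumptions.
def spend_blowouts(spend_by_env, ratio: 2.0, floor: 500.0)
  spend_by_env.select do |_env, s|
    # Flag only spend that is both non-trivial and a sharp jump.
    s[:this_week] >= floor && s[:this_week] >= s[:last_week] * ratio
  end.keys
end

spend = {
  "CN"      => { last_week: 300.0, this_week: 345.0 },
  "gandalf" => { last_week: 900.0, this_week: 2500.0 },
}
puts spend_blowouts(spend).inspect # => ["gandalf"]
```

The floor keeps tiny environments from being flagged every time they double a small number, while the ratio catches the round-the-clock heavy resources that quietly accumulate cost.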
So, using Cloudability, we were able to drill a step further and look specifically at the CN environment. As you can see in the example here, again the full table isn't shown, but all the resources contributing to that 645 dollars were now available. A team that had previously been working in the dark, not really understanding where their money was going, could now see it. From here, some teams would look at it and go, oh my god, I've spent so much money on these instances, how can I start saving money, optimizing here and there; for other teams it was more like, this is fine, it's looking good, I'm glad I now have the confidence to understand what my cost and usage look like.

So this was working out really well, but again we needed to take it a step further and get this information to the top of people's minds. One of the ways we went about doing this was getting people to save and share really interesting reports. Obviously you can spend quite a bit of time creating reports that really answer the questions you have, and they can answer the questions of your colleagues as well. Some of the popular ones I saw initially were around underutilized resources, and some of these reports had really interesting names, but typically you would see things like a report built around low CPU utilization, that kind of thing. So now the teams had visibility: here are my resources that are particularly underutilized, maybe I can have fewer of them, maybe they can be smaller, and they could go through an iterative process. They can refresh their reports at any time in Cloudability, so they get that sort of constant feedback. We also found that teams were saving project-specific reports, a little bit similar to the CN one I showed you a moment ago, so they could keep track of their individual project costs. One of the interesting things I saw was that a couple of teams were also starting to put together their own special dashboards, collecting really simple metrics like EC2 node counts and dollars spent, that kind of thing, using the JSON API. So that was another use.

That was working out quite well, and as time went on, I was spending a lot of time in the dev/test space, particularly given my background, and I noticed some bad behaviours, not all of which were necessarily related to cost, though some certainly were. You can see some of these bad behaviours here, and I'll go through them. I didn't find these through formal analysis so much as from talking to people; it was just kind of obvious. First, I often saw unused nodes. An example would be a team with up to 30 nodes running in their environment; I would talk to the team members and say, why do you have 30 nodes running, and they would say, well, we don't really know why we have these five, someone put them there, they've been running, and they're just there; I don't think we're actually using them. That's a bad habit. Also, a lot of the nodes were underutilized, running at one or two percent CPU, and some were unnecessarily expensive: some teams were running quite large nodes when really a small, a medium, or maybe even a micro would have done. Another bad habit, a bit less related to cost, was that some teams had particularly old nodes running in their environments. At REA we like to keep nodes fresh and know that you can recreate them at any time, but some of these teams had nodes that had been running for six months, which was obviously a bad behaviour. On top of that, we at times had excessive node uptimes: for some teams I could see their nodes running over the weekend because they'd been excluded from the Stopinator unnecessarily, and that was obvious to me.

At this point it was all a bit anecdotal, and this is where the Cloudability API really came into it. If you look at the equation on the right-hand side, each piece of data corresponds to the metrics on the previous page, and each of these metrics I was able to get in real time from the Cloudability API. The cool thing was that, using this equation, I was able to come up with a score for every environment. You can think of the score as a little bit like golf: the greater the score,
the worse you've done. You want the score to be as low as possible, within reason, and if you look at the equation it makes sense: your average hourly cost, you probably don't want that to be high; your average uptime, you don't want that to be high either. This actually ended up being quite a cool project. Suddenly I was able to create a table of results; I haven't included it in the slide deck, as it's not pretty or particularly interesting to view, but what we now had was each team scored relative to each other. Some teams had quite low scores, and I was able to say to them, you're doing quite well; maybe there's some room for improvement, you can look straight at the data and see that maybe your node uptime is a bit high, but you're doing quite well. To other teams I would say, your score is relatively poor, and there are a number of reasons for it: let's look at these five metrics and see how you can improve them. It ended up being quite a good game over time, with teams able to improve their scores, that kind of thing. So that actually worked out quite well.

On top of having this sort of scoreboard, I wanted a little more visualization around the data as well, which takes me to my next slide. Each one of those metrics could now actually be displayed on a dashboard. I was collecting this data using the JSON API that Cloudability provides and presenting it on our dashboards. What you can see here is one of those metrics, the average node uptime. On the x-axis we have all the different environment names, which have had to be obfuscated for this presentation because there's some sensitive information there, and the value on the y-axis is the percentage uptime. You can see some of our environments have a really high percentage uptime, up to ninety percent, whereas some are actually under ten percent.
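The full equation isn't legible in the deck, so the metric names and weights in this Ruby sketch are assumptions; the only part taken from the talk is the shape of the idea: every term is something you want to be low, so a lower total score is better.

```ruby
# Golf-style environment score: lower is better. Weights and metric names
# are invented for illustration, not REA's actual formula.
def environment_score(m)
  m[:avg_hourly_cost] * 10 +
    m[:avg_uptime_pct] * 0.5 +
    m[:node_count] * 1.0 +
    m[:avg_node_age_days] * 0.2 +
    (100 - m[:avg_cpu_util_pct]) * 0.3 # idle CPU counts against you
end

frugal   = { avg_hourly_cost: 0.5, avg_uptime_pct: 20, node_count: 4,
             avg_node_age_days: 3, avg_cpu_util_pct: 40 }
wasteful = { avg_hourly_cost: 4.0, avg_uptime_pct: 95, node_count: 30,
             avg_node_age_days: 180, avg_cpu_util_pct: 2 }
puts environment_score(frugal).round(1)   # => 37.6
puts environment_score(wasteful).round(1) # => 182.9
```

Whatever the exact weights, the useful property is the one described in the talk: a team with many old, idle, always-on nodes scores far worse than one with a few fresh, busy, regularly stopped ones, which makes the score a fair basis for comparing teams against each other.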
really high percentage up time up to you know ninety percent whereas some actually under ten percent so this this slide isn’t particularly exciting in itself but on a dashboard it was actually quite cool because well actually transition in 20 different metrics so you know at the moment it would be showing node uptime percentage then it will transition to node count and and and so forth and and the cool thing about this was you know having that in dashboards the teams could see how they’re going with is differ metrics how they’re going to compare to other teams and how they going over time so a similar sits like similar sort of data using the cloud the cloud their credibility apio was keep just dashboarding what we were spending so in this example here again for the residential business unit we’ve got our environments on the on the x-axis and we’re able to actually see what our what these environments are spending last week and this week and the cool thing about this is you know if you have some kind of blow out like I was talking earlier with that $5,000 you’ll be able to see it fairly quickly you started you know it’s kept up to date and it’s just it’s just sort of the thing that keeps keeps this information at the top of people’s minds okay so I guess I’m kind of coming to the end now and I kind of like oreos made some good steps you know it would definitely got control about our usage and understanding it at least you know if it is always more to do so the next steps artsy arts taking is having more sophisticated dashboarding with using the d3 libraries as what word what I previously used I think just having something a bit more interesting a bit more graphical that that you know you walk around you offer to look at these dashboards that isn’t very engaging I think there’s definitely some work there the next piece of work which I think probably more significant is till this saves you notice there’s optimization techniques have been applied mostly to dev test and 
I really think the next step is going to be taking that to production. The five metrics I went through a moment ago won't correspond exactly to production; in production we might have some special requirements. But I'm interested to continue the journey and find a way of evaluating the health of our production resources. And finally, I think something that's going to be very interesting going forward is optimising hourly rate with EC2 reserved instances. At the moment we do have reserved instances, and they're considerable, but the current process is that one person from REA spends time with an AWS engineer and works out what the reservations are. That process isn't exactly open or visible, and it could probably be improved, so I think using Cloudability to be quite scientific and open about it is another step we'd like to take soon. So that's it from me for the moment. I'm going to hand over to JR, who will be able to show you how to build some of those kinds of

reports that I've just shown. Thank you. Thanks so much, Andrew; I appreciate the opening there, and the look at a little bit of what you guys are doing. I also want to give an especially big thank you: for those who don't know, it's three thirty in the morning where he's presenting, so thank you for being online at that time today.

I'm really excited to go through some of the topics that he covered. REA, as he mentioned, does a lot of work with the Cloudability API, and they've done some really interesting custom things with it. What I'm going to cover in my presentation is a bit about how you can do some of those things easily within the Cloudability application itself, in the GUI. I've got about four quick slides as a quick intro, and then I'm going to get straight into the demo and show you how you can build out a lot of those reports and get answers to a lot of those questions he was showing you.

As part of this, let's talk about four quick takeaways from his presentation, and, before we dive in, a quick overview of Cloudability. We basically have five parts to our solution. The first is a spend management platform that helps you get a really quick view into what's happening in each department, with each tag, with each group, however you want to split that out, with daily emails and budget alerts. The next is a cost analytics portion that lets you dig deeply into the data, well beyond high-level pieces like account or service type, down into things such as individual tags, usage types, and boxes, pretty much whatever granularity you want to see. The third piece is a usage analytics component, which is all about finding waste and inefficiency with instance-level metrics that cover things like CPU, bandwidth, and disk I/O. The fourth piece, which those first three really roll up into, is the RI purchase analytics; it looks like that got cropped
out in the slide, but that's reserved instance purchase analytics, and it's really about understanding the right scientific mix, as Andrew was saying, of how to buy reservations that will maximize your savings over time. And the fifth piece is an enterprise enablement and management piece, which lets you group together all your different accounts and views into ad hoc groupings so the right people in the organization see the right accounts and spending at the right time.

We are a SaaS service: we collect the data programmatically via Amazon's data sources, we store it in data warehouses and do diffs on the back of it, and then you access it via Cloudability's website, so there are no agents or anything like that to install. At a high level, we also provide a lot of in-depth tooling for large enterprises, or anyone with lots of accounts and spending at Amazon, to get everybody the right view into the data. There's a multi-user system, which I'll get into in the demo in a bit; there's a view system, which lets you group business units, accounts, products, whatever you want, into distinct views; there are account groupings to do roll-ups of those into various groupings; and we also offer enterprise support for our customers, included as part of the package. That includes hands-on, in-depth training, both on Cloudability and on all these concepts around reserved instance planning, cost allocation, and efficiency, all the things we talk about in these webinars.

So, the four takeaways before we jump into the demo, on the back of what Andrew was talking about. The first is that you really need to provide daily, regular visibility into your Amazon bill to everyone in the organization, and the reason is that it's really easy to overspend when you don't see the bill all the time. You've got engineering, you've got operations, you've got finance, a lot of stakeholders, so you want to get all that timely, relevant cost data into the hands of the team member
that needs it. On the back of that, there's some work you need to do around tagging, as Andrew mentioned, around getting together the taxonomy of how you want to report on things, but once you have that in place and can provide that visibility, you can really sit back and watch efficiency increase.

The next thing I'm going to cover in the demo in a minute is that you really need to find resources that should not be running. The way I like to look at this is that you really shouldn't treat the cloud like a data center. You want to turn off dev, stage, and test resources during nights and weekends, and you want to look at hourly instance counts of what's happening to find when things can be disabled or auto-scaled back, and turn them on and off appropriately. As Andrew mentioned, you can get the exact percentage; by my math, you can save about sixty-five percent just by turning things off on nights and weekends.

The next thing we're going to look at in the demo is how to determine what an underutilized instance means for each instance role. There's really no single right answer to "is this instance being well utilized"; it depends on the role. For example, you may want some instances to be running at low CPU, while others can be running at a higher level.
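The "about sixty-five percent" figure follows from simple schedule arithmetic. A sketch, assuming a 12-hour weekday window (the window itself is an assumption chosen to illustrate the math, not a rule):

```python
# Savings from running dev/test only during working hours instead of 24/7.
# On-demand cost scales with hours running, so the saved fraction is just
# the fraction of the week the instances are off.

HOURS_PER_WEEK = 24 * 7          # 168 hours if left on around the clock
on_hours = 12 * 5                # 12 hours a day, weekdays only = 60 hours

savings = 1 - on_hours / HOURS_PER_WEEK
print(f"{savings:.0%}")          # → 64%
```

A tighter window (say 10 hours, weekdays) pushes the figure above seventy percent, which is why the exact percentage depends on the schedule your teams can live with.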

So we want to determine role-specific usage profiles and set thresholds accordingly. The fourth thing, before we jump into the demo, is that you really want to be buying reserved instances iteratively and often. Most companies buy reserved instances once or twice a year, using very simplified math, and this can result in some real cliffs in inventory for your services, and also misaligned inventory, where you're not matching up the right mix of reservations to your actual instances.

So let's dive into each of these points and do some demo. I'm going to switch over to a browser here; excuse me one second while I open the right screen. Great. We are looking now at a cost analytics report within Cloudability, and we're going to initially build what Andrew was showing, which I think he indicated was sort of the hallelujah moment: how they got cost by things like tags in different groups. This interface in Cloudability works very much like Google Analytics: you've got a drop-down of different dates you can select, you've got the ability to share and export reports, everything is also available via the API, as Andrew is utilizing, and we've got the ability to filter reports based on a variety of different dimensions and metrics. You've got some visual overviews here of what's happening with your spending on a day-by-day basis, and at the bottom, where the meat is, as Andrew was saying, are the actual rows that show how the spending breaks down.

In this example we are looking at a very basic report: cost by linked accounts. Let's say you've gone through that process of doing all your tagging, maybe you've done the tag-or-terminate rule that REA did, and you've got everything somewhat set. Let's actually customize this report to add some more information that will show how the spending breaks down. All the reports within Cloudability are very customizable; all the
dimensions and metrics can be changed, you can add additional ones, group by whatever you want, and combine as many things as you want; the sky's the limit. So in the "add a dimension" field here, I'm going to scroll down through all these different things we have set up, go to the tags, add a tag for environment, and apply that to see what we get. Now we're looking at a view that shows us our linked account names by our environments. I'm going to take this a step further and add in a business unit, which is not an actual Amazon account grouping; it's an ad hoc grouping we've created within Cloudability that says: show me these combinations of accounts by business unit. So now we're looking at a combination of account name, environment, and business unit, as an example.

Let's get a little more detailed with this. This is sort of a messy report, with a lot of different things happening, so let's filter down to just production resources. I'm going to click here on "production," which creates a filter; if you look at the top, there's now a filter that says environment equals production. It's a very simple view right now, so I'm going to add some more detail. I want to see what this production spending is actually on, so I'm going to customize the report: take out environment, because we're already filtered to production and don't really need it; add in the Amazon product names, which are essentially the services; and take out the linked accounts as well, so we're looking at just the production environment. Now we can see, specifically for this production environment, which individual services we spent on. And we can go much further with this: let's also look at what the actual usage types were underneath those services, and let's also add the item descriptions to
see what we paid on a unit-cost basis for each of these. So you can get very detailed reporting here on the back of whatever breakdowns of accounts you want, and then continue to filter down any way you'd like. Another thing we might do with this, as Andrew mentioned, is that individuals in the organization might not know which specific resources are contributing to the costs. So I might come in here and add the name tag, which is going to list out individual, specific resources; keep in mind we're still filtered to this one environment of production, and here we'll get a list, essentially, of individual instances and what they cost during this time period.

Once you've got the right report and you want to get it out to other users, we also offer a full user system that lets you get this info into the hands of folks without providing access to the Amazon console. So you may have a finance user you want to bring in, and I might say: this is our new finance user, their email is finance at cloudability, I want to make them an administrator so they can come in here, set up new tag reports, and invite other users, and I want to give them access to just certain accounts. I want them to see this cost center, and I want them to see all production

spending. These are groupings I've previously created; they're combinations of Amazon accounts. You can group these accounts together any way you want, and then restrict this user to just these views. In this case, for finance, we might want them to see everything, but if it were just one team lead, I might want them to see only certain accounts. And finally, I'm able to give this user a default view of this cost center, and what that does is that every day (or weekly, if they prefer) that user will get a daily email that breaks down exactly what's happening with their accounts only. This really speaks to what Andrew was talking about: giving everybody a view into it, letting people find those times when they may have that extra five thousand dollars in spending they didn't know about. It gives them a really clean breakdown, not just of what they spent but also of what they're estimated to spend, broken down by accounts.

The next piece we're going to take a look at, which covers something else Andrew spoke about, is hourly instance counts within Cloudability. This is going to help us identify when things can be turned off. We're looking hour by hour at, essentially, instance counts. I might add in an environment variable here as well, to identify which environments have instances running all the time, and if I filter in on testing here, for example, we'll find that our testing servers are running all the time. So you could actually turn off a number of these during nights and weekends and save a large amount of money.

Let's jump over to an underutilized instances report; this is a lot of the data Andrew was talking about with his metrics. We're looking at instances by things like low bandwidth, low disk I/O, and CPU utilization, and we're also looking at the age of those instances to figure out how long they've been running.
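The hourly-count idea from a moment ago can be sketched in a few lines: if an environment runs as many instances at 3am on a Sunday as at 3pm on a Tuesday, the fleet probably never scales down. The input shape (environment name mapped to 168 hourly counts, index 0 being Monday 00:00) is an illustrative assumption, not Cloudability's actual data format:

```python
# Spot fleets that never scale down by comparing their quietest and
# busiest hours over a week of hourly instance counts.

def always_on(env_hourly_counts):
    """Return environments whose minimum hourly count equals the maximum,
    i.e. the fleet never shrinks at night or on weekends."""
    return [env for env, counts in env_hourly_counts.items()
            if counts and min(counts) == max(counts)]

counts = {
    "production": [8] * 168,                                   # steady by design
    "testing":    [5] * 168,                                   # never turned off
    "dev":        [4 if 9 <= h % 24 < 18 else 0 for h in range(168)],
}
print(always_on(counts))  # → ['production', 'testing']
```

Production showing up here is expected; the actionable finding is a test or staging fleet in the same list.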
What we can do with this is start to add very custom profiles, because we might not want all instances to be treated equally. So I'm going to add in a filter, let's do environment, and say environment equals test, for example. For test I may say I really want CPU utilization to be at ten percent, and I want disk I/O to be at another level, and you can be very specific about which profile should have which criteria to indicate what counts as underutilized. You might do this by environment, you might do it by role, you might do it by instance size; you can build out the right reports and then pull them via the API as well.

The next piece we're going to jump into, and I'm moving quickly here since it looks like we're coming up on the end of our time, is the reserved instance planner, and this gets into a lot of the science of how you can choose reservations for maximum efficiency. Right now we're looking at a one-year term, and we have the option to recommend reserved instance modifications: do I want to move reservations around between availability zones, do I want to move them between family types. What we're going to do with this tool is look at every individual instance size, every availability zone, and every operating system, and figure out how many reservations you currently have, how many you really need, and the gap between those (I'll show you the math in a minute), and then give you a breakdown of what it's going to cost up front to buy those reservations and what the savings are going to be on the back of them. The way we calculate this (and you can adjust it to any time period you want) is by looking at an hourly histogram of every hour in the reporting period, in this example 30 days, to see how many instances of that type, size, and operating system were running each hour. In this example it looks like we need six heavy-utilization reservations.
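The histogram logic just described can be sketched as follows. This is a simplified reading of it: the baseline is taken as the minimum hourly count (the real planner works from the full histogram), and the prices and upfront fee are made-up placeholders, not real AWS rates:

```python
# From an hourly histogram of running instances, recommend reserving the
# baseline count, then estimate how many months until the upfront fee for
# a reservation is recouped by its lower hourly rate.

def recommend_reservations(hourly_counts):
    """Reserve the count present in every hour of the period (simplified
    here to the minimum of the histogram)."""
    return min(hourly_counts)

def break_even_month(upfront, on_demand_hourly, reserved_hourly):
    hourly_saving = on_demand_hourly - reserved_hourly
    hours = upfront / hourly_saving          # hours to recoup the upfront fee
    return hours / (24 * 30)                 # rough months of 30 days

hours_in_month = 24 * 30
counts = [6] * hours_in_month                # six instances up 100% of the time
print(recommend_reservations(counts))        # → 6

# e.g. $400 upfront, $0.20/hr on demand vs $0.107/hr reserved:
print(round(break_even_month(400, 0.20, 0.107), 1))  # → 6.0
```

With those placeholder numbers the purchase pays for itself around month six of a one-year term, the same shape of answer the planner shows for each size, zone, and operating system.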
This is based on a case where we have six of these instances running a hundred percent of the time, so we recommend you purchase six. There's a break-even at approximately month six, and here's all the information on on-demand versus RI savings, what the up-front cost will be, and what the savings rate will be. We'll go through and do this for light, medium, and heavy utilization across your entire consolidated billing master account, or you can do it on an individual account basis as well. It makes it very easy to figure out the right combinations to buy, and then after you've purchased them, we'll tell you at the bottom of this list how well you're using them. This one is being used perfectly, with no gap; we may also tell you that you should move others around, or potentially sell reservations.

So those are the four pieces covering the points Andrew talked about: how to build out the cost allocation reports to give daily visibility, how to find resources that should not be running, how to determine what underutilized means and define profiles for each role, and finally how to buy reserved instances iteratively and often. I wanted to hand this back to Maya to jump into some of the questions we have going on. Maya, are you available? We do have some questions coming in, and I would like to launch a very quick satisfaction poll right

before we move on to our questions; it will take just two seconds for people to fill in. And Andrew, we do appreciate your joining us today; we know how inconvenient it is to be up at that dark, silent hour when everyone else is sleeping. It is rather early, and I was expecting your little girl to wake up crying. Thankfully she's in the other corner of the house, so I'm hoping she'll be okay. You are a trooper. I got a little bit worried before, because it was bucketing down rain and I was afraid it might interfere with the audio, but thankfully it's slowed down now.

So I'm going to close the poll. Thank you to our audience for answering, and let's move on to questions. JR, you should see a few questions in your inbox, and I will send them your way as they keep coming in; we have an array of questions right now. Okay, great, I see them coming in here, so let me go down this list. The first question: does the Cloudability product chart cost in real time, so that as a manager I can see activities that just launched and their impact on cost? We basically pull our data from AWS every hour, and depending on their update schedule, some of the instance-related data is updated on their end pretty much in real time, while some of the cost data is updated a few times a day. We're checking constantly, and within about an hour or two of quote-unquote real time we will give you whatever information Amazon is reporting. Would you like to just go down the list? You do have quite a few. Sure, I'll keep working through the list here.

Does Cloudability provide analytics for S3 storage and utilization? Good question; that's a yes and a no. What we can do, and can dig into pretty deeply, is within the cost analytics: if you are making good use of tagging in a lot of the ways Andrew was talking about, and doing things like tagging S3 buckets, you're going
to get a lot of detail about what each bucket, and its associated bandwidth, is costing you within the Cloudability reports. So you could build a set of reports that shows cost per bucket and cost of bandwidth per bucket, see which buckets are most and least used, and go after them like that. There isn't an instance-style analytics view like there is for EC2, but you can get a lot of the data you want using reports. We have some resources on how to build out that type of tagging strategy and those reports, and we'd be happy to provide them after the call as well.

Can your user management federate with corporate identity systems? Very good question; the answer is "very soon," so not yet. Essentially, I think what you're asking is whether we can plug into systems like LDAP, SAML, and whatnot, and that's definitely the way we're thinking about doing this. We have a variety of large enterprise customers right now, such as Intuit, Autodesk, and Adobe, and some of these enterprise customers will often have many hundreds of Cloudability users. So we do provide a full API for user management: you can create users, destroy users, assign views, and give them permissions, so all of that is available today. A lot of our large customers programmatically create and destroy users based on their corporate identity systems, so stay tuned for more direct integrations with those types of systems.

What is the list of AWS services Cloudability covers? The answer to that one is pretty simple: we cover all services. Basically, anything Amazon reports out from a cost perspective, we ingest and put into the reports. We offer essentially near-penny-accurate reports; Amazon rounds to five-tenths of a penny, and we round to the penny, but it's pretty much spot-on exact cost for all Amazon services. So we'll ingest whatever it is
that they're throwing our way.

The next question was: what is available via the API as compared to the interface itself? That's also a pretty easy question: essentially everything from a reporting perspective that you see within our application, you can pull via

our API. We have public-facing API docs available in our knowledge base that cover all the basic cost reporting pieces; you essentially get a token that can be configured on a per-user basis to pull that information. For our enterprise users there's a separate set of docs covering things like account groupings, view creation, and all that user management, but any Pro user, even at the lowest tier of the product, gets access to our API to do reporting, create users, and handle all the basics.

Okay, it looks like we're getting short on time here, so let me go through a few of the others to see what's next. You just got a question about cost; it's a common question that we get, so you might like to address it: what is the cost of Cloudability? Yeah, definitely. Basic pricing is available on our website; if you go to cloudability.com/pricing, or click through from the navigation, you'll see we basically charge a percentage of your AWS bill, with two levels of service. We generally find we can save customers roughly five to ten times the cost of Cloudability in optimization savings alone. I'd say the big value people get, really the most powerful and in some ways the least tangible, is that overall cost visibility that lets everybody see what's happening and creates a culture of accountability; customers have told us they've saved hundreds of thousands of dollars, not by turning things off, but just by changing the way their engineers think about cloud resources.

The next question: do you have any documentation on best practices while using Cloudability? I heard about shutting down test resources and making use of tagging; I'm looking for a bit more detail on tagging in particular. Yes, we have a lot of resources about tagging and cost allocation in particular; our knowledge base, at support.cloudability.com, is a good place for that. We
have a white paper on tagging, a dedicated webinar on tagging, and blog posts on tagging. If you start a trial and interact with somebody at our company, feel free to ask them and they can walk you through it, or go to the knowledge base and start looking through some of it. There are a lot of resources, and we also work very closely with our trial users and customers on strategy around all those pieces, and sit down to talk about how they can not just lay out the strategy but work on implementing it: a lot of the things Andrew talked about around getting everybody involved, getting rules in place to get complete tagging, and identifying resources that are not tagged. Maya, I think we're probably about out of time, unless you tell me I have time for one more, in which case I'll take it. You can take one more question, and then we'll have to say goodbye to our audience; for any other questions that need to be addressed, Cloudability can answer them afterwards. Yeah, we won't be able to take all the questions, but we'll give it a shot for one more.

Okay, so I think the next one here is probably the best to address. Someone's asking: we charge back our users internally within the company; does Cloudability provide customized billing statements that we can send to our users? You can definitely get to this; we've got a few features and use cases for it. With that view system, we allow you to lock down just specific bits of the spending, by account for example, to send to just that user, so they only get their part of the spending, which gives them basically a breakdown of what their resources are costing. We also have people who use our API to pull, say, very specific tag spending into their own systems. So there are a few ways we can accomplish that; chargeback is definitely a core thing a lot of our customers use us for, so depending on how your company is structured, we can probably work out a solution that works for you. Oh great, I think that's
all we had, so I'm going to pass it back to you, Maya. Thank you, everybody, for the questions; this was really great, and I'm happy to answer more via email after the webinar as well. Thank you, JR, and thank you, Andrew and Scott, for sharing your knowledge with us today. And to our audience, thank you very much for attending; we hope the content of this webinar was helpful to you. Please reach out to Cloudability or to AWS with further questions, and for any questions that remain unanswered, Cloudability will reply to you via email. As a reminder, you will also receive an email follow-up with the URLs to the webinar recording and slides on YouTube and SlideShare. Thanks once again for joining us today; we hope to see you again in future webinars. Have a great day, everyone.