Google I/O '17: Channel 3

9:30 a.m. CST, May 18, 2017

Google I/O — TensorFlow Frontiers, Stage 3

>> Hi, everybody, I'm Wolff Dobson, and I apparently can't walk up stairs. I'd like to talk to you today about the frontiers of TensorFlow. It has three parts: the latest developments in TensorFlow, then we're going to talk about Cloud TPUs and TPU pods, and finally Zak is going to tell us about the TensorFlow Research Cloud.

Okay. So TensorFlow — it's machine learning for everyone. We like to say that it is fast, it is flexible, and it is production ready. When we say production ready, what we mean is that the code you mess around with on your laptop, that you try out on your local machine, is the same code that you push out to run on giant clusters. It's all the same thing all the way down — no rewriting in a new language after you've figured it out.

So TensorFlow is this language that allows you to write computational graphs and use those graphs to do, usually, machine learning and often deep learning. Here is an example of a basic deep learning graph. We have a picture of a cat or a dog that we feed in as pixels into the convolutional layer at the bottom, and then it goes all the way down to give us a prediction of some kind: either it's a cat or it's a dog. Convolutional layers are one kind of thing. This is another kind of thing: a collection of fully-connected layers, where each node on one layer connects to every node on the next layer. The arrows between nodes are all weights. Those weights — we like to call them parameters — get changed over time. And this is a very deep network: this is Inception v3, which has about 17 layers depending on how you count, and about 25 million parameters, so it can take a while to train.

So what do you do when you're doing TensorFlow? You probably have Python, you knock out some Python, and it generates this graph for you. That graph you execute later, in two modes. You do training, which means showing the model examples and modifying those parameters based on the results you get; then you can run it again later on new data. We call that inference, and we're hoping to get some kind of statistical insight on this new data out of all the data we trained on.

TensorFlow runs on all kinds of platforms: CPU, GPU, obviously — and we're going to talk about TPUs later; they're pretty neat. Also on mobile, iOS and Android, and actually also on Raspberry Pi; we can support all of TensorFlow on that.
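The "build the graph first, execute it later" idea Wolff just described can be illustrated with a toy deferred-execution graph in plain Python. This is a hypothetical sketch of the concept, not TensorFlow's actual implementation — the `Node` class and `run` function here are made up for illustration:

```python
# Toy "define first, run later" computational graph (not TensorFlow itself).
class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

def run(node, feeds):
    """Execute a graph node, looking placeholder values up in `feeds`."""
    if node.op == "placeholder":
        return feeds[node]
    args = [run(i, feeds) for i in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    raise ValueError(node.op)

# Building the graph performs no computation...
x = Node("placeholder")
w = Node("placeholder")
y = Node("mul", (w, x))          # y = w * x

# ...execution happens later, possibly many times with different data.
print(run(y, {x: 3.0, w: 2.0}))  # 6.0
```

Training and inference both reuse the same graph this way: training runs it repeatedly while adjusting the parameters, inference runs it on new data.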
We support lots of languages. Google supports Python, C++, Java, and Go; these others are community projects, and we're excited about the community using the new C bindings to create new language wrappers.

TensorFlow comes with a lot of different tools. This tool is TensorBoard; it's sort of like an X-ray for your training process. It allows you to look inside your training and see what's actually going on inside your graph. It lets you inspect your graph, and in this case we're showing off an embedding — that's MNIST, for those of you who recognize it; it's handwriting data — and you can see we're grouping all the 5s as gray and all the 9s as red. It's really nice to be able to look and see what's actually going on.

And once you've got this all trained up and you want to push it into production, we have TensorFlow Serving. That's a second open source project, also developed by Google, that lets you manage the process of serving a model in production. It lets you do things like serve multiple models, use it with some kind of container engine, and manage the process of setting up one of these big services to deliver insight to your customers. There is a great talk about this — I encourage you to go: Noah Fiedel is going to be speaking about it on Friday.

Inside Google, TensorFlow is very popular. This is a graph of directories inside Google with model description files in them — which is to say, each one is a little piece of machine learning inside the Google repository. You can see TensorFlow launched internally in Q3 2014. We actually had another system before this, called DistBelief. It was great, but not flexible enough for the kinds of projects we wanted to do, and you can see that after we launched TensorFlow internally, there was a kind of explosion of trying to apply ML to every kind of problem at Google: Search, Gmail, Translate, Maps, all kinds of stuff.

And in November of 2015, we launched TensorFlow in public. It quickly became one of the most popular machine learning GitHub projects — actually, the most popular, and when we say most popular, we mean really the most popular. (Laughing.) This is a graph of all of our stars over time. We're really happy about the community, and speaking of community, since November 2015 we've had 17,000 commits, many of those from external contributors. We have almost 500 non-Google contributors, and for TensorFlow version 1.0, which we cut in February, we've had some very significant external commits. One that comes to mind is Windows GPU support, which was contributed externally, and we absolutely loved it. And if you hunt around on GitHub, you'll find thousands of other repositories with TensorFlow in the title, and along with that come blogs and books and all kinds of other materials that, again, the community is putting together, and we're so happy that they are.

We also take supporting TensorFlow seriously. One of the great things about working on the TensorFlow team is that our engineering team actually takes rotations through Stack Overflow and also through issues and PRs, so we've had over 5,000 Stack Overflow questions answered and about 5,000 GitHub issues filed and triaged in some useful way, and we have a lot of activity — we're getting almost 160 new issues a week.

So where are we today? Last night — or actually yesterday afternoon — we branched TensorFlow 1.2. There is a lot of new stuff in TensorFlow 1.2 around supporting TPUs; Brennan is going to talk about that. But something that's come into TensorFlow recently is XLA, TensorFlow's compiler for accelerated linear algebra. The basic idea is that TensorFlow graphs go in, and optimized assembly comes out — machine code that will actually run. It is like an optimized version of the graph, but it's actually running as machine code. That's super important for supporting custom hardware, and we'll talk more about this in the TPU section.

TensorFlow is a distributed execution engine that runs on top of all of these kernels for iOS, Android, GPU, CPU, all of that kind of stuff. On top of that, we have the Python front end, which is how you program it.
We have a C++ front end as well, and all these other languages. Recently in TensorFlow we've added a Layers API. The Layers API gives you big building blocks that are the right shape for basic machine learning tasks: convolutional layers, fully connected layers, all that kind of thing. On top of that, we have both Keras and Estimators. Keras didn't originate at Google, but we liked it so much — it's a high-level API that gives you even easier, powerful ways to fit your graphs together, and it's actually built on top of our Layers API as well. And then we also have Estimators, which are a way to package up and abstract away the concepts of machine learning. An Estimator has fit and evaluate and all that stuff. Estimators are really important for parallelization. And with Estimators, if you want to do a linear regression, or a classifier, or fully connected layers — whatever it is you want — we might be able to give you a canned Estimator that lets you do no graph construction at all: just drop your data in and go.

One effect of all of these high-level APIs, like Keras and canned Estimators, is that code gets smaller. But beyond that, like I said, a big thing about Estimators is that they help you parallelize your computation, and we're going to talk about that more, because TPUs are super parallel.

Speaking of performance, let's talk about performance. About a week and a half ago, we released a bunch of benchmarks — there is a link down there. This is one of the graphs on that page: an NVIDIA DGX-1, synthetic data, with 1, 2, 4, or 8 GPUs, and there is a lot of throughput there, a whole bunch of images per second. Images per second is super important, but another thing that's really important in machine learning is that you get good scaleup. What that means is: if I add another GPU, do I get approximately another GPU's worth of performance out of it? So in this graph, if you go from 4 to 8 GPUs, you actually see double the performance. It's a little hard to read on this graph, so we made it easier and drew an ideal scaling line, with GPUs on one axis and speedup on the other. What you want is: with one GPU you have 1x speedup, with 8 GPUs you have 8x speedup — a straight line all the way up. These are common models in machine learning, Inception v3 and ResNet-50, and we're getting pretty close to that line.

Now, synthetic data means you're holding the input constant — putting the engine up on cinder blocks and revving it as fast as you can. At Google we think it's important to try with real data as well, so on the right we have the same test with real data, and you can see it's not quite as ideal. It's never going to be ideal, because there is overhead for doing the parallel computation, but we're getting pretty close.

And if you want to scale up to lots of GPUs, we have benchmarks for that as well. This goes up to 64 GPUs, this time on K80s in the Cloud. Again, you're looking for that same scaling: from 32 to 64 GPUs, are you getting double the throughput?

But don't take my word for it — try it yourself. Not only did we release the page with all of these benchmarks and graphs on it, we also released the code, so you can download it and run it yourself on your own setup.
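The scaleup question Wolff keeps asking — "does 8 GPUs give me 8x?" — can be made concrete with a small helper. This is a sketch; the throughput numbers below are hypothetical, not the published benchmark figures:

```python
def scaling_efficiency(images_per_sec, baseline, n_devices):
    """Fraction of ideal linear speedup achieved with n_devices accelerators."""
    ideal = baseline * n_devices       # perfect scaling from the 1-device baseline
    return images_per_sec / ideal

# Hypothetical numbers: 1 GPU does 200 images/sec; 8 GPUs do 1,520.
print(scaling_efficiency(1520, 200, 8))  # 0.95 -> 95% of ideal linear scaling
```

An efficiency of 1.0 is the ideal straight line on the speedup chart; real-data runs land somewhat below it because of parallelization overhead.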
We have Google Compute Engine and Amazon EC2 in there, but please try it on your own setup and tell us how it goes. Also, that code is tuned for speed, so it's a good place to look: if you're having issues with scaleup, you can look at our code, see how we did it, and see if that helps you.

But 64 GPUs is a lot, and there is more beyond 64 GPUs. With very large models, you're going to need a lot more computation to train them, and that kind of computation means more powerful chips, faster communication among the accelerators, faster memory, and an optimized software stack. To tell you about that — to tell you about TPUs, which are all of those things — I'm going to hand it over to Zak.

>> ZAK STONE: Thanks, Wolff. I'm super excited about TPUs, and I'd like to share the story of their development with you today. Some of you may remember this image here. This is Google's first-generation TPU, which was revealed at Google I/O last year. Google had considered building custom hardware as early as 2006, but it became more urgent in 2013, when one of our star engineers projected that if all Android users spoke to their phones for three minutes a day, that could force Google to double the number of data centers. That drove a crash project to develop our first TPU, and in just 15 months we were building and deploying these in the data centers, which brought us some pretty significant performance gains across Search ranking, Google Photos, Street View, Google Translate, and a variety of other large machine learning applications. In fact, compared to contemporary CPUs and GPUs, this first-generation TPU was 15 to 30x faster on these internal workloads while being 30 to 80x more power efficient.

But there was an important limitation, namely that this first-generation TPU was designed for inference and not training. As Wolff mentioned earlier, training is the enormously complicated process of setting all of the weights in these giant machine learning models, and inference is the process of running the trained models. So while the first generation was excellent for running models that were already trained, Google still had to use huge clusters to train them. That inspired the second generation, which is what was announced yesterday. This device delivers up to 180 teraflops of floating-point performance and 64 gigabytes of ultra-high-bandwidth memory. The most important thing is that it's designed to support both training and inference on one platform, and that turns out to be really convenient, because it allows you to develop your model and then run it in exactly the same way. You don't have that barrier of friction of having to quantize the weights or figure out how to deploy on a separate platform.

But for some of Google's large machine learning problems, even 180 teraflops isn't enough. So from the beginning, the second-generation TPUs were designed to be connected together with an ultra-fast custom network. We call these collections of TPUs "pods," and the TPU pods you see here contain 64 of these new second-generation TPUs. If you add all of that up, that gives you up to 11.5 petaflops of machine learning acceleration that you can apply to a single machine learning training problem — or you can split it up to support many different training problems in parallel. The whole system has 4 terabytes of memory, and the TPUs are connected in a toroidal mesh network that facilitates the ultra-fast, dense communication you need, especially to solve these machine learning training problems.

We're already seeing interesting results on some of the important internal workloads that matter to us. For example, one of our new large-scale neural machine translation models used to take 24 hours to train on the best commercially available hardware; now, in 6 hours, we can train to the same accuracy on 1/8 of a TPU pod.

These new devices, the second-generation TPUs, we're thrilled to announce are coming to Google Cloud as Cloud TPUs. Our goal is for Google Cloud to be the best cloud for machine learning, and we offer customers a choice of the best hardware you can find anywhere — Skylake CPUs, Volta GPUs, and now this new member of the family, Cloud TPUs — so customers can build the best machine learning systems for their specific workloads. To tell you a little more about how Cloud TPUs are going to integrate with TensorFlow, I'd like to hand it off to Brennan.

>> BRENNAN SAETA: Thank you, Zak. Today I want to talk to you about Cloud TPUs.
Now, as we designed Cloud TPUs, we wanted to ensure that they would meet your needs, whether you are a researcher or whether you're building a high-scale, high-performance machine learning product. For researchers, we wanted to make sure it was interactive and low-level, so you can play around with your models and debug them. And for high-performance, high-scale platforms and products, we wanted to ensure that it would integrate with the rest of the Google Cloud Platform. But instead of me just telling you about it, it's more exciting if I show you. Before I do, an important caveat: as we bring TPUs to Google Cloud, there are a lot of moving pieces, and some of those pieces are still under active development, so things may change.

That said, let's get started. The simplest model I can show you is alpha times X plus Y. Alpha is the constant 2; X is a vector — or, as we call it, a tensor — of three 1s; Y is also three 1s; and at the end of the computation we hope to get three 3s. In order to use Google Cloud TPUs, you've got to first create one — this is your Cloud TPU. To access it, you go through Google Compute Engine, so we first create a virtual machine in the Google Compute Engine product, name it demo-vm, and call our TPU demo-tpu. We log into demo-vm, and this is what we see after we install TensorFlow. We run the Python interactive interpreter and import TensorFlow just like normal. After that, we connect to our Cloud TPU using gRPC, Google's open source RPC framework. After that, you use TensorFlow just like you normally would on a CPU or GPU; to run on the TPU, you put the computation inside a with tf.device block.
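The demo computation itself is just alpha·X + Y. Here is the same arithmetic in plain Python, so you can see the expected result — the on-stage version builds this expression in TensorFlow and places it on the TPU with the tf.device block just mentioned:

```python
alpha = 2.0
x = [1.0, 1.0, 1.0]
y = [1.0, 1.0, 1.0]

# Elementwise alpha * x + y — the "axpy" computation from the demo.
result = [alpha * xi + yi for xi, yi in zip(x, y)]
print(result)  # [3.0, 3.0, 3.0]
```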
In doing so, you tell TensorFlow to run the computation on the TPU. After that, you can just run it — again, just like normal TensorFlow — and it just works. It's very exciting.

So, to review what just happened: we logged into our Google Compute Engine VM, ran the TensorFlow interactive interpreter, connected to the Cloud TPU, ran the computation, and fetched the results back to our VM. This one-VM, one-Cloud-TPU setup is only the beginning. We designed Cloud TPUs to integrate with all the platforms available in Google Cloud. So if you have a model where part of it runs really, really well on brand-new GPUs and part of it runs really, really well on Cloud TPUs, you can create your own heterogeneous cluster and assign the right parts of the model to the right compute nodes.

But one thing we've learned at Google is that as you push the boundaries of computational power, as you push extreme scale, you need to optimize the entire system from end to end. What I mean is, you need to take into account not just the number of disks you use to feed your data quickly into the accelerator pods, and not just the network that connects all of these distributed machines together — you also need a highly tuned software platform. To tell you about that, I'd like to tell you about some of the latest changes happening, still under active development, in core TensorFlow. Earlier in the presentation, we had a picture of the core TensorFlow platform; I've redrawn it here, adding a few extra boxes. I'm going to talk to you today about XLA, Estimators, and Datasets.

To start with the one closest to the bare metal, let's talk about XLA. TensorFlow is designed to go from research to production, and production means not just servers in a data center — it means going all the way from phones to exotic hardware compute devices like TPUs. To run efficiently and get extreme performance, you need a highly tuned software stack. As it turns out, in order to get optimized code, you need to bring your own compiler. XLA is TensorFlow's compiler; XLA stands for Accelerated Linear Algebra.
When you use it, you define your graph, and XLA will take a subset of it — or even the whole graph — run a huge number of optimization passes over it, and output machine code appropriate for the hardware you have. Take an algorithm that needs extreme performance on image recognition: XLA will take a subset of that graph, reorder the computations, and output the assembly in binary form to run efficiently on CPUs or GPUs. But XLA was really designed for the explosion in exotic hardware that we see in machine learning. XLA has a pluggable back end, so you can target your own hardware; TPUs are just a custom back end for XLA. In doing so, we take advantage of the optimizations we do for CPUs and GPUs and are able to apply them to TPUs as well.

But XLA runs underneath the hood, so it's sometimes a little hard to see. If you'd like to see XLA in action, you can turn on tracing when you run your TensorFlow computation, and if you print out the trace information stored in the run metadata object, you might see something like this: in one of the node stats, we see this funny _XlaLaunch operation. If you're not using XLA, the node stats will be your TensorFlow graph nodes; but when you use XLA, the XLA compiler generates an optimized binary, and the _XlaLaunch operation is the time spent actually running that binary on your compute devices.
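One optimization a compiler like XLA performs is operation fusion: combining several elementwise operations into one loop so the intermediate buffer is never materialized. Here is a conceptual sketch of that idea in plain Python — an illustration of the general technique, not XLA itself:

```python
def unfused(alpha, x, y):
    """Two passes over the data: materializes the intermediate list alpha*x."""
    tmp = [alpha * xi for xi in x]              # intermediate buffer
    return [ti + yi for ti, yi in zip(tmp, y)]

def fused(alpha, x, y):
    """One pass: a fused compiler emits a single loop, no intermediate buffer."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

x, y = [1.0, 2.0], [3.0, 4.0]
print(fused(2.0, x, y))  # [5.0, 8.0] -- same result, less memory traffic
```

The two versions compute identical results; the fused form simply avoids a round trip through memory, which is exactly the kind of win that matters on bandwidth-limited accelerators.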
One thing that's interesting is that the node names don't necessarily correspond to your TensorFlow graph, and the reason is that XLA performs whole-program optimization. If you've ever used an optimizing compiler and looked at the assembly it outputs, you'll notice that it doesn't necessarily match up with the statements you wrote in, for example, C. That's because the optimizing compiler is doing a number of tricks underneath the hood to get optimal machine performance, and XLA is doing that not just at the function or operation level — it's scheduling whole programs to run efficiently.

So, we've talked about XLA, and how you run on these new hardware platforms across the whole range of devices we support. Now I want to talk about how you actually write your algorithms so that they can run across all of these platforms, and for that we have Estimators. The code I'm showing you is copied and pasted directly from our source repository, and it runs today on CPUs and GPUs; I'm going to show you how to modify it to run on TPUs. When you use Estimators, you define your machine learning algorithm using a model function, and this is the model definition for a simple convolutional neural network that recognizes handwritten digits — the MNIST dataset, for those of you who are familiar with it. I want to draw your attention to the final three lines; to see them a little better, we've made them bigger. Normally, when you train your machine learning algorithm, you use a gradient descent optimizer, or something similar, to optimize the weights. This is the training step of your machine learning algorithm. To get it to run on TPUs, you just wrap your gradient descent optimizer with a TPU cross-shard optimizer. TPUs have a number of parallel compute elements that all work together, and to aggregate the learned weights, you need this cross-shard optimizer. That is the only change required, when you're using Estimators, to run this example on CPUs, GPUs, and TPUs.

Now, how do you actually run it? You've got to define your main function, and with Estimators it's very simple. Normally you just define the Estimator based on the model function and call train. To get this to work on TPUs, you use a TPUEstimator and a TPU RunConfig, and that's it.
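The cross-shard aggregation Brennan describes amounts to averaging the gradients computed independently on each parallel compute element before applying them. This is a conceptual stand-in for what the TPU cross-shard optimizer does, sketched in plain Python — a simplified illustration, not the TensorFlow implementation:

```python
def cross_shard_average(shard_grads):
    """Average per-shard gradient vectors into one shared update (sketch)."""
    n = len(shard_grads)
    return [sum(g[i] for g in shard_grads) / n
            for i in range(len(shard_grads[0]))]

# Four shards each computed a gradient for two weights on their slice of data.
grads = [[0.0, 4.0], [2.0, 0.0], [4.0, 8.0], [2.0, 4.0]]
print(cross_shard_average(grads))  # [2.0, 4.0]
```

Each shard trains on a different slice of the batch; averaging keeps all shards' weights in sync, which is why this one wrapper is the only change needed in the model function.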
Those are all the changes required for this model to work on CPUs, GPUs, and TPUs. Some more complicated models may require additional code changes, but the TensorFlow team is constantly working to reduce the amount of code change required as you go all the way to production on all of these different platforms. If you'd like to hear more about the high-level APIs, I encourage you to check out the talk in the next hour about TensorFlow with the high-level APIs.

So, we've talked about how to define your machine learning algorithms, and we've talked about how they execute on hardware devices, but we're missing one key thing. The machine learning triad that has resulted in this explosion of progress has been fueled by three components: high-performance computation, advances in algorithms, and input data. Without input data, all the fancy acceleration you want is not going to generate anything more than random numbers. Cloud TPUs are designed to integrate with the rest of the Google Cloud platform to make it easy to load data in a high-performance way.
You can load data from your persistent disk or your ephemeral local SSD on Google Compute Engine, but you can also stream data directly from Google Cloud Storage into your Cloud TPUs, and for those of you doing advanced reinforcement learning or other such things, you can run simulators on Google Compute Engine and integrate them with your machine learning model running on your Cloud TPUs.

As it turns out, we at Google have been deploying these high-performance accelerators alongside CPUs and GPUs for a while, and we find that when you put them into production, you find new bottlenecks. I think this is best explained with an example. If you've optimized your software platform, you use techniques to overlap computation, and that's what we've done here: for this image model, we run the input processing on the CPU for the next training step while our accelerator is running the model and learning on the current training step. But if you all of a sudden move this over to an accelerator that's 10 times faster, your training step time is not going to decrease anywhere close to what you would hope.

To help you feed data in a flexible but high-performance manner, we're introducing a new API in TensorFlow 1.2 called Datasets. The first release candidate of TensorFlow 1.2 is coming out as we speak, and we encourage you all to try out these new APIs and kick the tires. Although they're still a work in progress, we think they're very exciting, and I'd like to show you a little of what the API looks like. Here is a code snippet where we load a set of data from a TFRecord file. We then repeat the dataset as we train our algorithm. We perform some parsing and distortions using the parser function, and we can do this in parallel, using the num-threads parameter of a parallel map. Finally, we batch it up into training batches that we then feed to the accelerator.

This new Datasets API borrows from functional programming and other advanced concepts to make it easy to define high-performance input pipelines that run well. We've now talked about XLA, Estimators, and Datasets, and these are just some of the new changes coming in TensorFlow 1.2 that help make these high-performance accelerators — be they CPUs, GPUs, or TPUs — really sing.

Now, on top of this base of awesome compute and infrastructure, we see some really phenomenal learning and research work, and for that I'd like to turn it back over to Zak Stone to talk about research on the frontier.

>> ZAK STONE: Thanks, Brennan. We've witnessed extraordinary breakthroughs in machine learning research over the past several years, and while there are many types of machine learning, many of these breakthroughs have been concentrated at the edges of what's computationally possible. We believe that much larger amounts of computation are going to unlock new discoveries and new breakthroughs, with larger and more complex models than researchers can practically experiment with today. The photo behind me represents the machine learning research community. The mountain represents the breakthroughs yet to be discovered, and the gate is the computational limits that are holding researchers back. Our goal is to break open the gate and help researchers climb that mountain to find new discoveries. I'm thrilled that the TensorFlow Research Cloud was announced yesterday to support open machine learning research worldwide and to help researchers who currently don't have enough access to computation experiment with larger and more complex models. We've dedicated a thousand of these new Cloud TPUs to accelerating open machine learning research.
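The repeat → parallel-map → batch pipeline that Brennan walked through a moment ago can be mimicked with plain Python generators. This is a conceptual sketch of the pattern only, not the tf.data API itself, and the parse function here is a made-up stand-in for real record parsing:

```python
import itertools

def repeat(records):
    """Cycle through the dataset indefinitely, as a repeated dataset does."""
    return itertools.cycle(records)

def parse(record):
    """Stand-in for per-record parsing/distortion (hypothetical parser)."""
    return record * 2

def batch(stream, size):
    """Group parsed records into fixed-size training batches."""
    while True:
        yield [next(stream) for _ in range(size)]

records = [1, 2, 3, 4]
pipeline = batch(map(parse, repeat(records)), size=3)
print(next(pipeline))  # [2, 4, 6]
print(next(pipeline))  # [8, 2, 4]
```

The real API adds the crucial part this sketch omits: running the parse step on multiple threads and prefetching, so the accelerator never waits on input.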
We're going to set up an application process to let people from all backgrounds and all fields of expertise propose projects to take advantage of this extra computing power. That's 180 petaflops all together, and we can't wait to see what people are going to do with it. What could those projects be? We can't wait to find out.

One example that was mentioned yesterday that I want to highlight is AutoML. It's a family of techniques for using machine learning models to develop new machine learning models. I won't go through the details here, but these are just two examples, from a linguistic problem and an image recognition problem, where we've used a machine learning model to generate the architectures you see here on the screen. The interesting thing is that both of these models look kind of complex and organic; they don't necessarily look like something a mathematician might have come up with by hand. Now, this kind of experimentation requires immense amounts of compute, but fortunately, with this step function to Cloud TPUs, and eventually to TPU pods, we want to shift the research mindset from scarcity to abundance, so you can start contemplating these kinds of techniques to search for new model architectures and open up new applications.

So, just to be clear: we have one underlying product in Google Cloud, which is the new Cloud TPU on Google Compute Engine, but we're offering the community access to it in two ways. For general use, we're setting up a Cloud TPU alpha program that lets businesses, startups, individuals, students, artists — anyone interested in this frontier of machine learning — sign up and try to get early access to these limited numbers of Cloud TPUs to figure out how to adapt them to their applications. But for those of you who are doing cutting-edge machine learning research, who are extending this frontier, we've set up the TensorFlow Research Cloud, and we're going to open up an application process to invite you in to take advantage of these thousand Cloud TPUs to do things that just aren't feasible today.

If you're interested in either of these programs, we encourage you to sign up now and tell us more about your compute needs, and we'll notify you as soon as more information is available about Cloud TPUs and the TensorFlow Research Cloud.

Before I go, I'd like to encourage you either to stick around for the next session, which will cover TensorFlow for non-experts and tell you more about the high-level APIs we described earlier, or to join us in the office hours section. And if you're worried about how to choose between the next session and office hours — don't worry, we've added another office hour after the next session, so you can do both. Thanks again for your attention. We're really excited about TensorFlow and Cloud TPUs, and we look forward to speaking with you in office hours.

(Applause)

10:30 a.m. CST, May 18, 2017

Google I/O — Fragment Tricks


Fragment Tricks — Stage 3

>> Welcome to Fragment Tricks. Today we're going to talk about a few effective ways to use fragment patterns in your apps. Some of these may seem a little basic, but they're also things that are going to help you factor things effectively and make sure you can keep your code clean. Stupid fragment tricks.

>> Yeah. Stupid fragment tricks. Now, let's start with a little history behind this, which kind of helps you understand where some of the Fragment APIs came from. As we started moving into large-screen devices in the Honeycomb era, we started to recognize a few specific patterns in the way that some

apps are put together, especially apps with one area devoted to navigation and another area devoted to content. When you put these two things together, you can imagine that on a small-screen device you might see the navigation as one screen that advances to a content screen, and on a larger-screen device you're able to show the same things side by side. We had a question of how do I write an app that works seamlessly across both? So the Fragment APIs were born. They were something you could use to factor your activity's components into two separate pieces that you could show differently based on different device configurations. This works really well for some applications, like Gmail, but if you've followed the development of app design over the past several years, you'll notice this sort of pattern doesn't fit a lot of things. However, it's still extremely useful for representing your application as a series of destinations, so even on a small-screen device, you might have an app that looks something like this. We've got our little application where we can order some microkitchen snacks around the Google campus, and for each one of these screens, you can perform a replace transaction to replace the destination you were looking at previously with the new one. A replace transaction will remove any fragment in the container and then add the new fragment. The nice thing about this is that you can place these transactions on the backstack, and we'll handle the back navigation for you. And the content and navigation pane separation isn't limited to just large screens. This can really help to keep your chrome stable in your application.
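A destination change via a replace transaction, as described here, might be sketched like this (the fragment class and container ID are hypothetical):

```kotlin
// Replace whatever is in the container with the new destination,
// and record the transaction so the Back button undoes it.
supportFragmentManager.beginTransaction()
    .replace(R.id.content_container, SnackListFragment())
    .addToBackStack(null) // pressing Back pops this transaction
    .commit()
```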
So any time you have things like app bars, nav drawers, bottom navigation bars, any of these things that you want to keep stable while performing consistent animations from one screen to another, this really lets you do it in a way that is very difficult to accomplish with activity-to-activity transitions, where you can't have that continuity as you navigate deeper into your app's hierarchy.
>> Now, let's talk a little about navigation flows. When I talk about a navigation flow, I'm talking about those step-by-step flows that you have in your application. So when you have a checkout, a sign-up, or a setup wizard, users are going to go step by step through the flow, and they may want to wander back using the back button and then go forward again. Then when they're all done, you want to be all done with that flow. You wouldn't want the user to be able to go back through the checkout process again with the back button; that would kind of suck. We're going to focus on our app, the microkitchen app, which we've been trying to sell to Googlers to order from the free microkitchens.
>> It hasn't been working very well.
>> It has not. But we have new customers coming this summer, the interns, so we're pretty sure they will partake in our great free microkitchen service. When the user comes in, they're going to click on the Cart button, and what we're going to do is remove the little cart fragment at the bottom, and then replace that big fragment at the top with our new fragment, a state that we're going to label Cart. Not the fragment; we're labeling the state. I'm going to talk a little bit later about why we're giving it a name on the backstack.
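That named back-stack state might be set up like this (the fragment names are hypothetical; the string passed to addToBackStack is the label for the state):

```kotlin
// Remove the mini cart, replace the main content, and name this
// back-stack state "cart" so we can pop back to it (or through it) later.
supportFragmentManager.beginTransaction()
    .remove(miniCartFragment)
    .replace(R.id.content_container, CartFragment())
    .addToBackStack("cart")
    .commit()
```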
And then when they hit Checkout, they can choose the address they want to ship to, and we just replace that fragment with the address-selection one. Then they can choose payment, and we replace again with a payment fragment, and again we add it to the backstack. We don't need a name for this backstack entry, but we'll talk about that in a little bit. The user can go back through the backstack and choose a different address if they want by hitting the back button. As developers, we don't have to do any other work; the backstack just pops that fragment right back into our state, and it's great. And then the user can go back through, choose a payment, and then confirm their purchase. Now, on the Confirm Purchase screen, we're going to do a couple of things all at once. We're going to pop the back stack all the way back to that original Cart state that we named, and because we pass the inclusive flag, it pops that Cart state as well; it pops all the states back. And at the same time, we add a new transaction that replaces that state with our thank-you page, because we want to thank them for giving us all their money. Now, on this one, we have a

little different thing going on. The OK button doesn't just create a new transaction; it pops the backstack as well. Now, no matter what the user does, the right thing happens. We don't add a new transaction on top of that state, because we don't want that thank-you page to come back again if they pop the backstack. Instead, they come right back to where they can buy more stuff from us. So you can see some of the keys to backstack management here. Always maintain your backstack going forward: it's so much easier to manage your backstack if you choose the direction your user is going to go as you navigate forward, rather than deciding what to do at the moment the user presses the back button. That's a lot harder to manage. Now, to take advantage of this kind of thing, sometimes you need to do some synthetic backstack management. That means if you have an external link into your application, a deeply nested thing, say a particular product screen, then when they hit the back button you don't want them to go to the main screen. You want them to go into the category, perhaps.
>> Or even from a notification.
>> That's a great example, from a notification. Or maybe from search in the application; you might want the same thing. So what do we do? Well, we just execute a whole bunch of transactions all at the same time. Just build up the stack for them, boom, boom, boom, boom, and execute them.
>> ADAM POWELL: Hang on, George. If I make repeated transactions, that means on each of those transactions I'm going to start and stop those fragments as I execute each one in turn, and that can be really expensive, right?
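Built naively, that synthetic back stack is just a series of transactions committed in a row (the fragment classes here are hypothetical stand-ins for the deep link's intermediate screens):

```kotlin
// Deep link straight to a detail screen, but synthesize the back stack
// the user would have built by navigating there manually.
supportFragmentManager.beginTransaction()
    .replace(R.id.content_container, CategoryFragment())
    .addToBackStack(null)
    .commit()
supportFragmentManager.beginTransaction()
    .replace(R.id.content_container, SnackDetailFragment())
    .addToBackStack(null)
    .commit()
```

Pressing Back from the detail screen now lands on the category screen, as if the user had drilled down themselves.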
So how do I maintain that back-stack state without doing a lot of heavyweight work as I start and stop the fragments along the way? That's a really big problem. You have to create the views, tear them down again, inflation and all; it seems like a lot of unnecessary work.
>> GEORGE MOUNT: It is a lot of unnecessary work. Sorry, guys. You want to fix that?
>> ADAM POWELL: Sure.
>> GEORGE MOUNT: All right. Thanks, Adam. So now we have this new thing called setReorderingAllowed. What it does is allow all of the execution to happen at once without churning your fragment state. At the very end, we bring up all the fragments that need to be brought up and tear down all the fragments that need to be torn down, and your fragments don't have to go through all of that 'I got added, I got removed, I got added, I got removed,' so we can optimize this for you. But you have to watch out, because if you expected a fragment to exist that was optimized out, if you expected it to go through its creation, it might not have done that. So use this in your applications; it's great to use, but expect some slightly different behavior than before. Now, you might have noticed in our application that as the user clicked through the checkout screens, it was just pop, pop, pop: the screens changed instantaneously, and that's not very pleasant. So what we can do is add transitions. The easiest ones to use are the basic transitions that come with fragments, and there are three options. This is done on the transaction: you call setTransition.
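On a transaction, those two calls might look like this sketch (fragment and container names are hypothetical):

```kotlin
supportFragmentManager.beginTransaction()
    // Let the framework batch and reorder operations in this commit.
    .setReorderingAllowed(true)
    .replace(R.id.content_container, AddressFragment())
    // One of the built-in transition styles.
    .setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN)
    .addToBackStack(null)
    .commit()
```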
And the basic transitions are fade, open, and close. From this it's really hard to see the differences, but the fade is just a simple cross-fade, and the open and close also have a fade and zoom. So play with them a little, see what works well in your application and what looks best; they provide a nice, subtle effect for your transitions.
>> ADAM POWELL: What if you want to do something a little bit more in keeping with my own app's design?
>> GEORGE MOUNT: Yeah, if you want something a little more custom, then we can use animations. Animations in this case are the view animations that allow you to change scale, rotation, translation, and alpha, so fading. This is only in the support library: you can set those for the view coming in, the fragment coming in, and on a separate one

for the fragment being popped, or I'm sorry, being removed, and also the same things for the pop as well, the ones being added and removed, so you can have different animations for each of those. And you can get something like this, which is very nice, a slide effect, which you couldn't do with the basic transitions. Now, if you're working with the framework fragments, you can do the same thing with animators, and they provide even more benefit because you can animate any property on the view. That means you can have some really custom animations, whatever you want to do. It's great. And what's better is that you can now also do that in the support library. (Applause) We've also added the ability, for the framework, to set your animations or animators from a style. Now, a lot of you have been using activity transitions, and you want that to work with fragments as well. Activity transitions give you this great ability to have a shared element change from one view to another, in this case from one fragment to another, and it works from Lollipop on; it's very useful. We added the ability to do that for fragments. Instead of doing this on the transition, you do it on the fragment itself. You can set the transition to run on the views that are coming in, and this is for all the views that are not the shared element: that's the enter transition. And the shared element enter transition, this is the one you do on the view that is moving. This is our shared element, in this case, I don't know what it was, the almonds maybe. And here this is a combination of change bounds, change transform, and change image transform, too many things to fit on the slide, but that's what it is here. And then in the transaction, we add the shared element itself.
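Put together, the fragment-side setup and the transaction might look roughly like this (the fragment instance, image view, and transition name are illustrative, not the talk's exact slide code):

```kotlin
// On the incoming fragment: a transition for the non-shared views,
// and a combined transition for the shared element itself.
detailFragment.enterTransition = Fade()
detailFragment.sharedElementEnterTransition = TransitionSet().apply {
    addTransition(ChangeBounds())
    addTransition(ChangeTransform())
    addTransition(ChangeImageTransform())
}

supportFragmentManager.beginTransaction()
    .setReorderingAllowed(true)
    .replace(R.id.content_container, detailFragment)
    // The view in the current fragment, and the transitionName of the
    // matching view in the fragment being pulled in.
    .addSharedElement(snackImageView, "snack_image")
    .addToBackStack(null)
    .commit()
```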
And the shared element is the view that's in that fragment, and then we have a target name. This is a shared-element transition name, the name you've given the transition element in the fragment that has not been inflated yet; in this case, it's in the MyFragment that's being pulled in. And then we have our transition, which is great. But something's wrong. What? Can you see it? Everyone raise your hand if you can see it. What's the problem with this? Okay, a lot of you can see the problem. The transition is only working one way. Right. It works great getting into the detail view, but when we come back to that main view with all of those other elements, you're not seeing it, and why is that? Well, that main view is a RecyclerView, and a RecyclerView lays out its views after the setAdapter call is made. So the transition comes in and says, hey, I want that shared element, and it's not there yet. So what does it do? It says, oh, give up, fade out. In activity transitions, what do we do? Well, it's great, we have a thing called postponing the transition: postponeEnterTransition, and when the views are ready, we call startPostponedEnterTransition. It gives you the ability to wait for the views, come on views, come on views (whistling), and when they're all ready, then you say, go! So we want to do this for fragments as well, and we can, but there is one extra thing you need to do. You absolutely have to call setReorderingAllowed, because for a short time while you're postponing, both fragments are there. They're both active, and that's not what you expect; you expected one to be removed and the other to be added. So the views for both fragments are in the created state, and that's going to be a little weird for you.
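The fragment version of that postpone-and-release pattern might be sketched like this (a minimal sketch, assuming the shared element lives inside a RecyclerView; `doOnPreDraw` is the androidx.core KTX helper):

```kotlin
class SnackListFragment : Fragment() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The shared element is in a RecyclerView that hasn't been
        // laid out yet, so hold the transition back.
        postponeEnterTransition()
    }

    fun bindList(recycler: RecyclerView, adapter: RecyclerView.Adapter<*>) {
        recycler.adapter = adapter
        // Release the transition once layout has actually happened.
        recycler.doOnPreDraw { startPostponedEnterTransition() }
    }
}
```

Remember that the transaction which got us here must also have called `setReorderingAllowed(true)`.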
So we want to make sure you know when you're getting into this situation. Let's see how it's done in our application. In my fragment,

you can do this any time before onCreateView. If you want to do it in onCreate, whatever you want to do. You call postponeEnterTransition, because I know at this point I'm going to have a RecyclerView, and I know I need to worry about the transitions there. And because I love data binding, some of you might know that, I love data binding, and we have a few others here too. That's good too, thank you, Adam. If you haven't seen that talk, you should go back and watch it. We're using it here. So here I'm setting the adapter for the RecyclerView. Now I'm setting the adapter, and I have to wait for the layout. So I'm waiting for the layout, and in the listener I call startPostponedEnterTransition. Now my transition is ready and it will just go on ahead and, I think I skipped a slide. (Laughing) Okay. So it works. (Laughing)
>> ADAM POWELL: All right. Moving along to something completely different: one of the other features we get a lot of questions about is the setRetainInstance method on fragments. You can retain an instance across activity destruction and recreation, caused by something like a configuration change. This means that the object instance itself of the fragment you've marked this way is transferred across the parent's recreation, so anything that you put there is moved along with it, the full object; you don't have to serialize it into a Parcelable or into saved instance state. The important thing to remember if you're doing this is that it doesn't happen if the process is recreated. Your objects aren't there, so you need to be a little careful about this. You need to make sure you can still restore from instance state if you have to, even though in the common case you may still have the full objects that you created in the first place.
This is a replacement for the old onRetainNonConfigurationInstance method that was on Activity; in fact, that same mechanism is used to implement the fragment version of this. But what's a little more interesting is that this mechanism is right now the backbone of the new ViewModel component that we've been talking about earlier: the view models are actually saved within a retained-instance fragment to shuffle them across from one activity to the next. And this just kind of goes to show the types of infrastructure you can build on top of a retained-instance fragment. It's something a little more on the abstract end of things, so we end up getting questions about, hey, what's this good for? Well, this is a pretty good example. Just like we talked about with view models, there are a few things that you really need to avoid. In this case, don't put views in a retained-instance fragment unless you want to do a lot of manual bookkeeping. Technically you can kind of get away with it if you're really careful, but you have to make sure you release all of your references in onDestroyView of that particular fragment, and so on and so forth; it really ends up being best to just avoid it. We'll show a few more patterns later in the talk on what you can do instead. Same with contexts, specifically activity contexts. We all know why you don't want to save a reference to an activity longer than the lifetime of that activity itself, but the thing that really tends to catch people is callback references: if you register a listener or some kind of callback with a fragment that is going to outlive the container, it's really easy to accidentally close over that context and have the activity outlast its original host. So, child fragments are something that hit a lot of people hard a few years ago, because, frankly, we had a lot of bugs around them.
They were created to solve a specific problem, and that was dependency management within a particular activity. So, consider this case. You've got a fragment that has a ViewPager, pretty common, right? The ViewPager uses a FragmentPagerAdapter, because that's a pretty easy way to use a ViewPager, and then you remove the pager fragment. Now we have a problem. What actually happens in that case? You've got all the fragments added by the pager adapter, but the host fragment, the conceptual host fragment that had the ViewPager itself, was removed, and now something has to be in charge of removing the individual page fragments. This was something that really showed that a single collection of fragments is insufficient. In fact, when the Fragment APIs were first rolling out to a lot of internal developers around Honeycomb, this was one of the first questions that we got, and it took us quite a while to come back and address it: what happens when you do have dependencies between fragments? And part of the reason why this was such a pain is because we didn't have any ordering guarantees around fragments being created, and what's worse, a lot of times this would only come up much, much later in the

process. You can control the order in which fragments are added as you run the initial transactions that build up that state, but when your process dies and we restore those fragments from instance state later, the order of that recreation is undefined. Depending on what may have happened throughout the lifetime of your activity, just as an artifact of internal bookkeeping, things could be reordered in terms of which one gets onCreate first, and so on, and it made it difficult to connect any sort of shared state. That sort of shared state is easier to handle with the ViewModel system that we showed earlier, but at the time we really didn't have a good solution to this problem. One of the other things that was kind of a pain is the deferred transaction commits that were common up until we added the commitNow method on FragmentTransaction. The reason it was done that way in the first place was to avoid re-entrant calls: you didn't have to worry about one particular transaction being half executed and then starting another transaction as a result of it. But this really did have a cost, and I think that if you're in this room right now, you've probably experienced some of the costs of this. Raise your hand if you've called executePendingTransactions to fix a bug. Yeah, that's a lot of hands. So this is one of those things that ends up being really difficult to work with. Child fragments work out really well because they solve a lot of these particular issues. It's a separate FragmentManager, so you don't get the re-entrant cases, no matter what.
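A child-fragment commit along these lines might look like this sketch (the page fragment and container ID are hypothetical):

```kotlin
// Inside a parent fragment: the child FragmentManager is its own
// self-contained unit, so a synchronous commit here can't leave the
// parent's own transaction processing half-applied.
childFragmentManager.beginTransaction()
    .replace(R.id.page_container, PageFragment())
    .commitNow()
```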
If you use commitNow on a bunch of these things, you don't have to worry about your parent being in a potentially inconsistent state as you're doing it, because you're working within your own local unit. All of these things are added and removed as a unit, which means it solves the ViewPager problem: if you remove the containing fragment, you don't have to care about the implementation details of that fragment. This kind of seems like one of those "duh" things in hindsight. And it's guaranteed to be restored as a unit, which is super important, because now you can rely on when things are actually restored; you don't have to worry about those ordering issues that are out of your control. But perhaps most importantly, the implementation details, again, don't leak into your surrounding containers. So, many of you may have run into particular issues around child fragments. Please go and try version 26; we have fixed more and more issues around this, specifically around inflating child fragments. This is one of my favorite uses of this. We talked earlier about using fragments as very coarse-grained destinations within your application, something that takes up an entire UI pane of your app, but nesting within other fragments, even ones you inflate from one of these coarser-grained navigation destinations, just kind of works. You don't have to worry about taking care of all these other lifecycle issues, and it lets you build smarter, encapsulated components. We always get this question, too: hey, do I build a ViewGroup or a fragment?
And there is a lot of ink spilled and keyboards smashed making these particular arguments online.
>> GEORGE MOUNT: I never know what to do with this, building a ViewGroup or a fragment. I always tell people, just use ViewGroups.
>> ADAM POWELL: Exactly. So, you know, you said crossing the streams was bad, but for this the rule of thumb is essentially as follows. Views should only be responsible for displaying information and publishing direct user interaction events. These end up being very low-level events: a button was clicked, the user scrolled something; or they're responsible for drawing text or other parts of the user interaction. Whereas fragments integrate with the surrounding lifecycle and may be aware of other app components. This is really what gives context to everything you're doing in your UI: you might bind to a service, you might communicate with the data model, perform a database query, and so on and so forth. So, you should never use a fragment when a view will do, but you also want to make sure you're not adding outside dependencies to your views. That's definitely a code smell, if you ever find yourself doing something like trying to bind to a service from a view implementation, or trying to make a network call, or, again, trying to integrate with anything outside the realm of that one individual view. But that means there is a hole. It means you can't build something as simple as a completely self-contained "like" button that you can stick in the middle of one of your layouts and set and forget: give it some parameters and go. Well, this is one of the reasons why you can inflate fragments. In this case, we're

showing that you can define parameters to these that you can place inline. We can go ahead and inflate arguments in a way that allows you to do this without a lot of very heavyweight integration. You don't have to go and find it and configure it separately; you can just do it inline. One of the things that made this really difficult in the past was, again, just an artifact of history: you couldn't actually set a fragment's arguments bundle after the fragment had been added to a FragmentManager. We've now relaxed that, and fragment arguments can be changed any time the state isn't saved, including during inflation. There is a case where, after you rotate and so on, we're reconnecting inflated fragments, which is again something we do automatically, and you can run into the situation where we've already restored that fragment and we're trying to hook it back up again, but it's already added. That meant you couldn't do the natural thing, which is to represent all of the parameters you inflate as arguments in the bundle, so that you have, basically, a single source of truth for all of the configuration parameters of that fragment. Well, now you can.
>> GEORGE MOUNT: Wait a second here. Go back. This is not a state-safe thing.
>> ADAM POWELL: Oh, yeah. I don't know where this comes from, but people apparently try to commit transactions when the state is already saved.
>> GEORGE MOUNT: Why would they do that?
>> ADAM POWELL: I don't know. Because they can't tell, maybe. So we added a simple getter for this, so you know when it happens, and it makes writing lifecycle components that make sure you don't try to commit fragment transactions when it's not valid to do so a heck of a lot easier.
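A guard using that getter, FragmentManager.isStateSaved, might look like this sketch (fragment and container names hypothetical):

```kotlin
// Only commit if the FragmentManager's state hasn't been saved yet;
// otherwise defer or drop the transaction instead of crashing.
fun showCart(fm: FragmentManager) {
    if (!fm.isStateSaved) {
        fm.beginTransaction()
            .replace(R.id.content_container, CartFragment())
            .addToBackStack("cart")
            .commit()
    }
}
```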
Thank you. So again, part of what we're trying to do here is create a much better layered infrastructure. We've traditionally hidden a lot of these internals in some of these Android components, such that only the internal components are twiddling them, which means that as soon as you have a more complicated case to handle, we've made it very difficult for you. So we've tried to open a lot more of those things up, make them a little easier to inspect from your own code, and deal with the cases that arise that we didn't think of. So, in this case, the pattern that we really want to encourage is mapping fragment arguments to attributes for those UI fragments. Again, this gives you kind of a single source of truth. So let's go ahead to an example of this. In this case, we have a few Kotlin-based extensions that make this a little easier to handle. We have one utility method that is just withStyledAttributes; many of you have dealt with inflating attributes in views, and you know you have to get the typed array and recycle it afterwards, and it's easy to wrap that up inside some extension functions. Similarly, we have this simple little thing.
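The two helpers being described might be sketched roughly like this; these are assumed shapes, not the talk's actual slide code (a version of withStyledAttributes also ships in androidx.core KTX):

```kotlin
// Run a block against a TypedArray and always recycle it afterwards.
inline fun Context.withStyledAttributes(
    attrs: AttributeSet?,
    styleable: IntArray,
    block: TypedArray.() -> Unit
) {
    val a = obtainStyledAttributes(attrs, styleable)
    try { a.block() } finally { a.recycle() }
}

// Reuse the fragment's existing arguments bundle, or create one.
fun Fragment.ensureArguments(): Bundle =
    arguments ?: Bundle().also { arguments = it }
```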
Hey, is there a bundle there for arguments already? If so, reuse it; otherwise create a new one. These are the sorts of things that really add to the Fragment API: all of the things that are a pain in the neck to do with fragments, you can now make simple extensions for that were really hard to do before. For example, one of my favorite features for something like this is using property delegates to deal with arguments; you can wrap even more of this up and basically treat them as normal properties on your fragment objects. So, putting a bunch of this together: fragments help you maintain more of a content and chrome separation. They give you the ability to keep your content pages as fully encapsulated items without disturbing the rest of the UI around them. You get richer transitions. You get better shared components, because you can reuse a lot of these things without having to reinitialize them all separately. And you can better encapsulate your dependencies, so you don't have to leak all of these things to the surrounding host. Thank you very much for coming. Anyone who has questions, please come to one of the mics in the center aisles and we'll be glad to take them. (Applause)
>> AUDIENCE MEMBER: Hello. Thank you for the great talk. My question is about the first part of the presentation, for example, the part where you talked about managing the back stack. To be honest, I have not been using fragments a lot recently, and I am doing the same thing with activities, you

know, managing the back stack, so can you compare these two approaches a little bit? Why should we use fragments for that now and not activities?
>> ADAM POWELL: Making sure you have any persistent chrome is obviously one part of it. But one of the other things that is convenient when you use fragments is the ability to pop several layers of the stack at once. We were able to pop the entirety of the checkout flow as one single unit rather than having to, say, return a result to each prior fragment to say, yep, I'm finished and you should finish too, yep, I'm finished and you should finish too. It makes it easier to handle those sorts of cases. Anything you want to add to that?
>> GEORGE MOUNT: You save resources when you're dealing with just a single activity, because when you have multiple activities they're all in memory while you're going through your whole application.
>> AUDIENCE MEMBER: Thank you.
>> You're welcome.
>> AUDIENCE MEMBER: Hi. I would like to ask about fragment transactions. Do you recommend using replace fragment instead of add fragment? The behavior is different, right, and the problem is that when you pop the back stack of fragments, there is no good way to figure out, say, when you try to update a toolbar or action bar. So I just wonder, what's the recommendation for managing, when you pop the stack from the fragment manager, how to maintain the changes to the toolbar? I started using replace fragment instead of add fragment mainly because of that, so I wonder if you have some recommendation on how to handle that situation?
>> ADAM POWELL: Yeah, I mean, if you're using the options menu from each of the fragments to change some of the options items in the toolbar, some of that is handled automatically for you. I believe there is still an API to change a title within the transaction itself; this is how we implemented some of the fragment breadcrumbs widget, which I think is still available in the source repository if you want to refer to it for changing things like the title. But in cases like that, what you can do is add a back stack changed listener to the fragment manager itself. You get a callback any time the back stack changes, and you can go and inspect the top-most element of the back stack and make some changes based on that. Alternatively, you can have the fragment publish a particular event, by changing a shared view model, for example, to change a title or something as it starts versus stops.
>> AUDIENCE MEMBER: Okay. Yeah. Thanks.
>> AUDIENCE MEMBER: So the child fragment manager has a back stack of its own currently, but by default it doesn't do anything when you press the back button. Have you given any thought to what you can do with the back stack there, or what you're supporting in terms of the back stack?
>> ADAM POWELL: I could not have planted a better question, and this is another thing that escaped our slides.
We went ahead and added a new method, which should be in v26, called setPrimaryNavigationFragment. What it does, and you can actually also fetch it back out, is delegate things like the back button down to a child fragment, and that will chain all the way down, as deep as you may want to go with it. An attempt to pop the back stack will propagate all the way down, and it won't be until the bottom-most fragment has nothing else to pop that the parent will handle the pop operation.
>> AUDIENCE MEMBER: Thanks.
>> AUDIENCE MEMBER: Thank you so much for the amazing talk. I did have a question about child fragments. In a previous attempt, I tried to inflate a map fragment inside a fragment, and it added it to the fragment manager of the parent fragment.
>> ADAM POWELL: So, when was "previously"? I'm curious.
>> AUDIENCE MEMBER: Previously was about three weeks ago.
>> ADAM POWELL: Three weeks ago? There are two fragment managers that a fragment has. One is its own fragment manager and one is the child fragment manager; there is a getFragmentManager and a getChildFragmentManager, so it depends what you're looking to do. But if you use the onCreateView method, then any fragment tags that creates should be added to the child fragment manager. If you've come across a case where that's not happening, please see me after, because we want to fix it.
>> GEORGE MOUNT: We will be

in the sandbox. If we don't get to questions, we'll be in the sandbox in Building C. Yeah. Definitely.
>> AUDIENCE MEMBER: Thanks for an amazing talk, first. My question is more of a best practice question. I have this big fragment that, when a user pushes a button, runs a transaction and you get a child fragment in the middle of the big fragment. Then when the user pushes another button, I replace the whole thing to get to a subscreen, right? And as soon as I pop the subscreen I get back to the main fragment. My question is, I'm not too sure what the best practice is to remove, or prevent restoring, the child fragment I just added when popping and coming back, because the child fragment manager automatically re-inflates and puts back the small child fragment, and I don't want that. So what I do currently is, in the view of my fragment, I remove my child fragments, and I don't know what to do there, so I'm not too sure how to prevent getting a new fragment.
>> ADAM POWELL: Okay. I'm trying to understand the case a little better. So you have your overall host, either your activity or some similar grandparent fragment.
>> AUDIENCE MEMBER: Exactly. Yeah.
>> ADAM POWELL: You're removing a fragment within that host.
>> AUDIENCE MEMBER: Okay. I inflate a small fragment in my big one, right. Let's say it's just a view element or something.
>> ADAM POWELL: Sure.
>> AUDIENCE MEMBER: Now I change to another fragment, like, okay, I replace the whole thing, and pop and come back, and I don't want the small fragments to reappear, and I'm not too sure how to prevent the system from doing that.
>> GEORGE MOUNT: I would say during navigation, that's when you would remove that fragment from the child fragment manager, right?
>> AUDIENCE MEMBER: Exactly. Yeah.
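The removal George suggests, dropping the ephemeral child fragment as part of navigating away, might look like this sketch. The container id `R.id.child_container` is hypothetical; the fragment manager calls are the real support library API.

```java
// In the big fragment, before navigating away: drop the ephemeral child
// in its own transaction (not added to the back stack), so the child
// fragment manager has nothing to restore when the user pops back.
Fragment child = getChildFragmentManager().findFragmentById(R.id.child_container);
if (child != null) {
    getChildFragmentManager()
            .beginTransaction()
            .remove(child)
            .commit();
}
```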
>> GEORGE MOUNT: You can do that in navigation when you're going out.
>> AUDIENCE MEMBER: Exactly, but in that case usually I have to use commitAllowingStateLoss, because the fragment state is already saved, right?
>> ADAM POWELL: If you run it in onDestroy, yes. I guess I'm a little curious, because it sounds like you specifically don't want the smaller fragment to survive. You want it to be much more ephemeral, essentially?
>> AUDIENCE MEMBER: I want it to die, basically, when I leave the screen.
>> ADAM POWELL: Okay. I have to say, I haven't run into that case before. (Laughter)
>> GEORGE MOUNT: Maybe we can think about it a little bit and talk to you in the sandbox. What you may want to do is remove that as part of a separate transaction that's not part of the back stack as you navigate, but that implies some knowledge of that implementation detail, which it sounds like you probably want to avoid. Let's talk about that a little deeper in the sandbox if possible. Great. Last question?
>> AUDIENCE MEMBER: So, I was hoping you could shed a little light on something you said in the beginning of the talk. You mentioned building up the back stack when, let's say, you came in from a deep link, but some of the UI guidelines say there is a distinct difference between Up navigation and the Back button. When is the right time to actually generate the back stack for the user? Because generally, on deep link, the expected behavior is that the Up button should take you up a level in your app, but the Back button should exit, is what I understood.
>> ADAM POWELL: To the previous context. You're absolutely right. This is a wonderful question that I couldn't have planted in the audience, again. So yes, essentially, when you start one of your activities on the task of another application, say when you click a view link or web link where you're viewing that content within your app, then in those cases you don't want the synthetic back stack yet.
What you want to do in that case, since the Up button is going to jump you into your own task, is use the start activity flags: FLAG_ACTIVITY_NEW_TASK, FLAG_ACTIVITY_CLEAR_TASK, and FLAG_ACTIVITY_TASK_ON_HOME. Happy to write all of this down for you in the sandbox if you're interested. Then, as you jump into that new activity, you create that synthetic back stack, so that gets you deep into the navigation of your application, but back on your own task.
>> AUDIENCE MEMBER: Okay. So the key is to key off of the task.
>> GEORGE MOUNT: Yes, exactly.
>> AUDIENCE MEMBER: Awesome. Thanks.
>> ADAM POWELL: Okay. I think that's about all of the time we've got. We'll be in the sandbox area if anyone else wants to ask some questions, and thank you very much.
(Applause)
(Session completed at 11:10 a.m. CST)
>> Thank you for joining this

session. We will assist you to go through the registered exits to make room for the next session. If you registered for the next session in this room, we ask that you please clear the room and return to the registration line outside. Thank you.
(Session completed at 11:10 a.m. Pacific)

Android Studio

May 18, 2017

San Jose, California

12:30 p.m.

Chromium

>> DAVE SMITH: Good afternoon, everyone. How's it going so far? Come on, give me something. There we go. Thank you so much for joining us. My name is Dave Smith, and we are here to talk to you today about how you can easily get started developing for Android Things using a tool you probably already know, which is Android Studio. So let's talk a little bit about Android Things for a minute. It is an extension of the Android platform to a brand new form factor, in this case for embedded and IoT. This is something we have done in the past. Android Things is just yet another form factor where we are bringing all the power of the Android platform to this new form factor. This is a great thing for embedded and IoT specifically, because it brings Android's best-in-class tools to a space that traditionally hasn't had well integrated tools. They are separated; there is no real well integrated experience in a whole lot of cases, and that's what makes Android Things and Android Studio such a powerful system. You can use these tools to develop, debug, and deploy your applications to your devices for development and then again into production. Basically, it makes embedded development as simple as mobile development has become. How many of you are Android developers in the room? How many of you use Android Studio roughly every day? Okay. Great. So this is awesome, because you guys already know this tool, and you know all of the amazing things it can do for you in terms of development and deployment. But just to give you some idea here: this means that now you can use the Android Studio features that you are already familiar with, instant run, UI layout builders, and all of these great tools, to build applications for embedded devices. You can leverage all the deep integrations that are already in Android Studio for Google services, deep integrations for things like Firebase as well as the Google Cloud Platform. You have this amazing step-through debugger where you can set watch points and execute arbitrary code expressions. But in addition, we also have all of these new profiling tools. You saw some of them just announced at the developer keynote: memory, object allocations, and tracking down issues in your program and performance problems.
So all of these tools that we have announced here at I/O, as well as the ones you are already familiar with from using Android Studio, you can apply those same tools to developing for embedded and IoT. All right. That's enough talking about what you probably already know. Let's talk about the new stuff. As you probably heard yesterday, we announced the new preview of Android Studio 3.0, and one of the new features is direct support for Android Things as a device form factor. Let's take a look at what we have added. If you are using the preview of Android Studio and you create a new project, Android Things is listed as one of those form factors that you can select when you build a new project. The interesting thing to note is that it is not mutually exclusive with phone. You can build an application that targets both phone and tablet. Maybe you want to build a companion application that goes along with what's running on Android Things, and you would like to share as much code as possible; you could actually build them into the same application project just using different entry points via different activities, and we will talk a little bit more about that. When I check that little Android Things box, what is being added to my project? What that does is, when it creates the new project, it automatically adds the Android Things support library. If you have been through our documentation, you have seen us telling you to manually add these things; now Android Studio is going to add them. The support library adds a bunch of newer APIs for interacting with low-level peripheral devices: PWM, GPIO, I2C, SPI. Maybe you have heard of those things, but essentially they are all interfaces that are used to connect low-level hardware.
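The dependency the wizard adds might look like the following build.gradle fragment; the coordinate matches the developer-preview era of the talk, but the exact version string here is illustrative.

```groovy
// build.gradle (module) -- added automatically by the Android Things
// template; version shown is illustrative of the 2017 developer preview.
dependencies {
    provided 'com.google.android.things:androidthings:0.4-devpreview'
}
```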
Now you can communicate with those inside your app; the support library allows you to do that, and Android Studio will target it. When you create a new project targeting Android Things, there is an activity dialog to add a new empty activity to your project. This is also important because an Android Things application must have at least one main entry point activity. To discuss that a little bit, let me walk you through what happens when Android Things starts up for the first time. Android Things is a little bit different from traditional Android, where there is a launcher application on the device and a user would choose a particular app and then launch it so that they can interact with it. That is not to say that Android Things doesn't have a graphical display, because it might. We have to somehow provide a mechanism so that Android can automatically launch whatever the primary application is that you have developed once the system fully boots. The way we do that is, once it is finished booting, there is an intermediary app built into the system, and that launcher is looking for a very specific activity intent on the system that it can launch as the first main application, ready to run whatever code you have developed. Now again, if you have done some Android Things development before, you might have seen this in our documentation, but essentially it means that you are creating an activity that has an intent filter that includes this IoT launcher category, and the IoT launcher app will look for an activity like that and launch it as the primary application on that device. You can launch other applications if you have multiple APKs, but it is sort of that single entry point that we require to get your app up and running. And again, when you use the new activity template in Android Studio, it is going to automatically add all of that boilerplate to the manifest for you, so that you don't have to remember to do that for that specific activity. All right. One more thing about Android Studio. I mentioned before that there is a possibility of having a display on an Android Things device, but it is not required. We say that displays are optional in Android Things. Because of that, there are some Android behaviors that work a little bit differently than traditional Android, and one of the common ones is APIs that would show a dialog to a user.
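The manifest boilerplate described above, as documented for the Android Things developer preview, amounts to an IOT_LAUNCHER intent filter on the single entry-point activity; the activity name here is illustrative.

```xml
<!-- Sketch of what the template adds for the entry-point activity. -->
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
    </intent-filter>
    <!-- Launched automatically at boot by the Android Things launcher -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.IOT_LAUNCHER"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
```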
One is runtime permissions, probably the most common. Now, the issue is, showing a runtime permissions dialog to request access to a particular API is not something we can necessarily do if there is no display. In Android Things, these dialogs do not display, and permissions are granted to applications by default, even if they are dangerous permissions. We say they are granted at install time, but it turns out there is a little bit of an implementation difference in here that some of you may have experienced if you have done the development so far. Those permissions are granted when the device first boots. On reboot, it will grant application permissions. There is no dialog, but during development this creates a bit of a gap. There is actually a period of time between when you install your app on the device the first time and when it may or may not have been rebooted, where that permission is not granted, and that can cause some friction when you are doing development. There are ways around that, by manually rebooting the device, or in some cases by manually installing the APK using the grant-permissions flag (adb install -g) to override all of this. In Android Studio 3.0, we do this for you. When you deploy your code out of Android Studio, we will automatically grant those permissions at install time the way you expect, so that you don't have to do these manual workarounds or reboot the device. All of this is now much smoother and much easier to work with now that we have full integration with Android Studio. Let's show you some of this stuff in practice. So I'm going to hand it over to Renato, who is going to show you a demo. Can we switch to the demo, please?
>> RENATO MANGINI DIAS: Are you ready for a live coding demo? (Applause.) I just launched Android Studio 3.0, which we just announced yesterday. You can download it right now. It is public. But it is on the canary channel.
So it is not stable; that means it is risky. I am going to start a new project, an Android Things project from scratch, using the support that we just added, and from there we are going to do something else. I will enable form factor support for Android Things, and I will add an empty activity, just like Dave showed. And I don't want a UI now, because I will actually run this app on this device, as you can see here, and this device has no UI, no traditional UI. So let's disable that. Now it is generating the project. I will explain a little bit about what this device is. This is one of the many boards that we support, and on top of that you can see what we call a hat, which is just a board with a bunch of sensors, buttons, and segment displays. We are going to play with that a little bit. Okay. Here we have the activity. If I switch to my Gradle view, you can see that the library is added automatically. So I'm ready to do something. What do you want me to do, Dave?
>> DAVE SMITH: I guess the first project that everyone does is a blinking light.
>> RENATO MANGINI DIAS: We have this notion of drivers. We don't want everyone to have to write one for every standard peripheral, sensor, or display, so we provide them, and we count on the community to provide more of those: high-level drivers for the sensors, so you don't have to deal with those low-level protocols when you want to blink a light. For this particular board we created a metadriver that contains every sensor that is on the board, so I will start by adding the dependency on that driver. Is it better? Okay. This library is published on jcenter. When I sync, it will add all the dependencies, and while it does that I will go back to my activity. Now I start by opening the LED. There are three LEDs here: a red, a green, and a blue, and by using this class from the driver I can open any of those. Open the red one? There is an exception, so I just handle it the usual way. And I set the value. I will also set the LED direction as out, so this is something that we send the signal to and not something that we collect the signal from. And I do that in a loop. I'm pretty sure that Dave has something to complain about there. We will talk about that later. Okay. If you are an Android developer, you can see what we are doing here.
We open up the LED, and we get into a loop, and we set the value every 300 milliseconds to an alternating value, and now I will run the app. Here are the devices I have. The only device I have connected is the i.MX7D, which is this board. If we cross our fingers, please, and all goes well, after some time the red LED will start blinking. (Applause.)
>> RENATO MANGINI DIAS: Awesome. I'm done.
>> DAVE SMITH: So it works. That's great. But you are a pretty experienced Android developer. I'm not real sure that putting an infinite while loop in onCreate is the best way to handle this. Do you think maybe we can fix that?
>> RENATO MANGINI DIAS: I knew he would complain about that. That's a very good point. Despite not having a UI, you have to follow the same rules of Android. You should not block the main thread. Why? Even if you don't have a UI that you need to refresh, there are keyboard events sent to your application, and you have sensor events. So we have a bunch of stuff. We should not block the main thread.
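The naive version described above might look like this sketch, assuming the Rainbow HAT contrib driver of that era; the method names are illustrative, and the blocking loop is deliberately the bug Dave critiques next.

```java
// Naive blink, as in the first demo: runs in onCreate and deliberately
// blocks the main thread (the very problem discussed next).
Gpio led = RainbowHat.openLedRed();                  // from the HAT metadriver
led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);  // output, not input
boolean on = false;
while (true) {
    on = !on;
    led.setValue(on);      // toggle the LED
    Thread.sleep(300);     // every 300 ms
}
```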

That's a main rule for Android, and that's something we are doing here. So let's fix that. There are multiple ways to do it. You can create a service. You can create a thread. I will do it the simplest way: I create a handler and attach that handler to the main thread, and add each setValue to the end of the looper of the main thread. It is going to look less complicated than it sounds.
>> DAVE SMITH: As Renato mentioned, we are ensuring that the main thread in Android remains free so that system events can still come in to your application, whether life cycle events or input events. It means, in general, if you have to do any regular polling of an input or something along those lines, you want to make sure you don't do that on the main thread either. We are scheduling an output, so the way we are doing this is fine. If we needed to poll a sensor or read some value from input, we might want to use a background thread. That could be a HandlerThread, an AsyncTask, or other things that suit your development workflow; they all work the way you would expect. But it is important to realize, when you are working with some of this hardware, it is still good to offload those regular, repeated, long-running operations off the main thread, even though it is not a network access that you might traditionally consider one of those blocking operations. Okay.
>> RENATO MANGINI DIAS: Okay. Good job.
>> DAVE SMITH: Sorry.
>> RENATO MANGINI DIAS: And now we have the code that supposedly should work. Again, let's cross our fingers. Actually, first let me go over it very quickly. I have the handler. The difference between now and before is that now I have a handler, and the handler does all the work that the loop was doing: run and reschedule. It does exactly the same thing.
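The fix Renato describes, a Handler on the main looper whose Runnable re-posts itself, might be sketched like this; the `led` field carries over from the earlier snippet and the names are illustrative.

```java
// The fix: each tick toggles the LED and re-posts itself, so the main
// thread stays free between ticks for lifecycle and input events.
private final Handler handler = new Handler();   // attached to the main looper
private boolean on = false;

private final Runnable blink = new Runnable() {
    @Override public void run() {
        try {
            on = !on;
            led.setValue(on);                    // same toggle as before
        } catch (IOException e) {
            Log.w(TAG, "LED write failed", e);
        }
        handler.postDelayed(blink, 300);         // reschedule at the end of the queue
    }
};

// kicked off once from onCreate:
handler.post(blink);
```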
However, at the end it posts itself at the end of the looper, so the looper can do whatever it has to do, like reading sensors or reading keys, processing any event that is coming in, and then it does another step and reschedules again. This is not specific to Android Things. Now, you won't know whether it is working or not, because it was blinking before, so I will change the LED to the green one, so you don't have to trust me. Done. (Applause.)
>> RENATO MANGINI DIAS: Awesome.
>> DAVE SMITH: That's good. It is good. I think we are following a good pattern.
>> RENATO MANGINI DIAS: Bye-bye.
>> DAVE SMITH: We work for Google. Can you get something more interesting going?
>> RENATO MANGINI DIAS: We can do that. As I mentioned, there is one sensor on this board, this tiny thing here. Can you see it right here? It is the BMP280. It is a temperature and ambient pressure sensor, and we can easily connect to it using the Rainbow HAT driver. So let's do it. And what I'm going to do is not only read from the sensor; I will also show the results, like the temperature, on this segment display here. Right?
>> DAVE SMITH: I think that's a better demo.
>> RENATO MANGINI DIAS: And how I'm going to do that is really easy.
>> DAVE SMITH: What you may have noticed, the reason for that, is that the driver that we have written for this specific peripheral, the Rainbow HAT, abstracts that away. Using these drivers is a much faster way of getting started without necessarily having to deal with all those low-level details, but these drivers are open source on GitHub, and we will mention a link to that a little bit later. You can use them as a reference to see how to build something, with these abstractions using them directly, and you can read the code. Are you done yet?
>> RENATO MANGINI DIAS: Yes. In onCreate I initiate the driver and a few default values, and here in the loop I will blink the LED, so you see that it is processing, and I will show the sensor value on the display. And that's it.
>> DAVE SMITH: Done. (Applause.) It is in Celsius; we would have to do some conversion. Back to the slides real quick. There we go. Awesome. So you have just seen how easy it is to get started very quickly using the new Android Studio to create a new project, add some code to it, and communicate with peripherals, and you have seen how easy it is in Java. You heard yesterday that Kotlin is a first-class citizen in Android Studio as part of the new preview. Can we use Kotlin to develop for Android Things? I don't see why not. A simple example of Java I have thrown up here gives an idea of what some of this code might look like if you are interacting with peripherals on the hat: communicating with the rainbow LED on the top, doing some basic setup to change the color. This code snippet is fairly straightforward. What if we were to take that sample and write it in Kotlin? We have dropped out a little more than ten lines of code, and you can see some of the things that are gone: the try-catch blocks are no longer there, the callback is much cleaner using a lambda, and even the initializers are much more expressive and easy to read. This code is a lot more concise and a lot more of a joy to even write.
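The Celsius aside, and the question that comes up in Q&A about how a raw sensor double reaches the four-character display, can be sketched in plain Java. The class and method names here are hypothetical helpers, not the display driver's actual API, which has its own overloads for string, double, and int.

```java
// Hypothetical helpers for preparing a BMP280 reading for a
// four-character segment display. Not the real driver API.
public class DisplayFormat {
    // Celsius to Fahrenheit, the conversion alluded to in the demo.
    public static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    // A string overload on a four-character display would simply
    // truncate anything longer than four characters.
    public static String forDisplay(double value) {
        String s = String.valueOf(value);
        return s.length() <= 4 ? s : s.substring(0, 4);
    }
}
```

This mirrors the behavior described in the Q&A: the string overload keeps only the first four characters of the rendered number.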
So what do you think, Renato? Can we show them Kotlin running on the Android Things board?
>> RENATO MANGINI DIAS: This was just launched. What we are going to do here is start a new project. You can support both in one Android Studio project; you can add Kotlin and Java at the same time in the same project. And so you know that I'm not cheating here, I will start a new project. I will add Kotlin support, obviously. Again, let's not do phone or tablet, let's do Android Things. Same activity. Same no-UI layout.
>> DAVE SMITH: So one of the things that you can see, if you haven't seen this demo yet, is that when you include Kotlin support, by default the main activity is created in Kotlin, so we are automatically in Kotlin already.
>> RENATO MANGINI DIAS: Yes. This is the main activity, and it is in Kotlin, and it does nothing. So now it is my job to do something with it. First I will add the dependency, if you guys remember, the same dependency. Our driver is written in Java. It is open source, and it is published as a Java AAR. And the fact that in Kotlin you can add the dependency and use it for free, with no extra steps, that's really amazing.
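The Kotlin gains Dave describes, no checked try-catch, lambdas, property syntax on the Java driver, might look like this sketch of the same blink logic; the driver calls are the Java API shown earlier and the names remain illustrative.

```kotlin
// The blink logic again, but in Kotlin: the Java driver's getValue /
// setValue pair becomes a property, and no checked exceptions apply.
val led: Gpio = RainbowHat.openLedRed()
val handler = Handler()

lateinit var blink: Runnable
blink = Runnable {
    led.value = !led.value            // toggle via property syntax
    handler.postDelayed(blink, 300)   // reschedule, as before
}
handler.post(blink)
```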

You know what, I just remembered something. I'm not a Kotlin developer.
>> DAVE SMITH: Is there something in the IDE that we can use to help you out?
>> RENATO MANGINI DIAS: Yes. Luckily I kept that class here. I add a little bit more correctness to it, and I will copy my class, Ctrl+C, Alt+Tab, select everything on the Kotlin side. This is the copied file, okay? And I will Ctrl+V. And boom. Here it is.
>> DAVE SMITH: Automatically converted into Kotlin. You are a Kotlin developer; you just didn't know it.
>> RENATO MANGINI DIAS: I just became a Kotlin developer. This is great. You don't trust me, right? Of course you don't. I wouldn't trust myself. Let's confirm that it is working. Currently I have the blue LED here, so let me change that to, what color do you want? Red? Okay, let's do red. It shows better on the TV. See, I'm running the Kotlin app, not the Java app. Launching. Boom. There we go. (Applause.)
>> DAVE SMITH: Nicely done, sir. Let's switch back to the slides, please. Okay. So we have given you a very quick tour today of the new features in Android Studio that are specific to Android Things, as well as a very simple walkthrough of how quick and easy it is to get up and running from zero to a working project using all the tools that are available here. A couple of things while you are here at the conference that I think you should check out: first of all, we are going to be in office hours immediately after this, so if we don't have an opportunity to answer your questions during Q and A, you can follow us over there and we can continue the discussion. We have a bunch of Android Things code labs in the code lab section, so check those out. Once you get home and you want to start doing some of this development for yourself, check out the Android Things documentation, which is on the Android developer website.
Download the new Android Studio canary, and join the IoT developers community; we and other folks from the team are on there answering questions, helping folks out, and watching you share your cool projects. Check that out as well. It looks like we have some time for questions. If you do have questions, feel free to come up to the mics. Otherwise, thank you very much for your time today. (Applause.) Questions? Okay. We have got one coming down.
>> AUDIENCE MEMBER: Hey. Unless I missed something, when you took the temperature from the sensor, there was no middle step before putting it into the display. I didn't see a cast to a string, or figuring out where the floating point went. How did that go from the sensor, which is probably a decimal, to the display?
>> RENATO MANGINI DIAS: The display is also a driver we provided, and it has several overrides of the method. It has one for a string, which takes the first four characters of the string, but it also has one for double, for int, and for some other data types.
>> AUDIENCE MEMBER: That's exactly what I wanted. Thank you.
>> DAVE SMITH: One other here. Stay over here.
>> AUDIENCE MEMBER: Quick question. If I want to customize the AOSP version of Android Things for my custom hardware, how much work do I have to put in to connect things up to the Java APIs?
>> DAVE SMITH: Could you ask the question again?
>> AUDIENCE MEMBER: How much of the low-end interfaces and drivers for the I2C or other peripherals that I have do I need in order to connect to my upper-layer APIs, if I have custom hardware?
>> DAVE SMITH: You are talking about a custom peripheral?
>> AUDIENCE MEMBER: A personal box that has one CPU and a bunch of peripherals attached to it.
>> DAVE SMITH: Probably the most common way we see folks doing that is implementing either a UART interface or an SPI interface, and the nice thing about those is that they are blank-sheet protocols that you can use.
>> AUDIENCE MEMBER: Will the driver hardware be covered by Android Things, or do I have to develop everything myself, or only part of the AOSP version?
>> DAVE SMITH: All the functionality to actually connect to the physical pins on the device, for instance the UART, those APIs are bubbled all the way up to Java. You just have to define how you want the data to transact between the two.
>> AUDIENCE MEMBER: The standard Android APIs would have more stuff, especially if you don't have a display; there may be standard APIs that don't apply. As an IoT developer using those standard APIs, how do I know what methods are live versus what are only for other devices that are bigger?
>> DAVE SMITH: So I would say, by default, assume it all works. There is nothing that we have disabled out of the Android API area, with the exception of any APIs that require some sort of UI dialog to display. So, things like runtime permissions; another API that's nonfunctional is all the notification APIs, since there is no system UI to display them. Outside of those display-centric APIs, everything that you are used to developing for on core Android should function on Android Things as well.
>> AUDIENCE MEMBER: So the APK runs as a launcher. How do you manage it if you want to have multiple apps? Does your own launcher have to call the other apps?
>> DAVE SMITH: Yes. The way the system is designed, there is a main entry point, because of that one activity. The system will pick one for you.
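Going back to the custom-peripheral question above, opening a UART with the Things Peripheral I/O API of the developer preview might look like this sketch; the port name is device-specific and illustrative, and any framing protocol on top is up to you.

```java
// Sketch: talking to a custom peripheral over UART with the
// Android Things developer-preview Peripheral I/O API.
PeripheralManagerService manager = new PeripheralManagerService();
UartDevice uart = manager.openUartDevice("UART0");  // port name varies by board
uart.setBaudrate(115200);

byte[] buffer = new byte[64];
int read = uart.read(buffer, buffer.length);        // raw bytes; framing is yours
```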
You want to make sure that you choose one entry point in there, and then from your code you can expand that out. Now, as far as getting them installed, right now it is a bit of a manual process, so you could build those multiple APKs and then essentially adb install all of them, or you could have multiple modules in the same Android Studio project. There will be some integration with the developer consoles later in the year; for now, you have to manually install them, as you would if you were doing it on a regular Android device. From a launching perspective, we give you that one intent, and then you fan it out in your code from there. Any other questions? All right. Thanks, everyone.
>> RENATO MANGINI DIAS: Thank you. (Applause.)

Multimodal Interactions


>> Where is the other clicker? Oh. Thank you. No problem. No problem. Okay.

>> ADRIANA OLMOS ANTILLON: Hello, everyone. Thank you so much for joining our session. It is exciting to be here, and welcome to our session on multimodal interactions. We would like to start by introducing my colleague Jared Strawderman. He is part of the conversational design team, and he is a designer for the Google Assistant.
>> JARED STRAWDERMAN: And this is my esteemed colleague Adriana Olmos Antillon. She is a product designer, where she is in charge of the design effort for third-party multimodal experiences. Before we dive right in, I wanted to share a personal story, a story that I feel captures a real-world example of multimodal. I was traveling to Asheville, North Carolina, in October to attend the wedding of two of my friends. I am originally from West Virginia, and North Carolina is not that far away. I didn't have a good idea of where to get dinner. My hotel had a concierge desk, and I talked to the concierge about what I liked and didn't like. We went back and forth, and he was asking me questions about how far I was willing to go and whether I had a car, and after he talked to me for a while, he did something very interesting. He narrowed it down to a few restaurants and then handed over the menus to those restaurants, and I made a decision and told the concierge that we had decided to go to this southern restaurant, at which time he actually resumed the dialogue between me and him, and he said, when do you want to go, how many people are going, and he made the reservation for me. The voice part is represented by the dialogue between me and the concierge, and the visual part is represented by the menus. Designing multimodal experiences is an interdisciplinary effort. My background is in voice design, so I have been designing conversational interfaces for several years. Anything that involves spoken input and the things that you hear, that's my area of expertise. Adriana comes from a more traditional web design background.
But it takes combining these two efforts, along with things like motion design, UX writing and visual design, to create a really cohesive multimodal experience. Let's take a quick step back and talk about how we as human beings experience things in the real world.

» ADRIANA OLMOS ANTILLON: Imagine the last time you were walking on the beach. You could hear the ocean and touch it, and it is all of those senses triggered together that make it a compelling experience. Digital interfaces, though, mostly engage three main senses: sight, hearing and touch. The content we consume on screens is based on sight and hearing, so we are going to be focusing on these two.

» JARED STRAWDERMAN: In order to have any chance at all of designing a compelling multimodal experience, it is an absolute prerequisite for you to know what each modality does well and what it does poorly. For example, voice does some things extremely well, and other things it is abjectly terrible at, and it is really important for you to recognize what those are so you can leverage the strengths and avoid the liabilities. So let's start with voice. Voice is very, very good at flattening a menu structure and providing direct access. I used to think that my mobile device was the epitome of convenience: I had really quick access to information, and every facet of my life is on this beautiful device. But then, when I started interacting with Google Home, I was able to ask it from across the room, what's the score of the Virginia basketball game, and it would give it to me. Compare that to pulling my phone out of my pocket, unlocking it, tapping around, and there is the score. When you do that with a far-field interaction, it makes pulling the phone out of your pocket seem too inconvenient.
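Jared's point about voice flattening the menu structure can be sketched with a toy example: the same lookup that takes several taps of drill-down on a screen arrives as a single spoken intent. This is purely our illustration; the menu contents, the intent table and the score string are all invented, and real voice front ends use NLU rather than an exact-match table.

```python
# Toy illustration: a screen UI reaches content by drilling down one
# level per tap, while a spoken query names the whole intent at once.
# The menu, intent table and score below are made up for the sketch.
MENU = {
    "sports": {"basketball": {"virginia": "UVA 68, Duke 65"}},
    "weather": {"today": "Sunny, 77 F"},
}

# A voice front end maps a whole utterance to the full path in one step.
INTENTS = {
    "what's the score of the virginia basketball game": ("sports", "basketball", "virginia"),
    "what's the weather today": ("weather", "today"),
}

def drill_down(menu, *path):
    """Screen-style navigation: one tap per menu level."""
    node = menu
    for step in path:
        node = node[step]
    return node

def ask(utterance):
    """Voice-style access: the utterance carries the whole path at once."""
    return drill_down(MENU, *INTENTS[utterance.lower()])

# Three taps on a screen...
print(drill_down(MENU, "sports", "basketball", "virginia"))
# ...versus one spoken question.
print(ask("What's the score of the Virginia basketball game"))
```

Both calls reach the same leaf; the difference the talk is after is that the spoken form skips every intermediate menu level.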
As compelling and exciting as the benefits of voice are, it doesn't come without its drawbacks. Users can only retain a finite amount of content in their memories, and you have to manage that. For those of you who may have a visual design background, or who have developed websites and mobile applications, imagine if the only visual affordance you had to present content to your users was a scrolling ticker that goes across the screen, and the user can only remember the text that just evaporated off the edge of the screen to help them get through the interaction. That is what we as designers have to manage. And I'm not even sure it had a chance to cycle all the way through, but that's what the ticker said. So obviously, big chunks of content are better presented on a screen, and a lot easier for users to absorb. Take a restaurant in San Jose. Here is one approach to answering a question about its hours:

>> Black Sheep is open today from 5 o'clock to 9:30 p.m., and tomorrow from 5 o'clock to 10 p.m., on Saturday from 5 o'clock to 10 p.m., on Sunday from 10 a.m. to 2 p.m. and again from 5 o'clock to 9 p.m., and they are closed on Monday.

» JARED STRAWDERMAN: When you present so much content that the user can't retain any of it, it becomes useless. Here is a better approach:

>> Black Sheep is open today from 5 o'clock to 9:30 p.m. Here are the hours for the rest of the week.

» JARED STRAWDERMAN: So you give the most salient pieces of information up front, and you refer to the screen for denser content that the user can use at their leisure. Another drawback of using voice interfaces is that when I talk to the Google Assistant, people can hear me. That's kind of how talking works.
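The contrast between the two Black Sheep responses can be sketched as a single summarization step. A minimal sketch in Python, where the restaurant name, the weekly hours and the exact phrasing are all hypothetical example data, not anything the Assistant actually produces:

```python
# Minimal sketch: distill a full week of opening hours into a short
# spoken summary, deferring the rest to the screen. The restaurant
# and its hours are hypothetical example data.
HOURS = {
    "Monday": None,                  # closed
    "Tuesday": "5:00 to 9:30 PM",
    "Wednesday": "5:00 to 10 PM",
    "Saturday": "5:00 to 10 PM",
    "Sunday": "10 AM to 2 PM and 5 to 9 PM",
}

def spoken_hours(name, today):
    """Say only today's hours out loud; point to the screen for the rest."""
    todays = HOURS.get(today)
    if todays is None:
        return f"{name} is closed today. Here are the hours for the rest of the week."
    return f"{name} is open today from {todays}. Here are the hours for the rest of the week."

print(spoken_hours("Black Sheep", "Tuesday"))
```

The full `HOURS` table is what the screen would render; only the one-sentence summary is spoken.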
Imagine if the person sitting next to you, right in the middle of Adriana being about to say something profound and amazing, decided to pull out a phone and ask the Google Assistant what the score of the basketball game was on Tuesday. Even though the device is capable of it, that really limits the range and access of voice interactions in public settings like this one. By the way, if you are really interested in this topic, there is an incredible session happening at 3:30 today on Stage 5 about the right use cases and the right voice interactions for your application. I highly recommend it.

» ADRIANA OLMOS ANTILLON: Visual interfaces can be as permanent, or as complex and dynamic, as we want them to be. In fact, there is a lot of information that can be delivered visually. Take, for instance, something as classic as a traffic light, where we can control the timing of the red to green in order to convey meaning, or something as sophisticated as (inaudible), where you can consume and follow what is happening and what is trending. There are a lot of things that you can do on one screen. Now the question is, how are you going to provide visual information in a way that goes beyond passive consumption and actually supports interactive experiences? One of those is true conversation. In a natural conversation that we have with people, we point at things, we describe things, and this goes back and forth among human beings; it just happens naturally. The thing is, when we start thinking about applications on mobile, they throw a lot of information at you to convey things, and we have to be careful. For instance, in a mobile app there are some things that I can quickly do, but if you remember the last time you walked into a store, you actually had a conversation with a person, with information delivered back and forth.
So if you put those two together, they look really different. And in fact, a person would probably learn what to say from the experience of chatting with the Assistant. One important thing to take into account is whether the person has an app on the phone; that should be your starting point as you begin building your applications. When one mode is in the foreground and the other is in the background, the user should still be able to follow along, and the same thing happens in a multimodal experience: we may ask the user questions in one mode and take the answer in another. So we have to think about how you are going to combine these two modalities in a way that is not jarring to the user, and how the user makes sense of these modalities. It might sound as if we already know how to design for all of this, but it is not that simple, and there are a lot of factors that we need to take into consideration. So we are going to talk about the factors that we want you to take into account when you think of all the platforms you are going to be designing for.

» JARED STRAWDERMAN: We wanted to talk a little bit about the factors that we have identified to help you think about how your experiences will manifest as the Google Assistant comes to more surfaces. The first is motion: is the device designed to be used while the user is walking, running or driving, or in some other situation that makes the screen otherwise inaccessible? Think of your phone as opposed to the TV. The next is privacy: is the device designed to be used in private, with a one-to-one relationship between the user and the device, like your phone, or is it designed to be shared among a group of users, like the Google Home? The next is proximity: are you close enough to quickly tap on the device to interact with it, as opposed to something like the Google Home or your TV, which is optimized for far-field interactions?
The next is audio capability: does your device just have a very small mic that can capture speech from a few feet away, or does it have an entire array designed to capture speech from across the room? And, of course, visual capability: do you have a full QWERTY keyboard, or something more primitive like a D-pad on a TV remote? And visual output, which boils down to screen size. These are the factors that we want you to consider and keep in mind, and we are going to go through a few surfaces and how we anticipate these factors may show up. Keep in mind, this is a forward-looking presentation, to give you some guidelines by which to anticipate how to deploy your actions on other surfaces. So let's talk about Google Home, since it featured so prominently in yesterday's keynote and it is gaining a lot of traction in the marketplace. I mentioned before how voice is really incredible at flattening the menu structure, and there is another upside of speech that I think is really important to present in the context of Google Home: speech, what I am doing right now, is the interface. You are able to interact with Google Home in a way that you have been doing since you were two years old. There aren't any manuals, there are no tutorials, and there is no learning curve. All you have to do is know what kinds of features the Google Assistant generally supports, like weather and sports, and you just ask for those things the way you would ask another person. So, if we look at some of the factors that I outlined earlier: is the user in motion? They are not. You put your Google Home on a kitchen counter or a nightstand, and it is usually planted there for quite a while. Google Home is deployed in a private setting, but it is shared among a group of users, so it is somewhat private.
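The factor checklist Jared walks through can be captured as a small record per surface. This is our own sketch for illustration only: the field names, the 0-to-2 scale and the per-surface values are our reading of the talk's characterizations, not any Google data model.

```python
# Sketch: the talk's five factors as a simple record. Field names and
# the 0-2 capability scale are invented here for illustration.
from dataclasses import dataclass

@dataclass
class Surface:
    in_motion: bool   # is the user typically moving while using it?
    shared: bool      # shared among a group, or one-to-one?
    far_field: bool   # too far away to touch?
    audio_io: int     # 0 = none, 1 = limited, 2 = rich
    visual_io: int    # 0 = none, 1 = limited, 2 = rich

# Rough profiles, as characterized in the talk.
google_home = Surface(in_motion=False, shared=True,  far_field=True,  audio_io=2, visual_io=0)
phone       = Surface(in_motion=True,  shared=False, far_field=False, audio_io=2, visual_io=2)
tv          = Surface(in_motion=False, shared=True,  far_field=True,  audio_io=2, visual_io=1)

def voice_only(s):
    """A surface with no usable screen must carry everything in speech."""
    return s.visual_io == 0

print(voice_only(google_home))
```

A record like this makes the later comparisons in the talk (Google Home versus the car, TV output versus input) easy to state mechanically.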
And you don't have to be close enough to touch it to interact with it; it is optimized for far-field interaction. And if we look at

input and output capabilities, it has strong capabilities for audio input and output, but very little on the visual side. We have three kind of overarching guidelines for how to design an Assistant action for Google Home. The first one is: don't read, listen. What I mean by that is, as you build actions and develop experiences on Google Home, you may be pulling content from some online source, and a lot of sources are optimized for written content. They are designed to be read, and you may be tempted to think you can just take these sources, run them through a text-to-speech engine, and you have your voice interface. It is not that simple. Take a weather forecast. This string of text makes perfect sense to you when you read it. But listen to what happens if I don't first translate this string of text into something that is appropriate for spoken language, something like: in Mountain View it is sunny, 77 degrees, and winds north-northwest 10 to 15 miles an hour.

>> (Off microphone)

» JARED STRAWDERMAN: Totally incomprehensible. So when you get a data source and you run it through, make sure it is appropriate for spoken content. Take a few samples, run them through the text-to-speech engine, and don't look at the text; listen to it, and see if you can understand it. The next guideline is: avoid information overload. I have already lamented the ephemeral nature of speech, so just be careful how much content you present to the user. I asked the Google Assistant on Home about movies, and this is one approach:

>> Here's what's playing at your favorite theater, Mayfield: Snatched. The Fast and the Furious. Smurfs. (And so on.) Do any of those sound good?

» JARED STRAWDERMAN: Not only does it bombard you with 12 movies, but it forces you to make a decision. Here is a better approach:

>> There are 12 movies playing at your favorite theater.
Diary of a Wimpy Kid is one. Should I tell you about them, or keep going?

» JARED STRAWDERMAN: It answers the question without overwhelming you. The third guideline: know when to lean on the screen. If the user is asking for directions or navigation, a map is really efficient at communicating information in this really nice, compact rectangular shape. It conveys how far you are from your destination, what the preferred route is, what traffic looks like, all that kind of stuff. So it may be tempting to just punt to the screen on interactions like this. When Google Home was first deployed and people were asking questions about navigation, it said:

>> I don't have a screen, so I can't do that for you.

» JARED STRAWDERMAN: Pretty disappointing and unhelpful. So we learned our lesson. We thought about why people are asking Google Home these questions, and we distilled a few salient points about what people are asking for. In the Bay Area, the quintessential question is: what does traffic look like? How long is it going to take me to get there? We took this approach instead:

>> The best way to get to work by car is through 87 and 101 North, and it will take 19 minutes in light traffic.

» JARED STRAWDERMAN: While the image of a map is much more efficient and conveys a lot more information to the user, there is no reason why you can't distill the most salient piece of information into a verbal summary. Let's talk about smartphones.

» ADRIANA OLMOS ANTILLON: The smartphone is a wonderful machine that we take everywhere in our lives, and its context changes constantly: the volume is up one minute, and the next you are running through a session with headphones on. Take this example. As you can see, I was in Canada, up on the ski

slopes, and we knew there was going to be a snowstorm and we needed to know what time to get down from the hills. It was a moment when we could just ask the phone, hey, when is the snowstorm going to start, because the last thing we wanted to do was take off our mittens. This is a time when the phone works as a little voice device we carry in our pocket. There were other times when we were navigating in traffic, and you wish you could just speak to your phone and have it answer your question. In that example, the last thing we want is for that information to be displayed silently on the screen; the response needs to be spoken. We can be standing still in one position, and at other times we are running somewhere; we can be in a very private or a very public context. What is wonderful about phones is that we can have rich interactions with them, and what is more compelling is that there are a lot of capabilities: at the keynote, we saw this exciting demo where we can use the camera as a form of input. We came up with three guidelines based on what we have been observing. The first: when one mode goes away, the other one should take over. Say I am using a third-party application, Kipling, that tells me facts about numbers. I pull up the phone, and it greets me both on screen and in voice, and I enter the conversation. It gives me suggestions of what I could say, so I can respond without ever paying attention to the screen.

>> Howdy, this is Kipling. I can tell you facts and trivia.
What number would you like to know about?

» ADRIANA OLMOS ANTILLON: The greeting is a randomized response, and if you pay attention, we introduce suggestions like these as part of the dialogue. Because we don't have an artificial keyboard, we add little cues within the dialogue to communicate the values the user could say and give them ideas, on top of the suggestion chat bubbles already on screen. It is very interesting to optimize for the mode while still allowing for both. Let's see another example, one of those things we wanted to get done at the beginning of the month. We can ask the Assistant to help with the next step. It is simple and normal to see this kind of information laid out on a website, where we can quickly scan things; the question is how it is going to be presented as auditory content.

>> Okay. The Warriors have a (inaudible). Which one do we want tickets for?

» ADRIANA OLMOS ANTILLON: We say things like okay, we are having this conversation, and we preview a little bit of what is going to show up in a little bit

and I want to start with a surface that was mentioned in yesterday's keynote: the TV. If we quickly look at the conditions around how the TV is deployed: it is a static device, it is not moving, and the user is not moving with it. It is designed to be used in a private setting among a group of users, much like Google Home, and it is too far away to touch to interact with. Now, this is the interesting thing about the TV: look at where it sits on the output scale compared to the input scale. Very, very rich audio and visual output capability, but quite moderate or limited input capabilities. What does that tell us? It tells us that the TV is a consumption device, not really an intense interaction device. The reason we bring this up is that as you anticipate your actions coming to life through the Assistant on TV, know that we have to minimize them onto a banner on the screen, so as not to disrupt the programming going on while the TV is being watched, because it is primarily a consumption device, people are watching programs, and we want to leverage that bottom band of real estate. And because we have a few car people in the audience, I wanted to touch a little bit on the Google Assistant in cars. Android Auto does a good job of delivering audio-rich content by way of projection from the phone onto your car's interface. The car is moving; it is carrying you around at 80 miles an hour, and drivers shouldn't be looking at the screen a lot. And even though cars are out and about, they are kind of a private setting. Users generally have access to quick touch interactions in a car, to adjust the temperature or change the station or something like that, but you don't want the driver interacting with things a lot; they should be driving. In terms of input and output capability:
there is nothing wrong with a lot of spoken output, and cars have pretty decent mics, though recognition is sometimes a challenge because of ambient noise. If we overlay Google Home on top of the car, they are strikingly similar in terms of where they fall on this matrix. What does that tell us? Google Home is very much a voice-only device, and that tells us the car probably should be somewhere in that range as well. So, a few takeaways as we summarize. If you don't take anything else away from today's talk, please take this: know the strengths and weaknesses of voice as opposed to visuals. Leverage what speech does well, leverage what the screen does really well, and avoid the liabilities that we talked about. Optimize for the strongest mode, but allow both: if the screen does something better, point the user to the screen, but still let them do it in voice. And even though these interactions involve both visual and auditory modes, they are usually invoked by spoken input, so make sure the first bit of content the user hears is appropriate for spoken language. One of the overarching principles of the Google Assistant is that we want interactions to be efficient: each turn in a conversational dialogue should be short and sweet, easy for the user to absorb and consume, and should help them get through their interaction.

» ADRIANA OLMOS ANTILLON: So we talked a lot today about how to package a response and make sure that all these modalities cohesively make a compelling response, and that none of them is absent. We also talked about conversational design and how

important it is to be inspired by having a conversation with a human before we jump into designing an application. But beyond that, there is a lot of work we at Google need to get moving on. Once we have our response, how are we going to present it automatically, taking into account the context of the user: whether they are running, how we present that information, or whether they are passive, sitting on the couch? More and more, we want these changes between one type of surface and another to happen in a manner that is almost magical, where the user doesn't have to ask for it: multimodal interactions on the phone, interactions like these in cars, all these things that people interact with, made more dynamic to allow for interactions that are richer. So we are very excited that you are joining the journey and starting to build your applications with us. There are going to be a lot of other talks happening today and tomorrow; you are welcome to join in case you are interested in talks on the Google Assistant. And there is a challenge, and we look forward to seeing all the things that you will be submitting to it. Thank you very much.

(Applause.)

Since we have five minutes of time left, if anyone has a question, feel free to ask.

>> Hi. I saw from the presentation that you did something contextual for the maps experience: because the person lives in the Bay Area, you give those directions based on what Bay Area people use. But for other countries, do you currently allow multi-contextual responses?

» JARED STRAWDERMAN: I'm sorry, could you repeat the question?
>> Bay Area people understand what those two highways mean, but for someone in Korea or somewhere else, would that be meaningful to them? Does the current technology allow us to give context depending on where the user is?

» JARED STRAWDERMAN: For third-party experiences, I don't believe so.

>> Contextual —

» JARED STRAWDERMAN: Can you elaborate?

>> You said if the user is —

» JARED STRAWDERMAN: The map example should apply everywhere. We are trying to communicate to the user what the preferred route is, and the travel time applies anywhere you live, as does the bus route. It is just that one of the key decisions commuters in the Bay Area have to make is 280 or 101, so I was giving that as an example. Sorry for the confusion.

>> Thank you.

>> Great presentation. One of the things I was really disappointed to see when I was digging in on Assistant actions is that some actions are not supported on certain kinds of devices. It strikes me that that makes it hard to make users aware of your app, for example, and what it can do. I was wondering if you had any thoughts about that: even if one surface could do something better, you still probably need to support all the same use cases across those different surfaces. Any reaction to that?

» ADRIANA OLMOS ANTILLON: So, first of all, you are right. We are starting to make sure there is consistency across surfaces, and it is something that is very important to us. In terms of making your users aware of whether these apps are available, there is a talk on discovery that is going to cover what you need to do to make your apps discoverable on other surfaces. And the third thing, which we touched upon: we are constantly working to make sure that the way you code your application produces one response that manifests on different surfaces.
So the problem you are having, we are also having at the moment, because we have to support all these multiple surfaces. It is something that we are striving for. You are not alone.

» JARED STRAWDERMAN: By the

way, did you see the session, I think it was this morning, dedicated to discovery of your app? You missed it? Okay, just watch it on YouTube.

>> It was interesting to see that the user considerations for Google Home and for the car were very similar. That said, there are things in the car that have legal considerations: you don't want people looking at the screen, and things like that. Does the developer know that the person is in the car? How is the developer going to know what the context is?

» JARED STRAWDERMAN: Yeah, there was a session in the keynote this morning where you get a signal for whether a screen is available or not. We will have to look into whether you get other surface information beyond that. There is also a community on Google+ called Actions on Google, and you can go there and ask questions. But that's one of the things Adriana was talking about: we want to do a better job providing just-in-time signals for you to know the user's conditions better.

>> Whether the screen is available or not won't be enough, because there is a screen in the car.

» JARED STRAWDERMAN: Right.

» ADRIANA OLMOS ANTILLON: Yeah, and to build on your concern, it is also important to take into consideration the country where this is running, because the policies on making the screen available change from place to place.

» JARED STRAWDERMAN: But as we firm up how we are dealing with the car, there are probably going to be restrictions on what you will be able to present as a developer on the car screen. So I don't think you will even be allowed to do some of the things that you are talking about, just because of all the reasons that you mentioned. It is a good question. And then, of course, as some friends of mine ask, what happens when you move into the realm where the cars are driving themselves?
That whole equation goes out the window.

>> Thank you.

» JARED STRAWDERMAN: I think we will try to do two more.

>> One quick question. Do you have experience with longer conversations? For example, I want someone to fill out a form.

» JARED STRAWDERMAN: Yes.

>> This is not a quick give-me-the-information request, but two or three pages —

» ADRIANA OLMOS ANTILLON: Yeah.

>> Maybe pick it up later.

» JARED STRAWDERMAN: Yeah. Transactions require a lot of turn-taking, and along with this session at I/O we just released documentation that provides examples of how you can have a conversation with transactions: what you have to do if you have to buy flowers, or all the steps if you have to buy shoes. The first approach is multi-step, but you can definitely find information and examples there to get inspired.

>> So you don't think it is out of the realm, having longer discussions, ongoing discussions about a certain topic?

» JARED STRAWDERMAN: Go ahead. I think that as long as we have a kind of pause-and-resume model, that's not out of the question. But if you are trying to do it in one fell swoop, it is not advisable.

>> Thanks.

>> Thank you.

>> I just had a question about capabilities on different platforms and the types of interactions. You have been talking about voice, and about images and visuals. I know gesture control isn't either of your areas of specialty, but do you have any insights on your thoughts about gesture control, where you plan to go with that in the future, and how important that is for the platform?

» ADRIANA OLMOS ANTILLON: Gesture control is not something we have been focusing on. I would be going out of my area, but it is definitely something worth exploring a lot more.

» JARED STRAWDERMAN: And haptics is one thing that we very, very delicately touched on in this presentation. That's another factor, too.

>> Okay.
Thank you.

» JARED STRAWDERMAN: Thanks. Okay, thanks.

(Applause.)



>> Thanks.

>> Is it silenced?

and I work on security research. With me today are colleagues who work on web security research, and Eric, who works on webmaster relations. We are going to tell you about how you can learn about web security with Google. Let's do a quick show of hands: how many of you have had a website compromised, or know

someone who has had a website hacked? Come on, don't be shy. Raise your hand. That's almost all of you. It is not surprising: in 2016, hacking was more prevalent than ever, and we found 32% more compromised websites than before. Attackers try to hack your website for many reasons: attacking your users by serving malware to them, phishing, using your resources, or trying to steal your data and expose it. Whenever you get hacked, the consequences are pretty dire: you lose the trust of your users, and there is potentially financial loss. Some of the lasting effects on user trust can take years to recover from. This is why it is essential to keep security at the forefront of your web strategy and to make sure you invest in security from the moment you develop something new, while you maintain it, and whenever you create something. I'm pretty sure you already know that; otherwise you would not be here today. What we are going to do today is walk you through the resources that Google provides to help you defend against hackers. First, Eric will cover how you can get help from Google when you get hacked: the resources we have to help you clean up and to help you secure your website. Then we are going to give you a sneak preview of our upcoming web security course, and to make it very practical and give you a sense of what you are going to learn, we are going to give you a short overview followed by a preview of one of the lectures. And because we wanted something very hands-on, we have a lot of demos today, so if everything works, let's jump right in.

» ERIC KUAN: First off, hi Mom. I'm on the live stream right now and my mom is watching, so I'm super excited about that.

(Applause.)

» ERIC KUAN: Hi Mom. Let's jump into what you should do if your site gets hacked.
You are going to get some type of notification. From Google's perspective, that notification is an e-mail you may have seen, or some type of warning in search results, but notifications don't have to come from Google: your users might e-mail you saying there is something wrong with your site. You want to check those out. If your site is compromised, it is really annoying, and worst of all, your users are trying to get to your site and they can't, so it is in your best interest to clean up as soon as possible. Now, the process of cleaning up can be quite daunting and will take some time, but if you follow these steps methodically, there is a high chance that you can clean your site. The first step is to quarantine your site: take it offline, or isolate certain parts and take those offline. You want to change any usernames and passwords, the whole thing; you don't want to be hacked again while you are trying to fix your site, and you don't want your users reaching your site while it is compromised. Taking your site offline temporarily is the best move, and you want to start building your team. Identification is probably the most difficult part and the most time-consuming. Hackers are constantly trying to prevent you from removing the hack on your site, so they will do weird little tricky things, like a page that shows you an HTTP 404 while it is actually serving spam to your users and to search engines. Identification is important, and at this phase you will also want to identify the vulnerability, because you want to understand how hackers got into your site. Cleaning up is then just about removing those files, and testing, making sure that your site is running normally again.
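One common way to carry out the identification step Eric describes is to compare the live site's files against a known-clean backup and flag anything added or changed. This is our own illustrative sketch, not a tool Eric mentions; the directory layout is hypothetical.

```python
# Sketch: find files an attacker added or modified by comparing the
# live site tree against a known-clean backup, using content hashes.
# Directory paths are hypothetical; run against your own copies.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict:
    """Map each file's path (relative to root) to a SHA-256 of its bytes."""
    return {
        p.relative_to(root).as_posix(): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def suspicious_files(live: Path, backup: Path) -> list:
    """Files present only in the live tree, or whose content changed."""
    live_h, backup_h = tree_hashes(live), tree_hashes(backup)
    return sorted(
        path for path, digest in live_h.items()
        if backup_h.get(path) != digest
    )
```

Every path this returns is worth inspecting by hand; legitimate updates will show up too, so the backup should be from a point you trust.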
That’s the main part of cleaning up, and that’s really all you need to do. Patching is about closing those vulnerabilities that you identified earlier. A lot of people miss this, and their sites get compromised again. Don’t forget to close those vulnerabilities. You can just update your site software, your CMS or your plug-ins, things like that, and that will close a lot of vulnerabilities. Finally, if your site was flagged by Google and you saw those red interstitials on your site or some warnings in search results, you

want to tell Google that your site was cleaned so they can remove those flags. These steps work for most types of hacks, and we have realized that a lot of hacking campaigns work in very, very similar ways. This helps us understand how attackers are scaling their attacks and how they are trying to fly under the radar and evade detection, and it also helps us build better detection systems. We have identified three major hacking campaigns so far: the cloaked keywords hack, where they create cloaked pages and drop keywords on those pages; the gibberish hack, where users who click in get redirected to spammy malware sites; and finally the Japanese keywords hack, which targets Japanese brand-name goods and attempts to sell fake goods. We have been able to create really great documentation, step-by-step guides, for each one of these hacks. I will link you to the documents in a second, because I think it will help if you, or someone that you know, has been hacked or will be hacked in the future. First, though, I want to talk you through the gibberish hack. Understanding how these hacks work is super interesting, because it will help you with remediation in the future. Now, this is what the gibberish hack looks like. It is really plain, just trying to use the domain to rank well. The underlying mechanics behind it are also pretty simple when you break it into three separate pieces: the user clicks on the site, is redirected in some fashion to a spam page by a PHP generator script, and then the user is redirected to the spam. So the first important part of this whole chain is the redirect. If you can identify how that redirect is happening and where it is happening, you can identify the other parts of your site that have been compromised as well. In this example, the .htaccess file has been compromised.
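As a rough illustration of the hunt Eric describes — finding rewrite rules that fire only for visitors arriving from search engines — here is a minimal heuristic scanner. It is a sketch, not Google’s tooling; the rule patterns and function names are invented for this example, and real injected rules vary per campaign:

```python
import re

# Condition keyed on major search-engine referrers (illustrative list).
SEARCH_ENGINES = re.compile(r"HTTP_REFERER.*(google|bing|yahoo)", re.IGNORECASE)

def find_suspicious_redirects(htaccess_text):
    """Return RewriteRule targets guarded by a search-engine referrer check."""
    suspicious = []
    referer_matched = False
    for line in htaccess_text.splitlines():
        line = line.strip()
        if line.startswith("RewriteCond") and SEARCH_ENGINES.search(line):
            referer_matched = True   # condition depends on the search referrer
        elif line.startswith("RewriteRule"):
            if referer_matched:      # rule only fires for search-engine visitors
                suspicious.append(line)
            referer_matched = False  # conditions apply only to the next rule
    return suspicious
```

Hits from a scan like this are leads for manual review, not proof of compromise: a legitimate rule could match, and real malware often hides behind extra indirection.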
Through three lines of code, it is going to redirect your users coming from major search engines and send them to this page right here, the spam.php page. This is the piece that you want to identify later. It is not going to be called spam.php. We have seen hackers call things “horse duck 2,” and try to mask files as core files, such as wp-config. They are trying to keep you from accurately fixing your site. Okay. So from the redirect we see that you are sent to the page generator. Now, you are probably going to open this file. You want to figure out what’s going on. You are curious how they are doing all this damage, and you are going to see something like this. They have obfuscated the script. It is really difficult to understand what a lot of these scripts are doing. Even if you do take the time to figure out exactly what the code is, it is still not really human readable. This is not coding best practice; your CS professors would be appalled looking at this. You need to remove this file. I would ask that you back up these files first, just in case they are good files, and later on, if you do want to do some forensic work, it will be helpful to have those backups. You can see from these two files that they have done a lot of damage to a website. And that’s why you don’t even want to be in the phase of cleaning up a website. Cleaning a website is difficult. It is financially costly and it is annoying. You have brand reputation on the line, and that’s why the key takeaway is that prevention is key. So let’s talk about a couple of quick things that you can do today, right after the session, to help with prevention. Back up your site. There are a lot of people who don’t back up their site, and that’s baffling to me. Back up your site as often as possible.
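Returning to the obfuscated generator script for a moment: a similarly rough heuristic can help triage such files — ones that pair eval with a decoder call. This is a sketch with an invented marker list, not a real malware scanner; it will miss plenty and can false-positive, so treat any hit as a lead for manual review:

```python
import re

# Markers commonly seen in obfuscated injected PHP (illustrative, not exhaustive).
OBFUSCATION_MARKERS = [
    r"eval\s*\(",
    r"base64_decode\s*\(",
    r"gzinflate\s*\(",
    r"str_rot13\s*\(",
]

def looks_obfuscated(php_source):
    """Flag PHP source that combines eval with at least one decoder call."""
    hits = {m for m in OBFUSCATION_MARKERS if re.search(m, php_source)}
    # eval() alone is only mildly suspicious; eval plus a decoder is a strong signal.
    return r"eval\s*\(" in hits and len(hits) >= 2
```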
If you get compromised, one of the easiest ways to recover is to restore the backed-up version of your site. The vulnerability that the hackers got in through probably still exists, so you want to fix that. Secondly, sign up for Search Console. This is one of the ways that Google communicates with webmasters. And finally, update your code and your themes. Like I said before,

this is one of the most common ways that attackers compromise a site. And I know this is difficult, because I have talked to a developer before, and she said, my clients don’t want to update their site, because if we update the core CMS files it is going to mess up a whole bunch of plug-ins. But it is in your client’s best interest to update, and you have to convince them. You have to talk to them about making a site that is both secure and still works for them. That’s the really important piece. As I said earlier, there is a lot of documentation that Google can give you to help. That’s where our security guides are: the ones for the specific hack campaigns that we have identified, and we are constantly building more of these guides for the different types of hacking that we identify. The second thing is the webmaster help forums. We have a lot of awesome top contributors there to help you remediate your site and identify any vulnerabilities. And follow us on Twitter; we will give you updates about search and security. I am going to hand it back to Elie to talk to you about the second part of securing your site, and that’s building a safer website.
» ELIE BURSZTEIN: Thank you. As Eric said, when it comes to security, prevention is key, and so far at Google we didn’t have much of a public course to help you out. So over the last year and a half we decided to develop a new security course which is meant to be very hands-on, so you gain practical knowledge that you can apply to securing your website. The core idea behind the course is that you attack and defend your own site, so you have hands-on experience and knowledge to help protect it. So let me show you what the course looks like. The course will be a set of 12 lectures, grouped into three main categories.
First, we are going to discuss how to handle user data safely: how do I authenticate my users, how do I store their data safely, and how do I encrypt my communication so that when users interact with my website it is secure. The second thing we are going to cover is web attacks. These are attacks that are specific to web security, and you need to know them to make sure you are not vulnerable to them. We are going to cover the four big ones, which are XSS, CSRF, SQL injection and clickjacking. We also cover what happens when you embed a widget from a third-party website and it gets hacked, how you deal with user content, and how I make sure that I don’t have toxic content on my website, that no one puts an offensive picture into my beautiful stream. And as Eric said, when you get hacked it is difficult to recover, so we cover concrete cases of hacking so you can learn how to investigate them and clean up. So in the event that you do get hacked, you are already prepared. For each lecture we are going to provide a few materials. We will give you slides so you can review the lecture, we will do a video of the slides with some explanation, and — the most important part — we will give you exercises and quizzes so you know how well you are doing and whether you have understood the concepts. It was a real development effort; we have built a ton of those. We have over 50 exercises for you.
They cover three aspects of web security. First you have attack exercises, where you wear your black hat and try to attack a website, getting into the mindset of the attacker. Then, of course, we go to defense, where you learn how to apply the best set of techniques to protect your website, and you really understand what the mechanisms are and how you can apply them successfully. And for some of those things, especially for hacking, we cover investigation and give you some puzzles and interesting hacks to look at, to see if you can figure them out. One of the essential challenges we had to overcome is that it is not easy to teach web security, because you manipulate vulnerable code. We can’t just put that online, so how do you do it? Originally, people came up with the idea of using virtual machines that you have to run things in. This is not ideal, because it is very resource intensive; it runs on your computer and it limits the devices you can use. So about a year ago we were thinking about the problem and we said, well, web technology has

evolved so much. Let’s try to do something different. Let’s try to use web technology, and the crazy idea we had was: let’s build a web server into the web page. I know that sounds crazy, but the idea was, we have service workers. We can make one intercept a request and respond to it. And let’s throw in a Web SQL database and see if it works. We tried it, and it works really well. So now what we have for you is a test bed where you just go to the website and everything happens in your browser, nothing to install. It is very easy, and it all behaves as if you were on a real website. That being said, do you want to see it? Yes? (Applause.)
» ELIE BURSZTEIN: Let’s jump to the first demo, which will show you the framework. We are going to build a simple website which logs in our users. To do that, we first need a server system. Everything looks like Express; we use the same syntax. We need to declare our routes. For a log-in page we need two pages: somewhere the user will land, where we have a form — let’s create that — and then a second route which will process the information and decide whether or not you are logged in. So we are going to create two routes: one is a GET and the other one is a POST, and for now it is just going to say “LOL,” to see if it works. Remember, you can see it in the address bar: this is all local, no connection, all offline. And we hope it is going to work. All right. So on the other side, we load the framework, go to the page and see if it works. And here you have it: we have a working web page inside our browser. Now let’s add a little bit more, because it is a log-in page, so we need a form. Let’s add a form. Fortunately, the framework supports templates like any normal web framework.
Let’s add a user name, a password and a log-in field, and let’s reload to make sure we see our log-in form. Hold on. To load the template, we have to add the index page, and we load the template, and hopefully you get a nice log-in form. That’s great, but it doesn’t do much yet. We need to process it. So let’s have a bunch of users. What we are going to do here is add a database. Adding a database in the framework is very easy. All you have to do is create a database — one line of code, done automatically for us. Then we are going to add a user, so let’s create a user table which just contains a login and a password. All right. Let’s add a user, a Google user, for the demo. And by the way, do not store passwords in plain text. Do not do this. This is a demo; this isn’t secure. So let’s add that, and add a little bit of JavaScript code to check against the database. Run the input through a SQL query and test if it is correct, and we should have everything. Let’s try it. Reload. Now we have a page. Let’s try with a wrong password, password 123. Yes — user not found, it works as intended. Try again with the right password and click on it. Woo-hoo, we are logged in. So we have a fully functional log-in system with a database in a few lines of code. This is the framework we built, and this is the framework that we use to create the exercises. Yes? (Applause.)
» ELIE BURSZTEIN: All right. Let’s go back to the slides and talk a little bit more about the content. We showed the technical framework we have behind it. The framework is great, but content is better; you need it. So let’s jump into the SQL injection lecture and show you what kind of content you will learn: the attacks that you can suffer from,
and how to prevent them. After that, Yuan will do a few exercises so you can see how you learn it hands-on. So why do SQL injections exist? During

our little demo we introduced a vulnerability, because a SQL statement contains both keywords, which say what to do, and parameters, which say what to look for. If the attacker gets to control one of the parameters, they can inject keywords, and the server has no way to distinguish between the two. We have this very issue in our own code; it makes injection possible. An attacker with the ability to control the SQL query can bypass any security check you have, and beyond that, delete the database, encrypt it, and do other things. Formally, the consequences of an attack are that it breaches confidentiality and also integrity, and it lets you authenticate without knowing the password — you can defeat any type of check. There are really three types of SQL injection: the classic one, which we are going to demonstrate today, and the more advanced ones, second-order injection and blind injection. The classic one, as I explained, works very simply: the attacker, instead of sending what you expect, does the unexpected and tries to manipulate the SQL query by sending specific payloads, and your database will happily do whatever it is told. Here is a concrete example. If the attacker can inject the user name, then he can set the user name to google, close the field with a quote, and add dash dash. As you can see on the screen, you are basically bypassing authentication. So to make it more concrete, let’s jump to our second demo. We will show the attack live, hopefully. Demo, please. All right. So we are back to our demo. If you go back to the code, you remember we took the user name and put it into our query — you can see it on the screen. We use the user input directly, so we should be able to log in to the website without the password. Let’s demonstrate that. The way to do that, as we explained, is you use the user name.
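The classic bypass Elie describes here, and the parameterized-query fix he covers after the demo, can be sketched in a few lines. This is a hedged illustration using Python’s sqlite3 module — the course’s own test bed is JavaScript in the browser, and the table and credentials here are made up for the example:

```python
import sqlite3

# In-memory demo database, mirroring the talk's single-user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('google', 'io2017')")  # demo only!

def login_unsafe(username, password):
    # Vulnerable: user input is spliced straight into the SQL, so the
    # payload  google' --  closes the string and comments out the
    # password check entirely.
    q = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
         % (username, password))
    return conn.execute(q).fetchone() is not None

def login_safe(username, password):
    # Parameterized query: ? placeholders keep the input as pure data,
    # so injected quotes and comments have no SQL meaning.
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (username, password)).fetchone() is not None
```

With `login_unsafe`, the payload `google' --` logs in without any password; against `login_safe`, the same payload is just a strange user name that matches nothing.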
Then, instead of doing what is expected, we add a quote to close the parameter, and then we escape the rest — and the escape in SQL is just dash dash. How many of you think it is going to work? One person. You have no faith in me. We are going to do it anyway. Let’s try. All right. So here it is: we are logged in to the website without any password, because we let the attacker manipulate the input, which you should never do. Going back to the slides: how do you prevent that? There is a simple way to defend against it, called a parameterized query, also known as a prepared statement. Instead of using the variable in your query, you write the query and specify where the parameters should go, and then supply them afterwards. That will prevent SQL injection. And you should escape user input; you should not trust it. So that concludes our short demo of SQL injection. We are now going to show you how the exercises work and how the framework works. How many of you would like early access to the course? That’s brilliant. All of you.
» YUAN NIU: All right. So signing up is very easy. You just have to register using the link on the screen, starting today. Let’s switch to the demo. Since we are familiar with SQL injection already, we will skip to that. As Elie mentioned, each topic will have materials and quizzes and, of course, the exercises that are going to give you hands-on experience so that you can reinforce what you have been learning. You will be attacking, defending and investigating sites, and to make it a little more fun, we took inspiration from the pie-versus-cake rivalry to craft some of the scenarios. In our world, the pie syndicate is

a little worried. They are going to try to attack the rival’s website. So we will be sticking with the basic exercises today, and since we have pretty much done attack No. 1 with Elie, we will skip to No. 2, and for this I will need my black hat. Okay. All right. So we have an objective on the left. Hey pie, that cake shop is still in business. We can’t keep losing slices of our territory to rival industries. They still have a mostly online operation for now, but we must act quickly. They are still vulnerable to SQL injection. We have gotten a leaked copy of their server code, but no idea who the user could be. We need you to get their customer list, so get to it. Pie boss. So, very helpfully, we have a direct link to the page that we are going to attack, and we have a copy of the server code. We have the same vulnerability as before, but because we don’t know the user name, our previous bad input won’t work. So let’s take a look at the code. On line 12 we see that it actually doesn’t matter what the SELECT statement returns, as long as it returns anything at all. So this is where we will target our newly crafted query. We need to get the WHERE clause to return true at all times, and to do that we will just add, let’s see, OR 1=1, and this has the effect of saying: select from users where user name equals admin, or true. So now we have forced the statement to true, and now we have access to their customer list. And that popup means that we succeeded. So we can move on to the defense. Okay. Close this. Okay. For this I’m back on to cake, and I have got my white hat. So our mission, once again: greetings, fellow baker.
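The OR 1=1 trick from this exercise — forcing the WHERE clause to be true when no valid user name is known — can be sketched the same way. A minimal illustration with an invented table; the real exercise runs in the course’s in-browser framework:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")  # name unknown to attacker

def lookup_unsafe(username):
    # Vulnerable lookup: the caller only checks whether *any* row comes back.
    q = "SELECT * FROM users WHERE username = '%s'" % username
    return conn.execute(q).fetchone() is not None

# The payload closes the string, ORs in an always-true clause, then
# comments out the trailing quote:
#   ... WHERE username = 'whoever' OR 1=1 --'
payload = "whoever' OR 1=1 --"
```

Because `1=1` is true for every row, the query returns the whole table even though the attacker never guessed a valid user name.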
As you know, there has been chatter of an upstart pie maker with crusty connections. We suspect they have been trying to attack cake shops. Your site has been identified as one of these targets. In particular, it seems your log-in page is vulnerable to SQL injection. Please fix it at your earliest convenience. Cake boss. So this time we are going to use the editor. Let’s see, I’m going to refresh. Let’s see — oops. Refresh again. Sometimes the server needs a moment to get updated properly, so it takes a little bit of time. Okay, there we go. Let’s verify first that we are still vulnerable to the SQL injection attack. And indeed we are. Okay. So, all right, we are going to use our editor to fix things, and this editor comes with highlighting, all the nice stuff. We also need these three buttons on the side. The first one runs our code against the test framework, tells us whether what we are doing will succeed or not, and tells us when we have finished the exercise. In this case we failed, because we haven’t actually fixed anything. The second button gives us a hint. And finally, the third one resets everything so that we can start from scratch. To actually fix this, we will use parameterized queries — as Elie said, also called prepared statements. Since we are not supposed to use user input directly, we will tell the SQL engine to prepare a statement that specifies exactly where to expect external input. So here, this is the parameterized statement, and then we give it the user name and password. Let’s see if that passes our test. Okay. So

there, this should be good. Now let’s refresh and double check that it actually works — oops. And we have successfully defended ourselves against this particular SQL attack. Remember, that is just the basic exercise. In practice, when creating actual websites, make sure your site sanitizes user input. Never trust external input. That’s it for the demo today. Let’s not tempt the demo gods any further. Back to the slides, please. (Applause.)
» YUAN NIU: Okay. So thank you for attending our session. You can sign up for early access to the course, which will be released this summer, and if you are interested in learning more about web tech or maps, head over to some of the sandboxes. There is one by stage 6. Thank you again, and enjoy the rest of I/O. We will stick around for questions. (Applause.)
>> Should we go down for questions?
>> Hi. This is one of the very first things that hackers start doing. The main problem that we face is that the website gets hacked and it keeps sending e-mails to a thousand other people, and we never knew until the website host actually brings down our hosting server. Do you know what is a best practice to stop any kind of e-mail being sent from a web hosting server — just a suggestion for that?
» ELIE BURSZTEIN: There are many options. One thing that we recommend is, if you don’t need the mail functionality, to remove it from PHP at the configuration level, and also add firewall rules that deny any connection to port 25. If you are not sending mail from your code, just prevent it at the Linux or operating-system level.
>> But you know, when we are on a shared server with other users, the host doesn’t give us the access to do that.
» ELIE BURSZTEIN: I understand. That’s why it is hard. No magic wand.
The best thing is to disable the mail functionality if you can.
» YUAN NIU: Keep an eye on your logs and look for spikes in traffic; that’s usually the first sign.
>> All right. Thank you.
GOOGLE I/O 2017 SAN JOSE, CALIFORNIA ***



>> Hi everyone. My name is

Sam Beder

and I’m a product manager on Android Things. I want to talk about Google services on Android Things: adding these services to your device, and your device’s potential. What I really want to convince you of today is that not only is adding Google services on Android Things easy and seamless, but it can make a huge difference in the use cases that you can support, as well as for your end users. We have many sessions on Android Things, as well as demos in the sandbox area and code labs, for

those of you coming to this session with ideas of devices that you want to make on Android Things, or for IoT devices in general, and I want to show you today all the compelling use cases that you can get when you integrate some of these Google services. So I’m going to go through a number of services today. First I’m going to talk about Google Play services, which includes a whole suite of tools such as the mobile vision APIs and location services, as well as Firebase. After that, I’m going to dive into Firebase in a little more detail, to show you how the realtime database that Firebase provides can let you publish and persist data and events in interesting ways. After that, I’m going to go into TensorFlow, and how TensorFlow, we think, is the perfect application of the powerful on-device processing of your Android Things device, to really add intelligence to that device. Next I want to talk about Cloud Platform, and how, using Google Cloud Platform, you can train, visualize and take action on your devices in the field. And I am going to touch on the Google Assistant and all the things you can get there. Before I dive into services, I am going to quickly go over Android Things. Android Things uses a system-on-module design. This means that we work really closely with our silicon partners to bring you modules which you can place directly into your IoT devices. These modules are such that it is economical to put them into devices whether you are making millions of devices, a very small run, or just a prototype. We had a session earlier today specifically on going from prototype to production on Android Things, which gives you more details on bringing your device to production. The Android Things operating system is placed on top of these modules. So Android Things is a new vertical of Android, built for IoT devices.
Since we work so closely with our silicon partners, we are able to maintain these modules in new ways. It allows these devices to be more secure and updatable. Also, since it is an Android vertical, you get all the Android APIs you are used to for Android development, as well as the developer tools and the Android ecosystem. In addition, on Android Things we have added some new APIs, such as Peripheral I/O and user drivers, that allow you to control the hardware on your device in different ways. Really, the key piece of Android Things, I believe, is the services on top. Because of the API surface that Android Things provides, it is much easier for Google to put our services on top of Android Things. I say endless possibilities here because not only does Google already support all the services I’m going to walk you through today, but any services that Google makes in the future will be much more portable to Android Things because of this API surface. So now, to start diving into some of these services, let’s talk about Google Play services and the useful tools that it provides. Google Play services gives you access to a suite of tools, some of which you see here. You get things like the mobile vision APIs, which allow you to leverage the intelligence in your Android camera to identify people in an image, as well as faces and their expressions. You also get the Nearby APIs, which let two devices near each other interact in interesting ways. And you get all the Cast APIs, which let you, from your Android Things device, cast to a Cast-enabled device somewhere else.
Next you have location services, which lets you query things like: what are the cafes near me, and what are their hours? You also get the Google Fit APIs, which allow you to attach sensors to your device and then visualize that data as steps or other activities in interesting ways. Finally, you get Firebase, which we will talk about more

in a minute. Some of you might know about CTS certification, and how CTS certification is a necessary step in order to get these Google Play services. With Android Things, because of the hardware model that I just talked about, these modules actually come precertified. They are all pre-CTS certified, meaning Google Play services will work right out of the box. You have absolutely no work to do to get those things on your Android Things device. We also have a custom IoT variant of Google Play services. This allows us to make Google Play services more lightweight, by taking out things like phone-specific UI elements and game libraries that we don’t think are relevant for IoT devices. We also give you a signed-out experience of Google Play services — so only unauthenticated APIs, because signed-in experiences just aren’t relevant for many IoT devices. So now let’s dive into Firebase in more detail. I want to walk you through one of our code samples. This is a code sample for a smart doorbell using Firebase. Let me walk you through this diagram. On the left you see a user interacting with the smart doorbell. What happens is, they press the button on the smart doorbell and the camera takes a picture of them. On the right there is another user with an Android phone; they can use an app that connects to the Firebase database and retrieves that image in realtime. So how does this work? You press the button on the smart camera and the camera takes a picture of you. Then, using the Android Firebase SDK, which uses the Google Play services APIs, all on the device, it sends this image to the Firebase database in the cloud. The user on the other end can then use the exact same Android Firebase SDK on their phone and retrieve that image.
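The doorbell sample itself is Java on Android, but the shape of the data it pushes — a server timestamp, an image URL, and optional Cloud Vision annotations — can be sketched language-neutrally. The field names here are guesses for illustration, not the sample’s actual schema:

```python
import json
import time

def make_doorbell_event(image_url, annotations=None):
    """Build the JSON payload a doorbell ring might push to a realtime DB."""
    event = {
        # Stand-in for a server-side timestamp, in milliseconds.
        "timestamp": int(time.time() * 1000),
        "image": image_url,
    }
    if annotations:
        # e.g. Cloud Vision labels such as ["person", "package"]
        event["annotations"] = annotations
    return json.dumps(event)
```

In the real sample, the realtime database propagates a write like this to every listening client (the phone app) within about a second, with the annotations arriving slightly later once Cloud Vision responds.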
In our code sample, we also send this image to the Cloud Vision APIs to get additional annotations about what’s in the image. These annotations could be something like: in this image there is a person holding a package. So it can give you additional context about what’s going on. It is pretty cool. If you go and build this demo, you can see that when you press the button and it takes a picture, in less than a second the picture will appear, and then a few seconds later, after the image is propagated through the Cloud Vision APIs, the annotations will appear as well. I want to walk through some of the code that pushes this data to Firebase. The first line you see here just creates a new doorbell-ring instance that we are going to use in our Firebase database. Then, all we need to do to make this data appear in our Firebase database is set the appropriate fields of our doorbell-ring instance. Here you can see, in the highlighted portion, we are setting the timestamp and the image fields with the server timestamp and the image URL, and the image will appear in the Firebase database, to be retrieved by the user on the other side. We also send our images to the Cloud Vision APIs to get those annotations. We do that by calling the Cloud Vision APIs and then setting the appropriate field for those annotations, so that the additional context appears as well for the user on the other end. I can’t talk about all the Google Play services, so instead I want to move on to TensorFlow. We really think that TensorFlow is the perfect application for the on-device processing of your Android Things device. As you have heard from some of the previous talks, Android Things is not really well suited if you are just making a simple sensor.
To fully utilize the Android Things platform, there should be some intelligence on the device. You might wonder, though: if you are making an Internet-connected IoT device, why do you need this on-device processing? There are several reasons why this can be really important. One reason has to do with bandwidth. If, for example, you are making a camera that’s counting the number of people in

a line, and you just care about that number, then by only propagating that number you save huge amounts of bandwidth, by not needing to send the images anywhere. Then there is intermittent connectivity. If your device is only sometimes connected to the Internet, for it to be really functional it needs on-device processing for when it is offline. The next reason for on-device processing has to do with the principle of least privilege. If you again have that camera where all you care about is the number of people standing in a line, by the principle of least privilege you should only be propagating that number, even if you trust the other end where you are sending it. There are also some regulatory reasons why this could be important for your use case. The final reason for on-device processing has to do with realtime applications. If you are, for example, making a robot that has to navigate through an environment, you want on-device processing so that if something comes in front of that robot, you will be able to react to the situation. Again, I want to mention that we have a code lab for TensorFlow on Android Things, and you can try it out in the code lab area or at home. I want to do a live demo so we can really see how it works. What I have here is a pretty simple setup. We have one of our supported boards, a Raspberry Pi in this case, as well as a button, a camera and a speaker. The button here is on top, the camera is located in this Android figure’s eye, and the speaker is in its mouth. What’s going to happen is, when I press the button, the camera will take a picture. That image is then sent through a TensorFlow model located locally on the device, and the speaker will then say what that TensorFlow model thinks it saw. So for you here today I have various dog breeds, because locally in this TensorFlow model I have what’s called the Inception model.
Now, that Inception model is a model provided by Google that's able to identify thousands of objects, including dog breeds. So let's see if we can do it. I just need to line up the image.
>> I see a Dalmatian.
>> SAM BEDER: All right. For those of you who couldn't see it:
(Applause.)
>> SAM BEDER: Yeah, it is, in fact, a Dalmatian. But let's do it one more time, to show you can do more than just one dog breed. This time I have a French bulldog. All right, line it up again. Hope for the best.
>> Hey, that looks like me. Just kidding. I see a French bulldog.
>> SAM BEDER: All right.
(Applause.)
>> SAM BEDER: Good job, little guy. So as I mentioned, this is all running locally. This is not connected to the Internet at all, and it is battery powered. It is totally portable. I think this example really shows some of the power you can get with TensorFlow. Now let's walk through some of the code that makes this integration possible. This first page, as you can see, is pretty simple. It just shows loading the appropriate TensorFlow library to be used by our device. The first thing I want you to note here is that we are actually loading the same libraries as are used by Android, so all the TensorFlow code that works on Android will also work on Android Things. All the samples that you have on Android for TensorFlow you can port immediately to Android Things. The second thing I want you to note is that we actually load the inference libraries of TensorFlow. TensorFlow is basically composed of two sets of libraries. There is training, where you give it thousands of images along with labels so you can build the model that makes predictions; and then there are the inference libraries, where you use that model you trained to make those predictions. Let's go to the core functionality.
These are the steps to run input data through a TensorFlow model. The first method you see there, the feed method, is where we load in your input data. It takes three arguments: the input layer name, which is the first layer of the TensorFlow model where you are going to put your input data; next, the tensor dimensions, which describe the structure of your input layer so you can understand what's going into your model; and then the image pixels. Here the input data is pixels, but this same type of TensorFlow model will work across many use cases. If instead you had just sensor data, or a combination of sensor data and camera data, you could use the same type of TensorFlow model and it would still work. On the next slide, the highlighted portion is where the actual work gets done. We call the run method to run this input data through our TensorFlow model and get that prediction on the other side. Finally, we need to fetch our data so we can use it. We call fetch along with an output array to store our data. Now, this output array is composed of elements that correspond to the confidence that an object is what we saw in the image. In our first example we predicted Dalmatian; that means the element with the highest confidence was the one that corresponded to Dalmatian. You can do more than just take the highest confidence, though. For example, if there were two results that were both highly confident, you could say: I think it is one of these two things. And if there were no results above a certain threshold of confidence, you could say: I don't know what's in this image. Once you have the output of confidences, you can do extra processing depending on your use case, but I think there are even more interesting things we can do once we connect devices like this to the cloud. So next I want to talk about Google Cloud Platform, and specifically Cloud IoT Core. Cloud IoT Core is a new offering that we are announcing here at I/O for connecting IoT devices to the Cloud Platform. It has a number of services.
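The confidence handling just described can be sketched in plain Python. The on-device API in the demo is the TensorFlow inference library for Android (Java); the labels, thresholds and function names below are illustrative assumptions, not the talk's actual code:

```python
def interpret(confidences, labels, threshold=0.6, margin=0.15):
    """Turn a fetched output array of confidences into a spoken answer.

    Mirrors the logic from the talk: report the top label, report the top
    two if they are close, or admit uncertainty below a threshold.
    """
    # pair each label with its confidence, highest first
    ranked = sorted(zip(labels, confidences), key=lambda p: p[1], reverse=True)
    best_label, best = ranked[0]
    if best < threshold:
        return "I don't know what's in this image."
    if len(ranked) > 1 and best - ranked[1][1] < margin:
        return "I think it is a %s or a %s." % (best_label, ranked[1][0])
    return "I see a %s." % best_label

labels = ["pug", "dalmatian", "bulldog"]
interpret([0.05, 0.90, 0.05], labels)  # "I see a dalmatian."
interpret([0.30, 0.30, 0.40], labels)  # "I don't know what's in this image."
```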
You can do things like MQTT; MQTT is a lightweight protocol for IoT devices. Cloud IoT Core is a 100% managed service: you get things like automatic load balancing and resource provisioning, and whether you connect one device to Cloud IoT Core or a million devices, all of these things work the same way. It is a global access point: no matter what region your device is in, you can use the same configuration and connect to the same Cloud IoT Core. You can configure individual devices that you have in the field, as well as control those devices, set up alerts, and set up role-level access controls, allowing one user to have read and write access over a set of devices while another user has only read access, or access to only a subset of those devices. As I mentioned, Cloud IoT Core connects you to all the benefits of Google Cloud Platform. This diagram shows a bunch of benefits that Cloud Platform provides, and I'm not going to go through all of them, but just to point out a few: you get things like BigQuery and Bigtable, which allow you to store all the data you are gathering from your Android Things devices and visualize and query over that data. You also get Cloud ML, to build even more complicated machine learning models based on all the data you have collected, using the power of the cloud. Finally, you get all the analytics tools that Cloud Platform provides, to visualize and set alerts on your data and take action on the devices out in the field. To understand these analytics a little better, I want to go through one more demo. This demo is running live in our sandbox area. We have set up a bunch of environmental stations running on Android Things and spread them around campus. They have a bunch of sensors on them, things like a humidity sensor, temperature, and motion detection, and we are able to send all of this data to the cloud by connecting it through Cloud IoT Core. You can see some of the data from these devices in the aggregate, and you can also dive in to one specific device to see more data on what's going on with that device, as well as time-series data on how that device has performed over time. You might notice, though, that while this demo shows really well that you can connect these devices to Google Cloud, it doesn't really utilize the on-device processing that I talked about. So I want to go over a few more examples that show these two services working together, because when you combine TensorFlow and Google Cloud Platform you can do some amazingly powerful things. My first example extends the environmental station demo I just walked you through. Imagine that instead of just putting environmental stations around, we connected them to a smart vending machine. We would then be able to use all the input data from our environmental station in a machine learning model, using TensorFlow, running locally on the device. You could predict things like supply and demand based on the vending machine's environment, and optimize when the vending machine should be restocked. You could also connect all the vending machines to the cloud and do even more complicated analysis on them. You could do inventory analysis to figure out which items perform best in which environments, and you could build even better prediction models based on all the data you are collecting. This is a perfect example for what we call federated learning.
So federated learning is when you have multiple machines that are all able to learn locally, but based on that local learning you can aggregate the results to make an even better machine learning model in the cloud. Here, you can imagine having one vending machine in a school and another vending machine in a stadium. Both vending machines would have very personalized models based on their environments, but both would also benefit from each other by aggregating their data in the cloud. That is a good example of doing interesting things without a camera, but my next example goes over a camera use case, because I think cameras are perfect applications for this kind of on-device processing. Imagine you have a grocery store, and the grocery store puts up cameras to count the number of people standing in line. Each camera would use a TensorFlow model, running locally, that is able to count the number of people in the image and propagate that number to the cloud. You could use this data to open the optimal number of registers at any given time, so you never have to wait in line at the grocery store again. With all of your aggregated data you could also build more complicated machine learning models. You could predict how many people you should staff at your grocery store on any given day, and the differences between grocery stores. It could also be useful for the shoppers, the end users: you could imagine making a mobile app where, at home, you can check how long the grocery store line is, so you are never frustrated by having to wait in line. The next use case I want to go over broadens this camera example a little more and applies it to an industrial use case.
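The aggregation step just described, combining locally trained models into a better shared model, can be sketched as a sample-weighted average of model weights. This is a minimal illustration; the two "vending machines", their weights and sample counts are made up:

```python
def federated_average(updates):
    """Combine locally trained weight vectors into one global model.

    updates: list of (weights, n_samples) pairs, one per device.
    Returns the sample-weighted average of the weight vectors.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# two local models trained in different environments
school  = ([0.2, 0.8], 300)   # trained on 300 local examples
stadium = ([0.6, 0.4], 100)   # trained on 100 local examples
federated_average([school, stadium])   # roughly [0.3, 0.7]
```

Each device keeps its personalized model, while the cloud model benefits from every device's data without the raw data ever leaving the device.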
So imagine we have a factory that makes pizzas, and we add a camera that's able to do quality control, to increase both the quality and the efficiency of this industrial application. I should note that we have another talk specifically on enterprise use cases on Android Things, so you should listen to that talk if you want to know more about what's possible on Android Things for these industrial applications. In this case we would have a TensorFlow model that has locally learned how to accept and reject pizzas by counting the number of toppings on each pizza. Most of them, we will see, have six tomatoes and five olives, so they are accepted; then we will come to one that has too few tomatoes and too few olives, and we reject that pizza. You could do more analysis, such as tracking our throughput and flagging if our error rate goes above a certain threshold. There is one more use case I want to go over that uses machine learning in a slightly different way, and that's reinforcement learning applied to an agricultural use case. Imagine we have a field that has a bunch of moisture sensors in the ground, as well as sprinklers, and these are all connected to a central hub running Android Things. This hub could do some machine learning to optimize exactly when and how much water each sprinkler should output, to optimize our crop growth. You may have heard of DeepMind, a company at Alphabet that recently made AlphaGo, which used reinforcement learning. It is an amazing tool that can be used on Android Things really well. With reinforcement learning you can discover nuanced behaviors. For example, imagine your field had a hill on it. In that case you may want to water the crops at the bottom of the hill less than those at the top, because the sprinklers at the top of the hill might produce runoff water. Android Things makes integrations like that seamless and provides you the tools to do anything you can imagine. I think that using things like TensorFlow and the cloud together enables some really amazing use cases that you can't achieve with just one; combining these services lets you do so much more for your device and end users. There is one more service I want to talk about today, and that's the Google Assistant. Android Things supports the Google Assistant SDK.
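The accept/reject check just described can be sketched in a few lines. The expected counts (six tomatoes, five olives) come from the talk; the error-rate threshold is an illustrative assumption:

```python
EXPECTED = {"tomato": 6, "olive": 5}   # spec from the talk

def inspect(toppings):
    """Accept a pizza only if every topping count matches the spec."""
    return all(toppings.get(kind, 0) == n for kind, n in EXPECTED.items())

def error_rate(pizzas):
    rejected = sum(1 for p in pizzas if not inspect(p))
    return rejected / len(pizzas)

line = [
    {"tomato": 6, "olive": 5},   # accepted
    {"tomato": 6, "olive": 5},   # accepted
    {"tomato": 2, "olive": 3},   # too few toppings: rejected
]
results = [inspect(p) for p in line]        # [True, True, False]
flagged = error_rate(line) > 0.25           # flag if too many rejects
```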
There is a huge number of use cases that we think the Assistant can enable for you. It allows you to connect to all the knowledge of Google, as well as to control the devices in your home. Again, we have a code lab that goes over getting Android Things to work with the Google Assistant; you can do it at home or in our sandbox area. We partnered with AIY, a group at Google that makes kits for do-it-yourself artificial intelligence makers. What you see on the screen here is the kit they recently released, the Voice Kit, which is one of the easiest ways to get started with Android Things working with the Google Assistant. I want to go over one more feature of Android Things, and that's the Android Things developer console. The Android Things developer console brings all these services together. It is our new developer portal that lets you add all these services to your device in a really simple way. The key with the Android Things developer console is customization: you get ultimate control of which services will go on your device. You also get device management and updates. It allows you to create your projects, as well as upload your own APKs and push those feature updates to devices in the field. It is also where you get all the updates from Google, the security updates and feature updates that keep your devices secure. Since you get total control, I believe the customization of the developer console gives you the power to create anything you can imagine, unlocking the potential of what we think is possible with Android Things, especially when combined with Google services. So, to summarize: Android Things gives you a platform that makes hardware development feasible.
It gives you all the Android APIs to make the development process easy, combined with the system-on-module design to make it quick and economical to build a prototype and bring that device to production. But the services on top, I believe, are the huge factor that allows you to innovate and enhance your device, as well as bring new features to your users. Google Play services gives you a suite of tools like location services as well as Firebase. You get TensorFlow, which uses the powerful on-device processing of your Android Things device to add intelligence to your device. You also get Cloud Platform, and specifically Cloud IoT Core, to connect your device to the even greater intelligence of the cloud. And finally you get the Google Assistant, the latest and greatest in Google's personal assistant technology. All these services, and anything that comes in the future, will fit on top of Android Things. I want to leave you with my call to action. We have a huge number of sessions on Android Things this year, and demos and code labs to learn more of what's possible on Android Things. We have a developer site where you can download the latest Android Things image and start making your idea. I encourage you to add some of these Google services to your device to see how powerful this can be, and tell us about it. Join our developer community, where thousands of people are already asking questions, sharing their ideas, sharing their prototypes and getting feedback. Again, I'm Sam Beder, and I look forward to hearing about all the amazing devices you are building on Android Things that integrate these powerful Google services. Thank you.
(Applause.)
>> SAM BEDER: I think I have a few minutes for questions if anyone wants to come up to the microphones.
>> Hardware and... (Off microphone.) Do you have a... (Off microphone.)
>> SAM BEDER: So you mean... (Off microphone.)


>> MARK SPATES: Thank you so much for coming to the smart home talk. The goal here is really to give you guys a little bit of understanding and background on how you connect an IoT device to the Google Assistant, so you can allow that device to be controlled by the Google Assistant. I'm Mark Spates, the product lead for smart home on the Google Assistant. This is David Shyer, kind of my partner in crime; he is the engineering lead for smart home on the Google Assistant. I think when you start to look at some of the things that are happening in this space, it's really clear that having assistant-controlled physical space is going to be extremely important moving forward. So when we started to look at smart home and the Google Assistant, it was clear this was a feature we had to have when Google Home launched last year. The feature has become a pillar of the Google Home experience and what users have come to expect when interacting with the Assistant. The other thing is the IoT market, which you guys have heard about a thousand times. It's been one of the biggest buzzwords of the last three years, and it's actually finally happening, and assistants are part of that interaction with these devices. So for us, we knew there was a unique opportunity for the Google Assistant as it relates to the smart home space. Just basic numbers, numbers you've seen a thousand times: IoT is growing. I want to look at 2017. There are 8.4 billion connected devices, and out of that, 5.2 billion are actually consumer-segment devices. It's huge. This means that having a physical thing in your space that's connected to the cloud has reached a big enough penetration rate that building control for it actually makes sense to a user. And those devices are super diverse: you have everything from cars and lights and fans to robot vacuums. I just got a connected machine delivered to my desk so I could test my barbecue this weekend. That's how far we've come. Everything will be connected, and if it's connected, it should be controlled at some level by the Assistant. The next thing we really started to look at is where the Assistant plays in this user interaction. For us, it was always that the Assistant will be at the center of these intelligent interactions with these IoT devices. And we put the word "intelligent" in there for a specific reason.
Today, a lot of the ways we control these things are simple on and off switches. They're actually not intelligent: it's simply an on and off command, and the thing goes on and off, but it doesn't take into consideration context, device type, and all the things a user would come to expect. Which got us to this point in our development, where we said: it's about conversations, not about demands. This is a core philosophy our team holds, and we make product decisions and engineering decisions based on it. A user should be able to say multiple things to a device, not just "turn this device on." If you look at a light and all the vast commands you can give it just to simply control it, it's amazing: you can change the color, you can turn it on and off, you can do things like "make the bedroom brighter." Imagine yourself sitting there with your spouse, your roommate or your best friend. Think about all the ways you tell that person to turn the light on or control the light. It's extremely different if you're talking to your mom versus your girlfriend versus your boyfriend. All these things we need to be able to understand so we can do what the user wants. To do this, we had to get context. We had to understand things like: what is in the home? What are the rooms in the home? Who is in the home? This is the key reason our team decided to create the Google Home Graph. The Home Graph stores and provides the contextual data about the devices in homes to the Google Assistant. Now, this may sound very familiar. You've heard these things: there's the Knowledge Graph, a graph for this, a graph for that. But, actually, the home is one of the last frontiers. If you look at your day-to-day, when you walk out of here, we pretty much know that you've got a mobile phone, these are your GPS locations, and this is where you are.
As soon as you walk in the home, a lot of us know that you walked in the home, but we have no idea what happens after that. Are you in the living room? The bedroom? Watching TV? Your phone may be in the basement while you are upstairs. We have to have a contextual understanding of the home to have a good experience. Let me walk through the Home Graph really quickly. We look at it as properties, and there are three big sets of properties, with a lot of things that build upon them. There's a structure. That structure has an address, it has managers, it has rooms, it has labels. Rooms are very specific: saying the exact same thing in the bedroom and in the living room could mean totally different things in context, so you have to understand where you actually are in the home. And there are devices. Devices are extremely unique, and we have to know: what type of device is it? What are the capabilities of the device? What are the attributes? What's the current state? And the reason is, there are situations like this. Let's go back to my example. You're sitting on the couch and you say, "dim the lights in the living room a little bit." That's a super complex command, or conversation, to have even with someone sitting right next to you. Using the Home Graph, this is what we do. First we ask: are there any lights in the living room? Basic question. Yes, there are lights in the living room. Okay. Are the lights on? Yes, they're on, that's awesome. What are they set to? Oh, these lights are on and set to 50 percent. What does "a little bit" mean? In this case, "a little bit" may mean 3%. It's a default we put in there. We support things like "a little bit" and "a lot," and our interns put "hella" in there because they're from California. We take that data, package it up, and pass it to the Google Assistant, and the Assistant says: all right, got it, I will change these lights from 50% to 53%. That's how the data from the Home Graph is actually used in real time. It was awesome as we were building this, and we said to ourselves: I think we stumbled upon what's going to unlock this market for everyone when it comes to controlling devices in the house. So we have to get this to developers. How do we allow developers to build interactions that leverage the Google Assistant and the Home Graph? So today, I'm really excited to introduce Smart Home Apps. Smart Home Apps are an experience that you can build using Actions on Google. I'm sure you guys have heard tons about Actions on Google; it's my favorite platform Google has made.
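The resolution of "a little bit" against current state can be sketched as a lookup plus a clamp. Only the 3-percentage-point step comes from the talk; the values for "a lot" and "hella" below are illustrative guesses:

```python
# Step sizes in percentage points for relative brightness terms.
# Only "a little bit" = 3 comes from the talk; the others are guesses.
STEPS = {"a little bit": 3, "a lot": 20, "hella": 40}

def adjust_brightness(current_pct, amount, direction=+1):
    """Resolve a relative phrase against the light's current state,
    clamping the result to the valid 0..100 range."""
    return max(0, min(100, current_pct + direction * STEPS[amount]))

adjust_brightness(50, "a little bit")                 # 50% -> 53%
adjust_brightness(50, "a little bit", direction=-1)   # dimming: 50% -> 47%
```

The point is that the phrase alone is meaningless; the Home Graph supplies the current 50% so the Assistant can issue an absolute command.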
But it's going to be the way we go from this being an adopted platform to everyone in this audience being able to hook up to the Assistant, leverage the Home Graph, and control devices in a very conversational way. But that's enough of me talking. I think most of you want to see how you actually build one of these things, and with that, I'll turn it over to David to walk you through the developer experience.
>> DAVID: Thanks, Mark.
[applause]
>> DAVID: As Mark said, most of you have probably seen Actions on Google by now. It's our new platform that lets you extend the Assistant directly to your applications, to your services, and in this case, to your hardware. With this launch, we're bringing smart home devices into the Actions on Google platform. You register the devices to the Home Graph, and rather than the usual Actions on Google model, where you talk to your app and the voice changes, you can have direct actions that communicate directly with your devices and everything else in the home. If you have lights, you register them to the living room. "Turn on my lights" turns on your lights and other vendors' lights; it works seamlessly. It's a descriptive grammar that lets the user interact naturally. No one likes a light bulb that asks too many questions, so our system is designed for one-shot single actions: everything just works. How do we do this? You guys, the folks building hardware, building smart home devices, light bulbs, thermostats, microwaves, sprinklers: you're building hardware, and we make it really easy for you. You provide a lightweight infrastructure on top of what you're already doing; you've got cloud control on top of these devices.
Add in some basic integration protocols. All of this is stock, trivial stuff to implement, and we add on top of it all the stuff that Google is good at: the natural language understanding, the speech generation, the personality, the Home Graph Mark talked about, internationalization, context management, and all of these complex schemas of what it means to be a washing machine and how users interact with washing machines. We apply our learning to these engines so they keep getting better, and everything we improve, you get the improvements too. We basically integrate in two ways: we have a flow for device setup and a flow for execution. This is the setup flow. Very simple. I want to show you all of this end to end. Once you've registered, your app shows up in our list of smart home apps and the user clicks on it. We send them off to you for registration. You send them back to us, and we issue a request for the devices: any data you want us to know, in a simple protocol. They have 17 lights that are called these things, in these rooms, with these traits associated with them, and so on. That's a full sync each time; you don't need to do delta management, we do that for you. We take each thing and put it in the Home Graph. Then at execution time, we use our Assistant stack to do all the heavy lifting. We've gotten your state from your devices; we know what those things are called and what their capabilities are. We feed that into our real-time speech recognition engine. If the user named their light something weird, we understand it. If it's "chandelier American gazebo," it understands it: the language model knows it, and the smart home engine knows it. All of that lets us build the query semantics. We give those to you, you do your tasks, simple stuff, and we generate the text response and speech response to the user, and it works. We have built some basic device types, and we're building a lot more over the coming weeks and months. Traits are where functionality occurs. The simplest trait ever is on and off: this device has a switch. Traits, in turn, are made of attributes, states and commands. On and off is pretty simple: no attributes, one state (I bet you can guess what it is), and pretty much one command, which just turns things on and off. That trait can then be composited into whatever devices you want to make. What we've launched publicly right now are lights and switches and thermostats and the like. If you build a robot, you can reuse that trait and make your robot have an on-off trait.
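The full sync David describes ("they have 17 lights, in these rooms, with these traits") is the smart home SYNC exchange. A minimal sketch of the response shape follows; the device IDs, names, and user ID are made up, and the field names reflect the smart home intent schema as I understand it:

```python
def sync_response(request_id):
    """Sketch of a smart home SYNC response: report every device,
    its type, and its traits so the Assistant can fill the Home Graph."""
    return {
        "requestId": request_id,
        "payload": {
            "agentUserId": "user-123",        # your ID for this user
            "devices": [
                {
                    "id": "light-1",          # your ID for the device
                    "type": "action.devices.types.LIGHT",
                    "traits": ["action.devices.traits.OnOff",
                               "action.devices.traits.Brightness"],
                    "name": {"name": "living room lamp"},
                    "willReportState": False,
                },
            ],
        },
    }
```

Because it is a full sync, the handler simply reports everything it knows each time; no delta bookkeeping is needed on your side.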
Generally speaking, all your robots should have an on-off trait. Highly recommended; otherwise problems occur. As we add more and more traits, you'll be able to composite your own device types, whatever you want. Maybe it's a clock radio with clock and radio functionality and all of that. Maybe it's a robot that combines traits we build with custom traits that you build on top of this platform. I'm going to walk through the demo. It's really simple; I want to do it in six steps. Register your project on the Actions on Google console. We'll set up OAuth. We'll implement the smart home app. We'll use the gactions command-line tool to push that package. Once you do that, you go to the real app, the production-level Assistant app, nothing special: home control, connect to your app, set up your devices and start talking to them. So let's start the demo. This is the Actions on Google console, the new console we launched yesterday. I'm going to create a project and give it some name; it doesn't really matter. Here, I'm going to call it Sheep Project. See if it lets me do that. This will take a few seconds. What it's doing is creating the project inside the Google system, assigning a project ID, and giving you the tools you need. You may have seen the API.AI discussions earlier today. Today we are not using API.AI for the smart home stuff, because we've written all the grammar: you don't need to write any grammar for these actions, we provide it for you. Actions SDK: we'll skip reading the docs, because we're developers, and go set up the app information. A lot of the stuff on this page, honestly, we don't need for home automation or smart home projects, but we'll fill it in anyway because the application wants it. For example, the Assistant voice: because everything you do here runs as a native Google app, with native understanding, we can use the natural voice. We don't need to change the voice.
So we'll just say "home control demo," full description "demo"; we don't really care. Category: let's click "home control" down here at the bottom. And what did I miss? "To control some stuff." Obviously, when you do this for real, since this will eventually be your registry listing, you'll want real text there. It's asking for icons, same thing; I'm going to load up some dummy ones. Eventually these will be used when you submit your app. Away we go. Testing instructions we can skip. It is called Sheep Company; let's not think about this too hard. And feel free to spam me there. Privacy policy: just a dummy URL for now. Great, we've now created our project. One more thing to do, number two on that list: account linking. So let's add account linking. Authorization code. Let me go and grab my keys here. These are the keys in the demo app we have published; please do not use these keys in your actual app, and I should not have to tell you why. I'm going to set this up stock, right off the shelf, so I'm not doing anything funky. Test instructions: demo. OAuth should now be set up. Over here, this is our stock example app, and all I'm going to do is start it, with standard proxy SSL stuff, so I'll start this running now. Away we go. Now, when I load this, I should be able to go here, and please don't hack into my demo while I'm doing it. There. I know, I shouldn't have said that. You're doing it now. I know I'm not crazy; it's one of you. This is our example app. We've used this app for testing out the integration. These are virtual lights; we use them in our own development. I can't get 100 light bulbs at my desk, people complain, so we need to do virtual testing. Turn them on and off and set them up for cloud registration. Now I have an app running. My next step: I'm going to hop over here and find the action. We have this gactions command here, which simply... whoops... actually, before I do that, let me show you the action package. You may have seen action packages in other demos with lots and lots of other stuff in them. Ours is really simple. That's all it is: you put the URLs used for execution and sync in there, and everything else is stock.
And then I can run this; let me go and grab my project ID, which happens to be easily right here. And if I didn't forget something because I'm on stage... of course it asks me to update while I'm on stage, because apps know when I'm on stage. Skip that. Push the app, and now it should be live. I'm going to go over to the phone. Here on the phone, we start the Assistant, go into settings, home control. I have no devices. There is my app. The app I just created and ran is now available, just like all of our production partners here. Click on that app, and... what we should probably see... whoops... yeah, yeah, yeah. If we did that right, I've got my devices, and the virtual lights I put up here are now available on the phone. I'm going to skip nicknaming them because I'm not going to cover that right now, and I can go back to the Assistant and say, "turn on all my lights." All right, two of those decided to get knocked off the cloud. "Turn on all of my lights."
[applause]
>> DAVID: "Make my lights green." So, pretty simple stuff. We have lots of other device types, thermostats, more complex things. This is the base case, and you saw, yes, I did cheat: we have a little instance over here, but that's all it takes. So with that, I will turn it back to Mark.
>> MARK SPATES: Awesome. Great demo, David.
[applause]
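The "turn on all my lights" command in the demo reaches the developer's service as an EXECUTE intent. A minimal handler might look like the following sketch; the request shape follows the smart home schema as I understand it, and the device IDs and state store are made up:

```python
def handle_execute(request, devices):
    """Sketch of an EXECUTE handler: apply each command to the targeted
    devices and report the resulting state back to the Assistant."""
    results = []
    for command in request["inputs"][0]["payload"]["commands"]:
        ids = [d["id"] for d in command["devices"]]
        for execution in command["execution"]:
            if execution["command"] == "action.devices.commands.OnOff":
                on = execution["params"]["on"]
                for device_id in ids:
                    devices[device_id]["on"] = on   # flip local state
                results.append({"ids": ids, "status": "SUCCESS",
                                "states": {"on": on}})
    return {"requestId": request["requestId"],
            "payload": {"commands": results}}

# "turn on all my lights" against two virtual lights
devices = {"light-1": {"on": False}, "light-2": {"on": False}}
req = {"requestId": "r2",
       "inputs": [{"intent": "action.devices.EXECUTE",
                   "payload": {"commands": [
                       {"devices": [{"id": "light-1"}, {"id": "light-2"}],
                        "execution": [{"command": "action.devices.commands.OnOff",
                                       "params": {"on": True}}]}]}}]}
resp = handle_execute(req, devices)   # both lights now on
```

The Assistant does the speech recognition and grammar work; the handler only has to apply simple state changes, which is the "you do your tasks, simple stuff" part of David's description.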

So you go and crush it like David just did, build your stuff in like 36 hours and you’re ready to go, but there are still a couple more things you have to do to launch it. Get a device, because we have to test it. Then once you send the device over to Google and you deploy your action, we will do a certification process. And the certification process is really simple. We have a few testers go through, test basic things like latency, check the privacy policy, and say, all right, this is good, it’s ready to go into production, certify it, and launch it into production. It’s a really simple process. We are making sure that the experience the end user is getting is really good, so that’s where the certification process comes in, just to make sure things like the right grammars work and the right expectations around latency are set. We have a bunch of partners who have been working with us. They’ve helped us build this process, and we are super excited, because they’ve helped us learn a lot about what we can do better, but more importantly, I think the rapid pace at which we’ve been adding SmartHome integrations has been amazing. We started with three integrations. In the keynote, you heard our team say we have 70 now. We did most of those in the last six weeks using this process. So we are extremely excited to open this up to everyone so we can go even faster. So you may say, what’s next? The first thing is, everyone leave here, don’t go to the concert. Build a SmartHome App, and then go to the concert. Right, but there are other things you may want to check out. There is more device support and features coming in the next couple of months, as David said. All of these are on display in the sandbox. If you go to the Assistant sandbox, the IoT sandbox, they’re already on display and running in production, which is awesome. And the last one is that we work really closely with the Assistant SDK team.
If you’re thinking about how do I actually embed the Assistant into my device, check out the Assistant SDK. I think we flew through this quickly, so we have a couple of minutes left, and we’re more than happy to answer any questions you guys may have.

>> Microphone’s right here if you want to ask a question.

>> Can you use the Assistant to open the slide, like PPT?

>> Repeat the question one more time.

>> Use the Google Assistant, like, “show me new slide.”

>> Oh, yeah. So on our side, we think more about the hardware, not the actual software piece, but that’s a great thing you should think about building, and the Actions on Google platform allows you to do that.

>> Okay, thank you. Cool. Well, thank you, guys, so much for coming out. [applause]

>> Have a great I/O.

Services Provided By: Caption First, Inc.
SAN JOSE, CALIFORNIA MAY 18, 2017

5:30 PT STAGE 3

Coming up – Google Play Awards at 6:30 p.m.

Google I/O 2017 San Jose, California

» All right. Who’s ready for tonight? [Cheering and applause]

» There’s no winners here. All right. Thank you, guys, for coming. We’re super excited for you, and congratulations on being nominees. Tonight should be super fun. We’re really excited, and we hope you guys are too. Quick logistics for this evening: when or if you hear your name called as a winner, please come up to this side of the stage here. The presenter will hand you the award. Be prepared, it’s heavy. You’ll stop, get a good look at the photographer, and exit the side of the stage, and we’ll actually take the award for you, and you can go back to your seat, and then afterwards, we’ll rightfully give you your award back. So enter, exit, and good luck to everyone. If there are any questions, come find us. Good luck, everyone. Thank you. So, yeah, please only use this stairway, just because this one is going to be really hard to use. Okay.

GOOGLE I/O 2017 SAN JOSE, CALIFORNIA MAY 18, 2017 6:30 P.M. PACIFIC TIME STAGE 3 GOOGLE PLAY AWARDS

» Welcome to the 2017 Google Play Awards

Please welcome Purnima Kochikar, director of Apps and Games business development.

» Good evening, and welcome to the second annual Google Play Awards [Cheering and applause] We are thrilled to be back this year at I/O to celebrate your amazing work. Behind every great App is a creative vision and a remarkable team. Today, we celebrate you, your vision, and your teams. It’s a privilege for us at Google to be part of your amazing work. You build beautiful apps, successful businesses, and touch the lives of people around the world, and you challenge us to innovate on our platforms and keep pace with your imagination. Think about the diversity of the nominees in today’s categories: from Apps that teach kids to code to ones that allow you to play with friends around the world, from Think Tank to feeding the hungry, it is amazing to see the many different countries you come from and the many, many different apps that you are building. And from where I stand, it is exciting for me to also see all the wonderful things that connect you. You have a relentless focus on your users, an innovation hunger to pick up the latest features and capabilities, and a dedication to build beautiful apps that engage. In looking at your apps, the judging panel, of course, took into account innovation and design, but also thought about things like app quality that actually lead to user happiness. Things like, does your app drain battery? None of us likes that, do we? And judging by the ratings and reviews on Google Play, we are sure that our users and your users agree that your apps exemplify the best that is there in Android and Play. So with no further ado, let’s get the show started!

Our first category is Standout Indie. Please welcome Jamil Moledina, game strategic lead, Google Play.

» I ran an indie games startup. I learned firsthand how risky innovation is for a small studio, but at the same time, innovation may be all you have to stand out. It’s then incumbent upon platforms to shine a spotlight on those developers that dare to try something new, and helping operate that spotlight at Google is one way that I’ve been able to give back to this rich, diverse community. Indies have inherent value in that they demonstrate the endless capabilities of what small teams can build. They pour their hearts and souls into their work. They embody diversity, and often, they take unexpected leaps, introducing us to unique game controls and breakthrough story ideas, all stemming from a deep passion for their art while using games as a medium to express the human condition. This year’s nominees deliver on all counts. Let’s take a look.

» Standout Indie nominees are: Causality. Kingdom: New Lands. Mars: Mars. Mushroom 11. Reigns.

Standout Indie

» The winner of this year’s award is the epitome of a beautiful game experience on mobile. Its stunning graphics, melancholic soundtrack, and unique and native game controls give gamers the perfect immersion into the challenges and outcomes of this world. I fell in love the first time I played the early version of this game and am thrilled that we can share it with everyone on Google Play. The winner of this year’s Standout Indie award is Mushroom 11 by Untame. [Cheering and applause]

Please welcome Larissa Fontaine, director and global head of apps business development, Google Play. [Applause]

» Startups are all about taking risks and fuelling innovation. Small and scrappy, the most exciting startups inspire all of us by taking great ideas and turning them into breakout success stories. At Play, we care a lot about making sure that startups can find their audience and that users can find the great and exciting apps that are coming from this community.
In addition to innovating inside their own apps, startups are often early adopters of new platforms and new technology features, which means they’re creating some of the most interesting Android experiences today, and they’re also becoming some of our best partners. So now that we’ve set that bar really high, let’s take a look at the nominees » Standout Startup nominees are:

Castbox. Digit. Discord. HOOKED. Simple Habit.

Standout Startup

» This year’s winner built an original experience by rethinking fiction for the texting generation. They use an everyday mobile behavior to deliver creative and captivating stories that immerse the user in the narrative in real time. As a lifelong bookworm, I’m thrilled to announce that this year’s Standout Startup award winner is HOOKED by Telepathic. [Cheering and applause]

Please welcome David Singleton, VP, Engineering.

» Android Wear allows developers to reach users with glanceable information throughout their day. With Wear 2.0, our latest build, we introduced standalone apps. These allow developers to build experiences that operate independently without requiring the phone nearby. The new apps being created help simplify life’s daily activities, such as staying informed, exploring what’s nearby, and accessing guides and metrics to improve your personal health and workout routines. And while Android Wear is still a platform in its infancy, we are so excited by the support from both developers and hardware manufacturers, and I hope more of you will try it out. Let’s take a look at this year’s nominees.

» Best Android Wear Experience nominees are: Bring! Foursquare. Lifesum. Runtastic Running and Fitness. Seven.

Best Android Wear Experience

» The winner of this year’s award offers a smart and sleek experience utilizing the convenience of GPS and other key sensors. This year’s winner of Best Android Wear Experience is Runtastic Running and Fitness.

Please welcome Sascha Prueter, Director, Android TV. [Laughter]

» So while many mobile developers build for the small screen, this category celebrates audiences through Android TV and takes advantage of the larger screen in the house. Building for the large screen allows developers to explore new menu and UI options to create rich media experiences that drive really long session times. And thanks to a strong developer community — so thanks to you — the TV app section is a fast-growing and successful category. The number of TV apps in the Play Store on Android TV has doubled since Google I/O last year and now exceeds 3,000 apps and games. So let’s look at the nominees for this year’s Best TV Experience.

» Best TV Experience nominees are: AbemaTV. Haystack TV. KKBOX. Netflix. Red Bull TV.

Best TV Experience

» This year’s winner has completely updated their UI to provide a fresh, best-in-class TV user experience. Using simple controls and beautiful designs, they give users a content-rich offering with high-quality, live, and on-demand content. The winner of this year’s Best TV Experience is Red Bull TV by Red Bull.

Please welcome Clay Bavor, VP, Virtual Reality.

» So VR is cool because it can, like that guy, make you feel like you’re somewhere else, whether you’re on a beach with scary crab zombies or searching an ancient shipwreck for VR treasure. It really puts you right in the action, and you don’t just get to see what it’s like to be somewhere else, but really experience it. So Daydream is our platform for mobile virtual reality, and thanks to the passion and creativity of some amazing developers, there’s a large and growing library of really cool things to experience and do. You can explore different worlds. You can step inside games. You can just kick back and watch your own personal cinema. We’ve been so impressed with what we’ve seen so far. And so it’s my huge pleasure to announce the nominees for Best VR Experience are: The Arcslinger. Gunjack 2: End of Shift.
Mekorama VR. The Turning Forest. Virtual Virtual Reality.

Best VR Experience

» I love them all. But the winner — it’s hard to describe the kind of crazy off-the-wall experience of exploring 50 different alternate realities, doing weird jobs for bored artificial intelligences, all while trying to uncover the back story of the kind of shady company Activitude, the virtual labor system of the future. You’ll just have to try it. The winner of this year’s Best VR Experience is Virtual Virtual Reality by Tender Claws.

Please welcome Amit Singh, VP of Business Operations, VR.

» As Clay was talking about, virtual reality can take you anywhere. Augmented reality, AR, can bring anything to you, as if it was there. And we’ve been working on a project for a while called Tango. It allows phones to sense just like we do: depth, scale, perception, precision. It can really bring objects as if they were right there. And people have done amazing things with it. You can shop for a couch, make sure it fits the environment you’re in, or you can invite dinosaurs and play with them in your kitchen. And so let’s take a look at the nominees who have built these amazing experiences powered by Tango.

» Best AR Experience nominees are: Crayola Color Blaster. Dinosaurs Among Us. Holo. Wayfair View. WOORLD.

Best AR Experience

» Very, very tough choice. These are all great. You must try them out. They’re all actually over there with the Daydream and Tango experience. The one that’s the winner really brings whimsy and fancifulness. It really lets you be in your space and decorate the wall. It’s really fun. Super fun to play. The winner is WOORLD by Funomena.

Please welcome Shazia Makhdumi, Global Head of Family, Education and Partnerships, Google Play.

» Hi, everyone. So I am the mom of two boys, ages 9 and 10, who are captivated by the worlds of Harry Potter and Minecraft. I love how great apps and games can capture kids’ imagination and fire their creativity. But, in addition, the right apps also go beyond that. They help change parents’ perception of mobile devices, beyond electronic babysitters to tools that help their kids develop skills, to explore, to thrive, and to learn.
Now, if only someone could build an app that could keep my kids that same age forever. However, developers in our Designed for Families program, while they don’t do that, they use intuitive and kid-friendly

design, together with age-appropriate content, to engage and inspire kids. With colorful characters, creative visuals, and realistic sounds, they build fun, educational, and safe experiences on mobile devices. So this year’s nominees offer a diverse lineup sure to appeal to any household with kids. Let’s take a look.

» Best App for Kids nominees are: Animal Jam - Play Wild! Hot Wheels: Race Off. Teeny Titans. Think! Think! Toca Life: Vacation.

Best App for Kids

» So just like it’s really hard, as any parent would know, to pick your favorite child, you know, we’ve had to make a choice here. We love them all. But can you imagine anything more fun than showing off your style with characters that express the real you? This year’s winner teaches kids how to use their creativity to build, explore, and share, all while learning about science. The winner of this year’s Best App for Kids is Animal Jam - Play Wild! by WildWorks.

Please welcome Jason Titus, VP of Engineering, Developer Product Group.

» One of the amazing things about mobile phones is how they let us play together across continents, and take time, whether it’s in line or on the bus, and actually make it time playing with your friends. In recent years, we’ve seen amazing improvements, with multiplayer games taking advantage of the huge numbers of mobile users, as well as the advanced technology in today’s phones. Multiplayer games come in all genres and sizes. But the thing that they have in common is the amazing ability to bring us together. A true multiplayer game is built from the ground up, and this year’s nominees are some of the best. Let’s take a look.

» Best Multiplayer Game nominees are: Dawn of Titans. FIFA Mobile. Hearthstone. Lords Mobile. Modern Strike Online!

Best Multiplayer Game

» This year’s winner reaches a global audience of nearly 190 countries. It’s developed by a team of 30 different nationalities, which seems so appropriate.
Through the developer’s continual push for excellence through iteration and experimentation, this app is constantly evolving to make learning one of your favorite leisure activities. The winner of this year’s Best Multiplayer Game is Hearthstone by Blizzard Entertainment.

Please welcome Jamie Rosenberg, VP, Android and Play business.

» It’s great to have all of you here. It’s amazing to think about the innovation in mobile apps over the past few years. We’ve seen so many great examples of that already tonight. As smartphones have become more powerful, you all have kept pace by finding more ways for our phones to help us in our daily lives. Great apps help us create, explore, stay fit, organize, learn, and so much more. And the very best of these do so through excellence in design and performance, thoughtful use of the capabilities of our devices, and often a bit of fun. Let’s take a look at the nominees for Best App.

» Best App nominees are: Citymapper. Fabulous. Memrise. Money Lover. Quik.

Best App

» So I’m just going to say, this app takes an activity that can be a drag and makes it a ton of fun. The Google Play award for Best App goes to Memrise.

Please welcome Sameer Samat, VP, Play and Android.

» We’ve had smartphones for a very long time, and I think we can all agree that games have come a long way on mobile. It’s exciting to see so many of you using capabilities on the platform to bring amazing experiences. You have changed the way people think about mobile gaming, by presenting rich storylines, mind-blowing graphics, and bringing out the competitor in each of us. With 2 billion Android devices, developers have the freedom to build fun and engaging experiences knowing that an audience is just one tap away. Let’s take a look at this year’s nominees.

» Best Game nominees are: Choices. Fire Emblem Heroes. Lineage 2 Revolution. Pokemon Go. Transformers: Forged to Fight.

Best Game

» Now, all these titles are inspiring. This year’s winner brings console-quality graphics to mobile, showcasing gaming on Android. Through stunning 3D graphics and incredible technical performance, they are bringing iconic characters to life and offering an inspiration of how gaming can truly be on mobile. The winner of this year’s Best Game award is Transformers: Forged to Fight. [Cheering and applause]

Please welcome Hiroshi Lockheimer, SVP, Platforms and Ecosystems.

» How’s everyone doing?

» How’s everyone doing, having fun? [Cheering and applause]

» That’s good. Before we get to this, I just wanted to say a quick thank you. You know, Android, Google Play, we would be nowhere without you, so a big thank you to the developer community. Thank you very much.

» All right. Well, at Google, our mission is to organize the world’s information and make it universally accessible and useful. Universally accessible. With this mission in mind and that phrase in mind, it’s with great pleasure I get to introduce this new category: Best Accessibility Experience. Here, we will highlight developers that offer innovative solutions to assist with different disability needs and overcome challenges. They deliver intuitive experiences and empower individuals through mobile apps. They foster communication, build independence, and have the capability of connecting users in very meaningful ways. Let’s take a look at the nominees.

» Best Accessibility Experience nominees are: A Blind Legend. Eye-D. IFTTT. Open Sesame! SwiftKey Symbols.

Best Accessibility Experience

» All right. Well, the winner of this year’s award is a brilliantly designed platform offering an experience inclusive for users and developers with various needs. They simplify daily tasks, especially for those with vision and motor impairments, to increase productivity and independence. Please join me in congratulating this year’s winner of the Best Accessibility Experience, IFTTT.

» I’m back. I was really hoping for my headshot again, but you’ll get it live instead. So our developer ecosystem has an extremely diverse makeup. The product of this breadth of culture, personal interests, and creative vision produces a lot of truly amazing work. You make a lot of amazing work, thank you. This category looks for apps that create meaningful social impact for a broad spectrum of people around the world while taking full advantage of the platform.
It rewards developers who challenge themselves to promote themes of contribution, accessibility, and knowledge sharing, through creative solutions that address issues locally, regionally, and even globally. So let’s take a look at the nominees.

» Best Social Impact nominees are: Charity Miles. Peek Acuity. ProDeaf. Sea Hero Quest. ShareTheMeal.

Best Social Impact

» This year’s winner generates large-scale reach to fight a major global issue. The app uses a simple interface to drive awareness and engagement for a major cause. They are a tremendous partner, and I encourage you all to try out their new Instant App launching here at I/O. The winner of this year’s Best Social Impact award is ShareTheMeal.

Please welcome back Purnima Kochikar.

» How about it? [Cheering and applause] Congratulations to all the nominees and the winners! [Cheering and applause] Hiroshi said it best when he said, without you, our tools and our platforms are nothing, so we thank you for your commitment, for your innovation, and I can’t wait to see you here next year to see what you will have built between now and then. So thank you all for coming. I hope you enjoy the rest of the show. There’s a big show at the amphitheater at 8:00. We hope to see you there. Thank you very much.