Honeywell and Foghorn Leverage Android for Smart Industrial Devices (Cloud Next '19)

[MUSIC PLAYING] ANTONY PASSEMARD: Thanks for being here. I'm Antony Passemard. I lead the Product Management team for Cloud IoT at Google. And I'm joined by a few folks here– Sastry, the CTO from FogHorn, Ramya, VP of Product, and Scott Bracken from Honeywell. So we're going to talk a little bit about some of the stuff we've been doing with FogHorn and Honeywell. A bit of an agenda– a little overview of Google and how we see IoT, a fairly high-level and generic view of IoT for us. And then FogHorn will come on stage and talk about their edge solution and their IoT Core integration, which is pretty cool. You'll see some of the platform, and they're going to talk about a use case they've done with Honeywell. And they actually have a nice demo here to show you what they've done, which is really impressive and I think adds a lot of value. So we'll have the demo, and Q&A at the end. I hope you enjoy it. So you're probably here because you all know that IoT can bring– and probably already is bringing– value to your businesses. Whether it's to solve something specific or to be part of a broader digital transformation, it's really central to creating differentiation for your businesses across industries. We see that in agriculture, really increasing crop yield, or in oil and gas, lowering energy consumption and doing predictive maintenance or security. We've seen some of those use cases. We're seeing logistics doing high-value asset tracking, or better driving behaviors for fleets, reducing risks for the drivers. So we see a lot of those use cases. Almost across every industry, we're seeing IoT use cases. What we've seen also– and this is a study that came out from IoT Analytics, a company that does a bunch of competitive intelligence on IoT in different sectors– is that in 2021, about 50% of companies will have IoT projects and will have seen value. So it's more than having a project. It's actually seeing value from IoT in 2021. That's only two years from now. But what's staggering is the difference from 2018 to 2021. There's a massive difference, a massive growth in the realization of value through IoT, which is the big change here. I'm going to talk a little bit about real time in some of this talk, because we're seeing that speed is really critical in IoT. We've seen analytics done on bulk data in databases, and that's great to build your models. But what we're seeing from customers is really the demand for real-time insights– real-time actionable insights, actually. And what's important, we're seeing that in the data. By 2025, a quarter of the data will be real-time in nature. That's quite a lot. And of that quarter of data, 95% is actually IoT data. That's real-time IoT data from the global datasphere, as they call it. It's kind of a funky word, but interesting. So the real-time nature– being able to make a decision right there on the spot, either locally or in the cloud– is super important for an IoT platform. And it's even more important because you want that accuracy to reduce costs, drive efficiency on a manufacturing line, simplify your logistics and reduce risk, and make your work environment safer. That's something we've seen in oil and gas in particular– tracking workers by their hard hats and whether they're in the zones they're supposed to be in, and things like that. This is vision intelligence applied to a work environment for safety use cases. That has to happen all in real time. You can't wait for the next day to say, hey, you shouldn't have been in there. The accident happens, and then it's
too late. One thing that we've noticed– and I've been in the cloud IoT space for quite some time now, and I've seen it go through a lot of cycles of PoCs and pilots and testing stuff out– I think that's changing. Now I'm seeing the PoCs and the pilots go to production. And that's a big shift that we're seeing. We're seeing that growth. People in companies have been testing IoT for a while. Now they realize what they can do with it, and now it's going into production. That's the big shift we're seeing. And the problem is you really have to think about, OK, am I doing my IoT project in isolation? Am I using the right platform? Am I limited by the platform I'm using? Am I restricting my possibilities with the platform I chose? So choosing the provider of your IoT platform is actually really important, and you oftentimes forget about the delays of managing complex infrastructure when you put an IoT platform in place– the specialized tech and learning curve you have to have for hardware communication, AI, data ingestion, all of that. There are no real standard protocols, so it's pretty messy. There are a lot of hidden costs that come with an IoT deployment. So having a platform that's simple, flexible, and can really help you get to value faster is critical for success. We see IoT from Google's [INAUDIBLE]. But we see IoT as a big data challenge almost more than anything else– big data in real time. And having that ability to gain real-time insight, as I said, will bring the competitiveness that you're looking for in your business. So in terms of platform, the Google Cloud IoT platform is a serverless platform, from connectivity to processing, storage, and analysis. Serverless means that you never have to worry about scaling up or down, or pre-provisioning and over-provisioning resources. It's just going to scale from one or a few devices to millions of devices very easily. And that's really a core differentiation of our platform, from connectivity to analysis. If we look a little bit deeper into the Cloud IoT platform itself, you're going to see several components here. The main one is Cloud IoT Core. That's something we launched to GA last year in March, so about a year ago. That's been pretty successful. This is a global front end for your devices– one URL globally. And we use all the Google front ends– the GFEs, we call them– to ingest data from anywhere in the world. You don't really have to think about where your device is physically and which region it has to connect to to have the lowest latency. That happens automatically. This is served by the same front end that serves your Google Search or your YouTube. It's the same front end. So it's really global, really scalable all across the globe. Once the data flows into IoT Core, it flows into Pub/Sub. Pub/Sub is this messaging bus– global, again. Also, you don't have to worry about resources here. Pub/Sub is available globally. That means that if you have data ingested in Australia, in Asia, in North America, it all falls into a topic of your choice. You want to get that data? Just query that topic and get the data out. You don't have to think about, oh, where is my device again? It's all global infrastructure, automatically scalable. Pub/Sub stores data for seven days. You can do snapshot replays, have several subscribers to it, and get the data out. The data can go into a Cloud Function, a serverless compute service, to trigger actions or do any kind of request. Or it can go to Cloud Data Fusion. Cloud Data Fusion is the managed version of CDAP. CDAP is an open-source data processing pipeline. You may know it as Cask.io. So that's open source, and you can create your workflow with drag-and-drop nodes and then do your transformations along the way. Pull the data out of Pub/Sub, do your transformation, and drop it in some database. If you want to do longer processes like windowing analysis in real time, you can use Dataflow. Dataflow is also scalable on its own and has an Apache Beam open-source equivalent. All of those are managed but have their open-source equivalents. Pretty cool. And then you land in Bigtable if you want to do time series. If you want to do big data analytics, you have BigQuery. You have Cloud Machine Learning to do your training. Then send those models back to the device, because Cloud IoT Core is bi-directional. And then you have some nice visualization tools. So that's what we call the Cloud IoT platform. Putting that together is fairly simple. It doesn't require much code, if any code. It's all serverless.
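To make the device-to-cloud leg of that pipeline concrete, here is a minimal sketch of how a device could publish telemetry to Cloud IoT Core over MQTT using the JWT-based authentication the bridge expects. The project, region, registry, and device IDs and the key path are placeholders, and token refresh and error handling are omitted.

```python
# Minimal Cloud IoT Core MQTT publish sketch (illustrative; IDs and key path are placeholders).
import datetime
import json
import ssl

import jwt                       # pip install pyjwt
import paho.mqtt.client as mqtt  # pip install paho-mqtt

PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "my-registry", "my-device"
PRIVATE_KEY = "rsa_private.pem"  # device key registered with IoT Core

def make_jwt():
    # IoT Core uses a short-lived JWT, signed with the device key, as the MQTT password.
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=20), "aud": PROJECT}
    with open(PRIVATE_KEY) as f:
        return jwt.encode(claims, f.read(), algorithm="RS256")

client = mqtt.Client(
    client_id=f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}/devices/{DEVICE}"
)
client.username_pw_set(username="unused", password=make_jwt())
client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# Telemetry published here lands in the Pub/Sub topic attached to the device registry.
payload = json.dumps({"sensor": "pressure", "value": 101.3})
client.publish(f"/devices/{DEVICE}/events", payload, qos=1)
```

On the cloud side, any subscriber on that Pub/Sub topic (a Cloud Function, a Dataflow job, or a simple pull subscriber) picks the messages up without any device-aware routing.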
And we're pretty happy with some of the early customers that we've had. You can see some of those customers at the spotlight session on Thursday. We'll have some of the customers on stage. One key element of our strategy with the Google Cloud IoT platform is that we can't do it on our own. We have to have partners working with us. And I'm really happy today to have FogHorn as one of our partners– if Sastry wants to join me. FogHorn– we've been working with them for over a year at the edge in particular, solving some of our edge challenges and doing machine learning at the edge. So thank you, and I'll leave it up to you, Sastry. SASTRY MALLADI: Thank you, Antony. Is my mic on? ANTONY PASSEMARD: Yeah, you're on. SASTRY MALLADI: Good evening or good afternoon, everyone. I know I'm between you and food, so I'll try to be as interesting as possible. So we are a Google partner, as you mentioned, and we are an edge computing platform provider company. We are a Valley-based company, and I'm going to give you a snapshot of who we are. We're based in Sunnyvale, right here in the Valley. We provide software that runs on small devices, edge devices– whether they're existing PLCs, gateways, or embedded systems– close to the machines, close to the assets. And the key here is to do live data processing, analytics, and machine learning on the data that's flowing through the edge to derive actionable insights.

We've got customers across the globe– hundreds of them. We've got lots of analyst coverage, and so on. Everyone that you see here, the logos– these are all our investors. Honeywell is an investor. Honeywell is actually going to talk today about how they're leveraging our technology for some of their use cases as well. You can see all of the big technology and industrial names there that are all investors. Traditionally, we have been shipping our product on gateways, PLCs, Raspberry Pis, things of that nature, where you have a flavor of Linux or Windows or a real-time operating system. What we have recently also done is release the same product on handheld devices– which is actually the core part of the demo and discussion today– on Android-based devices. So this is how the product works and what it looks like. You have a tool here in the cloud, which you use to manage all of the edge devices. The software runs on the edge device itself. We have a very flexible microservices-based architecture. In the traditional Linux world, we have used containers. Obviously, in Android, there are no containers, so we have developed all of this as an app. So the notion is that you can have all kinds of data fed into our system through the data bus that we have. The two core key components here are the analytics engine and the edge ML– the machine learning engine. The idea being that as you're ingesting live data from all the different sources, you can apply analytics and machine learning on the data to derive the insights. And then you can take closed-loop control actions, either by using our SDK– there is an SDK with which you can build applications– or you can also visualize it and send that information into IoT Core and other places as well. So maybe let's step back a second. A lot of you, I'm sure– many of you– are familiar with edge computing. But let's step back for a second to ask, why is edge actually important? Right? So in a typical industrial environment– whether it's in oil and gas, manufacturing, transportation, smart cities, buildings, it doesn't matter– in all of these different sectors, you've got data coming in– lots of data, I might add, especially if you're doing video, audio, and acoustic types of data. And there is a high cost, latency, and security risk associated with directly connecting these machines to some kind of a central or cloud location. What FogHorn really does is help solve that problem by introducing this edge device next to the asset– in many cases on the asset itself– where the data is ingested into our software running here. And then you derive the insights and send the results and insights back into the cloud for additional processing, for fine-tuning your models, for fleet-wide visibility, and so on. This obviously has the benefit of low latency here, because there is no network latency involved between these two. And then there is low cost of data transport. And then the security risk is also eliminated or minimized in how you're communicating with it. Another way to look at it– a lot of the times when you're processing data– in this example right here, this is a sensor signal for one of the sensors. You see here how the blips are in the signal?
This is a suction pump. When you typically send that information to a down-sampled environment– whether it's a cloud or a data center– the signal actually looks like that. You pretty much miss the whole point of what it is. So the fidelity of deriving actionable insights at the edge is a lot higher, which is really why it's so important– in addition to all of the three things that I talked about: security, cost, and, of course, latency.
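As a toy illustration of that fidelity point, the sketch below generates a signal with short blips and shows that a simple threshold rule catches them on the raw stream but misses them once the data has been averaged down to a coarser rate. The window size, amplitudes, and threshold are all made up for illustration.

```python
# Illustrative only: short blips survive at full rate but vanish after down-sampling.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 10_000)   # 10,000 samples of baseline noise
signal[3_000:3_005] += 2.0               # a 5-sample blip
signal[7_500:7_503] += 2.0               # a 3-sample blip

def count_blips(x, threshold=1.0):
    # Count rising edges above the threshold (a stand-in for a CEP rule at the edge).
    above = x > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Down-sample by averaging 100-sample blocks, mimicking what a historian might store centrally.
downsampled = signal.reshape(-1, 100).mean(axis=1)

print("blips seen at the edge:", count_blips(signal))               # 2
print("blips seen after down-sampling:", count_blips(downsampled))  # 0
```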

This is a high-level view of our product solution– the traditional solution that we've been shipping on Linux. It's the same solution that I showed you on the first slide, and it has now been put onto Android. The idea here being that you ingest data from all these different types of sources and enrich it, because a lot of the time in the industrial world the data quality is not good. That layer fixes all of that– cleanses it, normalizes it, decodes it– and then does the processing through these two layers that I talked about. And then the information itself can be published to a cloud environment. We have pretty deep integration with IoT Core, as Antony mentioned, and I'll mention in a second, on the next slide, how that is done as well. And using the SDK, you can then programmatically subscribe to those insights and take closed-loop actions. So one concept that I want to bring into this picture is this notion of what is called edgification. I know it's a new word. So what is edgification? A lot of the times when you build data science models– machine learning models– to run in a server environment, a cloud environment, you may not necessarily pay attention to the amount of compute resources. Or you may actually be working on batch data rather than working on live data. And you may also not pay attention to the number of layers or the size of the model, because those are not necessarily the constraints you worry about in a cloud environment. But when you bring the same model to run in an edge environment, in order to run efficiently and also perform at the rate that you would expect, you have to deal with those things. And that process is what we call edgification. How do you take your model that typically works off of batch data and make it run on real-time live data? How do you then optimize and reduce the number of layers and the weights and still keep the same level of accuracy? And finally, if we look at the anatomy of a machine learning model, it has really three things. One is what we call pre-processing code, which is really feature extraction. Let's say you're looking at a video or you're looking at some sensor signals from the machines. Before you can apply a machine learning model and algorithm, you've got to extract some features. That's typically what we call pre-processing, and that is the most compute-intensive part of it. Then you pass those features into your machine learning algorithm, or the equation, which itself is not really that computationally intensive. And then that's followed by post-processing, which is how you take the results of that equation and do something with it. What we do as part of this edgification process is extract all of the pre-processing and post-processing into our highly efficient complex event processing engine, which we built from the ground up, which runs in a really, really small footprint– a few megabytes of memory– and runs really fast. And then we leave the machine learning part [INAUDIBLE] to run in the ML engine. Of course, you also have the ability to, in fact, do the machine learning part in the CEP engine itself as we move forward. Here is an example where we took a video-based modeling use case and showed how you can actually optimize it. Take a model that is built for the cloud and edgify it– you get almost a 10x or higher improvement, and yet much higher fidelity, much more accurate results.
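Here is a rough sketch of that split, with every name and number invented for illustration rather than taken from FogHorn's pipeline: feature extraction runs incrementally over a sliding window of live samples (the part a CEP engine would own), and the model itself only ever sees the small feature vector.

```python
# Illustrative edgification sketch: streaming feature extraction feeding a tiny model.
# The feature set, window size, and model weights are placeholders, not FogHorn's actual pipeline.
from collections import deque
import numpy as np

WINDOW = 256                      # samples per sliding window
window = deque(maxlen=WINDOW)

# Stand-in for an edgified model: a small linear scorer over 3 features.
weights = np.array([0.8, 1.5, -0.3])
bias = -1.0

def extract_features(samples):
    # Pre-processing (the compute-heavy part): done incrementally per window, not in batch.
    x = np.asarray(samples)
    return np.array([x.mean(), x.std(), x.max() - x.min()])

def infer(features):
    # The model itself is cheap once features are extracted.
    score = float(weights @ features + bias)
    return 1.0 / (1.0 + np.exp(-score))

def on_sample(value, alert_threshold=0.9):
    # Post-processing: turn the score into a closed-loop action (here, just an alert).
    window.append(value)
    if len(window) == WINDOW:
        p = infer(extract_features(window))
        if p > alert_threshold:
            print(f"anomaly score {p:.2f} -> raise alert / actuate")
```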
Now, even when you build a machine learning model to derive some insights and predict some things, and you deploy it in the edge environment– or any environment, for that matter– and it's accurately predicting the results, that does not necessarily mean it's going to continue to predict with the same level of accuracy over time. Because of data drift, because of changes in machine behavior, the same model that once used to accurately predict the results may no longer do that. So how do we address that? How do we fix that? To address that, we came up with this notion of automated closed-loop machine learning. So what does that really mean? As you see here in our architecture, once a model is deployed, we built this thing called a prediction model degradation detector. What that module does is continuously measure several different factors– whether it's an F1 score, whether it's the data distribution and how it's shifting– and it tries to see if the accuracy of the model's predictions is deviating from what it was before. And when it does, it automatically sends that information to Google IoT Core here. From Google IoT Core– as Antony explained, it's a Pub/Sub system– you can send that information to Dataflow, which can then be read and fed into BigQuery and the Cloud ML Engine, where you incrementally retrain your model with the incremental data that's coming in, and then push that model back onto the edge as well. This is all automated. Of course, you have an option to not automate it. But this is done in an automated way until the accuracy comes back to what it was before. And this is what we have been calling closed-loop ML. I know different terminology has been used. This truly revolutionizes edge computing, because you're actually introducing the notion of AI– artificial intelligence– onto these edge devices, in how you self-correct the models deployed in this environment in order for you to continue to predict with the same level of accuracy.
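One way such a degradation detector could be structured, as a hedged sketch rather than FogHorn's actual implementation: keep a rolling window of predictions alongside whatever ground truth or proxy labels eventually come back, compare the rolling accuracy against the accuracy measured at deployment, and emit a retraining trigger (which in this architecture would be published up through IoT Core) when the gap exceeds a threshold.

```python
# Illustrative model degradation detector; thresholds and window size are arbitrary.
from collections import deque

class DegradationDetector:
    def __init__(self, baseline_accuracy, window=500, max_drop=0.10):
        self.baseline = baseline_accuracy      # accuracy measured when the model was deployed
        self.max_drop = max_drop               # tolerated absolute drop before retraining
        self.outcomes = deque(maxlen=window)   # 1 = correct prediction, 0 = incorrect

    def record(self, predicted, actual):
        self.outcomes.append(1 if predicted == actual else 0)

    def degraded(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                       # not enough evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - rolling) > self.max_drop

detector = DegradationDetector(baseline_accuracy=0.95)
# ... inside the edge loop, after ground truth (or an operator label) arrives:
# detector.record(predicted_label, actual_label)
# if detector.degraded():
#     publish_retrain_trigger()   # hypothetical helper that sends recent data up via IoT Core
```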

As you can see, our solution is a generic platform. We have been deploying this across many, many use cases across many different verticals. Our top verticals actually include manufacturing– a lot of big-name customers– oil and gas, transportation, smart cities, and buildings. We have done other things as well, like mining, and so on. What I want to do is just mention a couple of use cases quickly here– one in manufacturing, one in oil and gas– that actually involve different paradigms. This is one that I can publicly reference. It's a GE factory. They manufacture these industrial-grade capacitors that look like this. These are highly expensive capacitors that are used in power plants. The material cost alone is several thousand dollars. And the process is: they take an aluminum foil and wind it through what is called a winding machine– a [INAUDIBLE] winding machine. And the winding machine itself has hundreds of sensors that are continuously measuring several things– winder tension, width, aspect ratio, and so on. Once they wind it, they press it into packs and then insert it. These are those packs. And they run it through what is called a treat oven to take out all the moisture content. And finally, they fill oil into the capacitor and test the capacitor. At this point, if the capacitor fails– if it's not working or is seeing dielectric failures– it's too late in the process. And that's exactly what was happening in the factory. As I walked into the factory, I saw a lot of pink slips on the factory floor, costing them a lot of scrap– millions of dollars. So the challenge posed to us was: how do we connect our edge solution directly to the sensors in this assembly line and in the machine, and identify exactly what was causing these dielectric failures– more importantly, before it is too late– so that the operator actually has a chance to take the units offline and fix them? And that's what we have been able to successfully do, reducing all of their pain points on the scrap. The second use case I want to talk about is interesting. This is an oil and gas use case. Saudi Aramco is one of our customers– many other customers as well. We've actually publicly talked about this at the RC conference a couple of months ago. As some of you may be familiar with, in oil and gas refineries, gas is being refined, and you've got hundreds of compressors. And because of various reasons– excessive pressure, sometimes other reasons, compressor failures– the gas gets released through what are called flare stacks. These are all flare stacks. And then the gas gets burned. When gas gets burned, you see a fire. What's worse, you can sometimes see smoke. Fire itself is bad. Smoke is even worse. Dark smoke is even worse. So how do you identify it? Up until this edge solution came along, typically they installed video cameras– sometimes they don't even do that– and a human being has to monitor it 24/7, which is not really practical. All they can do is see, oh, there is fire, there is smoke. What we have done is install our software on a controller box attached to these flare stacks– many, many, many of them– and then directly take the video feed from them, combine it with the audio feed of the compressor, and identify and correlate, one, that there is in fact a flare or smoke, and also compute the volumetric flow of the gas that is burned, and things of that nature. So human monitoring of this is eliminated. But more importantly, the operator gets an alert when there is a real problem, and we get to the root cause of why the problem was occurring. This is being widely deployed and is a very popular use case. One last thing I want to say is that this particular session– the rest of the talk and the demo– is about our product that was released on handheld devices, mobile edge platforms. We have noticed that there is obviously a lot of traction in this field, because these are all battery-powered devices, which makes it even more important for our edge computing to consume less and less power, less and less CPU and memory. And we ported our software onto these devices. And we are deploying it with Honeywell– initially working with a Honeywell device where we have done a couple of use cases, which we'll demonstrate today as well. With that, I'd like to invite Scott Bracken, Director of Advanced Technology from Honeywell, to speak about their technology. SCOTT BRACKEN: Great, thank you. Thank you, Sastry. Hello, everyone. My name is Scott Bracken, as Sastry mentioned, and I lead the Advanced Technology group for the Productivity Products team within Honeywell. And I'd like to introduce you to the Honeywell corporation a little bit, to explain where the Productivity Products group sits within Honeywell. It's an exciting time to be an engineer at Honeywell because there's a significant amount of
growth being driven by technology in the four business segments in which Honeywell operates: aerospace, home and building technologies, safety and productivity solutions– which is the group I'm from, and I'll explain a little more about that in a moment– and the performance materials group. The one common thing that connects all of these business groups is the fact that connected solutions are what's driving the growth in all of these business areas. And therefore, the technology development within Honeywell is able to produce common platforms across these different businesses. And as I mentioned, we've been growing significantly in the technology area and, most specifically, transforming the Honeywell corporation into a software-industrial business. The point of this slide is to point out that we have already invested heavily in software engineering talent within our corporation to build that capability. Now, diving a little bit deeper into the Safety and Productivity Solutions group, you can see that there are a lot of areas within the industrial sector where technology can help our customers be more productive. First, the connected worker: many wearable devices that an employee can wear in an industrial setting can help monitor for safety and help that employee be more productive. In the supply chain and logistics area, there's a tremendous amount of data that is important to our end customers for managing the efficiency of their operations. In the distribution centers themselves, we have a lot of automation that we've introduced and provide to our customer base– again, for reducing their cost of operations and being more efficient. And finally, we also have a very wide portfolio of sensors,
primarily for safety applications. Pressure sensors and gas detection sensors are quite prominent in that portfolio, but we have sensors that support all of the different IoT applications across this suite. So once again, diving a little bit deeper into the Honeywell technology area, specifically in Productivity Products– what am I talking about in that realm? There are several different solutions, and many customers use this technology in different ways. But we can provide common platforms across these use cases to serve those customers' unique needs, yet with the power of a very robust solution underneath. Things such as vehicle-mounted computers and all types of asset tracking technologies. In particular, RFID sensing has seen a growth spurt in recent years. This is not just limited to some of the industrial settings but also spills over into retail quite prominently. And then, looking at the solutions on the right-hand side of this chart, scanning is really at the core of what we do in our Productivity Products group. Product identification, primarily with barcodes, is what our scanning devices are centered around. And then we provide value on top of that with other solutions, such as voice solutions to allow for hands-free operation of product identification; the mobile computers, which I'll go into in a little more detail in a moment; and, of course, different tagging that we can use for sensing the devices in any kind of inventory setting. That all gets tied together, obviously, in a cloud environment for data management and inventory tracking purposes. And most recently– in fact, in the last year– our flagship device was introduced. We call it our Mobility Edge platform, and it's a unified platform for mobile computing. It's an Android-based device, and it's purpose-built for these industrial applications. The ruggedized design means not only that the form factor is very familiar to employees but also that it allows for very heavy use during a workday. As I mentioned, product identification and barcode scanning is the core operation of these devices, but that can take place thousands of times a day, or thousands of times in a single shift. So building a device that can handle that kind of workload and survive the entire shift is a very critical aspect of our customer requirements. It can also withstand the harsh environment, whether it's impact or the elements, even moisture– say, a rainy environment. We've used the Android operating system, obviously, for easy field deployment, giving us seamless operation with cloud connectivity. And we've built this device with multiple form factors, but underneath the hood is a single common hardware platform. And I particularly like this slide myself because I'm surrounded by software engineers, but I'm actually a hardware engineer at my roots. And so I have to show at least one slide with a processor module on it. We're quite proud of this processor module that we introduced with the Mobility Edge platform last year, because it is common throughout all the different form factors of the different mobile devices that we provide to our customer base. And I'm particularly excited today to be on stage with Google and with FogHorn, because now that we've deployed this edge device, this platform, we can now build a tremendous amount of value within the software on top of that device. And the first place we started– or one of the first places we started– is looking for this very
thin layer that Sastry has already described, which allows us to build AI and machine learning tools and capability on top of these Mobility Edge devices. And FogHorn's Lightning offering meets all of those requirements for our needs across our entire customer set. So the last couple of slides, before I hand it off to the demo, describe the use cases that we are investing in first. The first one is quite an interesting one to us, because we have a rich history of designing a very sophisticated software decoder tool to extract a barcode from an image and decode it for that customer's application. But at the edge of that application, and during different use cases, many of the images we receive from our imager come in very blurry or very noisy. It's very common. New operators that haven't operated a scanner before don't necessarily aim the device accurately.

Or I'm sure many of you have been at a retail store where a particular packaging– maybe a shrink-wrapped packaging that has some ripple in it– is causing the scan to be difficult for that device to read. And an operator, naturally, many times wants to go closer to that barcode to pick up that scan. And in fact, they're making it more blurry by going closer. So again, there are several different environments where being able to decode a blurry barcode would be much more efficient for our customer base. And what you see here is a data set. We created a synthetic data set of 50,000 barcode images. They are image pairs– clean images and blurry or damaged images. And we use that for training our machine learning model, which we then deploy through the FogHorn system onto our Mobility Edge device. And these are just some snippets of the images that were part of our training data set.
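For illustration, here is one generic way such clean/degraded training pairs could be produced synthetically; this is not Honeywell's actual data pipeline, and the source directory, blur, and noise parameters are arbitrary.

```python
# Generic sketch for building clean/degraded image pairs from source barcode images.
import glob
import cv2          # pip install opencv-python
import numpy as np

def degrade(img, rng):
    # Simulate a hand-held capture: defocus blur, slight motion blur, and sensor noise.
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=rng.uniform(1.5, 4.0))
    k = int(rng.integers(3, 9))
    kernel = np.zeros((k, k), np.float32)
    kernel[k // 2, :] = 1.0 / k                       # horizontal motion-blur kernel
    blurred = cv2.filter2D(blurred, -1, kernel)
    noise = rng.normal(0, 8, img.shape)
    return np.clip(blurred.astype(np.float32) + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(42)
pairs = []
for path in glob.glob("clean_barcodes/*.png"):        # placeholder source directory
    clean = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    pairs.append((clean, degrade(clean, rng)))        # (target, input) pair for training
```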
So that's our first use case, and our second one is quite basic but very important to our customers. These devices are owned by the customer. They're not owned by the employee; they're not their personal devices. So the typical way these are deployed is that at the beginning of a shift, the operator will take the device, use it throughout the shift, and then dock it for charging overnight. The very simple requirement is that the device must operate continuously throughout that entire shift. That shift can vary in time. It can vary in operation and in what modes are being used. And so by applying a machine learning type of approach to this and working with the data scientists at FogHorn, we're able to create a real-time operating model on the Mobility Edge device that can monitor and optimize to extend the life of that battery for that device during the shift. So with that, I'd like to invite Ramya up from FogHorn, VP of Products, to take us through the demo. RAMYA RAVICHANDAR: All right, I'm here to tell you that it's real. So this is the Honeywell device. It's a CT60, and our FogHorn Lightning Mobile is actually running on it. So following up from Scott's talk, what I will demo here are the two apps. One is the battery insights app, and the other one is the barcode optimization. But before I jump to the apps, what I do want to do– I am a products person– is go through our FogHorn Manager here. What you see on the left side here is our FogHorn Manager portal. Think of it as our remote management, configuration, and monitoring console that lets you see all the edge devices that have the Lightning Mobile or the Lightning stack installed on them. One of the things we do at FogHorn is to be very focused on the OT persona. We understand the users of our product are operators on the manufacturing floor, your refinery technician. And so a lot of what we put in here is focused on that persona. So especially in manufacturing, the idea is to create this one golden edge configuration that can work across massive volumes of devices. And this FogHorn Manager portal lets you create that configuration. Whether it's to add a new sensor, define analytic expressions that do complex event processing, or bring in machine learning models, all of that is packaged up really nicely into a configuration and deployed onto the device. And so that's what we've done here. You can see the two solutions, which are the barcode enhancer and the battery insights. Here are the two models that are part of this configuration. And now that's what's deployed on this device. So let me pull up the FogHorn app. You can see the FogHorn app here that's installed on this device. And immediately, we have the two solutions pop up, which are the barcode enhancer and the battery insights. Now, Scott did talk about the goal of using battery insights, and I'll talk a little bit about the metrics that show up here. Both of these apps represent two classes of learning, so to speak. In the battery insights app, it's an adaptive learning model that's running on this device. So it's learning on the fly based on the pattern of usage. How often is the battery charged? How long is the work shift? How does the user actually use it? What we are building is a very unique fingerprint of device usage that is specific to this unit. So the metric here is saying, yeah, it's OK for the shift– this battery is going to last for the next 16 hours. Oh, and by the way, I can also inform you that your work shift is around eight hours. So this model is going to get better and better as the device gets used over time.
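Purely as a sketch of the kind of adaptive estimate being described, and not the model actually running on the device: track the recent discharge rate per usage mode as a simple learned fingerprint, then project the remaining hours from the current charge level. The modes, rates, and smoothing factor below are invented.

```python
# Illustrative adaptive battery-life estimator; modes, rates, and smoothing are made up.
from collections import defaultdict

class BatteryEstimator:
    def __init__(self, alpha=0.2):
        self.alpha = alpha                              # smoothing for the learned rates
        self.rate = defaultdict(lambda: 5.0)            # %-per-hour drain, per usage mode

    def observe(self, mode, pct_drop, hours):
        # Update the drain-rate "fingerprint" for this mode from a new observation.
        observed = pct_drop / hours
        self.rate[mode] += self.alpha * (observed - self.rate[mode])

    def hours_left(self, charge_pct, mode):
        return charge_pct / self.rate[mode]

est = BatteryEstimator()
est.observe("scanning", pct_drop=6.0, hours=1.0)   # heavy scanning drains faster
est.observe("idle", pct_drop=1.0, hours=1.0)
print(f"{est.hours_left(80, 'scanning'):.1f} h left if scanning continues")
```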

Now, if we move on to the next one, which is the barcode enhancer– it happens behind the scenes. So this is an operator. He has the scanner. He's scanning the barcode. And the goal is to have that first scan be successful. And in the event that Scott talked about, where the image is grainy, what happens is that it gets passed to the FogHorn stack, and we're running this neural net model. So the difference in this model is that it's a neural net model built using TensorFlow, running on TensorFlow Lite, and the inferencing is happening on the device. So to help demonstrate how the barcodes actually look before and after the neural net model runs through them, I'll actually go through a barcode enhancer viewer app here. All right. So on the top here, you see the before picture. This is what the operator is actually seeing when they scan the barcode. And once it passes through the Lightning Mobile stack here, this is the result. There is definitely much more clarity, and more importantly, there's no loss of productivity when the operator tries to do that first scan. There's no manual re-entry of the code. So let me go through a couple more images. This is an example of a full barcode sample, and this is the clarified one through our stack. This is one of my favorites. It's a piece of the barcode. I didn't think I'd ever say I have a favorite barcode, but here it is. There we go, and I think we'll do that as the last one here. So the point of this presentation is to say that we have Lightning Mobile, and we now have the ability to run on Honeywell devices with the Google Cloud approach of building AI models on their data science platform. What does it do for us as industrial operators, as industrial users? It opens up this whole universe, this whole expanse of use cases. Sastry talked a little bit about sensor fusion in the past– the ability to combine video, audio, and structured data. What are the newer insights? For example, if you're from oil and gas, and you have a technician walking around in your refinery, he could take an Android mobile device, point it at a valve, and really get insight about whether that valve should be on or off. The question to ask oneself is, what is a use case today that's tethering my operator to a location? Instead, with the use of Android devices, it's now more liberating, and therefore you have the ability to get newer insights. [MUSIC PLAYING]