SET Day 2009: Shane Yanke

Okay, so welcome back. I hope you all had a good lunch. We will now turn our attention to robots. Ever since the term "robot" was coined in a play written by the Čapek brothers in 1921, these machines have captivated our imaginations. You no doubt know that times have changed since 1921: no longer are robots figments of the imagination, they are almost everywhere today. Shane Yanke, a graduate student in our Computer Science department, will talk to us about robotics. When he's not studying in our labs, he's working as a senior developer for Cogmation Robotics here in Winnipeg. His presentation is titled "R2-D2 and Friends: The Future of Intelligent Robots." Please welcome Shane Yanke and his performing robots.

All right, well, thank you for coming, and sorry to keep you waiting. When dealing with robots there are always technical issues, and no doubt there will be more throughout this presentation, so I apologize in advance. That being said, I am Shane Yanke and I am a graduate student here at the University of Manitoba, currently studying robotics and artificial intelligence in the Autonomous Agents Laboratory. Like was mentioned, I also work at Cogmation Robotics, a local robotics company here in Winnipeg, so between the two I'm pretty much working with robots almost every day.

So let's get started: what is robotics? There are many definitions; everybody has a different way they want to describe it. One of my favourites is that it's the science of perceiving and manipulating the physical world through computer-controlled devices, and I think the perceiving and manipulating is a very important aspect. If you're not dealing with the actual real world, if your robot isn't moving objects or interacting, then you basically just have a computer, not really a robot. That definition comes from a really good book, Probabilistic Robotics, which many people use during their studies.

There are different aspects to robotics. These aren't just computer programs; there's hardware involved, so hardware and software. The hardware usually involves different disciplines, including electrical, mechanical, and computer engineering, and all of those have to work together to build the robot platform. Then of course there's the software running on it, the programs controlling that hardware, and it's very important that the hardware and the software interact very tightly in order to produce the results you need. There's the old saying that a chain is only as strong as its weakest link, and the same goes for all of these aspects: if your hardware is the weak link, you're not going to perform; if your software is the weak link, it's still not going to perform up to standard. All of them are very important, and they all have their own roles. Now, as previously mentioned, I'm a computer science student, so I deal with the software, and throughout this presentation we're going to focus on the software side of it: the intelligence and the programs running on the actual robot.

So there are different components in the software running on the robot. Like I said, it's just a computer program, and one of the first things you learn about computer programming is that a program is just inputs, processing, and outputs; those are the three main components. You get something into your program, you do some processing on it, and then you spit something out. An example everybody probably uses: say something like Internet Explorer. You're surfing the web, you type in the address of the website you want, Facebook or wherever you go to check your email. The input is you entering the address on the keyboard, the computer processes that data and goes and fetches the web pages, and then it displays them on the screen; that's the output. Robot programs have the same thing; we just alter the words a little bit to perception, reasoning, and action. You're going to perceive the surroundings in the environment and sense different things, obstacles; you're going to reason and decide what you want to do (if there's a ball in front of you, maybe you want to kick it, maybe you want to move the object); and then you're going to perform the actions, which could be walking, moving, bending down, whatever, depending on the robot, of course.
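To make the perception, reasoning, action idea concrete, here is a minimal sketch of what the top-level loop of a robot program often looks like. This is not code from any robot in the talk; the `Robot` class, the `find_ball` routine, and the thresholds are hypothetical stand-ins for whatever sensor and actuator API a real platform provides.

```python
import time
from dataclasses import dataclass

@dataclass
class Ball:
    distance: float          # metres from the robot
    bearing: float           # radians, relative to the robot's heading

class Robot:
    """Hypothetical stand-in for a real platform's sensor/actuator API."""
    def read_camera(self):      return None    # perception: grab an image
    def kick(self):             print("kick")
    def walk_towards(self, b):  print(f"walk, bearing {b:.2f}")
    def stop(self):             print("stop")

def find_ball(image):
    """Hypothetical vision routine: return a Ball if one is seen, else None."""
    return None

def control_loop(robot, cycles=10):
    for _ in range(cycles):
        # 1. Perception (inputs): read the sensors
        image = robot.read_camera()

        # 2. Reasoning (processing): decide what to do
        ball = find_ball(image)
        if ball is None:
            decision = "stop"        # nothing seen: stay put
        elif ball.distance < 0.2:
            decision = "kick"        # close enough to kick
        else:
            decision = "walk"        # approach the ball

        # 3. Action (outputs): drive the actuators
        if decision == "kick":
            robot.kick()
        elif decision == "walk":
            robot.walk_towards(ball.bearing)
        else:
            robot.stop()

        time.sleep(0.05)             # roughly a 20 Hz control loop

if __name__ == "__main__":
    control_loop(Robot())
```

Almost everything that follows in the talk is some variation on this loop: better perception, smarter reasoning, or more capable actions.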
The inputs are various forms of sensors. They include range finders like laser scanners and sonars; accelerometers and gyroscopes, which are usually used to maintain balance and to measure the different forces acting on the robot; and of course cameras. The outputs are actuators: any type of actuator, like a servo, which is basically just a small electric motor, as well as regular motors, speakers, displays, things like that; pretty much anything that can give any type of feedback or information.

A little outline for the rest of the talk: we've had the introduction, and now we're going to look at some global vision robots first. From there we'll look at local vision, where the camera, rather than being overhead, is moved down and physically attached to the robot. We'll look at some humanoid robots, then we'll change from the hardware and go into some development tools and how you would develop with them, and hopefully, if there's time, we'll have a little bit of a demo. If it works; we'll see.

Okay, so before we start talking about the robots I wanted to mention RoboCup and the FIRA Cup. These are two international robotic soccer competitions, and I wanted to mention them because a lot of my examples are going to be about robotic soccer; one of the main things we do in our lab is to get robots to play soccer. So why robot soccer? Well, there are lots of problems to be solved; it is a huge domain. You may think, well, it's easy, just put the ball in the net. It sounds easy, but it's not. There are many things: you have object and obstacle detection; you have to detect the ball and interpret that it is in fact a ball; there are other robots on the field, and you don't want to be constantly running into them, you have to avoid them, and you don't want to run into your own teammates either; you need accurate control, really fine-tuned control, to line up those precision shots, and we'll see later that there are some very accurate shots; and balance, because if you're a humanoid robot you need to be able to stay upright. So it's actually a very difficult problem. The competitions also help drive the research forward. You're always going to be more motivated to do better than the next guy if there's a little prize at the end: a trophy, a little bit of fame and glory. There's nothing wrong with a little healthy competition to motivate people and push them forward. And last, of course, it's entertaining, for me and hopefully the crowd too.

So moving from there, we'll go into global vision. This is where the camera system for the robots is mounted directly overhead, and there's one vision system for the entire team. It can consist of either a single camera or multiple cameras, but it's mounted overhead, and there's a server that processes the data; you can see the little arrow coming down to a computer system. It interprets the images, detects all the different robots and the ball, decides what it wants to do, and then sends out a signal to the robots. These little robots, which we'll see on the next slide, have no computer systems on board: there's no intelligence on the robots themselves. They're basically just doing what they're told, and there are computers off to the side that are actually controlling them, but they are all in fact autonomous, controlled by the computers and not by humans.

So some of the challenges: we have to identify and track the robots on our own team. This comes down to wanting to avoid our own team, and you might want to do passing plays, which we'll see later; the teams doing this are very good at it now. You also have to track the ball, obviously; if you want to put the ball in the net, you're going to have to track the ball. And the opponent robots as well: you'll see that the teams that can track not only their own robots but the opponents too do very well, because they're able to stop shots and kind of steal the ball away from the opponents, so it's very important. These global vision robots focus on the problem of intelligent multi-agent cooperative systems, and soccer is a very good example of this. As you know, it's a team game, and the goal is to work together to score as many goals as you can to beat your opponent. If the robots can work together by setting up shots and doing things like that, then they're going to be way better than a team where the robots don't work together.
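The talk doesn't go into how teams actually choose passes, but as a flavour of the kind of reasoning a cooperative multi-agent system has to do, here is a toy heuristic: pass to the teammate whose line to the goal is least obstructed by opponents. This is entirely an illustration, not any RoboCup team's algorithm, and the coordinates below are made-up example positions.

```python
import math

def dist_point_to_segment(p, a, b):
    """Shortest distance from point p to line segment a-b (all 2-D tuples, metres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def choose_pass_target(teammates, opponents, goal):
    """Toy heuristic: pass to the teammate whose shooting lane to the goal
    is least crowded by opponents (largest clearance wins)."""
    def clearance(mate):
        return min(dist_point_to_segment(opp, mate, goal) for opp in opponents)
    return max(teammates, key=clearance)

# Example: two teammates, two opponents, goal centre at (3.0, 0.0)
teammates = [(1.0, 1.0), (1.0, -1.0)]
opponents = [(2.0, 0.6), (0.5, 0.0)]
print(choose_pass_target(teammates, opponents, (3.0, 0.0)))   # -> (1.0, -1.0)
```

Real teams layer much more on top of this (ball speed, interception times, opponent prediction), but the basic flavour of "evaluate each teammate, pick the best" is the same.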
So they have coloured markers for tracking. You can see on this robot there are all these little dots, and that's what the vision system actually picks up: it detects these coloured circles, and from them it can get the exact position of the robot and its orientation. They also have omni-drive systems, meaning they can go in any direction at once; they don't have to turn first and then move in that direction, they can just zip off any way they like. The way they do that is with those three blue wheels you can see. They turn like normal wheels, but around each one there are all those little white rollers running the opposite way, and by changing the speeds of the wheels, which ones are driving and how fast, the robot can effectively produce a velocity vector and move in any direction, so they're very useful.
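To make the "change the wheel speeds to get a velocity vector" idea concrete, here is a small sketch of the standard kinematics for a three-wheel omni drive. The wheel angles and the centre-to-wheel radius are made-up example values, not the dimensions of any particular robot.

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_angles_deg=(0, 120, 240), radius=0.09):
    """Map a desired robot velocity (vx, vy in m/s, rotation omega in rad/s)
    to the linear speed each omni wheel must drive at.

    Each wheel sits on the robot's rim at angle theta and rolls tangentially,
    so its drive direction is (-sin(theta), cos(theta)); the little rollers let
    it slip freely sideways.  radius is the centre-to-wheel distance in metres
    (an assumed example value)."""
    speeds = []
    for theta_deg in wheel_angles_deg:
        theta = math.radians(theta_deg)
        speed = -vx * math.sin(theta) + vy * math.cos(theta) + omega * radius
        speeds.append(speed)
    return speeds

# Drive straight "forward" (+y) at 1 m/s with no rotation:
print(omni_wheel_speeds(0.0, 1.0, 0.0))   # approximately [1.0, -0.5, -0.5]
```

Running the same relationship in reverse (measuring wheel speeds and recovering the robot's velocity) is just the inverse of this mapping.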
On this robot here, which is from Carnegie Mellon, probably one of the best small-size teams in RoboCup, you can see on the front there's a little black bar, a little cylinder, and what it does is spin fairly fast, so that when the ball hits it, it actually rolls the ball backwards into the robot. These robots can dribble the ball and move around the field with the ball basically stuck to the front, which is actually very neat. Also, you may not be able to see it, but at the bottom there are little metal plates, and these are the kickers: they use compressed air to fire out the metal plate, which hits the ball and shoots it at quite high velocities. They also have chip kickers down in front, which get under the ball and scoop it up, so they can actually chip it up and over their opponents.

So here I wanted to show a video of this. This is Carnegie Mellon; like I said, they're one of the best teams at RoboCup. You can see they're very fast, they move around the field very fast, and that's the super slow-mo so you can see it. They're very good at one-timers, very good at setting up shots. I've seen them before where they won't even pass it to a robot: they'll just place a robot in front of the net and bank the ball off the back of that robot and in. Sorry about that, the guy's head keeps coming in and out, but you can see there it actually chipped it up and over the robot. There's a nice goal there. I wanted to skip ahead to this game: this is the final for the championship, and they're playing against Thailand, and in this one Thailand actually won. Here you can see the accuracy and precision of the shot; it snuck it right through there. This is another one; I'll pause it here. You can see that the goalie is detecting the opponent robot, and it's always going to try to be in between the ball and the net; it actually blocked the shot there. It goes into slow-mo here: as the opponent moves, the goalie comes across, and even in slow-mo you can see how fast that shot went, it was just a blur across the screen. So they're very fast and very difficult to stop. This is not like the humanoid robots we'll see later; they don't have this speed. And that's because, with the global vision system, these teams can have large computers off to the side of the field. You can see on the side here, this is the Carnegie Mellon team, and behind there they have lots of computers doing all the processing and calculations for the team.

Okay, so from there the global vision systems kind of took another direction, to find something new and innovative, and that's mixed reality. In mixed reality the environment is produced by an LCD display, which is laid flat on the ground, and the robots actually drive on top of it while an image is projected onto the display. For mixed reality there has to be an artificial portion and a real portion: the artificial portion is the display, and the real portion is the actual physical robots that are on the screen. There's a world server, which is basically a program running on a computer that controls this environment. It controls the displaying of, say, the soccer field, and there are also virtual elements, like a ball if you're playing soccer. It handles the physics for those, as well as the interaction with the robots: because it's a virtual ball, it has to calculate that when a robot drives over it at a certain angle, the ball will move off in a certain direction. So there are lots of physics calculations involved; all those times you were sleeping in physics class thinking you'd never use this stuff, well, it actually is used in the real world.
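As a toy illustration of the kind of calculation that world server has to do (not the lab's actual implementation), here is a single collision step between a real robot and a virtual ball: if the robot overlaps the ball, push the ball out along the contact normal and send it off at the robot's speed in that direction. The radii and positions are made-up example values.

```python
import math

def bounce_virtual_ball(robot_pos, robot_vel, ball_pos, ball_vel,
                        robot_radius=0.03, ball_radius=0.01):
    """Toy mixed-reality physics step: if the (real) robot overlaps the (virtual)
    ball, move the ball out of the robot and give it a velocity along the
    contact normal.  Positions in metres, velocities in m/s."""
    dx = ball_pos[0] - robot_pos[0]
    dy = ball_pos[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist >= robot_radius + ball_radius:
        return ball_pos, ball_vel                  # no contact: nothing changes

    nx, ny = dx / dist, dy / dist                  # contact normal, robot -> ball
    # Move the ball to the robot's surface so they no longer overlap
    new_pos = (robot_pos[0] + nx * (robot_radius + ball_radius),
               robot_pos[1] + ny * (robot_radius + ball_radius))
    # Give the ball the robot's speed along the contact normal (a crude "kick")
    push = max(0.0, robot_vel[0] * nx + robot_vel[1] * ny)
    new_vel = (nx * push, ny * push)
    return new_pos, new_vel

# A robot driving +x at 0.2 m/s clips the ball slightly above-right of its centre:
print(bounce_virtual_ball((0.0, 0.0), (0.2, 0.0), (0.03, 0.01), (0.0, 0.0)))
```

The real world server has to do this for every virtual object, every frame, using the robot positions reported by the overhead vision system.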
So to do this mixed reality we need different types of robots, obviously smaller robots; we're not going to put one of these big guys on the LCD screen, it would damage it. Citizen made a really small robot, a one-centimetre robot, and you probably won't be able to see this, but I actually have one of them right here. For the people at the back I do have a picture so you can see the actual size. These are placed on the LCD screen and they move around and interact with the environment. You can see how small they are, slightly larger than a penny, and that kind of brown section in the middle is the battery: about one-third of the entire robot is just battery, and it only lasts for maybe 15 to 20 minutes at a time.

So here are some of the environments we've done in our lab. There's our global vision system: in the top picture you can see the camera mounted overhead, finding the positions of the robots. Things we have done: soccer, obviously, because we always do soccer, plus an obstacle run, Pac-Man, and ice hockey. Here's a video of the ice hockey; it's very similar to the soccer, except they're carrying the puck to the side of them rather than in front. There's a goal, and with mixed reality you can do fancy things like displaying goals, and you could even have a little virtual Zamboni go clean the ice if you wanted; you're pretty much open to anything you can think of. And Pac-Man, everybody's favourite game, of course. You can see these ones are using larger hats than in the last video; that's because this was done with an older, lower-quality version of the camera system, and we've since upgraded to a higher-resolution camera. You can see those blue walls there: the robot can't drive through them, so it detects that the walls are there, and the people who did this did a very good job of implementing all the different aspects of the game. You can see all the little pellets that you collect for points, and a cherry will come up close to the end for bonus points. Right now the ghost has turned blue because Pac-Man got one of those larger power pellets that turn the ghost blue, and then Pac-Man can go eat it. You'll see right away that the ghost turns red again and Pac-Man turns around and starts running, because now they've basically switched roles: the hunter becomes the hunted. You can see there it turned red, so now Pac-Man detects that the ghost is there and it's turning around, running the other way.
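The role-switching behaviour described there boils down to a very small piece of game logic. Here is a toy sketch of it; this is not the students' actual implementation, and the eight-second "frightened" window is an assumed example value.

```python
import time

def pacman_roles(power_pellet_eaten_at, now, frightened_seconds=8.0):
    """Toy version of the role switch: after Pac-Man eats a power pellet the
    ghost is 'frightened' (blue, runs away) for a few seconds, then turns
    back and becomes the hunter again."""
    frightened = (now - power_pellet_eaten_at) < frightened_seconds
    if frightened:
        return {"ghost": "flee", "pacman": "chase"}   # the hunter becomes the hunted
    return {"ghost": "chase", "pacman": "flee"}

t0 = time.time()
print(pacman_roles(t0, t0 + 3.0))    # {'ghost': 'flee', 'pacman': 'chase'}
print(pacman_roles(t0, t0 + 12.0))   # {'ghost': 'chase', 'pacman': 'flee'}
```

In the mixed-reality version, the "chase" and "flee" decisions then get turned into drive commands for the physical one-centimetre robots by the world server.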
All right, so local vision robots. This is where the camera system, the vision system, is moved down onto the physical robot itself. Over here we have a Pioneer robot, made by MobileRobots (that's the company), with a camera on top, a laser scanner, and, for some reason, very large speakers; I don't know what this one is doing. And over here is a car that's been modified to drive autonomously; this one was modified for the DARPA Grand Challenge, which we're going to talk about next. You can see what it takes just to get the car to drive reliably on its own: all of these things on the top and the sides are sensors, laser scanners and about four cameras across the top facing in all directions, with more laser scanners on the front and the back. Just large, large amounts of data. The DARPA Grand Challenge was first held in 2004, and the point of it was to drive 142 miles across desert terrain.
It was held in Nevada, and the terrain was very challenging. There were winding mountain passes, drop-offs of hundreds of metres on either side, sharp hairpin turns (you can see there are a couple of hairpin turns), narrow tunnels, and Beer Bottle Pass right at the end, where the road narrowed. After hearing all that, you shouldn't be surprised to learn that no team finished. Here are some pictures of teams going through the course. The first one, the larger one, the van there, jumped the barricades of the course; the guy over there looks like he's running for his life, and I probably would be too. This team down here is desperately trying to hold their robot so it doesn't tip over, because it veered off course as well. And this guy wasn't so lucky: it probably hit a mound of dirt or something and flipped over, so needless to say they were out of the race too. The team that came the closest was Carnegie Mellon University; they went the furthest, 7.36 miles out of 142, so not very far. The reason they were eliminated after about seven miles is that their Hummer broke an axle. They were telling everybody, oh, we would have finished, but we had a mechanical failure, we broke an axle. What they failed to tell you is that they detected a false obstacle in the middle of the road, veered off course to avoid it, hit the ditch, and that in turn broke the axle. The next year, 2005, they held the same challenge, and the winner was Stanford University; that's the winning car right there, and its name is Stanley. For winning, Stanford got prize money of two million dollars; second place got one million, and third place five hundred thousand. Very big prizes, but to put things into perspective, that car probably cost anywhere between two and five million dollars to build.

From there they moved to the DARPA Urban Challenge (this is the winner of that), which was held in 2007; there has only been one of them. This involved vehicles manoeuvring in a mock city environment. It was held on an old abandoned airstrip where they set up a realistic city environment, and the cars had to move in and out of traffic, merge, park, avoid obstacles, and drive through busy intersections (I'll show a video), and again they had roughly the same prize money. Now, here at the university we have the Not-So-Grand Challenge, which is our way of doing things, because we obviously can't afford a two-to-five-million-dollar car. What we use is a little toy Hummer, a bit smaller than Carnegie Mellon's Hummer, but you can see we have a vision system mounted on the top. On this one there are no laser scanners; it's all done strictly by vision. So: a smaller robot and a smaller course. Rather than a 142-mile course we just use the quad outside and try to do some laps around it. The one downside is there's no prize money.

All right, so humanoid robots. Probably the world's most famous humanoid robot is this one here, the Honda ASIMO, a very intelligent robot. It costs about a million dollars to manufacture, and if you wanted to rent it, you could, for about a hundred and fifty thousand a year. So why humanoid robots? They have numerous joints, usually between 15 for a simple one and 30 for a more complex one, so the complexity of controlling and dealing with these robots naturally increases. You have other things to deal with, like balance and joint constraints: are you going to move a joint in such a way that the motor snaps, things like that. So why would we want to make things harder for ourselves? Well, some researchers are studying humanoid robots to get a better understanding of the human body, because in order to build something you first have to research it and understand how it works. So that's one of the key aspects.
Probably a more practical reason is to develop a robust platform that's efficient at performing multiple tasks in an environment designed for humans. We can design robots to do virtually any single task; you can see there's one in the middle there for vacuuming your floors, and yes, it will vacuum your floor, but it's not going to mow your lawn. Well, they do have lawn-mowing robots, so you could get one of those as well. But why have a separate robot for every task when you could design one that can basically do anything a human can do and perform all of those tasks? A humanoid is the most feasible design for interacting in environments built for humans. Take the stairs up the aisle here, for example: these wheeled robots are not going to be able to negotiate those stairs, but a humanoid could. These ones are probably a little small, that's a big step for them, but larger ones, like the Honda ASIMO, could easily walk up and down these stairs. Another benefit is that the tools we design for ourselves can also be used by a humanoid robot: in manufacturing, say, we can swing a hammer and a robot can swing a hammer just as well. So theoretically a humanoid robot should be able to do any task that a human could do. This is the HRP-2 robot, and what the researchers in this laboratory are doing is trying to get the humanoid to perform day-to-day tasks. You can see in the top photo it's washing some dishes, and in the other two photos it's pouring and serving beverages; pretty much anything we can do, this guy can be programmed to do.

The challenges of humanoid robots, as mentioned before, start with the control of many motors. We're not just dealing with driving forward and turning left and right; we're trying to control and manipulate every joint. Then there's limited power and limited computation. You saw before those small-size robots zipping around really fast and making really accurate shots, and that's because they have those really large computers off to the side. The humanoid robots are a little more limited in terms of computational power. You can see this guy here; notice anything unique about him? If I turn it this way: it has a cell phone for a head, and that cell phone is doing all the computation and the image processing. It has a camera in it, and it's basically doing the work of all those computers sitting off to the side for those small Carnegie Mellon robots. Imagine trying to take all that computing power and all that processing and shove it into this phone; it's very difficult, and all the algorithms have to be modified to run on this type of platform. But at least it can make a phone call and call you when it scores a goal, so that's a benefit. More of the challenges: vision, meaning obstacle detection, object detection and recognition, recognizing that something is a ball. That's something we take for granted, one of the things we find easy. I can scan this crowd of people, and there are quite a few people in here, and very quickly pick out the faces of all the people I know. For a robot to do this is very difficult. They can detect certain features (you may have seen face-tracking programs before, which detect certain features), but to recognize them and remember them is very difficult. So here's a processed image from a robot running on this phone; this is what the robot might see while it's playing soccer. In the bottom image you can see a blue goal (the goals in the humanoid league are coloured blue so the robots can detect them more easily) and then the little ball; this one is pink, but normally we use an orange ball like that one. When the image is processed, it basically just tries to extract the key features, being the ball and the net, and then it's going to try to put that ball in that net however it can.
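To show what "extract the key features" can look like in practice, here is a small colour-segmentation sketch with OpenCV: threshold for the ball colour and the goal colour, find the centre of each blob, and decide how to turn. This is my own illustration, not the team's vision code, and the HSV colour ranges, pixel thresholds, and behaviour names are assumed example values.

```python
import cv2
import numpy as np

# Made-up HSV colour ranges for an orange ball and a blue goal; real systems
# calibrate these per venue, because lighting changes everything.
BALL_RANGE = (np.array([5, 150, 150]),  np.array([20, 255, 255]))
GOAL_RANGE = (np.array([100, 150, 80]), np.array([125, 255, 255]))

def find_blob_centre(hsv, colour_range, min_pixels=50):
    """Return the (x, y) centre of pixels matching the colour range, or None."""
    mask = cv2.inRange(hsv, colour_range[0], colour_range[1])
    m = cv2.moments(mask)
    if m["m00"] < min_pixels * 255:          # too few matching pixels: not seen
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def process_frame(frame_bgr):
    """Toy version of the 'extract the ball and the net' step described above."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    ball = find_blob_centre(hsv, BALL_RANGE)
    goal = find_blob_centre(hsv, GOAL_RANGE)

    if ball is None:
        return "search"                       # spin until the ball shows up
    centre_x = frame_bgr.shape[1] / 2
    if abs(ball[0] - centre_x) > 40:          # ball well off-centre (pixels)
        return "turn_left" if ball[0] < centre_x else "turn_right"
    return "approach_and_kick" if goal is not None else "approach"

# Example with a synthetic frame: a black image with an orange patch as the "ball".
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:120, 200:220] = (0, 128, 255)       # BGR orange patch, right of centre
print(process_frame(frame))                   # -> "turn_right"
```

Notice that the robot only ever reasons about coloured blobs, which is exactly the limitation the next part of the talk points out.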
But after it determines that, okay, that green spot is the ball, that's what it's trying to kick, it still doesn't know whether it's a rubber ball or actually a piece of fruit. I could swap that ball for a real orange and the robot would gladly walk up and kick it as well, whereas one of us would probably avoid doing that.

The benefits of humanoid robots are that they have multiple modes of transportation: not only various types of locomotion like walking, running, sidestepping, and crawling, but also different kinds of walking. They can perform a more balanced walk: if they're walking on an uneven surface they can try to keep their weight exactly over their feet. If they want to go faster they can use a dynamic gait, which is almost like a run; if you stopped the motion halfway through, the robot would fall over. Dynamic gaits are very robust on flat surfaces, but as the terrain becomes uneven they become a little unstable. This is one of the challenges in one of the robotic soccer competitions: a stepping field, where you can see each of the coloured surfaces is at a different level. The robot can either use vision to detect the different levels by their colours (just like the ball and the goal, colouring lets it process the information more easily) or it can use sensors in its feet. These guys are a little smaller and usually don't have pressure sensors in the feet, so this one is trying to step over; you can see it gets onto the second-highest surface, takes one more step, and doesn't quite make it.

Moving to something a little bigger than those small-size robots: this is Archie, the newest robot in our laboratory here at the university. Right now it's basically just legs; it doesn't really have much of a torso and there are no arms yet, since it's still in the process of being built, but it's a very sophisticated robot. It costs about two hundred thousand dollars in its current state, and it's basically only half done. It also has something unique that some of the other robots don't have. If you look at its feet, they're very long and narrow, whereas a lot of the other humanoids you might see, like the Honda ASIMO, have a big square foot. So this one is going to be a little more difficult to balance, but it also has a toe joint, which in theory should let it run and walk a little more easily.

So, changing perspective a little from the hardware aspects and moving into some development tools. Some of the tools available are called robotic suites, and basically these just help you develop for robots, so you don't have to go to graduate school to become an expert: if you have a robot and you want to use it in your home, you can do that. These robotic suites usually have some sort of visual programming language, which lets you configure your robot basically by dragging and dropping blocks; they have a robotic simulator; and they usually have a common interface so you can control multiple robots with the same code, and that same code will run on either a simulated robot or a real robot. Some examples are Microsoft Robotics Studio, the Cogmation robot suite, and Player/Stage, which is an open-source project.
For the visual programming languages, like I mentioned, you create programs by manipulating graphical elements. This is an example from Microsoft Robotics Studio; it's fairly complex, fairly low-level visual programming, but one thing to look at is the end: it has text-to-speech, so essentially this is just a speaking program that runs through and then says something at the end. The logic is the same as regular programming (it's also called flow-based programming), so you still have to know the underlying concepts, but it makes things a little easier to pick up: people can usually pick up graphical things more easily than straight text.
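For comparison, here is roughly what that kind of dataflow looks like written out as ordinary code: a camera source, a transform, and a display sink. This is just an OpenCV sketch of what the graphical blocks stand for, not the code that any of these suites actually generates.

```python
import cv2

def camera_flip_display(camera_index=0):
    """A tiny 'camera -> flip -> display' pipeline: the three blocks you would
    wire together in a visual programming language, written as ordinary code."""
    capture = cv2.VideoCapture(camera_index)        # input block: the camera
    try:
        while True:
            ok, frame = capture.read()              # grab one image (the buffer)
            if not ok:
                break
            flipped = cv2.flip(frame, 0)            # processing block: flip vertically
            cv2.imshow("camera", flipped)           # output block: the display
            if cv2.waitKey(30) & 0xFF == ord("q"):  # press q to stop
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    camera_flip_display()
```

Each call here corresponds to one block in a diagram like the one in the demo that follows, and the connections between blocks become ordinary data passed from one call to the next.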
So I want to show you some software. There we go: this is developed by Cogmation, and it's basically a graphical programming language, so we'll do something quick to show you how easy it is; this is probably something all of you could do. Here I have a project open for a robot that has a camera in it, and I have different inputs and outputs; remember the whole inputs, processing, outputs idea. I can drag on an input, which is the camera, and I can add some outputs. This image buffer here just buffers the image so that the camera and the display aren't changing at exactly the same time. Sorry, I've got the microphone cord caught in my pocket; I guess that wasn't smart. All right, so I've got a camera, I've got an image buffer, and now I want to display it, so I just add an image-out module, and I can connect these up by clicking on one module, dragging out to the next, and selecting the output of one and the input of the other. From this I can generate the project, which generates the code behind the scenes, so you don't manipulate any of that yourself. Normally these robotic suites have some sort of runtime that you run the code in, so I'll start that and transfer the code to the runtime: a couple of clicks, all visual, all done with the mouse. And from here, if I run it, and if it works properly, there: I now have a display of the camera on my computer. You can see the image is upside down; that's a Windows thing, on macOS it actually comes out the right way up. To show you how easily it can be changed (again, probably something any of you could do), I can go here and grab a flip module, which is just going to flip the image, connect those together, reconnect the other ones, save, generate the code, transfer it, and now it's right side up. So it's very easy, probably something any of you could do, and so you could all program your own robots. This tries to alleviate the daunting task of dealing with all the low-level underlying code, because that is very difficult and does take years to master.

Another tool in the robot suites is a simulator. Simulators generate real-world physics and dynamics for a scene: you can set up and model any type of scene you want, and it will have things like gravity and friction, and it will model the sensors and all the other aspects of the robot. This is really nice for testing code before deploying it: before you run it on your real robot, you can check that it's going to work. This would have been useful a couple of years back in our lab. We were doing some projects with a little toy car, something similar to the little yellow Hummer you saw, and one of the students' programs crashed, and when it did, for some reason it decided to set the motors to full velocity. The robot suddenly took off and was heading straight for the door, and luckily one of the other students was able to jump over and pick it up before it ran into somebody. In that situation a simulator would have been nice: we could have seen that, oh, the code is going to set the motors to max, and avoided a possible catastrophe. Another thing is that multiple developers can test code concurrently: if we have one of these humanoid robots, then my lab partners and I can all be working on it at the same time, testing our code in the simulator and then trying it on the real robot afterwards.
A simulator also allows you to test new robots and new hardware. Say I have a robot and I want to see whether adding a second laser scanner would help; laser scanners usually run between three and ten thousand dollars each, so they're very expensive. I can test it in the simulator, find out whether a second one actually benefits my robot, and maybe save myself the cost. There are also dangerous or unrealistic scenarios: in some simulators you can either model your own environments or use ones that would be impractical to set up. One of the ones we have in ours is a Mars terrain; we've actually taken NASA data and mapped out a Mars-like terrain, so you can see how your robot would behave driving around on Mars. One thing to note: no simulation is perfect. We can't calculate everything; mostly they're just general approximations of everything, and the quality usually depends on the underlying engine and on the way the robots and the world are modelled in the simulator.

So, to show you a simulator, I'll start one up here. This is Microsoft Robotics Studio, the simulation portion of their studio. It takes a little bit to start up... it takes a lot to start up. All right, here is our simulator; I'll just move this to the side. You'll probably recognize this robot: it's the one sitting on the table over there. This is the NAO robot, made by Aldebaran Robotics in France, and it's currently the standardized platform in the robotic soccer competitions. I can connect to the simulator, go here, and have our robot do some exercises, so we can simulate that without actually using the real robot. This can come in handy: you probably can't see it from there, but on this guy some of the plastic pieces are cracked and coming apart a little, because this one has been used and has had a couple of spills. All right, we'll just sit it down like this, since I don't want it to have another spill, and the one in the simulator is still going. So you can see that simulators are useful: we can save our robots some damage, because they are very expensive. This one, in its current state, is probably between five and ten thousand euros, so probably closer to ten or fifteen thousand Canadian. Very expensive. And you can test out some fun things too; here, it'll do a little dance. So that's the simulator, and you can see how they can be useful.

From there, you may notice the simulators look similar to some video games, and this is something that's just starting to take off: a lot of simulators out there, including Microsoft Robotics Studio and the Cogmation robot suite, use the same technologies that are used in game engines. They actually use NVIDIA PhysX, which is a really accurate physics engine, and some other simulators use OGRE 3D, which is a 3D rendering engine. So you can see how a lot of the technologies used to build and control robots can also be used in video games, because the two fields are already starting to merge. And this is a game we've developed at work, AI Apocalypse; it's just a prototype game where you basically build the AI using a visual programming language and then run your program and compete with your AI, things like that.
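Going back to the point that no simulation is perfect: as a miniature example of what a physics engine is doing under the hood, here are a few lines of the simplest possible integrator, dropping a ball under gravity with a fixed timestep. Real engines like PhysX are far more sophisticated, but the same idea of stepping an approximation forward in time applies; the numbers here are arbitrary example values.

```python
def simulate_drop(height=1.0, dt=0.01, gravity=9.81):
    """The smallest possible physics 'engine': step a falling ball forward in
    time with Euler integration.  Real engines handle collisions, friction and
    joints, but the core idea is the same: the world advances in small
    approximate steps, so results depend on the engine and the step size dt."""
    y, velocity, t = height, 0.0, 0.0
    while y > 0.0:
        velocity -= gravity * dt      # gravity accelerates the ball downward
        y += velocity * dt            # move the ball by its current velocity
        t += dt
    return t

# With dt = 0.01 s the ball "lands" after 0.45 s; the exact answer for a 1 m
# drop is sqrt(2h/g), about 0.452 s.  Close, but not identical: approximation.
print(round(simulate_drop(), 3))
```

Shrinking the timestep makes the answer more accurate but the simulation more expensive, which is exactly the kind of trade-off the underlying engine is making for you.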
So, for the future: where are things going? If you've seen the movie I, Robot, you'll know that this robot is from that movie. This is the NS-5, a very sophisticated robot in the film. Obviously we're not there yet, but I think that's where we're trying to get to: robots actually moving and walking among us, using our tools, cooking with our utensils, driving our cars, things like that. That's where I feel robotics is going, and we're not all that far off. This is a robot that's been developed, and you can see some of the similarities with the I, Robot version; the arms and legs here are very similar. This one actually has artificial muscles, rubber tubes that are inflated with compressed air to control the joints, and this hand is very articulated; it models the human hand very accurately and can pick up very delicate things like light bulbs and pieces of fruit. So in terms of hardware, we're getting there. On the intelligence part, the software part, we still have a long way to go. As you've seen from some of the videos we're making progress, but the intelligence side, detecting those objects and obstacles and interpreting the environment, still has a long way to go.

So that's it for the presentation. How are we doing on time? How much more time do I have, a couple of minutes? Okay, let's see if we can get one of these robots to drive around. I'm going to try to get this little R2 unit, the yellow one, going, whether it actually works or not; I was having some trouble earlier, which is why we were delayed, so we'll just try to drive it around a bit. It's a little slow to connect... all right, we're connected, we're getting the video feed, and I can see some feet moving over there. So this robot has some applications, some things it's being used for. One is telepresence: if somebody is in a remote location and wants to be at a meeting or something, they could have this guy driving around. It has speakers in it, so you can project your voice through the robot, and it has a camera, so you can see what's going on. There are security robots, detecting intruders, intrusion detection, things like that, and that's also ongoing research. It could also be deployed in factories: you can put a variety of sensors on it, and it has lots of places for I/O where you can attach different types of sensors, oxygen sensors, chemical sensors (it's going the wrong way here), so it can detect various chemical levels, which means it can actually be used to save lives in hostile environments as well. Those are some of the applications it's being used for, and it's being researched at several universities. You can also see there are some infrared sensors on it, so if I walk up here, there, it stops moving, because I triggered the infrared sensor. It has various ones around the sides, and those are just a form of range sensor.
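A safety behaviour like that usually comes down to a few lines: poll the range sensors and refuse to drive forward when anything is too close. This is a generic sketch, not the code running on this particular robot; the sensor interface and the 30 cm threshold are assumptions.

```python
def safe_drive_command(ir_ranges_m, requested_speed, stop_distance=0.30):
    """Generic obstacle-stop behaviour: if any infrared range reading is closer
    than stop_distance (metres), ignore the requested speed and stop.  The
    0.30 m threshold is an assumed example value."""
    if any(r < stop_distance for r in ir_ranges_m):
        return 0.0                      # something (or someone) is too close: stop
    return requested_speed              # path is clear: drive as requested

# Sensors report 1.2 m, 0.8 m and 0.25 m; the last one trips the stop:
print(safe_drive_command([1.2, 0.8, 0.25], requested_speed=0.4))   # -> 0.0
```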
So that's pretty much it for the little R2 unit. I was going to try to show some of these other ones, but I think we're pretty much out of time. Okay, we're going to take a break, so maybe during the break I'll see if I can get some of these going, and if you want to stick around you