CU Engineering Interdisciplinary Research Themes

Erin: Thank you all for joining us today! My name is Erin Judge, and I'm the Program Manager for Alumni Events at the College of Engineering and Applied Science. I want to thank you all for tuning in for this next webinar in our alumni webinar series. In this series, we aim to showcase some of the amazing work and research that is happening at the college, and bring our faculty directly to you for an exclusive look. We are grateful to our faculty presenters today for partnering with us on this webinar, which will showcase a couple of our Interdisciplinary Research Themes at the college. With us today we have our Associate Dean for Research, Massimo Ruzzene; the Director of the Engineering Education and AI Augmented Learning IRT, Angela Bielefeldt; and the Director of the Autonomous Systems IRT, Eric Frew. Just a few housekeeping items before we begin. For optimum audio quality, we have all participants on mute except for our presenters. If you experience any audio or visual issues during today's webinar, you can message us in the chat and we'll do our best to help you troubleshoot, or you can contact Zoom support at 1-888-799-9666. After the presentations today, we'll be accepting questions live via the Q&A feature within Zoom. After each brief faculty presentation, we'll have about five minutes for audience Q&A, and then we'll also leave time at the end of the program for further questions to both of our faculty presenters. Please submit your questions through the Q&A button that can be found at the bottom of your screen; if you submitted a question to us in advance during registration, we already have it included. Now, it's my pleasure to introduce Massimo Ruzzene.
Massimo is a professor of Mechanical Engineering and the Associate Dean for Research in the college. He is the Director of the Vibration and Wave Propagation Laboratory, which conducts research in metamaterials, structural health monitoring, structural dynamics, and vibroacoustics. Thank you, Massimo, Angela, and Eric for taking time to share your work in these initiatives with our alumni and friends today. Massimo, I'll go ahead and turn it over to you.
Massimo: Thank you, Erin, and thank you to the Advancement and Alumni Relations team from the college for hosting this. It's my pleasure to provide a brief introduction to the Interdisciplinary Research Themes initiative, and then hand it over to Eric and Angie for the remaining part of this session. I just want to take a few minutes to talk about the IRTs in general. The IRTs are part of the college's response to the strategic vision that was set a few years back, which was to accelerate our research impact as a college by providing technical solutions to pressing societal needs, improve our interdisciplinary research and our entrepreneurial mindset, and really become one of the top engineering colleges in the nation.
And as part of that, as indicated at the time, which is now about two or three years ago, the Dean decided to invest internal resources to build on college strengths and allow us to prepare for future opportunities in areas that impact our state and the nation, so in high-priority areas. To do that, these Interdisciplinary Research Themes were established by providing internal college investment to support the development of large-scale, multi-million-dollar proposals to federal agencies, and through seed grants. These IRTs would be led by an appointed director or directors and would convene approximately 25 to 50 faculty across the college, and we established metrics for success so that we could evaluate their progress and their effective use of the internal funding. A little bit of a timeline: as indicated, this started about 2016 and kicked off in about 2017, roughly, when our first IRT competition called for internal proposals and six teams were awarded, which received funding for two years. Last year we conducted an evaluation of the progress of each of the six teams, and following that we opened up a second round of the competition. These two processes led to continued investment in two IRTs and investment in three more, and today you will be hearing from a continuing IRT, which is Eric's Autonomous Systems IRT, and a new one, which Angie will present. So, just to give a brief overview: we initially invested in six, as I mentioned. These included autonomous systems, as I mentioned, and multi-functional materials, and these two are continuing onward, meaning that they continue to receive investment from the college. Four more are considered graduated; in many ways they're considered self-sustaining centers, having had success to a level where they do not need additional investment to continue. One of these is imaging science, one is precision biomaterials, a third one was the water-energy nexus, and the fourth one was quantum integrated sensor systems, which is a little peculiar because during the two years it was promoted and graduated to a level that included a bigger initiative in quantum at the campus level, which is now called CUbit. All of these, I would say, have led to significant success for us, and we found that autonomous systems and multi-functional materials are indeed successful but could be made even bigger with additional funding. Of the newly launched ones, one is on hypersonic vehicles, one is on resilient infrastructure with sustainability and equity, and, as mentioned before, one is on engineering education and AI augmented learning. The expected outcomes, what we expect and evaluate these IRTs on, are their ability to leverage college strengths: we want to be investing in areas where we already have strengths, but we think that convening faculty and building a critical mass in a certain area will lead to new opportunities. We hope that these efforts will accelerate our research impact in these areas in the next five to ten years; this is the horizon that we're looking at, and they will be able to provide teams to pursue large research opportunities.
They will be able, or will be encouraged, to engage with industry so that some of the technology can transition, and overall we hope that we can achieve a position of leadership in these areas at the state level and at the national level. It's sometimes hard to directly link them to the IRTs, but in the last year we had significant success on very large, center-like projects, and this is a list of about five or six of these in the millions-of-dollars range; almost all of them, perhaps all of them, had some affiliation with some of the IRTs. This is one piece of evidence of the impact these IRTs have had, and continue to have, on our ability to pursue large research opportunities and become leaders in certain areas, such as quantum, power infrastructure, sensing, AI, and other areas related to materials research. Without further ado, and hopefully I wasn't too quick, I'm going to give the floor to Angie. I think she's going to be next, talking about the Engineering Education and AI Augmented Learning IRT, and then we will have some time for questions for her, and then we will hear from Eric on his autonomous systems efforts. Thank you very much, and thanks again for organizing this.
Angela: So, I'm going to tell you all a little bit about our new IRT in engineering education and AI augmented learning. I'm Angela Bielefeldt, a faculty member in the department of Civil, Environmental and Architectural Engineering, as well as the Director of the Engineering+ program, and my co-director is Alessandro Roncone, who is from the department of Computer Science. As you just heard, our IRT formally started in July of 2020, in the midst of COVID, so we're just getting off the ground with our activities. So, let's make sure I can get my slides going. There we go!
The mission that we have articulated for our IRT is to develop theories, technologies, and know-how for advancing student-centered learning and next-generation learning environments in K-16, graduate, and professional engineering and computing education. We had a question prior to this session that asked for a definition of student-centered learning. Student-centered learning acknowledges the fact that each student brings a different background, different interests, needs, and motivations into the learning setting, and so you're teaching to accommodate those differences so that all students can learn optimally; the traditional lecture-based approach is sort of the opposite of a student-centered learning strategy. A second mission activity is to establish CU and our college as a national leader in engineering and computing education research and in innovative learning technologies that support engineering and computing education, and then, finally, to grow a diverse workforce of future researchers, leaders, and practitioners. So what does this really mean? A little bit of background on how our IRT came to be may help explain it. Back in February, when we were applying to compete for one of these IRT grants from the college, we actually had two separate and distinct efforts: one in engineering education research, covering activities that have been going on in the college for quite some time, spread across all of the different departments in engineering; in addition, there was a separate IRT proposal related to AI-enhanced learning, really focused at the K-12 level. It was immediately obvious that our two activities could come together to realize synergistic benefits on both sides, but we are still in the early stages of figuring out where those leverage points are as we learn new language. My expertise is actually on the engineering education research side, and I'm learning a whole lot about the AI side to really understand how I can implement AI as both a research tool and a teaching tool.
So, I'm going to walk through a little bit of background on these two areas, show you where they come together, and describe some of the emerging research that we're pulling together in these areas. To start off: what is engineering education research? Hopefully this slide is not too intimidating, but you can see here a table that shows a continuum, from the perspective of a faculty member, of what this might look like. Hopefully, when you were college students, all of your faculty were engaged in effective teaching so that you were supported in your learning journey. Nowadays, however, we're hoping to move all faculty up a level to what we would call scholarly teaching: it uses those same good teaching practices that we've learned through experience are effective in helping students learn, but it integrates assessment. Assessment is really important because student learning varies in each setting and for each individual student. What worked when I was teaching freshman civil engineers 26 years ago, when I started here at CU Boulder, is probably quite different now, because students are changing; there's a new generation with different backgrounds from their K-12 environment, as well as different motivations. And so, by using assessment, we can see how and what students are learning, and what we as instructors are doing that is helpful in that regard.
So it's an assessment both of students and of our own practices as educators. If we then take those lessons that we've learned and publish them so that other folks can learn from what we have found to be most effective, that's considered growth into what's called the scholarship of teaching and learning. It's out there so that others can learn from and adopt what we've done, but it's still usually based on our own classrooms and what we've used and learned in a small setting. Finally, if we move to what's truly engineering education research, these are larger studies with generalizable findings that can be applied by others; they may relate to our understanding of learning theories, or they may be on the more practical side. But regardless, it's gone through this rigorous cycle of posing a specific research question and a hypothesis, gathering extensive data that can support or refute your initial hypothesis, and then answering those research questions so that others can benefit. So that's engineering education research in a nutshell. However, where do we apply engineering education research? All of the examples that I've talked about so far are higher education classrooms, but really our effort is interested in the entire ecosystem of engineering education research. This slide, again, is really cluttered and really busy, but that's because we're interested in so many things. Here you can see the higher education world that we're talking about, undergraduate students and graduate students, and how they learn in their courses and move through their educational trajectory. However, engineering education research can be much broader than this. We can think about the K-12 space: if we are engaging in outreach activities in those classrooms to get students excited and motivated about engineering, then we could study how that works. How effective are those settings? What about summer camps, where students take some computing modules? All of those things are rich areas to understand, and they really affect what we do in higher education. By the same token, we can move beyond higher education to the profession itself. Once students graduate and are in the workforce, do they have the skills and tools that they need? Most will engage in lifelong learning and learn new things over the course of the decades that they're engaged with the profession, and we need to understand that space and we can engage in it. We can engage in worker retraining; we can engage in understanding how what is needed in industry has changed from when, say, I was in industry 30 years ago, so that I can keep my teaching practices current for the students that we're educating now.
So the engineering education research space is incredibly broad. It's not just how does a student best learn thermodynamics; it extends to ideas of persistence, retention, motivation, and attitudes, in addition to knowledge outcomes, as well as informal educational spaces and co-curricular activities, like a student professional society or an internship. Our research can be posed in any of these different areas. It's a very exciting and broad research space, and so you can see that there are multiple opportunities where machine learning or AI enhancements can both help inform us about what's happening and improve our practices. So, a couple of quick examples of engineering education research from my own activities, chosen to highlight this transitional space between undergraduate students and the profession, because I thought that might be interesting to you since many of you are alumni. We are engaged in a National Science Foundation-funded research project exploring ethics education, and part of the study was to partner with faculty who we believed embodied excellent teaching practices for engineering and computing ethics education. So we visited those institutions, we conducted focus groups with the students who were part of those classes, and we looked at assignments to see evidence of students' ethical reasoning. But that's one part, and I think you can probably think back to when you were students: there were some classes you thought were amazing, and if you think back now, maybe you think, it was fun, but how helpful was it?
And the opposite may also be true: things that you didn't fully appreciate when you were a student are now much more relevant to you as a practicing professional; that was really important, and you're really glad you learned it. So we felt that it's particularly important to get the perspectives of alumni, of practicing engineers, to really reflect on the importance of that education, for better or worse, given their current perspectives. And so we conducted both surveys of alumni from these different classes and interviews with some of those individuals, to get a deeper understanding of what was really effective about the learning that they experienced around ethics. A similar example from my own research was looking at social responsibility, and it had this same idea of a trajectory across an undergraduate and graduate education: longitudinal surveys, where we surveyed students walking in the door as incoming freshmen, surveyed them as graduating seniors, and then continued to invite those individuals, after they'd been in the workforce for about a year, to again take the same survey instrument, so we could see a trajectory in their attitudes around professional social responsibility. And again, with more in-depth interviews with both alumni and employers, to get their perspectives on this ecosystem that surrounds the ideas around professional social responsibility. I'm going to keep a quick eye on the clock so I don't go over my time. But let's also look now at the AI side: what does AI bring to the table that can be helpful? First, I'm going to draw attention to the new Artificial Intelligence Institute for Student-AI Teaming that Massimo briefly mentioned. This is a 20-million-dollar, five-year effort where CU is partnered with a number of additional universities (we're the lead university among nine), as well as public schools and private companies. The goal of this research can be briefly encapsulated with this slide: artificial intelligence could be a social, collaborative partner that helps students and teachers work and learn more effectively. So this is an example of a classroom. Here's the teacher; the students are working, say, in small teams on hands-on projects, and you, as a teacher, can only get to each table so often to help them along their learning journey. However, what if there's an Alexa-type object sitting on the table, and it's sort of eavesdropping on the conversation, and it can pick up on cues for when the students are stuck, or when they're frustrated, and provide a timely nudge in the right direction for their learning? What could that do? And moving up the chain, what if, instead of a little Alexa blob, it's an actual robot? Would it be different for kids to engage with a more humanoid robotic embodiment, as opposed to just a speaking blob? So this is the kind of thing they are working on in their new center.
There are a lot of challenges on the AI side, not only in natural language processing but also in analyzing body movements and engagement, and at this point it's really targeted at the K through 12 level, whereas most of the engineering education efforts that we're exploring are situated with somewhat older learners. Here are a couple of other examples that are happening. As you can imagine, under COVID, with a lot of courses being taught over Zoom in remote settings, that provides both challenges and opportunities for educational research. You can easily record students, you can record the speech, you can record the chat, and that may provide you with an opportunity to understand the learning processes in those environments. You can see in this example that, again, it's not just what people are typing in chat, it's not just what they're saying; they're actually monitoring gaze and monitoring attention to try to have other metrics for student engagement and learning. And one of the questions that we often receive relates to concerns around the ethical implications of privacy that surround AI, and that's certainly something that I'm particularly motivated to study given my interest in engineering ethics. So it's something that we definitely are always thinking about: to what extent are students consenting to have their information used? Do they truly understand what that consent process looks like? And to what extent are we protecting their privacy and their data? Additional examples I think I've already mentioned in a general way: can artificial intelligence partner in these authentic student teams? In particular, say I was teaching a first-year hands-on engineering projects course. You can imagine AI monitoring these conversations, watching whether all students are being equally engaged in that process, or whether some are sort of sitting off to the side and not truly engaging fully in the learning.
And is that their choice? Are they being excluded in subtle ways? When we look at traditionally underrepresented students, for example, we want to attend to those patterns of interaction that bring into play different cultural backgrounds, different social styles, and so forth. And so machine learning and AI can help us to understand and optimize these learning environments. I think I'm almost out of time, so I'll just quickly highlight a couple more elements. One: if you're curious, there are folks already doing engineering education research across all of the different departments and programs within our college, and so I just threw up a smattering of names. If you look for the department that you were affiliated with when you were a student, you might recognize some of the names, but there are lots of us who have already been doing engineering education research activities, not necessarily tied to AI at this point. In addition, there are world-class higher education researchers in other programs here at the University of Colorado. There's a term called discipline-based education research, and that really targets higher education, frequently in the science disciplines. Physics education research is a long-established discipline, and if you go to each of the different science disciplines, there are national leaders in education in each of those spaces, and we can learn from each other. You as an engineering student took courses in mathematics and physics, and so there are ties that we should be thinking about and sharing across those spaces, in addition to amazing partners in the School of Education, who tend to study more K-12 learning spaces but are excellent partners in our educational research endeavors. We, again, hope that our IRT leads to larger funding activities from a variety of federal agencies, as well as private funding opportunities. And then, again, we're brand new: we just awarded four seed grants on December 1st in different topic areas, and we're attempting to build community at CU. We anticipated initially that this would be in person, but we're still doing it over Zoom, as well as through a seminar series, where currently we've had CU-based speakers, but in the spring we plan to expand to national speakers. So I'll conclude there and field some questions.
Erin: Thank you so much, professor! That was a great introduction. We do have a couple of questions here from the audience, so I'll start with one here from Chuck. Chuck says: I'm impressed by the emphasis on assessment; is this work preceded by creation of a logic model?
Angela: Yeah!
So, the logic model activity is something that's coming to us from our partners in the School of Education; they really use the logic model language. I would say a lot of us are doing that, but perhaps not with the same mindset. Accreditation under ABET has really driven us into an assessment-minded practice, where each course, and even better, each session, each lecture, really starts with articulated learning objectives, and then you're gathering data to measure against those. So, in my mind, that's sort of the logic model that we are in fact using, looping through the process, and here we are at the end of the semester, as you may remember. Beyond the faculty course questionnaire process, Noah Finkelstein in physics education is leading us in a total quality framework, where we are trying to be more intentional about integrating richer data from our students. I just did focus groups yesterday with students in courses to get their opinions about strengths and areas for improvement that they see in the classes, so we can be very intentional about always being in a continuous improvement cycle where we're learning.
Erin: Great! Thank you! I know you touched on this a little bit, but Ronald asks another question: what types of other education specialists have you included in this study? You did mention quite a few that you've engaged; he specifically calls out psychologists and asks if they have been part of this work.
Angela: So, we're just starting out, and I think it's true that all of our seed grants were given to interdisciplinary teams, but it's taking a little time, I think, to connect all these different groups together on campus. I've always collaborated, say in this ethics study, with a trained psychologist who's an educational researcher.
There are different traditions that come into educational research, some from psychology and some, actually, more from anthropology and sociology. At the moment, that expertise is largely embedded in our School of Education partners, but we're definitely interested in expanding to anyone who's interested in partnering with us.
Erin: Great! The next question here is from Daryl, asking how AI augmented learning can be used to jump-start technical careers, so that would be a non-four-year degree.
Angela: Yeah, that is excellent! I think that's what we're really hoping for. If you look at the fact that technology and professions are changing so rapidly, we know that lifelong learning is imperative, and most folks don't want to come back and sit in a classroom environment. You're adult learners; there's actually a different term for adult learning, called andragogy. And so, given the maturity that older working professionals possess, and their self-motivation, different types of learning are optimal. AI, I think, is a critical tool there because learning can be more self-paced and self-directed, since each learner is going to come into that process at such a different point. If you partnered with an AI, the AI is probably running in the background, so you may not even know it's there, but it's basically giving you lessons based on how you perform on a given task, for example. I definitely think there are strong opportunities for AI to be a partner in the self-paced, more specific learning practices that professionals will be interested in.
Erin: Thank you! We do have a couple more questions for you, Angela, but I'm going to save them for the end so that we can switch over to our next IRT. Thank you so much! So here, I'm going to go ahead and introduce Professor Eric Frew. Eric, please start.
Eric: Thanks a lot! I'm happy to be here. I'm going to talk to you all about the Autonomous Systems Interdisciplinary Research Theme, or ASIRT, as I'll call it. I'm one of the two co-directors for ASIRT; my colleague Chris Heckman is in the Computer Science department, and he's co-directing this effort with me. Rather than start with our vision, which I'll describe in a couple of slides, I wanted to start with a couple of definitions. This is the Autonomous Systems IRT; well, what's an autonomous system? It's an agent or system comprised of a machine being driven or controlled by some form of autonomy, where tasks, roles, and objectives are delegated by a human user. And this naturally raises the question: well, what's autonomy?
For the purposes of this discussion, we could literally spend hours debating the limits of what autonomy is, but just think of it as the ability of a system to perform complex tasks with reduced human intervention, for increasingly extended periods of time, or at remote distances. I have a picture here at the bottom to represent autonomy and autonomous systems for the sake of this discussion. I want to point out all three components here, because I think they're important to how we view autonomy and autonomous systems in the context of ASIRT. On the far right there is a machine. For the purposes of this IRT, one of the questions that often comes up is: how is autonomy, or autonomous systems, different from artificial intelligence? For my discussion, the key difference is that there's always going to be a machine interacting with the environment on the end, whereas you can think of artificial intelligence as being broader; it engages with data, not just with a machine. So on one end of our picture there is the machine. On the other end, and this is equally important, there is the human user. There will always be a human user in our vision of an autonomous system. This system is dispersed into the world for a reason, and that reason is something driven by a human user; this human user is always going to be engaging with and paying attention to this autonomous system. I don't think there's any real system that you would truly send out there and never engage with again. I often get pushback from colleagues and others that, you know, nobody wants true autonomy, and if true autonomy means there's no human on the end, I would agree. So on one end we have a human user who is operating, engaging with, and delegating roles to the machine through autonomy, and autonomy is the rest that goes in the middle.
Now, one of the key questions when we start to think about autonomy is how to certify these systems. How can we understand the limits of their behavior and feel confident as a society when we release them into the wild? This picture also lets me talk about the different ways we currently certify systems, in a very broad sense. Looking back at the machine end of this picture, most machines, or many machines, I'll say, are certified through what I'll refer to as process-based certification. When you get into an automobile, it is certified as roadworthy because of the process by which it was built: the design specs that went into it and how it was built. An aircraft is airworthy, again, because of how it was built. Now, there's a lot of effort that has perhaps gone on before the construction of a specific aircraft or a specific car, but when a car or aircraft comes off the assembly line, it's certified because of the process through which it was made. That's in contrast to the other end of this picture, the human user, right? We give licenses, or we certify human operators, all the time. I have a teenage daughter who's going through the process of learning how to drive, so I'm very attuned to this right now. We certify people through a performance-based certification process, right? A student driver will take a test: they'll have some training, they'll get in the car and go on the road with an instructor who will see how they perform, and then we'll extrapolate from there how the driver, or the pilot, will perform in other contexts, because you can't see every single possible scenario play out before you certify them. So when it comes to autonomy, one way to think about the challenge here, or one of the challenges, is: how do we bring those two perspectives together? On the one hand, we can do a lot of process-based certification of machines, how they were built, how they were constructed. On the other hand, we know how to do performance-based certification: we can observe a system and how it behaves in certain scenarios and extrapolate from there something about the system. Autonomy, which sits in between, has aspects of both. Certain kinds of autonomous algorithms are not repeatable; they are learning as they go (machine learning and artificial intelligence). So you can't just do a bunch of test cases, see how the system performed, and say, okay, we know how the system will behave, because we can't predict all the different ways that it will perform. But we also know how it was built; we know a lot about the software, the code, and the electronics that go into an autonomous system. So this leads to the vision of the Autonomous Systems IRT, and that is to focus on the ability to certify the performance of an autonomous system so that we can release it into the wild. And our perspective, and I think what makes CU's effort unique, is that we're focusing on these three aspects that I've highlighted here. Showing that systems are smart enough to perform across a wide range of conditions: I'd argue that most of my academic colleagues focus there.
You can go find plenty of YouTube videos of robots and autonomous systems doing neat things in a lab or, you know, in the field. Some of us look at how to show that these systems are safe, with, you know, mathematical properties and performance bounds that we can assert, and then very few of us in the academic world also consider how secure these systems are in response to an adversarial or malicious interaction. So the vision of ASIRT is to understand how to certify autonomous systems by, you know, demonstrating that they’re smart, safe, and secure. So when it comes to organizing our efforts, you know, I’ve often been told by colleagues that we can’t approach autonomy as a widget; that framing doesn’t work. So autonomy always has to be grounded in the applications. Because ASIRT is about building out opportunities within CU Boulder, we looked at where we have application expertise and where we have domain expertise that can be brought to bear on this problem. So the IRT is sort of organized around a couple of key application domains, which are shown here as columns on this slide. Field robotics and smart vehicles, this is a strength of our college; spacecraft-based systems; automated infrastructure and smart transportation, which sort of sits on the line between things that we’re really good at and some future opportunities that we’ve recognized; same thing with advanced manufacturing and industrial robotics. And then, finally, we acknowledge that in a lot of the other IRTs there are elements of autonomy, or there are opportunities for our IRT to engage. Angela described a robot interacting with students, so that’s an application that sort of blends across the two different IRTs.
And then, you know, when it comes to the disciplines, this is how we as academics are conditioned to think. I’ve just sort of shown you how you can take the concepts of smart, safe, and secure autonomy and map that into domains like control theory, artificial intelligence, formal methods, and software design, and some of these other concepts that we kind of think of as disciplinary. So the idea of the IRT is to then bring together the disciplinary focus and the application expertise to show how autonomy and autonomous systems can be deployed in these different contexts, and to think through the challenges with certifying them for use across these different application domains. Angela had a slide where she sort of listed a lot of the names of the individuals involved in her IRT. I took a different approach here. Rather than listing out the names, which you can find on the page here at the top, I decided to create a word cloud of capabilities, so you can get a sense of the breadth of the faculty in the college of engineering who do work in this area. This was an opt-in process, you know; any colleague who self-identified as doing work in autonomy or autonomous systems was invited to join the IRT, and they provided some information on sort of their backgrounds. So this is just sort of a word cloud of what our colleagues described as their focus, the colleagues who feel that they are contributing to our work in autonomous systems. The two words that jump out

the most: design, right there in the middle, and applications. I think this is a reflection of the perspective of CU Boulder’s engineering school in general, very application driven. We do a lot of work, you know, not just focusing on the theoretical elements of our work, but seeing how that applies to real problems. Other words in there: networks, multi-functional materials, again, that interplay with some of these other areas where we have strengths in our college. And so, I won’t read through all of those, but you can get a sense of the breadth of background and expertise of the faculty who contribute to this particular IRT. A quick aside, you know, the concept of an IRT is a little bit different in terms of organizing faculty within the college to go after large-scale opportunities, so I just wanted to say a little bit about how we approached the job of running and managing an IRT itself. We identified sort of three main areas of emphasis within the IRT for how to use the resources that we were given by the college of engineering. One was growing that internal community of autonomy researchers; identifying areas and opportunities where we have the critical mass of expertise and there are important large-scale problems for us to address; and then, you know, further aggregating into teams and identifying specific opportunities that we would then go after, all sort of aimed towards these large, what we’ve been referring to as center-scale projects. You know, we don’t necessarily have to create a new center of autonomy, but we want to go after projects that have sort of larger budgets associated with them, projects that faculty aren’t necessarily used to pursuing as individual researchers in their own respective groups. Like Angela also mentioned with her IRT, we’ve had some seed grants. We’re in our third year, so we’ve done this for a few years now.
I’m not going to list all of these different seed grants, but almost all of them involve multiple faculty from across the college of engineering; these are all geared towards expanding research strengths and research expertise, and creating new connections within the college. I’m going to highlight four of them here just because… well, I want you to read those four and remember them when I start to talk about some of the larger projects that we’ve had. Some of our seed grants are already folding into successful larger-scale collaborations within the college of engineering. Now, through these seed grants, you know, we ran these basically to try to encourage faculty to form teams, to do that aggregation that I described, and within that process seven areas really jumped out at us where, again, we had that critical mass of faculty and the depth of world-class expertise to start to build larger opportunities together. So, targeted observation… targeted environmental observation, you know, we’re one of the top earth science campuses in the world, and our engineering faculty do a lot of work on autonomous aircraft, for example, that’s where I work; autonomous fabrication and synthesis; concurrent learning and planning; AI and machine learning concepts, and how we can prove properties of their behavior; human-autonomy teaming, this is sort of a topic that was growing within our IRT before the AI engineering IRT was formed, but it’s the same theme, right?
How the autonomy and the human actually work together as an application, not just because there’s an operator on the end. Distributed autonomous robotics, so swarms and teams of robots and how to think about their design and interaction; high-assurance autonomy, so what kinds of mathematical properties can we state about complex autonomous systems; and then lastly autonomous spacecraft, CU Boulder is very well known for its work in space, and, you know, in the space community autonomy is becoming critical to the next generation of exploration systems that we’re deploying into the universe. So, some highlights. Beyond just the seed grants, I like to brag that we’ve had over 70 million dollars’ worth of funded projects within the autonomy area since the IRT was formed; by a rough calculation, that’s about 20% of the total awards that the college of engineering has gained over that time, with roughly 20% of the faculty. The point of saying that, though, is that, you know, 20% is a very significant amount of our funding. We’re an autonomy college, right? This is a topic that we really have deep expertise in across the college of engineering. The second bullet point here is just an example project that I really like to highlight because this project, I can say with very strong assurance, would not have happened without the IRT. Early on we had a workshop where the faculty who did work in this area just got together and introduced ourselves; you know, we literally went around the room and each said, you know, 30 seconds to a minute about who we were. At the time, Sean Humbert, who’s leading this particular project, and Chris Heckman were envisioning this idea of how to use multiple robots to explore underground environments,

and Christopher Williams was a new faculty member at the time, a research faculty member, and he stood up and introduced himself as an expert at building small radar systems, and quite literally in that meeting a light bulb went off in Chris Heckman’s mind: we need that for this underground exploration project. So Chris introduced himself to Christopher, six months later a proposal was submitted, and we got this very large grant, and we’re one of seven teams participating in this challenge. So those are the types of interactions that we’re really trying to foster through these IRT efforts, getting people together who would not have likely met, would not have likely created an opportunity, without, you know, the encouragement and support from the college. I list here next just four examples of early career faculty, young investigators, so these are just four of the faculty who are not tenured yet, who have these awards that are often viewed as sort of honors in addition to awards, all in the area of autonomy or related. And you can see kind of a breadth of agencies as well: the Human Frontier Science Program, the Air Force, DOD, NASA, the National Science Foundation, so a breadth of types of projects that we’re pursuing. And at the very bottom, just a couple of those larger-scale opportunities that we’ve been pursuing. The National Science Foundation has something called an Engineering Research Center, a large 20-plus million dollar effort across schools, very similar to the AI center that Angela described before in terms of scale and in the types of problems addressed.
We’re a part of an effort there, a part of a team, and then also we’re leading a team on creating a new industry consortium, the Center for Autonomous Air Mobility and Sensing. I’ll come back to that concept here in a minute, but it’s about interfacing and interacting with industry in a way that really lets us provide pre-competitive research in areas of interest and need to the industry members of a consortium. For the last couple of minutes here I’m just going to share a couple of examples of large projects along this theme of autonomous systems, where CU is either leading or a significant contributor to the project. I always love to do this and brag about my colleagues, but I always feel bad that I’m leaving others out, so this is just meant to be a couple of interesting and exciting applications; I cannot be comprehensive in, you know, the five minutes that’s remaining here. This is a project along that theme of targeted environmental observation. Dale Lawrence, Gijs de Boer, and Brian Argrow were the CU faculty on the engineering autonomy side. This was actually an international project: an icebreaker, the Polarstern, was iced in in the Arctic Circle for almost a full year, and it served as the base station for environmental sampling throughout the year to understand the complete cycle of the atmosphere and the conditions in the Arctic. Our team was there to deploy drones; that’s why you can see in that picture “Droneville,” which was a quarter-mile hike from the ship. There’s a little tent out there where the drones were stored, and every day, or every other day, they would do profiles with these drones to gather information about the atmosphere that you couldn’t gather just from the ground.
And so that’s the result of years of collaboration between Dale Lawrence and Brian Argrow especially, the engineering faculty, with help from Gijs de Boer, on how to do this kind of environmental sampling with these types of autonomous systems. Now, I’m focusing on those three, but I’ll just point out that CU had about 30 berths on this ship, and again, our earth science faculty also were strong contributors to this particular effort. So a theme here is not just colleagues within the college of engineering working together, but also reaching out to other colleges across the campus and bringing our engineering expertise to their domains. Second project, maybe look at the picture on the top left there. This is about having autonomy explain itself so that, you know, you can develop trust. You might call it a cheesy example, but we often think about the movie “Terminator.” For those of you who are science fiction fans, Arnold Schwarzenegger was the villain in the first movie, but then he came back in time a second time to be of assistance to the hero in the second movie, and there’s the scene where he tells the protagonist, you know, basically: trust me, if you want to live come with me. Well, imagine what it would take for you to trust an autonomous system if you’ve just interacted with it for the first time. So one of the hypotheses we have, my colleague Nisar Ahmed has, is that if an autonomous system is able to provide you an understanding of its competency, its own awareness of its limitations, that will really shape how you trust it. So it’s not important that autonomy is always right; it’s that autonomy will tell you when it’s going to be wrong, or can properly limit its own behaviors. And so that’s what we mean by this idea of competency awareness. So we’ve been focusing on ways of developing internal, you know, mechanisms for autonomy

to be reflective, and this is also not just reporting a probability of success, because that’s only as good as the probabilities that you’re starting with; maybe the system doesn’t have the right algorithm to process the question. So these all fold into this idea of competency awareness by an autonomous agent or autonomous system. Another project, sort of switching domains here into space: future NASA missions envision putting humans on other bodies, right, on the Moon or on Mars. Those are going to rely on autonomy, and so this is a large-scale project, actually being led out of UC Davis, but with a strong CU Boulder component, looking at, you know, basically smart habitats for space. Our colleagues Allie Anderson and Torin Clark, both in the aerospace department, are leading the Human Autonomy Research Thrust. So there are really two parts to it: one is how does the human engage with the autonomy, so again, similar problems to the AI IRT, that sort of natural language and gestures and other things, and then how does the autonomy model the human. Here it’s not in a learning context, but in the context of astronauts doing exploration tasks or science tasks somewhere else, and so the human autonomy team is really focusing again on trust in that context. The astronauts, the explorers, they have the opportunity of ignoring the autonomy, and so, how do you build an autonomous system that will be used? It comes down to, in many ways, trust in the system, and that’s a big part of CU’s contributions to this particular project. Another project, back to the targeted observation or environmental observation domain; this is something that I’ve been involved in and like to just brag about a little bit because it’s personal to me: we spent the 2019 storm season using drones to study tornadoes.
In the video that’s playing, that is a tornado, kind of in the background, poking down under the cloud there. Brian Argrow and I spent five weeks and drove 9,000 miles to fly our drones 51 times in 18 storms, where seven of those storms produced a tornado. So another interaction with atmospheric scientists to use autonomous aerospace technologies, in this case for those types of applications. And just a couple more examples I’ll quickly run through to get to some questions. Back in space, two colleagues, Jay McMahon, from aerospace, and Chris Keplinger, from mechanical engineering, are looking at some really neat ideas of taking these soft actuators and building robots that quite literally gobble up an asteroid and spit out material that a collector will then grab and perhaps bring back to Earth or build something else with in space. A really unique concept that brings together sort of space autonomy and unique actuator design from mechanical engineering. This is a current industry consortium that we’re a part of. I won’t say too much about this; CU is teaming up with Brigham Young, Virginia Tech, and Michigan to do, you know, pre-competitive research in the area of unmanned aircraft systems. I mentioned earlier we’re evolving into a new center for advanced air mobility and sensing; it’s a broader context. I’ll just note here that we’re always looking for industry partners to be a part of this consortium, and there are a lot of mechanisms as part of this consortium concept for CU Boulder and others to work with you or your respective institutions. Two more quick examples. This one is a 20 million dollar Nexus, as the National Science Foundation calls it, all about understanding how animals sense odor and turn it into actions.
Now that might not sound like an autonomous systems project, but if you can understand how nature senses odors and uses them to follow and find a food source, imagine putting that into an autonomous aircraft that can go find a radiation leak, or perhaps can be used to monitor fracking sites and detect very early on if there’s a leak. So we’ve had a lot of discussions with John and his team, sort of understanding the neuroscience and the computational fluid dynamics, and how that will inform novel ideas for exploration and robots for related tasks. And this very last project: a couple of newer faculty, almost within the span of about two months, had four different projects all get awarded, focusing on security and safety for autonomous cars and other types of cyber-physical systems. Again, these are maybe individually on the small scale, but you can see how that quickly grows into a large effort in these areas. With that, I was going to talk about looking forward, but I’ll just say we’re continuing to grow, continuing to go after these larger-scale efforts. I think it’s more helpful to transition to some questions at this point. Erin: Great! Thank you so much, Professor. We do have a question here; this one is from Richard, who’s asking: due to the significance of the war-fighting potential of autonomous systems and associated AI, how is the developmental work conducted

and IP generated at the university level being guarded and protected from open dissemination? Eric: Yeah, so I think there are two levels to that question. You know, how do we deal with the sensitivities that come from material or work that could easily get into the hands of an adversary and have negative consequences? Almost everything we do is open research, and we publish in the literature, so from that perspective very little needs to be guarded, simply because we’re still doing fundamental research, we’re still publishing in the open literature, and all of our funded projects sort of allow for that. Now, it is true that some of our projects and technologies do touch on these export control topics, and so we do engage with the export control office here on campus. We’ve had a few occasions where we’ve had to be careful about what we’re proposing or where the limits lie, and CU Boulder actually does have the capability to do export-restricted, export-sensitive type work, and that’s really managed on an individual basis. Nothing that I’ve described here, I was going to say, comes close to that; it probably comes close in your mind, but nothing that I’ve described really has those limitations in effect. But, I guess, just to answer your question, we are aware of them, but we steer towards the open research and the funding sources that allow us to not really worry about those directly. Erin: Great! We’re coming up close on time here, so I apologize we’re not able to get to all of your questions, but I do want to turn it back over to Massimo for just a moment to help close us out. Massimo: Thank you, Erin, and thank you, Angie and Eric! Those were great presentations, I thought, and I’m sure you all agree.
They show how a relatively small seed of funding can really create big things. It symbolizes the fact that there’s core expertise, and there’s also a way to convene people who have, perhaps, distinct interests but can converge on areas that are mutually exciting. So I want to again thank Erin for calling and organizing this meeting and bringing these people together, and the presenters, Angie and Eric, for providing great and compelling presentations of their efforts. Erin: Thank you! And Massimo, I will echo that. Thank you to you, Angela, and Eric. This was a really great look into some of our Interdisciplinary Research Themes and how CU is really being a leader in these areas. Thank you so much for sharing this with all of our alumni. Before we head off here, I’m just going to launch a poll quickly. We would love to get your feedback on this webinar, so just let us know on a scale of 1 to 10 how likely you are to recommend a future CU Engineering Alumni Webinar to a friend. Also be on the lookout for an email from our team tomorrow; we’ll send you some follow-up information on how you can stay connected with the college, and we’ll also include a longer survey so you can give us a little bit more feedback on today’s topic, and we’re also open to any suggestions you have for future events. Thank you again, Massimo, Angela, Eric! Thank you so much, and thank you to our participants for tuning in, we hope to see you all again soon!