Good evening and welcome. Thank you for attending today's Patten Foundation lecture featuring Professor Chad Jenkins. I'm Jessica Lester, Associate Vice Provost for Faculty and Academic Affairs, and I'm here to say just a few words about Mr. Patten and the legacy of his bequest. In 1931, William T. Patten of Indianapolis made a gift of about $150,000 to establish the Patten Foundation at his alma mater. At the time, it was the single largest gift ever pledged to the Bloomington campus. Under the terms of this gift, which became available upon the death of Mr. Patten in 1936, a visiting professor was chosen each year, originally to be in residence two months or more of the year. The purpose of this appointment was to provide members of the university, students, faculty, and staff, the privilege of personal acquaintance with the visiting professor. While those terms have changed over the years, as we now typically have two to three Patten Lecturers spend a week on campus each year, the spirit remains unchanged. Patten was a strong believer in public higher education, and he believed that Indiana University, still at the time a smallish regional institution, had a role in enriching the intellectual life of the local community and of the state. His gift to establish the Patten Foundation reflects his commitment to supporting that intellectual contribution. Patten was born in 1867 on a farm in Sullivan County, Indiana. He taught in the county schools before enrolling at Indiana University at the age of 21. He was a diligent student, a competitive orator, and an associate editor of the Indiana Student newspaper. He received his undergraduate degree in history in 1893. After graduation, Patten settled in Indianapolis, where he made a career in real estate and county politics. The Patten Lectures continue to reflect his commitment to Indiana education. 
A campus-wide faculty committee selects people of national and international reputation who are also willing and able to share their knowledge with a wide public audience. Lecturers stay on campus for one academic week, during which time they interact with faculty and students in classes and informal gatherings and deliver two public lectures. We are thrilled that Professor Chad Jenkins is here as a 2024-25 Patten Lecturer. Special thanks for enabling Professor Jenkins's visit goes to his nominator, Selma Šabanović of the Luddy School. The many departments and units supporting his visit include the Luddy School of Informatics, Computing, and Engineering; the School of Education; the College of Arts and Sciences; the IU Center of Excellence for Women & Technology; the Cognitive Science Program; the AI Digital Futures Program; the Office of the Vice President for Student Success; the Center for Innovative Teaching and Learning; Atkins LLC; 21st Century Scholars; and the Groups Scholars. Thanks also goes to Provost Rahul Shrivastav and the Provost's Office, the Cox Research Scholars Program, the Emeriti House, and the Patten Committee. To introduce our esteemed guest, I'm happy to present Dean Joanna Millunchick of the Luddy School. Thank you so much. I am so excited to introduce Professor Chad Jenkins, with whom I was a close colleague for a time at the University of Michigan. He is a professor of robotics at the University of Michigan. Professor Jenkins is known for groundbreaking work that brings robots closer to real-world usability, whether it's making them more adaptable, more usable, more interactive, or actually capable of learning from humans. And I don't know if you're going to be talking about what they'll be learning from humans, but one of the things that he taught robots to do is the cabbage patch. So I had to look that up. 
His research spans everything from machine learning to human-robot collaboration, and all of these have big implications for how robots can help us in everyday life. Professor Jenkins studied computer science and mathematics at Alma College, earned his master's degree in computer science at Georgia Tech, and his PhD in computer science at the University of Southern California. He started his academic career at Brown, but moved to the University of Michigan in 2015. His contributions have earned him international acclaim. He is a National Geographic Emerging Explorer, a fellow of the American Association for the Advancement of Science, and a fellow of the Association for the Advancement of Artificial Intelligence. But besides his technical work, he's a dedicated mentor and a driving force for greater diversity in STEM, making sure that the field is welcoming for everyone, and he was really instrumental in defining the foundational principles of the nascent discipline, yes, discipline of robotics. Today, he'll share with you the story of what it takes to define a new and truly 21st-century discipline. Welcome, Professor Jenkins. That was so wonderful. Thank you, Dean Millunchick. It's so great to be here amongst so many great colleagues, friends, and just great stewards of our field and our nascent discipline of robotics. I have so many slides because there's so much I want to share with you. I'll try to make it comprehensible, because I've been in robotics for a long time, about 25 years. And it's a huge pleasure and a privilege to be a part of our field. I did not think I was going to be here when I was in high school, not getting good grades and playing Nintendo. But in a shameless attempt to grab your attention, I'm going to start off with cat pictures. So this is my cat. He's a good cat, and that's the cat with my Atari. The Atari changed my life because it was really the first time I thought about computing. 
I didn't know what an electrical engineer was at the time, but I was just thrilled by this machine. It shaped my career path. But I'll just show a bunch of cat pictures, you know. So that's our cat; he's around. That's the cat with me on game day, so the cat watches the games with me with the big inflatable M over there. But most importantly, and this is how much computing and video games shaped my career, my cat's name is Catari 2600, which is short for Cat Atari 2600. If you're an old-school person like me and you're used to Atari cartridges, that's my guy right there. Although with my kids these days, you know, they don't play Atari, they play Nintendo. So it's more like Catario Kart 8 Deluxe. I don't know if we have any Nintendo Switch players here, but that's him. So really, that pathway from the Atari all the way to the Nintendo Switch is my career. I've been able to have a very successful, career-long participation in robotics and AI and computer science, and we want the next generations to be able to do that as well. And so one thing that I think we have a duty to do in higher education is to not just think about our undergraduate and graduate curriculum, but how it fits with industry and successful long-term careers, and also with K through 12. I think we're going to have to have a more seamless approach. And I'm not supposed to say "leaky pipeline" anymore; the pipeline analogy is gone. We're really talking about pathways: people will enter and exit our field, and how do we have different ways for people to be a part of it? And that's really how we've thought about the undergraduate major that we've created in robotics at Michigan, as well as how we reach out to other schools across the nation and build a healthy ecosystem. 
And I think this is in line with William T. Patten, the quote that I saw about education being so important: our system of higher education is one of the United States' most outstanding assets, and it has enabled incredible innovation and prosperity. And I think that open system is really important. It's easy to say this as a quote, but what does it actually mean? I think it really comes down to people and ideas. What we produce in higher education, more than anything, is educated people who have a sense of citizenship and stewardship for our society. And those people are not only educated, but they also know how to practice the ideas of an academic discipline and extend them in new ways. And so, people and ideas: we don't produce products and profit. We're not elected; we don't necessarily win votes. Our job is really the development of people and those new ideas. When we think about it, what we really do is help people, help students, help the younger generations learn from the past, be a part of something today, and then build new ideas for the future. But when we think about that, I would make the argument that in our modern higher education system, it's really the classroom that is the catalyst. Oftentimes at a big R1 university, we think about research and all my great ideas, but we earn the trust and confidence of students in the courses. If I can show somebody how to do something, I give them new power; they're willing to follow me and take my mentorship for more open-ended things, where we start to explore new ideas together. But the classroom is something we can't give up on, and how do we create a better classroom across the country? And so, with that, thinking of the classroom as a catalyst in my path, I'm going to ask everybody here a question, just so I can make it a little interactive. This is my 21st year as faculty. I hate to say that, but it's true. 
What question do you think I get the most when I'm on campus? What do you think people ask me? And this is ungraded, so you can say whatever you want. Yeah. Why don't I go to the private sector? You know, I don't get that nearly as much as I would like, but it's on the list. It's on the list. Anybody else want to take a guess? Yeah, in the back. When did I graduate? Now, that is in the top five. We're doing Family Feud here; you'd hit the board. All right, I'll take one more. Anybody else? What's that? How do I get an A? You know, people ask that all the time, and it's especially the A-minus students. The ones that really want to know. But the actual answer, and this is very apropos for Indiana right now, is: am I with the football team? It used to be, am I on the football team? Now, if I don't shave, you see the gray hairs, and it's more like, am I the parent of a football player, or a coach, or something like that. And the answer is no, I'm not with the football team. But that actually isn't entirely true: I was with a football team once. In 1988, I played for one year when I was in high school in Louisiana. And at that time, I did not think I would be here. I was just trying to get through classes; it didn't really seem possible. I had massive self-confidence issues, and I'm grateful for the mentors who helped me along the way, who helped me, as a young scholar back when I was at Brown, basically win every big junior faculty award you could win. And I always say that awards are great, but they're empty unless you have a sense of purpose. But I'm grateful for this opportunity. And we try to pay it back by doing lots of tours, lots of outreach. And so we have lots of people come through our lab. And the top question they ask me is whether what I do is a real job. 
And I say, yes, you can do this too if you want to. And that leads to a lot of other questions. Like, will robots take my job? Are you making R2-D2? Increasingly, students will ask whether they can work in my lab, and I try to provide as much opportunity as possible. This is just a small sampling of the questions I get. But in this talk, I want to focus on three of them: Will robots take over the world? Where is my robot? And can your robot bring me a drink? I'll start with: will robots take over the world? And the answer is no, robots aren't nearly that smart. Or at least I thought so until ChatGPT came out, and we had large language models, and now I've got to rethink a little bit. But the great thing is, no matter how good the AI technology gets, we can decide what robots will do. Will they help humanity? Will they just take jobs? Will it just be job replacement? We have the power to control these systems, and we have to remember that. That's our responsibility as a society and a discipline. And I am grateful to Selma for this. So this is Selma. People may not know this, but we were editors of a journal together for about eight years. We started from nothing, and that's our impact factor: it went up, and then we were burned out in our last year, which was 2023, so it went down a little bit. But for everything she's doing on campus: we spent a lot of time on that journal, really trying to promote a more humane, let's say enlightened, approach to reviewing and our scholarly enterprise, one that thinks not just about the work that we're doing but also its larger impact. And so I want to thank Selma for that. Thank you for a good run together. This is back in 2016 when we had just started. We were fresh-faced and idealistic, and I didn't have gray hair at the time. I didn't have any hair. 
But the thing I'm most noted for is a TED talk that I gave with Henry Evans, and it's gotten a lot of views. People found it very inspiring, and it gives a sense of what we could do with robotic technology. Henry lives in Palo Alto, California, and when he was about 40 he suffered a stroke-like attack that left him quadriplegic and without the ability to speak. And so he relies on other people for just the basic activities of daily living. My colleague Charlie Kemp at Georgia Tech picked up some of the things we were doing with making network protocols for robots to make them web accessible, and started to make interesting interfaces for Henry to do basic things, like consume a meal on his own or scratch an itch on his own. Just imagine, in your daily life, the times you just scratch an itch. You don't think about it, but what if you didn't have that ability? If you required somebody else to do that, how do you restore that capability to yourself? This next part, Charlie showed this, and it was super impactful. That was the first time Henry was able to shave himself in 10 years. Being able to use robot technology to do those types of things, to restore dignity to people, is incredible. And I happened to be at Willow Garage, which was a startup company. I went there and helped develop this thing called Robot Web Tools. Really, this was about how we can make TCP/IP-like interoperability for robotics. When you open your web browser, you don't think about what the network stack is doing, how your software runs, or whether it's on a Mac or a PC. It just runs. It just works everywhere. That's what we're trying to build for robots, so you can just start to use them without having to think about all the low-level technical details that I worry about as a computer scientist. 
And the core of it is rosbridge, which is a network protocol that lets things interoperate. The way TCP/IP or HTTP works for your web browser and your internet traffic, that's what we're doing for robots. And so while I was out there, my colleague Kaijen said Henry wanted to meet me and wanted to fly drones. And I was like, all right, maybe I could do that. So I went by the Palo Alto mall and I bought a drone. And I hacked up a simple little web interface to get the front-facing camera feed from the drone. And I made big buttons, because of the way Henry uses a computer: he has a little tracker on his glasses so he can point, and he has movement of his thumb so he can click a button. So I made those buttons nice and big so he could click on them easily. And then I took that interface up to his house in the hills of Palo Alto. And this was his first flight with the drone here. And it was incredible. We don't have sound, but you can just hear this gasp of joy from him being able to go around and basically use that drone to explore his yard in ways that he hadn't before. He was able to fly the drone over to his little garden; he'd never been able to go there because the wheelchair doesn't go in, but he was able to navigate the drone over there. When they moved in, they had solar panels on the roof that he had never seen. He went up and saw those. He was landing on the basketball hoop. He was doing all kinds of things. And this just showed me the power of what robots could do with the right application and the right motivation, and I was super excited by it. And since then we've seen a large body of applications grow from this. My colleagues have taken this, and it has a life of its own, and so we're starting to see this ecosystem of robots growing that can just work, plug and play, with each other. 
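To give a concrete flavor of what "a TCP/IP for robots" looks like in practice, here is a minimal sketch of the kind of JSON messages the rosbridge protocol exchanges over a WebSocket. This is an illustration of the message shapes only, not a full client; the drone topic names are made up for this example.

```python
import json

def advertise(topic, msg_type):
    # Declare intent to publish on a topic (rosbridge "advertise" op).
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})

def subscribe(topic, msg_type):
    # Ask the bridge to forward messages from a topic (rosbridge "subscribe" op).
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def publish(topic, msg):
    # Send a message payload as plain JSON (rosbridge "publish" op).
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Hypothetical example: a big-button web UI could command a drone by
# emitting a velocity message like this over the WebSocket connection.
cmd = publish("/drone/cmd_vel",
              {"linear": {"x": 0.5, "y": 0.0, "z": 0.0},
               "angular": {"x": 0.0, "y": 0.0, "z": 0.1}})
print(cmd)
```

The point of the design is that any client that can speak JSON over a WebSocket, a browser, a phone, an assistive interface like Henry's, can talk to the robot without knowing anything about the robot's native middleware.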
If you're a computer scientist, everything is available off of Robot Web Tools, and we got lots of stars and forks. If you're a computer scientist, you know what I'm talking about. But that's basically saying that people in the community are using what we're doing and it's making an impact. And when I tell students about that, they say that sounds awesome and cool and super inspiring. And then they ask the question that I'm usually anticipating, which is: so where's my robot? We have all this amazing technology. Where is my robot, and why isn't it doing all kinds of things for us? And I would say that your robot is actually here right now. We have drones that are doing things. We have autonomous cars, robots in manufacturing. There are robots doing so many things right now. It's just that they don't meet our expectations just yet. When people think of robots, what they actually mean are mobile manipulation robots, like Rosie the Robot on The Jetsons. Mobile manipulation means robots that can move in human spaces and have the dexterity to work with objects in our normal human environment. These are robots like the Fetch robot here, which is the first robot I bought when I came to the University of Michigan, and the Willow Garage PR2. That's really what you mean: robots that can work in real human spaces. But the question is, how much do these robots cost, right? If you wanted to have these robots and have them do something for you, how much would it cost you? And because I love The Price Is Right, that's me spinning the wheel. I wasn't actually on the real Price Is Right; it was Price Is Right Live that came to Ann Arbor, and I paid a lot of money to get a spin at the end of it. I need four guesses about how much these robots cost, and it's not $1. So, four guesses. 
How much do people think these robots cost? Yeah, right back there. $20,000. All right. $100,000. All right, I like that one. Yeah, $75,000. All right, I need one more, not one dollar. $125,000. This is a very astute audience, I have to say. Well, the way I would put it: if I plot it over time, the first robot that I bought was in 2009, when I was back at Brown, and that was $400,000. More than my house. When I got to Michigan, it was $100,000 in 2015. If I compare that to the first humanoid robot that I started working with, the NASA Robonaut at Johnson Space Center, that was roughly about a million and a half dollars. And one thing I believe in for math education, and Joanna will back me up on this: linear algebra, the most important math course. See, I got the thumbs up. One thing that you get from linear algebra is the ability to fit a polynomial to data. And what I did in 2015 was extrapolate and say that in 2020, you were going to be able to get a real, serious mobile manipulation robot for about $40,000. And that roughly came true. Hello Robot sells a small mobile manipulator with a little gripper for about $17,000 to $20,000. So if you said $20,000, you're right. Boston Dynamics, in addition to their humanoid robot, sells a warehouse robot that's much more 
expensive, but it's meant for supply chain and picking things up. But the thing that I'm most excited about is humanoid robots, like the Agility Robotics Digit that you see at the top. We have a couple of those at Michigan; that was about $250,000. And so the point is that all the guesses you made are right in some form; there is a platform at each of those price points. And really, with economies of scale, we're starting to see more and more of these robots being built. They built, I think, about five of the NASA Robonauts, 60 of the Willow Garage PR2s, hundreds of the Fetch robots, and the next generation you're going to see probably being made in the thousands. And with that, the cost is going to come down, the same way the cost has come down with our computers, which used to take up this entire room, and now I carry in my pocket something that's more powerful, at a fraction of the cost. But that's only the hardware. If we think about the software: the NASA Robonaut, back in those days, was only a remote-control device. I had to put on a head-mounted display, and I had motion capture all over me, to control the robot and get it to do the right things. But with the advent of all the great sensors that we have, and essentially the same capability that we use for autonomous driving, our capability could be described as "put that there": something gets picked up at one location, transported, and dropped off at another location, just the same way you would have an autonomous car go from one place to another. But with large language models and systems that can reason, we're going to be able to task robots, to delegate things to them, where we could say: do this task for me. And when we say "do this task for me," what we're really asking is, can we make the physical world programmable? 
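The price extrapolation from a moment ago can be sketched as a quick fit. A minimal version, under the assumption that prices fall roughly geometrically (so we fit a degree-1 polynomial to log price rather than raw dollars, which would go negative), using only the two data points from the talk:

```python
import numpy as np

# (year, price in USD) for the two mobile manipulators mentioned in the talk
years = np.array([2009.0, 2015.0])
prices = np.array([400_000.0, 100_000.0])

# Fit a degree-1 polynomial to log-price vs. year, then extrapolate to 2020.
slope, intercept = np.polyfit(years, np.log(prices), deg=1)
pred_2020 = float(np.exp(slope * 2020 + intercept))
print(f"extrapolated 2020 price: ${pred_2020:,.0f}")
```

This toy fit lands at roughly $30,000, in the same ballpark as the $40,000 prediction in the talk; the exact number depends entirely on the model chosen, which is the point of calling it extrapolation.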
The same way that we are able to automate information through digital computers, will we be able to do the same thing with the physical world? That really is our challenge for robots. There's still a big issue. Some people think Mars and interplanetary exploration is the frontier. I would say the kitchen is actually the frontier. This is actually clean compared to the way my kids keep our house. If you think about the variety of different objects, their form and their function, and how you use them in sequence to accomplish something like cooking or cleaning or repair, that actually is still a huge frontier that we have yet to address. But our goal is really to have any robot perform any reasonable task in an arbitrary environment. If we can do that, if we can have software and artificial intelligence that can do that, that's the game changer. And so we think about this in terms of interactive task learning; that's the way we've characterized it. Can you get AI systems that perform well, that scale to new tasks and new environments, and that are actually usable by people? There are a number of different approaches in AI that we've used, and there's still a real question of what paradigm we should use to program robots. Oftentimes people think the ultimate answer is going to be transformers. When I say transformers, I mean large language models and diffusion systems that can generate images. But I don't think those transformers are the end-all-be-all. I would cite the case of Robert Williams in Detroit. There was a burglary, and there were security cameras. The Detroit police took that grainy footage, ran it through a neural network, and Robert Williams came up as the suspect. They arrested him purely on that and detained him for 30 hours. But he wasn't the guilty party. 
He was somewhere else entirely. And so, how are these technologies going to affect your civil rights? Even if the neural network they use for surveillance is right 99% of the time, what happens the 1% of the time that it's not? Can you characterize that? Do you know when it's going to happen? Can you trust any system whose failure modes you don't know? That's really what we're dealing with with modern transformer-based neural networks. But there are also issues about how this is going to affect the workplace, and the bias that could be in these systems. And also, when you look at the development teams, as higher education, are we populating these development teams with the diversity needed to represent the perspectives for leading artificial intelligence and its deployment around the world? Those are big questions that we are in the best position to address. But those are all high-minded arguments. Let me boil it down to something more simple. What is this? Does anybody know what that is? What's that? A salmon? A salmon fillet, yeah. And what's that? Fish in water. I will say this was generated by AI. What do you think the prompt was? "Salmon bridge," okay, that's one. Any other guesses? "A salmon swimming down the river." So think about this. I mean, it's not wrong. I'm glad a student showed this to me. It's not wrong, but it's not right. And so we have to be careful with these AI systems. This is just an insight into some of the larger issues we have to address. And so my group has been working on systems that are more robust, that don't just trust neural networks. We've been using this for robots grasping objects in cluttered environments. 
In this case, we're recognizing each of these objects and grasping it; here the robot is sorting items that should go in the laundry room from items that should go elsewhere in the house. We recognize the object and its type, as well as its six-degree-of-freedom pose, and then we can grasp the object and do other, more purposeful things with it. And we know that errors are going to occur, and we want to be robust. One thing you'll note is that the Comet got put with the items that should go in the laundry room, and that's not right; that goes in the kitchen. And that's not the robot's fault. That's because my students don't do laundry or dishes. They're still working on that. But what's underlying these systems is that we use neural networks on the front end to give us a good initial guess of what we're expecting to see. Then we use more robust AI methods in the background, essentially Bayesian inference, in this case a particle filter, over those guesses to refine them, so we have something that's explainable and diagnosable, something we can use to give an explanation of what the robot is actually seeing; and if there's an error, we can correct it and recover from it. That's the essence of how these methods work. And so we've been looking at what the next wave of artificial intelligence is going to be. I would make the argument that, the same way large language models and facial recognition came out of nowhere and now are everywhere, that will be humanoid robots in the next five to 20 years. And I think these robots are incredible. I'm not going to talk about learning from demonstration because I only have so much time, but the capability is there: you have robots that can stand up on their own and do enough to pick up objects. 
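The neural-net-front-end-plus-particle-filter pattern described above can be sketched in a toy one-dimensional version: the network's coarse guess seeds the particles, and Bayesian updates over noisy observations refine the estimate into something whose uncertainty you can inspect. All the numbers and noise models here are invented for illustration, and a real system would track a full 6-DoF pose rather than a scalar.

```python
import numpy as np

rng = np.random.default_rng(0)

true_pose = 2.0   # unknown object pose (1-D stand-in for a 6-DoF pose)
nn_guess = 2.6    # coarse front-end guess from a neural network

# Seed particles around the network's guess rather than uniformly at random.
particles = rng.normal(loc=nn_guess, scale=0.5, size=5000)

obs_noise = 0.1
for _ in range(20):
    z = true_pose + rng.normal(0.0, obs_noise)  # noisy sensor observation
    # Bayesian update: weight particles by observation likelihood.
    weights = np.exp(-0.5 * ((particles - z) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample to avoid degeneracy, then jitter (diffusion step).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.02, size=len(particles))

estimate = particles.mean()
spread = particles.std()  # inspectable uncertainty -> diagnosable failures
print(f"estimate={estimate:.3f}, spread={spread:.3f}")
```

The design point is that the spread of the particles is an explicit, inspectable quantity: when it stays large, the system knows its own answer is unreliable, which is exactly the failure-mode awareness a raw neural network lacks.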
So this is a robot that's doing a grocery-packing task, a bin-packing task. And it can do it okay. It's not perfect just yet, because it's both trying to stand up and pack the bin. But we can also stock shelves, and you'll notice it kind of bumps into the shelf. It's not perfect either. But it's getting there, and we're working on the hands and dexterity of these types of systems. It went from 10 years ago, when these systems would just fall down all over the place, to now, when they robustly stand up. One of these robots could be standing here next to me, stay up for my entire talk, and I would not be worried about it. And that's an amazing accomplishment. That's a robot playing cornhole, by the way; I don't have the cornhole video, but it can play cornhole. But materials handling will be, I think, one of the big things you see humanoid robots do, among other things. That's something that's readily available, so you can get a sense of it. My group is the Laboratory for Progress: the Laboratory for Perceptive Robotics and Grounded Reasoning Systems. We work on many different things toward the goal of being able to teach robots tasks from demonstration. These are just some highlights of the things we've been doing. We can build champagne towers with our robots; transparent objects, thank you, are really a hard task. That clamp that's right there is not actually there. 
We're using neural radiance fields, or Gaussian splats, to render an object there that we can compose into the scene. And it won't be just one robot; it'll be multiple robots that you're working with, so how do we do coordination for these robots? We've been working on a number of different problems along those lines that relate to advancing robot technology. So that gets to my next question: can your robot bring me a drink? Sure, no problem, we do that all the time. The most prevalent robot demo is getting a drink from somewhere. You can see this from the old Willow Garage; they liked to get beer for people, but soft drinks too. And when we show these demos to people, they're like, oh, wow, that's really cool. At the time when I originally created these slides, there was a show called Silicon Valley, and people would say, have you seen that show? This kind of reminds me of that show on HBO. And in that fictional Silicon Valley environment, they kept saying that whatever they were working on would make the world a better place. You could make the world a better place through constructing elegant hierarchies for maximal code reuse and extensibility. That is making the world a better place. Or making the world a better place through scalable, fault-tolerant, distributed databases with ACID transactions. Anything can make the world a better place. But the one that hit closest to home for me was the fictional CEO of this company called Hooli, not talking about any real company there; if you've been to Silicon Valley, you may know what I'm talking about. This CEO had a little bit of an issue with ethics. He said that Hooli is about making the world a better place through minimal message-oriented transport layers. And that really hit close to home, because that is exactly what I'm talking about with rosbridge and a TCP/IP for robots. 
So that could easily be me in that position, just saying that I'm making the world a better place with whatever I'm doing. Will my work actually make the world a better place? It's something I wrestle with all the time. And it gets down to what drives our behavior as scientists. I am driven by innovation and excellence. I have loved what I'm doing ever since I got my Atari 2600, looking at the ways I could create new technology, new computing technology, new artificial intelligence. But I have to balance that against my need for equity and opportunity, as somebody whose parents grew up and were educated during segregation, and who is a beneficiary of the civil rights movement. And I have to balance that against the economic incentive. In an ideal world, we can balance equity, opportunity, innovation, excellence, and economic sustainability. But oftentimes computing and robotics focuses on innovation and economics, and equity and opportunity can be a challenge to bring together. And that is where we have a disparate impact in our field. That disparate impact is really something we can see in the numbers. I could throw lots of statistics at you, but look at the overall population. Disparate treatment is when we say, I'm going to pick this person over that person because of some demographic or some category. Disparate impact is the overall effect of the system. And these numbers are generous in terms of participation in our field.
The reality is that we're not representative in robotics and computing. When I go to any sort of meeting, there's oftentimes very low minority participation. One of my highest was the National Robotics Initiative at the National Science Foundation: I was at an NSF meeting, and I was one of eight minorities out of 257 people overall. And that was a high point. What's really driving this is computer science. There's high demand for computer science; there are news stories from UT Austin, but this one is from the Michigan Daily. And really what's driving it is salaries. If you look at the report, our enrollments in computer science at Michigan are, I think, a tenth of the overall university, and over a third of the College of Engineering. And really what's driving that is salaries. It was kind of shocking, when we were coming up with the robotics major, to learn that the starting salary for computer scientists is over $100,000, and that's pretty huge. But what that does is create an interesting competitive environment where, when we look at the overall numbers, we just see a swell of students trying to get into computer science. At the time, in 2020, we had over 2,500 majors; it's now over 3,300 majors in computer science. But for the people that look like me, I think we're at 50 now, and it was 39 at the time. And so that gives me some pause.
But my colleague Talitha Washington at Clark Atlanta says we can really up that: there's a potential in the next 10 years for us to have 20,000 more minorities in the engineering professions. And I would argue that the way to do that is through the classroom. The classroom is the catalyst. So this is Joanna and me. Joanna was our associate dean for undergraduate education, and I said, we can put linear algebra first, and we can make a robotics major. And she said, slow down. This is the happy time after that; we had a lot of interesting debates, so we're feeling good here. But I said we could think about this all together, and an integral part of it is creating an undergraduate major that has both equity and excellence at its core. And we were able to do that. We were able to define the discipline of robotics as the study of embodied intelligence for machines that sense, reason, and act, and, this is important, work with people to improve quality of life and productivity equitably across society. And this is the great opportunity that I've had. We launched this major in fall 2022. We were the first big Research 1 institution to have a robotics major. And this was to define the discipline of robotics, because an undergraduate major is really the intellectual organization of your discipline. If you're a biologist, you're trained to think about the world through biology. The same goes for a chemist, physicist, sociologist, or French literature major, but it doesn't constrict you. And we can do this by defining the discipline of robotics for equity and excellence from the beginning and thinking about meeting the needs of society. This is the thing that Joanna and I went over many times. You could look at it, but I'll give you the highlights. One important thing is: work with real robots from day one.
Students want purpose. They want to know what they're doing. It's not enough to just sit in class and cover material; they want to see it and experience it. So we put linear algebra before calculus. We teach programming with real robots so it becomes real, and they learn how to build real systems. That's really important. Also, we didn't relegate professionalism and ethics and working with people to the end of the major; we put it up front. In their sophomore year, they have to start working with people and they have to understand the ethical considerations. But the core of the discipline is really our 300-level courses. And what is that core? I would make an argument that it comes from the Brooksian notion, from Rod Brooks, who created the iRobot Corporation, which makes robot vacuums. He defined embodied intelligence as machines that can sense, reason, and act on the world. If you think about what that really means, we have a closed loop. There's some system that's the actual embodiment in the world. That embodiment senses the world. That sensing is turned into understanding through perception. From your perception of the world, you make decisions. Then you turn those decisions into motor forces, how you act, and you carry that out, and you do this over and over and over, all while working with people and communicating with people. If I take this, add some math to it, because we've got to have a bunch of math, break it into different categories, and put robotics course numbers on them, then that turns into the core of our curriculum. We then have to think about what we have to build in: dynamics from mechanical engineering, circuits from electrical engineering, and linear algebra, the most important math class that we have, including probability as well.
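That sense-perceive-decide-act loop can be written down in a few lines. Here is a toy sketch, a 1-D robot driving toward a goal; the proportional controller and its gain are assumptions for illustration, not the course's actual material:

```python
# Toy sense-perceive-decide-act loop: a 1-D robot driving toward a goal.
def run_loop(start, goal, steps=50):
    position = start
    for _ in range(steps):
        reading = position            # sense: measure the world (noise-free here)
        error = goal - reading        # perceive: interpret the reading
        command = 0.5 * error         # decide: a simple proportional controller
        position += command           # act: apply the motor command to the world
    return position

final = run_loop(start=0.0, goal=10.0)
print(round(final, 3))  # converges to the goal
```

Each pass through the loop closes the circuit the lecture describes: the world is measured, the measurement is interpreted, a decision becomes a motor command, and the command changes the world that will be sensed next.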
And that's what turns into our major for robotics. It has worked out pretty well, because we went from zero majors at launch in August 2022 to over 200 majors now. Those are me with the first two graduates, but our number of graduates is up to 17. We're on track to being, I think, the third or fourth biggest major in the College of Engineering. So the demand is there. If you want to see what we've done, we've published it all on arXiv, and you can see that there's a great story there. But we do have a number of challenges and opportunities. The enrollment from student demand to work with real robots is huge; we need lots of robots. Students can feel defeated by math, so I think we have to rethink higher education. And we also have to think about sustainable pathways where the classroom is the catalyst. We've been looking at this through a low-cost platform that we've used to teach, I think it's now over 1,500 students, to do autonomous navigation. We've created a computational linear algebra class that is really taking off and engaging a lot of students. And we've also created distributed teaching collaboratives, which are an open-source approach to working across universities. I want to highlight the distributed teaching collaborative. I will not get through all of this, so I'm going to speed through a lot of slides. Really, this arises from the mentoring that happens in our community. So this is Jasmine. I gave a talk at Berea College, and she's really at the beginnings of a lot of this. We built a relationship at an academic careers workshop. She helped me brainstorm what this looks like in 2020, she sent students to work with me in 2023, and I'm going to do an invited talk with her.
But this really just comes from the mentoring and the one-on-one relationships we've had. We've built out a large partnership that covers Morehouse College, Howard University, Florida A&M, and Berea College in Kentucky. I had that picture right there with masks on because I didn't know if we were going to beat Michigan State, but I changed it to show a football game instead, because we did beat Michigan State. And all of it started from a relationship I made when I went to the Conference for African American Researchers in the Mathematical Sciences. Bill Massey at Princeton introduced me to Todd Shurn at Howard. He said, Todd's coming around, why don't you have a chat with him? And Todd comes to see me at Michigan and says, my students at Howard want to build a Black robot. They're working with Carnegie Mellon on this robot receptionist, but it seems kind of sterile, so they want to build a robot that feels Black. And I said, all right, sure, but we need kinematics, planning, linear algebra, differential equations. And I love Todd, because he keeps it 100, 100% real, 100% of the time. He said, no, thanks. That's too much math. That's not really what we want. And so I said, we can improve math education, and we can do it by starting to teach linear algebra before calculus. So when people came from Morehouse to work with us, I whispered to Duane Joseph right there. I said, Duane, maybe we could teach a linear algebra class together. And we can do it as what we call a distributed teaching collaborative, where we create the materials for the course together, but you offer it on your own, with some support. I think that's worked out pretty well. We had pretty good numbers, and I'll just flash those numbers over time; it's gotten even better.
So the class teaches about 200 students per semester at the University of Michigan, and we teach about 100 students per year at HBCUs, and those students at HBCUs then come and work with us for research experiences. I'll give one story that's really important. This is Jamie. Jamie teaches our linear algebra class, and I met her when she was a PhD student at Michigan Tech. I said, hey, great to meet you, let's keep up a mentoring relationship; I just want to know who you are and stay in touch. Jamie was on her way to being a star in computer science at Michigan Tech. But, as happens to a lot of people along their pathway, she ended up leaving the program for one reason or another, I don't know. She terminated with the master's and started teaching high school. And I said, all right, but let's just stay in touch. And when we had an opportunity to bring her in, when we needed instructional faculty, I was able to hire her. She came and started to teach our linear algebra class and really helped it grow into a juggernaut that's taking over campus. But what's more important is that she's able to help students come from HBCUs, historically Black colleges and universities, and get into graduate school. That is something we can do to keep people who may be in our field and venture off the path and then come back, and who also help others as they come along. And that's something that we'd like to do. How am I doing on time? Am I out? Okay. I could say much more, I have so many slides, but I'm just going to offer a couple of thoughts real quick. Basically, we're all sort of anxious about admissions, hiring, how we onboard people into our programs.
But the main thing is: if you take a class from me, then I can build a mentoring relationship through something very structured that will lead to open-ended collaborations. That's the one thing that you get from me at an R1 institution. If I can grow that through teaching courses together, then we can open up the pathways for kids at smaller schools, historically Black colleges, and minority-serving institutions. The classroom is the catalyst. The real issue is that if you're at an HBCU or a smaller teaching school, your teaching load is three courses, to be generous; sometimes four, sometimes five. My teaching load is one course per semester, and I get to be the expert in that subject. But if we can work together, and I can lend my expertise, and we can deliver the course together, and they can help me be a better teacher, because they're often better teachers, then we can build a win-win. In addition, I get to know their students and I get to build a pipeline from a new place into Michigan, and I think that's really important. We also decompose course development from course delivery, and that is the mindset of an open-source project. I'm going to skip through those slides; there's a lot more. But if you want to know more, there's a great story that just got released by Michigan Engineering that talks about our effort in more depth. We create low-cost platforms. We have a lot of engagement. We're grateful for support from Amazon and Toyota and the Sloan Foundation for making this happen. There's a lot of potential here. And really, at the end of the day, it's about our robotics pathways: how can we guide people to have successful, career-long participation in the high-tech fields? And I think that's really amazing.
I think it fits with the spirit and the ideals of the Patten Foundation and the lecture. It really comes down to people and ideas: using the classroom as a catalyst for helping people do more. That helps us build the mentorship and community to explore new ideas together. And I'm thankful for the people who have helped me do that. So that's a picture of me with Selma and Joanna; that's the best picture I have with Katie. And I just want to acknowledge Admiral Adam Robinson, an alum of Indiana, who's been a great mentor for me and just a fabulous guy. I've been able to serve on a board of trustees with him, and he's been great. My cat would tell you to go vote, so I'm just going to show a gratuitous cat picture. Really, if we can do this, we can start to bridge this disparate impact and come back to a world where we can have innovation and equity and economic incentive, where robotics as a true discipline can do that. So with that, I'm going to skip out and say thank you again for your time and attention, and for all those slides I had to skip over, because I love to talk. Thank you again.

Right, thank you for that question. I think that's one of the big questions we have to face in higher education: how do we think about a broad education for our students? I'm a graduate of Alma College. How many people know Alma College, or went there? There we go, Scots, I love that. I am grateful for getting a liberal arts education. I took Wisdom of the Far East, poetry, sociology, and I understand the value of a liberal arts education. My kids will probably have liberal arts educations. Michigan Engineering does not believe in a liberal arts education, right? You come because you get tech training, and the person who told me that was Joanna Millunchick right there.
So the issue that we have is that in the recent past, you saw a massive influx of students into computer science and engineering, because that's where they see the big payday. Whether they wanted to actually do it or not, they did it, and that's where the foot traffic is coming from for our enrollments. But what you're seeing with artificial intelligence is that it's being taken out of our hands as computer scientists. The problems are increasingly not computer science problems or AI problems. The problems are: how do you collect your data? How do you deploy it so you don't have bias? How is it going to affect jobs? As a roboticist, I have to be an amateur economist, because people are going to ask me whether robots are going to take their jobs. And I need to have a reasonable answer, but I don't know, because I'm not really trained to think about that. That's why we need the whole of the academy to help us out. We need the humanists who can give us a sense of inspiration, a sense of perspective on this, especially through science fiction. We need economists and sociologists who can help us understand what the future of work is and what it could be. I don't really have great answers for you, because Michigan Engineering is what Michigan Engineering will be. But I think the question you ask is why you're seeing a lot of universities stand up generative AI committees that bring everybody together from across the university. Yeah, follow up. I have five slides on that, but I have too many slides already, so I can't show them. So, there are a couple of things that I believe about K-12 education, as an amateur K-12 education person. One is, I believe in long division.
I think students should do long division. But once you get to high school, think about your sequence: algebra 1, geometry, algebra 2, then you take this journey into calculus. You do pre-calc and calc 1, and then you get to college: calc 1, calc 2, calc 3, differential equations, and only then do you get a chance to come back to linear algebra. But linear algebra is needed to start doing artificial intelligence at the level that we do it in computer science. So really we have this big gap. If we move linear algebra ahead, to a semester-one experience, then we can open up the rest of artificial intelligence throughout the curriculum. But what's more important is that we are defeating students through math. They're looking at it and saying, why am I doing all this pencil-and-paper stuff? What good is this? What's meaningful about it? And that's why we in higher education have to change, to make math more relevant and accessible. We still need those theoretical classes, but they may not be the thing that we have up front. And that's one of the things I'm very grateful to Joanna for, for letting us do this experiment. It's turning out pretty well, and I think it's just the beginning of how we rethink math, both in higher education and K through 12. One other point I want to make: when it comes to coding, people say they want coding. Students should be able to code a computer, but coding is just math expressed as a computer program. If I could wave my magic wand, we would combine coding with math to do something relevant, like art, expression, and storytelling, and that would be, I think, a lot more meaningful to our kids. But that's just one person's thought. Yeah, right. I agree with that. So I think what you should actually have is: algebra 3 should be what we teach as linear algebra, solving large systems of equations,
doing fun stuff like making video games, 3-D video games. That would really be the senior-level class I would do, merged with pre-calc. And then there's an algebra 4, which would be the theoretical aspects; that should be a college class. I think that is a rework we should be doing with secondary education, but that is way above my pay grade. But I agree with you. Well, I'm just going to jump in with the moderator's privilege: I'm also feeling that problem at the school. I know, right? I said it. So I will say that Joanna brought the mathematicians up to North Campus, and we had these forums. And the people in the math department said that students today know calculus a lot better than they did before, but their algebraic skills, their algebraic manipulation, have gone way down, because they're speeding through it and they're not really learning it. And that's an issue. Go Blue. All right. Where do we need robots the most, and how many do we want? That's a terrible question for me, because I will tell you we need as many robots as we can make. That, again, is a question for economists, or people who go out and actually understand what the needs are in society. I don't know; I'm sitting in my lab, trying to make robots that do all sorts of amazing things, and I don't understand the need. What I can do is try to give our students an appreciation for the other disciplines that can provide those answers, to know that we, as people making interesting technology, do not have the answers for everything. If you look over the last 20 or 30 years, we in computer science think we have the answers for everything, and we don't.
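The "algebra 3 as linear algebra" idea above, solving systems of equations and making 3-D video games, is easy to make concrete: rotating a point in a game is one matrix-vector multiply. A small illustrative sketch in pure Python:

```python
import math

def rotate_z(point, degrees):
    """Rotate a 3-D point about the z-axis: one matrix-vector multiply."""
    t = math.radians(degrees)
    R = [[math.cos(t), -math.sin(t), 0.0],
         [math.sin(t),  math.cos(t), 0.0],
         [0.0,          0.0,         1.0]]
    return [sum(R[i][j] * point[j] for j in range(3)) for i in range(3)]

# Spin a point 90 degrees: (1, 0, 0) should land on (0, 1, 0).
x, y, z = rotate_z([1.0, 0.0, 0.0], 90)
print(round(x, 6), round(y, 6), round(z, 6))
```

Stacking such multiplies, rotate, scale, translate, is essentially what a game engine's transform pipeline does, which is one way to make a first linear algebra course feel like building something rather than pushing symbols.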
I can try to give them that humility, that scientific dispassion our students should have. I can give them that, but I can't answer the question of how many robots we need; I'll get my labor economist friends to help me. So this is the issue that I have: I am very reluctant about the new automated world that's coming up. I check every email by hand; that's why I'm slow on it. But the people who are fast have automated systems that read their emails, synthesize them, generate responses, maybe even send them off. I don't trust automated systems. But it's not really me or you who will have this decision; it's the generation that's coming up. It's my kids, and they're much more willing to trust this technology than I am, and they may be more fluent and more familiar with it. So to some extent, I'm again the wrong person to ask. I do worry that we will use this technology just to make the things that we currently do more profitable, like autonomous driving. The most common job that we have is some form of transportation: bus driver, Uber driver, taxi driver, truck driver. But if we automate all of those, where are those people going to go? What are they going to do? Or we think about the supply chain and just think about doing drones. There are real problems that we should address. Health care, taking care of our aging and disabled population, and I will be one of them very soon, so I want this technology in place. That's a real issue: lowering the cost and improving quality of life. How do we use robotic technology to improve education? How do we use it to inspect and repair our infrastructure at a lower cost and higher productivity?
Those are the hard problems, and those require conversations with the people in the business school and the executive class about what we can do. What are the easy problems that we can solve? What are the hard problems that we need to work on? But informed by the needs that we have in society. And that takes the whole of the academy to work on. Yes, the email question, about looking at all the responses. I'm going to go back a couple of slides. Start with: my cat says vote. That's one thing you can do, but it's not really enough. I would say that corporations, like universities, have their own diversity, their own sort of culture, a very metropolitan, very diverse culture. For every faculty member in a university, there's an equal and opposite faculty member, and it's the same in corporations. There are pockets doing terrible stuff, and there are pockets doing amazing and enlightening things. And we are fortunate that we work with great partners who see the value of education, who see the value of fulfilling the social contract. But there are no safeguards against that. The reality is that, in higher education, for what we do, most of our money is going to come from the Department of Defense. It's going to come from corporations. If I'm lucky enough to get funding from the National Institutes of Health, it will come from there as well. That's something that has to change at a much higher level. I can only do what I can, with the good partners that I can assemble, to try to make a positive difference. But what you're saying is right: I have no safeguards against the potential dystopian future. Hit me with your best shot. I'll take the last one first.
Modern artificial intelligence, which I'll say more about if you come to my talk in two days, seems to have no problem translating between languages. The issue, and this gets closer to your first question, is that I don't always know if it's right. If I'm not trained, if I'm not skilled enough to know whether I'm getting the right answer or not, I could be left with some pretty bad answers. And you're probably seeing them in class right now, because students are using GPT all over the place to do their work. Those are very real issues. To get to your second question about collaboration: yes, we're working on that. But this is where the deep learning AI that's prevalent right now, large language models, I don't know if it's going to be the best for multi-robot coordination. It's very good for one robot, but it's still a question whether it's going to be useful across multiple robots that are collaborating. We've been using Bayesian inference to do that. That is itself a whole other talk; I'm happy to talk about non-parametric Bayesian inference. To translate for the physicists among you, these are Potts and Ising models with continuous state spaces. I'm happy to go into more depth about that, but thank you for your question.
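The Bayesian inference mentioned above really is a whole other talk, but its core update is tiny. Here is a toy discrete Bayes update for a robot localizing itself; the hallway map and sensor accuracies are invented for illustration and are not the lab's actual non-parametric models:

```python
# Toy discrete Bayesian update: a robot localizing itself in a 5-cell hallway.
def bayes_update(prior, likelihood):
    """Posterior is proportional to prior times likelihood, renormalized."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

doors = [1, 0, 0, 1, 0]                  # map: which cells have a door
prior = [0.2] * 5                        # start fully uncertain
# Sensor reports "door": likelihood 0.8 where the map has one, 0.1 elsewhere.
likelihood = [0.8 if d else 0.1 for d in doors]
posterior = bayes_update(prior, likelihood)
print([round(p, 3) for p in posterior])  # probability mass piles up on door cells
```

Repeating this update as the robot moves and senses is the essence of a Bayes filter; the non-parametric versions mentioned in the talk replace the fixed grid with richer, data-driven state spaces.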
I envision a world where drones could be used in police practice to surveil, but I want to hear from you about the flip side of all that: data collection that could be retained forever and used against people, because basically RoboCop has cameras and surveillance, and there's also the AI.

Right, I think you're right. What you're going to see, and this is why we need more computer scientists and AI people and engineers helping in the public policy space, is greater regulation to lay down the rules of the road for what you can and cannot do, what you should and should not do, with artificial intelligence. It would be beyond me to play amateur public policy person on that, but I can speak for myself: I am particularly worried about how people are using facial recognition technology. I'm very worried about how AI is being used in financing, hiring, admissions, and sentencing, and even in educational scenarios, to decide what track a student should go on in their educational pathway. That profiling apparatus is going to just get bigger. That's why the most important thing is for us to produce people from our universities who can help guide that discussion and help the evolution and deployment of the tech. It isn't just something that happens out at Google and Facebook. Those companies come to us for the people who get the training. We should think about how we're educating them, because what we do here will have that impact downstream. That's part of what we have to change for the future. Now, I was very vocal in the summer of 2020. I thought it would be great to have an end to qualified immunity.
It would be great to pass the George Floyd Justice in Policing Act, the John Lewis act, the For the People Act. And I think the stories of not just police misconduct but also the misuse of artificial intelligence are actually quite troubling. But I would also say that I think there are a lot of good people out there trying to do their best in these scenarios. I think we have to come together as a nation to really solve these problems in a meaningful way, and not just have a pendulum swing back and forth from one reaction to another. If robots are a technology, I'm going to need the 401(k) sooner than the robot will. I'm going to riff on that point to say that one of the hottest topics of debate in my lab is this: is deep learning going to show that human-level intelligence is really just finding correlations in massive amounts of data? That we are not as intelligent as we think we are, just great processors of massive amounts of information? Or is there something special about us as people that makes us uniquely capable of processing information and gaining higher insight? We don't know. Nobody knows what's going on inside the neural networks right now. I think this is a fascinating question, maybe even a troubling question, one that may have me rethinking some of my basic assumptions about life and intelligence. But that's where I would bring in my science fiction writers to help me think about it. As for how we get people thinking this way, I think it starts with the way you teach your classes. Certain things we don't do: we don't emphasize testing, and we leave professionalism and ethics to just that one-credit professionalism and ethics course that you take at the end of your program. Don't do that.
Move it up to the front, right? Before a student starts taking second-semester data structures, computer architecture, and theory of algorithms, they should be taking a course that thinks about how they work with people: their professionalism, their ethics, their larger responsibility. I would say a separate class, and not necessarily the intro class. It should be embedded throughout, but if you go back and think about our incentives as faculty, the reality is that most of my colleagues want to be good teachers, but they don't think of this as important; they're not incentivized to think of it as important. Until it becomes something I'm really judged on whenever I write a proposal to the NSF, the Department of Defense, or the National Institutes of Health, and really judged on in my tenure and promotion process, it's not going to matter. So I've got to take what I can get and focus on that one class to really help them.

It's not that it's important; it's just sort of a long dream we've had. If you talk to a lot of the Japanese roboticists who were really pushing the edge of humanoid robotics, they were trying to build Astro Boy. We're building humanoids because it just seems like the right thing to do. But once you have a real need, then you start to find solutions, to find the technology that will match that need, and right now we're making technology without a particular need in mind. That's why you see this. But imagine a day when you go to Disneyland and see the animatronics. They're sort of bolted to the ground, and they do "It's a Small World After All," right? But there will come a time when they don't have to do that, when they can walk around like any other actor.
And I think what I'm excited about is being able to put that capability in your hands so you can think about what that could look like as a new medium. My daughter is an art major at Kalamazoo College, and she would tell you that generative AI is stealing from all of the artists in the world, that it's terrible, and that you should never use it. But I try to tell her that people are going to use it, so how do you use it as a tool? We actually go back and forth about this. It's an interesting new world, whatever this will look like. Maybe you say these humanoid robots are terrible and you want to stick to traditional puppetry and storytelling. Or maybe you say this is the greatest thing ever, and you have it act, or offer changes and suggestions. Once the technology becomes good enough, it comes out of my hands as the researcher, we give it to the world, and it takes on a life of its own across the disciplines. All right, there we go, K College in the house. One thing I found out when she got accepted is that Kalamazoo College is the most expensive college in Michigan. Oh my goodness, I got that bill and I was like, what is that? Right.

I will definitely not put words in the Pentagon's mouth, so I will let them answer that. But as a roboticist, one of the things that keeps me up at night is how this technology is going to be used in future conflicts. Right now, when you deploy people, there is a cost, a cost to your citizenry, and that's what holds us back from doing awful things and making bad decisions. But if you don't have that cost, if it's just a machine, what does that look like? The things we're seeing with the Russian invasion of Ukraine are troubling.
The use of drone technology, the use of automation. And it's not just in conflict. Even if you're a pilot flying an F-35: it used to be that you were holding on to the throttle and doing everything manually, but these days you are commanding a whole command center, right? You have all these instruments that are flying the plane, and you're just supervising it. The way you fly a modern F-35 is what we're all going to have to deal with. What is it going to look like when you have all this automation around you and you're not doing the work yourself, but supervising automated systems that do work for you? That's what's going to govern how we have future conflicts, the speed, scale, and size of those conflicts, but it's also going to affect our daily lives and how we do our work.

Artificial intelligence taking over? Yeah, I should have an answer to that. Maybe I just don't want to answer that question. I can't say that possibility is impossible, but I do think we have control over a lot of these systems. I don't think we'll have a Terminator, Skynet-style issue. A more pressing concern for me is that artificial intelligence and automated technologies are going to put more power in the hands of a few people, who won't have to rely on other people, on the rest of society, to get things done. I think it could widen those gaps if we focus only on the unwise use of the technology itself.

Right. So I have many thoughts on this, but to me the core of your major is years two and three. If I went back and looked at it, what we're saying is that year two sets you up, and your general first year can look mostly the same for lots of people, right?
For engineers, you're taking calculus, math, physics, intro-level programming, intro-level engineering design, and a bunch of general electives, right? We can offer some of those general electives before you get into the crux of your major. And then once you get to the other side, once you've taken all the core classes, there are upper-level electives in your senior year that can be made available to the wider campus body, courses that can give you the insights you need for, let's say, basic literacy and fluency in a particular area. So pedagogically, your major, your discipline, focuses on years two and three, while years one and four give you more free electives that you can use to reach out to other disciplines or broaden your knowledge in that same discipline. Right. So if you want to take the deep learning course for robot perception that we offer, you've got to have linear algebra, you've got to have differential equations, and you have to be well along in your program; otherwise you're going to have a miserable experience in that course. But if you want to take our ethics for AI and robotics course, then really you just need first-year writing, and that 400-level course, that senior-level or graduate-level course, is something you can take without prerequisites. I mean, every course's prerequisites should basically prepare you to have a good experience in the course, and not necessarily be a gatekeeper for your progression toward the knowledge that you need.

When do we have to start proving that we're human? That's now, right? I mean, I saw something: there was, I think, a politician in Arizona who made a deepfake of himself talking about deepfakes, about how you can't trust certain things and have to think about disinformation.
And I was like, that's genius, but that's also super scary, right? And the guard against that is not necessarily regulation; it's an educated populace that understands AI and has a literacy about AI. AI is becoming a new literacy, right? Our citizenry has to be discerning consumers of information. I wish I had good answers for how we prove that we're human, but I think we're at that stage now. I actually brought a Halloween costume, too, so if you see me on Halloween, it'll be fun. Thank you for sticking through all my slides. I appreciate that.