We always have to turn the mic down. Good evening, and thank you so much for coming out tonight, and on Halloween no less. We sincerely appreciate your attendance. My name is Jody Lyneé Madeira, and I'm a professor of law at the IU Maurer School of Law. In 1931, Mr. William T. Patten of Indianapolis made a gift of something more than $115,000 for the establishment of the Patten Foundation at his alma mater. At the time, it was the single largest gift ever pledged to the Bloomington campus. The gift was in the form of Liberty bonds and Indiana municipal and county bonds, which were held in trust by the university. Patten was not a wealthy man. He received income from the trust funds until his death in 1936, when the university endowed the William T. Patten Foundation. Under the terms of this generous gift, there was to be chosen each year a visiting professor who was to be in residence two months or more of the year. The purpose of this appointment was to provide members of the university, students, faculty, and staff the privilege and advantage of personal acquaintance with the visiting professor. While these terms have changed over the years, and we now typically have two Patten Lecturers spend a week on campus each year, the spirit remains unchanged. Patten believed that Indiana University, still a smallish, provincial institution at the time, could enrich the intellectual life of the local community and of the state of Indiana, both in the moment and in the future. He said, "I have always been proud to be a Hoosier. I am proud to be a graduate of our great Hoosier state university, where we train and educate our children." Patten was born in 1867 on a farm in Sullivan County, Indiana. He taught in the county schools before enrolling at Indiana University at the age of 21. He was a diligent student, a competitive orator, and an associate editor of the Indiana student newspaper. He received his undergraduate degree in history in 1893. After graduation, Patten settled in Indianapolis, where he made a career in real estate and county politics. Since the first Patten Lecture by the German business economist Alfred Manes in 1937, more than 130 world-renowned scholars have lectured at Indiana University under the auspices of the Patten Foundation. Noted specialists in their fields, the speakers have been chosen for their ability to convey the significance of their work to a general audience. Lecturers stay on campus for one academic week, during which time they interact with faculty and students in classes and informal gatherings and deliver two public lectures. Special thanks for enabling Professor Jenkins' visit go to his nominator, Selma Šabanović of the Luddy School. The many departments and units supporting his visit are the Luddy School of Informatics, Computing, and Engineering; the School of Education; the College of Arts and Sciences; the IU Center of Excellence for Women & Technology; the Cognitive Science Program; the AI Digital Futures (AIDF) program; the Office of the Vice President for Student Success; the Center for Innovative Teaching and Learning; Atkins LLC; 21st Century Scholars; and Groups Scholars. Thanks also go to Provost Rahul Shrivastav and the Provost's Office, the Cox Research Scholars Program, the Meridae House, and the Patten Committee. And to introduce our esteemed guest, I'm happy to present Professor Selma Šabanović of the Luddy School. Thank you.

Thank you, esteemed guests, wizards, and muggles.
I'm very glad you could join us on this auspicious Hallows' Eve to greet another amazing mage from one of the excellent wizarding colleges in our region. And I'm going to stop that now; that's as far as I could go. I'm really delighted to have Dr. Chad Jenkins here. He is an exemplary leader in robotics and artificial intelligence, not just for his groundbreaking scholarship, but also for his amazing efforts in making STEM education much, much more inclusive. Dr. Jenkins explores the fundamental building blocks of robot action and perception to enable robots to work with and assist diverse users, including older adults and people with disabilities. One of the first times I saw work done by Dr. Jenkins was in a video, which you can now see in a TED Talk, of a man who was paralyzed by a stroke and with whom Dr. Jenkins worked so he could use robots, including a drone. And this was probably 15 to 20 years, almost 20 years ago, right? Now maybe we're used to drones, we're used even to telepresence robots, but at the time this was a unique and astounding achievement. And even today, in all of my human-robot interaction classes, one of the first things the students do for an assignment is watch that video so they can see a real example of user-centered robot design. Aside from being a wonderful example of that, I think this is also a wonderful example of Dr. Jenkins' abiding research and teaching philosophy, which is that to make robots that work for any of us, we have to make them work for everybody. Dr. Jenkins uses this inclusive, human-centered approach in his teaching as well. He's the founding chair of, and among the chief architects of, a new robotics undergraduate program at the University of Michigan. He also pioneered a transformative approach to broadening participation in STEM through the distributed teaching collaboratives for artificial intelligence, which connect faculty at HBCUs and minority-serving institutions with peers at R1 universities. Dr. Jenkins' excellence has been recognized through numerous awards. I have a huge paragraph here with all of them; I could read all of them, but you can also look at his Wikipedia page. I'll just mention some. There is a Sloan Research Fellowship, a Presidential Early Career Award for Scientists and Engineers, and a Young Investigator Award from the Office of Naval Research. He's a fellow of the American Association for the Advancement of Science. And many, many more. I think it's also amazing that he is popularly recognized: Popular Science named him one of its Brilliant 10 in 2011, and National Geographic called him an Emerging Explorer. So he's truly been everywhere and done things with everyone. And before I let Chad talk to you, I also want to say that I have had the amazing luck to work with Dr. Jenkins. We served as editors-in-chief of a journal together, and that allowed me to personally witness his true efforts to bring both scientific rigor and inclusiveness into a very interdisciplinary field. It also gave me a chance to benefit from his mentorship. And so I want to say thank you, Chad. So let's say welcome to Dr. Chad Jenkins.
One second here. All right. So thank you, Professor Šabanović. It's great to be here. I'm glad that you're in your Halloween costume; thank you for spending your Halloween with me and listening to me talk. Anybody know what I am, what my costume is? The Price Is Right, thank you very much. If you came to the lecture on Tuesday, you know how much I love The Price Is Right. But with that, I'm going to take this off now. There we go. I couldn't let you be the only people enjoying Halloween here. And so welcome to my lecture, entitled "AI, Pick Two: Fast, Cheap, or Good." Really, this is meant to help us calibrate what we need as citizens in this AI-powered world that we're starting to see. When we think about AI being fast, cheap, and good, what this talk is really meant to do is give us a sense of where AI is today, what needs we have for AI tomorrow, and how we prepare students, how we prepare citizens and scholars, for an AI-driven world. That's what I would like to help us have a discussion about as we move forward. It's incredibly inspiring to be here for the Patten Lecture. As I learn more about it, about the value of education and how education is really a supercharger for our economy and our society as a nation, and has led to the prosperity of the United States, it's incredible to be here and to further that, because that's what we need in order to have the innovation of future decades; we're paying this forward. And I can say that that's me, right? When we're talking about the value of higher education, my career as a roboticist, getting to work with many amazing platforms and scholars, is really a huge benefit. I'm able to do work in human-robot interaction and robot manipulation, where we can have robots that can see the world, interact with it, and deal with messy environments. In this case, we're just picking up objects in cluttered environments and being able to sort them. That's one sort of thing; I went more into depth on that on Tuesday. But more importantly, it's the people who I get to work with: the great students in my group, my amazing colleagues, the number of students, being able to help generate future people who are educated, who understand the ideas of robotics and artificial intelligence, who can put those ideas into practice and extend them to create the new innovations of the future. We are in the business of people and ideas. We generate people, educated, responsible people, more than anything else. I would also say, while your football team is good, enjoy it. It doesn't last forever, and I can tell you, because that was a picture of me. So that was me. And so I have this thing that I wore. I have been at many places; I had never been a part of a national championship. So there's a whole story about this thing that I'm wearing, which I call the postseason chain. I gave an invited talk at Humanoids at UT Austin, and I wore that chain. From the time we beat Ohio State to the time we won the national championship, I wore the postseason chain everywhere. And so these are some of my good colleagues from UT Austin. I got them a postseason chain too, and I said, we'll see you in January, but we ended up not seeing them in January.
But one thing I'm particularly grateful to my colleague for: we had a reception at the stadium restaurant, and in our stadium at Michigan, if you're in the club, you can see out onto the field. At Texas Memorial Stadium you can't do that; the restaurant doesn't look out on the field. So I asked, is there a way I can get out there? And we went up and snuck into one of the boxes up there, and he took a picture with me. This person here is Peter Stone. He's an amazing person in AI, a luminary, but the fact that he was willing to sneak in to get a picture of me with my postseason chain at Texas Memorial Stadium was fun, and I was glad that we could also turn that into a championship chain. And that's me: I had a meeting of the Robotics and Automation Society in Switzerland, in the Swiss Alps. My daughter, the art major, went with me, and I had to watch the game at 1:30 a.m. Swiss time and then be up at 9 o'clock for a meeting, but I still wore the postseason chain all the way there. So it's just fun. Enjoy it while you have a good football team. But some of the people that I got to meet as well, I have all sorts of colleagues. This is me with Veronica and Christy, colleagues who are now in the Big Ten, at Washington and UCLA. We were working on a briefing about artificial intelligence and where it's going to go, and we were trying to figure out how to describe it. And Christy relayed this story about Martin Scorsese. Martin Scorsese, she said, was under pressure to get The Godfather done, and so he told the studio back in the 70s: cheap, fast, or good, pick two of those, I can only do two, right? And I was like, wow, that's a good one, and we started using that to describe artificial intelligence. It was only later that I realized she wasn't actually talking about Scorsese. I looked it up. It wasn't Scorsese and The Godfather; it's actually Hal Needham and Smokey and the Bandit. That's where this comes from. So think about that when you watch that movie, if you still watch that movie. I don't know if it holds up well, but it still is great. And so I'd like to talk about artificial intelligence through this lens: if you're thinking about trying to get something done, whether it's a movie, artificial intelligence, or any sort of task, I would argue that people are expensive. They can be fast, they can be good, but people are not cheap. They cost a lot of money, and you get what you pay for, right? And when you think of the types of artificial intelligence that we've had traditionally, up until the modern era, that past AI is very good. If you use something like Google Maps, we can represent the map as what we call a graph structure, and we can search over that graph and tell you the shortest route to get from any one place on the map to another, and it's very good, right? Google Maps is really good at what it does. Maybe it'll be a little weird sometimes, but that's really just the estimation of the traffic, or maybe it missed some construction; for the most part, we can rely on these systems to give us good answers. But that past AI, as it scales from simple route finding to more complicated tasks, like trying to write essays or do extended reasoning, will take more time to deliver.
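To make that route-finding idea concrete, here is a minimal sketch of the kind of graph search behind a Google Maps-style answer, assuming Dijkstra's algorithm as the method; the place names and driving times below are invented purely for illustration and are not from the talk.

```python
# A minimal sketch of graph search for route finding (Dijkstra's algorithm).
# The cities and travel times are made up; a real map service searches a
# vastly larger graph, but the principle is the same.
import heapq

def shortest_path(graph, start, goal):
    """Return (total_minutes, path) for the cheapest route from start to goal."""
    frontier = [(0, start, [start])]          # (cost so far, node, path taken)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []                   # no route exists

# Hypothetical road graph: each edge is (neighbor, minutes of driving).
roads = {
    "Bloomington":  [("Indianapolis", 60), ("Columbus", 45)],
    "Columbus":     [("Indianapolis", 55)],
    "Indianapolis": [("Chicago", 190)],
}
print(shortest_path(roads, "Bloomington", "Chicago"))  # (250, [...route...])
```

The answer is exact and explainable, which is the point he makes about this older style of AI; the cost is that the search time blows up as the task gets more complicated.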
It's going to be pretty slow, right? As you grow the complexity of your task, that past AI will usually grow exponentially in the amount of time it takes to complete, and so it will take time to deliver. And so in our modern era, when we think about things like ChatGPT, and you look at the large language models and the diffusion systems that can generate images and all of the generative AI that's happening, there's a tendency to say: it just works. This is great, it's awesome, it's just going to work. If you were here on Tuesday, you can't answer this, but I would offer this example again. If you're new to this one, to get a sense of generative AI and how it works, can you tell me what this is? Anybody who's just coming to this lecture, does anybody know what that is? What's that? Tuna fish? It's tuna fish? Any other guesses? Salmon, all right. It's just salmon, that's it. Anybody else want to guess? Water slide, it could be a water slide. That's exactly right: it's salmon swimming down a river. A student sent this to me. Somebody prompted one of the generative AI image systems and said, "generate salmon swimming down a river," and that prompt generated this: salmon swimming down a river. It's right, but it's not right. And so that's something you have to think about, but I'd like to think about some more consequential things here. In general, you see these generative AI systems and they're right 99% of the time. They produce amazing capabilities. They can do things that we haven't seen before. If you've seen the Nobel Prizes that were won in physics and chemistry recently, these are amazing capabilities, but we have to be careful and understand the risks. So even if AI is right 99% of the time, can we characterize the 1% of the time that it's wrong, so we know what we're getting from these systems? And I would talk about what happened on my 46th birthday, not to me, but on my 46th birthday, January 9th, 2020, when Robert Williams, who lives in a suburb on the outskirts of Detroit, was wrongfully arrested based purely on a facial recognition system. There was a robbery that occurred, and they had grainy video of the act being committed, and they ran that through a neural network, and Robert Williams came up. I mean, does that look like him? I'm not sure. But purely on that facial recognition hit, the police came, arrested him, and detained him for 30 hours, and it took a lot of legal work to get him released. That's time he's not going to get back, and he was arrested in front of his friends, family, and neighbors. He's written a great op-ed in The Washington Post about facial recognition and generative AI, which I encourage people to read. So basically they took this image, they ran it through, and he came out as the suspect. But there's a bigger issue, in that there's research on taking a grainy, low-resolution image and converting it to a high-resolution image. They took a grainy image of President Obama, and the result didn't look like President Obama. And so those are some issues that we have to think about, right?
You know, what's going on inside these models? We don't really know what's happening inside a neural network. It produces really good results until it doesn't, and then we have to be aware of it. And so I wouldn't call the current AI that we have the "it just works" AI; that's really the dream AI, and we're still dreaming about that. That sits right here in the middle. I would call the current AI that we have not necessarily the best quality, and not quality in the sense that it works most of the time, but quality in the sense of: can I trust the result that I get? And so I would like to offer an example for you to think about, just a hypothetical example. Imagine that you have to fly from Seattle to D.C., right? And let me give you an option. You're going to fly from Seattle to D.C., and you can do it the usual way that we do it right now. I'm a Delta flyer. I drove here, but somehow I'm going to make Diamond on Delta this year, which I really don't like. I don't want to do that again, because I like seeing my cat and my kids and being at home. But if I've got to fly, I'm flying Delta. Let's say I get a $500 ticket on Delta, 10 hours from the time I leave my door to the time I get to the door in, let's say, D.C. And Delta's pretty good these days; they're more like 95% on-time departure, but let's say 85%. And that's with people running this: people running security, the gate, flying the plane, air traffic control, maintenance, all of these things. Now let's say I could replace all of that with artificial intelligence, and I could have an AI crew instead. And with that AI crew, I can get the cost down to $50 a ticket and six hours, because we're going to be more efficient, and 98% on-time departure. Right? So the question is, which flight would you take? Human crew or AI crew? Just a rough example.
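As a rough, hedged sketch of how you might weigh this thought experiment, the snippet below turns the two options into an expected door-to-door cost. The ticket prices, flight hours, and on-time rates are the numbers from the talk; the dollar value of an hour and the delay penalty are invented assumptions, and the calculation deliberately ignores the rare catastrophic failures discussed next.

```python
# A rough sketch of the human-crew vs. AI-crew comparison from the talk.
VALUE_OF_HOUR = 40.0   # assumed dollars per hour of your time (not from the talk)
DELAY_HOURS   = 3.0    # assumed extra hours when a departure is late (not from the talk)

def expected_cost(ticket, hours, on_time_rate):
    """Ticket price plus the dollar value of expected door-to-door time."""
    expected_hours = hours + (1 - on_time_rate) * DELAY_HOURS
    return ticket + VALUE_OF_HOUR * expected_hours

human_crew = expected_cost(ticket=500, hours=10, on_time_rate=0.85)
ai_crew    = expected_cost(ticket=50,  hours=6,  on_time_rate=0.98)
print(f"human crew: ${human_crew:.0f}, AI crew: ${ai_crew:.0f}")
# This simple expected cost says nothing about rare, catastrophic failures;
# that is exactly the risk the rest of the example asks you to weigh.
```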
Just a quick show of hands: human crew? AI crew? All right, I've got about half the audience on each; I like the distribution. Let me make this a little more interesting. If you look at failure rates, the airline industry is very good. I think they would say you're safer on an airplane than you are walking on the street or driving in local traffic. They would say that you have one incident every 100 million flights; I think it's actually better than that, more like one incident every 200 or 300 million flights. We would consider that to be eight nines of success: write it out as a percentage and it's 99.999999 percent. But let's say with the AI crew there's a one-in-a-million chance that you have an incident, and that would be bad. Does that change the logic? Anybody change your mind? Is this still so good? We got a couple of people to change their minds, all right. But what if I took the current reality that I have? Let's say I took my current neural networks, and it's more like 98% right, and there's no Captain Sullenberger around; you just get the autopilot. Does that change anybody's mind on the AI crew? So that's something that you have to think about. But does that mean I should completely not use AI at all? Well, it depends on what I want to use it for. If I'm shipping goods, maybe I'm good with that. Do I need my socks that much? Do I need that fan, that USB charger, or those Converse that my daughter wants? She can live without them, right? But what about the things that you trust? If I'm sending my daughter off to college and putting her on a flight, or this is somebody's life-or-death organ, or this is aid that could really help somebody, which flight do you trust, and how do you balance the risk versus the potential benefit? We also know that there's research that will help these neural networks get better. My colleague Dr. Ritz at the Institute for Defense Analyses has covered this and written a good survey: whether it's a panda or a gibbon. If you take an image of a panda and add just random noise to it, you can make the AI system think it's not what it actually is. So it goes from being a panda, you add some static noise, and the result changes, because we don't really know what's going on inside these networks. There are some even more challenging cases. I'm going to skip the confusion matrix there, but one of my colleagues at Michigan, Atul Prakash, who's now the chair of computer science, showed some interesting results: if you have an AI system that's trained to recognize stop signs, but you put little stickers on a stop sign in the right places, you can get the neural network, the AI system, to think it's a speed limit sign instead of a stop sign. And that's trouble.
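The panda-plus-noise trick he describes is usually demonstrated with the fast gradient sign method (FGSM). Here is a minimal sketch assuming a trained PyTorch image classifier called `model`; the epsilon value and the class index in the usage note are illustrative, not from the talk.

```python
# A minimal FGSM sketch: nudge every pixel a tiny step in the direction that
# increases the classifier's loss, so the image looks unchanged to a person
# but the predicted label can flip.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image` (values kept in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage: panda_batch is a 1x3x224x224 tensor, 388 is the ImageNet
# "giant panda" index.
# adversarial = fgsm_perturb(classifier, panda_batch, torch.tensor([388]))
# classifier(adversarial).argmax() may now come out as a different animal,
# even though the perturbed image is visually indistinguishable from the original.
```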
Some researchers at MIT actually showed how you can get an AI system to think that a turtle, if you rotate it the right way and show it the right way, is actually a rifle. Just imagine what that would mean, right? So those are some cases that we have to think about. And so Christy, Veronica, and I were putting this together in terms of our thought piece, which we call pieces of thought, and we broke it down into things that we think citizens should be aware of. One is that AI has to be demystified: you should know what you're getting. AI provides huge capabilities, but you have to calibrate the benefits against the risks and know what you have. It's not perfect, but it is quite amazing in what it does. We also know that, on the horizon, we want to match the risk to the type of AI that will be provided. From that knowledge, from that awareness that we can have as educators, we want people to know how to use AI in a reasonable way. But if we think further over the horizon, what I would argue is that the modern computing we have right now, digital computing, is not really a great match for artificial intelligence. There are other forms of computing that will be even better, that may be able to get us to that dream AI, but that requires training for students in this new era, and it also opens up new possibilities for what we might think about in terms of core computing structures. Digital computing is not the only option for us, and even quantum computing is not the only option for us. So I should talk about what AI is right now; I can get into the weeds fast and talk at a super deep level without giving insights, so I'm going to try to give some insights into what AI is right now. When people talk about AI, what they really mean is a deep neural network. It's a structure that approximates how the brain works and is able to provide predictive power. Really, this is no different from what you have from high school Algebra II, where you have y = f(x). If people remember y = f(x): f(x) could be, say, x squared, and if I put in x = 2, it'll square it and y will be 4. That really is what neural networks are doing, except that we don't know that it's x squared in the middle. We have to figure out this crazy, complicated function in the middle. And if we can't figure it out analytically, we can try to use data to do it. So let's say I wanted to predict the quality of a restaurant using the number of stars it has on Yelp. I could go around and get training data: Yelp reviews from different restaurants in, say, the D.C. area. I picked D.C., so Smashburger is not so good, but Capital Grille is great. I can collect that for a lot of different restaurants, and maybe it forms a rough pattern. There might be some outliers here and there, a restaurant that's good but gets low stars, or a restaurant that's not really good but has high stars. And what AI is really doing is trying to figure out: what's the underlying model, what's the trained model?
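Here is a toy version of that y = f(x) story, fitting a simple model to made-up restaurant data; the star counts and quality scores are invented for illustration, and a real neural network would fit a far more complicated function to far more data.

```python
# A toy version of learning y = f(x) from data: map Yelp stars (x) to how much
# you'd like a restaurant (y). All numbers below are invented for illustration.
import numpy as np

# Hypothetical training data: (stars, quality score you'd assign after eating there)
stars   = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
quality = np.array([3.0, 4.0, 5.5, 6.0, 7.5, 8.5, 9.0])

# We don't know the true f(x), so fit a simple one (here, a straight line).
slope, intercept = np.polyfit(stars, quality, deg=1)

def predicted_quality(new_stars):
    """Predict quality for a restaurant we've never seen before."""
    return slope * new_stars + intercept

# A new restaurant with 4.2 stars: predict how much you'd like it.
print(round(predicted_quality(4.2), 1))
```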
And if I have that trained model, and I get some new restaurant, let's say here, I can look at the number of stars it has and predict how much you're going to like that restaurant. It's predictive quality, right? I take data, I learn some model of y = f(x), then I get some new x that I haven't seen before, some new input, and I can make a prediction of what its output would be. That's what neural networks are really doing. But these models are not just used on simple things like restaurant quality. Oftentimes they're used to do recognition, things like recognition from images. So you'll have lots of camera data and you're trying to figure out, what is that object, or what is that person doing? This is behind a lot of things like surveillance technology, or sports: if you're watching sports and seeing all the graphics they put up, they're using neural networks to do that. And this is now very easy. This is off-the-shelf technology that you can use. It doesn't require an advanced degree in computer science to make it work; you can pull it off the shelf and get things going. Image data has many more dimensions and needs much more data to characterize this model, this function. So what happens is, my input is an image and my output is some label. Maybe I'm trying to figure out if that's a mammal, a placental, a carnivore, a canine, a sailing vessel, a watercraft, and I'm starting to put that label on the image. And so what I will do is collect a lot of training data, and training data has become big. There are a lot of efforts in China, which is a leader in artificial intelligence, and a lot of efforts here by big technology companies, to try to collect as much data as possible. The quality of your model depends on the quality of the data that you're getting. And once you have that data, you can show this data to the model. You show inputs and outputs, and as you do that, with those matched inputs and outputs, you train the weights of this model. You train this model so that it has predictive power. So now, if you give a new image to this trained model, you can get predictive recognition. I show a regular image here, and I can get labels of all the things in that image; in this case, it's cars and people. But if I look closely, I can see that that's not always right. There are things called false positives and false negatives. There are certain things that were labeled bus that are clearly not buses, they're cars. And then there are things that weren't seen at all. If I just manually go through and look at all the things that weren't seen, we can see that that's not necessarily correct. I got this example from 2017, 2018, and the technology has gotten better, but this problem still exists. And so that's what we should be careful of, and we should ask: how did these mistakes occur?
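To make the false positive and false negative idea concrete, here is a minimal sketch of counting both kinds of mistakes for a single image; the detections and ground-truth objects are made up for illustration.

```python
# False positives: the model reports something that isn't really there.
# False negatives: something real that the model never saw at all.
def count_errors(predicted, actual):
    predicted, actual = set(predicted), set(actual)
    false_positives = predicted - actual   # reported, but not actually present
    false_negatives = actual - predicted   # present, but missed by the model
    return false_positives, false_negatives

# Hypothetical detections for one street image versus what is actually in it.
model_says   = {"car_1", "car_2", "bus_1", "person_1"}
really_there = {"car_1", "car_2", "car_3", "person_1", "person_2"}

fp, fn = count_errors(model_says, really_there)
print("false positives:", fp)   # {'bus_1'}  (a car mislabeled as a bus)
print("false negatives:", fn)   # {'car_3', 'person_2'}  (objects never seen)
```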
When these mistakes occur, how do we fix them and recover from them? But also, what was the cost paid for making these mistakes? And that requires more than just me as a computer scientist; that requires knowledge from all around the academy, from across the university. So as we're thinking about what we need to do with this, I would think about the types of risks that we have and how we match them. If I'm thinking about the risks and consequences of, let's say, an automated system, a robot that might be doing things in the world, or a data analytics system that might be analyzing the types of data we're thinking about, there are different levels that we have to consider. I would consider the neural networks that we have right now good for things like consumer shipping: low stakes. Or advertising, or giving me a suggestion for an introduction I could write for Professor Jenkins' Patten Lecture, or something like that. Low stakes, good suggestions. If I'm starting to do things like controlling an autonomous car, I may want my old-school, model-based, Google Maps-style AI, whose behavior I can predict and understand. If I go further and think about a commercial airline, I'm going to make the argument that you should still have skilled people. Most of a commercial airline flight is run automated, but at the end of the day, the pilot and air traffic control are making the decisions, and that's really what you want at that level of stakes. One question I get a lot as a roboticist is about Terminator and Skynet, and I have often come to say that if you are doing things like nuclear stewardship, it's not even just people, it's organizations of people that have to manage this. We don't give that over to an automated machine, and definitely not to one decision-making system; it has to be collective. Nuclear stewardship is the more extreme example, something like twelve nines; the margin for error is so small that you can't tolerate that error. It has to be almost impossible to happen. One other thing people should note is that when we train an AI system, it takes a ton of energy. Forget about OpenAI and Google; just think about the number of AI systems being used on this campus for research, student projects, and the various types of scholarship that might be happening. If you're like Michigan, you're probably using gigawatt-hours every semester. Just training one of these neural networks can take huge amounts of energy. And so there are solutions that can help us avoid that level of energy consumption, and that really means the dream AI we are thinking about can be possible. But the thing that will allow this to happen is understanding the investment and the training that went into how we got this far in computing. I would make the argument that the AI of today required continued and significant federal investment over the last century. AI didn't just show up, computing didn't just show up; it dates all the way back to World War II and the investments made in automated turret controls and calculations for the nuclear bomb, Los Alamos and things like that.
Artificial intelligence dates back to Norbert Wiener, who created the term cybernetics. We could easily have been talking about cybernetics instead of robotics and AI; it didn't really catch on as a name, but the principles still existed. The term artificial intelligence itself was coined in 1956 at the Dartmouth conference by the founding fathers of AI, and since 1956 we've seen this evolution of artificial intelligence come in a number of different waves, with federal investment all throughout. In the beginning, it was more about your old-school, Google Maps-style AI. That took over, and that was your first wave, and then we had these AI winters, where the ideas got overhyped, then they didn't work, then people lost interest in funding them, then they got hyped up again, and then they started working and it was good. That first wave of AI is really about search, about vehicles, about planning, about really reasoning through a problem. The second wave of AI is really your neural networks; they took off for a little bit and dipped, came back in the 90s and then suffered, but since 2011 they have really taken off. And there's a third wave of AI that's really starting to come about. In terms of federal investment, I will not talk about this in depth because it's even more boring than what I'm talking about now, but if you look at where your money goes from the National Science Foundation and similar agencies, I would take a look at the "tire tracks" diagram. The tire tracks basically show the areas where federal research investment was made and how that has matured into major industries: early advancements in broadband and mobile, microprocessors, computing, the internet and the web, cloud computing, robotics, entertainment and design. Elon Musk didn't just come up and think of this and have it show up; it took decades and billions of dollars to make this happen. For neural networks, I would point to AlexNet and the group of Geoffrey Hinton, who just won the Nobel Prize in Physics for contributions to deep learning. They would talk about how their ideas weren't really supported and how they had to leave the United States and go to places like Canada and Japan to get the support they needed to develop ideas that were kooky and offbeat at the time but turned out to be the right ideas. And so we have to have diverse federal funding in order to help with this. When we think about this, it's kind of abstract, so I want to make it a little more concrete. This first wave of AI really started with simple systems for search. In the 60s and 70s, they were thinking about how to play tic-tac-toe: how can you have an automated system that can play tic-tac-toe?
And so if you think about the starting graph, when you start a regular tic-tac-toe game, the first player has one of nine choices they can play, and that leads to nine possibilities. From each of those nine possibilities, there are eight possibilities for the other player's move, so now you have nine times eight possibilities for that second move. Then you multiply by seven for the first player's next move, then six, and you can enumerate all the possibilities and search through them. So whenever that first player makes a play, you can have an automated system that thinks through all those possibilities, selects the best possible outcome, and plays that move. That's how these automated systems work; it's something called the minimax algorithm. A lot of our graph systems are able to do this; that's how we can reason through how these systems work. These systems got good enough by the 1990s that they could play chess; they could think through all the possibilities for how you could play chess, and they were good enough that they could beat Garry Kasparov, who was the chess grandmaster at that time. That same technology is literally what is driving autonomous cars: the route planning for Google Maps, being able to have an autonomous car work, and being able to generate city-scale maps. I think this is one of the coolest things I've ever seen. It's from my colleagues Ryan Eustice and Ed Olson. They have an autonomous car with a laser range finder on it, and they built this map. This is a map of the Big House at Michigan; this is downtown Ann Arbor. What you're seeing in blue is the 3D map that the robot built up from its laser range finder. What you're seeing in red and yellow is what the robot is seeing right now from its laser range-finding sensors. So our robots can build fabulous models of space; we can navigate through the geometry of a space really well. But what we can't do is understand what each of those things means. We can see it as trees and buildings and sidewalk and road; we can see those bicyclists coming through the middle of the intersection that you don't want to hit. We understand that, but our automated systems, our AI systems, were not able to understand that. And that's where we came into the second wave of artificial intelligence, where we have deep learning, where we can learn from lots of data. AlexNet, which came out of Geoff Hinton's group, was really the rise of that deep learning, and that allowed us to basically see pedestrians and understand the semantics, the form and function, of objects and scenes. That's what's doing facial recognition. If you want to know what's doing the blurring behind you on Zoom, that is neural networks in action. That's what's allowed us to make things like DALL-E for creating images from language prompts, and to be able to code automatically. And that was just 2022; this is what I showed in class. By 2023, I had to change out the slides, and the image models could generate all of these images right here, completely generated by AI. They don't exist, they're not real. You can just say "make a gingerbread house in diorama form" and it will generate that for you. And everybody's seen ChatGPT and the language that it can create.
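Going back to the tic-tac-toe search described a moment ago, here is a minimal minimax sketch in Python; it enumerates the nine times eight times seven tree of moves exactly as described, though a production game engine would add pruning and heuristics.

```python
# A minimal minimax sketch for tic-tac-toe. The board is a list of 9 cells:
# 'X', 'O', or ' '. X tries to maximize the score, O tries to minimize it.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(board, player):
    """Return (score, move) from X's point of view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0, None                       # board full: a draw
    best = None
    for m in moves:
        board[m] = player                    # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                       # undo it
        if best is None or (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# From an empty board, X searches the full game tree (a few hundred thousand
# positions) and picks a move that can never lose with best play.
print(minimax([' '] * 9, 'X'))
```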
And if you think about what Nvidia is going to do for humanoid robots, you're going to simulate many different types of environments, and you'll have humanoid robots that can just learn in this generative AI fashion. This is really what's coming, and it's very interesting, but it also required decades of investment. This dates back to Rosenblatt's perceptron from Cornell, the early neural network, in 1958. In 1989, this was turned into convolutional neural networks; when you say something to Alexa or Google Home, you're using these neural networks that came from the 80s, and that's what allows them to understand what you're saying. That's turned into the ChatGPTs and the stable diffusion models and the facial recognition systems that we've seen. As I've said before, these learn from lots of data, but there are limits, as we've talked about with the stop sign being fooled. And that's why the third wave that's coming up is about creating explainable AI, where we can get the best of both of these worlds. To give an example of that, if we look at this handwritten digit right here, up in the square, is that a nine or a four? I don't necessarily know. There's an explanation that could be generated that would say: if I think of the strokes this way, that could be a nine. But if I think of the strokes as the curve of a four and the line down, there's an explanation for that too. There's an explanation that could be given for both, where I maintain both of those possibilities. That's what we're going to see in the third wave. If the AI makes a mistake, it can say: it could be this, or it could be this other possibility, and this is how much confidence I have in each of them, and it's able to say why. That really is a lot of what we're seeing: combining these together. And this is something that I'm thinking about in terms of training, because we've created a class, Robotics 102, which is sort of that first step into modern AI. We teach students how to have AI that can think through an entire problem and learn from lots of data, and that is a springboard into research on future AI for our students. And so that gets to the question of how we train students for an AI-enabled future. When I think about this class, I just want to show a video of it in action. We use the MBot, a robot that costs about $360 to make, and we're able to build maps of our environment, similar to that 3D map of Ann Arbor. Students can code, from their first steps on campus, to get these robots to do autonomous navigation, and we're creating low-cost robot platforms for them to do this. I think it's one thing to do computer science and just see some abstract output on a screen; it's another thing when you see something move and really do something. It's incredible. I think this gets to the heart of the Patten lectureship: we have to rethink education for the needs of our future. And that's why we created the robotics major. Robotics 102 is one of the steps into the major. I will always give Joanna credit. This is us after we made the major; we're happy and smiling. We weren't smiling when we started; we had some very interesting debates. But really, this launched in fall 2022.
We have over 200 majors that are part of this, and we see great growth potential to define the discipline of robotics for both equity and excellence and to meet these emerging needs in our society. We define robotics as the study of embodied intelligence for machines that sense, reason, act, and work with people to improve quality of life and productivity equitably across society. You can read more about it; we have a paper, it's all there, and I won't go into it in more depth. But I would like to say that we are trying to rethink higher education for the people and ideas that we need, so that they can have continued success from the time they start school, through college and higher education, to continued success in the workplace, all the way through retirement. That really is one of the things we have to do for higher education moving forward. The one thing I will highlight is our effort to build what we call distributed teaching collaboratives. This is where we're working with historically Black colleges, teaching colleges, and minority-serving institutions to build pathways into Research One universities through open-source course development. It's what we call distributed teaching collaboratives, an open-source approach to doing this. And what I would say is that the classroom is the catalyst for all the things we're thinking about, for the things the Patten Lecture is about. We should start to think about the classroom, maybe even more than research, as that catalyst for continued study and advancement. Think about our value proposition: if we have an undergraduate student, we want to get them to an R1 university or to a successful career in industry, and we want them to be successful, but there are barriers: admissions and hiring. Whenever I get a graduate applicant, I'm like, I don't know what they're about. I can see something on paper, but what they look like on paper may not match the reality I have when they're in the program. Or if I'm an employer and I'm trying to hire: how is this person going to perform on the job, with my team, in my context? Maybe it works, maybe it doesn't. And so there's a lot of anxiety on the selection side of this as well as on the applicant's side. But if a student has a research experience with a professor, then we can generate a recommendation letter, maybe we generate a research publication, and that minimizes the anxiety for the person that's trying to hire or the organization that's trying to admit. So that research experience means a lot. But even for me as a professor, if I'm trying to work with a student, an undergrad, for a research experience, now I have onboarding anxiety. What is this student going to do in my lab? What are their interests? What is their background? How are they going to fit with my research group, or some project that I have, or my funding source? But the thing that students get when they come to Michigan, more than anything else, is the opportunity to take a course with me.
If they take a course with me, we work on something structured, and we build the mentorship that's going to be needed, the mentorship and the trust and confidence, and I give them new powers, and we're able to work well together, and that's what builds the foundation for doing continued work together. But what about students at smaller teaching colleges or community colleges or HBCUs? That's where we've created distributed teaching collaboratives, so we can work together on delivering courses and build a bridge together. We could essentially do this nationally, and it creates a win-win collaboration, because there's a value proposition on both sides. Think about it: my faculty here, what's your teaching load? One course a semester, maybe two? That's my teaching load. I teach in my hot-topic area and I teach my class. But if I look at my colleagues at smaller schools, they have to teach three courses, oftentimes four or five, a semester. That's a lot. But they're oftentimes better teachers. So what we can do is create a distributed teaching collaborative, where we work together to develop the course and then support each other when we deliver that course to our students. That's what's going to allow us to build a bridge across our universities. One thing that's really important is that the class listing is always owned by the local institution. It's never a student from Howard University or Berea College or Morehouse College taking a Michigan course; they're always getting it from their local faculty, and we're building collaborations among the faculty. We've seen incredible results: for Robotics 102 we've seen great enrollments, so we have a lot of students, and many of these students are ending up in graduate school or the tech industry, and we're able to help them along their path. I can say more about that, but I'll just say, if you want to know more, we have a great article on Michigan Engineering that you can check out, because I think these types of students and these types of classes are what's going to be needed to solve these hard problems. And I'll leave you with one other thought that I try to tell the students. With digital computing for artificial intelligence, you spend most of your time transferring data from main memory to GPUs, or changing the state of something from zero to one. We don't necessarily have to do that. There are all sorts of other options available to us. If we think about the growth of computing, as we've gone from electromechanical relays and vacuum tubes to transistors and integrated circuits, there are new options out there: analog computing, optical computing, biological, thermodynamic, quantum. This, I think, is one of the main things we have to explore, and I may not be able to do it, but maybe I'm going to teach a student in class who will be able to provide these solutions, and that is incredibly inspiring. And so that dream AI is to be able to do direct operations on probability distributions. Not just an individual x and y, but a distribution of possibilities. I'm just going to flash this here, and I'm not going to go into the details of it, but it's about taking our digital operations and turning them into continuous, instantaneous probability operations. That's all I'm going to say; I'm going to move on. But that dream AI is possible. I have a dream that it is there.
And one of the books that I would recommend is The Dream Machine by Waldrop, which talks about how we went from the investments made in World War I and World War II to the personal computers that we see now. I think that is really possible. And if we look at our history, if we invest, we train, we make people aware, we can stay the leaders of innovation and we can make this dream AI possible. That's really what's on the horizon. I think what we have now is really AI that's fast, cheap, or good, and that builds on basically the last century of dreams of digital computing, the invention of the transistor, and the digital computing revolution, which is now the foundation for what could be dreams of probabilistic computing. That's my dream, where you can get something that's fast, cheap, and good, and have AI that's really well beyond our dreams. That is the last thing I'll leave you with. Thank you very much for your attention.

Thank you, Chad, for that thoughtful talk. I hope your thoughts have been provoked, because now it's your turn, and you get to ask questions. So, who's brave? What's coming up?

Fantastic talk, very interesting. I'm a professor of chemistry, so I'm very fascinated by how we can move this into the third dimension, and you gave us a teaser there about probabilistic computing. Can you give us more of a sense of what that actually will be?

I wish I could. If I could, I'd be on the route to getting my Nobel Prize in Physics as well. But I think right now, if you look at the primitives for a digital computer, it's operations on binary variables: you can do AND between two binary variables, OR, and NOT. That binary data, the zeros and the ones, has worked really well because we can control that data. It works well for making Word and Firefox and all the applications that run on our phones, Candy Crush, all of those things. But that's for something stable, where you want a very exact answer. Neural networks are showing that we don't necessarily need exact answers; we want something that roughly works most of the time. So maybe we could maintain distributions over binary variables, or distributions over an entire continuous space, say a distribution over three-dimensional space. And now we can't use that binary AND, OR, NOT as our digital logic; we have to think of something else. Maybe there's something that lets us take the axioms of probability over continuous data and turn them into something we can do computing over: something that can perform marginalization, that can multiply distributions together. I don't have all the axioms here, but all those axioms I learned in probability theory, maybe there's a way to turn them into something that can compute autonomously. And quantum: it took me a long time as a graduate student to get my head around quantum, that things can be in two states at the same time, but that's still a binary variable. Maybe there's something that could extend to a continuous space. And even if I can't imagine that now, I'd like to get students to think about that.
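As a small illustration of the probability primitives being gestured at here, the sketch below does two of them, the product rule and marginalization, the slow digital way with arrays; the joint distribution and likelihood values are made up, and the point is only that a probabilistic computer would, in this dream, perform operations like these natively instead of bit by bit.

```python
# Two probability primitives done with ordinary arrays: multiply distributions
# together (product rule) and sum a variable out (marginalization).
import numpy as np

# Hypothetical joint belief over two variables: A has 3 values, B has 4 values.
joint = np.array([[0.10, 0.05, 0.05, 0.05],
                  [0.05, 0.20, 0.10, 0.05],
                  [0.05, 0.05, 0.10, 0.15]])   # entries sum to 1.0

# Product rule: fold in an independent likelihood over B, then renormalize.
likelihood_b = np.array([0.1, 0.4, 0.3, 0.2])
posterior = joint * likelihood_b                # broadcasts across the rows
posterior /= posterior.sum()

# Sum rule (marginalization): sum B out to get the belief over A alone.
belief_a = posterior.sum(axis=1)
print(belief_a, belief_a.sum())                 # a distribution over A; sums to 1.0
```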
So maybe by the time my grandchildren are in the field, these new types of processors, this type of computing, will be possible.

In your talk, you showed us some of the riskiest uses of AI, which are classifying and categorizing different information, like our visual data. We have the EU AI Act, which is a risk-based system of regulation, but there's a lack of regulation in the US. I'm wondering if you have any thoughts on regulation, the future of regulation in AI, and how that ties into the dream AI.

Right. I have thoughts about regulation of AI. I'm definitely not qualified to think about that on my own as somebody who's trained as a computer scientist and is now a roboticist. This is where the whole of the academy is needed. You'll hear me say this refrain over and over: even though all the students want to be computer scientists and go to Silicon Valley and make $100,000 a year just out of college, we do need to have better representation across all of the different disciplines, because I'm not a public policy person. I'm not a lawyer. I'm not an economist, although as a roboticist I'm kind of forced to be an amateur economist, because everybody asks whether I'm going to replace their job, and that's not true. But really, this is the coming together of people from many different areas, many different sectors, the public and government sector, the private sector, academia, to have that discussion. I don't think any one sector can do it on its own, or any one discipline can do it on its own. I can talk about what is and is not possible with AI today and what could be possible in the future, but I can't necessarily tell you how it should work in terms of a regulatory framework. I need to talk with other people to make that happen. There is a risk of going too fast with regulation, because you may stifle innovation, and there are other state actors that could be moving ahead and advancing and could surpass the United States if we're not careful, if we put the brakes on too fast. But if there's no regulation, then it becomes the Wild West, and you can have really bad effects in terms of violations of people's civil rights, bad effects on the economy when you're making decisions on things that are not true, on disinformation, and bad effects for society. So this requires a larger discussion to make the best, most informed choice possible. It won't be optimal; we will have to try and adjust over time, but we have to do something.

So first of all, thank you. This is a really great talk, lots of compelling ideas; I'd like to talk about lots of things. But one question I'd like to focus on, toward the end of the talk, is teaching robotics. That's one of the topics close to my heart. In teaching robotics, robots are really expensive. Thinking about autonomous cars, and especially in this last year or two we've seen humanoid robots, they're very expensive. I was wondering if you could elaborate a little bit on how you see this. I see it as sort of the 1950s with computers: eventually home computers came around, and then we had an explosion. So if we're going to train the next generation, if we need to train an army of engineers to be able to do these robotic cars and humanoid robots, how do you see teaching developing in the next 10 years for that?
So I was about to form my answer to your question, and then you answered the question for me. If we go back and look at the 50s, the same way that there was a space race and a need to generate the workforce to have the innovations to compete, for putting people on the moon and in orbit, that's what we're facing in artificial intelligence and robotics right now. And so you do need to have, I wouldn't say an army, but let's say a workforce out there that is able to create the innovations that will help us move forward. I would say it's similar to the 1950s in computing, where a computer would take up this entire room and cost millions upon millions of dollars just to do some basic computations; now I carry something in my pocket that is orders of magnitude faster and cheaper. With the economies of scale, we will drive down the cost. The mobile robot that I showed you is $306 per unit. That's because the laser range finder on it went from being $1,500 to $100, because they're making more of them. The computer that's on board, instead of being the old-school PC/104 stack or the types of computers that would cost over a thousand dollars, is now a Raspberry Pi that costs tens of dollars. As we build more, we're going to know more; we'll build more of them and they'll be cheaper. I would argue that the robots we have now are sort of the equivalent of the Apple IIe that I had, or maybe the old 286 computer that I played Apogee games on. I think the economies of scale are coming. The training part is really thinking about, I mean, I don't think we should have a Cold War mentality, that would be bad, but I do think that as we move forward there is a similar sort of need for training the innovation workforce. Thank you for the question.

Chad, a very great talk. I like the way that you presented, sharing a great big picture for AI. In my mind, I believe that robotics has been progressing slowly compared with other domains; I don't know whether you agree with this or not. Then we have GPT, the large language models. Before GPT existed, I was asked whether humans should be concerned about AI, the power of AI, and whether we will be endangered some day in the future. I told people, no, you should not worry, because we are too far away from that level; I'm never right about that. And the next year, GPT existed, and it surprised me, to be honest. So what I want to ask is: now, if we combine robotics and GPT, can GPT change the trajectory of robotics? Is there some unique synergy that people can explore? It is changing robotics right now. Do you have any other comments on how people can find a way to further explore it?

Right. So the generative AI that you're talking about with GPT, which went from people asking where AI is and where it is going, to now, with GPT, being everywhere, you can't escape it: that's what we're going to see with robotics.
I would argue that within 20 years you'll see humanoid robots walking around spaces like this, and it won't be special; they will just be there. I think the question for roboticists like us is how we're going to get there. Will it be the model-based, MPC-style algorithms that we've been using, that we know and understand, and, if we can speed those algorithms up, will they be the backbone of how these AI systems work for humanoid robots? Or will it be foundation models that are completely end to end? If you look at Nvidia or OpenAI or Tesla, I think that's where they're putting their money, and that might work too. As a researcher, I believe there's something in between: something that provides the speed and recall power of foundation models and neural networks together with the explainability and robustness of model-based AI. And if AI follows the trend, every 15 to 20 years some new model comes into fashion. Neural networks came into fashion around 2010; before that it was Bayesian inference, which took over in the 90s; before that it was neural networks again, and A* algorithms in search. So if history holds, the AI we have 15 or 20 years from now will be different from the AI we have today. That's why we need to continue to have a robust and diverse federal investment portfolio across the different research approaches to artificial intelligence, and across the disciplines overall. I'm very grateful that there was funding for mRNA research that helped during COVID, right?

Hey there. Hi. Thanks so much for coming out, especially on Halloween. I've got a few questions; answer however you'd like. How much longer until ads start to appear in prompt responses? Like, "Hey, how do I make a milkshake, what are the best ingredients?" and it's "Oh, use this brand of milk, use this." And when do we start seeing ads in the humanoids: "Hey, watch this five-second video and then we'll go get your drink," or whatever it is? And then what sort of protections will be in place to make sure that the response we get is the truth, instead of, again, some ad-sponsored thing or some politically sponsored thing? And finally, what is your best tip for all of us for PowerPoint presentations?

That's good, I like that. I don't think people know right now what the business model for generative AI is going to be. You've seen incredible capabilities. You can have a subscriber model; I don't know if that's profitable given how many resources go into training and into serving queries for these models. If it holds with how web search makes its money, then you will probably see advertisements in the responses. They could be embedded in the responses, and you could be getting information biased toward the way they want you to think.
Or it could be an advertisement where you submit a prompt, you get an ad, and then you get your output. I think you mentioned both of those, so you're probably already thinking about that. I don't use PowerPoint, I use Keynote, and the thing I love about modern presentation software is that it enables the programmer's mindset: most of this stuff is automated. I love modern presentation software, it's just amazing, and I have thoughts on that; we can talk offline and I can show you a little bit more. Thank you. Oh wait, I forgot to say something. You asked about regulation, and I think that gets back to the earlier question about bringing in different sectors and opinions from across the academy. Sorry for interrupting.

All good, thanks for the talk. So you initially pointed to humans as an example of something that performs well and is reliable, and I was hoping you could elaborate on that a little more. It seems to me that humans may lack some of the explainability of the model-based AI you compare them to. I'm wondering what aspects, from your perspective, make humans perform better and more reliably than the best-in-class AI we have today, and how we could integrate some of those qualities and features into the AI that we have.

I wouldn't say these are mathematically proven axioms; it's something to get you to think a little bit. But I would say that if you think about what it takes to get an AI system to work and deliver what you really want, at the end of the day I probably could have asked somebody to do it and they could have done it better, and they could give me a result they can explain to me, where I can go back and forth with them. Think about the quality I would get working with another person; it's usually going to be faster than using an AI system. Now, if I ask for a route planned from one location to another, yes, a computer can do that better. But think about your more complicated tasks: design a campaign to advertise my new product, or lay out my class, lay out the schedule and the exams and how I'm going to engage the students, something big like that, at that scale. Even generative AI can't do that just yet. It might be on its way, but I can give that to another person; I can't give that to my old-school AI. Yeah, go ahead. Right, right. I think I would boil it down to the phrase, "I know what you mean. I got it." We can have an intuitive discussion with people because we are like people, and they can explain things to us. There's still a hiring issue, getting the right people to work with, somebody you can actually delegate a task to. But if you do, people can do all kinds of things. We're amazing. We're still the best neural networks on the planet, by a long shot.
And so, we don't really know how the human brain works. Maybe neuroscientists will uncover this; maybe we'll learn from deep learning that our brains are just correlation, that we just have massive amounts of data and we're extracting correlations from those experiences. Or maybe there's something about us individually, about people, that we are inherently born with, something in our nature that allows us to adapt and to thrive and to form societies; it's the perennial nature-versus-nurture question. Maybe AI will show us what can be done with nature alone, from the correlations in the data, and maybe we'll start to add some sort of innate reward into our AIs that could start to mimic human-level intelligence. At the end of the day, I don't know. I just know that if I need to get something done, I'm still asking a person to help me do it. I can ask Google Maps for my routes, and I can ask ChatGPT to make a goofy image for me or to give me suggestions about how I'm going to introduce Professor Shabanovich when she comes to Michigan and gives a similar talk.

I'm curious about your opinion: to achieve the dream AI, do you think we still need major conceptual breakthroughs, or do we just need a lot more incremental engineering work?

We need fundamental breakthroughs, and they are probably not going to come from modern artificial intelligence or modern computer science or modern robotics. They're going to come from chemists and physicists and biologists and mathematicians. That's the reason we still need to fund basic science. Right now, as computer scientists, we produce a lot of incremental work: I took a deep learning system and did this thing, or I modified the loss function in this incremental way and got this result. We're incentivized to do incremental work that can show some progress, but those really transformative ideas, like the neural network work of 20 years ago, still need support. Those wacky, crazy ideas that we don't think are going to work may end up working. The basic idea is, let a thousand flowers bloom. Sometimes when people think about DEI, diversity, equity, and inclusion, they think it's only about admissions in higher education, only about including women and underrepresented minorities, but that's not true. DEI is an investment strategy. If you don't believe that DEI is an investment strategy, I would love to sell you some of my Bernie Madoff stock and my Bear Stearns shares from 2007, right? What you do is create a diverse portfolio that you invest in.
These are ideas, these are people you want, and so you have to have that inclusive mindset: you create a diverse portfolio, and you watch it grow over time with equity. That really is something you should think about for the people coming into higher education, but you should definitely think about it for the federal scientific research portfolio.

Hi, thank you for the wonderful talk; it really did give me some new perspectives to think about today. I'll preface my question by saying I'm not a computer scientist. When we talk about AI, there's a lot of discussion about bias in AI systems, and one of the first things people point to is bias in the data used to train AI. But I'm wondering whether there's anything within the system design itself, something we're currently failing to look at, where bias could creep in.

Yes. I'm an academic, I don't actually build anything real, so I can't fully answer that for you. But what we do know is that diverse teams make better products because they bring more perspectives to the table. Especially when you're trying to mitigate AI bias, you need a diverse collection of stakeholders and a diverse collection of developers. You need not just coders who think only about making tech, but people who represent design, human factors, cognitive science, and economics. That larger intellectual diversity, those different perspectives, are the key to making better products at the end of the day.

Okay, these are the last few questions. Did you want to ask a question? All right.

Right, so the split of men and women in the class roughly represents the gender breakdown we see at Michigan Engineering: it's usually about 75% male, 25% female. Some semesters I'll have more women, closer to a 60/40 ratio; other semesters I will have only one woman in class. As for the power requirements, I don't know the exact numbers, but for the neural networks it's big. I can say that we have a GPU machine in our lab, and when we start training a model the temperature in the lab goes up several degrees, and I have to walk out because that machine gets pretty hot.

A question about the mistakes that AI makes, especially the example you talked about of identifying the wrong person. Is that because the AI gives only a single prediction, or is it that humans are not double-checking, or that the AI is not double-checking, or that it's not offering multiple options?
All of the above, plus we don't know what's happening inside the network. At the end of the day it's a statistical regression; it's the correlations in the data. It could be that we didn't collect enough data. If you could collect all possible data and all possible outputs, then you could have the answer to almost anything, assuming you have the right scoping of your problem; if you did all of that perfectly, you could have the perfect predicting machine. But then there's uncertainty: uncertainty in context, maybe sensor noise. Can we characterize all the different factors that go into making decisions? If we could do all of that and collect that data, then sure, I guess we could have that system, but uncertainty is all around us. As roboticists we understand that uncertainty is just a part of the physical world. So all the things you mentioned, plus data collection, plus the sensor modalities we have, are going to be factors. Being able to get multiple options on the output, with confidences, is one possibility; we usually just take the best answer and say that's the right answer, and that's one of the ways we will have to adjust AI for the future.

That brings me to my question, which I've been thinking about all the way through this, about cohabitation. We human beings are messy; that's the problem with autonomous cars. If we were really totally predictable, then maybe autonomous cars would not run us down, but we're not. When I grade my students' work, I have these lists of mistakes they make, and they always make new ones. So how do you see the human unpredictability problem being worked out? It concerns me that we will be the ones who get regulated.

I think that is a possibility, but I think what's really going to happen instead is that, rather than treating these AI systems as the decision-making systems that we then have to adjust to, we will treat the AI systems as a suggestion, and we will still make the decisions. Think of it as supervising a number of AI systems that are working for you to do something. Take the example of autonomous cars. There are levels of driving autonomy, from level zero, which is what you have right now, where you do all the driving, no cruise control, you're doing everything, all the way up to level five, where the car does everything on its own. It's Knight Rider; I don't know if people still know what Knight Rider is, but it's Knight Rider. Usually what we see is something that looks more like a level two system: cruise control, maybe something giving lane guidance, keeping you within the lane; if you veer out of the lane it might start beeping at you, or start nudging the controls a certain way; the auto-park feature. So I'm getting features that are autonomous, or I might be getting suggestions from the car about things I'm doing, but I still make the decisions. There may be a time when I let go of the steering wheel, but I still have to supervise all of the systems.
I can't sit there eating cereal and reading the newspaper, because what this really does is turn the driver's seat into more of a cockpit. With commercial airline pilots, most of the flight is done autonomously; they have to step in in certain cases, and there are certain things they still do manually, but mostly they're monitoring and making sure everything works, and when you have that Captain Sullenberger moment, they are ready to step in and grab the throttle.

Last question. Go ahead. No, go for it.

The funny thing is that the last few questions together set up my question, which is basically why it is that I hate flying, and yet my impression is that the overwhelming majority of the flight is done by the autopilot, and they're promising autonomous takeoff and landing within a few years. So I have already been trusting computers with that, and yet I don't trust my Tesla Autopilot right now. I wanted to ask whether the difference is that the risk environment in the sky is less dense with risks than the natural risk environment on the streets, or whether, as Leo was suggesting, humans are just so squirrelly that car autopilots are not yet clever enough to anticipate the millions of bad human decisions along the way.

The reality is that commercial airspace is highly regulated. If you fly drones and you're doing it right, you're not really supposed to just go up and fly something; you've got to stay under 400 feet, because for everything else they know what plane is going to be in what area of space at what time, and that's not something we have on the public roadways. Now, there is a scenario where they start putting more structure onto the public roadways so that autonomous cars can operate; that will restrict the things we can do as drivers, but it will also enable robotic cars. But let me offer another example: email. Selma, Joanna, Katie, David, they'll all tell you I'm not good at responding to email. Every email is handcrafted with the exact thing I want to say, because I'm really afraid of saying something bad and getting in trouble. But if I look at my daughter and my kids' generation, they're using GPT to both synthesize the messages and send them. They may not be writing messages; they may be supervising a dialogue system that sends and receives messages. And think about that for all the different things that you do, your calendar, the types of work you do: you will be supervising a lot of automated systems rather than doing the work yourself. That is one thing you might see in the future of work. Even right now, when I think about my car and I drive it, I am making suggestions to the car. It's not the old kind of car where I turn the wheel and directly move the axle; it's all electronic, and when I turn the wheel it's an electrical signal suggesting to the car that it should turn or go forward. So that is really where I'm probably falling behind, because I'm not ready for more automation and I'm not ready to trust it. But the next generation is, and they will be able to do things much faster, though probably with more errors.

Thank you very much for being here, and also, of course, to Dr. Chad Jenkins for sharing some of his magic with us. Thank you, Chad. Thank you so much.
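Editor's note: a minimal sketch of the point made earlier in the Q&A about returning several candidate answers with confidences, and deferring to a person when no answer is confident, rather than asserting a single "best" match. This is not code from the talk; the scores, labels, and review threshold below are hypothetical and chosen purely for illustration.

    import numpy as np

    def softmax(scores):
        # Turn raw match scores into a probability-like distribution.
        z = np.exp(scores - np.max(scores))
        return z / z.sum()

    def rank_candidates(scores, labels, k=3, review_threshold=0.8):
        # Return the top-k candidates with confidences, plus a flag that
        # sends the result to a human when the best answer is not
        # confident enough, instead of reporting a single "match".
        probs = softmax(np.asarray(scores, dtype=float))
        order = np.argsort(probs)[::-1][:k]
        candidates = [(labels[i], round(float(probs[i]), 3)) for i in order]
        needs_human_review = candidates[0][1] < review_threshold
        return candidates, needs_human_review

    # Hypothetical match scores for four enrolled identities.
    scores = [2.0, 1.7, 0.4, -1.2]
    labels = ["person A", "person B", "person C", "person D"]
    print(rank_candidates(scores, labels))

With these made-up scores the best match is only about 50% confident, so the system would surface several candidates for a person to check rather than asserting a single identification, which is the kind of adjustment to AI outputs described in the answer above.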