American Museum of Natural History 2016.4.1---AMNH SciCafe: Autonomous Aerial Robots



Published: April 1, 2016
Autonomous aerial robots, commonly referred to as drones, could soon be used for search and rescue, first response, and precision farming. Join roboticist Vijay Kumar, dean and professor of engineering at the University of Pennsylvania, as he describes the advantages and the challenges of coordinating and controlling teams of small robots.




0:09>>Vijay Kumar: So unmanned aerial vehicles. We've been working 0:11on this for about 15 years now, 0:15and I want to show you this picture which illustrates 0:18the number of unmanned aerial vehicles and how it's grown over the years. 0:22This is a picture in 2010, and the smaller vehicles that I'm going to talk to you about, 0:27we started playing around with these in 2005. 0:31You can see the exponential growth in these vehicles. 0:34In the 1980s we did not have commercial GPS, and therefore 0:40it was really hard to develop autonomous flying robots. 0:44So in 2010, people said, well, this is going to be a $10 billion industry 0:48and people projected all kinds of uses. But it was primarily military uses. 0:52It was about surveillance, about spying, force protection, warfare, 0:57all the kinds of things that many of us are not really interested in. 1:01Certainly, from a scientific standpoint, these don't present the challenges 1:05or the potential for impact; no pun on that word. 1:09The FAA famously predicted that we'll have 15,000 civilian drones by 2020. 1:15So fast forward for five years, and now they are all over the place. 1:19I don't have to explain to anyone what an unmanned aerial vehicle is 1:23or what a drone is. And going back to the number 15,000, 1:2815,000 drones were sold in a single month last year. 1:34And people predict, estimate that over a million drones were sold 1:37just in December in the Christmas season. 1:40So it's a $15 billion industry already, and again, 1:43people make famous predictions that it's going to grow to $20 billion, $25 billion by 2020. 1:48And you know that that number is also going to be wrong. 1:51But what's exciting now is that the applications have grown to areas that we never imagined. 1:57So agriculture, inspecting different aspects of our civilian infrastructure, 2:04border patrols, photography, construction and so on, so forth. 2:09So it's really grown to be a very exciting field. 
2:11Of course, different people call them different things. 2:14We prefer to use the word aerial robots, because robots we think of as being smart, 2:20able to make their own decisions and so on. 2:22The military calls them remotely piloted vehicles, because in fact, 2:26they're not drones. There are human beings that are controlling these vehicles 2:30every step of the way. So it's actually a misnomer to call them drones. 2:34But of course, the popular press and all of us call them drones. 2:39So to me, they're all now pretty much the same thing. 2:44It seemed appropriate to show this picture in this museum. 2:48You think about the evolution of aerial robots, and we are just starting. 2:54And really, we want to be further along. 2:57In my lab, we look at what I call the five S's of aerial robotics. 3:03So the first S is we want to make them small. 3:06If you want to navigate an environment of humans, you want to be small. 3:10You want to be able to maneuver. You want to go through doorways. 3:13You want to go across rubble in buildings. So we are looking for making them small. 3:19We also want to make them safe. 3:21Clearly, we don't want something banging into humans causing harm, 3:27so therefore we're trying to make them as safe as possible. 3:30Smart, this is an obvious thing. If you're building a robot you want it to be smart. 3:34We also want these robots to move quickly. 3:37So we want to create robots that you actually have to slow them down 3:40to actually see what happens, just like NFL replays. 3:43So those are the kinds of robots we're shooting for. 3:46And then finally, we think about swarms, so that's the fifth S in our vernacular. 3:53So having just given you a flavor of the kinds of problems we're interested in, 3:57I want to tell you a little bit about how we think about autonomous control. 4:01First, you're trying to control something which really lives in six dimensions. 
4:07There's three positions and three orientations that you have to simultaneously control. 4:12And a robot like this has four rotors, and you only have four inputs. 4:17You have four motors and you're trying to control with these four motors 4:22six different things. The system is under-actuated. 4:26It's sort of an unfair problem mathematically because you're trying to do six things with 4:30only four inputs. 4:31So these robots are called quadrotors because they have four rotors. 4:36Even if you add more rotors you are fundamentally under-actuated. 4:39And so we spend a lot of time just attacking this problem. 4:43The second thing we do is think about how to 4:45design software that runs in real time. So I'm showing you a picture here. 4:50I'm just trying to impress you with a block diagram, 4:52but the thing I want to get across to you is the fact that you have these feedback loops. 4:58So if you look at the innermost feedback loop you will see that

5:01it operates at roughly a millisecond. 5:04That means every millisecond the robot is estimating its rotation, 5:09its attitude in the real world and its angular velocity, 5:12and trying to regulate that to get the precise orientation it wants. 5:17So then the intermediate feedback loop in the middle, 5:20you're feeding back position and velocity, 5:23and that operates roughly at 10 milliseconds. 5:26And then finally, the outermost loop, you're thinking about trajectories in the real world 5:30and how to plan trajectories, and that operates roughly at 100 milliseconds. 5:35So those are the three levels of intelligence that need to be built into the system. 5:41And although all of our students are engineers, 5:43they spend 80 percent of their time thinking about software. 5:46So therefore, they also become computer scientists in this field. 5:50The most critical things are the computations that happen onboard. 5:55These are the orientation calculations to figure out how to control the orientation. 6:00Some of the other computations actually don't happen onboard. 6:04So if you look at the position, sometimes we get estimates of positions 6:08from external cameras, and those computations can actually happen off board. 6:13In fact, a lot of the trajectory planning software 6:15that we write and test in the lab happens on a laptop like mine. 6:20Today we hear a lot about cloud infrastructure. 6:22Well, we use the cloud infrastructure to run real time control loops 6:28for vehicles like this as they maneuver through the environment. 6:32We try to make these robots as small as possible. 6:34This is work of Yash Mulgaonkar. This is the smallest robot we've built. 6:38It's only 11 centimeters tip to tip. It has a max speed of about six meters per second. 6:44And in terms of the size and the velocity, in terms of body lengths per second, 6:50it is equivalent to a Boeing 787 flying 50 times the speed of sound. 
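The three nested loops described above (attitude at roughly 1 ms, position and velocity at roughly 10 ms, trajectory planning at roughly 100 ms) can be sketched as a rate-divided control hierarchy. The 1-D toy dynamics, gains, and function names below are illustrative assumptions, not the lab's actual flight code.

```python
# Sketch of three nested feedback loops: attitude control every 1 ms,
# position/velocity control every 10 ms, trajectory planning every 100 ms.
# The 1-D dynamics and gains are toy values chosen for illustration.

def plan_trajectory(t):
    """Outermost loop (~100 ms): produce a desired position (hypothetical)."""
    return 1.0  # constant 1 m setpoint for this sketch

def position_controller(desired_pos, pos, vel):
    """Middle loop (~10 ms): PD law turning position error into a desired attitude."""
    kp, kd = 2.0, 1.0
    return kp * (desired_pos - pos) - kd * vel

def attitude_controller(desired_att, att):
    """Innermost loop (~1 ms): P law turning attitude error into a motor command."""
    return 5.0 * (desired_att - att)

pos = vel = att = 0.0
desired_pos = desired_att = 0.0
dt = 0.001  # 1 kHz base tick

for tick in range(20000):  # simulate 20 seconds
    if tick % 100 == 0:
        desired_pos = plan_trajectory(tick * dt)                  # ~10 Hz replanning
    if tick % 10 == 0:
        desired_att = position_controller(desired_pos, pos, vel)  # ~100 Hz
    motor_cmd = attitude_controller(desired_att, att)             # ~1 kHz
    # toy dynamics: attitude follows the motor command; attitude drives acceleration
    att += motor_cmd * dt
    vel += att * dt
    pos += vel * dt

print(round(pos, 2))  # settles near the 1.0 m setpoint
```

The point of the sketch is the rate separation: the fast inner loop stabilizes attitude so the slower outer loops can treat it as something that simply tracks their commands.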
6:56So it actually flies pretty fast for something this small. 7:00By making it small, we automatically make it safe. 7:04And by making it small, we also make it more maneuverable, as I'll explain in a minute. 7:10The inspiration from this really comes from nature. 7:13So if you look at honeybees, for example, they're extremely small. 7:16They're very maneuverable. In fact, unlike all the big robots we build, 7:21they don't even think about avoiding collisions. 7:24So for us, the nightmare is, we build this big behemoth, 7:27and the first thing we think about is, we don't want to bump into other features 7:31in the environment, bump into each other. 7:33Well, you look at these honeybees and they love collisions, because by colliding, they 7:39learn. 7:39So by contacting your neighbor you actually know who your neighbor is 7:42and you know a little bit about your environment. 7:45So we'd love to be able to create robots of this scale. 7:49In our lab, we actually think about how to scale things down. 7:56Here you see work of Yash Mulgaonkar. And that's the sound of the robots. 8:01You can see that one of the things you notice is that when the robots become small, 8:05they're able to respond more quickly to perturbations. 8:08In fact, we can show through scaling laws that the maximum acceleration 8:13you can get goes as one over the characteristic length. 8:17In other words, you make a robot half the size and their ability 8:21to maneuver in the rotational direction doubles. 8:24Likewise, the robustness, which we call the basin of attraction, 8:28grows dramatically as you shrink the size of the vehicle. 8:32This might be counterintuitive. All of us who fly on large aircraft 8:36prefer those to smaller turboprops, but the turboprops or the large aircraft 8:41are never subject to perturbations like this. 
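The scaling law mentioned above can be checked with back-of-the-envelope numbers: thrust grows roughly with rotor disk area (L squared) while mass grows with volume (L cubed), so achievable acceleration goes as 1/L. The constant and the lengths below are illustrative, not measured values from the lab.

```python
# Hedged sketch of the 1/L scaling argument from the talk.
# thrust ~ L^2 (rotor disk area), mass ~ L^3 (body volume),
# so max acceleration ~ L^2 / L^3 = 1/L. k is an arbitrary constant.

def max_acceleration(length, k=1.0):
    """Peak acceleration for characteristic length L (arbitrary units)."""
    thrust = k * length ** 2   # rotor disk area scales as L^2
    mass = k * length ** 3     # body volume (hence mass) scales as L^3
    return thrust / mass       # net scaling: 1/L

a_full = max_acceleration(0.22)   # a hypothetical 22 cm robot
a_half = max_acceleration(0.11)   # half the size, like the 11 cm robot in the talk
ratio = a_half / a_full
print(ratio)  # halving the size roughly doubles the achievable acceleration
```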
8:44If you want to respond to collisions and react to them and be robust to them, 8:48you really want to think about sizing things that are much smaller, 8:50and that's what we try to do. 8:53Anecdotally, when we first started working on this we realized we needed to have 8:56a first aid kit, so we started buying Band-Aids and things like that. 9:01If you plot a histogram of Band-Aids over the years, now it's tailed off 9:05when we moved to these little guys, people don't get hurt, which is great. 9:08And this is, again, 1/20th speed, probably the first mid-air collision - 9:13planned mid-air collision, and vehicles bumping into each other and recovering from these 9:19collisions. 9:20These robots are traveling at roughly two meters per second, walking speed. 9:24So imagine one person standing still and you walk right into that person. 9:28You feel the impact. Well, these robots feel it, but 9:31they're able to recover from it quite spontaneously. 9:35So that's the advantage of size. In terms of figuring out how to plan these motions, 9:42we think a lot about how to represent the dynamics of these vehicles. 9:47So if you look on the right hand side, there's this huge vector of things that the robot 9:51stores- 9:52its position, its velocity, its rotation, its angular velocity. 9:58And we think of clever ways in which to abstract from this a smaller dimensional representation,

10:05which consists only of the position and the orientation, the heading angle, 10:10much like you would when you drive a car. 10:12When you drive a car you think of your position in the road 10:14and you think of the yaw angle of the car, and that's roughly what you see in this left 10:22hand side picture. 10:23If you work in the smaller abstraction, 10:26then you can think about planning trajectories in that space that are safe, 10:31and then some fancy mathematics that ensures that these trajectories are as smooth as possible. 10:37So again, the intuition is, if you have lots of inertia, 10:41you don't want your trajectories to be jerky. You want them to be smooth. 10:45We try to minimize what is called a snap, which is the fourth derivative of position 10:49over time. 10:50So the derivative is velocity, second is acceleration, third is jerk, fourth is snap. 10:56You can also do crackle and pop, but we don't. 10:58But we try to minimize that and then find the right trajectory. 11:01So that's the essence of the planning problem. 11:05Once you do that in the simpler space, which is the problem in computational geometry, 11:10then we transform this over into this more complex space and then we execute them. 11:14And that's basically what you see in these videos. 11:16You see the robot going through these planned obstacles. 11:21So if the robot knows where the obstacles are, 11:25it can plan these minimum snap trajectories at a fraction of a second, often 20, 30 times 11:31a second. 11:32And it doesn't matter if the obstacle is moving. 11:34If the robot knows how the obstacle is moving, 11:36it can determine how to plan trajectories to go through the obstacle. 11:41So this is the bread and butter for all planning algorithms that we use. 11:42Some of you may have seen videos of birds fishing to catch their prey, 11:47and this is amazing. Look at this bird coordinating its flight, its vision and so on, 11:53and its claws. We try to do the same thing with robots. 
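The minimum-snap idea above can be illustrated on a single rest-to-rest segment: with position, velocity, acceleration, and jerk all pinned at both endpoints, the optimal trajectory is the unique 7th-order polynomial meeting those eight boundary conditions, which on the unit interval is x(t) = 35t^4 - 84t^5 + 70t^6 - 20t^7. This is a hand-worked sketch of the derivative chain (velocity, acceleration, jerk, snap), not the lab's planner.

```python
# Minimum-snap sketch for one rest-to-rest segment on t in [0, 1].
# Coefficients are listed lowest order first.

def polyval(c, t):
    """Evaluate a polynomial with coefficients c (lowest order first) at t."""
    return sum(ck * t ** k for k, ck in enumerate(c))

def deriv(c):
    """Coefficients of the derivative polynomial."""
    return [k * ck for k, ck in enumerate(c)][1:]

x = [0, 0, 0, 0, 35, -84, 70, -20]   # x(t) = 35t^4 - 84t^5 + 70t^6 - 20t^7
vel = deriv(x)      # first derivative: velocity
acc = deriv(vel)    # second: acceleration
jerk = deriv(acc)   # third: jerk
snap = deriv(jerk)  # fourth: snap, the quantity being minimized

# position moves from 0 to 1 while velocity, acceleration and jerk
# all vanish at both endpoints, which is what makes the motion smooth
print(polyval(x, 0.0), polyval(x, 1.0))
print(polyval(vel, 1.0), polyval(acc, 1.0), polyval(jerk, 1.0))  # all ~0
```

Chaining the segment polynomials over many waypoints, with continuity constraints at the joints, gives the kind of smooth multi-segment trajectory shown in the videos.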
11:56So here's a robot fishing for Philly cheesesteak hoagies, 12:03and it's able to pick that out. So again, we focus on the split second timing, 12:08coordinating vision, coordinating arms, coordinating hands, 12:12and flight as you fly through complex environments. 12:16Then finally, work of Sarah Tang, where she's able to use this framework 12:21to think about transporting suspended payloads 12:24whose length is more than the height of the window. 12:28So you have to figure out how to get the momentum of the object to be such that 12:32the suspended payload swings through first before the robot actually goes through it. 12:37So these calculations look complicated, 12:40but by abstracting the dynamics to the simpler space, 12:43we're able to solve this in real time and then feed it to the robot. 12:47And then lastly, this problem of trying to perch in complex environments. 12:53Again, you want to perch to save energy, to rest. 12:57And the challenge for us is to perch on vertical surfaces. 13:00So we have a gripper which is made out of a dry adhesive. 13:05I call these the Spiderman claws, and they're able to hold onto flat surfaces; 13:12a gripper designed by colleagues at Stanford in Mark Cutkosky's lab. 13:16And again, this framework allows us to land on any vertical surface, 13:20or any tilted surface for that matter, at just the right velocity to achieve perching. 13:26So we're able to get autonomy in a wide variety of settings. 13:30Not just in flight, but also perching, grasping and things of that nature. 13:36I want to tell you a little bit about the problem of state estimation. 13:39Everything I've shown you thus far we have cheated. 13:42We have cheated in the following way. 13:45In the lab, our robots are equipped with motion capture cameras and reflective markers. 13:52So the cameras see the reflective markers and they compute the position 13:56of the robot a hundred to two hundred times a second, 14:00and then deliver that information to the robot. 
14:02The robot knows where it is at all times. It's like having GPS on steroids. 14:04You know exactly where you are, and unlike in the city when you're going around 14:05and you lose GPS, here you never lose GPS. 14:05So this gives the robots an unfair advantage and they're able to do all the things 14:09that you just saw with amazing precision. 14:12In the real world, and here's a typical building on the Penn campus, 14:17it becomes really challenging. Without external cameras you don't know where you are. 14:22In fact, in this building, GPS doesn't work. My cellphone doesn't work. 14:26We barely get Wi-Fi coverage. 14:28So how do you get robots to localize in complex environments like this? 14:34So we work a lot on this problem, and I want to show you a prototype 14:38that was built by a former student who is now a professor 14:40at Hong Kong University of Science and Technology, Shaojie Shen. 14:45The system he built consists of two forward facing cameras, 14:49and you can see the GPS receiver on top. You can see a laser scanner, 14:53which is this orange band on the top. And then there's a downward facing camera, too, 14:58which you don't see. 14:59So this package allows the vehicle to sense features in the environment

15:05and determine where it is relative to those features. Then as it moves, 15:10much like humans, when we walk we're looking at things in the environment. 15:14We take steps. We know roughly how far we've walked. 15:18And then we look at how these features are flying past our retina. 15:21We integrate that motion to then figure out where we are in the real world. 15:25And that's what this robot is able to do. 15:28Here you'll see work of Sikang Liu that essentially takes this information 15:33coming from the robots and is able to construct three dimensional maps. 15:39This is just outside our lab. You can see it build high resolution maps 15:44at five centimeter resolution, and how it's entering the lab as you can see, 15:49with all the clutter - obviously our lab. And at the bottom, you see the map that it's 15:53building 15:53and you'll see that the color of the objects that it sees is overlaid on this map. 15:59So this is now leading to "smart." And it's not really smart in the sense that 16:06it's not making any intelligent decisions in this particular experiment. 16:10But it's smart enough now that it's able to perceive the environment 16:13and represent it in terms of this three dimensional map. 16:16Which is a great starting point. Imagine being outside a building and then 16:20deploying the vehicle inside the building where you have a complete picture 16:24of what's inside the building. You know something about its structural integrity. 16:28If there's an active shooter in the building you can probably detect that shooter. 16:32And if there are victims in the building you can localize 16:34and tell rescue workers where they are. So this basic technology, 16:38while it might not appear to be very smart to us, is actually smart enough to do lots 16:44of useful things. 16:46Here's the same type of technology, an outdoor flight. 16:50Many of us have now heard about Amazon and Google 16:53wanting to deliver packages to our doorstep. This, in theory, works. 
16:58It works when you are flying at let's say 400 feet, just at the FAA ceiling, 17:04where GPS is clear and you're relatively unobstructed. 17:09But what happens when you get to features such as trees, where your GPS might not work? 17:15And in our case, when you have cameras, 17:17the cameras might not have enough illumination to function. 17:20So we look at combinations of sensors, as you see on the top left. 17:26And at every instant, the robot is able to estimate its error. 17:30So if you see that ellipsoid in the middle, this is not unlike the ellipsoid you see in 17:34your Google maps 17:34which tells you the error in your position. 17:37So the vehicle not only calculates where it is, it's also able to tell the software 17:42what its estimate of the error is. And as it goes around this complex, 17:47indoor and outdoor, using lasers indoors where cameras and GPS don't work, 17:53and outdoors in bright sunlight where the camera doesn't work but maybe GPS works, 17:58it's able to navigate its way through a fairly complex environment. 18:01So this is a half a kilometer flight at roughly walking speed, 18:05and it's able to do all of this autonomously. 18:08So this is a very important technology as you sort of get close to human-built environments, 18:16like buildings or trees that just happen to grow since the last time you were there. 18:20So you need to detect that. You have to react to it and then behave in a safe way. 18:26So one problem that we run into is that these vehicles burn a lot of power. 18:32So if you look at rotor crafts, they burn roughly 200 watts per kilo. That's a lot. 18:40That's like four light bulbs for every kilo of payload you carry. 18:45And part of the problem is that all this hardware I've shown you is actually quite heavy. 18:50The cameras are about 80 grams, the laser range finder that I showed you is about 370 18:56grams. 18:57Our Intel processor on the board is about 200 grams. 
19:00So you add all of this up, not only are you burning power to power the devices, 19:06you're also burning power just to carry these devices. 19:10So a big challenge for us is to actually limit the power consumption. 19:15If you don't limit the power consumption you have to carry bigger batteries, 19:17and if you carry bigger batteries that's extra payload and you're burning even more power. 19:24All the things I've been telling you about, being small, 19:26being safe, that goes right out the window because your devices keep getting bigger. 19:32So this is a big challenge for us. But consumer electronics sometimes comes to the rescue. 19:39So if you ask yourself the question, what is an inexpensive device that you can buy 19:43today 19:44that has sensing and computing in a lightweight package and low power, 19:50of course it's your smart phone. So we started asking the question, 19:54could we build something that's powered exclusively through smart phones? 19:59So we came up with this idea of a "phlone". So you buy an off-the-shelf,

20:07in this case Samsung Galaxy S5 phone, and you download our app. 20:11And then you buy a USB cable -and make sure the USB cable is as small as possible 20:15because you want to limit the weight- and you plug it into a drone. 20:20This just happens to be the robot that we built, but it will work with most drones. 20:23Then you can actually power the device using a smart phone. 20:27So I want to show you - this is Giuseppe Loianno's work, and show you the robot that he built, 20:34where this phone is actually taking pictures of everything it sees in the environment 20:3930 times a second, calculating features in the environment, estimating distance to the 20:44features, 20:45and from that, estimating its position. 20:47So all the computation and all the sensing is done onboard using the phone's camera, 20:53the phone's processor and the phone's inertial measurement unit, 20:57which is basically a system of accelerometers and gyros that measure accelerations 21:01and angular accelerations. 21:03So this is in collaboration with Qualcomm, but you can see three meters per second 21:06autonomous flight, all planned by Giuseppe through his software. 21:12And of course, you can get it to do whatever you want, and you can just imagine, 21:16you can take the mother of all selfies if you position it wherever you want. 21:22So this gives us some hope that you can actually build really lightweight devices 21:26with off-the-shelf hardware. So it's inexpensive, lightweight, and also safe. 21:34The other S-word I talk about is speed. So this is what we'd like to be able to do. 21:39This is actually being driven by an expert pilot. 21:43Imagine again responding to 911 calls and getting there 21:47and responding to things quickly, finding out where the bad guys are. 21:51We'd like to be able to do this autonomously. 21:52There's only one small segment of this, and I'll show you this in a minute, 21:57where the flight is autonomous. 21:59So this piece, flying down the hallway. 
This is about three or four meters per second, 22:04maybe a little more than that. So we know how to do that autonomously. 22:08But navigating these bends at high speeds and going up and down the stairs, 22:13these are things we're still working on. But that is something we'd like to do before this 22:19then becomes an effective tool that we might imagine using in search and rescue and first 22:24response. 22:25Finally, I'd like to talk a little bit about cooperative control, 22:30where we look at the problem of how to get all of these robots to collaborate 22:33and do something useful. And of course, once again, we're inspired by nature. 22:39So this is a picture with half a million to a million starlings off the coast of Denmark, 22:45and you can see them form these incredible patterns in the sky. 22:49To my knowledge, they don't use a whole lot of mathematics to do this, right, 22:52but mathematics is the tool that we have at our disposal. 22:55So it's a real challenge to be inspired by nature, and then work with tools 23:00that we know to create these kinds of behaviors. 23:04Instead of trying to mimic them, what we have tried to do is, instead, 23:07understand some basic organizing principles that we believe allows us to 23:12accomplish these kinds of movements. So it's not just about flight. 23:16We can see ants cooperatively carrying objects, and they carry this object back to their nest. 23:22They think it's food. The reason they think it's food is because this plastic object 23:26we created is coated with the juice from figs, so they think it's food 23:30and they carry it back to their nest. But this allows us to study cooperation. 23:36This is actually an elastic disk, so it allows us to see which ants are pulling, 23:40as you can see on the top, and which ants are pushing at the bottom. 23:45You can also see which ants are not doing anything. 23:48They're just goofing off and they're there for the ride. 
23:51But it's really intriguing how these ants spontaneously form teams 23:57and are able to accomplish these incredibly complex tasks. 24:00At least from a robotics standpoint, these are very complex tasks. 24:05So again, the organizing principle is, first, each ant, each bird acts independently. 24:11So we want robots to think about being completely autonomous and being self-contained. 24:16Second, we'd really like them to work with local information. 24:21There is no way in a room like this, if we had to make decisions, 24:25that we wait for consensus to emerge and do something as a group. 24:30Maybe that's what the government does today. That's why they don't do anything. 24:34But it's very hard to achieve that. So you really have to work based on 24:38what you know locally, and then act based on that. 24:43The third idea is also fairly simple. This notion of anonymity. 24:47We want individuals to be agnostic to who their neighbors are. 24:52So if you think about a completely altruistic society of robots, 24:57then the robots shouldn't care who their neighbors are.

25:00We want them to collaborate and be exactly the same way, 25:05independent of the specificity of who they're surrounded by. 25:09So we try to incorporate all of these elements into our software. 25:14You could see here that he's demonstrating the first idea. 25:18This is Katie Powers' work, where she has encoded these leader/follower behaviors into 25:23the robots. 25:25So the first robot is literally hijacked by David Pogue, and he is able to manipulate 25:31it. 25:32The other robots are basically responding to their neighbors. 25:36And they don't care that one of them has actually been lifted up by a human being and is moving 25:40it. 25:40They're just reacting to the position. 25:42The simple idea here is that a single individual can actually manipulate, 25:50maybe not quite a swarm, but in principle a swarm. 25:53So the control computations that have to be done don't scale with the number of robots. 26:00It's just the same computations you'd have to do if you just had a single robot. 26:03And then everything else follows, because every robot is following a leader, 26:03and then that robot has another leader and so on and so forth. 26:04The second idea is this concept of anonymity, Matt Turpin's work. 26:10And here you could see that the robots have been asked to form a circular pattern. 26:14They know the pattern that they have to form, but again, they are agnostic to their specific 26:21neighbors. 26:21They're agnostic to even the number of robots on the team. 26:25So as long as they know where the pattern has to be formed and what the shape 26:28of the pattern is, they're able to find their place, adjust their spacing 26:33with respect to their neighbors. 26:35And now we're beginning to see something that might resemble the pattern formation 26:40that we saw in the starlings. Admittedly, for a very simple circular pattern, 26:45but still, doing them autonomously without worrying about the number of robots on the 26:50team. 
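The anonymity idea above, where robots know the shape but not their place in it, can be illustrated with a toy assignment: each slot in the pattern is claimed by whichever still-unassigned robot is nearest. The greedy matching and the robot positions below are illustrative assumptions, not the actual assignment algorithm from the lab's work.

```python
import math

# Toy sketch of anonymous pattern formation: the robots are handed a shape
# (evenly spaced slots on a circle) with no names on the slots, and a simple
# greedy matching decides which robot fills which slot. Positions are made up.

def circle_pattern(n, radius=1.0):
    """n evenly spaced goal slots on a circle: the shape, with no names on it."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def assign(robots, goals):
    """Greedily pair each goal slot with the nearest still-unassigned robot."""
    unassigned = list(robots)
    pairs = []
    for g in goals:
        r = min(unassigned, key=lambda p: math.dist(p, g))
        unassigned.remove(r)
        pairs.append((r, g))
    return pairs

robots = [(0.1, 0.0), (-0.9, 0.1), (0.0, 1.2), (0.2, -1.1)]
pairs = assign(robots, circle_pattern(len(robots)))
for start, goal in pairs:
    print(start, "->", tuple(round(c, 2) for c in goal))
```

Because the slots carry no identities, the same code works unchanged for any number of robots, which is the point of the anonymity principle.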
26:52Then finally, you see some of these things put together where the pattern 26:56actually changes shape, starting with a rectangle, then into an ellipse, 27:00into a straight line, back into a circle. In all of these computations, 27:06a programmer is essentially telling the robots what patterns to form by giving the robots 27:13different shapes as a function of time. And the robots figure out which robot 27:18needs to be where in order to describe the shape and they adapt to the commands. 27:23So you could see how these kinds of algorithms might be used now 27:26for half a million robots if we had them. And if there was a place 27:30we could do these kinds of experiments, 27:32these algorithms would scale to those large numbers. 27:36I want to talk a little bit about why we're doing what we're doing, 27:39besides creating these cool videos and publishing them on YouTube. 27:42And of course, everybody loves those, but ultimately, 27:44we're interested in solving some real problems. 27:48The first problem area that we're very excited about is agriculture. 27:54If you look at the challenges facing society, you quickly come to the conclusion that 27:59water and food, and actually these challenges are related, 28:03are our number one challenge. The efficiency of almost all production systems in the world 28:09has gone up over time, but for food it's actually going down for a variety of reasons. 28:14So one thing we're really interested in is trying to see how we can use robots 28:18to monitor and tend crops. Here's our robot flying in an apple orchard 28:23carrying all kinds of sensors. And they're able to, in this environment, 28:29do fairly simple things. On the bottom left, they're gathering infrared information. 28:35On the bottom right, they're building three dimensional maps of apple trees. 28:39And in the center, they're computing an index called NDVI. 28:43So each of these pieces of information is useful in order to assess the health of a 28:49plant. 
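NDVI, the index mentioned above, is itself standard: the normalized difference of near-infrared and red reflectance, computed per pixel. Healthy vegetation reflects strongly in the near infrared and absorbs red, so vigorous plants score close to +1. A minimal sketch, with made-up reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), the standard vegetation index the
# talk's crop-monitoring robots compute. Reflectance values are illustrative.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel (range -1 to +1)."""
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))  # vigorous canopy: high NDVI
print(round(ndvi(0.30, 0.25), 2))  # stressed or sparse vegetation: low NDVI
```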
28:49So for instance, if you know something about the size of the plant, 28:53if you have a three dimensional map, you can fly by that plant week to week 28:57and measure the state of growth, and you can estimate how healthy it is. 29:02If you look at this NDVI, the central thing, that essentially tells you something about 29:06the vigor of the plant. Something even more basic, flying past these plants 29:10we can count apples and we can estimate the yield on the plants, 29:14and that helps them plan for downstream picking, harvesting and then shipping; 29:19something quite basic that every manufacturing facility has, but farmers don't have. 29:26Another thing, and this is Kartik Mohta's work, we're working on 29:29is this notion of robot first responders. So imagine you have a 911 call 29:34from a building. You can imagine a swarm of robots equipped with cameras 29:39getting to the building and surrounding it long before search and rescue workers 29:45come to the scene, long before first responder police officers come to the scene. 29:50What we are really trying to do with these, on the top left you see 29:55the operator interface, what the dispatcher might see before he or she even reacts to

30:00the 911 call. 30:02The robots are surrounding the building deciding who takes up 30:04what position around what ingress or egress point, 30:08all the time assimilating information and building a mosaic, 30:12as you see on the top right, and a three dimensional map on the bottom. 30:16So now, if a police car were to drive up to the scene, 30:20they would be equipped with all this information before they even get there. 30:24And they would know what to do before they got there. 30:27This is a very important tool in operations, 30:31where oftentimes speed of response is so critical. 30:36This is not true just for outdoor operations, but also indoor operations. 30:40I want to show you some experiments we did. 30:43This was about five years ago, after the Fukushima earthquake. 30:48This was in a town not too far from Fukushima where our aerial robot is hitching a ride 30:55on one of our Japanese colleagues' ground robots. And by the way, 30:59the reason it hitches a ride is because our robots are programmed to be lazy. 31:03They burn a lot of power, so anytime they can ride on top of something else, they do. 31:07But you can see in this collapsed doorway they quickly realize that the team 31:13cannot go through. So the aerial robot takes off, is able 31:16to cross over the bookshelf, see what's on the other side, 31:20all the time creating a three dimensional map. 31:22And this kind of information then can be made available to somebody 31:26who is standing outside the room or outside the building, 31:29and providing valuable information in terms of the structural integrity of the collapsed 31:35building, 31:35in terms of potential victims, and assessing the state of the building. 31:41In this particular experiment, again, this was five years ago - 31:44we were able to build three dimensional maps. 31:49And this is three stories - the seventh, eighth and ninth floor of a nine story building. 
31:56So the map is a five centimeter resolution map, and this took a long time to build. 32:01This experiment lasted about two and a half hours, and that's one of the challenges of 32:05robotics. 32:07If I tell a search and rescue worker or a first responder that I want you 32:11to give me two and a half hours so I can go into this building and give you this 32:13wonderful map, nobody is going to give me that time. I'll be lucky if they give me two 32:18and a half minutes, or maybe two and a half seconds. 32:22That's where this idea of swarms comes in. We really want systems 32:26that can go in really quickly, collect the data, and by the time they come out, 32:30they've assimilated this information and built a three dimensional map. 32:34And that's the kind of thing we're shooting for. 32:37So let me just conclude with a poster of a Warner Brothers movie 32:43called The Swarm. 32:44Actually, some of you might be old enough to actually remember this movie. 32:48Has anyone seen it? If you've seen it, you probably know 32:51that you won't recommend it to your friends. It's actually a terrible movie. 32:54It's about killer bees that attack mankind and so on. 32:59But I love the poster because everything about this poster is true. 33:02The size is immeasurable. I hope I've convinced you the power is limitless. 33:07Even that last piece, its enemy is man, which is true. 33:11We have the technology, and we have to find a way to harness the technology 33:15and use it in a way that could be beneficial to society and to mankind. 33:19So even that part is true. 33:21So thank you very much.



