Wes Bush, CEO of Northrop Grumman

Landon Lecture
Sept. 6, 2016

Thank you, Dick.

You know, I have to start by saying that I have known and admired Gen. Myers for several decades now. And Dick continues to serve as my reference point for that phrase "Great American." So KSU is so fortunate to have his leadership here.

Well I'm really delighted to be with you this evening. And I'm delighted to be back on this beautiful campus. I had the pleasure of speaking with your engineering department a little over three years ago. And I have to say during both visits I've been struck by how much purple I see. You know I have three kids and so I've done a lot of touring of universities and every university, of course, has its own colors. But some places you go you have to ask what they are. That is not a problem on this campus. The other reason I'm so pleased to be here is because speaking at the Landon Lecture Series is such an honor. And I really appreciate your invitation to speak during this, the 50th year of this series.

Now I'm an engineer by training and by profession, so I have a passion for the positive impact that technology can have on our world. And I'm privileged to work at what I believe is the most innovative and forward-leaning technology company in the aerospace and defense industry, Northrop Grumman. In my career I've seen incredible technological advances. And I think the record is pretty clear that our nation's defense community has contributed many innovations that have made our lives better beyond the domains of security. It's a long list: telecommunications satellites, global positioning — GPS, the internet of course, and even advanced medical prosthetics are a few examples.

And when you combine the technologies that have come out of the defense industry with technologies from other industries and from great universities like KSU, together these have made incalculable contributions to the advancement of the human condition: the mitigation of poverty through the creation of whole new industries; the expansion of health care access to people who, a few short years ago, would have had little chance for it; and perhaps most impactful of all, at least as far as the betterment of mankind is concerned, the democratization of knowledge by placing it at the fingertips of anyone with a smartphone or laptop almost anywhere around the globe.

Our world has an amazing innovation ecosystem. And that ecosystem includes great universities such as KSU. Universities have played an absolutely critical role in the development of all of these technologies. And universities will continue to be a core part of enabling this progress for humankind. These advances are built on a wide variety of technologies and also — and I think this is really important — the integration of these technologies into systems.

I think it will surprise none of you that a key thread common to many of these miraculous advances is computing power. We take the yearly increase of computing power almost for granted anymore. And, if you're like me, you often wonder what the next great revolution will be that computing power will enable. One direction that we are moving in — a direction that will perhaps, I think, have one of the greatest impacts on our lives and the world since the computer itself — is the advancement of autonomous systems.

Now some call this robotics, but I prefer the term autonomous systems because I think robotics has come to be misunderstood. Many people confuse systems that are remotely controlled with robotics. But true robots are not remotely controlled. Autonomous systems are able to perform their tasks without an ongoing connection to humans. And those tasks are becoming more and more complex, more and more indispensable, and more and more available to people every year. And I think that's why this is so interesting and why it is so important.

Apart from computing power, there have been several other developments that set the stage for this new age of autonomy. I mentioned GPS earlier — that is certainly one of them. Autonomous systems would have very limited utility without an ability to keep track of their locations in at least two dimensions — say left and right, or forward and backward. In the early days of GPS its precision was measured in feet. But today it's measured in inches, or sometimes even less. And that capability has proven instrumental in the advancement of autonomy. GPS and other precision location technology are largely what will allow these systems to move from the confines of the factory floor out into the real world, affording humans immense new benefit. We can already see it happening. Think of agricultural equipment, such as unmanned agricultural harvesters or, in the world I work in, pilotless aircraft.

Another critical advance is the explosive growth of sensor technologies and, concurrently, the miniaturization of these sensors. Today there are sensors that will tell you if the pipes in your home are freezing, if your tires are getting too bald, if your farmland is ready for planting, or perhaps if your autonomous system is straying away from its assigned duties. A biosensor that you swallow can send its data to your gastroenterologist monitoring his computer screen from anyplace in the world. And because they are so small, a sensor for every conceivable function or eventuality can be inexpensively connected into just about any autonomous system. The miniaturization of sensors reflects the miniaturization of electronics in general. In one lifetime we've moved from vacuum tubes to highly integrated circuit components that are nearly microscopic. This has made possible the creation of innumerable systems and it has also made innumerable others practical.

And then, of course, there is computing power, which I've already mentioned. Increasing computing power in and of itself is simply not adequate to enable what I see on the horizon. If we drill down even further to get at the real potential of autonomous systems we can see why. The real potential transcends simple autonomy and lies instead in cognitive autonomy. These are autonomous systems that act with the judgment and, ultimately, the ethics that we would expect from a human being performing the same function.

Let me give you a sense of how enormous the development challenge is for cognitive autonomy. As you know there are many companies here in the U.S. and around the world that are working to develop driverless cars. And the general public might assume that that's not too tough of a problem. You might think GPS combined with the right sensors and control standards will keep the car on course and will ensure that it doesn't hit the car in front of it. Sounds kind of simple, but let's think about the real world.

Let's say a person runs out in front of that car. And let's say that car is moving too fast to stop in time. So now the car needs to figure out which direction to swerve to avoid hitting the person. And — again, the real world — let's say that if the car swerves to the right it risks hitting other people on a sidewalk, but if it swerves to the left it risks hitting oncoming traffic. The systems controlling this car need to prioritize the risks, evaluate the potential harm of each option, and act on those evaluations. And these systems need to do it instantly, and the results must be at least as good, every time, as one would expect from the best human driver, and ideally even better. However you want to think about it, the actions of an autonomous car must reflect the same concern for human life as those of a human driver. That's cognitive autonomy. And ideally, such an autonomous vehicle would act without the judgment lapses or execution failures of a human.

So now scale up the computing power needed for that self-driving car operating in two dimensions, and let's add a third dimension: up and down. Imagine how much computing power is necessary for the safe operation of a pilotless airliner with several hundred passengers aboard. That's a much bigger problem than can be solved by a GPS receiver and a set of sensors, as critical to the solution as they are. But this is where today's revolution really begins, because it's a much bigger challenge than can be solved by computing power alone. So before I go any further, let me address what I think is a misconception — an understandable one, but a misconception nonetheless.

Non-technologists often presume that technology is constantly progressing with analytical continuity, where future results simply build on the results of the past. I think we all understand Moore's Law, that processing speeds have doubled every 18 months or so. I think this is one of the reasons why so many of us take technology's progress for granted. It's easy to presume that any computing-based problem can be solved if we're simply patient enough to wait for Moore's Law to catch up with our ambitions. But the development of cognitive autonomy is a very different animal. It isn't just about the progression of hardware capabilities. The necessary breakthroughs could not come from the simple advancement of computing power; something else is required — something that will allow a machine to learn. That something turns out to be algorithms. Now an algorithm is a very simple thing in concept. It would appear to be nothing more than a set of rules designed to allow a computer to solve some specified problem. But today's algorithms need to do more. They need to learn and apply that learning to the solving of unexpected problems. Think back to that driverless car example. Real operational environments present the inherent randomness of our world. These algorithms must be able to deal with unpredictability. That's not a linear progression of technology; that's what we call a breakthrough — and that breakthrough has been made.

Let me tell you about one of our systems, just as an example. A system called the X-47B. The X, many of you know, stands for experimental. This is a pilotless, unmanned aircraft flying on and off aircraft carriers, refueling in flight, and performing many other functions as well. Like the driverless car, the cognitive systems on this autonomous aircraft ultimately need to operate with judgment and reliability far better than the best conceivable human pilot. But unlike the car example, this aircraft must perform in three dimensions; at altitudes of tens of thousands of feet; at velocities of hundreds of miles per hour; and in highly variable and adverse weather conditions both in the air and maneuvering on the carrier deck. And it must operate in hostile environments where the enemy may be trying to shoot it down or jam it, or in today's world, perhaps even hack into it. And it must be able to do all these things all by itself, thousands of miles away from its launch point. And one last wrinkle just to make this even more interesting: This particular aircraft has to do it all without a tail — because if it has a tail, it wouldn’t be stealthy. And for those of you who know about airplanes, when you take that tail away, it makes it unstable — so it’s even more challenging to fly this thing.

There are simply too many variables to actively program into software. Only a machine that can learn, that can deal on its own with the unexpected, can meet these types of requirements. And I will tell you that landing is especially challenging. Now you have two bodies — the carrier deck and the aircraft — each moving in three dimensions and each with unpredictable movements created by the sea and the air. But the algorithms on the X-47B must enable it to land with high reliability. This isn't just a dream; this is a reality. This amazing aircraft is working. Its first takeoff and landing on the deck of an aircraft carrier occurred in 2013. It was a momentous step forward for autonomous systems. And it has continued to fly with extraordinary reliability. In fact, one telling measure of its reliability is this: on landing it touches down on precisely the same point on that pitching flight deck time and time again. So reliably, in fact, that the Navy has now asked us to program some randomness into its landing performance to keep that part of the flight deck from prematurely wearing out. It's been digging a hole in the flight deck — and it does this night and day, in good weather or bad weather, in high seas or calm seas.

Last year the X-47B managed a successful air-to-air refueling from a manned aircraft. In that achievement it had to compensate for three bodies moving in three dimensions — the X-47B itself, the manned refueling aircraft, and the refueling basket at the end of a long, flexible hose that the X-47's refueling probe had to come up and engage. Adding that third body to the equation really complicated the challenge by orders of magnitude. The cognitive systems controlling it need to be able to prioritize the importance of the particular mission, which is variable, against the risk to the human beings aboard the manned refueling aircraft. And those systems have to factor in the roughness of the air, the time that the refueling aircraft can spend on station trying to complete the procedure, the threats to the human flight crew from whatever dangers are present, and a whole host of other factors, which are changing from moment to moment. And with each change, the priorities of the other factors are affected in this nonstop domino effect. The challenge was immense and mere computing power alone was not enough; only a machine that could learn could do this.

Now defense and national security uses of this technology are very important. And we as a company are focusing a lot of effort on this. But quite frankly, I think those applications are small in scope relative to their potential to improve the human condition. Recall that the first use of rockets was as weapons, and now we use them to explore the solar system and our universe beyond. Autonomous systems, too, have a larger and more impactful future outside the defense arena than they do inside of it. I think you can get an inkling of the beginnings of this just by looking at what's going on in advanced manufacturing.

A couple of things to think about when it comes to the use of this technology in this class of manufacturing. First, the modern factory floor often looks more like a laboratory than the kind of traditional automobile factory that we probably envision. And second, the autonomous systems being used in modern manufacturing are less and less executing repetitive tasks, and more and more collaborating with human workers. Now they're still assembling large industrial objects like cars and engines, but they are also precise enough to assemble small electronics that could fit on a pinhead. They are more and more becoming cognitive, such that you may see a human worker actually moving the machine's arms through a task sequence to teach it what it needs to do — and then it learns, and then it improves on it. These machines are also getting lighter. One automaker in Europe is using units that weigh less than 70 pounds. That combination of light weight and cognition affords them such versatility that they can shift from one task to another in different locations on the factory floor. It also reduces their cost and brings great economy to the operation. The 3,000-pound, multimillion-dollar giants bolted to the floor that we often envision as the robotic assembly line are becoming fewer and fewer.

So what are the implications? Reduced manufacturing labor costs and lighter and more versatile systems every year. Together, these mean that smaller manufacturers will eventually be able to compete with the traditional giants of industrial manufacturing. This can unleash an ocean of human innovation without the enormous capital investment that keeps so many good ideas from ever seeing the light of day. It could also mean a reduced premium on low-cost labor, which some analysts believe could bring many manufacturing jobs back to the U.S. These would be different jobs — high-tech and leveraging knowledge. These same advantages for manufacturing are spilling over into other areas as well. I mentioned agricultural harvesting earlier. It's becoming more and more automated, reducing farm labor costs and increasing efficiency. Medical automation is another exciting area of advancement. Many hours now spent on menial tasks will, over time, be taken over by machines, freeing the human professionals to do what they were really trained to do: spend time with patients and focus on them. The machines can then monitor the patients and notify the doctors and nurses if they need to intervene.

And of course, the uses of autonomy in transportation are highly anticipated. These are some of the near-term applications of this technology. The longer-term applications are much more difficult today to wrap our brains around because the potential is just too great and varied to foretell. I can tell you that for me, the idea of autonomous systems dispatched to Mars to build research bases that are up and running and safely ready to receive human occupants upon their arrival is a little bit more exciting than the prospect of just being able to read my emails and maybe watch TV during my commute to work in the morning. That would be fun, too, but there are bigger things that I think we can do.

Now I know that many fear the constant march of technology. And the idea of autonomous machines that can learn may sound frightening to a lot of people. But here’s the reality: Cognitive autonomy is a genie that is well out of the bottle. It cannot be ignored. How we choose to embrace, adapt and manage it will determine much of our future: our future prosperity, our security, knowledge and human progress. There will, of course, be setbacks and growing pains but there always are in any endeavor.

If you look back at the airmail service that was established in 1918 — the airmail service that ultimately established 24-hour service coast-to-coast along an air highway of lighted beacons — it pioneered air navigation and all-weather flying and it was the parent of today's airline industry as well as our National Weather Service. Yet of the first 40 pilots hired by the airmail service, only nine were still alive two years later. Several years beyond that, however, pilot fatalities were few and far between. How we deal with the inevitable setbacks will impact the path forward. And societal acceptance of cognitive autonomy may turn out to be the pacing factor in its adoption and growth.

Just an example, again, back with the driverless car. Despite the 6 million auto accidents per year in the U.S. alone, resulting in 35,000 deaths and 2 million injuries, with an estimated 90 percent of these accidents being the result of human error, I think we can all safely bet that driverless technology will be ready and available long before society is ready to embrace it. Driverless cars will offer a lot of improvements, I believe, in safety and efficiency. They should be able to cut the safe distance between moving vehicles from a matter of many yards — and I emphasize this to my kids oftentimes — many yards between moving vehicles, depending on your speed, to just inches at any speed. Imagine what that would do for highway crowding alone. But it would also likely require the separation of driverless vehicles from those with drivers — with human drivers perhaps feeling a little bit like second-class citizens, at least initially.

The point is that societal acceptance of new technology almost always lags behind its pace of development, in part because politics, at least in a democracy, happens downstream of innovation and culture. Our efforts at legal and policy accommodation are necessary, but I don't think they're sufficient if this technology is to realize its potential. And frankly, this is not all bad — although we need to ensure that we don't impair our progress relative to that of other innovative nations. Currently, the only real players in these efforts to socialize machines that can learn are technologists on one end and popular culture — books, movies, and television — at the other end. And the mass media's vision of this technology is almost universally dark. Anyone who's ever watched a "Terminator" movie knows what I'm talking about. We need a real conversation among people who occupy this vast space between technology and Hollywood.

And this is where Kansas State University comes in — not just KSU, but all universities and colleges. Institutions like this have been vital in contributing to the development of this technology. And one of the primary roles of places like KSU is as creative disruptors. Traditionally, you do this by creating synergies that wouldn't exist if left to develop one by one — that's sort of a classic way of approaching it. In this manner, institutions like KSU help build the intersections between the technologies — these combinations that I talked about earlier that actually make these systems work. And when universities do it right, the fears and reservations associated with new technologies are often calmed and their potential gets socialized and made more welcome in our lives. And the result can be an advance of the human condition. But we also know that when this function is neglected for a new technology, tremendous potential can be wasted and the opportunity costs can be enormous, even tragically so. I think the unused potential today of genetically modified foods is a really good example of that.

In my view what is needed is for the void between technologists and popular culture to be filled with other voices: social scientists, historians, legal scholars, ethicists, botanists, agriculturalists, economists, theologians, medical specialists, astronomers, citizens and anyone else who has something thoughtful to offer on this challenge that we face. Questions need to be debated. What should these machines be allowed to do or not do? What restrictions or parameters should be engineered into them? How do we know that they are engineered right? There are a host of other questions. And we need a lot of good thinking. We need papers and articles to be published to provoke the thinking. We need forums to be conducted, and thoughtful leadership, including at the political level, needs to be engaged.

But like it or not, this genie is not going back into the bottle. I'm not worried about that as long as we recognize the risks inherent in any new technology and we proactively manage them rather than simply trying to ignore or avoid them. We are almost to a point where it is actually easier to include the algorithms necessary to allow a machine to learn than it is to try and program a machine for all the contingencies we expect it to face. And that represents a real tipping point — a line of demarcation beyond which it makes little sense not to pursue machine learning. I think there's a lot riding on this moment and, personally, I am very excited about it. I'm also aware that this is a global vector. The U.S. does not have a lock on this technology. How we choose to manage the adoption of cognitive autonomy will impact our global standing for generations to come.

To my mind this is a logical follow-on to the information revolution, which has made almost infinite amounts of knowledge available to virtually everyone in any language, and which has spawned uncountable dreams, visions and ideas. But nothing of practical use to mankind was ever made fully real by just an idea. It's only created by actions taken, and of course, those actions might have been inspired by a great idea, but it was the actions themselves that effected the outcome. Yes, this technology stands to enable enormous progress like the exploration of our universe. But it also stands to allow people to take the knowledge afforded to them by the information age, with all the dreams, visions and aspirations it inspires, and translate it into the actions necessary to benefit all of us. I think it's hard to imagine something more exciting than that. And speaking here, at KSU, at an institution with so many thoughtful people, I can't wait to see the role that you and other universities will play in helping us realize this great potential.

Thanks for having me here to speak this evening.