SETI, circuits, and more: a conversation with the authors of “The Art of Electronics”

Since its original publication in 1980, The Art of Electronics has been regarded as a masterpiece of electronic engineering. Engineers have hailed it as an indispensable, life-altering work that has greatly deepened their understanding and love of circuit design. The highly anticipated third edition was released earlier this year and has already garnered rave reviews. element14’s Sagar Jethani spoke with co-authors Paul Horowitz and Winfield Hill about the new third edition.



Art of Electronics co-authors Paul Horowitz (L) and Winfield Hill (R)


element14 member Don Bertke asks: Why The Art of Electronics? Isn’t electronics a codified discipline? Art implies a degree of imprecision or subjectivity.

Horowitz:  It was Win’s idea to call it “The Art of Electronics.” I think we said it right in the first edition: electronic circuit design is an art. It’s a combination of some basic laws, some rules of thumb, and a large bag of tricks. In our opinion, a good circuit design also has elegance. There’s a beauty to it when you’ve done it in a nice way that combines a good choice of components demonstrating a clever use of their properties in a robust and reliable design that does what you want.

The ability to practice this art really grows the more you do it. It takes many years before you can see through to a beautiful design.

Hill:  You can learn it by doing, but we think if you read enough stories, little examples of how the art is crafted, then you can get a good way along and become a better designer without having spent so many years of practice to get there.

Horowitz:  You can take a microcontroller and you can program it to do something, but there are probably numerous ways to do it better, cheaper, and at lower power—and in a way that is very emotionally satisfying if you take the time to learn the art of circuit design. So we stand by our title.

What’s different about the new third edition? Is it a complete rewrite, or an incremental update?

Hill:  The first couple of chapters are expanded and rewritten, but many of the rest of them are virtually written from scratch. Topics such as filters, oscillators and timers, switchmode power converters, transmission lines, TV, and microcontrollers include substantially new material, as do our greatly expanded treatments of MOSFETs, logic families and interfacing, serial buses, A/D conversion, optoelectronics, transimpedance amplifiers, and low-noise and precision design.


Horowitz:  Originally, the first edition came from a course we were teaching, and we thought of it as an enhanced textbook. We soon discovered that most of our readership was comprised of professional engineers. For that reason, we really bulked-up the third edition with some seriously grown-up topics. A lot of these are on the periphery of microcontrollers.

Embedded controllers have become so common that we’ve added the sort of analog subjects that have become very important because they surround the microcontroller core. Things like precision design (we have a rather extensive, fully rewritten chapter on that), low-noise design, power switching and power conversion, and analog-to-digital and digital-to-analog conversion. Those four chapters are really something. Each one of them could really be its own small book. And we’ve added a bunch of new topics that were not in the previous editions that we felt were of interest to professional designers more than to someone taking a course in electronic circuits.

Hill:  We added 50 photographs to the book. They’re nicely annotated, and really show what things look like. We’re quite proud of these photographs, and we spent a lot of time making them. I think it’s a feature that really improved upon the second edition, largely because of the ability to now use digital photography to make them look beautiful. And our publisher chose a very smooth-surfaced paper to retain the high quality of these photographs.

One of our members, Erik Ratcliff, asks: if he hasn’t already purchased any of the editions, which one should he start with? Can you describe more of the content in the new third edition that isn’t in the second edition?

Hill:  We believe that everyone should have a copy of the second edition. There’s a lot of great material in the second edition; in some cases we dropped chapters wholesale, and in others we rewrote chapters entirely for the third edition. A lot of people were not interested in just getting a warmed-over update of the second edition—they wanted to see brand new stuff, and Paul and I really wanted to write new stuff.

Horowitz:  Another new aspect of the third edition goes back to the illustrations, especially scope readouts. This is something that was made possible by the digitization of oscilloscopes which occurred between the publication of the second and third editions. In the old days when you had an analog scope, you could only photograph the screen and it would become a halftone photograph in the first and second editions of the book. But with digital scopes, you can now make nice line art. We’ve got 90 scope shots in the third edition which show the authentic behavior of circuits, compared with maybe five in the second edition.


Hill:  When you have the electronic version of a scope screen, you can edit it, and even superimpose and put multiple scope screens together. You can create scopes that have more channels and traces than your hardware will actually allow.

Horowitz:  Yes, I have some six- and eight-channel scope displays in the third edition.

Win, you mentioned that some topics from the second edition were dropped altogether for the third edition. Can you give some examples?

Hill:  “Good circuits, bad circuits” is gone, but will soon reappear on the Art of Electronics website. We’ve also removed the construction chapter, the low-power chapter, and the scientific instrumentation chapters. For anyone who still wants that content, it can be found in the second edition.

Horowitz:  We did incorporate some of that material into different chapters of the third edition, but it’s true that those specific chapters are gone.

Throughout the text, you refer to x-chapters. element14 member Shabaz Yousaf asks if you can explain what these are.

Horowitz:  For someone starting to read this book without much background in electronics, we wanted to provide a basic introduction to circuit design. But for the really experienced reader, we wanted to go a layer deeper, get into the difficult problems, and really push out to the edge of the envelope. So we decided we would put this material into extra chapters we call x-chapters. They follow some of the more basic chapters.

There are only five chapters for which we decided we would do this:

  • chapter 1x, which covers passive elements, like inductors, capacitors, and resistors
  • chapter 2x, which covers bipolar transistors
  • chapter 3x, which covers FETs
  • chapter 4x, which covers op-amps
  • chapter 9x, which covers power control and power supplies

Our original objective was to put each of these right after each chapter that it attaches to in the book. As it happened, the third edition got up to 1,200 pages even without the x chapters—even with larger pages and a somewhat smaller font. So about a year ago we made the decision to publish the x chapters as a separate book. We took some of the material that we had intended to put in the x chapters that we felt was really important and we put it back into the main text of the third edition. So some of the chapters now in the main book are more sophisticated.

When do you plan to release the x chapters?

Horowitz:  The x chapters book will come out in no more than two years’ time—I would like it to be less. It’s already about fifty percent written with figures and everything else already.

My next question comes from element14 member John Wiltrout. The second edition has a popular student manual. Are you planning to release something similar for the third edition?

Horowitz:  The third edition goes a little more toward the professional designer, so we have taken some of the more basic material and enhanced it into what used to be called the student manual. It’s now going to be called “Learning the Art of Electronics: A Hands-on Approach.” We used to call it the little book, but that “little book” is now about a thousand pages! It’s quite substantial. It incorporates the labs and the classes and all that stuff that’s in the current student manual, plus some additional stuff. It takes some of that basic load off the main volume. It’s written by Tom Hayes, the first author of the previously-released Student Manual.

The Maker movement has grown significantly since the second edition was published. It sounds like the Learning the Art of Electronics (the student manual) will cater to those who have more of a hands-on learning style, as opposed to those who prefer to learn from a traditional text.

Horowitz:  Yes, I think so. The student manual covers the electronic labs we’re using here at Harvard, and it’s a set of lessons that you build with your hands.

But I would also say that the main Art of Electronics text is not a traditional text in any sense of the word. A traditional text in some sense can be both too mathematical and kind of boring. On the other hand, I find a lot of multimedia, like watching a video blog on how to design stuff, too slow! I want to move more quickly. I think you can move through our book at your own speed. It’s full of circuits that you can build, and it’s full of what the waveforms look like.

It’s really designed to get you doing hands-on work.

Hill:  Yeah, it’s all rubber-meets-the-road kind of stuff. That adds to the practical aspect that’s in the book. You can’t get that kind of stuff on the Web. I love looking at blogs, but after looking at them for half an hour, how much have you really learned? We go into things a lot more deeply in the book. It’s hard for me to see how a format other than the printed page is really suitable for that kind of serious material that we have.

element14 contributor Elecia White was curious to know if, given the depth of your book—90 oscilloscope shots, 80 tables, over 1600 components—you have some favorite circuits? Some that you find yourselves constantly going back to as points of reference?

Horowitz:  I find this a very interesting question. We have a favorite graph, and it’s on page 526. In one graph, it basically has everything you need to know to do a low-noise op-amp circuit. It’s just an incredible picture. It’s got all the parts, it’s got all the curves, and it’s got everything you need to know. There’s even a little tutorial marked on the graph, with some equations. Also, I had a lot of fun building figure 8.58 in Adobe Illustrator. It’s about effective input noise density, and it took everything I knew about Illustrator to make that happen. That’s our favorite graph.

Any favorite tables?

Horowitz:  Our favorite table is table 5.5, which supposedly covers “seven” precision op-amps—but it actually has about seventy-five! It has pretty much all the op-amp parameters you need to know to choose one, made in conjunction with that graph we just talked about. There’s a lot of information in there that you actually won’t find in datasheets.

For instance, the LT1012, which is a bias-compensated bipolar amplifier, has a completely incorrect specification on its datasheet for input current noise because they didn’t take into account the fact that if you cancel one current with another uncorrelated current it doesn’t subtract the noise—it actually adds to the noise power. So they have specs that are off by an order of magnitude or more. That sort of stuff is in that table as well as in the text.
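Paul’s point is worth a numerical aside: subtracting one noise current from another only cancels the noise if the two are correlated. For uncorrelated sources the variances add, so the RMS noise grows by a factor of √2 rather than shrinking. A quick sketch (our illustration, not from the book):

```python
import math
import random

random.seed(1)
N = 200_000

# Two uncorrelated noise currents, each 1.0 RMS (arbitrary units).
x = [random.gauss(0.0, 1.0) for _ in range(N)]
y = [random.gauss(0.0, 1.0) for _ in range(N)]

# "Cancelling" x with y means taking their difference, but for
# uncorrelated sources Var(x - y) = Var(x) + Var(y): the noise
# power doubles instead of cancelling.
diff = [a - b for a, b in zip(x, y)]
rms = math.sqrt(sum(d * d for d in diff) / N)

print(rms)  # a value near sqrt(2) ~ 1.414, not near 0
```

This is exactly why a bias-compensation scheme that cancels DC input current leaves the input current *noise* larger, not smaller.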

We actually had a nice chat with Jim Williams on that one. We pointed out that this LT1012 spec was completely bogus. There was this long pause at the other end, and then in a low tone, he said, “You’ve uncovered one of the dark secrets of Silicon Valley.” (laughs)


You’ve given me your favorite graph and your favorite table. Do you have a favorite circuit?

Hill:  I’m not sure we have any one favorite circuit, but we can list a few.

Horowitz:  We had a lot of fun thinking about this one. Throughout the book, we decided we’d take some example circuits and present them in different ways because a lot of the art of engineering is about making choices. You choose this version or that, maybe because of price, performance, or some other factor. There’s no single “best” op-amp, for example.

One example we did was the sun tan monitor. It’s not something anybody really needs, but we thought it was kind of cute. We start back in the analog chapter talking about how you make an integrator to keep track of how much sun you’re getting. Then we go into the digital chapter and we have counters running it, then eventually we get into microcontrollers and they’re running your entire sun tan experience.

We did a similar thing for pseudorandom bit sequence noise generators. We go from discrete digital logic through to FPGA, CPLDs and then, ultimately, microcontrollers. We actually spent 12 pages carrying you through half a dozen different ways you can make a pseudorandom sequence. We have a nice example in figure 8.93 on page 559 which shows how to use a pseudorandom bit sequence noise generator plus analog filtering to make yourself three different colors of noise: white, pink, and red, including performance graphs for that circuit.
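The classic building block behind the pseudorandom bit sequences Paul describes, from discrete logic up to microcontrollers, is a linear-feedback shift register (LFSR). As a rough sketch (our illustration, not a circuit from the book), here is a minimal software PRBS7 generator using the standard maximal-length polynomial x⁷ + x⁶ + 1:

```python
def prbs7(seed=0x7F):
    """Generate one full period of a PRBS7 sequence from a 7-bit LFSR.

    Taps at bits 7 and 6 (polynomial x^7 + x^6 + 1) give a
    maximal-length sequence, so it repeats every 2**7 - 1 = 127 bits.
    """
    state = seed & 0x7F
    bits = []
    for _ in range(127):
        feedback = ((state >> 6) ^ (state >> 5)) & 1  # XOR of bits 7 and 6
        bits.append(state & 1)                        # output the low bit
        state = ((state << 1) | feedback) & 0x7F      # shift feedback in
    return bits
```

Because the polynomial is maximal-length, the register visits all 127 nonzero states before repeating; low-pass filtering a bit stream like this is what yields analog noise of various colors.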

By the way, when we put a circuit in, we name names. So you’ll see the part numbers, you’ll even see the pin numbers. In most cases, we actually built these circuits, so you’ll also see the performance as well.

We thought that those circuits make a kind of nice sequence.

Hill:  My favorite circuits include Figure 9.13 on page 606. It shows the internal circuitry of the 317-style linear regulator. I’ve always thought that was especially elegant and cute. These are circuits that we put in the book that other people designed, and some of them are very nice. I really love the HP/Agilent/Keysight DMM circuitry, which we cover in a number of places in the book, like Figure 8.49 on page 513, Figure 13.15 on page 896, and Figure 13.47 on page 919. These are different aspects of the instrument that are covered in the book.

A lot of disruptive innovations have taken place in the world of electronic engineering since the second edition was published in 1989. You have been in a unique position to observe these changes as they unfolded. Is there a specific technology that either of you feel will fundamentally alter the nature of electronic design in the next twenty years?

Horowitz:  When we were kids, if you wanted to do electronics, you were into ham radio. If you wanted to do automobiles, you took your car apart. We went from that into an era in which nobody did anything because they just bought packaged stuff. Now we’re back to an era in which people are making stuff again, and we’re very pleased to see that. Of course, part of that is due to the availability of inexpensive microcontrollers and little prefab platforms like Arduino and Raspberry Pi and so on.


But I think a bigger part is the Internet. It’s really the ability to find out with just a few keystrokes what other people are doing. And to share your ideas and to get the answers to just about any question by asking it in online communities that has really made this whole movement take off. So if we’re going to ask what’s been disruptive, I think it’s been the Internet and the availability of inexpensive components.

As for the future, I’m chastened by Niels Bohr’s comment in which he said, “Prediction is very difficult, especially if it’s about the future.”

Hill:  The last edition of the book was in 1989. If somebody had asked us what was going to change in 25 years…

Horowitz:  I don’t think we would have foreseen Arduino, the Internet, Make Magazine and all that. I would have said in the 1960s that tunnel diodes were going to be the big thing! They were going to blow away transistors—they were fast, they were simple, they were cheap. Where did they go? They’re pretty much gone. So I think anybody who predicts is a fool.

Hill:  We’re dodging that question!

You’ve mentioned the impact of microcontrollers a few times. At element14, we sell a wide variety of MCUs. Shabaz and I were recently discussing whether the introduction of such fully featured, low-cost boards has changed the nature of design. Is there any downside? In the old days, an engineer might have sat down and actually come up with a really elegant circuit that serves the needs of the application. But today they just don’t have the time. So instead they’ll reach for a microcontroller that they’re already familiar with to get the job done. It may be a little overkill, but it will suffice.

Horowitz:  My cautionary comment, if there is one, is that in some sense you’re sticking these microcontrollers together like LEGOs and oftentimes you can get a circuit to work. But if you really want to push the envelope on what you can do, chances are this approach is suboptimal. At some point you really have to bear down into the difficult parts.

Let me give you an example of what happens here at Harvard. We have students who get a design problem: build such-and-such. They go out to popular websites, grab a bunch of modules, and they snap them together, hoping it’s going to work because, you know—it looks like this output goes into this input. And it doesn’t work. The problem is that they don’t really understand problems like different voltage levels, or impedance levels, or bypassing. They end up with stuff that either blows up or just doesn’t work very well. Or it’s oscillating but they just don’t know it because they’re only looking at what’s on the screen and not on a real oscilloscope. I guess I worry that by having these snap-together things, you sometimes miss the important stuff that’s really essential for reliability and for meeting performance standards that matter.

On the other hand, if it gets the job done, who’s to complain if it works for you? I’m pleased that these microcontrollers are basically now just components, and that they’re part of people’s designs.

Hill:  Let me draw your attention to chapter 12 of our book: Logic Interfacing. This is where we teach how to interface processors to all kinds of things. Most of the world does not run on 3.3 volts, so how are you going to connect your processor pins to all the various kinds of things you’re after? This is what we talk about at length in chapter 12. Then in chapter 13 we discuss analog connections to those processors.

When you go and get your little board from TI or whoever, it has some pins that you can connect to other circuitry, but the serious question is: what will that other circuitry be?

That’s what we try to teach.

Too many people feel that if they learn their microcontroller, then they’re all set. But they come up short when they have to drive a big solenoid or something serious like that. Chapter 15 is our controller chapter, and we have three full-page figures on pages 1080, 1083 and 1085 which show different things hooked up to your processor. It’s quite inspiring to look at all the strange little things that are in here: stepping motor controllers, capacitance position sensors, video decoders, Ethernet—there’s just a whole mess of stuff that you would want to hook up. We spent a lot of time teaching people about the kinds of things they can do with that, too.

How can engineers keep their skills sharp?

Hill:  You might have a processor that you know perfectly well, but maybe you’re curious about trying some new things. We think that’s great. That’s the best way to keep yourself sharp.

Horowitz:  I spoke with our really skilled circuit designer, Jim McArthur, about this. We quote him a number of times throughout the book. He had a good comment: “In the end, you probably need Breakfast Club—people that you talk to, who you really trust in terms of electronics, and they tell you which things you really need to be aware of.” If you have a small group of Breakfast Club people, you can stay abreast without it overwhelming you or keeping you from getting the job done.

I notice you mentioned Newark Electronics in the appendices to the third edition.

Hill:  I’m a regular buyer at Newark Electronics.

Horowitz:  In Appendix K, “Where do I go to buy electronic goodies?” we specifically mention Newark as having a good selection of tools. I also say “Hooray!” because Newark still has a catalog in print, whereas Digi-Key gave it up a couple years ago.

Do you think we will have any new components in the next few years that we will come to see as being as indispensable as resistors, capacitors, and transistors are today?

Horowitz:  I’m not investing in memristors yet. It will be interesting to see what the real nonvolatile memory technology will be. I don’t think it’s going to be floating-gate flash in the long run. I think there will be something much better, but I’m not sure which of the three or four contenders right now it will be—or if it will be something completely different. In the book, we stated the plusses and minuses of MRAM, FeRAM, phase-change memory, and so on.

Check with us in thirty years and we’ll see where it goes!

Paul, you are one of the pioneers of the SETI movement. You worked with the late Carl Sagan in advancing the search for extra-terrestrial intelligence. Could you describe your work here, and where SETI stands today?

Horowitz:  SETI is alive and well. It’s a big space out there, of frequencies and directions and everything else. We don’t know what extra-terrestrial civilizations are sending, but we’re pretty sure they exist. If you had asked someone twenty years ago how many planets exist beyond our solar system, they’d have said probably not many. But if you ask someone today they would say there are more planets out there than stars.


There are more planets in the Milky Way than there are stars. And there are probably as many habitable planets in the Milky Way as there are stars—say 100 billion, plus or minus. Things are looking good for life elsewhere. In fact, things aren’t even looking so bad for primitive life elsewhere in our own solar system.

What’s the best way to communicate with other intelligent life in the universe?

Horowitz:  Recent ideas are to look for optical pulses. That’s been our big thing for the last ten years, but radio is probably still the best bet. There’s not much going on in radio these days. The Allen Telescope Array, which was going to be the Great White Hope, has not worked out. Arecibo only has a very tiny pencil beam looking at the sky at a narrow range of frequencies—that’s SETI@home.

Contact is going to happen one of these days, but what we’re doing so far is just scratching the surface. So far, unless it’s been classified, no one’s found a signal from an extra-terrestrial intelligence.

Hill:  Paul and his students designed a telescope in Harvard, Massachusetts. It runs automatically when it’s a clear night, and it scans the night sky looking for life signals. He has not updated his web page for it because he’s been working too hard on the third edition of the book. (laughs) The telescope will be running tonight, right?

Horowitz:  Well, it’s looking a little cloudy. We should be in California where you are, Sagar! (laughs)

Let me close this subject with a little quote. It’s from when we were doing radio experiments and upgrading the telescope Win just mentioned. I showed the upgrade to one of my colleagues, Bill Press, the author of Numerical Recipes. Bill looked at this thing and he said, “Horowitz, you know you have one chance in a million… of becoming the most famous person ever.” So, it’s a long shot.

But it would be a remarkable discovery. It would be the end of earth’s cultural isolation. It would be a bridge across four billion years of independent origin of life. It would be the greatest discovery in the history of humankind.

Hope springs eternal in this business.

Hill:  So it will either be the greatest discovery in the history of humankind or nothing.

Horowitz:  Or nothing. (laughs)

I have to hand it to you both for producing what one of our members has called the holy scripture of electronic engineering. Congratulations on the amazing new edition.

Hill: Thank you very much. We enjoyed speaking with you.

Horowitz:  If your member is right, does that make us saints? Or something more?

She didn’t clarify! (laughs)


You can purchase the new third edition of The Art of Electronics at Paul and Win’s official site.

element14 members mentioned in this article include DAB, modalpdx, elecia, shabaz, and jw0752.

Originally published on element14


Saving capitalism: a conversation with economist Luigi Zingales

Luigi Zingales teaches finance at the University of Chicago’s Booth School of Business and is the author of “A Capitalism for the People.” He argues that what those who oppose big business and those who oppose big government fail to perceive is that they are fighting the same enemy: crony capitalism. Big business could not survive without the protection it receives from government, and big government could not survive without the backing it receives from business. In 2013, I spoke with Zingales about the forces threatening American capitalism and what we can do about it.


Calvin Coolidge famously declared that the chief business of America is business. Was he wrong?

I don’t think he was wrong, but people don’t understand how we should achieve this. When you go to the Grand Canyon, there is a sign that says “Please don’t feed the animals.” It goes on to explain that precisely because you love animals, you need to make sure they are in an environment where they can keep hunting and acquiring food in natural ways so they can survive long term. I think the same logic applies to business. We need to tell Washington: please don’t feed the businesses. If you love business, you should want it to remain competitive in the normal marketplace without any government subsidies and market distortions, because those distortions end up hurting the very businesses you’re trying to help.

You argue that instead of being pro-business, we should be pro-market. What is the difference?

Both sides of the political spectrum want to portray themselves as pro-business. This means they want to subsidize and help existing businesses instead of thinking about how we can make the marketplace fairer, and the playing field more level for all. That’s what pro-market policies are all about. Businesses are always for free markets when they first enter an industry, but the moment they are established, they want to limit entry and build restrictions to make more profit. There is a danger when government gets involved in sanctioning these barriers to entry.

Americans have a tradition of protesting about social issues like abortion, gay marriage, and gun control. Why don’t we get more outraged over crony capitalism?

I grew up in Italy, so I’m naturally a conspiracy theorist. Attention is paid to these social issues precisely to distract from what is more fundamental.

In his book What’s the Matter with Kansas?, Thomas Frank argues that Republicans have perfected the art of using hot-button social issues to get people to vote against their economic self-interest. But it’s not just Republicans, is it?

The big issues like gay marriage and abortion that divide the two political parties hide the fact that both parties subsidize business. There is, in fact, a universal consensus between the establishments of both parties. You would be hard-pressed to find any discontinuity between, say, the policy that Hank Paulson used at Treasury and the one that Timothy Geithner used at Treasury.

Why is it that we usually only hear concerns about the power of big business from the left?

In part, the responsibility is on people who are most vocal in their support of capitalism. Because they love the free-market system, they mistakenly think they have to love big business and everything big business does. Actually, it should be the other way around: because we love the free-market system, we should be particularly severe on businesses that distort and desecrate the system. The only kind of criticism raised against the power of big business is from people on the extreme left— so much so that any criticism of big business is now immediately identified as being leftist. I don’t think this should be the case. Pro-market people should be just as outspoken about the distortion taking place in capitalism today.

Jeb Hensarling, who chairs the House Financial Services Committee, once described his economic philosophy in very similar terms, stating that he is not pro-business, but pro-free enterprise. Does he represent a new populism on the right, or is he just an outlier on the periphery?

I think he’s an outlier, but he’s a sign of something that is boiling underground. There is a huge potential in the United States today for a populist, pro-market movement.

To be fair, this is the same Jeb Hensarling who just took a bunch of Wall Street lobbyists to a posh ski resort for a vacation fundraiser last week.

And the danger is that arguing against crony capitalism can be a useful cover for other things. I try to be across-the-board in my criticisms. The problem is everywhere, and we need to ferret it out.

You write about the harm that lobbying does to the public good. But isn’t lobbying a case of “we have met the enemy, and he is us?” We only call it lobbying when the other guy does it. When I do it, I’m simply exercising my constitutional right to free speech.

There is no question that some lobbying is good, and that’s why it is protected by the Constitution. When lobbying is advocating a position, this is what democracy is about. The more transparent and widespread this process is, the better.

But lobbying has matured. Twenty years ago, a lot of lobbying was about how to get the government off your back, and as a libertarian, I am sympathetic to that. Today, however, it’s about how to get the government in your pocket.

But isn’t it completely rational for groups to protect their own self-interest by lobbying government?

When you lobby for your interests, you are maximizing your benefits, and what you are doing is exercising an optimal response. Nevertheless, this optimal response isn’t always right for the system as a whole. If everybody rushes the doors when there is a fire in the building, the optimal response for each individual is to run faster. But you know this is not a good thing to do. You need to bring a bit of order and have people exit the building in a proper way that can maximize the number of people who can be saved.

How have businesses responded to your call for a system of rules designed to benefit the system as a whole?

I sometimes feared that I would rub business executives the wrong way with such arguments, but I have received an enormous amount of support from the business community.

Don’t they just go back to their normal lobbying activity?

Yes, but they do not like it either. They feel they are in a rat race, and they want to get out. But unilateral disarmament is not a very smart strategy.

After the Great Crash of 1929, the Pecora investigation exposed the corruption of the big banks for all to see. Public outrage led directly to the creation of the SEC and an entire regulatory system to make sure Wall Street was playing by a fair set of rules. In our rush to fix the economy, did we miss an opportunity to pass real financial reform?

President Obama had a different agenda when he took office. He was elected on the promise of health care reform, and he decided that this was his historic moment to pass it. Everything else, including the financial crisis, took a back seat. By the time he started to look at it more seriously, it was much more difficult to deal with. In March 2009, banks were on their knees. The president could have reshaped the financial services industry any way he wanted. I think that he did not have a clear idea how to do it, and the advice he got from Tim Geithner and Larry Summers went in the wrong direction.

I don’t think that the need for justice means we have to burn everything down, either. But we have gone to the opposite extreme of trying to cover everything up. This does a lot of damage to people’s perception of fairness, and ultimately leads to an erosion of support for free-market principles.

Why don’t we worry more about this loss of faith in free markets?

Because it cannot be measured properly. Our political system over-reacts to things that are measurable and under-reacts to things that are non-measurable. The value of the stock of capital is measured every day in the stock market, so politicians pay a lot of attention to the stock market. Faith in free markets is not measured so precisely, so it tends to be ignored.

Does either political party have a better track record than the other when it comes to resisting crony capitalism, or are they simply in thrall to different interests?

I think they are equally in thrall.

Did repealing Glass-Steagall cause the financial crisis?

No, I don’t think we can blame the crisis directly on not having a separation between investment banking and commercial banking. At the time, I thought its repeal was a reasonable thing to do. Glass-Steagall was originally passed back in the 1930’s, and I felt that a lot of things had changed by 1999.

But you later became a fan of Glass-Steagall. Why the change of heart?

There were a few reasons. First of all, consider the attempt to separate proprietary trading from non-proprietary trading— something which now goes by the name The Volcker Rule. I realized how difficult it was to do it except by actually separating investment banking from commercial banking. We can argue whether this is the best response to the problems we saw during the crisis, but at least it is a definite response— and one that goes in the right direction. But the Volcker Rule, by itself, is unenforceable because it requires regulators to identify the intention behind a trade: whether a bank intends to trade for a client, or whether it intends to trade on its own account. In practice, this is extremely hard to determine.

You write that Glass-Steagall also prevented individual actors in the financial sector from joining forces.

Part of what kept the power of the financial industry at bay since the Great Depression was that it was divided. Commercial banks were pushing one way, and investment banks were pushing another way. As we know, competition benefits capitalism. In this case, competition, or conflicting interests, benefited voters and the system overall. But once the financial industry became consolidated, all participants could march together in the lobbying process, and they could basically get their way throughout. Such a concentration of power is bad for free markets.

Last month, the economy added 165,000 new jobs, and the unemployment rate dropped to 7.5%. The stock market now stands at record highs. Are we back? Is the recovery finally on safe ground?

I think we are on a slow path of recovery, but I’m much more worried about the underlying trend. If we look from 2000 to today, we’ve had some ups and downs, but we have a definite trend in reduction of employment— not necessarily higher unemployment, but a reduction of employment —especially among lower-educated people who seem to be increasingly marginalized. This is going to increase inequality and social tension in the long term.

A few months ago, a 28-year-old graduate student caught an Excel error in the work of two major economists which has called into question the entire argument that governments should cut debt to create economic growth. Are people justified in criticizing economists for presenting their models with the same certitude normally reserved for the physical sciences?

I actually see the Reinhart-Rogoff scandal as a great moment in economics. The fact that a 28-year-old graduate student at the University of Massachusetts can bring down a tenured professor at Harvard and a former chief economist at the IMF is an indication of how much of a science our discipline is. Even smart people make mistakes. It’s OK as long as those mistakes are uncovered, and uncovered quickly.

Do the corrections to Reinhart-Rogoff mean that we are too fixated on cutting spending and reducing deficits today?

I think the most important emphasis on cutting should be on cutting for the long term, not cutting in the short term. The more the economy recovers, the more that cutting in the short term can be good, but there’s no doubt that when you cut expenses the immediate impact is negative. Imagine I waste money by supporting some workers who don’t do anything, and I decide to fire them. Now, it will take some time for them to find new jobs. Remember: although they were producing nothing, the wages they received were reflected in GDP. So the immediate impact in cutting is a drop in recorded GDP. The bet that Germany and other European countries are championing right now is that the long-term effects of cutting will eventually prove to be good.

You argue that the complexity of regulations makes us more dependent on the lobbyists’ specialized knowledge, and that the purpose of complexity is often to hide loopholes which benefit specific industries. How much of this corruption is an inevitable byproduct of a legislative process that produces hundreds of new laws each year?

There is no doubt that there is too much production. Part of the reason I think we should limit the number of laws and simplify them is that it would make it more difficult for corruption to hide. In the shade, a lot of things take place. When you produce laws that are 2,000 pages long, even members of Congress don’t read them. The only ones who can really control them at that point are lobbyists.

Do you see anyone on the left or right today who you think understands the threat posed to our democracy by crony capitalism?

I was quite supportive of Paul Ryan’s plan last year, and I think he understands many of these issues. There are other people like him who understand the issues, but it’s not clear that they want to run with them. Part of the problem is that you’re going to implicate a lot of the establishment. Anyone who has a career in Washington will find it very difficult to turn around and dismiss the very system they are a part of.

A Capitalism for the People

To learn more about how we can restore the free-market system and recapture the genius of American prosperity, check out Luigi Zingales’ latest book, “A Capitalism for the People.”



Originally published on PolicyMic.

Robots won’t destroy the human race: that’s kind of our thing


Last year, celebrated astrophysicist Stephen Hawking told the BBC that artificial intelligence could bring about the end of the human race:

“It would take off on its own, and re-design itself at an ever increasing rate… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

The development of full artificial intelligence could spell the end of the human race.”

The idea of machines taking over has been a staple of science fiction for generations, but could today’s technology really contain the seeds of our own destruction?

Such fears were stoked when it was recently reported that a computer had finally passed the Turing Test last year. Alan Turing, the father of modern AI, originally posited the test in his 1950 paper, Computing Machinery and Intelligence. Turing realized the difficulties involved in answering the question of whether machines can truly think owing to our own incomplete notions of how thinking can be defined. He therefore suggested an alternative, utilitarian approach to answering the question: an imitation game. If a human interrogator cannot determine whether the entity with whom he is communicating is another human or a machine, then we may for all intents say the machine is thinking.

Almost from the moment it was published, Turing’s hypothesis was assailed: Who would make the determination? A single person? A crowd? And what time frame should be used before the judge or judges must render judgment? Indeed, some biographers have suggested that Turing merely posited the imitation game as a thought experiment, never meaning for it to be used as a serious determinant of whether a machine can actually think.

This summer, judges convened at London’s Royal Society to participate in a Turing Test involving an electronic correspondent named Eugene Goostman– a correspondent who might be either a computer or another human being. In the end, 33 percent of the judges were fooled into believing Eugene was a real person, and the University of Reading’s Kevin Warwick announced the results:

“We are proud to declare that Alan Turing’s Test was passed for the first time on Saturday. In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human.”

A sweeping statement, to be sure. But a closer examination reveals that the results do not quite hold up.

For starters, the judges were told that Eugene was a 13-year-old boy from St. Petersburg. This foreknowledge meant they would attribute any choppiness in his English and lack of familiarity with Western cultural references to his age and foreign nationality. But isn’t this tipping the scales just a bit? Even if one believes that Turing meant his thought experiment to be taken seriously, this handicapping seems to violate the spirit of a fair Turing Test. In addition, the idea that 30 percent of judges must be fooled for at least five minutes is a recent invention that finds very little backing in Turing’s work. Indeed, the results were unconvincing to Imperial College’s Professor Murray Shanahan:

“Of course the Turing Test hasn’t been passed. I think it’s a great shame it has been reported that way, because it reduces the worth of serious AI research. We are still a very long way from achieving human-level AI, and it trivialises Turing’s thought experiment (which is fraught with problems anyway) to suggest otherwise.”

Shanahan’s comments echo my own experience at university. While pursuing simultaneous degrees in computer science and philosophy, I witnessed the high regard in which artificial intelligence was held by philosophers. Professors and students in my Philosophy of Mind class routinely speculated about the rise of machine intelligence, and vigorous debates ensued about whether we were already witnessing the emergence of thinking machines around us. I remember walking from such seminars down to Loyola’s Computing Lab to build neural networks, games that would challenge (and often beat) human competitors, and advanced pattern recognition algorithms. At no point during these late night jam sessions did I or my fellow coders believe that we were doing anything other than constructing elaborate iterative scripts: if this, then that. Was that enough to constitute true intelligence? We certainly didn’t think so, and were amused to see just how seriously work like ours was taken by the philosophy set: if they only knew the tricks we used to make our programs look like real thinking.
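The “tricks” in question can be startlingly simple. Below is a toy ELIZA-style responder, a hypothetical sketch (not any actual program from those lab sessions), showing how bare keyword branching can pass for conversation:

```c
#include <string.h>

/* A toy ELIZA-style responder: nothing but "if this, then that"
 * keyword branching, yet enough to look conversational for a while.
 * Hypothetical sketch, not any actual program from the author's lab. */
const char *respond(const char *input) {
    if (strstr(input, "mother")) return "Tell me more about your family.";
    if (strstr(input, "sad"))    return "Why do you think you feel sad?";
    if (strstr(input, "?"))      return "What do you think?";
    return "Please, go on.";     /* fallback keeps the illusion going */
}
```

Feed it “I feel sad today” and it replies “Why do you think you feel sad?” There is no understanding anywhere in the loop, only string matching.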

Today, an entire class of celebrity scientists like Raymond Kurzweil and Kevin Warwick base their careers on prognosticating the rise of truly intelligent machines, and issue dire warnings about how we must take heed before it’s too late. Given recent advances in areas like the amplification and coding of neural signals and environments where devices automatically respond to the presence of human activity, it’s tempting to believe we are on the verge of a true revolution which will lead to the emergence of autonomous thinking machines. But today’s “thinking” devices are still only simulating human thought by executing iterative code written by human authors: if this, then that. And those devices execute everything their human authors tell them to do– including the mistakes.

Consider Toyota. In 2013, a jury found the auto maker liable for the death of a vehicle occupant due to unintended acceleration. Embedded-software expert witness Michael Barr showed that Toyota software developers had violated several sound coding practices, including declaring conflicting global variables. This poorly-written, overly-complex code caused Toyota vehicles to temporarily accelerate when drivers hit the brake. The jury ordered Toyota to pay $3 million in damages. A year later, the troubled auto maker was forced to pay a whopping $1.2 billion for lying to Congress and the public about the causes of sudden acceleration and for disavowing responsibility for the multiple deaths it caused.
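To make the class of defect concrete, here is a minimal sketch (hypothetical, not Toyota’s actual code) of what goes wrong when two tasks share an unprotected multi-byte global: an interrupt that fires between two byte-wide reads hands the main task a “torn” value that never existed.

```c
#include <stdint.h>

/* Hypothetical sketch of the defect class at issue (not Toyota's
 * actual code): two tasks share an unprotected multi-byte global. */
static uint8_t pedal_hi, pedal_lo;   /* shared throttle state, no lock */

/* Simulated interrupt handler: writes both bytes of a new reading. */
static void pedal_isr(uint16_t v) {
    pedal_hi = (uint8_t)(v >> 8);
    pedal_lo = (uint8_t)(v & 0xFF);
}

/* The main task reads the value byte by byte; an interrupt firing
 * between the two reads produces a "torn" value. */
uint16_t read_throttle_torn(void) {
    pedal_isr(0x01FF);               /* shared value starts at 0x01FF  */
    uint8_t hi = pedal_hi;           /* task reads the high byte...    */
    pedal_isr(0x0200);               /* ...interrupt updates the value */
    uint8_t lo = pedal_lo;           /* ...task reads the NEW low byte */
    return (uint16_t)((hi << 8) | lo);  /* 0x0100: neither old nor new */
}
```

The function returns 0x0100, a throttle reading that was never written; the standard fix is to disable interrupts (or take an atomic snapshot) around the paired reads.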

To the jury and the Department of Justice, there was no question who was at fault: Toyota. Specifically, the human agency which created the code that killed. When adjudicating matters of life and death, we correctly assign responsibility to the human authors behind the code because we understand that code is really just a set of instructions, like a recipe for chocolate cake. If the cake gets botched, you don’t blame the ingredients, you blame the chef. The contrary view that code is capable of taking on an independent life of its own and can therefore be assigned responsibility is risible for good reason.

The real threat to humanity lies not in the rise of the robots, but in our own innate tendency to use new scientific insights to violate the well-being of others. Automated flight systems mutate into extra-judicial drone strikes. Life-saving drug therapies lead to a host of maladies (and their cures) invented by big pharma to feed the beast. Vast communication systems give rise to a surveillance society in which every motion and message is tracked by agencies exempt from constitutional limits.

These fears about technology actually disguise the fears we have about how such discoveries will be used— by us. Besides being a great film, The Terminator struck a nerve because it tapped into our growing unease about the rise of computer technology in the 1980’s and how it was displacing traditional, often manufacturing-based economies. The Matrix was also a fun film, but it may have resonated with us because the elaborately constructed (and wholly artificial) world swirling around its protagonist echoed the unease many people were starting to feel about the extent to which data was being used to control our lives— especially vast, impersonal financial markets and intrusive computer-based marketing.

Fears about technology really mask the fears we have about the uses to which we humans will put it.

So don’t worry about the robots just yet.

Originally published on element14.