Carver Mead: Microelectronics, neuromorphic computing, and life at the frontiers of science and technology

01 September 2024
By William G. Schulz
Carver Mead in his Caltech lab (2017). The white rectangle to the right of Mead is his 1971 chip. Photo credit: Caltech Academic Media Technologies (AMT).

Carver Mead has just finished writing a new paper. At age 90, the renowned pioneer of semiconductor electronics is tackling, with a small group of optics experts, lingering questions rooted in Einstein’s theory of general relativity that concern the effects of gravity on the speed of light. The experiment encompasses Mead’s own G4v theory of gravity, which employs a quantum-wave representation for matter and extrapolates from Einstein’s suggestion that gravitational potential has both scalar (static) and vector (dynamic) components.

“I tend to push things to the edge, just to make sure I understand what’s going on,” Mead, emeritus professor of engineering and applied science at California Institute of Technology (Caltech), says of the experiment. “And you know, if it’s not what I expect, that’s good, because I learned something.”

Mead is always learning something, and then putting it to work. From the latter half of the 20th century to today, his mind has given science and society the modern method of designing integrated circuits, the very idea of integrating circuit design into computer science as an academic discipline, related futuristic goals like neuromorphic computing, and a reformulation of electrodynamics, to name just a few of his pathbreaking contributions.

A preprint of Mead’s latest paper is posted on arXiv, but he is already planning follow-on experiments. “We’re just one step at a time figuring out what’s true. And if it is what general relativity predicts, then we need to know why.”

His efforts are emblematic of a mind that continues to explore fundamental science from the perspective of a technologist. After all, it was experiments by Mead and then-graduate student Bruce Hoeneisen on electron tunneling, a quantum mechanical phenomenon, that backed up the physical realities supporting Moore’s Law, which holds that the number of transistors on an integrated circuit will double every two years.

Mead established that, as the transistors in microchips shrink and more of them occupy each unit area, they will not overheat the chips but rather work faster, better, and with less energy. From his electron tunneling investigations, he predicted that a transistor size of 0.15 µm was possible at a time when the commercial size was 10 µm. But skepticism lingered until dramatic advances in chip lithography began to unfold rapidly in the 1980s. By 2000, the transistor size Mead had predicted was achieved, and with advances like phase-shift masking, integrated circuitry has continued shrinking into the nanometer domain.
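A rough back-of-the-envelope sketch (illustrative, not from the article) shows how that prediction lines up with Moore’s Law: shrinking the minimum feature size from 10 µm to 0.15 µm increases transistor density by roughly (10/0.15)², about 4,400 times, or some 12 doublings; at one doubling every two years, that is roughly a quarter century of progress.

```python
# Back-of-the-envelope check of Mead's scaling prediction against Moore's Law.
# Assumptions (illustrative): transistor density scales with the inverse square
# of the minimum feature size, and density doubles every two years.
import math

feature_then_um = 10.0    # commercial feature size at the time of the prediction
feature_pred_um = 0.15    # feature size Mead predicted would be achievable
years_per_doubling = 2.0  # Moore's Law doubling period

density_gain = (feature_then_um / feature_pred_um) ** 2  # ~4,400x more devices per area
doublings = math.log2(density_gain)                      # ~12 doublings
years = doublings * years_per_doubling                   # ~24 years

print(f"Density gain: ~{density_gain:,.0f}x")
print(f"Doublings:    ~{doublings:.1f}")
print(f"Time span:    ~{years:.0f} years at one doubling every two years")
```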

Mead popularized the term Moore’s Law, which he points out is not a “law” of science in the strictest sense. His friend Gordon Moore, a co-founder of Fairchild Semiconductor, shared with Mead his idea about the expected rate of doubling of transistors on a chip. Moore had also shared with Mead early transistors from Fairchild that had been rejected for cosmetic reasons. Mead used them for his class at Caltech, where he taught students about the transistor technology that would soon revolutionize Silicon Valley and the world.

In 1968, as a consultant, Mead joined Moore and Robert Noyce in the founding of what would become Intel. Watching the tedious, labor-intensive, error-prone methods they employed in their large-scale integrated chip design and mask-making, he decided that there must be a better way. Back at Caltech, he developed a method of generating chip logic and circuit geometry directly by employing simple, self-built computer programs. He had his first chip fabricated and working by 1971. The Caltech students demanded that he establish a course on chip design, which he taught in 1971 and every year thereafter.

After giving a 1976 seminar at Xerox Palo Alto Research Center (PARC) on the chip-design achievements of Caltech students, Mead met Lynn Conway, who would become his coauthor of the textbook Introduction to VLSI Systems.

Mead’s lifetime of scientific contribution includes more than 80 patents, and the founding or co-founding of more than 20 startup companies. In the 1980s, he conceptualized how neuromorphic computing might be realized via the modeling of human neurology.

Carver Mead receiving the 2002 National Medal of Technology from President George W. Bush. Photo credit: White House press photo.

Electricity, electronics, and a love of learning have played featured roles in Mead’s life from the beginning. Born in Bakersfield, California, in 1934, he is a lifelong Californian with a heritage that reaches back to the state’s pioneer days. As a child, he moved with his family to the Big Creek hydroelectric facility where his father was a plant operator. Theirs was one of about 14 houses on the grounds of the new plant, in the mountains some 20 miles east of Fresno.

In an oral history for the Chemical Heritage Foundation, Mead recalled his introduction to learning in a one-room schoolhouse: “There were 20 of us in the school. It was a fabulous way to be educated. The way the teacher works, of course, she goes and deals with one grade for a little while and gives them stuff to do while you go on to do other things. But the nice thing about being educated that way is that, in my case, I would tune into the things that I found interesting.”

After high school, Mead continued his education at Caltech where he completed bachelor’s, master’s, and PhD degrees. And, of course, he has stayed on there to this day for his research and teaching career.

In hindsight, Mead says his ideas about the human/computer interface began to take shape in the early 1970s as he was developing what later became known as VLSI design. He says he began to realize that it was “crazy the way we’re interfacing with computers, because we all have to learn this very awkward, nonintuitive language, but sooner or later VLSI will advance to the point where computers can understand our language and then be much more user-friendly.”

“I had no doubt about it,” Mead continues. “And I hadn’t even started the neuromorphic thing yet, but it was clear that Moore’s Law was sooner or later going to get us to where computers can do natural language. And when people can converse with a program in natural language, they feel like it’s intelligence.”

Truly neuromorphic computing, Mead says, will require chips that, like the brain, process information encoded as events, in hierarchical ways. Such a chip would be able to make near-instantaneous associations between sense and emotion, for example, seeing a shark and perceiving danger. He believes some systems are coming closer to recreating this and mentions optical dynamic-vision sensors.

“Along with learning how the brain does what it does,” Mead says, “we also need people to be taking some risks,” in terms of advanced neuromorphic computer architecture. “You can’t learn without doing. You can’t learn how a thing works unless you can build it and make it work. And that is a tall order.”

Unlike computers, the human nervous system doesn’t separate gain control from signal processing. “It’s all one thing,” Mead says. “We have just begun to learn how the brain does orders of magnitude more computation per energy unit than we’ve even come close to [electronically],” but contemporary solutions like 3D chip stacking haven’t yet fundamentally changed the way computers work or the unbearable heat load they generate.

“The brain does a huge number of things on about 20 watts,” he says. But Mead believes we haven’t yet realized the potential of neuromorphic computing. Pointing to the various natural language chat programs sprouting abundantly in the computing landscape, he says, “They look intelligent. They are marvelous, friendly encyclopedias. Fantastic. But they’re not what the brain does. And we’re still not close.”

William G. Schulz is the Managing Editor of Photonics Focus.
