The Bionic-Hand Arms Race

As Verne understood, the U.S. Civil War (during which
60,000 amputations were performed) inaugurated the modern prosthetics era in the United States, thanks to federal funding and a wave of design patents filed by entrepreneurial prosthetists. The two World Wars solidified the for-profit prosthetics industry in both the United States and Western Europe, and the ongoing War on Terror helped catapult it into a US $6 billion global industry. This recent investment is not, however, a result of a disproportionately large number of amputations in military conflict: Around 1,500 U.S. soldiers and 300 British soldiers lost limbs in Iraq and Afghanistan. Limb loss in the general population dwarfs those figures. In the United States alone, more than 2 million people live with limb loss, and 185,000 people undergo amputations every year. A much smaller group—between 1,500 and 4,500 children each year—are born with limb differences or absences, myself included.

Today, the people who design prostheses tend to be well-intentioned engineers rather than amputees themselves. The fleshy stumps of the world act as repositories for these designers’ dreams of a high-tech, superhuman future. I know this because throughout my life I have been fitted with some of the most
cutting-edge prosthetic devices on the market. Born missing my left forearm, I was in one of the first cohorts of infants in the United States to be fitted with a myoelectric prosthetic hand, an electronic device controlled by the wearer’s muscles tensing against sensors inside the prosthetic socket. Since then, I have donned a variety of prosthetic hands, each of them striving toward perfect fidelity to the human hand—sometimes at the cost of aesthetics, sometimes of functionality, but always designed to mimic and replace what was missing.

In my lifetime, myoelectric hands have evolved from clawlike constructs to multigrip, programmable, anatomically accurate facsimiles of the human hand, most costing tens of thousands of dollars. Reporters can’t get enough of these sophisticated, multigrasping “bionic” hands with lifelike silicone skins and organic movements, the unspoken promise being that disability will soon vanish and any lost limb or organ will be replaced with an equally capable replica. Prosthetic-hand innovation is treated like a high-stakes competition to see what is technologically possible. Tyler Hayes, CEO of the prosthetics startup
Atom Limbs, put it this way in a WeFunder video that helped raise $7.2 million from investors: “Every moonshot in history has started with a fair amount of crazy in it, from electricity to space travel, and Atom Limbs is no different.”

We are caught in a bionic-hand arms race. But are we making real progress? It’s time to ask who prostheses are really for, and what we hope they will actually accomplish. Each new multigrasping bionic hand tends to be more sophisticated but also more expensive than the last and less likely to be covered (even in part) by insurance. And as recent research concludes, much simpler and far less expensive prosthetic devices can perform many tasks equally well, and the fancy bionic hands, despite all of their electronic options, are rarely used for grasping.

Activity arms, such as this one manufactured by prosthetics firm Arm Dynamics, are less expensive and more durable than bionic prostheses. The attachment shown, from prosthetic-device company Texas Assistive Devices, is rated for very heavy weights, allowing the author to perform exercises that would be risky or impossible with her much more expensive bebionic arm. Photos: Gabriela Hasbun; makeup: Maria Nguyen for MAC Cosmetics; hair: Joan Laqui for Living Proof

Function or Form

In recent decades, the overwhelming focus of research into and development of new artificial hands has been on perfecting different types of grasps. Many of the most expensive hands on the market differentiate themselves by the number and variety of selectable prehensile grips. My own media darling of a hand, the bebionic from Ottobock, which I received in 2018, has a fist-shaped power grip, pinching grips, and one very specific mode with thumb on top of index finger for politely handing over a credit card. My 21st-century myoelectric hand seemed remarkable—until I tried using it for some routine tasks, where it proved to be
more cumbersome and time-consuming than if I had simply left it on the couch. I couldn’t use it to pull a door shut, for example, a task I can do with my stump. And without the extremely expensive addition of a powered wrist, I couldn’t pour oatmeal from a pot into a bowl. Performing tasks the cool bionic way, even though it mimicked having two hands, wasn’t obviously better than doing things my way, sometimes with the help of my legs and feet.

When I first spoke with
Ad Spiers, lecturer in robotics and machine learning at Imperial College London, it was late at night in his office, but he was still animated about robotic hands—the current focus of his research. Spiers says the anthropomorphic robotic hand is inescapable, from the reality of today’s prosthetics to the fantasy of sci-fi and anime. “In one of my first lectures here, I showed clips of movies and cartoons and how cool filmmakers make robot hands look,” Spiers says. “In the anime Gundam, there are so many close-ups of gigantic robot hands grabbing things like massive guns. But why does it need to be a human hand? Why doesn’t the robot just have a gun for a hand?”

It’s time to ask who prostheses are really for, and what we hope they will actually accomplish.

Spiers believes that prosthetic developers are too caught up in form over function. But he has talked to enough of them to know they don’t share his point of view: “I get the feeling that people love the idea of humans being great, and that hands are what make humans quite unique.” Nearly every university robotics department Spiers visits has an anthropomorphic robot hand in development. “This is what the future looks like,” he says, and he sounds a little exasperated. “But there are often better ways.”

The vast majority of people who use a prosthetic limb are unilateral amputees—people with amputations that affect only one side of the body—and they virtually always use their dominant “fleshy” hand for delicate tasks such as picking up a cup. Both unilateral and bilateral amputees also get help from their torsos, their feet, and other objects in their environment; rarely are tasks performed by a prosthesis alone. And yet, the common clinical evaluations to determine the success of a prosthetic are based on using only the prosthetic, without the help of other body parts. Such evaluations seem designed to demonstrate what the prosthetic hand can do rather than to determine how useful it actually is in the daily life of its user. Disabled people are still not the arbiters of prosthetic standards; we are still not at the heart of design.

The Hosmer Hook [left], originally designed in 1920, is the terminal device on a body-powered design that is still used today. A hammer attachment [right] may be more effective than a gripping attachment when hammering nails into wood. Photos, from left: John Prieto/The Denver Post/Getty Images; Hulton-Deutsch Collection/Corbis/Getty Images

Prosthetics in the Real World

To find out how prosthetic users live with their devices,
Spiers led a study that used cameras worn on participants’ heads to record the daily actions of eight people with unilateral amputations or congenital limb differences. The study, published last year in IEEE Transactions on Medical Robotics and Bionics, included several varieties of myoelectric hands as well as body-powered systems, which use movements of the shoulder, chest, and upper arm transferred through a cable to mechanically operate a gripper at the end of a prosthesis. The research was conducted while Spiers was a research scientist at Yale University’s GRAB Lab, headed by Aaron Dollar. In addition to Dollar, he worked closely with grad student Jillian Cochran, who coauthored the study.

Watching raw footage from the study, I felt both sadness and camaraderie with the anonymous prosthesis users. The clips show the clumsiness, miscalculations, and accidental drops that are familiar to even very experienced prosthetic-hand users. Often, the prosthesis simply helps brace an object against the body so it can be handled by the other hand. Also apparent was how much time people spent preparing their myoelectric prostheses to carry out a task—it frequently took several extra seconds to manually or electronically rotate the wrists of their devices, line up the object to grab it just right, and work out the grip approach. The participant who hung a bottle of disinfectant spray on their “hook” hand while wiping down a kitchen counter seemed to be the one who had it all figured out.

In the study, prosthetic devices were used on average for only 19 percent of all recorded manipulations. In general, prostheses were employed in mostly nonprehensile actions, with the other, “intact” hand doing most of the grasping. The study highlighted big differences in usage between those with nonelectric, body-powered prosthetics and those with myoelectric prosthetics. For body-powered prosthetic users whose amputation was below the elbow, nearly 80 percent of prosthesis usage was nongrasping movement—pushing, pressing, pulling, hanging, and stabilizing. For myoelectric users, the device was used for grasping just 40 percent of the time.

More tellingly, body-powered users with nonelectric grippers or split hooks spent significantly less time performing tasks than did users with more complex prosthetic devices. Spiers and his team noted the fluidity and speed with which the former went about doing tasks in their homes. They were able to use their artificial hands almost instantaneously and even experience direct haptic feedback through the cable that drives such systems. The research also revealed little difference in use between myoelectric single-grasp devices and fancier myoelectric multiarticulated, multigrasp hands—except that users tended to avoid hanging objects from their multigrasp hands, seemingly out of fear of breaking them.

“We got the feeling that people with multigrasp myoelectric hands were quite tentative about their use,” says Spiers. It’s no wonder, since most myoelectric hands are priced over $20,000, are rarely approved by insurance, require frequent professional support to change grip patterns and other settings, and have costly and protracted repair processes. As prosthetic technologies become more complex and proprietary, long-term serviceability is an increasing concern. Ideally, the device should be easily fixable by the user. And yet some prosthetic startups are pitching a subscription model, in which users continue to pay for access to repairs and support.

Despite the conclusions of his study, Spiers says the vast majority of prosthetics R&D remains focused on refining the grasping modes of expensive, high-tech bionic hands. Even beyond prosthetics, he says, manipulation studies in nonhuman primate research and robotics are overwhelmingly concerned with grasping: “Anything that isn’t grasping is just thrown away.”

TRS makes a wide variety of body-powered prosthetic attachments for different hobbies and sports, including shooting pool, swimming, playing a drum, holding a volleyball, fishing, and throwing a basketball. Each attachment is specialized for a particular task, and they can be easily swapped for different activities. Photos: Fillauer TRS

Grasping at History

If we’ve decided that what makes us human is our hands, and what makes the hand unique is its ability to grasp, then the only prosthetic blueprint we have is the one attached to most people’s wrists. Yet the pursuit of the ultimate five-digit grasp isn’t necessarily the logical next step. In fact, history suggests that people haven’t always been fixated on perfectly re-creating the human hand.

As recounted in the 2001 essay collection
Writing on Hands: Memory and Knowledge in Early Modern Europe, ideas about the hand evolved over the centuries. “The soul is like the hand; for the hand is the instrument of instruments,” Aristotle wrote in De Anima. He reasoned that humanity was deliberately endowed with the agile and prehensile hand because only our uniquely intelligent brains could make use of it—not as a mere utensil but a tool for apprehensio, or “grasping,” the world, literally and figuratively.

More than 1,000 years later, Aristotle’s ideas resonated with artists and thinkers of the Renaissance. For Leonardo da Vinci, the hand was the brain’s mediator with the world, and he went to exceptional lengths in his dissections and illustrations of the human hand to understand its principal components. His meticulous studies of the tendons and muscles of the forearm and hand led him to conclude that “although human ingenuity makes various inventions…it will never discover inventions more beautiful, more fitting or more direct than nature, because in her inventions nothing is lacking and nothing is superfluous.”

Da Vinci’s illustrations precipitated a wave of interest in human anatomy. Yet for all of the studious rendering of the human hand by European masters, the hand was regarded more as an inspiration than as an object to be replicated by mere mortals. In fact, it was widely accepted that the intricacies of the human hand evidenced divine design. No machine, declared the Christian philosopher William Paley, is “more artificial, or more evidently so” than the flexors of the hand, suggesting deliberate design by God.

Performing tasks the cool bionic way, even though it mimicked having two hands, wasn’t obviously better than doing things my way, sometimes with the help of my legs and feet.

By the mid-1700s, with the Industrial Revolution in the global north, a more mechanistic view of the world began to emerge, and the line between living things and machines began to blur. In her 2003 article “
Eighteenth-Century Wetware,” Jessica Riskin, professor of history at Stanford University, writes, “The period between the 1730s and the 1790s was one of simulation, in which mechanicians tried earnestly to collapse the gap between animate and artificial machinery.” This period saw significant changes in the design of prosthetic limbs. While mechanical prostheses of the 16th century were weighed down with iron and springs, a 1732 body-powered prosthesis used a pulley system to flex a hand made of lightweight copper. By the late 18th century, metal was being replaced with leather, parchment, and cork—softer materials that mimicked the stuff of life.

The techno-optimism of the early 20th century brought about another change in prosthetic design, says
Wolf Schweitzer, a forensic pathologist at the Zurich Institute of Forensic Medicine and an amputee. He owns a wide variety of contemporary prosthetic arms and has the necessary experience to test them. He notes that anatomically correct prosthetic hands have been carved and forged for the better part of 2,000 years. And yet, he says, the 20th century’s body-powered split hook is “more modern,” its design more willing to break the mold of the human hand.

“The body powered arm—in terms of its symbolism—(still) expresses the man-machine symbolism of an industrial society of the 1920s,”
writes Schweitzer in his prosthetic arm blog, “when man was to function as clockwork cogwheel on production lines or in agriculture.” In the original 1920s design of the Hosmer Hook, a loop inside the hook was placed just for tying shoes and another just for holding cigarettes. Those designs, Ad Spiers told me, were “incredibly functional, function over form. All pieces served a specific purpose.”

Schweitzer believes that as the need for manual labor decreased over the 20th century, prostheses that were high-functioning but not naturalistic were eclipsed by a new high-tech vision of the future: “bionic” hands. In 2006, the U.S. Defense Advanced Research Projects Agency launched
Revolutionizing Prosthetics, a research initiative to develop the next generation of prosthetic arms with “near-natural” control. The $100 million program produced two multi-articulating prosthetic arms (one for research and another that costs over $50,000). More importantly, it influenced the creation of other similar prosthetics, establishing the bionic hand—as the military imagined it—as the holy grail in prosthetics. Today, the multigrasp bionic hand is hegemonic, a symbol of cyborg wholeness.

And yet some prosthetic developers are pursuing a different vision. TRS, based in Boulder, Colo., is one of the few manufacturers of
activity-specific prosthetic attachments, which are often more durable and more financially accessible than robotic prosthetics. These plastic and silicone attachments, which include a squishy mushroom-shaped device for push-ups, a ratcheting clamp for lifting heavy weights, and a concave fin for swimming, have helped me experience the greatest functionality I have ever gotten out of a prosthetic arm.

Such low-tech activity prostheses and body-powered prostheses perform astonishingly well, for a tiny fraction of the cost of bionic hands. They don’t look or act like human hands, and they function all the better for it. According to Schweitzer, body-powered prostheses are
regularly dismissed by engineers as “arcane” or derisively called “Captain Hook.” Future bionic shoulders and elbows may make a huge difference in the lives of people missing a limb up to their shoulder, assuming those devices can be made robust and affordable. But for Schweitzer and a large percentage of users dissatisfied with their myoelectric prosthesis, the prosthetic industry has yet to provide anything fundamentally better or cheaper than body-powered prostheses.

The Breakthroughs We Want

Bionic hands seek to make disabled people “whole,” to have us participate in a world that is culturally two-handed. But it’s more important that we get to live the lives we want, with access to the tools we need, than it is to make us look like everyone else. While many limb-different people have used bionic hands to interact with the world and express themselves, the centuries-long effort to perfect the bionic hand rarely centers on our lived experiences and what we want to do in our lives.

We’ve been promised a breakthrough in prosthetic technology for the better part of 100 years now. I’m reminded of the scientific excitement around lab-grown meat, which seems simultaneously like an explosive shift and a sign of intellectual capitulation, in which political and cultural change is passed over in favor of a technological fix. With the cast of characters in the world of prosthetics—doctors, insurance companies, engineers, prosthetists, and the military—playing the same roles they have for decades, it’s nearly impossible to produce something truly revolutionary.

In the meantime, this metaphorical race to the moon is a mission that has forgotten its original concern: helping disabled people acquire and use the tools they want. There are inexpensive, accessible, low-tech prosthetics available right now that need investment in innovation to further bring down costs and improve functionality. And in the United States at least, there is a broken insurance system that needs fixing. Releasing ourselves from the bionic-hand arms race can open up possibilities for designs that are more useful and affordable, and might help us bring our prosthetic aspirations back down to earth.

This article appears in the October 2022 print issue.


Paying Tribute to 1997 IEEE President Charles K. Alexander

Charles K. Alexander, 1997 IEEE president, died on 17 October at the age of 79.

The active volunteer held many high-level positions throughout the organization, including 1991–1992 IEEE Region 2 director. He was also the 1993 vice president of the IEEE United States Activities Board (now IEEE-USA).


The IEEE Life Fellow worked in academia his entire career. At the time of his death, he was a professor of electrical and computer engineering at Cleveland State University and served as dean of its engineering school.

He was a former professor and dean at several schools including Temple University, California State University, Northridge, and Ohio University. He also was a consultant to companies and government agencies, and he was involved in research and development projects in solar energy and software engineering.

Alexander was dedicated to making IEEE more meaningful and helpful to engineering students. He helped found the IEEE Student Professional Awareness program, which offers talks and networking events. Alexander also helped found IEEE’s student publication IEEE Potentials.

He mentored many students.

“My life has been so positively impacted with the significant opportunity to know such a giant in the engineering world,” says Jim Watson, an IEEE senior life member and one of Alexander’s mentees. “While many are very successful engineers and instructors, Dr. Alexander rises far above those who contributed to the success of others.”

Helping engineering students succeed

Alexander was born in Amherst, Ohio, where he became interested in mechanical engineering at a young age. He fixed the cars and machines used on his family’s farm, according to a 2009 oral history conducted by the IEEE History Center.

He switched his interest to electrical engineering, earning a bachelor’s degree in the field in 1965 from Ohio Normal (now Ohio Northern University), in Ada. As a freshman, he joined the American Institute of Electrical Engineers, one of IEEE’s predecessor societies. While he was an undergraduate, he served as secretary of the school’s AIEE student branch.

Alexander went on to receive master’s and doctoral degrees in electrical engineering from Ohio University in Athens, in 1967 and 1971 respectively. As a graduate student, he advised the university’s Eta Kappa Nu chapter, the engineering honor society that is now IEEE’s honor society. He significantly increased meeting attendance, he said in the oral history. Thanks to his efforts, he said, the chapter was ranked one of the top four in the country at the time.

After graduating, he joined Ohio University in 1971 as an assistant professor of electrical engineering. During this time, he also worked as a consultant for the U.S. Air Force and Navy, designing manufacturing processes for their various new systems. Alexander also designed a testing system for solid-state filters, which were used in atomic warheads for missiles on aircraft carriers.

He left a year later to join Youngstown State University, in Ohio, as an associate professor of electrical engineering. He was faculty advisor for the university’s IEEE student branch and helped increase its membership from 20 students to more than 200, according to the oral history. In 1980 he moved to Tennessee and became a professor of electrical engineering at Tennessee Tech University, in Cookeville. He also helped the school’s IEEE student branch boost its membership.

In 1986 he joined Temple University in Philadelphia as a professor and chair of the electrical engineering department. At the time, the university did not have an accredited engineering program, he said in the oral history.

“They brought me on board to help get the undergraduate programs in all three disciplines accredited,” he said. He also created master’s degree and Ph.D. programs for electrical engineering. He served as acting dean of the university’s college of engineering from 1989 to 1994.

After the engineering programs became accredited, Alexander said in the oral history, his job at Temple was done, so he left in 1994 to join California State University, Northridge, as dean of engineering and computer science.

Alexander later returned to Ohio University as a visiting professor of electrical engineering and computer science. From 1998 to 2002, he was interim director of the school’s Institute for Corrosion and Multiphase Technology. The institute’s researchers predict and resolve corrosion in oil and gas production and transportation infrastructure.

But after a few years, Alexander said, he missed creating and growing engineering programs at universities, so when an opportunity opened up at Cleveland State University in 2007, he took it. As dean of the university’s engineering school, he added 12 faculty positions.

Supporting student members’ professional development

Throughout his career, Alexander was an active IEEE volunteer. He served as chair of the IEEE Student Activities Committee, where he helped launch programs and services that are still being offered today. They include the IEEE Student Professional Awareness Program and the WriteTalk program (now ProSkills), which helps students develop their communication skills.

He was editor of the IEEE Transactions on Education. Along with IEEE Senior Member Jon R. McDearman, he helped launch IEEE Potentials.

“Potentials was designed to be something of value for the undergraduates, who don’t want to read technical papers,” Alexander said in the oral history. “We styled it after IEEE Spectrum. Jon and I decided to include articles that would help students on topics like career development and how to be successful.”

Alexander continued to rise through the ranks in IEEE and was elected the 1991–1992 Region 2 director. The following year, he became vice president of the IEEE United States Activities Board (now IEEE-USA) and served in that position for two years.

He was elevated to IEEE Fellow in 1994 “for leadership in the field of engineering education and the professional development of engineering students.”

He was elected as the 1997 IEEE president.

“It was an incredible honor,” he said in the oral history. “One of the very special things that has happened to me.”

He received the 1984 IEEE Centennial Medal as well as several awards for his work in education, including a 1998 Distinguished Engineering Education Achievement Award and a 1996 Distinguished Engineering Education Leadership Award, both from the Engineering Council, the United Kingdom’s regulatory body for the profession.

“Dr. Alexander always emphasized the value of developing professional and ethical skills to enhance engineering career success,” Watson says. “He encouraged others to apply Winston Churchill’s famous quote ‘We make a living by what we get but we make a life by what we give.’”


John Bardeen’s Terrific Transistorized Music Box

On 16 December 1947, after months of work and refinement, the Bell Labs physicists John Bardeen and Walter Brattain completed their critical experiment proving the effectiveness of the point-contact transistor. Six months later, Bell Labs gave a demonstration to officials from the U.S. military, who chose not to classify the technology because of its potentially broad applications. The following week, news of the transistor was released to the press. The New York Herald Tribune predicted that it would cause a revolution in the electronics industry. It did.


How John Bardeen got his music box

In 1949 an engineer at Bell Labs built three music boxes to show off the new transistors. Each Transistor Oscillator-Amplifier Box contained an oscillator-amplifier circuit and two point-contact transistors powered by a B-type battery. It electronically produced five distinct tones, although the sounds were not exactly melodious delights to the ear. The box’s design was a simple LC circuit, consisting of a capacitor and an inductor. The capacitance was selectable using the switch bank, which Bardeen “played” when he demonstrated the box.
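For readers curious about the arithmetic behind those five tones: an ideal LC tank oscillates at f = 1/(2π√(LC)), so each switch position swaps in a different capacitor and shifts the pitch. The sketch below works through that formula in Python with hypothetical component values; the article does not record the box’s actual inductor or capacitors.

```python
import math

def lc_frequency(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values for illustration only -- the article does not give the
# box's real component values. A 100 mH inductor with capacitors in the
# tens-of-nanofarads range puts the tones in the audible band.
L = 0.1  # henries
for C_nf in (100, 68, 47, 33, 22):  # one capacitor per switch position
    f = lc_frequency(L, C_nf * 1e-9)
    print(f"C = {C_nf:>3} nF  ->  tone of roughly {f:,.0f} Hz")
```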

John Bardeen, co-inventor of the point-contact transistor, liked to play the tune “How Dry I Am” on his music box. Photo: The Spurlock Museum/University of Illinois at Urbana-Champaign

Bell Labs used one of the boxes to demonstrate the transistor’s portability. In early demonstrations, the instantaneous response of the circuits wowed witnesses, who were accustomed to having to wait for vacuum tubes to warm up. The other two music boxes went to Bardeen and Brattain. Only Bardeen’s survives.

Bardeen brought his box to the University of Illinois at Urbana-Champaign when he joined the faculty in 1951. Despite his groundbreaking work at Bell Labs, he was relieved to move. Shortly after the invention of the transistor, Bardeen’s work environment began to deteriorate. William Shockley, Bardeen’s notoriously difficult boss, prevented him from further involvement in transistors, and Bell Labs refused to allow Bardeen to set up another research group that focused on theory.

Frederick Seitz recruited Bardeen to Illinois with a joint appointment in electrical engineering and physics, and he spent the rest of his career there. Although Bardeen earned a reputation as an unexceptional instructor—an opinion his student Nick Holonyak Jr. would argue was unwarranted—he often got a laugh from students when he used the music box to play the Prohibition-era song “How Dry I Am.” He had a key to the sequence of notes taped to the top of the box.

In 1956, Bardeen, Brattain, and Shockley shared the Nobel Prize in Physics for their “research on semiconductors and their discovery of the transistor effect.” That same year, Bardeen collaborated with postdoc Leon Cooper and grad student J. Robert Schrieffer on the work that led to their April 1957 publication in Physical Review of “Microscopic Theory of Superconductivity.” The trio won a Nobel Prize in 1972 for the development of the BCS model of superconductivity (named after their initials). Bardeen was the first person to win two Nobels in the same field and remains the only double laureate in physics. He died in 1991.

Overcoming the “inherent vice” of Bardeen’s music box

Curators at the Smithsonian Institution expressed interest in the box, but Bardeen instead offered it on a long-term loan to the World Heritage Museum (predecessor to the Spurlock Museum) at the University of Illinois. That way he could still occasionally borrow it for use in a demonstration.

In general, though, museums frown upon allowing donors—or really anyone—to operate objects in their collections. It’s a sensible policy. After all, the purpose of preserving objects in a museum is so that future generations have access to them, and any additional use can cause deterioration or damage. (Rest assured, once the music box became part of the accessioned collections after Bardeen’s death, few people were allowed to handle it other than for approved research.) But musical instruments, and by extension music boxes, are functional objects: Much of their value comes from the sound they produce. So curators have to strike a balance between use and preservation.

As it happens, Bardeen’s music box worked up until the 1990s. That’s when “inherent vice” set in. In the lexicon of museum practice, inherent vice refers to the natural tendency for certain materials to decay despite preservation specialists’ best attempts to store the items at the ideal temperature, humidity, and light levels. Nitrate film, highly acidic paper, and natural rubber are classic examples. Some objects decay quickly because the mixture of materials in them creates unstable chemical reactions. Inherent vice is a headache for any curator trying to keep electronics in working order.

The museum asked John Dallesasse, a professor of electrical engineering at Illinois, to take a look at the box, hoping that it just needed a new battery. Dallesasse’s mentor at Illinois was Holonyak, whose mentor was Bardeen. So Dallesasse considered himself Bardeen’s academic grandson.

It soon became clear that one of the original point-contact transistors had failed, and several of the wax capacitors had degraded, Dallesasse told me recently. But returning the music box to operable status was not as simple as replacing those parts. Most professional conservators abide by a code of ethics that limits their intervention; they make only changes that can be easily reversed.

In 2019, University of Illinois professor John Dallesasse carefully restored Bardeen’s music box. Photo: The Spurlock Museum/University of Illinois at Urbana-Champaign

The museum was lucky in one respect: The point-contact transistor had failed as an open circuit instead of a short. This allowed Dallesasse to jumper in replacement parts, running wires from the music box to an external breadboard to bypass the failed components, instead of undoing any of the original soldering. He made sure to use period-appropriate parts, including a working point-contact transistor borrowed from Bardeen’s son Bill, even though that technology had been superseded by bipolar junction transistors.

Despite Dallesasse’s best efforts, the rewired box emitted a slight hum at about 30 kilohertz that wasn’t present in the original. He concluded that it was likely due to the extra wiring. He adjusted some of the capacitor values to tune the tones closer to the box’s original sounds. Dallesasse and others recalled that the first tone had been lower. Unfortunately, the frequency could not be reduced any further because it was at the edge of performance for the oscillator.


A short video of the restoration, “Restoring the Bardeen Music Box,” is available on YouTube.

From a preservation perspective, one of the most important things Dallesasse did was to document the restoration process. Bardeen had received the box as a gift without any documentation from the original designer, so Dallesasse mapped out the circuit, which helped him with the troubleshooting. Also, documentary filmmaker Amy Young and multimedia producer Jack Brighton recorded a short video of Dallesasse explaining his approach and technique. Now future historians have resources about the second life of the music box, and we can all hear a transistor-generated rendition of “How Dry I Am.”

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the December 2022 print issue as “John Bardeen’s Marvelous Music Box.”


Andrew Ng: Unbiggen AI

Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company
Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.

“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
—Andrew Ng, CEO & Founder, Landing AI

I remember when my students and I published the first
NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a
data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.

“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”
—Andrew Ng

For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
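As a rough illustration of the kind of tooling Ng describes, the sketch below flags images whose annotators disagree so they can be sent back for consistent relabeling. The data format and function names are hypothetical; they are not Landing AI’s actual interfaces.

```python
from collections import defaultdict

def find_inconsistent_labels(annotations):
    """Group annotations by image and flag images whose annotators disagree.

    `annotations` is a list of (image_id, annotator_id, label) tuples --
    a hypothetical format; real tooling would read from a labeling platform.
    """
    labels_per_image = defaultdict(set)
    for image_id, _annotator, label in annotations:
        labels_per_image[image_id].add(label)
    # Any image with more than one distinct label needs review.
    return {img: labels for img, labels in labels_per_image.items() if len(labels) > 1}

annotations = [
    ("img_001", "ann_a", "scratch"),
    ("img_001", "ann_b", "dent"),      # disagreement -> flagged
    ("img_002", "ann_a", "scratch"),
    ("img_002", "ann_b", "scratch"),
]
for image_id, labels in find_inconsistent_labels(annotations).items():
    print(f"{image_id}: conflicting labels {sorted(labels)} -- send back for relabeling")
```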

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
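The car-noise anecdote is an instance of slicing evaluation results by metadata to decide where more data would pay off. Here is a minimal, hypothetical sketch of that kind of error analysis; the field names are illustrative and not taken from any particular toolkit.

```python
from collections import defaultdict

def error_rate_by_slice(examples):
    """Compute the error rate for each metadata slice.

    `examples` is a list of dicts with hypothetical keys:
    'slice' (e.g., background condition) and 'correct' (bool).
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["slice"]] += 1
        errors[ex["slice"]] += 0 if ex["correct"] else 1
    return {s: errors[s] / totals[s] for s in totals}

eval_set = [
    {"slice": "quiet", "correct": True},
    {"slice": "quiet", "correct": True},
    {"slice": "car_noise", "correct": False},
    {"slice": "car_noise", "correct": True},
    {"slice": "car_noise", "correct": False},
]
# Worst-performing slice first: that is where collecting more data helps most.
for background, rate in sorted(error_rate_by_slice(eval_set).items(), key=lambda kv: -kv[1]):
    print(f"{background}: {rate:.0%} error")
```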


What about using synthetic data, is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.

“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”
—Andrew Ng

Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
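As a concrete example of the simpler route, the sketch below applies targeted augmentation (flips, rotations, brightness jitter) only to an under-represented defect class, such as the pit marks mentioned above. It is an illustrative stand-in with synthetic input arrays, not a real synthetic-data pipeline or anything specific to LandingLens.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_class(images, n_extra):
    """Generate extra examples for one under-represented defect class
    using simple flips, 90-degree rotations, and brightness jitter.

    `images`: list of HxWxC uint8 arrays for that class (hypothetical input).
    """
    out = []
    for _ in range(n_extra):
        img = images[rng.integers(len(images))].copy()
        if rng.random() < 0.5:
            img = np.fliplr(img)                    # horizontal flip
        img = np.rot90(img, k=rng.integers(4))      # random 90-degree rotation
        jitter = rng.uniform(0.8, 1.2)              # brightness jitter
        out.append(np.clip(img.astype(np.float32) * jitter, 0, 255).astype(np.uint8))
    return out

# Stand-in data: 30 random "pit mark" crops; real use would load labeled images.
pit_mark_images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(30)]
extra = augment_class(pit_mark_images, n_extra=300)   # grow only the weak class
print(len(extra), "augmented pit-mark examples")
```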


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”


Palo Alto’s Grid Isn’t Ready for Its EV and Electrification Goals

Palo Alto’s government has set a very aggressive Sustainability and Climate Action Plan with a goal of reducing its greenhouse gas emissions to 80 percent below the 1990 level by the year 2030. In comparison, the state’s goal is to achieve this amount by 2050. To realize this reduction, 80 percent of the vehicles registered in (and commuting into) the city—around 100,000 in total—must be EVs within the next eight years. The number of charging ports will need to grow to an estimated 6,000 to 12,000 public ports (some 300 of them DC fast chargers) and 18,000 to 26,000 residential ports, most of them L2-type charging ports.

“There are places even today where we can’t even take one more heat pump without having to rebuild the portion of the system. Or we can’t even have one EV charger go in.” —Tomm Marshall

To meet Palo Alto’s 2030 emission-reduction goals, the city, which owns and operates the electric utility, wants to significantly increase the amount of local renewable energy used for electricity generation (think rooftop solar), including the ability to use EVs as distributed-energy resources through vehicle-to-grid (V2G) connections. The city has provided incentives for the purchase of EVs and charging ports, the installation of heat-pump water heaters, and the installation of solar and battery-storage systems.

There are, however, a few potholes that need to be filled to meet the city’s 2030 emission objectives. At a February meeting of Palo Alto’s Utilities Advisory Commission, Tomm Marshall, assistant director of utilities, stated, “There are places even today [in the city] where we can’t even take one more heat pump without having to rebuild the portion of the [electrical distribution] system. Or we can’t even have one EV charger go in.”

Peak loading is the primary concern. Palo Alto’s electrical-distribution system was built for the electric loads of the 1950s and 1960s, when household heating, water, and cooking were running mainly on natural gas. The distribution system does not have the capacity to support EVs and all electric appliances at scale, Marshall suggested. Further, the system was designed for one-way power, not for distributed-renewable-energy devices sending power back into the system.

A big problem is the 3,150 distribution transformers in the city, Marshall indicated. A 2020 electrification-impact study found that without improvements, more than 95 percent of residential transformers would be overloaded if Palo Alto hits its EV and electrical-appliance targets by 2030.

Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads.

For instance, Marshall stated, it is not unusual for a 37.5 kilovolt-ampere transformer to support 15 households, as the distribution system was originally designed for each household to draw 2 kilowatts of power. Converting a gas appliance to a heat pump, for example, would draw 4 to 6 kW, while an L2 charger for EVs would draw 12 to 14 kW. A cluster of uncoordinated L2 charging could create an excessive peak load that would overload or blow out a transformer, especially when the transformers are near the end of their lives, as many already are. Without smart meters—that is, Advanced Metering Infrastructure (AMI), which will be introduced into Palo Alto in 2024—the utility has little to no insight into household peak loads.
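A back-of-the-envelope calculation, using only the figures quoted above, shows how quickly a 37.5-kVA transformer serving 15 households runs out of headroom. The worst-case assumption that everything draws power at once is mine for illustration, not the utility’s actual load model.

```python
# Back-of-the-envelope transformer loading, using the figures quoted in the
# article (37.5 kVA unit, 15 households, 2 kW legacy design load, 4-6 kW heat
# pumps, 12-14 kW L2 chargers). Treats kVA as roughly kW (unity power factor)
# and assumes the worst case of simultaneous, uncoordinated use.
TRANSFORMER_KVA = 37.5
HOUSEHOLDS = 15
BASE_KW = 2.0         # original design load per household
HEAT_PUMP_KW = 5.0    # midpoint of the 4-6 kW range
L2_CHARGER_KW = 13.0  # midpoint of the 12-14 kW range

def peak_load(n_heat_pumps: int, n_l2_chargers: int) -> float:
    return HOUSEHOLDS * BASE_KW + n_heat_pumps * HEAT_PUMP_KW + n_l2_chargers * L2_CHARGER_KW

for pumps, chargers in [(0, 0), (5, 0), (0, 2), (5, 3), (10, 5)]:
    load = peak_load(pumps, chargers)
    print(f"{pumps} heat pumps, {chargers} L2 chargers -> {load:5.1f} kW "
          f"({load / TRANSFORMER_KVA:4.0%} of a 37.5 kVA transformer)")
```

Even two simultaneous L2 chargers push the example transformer well past its rating, which is the scenario Marshall describes.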

Palo Alto’s electrical-distribution system needs a complete upgrade to allow the utility to balance peak loads, manage two-way power flows, install the requisite number of EV charging ports and electric appliances to support the city’s emission-reduction goals, and deliver power in a safe, reliable, sustainable, and cybersecure manner. The system also must be able to cope in a multihour-outage situation, where future electrical appliances and EV charging will commence all at once when power is restored, placing a heavy peak load on the distribution system.

A map of EV charging stations in the Palo Alto, Calif., area, from PlugShare.com

Palo Alto is considering investing US $150 million to modernize its distribution system, but that will take two to three years of planning and another three to four years or more to perform all the necessary work, and then only if the utility can get the engineering and management staff, which continues to be in short supply there and at other utilities across the country. Further, like other industries, the energy business has become digitized, meaning the skills needed are different from those previously required.

Until it can modernize its distribution network, Marshall conceded, the utility must continue to deal with angry and confused customers who are being encouraged by the city to invest in EVs, charging ports, and electric appliances, only to be told that they may not be accommodated anytime soon.

Policy runs up against engineering reality

The situation in Palo Alto is not unique. There are some 465 cities in the United States with populations between 50,000 and 100,000 residents, and another 315 that are larger, many facing similar challenges. How many can really support a rapid influx of thousands of new EVs? Phoenix, for example, wants 280,000 EVs plying its streets by 2030, nearly seven times as many as it has currently. Similar mismatches between climate-policy desires and an energy infrastructure incapable of supporting those policies will play out across not only the United States but elsewhere in one form or another over the next two decades as conversion to EVs and electric appliances moves to scale.

As in Palo Alto, it will likely be blown transformers or constantly flickering lights that signal there is an EV charging-load issue. Professor Deepak Divan, the director of the Center for Distributed Energy at Georgia Tech, says his team found that in residential areas “multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” Given that most of the millions of U.S. transformers are approaching the end of their useful lives, replacing transformers soon could be a major and costly headache for utilities, assuming they can get them.

Supplies of distribution transformers are low, and costs have skyrocketed from $3,000 to $4,000 each to as much as $20,000. Supporting EVs may require larger, heavier transformers, which means many of the 180 million power poles on which they sit will need to be replaced to bear the additional weight.

Exacerbating the transformer loading problem, Divan says, is that many utilities “have no visibility beyond the substation” into how and when power is being consumed. His team surveyed “twenty-nine utilities for detailed voltage data from their AMI systems, and no one had it.”

This situation is not true universally. Xcel Energy in Minnesota, for example, has already started to upgrade distribution transformers because of potential residential EV electrical-load issues. Xcel president Chris Clark told the Minneapolis Star Tribune that four or five families buying EVs noticeably affects the transformer load in a neighborhood, with a family buying an EV “adding another half of their house.”

Joyce Bodoh, director of energy solutions and clean energy for Virginia’s Rappahannock Electric Cooperative (REC), a utility distributor in central Virginia, says that “REC leadership is really, really supportive of electrification, energy efficiency, and electric transportation.” However, she adds, “all those things are not a magic wand. You can’t make all three things happen at the same time without a lot of forward thinking and planning.”

As part of this planning effort, Bodoh says that REC has actively been performing “an engineering study that looked at line loss across our systems as well as our transformers, and said, ‘If this transformer got one L2 charger, what would happen? If it got two L2s, what would happen, and so on?’” She adds that REC “is trying to do its due diligence, so we don’t get surprised when a cul-de-sac gets a bunch of L2 chargers and there’s a power outage.”

REC also has hourly energy-use data from which it can infer where L2 chargers may be in use, based on the distinctive load profile of EV charging. However, Bodoh says, REC wants not just to know where the L2 chargers are but also to encourage its EV-owning customers to charge at nonpeak hours—that is, 9 p.m. to 5 a.m. and 10 a.m. to 2 p.m. REC has recently set up an EV charging pilot program for 200 EV owners that provides a $7 monthly credit if they do off-peak charging. Whether REC or other utilities can convince enough EV owners with L2 chargers to consistently charge during off-peak hours remains to be seen.
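
As a toy illustration of how hourly meter data can reveal this kind of charging, the sketch below flags hours in which a household’s load jumps well above its baseline and checks how much of that suspected charging falls inside the off-peak windows REC cites. The 6-kW threshold and the example day are assumptions for illustration, not REC’s actual screening method.

```python
# Toy sketch: flag hours in a household's hourly load series that look like
# L2 charging (a sustained jump of several kW above baseline) and compute how
# much of it falls in the off-peak windows cited above (9 p.m.-5 a.m. and
# 10 a.m.-2 p.m.). The 6-kW threshold is an assumption for illustration.

from typing import List

OFF_PEAK_HOURS = set(range(21, 24)) | set(range(0, 5)) | set(range(10, 14))
L2_THRESHOLD_KW = 6.0   # load above the home's baseline that suggests L2 charging

def l2_hours(hourly_kw: List[float], baseline_kw: float) -> List[int]:
    """Return hours (0-23) whose load exceeds baseline by the L2 threshold."""
    return [h for h, kw in enumerate(hourly_kw) if kw - baseline_kw >= L2_THRESHOLD_KW]

def off_peak_share(hours: List[int]) -> float:
    """Fraction of suspected charging hours that fall in the off-peak windows."""
    if not hours:
        return 0.0
    return sum(1 for h in hours if h in OFF_PEAK_HOURS) / len(hours)

# Example day: baseline ~1.5 kW, with an apparent charging session 22:00-02:00.
day = [1.5] * 24
for h in (22, 23, 0, 1):
    day[h] = 9.0

suspects = l2_hours(day, baseline_kw=1.5)
print(suspects, f"{off_peak_share(suspects):.0%} off-peak")
```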

“Multiple L2 chargers on one distribution transformer can reduce its life from an expected 30 to 40 years to 3 years.” —Deepak Divan

Even if EV owner behavior changes, off-peak charging may not fully solve the peak-load problem once EV ownership really ramps up. “Transformers are passively cooled devices,” specifically designed to be cooled at night, says Divan. “When you change the (power) consumption profile by adding several EVs using L2 chargers at night, that transformer is running hot.” The risk of transformer failure from uncoordinated overnight charging may be especially aggravated during times of summer heat waves, an issue that concerns Palo Alto’s utility managers.

There are technical solutions available to help spread EV charging peak loads, but utilities will have to make the investments in better transformers and smart metering systems, as well as get regulatory permission to change electricity-rate structures to encourage off-peak charging. Vehicle-to-grid (V2G), which allows an EV to serve as a storage device to smooth out grid loads, may be another solution, but for most utilities in the United States, this is a long-term option. Numerous issues need to be addressed, such as the updating of millions of household electrical panels and smart meters to accommodate V2G, the creation of agreed-upon national technical standards for the information exchange needed between EVs and local utilities, the development of V2G regulatory policies, and residential and commercial business models, including fair compensation for utilizing an EV’s stored energy.

As energy expert Chris Nelder noted at a National Academy EV workshop, “vehicle-to-grid is not really a thing, at least not yet. I don’t expect it to be for quite some time until we solve a lot of problems at various utility commissions, state by state, rate by rate.”

In the next article in the series, we will look at the complexities of creating an EV charging infrastructure.


The Future of the Transistor Is Our Future

This is a guest post in recognition of the 75th anniversary of the invention of the transistor. It is adapted from an essay in the July 2022 IEEE Electron Device Society Newsletter. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

On the 75th anniversary of the invention of the transistor, a device to which I have devoted my entire career, I’d like to answer two questions: Does the world need better transistors? And if so, what will they be like?

I would argue that, yes, we are going to need new transistors, and I think we have some hints today of what they will be like. Whether we’ll have the will and the economic ability to make them is the question.

I believe the transistor is and will remain key to grappling with the impacts of global warming. With its potential for societal, economic, and personal upheaval, climate change calls for tools that give us humans orders-of-magnitude more capability.

Semiconductors can raise the abilities of humanity like no other technology. Almost by definition, all technologies increase human abilities. But for most of them, natural-resource and energy constraints make orders-of-magnitude improvements questionable. Transistor-enabled technology is a unique exception, for the following reasons.

  1. As transistors improve, they enable new abilities such as computing and high-speed communication, the Internet, smartphones, memory and storage, robotics, artificial intelligence, and other things no one has thought of yet.
  2. These abilities have wide applications, and they transform all technologies, industries, and sciences.
  3. Semiconductor technology is not nearly as limited in growth by its material and energy usage as other technologies are. ICs use relatively small amounts of materials. And the smaller they are made and the fewer materials they use, the faster, more energy efficient, and more capable they become.
  4. Theoretically, the energy required for information processing can still be reduced to less than one-thousandth of what is required today (a worked bound is sketched after this list). Although we do not yet know exactly how to approach such theoretical efficiency, we know that increasing energy efficiency a thousandfold would not violate physical laws. In contrast, the energy efficiencies of most other technologies, such as motors and lighting, are already at 30 to 80 percent of their theoretical limits.
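
The essay does not name a specific physical limit, but one common way to see why such a large theoretical margin exists is Landauer’s bound on the minimum energy needed to erase a single bit at room temperature. The arithmetic below is illustrative context rather than part of the author’s argument:

```latex
% Landauer's bound on the minimum energy to erase one bit at temperature T:
\[
E_{\min} = k_B T \ln 2
         \approx (1.38\times 10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693)
         \approx 2.9\times 10^{-21}\ \mathrm{J\ per\ bit},
\]
% many orders of magnitude below the energy dissipated per bit operation in
% today's logic and memory systems -- consistent with the claim that a
% thousandfold efficiency gain would not violate physical laws.
```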

Transistors: past, present, and future

How we’ll continue to improve transistor technology is relatively clear in the short term, but it gets murkier the farther out you go from today. In the near term, you can glimpse the transistor’s future by looking at its recent past.

The basic planar (2D) MOSFET structure remained unchanged from 1960 until around 2010, when it became impossible to further increase transistor density and decrease the device’s power consumption. My lab at the University of California, Berkeley, saw that point coming more than a decade earlier. We reported the invention of the FinFET, the planar transistor’s successor, in 1999. FinFET, the first 3D MOSFET, changed the flat and wide transistor structure to a tall and narrow one. The benefit is better performance in a smaller footprint, much like the benefit of multistory buildings over single-story ones in a crowded city.

The FinFET is also what’s called a thin-body MOSFET, a concept that continues to guide the development of new devices. It arose from the insight that current will not leak through a transistor within several nanometers of the silicon surface because the surface potential there is well controlled by the gate voltage. FinFETs take this thin-body concept to heart. The device’s body is the vertical silicon fin, which is covered by oxide insulator and gate metal, leaving no silicon outside the range of strong gate control. FinFETs reduced leakage current by orders of magnitude and lowered transistor operating voltage. The FinFET also pointed the way toward further improvement: reducing the body thickness even more.

The fin of the FinFET has become thinner and taller with each new technology node. But this progress has become too difficult to maintain. So the industry is adopting a new 3D thin-body CMOS structure, called gate-all-around (GAA). Here, a stack of semiconductor ribbons makes up the thin body.

Each evolution of the MOSFET structure has been aimed at producing better control over charge in the silicon by the gate [pink]. Dielectric [yellow] prevents charge from moving from the gate into the silicon body [blue].

The 3D thin-body trend will continue from these 3D transistors to 3D-stacked transistors, 3D monolithic circuits, and multichip packaging. In some cases, this 3D trend has already reached great heights. For instance, the regularity of the charge-trap memory-transistor array allowed NAND flash memory to be the first IC to transition from 2D circuits to 3D circuits. Since the first report of 3D NAND by Toshiba in 2007, the number of stacked layers has grown from 4 to beyond 200.

Monolithic 3D logic ICs will likely start modestly, with stacking the two transistors of a CMOS inverter to reduce all logic gates’ footprints [see “3D-Stacked CMOS Takes Moore’s Law to New Heights”]. But the number of stacks may grow. Other paths to 3D ICs may employ the transfer or deposition of additional layers of semiconductor films, such as silicon, silicon germanium, or indium gallium arsenide onto a silicon wafer.

The thin-body trend might meet its ultimate endpoint in 2D semiconductors, whose thickness is measured in atoms. Molybdenum disulfide molecules, for example, are both naturally thin and relatively large, forming a 2D semiconductor that may be no more than three atoms thick yet has very good semiconductor properties. In 2016, engineers in California and Texas used a film of the 2D semiconductor molybdenum disulfide and a carbon nanotube to demonstrate a MOSFET with a critical dimension: a gate length of just 1 nanometer. Even with a gate as short as 1 nm, the transistor leakage current was only 10 nanoamperes per millimeter, comparable with today’s best production transistors.

“The progress of transistor technology has not been even or smooth.”

One can imagine that in the distant future, the entire transistor may be prefabricated as a single molecule. These prefabricated building blocks might be brought to their precise locations in an IC through a process called directed self-assembly (DSA). To understand DSA, it may be helpful to recall that a COVID virus uses its spikes to find and chemically dock itself onto an exact spot at the surface of particular human cells. In DSA, the docking spots, the “spikes,” and the transistor cargo are all carefully designed and manufactured. The initial docking spots may be created with lithography on a substrate, but additional docking spots may be brought in as cargo in subsequent steps. Some of the cargo may be removed by heat or other means if it is needed only during the fabrication process but not in the final product.

Besides making transistors smaller, we’ll have to keep reducing their power consumption. Here we could see an order-of-magnitude reduction through the use of what are called negative-capacitance field-effect transistors (NCFET). These require the insertion of a nanometer-thin layer of ferroelectric material, such as hafnium zirconium oxide, in the MOSFET’s gate stack. Because the ferroelectric contains its own internal electric field, it takes less energy to switch the device on or off. An additional advantage of the thin ferroelectric is the possible use of the ferroelectric’s capacity to store a bit as the state of its electric field, thereby integrating memory and computing in the same device.

The author [left] received the U.S. National Medal of Technology and Innovation from President Barack Obama [right] in 2016. Kevin Dietsch/UPI/Alamy

To some degree the devices I’ve described arose out of existing trends. But future transistors may have very different materials, structures, and operating mechanisms from those of today’s transistor. For example, the nanoelectromechanical switch is a return to the mechanical relays of decades past rather than an extension of the transistor. Rather than relying on the physics of semiconductors, it uses only metals, dielectrics, and the force between closely spaced conductors with different voltages applied to them.

All these examples were demonstrated experimentally years ago. However, bringing them to production will require much more time and effort than previous breakthroughs in semiconductor technology did.

Getting to the future

Will we be able to achieve these feats? Some lessons from the past indicate that we could.

The first lesson is that the progress of transistor technology has not been even or smooth. Around 1980, the rising power consumption per chip reached a painful level. The adoption of CMOS, replacing NMOS and bipolar technologies—and later, the gradual reduction of operating voltage from 5 volts to 1—gave the industry 30 years of more or less straightforward progress. But again, power became an issue. Between 2000 and 2010, the heat generated per square centimeter of IC was projected by thoughtful researchers to soon reach that of a nuclear-reactor core. The adoption of 3D thin-body FinFETs and multicore processor architectures averted the crisis and ushered in another period of relatively smooth progress.

The history of transistor technology may be described as climbing one mountain after another. Only when we got to the top of one were we able to see the vista beyond and map a route up the next taller and steeper mountain.

The second lesson is that the core strength of the semiconductor industry—nanofabrication—is formidable. History proves that, given sufficient time and economic incentives, the industry has been able to turn any idea into reality, as long as that idea does not violate scientific laws.

But will the industry have sufficient time and economic incentives to continue climbing taller and steeper mountains and keep raising humanity’s abilities?

It’s a fair question. Even as the fab industry’s resources grow, the mountains of technology development grow even faster. A time may come when no one fab company can reach the top of the mountain to see the path ahead. What happens then?

The revenue of all semiconductor fabs (both independent and those, like Intel, that are integrated companies) is about one-third of the semiconductor industry revenue. But fabs make up just 2 percent of the combined revenues of the IT, telecommunications, and consumer-electronics industries that semiconductor technology enables. Yet the fab industry bears most of the growing burden of discovering, producing, and marketing new transistors and nanofabrication technologies. That needs to change.

For the industry to survive, the relatively meager resources of the fab industry must be prioritized in favor of fab building and shareholder needs over scientific exploration. While the fab industry is lengthening its research time horizon, it needs others to take on the burden too. Humanity’s long-term problem-solving abilities deserve targeted public support. The industry needs the help of very-long-term exploratory research, publicly funded, in a Bell Labs–like setting or by university researchers with career-long timelines and wider and deeper knowledge in physics, chemistry, biology, and algorithms than corporate research currently allows. This way, humanity will continue to find new transistors and gain the abilities it will need to face the challenges in the centuries ahead.


Waiting for Superbatteries

Utrecht, a largely bicycle-propelled city of 350,000 just south of Amsterdam, has become a proving ground for the bidirectional-charging techniques that have the rapt interest of automakers, engineers, city managers, and power utilities the world over. This initiative is taking place in an environment where everyday citizens want to travel without causing emissions and are increasingly aware of the value of renewables and energy security.

“We wanted to change,” says Eelco Eerenberg, one of Utrecht’s deputy mayors and alderman for development, education, and public health. And part of the change involves extending the city’s EV-charging network. “We want to predict where we need to build the next electric charging station.”

So it’s a good moment to consider where vehicle-to-grid concepts first emerged and to see in Utrecht how far they’ve come.

It’s been 25 years since University of Delaware energy and environmental expert Willett Kempton and Green Mountain College energy economist Steve Letendre outlined what they saw as a “dawning interaction between electric-drive vehicles and the electric supply system.” This duo, alongside Timothy Lipman of the University of California, Berkeley, and Alec Brooks of AC Propulsion, laid the foundation for vehicle-to-grid power.

The inverter converts alternating current to direct current when charging the vehicle and back the other way when sending power into the grid. This is good for the grid. It’s yet to be shown clearly why that’s good for the driver.

Their initial idea was that garaged vehicles would have a two-way computer-controlled connection to the electric grid, which could receive power from the vehicle as well as provide power to it. Kempton and Letendre’s
1997 paper in the journal Transportation Research describes how battery power from EVs in people’s homes would feed the grid during a utility emergency or blackout. With on-street chargers, you wouldn’t even need the house.

Bidirectional charging uses an inverter about the size of a breadbasket, located either in a dedicated charging box or onboard the car. The inverter converts alternating current to direct current when charging the vehicle and back the other way when sending power into the grid. This is good for the grid. It’s yet to be shown clearly why that’s good for the driver.

This is a vexing question. Car owners can earn some money by giving a little energy back to the grid at opportune times, or can save on their power bills, or can indirectly subsidize operation of their cars this way. But from the time Kempton and Letendre outlined the concept, potential users also feared losing money, through battery wear and tear. That is, would cycling the battery more than necessary prematurely degrade the very heart of the car? Those lingering questions made it unclear whether vehicle-to-grid technologies would ever catch on.

Market watchers have seen a parade of “just about there” moments for vehicle-to-grid technology. In the United States in 2011, the University of Delaware and the New Jersey–based utility NRG Energy signed a
technology-license deal for the first commercial deployment of vehicle-to-grid technology. Their research partnership ran for four years.

In recent years, there’s been an uptick in these pilot projects across Europe and the United States, as well as in China, Japan, and South Korea. In the United Kingdom, experiments are
now taking place in suburban homes, using outside wall-mounted chargers metered to give credit to vehicle owners on their utility bills in exchange for uploading battery juice during peak hours. Other trials include commercial auto fleets, a set of utility vans in Copenhagen, two electric school buses in Illinois, and five in New York.

These pilot programs have remained just that, though—pilots. None evolved into a large-scale system. That could change soon. Concerns about battery wear and tear are abating. Last year, Heta Gandhi and Andrew White of the
University of Rochester modeled vehicle-to-grid economics and found battery-degradation costs to be minimal. Gandhi and White also noted that battery capital costs have fallen markedly over time, from well over US $1,000 per kilowatt-hour in 2010 to about $140 in 2020.
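
A very rough way to see why falling pack prices matter here is to amortize the battery’s capital cost over its lifetime energy throughput. The cycle-life figure below is an assumption for illustration only; the cited study models degradation in far more detail.

```python
# Back-of-envelope wear cost per kWh cycled through an EV battery, to show why
# falling pack prices matter for vehicle-to-grid economics. The 2,000-cycle
# life is an assumption for illustration, not a figure from the cited study.

FULL_EQUIVALENT_CYCLES = 2000      # assumed usable cycle life of a pack

def wear_cost_per_kwh(pack_cost_per_kwh: float) -> float:
    """Capital cost amortized over lifetime energy throughput."""
    return pack_cost_per_kwh / FULL_EQUIVALENT_CYCLES

for year, cost in [(2010, 1000), (2020, 140)]:
    print(f"{year}: ~${wear_cost_per_kwh(cost):.2f} per kWh cycled")

# Shallow cycling of the kind Berg describes below, and the fact that calendar
# aging dominates for lightly used packs, push the marginal cost lower still.
```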

As vehicle-to-grid technology becomes feasible, Utrecht is one of the first places to fully embrace it.

The key force behind the changes taking place in this windswept Dutch city is not a global market trend or the maturity of the engineering solutions. It’s having motivated people who are also in the right place at the right time.

One is Robin Berg, who started a company called
We Drive Solar from his Utrecht home in 2016. It has evolved into a car-sharing fleet operator with 225 electric vehicles of various makes and models—mostly Renault Zoes, but also Tesla Model 3s, Hyundai Konas, and Hyundai Ioniq 5s. Drawing in partners along the way, Berg has plotted ways to bring bidirectional charging to the We Drive Solar fleet. His company now has 27 vehicles with bidirectional capabilities, with another 150 expected to be added in coming months.

In 2019, Willem-Alexander, king of the Netherlands, presided over the installation of a bidirectional charging station in Utrecht. Here the king [middle] is shown with Robin Berg [left], founder of We Drive Solar, and Jerôme Pannaud [right], Renault’s general manager for Belgium, the Netherlands, and Luxembourg. Patrick van Katwijk/Getty Images

Amassing that fleet wasn’t easy. We Drive Solar’s two bidirectional Renault Zoes are prototypes, which Berg obtained by partnering with the French automaker. Production Zoes capable of bidirectional charging have yet to come out. Last April, Hyundai delivered 25 bidirectionally capable long-range Ioniq 5s to We Drive Solar. These are production cars with modified software, which Hyundai is making in small numbers. It plans to introduce the technology as standard in an upcoming model.

We Drive Solar’s 1,500 subscribers don’t have to worry about battery wear and tear—that’s the company’s problem, if it is one, and Berg doesn’t think it is. “We never go to the edges of the battery,” he says, meaning that the battery is never put into a charge state high or low enough to shorten its life materially.

We Drive Solar is not a free-flowing, pick-up-by-app-and-drop-where-you-want service. Cars have dedicated parking spots. Subscribers reserve their vehicles, pick them up and drop them off in the same place, and drive them wherever they like. On the day I visited Berg, two of his cars were headed as far as the Swiss Alps, and one was going to Norway. Berg wants his customers to view particular cars (and the associated parking spots) as theirs and to use the same vehicle regularly, gaining a sense of ownership for something they don’t own at all.

That Berg took the plunge into EV ride-sharing and, in particular, into power-networking technology like bidirectional charging, isn’t surprising. In the early 2000s, he started a local service provider called LomboXnet, installing line-of-sight Wi-Fi antennas on a church steeple and on the rooftop of one of the tallest hotels in town. When Internet traffic began to crowd his radio-based network, he rolled out fiber-optic cable.

In 2007, Berg landed a contract to install rooftop solar at a local school, with the idea to set up a microgrid. He now manages 10,000 schoolhouse rooftop panels across the city. A collection of power meters lines his hallway closet, and they monitor solar energy flowing, in part, to his company’s electric-car batteries—hence the company name, We Drive Solar.

Berg did not learn about bidirectional charging through Kempton or any of the other early champions of vehicle-to-grid technology. He heard about it because of the
Fukushima nuclear-plant disaster a decade ago. He owned a Nissan Leaf at the time, and he read about how these cars supplied emergency power in the Fukushima region.

“Okay, this is interesting technology,” Berg recalls thinking. “Is there a way to scale it up here?” Nissan agreed to ship him a bidirectional charger, and Berg called Utrecht city planners, saying he wanted to install a cable for it. That led to more contacts, including at the company managing the local low-voltage grid,
Stedin. After he installed his charger, Stedin engineers wanted to know why his meter sometimes ran backward. Later, Irene ten Dam at the Utrecht regional development agency got wind of his experiment and was intrigued, becoming an advocate for bidirectional charging.

Berg and the people working for the city who liked what he was doing attracted further partners, including Stedin, software developers, and a charging-station manufacturer. By 2019,
Willem-Alexander, king of the Netherlands, was presiding over the installation of a bidirectional charging station in Utrecht. “With both the city and the grid operator, the great thing is, they are always looking for ways to scale up,” Berg says. They don’t just want to do a project and do a report on it, he says. They really want to get to the next step.

Those next steps are taking place at a quickening pace. Utrecht now has 800 bidirectional chargers designed and manufactured by the Dutch engineering firm NieuweWeme. The city will soon need many more.

The number of charging stations in Utrecht has risen sharply over the past decade.

“People are buying more and more electric cars,” says Eerenberg, the alderman. City officials noticed a surge in such purchases in recent years, only to hear complaints from Utrechters that they then had to go through a long application process to have a charger installed where they could use it. Eerenberg, a computer scientist by training, is still working to unwind these knots. He realizes that the city has to go faster if it is to meet the Dutch government’s mandate for all new cars to be zero-emission in eight years.

The amount of energy being used to charge EVs in Utrecht has skyrocketed in recent years.

Although similar mandates to put more zero-emission vehicles on the road in New York and California failed in the past, the pressure for vehicle electrification is higher now. And Utrecht city officials want to get ahead of demand for greener transportation solutions. This is a city that just built a central underground parking garage for 12,500 bicycles and spent years digging up a freeway that ran through the center of town, replacing it with a canal in the name of clean air and healthy urban living.

A driving force in shaping these changes is Matthijs Kok, the city’s energy-transition manager. He took me on a tour—by bicycle, naturally—of Utrecht’s new green infrastructure, pointing to some recent additions, like a stationary battery designed to store solar energy from the many panels slated for installation at a local public housing development.

This map of Utrecht shows the city’s EV-charging infrastructure. Orange dots are the locations of existing charging stations; red dots denote charging stations under development. Green dots are possible sites for future charging stations.

“This is why we all do it,” Kok says, stepping away from his propped-up bike and pointing to a brick shed that houses a 400-kilowatt transformer. These transformers are the final link in the chain that runs from the power-generating plant to high-tension wires to medium-voltage substations to low-voltage transformers to people’s kitchens.

There are thousands of these transformers in a typical city. But if too many electric cars in one area need charging, transformers like this can easily become overloaded. Bidirectional charging promises to ease such problems.

Kok works with others in city government to compile data and create maps, dividing the city into neighborhoods. Each one is annotated with data on population, types of households, vehicles, and other data. Together with a contracted data-science group, and with input from ordinary citizens, they developed a policy-driven algorithm to help pick the best locations for new charging stations. The city also included incentives for deploying bidirectional chargers in its 10-year contracts with vehicle charge-station operators. So, in these chargers went.

Experts expect bidirectional charging to work particularly well for vehicles that are part of a fleet whose movements are predictable. In such cases, an operator can readily program when to charge and discharge a car’s battery.

We Drive Solar earns credit by sending battery power from its fleet to the local grid during times of peak demand and charges the cars’ batteries back up during off-peak hours. If it does that well, drivers don’t lose any range they might need when they pick up their cars. And these daily energy trades help to keep prices down for subscribers.

Encouraging car-sharing schemes like We Drive Solar appeals to Utrecht officials because of the struggle with parking—a chronic ailment common to most growing cities. A huge construction site near the Utrecht city center will soon add 10,000 new apartments. Additional housing is welcome, but 10,000 additional cars would not be. Planners want the ratio to be more like one car for every 10 households—and the amount of dedicated public parking in the new neighborhoods will reflect that goal.

Some of the cars available from We Drive Solar, including these Hyundai Ioniq 5s, are capable of bidirectional charging. We Drive Solar

Projections for the large-scale electrification of transportation in Europe are daunting. According to a Eurelectric/Deloitte report, there could be 50 million to 70 million electric vehicles in Europe by 2030, requiring several million new charging points, bidirectional or otherwise. Power-distribution grids will need hundreds of billions of euros in investment to support these new stations.

The morning before Eerenberg sat down with me at city hall to explain Utrecht’s charge-station planning algorithm, war broke out in Ukraine. Energy prices now strain many households to the breaking point. Gasoline has reached $6 a gallon (if not more) in some places in the United States. In Germany in mid-June, the driver of a modest VW Golf had to pay about €100 (more than $100) to fill the tank. In the U.K., utility bills shot up on average by more than 50 percent on the first of April.

The war upended energy policies across the European continent and around the world, focusing people’s attention on energy independence and security, and reinforcing policies already in motion, such as the creation of emission-free zones in city centers and the replacement of conventional cars with electric ones. How best to bring about the needed changes is often unclear, but modeling can help.

Nico Brinkel, who is working on his doctorate in
Wilfried van Sark’s photovoltaics-integration lab at Utrecht University, focuses his models at the local level. In
his calculations, he figures that, in and around Utrecht, low-voltage grid reinforcements cost about €17,000 per transformer and about €100,000 per kilometer of replacement cable. “If we are moving to a fully electrical system, if we’re adding a lot of wind energy, a lot of solar, a lot of heat pumps, a lot of electric vehicles…,” his voice trails off. “Our grid was not designed for this.”

But the electrical infrastructure will have to keep up.
One of Brinkel’s studies suggests that if a good fraction of the EV chargers are bidirectional, such costs could be spread out in a more manageable way. “Ideally, I think it would be best if all of the new chargers were bidirectional,” he says. “The extra costs are not that high.”

Berg doesn’t need convincing. He has been thinking about what bidirectional charging offers the whole of the Netherlands. He figures that 1.5 million EVs with bidirectional capabilities—in a country of 8 million cars—would balance the national grid. “You could do anything with renewable energy then,” he says.
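
Some back-of-envelope arithmetic suggests why a fleet of that size could plausibly balance a national grid. The per-charger power, usable battery slice, and peak-load comparison below are assumptions for illustration, not figures from Berg.

```python
# Back-of-envelope check on the 1.5-million-EV figure. The ~10 kW per
# bidirectional charger, the ~30 kWh usable slice per pack, and the ~20-GW
# national peak load mentioned in the comment are assumptions for
# illustration; only the 1.5 million EVs comes from the text above.

BIDIRECTIONAL_EVS = 1.5e6
POWER_PER_CHARGER_KW = 10.0      # assumed typical bidirectional AC charger
USABLE_KWH_PER_EV = 30.0         # assumed usable slice of a ~50-60 kWh pack

flexible_power_gw = BIDIRECTIONAL_EVS * POWER_PER_CHARGER_KW / 1e6
storage_gwh = BIDIRECTIONAL_EVS * USABLE_KWH_PER_EV / 1e6

print(f"Dispatchable power: ~{flexible_power_gw:.0f} GW")
print(f"Aggregate storage:  ~{storage_gwh:.0f} GWh")
# ~15 GW of dispatchable power is on the order of a national peak load of
# roughly 20 GW, which is the intuition behind "balancing" the grid.
```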

Given that his country is starting with just hundreds of cars capable of bidirectional charging, 1.5 million is a big number. But one day, the Dutch might actually get there.

This article appears in the August 2022 print issue as “A Road Test for Vehicle-to-Grid Tech.”


Here’s All the Science Hitching a Ride on Artemis I

NASA’s Artemis I mission launched in the predawn hours this morning, at 1:04 a.m. eastern time, carrying with it the hopes of a space program aiming to land American astronauts back on the moon. The Orion spacecraft now on its way to the moon also carries a lot of CubeSat-size science. (As of press time, some satellites have even begun to tweet.)

And while the objective of Artemis I is to show that the launch system and spacecraft can make a trip to the moon and return safely to Earth, the mission is also a unique opportunity to send a whole spacecraft-load of science into deep space. In addition to the interior of the Orion capsule itself, there are enough nooks and crannies to handle a fair number of CubeSats, and NASA has packed as many experiments as it can into the mission. From radiation phantoms to solar sails to algae to a lunar surface payload, Artemis I has a lot going on.


Most of the variety of the science on Artemis I comes in the form of CubeSats, little satellites that are each the size of a large shoebox. The CubeSats are tucked snugly into berths inside the Orion stage adapter, which is the bit that connects the interim cryogenic propulsion stage to the ESA service module and Orion. Once the propulsion stage lifts Orion out of Earth orbit and pushes it toward the moon, the stage and adapter will separate from Orion, and the CubeSats will launch themselves.

Ten CubeSats rest inside the Orion stage adapter at NASA’s Kennedy Space Center. NASA KSC

While the CubeSats look identical when packed up, each one is totally unique in both hardware and software, with different destinations and mission objectives. There are 10 in total (three weren’t ready in time for launch, which is why there are a few empty slots in the image above).

Here is what each one is and does:

While the CubeSats head off to do their own thing, the Orion capsule itself will serve as the temporary home of a trio of mannequins. The first, a male-bodied version provided by NASA, is named Commander Moonikin Campos, after NASA electrical engineer Arturo Campos, who wrote the procedures that allowed the Apollo 13 command module to steal power from the lunar module’s batteries, one of many actions that saved the Apollo 13 crew.

Moonikin Campos prepares for placement in the Orion capsule. NASA

Moonikin Campos will spend the mission in the Orion commander’s seat, wearing an Orion crew survival system suit. Essentially itself a spacecraft, the suit is able to sustain its occupant for up to six days if necessary. Moonikin Campos’s job will be to pretend to be an astronaut, and sensors inside him will measure radiation, acceleration, and vibration to help NASA prepare to launch human astronauts in the next Artemis mission.

Helga and Zohar in place on the flight deck of the Orion spacecraft. NASA/DLR

Accompanying Moonikin Campos are two female-bodied mannequins, named Helga and Zohar, developed by the German Aerospace Center (DLR) along with the Israel Space Agency. These are more accurately called “anthropomorphic phantoms,” and their job is to provide a detailed recording of the radiation environment inside the capsule over the course of the mission. The phantoms are female because women have more radiation-sensitive tissue than men. Both Helga and Zohar have over 6,000 tiny radiation detectors placed throughout their artificial bodies, but Zohar will be wearing an AstroRad radiation protection vest to measure how effective it is.

NASA’s Biology Experiment-1 is transferred to the Orion team. NASA/KSC

The final science experiment to fly onboard Orion is NASA’s Biology Experiment-1. The experiment is really just seeing what time in deep space does to some specific kinds of biology, so all that has to happen is for Orion to successfully haul some packages of sample tubes around the moon and back. Samples include:

  • Plant seeds to characterize how spaceflight affects nutrient stores
  • Photosynthetic algae to identify genes that contribute to its survival in deep space
  • Aspergillus fungus to investigate radioprotective effects of melanin and DNA damage response
  • Yeast used as a model organism to identify genes that enable adaptations to conditions in both low Earth orbit and deep space

There is some concern that because of the extensive delays with the Artemis launch, the CubeSats have been sitting so long that their batteries may have run down. Some of the CubeSats could be recharged, but for others, recharging was judged to be so risky that they were left alone. Even for CubeSats that don’t start right up, though, it’s possible that after deployment, their solar panels will be able to get them going. But at this point, there’s still a lot of uncertainty, and the CubeSats’ earthbound science teams are now pinning their hopes on everything going well after launch.

For the rest of the science payloads, success mostly means Orion returning to Earth safe and sound, which will also be a success for the Artemis I mission as a whole. And assuming it does so, there will be a lot more science to come.


The James Webb Space Telescope was a Career-Defining Project for Janet Barth

Janet Barth spent most of her career at the Goddard Space Flight Center, in Greenbelt, Md.—which put her in the middle of some of NASA’s most exciting projects of the past 40 years.

She joined the center as a co-op student and retired in 2014 as chief of its electrical engineering division. She had a hand in Hubble Space Telescope servicing missions, launching the Lunar Reconnaissance Orbiter and the Magnetospheric Multiscale mission, and developing the James Webb Space Telescope.


Barth, an IEEE Life Fellow, conducted pioneering work in analyzing the effects of cosmic rays and solar radiation on spacecraft observatories. Her tools and techniques are still used today. She also helped develop science requirements for NASA’s Living With a Star program, which studies the sun, magnetospheres, and planetary systems.

For her work, Barth was honored with this year’s IEEE Marie Sklodowska-Curie Award for “leadership of and contributions to the advancement of the design, building, deployment, and operation of capable, robust space systems.”

“I still tear up just thinking about it,” Barth says. “Receiving this award is humbling. Everyone at IEEE and Goddard who I worked with owns a piece of this award.”

From co-op hire to chief of NASA’s EE division

Barth initially attended the University of Michigan in Ann Arbor to pursue a degree in biology, but she soon realized that it wasn’t a good fit for her. She transferred to the University of Maryland in College Park and changed her major to applied mathematics.

She was accepted for a co-op position in 1978 at the Goddard center, which is about 9 kilometers from the university. Co-op jobs allow students to work at a company and gain experience while pursuing their degree.

“I was excited about using my analysis and math skills to enable new science at Goddard,” she says. She conducted research on radiation environments and their effects on electronic systems.

Goddard hired her after she graduated as a radiation and hardness assurance engineer. She helped ensure that the electronics and materials in space systems would perform as designed after being exposed to radiation in space.

Because of her expertise in space radiation, George Withbroe, director of the NASA Solar-Terrestrial Physics program (now its Heliophysics Division), asked her in 1999 to help write a funding proposal for a program he wanted to launch—which became Living With a Star. It received US $2 billion from the U.S. Congress and launched in 2001.

During her 12 years with the program, Barth helped write the architecture document, which she says became a seminal publication for the field of heliophysics (the study of the sun and how it influences space). The document outlines the program’s goals and objectives.

In 2001 she was selected to be project manager for a NASA test bed that aimed to understand how spacecraft are affected by their environment. The test bed, which collected data from space to predict how radiation might impact NASA missions, successfully completed its mission in 2020.

Barth reached the next rung on her career ladder in 2002, when she became one of the first female associate branch heads of engineering at Goddard. At the space center’s Flight Data Systems and Radiation Effects Branch, she led a team of engineers who designed flight computers and storage systems. Although it was a steep learning curve for her, she says, she enjoyed it. Three years later, she was heading the branch.

She got another promotion, in 2010, to chief of the electrical engineering division. As the Goddard Engineering Directorate’s first female division chief, she led a team of 270 employees who designed, built, and tested electronics and electrical systems for NASA instruments and spacecraft.

Barth (left) and Moira Stanton at the 1997 RADiation and its Effects on Components and Systems Conference, held in Cannes, France. Barth and Stanton coauthored a poster paper and received the outstanding poster paper award. Janet Barth

Working on the James Webb Space Telescope

Throughout her career, Barth was involved in the development of the Webb space telescope. Whenever she thought that she was done with the massive project, she says with a laugh, her path would “intersect with Webb again.”

She first encountered the Webb project in the late 1990s, when she was asked to be on the initial study team for the telescope.

She wrote its space-environment specifications. After they were published in 1998, however, the team realized that there were several complex problems to solve with the telescope’s detectors. The Goddard team supported Matt Greenhouse, John C. Mather, and other engineers as they worked on the tricky issues. Greenhouse is a project scientist for the telescope’s science instrument payload. Mather won the 2006 Nobel Prize in Physics for discoveries supporting the Big Bang model.

The Webb’s detectors absorb photons—light from far-away galaxies, stars, and planets—and convert them into electronic voltages. Barth and her team worked with Greenhouse and Mather to verify that the detectors would work while exposed to the radiation environment at the L2 Lagrangian point, one of the positions in space where human-sent objects tend to stay put.

Years later, when Barth was heading the Flight Data Systems and Radiation Effects branch, she oversaw the development of the telescope’s instrument command and data handling systems. Because of her important role, Barth’s name was written on the telescope’s instrument ICDH flight box.

When she became chief of Goddard’s electrical engineering division, she was assigned to the technical review panel for the telescope.

“At that point,” she says, “we focused on the mechanics of deployment and the risks that came with not being able to fully test it in the environment it would be launched and deployed in.”

She served on that panel until she retired. In 2019, five years after retiring, she joined the Miller Engineering and Research Corp. advisory board. The company, based in Pasadena, Md., manufactures parts for aerospace and aviation organizations.

“I really like the ethics of the company. They service science missions and crewed missions,” Barth says. “I went back to my roots, and that’s been really rewarding.”

The best things about being an IEEE member

Barth and her husband, Douglas, who is also an engineer, joined IEEE in 1989. She says they enjoy belonging to a “unique peer group.” She especially likes attending IEEE conferences, having access to journals, and being able to take continuing education courses and workshops, she says.

“I stay up to date on the advancements in science and engineering,” she says, “and going to conferences keeps me inspired and motivated in what I do.” The networking opportunities are “terrific,” she adds, and she’s been able to meet people from just about all engineering industries.

An active IEEE volunteer for more than 20 years, she is executive chairwoman of the IEEE Nuclear and Plasma Sciences Society’s Radiation Effects Steering Group, and she served as 2013–2014 president of the IEEE Nuclear and Plasma Sciences Society. She also is an associate editor for IEEE Transactions on Nuclear Science.

“IEEE has definitely benefited my career,” she says. “There’s no doubt about that.”


The EV Transition Explained: Can the Grid Cope?

Why not? EVs lack tailpipe emissions, sure, but producing, operating, and disposing of these vehicles creates greenhouse-gas emissions and other environmental burdens. Driving an EV pushes these problems upstream, to the factory where the vehicle is made and beyond, as well as to the power plant where the electricity is generated. The entire life cycle of the vehicle must be considered, from cradle to grave. When you do that, the promise of electric vehicles doesn’t shine quite as brightly. Here we’ll show you in greater detail why that is.

The life cycle to which we refer has two parts: The vehicle cycle begins with mining the raw materials, refining them, turning them into components, and assembling them. It ends years later with salvaging what can be saved and disposing of what remains. Then there is the fuel cycle—the activities associated with producing and using the fuel or electricity to power the vehicle through its working life.

For EVs, much of the environmental burden centers on the production of batteries, the most energy- and resource-intensive component of the vehicle. Each stage in production matters—mining, refining, and producing the raw materials, manufacturing the components, and finally assembling them into cells and battery packs.

Where all this happens matters, too, because a battery factory uses a lot of electricity, and the source for that electricity varies from one region to the next. Manufacturing an EV battery using coal-based electricity results in more than three times the greenhouse-gas emissions of manufacturing a battery with electricity from renewable sources. And about
70 percent of lithium-ion batteries are produced in China, which derived 64 percent of its electricity from coal in 2020.

The manufacture of lithium batteries for EVs, like those shown here, is energy intensive, as is the mining and refining of the raw materials. AFP/Getty Images

Most automotive manufacturers say they plan to use renewable energy in the future, but for now, most battery production relies on electric grids largely powered by fossil fuels.
Our 2020 study, published in Nature Climate Change, found that manufacturing a typical EV sold in the United States in 2018 emitted about 7 to 12 tonnes of carbon dioxide, compared with about 5 to 6 tonnes for a gasoline-fueled vehicle.

You also must consider the electricity that charges the vehicle. In 2019,
63 percent of global electricity was produced from fossil-fuel sources, the exact nature of which varies substantially among regions. China, using largely coal-based electricity, had 6 million EVs in 2021, constituting the largest total stock of EVs in the world.

But coal use varies, even within China. The southwest province of Yunnan derives about 70 percent of its electricity from hydropower, slightly more than the percentage in Washington state, while Shandong, a coastal province in the east, derives about 90 percent of its electricity from coal, similar to West Virginia.

Norway has the highest per capita number of EVs, which represented
more than 86 percent of vehicle sales in that country in 2021. And it produces almost all its electricity from hydro and solar. Therefore, an EV operated in Shandong imposes a much bigger environmental burden than that same EV would in Yunnan or Norway.

The United States falls somewhere in the middle, deriving
about 60 percent of its electricity from fossil fuels, primarily natural gas, which produces less carbon than coal does. In our model, using electricity from the 2019 U.S. grid to charge a typical 2018 EV would produce between 80 and 120 grams of carbon dioxide per kilometer traveled, compared with about 240 to 320 g/km for a gasoline vehicle. Credit the EV’s advantage to its greater efficiency in the conversion of chemical energy to motion—77 percent, compared with 12 to 30 percent for a gasoline car—along with the potential to generate electricity using low-carbon sources. That’s why operating EVs typically releases less carbon than operating gasoline vehicles of similar size, even in coal-heavy grids like Shandong or West Virginia.
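
To see where per-kilometer numbers like these come from, here is a minimal sketch using illustrative inputs chosen to fall within the ranges quoted above; they are not the study’s actual inputs.

```python
# Rough per-kilometer operating emissions, using illustrative numbers chosen
# to fall inside the ranges quoted above (these are not the study's inputs).

GRID_G_CO2_PER_KWH = 450         # assumed average U.S. grid intensity, ~2019
EV_KWH_PER_KM = 0.20             # assumed consumption, incl. charging losses
GASOLINE_G_CO2_PER_LITER = 2900  # assumed well-to-wheel, incl. upstream fuel
GASOLINE_L_PER_100KM = 9.0       # assumed comparable gasoline vehicle

ev_g_per_km = GRID_G_CO2_PER_KWH * EV_KWH_PER_KM
ice_g_per_km = GASOLINE_G_CO2_PER_LITER * GASOLINE_L_PER_100KM / 100

print(f"EV:       ~{ev_g_per_km:.0f} g CO2/km")   # ~90, within the 80-120 range
print(f"Gasoline: ~{ice_g_per_km:.0f} g CO2/km")  # ~260, within the 240-320 range
```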

An EV operated in Shandong or West Virginia emits about 6 percent
more greenhouse gas over its lifetime than does a conventional gasoline vehicle of the same size. An EV operated in Yunnan emits about 60 percent less.

But when you factor in the greenhouse-gas emissions associated with vehicle manufacture, the calculus changes. As an illustration, an EV operated in Shandong or West Virginia emits about 6 percent
more greenhouse gas over its lifetime than does a conventional gasoline vehicle of the same size. An EV operated in Yunnan emits about 60 percent less.

Can EVs be good enough—and can manufacturers roll them out fast enough—to meet the goals set in 2021 by the 26th United Nations Climate Change Conference (COP26)? The 197 signatory nations agreed to hold the increase in the average global temperature to no more than 2 °C above preindustrial levels and to pursue efforts to limit the increase to 1.5 °C.

Our
analysis shows that to bring the United States into line with even the more modest 2-degree goal would require electrifying about 90 percent of the U.S. passenger-vehicle fleet by 2050—some 350 million vehicles.

To arrive at this number, we first had to decide on an appropriate carbon budget for the U.S. fleet. Increases in global average temperature are largely proportional to cumulative global emissions of carbon dioxide and other greenhouse gases. Climate scientists use this fact to set a limit on the total amount of carbon dioxide that can be emitted before the world surpasses the 2-degree goal; this amount constitutes the global carbon budget.

We then used results from a model of the global economy to allocate a portion of this global budget specifically to the U.S. passenger-vehicle fleet over the period between 2015 and 2050. This portion came out to around 45 billion tonnes of carbon dioxide, roughly equivalent to a single year of global greenhouse-gas emissions.

6 million

Number of EVs on the road in China in 2021

This is a generous allowance, but that’s reasonable because transportation is harder to decarbonize than many other sectors. Even so, working within that budget would require a 30 percent reduction in the projected cumulative emissions from 2015 to 2050 and a 70 percent reduction in annual emissions in 2050, compared with the business-as-usual emissions expected in a world without EVs.

Next, we turned to our model of the U.S. fleet of light vehicles. Our model simulates, for each year from 2015 to 2050, how many new vehicles are manufactured and sold, how many are scrapped, and the associated greenhouse-gas emissions. We also keep track of how many vehicles are on the road, when they were made, and how far they are likely to be driven. We used this information to estimate annual greenhouse-gas emissions from the fuel cycle, which depend partly on average vehicle size and partly on how much vehicle efficiency improves over time.

Finally, we compared the carbon budget with our model of total cumulative emissions (that is, both vehicle-cycle and fuel-cycle emissions). We then systematically increased the share of EVs among new vehicle sales until the cumulative fleet emissions fell within the budget. The result: EVs had to make up the vast majority of vehicles on the road by 2050, which means they must make up the vast majority of vehicle sales a decade or more earlier.
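
The sketch below captures, in deliberately simplified form, the kind of fleet-turnover-and-budget search described here: ramp the EV share of new sales toward 100 percent by some target year, attribute each vehicle’s manufacturing and lifetime driving emissions to its year of sale, and compare cumulative emissions against an all-gasoline baseline. Every numeric input is a placeholder, not a value from the study.

```python
# Deliberately simplified sketch of the fleet logic described above. It omits
# the pre-2015 fleet, efficiency trends, and year-by-year driving, and all
# numbers are placeholders rather than the study's values.

YEARS = range(2015, 2051)
SALES_PER_YEAR = 15e6              # new light vehicles per year (placeholder)
LIFETIME_KM = 12 * 18_000          # ~12-year life at ~18,000 km/year
EV_MFG_T, ICE_MFG_T = 9.0, 5.5     # manufacturing, tonnes CO2 per vehicle
EV_G_KM, ICE_G_KM = 100.0, 280.0   # operating emissions, g CO2 per km

def lifetime_tonnes(is_ev: bool) -> float:
    """Manufacturing plus lifetime driving emissions for one vehicle."""
    mfg = EV_MFG_T if is_ev else ICE_MFG_T
    g_km = EV_G_KM if is_ev else ICE_G_KM
    return mfg + g_km * LIFETIME_KM / 1e6

def cumulative_gt(target_year: int) -> float:
    """Emissions committed by 2015-2050 sales, in billion tonnes of CO2."""
    total = 0.0
    for year in YEARS:
        share = min(1.0, (year - 2015) / (target_year - 2015))  # EV sales share
        total += SALES_PER_YEAR * (share * lifetime_tonnes(True)
                                   + (1 - share) * lifetime_tonnes(False))
    return total / 1e9

baseline = cumulative_gt(10**6)    # effectively no EVs ever
for target in (2050, 2040, 2035):  # year in which EV sales share reaches 100%
    cut = 1 - cumulative_gt(target) / baseline
    print(f"100% EV sales by {target}: ~{cut:.0%} below the no-EV baseline")
```

Even this toy version shows the basic behavior the study describes: the earlier the sales ramp reaches 100 percent electric, the deeper the cut in cumulative emissions, and only aggressive ramps approach a roughly 30 percent reduction against business as usual.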

That would require a dramatic increase in EV sales: In the United States in 2021, just over 1 million vehicles—less than 1 percent of those on the road—were fully electric. And only 3 percent of the new vehicles sold were fully electric. Considering the long lifetime of a vehicle, about 12 years in the United States, we would need to ramp up sales of EVs dramatically starting now to meet the 2-degree target. In our model, over 10 percent of all new vehicles sold by 2020 would have had to be electric, rising above half by 2030, and essentially all by 2035. Studies conducted in other countries, such as China and Singapore, have arrived at similar results.

Our analysis shows that to bring the United States into line with even the more modest 2-degree goal would require electrifying about 90 percent of the U.S. passenger-vehicle fleet by 2050—some 350 million vehicles.

The good news is that 2035 is the year suggested at the COP26 for all new cars and vans in leading markets to be zero-emissions vehicles, and many manufacturers and governments have committed to it. The bad news is that some major automotive markets, such as China and the United States, have not yet made that pledge, and the United States has already missed the 10 percent sales share for 2020 that our study recommended. Of course, meeting the more ambitious 1.5 °C climate target would require even larger-scale deployment of EVs and therefore earlier deadlines for meeting these targets.

It’s a tall order, and a costly one, to make and sell so many EVs so soon. Even if that were possible, there would also have to be an enormous increase in charging infrastructure and in material supply chains. And that much more vehicle charging would then put great pressure on our electricity grids.

Charging matters, because one of the commonly cited obstacles to EV adoption is range anxiety. Shorter-range EVs, like the Nissan Leaf, have a manufacturer’s
reported range of just 240 km, although a 360-km model is also available. Longer-range EVs, like the Tesla Model 3 Long Range, have a manufacturer’s reported range of 600 km. The shorter driving ranges of most EVs are no problem for daily commutes, but range anxiety is real for longer trips, especially in cold weather, which can cut driving ranges substantially due to the energy demand of heating the cabin and lower battery capacity.

Most EV owners recharge their cars at home or at work, meaning that chargers need to be available in garages, driveways, on-street parking, apartment-building parking areas, and commercial parking lots. A couple of hours at home is sufficient to recharge from a typical daily commute, while overnight charging is needed for longer trips. In contrast, public charging stations that use fast charging can add several hundred kilometers of range in 15 to 30 minutes. This is an impressive feat, but it still takes longer than refilling a gas tank.
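
The charging-time claims above follow from simple arithmetic. The commute distance, consumption figure, and charger powers in this sketch are assumptions for illustration.

```python
# Simple charging-time arithmetic behind the claims above. The commute
# distance, consumption, and charger powers are illustrative assumptions.

DAILY_COMMUTE_KM = 50
EV_KWH_PER_KM = 0.20               # assumed consumption
CHARGERS_KW = {"Level 1 (120 V)": 1.4,
               "Level 2 (home/work)": 7.0,
               "DC fast charging": 150.0}

energy_needed_kwh = DAILY_COMMUTE_KM * EV_KWH_PER_KM   # ~10 kWh
for name, kw in CHARGERS_KW.items():
    minutes = energy_needed_kwh / kw * 60
    print(f"{name:>20}: ~{minutes:.0f} min to replace a {DAILY_COMMUTE_KM}-km commute")

# At 150 kW, adding ~300 km of range (~60 kWh at this consumption) takes
# roughly 60/150 hours = 24 minutes, consistent with the 15-to-30-minute
# figure quoted above.
```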

Another barrier to EV adoption is price, driven largely by the cost of the batteries, which can make the purchase price 25 to 70 percent higher than that of an equivalent conventional vehicle. Governments have offered subsidies or tax rebates to make EVs more appealing, a policy that the U.S. Inflation Reduction Act has just augmented. But such measures, while easy enough to implement in the early days of a new technology, would become prohibitively expensive as EV sales mount.

Although EV battery costs have fallen dramatically over the past decade, the International Energy Agency projects a sudden reversal of that trend in 2022, owing to rising prices for critical metals and a surge in demand for EVs. Projections of future prices vary, but widely cited long-term forecasts from BloombergNEF suggest that new EVs will reach price parity with conventional vehicles by 2026, even without government subsidies. In the meantime, EV buyers' sticker shock may be eased by the knowledge that fuel and maintenance costs are far lower for EVs, leaving total ownership costs about the same.
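The total-cost-of-ownership point can be illustrated with a back-of-the-envelope comparison. Every figure below is an assumption chosen for illustration; actual prices, energy costs, and maintenance costs vary widely.

```python
# Illustrative total-cost-of-ownership comparison (all figures assumed, U.S. dollars).

YEARS, KM_PER_YEAR = 12, 18_000

price_ev, price_ice = 45_000, 33_000                 # purchase prices (assumed)

energy_ev = 0.14 * 0.18 * KM_PER_YEAR * YEARS        # $0.14/kWh at 0.18 kWh/km (assumed)
energy_ice = 0.90 * 0.08 * KM_PER_YEAR * YEARS       # $0.90/liter at 8 L/100 km (assumed)

maint_ev, maint_ice = 400 * YEARS, 800 * YEARS       # annual maintenance (assumed)

print(f"EV  lifetime cost: ${price_ev + energy_ev + maint_ev:,.0f}")
print(f"ICE lifetime cost: ${price_ice + energy_ice + maint_ice:,.0f}")
# With these assumptions the two totals land within a few thousand dollars of
# each other, despite the EV's higher sticker price.
```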

But what drivers gain, governments might lose. The International Energy Agency
estimates that by 2030 the deployment of EVs could cut global receipts from fossil-fuel taxes by around US $55 billion. Those tax revenues are necessary for the maintenance of roads. To make up for their loss, governments will need some other source of revenue, such as vehicle registration fees.

The growth in the number of EVs introduces various other challenges, too, not least the greater demands placed on material supply chains for EV batteries and on electricity grids. Batteries require raw materials such as lithium, copper, nickel, cobalt, manganese, and graphite. Some of these materials are highly concentrated in a few countries.

For example, the Democratic Republic of Congo (DRC) holds about 50 percent of the world’s cobalt reserves. Just two countries—Chile and Australia—account for over two-thirds of global lithium reserves, and South Africa, Brazil, Ukraine, and Australia have almost all the manganese reserves. This concentration is problematic because it can lead to volatile markets and supply disruptions.

Miners move large bags at a cobalt mine. Cobalt mining for batteries in the Democratic Republic of Congo has been linked to water-quality problems, armed conflicts, child labor, respiratory disease, and birth defects. Sebastian Meyer/Corbis/Getty Images

The COVID pandemic has shown just what supply-chain disruptions can do to products dependent on scarce materials, notably semiconductors, the shortage of which forced several automotive manufacturers to stop producing vehicles. It is unclear whether suppliers will be able to meet the future demand for some of the critical raw materials in EV batteries. Market forces may lead to innovations that increase the supplies of these materials or reduce the need for them, but for now the outlook is far from clear.

The scarcity of these materials reflects not only the varying endowments of different countries but also the social and environmental consequences of extraction and production. Cobalt mining in the DRC, for example, has degraded water quality and has been linked to armed conflict, child labor, respiratory disease, and birth defects. International regulatory frameworks must therefore not only protect supply chains from disruption but also protect human rights and the environment.

Some of the problems in securing raw materials could be mitigated by new battery chemistries—several manufacturers have announced plans to switch to lithium iron phosphate batteries, which are cobalt free—or by battery-recycling programs. But neither option entirely removes supply-chain or socio-environmental concerns.

That leaves the electricity grid. We estimate that electrifying 90 percent of the U.S. light-duty passenger fleet by 2050 would raise demand for electricity by up to 1,700 terawatt-hours per year—41 percent of U.S. electricity generation in 2021. This new demand would greatly change the shape of the consumption curve over daily and weekly periods, which means the grid and its supply would have to be adapted accordingly.
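The 1,700-TWh figure is an upper bound, and a quick order-of-magnitude check shows how the demand scales with assumptions about driving distance and consumption. The values below are ours for illustration, and the 2021 generation figure is approximate.

```python
# Order-of-magnitude check on the added electricity demand (assumed values).

VEHICLES = 350e6                 # roughly 90 percent of the projected 2050 light-duty fleet
KM_PER_YEAR = 18_000             # annual distance per vehicle (assumed)
KWH_PER_KM = 0.20                # consumption including charging losses (assumed)
US_GENERATION_2021_TWH = 4_100   # approximate total U.S. generation in 2021

twh = VEHICLES * KM_PER_YEAR * KWH_PER_KM / 1e9   # kWh -> TWh
print(f"Added demand: roughly {twh:,.0f} TWh/year, "
      f"about {twh / US_GENERATION_2021_TWH:.0%} of 2021 U.S. generation")
# Heavier vehicles, more driving, or higher charging losses push this figure
# toward the 1,700-TWh upper bound cited above.
```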

And because the entire point of EVs is to replace fossil fuels, the grid will need more renewable sources, which typically generate power intermittently. To smooth out the supply and ensure reliability, the grid will need added energy-storage capacity, perhaps in the form of vehicle-to-grid technologies that tap the installed base of EV batteries. Varying the price of electricity throughout the day could also help flatten the demand curve.
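As a toy example of how time-varying prices could flatten the curve, the scheduler below simply picks the cheapest overnight hours for a home charger. The tariff, charger power, and energy need are hypothetical.

```python
import math

# Toy price-responsive charging scheduler (hypothetical tariff and charger).

hourly_price = {h: 0.30 for h in range(17, 22)}        # evening peak, $/kWh (hypothetical)
hourly_price.update({h: 0.10 for h in range(0, 6)})    # overnight trough (hypothetical)
DEFAULT_PRICE = 0.18

NEED_KWH = 40         # energy to add before morning (assumed)
CHARGER_KW = 7.2      # home Level 2 charger (assumed)

hours_needed = math.ceil(NEED_KWH / CHARGER_KW)

# Consider the hours from 6 p.m. through 7 a.m. and pick the cheapest ones.
window = list(range(18, 24)) + list(range(0, 8))
cheapest = sorted(window, key=lambda h: hourly_price.get(h, DEFAULT_PRICE))[:hours_needed]
cost = sum(CHARGER_KW * hourly_price.get(h, DEFAULT_PRICE) for h in cheapest)

print(f"Charge during hours {sorted(cheapest)}; rough cost ${cost:.2f}")
# Shifting those hours of charging into the overnight trough is exactly the kind
# of demand flattening that time-varying prices are meant to encourage.
```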

All told, EVs present both a challenge and an opportunity. The challenge will be hard to manage if EVs are deployed too rapidly, yet rapid deployment is exactly what is needed to meet climate targets. These hurdles can be overcome, but they cannot be ignored. And although the climate crisis will ultimately require us to electrify road transport, that step alone cannot solve our environmental woes. We need to pursue other strategies as well.

We should try as much as possible, for example, to avoid motorized travel by cutting the frequency and length of car trips through better urban planning. Promoting mixed-use neighborhoods—areas that put work and residence in proximity—would allow more bicycling and walking.

Between 2007 and 2011, the city of Seville built an
extensive cycling network, increasing the number of daily bike trips from about 13,000 to more than 70,000—or 6 percent of all trips. In Copenhagen, cycling accounts for 16 percent of all trips. Cities around the world are experimenting with a wide range of other supporting initiatives, such as Barcelona’s superblocks, regions smaller than a neighborhood that are designed to be hospitable to walking and cycling. Congestion charges have been levied in Stockholm and London to limit car traffic. Paris has gone further, with a forthcoming private-vehicle ban. Taken together, changes in urban form can reduce transport energy demand by 25 percent, according to a recent installment of the Sixth Assessment Report from the Intergovernmental Panel on Climate Change.

We should also shift from using cars, which often carry just one person, to less energy-intensive modes of travel, such as public transit. Ridership on buses and trains can be increased by improving connectivity, frequency, and reliability. Regional rail could supplant much intercity driving. At high occupancy, buses and trains can typically keep their emissions below 50 grams of carbon dioxide per person per kilometer, even when powered by fossil fuels. When these modes are electrified, emissions can drop to a fifth of that.
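The per-passenger figures come from dividing a vehicle's emissions by its occupancy. The per-vehicle emission rates and occupancies below are assumptions for illustration.

```python
# Per-passenger emissions arithmetic (assumed per-vehicle rates and occupancies).

modes = [
    ("Diesel bus, well filled", 1_300, 40),    # g CO2/km for the whole bus (assumed)
    ("Gasoline car", 250, 1.5),                # g CO2/km, average occupancy (assumed)
]

for name, g_per_km, occupancy in modes:
    print(f"{name}: about {g_per_km / occupancy:.0f} g CO2 per person-km")
# An electrified bus or train on a low-carbon grid can cut the per-person figure
# to roughly a fifth of the diesel-bus value.
```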

Between 2009 and 2019, Singapore’s investment in mass rapid transit helped reduce the share of private vehicle transport from 45 percent to 36 percent. From 1990 to 2015, Paris slashed vehicle travel by 45 percent through sustained investment in both public transit and active transit infrastructure.

Implementing these complementary strategies could ease the transition to EVs considerably. We shouldn’t forget that addressing the climate crisis requires more than just technology fixes. It also demands individual and collective action. EVs will be a huge help, but we shouldn’t expect them to do the job alone.

This article appears in the November 2022 print issue as “The Electric Vehicle Is Not Enough.”
