So where will we turn for future scaling? We will continue to look to the third dimension. We’ve created experimental devices that stack atop each other, delivering logic that is 30 to 50 percent smaller. Crucially, the top and bottom devices are of the two complementary types, NMOS and PMOS, that are the foundation of all the logic circuits of the last several decades. We believe this 3D-stacked complementary metal-oxide-semiconductor (CMOS), or CFET (complementary field-effect transistor), will be the key to extending Moore’s Law into the next decade.

The Evolution of the Transistor

Continuous innovation is an essential underpinning of Moore’s Law, but each improvement comes with trade-offs. To understand these trade-offs and how they’re leading us inevitably toward 3D-stacked CMOS, you need a bit of background on transistor operation.

Every metal-oxide-semiconductor field-effect transistor, or MOSFET, has the same set of basic parts: the gate stack, the channel region, the source, and the drain. The source and drain are chemically doped to make them both either rich in mobile electrons (n-type) or deficient in them (p-type). The channel region has the opposite doping to the source and drain.

In the planar version in use in advanced microprocessors up to 2011, the MOSFET’s gate stack is situated just above the channel region and is designed to project an electric field into the channel region. Applying a large enough voltage to the gate (relative to the source) creates a layer of mobile charge carriers in the channel region that allows current to flow between the source and drain.
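This gate-voltage behavior is captured, to first order, by the textbook square-law MOSFET model. Here is a minimal sketch; the threshold voltage and transconductance values are hypothetical, chosen only to illustrate the behavior described above, not tied to any real process:

```python
# First-order ("square-law") model of an idealized long-channel NMOS
# transistor. The parameter values are hypothetical illustrations.

def mosfet_current(v_gs, v_ds, v_th=0.4, k=2e-3):
    """Drain current (A) for gate-source voltage v_gs and
    drain-source voltage v_ds, both in volts."""
    v_ov = v_gs - v_th              # overdrive voltage
    if v_ov <= 0:
        return 0.0                  # cutoff: no mobile-carrier layer forms
    if v_ds < v_ov:
        # triode region: the channel conducts from source to drain
        return k * (v_ov * v_ds - v_ds**2 / 2)
    # saturation: the channel pinches off near the drain, current plateaus
    return 0.5 * k * v_ov**2

print(mosfet_current(0.3, 1.0))   # below threshold: 0.0
print(mosfet_current(1.0, 1.0))   # above threshold: roughly 3.6e-4 A
```

Below threshold no current flows; once the gate voltage exceeds it, the layer of mobile carriers forms and current passes from source to drain, just as described above.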

As we scaled down the classic planar transistors, what device physicists call short-channel effects took center stage. Basically, the distance between the source and drain became so small that current would leak across the channel when it wasn’t supposed to, because the gate electrode struggled to deplete the channel of charge carriers. To address this, the industry moved to an entirely different transistor architecture called a FinFET. It wrapped the gate around the channel on three sides to provide better electrostatic control.

Intel introduced its FinFETs in 2011, at the 22-nanometer node, with the third-generation Core processor, and the device architecture has been the workhorse of Moore’s Law ever since. With FinFETs, we could operate at a lower voltage and still have less leakage, reducing power consumption by some 50 percent at the same performance level as the previous-generation planar architecture. FinFETs also switched faster, boosting performance by 37 percent. And because conduction occurs on both vertical sides of the “fin,” the device can drive more current through a given area of silicon than can a planar device, which only conducts along one surface.
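The lower operating voltage is the main lever behind that power saving: dynamic switching power scales with the square of the supply voltage (P ≈ aCV²f). A minimal sketch of the arithmetic, using made-up capacitance, frequency, and voltage values rather than Intel's actual figures:

```python
# Dynamic switching power follows P = a * C * V^2 * f, so lowering the
# supply voltage pays off quadratically. All values are illustrative.

def dynamic_power(c_load, v_dd, freq, activity=0.1):
    """Average dynamic power (W) of a switching node."""
    return activity * c_load * v_dd**2 * freq

# Same capacitance and clock, different supply voltage:
planar = dynamic_power(c_load=1e-15, v_dd=1.0, freq=3e9)
finfet = dynamic_power(c_load=1e-15, v_dd=0.7, freq=3e9)
saving = 1 - finfet / planar
print(f"{saving:.0%}")  # lower V_dd alone cuts dynamic power roughly in half
```

Dropping the supply from a notional 1.0 V to 0.7 V cuts dynamic power by about half before counting any reduction in leakage, which is consistent in spirit with the roughly 50 percent figure quoted above.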

However, we did lose something in moving to FinFETs. In planar devices, the width of a transistor was defined by lithography, and it was therefore a highly flexible parameter. But in FinFETs, the transistor width comes only in discrete increments (adding one fin at a time), a characteristic often referred to as fin quantization. For all the FinFET’s strengths, fin quantization remains a significant design constraint. The design rules around it, and the desire to add more fins to boost performance, increase the overall area of logic cells and complicate the stack of interconnects that turn individual transistors into complete logic circuits. Quantization also increases the transistor’s capacitance, thereby sapping some of its switching speed. So, while the FinFET has served us well as the industry’s workhorse, a new, more refined approach is needed. And it’s that approach that led us to the 3D transistors we’re introducing soon.
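The constraint is easy to see in a back-of-the-envelope sketch: effective width only comes in whole-fin steps, so a designer must round up from the width the circuit actually wants. The fin dimensions below are hypothetical, chosen only to show the quantization effect:

```python
import math

# A FinFET's effective channel width grows only in whole-fin steps: each
# fin conducts along its two vertical sides (plus its top, in a tri-gate
# device). Fin dimensions here are invented for illustration.

def finfet_width(n_fins, fin_height=50e-9, fin_thickness=7e-9):
    """Approximate effective width (m) for an integer number of fins."""
    return n_fins * (2 * fin_height + fin_thickness)

target = 2.5e-7  # desired effective width (m) for a hypothetical cell
n_fins = math.ceil(target / finfet_width(1))  # must round up to a whole fin
print(n_fins, finfet_width(n_fins))  # 3 fins overshoot the target width
```

The extra, unwanted width costs area and capacitance; a RibbonFET, whose width is again set continuously by lithography, could hit the target exactly.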

In the RibbonFET, the gate wraps around the transistor channel region to enhance control of charge carriers. The new structure also enables better performance and more refined optimization. Emily Cooper

This advance, the RibbonFET, is our first new transistor architecture since the FinFET’s debut 11 years ago. In it, the gate fully surrounds the channel, providing even tighter control of charge carriers within channels that are now formed by nanometer-scale ribbons of silicon. With these nanoribbons (also called nanosheets), we can again vary the width of a transistor as needed using lithography.

With the quantization constraint removed, we can produce the appropriately sized width for the application. That lets us balance power, performance, and cost. What’s more, with the ribbons stacked and operating in parallel, the device can drive more current, boosting performance without increasing the area of the device.

We see RibbonFETs as the best option for higher performance at reasonable power, and we will be introducing them in 2024, along with other innovations such as PowerVia, our version of backside power delivery, with the Intel 20A fabrication process.

Stacked CMOS

One commonality of planar, FinFET, and RibbonFET transistors is that they all use CMOS technology, which, as mentioned, consists of n-type (NMOS) and p-type (PMOS) transistors. CMOS logic became mainstream in the 1980s because it draws significantly less current than do the alternative technologies, notably NMOS-only circuits. Less current also led to greater operating frequencies and higher transistor densities.

To date, all CMOS technologies place the standard NMOS and PMOS transistor pair side by side. But in a keynote at the IEEE International Electron Devices Meeting (IEDM) in 2019, we introduced the concept of a 3D-stacked transistor that places the NMOS transistor on top of the PMOS transistor. The following year, at IEDM 2020, we presented the design for the first logic circuit using this 3D technique, an inverter. Combined with appropriate interconnects, the 3D-stacked CMOS approach effectively cuts the inverter footprint in half, doubling the area density and further pushing the limits of Moore’s Law.

3D-stacked CMOS puts an NMOS device on top of a PMOS device in the same footprint a single RibbonFET would occupy. The NMOS and PMOS gates use different metals. Emily Cooper

Taking advantage of the potential benefits of 3D stacking means solving a number of process integration challenges, some of which will stretch the limits of CMOS fabrication.

We built the 3D-stacked CMOS inverter using what is known as a self-aligned process, in which both transistors are constructed in one manufacturing step. This means constructing both n-type and p-type sources and drains by epitaxy—crystal deposition—and adding different metal gates for the two transistors. By combining the source-drain and dual-metal-gate processes, we are able to create different conductive types of silicon nanoribbons (p-type and n-type) to make up the stacked CMOS transistor pairs. It also allows us to adjust the device’s threshold voltage—the voltage at which a transistor begins to switch—separately for the top and bottom nanoribbons.

How do we do all that? The self-aligned 3D CMOS fabrication begins with a silicon wafer. On this wafer, we deposit repeating layers of silicon and silicon germanium, a structure called a superlattice. We then use lithographic patterning to cut away parts of the superlattice and leave a finlike structure. The superlattice crystal provides a strong support structure for what comes later.

Next, we deposit a block of “dummy” polycrystalline silicon atop the part of the superlattice where the device gates will go, protecting them from the next step in the procedure. That step, called the vertically stacked dual source/drain process, grows phosphorous-doped silicon on both ends of the top nanoribbons (the future NMOS device) while also selectively growing boron-doped silicon germanium on the bottom nanoribbons (the future PMOS device). After this, we deposit dielectric around the sources and drains to electrically isolate them from one another. The latter step requires that we then polish the wafer down to perfect flatness.

An edge-on view of the 3D-stacked inverter shows how complicated its connections are. Emily Cooper

By stacking NMOS on top of PMOS transistors, 3D stacking effectively doubles CMOS transistor density per square millimeter, though the real density depends on the complexity of the logic cell involved. The inverter cells are shown from above, indicating source and drain interconnects [red], gate interconnects [blue], and vertical connections [green].

Finally, we construct the gate. First, we remove that dummy gate we’d put in place earlier, exposing the silicon nanoribbons. We next etch away only the silicon germanium, releasing a stack of parallel silicon nanoribbons, which will be the channel regions of the transistors. We then coat the nanoribbons on all sides with a vanishingly thin layer of an insulator that has a high dielectric constant. The nanoribbon channels are so small and positioned in such a way that we can’t effectively dope them chemically as we would with a planar transistor. Instead, we use a property of the metal gates called the work function to impart the same effect. We surround the bottom nanoribbons with one metal to make a p-doped channel and the top ones with another to form an n-doped channel. Thus, the gate stacks are finished off and the two transistors are complete.

The process might seem complex, but it’s better than the alternative—a technology called sequential 3D-stacked CMOS. With that method, the NMOS devices and the PMOS devices are built on separate wafers, the two are bonded, and the PMOS layer is transferred to the NMOS wafer. In comparison, the self-aligned 3D process takes fewer manufacturing steps and keeps a tighter rein on manufacturing cost, something we demonstrated in research and reported at IEDM 2019.

Importantly, the self-aligned method also circumvents the problem of misalignment that can occur when bonding two wafers. Still, sequential 3D stacking is being explored to facilitate integration of silicon with nonsilicon channel materials, such as germanium and III-V semiconductor materials. These approaches and materials may become relevant as we look to tightly integrate optoelectronics and other functions on a single chip.

Making all the needed connections to 3D-stacked CMOS is a challenge. Power connections will need to be made from below the device stack. In this design, the NMOS device [top] and PMOS device [bottom] have separate source/drain contacts, but both devices have a gate in common. Emily Cooper

The new self-aligned CMOS process, and the 3D-stacked CMOS it creates, work well and appear to have substantial room for further miniaturization. At this early stage, that’s highly encouraging. Devices having a gate length of 75 nm demonstrated both the low leakage that comes with excellent device scalability and a high on-state current. Another promising sign: We’ve made wafers where the smallest distance between two sets of stacked devices is only 55 nm. While the device performance results we achieved are not records in and of themselves, they do compare well with individual nonstacked control devices built on the same wafer with the same processing.

In parallel with the process integration and experimental work, we have many ongoing theoretical, simulation, and design studies underway looking to provide insight into how best to use 3D CMOS. Through these, we’ve found some of the key considerations in the design of our transistors. Notably, we now know that we need to optimize the vertical spacing between the NMOS and PMOS—if it’s too short it will increase parasitic capacitance, and if it’s too long it will increase the resistance of the interconnects between the two devices. Either extreme results in slower circuits that consume more power.
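That trade-off can be captured in a toy RC model: the parasitic capacitance between the stacked devices shrinks as the spacing grows, while the resistance of the vertical interconnect between them grows, so the combined delay bottoms out at an intermediate spacing. Every constant below is invented for illustration; real optimization would rely on device and circuit simulation:

```python
# Toy model of the NMOS-PMOS vertical-spacing trade-off. Parasitic
# capacitance between the stacked devices falls as spacing d grows
# (parallel-plate-like, C ~ 1/d), while the vertical interconnect's
# resistance rises linearly (R ~ d). With two competing delay terms,
# the total is minimized at an intermediate spacing.
# All constants are made up for illustration.

K_CAP = 1e-24    # F*m: inter-device parasitic capacitance coefficient
K_RES = 1e12     # ohm/m: vertical interconnect resistance per length
C_LOAD = 1e-16   # F: fixed load driven through the interconnect
R_DRIVE = 1e4    # ohm: driver resistance charging the parasitic

def delay(d):
    """Combined RC delay (s) for vertical spacing d (m)."""
    return K_RES * d * C_LOAD + R_DRIVE * (K_CAP / d)

spacings = [n * 1e-9 for n in range(5, 101, 5)]   # 5 nm to 100 nm
best = min(spacings, key=delay)
print(f"{best * 1e9:.0f} nm")  # the minimum sits between the two extremes
```

Pushing the spacing toward either end of the sweep raises the delay, mirroring the observation above that both extremes yield slower, hungrier circuits.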

Many design studies, such as one by TEL Research Center America presented at IEDM 2021, focus on providing all the necessary interconnects in the 3D CMOS’s limited space and doing so without significantly increasing the area of the logic cells they make up. The TEL research showed that there are many opportunities for innovation in finding the best interconnect options. It also highlights that 3D-stacked CMOS will need interconnects both above and below the devices. This scheme, called buried power rails, takes the interconnects that provide power to logic cells but don’t carry data and moves them to the silicon below the transistors. Intel’s PowerVia technology, which does just that and is scheduled for introduction in 2024, will therefore play a key role in making 3D-stacked CMOS a commercial reality.

The Future of Moore’s Law

With RibbonFETs and 3D CMOS, we have a clear path to extend Moore’s Law beyond 2024. In a 2005 interview in which he was asked to reflect on what became his law, Gordon Moore admitted to being “periodically amazed at how we’re able to make progress. Several times along the way, I thought we reached the end of the line, things taper off, and our creative engineers come up with ways around them.”

With the move to FinFETs, the ensuing optimizations, and now the development of RibbonFETs and eventually 3D-stacked CMOS, supported by the myriad packaging enhancements around them, we’d like to think Mr. Moore will be amazed yet again.


Building a Fleet of Personal EVs in Kenya

Electric vehicles are gaining in popularity around the world, especially with the high cost of fuel. But some of the current EV models aren’t a good fit for owners who can’t wait around for hours for their batteries to charge, such as taxi drivers, delivery people, and ride-hailing services.

One startup trying to solve this problem is ARC Ride, in Nairobi, Kenya.


Founded in 2020, the company sells a variety of EVs, including e-bikes, scooters, motorcycles, and tuktuks. ARC Ride is also installing battery-swapping stations around the city. The company’s app lets EV owners locate the battery-swap stations.

A key person involved in the critical tasks of assembling, servicing, and testing the company’s vehicles and related products is Magdalene Maluta. The EV enthusiast came to the 20-employee company through the confluence of motivation, the right mix of skills, and fate.

“Before I came to ARC Ride, I had been looking into EVs [as a career choice],” Maluta says. “I would read about Tesla and wish I was one of the people in their videos making the vehicles.”

In March 2020, an IT firm where she was about to start a new job was shuttered because of the COVID-19 pandemic. After struggling to find a job, Maluta got a call in January 2021 from ARC Ride, offering her a position after someone had recommended her. She started with assembly and maintenance, where she learned about EVs from the ground up.

Working with electric vehicles requires mechanical and engineering expertise. Maluta has diplomas in mechanical engineering from the National Industrial Training Authority and the PC Kinyanjui Technical Training Institute, both in Nairobi.

“My electrical engineering courses included basic electronics, programming, automation, robots, and robotics,” she says. “Applying what I learned in them has helped me solve problems in mechanical engineering and made my work easier.”

As an inventory and maintenance manager, Maluta is involved in many facets of the company, including supervising the assembly of EVs, overseeing the maintenance and support teams, tracking inventory, and helping test new features on the mobile app. She is also helping to design and develop the company’s next generation of vehicles, batteries, and swap stations.

ARC Ride offers four types of EVs. The E1 is an electric bicycle with a battery range of 60 kilometers and up to 65 km with pedal assist. With the throttle, the top speed is 60 km/h, and with pedaling added, it can get up to 65 km/h. The E2 is a two-wheel scooter with a battery range of 85 km and a top speed of 60 km/h. The E2+ all-electric motorcycle—called a boda-boda—can accommodate two batteries and gets a range of 85 km with one battery and 160 km with two. Its top speed is 75 km/h. The E3, a three-wheeled tuktuk motorized rickshaw, can carry up to 500 kilograms of goods and has a range of 80 km.

“You have to have the drive and passion for this kind of work, and you need to show it. Do not doubt yourself.”

Each vehicle includes a charger. Recharging a depleted battery takes less than 4 hours, Maluta says. As an alternative, owners can exchange their nearly discharged battery at the company’s battery-swap stations for a fully charged one. It takes about 2 minutes to complete a battery swap, Maluta says.

The startup has aggressive plans to add more models and increase production and sales to about 500 vehicles per month, along with adding more swap stations, she says. ARC Ride recently received an order for parts to assemble 120 E2+ motorcycles. It also has plans to expand to other cities in Kenya as well as neighboring countries in East Africa.

“We are looking into having a single battery that can be used in all four of our current EVs, which will make our swap stations even more useful,” she says. “Also, we are looking into increasing our vehicles’ range and speed. We want to add regenerative braking, which converts braking energy into electricity that goes back into the battery. And—like all EV makers—we want faster battery charging.”

Part of her job is testing scooters and e-bikes around the city, for performance, top speed, braking, and range. “When we do maintenance or service on one, I test each vehicle’s performance personally,” she says.

Maluta also has a hand in hiring people and looks for specific skills. “To design and develop EV products, you have to be able to use tools like AutoCAD, Autodesk, and SolidWorks. And you can’t do that without an engineering background.”

ARC Ride is committed to diversity in its hiring, and Maluta works hard to hire more women for her team. But, she emphasizes, “You have to have the drive and passion for this kind of work, and you need to show it. Do not doubt yourself. Your identity should be the last thing that is going to limit you.”

This article appears in the September 2022 print issue as “Magdalene Maluta.”


20 Teams to Compete for $10M Telerobot XPrize

Ideally, autonomous robots would be capable enough to do everything we wanted them to do, and lots of people are working very hard toward that goal. Annoyingly, though, humans are extremely capable, and with the exception of tasks that require a very specific combination of strength or speed or precision, having a human in the loop is still a good way of making sure that you get the job done. But the physical meat-sack nature of humans is annoying as well, restricting us to using our talents and (equally important) having physical experiences in only one location at a time.

The ANA Avatar XPrize seeks to solve this by combining humans and robots by enabling physical, nonautonomous avatar systems that allow remote users to see, hear, touch, and interact in real time. This isn’t a new idea, but with US $10 million up for grabs, this competition is the biggest push toward avatar robotics we’ve seen since the DARPA Robotics Challenge. And after a questionable start, the challenge evolved to (I would argue) better serve its purpose, with a final event coming up in November that will definitely be worth watching.


In the future, avatars could help provide critical care and deploy immediate responses in emergency situations, or offer opportunities for exploration and new ways of collaboration, stretching the boundaries of what is possible and maximizing the impact of skill and knowledge sharing.

Avatar robots are systems designed to provide the hardware and software necessary for a remote human to experience the robot’s environment as directly as possible, and allow the human to interact with that environment. Strictly speaking, the extent to which you can call avatar systems “robots” is debatable, because the focus is usually on fidelity of experience rather than autonomy. The systems are free to assist their users with carrying out low-level tasks, but systems that are anything more than “semiautonomous” are specifically excluded from the XPrize competition.

Avatar systems are essentially very, very fancy remote-control stations connected to mobile manipulators. For the human in the loop, this typically means (at the very least) a virtual-reality headset along with some way of directly controlling a manipulator or two. However, the concept could be extended to include wearable sensors and even brain-machine interfaces. The idea is that once you have something like this up and running, distance ceases to be a factor, and you’d be able to effectively use the avatar system whether it’s in the next room over, the next continent over, or anywhere else from deep sea to high orbit.

The XPrize competition, sponsored by the Japanese airline ANA, has been underway for several years now. Twenty finalist teams have been selected to compete in Los Angeles in November. While each robot meets some general guidelines (mobile, safe to operate indoors and around people, under 160 kilograms, fully untethered), each team has its own unique hardware and approach to telepresence, which should make the competition incredibly exciting.

During the final event, each of these robots (and their remote operators) will have to complete 10 tasks that test the avatar’s ability to provide remote human-to-human connection, the potential to explore places where humans cannot, and the feasibility of transporting the skills of an expert human to remote locations in real time. These tasks will measure tangible things like fidelity of remote perception (including touch), localization and navigation, and manipulation. There will also be tasks targeted toward more experiential things, including effectiveness of emotional expression and natural conversational turn-taking. While the full test tasks won’t be revealed until the final event, here are some examples of what the tasks will incorporate:

  • The avatar introduces itself to the mission commander, repeats mission goals, and activates a switch
  • The avatar moves between designated areas, identifies a heavy canister, and picks it up
  • The avatar is able to utilize a drill
  • The avatar feels the texture of objects without seeing them, and retrieves a requested object

None of these example tasks necessarily seem that complicated to perform, but the key is performing them reliably and well, especially when it comes to things that aren’t quite as easy to measure in an empirical way—like being able to give (and feel) a gentle hug. The actual scoring will be done by expert judges acting as operators, which is a really great way of ensuring that the avatar interfaces are adaptable, effective, and user friendly. During the event, the scored portion of each trial will last a maximum of 25 minutes, but the 75 minutes beforehand will be spent training the operator to use the avatar system. Seventy-five minutes may sound like a lot of time, but for a new operator on a sophisticated system, the teams are going to need to focus not just on making their systems easy to use but also on finding an effective way of teaching people. I really appreciate this aspect of the challenge, because (as we’ve seen in both the DARPA Robotic Challenge and the SubT subterranean challenge) expert operators can accomplish amazing things, but that’s not a sustainable path for the broad adoption of practical remotely operated robots.

The final event itself will be free and open to the public in Los Angeles on 4 and 5 November. Spectators (that’s you!) will be able to see both the test course and the team garages, and there will be live broadcasts of the operator control rooms as well as feeds of what the operators are experiencing through the sensors of their avatar robots. The stakes are high, with $5 million going to the winning team, $2 million to second place, and $1 million for third.

For more details on the competition, we spoke with David Locke, the senior program director of the ANA Avatar XPrize.

IEEE Spectrum: Where did the inspiration for this competition come from?

David Locke: The vision that we had with ANA was this idea of teleportation—the idea that you could literally teleport yourself somewhere else. We knew we didn’t have the technology to do that then, and we certainly don’t have it now, so what’s the next step? What if you could put an avatar in the middle of that vision and transport yourself anywhere in the world through that system? Teleportation might not be an option, but telepresence and telexistence are, where you can actually feel physically present in a location, using the robot as your conduit.

Compared to the DRC and SubT, where do you think that these avatar systems will be on the spectrum from robot operators to robot supervisors?

Locke: When I first came to this avatar competition, I was thinking a lot about the DRC, and how we could model this competition off of it. I actually brought in the technical lead for the DRC to help me run this competition. But to be clear, the avatar that we’re talking about is nonautonomous. This robot will not be making any movements without the operator dictating that it will make those movements. I think the future of avatars could include a combination of both nonautonomy and autonomy, but right now, we’re really focused on the nonautonomy. The reason is that it’s important for us that the operator feels connected to the environment, and one of the ways of doing that is by controlling your own movements, and not having the avatar tell you where to go. We want the user to have shared interactions that feel authentic, where you feel a full sense of embodiment in the remote environment.

But isn’t it true that sometimes when you’re remotely controlling a multiple-degrees-of-freedom robot, the task of doing so is complex enough that it makes you feel disconnected with the remote environment anyway? What about assistive autonomy to smooth that interface?

Locke: During the semifinals, we did find that a lot. The judges are in there judging their experience on both a subjective and objective level, and you would see them struggle with things like grabbing puzzle pieces. And sometimes the recipient judge would have to nudge them in the right direction. I definitely think that there’s some form of autonomy that’s going to come in and play a bigger role in avatars in the future, but it’s hard to say what that will be, and I’d love to explore it in an ANA Avatar XPrize No. 2. But I think right now, this nonautonomous zone is a good space for us, as sort of the “hello world” of what the potential is of avatar technology.

Can you describe what the finals will be focusing on?

Locke: We’ll be going for advanced tasks in mobility, haptics, and manipulation, with a focus on three domains: connection, the ability for humans to connect using an avatar as a conduit; skill transference, the ability to transport your skill set anywhere in the world using an avatar; and then exploration, the idea that you can use an avatar to travel anywhere from your own couch or access places that are dangerous or otherwise inaccessible to humans. Those are the three main things we’re going to hit on at the finals.

If connection is an important metric for this competition, how do you judge that in a fair way?

Locke: At semifinals, it was weighted such that the operator experience and the recipient experience were the most important parts of the competition, more important than the task-level objective scoring. That rationale has evolved since semifinals testing: after further review of our semifinals data, and because we saw such positive feedback and scoring from the judges regarding both the recipient and operator experiences, for finals we are now weighting toward the ability to complete the required tasks. Experience absolutely remains a key factor. However, of the 15 possible points teams may earn at finals, 10 points will be task based, with 5 points attributed to the operator and recipient experiences. This will also help from a storytelling and audience-experience perspective.

Looking at the finalist teams, there’s a lot of variety in the hardware, and more expensive hardware can make an enormous difference to how a robot performs. How does that factor into the competition?

Locke: It’s hard to say. Your team may not have the best hardware, but if you’re able to converge and integrate different technologies to make yourself successful, you have an advantage. But you know, it’s something that we face in all XPrizes, and I imagine DARPA has a little bit of this as well: How do you make it a level playing field for all the teams? Some of the steps that we took early on in this competition were to hit really hard very early on the fact that teams should be looking to collaborate and share ideas and tech, and to either combine and form larger teams or find other ways to support one another. We also tried to find different experts to link the teams up with for advice, as well as bringing in a number of different supply partners for free or discounted goods. And at the semifinals testing, we did distribute $2 million in milestone prize money.

I’ve done eight of these XPrize competitions. And the one thing that always blows me away is that for a lot of these teams, it’s not just about the money or about winning—it comes from an honest place of advancing the technology, and I’m always shocked at the level of dedication that these teams have for making progress and pushing the tech forward. —David Locke

What should our expectations be for the final event?

Locke: This competition is so different from any other robotics competition, because it’s not solely about completing the tasks. It’s about exploring and connecting people, and that can be hard to demonstrate. The audience will need to really understand what they’re seeing, because it’s hard as an audience member to detect a connection between a human and a robot, right? The audience will be able to see the robot in the trial along with the recipient, but they’ll also see an operator view that shows what the operator is seeing and why they’re making the decisions they’re making as they go through the course.

People are going to have to keep in mind that this is like the very first computer. What we see isn’t going to go straight from the test course to store shelves. It’s going to take time for the technology to advance, and this will be phase No. 1. And what I would love to see is a way of continuing this challenge with XPrize year after year to help teams hone their tech and push it toward the market.

Long term, avatar robots will, we hope, go far beyond this competition. The pandemic has shown both how flexible physical presence can be and how important it is. Telepresence has been an important first step, one that (at least for me) has given some tantalizing hints of just how powerful embodied remote presence can potentially be. The idea of an immersive experience is far more compelling, and the Avatar XPrize is going to help us get there.

Source link

Nvidia’s CTO on the Future of High-Performance Computing

So where will we turn for future scaling? We will continue to look to the third dimension. We’ve created experimental devices that stack atop each other, delivering logic that is 30 to 50 percent smaller. Crucially, the top and bottom devices are of the two complementary types, NMOS and PMOS, that are the foundation of all the logic circuits of the last several decades. We believe this 3D-stacked complementary metal-oxide semiconductor (CMOS), or CFET (complementary field-effect transistor), will be the key to extending Moore’s Law into the next decade.

The Evolution of the Transistor

Continuous innovation is an essential underpinning of Moore’s Law, but each improvement comes with trade-offs. To understand these trade-offs and how they’re leading us inevitably toward 3D-stacked CMOS, you need a bit of background on transistor operation.

Every metal-oxide-semiconductor field-effect transistor, or MOSFET, has the same set of basic parts: the gate stack, the channel region, the source, and the drain. The source and drain are chemically doped to make them both either rich in mobile electrons (n-type) or deficient in them (p-type). The channel region has the opposite doping to the source and drain.

In the planar version in use in advanced microprocessors up to 2011, the MOSFET’s gate stack is situated just above the channel region and is designed to project an electric field into the channel region. Applying a large enough voltage to the gate (relative to the source) creates a layer of mobile charge carriers in the channel region that allows current to flow between the source and drain.
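
The behavior described above can be sketched with the textbook long-channel "square law" MOSFET model. This is a generic teaching model, not Intel's device physics, and the parameter values below are arbitrary:

```python
# Toy long-channel MOSFET model: gate voltage controls the channel, and
# drain current flows only once the gate overdrive is positive.

def drain_current(vgs, vds, vth=0.4, k=2e-3):
    """Drain current (A) of an idealized NMOS device.
    vgs, vds: gate-source and drain-source voltages (V)
    vth: threshold voltage (V); k: transconductance parameter (A/V^2)"""
    vov = vgs - vth              # gate overdrive
    if vov <= 0:
        return 0.0               # cutoff: no mobile-carrier layer forms
    if vds < vov:                # triode region: channel acts like a resistor
        return k * (vov * vds - vds ** 2 / 2)
    return 0.5 * k * vov ** 2    # saturation region

print(drain_current(0.3, 1.0))   # below threshold: 0.0
print(drain_current(1.0, 1.0))   # above threshold: current flows
```

Raising the gate voltage past the threshold is what creates the layer of mobile charge carriers the text describes; below it, this model returns zero current.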

As we scaled down the classic planar transistors, what device physicists call short-channel effects took center stage. Basically, the distance between the source and drain became so small that current would leak across the channel when it wasn’t supposed to, because the gate electrode struggled to deplete the channel of charge carriers. To address this, the industry moved to an entirely different transistor architecture called a FinFET, which wraps the gate around the channel on three sides to provide better electrostatic control.

Intel introduced its FinFETs in 2011, at the 22-nanometer node, with the third-generation Core processor, and the device architecture has been the workhorse of Moore’s Law ever since. With FinFETs, we could operate at a lower voltage and still have less leakage, reducing power consumption by some 50 percent at the same performance level as the previous-generation planar architecture. FinFETs also switched faster, boosting performance by 37 percent. And because conduction occurs on both vertical sides of the “fin,” the device can drive more current through a given area of silicon than can a planar device, which only conducts along one surface.

However, we did lose something in moving to FinFETs. In planar devices, the width of a transistor was defined by lithography, and it was therefore a highly flexible parameter. But in FinFETs, the transistor width comes in discrete increments, adding one fin at a time, a characteristic often referred to as fin quantization. As flexible as the FinFET may otherwise be, fin quantization remains a significant design constraint. The design rules around it, and the desire to add more fins to boost performance, increase the overall area of logic cells and complicate the stack of interconnects that turn individual transistors into complete logic circuits. Quantization also increases the transistor’s capacitance, sapping some of its switching speed. So, while the FinFET has served us well as the industry’s workhorse, a new, more refined approach is needed. And it’s that approach that led us to the 3D transistors we’re introducing soon.
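
To make fin quantization concrete, here is a toy comparison; the per-fin width is a made-up number, not data from any real process:

```python
# Fin quantization: a FinFET's effective width comes only in whole-fin
# multiples, while a planar or RibbonFET width can be set continuously by
# lithography. The per-fin width here is purely illustrative.

import math

FIN_WIDTH = 46.0  # hypothetical effective electrical width per fin, in nm

def finfet_width(target_nm):
    """Smallest achievable FinFET width >= target: whole fins only."""
    return math.ceil(target_nm / FIN_WIDTH) * FIN_WIDTH

def ribbonfet_width(target_nm):
    """Nanoribbon width is continuously tunable by lithography."""
    return target_nm

# A designer wanting 100 nm of width must round up to 3 fins (138 nm),
# paying area and capacitance for width the circuit didn't ask for.
print(finfet_width(100))     # 138.0
print(ribbonfet_width(100))  # 100
```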

In the RibbonFET, the gate wraps around the transistor channel region to enhance control of charge carriers. The new structure also enables better performance and more refined optimization. Emily Cooper

This advance, the RibbonFET, is our first new transistor architecture since the FinFET’s debut 11 years ago. In it, the gate fully surrounds the channel, providing even tighter control of charge carriers within channels that are now formed by nanometer-scale ribbons of silicon. With these nanoribbons (also called nanosheets), we can again vary the width of a transistor as needed using lithography.

With the quantization constraint removed, we can produce the appropriately sized width for the application. That lets us balance power, performance, and cost. What’s more, with the ribbons stacked and operating in parallel, the device can drive more current, boosting performance without increasing the area of the device.

We see RibbonFETs as the best option for higher performance at reasonable power, and we will be introducing them in 2024, along with other innovations such as PowerVia, our version of backside power delivery, with the Intel 20A fabrication process.

Stacked CMOS

One commonality of planar, FinFET, and RibbonFET transistors is that they all use CMOS technology, which, as mentioned, consists of n-type (NMOS) and p-type (PMOS) transistors. CMOS logic became mainstream in the 1980s because it draws significantly less current than do the alternative technologies, notably NMOS-only circuits. Less current also led to greater operating frequencies and higher transistor densities.

To date, all CMOS technologies place the standard NMOS and PMOS transistor pair side by side. But in a keynote at the IEEE International Electron Devices Meeting (IEDM) in 2019, we introduced the concept of a 3D-stacked transistor that places the NMOS transistor on top of the PMOS transistor. The following year, at IEDM 2020, we presented the design for the first logic circuit using this 3D technique, an inverter. Combined with appropriate interconnects, the 3D-stacked CMOS approach effectively cuts the inverter footprint in half, doubling the area density and further pushing the limits of Moore’s Law.

3D-stacked CMOS puts a PMOS device on top of an NMOS device in the same footprint a single RibbonFET would occupy. The NMOS and PMOS gates use different metals. Emily Cooper

Taking advantage of the potential benefits of 3D stacking means solving a number of process integration challenges, some of which will stretch the limits of CMOS fabrication.

We built the 3D-stacked CMOS inverter using what is known as a self-aligned process, in which both transistors are constructed in one manufacturing step. This means constructing both n-type and p-type sources and drains by epitaxy—crystal deposition—and adding different metal gates for the two transistors. By combining the source-drain and dual-metal-gate processes, we are able to create different conductive types of silicon nanoribbons (p-type and n-type) to make up the stacked CMOS transistor pairs. It also allows us to adjust the device’s threshold voltage—the voltage at which a transistor begins to switch—separately for the top and bottom nanoribbons.

How do we do all that? The self-aligned 3D CMOS fabrication begins with a silicon wafer. On this wafer, we deposit repeating layers of silicon and silicon germanium, a structure called a superlattice. We then use lithographic patterning to cut away parts of the superlattice and leave a finlike structure. The superlattice crystal provides a strong support structure for what comes later.

Next, we deposit a block of “dummy” polycrystalline silicon atop the part of the superlattice where the device gates will go, protecting them from the next step in the procedure. That step, called the vertically stacked dual source/drain process, grows phosphorous-doped silicon on both ends of the top nanoribbons (the future NMOS device) while also selectively growing boron-doped silicon germanium on the bottom nanoribbons (the future PMOS device). After this, we deposit dielectric around the sources and drains to electrically isolate them from one another. The latter step requires that we then polish the wafer down to perfect flatness.

An edge-on view of the 3D-stacked inverter shows how complicated its connections are. Emily Cooper

By stacking NMOS on top of PMOS transistors, 3D stacking effectively doubles CMOS transistor density per square millimeter, though the real density depends on the complexity of the logic cell involved. The inverter cells are shown from above, indicating source and drain interconnects [red], gate interconnects [blue], and vertical connections [green].

Finally, we construct the gate. First, we remove that dummy gate we’d put in place earlier, exposing the silicon nanoribbons. We next etch away only the silicon germanium, releasing a stack of parallel silicon nanoribbons, which will be the channel regions of the transistors. We then coat the nanoribbons on all sides with a vanishingly thin layer of an insulator that has a high dielectric constant. The nanoribbon channels are so small and positioned in such a way that we can’t effectively dope them chemically as we would with a planar transistor. Instead, we use a property of the metal gates called the work function to impart the same effect. We surround the bottom nanoribbons with one metal to make a p-doped channel and the top ones with another to form an n-doped channel. Thus, the gate stacks are finished off and the two transistors are complete.

The process might seem complex, but it’s better than the alternative—a technology called sequential 3D-stacked CMOS. With that method, the NMOS devices and the PMOS devices are built on separate wafers, the two are bonded, and the PMOS layer is transferred to the NMOS wafer. In comparison, the self-aligned 3D process takes fewer manufacturing steps and keeps a tighter rein on manufacturing cost, something we demonstrated in research and reported at IEDM 2019.

Importantly, the self-aligned method also circumvents the problem of misalignment that can occur when bonding two wafers. Still, sequential 3D stacking is being explored to facilitate integration of silicon with nonsilicon channel materials, such as germanium and III-V semiconductor materials. These approaches and materials may become relevant as we look to tightly integrate optoelectronics and other functions on a single chip.

Making all the needed connections to 3D-stacked CMOS is a challenge. Power connections will need to be made from below the device stack. In this design, the NMOS device [top] and PMOS device [bottom] have separate source/drain contacts, but both devices have a gate in common. Emily Cooper

The new self-aligned CMOS process, and the 3D-stacked CMOS it creates, work well and appear to have substantial room for further miniaturization. At this early stage, that’s highly encouraging. Devices having a gate length of 75 nm demonstrated both the low leakage that comes with excellent device scalability and a high on-state current. Another promising sign: We’ve made wafers where the smallest distance between two sets of stacked devices is only 55 nm. While the device performance results we achieved are not records in and of themselves, they do compare well with individual nonstacked control devices built on the same wafer with the same processing.

In parallel with the process integration and experimental work, we have many theoretical, simulation, and design studies underway looking to provide insight into how best to use 3D CMOS. Through these, we’ve found some of the key considerations in the design of our transistors. Notably, we now know that we need to optimize the vertical spacing between the NMOS and PMOS devices: if it’s too short, it will increase parasitic capacitance; if it’s too long, it will increase the resistance of the interconnects between the two devices. Either extreme results in slower circuits that consume more power.
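
That trade-off has the shape of a classic RC minimization, which a toy model can illustrate; every coefficient below is invented for illustration, not device data:

```python
# Toy model of the NMOS-PMOS vertical-spacing trade-off: parasitic
# capacitance between the stacked devices falls roughly like 1/d (parallel-
# plate-like), while the resistance of the via connecting them grows
# linearly with d. All constants are made up for illustration.

def rc_delay(d_nm, r0=50.0, r_per_nm=2.0, c0=0.10, c_k=5.0):
    r = r0 + r_per_nm * d_nm   # interconnect resistance (ohms)
    c = c0 + c_k / d_nm        # parasitic capacitance (fF)
    return r * c               # RC product, arbitrary units

# Sweep the spacing and find the sweet spot between the two extremes.
spacings = range(5, 101, 5)
best = min(spacings, key=rc_delay)
assert rc_delay(best) <= rc_delay(5) and rc_delay(best) <= rc_delay(100)
print(best, round(rc_delay(best), 2))
```

The minimum sits in the interior of the sweep: both very tight and very loose spacings make the RC product, and hence circuit delay, worse.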

Many design studies, such as one by TEL Research Center America presented at IEDM 2021, focus on providing all the necessary interconnects in the 3D CMOS’s limited space without significantly increasing the area of the logic cells they make up. The TEL research showed that there are many opportunities for innovation in finding the best interconnect options. That research also highlights that 3D-stacked CMOS will need interconnects both above and below the devices. This scheme, called buried power rails, takes the interconnects that provide power to logic cells but don’t carry data and relocates them to the silicon below the transistors. Intel’s PowerVia technology, which does just that and is scheduled for introduction in 2024, will therefore play a key role in making 3D-stacked CMOS a commercial reality.

The Future of Moore’s Law

With RibbonFETs and 3D CMOS, we have a clear path to extend Moore’s Law beyond 2024. In a 2005 interview in which he was asked to reflect on what became his law, Gordon Moore admitted to being “periodically amazed at how we’re able to make progress. Several times along the way, I thought we reached the end of the line, things taper off, and our creative engineers come up with ways around them.”

With the move to FinFETs, the ensuing optimizations, and now the development of RibbonFETs and eventually 3D-stacked CMOS, supported by the myriad packaging enhancements around them, we’d like to think Mr. Moore will be amazed yet again.

Nuclear Energy Brinkmanship in Ukraine

A battle of nerves and steel is raging at Europe’s largest nuclear power plant, which Russia captured in March. Russian forces use the Zaporizhzhia plant as a safe haven for troops and equipment, including artillery that is shelling Ukrainian-held territory directly across the Dnipro River. Ukraine is launching a counteroffensive to retake occupied territory, including Zaporizhzhia. And, all the while, each blames the other as explosions rock the nuclear site.

According to a Reuters report today, Russia’s Defense Department may order the plant to shut down, citing shelling damage to the plant’s “back-up support systems.” Yesterday most plant workers were reportedly told not to come to work tomorrow, according to Ukrainian intelligence, which warns that the Russians may be planning a dangerous “provocation.”

The ongoing confrontation risks widespread death and contamination—including to the fertile Ukrainian breadbasket that helped feed the world until Russia’s invasion. In a simulated fallout map produced by Ukraine’s national weather service and posted Sunday, a radiation plume spreads northwest and reaches Poland and Lithuania within 72 hours.

In Ukraine, which endured the 1986 Chernobyl accident, fear of another nuclear disaster is fueling a debate over Zaporizhzhia’s continued operation amidst the mayhem. Nikolai Steinberg, a former chief engineer at Chernobyl and member of the International Atomic Energy Agency’s governing board, called running the plant “a crime” in an email interview with IEEE Spectrum. That’s because stopping would cool off the operating reactors, buying the beleaguered operators time to avert nuclear meltdowns if, say, shelling sparks a station-wide blackout.

Backers of Steinberg’s position are urging Ukraine’s nuclear regulatory agency to order a shutdown. (To date, plant operators have continued to respond to orders from Ukraine’s grid operator.) But Ukraine’s Ministry of Energy and nuclear power firm Energoatom say a broader risk calculus supports keeping the plant running. As Energoatom stated in June: “Disconnection is impossible from a technical, security, economic or political point of view.” Factors cited to justify operation include the possibility that cooling the plant will make it easier for Russia to transfer its generation to its own grid, and the need for power exports to Europe that deliver revenue and political support.

Last month Energoatom increased generation at Zaporizhzhia by ordering plant staff, working under Russian supervision, to start up a third reactor for the first time since the plant’s March 4 capture. At the time the Minister of Energy was pushing Europe’s grid regulators to rapidly expand capacity limits for Ukrainian electricity exports.

“We are in a nightmare,” is how Jan Haverkamp, a nuclear safety expert with Greenpeace International, described the quandary facing Ukraine and the world in an email to IEEE Spectrum. Haverkamp and other Western experts say ceasing power generation would be “the wisest thing to do” under normal circumstances. Of course, adds Haverkamp, there’s nothing normal about Zaporizhzhia’s situation.

Nuclear Roulette

The debate over operating Zaporizhzhia boiled over when artillery shells started hitting the plant site late last month. Which side is responsible for that shelling remains contested, though Western security experts argue that Russia has more incentives to risk an accident. Equipment damaged in the attacks includes:

The August 5 substation blast prompted one of Zaporizhzhia’s three operating units to automatically shut down. It also left the plant with just one grid connection, compared with the seven it had before the invasion. That jeopardizes the entire plant, which uses grid power to cool all six of its reactors and spent fuel pools. Energoatom CEO Petro Kotin said it put Zaporizhzhia “very close” to the situation that produced the 2011 Fukushima meltdowns.

Batteries and diesel backup generators are designed to power the plant’s cooling systems for 10 days. But Haverkamp says “an emergency situation or even melt-down” is possible well before then. Alleged corruption before Russia’s invasion raises doubts about the reliability of Zaporizhzhia’s back-up systems. Haverkamp also cites the plant’s “exhausted and decimated” Ukrainian staff, and the possibility of a power struggle with their Russian occupiers over how to manage an emergency.

Ukraine’s state nuclear safety center projects that Zaporizhzhia’s operating units could experience reactor damage in as little as 3 hours without power, according to a May 2022 assessment revealed last week by Kyiv-based MIND. Horrific consequences could follow. The agency that manages the Chernobyl Exclusion Zone projected last week that shells or rockets hitting Zaporizhzhia might unleash an accident 10 times worse than Chernobyl’s. The agency stated that radioactive emissions could kill tens of thousands of people, displace 2 million, pollute an area three times the size of Ukraine, and create a long-term exclusion zone as large as 30,000 square kilometers.

Olena Pareniuk, a senior nuclear safety expert at Ukraine’s National Academy of Sciences, told Ukrainian Radio last week that Zaporizhzhia could yield the world’s first level-8 nuclear accident. Chernobyl and Fukushima were both rated level 7, the current maximum on the International Nuclear and Radiological Event Scale.

Hence the call to suspend power generation. According to the state nuclear safety center’s expert assessment, reviewed by Spectrum, moving all reactors to a “cold stop” would extend the delay between full power loss and core damage from 3 hours to 27 hours. Stifling the nuclear reactions that produce energy would allow short-lived fission products in the reactors to dissipate, reducing the harm caused by radioactive emissions. “The overall risk would decrease,” says Ed Lyman, director of nuclear power safety at the Union of Concerned Scientists. He says proactive shutdown makes sense, just as it does for nuclear plants in the U.S., which shut down when hurricanes head their way.
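
The physics behind that extra margin is the rapid decline of decay heat after fission stops. A rough feel for the scale comes from the standard Way-Wigner textbook approximation (the one-year operating time below is an illustrative assumption, not a Zaporizhzhia figure):

```python
# Why a "cold stop" buys time: after fission stops, a reactor still produces
# decay heat, but that heat falls off quickly. The Way-Wigner formula is a
# standard textbook estimate; parameters here are illustrative only.

def decay_heat_fraction(t_s, t_op_s=3.15e7):
    """Decay power as a fraction of full power, t_s seconds after shutdown,
    for a reactor that previously operated for t_op_s seconds (~1 year here).
    """
    return 0.066 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

# One hour vs. one week after shutdown:
one_hour = decay_heat_fraction(3600)
one_week = decay_heat_fraction(7 * 86400)
assert one_week < one_hour  # the heat load keeps shrinking after shutdown
print(round(100 * one_hour, 2), round(100 * one_week, 2))  # % of full power
```

A reactor in cold shutdown therefore needs far less cooling when power is lost than one running at full power, which is the margin the 3-hour versus 27-hour figures reflect.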

Reports suggest that Ukraine’s nuclear regulator may be moving towards ordering a cold stop during the occupation, heeding counsel from the agency’s advisory board. On August 4 the board recommended a cold stop requirement for two of the plant’s four off-line units whose turbine halls appear to be occupied by Russian weapons.

However, at least one member of the board, state nuclear safety center representative Viktor Shenderovych, proposed stopping all six units. That call is supported by Georgiy Balakan, a former special adviser to the president of Energoatom, who worked with two independent groups of Ukrainian experts to fashion the world’s first industry-grade risk assessments of nuclear power operation under hostilities.

The independent calculations were performed using standard industry codes and Energoatom’s probabilistic risk models of Zaporizhzhia, developed with participation of U.S. national laboratories. And they look at risk across the site rather than just individual units, so they can spot larger accidents that happen when one event disrupts multiple systems. The analyses, reviewed in a recent LinkedIn post, project that “common-cause” failures from military action significantly increase the probability of reactor core damage when multiple units are operating.
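
A toy probability model shows why such common-cause events dominate multi-unit risk; the per-unit failure probability below is invented for illustration and is not drawn from Energoatom’s models:

```python
# When one event (say, shelling that severs all grid connections) hits every
# operating unit at once, per-unit failures are no longer independent events
# spread over time: the site rolls the dice on all units simultaneously.

def p_any_core_damage(n_units, p_unit=0.1):
    """P(at least one core damaged) given a site-wide event, assuming
    conditionally independent per-unit outcomes. p_unit is hypothetical."""
    return 1 - (1 - p_unit) ** n_units

for n in (1, 3, 6):
    print(n, round(p_any_core_damage(n), 3))
```

With more units running, the chance that a single site-wide event damages at least one core rises sharply, which is the thrust of the common-cause findings described above.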

Demilitarizing Zaporizhzhia

In an August 16 statement to Spectrum, Energoatom CEO Petro Kotin rejected calls to stop nuclear generation, arguing that they play into Russia’s hands.

Kotin claims that stopping Zaporizhzhia’s internal power generation “may cause an emergency” by making it more reliant on off-site power. But his primary argument is that a shutdown would facilitate Russia’s apparent plans to permanently annex the plant, along with the rest of occupied southeastern Ukraine. He notes that Russian state nuclear power giant Rosatom has acknowledged that it has staff at the plant, saying they provide “technical, consulting, communications and other assistance.” Sergey Kiriyenko, the Kremlin’s point man for Russian-occupied Ukraine, led Rosatom from 2005 to 2016.

Ukraine was connected to Russia’s grid until the invasion, when it disconnected and quickly synchronized instead with Europe’s grid. But Crimea, occupied since 2014, remains on the Russian grid. Kotin claims that shutdown is a prerequisite for switching Zaporizhzhia (and the intervening southeastern lines to Crimea) back to Russia’s grid.

A Ukrainian expert contacted by Spectrum counters that claim, however. He states that if Zaporizhzhia keeps running, it can power its own systems while the regional grid is realigned. That creates some risk, but Russia’s forces have shown for months that they are willing to endanger lives.

Ukrainian President Volodymyr Zelenskyy says only restoration of full Ukrainian control can guarantee the plant’s safety. He has also called for new sanctions against Russia that target Rosatom, which remains largely unscathed to date. Last week 42 nations, including the U.S., Canada, Turkey, and most European states, endorsed Zelenskyy’s call for Russia to immediately withdraw from the Zaporizhzhia plant. Not surprisingly, Russia rejected that call. Its Security Council representative explained that Russian forces must stay to protect against “provocations and terrorist attacks.”

Nuclear safety experts such as Haverkamp at Greenpeace International endorse United Nations secretary-general António Guterres’ proposed solution: Russian withdrawal coupled with creation of a demilitarized zone around the plant. The problem is finding a neutral international body to take charge. The most obvious choice, the International Atomic Energy Agency, is viewed with suspicion by Ukraine. Many IAEA staffers spent their careers at Rosatom, including the agency’s deputy director.

Haverkamp is sympathetic: “I am not really sure whether the IAEA can deliver that, as locked in [as] they are with Rosatom and Russia.”

2022—The Year the Hydrogen Economy Launched?

Utrecht, a largely bicycle-propelled city of 350,000 just south of Amsterdam, has become a proving ground for the bidirectional-charging techniques that have the rapt interest of automakers, engineers, city managers, and power utilities the world over. This initiative is taking place in an environment where everyday citizens want to travel without causing emissions and are increasingly aware of the value of renewables and energy security.

“We wanted to change,” says Eelco Eerenberg, one of Utrecht’s deputy mayors and alderman for development, education, and public health. And part of the change involves extending the city’s EV-charging network. “We want to predict where we need to build the next electric charging station.”

So it’s a good moment to consider where vehicle-to-grid concepts first emerged and to see in Utrecht how far they’ve come.

It’s been 25 years since University of Delaware energy and environmental expert Willett Kempton and Green Mountain College energy economist Steve Letendre outlined what they saw as a “dawning interaction between electric-drive vehicles and the electric supply system.” This duo, alongside Timothy Lipman of the University of California, Berkeley, and Alec Brooks of AC Propulsion, laid the foundation for vehicle-to-grid power.

Their initial idea was that garaged vehicles would have a two-way computer-controlled connection to the electric grid, which could receive power from the vehicle as well as provide power to it. Kempton and Letendre’s 1997 paper in the journal Transportation Research describes how battery power from EVs in people’s homes would feed the grid during a utility emergency or blackout. With on-street chargers, you wouldn’t even need the house.

Bidirectional charging uses an inverter about the size of a breadbasket, located either in a dedicated charging box or onboard the car. The inverter converts alternating current to direct current when charging the vehicle and back the other way when sending power into the grid. This is good for the grid. It’s yet to be shown clearly why that’s good for the driver.

This is a vexing question. Car owners can earn some money by giving a little energy back to the grid at opportune times, or can save on their power bills, or can indirectly subsidize operation of their cars this way. But from the time Kempton and Letendre outlined the concept, potential users also feared losing money, through battery wear and tear. That is, would cycling the battery more than necessary prematurely degrade the very heart of the car? Those lingering questions made it unclear whether vehicle-to-grid technologies would ever catch on.
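
The driver’s-side question can be framed as simple arithmetic: arbitrage revenue minus battery wear. Every price and degradation figure in this sketch is a hypothetical placeholder:

```python
# Back-of-envelope V2G economics: does energy sold back at peak prices beat
# the battery wear it causes? All numbers below are hypothetical.

def v2g_daily_profit(kwh_cycled, peak_price=0.40, offpeak_price=0.15,
                     battery_cost_per_kwh=140.0, cycle_life=3000):
    # Arbitrage: buy off-peak, sell back at peak.
    revenue = kwh_cycled * (peak_price - offpeak_price)
    # Wear: each full-cycle kWh consumes 1/cycle_life of the pack's value.
    wear = kwh_cycled * battery_cost_per_kwh / cycle_life
    return revenue - wear

# At 2020-era pack prices (~$140/kWh), wear costs under 5 cents per kWh
# cycled, so a 25-cent price spread leaves room for profit.
print(round(v2g_daily_profit(10.0), 2))
```

The sign of the answer flips as the assumptions change: at 2010-era pack prices the wear term dominates, which is exactly the fear that kept early vehicle-to-grid schemes from catching on.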

Market watchers have seen a parade of “just about there” moments for vehicle-to-grid technology. In the United States in 2011, the University of Delaware and the New Jersey–based utility NRG Energy signed a technology-license deal for the first commercial deployment of vehicle-to-grid technology. Their research partnership ran for four years.

In recent years, there’s been an uptick in these pilot projects across Europe and the United States, as well as in China, Japan, and South Korea. In the United Kingdom, experiments are now taking place in suburban homes, using outside wall-mounted chargers metered to give credit to vehicle owners on their utility bills in exchange for uploading battery juice during peak hours. Other trials include commercial auto fleets, a set of utility vans in Copenhagen, two electric school buses in Illinois, and five in New York.

These pilot programs have remained just that, though—pilots. None evolved into a large-scale system. That could change soon. Concerns about battery wear and tear are abating. Last year, Heta Gandhi and Andrew White of the University of Rochester modeled vehicle-to-grid economics and found battery-degradation costs to be minimal. Gandhi and White also noted that battery capital costs have gone down markedly over time, falling from well over US $1,000 per kilowatt-hour in 2010 to about $140 in 2020.

As vehicle-to-grid technology becomes feasible, Utrecht is one of the first places to fully embrace it.

The key force behind the changes taking place in this windswept Dutch city is not a global market trend or the maturity of the engineering solutions. It’s having motivated people who are also in the right place at the right time.

One is Robin Berg, who started a company called We Drive Solar from his Utrecht home in 2016. It has evolved into a car-sharing fleet operator with 225 electric vehicles of various makes and models—mostly Renault Zoes, but also Tesla Model 3s, Hyundai Konas, and Hyundai Ioniq 5s. Drawing in partners along the way, Berg has plotted ways to bring bidirectional charging to the We Drive Solar fleet. His company now has 27 vehicles with bidirectional capabilities, with another 150 expected to be added in coming months.

In 2019, Willem-Alexander, king of the Netherlands, presided over the installation of a bidirectional charging station in Utrecht. Here the king [middle] is shown with Robin Berg [left], founder of We Drive Solar, and Jerôme Pannaud [right], Renault’s general manager for Belgium, the Netherlands, and Luxembourg. Patrick van Katwijk/Getty Images

Amassing that fleet wasn’t easy. We Drive Solar’s two bidirectional Renault Zoes are prototypes, which Berg obtained by partnering with the French automaker. Production Zoes capable of bidirectional charging have yet to come out. Last April, Hyundai delivered 25 bidirectionally capable long-range Ioniq 5s to We Drive Solar. These are production cars with modified software, which Hyundai is making in small numbers. It plans to introduce the technology as standard in an upcoming model.

We Drive Solar’s 1,500 subscribers don’t have to worry about battery wear and tear—that’s the company’s problem, if it is one, and Berg doesn’t think it is. “We never go to the edges of the battery,” he says, meaning that the battery is never put into a charge state high or low enough to shorten its life materially.

We Drive Solar is not a free-flowing, pick-up-by-app-and-drop-where-you-want service. Cars have dedicated parking spots. Subscribers reserve their vehicles, pick them up and drop them off in the same place, and drive them wherever they like. On the day I visited Berg, two of his cars were headed as far as the Swiss Alps, and one was going to Norway. Berg wants his customers to view particular cars (and the associated parking spots) as theirs and to use the same vehicle regularly, gaining a sense of ownership for something they don’t own at all.

That Berg took the plunge into EV ride-sharing and, in particular, into power-networking technology like bidirectional charging, isn’t surprising. In the early 2000s, he started a local service provider called LomboXnet, installing line-of-sight Wi-Fi antennas on a church steeple and on the rooftop of one of the tallest hotels in town. When Internet traffic began to crowd his radio-based network, he rolled out fiber-optic cable.

In 2007, Berg landed a contract to install rooftop solar at a local school, with the idea to set up a microgrid. He now manages 10,000 schoolhouse rooftop panels across the city. A collection of power meters lines his hallway closet, and they monitor solar energy flowing, in part, to his company’s electric-car batteries—hence the company name, We Drive Solar.

Berg did not learn about bidirectional charging through Kempton or any of the other early champions of vehicle-to-grid technology. He heard about it because of the Fukushima nuclear-plant disaster a decade ago. He owned a Nissan Leaf at the time, and he read about how these cars supplied emergency power in the Fukushima region.

“Okay, this is interesting technology,” Berg recalls thinking. “Is there a way to scale it up here?” Nissan agreed to ship him a bidirectional charger, and Berg called Utrecht city planners, saying he wanted to install a cable for it. That led to more contacts, including at the company managing the local low-voltage grid, Stedin. After he installed his charger, Stedin engineers wanted to know why his meter sometimes ran backward. Later, Irene ten Dam at the Utrecht regional development agency got wind of his experiment and was intrigued, becoming an advocate for bidirectional charging.

Berg and the people working for the city who liked what he was doing attracted further partners, including Stedin, software developers, and a charging-station manufacturer. By 2019, Willem-Alexander, king of the Netherlands, was presiding over the installation of a bidirectional charging station in Utrecht. “With both the city and the grid operator, the great thing is, they are always looking for ways to scale up,” Berg says. They don’t just want to do a project and write a report on it, he says. They really want to get to the next step.

Those next steps are taking place at a quickening pace. Utrecht now has 800 bidirectional chargers designed and manufactured by the Dutch engineering firm NieuweWeme. The city will soon need many more.

The number of charging stations in Utrecht has risen sharply over the past decade.

“People are buying more and more electric cars,” says Eerenberg, the alderman. City officials noticed a surge in such purchases in recent years, only to hear complaints from Utrechters that they then had to go through a long application process to have a charger installed where they could use it. Eerenberg, a computer scientist by training, is still working to unwind these knots. He realizes that the city has to go faster if it is to meet the Dutch government’s mandate for all new cars to be zero-emission in eight years.

The amount of energy being used to charge EVs in Utrecht has skyrocketed in recent years.

Although similar mandates to put more zero-emission vehicles on the road in New York and California failed in the past, the pressure for vehicle electrification is higher now. And Utrecht city officials want to get ahead of demand for greener transportation solutions. This is a city that just built a central underground parking garage for 12,500 bicycles and spent years digging up a freeway that ran through the center of town, replacing it with a canal in the name of clean air and healthy urban living.

A driving force in shaping these changes is Matthijs Kok, the city’s energy-transition manager. He took me on a tour—by bicycle, naturally—of Utrecht’s new green infrastructure, pointing to some recent additions, like a stationary battery designed to store solar energy from the many panels slated for installation at a local public housing development.

This map of Utrecht shows the city’s EV-charging infrastructure. Orange dots are the locations of existing charging stations; red dots denote charging stations under development. Green dots are possible sites for future charging stations.

“This is why we all do it,” Kok says, stepping away from his propped-up bike and pointing to a brick shed that houses a 400-kilowatt transformer. These transformers are the final link in the chain that runs from the power-generating plant to high-tension wires to medium-voltage substations to low-voltage transformers to people’s kitchens.

There are thousands of these transformers in a typical city. But if too many electric cars in one area need charging, transformers like this can easily become overloaded. Bidirectional charging promises to ease such problems.
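A toy model makes the peak-shaving idea concrete. This is not Stedin's actual control scheme, and every number below is invented for illustration: a 400-kilowatt transformer, a hypothetical hourly load curve with an evening EV-charging peak, and a parked fleet able to discharge up to 80 kW.

```python
# Toy sketch of vehicle-to-grid peak shaving on a neighborhood transformer.
# All figures are illustrative assumptions, not utility data.

TRANSFORMER_KW = 400

# Hypothetical hourly neighborhood load (kW) with an evening charging peak.
base_load = [250, 260, 300, 420, 460, 380, 300, 260]

def shave_peak(load, limit, fleet_kw=80):
    """Discharge up to fleet_kw from parked EVs in hours where load exceeds
    the transformer limit, then recharge that energy in the hours with the
    most headroom, so the batteries end the day where they started."""
    discharged = [max(0, min(fleet_kw, kw - limit)) for kw in load]
    shaved = [kw - d for kw, d in zip(load, discharged)]
    debt = sum(discharged)                      # energy owed back to the cars
    for i in sorted(range(len(shaved)), key=lambda i: shaved[i]):
        if debt <= 0:
            break
        topup = min(limit - shaved[i], debt)    # recharge without overloading
        shaved[i] += topup
        debt -= topup
    return shaved

print(max(base_load))                        # 460: peak overloads the transformer
print(max(shave_peak(base_load, TRANSFORMER_KW)))  # 400: held within the limit
```

The same total energy flows through the transformer either way; bidirectional charging just moves it out of the overloaded hours.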

Kok works with others in city government to compile data and create maps, dividing the city into neighborhoods. Each one is annotated with population figures, household types, vehicle counts, and more. Together with a contracted data-science group, and with input from ordinary citizens, they developed a policy-driven algorithm to help pick the best locations for new charging stations. The city also wrote incentives for deploying bidirectional chargers into its 10-year contracts with vehicle charge-station operators. So in these chargers went.

Experts expect bidirectional charging to work particularly well for vehicles that are part of a fleet whose movements are predictable. In such cases, an operator can readily program when to charge and discharge a car’s battery.

We Drive Solar earns credit by sending battery power from its fleet to the local grid during times of peak demand and charges the cars’ batteries back up during off-peak hours. If it does that well, drivers don’t lose any range they might need when they pick up their cars. And these daily energy trades help to keep prices down for subscribers.
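The daily trade described here can be sketched as a simple price-arbitrage schedule. Everything in this sketch is hypothetical: the day-ahead prices, the 10-kW charger rating, and the 20 kWh the operator is willing to cycle per car; round-trip losses are ignored, and this is not We Drive Solar's real scheduler.

```python
# Minimal sketch of a fleet operator's daily charge/discharge plan:
# sell battery power in the priciest hours, buy it back in the cheapest,
# cycling only a fixed slice of each battery so drivers keep their range.

def plan_day(prices, charger_kw=10, tradable_kwh=20):
    """prices: hourly energy prices (EUR/kWh). Returns the hours to charge,
    the hours to discharge, and the gross profit of the trade."""
    slots = int(tradable_kwh / charger_kw)        # hours needed each way
    by_price = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = set(by_price[:slots])          # cheapest hours: buy
    discharge_hours = set(by_price[-slots:])      # priciest hours: sell
    profit = (sum(prices[h] * charger_kw for h in discharge_hours)
              - sum(prices[h] * charger_kw for h in charge_hours))
    return charge_hours, discharge_hours, profit

# Hypothetical day: cheap overnight, expensive in the early evening.
prices = [0.08, 0.07, 0.07, 0.09, 0.15, 0.22, 0.30, 0.28, 0.18, 0.12]
charge, discharge, profit = plan_day(prices)
print(round(profit, 2))  # prints 4.4 (EUR, per car, for this invented day)
```

Scaled across hundreds of cars, small per-vehicle margins like this are what keep subscription prices down.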

Encouraging car-sharing schemes like We Drive Solar appeals to Utrecht officials because of the struggle with parking—a chronic ailment common to most growing cities. A huge construction site near the Utrecht city center will soon add 10,000 new apartments. Additional housing is welcome, but 10,000 additional cars would not be. Planners want the ratio to be more like one car for every 10 households—and the amount of dedicated public parking in the new neighborhoods will reflect that goal.

Some of the cars available from We Drive Solar, including these Hyundai Ioniq 5s, are capable of bidirectional charging. We Drive Solar

Projections for the large-scale electrification of transportation in Europe are daunting. According to a Eurelectric/Deloitte report, there could be 50 million to 70 million electric vehicles in Europe by 2030, requiring several million new charging points, bidirectional or otherwise. Power-distribution grids will need hundreds of billions of euros in investment to support these new stations.

The morning before Eerenberg sat down with me at city hall to explain Utrecht’s charge-station planning algorithm, war broke out in Ukraine. Energy prices now strain many households to the breaking point. Gasoline has reached $6 a gallon (if not more) in some places in the United States. In Germany in mid-June, the driver of a modest VW Golf had to pay about €100 (more than $100) to fill the tank. In the U.K., utility bills shot up on average by more than 50 percent on the first of April.

The war upended energy policies across the European continent and around the world, focusing people’s attention on energy independence and security, and reinforcing policies already in motion, such as the creation of emission-free zones in city centers and the replacement of conventional cars with electric ones. How best to bring about the needed changes is often unclear, but modeling can help.

Nico Brinkel, who is working on his doctorate in Wilfried van Sark’s photovoltaics-integration lab at Utrecht University, focuses his models at the local level. In his calculations, he figures that, in and around Utrecht, low-voltage grid reinforcements cost about €17,000 per transformer and about €100,000 per kilometer of replacement cable. “If we are moving to a fully electrical system, if we’re adding a lot of wind energy, a lot of solar, a lot of heat pumps, a lot of electric vehicles…,” his voice trails off. “Our grid was not designed for this.”
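Brinkel's unit costs make back-of-envelope estimates easy. Only the per-unit prices below come from his figures; the transformer and cable counts are invented inputs for illustration.

```python
# Back-of-envelope low-voltage grid-reinforcement cost for the Utrecht area,
# using the unit costs Brinkel cites. Input quantities are hypothetical.

COST_PER_TRANSFORMER_EUR = 17_000   # per low-voltage transformer upgrade
COST_PER_KM_CABLE_EUR = 100_000     # per kilometer of replacement cable

def reinforcement_cost(transformers, cable_km):
    """Total reinforcement cost in euros for a given upgrade plan."""
    return (transformers * COST_PER_TRANSFORMER_EUR
            + cable_km * COST_PER_KM_CABLE_EUR)

# e.g. a hypothetical district needing 40 transformer upgrades and 25 km of cable:
print(reinforcement_cost(40, 25))  # 3180000, about EUR 3.2 million
```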

But the electrical infrastructure will have to keep up. One of Brinkel’s studies suggests that if a good fraction of the EV chargers are bidirectional, such costs could be spread out in a more manageable way. “Ideally, I think it would be best if all of the new chargers were bidirectional,” he says. “The extra costs are not that high.”

Berg doesn’t need convincing. He has been thinking about what bidirectional charging offers the whole of the Netherlands. He figures that 1.5 million EVs with bidirectional capabilities—in a country of 8 million cars—would balance the national grid. “You could do anything with renewable energy then,” he says.
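Berg's figure implies substantial dispatchable capacity. Assuming roughly 10 kW of (dis)charge power per vehicle, an assumption on our part rather than a number from Berg, the arithmetic looks like this:

```python
# Rough capacity implied by Berg's 1.5-million-EV figure.
# The 10-kW per-vehicle rating is our assumption, not his.

bidirectional_evs = 1_500_000   # Berg's target, out of ~8 million Dutch cars
charger_kw = 10                 # assumed per-vehicle (dis)charge power

fleet_gw = bidirectional_evs * charger_kw / 1e6
print(fleet_gw)  # 15.0 GW of flexible, grid-balancing capacity
```

Even if only a fraction of those cars are plugged in at any moment, that is power on the scale of a national grid's needs.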

Given that the country is starting with just hundreds of cars capable of bidirectional charging, 1.5 million is a big number. But one day, the Dutch might actually get there.

This article appears in the August 2022 print issue as “A Road Test for Vehicle-to-Grid Tech.”



Gill Pratt. Toyota Research Institute

Gill Pratt, Toyota’s Chief Scientist and the CEO of TRI, believes that robots have a significant role to play in assisting older people by solving physical problems as well as providing mental and emotional support. With a background in robotics research and five years as a program manager at the Defense Advanced Research Projects Agency, during which time he oversaw the DARPA Robotics Challenge in 2015, Pratt understands how difficult it can be to bring robots into the real world in a useful, responsible, and respectful way. In an interview earlier this year in Washington, D.C., with IEEE Spectrum’s Evan Ackerman, he said that the best approach to this problem is a human-centric one: “It’s not about the robot, it’s about people.”

What are the important problems that we can usefully and reliably solve with home robots in the relatively near term?

Gill Pratt: We are looking at the aging society as the No. 1 market driver of interest to us. Over the last few years, we’ve come to the realization that an aging society creates two problems. One is within the home for an older person who needs help, and the other is for the rest of society—for younger people who need to be more productive to support a greater number of older people. The dependency ratio is the fraction of the population that works relative to the fraction that does not. As an example, in Japan, in not too many years, it’s going to get pretty close to 1:1. And we haven’t seen that, ever.

Solving physical problems is the easier part of assisting an aging society. The bigger issue is actually loneliness. This doesn’t sound like a robotics thing, but it could be. Related to loneliness, the key issue is having purpose, and feeling that your life is still worthwhile.

What we want to do is build a time machine. Of course we can’t do that, that’s science fiction, but we want to be able to have a person say, “I wish I could be 10 years younger” and then have a robot effectively help them as much as possible to live that kind of life.

There are many different robotic approaches that could be useful to address the problems you’re describing. Where do you begin?

Pratt: Let me start with an example, and this is one we talk about all of the time because it helps us think: Imagine that we built a robot to help with cooking. Older people often have difficulty with cooking, right?

Well, one robotic idea is to just cook meals for the person. This idea can be tempting, because what could be better than a machine that does all the cooking? Most roboticists are young, and most roboticists have all these interesting, exciting, technical things to focus on. And they think, “Wouldn’t it be great if some machine made my meals for me and brought me food so I could get back to work?”

But for an older person, what they would truly find meaningful is still being able to cook, and still being able to have the sincere feeling of “I can still do this myself.” It’s the time-machine idea—helping them to feel that they can still do what they used to be able to do and still cook for their family and contribute to their well-being. So we’re trying to figure out right now how to build machines that have that effect—that help you to cook but don’t cook for you, because those are two different things.

A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Tidying and cleaning are physically repetitive tasks that are ideal for home robots, but still a challenge since every home is different, and every person expects their home to be organized and cleaned differently. Toyota Research Institute

How can we manage this temptation to focus on solving technical problems rather than more impactful ones?

Pratt: What we have learned is that you start with the human being, the user, and you say, “What do they need?” And even though all of us love gadgets and robots and motors and amplifiers and hands and arms and legs and stuff, just put that on the shelf for a moment and say: “Okay. I want to imagine that I’m a grandparent. I’m retired. It’s not quite as easy to get around as when I was younger. And mostly I’m alone.” How do we help that person have a truly better quality of life? And out of that will occasionally come places where robotic technology can help tremendously.

A second point of advice is to try not to look for your keys where the light is. There’s an old adage about a person who drops their keys on the street at night, and so they go look for them under a streetlight, rather than the place they dropped them. We have an unfortunate tendency in the robotics field—and I’ve done it too—to say, “Oh, I know some mathematics that I can use to solve this problem over here.” That’s where the light is. But unfortunately, the problem that actually needs to get solved is over there, in the dark. It’s important to resist the temptation to use robotics as a vehicle for only solving problems that are tractable.

It sounds like social robots could potentially address some of these needs. What do you think is the right role for social robots for elder care?

Pratt: For people who have advanced dementia, things can be really, really tough. There are a variety of robotic-like things or doll-like things that can help a person with dementia feel much more at ease and genuinely improve the quality of their life. They sometimes feel creepy to people who don’t have that disability, but I believe that they’re actually quite good, and that they can serve that role well.

There’s another huge part of the market, if you want to think about it in business terms, where many people’s lives can be tremendously improved even when they’re simply retired. Perhaps their spouse has died, they don’t have much to do, and they’re lonely and depressed. Typically, many of them are not technologically adept the way that their kids or their grandkids are. And the truth is their kids and their grandkids are busy. And so what can we really do to help?

Here there’s a very interesting dilemma, which is that we want to build a social-assistive technology, but we don’t want to pretend that the robot is a person. We’ve found that people will anthropomorphize a social machine, which shouldn’t be a surprise, but it’s very important to not cross a line where we are actively trying to promote the idea that this machine is actually real—that it’s a human being, or like a human being.

So there are a whole lot of things that we can do. The field is just beginning, and much of the improvement to people’s lives can happen within the next 5 to 10 years. In the social robotics space, we can use robots to help connect lonely people with their kids, their grandkids, and their friends. We think this is a huge, untapped potential.

A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Perceiving and grasping transparent objects like drinking glasses is a particularly difficult task. Toyota Research Institute

Where do you draw the line with the amount of connection that you try to make between a human and a machine?

Pratt: We don’t want to trick anybody. We should be very ethically stringent, I think, to not try to fool anyone. People will fool themselves plenty—we don’t have to do it for them.

To whatever extent that we can say, “This is your mechanized personal assistant,” that’s okay. It’s a machine, and it’s here to help you in a personalized way. It will learn what you like. It will learn what you don’t like. It will help you by reminding you to exercise, to call your kids, to call your friends, to get in touch with the doctor, all of those things that it’s easy for people to miss on their own. With these sorts of socially assistive technologies, that’s the way to think of it. It’s not taking the place of other people. It’s helping you to be more connected with other people, and to live a healthier life because of that.

How much do you think humans should be in the loop with consumer robotic systems? Where might it be most useful?

Pratt: We should be reluctant to do person-behind-the-curtain stuff, although from a business point of view, we absolutely are going to need that. For example, say there’s a human in an automated vehicle that comes to a double-parked car, and the automated vehicle doesn’t want to go around by crossing the double yellow line. Of course the vehicle should phone home and say, “I need an exception to cross the double yellow line.” A human being, for all kinds of reasons, should be the one to decide whether it’s okay to do the human part of driving, which is to make an exception and not follow the rules in this particular case.

However, having the human actually drive the car from a distance assumes that the communication link between the two of them is so reliable it’s as if the person is in the driver’s seat. Or, it assumes that the competence of the car to avoid a crash is so good that even if that communications link went down, the car would never crash. And those are both very, very hard things to do. So human beings that are remote, that perform a supervisory function, that’s fine. But I think that we have to be careful not to fool the public by making them think that nobody is in that front seat of the car, when there’s still a human driving—we’ve just moved that person to a place you can’t see.

In the robotics field, many people have spoken about this idea that we’ll have a machine to clean our house operated by a person in some part of the world where it would be good to create jobs. I think pragmatically it’s actually difficult to do this. And I would hope that the kinds of jobs we create are better than sitting at a desk and guiding a cleaning machine in someone’s house halfway around the world. It’s certainly not as physically taxing as having to be there and do the work, but I would hope that the cleaning robot would be good enough to clean the house by itself almost all the time and just occasionally when it’s stuck say, “Oh, I’m stuck, and I’m not sure what to do.” And then the human can help. The reason we want this technology is to improve quality of life, including for the people who are the supervisors of the machine. I don’t want to just shift work from one place to the other.

These bubble grippers are soft to the touch, making them safe for humans to interact with, but they also include the necessary sensing to be able to grasp and identify a wide variety of objects. Toyota Research Institute

Can you give an example of a specific technology that TRI is working on that could benefit the elderly?

Pratt: There are many examples. Let me pick one that is very tangible: the Punyo project.

In order to truly help elderly people live as if they are younger, robots not only need to be safe, they also need to be strong and gentle, able to sense and react to both expected and unexpected contacts and disturbances the way a human would. And of course, if robots are to make a difference in quality of life for many people, they must also be affordable.

Compliant actuation, where the robot senses physical contact and reacts with flexibility, can get us part way there. To get the rest of the way, we have developed instrumented, functional, low-cost compliant surfaces that are soft to the touch. We started with bubble grippers that have high-resolution tactile sensing for hands, and we are now adding compliant surfaces to all other parts of the robot’s body to replace rigid metal or plastic. Our hope is to enable robot hardware to have the strength, gentleness, and physical awareness of the most able human assistant, and to be affordable by large numbers of elderly or disabled people.

What do you think the next DARPA challenge for robotics should be?

Pratt: Wow. I don’t know! But I can tell you what ours is [at TRI]. We have a challenge that we give ourselves right now in the grocery store. This doesn’t mean we want to build a machine that does grocery shopping, but we think that trying to handle all of the difficult things that go on when you’re in the grocery store—picking things up even though there’s something right next to it, figuring out what the thing is even if the label that’s on it is half torn, putting it in the basket—this is a challenge task that will develop the same kind of capabilities we need for many other things within the home. We were looking for a task that didn’t require us to ask for 1,000 people to let us into their homes, and it turns out that the grocery store is a pretty good one. We have a hard time helping people to understand that it’s not about the store, it’s actually about the capabilities that let you work in the store, and that we believe will translate to a whole bunch of other things. So that’s the sort of stuff that we’re doing work on.

As you’ve gone through your career from academia to DARPA and now TRI, how has your perspective on robotics changed?

Pratt: I think I’ve learned that lesson that I was telling you about before—I understand much more now that it’s not about the robot, it’s about people. And ultimately, taking this user-centered design point of view is easy to talk about, but it’s really hard to do.

As technologists, the reason we went into this field is that we love technology. I can sit and design things on a piece of paper and feel great about it, and yet I’m never thinking about who it is actually going to be for, and what am I trying to solve. So that’s a form of looking for your keys where the light is.

The hard thing to do is to search where it’s dark, and where it doesn’t feel so good, and where you actually say, “Let me first of all talk to a lot of people who are going to be the users of this product and understand what their needs are. Let me not fall into the trap of asking them what they want and trying to build that because that’s not the right answer.” So what I’ve learned most of all is the need to put myself in the user’s shoes, and to really think about it from that point of view.


So where will we turn for future scaling? We will continue to look to the third dimension. We’ve created experimental devices that stack atop each other, delivering logic that is 30 to 50 percent smaller. Crucially, the top and bottom devices are of the two complementary types, NMOS and PMOS, that are the foundation of all the logic circuits of the last several decades. We believe this 3D-stacked complementary metal-oxide semiconductor (CMOS), or CFET (complementary field-effect transistor), will be the key to extending Moore’s Law into the next decade.

The Evolution of the Transistor

Continuous innovation is an essential underpinning of Moore’s Law, but each improvement comes with trade-offs. To understand these trade-offs and how they’re leading us inevitably toward 3D-stacked CMOS, you need a bit of background on transistor operation.

Every metal-oxide-semiconductor field-effect transistor, or MOSFET, has the same set of basic parts: the gate stack, the channel region, the source, and the drain. The source and drain are chemically doped to make them both either rich in mobile electrons (n-type) or deficient in them (p-type). The channel region has the opposite doping to the source and drain.

In the planar version in use in advanced microprocessors up to 2011, the MOSFET’s gate stack is situated just above the channel region and is designed to project an electric field into the channel region. Applying a large enough voltage to the gate (relative to the source) creates a layer of mobile charge carriers in the channel region that allows current to flow between the source and drain.

As we scaled down the classic planar transistors, what device physicists call short-channel effects took center stage. Basically, the distance between the source and drain became so small that current would leak across the channel when it wasn’t supposed to, because the gate electrode struggled to deplete the channel of charge carriers. To address this, the industry moved to an entirely different transistor architecture called a FinFET. It wrapped the gate around the channel on three sides to provide better electrostatic control.

Intel introduced its FinFETs in 2011, at the 22-nanometer node, with the third-generation Core processor, and the device architecture has been the workhorse of Moore’s Law ever since. With FinFETs, we could operate at a lower voltage and still have less leakage, reducing power consumption by some 50 percent at the same performance level as the previous-generation planar architecture. FinFETs also switched faster, boosting performance by 37 percent. And because conduction occurs on both vertical sides of the “fin,” the device can drive more current through a given area of silicon than can a planar device, which only conducts along one surface.

However, we did lose something in moving to FinFETs. In planar devices, the width of a transistor was defined by lithography and was therefore a highly flexible parameter. But in FinFETs, the transistor width comes in discrete increments, adding one fin at a time, a characteristic often referred to as fin quantization. No matter how capable the FinFET may otherwise be, fin quantization remains a significant design constraint. The design rules around it and the desire to add more fins to boost performance increase the overall area of logic cells and complicate the stack of interconnects that turn individual transistors into complete logic circuits. Quantization also increases the transistor’s capacitance, thereby sapping some of its switching speed. So, while the FinFET has served us well as the industry’s workhorse, a new, more refined approach is needed. And it’s that approach that led us to the 3D transistors we’re introducing soon.
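A toy calculation makes fin quantization concrete. The dimensions below are illustrative, not Intel process parameters: each fin contributes a fixed quantum of electrical width (two sidewalls plus the top), so a designer who needs a width between two multiples must round up to a whole fin and pay for the excess in area and capacitance.

```python
# Illustration of fin quantization. Dimensions are invented for the example.

FIN_WIDTH_NM = 7     # top-surface width contributed per fin (assumed)
FIN_HEIGHT_NM = 50   # fin height; both sidewalls conduct (assumed)

def finfet_width_nm(n_fins):
    """Effective electrical width of an n-fin device: each fin contributes
    two sidewalls plus its top surface."""
    return n_fins * (2 * FIN_HEIGHT_NM + FIN_WIDTH_NM)

# A designer who wants ~150 nm of drive width must round up to whole fins:
target = 150
fins = -(-target // finfet_width_nm(1))   # ceiling division

print(finfet_width_nm(1))           # 107 nm per fin
print(fins, finfet_width_nm(fins))  # 2 fins -> 214 nm, ~43% more than needed
```

A nanoribbon device, whose width is again set by lithography, could simply be drawn at 150 nm.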

In the RibbonFET, the gate wraps around the transistor channel region to enhance control of charge carriers. The new structure also enables better performance and more refined optimization. Emily Cooper

This advance, the RibbonFET, is our first new transistor architecture since the FinFET’s debut 11 years ago. In it, the gate fully surrounds the channel, providing even tighter control of charge carriers within channels that are now formed by nanometer-scale ribbons of silicon. With these nanoribbons (also called nanosheets), we can again vary the width of a transistor as needed using lithography.

With the quantization constraint removed, we can produce the appropriately sized width for the application. That lets us balance power, performance, and cost. What’s more, with the ribbons stacked and operating in parallel, the device can drive more current, boosting performance without increasing the area of the device.

We see RibbonFETs as the best option for higher performance at reasonable power, and we will be introducing them in 2024 along with other innovations, such as PowerVia, our version of backside power delivery, with the Intel 20A fabrication process.

Stacked CMOS

One commonality of planar, FinFET, and RibbonFET transistors is that they all use CMOS technology, which, as mentioned, consists of n-type (NMOS) and p-type (PMOS) transistors. CMOS logic became mainstream in the 1980s because it draws significantly less current than do the alternative technologies, notably NMOS-only circuits. Less current also led to greater operating frequencies and higher transistor densities.

To date, all CMOS technologies place the standard NMOS and PMOS transistor pair side by side. But in a keynote at the IEEE International Electron Devices Meeting (IEDM) in 2019, we introduced the concept of a 3D-stacked transistor that places the NMOS transistor on top of the PMOS transistor. The following year, at IEDM 2020, we presented the design for the first logic circuit using this 3D technique, an inverter. Combined with appropriate interconnects, the 3D-stacked CMOS approach effectively cuts the inverter footprint in half, doubling the area density and further pushing the limits of Moore’s Law.
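The footprint arithmetic behind that claim is simple: an inverter needs one NMOS and one PMOS device, which side by side occupy two device footprints but stacked occupy one. The sketch below uses an invented per-device footprint; real cell-level gains depend on interconnect overhead, so treat the factor of 2 as the idealized bound.

```python
# Idealized density gain from stacking the CMOS pair vertically.
# The per-device footprint is an invented illustrative number.

DEVICE_FOOTPRINT_NM2 = 50 * 100        # area of one transistor (assumed)

side_by_side = 2 * DEVICE_FOOTPRINT_NM2  # conventional CMOS: NMOS beside PMOS
stacked = 1 * DEVICE_FOOTPRINT_NM2       # CFET: NMOS directly above PMOS

print(side_by_side // stacked)  # 2: up to double the transistor density
```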

3D-stacked CMOS puts an NMOS device on top of a PMOS device in the same footprint a single RibbonFET would occupy. The NMOS and PMOS gates use different metals. Emily Cooper

Taking advantage of the potential benefits of 3D stacking means solving a number of process integration challenges, some of which will stretch the limits of CMOS fabrication.

We built the 3D-stacked CMOS inverter using what is known as a self-aligned process, in which both transistors are constructed in one manufacturing step. This means constructing both n-type and p-type sources and drains by epitaxy—crystal deposition—and adding different metal gates for the two transistors. By combining the source-drain and dual-metal-gate processes, we are able to create different conductive types of silicon nanoribbons (p-type and n-type) to make up the stacked CMOS transistor pairs. It also allows us to adjust the device’s threshold voltage—the voltage at which a transistor begins to switch—separately for the top and bottom nanoribbons.

How do we do all that? The self-aligned 3D CMOS fabrication begins with a silicon wafer. On this wafer, we deposit repeating layers of silicon and silicon germanium, a structure called a superlattice. We then use lithographic patterning to cut away parts of the superlattice and leave a finlike structure. The superlattice crystal provides a strong support structure for what comes later.

Next, we deposit a block of “dummy” polycrystalline silicon atop the part of the superlattice where the device gates will go, protecting them from the next step in the procedure. That step, called the vertically stacked dual source/drain process, grows phosphorous-doped silicon on both ends of the top nanoribbons (the future NMOS device) while also selectively growing boron-doped silicon germanium on the bottom nanoribbons (the future PMOS device). After this, we deposit dielectric around the sources and drains to electrically isolate them from one another. The latter step requires that we then polish the wafer down to perfect flatness.

An edge-on view of the 3D stacked inverter shows how complicated its connections are. Emily Cooper

By stacking NMOS on top of PMOS transistors, 3D stacking effectively doubles CMOS transistor density per square millimeter, though the real density depends on the complexity of the logic cell involved. The inverter cells are shown from above, indicating source and drain interconnects [red], gate interconnects [blue], and vertical connections [green].

Finally, we construct the gate. First, we remove that dummy gate we’d put in place earlier, exposing the silicon nanoribbons. We next etch away only the silicon germanium, releasing a stack of parallel silicon nanoribbons, which will be the channel regions of the transistors. We then coat the nanoribbons on all sides with a vanishingly thin layer of an insulator that has a high dielectric constant. The nanoribbon channels are so small and positioned in such a way that we can’t effectively dope them chemically as we would with a planar transistor. Instead, we use a property of the metal gates called the work function to impart the same effect. We surround the bottom nanoribbons with one metal to make a p-doped channel and the top ones with another to form an n-doped channel. Thus, the gate stacks are finished off and the two transistors are complete.
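To see how the gate metal's work function stands in for chemical doping, consider the textbook long-channel MOS threshold-voltage relation. The sketch below is a toy illustration only: the work-function values, the `phi_F` and depletion-charge terms, and the function itself are invented for the example and are not Intel's device parameters.

```python
# Illustrative sketch: how the gate-metal work function shifts threshold voltage.
# Uses the textbook long-channel relation V_th = V_fb + 2*phi_F + Q_dep/C_ox,
# where the flat-band voltage V_fb = phi_metal - phi_semiconductor.
# All numeric values are hypothetical, chosen only to show the trend.

def threshold_voltage(phi_metal_eV, phi_semi_eV=4.6, phi_F=0.4,
                      q_dep_per_cox=0.15):
    """Very simplified V_th model (volts); constants are placeholders."""
    v_fb = phi_metal_eV - phi_semi_eV          # flat-band voltage
    return v_fb + 2 * phi_F + q_dep_per_cox    # classic long-channel form

# A lower-work-function metal suits an n-type channel; a higher one suits
# a p-type channel. The 0.8 eV difference below shifts V_th by 0.8 V.
print(f"low-WF gate (4.2 eV):  V_th = {threshold_voltage(4.2):+.2f} V")
print(f"high-WF gate (5.0 eV): V_th = {threshold_voltage(5.0):+.2f} V")
```

In this simplified picture, swapping the gate metal rigidly shifts the threshold voltage by the work-function difference, which is the knob the article describes for setting the top and bottom nanoribbons' behavior separately.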

The process might seem complex, but it’s better than the alternative—a technology called sequential 3D-stacked CMOS. With that method, the NMOS and PMOS devices are built on separate wafers, the two are bonded, and the PMOS layer is transferred to the NMOS wafer. In comparison, the self-aligned 3D process takes fewer manufacturing steps and keeps a tighter rein on cost, something we demonstrated in research and reported at IEDM 2019.
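The cost argument can be made concrete with a back-of-the-envelope model. Every number here (step counts, per-step cost, the bonding yield penalty) is hypothetical; the point is only that extra steps and a wafer-bonding yield loss compound into a higher effective cost per good wafer.

```python
# Toy cost comparison between the self-aligned flow and sequential
# wafer-bonded stacking. Step counts, per-step cost, and bonding yield
# are made-up illustrative numbers, not Intel data.

def wafer_cost(n_steps, cost_per_step=100.0, bonding_yield=1.0):
    """Effective cost per good wafer: total process cost divided by yield."""
    return n_steps * cost_per_step / bonding_yield

self_aligned = wafer_cost(n_steps=40)                    # one wafer, one flow
sequential = wafer_cost(n_steps=55, bonding_yield=0.9)   # two wafers, bonded;
                                                         # misalignment hurts yield
print(f"self-aligned: {self_aligned:.0f}, sequential: {sequential:.0f}")
```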

Importantly, the self-aligned method also circumvents the problem of misalignment that can occur when bonding two wafers. Still, sequential 3D stacking is being explored to facilitate integration of silicon with nonsilicon channel materials, such as germanium and III-V semiconductor materials. These approaches and materials may become relevant as we look to tightly integrate optoelectronics and other functions on a single chip.

Making all the needed connections to 3D-stacked CMOS is a challenge. Power connections will need to be made from below the device stack. In this design, the NMOS device [top] and PMOS device [bottom] have separate source/drain contacts, but both devices have a gate in common. Emily Cooper

The new self-aligned CMOS process, and the 3D-stacked CMOS it creates, work well and appear to have substantial room for further miniaturization. At this early stage, that’s highly encouraging. Devices having a gate length of 75 nm demonstrated both the low leakage that comes with excellent device scalability and a high on-state current. Another promising sign: We’ve made wafers where the smallest distance between two sets of stacked devices is only 55 nm. While the device performance results we achieved are not records in and of themselves, they do compare well with individual nonstacked control devices built on the same wafer with the same processing.
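Low leakage and high on-state current are commonly summarized as an on/off current ratio, which the standard subthreshold model lets you estimate: below threshold, drain current falls about one decade for every "subthreshold swing" of gate voltage. The currents, threshold voltage, and swing below are illustrative placeholders, not measured values from these devices.

```python
# Illustrative on/off-ratio estimate from the standard subthreshold model:
# off-state current falls one decade per "subthreshold swing" (SS) of
# gate voltage below threshold. All numbers are hypothetical.

def off_current(i_th_uA, v_th_mV, ss_mV_per_decade=70.0):
    """Leakage at V_gs = 0, extrapolated down from threshold (µA/µm)."""
    return i_th_uA * 10 ** (-v_th_mV / ss_mV_per_decade)

i_on = 1000.0                                     # hypothetical on-current, µA/µm
i_off = off_current(i_th_uA=1.0, v_th_mV=350.0)   # leakage at V_gs = 0
print(f"I_off = {i_off:.2e} µA/µm, on/off ratio = {i_on / i_off:.1e}")
```

With these placeholder numbers the model gives an on/off ratio of about 10^8; a degraded (larger) subthreshold swing from poor electrostatic control would erode that ratio quickly, which is why low leakage at a 75-nm gate length is a meaningful scalability signal.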

In parallel with the process integration and experimental work, many theoretical, simulation, and design studies are underway to provide insight into how best to use 3D CMOS. Through these, we’ve identified some key considerations in the design of our transistors. Notably, we now know that we need to optimize the vertical spacing between the NMOS and PMOS: if it’s too short, parasitic capacitance between the devices increases, and if it’s too long, the resistance of the interconnects between the two devices increases. Either extreme results in slower circuits that consume more power.
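That trade-off can be captured in a toy RC model: the parasitic capacitance between stacked devices falls roughly as one over the spacing (parallel-plate-like), while the vertical interconnect resistance grows linearly with it, so the stage delay has an interior minimum. All the constants below are invented for illustration; only the shape of the curve is the point.

```python
# Toy RC-delay model of the NMOS-to-PMOS vertical spacing trade-off:
# inter-device parasitic capacitance ~ C0/d, vertical interconnect
# resistance ~ r*d. Delay ~ R_total * C_total. Constants are made up.

def stage_delay(d_nm, c0=200.0, r_per_nm=2.0, c_fixed=50.0, r_fixed=100.0):
    """Relative stage delay for vertical spacing d (nm), arbitrary units."""
    c_par = c0 / d_nm + c_fixed        # inter-device parasitic + fixed load
    r_int = r_per_nm * d_nm + r_fixed  # vertical via + fixed wiring
    return r_int * c_par

spacings = range(2, 61)
best = min(spacings, key=stage_delay)
print(f"minimum-delay spacing in this toy model: {best} nm")
```

Sweeping the spacing shows the delay blowing up at both extremes, with the optimum in between, which mirrors the design consideration described above.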

Many design studies, such as one by TEL Research Center America presented at IEDM 2021, focus on providing all the necessary interconnects in the 3D CMOS’s limited space and doing so without significantly increasing the area of the logic cells they make up. The TEL research showed that there are many opportunities for innovation in finding the best interconnect options. That research also highlights that 3D-stacked CMOS will need interconnects both above and below the devices. This scheme, called buried power rails, takes the interconnects that provide power to logic cells but don’t carry data and moves them to the silicon below the transistors. Intel’s PowerVia technology, which does just that and is scheduled for introduction in 2024, will therefore play a key role in making 3D-stacked CMOS a commercial reality.

The Future of Moore’s Law

With RibbonFETs and 3D CMOS, we have a clear path to extend Moore’s Law beyond 2024. In a 2005 interview in which he was asked to reflect on what became his law, Gordon Moore admitted to being “periodically amazed at how we’re able to make progress. Several times along the way, I thought we reached the end of the line, things taper off, and our creative engineers come up with ways around them.”

With the move to FinFETs, the ensuing optimizations, and now the development of RibbonFETs and eventually 3D-stacked CMOS, supported by the myriad packaging enhancements around them, we’d like to think Mr. Moore will be amazed yet again.


Q&A: Marc Raibert on the Boston Dynamics AI Institute

Last week, Hyundai Motor Group and Boston Dynamics announced an initial investment of over $400 million to launch the new Boston Dynamics AI Institute. The Institute was conceptualized by (and will be led by) Marc Raibert, the founder of Boston Dynamics, with the goal of “solving the most important and difficult challenges facing the creation of advanced robots.” That sounds hugely promising, but of course we had questions—namely, what are those challenges, how is this new institute going to solve them, and what are these to-be-created advanced robots actually going to do? And fortunately, IEEE Spectrum was able to speak with Marc Raibert himself to get a better understanding of what the Institute will be all about.


If we can start by looking back a little bit—what kind of company did you want Boston Dynamics to be when you founded it in 1992?

Marc Raibert: The truth is, at that point, it wasn’t going to be a robotics company at all. It was going to be a modeling and simulation company. I’d been a professor for about 15 years by then, and was really well funded (heavily by DARPA), but I wasn’t sure that the funding was going to continue. We’d produced some modeling and simulation results that seemed interesting, so I decided to start Boston Dynamics and see what it could be.

It took a while before we got back to robotics. Sony was really the trigger—we’d worked for them quietly for about five years and made a running AIBO which never saw the light of day, and then we worked on their little humanoid QRIO, building tools that made it possible to do choreography. So that was sort of the crossover, applying our modeling and simulation tools to Sony’s robots. And then we decided to write a proposal for BigDog, and the whole company changed almost immediately. It felt great to return to building machines, and I’ve never looked back.

Did you miss academia at all, or do you prefer the approach that you took with Boston Dynamics?

Raibert: Part of the idea for the Institute is to combine the best of the academic world and the best of the industrial lab world. Universities have these very creative, forward-looking people who frequently aren’t bothered by whatever the legacy solutions are. And they’re frequently going for blue sky research. An industrial lab has the kind of teamwork that, in my opinion, is really hard to find in an academic setting, along with schedules and budget discipline and a skilled staff who can be there gaining experience for decades. So when you combine those, I think that’s really a sweet spot. It’s how Boston Dynamics’ research works, and it’s what we’re going to try to do at the Institute.

If you’re going to try and look over the horizon rather than just advance things incrementally, you have to try wacky stuff.

How important do you think it is to make robots that are useful and practical?

Raibert: It’s not that we’re not worrying about eventually making things that are useful, but if you’re going to try and look over the horizon rather than just advance things incrementally, you have to try wacky stuff. So that’s part of the plan, to try things that don’t immediately seem practical.

For a while, I felt guilty about building one-legged hopping robots. On the one hand, it was technically interesting and different, but on the other hand, it was really hard to see how they could ever get to the point where they would be useful for anything. But the underpinnings of those one-legged hopping machines, focusing on the dynamics, I think really got Boston Dynamics to where it is today, where they’re making robots that are practical and useful and can do things that we would have never gotten to if we’d kept plodding along the way that other legged robots were at the time. I believe in the necessity of wandering the desert before you can get to a place where you’re making a practical, money-making thing.

We have to remove the pressure to make things more reliable, more manufacturable, and cheaper in the short term. Those are things that are important, but they’re in the way of trying new things. The pitch I made to Hyundai explicitly says that, and proposes funding that extends long enough that we’re not distracted in the short term.

Why is now the right time for this?

Raibert: Boston Dynamics is really starting to be successful doing commercial stuff, and that’s not my long suit. My long suit is to dream, and to do the long-term stuff. For a long time, Boston Dynamics was primarily doing that, and they’re still doing some really exciting long-term work, but I wanted to focus squarely on it.

I don’t think the lay public understands how stupid robots are compared to people.

Let’s talk about the four areas that the new Institute plans to focus on. What’s Cognitive AI, and why is it important?

Raibert: The new thing that’s clearly different from what Boston Dynamics is doing, is to make robots smarter, in the sense that they need to be able to look at the world around them and fundamentally understand what they’re seeing and what’s going on. Don’t get me wrong, this is currently science fiction, but I’ve learned that if you keep working on something like this long enough with enough resources, you may be able to make progress. So, I’d like to make a robot that you can take into a factory, where it watches a person doing a job, and figures out how to do that job itself. Right now, it takes a fleet of programmers even for simple tasks, and every new thing you want your robot to do is a lot of work. This has been clear for years, and I want to find a way to get past that. And I don’t think the lay public understands how stupid robots are compared to people—a person could come into my workshop and I could show them how to do almost any task, and within 15 minutes, they’d be doing it. Robots just aren’t anything like that… yet.

There are a lot of people making progress on problems like these in academia—are you hoping to bring them into the Institute, or support them directly in academia, or how do you picture this working?

Raibert: We’re in this airplane that hasn’t gotten off the ground yet, and we’re going to try everything. We are going to try to hire academics to come work for us—I have an academic background and so does Al Rizzi, my CTO, and while I had a happy time in academia, this is even better and I think we’ll find at least a few people who feel that way too. But we’re also going to have consultants from academia and industry, and we’ll fund some lab work. And of course we want people’s students, and I’d really like to get people with industrial experience as well.

I think something that happens sometimes in academia is that things stay on the blackboard for too long. We want to make room for as much theorizing as we need, but we also want to convert that into tangible demonstrations. I think the physicality is really important.

I also want to say that while we have defined the research areas that we’re going to focus on, they just came out of my head and the heads of a couple of other people here. But, the people that we hire are going to have their own ideas of what we should do and what the way forward is, and we absolutely want to count on that and have that be part of our culture. So, we’re trying to get this thing off the ground and flying with our ideas, but we really want to bring in people with ideas of their own.

The second and third technical areas that you’re planning to focus on are Athletic AI, and Organic Design. Can you tell us about those?

Raibert: Athletic AI is making your body work, through balance, energy conservation, maneuvering around obstacles or adversaries in real time, and even low-level navigation. We think that there’s still a lot of athletic progress to be made. And you know, Boston Dynamics is continuing to work on some very interesting stuff in that area, and I’m still Chairman of the Board over there and I still love that company, and we’re going to try and find paths that are supportive rather than conflicting. But we also have ideas for making advances in the physicality of the robots that we want to work on at the Institute.

Organic Design means mechanical hardware as well as electronics and computing. There, the idea is that in addition to having engineering teams to support our research, we want to use AI to help develop more futuristic designs. We think optimizing a hardware design can take advantage of a lot of different kinds of information, like simulation-based optimization and learning-based optimization, where there’s a lot of opportunity to do things that have never been done before to make the hardware stronger, lighter, more efficient, and maybe, someday, cheaper.

What do you feel is the right balance of focusing on hardware versus focusing on software for things like Athletic AI and Organic Design?

Raibert: At Boston Dynamics, it was a pretty even balance. We started out as being more controls and software focused, and we had a good hardware group, but it was a little more like university lab hardware. But Boston Dynamics has built up its hardware capability, and I think that’s really important. I think the idea that you’re going to have crummy hardware and have software make up for it might be okay for some mid-range products, but if you’re going to keep pushing the boundaries and achieve animal and human levels of athleticism and then exceed them, you want the hardware to be as absolutely great as you can make it, and there’s still a lot of opportunity to move that ahead. But the software can obviously do a huge amount, too. So it’ll be both sides catching up to each other, forever!

I used to be a guy who would get caught up with some new widget, like a new valve or a new kind of bearing or something. But the system engineering is vastly more important.

Really, it’s a holistic thing, and when I talk about Organic Design, I mean taking the software and the hardware into account at the same time—having one eye on the physics and what the controller has to do to deal with that, and one eye on the hardware which also has to deal with the physics, and growing those together. It’s system engineering. I used to be a guy who would get caught up with some new widget, like a new valve or a new kind of bearing or something. But the system engineering is vastly more important, and there’s so much optimization and improvement that can be done with the right combination of things, even if each individual component is a little less than perfect.

You’ve put so much work into Atlas at Boston Dynamics—will you be bringing that program with you to the Institute?

Raibert: No, we’re not going to. Boston Dynamics has a strong team working on Atlas, and wants to maintain their R&D ability, and so Boston Dynamics will continue with that. At some point the Institute may buy some Atlas robots and do some work with them.

The final area that the Institute will focus on is Ethics and Policy. Why does that deserve equal importance to the technical focus areas?

Raibert: If you look at even just the headlines about Boston Dynamics, there’s a lot of emotion there, and a lot of concern. I think it only makes sense if we’re going to be leaders in this area that we do some disciplined thinking about ethics, bringing in some outside people who perhaps aren’t as enthusiastic about the kinds of things that we’re doing as we are. But also, I think there’s a very positive story to the ethics of what we do, and we’ll try to articulate that as best we can.

There are four topics that always come to mind for me. One is the use of robots by the military, one is robots taking jobs, and one is killer robots (or robots that are intended to harm people without human-in-the-loop regulation), and one is the idea that robots will somehow take over the world against the will of human beings. I think the last two are where you get the least grounding in what’s really happening, and the others are works in progress. The military topic is a very complex thing, and with the jobs topic, yes, some people’s jobs will be done by robots. Other jobs that don’t yet exist will be created by robots. And robots will help people’s existing jobs become safer and easier. I hope we’re going to be open about all of these things—I’m not embarrassed about my opinions, and I think if we can have an open conversation, it’ll be good.

I think the Institute needs to be different and more open and more available, and certainly the best talent these days wants to be able to publish the big ideas they’re working on and we’re going to accommodate that.

How open will the Institute be with the research that it’s doing?

Raibert: We’ll be more open than Boston Dynamics has been with respect to working with universities and with publishing. I don’t blame anyone but myself for that; I wasn’t much of a collaborator in the early days of Boston Dynamics, and didn’t really want to show the public too much too early. I think the Institute needs to be different and more open and more available, and certainly the best talent these days wants to be able to publish the big ideas they’re working on and we’re going to accommodate that.

You mention caring for people and helping people live better lives as things that you hope robots will be able to do. What’s the path towards making robots that are dynamic and capable, but also safe for humans to be around?

Raibert: I’m of two minds. The very athletic robots are the hardest to make safe, but I think that there are going to be a lot of useful things that those robots will do where they aren’t safe enough for people to be around. We should keep working on those things. But on the other hand, a robot that isn’t dynamic at all is really hard to make useful. It’s a tough problem, and I’m sure there are paths that we haven’t thought of yet to make things safer. So I don’t know what the answers are, but we’re going to see what we can do.

With this long-term vision that you have for the Institute, how will you measure success? What will make you feel like the Institute did what it was supposed to do?

Raibert: That’s a good question. One indication of success will be that good people want to join us and work there. So far, the people I have talked to have been very interested, so I’m optimistic. Two other important measures in the past have been: do our funders keep funding us, and how many views do we get on YouTube! YouTube really changed everything—if my career had been based on writing papers with lots of equations and plots, I don’t think anybody would have ever cared. But the fact that we could visualize what we were thinking, and where we thought we could go, had a big impact on the work that we did.

Can you elaborate a little bit on why making YouTube videos is so important?

Raibert: The very first BigDog video, we put on our website, but not on YouTube. We didn’t know about YouTube at that time, and someone else posted it on YouTube instead. And then my partner Rob [Playter, now the CEO of Boston Dynamics] and I went to DARPA’s 50th anniversary dinner [in 2008]. We were just contractors, but we decided to introduce ourselves to Tony Tether, the head of DARPA. We said, “we’re from the company that makes BigDog,” and immediately he says, “BigDog, three and a half million YouTube views!” And we realized, oh, this matters!

As an academic, I had totally resisted the media, and I thought it was unseemly to appeal to the media. But once Boston Dynamics was commercial, and we wanted to sell projects and sell machines, we found out the value of having people know who we were. YouTube helped Boston Dynamics to be widely known around the world, without any marketing budget. And it’s fun to get recognition for your work!

Raibert: Our top level mission is to help Hyundai embrace these new technological areas. I think they made a bold move by buying Boston Dynamics to help with that, and they’re also working on a new Global Software Center. They’re really thinking big about these long-term technologies, even beyond automobiles as mobility in general goes through a transition. As the Institute gets a little further along, I think some of our work will get done jointly with Hyundai, and some of our people will help Hyundai with things they want to develop. Urban air mobility might be an area where there’s some crossover, for example.

I sincerely believe that having a different paradigm, where we don’t focus on a product that we’re going to launch in a couple of years with all the incremental work it takes to make that happen, is really going to be an asset for the field.

How far will this go? Will the Institute be doing entirely basic research, or will you also be working towards productization?

Raibert: My hope is that we can follow in the footsteps of the Broad Institute, the Whitehead Institute, the Max Planck Institute, and even the old Bell Labs. But I’m afraid of the impact of focusing on practical applications. Getting Spot to go all the way through to adoption in a routine utility environment, for example, is a lot of work, and I don’t want the Institute to get sidetracked too much. We’re all for other groups, like Boston Dynamics or other Hyundai organizations or maybe outside organizations, taking our technology and doing something practical. We may do some spinouts, where people at the Institute create a new company and we help them along. But I would rather that be a separate thing. I sincerely believe that having a different paradigm, where we don’t focus on a product that we’re going to launch in a couple of years with all the incremental work it takes to make that happen, is really going to be an asset for the field.

Does this feel more like the end of something for you, or the beginning of something?

Raibert: Totally the beginning! I’ve been working on setting this up for long enough that there’s no tears about the transition. Boston Dynamics is firing on all cylinders, I get to be on the board, I still have a badge so I can go in if I want, it’s great. And the Institute is off to a great start with remarkable support from Hyundai. I’m at the age where I should be retiring, but I’m not going to—this is better!


Emmy Award Winner’s Algorithms Bring High-Quality Video to Your TV

So where will we turn for future scaling? We will continue to look to the third dimension. We’ve created experimental devices that stack atop each other, delivering logic that is 30 to 50 percent smaller. Crucially, the top and bottom devices are of the two complementary types, NMOS and PMOS, that are the foundation of all the logic circuits of the last several decades. We believe this 3D-stacked complementary metal-oxide semiconductor (CMOS), or CFET (complementary field-effect transistor), will be the key to extending Moore’s Law into the next decade.

The Evolution of the Transistor

Continuous innovation is an essential underpinning of Moore’s Law, but each improvement comes with trade-offs. To understand these trade-offs and how they’re leading us inevitably toward 3D-stacked CMOS, you need a bit of background on transistor operation.

Every metal-oxide-semiconductor field-effect transistor, or MOSFET, has the same set of basic parts: the gate stack, the channel region, the source, and the drain. The source and drain are chemically doped to make them both either rich in mobile electrons (
n-type) or deficient in them (p-type). The channel region has the opposite doping to the source and drain.

In the planar version in use in advanced microprocessors up to 2011, the MOSFET’s gate stack is situated just above the channel region and is designed to project an electric field into the channel region. Applying a large enough voltage to the gate (relative to the source) creates a layer of mobile charge carriers in the channel region that allows current to flow between the source and drain.

As we scaled down the classic planar transistors, what device physicists call short-channel effects took center stage. Basically, the distance between the source and drain became so small that current would leak across the channel when it wasn’t supposed to, because the gate electrode struggled to deplete the channel of charge carriers. To address this, the industry moved to an entirely different transistor architecture called a
FinFET. It wrapped the gate around the channel on three sides to provide better electrostatic control.

Intel introduced its FinFETs in 2011, at the 22-nanometer node, with the third-generation Core processor, and the device architecture has been the workhorse of Moore’s Law ever since. With FinFETs, we could operate at a lower voltage and still have less leakage, reducing power consumption by some 50 percent at the same performance level as the previous-generation planar architecture. FinFETs also switched faster, boosting performance by 37 percent. And because conduction occurs on both vertical sides of the “fin,” the device can drive more current through a given area of silicon than can a planar device, which only conducts along one surface.

However, we did lose something in moving to FinFETs. In planar devices, the width of a transistor was defined by lithography, and therefore it is a highly flexible parameter. But in FinFETs, the transistor width comes in the form of discrete increments—adding one fin at a time–a characteristic often referred to as fin quantization. As flexible as the FinFET may be, fin quantization remains a significant design constraint. The design rules around it and the desire to add more fins to boost performance increase the overall area of logic cells and complicate the stack of interconnects that turn individual transistors into complete logic circuits. It also increases the transistor’s capacitance, thereby sapping some of its switching speed. So, while the FinFET has served us well as the industry’s workhorse, a new, more refined approach is needed. And it’s that approach that led us to the 3D transistors we’re introducing soon.

A blue block pierced by three gold-coated ribbons all atop a thicker grey block.In the RibbonFET, the gate wraps around the transistor channel region to enhance control of charge carriers. The new structure also enables better performance and more refined optimization. Emily Cooper

This advance, the RibbonFET, is our first new transistor architecture since the FinFET’s debut 11 years ago. In it, the gate fully surrounds the channel, providing even tighter control of charge carriers within channels that are now formed by nanometer-scale ribbons of silicon. With these nanoribbons (also called
nanosheets), we can again vary the width of a transistor as needed using lithography.

With the quantization constraint removed, we can produce the appropriately sized width for the application. That lets us balance power, performance, and cost. What’s more, with the ribbons stacked and operating in parallel, the device can drive more current, boosting performance without increasing the area of the device.

We see RibbonFETs as the best option for higher performance at reasonable power, and we will be introducing them in 2024 along with other innovations, such as PowerVia, our version of
backside power delivery, with the Intel 20A fabrication process.

Stacked CMOS

One commonality of planar, FinFET, and RibbonFET transistors is that they all use CMOS technology, which, as mentioned, consists of n-type (NMOS) and p-type (PMOS) transistors. CMOS logic became mainstream in the 1980s because it draws significantly less current than do the alternative technologies, notably NMOS-only circuits. Less current also led to greater operating frequencies and higher transistor densities.

To date, all CMOS technologies place the standard NMOS and PMOS transistor pair side by side. But in a
keynote at the IEEE International Electron Devices Meeting (IEDM) in 2019, we introduced the concept of a 3D-stacked transistor that places the NMOS transistor on top of the PMOS transistor. The following year, at IEDM 2020, we presented the design for the first logic circuit using this 3D technique, an inverter. Combined with appropriate interconnects, the 3D-stacked CMOS approach effectively cuts the inverter footprint in half, doubling the area density and further pushing the limits of Moore’s Law.

Two blue blocks stacked atop each other. Each is pierced through by gold coated ribbons.3D-stacked CMOS puts a PMOS device on top of an NMOS device in the same footprint a single RibbonFET would occupy. The NMOS and PMOS gates use different metals.Emily Cooper

Taking advantage of the potential benefits of 3D stacking means solving a number of process integration challenges, some of which will stretch the limits of CMOS fabrication.

We built the 3D-stacked CMOS inverter using what is known as a self-aligned process, in which both transistors are constructed in one manufacturing step. This means constructing both
n-type and p-type sources and drains by epitaxy—crystal deposition—and adding different metal gates for the two transistors. By combining the source-drain and dual-metal-gate processes, we are able to create different conductive types of silicon nanoribbons (p-type and n-type) to make up the stacked CMOS transistor pairs. It also allows us to adjust the device’s threshold voltage—the voltage at which a transistor begins to switch—separately for the top and bottom nanoribbons.

How do we do all that? The self-aligned 3D CMOS fabrication begins with a silicon wafer. On this wafer, we deposit repeating layers of silicon and silicon germanium, a structure called a superlattice. We then use lithographic patterning to cut away parts of the superlattice and leave a finlike structure. The superlattice crystal provides a strong support structure for what comes later.

Next, we deposit a block of “dummy” polycrystalline silicon atop the part of the superlattice where the device gates will go, protecting them from the next step in the procedure. That step, called the vertically stacked dual source/drain process, grows phosphorous-doped silicon on both ends of the top nanoribbons (the future NMOS device) while also selectively growing boron-doped silicon germanium on the bottom nanoribbons (the future PMOS device). After this, we deposit dielectric around the sources and drains to electrically isolate them from one another. The latter step requires that we then polish the wafer down to perfect flatness.

Gold columns are bridged by a purple polygon and a green one. A rectangle bisects the polygon. It's pink on top and yellow on the bottom.An edge-on view of the 3D stacked inverter shows how complicated its connections are. Emily Cooper

Blue, pink and green rectangles representing different parts of transistors are arranged in a larger circuit on the left and one half the size on the right.By stacking NMOS on top of PMOS transistors, 3D stacking effectively doubles CMOS transistor density per square millimeter, though the real density depends on the complexity of the logic cell involved. The inverter cells are shown from above indicating source and drain interconnects [red], gate interconnects [blue], and vertical connections [green].

Finally, we construct the gate. First, we remove that dummy gate we’d put in place earlier, exposing the silicon nanoribbons. We next etch away only the silicon germanium, releasing a stack of parallel silicon nanoribbons, which will be the channel regions of the transistors. We then coat the nanoribbons on all sides with a vanishingly thin layer of an insulator that has a high dielectric constant. The nanoribbon channels are so small and positioned in such a way that we can’t effectively dope them chemically as we would with a planar transistor. Instead, we use a property of the metal gates called the work function to impart the same effect. We surround the bottom nanoribbons with one metal to make a
p-doped channel and the top ones with another to form an n-doped channel. Thus, the gate stacks are finished off and the two transistors are complete.

The process might seem complex, but it’s better than the alternative—a technology called sequential 3D-stacked CMOS. With that method, the NMOS devices and the PMOS devices are built on separate wafers, the two are bonded, and the PMOS layer is transferred to the NMOS wafer. In comparison, the self-aligned 3D process takes fewer manufacturing steps and keeps a tighter rein on manufacturing cost, something we demonstrated in research and reported at IEDM 2019.

Importantly, the self-aligned method also circumvents the problem of misalignment that can occur when bonding two wafers. Still, sequential 3D stacking is being explored to facilitate integration of silicon with nonsilicon channel materials, such as germanium and III-V semiconductor materials. These approaches and materials may become relevant as we look to tightly integrate optoelectronics and other functions on a single chip.

Making all the needed connections to 3D-stacked CMOS is a challenge. Power connections will need to be made from below the device stack. In this design, the NMOS device [top] and PMOS device [bottom] have separate source/drain contacts, but both devices have a gate in common. Emily Cooper

The new self-aligned CMOS process, and the 3D-stacked CMOS it creates, work well and appear to have substantial room for further miniaturization. At this early stage, that’s highly encouraging. Devices having a gate length of 75 nm demonstrated both the low leakage that comes with excellent device scalability and a high on-state current. Another promising sign: We’ve made wafers where the smallest distance between two sets of stacked devices is only
55 nm. While the device performance results we achieved are not records in and of themselves, they do compare well with individual nonstacked control devices built on the same wafer with the same processing.

In parallel with the process integration and experimental work, we have many theoretical, simulation, and design studies underway to provide insight into how best to use 3D CMOS. Through these, we’ve found some of the key considerations in the design of our transistors. Notably, we now know that we need to optimize the vertical spacing between the NMOS and PMOS devices: if it’s too small, the parasitic capacitance between them increases, and if it’s too large, the resistance of the interconnects joining the two devices increases. Either extreme results in slower circuits that consume more power.
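That trade-off can be illustrated with a toy RC-delay model. The scaling laws and constants below are invented for illustration, not taken from any real process: the sketch only shows that when a capacitance term falls with spacing and a resistance term grows with it, the delay curve has an interior minimum between the two extremes.

```python
# Toy RC-delay model of NMOS-PMOS vertical spacing (all units arbitrary).
# Assumed scaling: parasitic capacitance between the stacked devices falls
# roughly as 1/d (parallel-plate-like), while the resistance of the via
# joining them grows roughly as d. Delay ~ R * C picks up one term of each,
# so there is a sweet spot between the two extremes.

def delay(d, r_via=1.0, c_par=1.0, r_fixed=5.0, c_fixed=5.0):
    """Hypothetical stage delay versus vertical spacing d (d > 0)."""
    r = r_fixed + r_via * d      # interconnect resistance grows with spacing
    c = c_fixed + c_par / d      # parasitic capacitance shrinks with spacing
    return r * c

spacings = [0.2, 0.5, 1.0, 2.0, 5.0]
delays = [delay(d) for d in spacings]
best = min(zip(delays, spacings))

print("delay vs spacing:", [(d, round(t, 2)) for d, t in zip(spacings, delays)])
print(f"best spacing in sweep: {best[1]}")
```

With these symmetric toy constants the delay is worst at both the smallest and largest spacings and minimized in the middle of the sweep, which is the qualitative behavior the design studies have to navigate.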

Many design studies, such as one by TEL Research Center America presented at IEDM 2021, focus on providing all the necessary interconnects in the 3D CMOS’s limited space without significantly increasing the area of the logic cells they make up. The TEL research showed that there are many opportunities for innovation in finding the best interconnect options. It also highlights that 3D-stacked CMOS will need interconnects both above and below the devices. One such scheme, called buried power rails, takes the interconnects that provide power to logic cells but don’t carry data and relocates them to the silicon below the transistors. Intel’s PowerVia technology, which does just that and is scheduled for introduction in 2024, will therefore play a key role in making 3D-stacked CMOS a commercial reality.

The Future of Moore’s Law

With RibbonFETs and 3D CMOS, we have a clear path to extend Moore’s Law beyond 2024. In a 2005 interview in which he was asked to reflect on what became his law, Gordon Moore admitted to being “periodically amazed at how we’re able to make progress. Several times along the way, I thought we reached the end of the line, things taper off, and our creative engineers come up with ways around them.”

With the move to FinFETs, the ensuing optimizations, and now the development of RibbonFETs and eventually 3D-stacked CMOS, supported by the myriad packaging enhancements around them, we’d like to think Mr. Moore will be amazed yet again.
