Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them
Jan. 12, 2022 — A University of Melbourne-led team has perfected a technique for embedding single atoms in a silicon wafer one-by-one. Their technology offers the potential to make quantum computers using the same methods that have given us cheap and reliable conventional devices containing billions of transistors.
“We could ‘hear’ the electronic click as each atom dropped into one of 10,000 sites in our prototype device. Our vision is to use this technique to build a very, very large-scale quantum device,” says Professor David Jamieson of The University of Melbourne, lead author of the Advanced Materials paper describing the process.
His co-authors are from UNSW Sydney, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Leibniz Institute of Surface Engineering (IOM), and RMIT Microscopy and Microanalysis Facility.
“We believe we ultimately could make large-scale machines based on single atom quantum bits by using our method and taking advantage of the manufacturing techniques that the semiconductor industry has perfected,” he says.
Until now, implanting atoms in silicon has been a haphazard process: a silicon chip is showered with phosphorus ions, which implant in a random pattern, like raindrops on a window.
“We embedded phosphorus ions, precisely counting each one, in a silicon substrate, creating a qubit ‘chip’ which can then be used in lab experiments to test designs for large-scale devices.”
“This will allow us to engineer the quantum logic operations between large arrays of individual atoms, retaining highly accurate operations across the whole processor,” says UNSW’s Scientia Professor Andrea Morello, a joint author of the paper. “Instead of implanting many atoms in random locations and selecting the ones that work best, they will now be placed in an orderly array, similar to the transistors in conventional semiconductor computer chips.”
“We used advanced technology developed for sensitive X-ray detectors and a special atomic force microscope originally developed for the Rosetta space mission along with a comprehensive computer model for the trajectory of ions implanted into silicon, developed in collaboration with our colleagues in Germany,” says Dr. Alexander (Melvin) Jakob, first author of the paper, also from the University of Melbourne.
This new technique can create large-scale patterns of counted atoms, controlled so that their quantum states can be manipulated, coupled and read out.
The technique developed by Professor Jamieson and his colleagues takes advantage of the precision of the atomic force microscope, which has a sharp cantilever that gently ‘touches’ the surface of a chip with a positioning accuracy of just half a nanometre, about the same as the spacing between atoms in a silicon crystal.
The team drilled a tiny hole in this cantilever, so that when it was showered with phosphorus atoms one would occasionally drop through the hole and embed in the silicon substrate.
The key, however, was knowing precisely when one atom—and no more than one—had become embedded in the substrate. Then the cantilever could move to the next precise position on the array.
The team discovered that the kinetic energy of the atom as it plows into the silicon crystal and dissipates its energy by friction can be exploited to make a tiny electronic ‘click’.
That is how they know an atom has embedded in the silicon, and that it is time to move to the next precise position.
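The count-and-step procedure described above can be sketched in a few lines. This is an illustrative simulation only: `detector_click()` and the Poisson-style arrival probability are hypothetical stand-ins for the real on-chip detector electronics and AFM stage control, not the paper's physics model.

```python
import random

def detector_click(p_arrival=0.2):
    """Simulate whether a single ion dropped through the cantilever
    aperture and embedded during this exposure window (hypothetical)."""
    return random.random() < p_arrival

def implant_array(n_sites, rng_seed=42):
    """Step the cantilever across n_sites, waiting for exactly one
    'click' (one embedded ion) per site before moving on."""
    random.seed(rng_seed)
    exposures = []
    for site in range(n_sites):
        windows = 0
        while True:                 # shower ions until one drops through
            windows += 1
            if detector_click():    # 'click': a single ion has embedded
                break
        exposures.append(windows)   # then advance to the next array site
    return exposures

counts = implant_array(10)
print(len(counts), "sites implanted, one ion each")
```

The key design point the sketch captures is that the detector signal, not a timer, gates the cantilever's advance, which is what makes the doping deterministic rather than statistical.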
“One atom colliding with a piece of silicon makes a very faint click, but we have invented very sensitive electronics to detect that click; it is much amplified and gives a loud and reliable signal,” says Professor Jamieson.
“That allows us to be very confident of our method. We can say, ‘Oh, there was a click. An atom just arrived.’ Now we can move the cantilever to the next spot and wait for the next atom.”
“With our Centre partners, we have already produced ground-breaking results on single atom qubits made with this technique, but the new discovery will accelerate our work on large-scale devices,” he says.
What is quantum computing and why is it important?
Quantum computers perform calculations by using the varied states of single atoms in the way that conventional computers use bits—the most basic unit of digital information.
But whereas a bit has only two possible values—1 or 0, true or false—a quantum bit, or qubit, can be placed in a superposition of 0 and 1. Pairs of qubits can be placed in even more peculiar superposition states, such as “01 plus 10,” called entangled states. Adding even more qubits creates an exponentially growing number of entangled states, which constitute a powerful computer code that does not exist in classical computers. This exponential density of information is what gives quantum processors their computational advantage.
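The exponential growth described above can be made concrete with a small sketch (an illustration, not from the article): an n-qubit state is a vector of 2^n complex amplitudes, and the "01 plus 10" entangled pair from the text is one such vector.

```python
import math

def n_amplitudes(n_qubits):
    """A register of n qubits is described by 2**n complex amplitudes."""
    return 2 ** n_qubits

# The entangled state "01 plus 10", normalised: equal amplitude on |01>
# and |10>, zero on |00> and |11>. Basis order: 00, 01, 10, 11.
bell = [0.0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]

assert len(bell) == n_amplitudes(2)
assert abs(sum(a * a for a in bell) - 1.0) < 1e-12  # properly normalised
print(n_amplitudes(50))  # 50 qubits already need ~10**15 amplitudes
```

Storing the full amplitude vector is exactly what becomes infeasible for a classical machine as qubits are added, which is the source of the quantum processor's claimed advantage.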
This basic quantum mechanical oddness has great potential to create computers capable of solving certain computational problems that conventional computers would find impossible due to their complexity.
Practical applications include new ways of optimizing timetables and finances, unbreakable cryptography and computational drug design, maybe even the rapid development of new vaccines.
“If you wanted to calculate the structure of the caffeine molecule, a very important molecule for physics, you can’t do it with a classical computer because there are too many electrons,” says Professor Jamieson.
“All these electrons obey quantum physics and the Schrödinger equation. But if you’re going to calculate the structure of that molecule, there are so many electron-electron interactions, even the most powerful supercomputers in the world today can’t do it.
“A quantum computer could do that, but you need many qubits because you’ve got to correct random errors and run a very complicated computer code.”
Silicon chips containing arrays of single dopant atoms can be the material of choice for classical and quantum devices that exploit single donor spins. For example, group-V donors implanted in isotopically purified Si crystals are attractive for large-scale quantum computers. Useful attributes include long nuclear and electron spin lifetimes of P, hyperfine clock transitions in Bi or electrically controllable Sb nuclear spins.
Promising architectures require the ability to fabricate arrays of individual near-surface dopant atoms with high yield. Here, an on-chip detector electrode system with 70 eV root-mean-square noise (≈20 electrons) is employed to demonstrate near-room-temperature implantation of single 14 keV P+ ions.
The physics model for the ion–solid interaction shows an unprecedented upper-bound single-ion-detection confidence of 99.85 ± 0.02% for near-surface implants. As a result, the practical controlled silicon doping yield is limited by materials engineering factors including surface gate oxides in which detected ions may stop.
For a device with a 6 nm gate oxide and 14 keV P+ implants, a yield limit of 98.1% is demonstrated. Thinner gate oxides allow this limit to converge to the upper bound. Deterministic single-ion implantation can therefore be a viable materials engineering strategy for scalable dopant architectures in silicon devices.
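A back-of-envelope check of the figures quoted above (an illustration, not the paper's analysis): assuming independent sites, the 98.1% per-site yield and the 99.85% detection-confidence bound translate directly into expected site counts for the 10,000-site prototype array mentioned earlier.

```python
yield_per_site = 0.981   # yield limit quoted for 6 nm gate oxide, 14 keV P+
n_sites = 10_000         # size of the prototype array from the article

expected_good = round(yield_per_site * n_sites)
print(expected_good, "sites expected to hold their dopant")  # 9810

# The 99.85% detection confidence similarly bounds how many detected
# 'clicks' could be false single-ion arrivals across the array:
confidence = 0.9985
print(round((1 - confidence) * n_sites), "uncertain detections")  # 15
```

The gap between the two numbers is the point of the abstract's final sentences: detection itself is nearly perfect, so yield is limited by materials factors such as ions stopping in the gate oxide.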
More information: Alexander M. Jakob et al, Deterministic Shallow Dopant Implantation in Silicon with Detection Confidence Upper‐Bound to 99.85% by Ion–Solid Interactions, Advanced Materials (2021). DOI: 10.1002/adma.202103235
© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication