Scales can be fascinating. Here I am not referring to fishy bits but to computing bits, both physical and computational.
On the physical side, I recently discovered a Powers of Ten redux video on YouTube, Powers of Ten 2. The video starts with a couple having a picnic lunch on a beach near Sicily and expands through the planets and galaxies to reach the cosmic microwave background. In a similar spirit, there is an interactive website called Scale of the Universe 2 that lets a user zoom from familiar things like people and dinosaurs down to atoms, neutrinos, quarks, and the Planck length. On this spectrum, the website shows a "transistor gate" at 25 nanometers (25nm). I cannot find a date on the site to know when it was prepared, but that information is now long stale.

As I write in January 2022, Advanced Micro Devices (AMD) is shipping products in large volume using 6nm technology from Taiwan Semiconductor Manufacturing Company (TSMC). The next AMD products will use 5nm technology, and Apple is already using that (TSMC 5nm) for its chips. TSMC has already announced plans for 3nm and 2nm technology. Intel, despite its troubles over the last several years, is projecting products built at 5nm and below. When I started my engineering schooling (Purdue University and UC-Berkeley), we were grappling with the emerging opportunities of 1 micron technology, that is, 1000nm; we are now close to 1nm.

It is becoming inconvenient to talk about these sizes; no one wants to say 1/4nm or 0.25nm, so Intel has recently switched to angstroms as the unit of measure. Therefore, Intel talks about 20A (2nm) and 5A (0.5nm) as future technologies. In the past, engineers would compare transistor sizes to the thickness of a human hair, but now we must compare them to the size of atoms. In a silicon crystal (the material used to manufacture chips), the interatomic spacing is about 3A, or one third of a nanometer. Therefore, speaking loosely, a 3nm transistor is about 10 atoms across.
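To make that unit arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. It only converts between nanometers and angstroms and divides by the roughly 3A silicon spacing quoted above; it also treats node names (25nm, 6nm, 3nm, and so on) as literal dimensions, which, as noted, is speaking loosely.

```python
# Rough scale arithmetic, not authoritative: assumes the ~3 angstrom
# silicon interatomic spacing quoted above and treats node names as
# literal feature sizes.

ANGSTROMS_PER_NM = 10                        # 1 nm = 10 angstroms
SILICON_SPACING_NM = 3 / ANGSTROMS_PER_NM    # ~0.3 nm between silicon atoms

for node_nm in (1000, 25, 6, 5, 3, 2, 0.5):  # 1 micron down to 5A
    angstroms = node_nm * ANGSTROMS_PER_NM
    atoms_across = node_nm / SILICON_SPACING_NM
    print(f"{node_nm:>6} nm = {angstroms:>6.0f} A ~ {atoms_across:,.0f} atoms across")
```

Run it and the 3nm line comes out at about 10 atoms across, matching the loose estimate above.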
On the computing side, we used to build supercomputers as very large single computers. Somewhere in the 1990s, the "wolfpack" approach of clustering smaller computers took over the supercomputer world. Instead of building a single computer that ran faster and faster, we would partition the computational work across "clusters" of small computers. Working together and communicating, the many small computers could solve problems faster than the biggest single computer. Today, thousands of small computers (each more powerful than the single supercomputers of old) solve enormous problems. The race for the fastest computer in the world is tracked by the "Top 500" list, maintained by a research group at the University of Tennessee. As I write, the fastest computer in the world is in Japan and is measured at 400-some petaflops; that is, more than 400 x 10^15 floating-point operations per second (FLOPS). A new supercomputer called Frontier is being built in Tennessee at the Oak Ridge National Laboratory (ORNL). It will run at 1.5 exaflops, or 1.5 x 10^18 FLOPS, over three times faster than the current record-holder. There are rumors of similar supercomputers in China (PRC), but no one has published data to confirm this.

When I started engineering school, the fast computers were measured in MIPS (millions of instructions per second) or megaflops (10^6 floating-point operations per second). In another year or so, the El Capitan supercomputer at Lawrence Livermore National Laboratory (LLNL) will turn on and deliver even more, probably more than 2 exaflops. (If the step from 1.5 to 2 sounds unimpressive, recall that the fastest documented supercomputer today is only about 0.5 exaflops.)
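The FLOPS comparisons are the same kind of back-of-the-envelope arithmetic. The short Python sketch below uses the approximate figures quoted above ("400-some" petaflops for today's record-holder, 1.5 exaflops planned for Frontier, more than 2 exaflops projected for El Capitan); these are round numbers for illustration, not official Top 500 benchmark results.

```python
# Rough speedup ratios using the approximate figures quoted above
# (round numbers for illustration, not official benchmark results).

PETAFLOP = 1e15
EXAFLOP = 1e18

current_fastest = 450 * PETAFLOP   # "400-some" petaflops, the machine in Japan
frontier = 1.5 * EXAFLOP           # planned for ORNL's Frontier
el_capitan = 2.0 * EXAFLOP         # projected for LLNL's El Capitan

print(f"Frontier vs. today's fastest:   {frontier / current_fastest:.1f}x")
print(f"El Capitan vs. today's fastest: {el_capitan / current_fastest:.1f}x")
```

The first ratio comes out a bit over 3x, which is where the "over three times faster" claim above comes from.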
The fastest computer in the world in 1975 was the Cray-1, pictured below. It achieved about 160 megaflops. I took this photo at the Supercomputing Conference in 2018.
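To close the loop on the Cray-1, one last tiny calculation compares its roughly 160 megaflops to Frontier's planned 1.5 exaflops; the span works out to a factor of roughly ten billion.

```python
# Cray-1 (about 160 megaflops) versus Frontier (planned 1.5 exaflops).
cray_1 = 160e6      # ~160 x 10^6 FLOPS
frontier = 1.5e18   # ~1.5 x 10^18 FLOPS

print(f"Frontier / Cray-1 ~ {frontier / cray_1:.1e}x")  # roughly 9 x 10^9
```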