Posted by Eric Klien
During the last few years, the semiconductor industry has had a harder and harder time miniaturizing transistors, the latest problem being Intel’s delayed roll-out of its new 14 nm process. The best way to confirm this slowdown in the progress of computing power is to try to run your current programs on a 6-year-old computer. You will likely have few problems, since computers have not sped up greatly during the past 6 years. If you had tried this experiment a decade ago, you would have found a 6-year-old computer close to useless, as Intel and others were then achieving much greater performance gains per year than they are today.
Many are unaware of this problem because improvements in software, along with the current trend of having software rely on specialized GPUs instead of CPUs, have made this slowdown in performance gains less evident to the end user. (The more specialized a chip is, the faster it runs.) But despite such workarounds, people are already changing their habits, such as upgrading their personal computers less often. Recently, people upgraded their ancient Windows XP machines only because Microsoft forced them to by discontinuing support for the still-popular operating system. (Windows XP was the second most popular desktop operating system in the world the day after Microsoft ended all support for it. At that point it was a 12-year-old operating system.)
It would be unlikely that AIs would become as smart as us by 2029, as Ray Kurzweil has predicted, if we depended on Moore’s Law alone to create the hardware for AIs to run on. But all is not lost. Previously, electromechanical technology gave way to relays, then to vacuum tubes, then to solid-state transistors, and finally to today’s integrated circuits. One possibility for the sixth paradigm to provide exponential growth of computing has been to go from 2D integrated circuits to 3D integrated circuits. There have been small incremental steps in this direction; for example, Intel introduced 3D tri-gate transistors with its first 22 nm chips in 2012. While these chips were slightly taller than the previous generation, this technology did not deliver great performance gains. (Intel is simply making its transistors taller and thinner. It is not stacking such transistors on top of each other.)
But quietly this year, 3D technology has finally taken off. The recently released Samsung 850 Pro, which uses 42 nm flash memory, is competitive with rival products that use 19 nm flash memory. Considering that, on a conventional flat chip, a 42 nm cell is (42 × 42) / (19 × 19) ≈ 4.9 times as big, so a wafer yields 4.9 times fewer cells, how did Samsung pull this off? They used their new 3D V-NAND architecture, which stacks 32 cell layers on top of one another. It wouldn’t be that hard for them to go from 32 layers to 64, then to 128, and so on. Expect flash drives to have greater capacity than hard drives in a couple of years! (Hard drives are running into their own form of an end-of-Moore’s-Law situation.) Note that by using 42 nm flash memory instead of 19 nm flash memory, Samsung is able to use bigger cells that can handle more read and write cycles.
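The arithmetic above can be checked with a quick back-of-the-envelope calculation (the process nodes and layer count are the figures quoted in this post):

```python
# Back-of-the-envelope check of the V-NAND density argument (figures from the post).
PLANAR_NODE_NM = 19   # competing planar NAND process
VNAND_NODE_NM = 42    # process used by the Samsung 850 Pro's V-NAND
LAYERS = 32           # cell layers stacked vertically in V-NAND

# The footprint of one cell scales roughly with the square of the feature size.
area_ratio = (VNAND_NODE_NM ** 2) / (PLANAR_NODE_NM ** 2)
print(f"A 42 nm cell occupies ~{area_ratio:.1f}x the area of a 19 nm cell")

# Stacking 32 layers more than offsets the larger per-cell footprint.
effective_gain = LAYERS / area_ratio
print(f"Net density advantage of 32-layer V-NAND: ~{effective_gain:.1f}x")
```

So even with cells nearly five times larger in area, stacking 32 layers leaves Samsung with roughly a 6.5× density advantage over a single-layer 19 nm design, which is why doubling the layer count again and again looks so promising.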
Samsung is not the only one with this 3D idea. For example, Intel has announced that it will be producing its own 32-layer 3D NAND chips in 2015. And 3D integrated circuits are, of course, not the only potential solution to the end of Moore’s Law. For example, Google is getting into the quantum computer business, which is another possible solution. But there is a huge difference between a theoretical solution that is being tested in a lab somewhere and something that you can buy on Amazon today.
Finally, to give you an idea of how fast things are progressing: a couple of months ago Samsung’s best technology was based on 24-layer 3D MLC chips, and now Samsung has already announced that it is mass producing 32-layer 3D TLC chips, which store three bits per cell instead of the two bits per cell of the 32-layer 3D MLC chips currently used in the Samsung 850 Pro.
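For reference, the per-cell capacity step from MLC to TLC follows directly from the standard bit counts for NAND cell types:

```python
# Standard bits-per-cell figures for NAND flash cell types.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3}

# Going from MLC to TLC at the same layer count raises per-cell capacity
# by a factor of 3/2, i.e. 50% more data per cell.
gain = bits_per_cell["TLC"] / bits_per_cell["MLC"]
print(f"TLC stores {gain:.1f}x as much data per cell as MLC")
```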
The Singularity is near!
Source: Lifeboat Foundation