Discussion about this post

Tanj:

Lucid! I had not previously noticed the problem of decreasing gate voltage colliding with the immovable subthreshold slope as one of the reasons scaling ended. Interesting that several limits all coincided at about the same scale.

Carver Mead in the mid-1960s presented a comprehensive description of the scaling of semiconductor devices and published a prediction for sub-micron devices (expected to be the limit of lithography). Dennard reduced that general case to a concise set of rules just for CMOS, which provided a road ahead, with constant power per area, that everyone could drive. Sometimes less is Moore.

Peter W.:

Thanks Vikram! In addition to the points you covered here so nicely, another factor that prevents true Dennard scaling is quantum mechanical phenomena, especially tunneling, which become more and more important as the distances across which gate currents flow decrease. Once gate thickness approached a few nanometers, leakage currents became significant. All this is in addition to the difficulty of delivering power through more and ever-smaller vias, and the other factors you already described. All of these contribute to more heat, something semiconductors, especially at nanometer scale, really don't like.
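(To make the subthreshold-slope point above concrete: a rough back-of-the-envelope sketch, not from the post. It uses the textbook relation that subthreshold swing is about 60 mV/decade at room temperature regardless of geometry, so off-state leakage grows exponentially as threshold voltage is scaled down with the supply. The function names and the ideality factor `n` are illustrative assumptions.)

```python
import math

K_B_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge, in V/K


def subthreshold_swing(temp_k=300.0, n=1.0):
    """Subthreshold swing S = n * ln(10) * kT/q, in volts per decade.

    Note that S depends only on temperature (and the ideality factor n),
    not on device size -- this is why it cannot be scaled down.
    """
    return n * math.log(10) * K_B_OVER_Q * temp_k


def relative_off_current(vth, temp_k=300.0, n=1.0):
    """Off-state leakage relative to a device with Vth = 0: I_off ~ 10^(-Vth/S)."""
    return 10 ** (-vth / subthreshold_swing(temp_k, n))


# S is roughly 60 mV/decade at 300 K:
s = subthreshold_swing()

# Halving Vth from 0.6 V to 0.3 V multiplies leakage by ~10^5:
leakage_ratio = relative_off_current(0.3) / relative_off_current(0.6)
```

Since supply voltage must stay a few multiples of Vth for speed, and S is pinned near 60 mV/decade, lowering Vth along with the supply buys performance at an exponential cost in static leakage, which is one of the limits the post describes.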

To go back to concrete examples: Intel's Pentium 4 marked, IMHO, a true "hard stop" in the quest to keep increasing frequencies. The idea of having the CPU perform more and more tasks with few (first one, later two) cores by running a core really, really fast hit a wall. The Pentium 4 wasn't helped by a design with a very long pipeline that caused trouble when it stalled, which happened often enough. The entire industry then turned to multi-core designs, and to increasing speed and efficiency by including specialized circuitry (e.g., SIMD units) in the cores.
