Hillsboro (OR) – Intel’s business model boils down to building more chips at higher profitability each quarter. A big portion of that profitability comes from production yield – so it is no surprise that Intel throws lots of resources at improving its CPU manufacturing long before mass production ramps. The upcoming 65 nm generation has already reached the yields of a mature process, the company claims.
It’s no secret anymore that Intel will introduce a range of processors that will showcase the firm’s 65 nm production process. Smaller structures – which include gate lengths of 35 nm in Intel’s 65 nm processors – allow semiconductor firms to squeeze more units onto one wafer and, theoretically, increase production output, efficiency and profit. However, smaller structures also become more complex from generation to generation and require enormous investments and expensive approaches to achieve high yield rates of finished, functional products.
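The economics behind the shrink can be illustrated with a rough back-of-envelope sketch. The die sizes below are hypothetical, not Intel’s actual figures – the point is only that a full node shrink roughly halves die area, and that smaller dies also waste less of the wafer’s round edge, so the gain in units per wafer is slightly better than 2x:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic die-per-wafer approximation: gross area divided by die area,
    minus a correction for partial dies lost along the wafer's round edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Illustrative die sizes (assumed, not Intel data):
old_die = 112.0  # hypothetical design at 90 nm, in mm^2
new_die = 56.0   # same design shrunk to 65 nm, area roughly halved

print(dies_per_wafer(300, old_die))  # → 568 gross dies on a 300 mm wafer
print(dies_per_wafer(300, new_die))  # → 1173 gross dies, a bit more than 2x
```

These are gross die counts; the yield discussed in this article determines how many of those dies actually work.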
During a recent visit to Intel’s D1D fab, we got a sneak peek at the firm’s efforts and talked to some of the engineers responsible for preparing new processors for mass production.
Today, a typical processor is developed within a period of 24 to 32 months, down from about 48 months a decade ago. The search for errors in the semiconductor design and manufacturing process takes place during this research and development phase, but also throughout the time the CPU is commercially available. While error detection and correction is the main task during development, engineers typically focus on enhancements to the design during a chip’s product cycle.
Intel’s Technology and Development Group (TD) is responsible for finding errors and getting them fixed. When a problem is reported, engineers often have to find the one individual transistor that is causing it and describe the issue to the designer, who is then able to correct it. While testing is largely automated and the general problem usually points to a certain area of a chip, the sheer number of transistors – 1.7 billion in the future Itanium 2 CPU – creates a perfect needle-in-a-haystack problem.
The process of finding and correcting an error is typically referred to as “nano surgery” and involves corrections to semiconductor structures smaller than 10 nm. Markus Kuhn, a group leader in Intel’s Technology and Development Group, told us that the firm currently can see down to a 5 nm level. “What we do is basically trying to control atoms on a surface level,” he said.
This does not appear to be an exaggeration, considering that it is atoms that influence production yields and introduce variation into semiconductor manufacturing output. Steven Megli, co-plant manager of Intel’s Fab 12 in Chandler, Ariz., mentioned that something as simple as a difference in humidity between the factory where a manufacturing process was developed and a facility trying to duplicate that process can cause substantial differences in yield. “We spent weeks to create the same air quality in Chandler as we have it in Oregon,” he said. While the firm aims to exactly duplicate a manufacturing environment in multiple locations around the world, Megli conceded that this is an impossible task, as individual unexpected atoms floating in the air can disrupt production yields. “We are happy if we can achieve about 90 percent of the exact same environment,” he told us.
It all comes down to understanding and controlling a material in a certain environment as much as possible. “There is a statistical amount of variation that we need to control,” he said. Complete control would simply be impossible, according to Kuhn, but his group aims to “control an issue until it becomes a non-issue.”
Intel’s efforts in finding even the smallest errors before production ramps appear to be paying off – at least, if we believe the presentations of company executives. Within 20 months, the 65 nm process has reached a maturity level that 180 nm reached only after 38 months, 130 nm after 30 months and 90 nm after 26 months.
The capability to detect even the smallest errors early brings up the question of why Intel was not aware of the significant leakage current in its 90 nm processors – a circumstance that limited the firm’s ability to scale the architecture beyond 3.8 GHz due to enormous levels of power consumption and heat dissipation. According to Kuhn, Intel was very well aware of the leakage, but it was not considered a critical issue at the time the design was developed. “We knew that the 90 nm chips would create much more heat when we increased performance. But it was not that important then. This has become a problem only in recent times,” he said.
Leakage has been decreased in 65 nm processors and will continue to play an important part in processor development down the road. But Kuhn believes that the industry never will solve the problem of current leaking through gate oxides. “It’s a quantum mechanical effect. There always will be leakage,” he said. “Our goal is to make it less significant.”
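Why this quantum mechanical effect is so stubborn can be sketched with a textbook-level rule of thumb (an illustration, not Intel’s model): direct tunneling current through a gate oxide grows roughly exponentially as the oxide thins, on the order of 10x per 0.2 nm of thinning. The `decade_per_nm` parameter below is an assumed constant chosen to express exactly that rule:

```python
def relative_leakage(t_ox_nm: float, decade_per_nm: float = 0.2) -> float:
    """Relative gate tunneling leakage, normalized to 1.0 at a 1 nm oxide.
    Rule-of-thumb model: leakage rises ~10x for every `decade_per_nm`
    nanometers the oxide is thinned (illustrative constant, not Intel data)."""
    return 10 ** ((1.0 - t_ox_nm) / decade_per_nm)

print(relative_leakage(1.0))  # → 1.0   (baseline)
print(relative_leakage(0.8))  # → 10.0  (thinning by 0.2 nm: ~10x leakage)
```

The exponential sensitivity is what Kuhn is alluding to: scaling cannot eliminate tunneling, only trade it off, so the goal is to make it “less significant” rather than zero.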
So, how long will Intel be able to keep up this process and scale its CPUs? Kuhn does not see a limit yet. Current equipment, including massive electron microscopes costing several million dollars, will carry research and development through the 32 nm processors expected to hit the market in 2009. After that, the industry comes close to what many consider the nanotechnology age, with structures of 20 nm and smaller. Kuhn is optimistic that semiconductors will scale well beyond that. “By the time I retire, I want to see electrons,” he said.