NCSA: A look inside one of the world's most capable supercomputer facilities

Posted by Rick C. Hodgin

The north campus of the University of Illinois at Urbana-Champaign is home to one of the most powerful and longest-running supercomputer facilities in the world. The most recent Top 500 list includes all five of its primary workhorses, and three of them landed in the top 100, with Abe debuting at #8. Walking past the facility, you would have no idea what's going on inside. But it's centers like this that shape much of science and industry. Many believe they even hold the keys to understanding the very nature of our universe.

A look inside the NCSA (42 images) ...

I had the unique opportunity to visit the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois, last week. I was greeted by public information specialist Trish Barker. Trish was joined by Joshi Fullop and John Stone. When we arrived at a second facility we were also joined by Senior Operations Program Manager Tom Roney.

Joshi is responsible for developing all of the monitoring software used at the facility. His software is also available open source. John is affiliated with the NCSA indirectly through his department's large supercomputing needs at the university. And Tom is responsible for the daily operations of physically running the supercomputing facility.

They all showed a great love and enthusiasm for what they do, which came through in an almost competitive atmosphere as they raced to see who could best answer any question I asked. They'd even carry on side conversations amongst themselves for a few seconds here and there. It was actually quite humorous at times to sit back and watch it all unfold. The four of them brought many different viewpoints on the technology itself, its applications and the real-world benefits we might see.

During the tour, there was so much information being tossed around that I could've literally spent days with them and not received it all. One thing was absolutely clear, however: the effort involved in making supercomputing available is no small feat. It requires knowledge and teamwork. Big teamwork. The people I came in contact with were impressive. They all seemed to be at the top of their game and it was quite a task for me just to keep up with everything (even though I was told they were greatly simplifying much of the technical details).

What is the NCSA?

The NCSA is the largest non-secure, or public, supercomputing facility in the United States. It was created in 1985 after an unsolicited request for funding was sent to the director of the National Science Foundation in 1983.

The early days of the NCSA were rather difficult. They were often pressed for funding because the real-world benefits of something as new as a supercomputer weren't well understood. There was also an inertial mindset working in direct opposition to the early team's efforts, a skepticism about the need for such newfangled technology. However, it wasn't long after their first machine, a Cray X-MP/24, began churning away that the skeptics were proved wrong. I find it interesting to note that today's high-end desktops are about three times as powerful as that original Cray, and they have vastly more memory and storage capacity. We all have supercomputers on our desktops today, if only 20 years later.

You may have come across the name NCSA before, as this place is recognized as the birthplace of the graphical web browser. It was here that Marc Andreessen, who later became the co-founder of Netscape, and Eric Bina created the Mosaic browser in the early 1990s.

What services do they offer?

The NCSA is part of an open community, meaning if you or I had the need to do so, even we could use their facilities.

The NCSA is one of the four largest open facilities in the United States; there are also several smaller sites. The other three big players are the San Diego Supercomputer Center (SDSC), the Pittsburgh Supercomputing Center (PSC), and the newest one, the Texas Advanced Computing Center (TACC).

The NCSA's close ties to the academic community have provided many advances which are now industry standards. The teams working at NCSA right now are attempting to solve the future problems of newer machines housing more than 100,000 physical processors (400,000+ cores). The challenges facing this field of development are extreme. The team I toured with could not emphasize that reality enough.

What does the NCSA look like?

The NCSA campus is physically divided into two buildings. The first one I arrived at was completed in 2006 and now houses the bulk of the non-hardware support staff. Before it was completed, the NCSA staff was scattered across as many as seven buildings around campus. Today, more than 300 people work at the NCSA to make everything happen. These are the people responsible for software setup, testing, proofing and analysis. There are also departments which work with the data after it's computed, like their Advanced Visualization Laboratory, which has created some stunning work under the leadership of Donna Cox and her team. They create 3D models and animations and provide researchers with a more tangible way to view their hard work.

Altogether the facilities in their new building allow NCSA customers to have the most interconnected and robust experience possible, no matter what their supercomputing needs are.

The other building was adopted and expanded from what used to be an astronomy center at the university in the 1970s. The original portion of the building comprises two upper levels and a basement-like first floor. The new 18,029-square-foot addition was completed in 2001 and consists of one large floor plus the same kind of basement level.

The supercomputers at the NCSA are physically housed in this second building. It also contains an immense electrical station and four very large cooling systems consuming nearly all of the square footage available.

From the outside you would have no idea what takes place inside. Only a small sign out front gives any hint; it reads “Advanced Computation Building”. Still, in order to gain access to the building, you must pass through the outer layer of security, which includes audio, video, a card scanner and a keypad. Once inside, you proceed up one floor via a plexiglass-topped elevator. When you exit at the second floor you're standing at the back of the main control room. From there, the staff can monitor the supercomputer installation, down to every individual processor.

In truth, I must say there was a real sense of energy, of electricity, in the air in that room. When I first entered I expected to hear a hum, like we've all heard on the bridge in shows like Star Trek. Still, as I gazed toward the many screens and glanced at their important data, I could not help but be struck by it all. The power... I mean, it's just right there on display before you.

The main control room looks like a small version of NASA's mission control. It's manned around the clock, every day, all year, by three or four operators. This control room serves as a first-response “call center” not only for the customers using their facility, but also for nine other sites. NCSA is part of an organization called TeraGrid, a distributed computing system spanning many different sites in the U.S. TeraGrid communicates with all sites using a 30 Gbps backbone.

There are literally dozens of monitors displaying details of whatever is processing on the machines, which sit even deeper in the building. Joshi's open-source monitoring software populates many of those screens. Still, it's those machines further back in the building which really draw companies to the NCSA for their number-crunching needs. Their power and performance are key. Let's see what it is all about.



How much computing capacity does NCSA have?

The total maximum theoretical computing capacity of the NCSA is 146 teraflops, distributed across five primary machines. The sustained throughput is somewhat less, due to overhead like networking and disk storage. To put this large number into perspective, consider this: the average high-end multi-core PC would need the better part of a day of raw number crunching (and, as explained below, several days in practice) to match what the NCSA can do in a single second.

I've calculated that the NCSA's raw computing capacity is about 37,000 times greater than that of a high-end PC. The NCSA's computers can literally crunch in 2.37 seconds what it would theoretically take your PC all day to compute. I say “theoretically” because, while that raw ratio holds, home PCs are not supercomputers. The vast memory, disk storage and networking capacity of these supercomputers are so far beyond the home PC that a more accurate comparison is this: the NCSA can compute in less than one second what would take your PC several days to crunch.
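
To see where those numbers come from, here is a back-of-the-envelope sketch in Python. The roughly 4-gigaflop desktop figure is my own assumption, implied by the 37,000x ratio; it is not an NCSA-supplied number.

    # Back-of-the-envelope check of the comparison above. The ~4 GFLOPS desktop
    # figure is an assumption implied by the 37,000x ratio, not an NCSA number.
    ncsa_peak_flops = 146e12                      # 146 teraflops across five machines
    desktop_flops = ncsa_peak_flops / 37_000      # ~3.9 billion flops, a 2007 high-end PC

    # Time the desktop needs to match one second of NCSA output:
    desktop_seconds = ncsa_peak_flops / desktop_flops          # 37,000 seconds
    print(f"Desktop needs ~{desktop_seconds / 3600:.1f} hours per NCSA-second")

    # And the reverse: one full day of desktop crunching, done at NCSA speed:
    one_pc_day_of_work = desktop_flops * 24 * 3600             # operations in a PC-day
    print(f"NCSA time for one PC-day of work: ~{one_pc_day_of_work / ncsa_peak_flops:.2f} s")

Run as written, that works out to roughly ten hours of desktop time per NCSA-second, and about 2.3 seconds per desktop-day, which is within rounding of the 2.37-second figure above.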

I was told by John and Joshi that many industries are shaped and fed by the research which takes place at the NCSA. Either they are the consumers of new products for testing or, and this happens a lot, the results of whatever their machines have processed serve to drive new ideas in the industry.

I was also told that vendors literally come to the NCSA and say “Hey, we've built this bigger, badder machine but we've never tested it in the real world. Will you take it, run it through its paces and see if you can break it for us?” Some of these tests end in horrific disaster. Others end up shaping the future of supercomputing. The goal is to constantly be looking ahead 5-10 years.

The truth is that the benefits of supercomputing drive entire areas of industry and research. There are countless industries whose needs are being met by efforts like these. But they still need more. The computing power required to address the bigger questions we keep asking is growing exponentially; as larger and larger problems are found, they need more and more horsepower to solve. Right now our ability to ask questions greatly exceeds our processing potential. However, with the new proposal they should see 10-petaflop computing in a few years. That's at least 10 times more powerful than the highest-end machine we see today.

Is it all just about the computing abilities?

While the machines themselves are impressive in raw horsepower, they would be nothing without the experts who keep them running. Everyone I spoke with kept going on and on about how so-and-so's team is responsible for this. And, such-and-such's group is responsible for that.

There is just so much teamwork involved in bringing forth this kind of effort that I was really taken aback by it all. I had no idea of the sheer amount of effort, and the challenges, facing the supercomputing industry. I figured it was pretty much “lock and load”, something like: if it worked on your 42U cluster at the office, why shouldn't it work identically on a supercomputer? Well, the fact is that's often the customer's experience. But the whole process works that smoothly only because of the teamwork. Were it not for the expertise, things would be far worse.

How much does a supercomputer cost?

I'm told the hardware components of a supercomputer purchase represent only a very small portion of the lifetime expense of a facility like the NCSA.
If you wanted to buy your own supercomputer you'd have the initial costs of physical hardware: things like buildings, computers, networks, storage, memory, cooling equipment, etc. Roughly half of that investment goes to the servers and the other half goes into networking and interconnects. Beyond that, you have the ongoing salaries of the hundreds of people required to keep everything running smoothly. There are also enormous recurring expenses like electricity and constant component upgrades and expansions. Suppose the average technician earns $70K per year. That's $21 million annually for the NCSA's roughly 300 staff, just in salaries.

The group I toured with talked in terms of the “thousands of dollars per hour” it costs to operate the facility. That works out to roughly $1 per processor per hour. Still, that's very inexpensive compared to the late 1980s, when supercomputing was new and time literally cost $1000 to $1800 per processor per hour. It is impressive how far we've come (and how far we're going).
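
To get a feel for where numbers like that come from, here is a crude, purely illustrative cost model. The staff count, salaries and power bill come from elsewhere in this article; the overhead line and the processor count are my own rough assumptions.

    # Illustrative only: a crude operating-cost model. Staff count, salary and the
    # power bill come from this article; "other_overhead" and the processor count
    # are my own rough assumptions.
    staff, avg_salary = 300, 70_000
    salaries = staff * avg_salary             # ~$21 million per year, as noted above
    power = 1_500_000                         # supercomputing building's annual power bill
    other_overhead = 7_500_000                # assumed: hardware refresh, service, facilities

    hours_per_year = 365 * 24
    cost_per_hour = (salaries + power + other_overhead) / hours_per_year
    processors = 8_000                        # ballpark CPU total across the five machines

    print(f"~${cost_per_hour:,.0f} per hour")                           # a few thousand dollars
    print(f"~${cost_per_hour / processors:.2f} per processor per hour")

Depending on what you fold into overhead and how you count processors, that lands anywhere from a few tens of cents to around a dollar per processor-hour, the same ballpark the team quoted.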

How many supercomputers are there at NCSA?

NCSA has five primary machines, though many of their older retired machines exist (at least in part) for something they call “preparation”. Some of the older machines are also used for their in-house advanced 3D rendering farms.

All of these machines listed below are part of TeraGrid, which communicates between sites via a 30 Gbps network. For communication with the university, a 10 Gbps network is used. All five of their primary workhorses were listed in the June 2007 Top 500 list (their peak ratings can be roughly cross-checked from the hardware specs, as sketched after the list):

#8 - “Abe” - 2007 - 90 teraflops
Abe is the latest supercomputer to join the NCSA. It is scheduled to go into full production on Monday, July 2, 2007, and is currently operating at full capacity, rated #8 in the world. It operates at a maximum computing capacity of just under 90 teraflops; its sustained computing capacity is documented at around 63 teraflops. It is a Linux-based cluster comprised of 1200 Dell PowerEdge 1955 blades. Each blade sports two 2.33 GHz quad-core Xeons (Clovertown core) on a 1333 MHz FSB operating in Intel 64 mode (true 64-bit computing). That's 2400 physical processors housing 9600 cores. The system communicates with itself and the outside world over a 10 Gbps InfiniBand network. It has 200 TB of disk storage, and each core has a dedicated gigabyte of memory all to itself, resulting in 9.6 TB of DDR2 memory total.

#47 - “T3” - 2006 - 22 teraflops
T3 was installed in March, 2007. It is currently ranked #47 in the world and operates at a maximum computing capacity of 22 teraflops. It is a Linux-based cluster comprised of 1040 dual-core 2.66 GHz Xeons resulting in 2080 physical cores. It contains a total of 4.1 TB of system memory and 20 TB of disk storage.

#90 - “Tungsten” - 2003 – 16.4 teraflops
Tungsten was installed in November, 2003 and debuted at #3 in the world. It is currently ranked #90, showing just how fast this landscape changes. It is a Red Hat Linux-based cluster of Dell PowerEdge 1750 servers, each with two 3.2 GHz NetBurst Xeons, for a total of 2560 processors and a maximum computing capacity of 16.38 teraflops. Its sustained computing capacity is documented at 9.819 teraflops. It sits atop a Myricom Myrinet interconnect network.

#160 - “Mercury” - 2004 – 10.2 teraflops
Mercury was installed in June, 2004 and debuted at #15 in the world. It is currently ranked #160. It has a maximum computing capacity of 10.23 teraflops and a sustained computing capacity of 7.22 teraflops. It is a SuSE Linux-based cluster comprised of 887 dual-processor Itanium 2 nodes operating at 1.3 GHz and 1.5 GHz, resulting in 1774 physical processors. Each node has either 4 GB or 12 GB of memory. It also sits atop a Myricom Myrinet interconnect network.

#246 – “Cobalt” - 2005 – 6.55 teraflops
Cobalt was installed in June, 2005 and debuted at #48 in the world. It is currently ranked #246. It has a maximum computing capacity of 6.55 teraflops and a sustained computing capacity of 6.1 teraflops. It is a Linux-based cluster comprised of 512 dual-processor Itanium 2 nodes operating at 1.6 GHz. Each node has either 2 GB or 6 GB of memory. It uses an SGI NUMAlink 4 interconnect internally and an InfiniBand network to reach other machines.
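
A rough way to sanity-check the peak ratings listed above is to multiply processor count by clock speed by floating-point operations per clock. The flops-per-clock values below are my own assumptions based on the processor generations involved, not NCSA-published figures.

    # Rough cross-check of the peak (not sustained) ratings listed above.
    # flops-per-clock is assumed: 4 for the Core-based Xeons and Itanium 2, 2 for
    # the older NetBurst Xeons. Mercury's clock is averaged over its 1.3/1.5 GHz mix.
    machines = {
        # name:     (cpus_or_cores, clock_ghz, flops_per_clock)
        "Abe":      (9600, 2.33, 4),
        "T3":       (2080, 2.66, 4),
        "Tungsten": (2560, 3.20, 2),
        "Mercury":  (1774, 1.40, 4),
        "Cobalt":   (1024, 1.60, 4),
    }
    for name, (cpus, ghz, flops_per_clock) in machines.items():
        print(f"{name}: ~{cpus * ghz * flops_per_clock / 1000:.1f} peak teraflops")

Those come out close to the listed 90, 22, 16.4, 10.2 and 6.55 teraflop figures; Mercury's exact number depends on how many of its nodes run at 1.5 GHz.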

There are several other high-powered computers used for preparation runs, monitoring, networking and other applications, with a large amount of fault tolerance built in, especially around the monitoring software. After all, if something crashes, it's the monitoring software that needs to alert someone.

The NCSA also has lots of other non-computational machines. For example, they have two new tape storage libraries, each capable of housing 11,000 tapes. Each tape holds 400 GB of data, for a total storage capacity per machine of 4.4 PB (petabytes). These machines are about 20 feet long, 5 feet wide and 6 feet high. They have an internal retrieval mechanism which zips back and forth along the tape library at 30 mph; Tom told me it is capable of moving at 60 mph. I can tell you it was quite impressive, and a little scary, to see the thing move back and forth at 30 mph and stop just inches from your face on the other side of the glass. It takes a degree of concentration not to flinch.

What I found very interesting was that the NCSA stores a tape backup of all data it has ever computed for its customers. They keep everything online for immediate retrieval for two months in active disk cache. After that it's only available via tape and must be requested for download. The automated machines retrieve data as it's requested, and turnaround is quite reasonable, even for very old, archived data.

Tom told me it took the NCSA 19 years to reach the one-petabyte storage milestone, which happened in 2005. But it took only another 12 months to reach the second petabyte (2006). The third came in only eight months (2007), and right now they're estimating six more months to reach number four. To put that into perspective, a petabyte is approximately 1,500,000 CDs' worth of data, enough to cover a football field with CDs laid end to end, stacked five discs high.
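
For what it's worth, the storage arithmetic checks out; the 700 MB per CD figure is my own assumption.

    # Quick check of the storage figures above (700 MB per CD is my assumption).
    tapes, gb_per_tape = 11_000, 400
    print(f"Per tape library: {tapes * gb_per_tape / 1_000_000:.1f} PB")    # ~4.4 PB

    petabyte_in_mb = 1_000_000_000        # one petabyte in megabytes, decimal units
    print(f"CDs per petabyte: ~{petabyte_in_mb / 700:,.0f}")                # ~1.4 million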



What does it take to keep everything cool?

The NCSA employs a very impressive cooling system. There are four air chillers, each about 12 feet high, 20 feet wide and 30 feet long. Inside each unit is a wall of air filters, a heat exchanger and an exhaust vent which pours cold air into a six-foot-high false floor beneath the supercomputers. The floor acts as its own HVAC ducting and creates alternating hot and cold rows: every other row between the supercomputers has holes in the floor which act as vents, letting the pressurized air inside the false floor blow upward. The air enters those rows uniformly across the entire floor, and it's this design which cools the entire room.

The air coming out of these holes is about 50 degrees Fahrenheit. There is enough volume and pressure that it blows with intensity all the time. It didn't take more than about 15 seconds in a cold row before you really started to get cold. I would probably describe it as standing in front of a really cold AC unit as it blows on high. Joshi told me that while they were setting up some of the new computers the cooling system could not be turned off. As a result the teams working were wearing something like high-altitude clothing to work in the cold rows as they installed and hooked everything up. Also, when you go down into the false floor it is very wise to take a jacket due to the 30+ mph wind gusts experienced down there.

The cold rows serve as the inlet air supply, filling the airspace between rows of computers. The machines are oriented so that they always draw air in from the cold side and expel it to the hot side, which means every other row of computers faces the opposite way. Once the air is expelled, it is drawn up to the ceiling in this closed system, where it enters return vents and cycles through again.

The hot air being expelled was around 98 degrees Fahrenheit everywhere I tested it - which means that the air sees an increase of almost 50 degrees Fahrenheit in only one pass through these workhorse machines.
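
As a rough sanity check on the cooling design, the standard air-side heat equation gives a feel for how much air has to move. The only inputs taken from the article are the 1.7 megawatt draw and the roughly 48-degree rise; assuming essentially all of that power ends up as room heat is my own simplification.

    # Rough airflow estimate: how much 50 F air does it take to carry away ~1.7 MW
    # with a ~48 F (about 27 C) temperature rise? Assumes all input power becomes
    # room heat and uses typical properties of air; my estimate, not an NCSA figure.
    heat_watts = 1.7e6                 # total electrical input, assumed to become heat
    delta_t_kelvin = 48 / 1.8          # 48 F rise expressed in kelvin (~26.7)
    air_density = 1.2                  # kg/m^3
    air_heat_capacity = 1005.0         # J/(kg*K)

    flow_m3_per_s = heat_watts / (air_density * air_heat_capacity * delta_t_kelvin)
    flow_cfm = flow_m3_per_s * 2118.88          # convert to cubic feet per minute
    print(f"~{flow_m3_per_s:.0f} m^3/s (~{flow_cfm:,.0f} CFM total, "
          f"~{flow_cfm / 4:,.0f} CFM per chiller)")

Something on the order of 100,000 cubic feet of chilled air per minute is consistent with the pressurized false floor and the 30+ mph gusts described above.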

What about fire safety?

I was very surprised to learn that the fire system at a supercomputing facility like this is water based. Tom spoke about Halon gas systems having long been the preferred method. However, at the time of the new addition to the building, Halon was under close scrutiny due to its potential damage to the environment. It's also very expensive and requires special breathing apparatus if it's ever discharged. As such, a simple water-drench fire control system is used.

I noticed during the tour that incoming water lines were about 18 inches in diameter at 75 psi. That equates to several hundred gallons of water per minute throughput. In other words, more than enough to quench the entire facility very fast and efficiently in the event of an emergency. There are several smoke sensors throughout the facility and everything fire-system related is clearly identified by orange PVC piping.

Tom told me that two things would have to happen before water is ever pumped into the normally 100% dry system. First, there has to be smoke detected: no smoke, no water. Second, there has to be enough heat to trigger a sprinkler head (bursting the small red glass bulb that holds it closed). Unless both of these events occur, the system remains completely dry, with only alarms going off to alert the control room.
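
That two-condition rule is a simple interlock; here is the logic as Tom described it, purely as an illustration rather than the actual control system.

    # Illustration only: the two-condition interlock Tom described. Water flows
    # only if BOTH conditions hold; otherwise the pipes stay dry and only the
    # alarms in the control room go off.
    def release_water(smoke_detected: bool, sprinkler_head_opened: bool) -> bool:
        return smoke_detected and sprinkler_head_opened

    assert release_water(True, True) is True      # real fire: smoke plus heat
    assert release_water(True, False) is False    # smoke alone: alarms, but still dry
    assert release_water(False, True) is False    # a broken head alone: still dry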


How much power does it take to run the NCSA?

Powering and cooling the NCSA's supercomputing building requires a constant input of 1.7 megawatts. An entire corner of the building was dedicated to power receipt and distribution. They also had two dedicated UPS units, each six feet tall, three feet wide and 12 feet deep, capable of keeping all monitoring systems up and running for 15 minutes in the event of a catastrophic power failure. I asked how often they see unexpected power losses and was told “never.” The university gets priority notifications from the power company due to its large consumption.

The NCSA's one supercomputing building consumes 6% of the entire university's 30-megawatt power budget. That equates to a one-building electricity bill of about $3 per minute, or roughly $1,500,000 per year. The cost breaks down to about two-thirds for cooling and one-third for powering the computers themselves.
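
Here is the arithmetic behind that, with the electricity rate as my own assumption, back-solved so the numbers are consistent with the quoted annual bill.

    # Power-cost arithmetic for the supercomputing building. The $/kWh rate is an
    # assumption chosen to be consistent with the ~$1.5M/year figure above.
    power_megawatts = 1.7
    hours_per_year = 365 * 24
    kwh_per_year = power_megawatts * 1000 * hours_per_year    # ~14.9 million kWh
    assumed_rate = 0.10                                       # dollars per kWh
    annual_cost = kwh_per_year * assumed_rate

    minutes_per_year = hours_per_year * 60
    print(f"~${annual_cost:,.0f} per year, ~${annual_cost / minutes_per_year:.2f} per minute")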

How can someone use a supercomputer?

When I asked this question the team began speaking rather intently about something called “allocations”. An allocation is an allotment of time on a particular system, but it's not quite that simple. You don't walk in with a fist full of dollars and say “I have the money, I want five allocations”. Several factors weigh in on who gets to use the equipment, because there is a limited amount of computing power available for the many thousands of people who want to use it. The center is funded primarily by the NSF and is offered completely free to anyone desiring to use it; however, there is a peer-review process for applications. So, mostly because of the intense competition for access, getting time turns into a real process that must be gone through.

For the majority of customers (even the university), allocations are submitted and reviewed on an annual basis. An entire year's worth of supercomputing use must be accounted for at that time, and a proposal can be no longer than seven pages. If you intend to run 6 jobs for the whole year, you're allowed seven pages. If you intend to run 10,000 jobs for the whole year, you're still allowed only seven pages.

In truth, this requires a tremendous amount of scheduling and foresight; the team could not emphasize that fact and its significance enough. There are often teams at the university vying for a portion of their department's allocation of supercomputer time (should it be approved). Those teams must submit internal proposals long before the finalists get put onto the official proposal submitted to the NCSA. This often results in many worthwhile projects going unprocessed due to today's limits on maximum computing capacity. If we had faster equipment, many more of these projects would see the light of day and many more advancements would be possible.

One of the more interesting aspects I found was that some projects involve a circle of data: the output from a January run might be the input data for a March run, and that output might then be part of a September run. When you're scheduling things a year in advance, there is some concern over whether the data will be accurate enough to move forward at each stage. Suppose there was an error in your model. Or suppose there wasn't an error, and yet the model yielded far different results than you were expecting. What then? You have to analyze, regroup, rethink, redesign. These things all take time, and they are real concerns requiring phenomenal degrees of planning and post-run analysis by everyone involved.



What about funding?

The NCSA is funded primarily by the National Science Foundation (NSF) and the university. Since the facility is, therefore, fully funded in all respects, competition arises not from who has the most money but from who has the best need. The peer-review process and annual proposals are used to determine who gets how many allocations on the systems. The reality here is that many worthwhile projects get passed over due to limitations in computing resources. The problem for researchers isn't the money; it's the mechanics of proving to a review committee that your job is at least as important as someone else's so that you wind up being awarded time on the machine. Again, if we had faster and less costly supercomputer resources available, many more research projects would be executed each year. Abe is a big step in that direction, with its 90 teraflops coming from a mere two rows of racks. Compare that to several other systems capable of 22 teraflops or less consuming 5, 6 or 7 rows. Copper, a system scheduled to go out of service in September, 2007, takes up about the same amount of space as Abe, yet at 2 teraflops it is roughly 45 times less powerful.

It reminded me of the Barbra Streisand song Putting It Together, with the lyric: “Every time I stop to get defensive I remember: vinyl is expensive” (referring to her dealing with the record company executives and their needs when all she wanted to do was record music).

What are the real-world benefits?

There are no two ways about it. Sites like the NCSA literally drive high-speed hardware innovation and seamless software integration initiatives world-wide. It's a marriage that doesn't grow old over time either. In fact, right now there are about twenty teams working on solving the biggest problems of tomorrow's supercomputers (those with more than 100,000 processors). There just aren't solutions today which scale to that size with enough efficiency.

The industry's forward-thinking attitude also reflects an old adage: after years of plodding along on a particular technology, something new can come along in an instant and change the entire face of computing. It will be the supercomputing arenas where such revolutionary changes are first tested and applied.

Still, we (the consumers) end up seeing real benefits from their efforts. This filtering down occurs only a few years after products are developed for supercomputers. Technology is pioneered at the high end, but it reaches the rest of us pretty quickly, all things considered.

We can thank sites like the NCSA for Internet-based web browsers (NCSA Mosaic), Gigabit and faster Ethernet, inexpensive fiber optics, high-density memory and disk storage, faster processors, lower power consumption, hardware feature advancements and integration, and much more related technology, such as standards and best practices. Were it not for these sites' needs growing so quickly, it would have been far longer before the rest of us saw any significant breakthroughs in those areas.

Mean Time Between Failures – MTBF

In enterprise applications, MTBF is a big deal – it describes how long, on average, you can expect to go between failures. It gets extremely complex in supercomputing applications, and this is how John described it to me: He said, “Consider having one light bulb. How long will it be before it fails? Now, suppose you have two? How long before one fails? Now, suppose you have 100. Or 1000. Or 10,000. Or a million. How long now? With supercomputers, we're dealing with machines that now have more processors than the first computers had transistors. The fact is they do fail, and one of [the NCSA's] jobs is to recognize that and try to make the impact as minimal as possible.”

John brought the concept of MTBF into stark reality for me. Consider dealing with a machine like Abe and its almost 10,000 cores. The fact is that even with error checking and correction (ECC) there will still be hardware errors every so often. Right now, I'm told, the MTBF is about six hours. This means that every six hours, on average, something among the thousands of cores, the thousands of gigabytes of memory and hard drives, the thousands of network cards, and the hundreds of miles of cabling and fiber optics, all drawing 1.7 megawatts of constant power across that nearly 50-degree-Fahrenheit cool-side-to-hot-side differential, will fail. Something will not be processed correctly, resulting in a failed job. When that happens, the project time that was used on the supercomputer is wasted.
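
John's light-bulb analogy translates directly into a simple rule of thumb: if you assume, purely for illustration, that a system is built from N independent, identical parts, its aggregate MTBF is the single-part MTBF divided by N.

    # Illustration of John's light-bulb point: more parts, more frequent failures.
    # Assumes independent, identical components, which is a simplification.
    def system_mtbf_hours(component_mtbf_hours: float, n_components: int) -> float:
        return component_mtbf_hours / n_components

    # One very reliable part might fail about every seven years...
    print(system_mtbf_hours(60_000, 1))         # 60000.0 hours
    # ...but 10,000 of them, taken together, fail about every six hours.
    print(system_mtbf_hours(60_000, 10_000))    # 6.0 hours

That second number is right around the six-hour figure quoted for an Abe-class machine.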

Consider a job which might take two days to crunch. Suppose it ran from beginning to end without stopping, only for you to discover at the finish that something had failed. That's a large block of time (roughly 1/180th of the annual allocation) that has just been wasted. The whole job has to start over, and everyone else with scheduled tasks may get bumped back.

To combat this hard reality, the team tries to break jobs into blocks of six hours or less. There are also software solutions being researched right now which no longer map a job directly onto physical hardware, but instead virtualize everything. This means that a particular job will no longer need to know whether it's running on 1000 cores, 5000 cores, 200 cores or whatever. It would instead simply request however many logical cores it needs, and those are assigned to whatever machines are available. And because jobs are designed to be virtualized in this way, it no longer matters if a particular machine goes down, or even if a whole bank of machines goes down. The job will persist, with only the most recently computed block being lost and requiring re-computation.
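
That six-hour-block strategy is essentially checkpointing: save your state often enough that a failure only costs you the block that was in flight. The sketch below is a toy illustration of the idea, not the NCSA's actual job-management software; the file name and functions are hypothetical.

    # Toy checkpoint/restart: process work in blocks, saving state after each one,
    # so a crash only loses the block that was in flight. Purely illustrative.
    import os
    import pickle

    CHECKPOINT = "job_state.pkl"       # hypothetical state file

    def load_state():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"next_block": 0, "results": []}

    def run_job(total_blocks, compute_block):
        state = load_state()                                   # resume where we left off
        for block in range(state["next_block"], total_blocks):
            state["results"].append(compute_block(block))      # may take hours per block
            state["next_block"] = block + 1
            with open(CHECKPOINT, "wb") as f:                  # checkpoint after every block
                pickle.dump(state, f)
        return state["results"]

    # Example: eight "six-hour" blocks. If the machine dies mid-run, calling
    # run_job() again repeats only the unfinished block.
    print(run_job(8, lambda b: b * b))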

I'm told the software efforts being researched right now will radically shape the direction the supercomputing industry takes in 10+ years. The idea of virtualization seems to be paramount in so many computer-related industries that I truly believe the days of the software being written for the machine are nearly at an end.



Miscellaneous Facts

#1 – There are some ripple effects which propagate observably in large clusters when there are networking or hardware problems. A malfunctioning network connection (one requiring a large number of retries), for example, can cause the entire system to get slower, and slower, and slower over time. This adds many hundreds of seconds to some jobs, which equates to thousands of dollars in computer time. The networking teams have worked hard to keep this from being a problem, but the reality is it does happen and it is a concern. This is another reason so many teams are required to keep everything operating at high capacity: a simple issue like a mildly faulty network adapter can cost users tens of thousands of dollars.

#2 – Memory prices have fallen and disk capacities have grown so quickly that there is a rapidly steepening ramp toward much higher computing ability. We will see supercomputers with wider abilities much faster now that multiple cores are here. The machines will be smaller and more affordable, and will bring supercomputing potential to the many projects which are undoubtedly passed over today due to priorities and politics in allocation assignments.

#3 – There were so many more things the team wanted to tell me. They just couldn't speak a word because they were constrained by non-disclosure agreements. Every time we began speaking about a particular subject in depth it would almost always come to a lull. John, Joshi and Trish would all stop and begin looking at each other, deciding what could be said and what couldn't. They would occasionally go off on one of those conversations where they speak amongst themselves in non-discernible code, like “you know that thing with the thing, can we talk about that?” Or “what about the thing the place sent over?” Still, they did describe the general area of science in question: it is the absolute bleeding edge, the place where they receive products for testing or research which have only ever been produced in the lab.

#4 – During one of those lulls I was told not to worry about Moore's Law. There are apparently solutions which, if not yet publicly known (like Intel's high-k dielectrics), soon will be. They said it will continue on, most likely indefinitely. Imagine that. Think of the jump from the original 80286 and 80386 to where we are today, and then imagine that much of a relative increase in power over the next 20 years. Our cell phones' computational capacity will dwarf today's highest-end desktop machines. Our PDAs will be portable supercomputers (as if some of them today already aren't, especially compared with the original Cray X-MP/24). The future is just amazing, and it's all because of efforts like these. Amazing.

#5 – I can't convey clearly enough how jazzed the team I spoke with seemed to be about what they do. When they were speaking about the technology, the future, even the problems needing to be solved, they were among the happiest employed people I've ever seen. I can't think of a situation where I've seen people happier to do what they do every day. Trish, Joshi, John, Tom, they all seemed to be excited about where they are and where they're going. From my own experience in the software industry, it's only when you're able to work on a project you've personally helped create that you find that kind of excitement. And that's just what it is for these professionals.

Humble Beginnings

The NCSA began in a campus building called “The Oil Chemistry Building”. It is a small building, still standing today, with a dated 1960s appearance; it looks very much like something undesirable. Still, that building has produced some of the greatest inventions we all use today. NCSA's Mosaic web browser, for example, after which later browsers were modeled, came out of it. One of YouTube's founders began with a little idea he had to share some data (at the time MP3 files, I'm told). And countless other industry leaders trace their roots back to this one little building, which looks a little sad today.

Over time they expanded to several other buildings. One of them was what is now called “Durst Cycle” across the street from the parking garage. The “NCSA” sign is, quite humorously, still visible out front.

Trivia

The HAL 9000 computer used in 2001: A Space Odyssey was “born” at the same university where NCSA sits nestled. Quoted from the movie:
HAL 9000: “Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it I can sing it for you.” I was told the reason Stanley Kubrick chose Urbana, Illinois, was that one of his associates went to school there.

The Beckman Institute is a building immediately northwest of the NCSA's non-supercomputing facility. It's named after Arnold O. Beckman, born in 1900. He developed an electric pH meter in 1934 as well as an extremely high-speed centrifuge in the 1950s. Today, that facility bearing his name attracts many top scientists (like John) because of its proximity to the NCSA and the close relationship they share. The Beckman Institute (and specifically John's department) represents a very large continuous customer for the NCSA.

The non-supercomputing NCSA building has picture-frame-like displays on the first, second and fourth floors. When you look at them, they appear to be literal images of the exact hallway you're standing in. However, they are more than pictures: they are true 3D animated sequences, and the sequences have some very odd things happening in them. This unusual behavior of an otherwise normal building reflects the creative minds of the art department at the university. It goes along with the virtual theme NCSA customers know from the creations handed out by the Advanced Visualization Laboratory.

Trish, Joshi, John, Tom, I thoroughly enjoyed my time with you. I wanted the TG Daily readers to know just how much your excitement was conveyed to me. Thank you for touring the NCSA with me and sharing the details of your work environment.