Monday, October 29, 2007

NEC's New SX-9: The World's Most Powerful Vector Supercomputer


Fujitsu and Hitachi in Japan, and IBM and Cray in the US, have long been the teraflop giants, constantly competing to be the most powerful computer manufacturer.
Keeping up with that tradition, NEC of Japan on Thursday announced the launch of what it called the world's most powerful supercomputer on the market, the SX-9.
The SX-9 is the fastest vector supercomputer, with a peak processing performance of 839 TFLOPS(*1). It features the world's first CPU capable of a peak vector performance of 102.4 GFLOPS(*2) per single core.

In addition to the newly developed CPU, the SX-9 combines large-scale shared memory of up to 1TB with ultra high-speed interconnects achieving speeds of up to 128GB/second. Through these enhanced features, the SX-9 closes in on the PFLOPS(*3) range by realizing a processing performance of 839 TFLOPS. The SX-9 also achieves an approximately three-quarter reduction in space and power consumption over conventional models, thanks to advanced LSI design and high-density packaging technology.
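A quick sanity check on the headline figure (my own arithmetic, not part of NEC's announcement): multiplying the 102.4 GFLOPS per-CPU vector peak by the 8,192-CPU maximum of a full multi-node configuration reproduces the quoted 839 TFLOPS.

```python
# Recompute the SX-9's peak vector performance from the per-CPU
# figure and the maximum CPU count given in the announcement.
GFLOPS_PER_CPU = 102.4   # peak vector performance per CPU
MAX_CPUS = 8192          # 512 nodes x 16 CPUs per node

peak_gflops = GFLOPS_PER_CPU * MAX_CPUS
peak_tflops = peak_gflops / 1000.0
print(round(peak_tflops, 1))  # 838.9, quoted as "839 TFLOPS"
```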

In comparison to scalar parallel servers(*5) incorporating multiple general-purpose CPUs, the vector supercomputer(*4) offers superior operating performance for high-speed scientific computation and ultra high-speed processing of large-volume data. The enhanced effectiveness of the new product will be clearly demonstrated in fields such as weather forecasting, fluid dynamics and environmental simulation, as well as simulations for as-yet-unknown materials in nanotechnology and polymeric design. NEC has already sold more than 1,000 units of the SX series worldwide to organizations within these scientific fields.

The SX-9 is loaded with "SUPER-UX," basic software compliant with the UNIX System V operating system that extracts maximum performance from the SX series. SUPER-UX is equipped with flexible functions that deliver more effective operational management, compatible with large-scale multi-node systems.
Powerful compiler and library groups and program-development support functions that maximize SX performance make the SX-9 a developer-friendly system. Application assets developed by users can also be integrated without modification, enabling full leverage of the SX-9's ultra high-speed computing performance.

"The SX-9 has been developed to meet the need for ultra-fast simulations of advanced and complex large-capacity scientific computing," Yoshikazu Maruyama, senior vice president of NEC Corp., said in a statement.
NEC's supercomputers are used in fields including advanced weather forecasting, aerospace and in large research institutes and companies. The SX-9 will first go on display at a supercomputing convention next month in Reno, Nevada.

Specifications

                              Multi-node              Single-node
                              SX-9                    SX-9/A                  SX-9/B
Nodes                         2 - 512 nodes*3         1 node                  1 node
Central Processing Unit (CPU)
  Number of CPUs              32 - 8,192              8 - 16                  4 - 8
  Logical Peak Performance*1  3.8T - 969.9TFLOPS      947.2G - 1,894.4GFLOPS  473.6G - 947.2GFLOPS
  Peak Vector Performance*2   3.3T - 838.9TFLOPS      819.2G - 1,638.4GFLOPS  409.6G - 819.2GFLOPS
Main Memory Unit (MMU)
  Memory Architecture         Shared and distributed  Shared memory           Shared memory
  Capacity                    1T - 512TB              512GB, 1TB              256GB, 512GB
  Peak Data Transfer Rate     2048TB/s                4TB/s                   2TB/s
Internode Crossbar Switch (IXS)
  Peak Data Transfer Rate     128GB/s×2 bidirectional (per node)              -


(1) *TFLOPS:
one trillion floating point operations per second
(2) *GFLOPS:
one billion floating point operations per second
(3) *PFLOPS:
one quadrillion floating point operations per second
(4) Vector supercomputer:
A supercomputer with high-speed processors, called "vector processors," used for scientific and technical computation. Vector supercomputers deliver high performance in complex, large-scale computations, such as climate, aerospace and environmental simulations and fluid dynamics, by processing whole arrays with a single vector instruction.
(5) Scalar parallel supercomputer:
A supercomputer with multiple general-purpose processors suited to the simultaneous processing of multiple workloads, such as genomic analysis, or easily parallelized computations, such as particle computation. They deliver high performance by connecting many processors (of the kind also used for business applications) in parallel.

Sunday, October 28, 2007

What Have Betty Crocker and GridKa Got in Common? They Both Use an "... in a Box" Solution

Creating a cake became much easier in the late 40s when Betty Crocker released cake mix in a box.

Do you ever wish there was an equivalent for computing grids? Now there is, almost.

An approach known as “grid in a box” is making it possible to gather all the ingredients required to make grid computing more affordable and accessible for participating grid centers.

“The idea of ‘grid in a box’ is to put all needed grid services on one piece of hardware,” says Oliver Oberst, Forschungszentrum Karlsruhe. “Instead of having several machines working together to host the infrastructure of a grid site, there are several virtual machines working on one computer—the ‘box.’”

Traditionally, building a grid site with gLite—the middleware designed by Enabling Grids for E-sciencE and used predominantly in Europe—required multiple grid services to be installed, each on a different machine.
Continue reading at International Science Grid.....

Friday, October 26, 2007

VMware Fusion 1.1 RC for Intel Macs Released

If you are virtualizing your computing on a Mac, VMware is out to help you with a new release of VMware Fusion. The Fusion 1.1 RC is said to have fixed some of the problems Fusion had, but these features should certainly get your attention:
  • VMware Fusion 1.1 now includes English, French, German, and Japanese versions
  • Unity improvements include:
    • My Computer, My Documents, My Network Places, Control Panel, Run, and Search are now available in the Applications menu, Dock menu, and the Launch Applications window
    • Improved support for Windows Vista 32 and 64-bit editions
    • Improved Unity window dragging and resizing performance
  • Boot Camp improvements include:
    • Support for Microsoft Vista in a virtual machine
    • Improved support for preparing Boot Camp partitions
    • Automatically remount Boot Camp partition after Boot Camp virtual machine is shut down
  • Improved support for Mac OS X Leopard hosts
  • Improved 2D drawing performance, especially on Santa Rosa MacBook Pros
Download the Fusion 1.1 RC together with a 30-day trial key.

Thursday, October 25, 2007

Wubi Super Easy Ubuntu Installer for Windows


Wubi is a free Ubuntu installer for Windows users that brings you into the Linux world with a single click. Wubi allows you to install and uninstall Ubuntu like any other Windows application. If you have heard about Linux and Ubuntu, and wanted to try them but were afraid, this is for you.
The beauty is that if you find Ubuntu to your liking, you can make your machine dual-boot with another small utility.

Web-Based Virtual Machine Creator for VMware

VMWare's virtualization technology allows you to run other operating systems within your native OS, but VMWare Player doesn't provide an easy way to create the disk images to host your guest OS. Enter the online service Virtual Machine Creator.
Via Wired, from a Reddit post.

Monday, October 15, 2007

Hitachi Quadruples Current Hard Drives: To 4TB for Desktop and 1TB for Notebook Drives

2x Reduction in Recording-Head Size Shows Promise for 1TB Notebook and 4TB Desktop PCs in 2011

TOKYO, Oct. 15, 2007 -- Hitachi, Ltd. (NYSE: HIT / TSE: 6501) and Hitachi Global Storage Technologies (Hitachi GST), announced today they have developed the world's smallest read-head technology for hard disk drives, which is expected to quadruple current storage capacity limits to four terabytes (TB) on a desktop hard drive and one TB on a notebook hard drive.

Researchers at Hitachi have successfully reduced existing recording heads by more than a factor of two to achieve new heads in the 30-50 nanometer (nm) range, which is up to 2,000 times smaller than the width of an average human hair (approx. 70-100 microns). Called current perpendicular-to-the-plane giant magneto-resistive*1 (CPP-GMR) heads, Hitachi's new technology is expected to be implemented in shipping products in 2009 and reach its full potential in 2011.

Hitachi will present these achievements at the 8th Perpendicular Magnetic Recording Conference (PMRC 2007), to be held 15th -17th October 2007, at the Tokyo International Forum in Japan.

"Hitachi continues to invest in deep research for the advancement of hard disk drives as we believe there is no other technology capable of providing the hard drive's high-capacity, low-cost value for the foreseeable future," said Hiroaki Odawara, Research Director, Storage Technology Research Center, Central Research Laboratory, Hitachi, Ltd. "This is an achievement for consumers as much as it is for Hitachi. It allows Hitachi to fuel the growth of the ‘Terabyte Era’ of storage, which we started, and gives consumers virtually limitless ability for storing their digital content."

Hitachi believes CPP-GMR heads will enable hard disk drive (HDD) recording density of 500 gigabits per square inch (Gb/in2) to one terabit per square inch (Tb/in2), a quadrupling of today's highest areal densities. Earlier this year, Hitachi GST delivered the industry's first terabyte hard drive with 148 Gb/in2, while the highest areal density Hitachi GST products shipping today are in the 200 Gb/in2 range. These products use existing head technology, called TMR*2 (tunnel-magneto-resistive) heads. The recording head and media are the two key technologies controlling the miniaturization evolution and the exponential capacity-growth of the hard disk drive.
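The capacity claim can be roughly cross-checked from the density figures. Assuming capacity scales linearly with areal density at fixed platter count and size (my simplification, not Hitachi's statement), the first 1TB drive at 148 Gb/in2 projects into the claimed multi-terabyte range at CPP-GMR densities:

```python
# Rough projection: capacity scales linearly with areal density if the
# platter count and diameter stay fixed (a simplifying assumption).
current_capacity_tb = 1.0   # the industry's first 1TB drive
current_density = 148.0     # Gb/in^2 of that drive
cpp_gmr_density = 500.0     # low end of the projected CPP-GMR range

projected_tb = current_capacity_tb * cpp_gmr_density / current_density
print(round(projected_tb, 1))  # 3.4 TB; about 6.8 TB at the 1 Tb/in^2 high end
```

At the low end of the range this already lands near the 4TB desktop figure in the headline.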

Cutting Through the Noise - The Strongest Signal-to-Noise Ratio

The continued advancement of hard disk drives requires the ability to squeeze more and more, and thus smaller and smaller, data bits onto the recording media, necessitating the continued miniaturization of the recording heads that read those bits. However, as the head becomes smaller, electrical resistance increases, which in turn increases the noise output and compromises the head's ability to correctly read the data signal.
High signal output and low noise are what is desired in hard drive read operations; thus researchers try to achieve a high signal-to-noise (S/N) ratio when developing effective read-head technology. Using TMR head technology, researchers predict that accurate read operations could not be conducted with confidence as recording densities begin to surpass 500 Gb/in2.

The CPP-GMR device, compared to the TMR device, exhibits lower electrical resistance, resulting in lower electrical noise but also a smaller output signal. Therefore, issues such as producing a high output signal while keeping noise low to increase the S/N ratio needed to be resolved before the CPP-GMR technology became practical.

In response to this challenge, Hitachi, Ltd. and Hitachi GST have co-developed high-output technology and noise-reduction technology for the CPP-GMR head. A high electron-spin-scattering magnetic film material was used in the CPP-GMR layer to increase the signal output from the head, and new technology for damage-free fine patterning and noise suppression were developed. As a result, the signal-to-noise ratio, an important factor in determining the performance of a head, was drastically improved. For heads with track widths of 30nm to 50nm, optimal and industry-leading S/N ratios of 30 decibel (dB) and 40 dB, respectively, were recently achieved with the heads co-developed at Hitachi GST's San Jose Research Center and Hitachi, Ltd.'s Central Research Laboratory in Japan.
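For readers who want the decibel figures in ratio form: the standard definition of S/N ratio in dB for power quantities is 10·log10(Psignal/Pnoise). This conversion is textbook material, not something stated in the release, which also does not say whether its figures are power or amplitude ratios; assuming power ratios:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels, treating the inputs as powers."""
    return 10.0 * math.log10(signal_power / noise_power)

# The 30 dB figure quoted for 30 nm heads corresponds to a signal power
# 1,000 times the noise power; the 40 dB figure for 50 nm heads, 10,000x.
print(round(snr_db(1000.0, 1.0), 6))   # 30.0
print(round(snr_db(10000.0, 1.0), 6))  # 40.0
```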

Recording heads with 50 nm track-widths are expected to debut in commercial products in 2009, while those with 30 nm track-widths will be implemented in products in 2011. Current TMR heads, shipping in products today, have track-widths of 70 nm.

The Incredible Shrinking Head

The discovery of the GMR effect occurred in 1988, and that body of work was recognized just last week with a Nobel Prize for physics. Nearly two decades after its discovery, the effects of GMR technology are felt more strongly than ever with Hitachi's demonstration of the CPP-GMR head today.

In 1997, nine years after the initial discovery of GMR technology, IBM implemented the industry's first GMR heads in the Deskstar 16GXP. GMR heads allowed the HDD industry to continue its capacity growth and enabled the fastest growth period in history, when capacity doubled every year in the early 2000s. Today, although areal density growth has slowed, advancements to recording head technology, along with other HDD innovations, are enabling HDD capacity to double every two years.

In the past 51 years of the HDD industry, recording head technology has seen monumental decreases in size as areal density and storage capacity achieved dizzying heights. The first HDD recording head, called the inductive head, debuted in 1956 in the RAMAC - the very first hard drive - with a track width of 1/20th of an inch or 1.2 million nm. Today, the CPP-GMR head, with a track-width of about one-millionth of an inch or 30 nm, represents a size reduction by a factor of 40,000 over the inductive head used in the RAMAC in 1956.
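The factor of 40,000 checks out with a line of arithmetic (mine, using 25.4 mm to the inch): 1/20th of an inch is about 1.27 million nm, and dividing by the 30 nm CPP-GMR track width gives roughly 42,000, which the release rounds down.

```python
# Size reduction from the RAMAC's inductive head to the CPP-GMR head.
NM_PER_INCH = 25_400_000            # 1 inch = 25.4 mm = 25.4 million nm
ramac_track_nm = NM_PER_INCH / 20   # RAMAC head: 1/20th of an inch wide
cpp_gmr_track_nm = 30               # CPP-GMR head track width

factor = ramac_track_nm / cpp_gmr_track_nm
print(f"{ramac_track_nm:,.0f}")  # 1,270,000 nm ("1.2 million nm")
print(f"{factor:,.0f}")          # 42,333, quoted as "a factor of 40,000"
```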

Notes

*1
CPP-GMR: As an alternative to existing TMR heads, CPP-GMR head technology has a lower electrical resistance level, due to its reliance on metallic rather than tunneling conductance, and is thus suited to high-speed operation and scaling to small dimensions.
*2
TMR head: Tunnel Magneto-Resistance head
A tunnel magneto-resistance device is composed of a three-layer structure: an insulating film sandwiched between ferromagnetic films. The change in electrical resistance that occurs when the magnetization directions of the upper and lower ferromagnetic layers change (parallel or anti-parallel) is known as the TMR effect, and the ratio of electrical resistance between the two states is known as the magneto-resistance ratio.
Hitachi News release.

Friday, October 12, 2007

Patent Infringement Lawsuit Against Linux (Are Red Hat and Novell Linux?)

Well, it was bound to happen: Linux vendors Red Hat and Novell have been sued for patent infringement. Groklaw is reporting that on Tuesday, the two companies were sued by IP Innovation LLC and Technology Licensing Corp. for violating three patents having to do with windowing user interfaces.

The lawsuit represents the first test of what happens when open source collides with patents, and it's interesting for a couple reasons. First, notice that all the other Linux vendors are missing from the defendants list, most notably IBM. That could be because IBM has already licensed the patents in a different context. (In June, Apple settled a patent infringement lawsuit with the same plaintiffs over at least one of the patents involved here.)

Stolen from Frank Hayes' blog.

UNIVA UD Unveils Blueprint for the World's First Industrial-Strength Open Source Cluster and Grid Product Suite

Taking aim squarely at vendors who offer only costly, confusing and limiting proprietary grid and cluster products, Tuecke and Venkat will lead a discussion about key issues raised by customers opting for open source implementations.

"Increasingly, businesses are embracing open source software models in many areas, but until now there has been no complete, integrated open source stack for cluster and grid," says Tuecke, Univa UD's chief technology officer. "Given Univa's open-source pedigree and United Devices' commercially proven technology, we believe that gap can now be filled, and we expect the resulting merged solution set will drive many more cluster and grid operators to open source implementations. There is no longer any reason to tie up cluster and grid systems with costly and limiting proprietary software."

Univa and United Devices, pioneers and leaders in cluster and grid technology, announced the merger of the two companies last month, becoming Univa UD. At that time, the company promised it would outline an open source industrial strength product roadmap at the Open Grid Forum.

Based on a free, downloadable open source cluster management product, Univa UD has said its end-to-end High Performance Computing (HPC) open source product suite will also include a fully supported pro version with rich functionality and an enterprise-class grid solution growing out of UD's award-winning Grid MP technology.

"Our vision is to emulate and improve on the open source models of software companies who have gone before us," said Tuecke, "companies like Red Hat and SugarCRM."

Tuecke, along with Dr. Ian Foster and Dr. Carl Kesselman, founded Univa in 2004, having founded the Globus Project almost a decade earlier. The three are known as the fathers of grid computing for their pioneering efforts in developing open grid software and standards.

Prior to founding Univa, serving as its initial CEO and subsequently becoming its CTO, Tuecke was responsible for managing the architecture, design, and development of Globus software, as well as the Grid and Web Services standards that underlie it such as OGSA and WSRF.

In 2002, Tuecke received Technology Review magazine's TR100 award, which recognized him as one of the world's top 100 young innovators. In 2003, he was named (along with Foster and Kesselman) by InfoWorld magazine as one of its Top 10 Technology Innovators of the year.

"There continues to be tremendous growth in the cluster market in terms of revenues and number of units," said Venkat, a co-founder of United Devices in 1999. "The open source grid and HPC expertise from Univa and the commercial technology and experience from United Devices put Univa UD in a unique position to serve this market. End-to-end, we can now offer the world's best-of-breed open source technologies backed with commercially proven solutions and world-class services and support."

Univa UD's session at OGF21 will be 1:30 p.m. to 3 p.m., Tuesday, Oct. 16, in the Portland Room at the Grand Hyatt Seattle. Univa UD said details of its new product roadmap will also be available at the company's exhibit during the conference.

Thursday, October 04, 2007

We Need another Sputnik

Although I was not born yet, the launch of Sputnik had a large impact on my life. Being in a family of scientists going back a few generations makes you see and think differently. Sputnik floored the accelerator for my family: they were so busy they forgot to make me, and I was born seven years after my brother! My big event was the moon landing; even though I could barely understand it, I was feeling sorry for Michael Collins and his son (why? you tell me!). So here is, almost exactly, my take on the Sputnik affair!

8. Speaking of General Medaris, in the final chapter of your book, “Sputnik’s Legacy,” you quote him: “If I could get ahold of that thing, I would kiss it on both cheeks.” What did he mean?


Sputnik galvanized America. We put billions of dollars into education. We began producing 1,500 PhDs a week. Teachers were going to special summer institutes, Middlebury to study language, MIT to study technology. It brought the middle classes back into education, which was drifting toward elitism. It showed us at our best.

We get Dr. Spock, Dr. Seuss. Rote learning starts to be abandoned. Dick and Jane are skewered on a plate. There’s less Latin and Greek, more Spanish and Russian.

Betty Friedan is working on a book about Smith College, and she said Sputnik got her thinking. Stephen King is in a theater, watching a movie called "Earth vs. the Flying Saucers," about Martians coming down to Malibu and taking women back to Mars. They stopped the movie in the middle to announce Sputnik. That was the beginning of his dread. The world had been reality versus fantasy, and now the two had come together.

Sputnik changed a lot of people.

9. What you’re saying flies in the face of the people who say that too much money has been spent on the space program, that in more recent times it could have been used for other things...

It got us all the things we now rely on, laptop computers, cellphones. Countries that don’t have the copper to string phonelines? Cellphones. The space race has given the world a whole boost at every level. The space guys were the first guys to learn to do biometric readings of people’s bodies. There was a large technology transfer.

At its highpoint, it was four percent of our economy, now it’s only seven-tenths of one percent. And there’s an $8 billion positive balance of payments in the aerospace industry, meaning you take all the money coming into this country—other countries paying Boeing to build their planes, hiring American pilots, for instance—and it’s more than other segments of industry.

Sputnik resulted in the creation of DARPA, the Defense Advance Research Projects Agency. That was hundreds of millions of dollars into a think tank that was supposed to come up with those things that would prevent us from being surprised. There were these huge computers bulk processing, at MIT, at Cal Tech. And these huge computers could talk to each other.

When the government was finished with the ARPA net, they said, Let’s give it to the world. Think what would have happened if they had decided to auction it off. So it’s because of Sputnik that we’ve got the internet.

10. We were talking earlier in the conversation about how because of Sputnik we had more scientists, more engineers, better education. Somehow it feels as if today we’ve gone back to pre-Sputnik days. Now you hear about how we need more scientists, more engineers, better education because that sector of our society all seems to be going overseas...

Well, that’s the argument everybody’s making, we may need another Sputnik moment, something to galvanize us and get us going again. Katrina could have been that moment, but it wasn’t. I thought that bridge collapse in Minneapolis might have been it, that we might have recognized we’re letting the country deteriorate while we sit in corners with our ipods.

The above are three of the 10 questions and answers published by CBS News after interviewing Paul Dickson, who wrote Sputnik: The Shock of the Century. Published in 2001, it has just been re-released; he is also the co-writer of a new documentary, Sputnik Mania.
Visit CBS and read the rest. I am very sure we need another Sputnik. I don't want to be another beatnik.