Monday, November 27, 2006

Digipede tells you how to Grid Enable your Application

My blogging pal Dan's company, Digipede, is hosting a series of webcasts that show how a few simple changes to your application development can leave your application grid-enabled.
I have signed myself up for some of the webcasts. I think it is important to learn about another face of grid computing: the .NET side of it! Dan is a fan of .NET.
While you are there, check out their case studies. They also have an evaluation version of their suite. Follow the links below.

Links;
Digipede Webcasts

Digipede case studies
Digipede evaluation request form

Friday, November 24, 2006

How the Search engine grids fared this year.

Since most of the search engines run grids of Linux servers (at least Google does), it may be worthwhile to look at the October 2006 data for the top U.S. search providers. The report was provided by Nielsen//NetRatings.
It shows that Google is still the leader, maybe because of those Linux clusters, and that might be the reason Microsoft's Steve Ballmer is jumping into bed with Novell. Hoping to sue Google? Maybe not; he does not have enough hair to pull.
Here are the figures;
Example: An estimated 3.0 billion search queries were conducted at Google Search, representing 50 percent of all search queries conducted during the given time period.

Top search engines in September 2006
Search engine___________Searches (000)___Growth___Share
Google__________________3,022,326_________23%_____49.6%
Yahoo!__________________1,456,269_________30%_____23.9%
MSN/Windows Live__________538,594_________-8%______8.8%
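As a quick sanity check on the table, the shares can be reproduced from the raw counts. The sketch below assumes the Searches column is reported in thousands (consistent with the ~3.0 billion Google figure) and derives the implied market total from Google's 49.6 percent share; this is back-of-the-envelope arithmetic, not Nielsen's methodology.

```python
# Figures from the table above; counts assumed to be in thousands.
searches = {
    "Google": 3_022_326,
    "Yahoo!": 1_456_269,
    "MSN/Windows Live": 538_594,
}

# Google's 49.6% share implies the total search volume for the period.
total = searches["Google"] / 0.496   # ~6.09 billion searches

# Recompute every engine's share from that total.
shares = {name: round(100 * n / total, 1) for name, n in searches.items()}
# Yahoo! and MSN come out at 23.9% and 8.8%, matching the table.
```

The internal consistency of the three shares is a decent sign the figures were transcribed correctly.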

Find more search engine performance figures in the full report.
Links;
Nielsen//NetRatings News

Tuesday, November 21, 2006

Lustre, the cluster file system

Lustre is a scalable, secure, robust, highly-available cluster file system. It is designed, developed and maintained by Cluster File Systems, Inc.

The central goal is the development of a next-generation cluster file system capable of serving clusters with tens of thousands of nodes and petabytes of storage, moving hundreds of GB/sec, with state-of-the-art security and an easy-to-use management infrastructure.

Lustre is incorporated into many of today's largest Linux clusters in the world, including offerings from CFS's partners, who ship Lustre as a core component of their cluster products (HP StorageWorks SFS, and the Cray XT3 and XD1 supercomputers). Users have also demonstrated that Lustre scales well in both directions, running in production on clusters as small as 4 nodes and as large as 15,000 nodes.

The latest version of Lustre is always available from Cluster File Systems, Inc. Public Open Source releases of Lustre are made under the GNU General Public License.

Links;
Lustre the cluster file system
Lustre documentation wiki

Friday, November 17, 2006

Grid Technology related Acronyms

I was browsing through a well-known grid technology site, International Science Grid, when I noticed the link of the week. It pointed to GridPP's web site, which hosts the Grid Acronym Soup, a guide to some of the acronyms used in the grid computing community. If you can't find your acronym in the list, links to other projects' acronym compilations and glossaries are also included in the Soup.
Sometimes it is easy to forget that some people may have a hard time understanding these acronyms; I myself only know a few of them ;).
So I decided to publish a link to GridPP.

Links;
International Science Grid

Grid Acronym Soup

Thursday, November 16, 2006

IBM pushes Linux and Grid, eases deployment

On Wednesday, IBM introduced its Implementation Services for Linux and Grid and Grow Express Implementation Service, both of which expand on existing IBM offerings by building on lessons learned from individual projects to create a standard way to deploy computing grids and Linux. The services use an automated, Web-based tool to streamline projects, cutting costs and improving efficiencies.
IBM says the services, anchored by the Web-based tool, can reduce Linux implementation times by nearly a third.

“The tool incorporates industry-application intelligence and best-practice knowledge from thousands of client engagements to ensure consistent implementation around the world,” IBM said.

For grid deployments, IBM is adding the Web-based tool to simplify further the Grid and Grow Express package it introduced last spring.

“The service product includes hardware, software and services, and can be incorporated into current storage and server infrastructure,” IBM said.

The Linux and grid implementation services are available now from IBM Global Services. Pricing was not released.

Links;
IBM Grid and Grow
Linuxworld

Tuesday, November 14, 2006

Java GPLed, when is SUN going to stop?


It is funny to remember that one of my first postings that got picked up on Slashdot was about Sun revoking the SCSL OEM-like license given to the FreeBSD Foundation. You can still read it at Slashdot, even though it is almost a year old. Many Linux distributions could not ship Java because Sun's license prohibited it.
I have not gone through the complete saga yet, and though I have read many articles, I think I will rely on Sun itself.
Here is what I found;

Another Freedom for Java Technology

Sun started a revolution with Java technology 10 years ago. With a free runtime, an open specification, and a platform-independent promise of compatibility, Java technology became a gold standard in embedded devices, mobile phones, on the desktop and within the enterprise. Now, in 2006, Sun is open sourcing its implementations of Java technology as Free/Libre software. More

Live Webcast
Join Sun's CEO Jonathan Schwartz and Executive Vice President of Software Rich Green for the launch event.

Get Involved
Visit the three new open-source Java communities that Sun is seeding and download the code: OpenJDK, Mobile & Embedded, and the GlassFish community.

Duke, the mascot of Java technology, is open sourced too.

Monday, November 13, 2006

100 Gigabit Ethernet? Yes, they have it at SC06

A first-ever demonstration of 100 Gigabit Ethernet (100 GbE) technology by a team of industry partners, including Finisar, Infinera, Internet2, Level 3 Communications, and University of California at Santa Cruz, shows that 100 GbE technology is viable and capable of implementation in existing optical networks with 10 Gigabit/second (Gb/s) wavelengths.
The system successfully transmitted a 100 GbE signal from Tampa, Florida to Houston, Texas, and back again, over ten 10 Gb/s channels through the Level 3 network. This is the first time a 100 GbE signal has been successfully transmitted through a live production network. The 100 GbE system will be on display from November 14th to the 16th at the Infinera booth (Booth no. 1157) at the SC06 International Conference in Tampa. The system will be transmitting a 100 GbE signal to the Internet2 booth (Booth no. 1451) during the show.
"This new approach to providing 100 Gig Ethernet service over long distances enables LAN Ethernet protocols in the WAN environment," said Jack Waters, CTO of Level 3. "Compared to other methods that have been demonstrated, this is a practical, economical solution that operates over the wide area using existing DWDM technologies. We're pleased to have been involved with developing and testing this solution, and will be watching closely as it is commercialized."
The demonstration encodes a 100 GbE signal into ten 10 Gb/s streams using an Infinera-proposed specification for 100 GbE across multiple links. A single Xilinx FPGA implements this packet numbering scheme and electrically transmits all ten signals to ten of Finisar's 10 Gb/s XFP optical transceivers which in turn convert the signals to optics. These signals are then transmitted to an Infinera DTN DWDM system. For the long-distance demonstration, conducted last week, the 100 GbE signal was then handed off to Infinera systems within the Level 3 network where it was transmitted across the Level 3 network to Houston and back. This pre-standard specification for 100 GbE guarantees the ordering of the packets and quality of the signal across 10 Gb/s wavelengths and demonstrates that it is possible for carriers to offer 100 GbE services across today's 10 Gb/s infrastructure.
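The key trick in the demo is that packet numbering, which lets ten independent 10 Gb/s lanes carry one logical 100 GbE stream and still deliver packets in order. The toy sketch below is my own illustration, not Infinera's actual specification: it stripes a packet stream round-robin across ten lanes with a sequence tag, then reassembles it correctly even when the lanes deliver out of order.

```python
import random

NUM_LANES = 10  # ten 10 Gb/s wavelengths carrying one logical 100 GbE stream

def stripe(packets):
    """Tag each packet with a global sequence number and deal the
    tagged frames round-robin across the lanes."""
    lanes = [[] for _ in range(NUM_LANES)]
    for seq, pkt in enumerate(packets):
        lanes[seq % NUM_LANES].append((seq, pkt))
    return lanes

def reassemble(lanes):
    """Merge the lanes back into one ordered stream.  Shuffling first
    simulates inter-lane skew: arrival order does not matter because
    the sequence tags restore the original ordering."""
    tagged = [frame for lane in lanes for frame in lane]
    random.shuffle(tagged)
    return [pkt for seq, pkt in sorted(tagged)]
```

Any payload survives the round trip: `reassemble(stripe(pkts))` equals `pkts`, which is the ordering guarantee the pre-standard specification provides across the ten wavelengths.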
Links;
More info on 100GbE at SC06


Sunday, November 12, 2006

How is your Bro? I meant the Bro cluster, the NIDS.


Lawrence Berkeley National Laboratory has developed a comprehensive approach to cyber security that allows the open exchange of scientific knowledge while simultaneously protecting critical resources from attacks -- the Bro intrusion detection system. And now, Bro is Big Bro in the form of a scalable cluster which will demonstrate its effectiveness on a 10 gigabit network connection during the SC06 conference to be held Nov. 11-17 in Tampa. The demo will be featured in LBNL's booth, as I have mentioned before.
But what is Bro? Bro is an open-source, Unix-based Network Intrusion Detection System (NIDS) that passively monitors network traffic and looks for suspicious activity. Bro detects intrusions by first parsing network traffic to extract its application-level semantics and then executing event-oriented analyzers that compare the activity with patterns deemed troublesome. Its analysis includes detection of specific attacks (including those defined by signatures, but also those defined in terms of events) and unusual activities (e.g., certain hosts connecting to certain services, or patterns of failed connection attempts).
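To make the "patterns of failed connection attempts" idea concrete, here is a minimal event-oriented sketch of the kind of analysis Bro's policy scripts perform. It is my own toy illustration in Python, not Bro's actual scripting language or a shipped policy: it consumes connection events and flags a source once it has been rejected by a threshold number of distinct destinations.

```python
from collections import defaultdict

def detect_scanners(events, threshold=20):
    """events: iterable of (src, dst, port, outcome) tuples, where
    outcome is 'established' or 'rejected'.  Returns sources rejected
    by at least `threshold` distinct (dst, port) pairs -- the classic
    footprint of an address/port scan."""
    failures = defaultdict(set)  # src -> set of (dst, port) that rejected it
    alerts = []
    for src, dst, port, outcome in events:
        if outcome == "rejected":
            failures[src].add((dst, port))
            if len(failures[src]) == threshold:  # fire exactly once per src
                alerts.append(src)
    return alerts
```

The point of the event-oriented style is that the detector keys on behavior over time rather than on a byte pattern in a single packet, which is exactly what signature-only systems miss.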

So who wants Bro?
Bro is intended for use by sites requiring flexible, highly customizable intrusion detection. It is important to understand that Bro has been developed primarily as a research platform for intrusion detection and traffic analysis. It is not intended for someone seeking an "out of the box" solution. Bro is designed for use by Unix experts who place a premium on the ability to extend an intrusion detection system with new functionality as needed, which can greatly aid with tracking evolving attacker techniques as well as inevitable changes to a site's environment and security policy requirements.

Bro has a lot of features, but the most striking one for me is:
Snort Compatibility Support
The Bro distribution includes a tool, snort2bro, which converts Snort signatures into Bro signatures. Along with translating the format of the signatures, snort2bro also incorporates a large number of enhancements to the standard set of Snort signatures to take advantage of Bro's additional contextual power and reduce false positives.
This is what led me to try Bro.
I guess you need to visit the Bro Intrusion Detection System site and learn more. Also, Bro is open source, so you can download it and try it. It runs on commodity PCs, and what better way to find out about the software than running it yourself.

Links;
Bro's home, not yours.



Saturday, November 11, 2006

ClusterBuilder.org adds Clustering Encyclopedia

From Cluster resources news;
Cluster Resources, Inc. and LinuxHPC.org announced the release of ClusterBuilder.org version 1.3, featuring the new Clustering Encyclopedia – a specialized reference source of high-performance computing (HPC) technologies and products.

ClusterBuilder.org is a Web site created through the combined efforts of Cluster Resources and LinuxHPC.org , designed to help cluster administrators, technical evaluators and purchase evaluators build out better cluster, grid and utility-based computing environments.

The new Clustering Encyclopedia adds more than 130 new articles and 160 pages of cluster related information, demonstrating ClusterBuilder.org's continued efforts to provide an HPC-centric research location and information portal for HPC technologies.

In addition to the encyclopedia, the new version of ClusterBuilder.org also contains a hyper-linked index, which acts as a portal to quickly and easily guide users to the specific content they seek.

Links;
ClusterBuilder.org
LinuxHPC.org
Cluster Resources

Cleversafe, your OSS data Grid Solution

The next step in your grid project or grid research project is storage. At least mine is ;). I have been looking for a better solution for my storage grid requirements, and the first earmark (I learned that word during the elections) is that it has to be open source. It is not easy to find a mature, well-functioning solution.
Then I came across the Cleversafe Project. I have been busy with it ever since.
Currently Cleversafe has quite a few projects under its arm. The most notable are;

Cleversafe Dispersed Storage™
DSGrid File System™
Cleversafe Desktop™

Cleversafe Dispersed Storage™
The Dispersed Storage Project is the central point of development and idea exchange for developers around the world to contribute to innovative storage solutions leveraging dispersed storage methodology.

The project uses information dispersal algorithms (IDAs) to separate data into 11 unrecognizable DataSlices™ and distribute them, via secure Internet connections, to 11 storage locations throughout the world, creating a storage grid. With dispersed storage, transmission and storage of data is inherently private and secure. No single entire copy of the data exists in one location, and only 6 of the 11 nodes need to be available in order to perfectly retrieve the data.

Data on the grid remains private and secure in the face of natural catastrophes, or failures of hardware, connection, facility, or IT management. Moreover, the individual data slices do not carry enough information for an unauthorized viewer to determine the original content.
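The 6-of-11 property is the same threshold idea behind secret sharing and erasure codes. The sketch below is a toy Shamir-style threshold scheme over GF(257), not Cleversafe's actual IDA (a real IDA is far more space-efficient, with each slice roughly 1/6 of the data size rather than a full-size share), but it demonstrates both claims: any 6 slices reconstruct the data perfectly, and fewer than 6 reveal nothing about it.

```python
import random

P = 257       # a prime just above the byte range, so every byte fits in the field
N, K = 11, 6  # 11 slices written, any 6 sufficient to recover

def split_byte(b):
    """Hide one byte as the constant term of a random degree-(K-1)
    polynomial and evaluate it at x = 1..N to produce N shares."""
    coeffs = [b] + [random.randrange(P) for _ in range(K - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, N + 1)]

def recover_byte(shares):
    """Lagrange-interpolate K or more shares at x = 0 to get the byte back."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def disperse(data):
    """Turn a byte string into N slices, one per storage node."""
    slices = [[] for _ in range(N)]
    for b in data:
        for idx, share in enumerate(split_byte(b)):
            slices[idx].append(share)
    return slices

def reassemble(any_k_slices):
    """Rebuild the data from any K of the N slices."""
    return bytes(recover_byte([s[i] for s in any_k_slices])
                 for i in range(len(any_k_slices[0])))
```

Drop any five slices and the remaining six still reassemble the original bytes, which is why the grid shrugs off the loss of whole facilities.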

DSGrid File System™

The dsgfs project will enable a dispersed storage grid, such as the Cleversafe Research Storage Grid, a freely available, multi-terabyte globally dispersed storage grid, to appear as an ultra-reliable, durable hard drive to a Linux application. Using the dsgfs, users will be able to seamlessly store data on the Cleversafe Research Storage Grid, on a commercial grid or on a grid they build themselves. dsgfs initially will support various versions of Linux, including Debian, Fedora and CentOS.

The file system is facilitated by Cleversafe Dispersed Storage technology, which uses Information Dispersal Algorithms (IDAs™) to separate data into 11 unrecognizable DataSlices™ and distribute them, via secure Internet connections, to 11 storage locations throughout the world.

Cleversafe Desktop™

The Desktop Client, unlike the CLI, supports the concept of Storage Sets. Storage Sets allow files located in different locations on a computer to be grouped together as a single set. By using Storage Sets combined with the backup scheduling features of Cleversafe Desktop, ordinary users can backup individual machines at scheduled intervals, or on-demand--without the use of complex command line operations.

Cleversafe Desktop is written from the ground-up for multi-platform support. Initial releases will support Windows XP client and server flavors with various versions of Linux, including Debian, Fedora, RedHat, FreeBSD, Ubuntu and CentOS and Mac OS X ports to follow. This makes storage grids more accessible and attractive to a wider group of non-technical users and developers, including fans of Windows, Linux and embedded Linux solutions.

I really like the last one, the desktop project. Because I need the services provided by it as my clients are multi platform.
“This new project makes it much easier for a wider range of developers to use dispersed storage and explain the power of it to business managers,” said Manish Motwani, Cleversafe Desktop project lead. “It puts a presentable face on the grid, which is quite complex and technical behind-the-scenes, and adds point-and-click simplicity to better organize files for easier data retrieval. It’s also cross-platform savvy to ensure there are as few barriers as possible to using open source dispersed storage grids.”
Go explore your grid storage needs.
Links;
Cleversafe.org
Cleversafe wiki

Tuesday, November 07, 2006

Alchemi v.1.0.6 (Developer release) has been released



I almost missed this announcement since I am still battling with version 1.0.5, and now I am in the process of upgrading to 1.0.6. If you wonder what Alchemi is;
Alchemi is a software framework that allows you to painlessly aggregate the computing power of networked machines into a virtual supercomputer (computational grid) and to develop applications to run on enterprise grids.
Version 1.0.6 brings the following major changes since 1.0.5: LUA (Least-privileged User Account) operation for both Manager and Executor, in normal and service mode, and an option for executing users' GThreads in a secure sandbox (experimental).
Before downloading and running the new version, please note that this is a developer release intended for enthusiasts and early adopters, as the documentation is not up to date.
Here are the release notes;
This release contains a few small but important new features.

- The Manager and Executor are now designed to run under least-privileged user accounts by default. In service mode, they run as 'LocalService', which is a limited-privilege user account.
- The Manager and Executor (in both normal and service mode) will now read/write config files from the user's AppData directory.
In Windows XP, this would be :
C:\Documents and Settings\<username>\Application Data\Alchemi\Manager
or

C:\Documents and Settings\<username>\Application Data\Alchemi\Executor
- The logs are located in the user's Temp directory which would be (in Windows XP),

C:\Documents and Settings\<username>\Local Settings\Temp\Alchemi\Manager\logs

or

C:\Documents and Settings\<username>\Local Settings\Temp\Alchemi\Executor\logs

- These changes mean that an administrator or user with admin privileges can install Alchemi and start up the services, while any user can run Alchemi and/or clients without needing admin rights.
- Added Sandboxed execution for optionally running user's GThreads under low privileges. (These options are not all exposed through the GUI / API yet)

Links;

Alchemi at sourceforge

Alchemi at University of Melbourne



What not to miss at SC06. Berkeley Shines


Lawrence Berkeley National Laboratory, a U.S. Department of Energy laboratory, will share its leadership and expertise in the fields of supercomputing, grid computing and cluster computing via talks, technical papers and demonstrations at the SC06 conference, to be held Nov. 11-17 in Tampa, Fla.
So if you are planning to be there, please take note of what LBNL is doing at Booth 1812. I am sure it will grab your attention and keep your grid computing or supercomputing mind entertained.

Demonstrations and Talks presented by Lawrence Berkeley National Laboratory;

Berkeley Lab, located in booth 1812, will present demonstrations of a number of tools and techniques developed to advance scientific computing and networking. Booth demonstrations will include the following:
* The Bro Cluster for Intrusion Detection on a 10 Gig Network

* Using FastBit for High-Performance Visual Analysis of Numerical and Text Data: Mining the Enron Email Archive

* Python Tools for Automatically Wrapping Legacy Codes as Grid Services

* Tool for Validating Compatibility and Interoperability of Storage Resource Managers (SRMs) for Heterogeneous Storage Systems

* ACTS Collection User Support Clinic

* Using VisIt to Visualize and Analyze AMR Data of Turbulent Reactive Chemistry Simulations

* Plasma Wakefield Acceleration Visualization

* High Performance Visualization using an 8-socket, 16-core Opteron Machine

Talks in the LBNL booth will cover three Scientific Discovery through Advanced Computing (SciDAC) projects led by Berkeley Lab, the new Cray XT4 being installed at NERSC, ESnet's new network partnership with Internet2, and supernova research at NERSC. Here's the schedule:

Tuesday, Nov. 14

* 11 a.m.: "Scientific Data Management: Essential Technology for Data-Intensive Science," Arie Shoshani, Scientific Data Management, LBNL

* 2 p.m.: "NERSC's Move Toward Petascale Computing with the Cray XT Architecture," William T. Kramer, NERSC/LBNL

* 3 p.m.: "Discovery and Destabilization: Experiments in Stellar Explosions at NERSC," F. Douglas Swesty and Eric S. Myra, Dept. of Physics & Astronomy, State University of New York at Stony Brook

Wednesday, Nov. 15

* 11 a.m.: "The SciDAC2 Visualization and Analytics Center for Enabling Technologies: Overview and Objectives," Wes Bethel, Visualization, LBNL

* 2 p.m.: "Next Generation Optical Infrastructure for the U.S. Research and Education Community," William E. Johnston, ESnet, LBNL

* 3 p.m.: "Introducing the SciDAC Outreach Center," Jonathan Carter, NERSC User Services, LBNL.

Technical Program Presentations

Berkeley Lab is also well represented in the SC06 technical program, with LBNL staff presenting research in technical paper, tutorial and poster sessions, invited talks, workshops and a Birds-of-a-Feather session. Here is a list of presentations by LBNL staff:

* "25 Years of Accelerator Modeling," Masterworks presentation, Robert Ryne, Accelerator and Fusion Research Division

* "ESnet," Education Program plenary talk, Bill Johnston, Computational Research Division

* "Detecting Distributed Scans Using High-Performance Query-Driven Visualization," technical paper, Kurt Stockinger, E. Wes Bethel, Scott Campbell, Eli Dart, and Kesheng Wu, Computational Research Division

* "Optimized Collectives for PGAS Languages with One-Sided Communication," poster, Dan Bonachea, Paul Hargrove, Rajesh Nishtala, Michael Welcome, Katherine Yelick, Computational Research Division

* "Computing Protection in Open HPC Environments," tutorial, Stephen Q. Lau, Scott Campbell, William T. Kramer, Brian L. Tierney, NERSC Division

* "The HPC Challenge (HPCC) Benchmark Suite," tutorial, David Bailey, co-presenter, Computational Research Division

* "Best Practice in HPC Procurements," workshop, Bill Kramer, NERSC Division

* "TOP500 Supercomputers," Birds of a Feather, Erich Strohmaier, Computational Research Division

Additionally, Zhengji Zhao, Lin-Wang Wang, Juan Meza, Andrew Canning and Osni Marques of LBNL's Computational Research Division will give presentations during the Second IEEE/ACM International Workshop on High Performance Computing for Nano-science and Technology (HPCNano06) to be held in conjunction with SC06.

Links;
SC06

Saturday, November 04, 2006

Do you like Ubuntu? Now you can get the really free version of it!

The Free Software Foundation (FSF) has announced the release of gNewSense 1.0, an Ubuntu derivative that promotes software "freedom" by excluding proprietary components. Created by developers Brian Brazil and Paul O'Malley, the project is sponsored by the FSF to provide users with a robust desktop distribution that adheres to the organization's strict ideological standards and caters to users that prefer to avoid pragmatic compromises.
In the official 1.0 release of gNewSense, proprietary firmware and fonts have been removed, and access to the non-free Ubuntu repositories has been eliminated. The distribution also includes completely new artwork and includes developer-oriented packages like Emacs and the GCC compiler in the default installation. The developers have also elected to eschew integration with Launchpad, a proprietary development management tool used by Ubuntu.
Not everyone seems to have the enthusiasm that the FSF shows, especially the Ubuntu community. But I think it is a good move. Ubuntu is a good desktop distribution, and yet I have not installed it (I downloaded it to check out the features, and I liked them) because of the non-free software integration. The same goes for Freespire/Linspire, or Lindows as it was known before the company caved in to Microsoft and changed the name. The problem is that they carry non-free software. Granted, this is done to make the lives of average Linux users easier: you can load these distributions on any PC or notebook without much hassle, less hassle than XP ;). Anyway, I don't want this dirty patent-filled software infecting Linux, giving the likes of M$ and SCO an opening to come charging when they feel challenged.
I admire the efforts of Ubuntu and Freespire communities and they do serve certain market segment.
But we do need organizations like FSF to keep checks and balances of OSS and provide products like gNewSense.
Now I have a ubuntu distribution that I like. gNewSense!
I got this news first at ARS Technica.

Links;
gNewSense
FSF
Ubuntu
ARS Technica

Friday, November 03, 2006

Grid technology book for Savvy managers

There are not many books about grid technology, since it is fairly new in the technology arena. Quite a few books have been published on grid technology, but most of them are for IT folks or for developers. The catch is that all these IT personnel and developers have to prove the technology to their managers. In order to do so, one has to educate the managers, and I am sure they are not in the business of digging into grid technology in depth.
So what do the managers do? Read the book!
Pawel Plaszczak and Rich Wellner, Jr. from GridwiseTech, one of my favorite grid resources, have published an excellent book.
Grid Computing: The Savvy Manager's Guide covers what is needed to educate a manager on grid technology.
This non-technical book on technical matters answers key questions on grid computing in business terms.

* What really are grids? What is Grid technology?
* What are the business benefits of Grid-enabling the infrastructure?
* Why should I, as a savvy businessperson, be interested in grids?
* Should my company “plug in”?
* How do I get started? How to plan the move to Grid paradigm?
The book, due to the nature of the technology, has an online companion at Savvygrid. It also carries other information such as errata, reviews, the TOC, a look inside, and a discussion group. I enjoyed the Savvygrid site; it is a good introduction to the book, and the reviews are a must-read.

Links;
Savvygrid Book online
Gridwisetech site

What happens when GNU meets Cluster?

You get Gluster, a GNU cluster distribution aimed at commoditizing supercomputing and super storage. The core of Gluster provides a platform for developing clustering applications tailored for specific tasks such as HPC clustering, storage clustering, enterprise provisioning, and database clustering.
According to the developers, Gluster is designed from the ground up for massive scalability and performance.
So why another cluster? Don't we have enough cluster and HPC resources?
Gluster gives the following answers;
GlusterHPC is

1. Designed for massive scalability (16 nodes or 65,000 nodes makes no difference). Many of the building blocks of Gluster are already powering the world's top supercomputers.
2. Portability (across distributions and architectures).
3. Modular and extensible.
4. Built on Gluster Platform which extends clustering technology beyond HPC to database, storage, enterprise provisioning, etc.
5. Very easy to use with a clean dialog based front-end.
6. Backed by supercomputing experts.
7. Supports multi-casting and Infiniband.
8. Centralized remote screen control.
9. Very easy to add new features or customize.
10. Doesn't require a database server to store configuration information.
By the way, the project is still awaiting GNU approval, so it is under the category NOT-YET-GNU. I hope, and think, it will be approved.
If you are worried about whether this will run under your Linux distribution, fear not: it is distro independent.
Links;
Gluster.org
Gluster Docs
Gluster Downloads

Thursday, November 02, 2006

Fermi Research Alliance wins $1.575 billion contract.

The U.S. Department of Energy (DOE) has awarded a new $1.575 billion, five-year contract for management and operation of Fermi National Accelerator Laboratory (FNAL) to the Fermi Research Alliance, LLC (FRA), owned jointly by the University of Chicago (UChicago) and Universities Research Association, Inc. (URA).

“The quality of the new contract is a direct consequence of the competition process,” DOE Under Secretary for Science Dr. Raymond L. Orbach said today at a ceremony at Fermilab where he made the announcement of the contractor. “The partnership between UChicago and URA will enhance organizational depth and capability, promising improvements in performance and accountability."
The new contract contains a number of provisions intended to provide incentives for outstanding performance. The contract contains award term provisions under which the department may recognize outstanding performance through phased extensions of the contract for up to a total of 20 years, if the contractor meets specific performance levels established by DOE. The contract also contains incentive fee provisions under which FRA can earn a maximum total fee of up to $3.55 million a year for outstanding performance during the initial five-year term and the first five years of any award term extensions.

The initial contract term will be January 1, 2007, to December 31, 2011.


Cluster RFQ (request for quote) made easy

If you have ever wanted to get an RFQ (Request For Quote) from vendors, I am sure you have run around the web looking for vendors, filling out multiple forms, and then comparing the replies to find the best match for your buck. There is now a service that will help you with hunting down vendors and filling out multiple forms, though the comparison part you will still have to do yourself.
So who is providing this service, and how much will it cost?
LinuxHPC.org is providing this service, and it is totally free of charge. I think it is a really good deal if you are in the market for clusters or beginning to build a grid resource.
From the LinuxHPC.org website describing the form;

"It takes time to visit different cluster vendor websites to request a quote. The LinuxHPC Cluster RFQ form was created to assist the Linux cluster community by reaching many vendors through this one form. LinuxHPC will assist you from filling out the form to the product arriving on your doorstep.

The LinuxHPC RFQ is a FREE service! You pay nothing!"
The process is also simple;
"The above listed vendors have a proven track record of delivering products and services to the Linux cluster community.

1. Fill out the form
2. A representative from LinuxHPC will contact you by phone or email
3. The LinuxHPC representative will go over the details of your request, as well as assist you with questions about the RFQ process
4. LinuxHPC will send out the RFQ to the vendors you request...you are in full control of your RFQ
5. You will begin to see multiple responses to your RFQ
6. LinuxHPC will follow up with you and the vendors to ensure that you have received the desired response and experience from the vendors. In addition we will be available to assist you in any way we can during the RFQ process
So no more hundreds of form filling. Just a single form will reach hundreds of vendors."
This resource is not only for customers but also for vendors, who can reach multiple customers seeking cluster solutions. LinuxHPC is a well-respected HPC computing resource, visited by many seeking HPC solutions, ideas, and support.
So get that RFQ today, or provide a reply to one. Visit LinuxHPC.org.
Following is the list of vendors currently on the site;
Accelerated Servers
Ace Computers
Advanced Clustering Technologies
Agilysys
AOES Group (EU)
Appro
Aspen Systems
Atipa Technologies
Cepoint Networks
Cluster Computing Systems (EU)
Cluster Resources
ClusterVision (EU)
Compusys plc (UK)
HP
IBM
ION Computer Systems
Linux Labs (Asia)
Livewire Lifescience Solutions
Linux Networx
Major Linux Computing
Microway
New Tech Solutions, Inc
Penguin Computing
Pogo Linux
PSSC Labs
Quant-X (EU/ME)
Reason
RocketCalc
SGI
Streamline Computing (UK)
Terra Soft Solutions
Tsunamic Tech.
Verari Systems
Western Scientific
Links;
LinuxHPC RFQ Form
Vendors can sign up here
LinuxHPC.org

UPS aims to run fine on grid technology: Linux, x86 and DataSynapse

Linux World reports today that UPS has moved to grid technology to make advances in its IT technology base.
The first step for UPS is to consolidate, streamline and do better than the competition rather than hunting for raw horsepower in computing. How do they do it? Well, by starting on multiple frontiers.
“Using technology to differentiate ourselves from our competitors has always been fundamental to our success, and it’s one of the reasons we’re moving forward with [grid] technology,” Brian Cucci, manager of the Advanced Technology Group at UPS said during a Webcast last week with DataSynapse, the company that supplies the Atlanta-based company its grid software.

The software, called Grid Server, is now in production use at UPS and lets the company distribute a billing invoice application that once ran on an expensive mainframe across a group of cheaper x86 systems running Linux.
For UPS, grid computing was just another piece in its evolving IT puzzle, which is aimed at reducing costs and improving efficiency. The company’s Technology Directions Subcommittee, which is made up of representatives from across the organization and reports to the CIO, is charged with keeping track of hot technologies, determining which can best bring business value.

Grid computing gained priority and moved to the top of the group’s radar screen last year, because it fit in nicely with several other technology projects that either were underway at UPS or were in the planning stages, Cucci said. Those projects include virtualization and consolidation efforts, as well as an initiative to move to a computing-on-demand approach to IT that focuses on the use of low-priced commodity hardware.
To read the complete three-page article, please visit Linux World.
DataSynapse.