Sunday, December 24, 2006

Happy holidays and make a grid with your family and friends.

I hope Santa will come your way and your holiday will be filled with laughter and joy. If you pass by an unfortunate person, please try to make him/her smile. Don't forget that all the kids are the same!

Best Regards!

Thursday, December 14, 2006

RIAA and Linux Kernel, Linus delivers it straight

Responding to a statement;
"> Numerous kernel developers feel that loading non-GPL drivers into the
> kernel violates the license of the kernel and their copyright. Because
> of this, a one year notice for everyone to address any non-GPL
> compatible modules has been set."

Linus Torvalds has responded in a lengthy message. Although written during a discussion about restricting non-GPL drivers, it applies to most open source projects.
I will just quote some of Linus' statements that most appeal to me and redirect you to the article. If you are interested, you can follow the rest of the thread, which carries some gems like;
Bellyfull of fish
Penguins laying under the moon
Dreaming of wings to fly.

Back to Linus;
"The silly thing is, the people who tend to push most for this are the
exact SAME people who say that the RIAA etc should not be able to tell
people what to do with the music copyrights that they own, and that the
DMCA is bad because it puts technical limits over the rights expressly
granted by copyright law."
"The fact is, the reason I don't think we should force the issue is very
simple: copyright law is simply _better_off_ when you honor the admittedly
gray issue of "derived work". It's gray. It's not black-and-white. But
being gray is _good_. Putting artificial black-and-white technical
counter-measures is actually bad. It's bad when the RIAA does it, it's bad
when anybody else does it."
And lastly;
"There's a big difference between "copy" and "use". It's exatcly the same
issue whether it's music or code. You can't re-distribute other peoples
music (becuase it's _their_ copyright), but they shouldn't put limits on
how you personally _use_ it (because it's _your_ life).

Same goes for code. Copyright is about _distribution_, not about use. We
shouldn't limit how people use the code."

His points ring true, and drawing the line is hard for developers and thinkers alike. If you think you are confused, I do not even know why I wrote this post!

Links;
Linus' responses to restricting Linux kernel



Wednesday, December 13, 2006

Grid Comes to Web Hosting

Media Temple web hosting services has announced that it is providing Grid-based hosting plans. According to Media Temple, Grid-Server provides website hosting by clustering multiple networked servers: "(mt) Media Temple's Grid-Server services utilize a completely new hosting platform that replaces yesterday's obsolete shared server technology. We've eliminated roadblocks and single points of failure by using hundreds of servers working in tandem for your site, applications, and email. The Grid-Server program's on-demand scalability means you'll always be ready for intense bursts of traffic and the growing audience resulting from your online success. All of this power, controlled through our brand new AccountCenter, is available today for a price point unmatched by any competing service."

Then, hunting around, I found this gem, "MediaTemple Grid Server: Not Good for Sites with Multiple Developers" at No.oneslistening.com.
So take note if you are planning to host your multi-agent, multi-developer project on this grid-based service.
"Here at N1L, we’ve got a number of programmers who collaborate on projects, some that you’ve heard about and others still in the oven. We were active in the beta program for MediaTemple’s new Grid Server (GS). One thing we found is that the GS doesn’t play well with our projects that have multiple developers. Below, I’ll outline three ways that MediaTemple’s GS is not conducive to a collaborative environment. My hope is that this article provides a voice for change and improvement in the GS."

So follow the link and read the rest of the article that describes individual developer problems.

Links;
Media temple grid hosting
Not Good for Sites with Multiple Developers

Saturday, December 02, 2006

LA Grid (LAH GRID) picks up steam: FAU (Florida Atlantic University) joins the grid.


Florida Atlantic University announced today that it has become the 10th member of the IBM-led Latin American Grid (LA Grid), an effort to create professional IT opportunities for the Hispanic community and to advance research in areas such as life sciences, weather modeling and prediction.
By joining LA Grid, FAU will contribute research scientists and the university's supercomputer based on IBM BladeCenter Systems from the university's College of Engineering and Computer Science.
While joint research programs in hurricane mitigation, life sciences and health care are the priority for LA Grid, member universities can also access the joint supercomputing resources for independent research. FAU, for example, plans to conduct research on the human genome, bioinformatics mapping, computational physics, integrated computation and communications, video processing, computer simulation and information security.
With FAU joining the grid, LA Grid will add another 150 servers to the grid and will have 1,500 available member processors for shared use. IBM's goal is to see LA Grid grow to include as many as 30 universities and 10,000 member processors by 2010.
Current LA Grid participants include Florida International University, the University of Miami, the University of North Florida and the University of Puerto Rico, Monterrey Tech (Mexico) as well as the Universidad de la Plata (Argentina) and Instituto Universitario Aeronautico (Mexico). Additional grid members providing computing power and resources include the Barcelona Supercomputing Center (Spain) and IBM South Florida.

Links;
LA Grid Initiative

Monday, November 27, 2006

Digipede tells you how to Grid Enable your Application

My blogging pal Dan's company, Digipede, is hosting a series of webcasts that show how a few simple changes to your application development can get your application to a grid-enabled state.
I have signed up for some of the webcasts. I think it is important to learn about another face of grid computing, the .NET side of it! Dan is a fan of .NET.
While you are there, check out their case studies. They also have an evaluation version of their suite. Follow the links below.

Links;
Digipede Webcasts

Digipede case studies
Digipede evaluation request form

Friday, November 24, 2006

How the Search engine grids fared this year.

Since most of the search engines run grids of Linux servers (at least Google does), it may be worthwhile to look at the October 2006 data for the top U.S. search providers. The report was provided by Nielsen//NetRatings.
It shows that Google is still the leader, maybe because of those Linux clusters, and perhaps that is why Microsoft's Steve Ballmer jumped into bed with Novell, hoping to sue Google. Or maybe not; he does not have enough hair to pull.
Here are the figures;
Example: An estimated 3.0 billion search queries were conducted at Google Search, representing 50 percent of all search queries conducted during the given time period.

Top search engines in September 2006
Search engine        Searches (000s)   Growth   Share
Google                     3,022,326      23%   49.6%
Yahoo!                     1,456,269      30%   23.9%
MSN/Windows Live             538,594      -8%    8.8%
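As a quick sanity check of these numbers (assuming, as the 3.0-billion example above suggests, that the Searches column is in thousands), each engine's count divided by its share should imply roughly the same total search market:

```python
# Sanity check of the report's arithmetic. Searches appear to be reported
# in thousands (3,022,326 thousand ~= the "3.0 billion" quoted above),
# so each engine's count / share should imply about the same market total.

searches = {"Google": 3_022_326, "Yahoo!": 1_456_269, "MSN/Windows Live": 538_594}
shares = {"Google": 0.496, "Yahoo!": 0.239, "MSN/Windows Live": 0.088}

# Implied total monthly searches (still in thousands) per engine's row.
implied_totals = {name: searches[name] / shares[name] for name in searches}

# Spread between the largest and smallest implied totals.
spread = max(implied_totals.values()) / min(implied_totals.values()) - 1
assert spread < 0.02  # the three rows agree on a ~6.1 billion market
```

All three rows point at a total of roughly 6.1 billion searches for the month, so the Searches and Share columns are internally consistent.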

You can find more search engine performance figures in the report.
Links;
Nielsen//NetRatings News

Tuesday, November 21, 2006

Lustre, the cluster file system

Lustre is a scalable, secure, robust, highly-available cluster file system. It is designed, developed and maintained by Cluster File Systems, Inc.

The central goal is the development of a next-generation cluster file system capable of serving clusters with tens of thousands of nodes and petabytes of storage, moving hundreds of GB/sec, with state-of-the-art security and an easy-to-use management infrastructure.

Lustre is incorporated into many of today's largest Linux clusters in the world, including offerings from CFS's partners who ship Lustre as a core component of their cluster products, such as HP StorageWorks SFS and the Cray XT3 and XD1 supercomputers. Users have also demonstrated that Lustre scales well in both directions, running in production on clusters as small as 4 nodes and as large as 15,000 nodes.

The latest version of Lustre is always available from Cluster File Systems, Inc. Public Open Source releases of Lustre are made under the GNU General Public License.

Links;
Lustre the cluster file system
Lustre documentation wiki

Friday, November 17, 2006

Grid Technology related Acronyms

I was browsing through a well-known grid technology site, International Science Grid, when I noticed the link of the week. It was a link to the Grid Acronym Soup, hosted on GridPP's Web site, a guide to some of the acronyms used in the grid computing community. If you can't find your acronym in the list, links to other projects' acronym compilations and glossaries are also included in the Soup.
It is easy to forget that some people may have a hard time understanding these acronyms; I myself only know a few of them ;).
So I decided to publish a link to GridPP.

Links;
International Science Grid

Grid Acronym Soup

Thursday, November 16, 2006

IBM pushes Linux and Grid, eases deployment

On Wednesday, IBM introduced its Implementation Services for Linux and Grid and Grow Express Implementation Service, both of which expand on existing IBM offerings by building on lessons learned from individual projects to create a standard way to deploy computing grids and Linux. The services use an automated, Web-based tool to streamline projects, cutting costs and improving efficiencies.
IBM says the services, anchored by the Web-based tool, can reduce Linux implementation times by nearly a third.

“The tool incorporates industry-application intelligence and best-practice knowledge from thousands of client engagements to ensure consistent implementation around the world,” IBM said.

For grid deployments, IBM is adding the Web-based tool to simplify further the Grid and Grow Express package it introduced last spring.

“The service product includes hardware, software and services, and can be incorporated into current storage and server infrastructure,” IBM said.

The Linux and grid implementation services are available now from IBM Global Services. Pricing was not released.

Links;
IBM Grid and Grow
Linuxworld

Tuesday, November 14, 2006

Java GPLed, when is SUN going to stop?


It is funny to remember that one of my first postings to make Slashdot was about Sun revoking the SCSL OEM-like license it had given to the FreeBSD Foundation. You can still read it at Slashdot, even though it is almost a year old. Many Linux distributions could not ship Java because Sun's license prohibited it.
I have not gone through the complete saga yet; I have read many articles, but I think I will rely on Sun itself.
Here is what I found;

Another Freedom for Java Technology

Sun started a revolution with Java technology 10 years ago. With a free runtime, an open specification, and a platform-independent promise of compatibility, Java technology became a gold standard in embedded devices, mobile phones, on the desktop and within the enterprise. Now, in 2006, Sun is open sourcing its implementations of Java technology as Free/Libre software. More

Live Webcast
Join Sun's CEO Jonathan Schwartz and Executive Vice President of Software Rich Green for the launch event.

Get Involved
Visit the three new open-source Java communities that Sun is seeding and download the code: OpenJDK, Mobile & Embedded, and the GlassFish community.

Duke, the mascot of Java technology, is open sourced too.

Monday, November 13, 2006

100 Gigabit Ethernet? Yes, they have it at SC06

A first-ever demonstration of 100 Gigabit Ethernet (100 GbE) technology by a team of industry partners, including Finisar, Infinera, Internet2, Level 3 Communications, and University of California at Santa Cruz, shows that 100 GbE technology is viable and capable of implementation in existing optical networks with 10 Gigabit/second (Gb/s) wavelengths.
The system successfully transmitted a 100 GbE signal from Tampa, Florida to Houston, Texas, and back again, over ten 10 Gb/s channels through the Level 3 network. This is the first time a 100 GbE signal has been successfully transmitted through a live production network. The 100 GbE system will be on display from November 14th to the 16th at the Infinera booth (Booth no. 1157) at the SC06 International Conference in Tampa. The system will be transmitting a 100 GbE signal to the Internet2 booth (Booth no. 1451) during the show.
"This new approach to providing 100 Gig Ethernet service over long distances enables LAN Ethernet protocols in the WAN environment," said Jack Waters, CTO of Level 3. "Compared to other methods that have been demonstrated, this is a practical, economical solution that operates over the wide area using existing DWDM technologies. We're pleased to have been involved with developing and testing this solution, and will be watching closely as it is commercialized."
The demonstration encodes a 100 GbE signal into ten 10 Gb/s streams using an Infinera-proposed specification for 100 GbE across multiple links. A single Xilinx FPGA implements this packet numbering scheme and electrically transmits all ten signals to ten of Finisar's 10 Gb/s XFP optical transceivers which in turn convert the signals to optics. These signals are then transmitted to an Infinera DTN DWDM system. For the long-distance demonstration, conducted last week, the 100 GbE signal was then handed off to Infinera systems within the Level 3 network where it was transmitted across the Level 3 network to Houston and back. This pre-standard specification for 100 GbE guarantees the ordering of the packets and quality of the signal across 10 Gb/s wavelengths and demonstrates that it is possible for carriers to offer 100 GbE services across today's 10 Gb/s infrastructure.
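The striping idea itself is easy to sketch. The toy Python example below is my own illustration, not Infinera's actual specification: tag each chunk of a stream with a sequence number, deal the chunks round-robin across ten lanes, and let the receiver restore exact ordering from the sequence numbers alone.

```python
# Toy illustration of carrying one high-rate stream over several
# lower-rate lanes: number each chunk, round-robin it across lanes,
# and re-sort by sequence number on the far end. (Names and chunk
# sizes are made up; this is only the general packet-numbering idea.)

def stripe(data: bytes, lanes: int, chunk: int):
    """Split `data` into numbered chunks and deal them across `lanes`."""
    buffers = [[] for _ in range(lanes)]
    for seq, start in enumerate(range(0, len(data), chunk)):
        buffers[seq % lanes].append((seq, data[start:start + chunk]))
    return buffers

def reassemble(buffers):
    """Merge the per-lane buffers back into one correctly ordered stream."""
    packets = [p for lane in buffers for p in lane]
    packets.sort(key=lambda p: p[0])          # ordering guaranteed by seq no.
    return b"".join(payload for _, payload in packets)

frame = bytes(range(256)) * 40                # stand-in for a 100 GbE flow
lanes = stripe(frame, lanes=10, chunk=64)     # ten 10 Gb/s wavelengths
assert reassemble(lanes) == frame             # receiver restores exact order
```

Whatever order the ten lanes deliver their chunks in, the sequence numbers are enough to rebuild the original frame, which is the guarantee the pre-standard specification provides across real 10 Gb/s wavelengths.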
Links;
More info on 100GbE at SC06


Sunday, November 12, 2006

How is your Bro? I mean the Bro cluster, the NIDS.


Lawrence Berkeley National Laboratory has developed a comprehensive approach to cyber security that allows the open exchange of scientific knowledge while simultaneously protecting critical resources from attacks -- the Bro intrusion detection system. And now, Bro is Big Bro in the form of a scalable cluster which will demonstrate its effectiveness on a 10 gigabit network connection during the SC06 conference to be held Nov. 11-17 in Tampa. The demo will be featured in LBNL's booth, as I have mentioned before.
But what is Bro? Bro is an open-source, Unix-based Network Intrusion Detection System (NIDS) that passively monitors network traffic and looks for suspicious activity. Bro detects intrusions by first parsing network traffic to extract its application-level semantics and then executing event-oriented analyzers that compare the activity with patterns deemed troublesome. Its analysis includes detection of specific attacks (including those defined by signatures, but also those defined in terms of events) and unusual activities (e.g., certain hosts connecting to certain services, or patterns of failed connection attempts).
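Real Bro policies are written in Bro's own scripting language, but the flavor of that event-oriented analysis can be sketched in a few lines of Python. The event shape and threshold below are my own invention for illustration, not Bro's API:

```python
# Rough sketch of one analysis a NIDS like Bro performs: flagging hosts
# with a suspicious pattern of failed connection attempts (a scan).
# The tuple format and threshold are hypothetical, chosen for clarity.
from collections import Counter

FAIL_THRESHOLD = 3  # hypothetical tuning knob

def suspicious_sources(events, threshold=FAIL_THRESHOLD):
    """events: iterable of (src_ip, dst_port, status) tuples."""
    failures = Counter(src for src, _port, status in events
                       if status == "failed")
    return sorted(src for src, n in failures.items() if n >= threshold)

events = [
    ("10.0.0.5", 22, "failed"),
    ("10.0.0.5", 23, "failed"),
    ("10.0.0.5", 80, "failed"),   # three failures -> looks like a scan
    ("10.0.0.9", 80, "ok"),
]
print(suspicious_sources(events))  # ['10.0.0.5']
```

Bro's analyzers work on richer, protocol-aware events than these tuples, but the shape is the same: accumulate state per host and raise an alarm when activity matches a troublesome pattern.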

So who wants Bro?
Bro is intended for use by sites requiring flexible, highly customizable intrusion detection. It is important to understand that Bro has been developed primarily as a research platform for intrusion detection and traffic analysis. It is not intended for someone seeking an "out of the box" solution. Bro is designed for use by Unix experts who place a premium on the ability to extend an intrusion detection system with new functionality as needed, which can greatly aid with tracking evolving attacker techniques as well as inevitable changes to a site's environment and security policy requirements.

Bro has a lot of features, but the most striking for me is;
Snort Compatibility Support
The Bro distribution includes a tool, snort2bro, which converts Snort signatures into Bro signatures. Along with translating the format of the signatures, snort2bro also incorporates a large number of enhancements to the standard set of Snort signatures to take advantage of Bro's additional contextual power and reduce false positives.
This is what led me to try Bro.
I guess you need to visit the Bro Intrusion Detection System site to learn more. Bro is also open source, so you can download and try it. It runs on commodity PCs, and what better way to find out about software than running it yourself?

Links;
Bro's home, not yours.



Saturday, November 11, 2006

ClusterBuilder.org adds Clustering Encyclopedia

From Cluster resources news;
Cluster Resources, Inc. and LinuxHPC.org announced the release of ClusterBuilder.org version 1.3, featuring the new Clustering Encyclopedia – a specialized reference source of high-performance computing (HPC) technologies and products.

ClusterBuilder.org is a Web site created through the combined efforts of Cluster Resources and LinuxHPC.org , designed to help cluster administrators, technical evaluators and purchase evaluators build out better cluster, grid and utility-based computing environments.

The new Clustering Encyclopedia adds more than 130 new articles and 160 pages of cluster related information, demonstrating ClusterBuilder.org's continued efforts to provide an HPC-centric research location and information portal for HPC technologies.

In addition to the encyclopedia, the new version of ClusterBuilder.org also contains a hyper-linked index, which acts as a portal to quickly and easily guide users to the specific content they seek.

Links;
ClusterBuilder.org
LinuxHPC.org
Cluster resources

Cleversafe, your OSS data Grid Solution

The next step in your grid project or grid research project is storage. At least mine is ;). I have been looking for a better solution for my storage grid requirements, and the first earmark (a term I learned during the elections) is that it has to be open source. That makes it not so easy to find a mature, well-functioning solution.
Then I came across the Cleversafe project. I have been busy with it ever since.
Currently Cleversafe has quite a few projects under its wing. The most notable are;

Cleversafe Dispersed Storage™
DSGrid File System™
Cleversafe Desktop™

Cleversafe Dispersed Storage™
The Dispersed Storage Project is the central point of development and idea exchange for developers around the world to contribute to innovative storage solutions leveraging dispersed storage methodology.

The project uses information dispersal algorithms (IDAs) to separate data into 11 unrecognizable DataSlices™ and distribute them, via secure Internet connections, to 11 storage locations throughout the world, creating a storage grid. With dispersed storage, transmission and storage of data is inherently private and secure. No single entire copy of the data is in one location, and only 6 out of the 11 nodes need to be available in order to perfectly retrieve the data.

Data on the grid remains private and secure in the face of natural catastrophes, or failures of hardware, connection, facility, or IT management. Moreover, the individual data slices do not carry enough information for an unauthorized viewer to determine the original content.
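To get a feel for how dispersal can make a subset of slices sufficient, here is a scaled-down toy in Python: two data slices plus an XOR parity slice, so any 2 of the 3 slices reconstruct the original. Cleversafe's real IDA is far more sophisticated and yields any-6-of-11; this sketch only conveys the intuition.

```python
# Toy version of the dispersal idea: split data into two halves plus an
# XOR parity slice, so any 2 of the 3 slices recover the original.
# (The real product uses a proper information dispersal algorithm with
# 11 slices, any 6 of which suffice; this is only the intuition.)

def disperse(data: bytes):
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    a, b = data[: len(data) // 2], data[len(data) // 2 :]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]                     # store each slice elsewhere

def recover(slices):
    """slices: list of (index, payload) pairs; any two of three suffice."""
    have = dict(slices)
    if 0 in have and 1 in have:               # both data slices survived
        a, b = have[0], have[1]
    elif 0 in have:                           # lost slice 1: b = a ^ parity
        a = have[0]
        b = bytes(x ^ y for x, y in zip(a, have[2]))
    else:                                     # lost slice 0: a = b ^ parity
        b = have[1]
        a = bytes(x ^ y for x, y in zip(b, have[2]))
    return a + b

secret = b"grid storage!!"                    # even length, no padding needed
s = disperse(secret)
assert recover([(0, s[0]), (2, s[2])]) == secret   # slice 1 lost, still fine
```

Note that each individual slice reveals only half the bytes (or a parity of them), which hints at why no single storage location holds a usable copy of the data.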

DSGrid File System™

The dsgfs project will enable a dispersed storage grid, such as the Cleversafe Research Storage Grid, a freely available, multi-terabyte globally dispersed storage grid, to appear as an ultra-reliable, durable hard drive to a Linux application. Using the dsgfs, users will be able to seamlessly store data on the Cleversafe Research Storage Grid, on a commercial grid or on a grid they build themselves. dsgfs initially will support various versions of Linux, including Debian, Fedora and CentOS.

The file system is built on Cleversafe Dispersed Storage technology, which uses Information Dispersal Algorithms (IDAs™) to separate data into 11 unrecognizable DataSlices™ and distribute them, via secure Internet connections, to 11 storage locations throughout the world.

Cleversafe Desktop™

The Desktop Client, unlike the CLI, supports the concept of Storage Sets. Storage Sets allow files located in different locations on a computer to be grouped together as a single set. By using Storage Sets combined with the backup scheduling features of Cleversafe Desktop, ordinary users can backup individual machines at scheduled intervals, or on-demand--without the use of complex command line operations.

Cleversafe Desktop is written from the ground-up for multi-platform support. Initial releases will support Windows XP client and server flavors with various versions of Linux, including Debian, Fedora, RedHat, FreeBSD, Ubuntu and CentOS and Mac OS X ports to follow. This makes storage grids more accessible and attractive to a wider group of non-technical users and developers, including fans of Windows, Linux and embedded Linux solutions.

I really like the last one, the desktop project, because I need its services; my clients are multi-platform.
“This new project makes it much easier for a wider range of developers to use dispersed storage and explain the power of it to business managers,” said Manish Motwani, Cleversafe Desktop project lead. “It puts a presentable face on the grid, which is quite complex and technical behind-the-scenes, and adds point-and-click simplicity to better organize files for easier data retrieval. It’s also cross-platform savvy to ensure there are as few barriers as possible to using open source dispersed storage grids.”
Go explore your grid storage needs.
Links;
Cleversafe.org
Cleversafe wiki

Tuesday, November 07, 2006

Alchemi v.1.0.6 (Developer release) has been released



I almost missed this announcement since I am still battling with version 1.0.5, and now I am in the process of upgrading to 1.0.6. If you wonder what Alchemi is;
Alchemi is a software framework that allows you to painlessly aggregate the computing power of networked machines into a virtual supercomputer (a computational grid) and to develop applications to run on enterprise grids.
Version 1.0.6 brings the following major changes since 1.0.5: LUA (least-privileged user account) operation for both the Manager and Executor, in normal and service mode, and an option for executing users' GThreads in a secure sandbox (experimental).
Before downloading and running the new version, please note that this is a developer release intended for enthusiasts and early adopters, as the documentation is not up to date.
Here are the release notes;
This release contains some small important new features.

- The Manager and Executor are now designed to run under least-privileged user accounts by default. In service mode, they run as 'LocalService', which is a limited-privilege user account.
- The Manager and Executor (in both normal and service mode) will now read/write config files from the user's AppData directory.
In Windows XP, this would be :
C:\Documents and Settings\\Application Data\Alchemi\Manager
or

C:\Documents and Settings\\Application Data\Alchemi\Executor
- The logs are located in the user's Temp directory which would be (in Windows XP),

C:\Documents and Settings\\Local Settings\Temp\Alchemi\Manager\logs

or

C:\Documents and Settings\\Local Settings\Temp\Alchemi\Executor\logs

- These changes mean that an administrator or user with admin privileges can install Alchemi and start up the services, while any user can run Alchemi and/or clients without needing admin rights.
- Added Sandboxed execution for optionally running user's GThreads under low privileges. (These options are not all exposed through the GUI / API yet)

Links;

Alchemi at sourceforge

Alchemi at University of Melbourne



What not to miss at SC06. Berkeley Shines


Lawrence Berkeley National Laboratory, a U.S. Department of Energy laboratory, will share its leadership and expertise in the fields of supercomputing, grid computing and cluster computing via talks, technical papers and demonstrations at the SC06 conference, to be held Nov. 11-17 in Tampa, Fla.
So if you are planning to be there, please take note of what LBNL is doing at Booth 1812. I am sure it will grab your attention and keep your grid computing or supercomputing mind entertained.

Demonstrations and Talks presented by Lawrence Berkeley National Laboratory;

Berkeley Lab, located in booth 1812, will present demonstrations of a number of tools and techniques developed to advance scientific computing and networking. Booth demonstrations will include the following:
* The Bro Cluster for Intrusion Detection on a 10 Gig Network

* Using FastBit for High-Performance Visual Analysis of Numerical and Text Data: Mining the Enron Email Archive

* Python Tools for Automatically Wrapping Legacy Codes as Grid Services

* Tool for Validating Compatibility and Interoperability of Storage Resource Managers (SRMs) for Heterogeneous Storage Systems

* ACTS Collection User Support Clinic

* Using VisIt to Visualize and Analyze AMR Data of Turbulent Reactive Chemistry Simulations

* Plasma Wakefield Acceleration Visualization

* High Performance Visualization using an 8-socket, 16-core Opteron Machine

Talks in the LBNL booth will cover three Scientific Discovery through Advanced Computing (SciDAC) projects led by Berkeley Lab, the new Cray XT4 being installed at NERSC, ESnet's new network partnership with Internet2, and supernova research at NERSC. Here's the schedule:

Tuesday, Nov. 14

* 11 a.m.: "Scientific Data Management: Essential Technology for Data-Intensive Science," Arie Shoshani, Scientific Data Management, LBNL

* 2 p.m.: "NERSC's Move Toward Petascale Computing with the Cray XT Architecture," William T. Kramer, NERSC/LBNL

* 3 p.m.: "Discovery and Destabilization: Experiments in Stellar Explosions at NERSC," F. Douglas Swesty and Eric S. Myra, Dept. of Physics & Astronomy, State University of New York at Stony Brook

Wednesday, Nov. 15

* 11 a.m.: "The SciDAC2 Visualization and Analytics Center for Enabling Technologies: Overview and Objectives," Wes Bethel, Visualization, LBNL

* 2 p.m.: "Next Generation Optical Infrastructure for the U.S. Research and Education Community," William E. Johnston, ESnet, LBNL

* 3 p.m.: "Introducing the SciDAC Outreach Center," Jonathan Carter, NERSC User Services, LBNL.

Technical Program Presentations

Berkeley Lab is also well represented in the SC06 technical program, with LBNL staff presenting research in technical paper, tutorial and poster sessions, invited talks, workshops and a Birds-of-a-Feather session. Here is a list of presentations by LBNL staff:

* "25 Years of Accelerator Modeling," Masterworks presentation, Robert Ryne, Accelerator and Fusion Research Division

* "ESnet," Education Program plenary talk, Bill Johnston, Computational Research Division

* "Detecting Distributed Scans Using High-Performance Query-Driven Visualization," technical paper, Kurt Stockinger, E. Wes Bethel, Scott Campbell, Eli Dart, and Kesheng Wu, Computational Research Division

* "Optimized Collectives for PGAS Languages with One-Sided Communication," poster, Dan Bonachea, Paul Hargrove, Rajesh Nishtala, Michael Welcome, Katherine Yelick, Computational Research Division

* "Computing Protection in Open HPC Environments," tutorial, Stephen Q. Lau, Scott Campbell, William T. Kramer, Brian L. Tierney, NERSC Division

* "The HPC Challenge (HPCC) Benchmark Suite," tutorial, David Bailey, co-presenter, Computational Research Division

* "Best Practice in HPC Procurements," workshop, Bill Kramer, NERSC Division

* "TOP500 Supercomputers," Birds of a Feather, Erich Strohmaier, Computational Research Division

Additionally, Zhengji Zhao, Lin-Wang Wang, Juan Meza, Andrew Canning and Osni Marques of LBNL's Computational Research Division will give presentations during the Second IEEE/ACM International Workshop on High Performance Computing for Nano-science and Technology (HPCNano06) to be held in conjunction with SC06.

Links;
SC06

Saturday, November 04, 2006

Do you like Ubuntu? Now you can get the really free version of it!

The Free Software Foundation (FSF) has announced the release of gNewSense 1.0, an Ubuntu derivative that promotes software "freedom" by excluding proprietary components. Created by developers Brian Brazil and Paul O'Malley, the project is sponsored by the FSF to provide users with a robust desktop distribution that adheres to the organization's strict ideological standards and caters to users that prefer to avoid pragmatic compromises.
In the official 1.0 release of gNewSense, proprietary firmware and fonts have been removed, and access to the non-free Ubuntu repositories has been eliminated. The distribution also includes completely new artwork and includes developer-oriented packages like Emacs and the GCC compiler in the default installation. The developers have also elected to eschew integration with Launchpad, a proprietary development management tool used by Ubuntu.
Not everyone shares the FSF's enthusiasm, especially the Ubuntu community, but I think it is a good move. Ubuntu is a good desktop distribution, and I have not installed it (I downloaded it to check out the features, and I liked them) because of the non-free software integration. The same goes for Freespire/Linspire, or Lindows as it was known before the company caved to Microsoft and changed the name. The problem is that they carry non-free software. These compromises are made to ease the lives of average Linux users: you can load these distributions on any PC or notebook without much hassle, less hassle than XP ;). Anyway, I don't want this dirty, patent-filled software infecting Linux, giving the likes of M$ and SCO grounds to come charging when they feel challenged.
I admire the efforts of Ubuntu and Freespire communities and they do serve certain market segment.
But we do need organizations like FSF to keep checks and balances of OSS and provide products like gNewSense.
Now I have a ubuntu distribution that I like. gNewSense!
I first got this news at Ars Technica.

Links;
gNewSense
FSF
Ubuntu
Ars Technica

Friday, November 03, 2006

Grid technology book for Savvy managers

There are not many books about grid technology, since it is fairly new in the technology arena. Quite a few books have been published on the subject, but most of them are for IT folks or developers. The catch is that all these IT personnel and developers have to prove the technology to their managers, and to do so one has to educate the managers, who are not in the business of digging deep into grid technology to understand it.
So what do the managers do? Read the book!
Pawel Plaszczak and Rich Wellner, Jr. from GridwiseTech, one of my favorite grid resources, have published an excellent book.
Grid Computing: The Savvy Manager's Guide covers what is needed to educate a manager on grid technology.
This non-technical book on technical matters answers key questions on Grid computing in business terms.

* What really are grids? What is Grid technology?
* What are the business benefits of Grid-enabling the infrastructure?
* Why should I, as a savvy businessperson, be interested in grids?
* Should my company “plug in”?
* How do I get started? How do I plan the move to the Grid paradigm?
Given the nature of the technology, the book has an online companion at Savvygrid, which also carries other information such as errata, reviews, the TOC, a look inside, and a discussion group. I enjoyed the Savvygrid site; it is a good introduction to the book, and the reviews are a must-read.

Links
Savvygrid Book online
Gridwisetech site

What happens when GNU meets Cluster?

You get Gluster, a GNU cluster distribution aimed at commoditizing supercomputing and super storage. The core of Gluster provides a platform for developing clustering applications tailored for specific tasks such as HPC clustering, storage clustering, enterprise provisioning, and database clustering.
According to the developers, Gluster is designed for massive scalability and performance from the ground up.
So why another cluster? Don't we have enough cluster and HPC resources?
Gluster gives the following answers;
GlusterHPC is

1. Designed for massive scalability (16 nodes or 65,000 nodes makes no difference). Many of the building blocks of Gluster are already powering the world's top supercomputers.
2. Portability (across distributions and architectures).
3. Modular and extensible.
4. Built on Gluster Platform which extends clustering technology beyond HPC to database, storage, enterprise provisioning, etc.
5. Very easy to use with a clean dialog based front-end.
6. Backed by supercomputing experts.
7. Supports multi-casting and Infiniband.
8. Centralized remote screen control.
9. Very easy to add new features or customize.
10. Doesn't require a database server to store configuration information.
By the way the project is still awaiting GNU approval, so it is under the category, NOT-YET-GNU. I hope and think it will be approved.
If you are worried about whether this will run under your Linux distribution, fear not: it is distro-independent.
Links;
Gluster.org
Gluster Docs
Gluster Downloads

Thursday, November 02, 2006

Fermi Research Alliance wins $1.575 billion contract.

The U.S. Department of Energy (DOE) has awarded a new $1.575 billion, five-year contract for management and operation of Fermi National Accelerator Laboratory (FNAL) to the Fermi Research Alliance, LLC (FRA), owned jointly by the University of Chicago (UChicago) and Universities Research Association, Inc. (URA).

“The quality of the new contract is a direct consequence of the competition process,” DOE Under Secretary for Science Dr. Raymond L. Orbach said today at a ceremony at Fermilab where he made the announcement of the contractor. “The partnership between UChicago and URA will enhance organizational depth and capability, promising improvements in performance and accountability."
The new contract contains a number of provisions intended to provide incentives for outstanding performance. The contract contains award term provisions under which the department may recognize outstanding performance through phased extensions of the contract for up to a total of 20 years, if the contractor meets specific performance levels established by DOE. The contract also contains incentive fee provisions under which FRA can earn a maximum total fee of up to $3.55 million a year for outstanding performance during the initial five-year term and the first five years of any award term extensions.

The initial contract term will be January 1, 2007, to December 31, 2011.


Cluster RFQ (request for quote) made easy

If you have ever wanted to get an RFQ (Request For Quote) from vendors, I am sure you have run around the web looking for vendors, filling out multiple forms, and then comparing the replies to find the best match for your buck. There is now a service that will help you with hunting down vendors and filling out multiple forms, though the comparison part you will still have to do yourself.
So who is providing this service, and how much will it cost?
LinuxHPC.org is providing this service, and it is totally free of charge. I think it is a really good deal if you are in the market for clusters or beginning to build a Grid resource.
From the LinuxHPC.org's website describing the form;

"It takes time to visit different cluster vendor websites to request a quote. The LinuxHPC Cluster RFQ form was created to assist the Linux cluster community by reaching many vendors through this one form. LinuxHPC will assist you from filling out the form to the product arriving on your doorstep.

The LinuxHPC RFQ is a FREE service! You pay nothing!"
The process is also simple;
"The above listed vendors have a proven track record of delivering products and services to the Linux cluster community.

1. Fill out the form
2. A representative from LinuxHPC will contact you by phone or email
3. The LinuxHPC representative will go over the details of your request, as well as assist you with questions about the RFQ process
4. LinuxHPC will send out the RFQ to the vendors you request...you are in full control of your RFQ
5. You will begin to see multiple responses to your RFQ
6. LinuxHPC will follow up with you and the vendors to ensure that you have received the desired response and experience from the vendors. In addition we will be available to assist you in any way we can during the RFQ process
So no more hundreds of form filling. Just a single form will reach hundreds of vendors."
This resource is not only for customers but also for vendors, who can reach multiple customers seeking cluster solutions. LinuxHPC.org is a well-respected HPC computing resource, visited by many seeking HPC solutions, ideas, and support.
So get that RFQ out, or reply to one, today. Visit LinuxHPC.org.
Following is a list of the vendors listed on the site;
Accelerated Servers
Ace Computers
Advanced Clustering Technologies
Agilysys
AOES Group (EU)
Appro
Aspen Systems
Atipa Technologies
Cepoint Networks
Cluster Computing Systems (EU)
Cluster Resources
ClusterVision (EU)
Compusys plc (UK)
HP
IBM
ION Computer Systems
Linux Labs (Asia)
Livewire Lifescience Solutions
Linux Networx
Major Linux Computing
Microway
New Tech Solutions, Inc
Penguin Computing
Pogo Linux
PSSC Labs
Quant-X (EU/ME)
Reason
RocketCalc
SGI
Streamline Computing (UK)
Terra Soft Solutions
Tsunamic Tech.
Verari Systems
Western Scientific
Links;
LinuxHPC RFQ Form
Vendors can sign up here
LinuxHPC.org

UPS aims to run fine on Grid Technology on Linux, x86 and DataSynapse

LinuxWorld today reports that UPS has moved to Grid technology to make advances in its IT technology base.
The first step for UPS is to consolidate, streamline, and do better than the competition rather than hunting for raw horsepower in computing. How do they do it? Well, by starting on multiple fronts.
“Using technology to differentiate ourselves from our competitors has always been fundamental to our success, and it’s one of the reasons we’re moving forward with [grid] technology,” Brian Cucci, manager of the Advanced Technology Group at UPS said during a Webcast last week with DataSynapse, the company that supplies the Atlanta-based company its grid software.

The software, called Grid Server, is now in production use at UPS and lets the company distribute a billing invoice application that once ran on an expensive mainframe across a group of cheaper x86 systems running Linux.
For UPS, grid computing was just another piece in its evolving IT puzzle, which is aimed at reducing costs and improving efficiency. The company’s Technology Directions Subcommittee, which is made up of representatives from across the organization and reports to the CIO, is charged with keeping track of hot technologies, determining which can best bring business value.

Grid computing gained priority and moved to the top of the group’s radar screen last year, because it fit in nicely with several other technology projects that either were underway at UPS or were in the planning stages, Cucci said. Those projects include virtualization and consolidation efforts, as well as an initiative to move to a computing-on-demand approach to IT that focuses on the use of low-priced commodity hardware.
To read complete three page article, Please visit Linux World.
DataSynapse.

Saturday, October 28, 2006

Unbreakable Linux for everyone, not just Oracle customers.

During the Oracle Unbreakable Linux (read Red Hat) announcement, Oracle also unveiled that the support is available to everyone.
"Oracle's Unbreakable Linux program is available to all Linux users for as low as $99 per system per year," said Oracle President Charles Phillips. "You do not have to be a user of Oracle software to qualify. This is all about broadening the success of Linux. To get Oracle support for Red Hat Linux all you have to do is point your Red Hat server to the Oracle network. The switch takes less than a minute."

It looks like there might be help for Red Hat as well;
"We think it's important not to fragment the market," said Oracle's Chief Corporate Architect Edward Screven. "We will maintain compatibility with Red Hat Linux. Every time Red Hat distributes a new version we will resynchronize with their code. All we add are bug fixes, which are immediately available to Red Hat and the rest of the community. We have years of Linux engineering experience. Several Oracle employees are Linux mainline maintainers."
Links;
Oracle Linux

Insiders murmur that Larry really wanted to have an Oracle Linux but had to keep supporting Red Hat for technical reasons. So in the near future, Oracle might embrace one of the Red Hat clones; CentOS and WhiteBox come to mind. But that is still in the future; for now, if you want Oracle to support your Linux, it has to be Red Hat.
It seems the industry leaders were very eager to hear the news. Following are some of the quotes from those leaders;

DELL
"As a customer with first hand experience of Oracle's outstanding support organization, Dell will use Oracle to support Linux operating systems internally," said Michael Dell, Chairman of the Board, Dell. "Oracle's new Linux support program will help us drive standards deeper into the enterprise. Today we're announcing that Dell customers can choose Oracle's Unbreakable Linux program to support Linux environments running on Dell PowerEdge servers."

Intel
"Having worked with Oracle for many years in the enterprise computing space, we believe that the Oracle Unbreakable Linux program will bring tremendous value to our mutual Linux customers," said Paul Otellini, President and CEO, Intel Corporation. "Our work with Oracle on this program will be an important extension to our longstanding enterprise computing relationship."

HP
"HP and Oracle's collaboration and testing of Linux with integrated stacks of hardware, software, storage, and networking has helped create numerous best practices across the industry. HP welcomes the addition of Oracle's Unbreakable Linux program to the portfolio," said Mark Hurd, Chairman and Chief Executive Officer, HP.

IBM
"Oracle's support for Red Hat Linux will encourage broader adoption of Linux in the enterprise," said Bill Zeitler, Senior Vice President & Group Executive, IBM Systems and Technology Group. "IBM shares Oracle's goal of making Linux a reliable, highly standard, cost effective platform for mission critical applications backed by world class support."

Accenture
"Linux is important to us, and to our customers," said Don Rippert, Chief Technology Officer, Accenture. "We applaud Oracle's efforts to bring enterprise-quality support to Linux with the Oracle Unbreakable Linux program announcement. Together with Oracle, we at Accenture look forward to making the Linux experience even better for our customers."

AMD
"Oracle's Unbreakable Linux program will greatly expand the servicing options available to our AMD Linux customers," said Hector Ruiz, Chairman and Chief Executive Officer of Advanced Micro Devices. "We are excited by the program's potential to further enhance the success of AMD Linux servers in the enterprise."

Bearing Point
"It is critical that our customers have true enterprise-quality support for their Linux deployments. Oracle's Unbreakable Linux program support delivers the level of confidence our customers need to run Linux in their data centers," said Harry You, CEO, Bearing Point.

EMC
"The combined power of EMC and Oracle solutions bring superior reliability, scalability, high availability, and now, enhanced enterprise supportability to Linux users. We are confident that joint Linux solutions from EMC and Oracle will deliver enterprise scale and quality while lowering the cost of infrastructure for our customers," said Joe Tucci, Chairman, CEO, President, EMC.

BMC
"As Oracle's only systems management ISV at the highest level in Oracle's Partner Program, BMC Software is excited to see Oracle's deepening commitment to Linux," said Bob Beauchamp, BMC Software President and CEO. "Business Service Management from BMC Software with the Oracle Unbreakable Linux program meets customer demand for lower cost and higher quality support for their infrastructure."

NetApp
"The world's largest enterprises must have the flexibility to quickly and continually adapt to today's rapidly changing business requirements, without incurring risk," said Dan Warmenhoven, CEO of Network Appliance. "The Oracle Unbreakable Linux program is designed to drive the key benefits of Linux - including flexibility, reliability, and simplicity - directly into the data center. The longstanding relationship between NetApp and Oracle has enabled us to continuously deliver superior enterprise solutions to enable business agility and improve reliability - all tenets of the NetApp brand."

Well, that is a good deal of information for this post. I will have another article relating to Oracle Linux in the near future; as I am an Oracle customer, I will convert some of the Windows boxen to Linux and see how it fares.



Friday, October 27, 2006

Start your GRID with a Load Balanced MySQL Cluster.


Typical MySQL Cluster

I was wandering around HowtoForge looking for ISPConfig (OK, that is another article!) when I noticed the guide for a Load Balanced MySQL Cluster. I have set up many of those in my work, never with an easy guide, just following instructions found on grid technology forums, Google searches, etc. After a few installations, it becomes second nature to you. But my biggest problem is explaining the process to someone else. I do it, but with quite a bit of difficulty.
Remember though, due to the nature of the beast, you will need a lot of memory: for a 1GB database, say, you will need at least 1.1GB of usable memory on each node.
That is the reason this guide is valuable; along with its content, it is fairly well written.
Why do I need a load balanced MySQL cluster? Well, if you have a MySQL database that is very important to your operations, it is always better to have more than one copy of the database. But rather than keeping two copies of the database on two computers and using one at a time, a load balanced solution will allow you to use both computers at the same time, providing disaster recovery and higher throughput at once. On my Oracle servers, I use Oracle RAC clustering technology, which does the same thing, but at a higher price.

The cluster the author sets up is load-balanced by a high-availability load balancer that in fact consists of two nodes, using the Ultra Monkey package, which provides heartbeat (for checking if the other node is still alive) and ldirectord (to split up the requests among the nodes of the MySQL cluster).
He used Debian Sarge for all nodes, so you may have to adapt a bit if you are using another distribution. MySQL version 5.0.19 was used; if you do not want to use MySQL 5, you can use MySQL 4.1 as well.
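To give you a flavor of the ldirectord side of such a setup, here is a minimal sketch of an ldirectord.cf stanza for balancing MySQL. All IP addresses, the login/password, and the check database and table below are placeholders I made up for illustration; the guide itself has the authoritative version, so adapt everything to your own cluster.

```
# /etc/ha.d/ldirectord.cf -- illustrative sketch, not a drop-in config.
# 10.0.1.10 is the shared virtual IP; 10.0.1.11/.12 are the MySQL nodes.
checktimeout=10
checkinterval=15
autoreload=yes
quiescent=no

virtual = 10.0.1.10:3306
        service = mysql
        real = 10.0.1.11:3306 gate
        real = 10.0.1.12:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "secret"
        database = "ldirectordb"
        request = "SELECT * FROM connectioncheck"
        scheduler = wrr
```

The "negotiate" check means ldirectord actually logs in and runs the test query against each real server, so a node that is up but has a broken MySQL is pulled out of rotation, not just one that fails to ping.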

This howto is meant as a practical guide; it does not cover the theoretical backgrounds. They are treated in a lot of other documents on the web.
Again, to thank and protect the author: this does not come with any warranty. He goes on to say that this is not the only way of setting up such a system; there are many ways of achieving this goal, but this is the way he takes.
So without any further ramblings, here is the article: Load Balanced MySQL Cluster.
But there is also another article on MySQL Cluster, on MySQL Site.


Tuesday, October 24, 2006

Fedora Core 6, the Zod, is out of the door

The Fedora Project has announced the release of Fedora Core 6 (Zod). Install-time access to third-party package repositories, extensive performance improvements, support for Intel-based Macs, and a new GUI virtualization manager are some of the primary features. Additionally, Fedora Core 6 provides various improvements on the desktop, including a new default font and theme, the latest releases of GNOME and KDE, and additional options in window managers.
So if your grids were running on the Fedora distribution, it is time to get ready for the transformation.
Since;
*Fedora Core 6 ships with the 2.6.18 Linux kernel, and there are no longer separate kernels for multi-processor and single-processor architectures. A single kernel now automatically detects your processor.
*X.org 7.1 is included, and it dynamically configures monitor resolution and refresh rates to limit the amount of required user configuration.
*Fedora Core 6 runs on Intel-based Macs.
*Improved i18n support using the default SCIM input method, including more languages such as Sinhalese (Sri Lanka) and Oriya, Kannada, and Malayalam (India). Fedora Core now provides an easy interface to switch input methods using im-chooser.
*The GNOME 1.x legacy stack has been removed from Fedora Core and added to Fedora Extras.
Now you can add dual-core and single-core machines, and Intel-based Macs, to the mixture of computers in your grid.
But before jumping and hugging ZOD, read this discussion on Slashdot.

Monday, October 09, 2006

Oracle Grid Director on Grid Technology

Oracle Grid Director on Managing Large Deployments

In a Q&A on Gridtoday, Oracle director of Grid computing Bob Thome is interviewed on the complexity, management, and security issues that arise when implementing Grid infrastructures, and on why Grid is still worth the effort. According to Thome, political and cultural issues are the No. 1 obstacle to Grid deployment.

Sunday, October 08, 2006

Is the grid for me?

There is a very interesting discussion going on at /. about "to grid or not to grid". A user asks in Ask Slashdot: "In my job at a (large) investment bank I am constantly being pushed to use grid technology. I have many problems with this (not least that our data center is at best 100 Mb/s and our software is actually more data than computation heavy). A typical batch job takes 10-30 minutes consisting of around 10,000 trades. I would far rather spend the time and money on multi-core machines and optimizing the software than on the latest fad technology. I am interested to hear from other people in a similar position and, in particular, why or why not they chose grid software over improving the existing code to leverage better processor technology, and which grid software they chose to use and why. Or, conversely, why they chose not to use grid software."
Many comments and much advice are brought forth, together with IT staff and management disagreements and wishes.
I suggest that you at least browse through the discussion.
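The asker's core point, that a data-heavy job on a 100 Mb/s network may gain nothing from a grid, is easy to sanity-check with arithmetic. The sketch below is a back-of-envelope model; the trade size, per-trade compute time, and core count are my own illustrative assumptions, not figures from the post.

```python
# Back-of-envelope check: is a batch job network-bound or compute-bound?
# All concrete figures below are illustrative assumptions.

def transfer_seconds(num_trades, bytes_per_trade, link_mbps):
    """Time to ship the input data over the network link."""
    total_bits = num_trades * bytes_per_trade * 8
    return total_bits / (link_mbps * 1_000_000)

def compute_seconds(num_trades, ms_per_trade, workers):
    """Time to value the trades spread evenly across workers."""
    return (num_trades * ms_per_trade / 1000) / workers

# 10,000 trades (from the post), assuming ~50 KB of data per trade,
# on the 100 Mb/s link the asker mentions:
ship = transfer_seconds(10_000, 50_000, 100)   # seconds spent just moving data
work = compute_seconds(10_000, 100, 8)         # one 8-core box, 100 ms/trade

# If shipping the data costs a large fraction of the compute it buys back,
# the grid adds latency and complexity instead of removing them.
print(f"transfer: {ship:.0f}s, compute on 8 local cores: {work:.0f}s")
```

Plugging in your real per-trade data volume and valuation cost tells you quickly which side of the grid/multi-core argument your workload actually sits on.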

Saturday, October 07, 2006

Economical view at grid technology.

Gridtoday has interviewed Robert Cohen, an economist, Economic Strategy Institute fellow, and Cohen Communications Group president, to discuss some of his studies looking at how Grid computing, SOA and Web services have transformed the industrial landscape. In particular, Cohen discusses how grids have affected the automotive sector, a topic on which he presented at EGEE'06.
Cohen said during the interview, "I think many industries are beginning to use grids and SOAs to generate their core profits. As this becomes more evident, business processes could change in a wider range of industries beyond autos, aerospace, semiconductors and finance where they have been most noticeable. This could result in dramatic improvements in productivity as well as firms using the global economy as their base for design and product development."
Read more at gridtoday.

Monday, October 02, 2006

Grid snippets from Gridtoday

In big Grid news: Voltaire has entered the grid management game with its GridVision Enterprise software (GRIDtoday should have more on this next week); an automotive intelligence provider implemented an enterprise grid with Oracle; the Open Science Grid received a well-deserved $30 million from the NSF; the Louisiana Optical Network Initiative is working with Dell to create a 30-teraflop grid consisting of six high-performance clusters; and, sticking with the tera-scale theme, Intel CTO Justin Rattner discussed his company's prototype teraflop processors. Of course, depending on your interests, there are plenty of other interesting items, as well. Enjoy.


Saturday, September 30, 2006

The SC06 Tech Papers program, all 54 of them.

The SC06 Tech Papers program represents hundreds of thousands of hours of research. A committee of 233 conducted peer reviews of 239 submissions, from over 900 authors, culminating in the selection of 54 papers for presentation in Tampa.

Papers are organized in sessions of three papers each, covering the following topics:

* Architecture
* Memory
* Interconnect Routing and Scheduling
* Scalable Systems Software
* MPI and Communications
* MPI Tools and Performance Studies
* Grid Allocation and Reservation
* Grid Applications
* Grid Networks and Portals
* Grid Scheduling and Protocols
* Grid Resource Management
* Data Management and Query
* Imaging and Visual Analysis
* Biology
* Molecular Dynamics
* Particles and Continuum
* Tools and Techniques for Performance
* Blue Gene System Software

Five papers have been nominated for Best Student Paper and three more as SC06 Best Paper.
The 54 papers are presented in sessions spread over three days; one can find the schedule here.

ClusterBuilder.org 1.3 released with expanded services

News from LinuxHPC.org
Provo, Utah – Cluster Resources, Inc. and LinuxHPC.org announced today the release of ClusterBuilder.org version 1.3 (http://www.clusterbuilder.org), featuring the new Clustering Encyclopedia – a specialized reference source of high-performance computing (HPC) technologies and products.

ClusterBuilder.org is a Web site created through the combined efforts of Cluster Resources (http://www.clusterresources.com) and LinuxHPC.org (http://www.linuxhpc.org), designed to help cluster administrators, technical evaluators and purchase evaluators build out better cluster, grid and utility-based computing environments.

The new Clustering Encyclopedia adds more than 130 new articles and 160 pages of cluster related information, demonstrating ClusterBuilder.org's continued efforts to provide an HPC centric research location and information portal for HPC technologies.

“The Clustering Encyclopedia provides users with a foundation for effective research.” said Michael Jackson, president of Cluster Resources. “Site visitors can use the encyclopedia to boost their current understanding of the common cluster related terms and technologies they seek.”

The encyclopedia features brief but thorough overviews on a number of supercomputing concepts, products and subjects such as batch processing, utility computing, multi-core processors, workload managers, etc.
Read more at the Linuxhpc.org......

Thursday, September 28, 2006

Quad Processor for Grids

Sol@rion reports over at Geemodo that at Fall IDF 2006 Intel announced upcoming quad-core processors. There is a processor slated for blade servers as well. I think it is good news for grid technologies: enough power in a blade.
Geemodo: QUAD Processor introduced by Intel at IDF fall 2006

Monday, September 25, 2006

Diordn@ at work

Just Joined the site! Hope I can contribute good grid info

Sunday, September 24, 2006

The Royal Society Opens Up Its Scientific Journals Archive.

According to solarion,
Usually closed to the public, the scientific journals library of the Royal Society of London will be open to all of us for the next two months. Normally only material back to 1997 is available, but now we can read everything back to the Philosophical Transactions of 1665.
I did go there and found some interesting stuff, too much to consume; at least reading those papers made me feel less bad about my English! And they were written by the English!!

Wednesday, September 20, 2006

The European Commission invests in GRID research.

The European Commission has launched 23 new research projects to highlight the power of grid computing to businesses. It has pledged €78m to grid research, with the lion's share - €36m - going to three key research projects.

The three projects - BEinGRID, XtreemOS and Brein - will all explore the relevance and benefits of grid computing across multiple industries.

Together with 20 smaller projects, these three projects will bring together around 300 participants from academia and industry.

Viviane Reding, the EC's information society and media commissioner, said the projects will enable businesses to become more adaptive, agile and innovative if they help companies embrace the potential of grid computing.

News comes to you from Gemma Simpson @ Silicon.com.

Monday, September 18, 2006

SC06 Conference Registration is open


Registration opens for SC06, the annual conference of high-performance computing, networking, storage and analysis. This year's meeting, with the theme "Powerful Beyond Imagination," will convene November 11-17 at the Tampa Convention Center in Florida.

Online registration information can be found at http://sc06.supercomp.org/registration/attendee.php. To qualify for the advance registration discounts, registration and payment must be received by 5 pm (Eastern Time) Sunday, October 15.

This year's technical program includes 26 full and half-day tutorials, 54 technical papers, seven panel discussions, a series of Masterworks sessions, poster presentations and eight workshops. Visionaries and well-known leaders will speak on the state of high-performance computing in the keynote and plenary sessions, as well as participate in lively panel discussions.

Several awards will be presented, including the prestigious Sidney Fernbach and Seymour Cray Engineering awards, the Gordon Bell Prizes for fastest computer performance, and challenge awards recognizing competitive efforts in utilizing bandwidth, analyzing and visualizing data, and effectively accessing stored data.

An education program will offer hands-on sessions for teacher and faculty teams to help them incorporate computational tools into the classroom. SCinet, the conference's high-performance, production-quality network backbone, will use the most advanced technology to make the Tampa Convention Center one of the best-connected sites on the planet.

The exhibition area of the Tampa Convention Center will feature displays by more than 225 industrial and research exhibitors showcasing their latest systems, services and scientific achievements.

SC06 is sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Computer Society and the Association for Computing Machinery's Special Interest Group on Computer Architecture (ACM SIGARCH). For more information, see http://sc06.supercomp.org/.

SC06 Conference Opens Hotel Reservations Web Site for Attendees
Hotel room reservations for the SC06 conference on high performance computing, networking, storage and analytics can now be made through the official conference Web site at http://sc06.supercomputing.org/travel/hotels.php. Attendees are encouraged to book their rooms as soon as possible to ensure the best selection of hotel location and room rate.

If you are unable to attend;
SCDesktop Brings SC06 to You
From its successful debut last year, SCDesktop returns to bring SC06 to "virtual attendees." As a Virtual Attendee you will have access to:

* Keynote
* Plenary Sessions
* Masterworks Sessions
* Exhibitor Forums
* Poster Sessions
* SC Global Sessions

Virtual Attendees will access the above sessions via collaborative technologies that provide two-way audio and video connections to the conference. Attendees will receive a limited or open source license for the collaboration software. Testing and training will be provided so that attendees can successfully participate.

An added feature of SCDesktop this year is the introduction of Time Delayed Broadcasting. All programs that are offered to our virtual attendees will be broadcast again twelve (12) hours later. This time delay will allow our European and Asian audience to enjoy the programs at a more convenient time.

More Information: http://sc06.supercomputing.org/conference/scglobal.php

Globus turns 10! Enjoy the happy Birthday


From Ian Foster, the Globus pioneer, in his own words, on the event of the 10th birthday of Globus.
The GlobusWORLD conference being held (jointly with GridWorld and the Open Grid Forum) this week in Washington, D.C., is a significant milestone for those involved in the development and use of the Globus open source Grid software. The reason is that it was 10 years ago (to be precise, on Aug. 21, 1996) that Carl Kesselman and I received our first funding for work on Globus, from DARPA. Gary Minden and Mike St. Johns were our enlightened program managers, followed by Gary Koob. I must also recognize the support of Bob Aiken, Tom Kitchens and, especially, Mary Anne Scott, then all at DoE.

Given this milestone, I will spend some time here recapping history and reflecting on where we have come and what we have learned.

A Little History

10 years is a long time: What on earth have we been doing over that period? Let's revisit some of the highlights.

The emergence of high-speed networks in the 1990s led to an awareness that the Internet could allow for more interesting applications than e-mail and file transfer. (Len Kleinrock had envisioned this possibility back in 1969, but it took a while to get there!) Efforts like the U.S. Gigabit testbed project, led by Bob Kahn, and the Supercomputing'95 I-WAY effort, led by Tom DeFanti and Rick Stevens, helped build awareness of these opportunities. This era also saw pioneering efforts such as the NSF Metacenter, led by Charlie Catlett and Larry Smarr, and Legion, led by Andrew Grimshaw. However, for the most part, every application was constructed from scratch.
Read the complete article at Gridtoday.

State of the Community grids and their future

A few days ago I wrote about Wolfgang Gentzsch, because one of his old articles inspired me. Anyway, I got an email from Gridtoday, and guess who is one of the featured writers? Wolfgang Gentzsch.
This time he writes about community grids;
During the last 12 months, we have analyzed the UK e-Science Program, the U.S. TeraGrid, Naregi in Japan, ChinaGrid, the European EGEE and the German D-Grid initiative. Our research, so far, is based on information from project Web sites, slide presentations, and from interviews with major representatives from these Grid initiatives. As an example, one of the earliest projects, with the highest funding volume and therefore one of the most important ones, is the UK e-Science Initiative. Major e-Science projects have been studied and key representatives interviewed from six e-Science Centers in the UK. The major focus of our research and of the interviews was on applications and strategic direction, government and industry funding, national and international cooperation, and strengths and weaknesses of the Grid projects.

As a result, we have compiled the following list of lessons learned and recommendations which may help others to successfully plan, implement, operate and fund similar Grid projects in the near future:

* Focus on understanding your user community and their needs. Invest in a strong communications and participations channel for leaders of that group to engage.

* Learn and keep up with what your peers have done/are doing. There is much useful experience to learn from partners.

* Instrument your services so that you collect good data about who is using which services and how. Analyze this data and learn from watching what's really going on, in addition to what users report.

* Plan for an incremental approach and lots of time talking out issues and plans. Social effects dominate in non-trivial grids.

* In any Grid project, during development as well as during operation, the core Grid infrastructure should be modified/improved only in large time cycles because all the Grid applications strongly depend on this infrastructure.

* Continuity, especially for the infrastructure part of Grid projects, is extremely important. Therefore, additional funding should be available also after the official duration of the project, to guarantee service and support and continuous improvement and adjustment to new developments.

* Close collaboration between the Grid infrastructure developers and the application developers and users is mandatory for the applications to seamlessly utilize the core Grid services of the infrastructure and to avoid application silos.

* New application grids (community grids) should utilize the components of the 'generic' Grid infrastructure to avoid re-inventing wheels and building silos.

* The infrastructure building block should be user-friendly to enable new (application) communities an easy adoption path. In addition, the infrastructure group should offer service and support for installation and operation.

* Centers of Excellence should specialize on specific services, e.g., integration of new communities, Grid operation, training, utility service, etc.

* We recommend implementing utility computing only in small steps, starting by making moderate enhancements to existing service models, and then testing utility models first as pilots. Very often, today's existing government funding models are counter-productive when establishing new and efficient forms of utility services.

* After a generic Grid infrastructure has been built, other projects should focus on an application or a specific service, to avoid complexity and re-inventing wheels.

* Reuse of software components from open-source and standards initiatives is highly recommended, especially in the infrastructure and application middleware layer. This leverages the power of the whole community.

* For interoperability reasons, focusing on software engineering methods is important, especially for the implementation of protocols and the development of standard interfaces.

* In case of more complex projects, e.g. consisting of an integration and several application or community projects, a strong management board should steer coordination and collaboration among the projects and the working groups. The management board (Steering Committee) should consist of leaders of the different projects.

* Participation of industry in this early phase has to be industry-driven. A blunt push from the outside, even with government funding, doesn't seem to be promising. Success will come only from natural needs e.g., through existing collaborations with research and industry, as a first step.

More detailed information about the study, the Grid projects, their objectives, funding, the use of the Globus Toolkit Grid middleware, applications, challenges, etc. will be presented in a follow-on article in GRIDtoday in a few weeks.
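One of the recommendations above, instrumenting your services so you collect good data about who is using which services and how, can be sketched as a minimal usage counter. This is a hypothetical illustration only; the class and service names are made up and not part of any real grid middleware:

```python
from collections import Counter

class ServiceUsageLog:
    """Minimal sketch: count which users call which grid services."""

    def __init__(self):
        self.calls = Counter()

    def record(self, user, service):
        # Tally one call of `service` by `user`.
        self.calls[(user, service)] += 1

    def top(self, n=3):
        # Most frequently used (user, service) pairs, for analysis.
        return self.calls.most_common(n)

log = ServiceUsageLog()
log.record("alice", "job-submit")
log.record("alice", "job-submit")
log.record("bob", "file-transfer")
print(log.top())
```

In a real deployment the tallies would be persisted and correlated with what users report, as the recommendation suggests, but even a counter like this reveals usage patterns that surveys miss.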

Friday, September 15, 2006

2004 Predictions of a grid Technologist on the GRID!

Formerly senior director of grid computing at Sun Microsystems Inc., Wolfgang Gentzsch stated in a Computerworld article that grid computing will come in three waves. The first, well under way, primarily involves the academic research community. The second, just beginning, brings in corporations as users. The third, still some years off, will add individual consumers to the grid. At that point, the Internet will be "the grid," says Gentzsch, managing director of grid computing and networking services at MCNC Inc.
He said the second wave would be driven by commercial entities. To the question,
What's moving us into the second, corporate, wave of grid computing?
He answered "The IT vendors have their grid story in place -- IBM, Sun, Oracle and the others -- for the next generation of products that they want to ship and make money with. But there's no money in research grids, and consumer grids are far out. So the current interest is in the enterprise grid."
I think we are in the midst of the second wave, with multiple offerings of grid for rent from the very companies he mentioned.
What about the third wave? Here is his answer to the question;
What about the third grid wave, the one for consumers?
We are talking about gaming grids, where hundreds of gamers come together and use the grid for really heavy interactive and compute-intensive stuff.
Also health care. If you have a heart attack or stroke and you are within 15 minutes of a hospital, you get easy help. But in the countryside, the percentage of people dying from heart attacks is at least 50% higher than in the cities. Now, a grid reduces distances to zero, so the country doctor has immediate access to all these expensive machines, which have digital heartbeats, in the hospital. If that hospital is too busy, the health care grid broker selects another resource that is least loaded.
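The broker behavior Gentzsch describes, falling back to the least-loaded resource when the nearest hospital is too busy, can be sketched roughly as follows. Everything here (the `Resource` record, the hospital names, the load figures) is a hypothetical illustration of the selection rule, not any actual broker implementation:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    load: float      # fraction of capacity in use, 0.0-1.0
    available: bool  # whether the resource is accepting work

def select_least_loaded(resources):
    """Return the available resource with the lowest load, or None."""
    candidates = [r for r in resources if r.available]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r.load)

hospitals = [
    Resource("City General", load=0.95, available=True),          # too busy
    Resource("County Medical", load=0.40, available=True),
    Resource("University Hospital", load=0.10, available=False),  # offline
]
print(select_least_loaded(hospitals).name)  # County Medical
```

Real grid brokers weigh many more factors (network latency, data locality, policy), but least-loaded selection captures the core idea of his health-care example.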
Much of what he predicted has come true, and the picture is still evolving. Read the complete article here

Tuesday, September 12, 2006

The GRIDtoday 2006 Readers' and Editors' Choice Awards

GRIDtoday has announced the winners of its inaugural Readers' and Editors' Choice Awards at the IDG GridWorld conference in Washington, DC.

GRIDtoday designated two categories of awards: (1) Readers' Choice, where winners have been determined by a random poll of GRIDtoday readers, and (2) Editors' Choice, where winners have been determined by votes of an advisory group of recognized luminaries, contributors and editors influential in Grid and Service-Oriented IT.

Grid is being used in fields such as academic research, automotive and
aerospace, bio-IT, humanities research, security and defense, financial
services, government, manufacturing, oil & gas, pharmaceuticals,
telecommunications and others.


The GRIDtoday 2006 Readers' and Editors' Choice Awards

Most useful and innovative Grid SOLUTION OR BUILDING BLOCK available today
Readers' Choice Award Recipient: IBM
Editors' Choice Award Recipient: Platform Computing


Most innovative Grid MIDDLEWARE solution
Readers' Choice Award Recipient: Platform Computing
Editors' Choice Award Recipient: Univa Corporation


Most innovative STORAGE solution for a Grid implementation
Readers' Choice Award Recipient: IBM
Editors' Choice Award Recipient: eXludus


Most innovative NETWORKING solution for a Grid implementation
Readers' Choice Award Recipient: Voltaire
Editors' Choice Award Recipient: Cisco Systems


Most innovative Grid MANAGEMENT OR PERFORMANCE IMPROVEMENT SOFTWARE
for a Grid implementation
Readers' Choice Award Recipient: Altair Engineering
Editors' Choice Award Recipient: United Devices


Best price / performance Grid SOLUTION OR BUILDING BLOCK available
today
Readers' Choice Award Recipient: Sun Microsystems
Editors' Choice Award Recipient: IBM


Best price / performance MIDDLEWARE solution for a Grid implementation
Readers' Choice Award Recipient: Platform Computing
Editors' Choice Award Recipient: (Tie)
United Devices
Digipede Technologies


Best price / performance STORAGE solution for a Grid implementation
Readers' Choice Award Recipient: IBM
Editors' Choice Award Recipient: Terrascale Technologies


Best price / performance NETWORKING solution for a Grid implementation
Readers' Choice Award Recipient: Cisco Systems
Editors' Choice Award Recipient: Voltaire


Commercial organization demonstrating the most innovative Grid implementation in EARTH SCIENCES / ENERGY
Readers' Choice Award Recipient: Royal Dutch / Shell Group
Editors' Choice Award Recipient: BP Global

Commercial organization demonstrating the most innovative Grid implementation in LIFE SCIENCES (includes Pharma)
Readers' Choice Award Recipient: Novartis AG
Editors' Choice Award Recipient: Johnson & Johnson


Commercial organization demonstrating the most innovative Grid implementation in MANUFACTURING
Readers' Choice Award Recipient: The Boeing Company
Editors' Choice Award Recipient: Airbus


Commercial organization demonstrating the most innovative Grid implementation in ENTERTAINMENT
Readers' Choice Award Recipient: Pixar Animation Studios
Editors' Choice Award Recipient: DreamWorks SKG


Most Innovative Grid Implementation in FINANCIAL SERVICES
Readers' Choice Award Recipient: Wachovia Corporation
Editors' Choice Award Recipient: JPMorgan Chase


Commercial organization demonstrating the most innovative Grid implementation for BUSINESS PROCESS EFFICIENCY
Readers' Choice Award Recipient: Google
Editors' Choice Award Recipient: eBay


Research organization demonstrating the most innovative Grid implementation in support of EARTH SCIENCES / ENERGY applications
Readers' Choice Award Recipient: TeraGrid
Editors' Choice Award Recipient: D-Grid Initiative


Research organization demonstrating the most innovative Grid implementation in LIFE SCIENCES (includes Pharma)
Readers' Choice Award Recipient: RENCI TeraGrid BioPortal
Editors' Choice Award Recipient: Biomedical Informatics Research
Network (BIRN)


Research organization demonstrating the most innovative Grid implementation in GOVERNMENT research
Readers' Choice Award Recipient: CERN
Editors' Choice Award Recipient: (Tie)
TeraGrid
UK e-Science Programme


Research Grid initiative that you feel has earned the reputation of overall 'Top Research Grid'
Readers' Choice Award Recipient: CERN
Editors' Choice Award Recipient: TeraGrid

Thursday, August 31, 2006

Another player in grid technology opens up to open source

I was browsing through the Grid Meter at InfoWorld when I noticed that Greg Nawrocki had written about ActiveGrid moving its tooling to the Eclipse environment. But the headline, "Good News For Open Source Grid," misleads in my humble opinion: most of the best grid systems and research have been open source all along. ActiveGrid itself is based on LAMP (Linux, Apache, MySQL and PHP/Python), and if you read through the site you will notice that the author also thinks LAMP is the best platform for grid computing.
Still, the Grid Meter surfaces some thought-provoking information and is a page worth browsing for anyone interested in grid technology. The success of grid, so far, belongs to the open source movement.