Friday, December 21, 2007

Grid.org Open Source Community Shines

CHICAGO--(BUSINESS WIRE)--Grid.org, the online community for open-source cluster and grid software, grew to 481 members and recorded more than 900 downloads of the free open-source Cluster Express beta software in its first month of availability.

Grid.org was launched Nov. 12, 2007, to provide a single aggregation point for information and interaction by the community of users, developers and administrators interested in a complete grid and cluster software stack. The site's primary open-source project, Cluster Express, provides comprehensive cluster scheduling and management by integrating proven, best-of-breed, open-source components into a seamless package that is easy to install and use.

"Response to the initial Grid.org launch and to our call for participation in the Cluster Express beta program has been gratifying, said Steve Tuecke, co-founder and chief technology officer at Univa UD, the Grid.org site sponsor. Obviously we tapped into pent-up demand for a complete, integrated, open-source approach to cluster and grid computing.

Univa UD announced the initial beta program for Cluster Express last month as a way to let users positively impact and shape development of the software, expected to be generally available in early 2008. Today, the company announced availability of the second beta version of Cluster Express on Grid.org.

"We expect the release of the new beta version to drive more participation in the community, as more and more people begin to install and use the technology. With the excellent input we're getting from users, administrators and developers, there is no doubt we will be able to integrate exactly the components and features this market wants in subsequent releases," Tuecke said.

Grid.org is expanding to meet community requirements based on input from site visitors. Recently, the site added a Wiki that allows shared authoring of open-source grid and cluster content by the Grid.org community. Grid.org also plans to support code-sharing, allowing Cluster Express developers to contribute to the software and users to easily share enhancements and applications. This capability, along with access to the Cluster Express source repository and versioning control system, will be available to members in the first quarter of 2008. Other planned enhancements include an interactive map of cluster implementations worldwide, to visually display and provide metrics on the landscape of cluster users at a global level.

About Grid.org

Grid.org is an online community for open source cluster and grid software users, administrators and developers. The site's mission has evolved to focus on providing a single location where open-source cluster and grid information can be aggregated, so that people with similar interests can easily exchange information, experiences and ideas related to the complete open-source cluster software stack. Established in 2001, Grid.org operated as a public-interest Internet research grid for over six years and has now broadened its reach to encourage the use of open-source technologies for grid computing at large.


Tuesday, December 18, 2007

A Real Life Death Star, A Black Hole Blasts A Galaxy


A powerful jet from a supermassive black hole is blasting a nearby galaxy, according to new findings from NASA observatories. This never-before-witnessed galactic violence may have a profound effect on planets in the jet's path and trigger a burst of star formation in its destructive wake.

Known as 3C321, the system contains two galaxies in orbit around each other. Data from NASA's Chandra X-ray Observatory show both galaxies contain supermassive black holes at their centers, but the larger galaxy has a jet emanating from the vicinity of its black hole. The smaller galaxy apparently has swung into the path of this jet.

This "death star" galaxy was discovered through the combined efforts of both space and ground-based telescopes. NASA's Chandra X-ray Observatory, Hubble Space Telescope, and Spitzer Space Telescope were part of the effort. The Very Large Array telescope, Socorro, N.M., and the Multi-Element Radio Linked Interferometer Network (MERLIN) telescopes in the United Kingdom also were needed for the finding.

"We've seen many jets produced by black holes, but this is the first time we've seen one punch into another galaxy like we're seeing here," said Dan Evans, a scientist at the Harvard-Smithsonian Center for Astrophysics and leader of the study. "This jet could be causing all sorts of problems for the smaller galaxy it is pummeling."

Jets from supermassive black holes produce high amounts of radiation, especially high-energy X-rays and gamma-rays, which can be lethal in large quantities. The combined effects of this radiation and particles traveling at almost the speed of light could severely damage the atmospheres of planets lying in the path of the jet. For example, protective layers of ozone in the upper atmosphere of planets could be destroyed.

Jets produced by supermassive black holes transport enormous amounts of energy far from the black holes themselves, enabling them to affect matter on scales vastly larger than the size of the black hole. Learning more about jets is a key goal for astrophysical research.

"We see jets all over the Universe, but we're still struggling to understand some of their basic properties," said co-investigator Martin Hardcastle of the University of Hertfordshire, United Kingdom. "This system of 3C321 gives us a chance to learn how they're affected when they slam into something - like a galaxy - and what they do after that."

The effect of the jet on the companion galaxy is likely to be substantial because the galaxies in 3C321 are extremely close, separated by only about 20,000 light years. That is roughly the same distance as the Earth lies from the center of the Milky Way galaxy.

A bright spot in the Very Large Array and MERLIN images shows where the jet has struck the side of the galaxy, dissipating some of the jet's energy. The collision disrupted and deflected the jet.

Another unique aspect of the discovery in 3C321 is how relatively short-lived this event is on a cosmic time scale. Features seen in the Very Large Array and Chandra images indicate that the jet began impacting the galaxy about one million years ago, a small fraction of the system's lifetime. This means such an alignment is quite rare in the nearby universe, making 3C321 an important opportunity to study such a phenomenon.

It is possible the event is not all bad news for the galaxy being struck by the jet. The massive influx of energy and radiation from the jet could induce the formation of large numbers of stars and planets after its initial wake of destruction is complete.

The results from Evans and his colleagues will appear in The Astrophysical Journal. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.

Additional information and images are available at:
http://chandra.harvard.edu
and
http://chandra.nasa.gov

Monday, December 17, 2007

Firewire (S3200) Reaches 3.2 Gigabit Transfer Rate

Good news for those FireWire-based grids. I have a small experimental cluster running on FireWire that I started years ago; it is so old that it is only FireWire 400! Maybe things will change soon.

Dallas, Dec. 12, 2007 -- The 1394 Trade Association today announced a new specification to quadruple the speed of FireWire to reach 3.2 gigabits per second.

The new electrical specification, known as S3200, builds upon the IEEE 1394b standard, preserving all the advantages of FireWire while offering a major and unprecedented boost in performance. The new speed uses the cables and connectors already deployed for FireWire 800 products, making the transition forward easy and convenient for 1394 product vendors and their customers. Because the 1394 arbitration, data, and service protocols were not modified for S3200, silicon and software vendors can deploy the faster speed FireWire quickly and with confidence that it will deliver its full potential performance. The S3200 specification is expected to be ratified by early February.

FireWire 800 products deployed since 2003 have proven that IEEE 1394b delivers outstanding performance. Operating without polling, without idle times, and without continuous software management, FireWire 800 efficiently delivers more than 97 percent of its bit rate as payload -- not overhead. FireWire 800 hard drives today can easily move over 90 megabytes per second. S3200 preserves 100 percent of the 1394b design efficiency and will deliver extremely high payload speeds reaching nearly 400 megabytes per second. Other interface technologies struggle to deliver half their advertised bit rate to the user, even under optimal conditions.
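As a quick sanity check on those payload numbers, here is a minimal back-of-the-envelope sketch using the roughly 97 percent efficiency figure quoted above; nothing here comes from the 1394 Trade Association, it is just unit conversion:

```python
# Rough payload estimates for FireWire 800 and S3200,
# assuming the ~97 percent protocol efficiency quoted above.

def payload_mb_per_s(bit_rate_mbps, efficiency=0.97):
    """Convert a link bit rate in megabits/s into payload megabytes/s."""
    return bit_rate_mbps * efficiency / 8  # 8 bits per byte

print(payload_mb_per_s(800))   # FireWire 800: ~97 MB/s, consistent with >90 MB/s drives
print(payload_mb_per_s(3200))  # S3200: ~388 MB/s, i.e. "nearly 400 megabytes per second"
```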

No Compromises to 1394’s Features

The S3200 specification brings FireWire to this new performance level without compromising existing features. For example, FireWire provides much more electrical power than any other interface, freeing users from inconvenient AC power adapters. FireWire products built using S3200 will directly connect to every previously released FireWire product. Alternative cable options are available to carry FireWire over long distances - 100 meters or more - even at high speeds.

Also, FireWire’s peer-to-peer architecture allows products to operate with a computer - or without one. This superior combination of features is not found in any other technology, which explains why over one billion FireWire ports have been shipped to date, on products as diverse as computers, cameras, televisions, hard drives, and musical instruments. IEEE 1394 also is deployed in vital applications in state-of-the-art aircraft and polar orbiting satellites.

S3200 Strengthens 1394’s Position in Storage, Consumer Electronics

One of the strongest markets today for FireWire is storage for computers. The best hard drives with FireWire 800 can move data almost three times as fast as the best hard drives with USB 2.0. Also, FireWire provides much more electrical power than USB, so FireWire-equipped hard drives can operate without an AC adapter, and at high rotational speeds. USB hard drives can fail to work from USB power, or require a second USB cable for power, or use the lowest-performance drive mechanisms because so little power is available.

With S3200 this power advantage for FireWire is fully preserved. S3200 also makes FireWire so fast that users will see no advantage from eSATA. Both interfaces are much faster than any modern hard drive mechanism, but eSATA does not provide electrical power to operate a drive. On a computer, an eSATA port is far less flexible than a FireWire port, because many more devices can connect to FireWire. For these reasons, S3200 makes FireWire the superior choice for future external storage products.

S3200 will also enhance FireWire’s strong position in consumer electronics A/V devices such as camcorders and televisions. Today, 100 percent of HD set top boxes provided by cable companies have FireWire ports. So do 100 models of HDTV. FireWire is the only separable interface today that can record HD programs in their full digital quality while also meeting the content protection requirements of copyright holders. Many companies are pursuing whole-home HD network solutions using FireWire - notably the HANA Alliance.

Technology development is also nearing completion to permit FireWire to operate over cable television coaxial cables, without disrupting the existing program content. With S3200, FireWire becomes fast enough to move even uncompressed HD signals over long distances at much lower cost than solutions such as HDMI.

"The S3200 standard will sustain the position of IEEE 1394 as the absolute performance leader in multi-purpose I/O ports for consumer applications in computer and CE devices," said James Snider, executive director, 1394 Trade Association. "There is a very clear migration path from 800 Megabits/second to 3.2 Gigabits/second, with no need for modifications to the standard and no requirement for new cables or connectors."

The Silicon Working Group developed the S3200 specification within the 1394 Trade Association, with participation by industry leaders including Symwave, Texas Instruments, LSI Corporation, and Oxford Semiconductor. S3200 specifies the electrical operation of the 3.2 Gigabit mode first specified by IEEE 1394b-2002, without changing any connector, cable, protocol, or software requirements. Based on the working group's progress, the Trade Association has set a January 2008 date for the specification to enter a ratification process.

The 1394 Trade Association is a worldwide organization dedicated to the advancement and enhancement of the IEEE 1394 audio video standard. For more information, visit www.1394ta.org

Contact:
Dick Davies
415 652 7515
ipra@mindspring.com


Saturday, December 15, 2007

Microsoft challenges VMware with Early Release of Hyper-V

Microsoft announced that a beta version of its virtualization software is ready sooner than expected; the release had been scheduled for early next year. On Thursday, the software maker made available a beta version of its Hyper-V hypervisor technology.
Virtualization is a technology that allows software applications and operating systems to be separated from their underlying hardware and then shared across servers and storage infrastructure. Businesses use virtualization technology to better manage large pools of data on fewer hardware systems.
The new technology also allows for closer interoperability between Windows and Linux than seen in the past, and puts Microsoft in competition with VMware, whose virtualization technology has been the leading player so far.


Thursday, December 13, 2007

Getting Coherence: Free Data Grid Webinar

If you work with grids, then you know that it is not a simple matter. This year I completed two grid-based projects. One was a research grid, very large scale and completely based on Linux; the other was a data grid, an Oracle deployment running on Windows!
The first and larger one was easier; by the later stages we did not even call it grid development or grid implementation. It was more like rabbit breeding: easy and almost no effort.
Then there is the Oracle data grid running on Windows. I would not say it went smoothly, but one thing I learned is that Gates and company have put a lot of effort into developing Windows Server 2003. Still a way to go, but acceptable. Oracle is one of my areas of expertise, so I had no issues there. One big hardware vendor, however, was a total disappointment. Still, we got through the project unscathed.
Now to the point of this post. Oracle has prepared a webinar to introduce the Oracle Coherence™ Data Grid. You will be able to learn the following (a small conceptual sketch of data partitioning follows the list):

  • How Oracle Coherence capabilities function, such as coherent in-memory caching, dynamic data partitioning, and parallel query and process execution, and how they are being mapped onto grid infrastructures.
  • How Data Grid capabilities function, how organizations are using them to solve complex computing problems and examples of how organizations are leveraging this on a global scale.
  • How easy it is to deploy Oracle Coherence, which is generally operational within hours.
  • How Oracle Coherence is fully configurable, providing total flexibility to change caching topology without code changes.
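To make the "dynamic data partitioning" idea concrete, here is a tiny conceptual sketch. This is not the Oracle Coherence API (Coherence is a Java product); it is only a simplified illustration of my own showing how a data grid can hash keys to owning nodes so that clients see one logical cache regardless of where the data physically lives:

```python
import hashlib

class PartitionedCache:
    """Toy hash-based partitioning across grid nodes.
    NOT the Oracle Coherence API -- just the underlying concept."""

    def __init__(self, nodes):
        self.nodes = list(nodes)                    # e.g. ["node-a", "node-b", "node-c"]
        self.storage = {n: {} for n in self.nodes}  # each node holds its own partition

    def _owner(self, key):
        digest = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

    def put(self, key, value):
        self.storage[self._owner(key)][key] = value

    def get(self, key):
        return self.storage[self._owner(key)].get(key)

cache = PartitionedCache(["node-a", "node-b", "node-c"])
cache.put("order:42", {"total": 99.5})
print(cache.get("order:42"))   # the caller never needs to know which node owns the key
```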
Topic: Getting Coherence: Free Data Grid Webinar
Date: Tuesday, December 18, 2007
Time: 11:00 a.m. EST
So if you are interested in grids, grid technology and/or Oracle, please follow this link and register. You are sure to learn a thing or two.

Monday, December 10, 2007

SoftLayer Continues to Better Itself

December 5, 2007, Dallas, TX – Today SoftLayer, the industry’s fastest-growing web hosting provider, said that its string of significant product and service releases over the past months highlights the company’s vision and its course for the future. Since September, SoftLayer has announced a number of noteworthy launches and milestones.

“All of our recent additions are part of our overall strategy to automate managed services that have traditionally been performed manually and, as a result, have been labor intensive and costly,” said Lance Crosby, SoftLayer Chief Executive Officer. “We firmly believe the only way to scale managed services is to automate installation, management, and monitoring.”

Mr. Crosby said that SoftLayer’s portfolio of services will continue to expand rapidly, and that the company has many other major growth opportunities on the way. “We’re well on track to have 15,000 servers deployed by the end of the first quarter in 2008. And I’m pleased to say that we’re launching a site in Seattle, our first additional geographic location,” he said.

SoftLayer has formed a 5-year, $16-million deal to deploy as many as 10,000 servers at Internap’s Seattle data center. The new site will give SoftLayer customers geographic diversity and allow them to locate their servers for optimal speed and performance. Softlayer will provide global load balancing between locations via two independent 10-gigabit high-performance IP network circuits connecting the Seattle and Dallas data centers.

SoftLayer’s latest automated services extend the flexibility and control its customers have over their systems. RescueLayer, a server recovery solution, restores failed servers by rebooting the systems into a RAM-disk rescue kernel with the server’s regular network access. A new automated, hardware-driven load-balancing solution can distribute work across servers, improving scalability and provisioning. Additionally, StorageLayer, a comprehensive storage and backup service, integrates multiple storage technologies including iSCSI and EVault. StorageLayer will soon be expanded with scalable virtualization technologies, disaster recovery options, and several enterprise services that take advantage of SoftLayer’s new Seattle site.

SoftLayer’s recent additions also have included relationships with best-in-class technologies and leaders. SoftLayer released PassMark® BurnInTest™ hardware testing at no charge to customers. It also began providing Urchin by Google Analytics, an enterprise-class website analysis package. The company also announced it became a Microsoft Gold Partner with certification in Networking Infrastructure Solutions, Advanced Infrastructure Solutions, Hosting Solutions Specialization and Storage Solutions Specialization.

“We’re proud of how much we’ve brought to market and how quickly,” said Mr. Crosby. “We’ve built an enormous amount of momentum. And we’re picking up more speed every day.”

Friday, November 30, 2007

FortressITX Selects 3Tera’s AppLogic Grid OS For Utility Computing/Grid Solutions

CLIFTON, NJ--(BUSINESS WIRE)--FortressITX, the premier boutique Internet infrastructure outsourcing partner for the enterprise, announced today it has launched a new grid hosting platform, Dynamic Grid, and service plans based on 3Tera's award-winning AppLogic Grid Operating System. FortressITX customers can now take advantage of a highly available, highly scalable grid infrastructure to host their online applications.

"With utility computing and virtualization taking hold as a viable IT solution, we see early adopters benefiting the most through shorter time to market and dramatic reduction of deployment and management costs," says Jason Silverglate, President and CEO of FortressITX. Now, by partnering with 3Tera and leveraging their AppLogic grid OS, FortressITX solves all of that with a highly available and scalable way to deploy additional infrastructure in seconds to handle customer's demand for high traffic."

"We are happy to welcome FortressITX to the AppLogic family of partners, because we share the belief that online businesses should be able to easily manage, deploy and scale their applications without being constrained by the complexity and cost of owning and operating their own infrastructure," said Bert Armijo, Senior Vice President, Sales, Marketing and Product Management, 3Tera, Inc. "The new partnership enables FortressITX customers to run Web applications on demand to compete cost-effectively in today's online marketplace."

For more information, visit http://www.fortressITX.com/grid.

Rackable Systems, Inc. Passes $1 Billion Mark in Lifetime Sales

FREMONT, Calif.--(BUSINESS WIRE)--Rackable Systems, Inc. (NASDAQ:RACK), a leading provider of servers and storage products for large-scale data centers, today announced that the corporation has reached the $1 billion mark in lifetime sales of products and services.

Founded in 1999, Rackable Systems has become the fastest growing x86 server provider among the top five providers in North America, based on analyst firm Gartner's comprehensive review of the market (Servers Quarterly Statistics Worldwide: Database, November 2007), having deployed hundreds of thousands of systems worldwide since its inception.

"Hitting the billion dollar mark is a true milestone and a tribute to Rackable Systems' leadership in the markets we serve," said Mark J. Barrenechea, president and CEO of Rackable Systems. "Our Eco-Logical™ servers and storage have been deployed for hundreds of customers around the world, and our growth is a testament to the demand for our energy-efficient solutions."

Rackable Systems' award-winning server and storage products can help reduce data center operational costs and improve total cost of ownership. With patented innovations in rack density, DC power distribution and cooling designs, Rackable Systems products help improve efficiency and performance in even the most complex data centers.

Friday, November 16, 2007

First Ever Green500 List Released at SC07

Blue Gene/P Solution and eServer Blue Gene Solution top the list, taking nine of the first ten spots. Almost all the supercomputers that I have access to made the list! Proud to be green and computing!

BLACKSBURG, Va., Nov. 15 -- Virginia Tech released the inaugural Green500 List this morning at the Supercomputing 2007 (SC07) conference in Reno, Nev.

"The Green500 List is intended to serve as a ranking of the most energy-efficient supercomputers in the world and as a complementary view to the Top500 List," said Wu Feng, associate professor in the Departments of Computer Science and Electrical and Computer Engineering at Virginia Tech.

All systems on the Green500 List are ranked by MFLOPS/Watt (million floating-point operations per second per watt). The MFLOPS numerator is the reported LINPACK sustained (Rmax) value recorded by the Top500 List. (LINPACK is a linear algebra software package used to create equations to challenge computers.) The watts denominator is either a direct measurement of the system running the LINPACK benchmark at Rmax load or a peak power estimate based upon machine specifications.

For now, systems must first place in the current Top500 List in order to be considered for the Green500. Of the Top500 machines, more than 200 machines directly reported their measured power for the Green500 List. In cases where measured power was not provided, the Green500 List used peak power, as estimated by the Green500 team, based on the best available specifications for the systems in the Top500 List.

The November 2007 Green500 List is a combined ranking of all 500 machines based on the best (highest) MFLOPS/Watt rating available from either direct measurements or peak power estimations. Because peak power numbers do not necessarily reflect power consumption under load, the Green500 team specifically discourages direct comparisons of measured and peak values in the current Green500.
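In other words, the ranking metric is simply the reported LINPACK Rmax divided by the power draw. A minimal sketch with made-up numbers:

```python
# Green500 metric: MFLOPS per watt.  The numbers below are hypothetical, for illustration only.

def mflops_per_watt(rmax_gflops, power_watts):
    """Rmax is reported in GFLOPS on the Top500 list; convert to MFLOPS/W."""
    return rmax_gflops * 1000 / power_watts

# A hypothetical machine: Rmax of 100,000 GFLOPS drawing 500 kW.
print(mflops_per_watt(100_000, 500_000))   # 200.0 MFLOPS/W
```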

"As this list is the first attempt of its kind, the rankings are open to interpretation by the media, associated vendors, and the general community," said Kirk Cameron, associate professor of computer science at Virginia Tech. "The Green500 team encourages fair use of the list rankings to promote energy efficiency in high-performance systems. We discourage use of the list to disparage a particular vendor, system, or user," he concluded.

The list itself and the methodology used to rank the systems are works-in-progress, Feng said, adding that this will evolve over time to ensure accuracy and more closely reflect energy efficiency in the fast-paced, ever-changing, high-performance community.

HPCwire Readers and Editors Name Rackable Systems the “Best Price/Performance HPC Hardware Solution” in the Industry

FREMONT, Calif., November 13, 2007 – Rackable Systems, Inc. (NASDAQ: RACK), a leading provider of servers and storage for large-scale data centers, today announced that the company has been awarded a 2007 HPCwire Readers’ Choice Award in the category of Best Price/Performance HPC Hardware Solution. The award honors Rackable Systems’ broad range of high-performing Eco-Logical™ servers and storage, designed to reduce operational expenses and improve TCO in even the most demanding data center environments.

“Rackable Systems is honored to be recognized for delivering compelling x86 solutions to the HPC community,” said Mark J. Barrenechea, president and CEO of Rackable Systems. “Our build-to-order model allows us to tailor each solution to the price/performance needs of each customer, helping to enable the Eco-Logical data center.”

Rackable Systems’ innovative x86 server and storage designs help reduce power and cooling costs while maximizing performance, providing an ideal solution for HPC deployments. With products ranging from Eco-Logical rack-mount servers to the company’s RapidScale clustered storage and its award-winning ICE Cube modular data center, Rackable Systems continues to provide solutions that are customized and built to order to meet the unique needs of any data center. Carefully designed to reduce power consumption and increase reliability, Rackable Systems solutions enable data centers to “go green” without compromising application performance.

“At the University of Florida, we have been pleased with the power and space efficiency provided by our Rackable Systems servers and storage. With their high-density racks and dual-core servers, we were able to house a cluster that would have required three times the space and twice the power of competing solutions,” said Dr. Charles Taylor, senior HPC systems engineer and associate director of the University of Florida HPC Center. “We applaud Rackable Systems for bringing high-performance, environmentally-friendly solutions to HPC environments like our own.”

The coveted HPCwire Readers’ Choice Awards are determined through online polling of the global HPCwire audience, along with a rigorous selection process involving HPCwire editors and industry luminaries. The awards are an annual feature of the publication and constitute prestigious recognition from the HPC community. The 2007 Readers’ Choice awards generated a record number of responses from the several hundred thousand readers who access HPCwire each week.

More information on these awards can be found at the HPCwire website (http://www.HPCwire.com).
Rackable Systems.

SC07, The Super Computing Conference!

I finally managed to sneak in to SC07, the Supercomputing Conference that I attend every year (SC06, SC05, etc.). I almost missed the opportunity due to prior commitments, but two days are better than nothing, and I have a load of information to write about. As usual I have learned a lot already, and there will be more by the end of the day.
SC07, sponsored by ACM and the IEEE Computer Society, will showcase how high-performance computing, networking, storage and analysis lead to advances in research, education and commerce. This premier international conference includes technical and education programs, workshops, tutorials, an exhibit area, demonstrations and hands-on learning. For more information, please visit http://sc07.supercomputing.org/.

Wednesday, November 14, 2007

SUN Solaris on DELL PowerEdge servers.


"We have 12 million (Solaris) licenses in the marketplace, and a majority of them aren't running on Sun hardware," Jonathan Schwartz, Sun's chief executive, said during a keynote speech at Oracle OpenWorld in San Francisco.

The multi-year distribution agreement was announced live from Oracle OpenWorld, where Sun President and CEO, Jonathan Schwartz and Dell Chairman and CEO, Michael Dell, are keynote speakers. With this announcement, Dell is expanding the range of enterprise-class operating systems it offers to its customers and Sun is expanding the reach of its Solaris OS.

As part of the relationship, Dell and Sun will cooperate on system certification and the development of offerings based on Solaris and Dell solutions. In addition, Dell and Sun have agreed to work together to secure support from key ISVs for Solaris on Dell PowerEdge servers. Both companies will work to ensure the combination of Dell PowerEdge servers and the Solaris OS delivers customer choice and value for applications that demand reliability, security, scalability, performance and integrated virtualization.

"Dell's offering of Solaris redefines the market opportunity for both companies," said Jonathan Schwartz, president and CEO, Sun Microsystems. "The relationship gives Dell broader reach into the global free software community with Solaris and OpenSolaris and gives Sun access to channels and customers across the volume marketplace."

"Part of our focus to simplify IT means delivering customers choice and by adding Solaris to our solutions set we are able to do that," said Michael Dell, chairman and CEO, Dell.


Thursday, November 08, 2007

Bittorrent Distribution System is expanded at ibiblio

ibiblio has added four more open-source projects to its Osprey BitTorrent distribution system. Some of my favorite projects have been mirrored, and gone are the speed problems we had on ibiblio. It makes sense to get what you need without taxing the system a lot, and the more people who use the system, the greater the benefits will be. So grab your project files from the ibiblio BitTorrent system.
The ibiblio-hosted or -mirrored projects now serving over torrent.ibiblio.org include some of my favorites:

Red Hat Enterprise Linux Available On Demand on Amazon Elastic Compute Cloud

On the heels of the Red Hat 5.1 announcement, Red Hat and Amazon announced a new business deal between the two companies to provide a better product offering to customers.

Raleigh NC - November 7, 2007 - Red Hat (NYSE: RHT), the world’s leading provider of open source solutions, today announced the beta availability of Red Hat Enterprise Linux on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides resizeable compute capacity in the cloud. This collaboration makes all the capabilities of Red Hat Enterprise Linux, including the Red Hat Network management service, world-class technical support and over 3,400 certified applications, available to customers on Amazon's proven network infrastructure and datacenters.

The combination of Red Hat Enterprise Linux and Amazon EC2 changes the economics of computing by allowing customers to pay only for the infrastructure software services and capacity that they actually use. Red Hat Enterprise Linux on Amazon EC2 enables customers to increase or decrease capacity within minutes, removing the need to over-buy software and hardware capacity as a set of resources to handle periodic spikes in demand.

For more information on the offering, visit www.redhat.com/solutions/cloud.


Red Hat 5.1 ready for standalone systems, virtualized systems, appliances and web-scale "cloud" computing environments.

Raleigh NC - November 7, 2007 - Red Hat (NYSE: RHT), the world’s leading provider of open source solutions, today announced the availability of Red Hat Enterprise Linux 5.1, with integrated virtualization. This release provides the most compelling platform for customers and software developers ever, with its industry-leading virtualization capabilities complementing Red Hat's newly announced Linux Automation strategy. It offers the industry’s broadest deployment ecosystem, covering standalone systems, virtualized systems, appliances and web-scale "cloud" computing environments.

Red Hat Enterprise Linux 5.1 virtualization delivers considerably broader server support than proprietary virtualization products, and up to twice the performance. This allows greater server consolidation and eliminates a key obstacle to deploying virtualization more widely. And Red Hat Enterprise Linux customers benefit from one of the industry's largest and fastest-growing set of certified applications.

Red Hat Enterprise Linux's deployment flexibility uniquely allows customers to deploy a single platform, virtual or physical, small or large, throughout their enterprise. By providing one platform that spans the broadest range of x86, x86-64, POWER, Itanium and mainframe servers, regardless of size, core count or capacity, customers can gain dramatic operational and cost efficiencies when compared to proprietary solutions. And fully integrated virtualization, included at no additional cost, amplifies these benefits. Notably, Red Hat Enterprise Linux 5.1 provides enhanced support for virtualization of Microsoft Windows guests, providing significant performance improvements for Windows XP, Windows Server 2000, 2003 and Windows 2008 beta guests.

Red Hat works closely with its hardware partners to lead the industry in providing support for new hardware features, an advantage unique to the open source development model. This is reflected in support for features such as Nested Page Tables in the new release.

"Our initial testing indicates that Red Hat Enterprise Linux virtualization delivers significant performance gains for our compute intensive applications and should provide an additional layer of abstraction that will help us manage multiple competing priorities," said Derek Chan, Head of Digital Operations for DreamWorks Animation.

"With Red Hat Enterprise Linux virtualization, customers can easily deploy any application, anywhere at anytime," said Paul Cormier, executive vice president, Worldwide Engineering at Red Hat. "Other virtualization products don't scale to support large numbers of cores or CPUs, which limit customers’ ability to utilize their infrastructure, or force customers to deploy multiple virtualization platforms. With Red Hat Enterprise Linux, customers enjoy a flexible yet consistent application environment for all of their virtualization requirements: from small servers to mainframe-class systems, for Linux and Windows servers and for even the most demanding workloads."

Red Hat Enterprise Linux 5.1 is immediately available to customers via Red Hat Network, Red Hat's management and automation platform. Red Hat Network provides customers a common platform for managing both physical and virtual servers, eliminating the need for organizations to acquire, manage and train their staff on new tools to manage virtual servers. Red Hat Network allows customers to provision, monitor and manage their servers throughout the entire lifecycle.

Red Hat Enterprise Linux virtualization includes the ability to perform live migration, allowing customers to seamlessly move running applications from one server to another, maximizing resource utilization in the face of changing business requirements. Red Hat Enterprise Linux Advanced Platform includes high-availability clustering, storage virtualization and failover software to provide enhanced levels of application availability, for both physical and virtual servers.

Utilizing multiple cores and CPUs is more important than ever with the release of Intel’s latest Quad-Core Intel® Xeon® processors and Itanium® processors. Users deploying Red Hat Enterprise Linux 5.1 and utilizing Intel® Virtualization Technology can experience even greater gains. "Red Hat and Intel have worked together in delivering a high-performance platform for virtualization," said Pat Gelsinger, senior vice president, general manager, Intel Digital Enterprise Group. "Red Hat Enterprise Linux 5.1 allows customers to scale up their virtual infrastructure to run high-performance virtual machines that utilize Intel® Virtualization Technology and all the processing power of the Quad-Core Intel® Xeon® processors and high-end Itanium® servers, without the overhead seen in traditional virtualization environments."

For more information about Red Hat Enterprise Linux 5, visit www.redhat.com/rhel.

Wednesday, November 07, 2007

Record-Setting Fifth Planet Found Orbiting Nearby Star, 55 Cancri, in the constellation Cancer.

55 Cancri is 41 light years away and can be seen with the unaided eye or binoculars.
The newly discovered planet weighs about 45 times the mass of Earth and may be similar to Saturn in its composition and appearance. It is the fourth planet from 55 Cancri and completes one orbit every 260 days. Its location places it in the "habitable zone," a band around the star where the temperature would permit liquid water to pool on solid surfaces. Its distance from the star is approximately 116.7 million kilometers (72.5 million miles), slightly closer than Earth is to our sun, but it orbits a star that is slightly fainter.
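As a rough cross-check of the quoted orbit, Kepler's third law ties the 260-day period and the 116.7-million-kilometer distance to the star's mass. The sketch below is my own back-of-the-envelope calculation, assuming a circular orbit; it comes out near 0.9 solar masses, which fits the slightly fainter, roughly Sun-like star described above:

```python
# Kepler's third law (planet mass neglected): M [solar masses] ~ a^3 / P^2
# with a in astronomical units and P in years.

AU_KM = 1.496e8           # kilometres per astronomical unit
a = 116.7e6 / AU_KM       # semi-major axis from the distance quoted above (~0.78 AU)
P = 260 / 365.25          # orbital period in years

mass_solar = a**3 / P**2
print(round(a, 2), round(mass_solar, 2))   # ~0.78 AU, ~0.94 solar masses
```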

"The gas-giant planets in our solar system all have large moons," said Debra Fischer, an astronomer at San Francisco State University and lead author of a paper that will appear in a future issue of the Astrophysical Journal. "If there is a moon orbiting this new, massive planet, it might have pools of liquid water on a rocky surface."

Fischer and University of California, Berkeley, astronomer Geoff Marcy, plus a team of collaborators discovered this planet after careful observation of 2,000 nearby stars with the Shane telescope at Lick Observatory located on Mt. Hamilton, east of San Jose, Calif., and the W.M. Keck Observatory in Mauna Kea, Hawaii. More than 320 velocity measurements were required to disentangle signals from each of the planets.

"This is the first quintuple-planet system," said Fischer. "This system has a dominant gas giant planet in an orbit similar to our Jupiter. Like the planets orbiting our sun, most of these planets reside in nearly circular orbits."

The first planet orbiting 55 Cancri was discovered by Marcy and Butler in 1996; the Jupiter-sized planet made 55 Cancri only the fourth normal star known to have an exoplanet. That planet orbits its star every 14.6 days.

The second, third and fourth were discovered over the next eight years.


Read more at NASA JPL

Tuesday, November 06, 2007

Continuous Data Protection for Linux with SteelEye Data Replication for Linux v6

Palo Alto, CA - November 5, 2007 - SteelEye Technology® Inc., a leading provider of award-winning data protection, high availability and disaster recovery products for Linux and Windows, announced today the release of SteelEye Data Replication for Linux v6. The new product is the first on Linux to combine continuous data protection (CDP) features such as "any point in time rewind" with comprehensive data replication capabilities that support multiple replication targets and allow data mirrors to be combined across any combination of LANs and WANs. The combination of CDP with off-site replication assures businesses that critical data can be restored in case of data corruption, viruses, user errors, or in the face of large-scale physical disasters.

Any Point in Time Rewind
"Any Point in Time Rewind" is a vital CDP element that rewinds data back in time to any moment before data loss occurred. In minutes, data can be restored and business is back to normal. Adding such capabilities to an already robust data replication solution, SteelEye Data Replication also provides organizations with a CDP option that enhances protection of more than one backup of critical business data, whether on-site or remote. Immediately upon noticing data integrity issues, users can mount the replicated data on a backup volume and then move backward and forward through the data stream via time stamps or other user-defined bookmarks until the optimal set of data is constructed. That data are then placed into service and normal operations resume.

Integrated Data Recovery Wizard
The process of managing data recovery is handled through a built-in Data Recovery Wizard. This simple-to-use tool guides users through the building of a new temporary dataset. It validates the dataset for consistency via appropriate tools and then loops back through these steps as needed until the optimal dataset is built. An intelligent binary search technique speeds the rebuilding process to optimize recovery time. Once completed, the wizard places the dataset back into service, onto the production server, and resumes normal business operations.
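The release does not spell out the algorithm, but the "intelligent binary search" idea can be sketched roughly like this (a sketch of my own, assuming rewind points are time-ordered and that everything after the corruption point fails the consistency check): bisect until you find the latest point that still yields a valid dataset.

```python
# Sketch: find the latest rewind point that still passes a consistency check.
# Assumes points are time-ordered and that once corruption starts, every later
# point fails the check (which is what makes a binary search valid).

def last_good_point(rewind_points, is_consistent):
    lo, hi, best = 0, len(rewind_points) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_consistent(rewind_points[mid]):
            best = rewind_points[mid]   # consistent here -- try a later point
            lo = mid + 1
        else:
            hi = mid - 1                # corrupted here -- look earlier
    return best

# Example: corruption began at 12:00, so that point and everything after fails.
points = ["09:00", "10:00", "11:00", "12:00", "13:00"]
print(last_good_point(points, lambda t: t < "12:00"))   # -> "11:00"
```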

Multiple Replication Targets
For companies who need the enhanced protection of more than one real-time backup of critical business data, SteelEye Data Replication supports multiple replication targets, which can span any combination of local and remote sites. Local data recovery is assured by continuous backup within the local data center, while asynchronous remote data replication protects against nearby disasters.

More About SteelEye Data Replication for Linux
Leveraging open source features contributed by SteelEye into the 2.6 kernel, SteelEye Data Replication for Linux provides both host-based volume replication and continuous data protection for Red Hat Enterprise Linux and Novell SLES environments. An integrated compression engine along with a block-level change-only architecture reduces network bandwidth requirements and speeds replication.
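The "block-level change-only architecture" mentioned above can be illustrated with a small sketch of my own (this is not SteelEye's implementation): split the volume into fixed-size blocks, hash each block, and send only the blocks whose hashes changed since the last pass, compressed to save bandwidth.

```python
import hashlib
import zlib

BLOCK = 4096   # block size in bytes -- an arbitrary choice for this sketch

def changed_blocks(volume: bytes, last_hashes: dict):
    """Yield (offset, compressed block) for blocks that differ from the last pass."""
    for offset in range(0, len(volume), BLOCK):
        block = volume[offset:offset + BLOCK]
        digest = hashlib.sha1(block).hexdigest()
        if last_hashes.get(offset) != digest:
            last_hashes[offset] = digest
            yield offset, zlib.compress(block)   # only changed blocks cross the wire

hashes = {}
list(changed_blocks(b"A" * 8192, hashes))                   # first pass: both blocks sent
changed = changed_blocks(b"A" * 4096 + b"B" * 4096, hashes)
print([offset for offset, _ in changed])                    # second pass: only [4096]
```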

Used by itself, SteelEye Data Replication delivers real-time backup of business-critical data across either a Wide or Local Area connection. When combined with LifeKeeper for Linux, cluster configurations can be built, ranging from low-cost two node shared nothing clusters to geographically-dispersed clusters for disaster recovery protection.

Monday, November 05, 2007

Dell Plans to Acquire EqualLogic for $1.4 Billion

Round Rock, Texas, and NASHUA, N.H.

Dell has entered into a definitive agreement to acquire EqualLogic, a leading provider of high-performance iSCSI storage area network (SAN) solutions uniquely optimized for virtualization. The acquisition will strengthen Dell’s product and channel leadership in simplifying and virtualizing IT for customers globally. iSCSI SAN technology represents the fastest growing part of the storage business.

“Our customers will be dealing with the largest increase in data we have seen in our history over the next few years,” said Michael Dell, Chairman and CEO, Dell. “Leading the iSCSI revolution will help Dell accelerate IT simplification and virtualization and will drive the Dell value proposition into more areas of the enterprise storage business,” Mr. Dell said.

Under the terms of the agreement, Dell will purchase EqualLogic for approximately $1.4 billion in cash. The acquisition of EqualLogic is expected to close late in the fourth quarter of Dell's fiscal year 2008 or early in the first quarter of fiscal 2009. The company expects the acquisition to be dilutive to earnings per share, excluding the amortization of intangibles, by $0.02 to $0.05 in aggregate for Fiscal 2009 and Fiscal 2010. The acquisition has been approved by the board of directors of each company and is subject to regulatory approvals and customary closing conditions.

After completion of the transaction, Dell plans to grow EqualLogic’s successful channel-partner programs with current and future EqualLogic-branded products, and also plans to incorporate EqualLogic technology into future generations of its Dell PowerVault storage line available through the channel and direct from Dell.

Dell Inc

EqualLogic

Thursday, November 01, 2007

Massively Multi-player Games (MMORPGs) on Grid


Since I wrote about G-Cluster almost two years ago, I have not read much about grids and games. But a visit to iSGTW made me smile: I want to play games on my grid!

Computer games have gone hand-in-hand with IT innovation for more than 30 years, capturing the imagination and devotion of millions.

edutain@grid is bringing games to the grid, again.

edutain@grid (Contract No. 034601) is a project funded by the European Commission under the 6th Framework Programme. Its duration is 36 months, and it started on 1 September 2006.

edutainatgrid: massively multi-player "killer" grid applications

Monday, October 29, 2007

New World's Most Powerful Vector Supercomputer From NEC, SX-9


Fujitsu and Hitachi in Japan, and IBM and Cray in the US, have long been the teraflop giants, always competing with each other to build the most powerful computers.
Keeping up with that tradition, NEC of Japan on Thursday announced the launch of what it called the world's most powerful supercomputer on the market, the SX-9.
The SX-9 is the fastest vector supercomputer, with a peak processing performance of 839 TFLOPS(*1). It features the world's first CPU capable of a peak vector performance of 102.4 GFLOPS(*2) per single core.

In addition to the newly developed CPU, the SX-9 combines large-scale shared memory of up to 1TB and ultra high-speed interconnects achieving speeds up to 128GB/second. Through these enhanced features, the SX-9 closes in on the PFLOPS(*3) range by realizing a processing performance of 839 TFLOPS. The SX-9 also achieves an approximate three-quarter reduction in space and power consumption over conventional models. This was achieved by applying advanced LSI design and high-density packaging technology.
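The headline 839 TFLOPS figure is just the per-CPU peak quoted above multiplied by the maximum CPU count in the specifications below; a quick check:

```python
# Peak vector performance = per-CPU peak x maximum CPU count.

per_cpu_gflops = 102.4   # peak vector performance per CPU, as quoted above
max_cpus = 8192          # largest multi-node configuration (see the specifications below)

print(round(per_cpu_gflops * max_cpus / 1000, 1))   # 838.9 TFLOPS, i.e. the "839 TFLOPS" peak
```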

In comparison to scalar parallel servers(*5) incorporating multiple general-purpose CPUs, the vector supercomputer(*4) offers superior operating performance for high-speed scientific computation and ultra high-speed processing of large-volume data. The enhanced effectiveness of the new product will be clearly demonstrated in fields such as weather forecasting, fluid dynamics and environmental simulation, as well as simulations for as-yet-unknown materials in nanotechnology and polymeric design. NEC has already sold more than 1,000 units of the SX series worldwide to organizations within these scientific fields.
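For readers unfamiliar with the distinction, here is a small conceptual illustration of my own (using NumPy on an ordinary PC, so purely an analogy; a real vector processor does this in hardware with single vector instructions): the same arithmetic written as an element-by-element scalar loop versus one whole-array operation.

```python
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Scalar style: one element at a time, as a general-purpose CPU loop would do it.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] * b[i]

# Vector style: one whole-array operation, analogous to a single vector instruction.
c_vector = a * b

assert np.allclose(c_scalar, c_vector)   # same result, expressed two different ways
```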

The SX-9 is loaded with "SUPER-UX," basic software compliant with the UNIX System V operating system that can extract maximum performance from the SX series. SUPER-UX is equipped with flexible functions that can deliver more effective operational management compatible with large-scale multiple node systems.
The use of powerful compiler library groups and program development support functions to maximize SX performance makes the SX-9 a developer-friendly system. Application assets developed by users can also be integrated without modification, enabling full leverage of the ultra high-speed computing performance of the SX-9.

"The SX-9 has been developed to meet the need for ultra-fast simulations of advanced and complex large-capacity scientific computing," Yoshikazu Maruyama, senior vice president of NEC Corp., said in a statement.
NEC's supercomputers are used in fields including advanced weather forecasting, aerospace and in large research institutes and companies. The SX-9 will first go on display at a supercomputing convention next month in Reno, Nevada.

Specifications

SX-9 (multi-node, 2 - 512 nodes)
  • Number of CPUs: 32 - 8,192
  • Logical Peak Performance: 3.8T - 969.9 TFLOPS
  • Peak Vector Performance: 3.3T - 838.9 TFLOPS
  • Memory Architecture: shared and distributed memory
  • Memory Capacity: 1TB - 512TB
  • Peak Memory Data Transfer Rate: 2,048 TB/s
  • Internode Crossbar Switch (IXS) Peak Data Transfer Rate: 128 GB/s x 2, bidirectional (per node)

SX-9/A (single-node, 1 node)
  • Number of CPUs: 8 - 16
  • Logical Peak Performance: 947.2G - 1,894.4 GFLOPS
  • Peak Vector Performance: 819.2G - 1,638.4 GFLOPS
  • Memory Architecture: shared memory
  • Memory Capacity: 512GB or 1TB
  • Peak Memory Data Transfer Rate: 4 TB/s

SX-9/B (single-node, 1 node)
  • Number of CPUs: 4 - 8
  • Logical Peak Performance: 473.6G - 947.2 GFLOPS
  • Peak Vector Performance: 409.6G - 819.2 GFLOPS
  • Memory Architecture: shared memory
  • Memory Capacity: 256GB or 512GB
  • Peak Memory Data Transfer Rate: 2 TB/s


(1) *TFLOPS:
one trillion floating point operations per second
(2) *GFLOPS:
one billion floating point operations per second
(3) *PFLOPS:
one quadrillion floating point operations per second
(4) Vector supercomputer:
A supercomputer with high-speed processors, called "vector processors," used for scientific and technical computation. Vector supercomputers deliver high performance in complex, large-scale computation, such as climate, aerospace and environmental simulations and fluid dynamics, by processing arrays with single vector instructions.
(5) Scalar parallel supercomputer:
A supercomputer with multiple general-purpose processors suitable for simultaneous processing of multiple workloads, such as genomic analysis, or easily parallelized computations, like particle computation. They deliver high performance by connecting many processors (also used for business applications) in parallel.

Sunday, October 28, 2007

What Have Betty Crocker and GridKa Got in Common? They Both Use a "... in a Box" Solution

Creating a cake became much easier in the late 40s when Betty Crocker released cake mix in a box.

Do you ever wish there was an equivalent for computing grids? Now there is, almost.

An approach known as “grid in a box” is making it possible to gather all the ingredients required to make grid computing more affordable and accessible for participating grid centers.

“The idea of ‘grid in a box’ is to put all needed grid services on one piece of hardware,” says Oliver Oberst, Forschungszentrum Karlsruhe. “Instead of having several machines working together to host the infrastructure of a grid site, there are several virtual machines working on one computer—the ‘box.’”

Traditionally, building a grid site with gLite—the middleware designed by Enabling Grids for E-sciencE and used predominately in Europe—required multiple different grid services to be installed, each on a different machine.
Continue reading at International Science Grid.....

Friday, October 26, 2007

VMWare Fusion 1.1 RC for Intel Macs released

If you are virtualizing your computing on a Mac, VMware is out to help you with the new release of VMware Fusion. The Fusion 1.1 RC is said to have fixed some of the problems Fusion had, but these features should certainly get your attention:
  • VMware Fusion 1.1 now includes English, French, German, and Japanese versions
  • Unity improvements include:
    • My Computer, My Documents, My Network Places, Control Panel, Run, and Search are now available in the Applications menu, Dock menu, and the Launch Applications window
    • Improved support for Windows Vista 32 and 64-bit editions
    • Improved Unity window dragging and resizing performance
  • Boot Camp improvements include:
    • Support for Microsoft Vista in a virtual machine
    • Improved support for preparing Boot Camp partitions
    • Automatically remount Boot Camp partition after Boot Camp virtual machine is shut down
  • Improved support for Mac OS X Leopard hosts
  • Improved 2D drawing performance, especially on Santa Rosa MacBook Pros
Download the Fusion 1.1 RC together with a 30-day trial key.

Thursday, October 25, 2007

Wubi Super Easy Ubuntu Installer for Windows


Wubi is a free Ubuntu installer for Windows users that will bring you into the Linux world with a single click. Wubi allows you to install and uninstall Ubuntu like any other Windows application. If you have heard about Linux and Ubuntu, and wanted to try them but were afraid, this is for you.
The beauty is that if you find Ubuntu to your liking, you can make your machine dual-boot with another small utility.

Web Based Virtual Machine Creator for VMWare

VMWare's virtualization technology allows you to run other operating systems within your native OS, but VMWare Player doesn't provide an easy way to create the disk images to host your guest OS. Enter the online service Virtual Machine Creator.
Via Wired, from a Reddit post.

Monday, October 15, 2007

Hitachi Quadruples Current Hard Drives, to 4TB for Desktop and 1TB for Notebook Drives

2x Reduction in Nanometer-Scale Recording Head Technology Shows Promise for 1TB Notebook and 4TB Desktop PCs in 2011

TOKYO, Oct. 15, 2007 -- Hitachi, Ltd. (NYSE: HIT / TSE: 6501) and Hitachi Global Storage Technologies (Hitachi GST), announced today they have developed the world's smallest read-head technology for hard disk drives, which is expected to quadruple current storage capacity limits to four terabytes (TB) on a desktop hard drive and one TB on a notebook hard drive.

Researchers at Hitachi have successfully reduced existing recording heads by more than a factor of two to achieve new heads in the 30-50 nanometer (nm) range, which is up to 2,000 times smaller than the width of an average human hair (approx. 70-100 microns). Called current perpendicular-to-the-plane giant magneto-resistive*1 (CPP-GMR) heads, Hitachi's new technology is expected to be implemented in shipping products in 2009 and reach its full potential in 2011.

Hitachi will present these achievements at the 8th Perpendicular Magnetic Recording Conference (PMRC 2007), to be held 15th -17th October 2007, at the Tokyo International Forum in Japan.

"Hitachi continues to invest in deep research for the advancement of hard disk drives as we believe there is no other technology capable of providing the hard drive's high-capacity, low-cost value for the foreseeable future," said Hiroaki Odawara, Research Director, Storage Technology Research Center, Central Research Laboratory, Hitachi, Ltd. "This is an achievement for consumers as much as it is for Hitachi. It allows Hitachi to fuel the growth of the ‘Terabyte Era’ of storage, which we started, and gives consumers virtually limitless ability for storing their digital content."

Hitachi believes CPP-GMR heads will enable hard disk drive (HDD) recording density of 500 gigabits per square inch (Gb/in2) to one terabit per square inch (Tb/in2), a quadrupling of today's highest areal densities. Earlier this year, Hitachi GST delivered the industry's first terabyte hard drive with 148 Gb/in2, while the highest areal density Hitachi GST products shipping today are in the 200 Gb/in2 range. These products use existing head technology, called TMR*2 (tunnel-magneto-resistive) heads. The recording head and media are the two key technologies controlling the miniaturization evolution and the exponential capacity-growth of the hard disk drive.
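Since drive capacity scales roughly linearly with areal density for the same platter geometry, the quoted densities line up with the 4TB projection. A rough check of my own, using the 1TB-at-148 Gb/in2 data point above:

```python
# Capacity scales roughly linearly with areal density for the same platter geometry.
# Baseline from the article: a 1TB drive at 148 Gb per square inch.

baseline_tb = 1.0
baseline_density = 148   # Gb/in^2

def capacity_at(density_gb_in2):
    return baseline_tb * density_gb_in2 / baseline_density

print(round(capacity_at(500), 1))   # ~3.4 TB at the low end of the CPP-GMR range
print(round(capacity_at(600), 1))   # ~4.1 TB -- roughly the 4TB desktop target
```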

Cutting Through the Noise - The Strongest Signal-to-Noise Ratio

The continued advancement of hard disk drives requires the ability to squeeze more and more, and thus smaller and smaller, data bits onto the recording media, necessitating the continued miniaturization of the recording heads that read those bits. However, as the head becomes smaller, electrical resistance increases, which in turn increases the noise output and compromises the head's ability to correctly read the data signal.
High signal output and low noise is what is desired in hard drive read operations, thus, researchers try to achieve a high signal-to-noise (S/N) ratio in developing effective read-head technology. Using TMR head technology, researchers predict that accurate read operations would not be conducted with confidence as recording densities begin to surpass 500 Gb/in2.

The CPP-GMR device, compared to the TMR device, exhibits a lower electrical resistance, resulting in lower electrical noise but also a smaller output signal. Therefore, issues such as producing a high output signal while maintaining reduced noise to increase the S/N ratio needed to be resolved before the CPP-GMR technology became practical.

In response to this challenge, Hitachi, Ltd. and Hitachi GST have co-developed high-output technology and noise-reduction technology for the CPP-GMR head. A high electron-spin-scattering magnetic film material was used in the CPP-GMR layer to increase the signal output from the head, and new technology for damage-free fine patterning and noise suppression were developed. As a result, the signal-to-noise ratio, an important factor in determining the performance of a head, was drastically improved. For heads with track widths of 30nm to 50nm, optimal and industry-leading S/N ratios of 30 decibel (dB) and 40 dB, respectively, were recently achieved with the heads co-developed at Hitachi GST's San Jose Research Center and Hitachi, Ltd.'s Central Research Laboratory in Japan.
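For context on those S/N figures, decibels map to ratios logarithmically. Assuming the quoted values are power ratios (my assumption, not stated in the release), 30 dB and 40 dB correspond to signal powers 1,000 and 10,000 times the noise power:

```python
# Decibel-to-ratio conversion, assuming the quoted S/N values are power ratios.

def db_to_power_ratio(db):
    return 10 ** (db / 10)

for db in (30, 40):
    print(db, "dB ->", int(db_to_power_ratio(db)), ": 1 signal-to-noise power ratio")
```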

Recording heads with 50 nm track-widths are expected to debut in commercial products in 2009, while those with 30 nm track-widths will be implemented in products in 2011. Current TMR heads, shipping in products today, have track-widths of 70 nm.

The Incredible Shrinking Head

The discovery of the GMR effect occurred in 1988, and that body of work was recognized just last week with a Nobel Prize for physics. Nearly two decades after its discovery, the effects of GMR technology are felt more strongly than ever with Hitachi's demonstration of the CPP-GMR head today.

In 1997, nine years after the initial discovery of GMR technology, IBM implemented the industry's first GMR heads in the Deskstar 16GXP. GMR heads allowed the HDD industry to continue its capacity growth and enabled the fastest growth period in history, when capacity doubled every year in the early 2000s. Today, although areal density growth has slowed, advancements to recording head technology, along with other HDD innovations, are enabling HDD capacity to double every two years.
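
A quick aside with my own arithmetic (not from the article): doubling every year corresponds to 100 percent annual growth, while doubling every two years works out to roughly 41 percent a year.

def annual_growth_pct(doubling_period_years):
    # Annual growth rate implied by a given doubling period, in percent.
    return (2 ** (1.0 / doubling_period_years) - 1) * 100

print(round(annual_growth_pct(1)))  # 100 -> capacity doubling every year
print(round(annual_growth_pct(2)))  # 41  -> capacity doubling every two years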

In the past 51 years of the HDD industry, recording head technology has seen monumental decreases in size as areal density and storage capacity achieved dizzying heights. The first HDD recording head, called the inductive head, debuted in 1956 in the RAMAC - the very first hard drive - with a track width of 1/20th of an inch or 1.2 million nm. Today, the CPP-GMR head, with a track-width of about one-millionth of an inch or 30 nm, represents a size reduction by a factor of 40,000 over the inductive head used in the RAMAC in 1956.
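
A quick back-of-the-envelope check of that reduction factor, using the figures from the paragraph above (my sketch, not Hitachi's):

# Track width of the RAMAC's inductive head (1/20th of an inch) versus the
# 30 nm CPP-GMR head, both expressed in nanometres.
ramac_track_width_nm = (1.0 / 20.0) * 25.4e6   # 1/20 inch = 1.27 million nm
cpp_gmr_track_width_nm = 30.0

# About 42,000, consistent with the "factor of 40,000" cited above.
print(ramac_track_width_nm / cpp_gmr_track_width_nm)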

Notes

*1
CPP-GMR: As an alternative to existing TMR heads, CPP-GMR head technology has a lower electrical resistance level, due to its reliance on metallic rather than tunneling conductance, and is thus suited to high-speed operation and scaling to small dimensions.
*2
TMR head: Tunnel Magneto-Resistance head
A tunnel magneto-resistance device is composed of a three-layer structure: an insulating film sandwiched between ferromagnetic films. The change in electrical resistance that occurs when the magnetization directions of the upper and lower ferromagnetic layers change (parallel or anti-parallel) is known as the TMR effect, and the ratio of electrical resistance between the two states is known as the magneto-resistance ratio.
Hitachi News release.
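
My own two cents on the note above (an illustrative sketch, not part of the Hitachi release): the magneto-resistance ratio is commonly quoted as the relative change in resistance between the anti-parallel and parallel states.

def mr_ratio_pct(r_parallel_ohms, r_antiparallel_ohms):
    # Magneto-resistance ratio, commonly defined as (R_ap - R_p) / R_p, in percent.
    return (r_antiparallel_ohms - r_parallel_ohms) / r_parallel_ohms * 100

# Hypothetical resistances, chosen only to show the arithmetic.
print(mr_ratio_pct(100.0, 150.0))  # 50.0 -> a 50% magneto-resistance ratio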

Friday, October 12, 2007

Patent Infringement Lawsuit against Linux (Are Red Hat and Novell Linux?)

Well, it was bound to happen: Linux vendors Red Hat and Novell have been sued for patent infringement. Groklaw is reporting that on Tuesday, the two companies were sued by IP Innovation LLC and Technology Licensing Corp. for violating three patents having to do with windowing user interfaces.

The lawsuit represents the first test of what happens when open source collides with patents, and it's interesting for a couple of reasons. First, notice that all the other Linux vendors are missing from the defendants list, most notably IBM. That could be because IBM has already licensed the patents in a different context. (In June, Apple settled a patent infringement lawsuit with the same plaintiffs over at least one of the patents involved here.)

Stolen from Frank Hayes' blog.

Univa UD Unveils Blueprint for the World's First Industrial-Strength Open Source Cluster and Grid Product Suite

Taking aim squarely at vendors who offer only costly, confusing and limiting proprietary grid and cluster products, Tuecke and Venkat will lead a discussion about key issues raised by customers opting for open source implementations.

"Increasingly, businesses are embracing open source software models in many areas, but until now there has been no complete, integrated open source stack for cluster and grid," says Tuecke, Univa UD's chief technology officer. "Given Univa's open-source pedigree and United Devices' commercially proven technology, we believe that gap can now be filled, and we expect the resulting merged solution set will drive many more cluster and grid operators to open source implementations. There is no longer any reason to tie up cluster and grid systems with costly and limiting proprietary software."

Univa and United Devices, pioneers and leaders in cluster and grid technology, announced the merger of the two companies last month, becoming Univa UD. At that time, the company promised it would outline an open source industrial strength product roadmap at the Open Grid Forum.

Based on a free, downloadable open source cluster management product, Univa UD has said its end-to-end High Performance Computing (HPC) open source product suite will also include a fully supported pro version with rich functionality and an enterprise-class grid solution growing out of UD's award-winning Grid MP technology.

"Our vision is to emulate and improve on the open source models of software companies who have gone before us," said Tuecke, "companies like Red Hat and SugarCRM."

Tuecke, along with Dr. Ian Foster and Dr. Carl Kesselman, founded Univa in 2004, as well as the Globus Project almost a decade earlier. They are known as the fathers of grid computing for their pioneering efforts in developing open grid software and standards.

Prior to founding Univa, serving as its initial CEO and subsequently becoming its CTO, Tuecke was responsible for managing the architecture, design, and development of Globus software, as well as the Grid and Web Services standards that underlie it such as OGSA and WSRF.

In 2002, Tuecke received Technology Review magazine's TR100 award, which recognized him as one of the world's top 100 young innovators. In 2003, he was named (with Foster and Kesselman) by InfoWorld magazine as one of its Top 10 Technology Innovators of the year.

"There continues to be tremendous growth in the cluster market in terms of revenues and number of units," said Venkat, a co-founder of United Devices in 1999. "The open source grid and HPC expertise from Univa and the commercial technology and experience from United Devices put Univa UD in a unique position to serve this market. End-to-end, we can now offer the world's best-of-breed open source technologies backed with commercially proven solutions and world-class services and support."

Univa UD's session at OGF21 will be 1:30 p.m. to 3 p.m., Tuesday, Oct. 16, in the Portland Room at the Grand Hyatt Seattle. Univa UD said details of its new product roadmap also will be available at the company's exhibit during the conference.

Thursday, October 04, 2007

We Need another Sputnik

Although I was not born yet, the launch of Sputnik had a large impact on my life. Being in a family of scientists going back a few generations makes you see and think differently. Sputnik floored the accelerator for my family. They were so busy they forgot to make me; I was born seven years after my brother! My big event was the moon landing. Even though I could barely understand it, I felt sorry for Michael Collins and his son (why? you tell me!). So here is, almost exactly, my take on the Sputnik affair.

8. Speaking of General Medaris, in the final chapter of your book, “Sputnik’s Legacy,” you quote him: “If I could get ahold of that thing, I would kiss it on both cheeks.” What did he mean?


Sputnik galvanized America. We put billions of dollars into education. We began producing 1,500 PhDs a week. Teachers were going to special summer institutes, Middlebury to study language, MIT to study technology. It brought the middle classes back into education, which was drifting toward elitism. It showed us at our best.

We get Dr. Spock, Dr. Seuss. Rote learning starts to be abandoned. Dick and Jane are skewered on a plate. There’s less Latin and Greek, more Spanish and Russian.

Betty Friedan is working on a book about Smith College, and she said Sputnik got her thinking. Stephen King is in a theater, watching a movie called “Earth vs. the Flying Saucer,” about Martians coming down to Malibu and taking women back to Mars. They stopped the movie in the middle to announce Sputnik. That was the beginning of his dread. The world had been reality versus fantasy, and now the two had come together.

Sputnik changed a lot of people.

9. What you’re saying flies in the face of the people who say that too much money has been spent on the space program, that in more recent times it could have been used for other things...

It got us all the things we now rely on, laptop computers, cellphones. Countries that don’t have the copper to string phonelines? Cellphones. The space race has given the world a whole boost at every level. The space guys were the first guys to learn to do biometric readings of people’s bodies. There was a large technology transfer.

At its highpoint, it was four percent of our economy, now it’s only seven-tenths of one percent. And there’s an $8 billion positive balance of payments in the aerospace industry, meaning you take all the money coming into this country—other countries paying Boeing to build their planes, hiring American pilots, for instance—and it’s more than other segments of industry.

Sputnik resulted in the creation of DARPA, the Defense Advanced Research Projects Agency. That was hundreds of millions of dollars into a think tank that was supposed to come up with those things that would prevent us from being surprised. There were these huge computers bulk processing, at MIT, at Cal Tech. And these huge computers could talk to each other.

When the government was finished with the ARPA net, they said, Let’s give it to the world. Think what would have happened if they had decided to auction it off. So it’s because of Sputnik that we’ve got the internet.

10. We were talking earlier in the conversation about how because of Sputnik we had more scientists, more engineers, better education. Somehow it feels as if today we’ve gone back to pre-Sputnik days. Now you hear about how we need more scientists, more engineers, better education because that sector of our society all seems to be going overseas...

Well, that’s the argument everybody’s making, we may need another Sputnik moment, something to galvanize us and get us going again. Katrina could have been that moment, but it wasn’t. I thought that bridge collapse in Minneapolis might have been it, that we might have recognized we’re letting the country deteriorate while we sit in corners with our ipods.

The above are three of the 10 questions and answers published by CBS News after interviewing Paul Dickson, who wrote Sputnik: The Shock of the Century. Published in 2001, it’s just been re-released; he’s also the co-writer on a new documentary, Sputnik Mania.
Visit CBS and read the rest. I am very sure we need another Sputnik. I don't want to be another beatnik.

Monday, September 10, 2007

Embedded VMware called ESX Lite

VMware is launching a new, embedded version of its flagship ESX Server hypervisor, along with a disaster recovery tool and an update for its virtual desktop broker.
The news article

Quad Core AMD Opteron released

SAN FRANCISCO (AP) — Advanced Micro Devices Inc. launched its highly publicized new server chip Monday, delivering the biggest jolt to its product lineup in four years.

The company's redesigned Opteron processor is the first from AMD to feature four computing engines on a single chip instead of just one or two.

AMD's belated entry into the "quad-core" market is a critical element in the financially strapped company's offensive against Intel Corp., the world's largest semiconductor company, whose market value of $148 billion makes it 21 times bigger than AMD.

Intel has outspent its smaller rival on new technologies and better absorbed the pain of a brutal price battle that has led to embarrassing market-share losses AMD hopes its new chip will reverse.

Also Monday, Intel raised its third-quarter revenue outlook on stronger-than expected demand for its microprocessors. The company now expects revenue between $9.4 billion and $9.8 billion, up from its previous range of $9 billion to $9.6 billion.

AMD says the newly redesigned Opteron chip is an important improvement in high-performance computing. It's using a different engineering strategy than Intel.

Intel's four-core chips are actually a package of two chips with two cores each. In AMD's four-core chips, all the cores are placed on a single piece of silicon.

Industry observers have debated whether either strategy matters in terms of performance.

Adding more processors allows chips to handle multiple tasks at once, a crucial ability, particularly in corporate data centers.
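
A toy illustration of that point (a generic Python sketch of my own, not tied to Opteron or Xeon): spreading independent tasks across several worker processes lets a multi-core chip service them in parallel instead of one after another.

from multiprocessing import Pool

def handle_request(request_id):
    # Stand-in for a CPU-heavy data-center task (hypothetical workload).
    return request_id, sum(i * i for i in range(100000))

if __name__ == "__main__":
    # With four worker processes, up to four requests run at once,
    # one per core on a quad-core chip.
    with Pool(processes=4) as pool:
        results = pool.map(handle_request, range(8))
    print(len(results), "requests handled")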

AMD was not a player in the server processor market until it released its first Opteron chip in 2003. Demand soared because of its energy efficiency and other technological features, and by last year, Sunnyvale-based AMD had grown to capture about a quarter of the worldwide market, according to Mercury Research.

But Santa Clara-based Intel fought back last year with a strong new lineup of chips based on a new design, and it also beat AMD to market with its first four-core chips.

Compared with Intel's new products, AMD's product line began to look dated, and its market share plunged. AMD now controls only about 13 percent of the server market.

"What is key about this product is really getting back some of that lost share," said Dean McCarron, Mercury Research's president and principal analyst.

AMD's path toward Monday's launch has been rocky, with AMD Chief Executive Hector Ruiz saying the chips are launching about six months behind schedule. Some analysts and investors expressed disappointment that the chips available at launch are slower than expected — operating at 1.9 gigahertz to 2.3 gigahertz, depending on the model.

AMD said it will boost their speed later this year. By comparison, Intel's fastest Xeon server processors operate at 3.0 gigahertz; a gigahertz is one billion processing cycles per second.

Saturday, September 08, 2007

Lenovo ready to give you Linux on ThinkPads, Go Vote for your Distro

If you have been following the Lenovo blog Inside the Box, they have posted a follow-up to the post Linux On a Mobile PC, where you can vote for the distro you would like on your ThinkPad. I have had Ubuntu on my ThinkPad(s). I have tried many other distributions, but Ubuntu gave me the least trouble and, of course, that is what I voted for!
If you have been shouting about manufacturers not providing Linux with computers, then this is your chance to influence one of the largest notebook manufacturers in the world!
Here is the link to the article,
Linux Follow Up by Matt Kohut

Friday, September 07, 2007

3Tera showcases the AppLogic 2.1 release at the Office 2.0 Conference 2007


SAN FRANCISCO--(BUSINESS WIRE)--3Tera, Inc., the leading innovator of grid computing and utility computing services for web applications, announced today at the Office 2.0 Conference in San Francisco, CA, the commercial availability of AppLogic 2.1. The new 2.1 release of the award-winning AppLogic grid operating system adds comprehensive Application Monitoring and support for multiple CPUs per appliance. SaaS and Web 2.0 companies can benefit from greater scalability, improved resource utilization, and unprecedented visibility and control over application performance.

"Utility Computing or Cloud Computing is quickly gaining popularity with online service companies," said Peter Nickolov, president and COO of 3Tera. "Our latest version provides unprecedented control of applications and virtual private data center management for production environments, allowing Web 2.0 and SaaS companies to grow and self-manage their services using only a browser."

"The product we are announcing today has undergone more testing in beta than any previous release of AppLogic," said Bert Armijo, VP of Marketing and Product Development at 3Tera. "AppLogic 2.1 allows for greater scalability and control of the infrastructure, assuring users of their ability to grow."

"We've been using the new AppLogic for a month. It had a huge impact on enhancing the manageability of our application, especially for scaling Apache and MySQL," said Joost Schreve, founder and CEO of EveryTrail, Inc., a Web 2.0 startup building an online platform for visualizing travel experiences by mapping and describing geographical locations. "Taking advantage of the monitoring capabilities in the new release helped us easily identify elements that needed more resources. We were able to increase the performance and scale our applications easily."

3Tera is showcasing the AppLogic 2.1 release at the Office 2.0 Conference in San Francisco, CA. Office 2.0 is being held at the St. Regis Hotel, September 5-7. For more information on Office 2.0, including the conference agenda, visit www.o2con.com/index.jspa.