Wednesday, December 17, 2008

Google Open Source Blog: New File Systems Added to MacFUSE

MacFUSE, an open-source mechanism that extends Mac OS X's native file system capabilities, recently had a State of the Union talk that offered many demos, including the following:
  • AncientFS - a file system that lets you mount ancient, and in some cases current-day Unix data containers as regular volumes on Mac OS X.

  • UnixFS - a general-purpose abstraction layer for implementing Unix-style file systems in user space.

  • ufs - a user-space implementation (read-only) of the UFS file system family.

  • sysvfs - a user-space implementation (read-only) of the System V file system family.

  • minixfs - a user-space implementation (read-only) of the Minix file system family.
If you are a Mac or MacFUSE user, it is time to check out the video below and the code at the repository!
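To make the idea of a user-space file system concrete, here is a toy, self-contained sketch of the kind of read-only operations (directory listing, attributes, reads) an abstraction layer like UnixFS asks an implementation to provide. The class and method names are illustrative only, not MacFUSE's actual API, and nothing here actually mounts a volume.

```python
# A toy, in-memory read-only file system. It only illustrates the shape
# of the operations (readdir, getattr, read) a user-space file system
# must implement; the names are illustrative, not MacFUSE's real API.
class ToyReadOnlyFS:
    def __init__(self, files):
        # files: mapping of "/path" -> bytes content
        self.files = dict(files)

    def readdir(self, path="/"):
        # List entry names directly under `path`.
        prefix = path.rstrip("/") + "/"
        names = set()
        for p in self.files:
            if p.startswith(prefix):
                names.add(p[len(prefix):].split("/")[0])
        return sorted(names)

    def getattr(self, path):
        # Return minimal attributes for `path`.
        if path not in self.files:
            raise FileNotFoundError(path)
        return {"size": len(self.files[path]), "mode": "r--r--r--"}

    def read(self, path, size, offset=0):
        # Return up to `size` bytes starting at `offset`.
        data = self.files.get(path)
        if data is None:
            raise FileNotFoundError(path)
        return data[offset:offset + size]

fs = ToyReadOnlyFS({"/README": b"mounted read-only"})
```

A real MacFUSE file system implements the same kind of callbacks; the kernel forwards each VFS request to the user-space process, which answers from whatever container format (UFS, System V, Minix) it understands.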



Wednesday, November 12, 2008

HP / NetSuite Takes Small Business To Cloud Computing.

Hewlett-Packard (HP) announced that it will offer software from NetSuite to bring software-as-a-service (SaaS) business applications to the small and midsize business (SMB) market.
With NetSuite, small businesses can manage their operations online, from e-commerce to inventory management, order fulfillment, and customer relationships.
If you see similarities with Salesforce's offerings, you are right: NetSuite competes with Salesforce, but it caters to companies with fewer than 1,000 users.
NetSuite stands to gain a lot from the relationship with HP and its 15,000-strong network of value-added resellers in the HP channel in the United States.
"This collaboration with HP will accelerate our penetration of the SMB market worldwide and demonstrate that the channel can play a major role in delivering cloud computing solutions," said Zach Nelson, chief executive officer, NetSuite.
You will find the official press release here.


Tuesday, October 28, 2008

Azure, Cloud Computing Platform From Microsoft.

Microsoft has released a CTP, or Community Technology Preview, of Windows Azure, a cloud services operating system that serves as a development, service hosting, and service management environment for the Azure Services Platform.

Windows Azure provides developers with on-demand compute and storage to host and manage web applications on the internet through Microsoft data centers.

So if you want to be up in the clouds, follow this link.

Sunday, September 21, 2008

SUSE Gets Wyse And Collaborates On Enterprise Thin Client Virtualization.

SAN JOSE, Calif. and WALTHAM, Mass.— Wyse Technology, the global leader in thin computing, and Novell today announced the joint delivery of Wyse Enhanced SUSE® Linux Enterprise, the next-generation of Linux* operating system designed for thin computing environments and available only on Wyse desktop and mobile thin client devices. Wyse Enhanced SUSE Linux Enterprise is a powerful combination of Wyse's extensive experience in thin computing and the ease of use, flexibility and security of SUSE Linux Enterprise. Wyse Enhanced SUSE Linux Enterprise will be available pre-loaded on the Wyse thin client devices in Q4 2008.

According to a 2008 IDC report (1), the Linux thin client market will grow from nearly 1 million units in 2008 to 1.8 million units in 2011. Linux will reach a 30.5 percent share of all operating system shipments on thin client devices by 2011. The ever-increasing market penetration of Linux-based thin clients is due to their ability to lower total cost of ownership, while helping enterprises gain a more secure and flexible computing environment. Additionally, in emerging areas such as desktop virtualization, the operating system used by client devices is becoming less relevant, as long as it is an enabler of virtualization technologies, and not the limiting factor.
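A quick back-of-envelope check of the IDC figures quoted above (roughly 1 million Linux thin client units in 2008 growing to 1.8 million by 2011) implies a compound annual growth rate of about 21.6 percent:

```python
# Back-of-envelope check of the IDC growth figures quoted above:
# ~1.0M Linux thin clients shipped in 2008, growing to 1.8M in 2011.
units_2008 = 1.0e6
units_2011 = 1.8e6
years = 3

# Compound annual growth rate over the three-year span.
cagr = (units_2011 / units_2008) ** (1 / years) - 1
# roughly 0.216, i.e. about 21.6% growth per year
```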

With Wyse Enhanced SUSE Linux Enterprise, customers gain a host of benefits. End-users will be able to maximize productivity and minimize training costs due to the easy-to-use graphical user interface combined with the rich user experience provided only by Wyse, including cross-platform multimedia support, USB peripheral support and flexible hardware options. IT administrators will gain the flexibility of having their thin client devices automatically updated and configured upon boot-up, or of using the enhanced scalable management capabilities of Wyse Device Manager, the industry-leading thin computing management software solution.

"With the announcement of Wyse Enhanced SUSE Linux Enterprise pre-loaded on our platforms, we are providing more choices and more flexibility to users who wish to deploy Linux-based thin clients," said Ricardo Antuna, vice president of Product Management and Business Development at Wyse Technology, Inc. "Since we announced our intention to collaborate with Novell last year, we have worked towards creating a solution that will enable our customers to deploy Linux without the compromises on security, scalability and performance encountered with non-standard and proprietary thin Linux distributions. Wyse Enhanced SUSE Linux Enterprise doesn't disappoint."

"We are pleased that Wyse has selected SUSE Linux Enterprise as the platform for their next generation Linux-based thin clients," said Carlos Montero-Luque, vice president of product management for Open Platform Solutions at Novell. "This is clear indication of the readiness of desktop Linux overall to meet the needs of enterprise customers, including lower costs, stronger security and improved manageability. All kinds of organizations are finding Linux thin client deployments to be a good fit for their hardware, security needs and budgets, and we are excited to partner with Wyse to deliver a market-leading solution."

Integration with Microsoft* Active Directory* and unparalleled driver and software support will enable enterprises to easily deploy Wyse Enhanced SUSE Linux Enterprise thin clients in a Windows* environment. Finally, Wyse Enhanced SUSE Linux Enterprise includes built-in support for Wyse's suite of virtualization software, enabling enterprises to take advantage of third-party desktop virtualization solutions such as Citrix* XenDesktop* and VMware* VDI.

"Wyse Enhanced SUSE Linux Enterprise further builds momentum for client virtualization by providing customers with multiple options when deploying desktop appliances within their organizations," said Raj Dhingra, group vice president and general manager, Desktop Delivery Group, Citrix Systems Inc. "The tight integration between Wyse’s suite of desktop appliances and Citrix client virtualization technologies, XenDesktop and Citrix XenApp, provides a superior experience for the user and a cost-effective solution for desktop and application delivery."

"The VMware-Wyse partnership has been extended with the release of Wyse Enhanced SUSE Linux Enterprise with built-in support for the VDM client. This release further helps lower the overall costs of deploying a VMware VDI solution by eliminating the need for expensive client-side hardware and operating systems," said Jerry Chen, VMware's senior director of Product Marketing for Enterprise Desktop Products.

Wyse Enhanced SUSE Linux Enterprise includes the GNOME* desktop, Firefox* browser, a powerful terminal emulator, as well as pre-built technologies for connecting to thin computing architectures. These architectures include the VDM client from VMware, the ICA client from Citrix, and the RDP client from Microsoft. This flexibility and support makes Wyse Enhanced SUSE Linux Enterprise the ideal choice for organizations whether they wish to run server-based, Web-based, or local (including legacy) applications.

Wyse Enhanced SUSE Linux Enterprise will be available in Q4 2008 pre-loaded on Wyse X50L mobile thin client devices. For more information on Wyse Enhanced SUSE Linux Enterprise and the Wyse family of thin clients, visit http://www.wyse.com/products.

Monday, September 15, 2008

IC (Integrated Circuit) Is 50 Years Old And TI Launches Kilby Labs To Honor Jack Kilby.

DALLAS (September 12, 2008) - Texas Instruments Incorporated (TI) (NYSE:TXN) announced its new "Kilby Labs" today, a center of innovation designed to foster creative ideas for breakthrough semiconductor technology. Launched on September 12, the 50th anniversary of the integrated circuit, the new labs will build on IC inventor Jack Kilby's legacy of revolutionizing our lives through chip innovation.

Kilby Labs will be located on TI's Dallas North Campus and is inspired by the original TI lab, where Kilby first designed the chip that opened the door to 3G cell phones, portable ultrasound machines and automotive antilock braking systems. The new facility, though, will bring together university researchers and leading TI engineers to discover life-changing opportunities for semiconductor technology. From creating new ways to make health care more mobile to harnessing new power sources to enabling more fuel-efficient vehicles, researchers at the Kilby Labs will focus on developing chip advances that make a difference.

"All of us at TI believe that technologies that significantly impact our lives are the right technologies for our business," said Rich Templeton, Chairman and CEO of TI, at the launch celebration held at the Semiconductor Building on TI's North Campus. "The power to help make the world healthier, safer, greener and more fun is what gets us excited about chip innovation, and why we come to work every day at TI. It's what motivated Jack Kilby to build the first IC and why he was able to transform the world through his ideas and inventions."

"Our vision for Kilby Labs," said Gregg Lowe, TI senior vice president and the project's executive sponsor, "is that it will combine TI's experience in developing new chip technologies and our understanding of customer needs with the dreams of a new generation of innovators. Technology springs from imagination, and we want to create an environment where people can both imagine a better world and help build it. The best way we can celebrate Jack's contributions is by providing people with the opportunity to carry on his work and find new ways for a tiny chip to dramatically improve millions of lives around the world."

TI has named Ajith Amerasekera as director of its new Kilby Labs. Ajith, who is a TI Fellow, joined the company in 1991 and holds a PhD in Electrical Engineering and Physics. He previously served as CTO for TI's application-specific integrated circuit division, and as the holder of 28 issued patents and author of four books on semiconductors, Ajith is well recognized in the international technical community.

In addition to the new Kilby Labs, TI is honoring Jack Kilby's life and legacy with a variety of events showcasing his unique vision within the world of engineering and his creative expression through photography:

  • Meadows Museum at Southern Methodist University, Dallas: Jack Kilby: The Eye of Genius - Photographs by the Inventor of the Microchip will run through September 21. The exhibit displays several artifacts, such as a collection of Kilby's photography, his original notebook of sketches and ideas for the integrated circuit, his Nobel Prize in Physics, the world's first microchip and the first handheld calculator.
  • The Museum of Nature and Science, Dallas: A microchip mini-exhibit will run through October 19. The display features items from the TI archives in contrast to their modern form, along with video footage.
  • Texas Instruments Headquarters: The original lab where Kilby worked and made his significant discovery of the first integrated circuit has been recreated onsite. The recreated lab will inspire future inventors and serve as a visual reminder of the power of science and technology combined with creativity.
  • Great Bend, Kansas: TI has made a donation toward Jack Kilby's memorial statue in his hometown of Great Bend, Kansas. To learn more about the 50th anniversary of Jack Kilby's invention of the integrated circuit, please visit www.ti.com/tichip.

Thursday, August 07, 2008

Volunteer Astronomer Finds “Cosmic Ghost”

The green object in the center is “Hanny’s Voorwerp”
New Haven, Conn. — When Yale astrophysicist Kevin Schawinski and his colleagues at Oxford University enlisted public support in cataloguing galaxies, they never envisioned the strange object Hanny van Arkel found in archived images of the night sky.

The Dutch school teacher, a volunteer in the Galaxy Zoo project that allows members of the public to take part in astronomy research online, discovered a mysterious and unique object some observers are calling a “cosmic ghost.”

Van Arkel came across the image of a strange, gaseous object with a hole in the center while using the www.galaxyzoo.org website to classify images of galaxies.

When she posted about the image that quickly became known as “Hanny’s Voorwerp” ( Dutch for “object”) on the Galaxy Zoo forum, astronomers who run the site began to investigate and soon realized van Arkel might have found a new class of astronomical object.

“At first, we had no idea what it was. It could have been in our solar system, or at the edge of the universe,” said Schawinski, a member and co-founder of the Galaxy Zoo team.

Scientists working at telescopes around the world and with satellites in space were asked to take a look at the mysterious Voorwerp. “What we saw was really a mystery,” said Schawinski. “The Voorwerp didn’t contain any stars.” Rather, it was made entirely of gas so hot — about 10,000 Celsius — that the astronomers felt it had to be illuminated by something powerful. They will soon use the Hubble Space Telescope to get a closer look.

Tuesday, August 05, 2008

Apache, PHP and PostgreSQL As A Stack For All Your OS Platforms.

BitNami has released a new set of Infrastructure Stacks: LAPPStack, MAPPStack and WAPPStack. If you do not know what they are, it is time you got to know them. These stacks provide an easy-to-install distribution of Apache, PostgreSQL, PHP and supporting libraries. The user-friendly installer allows users to quickly install and configure a PHP-PostgreSQL platform on Linux, Windows and OS X. They also include phpPgAdmin, a management tool for PostgreSQL, to make administration tasks even easier.
LAPPStack for Linux, MAPPStack for Mac OS X and WAPPStack for, of course, Windows. All of my development machines, which span all three platforms, are supported by these stacks. If you do any web development, I suggest you pay a visit to BitNami.
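The platform-to-stack naming scheme above is simple enough to sketch. The helper below is my own illustration (the function name and dictionary are hypothetical, not anything BitNami provides):

```python
# Hypothetical helper (names are my own, not BitNami's) mapping a target
# OS to the corresponding BitNami infrastructure stack described above.
STACKS = {
    "linux": "LAPPStack",    # Linux + Apache + PostgreSQL + PHP
    "osx": "MAPPStack",      # Mac OS X + Apache + PostgreSQL + PHP
    "windows": "WAPPStack",  # Windows + Apache + PostgreSQL + PHP
}

def stack_for(platform: str) -> str:
    """Return the BitNami stack name for a platform key."""
    try:
        return STACKS[platform.lower()]
    except KeyError:
        raise ValueError(f"no PHP-PostgreSQL stack listed for {platform!r}")
```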
You will also find that forum, wiki, CRM and document management applications such as Drupal, Joomla!, WordPress, DokuWiki, KnowledgeTree and SugarCRM are provided as packages for easy deployment. Now you know where I get most of my development stacks!
Info source: BitNami Blog

Monday, August 04, 2008

$360 Million Data Center in North Carolina by IBM to Serve Cloud Computing.

RESEARCH TRIANGLE PARK, NC - 01 Aug 2008: IBM (NYSE: IBM) announced today plans to build a $360 million state-of-the-art data center at its facility in Research Triangle Park (RTP), North Carolina. The data center will include new technologies and services that will enable IBM to deliver Cloud Computing capabilities to clients.

Cloud computing uses advanced technologies and global delivery mechanisms to enable individuals to access information and services from any device with extremely high levels of availability and quality of experience.

IBM will renovate an existing building on its RTP campus in North Carolina to create one of the most technologically advanced and energy efficient data centers in the world. The new data center will be the first in the world to be built with IBM's New Enterprise Data Center design principles. Clients using this center will have unparalleled access to massive internet-scale computing capabilities, while gaining the cost and environmental protection advantages of IBM's industry-leading, energy-efficient data center design.

Data centers are the backbone of information technology (IT) infrastructure for businesses and other organizations, with powerful servers and storage systems running business-critical technology including software applications, email and web sites. IBM owns and operates more than eight million square feet of data center space -- more than any other company in the world.

This new RTP data center is a key component in IBM's Project Big Green initiative to dramatically increase energy efficiency in the data center, as companies face escalating energy costs and the requirement to handle a rapidly rising amount of data.

"This announcement further demonstrates IBM's commitment to our state and to our people," said Gov. Mike Easley. "I look forward to maintaining this partnership with IBM for years to come."

"This new data center is part of IBM's commitment to construct the world's most advanced data centers," said Bob Greenberg, general manager of IT Optimization and North Carolina Senior State Executive at IBM. "This is the latest example of IBM's deep history of innovation in North Carolina. When we open for business in late 2009, the new IBM data center assures that Research Triangle Park will be a strategic location for our outsourcing business for many years to come. I'd like to thank the State of North Carolina, Durham County, the Durham Chamber of Commerce, and Duke Energy for their outstanding support that helped make this project possible."

"IBM's innovations have been a cornerstone of the Research Triangle Park and Durham County, and this new state-of-the-art data center certainly continues that outstanding legacy," said Ellen W. Reckhow, Chairman of the Durham County Board of Commissioners. Durham County approved allocating $750,000 in economic development incentives for IBM's new data center.

More information from IBM.


Thursday, July 31, 2008

Phoenix, We Have Water On Mars.

TUCSON, Ariz. -- Laboratory tests aboard NASA's Phoenix Mars Lander have identified water in a soil sample. The lander's robotic arm delivered the sample Wednesday to an instrument that identifies vapors produced by the heating of samples.

"We have water," said William Boynton of the University of Arizona, lead scientist for the Thermal and Evolved-Gas Analyzer, or TEGA. "We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix last month, but this is the first time Martian water has been touched and tasted."

With enticing results so far and the spacecraft in good shape, NASA also announced operational funding for the mission will extend through Sept. 30. The original prime mission of three months ends in late August. The mission extension adds five weeks to the 90 days of the prime mission.
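The timeline arithmetic above is easy to verify: landing on May 25 plus a 90-day prime mission runs to late August, and extending operations through Sept. 30 adds roughly five more weeks:

```python
from datetime import date, timedelta

# Checking the mission timeline quoted above: Phoenix landed May 25, 2008,
# with a 90-day prime mission, later extended through Sept. 30.
landing = date(2008, 5, 25)
prime_end = landing + timedelta(days=90)               # falls in late August
extension_days = (date(2008, 9, 30) - prime_end).days  # about five weeks
```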

"Phoenix is healthy and the projections for solar power look good, so we want to take full advantage of having this resource in one of the most interesting locations on Mars," said Michael Meyer, chief scientist for the Mars Exploration Program at NASA Headquarters in Washington.

The soil sample came from a trench approximately 2 inches deep. When the robotic arm first reached that depth, it hit a hard layer of frozen soil. Two attempts to deliver samples of icy soil on days when fresh material was exposed were foiled when the samples became stuck inside the scoop. Most of the material in Wednesday's sample had been exposed to the air for two days, letting some of the water in the sample vaporize away and making the soil easier to handle.

"Mars is giving us some surprises," said Phoenix principal investigator Peter Smith of the University of Arizona. "We're excited because surprises are where discoveries come from. One surprise is how the soil is behaving. The ice-rich layers stick to the scoop when poised in the sun above the deck, different from what we expected from all the Mars simulation testing we've done. That has presented challenges for delivering samples, but we're finding ways to work with it and we're gathering lots of information to help us understand this soil."

Since landing on May 25, Phoenix has been studying soil with a chemistry lab, TEGA, a microscope, a conductivity probe and cameras. Besides confirming the 2002 finding from orbit of water ice near the surface and deciphering the newly observed stickiness, the science team is trying to determine whether the water ice ever thaws enough to be available for biology and if carbon-containing chemicals and other raw materials for life are present.

The mission is examining the sky as well as the ground. A Canadian instrument is using a laser beam to study dust and clouds overhead.

"It's a 30-watt light bulb giving us a laser show on Mars," said Victoria Hipkin of the Canadian Space Agency.

A full-circle, color panorama of Phoenix's surroundings also has been completed by the spacecraft.

"The details and patterns we see in the ground show an ice-dominated terrain as far as the eye can see," said Mark Lemmon of Texas A&M University, lead scientist for Phoenix's Surface Stereo Imager camera. "They help us plan measurements we're making within reach of the robotic arm and interpret those measurements on a wider scale."

The Phoenix mission is led by Smith at the University of Arizona with project management at NASA's Jet Propulsion Laboratory in Pasadena, Calif., and development partnership at Lockheed Martin in Denver. International contributions come from the Canadian Space Agency; the University of Neuchatel, Switzerland; the universities of Copenhagen and Aarhus in Denmark; the Max Planck Institute in Germany; and the Finnish Meteorological Institute.

For more about Phoenix, visit:

http://www.nasa.gov/phoenix

Media contacts: Guy Webster 818-354-6278
Jet Propulsion Laboratory, Pasadena, Calif.
guy.webster@jpl.nasa.gov

Wednesday, July 16, 2008

Argonne National Laboratory's IBM Blue Gene/P, The Fastest Supercomputer In The World For Open Science.

ARGONNE, Ill. (June 18, 2008) — The U.S. Department of Energy's (DOE) Argonne National Laboratory's IBM Blue Gene/P high-performance computing system is now the fastest supercomputer in the world for open science, according to the semiannual Top500 List of the world's fastest computers.

The Top500 List was announced today during the International Supercomputing Conference in Dresden, Germany.

The Blue Gene/P – known as Intrepid and located at the Argonne Leadership Computing Facility (ALCF) – also ranked third fastest overall. Both rankings represent the first time an Argonne-based supercomputing system has ranked in the top five of the industry's definitive list of supercomputers.

The Blue Gene/P has a peak performance of 557 teraflops (put in other terms, 557 trillion calculations per second). Intrepid achieved a speed of 450.3 teraflops on the Linpack application used to measure speed for the Top500 rankings.
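Relating the two figures quoted above, Intrepid's Linpack result works out to roughly 81 percent of its theoretical peak:

```python
# Relating the two speed figures quoted above: 557 teraflops peak
# versus 450.3 teraflops sustained on the Linpack benchmark.
peak_tflops = 557.0
linpack_tflops = 450.3

# Fraction of theoretical peak actually achieved on Linpack.
efficiency = linpack_tflops / peak_tflops
# roughly 0.808, i.e. about 81% of peak
```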

"Intrepid's speed and power reflect the DOE Office of Science's determined effort to provide the research and development community with powerful tools that enable them to make innovative and high-impact science and engineering breakthroughs," said Rick Stevens, associate laboratory director for computing, environmental and life sciences at Argonne.

"The ALCF and Intrepid have only just begun to have a meaningful impact on scientific research," Stevens said. "In addition, continued expansion of ALCF computing resources will not only be instrumental in addressing critical scientific research challenges related to climate change, energy, health and our basic understanding of the world, but in the future will transform and advance how science research and engineering experiments are conducted and attract social sciences research projects, as well."

"Scientists and society are already benefiting from ALCF resources," said Peter Beckman, ALCF acting director. "For example, ALCF's Blue Gene resources have allowed researchers to make major strides in evaluating the molecular and environmental features that may lead to the clinical diagnosis of Parkinson's disease and Lewy body dementia, as well as to simulate materials and designs that are important to the safe and reliable use of nuclear energy plants."

Eighty percent of Intrepid's computing time has been set aside for open science research through the DOE Office of Science's (SC) highly selective Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. There are currently 20 INCITE projects at the ALCF that will use 111 million hours of computing time this year. SC's Office of Advanced Scientific Computing Research provides high-level computing power focused on large-scale installations used by scientists and engineers in many disciplines.

The Top500 List is compiled by Hans Meuer of the University of Mannheim in Germany, Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory, and Erich Strohmaier and Horst Simon of DOE's National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. The list made its debut in June 1993; ranked No. 1 then was the Thinking Machines Corporation CM-5 at DOE's Los Alamos National Laboratory, with 1,024 processors and a peak performance of 131 gigaflops.

Argonne National Laboratory brings the world's brightest scientists and engineers together to find exciting and creative new solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America 's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

For more information, please contact Angela Hardin (630/252-5501 or ahardin@anl.gov) at Argonne.

Wednesday, June 18, 2008

Ixia To Demo 100 GbE Network Connections

June 17, 2008. In Las Vegas, NV, Ixia will be the first test vendor to demonstrate testing of 100 GbE network connections. This proof-of-concept demonstration can be seen as part of Infinera's demonstration of 100 GbE transport at their booth (#5607) and by appointment in Ixia's private meeting room (MSL 4704).

More Internet bandwidth is continually needed due to the general expansion of Internet usage and to the adoption of bandwidth-hungry applications, including IPTV, video, peer-to-peer sharing, and wireless backhaul. Computer and network architecture and technology have kept pace with such advancements as multi-core processing, virtualization, networked storage, and I/O convergence. These improvements have similarly led to increased demand for data center bandwidth.

Historically, increased bandwidth requirements have been addressed with solutions that begin by combining multiple lower bandwidth links, and then transition to higher bandwidth links. We've seen this multiple times, in the transition from 10 Mbps to 100 Mbps to 1 Gbps to 10 Gbps.

Today, both within data centers and across major Internet links, multiple 10 Gigabit Ethernet (10 GbE) links are aggregated using one of several link aggregation protocols. Aggregation, however, comes with a raft of problems. Use of multiple 10 GbE links requires a large-scale replication of resources, including expensive computer interfaces and switch ports. Aggregation of multiple flows can be a complex operation, calling for packing of flows that use less than 10 Gbps of traffic and splitting of flows that use more than 10 Gbps. The bursty nature of most Internet protocols, coupled with stringent SLAs, often results in bandwidth wastage.
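The packing-and-splitting problem described above is essentially bin packing. As a rough illustration (my own toy model, not anything Ixia ships), a first-fit-decreasing packer assigns flows, each under 10 Gbps, onto as few 10 GbE links as it can:

```python
# A toy model of the flow-packing problem described above: greedily
# assigning flows (each below 10 Gbps) onto the fewest 10 GbE links,
# using first-fit decreasing bin packing.
def pack_flows(flow_gbps, link_capacity=10.0):
    """Assign each flow to the first link with room; return per-link loads."""
    links = []
    for f in sorted(flow_gbps, reverse=True):  # largest flows first
        for i, load in enumerate(links):
            if load + f <= link_capacity:
                links[i] += f       # fits on an existing link
                break
        else:
            links.append(f)         # open a new 10 GbE link
    return links
```

Even this toy version hints at the operational pain: flows must be split or repacked as they grow, and bursty traffic under tight SLAs forces links to run well below capacity, which is exactly the waste a single 100 GbE link avoids.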

It's essential that single, logical, high-speed links be available in the data center and throughout the Internet. The step to 100 GbE is seen as a key ingredient to the next major expansion of the Internet - "the core empowers the edge."

If these high-speed links are ever to become ubiquitous, standards and conventions must be developed. The IEEE has taken the lead in 40 GbE and 100 GbE standards. Starting with the inception of the IEEE HSSG (high speed study group) in mid-2006, the Task Force (IEEE 802.3ba) was formalized at the end of 2007 and is scheduled to issue approved standards by 2010. At the same time, multiple network component vendors are participating in multi-source agreements (MSAs) for internal and interface components not dictated by the standards.

The members of the IEEE committee strongly believe that 100 GbE is ready now - that no new breakthrough is needed to proceed. In fact, due to the immediate need for high-speed links, NEMs and carriers have implemented the 802.3ba pre-standards. There's likewise an immediate need for test equipment to be used by NEMs, carriers, and enterprises to validate full line-rate operation, verify interoperability, and validate protocol operation.

To meet the needs of 100 GbE users, Ixia has concentrated its efforts toward providing IEEE 802.3ba test equipment that tracks the evolving standards for 100 GbE.

Ixia's 100 Gigabit Ethernet Proof of Concept Demonstration offers:
  • An IEEE 802.3ba Task Force-based implementation
  • The first 100 GbE line-rate traffic generation and analysis solution
  • The first PCS-layer implementation using multilane distribution (MLD)
Ixia believes that its active participation in the IEEE standardization effort and its dedication to providing test equipment that paces the standardization effort will help to foster the development of 100 GbE network products. A full family of compatible, tested products will serve to ensure the acceptance of this important technology.

When available, Ixia's 100 GbE interface card will merge seamlessly with all other Ixia interfaces and test applications. As a compatible member of Ixia's test platform, it will allow full layer 2-7 testing with all types of interfaces, at all speeds.

For more information, including white papers on 100 GbE and PCS/MLD, please go to http://www.ixiacom.com/100GbE. For a private demonstration and briefing at NXTComm'08 - e-mail Thananya Baldwin, Strategic Programs Director, at tbaldwin@ixiacom.com to make an appointment to see this leading-edge technology.

One Quadrillion Floating Point Operations Per Second: Supercomputer “Roadrunner” Tops 31st TOP500 Supercomputer List


MANNHEIM, Germany; BERKELEY, Calif. & KNOXVILLE, Tenn.—With the publication of the latest edition of the TOP500 list of the world’s most powerful supercomputers today (Wednesday, June 18), the global high performance computing community has officially entered a new realm—a supercomputer with a peak performance of more than 1 petaflop/s (one quadrillion floating point operations per second).

The new No. 1 system, built by IBM for the U.S. Department of Energy’s Los Alamos National Laboratory and named “Roadrunner” by LANL after the state bird of New Mexico, achieved a performance of 1.026 petaflop/s, becoming the first supercomputer ever to reach this milestone. At the same time, Roadrunner is also one of the most energy-efficient systems on the TOP500.

The 31st edition of the TOP500 list was released at the International Supercomputing Conference in Dresden, Germany. Since 1993, the list has been produced twice a year and is the most extensive survey of trends and changes in the global supercomputing arena.

“Over the past few months, there were a number of rumors going around about whether Roadrunner would be ready in time to make the list, as well as whether other high-profile systems would submit performance numbers,” said Erich Strohmaier, a computer scientist at Lawrence Berkeley National Laboratory and a founding editor of the TOP500 list. “So, as the reports came in during recent weeks, it’s been both exciting and challenging to compile this edition.”

The Roadrunner system, based on IBM QS22 blades built with advanced versions of the processor found in the Sony PlayStation 3, displaces the reigning IBM BlueGene/L system at DOE’s Lawrence Livermore National Laboratory. BlueGene/L, with a performance of 478.2 teraflop/s (trillions of floating point operations per second), is now ranked No. 2 after holding the top position since November 2004.

Rounding out the top five positions, all of which are in the U.S., are the new IBM BlueGene/P (450.3 teraflop/s) at DOE’s Argonne National Laboratory, the new Sun SunBlade x6420 “Ranger” system (326 teraflop/s) at the Texas Advanced Computing Center at the University of Texas – Austin, and the upgraded Cray XT4 “Jaguar” (205 teraflop/s) at DOE’s Oak Ridge National Laboratory.

Among all systems, Intel continues to power an increasing number, with Intel processors now found in 75 percent of the TOP500 supercomputers, up from 70.8 percent of the 30th list released last November.

Other highlights from the latest list include:

  • Quad-core processor based systems have taken over the TOP500 quite rapidly: 283 systems already use them. Another 203 systems use dual-core processors, only 11 systems still use single-core processors, and three systems use IBM's advanced Sony PlayStation 3 processor with nine cores.
  • The top industrial customer, at No. 10, is the French oil company Total Exploration Production.
  • IBM held on to its lead in systems with 210 systems (42 percent) over Hewlett Packard with 183 systems (36.6 percent). IBM had 232 systems (46.4 percent) six months ago, compared to HP with 166 systems (33.2 percent).
  • IBM remains the clear leader in the TOP500 list in performance with 48 percent of installed total performance (up from 45 percent), compared to HP with 22.4 percent (down from 23.9 percent). In the systems category, Dell, SGI and Cray follow with 5.4 percent, 4.4 percent and 3.2 percent respectively.
  • The last system on the list would have been listed at position 200 in the previous TOP500 just six months ago. This is the largest turnover rate in the 16-year history of the TOP500 project.
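The vendor-share bullets above are simple ratios over the 500-entry list. A minimal Python sketch (counts restated from the article) confirms the quoted percentages:

```python
# Vendor system counts from the 31st TOP500 list, as quoted above.
counts = {"IBM": 210, "Hewlett-Packard": 183}
total_systems = 500

for vendor, n in counts.items():
    share = 100 * n / total_systems
    print(f"{vendor}: {n} of {total_systems} systems = {share:.1f}%")
# IBM: 210 of 500 systems = 42.0%
# Hewlett-Packard: 183 of 500 systems = 36.6%
```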

For the first time, the TOP500 list also provides energy efficiency calculations for many of the computing systems and will continue tracking them in a consistent manner.

  • The most energy-efficient supercomputers are based on:
    • IBM QS22 Cell processor blades (up to 488 Mflop/s/Watt),
    • IBM BlueGene/P systems (up to 376 Mflop/s/Watt).
  • Intel Harpertown quad-core blades are catching up fast:
    • IBM BladeCenter HS21 with low-power processors (up to 265 Mflop/s/Watt),
    • SGI Altix ICE 8200EX Xeon quad-core nodes (up to 240 Mflop/s/Watt),
    • Hewlett-Packard Cluster Platform 3000 BL2x220 with double-density blades (up to 227 Mflop/s/Watt).
  • These systems are already ahead of BlueGene/L (up to 210 Mflop/s/Watt).
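For comparison, the efficiency figures above can be lined up in one place. This small Python sketch simply restates the article's Mflop/s/Watt numbers and sorts them:

```python
# Energy efficiency of the systems named above, in Mflop/s per watt.
efficiency = {
    "IBM QS22 Cell blades": 488,
    "IBM BlueGene/P": 376,
    "IBM BladeCenter HS21 (low-power)": 265,
    "SGI Altix ICE 8200EX": 240,
    "HP Cluster Platform 3000 BL2x220": 227,
    "IBM BlueGene/L": 210,
}

# Print from most to least efficient, with a Gflop/s/W conversion.
for name, mfw in sorted(efficiency.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:34s} {mfw} Mflop/s/W = {mfw / 1000:.3f} Gflop/s/W")
```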

Rounding out the Top 10 systems are:

  • The No. 6 system is the top system outside the U.S., installed in Germany at the Forschungszentrum Juelich (FZJ). It is an IBM BlueGene/P system and was measured at 180 Tflop/s.
  • The No. 7 system is installed at a new center, the New Mexico Computing Applications Center (NMCAC) in Rio Rancho, NM. It is built by SGI and based on the Altix ICE 8200 model. It was measured at 133.2 Tflop/s.
  • For the second time, India placed a system in the top 10. The Computational Research Laboratories, a wholly owned subsidiary of Tata Sons Ltd. in Pune, India, installed a Hewlett-Packard Cluster Platform 3000 BL460c system. They integrated this system with their own innovative routing technology and achieved a performance of 132.8 Tflop/s, which was sufficient for the No. 8 spot.
  • The No. 9 system is a new BlueGene/P system installed at the Institut du Développement et des Ressources en Informatique Scientifique (IDRIS) in France, which was measured at 112.5 Tflop/s.
  • The last new system in the TOP10 – at No. 10 – is also an SGI Altix ICE 8200 system. It is the biggest system installed at an industrial customer, Total Exploration Production. It was ranked based on a Linpack performance of 106.1 Tflop/s.

The U.S. is clearly the leading consumer of HPC systems with 257 of the 500 systems. The European share (184 systems, up from 149) is still rising and is again larger than the Asian share (48 systems, down from 58).

Dominant countries in Asia are Japan with 22 systems (up from 20), China with 12 systems (up from 10), India with 6 systems (down from 9), and Taiwan with 3 (down from 11).

In Europe, the UK remains No. 1 with 53 systems (48 six months ago). Germany improved but is still in the No. 2 spot with 46 systems (31 six months ago).

The TOP500 list is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.

Thursday, May 08, 2008

"BLUEFIRE", Power 575 Hydro-Cluster, Delivered To The National Center for Atmospheric Research (NCAR)

16 Dual-Core POWER6 CPUs Flanked By 64 DIMMs
BOULDER—The National Center for Atmospheric Research (NCAR) has taken delivery of a new IBM supercomputer that will advance research into severe weather and the future of Earth's climate. The supercomputer, known as a Power 575 Hydro-Cluster, is the first in a highly energy-efficient class of machines to be shipped anywhere in the world.

Scientists at NCAR and across the country will use the new system to accelerate research into climate change, including future patterns of precipitation and drought around the world, changes to agriculture and growing seasons, and the complex influence of global warming on hurricanes. Researchers also will use it to improve weather forecasting models so society can better anticipate where and when dangerous storms may strike.

Named "bluefire," the new supercomputer has a peak speed of more than 76 teraflops (76 trillion floating-point operations per second). When fully operational, it is expected to rank among the 25 most powerful supercomputers in the world and will more than triple NCAR's sustained computing capacity.

"Bluefire is on the leading edge of high-performance computing technology," says Tom Bettge, director of operations and services for NCAR's Computational and Information Systems Laboratory. "Increasingly fast machines are vital to research into such areas as climate change and the formation of hurricanes and other severe storms. Scientists will be able to conduct breakthrough calculations, study vital problems at much higher resolution and complexity, and get results more quickly than before."

Researchers will rely on bluefire to generate the climate simulations necessary for the next report on global warming by the Intergovernmental Panel on Climate Change (IPCC), which conducts detailed assessments under the auspices of the United Nations. The IPCC was a recipient of the 2007 Nobel Peace Prize.

"NCAR has a well-deserved reputation for excellence in deploying supercomputing resources to address really difficult challenges," says Dave Turek, vice president of deep computing at IBM. "Bluefire will substantially expand the organization's ability to investigate climate change, severe weather events, and other important subjects."

Bluefire by the numbers

Bluefire is the second phase of a system called the Integrated Computing Environment for Scientific Simulation (ICESS) at NCAR. After undergoing acceptance testing, it will begin full-scale operations in August. Bluefire, which replaces three supercomputers with an aggregate peak speed of 20 teraflops, will provide supercomputing support for researchers at NCAR and other organizations through 2011.

An IBM Power 575 supercomputer, bluefire houses the new POWER6 microprocessor, which has a clock speed of 4.7 gigahertz. The system consists of 4,064 processors, 12 terabytes of memory, and 150 terabytes of FAStT DS4800 disk storage.
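As a rough cross-check, the quoted peak of a little over 76 teraflops is consistent with 4,064 POWER6 cores at 4.7 gigahertz if one assumes POWER6's commonly cited four floating-point operations per cycle per core (two FPUs, each capable of a fused multiply-add). That per-cycle figure is an assumption, not stated in the article:

```python
# Back-of-the-envelope peak estimate for bluefire.
clock_hz = 4.7e9        # POWER6 clock speed, from the article
cores = 4064            # processor count, from the article
flops_per_cycle = 4     # assumed: 2 FPUs x fused multiply-add per cycle

peak_tflops = clock_hz * cores * flops_per_cycle / 1e12
print(f"Estimated peak: {peak_tflops:.1f} Tflop/s")  # ~76.4 Tflop/s
```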

Bluefire relies on a unique, water-based cooling system that is 33 percent more energy efficient than traditional air-cooled systems. Heat is removed from the electronics by water-chilled copper plates mounted in direct contact with each POWER6 microprocessor chip. As a result of this water-cooled system and POWER6 efficiencies, bluefire is three times more energy efficient per rack than its predecessor.

"We're especially pleased that bluefire provides dramatically increased performance with much greater energy efficiency," Bettge says.

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under primary sponsorship by the National Science Foundation (NSF). Opinions, findings, conclusions, or recommendations expressed in this document are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, NASA, or other funding agencies.

Related sites on the World Wide Web

Bluefire Home Page (includes fact sheets and additional images)


Climate Computer To Consume Less Than 4 Megawatts Of Power And Achieve A Peak Performance Of 200 Petaflops.


BERKELEY, Calif. — Three researchers from the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have proposed an innovative way to improve global climate change predictions by using a supercomputer with low-power embedded microprocessors, an approach that would overcome limitations posed by today’s conventional supercomputers.

In a paper published in the May issue of the International Journal of High Performance Computing Applications, Michael Wehner and Lenny Oliker of Berkeley Lab’s Computational Research Division, and John Shalf of the National Energy Research Scientific Computing Center (NERSC) lay out the benefit of a new class of supercomputers for modeling climate conditions and understanding climate change. Using the embedded microprocessor technology used in cell phones, iPods, toaster ovens and most other modern day electronic conveniences, they propose designing a cost-effective machine for running these models and improving climate predictions.

In April, Berkeley Lab signed a collaboration agreement with Tensilica®, Inc. to explore such new design concepts for energy-efficient high-performance scientific computer systems. The joint effort is focused on novel processor and systems architectures using large numbers of small processor cores, connected together with optimized links, and tuned to the requirements of highly-parallel applications such as climate modeling.

Understanding how human activity is changing global climate is one of the great scientific challenges of our time. Scientists have tackled this issue by developing climate models that use the historical data of factors that shape the earth’s climate, such as rainfall, hurricanes, sea surface temperatures and carbon dioxide in the atmosphere. One of the greatest challenges in creating these models, however, is to develop accurate cloud simulations.

Although cloud systems have been included in climate models in the past, they lack the details that could improve the accuracy of climate predictions. Wehner, Oliker and Shalf set out to establish a practical estimate for building a supercomputer capable of creating climate models at 1-kilometer (km) scale. A cloud system model at the 1-km scale would provide rich details that are not available from existing models.

To develop a 1-km cloud model, scientists would need a supercomputer that is 1,000 times more powerful than what is available today, the researchers say. But building a supercomputer powerful enough to tackle this problem is a huge challenge.

Historically, supercomputer makers have built larger and more powerful systems by increasing the number of conventional microprocessors — usually the same kinds of microprocessors used to build personal computers. Although feasible for building computers large enough to solve many scientific problems, using this approach to build a system capable of modeling clouds at a 1-km scale would cost about $1 billion. The system also would require 200 megawatts of electricity to operate, enough energy to power a small city of 100,000 residents.

In their paper, “Towards Ultra-High Resolution models of Climate and Weather,” the researchers present a radical alternative that would cost less to build and require less electricity to operate. They conclude that a supercomputer using about 20 million embedded microprocessors would deliver the results and cost $75 million to construct. This “climate computer” would consume less than 4 megawatts of power and achieve a peak performance of 200 petaflops.
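The headline figures imply about 10 Gflop/s per embedded core and roughly 50 Gflop/s per watt. A short sketch of that arithmetic, with all numbers restated from the article:

```python
# Arithmetic implied by the proposed "climate computer".
peak_flops = 200e15   # 200 petaflop/s peak performance
cores = 20e6          # about 20 million embedded processors
power_w = 4e6         # under 4 megawatts of power

print(f"Per-core peak:   {peak_flops / cores / 1e9:.0f} Gflop/s")
print(f"Peak efficiency: {peak_flops / power_w / 1e9:.0f} Gflop/s per watt")
```

The per-core result lines up with the later statement that each processor would "deliver billions of floating point operations per second."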

“Without such a paradigm shift, power will ultimately limit the scale and performance of future supercomputing systems, and therefore fail to meet the demanding computational needs of important scientific challenges like climate modeling,” Shalf said.

The researchers arrive at their findings by extrapolating performance data from the Community Atmospheric Model (CAM). CAM, developed at the National Center for Atmospheric Research in Boulder, Colorado, is a series of global atmosphere models commonly used by weather and climate researchers.

The “climate computer” is not merely a concept. Wehner, Oliker and Shalf, along with researchers from UC Berkeley, are working with scientists from Colorado State University to build a prototype system in order to run a new global atmospheric model developed at Colorado State.

“What we have demonstrated is that in the exascale computing regime, it makes more sense to target machine design for specific applications,” Wehner said. “It will be impractical from a cost and power perspective to build general-purpose machines like today’s supercomputers.”

Under the agreement with Tensilica, the team will use Tensilica’s Xtensa LX extensible processor cores as the basic building blocks in a massively parallel system design. Each processor will dissipate a few hundred milliwatts of power, yet deliver billions of floating point operations per second and be programmable using standard programming languages and tools. This equates to an order-of-magnitude improvement in floating point operations per watt, compared to conventional desktop and server processor chips. The small size and low power of these processors allows tight integration at the chip, board and rack level and scaling to millions of processors within a power budget of a few megawatts.

Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California. Visit our Website at www.lbl.gov.


Thursday, April 24, 2008

The New IBM iDataPlex Technology To Help Web 2.0 Developers.

SAN FRANCISCO, CA - 23 Apr 2008: Web 2.0 Expo – IBM Global Financing, the lending and leasing business segment of IBM, today announced financing opportunities to help customers access an entire new category of servers uniquely designed to address the technology needs of companies that use Web 2.0-style computing to operate massive data centers with tens of thousands of servers.

The recently announced IBM “iDataPlex” system leverages IBM’s blade server heritage to build a completely new design point that:
  • More than doubles the number of systems that can run in a single rack,
  • Uses 40 percent less power while increasing the amount of computing that can be done on a single system 5X,
  • Can be outfitted with a liquid cooling wall on the back of the system that enables it to run at “room temperature” -- no air conditioning required,
  • Uses all industry standard components as well as open source software such as Linux to lower costs.
“IBM Global Financing offers an end-to-end solution for customers looking to access the new IBM iDataplex technology,” said John Callies, general manager of IBM Global Financing. “From acquisition to disposal, IBM Global Financing can be there to help Web 2.0 customers and other segments with high performance environments access these benefits.”

IBM Global Financing is uniquely positioned to offer attractive lease rates to customers looking to access the IBM iDataPlex system because of its ability to capture high residual value in the secondary market for these new servers. Customers in the US can also take advantage of the US Economic Stimulus Advantage offering developed by IBM Global Financing. Under this offering, announced earlier this year, US customers acquiring the IBM iDataPlex system in 2008 can benefit from either enhanced rates or a free three-month deferral on leases.

IBM Global Financing will also help clients accessing this technology spread the costs of these servers, and of the software and services needed to implement them, flexibly over time. IBM Global Financing's Project Financing offerings help match costs to benefits, with low upfront payments during the installation process that ramp up as the benefits of the new technology begin to be realized. This is a significant benefit for CEOs and CFOs looking to manage costs while simultaneously funding innovation.

Customers looking to replace their existing data center equipment with the new iDataPlex technology can also benefit from IBM Global Asset Recovery Services, which can manage the disposal of equipment in accordance with environmental regulations, paying special attention to the security of the data contained on the hard drive.

Monday, April 21, 2008

The U.S. Department of Energy's (DOE) Argonne National Laboratory celebrates the dedication of the Argonne Leadership Computing Facility

ARGONNE, Ill. (April 21, 2008) – The U.S. Department of Energy's (DOE) Argonne National Laboratory today celebrated the dedication of the Argonne Leadership Computing Facility (ALCF) during a ceremony attended by key federal, state and local officials.

The ALCF is a leadership-class computing facility that enables the research and development community to make innovative and high-impact science and engineering breakthroughs. Through the ALCF, researchers conduct computationally intensive projects on the largest possible scale. Argonne operates the ALCF for the DOE Office of Science as part of the larger DOE Leadership Computing Facility strategy. DOE leads the world in providing the most capable civilian supercomputers for science.

"I am delighted to see this realization of our vision to bring the power of the department's high performance computing to open scientific research," said DOE Under Secretary for Science Raymond L. Orbach. "This facility will not only strengthen our scientific capability but also advance the competitiveness of the region and our nation. The early results span the gamut from astrophysics to Parkinson's research, and are exciting examples of what's to come."

Orbach, Patricia Dehmer, DOE Office of Science Deputy Director for Science Programs, and Michael Strayer, DOE Associate Director of Science for Advanced Scientific Computing Research, attended the ALCF dedication, along with Congresswoman Judy Biggert.

DOE makes the computing power of the ALCF available to a highly select group of researchers at publicly and privately held research organizations, universities and industrial concerns in the United States and overseas. Major ALCF projects are chosen by DOE through a competitive peer review program known as Innovative and Novel Computational Impact on Theory and Experiment (INCITE).

Earlier this year, DOE announced that 20 INCITE projects were awarded 111 million hours of computing time at the ALCF. The diverse array of awards includes projects led by Igor Tsigelny, San Diego Supercomputer Center, University of California, San Diego, to model the molecular basis of Parkinson's disease; William Tang, Princeton Plasma Physics Laboratory, to conduct high-resolution global simulations of plasma microturbulence; and Jeffrey Fox, Gene Network Sciences, to simulate potentially dangerous rhythm disorders of the heart that will provide greater insight into these disorders and ideas for prevention and treatment. Academic institutions, including the University of Chicago, the University of California at Davis and Northwestern University, and large public companies such as Procter & Gamble and Pratt & Whitney, also received computing time at the ALCF through INCITE.

Argonne has been a leading force in high-performance computing. Two years prior to the establishment of the ALCF in 2006, Argonne and Lawrence Livermore National Laboratory began working closely with IBM to develop a series of computing systems based on IBM's BlueGene platform. Argonne and IBM jointly sponsor the international BlueGene Consortium to share expertise and software for the IBM BlueGene family of computers.

Since 2005, Argonne has taken delivery of a BlueGene/L and BlueGene/P that have a combined performance capability of 556 teraflops. Key strengths include a low-power system-on-a-chip architecture that dramatically improves reliability and power efficiency. The BlueGene systems also feature a scalable communications fabric that enables science applications to spend more time computing and less time moving data between CPUs. Together with DOE's other Leadership Computing Facility at Oak Ridge National Laboratory, which has deployed a large Cray supercomputer, computational scientists have platforms that provide capabilities for breakthrough science.

"The ALCF has tremendous computing ability, making it one of the country's preeminent computing facilities," said Argonne Director Robert Rosner. "The research results generated by the ALCF will be used to develop technologies beneficial to the U.S. economy and address issues that range from the environment and clean and efficient energy to climate change and healthcare."

DOE selected a team composed of Argonne, PNNL and ORNL in 2004 to develop the DOE Office of Science (SC) Leadership Computing Facilities after a competitive peer review of four proposals. PNNL operates the Molecular Science Computing Facility, and LBNL runs the National Energy Research Science Computing Center. DOE SC's computational capabilities are expected to quadruple the current INCITE award allocations to nearly a billion processor hours in 2009.

Argonne National Laboratory brings the world's brightest scientists and engineers together to find exciting and creative new solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

For more information, please contact Angela Hardin (630/252-5501 or ahardin@anl.gov) at Argonne.


Tuesday, April 15, 2008

OpenEdge 10.1c Released By Progress Software.

BEDFORD, Mass.--(BUSINESS WIRE)--April 15, 2008--Progress Software Corporation (NASDAQ: PRGS), a global supplier of application infrastructure software used to develop, deploy, integrate and manage business applications, today announced the immediate availability of the Progress(R) OpenEdge(R) 10.1C business application development platform. With this release, OpenEdge becomes the first business application development platform to support IPv6, a next generation Internet protocol designed to bring superior reliability, flexibility and security to the Internet. Other large vendors have so far failed to reach this key government-mandated milestone, and in some cases have been forced to recall products that were originally billed as IPv6-compliant.

Additional enhancements include improved error handling capabilities, a next generation OpenEdge Sonic(TM) Enterprise ESB adapter, Unicode support for both the Oracle DataServer and MS SQL DataServer, plus support for Eclipse 3.2.2. OpenEdge is the first integrated platform optimized for the development and deployment of service-oriented business applications. It isolates developers from the complexities of today's computing environments, allowing them to concentrate on what really matters: creating the business logic of their application. Recently, IDC named Progress Software the largest pure-play embedded database management system (DBMS) vendor in its report "Worldwide Embedded DBMS 2007-2011 Forecast and 2006 Vendor Shares" (Doc #209653, December 2007).

The independent software vendors that comprise Progress Software's network of ISVs (called Progress Application Partners) can continue to develop their software the way they have always done in order to gain IPv6 support. At the same time, applications using the IPv4 standard now have the option to upgrade at any time.

Erwin "Ray" Bender, Program Manager with GE Healthcare commented: "The limitless network addressing capability in IPv6 was essential to us in rolling out our Centricity pharmacy product for the U.S. Department of Defense (DoD). Within the DoD, we have 500 pharmacies located at 300 military facilities, each with label printing and robot prescription filling capabilities. With IPv4, routing and sub-netting were becoming untenable. Progress Software has been an invaluable partner working with us to meet the U.S government mandate to implement IPv6 and also achieve our corporate goal of moving towards a global pharmacy system."

IPv6 arose as the new network layer to replace the 20-year-old IPv4 standard because the Internet is essentially "running out" of unique IP addresses. IPv6 provides a much larger address space that allows greater flexibility in assigning IP addresses. The standard is of particular interest to independent software developers now because the United States government set forth a mandate requiring all federal agencies to upgrade their network backbones to IPv6 by June 2008. As a result, if developers want their applications deployed by government agencies or government contractors, they must ensure their applications work properly in IPv6 environments.
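The address-space gap between the two protocols is easy to see with Python's standard-library ipaddress module (a generic illustration, unrelated to the OpenEdge product itself):

```python
import ipaddress

# The entire IPv4 and IPv6 address spaces as zero-prefix networks.
v4 = ipaddress.ip_network("0.0.0.0/0")
v6 = ipaddress.ip_network("::/0")

print(f"IPv4: 2**32  = {v4.num_addresses:,} addresses")
print(f"IPv6: 2**128 = {v6.num_addresses:.3e} addresses")
```

IPv4 tops out at about 4.3 billion addresses, while IPv6 offers on the order of 10^38, which is what makes per-device addressing schemes like the pharmacy deployment described above practical.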

In addition to IPv6 addressing capabilities, OpenEdge 10.1C also includes the following enhancements:

  • Improved error handling capabilities
  • A next generation OpenEdge Sonic(TM) Enterprise ESB adapter
  • Unicode support added to the Oracle DataServer and MS SQL DataServer
  • Additional 24x7 continuous database availability and problem resolution enhancements
  • Support for Eclipse 3.2.2, enabling developers to seamlessly extend the OpenEdge development environment to the Windows Vista platform
  • Enhanced object oriented programming capabilities to facilitate object reuse and improve developer productivity
  • Improvements to OpenEdge Architect including new views and graphics tools, enhanced macro functionality, and new ABL editor wizards, dialogs, and UI features
  • Database resiliency validations that minimize planned downtime for maintenance and upgrades and reduce unplanned downtime by identifying and correcting problems
  • Installation enhancements that further automate the installation by using an electronic license addendum file to automatically enter serial numbers and product control codes
  • 64-bit JVM support for stored procedures and database triggers on all 64-bit platforms, including AIX64, Solaris64, Linux64, HP PA-RISC 64, and HP Itanium

More details on the OpenEdge 10.1C platform are available at: http://www.progress.com/openedge/products/openedge/



Thursday, April 10, 2008

Charles Babbage Comes To Silicon Valley

The difference engine arrived Wednesday at the Computer History Museum. Here it is being lifted off its delivery truck while still in its red shipping cover.(Credit: Daniel Terdiman/CNET News.com)

I was there to see the unveiling of this fantastic masterpiece. But I will let the master storyteller, Daniel Terdiman, tell the story. I also need to borrow one of his pictures, as I left my camera in San Jose yesterday. You can see more photos at News.com; follow the link below. I will write more about the machine later.

But do not miss the six-month window: go and see the machine. It is a wonder that will get the wheels turning in your head!
Babbage invented the machine but never built it; the first one constructed to his design in modern times was completed in 1991 at London's Science Museum. Much more recently, tech millionaire Nathan Myhrvold visited the London museum and decided he wanted one for himself. So he commissioned the museum to build it for him.

Three and a half years later, the machine was finished, but before it goes into Myhrvold's living room it is going to spend six months on proud display at the Computer History Museum here. And on Wednesday, it was expected to arrive at the Mountain View museum.

Daniel Terdiman's story


Saturday, April 05, 2008

Second Life Grid By Linden Lab and IBM


SAN JOSE, Calif. - 10 Oct 2007: IBM (NYSE: IBM) and Linden Lab®, creator of the virtual world Second Life® (www.secondlife.com), today announced the intent to develop new technologies and methodologies based on open standards that will help advance the future of 3D virtual worlds.

IBM in Second Life photo

IBM and Linden in Push for Open, Integrated 3-D 'Net: Two IBM employees -- represented by their 3-D avatars -- have a discussion prior to a business meeting at the IBM Open Source and Standards office in the virtual world Second Life. IBM and Linden Lab today announced they will work with a broad community of partners to drive open standards and interoperability to enable avatars -- the online persona of visitors to these online worlds -- to move from one virtual world to another with ease, much like you can move from one website to another on the Internet today. The companies see many applications of virtual world technology for business and society in commerce, collaboration, education, training and more.

As more enterprises and consumers explore the 3D Internet, the ecosystem of virtual world hosts, application providers, and IT vendors need to offer a variety of standards-based solutions in order to meet end user requirements. To support this, IBM and Linden Lab are committed to exploring the interoperability of virtual world platforms and technologies, and plan to work with industry-wide efforts to further expand the capabilities of virtual worlds.

"As the 3D Internet becomes more integrated with the current Web, we see users demanding more from these environments and desiring virtual worlds that are fit for business," said Colin Parris, vice president, Digital Convergence, IBM. "IBM and Linden Lab working together can help accelerate the use and further development of common standards and tools that will contribute to this new environment."

"We have built the Second Life Grid as part of the evolution of the Internet," said Ginsu Yoon, vice president, Business Affairs, Linden Lab. "Linden and IBM share a vision that interoperability is key to the continued expansion of the 3D Internet, and that this tighter integration will benefit the entire industry. Our open source development of interoperable formats and protocols will accelerate the growth and adoption of all virtual worlds."

IBM and Linden Lab plan to work together on issues concerning the integration of virtual worlds with the current Web; driving security-rich transactions of virtual goods and services; working with the industry to enable interoperability between various virtual worlds; and building more stability and high quality of service into virtual world platforms. These are expected to be key characteristics facing organizations which want to take advantage of virtual worlds for commerce, collaboration, education and other business applications.

More specifically, IBM and Linden Lab plan to collaborate on:

* "Universal" Avatars: Exploring technology and standards for users of the 3D Internet to seamlessly travel between different virtual worlds. Users could maintain the same “avatar” name, appearance and other important attributes (digital assets, identity certificates, and more) for multiple worlds. The adoption of a universal “avatar” and associated services are a possible first step toward the creation of a truly interoperable 3D Internet.

* Security-rich Transactions: Collaborating on the requirements for standards-based software designed to enable the security-rich exchange of assets in and across virtual worlds. This could allow users to perform purchases or sales with other people in virtual worlds for digital assets including 3D models, music, and media, in an environment with robust security and reliability features.

* Platform stability: Making interfaces easier to use in order to accelerate user adoption, deliver faster response times for real-world interactions and provide for high-volume business use.

* Integration with existing Web and business processes: Allowing current business applications and data repositories – regardless of their source – to function in virtual worlds is anticipated to help enable widespread adoption and rapid dissemination of business capabilities for the 3D Internet.

* Open standards for interoperability with the current Web: Open source development of interoperable formats and protocols. Open standards in this area are expected to allow virtual worlds to connect together so that users can cross from one world to another, just like they can go from one web page to another on the Internet today.


IBM is actively working with a number of companies in the IT and virtual world community on the development of standards-based technologies. This week IBM hosted an industry-wide meeting to discuss virtual world interoperability, the role of standards, and the potential of forming an industry-wide consortium open to all. This meeting is also expected to begin to address the technical challenges of interoperability and required and recommended standards.

Linden Lab has formed an Architecture Working Group that defines the roadmap for the development of the Second Life Grid. This open collaboration with the community allows users of Second Life to help define the direction of an interoperable, Internet-scale architecture.

For more information about the Second Life Grid visit http://secondlifegrid.net/. The Second Life community maintains information about the Architecture Working Group at http://wiki.secondlife.com/wiki/Architecture_Working_Group.

Thursday, April 03, 2008

Ninf-G5 is now available from APGRID.

I received the following information from the Ninf-G group. I have been working with Ninf-G for a long time now, and the new features announced for Version 5.0 (Ninf-G5) make me want to upgrade. But because I am working with a number of other people, I need to plan out the upgrade. Thank you, Yoshio Tanaka.
Ninf-G version 5.0.0 is now available for download at the Ninf project home page: http://ninf.apgrid.org/ .

Ninf-G Version 5.0.0 (Ninf-G5) is a new version of Ninf-G, a reference implementation of the GridRPC API.

Major functions of Ninf-G include (1) remote process invocation, (2) information services, and (3) communication between the Ninf-G Client and Servers. Ninf-G4 is able to utilize various middleware for remote process invocation; however, it relies on the Globus Toolkit for information services and for communication between the Ninf-G Client and Servers.
Ninf-G5, on the other hand, does not assume any specific Grid middleware as a prerequisite; that is, unlike past versions of Ninf-G (e.g. Ninf-G2, Ninf-G4), Ninf-G5 works in non-Globus-Toolkit environments. Ninf-G5 is able to utilize various middleware not only for remote process invocation but also for information services and communication between the Ninf-G Client and Servers. Ninf-G5 is appropriate for a single system as well as for non-Globus Grid environments, and is expected to provide high performance for task-parallel applications from a single system up to the Grid.
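Ninf-G's actual client API is in C (the GridRPC `grpc_*` calls); as a purely conceptual sketch in Python, with hypothetical names rather than the real Ninf-G API, the handle-then-asynchronous-call pattern that makes GridRPC suited to task-parallel applications looks roughly like this:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for GridRPC's handle/call model. These names are
# illustrative only -- the real Ninf-G client API is in C.
def init_handle(server, func_name):
    """Bind a 'function handle' to a remote server and function name."""
    def call(*args):
        # A real GridRPC call would ship args to `server` and run remotely;
        # here we just compute locally to show the calling pattern.
        return (func_name, server, sum(args))
    return call

with ThreadPoolExecutor() as pool:
    # Task-parallel pattern: bind handles to several servers, fire
    # asynchronous calls, then collect all results.
    handles = [init_handle(f"server{i}", "vector_sum") for i in range(3)]
    futures = [pool.submit(h, 1, 2, 3) for h in handles]
    results = [f.result() for f in futures]

print(results)
```

The point of the pattern is that the client blocks only at result collection, so many remote invocations proceed concurrently.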

There are some compatibility issues between Ninf-G5 and Ninf-G4. The GridRPC API and Ninf-G API implemented by Ninf-G5 are compatible with Ninf-G4 except for two small issues (details are described in the CHANGES file):
- The Ninf-G Client configuration file for Ninf-G4 is not compatible with Ninf-G5.
- Due to protocol changes, a Ninf-G4 client cannot communicate with Ninf-G5 executables, and vice versa.

If you have any questions or comments, please send email to ninf@apgrid.org or ninf-users@apgrid.org .

http://ninf.apgrid.org/
http://www.apgrid.org/


Tuesday, April 01, 2008

OKI Develops World's First 160Gbps Optical 3R Regenerator for Ultra Long Distance Data Transmission

Image from a paper written by Kozo Fujii (PDF): Development of an Ultra High-Speed Optical Signal Processing Technology - For Practical Implementation of a 160Gbit/s Optical Communication System.

TOKYO--(BUSINESS WIRE)--Oki Electric Industry Co., Ltd. announced that it is the world’s first to achieve all-optically regenerated transmission, which enables unlimited transmission of 160Gbps optical signals on a single wavelength. To demonstrate the results of this project, OKI used an optical test-bed provided by the National Institute of Information and Communications Technology (NICT)’s Japan Gigabit Network II (JGN II)(1). The research that led to OKI’s achievement was conducted as part of the "Research and Development on Lambda Utility Technology,” under the auspices of NICT.

“This result proves that we can now transmit data at 160Gbps, a speed equivalent to transmitting four movies, approximately 8 hours of data, in a single second. This amount of data at this speed can be sent over distances greater than the length of Japan, which is about 3,000km, and in fact to the other side of the planet, which is about 20,000km,” said Takeshi Kamijo, General Manager of Corporate R&D Center at OKI. “160Gbps data transmission uses an ultra high-speed optical communication technology that is expected to be commercialized in 2010 or after. OKI will analyze the findings from the field trial and develop a commercial-level 160Gbps optical 3R Regenerator.”

In a conventional optical communication system, an optical amplifier is placed every 50 to 100 km to compensate for propagation loss. Because signal distortion and timing jitter accumulate during transmission, the faster the speed of transmission, the shorter the transmission range. Therefore, to achieve longer distance, optical signals are converted into electric signals before the transmission limit is reached and converted back into optical signals and re-transmitted after the signal processing is completed. However, the speed for batch signal processing is currently limited to 40Gbps. Therefore, technologies to efficiently regenerate optical signals without converting them to electric signals are required in order to achieve a transmission speed of over 100Gbps.

To do this, OKI developed an all-optical 3R Regenerator, which uses a specialized optical-repeater technology with functions for re-amplification, re-shaping to remove optical signal wave distortion, and re-timing to avoid timing jitter accumulation. With these advances, in theory, it is possible to achieve signal processing speeds of over 200Gbps.
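As a toy numeric illustration of the three "R"s described above (this is a software sketch, not the all-optical hardware process): a bit stream is attenuated and picks up amplitude noise in transit, then a regenerator re-amplifies it and re-shapes it with a threshold decision, recovering clean bits.

```python
import random

# Illustrative model only: real 3R regeneration is done all-optically.
random.seed(42)
bits = [1, 0, 1, 1, 0, 0, 1, 0]

# Degrade the signal: 0.2x attenuation plus small amplitude noise.
received = [0.2 * b + random.uniform(-0.05, 0.05) for b in bits]

gain = 5.0                                              # re-amplification
amplified = [gain * s for s in received]
regenerated = [1 if s > 0.5 else 0 for s in amplified]  # re-shaping (threshold decision)
# (Re-timing -- resampling at nominal bit centers -- is trivial here,
# since the samples are already aligned to bit slots.)

print(regenerated == bits)
```

Because the decision threshold removes the accumulated distortion entirely, a chain of such regenerators can in principle extend reach indefinitely, which is the claim the field trial tests.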

OKI also developed a Polarization Mode Dispersion Compensator (PMDC) that adaptively mitigates the impact of the changes in transmission line characteristics that are unique to optical fiber. Polarization mode dispersion is a phenomenon whereby wave distortion increases in an oval-shaped fiber core. The dispersion value changes depending on the temperature or transmission environment. Because the faster the transmission speed, the more sensitive it is to such changes, a PMDC is indispensable for transmission systems operating at over 40Gbps. OKI’s newly developed PMDC adopts a design to fully leverage the optical 3R Regenerator.

In the field trial using this equipment, OKI demonstrated that, in principle, there is hardly any limit to transmission distance. Though 40Gbps and 80Gbps transmission using all-optical 3R Regenerators has been done in the past, OKI is the first in the world to conduct a field trial using 160Gbps optical signal regenerators.

By evaluating the performance of all-optical 3R regenerators while changing the regenerator spacing, OKI achieved a maximum regenerator spacing of 380km, which is equivalent to transmitting at 160Gbps between Tokyo and Osaka with just one optical 3R regenerator.
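A back-of-envelope sketch of what that spacing implies (illustrative assumption: regenerators placed at the maximum 380 km spacing from the trial, ignoring real route topology):

```python
import math

def regenerators_needed(link_km, spacing_km=380):
    """Minimum in-line 3R regenerators for a link, assuming uniform
    placement at the maximum spacing (illustrative assumption)."""
    # Every full spacing interval after the first needs a regenerator.
    return max(0, math.ceil(link_km / spacing_km) - 1)

print(regenerators_needed(500))    # Tokyo-Osaka-class link: 1 regenerator
print(regenerators_needed(3000))   # roughly the length of Japan
print(regenerators_needed(20000))  # the other side of the planet
```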

The findings from this trial were reported at the general conference held by The Institute of Electronics, Information and Communication Engineers on March 20.

[Glossary]

(1) Optical test-bed provided by Japan Gigabit Network II (JGN II)

Working together with JGN II, NICT provides a next generation optical network R&D environment to manufacturers and institutions that do not have their own environment.

Saturday, March 22, 2008

IBM Announces European Cloud Computing Hub in Dublin

DUBLIN, IRELAND and ARMONK, NY - 19 Mar 2008: Today IBM (NYSE: IBM) and the Industrial Development Agency of Ireland (IDA Ireland) announced the establishment of Europe's first Cloud Computing Center. Located in Dublin, the new facility will serve as a hub that will deliver Cloud Computing research and services to a number of satellite facilities to be built in Europe, Middle East and Africa. IBM experts from these centers will work directly with clients in the region, helping them adopt cloud computing solutions that spur technology research and business development.

One of the Dublin center's first offerings for clients, called IBM Idea Factory for Cloud Computing, is a new service delivered directly to clients over a cloud computing environment. Using Web 2.0 technology, it allows communities of business professionals to be assembled into social networks to facilitate the development of new business ideas. IBM Idea Factory for Cloud Computing captures business processes -- from their beginnings as ideas to commercialization -- speeding up brainstorming among employees, partners, software developers and other third party participants.

"The selection of Ireland as the location for IBM's European hub for Cloud Computing highlights Ireland's role as an important contributor to IBM's global research, development and innovation strategy," said Micheál Martin TD, Minister for Enterprise, Trade and Employment for the Irish government. "The investment further establishes IBM Ireland's growing reputation as a high performance computing centre within IBM Corporation. IDA Ireland and IBM have a proactive long-standing relationship in advancing Irish business and the implementation of strategic high value knowledge-based research and development investments."

"Our investments in cloud computing are a prime example of how IBM is seeking out emerging global market opportunities and new computing models that benefit IBM clients," said Steve Mills, Senior Vice President and Group Executive, IBM Software Group. "Through this new facility and the cloud computing model, the wealth of talent at IBM's software lab in Ireland will be accessible to not only the rest of Europe, but Africa and the Middle East as well."

Cloud computing is an information technology (IT) infrastructure in which dynamically shared computing resources are virtualized and accessed as a service. Cloud computing replaces the traditional data center model in which companies own and manage their own stand-alone hardware and software systems. Cloud computing is an attractive proposition for small to large-sized companies. It is also a green technology model that reduces energy consumption by improving IT resource utilization, therefore requiring fewer servers to handle equivalent workloads.

The need for cloud computing is fueled by the dramatic growth in business collaboration, connected devices, real-time data streams, and Web 2.0 applications such as streaming media and entertainment, social networking and mobile commerce.

The first client of the center at Dublin will be the Sogeti Group, a specialist provider of Local Professional IT Services. Sogeti plans to use the IBM Idea Factory for Cloud Computing, providing its employees around the world with the technology to collectively brainstorm online and generate new ideas about building the "Sogeti of the Future."

"Innovation is at the heart of every successful company," said Michiel Boreel, CTO of Sogeti. "By utilizing IBM Cloud Computing Center resources, we expect to generate a wealth of real-world solutions for accelerating Sogeti's international growth and delivering step-change for our clients. Another positive benefit is increased interaction and collaboration between our consultants around the world, as well as hands-on experience with this leading-edge computing power."

"Responding to demand in the market, we are moving fast to build an integrated cloud computing operation. We are adding Europe's first Cloud Computing Center at Dublin and more facilities into a network of existing centers in China, Vietnam and the U.S. The centers can bring skills and expertise to serve our clients in building their own new enterprise data centers," said Dr. Willy Chiu, Vice President of High Performance On Demand Solutions, IBM Software Group. With such a network of centers, Dr. Chiu pointed out, "We will also address the need for open interoperability standards."

The IBM High Performance on Demand Solutions Lab will work with IDA Ireland to build this center using IBM's "Blue Cloud" technologies, a series of cloud computing offerings based on industry open standards and open source software. IBM Tivoli systems management software will manage the Cloud Computing environment.

The center will place a focus on innovation and research activities. As part of its ongoing educational initiatives, IBM has also agreed to facilitate cloud computing training for lecturers at the Dublin Institute of Technology's School of Computing. The training will help the school to foster new skills that meet the needs of this emerging computing model.

IBM Cloud Computing Milestones
IBM has been expanding its cloud computing capabilities around the world. Over the past year, IBM has provided cloud computing services to clients such as China Telecom, Wuxi Municipal Government of China, the Ministry of Science and Technology of Vietnam and others. IBM also launched "Blue Cloud," a series of cloud computing offerings, and entered into partnerships for cloud computing programs with a number of partners in Europe.

For more information about IBM cloud computing, please visit http://www.ibm.com/developerworks/websphere/zones/hipods/
