Thursday, April 24, 2008

The New IBM iDataPlex Technology To Help Web 2.0 Developers.

SAN FRANCISCO, CA - 23 Apr 2008: Web 2.0 Expo – IBM Global Financing, the lending and leasing business segment of IBM, today announced financing opportunities to help customers access an entirely new category of servers uniquely designed to address the technology needs of companies that use Web 2.0-style computing to operate massive data centers with tens of thousands of servers.

The recently announced IBM “iDataPlex” system leverages IBM’s blade server heritage to build a completely new design point that:
  • More than doubles the number of systems that can run in a single rack,
  • Uses 40 percent less power while increasing the amount of computing that can be done on a single system by 5X,
  • Can be outfitted with a liquid cooling wall on the back of the system that enables it to run at “room temperature” -- no air conditioning required,
  • Uses industry-standard components as well as open source software such as Linux to lower costs.

“IBM Global Financing offers an end-to-end solution for customers looking to access the new IBM iDataPlex technology,” said John Callies, general manager of IBM Global Financing. “From acquisition to disposal, IBM Global Financing can be there to help Web 2.0 customers and other segments with high performance environments access these benefits.”

IBM Global Financing is uniquely positioned to offer attractive lease rates to customers looking to access the IBM iDataPlex system because of its ability to capture high residual value in the secondary market for these new servers. Customers in the US can also receive additional benefits under the US Economic Stimulus Advantage offering developed by IBM Global Financing. Under this offering, announced earlier this year, US customers acquiring the IBM iDataPlex system in 2008 can benefit from either enhanced rates or a free three-month deferral on leases.

IBM Global Financing will also help clients accessing this technology spread the costs of these servers, and of the software and services needed to implement them, flexibly over time. IBM Global Financing’s Project Financing offerings help match costs to benefits, with low upfront payments during installation that ramp up as the benefits of the new technology begin to be realized. This is a significant benefit for CEOs and CFOs looking to manage costs while simultaneously funding innovation.

Customers looking to replace their existing data center equipment with the new iDataPlex technology can also benefit from IBM Global Asset Recovery Services, which can manage the disposal of equipment in accordance with environmental regulations, paying special attention to the security of the data contained on the hard drive.

Monday, April 21, 2008

The U.S. Department of Energy's (DOE) Argonne National Laboratory celebrates the dedication of the Argonne Leadership Computing Facility

ARGONNE, Ill. (April 21, 2008) – The U.S. Department of Energy's (DOE) Argonne National Laboratory today celebrated the dedication of the Argonne Leadership Computing Facility (ALCF) during a ceremony attended by key federal, state and local officials.

The ALCF is a leadership-class computing facility that enables the research and development community to make innovative and high-impact science and engineering breakthroughs. Through the ALCF, researchers conduct computationally intensive projects on the largest possible scale. Argonne operates the ALCF for the DOE Office of Science as part of the larger DOE Leadership Computing Facility strategy. DOE leads the world in providing the most capable civilian supercomputers for science.

"I am delighted to see this realization of our vision to bring the power of the department's high performance computing to open scientific research," said DOE Under Secretary for Science Raymond L. Orbach. "This facility will not only strengthen our scientific capability but also advance the competitiveness of the region and our nation. The early results span the gamut from astrophysics to Parkinson's research, and are exciting examples of what's to come."

Orbach, Patricia Dehmer, DOE Office of Science Deputy Director for Science Programs, and Michael Strayer, DOE Associate Director of Science for Advanced Scientific Computing Research, attended the ALCF dedication, along with Congresswoman Judy Biggert.

DOE makes the computing power of the ALCF available to a highly select group of researchers at publicly and privately held research organizations, universities and industrial concerns in the United States and overseas. Major ALCF projects are chosen by DOE through a competitive peer review program known as Innovative and Novel Computational Impact on Theory and Experiment (INCITE).

Earlier this year, DOE announced that 20 INCITE projects were awarded 111 million hours of computing time at the ALCF. The diverse array of awards includes projects led by Igor Tsigelny, San Diego Supercomputer Center, University of California, San Diego, to model the molecular basis of Parkinson's disease; William Tang, Princeton Plasma Physics Laboratory, to conduct high-resolution global simulations of plasma microturbulence; and Jeffrey Fox, Gene Network Sciences, to simulate potentially dangerous rhythm disorders of the heart that will provide greater insight into these disorders and ideas for prevention and treatment. Academic institutions, including the University of Chicago, the University of California at Davis and Northwestern University, and large public companies such as Procter & Gamble and Pratt & Whitney, also received computing time at the ALCF through INCITE.

Argonne has been a leading force in high-performance computing. Two years prior to the establishment of the ALCF in 2006, Argonne and Lawrence Livermore National Laboratory began working closely with IBM to develop a series of computing systems based on IBM's BlueGene platform. Argonne and IBM jointly sponsor the international BlueGene Consortium to share expertise and software for the IBM BlueGene family of computers.

Since 2005, Argonne has taken delivery of a BlueGene/L and a BlueGene/P that have a combined performance capability of 556 teraflops. Key strengths include a low-power system-on-a-chip architecture that dramatically improves reliability and power efficiency. The BlueGene systems also feature a scalable communications fabric that enables science applications to spend more time computing and less time moving data between CPUs. Together with DOE's other Leadership Computing Facility at Oak Ridge National Laboratory, which has deployed a large Cray supercomputer, computational scientists have platforms that provide capabilities for breakthrough science.

"The ALCF has tremendous computing ability, making it one of the country's preeminent computing facilities," said Argonne Director Robert Rosner. "The research results generated by the ALCF will be used to develop technologies beneficial to the U.S. economy and address issues that range from the environment and clean and efficient energy to climate change and healthcare."

DOE selected a team composed of Argonne, Pacific Northwest National Laboratory (PNNL) and Oak Ridge National Laboratory (ORNL) in 2004 to develop the DOE Office of Science (SC) Leadership Computing Facilities after a competitive peer review of four proposals. PNNL operates the Molecular Science Computing Facility, and Lawrence Berkeley National Laboratory (LBNL) runs the National Energy Research Scientific Computing Center. DOE SC's computational capabilities are expected to quadruple the current INCITE award allocations to nearly a billion processor hours in 2009.

Argonne National Laboratory brings the world's brightest scientists and engineers together to find exciting and creative new solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.

For more information, please contact Angela Hardin (630/252-5501 or ahardin@anl.gov) at Argonne.


Tuesday, April 15, 2008

OpenEdge 10.1C Released By Progress Software.

BEDFORD, Mass.--(BUSINESS WIRE)--April 15, 2008--Progress Software Corporation (NASDAQ: PRGS), a global supplier of application infrastructure software used to develop, deploy, integrate and manage business applications, today announced the immediate availability of the Progress(R) OpenEdge(R) 10.1C business application development platform. With this release, OpenEdge becomes the first business application development platform to support IPv6, a next-generation Internet protocol designed to bring superior reliability, flexibility and security to the Internet. Other large vendors have so far failed to reach this key government-mandated milestone, and in some cases have been forced to recall products that were originally billed as IPv6-compliant.

Additional enhancements include improved error handling capabilities, a next generation OpenEdge Sonic(TM) Enterprise ESB adapter, Unicode support for both the Oracle DataServer and MS SQL DataServer, plus support for Eclipse 3.2.2. OpenEdge is the first integrated platform optimized for the development and deployment of service-oriented business applications. It isolates developers from the complexities of today's computing environments, allowing them to concentrate on what really matters: creating the business logic of their application. Recently, IDC named Progress Software the largest pure-play embedded database management system (DBMS) vendor in its report "Worldwide Embedded DBMS 2007-2011 Forecast and 2006 Vendor Shares" (Doc #209653, December 2007).

The independent software vendors in Progress Software's partner network (called Progress Application Partners) can continue to develop their software the way they always have and still gain IPv6 support. At the same time, applications built on the IPv4 standard have the option to upgrade at any time.

Erwin "Ray" Bender, Program Manager with GE Healthcare commented: "The limitless network addressing capability in IPv6 was essential to us in rolling out our Centricity pharmacy product for the U.S. Department of Defense (DoD). Within the DoD, we have 500 pharmacies located at 300 military facilities, each with label printing and robot prescription filling capabilities. With IPv4, routing and sub-netting were becoming untenable. Progress Software has been an invaluable partner working with us to meet the U.S government mandate to implement IPv6 and also achieve our corporate goal of moving towards a global pharmacy system."

IPv6 arose as the new network layer to replace the 20-year-old IPv4 standard because the Internet is essentially "running out" of unique IP addresses. IPv6 provides a much larger address space that allows greater flexibility in assigning IP addresses. The standard is of particular interest to independent software developers now because the United States government set forth a mandate requiring all federal agencies to upgrade their network backbones to IPv6 by June 2008. As a result, if developers want their applications deployed by government agencies or government contractors, they must ensure their applications work properly in IPv6 environments.
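
For developers who need to verify that their own code works properly in IPv6 environments, the usual first step is to stop hard-coding IPv4 address structures and resolve addresses in a protocol-independent way. Below is a minimal, generic C sketch (not OpenEdge-specific; the host name and port are hypothetical placeholders) that uses getaddrinfo() with AF_UNSPEC so the same client code connects over IPv6 where it is available and falls back to IPv4 otherwise.

    /* Minimal protocol-independent TCP client: works over IPv6 or IPv4.
     * Illustrative only -- "app.example.gov" and port "8080" are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int connect_any(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *p;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;    /* accept IPv6 (AF_INET6) or IPv4 (AF_INET) */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        /* Try each resolved address until one connects. */
        for (p = res; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;                        /* -1 if every address failed */
    }

    int main(void)
    {
        int fd = connect_any("app.example.gov", "8080");
        puts(fd >= 0 ? "connected" : "connection failed");
        if (fd >= 0)
            close(fd);
        return 0;
    }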

In addition to IPv6 addressing capabilities, OpenEdge 10.1C also includes the following enhancements:

    --  Improved error handling capabilities

    --  A next generation OpenEdge Sonic(TM) Enterprise ESB adapter

    --  Unicode support added to the Oracle DataServer and MS SQL DataServer

    --  Additional 24x7 continuous database availability and problem resolution enhancements

    --  Support for Eclipse 3.2.2, enabling developers to seamlessly extend the OpenEdge development environment to the Windows Vista platform

    --  Enhanced object oriented programming capabilities to facilitate object reuse and improve developer productivity

    --  Improvements to OpenEdge Architect, including new views and graphics tools, enhanced macro functionality, and new ABL editor wizards, dialogs, and UI features

    --  Database resiliency validations that minimize planned downtime for maintenance and upgrades and reduce unplanned downtime by identifying and correcting problems

    --  Installation enhancements that further automate the installation by using an electronic license addendum file to automatically enter serial numbers and product control codes

    --  64-bit JVM support for stored procedures and database triggers on all 64-bit platforms, including AIX64, Solaris64, Linux64, HP PA-RISC 64, and HP Itanium

More details on the OpenEdge 10.1C platform are available at: http://www.progress.com/openedge/products/openedge/



Thursday, April 10, 2008

Charles Babbage Comes To Silicon Valley

The difference engine arrived Wednesday at the Computer History Museum. Here it is being lifted off its delivery truck while still in its red shipping cover.(Credit: Daniel Terdiman/CNET News.com)

I was there to see the unveiling of this fantastic masterpiece. But I will let the master storyteller, Daniel Terdiman, tell the story. I also need to borrow one of his pictures, as I left my camera in San Jose yesterday. You can see more photos at News.com; follow the link below. I will write more about the machine later.

But do not miss the six-month window: go and see the machine, it is a wonder that will get wheels turning in your head!
The machine was invented by Babbage in the 19th century but never built in his lifetime; the first one actually constructed in modern times was completed in 1991 at London's Science Museum. Much more recently, tech millionaire Nathan Myhrvold visited the London museum and decided he wanted one for himself. So he commissioned the museum to build it for him.

Three-and-a-half years later, the machine was finished, but before it goes into Myhrvold's living room, it is going to spend six months on proud display at the Computer History Museum here. And on Wednesday, it was expected to arrive at the Mountain View museum.

Daniel Terdiman's story


Saturday, April 05, 2008

Second Life Grid By Linden Lab and IBM


SAN JOSE, Calif. - 10 Oct 2007: IBM (NYSE: IBM) and Linden Lab®, creator of the virtual world Second Life® (www.secondlife.com), today announced the intent to develop new technologies and methodologies based on open standards that will help advance the future of 3D virtual worlds.

IBM in Second Life photo

IBM and Linden in Push for Open, Integrated 3-D 'Net: Two IBM employees -- represented by their 3-D avatars -- have a discussion prior to a business meeting at the IBM Open Source and Standards office in the virtual world Second Life. IBM and Linden Lab today announced they will work with a broad community of partners to drive open standards and interoperability to enable avatars -- the online persona of visitors to these online worlds -- to move from one virtual world to another with ease, much like you can move from one website to another on the Internet today. The companies see many applications of virtual world technology for business and society in commerce, collaboration, education, training and more.

As more enterprises and consumers explore the 3D Internet, the ecosystem of virtual world hosts, application providers, and IT vendors needs to offer a variety of standards-based solutions in order to meet end user requirements. To support this, IBM and Linden Lab are committed to exploring the interoperability of virtual world platforms and technologies, and plan to work with industry-wide efforts to further expand the capabilities of virtual worlds.

"As the 3D Internet becomes more integrated with the current Web, we see users demanding more from these environments and desiring virtual worlds that are fit for business," said Colin Parris, vice president, Digital Convergence, IBM. "BM and Linden Lab's working together can help accelerate the use and further development of common standards and tools that will contribute to this new environment."

"We have built the Second Life Grid as part of the evolution of the Internet," said Ginsu Yoon, vice president, Business Affairs, Linden Lab. "Linden and IBM shares a vision that interoperability is key to the continued expansion of the 3D Internet, and that this tighter integration will benefit the entire industry. Our open source development of interoperable formats and protocols will accelerate the growth and adoption of all virtual worlds."

IBM and Linden Lab plan to work together on issues concerning the integration of virtual worlds with the current Web; driving security-rich transactions of virtual goods and services; working with the industry to enable interoperability between various virtual worlds; and building more stability and high quality of service into virtual world platforms. These are expected to be key requirements for organizations that want to take advantage of virtual worlds for commerce, collaboration, education and other business applications.

More specifically, IBM and Linden Lab plan to collaborate on:

* "Universal" Avatars: Exploring technology and standards for users of the 3D Internet to seamlessly travel between different virtual worlds. Users could maintain the same “avatar” name, appearance and other important attributes (digital assets, identity certificates, and more) for multiple worlds. The adoption of a universal “avatar” and associated services are a possible first step toward the creation of a truly interoperable 3D Internet.

* Security-rich Transactions: Collaborating on the requirements for standards-based software designed to enable the security-rich exchange of assets in and across virtual worlds. This could allow users to perform purchases or sales with other people in virtual worlds for digital assets including 3D models, music, and media, in an environment with robust security and reliability features.

* Platform stability: Making interfaces easier to use in order to accelerate user adoption, deliver faster response times for real-world interactions and provide for high-volume business use.

* Integration with existing Web and business processes: Allowing current business applications and data repositories – regardless of their source – to function in virtual worlds is anticipated to help enable widespread adoption and rapid dissemination of business capabilities for the 3D Internet.

* Open standards for interoperability with the current Web: Open source development of interoperable formats and protocols. Open standards in this area are expected to allow virtual worlds to connect together so that users can cross from one world to another, just like they can go from one web page to another on the Internet today.


IBM is actively working with a number of companies in the IT and virtual world community on the development of standards-based technologies. This week IBM hosted an industry-wide meeting to discuss virtual world interoperability, the role of standards and the potential of forming an industry-wide consortium open to all. This meeting is also expected to begin to address the technical challenges of interoperability and required and recommended standards.

Linden Lab has formed an Architecture Working Group that describes the roadmap for the development of the Second Life Grid. This open collaboration with the community allows users of Second Life to help define the direction of an interoperable, Internet-scale architecture.

For more information about the Second Life Grid visit http://secondlifegrid.net/. The Second Life community maintains information about the Architecture Working Group at http://wiki.secondlife.com/wiki/Architecture_Working_Group.

Thursday, April 03, 2008

The Ninf-G5 is now available from APGRID.

I received the following information from the Ninf-G group. I have been working with Ninf-G for a long time now, and the new features in Version 5.0, Ninf-G5, make me want to upgrade. But because I am working with a bunch of other people, I need to plan out the upgrade. Thank you, Yoshio Tanaka.
Ninf-G version 5.0.0 is now available for download at the Ninf project home page: http://ninf.apgrid.org/

Ninf-G Version 5.0.0 (Ninf-G5) is a new version of Ninf-G which is a reference implementation of the GridRPC API.

Major functions of Ninf-G include (1) remote process invocation, (2) information services, and (3) communication between the Ninf-G Client and Servers. Ninf-G4 is able to utilize various middleware for remote process invocation; however, it relies on the Globus Toolkit for information services and for communication between the Ninf-G Client and Servers.
On the other hand, Ninf-G5 does not assume specific Grid middleware as a prerequisite; that is, unlike past versions of Ninf-G (e.g. Ninf-G2, Ninf-G4), Ninf-G5 works in non-Globus-Toolkit environments. Ninf-G5 is able to utilize various middleware not only for remote process invocation but also for information services and communication between the Ninf-G Client and Servers. Ninf-G5 is appropriate for a single system as well as for non-Globus Grid environments. It is expected to provide high performance for task-parallel applications, from a single system up to the Grid.
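
For readers unfamiliar with GridRPC, the client-side pattern that Ninf-G implements is: initialize the client from a configuration file, bind a function handle to a remote executable, invoke it, and clean up. The C sketch below follows the standard GridRPC API in that spirit; the configuration file name, server name, remote function name and argument list are hypothetical placeholders (the real argument list must match the IDL of the remote routine), so treat it as an outline rather than a drop-in Ninf-G program.

    /* Outline of a GridRPC client in the style used by Ninf-G.
     * "client.conf", "server.example.org" and "sample/mmul" are placeholders. */
    #include <stdio.h>
    #include "grpc.h"

    #define N 100

    int main(void)
    {
        grpc_function_handle_t handle;
        grpc_error_t ret;
        static double a[N * N], b[N * N], c[N * N];  /* inputs left uninitialized in this outline */

        /* Read the client configuration (servers, protocols, etc.). */
        ret = grpc_initialize("client.conf");
        if (ret != GRPC_NO_ERROR) {
            fprintf(stderr, "grpc_initialize failed: %s\n", grpc_error_string(ret));
            return 1;
        }

        /* Bind a handle to a remote executable registered as "sample/mmul". */
        ret = grpc_function_handle_init(&handle, "server.example.org", "sample/mmul");
        if (ret != GRPC_NO_ERROR) {
            fprintf(stderr, "handle init failed: %s\n", grpc_error_string(ret));
            grpc_finalize();
            return 1;
        }

        /* Synchronous remote call; arguments must match the remote routine's IDL. */
        ret = grpc_call(&handle, N, a, b, c);
        if (ret != GRPC_NO_ERROR)
            fprintf(stderr, "grpc_call failed: %s\n", grpc_error_string(ret));

        grpc_function_handle_destruct(&handle);
        grpc_finalize();
        return 0;
    }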

Here are the compatibility issues between Ninf-G5 and Ninf-G4. The GridRPC API and Ninf-G API implemented by Ninf-G5 are compatible with Ninf-G4 except for two small issues (details are described in the CHANGES file):
- The Ninf-G Client configuration file for Ninf-G4 is not compatible with Ninf-G5.
- Due to protocol changes, Ninf-G4 clients cannot communicate with Ninf-G5 executables, and vice versa.

If you have any questions or comments, please send email to ninf@apgrid.org or ninf-users@apgrid.org.

http://ninf.apgrid.org/
http://www.apgrid.org/


Tuesday, April 01, 2008

OKI Develops World's First 160Gbps Optical 3R Regenerator for Ultra Long Distance Data Transmission

Image From Paper Written By Kozo Fujii (PDF), Development of an Ultra High-Speed Optical Signal Processing Technology - For Practical Implementation of a 160Gbit/s Optical Communication System.
OKI Develops World's First 160Gbps Optical 3R Regenerator for Ultra Long Distance Data Transmission Enabling Ultra High Capacity Data to Be Transmitted to the Other Side of the Planet

TOKYO--(BUSINESS WIRE)--Oki Electric Industry Co., Ltd. announced it is the world’s first to achieve all-optically regenerated transmission, which enables virtually unlimited transmission of 160Gbps optical signals on a single wavelength. To demonstrate the results of this project, OKI used an optical test-bed provided by the National Institute of Information and Communications Technology (NICT)’s Japan Gigabit Network II (JGN II)(1). The research that led to OKI’s achievement was conducted as part of the “Research and Development on Lambda Utility Technology,” under the auspices of NICT.

“This result proves that we can now transmit data at 160Gbps, a speed equivalent to transmitting four movies, approximately 8 hours of data, in a single second. This amount of data at this speed can be sent over distances greater than the length of Japan, which is about 3,000km, and in fact to the other side of the planet, which is about 20,000km,” said Takeshi Kamijo, General Manager of the Corporate R&D Center at OKI. “160Gbps data transmission uses an ultra high-speed optical communication technology that is expected to be commercialized in 2010 or later. OKI will analyze the findings from the field trial and develop a commercial-level 160Gbps optical 3R Regenerator.”
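
As a rough sanity check of that comparison (assuming, for illustration, that a roughly two-hour standard-definition movie occupies about 5 GB, an assumption rather than a figure from OKI): 160 Gbit/s divided by 8 bits per byte is 20 GByte/s, and four such movies total about 4 x 5 GB = 20 GB, so a link running at this rate could indeed move them in roughly one second.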

In a conventional optical communication system, an optical amplifier is placed every 50 to 100 km to compensate for propagation loss. Because signal distortion and timing jitter accumulate during transmission, the faster the speed of transmission, the shorter the transmission range. Therefore, to achieve longer distance, optical signals are converted into electric signals before the transmission limit is reached and converted back into optical signals and re-transmitted after the signal processing is completed. However, the speed for batch signal processing is currently limited to 40Gbps. Therefore, technologies to efficiently regenerate optical signals without converting them to electric signals are required in order to achieve a transmission speed of over 100Gbps.

To do this, OKI developed an all-optical 3R Regenerator, which uses a specialized optical-repeater technology with functions for re-amplification, re-shaping to remove optical signal wave distortion, and re-timing to avoid timing jitter accumulation. With these advances, in theory, it is possible to achieve signal processing speeds of over 200Gbps.

OKI also developed a Polarization Mode Dispersion Compensator (PMDC) that adaptively mitigates the impact of the changes in transmission line characteristics that are unique to optical fiber. Polarization mode dispersion is a phenomenon whereby wave distortion increases in an oval-shaped fiber core. The dispersion value changes depending on the temperature or transmission environment. Because the faster the transmission speed, the more sensitive it is to such changes, a PMDC is indispensable for transmission systems operating at over 40Gbps. OKI’s newly developed PMDC adopts a design to fully leverage the optical 3R Regenerator.

In the field trial using this equipment, OKI demonstrated that, in principle, there is hardly any limit to transmission distance. Though 40Gbps and 80Gbps transmission using all-optical 3R Regenerators has been demonstrated in the past, OKI is the first in the world to conduct a field trial using 160Gbps optical signal regenerators.

By evaluating the performance of all-optical 3R regenerators while changing the regenerator spacing, OKI achieved a maximum regenerator spacing of 380km, which is equivalent to transmitting at 160Gbps between Tokyo and Osaka with just one optical 3R regenerator.

The findings from this trial were reported at the general conference held by The Institute of Electronics, Information and Communication Engineers on March 20.

[Glossary]

(1) Optical test-bed provided by Japan Gigabit Network II (JGN II)

Working together with JGN II, NICT provides a next generation optical network R&D environment to manufacturers and institutions who do not have their own environment.