HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Atos Achieves SAP HANA Certification for Bullion Server

Wed, 12/07/2016 - 06:45

Dec. 7 — Atos, through its technology brand Bull, announces that bullion, its high-end enterprise x86 server, is now certified to run the SAP HANA platform with up to 16 terabytes (TB) of memory. The modular, ultra-flexible architecture helps simplify operations and improve business productivity and IT efficiency, and the scalable server lets clients expand and tailor their in-memory capacity over time. Today Atos is one of only two vendors worldwide delivering a certified platform above 8 TB.

Key advantages the 16 TB bullion for SAP HANA offers clients worldwide:

  • Same server technology from small to very large SAP HANA environments – With this new certification, achieved on a 16-CPU appliance model equipped with processors from the Intel Xeon E7 v4 family, bullion servers for SAP HANA are among the most scalable platforms worldwide. The whole range supports organizations in optimizing landscapes running SAP HANA – from a 512 GB up to a 16 TB database – on the same technology and without any disruption.
  • In-memory capacity of up to 16 TB to help meet companies’ business needs worldwide – Large companies will be able to migrate their largest mission-critical databases to SAP HANA, and hence leverage the speed and flexibility of the SAP HANA platform.

One of the biggest SAP HANA migration projects in the world

Atos’ six-year contract with Siemens, Europe’s largest engineering company, centers on a cloud platform built on SAP HANA that delivers data services to meet Siemens’ growing business demands. The platform – based on bullion, the enterprise high-end x86 server from Bull – is deployed worldwide to support more than 100,000 Siemens personnel across the whole Siemens Group. This is one of the biggest SAP HANA migration projects in the world, hosting critical data in more than 1 PB of aggregated memory.

This range of bullion appliances for SAP HANA reinforces Atos’ capabilities in successful end-to-end delivery of SAP solutions. As an SAP partner, Atos combines unique expertise and technologies to unleash the value of SAP HANA.

“This announcement is at the heart of our ambition to support customers in their digital journey – delivering an outstanding, agile and performant IT infrastructure,” said Pierre Barnabé, Chief Operating Officer, Big Data & Security at Atos. “With the extension of our bullion for SAP HANA range, we can match our customers’ extreme requirements for their SAP software landscape.”

More than 500 organizations have already chosen bullion technologies to host their most critical workloads in a secure, high-performance environment. Bullion servers from Atos hold performance records in international benchmarks from the Standard Performance Evaluation Corporation (SPEC).

About Atos

Atos SE (Societas Europaea) is a leader in digital services with pro forma annual revenue of circa EUR 12 billion and 100,000 employees in 72 countries. Serving a global client base, the Group provides Consulting & Systems Integration services, Managed Services & BPO, Cloud operations, Big Data & Cyber-security solutions, as well as transactional services through Worldline, the European leader in the payments and transactional services industry. With its deep technology expertise and industry knowledge, the Group works with clients across different business sectors: Defense, Financial Services, Health, Manufacturing, Media, Utilities, Public sector, Retail, Telecommunications, and Transportation. Atos is focused on business technology that powers progress and helps organizations to create their firm of the future. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline.

Source: Atos


NVIDIA Delivers AI Supercomputer to Berkeley

Wed, 12/07/2016 - 06:40

Dec. 7 — NVIDIA CEO Jen-Hsun Huang earlier this year delivered the NVIDIA DGX-1 AI supercomputer in a box to the University of California, Berkeley’s Berkeley AI Research Lab (BAIR).

BAIR’s more than two dozen faculty and more than 100 graduate students are at the cutting edge of multi-modal deep learning, human-compatible AI, and efforts to connect AI with other scientific disciplines and the humanities.

“I’m delighted to deliver one of the first ones to you,” Jen-Hsun told a group of researchers at BAIR celebrating the arrival of their DGX-1.

AI’s Need for Speed

The team at BAIR is working on a dazzling array of AI problems across a huge range of fields — and its researchers are eager to experiment with as many different approaches as possible.

To do that, they need speed, explains Pieter Abbeel, an associate professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

“More compute power directly translates into more ideas being investigated, tried out, tuned to actually get them to work,” Abbeel says. “So right now, an experiment might typically maybe take anywhere from a few hours to a couple of days, and so if we can get something like a 10-fold speed-up, that would narrow it down from that time to much shorter times — then we could right away try the next thing.”

Autonomous Driving

That speed — and the ability to manage huge quantities of data — is the key to new breakthroughs in deep learning, which, in turn, is key to helping computers navigate environments that people deal with every day, such as public roads, explains John Canny, the Paul and Stacy Jacobs Distinguished Professor of Engineering in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

“In driving, drivers continue to improve over many years and decades because of the experience that they gain,” Canny says. “In machine learning, deep learning currently doesn’t really manage data sets of that size — so our interest is in collecting, processing and leveraging those very large data sets.”

Cars that could learn not just from their own experiences — but from those of millions of other vehicles — promise to dramatically improve safety, explains Trevor Darrell, a professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

“But that’s just the tip of the iceberg,” Darrell says. “There will be also revolutions in transportation and logistics, the process of just moving stuff around — if you’d like to get a small package from here to there. If we could have autonomous vehicles of all sorts of sizes moving all of our goods and services around, I can’t even speculate the degree of productivity that will give us.”

Everyday Robotics

Giving machines the ability to learn from their experience is also the key to helping robots move from factory floors to less predictable environments, such as our homes, offices and hospitals, Abbeel says.

“It’s going to be important these robots can adapt to new situations they’ve never seen before,” Abbeel says. “The big challenge here is how to build an artificial intelligence that allows these robots to understand situations they’ve never seen before and still do the right thing.”

While deep learning is already part of commonly used web services that help machines categorize information — such as speech and image recognition — Abbeel and his colleagues are exploring ways to help machines make decisions on their own.

Called “reinforcement learning,” this new approach promises to help machines understand and navigate complex environments, Abbeel explains.
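The article stays at this descriptive level; purely as a generic illustration of the idea (not BAIR’s code), tabular Q-learning on a toy corridor world captures the core reinforcement-learning loop of acting, observing a reward, and updating a value estimate:

    # Generic tabular Q-learning on a toy corridor world: a minimal illustration
    # of the reinforcement-learning idea described above, not code from BAIR.
    import numpy as np

    n_states, n_actions = 6, 2            # corridor cells; actions: 0 = left, 1 = right
    goal = n_states - 1
    q = np.zeros((n_states, n_actions))   # value estimate for each (state, action)
    alpha, gamma, epsilon = 0.5, 0.95, 0.2
    rng = np.random.default_rng(0)

    def step(state, action):
        # Move along the corridor; reward 1.0 only when the goal cell is reached.
        nxt = max(0, min(goal, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == goal else 0.0), nxt == goal

    for _ in range(500):                  # episodes
        s = 0
        for _ in range(100):              # cap episode length
            # Epsilon-greedy: explore at random, or when the row is still untouched.
            if rng.random() < epsilon or not q[s].any():
                a = int(rng.integers(n_actions))
            else:
                a = int(q[s].argmax())
            s2, r, done = step(s, a)
            # Q-learning update: nudge toward reward + discounted best next value.
            q[s, a] += alpha * (r + gamma * q[s2].max() * (not done) - q[s, a])
            s = s2
            if done:
                break

    print(q.argmax(axis=1))               # learned policy: move right toward the goal

Deep reinforcement learning of the kind BAIR studies replaces the table with a neural network, which is where GPU systems like the DGX-1 come in.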

Building machines that can not only learn from their environment but also judge the risks they’re taking is key to building smarter robots, explained Sergey Levine, an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley.

Flying robots, for example, not only have to adapt to quickly changing environments, but have to be aware of the risks they’re taking as they fly. “We use deep learning to build deep neural-network policies for flight that are aware of their own uncertainty so that they don’t take actions for which they don’t really understand the outcome,” said Levine.

Fueling the AI Revolution

New approaches such as this promise to help researchers build machines that are, ultimately, more helpful. The speed of DGX-1’s GPUs and integrated software — and the connections between them — will help BAIR explore these new ideas faster than ever.

“There’s somewhat of a linear connection between how much compute power one has and how many experiments one can run,” Darrell says. “And how many experiments one can run determines how much knowledge you can acquire or discover.”

Source: Jim McHugh, NVIDIA


Blue Waters Speeds Gravitational Wave Analysis

Wed, 12/07/2016 - 06:35

Dec. 7 — The historic discovery of gravitational waves earlier this year has sent astronomers searching for more, and these scientists are turning to the Blue Waters supercomputer at NCSA to aid their quest.

So it was fitting that the NANOGrav annual fall meeting be held at NCSA, allowing researchers to see and work with Blue Waters firsthand. As part of the meeting, Eliu Huerta, chair of the local organizing committee and leader of NCSA’s Gravity group, organized a hackathon with the goal of using the GPUs in Blue Waters to accelerate the pipelines NANOGrav uses to search for gravitational waves. Huerta obtained the needed computational resources for this event through a Blue Waters education allocation.

The end results: the two main bottlenecks in the analysis were successfully removed and, for the first time, IPython notebooks ran on Blue Waters’ compute nodes in a stable configuration. In addition, the hackathon whetted the appetite of Illinois participants, who expressed a keen interest in joining future NANOGrav hackathons and in searching for gravitational waves with pulsar timing arrays on Blue Waters.

The pipelines used for the hackathon were developed by Justin Ellis and Steve Taylor, postdoctoral researchers at the Jet Propulsion Laboratory (JPL). The duo prepared IPython notebooks to isolate the main bottlenecks in the analysis; with these notebooks, code developers could interactively quantify whether new libraries or new pieces of code provided a significant speed-up.
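The release does not reproduce the notebook contents; the sketch below, with placeholder functions rather than actual NANOGrav pipeline code, shows the kind of interactive baseline-versus-candidate timing such a notebook makes easy:

    # Minimal sketch of an interactive timing comparison; the two "step"
    # functions are placeholders, not NANOGrav pipeline code.
    import time
    import numpy as np

    def baseline_step(m, v):
        # Stand-in for an unoptimized pipeline step: a generic dense solve.
        return np.linalg.solve(m, v)

    def candidate_step(m, v):
        # Stand-in for a proposed replacement: solve via a Cholesky factor,
        # which a real pipeline could reuse across many calls.
        c = np.linalg.cholesky(m)
        y = np.linalg.solve(c, v)
        return np.linalg.solve(c.T, y)

    def best_time(fn, *args, repeats=5):
        # Best of several runs, to reduce timing noise.
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn(*args)
            times.append(time.perf_counter() - t0)
        return min(times)

    rng = np.random.default_rng(0)
    a = rng.standard_normal((1500, 1500))
    m = a @ a.T + 1500 * np.eye(1500)     # symmetric positive definite test matrix
    v = rng.standard_normal(1500)

    t_base = best_time(baseline_step, m, v)
    t_cand = best_time(candidate_step, m, v)
    print(f"baseline {t_base:.4f}s  candidate {t_cand:.4f}s  "
          f"ratio {t_base / t_cand:.2f}x")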

Blue Waters team members Timothy Bouvet, Roland Haas, Sharif Islam, and Colin MacLean joined the hackers, contributing their expertise to provide access to the system and the stable IPython notebook platform needed to optimize NANOGrav’s pipelines.

A Python library was implemented in the pipelines to take advantage of the properties of sparse matrices, which are common in pulsar timing analysis; this improvement accelerated the analysis by at least a factor of two. Another linear algebra bottleneck involved repeated matrix multiplications, one per pulsar in the analysis, currently about 50 in all. OpenMP was implemented to further accelerate this step.
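Neither the library nor the kernels are named in the release; assuming scipy.sparse (an assumption on our part) and random stand-in matrices, the gain from sparse storage in this style of analysis can be sketched as follows:

    # Sketch of the sparse-matrix idea; scipy.sparse is an assumed choice (the
    # article only says "a Python library"), and the matrices are random
    # stand-ins, not real pulsar-timing data.
    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(1)

    def random_design_matrix(rows=5000, cols=500, density=0.01):
        # Mostly-zero matrix, as when each pulsar's model touches few parameters.
        return sparse.random(rows, cols, density=density, format="csr",
                             random_state=rng)

    # One matrix per pulsar; the article quotes roughly 50 pulsars in the array.
    designs = [random_design_matrix() for _ in range(50)]
    weights = rng.standard_normal(500)

    # The repeated per-pulsar products: a CSR matrix-vector product skips the
    # zeros, whereas a dense representation would multiply through all of them.
    products = [m @ weights for m in designs]

    # Sanity check against the dense computation for one pulsar.
    assert np.allclose(products[0], designs[0].toarray() @ weights)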

Instructors and code developers included NANOGrav members along with undergraduates, graduate students, and postdocs from the University of Illinois at Urbana-Champaign. The instructors were Adam Brazier (Cornell), Justin Ellis (JPL), Nate Garver-Daniels (WVU), Roland Haas (NCSA), Eliu Huerta (NCSA), and Steve Taylor (JPL); the code developers were Arian Azin (Illinois), Paul Baker (WVU), Patrick Dean Mullen (Illinois), Mark Dewing (Illinois), Daniel George (Dept. of Astronomy at Illinois and NCSA), Miguel Holgado, Michael Katolik, and Wei-Ting Liao (all with the Dept. of Astronomy at Illinois), and Kedar Phadke (Illinois).

Source: NCSA


TACC Programs Introduce Students From Underserved Communities to HPC

Wed, 12/07/2016 - 06:30

Dec. 7 — Joseph Molina, a first-generation Latino college student from the agricultural community of Salinas, California, knew very little about advanced computing before last summer.

Now, after participating in the Integrative Computational Education and Research Traineeship (ICERT) program at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (UT), he’s homed in on the field he would like to work in: developing mobile applications and bringing new ideas to life.

“The experience I had in the program is one that I will take with me for the rest of my life,” he says. “I not only expanded my knowledge in computer science but improved the overall confidence I have in my programming skills as a software engineer. It was a vital experience that will push me forward in life.”

Like much of the tech world, high performance computing does not fully reflect the diversity of the U.S. Consequently, the nation is in danger of missing out on vital talent from those not exposed to advanced computing as a possible career.

These facts led TACC — one of the world’s leading computing research centers — to redouble its efforts to broaden the diversity of students who learn about supercomputing.

Through a series of education and outreach programs hosted over the summer and continuing throughout the year, TACC is providing transformative learning experiences to dozens of students with limited computing experience. In doing so, it is creating a model of how to recruit, train and engage Latina, African American and women students in advanced computing.

The entire article can be found on the TACC website.

Source: Aaron Dubrow, TACC


DDN Enables 50TB/Day Trans-Pacific Data Transfer for Yahoo Japan

Tue, 12/06/2016 - 18:39

Transferring data from one data center to another in search of lower regional energy costs isn’t a new concept, but Yahoo Japan is putting the idea into practice on a trans-Pacific scale with a system that transfers 50 TB of data a day from Japan to the U.S., where electricity costs a quarter of what it does in Japan.

DataDirect Networks and IBM Japan have worked for about a year on a new active archive solution that allows Yahoo Japan to cache dozens of petabytes of data from its OpenStack Swift object storage system at a data center in Japan and then transfer and store it in a U.S.-based data center owned by YJ America, Inc., its American subsidiary.

According to Yahoo Japan, the Japanese affiliate of the American internet giant, the company moved to an active archive system configured on both sides of the Pacific Ocean because of rising data volumes, multi-petabyte data backup requirements, and disaster recovery measures it implemented after the Great East Japan Earthquake of 2011 – along with a desire to avoid Japanese energy costs; electricity in the U.S. costs 74 percent less than in Japan.

The active archive is built on DDN’s SFA7700X hybrid flash storage appliance and IBM Spectrum Scale Active File Management caching functionality for higher-speed I/O and metadata handling, and it can handle dozens of petabytes of data within a single file system configuration, according to DDN. The system allows the data center in Japan to cache data from the operating object storage private cloud at a rate of 11 TB/hour and back up the data to a data center archive in the United States, transferring data at what DDN called “a breakthrough transfer rate” of 50 TB per day while still allowing users in Japan to access data and run services.
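For perspective, the sustained rates those figures imply can be back-calculated, assuming decimal terabytes (the announcement does not specify the unit):

    # Back-of-the-envelope rates implied by the quoted figures (decimal
    # terabytes assumed; the announcement does not specify TB vs. TiB).
    TB = 10**12  # bytes

    wan_rate = 50 * TB / 86_400      # 50 TB per day across the Pacific, in bytes/s
    cache_rate = 11 * TB / 3_600     # 11 TB per hour into the Japan-side cache

    print(f"trans-Pacific transfer: {wan_rate / 1e6:,.0f} MB/s "
          f"(~{wan_rate * 8 / 1e9:.1f} Gbit/s sustained)")
    print(f"cache ingest:           {cache_rate / 1e6:,.0f} MB/s "
          f"(~{cache_rate * 8 / 1e9:.1f} Gbit/s sustained)")

That works out to roughly 4.6 Gbit/s sustained across the Pacific, against about 24 Gbit/s of local cache ingest.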

Laura Shepard, DDN senior director of marketing, said data transfers on this scale are seen at HPC research sites but are not common in business environments. “It’s the performance of the underlying storage system, to be able to actually service the requests at the rates required means very high bandwidths for a multi-region data transfer… There was quite a bit of work done to tune the infrastructure to achieve this level of performance for international data transfer.”

Daisuke Masaki, cloud innovations, site operations division, systems management group, Yahoo Japan, said the company was “grappling with a number of challenges related to our large, fast-paced data growth and vital disaster recovery needs; however, installing a massive storage system in a data center in Japan raises additional issues, such as power consumption. We therefore opted for a bold technical solution in which our data center in Japan caches data from the existing object storage (OpenStack Swift) and saves the data to a data center archive in the United States, which can be operated with 26 percent of the electricity cost of a data center in Japan and at about one-third of the cost of competitive solutions. Moving forward, we plan to expand and save data from multiple websites in Japan to the active archive system in the United States.”

