HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nor-Tech Announces Intel Video on HPC Cluster Simulation Demo Site

Tue, 06/20/2017 - 09:16

MINNEAPOLIS, June 20, 2017 — Nor-Tech just announced the production of a new Intel video about simulationclusters.com, a free simulation site made possible through a collaboration between Nor-Tech, Intel, and Dassault Systèmes.

The site, which offers real-time demonstration of the ROI of upgrading from a workstation to an HPC cluster, is now running Intel’s cutting-edge Xeon E5-2600 V4 processors. Benchmark tests with the new processors show dramatic performance increases and significant cost benefits.

Simulationclusters.com runs Abaqus FEA, a SIMULIA product developed by Dassault. It is a scalable suite of unified analysis products that allows all users to collaborate and share simulation data and approved methods without the loss of information fidelity.

Nor-Tech also maintains a cutting-edge demo cluster, geared toward CAE/CFD/FEA applications, that demonstrates the benefits of transitioning applications from workstations to HPC clusters and allows software testing. There is no cost to the user for either simulationclusters.com or the demo cluster.

Nor-Tech has been at the forefront of supercomputing innovation, delivering HPC solutions for more than a decade. The company offers a full range of HPC clusters, including standard 42U rackmount clusters, GPU clusters, visualization clusters, workgroup clusters, office clusters, portable clusters and entry-level clusters, all available with Intel’s new v4 processors.

Nor-Tech President and CEO David Bollig said, “Intel produced an excellent video that spells out the benefits of simulationclusters.com. We are very proud of this venture and the value it delivers. Moreover, we are very pleased about the success of this collaboration with Intel and Dassault Systèmes.”

The video can be viewed at http://www.nor-tech.com/.

About Nor-Tech

Nor-Tech is on CRN’s list of the top 40 Data Center Infrastructure Providers—joining ranks with IBM, Dell, Hewlett Packard Enterprise, and Lenovo. The company is renowned throughout the scientific, academic, and business communities for easy-to-deploy turnkey clusters and expert, no-wait-time support. All of Nor-Tech’s technology is made by Nor-Tech in Minnesota and supported by Nor-Tech around the world. In addition to HPC clusters, Nor-Tech’s custom technology includes workstations, desktops, and servers for a range of applications including CAE, CFD, and FEA. Nor-Tech engineers average 20+ years of experience and are responsible for significant high performance computing innovations. The company has been in business since 1998 and is headquartered in Burnsville, Minn., just outside of Minneapolis. To contact Nor-Tech, call 952-808-1000 (toll free: 877-808-1010) or visit http://www.nor-tech.com.

Source: Nor-Tech

ACRI-ST Extends Quantum Xcellis Storage to Manage 8 Petabytes of Research Data

Tue, 06/20/2017 - 09:10

SAN JOSE, Calif., June 19, 2017 — Quantum Corp. (NYSE: QTM) today announced that ACRI-ST, a French-based scientific research organization specializing in remote sensing and modelling of physical and environmental phenomena, has extended its Quantum scale-out storage environment to provide the foundation for a new 8 PB archive project. ACRI-ST, which deployed a Quantum StorNext-powered scale-out storage solution several years ago, has now built on that solution to support the European Space Agency’s Earth Observation Data Archiving Service (EODAS). EODAS archives satellite-based research data for past and current agency missions as well as third-party missions.

Enabling Cutting-Edge Scientific Research
ACRI-ST enables scientists, university researchers, government agencies and other organizations around the world to draw from more than 20 years of data. It also offers processing services to help streamline research. In 2014, ACRI-ST deployed a StorNext multi-tier storage solution to manage the massive increase in data resulting from ESA’s launch of the Sentinel-3 satellite. The solution has been a clear success, accelerating the ACRI-ST workflows and ensuring that archived data is readily accessible for in-house scientists and hundreds of external researchers.

“The mix of disk and tape in a single environment gives us the best of both worlds,” said Gilbert Barrot, CIO at ACRI-ST. “We can keep a cache of data immediately available on disk while storing most of the repository on more economical tape. Our in-house researchers can access the files they need on their own, even when data resides on tape, and they’ve been very happy with the response time.”

Expanding the Archive to Support Additional Needs
In 2016, ACRI-ST won a bid to support EODAS, which meant it had to expand its archive service to provide 8 PB of capacity ― 6 PB for historical data and 2 PB for live missions.

“ESA wanted a robust archive that could provide bulk data retrieval and distribution,” explained Barrot. “The Quantum archive solution was perfect. We added capacity easily and inexpensively while still maintaining a small data center footprint.”

ACRI-ST scaled its existing archive environment in France and built a nearly identical environment in Luxembourg through its subsidiary adwaïsEO (hosting data center in Betzdorf) that serves as the main archive site. Both the France and Luxembourg sites include a Quantum tape library and a StorNext-powered Xcellis workflow storage system which facilitates high-performance data ingest and retrieval while providing integrated protection and archive data management.

Once data is ingested in Luxembourg, backup copies of tapes are then shipped to France for offsite disaster recovery protection. In France there is also an active vault configuration to enhance data accessibility and durability for minimal cost. In addition, ACRI-ST uses Quantum Extended Data Life Management (EDLM) capabilities, which automatically detect potential problems with tape media and move data from the suspect tape to another one.

“EDLM helps us meet key EODAS requirements for data integrity,” said Barrot. “We can keep satellite data safe from loss ― even over long periods of time.”

Looking to the Cloud and Beyond
With Quantum’s scale-out tiered storage foundation, ACRI-ST can continue to pursue new projects, whether that requires further expanding capacity, implementing new capabilities, adding HPC storage or connecting with cloud-based systems.

“We have the flexibility to accommodate a wide variety of future requirements,” said Barrot. “We have the confidence that we can continue to win bids and deliver outstanding service with Quantum-based solutions.”

Quantum to Showcase Storage Solutions for HPC at ISC High Performance Conference
At the ISC High Performance 2017 conference in Frankfurt, Germany (June 18-22), Quantum is highlighting storage solutions for HPC environments, including the type of multi-tier storage deployed by ACRI-ST. In addition, Quantum is presenting at two sessions during the show:

  • Molly Presley, Vice President of Global Marketing, is discussing the latest developments in multi-tier storage for HPC workflows at the Vendor Showdown, Monday, June 19, 2:15 p.m. Panorama 2, Forum, Vendor Showdown 02
  • Jason Coari, Sales Specialist, Scale-out Storage, is presenting “Scalable High Performance Storage for Technical Computing” on Wednesday, June 21, 4:00 p.m., Booth #M-210

About Quantum

Quantum is a leading expert in scale-out tiered storage, archive and data protection, providing solutions for capturing, sharing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. Quantum’s end-to-end, tiered storage foundation enables customers to maximise the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.

Source: Quantum

Inspur Unveils GX4 AI Accelerating Box at ISC17

Tue, 06/20/2017 - 09:04

FRANKFURT, Germany, June 20, 2017 — Inspur unveiled the GX4, a new flexible and highly scalable AI accelerating box, at ISC 2017. The GX4 decouples CPU and coprocessor resources (GPU, Xeon Phi and FPGA), expands computing power on demand, and provides highly flexible support for various AI applications in GPU-accelerated computing. This is another innovative effort following the release of the ultra-high-density AI supercomputer AGX-2 last month at GTC 2017 in San Jose, California.

The GX4 makes it possible to decouple and restructure coprocessor and CPU computing resources. It enables coprocessors with different architectures, such as GPU, Xeon Phi and FPGA, to meet the needs of various AI application scenarios, such as AI cloud, deep-learning model training, and online inference. More importantly, the GX4 expands computational efficiency by connecting standard rack servers to GPU computing expansion modules, overcoming the obstacle that conventional GPU servers require changes to the entire system and motherboard design in order to change computing topologies. The GX4’s independent computing acceleration module design significantly increases system deployment flexibility, offers high expansion performance from 2 to 16 cards, and allows the topology to be changed simply by changing the connection between server and expansion module, better matching computing infrastructure to upper-level applications and achieving the best performance from AI computing clusters.

The GX4 overcomes the 8-GPU-card expansion limit of typical AI computing equipment and provides better stand-alone computing performance. Each GX4 supports four accelerating cards in a 2U form factor, and one head node can connect up to four GX4s, for a total of 16 accelerating cards in a single acceleration computing pool.

Jay Zhang, Vice President of Inspur Overseas Headquarters, stated that the GX4 addresses the major differences among AI deep-learning training models, using a flexible expansion method to support different scales of AI training while effectively lowering energy consumption and latency. The GX4 provides a flexible and innovative AI computing solution for companies and research organizations engaged in artificial intelligence across the world.

Inspur is dedicated to developing its intelligent computing business, which focuses on cloud computing, big data and deep learning, and which the company regards as its most important business direction for the next decade. In recent years, Inspur has become the largest AI computing platform provider in China. Inspur’s AI solutions hold a 60% market share in China and an 80% share among China’s big three internet companies, Baidu, Alibaba and Tencent, and are widely used in smart-voice, smart-image and other applications by companies such as iFlytek and Face++.

Source: Inspur

Dell EMC Highlights Momentum in Advancing HPC Community at ISC17

Tue, 06/20/2017 - 09:01

HOPKINTON, Mass. and FRANKFURT, Germany, June 20, 2017 — At ISC17, Dell EMC is announcing an agreement to further democratize and advance HPC, as well as additions to its Ready Solutions portfolio that will help customers further optimize their HPC projects. Working with leading researchers and innovators worldwide, Dell EMC HPC solutions empower customers to make critical advances in industries such as research, life sciences and manufacturing.

New Strategic Agreement and Industry Recognition Underscore HPC Leadership

Dell EMC and NVIDIA have expanded their collaboration by signing a new strategic agreement that includes joint development of new products and solutions addressing burgeoning workload and datacenter requirements, with GPU-accelerated solutions for HPC, data analytics, and artificial intelligence. Additionally, Dell EMC will work with NVIDIA to support the new Volta architecture and intends to launch Volta-based solutions by the end of the year.

Dell EMC delivered systems also continue to earn recognition as industry leaders, with an increasing number of systems in the TOP500 and Green500 lists. Published twice annually, the TOP500 list shows the 500 most powerful commercially available computer systems in the world. The Green500 list provides a ranking of the most energy-efficient supercomputers in the world. Multiple Dell EMC delivered systems appear on these lists, including:

  • The Cambridge Research Computing Service at the University of Cambridge is implementing a new $12M system aimed at addressing large-scale, high-I/O, data-intensive science cases. Wilkes-2, which appears at #100 on the TOP500 list and #5 on the Green500 list, is the largest GPU system in the UK and the highest-ranked Dell EMC delivered system on the Green500 list. Peta4-KNL, at #405 on the TOP500 list, is the largest KNL system in the UK and brings true petascale research computing within reach of all UK research groups and industries that wish to buy time on the system.
  • The recently-upgraded Stampede 2 system at Texas Advanced Computing Center (TACC) is now at #12 on the TOP500 list and #32 on the Green500 list. It is the highest-ranked Dell EMC delivered system on the TOP500 list. The upgraded system includes the phased deployment of a ~18 petaflop system based on Dell EMC PowerEdge servers including Intel® Xeon® Phi processors. Additionally, Stampede 2 is expected to double the peak performance, memory, storage capacity and bandwidth of Stampede 1. The last phase of the system will include integration of the upcoming 3D XPoint non volatile memory technology.
  • The Dell EMC HPC Innovation Lab, which appeared in the TOP500 list in November 2016 with 544 nodes with a mix of Intel Xeon and Intel Xeon Phi processors, now sits at #374 in the new TOP500 list and at #150 on the Green500 list. The lab also includes a 32-node Intel Xeon Scalable processor cluster with Intel Omni-path networking for testing and evaluation, as well as Dell C4130 servers with NVIDIA GPUs and Mellanox IB.

Dell EMC is also collaborating with CoolIT Systems to provide a factory-installed, direct-contact CPU liquid cooling solution that will address power and cooling challenges for a wide range of customers. The cold plate solution, designed and manufactured by CoolIT Systems, will be available in select Dell EMC PowerEdge 14th generation servers. It uses warm water to cool the CPUs, eliminating the need for chilled water and reducing cooling energy costs by up to 56 percent for data center infrastructure (cooling PUE). The solution also features increased rack density, enabling customers to deploy up to 23 percent more IT equipment.1

Top Researchers Continue to Choose Dell EMC HPC Solutions

The Jülich Supercomputing Centre is working with Dell EMC and Intel to deploy a supercomputer combination that is expected to be the first Cluster-Booster platform based on technology developed in the European Union (EU)’s DEEP and DEEP-ER research projects. This Cluster-Booster marks a step toward modular supercomputing, a new paradigm that directly reflects, in the architecture of the supercomputer, the diversity of execution characteristics found in modern simulation codes. Instead of a homogeneous design, the modular supercomputing paradigm enables optimal resource assignment so that more scientists and engineers can solve their highly complex problems through simulation. The Booster module will be a 5PF cluster based on Dell EMC PowerEdge C6320p servers, connected with Dell EMC Networking H-series fabric based on Intel Omni-Path® technology.

The NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center needed a system that combines HPC and virtualization technologies in a private cloud designed for large-scale data analytics. To meet this need and alleviate some of the strain on Discover, their continually evolving HPC system, the NCCS launched its Advanced Data Analytics Platform (ADAPT) onsite private cloud. A green initiative, ADAPT was built largely from decommissioned HPC components, including hundreds of Dell EMC PowerEdge servers that came out of the Discover supercomputer as that system evolved to bring in new technologies. Scientists interact with the ADAPT team to provision resources and launch ephemeral VMs for “as needed” processing. The data-centric, virtual system approach significantly lowers the barriers and risks to organizations that require on-demand access to HPC solutions.

Supporting Quotes
Ian Buck, vice president and general manager – Accelerated Computing Group, NVIDIA
“Increasingly, deep learning is a strategic imperative for every major technology company, permeating every aspect of work. Specifically, artificial intelligence is being driven by leaps in GPU computing power that defy the slowdown in Moore’s law. The work we are doing to advance GPU computing alongside Dell EMC will empower AI developers as they race to build new frameworks to tackle some of the greatest challenges of our time.”

Garrison R. Vaughn, NCCS systems engineer, ADAPT Team – NASA’s Goddard Space Flight Center
“Our leadership challenges us to do new, innovative things, which is where ADAPT came from. ADAPT enables researchers to uncover valuable information, which is why it is critical that the system performs well. The Dell servers, working as compute nodes inside ADAPT, have been real workhorses and have been reliable. That’s impressive, especially with the heat we run through the system.”

Paul Teich, principal analyst, TIRIAS Research
“Dell EMC is moving beyond its vertical HPC segment focus and strengthening its commitment to key HPC platform technologies. A new strategic agreement with NVIDIA for joint innovation in HPC, big data, and machine learning and collaborating with CoolIT Systems for factory-installed warm water cooling systems will accelerate Dell EMC’s mission to democratize HPC.”

Armughan Ahmad, senior vice president & general manager – Ready Solutions and Alliances, Dell EMC
“Dell EMC is proud of our work to help the research community and its customers capitalize on HPC, expanding it from a niche market to a broader audience, from departmental clusters to leadership-class systems. Our commitment to continuing to partner with strong leaders and push the envelope around HPC innovation is solid and growing, as evidenced by our market leadership, industry-leading products, and world-class customers and deployments.”

About Dell EMC

Dell EMC, a part of Dell Technologies, enables organizations to modernize, automate and transform their data center using industry-leading converged infrastructure, servers, storage and data protection technologies.  This provides a trusted foundation for businesses to transform IT, through the creation of a hybrid cloud, and transform their business through the creation of cloud-native applications and big data solutions.  Dell EMC services customers across 180 countries – including 98% of the Fortune 500 – with the industry’s most comprehensive and innovative portfolio from edge to core to cloud.

Source: Dell EMC

OpenACC Shows Growing Strength at ISC

Mon, 06/19/2017 - 18:30

OpenACC is strutting its stuff at ISC this year, touting expanding membership, a jump in downloads, favorable benchmarks across several architectures, new staff members, and new support from key HPC application providers such as ANSYS. It is also holding its third user group meeting at the conference, along with a number of other activities including a BoF. That seems like significant progress in its rivalry with OpenMP.

Parallel programming models, of course, have become de rigueur for getting the most from HPC systems, especially with the rise of manycore, GPU, and other heterogeneous architectures. OpenACC formed in 2011 to support parallel programming on accelerated systems. In its own words, OpenACC “is a directives-based programming approach to parallel computing designed for performance and portability on CPUs and GPUs for HPC.”
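
To make the directives approach concrete, here is a minimal illustrative sketch (not drawn from any code discussed in this article) of an OpenACC-annotated loop in C. With an OpenACC-capable compiler the loop can be offloaded to a GPU or parallelized across CPU cores; without one, the pragma is simply ignored and the code runs serially.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static float x[N], y[N];

        for (int i = 0; i < N; i++) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* One directive asks the compiler to parallelize (and, on GPU targets,
           offload) the loop; the data clauses describe what to copy to and
           from device memory. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++) {
            y[i] = y[i] + 2.0f * x[i];
        }

        printf("y[0] = %f\n", y[0]);   /* expect 4.000000 */
        return 0;
    }

Built with, for example, PGI’s pgcc -acc or GCC’s -fopenacc, the same source can target multicore CPUs or GPUs, which is the portability claim quoted above.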

There are now roughly 20 core members, Cray, AMD, Oak Ridge National Laboratory, and Indiana University among them. OpenACC reports downloads jumped 86 percent in the last six months, driven in part by a new free community release that also supports Microsoft Windows. Interestingly, support for Windows, a rarity in core HPC, was very important to ANSYS, according to Michael Wolfe, OpenACC technical lead and a PGI staff member. The current OpenACC version is 2.5, with 2.6 expected to be available for public comment in the next couple of months.

OpenACC has steadily expanded the number of platforms it supports. It is an impressive list, although notably absent from it is ARM. Before it ceased operations, PathScale supported ARM, and currently the GCC group (GNU Compiler Group) is working on OpenACC support for ARM. Leading compiler provider PGI, owned by NVIDIA, also has plans. “It’s no secret that our plan is to eventually support ARM, and we’ll be using the same mechanism we used to support Power, so the compiler part is relatively straightforward. It’s getting the numerical libraries in place [that’s challenging],” says Wolfe.

Significantly, OpenACC is reporting rough parity with OpenMP for application acceleration on a pair of Intel systems and an IBM Minsky when compared with a single-core Haswell baseline. (Reported system specs: dual Intel Haswell, a 2×16-core server with four K80s; dual Intel Broadwell, a 2×20-core server with eight P100s; IBM dual Minsky, Power8+ with NVLINK and four P100s; host systems for the GPUs not listed. The application was the AWE Hydrodynamics CloverLeaf mini-app.)

“You get almost no performance decrement on a multicore on the various systems,” notes Wolfe. OpenACC hasn’t yet benchmarked against Intel’s forthcoming Skylake. “We’re waiting on it. Obviously we need to re-optimize our code generator.”

Perhaps most telling, say OpenACC proponents, is the uptick in support from the HPC application community. In its ISC news release, OpenACC reported it now accelerates ANSYS Fluent (CFD), Gaussian (Quantum Chemistry) and VASP (Material Science), which are among the top 10 HPC applications, as well as selected ORNL Center for Accelerated Application Readiness (CAAR) codes to be run on the future CORAL supercomputer: GTC (Physics), XGC (Physics), LSDalton (Quantum Chemistry), ACME (CWO), and FLASH (Astrophysics).

“Early indications are that we can nearly match the performance of CUDA using OpenACC on GPUs. This will enable our domain scientists to work on a uniform GPU accelerated Fortran source code base,” says Martijn Marsman, Computational Materials Physics at the University of Vienna in the official press release.

“We’ve effectively used OpenACC for heterogeneous computing in ANSYS Fluent with impressive performance. We’re now applying this work to more of our models and new platforms,” says Sunil Sathe, lead software developer, ANSYS.

OpenACC also reports the recently upgraded CSCS Piz Daint supercomputer will be running five codes implemented with OpenACC in the near term: COSMO (CWO), ELEPHANT (Astrophysics), RAMSES (Astrophysics), ICON (CWO), ORB5 (Plasma Physics).

Two new OpenACC officers have been appointed:

  • Guido Juckeland

    Guido Juckeland is the new secretary for OpenACC. He founded the Computational Science Group at Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Germany. His research focuses on better usability and programmability for hardware accelerators and on application performance monitoring and optimization. He is also vice-chair of the SPEC High Performance Group (HPG) and an active member of the OpenACC technical committee.

  • Sunita Chandrasekaran

    Sunita Chandrasekaran is the new director of user adoption. Her mission is to grow the OpenACC organization and user community. She is currently an assistant professor at the University of Delaware. Her research interest spans HPC, parallel algorithms, programming models, compiler and runtime methodologies and reconfigurable computing. She was one of the recipients of the 2016 IEEE TCHPC Award for Excellence for Early Career Researchers in HPC.

Wolfe says the forthcoming 2.6 release is mostly a matter of tweaks. One substantive change in the works is a deep copy capability.

“Many of these programs have very complex data structures. If you think about supercomputing, you think about arrays, vectors, and matrices. [But] that’s so 1970s. Now these applications will have an array of structures, and each structure element has a subarray of its own. On today’s devices, in order to get the most performance on the GPU, you need to move the data onto the GPU memory, which is higher bandwidth and closer to the device,” says Wolfe.

“Deep copy doesn’t just copy the array but copies the array and all of its subarrays, and their subarrays in turn. There is a mechanism to support this today, but it is clunky [and] requires a lot of code. We are trying to automate that, but we are afraid we are going to get it wrong. So what we are doing now in the PGI compiler is working on a prototype before we standardize something in the specification,” says Wolfe.
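
For readers unfamiliar with the pattern Wolfe is describing, here is a minimal, hypothetical C sketch of today’s manual approach (the structure and function names are invented for illustration; this is not OpenACC’s or PGI’s code): the parent array of structures is copied first, then each member subarray is copied separately so that the device copy of each structure ends up pointing at device memory. Exact pointer-attachment behavior depends on the OpenACC version and compiler.

    #include <stdlib.h>

    /* A hypothetical nested data structure: an array of "columns", each
       holding a dynamically sized subarray of values. */
    typedef struct {
        int     n;
        double *vals;
    } column_t;

    /* Manual ("clunky") deep copy to the device: copy the parent array,
       then copy each subarray so its pointer refers to device memory. */
    void columns_to_device(column_t *cols, int ncols)
    {
        #pragma acc enter data copyin(cols[0:ncols])
        for (int i = 0; i < ncols; i++) {
            #pragma acc enter data copyin(cols[i].vals[0:cols[i].n])
        }
    }

    /* Matching teardown: bring the subarrays back, then release the parent. */
    void columns_from_device(column_t *cols, int ncols)
    {
        for (int i = 0; i < ncols; i++) {
            #pragma acc exit data copyout(cols[i].vals[0:cols[i].n])
        }
        #pragma acc exit data delete(cols[0:ncols])
    }

A true deep-copy facility of the kind discussed above would let a single directive on the parent array handle the subarrays automatically.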

GENCI to Boost Industrial Innovation with New Petascale Supercomputer

Mon, 06/19/2017 - 17:28

PARIS, June 19, 2017 — The French Ministry of Higher Education, Research and Innovation (Ministère de l’Enseignement Supérieur, de la Recherche et de l’Innovation) and GENCI, the French national agency in charge of funding and making available large-scale high performance computing facilities, hailed the signing of a contract with ATOS Bull to provide a new supercomputer that will further increase the scientific and economic competitiveness of both France and Europe, with an investment of 24 million euros.

Hosted at the CEA’s TGCC centre (Très Grand Centre de Calcul) in Bruyères-le-Châtel (Essonne), this supercomputer, operational at the beginning of 2018, will have a peak computational capacity of 9 petaflops, or 9 million billion operations per second, in its initial configuration (the equivalent of more than 75,000 office PCs), multiplying the available computing power by a factor of 4.5 over the existing Bull Curie system at the TGCC. Alongside its computing capacity, the machine will also be able to manage massive amounts of data thanks to a data read/write capacity in excess of 500 GB/s. The capacity of the supercomputer is then scheduled to be increased to 20 petaflops in 2019.

The use of numerical simulation and high-performance computing has become a vital instrument in fundamental and applied research and in a growing number of industrial sectors, as well as a decision-making tool for the public sector.

For scientists in every field, it offers an essential modelling tool, complementary to both theory and experimentation, involving the fast generation and processing of massive volumes of complex data. This state-of-the-art equipment for extreme simulations will make it possible to advance research and applications in fields as varied as climatology, combustion, new energies, astrophysics, medicine and biology, plasma physics, materials science and the humanities and social sciences, as well as artificial intelligence and deep learning.

And for business, in particular SMEs and start-ups, numerical simulation makes it possible to optimise the performance of technologies and processes and helps create the innovations of tomorrow in, for example, the aeronautic, automotive and energy sectors.

In the context of the digital revolution, and with countries such as the USA, China and Japan making huge investments in the strategic areas of HPC and big data, this new, extremely powerful and energy-efficient supercomputer will also become the French high-level contribution to the PRACE (Partnership for Advanced Computing in Europe) European research infrastructure, to which GENCI is strongly committed.

This much-anticipated purchase represents a further step up in investment in research infrastructure. It follows the full-capacity commissioning of the Occigen supercomputer at the CINES centre in Montpellier at the beginning of 2017, and will be followed between now and 2019 by an investment in a new computer to be hosted at CNRS’s IDRIS centre on the Paris-Saclay plateau.

GENCI also welcomes the fact that this highly competitive tender was won by a European computer manufacturer, ATOS Bull, thus contributing to the creation of jobs and to recognition of the excellence of France’s scientific and technological expertise in this high added-value sector.

About GENCI

Since its creation in 2007 by the public authorities, GENCI has worked to increase the use of numerical simulation and high performance computing (HPC) to boost competitiveness within the French economy, across all scientific and industrial fields. GENCI’s role is:

  • to implement the national strategy for equipping the three national computing centres with HPC resources and to make those systems available to French researchers
  • to support the creation of an integrated European high performance computing ecosystem
  • to work to promote numerical simulation and high performance computing within the academic and industrial communities.

GENCI is a “civil company” (société civile) under French law, owned 49% by the State, represented by the ministère en charge de l’Enseignement supérieur et de la recherche; 20% by the CEA; 20% by the CNRS; 10% by the universities, represented by the Conférence des présidents d’Université; and 1% by Inria.

Source: GENCI

Hyperion Research Announces HPC Innovation Excellence Award Winners

Mon, 06/19/2017 - 17:19

FRANKFURT, Germany, June 19, 2017 — Hyperion Research (the former IDC HPC team) today announced the newest recipients of the HPC Innovation Excellence Award at ISC High Performance 2017, the major supercomputing conference being held June 18-June 22, in Frankfurt, Germany.

The awards for outstanding achievements enabled with high performance computing (HPC) are given twice a year, in conjunction with the June ISC conference in Germany and the November SC supercomputing conference held in the U.S. Details about the winners are below.

The program’s main goals are to showcase return on investment (ROI) and success stories involving HPC; to help other users better understand the benefits of adopting HPC; and to help justify HPC investments, including for small and medium-size enterprises (SMEs).

“While there are multiple benchmarks to measure the performance of high performance computers, there hasn’t been an adequate methodology to evaluate the economic and scientific value HPC systems contribute,” said Earl Joseph, Hyperion Research’s CEO and executive director of the HPC User Forum. “These awards are designed to help close that gap by identifying HPC’s impact on increasing economic value, advancing scientific innovation and engineering progress, and, above all, improving the quality of life worldwide.”

Award Categories

There are three awards categories, two of them brand new for June 2017:

  • HPC User Innovation Award. All awards since the program began in 2011 have been for HPC users. Primary judges for these awards are members of the HPC User Forum Steering Committee, an international volunteer group of HPC experts from government, academic and industrial organizations.
  • HPC Data Center Innovation Award. This new category recognizes innovations, created and put into practice by HPC data center staff members that improve data center operations and/or user productivity.
  • HPC Vendor Innovation Award. Also a new category, these awards are counterparts to the data center awards. They recognize innovations, created and put into practice by HPC vendors that improve data center operations and/or user productivity.

HPC User Innovation Award

  • ArcticDEM Project: Responding to Climate Change (National Center for Supercomputing Applications, National Geospatial-Intelligence Agency, Ohio State University, PGC, University of Colorado, Boulder, University of Minnesota). This project responds to the need for high-quality elevation data in remote locations, the availability of technology to process big data, and the need for accurate measurement of topographic change. The data supports predictions of sea level rise and coastal erosion, as well as national security, civil engineering, and aircraft safety, along with many other scientific, governmental and commercial applications. MORE…
  • BP Seismic Imaging Research. BP’s Seismic Imaging Research has delivered major breakthroughs, critical in identifying over one billion additional barrels of reserves at its Gulf of Mexico offshore fields. With HPC, BP is able to test ideas quickly and scale to deliver results. MORE…
  • Celeste Project: A New Model for Cataloging the Universe (Lawrence Berkeley National Laboratory). A Berkeley Lab-based research collaboration of astrophysicists, statisticians, and computer scientists is looking to shake things up with Celeste, a new statistical analysis model designed to enhance one of modern astronomy’s most time-tested tools: Sky surveys. MORE…
  • Solving Mysteries of Electrolytes in Batteries (National Institute of Material Science, Center for Green Research on Energy and Environmental Materials or GREEN). Japanese scientists have synthesized two crystal materials that show great promise as solid electrolytes. All-solid-state batteries built using the solid electrolytes exhibit excellent properties, including high power and high energy densities, and could be used in long-distance electric vehicles. MORE…
  • Turning the Famous Maya the Bee Character into a 3-D Film (Studio 100 and M.A.R.K.13). The task required calculating each of the CGI-stereoscopic films’ 150,000 images twice – once for the perspective of the left, and once for the right eye. Given the detail-rich nature of the Maya the Bee film, the group averaged two hours per image on a single node — blazing fast in animation terms. Such times couldn’t have been achieved on a standard PC. MORE…

HPC Data Center Innovation Award

  • NASA Modular Super Computing Facility Saves Water, Power, Money (NASA). This innovative concept, launched in January 2017, centers around an SGI/HPE supercomputer nicknamed Electra, which combines outdoor air and evaporative cooling to reduce annual water use by 99% and enable a PUE of 1.03. An imminent 28-fold system expansion is expected to save NASA about $35 million per year over alternative strategies. MORE…

HPC Vendor Innovation Award

  • Tesla V100: Tackling Once Impossible Challenges (NVIDIA): NVIDIA’s Tesla V100 substantially advances the firm’s chip density (21 billion transistors in an 815mm2 chip) and is engineered to excel at AI and HPC. With 640 Tensor Cores, the V100 boasts 120TF of performance on deep learning applications. MORE…
  • DOME MicroDataCenter (IBM): This innovation from IBM’s Zurich Research Lab integrates compute, storage, networking, power and cooling in a package that’s up to 20 times denser than today’s typical data center technology. DOME MicroDataCenter has no moving parts, makes no noise, and is small enough for deployment on factory floors, in doctors’ offices, and other novel HPC environments. MORE…
  • Bright Computing/Microsoft Azure Integration: Function-rich, Easy-to-learn (Bright/Microsoft). Smoothly integrating Bright’s function-rich, easy-to-learn management software into Microsoft’s important Azure public cloud service sets the stage for running a larger spectrum of HPC workloads in a public cloud environment—including support for InfiniBand, heterogeneous CPU-accelerator workloads, and more.

About Hyperion Research

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. As Hyperion Research, the team continues all the worldwide activities that have made it the world’s most respected HPC industry analyst group for more than 25 years, including HPC and HPDA market sizing and tracking, subscription services, customer studies and papers, and operating the HPC User Forum. Hyperion helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. For more information, see http://www.hpcuserforum.com.

Source: Hyperion Research

Eight Georgia Tech Schools Partner for Advanced Degree in Machine Learning

Mon, 06/19/2017 - 11:17

ATLANTA, June 19, 2017 — The Georgia Institute of Technology has been approved to offer a new advanced degree program for the emerging field of machine learning.

In a unanimous vote, the Board of Regents of the University System of Georgia approved Georgia Tech’s request to establish a Doctor of Philosophy in Machine Learning.

“The field of machine learning is now ubiquitous in everything we do. It impacts everything from robotics and cybersecurity to data analytics – all topics of extraordinary interest to Georgia Tech,” said Rafael L. Bras, Georgia Tech provost and executive vice president for Academic Affairs and the K. Harrison Brown Family Chair.

“This new Ph.D. program embraces the interdisciplinary impact and nature of machine learning and serves to strengthen Georgia Tech’s strong position as a leading center of knowledge and expertise in this increasingly important field of study.”

A collaborative approach

The machine learning (ML) Ph.D. program is a collaborative venture between the colleges of Computing, Engineering, and Sciences. An inaugural class of approximately 15 students is scheduled to convene for the Fall 2017 semester. The class is expected to comprise incoming Ph.D. students and some who may have recently begun other programs at Georgia Tech.

Qualified students can apply to the program through one of eight participating schools at Georgia Tech. These include the schools of Computational Science and Engineering, Computer Science, and Interactive Computing in the College of Computing.

Participating schools in the College of Engineering include the School of Electrical and Computer Engineering, the Stewart School of Industrial and Systems Engineering, the Coulter Department of Biomedical Engineering, and the Guggenheim School of Aerospace Engineering.

Students can also apply for the ML Ph.D. program through the School of Mathematics in the College of Sciences.

“The ML Ph.D. degree program is ideal for students from a variety of academic backgrounds interested in multidisciplinary collaboration,” said Justin Romberg, the Schlumberger Professor in the School of Electrical and Computer Engineering and program curriculum coordinator for the ML Ph.D. program.

“Students will learn to integrate and apply principles from computing, statistics, optimization, engineering, mathematics, and science to innovate and create machine learning models and then apply them to answer important, real-world, data-intensive questions.”

ML@GT

Although students apply to the program through one of eight schools, the hub for the new ML Ph.D. degree is the Center for Machine Learning at Georgia Tech (ML@GT).

Opened in July 2016 as the home for machine learning at Georgia Tech, ML@GT has more than 100 affiliated faculty members from five Georgia Tech colleges and the Georgia Tech Research Institute, as well as some jointly affiliated with Emory University.

“While there are many faculty members doing machine learning research at Georgia Tech, until now there has been a lack of a structured and systematic interdisciplinary ML training program for students,” said College of Computing Professor and ML@GT Director Irfan Essa.

“Once accepted to the program, students become members of the ML@GT community, where they will be able to develop a solid understanding of fundamental principles across a range of core areas in the machine learning discipline.”

The operations and curricular requirements for the new Ph.D. program – which include five core and five elective courses, a qualifying exam, and a doctoral dissertation defense – will be managed by ML@GT.

The five core courses in the ML Ph.D. degree program are:

  • Mathematical Foundations of Machine Learning
  • Intermediate Statistics
  • Machine Learning: Theory and Methods
  • Probabilistic Graphical Models and Machine Learning in High Dimensions
  • Optimization

“Our goal is to have students develop a deep understanding and expertise in a specific theoretical aspect or application area of the machine learning discipline,” said Romberg.

“The students will be able to apply and integrate the knowledge and skills they have developed and demonstrate their expertise and proficiency in an application area of practical importance.”

After successfully completing all of the curricular requirements, students will have the computational skills and mathematical modeling skills needed for careers in industry, government, or academia.

“Machine learning is helping industries – from aerospace and biomedicine to cybersecurity and financial services – make sense of data to improve business processes and identify previously hidden connections that benefit their businesses and their customers,” said Essa.

“Beyond this, machine learning is fueling a rapid development of stronger, more robust artificial intelligence applications, like natural language processing, that may help to solve many of the world’s more complex and longstanding problems.”

Source: Georgia Tech

ANSYS, Synopsys to Partner for Design Optimization

Mon, 06/19/2017 - 09:53

PITTSBURGH and MOUNTAIN VIEW, Calif., June 19, 2017 — ANSYS (NASDAQ: ANSS) and Synopsys (NASDAQ: SNPS) will enable customers to accelerate the next generation of high-performance computing, mobile and automotive products thanks to a new partnership that will tightly integrate ANSYS’ power integrity and reliability signoff technologies with Synopsys’ physical implementation solution for in-design usage.

Developers of innovative, cost-effective and reliable smart products need to quickly optimize, validate and signoff their designs. While designers have been using ANSYS and Synopsys tools in combination for years, the integrated solution will enable mutual customers to apply power integrity and reliability signoff technologies earlier in the design flow – empowering them to deliver innovative, high-performance and reliable products faster, while reducing power, area and cost.

The integration of ANSYS’ industry-leading platform for chip power and reliability signoff, ANSYS RedHawk, with Synopsys’ best-in-class place-and-route solutions, Synopsys IC Compiler II, will provide users earlier signoff accuracy within the Synopsys design environment. This integration will enable rapid design exploration, design weakness detection, optimization and thermal-aware reliability through increased functionality within the place-and-route environment. The in-design power integrity and reliability signoff-driven flow will eliminate late design changes and ensure consistency with final chip-package-system signoff analyses with RedHawk.

“This partnership is a continued step in Synopsys’ strategy to bring more physical and signoff technologies earlier in the design flow within our Synopsys Digital Design Platform,” said Sassine Ghazi, senior vice president and co-general manager, Design Group at Synopsys. “Partnering with ANSYS enables Synopsys to quickly deliver a reliability and thermal-driven design flow that is critical for designing the next generation of semiconductors.”

Synopsys and ANSYS will also provide a feedback loop between the two gold-standard solutions, Synopsys PrimeTime and ANSYS RedHawk. Voltage-aware timing analysis can be performed rapidly to avoid additional guard-banding and design margining.

“As the industry moves to more and more complex chips, signoff-driven rail analysis needs to be available sooner in the physical design flow just like timing and design rule checking,” said John Lee, general manager at ANSYS. “We believe partnering with Synopsys to bring our signoff technology into the Synopsys In-Design approach is the right way to accomplish this objective.”

“TSMC collaborates with our EDA partners on silicon design solutions to enable our customers to achieve competitive performance, power and area for their next generation electronic products,” said Suk Lee, TSMC senior director, Design Infrastructure Marketing Division. “This industry collaboration between Synopsys and ANSYS provides an opportunity for them to take the collaboration a step further by enabling reliability and thermal-driven physical design built on the industry’s popular physical implementation and signoff solutions.”

“ARM has been a long-time user of both Synopsys and ANSYS technologies, which have helped in the development of some of the most sophisticated CPU cores available in the market,” said Hobson Bullman, vice president and general manager, TSG, ARM. “This announced partnership will enable our semiconductor partners to optimize our IP within their SoC designs earlier in the flow allowing more time to focus on reliable, robust and energy efficient designs.”

“Both Synopsys and ANSYS have been strong collaboration partners with MediaTek to manage increasing manufacturing complexity and to deliver designs on schedule while realizing aggressive performance, power and area goals,” said SA Hwang, general manager of Design Technology, MediaTek. “We believe this new partnership between Synopsys and ANSYS will enable MediaTek engineers to accelerate their pace of innovation while achieving further power, performance and area optimizations.”

ANSYS and Synopsys will be featured at the Design Automation Conference in booth 647 and booth 147 respectively, from June 18-22 in Austin, Texas.

About ANSYS, Inc.

If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and create products limited only by imagination.  Founded in 1970, ANSYS employs thousands of professionals, many of whom are expert M.S. and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Headquartered south of Pittsburgh, Pennsylvania, U.S.A., ANSYS has more than 75 strategic sales locations throughout the world with a network of channel partners in 40+ countries. Visit www.ansys.com for more information.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. As the world’s 15th largest software company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and is also growing its leadership in software security and quality solutions. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing applications that require the highest security and quality, Synopsys has the solutions needed to deliver innovative, high-quality, secure products. Learn more at www.synopsys.com.

Source: ANSYS

Supermicro Introduces Workload Optimized HPC Solutions at ISC17

Mon, 06/19/2017 - 09:47

FRANKFURT, Germany, June 19, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI) invites you to see our latest X11 generation High-Performance Computing (HPC) solutions at the International Supercomputing Conference being held from June 18 to 22 at Messe Frankfurt, Tor Ost (East Gate), Hall 3, Booth D-1130.

Broad HPC Solution Offerings
Supermicro will feature multiple new HPC solutions that are workload-optimized for oil/gas modeling, computational fluid dynamics, large-scale data analytics and artificial intelligence applications. These systems deliver optimized performance per watt per dollar. The new X11 generation systems, based on the upcoming Intel Xeon Processor Scalable Family (codenamed Skylake), include Ultra Servers with all-flash NVMe, offering maximum-IOPS high-performance storage. The new X11 BigTwin doubles the density of standard systems and delivers faster compute and more memory for HPC workloads. The portfolio of SuperServers based on NVIDIA GPUs and Intel coprocessors provides advanced engineered architectures to optimize deep learning and artificial intelligence solutions.

“Our HPC offerings provide compelling opportunities to maximize both performance and cost savings. At ISC, we are featuring a range of high-performance systems that support maximum memory, performance-maximized all-flash NVMe 2U 24-drive systems, and new GPU-based systems for deep learning,” said Charles Liang, President and CEO of Supermicro. “For maximum density, our MicroBlade and SuperBlade systems offer cost-optimized compute density of 0.14U to 0.4U per dual-processor server with a 23 percent per-node power savings.”

Exhibit Highlights
Supermicro will feature a broad range of new HPC systems as well as other general-purpose and application-specific building blocks:

  • The 2U X11 BigTwin (SYS-2029BT-HNR) provides blazing-fast, high-density compute infrastructure. This 2U 4-node chassis supports dual processors, 24 memory DIMMs, 6 all-flash NVMe drives and 3 PCI-E 3.0 expansion slots per node. BigTwin supports maximum system performance and efficiency by delivering 30% better thermal capacity.
  • The 8U X11, SuperBlade is a dense solution for HPC applications including research and data analytics. The blade solution supports up to 20x 2-socket servers or 10x 4-socket servers in a scalable form factor. The blades support the highest performance processors. The 100G EDR InfiniBand or 100G Intel Omni-Path switching support low latency applications. The blades include hot-swap NVMe drives and multiple low cost M.2 storage options. The enclosure has optional Battery Backup Power (BBP) modules replacing high cost datacenter UPS systems for reliability and data protection.
  • The MicroBlade is a 3U/6U enclosure based solution in a Blade form factor that supports up to 14/28 hot-swap high-performance server blades. The MicroBlade enables high-performance workloads with industry-leading density, power efficiency and a wide range of available processor choices. HPC customers looking for a highly scalable architecture and compute intensive applications will benefit from the MicroBlade. A MicroBlade centric datacenter installation running compute-intensive applications for semiconductor design resulted in an industry-leading PUE of 1.06 delivering the highest energy efficiency standards.
  • The 1U, SuperServer (SYS-1028GQ-TRT), is a GPU Server with dual Intel Xeon CPUs and features four P100 Pascal GPUs for Machine Learning, Deep Learning and Scale Out applications.
  • The 4U SuperServer (SYS-4028GR-TR2) is an 8-GPU server that provides higher GPU density and performance. With dual Intel Xeon CPUs, it features eight P100 Pascal GPUs under a single-root complex with support for RDMA, generating massively parallel processing power and unrivaled GPU peering for machine learning and for scale-out deep learning and artificial intelligence applications.
  • The 2U X11, Ultra Server (SYS-2029U-TN24R4T+) is designed for high-performance storage and database applications with storage capacity for up to 24 NVMe drives.
  • The 2U, Ultra Server (SYS-2028U-TR4+) provides NVMe over fabric using Intel Optane for high bandwidth data transfers and high-performance storage applications. With support for up to 24 hot-swap drive bays, it will be shown with the new Intel Optane SSD DC P4800X as well as Intel Omni-Path Architecture (OPA).
  • The 1U, Ultra Server (SYS-1028U-TN10RT+) is designed to enable high- performance storage applications and virtualized environments with support for up to 10 U.2 NVMe solid state drives.

About Super Micro Computer, Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly, solutions available on the market. For more information, please visit, http://www.supermicro.com.

Source: Supermicro

The post Supermicro Introduces Workload Optimized HPC Solutions at ISC17 appeared first on HPCwire.

IDC and Gartner Q1 Server Shipment Reports Show that Chinese Server Market is Booming, but the Global Server Market is in a Slump

Mon, 06/19/2017 - 09:41

Both IDC and Gartner recently released their 2017 Q1 Worldwide X86 Server Market Reports. The data indicates that the global server market is still facing a demand slump: according to IDC and Gartner, worldwide server revenue in the first quarter of this year dropped 4.6% and 4.5%, respectively. The worldwide top five server vendors by shipments are Dell EMC, HPE, Lenovo, Huawei and Inspur.

IDC 2017 Q1 Worldwide X86 Server Revenue Ranking | Gartner 2017 Q1 Worldwide X86 Server Revenue Ranking

IDC 2017 Q1 Worldwide X86 Server Shipments Ranking | Gartner 2017 Q1 Worldwide X86 Server Shipments Ranking

China is the only growing market among all the regions. According to IDC, China’s X86 server revenue in the first quarter was $1.81 billion and shipments were 493,000 units, a year-over-year increase of 5.77% and 3.16%, respectively.

IDC 2017 Q1 China X86 Server Revenue Ranking

According to Gartner, China’s X86 server revenue in the first quarter was $2.315 billion and shipments were 579,000 units, a year-over-year increase of 9% and 4.7%, respectively.

The two reports show the same result for the top three server vendors in China: Inspur tops the market, followed by Huawei and Dell EMC.

Gartner 2017 Q1 China X86 Server Shipments Ranking

The continuous growth of the Chinese market has been fueled by the rapid development of public cloud. As the internet becomes the largest and fastest-growing segment, purchase orders from China’s large internet players like Baidu, Alibaba, Tencent and Qihoo 360 have also increased. Local server vendors in China like Inspur have been the driving force of market growth and play a dominant role in the internet operations market. Gartner analysts believe that Inspur will have a significant influence on the development of China’s large-scale data centers. Inspur hopes to apply the client-centered, cost-effective business model that has succeeded in China’s large-scale data centers to other regions.

AI applications, such as AI model training, are commonly adopted by many cloud service providers, and the emergence of larger-scale neural networks will further fuel the growth of AI computing products. Inspur has become the leading provider of AI computing platforms in China, supplying co-processing acceleration servers based on GPUs, FPGAs and KNL, along with Caffe-MPI software and algorithm optimization, to Baidu, Alibaba, Tencent, Qihoo, Face++ and other Chinese artificial intelligence enterprises.

The post IDC and Gartner Q1 Server Shipment Reports Show that Chinese Server Market is Booming, but the Global Server Market is in a Slump appeared first on HPCwire.

NASA Ames Research Center Selects Mellanox InfiniBand for New Scalable Supercomputer

Mon, 06/19/2017 - 09:39

SUNNYVALE, Calif. & YOKNEAM, Israel, June 19, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, announced today that NASA Ames Research Center has selected Mellanox EDR InfiniBand solutions and the new HPE SGI 8600 liquid-cooled platform to expand its “Electra” supercomputing cluster with next-generation interconnect and processor technology. The system expansion will leverage Mellanox In-Network Computing technology, which enables smart offloading, to achieve the highest levels of application performance and efficiency. The system will utilize the HPE Enhanced Hypercube system topology, which today is the backbone of the NASA Ames “Pleiades” supercomputer. The addition to the Electra supercomputer will be installed in the second half of 2017 and is expected to continue to scale in the future.

“We have been working with NASA for many years to provide the key interconnect solutions for NASA supercomputing platforms,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “By leveraging 100G EDR InfiniBand and the multiple In-Network Computing engines, such as MPI offloads, RDMA and more, NASA will be able to maximize their data center return on investment.”

“HPE is excited to have partnered with Mellanox to deploy NASA Ames’s next-generation HPC cluster based on HPE’s new SGI 8600 liquid-cooled platform with the Mellanox ConnectX-5 interconnect,” said Craig Yamasaki, director of product management, High Performance Computing and AI at HPE. “HPE has coupled its Hypercube topology with the new ConnectX-5 interface to enable system expansion without the use of external switches while delivering cost savings and performance improvements.”

Mellanox’s intelligent In-Network Computing capabilities, incorporated in ConnectX-5 InfiniBand adapters and Switch-IB2 InfiniBand switches, enable advanced data processing and real-time analytics, which result in world-leading application performance and scalability.
For more information, please refer to www.mellanox.com.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox’s intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at www.mellanox.com.

Source: Mellanox

The post NASA Ames Research Center Selects Mellanox InfiniBand for New Scalable Supercomputer appeared first on HPCwire.

Julia Computing Raises $4.6M in Seed Funding

Mon, 06/19/2017 - 09:28

Berkeley, Calif., June 19, 2017 – Julia Computing is pleased to announce seed funding of $4.6M from investors General Catalyst and Founder Collective.

Julia Computing CEO Viral Shah says, “We selected General Catalyst and Founder Collective as our initial investors because of their success backing entrepreneurs with business models based on open source software. This investment helps us accelerate product development and continue delivering outstanding support to our customers, while the entire Julia community benefits from Julia Computing’s contributions to the Julia open source programming language.”

The General Catalyst team was led by Donald Fischer, who was an early product manager for Red Hat Enterprise Linux, and the Founder Collective team was led by David Frankel.

Julia is the fastest modern high performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence. Julia combines the functionality and ease of use of Python, R, Matlab, SAS and Stata with the speed of C++ and Java. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. With more than 1 million downloads and +161% annual growth, Julia is one of the top 10 programming languages developed on GitHub and adoption is growing rapidly in finance, insurance, energy, robotics, genomics, aerospace and many other fields.

According to Tim Thornham, Director of Financial Solutions Modeling at Aviva, Britain’s second-largest insurer, “Solvency II compliant models in Julia are 1,000x faster than Algorithmics, use 93% fewer lines of code and took one-tenth the time to implement.”

Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Comcast, Disney, Facebook, Ford, Google, Grindr, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC, Raytheon and Uber. The company points to several strengths behind this adoption:

  1. Julia is lightning fast. Julia provides speed improvements up to 1,000x for insurance model estimation, 225x for parallel supercomputing image analysis and 10x for macroeconomic modeling.
  2. Julia provides unlimited scalability. Julia applications can be deployed on large clusters with a click of a button and can run parallel and distributed computing quickly and easily on tens of thousands of nodes.
  3. Julia is easy to learn. Julia’s flexible syntax is familiar and comfortable for users of Python, R and Matlab.
  4. Julia integrates well with existing code and platforms. Users of C, C++, Python, R and other languages can easily integrate their existing code into Julia.
  5. Elegant code. Julia was built from the ground up for mathematical, scientific and statistical computing. It has advanced libraries that make programming simple and fast and dramatically reduce the number of lines of code required – in some cases, by 90% or more.
  6. Julia solves the two language problem. Because Julia combines the ease of use and familiar syntax of Python, R and Matlab with the speed of C, C++ or Java, programmers no longer need to estimate models in one language and reproduce them in a faster production language. This saves time and reduces error and cost.

Julia Computing was founded in 2015 by the creators of the open source Julia language to develop products and provide support for businesses and researchers who use Julia. Julia Computing’s founders are Viral Shah, Alan Edelman, Jeff Bezanson, Stefan Karpinski, Keno Fischer and Deepak Vinchhi.

 

Source: Julia Computing

The post Julia Computing Raises $4.6M in Seed Funding appeared first on HPCwire.

Breaking: 49th Top500 List Announced at ISC

Mon, 06/19/2017 - 01:07

Greetings from Frankfurt and the 2017 International Supercomputing Conference where the latest Top500 list has just been revealed. Although there were no major shakeups — China still has the top two spots locked with the 93-petaflops TaihuLight and the 33.8-petaflops Tianhe-2 — there are some interesting historical and global trends to share, as well as notable Green500 results.

In the top ten strata of the 49th Top500 list, the names are the same but Piz Daint, the Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS), has moved up five positions from number eight to number three. The punched-up processing came from replacing older Tesla gear with Nvidia Tesla P100 GPUs (see coverage here for more details), doubling the previous Linpack score of 9.8 petaflops to 19.6 petaflops. The Intel processors were also upgraded, from Sandy Bridge to Haswell architecture.

Piz Daint’s rise has pushed the 17.6-petaflops U.S. Titan supercomputer down to fourth position, leaving the United States without a claim to any of the top three rankings. As the Top500 authors observe in today’s announcement, the only other time this has happened was in November 1996, when Japan dominated all three top spots.

For reference, the new top 10 rankings are reproduced below:

Source: Top500

This minimal list reshuffling led Top500 watcher and market analyst Addison Snell to comment, “With no changes in the Top 10 systems other than the Piz Daint upgrade, it may look like things aren’t moving forward, but this is the lull before a spate of new supercomputers that could hit the next list, particularly the two CORAL pre-exascale systems at U.S. national labs and the possibility of the Chinese Tianhe-2A upgrade.

“Some of the more interesting trends occur over the rest of the list population,” the CEO of Intersect360 Research continued. “For example, the number of manycore systems continues to rise, whether as accelerators or co-processors, which are mostly Nvidia GPUs, or with the Intel Xeon Phi as a standalone processor. This is driving related improvements in power efficiency, which is necessary in the run-up to exascale. It’s also notable to see Intel Omni-Path adoption continuing on the list. We are monitoring this in our end-user surveys to see how much penetration Omni-Path might have versus Ethernet and InfiniBand.”

When asked if he thought the CORAL systems, Summit and Sierra, would be ready by this time next year, Snell said he wouldn’t be surprised if they come sooner than that. So SC17? We caught Snell in between flights but we’ll be asking him more about his thinking here during ISC. The U.S. has announced an accelerated exascale timeline (see our latest U.S. exascale coverage here) and promised additional monies to fund it, so a quickening for “pre-exascale” here makes sense if partners IBM, Nvidia and Mellanox can accommodate.

Then, as Snell also noted, there is still the matter of the Tianhe-2A system. The Tianhe-2 upgrade, which was to go forward with Feiteng processors after a U.S. embargo derailed the Knights Landing refresh, has not yet materialized. Signs now point to Tianhe-2A being NUDT’s exascale prototype, one of three exascale contenders in China (along with the Sugon and Wuxi Supercomputing Center efforts). It is speculated that the next Tianhe will employ the Feiteng FT-2000/64 that Phytium Technologies introduced at the 2016 Hotchips conference. The FT-2000/64 is a 64-core ARM processor with a stated 512 gigaflops peak performance at a frequency of 2.0 GHz in a 100 watt power envelope (max).
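
As a quick back-of-the-envelope check on those published figures (a sketch assuming the quoted 512 gigaflops peak refers to double-precision arithmetic, which the article does not specify), the stated numbers imply about 4 flops per cycle per core and roughly 5 gigaflops per watt at peak:

    # Back-of-the-envelope check of the published FT-2000/64 figures.
    # Assumes the stated 512 gigaflops peak is double precision (not confirmed above).
    cores = 64
    freq_ghz = 2.0          # stated clock frequency
    peak_gflops = 512.0     # stated peak performance
    max_watts = 100.0       # stated maximum power envelope

    flops_per_cycle_per_core = peak_gflops / (cores * freq_ghz)
    peak_gflops_per_watt = peak_gflops / max_watts

    print(flops_per_cycle_per_core)  # 4.0 flops per cycle per core
    print(peak_gflops_per_watt)      # 5.12 gigaflops per watt at peak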

Splitting the Top500 Pie

While the U.S. has lost supremacy at the peak, it counts five systems within the top ten, still more than any other country. The U.S. leads total system share as well with 169 machines. China is a close second with 160. Recall the U.S. and China were tied with 171 systems each six months ago, but other countries have assumed some of that share, notably Japan and the UK. Japan is now third with 33 supercomputers up from 27 in November. Germany ranks fourth with 28, down from 31. France and the UK are tied for fifth with 17 systems each, with France dropping three systems and the UK adding four.

Shifting the perspective to aggregate performance share maintains the ordering: U.S. (33.8 percent), China (32 percent), Japan (6.6 percent), Germany (5.6 percent), France (3.4 percent) and the United Kingdom (3.4 percent).

Looking at the vendor landscape, Hewlett Packard Enterprise (HPE) asserts itself as the number one vendor by system volume with 143, picking up 25 systems in the SGI acquisition, finalized last November. Lenovo is second with 88 systems, followed by Cray (57 systems), Sugon (46) and IBM (27). On the previous list iteration, it was HPE (112 systems), Lenovo (92 systems), Cray (56 systems), Sugon (47) and IBM (with 33). There was only one new IBM system on today’s listing.

June 2017 Top500 vendor tree map (percent of total list performance)

When it comes to total list performance share, Cray maintains its lead at 21.4 percent, a skosh up from 21.3 percent six months back. Bolstered by its SGI acquisition, HPE comes back to a solid second place with 16.6 percent, up from 9.8 percent. With the strong showing of the combined HPE+SGI installs, Sunway TaihuLight developer NRCPC drops to third with 12.5 percent of the total installed performance (down from 13.8 percent). Lenovo is next (9.3 percent, up from 8.8 percent), then IBM (7.5 percent, down from 8.8 percent).

The aggregate performance of all 500 computers on the 49th list stands at 749 petaflops, compared to 672 petaflops six months ago and 567 petaflops one year ago. This 32 percent annual growth rate is far below historical trends, which prior to 2008 averaged about 90 percent per year and more recently averaged around 55 percent per year. It’s a trend that shows no signs of reversal, according to the Top500 authors.

Source: Top500
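
For readers who want to reproduce the growth figures, they follow directly from the aggregate totals quoted above; here is a quick sketch in Python using only the numbers cited in this article:

    # Aggregate Top500 list performance, in petaflops, from the figures above.
    june_2016 = 567.0
    nov_2016 = 672.0
    june_2017 = 749.0

    annual_growth = (june_2017 / june_2016 - 1) * 100      # year over year
    list_over_list = (june_2017 / nov_2016 - 1) * 100      # versus six months ago

    print(round(annual_growth, 1))   # ~32.1 percent, matching the ~32 percent cited
    print(round(list_over_list, 1))  # ~11.5 percent since the November 2016 list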

The aggregate performance of the top ten machines is 235.9 petaflops, up from 226 petaflops, owed solely to the Piz Daint upgrade. Twenty-one systems have joined the petaflops club, bringing total membership to 138, up from 117 six months ago. The admission point for the TOP100 is currently 1.21 petaflops, up from 1.07 petaflops. The bar for entry onto the list has been raised to 432.2 Linpack teraflops, compared to 349.3 teraflops on the last list.

Other notable trends observed by Top500 authors:

  • Accelerator/co-processor trends through June 2017 (Source: Top500): A total of 91 systems on the list use accelerator/co-processor technology, up from 86 in November 2016. Of these, 71 use NVIDIA chips, 14 use Intel Xeon Phi technology (as co-processors), one uses ATI Radeon, and two use PEZY technology. Three systems use a combination of Nvidia and Intel Xeon Phi accelerators/co-processors. An additional 13 systems now use Xeon Phi as the main processing unit.
  • The average number of accelerator cores for these 91 systems is 115,000 cores/system.
  • Intel continues to provide the processors for the largest share (92.8 percent) of TOP500 systems.
  • Of the 500 systems, 93.0 percent use processors with eight or more cores, 68.6 percent use twelve or more cores, and 27.2 percent use sixteen or more cores.
  • Gigabit Ethernet is now at 207 systems (unchanged), in large part thanks to 194 systems now using 10G interfaces. InfiniBand technology is now found on 178 systems, down from 187 systems, and is the second most-used internal system interconnect technology.
  • Intel Omni-Path technology, which made its first appearance one year ago with 8 systems, is now found in 38 systems, up from 28 systems six months ago.

Also noteworthy, the Top500 list now incorporates the HPCG benchmark results “to provide a more balanced look at performance,” according to the list editors. They further report that “the fastest system on the HPCG benchmark is Fujitsu’s K computer which is ranked #8 in the overall Top500. It is followed closely by Tianhe-2 which is also No. 2 on the Top500.” This lineup is unchanged since the November HPCG ranking results.

Highlights from the Green500

The new list has an interesting tale to tell when it comes to energy efficiency metrics. Japan captured the top four spots of the Green500 with four new systems and the upgraded Swiss Piz Daint has the fifth spot. The fact that all five of these systems employ Tesla P100 GPUs speaks well for Nvidia, which also claims the seventh through fourteenth Green500 spots.

At the top of the green ranking, touting 14.110 gigaflops/watt, is the new TSUBAME 3.0, a modified HPC ICE XA machine, designed by Tokyo Tech and HPE. The system earned a 61st place spot on the TOP500 with a 1.998-petaflop Linpack run. The new Green500 record holder bests the previous record set by Nvidia’s internal Saturn V supercomputer six months ago (8.17 gigaflops/watt) by 72.7 percent.

The second-place Green500 system is “kukai,” built by ExaScaler and installed at the Yahoo Japan Corporation. It achieves 14.045 gigaflops/watt, a mere 0.3 percent behind TSUBAME 3.0. Its Top500 ranking is 466. Coming in at number three is the AIST AI Cloud system at the National Institute of Advanced Industrial Science and Technology, Japan. The NEC machine achieves 12.68 gigaflops/watt and is ranked number 148 on the Top500. The fourth-place Green500 system is the Fujitsu-made RAIDEN GPU system, installed at RIKEN’s Center for Advanced Intelligence Project. It accomplished 10.6 gigaflops/watt and sits at number 306 on the Top500 line-up.

Piz Daint, the fifth-ranked supercomputer on the Green500, achieved 10.4 gigaflops/watt. For the number three system on the Top500, this is quite an accomplishment, as energy efficiency does not always hold up at that scale. The fact that Piz Daint is the most energy-efficient supercomputer among the top 50 fastest supercomputers speaks to that point.

In sixth position is “Gyoukou,” the ExaScaler ZettaScaler-1.6 system at the Japan Agency for Marine-Earth Science and Technology, with 10.2 gigaflops/watt. Relying on PEZY-SC2 accelerators, Gyoukou is the highest-ranking non-GPU system on the Green500 list.

The TOP500 and Green500 awards will be presented by Top500 co-author Horst D. Simon, deputy director of Lawrence Berkeley National Laboratory, at 10:30 am today in Frankfurt. We expect lots more analysis to come out of the Top500 and Green500 program tracks. We will report back on these and other benchmarking results presented at ISC 2017. If you have any insights or comments to share, please catch me by email or in-person at the show.

The post Breaking: 49th Top500 List Announced at ISC appeared first on HPCwire.

ISC: Extreme-Scale Requirements to Push the Frontiers of Deep Learning

Sat, 06/17/2017 - 11:36

Deep learning is the latest and most compelling technology strategy to take aim at the decades-old “drowning in data/starving for insight” problem. But contrary to the commonly held notion, deep learning is more than a big data problem per se. Delivering on deep learning’s potential – and achieving its anticipated 50 percent annual growth rate market opportunity – involves a highly demanding scaling problem that requires overlapping computational and communications capabilities as complex as any of the classic supercomputing challenges of the past.

That’s the view of Cray senior VP and CTO Steve Scott, who will discuss “pushing the frontiers of deep learning” at ISC in Frankfurt to close out Deep Learning Day (Wednesday, June 21) at the conference.

Scott told EnterpriseTech (HPCwire‘s sister publication) the focus of his session will be on training at-scale neural networks to handle complex deep learning applications: self-driving cars, facial recognition, robots sorting mail, supply-chain optimization and aiding in the search for oil and gas, to name a few.

“The main point I’ll be making is that we see a general convergence of data analytics and classic simulation and modeling HPC problems,” he said. “Deep learning folds into that, and the training problem in particular is a classic HPC problem.”

In short, greater machine intelligence requires larger, more complex models – with billions of model weights and hundreds of layers.

Ideally, Scott said, when training neural networks using the stochastic gradient descent algorithm, “you’d process one sample of that training data, then update the weights of your model, and then repeat that process with the next piece of training data and then update the weights of your model again.”

Cray’s Steve Scott

The problem, he said, is that it’s an inherently serial model. So even when using a single node, Scott said, users have traditionally broken up their training data into sets – called “mini-batches” – to speed up the process. The entire training process becomes much more difficult when you want to train your network not on one GPU, or 10 GPUs, but on a hundred or thousands of GPUs.

You can simplify training by using lesser amounts of data, but that leads to deep learning systems that haven’t been trained thoroughly enough and, therefore, aren’t intelligent enough. “If you have a small amount of data and you try to use it to train a very large neural network,” Scott said, “you end up with a phenomenon called ‘overfitting,’ where the model works very well for the training data you gave it, but it can’t generalize to new data and new situations.”

So scale is essential, and scale is a big challenge.

“Scaling up this training problem to large numbers of compute nodes brings up this classic problem of convergence of your model vs. the parallel speed you can get,” Scott said. “This is a really tough problem. If you have more compute nodes working in parallel you can process more samples per second. But now you’re doing more work each time; you’re processing more samples before you can update the model weights. So the problem of converging to the correct model becomes much more difficult.”
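
To make the trade-off concrete, the sketch below shows synchronous data-parallel mini-batch SGD on a toy linear model using NumPy and mpi4py. It illustrates the general technique Scott is describing rather than any Cray implementation, and the model, data and hyperparameters are invented for the example: each rank computes a gradient on its own mini-batch, the gradients are averaged with an allreduce, and every rank applies the identical update. Adding ranks increases the samples processed per step, but it also increases the effective batch size per weight update, which is exactly the convergence-versus-parallelism tension described above.

    # Minimal synchronous data-parallel SGD sketch (toy linear regression).
    # Illustrative only; run with e.g.: mpirun -np 4 python sgd_allreduce.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, nranks = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(seed=rank)      # each rank holds its own data shard
    true_w = np.arange(1.0, 5.0)
    X = rng.standard_normal((10_000, 4))
    y = X @ true_w + 0.01 * rng.standard_normal(10_000)

    w = np.zeros(4)                             # model weights, identical on every rank
    lr, local_batch = 0.1, 32                   # effective global batch = local_batch * nranks

    for step in range(200):
        idx = rng.integers(0, len(X), size=local_batch)
        xb, yb = X[idx], y[idx]
        grad = 2.0 * xb.T @ (xb @ w - yb) / local_batch   # local mini-batch gradient

        # Average gradients across ranks; this synchronization step is what becomes
        # expensive, and what limits convergence, as the node count grows.
        comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
        grad /= nranks

        w -= lr * grad                          # every rank applies the same update

    if rank == 0:
        print("learned weights:", np.round(w, 3))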

Scott will discuss the kind of system architecture required to take on deep learning training at scale, an architecture that – surprise! – Cray has been working on for years.

“It calls for a very strong interconnect [the fabric, or network, connecting the processors within the system], and it also has a lot to do with turning this into an MPI [the communications software used by the programs to communicate via the fabric] problem,” Scott said. “It calls for strong synchronization, it calls for overlapping your communications and your computation.

“We think bringing supercomputing technologies, from both a hardware and a software perspective, to bear can help speed up this deep learning problem that many people don’t think of. They think of it as a big data problem, not as a classic supercomputing problem. We think the core problem here in scaling these larger models is one in which supercomputing technology is uniquely qualified to address.”
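
The overlap of communication and computation that Scott calls for can likewise be sketched with MPI-3 non-blocking collectives, exposed in mpi4py as Iallreduce: start reducing one buffer, keep computing on other data, then wait for the reduction to finish. The buffers and stand-in work below are hypothetical; the point is only the pattern of hiding a gradient exchange behind ongoing computation, not any vendor's actual training pipeline.

    # Sketch of overlapping an allreduce with local computation using an MPI-3
    # non-blocking collective. Run with e.g.: mpirun -np 4 python overlap_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    grads = np.random.rand(1 << 20)        # pretend gradients, already computed
    reduced = np.empty_like(grads)

    # Start summing this buffer across ranks without waiting for completion.
    req = comm.Iallreduce(grads, reduced, op=MPI.SUM)

    # Keep computing while the network moves data; in a real training step this
    # would be back-propagation through the remaining layers.
    other_work = np.sqrt(np.random.rand(1 << 20)).sum()

    req.Wait()                             # gradients are now summed across ranks
    reduced /= comm.Get_size()

    if comm.Get_rank() == 0:
        print("overlapped work result:", round(float(other_work), 2))
        print("first averaged gradient:", float(reduced[0]))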

Scott said deep learning has taken root to different degrees in different parts of the market. Hyperscalers (Google, Facebook, Microsoft, AWS, etc.) have thousands of projects under development with many, in voice and image recognition in particular, fully operational.

“It’s really past the tipping point,” Scott said. “The big hyperscalers have demonstrated that this stuff works and now they’re applying it all over the place.”

But the enterprise market, lacking the data and the compute resources of hyperscalers, remains for now in the experimentation and “thinking about it” phase, he said. “The enterprise space is quite a bit further behind. But they see the potential to apply it.” Organizations that are early adopters of IoT, with its attendant volumes of machine data, are and will be the early adopters of deep learning at scale.

“We’re seeing it applied to lots of different problems,” said Scott. “Many people, including me, are optimistic that every area of industry and science and beyond is going to have problems that are amenable to deep learning. We think it’s going to be very widespread, and it’s very large organizations with large amounts of data where it will take root first.”

The post ISC: Extreme-Scale Requirements to Push the Frontiers of Deep Learning appeared first on HPCwire.

Asetek at ISC17: Liquid Cooling Becoming New Norm

Fri, 06/16/2017 - 07:16

AALBORG, Denmark, June 16, 2017 — Asetek, a world leader in liquid cooling for HPC and data centers, will return to the International Supercomputing Conference (ISC17) in Frankfurt, Germany, June 18–22, 2017.

Asetek’s cost- and energy-efficient liquid cooling solutions for HPC and data centers are on display at booth #J-600. Asetek’s engineers and sales team, including Mandarin-speaking staff, are present to showcase Asetek’s innovative solutions.

The 2016 TOP500 and Green500 lists include nine installations that use Asetek liquid cooling, including Japan’s fastest computer, the Oakforest-PACS supercomputer in Tokyo (ranked #6 on both lists). Today, an increasing number of HPC installations deploy Asetek technology, with existing installations expanding and new ones coming onboard.

Liquid cooling is becoming the new norm in the face of skyrocketing wattages of CPUs, GPUs and cluster nodes. Asetek cooling solutions are currently provided by OEMs such as Cray, Fujitsu, Format and Penguin. Liquid cooling for NVIDIA P100, Intel Knights Landing and Skylake will be on display at Asetek’s booth #J-600, as well as rack-level options such as InRackCDU and Vertical RackCDU.

“2017 is the inflection point for HPC cooling. We are seeing higher-wattage clusters that will need liquid cooling so they can operate. It is a clear sign that liquid cooling is becoming the new norm. We are also seeing new HPC installations come onboard – a clear signal that our technology is making an impact on the industry,” said John Hamill, Asetek Chief Operating Officer.

If you would like to schedule an appointment with our engineering and sales team, please send an email to questions@asetek.com. Otherwise, please stop by booth J-600 at ISC17.

To learn more about Asetek liquid cooling, please visit www.asetek.com and follow us on Twitter @Asetek and Facebook.

Follow the International Supercomputing Conference (ISC17) on Twitter @ISChpc or on the ISC17 conference website.

About Asetek

Asetek is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information, visit www.asetek.com

Source: Asetek

The post Asetek at ISC17: Liquid Cooling Becoming New Norm appeared first on HPCwire.

Mellanox Announces a Strategic Collaboration With HPE

Fri, 06/16/2017 - 07:14

SUNNYVALE, Calif. & YOKNEAM, Israel, June 16, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, announced today a strategic collaboration with HPE to enable the industry’s most efficient high-performance computing and machine learning data centers, based on the innovative technologies from both companies. The mutual solutions will enable customers to leverage the InfiniBand and Gen-Z open standards to maximize their return on investment for current and future data centers and applications.

Leveraging Mellanox’s intelligent In-Network Computing capabilities in ConnectX®-5 InfiniBand adapters and Switch-IB2 InfiniBand switches in the recently announced HPE SGI 8600 and HPE Apollo 6000 Gen10 systems enables both companies to offer the most powerful, scalable and efficient high-performance computing and machine learning fabric solutions in the industry. This collaboration will enable the companies to pursue synergistic technology integration and optimal use of the upcoming HDR InfiniBand Quantum switches, ConnectX-6 adapters and future Gen-Z devices. In addition, joint development with HPE’s Advanced Development team will pave the road to exascale computing.

Mellanox In-Network Computing interconnect architecture provides the leading technology to advance data processing and real time analytics, which is critical for a diverse range of HPC and machine learning applications. The collaboration with HPE will enable the companies to leverage the technology advancements of HDR InfiniBand and Gen-Z to overcome performance limitations of proprietary products and advance the levels of compute efficiency, scalability and data processing performance.

“Data-centric computing, In-Network computing and open standards are all necessary for technology innovations and for achieving state-of-the-art performance for a broad range of HPC and machine learning applications,” said Bill Mannel, vice president and general manager of HPC and AI at HPE. “The HPE and Mellanox collaboration will enable us to bring the solutions our customers need, based on InfiniBand and Gen-Z capabilities, to maximize data center efficiency and to deliver the highest value to our customers.”

Mellanox and HPE will showcase the new solutions at the world-wide development and benchmarking centers and together will optimize customers’ applications and provide support in planning, developing and deploying future data centers.

“We have been working with HPE for more than a decade now, accelerating many of the world’s leading supercomputers,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The growing demands for faster data analysis and scalable data simulations mandate technology innovations and collaborations around open standards, to deliver the needed compute and storage infrastructures today and in the future. The collaboration with HPE will enable tighter technology development and optimized use of the InfiniBand architecture today and of Gen-Z in the future. This will allow both companies to build the most efficient data centers for HPC and machine learning workloads.”

Mellanox and HPE will be presenting more details of the new collaboration at June 2017 HP-CAST and the International Supercomputing Conference in Frankfurt, Germany.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox’s intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at www.mellanox.com.

Source: Mellanox

The post Mellanox Announces a Strategic Collaboration With HPE appeared first on HPCwire.

ARM Announces Beta Release of ARM Fortran Compiler

Thu, 06/15/2017 - 19:19

From the ARM Community Blog

Alongside the open source compilers for porting scientific applications to ARMv8-A, for the first time sits a new commercial solution from ARM, promising regular and targeted updates made specifically for high performance computing (HPC) and scientific computing users. For decades, Fortran has been and remains a leading language of choice for HPC and scientific application codes, a primary driver for ARM’s work in delivering high-quality Fortran compilers and enabling the progress of scientists seeking to run and advance applications on the 64-bit ARM architecture. The good news: we’re now in a position to announce the availability of a beta release of the ARM Fortran compiler, supporting Fortran 2003 and prior versions.

Our commercial compiler complements the two quality open source solutions available on ARM today, namely GFortran and Flang/LLVM. Featuring PGI’s open-source Flang front-end, the ARM Fortran compiler provides users with wide Fortran application coverage and support when porting code to ARMv8-A compatible platforms such as the Cavium ThunderX2 or the Qualcomm Centriq SoCs. It even offers a future-proof solution for those looking ahead, as it features full support for the ARM Scalable Vector Extension (SVE).

A comprehensive HPC tools suite for ARM

The beta release of the ARM Fortran compiler is another step in demonstrating ARM’s continued investment in the HPC ecosystem, with the goal of providing HPC developers a complete toolchain with best-in-class performance. We announced ARM Performance Libraries at SC15 and ARM C/C++ Compiler at SC16, acquired Allinea and its market-leading profiler and debugger tools in December 2016, and continue that momentum with the ARM Fortran compiler. Over the next few months, we will hone the quality and performance of the ARM Fortran compiler and intend to provide a fully supported version by November 2017. At that point, HPC developers will have a comprehensive, commercially supported development tool suite for ARM platforms with the ARM C/C++/Fortran compiler for HPC, ARM Performance Libraries (with optimized BLAS, LAPACK and FFT routines), and the industry-standard Allinea tools – DDT debugger, MAP profiler and Performance Reports.

“The availability of a Fortran compiler, designed specifically for the ARMv8-A architecture, addresses a core requirement for the HPC software development community. This release highlights the commitment from ARM in supporting users in running scientific applications successfully from end to end, taking account of their current and future application needs,” said Larry Wikelius, Vice President of the Software Ecosystem and Solutions Group at Cavium, Inc. “Cavium has partnered closely with ARM for support on Cavium’s ThunderX2 product family and congratulates ARM on this key HPC product milestone.”

Building on PGI’s open-source Flang front-end and LLVM

For the past year, ARM has been collaborating with PGI and other partners on the development of an open-source Flang front-end to LLVM.

With PGI open sourcing the Flang front-end in May 2017, we have integrated it with our LLVM code generation backend in our commercial Fortran compiler. For HPC users, this delivers a compiler with wide Fortran application coverage and predictable performance.

Availability

ARM Fortran Compiler (Beta) is now available as part of the latest ARM Compiler for HPC Package release. The package also includes ARM C/C++ Compiler and ARM Performance Libraries and is available for leading Linux distributions running on ARMv8-A based hardware.

Getting Started

We expect HPC deployments based on ARM partner hardware by the end of 2017, so it’s not too early to try it. The Fortran compiler Getting Started Guide will walk you through a complete workflow – from installing the tools, to compiling a Fortran example with the ARM Fortran Compiler, to running the binary on an ARM device.

Source: ARM Community Blog

The post ARM Announces Beta Release of ARM Fortran Compiler appeared first on HPCwire.

Six Exascale PathForward Vendors Selected; DoE Providing $258M

Thu, 06/15/2017 - 14:53

The much-anticipated PathForward awards for hardware R&D in support of the Exascale Computing Project were announced today with six vendors selected – AMD, Cray, Hewlett Packard Enterprise (HPE), IBM, Intel, and NVIDIA. The Department of Energy (DoE) will provide $258 million while the vendors must contribute at least 40 percent of the total costs bringing the total investment to at least $430 million. Under the recently accelerated ECP timetable, the U.S. expects to field one or two exascale machines in 2021 followed by others in the 2023 timeframe.

Few details were revealed about the specific technology projects being undertaken by the PathForward companies, nor was it disclosed how the money will be divided among the vendors. Nevertheless, the awards mark an important milestone in ECP efforts, noted ECP director Paul Messina.

Speaking at a press pre-briefing yesterday, Messina said the PathForward investment was critical to moving hardware technology forward at an accelerated pace. “By that I mean beyond what the vendor or manufacturer roadmaps currently have scheduled. [It also helps bridge] the gap between open-ended architecture R&D and advanced product development as focused on the delivery of the first of a kind capable exascale systems,” said Messina.

The ECP program has many elements. PathForward awards are intended to drive the hardware technology research and development required for exascale. Applications and software technology development fall under different ECP programs and have a different budget. The actual procurement of the eventual exascale systems is also done differently and funded separately; the individual national labs and facilities which will house and operate the computers purchase their individual systems directly. It now seems likely the first two exascale computing sites will be Argonne National Laboratory and Oak Ridge National Laboratory based on spikes in their facilities budget in the proposed FY 2018 DoE budget.

Much of today’s announcement and yesterday’s briefing had been expected. Messina did provide confirmation that Aurora, the planned successor to the Mira supercomputer at Argonne National Laboratory, is likely to be pushed out or changed. “At present I believe that the Aurora system contract is being reviewed for potential changes that would result in a subsequent system in a different timeframe from the original Aurora system. Since it’s just early negotiations I don’t think we can be any more specific than that,” he said.

It would have been interesting to get a clearer sense of a few specific PathForward technology projects but none were discussed. Much of the work is predictably under NDA. Messina identified what are by now the familiar challenges facing the task of achieving exascale computing: massive parallelism, memory and storage, reliability, and energy consumption. “Specifically the work funded by PathForward has been strategically aligned to address those key challenges through development of innovative memory architectures, higher speed interconnect, improved reliability of systems, and approaches for increasing computer power and capability without prohibitive increases in energy demand,” he said.

Messina noted vendor progress in PathForward would be closely monitored: “Firms will be required to deliver final reports on the outcomes of their research but it’s very important to note this is a co-design effort with other [ECP] activities and we will be having frequent, formally scheduled intermediate reviews every few months. The funding for each of the vendors is based on specific work packages, and as each work package is delivered which would be an investigation on a particular aspect of the research. So it isn’t that we send the money and wait three years and get an answer.”

Messina also emphasized the labs (eventual systems owners) and the ECP app/software teams would be deeply involved in co-design and work product assessment. “Application developers and systems software developers, software library developers, for example, will participate in those evaluations,” he said.

All of the vendors emphasized expectations to incorporate results of their exascale research into their commercial offerings. William Dally, chief scientist and SVP of research at NVIDIA, noted this is NVIDIA’s sixth DoE R&D contract and that previous research contracts led to major innovations, “such as energy efficient circuits and the NVLink interconnect being incorporated into our Maxwell, Pascal, and Volta GPUs.”

In the official DoE release, Secretary of Energy Rick Perry is quoted, “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” said Secretary Perry. “These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

It does seem as if increasing tension in the international community is firing up regional and national competitive zeal in pursuit of exascale. Here’s an excerpt from today’s official release:

“Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.”

Pressed on how the U.S. stacked up against international rivals, particularly China, in the race to exascale, Messina said, “Our current plan is to have delivery of at least one, not necessarily one, in 2021. I would not characterize that as to catch up with China. We do know of course that China has indicated they plan to have at least one exascale system in 2020 but we, for example, do not know whether that system will be a peak exaflops system versus what we are planning to deliver. A concise answer [to your question is we plan to deliver] at least one system in 2021 and another, if not in 2021, then in 2022.”

See HPCwire article for a broader overview of the ECP, Messina Update: The US Path to Exascale in 16 slides.

The six selected PathForward vendors all seek to leverage their various expertise and ongoing R&D efforts. Senior executives and research staff from each company participated in yesterday’s briefing but very few specific details were offered, perhaps understandably so. Here are snippets from their comments.

  • AMD. “Exascale is important because it pushes industry to innovate more and faster. While the focus of the PathForward program is on HPC the benefits are applicable across a wide range of computing platforms and cloud services as well as computational domains such as machine learning and data science,” said Alan Lee, corporate VP for research and advanced development. He positioned AMD as the only company with both x86 and GPU offerings and expertise in melding the two.
  • Cray. “We care very little about peak performance. We are committed to delivering sustained performance on real workloads,” said Steve Scott, SVP and chief technology officer. Cray intends to explore new advances in node-level and system-level technologies and architectures for exascale systems. “[We’ll focus] on building systems that are highly flexible and upgradeable over time in order to take advantage of various [emerging] processor and storage technology.”
  • HPE. HPE plans to leverage its several years of R&D into memory driven computing technologies – think The Machine project. “PathForward will significantly accelerate the pace of our development and allow us to leverage activities and investments such as The Machine. [W]e will accelerate R&D into areas such as silicon photonics, balanced systems architecture, and software [for example],” said Mike Vildibill, VP, advanced technologies, exascale development & federal R&D programs.
  • IBM. “[We believe] future computing is going to be very data centric and we are focused very much on building solutions that allow complex analytics and modeling and simulation to actually be used on very large data sets. We see the major technical challenges to an exascale design to be power efficiency, reliability, scalability, and programmability and we feel very strongly those challenges need to be addressed in the context of a full system design effort,” said Jim Sexton, IBM Fellow and director of data centric systems, IBM Research.
  • Intel. “Exascale from Intel’s perspective is not only about high performance computing. It’s also about artificial intelligence and data analytics. We think these three are all part of the solution and need to be encompassed. So HPC is continuing to grow. It’s really established itself as one of the three pillars of scientific discovery, along with theory and experiment. AI is quickly growing and probably the fastest growing segment of computing as we find ways to efficiently use data to find relationships to make accurate predictions,” said Al Gara, Intel Fellow, data center group chief architect, exascale systems. He singles out managing and reducing power consumption as one area Intel will work on.
  • NVIDIA. “This contract will focus on critical areas including energy efficiency, GPU architectures and resilience, and our findings will certainly be incorporated into future generations of GPUs after the Volta generation,” said Dally. “It also allows us to focus on improving the resilience of our GPUs which allows them to be applied at greater scale than in the past.”

Apparently the expectation is that work during this PathForward contract will be sufficient to support the full ECP, as Messina said there were no plans for a second RFP for the 2023 systems.

Link to DoE press release: https://exascaleproject.org/path-nations-first-exascale-supercomputers-pathforward/

The post Six Exascale PathForward Vendors Selected; DoE Providing $258M appeared first on HPCwire.
