HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

First PEARC17 Exhibitors Announced

Mon, 03/13/2017 - 10:27

NEW ORLEANS, La., March 13, 2017 — The PEARC17 organizing committee has announced the first three exhibitors for the inaugural Practice & Experience in Advanced Research Computing conference: Internet2, Omnibond, and Globus. PEARC17 will take place in New Orleans, July 9-13, 2017.

Internet2, a non-profit exhibitor at the Silver level, will help support the PEARC17 Student Program. Omnibond, a Bronze-level exhibitor, will help support the PEARC17 Tutorials on Monday, and Globus is providing conference-wide support as a non-profit Patron exhibitor.

Internet2 is a non-profit, member-owned advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves more than 94,000 community anchor institutions, 317 U.S. universities, 70 government agencies, 43 regional and state education networks, over 900 InCommon participants, 78 leading corporations working with our community, and 61 national research and education network partners that represent more than 100 countries.

Omnibond is a technology development company, experienced in merging synergies from the research, open source, and business communities. It provides software engineering and support for CloudyCluster: Self Service HPC in the Cloud; OrangeFS: High Performance Parallel Virtual File System; TrafficVision: video analytics for the transportation industry; and NetIQ Identity/Access Management Drivers.

Globus is software-as-a-service for research data management, used by hundreds of research institutions and high-performance computing facilities worldwide. The service enables secure, reliable file transfer, sharing, and data publication for managing data throughout the research lifecycle. Globus is an initiative of the University of Chicago, and is supported in part by funding from the Department of Energy, the National Science Foundation, the National Institutes of Health, and the Sloan Foundation.

For more information on exhibiting at PEARC17, see www.pearc.org/exhibitors or contact Melyssa Fratkin, mfratkin@tacc.utexas.edu.

About PEARC17

PEARC17 is open to professionals and students in advanced research computing. The conference series builds on the XSEDE conferences’ success and core audiences to serve the broader community. In addition to XSEDE, organizations supporting the new conference include the Advancing Research Computing on Campuses (ARCC): Best Practices Workshop, the Science Gateways Community Institute (SGCI), the Campus Research Computing Consortium (CaRC), the ACI-REF consortium, the Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC), and Internet2.

Source: PEARC

Altair HyperWorks 2017 Released

Mon, 03/13/2017 - 10:16

TROY, Mich., March 13, 2017 — Altair has announced the release of HyperWorks 2017. The latest release adds functionality in areas such as model-based development, electromagnetics, nonlinear structural analysis, modeling and meshing, multiphysics and multi-disciplinary analysis, and lightweight design and optimization. New products and enhancement highlights include:

    • Model-based Development Suite: solidThinking Activate, Compose and Embed capabilities encompassing concept studies, control design, system performance optimization and controller implementation and testing are now part of the platform.
    • Electromagnetics Analysis and Design: Flux for EM simulation of static and low frequency applications, and WinProp for propagation modeling and radio network planning are added as perfect complements to FEKO, focused on high frequency EM simulations related to antenna design, placement, radiation hazard and bio electromagnetics.
    • Material Modeling and Manufacturing: Multiscale Designer is a tool for development and simulation of accurate models for heterogeneous material systems including laminated composites, honeycomb cores, reinforced concrete, soil, bones, and various others. Manufacturing offerings now include solidThinking “Click2” products for extrusion, casting and metal forming process simulation.
    • Usability and Efficient Model Management: HyperMesh now offers a complete, robust solution for assembly and model variants management, expanding the part library and configuration management capabilities. Important new features for crash and safety users have also been implemented. A brand new desktop tool called ConnectMe has been developed to efficiently manage, launch and update all the products within the HyperWorks suite.

“HyperWorks 2017 adds key enhancements to the modeling and assembly capabilities of the software,” said James P. Dagg, Chief Technical Officer, User Experience at Altair. “Users can now communicate directly with their enterprise PLM system, storing libraries of parts and configurations of their models. Tasks like setting up a model with multiple configurations for different disciplines can now be done in minutes.”

    • Multiphysics Analysis and Performance: Major speed and scalability improvements have been implemented for all the Altair solvers. In particular, structural analysis capabilities for OptiStruct® have been further elevated to support the most complex nonlinear contact and material models. For fluid simulation (CFD), new turbulence and transition models have been implemented in AcuSolve to capture laminar to turbulent flow regime change.

In terms of computational performance, FEKO, OptiStruct, and RADIOSS leverage the most modern computer architectures and latest parallelization technology to generate solutions faster and make them more scalable on compute clusters.

“With the HyperWorks 2017 release we followed our vision to continue focusing on simulation-driven innovation. We are now able to simulate more physics with improved HPC performance,” said Uwe Schramm, Chief Technical Officer, Solvers and Optimization at Altair. “In particular, with the addition of Flux for low-frequency EM simulation, we’re offering a complete multiphysics portfolio all linked through optimization.”

For more information visit altairhyperworks.com/hw2017.

About Altair
Founded in 1985, Altair is focused on the development and application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA with more than 45 offices throughout 20 countries, and serves more than 5,000 corporate clients across broad industry segments. To learn more, please visit www.altair.com.

Source: Altair

Synopsys’ IC Compiler II Completed Certification for TSMC’s 7-nm Process Technology

Mon, 03/13/2017 - 08:21

MOUNTAIN VIEW, Calif., March 13, 2017 — Synopsys, Inc. (Nasdaq: SNPS) today announced that TSMC has certified the Synopsys Galaxy Design Platform for version 1.0 of its latest 7-nanometer (nm) FinFET process technology.

Further collaboration, anchored around the Design Compiler Graphical and IC Compiler II digital implementation products, has delivered TSMC’s High Performance Compute (HPC) methodology for the 7-nm node to mutual customers; the methodology is proven to deliver broad performance gains aimed at compute-intensive designs. The results of this joint work will accelerate designers’ creation of next-generation products.

With process, performance and yield demands requiring innovative solutions, a broad collaboration on via-structures, seamlessly supported throughout the flow, is a key part of both 7-nm design and the 7-nm HPC flow deployment. The solution consists of performance exploration and what-if analysis of via-structures through Design Compiler Graphical as well as automatic creation and insertion in the IC Compiler II place-and-route flow coupled with PrimeTime ECO support that preserves and enhances via-pillar structures during final timing-signoff ECO stages. The Synopsys-TSMC collaboration produces innovative methodology to enable 7-nm high-performance, high-reliability designs.

Addressing the needs of low-power operation, low-voltage enablement is delivered throughout the Galaxy Design Platform with comprehensive support for Advanced Waveform Propagation (AWP) allied with Parametric-on-chip-variation (POCV) technologies.

IC Compiler II additionally brings signoff timing accuracy within the design-closure phase through the deployment of the PrimeTime timing analysis and signoff technology. A platform-wide deployment of Total-Power-Optimization technologies, including expanded multi-bit-methodology support and advanced concurrent-clock-and-data optimization, furthers designers’ ability to deliver highly differentiated, low-power products.

PrimeTime physically-aware ECO has been enhanced for 7-nm, seamlessly accounting for the latest process-driven requirements, including pin-track alignment of ECO placed cells and power recovery for lower leakage.

“This signifies the completion of a long collaboration between Synopsys and TSMC to deliver full-flow design tools and collateral for the 7-nm process technology,” said Suk Lee, TSMC senior director, Design Infrastructure Marketing Division. “The results of the partnership enable designers to begin early tapeouts today.”

“The collaboration takes full advantage of innovative TSMC 7-nm high-performance low power technologies,” said Bijan Kiani, vice president of product marketing for the Design Group at Synopsys. “The end result enables our mutual customers to engage immediately on high quality production 7-nm designs using the Galaxy Design Platform.”

The Galaxy tools certified by TSMC for their 7-nm process include:

  • IC Compiler II place and route: full-color routing and extraction, advanced cut-metal modeling for reducing end of line spacing, and a full flow deployment of Via Pillar technology.
  • PrimeTime signoff timing: Signoff accurate timing analysis with enhanced variation modeling, low voltage support and Via Pillar ECO technology for HPC designs
  • StarRC signoff extraction: Advanced color-aware variation modeling, Via Pillars support for high-performance design and enhanced FinFET MEOL parasitic modeling for needed accuracy
  • IC Validator physical signoff: Certified runsets for signoff DRC and LVS; cut-metal and complex fill-to-signal space support
  • HSPICE, CustomSim and FineSim simulation solutions: FinFET device modeling with self-heating/aging effect and Monte Carlo feature support. Accurate circuit simulation results for analog, logic, high-frequency, and SRAM designs.
  • Custom Compiler custom design: Full coloring interactive routing, DRC checks and density reporting, color-aware EM and RC reporting.
  • NanoTime custom timing analysis: SPICE-accurate transistor-level static timing analysis of 7-nm embedded SRAMs, with new mesh network parasitic modeling of power rail trench contacts.
  • ESP-CV custom functional verification: Transistor-level symbolic equivalence checking for 7-nm SRAM, macros, and library cell designs.
  • CustomSim reliability analysis: Accurate dynamic transistor-level IR/EM analysis for color-aware EM rules and advanced via support.
  • PrimeRail: Gate-level static/dynamic signal and PG IR/EM analysis with advanced cell current distribution modeling; thermal-aware capability remains an ongoing collaboration between TSMC and Synopsys.

About Synopsys

Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. As the world’s 15th largest software company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP, and is also growing its leadership in software security and quality solutions. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing applications that require the highest security and quality, Synopsys has the solutions needed to deliver innovative, high-quality, secure products. Learn more at www.synopsys.com.

Source: Synopsys

Machine Learning Gets HPC Treatment at University of Pisa

Mon, 03/13/2017 - 01:01

The University of Pisa, established in 1343, is one of the oldest universities in the world and it has continuously evolved to meet the new challenges of international research and education at the highest level.

To help keep the university at the leading edge in multiple scientific, mathematical, and engineering disciplines, its IT infrastructure has received considerable attention. In recent years, the university has determined that its IT center must not only support students and faculty in their research and development ventures, but also assist Italian industry, providing facilities for the design and testing of innovative solutions.

Building a world-class computing center for modern R&D requires help from leading technology partners. Recognizing this, the university partnered with Dell EMC and Intel to build an IT infrastructure that delivers the compute power, storage capacity, and performance required to do advanced and innovative research in highly competitive fields.

The focal point of the effort is the Dell | Intel Competence Centre for Cloud and High Performance Computing (HPC) at the University of Pisa and Scuola Normale Superiore di Pisa. The center was created to respond to the rapidly growing need for cutting-edge infrastructure solutions, allowing university researchers to share and power their work, and visitors to get insights into the latest and most efficient infrastructure technology.

The partnership is a true collaboration with all those involved realizing valuable benefits. “Dell EMC has helped us to create a state-of-the-art storage and computing environment for our students and researchers and, in turn, we have helped with a number of proofs of concepts,” said Maurizio Davini, CTO, IT Center, University of Pisa.

Using PowerEdge blade servers with the latest Intel Xeon processors, University departments and local companies can develop, test, and run innovative algorithms. “We can run highly complex algorithms that were just not possible before our Dell EMC solution,” Davini said. “We can support local organizations in their ventures. And we can drive research that, in turn, is helping development across the region.”

Machine learning steps up

The center is being used by many groups to conduct research in a variety of disciplines. One area that is ripe for the center’s HPC capabilities is machine learning.

Recent efforts have explored machine translation, image captioning, and cancer treatment response prediction. Additionally, industrial projects have focused on big data analytics, machine learning models for the semiconductor industry, and biophysical signal analysis.

Examples of the machine learning work being done in different research areas include:

Semiconductor industry: The center is being used to help develop data analytics and machine learning models to improve integrated circuit production. The methodology quantifies design space coverage, a unifying abstraction for all components of the design-to-manufacturing data flow, with the goal of optimizing the yield of integrated circuits built with today’s much smaller-scale structures.

Life sciences: The center is helping researchers apply machine learning to better understand DNA sequencing data. The work requires encoding DNA sequence data as an image dataset and then using deep learning image classification and training solutions. The researchers use Dell EMC PowerEdge C4130 servers to run multiple analyses in parallel. Applications for this work include personalized and genomics-based medicine.
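
The article does not detail the exact encoding used at Pisa, but the general approach it describes, turning DNA reads into image-like arrays for a deep learning image classifier, can be sketched in a few lines of Python. The read length, helper names, and dummy reads below are illustrative assumptions only, not part of the Pisa pipeline.

    # Hypothetical sketch: one-hot encode DNA reads as 4-channel "images" for a CNN.
    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def encode_read(read, length=100):
        """Encode one read as a 4 x length array (one channel per base)."""
        img = np.zeros((4, length), dtype=np.float32)
        for i, base in enumerate(read[:length]):
            if base in BASES:            # ambiguous bases such as 'N' stay all-zero
                img[BASES[base], i] = 1.0
        return img

    def encode_batch(reads, length=100):
        """Stack many reads into a batch for an image-classification network."""
        return np.stack([encode_read(r, length) for r in reads])

    batch = encode_batch(["ACGTACGTNNGGCC", "TTGACCAGT"])
    print(batch.shape)   # (2, 4, 100) -- this batch would be fed to a CNN for training

Arrays of this shape can be trained and classified with any standard image-classification framework, which is what makes GPU-dense servers such as the PowerEdge C4130 a natural fit for running many such analyses in parallel.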

Biophysics: Combining university research and expertise, industrial demands, and the center’s HPC capabilities, BioBeats ─ a University of Pisa spin-off start-up ─ uses machine learning and artificial intelligence to analyze users’ heartbeats and create adaptive music. The company’s Hear and Now app teaches breathing exercises that help users relax. The app is based on clinically validated stress-reducing and mindfulness practices.

Summary

HPC has become the backbone of science and product development. A balanced HPC infrastructure must be tailored to the task at hand and to the applications and code tied to that task. The right system and implementation, together with proper services and support, make for efficient research workflows and processes, delivering results faster and speeding discovery or a product’s time to market.

The Dell | Intel Competence Centre for Cloud and High Performance Computing at the University of Pisa is expanding the boundaries of what is possible. The center offers compute technology flexible enough to meet diverse density, form-factor, and performance criteria; intelligent, high-performance, and scalable storage strategies that bring the data explosion under control and make data easily accessible to researchers; and networking and interconnect technologies that are simplified and standardized for HPC infrastructures.

 

To learn more about empowering your R&D efforts by making use of the most advanced HPC solutions for the enterprise, visit:

http://www.intel.com/content/dam/www/public/us/en/documents/case-studies/hpc-univ-pisa.pdf

Video: University of Pisa Simplifies HPC with Intel and Dell | Intel IT Center

 http://www.dell.com/hpc

http://www.intel.com/ssf

Major French Bank Now Supporting Humanitarian Research Through World Community Grid

Fri, 03/10/2017 - 17:00

PARIS, March 9, 2017 — SILCA, the information technology and services arm for Crédit Agricole Group, has formally signed on to donate its surplus computer processing power to IBM’s (NYSE: IBM) World Community Grid in support of humanitarian research.

In just its first month of participation, after installing the World Community Grid app on 1,100 employee workstations, SILCA contributed the equivalent of three years of computing time to scientific research.

World Community Grid is an IBM-funded and managed program that advances scientific research by harnessing computing power “donated” by volunteers around the globe. This resource is the equivalent of a virtual supercomputer that helps enable scientists to more quickly conduct millions of virtual experiments. These experiments aim to pinpoint promising drug candidates for further study.

SILCA, which ensures the security and digital transformation of Crédit Agricole Group, first proposed this project at Crédit Agricole Group’s “Innovation Day” event, and won the company’s top award, chosen from among 60 initiatives described by the bank’s subsidiaries. Thanks to this project, SILCA will contribute to significant research studies in many areas, including Zika, tuberculosis, AIDS, Ebola, cancer and clean energy.

For Philippe Mangematin, in charge of innovation development at SILCA, its participation is “a powerful message for Crédit Agricole to send about its commitment to a social responsibility agenda.”

To date, World Community Grid has connected researchers to half a billion U.S. dollars’ worth of free supercomputing power. This resource to accelerate scientific discovery, partially hosted in IBM’s cloud, has been fueled by 720,000 individuals and 440 institutions from 80 countries who have donated more than 1 million years of computing time on more than 3 million desktops, laptops, and Android mobile devices. Their participation has helped identify potential treatments for childhood cancer, more efficient solar cells, and more efficient water filtration materials.

World Community Grid is enabled by Berkeley Open Infrastructure for Network Computing (BOINC), an open source software platform developed at the University of California, Berkeley.

Join World Community Grid today to enable your computer or Android device for a humanitarian project.

Source: IBM Corp.

SC17 Panel Submissions Due April 24

Fri, 03/10/2017 - 07:39

March 10 — Panels at SC17 will be, as in past years, among the most important and heavily attended events of the Conference. Panels bring together the key thinkers and producers in the field to consider (in a lively and rapid-fire context) some of the key questions challenging high performance computing, networking, storage and associated analysis technologies for the foreseeable future.

They bring a rare opportunity for mutual engagement of community leaders and broad mainstream contributors in a face-to-face exchange through audience participation and questioning. Surprises are the norm at panels, which make for exciting and lively hour-and-a-half sessions. Panels explore topics in depth by capturing the opinions of a wide range of people active in the relevant fields.

These sessions represent state of the art opinions, and can be augmented with social media technologies including Twitter, LinkedIn, and video feeds, and even real-time audience polling. Please consider participating!

Important Date: Submission Deadline: April 24, 2017 (23:59 AoE) – Hard Deadline; there will be NO EXTENSIONS.

For Web Submissions: click here. For questions, e-mail: panels@info.supercomputing.org

Source: SC17

Calculations on Supercomputers Help Reveal the Physics of the Universe

Fri, 03/10/2017 - 07:00

March 10 — On their quest to uncover what the universe is made of, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are harnessing the power of supercomputers to make predictions about particle interactions that are more precise than ever before.

Argonne researchers have developed a new theoretical approach, ideally suited for high-performance computing systems, that is capable of making predictive calculations about particle interactions that conform almost exactly to experimental data. This new approach could give scientists a valuable tool for describing new physics and particles beyond those currently identified.

With the theoretical framework developed at Argonne, researchers can more precisely predict particle interactions such as this simulation of a vector boson plus jet event. (Image by Taylor Childers.)

The framework makes predictions based on the Standard Model, the theory that describes the physics of the universe to the best of our knowledge. Researchers are now able to compare experimental data with predictions generated through this framework, to potentially uncover discrepancies that could indicate the existence of new physics beyond the Standard Model. Such a discovery would revolutionize our understanding of nature at the smallest measurable length scales.

“So far, the Standard Model of particle physics has been very successful in describing the particle interactions we have seen experimentally, but we know that there are things that this model doesn’t describe completely. We don’t know the full theory,” said Argonne theorist Radja Boughezal, who developed the framework with her team.

“The first step in discovering the full theory and new models involves looking for deviations with respect to the physics we know right now. Our hope is that there is deviation, because it would mean that there is something that we don’t understand out there,” she said.

The theoretical method developed by the Argonne team is currently being deployed on Mira, one of the fastest supercomputers in the world, which is housed at the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

Using Mira, researchers are applying the new framework to analyze the production of missing energy in association with a jet, a particle interaction of particular interest to researchers at the Large Hadron Collider (LHC) in Switzerland.

Physicists at the LHC are attempting to produce new particles that are known to exist in the universe but have yet to be seen in the laboratory, such as the dark matter that comprises a quarter of the mass and energy of the universe.

Although scientists have no way today of observing dark matter directly — hence its name — they believe that dark matter could leave a “missing energy footprint” in the wake of a collision that could indicate the presence of new particles not included in the Standard Model. These particles would interact very weakly and therefore escape detection at the LHC. The presence of a “jet”, a spray of Standard Model particles arising from the break-up of the protons colliding at the LHC, would tag the presence of the otherwise invisible dark matter.
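
For readers who want the standard definition (not spelled out in the article), the “missing energy” at a collider is the missing transverse momentum, the negative vector sum of the transverse momenta of all reconstructed visible particles:

    \vec{p}_{T}^{\,\mathrm{miss}} = -\sum_{i \in \mathrm{visible}} \vec{p}_{T,i}, \qquad E_{T}^{\mathrm{miss}} = \left| \vec{p}_{T}^{\,\mathrm{miss}} \right|

A weakly interacting particle that escapes the detector contributes nothing to the sum, so it shows up only as this imbalance, which is the “footprint” described above.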

In the LHC detectors, however, the production of a particular kind of interaction — called the Z-boson plus jet process — can mimic the same signature as the potential signal that would arise from as-yet-unknown dark matter particles. Boughezal and her colleagues are using their new framework to help LHC physicists distinguish between the Z-boson plus jet signature predicted in the Standard Model from other potential signals.

Previous attempts using less precise calculations to distinguish the two processes had so much uncertainty that they were simply not useful for being able to draw the fine mathematical distinctions that could potentially identify a new dark matter signal.

“It is only by calculating the Z-boson plus jet process very precisely that we can determine whether the signature is indeed what the Standard Model predicts, or whether the data indicates the presence of something new,” said Frank Petriello, another Argonne theorist who helped develop the framework. “This new framework opens the door to using Z-boson plus jet production as a tool to discover new particles beyond the Standard Model.”

Applications for this method go well beyond studies of the Z-boson plus jet. The framework will impact not only research at the LHC, but also studies at future colliders which will have increasingly precise, high-quality data, Boughezal and Petriello said.

“These experiments have gotten so precise, and experimentalists are now able to measure things so well, that it’s become necessary to have these types of high-precision tools in order to understand what’s going on in these collisions,” Boughezal said.

“We’re also so lucky to have supercomputers like Mira because now is the moment when we need these powerful machines to achieve the level of precision we’re looking for; without them, this work would not be possible.”

Funding and resources for this work were previously allocated through the Argonne Leadership Computing Facility’s (ALCF’s) Director’s Discretionary program; the ALCF is supported by the DOE Office of Science’s Advanced Scientific Computing Research program. Support for this work will continue through allocations from the Innovation and Novel Computational Impact on Theory and Experiment (INCITE) program.

Source: Joan Koka, Argonne National Laboratory

Nvidia Debuts HGX-1 for Cloud; Announces Fujitsu AI Deal

Thu, 03/09/2017 - 17:07

On Monday Nvidia announced a major deal with Fujitsu to help build an AI supercomputer for RIKEN using 24 DGX-1 servers. Midweek at the Open Compute Project (OCP) Summit in Santa Clara, Calif., the GPU technology leader unveiled blueprints for a new open source Tesla P100-based accelerator – HGX-1 – developed for clouds with Microsoft under its Project Olympus. (We’ll make an educated guess that the D in DGX-1 stands for Deep Learning and the H in HGX-1 for Hyperscale.) At roughly the same time, Facebook introduced Big Basin, the successor to its Big Sur GPU server, which also uses Nvidia P100s (in a similar 8-way configuration, which we’ll get into in a moment). And in the embedded world, Nvidia announced the Jetson TX2, billed as a “drop-in supercomputer,” with an ARM-based CPU supporting Pascal graphics.

That’s a productive week by any standard and there are multiple threads to follow here. Most of the activity was driven by artificial intelligence/deep learning’s continued drive into upper-end HPC and the cloud. Nvidia has been striving to leverage its GPU strength in both traditional scientific computing as well as in AI/DL whose applications often require lower precision (32-, 16-, and even 8-bit) computation.

HGX-1, the Project Olympus hyperscale GPU accelerator chassis for AI

Roy Kim, director Tesla Product Management, described the adoption of AI/DL as a revolution gathering speed fast. “The deep learning and AI revolution, even though it is huge, is also fairly young. A few years ago people were still asking the question, what is deep learning. Now every cloud vendor is asking how it can be AI-ready,” said Kim. A standardized HGX-1 design will make that possible, he contends.

The emergence of open source hardware for the cloud via OCP and Olympus is reminiscent of the emergence of the ATX ‘standard’ for PCs. The HGX-1 will be used as part of a standard AI/DL reference platform and enable cloud providers to rapidly develop AI/DL offerings, according to Kim.

Here’s a brief summary of Nvidia’s busy news week:

  • HGX-1. Think DGX-1, without the CPUs. It’s an accelerator box with eight Tesla P100s, connected in the same hypercube mesh as the DGX-1 and also leveraging the NVLink interconnect. The HGX-1 hooks to servers via a PCIe interface. Developed under the Olympus program guidelines, the design is open source such that users could easily take the files to their preferred ODM for manufacture. It will be interesting to see how cloud providers respond and whether significant tweaking takes place to optimize the HGX design for particular AI/DL workloads.
  • Big Basin. Facebook says Big Basin trains models that are 30 percent larger because of enhanced throughput and an increase in memory from 12 GB to 16 GB. “In tests with popular image classification models like ResNet-50, we were able to reach almost 100 percent improvement in throughput compared with Big Sur,” according to Arlene Gabriana Murillo’s FB blog. Designed as a JBOG (just a bunch of GPUs) to allow for the complete disaggregation of the CPU compute from the GPUs, it does not have compute and networking built in, so it requires an external server head node. “By designing [Big Basin] this way, we can connect our Open Compute servers as a separate building block from the Big Basin unit and scale each block independently as new CPUs and GPUs are released,” says FB in the blog. Built in collaboration with ODM Quanta Cloud Technology, the Big Basin system also features Tesla P100 GPU accelerators.
  • Fujitsu AI Supercomputer. The new RIKEN machine will include 24 DGX-1 systems as well as 32 Fujitsu PRIMERGY servers and is expected to reach 4 petaflops peak performance when running half-precision floating point calculations. The new supercomputer is scheduled to go online next month and will be used to accelerate AI research in medicine, manufacturing, healthcare and disaster preparedness. (Image in the original article: RIKEN’s new DGX-1 supercomputer, courtesy of Fujitsu Ltd.)
  • Jetson TX2. A replacement for the Jetson TX1, the embedded module (SoC) features Pascal graphics with 256 CUDA cores while its CPU is an HMP (Heterogeneous Multi-Processor Architecture) Dual Denver plus a quad ARM Cortex-A57. Nvidia, like others, seems to be doing more with ARM, which though strong in the embedded and mobile space has struggled to penetrate the datacenter. That may be changing. Microsoft announced an ARM initiative on cloud workflows this week. “We have been running evaluations side by side with our production workloads and what we see is quite compelling. The high Instruction Per Cycle (IPC) counts, high core and thread counts, the connectivity options and the integration that we see across the ARM ecosystem is very exciting and continue to improve,” wrote Leendert van Doorn of Microsoft in a blog.

The FB Big Basin and Microsoft embrace of HGX-1 suggest some of the different ways in which Nvidia GPU technology may be deployed by cloud vendors. The Microsoft HGX-1, built by Ingrasys (a Foxconn subsidiary), is flexible in the sense that the HGX-1 is deliberately designed to accommodate differing AI/DL workloads.

“[For Facebook], it’s really about their particular workloads. They talk about natural language processing, image processing, and all of this is really core to the services they provide their users. So they built a system that is best suited for their workload. The topology is very similar to the HGX-1 in that it has the same hypercube mesh and has eight Tesla P100s in the box with NVLink. The only difference is that it has been optimized and tuned for deep learning training, which means it has been hardened for DL training as opposed to HGX-1 which is highly configurable,” said Kim.

Interestingly, when Microsoft describes the Olympus philosophy it says: “Project Olympus applies a model of open source collaboration that has been embraced for software but has historically been at odds with the physical demands of developing hardware. We’re taking a very different approach by contributing our next generation cloud hardware designs when they are approx. 50% complete – much earlier in the cycle than any previous OCP project. By sharing designs that are actively in development, Project Olympus will allow the community to contribute to the ecosystem by downloading, modifying, and forking the hardware design just like open source software,” wrote Kushagra Vaid, GM, Azure Hardware Infrastructure, in a fall 2016 blog.

The HGX-1 is a complete design, said Kim, but that doesn’t preclude optimization. “The design itself is complete and so you can go to Foxconn and give them this design and file and say can you manufacture this for us. It’s been tested and it works. Certainly because it is open source I can imagine other cloud vendors going in and saying I could tweak this to be more efficient specifically for the target market that I am going after, and that’s one of the benefits. I wouldn’t be surprised if that happens. I think it gives each cloud provider an ability to optimize the system for their particular workload.”

Will there be an HGX-2? “That’s a good question. The idea is that the standards do evolve to meet the needs of the evolving workloads. We are going to continue to work with our cloud vendors to provide the best answers for that. Without giving you any roadmap, we do expect it to evolve,” said Kim.

IDC’s HPC Group Spun out to Temporary Trusteeship

Thu, 03/09/2017 - 16:42

The immediate fate of the HPC research group within International Data Corp. (IDC) became known yesterday with comments from (now former) IDC Program Vice President, HPC, Earl Joseph, who told HPCwire‘s sister pub EnterpriseTech his group is being spun out under the temporary trusteeship of an entity called Hyperion Research. The arrangement, engineered by the Committee on Foreign Investment in the United States (“CFIUS”), is designed to create a firewall between IDC and Joseph’s group of analysts while allowing it to continue its work in anticipation of an acquisition within the next 12 months.

The divestiture of the HPC group by IDC allows completion of the acquisition of International Data Group, Inc. (IDG), including its subsidiaries IDC, IDG Communications, and IDG Ventures, by a pair of Chinese investors, China Oceanwide Holdings Group and IDG Capital. The deal was first announced six weeks ago.

The transaction received clearance from CFIUS, but it was decided that Joseph’s HPC research group could not be included in the deal because of the sensitive nature of the research it does on behalf of the US government.

Earl Joseph at IDC Directions, Boston

“We can’t be owned by a Chinese entity, that’s what the rules are,” said Joseph at the IDC Directions conference in Boston, adding that 60 percent of the group’s work is done for the U.S. government. “IDC has had to remove everything from its records in our entire footprint. So all documents going back 20 years, employee files – just everything. There have been three rounds of purges.”

Joseph also said IDC is not allowed to do business in the HPC sector for three years. He emphasized that all the research services his group delivered while with IDC, such as its HPC User Forums, will continue unabated.

“The government said our group has to continue functioning,” Joseph said. “So IDC and Oceanwide had to put it together so everything we did in the past would continue….”

Under the terms of the agreement with CFIUS, the Hyperion Research trusteeship has to be sold within 12 months, but Joseph said the intent is for his group to be purchased within three to six months, “and the trustee thinks he can do it in six weeks.”

The terms of the IDG acquisition were not disclosed, but sources in January estimated the sales price to be between $500 million and $1 billion. Founded in 1964 by Pat McGovern, IDG is a global media, market research and venture company; it operates in 97 countries around the world. McGovern, the long-time CEO, passed away in 2014.

China Oceanwide is a privately held, multi-billion dollar, international conglomerate founded by Chairman Zhiqiang Lu. Its operations span financial services, real estate assets, media, technology and strategic investment. The company has a global business force of 12,000.

IDG Capital is an independently operated investment management partnership, which cites IDG as one of many limited partners. It was formed in 1993 as China’s first technology venture investment firm. It operates in many sectors, including Internet and wireless communications, consumer products, franchise services, new media, entertainment, education, healthcare and advanced manufacturing.

HPC Twitter Roundup (March 9, 2017)

Thu, 03/09/2017 - 12:33

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

Finally got my machine room access. Here's a photo of Cori Phase II; first row are Haswells, and remainder are KNL. pic.twitter.com/XDQt0g9F55

— Glenn K. Lockwood (@glennklockwood) March 2, 2017

Because #HPC is in the #DNA of #kaust from day 1. KAUST hosting HPC #Saudi #Arabia on March 13-15 2017. @KAUST_HPC @KAUST_News @cscsch pic.twitter.com/SoX4kUHbux

— KAUST ECRC (@KAUST_ECRC) March 7, 2017

High Performance Computing beginner's workshop happening now! #hpc pic.twitter.com/8i3w1p6vg2

— Ed Swindelles (@edswindelles) March 6, 2017

Stampede 2 buildout is underway! It's @TACC 's newest large-scale system. https://t.co/Bu0FrgoAas pic.twitter.com/FqXMzY4jMZ

— TACC (@TACC) March 9, 2017

I am woman, here me curse…at my code. #internationalwomensday #WomeninHPC #HPC

— Fernanda Foertter (@hpcprogrammer) March 8, 2017

#ISGC2017 keynote by Miron Livny about High Throughput Computing. #HTC #HPC #GridComputing pic.twitter.com/L5FFWkS9Ql

— Andreas Schreiber (@onyame) March 7, 2017

Panel at #IMG_TechSummit #TSMC the next big things #IoT #automotive #HPC #mobile @ImaginationTech pic.twitter.com/I9yUDnmhgP

— JenBernier (@Jenn271) March 8, 2017

Thank you to everyone who joined us for our #OnDemand webinar yesterday! #supercomputing #HPC pic.twitter.com/FyQlW2kDMv

— OhioSupercomputerCtr (@osc) March 9, 2017

“Networking and exchange is a critical component of #HPCSaudi17. We seek to advance an ecosystem of #HPC in the Kingdom,” said Jysoo Lee.

— KAUST (@KAUST_News) March 9, 2017

"If Wright is flight, and Edison is light, then Hopper is code.” –@BarackObama awarding Hopper the Medal of Freedom. #InternationalWomensDay pic.twitter.com/x3MKdh4RGU

— LLNL (@Livermore_Lab) March 8, 2017

HPE's VP & SGI CTO @EngLimGoh is indeed one of the leading #HPC visionaries of our time! Great article https://t.co/681nVSUkAW via @HPCwire pic.twitter.com/HTUkHTUlCN

— ComnetCo (@ComnetCo_Inc) March 7, 2017

104 days, 7 hours to get my #keynote together to talk #hpc #cloud #storage and #national #cyber #infrastructure @ https://t.co/1V9cjsz57n

— James Cuff (@jamesdotcuff) March 6, 2017

.@DellHPC's@MartinHilgeman at #HPC workshop in Khobar, Saudi Arabia. pic.twitter.com/MWrTYlxPgB

— Suhaib Khan (@suhaibkhan) March 9, 2017

David Moss @HartreeCentre Fantastic #HPC resources there to assist industry. @LivUni @KTNUK_ESP @KTN_Creative @HVM_Catapult @KTNUK pic.twitter.com/SpnN5OWRB8

— Richard Foggie (@foggie_esp) March 9, 2017

We are excited to join the @IDC Technical #Computing Advisory Panel to help shape the future of #HPC!

— Rescale (@RescaleInc) March 8, 2017

Great to see longtime @doecsgf committee member Bob Voigt on tap to speak @KAUST_HPC! #HPC https://t.co/aFyzginAxW

— DOE CSGF (@doecsgf) March 7, 2017

I will be happy to discuss about #BurstBuffer during #CUG2017 with anybody interested in the topic and exchange opinions @cray_inc #HPC

— George Markomanolis (@geomark) March 6, 2017

Click here to view the top tweets from last week.

HPC4Mfg Advances State-of-the-Art for American Manufacturing

Thu, 03/09/2017 - 12:29

Last Friday (March 3, 2017), the High Performance Computing for Manufacturing (HPC4Mfg) program held an industry engagement day workshop in San Diego, bringing together members of the US manufacturing community, national laboratories and universities to discuss the role of high-performance computing as an innovation engine for American manufacturing.

Keynote speaker Thomas Lange, 36-year veteran of Procter & Gamble (P&G), the manufacturing company well-known in HPC circles for their Pringles success story, engaged the room with a dynamic recounting of the history of manufacturing in the United States. Lange, an industry consultant since leaving P&G in 2015, emphasized the importance of infrastructure and logistics to the rise of American manufacturing. Throughout the last two centuries, he noted, manufacturing success was tied first to waterways (P&G), then to railroads (Sears), to the interstate-highway network (Walmart), and moving into the present day, the Internet (Amazon).

Tom Lange

“Manufacturers have to innovate how we do our thing or we will diminish,” said Lange. “It’s that simple. It’s not just about regulations and cheap labor off-shore; it’s about innovating how we do what we do, not just what we make. And it turns out innovating manufacturing at scale is too expensive to just try it and see what happens. That is the issue; it’s too big; it’s too expensive to mess with.”

The HPC4Mfg program was launched by the Department of Energy in 2015 to directly facilitate this innovation by infusing advanced computing expertise and technology into the US manufacturing industry, where it “shortens development time, guides designs, optimizes processes, prequalifies parts, reduces testing, reduces energy intensity, minimizes green house gas emissions, and ultimately improves economic competitiveness,” according to HPC4Mfg program management. Advancing innovative clean energy technologies and reducing energy and resource consumption are core elements of the program.

Lori Diachin, HPC4Mfg Director

“The HPC4Mfg program has really been designed for high-performance computing and [demonstrating] the benefits to industry,” said HPC4Mfg Director Lori Diachin. “You see a lot of ways that it’s impacting industry in the projects we have now, and these impacts range from accelerating innovation, facilitating new product design, and upscaling technologies that have been demonstrated in the laboratory or at a small scale.”

HPC4Mfg began with five seedling projects and has since implemented three solicitation rounds. (Awardees for the third round are due to be announced very shortly.) It is now executing an $8.5-9 million portfolio at Lawrence Livermore, Lawrence Berkeley, and Oak Ridge National Laboratories (the managing partner laboratories for the program). The program is in the process of expanding across the DOE national lab space to include access to computers and expertise at other participating laboratories.

Currently, there are 27 demonstration projects (either in-progress, getting started or going through the CRADA process) and one, larger capability project with Purdue Calumet and US Steel (to develop the “The Virtual Blast Furnace”). The projects get access to the top supercomputers in the country: Titan at Oak Ridge, Cori at Berkeley, Vulcan at Livermore, Peregrine at NREL, and soon Mira at Argonne National Lab.

HPC4Mfg is sponsored by the DOE’s Advanced Manufacturing Office (AMO), which is part of the Office of Energy Efficiency and Renewable Energy. The AMO’s mission is to “partner with industry, small business, universities, and other stakeholders to identify and invest in emerging technologies with the potential to create high-quality domestic manufacturing jobs and enhance the global competitiveness of the United States.”

HPC4Mfg proposal submissions by industrial sector (Source: HPC4Mfg)

High-impact manufacturing areas, such as the aerospace industry, automotive, machinery, chemical processing, and the steel industry, are all represented in the participant pool.

“We aim to lower the barriers, lower the amount of risk that industrial companies have in experimenting with high performance computing in the context of their applications,” said Diachin of the program’s vision and goals. “From our perspective, the status of the industry is that some large companies have a lot of access to HPC. They’re very sophisticated in how they use it. On the flip side, very few small-to-medium-sized companies really have the in-house expertise or the access to compute resources that they need to even try out high performance computing in the context of their problems.

“On the DOE side, we do have a lot of expertise and we have very large-scale computers and so we’re able to bring to bear some of those technologies in a large array of different problems, but I think it’s a challenge – and I’ve heard this many times – for industry to understand how do they get access to the expertise that’s in the DOE labs. What is that expertise? Where does it live? They can’t really track everything that’s going on in all the national labs that the DOE has. And so this program is really designed to help reduce those barriers and create that marriage between industry-interesting challenges and problems and HPC resources at the laboratory.”

In terms of disciplines, computational fluid dynamics is a very widely needed expertise, along with materials modeling and thermomechanical modeling, but a wide variety of capabilities is required, according to Diachin.

From Concept to Project: Airplanes, Lightbulbs, and Paper Towels

After submitting a concept paper, followed by a full proposal, successful projects receive about $300,000 from the AMO to fund the laboratory participation in the project. The industrial partners are required to provide at least a 20 percent match to the AMO funding. This is usually in the form of “in kind time and effort” but industrial partners can also provide a cash contribution.

Diachin emphasized that concept papers need not identify a particular lab or PI as collaborator, explaining, “You just need to tell us what your problem is and describe it in a way that we understand what simulation capabilities are needed and what’s the impact that you envision being able to achieve if you’re successful in this demonstration project. The technical merit review team will evaluate each concept paper for relevance as a high performance computing challenge, appropriateness for partnership with the national laboratories, and its ability to have national-scale impact and be successful. And if you haven’t identified a principal investigator at the national lab, we’ll identify the right place and team from the DOE lab complex to get this work done; this matching process is really a unique feature of the program.”

For a given round, the program typically receives about 40 concept papers from which the program office selects about 20 to go forward to full proposal state. From that they select around 10 to be fully-funded. The proposals are evaluated on how well they advance the state-of-the-art for the manufacturing sector, the technical feasibility of the project, the impact to energy savings and or clean energy production, relevance to HPC, and the strength and balance of the team.

“We are really looking for a strong partnership between the DOE lab and the company,” Diachin told HPCwire. “We’re looking for evidence that there were in-depth discussions as part of the proposal writing process and that there’s a good match in terms of the team.”

Building community and workforce is another important goal here, and the AMO funds about 10 student internships to work on the HPC4Mfg program each year.

In her talk, Diachin highlighted several projects. The LIFT consortium in collaboration with the University of Michigan and Livermore is working to predict the strength of lightweight aluminum lithium alloys produced under different process conditions. Implemented in aircraft designs, the new alloys could save millions of dollars in fuel costs.

SORAA/LLNL: GaN crystal growth

The SORAA/Livermore team is working to develop more efficient LED lightbulbs by modeling ammono-thermal crystal growth of gallium nitride to scale up the process. The goal is to reduce production costs of LED lighting by 20 percent. The new high-fidelity model will save years of trial-and-error experimentation typically needed to facilitate large-scale commercial production.

Energy savings in paper-making is the focus of the Agenda2020 Technology Alliance (a paper industry consortium) in collaboration with Livermore and Berkeley. The goal of this project is to use multi-physics models to reduce paper rewetting in the pressing process. The simulations will be used to optimize drying, reducing energy consumption by up to 20 percent (saving 80 trillion BTUs and $250 million each year).

In another paper-related project, P&G and their lab partner Livermore are using HPC to evaluate different microfiber configurations “to optimize the drying time while maintaining user experience.” The project resulted in the development of a new mesh tool, called pFiber, that reduces the product design cycle by a factor of two for smaller numbers of fibers and processing cores, and by a factor of eight for higher fiber counts using a larger number of cores.

This P&G project also illustrates the return on investment for the laboratories. The example represents the largest non-benchmark run done with the Paradyn code at Livermore. “These are very challenging problems that the industry is putting forward that are stretching the capabilities and making our capabilities at the national labs more robust,” said Diachin.

One area that is receiving a lot of attention is additive manufacturing, which is broadly used among multiple industry sectors and thus fits with the role of HPC4Mfg to foster high-impact innovation. “It’s a very hot topic for modeling and simulation, both to better understand the processes and the properties of the resultant parts,” said Diachin.

A collaboration involving United Technologies Research Center (UTRC), Livermore and Oak Ridge is one of the projects studying this industrial process. Their focus is on dendrite growth in additive manufacturing parts. UTRC is one of those companies that has a lot of sophisticated modeling and simulation experience, Diachin explained. “They came to the table with some models that they had in hand that they could run in two dimensions, but they weren’t able to take into three dimensions, so the collaboration is taking the models that they have and looking at implementing them directly in a code at Livermore called AMP and running that to much larger scale. At the same time, at Oak Ridge, there are alternate models that can be used to model these processes, so they are developing these alternate models and then they will compare and contrast these different models to understand the process better. So it’s a very interesting approach.”

Once the projects create these large-scale models in partnership with the labs, there can be a need to then down-scale the applications to employ them in industrial settings. This is where reduced order modeling comes in. “This can be a very nice use of the resources and expertise at the labs,” Diachin told HPCwire. “The way reduced order models often work is you run very large-scale, fine-resolution, detailed simulations of a particular phenomenon and from that you can extract basis vectors from a number of different parameter runs. You can then use those basis vectors to create a much smaller representation of the problem – often two to three orders of magnitude smaller. Problems that required high-performance computing can then be run on a small cluster or even a desktop and you can do more real-time analysis within the context of the parameter space you studied with the large-scale run. That’s a very powerful tool for process optimization or the process decisions you have to make in an operating environment. “
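
The projection-onto-basis-vectors idea Diachin describes is essentially proper orthogonal decomposition (POD). A minimal sketch of that workflow, using purely synthetic snapshot data and illustrative dimensions standing in for real simulation output, might look like this in Python:

    # Hypothetical POD/reduced-order-model sketch; the snapshot matrix here is random
    # stand-in data, not output from any HPC4Mfg simulation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Columns are "snapshots": full-order states from many large-scale parameter runs.
    n_dof, n_runs = 10_000, 50
    snapshots = rng.standard_normal((n_dof, n_runs))

    # The leading left singular vectors of the snapshot matrix form the reduced basis.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    k = 5                            # keep a handful of modes (orders of magnitude smaller)
    basis = U[:, :k]                 # shape (n_dof, k)

    # A full-order state is summarized by just k coefficients; with correlated real
    # simulation data (unlike this random stand-in) the reconstruction error is small.
    full_state = snapshots[:, 0]
    coeffs = basis.T @ full_state    # reduced representation, shape (k,)
    approx = basis @ coeffs
    print(coeffs.shape, np.linalg.norm(full_state - approx) / np.linalg.norm(full_state))

In practice the reduced coefficients would drive a small surrogate model for process optimization, which is what lets problems that originally required a supercomputer run on a small cluster or a desktop.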

HPC4Mfg focuses on manufacturing right now, but the concept is designed to be scalable. “We get a lot of concept papers that are very appropriate for other offices potentially within the Department of Energy and we have been informally socializing them. With the next solicitation we’re going to make that more formal. Jeff Roberts from Livermore National Lab has been working with Mark Johnson at the AMO and others to really expand the program into a lot of different areas,” said Diachin.

The program runs two solicitations per year, in the fall and in the spring. The next funding round will be announced in mid to late March with concept papers due the following month. After the announcement, the HPC4Mfg program management team will be conducting webinars to explain the goals of the program, the submission process and answer any questions.

Announced Projects:

Spring 2016 Solicitation Selectees

Fall 2015 Solicitation Selectees

Seedlings

Bullion x86 Server From Atos Beats Worldwide Performance Record

Thu, 03/09/2017 - 07:16

PARIS, France, March 9 — Atos, a leader in digital transformation, announces through its technology brand Bull that, according to the international benchmark from the Standard Performance Evaluation Corporation (SPEC), its bullion servers have once again broken performance records. Performed with a 16-socket configuration, this benchmark demonstrates that the high-end enterprise bullion x86 servers perform at exceptional levels and are thus the most powerful in the world in terms of speed and memory.

Used by over 100 million end-users worldwide

Bullion is widely deployed in businesses and governments, mainly in Europe, North America, Africa and Brazil. Through its unique features, it supports the digital transformation of many clients including the City of Helsinki. It also simplifies all operations, for example for OPO Oeschger in Switzerland, guaranteeing delivery within 24 hours for all items listed in its massive catalogue.

Bullion offers:

  • An exceptional memory footprint, up to 24 TB (terabytes), to address Big Data applications, in-memory and real time, for example with the SAP HANA appliance where bullion achieved 16TB certification,
  • A significant Total Cost of Ownership (TCO) reduction of large datalakes and virtualized clusters reaching up to 35% on database consolidation projects,
  • A technological alternative at lower cost for Sparc and HP-UX systems.

“This recognition is a real source of pride for the Group. Bull’s technological expertise in the field of infrastructure is once again underlined. The power demonstrated by bullion, coupled with its exceptional memory capacity, makes it the benchmark of servers and supports our ambition to become a world leader in Big Data and SAP HANA,” said Emmanuel Le Roux, Head of Server & Appliances division at Atos.

Atos pushes limits and sets high standards

The latest-generation bullion servers, bullion S, equipped with 384 Intel Xeon processor cores (E7 v4 family) and 4 TB of RAM, achieved a peak result of 14,100 (13,600 base) on the SPECint_rate2006 benchmark. This new top performance is proof of the relevance of the technological choices behind the bullion servers, a modular and consistent range scaling from 2 to 16 sockets. This high level of performance, combined with the platform’s high density and innovative architecture, has enabled the deployment of a whole range of appliances for demanding workload environments such as the market-leading SAP HANA, data lakes and databases.

More details on the benchmark: SPECint_rate2006, January 2017 (see the published SPECint_rate2006 result for bullion).

About SPEC

The System Performance Evaluation Cooperative, now named the Standard Performance Evaluation Corporation (SPEC), was founded in 1988 by a small number of workstation vendors who realized that the marketplace was in desperate need of realistic, standardized performance tests. The key realization was that an ounce of honest data was worth more than a pound of marketing hype. SPEC has grown to become one of the more successful performance standardization bodies with more than 60 member companies. SPEC publishes several hundred different performance results each quarter spanning a variety of system performance disciplines. 

About Atos

Atos SE (Societas Europaea) is a leader in digital transformation with circa 100,000 employees in 72 countries and pro forma annual revenue of circa € 12 billion. Serving a global client base, the Group is the European leader in Big Data, Cybersecurity, Digital Workplace and provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting edge technologies, digital expertise and industry knowledge, the Group supports the digital transformation of its clients across different business sectors: Defense, Financial Services, Health, Manufacturing, Media, Utilities, Public sector, Retail, Telecommunications, and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline.

Source: Atos

The post Bullion x86 Server From Atos Beats Worldwide Performance Record appeared first on HPCwire.

AMAX’s [SMART]DC Data Center Manager to be on Display at Open Compute Summit

Thu, 03/09/2017 - 06:49

FREMONT, Calif., March 9 — AMAX, a leading provider of cloud computing solutions for the modern data center, announced today it will be at Open Compute Summit at the Santa Clara Convention Center on March 8th and 9th, demonstrating how [SMART]DC Data Center Manager can serve as a universal infrastructure management solution for heterogeneous and highly-efficient data centers.

“As data centers scale aggressively to meet the demands of a cloud-dependent, highly-connected XaaS world, enterprises seek ways to reduce the CAPEX and OPEX of their data centers to maximize ROI,” said Dr. Rene Meyer, Director of Product Development, AMAX. “One dominant trend is the movement away from purpose-built, brand-name servers to lower cost, commoditized white box servers that offer a higher degree of customization, and open platforms such as OCP.”

[SMART]DC Data Center Manager was developed to give companies a seamless way to adopt white box or OCP platforms with minimal disruption, and to manage their entire data center infrastructure through a single pane of glass. [SMART]DC is compatible across all major server platforms, including Dell, HP and Lenovo, as well as major white box and OCP-accepted and OCP-inspired platforms. [SMART]DC can be used to manage standard compute and storage platforms as well as high-performance platforms featuring NVIDIA GPUs for Deep Learning and HPC.

Deployed as an out-of-band appliance, [SMART]DC is designed to be a robust remote management tool with intelligent software-defined power and cost saving features to help data centers achieve up to 30% in power savings. The software features versatile interface options, including an intuitive and configurable web GUI, Command Line Interface (CLI), and integration into an existing management solution via API, with zero code modification needed.
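
As a rough illustration of what agentless, out-of-band polling through a BMC management port can look like (this is a generic IPMI/DCMI sketch, not AMAX’s [SMART]DC code; the hostnames and credentials are placeholders), a management host can read each node’s instantaneous power draw without involving the server OS:

```python
import subprocess

# Placeholder BMC addresses and credentials (illustrative only).
BMC_HOSTS = ["10.0.0.101", "10.0.0.102"]
USER, PASSWORD = "admin", "changeme"

def read_power_watts(bmc_host: str) -> int:
    """Query a node's BMC over the network via DCMI; no agent runs on the host OS."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", USER, "-P", PASSWORD, "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Instantaneous power reading" in line:
            # Typical line: "    Instantaneous power reading:   212 Watts"
            return int(line.split(":")[1].split()[0])
    raise ValueError(f"No power reading returned by {bmc_host}")

if __name__ == "__main__":
    for host in BMC_HOSTS:
        print(host, read_power_watts(host), "W")
```

Fleet-wide sweeps of readings like this are the raw material for the consolidation and capacity-planning features listed below.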

Key benefits include:

  • Out-Of-Band Management: Communication is agentless through the BMC management port, independent of server OS. Does not consume CPU resources or interfere with applications.
  • Virtual KVM: Administer, provision, and diagnose servers from anywhere through remote KVM.
  • Real-Time Reporting: Track real-time server activity, power consumption, and thermal trends. Gain insights needed for data center power and efficiency management and future capacity planning without having to rely on additional sensors or meters.
  • Integrated Health Monitoring: Detect, locate, and identify server health issues. Receive alerts for overcooling and hot spots in the data room before they become incidents, improving uptime.
  • Security: Set user and group privileges for access control and rights management.
  • Density: Maximize server count per rack based on power consumption analytics to improve rack space efficiency.
  • Power Savings: Automatically flag idle and underutilized servers for consolidation or repurposing.
  • Call Home Ready: Supports add-on embedded service notification feature for AMAX hardware to expedite break/fix services.

[SMART]DC is the standardized management component in One Platform, AMAX’s OCP-inspired turnkey solution which is delivered fully rack-integrated, tested, and ready to plug and play, minimizing the engineering overhead and time to market needed for deploying OCP. One Platform features a modular design capable of supporting various applications (OpenStack, Big Data Analytics, Cloud Storage, HPC/Deep Learning), and a configurable power shelf with in-rack battery. With [SMART]DC, One Platform can be easily plugged into an existing data center and be managed alongside existing infrastructure under a single management layer.

Source: AMAX

The post AMAX’s [SMART]DC Data Center Manager to be on Display at Open Compute Summit appeared first on HPCwire.

Mellanox to Showcase Cloud Infrastructure Efficiency With SONiC Over Spectrum Open Ethernet Switches

Thu, 03/09/2017 - 06:45

SUNNYVALE, Calif. & YOKNEAM, Israel, March 9 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, announced that it will work with Microsoft to showcase an Open Networking solution with its flagship Open Compute Project (OCP)-compatible Spectrum Ethernet switches running a production-ready networking operating system based on Microsoft’s Software for Open Networking in the Cloud (SONiC) at 10, 25, 40, 50 and 100Gb/s speeds. This combined solution demonstrates continued momentum behind the Open Ethernet initiative, as highlighted by interoperability, high availability, operational simplicity, and end-to-end efficient data movement based on Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE).

Mellanox has been an active contributor to OCP since the inception of the project, with key innovations in host networking solutions supporting the widest range of OCP server platforms, and open networking initiatives such as Microsoft’s Switch Abstraction Interface (SAI) and SONiC. Through collaboration with Microsoft and other open networking equipment vendors, Mellanox has helped to enhance this open cloud networking software and its APIs/SDKs, readying them for deployment in production networks.

In addition, Mellanox has worked with Microsoft to demonstrate the key role that high-performance networking plays in improving total cloud infrastructure efficiency. In a demo led by Mellanox with participation from Microsoft, faster and more efficient data movement leveraging RoCE is shown to significantly enhance storage access and applications such as virtual machine live migration. The high throughput, low and consistent latency, and innovative congestion management implementation of the Mellanox Spectrum Ethernet switch have made it the best choice for supporting highly efficient RoCE-based deployments.

“Open Ethernet has been a Mellanox vision that is well-aligned with our hyperscale customers and partners, and we have invested significant resources to make it come to fruition,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business, Mellanox Technologies. “As a key switch platform interoperable with SONiC, the Mellanox Spectrum stands out due to its outstanding performance and predictability, and the higher total infrastructure efficiency it enables by having the best end-to-end support of RoCE.”

“Through open collaboration with Mellanox and other contributors to the SONiC open source project, we’ve helped enhance the scale, stability and usability of this cloud networking software stack,” said Yousef Khalidi, Corporate Vice President, Microsoft Corp. “We are pleased to see production deployment of a switch operating system based on SONiC powering Microsoft infrastructure and we look forward to a broader footprint of SONiC within Microsoft.”

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies

The post Mellanox to Showcase Cloud Infrastructure Efficiency With SONiC Over Spectrum Open Ethernet Switches appeared first on HPCwire.

Intel Joins the iRODS Consortium

Wed, 03/08/2017 - 12:15

March 8 — The integrated Rule-Oriented Data System (iRODS) Consortium today announced that Intel Corporation has joined the membership-based foundation.

As a consortium member, Intel plans to improve integration between iRODS, the free open source software for data virtualization, data discovery, workflow automation, and secure collaboration; and Lustre, an open source parallel distributed file system used for computing on large-scale high performance computing clusters. Membership in the consortium is a first step in offering an integrated tiered solution to Lustre end-users that allows them to easily move data sets from HPC systems into less costly long-term storage systems, where the data can be managed, shared and kept secure using iRODS. By offering tiered storage using iRODS, administrators of HPC systems running Lustre and scientists who compute their data on these systems will be able to automate policies on when and where to move data once it is no longer needed for compute jobs, restrict and manage access to data, conduct audits and run reports.
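
Conceptually, the tiering policy described above amounts to a periodic sweep of the fast file system that moves cold data to cheaper storage. The sketch below is a hedged illustration only: the paths, the idle threshold and the archive_copy helper are hypothetical, and a real deployment would express this as iRODS policies rather than a standalone script.

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths and policy parameters -- not iRODS configuration.
SCRATCH = Path("/lustre/scratch/project")
ARCHIVE = Path("/archive/project")
MAX_IDLE_DAYS = 30

def archive_copy(src: Path, dst_root: Path) -> Path:
    """Copy a cold file to the archive tier, preserving its relative path."""
    dst = dst_root / src.relative_to(SCRATCH)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)          # a real deployment would also register the
    return dst                      # object with the data management catalogue

def sweep_cold_data() -> None:
    cutoff = time.time() - MAX_IDLE_DAYS * 86400
    for path in SCRATCH.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            archived = archive_copy(path, ARCHIVE)
            path.unlink()           # free the fast Lustre tier once the copy is safe
            print(f"tiered {path} -> {archived}")

if __name__ == "__main__":
    sweep_cold_data()
```

Recording each movement in a catalogue is also what makes the access control, audits and reports mentioned above possible once the data has left the compute tier.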

Scalable Lustre file systems can be part of multiple computer clusters with tens of thousands of nodes, and Lustre is capable of more than a terabyte per second of aggregate throughput. Lustre file systems are a popular choice for businesses with large data centers and data sets, including industries such as meteorology, oil and gas, life sciences, and finance.

“Having Intel and its Lustre development team as members of the iRODS Consortium gives us the opportunity to integrate the iRODS open source data management system with one of the most successful and widely used high performance distributed file systems available,” said Jason Coposky, executive director of the iRODS Consortium. “We will have the opportunity to help some of the world’s top scientists take control of their data and we will be able to collaborate with HPC and Lustre administrators to ensure that powerful supercomputers concentrate on computing data for science and business, rather than handling data storage and management that can be tiered onto cheaper, long-term systems.”

Other members of the iRODS Consortium are Bayer, Dell/EMC, DDN, HGST, IBM, MSC, the National Institute for Computational Science at the University of Tennessee, Panasas, RENCI, Seagate, University College London, Utrecht University, and the Wellcome Trust Sanger Institute.

Source: iRODS Consortium

The post Intel Joins the iRODS Consortium appeared first on HPCwire.

CMU’s Frieze Honored for Increasing Diversity in Computer Science

Wed, 03/08/2017 - 09:19

PITTSBURGH, Penn., March 8 — The Computing Research Association has selected Carnegie Mellon University’s Carol Frieze as the recipient of its 2017 A. Nico Habermann Award, recognizing her sustained, successful efforts to promote diversity in computer science.

Frieze directs Women@SCS, a student/faculty organization that promotes opportunities for women, and SCS4ALL, a student-run initiative to broaden participation in computing by underrepresented groups, in CMU’s School of Computer Science (SCS). Her work has helped SCS consistently enroll and graduate a higher percentage of women than the national average.

Last fall, almost half of SCS’s first-year students were women, setting a school record.

“Carol’s nomination letters attest that she played an important role in creating an inclusive environment at CMU, and her research can help others learn best practices and insights to help spread this type of progress beyond her home institution to the entire community,” the CRA said in announcing the award.

A book co-authored by Frieze and Jeri Quesenberry of the Dietrich College of Humanities and Social Sciences, “Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon University,” was published last year. A guide for computer science programs, it explains the rationale and methods used at Carnegie Mellon over nearly two decades to sustain the cultural changes that support a diverse student body.

The award is named for the late Nico Habermann, a longtime head of CMU’s Computer Science Department and the first dean of its School of Computer Science. Habermann, who also served as head of the National Science Foundation’s Computer and Information Science and Engineering directorate, was deeply committed to increasing the participation of women and underrepresented minorities in computing research. The CRA presents the award to a person who has made outstanding contributions to increase the numbers and successes of under-represented members in computing research.

In addition to leading Women@SCS and SCS4ALL, Frieze has organized “roadshows” that introduce K-12 students to computer science, summer workshops for high school teachers interested in computer science, and workshops to inspire undergraduate women from around the world to consider careers in computer science research.

One of Frieze’s strengths is getting students to take the lead in these projects.

“She guides students rather than doing things for them,” said Lenore Blum, professor of computer science, and Tom Cortina, assistant dean for undergraduate education, in a nominating letter for the Habermann award. “She offers students leadership positions, some of whom would never consider requesting such a position, but Carol asks them and guides them to shine as leaders.”

Frieze, who earned a Ph.D. in cultural studies in computer science, has performed groundbreaking research, with co-authors including Blum and Quesenberry, on the interests of male and female computer science students and on how to enhance diversity.

She also organized BiasBusters@CMU, workshops that help faculty, staff and students become aware of unconscious biases in the workplace and learn how to effectively intervene.

In 2015, CMU honored Frieze with its Mark Gelfand Award for Educational Outreach in recognition of her work creating opportunities for women and underrepresented groups in computer science. Last year, she and Jeff Bigham, associate professor in the Human-Computer Interaction Institute, received the 2016 AccessComputing Capacity Building Award, which honors collaborators who work to advance students with disabilities in computing fields.

About Carnegie Mellon University

Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.

Source: Carnegie Mellon University

The post CMU’s Frieze Honored for Increasing Diversity in Computer Science appeared first on HPCwire.

AMD Previews “Naples” High Performance Server Processor

Wed, 03/08/2017 - 09:16

SANTA CLARA, Calif., March 8 — AMD (NASDAQ: AMD) took a significant step into the server and datacenter market with its most detailed look yet at the upcoming high-performance CPU for servers, codenamed “Naples”. Purpose-built to disrupt the status-quo and to scale across the cloud datacenter and traditional on-premise server configurations, “Naples” delivers the highly regarded “Zen” x86 processing engine in industry-leading configurations of up to 32 cores. Superior memory bandwidth and the number of high-speed input / output channels in a single-chip further differentiate “Naples” from anything else in the server market today. The first processors are scheduled to be available in Q2 2017, with volume availability building in the second half of the year through OEM and channel partners.

“Today marks the first major milestone in AMD re-asserting its position as an innovator in the datacenter and returning choice to customers in high-performance server CPUs,” said Forrest Norrod, senior vice president and general manager, Enterprise, Embedded and Semi-Custom business unit, AMD. “‘Naples’ represents a completely new approach to supporting the massive processing requirements of the modern datacenter. This groundbreaking system-on-chip delivers the unique high-performance features required to address highly virtualized environments, massive data sets and new, emerging workloads.”

The new AMD server processor exceeds today’s top competitive offering on critical parameters, with 45% more cores, 60% more input / output capacity (I/O), and 122% more memory bandwidth.
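
Those headline percentages are easy to sanity-check against an assumed baseline (a minimal sketch; the 22-core, 40-lane, 4-channel-per-socket competitor configuration is an illustrative assumption, not something stated in the announcement):

```python
# A quick check of the headline comparisons under assumed baselines
# (2-socket "Naples" vs. a hypothetical 2-socket, 22-core / 40-lane competitor;
#  the baseline numbers are illustrative assumptions, not AMD's disclosure).

def pct_more(new: float, old: float) -> float:
    return 100.0 * (new - old) / old

naples_cores_2s, rival_cores_2s = 2 * 32, 2 * 22
naples_lanes_2s, rival_lanes_2s = 128, 2 * 40
naples_mem_ch_2s, rival_mem_ch_2s = 2 * 8, 2 * 4

print(f"cores: +{pct_more(naples_cores_2s, rival_cores_2s):.0f}%")   # ~ +45%
print(f"I/O:   +{pct_more(naples_lanes_2s, rival_lanes_2s):.0f}%")   # +60%
# Channel count alone gives +100%; the quoted +122% memory bandwidth figure
# also reflects the DDR4 speeds used in AMD's own comparison.
print(f"memory channels: +{pct_more(naples_mem_ch_2s, rival_mem_ch_2s):.0f}%")
```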

“It is exciting to see AMD back in the server conversation with a new CPU and a sound strategy for why it is the right processor for the modern datacenter and the cloud computing era,” said Matt Eastwood, senior vice president, Enterprise Infrastructure and Datacenter, IDC. “Looking at the product details announced today, it sounds like a compelling combination that will give IT buyers a unique new option to consider when making their next upgrade.”

“Naples” features:

    • A highly scalable, 32-core System on Chip (SoC) design, with support for two high-performance threads per core
    • Industry-leading memory bandwidth, with 8 channels of memory per “Naples” device. In a 2-socket server, support for up to 32 DIMMs of DDR4 on 16 memory channels, delivering up to 4 terabytes of total memory capacity.
    • A complete SoC with fully integrated, high-speed I/O supporting 128 lanes of PCIe Gen 3, negating the need for a separate chipset
    • A highly-optimized cache structure for high-performance, energy efficient compute
    • AMD Infinity Fabric coherent interconnect for two “Naples” CPUs in a 2-socket system
    • Dedicated security hardware

AMD will deliver two presentations on its datacenter strategy and upcoming products this week during the Open Compute Summit. Scott Aylor, vice president of enterprise solutions, will talk in the main hall on Wed., March 8th at 4:55 PM PST, while Dan Bounds, senior director of enterprise products, will deliver an engineering Tech Talk on Thurs., March 9th at 9:20 AM PST on the Expo Hall stage.

About AMD 

For more than 45 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, and Facebook and Twitter pages.

Source: AMD

The post AMD Previews “Naples” High Performance Server Processor appeared first on HPCwire.

SC17 Now Accepting Submissions for Technical Papers

Wed, 03/08/2017 - 08:05

March 8 — The SC17 Conference Committee is now accepting submissions for technical papers. The Technical Papers Program at SC is the leading venue for presenting the highest-quality original research, from the foundations of HPC to its emerging frontiers. The Conference Committee solicits submissions of excellent scientific merit that introduce new ideas to the field and stimulate future trends on topics such as applications, systems, parallel algorithms, data analytics and performance modeling. SC also welcomes submissions that make significant contributions to the “state-of-the-practice” by providing compelling insights on best practices for provisioning, using and enhancing high-performance computing systems, services, and facilities.

The SC conference series is dedicated to promoting equality and diversity and recognizes the role that this has in ensuring the success of the conference series. We welcome submissions from all sectors of society.  SC17 is committed to providing an inclusive conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion.

Source: SC17

The post SC17 Now Accepting Submissions for Technical Papers appeared first on HPCwire.

E4 Computer Engineering and Wistron Corporation Introduce New Energy-Efficient OCP Platform

Wed, 03/08/2017 - 07:30

SANTA CLARA, Calif., March 8 — E4 Computer Engineering and Wistron Corporation today showcase the result of their joint effort to produce an open-rack, petaflops-class computing solution based on the IBM POWER architecture that features, among other distinctive characteristics, remarkable energy efficiency.

As an active member of the OpenPOWER Foundation, E4 Computer Engineering takes a proactive approach and aims to add a distinctive, pioneering solution to its OpenPOWER-based range that can improve energy efficiency for its HPC and enterprise users.

With the open-rack form factor compute nodes provided by Wistron, E4 was able to fully customize the solution by adding liquid cooling and an InfiniBand interconnect.

This new system, named OP 206 Gold, is a 2U, 21” Open Rack enclosure with integrated piping and power distribution. It is a POWER8-based node in OCP form factor, with leading-edge features specifically engineered for HPC workloads. The node pairs two IBM POWER8 processors with four NVIDIA Tesla P100 GPUs connected via NVLink, and adds liquid cooling, its newest feature. Liquid cooling is key to increasing performance repeatability (performance no longer depends on variations in processor temperature) and to extending the lifespan of components (by lowering the thermal stress caused by higher temperatures).

Specifically, for the OP206 Gold, E4 Computer Engineering, Wistron and the University of Bologna developed an innovative technology for measuring, monitoring and capping the power consumption of the whole node, through the collection of data from all relevant components (processors, memory, GPU, fans) to improve energy efficiency.
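
For the GPU portion of such node-level telemetry, NVIDIA’s NVML bindings give a feel for what the data collection can look like (a hedged sketch only; the actual E4/Wistron/University of Bologna tooling, and its coverage of processors, memory and fans, is not described in this announcement):

```python
import time
import pynvml  # NVML Python bindings, e.g. "pip install nvidia-ml-py"

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

# Sample the instantaneous board power of every GPU once per second.
try:
    for _ in range(10):
        watts = [pynvml.nvmlDeviceGetPowerUsage(h) / 1000.0  # milliwatts -> watts
                 for h in handles]
        print("GPU power (W):", [f"{w:.1f}" for w in watts],
              "| node GPU total:", f"{sum(watts):.1f}")
        # Capping could use nvmlDeviceSetPowerManagementLimit(handle, limit_mW),
        # subject to the limits the board supports.
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```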

“Finding new ways of making easily deployable and energy-efficient HPC solutions is often a complex task, which requires a lot of planning, testing and benchmarking,” said Cosimo Gianfreda, CTO and co-founder of E4 Computer Engineering. “We are very lucky to work with great partners like Wistron, as their timing and accuracy mean we have all the right conditions for an effective time-to-market. I strongly believe that the performance of the node, coupled with the power monitoring technology, will receive wide acceptance from the HPC and enterprise community.”

“The open and collaborative spirit of innovation within the OpenPOWER Foundation enables companies like E4 to take advantage of new technology and build solutions to help customers dealing with the huge volume of data in today’s technology environment,” said Ken King, IBM general manager of OpenPOWER. “The POWER8 with NVIDIA NVLink processor enables incredible velocity of data transfer between CPUs and GPUs and is ideal for emerging workloads like advanced analytics, AI and machine learning.”

“Tesla P100 GPU accelerator with NVLink multi-GPU technology enables a new class of servers that can deliver the performance of hundreds of CPU server nodes,” said Roy Kim, Director of NVIDIA Tesla Product Management. “With the Tesla Platform and advanced OpenPOWER technology, E4 is delivering innovative, high-powered solutions to tackle the most demanding HPC and artificial intelligence workloads.”

“Accelerating AI applications on OCP infrastructure, Wistron’s POWER8 systems with the NVLink solution support up to four Tesla P100 GPUs, dramatically speeding up performance while managing energy savings within the same rack, making this one of the most powerful platforms for petaflops-class high-performance computing,” said Donald Hwang, Chief Technology Officer and President of EBG at Wistron Corporation.

About E4 Computer Engineering

Since 2002, E4 Computer Engineering has been innovating and actively encouraging the adoption of new computing and storage technologies. Because new ideas are so important, we invest heavily in research and hence in our future. Thanks to our comprehensive range of hardware, software and services, we are able to offer our customers complete solutions for their most demanding workloads on: HPC, Big-Data, AI, Deep Learning, Data Analytics, Cognitive Computing and for any challenging Storage and Computing requirements. E4. When Performance Matters.

For more info:  www.e4company.com

About Wistron Corporation

Wistron Corporation is a Fortune Global 500 company and a technology service provider supplying design, manufacture, and after-sales services for various ICT (information and communication technology) products. We are devoted to increasing the value of our services and systems through developing innovative solutions in the areas of cloud computing, display vertical integration, and e-waste recycling.

As a long-standing partner with IBM, Wistron has more than 10 years PowerPC design experience and provides various flexible business models from barebones to rack integration delivery. For more information, please visit: www.wistron.com.

Source: E4 Computer Engineering

The post E4 Computer Engineering and Wistron Corporation Introduce New Energy-Efficient OCP Platform appeared first on HPCwire.

Supermicro’s Silicon Valley Server and Storage Manufacturing Facilities Go Green

Wed, 03/08/2017 - 07:21

SAN JOSE, Calif., March 8 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in compute, storage and networking technologies, including green computing, has opened its new resource-efficient, LEED (Leadership in Energy and Environmental Design) Gold-certified distribution and final assembly center in Silicon Valley, which includes clean fuel-cell electricity generation on-site.

The new 182,000 square foot facility is the first of five new twenty-first century production hubs within Supermicro’s Green Computing Park that supplement the existing 1.5 million square foot, worldwide headquarters, product-development and manufacturing space. The facility is used to build Supermicro’s comprehensive portfolio of server and storage products, including the new SuperBlade, BigTwin and Simply Double Storage products delivering workload optimized systems to the leading cloud, big data, enterprise and IoT innovators in Silicon Valley and around the world. Supermicro has worldwide engineering, manufacturing and distribution facilities in the United States, Europe and Asia to meet the specific needs of our regional customers. All of these locations deliver the most efficient systems, servers and storage that drive the Internet as well as enterprise datacenter applications.

The new facility will generate its own clean fuel-cell based electricity on-site, saving over $30 million in energy costs over 10 years when fully deployed. A one-megawatt Bloom Energy Server will supply the majority of the facility’s energy load and is configured to maintain critical operations during grid outages. Compared to traditional centralized power sources, the fuel cell delivers enhanced sustainability benefits in many ways: high efficiency, reduced greenhouse gas emissions, avoided air pollutants, and reduced water use. The Bloom Energy Server converts natural gas into clean electricity using a highly efficient electrochemical reaction without combustion. By not burning fuel, the fuel cells virtually eliminate the smog-forming particulates and harmful NOx and SOx emissions generated by conventional power plants. The fuel-cell project will save an estimated 20% on energy expenses and avoid nearly 3 million pounds of CO2 each year, the equivalent of the carbon sequestered by over 1,000 acres of trees.

The LEED-certified buildings are resource efficient, using less water and energy and reducing greenhouse gas emissions. The buildings utilize low-VOC, highly reflective roofing to eliminate the need for air conditioning and are heated by hot-water perimeter airflow heating. The buildings are lit with LED lighting systems. The first building went into service in late 2016, with the second to follow in August of this year.

“We are committed to green computing leadership across our products and facilities. Our new Green Computing Campus uses a fuel cell technology power system that reduces pollution while saving us several million dollars in energy costs per year,” said Charles Liang, President and CEO of Supermicro. “Our investment in leading edge technology provides us with clean and much more reliable electrical power for our vertically integrated system validation, engineering, manufacturing and distribution campus in the heart of Silicon Valley.”

“I could not be more proud of this home-grown San Jose manufacturer’s success over the past two decades,” said San Jose Mayor Sam Liccardo. “Supermicro continues to operate under the motto of ‘We Keep IT Green’ by manufacturing high quality products that meet the latest environmental standards. I thank CEO Liang and his team for their continued commitment to environmental and economic excellence, and for creating jobs here in San Jose.”

”We are proud to partner with another Silicon Valley manufacturing company,” said KR Sridhar, Founder, Chairman and CEO of Bloom Energy. “This project highlights many of the benefits of clean distributed energy, enabling a multi-use campus to generate electricity on-site and protect its operations from grid disruptions, all while reducing operating expenses and reducing criteria pollutants.”

With over $2 billion in revenues and 5x growth in 6 years, Supermicro was ranked the #1 fastest-growing IT company in the world and the #18 fastest-growing company overall in 2016 by Fortune Magazine. It is also a member of the Fortune 1000 largest U.S. corporations. Research and development efforts are performed in-house, which increases communication and collaboration between design teams, streamlines the development process and reduces time-to-market. Using a building block approach, Supermicro provides a broad range of products and enables the delivery of application-optimized solutions based upon customers’ requirements for performance, time-to-market, quality, cost and power efficiency.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Super Micro Computer

The post Supermicro’s Silicon Valley Server and Storage Manufacturing Facilities Go Green appeared first on HPCwire.
