HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

HPE ‘Trade & Match’ Server Tops STAC N1 Test

Tue, 10/10/2017 - 08:56

An overclocked trade & match server from Hewlett Packard Enterprise using a Mellanox network stack has delivered the lowest mean latency to date on a STAC N1 test, according to a report posted by the Securities Technology Analysis Center (STAC) yesterday. Minimizing latency is a key component in highly competitive securities trading.

“The STAC-N1 Benchmarks exercised two of four HPE ProLiant XL170r Gen9 Servers in a 2U HPE Apollo r2600 Chassis, a component of the HPE Apollo 2000 System. Each server was configured with one Intel E5-1680 v3 processor, one HPE Ethernet 10/25Gb 2-port 640SFP28 network adapter, which is a rebranded Mellanox ConnectX-4 Lx adapter, and eight 16 GiB DIMMs. The chassis was configured with two 1400W power supplies,” according to the report (HPE170814).

The CPU clock rate was boosted to 4.5 GHz. The test was conducted last week. The STAC-N1 test measures the performance of a host network stack (server, OS, drivers, host adapter) using a market data style workload. “Compared to all other public STAC-N1 reports of Ethernet-based SUTs (stacks under test), this SUT demonstrated:”

  • Lowest mean latency at both the base rate and the highest rate tested.
  • Lowest max latency at the base rate.
  • Lowest max latency at or above 1 million messages per second.
  • Highest max rate reported: 1.6 million messages per second.
  • 99.9999th percentile latency (six nines) of just 5 μsec at the base rate and 6 μsec at 1.6 million messages per second.
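
Percentile latencies like the six-nines figure above are computed by rank-ordering a large set of latency samples. A minimal nearest-rank sketch in Python, with made-up sample data (this is illustrative, not STAC's data or methodology):

```python
import math

def percentile(samples, q):
    """Return the q-th percentile (0 < q <= 100) of latency samples
    using the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(q / 100.0 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Illustrative latencies in microseconds (not STAC measurements).
latencies = [4, 5, 5, 6, 5, 4, 5, 6, 7, 5]
p50 = percentile(latencies, 50)
p99 = percentile(latencies, 99)
print(p50, p99)  # → 5 7
```

In a real six-nines analysis, millions of samples are needed before the 99.9999th percentile is statistically meaningful, which is why STAC runs sustained message rates.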

Mellanox described the components: “ConnectX-4 Lx, rebranded as the HPE Ethernet 640SFP28 Adapter, is Mellanox’s recommended Network Adapter Card for High Frequency Trading, supporting both 10 and 25Gbps Ethernet with the same hardware. ConnectX-4 Lx allows applications to achieve ultra-low latency at all message rates, either through the kernel, using socket based kernel bypass (VMA), or using the Verbs/RDMA APIs. In addition, the ConnectX-4 Lx Network Adapter Card supports highly accurate time synchronization, and high resolution timestamping, which are critical for meeting regulatory requirements.”

Link to STAC report: https://stacresearch.com/HPE170814

The post HPE ‘Trade & Match’ Server Tops STAC N1 Test appeared first on HPCwire.

TYAN Exhibits NVIDIA Tesla V100 GPU Powered Server Platforms

Tue, 10/10/2017 - 07:44

MUNICH, Oct. 10, 2017 — TYAN, a server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, is showcasing its latest GPU-optimized platforms targeting the high performance computing and artificial intelligence sectors at the GPU Technology Conference in Munich, Germany, Oct. 10-12.

“TYAN’s new GPU computing platforms are designed to provide efficient parallel computing for the analytics of vast amounts of data. By incorporating NVIDIA’s latest Tesla V100 GPU accelerators, TYAN provides our customers with the power to accelerate both high performance and cognitive computing workloads,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit.

TYAN’s Thunder HX FT77D-B7109 is a 4U server with two Intel Xeon Scalable Processors and support for up to 8 NVIDIA Tesla V100 GPUs and 24 DIMM slots. The FT77D-B7109 takes advantage of the new Intel Xeon Scalable Processors to increase total expansion capability, allowing a ninth PCIe x16 slot to be installed next to the 8 GPU cards. The design is ideal for PCIe bifurcation card deployment or high-speed networking such as 100 Gigabit EDR InfiniBand, 100 Gigabit Ethernet, or 100 Gigabit Intel Omni-Path fabric. The platform specializes in massively parallel workloads including scientific computing, genetic sequencing, oil & gas discovery, large-scale facial recognition, and cryptography.

The Thunder GA88-B5631 is a 1U server with a single Intel Xeon Scalable Processor. With support for up to 4 NVIDIA Tesla V100 GPUs, the GA88-B5631 is among the highest-density GPU servers on the market. With 4 GPU cards deployed, the platform still offers a Full Height/Half Length PCIe x16 slot to accommodate a networking adapter at speeds up to 100Gb/s, such as EDR InfiniBand or 100 Gigabit Ethernet. The platform is designed to support many of today’s emerging cognitive computing workloads such as machine learning and artificial intelligence.

TYAN GPU Computing Platforms:

  • 4U/8-GPU Thunder HX FT77D-B7109: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to 8 NVIDIA Tesla GPUs, 24 DDR4 DIMM slots, and 14 2.5” hot-swap SATA 6Gb/s devices
  • 4U/5-GPU Thunder HX FT48T-B7105: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to 5 NVIDIA Tesla GPUs, 12 DDR4 DIMM slots, and 4 3.5” hot-swap SATA 6Gb/s devices
  • 4U/4-GPU Thunder HX FT48B-B7100: 4U dual-socket Intel Xeon Scalable Processor-based platform with support for up to 4 NVIDIA Tesla GPUs, 12 DDR4 DIMM slots, and 10 2.5” hot-swap SATA 6Gb/s devices
  • 1U/4-GPU Thunder HX GA88-B5631: 1U single-socket Intel Xeon Scalable Processor-based platform with support for up to 4 NVIDIA Tesla GPUs, 12 DDR4 DIMM slots, and 2 2.5” hot-swap SATA 6Gb/s devices


About TYAN

TYAN, a server brand of MiTAC Computing Technology Corporation under the MiTAC Group (TSE:3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, System Integrators and Resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly-integrated, and reliable products such as server appliances and solutions for high-performance computing, and server/workstation platforms used in markets such as CAD, DCC, E&P and HPC.

Source: TYAN


One Stop Systems Exhibits Two GPU Accelerators with NVIDIA Tesla V100 GPUs

Tue, 10/10/2017 - 07:37

MUNICH, Oct. 10, 2017 — One Stop Systems (OSS), the leading provider of high performance computing accelerators for a multitude of HPC applications, today will exhibit two of the most powerful GPU accelerators for data scientists and deep learning researchers, the CA16010 and SCA8000. Both expand the performance of typical GPU-accelerated compute nodes to new limits. The CA16010 high-density compute accelerator (HDCA) platform delivers 16 PCIe NVIDIA Tesla V100 GPUs providing over 1.7 petaflops of tensor operations in a single node for maximum performance in the highest density per rack. Using the OSS GPUltima rack-level solution with 128 PCIe Tesla V100 GPUs, the CA16010 nodes combine for over 14 petaflops of compute capability using NVIDIA GPU Boost.

The SCA8000 platform packs eight powerful NVIDIA Tesla V100 SXM2 GPUs connected via NVIDIA NVLink in a single GPU expansion accelerator. Each Tesla V100 SXM2 GPU provides 300GB/s bidirectional interconnect bandwidth for the most performance-hungry, peer-to-peer applications in data centers today. With up to four PCI-SIG PCIe Cable 3.0 compliant links to the host server up to 100m away, the SCA8000 supports a flexible upgrade path for new and existing datacenters with the power of NVLink without upgrading server infrastructure.  With advanced, independent IPMI system monitoring and full featured SNMP interface not available in any other GPU accelerator with NVLink, the SCA8000 fits seamlessly into any size datacenter.

Each Tesla V100 SXM2 provides 16GB of CoWoS HBM2 Stacked Memory and a staggering 125 TeraFLOPS mixed-precision deep learning performance, 15.7 TeraFLOPS single-precision performance and 7.8 TeraFLOPS double-precision performance with NVIDIA GPU Boost technology. This performance is made possible by the 5,120 CUDA cores and 640 Tensor Cores in the Tesla V100.
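
Peak aggregate throughput scales linearly with GPU count, so the node- and rack-level figures above can be sanity-checked with simple arithmetic. The sketch below assumes NVIDIA's published 112 TFLOPS tensor rating for the PCIe Tesla V100 (the SXM2 part is rated at 125 TFLOPS, as noted above); it is illustrative arithmetic, not an OSS measurement:

```python
# Assumed per-GPU peak tensor rating (TFLOPS) for the PCIe Tesla V100.
PCIE_V100_TENSOR_TFLOPS = 112

def aggregate_petaflops(num_gpus, tflops_per_gpu):
    """Peak aggregate tensor throughput in petaflops."""
    return num_gpus * tflops_per_gpu / 1000.0

ca16010 = aggregate_petaflops(16, PCIE_V100_TENSOR_TFLOPS)    # single node
gpultima = aggregate_petaflops(128, PCIE_V100_TENSOR_TFLOPS)  # full rack
print(ca16010, gpultima)  # → 1.792 14.336
```

These peaks line up with the quoted "over 1.7 petaflops" per node and "over 14 petaflops" per rack; sustained application throughput will of course be lower.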

Visitors to GTC Europe in Munich, Germany, can view the CA16010 and SCA8000 for Tesla V100 GPUs in One Stop Systems’ booth #E05.

“By combining the CA16010 for NVIDIA Tesla V100 PCIe and the SCA8000 for NVIDIA Tesla V100 SXM2 with advanced IPMI, OSS continues its leadership in providing the densest and most cost-effective multi-petaflop solutions using the latest GPU technologies,” said Steve Cooper, OSS CEO. “Supporting both PCIe and SXM2, the OSS compute accelerators show tremendous performance gains in many applications such as training deep neural networks, oil and gas exploration, financial simulations, and medical imaging. As GPU technology continues to improve, OSS products are immediately able to accommodate the newest and most powerful GPUs.”

“NVIDIA GPU computing is helping researchers and engineers take on some of the world’s hardest challenges,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. “One Stop Systems’ customers can now tap into the power of our Volta architecture to accelerate their deep learning and high performance computing workloads.”

Customers can order the CA16010 today for immediate delivery starting at $49,000, while orders for the SCA8000 will ship in December. Highly trained OSS sales engineers are available to assist customers in choosing the right system for their application and requirements.

About One Stop Systems

One Stop Systems designs and manufactures computing appliances and flash storage arrays for high performance computing (HPC) applications such as deep learning, oil and gas exploration, financial trading, defense and any other applications that require the fastest and most efficient data processing. By utilizing the power of the latest GPUs, compute accelerators and flash storage technologies, our systems stay on the cutting edge of the latest technologies. Our equipment provides the utmost flexibility for every environment, from the datacenter to the warfighter, with more density than other solutions. We have a reputation for innovation, using the very latest technology, simulation and design equipment to operate with the highest efficiency.

Source: One Stop Systems


Telco Systems, NXP and Arm Introduce New uCPE Offering

Tue, 10/10/2017 - 07:26

MANSFIELD, Mass. and AUSTIN, TX, October 10, 2017 — Telco Systems, a provider of SDN/NFV, CE 2.0, MPLS and IP solutions, together with NXP Semiconductors (NASDAQ:NXPI), a worldwide leader in advanced secure connectivity solutions, today announced the industry’s first Arm-based uCPE solution available in the market. Developed in close collaboration with Arm, the solution combines a rich uCPE feature set with a multicore communications platform, the LS2088A, enabling a performance, power and cost point not available with existing architectures.

This advanced solution fulfills a market demand for multi-technology, multi-vendor uCPE white box solutions that give telecom and managed service providers more options to choose the best technology for their operational environment and business targets.

By using NFVTime as the common uCPE NFVi OS software for Arm-based and other popular white box devices, service providers are now able to introduce this Arm uCPE without complicating the operation processes or compromising functionality and service capabilities at the MANO integration layer.

“Our new uCPE offering provides additional options for our customers to deploy NFV services and to address their specific operational requirements and business goals,” explained Raanan Tzemach, Vice President of Product Management and Marketing at Telco Systems. “We are proud to lead market innovations by working with strong market players like NXP and Arm.”

Telco Systems’ NFVTime is an open uCPE software suite that includes a hardware-agnostic NFVi OS and uCPE MANO solution. NFVTime is service-ready with out-of-the-box support for SD-WAN, managed router, managed security, and other VNFs, which can be added remotely at any time.

The advanced uCPE white box offering is based on the Layerscape LS2088A processor with eight 64-bit Arm Cortex-A72 Cores. The processor cores in combination with integrated hardware acceleration for cryptographic processing, virtual forwarding and traffic management provide performance to support multi-gigabit routing and network services. Like all NXP Layerscape processors, the LS2088A includes Trust Architecture technology, which provides a secure hardware root of trust to ensure the integrity of operating software and network communications.

“NXP is pleased to enable the market, and Telco Systems was the right partner to help expand the variety of uCPEs available and to highlight the functional advantages of our Layerscape platform,” said Noy Kucuk, vice president of product marketing for NXP. “This advanced uCPE will enable service providers to securely deploy multiple VNFs with high performance in multi-vendor environments.”

“A commercially deployable Arm-based uCPE solution from Telco Systems and NXP highlights the scalability and performance advantages of the Arm architecture and will accelerate our growing NFV ecosystem,” said Drew Henry, senior vice president, Infrastructure Business Unit, Arm. “Collaborating with these two networking leaders further expands the breadth of efficient and flexible Arm-based uCPE platforms for operators and service providers.”

At the SDN World Congress in The Hague, Netherlands, on October 9-13, Arm will demonstrate this joint uCPE offering at Booth B27. Telco Systems will also demonstrate the joint offering at Booth C9, as will NXP at Booth B37.

About Telco Systems

Telco Systems delivers a portfolio of Carrier Ethernet and MPLS-based demarcation, aggregation, NFV and vCPE solutions, enabling service providers to create intelligent, service-assured, CE 2.0-compliant networks for mobile backhaul, business services and cloud networking. Telco Systems’ end-to-end Ethernet, SDN/NFV-ready product portfolio delivers significant advantages to service providers, utilities and city carriers competing in a rapidly evolving telecommunications market. Telco Systems is a wholly owned subsidiary of BATM Advanced Communications (LSE: BVC).

About NXP Semiconductors

NXP Semiconductors N.V. (NASDAQ: NXPI) enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in the secure connected vehicle, end-to-end security & privacy and smart connected solutions markets. Built on more than 60 years of combined experience and expertise, the company has 31,000 employees in more than 33 countries and posted revenue of $9.5 billion in 2016. Find out more at www.nxp.com.

Source: Telco Systems


Sandia Computing Researcher Wins DOE Early Career Research Program Award

Mon, 10/09/2017 - 14:25

ALBUQUERQUE, N.M., Oct. 9, 2017 — Sandia National Laboratories researcher Tim Wildey has received a 2017 Early Career Research Program award from the Department of Energy’s Office of Science.

Wildey is Sandia’s first winner of the Advanced Scientific Computing Research branch of the prestigious program, said manager Daniel Turner.

The national award, now in its eighth year, provides researchers a grant of $500,000 yearly for five years. Its intent is “to identify and provide support to those researchers early in their careers who have the potential to develop new scientific ideas, promote them and convince their peers to pursue them as new directions,” according to the Office of Science.

Wildey’s proposed research seeks to develop data-informed multiscale modeling and simulation that will be mathematically consistent and more robust than current practices.

His research over the past few years has focused on developing mathematical and computational frameworks that quantify the amount of uncertainty present in a problem. That uncertainty is then included in his mission-related modeling and simulation.

Wildey anticipates that using complex multiphysics applications to inform high-consequence decisions will require moving beyond “forward simulation” — the practice of assuming all of a model’s input parameters are known and then using the model to make predictions about objects of interest.

“Moving beyond forward simulation means that we no longer assume that we precisely know these model inputs and we instead seek to infer information about them from experimental data,” he said.
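
The inverse step Wildey describes, inferring model inputs from experimental data, can be sketched in miniature: run a forward model for many candidate parameters and keep the one that best reproduces the observations. The toy forward model, data, and search range below are all invented for illustration; real multiphysics inversion uses far more sophisticated machinery:

```python
# Toy forward model: y = k * x, where k is the unknown "model input".
def forward(k, xs):
    return [k * x for x in xs]

# Synthetic "experimental" data, generated with a true k near 2.0 plus noise.
xs = [1.0, 2.0, 3.0, 4.0]
data = [2.1, 3.9, 6.2, 7.8]

def misfit(k):
    """Sum-of-squares mismatch between model predictions and data."""
    return sum((m - d) ** 2 for m, d in zip(forward(k, xs), data))

# Inverse step: scan candidate parameters and keep the best fit.
candidates = [i / 100.0 for i in range(100, 301)]  # k from 1.00 to 3.00
k_best = min(candidates, key=misfit)
print(k_best)  # → 1.99
```

The recovered parameter (1.99 here, the least-squares optimum for this data) then carries quantified uncertainty into downstream predictions, which is the essence of the data-informed modeling the award supports.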

“Many problems in materials science, subsurface flow and mechanics and magnetohydrodynamics are best described by multiphysics, which involves multiple physical models or multiple physical phenomena, and multiscale models,” he said. “These problems are challenging to simulate because they incorporate detailed physical interactions across a wide range of length and time scales. This research will pursue mathematically rigorous and computationally efficient approaches for predicting the properties and behavior of realistic, complex multiphysics applications.”

His proposed integration of advances in numerical discretization, uncertainty quantification, data assimilation and model adaptation should achieve models that can better predict outcomes.

“This project integrates some of that foundational work with ideas we’ve been exploring at Sandia to benefit a wide range of mission applications that support the science and national security missions of Advanced Scientific Computer Research and the DOE,” Wildey said.

Wildey joined Sandia in January 2011 after a postdoctoral fellowship at the University of Texas at Austin, receiving a Master of Science and doctorate at Colorado State University and a bachelor’s degree at Michigan State University, all in mathematics.

DOE Early Career grants are available in the program areas of Advanced Scientific Computing Research, Biological and Environmental Research, Basic Energy Sciences, Fusion Energy Sciences, High Energy Physics and Nuclear Physics.

About Sandia National Laboratories

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories


Argonne Training Program: Leaning into the Supercomputing Learning Curve

Mon, 10/09/2017 - 14:20

Oct. 9, 2017 — What would you do with a supercomputer that is at least 50 times faster than today’s fastest machines? For scientists and engineers, the emerging age of exascale computing opens a universe of possibilities to simulate experiments and analyze reams of data — potentially enabling, for example, models of atomic structures that lead to cures for disease.

But first, scientists need to learn how to seize this opportunity, which is the mission of the Argonne Training Program on Extreme-Scale Computing (ATPESC). The training is part of the Exascale Computing Project, a collaborative effort of the U.S. Department of Energy’s (DOE) Office of Science and its National Nuclear Security Administration.

Starting in late July, 70 participants — graduate students, computational scientists, and postdoctoral and early-career researchers — gathered at the Q Center in St. Charles, Illinois, for the program’s fifth annual training session. This two-week course is designed to teach scientists key skills and tools and the most effective ways to use leading-edge supercomputers to further their research aims.

Recently, 70 scientists — graduate students, computational scientists, and postdoctoral and early-career researchers — attended the fifth annual Argonne Training Program on Extreme-Scale Computing (ATPESC) in St. Charles, Illinois. Over two weeks, they learned how to seize opportunities offered by the world’s fastest supercomputers. (Image by Argonne National Laboratory.)

This year’s ATPESC agenda once again was packed with technical lectures, hands-on exercises and dinner talks.

“Supercomputers are extremely powerful research tools for a wide range of science domains,” said ATPESC program director Marta García, a computational scientist at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility at the department’s Argonne National Laboratory.

“But using them efficiently requires a unique skill set. With ATPESC, we aim to touch on all of the key skills and approaches a researcher needs to take advantage of the world’s most powerful computing systems.”

To address all angles of high-performance computing, the training focuses on programming methodologies that are effective across a variety of supercomputers — and that are expected to apply to exascale systems. Renowned scientists, high-performance computing experts and other leaders in the field served as lecturers and guided the hands-on sessions.

This year, experts covered:

  • Hardware architectures
  • Programming models and languages
  • Data-intensive computing, input/output (I/O) and machine learning
  • Numerical algorithms and software for extreme-scale science
  • Performance tools and debuggers
  • Software productivity
  • Visualization and data analysis

In addition, attendees tapped hundreds of thousands of cores of computing power on some of today’s most powerful supercomputing resources, including the ALCF’s Mira, Cetus, Vesta, Cooley and Theta systems; the Oak Ridge Leadership Computing Facility’s Titan system; and the National Energy Research Scientific Computing Center’s Cori and Edison systems – all DOE Office of Science User Facilities.

“I was looking at how best to optimize what I’m currently using on these new architectures and also figure out where things are going,” said Justin Walker, a Ph.D. student in the University of Wisconsin-Madison’s Physics Department. “ATPESC delivers on instructing us on a lot of things.”

Shikhar Kumar, Ph.D. candidate in nuclear science and engineering at the Massachusetts Institute of Technology, elaborates: “On the issue of I/O, data processing, data visualization and performance tools, there isn’t a single option that is regarded as the ‘industry standard.’ Instead, we learned about many of the alternatives, which encourages learning high-performance computing from the ground up.”

“You can’t get this material out of a textbook,” said Eric Nielsen, a research scientist at NASA’s Langley Research Center. Added Johann Dahm of IBM Research, “I haven’t had this material presented to me in this sort of way ever.”

Jonathan Hoy, a Ph.D. student at the University of Southern California, pointed to the larger, “ripple effect” role of this type of gathering: “It is good to have all these people sit down together. In a way, we’re setting standards here.”

Lisa Goodenough, a postdoctoral researcher in high energy physics at Argonne, said: “The theme has been about barriers coming down.” Goodenough referred to both barriers to entry and training barriers hindering scientists from realizing scientific objectives.

“The program was of huge benefit for my postdoctoral researcher,” said Roseanna Zia, assistant professor of chemical engineering at Stanford University. “Without the financial assistance, it would have been out of my reach,” she said, highlighting the covered tuition fees, domestic airfare, meals and lodging.

Now anyone can learn from the program’s broad curriculum online, underscoring program organizers’ efforts to extend its reach beyond the classroom. Slides and videos of the lectures captured at ATPESC 2017, delivered by some of the world’s foremost experts in extreme-scale computing, are available at http://extremecomputingtraining.anl.gov/2017-slides and http://extremecomputingtraining.anl.gov/2017-videos, respectively.

For more information on ATPESC, including on applying for selection to attend next year’s program, visit http://extremecomputingtraining.anl.gov.

This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications and hardware technology, to support the nation’s exascale computing imperative.

Established by Congress in 2000, the National Nuclear Security Administration (NNSA) is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science. NNSA maintains and enhances the safety, security, and effectiveness of the U.S. nuclear weapons stockpile without nuclear explosive testing; works to reduce the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the U.S. and abroad. Visit nnsa.energy.gov for more information.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: Andrea Manning, Argonne National Laboratory


Berkeley Lab Researchers Lead Development of Workflow to Predict Ground Movement

Mon, 10/09/2017 - 14:07

Oct. 9, 2017 — With emerging exascale supercomputers, researchers will soon be able to accurately simulate the ground motions of regional earthquakes quickly and in unprecedented detail, as well as predict how these movements will impact energy infrastructure—from the electric grid to local power plants—and scientific research facilities.

Currently, an interdisciplinary team of researchers from the Department of Energy’s (DOE’s) Lawrence Berkeley (Berkeley Lab) and Lawrence Livermore (LLNL) national laboratories, as well as the University of California at Davis, is building the first-ever end-to-end simulation code to precisely capture the geology and physics of regional earthquakes, and how the shaking impacts buildings. This work is part of the DOE’s Exascale Computing Project (ECP), which aims to maximize the benefits of exascale — future supercomputers that will be 50 times faster than our nation’s most powerful system today — for U.S. economic competitiveness, national security and scientific discovery.

Transforming hazard into risk: Researchers at Berkeley Lab, LLNL and UC Davis are utilizing ground motion estimates from a regional-scale geophysics model to drive infrastructure assessments. (Image Courtesy of David McCallen)

“Due to computing limitations, current geophysics simulations at the regional level typically resolve ground motions at 1-2 hertz (vibrations per second). Ultimately, we’d like to have motion estimates on the order of 5-10 hertz to accurately capture the dynamic response for a wide range of infrastructure,” says David McCallen, who leads an ECP-supported effort called High Performance, Multidisciplinary Simulations for Regional Scale Seismic Hazard and Risk Assessments. He’s also a guest scientist in Berkeley Lab’s Earth and Environmental Sciences Area.

One of the most important variables that affect earthquake damage to buildings is seismic wave frequency, or the rate at which an earthquake wave repeats each second. Buildings and structures respond differently to certain frequencies. Large structures like skyscrapers, bridges, and highway overpasses are sensitive to low frequency shaking, whereas smaller structures like homes are more likely to be damaged by high frequency shaking, which ranges from 2 to 10 hertz and above. McCallen notes that simulations of high frequency earthquakes are more computationally demanding and will require exascale computers.

In preparation for exascale, McCallen is working with Hans Johansen, a researcher in Berkeley Lab’s Computational Research Division (CRD), and others to update the existing SW4 code—which simulates seismic wave propagation—to take advantage of the latest supercomputers, like the National Energy Research Scientific Computing Center’s (NERSC’s) Cori system. This manycore system contains 68 processor cores per chip, nearly 10,000 nodes and new types of memory. NERSC is a DOE Office of Science national user facility operated by Berkeley Lab.  The SW4 code was developed by a team of researchers at LLNL, led by Anders Petersson, who is also involved in the exascale effort.

With recent updates to SW4, the collaboration successfully simulated a 6.5 magnitude earthquake on California’s Hayward fault at 3 hertz on NERSC’s Cori supercomputer in about 12 hours with 2,048 Knights Landing nodes. This first-of-a-kind simulation also captured the impact of this ground movement on buildings within a 100-square-kilometer region around the rupture, as well as the geology down to 30 km underground. With future exascale systems, the researchers hope to run the same model at 5-10 hertz resolution in approximately five hours or less.

“Ultimately, we’d like to get to a much larger domain, higher frequency resolution and speed up our simulation time,” says McCallen. “We know that the manner in which a fault ruptures is an important factor in determining how buildings react to the shaking, and because we don’t know how the Hayward fault will rupture or the precise geology of the Bay Area, we need to run many simulations to explore different scenarios. Speeding up our simulations on exascale systems will allow us to do that.”

This work was recently published in the IEEE Computer Society’s Computing in Science & Engineering.

Predicting Earthquakes: Past, Present and Future 

Historically, researchers have taken an empirical approach to estimating ground motions and how the shaking stresses structures. So to predict how an earthquake would affect infrastructure in the San Francisco region, researchers might look at a past event that was about the same size—it might even have happened somewhere else—and use those observations to predict ground motion in San Francisco. Then they’d select some parameters from those simulations based on empirical analysis and surmise how various buildings may be affected.

“It is no surprise that there are certain instances where this method doesn’t work so well,” says McCallen. “Every single site is different—the geologic makeup may vary, faults may be oriented differently and so on. So our approach is to apply geophysical research to supercomputer simulations and accurately model the underlying physics of these processes.”

To achieve this, the tool under development by the project team employs a discretization technique that divides the Earth into billions of zones. Each zone is characterized with a set of geologic properties. Then, simulations calculate the surface motion for each zone. With an accurate understanding of surface motion in a given zone, researchers also get more precise estimates for how a building will be affected by shaking.

The team’s most recent simulations at NERSC divided a 100km x 100km x 30km region into 60 billion zones. By simulating 30km beneath the rupture site, the team can capture how surface-layer geology affects ground movements and buildings.

Eventually, the researchers would like to tune their models to do hazard assessments. Pacific Gas & Electric (PG&E) is beginning to deploy a very dense array of accelerometers in its SmartMeters—a system of sensors that collects electricity and natural gas usage data from homes and businesses to help customers understand and reduce their energy use—and McCallen is talking with the utility company about potentially using that data to get a more accurate understanding of how the ground actually moves in different geologic regions.

“The San Francisco Bay is an extremely hazardous area from a seismic standpoint and the Hayward fault is probably one of the most potentially risky faults in the country,” says McCallen. “We chose to model this area because there is a lot of information about the geology here, so our models are reasonably well-constrained by real data. And, if we can accurately measure the risk and hazards in the Bay Area, it’ll have a big impact.”

He notes that the current seismic hazard assessment for Northern California identifies the Hayward Fault as the most likely to rupture with a magnitude 6.7 or greater event before 2044. Simulations of ground motions from large earthquakes—magnitude 7.0 or more—require domains on the order of 100-500 km and resolutions on the order of one to five meters, which translates into hundreds of billions of grid points. As the researchers aim to model even higher-frequency motions, between 5 and 10 hertz, they will need denser computational grids and finer time steps, which will drive up computational demands. The only way to ultimately achieve these simulations is to exploit exascale computing, McCallen says.
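As a back-of-the-envelope check on those numbers (the grid spacings below are chosen for illustration, not taken from the project's actual mesh parameters):

```python
# Rough estimate of why regional fault simulations need hundreds of
# billions of grid points. Spacings here are illustrative only.

def grid_points(length_km, width_km, depth_km, spacing_m):
    """Number of points in a uniform grid over the domain at the given spacing."""
    per_km = 1000 / spacing_m
    return int(length_km * per_km) * int(width_km * per_km) * int(depth_km * per_km)

# A 100 km x 100 km x 30 km Hayward-fault domain at 10-meter spacing:
print(f"{grid_points(100, 100, 30, 10):.1e}")   # 3.0e+11 -- hundreds of billions

# Halving the spacing (roughly what doubling the resolved frequency requires)
# multiplies the point count by 8 -- and the stable time step shrinks too:
print(f"{grid_points(100, 100, 30, 5):.1e}")    # 2.4e+12
```

The cubic growth in points, compounded by the shrinking time step, is why pushing from 3 hertz toward 5-10 hertz demands exascale machines.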

In addition to leading an ECP project, McCallen is also a Berkeley Lab research affiliate and Associate Vice President at the University of California Office of the President.

This work was done with support from the Exascale Computing Project, a collaborative effort between the DOE Office of Science and the National Nuclear Security Administration. NERSC is a DOE Office of Science User Facility.

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is a U.S. Department of Energy Office of Science User Facility that serves as the primary high-performance computing center for scientific research sponsored by the Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a DOE national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science. »Learn more about computing sciences at Berkeley Lab.

Source: NERSC

The post Berkeley Lab Researchers Lead Development of Workflow to Predict Ground Movement appeared first on HPCwire.

TACC Develops Multi-Factor Authentication Solution, Makes it Open-Source

Mon, 10/09/2017 - 13:55

Oct. 9, 2017 — How does a supercomputing center enable tens of thousands of researchers to securely access its high-performance computing systems while still allowing ease of use? And how can it be done affordably?

These are questions that the Texas Advanced Computing Center (TACC) asked itself when seeking to upgrade its system security. The center had previously relied on usernames and passwords for access, but with a growing focus on hosting confidential health data, and the increased compliance standards that entails, it realized it needed a more rigorous solution.

Multi-factor authentication (MFA) provides an extra layer of cybersecurity protection against brute-force attacks.

In 2015, TACC began looking for an appropriate multi-factor authentication (MFA) solution that would provide an extra layer of protection against brute-force attacks. What they quickly discovered was that the available commercial solutions would cost them tens to hundreds of thousands of dollars per year to provide to their large community of users.

Moreover, most MFA systems lacked the flexibility needed to allow diverse researchers to access TACC systems in a variety of ways — from the command line, through science gateways (which perform computations without requiring researchers to directly access HPC systems), and using automated workflows.

So, they did what any group of computing experts and software developers would do: they built their own MFA system, which they call OpenMFA.

They didn’t start from scratch. Instead they scoured the pool of state-of-the-art open source tools available. Among them was LinOTP, a one-time password platform developed and maintained by KeyIdentity GmbH, a German software company. To this, they added the standard networking protocols RADIUS and HTTPS, and glued it all together using custom pluggable authentication modules (PAM) that they developed in-house.

This approach integrates cleanly with common data transfer protocols, adds flexibility to the system (in part, so they could create whitelists that include the IP addresses that should be exempted), and supports opt-in or mandatory deployments. Researchers can use the TACC-developed OpenMFA system in three ways: via a software token, an SMS, or a low-cost hardware token.
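For illustration, the check behind a software token looks roughly like the standard time-based one-time-password (TOTP) algorithm sketched below. This is a generic RFC 6238-style sketch, not OpenMFA's actual code; in OpenMFA, validation happens server-side in LinOTP, reached from the login host through a PAM module over RADIUS or HTTPS.

```python
# Generic RFC 6238-style TOTP sketch (illustrative; NOT OpenMFA's code).
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Derive the current one-time code from a shared secret."""
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted, window=1, now=None):
    """Accept codes from adjacent 30 s steps to tolerate clock skew."""
    t = time.time() if now is None else now
    return any(hmac.compare_digest(totp(secret, now=t + d * 30), submitted)
               for d in range(-window, window + 1))
```

SMS and hardware tokens follow the same shared-secret idea; the skew window is the kind of policy knob a server like LinOTP lets administrators tune.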

Over three months, they transitioned 10,000 researchers to OpenMFA, while giving them the opportunity to test the new system at their leisure. In October 2016, use of the MFA became mandatory for TACC users.

Since that time, OpenMFA has recorded more than half a million logins and counting. TACC has also open-sourced the tool for free, public use. The Extreme Science and Engineering Discovery Environment (XSEDE) is considering OpenMFA for its large user base, and many other universities and research centers have expressed interest in using the tool.

TACC developed OpenMFA to suit the center’s needs and to save money. But in the end, the tool will also help many other taxpayer-funded institutions improve their security while maintaining research productivity. This allows funding to flow into other efforts, increasing the amount of science that can be accomplished while making that research more secure.

TACC staff will present the details of OpenMFA’s development at this year’s Internet2 Technology Exchange and at The International Conference for High Performance Computing, Networking, Storage and Analysis (SC17).

To learn more about OpenMFA or explore the code, visit the GitHub repository.

Source: TACC

The post TACC Develops Multi-Factor Authentication Solution, Makes it Open-Source appeared first on HPCwire.

U.S. DOE Awards Multi-Institution Grants for Nuclear Physics Computing

Fri, 10/06/2017 - 13:30

Oct. 6, 2017 — Three nuclear physics research projects involving Michigan State University researchers have won grants from the U.S. Department of Energy Office of Science (DOE-SC).

The five-year awards are part of the Scientific Discovery through Advanced Computing (SciDAC4) program supported by the DOE-SC Offices of Nuclear Physics and Advanced Scientific Computing Research. Each of these SciDAC projects is a collaboration between scientists and computational experts at multiple national laboratories and universities, who combine their talents in science and computing to address a selected set of high-priority problems at the leading edge of research in nuclear physics, using the very powerful Leadership Class High Performance Computing (HPC) facilities available now and anticipated in the near future.

Two of the grants will support forefront research in nuclear science at the Facility for Rare Isotope Beams (FRIB), under construction now at MSU.

“High-performance computing is a third leg of nuclear science, after experimentation and analytical theory,” said Witek Nazarewicz, FRIB chief scientist and an investigator on one of the awarded projects, NUCLEI. “We are extremely grateful for the awards, and eager to put them to use to develop better models of atomic nuclei.”

The goal of the NUCLEI (NUclear Computational Low-Energy Initiative) project is to use advanced applied mathematics, computer science, and physics to accurately describe the atomic nucleus in its entirety, one of FRIB’s key research areas. The principal investigator on the NUCLEI project is Joseph Carlson from Los Alamos National Laboratory.

“The NUCLEI project builds on recent successes in large-scale computations of atomic nuclei to provide results critical to nuclear science and astrophysics, and to nuclear applications in energy and national security,” said Nazarewicz.

Other MSU researchers involved in the NUCLEI project are Scott Bogner and Heiko Hergert of the FRIB Laboratory; and H. Metin Aktulga, of the Department of Computer Science and Engineering.

A second grant will fund the Towards Exascale Astrophysics of Mergers and Supernovae (TEAMS) research project. Improved simulations of supernovae and neutron star mergers carried out by TEAMS researchers will advance our understanding of the creation of the heaviest elements, like gold, silver, and many others, a second key research area of FRIB. How the heavy elements formed following the Big Bang is a longstanding mystery in astrophysics and cosmology. The principal investigator on the TEAMS project is William Raphael Hix from Oak Ridge National Laboratory.

“Simulations of heavy-element production have long been a grand challenge problem in astrophysics, always pushing the extremes of what was computationally feasible,” said TEAMS investigator Sean Couch, of the Departments of Physics and Astronomy and Computational Mathematics, Science and Engineering (CMSE), who also has a joint appointment in FRIB. “Now, with next-generation supercomputers combined with the experimental research that will be enabled by FRIB, we are likely to see a revolution in our understanding of these processes.”

Other MSU researchers involved in the TEAMS project are Luke Roberts from the FRIB Laboratory, and Andrew Christlieb, of CMSE.

The third grant was awarded to the Computing the Properties of Matter with Leadership Computing Resources project. Alexei Bazavov, of CMSE and the Department of Physics and Astronomy, is an investigator on the project from MSU. The principal investigator is Robert Edwards from the Thomas Jefferson National Accelerator Facility.

MSU is establishing FRIB as a new scientific user facility for the Office of Nuclear Physics in the U.S. Department of Energy Office of Science. Under construction on campus and operated by MSU, FRIB will enable scientists to make discoveries about the properties of rare isotopes in order to better understand the physics of nuclei, nuclear astrophysics, fundamental interactions, and applications for society, including in medicine, homeland security and industry.

Source: Michigan State University

The post U.S. DOE Awards Multi-Institution Grants for Nuclear Physics Computing appeared first on HPCwire.

OpenACC Offers Free Online Course, Starting Oct. 19

Fri, 10/06/2017 - 13:18

Oct. 6, 2017 — Join OpenACC for the free Introduction to OpenACC course to learn how to start accelerating your code with OpenACC. The course is comprised of three instructor-led classes that include interactive lectures with dedicated Q&A sections and hands-on exercises. The course covers analyzing performance, parallelizing and optimizing your code.

While this course does not assume any previous experience with OpenACC directives or GPU programming in general, programming experience with C, C++, or Fortran is desirable.

This course is the joint effort of OpenACC.org, the Pittsburgh Supercomputing Center, and NVIDIA.

•    October 19, 2017 – OpenACC Basics
•    October 26, 2017 – GPU Programming with OpenACC
•    November 2, 2017 – Optimizing and Best Practices for OpenACC

Register Now

Source: OpenACC

The post OpenACC Offers Free Online Course, Starting Oct. 19 appeared first on HPCwire.

Researchers Eye Papermaking Improvements Through HPC

Fri, 10/06/2017 - 10:26

Oct. 6, 2017 — With the naked eye, a roll of paper towels doesn’t seem too complicated. But look closely enough, and you’ll see it’s made up of layers of fibers with thousands of intricate structures and contact points. These fluffy fibers are squeezed together before they are printed in patterns, and this resulting texture is key to the paper’s performance.

For a large paper product manufacturer like Procter and Gamble, which regularly uses high-performance computing to develop its products, simulating this behavior – the way in which those paper fibers contact each other – is complicated and expensive. The preprocessing stage of generating the necessary computational geometry and simulation mesh can be a major bottleneck in product design, wasting time, money and energy.

Lawrence Livermore National Lab researchers are developing a parallel program called p-fiber to help Procter and Gamble simulate the way in which paper fibers contact each other.

To help the company speed up the development process, Lawrence Livermore National Laboratory (LLNL) researcher Will Elmer and his team of programmers focused their efforts on developing a parallel program called p-fiber. Written in Python, the program prepares the fiber geometry and meshing input needed for simulating thousands of fibers, relying on a meshing tool called Cubit, created at Sandia National Laboratories, to generate the mesh for each individual fiber. The p-fiber code has been tested on parallel machines developed at Livermore for mission-critical applications. P-fiber prepares the input for ParaDyn, the parallel-computing version of DYNA3D, a code for modeling and predicting thermomechanical behavior.
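The pattern is easy to sketch. The code below is a hypothetical stand-in (every function name and parameter is invented for illustration; the real p-fiber drives Sandia's Cubit mesher and writes ParaDyn input), showing why the prep stage parallelizes so well: each fiber's geometry and mesh are generated independently.

```python
# Hypothetical sketch of the p-fiber pattern. All names are invented for
# illustration; the real code calls out to Cubit and emits ParaDyn input.
from multiprocessing import Pool
import random

def mesh_one_fiber(fiber_id):
    """Stand-in for generating one fiber's geometry and brick mesh."""
    rng = random.Random(fiber_id)          # reproducible per-fiber geometry
    n_bricks = rng.randint(1000, 3000)     # up to ~3,000 bricks per fiber
    return fiber_id, n_bricks

def prepare_fibers(n_fibers, workers=8):
    """Mesh all fibers in parallel; return the total element count."""
    with Pool(workers) as pool:
        counts = pool.map(mesh_one_fiber, range(n_fibers))
    return sum(n for _, n in counts)

if __name__ == "__main__":
    # The article's largest run meshed 15,000 fibers.
    print(f"{prepare_fibers(15000):.2e} finite elements")
```

Because no fiber depends on any other, the work scales across however many cores are available, which is what collapses the meshing step from hours to minutes.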

The ensuing research, performed for an HPC4Manufacturing (HPC4Mfg) project with the papermaking giant, resulted in the largest multi-scale model of paper products to date, simulating thousands of fibers in ParaDyn with resolution down to the micron scale.

“The problem is larger than the industry is comfortable with, but we have machines with 300,000 cores, so it’s small in comparison to some of the things we run,” Elmer said. “We found that you can save on design cycle time. Instead of having to wait almost a day (19 hours), you can do the mesh generation step in five minutes. You can then run through many different designs quicker.”

Elmer said each individual paper fiber might consist of as many as 3,000 “bricks” or finite elements (components that calculate stress and strain), meaning millions of finite elements had to be accounted for. Elmer and his team generated up to 20 million finite elements, and modeled the most paper fibers in a simulation to date — 15,000. More importantly, they verified that the p-fiber code could scale up to a supercomputer, and, using Lab HPC systems Vulcan and Syrah, they found they could study the scaling behavior of the ParaDyn simulations up to 225 times faster than meshing the fibers one after another.
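The quoted design-cycle saving is simple arithmetic:

```python
# Checking the quoted saving: serial mesh generation took almost a day,
# the parallel prep step about five minutes.
serial_minutes = 19 * 60        # 19 hours
parallel_minutes = 5
speedup = serial_minutes / parallel_minutes
print(speedup)                  # 228.0, in line with "up to 225 times faster"
```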

“Procter and Gamble hasn’t been able to get this kind of simulation, with this many fibers, to run on their system,” Elmer said. “We were able to show there’s a path to get to a representational size of a paper product. Questions like, ‘How much force do you need to tear it?’ can be answered on a supercomputer of the size we’re using. That was a valuable finding, so maybe years down the road, they could be doing these simulations for this kind of work in-house. That’s what HPC4Manufacturing is all about, showing these power players what can be possible in five years.”

Procter and Gamble began using the code on the Lab’s supercomputers in June, providing them with a way to use ParaDyn remotely and to determine whether it would improve their design process. The company has the option to license p-fiber.

LLNL benefited from the collaboration as well by learning how ParaDyn scales with massive contact problems, Elmer said, and by creating benchmarks to help improve the code. The researchers located and fixed bugs in the code and doubled the speed of ParaDyn on Vulcan, which could help with mission-critical applications.

“There’s still a lot of work to be done, but I’m happy with the way this worked,” Elmer said. “I think it’s gotten a lot of visibility and it’s a good example of working with a sophisticated user like Procter and Gamble. It filled out the portfolio of HPC4Manufacturing at that high level. It was a good way to get the Lab engaged in U.S. manufacturing competitiveness.”

Summer intern Avtaar Mahe (who researched gaps in the ParaDyn code and scaled up the studies to run on Vulcan) and LLNL researcher Peggy Li (who worked on parallelization and programming) contributed to the effort.

The research was supported by the HPC4Manufacturing program, managed by the Department of Energy’s Advanced Manufacturing Office within the Office of Energy Efficiency and Renewable Energy. The program, led by LLNL, aims to unite the world-class computing resources and expertise of Department of Energy national laboratories with U.S. manufacturers to deliver solutions that could revolutionize manufacturing.

For more information, see HPC4Mfg.

Source: Lawrence Livermore National Laboratory

The post Researchers Eye Papermaking Improvements Through HPC appeared first on HPCwire.

University of South Dakota Gets HPC Cluster for Research Computing

Fri, 10/06/2017 - 10:13

Oct. 6, 2017 — The University of South Dakota has acquired a high performance computing cluster as a campus-wide resource available to faculty, staff and students as well as researchers across the state.

Made possible by a $504,000 grant from the National Science Foundation and a $200,000 grant from the South Dakota Board of Regents, the new cluster is named Lawrence after Nobel Laureate and University of South Dakota alumnus E. O. Lawrence.

“Lawrence makes it possible for us to accelerate scientific progress while reducing the time to discovery,” said Doug Jennewein, the University’s Director of Research Computing. “University researchers will be able to achieve scientific results not previously possible, and our students and faculty will become more engaged in computationally assisted research.”

The Lawrence supercomputer will support 12 STEM projects across several departments at three institutions in North and South Dakota. The system supports multidisciplinary research and research training in scientific domains such as high energy physics, the human brain, renewable energy, and materials science.

“Our new cluster will help researchers answer big questions such as the nature of dark matter, and the links between the human brain and human behavior,” Jennewein said.

Built by Advanced Clustering Technologies, the Lawrence Cluster has a peak theoretical performance of more than 60 TFLOPS. The system architecture includes general-purpose compute nodes, large-memory nodes, GPU-accelerated nodes, interactive visualization nodes, and a high-speed InfiniBand interconnect.

Source: Advanced Clustering Technologies

The post University of South Dakota Gets HPC Cluster for Research Computing appeared first on HPCwire.

Former NERSC PI Wins Nobel Prize in Chemistry for Cryo-EM

Thu, 10/05/2017 - 16:53

Oct. 5 — The 2017 Nobel Prize in Chemistry was awarded October 4 to three scientists central to the development of cryo-electron microscopy (Cryo-EM), a technique used to reveal the structures of large organic molecules at high resolution.

Jacques Dubochet of the University of Lausanne, Switzerland, Joachim Frank of Columbia University and Richard Henderson of the MRC Laboratory of Molecular Biology were honored for their role in “developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution.” Cryo-electron microscopy allows researchers to freeze biomolecules mid-movement and visualize the molecular processes that carry out essential functions inside cells.

Joachim Frank, a NERSC PI from 2004-2006, pioneered the computational methods needed to reconstruct the 3D shape of biomolecules from thousands of 2D images obtained using Cryo-EM.

One of the laureates—Frank—pioneered the computational methods needed to reconstruct the 3D shape of biomolecules from thousands of 2D images obtained from Cryo-EM, methods employed today by most structural biologists who use electron microscopy. The SPIDER (System for Processing Image Data from Electron Microscopy and Related fields) software package Frank helped develop was for many years one of the most widely used for carrying out single-particle image 3D reconstruction of macromolecular assemblies from Cryo-EM image data.
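A toy example captures the statistical idea that makes this work (the numbers below are assumed for illustration; real packages such as SPIDER also solve the far harder problems of alignment, orientation determination and 3D back-projection):

```python
# Toy illustration of why single-particle Cryo-EM averages thousands of
# images: each micrograph is dominated by noise, but the noise in an
# average of N aligned copies shrinks roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros((32, 32))
signal[12:20, 12:20] = 1.0                    # stand-in "molecule"

def noisy_copy():
    """One simulated micrograph: the signal buried in heavy Gaussian noise."""
    return signal + rng.normal(0.0, 2.0, signal.shape)

errors = {}
for n in (1, 100, 10000):
    average = np.mean([noisy_copy() for _ in range(n)], axis=0)
    errors[n] = float(np.abs(average - signal).mean())
    print(f"N={n:>5}: mean abs error {errors[n]:.3f}")
```

The error falls roughly as 1/sqrt(N), which is why reconstructions improve as more particle images are collected and aligned.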

Left to right: Jacques Dubochet, Joachim Frank, Richard Henderson

Some of Frank’s work involved running computations at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC). From 2004-2006, he was the principal investigator on a NERSC project, “Correlative Cryo-EM and Molecular Dynamics Simulations of Ribosomal Structure.” Using NERSC’s Seaborg system, Frank and his team completed several molecular dynamics simulations of the GTPase-associated center in the 50S ribosome subunit and transfer RNA (tRNA) and compared the molecular dynamics snapshots with molecular structures computationally reconstructed from experimental electron microscopy images using the SPIDER software package.

“Using the single-particle reconstruction technique, Cryo-EM maps have provided valuable visualizations of ribosome binding with numerous factors,” Frank noted in his 2006 ERCAP request to NERSC. “Thus, so far, Cryo-EM has been the only means to visualize the ribosome in its functional states.”

Frank’s work at NERSC and the computational methodology supporting it were highlighted in two 2007 publications: a paper in the Proceedings of the National Academy of Sciences co-authored with Wen Li, also a NERSC user, and another in the Journal of Structural Biology that included two co-authors from Berkeley Lab’s Computational Research Division (CRD): Chao Yang and Esmond Ng. CRD’s contributions to this research included improving algorithms and their parallel implementations to determine the orientation of the 2D images and continually refine the construction of the 3D model from these images.

NERSC is a U.S. Department of Energy Office of Science User Facility.

For more on Berkeley Lab’s contributions to Cryo-EM:

Berkeley Lab Tech Brings Nobel-Winning Cryo-EM into Sharper Focus

Cryo-EM’s Renaissance

Crystallization in Silico


Source: NERSC

The post Former NERSC PI Wins Nobel Prize in Chemistry for Cryo-EM appeared first on HPCwire.

Clemson Palmetto Cluster on Track for $1 Million Upgrade

Thu, 10/05/2017 - 16:36

CLEMSON, Oct. 5 — A $1-million upgrade to Clemson University’s acclaimed supercomputer, the Palmetto Cluster, is expected to help researchers quicken the pace of scientific discovery and technological innovation in a broad range of fields, from developing new medicines to creating advanced materials.

New hardware that could be in place as early as spring will add even more power to the Palmetto Cluster. Even before the upgrade, it ranked eighth in the nation among academic supercomputers, according to the twice-yearly TOP500 list of the world’s most powerful computers.

Amy Apon, center, stands with her team next to the Palmetto Cluster supercomputer.

The upgrade is funded by the National Science Foundation and will support more than 370 faculty members and students who are working on a broad range of research topics with more than $14 million in funding.

Supercomputers are increasingly important because they allow researchers to solve complex, mathematically intensive problems in a relatively short period of time.

“A problem that might take 10 days on a conventional computer could take a few minutes on a supercomputer,” said Amy Apon, chair of the Division of Computer Science in the School of Computing.

Apon played a leading role in securing the funding for the Palmetto Cluster’s upgrade, serving as principal investigator on the grant. It’s the second $1 million grant in five years that Apon has helped Clemson land through the National Science Foundation’s Major Research Instrumentation Program.

“Clemson has a history of providing high-performance computing resources,” she said. “We’ve been doing this now for more than a decade, but these resources are expensive and have to be renewed every three to five years. It is time for a refresh.

“This most recent grant provides Clemson the resources it needs to continue offering the high-performance computing on which our researchers have come to depend.”

Researchers plan to use the upgrade to strengthen relationships with industry and broaden collaborations in the state, including one with Claflin University. The upgrade will also enhance the development of new curricula in computational and data-enabled science and engineering.

Russell Kaurloto, vice president for information technology and chief information officer at Clemson, said the upgrade positions the university to have major impact.

“This highly collaborative project is a key enabler for Clemson to rise to the next level of research productivity,” he said. “It underscores the university’s continued commitment to supporting advanced computing research and education.”

Eileen Kraemer, the C. Tycho Howle Director of the School of Computing, said the grant will bring leading-edge technologies to Clemson.

“This upgrade will help enhance Clemson’s reputation as a national leader in high-performance computing,” she said. “Dr. Apon and her multidisciplinary team are helping keep the university’s computing resources on the cutting edge.”

Co-principal investigators on the grant are Dvora Perahia, Mashrur Chowdhury, Kuang-Ching “K.C.” Wang and Jill Gemmill. Jim Pepin, Clemson’s chief technical officer, is also expected to play a crucial role in implementation of the award.

Perahia, a professor of physical chemistry and physics, is leading the material research effort.

“The new computational power provided by this grant will enable state-of-the-art, innovative research that pushes the boundaries of current limits,” she said. “Further, it will enable the development of innovative computer science algorithms and technologies that facilitate the processing of large data.

“All this puts Clemson researchers at the forefront of their respective fields. The research accelerated by the award will impact the advancement of smart materials, medicine, the environment and the energy economy.”

Cynthia Young, the founding dean of the College of Science, concurred.

“From complex genetics research to big data transfer, the Palmetto Cluster plays a crucial role in our ability to lead world-class research at Clemson,” Young said. “This support from NSF helps advance CU science forward by enhancing excellence in discovery and innovation.”

Anand Gramopadhye, dean of the College of Engineering, Computing and Applied Sciences, said the upgrades will support critical research.

“This grant provides Clemson with the opportunity to acquire major instrumentation that supports the research and research education goals of the university and its collaborators,” he said. “I congratulate Dr. Apon and her team on the grant.”

Source: Clemson University

The post Clemson Palmetto Cluster on Track for $1 Million Upgrade appeared first on HPCwire.

Delays, Smoke, Records & Markets – A Candid Conversation with Cray CEO Peter Ungaro

Thu, 10/05/2017 - 16:01

Earlier this month, Tom Tabor, publisher of HPCwire, and I had a very personal conversation with Cray CEO Peter Ungaro. Cray has been on something of a Cinderella journey the last three years. It closed out 2015 with record revenue of $725 million and went into 2016 with even higher expectations. However, market sluggishness, chip delays and a mid-year smoke event led Cray to lower 2016 guidance from around $825 million to the $650 million range. Cray still managed to end the year on a high note; it recorded the largest revenue quarter in the company’s history and its second-highest revenue year, just shy of $630 million. This year the company is projecting revenue in the $400 million range and has not yet provided guidance beyond 2017.

Pete Ungaro

During our 45-minute interview, Pete, who has been at the helm of the Seattle-based supercomputing company for the last 12 years, provides a candid perspective on the factors behind the market slowdown, whether Cray can sustain this flat period and why he believes ultimately in the long-term health of the industry that is Cray’s mainstay: high-end supercomputing. We also explore the wave of activities and trends that are creating market momentum, including exascale, commercial HPC, AI and deep learning advances.

When we spoke with Pete, the details of the agreement with Seagate to transfer the ClusterStor assets to Cray were still being ironed out so we did not get into the specifics of that arrangement. That deal has since closed and Cray Storage VP John Howarth briefed us on the details and what it means for Cray – you can read that interview here.

In the interval since we spoke, Cray also announced three big customer wins: a Cray XC40 install at JAIST, an XC50-AC supercomputer at Japan’s Yokohama City University and a $48 million contract from the Korea Institute of Science and Technology Information for a CS500 cluster supercomputer. These wins reflect the success Cray is having in the Asian market, which Pete mentions below.

Tiffany Trader: Cray has had quite a journey over the last 12-18 months – there was the smoke event a year ago at one of the Chippewa Falls factories, the fall-off in revenue for 2016 and the overall supercomputing market slowdown — can you give us your perspective on the events and market pressures of the last year, year-and-a-half and their impact on Cray?

Pete Ungaro: It’s been an absolutely crazy last couple of years. In 2016, we had just come off a run where we had tripled our revenues over the preceding four years, so we were on an amazing growth path in our market and had really built the company up to be profitable on a sustained basis pretty much consistently every year. Then we got a curve ball thrown at us. As you mentioned, we had the “smoke-event” at one of our two manufacturing facilities in Chippewa Falls, which ended up taking our production down for about three or four weeks and caused smoke damage to a few customer machines, and we had to work hard to get through that. Then about the same time, around the summer of 2016, our primary market, which is the high-end of the supercomputing market, was really starting to go into a significant slowdown. We spent a lot of time looking at what was happening in our market, as we always do, and based on the data we’ve been able to gather, the high-end of the supercomputing market, which we count as systems that are 5 to 10 million dollars and higher, dropped over 25 percent in 2016 and has continued to drop in 2017. At the same time, what’s been really interesting as you look through that market data is that even though the market is down and our revenues are down, when we look at how Cray did in the market during that time period, we’re doing really well. And that has continued in 2017. Our win rates are up; we’re gaining share in the high end, but the market overall is down, so it’s muting our success even though we’re still doing well competitively in the market overall. I think in ‘16, our revenue at the high end of supercomputing was down roughly 16 percent while the market was down by over 25 percent, so we are gaining share in a declining market. So, it’s been an interesting ride for us over the last couple years.

Tom Tabor: So Pete, can Cray sustain this flat period of high-end sales, and if so, how?

Pete: I’m confident we can. The company is in a very solid financial position — we have a strong balance sheet, and we have no debt on the books. So, we can definitely sustain our business during this slower period. We brought our expenses down a little bit to account for that this year, but we’re also continuing to invest in strategic growth areas, such as artificial intelligence and deep learning. We just did a transaction with Seagate on the ClusterStor line, so we’re still out there fighting and I think doing a great job from a market competitiveness position.

Tiffany: We have heard you emphasize that there is a softer than normal market – what are the factors that you are looking at there? On the Q2 earnings call, you said: “This is the longest and deepest market downturn I have seen in my 25 years of working in this industry.” At the same time, we’re seeing really bullish market forecasts from HPC analysts — Hyperion Research is projecting $30 billion and Intersect360 $44 billion for 2021. Admittedly those figures are for HPC at large, but how do you square this?

Pete: For the overall HPC market, it depends on who you talk to, how they’re measuring it, and what they’re counting. For the core of our business, we focus not just on the supercomputing market, but the really high-end of it. Hyperion, for instance, they count $500,000 machines as supercomputers. While we have systems in the $500,000 range, the vast majority of our revenues are systems that are in the $5 million to $10 million plus range and that’s really our core market. If you look at the worldwide market, a big part of where we don’t operate, by our choice, is in China, and China is by far the fastest growing supercomputing market. I totally agree with Hyperion, as well as Intersect360, that the market overall is going to grow. You see most of that growth in smaller systems, $250,000 to a million or two, and that’s where a huge part of the growth is, but that’s not our strongest part of the market. We have systems that play in that space, but it’s not driving a large amount of revenue for us, so it’s really that subset of the market, our addressable market, that is slow right now. But I do agree that, overall, the HPC market and especially the high-end supercomputing space is going to rebound and grow over time.

There’s all this excitement with the exascale programs going on around the world, and some of those are locked into different countries and different technologies and such that could limit where things are, but mostly those dollars aren’t really flowing into 2017 and 2018. They are out well beyond that, in the 2020-2023 timeframe in funding, so that’s another reason I feel the market is going to come back, but it’s going to be a slower return as it comes back and will take time.

Tom: At a high level we understand the challenges, but can you chop it up for us? Tell us why it’s soft and where it’s soft. Is it soft across the board – the federal initiatives, the industrial sector – are they redirecting spending to AI and data analytics? Is cloud encroaching on traditional HPC?

Pete:  If you look at the high-end of the supercomputing market, the slowdown is pretty broad across the market, but for different reasons in different segments, so it really comes down to what’s going on with customer behavior. Generally, we’re seeing customers holding on to their systems longer, really across the board, and this is partly due to the rate of change in processor performance slowing down recently. And so, we’re seeing customers not upgrading systems quite as often as they have been. On the government side of our business, there’s clearly a lot of budget uncertainties in the shorter term and some leadership changes at federal governments around the world – the new administration in the U.S.; Brexit in the UK is a huge change and the UK is one of the top four or five countries from a revenue perspective. There is also a lot of uncertainty in the EU right now. We’re seeing an across-the-board slowdown in spending on the government side as well as on the higher education side of our business. On the commercial side, our biggest market has been the energy segment with oil and gas companies. Clearly, with the price of oil being down, CAPEX spending has slowed as a part of that.

But one of the things you mentioned that I do think is really important is, “is there something else picking up these cycles that high-end machines used to do?” and I don’t believe that there is. I am very certain that cloud is not very competitive with high-end systems today based on architectures that you can get out in the cloud today and the fundamental technology factors around our tightly-integrated systems versus what you can go out and get on the cloud. We also don’t see AI and analytics technologies taking away from what’s going on in high-end supercomputing, in fact we see them as complementary and growing over time.

From a country perspective, we break up our market into four regions: the Americas, EMEA (Europe, Middle East and Africa), we have Japan as a separate region, and then Asia Pacific as a fourth region. We’ve seen the biggest weakness in Americas and EMEA, and we’ve seen that Asia-Pacific is doing great right now, but it’s a smaller segment and a smaller market, but it’s been really strong. It’s different by country, by industry and by customer, so it’s not one thing we can point to, it’s a wide set of different factors all hitting at the same time that is causing the current market weakness.

Tom: When do you think the market will return to a level point where it’s normal in terms of HPC deployment and HPC acquisitions, not just in EMEA and the US but globally?

Pete: Tom, if you can tell me when the market will come back, I will pay you. I will owe you big! It’s tough to know exactly when the market is going to return, but I am confident that the market will return. I come from a sales background and I spend a lot of time with customers around the world. One of the biggest topics of conversation is exactly this, and I remain convinced that because of the unique things that these high-end supercomputers can do, customers are going to continue to rely on high-end supercomputers and that is what is going to make the market return. A number of factors have all hit at one time, and that’s creating a larger than normal dip in the market. The high-end market has always had its ups and downs, but as I have said, it’s the longest and the deepest that I’ve seen in 25 years. I am confident that the market is going to return over time, and because we continue to invest in R&D at Cray and we’re not slowing down on that one iota, I think we’re going to emerge from this stronger than ever.

Tiffany: Fair enough on the “crystal ball” question. What about your revenue in terms of traditional HPC versus the commercial sector? How is that shaking out?

Pete: The vast majority of our revenue is at the high end of the supercomputing market — the systems that are $5 million or $10 million, or even more than that. We also have a growing presence in the storage market, and with our ClusterStor transaction, we expect it to grow even more. The numbers you were quoting earlier — the market growing to $40 billion — include storage, software and services. In terms of analytics and AI, machine learning, deep learning, those are smaller segments for us.

Commercial has been a big growth area for us. We’ve gone from virtually zero commercial sales when I first came to Cray to being more than 15 percent of our revenue in 2015. It’s taken a step back from that over the past couple of years because of the energy market, but we expect that to grow again over time. Ultimately, our goal is to drive about a third of our revenue from commercial customers and two-thirds from government and higher education customers.

Overall, we have always talked about trying to grow twice as fast as the broader HPC market. So as that market has grown over the past few years, outside of the last couple, we’ve had really nice growth.  We actually grew three times the market rate and we were taking share along the way. I feel like we have the same opportunity. Our biggest investment that we make by far is in R&D, and one of the things I’m most proud of is that we’ve been able to continue to grow our R&D investment over the last several years.  And that has given us a really differentiated product offering and an exciting roadmap. We have a few cards up our sleeve here that we’re going to bring out over the next few years that we are very excited about.

Tiffany: You mentioned the different geographical regions that are important to Cray – we see China, Japan, Europe, possibly India, all pursuing indigenous HPC programs. How will this impact Cray’s sales and what’s the strategy here? As a tag to that, with the larger ClusterStor line, could you potentially provide storage to these systems, for example?

Pete: We do have a large presence in many of these regions, and we’ve been working hard to expand our presence in those markets, especially in R&D. We have R&D centers in a number of countries now, and as we bring on the ClusterStor line we are going to open a facility in India as part of that transaction. So, we continue to grow R&D not just within the US, but also outside of the US. I mentioned earlier that China is not in play for us, but we’re definitely going to try and play a part in the other programs, whether it’s from a systems perspective or maybe a technology perspective.  Our larger storage line gives us another way to be part of these programs as well. I’m hopeful, but we will have to see how that plays out.

Tiffany: Over the last few years we’ve seen a broadening away from pure-play supercomputing at Cray – with the Appro acquisition, the Urika line, supercomputing-as-a-service plays with the Markley Group and Deloitte – and the recent addition of the ClusterStor line – can you connect these dots for us? Is this all part of your strategy to deal with a constricting and volatile traditional HPC market?

Pete: That’s a great question. Cray isn’t a broad, general IT provider like HPE, IBM or Dell. We’re a focused player in a very specific part of the market. We have had a single view of this market for quite a few years now, about what’s happening and where Cray wants to play, and we believe that traditional supercomputing, the modeling and simulation that’s been done on supercomputers since the start of Cray over 40 years ago, is converging with the high-end big data market that spans analytics and artificial intelligence (including deep learning and machine learning). Our strategy is to help customers compute, store and manage their data, and also run analytics and data models, all within this framework. So, we don’t see it as broadening away from supercomputing; we see all of these elements converging, and virtually everything we’ve done over the past ten years – and you’ve mentioned a few of them – is along that strategy line. So, for us it’s really about how we leverage our technology and knowhow in supercomputing to expand our reach and build a stronger business model for the company, while staying focused on the part of the market where we think we bring the most value.

A great example is the growth of data. Everybody agrees data is continuing to grow like crazy; it’s not going to stop. It’s forcing our customers to confront how they deal with increasing amounts of data, how do they run larger and larger models because now they have to put more data into their models and try to make them more real.  All these things start to play to our strengths of building larger systems, building systems that scale, and getting performance at scale. You mentioned the ClusterStor line, that’s key to our storage and data management part of our strategy; the Appro acquisition helps us on the computing side of our strategy; Urika helps us in analytics. Partnerships like Deloitte and Markley Group, they help us to reach new customers. So, I don’t see it as a broadening away from pure-play supercomputing, I actually see it as this is how the market is evolving over the next few years.

Tom: So it’s not that you’re broadening your portfolio to reach outside of the HPC segment, but you’re expanding your portfolio to sell more to the HPC segment, whether it be industry or public.

Pete: Exactly. I think that in five or ten years, the high end of the supercomputing market and the high end of the big data analytics, or machine learning/deep learning, market is going to be one market. It’s going to look and feel like one big market, and that’s where we want to play. That’s the market we want to be a part of, so the question is what we need to put in place across the company, from technologies to solutions to go-to-market partnerships, to enable us to play in that part of the market.

Tiffany: In your last quarterly call you were asked about the health of the pipelines both for the traditional and commercial sectors. We spoke about this a bit, but could you add anything about how you’d characterize the nature and the size of the deals in those pipelines?

Pete: We don’t disclose specific details of our pipeline and our backlog, but as we said on our last earnings call, the pipeline is up slightly compared to last quarter. More customers are now beginning to move their plans forward. So, with a lot of the uncertainty we talked about that has been impacting the market, customers are starting to firm up their plans a little bit more and put more plans into place – the exascale plans are a good example. As a result, we’re starting to see more real, definable opportunities. The movement of opportunities through the pipeline is still very slow, though, so we believe these are some good early signs of a potential market return, but the exact timing is hard to predict.

Tiffany: I’ll push a little bit more; how is your visibility for 2018 and how much of that forecasting process is tied to reading the political tea leaves? I’m thinking specifically about the three budget proposals that are currently on the table for DOE and NSF funding, there’s consensus on boosting exascale funding which has to bode well for tier 1 sales. But lab funding, NSF, NIH are in jeopardy. There were layoffs at Oak Ridge. So how are these unknowns impacting visibility and when will you be able to offer 2018 guidance?

Pete: There’s definitely limited visibility right now in the market because of these unknowns. We continue to believe that the market will rebound, but it’s hard to say exactly when, or what the slope of that rebound is going to be. We haven’t put out any projections for 2018 yet, but most of what’s been happening on the political funding side is really targeted at systems that are out a few years. So, in terms of reading the political tea leaves you mentioned, the impact to our 2018 revenues isn’t as large as you might think, because a lot of those decisions have already been made and are already well under way. A supercomputer is not something you decide to buy on Monday and have delivered a month from now — there’s a pretty long procurement process. I would say that overall – not just in the US but around the world – the political climate and appetite for high-end computing is very positive, and exascale is a great example of that. There are a number of countries that are running exascale programs. All of these things, taken individually and put together, give us more confidence that the market is going to rebound over time. It’s too early for us to call 2018, but our competitiveness hasn’t changed, so I think when the market returns, we are going to return in a big way.

Tiffany: Do you find encouragement in the numbers for exascale that are in the current budget proposals?

Pete: Yes, I think it’s great. I’m a huge supporter of the program, not just from the perspective that we’re a provider for the program, but in terms of what advanced science and engineering does for competitiveness in the US and around the world.  I’m very pleased to see so many programs that are going after the very high-end of supercomputing because I think it’s going to make a huge difference to industrial competitiveness for a lot of countries.

Tiffany: One challenge I don’t think Cray has that some of its competitors do is the trend for hyperscalers (and their imitators) to get their servers from ODMs in Taiwan and China. This is hurting traditional server vendors; there was a prominent example earlier this year with HPE losing business from a major hyperscale customer, purported to be Microsoft. The Web-scale giants also have the buying power to secure huge discounts. What’s your perspective on the health of the server business and what does that signify for Cray?

Pete: You’re right in that we don’t really play much in that market. But what is clear to us, and it’s been clear for quite a few years now, is that the market is bifurcating.  The bifurcation is from the general-purpose systems that we’ve had in the past, for instance big SMP machines from Sun, IBM and HPE and the like, to really different kinds of purpose built systems.  One is large scale-out systems and the other is tightly integrated high-performance systems, and clearly the part of the market where cloud is predominantly focused is in the scale-out systems space.  That space has been overhauled – think about what happened with Dell and EMC and every major vendor that’s in that general-purpose market – that’s been a very challenging market. The part of the market that we play in is this tightly integrated high-performance market, whether it be for modeling and simulation or large databases or business intelligence or big data analytics or deep learning. We don’t see as much difference and change in that part of the market. In fact, I would argue over the past 10 years, there’s been a reduction in the number of companies that are building those very tightly integrated systems for that part of the market, which is part of the reason why we’ve had such success overall. I’m very pleased we are focused on that part of the market and not the other part of the market, where I think that public and private cloud models will dominate over time.

Seymour Cray and Cray-1 in 1976

Tom: Cray is an iconic American supercomputer maker, and while the company has undergone some transitions, it is really one of a kind in terms of its history and having supercomputing as its core specialty. Seymour Cray, the father of supercomputing and of course the founder of Cray, is universally admired and respected. I imagine that working in this long shadow is really an honor, but it could also be quite a burden.

Pete: It is an amazing honor and it’s something I think about quite a bit. Seymour Cray was a pioneer, a true visionary. His passion and vision is something I know all our employees, me included, work very hard to fulfill every day. I often think about how cool it is to be working at the company of the guy who practically invented the market. That’s amazing to me. I really believe at our core we are continuing to build and expand on Seymour’s vision today, by helping our customers solve the toughest challenges, scale to performance levels that people didn’t think were possible, and deliver faster competitive results for companies to be leaders in their industry. That is what we’re trying to do today and it’s fun to think about.

Tiffany: And there is the American connotation. Cray is a global enterprise, but Cray’s roots in Minnesota and Wisconsin, that’s undeniably part of the Cray brand. So let’s talk about Cray’s role in US supercomputing leadership. The Cray Titan supercomputer is the fastest US machine; there’s also Cori (NERSC), Trinity (NNSA), and Theta (Argonne). Cray is part of the DOE’s Fast Forward program to advance the technology needed for the United States’ first crop of exascale systems, on track for 2021-2022. How important is Cray to US supercomputing leadership and strategic competitiveness and isn’t it in the US’ interest to have 2-3 strong US-based companies capable of creating leadership-class systems?

Pete: You probably should ask the government that question, but I think it’s clear that we are a big part of supercomputing leadership in the US, and in many other countries around the world. If you look at the list of the largest supercomputers in the world, we are very fortunate to have a large percentage of those systems. I do think it’s important that there isn’t just one supercomputing company – although I will tell you I often dream about that! But ultimately, I don’t believe the high-end supercomputing market is large enough to support too many companies, so it’s a balance of making sure customers have options and companies are able to grow and thrive in the market. I do believe the industry is healthy and can sustain the players that it has. Cray is basically the last pure-play supercomputing company in the industry, and it shows. There are no other companies like us left — we’re the only ones that are completely dedicated to this market.

Tiffany: Turning a little bit, we touched on trends earlier, the convergence of HPC and big data, but we’re also seeing artificial intelligence, machine learning and deep learning. These are really hot topics in HPC, so what are Cray’s views on AI writ large and what is the company doing from a product standpoint to address this market?

Pete: It’s super exciting. Artificial intelligence has the potential to transform a number of industries, and it’s already making an impact in several sectors. I would say virtually every one of our customers around the world is going to explore its potential, across every vertical segment we touch in the market today.

We believe artificial intelligence is really aligned with a number of our core capabilities and strengths because, at its heart, AI is much more of a classical high performance computing problem. It leverages a lot of the capabilities that we build into our systems today, and we have seen really exciting results in scaling up deep learning and machine learning problems on our supercomputers. One of our energy customers, PGS, had a great example of using machine learning within their full-waveform inversion workflow to select a velocity model as part of their seismic simulations. We’ve used deep learning with weather simulation data to do real-time forecasting, what is called “nowcasting,” which is really exciting. And we’ve done a bunch of work with the largest GPU-based supercomputer in the world at CSCS in Switzerland, where we’ve scaled up deep learning models to thousands of GPUs. We have a team at Cray specifically dedicated to this area, so you’re going to see a lot more from us in this space over the next few years.

Tiffany: We also recently saw NERSC and Stanford scale up a deep learning model to ~15 petaflops on the Cori supercomputer.

Pete: NERSC has been a great partner of ours in working to scale up these kinds of models, and in looking at how we take supercomputing technology and apply it to deep learning, machine learning, AI, and big data analytics. They are really pushing the boundaries of that. One of the advantages we have is getting to work with so many smart customers around the world; we really get a lot of insight into what’s going to happen in the market over the next few years.

Tiffany: Do you think that these trends in AI will open up a lot of market opportunities for Cray beyond the scientific sphere?

Pete: We’re hoping. It’s too early to call yet, but we definitely believe that it has a really big potential in terms of what Cray could be, and what Cray could grow into over the next five to ten years.

Tom: One of the things that amazes me about you Pete is how responsive you are to email, text and communications as a whole. I don’t know anyone in our industry, especially someone in your position that is so consistent and quick to respond. Granted a lot of your employees may not, on occasion, appreciate that, but how do you do it?

Pete: I found something that I really love doing, and I’m blessed to have a job and a company where I get to work with a lot of smart people and a lot of smart customers, and that gets me energized every day. It’s what you would naturally do when you enjoy what you do, so I don’t think I do anything special, and our success as a company has been a heck of a lot more about all the employees of Cray and our customers than it has been about me. But I will tell you, I do really love being here and being part of the industry.

Tom: One final question you’re going to have to think very carefully on: how does your daughter feel about having an HPC rock star as a dad?

Pete: (Laughs) My daughter is as embarrassed about her dad as probably most 20-year-old girls are. I may be somewhat of a recognized name in HPC, but I think she has the same viewpoint as many kids: drop me off a block away from my friends so nobody sees me with you!

The post Delays, Smoke, Records & Markets – A Candid Conversation with Cray CEO Peter Ungaro appeared first on HPCwire.

Scientists Enlist Supercomputers, Machine Learning to Automatically Identify Brain Tumors

Thu, 10/05/2017 - 14:48
Oct. 5 — Primary brain tumors encompass a wide range of tumors depending on cell type, aggressiveness, and stage. Quickly and accurately characterizing the tumor is a critical aspect of treatment planning. It is a task currently reserved for trained radiologists, but in the future, computing, and in particular high-performance computing, will play a supportive role. George Biros, professor of mechanical engineering and leader of the ICES Parallel Algorithms for Data Analysis and Simulation Group at The University of Texas at Austin, has worked for nearly a decade to create accurate and efficient computing algorithms that can characterize gliomas, the most common and aggressive type of primary brain tumor.

At the 20th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2017), Biros and collaborators from the University of Pennsylvania (led by Professor Christos Davatzikos), the University of Houston (led by Professor Andreas Mang) and the University of Stuttgart (led by Professor Miriam Mehl) presented results of a new, fully automatic method that combines biophysical models of tumor growth with machine learning algorithms for the analysis of magnetic resonance (MR) imaging data of glioma patients. All the components of the new method were enabled by supercomputers at the Texas Advanced Computing Center (TACC).

The top row shows the initial configuration. The second row shows the same configuration at the final iteration of our coupled tumor inversion and registration scheme. The three images on the bottom show the corresponding hard segmentation. The obtained atlas-based segmentation (middle image) and the ground-truth segmentation for the patient are very similar. Source: TACC

Biros’ team tested their new method in the Multimodal Brain Tumor Segmentation Challenge 2017 (BRaTS’17), an annual competition where research groups from around the world present methods and results for computer-aided identification and classification of brain tumors, as well as different types of cancerous regions, using pre-operative MR scans.

Their system scored in the top 25 percent of the challenge and was near the top for whole-tumor segmentation.

“The competition is related to the characterization of abnormal tissue on patients who suffer from glioma tumors, the most prevalent form of primary brain tumor,” Biros said. “Our goal is to take an image and delineate it automatically and identify different types of abnormal tissue – edema, enhancing tumor (areas with very aggressive tumors), and necrotic tissue. It’s similar to taking a picture of one’s family and doing facial recognition to identify each member, but here you do tissue recognition, and all this has to be done automatically.”

Training And Testing The Prediction Pipeline

For the challenge, Biros and his team of more than a dozen students and researchers were provided in advance with 300 sets of brain images, on which all teams calibrated their methods (what is called “training” in machine learning parlance).

In the final part of the challenge, groups were given data from 140 patients and had to identify the location of tumors and segment them into different tissue types over the course of just two days.

“In that 48-hour window, we needed all the processing power we could get,” Biros explained.

The image processing, analysis and prediction pipeline that Biros and his team used has two main steps: a supervised machine learning step where the computer creates a probability map for the target classes (“whole tumor,” “edema,” “tumor core”); and a second step where they combine these probabilities with a biophysical model that represents how tumors grow in mathematical terms, which imposes limits on the analyses and helps find correlations.
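The idea behind the second step can be sketched in miniature. The following is illustrative only, not the team’s published method: the weighted-geometric-mean fusion rule, the class names, and all numbers are assumptions. A per-voxel probability map from a classifier is combined with a prior derived from a growth model, then renormalized so each voxel’s class probabilities sum to one:

```python
import numpy as np

def fuse(p_ml, p_model, w=0.5):
    """Combine per-voxel class probabilities from a learned classifier
    with a prior from a biophysical growth model, using a weighted
    geometric mean, then renormalize so each voxel sums to 1."""
    fused = (p_ml ** w) * (p_model ** (1.0 - w))
    return fused / fused.sum(axis=-1, keepdims=True)

# Two voxels, three hypothetical classes: [tumor core, edema, healthy]
p_ml = np.array([[0.7, 0.2, 0.1],     # classifier output
                 [0.3, 0.4, 0.3]])
p_model = np.array([[0.5, 0.3, 0.2],  # growth-model prior
                    [0.1, 0.6, 0.3]])
posterior = fuse(p_ml, p_model)
print(posterior.argmax(axis=-1))  # prints [0 1]: core for voxel 0, edema for voxel 1
```

A real pipeline would fuse full 3-D probability volumes and fit model parameters per patient, but the role of the model as a regularizing prior on the classifier output is the same.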

TACC computing resources enabled Biros’ team to use large-scale nearest-neighbor classifiers (a machine learning method). For every voxel, or three-dimensional pixel, in an MR brain image, the system tries to find all the similar voxels in the brains it has already seen to determine whether the area represents tumor or healthy tissue.

With 1.5 million voxels per brain and 300 brains to assess, the computer must compare roughly half a billion training voxels against every new voxel of the 140 unknown brains it analyzes, deciding for each whether the voxel represents tumor or healthy tissue.

“We used fast algorithms and approximations to make this possible, but we still needed supercomputers,” Biros said.
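To make the scale of that search concrete, here is a minimal brute-force k-nearest-neighbor voxel classifier. It is a sketch only, with made-up one-dimensional “intensity” features and labels; the team used fast approximate algorithms rather than this naive loop:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=5):
    """Label each query voxel by majority vote among its k nearest
    training voxels (Euclidean distance). Brute force: every query is
    compared against every training voxel, which is exactly the cost
    that forces approximations at half-a-billion-voxel scale."""
    preds = np.empty(len(query_feats), dtype=train_labels.dtype)
    for i, q in enumerate(query_feats):
        d2 = np.sum((train_feats - q) ** 2, axis=1)  # squared distances
        nearest = np.argsort(d2)[:k]                 # indices of k closest
        votes = np.bincount(train_labels[nearest])   # votes per class label
        preds[i] = np.argmax(votes)
    return preds

# Tiny synthetic stand-in: 1-D "intensity" features; 0 = healthy, 1 = tumor
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0.2, 0.05, (50, 1)),
                   rng.normal(0.8, 0.05, (50, 1))])
labels = np.array([0] * 50 + [1] * 50)
queries = np.array([[0.15], [0.85]])
print(knn_classify(train, labels, queries))  # prints [0 1]
```

At the challenge’s scale, the inner loop runs over hundreds of millions of training voxels per query, which is why fast approximations and supercomputers were needed.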

Each of the several steps in the analysis pipeline used separate TACC computing systems. The nearest neighbor machine learning classification component simultaneously used 60 nodes (each consisting of 68 processors) on Stampede2, TACC’s latest supercomputer and one of the most powerful systems in the world. (Biros was among the first researchers to gain access to the Stampede2 supercomputer in the spring and was able to test and tune his algorithm for the new processors there.) They used Lonestar 5 to run the biophysical models and Maverick to combine the segmentations.

Most teams had to limit the amount of training data they used or apply more simplified classifier algorithms on the whole training set, but priority access to TACC’s ecosystem of supercomputers meant Biros’ team could explore more complex methods.

“George came to us before the BRaTS Challenge and asked if they could get priority access to Stampede2, Lonestar5, and Maverick to ensure that their jobs got through in time to complete the challenge,” said Bill Barth, TACC’s Director of High Performance Computing. “We decided that just increasing their priority probably wouldn’t cut it, so we decided to give them a reservation on each system to cover their needs for the 48 hours of the challenge.”

George Biros, professor of mechanical engineering and leader of the ICES Parallel Algorithms for Data Analysis and Simulation Group at The University of Texas at Austin

As it turned out, Biros and his team were able to run their analysis pipeline on 140 brains in less than four hours and correctly characterized the testing data with nearly 90 percent accuracy, which is comparable to human radiologists.

Their method is fully automatic, Biros said, and needed only a small number of initial algorithmic parameters to assess the image data and classify tumors without any hands-on effort.

Integrating Diverse Research

The team’s scalable, biophysics-based image analysis system was the culmination of 10 years of research into a variety of computational problems, according to Biros.

“In our group and our collaborators’ groups, we have multiple research threads on image analysis, scalable machine learning and numerical algorithms,” he explained. “But this was the first time we put everything together for an application to make our method work for a really challenging problem. It’s not easy, but it’s very fulfilling.”

The BRaTS competition thus represents a turning point in his research, Biros said.

“We have all the tools and basic ideas, now we polish it and see how we can improve it.”

The image segmentation classifier is set to be deployed at the University of Pennsylvania by the end of the year in partnership with his collaborator, Christos Davatzikos, director of the Center for Biomedical Image Computing and Analytics and a professor of Radiology there. It won’t be a substitute for radiologists and surgeons, but it will improve the reproducibility of assessments and potentially speed up diagnoses.

The methods that the team developed go beyond brain tumor identification. They are applicable to many problems in medicine as well as in physics, including semiconductor design and plasma dynamics.

Said Biros: “Having access to TACC supercomputers makes our life infinitely easier, makes us more productive and is a real advantage.”

Biros’ research is jointly funded by the National Institutes of Health, the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research. Stampede2 is supported by the National Science Foundation (Award #1540931).

Source: Aaron Dubrow, TACC

The post Scientists Enlist Supercomputers, Machine Learning to Automatically Identify Brain Tumors appeared first on HPCwire.

Intel Debuts Programmable Acceleration Card

Thu, 10/05/2017 - 14:00

With a view toward supporting complex, data-intensive applications, such as AI inference, video streaming analytics, database acceleration and genomics, Intel is making a push on the FPGA accessibility front. The company announced this week the Intel Programmable Acceleration Card (PAC), a hardware/software platform designed to enable customized FPGA-based acceleration of networking, storage and computing workloads.

The Intel PAC, which combines platforms, software stack and ecosystem solutions, abstracts the complexities of FPGA implementation, Intel said, to “enable architects and developers to quickly develop and deploy power-efficient acceleration of a variety of applications and workloads.”

Intel PAC, powered by the Intel Arria 10 GX FPGA, has three major elements:

  • Intel-qualified FPGA acceleration platforms that operate with Intel Xeon CPUs
  • An “acceleration stack” for Intel Xeon CPU with FPGAs that provide industry standard frameworks, interfaces and optimized libraries
  • An ecosystem of market-specific solutions

“The goal here is to supercharge the data center by delivering higher performance at lower TCO,” Bernhard Friebe, senior director of FPGA software solutions at Intel’s Programmable Solutions Group, told (HPCwire‘s sister publication) EnterpriseTech. “The key for us is to position FPGAs as a versatile accelerator because we can do so many things (with it) and the data center is inherently a workload-agnostic and very dynamic environment. You want to use your compute resources to full utilization, and the FPGA can do that.”

The half-length, half-height PCIe card plugs into standard Intel Xeon processor-based servers. The platform approach enables OEMs to offer Intel Xeon processor-based server acceleration solutions with their unique value add.

“With this collaboration,” said Brian Payne, Dell EMC’s vice president, product management and marketing, Server Solutions Division, “Dell EMC and Intel are combining a reliable platform with an emerging software ecosystem that provides a new technology capability for customers to unlock their business potential.”

Friebe said the acceleration stack provides an easy way to drop-in accelerator functions developed by the ecosystem for specific workloads. He said an emerging partner ecosystem is developing market-specific solutions in the areas of artificial intelligence, real-time big data analytics, video processing, financial acceleration, genomics and cybersecurity.

The acceleration stack provides a common developer interface for both application and accelerator function developers, and includes drivers, application programming interfaces (APIs), and an FPGA interface manager. It includes acceleration libraries and development tools aimed at saving developer time and enables code re-use across multiple Intel FPGA platforms.

According to Intel, early adopters include:

The Broad Institute, Cambridge, Mass., which has seen a 50X speedup on the Pairwise HMM algorithm, previously a bottleneck in the genomic sequencing process, compared to using Xeon E5 processors alone.

Swarm64, a relational database accelerator firm, which has seen a 10X boost in real-time data analytics for relational databases, which the company says projects to a roughly 40 percent TCO savings over three years.

Attala, a cloud infrastructure specialist, which reports 57-72 percent lower latency for its NVMe-over-fabric storage platform.
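The Broad Institute's Pair-HMM bottleneck reflects an O(n × m) dynamic program evaluated over every read-haplotype pair, which is exactly the kind of dense, regular arithmetic FPGAs accelerate well. A simplified forward-recursion sketch follows; the transition and error probabilities here are illustrative assumptions, not GATK's or Intel's actual parameters.

```python
def pair_hmm_likelihood(read, hap, err=0.01, gap_open=0.1, gap_ext=0.1):
    """Forward algorithm of a simplified pair-HMM: probability of the
    read given the haplotype, summed over all alignments.
    Runs in O(len(read) * len(hap)) time -- the cost FPGAs attack."""
    n, m = len(read), len(hap)
    tMM, tMI, tIM = 1 - 2 * gap_open, gap_open, 1 - gap_ext
    # M/I/D[i][j]: probability of aligning read[:i] to hap[:j],
    # ending in a match, insertion, or deletion state.
    M = [[0.0] * (m + 1) for _ in range(n + 1)]
    I = [[0.0] * (m + 1) for _ in range(n + 1)]
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 1.0  # start state
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                e = 1 - err if read[i - 1] == hap[j - 1] else err / 3
                M[i][j] = e * (tMM * M[i - 1][j - 1]
                               + tIM * (I[i - 1][j - 1] + D[i - 1][j - 1]))
            if i > 0:  # insertion emits a read base uniformly (0.25)
                I[i][j] = 0.25 * (tMI * M[i - 1][j] + gap_ext * I[i - 1][j])
            if j > 0:  # deletion consumes a haplotype base, emits nothing
                D[i][j] = tMI * M[i][j - 1] + gap_ext * D[i][j - 1]
    return M[n][m] + I[n][m] + D[n][m]

# A matching haplotype should always be more likely than a mismatched one
print(pair_hmm_likelihood("ACGT", "ACGT")
      > pair_hmm_likelihood("ACGT", "AGGT"))  # True
```

Because sequencing runs evaluate this recurrence across millions of read-haplotype pairs, offloading the inner dynamic program is what yields the reported 50X gain.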

The PAC product is sampling now and is expected to hit general availability in the first half of 2018.

The post Intel Debuts Programmable Acceleration Card appeared first on HPCwire.

Strategic Value of High-Performance Computing for Research and Innovation

Thu, 10/05/2017 - 11:10

High-performance computing (HPC) is an enormous part of the present and future of engineering simulation. HPC enables engineers and researchers to gain high-fidelity insight into product behavior — insight that cannot be obtained without detailed simulation models. When applied to design exploration, HPC can lead to robust product performance and reduced warranty and maintenance costs. Wim Slagter, Director HPC & Cloud Alliances at ANSYS, gives his perspective on HPC adoption challenges and threats on a European scale, and explains how these can be addressed by strategic partnerships.

The post Strategic Value of High-Performance Computing for Research and Innovation appeared first on HPCwire.

The Wharton School, University of Pennsylvania, Extends HPC Environment with Unicloud

Thu, 10/05/2017 - 11:08

From regressions and optimizations, to Natural Language Processing and Machine Learning, The Wharton School’s High Performance Computing (HPC) and Big Data Analytics workloads cover a broad spectrum of uses. As researchers’ needs grew beyond desktops and departmental servers, Wharton’s HPC cluster had to be extended – economically and without impacting its large roster of users.

Wharton avoided substantial costs while transparently tripling its core count through the strategic use of Amazon Web Services’ cloud-based resources, accessed dynamically via Unicloud.

The post The Wharton School, University of Pennsylvania, Extends HPC Environment with Unicloud appeared first on HPCwire.

Call Now Open for ISC 2018 Research Papers

Thu, 10/05/2017 - 10:33

FRANKFURT, Oct. 5, 2017 — Submissions are now open for the ISC 2018 conference research paper sessions, which aim to provide first-class opportunities for engineers and scientists in academia, industry, and government to present and discuss issues, trends, and results that will shape the future of high performance computing (HPC). Submissions will be accepted through December 22, 2017.

The research paper sessions will be held from Monday, June 25, through Wednesday, June 27, 2018. ISC High Performance (ISC 2018) will be hosted in Frankfurt, Germany.

Submitted research papers will be double-blind peer-reviewed by at least four reviewers. This year’s Research Papers Committee is headed by Prof. David Keyes, KAUST, as Chair, with Dr.-Ing. Carsten Trinitis, TU Munich, as Deputy Chair, Prof. Rio Yokota, Tokyo Institute of Technology, as Proceedings Chair, and Dr. Michèle Weiland, EPCC, as Proceedings Deputy Chair.


All accepted research papers will be published in Springer‘s Lecture Notes in Computer Science (LNCS) series. For the camera-ready version, authors are automatically granted one extra page to incorporate reviewer comments. The publication is free of charge and the published papers can be downloaded from Springer‘s website for a limited time after the conference. The proceedings are indexed in the ISI Conference Proceedings Citation Index – Science (CPCI-S), included in ISI Web of Science, EI Engineering Index (Compendex and Inspec databases), ACM Digital Library, DBLP, Google Scholar, IO-Port, MathSciNet, Scopus and Zentralblatt MATH.

Areas of Interest

The Research Papers Committee encourages the submission of high-quality papers reporting original work in theoretical, experimental, and industrial research and development. The ISC submission process will be divided into seven tracks this year.

Architectures & Networks

•    Future design concepts of HPC systems

•    Multicore / manycore systems

•    Heterogeneous systems

•    Network technology

•    Domain-specific architectures

•    Memory technologies

•    Trends in the HPC chip market

•    Exascale computing

Data, Storage & Visualization

•    From big data to smart data

•    Memory systems for HPC & big data

•    File systems & tape libraries

•    Data-intensive applications

•    Databases

•    Visual analytics

•    In-situ analytics

HPC Applications

•    Highly scalable applications

•    Convergence of simulations & big data

•    Scalability on future architectures

•    Workflow management

•    Coupled simulations

•    Industrial simulations

•    Implementations on SIMT accelerators

HPC Algorithms

•    Innovative algorithms, discrete or continuous

•    Algorithmic-based fault tolerance

•    Communication-reducing algorithms

•    Synchronization-reducing algorithms

•    Time-space tradeoffs in algorithms

Programming Models & Systems Software

•    Parallel programming paradigms

•    Tools and libraries for performance & productivity

•    Job management

•    Monitoring & administration tools

•    Productivity improvement

•    Energy efficiency

Artificial Intelligence & Machine Learning

•    Neural networks & HPC

•    Machine learning & HPC

•    AI and ML-oriented hardware

•    Towards benchmarks in ML

Performance Modeling & Measurement

•    Performance models

•    Performance prediction & engineering

•    Performance measurement

•    Power consumption

NOTE: Submissions on other innovative aspects of high performance computing are also welcome. You will be asked to pick a primary and a secondary track from the seven above for your submission. Please refer to www.isc-hpc.com/research-papers-2018.html for full submission guidelines.


The ISC organizers will again sponsor the call for research papers with the Hans Meuer Award for the most outstanding research paper.

Important Dates

Submission Deadline: Friday, December 22, 2017
Author Rebuttals: Monday, February 19 – Thursday, February 22, 2018
Notification of Acceptance: Monday, March 5, 2018
Camera-Ready Submission: Monday, April 2, 2018
Research Paper Sessions: Monday, June 25 – Wednesday, June 27, 2018
Final Presentation Slides (PDF) Due: Friday, June 29, 2018


If you have any questions or comments, please contact:

Ms. Tanja Gruenter
Conference Program Coordinator

Source: ISC

The post Call Now Open for ISC 2018 Research Papers appeared first on HPCwire.