HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Blue Waters Allocations Awarded to 26 Research Teams

Wed, 03/08/2017 - 07:00

March 8 — Twenty-six research teams at the University of Illinois at Urbana-Champaign have been allocated computation time on the National Center for Supercomputing Applications’ (NCSA) sustained-petascale Blue Waters supercomputer after applying in Fall 2016. These allocations range from 25,000 to 600,000 node-hours of compute time over either six months or one year. The teams’ research pursuits are remarkably diverse, ranging from physics to political science.

Blue Waters is one of the world’s most powerful supercomputers, capable of performing quadrillions of calculations every second and working with quadrillions of bytes of data. Its massive scale and balanced architecture help scientists and engineers—as well as scholars involved in the humanities and social sciences—tackle research challenges that could not be addressed with other computing systems.

Blue Waters provides University faculty and staff a valuable resource to perform groundbreaking work in computational science and further Illinois’ mission to foster discovery and innovation. The system presents a unique opportunity for U of I faculty and researchers: about 2 percent of Blue Waters’ capacity is allocated each year to projects at the University through a campuswide peer-review process.

The next round of proposals will be due March 15, 2017. To learn how you could receive an allocation to accelerate your research, visit https://bluewaters.ncsa.illinois.edu/illinois-allocations.

Fall 2016 Illinois Allocations

General Proposals

  • Christina Cheng (Animal Biology): Structural Basis for Extreme Cold Stability in the Eye Lenses of Teleost Fishes
  • Wendy K. Tam Cho and Yan Liu (Political Science): Enabling Redistricting Reform: A Computational Study of Zoning Optimization
  • Marcelo Garcia (Civil and Environmental Engineering) and Paul Fischer (Computer Science): Direct Numerical Simulation of Turbulence and Sediment Transport in Oscillatory Boundary Layer Flows
  • Deborah Levin (Aerospace Engineering): Kinetic Simulations of Unsteady Shock-Boundary Layer Interactions using Petascale Computing
  • Deborah Levin (Aerospace Engineering): Modeling Plasma Flows with Kinetic Approaches using Hybrid CPU-GPU Computing
  • Rafael Tinoco Lopez (Civil and Environmental Engineering): High Resolution Numerical Simulation of Oscillatory Flow and Sediment Transport through Aquatic Vegetation: Using the Highly Scalable Higher-Order Incompressible Solver Nek5000
  • Zan Luthey-Schulten (Chemistry), Tyler Earnest (Beckman Institute), Zhaleh Ghaemi (Chemistry), Michael Hallock (Beckman Institute), and Thomas Kuhlman (Physics): Whole Cell Simulations of Escherichia coli and Saccharomyces cerevisiae
  • Liudmila Mainzer (NCSA): Search for Missing Variants in Large Exome Sequencing Projects by Optimization of Analytic Pipelines, in application to Alzheimer’s Disease
  • Rakesh Nagi and Ketan Date (Industrial and Enterprise Systems Engineering): Parallel Algorithms for Solving Large Assignment Problems
  • Caroline Riedl, Vincent Andrieux, Naomi Makins, Marco Meyer, and Matthias Grosse Perdekamp (Physics): Mapping Proton Quark Structure in Momentum and Coordinate Phase Space using 17 PB of COMPASS Data
  • Andre Schleife (Materials Science and Engineering) and Alina Kononov (Physics): Non-adiabatic Electron-ion Dynamics in Ion-irradiated Carbon Nanomembranes
  • Diwakar Shukla (Chemical & Biomolecular Engineering): Unraveling the Molecular Magic of Witchweed
  • Justin Sirignano (Industrial and Enterprise Systems Engineering): Distributed Learning with Neural Networks
  • Edgar Solomonik (Computer Science): Performance Evaluation of New Algebraic Algorithms and Libraries
  • Ryan Sriver and Hui Li (Atmospheric Sciences): Simulating Tropical Cyclone-Climate Interactions under Anthropogenic Global Warming using High-Resolution Configurations of the Community Earth System Model (CESM)
  • Emad Tajkhorshid (Biochemistry and Pharmacology): Atomic Resolution Description of the Transport Cycle in Neurotransmitter Transporters
  • Lucas Wagner (Physics): Quantum Monte Carlo Simulations of Magnetism and Models in Condensed Matter
  • Junshik Um and Greg McFarquhar (Atmospheric Sciences): An Assessment of Impacts of Orientation, Non-Sphericity, and Size of Small Atmospheric Ice Crystals on Scattering Property Calculations to Improve In-Situ Aircraft Measurements, Satellite Retrievals, and Climate Models
  • Donald J. Wuebbles and Xin-Zhong Liang (Atmospheric Sciences): Particulate Matter Prediction and Source Attribution for U.S. Air Quality Management in a Changing World

Exploratory Proposals

  • William Gropp (NCSA and Computer Science) and Roy Campbell (Computer Science): Performance Analysis of Large-Scale Deep Learning Systems
  • Les Gasser (Computer Science): Large-Scale Exploratory Social Simulations for Understanding Knowledge Flows, Cultural Patterns, and Emergent Organization Structures
  • Ravishankar Iyer, Saurabh Jha, and Valerio Formicola (Electrical and Computer Engineering): Predicting Performance Degradation and Failures of Application through System Activity Monitoring
  • Tomasz Kozlowski (Nuclear, Plasma, & Radiological Engineering): Improving Nuclear Power Competitiveness in a Deregulated Energy Market
  • Praveen Kumar and Phong Le (Civil and Environmental Engineering): Extreme-scale Modeling — Role of Micro-topographic Variability on Nutrient Concentration and Mean Age Dynamics
  • Jian Peng (Computer Science): Protein Structure Prediction using Deep Neural Networks
  • Luke Olson (Computer Science): Large-Scale Solution of Constrained Systems via Monolithic Multigrid

For more information about these projects and other science and engineering work being propelled by Blue Waters, visit bluewaters.ncsa.illinois.edu.

Source: Hannah Remmert, NCSA

NVIDIA and Microsoft Announce Hyperscale GPU Accelerator

Wed, 03/08/2017 - 06:57

SANTA CLARA, Calif., March 8 — NVIDIA (NASDAQ: NVDA) and Microsoft today unveiled blueprints for a new hyperscale GPU accelerator to drive AI cloud computing. Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft’s Project Olympus.

HGX-1 does for cloud-based AI workloads what ATX — Advanced Technology eXtended — did for PC motherboards when it was introduced more than two decades ago. It establishes an industry standard that can be rapidly and efficiently embraced to help meet surging market demand.

The new architecture is designed to meet the exploding demand for AI computing in the cloud — in fields such as autonomous driving, personalized healthcare, superhuman voice recognition, data and video analytics, and molecular simulations.

“AI is a new computing model that requires a new architecture,” said Jen-Hsun Huang, founder and chief executive officer of NVIDIA. “The HGX-1 hyperscale GPU accelerator will do for AI cloud computing what the ATX standard did to make PCs pervasive today. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing.”

“The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing data centers around the world,” wrote Kushagra Vaid, general manager and distinguished engineer, Azure Hardware Infrastructure, Microsoft, in a blog post.

For the thousands of enterprises and startups worldwide that are investing in AI and adopting AI-based approaches, the HGX-1 architecture provides unprecedented configurability and performance in the cloud.

Powered by eight NVIDIA Tesla P100 GPUs in each chassis, it features an innovative switching design — based on NVIDIA NVLink interconnect technology and the PCIe standard — enabling a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on the HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations.

Cloud workloads are more diverse and complex than ever. AI training, inferencing and HPC workloads run optimally on different system configurations, with a CPU attached to a varying number of GPUs. The highly modular design of the HGX-1 allows for optimal performance no matter the workload. It provides up to 100x faster deep learning performance compared with legacy CPU-based servers, and is estimated at one-fifth the cost for conducting AI training and one-tenth the cost for AI inferencing.

With its flexibility to work with data centers across the globe, HGX-1 offers existing hyperscale data centers a quick, simple path to be ready for AI.

Collaboration to Bring Industry Standard to Hyperscale

Microsoft, NVIDIA and Ingrasys (a Foxconn subsidiary) collaborated to architect and design the HGX-1 platform. The companies are sharing it widely as part of Microsoft’s Project Olympus contribution to the Open Compute Project, a consortium whose mission is to apply the benefits of open source to hardware and rapidly increase the pace of innovation in, near and around the data center and beyond.

Sharing the reference design with the broader Open Compute Project community means that enterprises can easily purchase and deploy the same design in their own data centers.

NVIDIA Joins Open Compute Project

NVIDIA is joining the Open Compute Project to help drive AI and innovation in the data center. The company plans to continue its work with Microsoft, Ingrasys and other members to advance AI-ready computing platforms for cloud service providers and other data center customers.

About NVIDIA

NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.” More information at http://nvidianews.nvidia.com/.

Source: NVIDIA

SDSC and Scripps Institution of Oceanography Awarded NASA ACCESS Grant

Wed, 03/08/2017 - 06:50

March 8 — The San Diego Supercomputer Center (SDSC) and Scripps Institution of Oceanography at the University of California San Diego have been awarded a NASA ACCESS grant to develop a cyberinfrastructure platform for discovery, access, and visualization of data from NASA’s ICESat and upcoming ICESat-2 laser altimeter missions.

ICESat and ICESat-2 (scheduled for launch in 2018) measure changes in the volume of Earth’s ice sheets, sea-ice thickness, sea-level height, the structure of forest and brushland canopies, and the distribution of clouds and aerosols.

The new project, dubbed “OpenAltimetry” (www.openaltimetry.org), will build upon technology that SDSC developed for its NSF-funded OpenTopography facility, which provides web-based access to high-resolution topographic data and processing tools for a broad spectrum of research communities.

OpenAltimetry, which includes the Boulder, CO-based National Snow and Ice Data Center (NSIDC) and UNAVCO as collaborators, also incorporates lessons learned from a prototype data discovery interface that was developed under NASA’s Lidar Access System project, a collaboration between UNAVCO, SDSC, NASA’s Goddard Space Flight Center, and NSIDC.

OpenAltimetry will enable researchers unfamiliar with ICESat/ICESat-2 data to easily navigate the dataset and plot elevation changes over time in any area of interest. These capabilities will be heavily used for assessing the quality of data in regions of interest, and for exploratory analysis of areas of potential but unconfirmed surface change.

Possible use cases include the identification of subglacial lakes in the Antarctic, and the documentation of deforestation via observations of forest canopy height and density changes.

“The unique data generated by ICESat and the upcoming ICESat-2 mission require a new paradigm for data access, both to serve the needs of expert users as well as to increase the accessibility and utility of this data for new users,” said Adrian Borsa, an assistant professor at Scripps’ Institute of Geophysics and Planetary Physics and principal investigator for the OpenAltimetry project. “We envision a data access system that will broaden the use of the ICESat dataset well beyond its core cryosphere community, and will be ready to serve the upcoming ICESat-2 mission when it begins to return data in 2018,” added Borsa. “Ultimately, we hope that OpenAltimetry will be the platform of choice for hosting similar datasets from other altimetry missions.”

“OpenTopography has demonstrated that enabling online access to data and processing tools via easy-to-use interfaces can significantly increase data use across a wide range of communities in academia and industry, and can facilitate new research breakthroughs,” said Viswanath Nandigam, associate director for SDSC’s Advanced Cyberinfrastructure Development Lab. Nandigam also is the principal investigator for the OpenTopography project and co-PI of OpenAltimetry.

On a broader scale, the OpenAltimetry project addresses the primary objective of NASA’s ACCESS (Advancing Collaborative Connections for Earth System Science) program, which is to improve data discovery, accessibility, and usability of NASA’s earth science data using mature technologies and practices, with the goal of advancing Earth science research through increasing efficiencies for current users and enabling access for new users.

The project leadership team includes Co-I Siri Jodha Singh Khalsa from NSIDC, Co-I Christopher Crosby from UNAVCO, and Co-I Helen Fricker from Scripps. Additional SDSC staff supporting the project include Kai Lin, a senior research programmer; and Minh Phan, a software developer. The OpenAltimetry project is funded under NASA ACCESS grant number NNX16AL89A until June 22, 2018.

About SDSC 

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s Comet joins the Center’s data-intensive Gordon cluster, and both are part of the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) program.

About Scripps Institution of Oceanography

Scripps Institution of Oceanography at the University of California San Diego, is one of the oldest, largest, and most important centers for global science research and education in the world. Now in its second century of discovery, the scientific scope of the institution has grown to include biological, physical, chemical, geological, geophysical, and atmospheric studies of the earth as a system. Hundreds of research programs covering a wide range of scientific areas are under way today on every continent and in every ocean. The institution has a staff of more than 1,400 and annual expenditures of approximately $195 million from federal, state, and private sources. Learn more at scripps.ucsd.edu.

Source: SDSC

Titan Supercomputer Assists With Polymer Nanocomposites Study

Wed, 03/08/2017 - 06:39

OAK RIDGE, Tenn., March 8 — Polymer nanocomposites mix particles billionths of a meter (nanometers, nm) in diameter with polymers, which are long molecular chains. Often used to make injection-molded products, they are common in automobiles, fire retardants, packaging materials, drug-delivery systems, medical devices, coatings, adhesives, sensors, membranes and consumer goods. When a team led by the Department of Energy’s Oak Ridge National Laboratory tried to verify that shrinking the nanoparticle size would adversely affect the mechanical properties of polymer nanocomposites, they got a big surprise.

“We found an unexpectedly large effect of small nanoparticles,” said Shiwang Cheng of ORNL. The team of scientists at ORNL, the University of Illinois at Urbana-Champaign (Illinois) and the University of Tennessee, Knoxville (UTK) reported their findings in the journal ACS Nano.

Blending nanoparticles and polymers enables dramatic improvements in the properties of polymer materials. Nanoparticle size, spatial organization and interactions with polymer chains are critical in determining behavior of composites. Understanding these effects will allow for the improved design of new composite polymers, as scientists can tune mechanical, chemical, electrical, optical and thermal properties.

Until recently, scientists believed an optimal nanoparticle size must exist. Decreasing the size would be good only to a point, as the smallest particles tend to plasticize at low loadings and aggregate at high loadings, both of which harm macroscopic properties of polymer nanocomposites.

The ORNL-led study compared polymer nanocomposites containing particles 1.8 nm in diameter and those with particles 25 nm in diameter. Most conventional polymer nanocomposites contain particles 10–50 nm in diameter. Tomorrow, novel polymer nanocomposites may contain nanoparticles far smaller than 10 nm in diameter, enabling new properties not achievable with larger nanoparticles.

Well-dispersed small “sticky” nanoparticles improved properties, one of which broke records: Raising the material’s temperature less than 10 degrees Celsius caused a fast, million-fold drop in viscosity. A pure polymer (without nanoparticles) or a composite with large nanoparticles would need a temperature increase of at least 30 degrees Celsius for a comparable effect.

“We see a shift in paradigm where going to really small nanoparticles enables accessing totally new properties,” said Alexei Sokolov of ORNL and UTK. That increased access to new properties happens because small particles move faster than large ones and interact with fewer polymer segments on the same chain. Many more polymer segments stick to a large nanoparticle, making dissociation of a chain from that nanoparticle difficult.

“Now we realize that we can tune the mobility of the particles—how fast they can move, by changing particle size, and how strongly they will interact with the polymer, by changing their surface,” Sokolov said. “We can tune properties of composite materials over a much larger range than we could ever achieve with larger nanoparticles.”

Better together

The ORNL-led study required expertise in materials science, chemistry, physics, computational science and theory. “The main advantage of Oak Ridge National Lab is that we can form a big, collaborative team,” Sokolov said.

Cheng and UTK’s Bobby Carroll carried out experiments they designed with Sokolov. Broadband dielectric spectroscopy tracked the movement of polymer segments associated with nanoparticles. Calorimetry revealed the temperature at which solid composites transitioned to liquids. Using small-angle X-ray scattering, Halie Martin (UTK) and Mark Dadmun (UTK and ORNL) characterized nanoparticle dispersion in the polymer.

To better understand the experimental results and correlate them to fundamental interactions, dynamics and structure, the team turned to large-scale modeling and simulation (by ORNL’s Bobby Sumpter and Jan-Michael Carrillo) enabled by the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at ORNL.

“It takes us a lot of time to figure out how these particles affect segmental motion of the polymer chain,” Cheng said. “These things cannot be visualized from experiments that are macroscopic. The beauty of computer simulations is they can show you how the chain moves and how the particles move, so the theory can be used to predict temperature dependence.”

Shi-Jie Xie and Kenneth Schweizer, both of Illinois, created a new fundamental theoretical description of the collective activated dynamics in such nanocomposites and quantitatively applied it to understand novel experimental phenomena. The theory enables predictions of physical behavior that can be used to formulate design rules for optimizing material properties.

Carrillo and Sumpter developed and ran simulations on Titan, America’s most powerful supercomputer, and wrote codes to analyze the data on the Rhea cluster. The LAMMPS molecular-dynamics code calculated how fast nanoparticles moved relative to polymer segments and how long polymer segments stuck to nanoparticles.

“We needed Titan for fast turn-around of results for a relatively large system (200,000 to 400,000 particles) running for a very long time (100 million steps). These simulations allow for the accounting of polymer and nanoparticle dynamics over relatively long times,” Carrillo said. “These polymers are entangled. Imagine pulling a strand of spaghetti in a bowl. The longer the chain, the more entangled it is. So its motion is much slower.” Molecular dynamics simulations of long, entangled polymer chains were needed to calculate time-correlation functions similar to experimental conditions and find connections or agreements between the experiments and theories proposed by colleagues at Illinois.
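
For readers who want a concrete picture of that last analysis step, the short sketch below computes a normalized time-autocorrelation function of the kind used to relate simulated polymer and nanoparticle dynamics to experiment. It is only an illustration: the trajectory array is a random placeholder, and the function is not the ORNL team’s actual analysis code.

    # Illustrative sketch only: a normalized time-autocorrelation function of the
    # kind used to relate simulated polymer/nanoparticle dynamics to experiment.
    # The trajectory below is a random placeholder, not parsed LAMMPS output.
    import numpy as np

    def time_autocorrelation(x, max_lag):
        """C(t) = <x(0) . x(t)> / <x(0) . x(0)>, averaged over time origins.

        x has shape (n_frames, n_dims), e.g. a polymer end-to-end vector or a
        nanoparticle displacement sampled at every trajectory output step.
        """
        n_frames = x.shape[0]
        c = np.empty(max_lag)
        for lag in range(max_lag):
            # average the dot product over every time origin that fits
            c[lag] = np.mean(np.sum(x[:n_frames - lag] * x[lag:], axis=1))
        return c / c[0]

    # Hypothetical usage: 10,000 saved frames of a 3-D vector observable
    trajectory = np.random.randn(10_000, 3)
    correlation = time_autocorrelation(trajectory, max_lag=2_000)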

The simulations also visualized how nanoparticles moved relative to a polymer chain. Corroborating experiment and theory moves scientists closer to verifying predictions and creates a clearer understanding of how nanoparticles change behavior, such as how altering nanoparticle size or nanoparticle–polymer interactions will affect the temperature at which a polymer loses enough viscosity to become liquid and start to flow. Large particles are relatively immobile on the time scale of polymer motion, whereas small particles are more mobile and tend to detach from the polymer much faster.

The title of the paper is “Big Effect of Small Nanoparticles: A Shift in Paradigm for Polymer Nanocomposites.”

Source: ORNL

ISC 2017 Now Open for Registration

Tue, 03/07/2017 - 10:36

FRANKFURT, Germany, March 7 — Early registration at reduced rates is now open for ISC High Performance, the largest high performance computing forum in Europe. By registering between now and May 10 for the full conference pass, attendees from commercial sectors and academia can save over 45 percent off the onsite registration rates.

The ISC High Performance conference and exhibition will be held June 18 – 22 at Messe Frankfurt and is expected to draw over 3,000 attendees.

The conference will kick off on Sunday, June 18, with half-day and full-day tutorials. This year the organizers received a record-breaking number of tutorial proposals, which will be announced on the event website later this month. The three-day general track will run from Monday, June 19, through Wednesday, June 21, and includes many unique sessions. Of particular note are the three keynotes and five distinguished talks from experts on topics such as data networks, data analytics, weather prediction, current developments and future trends, and the scientific purposes of Sunway TaihuLight – the fastest supercomputer in the world.

New this year, ISC offers two special full-day programs that hold great appeal for the industrial and deep learning user communities. The first is the Industrial Day on Tuesday, June 20, which specifically addresses challenges in the industrial manufacturing, transport and logistics sectors. Branching into deep learning on the following day, technical talks from academic and industry leaders will give Deep Learning Day attendees up-to-date insights into the rapid development in this area and demonstrate how deep learning can be enabled with HPC technology, as well as how the demands of deep learning will affect current and future HPC infrastructure.

Complementing the general track, this year’s research track will host the following sessions: the Research Paper, Research Poster and Project Poster sessions, as well as the PhD Forum. Various topical and interest-specific Birds-of-a-Feather sessions, the fast-paced Vendor Showdown and Exhibitor Forums will again take place this year. The Hans Meuer Award as well as the Gauss Award await the best research papers in 2017. The conference will conclude on Thursday, June 22 with half-day and full-day workshops. All pre-submitted workshops are now published on the website.

The ISC exhibition, a three-day event that runs from Monday, June 19, through Wednesday, June 21, will feature about 150 exhibits from leading HPC companies and research organizations. The exciting Student Cluster Competition, organized by the HPC Advisory Council and ISC High Performance, will be held for the sixth time, once again on the show floor.

The organizers are offering flexible conference passes under four different categories: tutorial, conference, exhibition and workshop. For the detailed fee structure and pass descriptions, click here. Please note that tutorials and workshops require separate registration. The conference agenda planner contains the full program and enables attendees to plan their schedule efficiently.

Plan Your Travel and Stay Early

If you fly with any of the oneworld global alliance partners, you are eligible for discounts and attendee benefits for yourself and a travel companion. ISC organizers are also once again partnering with Deutsche Bahn to offer their attendees a special event ticket.

The conference’s 2017 official hotel partners are Marriott, Maritim and Mövenpick, which are offering special room rates to attendees for a limited period. All three are located in close vicinity to the ISC 2017 conference and exhibition grounds. Please click on the Travel & Stay page and proceed to Frankfurt Hotels for the booking links and contact information. For all other hotels, you can book via CPO Hanser Service.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, the Research Poster and Project Poster Sessions, the PhD Forum and Exhibitor Forums.

Source: ISC High Performance

SDSC Achieves Record Performance in Seismic Simulations With Intel

Tue, 03/07/2017 - 07:24

March 7 — Researchers at the San Diego Supercomputer Center (SDSC) at the University of California San Diego have developed a new seismic software package with Intel Corporation that has enabled the fastest seismic simulation to-date, as the two organizations collaborate on ways to better predict ground motions to save lives and minimize property damage.

The latest simulations, which mimic possible large-scale seismic activity in the southern California region, were done using a new software system called EDGE, for Extreme-Scale Discontinuous Galerkin Environment. The largest simulation used 612,000 Intel Xeon Phi processor cores of the new Cori Phase II supercomputer at the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy (DOE).

SDSC’s ground-breaking performance of 10.4 PFLOPS (petaflops, where one PFLOPS equals one quadrillion floating-point calculations per second) surpassed the previous seismic record of 8.6 PFLOPS, set on China’s Tianhe-2 supercomputer. Through efficient utilization of the latest and largest supercomputers, seismologists are now able to increase the frequency content of the simulated seismic wave field.

In earthquake research, obtaining higher frequencies is key to predicting ground motions relevant to common dwellings. SDSC and Intel researchers also used the DOE’s Theta supercomputer at Argonne National Laboratory as part of the year-long project.

“In addition to using the entire Cori Phase II supercomputer, our research also showed a substantial gain in efficiency in using the new software,” said Alex Breuer, a postdoctoral researcher from SDSC’s High Performance Geocomputing Laboratory (HPGeoC) and lead author of the paper, which will be presented in June at the ISC High Performance conference in Frankfurt, Germany. “Researchers will be able to run about two to almost five times the number of simulations using EDGE, saving time and reducing cost.”

A second HPGeoC paper submitted and accepted for the ISC High Performance conference covers a new study of the AWP-ODC software that has been used by the Southern California Earthquake Center (SCEC) for years. The software was optimized to run in large-scale for the first time on the latest generation of Intel data center processors, called Intel Xeon Phi x200.

These simulations, also run on NERSC’s Cori Phase II supercomputer, attained performance competitive with an equivalent simulation on the entire GPU-accelerated Titan supercomputer. Titan is located at the DOE’s Oak Ridge National Laboratory and has been the resource used for the largest AWP-ODC simulations in recent years. Additionally, the software obtained high performance on Stampede-KNL at the Texas Advanced Computing Center at The University of Texas at Austin.

Intel Parallel Computing Center at SDSC

Both research projects are part of a collaboration announced in early 2016 under which Intel opened a computing center at SDSC to focus on seismic research, including the ongoing development of computer-based simulations that can be used to better inform and assist disaster recovery and relief efforts.

The Intel Parallel Computing Center (Intel PCC) continues an interdisciplinary collaboration between Intel, SDSC, and SCEC, one of the largest open research collaborations in geoscience. In addition to UC San Diego, the Intel PCC at SDSC includes researchers from the University of Southern California (USC), San Diego State University (SDSU), and the University of California Riverside (UCR).

The Intel PCC program provides funding to universities, institutions, and research labs to modernize key community codes used across a wide range of disciplines to run on current state-of-the-art parallel architectures. The primary focus is to modernize applications to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors.

“Research and results such as the massive seismic simulation demonstrated by the SDSC/Intel team are tremendous for their contributions to science and society,” said Joe Curley, senior director of Code Modernization Organization at Intel Corporation. “Equally, this work also demonstrates the benefit to society of developing modern applications to exploit power-efficient and highly parallel CPU technology.”

Such detailed computer simulations allow researchers to study earthquake mechanisms in a virtual laboratory. “These two studies open the door for the next-generation of seismic simulations using the latest and most sophisticated software,” said Yifeng Cui, founder of the HPGeoC at SDSC and director of the Intel PCC at SDSC. “Going forward, we will use the new codes widely for some of the most challenging tasks at SCEC.”

The multi-institution study which led to the record results includes Breuer and Cui; as well as Josh Tobin, a Ph.D. student in UC San Diego’s Department of Mathematics; Alexander Heinecke, a research scientist at Intel Labs; and Charles Yount, a principal engineer at Intel Corporation.

The titles of the respective presentations and publications are “EDGE: Extreme Scale Fused Seismic Simulations with the Discontinuous Galerkin Method” and “Accelerating Seismic Simulations using the Intel Xeon Phi Knights Landing Processor”. The work was supported by the National Science Foundation (NSF), SCEC, and the Intel PCC initiative. Intel, Xeon, and Xeon Phi are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries.

About SDSC 

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s Comet joins the Center’s data-intensive Gordon cluster, and both are part of the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) program.

Source: SDSC

Mellanox Enables PCIe Gen-4 OpenPOWER-Based Rackspace OCP Server With 100Gb/s Connectivity

Tue, 03/07/2017 - 07:12

SUNNYVALE, Calif. & YOKNEAM, Israel, March 7 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that the Mellanox ConnectX-5 Open Compute Project (OCP) Ethernet adapter will enable Zaius, the world’s first PCIe Gen-4 OpenPOWER/OCP-based open server platform from Google and Rackspace. Mellanox will showcase ConnectX-5, the industry’s first PCIe Gen-4 based 100Gb/s OCP Ethernet adapter, at OCP Summit 2017.

The exponential growth of data demands not only the fastest throughput but also smarter networks. Mellanox intelligent interconnect solutions incorporate advanced acceleration engines that perform sophisticated processing algorithms on the data as it moves through the network. Intelligent network solutions greatly improve the performance and total infrastructure efficiency of data intensive applications in the cloud such as cognitive computing, machine learning and Internet of Things (IoT). Mellanox’s ConnectX-5 supports both InfiniBand and Ethernet, and is the most advanced 10/25/50/100Gb/s intelligent adapter on the market today. Additionally, as the first adapter to support PCI Express Gen 4.0, ConnectX-5 delivers full 200Gb/s data throughput to servers and storage platforms.

The ConnectX-5 supports Multi-Host technology, delivering flexibility and major cost savings for the next generation of Cloud, Web2.0, Big Data and cognitive computing platforms. Multi-Host technology disaggregates the network and enables building new scale-out heterogeneous compute and storage racks with direct connectivity from multiple processors to a shared network controller. Mellanox Multi-Host technology is available today in the Mellanox portfolio of ConnectX-4 Lx, ConnectX-4, and ConnectX-5 adapters at speeds of 50 and 100Gb/s.

“We anticipate that Zaius and our Barreleye G2 server solution will bring new levels of performance and efficiency to our portfolio,” said Aaron Sullivan, Distinguished Engineer, Rackspace. “This platform combines IBM’s Power9 processor with PCI Express Gen4, and Mellanox ConnectX-5 network adapters. Leveraging these technologies, it is now possible to deliver hundreds of gigabits of bandwidth from a single network adapter.”

“IBM, the OpenPOWER Foundation and its members are fostering an open ecosystem for innovation to unleash the power of cognitive and AI computing platforms,” said Ken King, IBM general manager of OpenPOWER. “The combination of the POWER processor and Mellanox ConnectX-5 technology, using novel interfaces like CAPI and OpenCAPI, will dramatically increase system throughput for the next generation of advanced analytics, AI and cognitive applications.”

“Mellanox has been committed to OCP’s vision from its inception and we are excited to bring continued innovation to this growing community,” said Kevin Deierling, vice president marketing at Mellanox Technologies. “Through collaboration between IBM and Rackspace, we continue to push the boundaries of innovation, enable open platforms and unlock performance of compute and storage infrastructure.”

Visit Mellanox Technologies at the OCP Summit

Visit Mellanox at OCP Summit 2017, March 8–9, at the Santa Clara Convention Center, booth no. C-23, to learn more about the OCP family of ConnectX adapters.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies

Stampede Supercomputer Helps UM Researchers Design Jets With Morphing Wings

Tue, 03/07/2017 - 06:50

March 7 — As much as we complain about air travel, the fact is, flying has gotten considerably cheaper, safer, faster and even greener over the last 60 years.

Today’s aircraft use roughly 80 percent less fuel per passenger-mile than the first jets of the 1950s – a testimony to the tremendous impact of aerospace engineering on flight. This increased efficiency has extended global commerce to the point where it is now economically viable to ship everything from flowers to Florida manatees across the globe.

In spite of continuous improvements in fuel-burning efficiency, global emissions are still expected to increase over the next two decades due to a doubling in air traffic, so making even small improvements to aircraft fuel efficiency can have a large effect on economies and on the environment.

This potential for impact motivates Joaquim Martins — an aerospace engineer at the University of Michigan (UM) who leads the Multidisciplinary Design Optimization Laboratory — to develop tools that let engineers design more efficient aircraft.

“Transportation is the backbone of our economy. Any difference you can make in fuel burn, even a fraction of a percent, makes a big difference in the world,” Martins says. “Our goals are two-fold: to make air transportation more economically feasible and at the same time to reduce the impact on the environment.”

Using the Stampede supercomputer at the Texas Advanced Computing Center, as well as computing systems at NASA and UM, Martins has developed improved wing designs capable of burning less fuel, as well as tools that help the aerospace industry build more efficient aircraft.

“We’re bridging the gap between an academic exercise and a practical method for industry, who will come up with future designs,” he says.

Novel wing designs for more efficient flight

Improvements in wing design have the potential to improve efficiency up to 10 percent, lowering costs and pollution. Moreover, in areas where new technologies are being applied – such as for wings made of composite materials or wings that morph during flight – improved design tools can provide insights when intuitive understanding is lacking.

Presenting at the American Institute of Aeronautics and Astronautics (AIAA) SciTech Forum in January 2017, Martins and collaborators Timothy Brooks (UM) and Graeme Kennedy (Georgia Tech) described efforts to optimize the design of wings built with new composite materials and emerging construction methods.

Today’s airplanes feature 50 percent composite materials, but the composites are placed in a relatively simple way. New automatic fiber placement machines, however, can place composites in complex curves, creating what are known as tow-steered composite wings.

“That opens up the design space, but designers aren’t used to this,” Martins says. “It’s challenging because there isn’t a lot of intuition on how to utilize the full potential of this technology. We developed the algorithms for optimizing these tow-angles.”

Source: Aaron Dubrow, TACC

Mellanox Unveils OCP-Based ConnectX-5 Adapters for Qualcomm Centriq 2400 Processor-Based Platforms

Tue, 03/07/2017 - 06:45

SUNNYVALE, Calif. & YOKNEAM, Israel, March 7 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced availability of industry leading OCP-based 10, 25, 40, 50, and 100Gb/s ConnectX-5 network adapters for the Qualcomm Centriq 2400 processor-based platforms. ConnectX-5’s advanced performance and efficiency paired with the Qualcomm Centriq 2400, the world’s first 10-nanometer server processor, offer a complete ARM-based infrastructure for hyperscale and cloud data centers.

“The combination of Mellanox Ethernet and InfiniBand interconnect solutions and Qualcomm’s leading ARM-based processors enables the next generation of compute and storage data centers,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The RDMA technology and the smart cloud, storage and machine learning offloads of ConnectX-5 maximize the Qualcomm CPU efficiency and overall applications’ performance. We are excited to demonstrate the ARM server platforms with Qualcomm and their readiness for data center usage.”

“Pairing our 10 nanometer server processor with Mellanox’s industry-leading multi-host solution delivers flexible compute and modularity for the scaling needs of next-generation data center infrastructure demands,” said Ram Peddibhotla, vice president of product management, Qualcomm Datacenter Technologies. “By featuring two world-class technologies, the Qualcomm Centriq 2400 processor and Mellanox ConnectX-5 interconnect, we are enabling innovative topologies and networked compute models that facilitate cloud, high performance computing, telco and other market applications.”

ConnectX-5 is the world’s leading 10, 25, 40, 50, 56 and 100Gb/s InfiniBand and Ethernet intelligent adapter. Delivering message rates of up to 200 million messages per second, and latency as low as 0.7us, ConnectX-5 supports the most advanced offloads to accelerate high-performance computing and machine learning algorithms, virtualized infrastructures, and storage workloads. Together with native RDMA and RDMA over Converged Ethernet (RoCE), ConnectX-5 dramatically improves storage and compute platform efficiency.

Mellanox and Qualcomm will showcase the Qualcomm Centriq 2400 server-based platform with Mellanox ConnectX-5 at the Open Compute Project (OCP) Summit, March 8-9, booth No. C-23, Santa Clara.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies

ASRock Rack Reveals New OCP-Related Offering

Tue, 03/07/2017 - 06:30

SANTA CLARA, Calif., March 7 — The Open Compute Project (OCP) has been successfully advancing its goals of efficiency and scalability for seven years. As demand for cloud services has skyrocketed, OCP designs have become one of the best options for meeting needs in the storage, data center, and virtual I/O areas. ASRock Rack is therefore proud to reveal a new optimized and flexible design.

ASRock Rack has developed its OCP series with attention to power usage effectiveness (PUE), dual gigabit LAN ports and excellent I/O flexibility. Both models support dual Intel Xeon E5-2600 v3 and v4 processors and a maximum of 1024GB of DDR4 memory. This energy-efficient, eco-friendly choice reduces the cost of computation-intensive applications. Built with dual Intel i210 gigabit LAN controllers, the design ensures that customers do not need to purchase additional LAN cards, creates more flexibility for deployments, and offers the chance to minimize downtime through failover setups. If that is still not enough, a mezzanine slot (x8) supporting 10G Ethernet is available to amplify the bandwidth.

Furthermore, ASRock Rack has been busy creating cutting-edge designs for various mezzanine cards, each optimized to enhance the capabilities of its barebones and motherboards. To view the full lineup, please visit ASRock Rack at Booth C7 at the OCP U.S. Summit 2017.

Source: ASRock Rack

IBM Touts Hybrid Approach to Quantum Computing

Mon, 03/06/2017 - 14:27

IBM is attempting to move quantum computing to the mainstream via an application-programming interface and other software tools that would allow developers to build interfaces between its cloud-based quantum machine and digital computers.

The accelerating trend toward linking quantum machines—in this case IBM’s 5 qubit computer—with traditional computing underscores efforts to leverage the attributes of both approaches in what amounts to a hybrid computing platform.

Along with the API, the company (NYSE: IBM) also on Monday (March 6) released an upgraded simulator for modeling circuits up to 20 qubits. It also plans to release a software development kit by mid-year for building “simple” quantum applications and programs. Those efforts reflect the emergence of quantum software startups such as 1QBit, which partners with quantum computing pioneer D-Wave Systems.

IBM said the API and development kit would expand access to its cloud-based quantum processor for running algorithms, experiments and simulations. The company unveiled a research platform last year that has attracted about 40,000 users. For example, the Massachusetts Institute of Technology tapped the cloud service for its online quantum information science course. IBM engineers also have noted heavy use of the service by Chinese researchers.

The cloud-based system is “the beginnings of a quantum community,” predicted Robert Wisnieff, a quantum researcher at IBM’s Watson Research Center.

Indeed, IBM stressed that its latest initiative is designed to “expand the application domain of quantum computing.” The effort also introduces a new metric for gauging the pace of progress from research to a commercial platform with up to 50 qubits of processing power. The metric, “quantum volume,” includes the number of qubits, “quality” of quantum operations and connectivity.

IBM also said it plans to use these measures in developing a 50-qubit commercial machine over the next several years. “We envision IBM [quantum] systems working in concert with our portfolio of classical high-performance systems to address problems that are currently unsolvable,” Tom Rosamilia, senior vice president of IBM Systems, noted in a statement.

Early adopters of quantum computing have embraced the hybrid approach. “Quantum computers are not going to be used in isolation,” predicted Ned Allen, chief scientist at Lockheed Martin Corp. (NYSE: LMT), an early investor in quantum computing technologies. The military contracting giant uses a D-Wave One quantum machine as part of its verification and validation (V&V) program for mission critical software.

Along with V&V, Allen predicted during a recent panel discussion sponsored by the Information Technology and Innovation Foundation that quantum computing was suited to applications such as “classical” analytics.

He further predicted that hybrid-computing platforms would emerge that leverage quantum co-processors.

That hybrid approach seems likely to continue for the foreseeable future since a universal quantum computer remains elusive. In unveiling its cloud-based platform last May, the company said it envisions “medium-sized quantum processors” totaling between 50 and 100 qubits over the next decade.

Still, a 50-qubit machine would outperform today’s highest performance supercomputers, underscoring the huge potential of quantum computing as traditional Moore’s Law digital scaling runs out of steam.

More Bad News for Gamblers – AI Wins…Again

Mon, 03/06/2017 - 10:48

AI-based poker playing programs have been upping the ante for lowly humans. Notably, several algorithms from Carnegie Mellon University (e.g. Libratus, Claudico, and Baby Tartanian8) have performed well. Writing in Science last week, researchers from the University of Alberta, Charles University in Prague and Czech Technical University report that their poker algorithm – DeepStack – is the first computer program to beat professional players in heads-up no-limit Texas hold’em poker.

Sorting through the “firsts” is tricky in the world of AI game-playing programs. What sets DeepStack apart from other programs, say the researchers, is its more realistic approach, at least in games such as poker where all factors are never fully known – think bluffing, for example. Heads-up no-limit Texas hold’em (HUNL) is a two-player version of poker in which two cards are initially dealt face down to each player, and additional cards are dealt face-up in three subsequent rounds. No limit is placed on the size of the bets, although there is an overall limit to the total amount wagered in each game.

“Poker has been a longstanding challenge problem in artificial intelligence,” says Michael Bowling, professor in the University of Alberta’s Faculty of Science and principal investigator on the study. “It is the quintessential game of imperfect information in the sense that the players don’t have the same information or share the same perspective while they’re playing.”

Using GTX 1080 GPUs and CUDA with the Torch deep learning framework, “we train our system to learn the value of situations,” says Bowling on an NVIDIA blog. “Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game.”

[Photo caption: The DeepStack research team]

In the last two decades, write the researchers, “computer programs have reached a performance that exceeds expert human players in many games, e.g., backgammon, checkers, chess, Jeopardy!, Atari video games, and go. These successes all involve games with information symmetry, where all players have identical information about the current state of the game. This property of perfect information is also at the heart of the algorithms that enabled these successes,” write the researchers.

“We introduce DeepStack, an algorithm for imperfect information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning.”

In total, 44,852 games were played by the 33 players, with 11 players completing the requested 3,000 games, according to the paper. Over all games played, DeepStack won 492 mbb/g. “This is over 4 standard deviations away from zero, and so, highly significant,” the authors write. According to the authors, professional poker players consider 50 mbb/g a sizable margin. “Using AIVAT to evaluate performance, we see DeepStack was overall a bit lucky, with its estimated performance actually 486 mbb/g.”

(For those of us less prone to take a seat at the Texas hold’em poker table, mbb/g equals milli-big-blinds per game or the average winning rate over a number of hands, measured in thousandths of big blinds. A big blind is the initial wager made by the non-dealer before any cards are dealt. The big blind is twice the size of the small blind; a small blind is the initial wager made by the dealer before any cards are dealt. The small blind is half the size of the big blind.)
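
As a quick worked example of that unit conversion (the table stakes below are hypothetical and only illustrate the arithmetic):

    # Worked example of the mbb/g unit; the $1/$2 blind sizes are hypothetical.
    # 1 big blind = 1,000 milli-big-blinds, so 492 mbb/g means winning an
    # average of 0.492 big blinds per game.
    big_blind_dollars = 2.0                      # hypothetical $1/$2 table
    win_rate_mbb_per_game = 492                  # DeepStack's reported overall win rate
    bb_per_game = win_rate_mbb_per_game / 1000   # 0.492 big blinds per game
    dollars_per_game = bb_per_game * big_blind_dollars
    print(f"{bb_per_game:.3f} big blinds per game, about ${dollars_per_game:.2f} per game at $1/$2")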

It’s an interesting paper. Game theory, of course, has a long history and as the researchers note, “The founder of modern game theory and computing pioneer, von Neumann, envisioned reasoning in games without perfect information. ‘Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory.’ One game that fascinated von Neumann was poker, where players are dealt private cards and take turns making bets or bluffing on holding the strongest hand, calling opponents’ bets, or folding and giving up on the hand and the bets already added to the pot. Poker is a game of imperfect information, where players’ private cards give them asymmetric information about the state of game.”

According to the paper, DeepStack algorithm is composed of three ingredients: a sound local strategy computation for the current public state, depth-limited look-ahead using a learned value function to avoid reasoning to the end of the game, and a restricted set of look-ahead actions. “At a conceptual level these three ingredients describe heuristic search, which is responsible for many of AI’s successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.”
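
To make the depth-limited look-ahead idea concrete, here is a generic heuristic-search sketch: search a restricted set of actions to a fixed depth and substitute a learned value estimate at the frontier. It is only an illustration of the concept in a simplified, perfect-information style; it is not DeepStack's continual re-solving procedure, and the game, value_fn and restricted_actions interfaces are hypothetical stand-ins.

    # Generic depth-limited look-ahead with a learned value function at the
    # frontier. This illustrates the heuristic-search idea described above;
    # it is NOT DeepStack's re-solving algorithm. The game, value_fn and
    # restricted_actions objects are hypothetical stand-ins.
    def lookahead_value(game, state, depth, value_fn, restricted_actions):
        if game.is_terminal(state):
            return game.utility(state)
        if depth == 0:
            # Stop here and substitute the learned estimate of the sub-game's
            # value instead of reasoning all the way to the end of the game.
            return value_fn(state)
        # Only a restricted set of actions (e.g. fold, call, a few bet sizes)
        # is considered, which keeps the look-ahead tree tractable.
        child_values = [
            lookahead_value(game, game.apply(state, action), depth - 1,
                            value_fn, restricted_actions)
            for action in restricted_actions(state)
        ]
        # Player 0 maximizes its value; the opponent minimizes it.
        return max(child_values) if game.to_move(state) == 0 else min(child_values)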

The researchers describe DeepStack’s architecture as a standard feed-forward network with seven fully connected hidden layers each with 500 nodes and parametric rectified linear units for the output. The ‘turn’ network was trained by solving 10 million randomly generated poker turn games. These turn games used randomly generated ranges, public cards, and a random pot size. The flop network was trained similarly with 1 million randomly generated flop games.
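
For readers who think in code, a rough architectural sketch with that shape is shown below: seven fully connected hidden layers of 500 units, each followed by a parametric ReLU (one plausible reading of the description above). PyTorch is used purely for illustration, since the paper's system was built on the Torch framework, and the input and output sizes are hypothetical placeholders: the real network consumes encoded card ranges and pot sizes and emits counterfactual values.

    # Rough sketch matching the architecture described above: seven fully
    # connected hidden layers of 500 units with parametric ReLU activations.
    # PyTorch is used for illustration only (the paper used the Torch
    # framework); INPUT_SIZE and OUTPUT_SIZE are hypothetical placeholders.
    import torch.nn as nn

    INPUT_SIZE = 1000    # placeholder: encoded ranges, public cards, pot size
    OUTPUT_SIZE = 1000   # placeholder: counterfactual values for both players

    layers = []
    prev_width = INPUT_SIZE
    for _ in range(7):                                   # seven hidden layers
        layers += [nn.Linear(prev_width, 500), nn.PReLU()]
        prev_width = 500
    layers.append(nn.Linear(prev_width, OUTPUT_SIZE))    # linear output layer
    value_net = nn.Sequential(*layers)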

Link to paper: http://science.sciencemag.org/content/early/2017/03/01/science.aam6960.full

Link to NVIDIA blog: https://news.developer.nvidia.com/ai-system-beats-pros-at-texas-holdem/

Democratized HPC Enables Innovative Software Platforms at University of Cambridge

Mon, 03/06/2017 - 09:49

Scientists at the University of Cambridge have been leveraging an advanced high performance computing and storage infrastructure to conduct research in a number of fields. By partnering with Dell EMC and Intel, the HPC Solution Centre is providing the capabilities needed to make breakthrough discoveries in healthcare, high energy physics, astronomy, and industry.

The problems being investigated in these vastly different areas all need the latest compute and storage technologies. Typically, the researchers in these fields work with very large volumes of data. The data must be quickly analyzed, visualized, modeled, and shared. And in most cases, the data must be maintained indefinitely and made accessible to be analyzed at different points in time as new algorithms and exploration methods are generated.

Accomplishing all of this requires a hardware infrastructure (including servers, storage, and networking elements) that is highly scalable. The infrastructure must also be flexible enough to support the wide variety of workloads found in today’s leading science labs.

Such a hardware infrastructure must be complemented with a new or updated software infrastructure to achieve greater results. In particular, significant performance and speed-to-results benefits can be realized by modifying and optimizing research algorithms to take full advantage of the features and enhancements incorporated into the latest generation of the Intel Scalable System Framework.

Finding the right solution

Certainly, such HPC requirements and goals have been common for years in academic computing centers around the world. However, two things have changed recently. One is the volume of data that needs to be analyzed. And the second, and perhaps most important change, is the need to provide access to HPC resources to a much larger and broader group of researchers.

To that point, a new researcher demographic of non-computational experts has emerged. To serve this demographic, the university needed a platform that would enable non-IT-specialist researchers to benefit from HPC capabilities while still handling large-scale, data-intensive workloads. To meet that demand, the university partnered with Dell EMC and Intel to design and implement a new paradigm of HPC systems.

For more than six years, Dell EMC and the University of Cambridge worked together on the HPC Solution Centre in an effort to provide solutions to real-world problems by increasing the effectiveness of the HPC and data platforms used in the research community. More recently, the effort expanded. To enable the community to take further advantage of new research discoveries, the university and Dell EMC were joined by Intel to create an additional focus on large scale data centric HPC, data analytics and multi-tenanted cloud HPC provisioning.

“The three-way collaboration has strengthened the HPC [center],” said Paul Calleja, head of HPC services, University of Cambridge. “By creating a larger mass of skills and resources, we are able to focus on the emerging problems of data-centric HPC, data analytics, and cloud based research computing services. We’re able to tackle the HPC challenges identified by the community and resolve real-world issues.”

As a result of this collaboration, innovation has been unlocked, enabling new levels of performance, scale, cost efficiency, and ways of working within the commodity HPC and storage domains. For example, researchers benefit from the Intel Scalable System Framework with Dell EMC PowerEdge servers that use the full Intel portfolio of products, such as Intel® Xeon® processors, Intel® Xeon Phi™ coprocessors, and Intel SSDs; scalable Dell EMC HPC Lustre storage; and interconnect solutions such as Intel® True Scale Fabric and Dell EMC Networking H-series switches based on Intel Omni-Path Architecture. These advances have been applied across hundreds of customer use cases, driving advances in healthcare, high energy physics, astronomy, and industry.

Through the enhanced performance of Dell EMC HPC solutions and access to the benefits of OpenStack virtualized technologies, university researchers are now able to process and model complex data with significantly improved flexibility and system usability.

Focus on computational biology

With the superior hardware and core technologies, the Dell EMC | Cambridge HPC Solution Centre platform radically democratizes access to large scale compute and data resources and ultimately contributes to significant advancements in the treatment discovery process.

A good example of what is being achieved is the work done by university researchers in the area of computational biology. Efforts in this area at the university focus on the development of new advanced computing solutions that use the most modern HPC and big data technologies to improve genomic data analysis and visualization.

To that point, there is great interest in developing new algorithms and bioinformatic tools for the analysis of genomic data that enable researchers to understand what biological processes, genes, or variants are involved in different phenotypes or diseases.

One effort being carried out is the work with Genomics England. Genomics England is a company set up and owned by the UK Department of Health to run the 100,000 Genomes Project, which aims to sequence 100,000 genomes from NHS patients with a rare disease and their families, and patients with cancer.

To help support the goals of the project and others like it, the Dell EMC | Cambridge HPC Solution Centre is developing a next-generation population platform that will take large amounts of genomics data and let researchers look for biological relevance within that data, helping to identify the causes of and treatments for different diseases.

Overall, the Centre is exploring ways to make HPC capabilities available to a much larger set of researchers than was possible in the past. Achieving this goal requires an innovative, flexible, scalable, and highly efficient HPC infrastructure. To that end, by working with Dell EMC and Intel, the HPC Solution Centre has made significant contributions to HPC system management, storage architecture, remote visualization, and green computing. This in turn has given university research staff a way to create new health care developments and unlock new insights in high energy physics and astronomy.

For more information about the Dell EMC | Cambridge HPC Solution Centre, visit http://www.dell.com/learn/uk/en/ukbsdt1/hpcc/cambridge-hpc-solution-centre

For more information about accelerating life sciences research with new HPC platforms, visit www.dell.com/hpc

For more information on Code Modernization with the Life Sciences Community, visit www.intel.com/healthcare/optimizecode 

The post Democratized HPC Enables Innovative Software Platforms at University of Cambridge appeared first on HPCwire.

2017 Women in IT Networking at SC Application is Now Open

Mon, 03/06/2017 - 07:51

March 6 — The WINS program is currently seeking qualified women U.S. candidates in their early to mid-career to join the SCinet volunteer workforce for SC17. Selected candidates will receive full travel support and mentoring by well-known engineering experts in the research and education community.

Created each year for the SC conference, SCinet brings to life a very high-capacity network that supports the revolutionary applications and experiments that are a hallmark of the SC conference. SCinet will link the convention center to research and commercial networks around the world.

SC is dedicated to supporting an inclusive environment. In 2015, SCinet partnered with a team of collaborators to create the pilot SCinet Diversity Program, developed as a means for addressing the prevalent gender gap that exists in Information Technology (IT), particularly in the fields of network engineering and high performance computing. The success of the 2015 program led to an official three-year award by the National Science Foundation (NSF) and DOE-ESnet, titled Women in IT Networking at SC (WINS). WINS is a joint effort between the Energy Sciences Network (ESnet), the Keystone Initiative for Network Based Education and Research (KINBER), the University Corporation for Atmospheric Research (UCAR) and SCinet.

SCinet provides an ideal “apprenticeship” opportunity for engineers and technologists looking for direct access to the most cutting-edge network hardware and software, while working side-by-side with the world’s leading network and software engineers, and the top network technology vendors.

More than 15 teams make up SCinet, each focused on a specific area of expertise involved in setting up and operating a research network. Selected candidates will be matched with a mentor in one of these areas based on interest and background. Some learning and training opportunities include (but are not limited to):

  • Operating and maintaining traditional IT services for SCinet;
  • Installing fiber optic network connections;
  • Installing and configuring wireless access points;
  • Installing and configuring wired network devices for conference meeting rooms;
  • Managing internet routing protocols;
  • Configuring wide-area network connections to national telecom providers;
  • Supporting conference attendees as well as high-performance computing (HPC) and high-performance networking demonstrations; and
  • Participating in cybersecurity activities focused on prevention, detection, and countermeasures to protect the resources of the conference.

For additional information on the WINS program, please visit the WINS website.

Source: SC17

The post 2017 Women in IT Networking at SC Application is Now Open appeared first on HPCwire.

Fujitsu to Build RIKEN’s Deep Learning System

Mon, 03/06/2017 - 06:58

TOKYO, Japan, March 6 — Fujitsu today announced that it has received an order from RIKEN for a “Deep learning system” that will be one of the largest-scale supercomputers in Japan specializing in AI research. The RIKEN Center for Advanced Intelligence Project will use the new system, scheduled to go online in April 2017, as a platform to accelerate R&D into AI technology.

The system’s total theoretical processing performance will reach 4 petaflops. It will comprise two server architectures, with 24 NVIDIA DGX-1 servers and 32 FUJITSU Server PRIMERGY RX2530 M2 servers, along with a high-reliability, high-performance storage system.

Fujitsu is leveraging the extensive know-how that it and Fujitsu Laboratories Ltd. have in high-performance computing development and AI research to build and operate one of Japan’s most advanced AI research systems. The company will also provide support for R&D that utilizes the system, thereby contributing to the creation of a future society in which AI is used to find solutions to a variety of social issues.

About the Deep Learning System

The new system will be used at the Center for Advanced Intelligence Project to accelerate R&D into base technologies for innovative AI and into technologies that support fields such as regenerative medicine and manufacturing, with the aim of eventually enabling real-world solutions to social issues, including healthcare for the elderly, management of aging infrastructure, and response to natural disasters. The Center for Advanced Intelligence Project, which has an integrated R&D system covering everything from basic research to public implementation, conducts joint research with researchers at universities, research institutes, clinical medical organizations, and industry. The new system will support AI researchers in Japan and is expected to become a core system that spurs dramatic advances in research toward innovative AI for the world.

Overview

The system comprises two server architectures specialized for deep learning, using the latest CPUs and GPUs, along with a storage system; it is being installed in Fujitsu’s Yokohama datacenter, a robust facility with cutting-edge security. Along with the standard DGX-1 deep learning software environment that NVIDIA provides in a public cloud, Fujitsu has integrated a customized software environment for use on a secure on-site network. The system includes operations management functions for easily and flexibly creating and reproducing computation environments, as well as the security and reliability needed to process highly sensitive data, such as personal information and intellectual property.

Configuration

1. Computation server

With 24 NVIDIA DGX-1 servers, each including eight of the latest NVIDIA Tesla P100 accelerators and integrated deep learning software, and 32 FUJITSU Server PRIMERGY RX2530 M2 servers, the system has a total theoretical performance of more than 4 petaflops (when performing half-precision floating-point calculations).
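
As a rough sanity check on that figure, the GPU side alone accounts for roughly 4 petaflops at half precision, assuming NVIDIA’s commonly cited peak of about 21.2 teraflops FP16 per Tesla P100 (SXM2); that per-GPU number is an assumption rather than part of the announcement, and the contribution of the CPU servers is ignored here.

    # Back-of-envelope check of the "more than 4 petaflops" half-precision figure.
    # The per-GPU peak is an assumption (commonly quoted P100 SXM2 spec); the
    # 32 PRIMERGY CPU servers are ignored as a comparatively small contribution.
    dgx1_servers = 24
    gpus_per_server = 8
    p100_fp16_peak_tflops = 21.2

    gpu_total_pflops = dgx1_servers * gpus_per_server * p100_fp16_peak_tflops / 1000
    print(f"GPU half-precision peak: ~{gpu_total_pflops:.2f} petaflops")  # ~4.07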

In building the system, an early deployment and evaluation of the DGX-1 was performed at Fujitsu Laboratories.

2. Storage system

The file system runs FUJITSU Software FEFS, high-performance scalable file system software, on six FUJITSU Server PRIMERGY RX2540 M2 PC servers, eight FUJITSU Storage ETERNUS DX200 S3 storage systems, and one FUJITSU Storage ETERNUS DX100 S3 storage system to provide the I/O processing demanded by deep learning analysis.

“NVIDIA DGX-1, the world’s first all-in-one AI supercomputer, is designed to meet the enormous computational needs of AI researchers. Powered by 24 DGX-1s, the RIKEN Center for Advanced Intelligence Project’s system will be the most powerful DGX-1 customer installation in the world. Its breakthrough performance will dramatically speed up deep learning research in Japan, and become a platform for solving complex problems in healthcare, manufacturing and public safety,” said Jim McHugh, VP and General Manager at NVIDIA.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 159,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702; ADR: FJTSY) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see http://www.fujitsu.com.

Source: Fujitsu

The post Fujitsu to Build RIKEN’s Deep Learning System appeared first on HPCwire.

Bright Computing Supplies Bright Cluster Manager to Fox Chase Cancer Center

Mon, 03/06/2017 - 06:52

SAN JOSE, Calif., March 6 — Bright Computing, the leading provider of hardware-agnostic cluster and cloud management software, announces it has supplied its Bright Cluster Manager solution to the Fox Chase Cancer Center for use in a new high-performance computing (HPC) cluster. The 30-node cluster supports bioinformatics initiatives for Fox Chase’s world-class cancer research programs. Fox Chase focuses on RNA sequencing and variant calling in tumor samples, as well as clustered regularly interspaced short palindromic repeats (CRISPR) data processing, protein structure prediction and molecular modeling, and computational docking investigations for drug discovery. Fox Chase Cancer Center chose Bright Cluster Manager for its easy-to-use graphical user interface (GUI), which gives researchers point-and-click cluster management rather than the traditional command-line interface with which many researchers struggle.

HPC consultants at Data in Science Technologies (DST) worked with Fox Chase Cancer Center to develop a reference architecture and a toolset for the cluster that would help them meet their current and future research needs for conducting large scale simulations. DST originally selected Bright Cluster Manager to manage deployment of images on the new HPC cluster nodes. After deployment, DST was engaged as operational manager for the cluster, and continues to use Bright Cluster Manager for monitoring cluster health and basic cluster administration.

“We evaluated many cluster management tools and Bright was the obvious choice, especially since Fox Chase was transitioning from two different Linux systems,” said Debbie Willbanks, senior partner, at Data in Science Technologies. “Activities that are difficult in other cluster management tools are easy with Bright, providing a turnkey solution for cluster management. Using a GUI and a few mouse clicks, administrators can easily accomplish tasks that were previously command line driven and requiring numerous ad hoc tools. There is now consistency in our approach to any problem – glitches are easy to diagnose and solve.”

According to Michael Slifker, senior program analyst at Fox Chase’s Biostatistics and Bioinformatics Facility, “Bright Cluster Manager helped meet the need for a smooth, seamless and expedited transition to our new cluster from the older cluster, which had become difficult to manage. This helped us meet our deadline for a very large and important research grant application. With Bright Cluster Manager, our researchers are much more efficient, while the ability to quickly provision new nodes makes the research team more productive. It all helps in our quest to provide excellent computational and bioinformatics resources for researchers at Fox Chase Cancer Center.”

The Bright Cluster Management GUI was designed to offer an intuitive interface that is easy to use for beginners, while not compromising on efficiency and completeness for experienced users. The current demand for HPC administrators far outweighs the supply, so Bright helps bridge the gap. Now researchers can do what they do best – discover new drugs that save lives.

About Bright Computing

Bright Computing is a global leader in cluster and cloud infrastructure automation software. Bright Cluster Manager, Bright Cluster Manager for Big Data, and Bright OpenStack provide a unified approach to installing, provisioning, configuring, managing, and monitoring HPC clusters, big data clusters, and OpenStack clouds. Bright’s products are currently deployed in more than 650 data centers around the world. Bright Computing’s customer base includes global academic, governmental, financial, healthcare, manufacturing, oil/gas/energy, and pharmaceutical organizations such as Boeing, Intel, NASA, Stanford University, and St. Jude Children’s Research Hospital. Bright partners with Amazon, Cray, Dell, Intel, Nvidia, SGI, and other leading vendors to deliver powerful, integrated solutions for managing advanced IT infrastructure such as high performance computing clusters, big data clusters, and OpenStack-based private clouds. www.brightcomputing.com

Source: Bright Computing

The post Bright Computing Supplies Bright Cluster Manager to Fox Chase Cancer Center appeared first on HPCwire.

IBM Building Universal Quantum Computers for Business and Science

Mon, 03/06/2017 - 06:47

March 6 — IBM (NYSE: IBM) announced today an industry-first initiative to build commercially available universal quantum computing systems. “IBM Q” quantum systems and services will be delivered via the IBM Cloud platform. While technologies that currently run on classical computers, such as Watson, can help find patterns and insights buried in vast amounts of existing data, quantum computers will deliver solutions to important problems where patterns cannot be seen because the data doesn’t exist and the possibilities that you need to explore to get to the answer are too enormous to ever be processed by classical computers.

IBM also announced today:

  • The release of a new API (Application Program Interface) for the IBM Quantum Experience that enables developers and programmers to begin building interfaces between its existing five quantum bit (qubit) cloud-based quantum computer and classical computers, without needing a deep background in quantum physics.
  • The release of an upgraded simulator on the IBM Quantum Experience that can model circuits with up to 20 qubits. In the first half of 2017, IBM plans to release a full SDK (Software Development Kit) on the IBM Quantum Experience for users to build simple quantum applications and software programs.

The IBM Quantum Experience enables anyone to connect to IBM’s quantum processor via the IBM Cloud, to run algorithms and experiments, work with the individual quantum bits, and explore tutorials and simulations around what might be possible with quantum computing.
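
For a sense of what a small experiment looks like, the sketch below builds and simulates a two-qubit entangling circuit in Python using today’s open-source Qiskit library rather than the 2017 interface described here; it is purely illustrative and runs locally without an IBM Cloud account.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    # Build a two-qubit Bell-state circuit: Hadamard on qubit 0, then CNOT.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)

    # Simulate the circuit locally; the Quantum Experience also lets users
    # submit circuits like this to IBM's cloud-hosted quantum processor.
    state = Statevector.from_instruction(qc)
    print(state.probabilities_dict())  # roughly {'00': 0.5, '11': 0.5}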

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of Hybrid Cloud and director for IBM Research. “Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries.”

IBM intends to build IBM Q systems to expand the application domain of quantum computing. A key metric will be the power of a quantum computer expressed by the “Quantum Volume”, which includes the number of qubits, quality of quantum operations, qubit connectivity and parallelism. As a first step to increase Quantum Volume, IBM aims at constructing commercial IBM Q systems with ~50 qubits in the next few years to demonstrate capabilities beyond today’s classical systems, and plans to collaborate with key industry partners to develop applications that exploit the quantum speedup of the systems.

IBM Q systems will be designed to tackle problems that are currently seen as too complex and exponential in nature for classical computing systems to handle. One of the first and most promising applications for quantum computing will be in the area of chemistry. Even for simple molecules like caffeine, the number of quantum states in the molecule can be astoundingly large – so large that all the conventional computing memory and processing power scientists could ever build could not handle the problem.
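
To get a feel for the scale the announcement alludes to, here is a small illustrative calculation (our own, not IBM’s) of the memory a classical machine would need just to store the full state vector of a modest number of qubits; the 50-qubit figure echoes IBM’s stated near-term target, and 16 bytes per complex amplitude is a conventional assumption.

    # Rough illustration of why simulating quantum states strains classical
    # memory: a full state vector of n qubits needs 2**n complex amplitudes.
    n_qubits = 50
    bytes_per_amplitude = 16  # complex double precision, a common convention
    total_bytes = (2 ** n_qubits) * bytes_per_amplitude
    print(f"~{total_bytes / 1e15:.0f} petabytes for {n_qubits} qubits")  # ~18 PB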

IBM’s scientists have developed techniques to efficiently explore the simulation of chemistry problems on quantum processors (https://arxiv.org/abs/1701.08213 and https://arxiv.org/abs/1612.02058) and experimental demonstrations of various molecules are in progress. In the future, the goal will be to scale to even more complex molecules and try to predict chemical properties with higher precision than possible with classical computers.

Future applications of quantum computing may include:

  • Drug and Materials Discovery: Untangling the complexity of molecular and chemical interactions leading to the discovery of new medicines and materials;
  • Supply Chain & Logistics: Finding the optimal path across global systems of systems for ultra-efficient logistics and supply chains, such as optimizing fleet operations for deliveries during the holiday season;
  • Financial Services: Finding new ways to model financial data and isolating key global risk factors to make better investments;
  • Artificial Intelligence: Making facets of artificial intelligence such as machine learning much more powerful when data sets can be too big such as searching images or video; or
  • Cloud Security: Making cloud computing more secure by using the laws of quantum physics to enhance private data safety.

“Classical computers are extraordinarily powerful and will continue to advance and underpin everything we do in business and society. But there are many problems that will never be penetrated by a classical computer. To create knowledge from much greater depths of complexity, we need a quantum computer,” said Tom Rosamilia, senior vice president of IBM Systems. “We envision IBM Q systems working in concert with our portfolio of classical high-performance systems to address problems that are currently unsolvable, but hold tremendous untapped value.”

IBM’s roadmap to scale to practical quantum computers is based on a holistic approach to advancing all parts of the system. IBM will leverage its deep expertise in superconducting qubits, complex high performance system integration, and scalable nanofabrication processes from the semiconductor industry to help advance the quantum mechanical capabilities. Also, the developed software tools and environment will leverage IBM’s world-class mathematicians, computer scientists, and software and system engineers.

“As Richard Feynman said in 1981, ‘…if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.’ This breakthrough technology has the potential to achieve transformational advancements in basic science, materials development, environmental and energy research, which are central to the missions of the Department of Energy (DOE),” said Steve Binkley, deputy director of science, US Department of Energy. “The DOE National Labs have always been at the forefront of new innovation, and we look forward to working with IBM to explore applications of their new quantum systems.”

Growing the IBM Q Ecosystem

IBM believes that collaborating and engaging with developers, programmers and university partners will be essential to the development and evolution of IBM’s quantum computing systems.

Since its launch less than a year ago, the IBM Quantum Experience has seen about 40,000 users run over 275,000 experiments. It has become an enablement tool for scientists in over 100 countries and, to date, 15 third-party research papers based on experiments run on the Quantum Experience have been posted to arXiv, with five published in leading journals.

IBM has worked with academic institutions, such as MIT, the Institute for Quantum Computing at the University of Waterloo, and École polytechnique fédérale de Lausanne (EPFL) to leverage the IBM Quantum Experience as an educational tool for students. In collaboration with the European Physical Society, IBM Research – Zurich recently hosted students for a full-day workshop to learn how to experiment with qubits using the IBM Quantum Experience.

“Unlocking the usefulness of quantum computing will require hands-on experience with real quantum computers,” said Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT. “For the Fall 2016 semester of the MITx Quantum Information Science II course, we featured IBM’s Quantum Experience as part of the online curriculum for over 1,800 participants from around the world. They were able to run experiments on IBM’s quantum processor and test out for themselves quantum computing principles and theories they were learning.”

In addition to working with developers and universities, IBM has been engaging with industrial partners to explore the potential applications of quantum computers. Any organization interested in collaborating to explore quantum applications can apply for membership to the IBM Research Frontiers Institute, a consortium that develops and shares a portfolio of ground-breaking computing technologies and evaluates their business implications. Founding members of the Frontiers Institute include Samsung, JSR, Honda, Hitachi Metals, Canon, and Nagase.

“We heavily invest in R&D and have a strong interest in how emerging technologies such as quantum computing will impact the future of manufacturing,” said Nobu Koshiba, President of JSR, a leading chemical and materials company in Japan. “Our pipelines of innovations range from synthetic rubbers for tires to semiconductor and display materials, along with products in the life sciences, energy and environmental sectors. By having exposure to how quantum computing can provide new computational capability to accelerate materials discovery, we believe this technology could have a lasting impact on our industry and specifically our ability to provide faster solutions to our customers.”

For more information on IBM’s universal quantum computing efforts, visit www.ibm.com/ibmq.

For more information on IBM Systems, visit www.ibm.com/systems.

About IBM Research 

For more than seven decades, IBM Research has defined the future of information technology with more than 3,000 researchers in 12 labs located across six continents. Scientists from IBM Research have produced six Nobel Laureates, 10 U.S. National Medals of Technology, five U.S. National Medals of Science, six Turing Awards, 19 inductees in the National Academy of Sciences and 20 inductees into the U.S. National Inventors Hall of Fame.

Source: IBM

The post IBM Building Universal Quantum Computers for Business and Science appeared first on HPCwire.

Exxact Introduces New Spectrum Development Workstation Featuring NVIDIA Quadro GP100 and NVLink Technology

Mon, 03/06/2017 - 06:42

FREMONT, Calif., March 6 — Exxact Corporation, a leading provider of high performance computing solutions and professional workstation graphics, announced today its planned production of the Spectrum TXN005-0128N Development Workstation. The system, expected to be among the first powered with the newly announced Pascal-based NVIDIA Quadro GP100, features NVIDIA NVLink technology, which scales multiple GPUs for unmatched desktop compute capability and reduced simulation times.

“We are pleased to reveal our Spectrum Development Workstation featuring Quadro GP100 and NVIDIA NVLink technology,” said Jason Chen, Vice President of Exxact Corporation. “The Spectrum TXN005-0128N will utilize multiple Quadro GP100s interconnected via NVLink to provide high-end graphical and compute performance, unified to streamline workflows for engineers, designers, and researchers alike.”

“Artificial intelligence, virtual reality and photorealism are central to today’s most demanding workflows,” said Bob Pette, Vice President of Professional Visualization at NVIDIA. “Equipped with Quadro GP100 GPU technology, the Spectrum Development Workstation will provide an enterprise-grade visual computing platform invaluable to those who require extreme render and compute capabilities for larger datasets.”

The Spectrum TXN005-0128N will feature an Intel Core i7 or Xeon E5-2600/1600 v3/v4 processor and offer either one or two pairs of Quadro GP100 cards interconnected via NVLink, a high-bandwidth, energy-efficient link that enables ultra-fast communication directly between the GPUs. NVLink technology allows data sharing at rates five to 12 times faster than the traditional PCIe Gen3 interconnect, creating dramatic speed increases in application performance. A pair of Quadro GP100 GPUs in the Spectrum TXN005-0128N will increase the effective memory footprint and scale application performance by enabling GPU-to-GPU data transfers at rates up to 80 GB/s (bidirectional).

Equipped with the latest Quadro GP100 technology, the Spectrum TXN005-0128N will enable engineers, designers, researchers, and artists to:

  • Unify simulation, HPC, rendering and design – A single Quadro GP100 combines unprecedented double precision performance with 16GB of high-bandwidth memory (HBM2) so users can conduct simulations during the design process and gather realistic multiphysics simulations faster than ever before. By combining two of the GPUs with NVLink technology, customers can scale to 32GB of HBM2 to create a massive visual computing solution on a single workstation.
  • Explore deep learning – One Quadro GP100 provides more than 20 TFLOPS of 16-bit floating point precision computing — making it an ideal development platform to enable deep learning in Windows and Linux environments.
  • Incorporate VR into design and simulation workflows – The “VR Ready” Quadro GP100 has the power to create detailed, lifelike, and immersive environments. Larger, more complex designs can be experienced at scale.
  • Reap the benefits of photorealistic design – Pascal-based Quadro GPUs can render photorealistic images more than 18 times faster than a CPU.
  • Create expansive visual workspaces – Visualize data in high resolution and HDR color on up to four 5K displays.
  • Efficiently run a variety of compute and visualization applications – With single and double precision performance, the Spectrum TXN005-0128N can be used for applications such as SIMULIA Abaqus/Standard, ANSYS Mechanical, CST STUDIO, ANSYS Workbench, Altair HyperWorks, and Siemens NX SimCenter.

The Quadro GP100 GPU is the most versatile computing powerhouse for professional desktops. Able to deliver more than 5 TFLOPS of double-precision (FP64), 10 TFLOPS of single-precision (FP32), and 20 TFLOPS of half-precision (FP16) performance, the card supports a wide range of compute-intensive workloads. It is equipped with 16GB of high-bandwidth memory (HBM2), some of the fastest graphics memory currently available (717GB/s peak bandwidth), making it an ideal platform for latency-sensitive applications that handle large datasets. With 16-bit floating point precision computing, the Quadro GP100 provides double the throughput and reduces storage requirements, enabling the training and deployment of larger neural networks.

The Exxact Spectrum TXN005-0128N Development Workstation featuring the Quadro GP100 with NVLink technology is available for order now. The NVIDIA Quadro GP100 is also now available from Exxact Corporation. For more information or inquiries, please contact our sales department here.

About Exxact Corporation

Exxact develops and manufactures innovative computing platforms and solutions that include workstation, server, cluster, and storage products developed for Life Sciences, HPC, Big Data, Cloud, Visualization, Video Wall, and AV applications. With a full range of engineering and logistics services, including consultancy, initial solution validation, manufacturing, implementation, and support, Exxact enables their customers to solve complex computing challenges, meet product development deadlines, improve resource utilization, reduce energy consumption, and maintain a competitive edge. Visit Exxact Corporation at www.exxactcorp.com.

Source: Exxact Corporation

The post Exxact Introduces New Spectrum Development Workstation Featuring NVIDIA Quadro GP100 and NVLink Technology appeared first on HPCwire.

ISC High Performance Conference Lays Out New Diversity Goals

Fri, 03/03/2017 - 10:09

In this contributed Q&A, ISC’s Nages Sieslack interviews Martin Meuer and Thomas Meuer, managing directors of the ISC Group, about the diversity initiatives and goals that were introduced this year. The event group is putting in a serious effort to increase participation of women and other underrepresented groups at its annual conference, the next iteration of which takes place June 18-22 in Frankfurt, Germany.

What Does Diversity Mean to the ISC High Performance Conference?

Thomas Meuer: Diversity is multifaceted. In the context of a conference, the term can refer to speakers, participants or exhibitors, and can include aspects such as age, gender, culture or geographical origin, sexual orientation, and more. A balance in attendance between industry and academia also reflects diversity, as does the composition of the program.

Martin Meuer: We do strive to address many of the above-mentioned facets and players in the community, but of course we also realize that we can only directly influence aspects under our purview. We can and are, for example, ensuring a greater gender balance in the appointment of ISC chairs, session chairs and speakers. Likewise, we are striving to compile a diverse program that we believe will appeal to the HPC and AI communities. We have started ensuring that the ISC exhibition offers all the important components of HPC. This year we shall also have businesses like Amazon, Google and Baidu exhibiting at the show.

Why is diversity important to a technical conference?

Martin Meuer: A technical conference is basically a large user group meeting that brings together different players, be it the people driving businesses or the researchers advancing technologies. If a certain segment weren’t at the industry conference, its contribution would be missed, which has consequences for the development of the community, and vice versa.

Imagine the Chinese or Japanese HPC community not attending ISC – that would adversely affect knowledge sharing and collaboration on international exascale initiatives. Or imagine the women in this field not being actively present at HPC conferences – female researchers would lose representation at community gatherings.

As we see it, there is no doubt whatsoever that technical communities like the HPC community benefit greatly from having topics viewed and reviewed from different perspectives. Ample research and published studies show that diversity breeds innovation, attracts talent, helps businesses perform better, and provides a stronger sense of community.

Did diversity grow organically at ISC or is it a recent effort?

Thomas Meuer: We have always aimed for a balance in the conference program, mostly with regard to the geographical origin of the speakers. Over the last 30 years, we have welcomed attendees from over 80 countries.

There is still great potential for improving the representation of female researchers, scientists and business leaders at ISC. Women generally remain underrepresented in the STEM workforce, including HPC. This is apparent from our published data, where only 10 to 15 percent of past attendees are women. We wanted to change that in 2017, and not in baby steps. We engaged in many meetings with Toni Collis, the director of Women in HPC (WHPC), and thanks to her support and guidance, we were able to introduce specific goals, which are now published online.

How is diversity reflected in the 2017 conference?

Martin Meuer: First of all, we have introduced a compliance program, and our goal is to fill 25 percent of the committee chair, deputy chair and session chair positions with women. The next step was to urge individual session chairs to ensure that at least one-third of their invited speakers are female experts. This was a tough call for most session chairs. In some areas, for example industrial HPC topics, it is almost impossible to find female speakers to address particular topics that are important to the B2B manufacturing industry.

However, we are very pleased to introduce three distinguished female researchers to address the topics of data networks and data analytics this year. For those who missed our announcement, this year’s conference keynote will be delivered by data scientist Dr. Jennifer Tour Chayes from Microsoft Research.

To encourage greater diversity in the research program, we established the double-blind review process to handle the following submissions: research papers, research posters and the PhD forum.

Finally we are bringing in the deep learning community to ISC by integrating a new program element – the Deep Learning Day on Wednesday, June 21. This program will enable HPC practitioners, the deep learning community, and the user community to engage with each other.

In light of the travel restrictions the US government has attempted to impose on certain countries, do you anticipate any impact on the ISC conference?

Thomas Meuer: We don’t wish to speculate on external factors, but we can tell you that we have received a record-high number of submissions for our BoF and tutorial sessions. Maybe this is an indication that we will reach a new ISC attendance record.

However we have been hearing from a number of trusted partners in the US that they might increase their attendance at HPC conferences outside the country if travel restrictions impede their ability to meet and collaborate with foreign experts at US-based events.

Let me take this interview as an opportunity to urge all groups within the HPC community to make use of the ISC platform. Get in touch with our chairs or with us directly to recommend methods, or even programs, to promote diversity.

The post ISC High Performance Conference Lays Out New Diversity Goals appeared first on HPCwire.

Nominations Open for PRACE Ada Lovelace Award for HPC 2017

Fri, 03/03/2017 - 05:07

March 3 — From May 16-18, 2017, PRACE will organise the fourth edition of its Scientific and Industrial Conference – PRACEdays17 – under the motto “HPC for Innovation: when Science meets Industry” in Barcelona, Spain, where the PRACE Ada Lovelace Award for HPC will be presented. PRACE initiated this Award in 2016 at PRACEdays16, and Zoe Cournia from Greece was the inaugural awardee.

PRACE is happy to receive your nomination for the Award. The nomination process is fully open via this Call for Nominations published on the PRACE website and via different social media channels. Nominations should be sent to submissions-pracedays@prace-ri.eu by Friday March 17, 2017.

The winner of the Award will be invited to participate in the concluding Panel Session at PRACEdays17, and will receive a cash prize of € 1 000 as well as a certificate and an engraved crystal trophy.

A nomination should include the following:

  • Name, address, phone number, and email address of nominator (person making the nomination). The nomination should be submitted by a recognised member of the HPC community.
  • Name, address, and email address of the nominee (person being nominated).
  • The nomination statement, addressing why the nominee should receive this award, should describe the first two criteria (see below) in detail, within half a page (max. 300 words) each.
  • A copy of the nominee’s CV and certificates should be provided, listing publications (with an indication of the h-index), honours, etc.

The selection criteria are:

  1. Outstanding impact on HPC research, computational science or service provision at a global level.
  2. Role model for women beginning careers in HPC.
  3. The winner of this award must be a young female scientist (PhD +10 years max, excluding parental leave)[1] who is currently working in Europe or has been working in Europe during the past three years.

[1] ERC rules will apply: the 10 years start on the PhD title date and end with the date when the award is given; a flat rate of 18 months is added due to parental leave.

The Selection Committee is composed of:

  • Jürgen Kohler: ex-Chair of the PRACE Industrial Advisory Committee, Daimler AG, Germany, and member of the Selection Committee since 2016
  • Mateo Valero: SC15 award winner for outstanding contribution to HPC, Director of the Barcelona Supercomputing Center (BSC), Spain, and member of the Selection Committee since 2016
  • Richard Kenway: ex-Chair of the PRACE Scientific Steering Committee, Tait Professor of Mathematical Physics, UK, and Member of the Selection Committee since 2016
  • Laura Grigori: Member of the PRACE Scientific Steering Committee, and Director of Research at INRIA/University Pierre and Marie Curie, France
  • Christoph Schütte: Vice-Chair of the PRACE Scientific Steering Committee, Professor of Mathematics and Computer Science at the Free University of Berlin, Germany
  • Suzanne Talon: CEO of Calcul Québec, Canada

The committee will (partially) change every 2 years.

About PRACE

The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-312763 and from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreements 653838 and 730913. For more information, see www.prace-ri.eu.

Source: PRACE

The post Nominations Open for PRACE Ada Lovelace Award for HPC 2017 appeared first on HPCwire.
