HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

HPC Compiler Company PathScale Seeks Life Raft

Thu, 03/23/2017 - 10:30

HPCwire has learned that HPC compiler company PathScale has fallen on difficult times and is asking the community for help or actively seeking a buyer for its assets. A letter from the company with a listing of assets is included at the end of the article.

PathScale represents one of a handful of compiler technologies designed for high performance computing, and it is one of the last independent HPC compiler companies. In an interview with HPCwire, PathScale Chief Technology Officer and owner Christopher Bergström attributes the company’s financial insolvency to its heavy involvement in Intel-alternative architectures.

“Unfortunately in recent years, we bet big on ARMv8 and the partner ecosystem and the hardware has been extremely disappointing,” said Bergström. “Once partners saw how low their hardware performed on HPC workloads they decided to pull back on their investment in HPC software.”

Due to confidentiality agreements, he’s limited to speaking in generalities but argues that the currently available ARMv8 processors deliver very weak performance for HPC workloads.

“ARM is possibly aware of this issue and as a result has introduced SVE (Scalable Vector Extensions),” Bergström told us. “Unfortunately, they focused more on the portability side of vectorization and the jury is still out if they can deliver competitive performance. SVE’s flexible design and freedom to change vector width on the fly will possibly impact the ability to write code tuned specifically for a target processor. In addition, design of the hardware architecture blocks software optimizations that are very common and potentially critical for HPC. And based on the publicly available roadmaps, the floating point to power ratio is not where it needs to be for HPC workloads in order to effectively compete against Intel or GPUs.”

Before coming to these conclusions, PathScale had a statement-of-work contract with Cavium to help support optimizing compilers for its ThunderX processors. When that funding was pulled, PathScale also lost its ability to gain and support customers for ARMv8. The company looked for funders and had conversations with stakeholders in the private and public spheres, but the money just wasn’t available.

“Show me a company in the HPC space wanting to invest,” said Bergström. “They’re not investing in compiler technology.”

ARM, which was scooped up by Japanese company SoftBank in September 2016 for $31 billion, may be the exception, but according to Bergström the PathScale technology, while it significantly leverages LLVM, doesn’t perfectly align with what they need.

Bergström brokered the deal with Cray that resurrected PathScale from the ashes of SiCortex in 2009 (more on this below) and he’s proud of what he and his team have accomplished over the last seven years. “We love compilers, we love the technology. We want to continue developing this stuff. The team is rock solid, we’re like family. We live, eat and breathe compilers, but we’re not on a sustainable business path and we need a bailout or help refocusing. We need people who understand that these kinds of technologies add value and LLVM by itself isn’t a panacea.”

Addison Snell, CEO of HPC analyst firm Intersect360 Research, shared some additional perspective on the market dynamics at play for independent tools vendors. “In the Beowulf era, clusters were all mostly the same, so what little differentiation there was came from things like development environments and job management software,” he said. “Independent middleware companies of all types flourished. Now we’re trending back toward an era of architectural specialization. Users are shopping for architectures more than they’re shopping for which compiler to use for a given architecture, and acquisitions have locked up some of the previously dominant players. Vendors’ solutions will have their own integrated stacks. Free open-source versions might still exist, but there will be less room for independent middleware players.”

PathScale has a winding history that dates back to 2001 with the founding of Key Research by Lawrence Livermore alum Tom McWilliams. The company was riding the commodity cluster wave, developing clustered Linux server solutions based on a low-cost 64-bit design. In 2003, contemporaneous with the rising popularity of AMD Opteron processors, Key Research rebranded as PathScale and expanded its product line to include high-performance computing adapters and 64-bit compilers.

PathScale would then pass through a number of corporate hands. In 2006, QLogic acquired PathScale, primarily to gain access to its InfiniBand interconnect technology. The following year, the compiler assets were sold to SiCortex, which sought a solution for its MIPS-based HPC systems.

When SiCortex closed its doors in 2009, Cray bought the PathScale assets and revived the company. Under an arrangement struck with Cray, PathScale would go forward as an independent technology group with an option to buy. In March 2012, PathScale CTO Christopher Bergström acquired all assets and became the sole owner of PathScale Inc.

The PathScale toolchain currently generates code for the latest Intel processors, AMD64, AMD GPUs, Power8, ARMv8, and NVIDIA GPUs in combination with both Power8 and x86.

In a message to the community, PathScale writes:

We are evaluating all options to overcome this difficult time, including refocusing to provide training and code porting services instead of purely offering compiler licenses and optimization services. Our team deeply understands parallel programming and whether you have crazy C++ or ancient Fortran, we can likely help get it running on GPUs (NVIDIA or AMD) or vectorization targets (like Xeon Phi).

All PathScale engineers would love to continue to work on the compiler as an independent company, but we need the community to help us. We need people who believe in our technical roadmap. We need people who understand that the future exascale computing software stack will likely be complex, but that complexity and advanced optimizations will make it easier for end users. At the same time, we must be realistic and, without immediate assistance, start accepting any reasonable offer on the assets as a whole or piece by piece.

Our assets include:

  • PathScale website, trademarks and branding

  • C, C++ and Fortran compilers

  • Complete GPGPU and many-core runtime which supports OMP4 and OpenACC and is portable across multiple architectures (NVIDIA GPU, ARMv8, Power8+NVIDIA and AMD GPU)

  • Significant modifications to CLANG and LLVM to enable support for OpenACC and OpenMP and parallel programming models.

  • Complete engineering team with expertise working on CLANG and LLVM and MIPSPro.

  • Advertising credits with popular websites ($30,000)

A purchase or funding from crowdsourcing or other community event will keep a highly optimizing OpenMP and OpenACC C/C++ and Fortran compiler toolchain plus experienced development team in operation. Succinctly, PathScale preserves architectural diversity and opens the door for competition with a performant compiler for interesting architectures with OpenMP and OpenACC parallelization.

If interested please contact funding@pathscale.com.

Editor’s note: HPCwire has reached out to Cavium and ARM and we will update the article with any responses we receive.

IEEE Unveils Next Phase of IRDS to Drive Beyond Moore’s Law

Thu, 03/23/2017 - 09:25

PISCATAWAY, N.J., March 23, 2017 — IEEE today announced the next milestone phase in the development of the International Roadmap for Devices and Systems (IRDS)—an IEEE Standards Association (IEEE-SA) Industry Connections (IC) Program sponsored by the IEEE Rebooting Computing (IEEE RC) Initiative—with the launch of a series of nine white papers that reinforce the initiative’s core mission and vision for the future of the computing industry. The white papers also identify industry challenges and solutions that guide and support future roadmaps created by IRDS.

IEEE is taking a lead role in building a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software. In May 2016, IEEE announced the formation of the IRDS under the sponsorship of IEEE RC. The integration of IEEE RC and the International Technology Roadmap for Semiconductors (ITRS) 2.0 addresses mapping the ecosystem of a reborn electronics industry. The migration from ITRS to IRDS is proceeding seamlessly, as the reports produced by ITRS 2.0 form the starting point for the IRDS.

While engaging other segments of IEEE in complementary activities to assure alignment and consensus across a range of stakeholders, the IRDS team is developing a 15-year roadmap with a vision to identify key trends related to devices, systems, and other related technologies.

“Representing the foundational development stage in IRDS is the publishing of nine white papers that outline the vital and technical components required to create a roadmap,” said Paolo A. Gargini, IEEE Fellow and Chairman of IRDS. “As a team, we are laying the foundation to identify challenges and recommendations on possible solutions to the industry’s current limitations defined by Moore’s Law. With the launch of the nine white papers on our new website, the IRDS roadmap sets the path for the industry benefiting from all fresh levels of processing power, energy efficiency, and technologies yet to be discovered.”

“The IRDS has taken a significant step in creating the industry roadmap by publishing nine technical white papers,” said IEEE Fellow Elie Track, 2011-2014 President, IEEE Council on Superconductivity; Co-chair, IEEE RC; and CEO of nVizix. “Through the public availability of these white papers, we’re inviting computing professionals to participate in creating an innovative ecosystem that will set a new direction for the greater good of the industry. Today, I open an invitation to get involved with IEEE RC and the IRDS.”

The series of white papers delivers the starting framework of the IRDS roadmap and—through the sponsorship of IEEE RC—will inform the various roadmap teams in the broader task of mapping the devices’ and systems’ ecosystem.

“IEEE is the perfect place to foster the IRDS roadmap and fulfill what the computing industry has been searching for over the past decades,” said IEEE Fellow Thomas M. Conte, 2015 President, IEEE Computer Society; Co-chair, IEEE RC; and Professor, Schools of Computer Science, and Electrical and Computer Engineering, Georgia Institute of Technology. “In essence, we’re creating a new Moore’s Law. And we have so many next-generation computing solutions that could easily help us reach uncharted performance heights, including cryogenic computing, reversible computing, quantum computing, neuromorphic computing, superconducting computing, and others. And that’s why the IEEE RC Initiative exists: creating and maintaining a forum for the experts who will usher the industry beyond the Moore’s Law we know today.”

The IRDS leadership team hosted a winter workshop and kick-off meeting at the Georgia Institute of Technology on 1-2 December 2016. Key discoveries from the workshop included the international focus teams’ plans and focus topics for the 2017 roadmap, top-level needs and challenges, and linkages among the teams. Additionally, the IRDS leadership invited presentations from the European and Japanese roadmap initiatives. This resulted in the 2017 IRDS global membership expanding to include team members from the “NanoElectronics Roadmap for Europe: Identification and Dissemination” (NEREID) sponsored by the European Semiconductor Industry Association (ESIA), and the “Systems and Design Roadmap of Japan” (SDRJ) sponsored by the Japan Society of Applied Physics (JSAP).

The IRDS team and its supporters will convene 1-3 April 2017 in Monterey, California, for the Spring IRDS Workshop, which is part of the 2017 IEEE International Reliability Physics Symposium (IRPS). The team will meet again for the Fall IRDS Conference—in partnership with the 2017 IEEE International Conference on Rebooting Computing (ICRC)—scheduled for 6-7 November 2017 in Washington, D.C. More information on both events can be found here: http://irds.ieee.org/events.

IEEE RC is a program of IEEE Future Directions, designed to develop and share educational tools, events, and content for emerging technologies.

IEEE-SA’s IC Program helps incubate new standards and related products and services, by facilitating collaboration among organizations and individuals as they hone and refine their thinking on rapidly changing technologies.

About the IEEE Standards Association

The IEEE Standards Association, a globally recognized standards-setting body within IEEE, develops consensus standards through an open process that engages industry and brings together a broad stakeholder community. IEEE standards set specifications and best practices based on current scientific and technological knowledge. The IEEE-SA has a portfolio of over 1,100 active standards and more than 500 standards under development. For more information visit the IEEE-SA website.

About IEEE

IEEE is the largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. Learn more at http://www.ieee.org.

Source: IEEE

EDEM Brings GPU-Optimized Solver to the Cloud with Rescale

Thu, 03/23/2017 - 08:16

SAN FRANCISCO, Calif., March 23, 2017 — Rescale and EDEM are pleased to announce that the EDEM GPU solver engine is now available on Rescale’s ScaleX platform, a scalable, on-demand cloud platform for high-performance computing. The GPU solver, which was a highlight of the latest release of EDEM, enables performance increases from 2x to 10x compared to single-node, CPU-only runs.

EDEM offers Discrete Element Method (DEM) simulation software for virtual testing of equipment that processes bulk solid materials in the mining, construction, and other industrial sectors. EDEM software has been available on Rescale’s ScaleX platform since July 2016. Richard LaRoche, CEO of EDEM, commented: “The introduction of the EDEM GPU solver has made a key impact on our customers’ productivity by enabling them to run larger simulations faster. Our partnership with Rescale means more users will be able to harness the power of the EDEM engine by accessing the market’s latest GPUs through Rescale’s cloud platform.”

The addition of an integrated GPU solver to Rescale gives users shorter time-to-answer and enables a deeper impact on design innovation. To Rescale, the addition of EDEM’s GPU solver also signals a strengthening partnership. “Rescale’s GPUs are the cutting edge of compute hardware, and EDEM is ahead of the curve in optimizing their software to leverage GPU capabilities. We are proud to be their partner of choice to bring this forward-thinking simulation solution to the cloud, bringing HPC within easy reach of engineers everywhere,” said Rescale CEO Joris Poort.

About EDEM

EDEM is the market-leading Discrete Element Method (DEM) software for bulk material simulation. EDEM software is used for ‘virtual testing’ of equipment that handles or processes bulk materials in the manufacturing of mining, construction, off-highway and agricultural machinery, as well as in the mining and process industries. Blue-chip companies around the world use EDEM to optimize equipment design, increase productivity, reduce costs of operations, shorten product development cycles and drive product innovation. In addition EDEM is used for research at over 200 academic institutions worldwide. For more information visit: www.edemsimulation.com.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale

Google Launches New Machine Learning Journal

Wed, 03/22/2017 - 10:07

On Monday, Google announced plans to launch a new peer-reviewed journal and “ecosystem” for machine learning. Writing on the Google Research Blog, Shan Carter and Chris Olah described the project as follows:

“Science isn’t just about discovering new results. It’s also about human understanding. Scientists need to develop notations, analogies, visualizations, and explanations of ideas. This human dimension of science isn’t a minor side project. It’s deeply tied to the heart of science.

“That’s why, in collaboration with OpenAI, DeepMind, YC Research, and others, we’re excited to announce the launch of Distill, a new open science journal and ecosystem supporting human understanding of machine learning. Distill is an independent organization, dedicated to fostering a new segment of the research community.

“Modern web technology gives us powerful new tools for expressing this human dimension of science. We can create interactive diagrams and user interfaces that enable intuitive exploration of research ideas. Over the last few years we’ve seen many incredible demonstrations of this kind of work.

“Unfortunately, while there are a plethora of conferences and journals in machine learning, there aren’t any research venues that are dedicated to publishing this kind of work. This is partly an issue of focus, and partly because traditional publication venues can’t, by virtue of their medium, support interactive visualizations. Without a venue to publish in, many significant contributions don’t count as “real academic contributions” and their authors can’t access the academic support structure.”

According to Carter and Olah, “Distill aims to build an ecosystem to support this kind of work, starting with three pieces: a research journal, prizes recognizing outstanding work, and tools to facilitate the creation of interactive articles.”

Here’s a snapshot of guidelines for working with the new Journal:

  • “Distill articles are prepared in HTML using the Distill infrastructure — see the getting started guide for details. The infrastructure provides nice default styling and standard academic features while preserving the flexibility of the web.
  • Distill articles must be released under the Creative Commons Attribution license. Distill is a primary publication and will not publish content which is identical or substantially similar to content published elsewhere.
  • To submit an article, first create a GitHub repository for your article. You can keep it private during the review process if you would like — just share it with @colah and @shancarter. Then email review@distill.pub to begin the process.

Distill handles all reviews and editing through GitHub issues. Upon publication, the repository is made public and transferred to the @distillpub organization for preservation. This means that reviews of published work are always public. It is at the author’s discretion whether they share reviews of unpublished work.”

Penguin Computing Announces Expanded HPC Cloud

Wed, 03/22/2017 - 09:38

FREMONT, Calif., March 22, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced the availability of the company’s expanded Penguin Computing On-Demand (POD) High Performance Computing Cloud.

“As current Penguin POD users, we are excited to have more resources available to handle our mission-critical real-time global environmental prediction workload,” said Dr. Greg Wilson, CEO, EarthCast Technologies. “The addition of the Lustre file system will allow us to scale our applications to full global coverage, run our jobs faster and provide more accurate predictions.”

The expanded POD HPC cloud extends into Penguin Computing’s latest cloud datacenter location, MT2. The MT2 location offers this expansion with the addition of Intel Xeon E5-2680 v4 processors through our B30 node class offering.

B30 Node Specifications

  • Dual Intel Xeon E5-2680 v4 processors
  • 28 non-hyperthreaded cores per node
  • 256GB RAM per node
  • Intel Omni-Path low-latency, non-blocking, 100Gb/s fabric

In addition to the new processors, the MT2 location provides customers with access to a Lustre parallel file system, delivered through Penguin’s FrostByte storage solution. POD’s latest Lustre file system provides high-speed storage with an elastic billing model, billing customers only for the storage they consume, metered hourly.

The new POD MT2 public cloud location also provides customers with cloud redundancy – enabling multiple, distinct cloud locations to ensure that business-critical and time-sensitive HPC workflows are always able to compute.

“The latest expansion to our MT2 location extends the capabilities of our HPC cloud,” said Victor Gregorio, SVP Cloud Services at Penguin Computing. “As an HPC service, we work closely with our customers to deliver their growing cloud needs – scalable bare-metal compute, easy access to ready-to-run applications, and tools such as our Scyld Cloud Workstation for remote 3D visualization.”

Penguin Computing customers in fields such as manufacturing, engineering, and weather sciences are able to run more challenging HPC applications and workflows on POD with the addition of these capabilities.

These workloads can be time sensitive and complex – demanding the specialized HPC cloud resources Penguin makes available on POD. The compute needs of HPC users are not normally satisfied in a general-purpose public cloud, and Penguin Computing continues to be a leader in unique, cost effective, high-performance cloud services for HPC workloads.

POD customers have immediate access to these new offerings through their existing accounts via the POD Portal. Experience POD by visiting https://www.pod.penguincomputing.com to request a free trial account.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets.

Source: Penguin

Swiss Researchers Peer Inside Chips with Improved X-Ray Imaging

Wed, 03/22/2017 - 09:14

Peering inside semiconductor chips using x-ray imaging isn’t new, but the technique hasn’t been especially good or easy to accomplish. New advances reported by Swiss researchers in Nature last week suggest practical use of x-rays for fast, accurate, reverse-engineering of chips may be near.

“You’ll pop in your chip and out comes the schematic. Total transparency in chip manufacturing is on the horizon,” said Anthony Levi of the University of Southern California, describing the research in an IEEE Spectrum article (X-rays Map the 3D Interior of Integrated Circuits). “This is going to force a rethink of what computing is and what it means for a company to add value in the computing industry.”

The work by Mirko Holler, Manuel Guizar-Sicairos, Esther H. R. Tsai, Roberto Dinapoli, Elisabeth Müller, Oliver Bunk, Jörg Raabe (all of Paul Scherrer Institut) and Gabriel Aeppli (ETH) is described in their Nature Letter, “High-resolution non-destructive three-dimensional imaging of integrated circuits.”

“[We] demonstrate that X-ray ptychography – a high-resolution coherent diffractive imaging technique – can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip,” write the researchers in the abstract.

“Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.”

Starting with a known structure – an ASIC developed at the institute – and then moving to an Intel chip (Intel G3260 processor) about which they had limited information, the researchers were able to accurately identify and map components in the chips. A good summary of the experiment is provided in the IEEE Spectrum article:

“The ASIC was produced using 110-nanometer chip manufacturing technology, more than a decade from being cutting edge. But the Intel chip was just a couple of generations behind the state of the art: It was produced using the company’s 22-nm process…To produce a 3D rendering of the Intel chip—an Intel G3260 processor—the team shined an X-ray beam through a portion of the chip. The various circuit components—its copper wires and silicon transistors, for example—scatter the light in different ways and cause constructive and destructive interference. Through a technique called X-ray ptychography, the researchers could point the beam at their sample from a number of different angles and use the resulting diffraction patterns to reconstruct the chip’s internal structure.”

The experiment was carried out at the cSAXS beamline of the Swiss Light Source (SLS) at the Paul Scherrer Institut, Villigen, Switzerland. Details of the components are as follows. Coherent X-rays enter the instrument and pass through optical elements that in combination form an X-ray lens used to generate a defined illumination of the sample. These elements are a gold central stop, a Fresnel zone plate and an order sorting aperture. The diffracted X-rays are measured by a 2D detector, a Pilatus 2M in the present case. Accurate sample positioning is essential in a scanning microscopy technique and is achieved by horizontal and vertical interferometers.

As the IEEE Spectrum article notes, “Even if this approach isn’t widely adopted to tear down competitors’ chips, it could find a use in other applications. One of those is verifying that a chip only has the features it is intended to have, and that a “hardware Trojan”—added circuitry that could be used for malicious purposes—hasn’t been introduced.”

Link to IEEE article: http://spectrum.ieee.org/nanoclast/semiconductors/processors/xray-ic-imaging

Link to Nature paper: http://www.nature.com/nature/journal/v543/n7645/full/nature21698.html

ISC High Performance Adds STEM Student Day to the 2017 Program

Wed, 03/22/2017 - 07:37

FRANKFURT, Germany, March 22, 2017 — ISC High Performance is pleased to announce the inclusion of the STEM Student Day & Gala at this year’s conference. The new program aims to connect the next generation of regional and international STEM practitioners with the high performance computing industry and its key players.

ISC 2017 has created this program to welcome STEM students into the world of HPC with the hope that an early exposure to the community will encourage them to acquire the necessary HPC skills to propel their future careers.

The ISC STEM Student Day & Gala will take place on Wednesday, June 21, and is free to attend for 200 undergraduate and graduate students. All regional and international students are welcome to register for the program, including those not attending the main conference. The organizers also encourage female STEM students to take advantage of this opportunity, as ISC 2017 is committed to improving gender diversity.

Students will be able to register for the program starting mid-April via the program webpage.

Participating students will enjoy an afternoon discovering HPC by visiting the exhibition and then joining a conference keynote before participating in a career fair. In the evening, they can network with key HPC players at a special gala event.

Supermicro, PRACE, CSCS and GNS Systems GmbH have already come forward to support this program. Funding from another six organizations is needed to ensure the full success of the STEM Day & Gala. Sponsorship opportunities start at 500 euros, with all resources flowing directly into the event organization. Please contact anna.schachoff@isc-group.com to get involved.

“There is currently a shortage of a skilled STEM workforce in Europe and it is projected that the gap between available jobs and suitable candidates will grow very wide beyond 2020 if nothing is done about it,” said Martin Meuer, the general co-chair of ISC High Performance.     

“This gave us the idea to organize the STEM Day, as many organizations that exhibit at ISC could profit from meeting the future workforce directly.” 

The ISC STEM Student Day & Gala is also a great opportunity for organizations to associate themselves as STEM employers and invest in their future HPC user base. 

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, The Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, Research Poster, the PhD Forum, Project Poster Sessions and Exhibitor Forums.

Source: ISC

LANL Simulation Shows Massive Black Holes Break “Speed Limit”

Tue, 03/21/2017 - 10:48

A new computer simulation based on codes developed at Los Alamos National Laboratory is shedding light on how supermassive black holes could have formed in the early universe, contrary to most prior models, which impose a limit on how fast these massive ‘objects’ can form. The simulation is based on a computer code used to understand the coupling of radiation and certain materials.

“Supermassive black holes have a speed limit that governs how fast and how large they can grow,” said Joseph Smidt of the Theoretical Design Division at Los Alamos National Laboratory. “The relatively recent discovery of supermassive black holes in the early development of the universe raised a fundamental question: how did they get so big so fast?”

Using codes developed at Los Alamos for modeling the interaction of matter and radiation related to the Lab’s stockpile stewardship mission, Smidt and colleagues created a simulation of collapsing stars that resulted in supermassive black holes forming in less time than expected, cosmologically speaking, in the first billion years of the universe.

“It turns out that while supermassive black holes have a growth speed limit, certain types of massive stars do not,” said Smidt. “We asked, what if we could find a place where stars could grow much faster, perhaps to the size of many thousands of suns; could they form supermassive black holes in less time?” The work is detailed in a recent paper, “The Formation Of The First Quasars In The Universe.”

It turns out the Los Alamos computer model not only confirms the possibility of speedy supermassive black hole formation, but also fits many other phenomena of black holes that are routinely observed by astrophysicists. The research shows that the simulated supermassive black holes are also interacting with galaxies in the same way that is observed in nature, including star formation rates, galaxy density profiles, and thermal and ionization rates in gasses.

“This was largely unexpected,” said Smidt.  “I thought this idea of growing a massive star in a special configuration and forming a black hole with the right kind of masses was something we could approximate, but to see the black hole inducing star formation and driving the dynamics in ways that we’ve observed in nature was really icing on the cake.”

A key mission area at Los Alamos National Laboratory is understanding how radiation interacts with certain materials.  Because supermassive black holes produce huge quantities of hot radiation, their behavior helps test computer codes designed to model the coupling of radiation and matter. The codes are used, along with large- and small-scale experiments, to assure the safety, security, and effectiveness of the U.S. nuclear deterrent.

“We’ve gotten to a point at Los Alamos,” said Smidt, “with the computer codes we’re using, the physics understanding, and the supercomputing facilities, that we can do detailed calculations that replicate some of the forces driving the evolution of the Universe.”

Link to LANL release: http://www.lanl.gov/discover/news-release-archive/2017/March/03.21-supermassive-black-hole-speed-limit.php?source=newsroom

Link to paper: https://arxiv.org/pdf/1703.00449.pdf

Link to video about the discovery: https://youtu.be/LD4xECbHx_I

Source: LANL

Supermicro Launches Intel Optane SSD Optimized Platforms

Tue, 03/21/2017 - 07:53

SAN JOSE, Calif., March 21, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in compute, storage and networking technologies including green computing, expands the industry’s broadest portfolio of Supermicro NVMe Flash server and storage systems with support for the Intel Optane SSD DC P4800X, the world’s most responsive data center SSD.

Supermicro’s NVMe SSD Systems with Intel Optane SSDs for the Data Center enable breakthrough performance compared to traditional NAND based SSDs. The Intel Optane SSDs for the data center are the first breakthrough that begins to blur the line between memory and storage, enabling customers to do more per server, or extend memory working sets to enable new usages and discoveries. The PCI-E compliant expansion card delivers an industry leading combination of 2 times better latency performance, up to more than 3 times higher endurance, and up to 3 times higher write throughput than NVMe NAND SSDs. Optane is supported across Supermicro’s complete product line including: BigTwin, SuperBlade, Simply Double Storage and Ultra servers supporting the current and next generation Intel Xeon Processors. These innovative solutions enable a new high performance storage tier that combines the attributes of memory and storage ideal for Financial Services, Cloud, HPC, Storage and overall Enterprise applications.

The first generation Supermicro supported Intel Optane SSDs are initially a PCI-E compliant expansion card with additional form factors to follow. A 2U Supermicro Ultra system will be able to deliver 6 million WRITE IOPs and 16.5 TB of high performance Optane storage. Intel Optane will deliver optimal performance in the 1U 10 NVMe All-Flash SuperServer and the capacity optimized 2U 48 All-Flash NVMe Simply Double Storage Server and provide accelerated caching across the complete line of NVMe supported scale out storage servers including the new 4U 45 Drive system with NVMe Cache drives.

“Being First-To-Market with the latest in computing technology continues to be our corporate strength, the addition of Intel Optane memory technology gives our top tier customers a new memory deployment strategy that provides better write performance and latency than existing NVMe NAND SSD solutions including more than 30 drive writes per day,” said Charles Liang, President and CEO of Supermicro. “In addition this new memory is slated to consume 30 percent lower max-power than SSD NAND memory, supporting our customer’s green computing priorities.”

“Supermicro’s system readiness for the new Optane memory technology will provide fast storage and cache for MySQL and HCI applications,” said Bill Lesczinske, Vice President, Non-Volatile Memory Solutions Group. “With 77x better read latency in the presence of a high write workload and as a memory replacement with Intel Memory Drive Technology – software will make the Optane SSD look like DRAM transparently to the OS, providing greater in-memory compute performance to Supermicro systems.”

For more information on Supermicro’s complete range of NVMe Flash Solutions, please visit http://www.supermicro.com/products/nfo/NVMe.cfm.

About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro

DDN Names Bret Costelow VP of Global Sales

Tue, 03/21/2017 - 07:45

SANTA CLARA, Calif., March 21, 2017 — DataDirect Networks (DDN) today announced the appointment of Bret Costelow as the company’s vice president of global sales. In his new role, Costelow will oversee technical computing sales worldwide, and will leverage more than 25 years of sales and sales leadership experience to further boost visibility of DDN’s deep technical expertise and high-performance computing (HPC) storage platform offerings, develop new business strategies and drive revenue growth. Costelow’s leadership and experience spans leading technology companies, including Intel and Ricoh Americas.

“Bret Costelow is an inspiring sales leader with a clear understanding of our customers’ needs and a vision of how DDN’s technologies and solutions can best solve their toughest data storage challenges,” said Robert Triendl, senior vice president, global sales, marketing, and field services, DDN. “Bret’s proven success in high-growth business settings, deep knowledge of the Lustre and HPC market, proven track record for generating traction with innovative, advanced technologies, and his broad experience with software sales make him a great asset to our team and a great resource for our partners and customers around the world.”

Costelow joins DDN from Intel Corporation, where he led a global sales and business development team for Intel’s HPC software business and supported Intel’s 2012 acquisition of Whamcloud, the main development arm for the open source Lustre file system, and its subsequent sales and marketing. Costelow was instrumental in leading the Lustre business unit to expand into adjacent markets, reaching beyond HPC file systems to HPC cluster orchestration software. Under his leadership, the HPC software business unit opened new markets in Asia, launched a comprehensive, global software sales channel program and drove year-over-year revenue growth that averaged more than 30 percent in each of the past five years. Costelow is also on the board of directors of the European Open File Systems (EOFS), a non-profit organization focused on the promotion and support of open scalable file systems for high-performance computing in the technical computing and enterprise computing markets.

“DDN is the uncontested market leader in HPC storage, with a highly differentiated portfolio of solutions for technical computing users in all vertical markets. This portfolio, combined with aggressive investments in new technologies, positions the company incredibly well for continued growth and success as disruptive technologies, such as non-volatile memory (NVM), unsettle the storage market landscape and create exciting new opportunities,” said Bret Costelow, vice president, global sales at DDN. “The current market dynamics and DDN’s agility to respond made this the perfect time to join DDN. I look forward to working with the incredible talent in DDN’s field team, product management, product development and software engineering teams to help drive DDN’s success and growth to new levels, and to help accelerate the success of DDN’s customers and partners around the world.”

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

Cray CEO to Speak on Convergence of Big Data, Supercomputing at TechIgnite

Tue, 03/21/2017 - 07:41

SEATTLE, Wash., March 21, 2017 — Supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced that the company’s President and CEO, Peter Ungaro, will give a presentation on “The Convergence of Big Data and Supercomputing” at TechIgnite, an IEEE Computer Society conference exploring the trends, threats, and truth behind technology.

The convergence of artificial intelligence technologies and supercomputing at scale is happening now. A featured speaker in TechIgnite’s “AI and Machine Learning” track, Ungaro will examine how the convergence of big data and modeling and simulation run on supercomputing platforms at scale is creating new opportunities for organizations to discover innovative ways of extracting value from massive data sets.

Other TechIgnite speakers include Apple co-founder Steve Wozniak; Tony Jebara, director of machine learning at Netflix; William Ruh, CEO of GE Digital; and more.

TechIgnite will take place on March 21-22, 2017 at the Hyatt Regency San Francisco Airport Hotel in Burlingame, CA. Ungaro’s presentation will be held at 2:00pm PT on Wednesday, March 22. A complete list of TechIgnite speakers is available online via the following URL: http://techignite.computer.org/speakers/.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

Tue, 03/21/2017 - 06:38

For a technology that’s usually characterized as far off and in a distant galaxy, quantum computing has been steadily picking up steam. Just how close real-world applications are depends on whom you talk to and for what kinds of applications. Los Alamos National Lab, for example, has an active application development effort for its D-Wave system and LANL researcher Susan Mniszewski and colleagues have made progress on using the D-Wave machine for aspects of quantum molecular dynamics (QMD) simulations.

At CeBIT this week D-Wave and Volkswagen will discuss their pilot project to monitor and control taxi traffic in Beijing using a hybrid HPC-quantum system – this is on the heels of recent customer upgrade news from D-Wave (more below). Last week IBM announced expanded access to its five-qubit cloud-based quantum developer platform. In early March, researchers from the Google Quantum AI Lab published an excellent commentary in Nature examining real-world opportunities, challenges and timeframes for quantum computing more broadly. Google is also considering making its homegrown quantum capability available through the cloud.

As an overview, the Google commentary provides a great snapshot, noting soberly that challenges such as the lack of solid error correction and the small size (number of qubits) of today’s machines – whether “universal” digital machines like IBM’s or “analog” adiabatic annealing machines like D-Wave’s – have prompted many observers to declare useful quantum computing is still a decade away. Not so fast, says Google.

“This conservative view of quantum computing gives the impression that investors will benefit only in the long term. We contend that short-term returns are possible with the small devices that will emerge within the next five years, even though these will lack full error correction…Heuristic ‘hybrid’ methods that blend quantum and classical approaches could be the foundation for powerful future applications. The recent success of neural networks in machine learning is a good example,” write Masoud Mohseni, Peter Read, and John Martinis (a 2017 HPCwire Person to Watch) and colleagues (Nature, March 8, “Commercialize early quantum technologies”).

The D-Wave/VW project is a good example of a hybrid approach (details to follow) but first here’s a brief summary of recent quantum computing news:

  • IBM released a new API and upgraded simulator for modeling circuits up to 20 qubits on its 5-qubit platform. It also announced plans for a software developer kit by mid-year for building “simple” quantum applications. So far, says IBM, its quantum cloud has attracted about 40,000 users, including, for example, the Massachusetts Institute of Technology, which used the cloud service for its online quantum information science course. IBM also noted heavy use of the service by Chinese researchers. (See HPCwire coverage, IBM Touts Hybrid Approach to Quantum Computing)
  • D-Wave has been actively extending its development ecosystem (qbsolv, from D-Wave, and qmasm, from LANL et al.) and says researchers have recently been able to simulate a 20,000-qubit system on a 1,000-qubit machine using qbsolv (more below). After announcing a 2,000-qubit machine in the fall, the company has begun deploying them. The first will be for a new customer, Temporal Defense Systems, and another is planned for the Google/NASA/USRA partnership, which has a 1,000-qubit machine now. D-Wave also just announced that Virginia Tech and the Hume Center will begin using D-Wave systems for work on defense and intelligence applications.
  • Google’s commentary declares: “We anticipate that, within a few years, well-controlled quantum systems may be able to perform certain tasks much faster than conventional computers based on CMOS (complementary metal oxide–semiconductor) technology. Here we highlight three commercially viable uses for early quantum-computing devices: quantum simulation, quantum-assisted optimization and quantum sampling. Faster computing speeds in these areas would be commercially advantageous in sectors from artificial intelligence to finance and health care.”

Clearly there is a lot going on even at this stage of quantum computing’s development. There’s also been a good deal of wrangling over just what is a quantum computer and the differences between IBM’s “universal” digital approach – essentially a machine able to do anything computers do now – and D-Wave’s adiabatic annealing approach, which is currently intended to solve specific classes of optimization problems.

“They are different kinds of machines. No one has a universal quantum computer now, so you have to look at each case individually for its particular strengths and weaknesses,” explained Martinis to HPCwire. “The D-wave has minimal quantum coherence (it loses the information exchanged between qubits quite quickly), but makes up for it by having many qubits.”

“The IBM machine is small, but the qubits have quantum coherence enough to do some standard quantum algorithms. Right now it is not powerful, as you can run quantum simulations on classical computers quite easily. But by adding qubits the power will scale up quickly. It has the architecture of a universal machine and has enough quantum coherence to behave like one for very small problems,” Martinis said.

Notably, Google has developed 9-qubit devices that have 3-5x more coherence than IBM’s, according to Martinis, but they are not on the cloud yet. “We are ready to scale up now, and plan to have this year a ‘quantum supremacy’ device that has to be checked with a supercomputer. We are thinking of offering cloud also, but are more or less waiting until we have a hardware device that gives you more power than a classical simulation.”

Quantum supremacy as described in the Google commentary is a term coined by theoretical physicist John Preskill to describe “the ability of a quantum processor to perform, in a short time, a well-defined mathematical task that even the largest classical supercomputers (such as China’s Sunway TaihuLight) would be unable to complete within any reasonable time frame. We predict that, in a few years, an experiment achieving quantum supremacy will be performed.”

For the moment, D-Wave is the only vendor offering near-production machines versus research machines, said Bo Ewald, the company’s ever-cheerful evangelist. He quickly agrees, though, that at least for now there aren’t any production-ready applications. Developing a quantum tool/software ecosystem is a driving focus at D-Wave. The LANL app dev work, though impressive, still represents proto-application development. Nevertheless the ecosystem of tools is growing quickly.

“We have defined a software architecture that has several layers starting at the quantum machine instruction layer where if you want to program in machine language you are certainly welcome to do that; that is kind of the way people had to do it in the early days,” said Ewald.

“The next layer up is if you want to be able to create quantum machine instructions from C or C++ or Python. We have now libraries that run on host machines, regular HPC machines, so you can use those languages to generate programs that run on the D-Wave machine but the challenge that we have faced, that customers have faced, is that our machines had 500 qubits or 1,000 qubits and now 2,000; we know there are problems that are going to consume many more qubits than that,” he said.

For D-Wave systems, qbsolv helps address this problem. It provides a meta-description of the machine and of the problem you want to solve as a quadratic unconstrained binary optimization, or QUBO – an intermediate representation. D-Wave then extended this capability to what it calls virtual QUBOs, likening the approach to virtual memory.

“You can create QUBOs or representations of problems which are much larger than the machine itself and then using combined classical computer and quantum computer techniques we could partition the problem and solve them in chunks and then kind of glue them back together after we solved the D-Wave part. We’ve done that now with the 1,000-qubit machine and run problems that have the equivalent of 20,000 qubits,” said Ewald, adding the new 2,000-qubit machines will handle problems of even greater size using this capability.
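
To make that workflow concrete: a QUBO is just a choice of coefficients Q(i,j), and the solver looks for the 0/1 assignment that minimizes the sum of Q(i,j)·x_i·x_j. The sketch below is illustrative only – it assumes the open-source dwave_qbsolv Python wrapper and its sample_qubo call, and falls back to brute force when that package isn’t installed – but it shows the shape of what Ewald describes: the partition-and-stitch machinery lives inside qbsolv rather than in user code.

```python
from itertools import product

# A toy QUBO: a dictionary of (i, j) -> coefficient over binary variables x0..x2.
# Real qbsolv inputs can have tens of thousands of variables.
Q = {(0, 0): 1, (1, 1): 1, (0, 1): 2, (2, 2): -2}

def qubo_energy(Q, x):
    """Energy of a 0/1 assignment x under the QUBO coefficients Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force(Q, n):
    """Exhaustive minimum -- fine for a toy problem, hopeless at real sizes."""
    return min((dict(enumerate(bits)) for bits in product((0, 1), repeat=n)),
               key=lambda x: qubo_energy(Q, x))

try:
    # Assumption: the open-source dwave_qbsolv wrapper is installed.  QBSolv splits
    # a QUBO larger than the hardware into sub-problems, solves the pieces (on a
    # D-Wave QPU or a classical tabu solver), and stitches the results back together.
    from dwave_qbsolv import QBSolv
    best = next(iter(QBSolv().sample_qubo(Q).samples()))
except ImportError:
    best = brute_force(Q, n=3)   # classical stand-in so the sketch runs anywhere

print(best, qubo_energy(Q, best))
```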

At LANL, researcher Scott Pakin has developed another tool – QMASM, a quantum macro assembler for D-Wave systems. Ewald said part of the goal of Pakin’s work was to determine “if you could map gates onto the machine even though we are not a universal or a gate model. You can in fact model gates on our machine and he has started to [create] a library of gates (OR gates, AND gates, NAND gates) and you can assemble those to become macros.”
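
The gate-library idea rests on a standard trick: encode each gate as a small penalty function whose zero-energy states are exactly the gate’s valid truth-table rows. The snippet below is an illustrative Python check of the textbook AND-gate penalty, not code taken from QMASM itself.

```python
from itertools import product

def and_gate_penalty(a, b, z):
    """Textbook QUBO penalty for z = a AND b: zero on valid rows, positive otherwise."""
    return 3 * z + a * b - 2 * a * z - 2 * b * z

# Every zero-penalty assignment is a row of the AND truth table; summing such
# penalties for many gates is how a circuit becomes one big minimization problem.
for a, b, z in product((0, 1), repeat=3):
    p = and_gate_penalty(a, b, z)
    print(f"a={a} b={b} z={z}  penalty={p}  {'valid' if p == 0 else 'excluded'}")
```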

Pakin said, “My personal research interest has been in making the D-Wave easier to program. I’ve recently built something really nifty on top of QMASM: edif2qmasm, which is my answer to the question: Can one write classical-style code and run it on the D-Wave?

“For many difficult computational problems, solution verification is simple and fast. The idea behind edif2qmasm is that one can write an ordinary(-ish) program that reports if a proposed solution to a problem is in fact valid. This gets compiled for the D-Wave then run _backwards_, giving it ‘true’ for the proposed solution being valid and getting back a solution to the difficult computational problem.”

Pakin noted there are many examples on github to provide a feel for the power of this tool.

“For example, mult.v is a simple, one-line multiplier. Run it backwards, and it factors a number, which underlies modern data decryption. In a dozen or so lines of code, circsat.v evaluates a Boolean circuit. Run it backwards, and it tells you what inputs lead to an output of “true”, which is used in areas of artificial intelligence, circuit design, and automatic theorem proving. map-color.v reports if a map is correctly colored with four colors such that no two adjacent regions have the same color. Run it backwards, and it _finds_ such a coloring.

“Although current-generation D-Wave systems are too limited to apply this approach to substantial problems, the trends in system scale and engineering precision indicate that some day we should be able to perform real work on this sort of system. And with the help of tools like edif2qmasm, programmers won’t need an advanced degree to figure out how to write code for it,” he explained.
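
The “run it backwards” idea can be illustrated with the same AND-gate penalty used above: clamp the output to the value you want and search for inputs that keep the penalty at zero. In the sketch below a brute-force scan stands in for the annealer; none of this is edif2qmasm’s actual output.

```python
from itertools import product

def and_gate_penalty(a, b, z):
    # Same textbook penalty as above: zero exactly when z == a AND b.
    return 3 * z + a * b - 2 * a * z - 2 * b * z

def run_backwards(wanted_output):
    """Clamp the gate's output and recover every input pattern consistent with it."""
    # On hardware the clamp would be a strong bias on z and the search an anneal;
    # a brute-force scan stands in for the annealer here.
    return [(a, b) for a, b in product((0, 1), repeat=2)
            if and_gate_penalty(a, b, wanted_output) == 0]

print(run_backwards(1))   # [(1, 1)] -- the only inputs that make an AND gate output 1
print(run_backwards(0))   # [(0, 0), (0, 1), (1, 0)]
```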

The D-Wave/VW collaboration, just a year or so old, is one of the more interesting quantum computing proof-of-concept efforts because it tackles an optimization problem of the kind that is widespread in everyday life. As described by Ewald, VW CIO Martin Hoffman was making his yearly swing through Silicon Valley and stopped in at D-Wave, where talk turned to the many optimization challenges big automakers face – such as supply logistics, vehicle delivery, and various machine learning tasks – and to doing a D-Wave project around one of them. Instead, said Ewald, VW eventually settled on a more driver-facing problem.

It turns out there are about 10,000 taxis in Beijing, said Ewald. Each has a GPS device, and their positions are recorded every five seconds. Traffic congestion, of course, is a huge problem in Beijing. The idea was to explore whether it was possible to create an application, running on both traditional computer resources and the D-Wave, to help monitor and guide taxi movement more quickly and effectively.

“Ten thousand taxis on all of the streets in Beijing is way too big for our machine at this point, but they came to this same idea we talked about with qbsolv where you partition problems,” said Ewald. “On the traditional machines VW created a map and grid and subdivided the grid into quadrants and would find the quadrant that was the most red.” That’s red as in long cab waits.

The problem quadrant was then sent to D-Wave to be solved. “We would optimize the flow, basically minimize the wait time for all of the taxis within the quadrant, send that [solution] back to the traditional machine which would then send us the next most red, and we would try to turn it green,” said Ewald.
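As a rough sketch of the hybrid loop Ewald describes: the classical side scores the quadrants of the grid, the most congested one is handed to the annealer-sized solver, and the loop repeats with the next-worst quadrant. The function names, data layout and congestion metric below are illustrative assumptions, not VW’s or D-Wave’s code.

    # Hypothetical sketch of the classical/quantum partitioning loop.
    # Names, data shapes and the congestion metric are assumptions.
    from typing import Dict, List, Tuple

    Quadrant = Tuple[int, int]   # (row, col) cell of the city grid

    def congestion(waits: Dict[Quadrant, List[float]]) -> Dict[Quadrant, float]:
        """Classical step: score each quadrant, e.g. by mean cab wait time."""
        return {q: sum(w) / len(w) for q, w in waits.items() if w}

    def optimize_quadrant(q: Quadrant) -> None:
        """Placeholder for the QUBO solve of one quadrant's taxi flow --
        the piece small enough to fit on the annealer."""
        print(f"optimizing flow in quadrant {q}")

    def hybrid_loop(waits: Dict[Quadrant, List[float]], rounds: int = 3) -> None:
        for _ in range(rounds):
            scores = congestion(waits)
            reddest = max(scores, key=scores.get)   # the "most red" quadrant
            optimize_quadrant(reddest)              # send it to the QPU
            waits[reddest] = [0.0]                  # pretend it is now "green"

    hybrid_loop({(0, 0): [4.0, 6.0], (0, 1): [12.0, 9.0], (1, 1): [2.0]})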

According to Ewald, VW was able to create the “hybrid” solution relatively quickly and “get what they say are pretty good results.” The company has talked about extending the project to predict where traffic jams are going to form and giving drivers perhaps a 45-minute warning that there is the potential for a jam at a given intersection. The two companies have a press conference planned this week at CeBIT to showcase the project.

It’s worth emphasizing that the VW/D-Wave exercise is developmental – what Ewald labels a proto application: “But just the fact that they were able to get it running is a great step forward in many ways in that we believe our machine will be used side by side with existing machines, much like GPUs were used in the early days on graphics. In this case VW has demonstrated quite clearly how our machine, our QPU if you will, can be used in helping accelerate the work being done on traditional HPC machines.”

Image art, chip diagram: D-Wave

The post Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access appeared first on HPCwire.

Intel Ships Drives Based on 3-D XPoint Non-volatile Memory

Mon, 03/20/2017 - 12:07

Intel Corp. has begun shipping new storage drives based on its 3-D XPoint non-volatile memory technology as it targets data-driven workloads.

Intel’s new Optane solid-state drives, designated P4800X, seek to combine the attributes of memory and storage in the same device. The result is a new “data storage tier” intended to overcome growing storage bottlenecks in datacenters.

Intel said its new SSD based on XPoint (pronounced “cross point”) memory technology would help speed applications through faster caching and storage while allowing datacenter operators to deploy larger datasets and analyze them using large memory pools.

Intel argues that current storage approaches based on DRAM and NAND are contributing to the datacenter storage gap, and that storage platforms increasingly need to behave “like system memory.” DRAM is too expensive to scale, while NAND can scale but falls short in terms of datacenter performance.

Hence, the Optane SSD leverages the XPoint approach unveiled in 2015 to boost memory density by as much as ten times compared to conventional memory chips, claim Intel and development partner Micron Technology Inc. Optane, Intel’s first deployment of XPoint memory stacks, is said to deliver a five- to eight-fold boost in performance for “low queue depth workloads.”

Faster caching and storage also boost scaling for individual servers while speeding up latency-sensitive workloads, the company said Monday (March 20).

The first product in the Optane P4800X series comes with 375 GB of storage capacity in the form of an add-in card with both PCI Express and Non-Volatile Memory Express interfaces. Typical latency is rated at less than 10 microseconds.

Intel said its Optane SSDs combine emerging XPoint memory media with its memory controller as well as proprietary interface hardware and software.

Last April, Intel (NASDAQ: INTC) demonstrated Optane SSDs operating at 2 GB/sec speeds. Along with speed improvements, Intel and memory partner Micron (NASDAQ: MU) said last year they hoped to convince potential enterprise customers that XPoint memory platforms are more durable than current NAND flash technology while providing as much as a ten-fold increase in storage density for persistent data compared to DRAM.

In terms of endurance, Intel said Optane could handle up to 30 drive writes per day and up to 12.3 petabytes of written data. Hence, the SSDs target “write-intensive applications such as online transaction processing, high performance computing, write caching and logging,” the chipmaker said.
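The two endurance figures are consistent with each other if the rating window is three years, an assumption on our part (Intel’s datacenter SSD warranties have typically run three to five years): 30 drive writes per day on a 375 GB drive over roughly 1,095 days works out to about 12.3 PB.

    # Back-of-envelope check that the quoted endurance figures agree,
    # assuming a three-year rating window (our assumption, not Intel's spec).
    drive_writes_per_day = 30
    capacity_tb = 0.375                 # 375 GB P4800X
    days = 3 * 365
    written_pb = drive_writes_per_day * capacity_tb * days / 1000
    print(f"{written_pb:.1f} PB")       # ~12.3 PB, matching the quoted figure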

As flash storage makes greater inroads in datacenters, Intel’s SSDs based on 3-D XPoint memory technology essentially create a new storage category between flash and DRAM. The chipmaker argues its approach addresses the fundamental computing problem of moving data closer to CPUs and linking it to them faster.

“Faster storage is important to computing because computing is done on data, and data is put in storage,” said Robert Crooke, general manager of Intel’s Non-Volatile Memory Solutions Group. “The longer it takes to get to that data, the slower the computing….”

Meanwhile, Micron is targeting its SSDs based on XPoint technology at cloud applications, data analytics, online transaction processing and the Internet of Things.

The memory maker said last summer its Quantx line of SSDs also delivers read latencies at less than 10 microseconds and writes at less than 20 microseconds. That, Micron asserted, is 10 times better than NAND flash-based SSDs.

This article was first published on HPCwire’s sister publication, EnterpriseTech.

The post Intel Ships Drives Based on 3-D XPoint Non-volatile Memory appeared first on HPCwire.

PRACE Proceeds into Second Phase of Partnership

Mon, 03/20/2017 - 08:23

AMSTERDAM, March 20, 2017 — On the occasion of the 25th PRACE Council Meeting in Amsterdam, the PRACE Members ratified a Resolution to proceed with the second phase of their Partnership: PRACE 2. The PRACE 2 programme defines the second period of PRACE, from 2017 to 2020. With this agreement, PRACE will strengthen Europe’s position as a world-class provider of scientific supercomputing, a technology considered a key enabler for knowledge development, scientific research, big data analytics, solving global and societal challenges, and European industrial competitiveness.

In the context of the global HPC race between the USA, Asia and Europe, in which European countries have decided to compete as allies, the overarching goal of PRACE is to provide a federated European supercomputing infrastructure that is science-driven and globally competitive. It builds on the strengths of European science, providing high-end computing and data analysis resources to drive discoveries and new developments in all areas of science and industry, from fundamental research to applied sciences, including mathematics and computer sciences, medicine and engineering, as well as digital humanities and social sciences. Recently PRACE was confirmed as the only e-Infrastructure on the ESFRI 2016 Roadmap (European Strategy Forum on Research Infrastructures).

“PRACE 2 is a natural next step in the successful pan-European collaboration in HPC. Our ultimate goal is to provide a world-class federated and sustainable HPC and data infrastructure to all researchers in Europe,” said Prof. Dr. Anwar Osseyran, Chair of the PRACE Council.

For the PRACE 2 programme, the PRACE Members have thoroughly discussed and defined the underlying funding model of the Research Infrastructure, based on the contribution of the 5 Hosting Members and the General Partners. The European Commission supports specific PRACE activities via project funding.

The new PRACE 2 programme will help to create a fertile basis for the sustainability of the infrastructure, in order to continue fostering world leading science as well as enabling technology development and industrial competitiveness in Europe through supercomputing. This will be accomplished through:

  1. Provisioning of a federated world-class Tier-0 supercomputing infrastructure that is architecturally diverse and allows for capability allocations that are competitive with comparable programmes in the USA and in Asia.
  2. A single, thorough Peer Review Process for resource allocation, exclusively based on scientific excellence of the highest standard.
  3. Coordinated High-Level Support Teams (HLST) that provide users with support for code enabling and scaling out of scientific applications / methods, as well as for R&D on code refactoring on the Tier-0 systems.
  4. Implementation actions in the areas of dissemination, industry collaboration, and training, as well as the exploration of future supercomputing technologies that will include additional application enabling investments co-ordinated with the support team efforts.

Hosting Members and General Partners undersigning the PRACE 2 programme will be eligible to apply for Tier-0 resources, provided to the PRACE 2 programme, which are then available to principal investigators from academia and industry in their countries. Scientists from other countries may be invited to contribute to these projects to benefit from these large allocations.

PRACE 2 will award substantially more core hours to larger projects than before, boosting scientific and industrial advancement in Europe. With 5 Hosting Members (France, Germany, Italy, Spain, and Switzerland) the capacity offering is planned to grow to 75 million node hours per year. Resources remain free of charge at the point of usage.

The impact of PRACE 2 is already visible for user communities: In the 14th Call for Proposals for Project Access, PRACE was able to make available 3 times more resources than in previous calls, offering a cumulated peak performance of more than 62 Petaflops in 7 complementary leading edge Tier-0 systems.

“We are very pleased with how the PRACE Members have come together and invested substantial efforts and resources in the project. PRACE 2 will deliver a much needed increase in computational power, and with the new High Level Support Teams we are also establishing a joint computational infrastructure that will strengthen European competitiveness,” said Prof. Erik Lindahl, Chair of the PRACE Scientific Steering Committee.

About PRACE

The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-312763 and from the EU’s Horizon 2020 research and innovation programme (2014-2020) under grant agreements 653838 and 730913. For more information, see www.prace-ri.eu.

Source: PRACE

The post PRACE Proceeds into Second Phase of Partnership appeared first on HPCwire.

Mellanox Introduces 100Gb/s Silicon Photonics Line

Mon, 03/20/2017 - 07:50

SUNNYVALE, Calif. & YOKNEAM, Israel, March 20, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today introduced a new line of 100Gb/s silicon photonics components to serve the growing demand for hyperscale Web 2.0 and cloud optical interconnects. The new product line provides module makers with access to a fully qualified portfolio of silicon photonics components and optical engine subassemblies.

“Our customers can now achieve significant time to market advantages for embedded modules and transceivers by using fully-qualified, and cost-effective silicon photonics component and chip sets,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “PSM4 represents the highest volume, most cost effective and flexible building blocks for 100Gb/s for single mode fiber transceivers for data center applications. End customers benefit by having more supplier options and Mellanox benefits by scaling our high-volume silicon photonics products.”

Specifically, Mellanox is announcing the immediate availability of:

  • 100Gb/s PSM4 silicon photonics 1550nm transmitter, with flip-chip bonded DFB lasers with attached 1m fiber pigtail for reaches of 2km
  • 100Gb/s PSM4 silicon photonics 1550nm transmitter, with flip-chip bonded DFB lasers with attached fiber stub for connectorized transceivers with reaches of 2km
  • Low-power 100Gb/s (4x25G) modulator driver IC
  • 100Gb/s PSM4 silicon photonics 1310 and 1550nm receiver array with 1m fiber pigtail
  • 100Gb/s PSM4 silicon photonics 1310 and 1550 receiver array for connectorized transceivers
  • Low-power 100Gb/s (4x25G) trans-impedance amplifier IC

These components are fully qualified for use in low-cost, electronics-style packaging, ensuring a low-risk, quick time to market advantage. Because the Mellanox silicon photonics platform eliminates the need for complex optical alignment of lenses, isolators, and laser subassemblies, customers can scale to high volume manufacturing easier and faster than traditional technologies.

Recently, Mellanox announced that it has shipped more than 100,000 Direct Attach Copper (DAC) cables and more than 200,000 optical transceiver modules for 100Gb/s networks, confirming market demand for 100Gb/s interconnect products and the company’s high-volume manufacturing leadership.

Mellanox will be exhibiting at the Optical Fiber Conference (OFC), March 21-23, at the Los Angeles Convention Center, Los Angeles, CA, booth no. 3715. Mellanox will be showcasing live demonstrations of its 100Gb/s end-to-end switching, network adapter, copper and optical cable, and transceiver solutions, including:

  • Live 200Gb/s silicon photonics demonstration
  • Spectrum SN2700, SN2410 and SN2100 100Gb/s QSFP28/ SFP28 switches
  • ConnectX-4 and ConnectX-5 25G/50G/100Gb/s QSFP28/SFP28 network adapters
  • LinkX™ 25G/50G/100Gb/s DAC & AOC cables and 100G SR4 & PSM4 transceivers
  • New Quantum switches with 40 ports of 200Gb/s QSFP28 in a 1RU chassis
  • New ConnectX-6 adapters with two ports of 200Gb/s QSFP28
  • Silicon Photonics Optical engines and components

At OFC, the Company will also be demonstrating interoperability of the Mellanox Silicon Photonics 100Gb/s PSM4 with Innolight, AOI, Oclaro, and Hisense transceivers in both the Mellanox booth and in the adjacent Ethernet Alliance booth, no. 1709.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, cables and transceivers, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox Introduces 100Gb/s Silicon Photonics Line appeared first on HPCwire.

Supermicro Showcases Enterprise, Datacenter Solutions at CeBIT 2017

Mon, 03/20/2017 - 07:41

HANNOVER, Germany, March 20, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a leader in compute, storage and networking technologies including green computing, announces participation in the annual CeBIT exhibition being held in Hannover, Germany from March 20 to 24. Supermicro will showcase enterprise and data center solutions in Booth B-66.

This year Supermicro’s showcased products include the new SuperBlade and BigTwin high density computing solutions. The 8U SuperBlade is the newest in Supermicro’s blade systems. The new 8U SuperBlade supports both current and next generation Intel Xeon processor-based blade servers with the fastest 100G EDR InfiniBand and Omni-Path switches for mission critical, enterprise and data center applications. It also leverages the same Ethernet switches, chassis management modules, and software as the successful MicroBlade for improved reliability, serviceability, and affordability. It maximizes the performance and power efficiency with DP and MP processors in half-height and full-height blades, respectively. The new smaller form factor 4U SuperBlade maximizes density and power efficiency while enabling up to 140 dual-processor servers or 280 single-processor servers per 42U rack.
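The rack-density figure checks out under reasonable assumptions (ours, not Supermicro’s spec sheet): ten 4U enclosures fit in a 42U rack, and each would need to hold 14 dual-processor or 28 single-processor blades to reach the quoted totals.

    # Consistency check of the 42U rack-density claim, assuming ten 4U
    # SuperBlade enclosures per rack and 14 DP / 28 SP blades per enclosure
    # (our assumptions, not Supermicro's spec sheet).
    enclosures_per_rack = 42 // 4            # 10 enclosures, 2U left over
    dp_servers = enclosures_per_rack * 14    # 140 dual-processor servers
    sp_servers = enclosures_per_rack * 28    # 280 single-processor servers
    print(dp_servers, sp_servers)            # 140 280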

The Supermicro BigTwin is a breakthrough multi-node server system with a multitude of innovations and industry firsts. BigTwin supports maximum system performance and efficiency by delivering 30% better thermal capacity in a compact 2U form-factor enabling solutions with the highest performance processor, memory, storage and I/O. Continuing Supermicro’s NVMe leadership, the BigTwin is the first All-Flash NVMe multi-node system. BigTwin doubles the I/O capacity with three PCI-e 3.0 x16 slots and provides added flexibility with more than 10 networking options including 1GbE, 10G, 25G, 100G Ethernet and InfiniBand with its industry leading SIOM modular interconnect.

Supermicro will also be featuring the newest addition to the company’s NVMe Flash portfolio, supporting Intel’s new Optane SSDs. Supermicro’s industry-leading portfolio of 60+ NVMe Flash-based solutions with Intel Optane SSDs can deliver up to 11 million write IOPS and 30TB of high-performance Optane storage in a 1U form factor.

“We look forward to our participation in CeBIT each year as an opportunity to showcase our industry leading NVMe storage and enterprise computing solutions,” said Charles Liang, President and CEO of Supermicro. “Both our BigTwin and SuperBlade systems are achieving market traction in high-density, energy-conscious data centers.”

Supermicro offers the industry’s most extensive selection of motherboards, servers and storage to support a wide range of markets including DS, DSS, Industrial and Machine Automation, Retail, Transport, Communication and Networking (Security), as well as Warm and Cold Storage.

Key server systems, storage systems, motherboards and Ethernet switches on display this year will include:

  • BigTwin features (SYS-2028BT-HNR+). 4 dual Intel Xeon processor nodes in a 2U form factor, 24 DIMM slots for up to 3TB of memory, 6 NVMe U.2 drive bays per node, and an SIOM networking card per node
  • The Supermicro MicroBlade represents an entirely new type of computing platform. It is a powerful and flexible extreme-density 6U/3U all-in-one total system that features 28/14 hot-swappable MicroBlade Server nodes supporting 28/14 Newest Dual-Node Intel Xeon processor-based UP systems with Intel Xeon processor E3-1200v5 family configurations with up to 2 SSDs/1 HDD per Node.
  • NVMe Ultra Server for advanced in-memory computing. The system will include MemX, a high capacity, high performance, working memory and storage solution that offers superior performance at lower acquisition costs compared to traditional DRAM-only memory configurations. MemX uses NVMe-compatible, high performance HGST-branded Ultrastar SN200 family PCIe solid-state drives (SSDs) from Western Digital. The combined solution can deliver up to 11.7 terabytes (TB) of working memory and direct attached storage of 330 TB per 2U Ultra Server. The combination of Ultra Server and MemX is the ideal solution for Cloud Computing, in-Memory database, and big data analytics workloads used by Cloud Service Providers, Hyperscale, and Enterprise deployments
  • 7U SuperServer  Eight socket R1 (LGA 2011) supports Intel Xeon processor E7-8800 v4/v3 family (up to 24-Core), up to 24TB in 192 DDR4 DIMM slots, up to 15 PCI-E 3.0 slots (8 x16, 7 x8), 4x 10Gb LAN (SIOM), 1 dedicated LAN for IPMI Remote Management, 1 VGA, 2 USB 2.0, 1 COM via KVM, up to 12 Hot-swap 2.5″ SAS3 HDDs (w/ RAID cards), 20x 2.5″ or 6x 3.5″, internal HDDs (w/ RAID cards)   (SYS-7088B-TR4FT)
  • 4U SuperServer with Dual socket R3 (LGA 2011) supports Intel Xeon processor E5-2600 v4/ v3 family (up to 160W TDP), up to 3TB ECC 3DS LRDIMM, up to DDR4-2400MHz; 24x DIMM slots, 2 PCI-E 3.0 x16, 1 PCI-E 3.0 x8, SIOM for flexible networking options, 60x 3.5″ Hot-swap SAS3/SATA3 drive bays; 2x 2.5″ rear Hot-swap SATA drive bays; optional 6 NVMe bays, LSI 3108 SAS3 HW RAID controller, Server remote management: IPMI 2.0/KVM over LAN / Media over LAN (SSG-6048R-E1CR60N)
  • 2U SuperServer with four hot-pluggable system nodes with: Single socket P (LGA 3647) supports  Intel Xeon Phi x200 processor, optional integrated Intel Omni-Path fabric, CPU TDP support Up to 260W, up to 384GB ECC LRDIMM, 192GB ECC RDIMM, DDR4-2400MHz in 6 DIMM slots, 2 PCI-E 3.0 x16 (Low-profile) slots, Intel i350 Dual port GbE LAN, 1 Dedicated IPMI LAN port, 3 Hot-swap 3.5″ SATA drive bays, 1 VGA, 2 SuperDOM, 1 COM, 2 USB 3.0 ports (rear) (SYS-5028TK-HTR)
  • 1U SuperServer Dual socket R3 (LGA 2011) supports Intel Xeon processor E5-2600 v4/ v3 family; QPI up to 9.6GT/s, up to 3TB ECC 3DS LRDIMM up to DDR4- 2400MHz; 24x DIMM slots, 2 PCI-E 3.0 x8 slots(2 FH 10.5″ L, 1 LP), 4x 10GBase-T ports, 10x 2.5″ SATA (Optional 8x SAS3 ports via AOC) Hot-swap Drive Bays, Diablo Technologies Memory1 Support (SYS-1028U-TR4T+)
  • 1U SuperServer with dual socket Intel Xeon processor E5-2600 v4 family (up to 145W TDP), up to 4 co-processors, up to DDR4-2400MHz; 16x DIMM slots, 3 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 Low-profile slot, 2x 10GBase-T LAN via Intel X540, 2x 2.5″ Hot-swap drive bays, 2x 2.5″ internal drive bays (SYS-1028GQ-TXRT)
  • 1U SuperServer with dual socket Intel Xeon processor E5-2600v4/v3 family (up to 145W TDP), up to 4 co-processors, up to 512GBECC 3DS LRDIMM , up to DDR4-2400MHz; 16x DIMM slots, 3 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 Low-profile slot, 2x 10GBase-T LAN via Intel X540, 2x 2.5″ Hot-swap drive bays, 2x 2.5″ internal drive bays (SYS-1028GQ-TXRT)
  • 2U NVMe Mission Critical Storage Server with 40 Dual port NVMe Omni-Path SIOM support (SSG-2028R-DN2R40L)
  • Intel Xeon-D 12-core embedded motherboard (X10SDV-12C-TLN4F): with up to 128GB memory, 6 SATA3 ports, 1 PCI-E 3.0 x16, 1 M.2 PCI-E 3.0 x4, and 2x 10GbE network connectivity
  • Intel Xeon-D 4-core embedded motherboard (X10SDV-2C-TP8F): with up to 128GB memory, 2 PCI-E 3.0 x8, 1 M.2 PCI-E 3.0 x4, and 2x 10G SFP+ networking connectivity
  • ATOM Motherboard, Intel Atom processor E3940, SoC, FCBGA 1296, up to 8GB Unbuffered non-ECC DDR3-866MHz SO-DIMM in 1 DIMM slot, Dual GbE LAN ports via Intel I210-AT, 1 PCI-E 2.0 x2 (in x8) slot, M.2 PCIe 2.0 x2, M Key 2242/2280, 1 Mini-PCIe with mSATA, 2 SATA3 (6Gbps) via SoC, 4 SATA3 (6Gbps) via Marvel 88SE9230, 1 DP (DisplayPort), 1 HDMI, 1 VGA, 1 eDP (Embedded DisplayPort), 1 Intel HD Graphics, 2 USB 3.0 (2 rear), 7 USB 2.0, (2 rear, 4 via headers, 1 Type A), 3 COM ports (1 rear, 2 headers), 1 SuperDOM,  4-pin 12v DC power connector (A2SAV)
  • 1U Top-of-Rack 48x Port 100Gb/s switch (SSH-C48Q) – supports the 100Gbps Intel Omni-Path Architecture (OPA), 48x 100 Gb/s ports – QSFP28, optional RJ45 1G management port and USB serial console port
  • 1U SuperSwitch Top of Rack Bare Metal 1/10G Ethernet switch with 48x 1Gbps Ethernet RJ45 ports and 4x SFP+ 10Gbps Ethernet ports (SSE-G3648BR)

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro

The post Supermicro Showcases Enterprise, Datacenter Solutions at CeBIT 2017 appeared first on HPCwire.

Dell HPC Innovation Lab: Democratizing HPC

Mon, 03/20/2017 - 01:01

The high performance computing requirements of many businesses are exploding. Across varied industries, the growing use of increasingly sophisticated simulation and modeling algorithms, together with big data analytics for smarter, faster decision-making, is overwhelming IT infrastructures.

What many companies are finding is that they need compute, storage, and networking capabilities that are on a par with those commonly found in the largest academic supercomputing centers and government labs.

Unfortunately, there are many challenges in bringing this technology into the enterprise. This is something the Dell HPC Innovation Lab is trying to address.

HPC for Enterprise

While HPC has been critical to scientific research, Dell EMC is trying to mainstream its use to support enterprises of all sizes as they seek a competitive advantage in an ever increasing digital world.

The Dell EMC HPC Innovation Lab’s goal is to commercialize the benefits of advanced processing, network, and storage technologies, as well as enable open standards across the industry. “We want to make it simpler and easier to design and operate HPC systems,” said Onur Celebioglu, HPC Engineering Director at Dell.

To that end, the 13,000-square-foot facility (containing over 1,000 servers of different form factors and generations) is a focal point for Dell EMC’s joint R&D activities with partners and system integrators, as well as with customers. Work focuses on research, development, and integration of HPC solutions. Dell EMC uses the HPC Innovation Lab to evaluate emerging HPC technologies and to find ways to incorporate them into Dell EMC solutions.

Knowing that businesses must take economics into account when deploying HPC solutions, the Dell HPC Innovation Lab generates best practices to help customers get the biggest bang for the buck. The staff includes engineers with diverse technical backgrounds including computer science, mechanical engineering, and bioinformatics. “We are developing best practices and exploring how to get the most performance out of a system,” Celebioglu said. For instance, the lab is looking at things including how BIOS settings impact performance and can be adjusted to fine-tune system performance.

As part of the overall effort, the Dell EMC HPC Innovation Lab builds optimized systems that simplify the HPC system design process. And since every organization’s workloads can have very different characteristics, the lab does performance and scalability studies in collaboration with partners and customers.

Embracing the latest technology

Dell announced a new expansion of the Dell HPC Innovation Lab in cooperation with Intel specifically for support of the Intel® Scalable System Framework (SSF). This multi-million dollar expansion to the facility includes additional domain expertise, infrastructure, and technologists.

To evaluate the benefits and potential uses of the new Intel technologies, the lab hosts Dell’s Zenith supercomputer. Zenith is designed on Intel’s SSF and contains 13,824 cores using Intel Xeon E5-2697 v4 processors, 128 GB of memory per node, a non-blocking OmniPath Architecture (OPA) fabric, and 480 TB of Dell HPC NFS storage.
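For context, and assuming dual-socket nodes (typical for Xeon E5-2600-series systems), the 18-core E5-2697 v4 puts Zenith’s 13,824 cores at 384 two-socket nodes.

    # Quick check of the Zenith core count, assuming dual-socket nodes
    # (a typical configuration for Xeon E5-2600-series servers).
    cores_per_cpu = 18        # Intel Xeon E5-2697 v4
    sockets_per_node = 2      # assumption
    total_cores = 13_824
    print(total_cores // (cores_per_cpu * sockets_per_node))   # 384 nodes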

Dell uses Zenith to prototype and characterize the performance of advanced technologies for general HPC use and specifically for target vertical markets, such as genomics and manufacturing.

Applying derived knowledge to the real world

The work done at the lab permeates Dell HPC solutions. For example, earlier this year Dell announced the global availability of the Dell HPC Systems portfolio, a family of HPC and data analytics solutions, powered by Intel, that combine the flexibility of customized HPC systems with the speed, simplicity, and reliability of pre-configured systems. Dell engineers and domain experts designed and tuned the new systems for research, life sciences and manufacturing workloads, with fully tested and validated building-block systems backed by a single point of hardware support and service options across the solution lifecycle.

With simplified configuration and ordering, enterprise organizations can more quickly select and deploy Dell HPC Systems today. These systems include the latest Intel® Xeon® processor families, support for Intel OPA fabric, Dell HPC Lustre Storage, and Dell HPC NFS Storage solutions.

Furthermore, Dell EMC’s HPC Innovation Lab is involved with the design, development and performance analysis of Dell EMC’s newly announced PowerEdge C6320p server, based on the latest “Knights Landing” (code named “KNL”, now the 7200 series) generation of Intel Xeon Phi processors.

While Dell EMC has worked with the leading supercomputing centers for years, these recent announcements show how the work done in the HPC Innovation Lab is allowing companies to safely apply advanced computing technologies to commercial endeavors.

 

For more information about how the Dell HPC Innovation Lab can help your company meet its compute, storage, and networking requirements, visit: www.dell.com/hpc.

The post Dell HPC Innovation Lab: Democratizing HPC appeared first on HPCwire.

SC17 Technical Paper Abstracts Due Monday, March 20

Fri, 03/17/2017 - 13:25

DENVER, Colo., March 17, 2017 — Only three more days to submit your technical paper abstracts for SC17! Don’t miss out on this opportunity to showcase your HPC R&D. The Technical Papers Program at SC is the leading venue for presenting the highest-quality original research, from the foundations of HPC to its emerging frontiers.

The conference committee is seeking submissions that introduce new ideas to the field and stimulate future trends on topics such as applications, systems, parallel algorithms, data analytics and performance modeling. SC also welcomes submissions that make significant contributions to the “state-of-the-practice” by providing compelling insights on best practices for provisioning, using and enhancing high-performance computing systems, services and facilities.

Abstract submissions close: Monday, March 20, 2017, end of day AoE

Submit abstracts here: https://submissions.supercomputing.org/

The SC conference series is dedicated to promoting equality and diversity and recognizes the role that this has in ensuring the success of the conference series. We welcome submissions from all sectors of society.  SC17 is committed to providing an inclusive conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race or religion.

Source: SC17

The post SC17 Technical Paper Abstracts Due Monday, March 20 appeared first on HPCwire.

BASF Taps HPE to Build Supercomputer for Chemical Research

Fri, 03/17/2017 - 07:36

LUDWIGSHAFEN, Germany and PALO ALTO, Calif., March 17, 2017 — BASF SE and Hewlett Packard Enterprise (NYSE: HPE) today announced that the companies will collaborate to develop one of the world’s largest supercomputers for industrial chemical research at BASF’s Ludwigshafen headquarters this year. Based on the latest generation of HPE Apollo 6000 systems, the new supercomputer will drive the digitalization of BASF’s worldwide research.

“The new supercomputer will promote the application and development of complex modeling and simulation approaches, opening up completely new avenues for our research at BASF,” said Dr. Martin Brudermueller, Vice Chairman of the Board of Executive Directors and Chief Technology Officer at BASF. “The supercomputer was designed and developed jointly by experts from HPE and BASF to precisely meet our needs.”

The new system will make it possible to answer complex questions and greatly reduce the time required to obtain results from several months to days across all research areas. As part of BASF’s digitalization strategy, the company plans to significantly expand its capabilities to run virtual experiments with the supercomputer. In addition, it will help BASF reduce time to market and costs by, for example, simulating processes on catalyst surfaces more precisely or accelerating the design of new polymers with pre-defined properties.

“In today’s data-driven economy, high performance computing plays a pivotal role in driving advances in space exploration, biology and artificial intelligence,” said Meg Whitman, President and Chief Executive Officer, Hewlett Packard Enterprise. “We expect this supercomputer to help BASF perform prodigious calculations at lightning fast speeds, resulting in a broad range of innovations to solve new problems and advance our world.”

With the help of Intel Xeon processors, high-bandwidth, low-latency Intel Omni-Path Fabric and HPE management software, the supercomputer acts as a single system with an effective performance of more than 1 Petaflop (1 Petaflop equals one quadrillion floating point operations per second). With this system architecture, a multitude of nodes can work simultaneously on highly complex tasks, dramatically reducing the processing time.

“Customers are always looking for systems that deliver the best performance at the best total cost of ownership,” said Barry Davis, General Manager, Accelerated Workload Group, Intel. “Intel® Omni-Path Architecture is specifically designed to deliver outstanding performance while scaling cost-effectively from entry-level high performance computing clusters to larger clusters with 10,000 nodes or more — offering a significant advantage on both fronts.”

Developed and built by HPE, the new supercomputer will consist of several hundred computer nodes. The supercomputer will also leverage HPE Apollo Systems to give customers simplified administration efficiencies and flexibility to match their solutions to the workload and lower their total cost of ownership.

About Hewlett Packard Enterprise
Hewlett Packard Enterprise is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

About BASF
At BASF, we create chemistry for a sustainable future. We combine economic success with environmental protection and social responsibility. The approximately 114,000 employees in the BASF Group work on contributing to the success of our customers in nearly all sectors and almost every country in the world. Our portfolio is organized into five segments: Chemicals, Performance Products, Functional Materials & Solutions, Agricultural Solutions and Oil & Gas. BASF generated sales of about EUR 58 billion in 2016. BASF shares are traded on the stock exchanges in Frankfurt (BAS), London (BFA) and Zurich (BAS). Further information at www.basf.com.

Source: HPE

The post BASF Taps HPE to Build Supercomputer for Chemical Research appeared first on HPCwire.

Researchers Recreate ‘El Reno’ Tornado on Blue Waters Supercomputer

Thu, 03/16/2017 - 18:03

The United States experiences more tornadoes than any other country. About 1,200 tornadoes touch down each year in the U.S., with most occurring during tornado season, from March through June. Given the devastating consequences for life and property, understanding these terrible twisters is an important goal of meteorologists, and supercomputing is crucial to the endeavor.

Backed by the power of the Blue Waters supercomputer, researchers at the University of Wisconsin–Madison were able to gain insight into the inner-workings of tornadoes and the supercells that produce them. The resulting visualizations capture the tornado formation process, tornadogenesis, in detail.

The research team, led by Leigh Orf, a scientist with the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin–Madison, simulated a supercell thunderstorm that set off a cluster of tornadoes on the Oklahoma landscape over a four-day period in May 2011.

“One after the other, supercells spawned funnel clouds that caused significant property damage and loss of life,” notes a writeup on the research. “On May 24, one tornado in particular – the “El Reno” – registered as an EF-5, the strongest tornado category on the Enhanced Fujita scale. It remained on the ground for nearly two hours and left a path of destruction 63-miles long.”

The research shed light on essential tornado drivers, but it also validated the unpredictable nature of tornadoes. Certain ingredients are “non-negotiable,” said Orf, including “abundant moisture, instability and wind shear in the atmosphere, and a trigger that moves the air upwards, like a temperature or moisture difference.” But even when these conditions are met, a tornado does not necessarily result.

“In nature, it’s not uncommon for storms to have what we understand to be all the right ingredients for tornadogenesis and then nothing happens,” said Orf. “Storm chasers who track tornadoes are familiar with nature’s unpredictability, and our models have shown to behave similarly.”

The EF-5 simulation was carried out on the Blue Waters Supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. The computation was completed in about three days versus the decades that would be required on a standard desktop computer.

The research team plans to keep refining the model so they can keep unraveling the mysteries of tornado formation. Increasing scientific knowledge of severe weather events, such as these, has important implications for enhancing life-saving storm warning systems.

Video of May 24, 2011 supercell simulation:

For the full story, see the University of Wisconsin–Madison news article by Eric Verbeten.

The post Researchers Recreate ‘El Reno’ Tornado on Blue Waters Supercomputer appeared first on HPCwire.
