HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

£3m Awarded to Oxford-led Consortium for Machine Learning Facility

Thu, 03/30/2017 - 09:51

OXFORD, March 30, 2017 — A consortium of eight UK universities, led by the University of Oxford, has been awarded £3 million by the Engineering and Physical Sciences Research Council (EPSRC) to establish a national high-performance computing facility to support machine learning.

The new facility, known as the Joint Academic Data Science Endeavour (JADE), forms part of a combined investment of £20m by EPSRC in the UK’s regional Tier 2 high-performance computing facilities, which aim to bridge the gap between institutional and national resources.

JADE, which will be the largest Graphics Processing Unit (GPU) facility in the UK, will provide a computational hub to support world-leading machine learning research groups at the universities of Oxford, Edinburgh and Sheffield, and at King’s College London, Queen Mary University of London and University College London (UCL). It will also provide a powerful resource for data science and molecular dynamics researchers at the universities of Bristol and Southampton.

Machine learning has experienced huge growth over the last five years, with applications including computer vision for driverless cars, language translation services and medical imaging. JADE is the first national computing facility to support this rapid growth.

Professor Mike Giles of Oxford University, who is leading the project, said: ‘For the first time, JADE will provide very significant national computing facilities addressing the particular needs of machine learning, one of the fastest growing areas of academic research and industrial application.’

JADE will be delivered through a partnership between Atos, which will provide and integrate the system hardware, and STFC’s Hartree Centre, which will host and support the system for the facility’s initial three-year duration. Exploiting the capabilities of the NVIDIA DGX-1 Deep Learning System, JADE will comprise 22 of these servers, each containing eight of the newest NVIDIA Tesla P100 GPUs linked by NVIDIA’s NVLink interconnect technology.
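A quick back-of-envelope check of the aggregate capability this implies (the per-GPU peak figures below are NVIDIA's published specifications for the NVLink-connected Tesla P100, assumed here rather than taken from the announcement):

```python
# Rough aggregate-peak estimate for JADE as described above.
# Per-GPU peaks are NVIDIA's published SXM2 Tesla P100 figures (assumed).
nodes = 22                 # DGX-1 servers
gpus_per_node = 8          # Tesla P100 GPUs per DGX-1
peak_fp64_tflops = 5.3     # per P100, double precision
peak_fp16_tflops = 21.2    # per P100, half precision (common for deep learning)

total_gpus = nodes * gpus_per_node
aggregate_fp64 = total_gpus * peak_fp64_tflops
aggregate_fp16 = total_gpus * peak_fp16_tflops

print(f"{total_gpus} GPUs, ~{aggregate_fp64:.0f} TF FP64, ~{aggregate_fp16 / 1000:.1f} PF FP16")
```

At those figures, JADE's 176 GPUs would offer on the order of 0.9 petaflops of double-precision throughput and roughly 3.7 petaflops at half precision, the mode most relevant to deep learning training.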

To support researchers using the system, five software engineering posts are being created by Oxford, KCL, QMUL, Southampton and UCL. This is a key investment to ensure the necessary expertise is in place to derive maximum benefit from the new facility.

Speaking on JADE’s potential research impact, Professor Philip Nelson, EPSRC’s Chief Executive, said: ‘These centres will enable new discoveries, drive innovation and allow new insights into today’s scientific challenges. They are important because they address an existing gulf in capability between local university systems and the UK National Supercomputing Service ARCHER. Many universities are involved in the six new centres, and these will give more researchers easy access to High Performance Computing.’

The Engineering and Physical Sciences Research Council (EPSRC)

As the main funding agency for engineering and physical sciences research, our vision is for the UK to be the best place in the world to Research, Discover and Innovate.

By investing £800 million a year in research and postgraduate training, we are building the knowledge and skills base needed to address the scientific and technological challenges facing the nation. Our portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research we fund has impact across all sectors. It provides a platform for future economic development in the UK and improvements for everyone’s health, lifestyle and culture.

We work collectively with our partners and other Research Councils on issues of common concern via Research Councils UK.

Source: EPSRC

The post £3m Awarded to Oxford-led Consortium for Machine Learning Facility appeared first on HPCwire.

Travel Support Available for PEARC17 Student Contributors

Thu, 03/30/2017 - 09:18

NEW ORLEANS, La., March 30, 2017 — PEARC17, Practice & Experience in Advanced Research Computing 2017, is now offering financial support opportunities for students with submissions to the main conference program—thanks to support from XSEDE, the San Diego Supercomputer Center, and through the fundraising efforts of STEM-Trek and Virginia Tech. The deadline to apply is May 25, 2017.

Funding is available to cover costs of airfare, shared lodging, and registration fees. Due to funding constraints, participation in the PEARC17 Student Program is limited to students at U.S. and Canadian institutions, and partial support is requested from the student’s institution.

To receive travel support for the PEARC17 Student Program, students must apply no later than May 25 and are required to participate in all student activities, including the volunteer program. Student authors will be notified of accepted papers by mid-April and of accepted posters by mid-May.

Learn more about the PEARC17 Student Program and participation opportunities at http://pearc17.pearc.org/student-program.


PEARC17—Practice & Experience in Advanced Research Computing 2017—unites the high-performance computing and advanced digital research communities, addressing the challenges of using and operating advanced research computing within academic and open science communities. Being held in New Orleans July 9-13, PEARC17 offers a robust technical program, as well as networking, professional growth and multiple student participation opportunities. See pearc17.pearc.org for more information.

Source: PEARC


GW4 Unveils World’s First ARM-based Production Supercomputer

Thu, 03/30/2017 - 09:00

BIRMINGHAM, England, March 30, 2017 — The GW4 Alliance has unveiled the world’s first ARM-based production supercomputer at today’s Engineering and Physical Sciences Research Council (EPSRC) launch at the Thinktank science museum in Birmingham.

The EPSRC awarded the GW4 Alliance, together with Cray Inc. and the Met Office, £3m to deliver a new Tier 2 high performance computing (HPC) service that will benefit scientists across the UK. 

The supercomputer, named ‘Isambard’ after the renowned Victorian engineer Isambard Kingdom Brunel, will enable researchers to choose the best hardware system for their specific scientific problem, saving time and money.

Isambard is able to provide system comparison at high speed, as it includes over 10,000 high-performance 64-bit ARM cores, making it one of the largest machines of its kind anywhere in the world.

It is thought that the supercomputer, which has already received international acclaim, could provide the template for a new generation of ARM-based services. 

Isambard is being assembled at its new home, the Met Office, where EPS and climate scientists will work together to gain first-hand insights into how their scientific codes need to be adapted to emerging computational architectures.

 Professor Simon McIntosh-Smith, lead academic on the project at the University of Bristol said: “We’re delighted with the reaction that Isambard has received within the high performance computing community. Since we announced the system we’ve been contacted by a wide range of world-class academic and industrial HPC users asking for access to the service. The GW4 Isambard project is able to offer a high-quality production environment for direct comparison across a wide range of architectures with class-leading software tools, and this is proving to be an exciting combination.”

Professor Nick Talbot, Chair of the Board for the GW4 Alliance and Deputy Vice-Chancellor for Research and Impact at the University of Exeter, said: “We have been delighted to work with partners Cray Inc and the Met Office on this project, which has demonstrated how GW4’s collaborative ethos can produce truly world-leading outcomes. Isambard exemplifies our region’s expertise in advanced engineering and digital innovation, and we hope it could provide the blueprint for a new era of supercomputing worldwide.”

Established in 2013, the GW4 Alliance brings together four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter. It aims to strengthen the economy across the region through undertaking pioneering research with industry partners.  

About the Engineering and Physical Sciences Research Council (EPSRC)


We work collectively with our partners and other Research Councils on issues of common concern via Research Councils UK. www.epsrc.ac.uk

Source: EPSRC


Prof. Dieter Kranzlmüller Named Chairman of the Board at Leibniz Supercomputing Centre

Thu, 03/30/2017 - 08:54

BERLIN, March 30, 2017 — The Gauss Centre for Supercomputing (GCS) announced today that effective April 1st, 2017, Prof. Dr. Dieter Kranzlmüller is the new Chairman of the Board of Directors at GCS member Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities in Garching near Munich. Kranzlmüller succeeds Prof. Dr. Dr. h.c. Arndt Bode who has been Chairman of the Board since October 1st, 2008. Prof. Bode will continue to be a member of the LRZ Board of Directors.

In 2008 Dieter Kranzlmüller joined the Board of Directors at LRZ and became a full professor of computer science at the Chair for Communication Systems and System Programming at Ludwig-Maximilians-Universität Munich (LMU). His scientific focus lies in e-infrastructures, including network and IT management, grid and cloud computing, as well as high performance computing, virtual reality and visualisation. Kranzlmüller graduated from Johannes Kepler University Linz. After several years in the IT industry, he returned to academia, working at the University of Reading, TU Dresden and École Normale Supérieure de Lyon, and serving as deputy director of the EGEE project at CERN in Geneva. He is strongly internationally oriented and is a member of many European and international organizations in the field of IT.

Kranzlmüller actively supports the Center for Digital Technology and Management (CDTM), a joint initiative of the two Munich universities, Ludwig-Maximilians-Universität München (LMU) and the Technical University of Munich (TUM), to foster young researchers. Just like Arndt Bode in 2008, Dieter Kranzlmüller will begin his duties as LRZ Chairman of the Board of Directors with the procurement of the next-generation supercomputer at LRZ, SuperMUC-NG.

Arndt Bode enjoys an excellent reputation worldwide as an expert in supercomputer architectures and energy-efficient high performance computing. Ever since he started his professional career, Bode has been involved in the development of parallel computers and has for decades been one of the driving forces in this field. He advanced Germany’s and Europe’s infrastructure in this area and represents Germany in PRACE, the Partnership for Advanced Computing in Europe. In addition, Bode has been, and still is, one of the driving forces behind a number of significant German and European organizations and projects, including the Gauss Centre for Supercomputing, PROSPECT, ETP4HPC, GÉANT and KONWIHR, to name just a few.

Arndt Bode’s efforts were acknowledged on many occasions, most notably in 2015 when he was awarded the Konrad Zuse Medal of the Gesellschaft für Informatik, the German Informatics Society. As Chairman of the Board of Directors, Bode drove the transition of the LRZ from a classic computing centre to a modern IT service provider.

About GCS

The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s Tier-0 supercomputing institution. Concertedly, the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe to serve a wide range of industrial and research activities in various disciplines. They also provide top-class training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 25 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level. GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia. It has its headquarters in Berlin/Germany. www.gauss-centre.eu

Source: Gauss Centre for Supercomputing


Ohio Supercomputer Center Dedicates ‘Owens’ Cluster

Wed, 03/29/2017 - 16:39

In a dedication ceremony held earlier today (March 29), officials from the Ohio Supercomputer Center (OSC) along with state representatives gathered to celebrate the launch of OSC’s newest cluster: “Owens.” A nod to the system’s speed and power, “Owens” is the namesake of J.C. “Jesse” Owens, who won four gold medals at the 1936 Olympics. Along with recent upgrades, the new system increases the center’s total computing capacity by a factor of four and its storage capacity by a factor of three.

The Dell/Intel cluster was funded as part of a $12 million appropriation included in the 2014–15 Ohio biennial capital budget. An investment of $9.7 million went toward the cluster, and the remainder of the appropriation funded storage systems and facilities upgrades required to support existing and new resources.

“The state of Ohio has made significant investments in the OSC since its creation to expand research in academia and industry across the state through the use of high performance computing services,” said Chancellor John Carey, of the Ohio Department of Higher Education. “Deploying this new system at the center will give Ohio researchers a powerful new tool that they can leverage to make amazing discoveries and innovative breakthroughs.”

As shared in an announcement from OSC, other speakers included David Hudak, Ph.D., interim executive director of OSC; Thomas Beck, Ph.D., professor of chemistry at the University of Cincinnati and chair of the center’s Statewide Users Group; and Tony Parkinson, Vice President for NA Enterprise Solutions and Alliances at Dell EMC.

“This major acquisition, installation and deployment will enable our clients, both academic and industrial, to significantly enhance their computational work,” Hudak said. “Ohio researchers are eager for this massive increase in computing power and storage space. Our current systems were almost constantly running near peak capacity.”

“OSC is dedicated to keeping users involved in the evolution of its HPC systems,” Beck said. “This commitment ensures that research projects are computationally on par with work being conducted by colleagues and partner organizations throughout the state, across the country and internationally.”

The Owens cluster comprises a total of 824 Dell PowerEdge server nodes. Of these, 648 are “dense nodes”: C6320 two-socket servers with Intel Xeon E5-2600 v4 processors and 128 GB of memory. The system spec page also lists an analytics complement of 16 huge-memory nodes (Dell PowerEdge R930 four-socket servers with Intel Xeon E5-4830 v3 processors, 1,536 GB of memory and 12 x 2TB drives).

The theoretical peak performance of the CPU nodes is ~750 teraflops, but the recent addition of 160 Nvidia Pascal-based GPU nodes (Dell PowerEdge R730 two-socket servers with Intel Xeon E5-2680 v4 CPUs) doubles this to 1.5 petaflops of double-precision performance, nearly 10X greater than any previous OSC system.
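Those headline figures can be roughly sanity-checked from the node counts above. The core counts, clock speeds and per-GPU peak below are vendor-published numbers assumed here for illustration; marketing peaks often use slightly different clocks, and the huge-memory nodes and GPU-node host CPUs contribute the remainder of the quoted ~750 TF:

```python
# Back-of-envelope check of the quoted Owens peak numbers (assumed specs).
dense_nodes = 648
sockets, cores, ghz = 2, 14, 2.4           # Xeon E5-2600 v4-class CPUs (assumed)
flops_per_cycle = 16                       # AVX2: 2 FMA units x 4 doubles x 2 ops

cpu_tflops = dense_nodes * sockets * cores * ghz * flops_per_cycle / 1000

gpu_nodes, p100_fp64_tflops = 160, 4.7     # one PCIe Tesla P100 per node assumed
gpu_tflops = gpu_nodes * p100_fp64_tflops

print(f"dense CPU ~{cpu_tflops:.0f} TF, GPU ~{gpu_tflops:.0f} TF")
```

Under these assumptions the dense nodes alone contribute roughly 0.7 petaflops and the GPUs another ~0.75 petaflops, consistent with the article's claim that the GPU addition roughly doubles the system to 1.5 petaflops.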

The system relies on DDN storage componentry, including Infinite Memory Engine, and Mellanox EDR (100Gbps) InfiniBand.

“OSC’s Owens Cluster represents one of the most significant HPC systems Dell has built,” said Dell EMC’s Parkinson.

The announcement from OSC notes “a nearly complete overhaul of the data center infrastructure has been completed since last spring, now providing users with nearly 5.5 petabytes of disk storage and more than five petabytes of tape backup. The center also acquired and installed NetApp software and hardware for home directory storage.”

Link to OSC announcement.
Link to Owens system specs.


NSF Seeks Bold Ideas on Cyberinfrastructure Needs: Submissions Due April 5

Wed, 03/29/2017 - 15:41

March 29, 2017 — A new NSF Dear Colleague Letter (DCL) has been posted: Request for Information on Future Needs for Advanced Cyberinfrastructure to Support Science and Engineering Research (NSF CI 2030), https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf17031.

NSF Directorates and Offices are jointly requesting input from the research community on science challenges and associated cyberinfrastructure needs over the next decade and beyond. Contributions to this Request for Information (RFI) will be used during the coming year to inform the Foundation’s strategy and plans for advanced cyberinfrastructure investments. We invite bold, forward-looking ideas that will provide opportunities to advance the frontiers of science and engineering well into the future.

The DCL points to an external submission website. Please note that the deadline for submissions is April 5, 2017 5:00 PM ET. Questions about this effort and the submission process should be sent to William Miller, Office of Advanced Cyberinfrastructure, at the following address: nsfci2030rfi@nsf.gov.

Source: XSEDE


USC’s Information Sciences Institute Tapped to Lead $31M Chip Research Project

Wed, 03/29/2017 - 15:29

March 29, 2017 — The Information Sciences Institute at the USC Viterbi School of Engineering has been awarded a $30.9 million contract to develop technology ensuring that computing chips are manufactured with minimal defects.

The team led by Principal Investigator John Damoulakis, a senior director for advanced electronics at USC ISI, includes researchers from USC’s Ming Hsieh Department of Electrical Engineering, Stanford University, Northwestern University and the Paul Scherrer Institute at the Swiss Federal Institutes of Technology.

“We are thrilled that the award will allow ISI to provide commercial, academic and government entities with reliable and economical access to electronics that are free of manufacturing defects,” said Premkumar Natarajan, the executive director of ISI.

The USC team was selected, among others, because of its expert knowledge of microelectronics, microscopy and high-performance computing, as well as its ability to deliver research to the U.S. government pertaining to the manufacturing of reliable nano-electronics, he explained.

“Nano-imaging is technology of the future. The USC team will image features in chips that are about 5,000 times smaller than a human hair and make them visible to the human eye for analysis and experimentation,” Natarajan said.

“This capability can lead to the discovery of new materials and pharmaceuticals and advance the understanding of biological structures, thus opening a new realm of research at USC.”

Source: University of Southern California


EU Ratchets up the Race to Exascale Computing

Wed, 03/29/2017 - 15:12

The race to expand HPC infrastructure, including exascale machines, to advance national and regional interests ratcheted up a notch yesterday with the announcement that seven European countries – France, Germany, Italy, Luxembourg, the Netherlands, Portugal and Spain – have signed an agreement to establish EuroHPC. It calls for “acquiring and deploying an integrated world-class high-performance computing infrastructure…available across the EU for scientific communities, industry and the public sector, no matter where the users are located.” The announcement was made by the European Commission.

“High-performance computing is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe. But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race,” said Andrus Ansip, European Commission vice president for the Digital Single Market in the official release.

The EU, of course, is no stranger to the pursuit of HPC, with programs such as Horizon 2020 and PRACE2, the follow-on to PRACE (Partnership for Advanced Computing in Europe), among others. How they all fit together isn’t immediately clear. What seems clear is that regional and national competitive zeal over HPC is rising. Earlier this week the U.K. announced plans to establish six new HPC centers. The U.K. is in the process of exiting the European Union (see HPCwire article: UK to Launch Six Major HPC Centers).

The official EU announcement characterized the initiative as “a European project of the size of Airbus in the 1990s and of Galileo in the 2000s.” Here’s an excerpt from the agreement, first signed last week in Rome:

“The participating Member States:

  • Agree to work towards the establishment of a cooperation framework – EuroHPC – for acquiring and deploying an integrated exascale supercomputing infrastructure that will be available across the EU for scientific communities as well as public and private partners, no matter where supercomputers are located.
  • Agree, in the context of EuroHPC, to work together and with the European Commission to prepare, preferably by the end of 2017, an implementation roadmap for putting in place the above-mentioned exascale supercomputing infrastructure that would address the following:

1. The technical and operational requirements and the financial resources needed for acquiring such infrastructure

2. The definition of appropriate legal and financial instruments for such acquisition.

3. The procurement processes for the acquisition of two world-class pre-exascale supercomputers preferably starting on 2019-2020, and two world-class full exascale supercomputers preferably starting on 2022-2023.

4. The development of high-quality competitive European technology, its optimization through a co-design approach and its integration in at least one of the two exascale supercomputers.

5. The development of test-beds for HPC and Big Data applications for scientific, public administration and industrial purposes.”

The agreement also calls for pan-European involvement:

  • “Invite the European Commission to participate in this endeavor and work together on how it can be best supported at EU level.
  • Agree that the target exascale supercomputing infrastructure will address the growing needs of the scientific community, and to look also for ways and conditions to open the availability of this infrastructure to users from industry and the public sector, while guaranteeing the best use of the infrastructure for scientific excellence and an innovative and competitive industry.
  • Agree to enable the development of applications and services, for example those proposed in the IPCEI on HPC and BDA.
  • Invite all Member States and Associated Countries to join EuroHPC.”

It’s noteworthy that similar concerns are being expressed in the U.S. with regard to HPC leadership, particularly in light of Trump’s proposed budget.

Speaking with HPCwire, William Gropp, acting director of the National Center for Supercomputing Applications, noted, “It would be great if there were more than one NSF track one system. It would be great if there were more than a handful of track two systems. If you look at Japan, for example, they have nine large university advanced computing systems, not counting the Flagship 2020 system in their plans, and that, honestly, is more than we’ve got. So there is a concern we will not provide the level of support that will allow us to maintain broad leadership in the sciences. That’s been a continuous concern.” (see HPCwire article, Bill Gropp – Pursuing the Next Big Thing at NCSA)

Link to the EU announcement: https://ec.europa.eu/digital-single-market/en/news/eu-ministers-commit-digitising-europe-high-performance-computing-power


Intel Appoints Chief Strategy Officer

Wed, 03/29/2017 - 10:19

SANTA CLARA, Calif., March 29, 2017 — Intel Corporation today announced the appointment of Aicha S. Evans as chief strategy officer, effective immediately. She will be responsible for driving Intel’s long-term strategy to transform from a PC-centric company to a data-centric company, as well as leading rapid decision making and company-wide execution of the strategy.

“Aicha is an industry visionary who will help our senior management team and the board of directors focus on what’s next for Intel,” Intel CEO Brian Krzanich said. “Her new role reflects her strong strategic leadership across Intel’s business, most importantly in 5G and other communications technology. Her invaluable expertise will contribute to the company’s long-term strategy and product portfolio.”

“I look forward to working across the company to advance Intel’s ongoing transformation,” Evans said. “We have an exciting future ahead of us.”

Evans is an Intel senior vice president and has been responsible for wireless communications for the past nine years. Most recently, she was the general manager of the Communication and Devices Group. Evans joined Intel in 2006 and is based in Santa Clara, Calif. In her new role, she will report to Intel CFO Bob Swan.

An internal and external search is underway for a new general manager of Intel’s Communication and Devices Group.

About Intel

Intel (NASDAQ: INTC) expands the boundaries of technology to make the most amazing experiences possible. Information about Intel can be found at newsroom.intel.com and intel.com.

Source: Intel


Data-Hungry Algorithms and the Thirst for AI

Wed, 03/29/2017 - 09:21

At Tabor Communications’ Leverage Big Data + EnterpriseHPC Summit in Florida last week, esteemed HPC professional Jay Boisseau, chief HPC technology strategist at Dell EMC, engaged the audience with his presentation, “Big Computing, Big Data, Big Trends, Big Results.”

Trends around big computing and big data are converging in powerful ways, including the Internet of Things (IoT), artificial intelligence (AI) and deep learning. Innovating and competing is now about big, scalable computing and big, fast data analytics – and “those with the tools and talent will reap the big rewards,” Boisseau said.

Prior to joining Dell EMC (then Dell Inc.) in 2014, Boisseau made his mark as the founding director of the Texas Advanced Computing Center (TACC). Under his leadership the site became a center of HPC innovation, a legacy that continues today under Director Dan Stanzione.

Jay Boisseau

“I’m an HPC person who’s fascinated by the possibilities of augmenting intelligence with deep learning techniques; I’ve drunk the ‘deep learning Kool-Aid,’” Boisseau told the crowd of advanced computing professionals.

AI as a field goes back to the 50s, Boisseau noted, but the current proliferation of deep learning using deep neural networks has been made possible by three advances: “One is that we actually have big data; these deep learning algorithms are data hungry. Whereas we sometimes lament the growth of our data sizes, these deep neural networks are useless on small data. Use other techniques if you have small data, but if you have massive data and you want to draw insights that you’re not even sure how to formulate the hypothesis ahead of time, these neural network based methods can be really really powerful.

“Parallelizing the deep learning algorithms was another one of the advances, and having sufficiently powerful processors is another one,” Boisseau said.

AI, big data, cloud and deep learning are all intertwined, and they are driving rapid expansion of the market for HPC-class hardware. Boisseau mines for correlations with the aid of Google Trends; the fun-to-play-with Google tool elucidates the contemporaneous rise of big data, deep learning and IoT. Boisseau goes a step further, showing how Nvidia stock floats up on these tech trends.

The narrow point here is that deep learning/big data is an engine for GPU sales; the larger point is that these multiple related trends are driving silicon specialization and impacting market dynamics. As Boisseau points out, we’re only at the beginning of this trend cluster and we’re seeing silicon developed specifically for AI workloads as hardware vendors compete to establish themselves as the incumbent in this emerging field.

Another deep learning champion, Nvidia CEO Jen-Hsun Huang, refers to machine learning as HPC’s first consumer killer app. When Nvidia’s CUDA-based ecosystem for HPC application acceleration launched in 2006, it kick-started an era of heterogeneity in HPC (we’ll give the IBM-Sony Cell BE processor some cred here too, even if the processor design was an evolutionary dead end). Fast-forward to 2013-2014, and the emerging deep learning community found a friend in GPUs. With Nvidia, they could get their foot in the DL door with an economical gaming board and work their way up the chain to server-class Tesla GPUs for maximum bandwidth and FLOPS.

Optimizations for single-precision (32-bit) processing, and support for half-precision (16-bit) on Nvidia’s newer GPUs, translate into faster computation for most AI workloads, which, unlike many traditional HPC applications, do not require full 64-bit precision. Intel is incorporating variable-precision compute into its next-gen Phi product, the Knights Mill processor (due out this year).
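The precision trade-off is easy to see in a few lines of pure Python: the `struct` module's `'e'` format rounds a value through IEEE 754 half precision, so FP16 accumulation can be simulated without any GPU. The values here are purely illustrative, not from any benchmark:

```python
import struct

def fp16(x: float) -> float:
    """Round x to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Accumulate 10,000 small increments in FP64 vs. simulated FP16.
total64, total16 = 0.0, 0.0
for _ in range(10_000):
    total64 += 1e-4
    total16 = fp16(total16 + fp16(1e-4))

print(total64)   # ~1.0
print(total16)   # stalls well below 1.0 once the increment drops under half an ulp
```

Deep learning training tolerates this kind of error because gradient updates are noisy to begin with (and frameworks compensate with techniques such as loss scaling), whereas many traditional HPC codes, e.g., implicit solvers, generally cannot; hence the continued need for full 64-bit precision in classic simulation workloads.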

Boisseau observed that HPC began its swing toward commodity architectures about two decades ago, with the invention of commodity-grade Beowulf clusters by Thomas Sterling and Donald Becker in 1994. Benefiting from PC-based economies of scale, these x86 server-based Linux clusters became the dominant architecture in HPC. In turn, this spurred the movement toward broader enterprise adoption of HPC.

Although Xeon-flavored x86 is something of a de facto standard in HPC (with > 90 percent share), the pendulum appears headed back toward greater specialization and greater “disaggregation of technology,” to use a phrase offered by industry analyst Addison Snell (CEO, Intersect360 Research). Examples include IBM’s OpenPower systems; GPU-accelerated computing (and Power+GPU); ARM (now in server variants with HPC optimizations); AMD’s Zen/Ryzen CPU; and Intel’s Xeon Phi line (also its Altera FPGAs and imminent Xeon Skylake).

A major driver of all this: a gathering profusion of data.

“In short, HPC may be getting diverse again, but much of the forcing function is big data,” Boisseau observed. “Very simply, we used to have no digital data, then a trickle, but the ubiquity of computers, mobile devices, sensors, instruments and user/producers has produced an avalanche of data.”

Buzz terminology aside, big data is a fact of life now, “a forever reality,” and those who can use big data effectively (or just “data,” if the “big” tag drops off) will be in a position to out-compete, Boisseau added.

When data is your prime directive and chief advantage, opportunity accrues to whoever holds the data, and that would be the hyperscalers, said Boisseau. Google, Facebook, Amazon, et al. are investing heavily in AI, amassing AI-friendly hardware like GPUs but also innovating ahead with even more efficient AI hardware (e.g., Tensor Processing Units at Google, FPGAs at Microsoft). On the tool side are about a dozen popular frameworks, TensorFlow (Google), MXNet (Amazon), and CNTK (Microsoft) among them.

Tech giants are advancing quickly with AI strategies too, Boisseau noted. Intel has made a quick succession of acquisitions (Nervana, Movidius, Saffron, Mobileye); IBM’s got its acquisition-enhanced Watson; Apple bought Turi.

“You [also] have companies like GraphCore, Wave Computing, and KnuPath that are designing special silicon with lower precision and higher performance,” said Boisseau. “There was a fourth one, Nervana, and Intel liked that company so much they bought it. So there were at least four companies making silicon dedicated to deep learning. I’m really eager to see if Nvidia – and I don’t have inside knowledge on this – further optimizes their technology for deep learning and removes some of the circuitry that’s still heritage graphics oriented as well as how the special silicon providers do competing against Intel and Nvidia as well as how Intel’s Nervana shapes up.”

Adding to the cloud/hyperscaler mix is the quickly expanding world of IoT, which is driving big data. The Internet of Things is enabling companies to operate more efficiently; it’s facilitating smart buildings, smart manufacturing, and smart products, said Boisseau. But as the spate of high-profile DDoS attacks attests, there’s a troubling security gap. The biggest challenge for IoT is “security, security, security,” Boisseau emphasized.

Another top-level point Boisseau made is that over half of HPC systems are now sold to industry, notably across manufacturing, financial services, life sciences, energy, EDA, weather and digital content creation. “Big computing is now as fundamental to many industries as it is in research,” Boisseau said. Half of the high performance computing TAM (total addressable market), estimated at nearly $30 billion, is now in enterprise/industry, and there’s still a lot of untapped potential, in Boisseau’s opinion.

Market projections for AI are even steeper. Research houses predict that AI will grow to tens of billions of dollars a year (IDC predicts a surge past $4 billion in 2020; IBM expects the market to reach $2 trillion over the next decade; Tractica plots $3.5 trillion in revenue by 2025).

Boisseau is confident that the world needs big data AND deep learning, citing the following reasons/scenarios:

  • Innovation requires ever more capability: to design, engineer, manufacture, distribute, market and produce new/better products and services.
  • Modeling and simulation enable design, in accordance with physics/natural laws, and virtual engineering, manufacturing, testing.
  • Machine learning and deep learning enable discovery and innovation:
    • when laws of nature don’t apply (social media, sentiment, etc.) or are non-linear/difficult to simulate accurately over time (e.g., weather forecasting);
    • and when they may be quicker and/or less costly, depending on simulation scale and complexity versus data completeness.

“When we understand the laws of nature, when we understand the equations, it gives us an ability to model and simulate highly accurately,” said Boisseau. “But for crash simulations, we still don’t want to drive a car that’s designed with data analysis; we need modeling and simulation to truly understand structural dynamics and fluid flow and even then data analysis can be used in the interpretation.

“There will be times where data mining over all those crash simulations adds to the modeling and simulation accuracy. So modeling and simulation will always remain important, at least as long as the universe is governed by visible laws, especially in virtual engineering and manufacturing testing, but machine learning and deep learning enable discovery in other ways, especially when the laws of nature don’t apply.”

“If you’ve adopted HPC great, but deep learning is next,” Boisseau told the audience. “It might not be next year for some of you, it might be two years, five years, but I suspect it’s sooner than you think.”

The post Data-Hungry Algorithms and the Thirst for AI appeared first on HPCwire.

Colombia Taps Supercomputer for HIV Drug Research

Wed, 03/29/2017 - 09:11

WEST LAFAYETTE, Ind., March 29, 2017 — Colombian researchers are helping design new drugs for treating HIV using a supercomputer built in a partnership between the Universidad EAFIT in Medellin and Purdue University.

Besides trying to identify likely drug targets for new HIV treatments, EAFIT’s first supercomputer, named Apolo, is being used for everything from earthquake science in a country regularly shaken by tremors, to a groundbreaking examination of the tropical disease leishmaniasis, to the most “green” way of processing cement. The machine speeds the time to science for Colombian researchers and lets them tackle bigger problems.

Because EAFIT is one of the few Colombian universities with a supercomputer and a strong partnership with a major American research university, it is poised to receive big money from the Colombia Científica program funded by the World Bank. The program is overseen by Colciencias, Colombia’s National Science Foundation. It is aimed at producing more robust research and more advanced-degree graduates, as well as at growing partnerships with local industries. EAFIT already has attracted Grupo Nutresa, a Latin American food processing company.

Juan Luis Mejía, rector at Universidad EAFIT, says that in retrospect the decision to buy Apolo was almost “irresponsible” considering obstacles like the lack of support in Colombia or the staff required on top of the hardware.

“The reality here in Colombia is that there’s incentive to invest in the machines but not for the human capital necessary to run them,” says Juan Guillermo Lalinde, director of the Apolo team and professor of informatics at EAFIT.

Decades of isolation due to the violent drug wars of the ’80s and ’90s took a toll on Colombian universities’ ability to grow. Mejía says EAFIT had been searching for an international partner to help.

In Purdue, EAFIT found a partner with a lot to offer where accelerating discoveries in science and engineering with supercomputers is concerned. Purdue’s central information technology organization has built and operated nine high-performance computing systems for faculty researchers in as many years, most rated among the world’s top 500 supercomputers, giving Purdue the nation’s best research computing resources for use on a single campus. When one of those machines was retired at Purdue, its hardware became the foundation of Apolo.

Pilar Cossio, a Colombian HIV researcher working for the Max Planck Institute in Germany, requires high-performance computing for her work examining millions of compounds to see which ones bind best to particular proteins. She didn’t anticipate finding a ready-made solution when she came back to her home country. Then she found Apolo.

“There are only two supercomputers in Colombia for bioinformatics,” says Cossio, whose research combines physics, computational biology and chemistry. “Apolo is the only one that focuses on satisfying scientific needs. It’s important for us in the developing countries to have partnerships with universities that can help us access these crucial scientific tools.”

While the hardware is important, the partnership is also about people. Purdue research computing staff members have traveled to Colombia to help train and to work with EAFIT colleagues, and EAFIT students have participated in Purdue’s Summer Undergraduate Research Fellow (SURF) program working with a variety of supercomputing experts at Purdue.

EAFIT and Purdue have even sent joint teams to student supercomputing competitions in New Orleans and Frankfurt, Germany. Some of the Colombian students on the teams have become key staff members at Colombia’s Apolo Scientific Computing Center, which, in turn, is training the next generation of Colombia’s high-performance computing experts.

If watching the Apolo team at work is a lot like watching Purdue’s research computing operation at work, there’s a reason. From the start, Purdue emphasized the sensibility of its Community Cluster Program partnership with faculty and its centrally managed, shared “condo” model for operating research supercomputers, says Donna Cumberland, executive director of research computing.

“Anybody can buy a machine, but getting people to run it and getting faculty to use it, that’s what we wanted to impart,” Cumberland says.

With Apolo’s installation in 2012, EAFIT’s research capacity ballooned. Once fully functional, the computer constantly ran at almost 100 percent capacity. In 2015, Apolo helped 69 researchers from EAFIT, other universities and local industry complete their research. In 2016, the machine executed what’s equivalent to 129 years of computation.

Before Apolo, Juan Carlos Vergara would go two weeks at a time without his personal computer because it was busy crunching numbers for his research modeling earthquakes. His hard drive broke twice. The supercomputer let him get months of work done in days, while also letting him expand the scale of his seismic engineering problems to an area 5 million times larger.

“Sometimes they would be up until 1 a.m. helping me solve problems,” Vergara says of the Apolo staff. “I saw them as part of my team, fundamental to what I do every day.”

In the fall of 2016, EAFIT retired Apolo and bought Apolo II. As it goes with technology, Apolo II is a third the size of the original with twice the power. Purdue will be adding to that power by selling EAFIT part of its next retiring research supercomputer this year.

“Finding an alliance without a hidden agenda, with a true interest in sharing knowledge of technology that would allow us to progress,” Mejía says. “Because of all this, I believe that the relationship between our university and Purdue is one of the most valuable and trusting.”

Gerry McCartney, Purdue’s vice president for information technology and chief information officer, says the credit really should go to EAFIT, which was willing to make a leap into high-performance research computing and recognized the avenues it could open.

“They had the academic environment, the infrastructure and the willingness to invest in people,” McCartney says. “We think of them as a partner now, and we expect to deepen that.” 

Source: Purdue University


ScaleMatrix Wins Most Innovative Solution Award at Leverage Big Data + Enterprise HPC

Wed, 03/29/2017 - 07:16

SAN DIEGO, Calif., March 29, 2017 — ScaleMatrix, a leading provider of customer-premise and hosted cloud services for HPC and Big Data workloads delivered on the industry’s leading Dynamic Density Control (DDC) enclosure platform, today announced they received the ‘Most Innovative Solution’ Award at Leverage Big Data + Enterprise HPC 2017 Summit, hosted last week in Ponte Vedra Beach, Florida by Datanami, EnterpriseTech, HPCwire and nGage Events. The award, which is voted on by the attendees of the summit, recognizes the most innovative solution presented during the course of the three-day event. The attendees included top enterprise players from nearly every vertical, alongside the nation’s leading technology and service providers.

“We were honored by the recognition, not only because it was our first time presenting at this event, but because of the caliber and scale of the audience,” said Chris Orlando, Co-Founder of ScaleMatrix. “These are the top global organizations in their respective fields, who are writing the playbooks that the rest of the IT world is emulating. To have the advancements we are making in data center cost, efficiency, and density improvement recognized by this group is quite humbling.”

Billed as a Big Data and High Performance Computing (HPC) summit, the Leverage Big Data + Enterprise HPC 2017 event focused on exploring some of the most transformative areas of the IT industry today. While the list of attendees is not made public, global leaders in social media, genealogy, content, finance, credit, security, and life sciences were in attendance. Central themes included big data, deep learning, artificial intelligence, IoT, virtual reality, and the changes they are having on how we plan for, support, and deliver services in this new HPC and big data driven world.

At the conclusion of the summit, attendees were asked to provide feedback on more than 30 different technology presentations held during the 3-day summit. While solutions spanned storage, compute, software, silicon, and data center offerings, ScaleMatrix’ Dynamic Density Control platform, which provides one of the most efficient and highest density ways to house IT infrastructure in a data center, was chosen by the attendees as the “Most Innovative Solution” at the summit.

“While the Big Data and HPC industries are advancing at a breakneck pace, the data centers that support them are starting to feel the strain from these higher density, power hungry deployments,” said Tom Tabor, CEO of Tabor Communications, which co-produces the summit through its technology publications along with partner nGage Events. “This year’s award acknowledges the innovations ScaleMatrix has made in these areas, which captured the attention of our summit attendees. We were very pleased to have ScaleMatrix as part of the technology showcase.”

About ScaleMatrix
ScaleMatrix provides both customer-premise and hosted private cloud services for HPC and Big Data workloads delivered on the industry’s leading Dynamic Density Control (DDC) enclosure platform. Whether you’re tackling Deep Learning, IoT, or high-end Analytics, ScaleMatrix offers high-density Cloud, IaaS, and Colocation options within their U.S. data centers. In addition, ScaleMatrix deploys modular Dynamic Density Control™ enclosures that support HPC and Big Data applications on customer premises, modernizing and future-proofing an entire enterprise data center. These high-performance enclosures offer near-silent operation, extreme energy efficiency, and support any rack-mountable hardware configuration at any density. Simplify deployments and protect your investments with ScaleMatrix. For more information call (888) 349-9994 or visit www.scalematrix.com.

Source: ScaleMatrix


Quantum Computing Startup Raises $64 Million in Funding

Tue, 03/28/2017 - 21:07

BERKELEY, Calif., March 28, 2017 — Rigetti Computing, a leading quantum computing startup, announced it has raised $64 million in Series A and B funding.

The Series A round of $24 million was led by Andreessen Horowitz. Vijay Pande, general partner at Andreessen Horowitz, has been appointed to Rigetti’s Board of Directors, joining Rigetti CEO Chad Rigetti and angel investor Charlie Songhurst.

The Series B round of $40 million was led by Vy Capital, followed by Andreessen Horowitz.

Major investors in both rounds include Y Combinator’s Continuity Fund, Data Collective, FF Science, AME Cloud Ventures, Morado Ventures, and WTI. Institutional investors in Series A include Sutter Hill Ventures, Susa Ventures, Streamlined Ventures, Lux Capital, and Bloomberg Beta.

The latest round brings the total amount of venture funding raised by Rigetti to $69.2 million.

“Quantum computing will enable people to tackle a whole new set of problems that were previously unsolvable,” said Chad Rigetti, founder and chief executive officer of Rigetti Computing. “This is the next generation of advanced computing technology. The potential to make a positive impact on humanity is enormous.”

“We will use the funding to expand our business and engineering teams and continue to invest in infrastructure to manufacture and deploy our quantum integrated circuits,” Rigetti added.

Quantum computers store and process information using individual quantum bits, or qubits, enabling dramatically greater computational power and energy efficiency. Rigetti Computing has taken a highly interdisciplinary approach to developing the technology, with a team from diverse backgrounds in computer science, engineering, physics, and chemistry.

“Quantum computing has promised breakthroughs in computing for decades but has so far remained elusive,” said Vijay Pande, Andreessen Horowitz general partner and Rigetti Board member. “Rigetti has assembled an impressive team of scientists and engineers building the combination of hardware and software that has the potential to finally unlock quantum computing for computational chemistry, machine learning and much more.”

About Rigetti Computing

Rigetti Computing was founded by Chad Rigetti in 2013 and has offices in Fremont and Berkeley, Calif. The company is building a cloud quantum computing platform for artificial intelligence and computational chemistry. Rigetti recently opened up private beta testing of Forest, its API for quantum computing in the cloud. Forest emphasizes a quantum-classical hybrid computing model, integrating directly with existing cloud infrastructure and treating the quantum computer as an accelerator.

Source: Rigetti Computing


Bill Gropp – Pursuing the Next Big Thing at NCSA

Tue, 03/28/2017 - 17:20

About eight months ago Bill Gropp was elevated to acting director of the National Center for Supercomputing Applications (NCSA). He is, of course, already highly accomplished; the development (with colleagues) of the MPICH implementation of MPI is one example. He was NCSA’s chief scientist, a role he retains, when director Ed Seidel was tapped to serve as interim vice president for research for the University of Illinois System, and Gropp was appointed acting NCSA director.

Don’t be misled by the “acting” and “interim” qualifiers. They are accurate but hardly diminish the aspirations of the new leaders, jointly and independently. In getting ready for this interview with HPCwire, Gropp wrote, “Our goal for NCSA is nothing less than to lead the transformation of all areas of scholarship in making use of advanced computing and data.” That seems an ambitious but perhaps appropriate goal for the home of Blue Waters and XSEDE.

During our interview – his first major interview since taking the job – Gropp sketched out the new challenges and opportunities he is facing. While in Gropp-like fashion emphasizing the collaborative DNA running deep throughout NCSA, he also stepped out of his comfort zone when asked what he hopes his legacy will be – a little early for that question perhaps but his response is revealing:

William Gropp, NCSA

“If you look at us now we have three big projects. We’ve got Blue Waters. We have XSEDE. We hope to have the large synoptic survey telescope (LSST) data facility. These are really good models. They take a long time to develop and may require a fair amount of early investment. I would really like to lay the groundwork for our fourth big thing. That’s what I can contribute most to the institution.”

NCSA is a good place to think big. Blue Waters, of course, is the centerpiece of its computing infrastructure. Deployed in 2012, Blue Waters is a roughly 13-petaflops Cray XE6/XK7 hybrid machine supported by about 1.6 PB of system memory and 26 PB of usable storage with an aggregate bandwidth of 1.1 TB/s. No doubt attention is turning to what’s ahead. The scope of scientific computing and industry collaboration that goes on at NCSA in concert with the University of Illinois is big by any standard.

It’s also worth remembering that NCSA and Gropp are in the thick of U.S. policy development. The most recent report from the National Academies of Sciences, Engineering, and Medicine, “Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science in 2017-2020,” was co-chaired by Gropp and Robert Harrison of Stony Brook University; not surprisingly, it argues strongly for NSF to produce a clear long-term roadmap supporting advanced computing and was cited in testimony two weeks ago at Congressional hearings (National Science Foundation Part II: Future Opportunities and Challenges for Science).

In preparing for the interview, Gropp sent a brief summary of his thinking about the future, which said in part:

“This is a challenging time. Technically, computing is going through a major transition as the community copes with the end of Dennard (frequency) scaling and the consequences for hardware, software, and algorithms. The rapid expansion of data science has broadened the demand for computational resources while at the same time bringing about new and disruptive ways to provide those services, as well as increasing the demand for skilled workers in all areas of computing. Funding remains tight, with support for sustaining the human and physical infrastructure still depending mostly on ad hoc and irregular competitions for hardware; in addition, while everyone talks about the long-term value of science data, there is little appetite to pay for it.

“But these challenges produce opportunities to make critical impact and are exactly why this is also an exciting time to be in computing, and to be leading NCSA. Last August, NCSA Director Ed Seidel was asked by the president of the University of Illinois System to be interim vice president of research, and to guide that office as it refocuses on innovating for economic development by building on the many strengths of the University System. Ed and I have a similar vision for NCSA, and I was honored to step in as acting director to guide NCSA while Ed is helping the University system.”

HPCwire: Thanks for your time Bill. It’s hard to know where to start. NCSA has many ongoing projects – participation in the National Data Service Consortium, the Illinois Smart State Initiative, the Visualization Laboratory, and XSEDE all come to mind. Perhaps provide an example or two of NCSA projects to get a sense of the range of NCSA activities?

Gropp: It’s hard to pick just one, and somebody is going to be mad at me. Let me say a little bit about the industry program. One of the things that we’ve been doing there has been focusing more on how the industry program can build on our connections with campus to provide opportunities, for example, for our students to work with companies and companies to work with our students. That’s been very attractive to our partners. I was not at all surprised that the rarest commodity is talent, and we are at just a fabulous institution that has very strong, very entrepreneurial students, so that’s been a great connection.

One of the reasons for mentioning the industry program is that it was really through that view of connections that we became involved in the Smart State initiative. It was one of the things we discussed with the state, including the opportunity for students to be involved in projects, which in some cases could have significant impact improving the life of the state. We are really in the initial phases. It is going to be fun to see how it develops and what works and what doesn’t. It was exciting to see the kinds of opportunities the state was interested in pursuing and their flexibility about working not just with NCSA but also with students through programs like this that mirror what we did with industry. (Interview by Illinois Public Media with Gropp on the Smart State effort)

HPCwire: There must be a list of projects?

Gropp: Of course there is, but it’s not quite ready to release to the public. We’re working on developing that. Another interesting thing is that the state is quite understanding of the fact that, in many cases, the sorts of projects they are looking at are quite ambitious, and so the work is being structured as a number of reasonable steps rather than some five-year proposal to solve all the state’s problems. It’s being structured in much more reasonable pieces, where we can perform the pieces, see where we get, and figure out what the next steps are.

HPCwire: Given the attention being paid to the rise of data-driven science can you talk a little about the National Data Service consortium (NDS)? I believe NCSA is a steering committee member. What is it exactly?

Gropp: Really it’s just what is says, a consortium trying to help us find commonality and common ground in providing data services. There have been seven semi-annual meetings and there’s the NDS lab which is an effort to provide software, frameworks may not be quite the right word, but to start looking at ways you can provide support for dealing with the five Vs of big data. We sort of know how to deal with velocity and volume, I am oversimplifying it but to some extent, that’s just money. Veracity is another tricky one including provenance and so forth. You can maybe slide reproducibility under that. We have work in that area, particularly with Victoria Stodden, who’s an NCSA affiliate and one of our quite brilliant faculty.

The really tricky one is Variety. There are so many different cases to deal with. We need frameworks to discuss that, and places to discuss how we deal with it as well as how we deal with making resources available over time. How do we ensure data doesn’t get lost? Having a consortium gives us a place to talk about these things, a place to start organizing and developing cooperative projects so we are working together instead of working separately; a thousand flowers blooming is good, but at some point you need to be able to pull this together. One of the things that makes the effort so powerful is our ability to federate different data sources and draw information out of collections.

My role as NCSA director has been more working with the NDS to ensure it has a long-term sustainable direction, because NDS will only be useful if it can help deliver these resources over the time we expect the data to be valuable. I think that’s one of the biggest challenges of big data compared to big computing. When we are doing big computing, you do your computation, you have your results, and you’re done, again oversimplifying. With data, you create the data and it retains value, even increasing in value, so managing it over long lifetimes is again going to be a challenge. It’s important to think of the national data service not as something that one institution is offering to the nation but as collaboration among some of the people who want to support data science in this country, getting together to solve these problems.

HPCwire: Sort of a big data related question, can you talk a little about the large synoptic survey telescope project NCSA is planning to support. Its expected output is staggering – 10 million alerts, 1000 pairs of exposures, 15 terabytes of data every night.

Gropp: That’s an important project in our future and was really begun under Dan Reed (former NCSA director). NSF wants those projects divided into a construction project and then an operations project; the latter has not yet been awarded, but that proposal will go in later this year. The operations project will do many things; it will operate the LSST facility itself but also the other facilities, including the archiving and processing centers. This is a significant big data activity that we fully expect to be involved in and, in fact, to be leading on the data resource side.

I don’t have the numbers in front of me, but there is a lot of data that comes out of the telescope, an 8-meter telescope. The data is filtered a little bit and sent by network from Chile to Illinois, where it gets processed and archived, and we have to be able to process it in real time. The real-time requirement, I think, is in seconds or minutes if not microseconds, but very quick processing of the data to discover and send out alerts on changes. It’s a very different kind of computing than the FLOPS-heavy HPC computing that we usually think about. That will most likely be one of the things that occupies our datacenter, the National Petascale Computing Facility (NPCF).

Currently under construction in Chile, the LSST is designed to conduct a ten-year survey of the dynamic universe. LSST can map the entire visible sky in just a few nights; each panoramic snapshot with the 3200-megapixel camera covers an area 40 times the size of the full moon. Images will be immediately analyzed to identify objects that have changed or moved: from exploding supernovae on the other side of the Universe to asteroids that might impact the Earth. In the ten-year survey lifetime, LSST will map tens of billions of stars and galaxies. With this map, scientists will explore the structure of the Milky Way, determine the properties of dark energy and dark matter, and make discoveries that we have not yet imagined. – LSST.org
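A quick back-of-envelope calculation, a sketch using only the 15 TB/night figure quoted above (365 observing nights per year is an upper bound, since weather and maintenance reduce it), shows why the archive side of the project is a major undertaking:

```python
# Rough raw-data total implied by the figures quoted above.
TB_PER_NIGHT = 15        # ~15 TB of images per observing night
NIGHTS_PER_YEAR = 365    # upper bound on observing nights
SURVEY_YEARS = 10        # planned survey duration

total_tb = TB_PER_NIGHT * NIGHTS_PER_YEAR * SURVEY_YEARS
print(total_tb)                # 54750 TB
print(round(total_tb / 1000))  # ~55 PB of raw images, before any
                               # processed data products are counted
```

On the order of tens of petabytes of raw imagery, which is why the archiving and real-time processing pipeline, not FLOPS, dominates the facility's design.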

HPCwire: Given the many resources NCSA operates, what’s required simply to keep all of your systems running? What is the scope of the systems supported, and what ongoing changes and maintenance activities does Blue Waters require?

Gropp: We were just talking about that this morning. At no time in our history have we been operating so many HPC scale systems. There’s not just Blue Waters. There are systems for the industry program and for some of the other research groups. There’s also a campus cluster, which is officially operated by a different organization but is actually operated by our staff. [It’s an] exciting time to be running all these systems. The big thing we are waiting for is the RFP for the next track one system, and we are still quite optimistic about that.

Some of the systems reach a point of retirement and replacement so we have gone through that cycle a number of times. There was one system that we eventually broke apart and recycled parts out to various faculty members. There are things like that always going on.

For systems like Blue Waters, we have a maintenance agreement with Cray, which has actually been quite reliable. Keeping things up to date is always an issue; for example our security systems are state of the art. There’s a lot of work along those lines, which of course I can’t describe in detail. The biggest challenge for us, a big challenge for all of us in the community, is the lack of predictable schedules from our sponsors for keeping these systems up to date. So we are still waiting for the RFP for the next track one system and that remains a real challenge. That’s why the academy report called on NSF to produce a roadmap because we have to do planning for that.

National Petascale Computing Facility

We also have a lot of stuff that is going into the building (National Petascale Computing Facility) and we have a committee right now that is engaged and thinking about do we have sufficient room in the building, do we have enough power in the building, enough cooling, what do we do when we fill that building up? Those things are made much more difficult when there are so many uncertainties.

HPCwire: I probably should have started with this question. So what’s it like being director? What is the range of your responsibilities and what’s surprised you?

Gropp: I will say every day is different; that’s one of the things that is fun about the job. There are a lot of upper management sorts of things, so I spend time every day on budget and policy and personnel and implementing our strategic plan, but I also spend time interacting with more people on campus in a deeper way and also with some of our industry partners. Meeting new people from the state was really quite eye opening, both in terms of what the state is already doing but also in terms of what the opportunities are.

Last week I went to the state capital and gave some rare good news on the return on investment that they made in Blue Waters. The state provided funding for the datacenter. That was a new experience for me, going to a subcommittee hearing and being able to say that the investment you made in the University of Illinois and NCSA has paid off. Investing in science is a good thing to do and here are the numbers. It’s definitely an intense experience but I found it quite stimulating and different than what I have been doing.

On surprises, even though I have been here for quite a while (since 2007), I really didn't know the breadth and depth of all the things going on. Everyone has the tendency to see the stuff they are interested in, and now I am responsible for everything, so I have to be aware of everything. That was a very pleasant surprise. I wish I could say that dealing with our state budget situation was a surprise, but it wasn't; it's just been difficult. Really, I think just coming to understand how much is going on here is more than I expected. In terms of goals, I have some very tactical things I want to get done. These are sort of boring but important changes to policy to better support engagement with campus and make it easier for students to work with us, and you've already seen some of that in the directions we have gone with the industry program.

HPCwire: With your many director responsibilities are you still able to carry on research?

Gropp: I still have my students. I just met with one before our call. I have a number of grants. So I am still the lead PI on our Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC). That one I find really interesting because we are looking at different ways of developing software for large-scale applications rather than going to new programming models. We're trying to look at how to take the models we have, the existing code bases, and augment them with tools that help automate the tasks that skilled programmers find most challenging. I have another project where we have been looking at developing better algorithms, and a third looking at techniques for making use of multicore processors for these large sparse linear systems and the non-linear systems they represent. That's been fun.

Even before I took this [position], we were co-advising the students. That's another thing I find really enjoyable here at Illinois: the faculty collaborates frequently and we have lots of joint projects. It's fun for me, and I think it is good for the students because it gives them several different perspectives, and they don't run the risk of being quite so narrowly trained. One of the other things we have been doing, jumping back up to our discussion of technology, is bringing in new technology and experimenting with it, whether it's hardware or software, with faculty, staff and students, and we are continuing to do that. In my role as director I get more involved in making decisions about which directions we are going, which projects. We have one proposal that has been submitted that involves certain kinds of deep learning. That was fun because of the tremendous upwelling of interest from campus.

So I think there will be lots of new things to do. I think if I had not been in this job I would have heard about them and said, gee, that sounds interesting, I wish I had time for it. In this job I say, gee, that sounds great, it's my job to make it happen.

HPCwire: What I haven’t I asked that I should?

Gropp: I think the one question you didn't ask is "what keeps me up at night." I'm really very concerned about national support for research in general and high performance computing, or maybe I should say advanced computing writ broadly. We see this in the delay of the RFP. We see it in a fairly modest roadmap going forward from NSF. We see it in hesitancy by other agencies to commit to the resources that are needed. I think we have so much to offer to the scientific community and the nation [and] it has been frustrating that there's so little long-term consistent planning available. I know that we (NCSA) are not unique in this.

A trade we’ll accept is less money if you will give us consistency so we don’t have to figure out what we are getting every other year. If we had a longer term plan we’d be willing to accept a little less. So that’s the sort of thing. The uncertainty and the lack of recognition of the value of what we do at the scale that would benefit the country. That’s something that all of us spend time trying to change.

HPCwire: The new Trump administration and the economic environment generally doesn’t seem bullish on research spending, particularly basic research. How worried are you about support for advanced computing and science?

Gropp: I think we said many of these things in the Academies report (summary) and I still stand behind them. I think we have lots of opportunities, but I think other countries, and that's not just China, recognize the value of HPC; they recognize, in fact, the value in a diversity of technologies. I was pleased to hear, in the response to a question asked at the NSF hearing this week, the NSF quote our report, saying there is a need for one or more large tightly-coupled machines, and that they took that recommendation seriously.

It would be great if there were more than one NSF track one system. It would be great if there were more than a handful of track two systems. If you look at Japan, for example, they have nine large university advanced computing systems, not counting the Flagship 2020 system in their plans, and that, honestly, is more than we’ve got. So there is a concern we will not provide the level of support that will allow us to maintain broad leadership in the sciences. That’s been a continuous concern.

HPCwire: What’s your take on the race to Exascale? China has stirred up attention with its latest machine while the U.S. program has hit occasional speed bumps. Will we hit the 2022-2023 timeframe goal?

Gropp: Yes, I think the U.S. will get there in 2023. It will be a challenge, but the routes that we are going down will allow us to get there. These high-end machines aren't really general purpose, but they are general enough that there are a sufficient number of science problems they can solve. I think that will remain true. There will be some things we will be able to accomplish on an exascale peak machine; there will be a challenge for those problems that don't map onto the sorts of architecture directions we're being pushed in, in order to meet those targets. I think that's something we all have to bear in mind. Reaching exascale doesn't mean we can run all problems on one exascale peak system. It's really going to be, are there enough, and I believe there are. It's going to be a subset of problems that we can run, and that set will probably shrink as we move from the pre-exascale machines to the exascale machines.

The post Bill Gropp – Pursuing the Next Big Thing at NCSA appeared first on HPCwire.

European HPC Summit Week 2017 in Barcelona to Gather HPC Stakeholders

Tue, 03/28/2017 - 14:36
BARCELONA, Spain, March 28, 2017 — The European HPC Summit Week 2017 edition (EHPCSW17) will include even more HPC workshops than the 2016 edition, covering a range of application areas including renewable energies, oil & gas, biomedicine, big data, mathematics, climate modelling, computing applications, as well as HPC future technologies. This year’s edition will take place 15-19 May in Barcelona, Spain, and will be hosted by Barcelona Supercomputing Center. It will be a great opportunity to network with all relevant European HPC stakeholders, from technology suppliers and HPC infrastructures to HPC scientific and industrial users in Europe.

“For EHPCSW17 we took on the important challenge of accommodating numerous HPC-related workshops. Our aim is to make the European HPC Summit Week a reference in the HPC ecosystem, and to create synergies between stakeholders,” says Sergi Girona, EXDCI project coordinator.

The programme starts on Monday with the EXDCI workshop, which will give an overview of EXDCI recent activities including the HPC vision and recommendations to improve the overall HPC ecosystem. PRACEdays17, the fourth edition of the PRACE Scientific and Industrial Conference, will take place from Tuesday morning until midday Thursday, and, under the motto HPC for Innovation: when Science meets Industry, will have several high-level international keynote talks, parallel sessions and a panel discussion.

On Tuesday afternoon, and in parallel to PRACEdays17, three additional workshops organised by HPC Centers of Excellence will take place:

In the late afternoon, a poster session, with a welcome reception sponsored by PRACE, will close the events of the day. On Wednesday afternoon, EuroLab4HPC will organise its workshop The Future of High-Performance Computing in parallel to the workshop Mathematics for Exascale and Digital Science.

On Thursday afternoon, the European Technology Platform for HPC (ETP4HPC) will organise a round-table entitled Exploiting the Potential of European HPC Stakeholders in Extreme-Scale Demonstrators in parallel to the EUDAT workshop: Coupling HPC and Data Resources and services together. The week will finish with scientific workshops organised by FETHPC projects and Centers of Excellence:

The full programme is available online at https://exdci.eu/events/european-hpc-summit-week-2017. The registration fee is €60 and those interested in attending the full week (or part of it) should fill out the centralised registration form by 5 May: https://exdci.eu/events/european-hpc-summit-week-2017

About the EHPCSW conference series

EXDCI coordinates the conference series “European HPC Summit Week”. Its aim is to gather all related European HPC stakeholders (institutions, service providers, users, communities, vendors and consultants) in a single week to foster synergies. Each year, EXDCI opens a call for contributions to all HPC-related actors who would like to participate in the week through a workshop.

The first edition took place in 2016 (EHPCSW16) in Prague, Czech Republic. EHPCSW16 gathered a total of 238 attendees, with nearly all European nationalities represented. The four-day summit comprised a number of HPC events running concurrently: an EXDCI Workshop, PRACEdays16, the “EuroLab4HPC: Why is European HPC running on US hardware?” workshop and the ETP4HPC Extreme-Scale Demonstrators Workshop, as well as a number of private collaborative meetings.

Source: EXDCI


NICE Announces General Availability of EnginFrame 2017

Tue, 03/28/2017 - 14:31

ASTI, Italy, March 28, 2017 — NICE is pleased to announce the general availability of EnginFrame 2017, our powerful and easy to use web front-end for accessing Technical and Scientific Applications on-premises and in the cloud.

Since the NICE acquisition by Amazon Web Services (AWS), many customers asked us how to make the HPC experience in the Cloud as simple as the one they have on premises, while still leveraging the elasticity and flexibility that it provides. While we stay committed to delivering new and improved capabilities for on-premises deployments, like the new support for Citrix XenDesktop and the new HTML5 file transfer widgets, EnginFrame 2017 is our first step into making HPC easier to deploy and use in AWS, even without an in-depth knowledge of its APIs and rich service offering.

What’s new in EnginFrame 2017:

  • Easy procedure for deployment on AWS Cloud: you can create a fully functional HPC cluster with a simple Web interface, including:
    • Amazon Linux support
    • Virtual Private Cloud to host all components of an HPC system
    • Application Load Balancer for encrypted access to EnginFrame
    • Elastic File System for spoolers and applications
    • Directory Services for user authentication
    • CfnCluster integration for elastic HPC infrastructure deployment
    • Simpler EnginFrame license and process for evaluations on AWS
  • HTML5 file upload widget with support for server-side file caching, replacing the previous applet-based implementations
  • Service Editor capability to create new actions on files using services. Administrators can publish services associated with specific file patterns, which users can find in the context-sensitive menu in the spooler and file manager panels.
  • New Java Client API for managing interactive sessions: customers and partners can now implement interactive session management in their applications.
  • Citrix XenDesktop integration: support for graphical applications running on XenDesktop infrastructure.
  • Improved DCV and VNC session token management, with automatic token invalidation based on a time-to-live.
  • Many other fixes and enhancements.

The new features are immediately available for all the EnginFrame product lines:

  • EnginFrame Views: Manage interactive sessions, collaboration and VDI
  • EnginFrame HPC: In addition to the Views features, easily submit and monitor the execution of HPC applications and their data
  • EnginFrame Enterprise: Both EnginFrame Views and HPC can be upgraded to the Enterprise version, to support fault-tolerant and load-balanced deployments.

With immediate availability, all NICE customers with a valid support contract can download the new release, access the documentation and the support helpdesk.

Source: NICE


Stampede Supercomputer Helps Diagnose Depression

Tue, 03/28/2017 - 09:42
AUSTIN, Texas, March 28, 2017 — Depression affects more than 15 million American adults, or about 6.7 percent of the U.S. population, each year. It is the leading cause of disability for those between the ages of 15 and 44.

Is it possible to detect who might be vulnerable to the illness before its onset using brain imaging?

David Schnyer, a cognitive neuroscientist and professor of psychology at The University of Texas at Austin, believes it may be. But identifying its tell-tale signs is no simple matter. He is using the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to train a machine learning algorithm that can identify commonalities among hundreds of patients using Magnetic Resonance Imaging (MRI) brain scans, genomics data and other relevant factors, to provide accurate predictions of risk for those with depression and anxiety.

Researchers have long studied mental disorders by examining the relationship between brain function and structure in neuroimaging data.

“One difficulty with that work is that it’s primarily descriptive. The brain networks may appear to differ between two groups, but it doesn’t tell us about what patterns actually predict which group you will fall into,” Schnyer says. “We’re looking for diagnostic measures that are predictive for outcomes like vulnerability to depression or dementia.”

In March 2017, Schnyer, working with Peter Clasen (University of Washington School of Medicine), Christopher Gonzalez (University of California, San Diego) and Christopher Beevers (UT Austin), published their analysis of a proof-of-concept study in Psychiatry Research: Neuroimaging that used a machine learning approach to classify individuals with major depressive disorder with roughly 75 percent accuracy.

Machine learning is a subfield of computer science that involves the construction of algorithms that can “learn” by building a model from sample data inputs, and then make independent predictions on new data.

The type of machine learning that Schnyer and his team tested is called Support Vector Machine Learning. The researchers provided a set of training examples, each marked as belonging to either healthy individuals or those who have been diagnosed with depression. Schnyer and his team labelled features in their data that were meaningful, and these examples were used to train the system. A computer then scanned the data, found subtle connections between disparate parts, and built a model that assigns new examples to one category or the other.
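For readers unfamiliar with the technique, the train-then-predict workflow described above can be sketched in a few lines with a standard library. This is a generic illustration only, not the study's actual pipeline: the participant counts, feature dimensions, and model parameters below are placeholders, and real inputs would be features extracted from MRI scans and genomic data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per participant, one column per extracted feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))    # e.g. 100 participants, 500 brain features
y = rng.integers(0, 2, size=100)   # 0 = healthy control, 1 = diagnosed with depression

# Standardize the features, then fit a linear support vector classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

# Cross-validation holds out part of the data in turn to estimate how well
# the learned model assigns *new* examples to one category or the other.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

On the random placeholder data above, accuracy hovers near chance; the point is the structure of the workflow, not the number.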

In the recent study, Schnyer analyzed brain data from 52 treatment-seeking participants with depression, and 45 healthy control participants. To compare the two, a subset of depressed participants was matched with healthy individuals based on age and gender, bringing the sample size to 50.

Read more at: https://www.tacc.utexas.edu/-/psychologists-enlist-machine-learning-to-help-diagnose-depression

Source: Aaron Dubrow, TACC


New HPC Installation to Deploy Asetek Liquid Cooling Solution

Tue, 03/28/2017 - 07:22

OSLO, Norway, Mar. 28, 2017 — Asetek today announced confirmation of an order from one of its existing OEM partners for its RackCDU D2C (Direct-to-Chip) liquid cooling solution. The order is part of a new installation for an undisclosed HPC (High Performance Computing) customer.

“I am very pleased with the progress we are making in our emerging data center business segment. This repeat order, from one of our OEM partners, to a new end customer confirms the trust in our unique liquid cooling solutions and that adoption is growing,” said André Sloth Eriksen, CEO and founder of Asetek.

The order will result in revenue to Asetek in the range of USD 300,000 for approximately 15 racks with delivery in Q2 2017. The OEM partner as well as the installation site will be announced at a later date.

About Asetek
Asetek (ASETEK.OL) is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange. For more information, visit www.asetek.com

Source: Asetek


Cycle Computing CEO to Address National Meeting of the ACS

Tue, 03/28/2017 - 07:17

NEW YORK, N.Y., March 28, 2017 — Cycle Computing today announced that its CEO, Jason Stowe, will address attendees at the 253rd National Meeting and Exposition of the American Chemical Society, being held April 2-6 in San Francisco, CA.

Jason is scheduled to speak on Sunday, April 2 starting at 9:00 am local time. His session, titled “Lessons learned in using cloud Big Compute to transform computational chemistry research,” is one part of an overall session endeavoring to answer the question: Should I move my computational chemistry or informatics tools to the Cloud? Jason's discussion will focus on how scientists, engineers, and researchers are leveraging CycleCloud software to unlock the Big Compute capabilities of the public cloud, performing larger, more accurate, and more complete workloads than ever before. Real-world use cases will include big pharma, materials science, and manufacturing researchers accelerating science using 160 to 160,000 cores on the cloud. Attendees of Jason's discussion will gain a clear understanding of where cloud resources can, or cannot, help their work.

The American Chemical Society (ACS) mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and its people. As the largest scientific society in the world, the ACS is a leading and authoritative source of scientific information. The ACS supports and promotes the safe, ethical, responsible, and sustainable practice of chemistry coupled with professional behavior and technical competence. The society recognizes its responsibility to safeguard the health of the planet through chemical stewardship and serves more than 157,000 members globally, providing educational and career development programs, products, and services.

Cycle Computing’s CycleCloud software orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. CycleCloud takes the delays, configuration, administration, and sunk hardware costs out of HPC clusters. CycleCloud easily leverages multi-cloud environments, moving seamlessly between internal clusters, Amazon Web Services, Google Cloud Platform, Microsoft Azure and other cloud environments. Researchers and scientists can use CycleCloud to size the infrastructure to the technical question or computation at hand.

More information about the CycleCloud cloud management software suite can be found at: www.cyclecomputing.com

About Cycle Computing

Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users, running multiple applications, across multiple clouds, enabling users to never wait for compute and solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in many Global 2000 manufacturing, Big 10 Life Insurance, Big 10 Pharma, Big 10 Hedge Funds, startups, and government agencies, to leverage hundreds of millions of hours of cloud based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com

Source: Cycle Computing


Berkeley Lab Researchers Target Chem Code for Knights Landing

Mon, 03/27/2017 - 10:56
The OpenMP-optimized NWChem AIMD planewave code is demonstrated to run faster on a single Intel Knights Landing node than on a conventional Intel Haswell node. The simulation consists of 64 water molecules.

BERKELEY, Calif., March 27, 2017 — A team of researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab), Pacific Northwest National Laboratory (PNNL) and Intel are working hard to make sure that computational chemists are prepared to compute efficiently on next-generation exascale machines. Recently, they achieved a milestone, successfully adding thread-level parallelism on top of MPI-level parallelism in the planewave density functional theory method within the popular software suite NWChem.

“Planewave codes are useful for solution chemistry and materials science; they allow us to look at the structure, coordination, reactions and thermodynamics of complex dynamical chemical processes in solutions and on surfaces,” says Bert de Jong, a computational chemist in the Computational Research Division (CRD) at Berkeley Lab.

Developed approximately 20 years ago, the open-source NWChem software was designed to solve challenging chemical and biological problems using large-scale parallel ab initio, or first-principles, calculations. De Jong and his colleagues will present a paper on this latest parallelization work at the May 29-June 2 IEEE International Parallel and Distributed Processing Symposium in Orlando, Florida.

Multicore vs. “Manycore”: Preparing Science for Next-Generation HPC

Since the 1960s, the semiconductor industry has looked to Moore’s Law—the observation that the number of transistors on a microprocessor chip doubles about every two years—to set targets for their research and development. As a result, chip performance sped up considerably, eventually giving rise to laptop computers, smartphones and the Internet. But like all good things, this couldn’t last.

As more and more silicon circuits are packed into the same small area, an increasingly unwieldy amount of heat is generated.  So about a decade ago, microprocessor designers latched onto the idea of multicore architectures—putting multiple processors called “cores” on a chip—similar to getting five people to carry your five bags of groceries home, rather than trying to get one stronger person to go five times faster and making separate trips for each bag.

Supercomputing took advantage of these multicore designs, but today they are still proving too power-hungry, and instead designers are using a larger number of smaller, simpler processor cores in the newest supercomputers. This “manycore” approach—akin to a small platoon of walkers rather than a few runners—will be taken to an extreme in future exaflop supercomputers.  But achieving a high level of performance on these manycore architectures requires rewriting software, incorporating intensive thread and data-level parallelism and careful orchestration of data movement. In the grocery analogy, this addresses who will carry each item, can the heavier ones be divided into smaller parts, and should items be handed around mid-way to avoid overtiring anyone—more like a squad of cool, slow-walking, collaborative jugglers.

Getting Up to Speed on Manycore

The first step to ensuring that their codes will perform efficiently on future exascale supercomputers is to make sure that they are taking full advantage of the manycore architectures being deployed now. De Jong and his colleagues have been working for over a year to get the NWChem planewave code optimized and ready for science, just in time for the arrival of NERSC's latest supercomputer, Cori.

The recently installed Cori system at the Department of Energy’s (DOE’s) National Energy Research Scientific Computing Center (NERSC) reflects one of these manycore designs. It contains about 9,300 Intel Xeon Phi (Knights Landing) processors and, according to the November 2016 Top500 list, is the largest system of its kind, also representing NERSC's move towards exascale. De Jong and his colleagues were able to gain early access to Cori through the NERSC Exascale Science Applications Program, and the new NWChem code has been shown to perform well on the new machine.

According to de Jong, the NWChem planewave methods primarily comprise fast Fourier transform (FFT) algorithms and matrix multiplications of tall-skinny matrix products. Because current Intel math libraries don’t efficiently solve the tall-skinny matrix products in parallel, Mathias Jacquelin, a scientist in CRD’s Scalable Solvers Group, developed a parallel algorithm and optimized manycore implementation for calculating these matrices and then integrated that into the existing planewave codes.
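To make the shape of the problem concrete, here is a minimal NumPy sketch of a tall-skinny matrix product and one way it can be split into independent partial products for parallel execution. The dimensions are illustrative, and this is only a sketch of the idea, not the team's optimized threaded implementation.

```python
import numpy as np

# A tall-skinny matrix: many plane-wave coefficients (rows), few orbitals (cols).
n_pw, n_orb = 100_000, 64
C = np.random.default_rng(1).standard_normal((n_pw, n_orb))

# The expensive kernel: two huge operands producing a small n_orb x n_orb result.
S = C.T @ C                        # overlap-style matrix, shape (64, 64)

# One way to parallelize: split the tall dimension into blocks, compute the
# partial products independently (e.g. one per thread or rank), then reduce.
blocks = np.array_split(C, 8, axis=0)
S_blocked = sum(b.T @ b for b in blocks)

print(S.shape, np.allclose(S, S_blocked))
```

Because the output is tiny relative to the inputs, the blocked partial products can be summed cheaply at the end, which is what makes this pattern attractive for thread-level parallelism.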

When trying to squeeze the most performance from new architectures, it is helpful to understand how much headroom is left—how close are you to computing or data movement limits of the hardware, and when will you reach the point of diminishing returns in tuning an application’s performance. For this, Jacquelin turned to a tool known as a Roofline Model, developed several years ago by CRD computer scientist Sam Williams.
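The core of the roofline model can be stated in one line: attainable performance is the lesser of the machine's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity (flops performed per byte moved). A toy calculation follows; the machine numbers are assumed for illustration, not taken from any measured Knights Landing configuration.

```python
def roofline(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Illustrative machine parameters (assumed, not measured):
PEAK = 3000.0   # GFLOP/s double-precision peak
BW = 400.0      # GB/s of high-bandwidth memory

# A low-intensity kernel is limited by bandwidth; a high-intensity one by peak.
print(roofline(PEAK, BW, 0.25))   # 100.0  -> memory-bound
print(roofline(PEAK, BW, 16.0))   # 3000.0 -> compute-bound

# The "ridge point" where the two limits meet tells you how much headroom
# is left: kernels below it gain from reducing data movement, not more flops.
print(PEAK / BW)                  # 7.5 flops per byte
```

Plotting attainable GFLOP/s against intensity on log-log axes gives the characteristic "roofline" shape the model is named for.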

Jacquelin developed an analysis of matrix factorization routine within a roofline model for the Knights Landing nodes. In a test case that simulated a solution with 64 water molecules, the team found that their code easily scaled up to all 68 cores available in a single massively parallel Intel Xeon Phi Knights Landing node. They also found that the new, completely threaded version of the planewave code performed three times faster on this manycore architecture than on current generations of the Intel Xeon cores, which will allow computational chemists to model larger, more complex chemical systems in less time.

“Our achievement is especially good news for researchers who use NWChem because it means that they can exploit multicore architectures of current and future supercomputers in an efficient way,” says Jacquelin. “Because there are other areas of chemistry that also rely on tall-skinny matrices, I believe that our work could potentially be applied to those problems as well.”

“Getting this level of performance on the Knights Landing architecture is a real accomplishment and it took a team effort to get there,” says de Jong. “Next, we will be focusing on running some large scale simulations with these codes.”

This work was done with support from DOE’s Office of Science and Intel’s Parallel Computing Center at Berkeley Lab. NERSC is a DOE Office of Science User Facility. In addition to de Jong and Jacquelin, Eric Bylaska of PNNL was also a co-author on the paper.

Source: Linda Vu, Lawrence Berkeley National Laboratory
