HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

ACM Expresses Concern About New Executive Order Suspending Visas

Mon, 01/30/2017 - 09:45

Jan. 30 — The Association for Computing Machinery, a global scientific and educational organization representing the computing community, expresses concern over US President Donald J. Trump’s Executive Order imposing a 90-day suspension of visas to nationals of seven countries.

The open exchange of ideas and the freedom of thought and expression are central to the aims and goals of ACM. ACM supports the statute of the International Council for Science, which holds that the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being. Such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists. All individuals are entitled to participate in any ACM activity.

ACM urges the lifting of the visa suspension at or before the 90-day deadline so as not to curtail the studies or contributions of scientists and researchers.

Source: ACM


People to Watch 2017

Fri, 01/27/2017 - 11:05

With 2017 underway, we’re looking to the future of high performance computing and the milestones that are growing ever closer.


Bright Computing Teams Up With SGI to Co-Sponsor UK HPC & Big Data Event

Fri, 01/27/2017 - 07:28

Jan. 27 — Bright Computing, a global leader in cluster and cloud infrastructure automation software, today announced that it has teamed up with SGI to co-sponsor a High Performance Computing & Big Data event in London, on February 1st, 2017.

Bright Computing formed a partnership with SGI in September 2016. The following month, the two companies announced they had been selected by the UK Met Office to provide a new HPC system for weather and climate data analysis.

Bright and SGI will co-sponsor High Performance Computing & Big Data 2017, taking place at the Victoria Park Plaza hotel on Wednesday, February 1st. The event promises to showcase the latest advances in the pioneering technologies and practices that are revolutionising compute- and data-intensive research across the public and private sectors. Keynote speakers include Daniel Zeichner, MP and Chair of the All-Party Parliamentary Group on Data Analytics; Professor Anthony Lee, Strategic Programme Director for the Turing-Intel Programme; and Dave Underwood, Deputy Director of HPC at the Met Office.

At the event, Bright Computing and SGI will share a one-hour seminar on Maximising Your Investment in High Performance Computing. The session will be based on the principle that HPC infrastructure represents a significant capital expenditure and depreciates over time; for example, a £3 million investment in HPC will typically depreciate at a rate of £1 million per year, or about £2,740 per day. Dr Ben Bennett, Head of W/W HPC Marketing at SGI, and Lee Carter, VP W/W Alliances at Bright Computing, will explain how to get maximum value out of HPC hardware and improve your ROI using SGI and Bright technologies. The presentation will include a case study on how the Met Office chose a Bright/SGI solution to launch their new HPC system and significantly improve productivity of weather and climate data analysis.
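
For readers who want to check the arithmetic behind those figures, here is a minimal sketch; the three-year write-down period is an assumption implied by the quoted numbers, not something stated in the release.

```python
# Back-of-the-envelope depreciation maths from the session description.
# Assumption (not stated in the release): straight-line depreciation over 3 years.
capex_gbp = 3_000_000                    # initial HPC investment
lifetime_years = 3                       # assumed write-down period
per_year = capex_gbp / lifetime_years    # 1,000,000 GBP per year
per_day = per_year / 365                 # ~2,740 GBP per day
print(f"Per year: £{per_year:,.0f}; per day: £{per_day:,.0f}")
```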

Source: Bright Computing


Supermicro Reports Second Quarter 2017 Financial Results

Fri, 01/27/2017 - 07:19

SAN JOSE, Calif., Jan. 27 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, has announced second quarter fiscal 2017 financial results for the quarter ended December 31, 2016.

Fiscal 2nd Quarter Highlights

  • Quarterly net sales of $652.0 million, up 23.3% from the first quarter of fiscal year 2017 and up 2.0% from the same quarter of last year.
  • GAAP net income of $22.0 million, up 62.5% from the first quarter of fiscal year 2017 and down 36.6% from the same quarter of last year.
  • GAAP gross margin was 14.3%, down from 15.1% in the first quarter of fiscal year 2017 and down from 16.6% in the same quarter of last year.
  • Server solutions accounted for 68.1% of net sales compared with 67.6% in the first quarter of fiscal year 2017 and 71.0% in the same quarter of last year.

Net sales for the second quarter ended December 31, 2016 totaled $652.0 million, up 23.3% from $529.0 million in the first quarter of fiscal year 2017. No customer accounted for more than 10% of net sales during the quarter ended December 31, 2016.

GAAP net income for the second quarter of fiscal year 2017 was $22.0 million or $0.43 per diluted share, a decrease of 36.6% from net income of $34.7 million, or $0.67 per diluted share in the same period a year ago. Included in net income for the quarter is $4.7 million of stock-based compensation expense (pre-tax). Excluding this item and the related tax effect, non-GAAP net income for the second quarter was $25.0 million, or $0.48 per diluted share, compared to non-GAAP net income of $38.0 million, or $0.73 per diluted share, in the same quarter of the prior year. On a sequential basis, non-GAAP net income increased from the first quarter of fiscal year 2017 by $8.4 million or $0.16 per diluted share.

GAAP gross margin for the second quarter of fiscal year 2017 was 14.3% compared to 16.6% in the same period a year ago. Non-GAAP gross margin for the second quarter was 14.4% compared to 16.7% in the same period a year ago. GAAP gross margin for the first quarter of fiscal year 2017 was 15.1% and non-GAAP gross margin for the first quarter of fiscal year 2017 was 15.2%.

The GAAP income tax provision for the second quarter of fiscal year 2017 was $9.3 million or 29.7% of income before tax provision compared to $14.1 million or 28.8% in the same period a year ago and $6.4 million or 32.0% in the first quarter of fiscal year 2017. The effective tax rate for the second quarter of fiscal year 2017 was lower than the first quarter primarily due to an increase in the domestic production activities deduction and U.S. federal research and development (“R&D”) tax credits.

The Company’s cash and cash equivalents and short and long term investments at December 31, 2016 were $131.5 million compared to $183.7 million at June 30, 2016. Free cash flow for the six months ended December 31, 2016 was $(72.2) million, primarily due to an increase in the Company’s cash used in operating activities and used in the development and construction of improvements on the Company’s properties.

Business Outlook & Management Commentary

The Company expects net sales of $570 million to $630 million for the third quarter of fiscal year 2017 ending March 31, 2017. The Company expects non-GAAP earnings per diluted share of approximately $0.34 to $0.42 for the third quarter.

“We are pleased to report record second quarter revenues of $652.0 million that exceeded our guidance and outpaced a strong compare with last year. Contributing to this strong growth was our Twin family product line including our FatTwin, Storage, HPC, MicroBlade, and strong growth from enterprise cloud and Asia Pacific, particularly China. Component shortages and pricing, product and geographic mix adversely impacted gross margins while improved leverage allowed us to deliver stronger operating margins from last quarter,” said Charles Liang, Chairman and Chief Executive Officer. “We expect to continue the growth of last quarter and be reflected in the year-over-year revenue growth in the March quarter based on an increasing number of sizable customer engagements demanding the performance and advantages of our leading product lines. In addition, we are well positioned to benefit from technology transitions in 2017 and have upgraded our product lines to optimize these new technologies.”

It is currently expected that the outlook will not be updated until the Company’s next quarterly earnings announcement, notwithstanding subsequent developments. However, the Company may update the outlook or any portion thereof at any time. Such updates will take place only by way of a news release or other broadly disseminated disclosure available to all interested parties in accordance with Regulation FD.

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Simply Double, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO. Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack cabinets/accessories delivering unrivaled performance and value.

Source: Supermicro


Bridges Dedication and Launch Being Live Streamed Jan. 27

Thu, 01/26/2017 - 22:08

Jan. 26 — The Pittsburgh Supercomputing Center (PSC) will officially launch its latest supercomputer, Bridges, on Friday, January 27, 2017.

Funded by a $17-million grant from the National Science Foundation, Bridges offers new computational capabilities to researchers working in diverse, data-intensive fields, such as genomics, the social sciences and humanities. Bridges represents a new way of doing business in high performance computing. Researchers can adapt its flexible architecture to their specific needs, in effect creating a “custom supercomputer.”

Bridges has already seen its first few months of use by the national scientific community. In that short time, users have reported progress in fields such as genomics, public health, chemistry, machine learning and more.

PSC will be live streaming the Bridges Dedication and Launch starting at 10:30am Eastern time. Join us here: https://www.youtube.com/watch?v=7VHobnW3R70

Agenda

Welcome:

  • Nick Nystrom, Senior Director of Research, PSC

Speakers:

  • Farnam Jahanian, Provost and Chief Academic Officer, Carnegie Mellon University
    Introduced by Michael Levine, Scientific Director, PSC
  • N. John Cooper, Dean, Kenneth P. Dietrich School of Arts and Sciences, University of Pittsburgh
    Introduced by Ralph Roskies, Scientific Director, PSC
  • Irene Qualters, Director of the Office of Advanced Cyberinfrastructure, National Science Foundation
  • Erin Molchany, Director, Governor’s Southwest Regional Office
  • Rich Fitzgerald, Allegheny County Executive
  • William Peduto, Mayor, City of Pittsburgh
  • Bill Mannel, Vice President & General Manager of High-Performance Computing and Big Data, Hewlett Packard Enterprise
  • Chris Allison, HPC Specialist, Intel Corporation

Closing Remarks:

  • Nick Nystrom

Editor’s note: More information about the recently completed Bridges Phase 2 upgrade here: https://www.hpcwire.com/2017/01/12/nsf-approves-bridges-upgrade/

Source: PSC


Technical Computing Hub UberCloud Receives Funding from Earlybird

Thu, 01/26/2017 - 16:41

LOS ALTOS, Calif., and ISTANBUL, Turkey, Jan. 26 — Today, UberCloud, the Silicon Valley based hub in the cloud for engineers and scientists to discover, try, and buy computing on demand, announces the closing of its $1.7 million pre-A round led by Earlybird Venture Capital. Roland Manger, co-founder and partner of Earlybird, joins the UberCloud Board of Directors.

UberCloud is the online Community, Marketplace, and Software Container Factory where engineers, scientists, and their service providers discover, try, and buy ubiquitous high-performance computing power and Software-as-a-Service from cloud resource providers and application software vendors around the world. Engineers and scientists can explore, discuss, and use computing power on demand to solve critical design and development problems. Unique UberCloud software container technology (based on Docker) simplifies software packageability and portability, enables ease of access and instant use of engineering SaaS solutions, and maintains scalability across multiple compute nodes.

“UberCloud has created an entire cloud computing ecosystem for complex technical simulations, leveraging cloud infrastructure providers, developing and utilizing middleware container technology, and bringing on board established and proven application software providers, all for the benefit of a growing community of engineers and scientists that need to solve critical technical problems on demand,” said Roland Manger, co-founder and Partner at Earlybird. “While technical computing has been slow to adopt the benefits of the Cloud, we are convinced that UberCloud can be a catalyst for change.” Roland combines a long-standing investment track record with entrepreneurial and operational experience in leading roles at early stage companies, and has been involved in several successful startups, most recently at Hazelcast, UiPath, and Peak Games.

“We here at Plug & Play are super excited for UberCloud, and their extraordinary growth in the field of high performance cloud computing alongside other big players in the market,” said Alireza Masrour, managing partner at Plug & Play Ventures and Founding Partner of the Plug & Play Startup Camp. “We’re looking forward to being a part of the UberCloud team in the future!”

“UberCloud specializes in running state of the art simulation applications like ANSYS and SIMULIA on the nearly infinite compute capacity offered by leading Cloud computing providers like Microsoft and HPE. Earlybird’s funding round has been specifically timed to fuel the company at a point of great potential,” explained Burak Yenier, co-founder and CEO of UberCloud. “Our products and powerful partnerships let us take advantage of a growing market opportunity and this funding round allows us to build our team to keep up.”

“We are excited to welcome Roland Manger on our Board of Directors,” added Wolfgang Gentzsch, co-founder and president of UberCloud. “Earlybird’s funding, together with continuous support from our strong partners Hewlett Packard Enterprise, Intel, and Microsoft Azure, will enable us to serve the multi-billion-dollar technical computing market in front of us and take UberCloud to the next level of fully automated container technology.”

UberCloud software container technology removes most of the challenges and roadblocks in engineering and scientific cloud computing, providing a seamless extension to the engineers’ in-house technical applications. These proven vendors’ all-in-one cloud solutions are presented on the UberCloud online marketplace and available to engineers and scientists on Cloud platforms around the world via one mouse click, simply through the customer’s desktop browser.

About Earlybird

Earlybird is a venture capital firm focused on European technology companies with global ambitions. Founded in 1997, Earlybird invests in all growth and development phases of a company, offering its portfolio companies not only financial resources, but also strategic and entrepreneurial support, including access to an international network and capital markets.

About UberCloud

UberCloud is the online Community, Marketplace, and Software Container Factory where engineers, scientists, and their service providers discover, try, and buy ubiquitous high-performance computing power and Software-as-a-Service from Cloud resource providers and application software vendors around the world. UberCloud’s unique high-performance software container technology simplifies software packageability and portability, enables ease of access and instant use of engineering SaaS solutions, and maintains scalability across multiple compute nodes. For further information: www.TheUberCloud.com and www.TheUberCloud.com/help/.

Source: UberCloud


PRACE Posts 2017 Best Practice Guide for KNL

Thu, 01/26/2017 - 16:31

If you are looking for guidance with programming and working with Intel’s Xeon Phi (Knights Landing) processors, a solid resource was posted today on the Partnership for Advanced Computing in Europe (PRACE) Research Infrastructure site: Best Practice Guide – Knights Landing, January 2017.

Given the rising availability of KNL-based systems from a growing number of vendors, the guide may prove a useful support tool. It’s fairly comprehensive and was prepared by Vali Codreanu (SURFsara), Jorge Rodríguez (Barcelona Supercomputing Center), and Ole Widar Saastad (editor, University of Oslo).

Here’s an excerpt from the intro:

“This best practice guide provides information about Intel’s MIC architecture and programming models for the Intel Xeon Phi co-processor in order to enable programmers to achieve good performance of their applications. The guide covers a wide range of topics from the description of the hardware of the Intel Xeon Phi co-processor through information about the basic programming models as well as information about porting programs up to tools and strategies how to analyze and improve the performance of applications.”

And the table of contents linking to their sections:

  1. Introduction
  2. System Architecture / Configuration
  3. Programming Environment / Basic Porting
  4. Benchmark Performance
  5. Application Performance
  6. Performance Analysis
  7. Tuning
  8. Debugging

As an example of the material, the following passage, which accompanies charts and captions in the guide, is a small snippet from the memory portion of the System Architecture/Configuration section.

As can be seen in Figure 5 of the guide, KNL memory can work in three different modes. These are determined by the BIOS at POST time and thus require a reboot to switch between them.

Switching between these modes requires a change in the BIOS settings and a subsequent reboot. The command to alter the BIOS setting from the command line is described in Section 2.3.6 of the guide.
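
The guide documents the BIOS procedure itself; as a quick, hedged illustration (not taken from the guide), a script along the following lines can infer the current mode on a running Linux system, because in flat or hybrid mode MCDRAM is exposed as one or more memory-only NUMA nodes, whereas in cache mode it is managed by the hardware and does not appear.

```python
import glob
import os

def memory_only_numa_nodes():
    """Return NUMA node IDs that have memory but no CPUs.

    On a KNL node in flat (or hybrid) memory mode, MCDRAM shows up as one or
    more such CPU-less nodes; in cache mode it is managed transparently by
    the hardware and does not appear, so the list comes back empty.
    """
    nodes = []
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node_dir, "cpulist")) as f:
            if f.read().strip() == "":          # no CPUs attached to this node
                nodes.append(int(node_dir.rsplit("node", 1)[1]))
    return nodes

if __name__ == "__main__":
    mcdram = memory_only_numa_nodes()
    if mcdram:
        print("CPU-less NUMA node(s) found:", mcdram, "-> likely flat/hybrid mode")
    else:
        print("No CPU-less NUMA nodes -> likely cache mode (or not a KNL system)")
```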

Here’s a link to the full guide: http://www.prace-ri.eu/best-practice-guide-knights-landing-january-2017/


Weekly Twitter Roundup (Jan. 26, 2017)

Thu, 01/26/2017 - 12:58

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

First racks arrived of Stampede 2! https://t.co/2vmsmF8jJU pic.twitter.com/4t6GPF0Jhv

— TACC (@TACC) January 20, 2017

Visiting colleagues at Argonne this week to talk about I/O. Maybe I'll get to see Mira in person this time around pic.twitter.com/27HYZvAq33

— Glenn K. Lockwood (@glennklockwood) January 23, 2017

Cool updates to the State of Ohio Computer Center! #supercomputer #HPC pic.twitter.com/LqvvZI8614

— OhioSupercomputerCtr (@osc) January 23, 2017

#BurstBuffer #tutorial from A to Z by @geomark for early #HPC users pic.twitter.com/hH7Kgf7uuX

— KAUST HPC (@KAUST_HPC) January 24, 2017

@thedeadline I guess Beowulf was a grass roots initiative that led to the democratisation of #HPC by demonstrating how easy it could be.

— Chris Samuel (@chris_bloke) January 24, 2017

Today @TACC inspected new racks for the Stampede 2 supercomputer! @Dell pic.twitter.com/IpP4koX2az

— TACC (@TACC) January 24, 2017

Hey everyone, go follow @SCInclusivity twitter. "Everyone is welcome :)" #HPC #HPCtogether #HPCmatters

— Fernanda Foertter (@hpcprogrammer) January 24, 2017

Our 3rd Accelerated Data And Computing (ADAC) workshop by ORNL/OLCF, ETH/CSCS, and Tokyo Tech GSIC, co-hosted by Univ. Tokyo. ITC, Jan 25-27 pic.twitter.com/SH4HnMASZM

— Satoshi Matsuoka (@ProfMatsuoka) January 25, 2017

That time you discover your #hpc facility achieves only "mesoscale" status… pic.twitter.com/j1xA8hV23G

— James Cuff (@jamesdotcuff) January 23, 2017

We had a blast tonight at @cray_inc – great ideas #datachallenge #hackathon @ametsoc #AMS2017 pic.twitter.com/6Wdnb5vELF

— Nazila Merati (@floraandflying) January 23, 2017

@awscloud for #hpc presentation @ASUResearch @asu @ASUEngineering @ASU_UTO pic.twitter.com/sb0ugOt5Ug

— Research Computing (@ASUHPC) January 24, 2017

Click here to view the top tweets from last week.


Stathis Papaefstathiou Takes the R&D Reins at Cray

Thu, 01/26/2017 - 11:14

Earlier this month, Cray announced that tech veteran Stathis Papaefstathiou had joined the ranks of the iconic supercomputing company. As senior vice president of R&D, Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray’s research and development projects. He replaces Peg Williams, who is retiring after more than a decade with Cray but will stay on for a transition period of a few months.

Papaefstathiou’s tenure in technical computing covers a 30-year span. Most recently, he was the SVP of engineering at Aerohive Networks, where he led product development for a portfolio that includes network hardware, embedded operating systems, cloud-enabled network management solutions, big data analytics, DevOps and mobile applications. Previously, he spent two years leading cloud development efforts at F5 Networks and more than six years at Microsoft, starting as a computer science researcher before being promoted to general manager in charge of robotics.

HPCwire spoke with Papaefstathiou to get a sense of how his enterprise and cloud background will be leveraged at Cray, as well as his larger vision and execution strategy.

HPCwire: Stathis, please introduce yourself and tell us about your background and how you came to this position.

Papaefstathiou: My background originally was in the HPC space. In the 90s I worked in a business unit as a postdoc and researcher in HPC. It was a very exciting time then in HPC because there were many different architectures and technologies. There was also a lot of optimism about the future, so people were trying to create single solutions that would solve all types of problems. I had the opportunity to work with the Cray Y-MP and [another Cray system]. My work primarily was to understand how to model the hardware architectures and describe applications in a way that the customers of the technology could best match their application with the appropriate hardware architecture.

As I mentioned, in the 90s there were a lot of different types of supercomputers, from the SIMD Connection Machine to massively parallel computers to shared memory computers and so on. So customers needed to understand, before they made a commitment to a certain model, that their application would run well. So the various agencies were funding research in order to build these kinds of predictive systems.

For me Cray is obviously an iconic company. It’s a great honor after working in the HPC community to have the opportunity to work for Cray. It’s a very interesting industry because you always have to fight with the trends of commoditization. You always have to be on the bleeding edge of building new technologies. This is something extremely exciting for an engineer; you don’t get the opportunity to always be working on the latest technology in many places.

Finally, for Cray, I believe that in the last few years the company has embarked on this journey of going beyond the traditional HPC market and expanding, and I think this is a very promising direction. At the same time it’s very exciting, because it’s an inflection point for the company and an opportunity to contribute there.

HPCwire: I understand you started out in HPC, but your most recent roles were very much in the enterprise datacenter/cloud realm as opposed to the traditional HPC space – and in the last couple years, Cray has really been promoting the convergence of supercomputing and big data.

Papaefstathiou: There is definitely convergence of technologies between enterprise cloud and HPC. I think one of the things that was sort of profound to me was that in my previous role I was the SVP of engineering for Aerohive Networks, and this is a company that is building hardware for the edge of the network, but one of the differentiators against the market is that it collects data from this networking infrastructure in order to create business intelligence analytics as to how the network is being used, but also how this data can contribute to the bottom line of the business. For example, if you are a retail company, you may want to know what is the traffic that you have in your different physical stores or where the customer is spending more of their time within your store. So this is the type of data analytics that Aerohive is working on.

So part of my role was to build the solution from the ground up – this big data analytics solution. Of course we were working in the public cloud like most companies start, and I realized a couple of things that were not obvious to me when we built the solution. The first one was that actually building the solution – this big data real-time solution with pretty substantial scale – it was hard to do, especially if you take into consideration some of the constraints or characteristics of the cloud architecture, things like you don’t have guarantees in latency, that you need to build a solution that has to be designed for fault tolerance from the ground up because you never know when you’re going to have a fault in the resources that you’re using in the cloud. So it was a very painful process of building the solution. The second thing that was sort of interesting is that at a certain scale of this solution, the cost benefit of using the public cloud changes. One of the things that I find very exciting about the work that Cray is doing in the analytics space is that there is a class of problem, in terms of scale and complexity, where Cray supercomputing might be a better solution than public cloud. So while at the same time we have the convergence of the technologies, we do have differentiation in the supercomputing space for the big data analytics and machine learning solutions.

HPCwire: What are the products/technologies your teams will be working on in 2017?

Papaefstathiou: The first thing is getting into the exascale phase. We are working toward the next generation of supercomputing. What’s interesting is that in addition to the performance aspect, which is very important here, we have gained in the last few years a lot of experience building solutions for a broad range of workloads, so already today we have our cluster line, an analytics line with Urika, and of course a supercomputing line. As we move forward, it’s about creating a Lego model where we can take and combine technologies to support different use cases at different scale, using the same stack of technology. We already started doing this in 2016; for example, the Urika GX comes with the Aries network, so we combine our supercomputing technology with our cluster technology and build a use case. So we have already started doing that, but now we’re thinking more and more about how to easily create this type of solution in a much more iterative and organized way.

I do believe that more and more of these supercomputing solutions will benefit smaller companies that are now doing analytics and machine learning, and they’re looking for the right type of computation platforms to solve these problems.

HPCwire: What is your interest in containers?

Papaefstathiou: Containers are a very useful tool for us. One of the things which is expensive in the supercomputing world is to update the system with a new software stack on top of the hardware. Containers provide us a way to easily make upgrades to the system in a very lightweight manner, without having to make any change in the operating system and without having to impact the other parts of the software stack. So if, for example, you want to change your analytics solution and upgrade to the latest version, it’s very easy to just update the container on the compute node instead of having to bring up nodes from the ground up and update the whole stack. So that’s one use of containers. Obviously, as we move forward, we can use containers for other types of use cases, for example multitenancy, which is a very good scenario because we are going to have multiple workloads running on the big systems, so being able to use containers as a mechanism to isolate compute nodes amongst the different workloads is an interesting application. And finally, containers can be used so you can build your application using our programming tools, package it in a container and send it to supercomputing nodes; it becomes a way to democratize the development of the code because you can develop and package it in a contained way and send it to the supercomputer to run.

HPCwire: Thoughts on burst buffers and what will see from Cray in that area?

Papaefstathiou: We continue to collaborate with NERSC on that, as well as containers. DataWarp is a very important technology for us and I think it’s going to be a great tool for us to get to exascale, because moving data in and out of the system, from the compute nodes to the storage, at exascale really becomes a major problem. Having DataWarp and the burst buffer architecture in between these two layers of the system will be a very critical advantage that we have at Cray to solve these workloads at scale.

HPCwire: What are your major impressions of the state of HPC today? Trends, inflection points, future directions?

Papaefstathiou: I think that deep learning is a use case that can benefit from the use of HPC technologies. The work we did with Microsoft a few months back with the cognitive libraries, porting them to Cray and being able to get a lot of benefit there, both in scale and time to execution, is an example of how supercomputing can be used there. Also the plethora of processor architectures available to our customers now, the GPUs, the manycore/multicore systems, Xeon Phi and the traditional Intel processors – these can be matched to specific workload requirements. I was telling you before about this Lego model where you can take different types of technologies and put them in the same system and effectively customize the system for your workload; I think we will see more and more of this happening.

I do believe that the availability of HPC technologies behind a cloud front end is also another exciting possibility, because effectively we will democratize the use of HPC technologies for a broader audience. Now there is a bar, of course a cost bar, for somebody to get into this space. With cloud providers hosting high-performance computers, that might be a way for the broader community to access this technology.

HPCwire: Interesting to hear you say that because earlier you mentioned how some of the people using cloud and cloud-like solutions could benefit from a more traditional product but the converse is also true.

Papaefstathiou: Absolutely, and I’ll give you an obvious example. One of the problems we will have in exascale is doing system management at huge scale: being able to collect data — monitoring data, performance data — from tens of thousands of nodes, being able to manage and analyze them, and create troubleshooting and optimization based on that — it’s a very hard problem. Already folks are doing this in the cloud community. Now there are some differences there, some adjustment has to take place, but this system management technology that is used in the cloud can also be applied, with some adjustments, to supercomputing.

HPCwire: Speaking of exascale, what is your vision for exascale at Cray and can you speak to how exascale benefits will accrue to commercial HPC users?

Papaefstathiou: Exascale is very interesting. Because of the way they have organized the program [the US Exascale Computing Project], exascale is not about writing a benchmark and getting exascale performance; it’s about getting applications to run with exascale performance. This means that the system, the application and the whole stack, has to be thought of very holistically and solve a lot of hard problems in order to get to this level. Things for example that in the past might not have been in the critical path of performance of applications or the system, now become critical. We’re going to have to address problems that we didn’t have to this extent in the past and I mentioned two of them. One is system management, which in the past was an interesting problem, but now being able to collect all this data, being able to push the OS image to so many nodes, being able to do this efficiently and being able to upgrade the system efficiently — that will become a critical path in creating exascale systems. We talked about Datawarp — thinking about how to bring in data in and out of the exascale system, these will be very hard problems that have to be solved in order to meet this goal.

One of the things we have started doing is working on applying the very high-end technology we are building for the big supercomputers to a broader market, and I gave the Urika GX example, where we took the Aries network that was designed for the supercomputer and put it into a much smaller form factor that can benefit a much broader community of enterprises, for example, that are doing analytics. I think there is going to be an opportunity for some of these technologies to go downstream toward this broader market as we move forward; we’re thinking about this, we already have products in the market, and we will continue doing this in the future.

HPCwire: Are you actively focusing on meeting the requirements for the big Aurora supercomputer right now — is that one of the main things on your list?

Papaefstathiou: Yes, this is one of the drivers for getting to the exascale goal, absolutely. We do this often. We have these projects that are sort of the pilots in order to solve some of these hard problems to get to this goal. We’re working very hard on Aurora at this moment.

HPCwire: What else can you tell me about your larger vision for this position and some of the greater company goals you’ll be working to achieve?

Papaefstathiou: Peg Williams is my predecessor and she did a fantastic job building a very high performance team here. One of the things I realized when I joined was that the baseline of the team is very high. We do have some new dynamics that are happening because we have a really broad product portfolio today. We support a lot of technologies. We have new products that we are introducing in the market, some of them beyond traditional HPC, for example our Urika analytics product line. Finally, we have this convergence of technologies. Some of the technologies that are used in the cloud or in enterprise now can be used in HPC. This means, from the team perspective, that while in the past we were working with a traditional HPC cadence in terms of execution, now we need to mimic, on some occasions, some of the dynamic nature of the cloud and enterprise side. This is reflected both in terms of engineering systems and engineering process. So we are also going to see convergence in terms of the engineering process and the organizational approach in order to capture this requirement.

The other area is that one thing that is not well known in the engineering community is how impactful Cray products really are in solving some of the hard problems of the world, in basic science, in different enterprises and so on. I think there is a great opportunity for us to create the messages for this community beyond traditional HPC through communication of our mission, through creating excitement around the technologies that we’re developing and creating momentum behind HPC in general and Cray. And for that purpose, we need to provide the right environment both for our employees and for the friends of the company, so there really is also an opportunity there for us to get outside of traditional HPC and approach the broader engineering community.


IBM Wants to be “Red Hat” of Deep Learning

Thu, 01/26/2017 - 10:33

IBM today announced the addition of the TensorFlow and Chainer deep learning frameworks to its PowerAI suite of deep learning tools, which already includes popular offerings such as Caffe, Theano, and Torch. It’s another step in IBM’s efforts to lay claim to leadership in the nascent deep learning market. Offering supported distributions of popular frameworks, said Sumit Gupta, IBM vice president, High Performance Computing and Analytics, is a natural next step in expanding and commercializing deep learning use.

“What we did with PowerAI is create a software distribution for deep learning and machine learning. The insight to do that came from the Linux world. Most enterprise clients don’t go to Linux.org to get their Linux, they go to Red Hat or SUSE,” said Gupta. “Today deep learning is completely an open source community with users going to TensorFlow.org or Caffe.org etc. to download software. But we have clients saying they would prefer to get a supported distribution. So we created PowerAI, a pre-curated, pre-bundled binary that has all the deep learning frameworks. The problems we’re solving are that downloading and installing these frameworks is hard. TensorFlow, for example, depends on 100 different packages.”

“I want to emphasize that in a sense we have become the Red Hat of deep learning. As Red Hat is to Linux, IBM PowerAI is to deep learning.”

TensorFlow, of course, was originally created by Google and then put into the open source community. “TensorFlow is quickly becoming a viable option for companies interested in deploying deep learning for tasks ranging from computer vision, to speech recognition, to text analytics,” said Rajat Monga, engineering leader for TensorFlow. “IBM’s enterprise offering of TensorFlow will help organizations deploy this framework — we’re glad to see this support.”

According to  the IBM release, “IBM Technology Support Services will build upon its hardware support services by investing and launching a new innovative enterprise software support offering for the PowerAI stack for a competitive advantage. Further, IBM Global Business Solutions established a deep learning design and development team as part of its Cognitive Business Solutions practice to help build solutions on the PowerAI platform, while making use of popular frameworks such as Tensorflow.”

“Every enterprise is looking at emerging artificial intelligence methods to take advantage of the data they now have access to,” said Ken King, IBM general manager for OpenPOWER. “Our PowerAI software offering curates packages and provides enterprise-level support for the major deep learning frameworks like TensorFlow, to enable enterprises to easily use these new AI methods to build new computer models for analyzing their data.”

Selling Power-based servers is also a big goal. In September, IBM introduced several new Power8-based machines, including the Minsky platform that has the Power8+ chip and NVLink for communication with NVIDIA’s P100 GPUs. PowerAI has been optimized for Minsky, and Gupta said TensorFlow, for example, “runs 30% faster on Minsky (Power System S822LC for HPC) compared to an x86 system with PCIe GPUs, so we have shown the value of NVLink between the CPU and GPU” (details of the compared systems appear below).

Gupta characterized TensorFlow as becoming one of the most popular DL frameworks in the U.S., while Chainer is the most popular in Japan. The PowerAI suite now includes Caffe, Chainer, TensorFlow, Theano, Torch, cuDNN, NVIDIA DIGITS, and several other machine and deep learning frameworks and libraries. The IBM PowerAI roadmap includes the addition of supported versions of the Microsoft Cognitive Toolkit (previously called CNTK) and Amazon’s MXNet, said Gupta.

The newest edition of the PowerAI software is available now for download, said Gupta. It will also be available on the cloud of HPC specialist Nimbix, which offers high-end Minsky machines with NVLink and P100 GPUs. Gupta said that at Nimbix, “a lot of customers right now are using the Minsky cloud that they put up a few months ago.”

Market traction for the Minsky platform, said Gupta, has been especially strong although he declined to offer numbers or identify major wins.

*Details behind the 30% advantage supplied by IBM:
Achieved 30% more images/sec on TensorFlow 0.12/Inception v3 training. Results are based on IBM internal measurements running TensorFlow 0.12 (model: Inception v3, dataset: ImageNet2012) training for 500 iterations. IBM system: Power System S822LC for HPC; 20 cores (2 x 10c chips), POWER8; 3.95GHz (peak); 512GB RAM; 4x NVIDIA Tesla P100 GPUs; Ubuntu 16.04; TensorFlow 0.12. Competitive system: E5-2640 v4; 20 cores (2 x 10c chips), Broadwell; 3.4GHz; 512GB RAM; 4x NVIDIA Tesla P100 (PCIe) GPUs; Ubuntu 16.04; TensorFlow 0.12.1. The comparison is against a system with four NVIDIA Tesla P100s attached through conventional PCIe running the Inception v3 model, a popular image recognition framework.

Related Links

IBM Launches PowerAI Suite Optimized for its Highest Performing Server

IBM Debuts Power8 Chip with NVLink and Three New Systems


Former SGI CEO Jorge Titinger Joins TransparentBusiness as Chief Strategy Officer

Thu, 01/26/2017 - 07:12

SAN FRANCISCO, Calif., Jan. 26 — TransparentBusiness is pleased to announce the appointment of Jorge Luis Titinger as its Chief Strategy Officer. Mr. Titinger is best known as the former CEO of SGI (Silicon Graphics International Corp.), a global leader in high-performance solutions for compute, data analytics, and data management, which was recently acquired by HPE (Hewlett Packard Enterprise).

“I’m pleased to join the company which has established itself as a leader in remote work process management and coordination,” said Jorge Titinger. “I believe TransparentBusiness can help accelerate the adoption of a distributed workforce; this can result in significant bottom line benefits for the companies that embrace this new direction and bring the work to where the talent is.”

Prior to TransparentBusiness and SGI, Mr. Titinger served as the Chief Operating Officer and then as President and Chief Executive Officer of Verigy, Ltd. from June 2008 to October 2011. He served as Senior Vice President and General Manager of the Product Business Groups at FormFactor Inc. from November 2007 to June 2008. He served in several senior executive roles at KLA-Tencor Corporation, Applied Materials, and Insync Systems. Prior to his tenure in the Semiconductor Equipment industry, he held senior management roles at MIPS/SGI and Hewlett Packard. Mr. Titinger holds a B.S. and M.S. in Electrical Engineering and an M.S. in Engineering Management, all from Stanford University.

“We are delighted to be joined by a person with Mr. Titinger’s experience,” said Alex Konanykhin, CEO of TransparentBusiness. “His experience will allow for faster expansion of TransparentBusiness in the USA and globally.”

About TransparentBusiness

Designated by Citigroup as the “Top People Management Solution”, our TransparentBusiness.com platform greatly increases the productivity of remote work, protects from overbilling, allows for easy monitoring and coordination of a geographically distributed workforce and provides real-time information on the cost and status of all tasks and projects. Visit https://vimeo.com/163773083. TransparentBusiness is an Integrated Partner of ADP and a Technology Partner of Facebook; it serves over 9,000 clients in 142 countries.

Source: TransparentBusiness


UCSD/Venter Institute Leverage ML to Study the Microbiome

Wed, 01/25/2017 - 12:10

The importance of the human microbiome – all of the bacteria inside us – to maintaining health is well established and being widely explored. In fact, the human gut microbiome DNA contains about 100 times as many genes as its human host DNA. These genes not only work for the bacteria but also carry out important functions for the host, such as modulating immune development, amino acid biosynthesis, and energy harvest from food.

Recently, researchers from the University of California, San Diego (UCSD) and the J.C. Venter Institute (JCVI) used machine learning to teach a computer to distinguish between healthy and unhealthy gut microbiomes. The new approach shows promise for use in quickly deciphering microbiome genomes, predicting related health issues, and providing guidance for therapy development. It turns out that accomplishing this task is a big data problem cum HPC project.

A paper on the work – Using Machine Learning to Identify Major Shifts in Human Gut Microbiome Protein Family Abundance in Disease – was presented at the IEEE International Conference on Big Data last month, and an article describing the effort was posted on the UCSD website last week. Notably, the software for the study (developed by Weizhong Li, associate professor at JCVI) was run on the data-intensive Gordon supercomputer at the San Diego Supercomputer Center (SDSC) and used 180,000 core-hours – roughly equivalent to running a PC 24 hours a day for about 20 years.

Data from 30 healthy people (using sequencing data from the National Institutes of Health’s Human Microbiome Program) were combined with data from 30 samples from people suffering from the autoimmune Inflammatory Bowel Disease (IBD), including those with ulcerative colitis and with ileal or colonic Crohn’s disease. The mix of roughly 600 billion DNA bases was then fed into the Gordon supercomputer to reconstruct the relative abundance of these species; for instance, how many E. coli are present compared to other bacterial species. Ultimately, the technique demonstrated high accuracy for these data sets.

Here’s an excerpt from the paper’s abstract: “We use machine learning to analyze results obtained previously from computing relative abundance of ~10,000 KEGG orthologous protein families in the gut microbiome of a set of healthy individuals and IBD patients. We develop a machine learning pipeline, involving the Kolmogorov-Smirnov test, to identify the 100 most statistically significant entries in the KEGG database. Then we use these 100 as a training set for a Random Forest classifier to determine the ~5% of KEGGs which are best at separating disease and healthy states. Lastly, we developed a Natural Language Processing classifier of the KEGG description files to predict KEGG relative over- or under-abundance. As we expand our analysis from 10,000 KEGG protein families to one million proteins identified in the gut microbiome, scalable methods for quickly identifying such anomalies between health and disease states will be increasingly valuable for biological interpretation of sequence data.”
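
For illustration only, the first two stages of that pipeline might look roughly like the sketch below. It runs on synthetic data; the array shapes, parameter choices and the use of scikit-learn feature importances are assumptions for demonstration, not the authors’ code.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: rows are samples (30 healthy, 30 IBD),
# columns are relative abundances of KEGG orthologous protein families.
n_healthy, n_ibd, n_keggs = 30, 30, 10_000
healthy = rng.random((n_healthy, n_keggs))
ibd = rng.random((n_ibd, n_keggs))

# Stage 1: Kolmogorov-Smirnov test per KEGG family; keep the 100 entries
# with the smallest p-values (most statistically significant shifts).
pvals = np.array([ks_2samp(healthy[:, j], ibd[:, j]).pvalue for j in range(n_keggs)])
top100 = np.argsort(pvals)[:100]

# Stage 2: train a Random Forest on those 100 families and rank them by
# feature importance to find the subset that best separates the two states.
X = np.vstack([healthy[:, top100], ibd[:, top100]])
y = np.array([0] * n_healthy + [1] * n_ibd)        # 0 = healthy, 1 = IBD
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranked = top100[np.argsort(rf.feature_importances_)[::-1]]
print("KEGG columns most useful for separating disease and health:", ranked[:5])
```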

In their discussion section, authors note that by looking at the function of specific disease-associated microbial communities, it should be possible to better identify targets for future intervention (i.e. small molecule development to target a specific gene pathway). Using machine learning methods greatly reduces the time required to investigate the immense amounts of data generated from metagenomic sequencing.

Link to the paper: http://lsmarr.calit2.net/repository/IEEE_BigData_KEGGs_CAMERA_READY.pdf

Link to the UCSD article: http://www.sdsc.edu/News%20Items/PR20170118_microbiome.html


International HPC Summer School Coming to Colorado

Wed, 01/25/2017 - 10:31

Jan. 25 — Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the eighth International Summer School on HPC Challenges in Computational Sciences, to be held June 25 to 30, 2017, in Boulder, Colorado, United States of America.

Applications are due March 6, 2017. The summer school is sponsored by Compute/Calcul Canada, the Extreme Science and Engineering Discovery Environment (XSEDE) with funds from the U.S. National Science Foundation, the Partnership for Advanced Computing in Europe (PRACE) and the RIKEN Advanced Institute for Computational Science (RIKEN AICS).

Leading computational scientists and HPC technologists from the U.S., Europe, Japan and Canada will offer instructions on a variety of topics and also provide advanced mentoring. Topics include:

  • HPC challenges by discipline
  • HPC programming proficiencies
  • Performance analysis & profiling
  • Algorithmic approaches & numerical libraries
  • Data-intensive computing
  • Scientific visualization
  • Canadian, EU, Japanese and U.S. HPC-infrastructures

The expense-paid program will benefit scholars from Canadian, European, Japanese and U.S. institutions who use advanced computing in their research. The ideal candidate will have many of the following qualities; however, this list is not meant to be a “checklist” for applicants to meet all criteria:

  • Familiar with HPC, not necessarily an HPC expert, but rather a scholar who could benefit from including advanced computing tools and methods into their existing computational work
  • A graduate student with a strong research plan or a postdoctoral fellow in the early stages of their research efforts
  • Regular practice with parallel programming (i.e., student utilizes parallel programming generally on a monthly basis or more)
  • May have a science or engineering background; however, applicants from other disciplines are welcome provided their research activities include computational work

Students from underrepresented groups in computing are highly encouraged to apply (i.e., women, racial/ethnic minorities, persons with disabilities, etc.). If you have any questions regarding your eligibility or how this program may benefit you or your research group, please do not hesitate to contact the individual associated with your region below.

Interested students should apply by March 6, 2017. Meals and housing will be covered for the selected participants, and support for travel will also be provided.

Further information and application: http://www.ihpcss.org

Source: XSEDE


RSC Granted Highest Elite Status in the Intel Solutions for Lustre Reseller Program

Wed, 01/25/2017 - 09:43

Jan. 25 — RSC Group, the leading developer and system integrator of innovative solutions for high-performance computing (HPC) and data centers in Russia and the CIS, has been granted the highest Elite status in the Intel Solutions for Lustre Reseller Program, confirming the highest level of knowledge and practical experience of the partner’s employees needed to promote, deploy and support Intel Lustre solutions (Intel Enterprise Edition for Lustre software, Intel Foundation Edition for Lustre software, Intel Cloud Edition for Lustre software) for end customers’ scalable, parallel-access high-performance storage systems based on Lustre, a distributed cluster file system. Only nine of Intel’s partners in Europe currently hold this status.

Lustre-based storage systems are currently used in over 73% of the supercomputers in the Top100 of the most powerful supercomputing systems in the world. Lustre supports management of up to 512 petabytes (PB) of storage and file sizes up to 32 PB. Such systems can be accessed through high-speed interconnects based on Intel Omni-Path Architecture (OPA), InfiniBand and Ethernet technologies. Maximum throughput may exceed 2 terabytes per second (TB/s).

Projects completed by RSC specialists with Lustre-based storage systems include supercomputers for the St. Petersburg Polytechnic University named after Peter the Great (SPbPU), South Ural State University (SUSU) and the Moscow Institute of Physics and Technology (MIPT). For example, the “Polytechnic” data center at SPbPU includes a parallel data storage system based on the Lustre distributed file system that can store up to 1 PB of data, and a 0.5 PB block-based storage system for cloud environments. Both storage systems use server technologies based on Intel products.

Back in 2016 Intel also awarded RSC elite status of HPC Data Center Specialist confirming the highest competence of the partner in the field of development and end customer deployment of HPC solutions based on Intel server products, including Intel Xeon Phi 7200 and Intel Xeon E5-2600 processor families, Intel Server Boards, Intel SSD drives and Intel Omni-Path high-speed interconnect.

Russian customers have used solutions based on the RSC Tornado and RSC PetaStream ultra-high-density, energy-efficient, liquid-cooled HPC architectures in production environments since 2009 and 2013, respectively. These solutions are installed and actively used for modeling and calculation of a broad range of scientific, research and industrial workloads by the St. Petersburg Polytechnic University named after Peter the Great, the Joint Supercomputer Center of the Russian Academy of Sciences (JSCC RAS), South Ural State University, the Moscow Institute of Physics and Technology, the Russian weather forecast agency (Roshydromet) and other customers from different industries.

About RSC Group

RSC Group is the leading developer and system integrator of new generation solutions for high-performance computing (HPC) and data centers in Russia and the CIS, based on Intel architectures, innovative liquid cooling technologies and its own know-how. RSC has the potential to create the most energy-efficient solutions with record-breaking power usage effectiveness (PUE) and the highest computing density in the industry with standard x86-based processors, to use fully “green” design, and to provide the highest solution reliability, noise-free operation of computing modules, 100% compatibility and guaranteed scalability with unmatched low cost of ownership and low power consumption. RSC specialists also have experience developing and implementing an integrated software stack of solutions to improve the efficiency and application of supercomputer systems, from system software to vertically oriented platforms based on cloud computing technologies.

Source: RSC Group


Seagate Technology Reports Fiscal Second Quarter 2017 Financial Results

Wed, 01/25/2017 - 07:25

CUPERTINO, Calif., Jan. 25 — Seagate Technology plc (NASDAQ: STX) (the “Company” or “Seagate”) has reported financial results for the second quarter of fiscal year 2017 ended December 30, 2016. For the second quarter, the Company reported revenue of $2.9 billion, gross margin of 30.8%, net income of $297 million and diluted earnings per share of $1.00. On a non-GAAP basis, which excludes the net impact of certain items, Seagate reported gross margin of 31.8%, net income of $412 million and diluted earnings per share of $1.38.

During the second quarter, the Company generated $656 million in cash flow from operations, paid cash dividends of $188 million, and repurchased 4.1 million ordinary shares for $147 million. Cash, cash equivalents, and short-term investments totaled approximately $1.7 billion at the end of the quarter. There were 295 million ordinary shares issued and outstanding as of the end of the quarter.

“The Company’s product execution, operational performance, and financial results improved every quarter throughout 2016. In the December quarter we achieved near record results in gross margin, cash flow, and profitability. Seagate’s employees are to be congratulated for their incredible effort,” said Steve Luczo, Seagate’s chairman and chief executive officer. “Looking ahead, we are optimistic about the long-term opportunities for Seagate’s business as enterprises and consumers embrace and benefit from the shift of storage to cloud and mobile applications. Seagate is well positioned to work with the leaders in this digital transformation with a broad market-leading storage solution portfolio.”

Seagate has issued a Supplemental Financial Information document, which is available on Seagate’s Investors website at www.seagate.com/investors.

Quarterly Cash Dividend 

The Board of Directors of the Company (the “Board”) has approved a quarterly cash dividend of $0.63 per share, which will be payable on April 5, 2017 to shareholders of record as of the close of business on March 22, 2017. The payment of any future quarterly dividends will be at the discretion of the Board and will be dependent upon Seagate’s financial position, results of operations, available cash, cash flow, capital requirements and other factors deemed relevant by the Board.

About Seagate

To learn more about the Company’s products and services, visit www.seagate.com and follow us on Twitter, Facebook, LinkedIn, Spiceworks, YouTube and subscribe to our blog.

Source: Seagate

The post Seagate Technology Reports Fiscal Second Quarter 2017 Financial Results appeared first on HPCwire.

Heresies of the New HPC Cloud Universe

Wed, 01/25/2017 - 07:10

Perhaps ‘heresies’ is a bit strong, but HPC in the cloud, even for academics, is a fast-changing domain that’s increasingly governed by a new mindset, says Tim Carroll, head of ecosystem development and sales at Cycle Computing, an early pioneer in HPC cloud orchestration and provisioning software. The orthodoxy of the past – an emphatic focus on speeds and feeds, if you will – is being erased by changing researcher attitudes and the advancing capabilities of public (AWS, Microsoft, Google et al.) and private (Penguin, et al.) clouds.

Maybe this isn’t a revelation in enterprise settings where cost and time-to-market usually trump fascination with leading edge technology. True enough, agrees Carroll, but the maturing cloud infrastructure’s ability to handle the majority of science workflows – from simple Monte Carlo simulations to many demanding deep learning and GPU-accelerated workloads – is not only boosting enterprise HPC use, but also catching the attention of government and academic researchers. The job logjam (and hidden costs) when using institutional and NSF resources is prompting researchers to seek ways to avoid long queues, speed time-to-result, exercise closer control over their work and (potentially) trim costs, he says.

If all of that sounds like a good marketing pitch, well, Carroll is, after all, in sales. No matter; he is also a long-time industry veteran who has watched the cloud's evolution for years and has played a role in mainstreaming HPC, notably during seven years at Dell (first as senior manager of HPC and later as executive director of emerging enterprise), before joining Cycle in 2014.

As a provider of a software platform that links HPC users to clouds, Cycle has a front row seat on changing HPC cloud user demographics and attitudes as well as cloud provider strengths and weaknesses. In this interview with HPCwire, Carroll discusses market and technology dynamics shaping HPC use in the cloud. Technology is no longer the dominant driver, he says. See if you agree with Carroll’s survey of the changing cloud landscape.

HPCwire: The democratization of HPC has been bandied about for quite awhile with the cloud portrayed as a critical element. There’s also a fair amount of argument around how much of the new HPC really is HPC. Let’s start at the top. How is cloud changing HPC and why is there so much debate over it?

Tim Carroll: Running HPC in the cloud is antithetical to how most people were brought up in what was essentially an infrastructure-centric world. Most of what you did [with] HPC was improve your ability to break through performance ceilings or to handle corner cases that were not traditional enterprise problems; so it was an industry that prided itself on breakthrough performance and corner cases. That was the mindset.

What HPC in the cloud is saying is, "All of the HPC people who for years have been saying how big this industry is going to grow were exactly right, but it's not $25B being spent by people worrying about limits and corner cases. A healthy part of the growth came from people who didn't care about anything but getting their work done." Some people still care about the traditional definition, but I would say there are a whole bunch of people who don't even define it; they just see a way to do things that they couldn't do five or ten years ago.

HPCwire: So are the new users not really running HPC workloads and has the HPC definition changed?

Carroll: HPC workloads are always changing, and perhaps the definition of HPC along with them, but I think what's really happening is that the customer demographics are changing. It's a customer demographic defined not by the software or the system, but by the answer. When you ask someone in a research environment what their job is, they say, I'm a geneticist or I'm a computational chemist. Speak with an industrial engineer and they describe themselves as, no surprise, an industrial engineer. No one describes themselves as an "HPCer." They all say, I've got a job to do and I've got to get it done in a certain amount of time at a certain price point. Their priority is the work.

I think what we did at Dell (now Dell EMC) was a huge step toward democratizing HPC. The attitude was that the TOP500 was not the measure of success. Our goal very early on was to deliver more compute to academic researchers than any other vendor. We did not strive for style points or for the number of flops of any one particular system, but we were determined to enable more scientists to use more Dell flops than anybody else. That was our strategy, and Dell was very successful with it.

HPCwire: That sounds a little too easy, as if they don’t need to know any or at least much computational technology. It’s clear systems vendors are racing to create easier-to-use machines and efforts like OpenHPC are making progress in creating a reference HPC stack. What do users need to know?

Carroll: Users and researchers absolutely need to understand the capabilities of their software and what they can actually do relative to the problem they need to solve, but they should not be required to know much more than that. For the last 20 years, engineers and researchers defined the size of the problem they could tackle by the resources they knew they could get access to. So if I know I have only got a 40-node cluster, what do I do? I start sizing my problems to fit on my cluster. Self-limiting happened unconsciously. But it doesn't matter how it happened; the net effect was an artificial cap on growth.

So today, we’re not saying get rid of that 40-node cluster and make it bigger, but give people the choice to go bigger. Today, an engineer should be able to go to their boss and say, “I think I can deliver this research four months ahead of schedule if I have the ability to access 60 million core hours over a two week period and it’s going to cost – I am just making up numbers – $100,000.” Then the engineer and her boss go to the line of business and see if they want to come up with opex that will cover that and pull in their schedule by three months. Cloud gives people and organizations choice.
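
To put those deliberately made-up figures in perspective, here is a minimal sketch of the arithmetic (ours, not Cycle's): delivering 60 million core-hours inside a two-week window means roughly 180,000 cores running continuously, at an implied price well under a cent per core-hour. The point is the scale of the burst, not actual cloud pricing.

    # Illustrative arithmetic only; the core-hour and dollar figures are the
    # speaker's explicitly made-up examples, not real cloud prices.
    core_hours = 60_000_000
    window_hours = 14 * 24        # two-week burst window
    budget_usd = 100_000

    concurrent_cores = core_hours / window_hours
    usd_per_core_hour = budget_usd / core_hours

    print(f"~{concurrent_cores:,.0f} cores running continuously for two weeks")
    print(f"implied cost of ${usd_per_core_hour:.4f} per core-hour")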

HPCwire: Stepping away from the enterprise for a moment, what’s the value proposition for academic and government researchers many of whom are very well versed indeed in HPC? Aren’t they simply likely to use internal resources or NSF cyberinfrastructure as opposed to the public cloud?

Carroll: The academic portion is really interesting because of how important the funding models are and the rules set by funding agencies. Because of that, it's not always obvious whether cloud is even an option. It also depends on how the individual institutions charge for the other pieces [of a cloud deployment] that are being done. Often there is overhead, and so it doesn't matter how cost-effective the cloud is, because by the time it gets to the researcher, the landed cost is going to be prohibitive.

As a result, cloud for academic HPC has been murky for the last couple of years. People aren't ready to get there yet. Jetstream was a step in the right direction (an NSF-funded initiative), but I'm wondering whether anyone put themselves in the shoes of users, big and small, to judge how that experience compares to the public cloud providers.

The cloud thing is going to be here this year, and next year, and many years after. And guess what, there's also going to be a refresh cycle on internal hardware next year and a refresh cycle the year after. People are going to have to get more and more granular in their justification for deploying on their internal infrastructure versus using public cloud. And I am not saying that's an either-or proposition. But if the demand for compute is growing at 50 percent per year and budgets are going up a lot less, how are you going to fill that gap and provide researchers what they need to get their jobs done? What is the value proposition of longer job queues?

How can academia or the funding agencies not embrace what is arguably the fastest-moving, most innovative, most cost-effective platform available to fill the compute demand gap? Cloud is just one more tool, but if one views it as a Trojan horse to get inside academia and eliminate infrastructure, that is just wrong. Cloud is going to get its portion of the overall market where it makes sense for certain workloads, but not necessarily entire segments. Embrace it.

HPCwire: What’s been the Cycle experience in dealing with the academic community?

Carroll: I am still somewhat surprised at the amount of pushback I get from the academic community based on anecdotal information – the number of people who talk about what can and can't be done even though they haven't tried. And there are so many people at the public cloud providers who would love to help them. Who knows, it may work out that they run the workload to see how much it would cost and the data says it is still twice as expensive as [internally]. That's great; now we have a hard data point rather than something anecdotal.

One of the great things cloud will do for academia is clear the decks for people who are truly building specialized infrastructure to solve really hard problems. What typically happened is that institutions had to support a breadth of researchers and were faced with the challenge of serving diverse needs from a demanding community. The result was that commodity clusters became the best middle ground; good enough for the middle, but not really what the high end and the low end needed. In trying to serve a diverse market with a single architecture, few people got exactly what their research required. What you are going to see is that the bell curve gets turned upside down, and centers reallocate capex to specialized systems while running high-throughput workloads on public cloud.

I should note that the major cloud providers all have enormous respect for the HPC market segment and appreciate the fact that the average customer at the high end probably consumes ten to several hundred times the compute of a typical enterprise customer. They are all staffing up with very talented people and are eager to collaborate with academia to deliver solutions to them.

HPCwire: What will be the big driver of cloud computing in academic research centers, beyond NSF resources I mean?

Carroll: In the last ten years universities have become far more competitive in attracting the right researchers. It used to be that every new researcher got his or her own cluster as part of a startup package, and that model is flat-out unsustainable. But it's a small enough community that researchers will quickly hear when the word is, "ABC University has a great system and their researchers have no queue times and no workload limits." Who cares where the compute is performed; the university will have built a reputation that if you're a researcher there, you just get what you need to get the job done. Competition for talent is going to drive greater cloud adoption in academia.

It can also be powerful for individual researchers working on small projects. There's a professor who reached out to us who wanted to include [cloud computing] in her class as part of her teaching while doing real science. She wanted to start the job when the semester began and have it finish by the time it ended. We said, how about if, instead of taking four months like you thought, we knock it down to a couple of days? It is not grant-based research, but the cost fit within her discretionary budget. So she is not doing it as an academic exercise; this is a piece of science that would not be done were it not for this. And it is part of the teaching.

HPCwire: How do you characterize the cloud providers and how does Cycle fit in?

Carroll: We (Cycle) are a software platform that gives people the ability to run their workloads under their control on whichever cloud is best for them. It is not a SaaS model. Users can still protect their corporate IP, with their existing workload, and run multiple workloads for multiple users across multiple clouds from a single control point. We fit in with the cloud providers by helping them and their users do what they do best.

My experience is that customers are not looking for "cloud"; they have a problem and a deadline, and they are just trying to figure out how they can securely and cost-effectively run workloads that can't be run today on their internal infrastructure. If I came and said the answer is the public cloud, that would be ok. Private cloud, ok. If I said we would pull a trailer stuffed with servers up to their building, they'd say ok. They will make their choice based on how much it costs, whether it is secure, and how much work is required to get started and keep it running. It just so happens that the winner is increasingly the public cloud.

Back to your earlier question about change within HPC. Cloud is not causing infrastructure to disappear; it is causing labels to disappear. Customers who have been early adopters are now on their sixth or eighth or tenth workload and are starting to get into workloads that are considered "traditional HPC." But they did not label the first workloads as "non-HPC." They just view them as compute-intensive applications and don't care whether it is called HPC or something else.

The post Heresies of the New HPC Cloud Universe appeared first on HPCwire.

Rescale Opens Munich Office to Support EMEA Market Growth

Tue, 01/24/2017 - 09:35

Jan. 24 — Rescale Inc., the San Francisco-based global leader in cloud high-performance computing (HPC), recently opened an office in Munich, Germany, to accommodate rapid growth in Europe, the Middle East, and Africa (EMEA). This is the third location in the company's global expansion, following the opening of an office in Tokyo, Japan, in July 2016. To meet increasing demand from Rescale's customers in the region and oversee new and existing partnerships, Rescale has appointed Wolfgang Dreyer as General Manager of EMEA to lead a dedicated regional sales and technical team.

Wolfgang has dedicated his career to HPC solutions. As the founder and CEO of Quant-X, an EMEA HPC solution provider, he grew the company from a team of three to a global organization. He also led HPC solutions at Microsoft, launching its German market presence. In addition, Wolfgang has held positions in HPC sales and management at Allinea and IBM, and most recently led EMEA operations for Adaptive Computing from 2012.

At Rescale, the new EMEA sales and technical support team will leverage industry ties and local market knowledge to expand and deepen Rescale’s presence in Europe, where a robust manufacturing sector increasingly relies on simulation and cloud computing to lower production costs, refine complex designs, and improve operational effectiveness in an increasingly competitive global market. Wolfgang believes that Rescale’s solution is a good fit for these large enterprise customers, explaining, “Rescale’s hybrid solution gives customers the ability to utilize their existing on-premise solutions while adopting cloud HPC in parallel—with minimal impact to engineering workflows.”

The establishment of an on-the-ground presence throughout Europe, home to numerous powerhouse design and manufacturing firms in automotive, aerospace, and industrial equipment, marks an important step in Rescale’s global expansion. With an expanded presence in EMEA, Rescale will be able to better serve Europe’s diverse patchwork of economies and regulations, providing companies access to customizable hardware from a multi-cloud infrastructure network and a dedicated European HPC platform to comply with EU data privacy regulations. “Europe has always been a crucial market for Rescale,” said Rescale co-founder and CEO Joris Poort, “and we are thrilled to be establishing a solid regional foundation for sales and support for our customers in Europe. Wolfgang’s HPC expertise and deep familiarity with the region will be a tremendous asset to help serve our European customers.”

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s platform transforms traditional fixed IT resources into flexible, hybrid, private, and public cloud resources – built on the largest and most powerful high-performance computing network in the world. For more information on Rescale products and services, visit www.rescale.com.

Source: Rescale

The post Rescale Opens Munich Office to Support EMEA Market Growth appeared first on HPCwire.

D-Wave Introduces 2000Q Quantum Computer

Tue, 01/24/2017 - 07:16

Jan. 24 — D-Wave Systems Inc., the leader in quantum computing systems and software, today announced general commercial availability of the D-Wave 2000Q quantum computer. D-Wave also announced the first customer for the new system, Temporal Defense Systems Inc. (TDS), a cutting-edge cyber security firm. With 2000 qubits and new control features, the D-Wave 2000Q system can solve larger problems than was previously possible, with faster performance, providing a big step toward production applications in optimization, cybersecurity, machine learning, and sampling.

“D-Wave’s leap from 1000 qubits to 2000 qubits is a major technical achievement and an important advance for the emerging field of quantum computing,” said Earl Joseph, IDC program vice president for high performance computing. “D-Wave is the only company with a product designed to run quantum computing problems, and the new D-Wave 2000Q system should be even more interesting to researchers and application developers who want to explore this revolutionary new approach to computing.”

The new system continues D-Wave's record of doubling the number of qubits on its quantum processing units (QPUs) every two years, which enables larger problems to be run, because increasing the number of qubits yields an exponential increase in the size of the feasible search space. Using benchmark problems that are both challenging and relevant to real-world applications, the D-Wave 2000Q system outperformed highly specialized algorithms run on state-of-the-art classical servers by factors of 1000 to 10000. The benchmarks included optimization problems and sampling problems relevant to machine learning applications.
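
For context on the exponential claim, a rough sketch (our own arithmetic, not D-Wave's benchmark methodology): an n-qubit annealer can in principle address up to 2^n basis states, so going from 1000 to 2000 qubits multiplies that upper bound by a factor of about 2^1000, roughly 10^301, rather than merely doubling it.

    # Back-of-the-envelope illustration of 2**n scaling; assumes the feasible
    # search space of an n-qubit annealer is bounded by 2**n basis states.
    import math

    def order_of_magnitude(n_qubits: int) -> int:
        """Approximate decimal exponent of 2**n_qubits."""
        return math.floor(n_qubits * math.log10(2))

    for n in (1000, 2000):
        print(f"{n} qubits -> up to 2**{n} states (~10**{order_of_magnitude(n)})")
    print(f"growth factor from 1000 to 2000 qubits: ~10**{order_of_magnitude(1000)}")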

The new anneal offsets control feature enables users to tune the annealing of individual qubits. Combined with a faster annealing time over previous D-Wave systems, problems can be solved more efficiently, improving application performance.

“D-Wave continues to advance the state-of-the-art in quantum computing with each generation of systems we deliver to customers,” said Vern Brownell, D-Wave’s CEO. “We are the only company selling quantum computers, and our growing ecosystem of users and developers gives us the benefit of their practical experience as we develop products to solve real-world problems. While other organizations have prototypes with just a few qubits in their labs, D-Wave is delivering the systems, software, training, and services needed to build an industry.”

The performance of the D-Wave 2000Q system was assessed in several ways:

  • In benchmark tests, D-Wave QPUs outperformed competitive classical algorithms by 1000 to 10000 times in pure computation time. For these tests, D-Wave developed efficient CPU- and GPU-based implementations of highly specialized algorithms that are recognized as the stiffest competition to D-Wave QPUs, and ran them on the latest-generation classical computer servers. These benchmark problems in sampling and optimization were created to represent the structure of common real-world problems, while maximizing the size of the problem that could fit on the 2000-qubit QPU. The benchmark comparisons were relative to single CPU cores and 2500-core GPUs at the largest problem size. Link to technical white paper.
  • The D-Wave 2000Q system outperformed the GPU-based implementations by 100 times in equivalent problem solving performance per watt. Power efficiency is a serious and growing issue in large-scale computing. The power draw of D-Wave’s systems has remained constant in successive generations, and is expected to continue to do so while the computational power increases dramatically. As a result, the computational power per watt is expected to increase much more rapidly for D-Wave QPUs than for classical systems. Link to technical white paper.
  • The new anneal offsets feature provided a remarkable improvement over baseline performance in a small-scale demonstration of integer factoring, in some cases making the computation more than 1000 times faster than when the problem was run without this feature. Link to technical white paper.

Looking ahead to future developments, Jeremy Hilton, SVP Systems said, “The D-Wave 2000Q quantum computer takes a leap forward with a larger, more computationally powerful and programmable system, and is an important step toward more general-purpose quantum computing. In the future, we will continue to increase the performance of our quantum computers by adding more qubits, richer connections between qubits, more control features; by lowering noise; and by providing more efficient, easy-to-use software.”

D-Wave 2000Q quantum computers are available this quarter for shipment, with systems also accessible to subscribers remotely over the internet. For more information about TDS’s purchase of the 2000Q system, please read the press release.

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems and software, and the world’s only commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing to solve the most challenging national defense, scientific, technical, and commercial problems. D-Wave’s systems are being used by some of the world’s most advanced organizations, including Lockheed Martin, Google, NASA Ames, and Los Alamos National Laboratory. With headquarters near Vancouver, Canada, D-Wave’s U.S. operations are based in Palo Alto, CA and Hanover, MD. D-Wave has a blue-chip investor base including Goldman Sachs, Bezos Expeditions, DFJ, In-Q-Tel, BDC Capital, Growthworks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Limited. For more information, visit: www.dwavesys.com.

Source: D-Wave Systems

The post D-Wave Introduces 2000Q Quantum Computer appeared first on HPCwire.

Applications for Two PRACE Summer Activities Are Now Being Accepted

Tue, 01/24/2017 - 06:54

Jan. 24 — Both activities are expense-paid programmes and will allow participants to travel and stay at a hosting location and learn about HPC:

  • The 2017 International Summer School on HPC Challenges in Computational Sciences
  • The PRACE Summer of HPC 2017 programme

2017 International Summer School on HPC Challenges in Computational Sciences – Applications due March 6, 2017

The summer school is sponsored by Compute/Calcul Canada, the Extreme Science and Engineering Discovery Environment (XSEDE), the Partnership for Advanced Computing in Europe (PRACE) and the RIKEN Advanced Institute for Computational Science (RIKEN AICS).

Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the eighth International Summer School on HPC Challenges in Computational Sciences, to be held June 25 – 30, 2017, in Boulder, Colorado, United States of America.

Leading computational scientists and HPC technologists from the U.S., Europe, Japan and Canada will offer instruction on a variety of topics and also provide advanced mentoring.
Topics include:

  • HPC challenges by discipline
  • HPC programming proficiencies
  • Performance analysis & profiling
  • Algorithmic approaches & numerical libraries
  • Data-intensive computing
  • Scientific visualization
  • Canadian, EU, Japanese and U.S. HPC-infrastructures

For more details please visit:
http://www.prace-ri.eu/2017-international-summer-school-on-hpc-challenges-in-computational-sciences/

PRACE Summer of HPC 2017 – Applications due February 19, 2017

The PRACE Summer of HPC is a PRACE outreach and training programme that offers summer placements at top HPC centres across Europe to late-stage undergraduates and early-stage postgraduate students. Up to twenty top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE technical or industrial work and produce a report and a visualisation or video of their results.

Early-stage postgraduate and late-stage undergraduate students are invited to apply for the PRACE Summer of HPC 2017 programme, to be held in July & August 2017. Consisting of a training week and two months on placement at top HPC centres around Europe, the programme affords participants the opportunity to learn and share more about PRACE and HPC, and includes accommodation, a stipend and travel to their HPC centre placement.

The programme will run from 2 July to 31 August 2017, with a kick-off training week at IT4I Supercomputing Centre in Ostrava attended by all participants. Flights, accommodation and a stipend will be provided to all successful applicants. Two prizes will be awarded to the participants who produce the best project and best embody the outreach spirit of the programme.

For more details please visit:
http://www.prace-ri.eu/prace-sohpc-2017-opens-for-applications/

Source: PRACE

The post Applications for Two PRACE Summer Activities Are Now Being Accepted appeared first on HPCwire.

Cavium to Contribute Wedge 100C Switch Design to the OCP

Tue, 01/24/2017 - 06:48

SAN JOSE, Calif., Jan. 24 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that provide intelligent processing for enterprise, data center, cloud, wired and wireless networking, today announced that it will contribute its Wedge 100C switch hardware design, based on the production-ready, field-deployed XPliant Programmable ASIC, to the Open Compute Project (OCP) Foundation.

The programmable Wedge 100C is an open switch platform focused on top-of-rack deployments, with 10G/25G/40G/50G/100G server connectivity and 100GbE uplinks to the aggregation layer of the network. The Wedge 100C design uses the Cavium XPliant CNX88091, a programmable, production-ready Ethernet switch, and is based on the original OCP-ACCEPTED Wedge 100 switch specification and designs contributed by Facebook in October 2016.

The contributed network switch can be managed by a variety of Network Operation Systems (NOS), including the Facebook Open Switching System (“FBOSS”) software stack.

Cavium's XPliant family of Ethernet switches uniquely addresses the needs of today's highly dynamic datacenter networks by enabling developers to continuously evolve and improve data center network operations. The world's first programmable Wedge 100C open switch platform will allow developers to continuously introduce new protocols, adapt networks to new server technologies such as containers, and improve network visibility without requiring the deployment of new network switching systems. These attributes extend the life cycle of the Wedge 100C switch and deliver on the OCP vision of providing high ROI on the datacenter operator's switching infrastructure investment.

“The Open Compute Networking Project is excited to see that Cavium has shared the Wedge 100C hardware design with the community,” said Omar Baldonado, OCP Networking Project Co-Lead. “Wedge 100C provides data center operators with the option of using a programmable switching silicon. This is the flexibility the industry needs from rich hardware and software.”

"Contributing the programmable Wedge 100C open switch platform to the OCP is an important step in supporting the OCP's goal of maximizing innovation," said Eric Hayes, Vice President and General Manager of the Switching Platform Group at Cavium. "This contribution empowers a community of developers today to produce cost-effective, innovative data center networking solutions that are extremely beneficial to datacenter operators as they migrate their networks to 25 GbE and 100 GbE."

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of integrated, software compatible processors ranging in performance from 1Gbps to 100Gbps that enable secure, intelligent functionality in Enterprise, Data Center, Broadband/Consumer, Mobile and Service Provider Equipment, highly programmable switches which scale to 3.2Tbps and Ethernet and Fibre Channel adapters up to 100Gbps. Cavium processors are supported by ecosystem partners that provide operating systems, tools and application support, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, China and Taiwan.

About the Open Compute Project

The Open Compute Project Foundation is a 501(c)(6) organization which was founded in 2011 by Facebook, Intel, and Rackspace. Our mission is to apply the benefits of open source to hardware and rapidly increase the pace of innovation in, near and around the data center and beyond.

Source: Cavium

The post Cavium to Contribute Wedge 100C Switch Design to the OCP appeared first on HPCwire.
