HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Messina Update: The U.S. Path to Exascale in 16 Slides

Wed, 04/26/2017 - 08:10

Paul Messina, director of the U.S. Exascale Computing Project, provided a wide-ranging review of ECP’s evolving plans last week at the HPC User Forum. The biggest change, of course, is ECP’s accelerated timetable, with delivery of the first exascale machine now scheduled for 2021. While much of the material covered by Messina wasn’t new, there were a few fresh details on the long-awaited Path Forward hardware contracts and on progress to date on other ECP fronts.

Paul Messina, ECP Director

“We have selected six vendors to be primes, and in some cases they have had other vendors involved in their R&D requirements. [We have also] been working on detailed statements of work because the dollar amounts are pretty hefty, the approval process [reaches] high up in the Department of Energy,” said Messina of the Path Forward awards. Five of the contracts are signed and the sixth is not far off. His slide even had the announcement slated for COB April 14, 2017. “It would have been great to announce them at this HPC User Forum but it was not meant to be.” He said the announcements will be made public soon.

The duration of the ECP project has been shortened to seven years from ten, although there’s a 12-month schedule contingency built in to accommodate changes, said Messina. Interestingly, during Q&A Messina was asked about U.S. willingness to include ‘individuals’ not based in the U.S. in the project. The question was a little ambiguous as it wasn’t clear if ‘individuals’ was intended to encompass foreign interests broadly, but Messina answered directly, “[For] people who are based outside the U.S. I would say the policy is they are not included.”

Presented here are a handful of Messina’s slides updating the U.S. march towards exascale computing – most of the talk dwelled on software-related challenges – but first it’s worth stealing a few Hyperion Research (formerly IDC) observations on the global exascale race that were also presented during the forum. The rise of national and regional competitive zeal in HPC and the race to exascale is palpable, as evidenced by Messina’s comment on U.S. policy.

China is currently ahead in the race to stand up an exascale machine first, according to Hyperion. That’s perhaps not surprising given its recent dominance of the Top500 list. Japan is furthest along in settling on a design, key components, and contractor. Here are two Hyperion slides summing up the world race (see the HPCwire article “Hyperion (IDC) Paints a Bullish Picture of HPC Future” for a full rundown of HPC trends).

Messina emphasized that the three-year R&D projects (Path Forward) are intended to result in better hardware at the node, memory, and system levels, along with improved energy consumption and programmability. Moreover, ECP is looking past the initial exascale systems. “The idea is that after three years hopefully the successful things will become part of [vendors’] product lines and result in better HPC systems for them not just for the initial exascale systems,” he said. The RFPs for the exascale systems themselves will come from the labs doing the buying.

The ECP is a collaborative effort of two U.S. Department of Energy organizations, the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA). Sixteen of seventeen national labs are participating in ECP, and the six that have traditionally fielded leadership HPC systems – Argonne, Oak Ridge, Lawrence Livermore, Sandia, Los Alamos, and Lawrence Berkeley National Laboratories – form the core partnership and signed a memorandum of agreement on cooperation defining roles and responsibilities.

Under the new schedule, “We will have an initial exascale system delivered in 2021 ready to go into production in 2022 and which will be based on advanced architecture which means really that we are open to something that is not necessarily a direct evolution of the systems that are currently installed at the NL facilities,” explained Messina.

“Then the ‘Capable Exascale’ systems, which will benefit from the R&D we do in the project, we currently expect them to be delivered in 2022 and available in 2023. Again these are at the facilities that normally get systems. Lately it’s been a collaboration of Argonne, Oak Ridge and Livermore, that roughly every four years establish new systems. Then [Lawrence] Berkeley, Los Alamos and Sandia, which during the in-between years install systems.” Messina again emphasized, “It is the facilities that will be buying the systems. [The ECP will] be doing the R&D to give them something that is hopefully worth buying.”

Four key technical challenges are being addressed by the ECP to deliver capable exascale computing:

  • Parallelism a thousand-fold greater than today’s systems
  • Memory and storage efficiencies consistent with increased computational rates and data movement requirements
  • Reliability that enables system adaptation and recovery from faults in much more complex system components and designs
  • Energy consumption beyond current industry roadmaps, which would be prohibitively expensive at this scale

Another important ECP goal, said Messina, is to kick the development of U.S. advanced computing into a new higher trajectory (see slide below).

From the beginning the exascale project has steered clear of FLOPS and LINPACK as the best measure of success. That theme has only grown stronger with attention focused on defining success as performance on useful applications and the ability to tackle problems that are intractable on today’s Petaflops machines.

“We think of 50x that performance on applications [as the exascale measure of merit]; unfortunately there’s a kink in this,” said Messina. “The kink is people won’t be running today’s jobs on these exascale systems. We want exascale systems to do things we can’t do today and we need to figure out a way to quantify that. In some cases it will be relatively easy – just achieving much greater resolutions – but in many cases it will be enabling additional physics to more faithfully represent the phenomena. We want to focus on measuring each capable exascale system based on full applications tackling real problems compared to what they can do today.”
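As a concrete (and entirely hypothetical) illustration of that measure of merit, the ratio below compares a full application’s runtime today against its runtime on an exascale system; the numbers are invented, and the metric deliberately ignores peak FLOPS:

```python
def figure_of_merit(baseline_hours, exascale_hours):
    """Application speedup versus today's systems: ECP's measure of
    merit is performance on full, real workloads, not LINPACK."""
    return baseline_hours / exascale_hours

# Hypothetical full-application run: 100 hours today, 2 hours at exascale.
speedup = figure_of_merit(100.0, 2.0)
print(speedup)  # 50.0 -- the stated ~50x target
```

The “kink” Messina describes shows up immediately: a job that simply cannot run today (new physics, much finer resolution) has no baseline runtime, so the ratio is undefined for exactly the workloads exascale is meant to enable.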

“This list is a bit of an eye chart (above) and represents the 26 applications that are currently supported by the ECP. Each of them, when selected, specified a challenge problem. For example, it wasn’t just a matter of saying they’ll do better chemistry but here’s a specific challenge that we expect to be able to tackle when the first exascale systems are available,” said Messina.

One example is GAMESS (General Atomic and Molecular Electronic Structure System), an ab initio quantum chemistry package that is widely used. The team working on GAMESS has spelled out specific problems to be attacked. “It’s not only very good code but they have ambitious goals; if we can help that team achieve its exascale goals for the GAMESS community code, the leverage is huge because it has all of those users. Now not all of them need exascale to do their work but those that do will be able to do it quickly and more easily,” said Messina.

GAMESS is also a good example of a traditional FLOPS-heavy numerical simulation application. Messina reviewed four other examples (earthquake simulation, wind turbine applications, energy grid management optimization, and precision medicine). “The last one that I’ll mention is a collaboration between DOE, NIH, and NCI on cancer as you might imagine,” said Messina. “It is extremely important for society and also quite different from traditional partial differential equation solving because this one will rely on deep learning and use of huge amounts of data – being able to use millions of patient records on types of cancer and the treatments they received and what the outcome was as well as millions of potential cures.”

Data analytics is a big part of these kinds of precision medicine applications, said Messina. When pressed on whether the effort to combine traditional simulation with deep learning would inevitably lead to diverging architectures, Messina argued for the contrary: “One of our second level goals is to try to promote convergence as opposed to divergence. I don’t know that we’ll be successful in that but that’s what we are hoping. [We want] to understand that better because we don’t have a good understanding of deep learning and data analytics.”

Co-design has also been a priority and has received a fair amount of attention. Doug Kothe, ECP director of applications development, is spearheading those efforts. Currently there are five co-design centers, including a new one focused on graph analytics. All of the teams have firm milestones, including some shared milestones with other ECP efforts to ensure productive cooperation.

Messina noted that, “Although we will be measuring our success based on whole applications, in the meantime you can’t always deal with the whole application, so we have proxies and sub projects. The vendors need this and we will need it to guide our software development.”

Ensuring resilience is a major challenge given the exascale system complexity. “On average the user should notice a fault on the order of once a week. There may be faults every hour but the user shouldn’t see them more than once a week,” said Messina. This will require, among other things, a robust capable software stack, “otherwise it’s a special purpose system or a system that is very difficult to use.”
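A minimal sketch of that fault-masking idea, with invented numbers: a simple retry loop stands in for real checkpoint/restart recovery, so that although faults occur on a large fraction of attempts, only the rare unrecoverable one surfaces to the user.

```python
import random

rng = random.Random(42)
FAULT_RATE = 0.3  # hypothetical: a transient fault on 30% of attempts

def flaky_step():
    """One unit of work that sometimes hits a transient hardware fault."""
    if rng.random() < FAULT_RATE:
        raise RuntimeError("transient fault")
    return "ok"

def run_with_recovery(step, max_retries=5):
    """Retry after faults (a stand-in for checkpoint/restart recovery).
    Raises only when recovery is exhausted, i.e. the user sees the fault."""
    absorbed = 0
    for _ in range(max_retries + 1):
        try:
            return step(), absorbed
        except RuntimeError:
            absorbed += 1
    raise RuntimeError("fault surfaced to the user")

total_faults = visible_faults = 0
for _ in range(1000):
    try:
        _, faults = run_with_recovery(flaky_step)
        total_faults += faults
    except RuntimeError:
        visible_faults += 1

# Faults are frequent, yet the stack absorbs nearly all of them.
print(total_faults, visible_faults)
```

The ratio is the point: hundreds of faults occur across the run, but almost none reach the user, which is the hourly-fault/weekly-visibility budget Messina describes, scaled down to a toy.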

Messina showed a ‘notional’ software stack slide (below). “Resilience and workflows are on the side because we believe they influence all aspects of the software stack. In a number of areas we are investing in several approaches to accomplish the same functionality. At some point we will narrow things down. At the same time we feel we probably have some gaps, especially in the data issues, and are in the process of doing a gap analysis for our software stack,” he said.

Clearly it’s a complex project with many parts. Integration of all these development activities is an ongoing challenge.

“You have to work together so the individual teams have shared milestones. Here’s one that I selected simply because it was easy to describe. By the beginning of next calendar year [we should have] new prototype APIs to have coordination between MPI and OpenMP runtimes because this is an issue now in governing the threads and messages when you use both programming models, which a fair number of applications do. How is this going to work? So the software team doing this will interact with a number of application development teams to make sure we understand their runtime requirements. We can’t wait until we have the exascale systems to sort things out.”

“We also want to be able to measure how effective the new ideas are likely to be, and so we have also launched a design space evaluation effort,” said Messina. The ECP project has actively sought early access to several existing resources for use during development.

These are just a few of the topics Messina touched on. Workforce training is another key issue that ECP is tackling. It is also increasing its communications and outreach efforts, as shown below. There is, of course, an ECP web site, and ECP recently launched a newsletter expected to be published roughly monthly.

With the Path Forward awards coming soon, several working group meetings already held, and the newly solidified plan, the U.S. effort to reach exascale computing is starting to feel concrete. It will be interesting to see how well ECP’s various milestones are hit. The last slide below depicts the overall program. Hyperion indicated it will soon post all of Messina’s slides (and other presentations from HPC User Forum) on the HPC User Forum site.

The post Messina Update: The U.S. Path to Exascale in 16 Slides appeared first on HPCwire.

R Systems Sponsors rLoop Team as Part of SpaceX Hyperloop Pod Competition

Wed, 04/26/2017 - 08:08

CHAMPAIGN, Ill., April 26, 2017 — R Systems NA, Inc. confirmed today its sponsorship of the rLoop Hyperloop team participating in SpaceX Competition II.

Founded in 2002, SpaceX is a private space exploration company sponsoring the Hyperloop Pod Competition. The competition aims to facilitate the development of the high-speed Hyperloop transportation system by encouraging independent groups to develop functional prototypes. While many teams participating in the competition are student organizations, rLoop is a unique collaboration of over 100 members spanning more than 14 countries.

R Systems, a bare metal high performance computing provider, began sponsoring rLoop’s pod design in March 2016. “We were very excited when rLoop won the Pod Innovation Award at the SpaceX Competition in January,” said R Systems co-founder Brian Kucic. “We are pleased to be able to support their ongoing efforts to revolutionize transportation, and to be a part of rLoop’s impressive global collaborative efforts.”

As the only non-student team among the 20 finalist teams to participate in SpaceX’s January Hyperloop competition, rLoop did not have ready access to university HPC resources. “We were looking for more CPU power to run the advanced simulations being developed by our team, and R Systems offered to provide exactly what we needed,” said rLoop Project Manager, Brent Lessard. “The R Systems support team had us running on their utility cluster in no time. We now have easy access to significant HPC resources to run our jobs at any time of the day or night.”

Amir Khan, rLoop’s design and analysis lead, is pleased with R Systems’ continued sponsorship. Khan said, “The ability of our engineers to access this powerful resource whenever needed provides us with a valuable advantage.”

According to R Systems Senior Systems Engineer Steve Pritchett, the relationship with rLoop has been very positive. Pritchett commented, “After we provided login access and installed necessary software, the rLoop users were able to focus on their pod design rather than dealing with system management issues.”

The second phase of the Hyperloop Competition focuses on maximum pod speed. R Systems co-founder Greg Keller says the company is a perfect match for the rLoop team. “At R Systems, we are intimately familiar with the goal of achieving maximum speed,” Keller said. “Our people and clusters quickly move our customers’ vision forward to conquer big challenges,” Keller added.

The SpaceX Competition II is scheduled for completion in mid-2017.

About Hyperloop

Hyperloop is a conceptual, high-speed transportation system proposed by SpaceX CEO Elon Musk. The concept involves pods carrying passengers or cargo being propelled through a low-pressure tube using sustainable and cost-efficient energy. To accelerate its development, SpaceX is hosting a competition for engineering teams to design and test their own Hyperloop pods. More information about this innovative technology is available at http://www.spacex.com/hyperloop.

About rLoop

rLoop is a non-profit, open source participant in the SpaceX Hyperloop competition. With a mission to democratize the Hyperloop through collective design, rLoop has gained more than 100 members from over 14 countries. Learn more about how rLoop is revolutionizing transportation at http://rloop.org/.

About R Systems

R Systems is a service provider of high performance computing resources. The company empowers research by providing leading edge technology with a knowledgeable tech team, delivering the best performing result in a cohesive working environment. Offerings include lease-time for bursting as well as for short-term and long-term projects, available at industry-leading prices.

Source: R Systems


Asetek Reports Financial Results for Q1 2017

Wed, 04/26/2017 - 08:04

OSLO, Norway, April 26, 2017 — Asetek reported revenue of $11.5 million in the first quarter of 2017, a 10% increase from the first quarter of 2016. The change from the prior year reflects an increase in desktop revenue driven by shipments in the DIY market, partly offset by a decline in data center revenue.

  • Quarterly revenue growth of 10% driven by high-end gaming cooling demand
  • New orders and development agreement with undisclosed major player reflect increased end-user adoption in data center segment
  • Shipments of Asetek’s sealed loop coolers surpassed 4 million units since inception
  • Continued positive EBITDA
  • Cash dividend of NOK 1.00 per share approved at AGM
  • Reaffirming 2017 expectations of moderate desktop segment growth and significant data center revenue growth

“We are making good progress in our emerging data center business with several new orders and the announced signing of a development agreement with an undisclosed partner. It confirms that we are on track to meet our ambition of increasing end-user adoption in the data center market. Our desktop business segment delivered another quarter of revenue growth on demand from the high-end gaming market,” says André Sloth Eriksen, Chief Executive Officer.

Gross margin for the first quarter was 38.5%, compared with 39% in the first quarter of 2016 and 37% in the fourth quarter of 2016. EBITDA was $0.7 million in the first quarter of 2017, compared with $1.2 million in the first quarter of 2016.

Desktop revenue was $11.1 million in the first quarter, an increase of 17% from the same period of 2016. Operating profit from the desktop segment was $3.4 million, an increase from $2.8 million in the same period last year, due to an increase in DIY product sales.

Data center revenue was $0.4 million, a decrease from $1.0 million in the prior year due to fewer shipments to OEM customers. This variability is expected while the Company secures new OEM partners and grows end-user adoption through existing OEM partners.

Asetek continued to invest in its data center business and the segment operating loss was $1.8 million for the first quarter, compared with $1.0 million in the same period of 2016. Expenditures relate to technology development, and sales and marketing activities with data center partners and OEM customers.

Through new and repeat orders received from existing data center OEM partners in the first quarter and more recently in April, Asetek is increasing its end-user adoption with technology deployed to new HPC installations.

In February Asetek signed a development agreement with an undisclosed major player in the data center market and expects this agreement to result in new products in 2017 which will drive long-term data center revenue.

Asetek reaffirms its annual outlook for the full year 2017, anticipating moderate growth in the desktop business and significant revenue growth in the data center segment compared with 2016.

The proposal of a cash dividend of NOK 1.00 per share was approved by the AGM.

Source: Asetek


Supermicro Announces 25/100Gbps Networking Solutions

Wed, 04/26/2017 - 08:00

SAN JOSE, Calif., April 26, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI) has announced general availability of Mellanox, Broadcom and Intel-based 100Gbps and 25Gbps standard networking cards and onboard SIOM solutions, 25Gbps MicroLP networking cards, and onboard riser cards optimized for the Ultra SuperServer.

Supermicro networking modules deliver high bandwidth and industry-leading connectivity for performance-driven server and storage applications in the most demanding Data Center, HPC, Cloud, Web2.0, Machine Learning and Big Data environments. Clustered databases, web infrastructure, and high frequency trading are just a few applications that will achieve significant throughput and latency improvements resulting in faster access, real-time response and virtualization enhancements with this generation of industry leading Supermicro solutions.

Supermicro’s range of 25Gbps and 100Gbps interface solutions:

Supermicro’s 25/100Gbps networking solutions offer high performance and efficient network fabrics, covering a range of application optimized products. These interfaces provide customers with networking alternatives optimized for their applications and data center environments. The AOC-S Standard LP series cards are designed for any Supermicro server with a PCI-E x8 (for 25G) or PCI-E x16 (for 100G) expansion slot. The AOC-C MicroLP add-on card is optimized for Supermicro high-density FatTwin and MicroCloud SuperServers. The Supermicro AOC-M flexible, cost-optimized 25/100Gbps onboard SIOM series cards support the Supermicro TwinPro, BigTwin, Simply Double and 45/60/90-Bay Top-Load SuperStorage, plus 7U 8-Way SuperServer. The Supermicro Ultra series utilizes the AOC-U series onboard riser cards. These 25G and 100G modules are fully compatible with Supermicro and other comparable industry switch products.

[Table: Supermicro 25/100G modules, listing each card’s PCI-E 3.0 interface (x8 or x16) and port configuration (one to four SFP28 or QSFP28 connectors); per-card details appear below.]
Supermicro 25/100G Ethernet Modules

“With 2.5 times the bandwidth of 10G, less than half the cost of 40G, and incorporating Remote Direct Memory Access for low latency with backward compatibility with 10G switches, the industry leading 25GbE capability that Supermicro offers our customers provides the highest scalability and potential for future growth,” said Charles Liang, President and CEO of Supermicro. “We believe that 100G, having a clear upgrade path from 25G, is the natural next step in the evolution of modern high-performance converged data center server/storage deployments for our customers as they experience ever higher demands on their data center I/O infrastructures.”

Dual- and Single-port Modules supporting 100Gbps
AOC-SHFI-i1C Omni-Path Standard Card
Designed for HPC, this card uses an advanced “on-load” design that automatically scales fabric performance with rising server core counts, making these adapters ideal for today’s increasingly demanding workloads with 100Gbps link speed, single QSFP28 connector, PCI-E 3.0 x16 slot and standard low-profile form factor.

AOC-MHFI-i1C/M Onboard Omni-Path SIOM Card
Designed specifically for HPC utilizing the Intel OP HFI ASIC, this card offers 100Gbps link speeds for Supermicro servers that support the SIOM interface.

AOC-S100G-m2C Standard Card
This card offers dual-port QSFP28 connectivity in a low-profile, short-length standard form factor with a PCI-E 3.0 x16 slot. Utilizing the Mellanox ConnectX-4 EN chipset with features such as VXLAN and NVGRE, this card offers network flexibility, high bandwidth with specific hardware offload for I/O virtualization, and efficiently optimizes bandwidth demand from virtualized infrastructure in the data center or cloud deployments.

Quad-, Dual- and Single-port Modules supporting 25Gbps
AOC-S25G-b2S Standard Card
Based on the Broadcom BCM57414 chipset with features such as RDMA, NPAR, VXLAN and NVGRE, it is backward compatible with 10GbE network and the most cost effective upgrade from 10GbE to 25GbE in data center or cloud deployments.

AOC-S25G-m2S Standard Card
This is a dual-port 25GbE controller that can be used in any Supermicro server with a PCI-E 3.0 x8 expansion slot. Based on the Mellanox ConnectX-4 Lx EN chipset with features such as RDMA and RoCE, it is backward compatible with 10GbE networks and addresses bandwidth demand from virtualized infrastructure.

AOC-S25G-i2S Standard Card
This card is implemented with the Intel XXV710. It is fully compatible with existing 10GbE networking infrastructures but doubles the available bandwidth. The 25GbE bandwidth enables rapid networking deployment in an agile data center environment.

AOC-C25G-m1S MicroLP Card
This card is based on the Mellanox ConnectX-4 Lx EN controller. It is the solution for Supermicro high density MicroCloud and Twin series servers.

AOC-MH25G-m2S2T/M  Onboard SIOM Card
This is a proprietary SIOM (Supermicro I/O module) card based on Mellanox ConnectX-4 Lx EN and optimized for SuperServers with SIOM support. Optimized for Supermicro BigTwin, TwinPro, and SuperStorage products.

AOC-M25G-m4S/M Onboard SIOM Card
This is one of the most feature rich 25GbE controllers in the market. Based on the Mellanox ConnectX®-4 Lx EN, with 4-ports of 25GbE SFP28 connectivity in small form factor SIOM, it provides density, performance, and functionality. Optimized for Supermicro BigTwin, TwinPro, and SuperStorage products.

AOC-URN4-m2TS Onboard 1U Ultra Riser Card
Mellanox ConnectX-4 Lx EN, 2 ports, 2 SFP28, onboard 1U Ultra Riser

AOC-URN4-i2TS Onboard 1U Ultra Riser Card
Intel XXV710, 2 ports, 2 SFP28, onboard 1U Ultra Riser

AOC-2UR68-m2TS Onboard 2U Ultra Riser Card
Mellanox ConnectX-4 Lx EN, 2 ports, 2 SFP28, onboard 2U Ultra Riser

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro


Cray Signs Solutions Provider Agreement With Mark III Systems

Wed, 04/26/2017 - 01:46

SEATTLE and HOUSTON, April 26, 2017 — Global supercomputer leader Cray Inc. today announced the Company has signed a solutions provider agreement with Mark III Systems, Inc. to develop, market and sell solutions that leverage Cray’s portfolio of supercomputing and big data analytics systems.

Headquartered in Houston, Texas, Mark III Systems is a leading enterprise IT solutions provider focused on delivering IT infrastructure, software, services, cloud, digital, and cognitive solutions to a broad array of enterprise clients. The company’s BlueChasm digital development unit is focused on building and running open digital, cognitive, and AI platforms in partnership with enterprises, institutions, service providers, and software and cloud partners.

Mark III Systems can now combine the design, development, and engineering expertise of its BlueChasm team with the data-intensive computing capabilities of the Cray® XC™, Cray CS™, and Urika®-GX systems, and offer enterprise IT customers customized solutions across a wide range of commercial use cases.

“We’re very excited to be partnering with Cray to deliver unique platforms and data-driven solutions to our joint clients, especially around the key opportunities of data analytics, artificial intelligence, cognitive compute, and deep learning,” said Chris Bogan, Mark III’s director of business development and alliances.  “Combined with Mark III’s full stack approach of helping clients capitalize on the big data and digital transformation opportunities, we think that this partnership offers enterprises and organizations the ability to differentiate and win in the marketplace in the digital era.”

“Solution providers are a key part of Cray’s go-to-market strategy,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “We’re thrilled to be partnering with Mark III as they bring the expertise to develop and deliver differentiated solutions that leverage Cray’s supercomputing infrastructure and deliver superior value to our respective customers.”

For more information on Cray’s partner initiatives, please visit the Cray website at www.cray.com.

About Mark III Systems

Mark III Systems is a long-time, industry-leading IT solutions provider delivering IT infrastructure, software, services, cloud, digital, and cognitive solutions to enterprises, institutions, and service provider clients across North America.  With a diverse team of developers, DevOps engineers, enterprise architects, and systems engineers, Mark III’s areas of expertise include IT infrastructure, datacenter, HPC, data analytics, security, DevOps, IoT, AI, cognitive, and cloud.  Whether it be optimizing the performance and resiliency of an existing business-critical tech stack, or building a next-generation digital stack for data analytics, AI, IoT, or mobile use cases, Mark III’s “full stack” approach helps clients stand out and win in the era of digital transformation.  For more information, visit www.markiiisys.com.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.


MSST 2017 Announces Conference Themes, Keynote

Tue, 04/25/2017 - 15:00

April 25, 2017 — The 33rd International Conference on Massive Storage Systems and Technology (MSST 2017) will dedicate five days to computer-storage technology, including a day of tutorials, two days of invited papers, two days of peer-reviewed research papers, and a vendor exposition. The conference will be held on the beautiful campus of Santa Clara University, in the heart of Silicon Valley, May 15-19, 2017.

Kimberly Keeton, Hewlett Packard Enterprise, will keynote:

Data growth and data analytics requirements are outpacing the compute and storage technologies that have provided the foundation of processor-driven architectures for the last five decades. This divergence requires a deep rethinking of how we build systems, and points towards a memory-driven architecture, where memory is the key resource and everything else, including processing, revolves around it.

Memory-driven computing (MDC) brings together byte-addressable persistent memory, a fast memory fabric, task-specific processing, and a new software stack to address these data growth and analysis challenges. At Hewlett Packard Labs, we are exploring MDC hardware and software design through The Machine. This talk will review the trends that motivate MDC, illustrate how MDC benefits applications, provide highlights from our Machine-related work in data management and programming models, and outline challenges that MDC presents for the storage community.

Themes for the conference this year include:

  • Emerging Open Source Storage System Design for Hyperscale Computing
  • Leveraging Compression, Encryption, and Erasure Coding Chip Hardware Support to Construct Large Scale Storage Systems
  • The Limits of Open Source in Large-Scale Storage Systems Design
  • Building Extreme-Scale SQL and NoSQL Processing Environments
  • Storage Innovation in Large HPC Data Centers
  • How Large HPC Data Centers Can Leverage Public Cloud for Computing and Storage
  • Supporting Extreme-Scale Name Spaces with NAS Technology
  • Storage System Designs Leveraging Hardware Support
  • How Can Large Scale Storage Systems Support Containerization?
  • Trends in Non-Volatile Media

For registration and the full agenda visit the MSST 2017 website: http://storageconference.us

Source: MSST

The post MSST 2017 Announces Conference Themes, Keynote appeared first on HPCwire.

Cycle Computing Flies Into HTCondor Week

Tue, 04/25/2017 - 07:38

NEW YORK, April 25, 2017 — Cycle Computing today announced that it will address attendees at HTCondor Week 2017, to be held May 2-5 in Madison, Wisconsin. Cycle will also be sponsoring a reception for attendees, slated for Wednesday, May 3rd from 6:00 pm to 7:00 pm at the event in Madison.

Cycle’s Customer Operations Manager, Andy Howard, will present “Using Docker, HTCondor, and AWS for EDA model development” Thursday, May 4th at 1:30 pm. Andy’s session will detail how a Cycle Computing customer used HTCondor to manage Docker containers in AWS to increase productivity, throughput, and reduce overall time-to-results.
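Cycle has not published the customer's actual configuration, but the general pattern the talk describes — HTCondor managing Docker containers — can be sketched as an HTCondor "docker universe" submit description, assembled here as a Python string. The image name, executable, and resource requests are hypothetical placeholders, not Cycle Computing's real workflow:

```python
# Sketch of an HTCondor "docker universe" submit description, built as a
# plain string. Everything container-specific here is a made-up example.
submit_keys = {
    "universe": "docker",
    "docker_image": "example/eda-model:latest",  # hypothetical image
    "executable": "/usr/local/bin/run_model",    # hypothetical command
    "arguments": "--input design.v",
    "request_cpus": "4",
    "request_memory": "8GB",
    "output": "model.$(Process).out",
    "error": "model.$(Process).err",
    "log": "model.log",
}

submit_description = (
    "\n".join(f"{k} = {v}" for k, v in submit_keys.items()) + "\nqueue 10\n"
)
print(submit_description)
```

On a real pool this text would be handed to `condor_submit` (or the `htcondor` Python bindings), and `queue 10` fans the work out into ten container instances that HTCondor schedules across the AWS-backed execute nodes.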

HTCondor develops, implements, deploys, and evaluates mechanisms and policies that support High Throughput Computing (HTC). Guided by both the technological and sociological challenges of such a computing environment, the Center for High Throughput Computing at UW-Madison continues to build the open source HTCondor distributed computing software and related technologies to enable scientists and engineers to increase their computing throughput. An extension of that research is HTCondor Week, the annual conference for the HTCondor batch scheduler, featuring presentations from developers and users in academia and industry. The conference gives collaborators and users the chance to exchange ideas and experiences, to learn about the latest research, to experience live demos, and to influence HTCondor’s short and long term research and development directions.

“At Cycle we have a great deal of history and context for HTCondor. Even today, some of our largest customers are using HTCondor under the hood in their cloud environments,” said Jason Stowe, CEO, Cycle Computing. “Simply put, HTCondor is an important scheduler to us and to our customers. We’re happy to remain part of the HTCondor community and support it with our presentation and the reception.”

Cycle Computing’s CycleCloud orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. CycleCloud takes the delays, configuration, administration, and sunk hardware costs out of HPC clusters. CycleCloud easily leverages multi-cloud environments, moving seamlessly between internal clusters, Amazon Web Services, Google Cloud Platform, Microsoft Azure and other cloud environments.

More information about the CycleCloud cloud management software suite can be found at www.cyclecomputing.com.


Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users running multiple applications across multiple clouds, enabling users to never wait for compute and to solve problems at any scale. Since 2005, Cycle Computing software has empowered customers – among them Global 2000 manufacturers, Big 10 life insurance companies, Big 10 pharmaceutical firms, Big 10 hedge funds, startups, and government agencies – to leverage hundreds of millions of hours of cloud-based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com

Source: Cycle Computing

The post Cycle Computing Flies Into HTCondor Week appeared first on HPCwire.

IBM, Nvidia, Stone Ridge Claim Gas & Oil Simulation Record

Tue, 04/25/2017 - 06:30

IBM, Nvidia, and Stone Ridge Technology today reported setting the performance record for a “billion cell” oil and gas reservoir simulation. Using IBM Minsky servers with Nvidia P100 GPUs and Stone Ridge’s ECHELON petroleum reservoir simulation software, the trio say their effort “shatters previous (Exxon) results using one-tenth the power and 1/100th of the space. The results were achieved in 92 minutes with 60 Power processors and 120 GPU accelerators and broke the previous published record (Aramco) of 20 hours using thousands of processors.”
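Taking the reported figures at face value, the runtime gain alone works out to roughly 13x; the headline claims fold in the power and space savings on top of that:

```python
# Back-of-the-envelope check using only the figures quoted above.
previous_runtime_min = 20 * 60    # prior published record: 20 hours
new_runtime_min = 92              # IBM / Nvidia / Stone Ridge result

speedup = previous_runtime_min / new_runtime_min
print(f"runtime speedup: {speedup:.1f}x")
```

The "one-tenth the power and 1/100th of the space" claims come on top of this raw runtime comparison.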

The “billion cell” simulation represents a heady challenge typically tackled with supercomputer-class HPC infrastructure. The Minsky, of course, is the top of IBM’s Power server line and leverages Nvidia’s fastest GPU and NVLink interconnect. This simulation used 60 processors and 120 accelerators. IBM owned the systems – each Minsky had two Power8 CPUs with 256GB of memory, four Nvidia P100 GPUs, and InfiniBand EDR.

Reservoir simulation
Source: Stone Ridge

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously so they can produce oil more efficiently, open up fewer new fields and make responsible use of limited resources,” said Vincent Natoli, president of Stone Ridge Technology, in the official announcement. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community.”

According to the collaborators, the data set was taken from public information and used to mimic large oil fields like those found in the Middle East. Key code optimizations included taking advantage of the CPU-GPU NVLink and GPU-GPU NVLink in the Power systems, as well as scaling the software across tens of Minsky systems in an HPC cluster.

The new solution, say the collaborators, is intended “to transform the price and performance for business critical High Performance Computing (HPC) applications for simulation and exploration.” The performance is impressive but not overly cheap. IBM estimates the cost of the 30 Minsky systems in the range of $1.5 million to $2 million. ECHELON is a standard Stone Ridge product and IBM and Stone Ridge plan to jointly sell the new solution into the oil and gas market.

Sumit Gupta, IBM

Sumit Gupta, IBM vice president, High Performance Computing & Analytics, said, “The bottom line is that by running ECHELON on Minsky, users can achieve faster run-times using a fraction of the hardware. One recent effort used more than 700,000 processors in a server installation that occupies nearly half a football field. Stone Ridge did this calculation on two racks of IBM machines that could fit in the space of half a ping-pong table.”  

IBM has been steadily ratcheting up efforts to showcase its Power systems – including Minsky – as it tries to wrestle market share in an x86 dominated landscape. Last month, the company spotlighted another Power8-based system – VOLTRON at Baylor College – which researchers used to assemble the 1.2 billion letter genome of the mosquito that carries the West Nile virus.

IBM and its collaborators argue “this latest advance” challenges misconceptions that GPUs can’t be efficient on complex application codes such as reservoir simulators and are better suited to simple, more naturally parallel applications such as seismic imaging.

They do note, “Billion cell models in the industry are rare in practice, but the calculation was accomplished to highlight the growing disparity in performance between new fully GPU based codes like ECHELON and equivalent legacy CPU codes. ECHELON scales from the cluster to the workstation and while it can turn over a billion cells on 30 servers, it can also run smaller models on a single server or even on a single Nvidia P100 board in a desktop workstation, the latter two use cases being more in the sweet spot for the industry.”

The post IBM, Nvidia, Stone Ridge Claim Gas & Oil Simulation Record appeared first on HPCwire.

ASC17 Championship to Challenge Front-end Science

Tue, 04/25/2017 - 01:01

Train an AI, challenge a Gordon Bell Prize application, optimize the latest third-generation sequencing assembly tool, and attempt to revitalize traditional scientific computing software on a cutting-edge many-core computing platform. All of these sound like tasks for a team of top engineers, but the truth is that they are the challenges that groups of university students, with an average age of 20, must overcome in the finals of the 2017 ASC Student Supercomputer Challenge (ASC17). The finals are scheduled to be held at the National Supercomputing Center in Wuxi, China, from April 24 to 28, where 20 teams from around the world will compete to be crowned champion.

In the ASC17 finals, the competitors have to use the PaddlePaddle framework to accurately predict the traffic situation in a city for a particular day in the future. This requires each team to design and build an intelligent “brain” on their own, and then employ high-intensity training to coach this “brain” to come up with the results. They also need to ensure that the training is efficient and that the trained “brains” will have a high recognition accuracy.

MASNUM is the third-generation oceanic wave numerical model developed in China and was nominated for the Gordon Bell Prize. To match this top-tier application, the participants will perform their calculations in the finals on the world’s fastest supercomputer, Sunway TaihuLight, as they attempt to extend the software’s parallel calculations to 10,000 computing cores or more.

Currently, each run of a third-generation gene sequencer can generate as many as hundreds of thousands of gene fragments. Once the sequencing is completed, a more critical challenge emerges: scientists must assemble millions of gene fragments into a complete and correct genome and chromosome sequence. The finalists at ASC17 will attempt to optimize Falcon, a third-generation gene sequencing assembly tool, and the results will help advance research in human genetics and even the origin of life.
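Falcon uses a far more sophisticated overlap-layout-consensus pipeline, but the underlying idea of stitching fragments back together by their overlaps can be shown with a toy greedy assembler. The reads and "genome" below are invented:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    olen = overlap(reads[i], reads[j])
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:
            break  # no remaining overlaps to merge
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads)
                 if k not in (best_i, best_j)] + [merged]
    return reads

# Invented reads covering the toy "genome" ATGCCGTAAACTGG
print(greedy_assemble(["ATGCCGTA", "CGTAAACT", "AACTGG"]))
```

Real assemblers must also handle sequencing errors, repeats, and millions of reads, which is why the all-pairs overlap step above — quadratic in the number of reads — is exactly the kind of hotspot the students are asked to optimize.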

LAMMPS is the abbreviation for Large-scale Atomic/Molecular Massively Parallel Simulator, and is the most widely used molecular dynamics simulation software worldwide. It is the key software for research in many cutting-edge disciplines including chemistry, materials, and molecular biology. The challenge for ASC17 finalists is to port this very mature software to the latest “Knights Landing” architecture platform, and to improve the operational efficiency of this software.
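LAMMPS itself is a large C++ code, and the porting work centers on vectorizing and threading its force and integration loops for Knights Landing. The core of any molecular dynamics code is a timestep update like this minimal velocity-Verlet sketch for a single particle in a harmonic well (an illustration of the method, not LAMMPS source):

```python
# Velocity-Verlet integration of one particle in a harmonic potential
# U(x) = 0.5*k*x^2 -- the kind of per-timestep update an MD code such
# as LAMMPS performs for millions of atoms in parallel.
k, m, dt = 1.0, 1.0, 0.01
x, v = 1.0, 0.0           # initial position and velocity

def force(x):
    return -k * x

f = force(x)
for _ in range(10_000):   # integrate for 100 time units
    x += v * dt + 0.5 * (f / m) * dt * dt
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt
    f = f_new

energy = 0.5 * m * v * v + 0.5 * k * x * x
print(f"x={x:.3f}, v={v:.3f}, energy={energy:.6f}")  # energy stays near 0.5
```

The scheme's appeal for MD is that it conserves energy well over long runs; the optimization challenge on KNL is making the force evaluation (here a one-liner, in LAMMPS the dominant cost) exploit wide vector units and many threads.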

In addition, the teams in the ASC17 finals are required by the organizing committee to use supercomputing nodes from Inspur to design and build their own supercomputer within a 3,000-watt power budget, and to optimize HPL, HPCG and one mystery application. Each team must also deliver a presentation in English.

The ASC Student Supercomputer Challenge was initiated by China and is supported by experts and institutions worldwide. The competition aims to be a platform that promotes exchanges among, and grooms, young supercomputing talent from different countries and regions. It also aims to be a key driving force in promoting technological and industrial innovation by raising the standards of supercomputing applications and research. The ASC Challenge has been held for six years; this year, ASC17 is co-organized by Zhengzhou University, the National Supercomputing Center in Wuxi, and Inspur, with 230 teams from all over the world having taken part in the competition.

The post ASC17 Championship to Challenge Front-end Science appeared first on HPCwire.

New Mexico Students Showcase Projects at 27th Supercomputing Challenge

Tue, 04/25/2017 - 00:00

LOS ALAMOS, N.M., April 24, 2017 — More than 200 New Mexico students and teachers from 55 different teams will come together April 24-25 at the Jewish Community Center in Albuquerque to showcase their computing research projects at the 27th annual New Mexico Supercomputing Challenge expo and awards ceremony.

“It is encouraging to see the excitement generated by the participants and the great support provided by all the volunteers involved in the Supercomputing Challenge,” said David Kratzer of the Laboratory’s High Performance Computing Division, the Los Alamos coordinator of the Supercomputing Challenge.

The Supercomputing Challenge is project-based learning geared to teaching a wide range of skills: research, writing, teamwork, time management, oral presentations and computer programming. Any New Mexico elementary-school, middle-school or high-school student is eligible to enter the Supercomputing Challenge. A full list of this year’s submitted reports is here.

After the students present their projects, they will visit exhibits and demonstrations by several Sandia Laboratories scientists, faculty from New Mexico universities and others. They will also travel to nearby technology companies to learn about some of their state-of-the-art activities. In addition, the Supercomputing Challenge rented the Nuclear Museum of Science and History for the students to visit, a trip sponsored by Lockheed Martin.

Kratzer said the challenge provides a pipeline of potential future employees for the Laboratory.

Fifteen Los Alamos National Laboratory employees, 30 Sandia National Laboratories employees and another 45 individuals from universities and businesses have volunteered to work on the year-end activities. The Los Alamos researchers will serve as finalist, expo and scholarship judges at this year’s challenge.

Sponsorships and awards

Eastern New Mexico University, the New Mexico Institute of Mining and Technology, New Mexico State University and the University of New Mexico, along with Cray Inc., have come together to give away $10,000 in scholarships to graduating high school seniors.

Other sponsors include the New Mexico Technology Council, the Los Alamos National Laboratory Foundation and the Albuquerque Journal. A full list of sponsors is here.

More information about the New Mexico Supercomputing Challenge is on the Supercomputing Challenge web page.

About the Supercomputing Challenge

The New Mexico High School Supercomputing Challenge was conceived in 1990 by former Los Alamos National Laboratory Director Sig Hecker and Tom Thornhill, president of New Mexico Technet Inc., a nonprofit company that set up a computer network in 1985 to link the state’s national laboratories, universities, state government and some private companies.

The post New Mexico Students Showcase Projects at 27th Supercomputing Challenge appeared first on HPCwire.

PEARC17 Announces Keynote Speakers

Mon, 04/24/2017 - 22:41

April 24, 2017 — The organizers of PEARC17 (Practice & Experience in Advanced Research Computing) today announced the keynote speakers for the conference in New Orleans, July 9–13, 2017.

The PEARC17 keynote on Tuesday, July 11 will be presented by Paula Stephan, professor of economics, Georgia State University and a research associate, National Bureau of Economic Research. Stephan’s talk, “How Economics Shapes Science,” will focus on the effects of incentives and costs on U.S. campuses.

Paul Morin, founder and director of the Polar Geospatial Center, an NSF science and logistics support center at the University of Minnesota, will present a keynote session on Wednesday, July 12, titled “Mapping the Poles with Petascale.” This is the compelling story of a small NSF-funded team from academia joining with the National Geospatial-Intelligence Agency and Blue Waters to create the largest ever topographic mapping project.

PEARC17’s inaugural conference will address the challenges of using and operating advanced research computing within academic and open science communities. Bringing together the high-performance computing and advanced digital research communities, this year’s theme—Sustainability, Success and Impact—reflects key objectives for those who manage, develop, and use advanced research computing throughout the nation and the world.

About the Speakers:

Paula Stephan is a Fellow of the American Association for the Advancement of Science and a member of the Board of Reviewing Editors, Science. Science Careers named Stephan its first “Person of the Year” in December 2012. Stephan has published numerous articles in such journals as The American Economic Review, The Journal of Economic Literature, Management Science, Nature, Organization Science, Research Policy and Science. Her book, How Economics Shapes Science, was published by Harvard University Press. Her research has been supported by the Alfred P. Sloan Foundation, the Andrew W. Mellon Foundation, and the National Science Foundation. Stephan serves on the National Academies Committee on the Next Generation of Researchers Initiative and the Research Council of The State University of New York (SUNY) System. See Stephan’s full bio at https://www.pearc.org/keynote-speakers.

Paul Morin is Founder and Director of the Polar Geospatial Center, an NSF science and logistics support center at the University of Minnesota. Morin leads a team of two dozen responsible for imaging, mapping, and monitoring the Earth’s polar regions for the National Science Foundation’s Division of Polar Programs. Morin is the liaison between the National Science Foundation and the National Geospatial-Intelligence Agency’s commercial imagery program. Before founding PGC, Morin was at the National Center for Earth-Surface Dynamics at the University of Minnesota, and he has worked at the University of Minnesota since 1987. Morin serves as the National Academy of Sciences-appointed U.S. representative to the Standing Committee on Antarctic Geographic Information under the Scientific Committee for Antarctic Research (i.e., the Antarctic Treaty System). One of his current projects is ArcticDEM, a White House initiative to produce a high-resolution, time-dependent elevation model of the Arctic using Blue Waters. See Morin’s full bio at https://www.pearc.org/keynote-speakers.

Source: PEARC

The post PEARC17 Announces Keynote Speakers appeared first on HPCwire.

ASC17 Makes Splash at Wuxi Supercomputing Center

Mon, 04/24/2017 - 20:13

A record-breaking twenty student teams, plus scores of company representatives, media professionals, staff and student volunteers, transformed a formerly empty hall inside the Wuxi Supercomputing Center into a bustling hub of HPC activity, kicking off day one of the 2017 Asia Student Supercomputer Challenge (ASC17).

As the sun rose higher in the sky over nearby Taihu Lake, with the world’s fastest supercomputer, TaihuLight, in close proximity, the 100-some students focused intently on their task: unboxing their shiny new hardware and building their clusters.

From an initial pool of 220 teams representing more than one thousand students from schools around the globe, these 20 teams earned their spots in the final round. Among them are former champions, such as Huazhong University of Science and Technology, Shanghai Jiao Tong University, and “triple crown” winner Tsinghua University, but for seven of the teams, ASC17 marks their first time as competition finalists. Contest officials are particularly proud of the event’s reach in cultivating young talent.

In the six years since its inception, ASC has developed into the largest student supercomputing competition and is also one with the highest award levels. During the four days of the competition, the 20 teams at ASC17 will race to conduct real-world benchmarking and science workloads as they vie for a total of six prizes worth nearly $35,000.

Inspur provides the teams with a rack and NF5280M4 servers, each outfitted with two Intel Xeon E5-2680v4 (2.4GHz, 14 cores) CPUs. The primary event sponsor also supplies DDR4 memory, SATA storage, Mellanox InfiniBand networking (card, switch and cables), as well as an Ethernet switch and cables.

Eight Nvidia P100 boxes

Teams can substitute or add other componentry (except the servers) at their own expense or through sponsorship opportunities. Most of the teams we spoke with were able to forge a relationship with Nvidia, whose GPU gear is now widely used at all three major cluster challenges (at SC, ISC and ASC). We saw mostly P100 cards getting snapped into server trays this morning, but at least two teams had acquired K40 parts in the hope that their energy profile would make it easier to stay within the 3,000-watt contest power threshold. The most common configuration placed eight P100 GPUs across four nodes, but on everyone’s mind was how much of the available compute power they would be able to leverage without exceeding the power threshold.
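Exact draw is workload-dependent, but a rough budget built from nominal board TDPs shows why the cap forces trade-offs. All figures below are assumptions (roughly 250 W per P100 PCIe board, 235 W per K40, 120 W per Xeon E5-2680v4, and a notional 150 W per node for memory, fans and networking), not contest measurements:

```python
def cluster_watts(nodes, gpus_per_node, gpu_tdp,
                  cpu_tdp=120, cpus_per_node=2, node_overhead=150):
    """Rough peak-draw estimate from nominal TDPs (all figures assumed)."""
    per_node = cpus_per_node * cpu_tdp + gpus_per_node * gpu_tdp + node_overhead
    return nodes * per_node

p100_config = cluster_watts(nodes=4, gpus_per_node=2, gpu_tdp=250)  # 8x P100
k40_config = cluster_watts(nodes=4, gpus_per_node=2, gpu_tdp=235)   # 8x K40
print(f"8x P100 estimate: {p100_config} W")
print(f"8x K40  estimate: {k40_config} W")
print(f"over the 3,000 W cap by: {p100_config - 3000} W at full tilt")
```

Under these assumptions both eight-GPU layouts would exceed 3,000 W at peak draw, which is exactly why teams spend the build days tuning clocks and deciding how much of the hardware to actually light up during each benchmark.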

Days one and two of the competition are devoted to cluster building and testing. The on-site clusters are used for an application set that includes the High Performance Linpack (HPL), the High Performance Conjugate Gradient (HPCG), the mystery application (to be announced Wednesday), the genome analysis code Falcon and a traffic prediction problem to be solved with the Baidu deep learning framework, PaddlePaddle. The teams report different levels of experience with PaddlePaddle and with scaling to multiple GPUs, a skill that will be critical for achieving optimum performance.

Two other platforms will be used in the competition: the homegrown TaihuLight and a Xeon Phi Knights Landing (KNL) machine. Students will use TaihuLight to run and optimize MASNUM, the China-developed numerical wave modeler; the Inspur NF6248 KNL server (there’s a 20-node rack of these inside the contest hall) will be used for the molecular dynamics simulator LAMMPS. There is no 3,000 watt power limit for these workloads. Teams can receive a total of 100 points: 90 points for performance optimizations and 10 points for the presentation that they deliver to the judges after the conclusion of the testing.

One of the most exciting parts of this year’s competition is the inclusion of the Sunway TaihuLight machine, which teams have had access to since March. Each team will be allowed to use at most 64 SW CPUs with 256 CGs. According to the rules: “Every team is allowed to design and implement proper parallel algorithm optimization and many-core optimization for the MASNUM source code. Each team needs to pass the correctness checking of each workload, and the goal is to achieve the shortest runtime of each workload.”

All in on AI

The addition of the PaddlePaddle framework continues the contest’s focus on AI and deep learning, which began last year with the incorporation of a deep neural network program under the e-Prize category.

Wang Endong, founder of the ASC challenge, academician of the Chinese Academy of Engineering and chief scientist at Inspur, believes that with the convergence of HPC, big data and cloud computing, intelligent computing as represented by artificial intelligence will become the most important and significant component for the coming computing industry, bringing new challenges in computing technologies.

The AI thread has also been woven into the HPC Connection Workshop, which will be held at the Wuxi Supercomputing Center on Thursday. The theme for the 15th HPC Connection Workshop is machine intelligence and supercomputing. The impressive lineup of speakers includes Jack Dongarra (ASC Advisory Committee Chair, University of Tennessee, Oak Ridge National Laboratory), Depei Qian (professor, Beihang University, Sun Yat-sen University; director of the Key Project on HPC, National High-Tech R&D program); Simon See (chief solution architect, Nvidia AI Technology Center and Solution Architecture and Engineering), and Haohuan Fu (deputy director, National Supercomputing Center in Wuxi, Associate Professor, Tsinghua University).

The awards ceremony will be held Friday afternoon.

The 20 ASC17 Teams (asterisk indicates first-time finalist):

Tsinghua University

Beihang University

Sun Yat-sen University

Shanghai Jiao Tong University

Hong Kong Baptist University

Southeast University*

Northwestern Polytechnical University

Taiyuan University of Technology

Dalian University of Technology

The PLA Information Engineering University*

Ocean University of China*

Weifang University*

University of Erlangen-Nuremberg*

National Tsing Hua University

Saint Petersburg State University

Ural Federal University

University of Miskolc

University of Warsaw*

Huazhong University of Science and Technology

Zhengzhou University*

Taihu Lake, Wuxi, China

The post ASC17 Makes Splash at Wuxi Supercomputing Center appeared first on HPCwire.

Groq This: New AI Chips to Give GPUs a Run for Deep Learning Money

Mon, 04/24/2017 - 20:00

CPUs and GPUs, move over. Thanks to recent revelations surrounding Google’s new Tensor Processing Unit (TPU), the computing world appears to be on the cusp of a new generation of chips designed specifically for deep learning workloads.

Google has been using its TPUs for the inference stage of a deep neural network since 2015. It credits the TPU with helping to bolster the effectiveness of various artificial intelligence workloads, including language translation and image recognition programs. It also says the TPU helped power its widely reported victory in the game of Go.

While TPUs aren’t new to Google data centers, the company started talking about them publicly only recently. Earlier this month, the Alphabet subsidiary opened up about the TPU, which it called “our first machine learning chip,” in a blog post. The company also released a technical paper, titled “In-Datacenter Performance Analysis of a Tensor Processing Unit​,” that details the design and performance characteristics of the TPU.

According to the paper, Google’s TPU was 15 to 30 times faster at inference than Nvidia’s K80 GPU and Intel Haswell CPU in a Google benchmark test. On a performance per watt scale, the TPUs are 30 to 80 times more efficient than the CPU and GPU (with the caveat that these are older designs). You can read more details on the TPU comparisons here.

While Google has been mum on possible commercial ventures around the TPU, some recent developments indicate that Google itself may not be aiming to compete directly with traditional chip manufacturers. Last week CNBC reported that a group of the original Google engineers who designed the TPU recently left the Web giant to found their own company, called Groq.

Google’s TPU chip (Source: Google)

According to an SEC document filed for Groq’s incorporation, the company has raised about $10 million. Leading the way is Chamath Palihapitiya, a prominent Silicon Valley venture capitalist. Other ex-Googlers named in the SEC document include Jonathan Ross, who helped invent the TPU, and Douglas Wightman, who worked on the Google X “moonshot factory.”

But that’s not all. “We have eight of the 10 original people that built that chip building the next generation chip now,” Palihapitiya said in a March interview with CNBC. Groq is playing its cards close to the vest and isn’t disclosing exactly what it’s working on, although by all indications it would appear to have something to do with machine learning chips.

There are many other groups chasing this new market opportunity, including traditional chip bigwigs Intel and IBM.

While Big Blue pushes a combination of its RISC Power chips and Nvidia GPUs in its Minsky AI server, its research arm is exploring other chip architectures. Most recently, the company’s Almaden Lab has discussed the capabilities of its “brain-inspired” TrueNorth chip, which features 1 million neurons and 256 million synapses. IBM says TrueNorth has delivered “deep networks that approach state-of-the-art classification accuracy” on several vision and speech datasets.

“The goal of brain-inspired computing is to deliver a scalable neural network substrate while approaching fundamental limits of time, space, and energy,” IBM Fellow Dharmendra Modha, chief scientist of Brain-inspired Computing at IBM Research, said in a blog post.

Intel isn’t standing still, and is developing its own chip architectures for next-generation AI workloads. Last year the company announced that its first AI-specific hardware, code-named “Lake Crest,” which is based on technology Intel gained in its $400-million acquisition of Nervana Systems, would debut in the first half of 2017. That is to be followed later this year by Knights Mill, the next iteration of its Xeon Phi co-processor architecture.

IBM’s TrueNorth training set (image source: IBM Research)

For its part, Nvidia will be looking to solidify its hold on the emerging machine learning market. While energy-hungry GPUs aren’t as efficient on the inference side of the equation, they’re tough to be beat for the compute-intensive training of neural networks, which is why Web giants like Google, Facebook, Microsoft and others are using so many of them for AI workloads.

However, Nvidia isn’t giving up on the inference side of the market, and recently published a benchmark showing how much better its latest Pascal GPU architecture, most notably the P40, is at inference than its older Kepler GPU architecture (see HPCwire’s coverage here). The P40 also out-performed the Google TPU, although Google has probably advanced its TPU since 2015, when it calculated the benchmark figures it recently shared. Nvidia’s recent hiring of Clément Farabet (formerly of Twitter) could also portend a shift toward more real-time workloads.

Qualcomm could also be involved in the inference side of the equation. The mobile chipmaker has been working with Yann LeCun, Facebook’s Director of AI Research, to develop new chips for real-time inference, according to this Wired story. LeCun developed one of the first AI-specific chips for inference more than 25 years ago while working at Bell Labs.

The San Diego company recently announced plans to spend $47 billion to buy NXP, a Dutch company that makes chips for cars. NXP was working on deep learning and computer vision problems before the acquisition was announced, and it appears that Qualcomm will be looking to NXP to give it an edge in developing systems for autonomous driving.

Self-driving cars are one of the most prominent areas where deep learning and AI will have an impact. Beyond that, there are many other places where an on-board AI chip that reacts to real-world conditions will be valuable, including mobile phones and virtual reality headsets. The technology is moving very quickly at the moment, and we’ll soon see other practical uses that will impact our lives.

This article first appeared on our sister site, Datanami.

The post Groq This: New AI Chips to Give GPUs a Run for Deep Learning Money appeared first on HPCwire.

NCSA Director Named U of I VP for Economic Development and Innovation

Mon, 04/24/2017 - 15:10

URBANA, Ill., April 24, 2017 — NCSA Director Edward Seidel has been named vice president for economic development and innovation for the University of Illinois System, pending Board of Trustees approval, President Tim Killeen announced Monday. Seidel has served since August as interim vice president for research, a position that has been restructured and retitled to reflect the U of I System’s focus on fostering innovation to help drive the state’s economy through research and discovery.

Killeen said Seidel’s leadership over the last eight months has helped advance several new initiatives, such as working with executives of leading Illinois companies to develop collaborative research projects that will serve their businesses and lift the state’s economy. A longtime administrator and award-winning researcher, Seidel will lead an office that works with the System’s three universities to help harness their nearly $1 billion per year sponsored-research portfolio for technology commercialization and economic development activities.

“Ed’s personal experience with leading-edge research and with federal and international agencies – combined with his deep understanding of the U of I System’s capabilities and aspirations – has given him a rock-solid foundation for success,” Killeen said. “He’s off to a flying start.”

Seidel has served as director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign since 2013. He retained the title of NCSA director while serving as interim vice president, with Dr. William “Bill” Gropp taking on the role of acting director. Gropp, the Thomas M. Siebel Chair in Computer Science and director of the Parallel Computing Institute in the Coordinated Science Laboratory, will continue to serve as interim director until a permanent NCSA director is named.

“NCSA congratulates Vice President Seidel on this well-earned appointment,” Gropp said. “It has been an honor co-leading and planning a vibrant and innovative future for NCSA. As interim director, I am looking forward to continuing to work with Ed, in his new role, as we advance new opportunities for the University of Illinois and NCSA.”

Seidel’s appointment as director three years ago marked a return to NCSA, where he led the center’s numerical relativity group from 1991-96. He was among the original co-principal investigators for Blue Waters, a federally funded project that brought one of the world’s most powerful supercomputers to Urbana-Champaign, and is a Founder Professor in the Department of Physics and a professor in the Department of Astronomy at Illinois.

“It has been an honor leading NCSA during this exciting period,” said Seidel. “I am proud of what the center’s team has done to keep NCSA in a prominent national leadership position with projects like Blue Waters, XSEDE, LSST, the Midwest Big Data Hub, the National Data Service, and many others. I am also pleased to have helped NCSA move in directions that better leverage the great strengths of the university, in creating the world’s most advanced integrated cyberinfrastructure environment, in making it a home for transdisciplinary research and education programs at Illinois, and in enhancing NCSA’s industry program. As I take on new challenges with the U of I system, I look forward to continuing as a member of NCSA’s faculty, and to working with Bill as he and the team take NCSA to new heights in the future.”

About the National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

The post NCSA Director Named U of I VP for Economic Development and Innovation appeared first on HPCwire.

IARPA Launches QEO Program to Develop Quantum Enhanced Computers

Mon, 04/24/2017 - 11:55

WASHINGTON, D.C., April 24, 2017 — The Intelligence Advanced Research Projects Activity, within the Office of the Director of National Intelligence (ODNI), announced today that it has embarked on a multi-year research effort to develop special-purpose algorithms and hardware that harness quantum effects to surpass conventional computing. Practical applications include more rapid training of machine learning algorithms, circuit fault diagnostics on larger circuits than possible today, and faster optimal scheduling of multiple machines on multiple tasks. If successful, technology developed under the Quantum Enhanced Optimization—“QEO”—program will provide a plausible path to performance beyond what is possible with today’s computers.

“The goal of the QEO program is a design for quantum annealers that provides a 10,000-fold increase in speed on hard optimization problems, which improves at larger and larger problem sizes when compared to conventional computing methods,” said Dr. Karl Roenigk, QEO program manager at IARPA.

Through a competitive Broad Agency Announcement process, IARPA has awarded a research contract in support of the QEO program to an international team led by the University of Southern California. Subcontractors include the California Institute of Technology, Harvard University, Massachusetts Institute of Technology, University of California at Berkeley, University College London, Saarland University, University of Waterloo, Tokyo Institute of Technology, Lockheed Martin, and Northrop Grumman. Other participants providing validation include NASA Ames Research Center and Texas A&M. Participants providing government-furnished hardware and test bed capabilities include MIT Lincoln Laboratory and MIT.

For any questions, please contact us at dni-iarpa-baa-15-13@iarpa.gov


IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community. Additional information on IARPA and its research may be found at https://www.iarpa.gov.

Source: ODNI

The post IARPA Launches QEO Program to Develop Quantum Enhanced Computers appeared first on HPCwire.

ALCF Seeks Proposals to Advance Big Data Problems in Big Science

Mon, 04/24/2017 - 10:30

Argonne, Ill., April 24, 2017 — The Argonne Leadership Computing Facility Data Science Program (ADSP) is now accepting proposals for projects hoping to gain insight into very large datasets produced by experimental, simulation, or observational methods. The larger the data, in fact, the better.

From April 24 to June 15, ADSP’s open call provides an opportunity for researchers to make transformational advances in data science and software technology through allocations of computer time and supporting resources at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy Office of Science User Facility.

The ADSP, now in its second year, is the first program of its kind in the nation, and targets “big data” science problems that require the scale and performance of leadership computing resources, such as ALCF’s two petascale supercomputers: Mira, an IBM Blue Gene/Q, and Theta, an Intel/Cray system that came online earlier this year.

Data—the raw, voluminous bits and bytes that pour out of today’s large-scale experiments—are the proverbial haystacks to the science community’s needles. Data analysis is the art (of sorts) of sorting and making sense of the output of supercomputers, telescopes, particle accelerators, and other big instruments of scientific discovery.

ADSP projects will focus on employing leadership-class systems and infrastructure to explore, prove, and improve a wide range of data science techniques. These techniques include uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows.

The winning proposals will be awarded time on ALCF resources and will receive support and training from dedicated ALCF staff. Applications undergo a review process to evaluate potential impact, data scale readiness, diversity of science domains and algorithms, and other criteria. This year, there will be an emphasis on identifying projects that can use the architectural features of Theta in particular, as future ADSP projects will eventually transition to Aurora, ALCF’s 200-petaflops Intel/Cray system expected to arrive late next year.

To submit an application or for additional details about the proposal requirements, visit http://www.alcf.anl.gov/alcf-data-science-program. Proposals will be accepted until the call deadline of 5 p.m. CDT on Thursday, June 15, 2017. Awards will be announced in September and commence October 1, 2017.

About Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

About the U.S. Department of Energy’s Office of Science

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: Argonne National Laboratory

The post ALCF Seeks Proposals to Advance Big Data Problems in Big Science appeared first on HPCwire.

Rescale Adds CST STUDIO SUITE to Its ScaleX Cloud Platform for HPC

Mon, 04/24/2017 - 08:57

SAN FRANCISCO, April 24, 2017 — Rescale is pleased to announce a partnership with Computer Simulation Technology (CST), part of SIMULIA, a Dassault Systèmes brand, that will allow engineers and scientists running simulations in CST STUDIO SUITE to easily access the world’s largest high-performance computing (HPC) network via Rescale’s ScaleX platform.

CST STUDIO SUITE is a best-in-class software package for electromagnetic simulation. Customers often demand high-performance IT resources for large system-level simulations. Reducing run-times, particularly for multi-parameter optimization, can improve design throughput and the critical time-to-market for a product. Such IT resources are traditionally on-premise, but they can incur large start-up and maintenance costs and can become outdated within three years as new technology arrives. Rescale offers an alternative: a scalable, secure, turnkey, cloud-based platform that now allows CST STUDIO SUITE to run on its worldwide network of high-performance computers, including the most state-of-the-art hardware available.

Under the new partnership, CST customers can bring their own licenses, and CST STUDIO SUITE will be available pre-configured on Rescale’s ScaleX platform. By accessing Rescale’s ScaleX platform through any browser, CST STUDIO SUITE users can run sophisticated engineering simulations on Rescale’s global multi-cloud HPC network of over 60 data centers in 30 plus locations worldwide. Demanding users can scale out to thousands of cores and choose hardware configurations optimized to the requirements of CST STUDIO SUITE’s complete technology portfolio, with options ranging from economical HPC configurations to cutting-edge bare metal systems, low-latency InfiniBand interconnect, and the latest Intel and NVIDIA GPU chipsets.

With Rescale’s ScaleX platform, enterprises can leverage built-in administration and collaboration tools to build teams, manage resources, and share jobs with team members. Additionally, enterprise administrators can take advantage of best-in-class security features such as multi-factor authentication, single sign-on, and IP access controls, on a platform that meets the highest security standards, including ISO 27001 and 27017, SOC 2 Type 2, ITAR, and HIPAA.

“We are very excited to be partnering with CST, as a new part of the SIMULIA brand of Dassault Systèmes,” said Joris Poort, CEO at Rescale. “We believe that CST STUDIO SUITE users will benefit from the fast, flexible, secure, and huge on-demand resources that Rescale can bring to computationally-demanding tools, such as electromagnetic simulation.”

Dr. Martin Timm, Director Global Marketing at CST added, “CST STUDIO SUITE provides comprehensive, advanced solving engines based on various numerical methods for world-class electromagnetic simulation. These engines run optimally on various types of hardware, and we believe that making them available on Rescale’s ScaleX platform will allow our customers access to the best possible performance across the whole suite of tools.”

Rescale is sponsoring the CST European User Conference 2017 in Darmstadt, Germany this week on April 27-28, 2017. Attend Rescale’s presentation or booth to discuss the advantages of running CST STUDIO SUITE on the cloud with Rescale.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale

The post Rescale Adds CST STUDIO SUITE to Its ScaleX Cloud Platform for HPC appeared first on HPCwire.

Lenovo Drives into Software Defined Datacenter with DSS-G Storage Solution

Mon, 04/24/2017 - 08:52

ORLANDO, Fla. April 24, 2017 — Lenovo (SEHK:0992) (Pink Sheets:LNVGY) today announced, at its annual Accelerate Partner Forum, the Lenovo Distributed Storage Solution for IBM Spectrum Scale (DSS-G)— a scalable software-defined storage (SDS) solution. Designed to support dense scalable file and object storage suitable for high-performance and data-intensive environments, the Lenovo DSS-G enables customers to manage the exponential rate of data growth and the subsequent need to store large amounts of both structured and unstructured data.

Today, deploying storage solutions for HPC, artificial intelligence (AI), analytics, and cloud environments, the key technology trends that are dramatically reshaping the data center, places a significant burden on IT resources. DSS-G is Lenovo’s latest offering intended to accelerate adoption of software-defined data center technology, which provides customers with key benefits such as greater infrastructure simplicity, enhanced performance, and lower total cost of ownership.

This announcement is a first step in executing Lenovo’s HPC and AI commitment to bringing the benefits of SDS to HPC clusters, and will be followed by additional offerings for customers deploying Ceph or Lustre.

Built on Lenovo’s System x3650 M5 server with powerful Intel Xeon processors, renowned for its industry-leading reliability and performance, the Lenovo DSS-G is available as a pre-integrated, easy-to-deploy rack-level offering. Featuring the Lenovo D1224 and D3284 12Gbps SAS storage enclosures and drives as well as software and networking components – including Red Hat Enterprise Linux support – the new offering allows for a wide choice of technology within an integrated solution.

As a follow-on to the successful GPFS Storage Server (GSS), the Lenovo DSS-G delivers on the needs of today’s agile and digital businesses. New features include:

  • Easy Scalability: Start small and easily grow performance / capacity via a modular approach
  • Innovative RAID: With IBM Spectrum Scale Declustered RAID, reduce rebuild overhead by up to 8X
  • Choice of High-Speed Network: Including InfiniBand or Ethernet up to 100Gbps
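The rebuild-overhead claim can be illustrated with a back-of-envelope model (a simplification for intuition, not IBM Spectrum Scale’s actual declustered RAID algorithm): in a declustered layout the rebuild workload is spread across every drive in the array rather than only the drives of one RAID group, so rebuild time shrinks roughly with the number of participating drives. The drive counts and throughput below are illustrative assumptions.

```python
# Simplified rebuild-time model: rebuilding a failed drive means
# re-reading its data, and that work is split across however many
# drives participate in the rebuild.
def rebuild_hours(drive_tb, per_drive_mb_s, participating_drives):
    total_mb = drive_tb * 1_000_000
    return total_mb / (per_drive_mb_s * participating_drives) / 3600

# Conventional RAID: only the 10 drives of one RAID group rebuild.
traditional = rebuild_hours(8, 50, 10)
# Declustered RAID: all 80 drives in the array share the work.
declustered = rebuild_hours(8, 50, 80)

print(f"traditional: {traditional:.1f} h, declustered: {declustered:.1f} h")
print(f"speedup: {traditional / declustered:.0f}x")
```

With 80 drives sharing work previously done by 10, the model gives exactly the 8X reduction quoted above; real-world gains depend on array size and rebuild throttling.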

The new Lenovo DSS-G offering is fulfilled by Lenovo Scalable Infrastructure (LeSI). LeSI leverages decades of engineering experience and leadership to reduce the complexity of deployment and delivers an integrated and fully-supported solution that matches best-in-industry components with optimized solution design. This enables maximum system availability and rapid root-cause problem detection throughout the life of the system.

Collectively, these features empower customers running data intensive HPC, big data or cloud workloads to focus their efforts on maximizing business value and reclaim valuable resources previously spent on designing, optimizing, installing, and supporting the infrastructure required to meet business demands.

In addition, Lenovo offers a comprehensive portfolio of services that supports the full lifecycle of the Lenovo DSS-G and all Lenovo IT assets. Expert professionals can assist with complex deployments as well as provide 24×7 monitoring and technical systems management with managed services. Available benefits also include a single point-of-contact for solution-level support.

For more information on the Lenovo DSS-G please click here.

Lenovo Quote (Madhu Matta, VP & GM, High Performance Computing and A.I.)

“The Lenovo HPC solutions are part of research projects focused on solving humanity’s most complex challenges. One in every five supercomputers in the world is built on Lenovo HPC offerings and we are proud to count major research universities among our partners. The Lenovo DSS-G offering enhances that capability. Clients can now deploy a software defined storage solution that enhances performance, scalability and capability of the HPC environment.”

About Lenovo

Lenovo (SEHK:0992) (Pink Sheets:LNVGY) is a $45 billion global Fortune 500 company and a leader in providing innovative consumer, commercial, and enterprise technology. Our portfolio of high-quality, secure products and services covers PCs (including the legendary Think and multimode Yoga brands), workstations, servers, storage, smart TVs and a family of mobile products like smartphones (including the Moto brand), tablets and apps.

Source: Lenovo

The post Lenovo Drives into Software Defined Datacenter with DSS-G Storage Solution appeared first on HPCwire.

Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC

Mon, 04/24/2017 - 08:48

SUNNYVALE, Calif. and YOKNEAM, Israel, April 24, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that EDR 100Gb/s InfiniBand solutions have demonstrated from 30 to 250 percent higher HPC applications performance versus Omni-Path. These performance tests were conducted at end-user installations and Mellanox benchmarking and research center, and covered a variety of HPC application segments including automotive, climate research, chemistry, bioscience, genomics and more.

Examples of extensively used mainstream HPC applications:

  • GROMACS is a molecular dynamics package designed for simulations of proteins, lipids and nucleic acids, and is one of the fastest and most broadly used applications for chemical simulations. GROMACS demonstrated a 140 percent performance advantage on an InfiniBand-enabled 64-node cluster.
  • NAMD is noted for its parallel efficiency, is used to simulate large biomolecular systems, and plays an important role in modern molecular biology. Using InfiniBand, NAMD demonstrated a 250 percent performance advantage on a 128-node cluster.
  • LS-DYNA is an advanced multi-physics simulation software package used across the automotive, aerospace, manufacturing and bioengineering industries. Using the InfiniBand interconnect, LS-DYNA demonstrated a 110 percent performance advantage running on a 32-node cluster.

Due to its scalability and offload technology advantages, InfiniBand has demonstrated higher performance while utilizing just 50 percent of the data center infrastructure, thereby enabling the industry’s lowest total cost of ownership (TCO) for these applications and HPC segments. For the GROMACS application, a 64-node InfiniBand cluster delivers 33 percent higher performance than a 128-node Omni-Path cluster; for NAMD, a 32-node InfiniBand cluster delivers 55 percent higher performance than a 64-node Omni-Path cluster; and for LS-DYNA, a 16-node InfiniBand cluster delivers 75 percent higher performance than a 32-node Omni-Path cluster.
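As a rough check on these figures, the performance-per-node advantage implied by each comparison is simply the reported speedup multiplied by the node-count ratio. This is a back-of-envelope reading of the press release’s own numbers, not an independent benchmark:

```python
# (InfiniBand nodes, Omni-Path nodes, reported InfiniBand speedup)
# from the half-size cluster comparisons quoted above.
comparisons = {
    "GROMACS": (64, 128, 1.33),
    "NAMD":    (32, 64, 1.55),
    "LS-DYNA": (16, 32, 1.75),
}

def per_node_advantage(ib_nodes, opa_nodes, speedup):
    # Reported speedup scaled by how many fewer nodes InfiniBand used.
    return speedup * (opa_nodes / ib_nodes)

for app, (ib, opa, s) in comparisons.items():
    print(f"{app}: {per_node_advantage(ib, opa, s):.2f}x performance per node")
```

On these numbers the implied per-node advantage ranges from about 2.7x to 3.5x, broadly consistent with Mellanox’s claim of 2.5x higher performance at half the infrastructure.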

“InfiniBand solutions enable users to maximize their data center performance and efficiency versus proprietary competitive products. EDR InfiniBand enables users to achieve 2.5X higher performance while reducing their capital and operational costs by 50 percent,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “As a standard and intelligent interconnect, InfiniBand guarantees both backward and forward compatibility, and delivers optimized data center performance to users for any compute elements – whether they include CPUs by Intel, IBM, AMD or ARM, or GPUs or FPGAs. Utilizing the InfiniBand interconnect, companies can gain a competitive advantage, reducing their product design time while saving on their needed data center infrastructure.”

The application testing was conducted at end-user data centers and at the Mellanox benchmarking and research center; the full report will be available on the Mellanox website. For more information, please contact Mellanox Technologies.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox InfiniBand Delivers up to 250 Percent Higher ROI for HPC appeared first on HPCwire.

DreamWorks Taps HPE, Qumulo to Accelerate Digital Content Pipeline

Mon, 04/24/2017 - 08:39

SEATTLE, Wash., April 24, 2017 —  Hewlett Packard Enterprise (HPE) and Qumulo today announced that DreamWorks Animation has selected the two companies to accelerate its digital content pipeline. The joint solution of HPE Apollo Servers and Qumulo Core software enables DreamWorks Animation to replace legacy storage systems used for HPC file-based workloads such as data intensive simulations for animated films and programs.

DreamWorks Animation was challenged to keep pace with the vast amount of small file data generated from animation rendering workflows. The studio faced significant challenges with their existing systems including insufficient scalability and write performance for large numbers of small files, lack of data visibility, and limited APIs for custom integrations with important media workflows. DreamWorks Animation upgraded its architecture to HPE Apollo servers and Qumulo Core for a flash-first hybrid storage architecture that is more performant and scalable to meet the demands of DreamWorks Animation’s digital content needs.

“Our film creation process requires an exceptional amount of digital manufacturing, and file-based data is one of the core assets of our business,” said Skottie Miller, Technology Fellow for Engineering and Infrastructure at DreamWorks Animation. “If storage fails to perform, everything is impacted. HPE and Qumulo deliver the next generation of scale-out storage that meets our demanding requirements. For any given film, we can generate more than half a billion files. Having the capability to support that with a best of breed solution such as HPE Apollo Servers and Qumulo’s modern scale-out storage software keeps our pipeline humming. Qumulo’s modern code base and architecture, write scalability, and integrated file systems analytics provides great value to our business and further strengthens our relationship with Hewlett Packard Enterprise.”

HPE Apollo servers and Qumulo Core offer maximum flexibility, scale and performance for on-premises and private cloud workloads. It is a complete and reliable solution for storing and managing tens of billions of files and objects and hundreds of petabytes of data. Qumulo’s scale-out storage easily scales capacity and performance linearly through an efficient and low cost flash-first hybrid architecture. Qumulo Core is also the world’s smartest storage system for continual delivery, real-time insight into data and storage, and offers users the option to choose their own hardware. Customers can integrate Qumulo Core into their existing workflows via robust REST APIs.
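Integration via REST typically means a workflow script assembling authenticated HTTP requests against the storage cluster. The sketch below shows the general shape of such a call; the host, port, route, and token are placeholders for illustration only, and Qumulo’s REST API documentation should be consulted for the actual endpoints and authentication flow.

```python
import json
from urllib.request import Request

def build_api_request(cluster, token, route, payload=None):
    """Assemble an authenticated JSON request for a storage REST API.

    Endpoint names and auth scheme here are illustrative placeholders,
    not Qumulo's documented API.
    """
    data = json.dumps(payload).encode() if payload is not None else None
    return Request(
        f"https://{cluster}:8000{route}",
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )

req = build_api_request("storage.example.com", "API_TOKEN", "/v1/file-system")
print(req.method, req.full_url)
```

In practice such a request would be sent with `urllib.request.urlopen` (or a client library) and the JSON response fed back into the pipeline, for example to poll the real-time analytics the release describes.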

“With a sterling reputation for innovation, DreamWorks Animation makes every new technology investment in support of driving creativity and new entertainment experiences,” said Peter Godman, Co-Founder and CTO of Qumulo. “The collaboration between HPE, Qumulo and DreamWorks Animation demonstrates the power of technology innovation to push the industry forward. DreamWorks Animation and large enterprises can now implement a truly modern IT infrastructure for storing and managing file-based data at scale, achieve remarkable efficiencies and extreme performance, and gain real-time analytics for their massive data footprints.”

About Qumulo

Qumulo, the leader in modern scale-out storage, is headquartered in Seattle and enables enterprises to manage and store enormous numbers of digital assets through real-time analytics built directly into the file system. Qumulo Core is a software-only solution designed to leverage the price/performance of commodity hardware coupled with the modern technologies of flash, virtualization and cloud. Qumulo was founded in 2012 by the inventors of scale-out NAS, and has attracted a team of storage innovators from Isilon, Amazon Web Services, Google and Microsoft. Qumulo has raised $130 million in four rounds of funding from leading investors. For more information, visit www.qumulo.com.

Source: Qumulo

The post DreamWorks Taps HPE, Qumulo to Accelerate Digital Content Pipeline appeared first on HPCwire.