HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

NVMe Revolution Takes Center Stage At Flash Memory Summit 2017

Wed, 08/02/2017 - 09:11

WAKEFIELD, Mass., Aug 2, 2017 — NVM Express, Inc., the organization that developed the industry standard NVM Express (NVMe) specification for accessing Solid State Drives (SSDs) on a PCI Express (PCIe) bus as well as across Fabrics, today announced that the organization will lead the Tuesday, Aug. 8th program, “NVMe and PCIe SSDs,” at the Flash Memory Summit (FMS) 2017, which is being held on Aug. 8-10, 2017, at the Santa Clara Convention Center.

With more than 100 presentations and 6,000 attendees expected at FMS, the two-part NVM Express Forum, A-11 and A-12, will be the one place where attendees can see and hear educational sessions covering the most current NVMe advances, coming innovations, trends, and real-world use cases directly from NVM Express members.

“From next-gen security to new and planned features, the NVM Express Forum at FMS is dedicated to providing industry leaders with everything they need to know about the continuous evolution of NVMe,” said Janene Ellefson, NVM Express Marketing Co-chair. “We look forward to welcoming anyone interested in the future of non-volatile memory to attend the Forum and encourage attendees to view the latest NVMe-enabled devices at the technology pavilion.”

Join NVM Express at its FMS NVMe Forum A-11 and A-12 on Aug. 8, 2017
As part of the FMS NVMe/PCIe Storage Track, the two-part NVM Express Forum will include presentations on the NVMe revision 1.3, the NVMe Management Interface (NVMe-MI), and the NVMe over Fabrics specification (NVMe-oF) as well as updates on security and an NVMe market update from IDC’s Storage Research Director, Eric Burgener.

A-11 Morning Sessions:

  • 8:30 a.m., Part 1a: “NVMe – New Features and Markets are Everywhere” (Presenters: IDC, Microsemi, Western Digital)
  • 9:45 a.m., Part 1b: “NVMe – New Applications and Support Mechanisms” (Presenters: Google, Microsoft, Toshiba, Western Digital)

A-12 Afternoon Sessions:

  • 3:40 p.m., Part 2a: “NVMe – Providing Software and Management Support” (Presenters: Dell EMC, Intel, Microsoft, Seagate, VMware)
  • 4:55 p.m., Part 2b: “NVMe does Networking” (Presenters: Broadcom, Brocade, Cavium, Intel, Mellanox)

NVM Express Board members will be available for media and analyst briefings. For more information or to schedule a briefing at FMS with NVM Express, please contact Jessie Hennion at 781-876-6280 or jhennion@virtualmgmt.com.

About NVM Express, Inc.

With more than 100 members, NVM Express, Inc. is a non-profit organization focused on enabling broad ecosystem adoption of high performance and low latency non-volatile memory (NVM) storage through a standards-based approach. The organization offers an open collection of NVM Express (NVMe) specifications and information to fully expose the benefits of non-volatile memory in all types of computing environments from mobile to data center. NVMe-based specifications are designed from the ground up to deliver high bandwidth and low latency storage access for current and future NVM technologies. For more information, visit http://www.nvmexpress.org.

Source: NVM Express

The post NVMe Revolution Takes Center Stage At Flash Memory Summit 2017 appeared first on HPCwire.

CSRA Expands NIH Supercomputing

Wed, 08/02/2017 - 09:08

FALLS CHURCH, Va., Aug. 2, 2017 — CSRA Inc. announced today it has installed a second increment to the Biowulf supercomputing cluster at the National Institutes of Health (NIH) Center for Information Technology. Biowulf is designed to process a large number of simultaneous computations that are typical in genomics, image processing, statistical analysis, and other biomedical research areas.

“We are proud to report the next stage of supercomputing power for Biowulf at NIH,” said Vice President Kamal Narang, head of CSRA’s Federal Health Group. “CSRA’s world-renowned HPC experts partnered with NIH to make this advancement possible. Entering this new stage, NIH researchers have expanded computing power to discover new cures and save lives.”

The second stage of computing power announced today will enable NIH researchers to make important advances in biomedical fields. Biomedical research is deeply dependent on computation for tasks such as whole-genome analysis of bacteria, simulation of pandemic spread, and analysis of human brain MRIs. Results from these analyses may enable new treatments for diseases including cancer, diabetes, heart conditions, infectious disease, and mental illness.

This increment to Biowulf, which ranked #139 on the June 2017 TOP500 list of supercomputing sites, features compute nodes from Hewlett Packard Enterprise, with Intel processors and NVIDIA GPU technology; large-scale storage from DataDirect Networks; InfiniBand interconnect components from Mellanox Technologies; and Ethernet switches from Brocade Communications Systems.

The scope and performance of this second increment to Biowulf includes:

  • 1,104 compute nodes (1.2 petaflops, or 1.2 thousand trillion operations per second)
  • 72 GPU nodes added to the existing 2,372-node cluster (1.6 petaflops)
  • An additional 4.8 petabytes (4.8 thousand trillion bytes) of storage

As the prime contractor, CSRA procured all of the components and managed the integration and installation of the equipment into the Biowulf system while collaborating with several industry partners. CSRA is also helping support the ongoing operation of the Biowulf cluster.

CSRA is a leader in HPC services, helping a wide variety of government customers achieve important mission objectives. Recently, the company was awarded a $51 million contract to support the Environmental Protection Agency’s (EPA) HPC systems. In addition to NIH and the EPA, CSRA supports supercomputers used by NASA, NOAA, the CDC, and the Department of Defense. This technology is used for applications ranging from aerospace system design, climate and weather modeling, astrophysics, and ecosystem modeling to health and medical research.

About CSRA Inc.

CSRA (NYSE: CSRA) solves our nation’s hardest mission problems as a bridge from mission and enterprise IT to Next Gen, from government to technology partners, and from agency to agency.  CSRA is tomorrow’s thinking, today. For our customers, our partners, and ultimately, all the people our mission touches, CSRA is realizing the promise of technology to change the world through next-generation thinking and meaningful results. CSRA is driving towards achieving sustainable, industry-leading organic growth across federal and state/local markets through customer intimacy, rapid innovation and outcome-based experience. CSRA has over 18,000 employees and is headquartered in Falls Church, Virginia. To learn more about CSRA, visit www.csra.com. Think Next. Now.

Source: CSRA

IARPA Investigates Classified as a Service Clouds

Wed, 08/02/2017 - 08:58

The rising cost of building secure on-premises infrastructure and increasing concerns around security are prompting the Intelligence Advanced Research Projects Activity (IARPA) to gauge interest on the part of existing cloud providers in developing and offering so-called Classified as a Service (ClaaS) offerings. An RFI went out in July soliciting feedback from “large U.S. owned entities that have multiple data centers located both in the U.S. and throughout the world that provide services similar to IaaS to the general public.”

The basic idea behind the RFI was to determine whether there is interest among large U.S.-owned infrastructure-as-a-service providers in new technologies and techniques to enable the most sensitive computing workloads to be executed on a public cloud. There is, of course, already AWS GovCloud, an isolated portion of AWS “designed to host sensitive data and regulated workloads in the cloud, helping customers support their U.S. government compliance requirements, including the International Traffic in Arms Regulations (ITAR) and Federal Risk and Authorization Management Program (FedRAMP).”

Specialized clouds aren’t new. Nimbix and Penguin’s POD, for example, focus on delivering HPC technology to support HPC workflows. Many have wondered whether the big cloud providers might also get into the game of specialized offerings; might AWS, for example, offer an AWS Research Cloud for the academic research community? IARPA’s interest in clouds for handling sensitive material and workflows is unsurprising.

The IARPA RFI notes soberly that the “cost of maintaining and procuring private infrastructure for classified/sensitive workloads for the government continues to get increasingly more expensive compared to the cost of leveraging commercial cloud resources. This disparity may increase exponentially over the next decade. Existing IaaS offerings require customers to trust the software stack and employees of the cloud provider and are subject to numerous potential side-channel attacks due to shared resources. This is not acceptable to customers with the most sensitive data processing needs.”

It will be interesting to see what, if anything, comes of this latest RFI and whether the larger cloud providers develop more formally segregated offerings.

Brookhaven Lab to Lead 2017 New York Scientific Data Summit

Wed, 08/02/2017 - 08:57

UPTON, N.Y., Aug 2, 2017 — The Computational Science Initiative (CSI) at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory will be leading the 2017 New York Scientific Data Summit, to be held from Aug. 7 through 9 at NYU’s Kimmel Center for University Life.

Jointly organized by Brookhaven Lab, NYU, and Stony Brook University, the annual conference will bring together data experts, scientists, application developers, and end users from national labs, universities, technology companies, utilities, and federal and state governments to share ideas for unlocking insights from scientific big data. Keynote and invited speakers will focus on five topics critical to enabling scientific discovery from rapidly generated, highly complex, and large-scale datasets: streaming data analysis, autonomous experimental design, performance for big data, big theory for big data, and interactive exploration of extreme-scale data. A series of poster presentations will highlight ongoing and emerging research in the field.

“Big data presents many challenges to academia, government, and industry,” said CSI Director Kerstin Kleese van Dam. “By working together, we are able to create exciting new solutions that advance U.S. research, national security, and competitiveness.”

The event is being co-hosted by the IEEE Computer Society–Long Island Chapter, the Moore-Sloan Data Science Environment at NYU, the New York State High Performance Computing Consortium, the NYU Center for Data Science, and Stony Brook University’s Institute for Advanced Computational Science (IACS). Sponsors include Cray, Hewlett Packard Enterprise, IACS, Intel, Juniper Networks, Kitware, NVIDIA, and the Simons Foundation.

To view the agenda and to register before the Aug. 3 deadline, please visit the event website.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: BNL

AMD Stuffs a Petaflops of Machine Intelligence into 20-Node Rack

Tue, 08/01/2017 - 17:11

With its Radeon “Vega” Instinct datacenter GPUs and EPYC “Naples” server chips entering the market this summer, AMD has positioned itself for a two-headed battle against rivals Intel and Nvidia. AMD took to the SIGGRAPH stage on Sunday to showcase both technologies with the unveiling of the Project 47 supercomputer, developed in partnership with Inventec, Mellanox and Samsung.

Based on Inventec’s P-series computing platform, the P47 rack houses 20 2U servers, each equipped with a single EPYC 7601 processor hooked to four “Vega”-based Radeon Instinct MI25 accelerators. AMD is aiming the one-petaflops of peak single-precision performance at a mix of graphics, machine intelligence and HPC workloads.
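
The one-petaflops figure follows from simple multiplication. A back-of-envelope sketch, assuming the MI25’s advertised peak of roughly 12.3 teraflops of single-precision throughput per card (an AMD spec-sheet number, not stated in this article):

```python
# Back-of-envelope peak FP32 throughput for a fully populated P47 rack.
servers_per_rack = 20
gpus_per_server = 4
mi25_fp32_tflops = 12.3  # assumed: AMD's advertised peak FP32 for the Radeon Instinct MI25

rack_peak_tflops = servers_per_rack * gpus_per_server * mi25_fp32_tflops
print(f"{rack_peak_tflops:.0f} TFLOPS peak FP32")  # 984 TFLOPS, i.e. roughly one petaflops
```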

With 128 lanes of PCIe per EPYC socket, the four MI25 GPUs can operate at full bandwidth without the need to, as AMD’s Mark Hirsch observes, resort to “costly dual-CPU and PLX switch setups typically needed on competing platforms in order to run four GPUs.” A fully populated rack boasts “more compute power and more cores, threads, compute units, IO lanes and memory channels in use at one time than in any other similarly configured system ever released,” adds Hirsch in a blog post.

Samsung contributed 10TB of DDR4 memory, HBM2 for the GPU cards, and high-performance NVMe SSD storage, and Mellanox supplied EDR (100G) InfiniBand connectivity.

A full 20-server rack of P47 systems achieves 30.05 gigaflops per watt in single-precision performance, a number that AMD’s press outreach cited as being “25 percent better compute efficiency than select competing supercomputing platforms.” Given that the P47 system doesn’t offer much in the way of double-precision arithmetic, it’s a potentially misleading claim. We’ll point out that machines from HPE-SGI, NEC, Fujitsu, ExaScaler, Dell, Cray and the P100-powered Saturn V from Nvidia achieved between 9.5 and 14.1 Linpack gigaflops per watt on the latest Green500 list, and when we do the math for single-precision peak, they offer between 28 and 50 gigaflops per watt.
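
The arithmetic behind those efficiency comparisons is simple division; a sketch using only the figures quoted above (the implied rack power is our own inference, not a number AMD has published):

```python
# Infer the rack power implied by AMD's quoted single-precision efficiency.
rack_peak_gflops = 1_000_000  # ~1 petaflops peak FP32, expressed in gigaflops
gflops_per_watt = 30.05       # AMD's quoted single-precision efficiency figure

implied_power_watts = rack_peak_gflops / gflops_per_watt
print(f"implied draw: ~{implied_power_watts / 1000:.1f} kW per rack")  # ~33.3 kW
```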

AMD presented two live demonstrations of the new server rack. The first involved remote testing in Autodesk Maya, Blender and Adobe Premiere Pro. The second test used all 80 GPUs to produce a full photorealistic rendering of a motorcycle in about a second. These demos were targeting the content producer community that was in attendance at SIGGRAPH, but there’s obvious potential for all manner of FP32-loving AI and HPC applications.

Unveiling the P47 rack, AMD CEO Lisa Su recalled the breaking of the original petaflops barrier by the IBM Roadrunner supercomputer in 2008, a feat that required 6,480 dual-core Opteron CPUs and 12,960 Sony Cell BE co-processors. Sure, once you normalize the flops math, Roadrunner still holds the performance edge by about a factor of three, but it also filled 700 racks and drew 2.35 MW of electrical power.

AMD expects its partners Inventec and AMAX to begin shipping Project 47 systems in Q4 of this year. Pricing has not yet been announced.

Nvidia Releases OptiX 5.0 SDK

Mon, 07/31/2017 - 17:08

LOS ANGELES, July 31, 2017 — NVIDIA today announced that it is bringing the power of artificial intelligence to rendering with the launch of NVIDIA OptiX 5.0 SDK with powerful new ray-tracing capabilities.

Running OptiX 5.0 on the NVIDIA DGX Station — the company’s recently introduced deskside AI workstation — will give designers, artists and other content-creation professionals the rendering capability of 150 standard CPU-based servers. This access to GPU-powered accelerated computing will provide extraordinary ability to iterate and innovate with speed and performance, at a fraction of the cost.

“Developers using our platform can enable millions of artists and designers to access the capabilities of a render farm right at their desk,” said Bob Pette, Vice President, Professional Visualization, NVIDIA. “By creating OptiX-based applications, they can bring the extraordinary power of AI to their customers, enhancing their creativity and dramatically improving productivity.”

OptiX 5.0’s new ray tracing capabilities will speed up the process required to visualize designs or characters, dramatically increasing a creative professional’s ability to interact with their content. It features new AI denoising capability to accelerate the removal of graininess from images, and brings GPU-accelerated motion blur for realistic animation effects.

OptiX 5.0 will be available at no cost to registered developers in November.

Rendering Appliance Powers AI Workflows

By running NVIDIA OptiX 5.0 on a DGX Station, content creators can significantly accelerate training, inference and rendering. A whisper-quiet system that fits under a desk, NVIDIA DGX Station uses the latest NVIDIA Volta-generation GPUs, making it the most powerful AI rendering system available.

To achieve equivalent rendering performance of a DGX Station, content creators would need access to a render farm with more than 150 servers that require some 200 kilowatts of power, compared with 1.5 kilowatts for a DGX Station. The cost for purchasing and operating that render farm would reach $4 million over three years compared with less than $75,000 for a DGX Station.

Industry Support for AI-based Graphics

NVIDIA is working with many of the world’s most important technology companies and creative visionaries from Hollywood studios to set the course for the use of AI for rendering, design, character generation and the creation of virtual worlds. They voiced broad support for the company’s latest innovations:

  • “AI is transforming industries everywhere. We’re excited to see how NVIDIA’s new AI technologies will improve the filmmaking process.” —Steve May, Vice President and CTO, Pixar
  • “We’re big fans of NVIDIA OptiX.  It greatly reduced our development cost while porting the ray tracing core of our Clarisse renderer to NVIDIA GPUs and offers extremely fast performance.  With the potential to significantly decrease rendering times with AI-accelerated denoising, OptiX 5 is very promising as it can become a game changer in production workflows.” — Nicolas Guiard, Principal Engineer, Isotropix
  • “AI has the potential to turbocharge the creative process.  We see a future where our artists’ creativity is unleashed with AI – a future where paintbrushes can truly ‘think’ and empower artists to create images and experiences we could hardly imagine just a few years ago.  At Technicolor, we share NVIDIA’s vision to chart a path that enhances the toolset for creatives to deepen audience experiences.”  — Sutha Kamal, Vice President, Technology Strategy, Technicolor


About NVIDIA

NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: Nvidia

Several SC17 Deadlines Extended to August 7

Mon, 07/31/2017 - 12:58

DENVER, July 31, 2017 — The submission deadline for Birds-of-a-Feather, the Doctoral Showcase, the Scientific Visualization Showcase and Research Posters at SC17 has been extended from Monday, July 31 to Monday, August 7. Don’t miss out on your opportunity to participate in these SC17 events!

  • Click here for more information about Birds-of-a-Feather, which are some of the most popular and well-attended sessions at the conference.
  • Click here for more information about the Doctoral Showcase, which gives students near the end of their Ph.D. the opportunity to present a summary of their dissertation research.
  • Click here for more information about the Scientific Visualization Showcase, which provides a forum for the year’s most instrumental movies in HPC.
  • Click here for more information about the Research Posters.

Web Submissions: https://submissions.supercomputing.org/

Source: SC17

Intel Weighs in on Convergence of AI and HPC Challenges

Mon, 07/31/2017 - 11:55

Scaling deep neural networks for a fixed problem onto large systems with thousands of nodes is challenging. Indeed, it is one of several hurdles confronting efforts to converge artificial intelligence (AI) and HPC. Pradeep Dubey, Intel Fellow and director of Intel’s Parallel Computing Lab (PCL), has written a blog describing Intel’s efforts to better understand and solve that problem along with others, and promises more details to come at SC17.

Dubey’s blog posted last week – Ushering in the convergence of AI and HPC: What will it take? – acknowledges the path forward is uneven. Besides the scaling problem mentioned above, Dubey writes, “Adding to the dilemma, unlike a traditional HPC programmer who is well-versed in low-level APIs for parallel and distributed programming, such as OpenMP or MPI, a typical data scientist who trains deep neural networks on a supercomputer is likely only familiar with high-level scripting-language based frameworks like Caffe or TensorFlow.”

Pradeep Dubey, Intel Fellow

No surprise, Intel is vigorously attacking the scaling problem. “Working in collaboration with researchers at the National Energy Research Scientific Computing Center (NERSC), Stanford University, and the University of Montreal, we have achieved a scaling breakthrough for deep learning training. We have scaled to over 9,000 Intel Xeon Phi processor based nodes on the Cori supercomputer, while staying under the accuracy and small batch-size constraints of today’s popular stochastic gradient descent variants, using a hybrid parameter update scheme. We will share this work at the upcoming Supercomputing Conference in Denver, November 12 – 17, 2017.”

The blog links to an interesting paper, On Large-Batch Training For Deep Learning: Generalization Gap And Sharp Minima, written by Intel and Northwestern University researchers, that targets the scaling problem.

Here’s an excerpt from the abstract: “We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions—and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap.”
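
The large-batch versus small-batch distinction the abstract describes is purely about how many samples feed each gradient estimate. A toy illustration (our own minimal sketch on a linear least-squares problem, not the paper’s method or Intel’s hybrid update scheme):

```python
import numpy as np

def sgd_fit(X, y, batch_size, lr=0.1, steps=500, seed=0):
    """Plain minibatch SGD on least squares; smaller batches mean noisier gradients."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)  # sample a minibatch
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size  # gradient estimate
        w -= lr * grad
    return w

rng = np.random.default_rng(42)
X = rng.normal(size=(1024, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1024)

w_small = sgd_fit(X, y, batch_size=8)    # noisy updates: the "inherent noise" the paper credits
w_large = sgd_fit(X, y, batch_size=512)  # near full-batch: a much smoother trajectory
```

On a convex toy problem both settings recover the true weights; the generalization gap the paper studies appears in deep, non-convex networks, where the noise of small batches appears to help training escape sharp minima.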

The blog is a good read and provides a glimpse into Intel efforts and thinking. According to Dubey’s posted bio, his research focus is computer architectures that efficiently handle new compute- and data-intensive application paradigms for the future computing environment. He holds over 36 patents, has published more than 100 technical papers, won the Intel Achievement Award in 2012 for Breakthrough Parallel Computing Research, and was honored with the Outstanding Electrical and Computer Engineer Award from Purdue University in 2014.

Link to blog: https://www.intelnervana.com/ushering-convergence-ai-hpc-will-take/

Annual HPC Symposium Features ‘Keynote Debate’ on Future of HPC Architecture

Mon, 07/31/2017 - 10:59

BOULDER, Colo., July 31, 2017 — An innovative “Keynote Debate” on the future of HPC architecture, featuring leaders from four major high performance computing companies, highlights the Rocky Mountain Advanced Computing Consortium’s 7th annual High Performance Computing (HPC) Symposium in August.

The Aug. 16th debate will be the first of two keynote sessions during the Aug. 15-17 Symposium, scheduled for the CU-Boulder East (Research) Campus.  The second keynote features New York filmmaker, composer and director Kenji Williams presenting a live performance of the globally touring NASA-powered data visualization spectacle – BELLA GAIA.

Debate participants will be Marc Hamilton, Vice President of Solutions Architecture and Engineering at NVIDIA; Mike Vildibill, Vice President of Exascale Development Programs at Hewlett Packard Enterprise; Jerry Lotto, Director HPC and Technical Computing at Mellanox; and William Magro, Chief Technologist for HPC Software at Intel.  Dr. Thomas Hauser, Director of Research Computing at CU-Boulder and Chair of the 20-member RMACC, will act as debate leader, and Tiffany Trader, Managing Editor of HPCwire, the nation’s leading news and information resource for high performance computing, will serve as debate moderator.

The Symposium will be held at CU-Boulder’s Sustainability, Energy & Environment Complex (SEEC). Registration is $150, which includes all conference materials and meals plus a reception. The student registration fee is $30, and postdocs can sign up for $75, thanks to support from the many event sponsors (listed below). For those only able to attend the Aug. 17th Tutorials, registration will be $100.

Information about the Symposium, including the program schedule and registration information can be found at the website: www.rmacc.org/hpcsymposium.

The Symposium’s technical program features six concurrent tracks covering a wide range of advanced computing topics, with a particular emphasis on data analytics and visualization. Tutorial sessions feature the acclaimed “Supercomputing in Plain English” series by Henry Neeman, in addition to classes on Python, R, and Singularity taught by experts from around the region. Other technical presentations will cover HPC-related resources such as Globus, the Open Science Grid, Amazon EC2, and Bro.

The annual symposium – recognized as one of the nation’s leading regional events in the HPC field – brings together faculty, researchers, industry leaders and students from throughout the Rocky Mountain region. Special beginner-level tutorials and workshops are included for those new to the HPC field.

About The Rocky Mountain Advanced Computing Consortium

The Rocky Mountain Advanced Computing Consortium (RMACC) is a collaboration among the major research universities in Colorado, Idaho, Montana, New Mexico, Utah and Wyoming, and the government research agencies NOAA, NCAR, the U.S. Geological Survey, and NREL. The RMACC mission is to facilitate widespread effective use of high performance computing throughout the Rocky Mountain region. Visit the website www.rmacc.org/about to learn more about the RMACC and its member institutions.

Symposium sponsors:

Intel (Diamond); Dell and Hewlett Packard Enterprise (Platinum); NVIDIA (Reception); PureStorage, Mellanox Technologies and DDN Storage (Gold); and Lenovo, Allinea, Silicon Mechanics, Sentinel Technologies, Cambridge Computer, Starfish Storage, and ConvergeOne (Silver).

About the Keynote Debate Participants

Marc Hamilton is Vice President, Solutions Architecture and Engineering at NVIDIA where he leads NVIDIA’s worldwide team of solutions architects and field application engineers responsible for working with customers to transform science and harness the power of the cloud with machine learning, HPC, and professional visualization solutions.

Prior to NVIDIA, Marc worked in the Hyperscale Business Unit within Hewlett Packard’s Enterprise Group, where he led the HPC team for the Americas region. He also spent 16 years in HPC at Sun Microsystems and earlier worked at TRW developing HPC applications for the U.S. aerospace and defense industry. He has published a number of technical articles and is the author of the book Software Development: Building Reliable Systems.

Marc holds a Bachelor’s degree in Math and Computer Science from UCLA, a Master’s in Electrical Engineering from USC, and is a graduate of the UCLA Executive Management program.

Mike Vildibill is Vice President of Hewlett Packard Enterprise’s Exascale Development, Federal Programs and HPC Storage groups, where he is responsible for product strategy, engineering and advanced technologies development. He has 25 years’ experience in HPC including executive positions at Sun Microsystems (acquired by Oracle), Appro (acquired by Cray Inc.), and DataDirect Networks.

Mike was Deputy Director of High-End Computing and Network Research at the San Diego Supercomputer Center, and a UCSD Principal Investigator and technical contributor to the NSF TeraGrid program, which deployed and operated thousands of miles of dark fiber across the U.S. to connect high-end data centers. He oversaw development of Sun’s first Infiniband switches and x86 HPC products, and served as Principal Investigator on the DARPA HPCS and silicon photonics R&D programs.

He holds Bachelor’s degrees in Mathematics and Computer Science and an MBA with an emphasis in Information Technologies, all from San Diego State University.

William Magro is an Intel Fellow and Intel’s Chief Technologist for HPC software. In this role, he serves as the technical lead and strategist for Intel’s high-performance computing (HPC) software and provides HPC software requirements for Intel product roadmaps.

Magro joined Intel in 2000 with the acquisition of Kuck & Associates Inc. (KAI). Prior to KAI, he spent 3 years as a post-doctoral fellow and staff member at the Cornell Theory Center at Cornell University in New York.

Magro has a Bachelor’s degree in applied and engineering physics from Cornell, and a Master’s degree and Ph.D. in physics, both from the University of Illinois at Urbana-Champaign.


Jerry Lotto joined Mellanox in 2016 as Director HPC and Technical Computing, with more than 30 years of experience with scientific computing.

An early adopter of InfiniBand, Jerry built the first HPC teaching cluster in Harvard’s Department of Chemistry and Chemical Biology, with an InfiniBand backbone, in the technology’s early days. In 2007, he helped to create the Harvard Faculty of Arts and Sciences Research Computing group. In an unprecedented collaborative effort among five universities, industry, and state government, Jerry also helped to design the Massachusetts Green High Performance Computing Center in Holyoke, MA, which was completed in November 2012.

In mid-2013, Jerry left Harvard University to join RAID, Inc. as Chief Technology Officer, helping companies, universities, and government agencies across the United States design, build, integrate, and use HPC and technical computing technologies.


Tiffany Trader, managing editor for HPCwire, has over a decade’s experience covering the HPC space and is one of the preeminent voices reporting on advanced scale computing today.

Since joining the HPCwire team in 2006, she has played an essential role in steering editorial strategy, engaging with key audiences, and delivering groundbreaking content to HPCwire’s worldwide audience.

As Managing Editor for HPCwire, Tiffany brings an unmatched wealth of expertise, insights, and editorial prowess based on years of experience covering high performance, cloud, green, and other advanced computing technologies.

Trader earned her Bachelor’s degree from San Diego State University, where she majored in Computer Science and Linguistics.

Dr. Thomas Hauser is the Director of Research Computing at the University of Colorado Boulder. His team operates a supercomputer, a research data storage service, and a friction-free network for large data transfers. Research Computing provides training and consulting in computational science and engineering and data management to CU Boulder researchers and students. He is also one of the two executive directors of the Center for Research Data and Digital Scholarship, and chairs the Rocky Mountain Advanced Computing Consortium (RMACC), which collaborates throughout the Rocky Mountain region on cyberinfrastructure projects.

Prior to joining CU Boulder, Dr. Hauser was Associate Director for Research Computing at Northwestern University. Before that, he was the founding director of the Center for High Performance Computing and a faculty member in the Department of Mechanical Engineering at Utah State University.

Dr. Hauser earned his Ph.D. in mechanical engineering, specializing in computational fluid dynamics, from the Technical University of Munich, Germany.

Source: RMACC

The post Annual HPC Symposium Features ‘Keynote Debate’ on Future of HPC Architecture appeared first on HPCwire.

NVM Express Plugfest Ushers in Next Wave of NVMe and NVMe-MI Devices

Mon, 07/31/2017 - 10:16

WAKEFIELD, Mass., July 31, 2017 – NVM Express, Inc., the organization that developed the NVM Express (NVMe) and NVMe Management Interface (NVMe-MI) specifications for accessing solid-state drives (SSDs) on a PCI Express (PCIe) bus as well as across Fabrics, today announced that, as a result of the May 22-25 NVMe Plugfest, NVM Express has certified more than 100 market-ready products in just four years. With testing conducted by the University of New Hampshire InterOperability Laboratory (UNH-IOL), an independent provider of broad-based testing and standards conformance solutions, the event certified and added 21 new NVMe products to the NVMe Integrators List and 6 new NVMe-MI devices to the newly launched NVMe-MI Integrators List.

“The successful conclusion of this event provides assurance to the OEM community of the availability of compliant and interoperable NVMe SSDs that implement the NVMe-MI specification,” said Bill Lynn, a member of the Board of Directors of NVM Express and distinguished engineer, Server Solutions Division, at Dell EMC. “The NVM Express Plugfest is critical to accelerating the adoption of the advanced management capabilities enabled by NVMe-MI. NVMe-MI-enabled servers will help deliver the high-bandwidth, low-latency storage access that is critical in compute platforms.”

The NVMe Plugfest attracted 49 engineers from 16 companies, demonstrating the companies’ desire to get an early, competitive advantage by verifying interoperability, robustness, and specification compliance of offerings that take advantage of NVMe and NVMe-MI, and, through an experimental and ad-hoc testing track, NVMe over Fabrics (NVMe-oF).

“NVMe is on a faster growth trajectory than ever before. Companies are now testing feature-rich NVMe SSDs that include unique enterprise and client capabilities and implement the NVMe-MI specification,” said David Woolf, senior engineer, Datacenter Technologies at the UNH-IOL. “Demand for the NVMe-oF specification tracks increased, signaling to us that the industry is moving quickly from trials to real live deployments that will require conformance and interoperability testing at our next Plugfest.”

The Plugfest drew participants from enterprise, client, and cloud storage companies as well as test equipment firms looking to successfully complete the rigorous NVMe Plugfest testing to get an early, competitive market advantage. Participants included Broadcom Limited, Cavium, Inc., Dera Storage, Intel Corporation, Lite-On Technology Corporation, Mellanox Technologies, Memblaze Technology Co., Microsemi Corporation, OakGate Technology, Realtek Semiconductor Corp., Samsung Electronics Co., Ltd., SANBlaze Technology, Inc., Serial Cables, SerialTek, Teledyne LeCroy, Toshiba Corporation, Viavi Solutions Inc., and Western Digital Corporation.

“The most recent NVMe Plugfest was our first to date, and we took advantage of the event’s experimental NVMe-oF specification testing,” said Rob Davis, vice president of Storage Technology at Mellanox Technologies. “As we continue to develop and deploy NVMe-oF offerings like our ConnectX Network Adapters and BlueField SoC, the UNH-IOL NVMe program is critical to establishing market confidence that our products will work well across a broad range of multi-vendor systems.”

The next NVMe Plugfest, scheduled for the fall of 2017 at the UNH-IOL in Durham, N.H., will include testing protocols for the new NVMe 1.3 specification and other updates to reflect the needs of the growing NVMe ecosystem.

About NVM Express, Inc.

With more than 100 members, NVM Express, Inc. is a non-profit organization focused on enabling broad ecosystem adoption of high performance and low latency non-volatile memory (NVM) storage through a standards-based approach. The organization offers an open collection of NVM Express (NVMe) specifications and information to fully expose the benefits of non-volatile memory in all types of computing environments from mobile to data center. NVMe-based specifications are designed from the ground up to deliver high bandwidth and low latency storage access for current and future NVM technologies. For more information, visit http://www.nvmexpress.org.

Source: NVM Express

The post NVM Express Plugfest Ushers in Next Wave of NVMe and NVMe-MI Devices appeared first on HPCwire.

Asperitas, Qarnot Announce Partnership on Green Edge Computing

Mon, 07/31/2017 - 10:11

HAARLEM/MONTROUGE/LOS ANGELES, July 31, 2017 — Dutch cleantech company Asperitas and French green HPC service provider Qarnot computing have announced a partnership at SIGGRAPH, the world’s largest annual conference and exhibition in computer graphics and interactive techniques in Los Angeles.

Qarnot will act as a value-added reseller and operator of the Immersed Computing solution AIC24 developed by Asperitas. The AIC24 is a stand-alone, plug-and-play solution that houses IT in a highly energy-efficient and reliable way using total liquid cooling. Qarnot and Asperitas are planning to deploy a showcase at Qarnot’s headquarters in Paris.

Qarnot provides cloud computing through a disruptive distributed infrastructure: rather than being concentrated in datacentres, computing power is spread throughout the city in the form of computing heaters and boilers, which are ideal for edge environments in residential and commercial buildings. Qarnot offers affordable HPC through a scalable platform that is hybrid by design, combining Qarnot’s own computing hardware with public clouds and private infrastructure, and accessible through a web interface, a REST API and/or a Python SDK. The collaboration with Asperitas will focus on deployments in the building market.
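Submitting work to such a platform programmatically might look like the following sketch. The endpoint path, payload fields and profile name here are illustrative assumptions, not Qarnot’s actual API; consult the vendor’s documentation for the real interface.

```python
# Hypothetical sketch of submitting a job to a REST API like the one Qarnot
# describes. The host, endpoint, payload fields and profile name are all
# illustrative assumptions, not the real API.
import json
import urllib.request

API_BASE = "https://api.example-render-cloud.test/v1"  # placeholder host

def build_job_request(name, profile, instance_count, token):
    """Prepare (but do not send) an authenticated job-submission request."""
    payload = {
        "name": name,                     # human-readable job label
        "profile": profile,               # e.g. a rendering profile
        "instanceCount": instance_count,  # compute instances to allocate
    }
    return urllib.request.Request(
        url=f"{API_BASE}/jobs",
        data=json.dumps(payload).encode(),
        headers={"Authorization": token, "Content-Type": "application/json"},
        method="POST",
    )

req = build_job_request("scene-render", "blender", 4, token="secret-token")
print(req.full_url, json.loads(req.data))
# A real client would then call urllib.request.urlopen(req) and poll the
# returned job resource for status.
```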

The Qarnot computing platform is well suited to compute-intensive applications such as rendering and risk analysis. The company serves major banks (including BNP Paribas and Société Générale), 3D animation studios and research labs. Qarnot is recognized as a pioneer in distributed cloud and smart building technology and has won several awards, including Popular Mechanics’ Editor’s Choice Award at CES 2016 in Las Vegas, the Cloud Innovation World Cup Award in 2015 and the Crédit Agricole Smart Home Challenge Award in 2016.

Asperitas has worked on validating and developing Immersed Computing as a unique approach to the datacentre industry since 2014, together with a wide ecosystem of partners including ADSE, Brink Industrial and the University of Leeds. By building on existing liquid immersion cooling technologies, integrating power and network components, and improving cooling physics with a strong focus on design and engineering for usability, Asperitas has arrived at a complete, integrated solution that can be used effectively in most, if not all, situations, independent of climate.

Immersed Computing by Asperitas is a concept driven by sustainability, efficiency and flexibility. It uses the most efficient model for operating IT, total liquid cooling, and goes far beyond technology alone: it encompasses an optimised way of working, highly effective deployment, flexible IT and a drastic simplification of datacentre design, offering advantages at every level of the datacentre value chain and maximising results for Cloud, HPC and Edge environments. The AIC24 is the first Immersed Computing solution, launched in March 2017 at Cloud Expo Europe. Since the launch, Asperitas has earned several award nominations, including Datacentre Facilities Innovation of the Year from Datacentre Solutions and Best Energy Solution from Datacloud Europe. Asperitas is currently in the running for an Accenture Innovation Award.

Source: Asperitas

The post Asperitas, Qarnot Announce Partnership on Green Edge Computing appeared first on HPCwire.

Asetek Receives Two Incremental Orders in Support of DOE Cluster Installations

Mon, 07/31/2017 - 09:06

OSLO, Norway, July 31, 2017 — Asetek has announced two incremental orders from Penguin Computing, an established data center OEM. The orders are for Asetek’s RackCDU D2C (Direct-to-Chip) liquid cooling solution and will enable increased computing power for two currently undisclosed HPC sites at U.S. Department of Energy (DOE) National Laboratories.

The orders have a combined value of approximately USD 280,000 with delivery expected in September 2017.

“The awards, received through our OEM partner Penguin, come after months of uncertainty surrounding DOE budgets due to the new administration in the USA. They are in line with our goal of increasing end-user adoption with existing OEMs and confirm our ability to expand our leading position in the HPC segment,” said André Sloth Eriksen, CEO and founder of Asetek.

The orders are a continuation of the partnership between Penguin and Asetek in support of a number of DOE National Laboratories implementing Asetek’s RackCDU D2C liquid cooling technology in Penguin’s Tundra Extreme Scale (ES) HPC server product line. RackCDU direct-to-chip hot water liquid cooling enhances Penguin’s ability to provide HPC solutions with extreme energy efficiency and higher rack cluster densities.

About Asetek

Asetek is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK.OL).

Source: Asetek

The post Asetek Receives Two Incremental Orders in Support of DOE Cluster Installations appeared first on HPCwire.

Cray Moves to Acquire the Seagate ClusterStor Line

Fri, 07/28/2017 - 16:42

This week Cray announced that it is picking up Seagate’s ClusterStor HPC array business for an undisclosed sum as part of a “strategic deal and partnership.”

“In short we’re effectively transitioning the bulk of the ClusterStor product line to Cray,” said CEO Peter Ungaro on the company’s Q2 investor call yesterday.

Cray will be taking over development, support, manufacturing and sales of the ClusterStor product line, including Cray’s Sonexion scale out Lustre storage system, which is based on ClusterStor. The supercomputing company expects to add more than 100 Seagate employees, primarily in R&D, customer service and channel and reseller support.

“Cray will be a great home for ClusterStor employees, customers and partners,” said Ken Claffey, vice president and general manager, Storage Systems Group at Seagate.

Seagate acquired the ClusterStor assets for $374 million in 2014, when it purchased Xyratex. The Lustre arrays have their origins in a company called ClusterStor that was founded by Lustre inventor Peter Braam and whose assets Xyratex acquired in 2010.

Cray has been working with Seagate since 2012, and is the biggest OEM for the ClusterStor line. Supporting Seagate’s other ClusterStor resellers figures prominently in Cray’s play here. The arrangement “provides a new route to market for our storage solution and potentially other products down the road,” said Ungaro.

Brian C. Henry, executive vice president and chief financial officer for Cray, told investors that Cray was already a significant part of Seagate’s ClusterStor revenues, so the added revenue from the transaction will come mostly from resellers. But the more substantial benefit to Cray’s bottom line, according to Henry, stems from improved gross margins on sales, which are split roughly 50-50 between Cray and third-party resellers.

Cray, with new market vistas in mind, is intent on broadening its portfolio and reach. They’ve put their technology into a public cloud offering via a partnership with the Markley Group. And they’ve formed an impressive (according to IDC/Hyperion) security partnership with Deloitte. Now Cray is further investing into high-performance storage, generally held as the healthiest growth sector in the HPC market.

“The [HPC storage] market has a TAM [total addressable market] of about $1-2 billion in that range depending on how you want to slice and dice it, but the growth rates of that market are better than the overall supercomputing market. It’s been growing in the 10 or 11, even 12 to 13 percent range recently, so it’s growing a little bit faster than the overall supercomputing market, which is great for us and part of our excitement around this,” said Ungaro.

These trends are backed by findings from HPC market firm Hyperion. According to Hyperion’s latest update, global HPC external storage revenues will grow 7.8 percent over the 2016-2021 timeframe (to $6.3 billion), while HPC server sales, by comparison, will grow a modest 5.8 percent to $14.8 billion.
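Hyperion’s projections can be sanity-checked with compound-growth arithmetic. A minimal sketch (the 2016 baselines below are implied by the quoted end values and growth rates, not figures stated by Hyperion):

```python
# Back-of-envelope check of Hyperion's projections using the figures quoted
# in the article. The 2016 baselines are implied, not reported.

def implied_base(end_value, cagr, years):
    """Work backwards from a projected end value and an annual growth rate."""
    return end_value / (1 + cagr) ** years

# HPC external storage: $6.3B in 2021 at 7.8% annual growth over 2016-2021
storage_2016 = implied_base(6.3, 0.078, 5)   # roughly $4.3B
# HPC servers: $14.8B in 2021 at 5.8% annual growth over the same span
servers_2016 = implied_base(14.8, 0.058, 5)  # roughly $11.2B

print(f"Implied 2016 storage revenue: ${storage_2016:.2f}B")
print(f"Implied 2016 server revenue:  ${servers_2016:.2f}B")
```

The spread between the two implied baselines illustrates why storage, though a smaller market, is the faster-growing one.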

Cray didn’t reveal the terms of the deal, noting that “there are still many moving parts” but Henry anticipates staffing costs will add about $20 million to 2017 operating expenses.

The news of the acquisition comes at the same time as Cray is undergoing restructuring to, as Ungaro puts it, “better align the cost structure of our company to the current market conditions.” As a cost-cutting measure, Cray reduced its workforce by about 14 percent, laying off 190 people. Ungaro traced the decision to the market slowdown that has continued into 2017.

“That, combined with our estimates for the timing of a rebound and the need to continue to invest in several areas to enable future growth drove our decision to adjust our workforce,” he said on the investor call, adding that the company’s competitive position and win rates remain strong.

Cray expects the transaction to be finalized late in the third quarter of 2017 and cites an overall net financial impact for Cray “in the range of break-even” for 2018.

Even with major consolidation plays like the Dell-EMC union and now the folding of ClusterStor into Cray, there is still a lot of diversity in the high-end storage vendor space. You can bet that established and aspiring HPC storage providers alike will be looking to mine this market disruption as an opportunity. Although they might not all be so bold as DDN, which today launched an outreach program targeting “the many users left stranded in the wake of Seagate shutting down its ClusterStor product line.”

EnterpriseTech Managing Editor Doug Black contributed to this report.

The post Cray Moves to Acquire the Seagate ClusterStor Line appeared first on HPCwire.

TACC’s Stampede2 Storms out of the Corral

Fri, 07/28/2017 - 10:25

AUSTIN, Texas, July 28, 2017 — Today, the Texas Advanced Computing Center (TACC) dedicated Stampede2, the largest supercomputer at any U.S. university, and one of the most powerful systems in the world in a ceremony at The University of Texas at Austin’s J.J. Pickle Research Campus.

“Stampede2 represents a new horizon for academic researchers in the U.S.,” said Dan Stanzione, TACC’s executive director. “It will serve many thousands of our nation’s scientists and engineers, allowing them to improve our competitiveness and ensure that UT Austin remains a leader in computational research for the national open science community.”

Representatives from TACC were joined by leaders from The University of Texas at Austin, The University of Texas System, the National Science Foundation (NSF) and industry partners Dell EMC, Intel and Seagate at the event.

“For 16 years, the Texas Advanced Computing Center has earned its reputation for innovation and technological leadership,” said Gregory L. Fenves, president of UT Austin. “It is only fitting that TACC has designed and now operates the most powerful supercomputer at any university in the U.S., Stampede2, enabling scientists and engineers to take on the greatest challenges facing society.”

Made possible by a $30 million award from NSF, Stampede2 is the newest strategic resource for the nation’s academic community and will enable thousands of researchers nationwide, from all disciplines, to answer questions that cannot be addressed through theory or experimentation alone and that require high-performance computing power.

“Building on the success of the initial Stampede system, the Stampede team has partnered with other institutions as well as industry to bring the latest in forward-looking computing technologies combined with deep computational and data science expertise to take on some of the most challenging science and engineering frontiers,” said Irene Qualters, director of NSF’s Office of Advanced Cyberinfrastructure.

In addition to its massive scale, Stampede2 will be among the first systems to employ cutting-edge processor, memory, networking and storage technology from its industry partners. Phase 1 of the system, now complete, ranked as the 12th most powerful supercomputer in the world on the June Top500 list and contains 4,200 Intel Xeon Phi processor-based nodes connected by the Intel Omni-Path Architecture. These 68-core massively parallel processors include a new form of memory that improves the speed at which the processors can compute.

Later this summer, Phase 2 will add 1,736 Intel Xeon Scalable processor-based nodes, Intel’s biggest data center platform advancement in a decade, giving the system a peak performance of 18 petaflops, or 18 quadrillion mathematical operations per second. Stampede2 will also later add Intel persistent memory, based on 3D XPoint media. This entirely new class of nonvolatile memory can help turn immense amounts of data into valuable information in real time.
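The quoted 18-petaflop peak can be roughly reconciled with the node counts. A back-of-envelope sketch, in which the per-node clock speed and flops-per-cycle figures are assumptions typical of Xeon Phi 7250-class parts rather than numbers stated in the article:

```python
# Rough peak-flops estimate for Stampede2. Node counts (4,200 Phase 1 KNL
# nodes, 1,736 Phase 2 Xeon Scalable nodes, 18 PF total peak) are from the
# article; the per-node clock and flops/cycle are assumed, not quoted.

def node_peak_tflops(cores, ghz, flops_per_cycle):
    """Theoretical double-precision peak for one node, in teraflops."""
    return cores * ghz * flops_per_cycle / 1000.0

# 68 cores, ~1.4 GHz, 32 DP flops/cycle via dual AVX-512 units (assumed)
knl_node = node_peak_tflops(cores=68, ghz=1.4, flops_per_cycle=32)  # ~3.05 TF

phase1_pf = 4200 * knl_node / 1000.0  # ~12.8 PF from the Xeon Phi nodes
phase2_pf = 18.0 - phase1_pf          # remainder attributed to Phase 2

print(f"Phase 1 estimate: {phase1_pf:.1f} PF")
print(f"Phase 2 remainder: {phase2_pf:.1f} PF (~{phase2_pf / 1736 * 1000:.1f} TF/node)")
```

Under these assumptions the Phase 2 nodes would contribute roughly 3 TF each, which is plausible for AVX-512-capable Xeon Scalable parts.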

Stampede2 is the flagship supercomputer at The University of Texas at Austin’s Texas Advanced Computing Center (TACC). A strategic national resource supported by a $30 million award from the National Science Foundation (NSF), Stampede2 will provide high-performance computing capabilities to thousands of researchers across the U.S. [Credit: Sean Cunningham, TACC]

“Intel and TACC have been collaborating for years to provide the high-performance computing (HPC) community the tools they need to make scientific discoveries and create solutions to address some of society’s toughest challenges,” said Trish Damkroger, Vice President of Technical Computing at Intel. “Intel’s leading solution portfolio for HPC provides the efficient performance, flexible interconnect, and ease of programming to be the foundation of choice for leading supercomputing centers.”

Dell EMC supplied Stampede2’s PowerEdge server racks and acted as the technological integrator for the project.

“Dell EMC is committed to partnering with leading researchers across the globe to advance high performance computing initiatives. We are proud to have partnered with TACC to deliver Stampede2, an updated system that drives technological research in areas critical to scientific progress,” said Armughan Ahmad, senior vice president and general manager, Ready Solutions and Alliances for Dell EMC.

Seagate Technologies provided the storage for the system: 30 petabytes of raw capacity, enough for billions of files.

Stampede2’s powerful and diverse architecture is well-tuned to support computational scientists and engineers who use a wide range of applications, from researchers who conduct large-scale simulations and data analyses using thousands of processors simultaneously to those who perform smaller computations or who interact with Stampede2 through web-based community platforms like CyVerse, which serves the life sciences community, and DesignSafe, which serves the natural hazard engineering community.

The integration of Singularity containers, which can be used to package entire scientific workflows, software and libraries, and even data, will make a large catalogue of applications available to researchers.

TACC staff have worked since January to construct Stampede2 in TACC’s state-of-the-art data center, and deployed the system ahead of schedule. Since April, researchers have used the system to conduct large-scale scientific studies of gravitational waves, earthquakes, nanoparticles, cancer proteins and severe storms.

Stampede2 will serve the science community through 2021. An additional proposed NSF award for $24 million will support upcoming operations and maintenance costs for the system.

A number of leading universities will collaborate with TACC to provide cyberinfrastructure expertise and services for the project. The partner institutions are: Clemson University, Cornell University, Indiana University, Ohio State University and the University of Colorado.

Stampede2 will be the largest supercomputing resource available to researchers through the NSF-supported Extreme Science and Engineering Discovery Environment (XSEDE), which will allocate time on the supercomputer to researchers based on a competitive peer-review process.

The system continues the important service to the scientific community provided by Stampede1 — also supported by NSF — which operated from 2013 to 2017 and over the course of its existence ran 8 million compute jobs in support of tens of thousands of researchers and more than 3,000 science and engineering projects.

Stampede2 will double the peak performance, memory, storage capacity, and bandwidth of its predecessor, while occupying half the physical size and consuming half the power. It will be integrated into TACC’s ecosystem of more than 15 advanced computing systems, providing access to long-term storage, scientific visualization, machine learning, and cloud computing capabilities.

Stampede2 comes online at a time when the use of NSF-supported research cyberinfrastructure resources is at an all-time high across all science and engineering disciplines. Since 2005, the number of active institutions using research cyberinfrastructure has doubled, the number of principal investigators has tripled, and the number of active users has quintupled.

Said Stanzione: “Stampede2 will help a growing number of scientists access computation at scale, powering discoveries that change the world.”

Source: TACC

The post TACC’s Stampede2 Storms out of the Corral appeared first on HPCwire.

Intel Reports Second-Quarter Revenue of $14.8 Billion

Thu, 07/27/2017 - 17:23


  • Record second-quarter revenue up 14 percent year-over-year (excluding Intel Security Group) with strong performance in client computing (up 12 percent) and data-centric* businesses (up 16 percent).
  • GAAP earnings per share (EPS) was $0.58 and non-GAAP EPS was $0.72, up 22 percent year-over-year driven by strong topline growth and gross margin improvement.
  • Intel raises full-year revenue outlook by $1.3 billion to $61.3 billion; raises full-year GAAP EPS outlook by $0.10 to $2.66 and non-GAAP EPS by $0.15 to $3.00.
  • Launched Intel’s highest performance products ever: the Intel® Core™ X-Series family for advanced gaming, VR and more, as well as Intel® Xeon® Scalable processors, which offer data center customers huge performance gains for artificial intelligence (AI) and other data-intensive workloads.

SANTA CLARA, Calif., July 27, 2017 — Intel Corporation today reported second-quarter revenue of $14.8 billion, up 9 percent year-over-year. After adjusting for the Intel Security Group (ISecG) transaction, second-quarter revenue grew 14 percent from a year ago. Operating income was $3.8 billion, up 190 percent year-over-year, and non-GAAP operating income was $4.2 billion, up 30 percent. EPS was $0.58, up 115 percent year-over-year, and non-GAAP EPS was $0.72, up 22 percent.

The company also generated approximately $4.7 billion in cash from operations, paid dividends of $1.3 billion, and used $1.3 billion to repurchase 36 million shares of stock. Intel is raising its full-year revenue outlook by $1.3 billion to $61.3 billion and raising its EPS outlook to $2.66 (GAAP) and $3.00 (non-GAAP), which is a 15 cent increase over the previous guidance.

“Q2 was an outstanding quarter with revenue and profits growing double digits over last year,” said Brian Krzanich, Intel CEO. “We also launched new Intel Core, Xeon and memory products that reset the bar for performance leadership, and we’re gaining customer momentum in areas like AI and autonomous driving. With industry-leading products and strong first-half results, we’re on a clear path to another record year.”

Key Business Unit Revenue and Trends, Q2 2017 vs. Q2 2016:

  • Client Computing Group: $8.2 billion, up 12%
  • Data Center Group: $4.4 billion, up 9%
  • Internet of Things Group: $720 million, up 26%
  • Non-Volatile Memory Solutions Group: $874 million, up 58%
  • Programmable Solutions Group: $440 million, down 5%

*Data-centric businesses include DCG, IOTG, NSG, PSG, and all other

“We feel great about where we are relative to our three year plan and heading into the second half. Intel’s transformation continues in the third quarter when we expect to complete our planned acquisition of Mobileye,” said Bob Swan, Intel CFO. “Based on our strong first-half results and higher expectations for the PC business, we’re raising our full-year revenue and EPS forecast.”

GAAP Financial Comparison, Q2 2017 vs. Q2 2016:

  • Revenue: $14.8 billion vs. $13.5 billion, up 9%
  • Gross Margin: 61.6% vs. 58.9%, up 2.7 points
  • R&D and MG&A: $5.1 billion vs. $5.2 billion, flat
  • Operating Income: $3.8 billion vs. $1.3 billion, up 190%
  • Tax Rate: 38.6% vs. 20.4%, up 18.2 points
  • Net Income: $2.8 billion vs. $1.3 billion, up 111%
  • Earnings Per Share: 58 cents vs. 27 cents, up 115%

Non-GAAP Financial Comparison, Q2 2017 vs. Q2 2016:

  • Revenue: $14.8 billion ^ vs. $13.5 billion ^, up 9%
  • Gross Margin: 63.0% vs. 61.8%, up 1.2 points
  • R&D and MG&A: $5.1 billion ^ vs. $5.2 billion ^, flat
  • Operating Income: $4.2 billion vs. $3.2 billion, up 30%
  • Tax Rate: 22.5% vs. 20.4% ^, up 2.1 points
  • Net Income: $3.5 billion vs. $2.9 billion, up 23%
  • Earnings Per Share: 72 cents vs. 59 cents, up 22%

^ No adjustment on a non-GAAP basis.

Business Outlook

Intel’s Business Outlook and other forward-looking statements in this earnings release reflect management’s views as of July 27, 2017. Intel does not undertake, and expressly disclaims any duty, to update any such statement whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law.

Intel’s Business Outlook does not include the potential impact of any business combinations, asset acquisitions, divestitures, strategic investments and other significant transactions that may be completed after July 27, 2017 except for the planned acquisition of Mobileye N.V. (Mobileye), which we expect to close in the third quarter of 2017, pending satisfaction of all closing conditions.

Our guidance for the third quarter and full-year 2017 includes both GAAP and non-GAAP estimates. Reconciliations between these GAAP and non-GAAP financial measures are included below.

Q3 2017 Outlook (GAAP / Non-GAAP / Range):

  • Revenue: $15.7 billion / $15.7 billion ^ / +/- $500 million
  • Gross margin percentage: 61% / 63% / +/- a couple pct. pts.
  • R&D plus MG&A spending: $5.2 billion / $5.1 billion / approximately
  • Restructuring and other charges: $0 / $0 / approximately
  • Amortization of acquisition-related intangibles included in operating expenses: $50 million / $0 / approximately
  • Impact of equity investments and interest and other, net: $300 million / $300 million ^ / approximately
  • Depreciation: $1.8 billion / $1.8 billion ^ / approximately
  • Operating income: $4.3 billion / $4.8 billion / approximately
  • Tax rate: 24% / 24% ^ / approximately
  • Earnings per share: $0.72 / $0.80 / +/- 5 cents

Full-Year 2017 Outlook (GAAP / Non-GAAP / Range):

  • Revenue: $61.3 billion / $61.3 billion ^ / +/- $500 million
  • Gross margin percentage: 61% / 63% / +/- a couple pct. pts.
  • R&D plus MG&A spending: $20.8 billion / $20.7 billion / approximately
  • Restructuring and other charges: $200 million / $0 / approximately
  • Amortization of acquisition-related intangibles included in operating expenses: $175 million / $0 / approximately
  • Impact of equity investments and interest and other, net: $1.4 billion / $1.0 billion / approximately
  • Depreciation: $7.0 billion / $7.0 billion ^ / +/- $200 million
  • Operating income: $16.4 billion / $17.9 billion / approximately
  • Tax rate: 27% / 23% / approximately
  • Earnings per share: $2.66 / $3.00 / +/- 5%
  • Full-year capital spending: $12.0 billion / $12.0 billion ^ / +/- $500 million

^ No adjustment on a non-GAAP basis.

For additional information regarding Intel’s results and Business Outlook, please see the CFO Earnings Presentation posted on our Investor Relations website at www.intc.com/results.cfm.

Forward-Looking Statements

The above statements and any others in this release that refer to Business Outlook, future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking statements. Such statements are based on management’s expectations as of the date of this earnings release and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Intel presently considers the following to be important factors that could cause actual results to differ materially from the company’s expectations.

  • Demand for Intel’s products is highly variable and could differ from expectations due to factors including changes in business and economic conditions; consumer confidence or income levels; the introduction, availability and market acceptance of Intel’s products, products used together with Intel products and competitors’ products; competitive and pricing pressures, including actions taken by competitors; supply constraints and other disruptions affecting customers; changes in customer order patterns including order cancellations; and changes in the level of inventory at customers.
  • Intel’s gross margin percentage could vary significantly from expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and associated costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials or resources; and product manufacturing quality/yields. Variations in gross margin may also be caused by the timing of Intel product introductions and related expenses, including marketing expenses, and Intel’s ability to respond quickly to technological developments and to introduce new products or incorporate new features into existing products, which may result in restructuring and asset impairment charges.
  • Intel’s results could be affected by adverse economic, social, political and physical/infrastructure conditions in countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters, infrastructure disruptions, health concerns, fluctuations in currency exchange rates, sanctions and tariffs, and the United Kingdom referendum to withdraw from the European Union. Results may also be affected by the formal or informal imposition by countries of new or revised export and/or import and doing-business regulations, which could be changed without prior notice.
  • Intel operates in highly competitive industries and its operations have high costs that are either fixed or difficult to reduce in the short term.
  • The amount, timing and execution of Intel’s stock repurchase program may fluctuate based on Intel’s priorities for the use of cash for other purposes—such as investing in our business, including operational and capital spending, acquisitions, and returning cash to our stockholders as dividend payments—and because of changes in cash flows or changes in tax laws.
  • Intel’s expected tax rate is based on current tax law and current expected income and may be affected by the jurisdictions in which profits are determined to be earned and taxed; changes in the estimates of credits, benefits and deductions; the resolution of issues arising from tax audits with various tax authorities, including payment of interest and penalties; and the ability to realize deferred tax assets.
  • Gains or losses from equity securities and interest and other could vary from expectations depending on gains or losses on the sale, exchange, change in the fair value or impairments of debt and equity investments, interest rates, cash balances, and changes in fair value of derivative instruments.
  • Product defects or errata (deviations from published specifications) may adversely impact our expenses, revenues and reputation.
  • Intel’s results could be affected by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues. An unfavorable ruling could include monetary damages or an injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property.
  • Intel’s results may be affected by the timing of closing of acquisitions, divestitures and other significant transactions. In addition, risks associated with our planned acquisition of Mobileye N.V. are described in the “Forward Looking Statements” section of Intel’s press release entitled “Intel to Acquire Mobileye; Combining Technology and Talent to Accelerate the Future of Autonomous Driving” dated March 13, 2017, which risk factors are incorporated by reference herein.

Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q, copies of which may be obtained by visiting our Investor Relations website at www.intc.com or the SEC’s website at www.sec.gov.

Earnings Webcast

Intel will hold a public webcast at 2:00 p.m. PDT today to discuss the results for its second quarter of 2017. The live public webcast can be accessed on Intel’s Investor Relations website at www.intc.com/results.cfm. The CFO Earnings Presentation, webcast replay, and audio download will also be available on the site.

Intel plans to report its earnings for the third quarter of 2017 on October 26, 2017 promptly after close of market, and related materials will be available at www.intc.com/results.cfm. A public webcast of Intel’s earnings conference call will follow at 2:00 p.m. PDT at www.intc.com.

About Intel

Intel (NASDAQ: INTC) expands the boundaries of technology to make the most amazing experiences possible. Information about Intel can be found at newsroom.intel.com and intel.com.

Source: Intel Corp.

The post Intel Reports Second-Quarter Revenue of $14.8 Billion appeared first on HPCwire.

Cray Inc. Reports Second Quarter 2017 Financial Results

Thu, 07/27/2017 - 14:47

SEATTLE, July 27, 2017 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced financial results for its second quarter ended June 30, 2017.

All figures in this release are based on U.S. GAAP unless otherwise noted.  A reconciliation of GAAP to non-GAAP measures is included in the financial tables in this press release.

Revenue for the second quarter of 2017 was $87.1 million, compared to $100.2 million in the second quarter of 2016.  Net loss for the second quarter of 2017 was $6.8 million, or $0.17 per diluted share, compared to a net loss of $13.1 million, or $0.33 per diluted share in the second quarter of 2016.  Non-GAAP net loss was $8.0 million, or $0.20 per diluted share for the second quarter of 2017, compared to non-GAAP net loss of $11.4 million, or $0.29 per diluted share for the same period of 2016.
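The per-share figures above can be cross-checked with a quick arithmetic sketch (our own sanity check, not part of the Cray release): dividing each net loss by the corresponding loss per diluted share backs out the implied diluted share count, which should agree between the GAAP and non-GAAP figures for the same quarter.

```python
# Back out the diluted share count implied by the reported loss figures.
# All inputs are taken from the Cray release above; the calculation itself
# is our own illustrative check, not something the company published.

def implied_shares(net_loss_millions: float, loss_per_share: float) -> float:
    """Implied diluted shares (in millions) = net loss / loss per share."""
    return net_loss_millions / loss_per_share

# GAAP Q2 2017: $6.8M net loss at $0.17 per diluted share.
gaap_shares = implied_shares(6.8, 0.17)
# Non-GAAP Q2 2017: $8.0M net loss at $0.20 per diluted share.
non_gaap_shares = implied_shares(8.0, 0.20)

print(round(gaap_shares), round(non_gaap_shares))
```

Both calculations imply roughly 40 million diluted shares, so the GAAP and non-GAAP lines are internally consistent.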

Overall gross profit margin on a GAAP and non-GAAP basis for the second quarter of 2017 was 33%, compared to 36% for the second quarter of 2016.

Operating expenses for the second quarter of 2017 were $39.8 million, compared to $51.8 million for the second quarter of 2016.  Non-GAAP operating expenses for the second quarter of 2017 were $37.5 million, compared to $49.0 million for the second quarter of 2016.  Operating expenses for the second quarter of 2017 benefited from increased research and development credits.

As of June 30, 2017, cash, investments and restricted cash totaled $253 million.  Working capital at the end of the second quarter was $342 million, compared to $350 million at the end of the first quarter.

“As data continues to expand at an explosive rate, storage is becoming an increasingly key aspect to our strategic growth areas, including modeling and simulation, big data analytics, and artificial intelligence/deep learning,” said Peter Ungaro, president and CEO of Cray.  “We recently entered into an agreement to complete an exciting transaction and strategic partnership with Seagate that will strengthen our efforts in this area, broaden our storage portfolio and help us drive new growth in the high-performance storage market.  At the same time, our market has continued to experience a prolonged downturn, one which we believe will be temporary, but which drove us to take the difficult step last week to better align our workforce with both the short-term market realities and our long-term business strategies.  I want to thank those employees who were personally impacted.  I remain positive about the long-term prospects of our business as we remain well positioned to drive growth into the future.”

For 2017, while a wide range of results remains possible, Cray expects revenue to be in the range of $400 million for the year.  Revenue in the third quarter of 2017 is expected to be approximately $60 million.  GAAP and non-GAAP gross margins for the year are expected to be in the low- to mid-30% range.  Non-GAAP operating expenses for 2017, including an estimate for what the impact of the Seagate transaction would be, are expected to be in the range of $190 million.  For 2017, GAAP operating expenses are anticipated to be about $24 million higher than non-GAAP operating expenses, driven by stock-based compensation, restructuring, and costs related to the Seagate transaction.  GAAP gross profit is expected to be about $1 million lower than non-GAAP gross profit as a result of stock based compensation.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

  • In July, Cray announced it has signed a definitive agreement with Seagate to complete a strategic transaction and enter into a partnership centered around Seagate’s ClusterStor high-performance storage business.  The agreement also calls for Seagate and Cray to collaborate to incorporate the latest Seagate technology into future ClusterStor and Sonexion products.  Cray will continue to support and enhance the ClusterStor product line and to support new and existing customers going forward.
  • In July, Cray announced that it will provide a Urika-GX to the Alan Turing Institute through a collaboration between Cray, Intel and the Institute.  The agile analytics platform will enable the development of advanced applications across a number of scientific fields including engineering and technology, defense and security, smart cities, financial services and life sciences.
  • In June, Cray announced the Cray Urika-XC analytics software suite, bringing graph analytics, deep learning, and robust big data analytics tools to the Company’s flagship line of Cray XC supercomputers.  With the Cray Urika-XC software suite, analytics and Artificial Intelligence (AI) workloads can run alongside scientific modeling and simulations on XC supercomputers.
  • In June, Cray was awarded a contract with the National Institute of Water and Atmospheric Research in New Zealand to provide two Cray XC50 supercomputers and a Cray CS400 cluster supercomputer in a contract valued at more than $18 million.
  • In June, Leidos and Cray announced that the companies signed a strategic alliance agreement to develop, market and sell Multi-Level Security solutions that include the Cray CS series of cluster supercomputers to Federal and commercial customers.
  • In May, Markley and Cray announced a partnership to provide supercomputing as a service solutions that combine the power of Cray supercomputers with the premier hosting capabilities of Markley.  Through the partnership, Markley will offer Cray supercomputing technologies, as a hosted offering, and both companies will collaborate to build and develop industry-specific solutions.
  • In May, Cray announced the launch of two new Cray CS-Storm accelerated cluster supercomputers — the Cray CS-Storm 500GT and the Cray CS-Storm 500NX.  Purpose-built for the most demanding AI workloads, the new Cray systems will provide customers with powerful, accelerator-optimized solutions for running machine learning and deep learning applications.

Conference Call Information
Cray will host a conference call today, Thursday, July 27, 2017 at 1:30 p.m. PDT (4:30 p.m. EDT) to discuss its second quarter ended June 30, 2017 financial results.  To access the call, please dial into the conference at least 10 minutes prior to the beginning of the call at (855) 894-4205. International callers should dial (765) 889-6838 and use the conference ID #56308197.  To listen to the audio webcast, go to the Investors section of the Cray website at www.cray.com/company/investors.

If you are unable to attend the live conference call, an audio webcast replay will be available in the Investors section of the Cray website for 180 days.  A telephonic replay of the call will also be available by dialing (855) 859-2056, international callers dial (404) 537-3406, and entering the conference ID #56308197.  The conference call replay will be available for 72 hours, beginning at 4:45 p.m. PDT on Thursday, July 27, 2017.

Use of Non-GAAP Financial Measures
This press release contains “non-GAAP financial measures” under the rules of the U.S. Securities and Exchange Commission (“SEC”).  A reconciliation of U.S. generally accepted accounting principles, or GAAP, to non-GAAP results is included in the financial tables included in this press release.  Management believes that the non-GAAP financial measures that we have set forth provide additional insight for analysts and investors and facilitate an evaluation of Cray’s financial and operational performance that is consistent with the manner in which management evaluates Cray’s financial performance.  However, these non-GAAP financial measures have limitations as an analytical tool, as they exclude the financial impact of transactions necessary or advisable for the conduct of Cray’s business, such as the granting of equity compensation awards, and are not intended to be an alternative to financial measures prepared in accordance with GAAP.  Hence, to compensate for these limitations, management does not review these non-GAAP financial metrics in isolation from its GAAP results, nor should investors.  Non-GAAP financial measures are not based on a comprehensive set of accounting rules or principles.  This non-GAAP information supplements, and is not intended to represent a measure of performance in accordance with, or disclosures required by GAAP.  These measures are adjusted as described in the reconciliation of GAAP to non-GAAP numbers at the end of this release, but these adjustments should not be construed as an inference that all of these adjustments or costs are unusual, infrequent or non-recurring.  Non-GAAP financial measures should be considered in addition to, and not as a substitute for or superior to, financial measures determined in accordance with GAAP.  Investors are advised to carefully review and consider this non-GAAP information as well as the GAAP financial results that are disclosed in Cray’s SEC filings.

Additionally, we have not quantitatively reconciled the non-GAAP guidance measures disclosed under “Outlook” to their corresponding GAAP measures because we do not provide specific guidance for the various reconciling items such as stock-based compensation, adjustments to the provision for income taxes, amortization of intangibles, costs related to acquisitions, purchase accounting adjustments, and gain on significant asset sales, as certain items that impact these measures have not occurred, are out of our control or cannot be reasonably predicted.  Accordingly, reconciliations to the non-GAAP guidance measures are not available without unreasonable effort.  Please note that the unavailable reconciling items could significantly impact our financial results.

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges.  Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability.  Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

China’s Expanding Effort to Win in Microchips

Thu, 07/27/2017 - 10:36

The global battle for preeminence, or at least national independence, in semiconductor technology and manufacturing continues to heat up, with Europe, China, Japan, and the U.S. all vying for sway. A fascinating article (China’s Next Target: U.S. Microchip Hegemony) in today’s Wall Street Journal examines China’s expanding efforts to win the battle.

About 90 percent of the $190 billion worth of chips consumed in China today – roughly 58.5 percent of the global market, according to the article written by Bob Davis and Eva Dou – are imported or produced by foreign-owned entities. No doubt we are covering old ground here, but the account explores China’s rapidly expanding efforts.

Here’s an excerpt: “‘We cannot be reliant on foreign chips,’ said China’s vice premier, Ma Kai, this year at a meeting of the National People’s Congress, China’s legislature. He heads a Communist Party committee that designed the country’s plan in 2014. Beijing created a $20 billion national chip financing fund—dubbed the ‘Big Fund’—and set goals for China to become internationally competitive by 2030, with some companies becoming market leaders.”

Davis and Dou write, “Today, the industry is riven by a nationalist battle between China and the U.S., one that reflects broad currents reshaping the path of globalization. Washington accuses Beijing of using government financing and subsidies to try to dominate semiconductors as it did earlier with steel, aluminum, and solar power. China claims U.S. complaints are a poorly disguised attempt to hobble China’s development. Big U.S. players like Intel Corp. and Micron Technology Inc. find themselves in a bind—eager to expand in China but wary of losing out to state-sponsored rivals.”

Link to The Wall Street Journal article: https://www.wsj.com/articles/chinas-next-target-u-s-microchip-hegemony-1501168303

Nick Nystrom Appointed Interim Director of PSC

Thu, 07/27/2017 - 10:29

PITTSBURGH, Penn., July 27, 2017 — Nick Nystrom, senior director of research at the Pittsburgh Supercomputing Center (PSC), has been appointed Interim Director of the center. Nystrom succeeds Michael Levine and Ralph Roskies, who have been co-directors of PSC since its founding in 1986.

During the interim period, Nystrom will oversee PSC’s state-of-the-art research into high-performance computing, data analytics, science and communications, working closely with Levine and Roskies to ensure a smooth and seamless transition.

A research physicist, Nystrom joined PSC in 1992. For the past year he has led the center’s scientific research teams, including the User Support for Scientific Applications, Biomedical and Public Health Applications groups, as well as a core team targeting converged high-performance computing and big data production resources and strategic applications.

Since joining PSC, Nystrom has developed massively scalable applications and conducted research in areas highly relevant to the work of the center, including quantum chemistry, software and performance engineering and many others. He has been instrumental in computer architecture initiatives, including leading the team that developed Bridges, a new kind of supercomputer that brings high-performance computing together with artificial intelligence and big data. Nystrom has also been a key player in the development of new collaborations within our universities and across academia, industry and government.

PSC, a joint effort of Carnegie Mellon University and the University of Pittsburgh that is housed administratively in the Mellon College of Science at CMU, has hosted 19 world-class supercomputers in the 31 years of its operations. A search committee to identify a permanent leader of the center will be co-chaired by Rebecca Doerge, dean of the Mellon College of Science at CMU, and Rob Rutenbar, senior vice chancellor for research at Pitt, with input from faculty at both institutions and senior staff at PSC.

About PSC

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

Source: PSC

EMSL Celebrates 20 Years of Scientific Achievement

Thu, 07/27/2017 - 10:27

RICHLAND, Wash., July 27, 2017 – A unique user facility that has helped scientists around the world shape their ideas and obtain answers to some of the most challenging scientific questions is celebrating two decades of achievement next week.

Scientists, community leaders and others will gather Aug. 3-4 to celebrate the achievements of the first 20 years of EMSL, the Environmental Molecular Sciences Laboratory, a U.S. Department of Energy Office of Science User Facility located at DOE’s Pacific Northwest National Laboratory in Richland, Wash.

Using EMSL resources, scientists worldwide have authored more than 6,000 scientific manuscripts, which have garnered more than 200,000 citations as researchers build on one another’s work. Those findings have helped chart the course of subsequent studies and shape the direction of current endeavors.

EMSL was proposed by former PNNL director William R. Wiley to explore connections at the molecular scale between the physical, mathematical and life sciences. While much of its initial focus was on environmental challenges such as the fate and transport of contaminants beneath the surface, the laboratory’s scope has grown remarkably. EMSL resources have contributed to important findings about the environment, atmospheric processes, biofuels and bioproducts, microbiology and life sciences, catalysis, energy storage, clean fuels and other topics.

A constant throughout EMSL’s years has been the creation of new ways to monitor what’s happening at the molecular level in a range of materials and organisms. One of the richest areas of exploration has been in a field known as subsurface science. Much of the leading work on the use of microbes to transform and sequester radioactive waste and heavy metals in soils and deep sediments has been done by scientific users who have come together through EMSL collaborations. Such research has opened up other areas, such as a deeper understanding of microbial communities that are important for the production of biofuels and bioproducts.

EMSL scientists have led the way to develop new ways of looking at whole proteins in live cells of bacteria and other organisms, yielding a broad view of how proteins actually carry out their functions in real time. And EMSL scientists have worked with colleagues around the country to solve the structures of proteins that contribute to infectious disease — a critical step for the creation of vaccines or better treatments someday.

In the area of energy storage, scientific users have made some of the best real-time observations ever made about what’s actually happening to the materials inside a battery as it operates. The findings about battery chemistry, including how a battery loses energy as it stores and releases its charge, have contributed to our knowledge about how to develop longer-lasting, higher-capacity batteries.

One of EMSL’s most widely-known contributions is the creation of NWChem, an open-source high-performance-computing software package that helps scientists understand problems in the realm of molecular chemistry and biochemistry. The software, which helps scientists simulate molecular structures and reaction mechanisms, has been downloaded more than 70,000 times.

The EMSL facility covers an area bigger than four football fields, filled with premier instruments for molecular environmental science and with a production computing system, all designed to help scientists answer important questions about the environment, biology and energy. But more important than the instruments is the scientific expertise and leadership EMSL personnel offer to users. EMSL is home to more than 150 scientists, many with unique expertise. Collectively these scientists have centuries’ worth of knowledge about what type of molecule might yield its secrets to, say, an NMR probe vs. a more conventional mass spectrometer, or when a measure of an organism’s proteins will yield more meaningful information than a measure of its DNA.

By working with both experimental and computational scientists who have come from more than 40 nations as well as every state in the United States, EMSL personnel have a feel for the pulse of research in a way that few user facilities possess. Every visitor brings unique knowledge and questions that remain part of the EMSL brain trust long after they depart, informing subsequent explorations.

“Scientific user facilities like EMSL bring together the resources, the tools, and most importantly the people to solve some of the most difficult scientific challenges,” said Liyuan Liang, EMSL director. “Scientists from academic, industry and laboratories across the world join forces to tackle problems that otherwise might go unaddressed simply because they are too complex and challenging for any one scientist. We bring teams of scientists together to enable discovery.”

One of the most exciting areas for EMSL now is its working relationships with other DOE Office of Science User Facilities, including the Joint Genome Institute and the Atmospheric Radiation Measurement Climate Research Facility. Each offers unique resources, and for certain questions the suite of facilities offers an uncommonly broad view of certain scientific challenges.

The agenda next week includes a scientific symposium with a talk by X. Sunney Xie, a former EMSL scientist and now Mallinckrodt Professor of Chemistry and Chemical Biology at Harvard, who will speak about “life at the single molecule level.”

Also next week is a scientific meeting on “Multi-omics for microbiomes.” More than 150 scientists from around the world will gather in Pasco, Wash. Aug. 1-3 to discuss the activity of communities of microbes and their importance everywhere from our bodies to the planet. The meeting is sponsored both by EMSL, as its annual meeting for its users, and by the laboratory’s Microbiomes in Transition initiative.

About EMSL

EMSL, the Environmental Molecular Sciences Laboratory, is a DOE Office of Science User Facility. Located at Pacific Northwest National Laboratory in Richland, Wash., EMSL offers an open, collaborative environment for scientific discovery to researchers around the world. Its integrated computational and experimental resources enable researchers to realize important scientific insights and create new technologies. Follow EMSL on Facebook, LinkedIn and Twitter.

About PNNL

Interdisciplinary teams at Pacific Northwest National Laboratory address many of America’s most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy’s Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, Instagram, LinkedIn and Twitter.

Source: PNNL

Hyperion: Storage to Lead HPC Growth in 2016-2021

Thu, 07/27/2017 - 09:45

Global HPC external storage revenues will grow at a 7.8% compound annual rate over the 2016-2021 timeframe, according to an updated forecast released by Hyperion Research this week. HPC server sales, by comparison, will grow at a more modest 5.8% CAGR to $14.8 billion; that said, servers are still by far the largest chunk of the HPC market. Storage will hit $6.3 billion in 2021, according to Hyperion.
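The compound-growth figures can be sanity-checked by backing out the 2016 base revenue implied by each 2021 projection. This is our own illustrative arithmetic, not a calculation from the Hyperion report:

```python
# Back out the 2016 base revenue implied by a 2021 projection and a CAGR.
# The projections ($6.3B storage, $14.8B servers) and growth rates (7.8%,
# 5.8%) are from the Hyperion forecast as reported; the derivation below
# is our own sketch.

def implied_base(value_2021: float, cagr: float, years: int = 5) -> float:
    """Starting value implied by an ending value compounded at `cagr`."""
    return value_2021 / (1 + cagr) ** years

storage_2016 = implied_base(6.3, 0.078)   # roughly $4.3B in 2016
servers_2016 = implied_base(14.8, 0.058)  # roughly $11.2B in 2016

print(round(storage_2016, 1), round(servers_2016, 1))
```

The implied bases (storage around $4.3 billion, servers around $11.2 billion) are consistent with servers remaining the largest segment of the HPC market throughout the forecast period.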

The HPC storage arena has been an interesting place of late. Just yesterday Seagate and Cray announced a deal in which Cray will take over the Seagate ClusterStor line. “Adding Seagate’s ClusterStor product line to our DataWarp and Sonexion storage products will enable us to provide a more complete solution to customers,” said Peter Ungaro, CEO, Cray, in the official release. “Current ClusterStor customers and partners can be assured that we will continue to advance and support the ClusterStor products.”

In the latest Hyperion forecast, storage revenue will expand “fastest in EMEA (9.8% CAGR) and Asia-Pacific without Japan (9.6% CAGR). External HPC storage growth in North America will remain robust (6.2% CAGR).”

Interestingly and perhaps not surprisingly, government labs (17.1%), academic institutions (15.2%), and defense (18.2%) were the biggest consumers of HPC external storage in 2016. Hyperion says it now tracks HPC storage and software across 26 countries and 12 verticals. Hyperion defines external HPC storage as storage located outside the server cabinets, which can include solid-state, disk, and tape media.

Although focused on storage, the Hyperion update suggests HPC growth overall will outpace the general IT market and singles out the following drivers:

  • AI/Deep Learning. “Requirements for new HPC systems with a broad range of architectures to support development and operational capabilities in the artificial intelligence sector – especially in the area of deep learning.”
  • Enterprise and Cloud. The continued migration and expansion of enterprise HPC workloads to cloud-based ecosystems is spurring storage demand. “Hyperion expects that in many cases, HPC in the cloud operations will be used not as a replacement scheme but instead to augment critical on-premise HPCs capabilities,” according to the report.
  • Big Data Analytics. The growth of new big data analytics workflows in non-traditional HPC environments, “especially in the finance, personalized medicine, and cyber security sectors.”
