HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

ANSYS Delivers New Solutions For HPC Electronic Designs

Tue, 06/06/2017 - 08:02

PITTSBURGH, June 6, 2017 — ANSYS (NASDAQ: ANSS) is expanding its best-in-class engineering simulation architecture, bringing the advanced computer science of elastic computing, big data and machine learning to the physics-based world of engineering simulation. Available today, ANSYS RedHawk-SC, ANSYS Path-FX and ANSYS CMA enable automotive, mobile and high-performance computing (HPC) organizations to accelerate electronic product innovation and improve performance, reliability and cost.

Today’s leading automotive, HPC and mobile electronics require semiconductor chips built on advanced process nodes to reduce processor power consumption. As the scale and complexity of these chips increase, design tools must analyze and manage data quickly and accurately. RedHawk-SC, Path-FX and CMA empower organizations to meet the growing electronic system demands of advanced driver-assistance systems, mobile phones, GPU-powered artificial intelligence and data center networking.

RedHawk-SC brings unprecedented performance and scalability to the production-proven ANSYS RedHawk platform. RedHawk-SC’s elastic compute engine gives users a 10x improvement in capacity and scalability over previous releases of RedHawk. Elastic compute enables customers to efficiently leverage commodity computers in private or public cloud environments, without requiring expensive, dedicated high-memory machines.

With the addition of ANSYS SeaScape technologies, RedHawk users now have access to big data analytics and popular machine learning packages that help reduce power while increasing the performance and reliability of semiconductor designs. Customers can process large amounts of data from different physics-based simulations, together with chip design data, to drive optimizations that improve the cost, performance and reliability of designs.

ANSYS Path-FX supports users with on-chip variability analyses that are essential to advanced process node designs, where power, timing and parametric yield are critical. Path-FX integrates with RedHawk-SC to provide comprehensive timing and voltage variability analysis, complementing timing sign-off tools from third-party vendors. ANSYS CMA provides a direct link for electronic system designers to accurately model and analyze power integrity and signal integrity effects efficiently through sophisticated chip power models produced by RedHawk-SC.

“Our focus on multiphysics simulation for chip-package-system delivers unique and significant customer value to a large and growing number of the top semiconductor and electronics companies,” said John Lee, general manager, ANSYS. “We are excited to be at the forefront of applying advanced computational sciences such as machine learning and big data to drive results that enable our automotive, mobile and HPC electronic system customers to realize their product promise.”

The new products will be highlighted at the Design Automation Conference, June 18-22 in Austin, Texas, and at the ANSYS Executive Seminars in Silicon Valley and Austin in June. The seminars are open to key customers and industry analysts and will focus on machine learning and on advanced semiconductor and automotive reliability flows.

About ANSYS, Inc.

If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and create products limited only by imagination.  Founded in 1970, ANSYS employs thousands of professionals, many of whom are expert M.S. and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Headquartered south of Pittsburgh, Pennsylvania, U.S.A., ANSYS has more than 75 strategic sales locations throughout the world with a network of channel partners in 40+ countries. Visit www.ansys.com for more information.

Source: ANSYS


IBM Clears Path to 5nm with Silicon Nanosheets

Mon, 06/05/2017 - 16:37

Two years after announcing the industry’s first 7nm node test chip, IBM and its research alliance partners GlobalFoundries and Samsung have developed a process to build 5 nanometer (nm) chips by combining a novel switch architecture with advanced lithography techniques.

The heart of the R&D advance is a new gate-all-around architecture that employs stacked silicon nanosheets, replacing the FinFET structure used in today’s leading processors. Instead of having one single vertical fin, the horizontal “stack” can send signals through four gates, providing for better leakage control at smaller scales.

IBM Research scientist Nicolas Loubet holds a wafer of chips with 5nm silicon nanosheet transistors (Photo Credit: Connie Zhou)

“We understood that the FinFET structure is running out of steam at 7nm and we had to invent a new device structure which could continue the scaling for several more generations,” said Mukesh V Khare, vice president at IBM Research, in an interview with HPCwire.

According to IBM, the gate-all-around architecture paves the way for fingernail-sized chips (~600 mm², says Khare) packed with 30 billion transistors—50 percent more transistors than IBM’s 7nm process enables. IBM estimates that this would provide close to a 40 percent improvement in performance for the same power or 75 percent power savings at matched performance compared with today’s leading-edge 10nm technology.
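
As a quick sanity check of those figures (our arithmetic, not IBM's), the quoted die size and transistor count imply a density of roughly 50 million transistors per square millimeter, and the "50 percent more" claim implies about 20 billion transistors for the same die at 7nm:

```python
# Back-of-the-envelope check of the figures quoted above.
# Inputs come from the article; the derived numbers are our own arithmetic.
chip_area_mm2 = 600            # "fingernail-sized" chip, ~600 mm^2 per Khare
transistors_5nm = 30e9         # 30 billion transistors at 5nm
gain_over_7nm = 1.50           # "50 percent more transistors" than 7nm

density = transistors_5nm / chip_area_mm2        # transistors per mm^2
implied_7nm = transistors_5nm / gain_over_7nm    # same-size die at 7nm

print(f"5nm density: {density / 1e6:.0f} million transistors/mm^2")  # ~50
print(f"Implied 7nm count: {implied_7nm / 1e9:.0f} billion")         # 20
```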

The same Extreme Ultraviolet (EUV) lithography approach that IBM used to produce the 7nm test node was applied to the nanosheet transistor architecture. With EUV, the width of the nanosheets can be adjusted continuously, within a single manufacturing process. “This adjustability permits the fine-tuning of performance and power for specific circuits – something not possible with today’s FinFET transistor architecture production,” says IBM.

IBM and its research partners have built the transistors on 300mm wafers. “We put the entire process together, measure, validate and then our partners get full access to the technology to take it from proof point with us at IBM to manufacturing,” said Khare.

Khare emphasized, “These are not one chip or one device types of proof point; these are put together on a manufacturing scale fab which is used for research by IBM Research alliance, so this is a realistic toolset, the toolset that will eventually mature into the manufacturing toolset.”

Market analyst Jim McGregor of Tirias Research did not hesitate to call this a credible advance. “There are basically three pillars of innovation in semiconductor manufacturing,” he said. “One is the lithography process, which we’ve been completely constrained on but we’re slowly moving to EUV. The second is materials technology, which we’ve been advancing rapidly over the past decade through strained silicon and other chemical makeups. The third area is transistor design. That has remained stagnant for many years, a couple decades, until we went to FinFET over the past couple years. However, FinFETs are going to have their limitations architecturally, and this [advance] is addressing that limitation and allows us to continue scaling, and continue basically Moore’s law. It also is injecting new materials that are going to be critical going forward, such as the nanosheets and nanowires.”

As promising as the technical merits may be, however, the economic considerations of Moore’s law cannot be forgotten, said McGregor. Those economics will likely leave semiconductor makers like GlobalFoundries looking to leverage FinFET technology for as long as possible to recoup the major investments that it and its partners have made. “I would estimate that at 5nm you will still see traditional FinFET,” said McGregor. “You may see IBM’s nanosheet architecture creep in later on, maybe as a sub-node to 5nm or a following process.”

IBM Research has already completed its work on the 7nm process that it introduced two years ago and transferred the technology to its manufacturing partners. IBM has said that its 7nm node will reach “manufacturing maturity” toward the end of this year or early next year. “The technology is very, very close,” said Khare, “and the cycle continues with another breakthrough. We will continue to work with our manufacturing partners to make this technology fully available; eventually they will decide the right timing based both on business leads as well as market drivers.”

Like the 7nm test chip before it, the latest semiconductor proof point is part of IBM’s $3 billion, five-year investment in chip R&D announced in 2014. As a fundamental building block for semiconductor technology, node advances will benefit every market segment that relies on silicon technology scaling, including high-performance computing, enterprise, and mobile. IBM, not surprisingly, is particularly focused on enhancing its cognitive computing and cloud platforms.

“Although a lot of people think of Intel when they think of semiconductor advancements, you have to remember that IBM and their development consortium account for a vast amount of innovation in semiconductor processing over the past 15 years or so especially,” said McGregor. “I’d say almost half of the major innovations have probably come from that alliance. It also helps push that new technology into manufacturing because these companies work so closely together.”

Feature image caption: A scan of IBM Research Alliance’s 5nm transistor, built using an industry-first process to stack silicon nanosheets as the device structure (Photo credit: IBM)


Amazon Showcases “How-to” Blogs on Genomics Workflows

Mon, 06/05/2017 - 12:22

Computational science is often not the strongest suit for life scientists and that’s been a factor in the relatively modest adoption of the cloud for genomics research. Last week Amazon took a step towards easing the path for bioscience researchers with a series of blogs explaining how to set up genomics workflows on AWS.

The series of four blogs by Aaron Friedman, healthcare and life sciences architect, and Angel Pizarro, scientific computing technical business development manager at AWS, is reasonably detailed and includes samples of code as well as descriptions of various workflows. The first blog covers the general architecture and highlights three common layers in a batch workflow – job, batch, and workflow.

“At its core, a genomics pipeline is similar to a series of Extract Transform and Load (ETL) steps that convert raw files from a DNA sequencer to a list of variants for one or more individuals. Each step extracts a set of input files from a data source, processes them as a compute-intensive workload (transform), and then loads the output into another location for subsequent storage or analysis,” write Friedman and Pizarro in the introductory first blog.

“These steps are often chained together to build a flexible genomics processing workflow. The files can then be used for downstream analysis, such as population scale analytics with Amazon Athena or Spark on Amazon EMR. These ETL processes can be represented as individual batch steps in an overall workflow.”
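
To make the batch-step idea concrete, here is a minimal sketch of submitting one such ETL step to AWS Batch with boto3. The queue and job definition names and the S3 paths are hypothetical placeholders, not values from the AWS blog series:

```python
# Minimal sketch: submit one genomics ETL step (e.g., read alignment) to AWS Batch.
# Queue/job-definition names and S3 URIs below are illustrative placeholders.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="align-sample-001",
    jobQueue="genomics-queue",        # hypothetical job queue
    jobDefinition="bwa-mem:1",        # hypothetical Docker-based job definition
    containerOverrides={
        "command": [
            "align.sh",
            "s3://example-bucket/raw/sample-001.fastq.gz",  # extract: input reads
            "s3://example-bucket/aligned/sample-001.bam",   # load: output location
        ],
        "environment": [{"name": "THREADS", "value": "16"}],
    },
)
print("Submitted job:", response["jobId"])
```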

As described by Amazon, the remaining three blogs tackle:

  • Part 2 covers the job layer. We demonstrate how you can package bioinformatics applications in Docker containers, and discuss best practices when developing these containers for use in a multitenant batch environment.
  • Part 3 dives deep into the batch, or data processing layer. We discuss common considerations for deploying Docker containers to be used in batch analysis as well as demonstrate how you can use AWS Batch for a scalable and elastic batch engine.
  • Part 4 dives into workflow layer orchestration. We show how you might architect that layer with AWS services. You take the components built in parts 2 and 3 and combine them into an entire secondary analysis workflow. This workflow manages dependencies as well as continually checking the progress of existing jobs. We conclude by running a secondary analysis end-to-end for under $1 and discuss some extensions you can build on top of this core workflow. (A minimal sketch of this kind of dependency chaining appears after this list.)
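
AWS Batch expresses the dependency management mentioned in Part 4 through the dependsOn parameter of submit_job. A minimal sketch, again with hypothetical queue and job definition names rather than values from the blog series:

```python
# Sketch of the workflow layer: chain dependent steps via AWS Batch's dependsOn.
# Queue and job definition names are illustrative, not from the blog series.
import boto3

batch = boto3.client("batch")

align = batch.submit_job(
    jobName="step-align",
    jobQueue="genomics-queue",
    jobDefinition="bwa-mem:1",
)

# The variant-calling step is held until the alignment job succeeds.
variants = batch.submit_job(
    jobName="step-call-variants",
    jobQueue="genomics-queue",
    jobDefinition="haplotypecaller:1",
    dependsOn=[{"jobId": align["jobId"]}],
)
print("Chained jobs:", align["jobId"], "->", variants["jobId"])
```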

Interestingly, AWS argues its approach to batch processing can be generalized to any type of batch workflow, such as post-trade analytics or fraud surveillance in financial services, or rendering and transcoding in media and entertainment.

Amazon claims the genomics workflow blogs will show users how to optimize their use of Amazon EC2 Spot Instances and save up to 90 percent off traditional On-Demand prices. A fifth blog introduces Amazon life science partners who can facilitate implementing genomics workflows, singling out BioTeam, DNAnexus, Illumina and Seven Bridges.


DARPA Announces Topological Excitations in Electronics Program

Mon, 06/05/2017 - 11:07

June 5, 2017 — DARPA’s Topological Excitations in Electronics program, announced today, aims to investigate new ways to arrange the magnetic moments in electronic materials in novel geometries that are much more stable than the conventional parallel arrangement. If successful, these new configurations could enable bits of data to be made radically smaller than is possible today, potentially yielding a 100-fold increase in the amount of storage achievable on a chip. It could also enable designs for completely new computer logic concepts and even for topologically protected “quantum” bits—the basis for long-sought quantum computers.

“We’ve known for some time that there are some magnetic interactions that favor the magnetic moments being canted in a v-shape rather than the parallel arrangement, which yield a much more stable structure than having them all in parallel,” said Rosa Alejandra “Ale” Lukaszew, a program manager in DARPA’s Defense Sciences Office. “The canted interaction doesn’t allow the electrons to line up parallel to each other, so in order to fit them in a small region they must be configured in a special pattern. These unique geometric patterns, called topological excitations, are very stable and maintain their geometry even when shrunk to very small sizes. But only recently have we had the multiscale models, advanced metrology tools, and understanding of proper material combinations to fully explore this phenomenon.”

Another unique characteristic of topological excitations is that they can be moved at significant speed with a small amount of current, allowing for fast read and write operations if, for example, they are placed on a track that runs in front of a read/write head, Ale said. Such an approach would make it possible to explore novel, 3-D approaches to chip design, enabling storage capabilities of 100 Terabits per square inch, 100 times more than the current limit of 1 Terabit per square inch in laboratory demos.
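
A rough calculation (our arithmetic, not DARPA's) shows why 3-D design enters the picture: at 100 Terabits per square inch, each bit gets only a few square nanometers of planar area, less than a single 10 nm topological excitation would occupy, so a single planar layer cannot reach the target on its own:

```python
# Rough check (our arithmetic, not DARPA's): what does 100 Tbit/in^2 imply per bit?
NM_PER_INCH = 2.54e7                 # 1 inch = 25.4 mm = 2.54e7 nm
bits_per_sq_inch = 100e12            # target: 100 Terabits per square inch

area_per_bit = NM_PER_INCH**2 / bits_per_sq_inch
print(f"Planar area per bit: {area_per_bit:.1f} nm^2")       # ~6.5 nm^2
print(f"Equivalent bit pitch: {area_per_bit ** 0.5:.1f} nm") # ~2.5 nm

# A 10 nm excitation needs roughly (10 nm)^2 = 100 nm^2, far more than
# ~6.5 nm^2, hence the interest in stacking bits in 3-D chip designs.
```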

A key goal of the program is to demonstrate topological excitations smaller than 10 nanometers at room temperature for memory applications. Currently there is preliminary research data showing that it is possible to create skyrmions (a particular type of topological excitation) of this size but only at very low temperatures, Ale said. The smallest sizes achieved to date at room temperature are tens of nanometers, one to two orders of magnitude larger than the new program’s goal. However, sizes less than 10 nm are theoretically possible if the right materials can be found.

“If you can move these skyrmions very fast with low currents, then you also have the possibility of implementing logic,” Ale said. “You’re now not only getting into the memory business, you’re also getting into the processor business, because then you can implement a completely different paradigm for all the typical digital logic gates. If we can achieve sizes smaller than 10 nanometers at room temperature, we want to determine how controllable they are for memory and logic, their sizes, and their dynamics—this whole design space has to be explored. If we demonstrate that these things can move as fast as we think they can, then we could have logic that can go beyond 1 Terahertz, which is the limit right now.”

The program will also explore different materials in the quest for the right properties, as well as other topologically protected states for applications to quantum bits, for example.

“Magnetic materials are not the only type of material that can sustain topological excitations,” Ale said. “There are oxides that can sustain this type of excitation but with charge rather than magnetic moments, so the program is open to other approaches. If there is a possibility of creating skyrmions smaller and with less energetic requirements in an oxide, that could be more interesting than magnetics. And although our focus is on memory storage and logic, if the community has novel ideas for other applications, we’re listening.”

If the research shows topological excitations can achieve the expected gains in memory, processing speed, and power savings, it could eventually have tremendous application to military systems. Manned and unmanned aircraft could fly with much less battery weight on board, allowing them to fly longer and farther, and troops would have fewer batteries to carry on missions, lightening their load, Ale said.

The Topological Excitations in Electronics program seeks expertise in materials science (to address the goal of achieving <10 nm topological excitations at room temperature); physics (to address possible interactions leading to topological excitations of interest, as well as suitable metrology to investigate them); chemistry (to address suitable material combinations); and engineering (to develop proof-of-concept structures that establish applications of topological excitations for memory and logic), along with expertise in multi-function materials, integrated design optimization, and efficient power use.

A Special Notice announcing a webcast Proposers Day on June 16, 2017, was released today and is available on FedBizOpps here: https://go.usa.gov/xNPEs.

Source: DARPA


DARPA Picks Intel, Qualcomm, PNNL, 2 Others to Tackle HIVE Project

Mon, 06/05/2017 - 09:56

Getting the most from big data is an ongoing challenge. The Defense Advanced Research Projects Agency (DARPA) last Friday selected five participants for its Hierarchical Identify Verify Exploit (HIVE) program, announced last summer and intended to develop a new high-performance data-handling platform.

“Today’s hardware is ill-suited to handle such data challenges, and these challenges are only going to get harder as the amount of data continues to grow exponentially,” according to Trung Tran, a program manager in DARPA’s Microsystems Technology Office (MTO) heading up HIVE. The goal is to develop a “powerful new data-handling and computing platform specialized for analyzing and interpreting huge amounts of data with unprecedented deftness.”

Selected for the project are: Intel Corporation (Santa Clara, California), Qualcomm Intelligent Solutions (San Diego, California), Pacific Northwest National Laboratory (Richland, Washington), Georgia Tech (Atlanta, Georgia), and Northrop Grumman (Falls Church, Virginia).

“The HIVE program is an exemplary prototype for how to engage the U.S. commercial industry, leverage their design expertise, and enhance U.S. competitiveness, while also enhancing national security,” said William Chappell, director of MTO, in the release announcing the selections. “By forming a team with members in both the commercial and defense sectors, we hope to forge new R&D pathways that can deliver unprecedented levels of hardware specialization.”

As described by DARPA, a core HIVE goal is the creation of a graph analytics processor that can represent and process relationships in a network far more efficiently than traditional data formats and processing techniques allow. Examples of these relationships among data elements and categories include person-to-person interactions as well as seemingly disparate links between, say, geography and changes in doctor visit trends, or social media and regional strife.

In combination with emerging machine learning and other artificial intelligence techniques that can categorize raw data elements, and by updating the elements in the graph as new data becomes available, a powerful graph analytics processor could discern otherwise hidden causal relationships and stories among the data elements in the graph representations.

DARPA suggests such a graph analytics processor might achieve a ‘thousandfold improvement in processing efficiency’ over today’s best processors, enabling the real-time identification of strategically important relationships as they unfold in the field rather than relying on after-the-fact analyses in data centers.
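
As a toy illustration of the kind of graph representation DARPA describes (not the HIVE processor design itself), relationships among data elements can be held in an adjacency list and traversed to surface indirect links, such as a chain connecting a person to a regional trend:

```python
# Toy illustration of graph-style analytics (not the HIVE processor design):
# store relationships as an adjacency list and search for indirect connections.
from collections import deque

edges = [
    ("alice", "bob"),                # person-to-person interaction
    ("bob", "clinic-visits-up"),     # person linked to an observed trend
    ("region-9", "clinic-visits-up"),
    ("region-9", "social-media-spike"),
]

graph = {}
for a, b in edges:                   # build an undirected adjacency list
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def hops(src, dst):
    """Breadth-first search: relationship hops linking two data elements."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# alice -> bob -> clinic-visits-up -> region-9 -> social-media-spike
print(hops("alice", "social-media-spike"))  # 4
```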

Link to the DARPA release: http://www.darpa.mil/news-events/2017-06-02

Link to more information about HIVE: https://www.fbo.gov/index?s=opportunity&mode=form&id=daa4d6dbee8741f56d837c404eac726d&tab=core&_cview=1

Image source: DARPA


World Community Grid Taps IBM Cloud for Global Humanitarian Challenges

Fri, 06/02/2017 - 10:26

ARMONK, N.Y., June 2, 2017 — IBM (NYSE: IBM) today announced that World Community Grid, an IBM philanthropic initiative which allows anyone with a computer or Android device to contribute to scientific discovery, has migrated to IBM Cloud as it continues to grow and further its mission to support cutting-edge research into important global humanitarian issues.  

World Community Grid has adopted IBM Cloud for 100 percent of its infrastructure, including the infrastructure that prepares the researchers’ data sets, distributes tasks to volunteer devices, validates and aggregates the results and returns the data to the researchers. This system manages the workflow of the approximately 2.5 million virtual experiments performed by World Community Grid volunteers every day from more than 3.4 million devices.   

Launched in 2004, World Community Grid creates a virtual supercomputer by leveraging unused computing power contributed by volunteers around the world to accelerate health and sustainability research. Volunteers participate in World Community Grid by downloading and installing a free software program on their computer or Android devices.  

With the software, a volunteer’s device performs calculations and virtual experiments on behalf of researchers, making use of its compute power while it would be otherwise idle. The results are then transmitted back to researchers, where they are analyzed and used to accelerate research into pressing global challenges such as childhood cancer, Zika, HIV/AIDS, solar energy and clean water access.  
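
The mechanism described above follows the classic volunteer-computing loop: fetch a work unit, compute while the device is idle, and report the result. A generic sketch of that pattern with placeholder functions, illustrative only and not World Community Grid's actual client code:

```python
# Generic sketch of the volunteer-computing loop described above; placeholders
# throughout -- this illustrates the pattern, not World Community Grid's client.
import time

def device_is_idle():
    # A real client checks CPU load, user activity, battery state, etc.
    return True

def fetch_work_unit(server):
    # A real client downloads an input data set plus a task description.
    return {"id": 42, "inputs": "..."}

def run_experiment(work_unit):
    # The compute-intensive virtual experiment runs here.
    return {"id": work_unit["id"], "result": "..."}

def upload_result(server, result):
    # Results go back to the project servers for validation and aggregation.
    print("uploaded result for work unit", result["id"])

SERVER = "https://example.org/grid"   # hypothetical endpoint
for _ in range(3):                    # a real client loops indefinitely
    if device_is_idle():
        wu = fetch_work_unit(SERVER)
        upload_result(SERVER, run_experiment(wu))
    else:
        time.sleep(60)                # back off while the device is in use
```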

Prior to migrating to IBM Cloud, World Community Grid was hosted at a traditional data center. This infrastructure is responsible for dividing up research tasks among volunteer devices and then validating and assembling results for scientists as they are completed and returned by World Community Grid volunteers. World Community Grid wanted a more flexible hosting environment that allowed it to scale more easily.

World Community Grid will benefit from IBM Cloud’s global footprint of more than 55 data centers across 19 countries and dedicated network to improve speed and performance for volunteers around the world. As part of the migration, World Community Grid has also adopted DevOps best practices and deployed IBM and open source automation tools such as IBM UrbanCode Deploy, which will allow it to more efficiently perform website updates, technical upgrades and monitor for system issues.  

“World Community Grid makes it possible for computationally intensive research projects that would have taken years to be completed in weeks or months, and faster results means benefits are delivered sooner to patients and communities around the world,” said Jennifer Ryan Crozier, IBM Vice President of Corporate Citizenship and President of the IBM International Foundation. “By moving to IBM Cloud, World Community Grid is poised for years of growth and will leverage automation tools to make our development and deployment processes more efficient.”

Since its founding, World Community Grid has supported 27 research projects in critical areas including cancer, HIV/AIDS, Zika and Ebola viruses, genetic mapping, sustainable energy, clean water and ecosystem preservation. To date, World Community Grid has connected researchers to one half billion U.S. dollars’ worth of free supercomputing power. More than 730,000 individuals and 440 institutions from 80 countries have donated more than one million years of computing time on more than three million desktops, laptops and Android mobile devices since 2004. Volunteer participation has helped researchers to identify potential treatments for childhood cancer, more efficient solar cells and more efficient water filtration.

To learn more about World Community Grid and volunteer to contribute your unused computing power, please visit: https://www.worldcommunitygrid.org/   

To learn more about IBM Cloud, please visit: https://www.ibm.com/cloud-computing/

Source: IBM


Huawei, ESI Build CAE Public Cloud for Product Design

Fri, 06/02/2017 - 10:20

PARIS, June 2, 2017 – ESI Group, leading innovator in Virtual Prototyping software and services for manufacturing industries, is pleased to unveil the early results of its partnership with leading information and communications technology (ICT) solutions provider Huawei, less than a year after the two companies signed a memorandum of understanding at HUAWEI CONNECT 2016 in Shanghai, China, last September. At Hannover Messe 2017, ESI and Huawei jointly announced a Computer-Aided Engineering (CAE) Public Cloud Solution to support the digital transformation of the manufacturing industries. The joint offering provides designers and engineers with a public cloud-based CAE solution across multiple physics and engineering disciplines, integrating ESI’s virtual engineering solutions with Huawei’s High-Performance Computing (HPC) Infrastructure-as-a-Service capabilities through browser-based modeling, user analytics, 2D and 3D visualization and real-time collaboration tools.

Validated on the Open Telekom Public Cloud, the joint solution already supports a variety of ESI applications for CAE: Virtual Performance Solution for compute-on-demand; a general-purpose Computational Fluid Dynamics (CFD) solution based on the well-known open source solver OpenFOAM™; a sand casting vertical application powered by ESI Visual technology and ESI ProCAST; and the data analytics tool ESI MINESET. The combination enables online collaborative product development across the globe, large-scale simulations, and analysis of massive data. Customers will experience improved efficiency, cost optimizations, enhanced green credentials and other benefits.

Sun Jiawei, Director, IT Business Development Department, Huawei, said, “Huawei helps customers achieve business success by sticking to our ‘Openness, Cooperation, Win-Win’ policy and devoting ourselves to establishing a positive cloud ecosystem. We are delighted to work with ESI Group to jointly develop the public cloud solution and better serve customers with greater product choice.”

Sanjay Choudhry, Vice President, Cloud Business Unit at ESI, comments, “The ESI HPC/CAE platform on the Open Telekom Cloud powered by Huawei is designed to address the complex demands of engineering organizations. The fully browser-based cloud platform solves large multi-physics problems in a highly scalable and extremely easy-to-use environment using a workflow-based approach. We are very excited to be able to showcase this at the Hannover event.”

Join ESI at the Hannover Messe CAE Forum in Hall 6/L46 and attend our live presentations:

  • “Modeling of Metallic Additive Manufacturing Processes” (Monday, April 24th – 12:40 pm)
  • “Industrial Data Analytics Platform for Industry 4.0” (Tuesday, April 25th – 12:20 pm)
  • “Virtual Prototyping: From Manufacturing to Performance” (Wednesday, April 26th – 12:20 pm)
  • “Developing suitable and energy-efficient drive systems through multiphysics system simulation” (Thursday, April 27th – 12:20 pm)
  • “Virtual Car Prototyping in a Realistic Driving Environment” (Friday, April 28th – 12:20 pm)

When? 24-28 April, 2017

Where? Hannover Fairground in Hanover, Germany

For more info, please visit: www.esi-group.com/company/events/2017/hannover-messe-2017

Join ESI’s customer portal myESI to get continuously updated product information, tips & tricks, view the online training schedule, and access selected software downloads: myesi.esi-group.com

For more ESI news, visit: www.esi-group.com/press

About ESI Group

ESI Group is a leading innovator in Virtual Prototyping software and services. Specialist in material physics, ESI has developed a unique proficiency in helping industrial manufacturers replace physical prototypes by virtual prototypes, allowing them to virtually manufacture, assemble, test and pre-certify their future products. Coupled with the latest technologies, Virtual Prototyping is now anchored in the wider concept of the Product Performance Lifecycle, which addresses the operational performance of a product during its entire lifecycle, from launch to disposal. The creation of Hybrid Virtual Twins, leveraging simulation, physics and data analytics, enables manufacturers to deliver smarter and connected products, to predict product performance and to anticipate maintenance needs.

ESI is a French company listed in compartment B of NYSE Euronext Paris. Present in more than 40 countries, and addressing every major industrial sector, ESI Group employs about 1200 high-level specialists around the world and reported annual sales of €141 million in 2016. For more information, please visit www.esi-group.com.

About Huawei

Huawei is a leading global information and communications technology (ICT) solutions provider. Our aim is to enrich life and improve efficiency through a better connected world, acting as a responsible corporate citizen, innovative enabler for the information society, and collaborative contributor to the industry. Driven by customer-centric innovation and open partnerships, Huawei has established an end-to-end ICT solutions portfolio that gives customers competitive advantages in telecom and enterprise networks, devices and cloud computing. Huawei’s 180,000 employees worldwide are committed to creating maximum value for telecom operators, enterprises and consumers. Our innovative ICT solutions, products and services are used in more than 170 countries and regions, serving over one-third of the world’s population. Founded in 1987, Huawei is a private company fully owned by its employees.

Source: ESI Group


Interview with PRACE Ada Lovelace Award Winner Dr. Frauke Gräter

Thu, 06/01/2017 - 16:27

Bioscientist Dr. Frauke Gräter of the Heidelberg Institute for Theoretical Studies and University of Heidelberg was awarded the second annual PRACE Ada Lovelace Award for HPC at PRACEdays17 in Barcelona last month. Using advanced supercomputing techniques to reverse-engineer the mysteries of nature, Gräter is on the vanguard of the exciting field of materials science.

The PRACE Ada Lovelace Award was created to recognize women who have made outstanding contributions to high-performance computing research in Europe. It is named in honor of English mathematician Augusta Ada Byron Lovelace (1815-1852), credited with being the world’s first computer programmer. Winners receive €1,000 as well as a certificate and an engraved crystal trophy.

Dr. Frauke Gräter (left) with Dr. Toni Collis at PRACEdays17

As leader of the molecular biomechanics group at the Heidelberg Institute for Theoretical Studies, Gräter runs advanced computational techniques on some of Europe’s largest supercomputers to study how mechanical forces impact bio-compatible materials, like blood, silk and nacre (the iridescent substance commonly known as mother of pearl). Her project, “Micromechanics of Biocomposite Materials,” was awarded 11.5 million core hours on the Hermit supercomputer (a precursor to “Hazel Hen” at the High Performance Computing Center Stuttgart) by PRACE under the 9th Call for Proposals for Project Access.

Women in HPC Executive Director and founder Dr. Toni Collis presented Gräter with the award during a ceremony on the final day of PRACEdays17. I had a chance to speak with Dr. Gräter afterwards about her research goals, expectations for exascale, and views on attracting more women into HPC.

HPCwire: Congratulations on winning the PRACE Ada Lovelace award — what does it mean to you?

Dr. Frauke Gräter: It’s an honor and there’s a need for emphasizing women in HPC. I’m happy to represent women in HPC research, but I do think though that in the end women need to compete overall with men, so hopefully there are also good reasons to award more standard prizes to women.

HPCwire: We’re here at PRACEdays in lovely Barcelona, what is the significance of PRACE for your research?

Gräter: It’s an enabler and a very important infrastructure. We do profit from PRACE, not just practically — I do computations on PRACE computers but also in terms of the visibility and political impact. PRACE is very important to make people in Europe, especially politicians, aware that [HPC] is an important scientific tool.

HPCwire: And as a computational resource, how significant is it to your work?

Gräter: I would say in the last few years, 10 or 20 percent of our HPC time we got through PRACE, so it’s not a majority. Because we are in Germany, we are well off in terms of supercomputing centers, so we mainly directly send our proposals there; it’s a very efficient route. Forschungszentrum Jülich, HLRS [High Performance Computing Center Stuttgart] and SuperMUC [Leibniz Supercomputing Centre] is where we also do calculations, so PRACE is an interesting add-on but I see that within the European Union for some other countries this is the way to get computing time.

HPCwire: What is the work of the molecular biomechanics group at Heidelberg University?

A small mother of pearl shell nested inside an abalone shell. Source: Shutterstock

Gräter: We work in one part on materials that nature has made that are fascinating and also interesting from an application point of view, which is silk and nacre – actually this award was partly because of our work with nacre. Nacre is also known as mother of pearl. It’s mainly just like chalk, so a very cheap material, calcium carbonate that you find everywhere. It’s very regular and on a nanometer scale. There are protein layers in between these crystal tablets and this makes nacre so special. With the computer we model the interactions of the living net of protein with the inorganic material, the calcium carbonate. This shiny material inside shells is mechanically so robust – it’s a way that shells protect themselves.

HPCwire: What are some of the real-world benefits of your research to society?

Gräter: Industry tries to substitute other materials like steel with more lightweight materials, also bio-compatible materials. Both nacre and silk are candidates, not necessarily to use them directly but, if you understand the way nature has built them, to mimic these features, these strategies, in synthetic materials. That’s the attempt — and computer simulations can very much help in seeing what are the screws we can use to make artificial materials better than they are now.

HPCwire: We are seeing more attention to diversity and inclusion in HPC — what can the community do to encourage broader engagement in science and computing (STEM/MINT*) fields?

Gräter: I think efforts on all levels are needed, and I do see them happening. We just had a girls day at the institute where school girls would come and look at how to work at the computer, because I think in many cases boys really like to play computer games more than girls do. That’s one entrance way. Then they start to program and think it’s fun and want to do science with it. You don’t find this for girls as much; you find them mostly interested in the topics. They want to learn about how this protein in your body works, and then the computer is a good tool one can use to explore. Girls are in medicine and biology, they want to know the mechanism in the body, so in the lab you find women and then they come to computers as a very good tool to learn even more. This is a way that I think we can attract females into the field, by the scientific question, not so much by pushing them into the next programming class, so to say. First the motivation, then the programming – more by curiosity; that was my own motivation.

HPCwire: Can you point to positive signs and actions you are witnessing?

Gräter: So in Germany, these girls days have been implemented in many cities and universities, and then there are some mentoring networks available, especially to women early career researchers, so I see many initiatives. It still really hasn’t gone all the way, of course, but it needs time. I do see positive developments. I for example always had a fairly okay male-to-female ratio in the group; and I think being a role model in the lectures myself is helpful.

HPCwire: Are you seeing greater involvement of women at events, on panels?

Gräter: I think it is now expected to have a woman on the panel – it’s a good thing, but I think a quota here would be very problematic because you might end up with a woman that’s not in terms of her expertise considered appropriate for the panel. I think this consciousness is there so it happens, but I’m not fond of quotas for these things. It can make things go backwards; this I see problematic.

HPCwire: You participated in the closing panel [at PRACEdays], “The gap between scientific code development and exascale technology” — would you share some of your thoughts on that topic and on how your research codes will benefit from exascale?

Gräter: As was brought up, the cases where you have a scientific question that really needs exascale are rare so for my field actually there is now the attempt to simulate whole cells and at the moment we still do single protein. [There’s an opportunity] to make it more complex and big when we have exascale: let’s simulate the cell, but at the same time, the time-scales of interest go from nano-seconds and micro-seconds to needing to simulate for seconds, and so solving one problem can become much more complex. You all of the sudden need to be more accurate to extrapolate to even longer time scales, so this will be very hard to come up with well-posed scientific problems that you can validate and so forth that make use of an exascale computer. I think we will fill these exascale computers up and we will learn through that, but it will be a learning curve by actually doing these calculations there.

HPCwire: So if you have a petaflops machine, what percentage of that machine do you need at this point?

Gräter: We often actually can easily use even a small fraction, 20 percent; if we have a long time slot, many CPU hours, a fraction at a time is totally fine – so we easily need millions and millions of CPU hours, but the scaling must not be such that we fill up the whole computer – so this embarrassingly parallel way of working is something we can happily do for our scientific question, but that’s not exactly what an exascale machine is for. So here I see a vulnerable point.

I think there is a chance from my field there will come applications, but it needs very careful thinking about which ones.

HPCwire: Coming off that question, what are the other gaps or pain points you see in HPC? What developments or technologies would most further your research? 

Gräter: I think machine learning will move into my field and this whole question of data storage is an important one on the technical side. Not the I/O during the simulations, but the data handling in post-processing because that’s also what you first of all don’t apply compute time for, so you’re left with your data and the analysis is not yet developed such that you would do this in parallel and then it’s distributed and you want to visualize it – so a cloud type of high performance computer with a very high speed interconnect is something we will also need at that point.

HPCwire: In the closing panel, you spoke of AI as an exciting development that is also important for attracting young people into the field. How will you bring AI into your workflow?

Gräter: I see the first people just doing it – you ask colleagues and they are doing AI. In our case, there is opportunity for improvement of parameters by AI, also the analysis of data and substituting simulation by AI, so I see various aspects; it will never make our physics based models disappear but it’s a complementary way for us to advance.

*STEM/MINT: STEM is a popular but not universal acronym referencing the fields of science, technology, engineering and math. The equivalent concept in Germany is “MINT,” which stands for math, information technology, natural science and technology.


HSA Foundation Takes HSA PRM Test Suite Open Source

Thu, 06/01/2017 - 12:46

BEAVERTON, Ore., June 1, 2017 – The HSA Foundation has made available to developers the HSA PRM (Programmer’s Reference Manual) conformance test suite as open source software. The test suite is used to validate Heterogeneous System Architecture (HSA) implementations for both the HSA PRM Specification and HSA PSA (Platform System Architecture) specification.

With this addition to the already available HSA Runtime Conformance tests, HSA developers now have a fully open source conformance test suite for validating all aspects of HSA systems.

HSA is a standardized platform design that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices. It allows developers to easily and efficiently apply the hardware resources—including CPUs, GPUs, DSPs, FPGAs, fabrics and fixed function accelerators—in today’s complex systems-on-chip (SoCs).

“The HSA Foundation has always been a strong proponent of open source development tools directly and through its member companies,” said HSA Foundation Chairman Greg Stoner. “Open sourcing worldwide the PRM conformance test suite is yet another example of an expanding array of development tools freely available supporting HSA.”

According to HSA Foundation President Dr. John Glossner, “The decision to open source the conformance test suite is strongly supported by the HSA Foundation and we believe this is an important step for allowing the developer community including non-member China Regional Committee (CRC) participants to test HSA systems. With the ability to develop conformance tests, the community can now contribute to the new test and thus drive the continual improvement of the test quality and consistency.”

“Good quality open source components are crucial in making heterogeneous computing more accessible to programmers and standards adopters. It is great to see that HSA Foundation continues its open source strategy by releasing the important PRM conformance test suite to the public,” said Dr. Pekka Jääskeläinen, CEO of Parmance.

The HSA Foundation, through its member companies and universities, has also released many additional projects, all available on the Foundation’s GitHub site, including:

  • HSAIL Developer Tools: finalizer, debugger, assembler, and simulator
  • GCC HSAIL frontend developed by Parmance and General Processor Technologies (GPT) allowing gcc finalization for any gcc machine target; the frontend is included in the upcoming GCC 7 release
  • Heterogeneous compute compiler (hcc) for single-source compilation of heterogeneous systems
  • Runtime implementations including AMD’s ROCm and phsa-runtime by Parmance and GPT; phsa-runtime can be used together with GCC HSAIL frontend to support the entire HSA programming stack using open source components
  • Portable Computing Language (pocl), an open source implementation of the OpenCL standard with a backend for HSA, developed by the Customized Parallel Computing group of Tampere University of Technology (TUT) – an HSA Foundation Academic Center of Excellence

See the complete roster at: https://github.com/HSAFoundation.

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation


Thomas Zacharia Named Director of Oak Ridge National Laboratory

Thu, 06/01/2017 - 09:19

OAK RIDGE, Tenn., June 1, 2017 — Thomas Zacharia, who built Oak Ridge National Laboratory into a global supercomputing power, has been selected as the laboratory’s next director by UT-Battelle, the partnership that operates ORNL for the U.S. Department of Energy.

“Thomas has a compelling vision for the future of ORNL that is directly aligned with the U.S. Department of Energy’s strategic priorities,” said Joe DiPietro, chair of the UT-Battelle Board of Governors and president of the University of Tennessee.

“He has led many of the innovative research and development initiatives that ORNL has successfully pursued over the past decade. His background in materials and computing positions him well to strengthen ORNL’s signature research capabilities in computational, neutron, materials, and nuclear science. His vision of ORNL playing a prominent role in advancing U.S. national and energy security reflects his leadership strengths. He has been key to the success of developing joint academic programs with UT. Finally, he embraces diversity and has a passion for developing and strengthening the workforce at the laboratory.”

Zacharia came to ORNL in 1987 as a postdoctoral researcher after receiving his Ph.D. in engineering science from Clarkson University in New York. He also holds a master’s in materials science from the University of Mississippi and a bachelor’s in mechanical engineering from the National Institute of Technology in Karnataka, India.

When UT-Battelle became ORNL’s management and operating contractor in April 2000, Zacharia was director of the Computer Science and Mathematics Division. In 2001, he was named associate laboratory director for the new Computing and Computational Sciences Directorate, and over the next eight years he built a scientific enterprise that brought more than 500 new staff to Oak Ridge and opened the nation’s largest unclassified scientific computing center, the Oak Ridge Leadership Computing Facility, a user facility of DOE’s Office of Science.

Zacharia was named ORNL’s deputy for science and technology in 2009, responsible for the lab’s entire research and development portfolio. During his tenure, the lab has strengthened its translational energy programs, establishing the Nuclear Science and Engineering Directorate and the Energy and Environmental Sciences Directorate. A team led by ORNL won DOE’s first Energy Innovation Hub, the Consortium for Advanced Simulation of Light Water Reactors, which leverages the lab’s expertise in computing and nuclear science and engineering. New capabilities were acquired in advanced manufacturing and cybersecurity, and the new Bredesen Center for Interdisciplinary Graduate Research and Education was established (it is now UT’s largest doctoral program).

“Thomas represents the very best of Oak Ridge National Laboratory: scientific excellence, a willingness to tackle tremendous challenges for the benefit of the nation, and the vision to find innovative solutions and make them reality,” said Jeff Wadsworth, president and CEO of Battelle, and director of ORNL from 2003 to 2007. “His whole career shows that he knows how to apply ORNL’s unique breadth of expertise to our most important priorities in science, energy, national security, and economic competitiveness.”

In 2012, Zacharia took a leave to serve as executive vice president of the Qatar Foundation for Education, Science and Community Development, overseeing research in energy and the environment, information and computing technology, life sciences and biomedical research, and social sciences, as well as leading the country’s science and technology park, which is home to more than 40 multi-national companies including GE, Microsoft and Siemens. He returned to ORNL in 2015.

The UT-Battelle board conducted an open, competitive search for a new director after Thom Mason announced that he would be leaving to join Battelle after 10 years leading ORNL. Among the goals Zacharia outlined if he were chosen as director: leading ORNL to be the world’s premier research institution; building on the lab’s original sense of mission – winning World War II while pushing the boundaries of research – to reshape its creative energy for the future; celebrating a science and technology culture that encourages individuals to be the best in their fields; and pursuing institutional excellence that advances US leadership in neutron science, computing, materials, and nuclear science and engineering.

Zacharia’s appointment as director will be effective July 1, when Mason becomes senior vice president for laboratory operations at Battelle in Columbus, Ohio, where he will work with Executive Vice President of Global Laboratory Operations Ron Townsend to support the six DOE labs and one Department of Homeland Security lab managed by Battelle.

UT-Battelle, a partnership of the University of Tennessee and Battelle Memorial Institute, operates ORNL for DOE’s Office of Science. 

Source: ORNL


FCIA Completes Gen 6 32GFC Plugfest Focused on FC-NVMe

Thu, 06/01/2017 - 09:06

MINNEAPOLIS, June 1, 2017 – Building upon last year’s successful Non Volatile Memory Express (NVMe) over Fibre Channel (FC) Fabric (FC-NVMe) proof of concept, the Fibre Channel Industry Association (FCIA) has announced the completion of the first Gen 6 32GFC plugfest solely focused on FC-NVMe. Validation of the proposed FC-NVMe standard’s conformance and device interoperability ensures FC’s status as the industry’s most reliable and robust storage networking protocol.

FCIA’s FC-NVMe plugfest was held during the NVM Express organization’s management interface (NVMe-MI) plugfest, the week of May 21st, 2017 at the University of New Hampshire InterOperability Lab (UNH-IOL). An independent provider of broad-based testing and standards conformance services for the networking industry, UNH-IOL has conducted more than 37 plugfests with FCIA over 18 years to test the continued development of FC technologies.

With 10 companies participating, the FCIA’s FC-NVMe plugfest featured conformance, error injection, multi-hop, and interoperability testing of FC-NVMe concurrently with Gen 6 32GFC and previous FC generation fabric switches and directors, utilizing datacenter-proven test tools and test methods.

“The successful conclusion of this event provides assurance to the FC SAN community that the draft version of the FC-NVMe standard specification combined with the NVMe fabric specifications meets the demanding performance and availability requirements of flash and NVMe storage,” said Mark Jones, president and chairman of the board, FCIA, and director, Technical Marketing and Performance, Broadcom Limited and FC-NVMe plugfest participant. “This event was also notable as providing the first multi-vendor interoperability demonstrating sustained low latency of Gen 6 32GFC port and fabric concurrency of FC-NVMe and FC, highlighting the adaptive architecture of FC that has more than 50 million installed ports in operation in the world’s leading datacenters.”

Key accomplishments from FCIA’s FC-NVMe plugfest include:

  • First Industry-wide multi-vendor conformance and interoperability testing of FC-NVMe:
    • Multiple vendor FC-NVMe initiator and target conformance and interoperability
    • Gen 6 32GFC fabric connectivity to a variety of market available NVMe drives
    • I/O validation over multi-vendor direct-connect and switched fabric topologies
    • Error injection tests to validate correct FC-NVMe and FC recovery and data integrity
    • Concurrent NVMe and legacy SCSI traffic through the same fabric ports
    • Backwards compatibility with previous FC speeds
    • FC-NVMe and FC over 10km single mode fiber Gen 6 32GFC trunks
    • FC-NVMe packet inspection conformance analysis using advanced trace capture and analysis tools
    • Large multi-vendor high availability multi-speed concurrent FC-NVMe and FC fabric conformance and interoperability
  • Gen 6 32GFC:
    • Gen 6 32GFC and previous generations concurrent interoperability with FC-NVMe
    • Multi-topology and link speed use case conformance, including 10km 32GFC trunk ISLs
    • Multi-vendor N_Port Virtualization (NPV) and N_Port ID Virtualization (NPIV) interoperability
    • Gen 6 32GFC protocol-based always-on buffer-to-buffer credit recovery and port-flap fabric protections
    • Gen 6 32GFC use of the efficient peer zoning method
    • Gen 6 32GFC data protection and security
      • Gen 6 32GFC and 16G FC forward error correction (FEC) interoperability
      • T10 Protection Information (PI)
      • FC port-security, also referred to as port binding
    • Low-cost high-reliability 32/16/8G FC active-optical-cable (AOC) interoperability

The 10 companies participating in FCIA’s FC-NVMe plugfest were:

  • Amphenol Corporation
  • Brocade
  • Broadcom Limited
  • Cisco Systems
  • Hewlett Packard Enterprise
  • Huawei Technologies Co. Ltd
  • QLogic Corporation, a Cavium, Inc. company
  • SANBlaze Technology, Inc.
  • Teledyne Technologies; LeCroy Corporation
  • Viavi Solutions Inc.

“FCIA-sponsored plugfests held at UNH-IOL, which is a neutral site, enable participants to collaborate openly while validating conformance and interoperability of FC products,” said Barry Maskas, plugfest chair and technical staff consultant at Hewlett Packard Enterprise. “With continued innovation, such as FC-NVMe, and the use case configurations tested at this plugfest, participants laid a validation and certification foundation built on the methodology for conformance, interoperability, and compatibility that has proven successful for FC industry companies. This and future FCIA-sponsored plugfests will enable FC SAN and FC-NVMe technology adopters to grow their storage infrastructure, taking advantage of speed, low latency, and functional improvements, while leveraging current investments.”

About FCIA

The Fibre Channel Industry Association (FCIA) is a non-profit international organization whose sole purpose is to act as the independent technology and marketing voice of the Fibre Channel industry. We are committed to helping member organizations promote and position Fibre Channel, and to providing a focal point for Fibre Channel information, standards advocacy, and education. FCIA members include manufacturers, system integrators, developers, vendors, industry professionals, and end users. Our member-led working groups and committees focus on creating and championing the Fibre Channel technology roadmaps, targeting applications that include data storage, video, networking, and storage area network (SAN) management. For more info, go to http://www.fibrechannel.org.

Source: FCIA

The post FCIA Completes Gen 6 32GFC Plugfest Focused on FC-NVMe appeared first on HPCwire.

Extreme Networks Wins Bid for Avaya’s Networking Business

Thu, 06/01/2017 - 08:49

SAN JOSE, Calif., June 1, 2017 — Extreme Networks, Inc. (“Extreme”) (NASDAQ: EXTR) has announced that it is the winning bidder to acquire Avaya Inc.’s (“Avaya”) networking business. The assets of Avaya’s networking business unit will therefore be sold to Extreme for approximately $100 million, in accordance with the terms and conditions of the asset purchase agreement entered into March 7, 2017. The final agreement has been approved by the United States Bankruptcy Court for the Southern District of New York and is expected to close on or shortly after July 1, 2017, subject to customary closing conditions and regulatory approvals.

“This strategic acquisition will be another milestone in the execution of Extreme’s growth strategy and clearly establishes Extreme as the third largest competitor in our enterprise markets and the only company in the world exclusively focused on delivering the highest quality end-to-end, wired and wireless enterprise IP networking,” said Ed Meyercord, President and CEO of Extreme Networks. “The conclusion of the auction process signals a big step forward in making this transaction a reality. Avaya’s networking business complements our existing portfolio and will significantly broaden Extreme’s enterprise solutions capabilities across our vertical target markets. We are moving forward with our integration planning for both Avaya Networking and the Brocade Data Center Networking business.”

Extreme will host a live webinar on June 14, 2017 to unveil its post-close go-to-market positioning and combined product roadmap with the Avaya Networking assets.

As previously announced, Extreme anticipates the transaction will be accretive to cash flow and earnings for its fiscal year 2018, which begins on July 1, and expects to generate over $200 million in annualized revenue from the acquired networking assets from Avaya. The announcement builds on Extreme’s strategy to expand the company’s state-of-the-art portfolio of data center, core, campus and edge networking solutions through a series of strategic acquisitions. In October 2016, the company closed its acquisition of the wireless LAN business from Zebra Technology Corporation, which is expected to generate over $115 million in annualized revenue in fiscal year 2018. In March, Extreme announced it entered into an agreement to acquire Brocade Communications Systems, Inc.’s data center switching, routing, and analytics business from Broadcom following the closing of Broadcom’s acquisition of Brocade. The Brocade transaction, once closed, is expected to generate over $230 million in annualized revenue from the acquired assets.

About Extreme Networks

Extreme Networks, Inc. (EXTR) delivers software-driven networking solutions that help IT departments everywhere deliver the ultimate business outcome: stronger connections with customers, partners and employees. Wired to wireless, desktop to data center, on premise or through the cloud, we go to extreme measures for our customers in more than 80 countries, delivering 100% insourced call-in technical support to organizations large and small, including some of the world’s leading names in business, hospitality, retail, transportation and logistics, education, government, healthcare and manufacturing. Founded in 1996, Extreme is headquartered in San Jose, California. For more information, visit Extreme’s website or call 1-888-257-3000.

Source: Extreme Networks

The post Extreme Networks Wins Bid for Avaya’s Networking Business appeared first on HPCwire.

Lenovo Distributed Storage Solution (DSS)

Wed, 05/31/2017 - 17:35

As data processing grows more and more specialized, effective storage strategies are more important than ever. And for IT professionals reevaluating their storage needs, software-defined and object-based storage are gaining ground by automating storage management – a trend that is only set to continue.

The post Lenovo Distributed Storage Solution (DSS) appeared first on HPCwire.

HPE Reports Fiscal 2017 Second Quarter Results

Wed, 05/31/2017 - 14:50

PALO ALTO, Calif., May 31, 2017 — Hewlett Packard Enterprise (NYSE:HPE) today announced financial results for its fiscal 2017 second quarter, ended April 30, 2017, which have been recast to reflect the spin-merger of its Enterprise Services business as discontinued operations.

Second quarter net revenue from continuing operations of $7.4 billion was down 13% from the prior-year period and down 5% when adjusted for divestitures and currency.

Second quarter GAAP diluted net loss per share from continuing operations was ($0.29), down from a GAAP diluted net earnings per share (EPS) from continuing operations of $0.18 in the prior-year period. Second quarter non-GAAP diluted net EPS from continuing operations was $0.25, down from $0.33 in the prior-year period.  Second quarter non-GAAP net earnings and non-GAAP diluted net EPS from continuing operations exclude after-tax costs of $903 million and $0.54 per diluted share, respectively, related to valuation allowances and divestiture taxes, separation costs, restructuring charges, amortization of intangible assets, acquisition and other related charges, tax indemnification adjustments, defined benefit plan settlement charges and remeasurement benefit, and an adjustment to earnings from equity interests.
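As a quick arithmetic sanity check — a reader’s sketch, not HPE’s reconciliation method — the non-GAAP figure follows from adding the excluded per-share costs back to the GAAP loss per share:

```python
# Reconciling the per-share figures quoted above (illustrative check only).
gaap_eps = -0.29           # GAAP diluted net loss per share, Q2 FY17
excluded_per_share = 0.54  # after-tax costs excluded from non-GAAP results

non_gaap_eps = gaap_eps + excluded_per_share
print(f"Non-GAAP diluted EPS: ${non_gaap_eps:.2f}")  # $0.25, matching the release
```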

“Despite some current headwinds, we delivered Q2 non-GAAP EPS in line with our outlook,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise.  “We saw strength in major components of our growth strategy, including high-performance compute, Aruba, all-flash storage and Technology Services.  While we still have much more work to do, HPE’s Q2 results give me confidence that our efforts are delivering for customers and partners.”

HPE fiscal 2017 second quarter continuing operations financial performance:

                                               Q2 FY17    Q2 FY16    Y/Y
  GAAP net revenue ($B)                        $7.4       $8.5       (13%)
  GAAP operating margin                        2.4%       5.3%       (2.9 pts.)
  GAAP net (loss) earnings ($B)                ($0.5)     $0.3       (251%)
  GAAP diluted net (loss) earnings per share   ($0.29)    $0.18      (261%)
  Non-GAAP operating margin                    7.8%       9.1%       (1.3 pts.)
  Non-GAAP net earnings ($B)                   $0.4       $0.6       (29%)
  Non-GAAP diluted net earnings per share      $0.25      $0.33      (24%)
  Cash flow from operations ($B)               $0.6       $1.1       ($0.5)

Information about HPE’s use of non-GAAP financial information is provided under “Use of non-GAAP financial information” below.

Outlook
For the fiscal 2017 third quarter, Hewlett Packard Enterprise estimates GAAP diluted net EPS to be in the range of ($0.02) to $0.02 and non-GAAP diluted net EPS to be in the range of $0.24 to $0.28. Fiscal 2017 third quarter non-GAAP diluted net EPS from continuing operations estimates exclude after-tax costs of approximately $0.26 per diluted share, related primarily to separation costs, restructuring charges and the amortization of intangible assets.

For fiscal 2017, Hewlett Packard Enterprise estimates GAAP diluted net EPS to be in the range of ($0.03) to $0.07 and non-GAAP diluted net EPS to be in the range of $1.46 to $1.56. Fiscal 2017 non-GAAP diluted net EPS estimates exclude after-tax costs of approximately $1.49 per diluted share, related primarily to valuation allowances and divestiture taxes, separation costs, restructuring charges, amortization of intangible assets, tax indemnification adjustments, defined benefit plan settlement charges and remeasurement benefit, and an adjustment to earnings from equity interests.

“While we faced margin pressure in Q2, we expect improvement through the remainder of the year as we mitigate commodities cost pressure and eliminate costs associated with spin-mergers and acquisitions,” said Tim Stonesifer, CFO, Hewlett Packard Enterprise.  “The completion of the spin-merger of our Enterprise Services business gives us the opportunity to further optimize the cost structure of the future HPE.  We are now focused on driving an incremental $200-300 million in cost savings in just the second half of this year.  We maintain our FY17 EPS outlook.”

Fiscal 2017 second quarter segment results

  • Enterprise Group revenue was $6.2 billion, down 13% year over year and down 7% when adjusted for divestitures and currency, with an 8.8% operating margin. Servers revenue was down 14% (down 14% adjusted for divestitures and currency); Storage revenue was down 13% (down 13% adjusted); Networking revenue was down 30% (up 14% adjusted); and Technology Services revenue was down 2% (up 3% adjusted).
  • Software revenue was $685 million, down 11% year over year and down 9% when adjusted for divestitures and currency, with a 26.4% operating margin. License revenue was down 29% (down 28% adjusted); Support revenue was down 4% (flat adjusted); Professional Services revenue was down 17% (down 16% adjusted); and Software-as-a-service (SaaS) revenue was up 3% (up 4% adjusted).
  • Financial Services revenue was $872 million, up 11% year over year; net portfolio assets were down 1%; and financing volume was down 7%. The business delivered an operating margin of 8.9%.

Revenue from continuing operations adjusted for divestitures and currency excludes revenue resulting from business divestitures in fiscal 2017, 2016 and 2015, and also assumes no change in the foreign exchange rate from the prior-year period. A reconciliation of GAAP revenue to revenue adjusted for divestitures and currency is provided in the earnings presentation at investors.hpe.com.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (HPE) is an industry leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.

Source: HPE

The post HPE Reports Fiscal 2017 Second Quarter Results appeared first on HPCwire.

Cancer Research: A Supercomputing Perspective

Wed, 05/31/2017 - 14:08

Cancer, the second-leading cause of death in the U.S. after heart disease, kills more than 500,000 citizens per year, including about 2,000 children.

In 2016, then Vice President Joe Biden launched the Cancer Moonshot, saying: “I know that we can help solidify a genuine global commitment to end cancer as we know it today —  and inspire a new generation of scientists to pursue new discoveries and the bounds of human endeavor.”

The importance of high performance computing (HPC) in cancer research was recognized by the Cancer Moonshot Task Force report, and by Biden and then-Energy Secretary Ernest Moniz.

“Supercomputers are key to the Cancer Moonshot,” Moniz wrote. “These exceptionally high-powered machines have the potential to greatly accelerate the development of cancer therapies by finding patterns in massive datasets too large for human analysis. Supercomputers can help us better understand the complexity of cancer development, identify novel and effective treatments, and help elucidate patterns in vast and complex data sets that advance our understanding of cancer.”

With complex, non-linear signaling networks, multiscale dynamics from the quantum to the macro level, and giant, complex datasets of patient responses, cancer is quite possibly the ultimate in HPC problems.

“What could be more complicated and more important?” said J. Tinsley Oden, a computational researcher at The University of Texas at Austin applying uncertainty quantification to cancer treatment predictions. “At each step, it has the most complex features. It is really a garden of rich, important problems that are in the path of many of the developments that we’ve been working on for years.”

An infographic depicts TACC’s multi-domain approach to fighting cancer.

Hundreds of oncologists, biologists and computer scientists use the HPC systems at the Texas Advanced Computing Center (TACC) to understand the fundamental nature of cancer biology and to improve cancer treatments. Their work addresses a range of cancer types and treatment modalities, and spans applied and fundamental research.

Though diverse in their specific targets, the approaches they use can be loosely grouped into seven broad methodologies: molecular simulation; bioinformatics; mathematical modeling; computational treatment planning; quantum calculation; clinical trial design; and machine learning. The following sections describe and provide examples of each.

Molecular Simulations

Simulating protein and drug interactions at the molecular level enables scientists to understand the mechanics of cancer to design more effective treatments.

For Rommie Amaro, professor of Chemistry and Biochemistry at the University of California, San Diego, this means uncovering new pockets in tumor protein 53 (p53) — “the guardian of the genome” — which plays a crucial role in conserving the stability of DNA and preventing mutations.

The model of full-length p53 protein bound to DNA as a tetramer. The surface of each p53 monomer is depicted with a different color. [Courtesy: Özlem Demir, University of California, San Diego]

In approximately 50 percent of all human cancers, p53 is mutated and rendered inactive; reactivating mutant p53 with small molecules has therefore been a long-sought anticancer therapeutic strategy.

In September 2016, writing in the journal Oncogene, Amaro reported results of the largest atomic-level simulation of p53 to date — comprising more than 1.5 million atoms. The simulations, enabled by the Stampede supercomputer at TACC, helped identify new binding sites on the surface of the protein that could potentially reactivate p53.

“When most people think about cancer research they probably don’t think about computers,” she said. “But biophysical models are getting to the point where they have a great impact on the science.”
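To make the idea concrete, here is a minimal molecular-dynamics setup using the open-source OpenMM toolkit — an illustrative sketch assuming a prepared, solvated input structure, not the production workflow behind the p53 study ('protein.pdb' is a placeholder):

```python
# Minimal explicit-solvent MD sketch with OpenMM (illustrative only; the p53
# simulations described above involved over 1.5 million atoms on Stampede).
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds

pdb = PDBFile('protein.pdb')                      # placeholder input structure
forcefield = ForceField('amber14-all.xml', 'amber14/tip3p.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1.0*unit.nanometer,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300*unit.kelvin, 1/unit.picosecond,
                                      0.002*unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()                              # relax clashes before dynamics
sim.step(10_000)                                  # 20 ps; production runs are far longer
```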

Virtual drug screening is another important HPC application for cancer research. Shuxing Zhang, professor of experimental therapeutics at MD Anderson Cancer Center, used molecule simulations on TACC’s Lonestar5 system to screen 1,448 Food and Drug Administration-approved small molecule drugs to determine which had the molecular features needed to bind and inhibit TNIK — an enzyme that plays a key role in cell signaling in colon cancer.

Zhang discovered that mebendazole, an FDA-approved drug that fights parasites, could effectively bind to TNIK and inhibit its enzymatic activity. He reported his results in Nature Scientific Reports in September 2016.

“Such advantages render the possibility of quickly translating the discovery into a clinical setting for cancer treatment in the near future,” Zhang wrote.
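Conceptually, a screen like this is a scored ranking loop over a compound library. The sketch below shows that shape with a hypothetical `score_binding` function standing in for a real docking engine; the names and values are illustrative, not from Zhang’s study:

```python
# Toy virtual-screening loop: rank a drug library by a (fake) docking score.
def score_binding(drug_name: str) -> float:
    """Hypothetical stand-in for a physics-based docking score (lower = stronger)."""
    return -6.0 - (sum(map(ord, drug_name)) % 40) / 10.0

library = ["mebendazole", "aspirin", "ibuprofen"]  # stand-in for 1,448 FDA-approved drugs
ranked = sorted(library, key=score_binding)        # most negative score first
print("Top candidate:", ranked[0])
```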

Bioinformatics

The human genome consists of three billion base pairs, so identifying single mutations by sight simply isn’t possible. For that reason, the field of bioinformatics — which uses computing and software to identify patterns and differences in biological data — has been an enormous boon for cancer researchers.

But bioinformatics is more than simple, one-to-one pattern matching.

A heat map showing differences in gene expression between primary tumors and cultured cell lines. Each row is a gene and each column is a tumor or cell sample. In the heat map, red indicates high expression and blue indicates low expression. NHA refers to normal human astrocytes, a star-shaped glial cell of the central nervous system. [Courtesy: Amelia Weber Hall, Iyer lab]

“When you move into multi-dimensional, time-series, or population-level studies, the algorithms can get a lot more computationally intensive,” said Matt Vaughn, TACC’s Director of Life Sciences Computing. “This requires resources like those at TACC, which help large numbers of researchers explore the complexity of cancer genomes by providing elastic, large-scale computing capability.”
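A deliberately tiny example of the kind of computation involved — synthetic data, not a real tumor analysis — is a per-gene differential-expression scan:

```python
# Toy differential-expression scan (synthetic data; real pipelines use
# dedicated tools, multiple-testing correction, and far larger cohorts).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tumor = rng.lognormal(2.2, 0.5, size=(1000, 20))    # 1,000 genes x 20 tumor samples
normal = rng.lognormal(2.0, 0.5, size=(1000, 20))   # 1,000 genes x 20 normal samples

log2_fc = np.log2(tumor.mean(axis=1) / normal.mean(axis=1))  # fold change per gene
t_stat, p_val = stats.ttest_ind(tumor, normal, axis=1)       # per-gene t-test

hits = np.where((np.abs(log2_fc) > 0.3) & (p_val < 0.01))[0]
print(f"{hits.size} genes flagged as differentially expressed")
```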

For Vishy Iyer, a molecular biologist at The University of Texas at Austin (UT Austin), and his collaborators, access to TACC’s Stampede supercomputer helps them mine reams of data from The Cancer Genome Atlas to identify genetic variants and subtle correlations that affect gene expression in tumors.

“TACC has been vital to our analysis of cancer genomics data, both for providing the necessary computational power and the security needed for handling sensitive patient genomic datasets,” Iyer said.

In February 2016, Iyer and a team of researchers from UT Austin and MD Anderson Cancer Center reported in Nature Communications on a genome-wide transcriptome analysis of the two types of cells that make up the prostate gland. They identified cell-type-specific gene signatures that were associated with aggressive subtypes of prostate cancer and adverse clinical responses.

“This knowledge can be helpful in the development of more targeted therapies that seek to eliminate cancer at its origin,” Iyer said.

Using a similar methodology, Iyer and a team of researchers from UT Austin and the National Cancer Institute identified a transcription factor associated with an aggressive type of lymphoma that is highly correlated with poor therapeutic outcomes. They published their results in the Proceedings of the National Academy of Sciences in January 2016.

Whereas Iyer, an experienced HPC user, develops custom tools for his analyses, a much larger number of researchers access Stampede and comparable systems through scientific gateways. One prominent gateway is Galaxy, an open source bioinformatics platform that serves 30,000 researchers and runs more than 3,000 compute jobs a day.

Since 2014, TACC has powered the data analyses for a large percentage of Galaxy users, allowing researchers to solve tough problems in cases where their personal computer or campus cluster is not sufficient. Of those researchers, a significant subset use the site to analyze cancer genomes.

“Galaxy can be used to identify tumor mutations that drive cancer growth, find proteins that are overexpressed in a tumor, as well as for chemo-informatics and drug discovery,” said Jeremy Goecks, Assistant Professor of Biomedical Engineering and Computational Biology at Oregon Health and Science University and one of Galaxy’s principal investigators.

Goecks estimates that hundreds of researchers each year use the platform for cancer research, himself included. Because cancer patient data is closely protected, the bulk of this usage involves either publicly available cancer data or data on cancer cell lines – immortalized cells that reproduce in the lab and are used to study how cancer reacts to different drugs or conditions.

“This is an ideal marriage of TACC having tremendous computing power with scalable architecture, and Galaxy coming along and saying, we’re going to go the last mile and make sure that people who can’t normally use this hardware are able to,” Goecks said.

Mathematical Modeling

While some researchers believe bioinformatics will rapidly advance the understanding and treatment of cancer, others think a better approach is to mathematize cancer: to uncover the fundamental formulas that represent how cancer, in its varied forms, behaves.

At the Center for Computational Oncology at UT Austin, researchers are developing complex computer models to predict how cancer will progress in a specific individual.

Each factor involved in the tumor response — whether it is the speed with which chemotherapeutic drugs reach the tissue or the degree to which cells signal each other to grow — is characterized by a mathematical equation that captures its essence. These models are combined and parameterized and initialized with patient-specific data.
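A deliberately small example of one such building block — with placeholder parameters, not calibrated patient data — is logistic tumor growth, dN/dt = k·N·(1 − N/θ), integrated numerically:

```python
# Logistic tumor-growth sketch (parameters are illustrative placeholders).
from scipy.integrate import solve_ivp

k, theta = 0.2, 1e9        # growth rate (1/day) and carrying capacity (cells)

def logistic_growth(t, N):
    return k * N * (1 - N / theta)

sol = solve_ivp(logistic_growth, t_span=(0, 60), y0=[1e6])
print(f"Tumor burden at day 60: {sol.y[0, -1]:.3e} cells")
```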

In April 2017, writing in the Journal of The Royal Society Interface, Thomas Yankeelov and collaborators at UT Austin and Vanderbilt University showed that they can predict how brain tumors (gliomas) will grow in mice with greater accuracy than previous models by including factors like the mechanical forces acting on the cells and the tumor’s cellular heterogeneity.

To develop and implement their mathematically complex models, the center’s scientists use TACC’s supercomputers, which enable them to solve bigger problems than they otherwise could and to reach solutions far faster.

Recently, the group has begun a clinical study to predict, after one treatment, how an individual’s cancer will progress, and to use those predictions to plan the future course of treatment.

“There are not enough resources or patients to sort this problem out because there are too many variables. It would take until the end of time,” Yankeelov said. “But if you have a model that can recapitulate how tumors grow and respond to therapy, then it becomes a classic engineering optimization problem. ‘I have this much drug and this much time. What’s the best way to give it to minimize the number of tumor cells for the longest amount of time?’”
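Framed that way, the problem can be sketched as a small constrained optimization — an invented growth/kill model with a fixed drug budget, not the center’s actual formulation:

```python
# Toy dosing-schedule optimization: split a fixed drug budget across 6 weekly
# doses to minimize final tumor burden (all parameters are illustrative).
import numpy as np
from scipy.optimize import minimize

k, kill, budget = 0.15, 0.9, 6.0   # weekly growth rate, drug potency, drug units

def final_burden(doses):
    N = 1.0                                  # normalized initial tumor burden
    for d in doses:
        N *= np.exp(k - kill * np.sqrt(d))   # growth minus saturating drug kill
    return N

cons = ({'type': 'eq', 'fun': lambda d: d.sum() - budget},)
res = minimize(final_burden, x0=np.full(6, 1.0), bounds=[(0, 3)] * 6, constraints=cons)
print("Optimal weekly doses:", np.round(res.x, 2))   # equal doses, given saturation
```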

Computing at TACC helps Yankeelov accelerate his research. “We can solve problems in a few minutes that would take us three weeks to do using the resources at our old institution,” he said. “It’s phenomenal.”

Quantum Calculations

X-ray radiation is the most frequently used form of radiation therapy, but a new treatment is emerging that uses a beam of protons to kill cancer cells with minimal damage to surrounding tissue.

“As happens in cancer therapy, we know empirically that it works, but we don’t know why,” said Jorge A. Morales, a professor of chemistry at Texas Tech University and a leading proponent of the computational analysis of proton therapy. “To do experiments with human subjects is dangerous, so the best way is through computer simulation.”

Computational experiments can mimic the dynamics of the proton-cell interactions without causing damage to a patient and can reveal what happens when the proton beam and cells collide from start to finish, with atomic-level accuracy. Morales has been simulating proton-cell chemical reactions using quantum dynamics models on TACC’s Stampede supercomputer to investigate the fundamentals of the process.

His studies, reported in PLOS One in March 2017 as well as in Molecular Physics and Chemical Physics Letters (2015 and 2014, respectively), have determined the basic byproducts of protons colliding with water within the cell, and with nucleotides and clusters of DNA bases – the basic units of DNA. The studies shed light on how the protons and their water radiolysis products damage DNA.

Though fundamental in nature, the insights and data that Morales’ simulations produce help researchers understand proton cancer therapy at the quantum level, and help modulate factors like dosage and beam direction.

“These simulations will bring about a unique way to understand and control proton cancer therapy that, at a very low cost, will help to drastically improve the treatment of cancer patients without risking human subjects,” Morales said.

Computational Treatment Planning

Wei Liu, a researcher at the Mayo Clinic, also studies proton therapy, but he looks at the treatment from a clinical perspective.

In comparison with current radiation procedures, proton therapy saves healthy tissue in front of and behind the tumor. It is particularly effective when irradiating tumors near sensitive organs where stray beams can be particularly damaging.

However, the pinpoint accuracy required by the proton beam, which is its greatest advantage, means that it must be precisely calibrated and that discrepancies from the ideal (whether from the device, human error or even patient breathing) must be taken into consideration.

Writing in Medical Physics in January 2017, Liu and his collaborators showed that their “chance-constrained model” was better at sparing organs at risk than current methods.

“Each time, we try to mathematically generate a good plan,” he said. “There are 25,000 variables or more, so generating a plan that is robust to these mistakes and can still get the proper dose distribution to the tumor is a large-scale optimization problem.”

The researchers used the Lonestar5 supercomputer at TACC to generate treatment plans that minimize the risk and uncertainties involved in proton beam therapy.

“It’s very computationally expensive to generate a plan in a reasonable timeframe,” he continued. “Without a supercomputer, we can do nothing.”
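At toy scale, the structure of such a planning problem can be written as a linear program. The dose-deposition matrices below are random stand-ins for the physics, and the formulation is a generic illustration, not Liu’s chance-constrained model:

```python
# Toy treatment plan: choose non-negative beamlet weights so every tumor voxel
# receives the prescribed dose while total organ-at-risk dose is minimized.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A_tumor = rng.uniform(0.5, 1.0, size=(40, 25))  # dose to 40 tumor voxels per unit beamlet
A_organ = rng.uniform(0.0, 0.4, size=(30, 25))  # dose to 30 organ-at-risk voxels

res = linprog(c=A_organ.sum(axis=0),            # minimize total organ dose
              A_ub=-A_tumor,                    # -A_tumor @ x <= -60  <=>  A_tumor @ x >= 60
              b_ub=-np.full(40, 60.0),
              bounds=[(0, None)] * 25)
print("Feasible plan found:", res.success)
```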

Computational Trial Design

Another way researchers use TACC’s advanced computers is to design clinical trials that can better determine which combination of dosages will be most effective, specifically for the biological agents used in immunotherapy, which work very differently from chemotherapy and radiation.

Writing in the Journal of the Royal Statistical Society: Series C (Applied Statistics), Chunyan Cai, assistant professor of biostatistics at McGovern Medical School at The University of Texas Health Science Center at Houston (UTHealth), described her efforts using Lonestar5 to identify biologically optimal dose combinations for agents that target the PI3K/AKT/mTOR signaling pathway, which has been associated with several genetic aberrations related to the promotion of cancer.

Scanning electron micrograph of a human T lymphocyte (also called a T cell) from the immune system of a healthy donor. Immunotherapy fights cancer by supercharging the immune system’s natural defenses (including T cells) or contributing additional immune elements that can help the body kill cancer cells. HPC is helping researchers better understand how immunotherapeutic agents can be used effectively. [Courtesy: NIAID]

“Our research is motivated by a drug combination trial at MD Anderson Cancer Center for patients diagnosed with relapsed lymphoma,” Cai said. “The trial combined two novel biological agents that target two different components in the PI3K/AKT/mTOR signaling pathway.”

They investigated six different dose-toxicity and dose-efficacy scenarios and carried out 2,000 simulated trials for each of the designs.

Based on those simulations, she concluded that “the design proposed has desirable operating characteristics in identifying the biologically optimal dose combination under various patterns of dose–toxicity and dose–efficacy relationships.”
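The simulation machinery behind such a study is conceptually simple, as this minimal Monte Carlo sketch shows — the toxicity probabilities are invented for illustration, not taken from the trial:

```python
# Minimal simulated-trial loop: estimate observed toxicity at each dose combo.
import numpy as np

rng = np.random.default_rng(42)
true_tox = {(a, b): 0.05 + 0.04 * a + 0.06 * b   # toxicity prob. for doses (a, b)
            for a in range(3) for b in range(2)}

n_trials, cohort = 2000, 30                      # 2,000 trials, 30 patients each
for combo, p in true_tox.items():
    tox_rates = rng.binomial(cohort, p, size=n_trials) / cohort
    print(f"doses {combo}: mean observed toxicity {tox_rates.mean():.3f}")
```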

The research is leading to new, safer and more effective ways to test combinations of immunotherapeutic agents.

Machine Learning

A final, and truly radical, way that researchers are using HPC for cancer research is through the application of machine and deep learning.

The Eberlin research group at UT Austin develops clinical applications of ambient mass spectrometry for cancer diagnosis. They create tools and techniques to assist surgeons in distinguishing between normal and cancer tissue during tumor resection operations.

To do so, they have had to develop statistical methods that can analyze and interpret the large amounts of mass spectrometry data gathered from clinical samples.

Jonathan Young, a post-doctoral researcher in the group, is building machine learning classifiers to reliably predict whether a given tissue sample is cancer or normal and, if it is indeed cancer, which specific subtype the tumor belongs to.

Young uses the Maverick system at TACC, which contains a large number of NVIDIA GPUs, to develop and implement the machine learning algorithms. “The large memory capacity of Maverick is well suited for our extensive datasets, and the parallelization capability will aid in parameter sweeps during the training of classifiers,” Young said.
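Schematically, such a classifier pipeline looks like the sketch below, with synthetic features and labels standing in for the protected clinical spectra:

```python
# Schematic tissue classifier (synthetic data in place of real mass spectra).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 500))     # 300 tissue samples x 500 m/z intensity bins
y = rng.integers(0, 2, size=300)    # 0 = normal, 1 = cancer (random labels here)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)                  # 5-fold cross-validation
print(f"Cross-validated accuracy: {scores.mean():.2f}")    # ~0.5 on random labels
```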

Young will present his work at the American Society for Mass Spectrometry (ASMS) Annual Conference this June.

Another example of the application of machine learning to cancer can be found in the work of Daniel Lobo, an assistant professor of biology and computer science at the University of Maryland, Baltimore County (UMBC). He is using machine learning to map out the cellular communication networks that underlie cancer, and to design methods to disrupt them.

In their January 2017 paper in Scientific Reports, Lobo and collaborators showed that machine learning can uncover the cellular networks that determine pigmentation in tadpoles and can reverse-engineer never-before-seen colorations. Their work was facilitated by Stampede, which enabled the team to run billions of simulations to identify models of the cellular network and the means of altering it.
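The search itself follows a simple pattern — simulate a candidate model, score it against observations, keep the best — as in this toy random-search sketch (the model and data are invented for illustration):

```python
# Toy simulation-driven model search (real searches run billions of simulations).
import numpy as np

rng = np.random.default_rng(3)
observed = np.array([0.1, 0.4, 0.8, 0.9])        # stand-in experimental readout

def simulate(params):
    """Toy 'network' response curve controlled by two parameters."""
    a, b = params
    t = np.linspace(0, 1, 4)
    return 1 / (1 + np.exp(-(a * t - b)))        # logistic response over time

best, best_err = None, np.inf
for _ in range(10_000):
    candidate = rng.uniform(0, 10, size=2)
    err = np.sum((simulate(candidate) - observed) ** 2)
    if err < best_err:
        best, best_err = candidate, err

print("Best-fit parameters:", np.round(best, 2))
```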

Lobo’s lab is applying the method to cancer research to determine what type of interventions might stop metastasis in its tracks without damaging other cells.

“Traditional approaches like chemotherapy attack the cells that grow the most, but leave cells that are signaling others to grow and that may be the most important,” Lobo says. “We’re using machine learning to find out the communication networks between these cells and hopefully to discover a treatment that can cause the tumor to collapse.”

“Getting a true understanding, given the complexity of the information, without some assistance from machine learning, is probably hopeless,” said Michael Levin, Lobo’s collaborator. “I think it’s inevitable that we use machine learning to enrich scientific and biomedical discovery.”

From patient-specific treatments to immunology to drug discovery, advanced computing is accelerating the basic and applied science underlying our understanding of cancer and the development and application of cancer treatments.

If scientists are the rocket in the cancer moonshot, HPC processing power is the jet fuel.

About the Author

Aaron Dubrow joined TACC in October 2007 as the Science and Technology Writer, responsible for reporting on the myriad research and development projects undertaken by TACC.

The post Cancer Research: A Supercomputing Perspective appeared first on HPCwire.

Microsoft, Purdue Tackle Topological Quantum Computer

Wed, 05/31/2017 - 13:13

Topological qubits are among the more baffling, and if practical, more promising ways to approach scalable quantum computing. At least that’s what Microsoft, Purdue University, and three other universities are hoping after having recently signed a five-year agreement to develop a topological qubit based quantum computer.

Qubits are strange no matter what form they take. The basic idea is that, through superposition, a qubit can be in two states at once (0 and 1); hence a quantum computer’s capacity scales exponentially with the number of qubits, whereas a classical computer’s scales linearly with the number of bits. Most quantum computing efforts rely on producing superposition in some material – IBM uses superconducting devices – and many qubit schemes have been proposed.
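That exponential growth is easy to make concrete: simulating an n-qubit register classically requires tracking 2^n complex amplitudes, as this short demonstration shows:

```python
# State-space growth of an n-qubit register in uniform superposition.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # one qubit in superposition of 0 and 1
state = np.array([1.0])
n = 20
for _ in range(n):
    state = np.kron(state, plus)           # add one qubit to the register

print(f"{n} qubits -> {state.size:,} amplitudes")   # 1,048,576
```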

Topological qubits are among the more mysterious. They rely on ‘quasi’ particles called non-abelian anyons, which have not definitively been proven to exist. Using these topological qubits, information is encoded by “braiding” the paths of the quasiparticles. The benefit, say researchers, is that topological qubits resist decoherence much better than other qubit types and should require far less error correction. At least that’s the theory.

Purdue University and Microsoft Corp. have signed a five-year agreement to develop a usable quantum computer. Purdue is one of four international universities in the collaboration. Michael Manfra, Purdue University’s Bill and Dee O’Brien Chair Professor of Physics and Astronomy, Professor of Materials Engineering and Professor of Electrical and Computer Engineering, will lead the effort at Purdue to build a robust and scalable quantum computer by producing what scientists call a “topological qubit.” (Purdue University photo/Rebecca Wilcox)

Microsoft has been dabbling in topological qubit theory for several years. Last fall, Microsoft quantum researcher Alex Bocharov was interviewed by Nature (Inside Microsoft’s quest for a topological quantum computer, October 21, 2016) about why the company is pursuing such an exotic path.

“Our qubits are not even material things. But then again, the elementary particles that physicists run in their colliders are not really solid material objects. Here we have non-abelian anyons, which are even fuzzier than normal particles. They are quasiparticles. The most studied kinds of anyon emerge from chains of very cold electrons that are confined at the edge of a 2D surface. These anyons act like both an electron and its antimatter counterpart at the same time, and appear as dense peaks of conductance at each end of the chain. You can measure them with high-precision devices, but not see them under any microscope…” said Bocharov.

As explained by Bocharov, “Noise from the environment and other parts of the computer is inevitable, and that might cause the intensity and location of the quasiparticle to fluctuate. But that’s OK, because we do not encode information into the quasiparticle itself, but in the order in which we swap positions of the anyons. We call that braiding, because if you draw out a sequence of swaps between neighbouring pairs of anyons in space and time, the lines that they trace look braided. The information is encoded in a ‘topological’ property — that is, a collective property of the system that only changes with macroscopic movements, not small fluctuations.”

The upside is tantalizing, he said: “So far, we’ve had an amazing ride in terms of creating more-efficient algorithms — reducing the number of qubit interactions, known as gates, that you need to run certain computations that are impossible on classical computers. In the early 2000s, for example, people thought it would take about 24 billion years to calculate on a quantum computer the energy levels of ferredoxin, which plants use in photosynthesis. Now, through a combination of theory, practice, engineering and simulation, the most optimistic estimates suggest that it may take around an hour. We are continuing to work on these problems, and gradually switching towards more applied work, looking towards quantum chemistry, quantum genomics and things that might be done on a small-to-medium-sized quantum computer.”

Now, Microsoft is further ramping up its quantum efforts with a collaboration that includes Purdue as well as a global experimental group established by Microsoft at the Niels Bohr Institute at the University of Copenhagen in Denmark, TU Delft in the Netherlands, and the University of Sydney, Australia. For Purdue, this is an extension of joint work on quantum computing with Microsoft begun roughly one year ago. Michael Freedman of Microsoft’s Station Q in Santa Barbara leads the effort.

“What’s exciting is that we’re doing the science and engineering hand-in-hand, at the same time,” says Purdue researcher Michael Manfra in an article on the project posted on the Purdue web site yesterday.

Purdue’s role in the project will be to grow and study ultra-pure semiconductors and hybrid systems of semiconductors and superconductors that may form the physical platform upon which a quantum computer is built. Manfra’s group has expertise in a technique called molecular beam epitaxy, and this technique will be used to build low-dimensional electron systems that form the basis for quantum bits, or qubits, according to the article.

Purdue President Mitch Daniels noted in the article that Purdue was home to the first computer science department in the United States, and said the partnership and Manfra’s work places the university at the forefront of quantum computing. “Someday quantum computing will move from the laboratory to actual daily use, and when it does, it will signal another explosion of computing power like that brought about by the silicon chip,” Daniels said.

Link to Purdue article written by Steve Tally: http://www.purdue.edu/newsroom/releases/2017/Q2/microsoft,-purdue-collaborate-to-advance-quantum-computing-.html

Link to Nature interview: https://www.nature.com/news/inside-microsoft-s-quest-for-a-topological-quantum-computer-1.20774

Caption for feature image: Michael Freedman (left), Microsoft Corp. quantum computing researcher, and Suresh Garimella, executive vice president for research and partnerships, and Purdue’s Goodson Distinguished Professor of Mechanical Engineering, sign a new five-year enhanced collaboration between Purdue and Microsoft to build a robust and scalable quantum computer by producing what scientists call a “topological qubit.” (Purdue University photo/Charles Jischke)

The post Microsoft, Purdue Tackle Topological Quantum Computer appeared first on HPCwire.

Intel Promotes Three Corporate Officers

Wed, 05/31/2017 - 10:04

SANTA CLARA, Calif., May 31, 2017 –Intel Corporation today announced that its board of directors has promoted three corporate officers.

“Each of these proven Intel leaders has taken on expanded roles overseeing important areas of the business,” said Intel CEO Brian Krzanich. “These promotions recognize the scope and significance of the organizations they lead at Intel.”

Navin Shenoy was promoted from senior vice president to executive vice president. Shenoy is the newly appointed general manager of Intel’s Data Center Group (DCG), an important growth business that spans servers, network and storage solutions that are driving the adoption of pervasive cloud computing, virtualization of network infrastructure and artificial intelligence. Shenoy is responsible for the P&L, strategy and product development spanning server, storage and network solutions for cloud service providers, communications service providers, enterprise and government infrastructure customers. He joined Intel in 1995 and is based in Santa Clara, California.

Gregory Bryant was promoted from corporate vice president to senior vice president. He is the general manager of the Client Computing Group (CCG), Intel’s largest and most profitable business, which encompasses PCs, home gateways and other compute devices. Bryant recently succeeded Shenoy in this role and is responsible for the P&L, strategy and product development. He joined Intel in 1992 and is based in Hillsboro, Oregon.

Sandra Rivera was promoted from corporate vice president to senior vice president. She is general manager of the Network Platforms Group, which is the data center business group charged with providing innovative network technology and products to the market. In this role, Rivera manages the P&L, strategy and product development for solutions and services providers worldwide. She is also the executive sponsor guiding Intel’s strategy, commitments and deliverables for 5G. Rivera joined Intel in 2000 and will be based in Santa Clara, California.

With these promotions, Bryant and Rivera join Shenoy on Intel’s management committee.

About Intel

Intel (NASDAQ: INTC) expands the boundaries of technology to make the most amazing experiences possible. Information about Intel can be found at newsroom.intel.com and intel.com.

Source: Intel

The post Intel Promotes Three Corporate Officers appeared first on HPCwire.

AT&T Foundry, Caltech Form Alliance for Quantum Technologies

Wed, 05/31/2017 - 08:37

PALO ALTO, Calif., May 31, 2017 — The AT&T Foundry innovation center in Palo Alto, California is joining the California Institute of Technology to form the Alliance for Quantum Technologies (AQT). The Alliance aims to bring industry, government, and academia together to speed quantum technology development and emerging practical applications.

This collaboration will also bring a research and development program named INQNET (INtelligent Quantum NEtworks and Technologies). The program will focus on the need for capacity and security in communications through future quantum networking technologies.

Quantum networking will enable a new era of super-fast, secure networks. AT&T, through the AT&T Foundry, will help test relevant technologies for commercial applications.

Quantum computers won’t have a keyboard, monitor or mouse. They will be complex physics experiments with cryogenics for cooling, lasers, and other solid-state, electronic, optical and atomic devices. Moving quantum computing from the R&D lab to the real world requires solving technical and engineering challenges.

The science behind quantum computing is complex. And it cuts across disciplines including physics, engineering, computer science and applied mathematics. The basic idea is to apply the laws of quantum mechanics to processing and distributing information. It will enable exponentially more powerful computing.

Quantum networking is the process of linking quantum computers and devices together. This creates fast and secure networks beyond anything possible today with traditional processors.

“Quantum computing and networking holds the potential to radically transform how we connect as a society. It will make the impossible possible, as the internet once did,” said Igal Elbaz, vice president, ecosystem and innovation, AT&T. “The AT&T Foundry was founded to advance new products and services through innovation and collaboration. It’s the ideal place for this work as quantum technologies become a rapidly developing field in industrial research.”

“With quantum technologies and quantum engineering we’re experiencing a revolution in the applied fundamental. It is quite thrilling to accelerate the progress by integrating systems and ongoing R&D and especially by bringing together the experts,” said Maria Spiropulu, professor of Physics, California Institute of Technology. “The spirit of innovation and collaboration at the AT&T Foundry is the culture we hope permeates throughout this endeavor. I expect the catalysis effect on science and technology to be analogous.”

The field is still in its early stages. But these technologies could change our lives profoundly and rapidly. Some experts predict this as early as the next few decades.

Quantum devices and networks of quantum computers could speed scientific discoveries, push the field of machine intelligence, and potentially enable secure channels for communication beyond what’s possible with today’s technology.

AT&T and Caltech, through AQT and INQNET, are creating the model for technology development between academic institutions, industry, and national laboratories. One of the first demonstrations of intelligent and quantum network technologies will be in quantum entanglement distribution and relevant benchmarking and validation studies using commercial fiber provided by AT&T.

AT&T and Caltech plan to address the workforce development necessary for quantum technologies. They will hold roundtables and workshops to discuss the latest pertinent science and technology developments.

First opened in 2011, the AT&T Foundry network of innovation centers spans locations across the United States and in Israel.

About AT&T

AT&T Inc. (NYSE: T) helps millions around the globe connect with leading entertainment, business, mobile and high speed internet services. We offer the nation’s best data network and the best global coverage of any U.S. wireless provider. We’re one of the world’s largest providers of pay TV. We have TV customers in the U.S. and 11 Latin American countries. Nearly 3.5 million companies, from small to large businesses around the globe, turn to AT&T for our highly secure smart solutions.

Source: AT&T

The post AT&T Foundry, Caltech Form Alliance for Quantum Technologies appeared first on HPCwire.

Call for Papers Now Open for In-Memory Computing Summit 2017

Wed, 05/31/2017 - 08:29

FOSTER CITY, Calif., May 31, 2017 — GridGain Systems, provider of enterprise-grade in-memory computing solutions based on Apache Ignite, today announced a Call for Papers for the third annual In-Memory Computing Summit, taking place October 24-25, 2017 at the South San Francisco Conference Center in Silicon Valley. The Call for Papers will end on June 30, 2017. Sponsorship opportunities are now available.

Organized by GridGain Systems, the In-Memory Computing Summit (IMCS) is held annually in both Europe and North America. The summits are the only industry-wide events that focus on the full range of in-memory computing-related technologies and solutions. The conferences are attended by technical decision makers, business decision makers, operations experts, DevOps professionals, and developers. The attendees make or influence purchasing decisions about in-memory computing, Big Data, Fast Data, IoT and HPC solutions. The In-Memory Computing Summit conference committee is looking for talks on a variety of topics including:

  • User stories and business use cases
  • What’s new and upcoming in in-memory computing
  • Best design practices and performance optimization
  • High availability, clustering, and replication
  • Monitoring, management, automation tools and best practices
  • In-memory computing in the cloud

Industry leaders, technical experts and visionaries can submit their proposals via the conference website.

Sponsorship Opportunities

In-Memory Computing Summit 2017 is sponsored by leading technology vendors. A limited number of Platinum, Gold and Silver sponsorship packages are available for the Silicon Valley event. Sponsors have an opportunity to increase their visibility and reputation as technology leaders, interact with key in-memory computing business and technical decision makers, and connect with technology purchasers and influencers.

Future In-Memory Computing Summits

The In-Memory Computing Summit Europe 2017 will take place June 20-21, 2017 at the Mövenpick Hotel Amsterdam City Centre.

About the In-Memory Computing Summit

The In-Memory Computing Summits are the only industry-wide events of their kind, tailored to in-memory computing-related technologies and solutions. They are the perfect opportunity to reach technical IT decision makers, IT implementers, and developers who make or influence purchasing decisions in the areas of in-memory computing, Big Data, Fast Data, IoT and HPC. Attendees include CEOs, CIOs, CTOs, VPs, IT directors, IT managers, data scientists, senior engineers, senior developers, architects and more. The events are unique forums for networking, education and the exchange of ideas — ideas that power digital transformation and the future of Fast Data. For more information, visit https://imcsummit.org/us/ and follow the events on Twitter @IMCSummit.

About GridGain Systems

GridGain Systems is revolutionizing real-time data access and processing by offering enterprise-grade in-memory computing solutions built on Apache Ignite. GridGain solutions are used by global enterprises in financial, software, ecommerce, retail, online business services, healthcare, telecom and other major sectors. GridGain solutions connect data stores (SQL, NoSQL, and Apache Hadoop) with cloud-scale applications and enable massive data throughput and ultra-low latencies across a scalable, distributed cluster of commodity servers. GridGain is the most comprehensive, enterprise-grade in-memory computing platform for high volume ACID transactions, real-time analytics, and hybrid transactional/analytical processing. For more information, visit gridgain.com.

Source: GridGain Systems

The post Call for Papers Now Open for In-Memory Computing Summit 2017 appeared first on HPCwire.

TYAN Displays HPC, Cloud Server Platforms at Computex

Wed, 05/31/2017 - 08:23

TAIPEI, May 31, 2017 — TYAN, an industry-leading server platform design manufacturer and subsidiary of MiTAC Computing Technology Corporation, exhibits its new line-up of HPC, cloud computing and storage server platforms this week at Computex 2017 in Taipei, Taiwan. TYAN showcases a full lineup of platforms based on the upcoming Intel Xeon Processor Scalable Family that are targeted at the Data Center, Virtualization, Supercomputing, Enterprise and Embedded infrastructure markets.

“With increased needs of data-intensive applications driven by the growth of artificial intelligence, virtual reality and high-performance computing, customers need an advanced solution to address the demands of big-data, in-memory analytics workloads for their server infrastructures,” said Danny Hsu, Vice President of MiTAC Computing Technology Corporation’s TYAN Business Unit. “TYAN plans to offer the latest server platforms based on the upcoming Intel Xeon Processor Scalable family starting in mid-2017, and promises to offer workload-optimized performance, power efficiency, and hardware-enhanced platform features.”

High-density GPU and Intel Xeon Phi Coprocessor-based Platforms Optimized for HPC and Machine Learning Applications

TYAN’s next-generation HPC computing platforms are all based on the Intel Xeon Processor Scalable Family and are designed for the heavy computing workloads of big data and high-performance data analysis applications. To meet these demands, TYAN has three platforms on display that target the HPC, Machine Learning, and Technical Computing markets.

The FT77D-B7109 is a 4U dual root complex GPU server with two CPU sockets and support for up to 8 Intel Xeon Phi coprocessor x200 series cards (codenamed Knights Landing). It specializes in massively parallel workloads including scientific computing, genetic sequencing, oil & gas discovery, large scale facial recognition, and brute force cryptography.

TYAN is also displaying a new high-performance workstation platform named the FT48T-B7105. This workstation gives maximum I/O to the professional power user, with support for up to 5 Intel Xeon Phi coprocessors, and is aimed at Digital Content Creation, Computer-Generated Imagery (CGI), and Computer-Aided Design (CAD) applications.

TYAN’s GA88-B5631 is a fully peer-to-peer, single root complex 1U GPU server. Featuring a single Intel Xeon Processor Scalable Family CPU socket, the platform supports up to 4 Intel Xeon Phi coprocessors and is ideal for many of today’s emerging cognitive computing workloads such as Machine Learning and Artificial Intelligence.

TYAN Highlights Extreme Performance, Density and Scalability for Next-Generation Datacenter, Enterprise and Cloud

Powered by the upcoming Intel Xeon Processor Scalable Family, TYAN’s new range of cloud computing and storage platforms is optimized for data-intensive workloads and virtualization applications to deliver extreme performance, density and scalability with power and cost efficiency.

The 1U GT75B-B7102, with support for 10 2.5″ small-form-factor SATA bays, four of which can support NVMe U.2 drives, is an ideal platform for virtualization and in-memory databases like Apache Ignite. TYAN’s GT62F-B5630 is a 1U server platform designed for hybrid NVMe/SATA cache data storage, with support for up to 8 hot-swap NVMe U.2 drives along with an OCP v2.0 LAN mezzanine. The single CPU socket design makes it an ideal platform for workloads that work best within a single NUMA domain and require large amounts of high-speed flash, such as many media streaming applications.

The all-new TN200-B7108-X4S is a dual-socket 2U 4-node all-flash server platform with support for 24x 2.5″ NVMe U.2/SATA drives. Each node gets 8 NVMe drives: 6 PCIe x4 NVMe U.2 hot-swap drive bays up front plus a pair of internal 2280/22110 NVMe M.2 ports. With a cumulative total of 8 CPU sockets and 32 NVMe devices across the entire chassis, the TN200-B7108-X4S is an ideal platform for high-performance computing workloads and hyper-converged all-flash storage applications.

Exhibits also include cost-effective general-purpose server platforms featuring support for the dual-socket Intel Xeon Processor Scalable Family. TYAN’s 1U GT24E-B7106 is an energy-efficient server for data center deployment; the 2U TN76-B7102, with support for 2x GPUs and 12 3.5″ hot-swap drive bays, is designed for multiple application scenarios including technical computing and virtual machine deployment.

TYAN Product Exhibits @ Computex 2017

HPC & Coprocessor Platforms:

  • FT76-B7922: 4U quad-socket Intel Xeon processor E7-8800/4800 v4-based platform with support for up to 4 Intel Xeon Phi coprocessor modules, 96 DDR4 DIMM slots, and 8 2.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices
  • FT77D-B7109: 4U dual-socket Intel Xeon Processor Scalable Family-based platform with support for up to 8 Intel Xeon Phi coprocessor modules and 14 2.5″ hot-swap SATA 6Gb/s devices; 4 of the bays can support NVMe U.2 drives
  • FT48B-B7100: 4U dual-socket Intel Xeon Processor Scalable Family-based platform with support for up to 4 Intel Xeon Phi coprocessor modules and 10 2.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices
  • FT48T-B7105: Pedestal dual-socket Intel Xeon Processor Scalable Family-based platform with support for up to 5 Intel Xeon Phi coprocessor modules and 8 3.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices
  • GA88-B5631: 1U single-socket Intel Xeon Processor Scalable Family-based platform with support for up to 4 Intel Xeon Phi coprocessor modules and 2 2.5″ hot-swap SATA 6Gb/s devices

 

Cloud Computing and Storage Platforms:

  • GT86A-B7083: 1U dual-socket Intel Xeon processor E5-2600 v4-based platform supports up to 12 3.5″ internal SATA 6Gb/s devices and 1 2.5″ internal SSD boot device
  • GT75B-B7102: 1U dual-socket Intel Xeon Processor Scalable Family-based platform supports up to 10 2.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices; 4 of the bays can support NVMe U.2 drives
  • GT62F-B5630: 1U single-socket Intel Xeon Processor Scalable Family-based platform supports up to 10 2.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices; 8 of the bays can support NVMe U.2 drives
  • TN200-B7108-X4S: 2U 4-node dual-socket Intel Xeon Processor Scalable Family-based platform supports up to 24 2.5″ NVMe U.2 or SATA 6Gb/s drives across the chassis (6 per node)

General Purpose Platforms:

  • GT24E-B7106: 1U dual-socket Intel Xeon Processor Scalable Family-based platform supports up to 4 3.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices; 2 of the bays can support NVMe U.2 drives
  • TN76-B7102: 2U dual-socket Intel Xeon Processor Scalable Family-based platform supports up to 8 standard PCIe slots and 12 3.5″ hot-swap SAS 12Gb/s or SATA 6Gb/s devices

Embedded & Server Motherboards:

  • S3227: Intel Atom processor C3000 series-based server board in Mini-ITX (6.7″ x 6.7″) form factor for low-power embedded and networking applications
  • S5539: Single-socket Intel Xeon processor D-1500 series-based server board in micro-ATX (9.6″ x 9.6″) form factor for 1U or pedestal low-power storage server deployment
  • S7070: Dual-socket Intel Xeon processor E5-2600 v4-based server board in EEB (12″ x 13″) form factor for both server and workstation applications
  • S7076: Dual-socket Intel Xeon processor E5-2600 v4-based server board in rack-optimized EATX (12″ x 13″) form factor for 1U intermediate server deployment
  • S7086: Dual-socket Intel Xeon processor E5-2600 v4-based server board in rack-optimized EATX (12″ x 13″) form factor for 2U full-featured server deployment
  • S5542: Intel Xeon processor E3-1200 v6-based server board in ATX (12″ x 9.6″) form factor for entry server deployment
  • S5545: 7th Generation Intel Core i3/i5/i7 series processor-based board in micro-ATX (9.6″ x 9.6″) form factor for embedded applications
  • S5547: 7th Generation Intel Core i3/i5/i7 series processor-based board in Flex ATX (9″ x 7.5″) form factor for embedded applications

 

About TYAN

TYAN, as a leading server brand of MiTAC Computing Technology Corporation under the MiTAC Group (TSE:3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, system integrators and resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly-integrated, and reliable products for a wide range of applications such as server appliances and solutions for high-performance computing and server/workstation use in markets such as CAD, DCC, E&P and HPC. For more information, visit MiTAC’s website at http://www.mitac.com or TYAN’s website at http://www.tyan.com.

Source: TYAN

The post TYAN Displays HPC, Cloud Server Platforms at Computex appeared first on HPCwire.

Pages