HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

OpenSuCo: Advancing Open Source Supercomputing at ISC

Thu, 06/15/2017 - 14:37

As open source hardware gains traction, the potential for a completely open source supercomputing system becomes a compelling proposition, one that is being investigated by the International Workshop on Open Source Supercomputing (OpenSuCo). Ahead of OpenSuCo’s inaugural workshop taking place at ISC 2017 in Frankfurt, Germany, next week, HPCwire reached out to program committee members Anastasiia Butko and David Donofrio of Lawrence Berkeley National Laboratory to learn more about the effort’s activities and vision.

HPCwire: Please introduce “OpenSuCo” — what are your goals and objectives?

OpenSuCo: As we approach the end of MOSFET scaling, the HPC community needs a way to continue performance scaling. One way of providing that scaling is by providing more specialized architectures tailored for specific applications. In order to make possible the specification and verification of these new architectures, more rapid prototyping methods need to be explored. At the same time, these new architectures need software stacks and programming models to be able to actually use these new designs.

There has been a consistent march toward open source for each of these components. At the node hardware level, Facebook has launched the Open Compute Project; on the system software side, Intel has launched OpenHPC, which provides software tools to manage HPC systems. However, each of these efforts uses closed source components in its final version. We present OpenSuCo: a workshop for exploring and collaborating on building an HPC system using open-source hardware and system software IP (intellectual property).

The goal of this workshop is to engage the HPC community and explore open-source solutions for constructing an HPC system – from silicon to applications.

[Figure: the progress of open source software and hardware]

HPCwire: We’ve seen significant momentum for open source silicon in the last few years, with RISC-V and the Open Compute Project, for example. What is the supercomputing perspective on this?

OpenSuCo: Hardware specialization, specifically the creation of Systems-on-Chip (SoCs), offers a method to create cost-effective HPC architectures from off-the-shelf components. However, effectively tapping the advantages of SoC specialization requires expensive and often closed source tools. Furthermore, the building blocks used to create the SoC may themselves be closed source, limiting customization. This often leaves SoC design methodologies beyond the reach of many academics and DOE researchers. The case for specialized accelerators can also be made in economic terms: in contrast to historical trends, the energy consumed per transistor has been holding steady while the cost (in dollars) per transistor continues to decrease, implying that we will soon be able to pack more transistors into a given area than can be operated simultaneously.
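The arithmetic behind that last claim can be sketched with purely illustrative numbers (all values below are assumptions chosen only to show the reasoning, not measured figures): a fixed power budget caps how many transistors can switch at once, while a falling cost per transistor keeps raising how many can be bought.

```python
# Illustrative "dark silicon" arithmetic. All numbers are hypothetical,
# chosen only to demonstrate the trend described in the text above.

power_budget_w = 100.0          # fixed chip power budget (assumed)
energy_per_transistor_w = 1e-8  # per-transistor switching power, holding steady (assumed)
cost_budget_usd = 100.0         # fixed dollar budget for the chip (assumed)

def affordable_transistors(cost_per_transistor_usd):
    """How many transistors the dollar budget buys at a given unit cost."""
    return cost_budget_usd / cost_per_transistor_usd

# The power budget fixes how many transistors can operate simultaneously.
powerable_transistors = power_budget_w / energy_per_transistor_w  # 1e10, constant

for cost in [1e-8, 1e-9, 1e-10]:  # cost per transistor keeps dropping
    affordable = affordable_transistors(cost)
    dark_fraction = max(0.0, 1 - powerable_transistors / affordable)
    print(f"cost/transistor ${cost:.0e}: can buy {affordable:.0e}, "
          f"can power {powerable_transistors:.0e}, dark fraction {dark_fraction:.0%}")
```

With energy per transistor flat, every 10x drop in cost leaves a larger fraction of the chip unpowerable at any instant, which is exactly the opening for specialized accelerators that are only switched on when their application runs.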

From an economic standpoint, we are witnessing an explosion of highly cost-sensitive and application-specific IoT (internet of things) devices. The developers of these devices face a stark choice: spend millions on a commercial license for processors and other IP or face the significant risk and cost (in both development time and dollars) of developing custom hardware. Similar parallels can be drawn to the low-volume and rapid design needs found in many scientific and government applications. By developing a low cost and robust path to the generation of specialized hardware, we can support the development and deployment of application-tailored processors across many DOE mission areas.

The design methodologies traditionally aimed at these cost-sensitive design flows can now be applied to high-end computing, thanks to the emergence of embedded IP offering HPC-centric capabilities such as double-precision floating point, 64-bit addressing, and options for high performance I/O and memory interfaces. The SoC approach, coupled with highly accessible open source flows, will allow chip designers to include only the features they want, excluding those not utilized by mainstream HPC systems. By pushing customization into the chip, we can achieve a degree of customization that is not feasible with today’s commodity board-level computing system design.

HPCwire: Despite pervasive support in tech circles, not everyone is convinced of the merits of open source. What is the case for open source in high performance computing?

OpenSuCo: While many commercial tools provide technology to customize a processor or system given a static baseline, they generally offer only proprietary solutions that both restrict the level of customization that can be applied and increase the cost of production. This cost matters most in low-volume or highly specialized markets, such as scientific, research, and defense applications, since large-volume customers can absorb this non-recurring engineering (NRE) cost as part of their overall production. As an alternative to closed source hardware flows, open source hardware has been growing in popularity in recent years, mirroring the rise of Linux and open source software in the 1990s and early 2000s. We put forth that open source hardware will drive the next wave of innovation for hardware IP.

In contrast to closed-source hardware IP and flows, a completely open framework and flow enable extreme customization and drive the cost of initial development to virtually zero. Going further, by leveraging community-supported and maintained technology, it is possible to incorporate all of the supporting software infrastructure (compilers, debuggers, etc.) that works with open source processor designs. A community-led effort also creates a support network that replaces what is typically found with commercial products, and it leads to more robust implementations, since a greater number of users are testing and working with the designs. Finally, any closed-source design carries an inherent security risk: its operation cannot be fully inspected. Open source hardware allows the user to examine every aspect of a design for a thorough security review.

HPCwire: Even with the advances in open source hardware, a completely open source supercomputing system seems ambitious at this point. Can you speak to the reality of this goal in the context of the challenges and community support?

OpenSuCo: We agree that building a complete open-source HPC system is a daunting task; however, a system composed of a greater number of open source components is an excellent way to increase technological diversity and spur innovation.

The rapid growth and adoption of the RISC-V ISA is an excellent example of how a community can produce a complete and robust software toolchain in a relatively short time. While RISC-V is largely used in IoT devices at the moment, there are multiple efforts to extend its reach, in both implementations and functionality, into the HPC space.

HPCwire: What is needed on the software side to make this vision come together?

OpenSuCo: The needs and challenges of an open source-based supercomputer are no greater than those of a traditional “closed” system. Most future systems will need to face the continuing demands of increased parallelism, shifting flop-to-byte ratios, and an increase in the quantity and variety of accelerators. An open system may offer greater transparency and a larger user community, allowing more effective and distributed development. Regardless, continued collaboration between software and hardware developers will be necessary to create the community required to support this effort. As part of the OpenSuCo workshop, we hope to bring together a diverse community of software and hardware architects willing to engage with the possibility of realizing this vision.

HPCwire: You’re holding a half-day workshop at ISC 2017 in Frankfurt on June 22. What is on the agenda and who should attend?

OpenSuCo: The ISC 2017 workshop agenda consists of three technical tracks:

Hardware Track

Sven Karlsson and Pascal Schleuniger (Danmarks Tekniske Universitet)

Kurt Keville (Massachusetts Institute of Technology)

Anne Elster (Norwegian University of Science and Technology)

Software Track

Hiroaki Kataoka and Ryos Suzuki

Anastasiia Butko (Berkeley Lab)

Xavier Teruel (Barcelona Supercomputing Center)

Collaboration Track

Bill Nitzberg (Altair Engineering, Inc.)

Jens Breitbart (Robert Bosch GmbH)

Antonio Peña (Barcelona Supercomputing Center)

Keynote Speaker: Alex Bradbury (University of Cambridge)

The complete agenda of the event can be found online at http://www.opensuco.community/2017/05/24/isc17-agenda/.

While many of the emerging technologies and opportunities surround the rise of open-source hardware, we would like to invite all members of the HPC community to participate in a true co-design effort in building a complete HPC system.

HPCwire: You’ll also be holding a workshop at SC17. You’ve put out a call for papers. How else can people get involved in OpenSuCo activities?

OpenSuCo: While we have long advocated for innovative and open source systems for the HPC community, we are just beginning to tackle this comprehensive solution and cannot do it alone. We welcome collaborators to help build the next generation of HPC software and hardware design flows.

The post OpenSuCo: Advancing Open Source Supercomputing at ISC appeared first on HPCwire.

DOE Awards Six Research Contracts to Accelerate U.S. Supercomputing

Thu, 06/15/2017 - 13:51

WASHINGTON, D.C., June 15, 2017 – Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers.

The awardees will receive funding for research and development to maximize the energy efficiency and overall performance of future large-scale supercomputers, which are critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences. The $258 million in funding will be allocated over a three-year contract period, with companies providing additional funding amounting to at least 40 percent of their total project cost, bringing the total investment to at least $430 million.
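As a quick sanity check on the figures quoted above: if the $258 million federal share is at most 60 percent of the total (with vendors covering at least 40 percent), the combined investment works out to roughly $430 million.

```python
# Verify the PathForward funding arithmetic as stated in the announcement:
# DOE provides $258M, and companies contribute at least 40% of the total
# project cost, so DOE's share is at most 60% of the total.
doe_funding_m = 258.0
min_vendor_share = 0.40

total_m = doe_funding_m / (1 - min_vendor_share)  # DOE covers at most 60%
vendor_m = total_m - doe_funding_m

print(f"total: ${total_m:.0f}M, vendor share: ${vendor_m:.0f}M")  # total: $430M
```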

“Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation,” said Secretary Perry.

“These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

“The PathForward program is critical to the ECP’s co-design process, which brings together expertise from diverse sources to address the four key challenges: parallelism, memory and storage, reliability and energy consumption,” ECP Director Paul Messina said. “The work funded by PathForward will include development of innovative memory architectures, higher-speed interconnects, improved reliability systems, and approaches for increasing computing power without prohibitive increases in energy demand. It is essential that private industry play a role in this work going forward: advances in computer hardware and architecture will contribute to meeting all four challenges.”

The following U.S. technology companies are the award recipients:

  • Advanced Micro Devices (AMD)
  • Cray Inc. (CRAY)
  • Hewlett Packard Enterprise (HPE)
  • International Business Machines (IBM)
  • Intel Corp. (Intel)
  • NVIDIA Corp. (NVIDIA)

The Department’s funding for this program is supporting R&D in three areas—hardware technology, software technology, and application development—with the intention of delivering at least one exascale-capable system by 2021.

Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.

Additional information and attributed quotes from the vendors receiving the PathForward funding can be found here.

Source: DOE

The post DOE Awards Six Research Contracts to Accelerate U.S. Supercomputing appeared first on HPCwire.

Penguin Computing Announces FrostByte with ThinkParQ BeeGFS Storage

Thu, 06/15/2017 - 12:32

FREMONT, Calif., June 15, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced FrostByte with ThinkParQ BeeGFS, the latest member of the family of software-defined storage solutions. FrostByte is Penguin Computing’s scalable storage solution for HPC clusters, high-performance enterprise applications and data intensive analytics.

“We are pleased to announce our Gold Partner relationship with ThinkParQ,” said Tom Coull, President and CEO, Penguin Computing. “Together, Penguin Computing and ThinkParQ can deliver a fully supported, scalable storage solution based on BeeGFS, engineered for optimal performance and reliability with best-in-class hardware and expert services.”

BeeGFS, developed at the Fraunhofer Center for High Performance Computing in Germany and delivered by ThinkParQ GmbH, is a parallel file system designed specifically for I/O-intensive workloads in performance-critical environments, with a strong focus on easy installation and high flexibility, including converged environments where storage servers are also used for computing. BeeGFS transparently spreads user data across multiple servers; by increasing the number of servers and disks in the overall storage system, users can seamlessly scale performance and capacity from small clusters up to enterprise-class systems with thousands of nodes. BeeGFS powers the storage of hundreds of scientific and industry customer sites worldwide.

Sven Breuner, CEO of ThinkParQ, stated, “Before officially teaming up with Penguin Computing, we learned from our customers about how much they value Penguin Computing as an HPC solution provider. Now that we are working very closely together, we are even more impressed by the level of professionalism and customer dedication, which absolutely convinced us that Penguin Computing is the ideal partner for highest quality customer solutions.”

BeeGFS offers a number of compelling features making it ideal for demanding, high-performance, high-throughput workloads found in HPC, life sciences, deep learning, big data analytics, media & entertainment, financial services, and much more. BeeGFS is natively supported in the Linux kernel and runs on x86_64, OpenPOWER, ARM64, and other architectures. It supports multiple networks with dynamic failover and provides fault-tolerance with built-in replication and filesystem sanity checks. BeeGFS includes both command-line and graphical tools, simplifying administration and monitoring. A unique feature is BeeGFS On Demand (BeeOND), which allows users to create temporary parallel filesystems on a per-job basis, even within cluster batch job scripts.

Penguin Computing FrostByte storage solutions lift the limitations of traditional storage appliances by delivering engineered designs to meet customer data protection and performance requirements in a highly scalable, supported platform. Penguin Computing offers FrostByte in both on-premise and hosted deployments with tunable support options, including Sysadmin-as-a-Service.

Penguin Computing’s FrostByte with ThinkParQ BeeGFS Storage will be featured at ISC 2017, Europe’s largest, annual, high-performance computing event, June 18-21. Visit booth #J-610 for details and enter the raffle for a chance to win a ThinkParQ BeeCopter.

About ThinkParQ

ThinkParQ was founded as a spin-off from the Fraunhofer Center for High Performance Computing by the key people behind BeeGFS to bring fast, robust, scalable storage to market. ThinkParQ is responsible for support, provides consulting, organizes and attends events, and works together with system integrators to create turn-key solutions. ThinkParQ and Fraunhofer internally cooperate closely to deliver high quality support services and to drive further development and optimization of BeeGFS for tomorrow’s performance-critical systems. Visit thinkparq.com to learn more about the company.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high-performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets. Visit www.penguincomputing.com to learn more about the company and follow @PenguinHPC on Twitter.

Source: Penguin Computing

The post Penguin Computing Announces FrostByte with ThinkParQ BeeGFS Storage appeared first on HPCwire.

HPC Symposium to Host Innovative ‘Keynote Debate’ and Live Performance ‘Bella Gaia’

Thu, 06/15/2017 - 12:22

BOULDER, Colo., June 15, 2017 — “The Future of HPC Architecture” is the topic of an innovative “Keynote Debate,” the first of two keynotes at the Rocky Mountain Advanced Computing Consortium’s 7th annual High Performance Computing Symposium, Aug. 15-17 on the CU-Boulder East (Research) Campus.

This year’s Symposium – recognized as one of the nation’s leading regional events in the HPC field – will be held at CU-Boulder’s Sustainability, Energy & Environment Complex (SEEC). Registration is $150, which includes all conference materials and meals plus a reception. The student registration fee is just $30, and postdocs can sign up for $75, thanks to support from the many event sponsors. For those only able to attend the Aug. 17 tutorials, registration will be $100.

Information about the Symposium, including the program schedule and registration information can be found at the website: www.rmacc.org/hpcsymposium.

The Keynote Debate will be a moderated debate featuring industry, national labs and educational leaders in the ever-growing high performance computing field.  The second keynote features New York filmmaker, composer and director (and CU-Boulder graduate) Kenji Williams, who will present a live performance of the globally touring NASA-powered data visualization spectacle – BELLA GAIA. Williams’ unique storytelling method utilizes both performance and research materials combined with a stunning video presentation.

The technical program features six concurrent tracks that cover a wide range of advanced computing topics, with a particular emphasis on data analytics and visualization. The tutorial sessions feature the acclaimed “Supercomputing in Plain English” series by Henry Neeman, in addition to classes on Python, R, and Singularity taught by experts from around the region. Other technical presentations will cover HPC-related resources such as Globus, the Open Science Grid, Amazon EC2, and Bro.

The annual symposium brings together faculty, researchers, industry leaders and students from throughout the Rocky Mountain Region. Special beginner level tutorials and workshops are included for those new to the HPC field and for students who wish to learn how to use a variety of advanced computing skills in their research. Another feature for students is a poster competition with winners receiving an all-expenses paid trip to SC17 in Denver.

Symposium sponsors are led by Intel (Diamond), Dell (Platinum) and NVIDIA (Reception).  Hewlett Packard Enterprise, PureStorage, Mellanox Technologies and DDN Storage are Gold sponsors, and Lenovo, Allinea and Silicon Mechanics are supporting at the Silver sponsor level.

 About the Rocky Mountain Advanced Computing Consortium

The Rocky Mountain Advanced Computing Consortium is a collaboration among academic and research institutions located throughout the intermountain states. The RMACC mission is to facilitate widespread, effective use of high performance computing throughout the Rocky Mountain region. Membership is made up of the major research universities in Colorado, Wyoming, Montana, Idaho, Utah, and New Mexico, as well as the research agencies NOAA, NCAR, the U.S. Geological Survey, and NREL. To learn more about the RMACC visit: www.rmacc.org/about

Source: RMACC

The post HPC Symposium to Host Innovative ‘Keynote Debate’ and Live Performance ‘Bella Gaia’ appeared first on HPCwire.

Bio-IT World Best Practices Awards Opens for Entries

Thu, 06/15/2017 - 10:10

NEEDHAM, Mass., June 15, 2017 — Bio-IT World today opened the call for entries for the 2018 Bio-IT World Best Practices Awards. Bio-IT World has held the Best Practices awards since 2003, highlighting outstanding examples of technology innovation in the life sciences, from basic R&D to translational medicine. We particularly encourage vendors to nominate entries from valued academic and/or industry partners.

Criteria for Entry Include:

Send us a brief overview of your technology, including a statement of the issue or problem at hand, the innovative approach or technology applied, and the ROI (Return on Investment) in terms of scientific insights, cost savings, productivity, etc.

Entries will be accepted until March 2, 2018.

All entries will be judged by an expert panel in February/March 2018.

Winners will be announced in a plenary session at Bio-IT World Conference & Expo, May 15-17, 2018 at the Seaport World Trade Center in Boston.

All winners will be featured in follow-up profiles in Bio-IT World.

Past Winners Include:

2017 Best Practices Award Winners:

  • Clinical IT & Precision Medicine: Maccabi Healthcare System nominated by Medial EarlySign
  • Informatics: Rady Children’s Institute for Genomic Medicine nominated by Edico Genome
  • Knowledge Management: Allotrope Foundation
  • IT infrastructure/HPC: Earlham Institute
  • Judges’ Choice: Biomedical Imaging Research Services Section (BIRSS) nominated by SRA International
  • Editor’s Choice: Alexion Pharmaceuticals nominated by EPAM Systems
  • Honorable Mention: Fermenta – B.

2016 Best Practices Award Winners:

  • Clinical IT & Precision Medicine: Amgen
  • Informatics: FDA & DNAnexus
  • Knowledge Management: AstraZeneca
  • Judges Prize: Human Longevity
  • Editors’ Choice Award: XOMA

2015 Best Practices Award Winners:

  • Informatics: Biogen
  • IT Infrastructure: University of California, Santa Cruz
  • Knowledge Management: European Lead Factory
  • Research & Drug Discovery: UCB BioPharma
  • Clinical & Health IT: GlaxoSmithKline
  • Judges’ Prize: Michael J. Fox Foundation
  • Editors’ Choice Award: National Institutes of Health Undiagnosed Diseases Program

2014 Best Practices Award Winners:

  • Clinical & Health IT: AstraZeneca and Tessella
  • IT Infrastructure & HPC: Baylor College of Medicine
  • Research & Drug Discovery: U-BIOPRED
  • Informatics: The Pistoia Alliance
  • Knowledge Management: Genentech
  • Editors’ Prize: The Icahn School of Medicine at Mt. Sinai
  • Judges’ Prize: UK National Health Service

For more information on submitting your Bio-IT World Best Practices Awards entry form, visit Bio-ITWorld.com/BestPractices — or contact Allison Proffitt, Editorial Director, 617.233.8280 or aproffitt@Bio-ITWorld.com. The deadline for entry is March 2, 2018.

Winners are chosen by a panel of experts and will be announced at the Cambridge Healthtech Institute’s Bio-IT World Conference & Expo, taking place May 15-17, 2018 at the Seaport World Trade Center in Boston.

Additional information on this event can be found at Bio-ITWorldExpo.com.

About Bio-IT World (www.Bio-ITWorld.com)

Part of Healthtech Publishing, Bio-IT World provides outstanding coverage of cutting-edge trends and technologies that impact the management and analysis of life sciences data, including next-generation sequencing, drug discovery, predictive and systems biology, informatics tools, clinical trials, and personalized medicine. Through a variety of sources, including Bio-ITWorld.com, the Weekly Update Newsletter, and the Bio-IT World News Bulletins, Bio-IT World is a leading source of news and opinion on technology and strategic innovation in the life sciences, including drug discovery and development.

About Cambridge Healthtech Institute (www.healthtech.com)

Cambridge Healthtech Institute (CHI), a division of Cambridge Innovation Institute, is the preeminent life science network for leading researchers and business experts from top pharmaceutical companies, biotechs, CROs, academia, and niche service providers. CHI is renowned for its vast conference portfolio held worldwide, including PepTalk, Molecular Medicine Tri-Conference, SCOPE Summit, Bio-IT World Conference & Expo, PEGS Summit, Drug Discovery Chemistry, Biomarker World Congress, World Preclinical Congress, Next Generation Dx Summit and Discovery on Target. CHI’s portfolio of products includes Cambridge Healthtech Institute Conferences, Barnett International, Insight Pharma Reports, Cambridge Marketing Consultants, Cambridge Meeting Planners, Knowledge Foundation, Bio-IT World, Clinical Informatics News and Diagnostics World.

Source: Bio-IT World

The post Bio-IT World Best Practices Awards Opens for Entries appeared first on HPCwire.

E4 Engineering, QCT Supply CERN with Thousands of Servers, Petabytes of Storage

Thu, 06/15/2017 - 09:54

SCANDIANO, Italy, June 15, 2017 — CERN, the European Organization for Nuclear Research, is one of the world’s largest and most well-regarded centres for scientific research, probing the fundamental structure of the universe. CERN uses the world’s largest and most complex scientific instruments to study the basic constituents of matter, the fundamental particles, which are made to collide at close to the speed of light. These collisions give physicists clues about how the particles interact and provide insights into the fundamental laws of nature. The huge quantities of data they generate, which need to be processed, stored, and made available to the whole scientific community, place extreme requirements on computational performance and storage capacity.

Because High Energy Physics requires powerful systems for parallel and distributed computing, E4 Computer Engineering joined forces with Quanta Cloud Technology (QCT) to deliver 3,400 system units over the last couple of years, totaling 60,000 cores and 10 PB of storage (3 PB on HDDs and 7 PB on SSDs). E4 Computer Engineering was awarded the contract by CERN as the result of a competitive call for tender; the systems met CERN’s stringent criteria for scalability, cost effectiveness, and performance.

E4 Computer Engineering and QCT are dedicated to building, configuring, and delivering computational equipment to highly sophisticated customers such as CERN, with specific configurations and strict deadlines. This collaboration provides high-performance computers that combine energy efficiency with very low failure rates.

E4 Computer Engineering has a long and successful history of provisioning systems to CERN and to its Tier 1 site, INFN (Istituto Nazionale di Fisica Nucleare), through competitive tenders. With thousands of x86 cores and petabytes of storage delivered, E4’s expertise ensures improved scalability and increased grid performance.

“The longstanding relationship established with CERN has been particularly rewarding for our company, because our engineers have been encouraged to adopt new procedures, methods and technologies, pushing us toward the level of excellence that E4 Computer Engineering has sought since its founding,” said Cosimo Gianfreda, CTO, E4 Computer Engineering. “We are also very proud of our partnership with QCT: the high flexibility of their hardware provides excellent expandability and configurability, adapting to a diverse range of data center infrastructure requirements.”

“Supporting scientific computing applications is an especially rewarding part of what we do at QCT,” said Mike Yang, President of QCT. “Our work with E4 has played an important role in supporting the applications that advance scientific understanding and the basic building blocks of our universe. All of this is driven by advanced computing and storage technologies, delivered with unsurpassed power and maintenance efficiency. We look forward to continuing our work with E4 to help CERN make discoveries that benefit all of humanity.”

“E4 has successfully supplied us with reliable and performant servers of the QCT brand over the last couple of years,” said Olof Bärring, Deputy Head of the computing facilities group, IT department, CERN. “The systems have proved to be suitable for different purposes ranging from High Throughput Computing (HTC) number crunching of physics data coming from LHC experiments to High Performance Computing (HPC) clusters for the CERN theory group. We are also very pleased with the reliable deliveries and the warranty support provided.”

About E4 Computer Engineering

Since 2002, E4 Computer Engineering has been innovating and actively encouraging the adoption of new computing and storage technologies. Because new ideas are so important, we invest heavily in research and hence in our future. Thanks to our comprehensive range of hardware, software and services, we are able to offer our customers complete solutions for their most demanding workloads on: HPC, Big-Data, AI, Deep Learning, Data Analytics, Cognitive Computing and for any challenging Storage and Computing requirements. E4. When Performance Matters.

About Quanta Cloud Technology (QCT)

Quanta Cloud Technology (QCT) is a global data center solution provider. We combine the efficiency of hyperscale hardware with infrastructure software from a diversity of industry leaders to solve next-generation data center design and operation challenges. QCT serves cloud service providers, telecoms and enterprises running public, hybrid and private clouds.

Product lines include hyperconverged and software-defined data center solutions as well as servers, storage, switches and integrated racks with a diverse ecosystem of hardware component and software partners. QCT designs, manufactures, integrates and services cutting-edge offerings via its own global network. The parent of QCT is Quanta Computer, Inc., a Fortune Global 500 corporation.

Source: QCT

The post E4 Engineering, QCT Supply CERN with Thousands of Servers, Petabytes of Storage appeared first on HPCwire.

Virginia Tech Researchers Discover Key to Faster Processing for Exascale

Thu, 06/15/2017 - 09:46

BLACKSBURG, Va., June 15, 2017 — Exascale computing — the ability to perform 1 billion billion calculations per second — is what researchers are striving to push processors to do in the next decade. That’s 1,000 times faster than the first petascale computer, which came into existence in 2008.
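The scale comparison is straightforward arithmetic:

```python
# "1 billion billion" operations per second is 10**18 (an exaflop/s when the
# operations are floating-point); the first petascale system in 2008 reached
# 10**15 operations per second, a factor of 1,000 less.
exascale_ops = 10**18   # 1 billion billion per second
petascale_ops = 10**15  # 1 quadrillion per second

print(exascale_ops // petascale_ops)  # 1000
```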

Achieving efficiency will be paramount to building high-performance parallel computing systems if applications are to run in environments of enormous scale and also limited power.

A team of researchers in the Department of Computer Science in Virginia Tech’s College of Engineering discovered a key to what could keep supercomputing on the road to the ever-faster processing times needed to achieve exascale computing — and what policymakers say is necessary to keep the United States competitive in everything from cybersecurity to e-commerce.

“Parallel computing is everywhere when you think about it,” said Bo Li, computer science Ph.D. candidate and first author on the paper being presented about the team’s research this month. “From making Hollywood movies to managing cybersecurity threats to contributing to milestones in life science research, making strides in processing times is a priority to get to the next generation of supercomputing.”

Li will present the team’s research on June 29 at the Association for Computing Machinery’s 26th International Symposium on High Performance Parallel and Distributed Computing in Washington, D.C. The research was funded by the National Science Foundation.

The team used a model called Compute-Overlap-Stall (COS) to better isolate contributions to the total time to completion for important parallel applications. By using the COS model they found that a nebulous measurement called overlap played a key role in understanding the performance of parallel systems. Previous models lumped overlap time into either compute time or memory stall time, but the Virginia Tech team found that when system and application variables changed, the effects of overlap time were unique and could dominate performance. This led to the realization that the dominance and complexity of overlap meant it had to be modeled independently on current and future systems or efficiency would remain elusive.
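The COS decomposition can be illustrated with a toy model. This is a hedged sketch, not the paper's actual formulation; all numbers and function names below are invented.

```python
# Toy sketch of the Compute-Overlap-Stall (COS) idea: treat the time in
# which computation and memory access overlap as its own term, rather than
# folding it into compute time or memory-stall time as older models did.

def cos_runtime(compute, overlap, stall):
    """Total time to completion under the COS decomposition."""
    return compute + overlap + stall

def lumped_runtime(compute, stall, overlap, lump_into="compute"):
    """Older models fold overlap into compute or stall time."""
    if lump_into == "compute":
        return (compute + overlap) + stall
    return compute + (stall + overlap)

# Both views report the same total for a fixed run...
t = cos_runtime(compute=4.0, overlap=1.5, stall=0.5)
assert t == lumped_runtime(compute=4.0, stall=0.5, overlap=1.5)

# ...but only the COS view lets you model a system change that affects
# overlap alone (e.g. deeper memory-request queues) and see its effect
# on the total in isolation.
t_improved = cos_runtime(compute=4.0, overlap=0.8, stall=0.5)
print(t, t_improved)
```

The point of the separate term is exactly the finding quoted above: when overlap dominates, lumping it into either other bucket hides how system changes move total runtime.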

“What we learned is that overlap time is not an insignificant player in computer run times and the time it takes to perform tasks,” said Kirk Cameron, professor of computer science and lead on the project. “Researchers have spent three decades increasing overlap and we have shown that in order to improve the efficiency of future designs, we must consider their precise impact on overlap in isolation.”

The Virginia Tech researchers applied COS modeling to both Intel and IBM architectures and found that the error rate was as low as 7 percent on Intel systems and as high as 17 percent on IBM architecture. The team validated their models on 19 different applications as benchmarks. The benchmarks used the following codes: LULESH, AMGmk, Rodinia, and pF3D.

“This study is important to all kinds of industries who care about efficiency,” said Li. “Any entity that relies on supercomputing including cybersecurity organizations, large online retailers such as Amazon and video distribution services like Netflix, would be affected by the changes in processing time we found in measuring overlap.”

One of the challenges in the study was “throttling” three elements: central processing unit speed, memory speed, and concurrency, or running several threads at once. Throttling refers to a sequence that causes the computer to be idle for several cycles. This is the first paper to evaluate the simultaneous combined effects of all three methods.
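The three-way throttling sweep can be sketched as a brute-force search over configurations. Everything here (the frequencies, thread counts, and the runtime model) is invented for illustration; the actual study measured real benchmark runs.

```python
# Hypothetical design-space sweep: throttle CPU frequency, memory frequency,
# and concurrency together, record a runtime for each combination, and find
# the best-performing configuration.
from itertools import product

cpu_freqs = [1.2, 1.8, 2.4]   # GHz (illustrative values)
mem_freqs = [0.8, 1.6]        # GHz (illustrative values)
threads = [1, 4, 16]          # concurrent threads

def measured_runtime(cpu, mem, n):
    # Stand-in for an actual benchmark run (e.g. LULESH or AMGmk):
    # compute time shrinks with clock and cores, memory time with mem speed.
    work, traffic = 100.0, 40.0
    return work / (cpu * n) + traffic / mem

results = {
    (c, m, n): measured_runtime(c, m, n)
    for c, m, n in product(cpu_freqs, mem_freqs, threads)
}
best = min(results, key=results.get)
print(best)  # fastest (cpu, mem, threads) configuration
```

In a toy model like this the fastest setting is always "everything maxed out"; the interesting cases the paper targets are real systems, where power limits and overlap effects make the trade-offs non-obvious.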

Parallel computing in the exascale realm has the potential to open up new frontiers in myriad areas of scientific research. Understanding overlap and how to make computers run as efficiently as possible will be a significant key to achieving the computing power required to run massive numbers of calculations in the not-too-distant future.

Source: Virginia Tech

The post Virginia Tech Researchers Discover Key to Faster Processing for Exascale appeared first on HPCwire.

ISC 2017 Opens Doors to HPC Community on June 18

Thu, 06/15/2017 - 09:42

FRANKFURT, Germany, June 15, 2017 – In less than three days, over 3,000 attendees from 58 countries will be converging on the city of Frankfurt to attend the 32nd ISC High Performance conference, living up to its reputation as the largest HPC forum in Europe.
Current registration numbers are eight percent higher than last year, promising an increase in attendance from Germany, the US, the UK and Japan. If the current trend continues, the conference will make history with record attendance.

This year’s main program will be held at Messe Frankfurt, from June 18 – 21 and the workshops will be offered at Marriott Frankfurt on June 22. The organizers have lined up a program comprising compelling topics that will be addressed by experts in their fields. Worthy of mention are the Industrial Day and Deep Learning Day, which will focus on commercial HPC and AI, respectively. Both programs are designed to benefit commercial users and the academic community.
The accompanying exhibition will display product demonstrations and research projects from 148 key vendors, research centers and universities in the HPC space.

The Opening Session

The conference will be officially opened at 8:30 am on Monday, June 19, by conference general co-chairs, Martin Meuer and Thomas Meuer, followed by the conference keynote at 9:00 am. This year’s keynote will be delivered by Professor Dr. Jennifer Chayes of Microsoft Research on Network Science. She will be talking about how massive data networks are challenging our conventional models in database management and are spurring new applications.

Another high point on Monday is the announcement of the 49th TOP500 list at 10:30 am. Project co-founder Erich Strohmaier will present the highlights of the new rankings and discuss the latest trends.

Here is a list of the not–to–be–missed Monday – Wednesday conference highlights:

  1. The Keynotes
  2. Distinguished Talks
  3. The Research Paper Session
  4. The Industrial Day
  5. The Deep Learning Day
  6. The Exhibition
  7. The Welcome Party

For easy program browsing, attendees can use the ISC 2017 Agenda App, which works on all Android, iOS and Windows devices.

Still Time Left to Register

Finally, if you haven’t done so yet, there is still time to register and save 200 Euros off the full conference pass.

If you are getting into Frankfurt on the weekend, you can register prior to Monday to avoid standing in the longer lines expected during the week. The registration counter will be open Sunday, June 18 at 7:30 am and will stay open until 6:00 pm.

Here are the opening hours for the whole week:

Sun, June 18: 7:30 am to 6:00 pm
Mon, June 19: 7:30 am to 6:00 pm
Tue, June 20: 7:30 am to 6:00 pm
Wed, June 21: 7:30 am to 4:00 pm

The Thursday registration desk will be set up at the Marriott Hotel.

Thu, June 22: 7:30 am to 2:00 pm

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industrial Day, The Deep Learning Day, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, Research Poster, the PhD Forum, Project Poster Sessions and Exhibitor Forums.

Source: ISC

The post ISC 2017 Opens Doors to HPC Community on June 18 appeared first on HPCwire.

Bright Computing Announces Cloud Bursting Support for Azure

Wed, 06/14/2017 - 13:51

AMSTERDAM, the Netherlands, June 14, 2017 — Bright Computing today announced that the latest generation of Bright Cluster Manager, version 8.0, includes support for Microsoft Azure. Bright will demonstrate their Azure cloud bursting capabilities at ISC 2017 in booth # D-1021, June 19 – 21 in Frankfurt, Germany.

The Bright integration with Azure enables organizations to provision and manage virtual servers running on the Azure cloud platform, as if they were local machines. Organizations can use this feature to build an entire cluster in Azure from scratch, or extend an on-premises cluster into the Azure cloud platform when extra capacity is needed.

Key features of the Bright Cluster Manager 8.0 integration with Azure include:

  • Uniformity – Bright Cluster Manager 8.0 ensures that cloud nodes look and feel exactly like on-premises nodes. This is accomplished by using the same software images to provision cloud nodes, as the software images that are already being used to provision on-premises nodes. Users are authenticated on cloud nodes in the same way as on-premises nodes, providing a seamless administration experience. A single workload setup allows users to manage separate queues for on-premises and cloud nodes.
  • Streamlined setup process – An intuitive wizard in Bright View asks some simple questions to quickly and easily set up the cloud bursting environment. In addition, Azure API endpoints are accessed via a single outgoing VPN port to the internet.
  • Data management – Bright Cluster Manager 8.0 includes a tool which automatically moves job data in and out of Azure.
  • Scale – Bright allows organizations to scale nodes up and down, based on the workload. Virtual nodes in the cloud can be terminated automatically when they are no longer needed, and recreated when new jobs are submitted to the queue.
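The scale-up/scale-down behavior described above can be sketched as a simple queue-driven policy. The function below is an invented illustration, not Bright's actual algorithm.

```python
# Hypothetical autoscaling policy: create cloud nodes when jobs are queued,
# terminate them when they sit idle, never exceeding a configured maximum.

def rescale(queued_jobs, running_nodes, max_nodes, jobs_per_node=1):
    """Return how many cloud nodes to add (positive) or remove (negative)."""
    needed = -(-queued_jobs // jobs_per_node)  # ceiling division
    target = min(max_nodes, needed)
    return target - running_nodes

print(rescale(queued_jobs=10, running_nodes=2, max_nodes=8))  # 6: burst out
print(rescale(queued_jobs=0, running_nodes=5, max_nodes=8))   # -5: reclaim idle nodes
```

The appeal of cloud bursting is visible even in this sketch: capacity follows the backlog, so idle cloud nodes stop costing money as soon as the queue drains.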

Martijn de Vries, CTO at Bright Computing, commented: “We are pleased to offer this new integration to our customers and we are confident that the solution will be very popular with our user base. Cloud bursting from an on-premises cluster to Microsoft Azure offers companies an efficient, cost-effective, secure and flexible way to add additional resources to their HPC infrastructure. Bright’s integration with Azure also gives our clients the ability to build an entire off-premises cluster for compute-intensive workloads in the Azure cloud platform.”

Venkat Gattamneni, director of product marketing, Cloud Platform, Microsoft Corp., said, “We are pleased to see Bright Computing’s commitment to Microsoft Azure with key features that deliver cloud benefits of agility, scale and hybrid consistency that our mutual customers need.”

Bright Cluster Manager 8.0 enables cost savings by instantiating compute resources in Azure only when they are needed. Built-in intelligence creates instances only after the data is ready for processing and the backlog in on-site workloads requires it.

To find out more about the Bright / Microsoft Azure integration, please email pr-team@brightcomputing.com.

About Bright Computing

Bright Computing is a global leader in cluster and cloud infrastructure automation software. Bright Cluster Manager, Bright Cluster Manager for Big Data, and Bright OpenStack provide a unified approach to installing, provisioning, configuring, managing, and monitoring HPC clusters, big data clusters, and OpenStack clouds. Bright’s products are currently deployed in more than 650 data centers around the world. Bright Computing’s customer base includes global academic, governmental, financial, healthcare, manufacturing, oil/gas/energy, and pharmaceutical organizations such as Boeing, Intel, NASA, Stanford University, and St. Jude Children’s Research Hospital. Bright partners with Amazon, Cray, Dell, Intel, Nvidia, SGI, and other leading vendors to deliver powerful, integrated solutions for managing advanced IT infrastructure such as high performance computing clusters, big data clusters, and OpenStack-based private clouds. For more information, visit www.brightcomputing.com

Source: Bright Computing

The post Bright Computing Announces Cloud Bursting Support for Azure appeared first on HPCwire.

ISC Industrial Day: Bridging Academia and Industrial HPC Users

Wed, 06/14/2017 - 11:25

As deputy chair of Industrial Day at ISC next week, my goal is to help bring clarity to the key opportunities and challenges afforded by HPC-scale technologies, including the specific barriers commercial companies are likely to encounter as they deploy new solutions or upgrade existing ones.

Our Industrial Day agenda will focus on choosing infrastructure products and services that provide higher ROI and greater flexibility, and on deploying practical solutions that help maximize innovation potential, increase market share and support new business models.

Industrial HPC users can be grouped into two categories: those who operate their own data centres and those who buy or access on-demand HPC resources. At this first iteration of Industrial Day, we’ll be focusing mostly on the first category, the on-prem data centre, a segment that has grown steadily over the last 30 years.

Dr. Marie-Christine Sawley of Intel

Fifty percent of systems on the TOP500 list are now deployed in corporations, including four systems in the TOP50 and 10 in the TOP100. Many of them are based in Europe, owned and operated by leaders in energy and power, aeronautics, automotive, telecommunications, finance and other industries. Notable high-end users include Airbus, BMW, and Total, which now operates the world’s largest HPC system in the private sector.

Industrial Day will focus on recent developments in the European HPC community that are of interest to commercial organizations and that we believe will have a cascading effect on future solutions and usage models. Topics will include: how to qualify exascale performance, infrastructure selection, and the development of high performance data analytics (HPDA) use cases.

The Benefits of Exascale Performance

Complex, fundamental research in areas such as fusion, materials science and quantum chromodynamics continues to move high performance computing to higher scale. However, industrial users, though growing in number, have different decision criteria and often operate at smaller scales. While many of the top-end solutions and lessons learned offer value for commercial users, other advances are also laying the foundation for future innovation and should be considered when evaluating options.

At Industrial Day, we’ll have experts speak in detail about the benefits exascale computing will provide for aircraft design and for complex multi-disciplinary simulations. We’ll also be talking about the software challenges of exascale computing, and the great value offered by projects such as EXA2CT, which is bootstrapping exascale code enhancement by creating libraries and proto-applications of direct interest to industrial users: examples include fast Fourier transforms, linear algebra functions, and other core computations.

ISC will provide many additional opportunities to interact with experts breaking new ground in exascale computing. A great deal of collaborative research is underway in the European HPC community.

Selecting and Scaling Infrastructure, Services and Software

Universities and research organizations have extensive experience in procuring, connecting, and sharing HPC resources, and they tend to be among the earliest evaluators and adopters of new technology. Many are contributing to advances in HPC system software, virtualization and cloud computing that are redefining how HPC resources are deployed. For Intel, as for other technology vendors introducing new options at every layer of the solution stack — compute, memory, storage, fabric, and software — the experience and insights of these organizations can be invaluable.

Examples include the DEEP and DEEP-ER projects at the Jülich Supercomputing Centre, focused on creating an innovative HPC system architecture that distributes workloads across a standard HPC cluster and a highly parallel booster system using an MPI-like software layer. Other projects we run in collaboration with our partners aim to bring high-density compute options—such as Intel Xeon Phi processors and FPGAs—into mainstream usage.

At Industrial Day, we’ll take a practical look at how these processes are being handled and how to balance requirements and suppliers to achieve higher value and reliability and to conquer new market segments while containing costs. We’ll also talk about software innovations that extend the value of simulation-based design to other areas of the enterprise, such as materials management, product design or service offerings. Using design models to generate high-definition 3D images, for example, can be a useful tool for attracting customers earlier in the product design lifecycle.

Evaluating High Performance Data Analytics (HPDA)

HPC and big data analytics have evolved in relative isolation, but they are coming together quickly and have enormous potential for extracting actionable insights from rich and complex data sets. A great deal of research is focused on these areas, and high-value use cases are beginning to appear. HPC brings speed and scale to deep neural network training and other machine learning strategies that become progressively smarter on their own. HPDA opens the door to real-time and near-real-time solutions that can radically improve critical decision making in data-rich industrial environments.

Industrial Day will focus on examples in railway traffic control, IoT data analysis, and product performance lifecycle management. Industrial HPC users will gain a better understanding of machine learning and other HPDA technologies and better insight into the kinds of resources required for practical solutions that combine HPC best practices for operating very large systems with the latest advances in data analytics.

Jumpstarting a Two-Way Conversation

Investment in HPC has never been higher and Europe is an important locus for R&D, with a high density of universities and research organizations collaborating on large projects. The EU is fueling innovation with investments of €700M by the end of the decade.[1] Programs such as Horizon 2020 and platforms such as the European Technology Platform for HPC (ETP4HPC) bring EU decision makers together with HPC leaders to refine the agenda and keep R&D efforts on track.

The breadth and depth of this activity makes ISC High Performance 2017 an important event for HPC users. As deputy chair of Industrial Day, I’ll be leading an HPC user round table to discuss the opportunities and challenges that are most relevant to industrial users. Next year, I’ll be Industrial Day chair, and I’ll be using that information — and the feedback we receive — to extend and focus the agenda for Industrial Day at ISC High Performance 2018, so we can provide a richer exchange platform for industrial users.

[1] Source. “Europe Towards Exascale: A Lookback on 5 Years of European Exascale Research Collaboration,” produced by European Exascale Projects, June 2016. http://exascale-projects.eu/EuroExaFinalBrochure_v1.0.pdf

Dr. Marie-Christine Sawley is the Intel manager of the ECR lab in Paris, HTC collaboration with CERN and code modernization with BSC, and manages Intel’s participation in the EXA2CT and READEX projects funded by the European Union.

The post ISC Industrial Day: Bridging Academia and Industrial HPC Users appeared first on HPCwire.

Exxact Enhances Deep Learning Portfolio with NVIDIA DGX Station and DGX-1

Wed, 06/14/2017 - 11:03

FREMONT, Calif., June 14, 2017 — Exxact Corporation, a leading provider of high performance computing solutions for GPU-accelerated deep learning research, today announced that it will offer the new NVIDIA DGX Station and DGX-1 systems featuring the NVIDIA Tesla V100 data center GPUs based on the NVIDIA Volta architecture.

“NVIDIA’s DGX portfolio is paving the way for a new era of computing,” said Jason Chen, Vice President of Exxact Corporation. “The performance of the new DGX Station and DGX-1 systems for AI and advanced analytics is unmatched, providing data scientists a complete hardware and software package for compute-intensive AI exploration.”

Exxact, a pioneer in high performance computing since 1992, has a broad history of supporting the latest deep learning solutions. The addition of the new NVIDIA DGX Systems enhances Exxact’s endeavors and comprehensive portfolio. The innovative DGX Station and DGX-1 systems are built on a common deep learning software stack that is optimized for maximum performance with today’s most popular deep learning frameworks, making them integral solutions for users searching for unmatched computing performance.

“Exxact’s expanded lineup of deep learning solutions allows customers to purchase a fully integrated DGX system that is optimized and ready to deliver the fastest time to insights, with effortless productivity and groundbreaking performance,” said Craig Weinstein, Vice President of the Americas partner organization at NVIDIA.

Introduced last year, DGX-1 systems power a wide range of AI deployments at enterprises, cloud service providers and research organizations worldwide. The new Volta-based DGX-1 supercomputer delivers the computing capacity of 800 CPUs in a single server with a 3-rack-unit footprint.

The DGX-1 with Tesla V100 GPUs features:

  • Eight Tesla V100 GPU accelerators, connected by 300GB/s NVLink technology, in a Hybrid Cube Mesh
  • Up to 960 TFlops of peak performance
  • 5,120 Tensor Cores (V100-based systems)
  • 128GB total GPU memory
  • 512GB system memory
  • 4 X 1.92TB SSD RAID 0
  • Dual 10GbE, Quad InfiniBand 800 Gb/s networking
  • 3RU form factor
  • 3200W
  • DGX software stack

The new DGX Station is the world’s first personal supercomputer for AI development, with the computing capacity of 400 CPUs while consuming only 1/20th the power, in a form factor that fits neatly deskside. Engineered for peak performance and deskside comfort, the DGX Station is water-cooled and whisper quiet, emitting one-tenth the noise of other deep learning workstations. Data scientists can use it for compute-intensive AI exploration, including training deep neural networks, inferencing and advanced analytics.

The DGX Station features:

  • Four Tesla V100 GPU accelerators, connected by 200GB/s NVLink
  • 64GB total GPU memory
  • 256GB system memory
  • Data: 3 X 1.92TB SSD RAID 0
  • OS: 1 X 1.92TB SSD
  • Whisper quiet, water-cooled design

DGX-1 and DGX Station systems can run several jobs simultaneously with flexible allocation of GPU resources, allowing organizations to meet the demands of challenging deep learning projects, including both training and inferencing. DGX systems ensure a team of data scientists can continuously experiment and gain faster insights with effortless productivity and optimal performance.

Both new DGX systems include an optimized and ready-to-use deep learning software stack. This includes access to today’s most popular deep learning frameworks, NVIDIA DIGITS deep learning training application, third-party accelerated solutions, the NVIDIA Deep Learning SDK (e.g. cuDNN, cuBLAS, NCCL), CUDA toolkit, NVIDIA Docker and drivers.

The Volta-based DGX Station and DGX-1 servers are available for ordering now through Exxact and are expected to ship in the third quarter of this year. Further information, including detailed specifications and support packages, is available from Exxact.

About Exxact Corporation

Exxact develops and manufactures innovative computing platforms and solutions that include workstation, server, cluster, and storage products developed for Life Sciences, HPC, Big Data, Cloud, Visualization, Video Wall, and AV applications. With a full range of engineering and logistics services, including consultancy, initial solution validation, manufacturing, implementation, and support, Exxact enables their customers to solve complex computing challenges, meet product development deadlines, improve resource utilization, reduce energy consumption, and maintain a competitive edge. Visit Exxact Corporation at www.exxactcorp.com.

Source: Exxact


The post Exxact Enhances Deep Learning Portfolio with NVIDIA DGX Station and DGX-1 appeared first on HPCwire.

SKA Astronomy Project Gets Boost with Hybrid Memory Cube

Wed, 06/14/2017 - 10:21

An ambitious astronomy effort designed to peer back to the origins of the universe and map the formation of galaxies is underpinned by an emerging memory technology that seeks to move computing resources closer to huge astronomy data sets.

The Square Kilometer Array (SKA) is an international initiative to build the world’s largest radio telescope. A “precursor project” in the South African desert called MeerKAT consists of 64 “receptor” satellite dishes, each about 44 feet across. The array gathers and assembles faint radio signals used to create images of distant galaxies.

Once combined with other sites, SKA would be capable of peering back further in time than any other Earth-based observatory. As with most advanced science projects, SKA presents unprecedented data processing challenges. With daily data volumes reaching 1 exabyte, “The data volume is becoming overwhelming,” astronomer Simon Ratcliffe noted during a webcast this week.

In response, Micron Technology Inc. has come up with a processing platform called the Hybrid Memory Cube (HMC) to handle the growing data bottleneck. The memory specialist combined its fast logic process technology with new DRAM designs to boost badly needed bandwidth in its high-density memory system.

Steve Pawlowski, Micron’s vice president of advanced computing, claimed its memory platform delivers as much as a 15-fold increase in bandwidth, a capability that addresses next-generation networking and exascale computing requirements.

Applications such as SKA demonstrate “the ability to put [computing] at the edge” to access the most relevant data, Pawlowski added.

The radio telescope array uses a front-end processor to convert faint analog radio signals to digital. Those signals are then processed using FPGAs. Memory resources needed to make sense of all that data can be distributed using relatively simple algorithms, according to Francois Kapp, a systems engineer at SKA South Africa. The challenge, Kapp noted, is operating the array around the clock along with the “increasing depth and width of memory” requirements. “You can’t just add more memory to increase the bandwidth,” he noted, especially as FPGAs move to faster interfaces.
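As a rough illustration of that first processing stage, digitized antenna samples are typically split into frequency channels. The toy NumPy version below only sketches the idea; the real instrument does this on FPGAs at vastly higher rates, and all numbers here are invented.

```python
# Toy channelizer: chop a digitized voltage stream into blocks, FFT each
# block, and average the power spectra over time, as a radio telescope
# back-end does (at enormously higher rates, in FPGA hardware).
import numpy as np

rng = np.random.default_rng(0)
fs = 1024.0                               # sample rate in Hz (illustrative)
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * 100.0 * t) + 0.1 * rng.standard_normal(t.size)

n_chan = 256
blocks = signal.reshape(-1, n_chan)       # split the stream into FFT blocks
spectra = np.abs(np.fft.rfft(blocks, axis=1)) ** 2
avg_spectrum = spectra.mean(axis=0)       # integrate over time

# With 256-sample blocks at 1024 Hz, each channel is 4 Hz wide, so the
# 100 Hz tone should land in channel 25.
peak_chan = int(np.argmax(avg_spectrum[1:]) + 1)
print(peak_chan)
```

Averaging many spectra is also why the data volumes grow so fast: the instrument runs around the clock, and every block of samples contributes to the accumulating output.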

Hence, the SKA project is wringing out Micron’s HMC approach as it maps the universe and seeks to determine how galaxies were formed. The resulting daily haul of data underscores what Jim Adams, former NASA deputy chief scientist, called “Big Science.”

The exascale computing requirements of projects such as SKA exceed those of previous planetary missions such as the 2015 New Horizon fly-by of Pluto. Adams said it took NASA investigators a year to download all the data collected by New Horizon.

The technical challenges are similar for earth-bound observatories. “Astronomy is becoming data science,” Ratcliffe added.

Micron positions its memory platform as a “compute building block” designed to provide more bandwidth between memory and computing resources while placing processing horsepower as close as possible to data so researchers can access relevant information.

Micron’s Hybrid Memory Cube moves processing power closer to astronomy data. (Source: Micron Technology)

Meanwhile, researchers at the University of Heidelberg are attempting to accelerate adoption of new memory approaches like Micron’s through open-source development of a configurable HMC controller that would serve as a memory interface.

Researcher Juri Schmidt noted that the German university’s network-attached memory scheme was another step toward pushing memory close to data by reducing the amount of data movement.

Micron’s Pawlowski noted that the current version of the memory platform is being used to sort and organize SKA data as another way to reduce data movement. The chipmaker is also investigating how to incorporate more logic functionality, including the use of machine learning to train new analytics models.

Computing, memory and, eventually, cloud storage could be combined with Micron’s low-power process technology for energy efficient high-performance computing. While the company for now doesn’t view HMC as an all-purpose platform, it would be suited to specific applications such as SKA, Pawlowski noted.

The astronomy initiative will provide a major test for exascale computing since, according to Adams, SKA “is a time machine,” able to look back just beyond the re-ionization period after the Big Bang when galaxies began to form.

The post SKA Astronomy Project Gets Boost with Hybrid Memory Cube appeared first on HPCwire.

DDN Storage Announces Product Enhancements for Data-Centric Computing

Wed, 06/14/2017 - 08:56

SANTA CLARA, Calif., June 14, 2017 — DataDirect Networks (DDN) today announced a series of performance and reliability innovations across its storage appliances, aimed at meeting the growing demand for storage performance, simplicity and reliability at scale in a new era of data-centric computing. Infinite Memory Engine (IME) and Storage Fusion Architecture (SFA) product lines now feature significant new data protection and performance improvements that allow DDN customers to deploy a complete suite of storage services across even the most demanding emerging IoT, deep learning and technical HPC environments.

The center of gravity in the HPC and technical computing data center has shifted to data. A new generation of innovative storage solutions is needed to deal with new application requirements while taking maximum advantage of new storage media and increased media capacities. DDN’s advances in flash-native caching and advanced data protection across block, file and object storage combine with enterprise-class support for parallel file systems to give users the performance, capacity and control they need to meet these new application requirements.

DDN’s Infinite Memory Engine (IME) – Extreme Scale and Performance in a Small Footprint

DDN’s IME flash-native data cache uniquely leverages NVMe to accelerate the performance of data-intensive application workflows. When combined with a parallel file system, like DDN’s EXAScaler®, IME’s latest release delivers the highest performance in the smallest possible footprint. Customers benefit from:

  • One-tenth the power consumption and one-tenth to one three-hundredth the space of disk-based parallel file systems with the same peak performance
  • Significantly lower cost per peak throughput
  • Flash-enabled transactional performance scaling to tens of millions of IOPS
  • Ability to scale performance and capacity independently

This magnitude of performance and efficiency in a small footprint enables simple system designs at performance levels that were simply not possible with drive-based systems. IME accelerates any application that is bottlenecked by traditional storage technologies. Because IME is platform-independent and application-transparent, no code modifications are needed. An effective solution for both data-intensive, single-workflow environments and shared, multi-workflow environments, advancements in the latest release of IME make it an ideal solution for enterprise and deep learning use cases as well as more traditional HPC workflows. New capabilities in DDN’s latest release of IME include:

  • Choice of erasure coding options for improved data protection
  • Protection against compute node failure
  • Adds Ethernet to existing InfiniBand and Omni-Path support
  • Improved metadata performance
  • Support for the latest processor technologies (i.e. Intel Xeon Phi, Arm, and IBM Power)
  • New flash optimizations for full utilization of the latest media designs
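IME's role as a flash tier absorbing bursty writes ahead of a slower parallel file system can be sketched minimally as a write-back cache. This is a conceptual illustration only; the class and names are invented, and real IME behavior (erasure coding, NVMe management, metadata handling) is far more sophisticated.

```python
# Minimal write-back cache sketch: writes land in a fast "flash" tier and
# are drained to the backing "parallel file system" off the application's
# critical path.

class BurstBuffer:
    def __init__(self, backing_store):
        self.cache = {}            # fast tier: absorbs writes immediately
        self.backing = backing_store

    def write(self, path, data):
        self.cache[path] = data    # the job sees the write complete here

    def flush(self):
        # Drain cached data to the slower backing store, then empty the tier.
        self.backing.update(self.cache)
        self.cache.clear()

pfs = {}                           # stand-in for a parallel file system
bb = BurstBuffer(pfs)
bb.write("/out/step1.dat", b"checkpoint")
bb.flush()
print(pfs["/out/step1.dat"])
```

The design point this illustrates is the one in the bullet list above: because the fast tier and the backing store are separate, their performance and capacity can be scaled independently.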

Significantly Improved Data Protection Choices Across Block, File and Object Storage

Starting this year, DDN offers significantly improved data protection options across all of its product lines. All DDN product lines now feature advanced erasure coding or de-clustered RAID options that provide unmatched data availability, significantly reduced rebuild times, and orders-of-magnitude improvements in mean time to data loss (MTDL). New innovations include an industry-leading erasure coding option for the IME scale-out flash product line that eliminates read-modify-writes, the industry’s widest selection of erasure coding options for object storage with Extended Object Assure (XOA) in WOS, and a fundamental overhaul of data protection in DDN’s award-winning SFA platform. Specific customer benefits include:

  • Industry leading variety of data protection choices
  • Significantly reduced rebuild times of no more than 4 minutes per TB for scale-out SSD storage (i.e. IME erasure coding)
  • Lower latency random IO
  • Unique sequential read and write performance enhancements for block-level de-clustered RAID
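As a rough, back-of-envelope illustration of what the quoted rebuild rate of 4 minutes per TB implies (the drive capacities below are hypothetical examples, not DDN figures):

```python
# Illustrative only: estimate rebuild time from the quoted rate of
# 4 minutes per TB for scale-out SSD erasure coding.
# The SSD capacities below are hypothetical examples.

REBUILD_MINUTES_PER_TB = 4

def rebuild_minutes(capacity_tb: float) -> float:
    """Rebuild time in minutes at the quoted per-TB rate."""
    return capacity_tb * REBUILD_MINUTES_PER_TB

for tb in (1.6, 3.2, 15.36):  # hypothetical SSD sizes
    print(f"{tb:6.2f} TB -> ~{rebuild_minutes(tb):5.1f} min")
```

Even a hypothetical 15 TB-class device would rebuild in about an hour at that rate, which is the point of the MTDL claim above.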

Enterprise Lustre Support

In addition to IME improvements and advanced erasure coding across product lines, DDN has stepped up support for its EXAScaler Enterprise Lustre* distribution to include support for Lustre on ZFS and software-only EXAScaler installations. DDN already supports the largest and most diverse Lustre file system user base in the industry. DDN expertise, feature development, tools, integrations and robust support make Lustre simpler to deploy, scale and manage, and more productive in both traditional HPC and high performance commercial environments. DDN is currently working with the Lustre community on significant, near-term features for performance, availability and management in its next release.

“DDN is dedicated to advancing technical computing at scale, pushing the limits of the latest technology in every possible way for the benefit of our customers,” said Robert Triendl, SVP for sales, marketing and field services. “With the advent of inexpensive flash devices and the increased focus on data in technical computing and machine learning workflows, storage architectures are undergoing a fundamental transition. The advanced features in our latest product releases constitute a significant step forward – delivering competitive advantage and reduced time to results for our customers as they transition to next generation architectures.”

ISC 2017

DDN’s latest product releases will be on display at the upcoming ISC show in Frankfurt, Germany, June 18-22, 2017. Stop by booth #1010 for a personal demonstration. DDN will host a User Group from 12:30 p.m. to 4 p.m., Tuesday, June 20, followed by a cocktail reception. A DDN party, co-sponsored by Intel, will be held from 7 p.m. to 9 p.m. on Tuesday, June 20.

DDN will host an IME and EXAScaler webinar, “Accelerating Lustre with DDN IME,” at 9 a.m. PT on Tuesday, June 27, to provide information about driving performance and reliability in the data-centric era. Register here.


DDN’s new IME features will be available in Q3 2017. The new SFAOS version will be available in early Q4 2017, and the new EXAScaler Enterprise Lustre Distribution and WOS XOA are available immediately.

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. With almost 20 years of experience driving performance at scale, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN

The post DDN Storage Announces Product Enhancements for Data-Centric Computing appeared first on HPCwire.

STULZ, CoolIT Systems Launch Liquid-Cooled Micro-Datacenter for HPC

Wed, 06/14/2017 - 08:53

CALGARY, AB, June 14, 2017 – CoolIT Systems (CoolIT), in partnership with STULZ, today announced a liquid-cooled micro data center for High Performance Computing (HPC) applications. The unified solution, named STULZ Micro DC, combines CoolIT’s industry-leading, efficient Direct Contact Liquid Cooling technology (Rack DCLC) with STULZ’ world-renowned mission-critical air cooling products to create a single enclosed solution for managing high-density compute requirements.

The liquid cooled STULZ Micro DC manages 100% of the IT load via liquid and is room neutral. With the Micro DC, users can scale from traditional IT workloads to more than 80kW of IT into each system (depending on configuration). The stand-alone solution incorporates all the key components in a specified enclosure, including the rack, liquid cooling, cable management, UPS, power monitoring, and fire protection systems. CoolIT Systems Rack DCLC™ technology captures 60-80% of the entire server heat load, while the remaining 20-40% is air cooled by an integral STULZ precision cooling unit contained inside the Micro DC. Configurations can also be purpose-built to include various options for heat rejection and reuse.
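The quoted heat split can be sketched with simple arithmetic; the 80 kW rack load comes from the announcement, while the capture fractions are the 60-80% range stated above:

```python
# Illustrative arithmetic for the quoted heat split: Rack DCLC captures
# 60-80% of the server heat load via liquid; the remainder is air-cooled
# by the integral STULZ precision cooling unit. The 80 kW rack load is
# the figure from the announcement; the capture fraction is a parameter.

def heat_split(rack_kw: float, liquid_fraction: float):
    """Return (liquid_kw, air_kw) for a given DCLC capture fraction."""
    assert 0.0 <= liquid_fraction <= 1.0
    liquid = rack_kw * liquid_fraction
    return liquid, rack_kw - liquid

for frac in (0.60, 0.80):
    liquid_kw, air_kw = heat_split(80.0, frac)
    print(f"capture {frac:.0%}: {liquid_kw:.0f} kW liquid, {air_kw:.0f} kW air")
```

At a fully loaded 80 kW rack, the integral air cooler therefore only has to handle roughly 16-32 kW, which is why the enclosure can remain room neutral.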

“The Micro DC is a compact data center that can be easily deployed in any environment while managing an incredible amount of power and performance,” said Joerg Desler, President of STULZ USA. “Its modular system is configured to use various combinations of liquid and air cooling, allowing the system to grow with the customer’s needs. Integrating STULZ technology with CoolIT Systems Rack DCLC™ has resulted in ultra-efficient total thermal solutions that can support any OEM server at even the highest density configurations.”

“The liquid cooled STULZ Micro DC is a self-contained, cost-efficient, high-density solution which impressively manages 100% of the IT heat,” said Geoff Lyon, CEO & CTO at CoolIT Systems. “The result is an incredibly easy system for customers to deploy and a great example of what can be achieved with the combined expertise of CoolIT Systems and STULZ.”

This is the first integrated product offering under CoolIT and STULZ’ Chip-to-Atmosphere partnership.  The STULZ Micro DC is available in three self-enclosed cabinet designs, each with 48U of standard rack space. Air and liquid cooling modules can be configured depending on the IT load and need for redundant operation.  Usable rack space will be dependent on the options selected. Additionally, businesses big or small can scale data center capacity up or down to meet fluctuating demands.

CoolIT’s DCLC™ technology uses the exceptional thermal conductivity of liquid to provide concentrated cooling to the hottest components inside a server, enabling very high density configurations even with today’s top performing processors. CoolIT’s liquid cooling solutions can be tailored to any server layout, have already been adopted by many server manufacturers as a reliable technology, and are covered under standard warranties.

A demo of the liquid cooled Micro DC will be showcased by CoolIT Systems (booth C-1210) at the ISC High Performance 2017 event in Frankfurt from June 19 – 21.

Those interested in incorporating Chip-to-Atmosphere solutions in their projects should start by contacting their local STULZ or CoolIT Systems sales representative.

About STULZ Air Technology Systems, Inc.

STULZ Air Technology Systems, Inc. (STULZ USA) is an ISO 9001 registered manufacturer of environmental control equipment including a full line of energy efficient precision air conditioners, air handling units, ultrasonic humidifiers, and desiccant dehumidifiers. The company is responsible for product development, manufacturing, and distribution for the North American arm of the international STULZ Group. For more information about STULZ USA and its products, call 301-620-2033. E-mail your request to info@stulz-ats.com or visit www.stulz-usa.com.

CoolIT Systems, Inc.

CoolIT Systems, Inc. is the world leader in energy efficient Direct Contact Liquid Cooling for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution. For more information about CoolIT Systems and its technology, email sales@coolitsystems.com or visit www.coolitsystems.com.

Source: CoolIT Systems

The post STULZ, CoolIT Systems Launch Liquid-Cooled Micro-Datacenter for HPC appeared first on HPCwire.

NEC Supplies LX Supercomputer to Czech Hydrometeorological Institute

Wed, 06/14/2017 - 08:51

DUSSELDORF and TOKYO, June 14, 2017 — NEC Corporation (NEC; TSE: 6701) today announced that the Czech Hydrometeorological Institute (CHMI) in the Czech Republic selected NEC Deutschland GmbH to provide the next generation supercomputer system utilizing NEC’s scale-out LX series compute servers for their weather forecasts.

NEC’s scale-out LX series HPC cluster will enable the Czech Hydrometeorological Institute to increase the accuracy of numerical weather forecasting and related applications, namely warning systems. Weather prediction models are increasingly complex, including rainfall, temperature, wind and related variables that have to be calculated as precisely as possible several days in advance. At the same time, regional peculiarities such as orography and terrain physiography need to be considered. In addition, the public must be made aware quickly of predictions of high impact weather events affecting daily life, including environmental risks linked to air pollution. High-performance computing (HPC) is therefore needed for running and completing weather and climate simulation jobs in time.

NEC will deliver more than 300 dual-socket compute nodes based on the new Intel® Xeon® E5-2600 v4 product family, connected through a high-speed Mellanox EDR InfiniBand network, with a total of over 3,500 computational cores.

The supercomputer is configured for high availability, including redundant storage and power supplies, as 24×7 operation is required.

Moreover, the computational peak performance of this HPC cluster will be more than 80 times faster than the currently used system.

This HPC solution also consists of a high-performance storage solution based on the NEC LXFS-z parallel file-system appliance, with more than 1 Petabyte of storage capacity and a bandwidth performance of more than 30 Gigabytes per second (GB/s), which are required to meet the production needs of the CHMI. This scalable ZFS-based Lustre solution also provides advanced data integrity features paired with a high density and high reliability design.
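A quick sanity check of the quoted storage figures, assuming decimal units (1 PB = 1,000,000 GB, an assumption on our part, not an NEC statement): streaming the full capacity at the quoted aggregate bandwidth would take a little over nine hours.

```python
# Back-of-envelope check of the LXFS-z figures from the announcement:
# >1 PB capacity at >30 GB/s aggregate bandwidth.
# Decimal units assumed: 1 PB = 1,000,000 GB.

def full_scan_hours(capacity_pb: float, bandwidth_gb_s: float) -> float:
    """Hours needed to stream the full capacity at the given bandwidth."""
    seconds = capacity_pb * 1_000_000 / bandwidth_gb_s
    return seconds / 3600

print(f"~{full_scan_hours(1.0, 30.0):.1f} hours")
```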

The new system is scheduled to be put into operation in early 2018.

“Reliable HPC technology by NEC shall be important both for forecast production and innovation; after Météo-France, CHMI is the second largest contributor to the development of the ALADIN Numerical Weather Prediction System, currently used by 26 countries. Moreover, in this project, we have a specific goal to improve air quality trend forecasts in relation to meteorological conditions and the performance of air quality warning systems,” said Dr. Radmila Brožková, head of the Numerical Weather Prediction Department, CHMI.

“We are very happy that CHMI has selected NEC to deliver an HPC solution for their weather and climate simulations, as NEC has a very special expertise in meteorological applications. For years, we have been successfully collaborating with meteorological institutes, and we look forward to cultivating these partnerships further,” said Andreas Göttlicher, Senior Sales Manager, NEC Deutschland.

NEC has a long-term track record in climate research and weather forecasting and holds a leading position in the supercomputer market.

About the Czech Hydrometeorological Institute

The Czech Hydrometeorological Institute is the Czech Republic’s central government institution for air quality, hydrology, water quality, climatology and meteorology, performing this function as an objective specialised service. It was established in 1919 as the National Meteorological Institute; the present-day organization covers hydrology and air quality as well. The Institute is run under the authority of the Ministry of the Environment of the Czech Republic. Its main tasks are to establish and operate national monitoring and observation networks; to create and maintain databases on the condition and quality of the air, sources of air pollution, the state and development of the atmosphere, and the quantity and quality of water; and to provide climate and operational information about the condition of the atmosphere and hydrosphere, together with forecasts and warnings alerting to dangerous hydrometeorological phenomena.

About NEC Corporation

NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross utilize the company’s experience and global resources, NEC’s advanced technologies meet the complex and ever-changing needs of its customers. NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society.  For more information, visit NEC at http://www.nec.com.

Source: NEC

The post NEC Supplies LX Supercomputer to Czech Hydrometeorological Institute appeared first on HPCwire.

TACC, NAG Host 1st HPC for Managers Institute in Austin

Wed, 06/14/2017 - 08:49

AUSTIN, June 14, 2017 — The Texas Advanced Computing Center (TACC) at The University of Texas at Austin and the Numerical Algorithms Group (NAG) have announced the first TACC-NAG High Performance Computing (HPC) for Managers Institute this September 12-14 in Austin.

This three-day workshop is specifically tailored to managers and decision makers who are using, or considering using, HPC within their organizations. It is also applicable to those with a real opportunity to make this career step in the future.

“The focus of HPC training has always been for programmers and users, and there are good reasons for that — an HPC system’s impact depends on the quality of simulations and analysis run on it,” said Lucas Wilson, director of Training and Professional Development in TACC’s User Services Group.

“But there is a tendency to forget the other side of the equation,” Wilson said. “Researchers cannot create impact using HPC if a sub-optimal system, or no system, is available. Our goal with this course is to offer insights to those providing HPC resources to help the managers determine how best to configure and operate HPC systems. TACC has offered a subset of this course in the past to our industrial partners, and this new partnership with NAG allows us to provide a more complete look at the process of planning for, acquiring and operating HPC services which will have the optimum research and business impact, building on the successful tutorials delivered by NAG at SC16 and at the Rice Oil & Gas HPC Conference.”

Topics covered will include strategic planning, technology options, procurement and deployment, people issues, total cost of ownership, performance measurement, and cost/benefit analysis of investing in HPC to support a company or institution’s R&D portfolio. The course covers a broad scope of HPC, from department-scale clusters to the largest supercomputers, and from modeling and simulation to non-traditional use cases, including consideration of cloud options and more. The training has been designed for industry attendees, although academic HPC managers will also find it valuable. TACC and NAG encourage attendees from diverse backgrounds and underrepresented communities.
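Since total cost of ownership is one of the listed topics, here is a minimal, purely illustrative sketch of the kind of calculation involved; the function and every number in it are hypothetical placeholders, not course material:

```python
# A minimal total-cost-of-ownership sketch of the kind an HPC manager
# might start from. All inputs below are hypothetical placeholders.

def simple_tco(capex, power_kw, usd_per_kwh, staff_fte, usd_per_fte, years):
    """Capex plus power and staffing cost over the system's lifetime."""
    power_cost = power_kw * 24 * 365 * years * usd_per_kwh
    staff_cost = staff_fte * usd_per_fte * years
    return capex + power_cost + staff_cost

# Hypothetical mid-size cluster: $2M capex, 200 kW draw, $0.10/kWh,
# 2 FTE at $150k/year, 5-year service life.
tco = simple_tco(2_000_000, 200, 0.10, 2, 150_000, 5)
print(f"5-year TCO: ${tco:,.0f}")
```

Even this toy model makes the course's point: operating costs (power and people) can rival or exceed the purchase price over a system's lifetime.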

“We were urged to create this unique training opportunity by several of TACC’s industry partners and NAG’s HPC consulting customers,” said Andrew Jones, vice president of Strategic HPC Consulting and Services at NAG.

Jones added: “HPC managers from industry have told us of their need to have an opportunity for their HPC systems and computational sciences staff to learn about the business processes required to deliver a successful HPC environment. They believe this course will be very valuable in developing future HPC leaders and improving the collaboration with the researchers they support.”

In 2017, TACC began offering seven different, week-long, topic-based Institutes to highlight different focus areas in advanced computing. These Institutes are: Parallel Programming Foundations; Data and Information Analytics; Visualizing and Interacting with Data; Designing and Administering Large-Scale Systems; Computational Techniques for Life Sciences; Computational Science in the Cloud; and TACC-NAG HPC for Managers.

Registration is open now. Details and a full agenda can be found at: https://www.tacc.utexas.edu/education/institutes/hpc-concepts-for-managers

Important Dates:

  • July 31, 2017, registration deadline

For more information about the TACC Institute Series, please contact Lucas Wilson, director of Training and Professional Development at TACC: lwilson@tacc.utexas.edu. For more information about TACC’s industrial partners program, STAR, please contact Melyssa Fratkin, industrial programs director at TACC: mfratkin@tacc.utexas.edu

Source: TACC

The post TACC, NAG Host 1st HPC for Managers Institute in Austin appeared first on HPCwire.

RIKEN Posts Extensive Study of Global Supercomputer Plans in Time for ISC 2017

Tue, 06/13/2017 - 13:01

On the eve of ISC 2017 and the next release of the Top500 list, Japan’s RIKEN Advanced Institute for Computational Science has posted an extensive study by IDC that compares and contrasts international efforts on pre-exascale and early exascale plans. The study – Analysis of the Characteristics and Development Trends of the Next-Generation of Supercomputers in Foreign Countries – was contracted by RIKEN, completed last December, and posted last week on the RIKEN web site.

Much of the material is familiar to exascale race watchers, but gathering it all in one place is fascinating and useful. RIKEN has made the report freely available and downloadable as a PDF from its website. It’s worth noting that the report’s authors were formerly the core team of IDC’s HPC research group and are now members of Hyperion Research, which was spun out of IDC this year. Despite the study’s length (70-plus pages), it is a quick read, and the tables are well worth scanning. Much of the focus is on the next round of leadership-class supercomputers (pre-exascale machines), about which more is known, but there is also considerable discussion of exascale technology.

For supercomputer junkies, there’s a table for nearly every aspect. Below are two samples: 1) systems covered in this report and their current/planned performance and 2) memory systems either in use or planned.

There are many more tables covering topics such as architecture and node design, MTBF, interconnect, compilers and middleware, etc. A complete list of the 30 tables is at the end of the article and is a good surrogate for the report’s scope.

Here’s the top line summary in an excerpt from the report: “Looking at the strengths and weaknesses in exascale plans and capabilities of different countries:

  • The U.S. has multiple programs, strong funding and many HPC vendors, but has to deal with changing federal support, a major legacy technology burden, and a growing HPC labor shortage.
  • Europe has strong software programs and a few hardware efforts, plus EU funding and support appears to be growing, and they have Bull, but they have to deal with 28 different countries and a weak investment community.
  • China has had major funding support, has installed many very large systems, and is developing its own core technologies, but has a smaller user base, many different custom systems and currently is experiencing low utilization of its largest computers.”

As noted earlier, while the report breaks little new ground its comprehensive view of these leading 15 supercomputer systems and side-by-side comparison is a useful resource. The list of tables is below along with a link to the report.

Link to report: http://www.aics.riken.jp/aicssite/wp-content/uploads/2017/05/Analysis-of-the-Characteristics-and-Development-Trends.pdf

List of Tables

Table 1 The Supercomputers Evaluated in This Study

Table 2 System Attributes: Planned Performance

Table 3 System Attributes: Architecture and Node Design

Table 4 System Attributes: Power

Table 5 System Attributes: MTBF Rates

Table 6 System Attributes: KPIs (key performance indicators)

Table 7 Comparison of System Prices

Table 8 Comparison of System Prices: Who’s Paying for It?

Table 9 Ease-of-Use: Planned New Features

Table 10 Ease-of-Use: Porting/Running of New Codes on a New Computer

Table 11 Ease-of-Use: Missing Items that Reduce Ease-of-Use

Table 12 Ease-of-Use: Overall Ability to Run Leadership Class Problems

Table 13 Hardware Attributes: Processors

Table 14 Hardware Attributes: Memory Systems

Table 15 Hardware Attributes: Interconnects

Table 16 Hardware Attributes: Storage

Table 17 Hardware Attributes: Cooling

Table 18 Hardware Attributes: Special Hardware

Table 19 Hardware Attributes: Estimated System Utilization

Table 20 Software Attributes: OS and Special Software

Table 21 Software Attributes: File Systems

Table 22 Software Attributes: Compilers and Middleware

Table 23 Software Attributes: Other Software

Table 24 R&D Plans

Table 25 R&D Plans: Partnerships

Table 27 Additional Comments & Observations

Table 28 IDC Assessment of the Major Exascale Providers: USA

Table 29 IDC Assessment of the Major Exascale Providers: Europe/EMEA

Table 30 Assessment of Exascale Providers: China

* No table 26

The post RIKEN Posts Extensive Study of Global Supercomputer Plans in Time for ISC 2017 appeared first on HPCwire.

DDN Storage Named to Elite $1 Billion+ Valuation “Storage Unicorn” List

Tue, 06/13/2017 - 08:34

SANTA CLARA, Calif., June 13, 2017 — DataDirect Networks (DDN) today announced that the company has achieved recognition in the prestigious “Storage Unicorns” list. “Storage Unicorns” are private storage companies deemed to have a valuation of $1 billion or more – a statistical rarity of very successful ventures. The list has been compiled by well-respected Condor Consulting Group.

“From the very beginning, our goal at DDN has been to create and deliver the absolute best storage solutions to solve our enterprise, technical computing and now web/cloud customers’ most difficult data-management challenges,” said DDN CEO and Co-Founder Alex Bouzari. “We are pleased and honored to be recognized as a Storage Unicorn, but we know deep down in our hearts that it is the hard work and unwavering dedication of our employees and the tremendous support of our customers and partners that have made this journey possible. Technology advances in our ever-faster, ever-more-connected world are creating tremendous opportunities for DDN to continue to serve, grow and thrive in the years to come.”

The “Storage Unicorn” list is a spin-off of the Global Unicorn Club list and DDN, which continues to be run profitably by its two founders Alex Bouzari and Paul Bloch, is one of a very small number of storage companies to reach Unicorn status in what is a $100 billion storage industry.

“(DDN) is a clear leader in storage for all kinds of high-demand applications; the company was founded in 1998 and is still led by the same two executives,” said Condor Consulting’s Philippe Nicolas, in a Storage Newsletter article announcing the report. “Finding the right mixture between technology, team and business is a tough journey. And for some executives, it’s a utopia they will never realize. On the other hand, a unicorn is about the cream of the cream, an elite group of companies. It illustrates the good sign of investment realized and business successes.”

Condor Consulting Group’s Storage Unicorn list was published in June 2017 and highlights companies that have reached a $1 billion valuation. In order to produce the list, Condor collected data from various sites such as Crunchbase, PitchBook, CB Insights, the media and investors, in addition to its own formulas and tools. The list will be updated twice each year, in June and December.

Source: DDN

The post DDN Storage Named to Elite $1 Billion+ Valuation “Storage Unicorn” List appeared first on HPCwire.

DOE’s HPC for Manufacturing Program Seeks Industry Proposals

Tue, 06/13/2017 - 08:20

LIVERMORE, Calif., June 13, 2017 — The U.S. Department of Energy’s (DOE) High Performance Computing for Manufacturing Program, designed to spur the use of national lab supercomputing resources and expertise to advance innovation in energy efficient manufacturing, is seeking a new round of proposals from industry to compete for $3 million.

Since its inception, the High Performance Computing for Manufacturing (HPC4Mfg) Program has supported projects partnering manufacturing industry members with DOE national labs to use laboratory HPC systems and expertise to upgrade their manufacturing processes and bring new clean energy technologies to market. The program’s portfolio includes small and large companies representing a variety of industry sectors. This is the fourth round of funding for this rapidly growing program.

The partnerships use world-class supercomputers and the science and technology expertise resident at the national laboratories, including Lawrence Livermore National Laboratory, which leads the program; principal partners Lawrence Berkeley (LBNL) and Oak Ridge (ORNL) national laboratories; and other participating laboratories. An HPC expert at each lab teams up with U.S. manufacturers on solutions to address challenges that could result in advancing clean energy technology. By using HPC in the design of products and industrial processes, U.S. manufacturers can reap such benefits as accelerating innovation, lowering energy costs, shortening testing cycles, reducing waste and rejected parts, and cutting the time to market. For more information, see the program website.

“U.S. manufacturers from a wide array of manufacturing sectors are recognizing that high performance computing can significantly improve their processes,” said Lori Diachin, an LLNL mathematician and director of the HPC4Mfg Program. “The range of ideas and technologies that companies are applying HPC to is expanding at a rapid rate, and they are finding value in both the access to supercomputing resources and the HPC expertise provided by the national laboratories.”

Concept proposals from U.S. manufacturers seeking to use the national labs’ capabilities can be submitted to the HPC4Mfg Program starting June 12. The program expects another eight to 10 projects worth approximately $3 million in total will be funded. Concept paper applications are due July 26.

HPC is showing potential in addressing a range of manufacturing and applied energy challenges of national importance to the United States. The HPC4Mfg Program releases biannual solicitations as part of a multiyear program to grow the HPC manufacturing community by enticing HPC expertise to the field, adding to a high-tech workforce, and enabling these scientists to make a real impact on clean energy technology and the environment. Past HPC4Mfg solicitations have highlighted energy-intensive manufacturing sectors and the challenges identified in the Energy Department’s 2015 Quadrennial Technology Review. In this solicitation, the program continues to have a strong interest in these areas and is adding a special topic area of advanced materials.

A number of companies and their initial concepts will be selected and paired with a national lab HPC expert to jointly develop a full proposal this summer, with final selections to be announced in November. Companies are encouraged to highlight their most challenging problems so the program can identify the most applicable national lab expertise. More information about the HPC4Mfg Program, the solicitation call and submission instructions can be found on the web.

The Advanced Manufacturing Office within DOE’s Office of Energy Efficiency and Renewable Energy provided funding to LLNL to establish the HPC4Mfg Program in March 2015. The Advanced Scientific Computing Research Program within DOE’s Office of Science supports the program with HPC cycles through its Leadership Computing Challenge allocation program. The National Renewable Energy Laboratory (NREL) also provides computing cycles to support this program.

HPC4Mfg recently announced the selection of new projects as part of a previous round of funding, including: LLNL and ORNL partnering with various manufacturers (Applied Materials, GE Global Research and United Technologies Research) to improve additive manufacturing processes that use powder beds to reduce material use, defects and surface roughness and improve the overall quality of the resulting parts; LBNL partnering with Samsung Semiconductor Inc. (USA) to improve the performance of semiconductor devices by enabling better cooling through the interconnects; Ford Motor Company partnering with Argonne National Laboratory to understand how manufacturing tolerances can impact the fuel efficiency and performance of spark-ignition engines; and NREL partnering with 7AC Technologies to model liquid/membrane interfaces to improve the efficiency of air conditioning systems. In addition, one of the projects, a collaboration among LLNL, the National Energy Technology Laboratory and 8 Rivers Capital to study coal-based Allam cycle combustors, will be co-funded by DOE’s Office of Fossil Energy.

Additional information about submitting proposals is available on the FedBizOpps website.

Source: LLNL

The post DOE’s HPC for Manufacturing Program Seeks Industry Proposals appeared first on HPCwire.

GlobalFoundries: 7nm Chips Coming in 2018, EUV in 2019

Tue, 06/13/2017 - 06:03

GlobalFoundries has formally announced that its 7nm technology is ready for customer engagement, with product tape-outs expected in the first half of 2018. The global semiconductor company, headquartered in Santa Clara, Calif., says the process node offers a 40 percent performance improvement over its 14nm node, a 60 percent power reduction, and at least a 30 percent die cost reduction.

The platform integrates 17 million gates per square millimeter, more than a 50 percent density improvement over 14nm. GlobalFoundries Chief Technology Officer Gary Patton noted, “Because of the need for multi-patterning on these nodes, the complexity is increasing more than it has done historically. We scale a little bit more than 50 percent, so when we add the higher complexity we still end up at the right point for our customers, which is at least a 30 percent die cost improvement, and for some products maybe as much as 45 percent cost improvement.”

High performance computing, graphics, and networking are key areas for initial products, as are custom silicon plays. “We’re seeing a lot of push from some new players in the fabless space in the area of artificial intelligence and machine learning and they are very focused on leveraging the ASIC platform for those products.” Like Google TPUs perhaps.

GlobalFoundries has the technology to make chips up to 780 mm². Its 14nm chips range from around 50 mm² up to 700 mm², and it expects the same range to apply to 7nm as well.

The process design kit (PDK) is now available for GlobalFoundries 7LP FinFET and FX-7 ASIC. (Source: GlobalFoundries slide deck)

The lack of a 10nm node on GlobalFoundries’ roadmap was strategic, a response to customer input. “I don’t personally view it as skipping a node,” said Patton, “because if you look at the density of that 10nm and performance that it offers it’s more like a half node. Our customers wanted a stronger value proposition. We made a decision two years ago to just focus on 7nm and that’s allowed us to get this offering out at this time.”

Patton views 20nm and 10nm as “weak nodes”; in contrast, he sees 14nm and 7nm as having long-term staying power. GlobalFoundries has invested $12 billion in the Malta “Fab 8” factory and is still expanding going into 2018 to support its 14nm manufacturing ramp. Having a high-yield manufacturing base on 14nm makes the development of 7nm much easier, said Patton. The company has had more than 50 designs in 14nm, with 100 percent first-time success on every product tape-out at the factory, according to Patton.

The 7nm process technology heads for prime time just two years after it was introduced by the IBM Research alliance which includes GlobalFoundries and Samsung. The original proof of concept chip was manufactured with extreme ultraviolet lithography (EUV), but initial products will go forward using optical lithography. This probably won’t be a surprise to those familiar with EUV’s uphill climb toward commercial viability.

EUV is progressing, said Patton, but it’s not ready for high-volume commercial production. Not wanting to hold back its customers, GlobalFoundries is launching 7nm with conventional immersion lithography and has designed the technology to be drop-in compliant with EUV. Patton expects EUV versions will be ready a year after the initial product launches – pushing that EUV goalpost into 2019.

Patton, who was with IBM for 30 years and led IBM’s semiconductor research & development organization for the last eight before the chip manufacturing business was sold to GlobalFoundries in July 2015, reviews some of the benefits EUV offers through simplification of the process. “It allows us to take some masks out, which will improve the cycle time. We can take some processing steps out which will help — the more you process wafers, the more defects you introduce, so that will give a yield advantage. We see much better line edge control with EUV and that will give some improvement in the sharpness of the features, which will give parametric advantage,” he said.

GlobalFoundries’ Fab 8 campus in Malta, New York, has two EUV scanners arriving in 2017, and two more are scheduled for delivery in 2018. Patton says he is encouraged by the progress that has been made on EUV, but relays four key challenge areas: the light source, the toolset, the resist and the mask.

Aerial view of Fab 8 in Malta, N.Y. (Source: GlobalFoundries)

“A lot of good work is being done at ASML, as well as places like IMEC, on improving mask defects, developing pellicles that would mitigate some of the defect issues, but the key challenge is being able to do that in a way that’s reliable and can withstand the high-power that’s coming out of the EUV light source – so that’s probably the long pole of the tent so to speak in getting EUV ready, but there’s good progress. We’re expecting EUV will be ready for high volume manufacturing in the 2019 timeframe and we’ll be in a position to support.”

The big takeaway is that 7nm is here and it’s on time, said Jim McGregor, founder of Tirias Research, in an interview with HPCwire. “You have to remember that on the last major process node, 14nm, GlobalFoundries was late. It had to partner with Samsung to get moving. Since then they’ve acquired the semiconductor group from IBM, and these are a lot of the same experts that developed the latest technology for the past 20 years and have really led the consortium around IBM to develop process technologies. Now we’re seeing GlobalFoundries, which was kind of a trailing company in terms of rolling out new process technology, moving to the forefront of being one of the leaders in rolling out new process technology.”

Synergy between IBM and GlobalFoundries was likewise emphasized by Patton. “A key part of the IBM acquisition was we take over the manufacturing of the parts, which I think is a more efficient situation for IBM because the IBM server volumes are small,” he told us. “So we take on the manufacturing investment; we do some special things for them to make sure the technology meets their server requirements. In exchange they committed for ten years to do what they do very well, which is the fundamental research on how to keep the technology moving forward. So they continue to do the research in the IBM Watson Research Center, and that pipeline of innovation flows into the Albany NanoTech center, where we do the pathfinding and determine what elements are ready for development to keep extending technology either through scaling or other creative ways.”

Moore’s law may be slowing, but it isn’t dead yet in Patton’s view. GlobalFoundries is actively investigating next-generation semiconductor technologies, such as nanowires and vertical transistors, with alliance partners IBM and Samsung at the State University of New York (SUNY) Albany NanoTech Complex, located about 30 miles south of the Fab 8 facility. You can see the fruits of their nanowire efforts in the 5nm test chip that was unveiled last week.

The post GlobalFoundries: 7nm Chips Coming in 2018, EUV in 2019 appeared first on HPCwire.