HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

ARM Targets New Processors at Machine Learning

Tue, 05/30/2017 - 16:45

ARM Ltd. joined a growing roster of processor specialists zeroing in on artificial intelligence and machine learning applications with the introduction of two new processor cores, one emphasizing performance, and the other efficiency.

The chip intellectual property vendor unveiled its high-end Cortex-A75 paired with its “high-efficiency” Cortex-A55 processor during this week’s Computex 2017 event in Taipei, Taiwan. Along with greater efficiency and processing horsepower, the chipmaker is positioning its latest processors as filling a gap in cloud computing by shifting more data processing and storage onto connected devices.

Along with accelerating AI development, ARM also is advancing its flexible processing approach that incorporates a so-called “big” and “LITTLE” processor core configuration into a single computing cluster. That architecture is based on the assumption that the highest CPU performance is required only about 10 percent of the time. The company also argues that “big” cores can run faster when “little” cores handle low-level workloads.
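
As a rough illustration of that 10 percent assumption (a toy model, not ARM's actual scheduling policy), a big.LITTLE-style placement rule simply routes the rare peak-demand tasks to "big" cores and everything else to "little" ones:

    # Toy big.LITTLE placement: send rare peak-demand tasks to "big" cores,
    # everything else to "little" cores. Illustrative only, not ARM's scheduler.
    def place_task(demand, big_threshold=0.9):
        """Pick a core class from a task's performance demand in [0, 1]."""
        return "big" if demand >= big_threshold else "little"

    workload = [0.10, 0.30, 0.95, 0.20, 0.05, 0.40, 0.15, 0.35, 0.30, 0.25]
    print([place_task(d) for d in workload])
    # One task in ten lands on a "big" core, matching the 10 percent figure.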

Based on its DynamIQ multicore architecture technology previewed in March, the Cortex-A75 targets emerging AI and machine learning workloads with a single-threaded performance boost of 50 percent, ARM claimed.

“New workloads and their processing requirements are still evolving, so fixed-function dedicated hardware accelerators may not address the newest [machine learning] algorithms,” ARM engineer Stefan Rosinger argued in a blog post. “It makes sense, in that case, to have general CPU capacity….”

Hence, the chipmaker eschews a one-size-fits-all approach by combining a general-purpose processor, “dedicated accelerators” and graphics processing in a system-on-chip as a way of achieving the highest efficiency, Rosinger continued.

The company said it tweaked the Cortex-A75 to deliver a 40 percent boost in infrastructure performance compared to its earlier Cortex-A72 processor core for handling machine learning and other complex workloads. The high-end core leaves headroom for emerging workloads, and also targets server and networking applications as ARM seeks to make inroads in x86-dominated datacenters as well as edge devices that would flesh out Internet of Things architectures.

Meanwhile, the Cortex-A55 includes ARMv8 architecture extensions along with dedicated machine learning instructions. ARM claims an 18 percent performance boost over the previous Cortex version, but the processor’s improved power efficiency points it primarily at IoT edge devices.

Along with processing performance and power efficiency, ARM is betting the rise of AI will heighten requirements for securing data as more personal information is processed and stored on edge devices. “We need to enable faster, more efficient and secure distributed intelligence between computing at the edge of the network and into the cloud,” noted ARM’s Nandan Nayampally.

Meanwhile, ARM said it has so far lined up “more than 10” licensees for both new processors and its DynamIQ framework. Rosinger said he expected initial devices based on the processor cores in early 2018.

ARM was acquired last July by SoftBank Group (TYO: 9984) to expand the Japanese technology conglomerate’s foray into IoT. ARM’s overarching IoT strategy focuses on developing and scaling its Cortex-M 32-bit microcontrollers and a device server that handles connections from IoT devices.


The Meaning of the FY18 Exascale Request – A Former Insider’s Perspective

Tue, 05/30/2017 - 15:10

“No money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law.” These words are from the U.S. Constitution (Article 1, Section 9, Clause 7) and proudly adorn the website header of the House of Representatives’ Appropriations Committee. The effect of these words is that the recently released Presidential Budget for Fiscal Year (FY) 2018 is just a starting point and has a very long way to go before it becomes law. That being said, while the budget is just a “suggestion” to Congress that is likely “dead on arrival,” it is a very strong suggestion and tells us a lot about the direction of the new administration. For exascale computing and advanced modeling and simulation this is very good news.

Understanding how much good news this means requires a bit of understanding about the minutiae of the federal budgeting process. Sorry for the budget wonk-speak, but this will help. Under normal circumstances, the process of writing the President’s budget starts about a year before it is submitted. That happens when the White House Office of Management and Budget (OMB) gives budget allocations to the agencies of the Executive Branch. That starts a process of negotiations and refinements of the budget. It also unleashes a huge amount of work by federal employees (aka feds) to write the thousands of pages of material that will eventually be released as part of the official presidential budget.

However, for the new administration’s FY18 budget that yearlong process was compressed down to a couple of months. When President Trump was inaugurated in January, a FY18 budget (done by the previous administration) was close to its final form. Usually, after taking office, a new administration will make a few tweaks to that budget and will submit it to Congress in February or March. However, President Trump and his Director of OMB, Mick Mulvaney, had other plans. In March, they released a “skinny version” of their plans for the FY18 budget. This effectively turned the regular budget development process on its head both in terms of its timing and allocation of dollars.

First, the timing — the release of the budget blueprint in March and the determination to release the full budget in May required that the normally yearlong process be condensed down to about two months. Granted, a tremendous amount of material for the FY18 budget already existed, but the OMB decision to issue a new blueprint required the feds to reengage in the process that goes from the White House to the agency heads through their Chief Financial Officers down to the offices and then back again. If you have not heard lately from your federal employee counterpart, this is likely the reason why. They have been busy!

Second, the allocations — as was extensively reported, the Trump administration’s skinny budget made substantial changes to the structure of the budget. This reflects very different priorities and their philosophy about the role of the federal government. Another important reason for the major shifts in funding levels was to create fiscal flexibility to make changes. The reality of the federal budget is that to make changes, without raising overall budget levels, requires stopping something “old” to do something “new.” The process of stopping or lowering existing federal government programs is very hard. This is because existing programs have groups that are benefiting from them and they understand how to apply pressure in the right places to keep them going. Remember the subsidy for mohair wool that started during the Korean War and ran for about 50 years.

Now — back to the Department of Energy (DOE) and the amazingly good news for exascale. The bottom line is that the President’s FY18 budget proposes to spend $508 million on exascale-related activities. This is a 77 percent increase over the FY17 enacted levels. The intent of this funding is to put the U.S. on track to have a productive exascale system by 2021. Funding is divided between two DOE programs, the Office of Science and the semi-autonomous National Nuclear Security Administration (NNSA). The NNSA request directs $161 million for the Advanced Simulation and Computing (ASC) program and another $22 million to begin construction of the physical infrastructure for the exascale system. The Office of Science (SC) money ($347 million) would go to the Advanced Scientific Computing Research (ASCR) program. (See Tiffany Trader’s coverage in HPCwire for a detailed look at the numbers.)
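
The arithmetic behind those figures is easy to check (assuming, as the split implies, that the $22 million construction line sits outside the $508 million exascale total):

    \underbrace{\$347\text{M}}_{\text{SC/ASCR}} + \underbrace{\$161\text{M}}_{\text{NNSA ASC}} = \$508\text{M},
    \qquad \text{FY17 enacted} \approx \frac{\$508\text{M}}{1.77} \approx \$287\text{M}

The second expression simply inverts the stated 77 percent increase to recover the approximate FY17 baseline.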

Both the NNSA and SC exascale activities will be the subject of debate as the President’s FY18 budget request moves forward in Congress. However, given the cuts that were seen in the rest of the DOE budget, getting to this point could be considered a minor miracle. Getting the increases to the NNSA exascale budget was likely to be relatively easy. President Trump said he was going to increase the federal government budget’s emphasis on national security and set aside about $1 billion for the NNSA. Using part of that to add to the ASC program must have been straightforward. That being said, there must have been a tremendous amount of work and planning needed to create the budget justification material.

The real challenge must have been on the SC side of the Exascale Computing Initiative (ECI) numbers. In the March “skinny budget,” President Trump and Director Mulvaney proposed to cut the Office of Science by about $900 million. A cut like that in a roughly $6 billion program is huge and requires a major shift in the direction of SC. There are six research programs in SC and under normal circumstances, each of them would be expected to take part of the cut. However, this year, one program (ASCR) would not only survive intact, but would grow by about $300 million. That required the other five offices to take an even deeper cut. It must have been uncomfortable for the Director of the ASCR program to sit at the table with her peers. In any case, just like the NNSA ASC feds, there had to be a tremendous amount of work done by the SC feds to plan and create the justifications for the $347 million that would be added to the ASCR program. And once again, that had to be done in only about two months. The lights in the DOE Forrestal and Germantown offices must have burned late into the night.

So — here we are. The President’s FY18 budget has been submitted. It is a big step, but only the first one. Now the Congressional Appropriations sub-committees’ staffers on the House and Senate sides will dismantle the numbers and start the process of having hearings and collecting information from a wide variety of sources to come up with their own version of the budget. Over the next few months, this will result in what is known as a “mark-up” of the budget. This will take two forms. One is changes that are made to the actual appropriation legislation. The second is known as “report language” that provides detailed instructions on how the DOE should spend the appropriated funds. And just to complicate things a bit more, there is also authorization language that comes from other committees in the Congress. Keep in mind, authorizations provide the permission to spend money, but appropriations put actual money in the checking account.

As noted above, the recently released President’s FY18 budget is likely to be “dead on arrival.” However, the real effect of the budget is the signals that it sends. Clearly President Trump’s administration is signaling some very significant changes in direction. The pros and cons of those changes will be debated long and hard in Congress and other places. However, the fact that the administration went to the considerable trouble to boost the exascale budget by the amount it did should be seen as a very positive sign by those who care about U.S. leadership in high performance computing and advanced modeling and simulation. Over the past few months, Secretary of Energy Perry has talked about his understanding of the importance of supercomputing and exascale. This budget signals that he is clearly committed to putting money where his mouth is.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness and the president of Larzelere & Associates Consulting. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”


CDL Launches New Quantum Machine Learning Program

Tue, 05/30/2017 - 14:24

TORONTO, May 30, 2017 — Today the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management announced a new program focused on the creation of quantum machine learning startups. This builds on the Lab’s five-year track record of assisting with the development of seed-stage, science-based companies, with a particular focus on artificial intelligence (AI)-enabled companies. To our knowledge, the CDL is currently home to the greatest concentration of AI-enabled companies of any program on Earth.

The mission: By 2022, the CDL’s Quantum Machine Learning Initiative will have produced more well-capitalized, revenue-generating quantum machine learning software companies than the rest of the world combined. The majority of these will be based in Canada.

From now until July 24, the CDL is accepting applications from inspired individuals anywhere in the world who are fully committed to building a quantum machine learning software company. The CDL is particularly interested in applicants with a graduate-level degree in physics, math, statistics, or electrical engineering and experience in machine learning, but will consider compelling applicants from all backgrounds. Applications will be processed on a rolling basis – spaces will be filled on a first-come, first-served basis. The CDL will provide international applicants with assistance in obtaining a visa for participating in this Canada-based program.

Apply here: https://www.creativedestructionlab.com/qml-application/

Up to 40 individuals or teams will be selected for the year-long program that begins this September in Toronto with an intensive introductory boot camp led by Dr. Peter Wittek, author of the first textbook on quantum machine learning, with tutorials delivered by other experts. Then, all companies will go through CDL’s nine-month objective-setting program, coached by carefully-selected entrepreneurs and investors with relevant track records of success.

Participants will be allotted time on D-Wave’s 2000Q quantum computer, have access to a D-Wave machine learning sampling service, and receive training and technical support from D-Wave experts.
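
The announcement does not name a software stack, but sampling on an annealer such as the 2000Q amounts to casting a problem as an Ising or QUBO model and drawing low-energy configurations from the hardware. Below is a minimal sketch using D-Wave's open-source dimod package, with its classical ExactSolver standing in for the quantum processor; the toy problem is an assumption for illustration:

    # Annealer-style sampling with D-Wave's dimod package. The classical
    # ExactSolver stands in for the 2000Q hardware in this sketch.
    import dimod

    # Two-spin Ising model: the bias favors s_a = +1 and the coupling
    # favors s_a and s_b taking opposite values.
    bqm = dimod.BinaryQuadraticModel({'a': -1.0, 'b': 0.0},
                                     {('a', 'b'): 1.0},
                                     0.0, dimod.SPIN)

    sampleset = dimod.ExactSolver().sample(bqm)
    print(sampleset.first.sample, sampleset.first.energy)
    # {'a': 1, 'b': -1} -2.0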

Bloomberg Beta, Data Collective (DCVC), and Spectrum 28, three Silicon Valley-based venture capital firms each with a significant portfolio of AI-related investments, will invest pre-seed capital in every company admitted to, or formed in, the program that chooses to take it.

Business strategy coaching will be provided by William Tunstall-Pedoe (founder – Evi, sold to Amazon), Barney Pell (founder – Powerset, sold to Microsoft; Moon Express; LocoMobi), Geordie Rose (founder – D-Wave; Kindred), Sally Daub (Founder – ViXS Systems), Anthony Lacavera (Founder – Globalive; Wind), Ted Livingston (Founder – Kik), James Cham (Bloomberg Beta), Matt Ocko (DCVC), Lyon Wong (Spectrum 28), and Steve Jurvetson (DFJ), among others. Participants will also benefit from business development and implementation support from MBA students at the Rotman School. This program is made possible with the generous support of Mastercard, Comcast, RBC, and Scotiabank.

Source: Creative Destruction Lab


ISC Announces 2018 Conference Topics

Tue, 05/30/2017 - 09:16

FRANKFURT, Germany, May 30, 2017 – The ISC program team, together with ISC 2018 program chair Prof Horst Simon, is pleased to announce 13 topics that will be the focus of next year’s conference. These topics embrace a range of subject matter critical to the development of the high performance computing field, which, in turn, impacts the quality of human life.

Here is the list of the topics that will be addressed by over 100 invited speakers:

  • Beyond Moore’s Law
  • Exascale Systems
  • Climate Change
  • HPC and Electric Power Grid Control
  • Big Data Analytics
  • Cosmology and HPC
  • Human Brain Modeling and Related Big Data Challenges
  • Future Applications for Quantum Computers
  • Robotics
  • What’s New with Cloud Computing for HPC?
  • Future Challenges for Programming Models and Languages
  • The Rise of Containerized HPC
  • Artificial Intelligence on HPC Platforms

New Appointments

We are also pleased to announce that nine individuals who served as deputy chairs at the 2017 conference have been promoted to chair positions for next year’s event. They bring with them varied research interests, as well as expertise.

Prof. David Keyes of KAUST will serve as the Research Paper chair, and the Research Poster committee will be headed by Dr. Matthias S. Müller of RWTH Aachen. Prof. Gerhard Wellein of Friedrich-Alexander University Erlangen-Nürnberg will chair the PhD Forum, a topic close to his heart. Prof. Georg Hager of the Erlangen Regional Computing Center will chair the Birds-of-a-Feather sessions.

The ISC Workshops committee will be headed by John Shalf of Lawrence Berkeley National Laboratory, while the Tutorials committee will be chaired by Dr. Rosa M. Badia of Barcelona Supercomputing Center. The HPC in Asia and the ISC Special Track chairs will be announced at a later date.

New for next year, Kim McMahon of McMahon Consulting has been appointed as the ISC Diversity Chair. She brings over 18 years of industry experience and is an active member of the Women in HPC organization.

The organizers are confident that the 2018 conference will offer talks that encompass an array of unique topics and speakers. The full program details will be available when registration opens in March 2018.

The conference will once again be held at Forum Messe Frankfurt and will take place from June 24 to June 28, 2018. 

Source: ISC


NVIDIA Wins Quartet of Major Awards at Computex

Tue, 05/30/2017 - 08:44

TAIPEI, May 30, 2017 — NVIDIA (NASDAQ: NVDA) has clinched four prestigious awards at Computex, extending its record winning streak at Asia’s largest technology tradeshow to nine years.

NVIDIA SHIELD TV won Computex’s top design and innovation award. And Best Choice Awards were won by NVIDIA Jetson TX2 AI supercomputer on a module, NVIDIA GRID 4.0 graphics virtualization platform and NVIDIA DGX-1 AI supercomputer.

“NVIDIA is honored to have captured these four awards across such a wide range of industries,” said Raymond Teh, vice president of Asia-Pacific Sales and Marketing at NVIDIA. “The wins show the spread of our innovative technologies in meeting the needs of consumers and enterprises, from the data center to the edge.”

SHIELD took the honors in the “game devices + content of games” category of the Computex d&I award — the first such win for NVIDIA. A panel of judges composed of top global industrial designers assessed all submissions based on innovation and elaboration, functionality, aesthetics, responsibility and positioning.

The world’s most advanced media streamer, SHIELD delivers the fastest, smoothest 4K HDR video and best-in-class gaming. Built-in Google Voice Search lets users control every experience with their voice.

Jetson TX2 won in the intelligent system and solution category, building on the success of its predecessor, the Jetson TX1, which won a Computex Best Choice Award last year. The Jetson TX2 is the world’s leading platform for AI computing at the edge. Its high-performance, power-efficient computing for deep learning and computer vision makes it ideal for AI city applications, robots, drones and other intelligent machines.

NVIDIA GRID 4.0 won in the cloud computing category. NVIDIA GRID is the industry’s most advanced technology for sharing virtual GPUs across multiple virtual desktop and application instances. NVIDIA GRID’s monitoring capabilities drive GPU-powered analytics to help measure, manage and support graphics virtualization environments.

NVIDIA DGX-1, the essential instrument of AI research, won in the computer and system category. The Best Choice Award went to the DGX-1 built on NVIDIA Tesla P100 GPU accelerators, designed to meet the computing demands of AI. Earlier this month, NVIDIA announced its successor, which uses Tesla V100 GPU accelerators built with the Volta GPU architecture. Available later this year, the new DGX-1 offers 3x faster deep learning training performance than its predecessor.

About NVIDIA

NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


PSSC Labs Integrates Intel Omni-Path into its HPC Solutions

Tue, 05/30/2017 - 08:39

LAKE FOREST, Calif., May 30, 2017 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced it has certified the Intel Omni-Path High Performance Fabric on its PowerWulf HPC solutions. Omni-Path is already a proven solution, deployed in several high-performance computing installations ranked in the Top500 list. This additional option will continue to expand PSSC Labs’ wide array of high value, high performance HPC solutions.

For years, Mellanox InfiniBand has dominated the high performance fabric market in the parallel computing space. Intel’s Omni-Path architecture is a game-changing technology that increases speed and performance at a cost lower than its Mellanox counterpart. By integrating Intel Omni-Path fabric into its PowerWulf solutions, PSSC Labs continues its mission of providing cutting edge solutions that push the boundaries of price and performance, while maintaining the absolute highest level of application support and system reliability.

“PSSC Labs offers solutions with the latest Intel technology to provide our customers with the highest level of performance and stability,” said Alex Lesser, Vice President of PSSC Labs. “PSSC Labs is an Intel HPC Data Center Specialist and has been a Platinum Provider with Intel since 2009, and our knowledge and experience allows us to incorporate the best in Intel Technology with our leading turn-key, custom HPC solutions.”

Benefits of Intel Omni-Path compared to Mellanox InfiniBand include:

  • Omni-Path clusters have up to 9% higher application performance than InfiniBand
  • Omni-Path is up to 33% less expensive than InfiniBand
  • Omni-Path is compatible with applications currently supported by InfiniBand
  • Direct connection to the CPU in future generations of Intel Xeon and Xeon Phi processors, eliminating the cost and latency of a host bus adapter
  • Higher bandwidth support with up to 100 Gbps

PSSC Labs’ PowerWulf HPC solutions offer a reliable, flexible, high performance computing platform for a variety of applications in the following verticals: Design & Engineering, Life Sciences, Physical Science, Financial Services and Machine/Deep Learning.

Every PowerWulf HPC Cluster with Intel Omni-Path includes a three-year unlimited phone/email support package (additional years of support are available), with all support provided by our US-based team of experienced engineers. Prices for a custom built PowerWulf solution start at $20,000. For more information see http://www.pssclabs.com/solutions/hpc-cluster/

About PSSC Labs

For technology-powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership. All products are designed and built at the company’s headquarters in Lake Forest, California. For more information: 949-380-7288, www.pssclabs.com, sales@pssclabs.com.

Source: PSSC Labs


Supermicro Highlights Server, Storage Portfolio at Computex 2017

Tue, 05/30/2017 - 07:55

TAIPEI, May 30, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in compute, storage and networking technologies including green computing, announces participation in Computex 2017 held at the NanGang Exhibition Center, Hall 1, 4F, May 30 to June 2, 2017. Supermicro will showcase more than 40 Embedded, IoT, Enterprise, Data Center and Gaming solutions in Booth M0120.

Supermicro Award Winning Products:

The Computex exhibition provides IT professionals the opportunity to experience the new, award-winning products that Supermicro has introduced this year. Two products have won awards at this year’s Computex: the Smart Edge Server, SYS-E100-9AP, won the Computex Design and Innovation Award 2017, and the 60-Bay 4U SuperStorage won the 2017 Best Choice Award. The Design and Innovation competition was held in Taiwan by the iF International Forum Design GmbH and included 96 participants from 8 countries who submitted 255 entries. The Best Choice Award competition was held by the Taipei Computer Association and focused on functionality, innovation and market potential.

The Supermicro Smart Edge Server, SYS-E100-9AP, is designed to ensure interoperability between systems, ease services deployment, and enable a broad ecosystem of solution providers. It not only lets users securely aggregate, share and filter data for analysis, but also helps ensure that data generated by devices can travel securely and safely from the edge to the cloud and back – without replacing existing infrastructure. The SYS-E100-9AP is designed for Smart Factory/Building/Home and kiosk applications, interactive information systems and environmental monitoring.

The Supermicro top-loading, 60-Bay 4U SuperStorage is the only 60-bay storage server supporting NVMe SSDs (6 x U.2) for I/O intensive metadata operations, the highest performance CPUs, the largest memory capacity with 24 DIMMs, on-board M.2 SSDs, HA boot drives, and a wide range of networking choices with Supermicro Super I/O Module (SIOM) cards. Deploying and servicing the 60-bay SuperStorage products is easy with a tool-less chassis design, tool-less drive carriers, out-of-band management through IPMI 2.0 with a dedicated LAN port, Supermicro RSD based on Intel® Rack Scale Design (Intel® RSD) support, and a built-in front panel LCD for fast system and disk diagnostics. As a mainstay of Supermicro’s capacity-maximized, top-loading storage product family, the 60-bay storage offerings are optimized for the most demanding software-defined storage applications and are especially popular among cloud service providers and large internet data center customers.

“Winning the ‘Best Choice Award’ for our 60-Bay, Top-Loading, Storage Server and the ‘Design and Innovation Award’ for our Edge Server confirms our ability to anticipate market demands. One 42U rack of 60-Bay top loading storage products would support more than 7PB of storage providing a powerful, multi-tiered, high density platform for today’s data hungry web scale applications. Our family of 45, 60 and 90 bay storage servers are capacity optimized, top-loading, storage products that offer the broadest range of solutions for Software Defined Storage,” said Charles Liang, President and CEO of Supermicro. “The Internet-of-Things is a growing opportunity for Supermicro and we are investing in IoT solutions development. We offer a wide range of Smart Edge Server solutions for highly dense, low power, fan-less and -20° to +60° C applications. We are pleased to have our leading products recognized by the Taipei Computer Association and the iF International Forum.”
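
The 7PB-per-rack figure is straightforward arithmetic (assuming roughly 12 TB drives, the largest shipping at the time; the quote does not name a drive size):

    \left\lfloor \frac{42\ \text{U}}{4\ \text{U/chassis}} \right\rfloor \times 60\ \text{drives} \times 12\ \text{TB}
    = 10 \times 60 \times 12\ \text{TB} = 7.2\ \text{PB}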

Rack Scale Design:

  • Supermicro’s Rack Scale Design (Supermicro RSD) solutions based on the Intel Rack Scale Design software framework empower cloud service providers, telecoms, and Fortune 500 companies to build their own agile, efficient, software-defined data centers. Supermicro RSD is a total solution comprised of Supermicro server/storage/networking hardware and an optimized rack level management software that represents a superset of the open source Intel Rack Scale Design software framework and industry standard Redfish RESTful APIs developed by the DMTF (Distributed Management Task Force).
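
Redfish is a DMTF standard, so any compliant endpoint exposes its service root at /redfish/v1/. A minimal sketch of walking such an endpoint follows; the address and credentials are hypothetical placeholders, not Supermicro RSD specifics:

    # Walk a Redfish-compliant management endpoint using the DMTF-standard
    # paths. BASE and AUTH are hypothetical placeholders.
    import requests

    BASE = "https://10.0.0.1"        # hypothetical BMC address
    AUTH = ("admin", "password")     # hypothetical credentials

    # The Redfish service root lives at /redfish/v1/ by specification.
    root = requests.get(BASE + "/redfish/v1/", auth=AUTH, verify=False).json()

    # Follow the Systems collection and list each member's URI.
    systems = requests.get(BASE + root["Systems"]["@odata.id"],
                           auth=AUTH, verify=False).json()
    for member in systems["Members"]:
        print(member["@odata.id"])   # e.g. /redfish/v1/Systems/1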

New Products Being Shown:

  • MicroBlade is an entirely new type of data center computing platform. It provides the efficiencies and TCO advantages of blades, but at a low acquisition cost. It is a powerful and flexible extreme-density 6U/3U all-in-one system that features 28/14 hot-swappable MicroBlade Server nodes supporting 28/14 of the newest dual-node Intel Xeon processor UP system configurations with up to 2 SSDs/1 HDD per node. Also, it supports high-density 1G and 10G switches to meet different bandwidth requirements for different workloads with support for integrated battery backup. This MicroBlade architecture includes server, networking, storage, and unified remote management for Cloud Computing, Video Streaming, Content Delivery, Social Networking, Desktop Virtualization and Remote Workstation applications.
  • BigTwin delivers the highest performance and efficiency in a 2U 4-node platform that supports the widest TDP range of CPUs; the current X10 version fully exploits all memory channels with a maximum of 24 DIMMs per node, all-flash NVMe drives and hot-swap U.2 NVMe support.
  • 8U SuperBlade with up to 20x 2-socket blade servers or 10x 4-socket blade servers. Both blade servers support the highest performance Xeon CPU as well as hot-plug NVMe SSDs. The system is optimized for high performance computing applications with 100G EDR IB or Omni-Path switch as well as redundant Ethernet (25G, 10G, 1G) switch support.
  • 4U SuperBlade with up to 14x 2-socket or 1-socket blade servers. Both blade servers support high performance Xeon CPU as well as hot-plug NVMe SSDs. The system is optimized for data center, enterprise, and cloud applications with 100G EDR IB or Omni-Path switch as well as redundant Ethernet (25G, 10G, 1G) switch support.
  • The 1U and 2U Supermicro Ultra SuperServers provide scalability for Virtualization Hosting, Cloud Computing, Data Centers and High-Frequency Trading. The Supermicro SuperServer product line is designed to deliver unrivaled performance, flexibility, scalability, and serviceability that is ideal for demanding Enterprise workloads.
  • 2U, 3U and 4U SuperStorage solutions in both JBOD and active storage configurations that support 24/40 NVMe drives or 60/90 SATA drives that are optimized for Microsoft, VMWare, RedHat and Software Defined Storage solutions. Data transfer rates can be as high as 20GB/s for NVMe solutions. These systems offer a fully redundant, fault-tolerant architecture with hot-swappable drive bays, power supplies and cooling fans. The active-active capable JBOD hardware is perfect for mission critical applications.
  • 2U TwinPro architecture builds on Supermicro’s proven Twin technology to provide the greatest and highest throughput storage, networking, I/O, memory, and processing capabilities allowing customers to further optimize Supermicro solutions to solve their most challenging IT requirements. Optimized for high-end Enterprise, HPC cluster, Data Center, and Cloud Computing environments, the Supermicro TwinPro Solutions are designed for ease of installation and maintenance with the highest quality for continuous operation at maximum capacity. The resulting benefit is best TCO for customers seeking the greatest competitive advantage from their data center resources.
  • SuperStorage Bridge Bay (SBB) features a fully redundant, fault-tolerant “Cluster-in-a-box” system. Optimized for mission-critical, enterprise-level storage applications, the SBB supports hot-swap SAS HDDs with the option to expand by using the SBB JBOD. The Super SBB provides hot-swappable canisters for all active components. With heartbeat and data connections between the servers via the mid-plane, if one server fails, the other can take control and access the HDDs (both controllers can also work in Active-Active mode), keeping the system up and running.
  • Omni-Path 100G 48-port TOR network switch (SSH-C48Q/-C48QM) delivers 100Gbps using the Intel Omni-Path Architecture (Intel OPA). It provides a unique HPC cluster solution offering excellent bandwidth, latency and message rate that is highly scalable and easily serviceable.
  • Embedded and IoT motherboards that support the latest CPUs as well as legacy interfaces. Applications include Communications, Storage Appliances, Digital Signage, Digital Security and Surveillance, Gaming and Entertainment, Industrial Automation, Medical Instrumentation and Devices, as well as Defense and Aerospace.
  • Intel Xeon Phi coprocessor support. These systems achieve higher parallel processing capability with Intel Many Integrated Core Architecture (Intel MIC Architecture) based on Intel® Xeon Phi processors. Unified with the latest Intel Xeon processor family utilizing common instruction sets and Intel Xeon Phi coprocessor’s multiple programming models for HPC, engineering, scientific and research fields including financial analysis, oil/gas simulation, code optimization, 3-D rendering and chemistry applications.

More information on Supermicro products can be found at http://www.supermicro.com

About Super Micro Computer, Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly, solutions available on the market. For more information, please visit, http://www.supermicro.com.

Source: Supermicro


Technique for Scaling up Quantum Computer Production

Tue, 05/30/2017 - 07:52

Quantum computing presents many challenges, not least developing practical methods for fabricating qubits and the computers themselves. Last week researchers from MIT, Harvard, and Sandia National Laboratories reported a new method for creating tiny defects in diamonds and harnessing them for use in quantum systems. The work shows promise for scaling up quantum computer production.

Their paper, ‘Scalable focused ion beam creation of nearly lifetime-limited single quantum emitters in diamond nanostructures,’ published in Nature Communications, reports using focused ion beam technology to create interfaces between quantum memories and quantum networks. There is also an account on the MIT News web site, ‘Toward mass-producible quantum computers.’

As the researchers note in their abstract, “The controlled creation of defect centre—nanocavity systems is one of the outstanding challenges for efficiently interfacing spin quantum memories with photons for photon-based entanglement operations in a quantum network. Here we demonstrate direct, maskless creation of atom-like single silicon vacancy (SiV) centres in diamond nanostructures via focused ion beam implantation with ~32nm lateral precision and 50nm positioning accuracy relative to a nanocavity.”

The lead researcher from MIT, Dirk Englund, associate professor of electrical engineering and computer science, is quoted in the MIT article saying, “The dream scenario in quantum information processing is to make an optical circuit to shuttle photonic qubits and then position a quantum memory wherever you need it. We’re almost there with this. These emitters are almost perfect.”

As explained in the MIT article, diamond-defect qubits result from the combination of “vacancies,” which are locations in the diamond’s crystal lattice where there should be a carbon atom but there isn’t one, and “dopants,” which are atoms of materials other than carbon that have found their way into the lattice. Together, the dopant and the vacancy create a dopant-vacancy “center,” which has free electrons associated with it. The electrons’ magnetic orientation, or “spin,” which can be in superposition, constitutes the qubit.
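
In standard notation (a textbook formulation, not taken from the article), such a spin qubit holds the state

    |\psi\rangle = \alpha\,|\uparrow\rangle + \beta\,|\downarrow\rangle,
    \qquad |\alpha|^2 + |\beta|^2 = 1,

a superposition of the two spin orientations, with measurement yielding spin-up with probability |\alpha|^2.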

A perennial problem in the design of quantum computers is how to read information out of qubits. Diamond defects present a simple solution, because they are natural light emitters. In fact, the light particles emitted by diamond defects can preserve the superposition of the qubits, so they could move quantum information between quantum computing devices.

Link to Nature Communications article: https://www.nature.com/articles/ncomms15376

Link to MIT News article: http://news.mit.edu/2017/toward-mass-producible-quantum-computers-0526

Figure Caption
Figure 1 | Targeted Si ion implantation into diamond and SiV defect properties. (a) Illustration of targeted ion implantation. Si ions are precisely positioned into diamond nanostructures via a FIB. The zoom-in shows a scanning electron micrograph of an L3 photonic crystal cavity patterned into a diamond thin film. Scale bar, 500 nm; Si is silicon. (b) Intensity distribution of the fundamental L3 cavity mode with three Si target positions: the three mode-maxima along the centre of the cavity are indicated by the dashed circles. The central mode peak is the global maximum. (c) Atomic structure of a SiV defect centre in diamond. Si represents an interstitial Si atom between a split vacancy along the ⟨111⟩ lattice orientation and C the diamond lattice carbon atoms. (d) Simplified energy-level diagram of the negatively charged SiV indicating the four main transitions A, B, C and D. Δ is the energy splitting of the two levels within the doublets.


NVIDIA Partners with Manufacturers to Advance AI Cloud Computing

Tue, 05/30/2017 - 07:50

TAIPEI, May 30, 2017 — NVIDIA (NASDAQ: NVDA) today launched a partner program with the world’s leading original design manufacturers (ODM) — Foxconn, Inventec, Quanta and Wistron — to more rapidly meet the demands for AI cloud computing.

Through the NVIDIA HGX Partner Program, NVIDIA is providing each ODM with early access to the NVIDIA HGX reference architecture, NVIDIA GPU computing technologies and design guidelines. HGX is the same data center design used in Microsoft’s Project Olympus initiative, Facebook’s Big Basin systems and NVIDIA DGX-1 AI supercomputers.

Using HGX as a starter “recipe,” ODM partners can work with NVIDIA to more quickly design and bring to market a wide range of qualified GPU-accelerated systems for hyperscale data centers. Through the program, NVIDIA engineers will work closely with ODMs to help minimize the amount of time from design win to production deployments.

As the overall demand for AI computing resources has risen sharply over the past year, so has the market adoption and performance of NVIDIA’s GPU computing platform. Today, 10 of the world’s top 10 hyperscale businesses are using NVIDIA GPU accelerators in their data centers.

With new NVIDIA Volta architecture-based GPUs offering three times the performance of their predecessors, ODMs can meet market demand with new products based on the latest NVIDIA technology available.

“Accelerated computing is evolving rapidly — in just one year we tripled the deep learning performance in our Tesla GPUs — and this is having a significant impact on the way systems are designed,” said Ian Buck, general manager of Accelerated Computing at NVIDIA. “Through our HGX partner program, device makers can ensure they’re offering the latest AI technologies to the growing community of cloud computing providers.”

Flexible, Upgradable Design
NVIDIA built the HGX reference design to meet the high-performance, efficiency and massive scaling requirements unique to hyperscale cloud environments. Highly configurable based on workload needs, HGX can easily combine GPUs and CPUs in a number of ways for high performance computing, deep learning training and deep learning inferencing.

The standard HGX design architecture includes eight NVIDIA Tesla GPU accelerators in the SXM2 form factor and connected in a cube mesh using NVIDIA NVLink high-speed interconnects and optimized PCIe topologies. With a modular design, HGX enclosures are suited for deployment in existing data center racks across the globe, using hyperscale CPU nodes as needed.
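
NVIDIA does not publish the link map in this announcement, but an eight-GPU "cube mesh" reads naturally as a 3-D hypercube: number the GPUs 0 through 7 and link each pair whose binary IDs differ in exactly one bit. The sketch below enumerates that topology under this interpretation; it is not a confirmed NVLink wiring diagram:

    # Enumerate a 3-D hypercube ("cube mesh") over eight GPUs: two GPUs are
    # linked when their 3-bit IDs differ in exactly one bit. This is an
    # interpretation of "cube mesh", not NVIDIA's published NVLink map.
    from itertools import combinations

    links = [(a, b) for a, b in combinations(range(8), 2)
             if bin(a ^ b).count("1") == 1]

    print(len(links))            # 12 links: 8 GPUs x 3 ports / 2
    for a, b in links:
        print(f"GPU{a} <-> GPU{b}")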

Both NVIDIA Tesla P100 and V100 GPU accelerators are compatible with HGX. This allows for immediate upgrades of all HGX-based products once V100 GPUs become available later this year.

HGX is an ideal reference architecture for cloud providers seeking to host the new NVIDIA GPU Cloud platform. The NVIDIA GPU Cloud platform manages a catalog of fully integrated and optimized deep learning framework containers, including Caffe2, Cognitive Toolkit, MXNet and TensorFlow.

“Through this new partner program with NVIDIA, we will be able to more quickly serve the growing demands of our customers, many of whom manage some of the largest data centers in the world,” said Taiyu Chou, general manager of Foxconn/Hon Hai Precision Ind Co., Ltd., and president of Ingrasys Technology Inc. “Early access to NVIDIA GPU technologies and design guidelines will help us more rapidly introduce innovative products for our customers’ growing AI computing needs.”

“Working more closely with NVIDIA will help us infuse a new level of innovation into data center infrastructure worldwide,” said Evan Chien, head of IEC China operations at Inventec Corporation. “Through our close collaboration, we will be able to more effectively address the compute-intensive AI needs of companies managing hyperscale cloud environments.”

“Tapping into NVIDIA’s AI computing expertise will allow us to immediately bring to market game-changing solutions to meet the new computing requirements of the AI era,” said Mike Yang, senior vice president at Quanta Computer Inc. and president at QCT.

“As a long-time collaborator with NVIDIA, we look forward to deepening our relationship so that we can meet the increasing computing needs of our hyperscale data center customers,” said Donald Hwang, chief technology officer and president of the Enterprise Business Group at Wistron. “Our customers are hungry for more GPU computing power to handle a variety of AI workloads, and through this new partnership we will be able to deliver new solutions faster.”

“We’ve collaborated with Ingrasys and NVIDIA to pioneer a new industry standard design to meet the growing demands of the new AI era,” said Kushagra Vaid, general manager and distinguished engineer, Azure Hardware Infrastructure, Microsoft Corp. “The HGX-1 AI accelerator has been developed as a component of Microsoft’s Project Olympus to achieve extreme performance scalability through the option for high-bandwidth interconnectivity for up to 32 GPUs.”

About NVIDIA
NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


ERC Grant to Explore Commercial Potential for Computational Genomics

Tue, 05/30/2017 - 07:43

BARCELONA, May 30, 2017 — The European Research Council (ERC) has awarded Barcelona Supercomputing Center (BSC) researcher David Carrera a Proof of Concept Grant to explore the commercial potential of a software solution for computational genomics. The objective of the ERC Proof of Concept is to pave the way for commercialization of Hi-OMICS, a Software Defined Infrastructure (SDI) controller that leverages deep learning technologies to efficiently manage computational genomics workloads on SDI platforms.

The goal of Hi-OMICS is to bring an advanced orchestration platform for Genomic workloads (processing genomic or transcriptomic sequences, derived mostly, but not only, from Next Generation Sequencing) in order to significantly improve the cost-efficiency of the infrastructure in comparison to existing computational genomics platforms.

Hi-OMICS has been developed in the context of David Carrera’s Holistic Integration of Emerging Supercomputing Technologies (Hi-EST) ERC Starting Grant. One of the use cases developed in Hi-EST has been to explore how to improve the performance and cost-efficiency of genomics pipelines, with a special focus on SMUFIN, a state-of-the-art method for finding genomic somatic mutations developed at BSC and published in Nature Biotechnology under the leadership of ICREA Professor at BSC David Torrents.

The focus of this collaboration was to explore how to take SMUFIN one step forward in terms of performance and cost-efficiency by taking advantage of accelerators and non-volatile memories in the context of disaggregated resources for Software Defined Data Centers. As a result of this work, an accelerated version of SMUFIN has been developed, which provides a reduction of energy consumption by a factor of 20x, while it still delivers a performance improvement by a factor of 2x. The researchers have filed a patent for the new and disruptive version of the software.
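
Taken together, the two factors imply roughly an order-of-magnitude drop in average power draw (assuming the 20x figure refers to energy-to-solution and the 2x figure to runtime, a reading the announcement does not state explicitly):

    \frac{P_{\text{new}}}{P_{\text{old}}}
    = \frac{E_{\text{new}}/t_{\text{new}}}{E_{\text{old}}/t_{\text{old}}}
    = \frac{E_{\text{old}}/20}{t_{\text{old}}/2} \cdot \frac{t_{\text{old}}}{E_{\text{old}}}
    = \frac{2}{20} = \frac{1}{10}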

Impact on the Personalized Medicine market

Through this activity, Carrera’s team observed that genomics workloads could be made still more cost-efficient: pipelines consume system resources in different ways during their execution, which creates opportunities for smart orchestration of workloads across disaggregated data center resources. Increased performance reduces the cost of running genomic applications; combined with lower energy consumption and smaller infrastructure investments, it improves cost-efficiency, which will have an immediate impact on the personalized medicine market.

More about David Carrera

David Carrera is the Head of the “Data-Centric Computing” research group at the Barcelona Supercomputing Center and Associate Professor at the Computer Architecture Department of the UPC. He is also an ICREA Academia professor. His research interests are focused on the performance management of data center workloads.

About BSC

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specialises in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). BSC also manages the Spanish Supercomputing Network (RES).

BSC is a consortium formed by the Ministry of Economy, Industry and Competitiveness of the Spanish Government, the Business and Knowledge Department of the Catalan Government and the Universitat Politècnica de Catalunya – BarcelonaTech.

Source: BSC


GIGABYTE Announces Expansion of ARM Server Portfolio Based on ThunderX2

Mon, 05/29/2017 - 15:03

TAIPEI, Taiwan, May 29 — GIGABYTE Technology, a leading producer of high performance server hardware, and Cavium, Inc., a leading provider of semiconductor products that enable intelligent processing for enterprise, data center, cloud, wired and wireless networking – today announced availability of high performance ARM server platforms based on Cavium’s second generation 64-bit ARMv8 ThunderX2 product family.

GIGABYTE, with its award-winning product portfolio, has been long recognized in the industry as a leader in design and innovation. With a broad offering of commercially-available ARM-based server solutions, GIGABYTE continues to demonstrate engineering expertise in system level integration that manages intensive compute I/O, large memory configurations and power optimization. Through its world-wide presence, GIGABYTE is uniquely positioned to accelerate the adoption and deployment of ThunderX2 based systems into the cloud and hyperscale data center applications.

GIGABYTE has announced multiple platforms optimized to utilize the ThunderX2 SoC. The R181 (1U) series utilizes a single socket ATX form factor motherboard with multiple PCIe x16 Gen3 and SATA connectors for expansion modules, along with high memory capacity. R181 is an ideal solution for applications that need cost effective high performance compute platforms.

GIGABYTE’s H216 series is a 2U server platform that can support 4 dual socket ThunderX2 compute nodes, with 1TB memory capacity per sled and multiple PCIe x16 slots. The chassis itself supports dual redundant power supplies and 24 2.5” HDDs. The H216 is designed to address the requirements of the most demanding applications in high performance computing and public/private cloud applications.

All platforms integrate IPMI-based management and are supported by GIGABYTE’s proprietary remote management application.

“GIGABYTE’s second generation line of ARMv8 server offerings, enabled by Cavium’s ThunderX2 processors, provides best-in-class high performance ARM solution for the Data Center, the Cloud and for Telcos,” said Etay Lee, General Manager of GIGABYTE Technology. “ThunderX2 SoCs deliver performance competitive with the latest high end server processors. GIGABYTE has developed highly integrated system solutions enabling an efficient and cost effective path to high density Hyperscale class data center design using the industry’s best ARMv8 server class processor. Our innovation, reliability and flexibility deliver time-to-market advantages to customers developing next generation data center & cloud solutions.”

“The Cavium ThunderX2 multi-core ARMv8 product family is well suited to address the high compute demands of next generation data center and cloud infrastructure,” said Rishi Chugh, Director of Product Marketing for Cavium’s Data Center Processor Group. “By using Cavium’s ThunderX2 class processors, GIGABYTE is able to offer a wide variety of high-performance, high volume 1U and 2U systems in single and dual socket configurations as well as reference platforms that fully utilize core differentiators such as integrated SATA ports, high memory bandwidth & capacity along with multiple PCIe Gen3 x16 ports. Together, we are delivering the performance, scalability and reliability necessary for businesses to handle increasingly complex data intensive workloads in highly virtualized, cloud-based environments.”

The ThunderX2 product family is Cavium’s second generation 64-bit ARMv8-A server processor SoC for Datacenter, Cloud and High Performance Computing applications. The family integrates fully out-of-order high performance custom cores supporting single and dual socket configurations. ThunderX2 is optimized to drive high computational performance, delivering outstanding memory bandwidth and memory capacity. The new line of ThunderX2 processors includes multiple workload optimized SKUs for both scale up and scale out applications and is fully compliant with ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards. It is also widely supported by industry leading OS, Hypervisor and SW tool and application vendors.

About GIGABYTE Technologies

GIGABYTE was founded in 1986 and has since established an uncontested position in continuous technological innovation. By focusing on key technologies and achieving strict quality standards, GIGABYTE has been regarded as an innovative and trusted motherboard leader globally. To keep pace in a rapidly changing world, we offer a comprehensive product line covering Motherboards, Graphics Cards, PC Components, PC Peripherals, Laptops, Desktop PCs, Network Communications, Servers and Embedded Products. GIGABYTE Technology is dedicated to enabling a full-range digital life, responding promptly and effectively to market needs and desires.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com.

Source: GIGABYTE Technology / Cavium, Inc.


Inventec Launches Baymax HyperScale Server with ThunderX2 Processors

Mon, 05/29/2017 - 14:58

TAIPEI, Taiwan, May 29, 2017 – Inventec, the world’s leading ODM server manufacturer for enterprise and cloud datacenter server and storage platforms, today announced Baymax, a new server platform optimized for cloud compute, high-performance cloud storage and Big Data applications based on Cavium, Inc.’s second-generation 64-bit ARMv8 ThunderX2 product family.

Inventec is the largest ODM server manufacturer in the world and is a recognized leader in the delivery of high-performance and scalable server and storage designs to leading server OEMs as well as Mega Scale Datacenter end user customers. The Baymax platform from Inventec is an OCP Project Olympus compliant 1U rack mount server platform supporting two ThunderX2 SoCs in a dual socket configuration with up to 1.5TB of DDR4 memory, three PCIe x16 slots, and four SATA ports complemented by four M.2 flash drives for local storage. This platform is optimized for diverse workloads such as Hadoop, SQL, high performance cloud storage and search workloads that require a balance of high computing performance with the highest memory capacity and bandwidth.

The ThunderX2 product family is Cavium’s second generation 64-bit ARMv8-A server processor SoCs for Datacenter, Cloud and High Performance Computing applications. The family integrates fully out-of-order high performance custom cores supporting single and dual socket configurations. ThunderX2 is optimized to drive high computational performance delivering outstanding memory bandwidth and memory capacity. The new line of ThunderX2 processors includes multiple workload optimized SKUs for both scale up and scale out applications and is fully compliant with ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards. It is also widely supported by industry leading OS, Hypervisor and SW tool and application vendors.

“Inventec’s success as the world’s largest server ODM has been based on our compelling designs and manufacturing expertise and our ability to deliver leading edge, cost effective server platforms to the world’s largest mega scale datacenters,” said Evan Chien, Senior Director of Inventec Server Business Unit 6. “Earlier this year Inventec’s customers requested platforms based on Cavium’s ThunderX2 ARMv8 processors, and the new Baymax platform is the first platform being delivered.”

“Inventec is well known for their robust and innovative designs and broad customer base in both Hyperscale and Enterprise server markets and are expanding their ThunderX server portfolio offering with the addition of Baymax platforms,” said Gopal Hegde, VP/GM, Datacenter Processor Group, Cavium, “They are a great addition to the growing list of ThunderX2-based platform manufacturers.”

About Inventec

Since its establishment, Inventec has adhered to a corporate philosophy of “Innovation, Quality, Open Mind, and Execution.” Starting from calculators and telephones, Inventec moved into the notebook industry and established a compelling reputation. In the 21st century, Inventec has diversified into cloud computing, mobile computing, wireless communications, network applications, digital home appliances, software applications and sustainable energy. For more information, please visit: http://www.inventec.com

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com

Source: Inventec / Cavium, Inc.

The post Inventec Launches Baymax HyperScale Server with ThunderX2 Processors appeared first on HPCwire.

Cavium, Partners to Demo Product Innovations at COMPUTEX 2017

Mon, 05/29/2017 - 14:51

TAIPEI, Taiwan, and SAN JOSE, Calif., May 29, 2017 — Cavium, Inc., a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, datacenter, cloud, wired and wireless networking, will demonstrate the company’s latest product innovations at COMPUTEX 2017, in Taipei, Taiwan from May 30th – June 3rd, at the Grand Hyatt hotel in Cavium’s suite 2412.

In an increasingly complex technology landscape, Cavium distinguishes itself by providing a rich and diverse product portfolio for all infrastructure segments, including datacenters, cloud, virtualization, security and networking.

Cavium’s ThunderX and ThunderX2 64-bit ARMv8-based processor families feature high-performance custom cores, single- and dual-socket configurations, high memory capacity and bandwidth, and integrated hardware accelerators for networking, storage and security, along with the highest levels of I/O throughput and scalability. They are fully compliant with ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards, and are widely supported by industry-leading OS, hypervisor and software tool and application vendors. Various ODM and OEM ThunderX and ThunderX2 platforms targeting cloud and HPC workloads will be on display.

Cavium’s XPliant Ethernet switch family is the first high-throughput programmable datacenter switching solution in production and shipping today. Platforms based on the XPliant family leverage its flexible control of table resources and pipeline logic to meet the specific needs of the network architecture while providing unprecedented packet visibility and telemetry. In addition to programmability, the XPliant switch architecture offers a fully centralized, shared, dynamically allocated packet buffer to absorb large packet bursts and provide advanced traffic management functions. Various ODM and OCP compliant platforms will be on display running a range of network operating systems.

Cavium’s OCTEON Fusion-M is the industry’s most comprehensive Radio Access Network (RAN) offering, enabling macro and micro cells, intelligent remote radio heads, and NFV and Cloud RAN solutions. The CNF75xx and CNF73xx processor families support 2G/3G/4G and emerging 5G standards.

Cavium’s OCTEON TX™ is a complete line of 64-bit ARM-based SoCs for control plane and data plane applications in networking, security, and storage. OCTEON TX expands the addressability of Cavium’s embedded products into control plane applications within enterprise, service provider, and datacenter networking and storage that need the support of an extensive software ecosystem and virtualization features. The product line is also optimized to run multiple concurrent data and control planes simultaneously for security and router appliances, NFV and SDN infrastructure, service provider CPE, wireless transport, NAS, storage controllers, IoT gateways, and printer and industrial applications.

Cavium’s FastLinQ Ethernet adapters support 10Gb/25Gb/40Gb/100Gb speeds and are ideally suited for enterprise-class datacenters, public and private clouds, managed service providers and telco deployments, as well as storage applications. The feature-rich adapter family supports Universal RDMA with RoCE/RoCEv2 and iWARP, server virtualization with NPAR and SR-IOV, network tunneling with VXLAN, NVGRE and GENEVE, and network storage with iSCSI, FCoE and NVMe-oF, and improves cloud/telco efficiency with DPDK support.

The following Cavium and Partner product demonstrations will be shown during the week of Computex at the Grand Hyatt hotel, suite 2412:

    • ThunderX/ThunderX2: 64–bit ARMv8 Workload Optimized processors:
      • ThunderX/ThunderX2 running Cloud Applications on various ODM platforms
    • XPliant Ethernet Switch Family:
      • XPliant-based production platforms from various customers will be on display, including the S5160 Series from Arista and SLX series from Brocade.
      • Various XPliant-based ODM and OCP compliant switch platforms showing various form factors such as “32x100G” and “48x25G + 6x100G” will be on display.
      • Demonstration of various Network Operating Systems (NOS) such as SONiC from Microsoft, PicOS from Pica8 and open source NOS OpenSwitch (OPX) running on OCP AS7512-32X open network 100GE datacenter platform by Edgecore Networks.
      • Demonstration of Interoperability between Cavium’s XPliant CNX880xx and MACOM’s ES200 MACsec PHY, transporting wire-rate 100GbE MACsec encrypted traffic over QSFP28 100G optics.
    • OCTEON TX: 64-bit ARMv8 Embedded Multicore Processors:
      • High performance DPDK IPsec security applications.
      • The OCTEON TX CN81XX IoT gateway/router reference design.
      • Various low power fan-less ODM platforms for SD-WAN and NFV.
      • OpenWrt and partner VNF software.
    • OCTEON Fusion–M:
      • The recently announced CNF73xx single chip micro BTS reference design.
      • The Facebook Telecom Infra Project (TIP) 4G OpenCellular basestation with:
        • OCTEON Fusion CNF7130 baseband processor.
        • Cavium’s open source LTE software.
      • Cloud RAN implementation.
        • OCTEON Fusion–M Remote Radio Head.
        • ThunderX Virtual Base Band Unit.
        • Cavium’s LTE split stack software.
    • Cavium FastLinQ Ethernet Adapters:
      • Latest 10/25/50/100Gb controller-based OCP adapters.
      • Low-latency Universal RDMA.
      • NVMe storage over iWARP showcasing best-in-class IOPS and latency.

To schedule a meeting with Cavium, please contact your local sales account manager or Lilly Ly (lilly.ly@cavium.com). Please enter “Meeting Request at Computex 2017” in the subject line.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com

Source: Cavium Inc.

The post Cavium, Partners to Demo Product Innovations at COMPUTEX 2017 appeared first on HPCwire.

Ingrasys Samples Cavium ThunderX2 Rack Mount Server Platforms

Mon, 05/29/2017 - 14:49

TAIPEI, Taiwan, May 29, 2017 – Ingrasys, a wholly owned subsidiary of Foxconn Technology Group, the world’s largest manufacturer of server and storage platforms, today announced the sampling of new rack-mount server platforms based on Cavium Inc.’s second-generation 64-bit ARMv8 ThunderX2 product family.

Ingrasys is a trusted name in contract manufacturing services and is a recognized leader in the delivery of high-performance and scalable server and storage designs to leading server Innovation Design and Manufacturing (IDM) as well as container datacenter end user customers.

The Osmium platform from Ingrasys is a density-optimized 2U4N rack-mount server platform for cloud compute and high-performance computing workloads. The platform supports four compute nodes in a 2U form factor, delivering the highest compute and memory density in a very compact footprint. Each compute node integrates two ThunderX2 SoCs in a cache-coherent dual-socket configuration with up to 1TB of memory per node and four x16 PCIe slots, enabling a variety of rich I/O configurations. The four compute nodes share a common chassis and power supply infrastructure, enabling cost- and density-optimized server platforms that balance high-density compute with a flexible OCP v2.0 mezzanine card for network and storage connectivity options.

The ThunderX2 product family is Cavium’s second generation 64-bit ARMv8-A server processor SoCs for Datacenter, Cloud and High Performance Computing applications. The family integrates fully out-of-order high performance custom cores supporting single and dual socket configurations. ThunderX2 is optimized to drive high computational performance delivering outstanding memory bandwidth and memory capacity. The new line of ThunderX2 processors includes multiple workload optimized SKUs for both scale up and scale out applications and is fully compliant with ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards. It is also widely supported by industry leading OS, Hypervisor and SW tool and application vendors.

“Our customers continue to demand platform innovation that reduces TCO and increases workload performance,” said Taco Chang, Director of ARM platform product group at Ingrasys.  “The new platforms leverage ThunderX2’s compute, memory and rich IO capabilities to deliver highest levels of performance at compelling TCO allowing our customers to meet growing performance demands within their existing infrastructure.”

“Our design approach for the ThunderX product family continues to prove itself,” said Rishi Chugh, Director of Product Marketing for Cavium’s Data Center Processor Group. “Our partnership with Ingrasys is a great match, combining both companies’ focus on delivering high-performance solutions for cloud and high-performance computing workloads with outstanding TCO. Together, we are delivering the increases in performance and scalability necessary for businesses to handle increasingly data-intensive workloads.”

Availability
Ingrasys Osmium ThunderX2 servers are sampling to select customers.

About Ingrasys

Ingrasys Technology Inc., founded in February 2002, is a leading global developer of cloud computing technologies. Headquartered in Taoyuan, Taiwan, the company has approximately 1,000 employees worldwide, over 90 percent of whom are engineers specializing in cloud computing development. Building on its own innovations in storage, servers and datacenters, Ingrasys has maintained steady business growth in the global market. Ingrasys is dedicated to delivering solid-state networking storage devices and servers, as well as a full spectrum of intelligent embedded network systems and service solutions for the fast-growing cloud computing environment. With extensive experience in cloud computing technologies, Ingrasys aims to be a best-in-class technology company that connects its partners to future growth. For more information, visit: http://www.ingrasys.com.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, visit: http://www.cavium.com.

Source: Ingrasys Technology Inc. / Cavium, Inc.

The post Ingrasys Samples Cavium ThunderX2 Rack Mount Server Platforms appeared first on HPCwire.

Doug Kothe on the Race to Build Exascale Applications

Mon, 05/29/2017 - 13:00

Ensuring there are applications ready to churn out useful science when the first U.S. exascale computers arrive in the 2021-2023 timeframe is Doug Kothe’s job. No pressure. He’s not alone, of course. The U.S. Exascale Computing Project (ECP) is a complicated effort with many interrelated parts and contributors, all necessary for success. Yet Kothe’s job as director of application development is one of the more visible and daunting and perhaps best described by his boss, Paul Messina, ECP director.

“We think of 50 times [current] performance on applications [as the exascale measure of merit], unfortunately there’s a kink in this,” said Messina. “The kink is people won’t be running today’s jobs in these exascale systems. We want exascale systems to do things we can’t do today and we need to figure out a way to quantify that. In some cases it will be relatively easy – just achieving much greater resolutions – but in many cases it will be enabling additional physics to more faithfully represent the phenomena. We want to focus on measuring every capable exascale system based on full applications tackling real problems compared to what they can do today.”

Doug Kothe, ECP

In this wide-ranging discussion with HPCwire, Kothe touches on ECP application development goals and processes; several technical issues, such as efforts to combine data analytics with mod/sim and the need for expanded software frameworks to accommodate exascale applications; and early thoughts on incorporating neuromorphic and quantum computing, which are not currently part of the formal ECP plan. Interestingly, his biggest worry isn’t reaching the goal on schedule – he believes the application teams will get there – but post-ECP staff retention when industry comes calling.

By way of review, ECP is a collaborative effort of two Department of Energy organizations—the Office of Science and the National Nuclear Security Administration. Six application areas have been singled out: national security, energy security, economic security, scientific discovery, earth science, and health care. In terms of app-dev, that translates into 21 Science & Energy application projects, 3 NNSA application projects, and 1 DOE/NIH application project (precision medicine for cancer).

It’s not yet clear what the just-released FY2018 U.S. budget proposed by the Trump Administration portends. Funding for science programs was cut nearly across the board, although ECP escaped. Kothe says simply, “It is the beginning of the process for the FY18 budget, and while the overall budget is determined, we will continue working on the applications that are already part of the ECP.”

In keeping with ECP’s broad ambitions, Kothe says, “All of our applications teams are focused on very specific challenge problems, and by our definition a challenge problem is one that is intractable today, needs exascale resources, and is a strategic high priority for one of the DOE program offices. We aren’t claiming we are going to solve all the problems, but what we are claiming is simulation technology that can address the problem. The point is we have the applications vectored in rather specific directions.”

RISE OF DATA ANALYTICS
One of the more exciting and new-to-HPC areas is the incorporation of data analytics into the HPC environment overall and ECP in particular. Indeed, harmonizing or at least integrating big data with modeling and simulation is a goal specified by the National Strategic Computing Initiative. Data-driven science isn’t new, nor is researcher familiarity with the underlying statistics. But the sudden rise of machine/deep learning techniques, including many that rely on lower-precision calculations, is somewhat new to the scientific computing community and an area where the commercial world has perhaps taken the lead. Kothe labels the topic “white hot.”

“Not being trained in the data analytics area I’ve been doing a lot of reading and talking [to others]. A large fraction of the area I feel like I know, but I didn’t appreciate the other 20 or 30 percent. The point is by exposing our applications teams to the data analytics community, even just calling libraries, we are going to see some interesting in situ and computational steering use cases. As an example of in situ, think of turbulence. It could be an LES (large eddy simulation) whose parameters could have been tuned a priori by machine learning or chosen on the fly by machine learning. That kind of work is already going on at some universities,” Kothe says.

Climate modeling is a case in point. “A big challenge is subgrid models for clouds. Right now and even at exascale we probably cannot do one km or less resolution everywhere. We may be able to do regional coupled simulations that way, but if we try to do five or ten kilometers everywhere – of course it will vary whether over ocean or land ice, sea ice, or atmosphere – you will still have many clouds lost in one cell. You need a subgrid model. Maybe machine learning could be used to select the parameters. Think of a bunch of little LES models running in a 10km x 10km cell holding lots of clouds that are then scaled into the higher-level physics. I think subgrid models are potentially a poster child for machine learning.”
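To make the idea concrete, here is a toy sketch of the pattern Kothe describes: a surrogate trained offline against high-resolution reference runs supplies a subgrid closure parameter where a hand-tuned constant would otherwise go. Everything below – the CellState fields, the linear “model,” the coefficients – is invented for illustration and is not ECP or climate-model code.

```cpp
// Toy sketch of ML-tuned subgrid parameters: instead of a fixed constant,
// each grid cell queries a pre-trained surrogate for its closure parameter.
// The surrogate here is a trivial linear fit standing in for a real trained
// model; all names and coefficients are hypothetical.
#include <cstdio>

struct CellState {        // resolved-scale inputs the surrogate sees
    double shear;         // local strain-rate magnitude
    double humidity;      // e.g., for a cloud subgrid model
};

struct SubgridSurrogate {
    // In practice these weights would come from offline training against
    // high-resolution (e.g., LES) reference simulations.
    double w_shear = 0.02, w_humidity = 0.15, bias = 0.1;

    double coefficient(const CellState &c) const {
        return bias + w_shear * c.shear + w_humidity * c.humidity;
    }
};

int main() {
    SubgridSurrogate model;
    CellState cell{3.5, 0.8};
    // The coarse solver would use this predicted coefficient in its closure
    // term where a hand-tuned constant would otherwise be hard-coded.
    std::printf("subgrid coefficient = %f\n", model.coefficient(cell));
    return 0;
}
```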

Steering simulations is another emerging use case. “There’s a couple of labs, Lawrence Livermore in particular, that are already using machine learning to make decisions, to automate decisions about mesh quality for fluid and structure simulations where the mesh is just flowing with the moving material and the mesh may start to contort in a way that will cause the numerical solution to break down or errors to increase. You could do quality checks on the fly and correct the mesh with machine learning.”

One interesting use is being explored as part of the Exascale CANcer Distributed Learning Environment (CANDLE) project (see HPCwire article, Enlisting Deep Learning in the War on Cancer). Part of the project is clarifying RAS (gene) network activity; the RAS network is implicated in very many cancers. “You have machine learning orchestrating ensembles of molecular dynamics simulations [looking at docking scenarios with the RAS protein] and examining factors that are involved in docking,” says Kothe. Machine learning can recognize already-known areas and reduce the need for computationally intensive simulation there, while zeroing in on lesser-known areas for intense quantum chemistry simulations. Think of it as zooming in and out as needed.

FRAMEWORKS REVISITED
Clearly there’s no shortage of challenges for ECP application development. Kothe cites optimizing node performance and memory management as among the especially thorny ones: “We now have many levels of memory exposed to us. We don’t really quite know how best to use it.” Data structure choices can also be problematic, and Kothe suggests frameworks may undergo a revival.

One of the application teams (astrophysics), recalls Kothe, came to him and said, “I am afraid to make a choice for a data structure that would be pervasive in my whole code because it might be the wrong one and I’m stuck with it.” The point, he says, is that the applications are in a kind of “going back to the future” to the late 1980s, when you saw lots of heavyweight frameworks where an application would call out to a black box and say, register this array for me and hand me back the pointer.

“That’s good and it’s bad. The bad part is you’re losing control and now you have to schlep around this black box and you don’t know if it is going to do what you want it to do. The good part is if you are on a KNL system or an NVIDIA system, you are on different nodes, and that black box memory manager would have been tuned for that hardware. [In] dealing with memory hierarchy risks, I think we are probably seeing applications move more towards frameworks, which I think is a good idea. We’ve learned what I call the big-F or little-f frameworks. I think we’re learning how to balance the two so applications can be portable and not have to rely on an army of people, but still do something that’s more agile than just choosing one data structure and hoping it works.”
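For readers unfamiliar with the pattern, here is a minimal sketch of the “register this array, hand me back the pointer” idea. The MemoryManager class and MemSpace tiers are hypothetical illustrations, not ECP code; a production framework of this genre (Kokkos is one example) would dispatch to hbw_malloc, cudaMalloc or plain malloc depending on the detected hardware.

```cpp
// Hypothetical sketch of a framework-owned memory manager: the application
// registers an array by name and gets back a pointer, while the framework
// decides which tier of the memory hierarchy (HBM, DDR, GPU device memory)
// actually backs it on a given machine.
#include <cstdlib>
#include <map>
#include <string>

enum class MemSpace { HostDDR, HighBandwidth, Device };  // assumed tiers

class MemoryManager {
public:
    // Register an array; the framework chooses placement per platform.
    double *register_array(const std::string &name, std::size_t count,
                           MemSpace hint = MemSpace::HostDDR) {
        (void)hint;  // a real manager would dispatch on the hint and the
                     // detected hardware; plain malloc stands in here.
        double *p = static_cast<double *>(std::malloc(count * sizeof(double)));
        arrays_[name] = p;
        return p;
    }
    double *lookup(const std::string &name) { return arrays_.at(name); }
    ~MemoryManager() {
        for (auto &kv : arrays_) std::free(kv.second);
    }
private:
    std::map<std::string, double *> arrays_;
};

int main() {
    MemoryManager mm;
    // The application never hard-codes a memory tier or layout...
    double *density = mm.register_array("density", 1 << 20,
                                        MemSpace::HighBandwidth);
    density[0] = 1.0;  // ...but uses the returned pointer as usual.
    return 0;
}
```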

Performance portability is naturally a major consideration. Historically, says Kothe, application developers (he includes himself in the category) have favored portability: “We chose portability over performance because we want to make sure our science can be done anywhere. Performance can’t be an afterthought but it often is. Portability in my mind has several dimensions. So the new system shows up and it is probably not something out of left field, you know something about it, but what’s a reasonable amount of effort that you think should be required to port your code? How much of the code base do you think should change? What is correctness in terms of the problem and getting the answer?

“I would claim that a 64-bit comparison is probably not realistic. I mean it’s probably not even appropriate. What set of problems would you run? You need to run real problems. We’re asking each app team to define what they think portability means and hope that collectively we’ll move towards a good definition and a good target for all the apps, but I think it will end up being fairly app-specific.”

THE CO-DESIGN IMPERATIVE
The necessity of co-design has become a given throughout HPC as well as within the ECP. Advancing hardware and new system architectures must be taken into account not merely to push application performance but to get applications to run at all. However, coupling software too tightly to a specific machine or architecture is limiting. Currently ECP has established six co-design centers to help deal with specific challenges. Kothe believes the use of motifs may help.

“Every application team at some level will be doing some vertically integrated co-design and there is probably more software co-design going on – the interplay with the compilers and runtime systems and that kind of thing – than anything else. By having the co-design centers identify a small number of motifs that applications are using, I think we can leverage a deep dive co-design on the motifs as opposed to doing kind of an extensive co-design vertically integrated within every application. This is new and there are some risks. But long term, my dream would be we [develop] community libraries that are co-designed around motifs that are used broadly among the applications.

“The poster child is probably [handling] particles. Almost every application has a discrete particle model for something, and that’s good and it’s a challenge. So how do you encapsulate the particle [model] in a way that it can be co-designed, not as a separate activity that’s not thinking about the [specific] consumer of that motif, but just thinking about making that motif rock and roll? That’s the challenge, to co-design motifs so they can be broadly used, and I have high hopes there.”
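One way to read the encapsulated-motif idea is a library interface that owns the particle storage layout and the traversal loop, so both can be retuned per architecture behind a stable interface. The sketch below is purely illustrative, written against assumed names (ParticleSet, for_each); it is not an actual ECP co-design center product.

```cpp
// Illustrative sketch of a co-designed particle motif: the application
// supplies the per-particle operation, while the library owns the storage
// layout (struct-of-arrays here) and the loop, both of which could be
// re-tuned per architecture without touching application code.
#include <cstddef>
#include <vector>

struct ParticleSet {
    // Struct-of-arrays layout, typically friendlier to vector units and
    // GPUs than an array of structs.
    std::vector<double> x, y, z;
    std::vector<double> vx, vy, vz;

    std::size_t size() const { return x.size(); }

    // The library controls the traversal (tiling, threading, offload);
    // the application only supplies the per-particle operation.
    template <class Op>
    void for_each(Op op) {
        for (std::size_t i = 0; i < size(); ++i)
            op(x[i], y[i], z[i], vx[i], vy[i], vz[i]);
    }
};

int main() {
    ParticleSet p;
    p.x.assign(1000, 0.0); p.y.assign(1000, 0.0); p.z.assign(1000, 0.0);
    p.vx.assign(1000, 1.0); p.vy.assign(1000, 0.0); p.vz.assign(1000, 0.0);

    const double dt = 0.1;
    // A push step expressed against the motif, not a concrete layout.
    p.for_each([dt](double &x, double &y, double &z,
                    double &vx, double &vy, double &vz) {
        x += vx * dt; y += vy * dt; z += vz * dt;
    });
    return 0;
}
```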

STAY ON TARGET
“A big challenge with application developers is everything sounds cool and looks good, so we want to keep them focused. Year by year the applications have laid out a number of milestones, and for the most part the milestones are a step-by-step progression towards that challenge problem. The progression has many dimensions: is the science capability improving, better physics, better algorithms; is the team utilizing the hardware efficiently, [such as] state-of-the-art test beds, the latest systems on the floor; are they integrating software technologies; and, probably one of the most important, are they using co-design efforts,” says Kothe.

One ECP-wide tool is a comprehensive project database where “all the R&D projects and applications and software technology, all their plans and milestones are in one place.” A key aspect of ECP, says Kothe, is that everyone can see what everyone else is doing and how they are progressing.

Think of a milestone as a handful of things, says Kothe, that are generally tangible, such as a software release or a demonstration simulation. “It could be a report or a presentation. It can even be a small write-up that says I tried this algorithm and it didn’t work. A milestone is a decision point.

“It’s not always a huge success. Failure can be just as valuable. Sometimes we can force a sense of urgency. We can review this seven-year plan and say, alright, you can’t bring in a technology that doesn’t have a line of sight in this timeframe, or you’ve got algorithms A and B going along [and] at this point you have to make a decision and choose one and go with it. I like that. I think it imparts a sense of urgency,” says Kothe.

Kothe, of course, has his own milestones. One is an annual application assessment report due every September.

“I am hearing I am a slave driver and I didn’t really think I had that personality,” says Kothe. One area where he is inflexible is scheduled releases. “We want you to release on the scheduled date; that date is gospel. What’s in the release may float. So the team and budget, we like to be pretty rigid, but what’s in the release floats based on what you have learned. You have this bag of tasks and try to get as many tasks done as you can, but you still must have the release.”

Currently, the comprehensive database of projects isn’t publicly available (it would make interesting reading), but Kothe says individual PIs are encouraged to share information widely.

SOFTWARE TECHNOLOGY SHARING
Not surprisingly, close collaboration with the software technology team is emphasized. “Right now we have this incredible opportunity because applications teams are exposed to a lot of software technologies they’ve never seen or heard of.” It’s a bit like kids in a candy store, says Kothe: “They are looking at this technology and saying I want to do that, to do that, to do that, and so the challenge for integration is managing the interfaces and doing it in a scalable way.”

There are a couple of technology projects that everyone wants to integrate, he says, and that’s a big bandwidth worry when you have 20-plus application projects lined up saying “let me try your stuff,” because chances are there will be new APIs and new functionalities, and bugs and features [too]. The software technology people are saying, “Doug, be careful. Let’s come up with a scalable process.” Conversely, says Kothe, it is also true there’s a fair amount of great “software technology the application teams are not exploring which they should be.”

“We have defined a number of integration milestones, which are basically milestones that require deliverables from two or three areas. We call that shared fate. [I know] it sounds like we are jumping off a cliff together. A good example is an application project looks at a linear solver and says, ‘you don’t have the functionality I need, let’s negotiate requirements.’ So the solver team negotiates a new API, a new functionality, and the application team will have a milestone that says it will have integrated and tested the new technology [by a given date], and the software technology team has to have its release, say, two or three months before. These things tend to be daisy-chained like that. You have a release, then an integration assessment, and we might have another release to basically deal with any issues.

“Right now, early on in ECP, we’re having a lot of point-to-point interaction where there are lots of apps that want to do lots of same or different things with lots of software projects. I think once we settle down on the requirements the software technologies will be kind of one-to-all, [having] settled on a base functionality and a base API. An obvious example is MPI, but even with MPI there are new features and functionalities that certain applications need. We can’t take it for granted that some of these tremendous technologies like MPI are going to be there working the way we need for exascale,” says Kothe.

ECP FUTURE WATCH
Even as ECP pushes forward, it remains rooted in CMOS technology, yet there are several newer technologies – not least neuromorphic and quantum computing – that have made great strides recently and seem on the cusp of practical application.

“One of the things I have been thinking about is, even if we don’t have access to a neuromorphic chip, what is its behavior like from a hardware simulator point of view. The same thing with quantum computing. Our mindset has to change with regard to the algorithms we lay out for neuromorphic or quantum. The applications teams need to start thinking about different types of algorithms. As Paul [Messina] has pointed out, it’s possible quantum computing could fairly soon become an accelerator on a traditional node. Making sure applications are compartmentalized is important to make that possible. It would allow us to be more flexible and extensible and perhaps exploit something like a quantum accelerator.”

Looking ahead, says Kothe, he worries most about the unknown unknowns – there will be surprises. “I feel like right now in apps space we kind of have known unknowns and we’ll hit some unknown unknowns, but I believe we are going to have a number of applications ready to go. We’ll have trips along the way and we may not do some things we plan now. I think we have an aggressive but not naive set of metrics. It’s really the people. We have some unbelievable people,” he says.

One can understand today’s attraction. Kothe points out this is likely to be a once-in-a-career opportunity, and the mix of experience among the application team members is significant. “What we see is millennials sitting at the table showing people new ways of doing software with gray-haired guys like me who have been to the school of hard knocks. There’s a tremendous cross-fertilization. I’m confident. I saw it when we selected these teams. We had teams with rosters that looked like the all-star team, but I am worried about retention. We are training people to be some of the best, especially the early-career folks, so I am worried that they will be in high demand, very marketable.”

Kothe Bio from ECP website:
Douglas B. Kothe (Doug) has over three decades of experience in conducting and leading applied R&D in computational applications designed to simulate complex physical phenomena in the energy, defense, and manufacturing sectors. Kothe is currently the Deputy Associate Laboratory Director of the Computing and Computational Sciences Directorate (CCSD) at Oak Ridge National Laboratory (ORNL). Prior positions for Kothe at ORNL, where he has been since 2006, were Director of the Consortium for Advanced Simulation of Light Water Reactors, DOE’s first Energy Innovation Hub (2010-2015), and Director of Science at the National Center for Computational Sciences (2006-2010).

Feature Caption:
The Transforming Additive Manufacturing through Exascale Simulation project (ExaAM) is building a new multi-physics modeling and simulation platform for 3D printing of metals to provide an up-front assessment of the manufacturability and performance of additively manufactured parts. Pictured: simulation of laser melting of metal powder in a 3D printing process (LLNL) and a fully functional lightweight robotic hand (ORNL).

The post Doug Kothe on the Race to Build Exascale Applications appeared first on HPCwire.

InfiniBand Delivers Best Return on Investment

Mon, 05/29/2017 - 01:01
Higher return on investment of up to 250 percent demonstrated on various high-performance computing applications; InfiniBand delivers up to 55 percent higher performance vs. Omni-Path using half the infrastructure

The latest revolution in high-performance computing (HPC) is the move to a co-design architecture — a collaborative effort among industry thought leaders, academia, and manufacturers to achieve Exascale performance by taking a holistic system-level approach to achieve fundamental performance improvements. Co-design architecture exploits system efficiency and optimizes performance by creating synergies between the hardware and the software, as well as between the different hardware elements within the data center.

Industry wide, it is recognized that the CPU has reached the limits of its scalability. This has created a need for the intelligent network to act as a “co-processor”, sharing the responsibility for handling and accelerating application workloads. By placing computation for data-related algorithms on an intelligent network, it is possible to dramatically improve data center and applications performance and to improve scalability.

The new generation of smart interconnect solutions is based on a data-centric architecture, which can offload all network functions from the CPU to the network and perform computation in transit, freeing up CPU cycles and subsequently increasing the system’s efficiency. With this new architecture, the interconnect supports the management and execution of more data algorithms within the network. This allows users to run algorithms on the data as it is being transferred within the system interconnect rather than waiting for the data to reach the CPU. Smart interconnect solutions can now deliver both In-Network Computing and In-Network Memory, representing the industry’s most advanced approach to achieving performance and scalability for high-performance cluster systems.

Mellanox hardware-based acceleration technologies such as SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) for offloading data reduction and data aggregation protocols, hardware-based MPI tag matching, and MPI rendezvous offload are just a few of the solutions that work together to offload a significant amount of inter-process communication-related computation, enabling data algorithm processing as the data moves.
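From the application’s point of view, these offloads sit beneath the standard MPI interface, so the code itself does not change; the MPI library and fabric decide whether a collective runs on the host CPUs or inside the switches. The minimal sketch below shows the kind of collective (a global reduction) that in-network aggregation such as SHARP targets; whether it is actually offloaded depends on the cluster’s MPI stack and fabric configuration, which are assumptions here rather than anything this article specifies.

```cpp
// Minimal MPI sketch: a global sum via MPI_Allreduce. On a SHARP-enabled
// fabric the reduction tree can execute inside the switches, so each host
// only sends its local contribution and receives the final result; the
// application code is identical either way.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank contributes a local partial result (here, just its rank id).
    std::vector<double> local(1024, static_cast<double>(rank));
    std::vector<double> global(1024, 0.0);

    // The collective that in-network aggregation targets: every rank ends
    // up with the element-wise sum across all ranks.
    MPI_Allreduce(local.data(), global.data(), 1024, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("global[0] = %f\n", global[0]);

    MPI_Finalize();
    return 0;
}
```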

Figure 1 – The data-centric architecture: transitioning from CPU-centric to data-centric designs to overcome latency bottlenecks

The performance and scalability advantages of Mellanox interconnect solutions over Intel’s Omni-Path based solutions have been demonstrated across various applications. Testing was conducted at different sites on production systems, comparing an InfiniBand EDR cluster to an Omni-Path connected cluster. The InfiniBand cluster includes servers with dual-socket 16-core Intel Xeon E5-2697 v4 CPUs at 2.60GHz. The Omni-Path cluster includes servers with dual-socket 18-core Intel Xeon E5-2697 v4 CPUs at 2.30GHz. Although there is a small difference between the CPU frequencies, it is still quite possible to compare the scaling performance of the two clusters. As the following two cases clearly demonstrate, InfiniBand offers dramatically higher performance and lower total cost of ownership.

Case I: NAMD

NAMD is a molecular dynamics application for chemistry and chemical biology. Figure 2 below shows test results for the standard ApoA1 benchmark of NAMD. As can be seen, a 64-node InfiniBand cluster delivered an impressive 250 percent higher performance than a 64-node Omni-Path cluster. Furthermore, when the same benchmark is run on an InfiniBand cluster with half the number of servers (32 nodes), the InfiniBand cluster still delivered 55 percent higher performance than the 64-node Omni-Path cluster.

Figure 2 – InfiniBand vs. Omni-Path Performance Comparison over NAMD

Case II: GROMACS

GROMACS is a molecular dynamics package used for simulations of proteins, lipids and nucleic acids. Figure 3 below shows test results for an industry-standard benchmark simulation of lignocellulose. As can be seen, a 128-node InfiniBand cluster delivered 136 percent higher performance than a 128-node Omni-Path cluster. Furthermore, when the same benchmark is run on an InfiniBand cluster with half the number of servers (64 nodes), the InfiniBand cluster still delivered 33 percent higher performance than the 128-node Omni-Path cluster.

Figure 3 – InfiniBand vs. Omni-Path Performance Comparison over GROMACS

Both applications require fast and efficient interprocess communication. The ability of InfiniBand to run a large portion of the MPI communication layer within the network greatly boosts the performance and scalability attainable from the HPC infrastructure. In both test cases, InfiniBand delivered higher performance than Omni-Path for the same-sized cluster job (250 percent higher in the NAMD case, and 136 percent in the GROMACS case). Of equal import, in both cases InfiniBand delivered higher performance with only half the number of servers (for NAMD, a 32-node InfiniBand cluster delivered 55 percent higher performance than a 64-node Omni-Path cluster; and for GROMACS, a 64-node InfiniBand cluster delivered 33 percent higher performance than a 128-node Omni-Path cluster).

Mellanox has more than 17 years of experience designing high-speed communication fabrics. Today, Mellanox is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. For more information, please visit: http://www.mellanox.com/solutions/hpc/.

The post InfiniBand Delivers Best Return on Investment appeared first on HPCwire.

Bio-IT World Announces 2017 Best Practices Awards Winners

Fri, 05/26/2017 - 08:54

NEEDHAM, Mass., May 26, 2017 — Bio-IT World has announced the winners of the 2017 Best Practices Awards this morning at the Bio-IT World Conference and Expo in Boston, MA. Entries from Maccabi Healthcare System, Rady Children’s Institute for Genomic Medicine, Allotrope Foundation, Earlham Institute, Biomedical Imaging Research Services Section (BIRSS), and Alexion Pharmaceuticals were honored.

Since 2003, the Bio-IT World Best Practices Awards have honored excellence in bioinformatics, basic and clinical research, and IT frameworks for biology and drug discovery. Winners were chosen in four categories this year, along with two discretionary awards.

“Looking back at the fourteen years since our first Best Practices competition, I am amazed by how far the bio-IT field has come. I continue to be inspired by the work done in our field,” said Bio-IT World Editor Allison Proffitt. “The Bio-IT World Community is increasingly open, and the partnerships and projects showcased here prove our dedication to collaborative excellence.”

Bio-IT World debuted the Best Practices Awards at the second Bio-IT World Conference & Expo in 2003, hoping to not only elevate the critical role of information technology in modern biomedical research, but also to highlight platforms and strategies that could be widely shared across the industry to improve the quality, pace, and reach of science. In the years since, hundreds of projects have been entered in the annual competition, and over 80 prizes have been given out to the most outstanding entries.

This year, a panel of eleven invited expert judges joined the Bio-IT World editors in reviewing detailed submissions from pharmaceutical companies, academic centers, government agencies, and technology providers.

The awards ceremony was held at the Seaport World Trade Center in Boston, where the winning teams received their prizes from Proffitt, veteran judge Chris Dwan, and Philips Kuhl, president of conference organizer Cambridge Healthtech Institute.

2017 Bio-IT World Best Practices Award Winners:

Clinical IT & Precision Medicine: Maccabi Healthcare System nominated by Medial EarlySign

Identifying High-Risk, Under-the-Radar Patients

In October 2015, Maccabi Healthcare System joined forces with Medial EarlySign to implement advanced AI and machine learning algorithms to uncover the “hidden” signals within electronic medical records (EMRs) and identify unscreened individuals at high risk of harboring colorectal cancer. The system used existing EMR data only, including routine blood counts.

ColonFlag evaluated nearly 80,000 outpatient blood count test results collected over one year, and flagged 690 individuals (approximately 1%) as the highest-risk population for further evaluation. Of those, 220 colonoscopies were performed, of which 42% had findings, including 20 cancers (10%).

Informatics: Rady Children’s Institute for Genomic Medicine nominated by Edico Genome

Precision medicine for newborns by 26-hour Whole Genome Sequencing

Genetic diseases, of which there are more than 5,000, are the leading cause of death in infants, especially in Neonatal Intensive Care Units (NICUs) and Pediatric Intensive Care Units (PICUs). The gateway to precision medicine and improved outcomes in NICUs/PICUs is a rapid genetic diagnosis. Diagnosis by standard methods, including whole genome sequencing (WGS), is too slow to guide NICU/PICU management. Edico Genome, Rady Children’s Institute for Genomic Medicine, and Illumina have developed scalable infrastructure to enable widespread deployment of ultra-rapid diagnosis of genetic diseases in NICUs and PICUs. First described in “A 26-hour system of highly sensitive WGS for emergency management of genetic diseases” in September 2015, the infrastructure has since been improved and implemented at Rady Children’s Hospital (RCH). Among the first 48 RCH infants tested, 23 received diagnoses and 16 had a substantial change in NICU/PICU treatment. We are currently equipping other children’s hospitals to emulate these results.

Knowledge Management: Allotrope Foundation

The Allotrope Framework: A holistic set of capabilities to improve data access, interoperability and integrity through standardization, and enable data-driven innovation

The Allotrope Framework comprises a technique-, vendor-, and platform-independent file format for data and contextual metadata (with class libraries to ensure consistent implementation); taxonomies and ontologies, an extensible basis of a controlled vocabulary to unambiguously describe and structure metadata; and data models that describe the structure of the data.

Member companies, collaborating with vendor partners, have begun to demonstrate how the Framework enables cross-platform data transfer; facilitates finding, accessing and sharing data; and enables increased automation in laboratory data flow with a reduced need for error-prone manual input. The first production release has been available to members and partners since Q4 2015, and phased public releases of the framework components will become available beginning in mid-2017.

IT infrastructure/HPC: Earlham Institute

Improving Global Food Security and Sustainability By Applying High-Performance Computing To Unlock The Complex Bread Wheat Genome

One of the most important global challenges facing humanity will be the obligation to feed a world population of approximately nine billion people by 2050. Wheat is grown on the largest area of land of any crop, at over 225 million hectares, and over two billion people worldwide depend on this crop as their daily staple diet. Unfortunately, the six primary crop species see up to 40% loss in yield due to plant disease. Furthermore, a changing climate, increased degradation of arable land, reduction in biodiversity through rainforest destruction, and rising sea levels all contribute to declining crop yields, which greatly undermines global food security and sustainability. A solution to this grand challenge is to unlock the complex genomics of important crops, such as bread wheat, to identify the genes that underlie resistance to disease and environmental factors. One of the toughest crops to tackle, bread wheat has a hugely complex genome five times bigger than the human genome, with 17 billion base pairs of DNA. By exploiting leading-edge HPC infrastructure deployed at the Earlham Institute (EI), scientists have now assembled the genomic blueprint of the bread wheat genome for the very first time. By analyzing this wheat assembly, breeders worldwide can now begin to explore new variations of wheat that exhibit the very traits that will help improve its durability in the face of dogged disease and climate change.

Judges’ Choice: Biomedical Imaging Research Services Section (BIRSS) nominated by SRA International

Biomedical Research Informatics Computing System (BRICS)

The Biomedical Research Informatics Computing System (BRICS) is a dynamic, expanding, and easily reproducible informatics ecosystem developed to create secure, centralized biomedical databases that support research efforts to accelerate scientific discovery by aggregating and sharing data using web-based clinical report form generators and a data dictionary of clinical data elements. Effective sharing of data is a fundamental attribute of this new era of data informatics. Such informatics advances create both technical and political challenges to efficiently and effectively using biomedical resources. Designed to be initially unbranded and not associated with a particular disease, BRICS has so far been used to support multiple neurobiological studies, including the Federal Interagency Traumatic Brain Injury Research (FITBIR) program, the Parkinson’s Disease Biomarkers Program (PDBP), and the National Ophthalmic Disease Genotyping and Phenotyping Network (eyeGENE). Supporting the storage of phenotypic, imaging, neuropathological, and genomics data, the BRICS instances currently hold more than 31,500 subjects.

Editor’s Choice: Alexion Pharmaceuticals nominated by EPAM Systems

Alexion Insight Engine

The Alexion Insight (AI) Engine is a decision support system that provides senior executives and corporate planning staff with answers to business and scientific questions across a landscape of approximately 9,000 rare diseases. The AI Engine filters and sorts across key criteria such as prevalence, clinical trials, severity, and onset to prioritize, in real time, diseases of interest for targets, line extensions, and business development activity. Over a period of two years, Alexion worked with EPAM to develop the AI Engine. The system integrates data from several external data sources into a cloud-based, Semantic Web database. Gaps and errors in publicly available data were filled and corrected by a team of expert curators. The engine supports an interactive, web-based interface presenting the rare disease landscape. The AI Engine has reduced the time required to produce recommendations to senior management on promising disease candidates from a few months to mere minutes.

About Bio-IT World (www.Bio-ITWorld.com)

Part of Healthtech Publishing, Bio-IT World provides outstanding coverage of cutting-edge trends and technologies that impact the management and analysis of life sciences data, including next-generation sequencing, drug discovery, predictive and systems biology, informatics tools, clinical trials, and personalized medicine. Through a variety of sources, including Bio-ITWorld.com, the Weekly Update newsletter, and the Bio-IT World News Bulletins, Bio-IT World is a leading source of news and opinion on technology and strategic innovation in the life sciences, including drug discovery and development.

About Cambridge Healthtech Institute (www.healthtech.com)

Cambridge Healthtech Institute (CHI), a division of Cambridge Innovation Institute, is the preeminent life science network for leading researchers and business experts from top pharmaceutical, biotech, CRO, academic, and niche service organizations. CHI is renowned for its vast conference portfolio held worldwide, including PepTalk, Molecular Medicine Tri-Conference, SCOPE Summit, Bio-IT World Conference & Expo, PEGS Summit, Drug Discovery Chemistry, Biomarker World Congress, World Preclinical Congress, Next Generation Dx Summit and Discovery on Target. CHI’s portfolio of products includes Cambridge Healthtech Institute Conferences, Barnett International, Insight Pharma Reports, Cambridge Marketing Consultants, Cambridge Meeting Planners, Knowledge Foundation, Bio-IT World, Clinical Informatics News and Diagnostics World.

Source: Bio-IT World

The post Bio-IT World Announces 2017 Best Practices Awards Winners appeared first on HPCwire.

PRACEdays Reflects Europe’s HPC Commitment

Thu, 05/25/2017 - 20:39

More than 250 attendees and participants came together for PRACEdays17 in Barcelona last week, part of the European HPC Summit Week 2017, held May 15-19 at the Polytechnic University of Catalonia. The program was packed with high-level international keynote speakers covering the European HPC strategy and science and industrial achievements in HPC. A diverse mix of engaging sessions showcased the latest advances across the array of computational sciences within academia and industry.

What began as mainly an internal PRACE conference now boasts an impressive scientific program. Chair of the PRACE Scientific Steering Committee Erik Lindahl is one of the people responsible for the program’s growth and success. At PRACEdays, HPCwire spoke with the Stockholm University biophysics professor (and GROMACS project lead) about the goals of PRACE, the evolution of PRACEdays, and the latest bioscience and computing trends. So much interesting ground was covered that we’re presenting the interview in two parts, with part one focusing on PRACE activities and part two showcasing Lindahl’s research interests and his perspective on where HPC is heading with regard to artificial intelligence and mixed-precision arithmetic.

HPCwire: Tell us about your role as Chair of the PRACE Scientific Steering Committee.

Erik Lindahl

Erik Lindahl: The scientific steering committee is really the scientific oversight body, and our job is to do the scientific prioritization in PRACE. The reason I have engaged in PRACE was very much based on creating a European network of science and making sure that rather than being happy just competing in Sweden – Sweden is a nice country but it’s a very small part of Europe – what I really love about PRACE is we are getting researchers throughout Europe to have a common community of computing. But this is a more important goal of PRACE than we think. Machines are nice, but machines come and go, and four years later we’ve used that money; building this network of human infrastructure, that is something that is lasting.

HPCwire: How is PRACEdays helping accomplish that goal?

Lindahl: We have all of these Centers of Excellence that we are bringing together here, so Europe has now eight Centers of Excellence that provide joint training, tutorials, and tools to improve application performance. These are very young; they’ve been around for roughly 18 months, so right now we don’t have all students going to PRACEdays; we can’t handle a conference that large. But we have all these Centers of Excellence, and the various organizations and EU projects get together, and then they in turn go out and spread the knowledge in their networks. In a couple of years we might very well have a PRACEdays that’s 500 people, and then I hope we have all the students here. From the start this was mostly a PRACE-internal conference, and the part that I’m very happy about is that we are increasing the scientific content, and that’s what it’s going to take for the scientists to come.

HPCwire: PRACEdays is the central event of the European HPC Summit Week 2017, now in its second year.

Lindahl: That’s something also I’m very happy with to see it co-organized. It comes back to the same thing; Europe has a very strong computational landscape, but we sometimes forget that because we don’t collaborate enough.

HPCwire: What is the mission of PRACE?

Lindahl: The important thing with PRACE, not just PRACEdays but PRACE as a whole project, is that we are really establishing a European organization for computing and this is partly more of a challenge in Europe because in contrast with the U.S., while you have your 50 states, it is clear that it is one country, one grant organization sponsoring computing. The challenge for Europe has of course been, I would argue, that the national organizations of Europe are far stronger than the states in the US, but of course on the equivalent of the federal level, the European Union, the system has historically been much weaker so that what PRACE has established is that we finally have an organization that is not just providing computing cycles on the European arena, but also helping establish what is the vision for computing and how should we – not just Europe as a region push computing – but how should scientists in Europe push computing and what are the really big grand challenges that people should start approaching. And the challenge here is that no matter how good individual groups are, these problems are really hard just as you are seeing in the states – as nice as California is, if California tried to go it alone they would find it pretty difficult to compete with China and Japan.

HPCwire: How does PRACE serve European researchers?

Lindahl: The main role of PRACE is to provision resources and PRACE makes it possible for researchers to get what we call tier 0 resources for the very largest problems, the problems that are so large that it gets difficult to allocate them in a single country, and in particular most of these national systems tend to have, I wouldn’t say conservative programs, but kind of continuous allocations. What PRACE tries to push is these really grand challenge ideas: risky research; it’s perfectly okay to fail. You can spend one hundred million core hours to possibly solve a really difficult problem. I think in large part we are starting to achieve that. As always, of course, scientists want more resources. I’m very happy with the way that PRACE 2 has gotten countries to sign on and significantly increase the resources compared with what we had a few years ago.

The other part that I personally really like about PRACE is the software values, and part of it of course has to do with establishing a vision and making sure there is really good education, because all of these students, no matter how good our universities are, whether people are sitting in Stockholm, Barcelona or Frankfurt, there might be only a handful of students in their area. PRACE makes it possible to provide training at a much more advanced level than we normally can in our national systems. Cost-wise it is not as large a part of the budget, but when it comes to competing and [facilitating] advanced computing, it is probably just as important as buying these machines.

The third part of this has to do with our researchers, and this is where my role comes in as chair of the scientific steering committee. Researchers, we are a bit of a split personality. On the one hand we don’t like to apply for resources; writing research grants takes time away from the research you would like to be doing. On the other hand, a very important factor of having to compete for resources is that when we are writing these grant applications, that’s also when we need to formulate our ideas – that’s when I need to be better than I was two or three years ago. Can I identify the really important problems to solve here, what I would like to do the next few years? I think here, surprisingly, lies a danger in our national systems, in particular the ones that are fairly generously funded, because in a generously funded system you become complacent and you are kind of used to getting your resources. What I like with PRACE is you get a challenge: what if you had a factor of ten more resources than you do now? But you can’t just say that you would like to have it; you need to have a really good idea to get that, and it starts to challenge our best researchers, who in essence compete against each other in Europe and become better than they were last year, and I think that’s a very important driving factor for science.

HPCwire: What is the vision for the PRACEdays conference?

Lindahl: PRACEdays is fairly young as a conference and we are still trying to help it find its form. It’s not really an industry conference in the sense of having vendors here – there are other great venues, both ISC and Supercomputing, and we see no point in trying to compete with them – but we are increasingly trying to make PRACEdays the venue where the scientists meet. Not necessarily by discipline, because as a biophysicist I tend to go to a biophysical society meeting, but there are lots of people working on computational aspects that are interdisciplinary, or who might very well be using similar types of molecular simulation models in, say, materials science. [At PRACEdays] we really focus on computational techniques, and we get to see what people are doing in other domains. We are going to start having computers with one million processors, and as scientists it’s very easy to settle for becoming incrementally better – we all do that all the time; my code scales better this year than it did last year – but we have colleagues who already scale to a quarter million processors. That’s a challenge: we need to become 100 times better than we are, which is of course difficult, but if we don’t even think about it, we never start to do the work. I like these challenges because I’m seeing what people can do in other areas, which I don’t get at my disciplinary conferences.

PRACEdays is also a venue where we get to meet all the different groups – the Centers of Excellence that the European Commission has started to fund – so I think all of this is part of a budding computational infrastructure that is really shared in Europe. It’s certainly not without friction; if there weren’t any friction, it would be because we weren’t approaching hard problems. But I think things are really moving in the right direction, and we are starting to establish a scheme where, if you are like me, if you are a biophysicist, you should not just go to your national organization; the best help, the best resources and the best training are at the European [level] today, and that I’m very happy with.

HPCwire: It’s the second year for the European HPC Summit.

Lindahl: That’s something I’m also very happy to see co-organized. It comes back to the same thing: Europe has a very strong computational landscape, but we sometimes forget that because we don’t collaborate enough.

HPCwire: Is it fair to think of PRACE as parallel to XSEDE in the US?

Lindahl: Yes and no; they have slightly different roles. PRACE works very closely with XSEDE, we are doing wonderful things together in training, and we’re very happy to have them there. When it comes to the provisioning of resources, though, PRACE is more similar to the INCITE program, and this is intentional.

I think XSEDE does a wonderful thing in the US. The main thing XSEDE managed to change in the US was to put the focus on the users – not just on buying sexier machines, how many boxes you have or how many FLOPS you have, but on what you are really doing for science and what the scientist needs. That was sorely needed, not just in the US but throughout the world.

This development has happened in Europe too, but the challenge in Europe is that we have lots of countries with very strong existing organizations. If PRACE went in and started to take over the normal computing, I think you would suddenly alienate all these national organizations that PRACE still very much depends on having good relations with. That’s also why we’ve said that PRACE will engage at all these levels when it comes to training and organization.

We have what we call a Tier 1 program, where it’s possible for researchers to get access to a large resource – say, Knights Landing. A researcher anywhere in Europe who needs access to a special computer that’s not available in their own country can get access to it through these collaborative programs.

Then PRACE itself provides hardware access through a program that’s much more similar to INCITE: the very largest allocations, the ones that are really too large for any of the national systems. I think overall that works well, because at this level most countries see it as a complement to, rather than competition with, their existing organizations.

HPCwire: The theme of this year’s PRACEdays is “HPC for Innovation: When Science Meets Industry.” Science and industry sometimes have split incentives. How much involvement should science have with industry, and what’s your perspective on how public-private partnerships and similar arrangements should work?

Lindahl: This is a difficult question, and it comes down to the question of what HPC is. The traditional view that we’ve taken, particularly in academia, is to focus on all of these very high-end machines – whether it’s a petaflop, exaflop or yottaflop, the very extreme moonshot programs. That is of course important to large fields of science; actually, I would say the reason academia stresses this is that academia’s role is to push the boundaries, and industry normally shouldn’t be at the boundary, with a couple of exceptions today.

I think the joint role we have in both academia and industry is understanding this whole spectrum of approaches. Scientists might be thinking of running MPI over millions of processors, but the very same techniques – improving scaling, making computers work faster – are used in machine learning too. In machine learning you might only run over four nodes, but those users are just as interested in making the run faster; it’s just that the problems they apply the techniques to might be slightly different. A minimal sketch of the shared primitive follows below.
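As a minimal sketch of the shared technique Lindahl alludes to – an illustrative example, not his code – the same collective call sums contributions whether it averages gradients across four machine-learning nodes or reduces partial results across thousands of HPC ranks:

    /* Illustrative MPI program: each rank contributes a local value
       (e.g., a partial gradient or a partial force sum) and one
       collective call delivers the global sum to every rank. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank's local contribution; here just its rank number. */
        double local = (double)rank, global = 0.0;

        /* The scaling work Lindahl describes is largely about making
           exactly this kind of collective fast at very large counts. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, global);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with, say, mpirun -np 4, the program behaves identically at four ranks or four thousand; only the cost of the collective changes.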

The other part that I think has changed completely in the last few years is this whole approach to artificial intelligence and machine learning, which is now so extremely dependent on floating-point performance in general. What we today call graphics processors, accelerators – they are now everywhere; it’s probably just a matter of time before you have a petaflop in your car. And it was less than ten years ago that a petaflop was the sexiest machine we had in the world. At that level, even in your car, you are going to run parallel computations over maybe 20,000 cores – when I was a student, we didn’t dream of that level of parallelism. Somewhere there, I think, you are going to run on different machines, because you wouldn’t buy a car if it cost you a billion dollars. The goals and the applications are different, but the fundamental problems we work on are absolutely the same.

That was a bit of a detour, but when it comes to public-private partnerships and the challenges here: there are certainly lots of areas where we are all starting to use commodity technology – accelerators might very well be one of them – so by the time industry has caught on, by the time there is a market, we can just go out and procure things on the open market. But there are of course other areas where we are not quite sure where we are going to end up yet. Industry might not yet be at the point where it can turn this into a product, and if we’re talking about chip development or networking technology, these things can also be very expensive. I certainly see a role for some of these projects where we may very well have to engage together, because there is no way an academic lab can develop a competitive microprocessor – we simply don’t have those resources – and, on the other hand, there is no way a company would do it, because they are afraid they can’t market it and get their money back. So at some point, starting to collaborate on this is not just okay; I think we have to do it.

The difficult part is that we have to steer very carefully along this balance. This can’t turn into industry subsidies, and similarly it can’t turn into industry subsidizing academia, because then it’s pointless. It’s a very difficult problem, but I don’t think we have any choice; we have to collaborate. If you look at machine learning nowadays – not just the most advanced hardware technology but in many cases even the software – suddenly we have commercial companies hiring academics, not because they are tier 2, but because they are the very best academics. So in artificial intelligence, some of the best research environments are actually in industry, not in academia. I think it’s a new world, but one we will gradually have to adapt to.

Stay tuned for part two, where Dr. Lindahl highlights his research passions and champions a promising future for HPC-AI synergies. We also couldn’t pass up the opportunity to ask about his pioneering work retooling the molecular dynamics code GROMACS to take advantage of single-precision arithmetic. It’s a fascinating story that takes on new relevance as AI algorithms push hardware vendors to optimize for single and even half precision instructions.
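By way of background – and only as a toy illustration, not GROMACS code – the precision question comes down to how quickly rounding error accumulates. This C sketch shows the kind of drift single precision introduces, which a code retooled for single precision must structure its arithmetic to tolerate:

    /* Toy illustration: accumulating many small float terms loses digits
       that a double accumulator retains. Single-precision codes arrange
       their arithmetic so this drift stays harmless. */
    #include <stdio.h>

    int main(void) {
        float  fsum = 0.0f;
        double dsum = 0.0;

        /* Ten million small increments, as an inner loop might add. */
        for (int i = 0; i < 10000000; i++) {
            fsum += 0.1f;
            dsum += 0.1;
        }

        printf("float : %.1f\n", fsum);  /* drifts visibly from 1000000 */
        printf("double: %.1f\n", dsum);  /* stays very close to 1000000 */
        return 0;
    }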

The post PRACEdays Reflects Europe’s HPC Commitment appeared first on HPCwire.

Russian Researchers Claim First Quantum-Safe Blockchain

Thu, 05/25/2017 - 14:08

The Russian Quantum Center today announced it has overcome the threat that quantum computers pose to cryptography by creating the first quantum-safe blockchain, securing cryptocurrencies like Bitcoin, along with classified government communications and other sensitive digital transfers.

The center said the technology has been successfully tested by one of Russia’s largest banks, Gazprombank, and that the center is now working to extend the capability to other Russian and international financial services organizations.

The announcement was greeted with a wait-and-see attitude by industry observers, including HPC analyst Steve Conway, of Hyperion (formerly IDC), who noted that, given the complexity of the use case, neither the press release nor the white paper issued by the Russian Quantum Center provided enough technical detail to validate its announcement.

“As far as the use case goes,” Conway said, “it’s pretty universally acknowledged that one of the key early uses for quantum computing is going to be for cyber defense, so that’s no surprise. Efforts like that are underway around the world. It’s difficult to assess this one in comparison with any other without having any technical details about what they’re doing.”

Addison Snell, CEO of Intersect 360 Research, said, “It is still early in the development of quantum computing and difficult to compare the efficacy of the Russians’ approach versus efforts we have seen from companies like D-Wave and IBM. The most important point is that Russia, which already has capable supercomputing vendors, such as RSC and T-Platforms, is now part of the quantum computing discussion as well.”

The Russian Quantum Center said it secures the blockchain by combining quantum key distribution (QKD) with post-quantum cryptography, making it, in the center’s words, essentially “un-hackable.” The technology creates special blocks that are signed by quantum keys rather than traditional digital signatures, with the quantum keys generated by a QKD network.
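The center has not published implementation details, but the general idea can be sketched: QKD yields a shared symmetric key between two parties, so a block can be authenticated with a keyed message authentication code instead of a public-key signature. The C sketch below uses OpenSSL’s HMAC purely as a stand-in – the key, the block contents and the MAC choice are illustrative assumptions, and a scheme matching QKD’s guarantees would use an information-theoretically secure MAC rather than HMAC:

    /* Illustrative only: authenticate a block with a QKD-derived symmetric
       key via a MAC instead of a public-key digital signature.
       Build with: cc demo.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    int main(void) {
        /* Hypothetical 256-bit key; in a real deployment this secret
           would come from the QKD link, never hard-coded. */
        unsigned char qkd_key[32] = {0};

        /* Toy block contents standing in for real block fields. */
        const char *block = "prev_hash|transactions|timestamp";

        unsigned char tag[EVP_MAX_MD_SIZE];
        unsigned int tag_len = 0;

        /* HMAC-SHA256 as a familiar stand-in for the quantum "signature". */
        HMAC(EVP_sha256(), qkd_key, sizeof qkd_key,
             (const unsigned char *)block, strlen(block), tag, &tag_len);

        for (unsigned int i = 0; i < tag_len; i++)
            printf("%02x", tag[i]);
        printf("\n");
        return 0;
    }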

QKD networks have become increasingly common around the world, particularly in the financial sector. China, Europe and the United States have existing QKD networks used for smart contracts, financial transactions and classified information.

Quantum computing holds the promise of performance exponentially beyond that of today’s computers, but its commercial realization remains years away. It is also seen as a major threat in the hands of hackers.

Google appears to be at the forefront of this work – the company’s quantum-AI team has set for itself the goal of making a quantum annealer with 100 qubits by the end of this year. A qubit, or quantum bit, is the quantum computing equivalent of the classical bit. Conway pointed out that the Russian Quantum Center’s claims would require sophisticated quantum computing capabilities.

“It’s interesting because the challenges with creating a quantum computer increase dramatically with the number of qubits,” said Conway. “It’s a whole lot easier to do something with a couple of qubits than it is with hundreds or thousands of qubits. But in fact if you want to get serious about this you have to get to the thousands of qubits… I’d be surprised if this were in the thousands of qubits range, which is what you’d really need for serious cybersecurity.”
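To make the qubit arithmetic behind Conway’s point concrete – this is the standard textbook statement, not anything specific to the Russian system – a qubit is a superposition of the two classical states, and a register of n qubits spans 2^n amplitudes, which is why both the difficulty and the power grow so dramatically with qubit count:

    % A qubit is a unit vector in a two-dimensional complex Hilbert space.
    \[
      \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
      \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
    \]
    % An n-qubit register is a superposition over 2^n basis states:
    \[
      \lvert \Psi \rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\, \lvert x \rangle,
      \qquad \text{with } 2^{n} \text{ complex amplitudes } c_{x}.
    \]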

The post Russian Researchers Claim First Quantum-Safe Blockchain appeared first on HPCwire.

OpenMP ARB Appoints Duncan Poole of NVIDIA and Kathryn O’Brien of IBM to its Board of Directors

Thu, 05/25/2017 - 11:59

AUSTIN, Texas, May 25, 2017 — The OpenMP ARB, a group of leading hardware and software vendors and research organizations which creates the OpenMP standard parallel programming specification, has appointed Duncan Poole and Kathryn O’Brien to its Board of Directors. They bring a wealth of experience to the OpenMP ARB.

Duncan Poole is director of platform alliances for NVIDIA’s Accelerated Computing Division. He is responsible for driving partnerships where engineering interfaces are adopted by external parties who are building tools for accelerated computing. Duncan is also the president of OpenACC, and responsible for NVIDIA’s membership of OpenMP. His goal is to encourage the adoption of accelerators by developers who want good performance and portability of their accelerated code.

Kathryn O’Brien is a Principal Research Staff Member at IBM T.J. Watson Research Center, where she has worked for over 25 years. She managed the compiler team that implemented OpenMP on the CELL heterogeneous architecture. Since that time she has been heavily engaged in the adoption of OpenMP across a range of product and research compiler efforts. Over the last 8 years she has been part of the leadership team driving IBM Research’s Exascale program, where her focus has been on the evolution and development of the broader software programming and tools environment.

“Duncan and Kathryn bring us great experience,” says Partha Tirumalai, Chairman of the OpenMP Board of Directors. “We are very pleased to have them join the OpenMP board.”

In addition to Duncan and Kathryn, the board of directors of the OpenMP ARB consists of Partha Tirumalai of Oracle, Sanjiv Shah of Intel, and Josh Simons of VMware.

About OpenMP

The OpenMP ARB has a mission to standardize directive-based multi-language high-level parallelism that is performant, productive and portable. Jointly defined by a group of major computer hardware vendors, software vendors, and researchers, the OpenMP API is a portable, scalable model that gives parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from embedded systems and accelerator devices to multicore systems and large-scale shared-memory machines. The OpenMP ARB owns the OpenMP brand, oversees the OpenMP specification, and produces and approves new versions of the specification. Further information can be found at http://www.openmp.org/.
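As a minimal sketch of what that directive-based model looks like in practice (an illustrative example, not taken from the specification itself), a single pragma is enough to parallelize a loop in C:

    /* Minimal OpenMP example: one directive parallelizes the loop, and the
       reduction clause gives each thread a private partial sum that is
       combined at the end of the parallel region. */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1.0);

        printf("harmonic(%d) = %f (up to %d threads)\n",
               n, sum, omp_get_max_threads());
        return 0;
    }

Compiled with a flag such as gcc -fopenmp, the same source runs serially or across all available cores, which is the portability the ARB’s mission statement emphasizes.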

Source: OpenMP

The post OpenMP ARB Appoints Duncan Poole of NVIDIA and Kathryn O’Brien of IBM to its Board of Directors appeared first on HPCwire.
