HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nor-Tech Demo HPC Cluster Leveraged by Cal Tech to Test Intel’s Xeon-Phi x200 KNL Processor

Tue, 01/24/2017 - 06:45

Jan. 24 — Nor-Tech’s leading-edge demo cluster is proving instrumental to Cal Tech’s decision to upgrade its Nor-Tech cluster with user-friendly bootable Intel KNL.

Currently, Cal Tech researchers are testing their code on this newest Intel processor, which is integrated into Nor-Tech’s demo cluster. The demo cluster is a no-cost, no-strings opportunity for current and prospective clients to test-drive simulation applications on a cutting-edge Nor-Tech HPC cluster with Intel KNL and other high-demand platforms already installed and configured. Users can also integrate their existing platforms into the demo cluster.

Nor-Tech President and CEO David Bollig said, “Our clients trust us to take all of the question marks out of buying a new cluster or upgrading what they already have. We understand that there is a lot more than money on the line for them. They want assurance that the platforms they are currently using and any that they are considering will run seamlessly and end up saving them time. The demo cluster provides that assurance.”

Bollig continued, “When it comes to running simulations, we know that reducing time-to-results is critical. To compete in the research environment, everything needs to continually be better and faster—that’s what Nor-Tech clusters integrated with best fit software are able to deliver. In the case of Cal Tech, they have already had a number of headline-grabbing breakthroughs using the cluster they purchased from us.”

Nor-Tech’s HPC clusters are backed by the company’s easy-to-deploy pledge, no-wait-time support guarantee, and a team of HPC cluster experts who have been with the company for many years. Nor-Tech, in fact, has one of the lowest employee turnover rates in the industry.

“We maintain a friendly, supportive environment where our employees feel valued,” Bollig said. “First and foremost because it’s the right thing to do, but also because we realize the importance of continuity for our clients. They trust that the engineers who built their cluster and know it like the back of their hand will be available for support well into the future.”

Integrating Nor-Tech clusters with Intel Xeon Phi processors eliminates node bottlenecks, simplifies code modernization, and builds on a power-efficient structure. The bootable Intel Xeon Phi x86 CPU host processor offers an integrated architecture for powerful, highly parallel performance that enables deeper insight, innovation, and impact for the most demanding HPC applications.

To take full advantage of the processor, an application must scale well to over 100 software threads and either make extensive use of vectors or efficiently use more local memory bandwidth than is available on an Intel Xeon processor (a brief code sketch of this kind of workload follows the list below). Key specifications include:

  •     Up to 1 teraflop double-precision performance
  •     Exceptional performance-per-watt for highly parallel workloads
  •     Single programming model for all code
  •     Flexible usage models to maximize the clients’ investment
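
As a rough illustration of the kind of workload described above, here is a minimal C sketch that uses OpenMP to spread a vectorizable loop across many threads while leaving the inner work to the compiler’s vectorizer. Function and array names are illustrative only and are not taken from Nor-Tech or Intel material.

    /* Minimal sketch: a highly parallel, vectorizable kernel of the kind that
     * scales across the many threads of a many-core processor such as KNL.
     * Compile with, e.g.: cc -O2 -fopenmp saxpy.c */
    #include <stdio.h>
    #include <stdlib.h>

    static void saxpy(float *restrict y, const float *restrict x, float a, long n)
    {
        /* Distribute iterations over all hardware threads and let the
         * compiler vectorize the loop body. */
        #pragma omp parallel for simd schedule(static)
        for (long i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const long n = 1L << 24;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;
        for (long i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(y, x, 3.0f, n);
        printf("y[0] = %f\n", y[0]);   /* expect 5.0 */
        free(x); free(y);
        return 0;
    }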

About Nor-Tech

A 2016 HPCwire award finalist, Nor-Tech is renowned throughout the scientific, academic, and business communities for easy-to-deploy turnkey clusters and expert, no-wait-time support. All of Nor-Tech’s technology is made by Nor-Tech in Minnesota and supported by Nor-Tech around the world. In addition to HPC clusters, Nor-Tech’s custom technology includes workstations, desktops, and servers for a range of applications including CAE, CFD, and FEA. Nor-Tech engineers average 20+ years of experience and are responsible for significant high performance computing innovations. The company has been in business since 1998 and is headquartered in Burnsville, Minn., just outside of Minneapolis. To contact Nor-Tech, call 952-808-1000 (toll free: 877-808-1010) or visit http://www.nor-tech.com.

Source: Nor-Tech


OpenPOWER Academic Group Carries 2016 Momentum to New Year

Mon, 01/23/2017 - 10:25

Jan. 23 — Academia has always been a leader in pushing the boundaries of science and technology, with some of the most brilliant minds in the world focused on how they can improve the tools at their disposal to solve some of the world’s most pressing challenges. That’s why, as the Leader of the OpenPOWER Academic Discussion Group, I believe working with academics in universities and research centers to develop and adopt OpenPOWER technology is key to growing the ecosystem. The group enables academics to pursue research and development on Power CPUs and systems, and that work in turn drives strong growth of the ecosystem around OpenPOWER-based systems.

2016 was an amazing year for us, as we helped launch new partnerships at academic institutions like A*CRC in Singapore, IIT Bombay in India, and more. We also assisted them in hosting OpenPOWER workshops where participants learned how OpenPOWER’s collaborative ecosystem is leading the way on a multitude of research areas. Armed with this knowledge, our members helped to spread the OpenPOWER gospel. Most recently, our members were at GTC India 2016 and SC16 to meet with fellow technology leaders and discuss the latest advances around OpenPOWER.

After joining the OpenPOWER Foundation as an academic member in October 2016, the Universidad Nacional de Córdoba in Argentina sent professors Carlos Bederián and Nicolás Wolovick to SC16 in Salt Lake City to learn more about OpenPOWER.

“The SC16 exhibition was a showcase of OpenPOWER systems, where the IBM S822LC for HPC was a remarkable piece of hardware to get to know firsthand. SC16 was also the ideal environment to discuss the balanced and powerful OpenPOWER architecture with qualified technical leaders from Penguin Computing, IBM, and others,” Wolovick explained. “Knowing the people, the hardware, and learning more about the forthcoming access to IBM S822LC for HPC are just a few of the reasons for Universidad Nacional de Córdoba’s active presence in the OpenPOWER Foundation.”

In Asia, representatives from OpenPOWER and Academic Discussion Group member IIT Bombay led discussions at NVIDIA’s GTC India to advance OpenPOWER. Their session, “Getting Started with GPU Computing”, was presented by IIT’s Professor Nataraj Paluri, who discussed the advantages OpenPOWER brings to accelerated computing through ecosystem-driven innovation.

As a result of the Academic Discussion Group’s leadership, we were honored to receive HPCwire’s Readers’ Choice Award for Best HPC Collaboration Between Academia and Industry at SC16. Such awards reaffirm OpenPOWER’s commitment to world-class systems, both those offered by IBM and those built by our OpenPOWER partners that leverage POWER’s open architecture. SASTRA University’s Dr. V.S. Shankar Sriram joined us in receiving the award, and he expounded on the benefits of joining OpenPOWER.

“Through the OpenPOWER Foundation, we are focused on projects related to human cognition and deep learning techniques for various life science applications. We have already ported applications like GROMACS onto the Power architecture. We are excited to be part of OpenPOWER, which helps our professors and researchers work as a team with shared objectives, and motivates us to achieve ambitious goals that have relevant impact we can be proud of.”

With such a successful 2016, we’re excited to carry the momentum into the new year! We’ve already got some great events planned, like:

  • CDAC national-level deep learning workshop, March 2017, Bangalore
  • ADG and OpenPOWER user group meeting, May 8–11, San Jose
  • OpenPOWER Workshop, June 22, Germany (more info: https://easychair.org/conferences/?conf=iwoph17)
  • ADG and OpenPOWER user group meeting, date TBD, Denver, USA

Want to be even more involved with the OpenPOWER Academic Discussion Group? Then join OpenPOWER as an Academic member. Your membership entitles you to the latest news, event notifications, webcasts, discussions, and more. Learn more about membership and download the Membership Kit here: https://openpowerfoundation.org/membership/how-to-join/.

Source: Ganesan Narayanasamy, Leader, OpenPOWER Academic Discussion Group, OpenPOWER Foundation


HPC Startup Advances Auto-Parallelization’s Promise

Mon, 01/23/2017 - 10:11

The shift from single core to multicore hardware has made finding parallelism in codes more important than ever, but that hasn’t made the task of parallel programming any easier. In fact, with the various programming standards, like OpenMP and OpenACC, evolving at a rapid pace, keeping up with the latest and best practices can be daunting, especially for domain scientists or others whose primary expertise is outside HPC programming. Auto-parallelization tools aim to ease this programming burden by automatically converting sequential code into parallel code. It’s one of the holy grails of computing, and while there are a handful of projects and products that support some degree of auto-parallelization, they are limited in what they can do.

HPC startup Appentra believes it has a unique approach to a classic HPC problem. The company’s Parallware technology is an LLVM-based source-to-source parallelizing compiler that assists in the parallelization of scientific codes with the OpenMP and OpenACC standards. CEO Manuel Arenaz refers to the process as guided parallelization.

Appentra was formed in 2012 by a team of researchers from the University of A Coruña in Spain under the leadership of Arenaz, a professor at the university. 2016 brought some notable recognition. Last April, Appentra was selected as a 2016 Red Herring Top 100 Europe Winner, signifying the promising nature of the technology and its market-impact potential. And in November, the startup participated in the Emerging Technologies Showcase at SC16.

We recently spoke with the company’s CEO and co-founder to learn more about the technology and commercialization plans.

“Parallelization has remained an open problem since the ’80s,” says Arenaz. “Nowadays there is still not a product that can really help users to parallelize their code — not just simple [benchmark] codes, but mini apps or fragments or snippets of code from large applications running on supercomputers. The parallelization stage [of the five-stage HPC programming workflow] is where we provide value to the HPC community, and what we do is try to make it easier to automate some parts of this manual process of converting the code from sequential to parallel.”

Appentra’s product roadmap currently includes two tools, Parallware Trainer and Parallware Assistant, with the former due out later this year. Both will be sold under a subscription software licensing model, initially targeting academic and research centers.

Parallware Trainer will be the first product to market. It is billed as an “interactive real-time desktop tool that facilitates learning, implementation, and usage of parallel programming.” The purpose of the tool is to train the user, test the environment, and provide insights on how the sequential code can be improved.

The key features of Parallware Trainer are summarized as follows:

  •  Interactive real-time editor GUI
  •  Assisted code parallelization using OpenMP & OpenACC
  •  Programming language C
  •  Detailed report of the parallelism discovered in the code
  •  Support for multiple compilers

“You can conceive the Parallware Trainer tool as Google Translator but instead of going from English to Spanish, it goes from sequential code to parallel code annotated with OpenMP or OpenACC pragmas,” says Arenaz.

“It enables learning by doing, is student-centric, and allows the student to play with more complex codes even during the training. It enables playing not only with lab codes prepared by the teacher, but also with codes students are writing at their office, so they can really begin to apply the concepts to their own codes, facilitating a smooth transition back to the office.”
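
To make the sequential-to-parallel “translation” Arenaz describes concrete, here is a hedged, hand-written illustration of the kind of annotation such a tool produces: a plain C reduction loop, followed by the same loop with standard OpenMP and OpenACC directives. The pragmas are ordinary standard directives chosen for this sketch, not actual Parallware Trainer output.

    /* Illustrative only: the sort of annotation a guided-parallelization tool
     * emits. These are standard OpenMP/OpenACC directives, not Parallware output. */
    #include <stdio.h>

    /* Sequential version. */
    double dot_seq(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    /* Same loop annotated for multicore CPUs with OpenMP; the reduction
     * clause is what keeps the parallel sum correct. */
    double dot_omp(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    /* Same loop annotated for an accelerator with OpenACC. */
    double dot_acc(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        #pragma acc parallel loop reduction(+:sum) copyin(a[0:n], b[0:n])
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    int main(void)
    {
        double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
        printf("%f %f %f\n", dot_seq(a, b, 4), dot_omp(a, b, 4), dot_acc(a, b, 4));
        return 0;
    }

The reduction clause is the detail a correctness-checking tool has to get right: without it, the concurrent updates to the sum would race and the result would be wrong.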

Appentra is finishing work on the Parallware Trainer package now and plans to release it through an early access program during Q1 of this year, with a general launch slated for Q2 or Q3.

The training tool does not give users access to the information that the technology considered in order to discover the parallelism and implement the parallelization strategy. In the learning environment, the black-box nature of the tool is warranted because having access to that information creates unnecessary complexity for someone who is learning parallel programming. Experts, of course, want full drilldown into the decisions that were made; they want full control. So in Parallware Assistant, which targets HPC developers, Appentra will provide complete details of all the analysis conducted on the program.

Appentra’s goal with both products is to move up the complexity chain from microbenchmarks to mini-apps to snippets of real applications.

“From the point of view of the Parallware Trainer tool, the level of complexity at the microbenchmark is enough for learning parallel programming, but looking at the Assistant and looking at using the Parallware Trainer even with more complex codes, like mini-apps or snippets of real applications, we need to increase the maturity of the Parallware technology,” says Arenaz.

“So we are also working toward providing a more robust implementation of the technology with the complexity of the mini-app. We are looking at inter-procedural discovery of parallelism in code that uses structs and classes, not only plain arrays. These are some of the features that are not usually present in microbenchmarks but are present in mini-apps and of course real applications.”

Appentra is developing its technology in collaboration with a number of academic partners. The startup has worked most closely with Oak Ridge National Lab (HPC researchers Fernanda Foertter and Oscar Hernandez have been instrumental in developing tools based on the company’s core tech), but also has industrial partnerships lined up at the Texas Advanced Computing Center and Lawrence Berkeley National Lab. In the European Union, the Parallware Trainer has been used as part of training courses offered at the Barcelona Supercomputing Center, which Appentra cites as another close partner.

Appentra has also been accepted as a member of OpenPower and sees an opportunity to connect with the other academic members.

“Our research has told us that universities should be interested in using this tool for teaching parallel programming – and not only computer science faculties, but also mathematicians, physicists, chemists – they also really need parallel programming but it is too complex for them,” says Arenaz. “They want to focus on their science not on the complexity of parallel programming, but they need to learn the basics of it.”

Arenaz acknowledges that there are other tools in the market that provide some level of auto-parallelization but claims they don’t offer as much functionality or value as Parallware.

“The Cray Reveal is only available on Cray systems and is very limited. It cannot guarantee correctness when it adds OpenMP pragmas for instance,” the CEO says. “It doesn’t support OpenACC. It only supports a very small subset of the pragmas of OpenMP. It doesn’t support atomic. It doesn’t support sparse computations. It has many technical limitations, apart from only being available in Cray supercomputers. And the other [familiar] one, the Intel Parallel Advisor, is mainly a tool that enables the user to add parallelism, but again it doesn’t guarantee correctness.

“In both tools, it is the user that is responsible for guaranteeing that the pragmas that are noted in the code are correct and are performant. That is something that our Parallware technology overcomes and solves. This is from the point of view of the technology itself among similar products in the market. If we focus on training from the point of view of the Parallware Trainer, there is no similar product on the market. No tool we are aware of — and we have talked with all these big labs and supercomputer centers — enables interactive HPC training as the Parallware Trainer tool does.”

One of the chief aims of Parallware technology is helping users stay current on standards from OpenMP and OpenACC. With standards evolving quickly, Appentra says this is where users will find a lot of value.

“OpenMP and OpenACC are evolving very fast. Each year they are more and more complex, because 3-4 years ago they only supported one programming paradigm, the data-parallel paradigm. But they have incorporated the tasking paradigm and the offloading paradigm to support GPUs, Xeon Phis, and any type of accelerator that may come into the market in future pre-exascale and exascale supercomputing systems. So the hardware is evolving very fast, and all of the labs and all of the vendors are working at the level of OpenMP and OpenACC to provide features in the standard that support the new features in the hardware. But this is making the standards very big and opening up many possibilities for the user to program the same piece of code in parallel, so it is opening up another range of complexity for the user.

“So we keep track of these standards, and we keep track of the features of the code, because every single source code is unique. It has some slightly different features that make it different from other very similar code, even in the same computational field. That is mainly our job on the Parallware development team: to keep track of these features, select the most important ones in the OpenMP and OpenACC standards, and connect them through the Parallware technology, through this magic component that converts sequential code to parallel. That’s our work, and that’s where the high added value of our tools derives.”
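
For readers unfamiliar with the paradigms Arenaz refers to, the sketch below shows each of them in plain, standard OpenMP: data-parallel worksharing, tasking, and device offload. It is a generic illustration that assumes an OpenMP 4.5-capable compiler; it is not Parallware output.

    /* Generic illustration of the three OpenMP styles mentioned above. */
    #include <stdio.h>

    /* Data-parallel paradigm: classic loop worksharing. */
    static void scale(float *y, const float *x, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = 2.0f * x[i];
    }

    /* Tasking paradigm: irregular, recursive parallelism. */
    static long fib(long n)
    {
        if (n < 2) return n;
        long a, b;
        #pragma omp task shared(a)
        a = fib(n - 1);
        #pragma omp task shared(b)
        b = fib(n - 2);
        #pragma omp taskwait
        return a + b;
    }

    /* Offloading paradigm: run the loop on a GPU or other accelerator. */
    static void scale_on_device(float *y, const float *x, int n)
    {
        #pragma omp target teams distribute parallel for \
                map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = 2.0f * x[i];
    }

    int main(void)
    {
        float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8];
        long f = 0;
        scale(y, x, 8);
        scale_on_device(y, x, 8);
        #pragma omp parallel
        #pragma omp single
        f = fib(20);
        printf("y[3] = %f, fib(20) = %ld\n", y[3], f);
        return 0;
    }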

Right now, Appentra is focused on adding the higher-complexity features of the OpenMP and OpenACC standards, but MPI is also in its sights.

“When we started the company, we created a prototype that automatically generated MPI code at the microbenchmark level, and it is something [we use] internally in our group. The problem that we faced is that we were claiming we were able to automatically parallelize codes using OpenMP, OpenACC and MPI, but industry labs and supercomputing facilities didn’t even believe that we could do it for OpenMP. So we decided to stay focused on the standards that are easier for us to parallelize, because as a startup company we need to focus on the low-hanging fruit.”

Here’s one example of how a Parallware implementation stacks up to hand-parallelized code from the NAS Parallel Benchmarks suite, a small set of programs designed at NASA to help evaluate the performance of parallel supercomputers. The red line shows the speedup gained with the parallel OpenMP implementation provided in the NAS parallel benchmark implementation. The green line is the OpenMP parallel implementation automatically generated by the Parallware technology. You can see that for a given number of threads the execution time is very close.

Source: Paper OpenMPCon (Sep 2015); Bench: NPB_EP

Initially, Appentra supports the C language, but Fortran is on its roadmap. Says Arenaz, “We are aware that 65-70 percent of the HPC market at this moment in big labs is Fortran code and the remaining 30-35 percent is C code. The reason why we are not supporting Fortran at this moment is a technical issue. We are based on the LLVM infrastructure, which only supports C at this time; it doesn’t support Fortran. Support for Fortran is a work in progress by PGI and NVIDIA through a contract with the national labs. We are partners of NVIDIA, so we will have early access to the first releases with Fortran support for LLVM. So we are working on Fortran support, and we expect it may be among the new features we announce at Supercomputing next year.”


Answered Prayers for High Frequency Traders? Latency Cut to 20 Nanoseconds

Mon, 01/23/2017 - 09:00

“You can buy your way out of bandwidth problems. But latency is divine.”

This sentiment, from Intel Technical Computing Group CTO Mark Seager, seems as old as the Bible, a truth universally acknowledged. Latency will always be with us. It is the devilish delay, the stubborn half-life of the time gap caused by processor and memory operations, that has bewitched computer architects, IT managers and end users since the genesis of the computer age.

Solarflare Communications is an unheralded soldier in the eternal war on latency. With its founding in 2001, Solarflare took on the daunting raison d’être of grinding down latency from one product generation to the next for the most latency-sensitive use cases, such as high frequency trading. Today, the company has more than 1,400 customers using its networking I/O software and hardware to cut the time between decision and action.

In high frequency trading, the latency gold standard is 200 nanoseconds. If you’re an equity trader using a Bloomberg Terminal or Thomson Reuters Eikon, latency of more than 200 nanoseconds is considered to be shockingly pedestrian, putting you at risk of buying or selling a stock at a higher or lower price than the one you saw quoted. Now, with its announcement of TCPDirect, Solarflare said it has cut latency by 10X, to 20-30 nanoseconds.

“To drop that to 20 nanoseconds, that’s pretty amazing,” said Will Jan, VP and lead analyst at IT consultancy Outsell.

He said most traders use Solarflare technology without knowing it, in the way we drive cars made up of parts not made by Toyota or Ford but by parts manufacturers, such as Bosch or Denso.

“They’re the backbone of a lot of server providers,” he said. “I always thought HPE, IBM and Dell…made this particular network IO component in software, but it turns out these guys are the providers. In this particular niche, when it comes to components that lower latency, these (server makers) farm it out to Solarflare. They’re happy making a lot of money in the background.”

The CTO of an equity trading firm, who agreed to talk with HPCwire‘s sister pub EnterpriseTech anonymously, said his company has been a Solarflare customer for four years and that its IT department has validated Solarflare’s claims for TCPDirect of 20-30 nanoseconds latency.

He regards Solarflare as a partner that allows his firm to focus on core competencies, rather than devoting in-house time and resources to lowering latency.

“It used to be the case that there weren’t a lot of commercial, off-the-shelf products applicable to this space,” he said. “If one of our competitors wanted to do something like this for competitive advantage, Solarflare can do it better, faster, cheaper, so they’re basically disincentivized from doing so. In a sense this is leveling the playing field in our industry, and we like that because we want to do what we’re good at, rather than spending our time working on hardware. We’re pleased when external vendors provide state-of-the-art technology that we can leverage.”

TCPDirect is a user-space, kernel-bypass application library that implements Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), the industry standards for network data exchange, over Internet Protocol (IP). It is delivered as part of Onload, Solarflare’s application acceleration middleware designed to reduce CPU utilization and increase message rates.

Solarflare’s Ahmet Houssein

The latency through any TCP/IP stack, even one written to be low-latency, is a function of the number of processor and memory operations that must be performed between the application sending/receiving and the network adapter serving it. According to Ahmet Houssein, Solarflare VP of marketing and strategic development, TCP/IP’s feature-richness and complexity mean implementation trade-offs must be made between scalability, feature support and latency. Independently of the stack implementation, going via the kernel imposes system calls, context switches and, in most cases, interrupts that increase latency.
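
For contrast with the kernel-bypass approach, here is a minimal sketch of the conventional path in plain POSIX sockets: every send() and recv() below is a system call into the kernel stack, which is exactly the per-message overhead a user-space stack is designed to remove. The host address and port are placeholders, and this deliberately does not show TCPDirect’s own proprietary API.

    /* Conventional kernel TCP client (illustrative only; placeholder host/port,
     * NOT the TCPDirect API). Each send()/recv() crosses into the kernel. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);      /* system call */
        if (fd < 0) { perror("socket"); return 1; }

        int one = 1;                                   /* disable Nagle for latency */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(9000);                      /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);   /* placeholder host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        const char order[] = "BUY 100 XYZ @ 42.00";
        send(fd, order, sizeof order - 1, 0);          /* system call per message */

        char ack[128];
        ssize_t n = recv(fd, ack, sizeof ack, 0);      /* system call, may block */
        if (n > 0) printf("received %zd bytes\n", n);

        close(fd);
        return 0;
    }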

Houssein said TCPDirect attacks this network stack overhead problem with a “slimmer” kernel bypass architecture in which the TCP/IP stack resides in the address space of the user-mode application. This approach works for high frequency trading, he said, because it’s a use-case that requires only a limited number of connections, rather than the full TCP/IP feature-set included in Onload. Designed to minimize the number of processor and memory operations between the user-space application and a Solarflare Flareon Ultra server IO adapter, TCPDirect employs a smaller data structure internally that can be cache-resident.

TCPDirect’s zero-copy proprietary API removes the need to copy packet payloads as part of sending or receiving. Each stack is self-contained, removing processor operations other than those required to get the payload between the application and the adapter.

“We run in a standard x86 server, we are an Ethernet company and we are compliant with standard infrastructure, but for those applications that require this level of performance we give them this special piece of software,” Houssein said. “(TCPDirect) runs on top of our network interface controllers within that standard equipment.”

Solarflare concedes that TCPDirect is not a perfect fit for all low-latency use-cases because its hyperfocus on latency sacrifices some of the features found in Onload, of which TCPDirect is delivered as a part. Implementing TCPDirect requires applications to be modified to take advantage of the TCPDirect API – applications used, for example, in high frequency trading where latency is a quest that knows no end.

“If your competitors are getting faster, then you have to get faster too,” said the anonymous Solarflare customer quoted above. “Honestly, we’d prefer to say, ‘We’re good, let’s stop here and focus on other things.’ But no one’s going to do that.”


GIGABYTE Selects Cavium QLogic FastLinQ Technology to Power Next-Gen Servers

Mon, 01/23/2017 - 07:24

SAN JOSE, Calif., Jan. 23 — GIGABYTE Technology, a leading vendor of high performance server hardware, today announced that it has selected Cavium’s (CAVM) QLogic FastLinQ 10GbE and 25GbE Ethernet adapters and controllers for adoption across a broad portfolio of servers and differing form factors. These controllers support a comprehensive set of virtualization, Universal RDMA and multi-tenant services, enabling GIGABYTE server solutions to deliver higher performance options for the most sophisticated data centers, telcos and cloud providers.

“GIGABYTE servers – across standard, Open Compute Platform (OCP) and rack scale form factors – deliver exceptional value, performance and scalability for multi-tenant cloud and virtualized enterprise datacenters,” said Etay Lee, GM of GIGABYTE Technology’s Server Division. “The addition of QLogic 10GbE and 25GbE FastLinQ Ethernet NICs in OCP and Standard form factors will enable delivery on all of the tenets of open standards, while enabling key virtualization technologies like SR-IOV and full offloads for overlay networks using VxLAN, NVGRE and GENEVE.”

“QLogic 10GbE/25GbE Ethernet NICs, with Universal RDMA are designed to accelerate access to networking and storage while using a general purpose network,” said Rajneesh Gaur. “Integration on GIGABYTE motherboards and servers will provide customers with an innovative solution delivering flexibility in choice of RDMA and cost savings for next generation Cloud, Web2.0 and Telco Data centers.”

Purpose built for accelerating and simplifying datacenter networking, QLogic FastLinQ Ethernet technology in GIGABYTE servers delivers:

  • Broad Spectrum of Ethernet Connectivity Speeds – 10/25GbE to host the most demanding enterprise, telco and cloud applications and deliver scalability to drive business growth.
  • Universal RDMA – Industry’s only network adapter that delivers customers technology choices and investment protection with concurrent support for RoCE, RoCEv2 and iWARP.
  • Network Virtualization Offloads – Acceleration for Network Virtualization by offloading protocol processing for VxLAN, NVGRE, GRE and GENEVE, enabling customers to build and scale virtualized networks without impacting network performance.
  • Server Virtualization – Optimize infrastructure costs and increase virtual machine density by leveraging in-built technologies like SR-IOV and NIC Partitioning (NPAR) that deliver acceleration and QoS for workloads and infrastructure traffic.
  • Network Function Virtualization (NFV) – Leading small-packet performance, up to 100GbE Ethernet connectivity, and integration with DPDK and OpenStack enable telcos and NFV application vendors to seamlessly deploy, manage and accelerate the most demanding NFV workloads.
  • OpenStack Integration – Cavium QLogic technologies, such as the plug-in for Mirantis Fuel for automatic SR-IOV configuration across multiple OpenStack nodes, QConvergeConsole-driven physical and logical topology maps, and functionality to deliver QoS for tenant workloads, are the key integration areas that enable hyperscale deployments with OpenStack.

QLogic 10/25GbE FastLinQ Ethernet NIC technology – in both Standard and OCP form factors – is planned to be available on GIGABYTE high performance server hardware starting Q1, 2017.

About GIGABYTE Technology Co., Ltd.

GIGABYTE is a leading global IT brand, with employees and business channels in almost every country. Founded in 1986, GIGABYTE started as a research and development team and has since become an industry leader in the world’s motherboard and graphics card markets. Drawing from its vast R&D and manufacturing capacities and know-how, GIGABYTE further expanded its product portfolio to include network communication devices, servers and storage, with a goal of serving each facet of the digital life in homes and offices. Every day, GIGABYTE aims to “Upgrade Your Life” with innovative technology, exceptional quality, and unmatched customer service.

About Cavium

Cavium, Inc. (CAVM), offers a broad portfolio of integrated, software compatible processors ranging in performance from 1Gbps to 100Gbps that enable secure, intelligent functionality in Enterprise, Data Center, Broadband/Consumer, Mobile and Service Provider Equipment, highly programmable switches which scale to 3.2Tbps and Ethernet and Fibre Channel adapters up to 100Gbps. Cavium processors are supported by ecosystem partners that provide operating systems, tools and application support, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, China and Taiwan.

Source: Cavium


BigDog HPC Cluster Ready for Research Use

Mon, 01/23/2017 - 07:10

Jan. 23 — Southern Illinois University Carbondale is one of just 16 institutions in the U.S. to possess a high-speed supercomputer.

SIU Information Technology’s BigDog High Performance Computing Cluster went online in October 2015, and underwent several weeks of testing and development. Now, it’s ready for more researchers to use it and push its limits.

Research coordinator Chet Langin oversees the supercomputer at SIU-C. He says it will help researchers compute their data much, much faster.

“What BigDog does is it takes those kinds of problems that might take seven days on a desktop or in a lab, and reduces it by rule of thumb typically to about seven hours. From seven days to seven hours on the high performance cluster.”

Langin says BigDog will enhance grant proposals.

“Faculty have been reaching out to me for letters of support. I have written letters of support, which I have returned to the professors and they attach that to their grant proposals, which they send off to the NSF or other funding agency, saying that if the grant is funded SIU will support the grant by providing this BigDog Supercomputer free of charge to this professor.”

The SIU-C Office of Information and Technology will offer faculty and students instruction on how to access and use BigDog.

Langin will lead sessions on January 24 and 25 at 10 a.m., and again on January 31 at 1:30 p.m., in Morris Library.

Source: Brad Palmer, WSIU


Technavio Releases New Global Supercomputer Market Research Report

Mon, 01/23/2017 - 06:45

LONDON, U.K., Jan. 23 — According to the latest market study released by Technavio, the global supercomputer market is expected to reach USD 4.95 billion by 2021, growing at a CAGR of 7%.

This research report titled ‘Global Supercomputer Market 2017-2021’ provides an in-depth analysis of the market in terms of revenue and emerging market trends. To calculate the market size, Technavio considers the installation and sale of supercomputer systems from end users such as government entities, research institutions, and commercial industries.

Supercomputers are being rapidly adopted by research and academic institutions, as well as a number of industries such as energy, oil and gas, manufacturing, and others to help improve product offerings in terms of reliability and robustness. Many vendors are also aiming to offer converged, high-performance technology solutions. This trend, which is growing significantly, will drive the market growth during the forecast period.

Technavio’s hardware and semiconductor analysts categorize the global supercomputer market into three major segments by end users. They are:

  • Commercial industries
  • Research institutions
  • Government entities

Government entities

Governments worldwide are recognizing the need for supercomputers due to the growing importance of economic competitiveness and security, which are key concern areas for a nation. They are also using such systems to develop advanced defense systems and electronic warfare tools. To compete with China for the fastest supercomputers, the US government coordinated a federal strategy in 2015 for HPC research, development, and deployment.

According to Chetan Mohan, a lead embedded systems research analyst from Technavio, “Three major government entities that collaborated for this program were the Department of Energy, the Department of Defense, and the National Science Foundation. The program would accelerate R&D activities in several fields across the government, industrial, and academic sectors.”

Research institutions

Research institutions, both government-aided and private, have been using supercomputers for the longest period. Space agencies have been using such systems to study and understand the complexities of the universe by creating simulated models based on complex calculations and assumptions. CERN, the European Organization for Nuclear Research, operates the largest particle physics laboratory in the world and undertakes extensive research in the field of particle physics such as the recent creation of a black hole environment in the lab.

Also, leading universities use supercomputers to carry out research projects in fields such as biotechnology and engineering. Data processing in genetics involves thousands of factors, and in such situations supercomputers are the best choice.

Commercial industries

The commercial industries segment is lagging behind government entities and research institutions in terms of the adoption and use of supercomputers. Initially, only large enterprises such as Tata Group had the resources to purchase supercomputing systems; the company installed its first supercomputer, Eka, in 2007. However, market players have since brought these systems within reach of SMEs by developing mid-sized and small supercomputers.

“Enterprises are also collaborating with government entities. For instance, companies in the US can make use of investments and expertise in supercomputers through programs operated by some of the nation’s NSF-funded universities and Department of Energy laboratories,” says Chetan.

The top vendors highlighted by Technavio’s research analysts in this report are:

  • Bull Atos
  • Cray
  • Dell
  • FUJITSU
  • HPE
  • IBM
  • Lenovo
  • NEC
  • SGI
  • Sugon

Become a Technavio Insights member and access all three of these reports for a fraction of their original cost. As a Technavio Insights member, you will have immediate access to new reports as they’re published, in addition to all 6,000+ existing reports covering segments like computing devices, displays, and lighting. This subscription nets you thousands in savings while keeping you connected to Technavio’s constantly transforming research library, helping you make informed business decisions more efficiently.

About Technavio

Technavio is a leading global technology research and advisory company. The company develops over 2000 pieces of research every year, covering more than 500 technologies across 80 countries. Technavio has about 300 analysts globally who specialize in customized consulting and business research assignments across the latest leading edge technologies.

Source: Technavio


CMU’s Latest “Card Shark” – Libratus – is Beating the Poker Pros (Again)

Fri, 01/20/2017 - 09:33

It’s starting to look like Carnegie Mellon University has a gambling problem – can’t stay away from the poker table. This morning CMU reports its latest Poker-playing AI software, Libratus, is winning against four of the world’s best professional poker players in a 20-day, 120,000 hand tournament – Brains vs. AI – at Rivers Casino in Pittsburgh. Maybe it’s a new way to fund graduate programs. (Just Kidding!)

One of the pros, Jimmy Chou, said he and his colleagues initially underestimated Libratus, but have come to regard it as one tough player: “The bot gets better and better every day. It’s like a tougher version of us.” Chou and three other leading players – Dong Kim, Jason Les and Daniel McAulay – specialize in this two-player, no-limit form of Texas Hold’em and are considered among the world’s top players of the game.

According to the CMU report, while the pros are fighting for humanity’s pride – and shares of a $200,000 prize purse – Carnegie Mellon researchers are hoping their computer program will establish a new benchmark for artificial intelligence by besting some of the world’s most talented players.

Libratus was developed by Tuomas Sandholm, professor of computer science, and his student, Noam Brown. “Libratus is being used in this contest to play poker, an imperfect information game that requires the AI to bluff and correctly interpret misleading information to win. Ultimately programs like Libratus also could be used to negotiate business deals, set military strategy, or plan a course of medical treatment – all cases that involve complicated decisions based on imperfect information,” according to the CMU report.

CMU, of course, has been sharpening its AI poker skills for quite some time. Back in the fall of 2016, CMU’s software Baby Tartanian8, also created by Sandholm and Brown, placed third in the bankroll instant run-off category of another computer poker tournament (see HPCwire article, CMU’s Baby Tartanian8 Pokerbot Sweeps Annual Competition).

Back then Sandholm said, “Our ‘baby’ version of Tartanian8 was scaled down to fit within the competition’s 200 gigabyte data storage limit. It also could not do sophisticated, real-time deliberation because of the competition’s processing limit. The original Tartanian8 strategy was computed in late fall by myself and Noam on the Comet supercomputer at the San Diego Supercomputer Center (SDSC).”

In the spring of 2015 CMU’s Claudico software fared well in competition (See HPCwire article, CMU’s Claudico Goes All-In Against World-Class Poker Pros). In that first Brains Vs. AI contest in 2015, four leading pros amassed more chips than the AI, called Claudico. But in the latest contest, Libratus had amassed a lead of $459,154 in chips in the 49,240 hands played by the end of Day Nine.

Here’s a link to the CMU article: http://www.cs.cmu.edu/news/cmu-ai-tough-poker-player

Atos Announces First UK Delivery of New Bull Sequana Supercomputer

Fri, 01/20/2017 - 07:14

PARIS, France, Jan. 20 — Atos, a leader in digital transformation, announces the first installation of its Bull sequana X1000 new-generation supercomputer system, in the UK at the Hartree Centre. Founded by the UK government, the Science and Technology Facilities Council (STFC) Hartree Centre is a high performance computing and data analytics research facility. The world’s most efficient supercomputer, Bull sequana, is an exascale-class computer capable of processing a billion billion operations per second while consuming 10 times less energy than current systems.

This major collaboration between Atos and the Centre focuses on various initiatives aimed at addressing the UK Government’s Industrial Strategy which encourages closer collaboration between academia and industry. It includes:

  • The launch of a new UK based High Performance Computing (HPC) as a Service Offering (HPCaaS), which enables both large and small and medium-sized enterprises (SME) to take advantage of extreme computing performance through easily accessible Cloud portals. Improving SME access to such tools encourages and supports high-tech business innovation across the UK.
  • ‘Deep Learning’ as a service (DLaaS); an emerging cognitive computing technique with broad applicability from automated voice recognition to medical imaging. The technology can be used, for example, to automatically detect anomalies in mammography scans with a higher degree of accuracy than the human eye.

The new supercomputer will allow both academic and industry organisations to use the latest technology and develop applications using the most recent advances in artificial intelligence and high performance data analytics. As such, the Bull sequana system will aid Hartree to become the ‘go-to’ place in the UK for technology evaluation, supporting the work of major companies in fields ranging from engineering and consumer goods to healthcare and pharmaceuticals.

Andy Grant, Head of Big Data and HPC, Atos UK&I, said, “We believe that our Bull supercomputing technology and our expertise will reinforce the Centre’s reputation as a world class HPC centre of excellence and as the flagship model for industry-academic collaboration.”

Alison Kennedy, Director of the Hartree Centre, said, “The Hartree Centre works at the leading edge of emerging technologies and provides substantial benefits to the many industrial and research organisations that come to us.  Our collaboration with Atos will ensure that we continue to enable businesses, large and small, to make the best use of supercomputing and Big Data to develop better products and services that will boost productivity and drive growth.”

The partnership also encompasses a joint project to develop next-generation hardware and software solutions and application optimisation services, so that commercial and academic users benefit from the Hartree systems. It is also helping promote participation in STEM careers at higher education level and beyond, particularly in the North West of the UK.

The Bull sequana will deliver approximately 3.4 petaflops when installed and is composed of Intel Xeon and many-core Xeon Phi (Knights Landing) processor technology. It has been designed to accommodate future blade systems for deep learning, GPU and ARM-based computing.

The new Bull sequana system is one of the most energy efficient general purpose supercomputers in the world and is in the TOP20 of the Green500 list of the most energy efficient computers.

About Atos

Atos SE (Societas Europaea) is a leader in digital transformation with circa 100,000 employees in 72 countries and pro forma annual revenue of circa € 12 billion. Serving a global client base, the Group is the European leader in Big Data, Cybersecurity, Digital Workplace and provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting edge technologies, digital expertise and industry knowledge, the Group supports the digital transformation of its clients across different business sectors: Defense, Financial Services, Health, Manufacturing, Media, Utilities, Public sector, Retail, Telecommunications, and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. www.atos.net

Source: Atos


IBM Reports 2016 Fourth Quarter and Full Year Financial Results

Fri, 01/20/2017 - 06:54

ARMONK, N.Y., Jan. 20 — IBM (NYSE: IBM) has announced fourth-quarter and full-year 2016 earnings results.

“In 2016, our strategic imperatives grew to represent more than 40 percent of our total revenue and we have established ourselves as the industry’s leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “IBM Watson is the world’s leading AI platform for business, and emerging solutions such as IBM Blockchain are enabling new levels of trust in transactions of every kind.  More and more clients are choosing the IBM Cloud because of its differentiated capabilities, which are helping to transform industries, such as financial services, airlines and retail.”

“In 2016, we again made substantial capital investments, increased our R&D spending and acquired 15 companies — a total of more than $15 billion across these elements.  The acquisitions further strengthened our capabilities in analytics, security, cognitive and cloud, while expanding our level of industry expertise with additions such as Truven Health Analytics and Promontory Financial Group,” said Martin Schroeter, IBM senior vice president and chief financial officer.  “At the same time, we returned almost $9 billion to shareholders through dividends and gross share repurchases.”

Strategic Imperatives

Fourth-quarter cloud revenues increased 33 percent.  The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.3 billion at year-end 2015.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 16 percent (up 17 percent adjusting for currency) and revenues from security increased 7 percent (up 8 percent adjusting for currency).

For the full year, revenues from strategic imperatives increased 13 percent (up 14 percent adjusting for currency).  Cloud revenues increased 35 percent to $13.7 billion.  The annual exit run rate for cloud as-a-service revenue increased 61 percent (up 63 percent adjusting for currency) year to year.  Revenues from analytics increased 9 percent.  Revenues from mobile increased 34 percent (up 35 percent adjusting for currency) and from security increased 13 percent (up 14 percent adjusting for currency).

Full-Year 2017 Expectations

The company expects operating (non-GAAP) diluted earnings per share of at least $13.80 and GAAP diluted earnings per share of at least $11.95.  Operating (non-GAAP) diluted earnings per share exclude $1.85 per share of charges for amortization of purchased intangible assets, other acquisition-related charges and retirement-related charges.  IBM expects a free cash flow realization rate in excess of 90 percent of GAAP net income.

Cash Flow and Balance Sheet

In the fourth quarter, the company generated net cash from operating activities of $3.2 billion; or $5.6 billion excluding Global Financing receivables.  IBM’s free cash flow was $4.7 billion.  IBM returned $1.3 billion in dividends and $0.9 billion of gross share repurchases to shareholders.  At the end of December 2016, IBM had $5.1 billion remaining in the current share repurchase authorization.

The company generated full-year free cash flow of $11.6 billion, excluding Global Financing receivables.  The company returned $8.8 billion to shareholders through $5.3 billion in dividends and $3.5 billion of gross share repurchases.

IBM ended the fourth-quarter 2016 with $8.5 billion of cash on hand.  Debt, including Global Financing debt of $27.9 billion, totaled $42.2 billion.  Core (non-Global Financing) debt totaled $14.3 billion.  The balance sheet remains strong and is well positioned to support the business over the long term.

Segment Results for Fourth Quarter

  • Cognitive Solutions (includes solutions software and transaction processing software) —revenues of $5.3 billion, up 1.4 percent (up 2.2 percent adjusting for currency) were driven by growth in cloud, analytics and security.
  • Global Business Services (includes consulting, global process services and application management) — revenues of $4.1 billion, down 4.1 percent (down 3.6 percent adjusting for currency).
  • Technology Services & Cloud Platforms (includes infrastructure services, technical support services and integration software) — revenues of $9.3 billion, up 1.7 percent (up 2.4 percent adjusting for currency).  Growth was driven by strong hybrid cloud services, analytics and security performance.
  • Systems (includes systems hardware and operating systems software) — revenues of $2.5 billion, down 12.5 percent (down 12.1 percent adjusting for currency).  Gross profit margins improved driven by z Systems performance.
  • Global Financing (includes financing and used equipment sales) — revenues of $447 million, down 1.5 percent (down 2.1 percent adjusting for currency).

Full-Year 2016 Results

Diluted earnings per share from continuing operations were $12.39, down 9 percent compared to the 2015 period.  Net income from continuing operations for the twelve months ended December 31, 2016 was $11.9 billion compared with $13.4 billion in the year-ago period, a decrease of 11 percent.

Consolidated net income was $11.9 billion compared to $13.2 billion in the year-ago period.  Consolidated diluted earnings per share were $12.38 compared to $13.42, down 8 percent year to year. Revenues from continuing operations for the twelve-month period totaled $79.9 billion, a decrease of 2 percent year to year compared with $81.7 billion for the twelve months of 2015.

Operating (non-GAAP) diluted earnings per share from continuing operations were $13.59 compared with $14.92 per diluted share for the 2015 period, a decrease of 9 percent.  Operating (non-GAAP) net income from continuing operations for the twelve months ended December 31, 2016 was $13.0 billion compared with $14.7 billion in the year-ago period, a decrease of 11 percent.

Source: IBM


IDG to Be Bought by Chinese Investors; IDC to Spin Out HPC Group

Thu, 01/19/2017 - 16:09

US-based publishing and investment firm International Data Group, Inc. (IDG) will be acquired by a pair of Chinese investors, China Oceanwide Holdings Group Co., Ltd. and IDG Capital, the companies announced today (Thursday). The official announcement comes after months of speculation with Reuters reporting in November that the parties were in “advanced discussions.”

Tech analyst outfit International Data Corporation (IDC), a wholly-owned IDG subsidiary, is included in the deal but will go forth without its HPC group, which will find a new corporate home before the sale closes (details below).

The terms of the acquisition were not disclosed, but sources have estimated the sales price to be between $500 million to $1 billion. The stakeholders say they have received clearance from the Committee on Foreign Investment in the United States (“CFIUS”) for the transaction, which is expected to close within the first quarter of 2017.

Founded in 1964 by Pat McGovern, IDG is a prominent global media, market research and venture company; it operates in 97 countries around the world. McGovern, the long-time CEO, passed away in 2014.

“In an effort to carry on the legacy of late founder and chairman Pat McGovern, we have been focused on determining the optimal future balance between what’s best for IDG and Pat’s mission for the McGovern Foundation,” said Walter Boyd, chairman of IDG. “We believe China Oceanwide and IDG Capital will provide the right financial, strategic and cultural fit to take IDG to greater heights.”

China Oceanwide is a privately held, multi-billion dollar, international conglomerate founded by Chairman Zhiqiang Lu. Its operations span financial services, real estate assets, media, technology and strategic investment. The company has a global business force of 12,000.

IDG Capital is an independently operated investment management partnership, which cites IDG as one of many limited partners. It was formed in 1993 as China’s first technology venture investment firm. It operates in a wide swath of sectors, including Internet and wireless communications, consumer products, franchise services, new media, entertainment, education, healthcare and advanced manufacturing.

The Future of IDC’s HPC Team

Given IDC’s position as an analyst firm of record for the HPC community and organizer of the well-attended HPC User Forums, you may be wondering how IDG’s sale to Chinese interests will impact IDC’s HPC group, which deals with sensitive US information. Earl Joseph, IDC program vice president and executive director HPC User Forum, explained that to preserve the full scope of its business activities, IDC’s HPC group will be divested from IDC’s holdings prior to the completion of the sale.

“We want to let you know that we will be fully honoring all IDC HPC contracts and deliverables, and will continue our HPC operations as before,” Joseph shared in an email.

“Because the HPC group conducts sensitive business with governments, the group is being separated prior to the deal closing. It will be operated under new ownership that will be independent from the buyer of IDC to ensure that the group can continue to fully support government research requirements. The HPC group will continue to do business as usual, including research reports, client studies, and the HPC User Forums. After the deal closes, all research reports will be provided by the new HPC group at: www.hpcuserforum.com.”

The IDC HPC team also clarified that it will retain control of “all the IP on HPC, numbers, reports, etc.” and it “won’t be part of a non-US company.”

“We are being set up to be a healthy, growing concern,” said Joseph.

Until the IDC HPC group lines up a suitable buyer, it will remain part of IDG.


Weekly Twitter Roundup (Jan. 19, 2017)

Thu, 01/19/2017 - 14:05

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

Just received our new clear tile for the Owens Cluster! #supercomputer pic.twitter.com/w0h4HVn0Ai

— OhioSupercomputerCtr (@osc) January 18, 2017

Preparation for #SC17 has begun, check out the new look and feel for SC17 with the #HPCConnects Logo! Supercomputing is only 10 months away

— SC17 (@Supercomputing) January 19, 2017

As part of the @MontBlanc_Eu Project, we got to visit the MareNostrum, a #supercomputer housed in what used to be a chapel. #HPC #whataphoto pic.twitter.com/QUOgefWkep

— Connect Tech Inc. (@ConnectTechInc) January 18, 2017

Even today's low temperatures did not prevent our lively #HPC discussions at today's event #ARM on the road. Thanks all for attending! pic.twitter.com/gbPGaksQ6d

— Mont-Blanc (@MontBlanc_Eu) January 17, 2017

Stampede supercomputer simulates silica glass, science to save on energy bills from heat loss https://t.co/ESdrytAJIx

— TACC (@TACC) January 19, 2017

We're creating a first-of-its-kind #supercomputer with @GW4Alliance thanks to £3M @EPSRC funding https://t.co/ojbC5XkmLB

— University of Bath (@UniofBath) January 17, 2017

Awesome week of presenting and learning at the @ddn_limitless Sales Conference – 2017 is going to be exciting across the product portfolio pic.twitter.com/fBHrAUIBPL

— Kurt Kuckein (@kkuckein) January 18, 2017

Congratulations @ragerber! Well deserved: https://t.co/JIF6H7I6Kl @BerkeleyLab pic.twitter.com/G0gcrumM3x

— NERSC (@NERSC) January 17, 2017

Last talk at @MontBlanc_Eu Conference. Jean Gonnord #CEA makes a vibrant appeal to buy European in #HPC pic.twitter.com/7QIXN7XV8P

— Pascale BernierBruna (@PBernierBruna) January 17, 2017

Highlights of @DeptofDefense Secretary Ashton Carter term include Vislab visit. https://t.co/fUdtcRX01z #HPCmatters pic.twitter.com/KztfVsSHKw

— TACC (@TACC) January 19, 2017

Click here to view the top tweets from last week.

The post Weekly Twitter Roundup (Jan. 19, 2017) appeared first on HPCwire.

France’s CEA and Japan’s RIKEN to Partner on ARM and Exascale

Thu, 01/19/2017 - 11:09

France’s CEA and Japan’s RIKEN institute announced a multi-faceted five-year collaboration to advance HPC generally and prepare for exascale computing. Among the particulars are efforts to: build out the ARM ecosystem; work on code development and code sharing on existing and future platforms; share expertise in specific application areas (materials science and seismic science, for example); improve techniques for combining numerical simulation with big data; and expand HPC workforce training. It is a very full agenda.

CEA (the French Alternative Energies and Atomic Energy Commission), long a force in European HPC, and RIKEN, Japan’s largest research institution, share broad goals in the new initiative. On the RIKEN side, the Advanced Institute for Computational Science (AICS) will coordinate much of the work, although activities are expected to extend RIKEN-wide and to other Japanese academic institutions.

Perhaps not surprisingly, further development of ARM is a driving force. Here are comments by project leaders from both partners:

  • RIKEN. “We are committed to building the ARM-based ecosystems and we want to send that message to those who are related to ARM so that those people will be excited in getting in contact with us,” said Shig Okaya, director, Flagship 2020 Project, RIKEN. Japan and contractor Fujitsu, of course, have committed to using ARM on the post-K computer.
  • CEA. “We are [also] committed to development of the [ARM] ecosystem and we will [also] compare and cross test with the other platforms such as Intel. It’s a way for us to anticipate the future needs of our scientists and industry people so that we have a full working co-design loop,” said Jean-Philippe Bourgoin, director of strategic analysis and member of the executive committee, CEA. Europe also has a major ARM project – Mont-Blanc, now in its third phase – that is exploring the use of ARM in leadership-class machines. Atos/Bull is the lead contractor.
Jean-Philippe Bourgoin, director of strategic analysis and member of the executive committee, CEA (right); Shig Okaya, director, Flagship 2020 Project, RIKEN

The agreement, announced last week in Japan and France, has been in the works for some time, said Okaya and Bourgoin, and is representative of the CEA-RIKEN long-term relationship. Although details are still forthcoming, the press release on the CEA website provides a worthwhile snapshot:

“The scope of the collaboration covers the development of open source software components, organized in an environment that can benefit both hardware developers and software and application developers on x86 as well as ARM architectures. The open source approach is particularly suited to combining the respective efforts of partners, bringing software environments closer to today’s very different architectures and giving as much resonance to the results as possible – in particular through contributions to the OpenHPC collaborative project.

“Priority topics include programming environments and languages, runtime systems, and energy-optimized job schedulers. Particular attention is paid to performance and efficiency indicators and metrics – with a focus on designing useful and cost-effective computers – as well as training and skills development. Finally, the first applications included in the collaboration concern quantum chemistry and condensed matter physics, as well as the seismic behavior of nuclear installations.”

The new agreement, say both parties, “should enable France and Japan to join forces in the global race on this strategic (HPC and exascale) subject. The French and Japanese approaches have many similarities not only in their technological choices, but also in the importance given to building user ecosystems around these new supercomputers.”

Formally, the collaboration is part of an agreement between the French Ministry of National Education, Higher Education and Research and Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT). Europe and Japan have both been supporters of open architectures and open source software. The agreement also helps each nation further explore alternatives to x86 (Intel) processor architectures.

It’s worth noting that ARM, founded in the U.K., was purchased last year by Japanese technology conglomerate SoftBank (see HPCwire article, SoftBank will Purchase ARM Ltd for $32B).

Steve Conway, IDC research vice president, HPC/HPDA, said, “This CEA-RIKEN collaboration to advance open source software for leadership-class supercomputers, including exascale systems, makes great sense. Both organizations are among the global leaders for HPC innovations in hardware and software, and both have been strong supporters of the OpenHPC collaborative. IDC has said for years that software advances will be even more important than hardware progress for the future of supercomputing.”

K computer, RIKEN

The collaboration is a natural one, said Okaya and Bourgoin, not least because each organization leads exascale development efforts in its country and each already hosts formidable HPC resources – RIKEN/AICS’s K computer and CEA’s Curie machine, which is part of the Partnership for Advanced Computing in Europe (PRACE) network.

“One of the outcomes of this partnership will be that the applications and codes developed by the Japanese will be able to be ported and run on the French computer and of course the French codes and applications will be able to be run on the Japanese computer. So the overall ecosystem [of both] will benefit,” said Bourgoin. He singled out three critical areas for collaboration: programming environment, runtime environment, and energy-aware job scheduling.

Okaya noted there are differences in the way each organization has tackled these problems but emphasized they are largely complementary. One example of tool sharing is the microkernel strategy being developed at RIKEN, which will be enriched by use of a virtualization tool (PCOCC) from CEA. At the application level, at least to start, two applications areas have been singled out:

  • Quantum Chemistry/Molecular Dynamics. There’s an early effort to port BigDFT, developed in large measure in Europe, to the K computer with follow-up work to develop libraries.
  • Earth Sciences. Japan has leading edge seismic simulation/prediction capabilities and will work with CEA to port Japan’s simulation code, GAMERA. Bourgoin noted the value of such simulations in nuclear installation evaluations and recalled that Japan and France have long collaborated on a variety of nuclear science issues.

The partnership seems likely to bear fruit on several fronts. Bourgoin noted the agreement has a lengthy list of detailed deliverables and a timetable for their delivery. While the RIKEN effort is clearly focused on ARM, Bourgoin emphasized it is not clear which processor(s) will emerge for next-generation HPC and exascale in the coming decade. Europe and CEA want to be ready for whatever processor architecture landscape arises.

In addition to co-development, Bourgoin and Okaya said they would also work on HPC training issues. Both agreed there is currently a shortage of trained HPC personnel, though how training would be addressed was not yet spelled out. It will be interesting to watch this collaboration and monitor what effect it has on accelerating ARM traction more generally. Recently, of course, Cray announced an ARM-based supercomputer project to be based in the U.K.

Neither partner wanted to go on record regarding geopolitical influences on processor development generally or this collaboration specifically. Past European Commission statements have made it clear the EC would likely back a distinctly European (origin, IP, manufacture) processor alternative to the x86. Japan seems likely to share such homegrown and home-control concerns with regard to HPC technology, which is seen as an important competitive advantage for industry and science.

The post France’s CEA and Japan’s RIKEN to Partner on ARM and Exascale appeared first on HPCwire.

Appentra Joins the OpenPOWER Foundation

Thu, 01/19/2017 - 10:12

A Coruña, Spain, January 19 — Appentra Corporation (@Appentra), a software company for guided parallelization, today announced it has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture.

Appentra joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform.

The OpenPOWER Foundation provides a collaborative environment in which members stay informed about OpenPOWER activities and get involved in the areas that interest them. As a member, we will actively participate in the OpenPOWER Ready program to demonstrate that our new software, Parallware Trainer, is interoperable with other OpenPOWER Ready products. We are also interested in working with the Academia Discussion Group to better understand how Parallware Trainer can help in teaching parallel programming with OpenMP and OpenACC.

“For us it is of great value to share our experiences and learn from world-leading universities, national laboratories and supercomputing centers that are also members of the OpenPOWER Foundation,” said Manuel Arenaz, CEO of Appentra.

“The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way of exploiting and innovating around processor technology,” said Calista Redmond, Director of OpenPOWER Global Alliances at IBM. “With the Power architecture designed for big data and cloud, new OpenPOWER Foundation members like Appentra will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads.”

About OpenPOWER Foundation

The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry. To learn more about OpenPOWER and to view the complete list of current members, go to www.openpowerfoundation.org. #OpenPOWER

About Appentra

Appentra is a technology company providing software tools for guided parallelization in high-performance computing and HPC-like technologies.

Appentra was founded in 2012 as a spin-off from the University of A Coruña. Dr. Manuel Arenaz and his team were conducting research in the area of advanced compilation techniques to improve the performance in high-performance parallel computing codes. Specifically, Dr. Arenaz’s team was focused on the static program analysis for parallelization of sequential scientific applications that use sparse computations, automatic parallelism discovery, and development of parallelizing code transformations for sparse applications.

This led to an idea: develop a set of tools, the Parallware Suite, that helps users manage the complexity of parallel programming, keeps up with leading industry standards, and not only parallelizes code but also trains users to parallelize it themselves. By using the Parallware Suite, users can take control of their parallel applications, improve their productivity, and unlock the full potential of HPC in their environment.
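
Parallware itself targets OpenMP and OpenACC directives in compiled HPC codes, so the short sketch below is only a loose, hypothetical illustration of the kind of loop-level parallelization the suite is meant to automate and teach. It uses Python's standard multiprocessing module rather than Appentra's actual tooling, and the function names are invented for the example.

    # Hypothetical illustration of loop-level parallelization (not Parallware output).
    # Parallware would emit OpenMP/OpenACC directives for a C or Fortran loop like this;
    # the same idea is sketched here with Python's standard multiprocessing module.
    from multiprocessing import Pool

    def simulate_cell(i: int) -> float:
        # Stand-in for an independent, compute-heavy loop body.
        return sum((i * k) ** 0.5 for k in range(1, 10_000))

    def run_serial(n: int) -> list:
        # The original sequential loop a user might start from.
        return [simulate_cell(i) for i in range(n)]

    def run_parallel(n: int, workers: int = 4) -> list:
        # The parallelized version: iterations are independent, so they can be
        # distributed across worker processes, analogous to an OpenMP parallel-for.
        with Pool(processes=workers) as pool:
            return pool.map(simulate_cell, range(n))

    if __name__ == "__main__":
        print(run_serial(8) == run_parallel(8))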

Source: Appentra

The post Appentra Joins the OpenPOWER Foundation appeared first on HPCwire.

Altair to Offer HPC Cloud Offerings on Oracle Cloud Platform

Thu, 01/19/2017 - 08:46

TROY, Mich., Jan. 19 — Altair today announced a business collaboration with Oracle to build and offer High Performance Computing (HPC) solutions on the Oracle Cloud Platform. This follows Oracle’s decision to name Altair’s PBS Works as its preferred workload management solution for Oracle Cloud customers.

Altair PBS Works running on the Oracle Cloud Platform offers independent software vendors (ISVs) faster time to market on a proven HPC platform, addressing markets such as oil and gas, insurance information processing and the Internet of Things (IoT). The Altair advantage is the ability to quickly jumpstart an ISV interested in HPC, shortening the time to market for its solutions.

“The Oracle Cloud Platform provides superior performance in terms of price, predictability, and throughput, with a low cost pay-as-you-go cloud model,” said Sam Mahalingam, Chief Technical Officer, Altair. “We are delighted to partner with Oracle to provide High Performance Computing (HPC) solutions with Altair’s PBS Works for the Oracle Cloud.”

Altair has served the HPC market for over a decade with award-winning workload management, engineering, and cloud computing software. Used by thousands of companies worldwide, PBS Works enables engineers in HPC environments to improve productivity, optimize resource utilization and efficiency, and simplify the process of workload management.

“Altair is a longtime leader in HPC and cloud solutions,” said Deepak Patil, Vice President of Product Management, Oracle Cloud Platform. “Their unique combination of HPC and engineering expertise makes PBS Works Oracle’s preferred workload management suite for High Performance Computing on the Oracle Cloud.”

The Oracle Platform promises to offer HPC users superior performance in terms of price, predictability, and throughput.

As part of the collaboration, Altair will work closely with Oracle to develop turnkey solutions allowing users to access cloud HPC resources in the Oracle Cloud Platform from any web-enabled device. These solutions will leverage Altair’s industry-leading PBS Works job scheduling suite on the Oracle Cloud to enable intuitive web portal access and secure workload management, providing rapid, scalable access to Oracle Cloud Platform HPC resources.
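
To make the scheduling layer concrete, here is a minimal, hypothetical sketch of programmatic job submission to a PBS-managed cluster: a standard PBS job script is composed and handed to the qsub command. The resource request, job name and commands are assumptions made for illustration and are not details of the Altair-Oracle offering.

    # Minimal, hypothetical sketch of submitting a batch job to a PBS-managed cluster.
    # The #PBS directives are standard PBS syntax; the specific resource request,
    # job name and commands below are invented for illustration.
    import subprocess
    import tempfile

    job_lines = [
        "#!/bin/bash",
        "#PBS -N demo_simulation",          # job name
        "#PBS -l select=2:ncpus=16",        # request 2 chunks of 16 cores each
        "#PBS -l walltime=01:00:00",        # one-hour wall-clock limit
        "#PBS -j oe",                       # merge stdout and stderr
        "cd $PBS_O_WORKDIR",
        'echo "Running on $(hostname)"',    # placeholder for the real compute step
    ]
    job_script = "\n".join(job_lines) + "\n"

    def submit(job_text):
        # Write the job script to a temporary file and submit it with qsub;
        # qsub prints the new job's ID on stdout.
        with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
            f.write(job_text)
            path = f.name
        result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print("Submitted job:", submit(job_script))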

Initial solutions will target life sciences, energy, and academia, where growing HPC demand drives compute-intensive workloads including DNA sequencing, advanced simulations, and big data analytics used to test new concepts or products in virtual space.

For more information on Altair’s PBS Works and HPC cloud offerings, visit www.pbsworks.com/overview.

About Altair

Altair is focused on the development and broad application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA and operates more than 45 offices throughout 20 countries. Today, Altair serves more than 5,000 corporate clients across broad industry segments. To learn more, please visit www.altair.com.

Source: Altair

The post Altair to Offer HPC Cloud Offerings on Oracle Cloud Platform appeared first on HPCwire.

SC17 Now Accepting Proposals for Workshops

Thu, 01/19/2017 - 07:00

Jan. 19 — SC includes full- and half-day workshops that complement the overall Technical Program events, with the goal of expanding the knowledge base of practitioners and researchers in a particular subject area. These workshops provide a focused, in-depth venue for presentations, discussion and interaction. Workshop proposals are peer-reviewed academically, with a focus on submissions that inspire deep and interactive dialogue on topics of interest to the HPC community.

Publishing through SIGHPC

Workshops held in conjunction with the SC conference are *not* included as part of the SC proceedings.

If a workshop will have a rigorous peer-review process for selecting papers, we encourage the organizers to approach ACM SIGHPC about its special collaborative arrangement, which allows the workshop’s proceedings to be published in the two digital archives (ACM Digital Library and IEEE Xplore). The workshop’s proceedings will also be linked to the SC17 online program.

Please note that this option requires a second proposal to SIGHPC and imposes additional requirements; see http://www.sighpc.org/events/collaboration/scworkshops for details.

Important Dates

  • Web submissions open: January 1, 2017
  • Submission Deadline: February 7, 2017

Web Submissions: https://submissions.supercomputing.org/

Email Contact: workshops@info.supercomputing.org

SC17 Workshop Chair: Almadena Chtchelkanova, NSF

SC17 Workshop Vice-Chair: Luiz DeRose, Cray Inc.

Source: SC17

The post SC17 Now Accepting Proposals for Workshops appeared first on HPCwire.

SDSC’s Gordon Supercomputer Assists in New Microbiome Study

Thu, 01/19/2017 - 06:45

Jan. 19 — A new proof-of-concept study by researchers from the University of California San Diego has succeeded in training computers to “learn” what a healthy versus an unhealthy gut microbiome looks like based on its genetic makeup. Since this can be done by genetically sequencing fecal samples, the research suggests there is great promise for new diagnostic tools that are, unlike blood draws, non-invasive.

As recent advances in scientific understanding of Parkinson’s disease and cancer immunotherapy have shown, our gut microbiomes – the trillions of bacteria, viruses and other microbes that live within us – are emerging as one of the richest untapped sources of insight into human health.

The problem is these microbes live in a very dense ecology of up to one billion microbes per gram of stool. Imagine the challenge of trying to specify all the different animals and plants in a complex ecology like a rain forest or coral reef – and then imagine trying to do this in the gut microbiome, where each creature is microscopic and identified by its DNA sequence.

Determining the state of that ecology is a classic ‘Big Data’ problem, where the data is provided by a powerful combination of genetic sequencing techniques and supercomputing software tools. The challenge then becomes how to mine this data to obtain new insights into the causes of diseases, as well as novel therapies to treat them.

The new paper, titled “Using Machine Learning to Identify Major Shifts in Human Gut Microbiome Protein Family Abundance in Disease,” was presented last month at the IEEE International Conference on Big Data. It was written by a joint research team from UC San Diego and the J. Craig Venter Institute (JCVI). At UC San Diego, it included Mehrdad Yazdani, a machine learning and data scientist at the California Institute for Telecommunications and Information Technology’s (Calit2) Qualcomm Institute; Biomedical Sciences graduate student Bryn C. Taylor and Pediatrics Postdoctoral Scholar Justine Debelius; Rob Knight, a professor in the UC San Diego School of Medicine’s Pediatrics Department as well as the Computer Science and Engineering Department and director of the Center for Microbiome Innovation; and Larry Smarr, Director of Calit2 and a professor of Computer Science and Engineering. The UC San Diego team also collaborated with Weizhong Li, an associate professor at JCVI.

Metagenomics and Machine Learning

The software to carry out the study was developed by Li and run on the data-intensive Gordon supercomputer at the San Diego Supercomputer Center (SDSC), an Organized Research Unit of UC San Diego, using 180,000 core-hours. That’s equivalent to running a personal computer 24 hours a day for about 20 years.

The work began with a genetic sequencing technique known as “metagenomics,” which breaks up the DNA of the hundreds of species of microbes that live in the human large intestine (our “gut”). The technique was applied to 30 healthy people (using sequencing data from the National Institutes of Health’s Human Microbiome Program), together with 30 samples from people suffering from the autoimmune Inflammatory Bowel Disease (IBD), including those with ulcerative colitis and with ileal or colonic Crohn’s disease. This resulted in sequencing around 600 billion DNA bases, which were then fed into the Gordon supercomputer to reconstruct the relative abundance of these species; for instance, how many E. coli are present compared to other bacterial species.
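
For readers outside the field, “relative abundance” simply means each species’ share of the classified sequence data. A toy sketch of the arithmetic, with read counts invented purely for illustration (they are not figures from the study):

    # Toy example of relative species abundance; the read counts are invented
    # and are not data from the UC San Diego / JCVI study.
    read_counts = {
        "Escherichia coli": 1_200_000,
        "Bacteroides fragilis": 4_800_000,
        "Faecalibacterium prausnitzii": 6_000_000,
    }
    total_reads = sum(read_counts.values())
    relative_abundance = {sp: n / total_reads for sp, n in read_counts.items()}
    for species, share in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
        print(f"{species}: {share:.1%}")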

Since each bacterium’s genome contains thousands of genes and each gene can express a protein, this technique made it possible to translate the reconstructed DNA of the microbial community into hundreds of thousands of proteins, which are then grouped into about 10,000 protein families.

To discover the patterns hidden in this huge pile of numbers, the researchers harnessed what they refer to as “fairly out-of-the-bag” machine-learning techniques originally developed for spam filters and other data mining applications. Their goal was to use these algorithms to classify major changes in the protein families found in the gut bacteria of both healthy subjects and those with IBD, based on the DNA found in their fecal samples.

The researchers first used standard biostatistics routines to identify the 100 most statistically significant protein families that differentiate health and disease states. These 100 protein families were then used as a “training set” to build a machine learning classifier that could classify the remaining 9,900 protein families in diseased versus healthy states. The goal was to find a “signature” for which protein families were elevated or suppressed in disease versus healthy states.
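
The article does not name the specific statistical test or classifier, so the following is only a rough sketch of the two-stage idea on synthetic data, with a Mann-Whitney U test and a random forest used as stand-ins for the unspecified “standard biostatistics routines” and machine-learning model:

    # Rough sketch of the two-stage approach described above, on synthetic data.
    # Assumptions: 10,000 protein families x 60 samples (30 healthy, 30 IBD);
    # the statistical test and classifier are stand-ins, not the study's choices.
    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    abundance = rng.random((10_000, 60))            # rows = protein families, cols = samples
    is_ibd = np.array([False] * 30 + [True] * 30)   # sample labels

    # Stage 1: rank protein families by how strongly they separate IBD from healthy samples.
    pvals = np.array([mannwhitneyu(row[is_ibd], row[~is_ibd]).pvalue for row in abundance])
    top100 = np.argsort(pvals)[:100]                # the 100 most significant families
    rest = np.setdiff1d(np.arange(abundance.shape[0]), top100)

    # Label each training family by the direction of its shift:
    # elevated (True) or suppressed (False) in disease relative to health.
    elevated_in_ibd = (abundance[top100][:, is_ibd].mean(axis=1)
                       > abundance[top100][:, ~is_ibd].mean(axis=1))

    # Stage 2: train on the 100 families, then classify the remaining 9,900 to build
    # a disease-versus-health "signature" across all protein families.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(abundance[top100], elevated_in_ibd)
    signature = clf.predict(abundance[rest])
    print(f"{signature.sum()} of {len(rest)} remaining families predicted elevated in IBD")

In the actual study, the inputs were the roughly 10,000 protein-family abundances reconstructed on Gordon rather than random numbers, and the output of interest was the signature of protein families elevated or suppressed in disease versus healthy states.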

The entire article can be found here.

Source: Tiffany Fox, SDSC

The post SDSC’s Gordon Supercomputer Assists in New Microbiome Study appeared first on HPCwire.

ITER and BSC Collaborate to Simulate the Process of Fusion Power Generation

Thu, 01/19/2017 - 06:30

Jan. 19 — The ITER Organization and the Barcelona Supercomputing Center have gone one step further in their collaboration to simulate the process of fusion power generation. Both parties have signed a Memorandum of Understanding (MoU) in which they agree on the importance of promoting and furthering academic and scientific cooperation in all fields of mutual interest and to advance the training of young researchers. ITER is the international nuclear fusion R&D project, which is building the world’s largest experimental tokamak in France. It aims to demonstrate that fusion energy is scientifically and technologically feasible.

ITER and BSC already collaborate in the area of numerical modelling to assess the design of the ITER pellet injector. These computer simulations are based upon non-linear 3D Magnetohydrodynamics (MHD) methods. Their focus is modelling the injection of pellets to forecast and control instabilities that could damage the reactor. These instabilities are called Edge Localized Modes (ELM), which can occur at the boundary of the fusion plasma and are problematic because they can release large amounts of energy to the reactor wall, wearing it away in the process. The goal of these simulations is to assess the optimal pellet size and speed of the pellet injector.

The MoU is valid for five years and further tightens the cooperation between the two institutions, each a leader in its field. ITER will become the biggest and most relevant fusion device in the world, while BSC, with its 475 researchers and experts and the upgrade from MareNostrum 3 to MareNostrum 4 taking place later this year, is one of the top supercomputing centers worldwide. As a first step under the new MoU, the two institutes will start a collaboration on the ITER Integrated Modelling infrastructure, IMAS, together with the EUROfusion Work Package for Code Development.

Mervi Mantsinen

The Barcelona Supercomputing Center Fusion team is coordinated by Mervi Mantsinen, an ICREA professor at BSC since October 2013. During this time, Mantsinen has been one of the scientific coordinators for the EUROfusion experimental campaign preparing for fusion at ITER. Mantsinen has coordinated one of the two largest experiments for 2015-2016 at the Joint European Torus (JET), the biggest and most powerful fusion reactor in the world, and is assisting the design and construction of ITER. Previously Mantsinen worked at JET and at the ASDEX Upgrade tokamak at the Max Planck Institute for Plasma Physics in Garching, Germany.

Mantsinen’s research focuses on the numerical modelling of experiments in magnetically confined fusion devices in preparation for ITER operation. Her objective is to enhance modelling capabilities in the field of fusion through code validation and optimization. This research is done within the European fusion research program EUROfusion for Horizon 2020 in close collaboration with ITER, the International Tokamak Physics Activity, EUROfusion and the Spanish national fusion laboratory CIEMAT.

ITER is the international nuclear fusion R&D project, which is building the world’s largest experimental tokamak nuclear fusion reactor in France. ITER aims to demonstrate that fusion energy is scientifically and technologically feasible by producing ten times more energy than is put in.

Fusion energy is released when hydrogen nuclei collide, fusing into heavier helium atoms and releasing tremendous amounts of energy in the process. ITER is constructing a tokamak device for the fusion reaction, which uses magnetic fields to contain and control the plasma – the hot, electrically charged gas that is produced in the process.

EUROfusion, the ‘European Consortium for the Development of Fusion Energy,’ manages and funds European fusion research activities. The EUROfusion consortium is composed of the member states of the European Union plus Switzerland as an associated member.

The Joint European Torus (JET) is located at the Culham Centre for Fusion Energy in Oxfordshire, Great Britain.  JET is presently the largest and most powerful fusion reactor in the world and studies fusion in conditions approaching those needed for a fusion power plant.

About the Barcelona Supercomputing Center (BSC)

Barcelona Supercomputing Center (BSC) is the national supercomputing centre in Spain. BSC specializes in high performance computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first-level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). The center also manages the Spanish Supercomputing Network (RES).

The BSC Consortium is composed of the Ministerio de Economía, Industria y Competitividad of the Spanish Government, the Departament d’Empresa i Coneixement of the Catalan Government and the Universitat Politècnica de Catalunya – BarcelonaTech.

Source: BSC

The post ITER and BSC Collaborate to Simulate the Process of Fusion Power Generation appeared first on HPCwire.

ARM Waving: Attention, Deployments, and Development

Wed, 01/18/2017 - 17:07

It’s been a heady two weeks for the ARM HPC advocacy camp. At this week’s Mont-Blanc Project meeting held at the Barcelona Supercomputing Center, Cray announced plans to build an ARM-based supercomputer in the U.K., while Mont-Blanc selected Cavium’s ThunderX2 ARM chip for its third phase of development. Last week, France’s CEA and Japan’s RIKEN announced a deep collaboration aimed largely at fostering the ARM ecosystem. This activity follows a busy 2016 in which SoftBank acquired ARM, OpenHPC announced ARM support, ARM released its SVE spec, Fujitsu chose ARM for the post-K machine, and ARM acquired HPC tool provider Allinea in December.

The pieces of an HPC ecosystem for ARM seem to be sliding, albeit unevenly, into place. Market traction in terms of HPC product still seems far off – there needs to be product available, after all – but the latest announcements suggest growing momentum in sorting out the needed components for potential ARM-based HPC offerings. Plenty of obstacles remain – Fujitsu’s much-discussed ARM-based post-K computer schedule has been delayed amid suggestions that processor issues are the main cause. Nevertheless, interest in ARM for HPC is rising.

The biggest splash at this week’s Mont-Blanc project meeting was the announcement of Cray’s plans to build a massive ARM supercomputer for the GW4 consortium in the U.K. At first glance, it looks to be the first production ARM-based supercomputer. Named Isambard after the Victorian engineer Isambard Kingdom Brunel, the new system is scheduled for delivery in the March-December 2017 timeframe. Importantly, Isambard “will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms.”

Project leader and professor of HPC at the University of Bristol, Simon McIntosh-Smith, said “Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack. [It’s] a unique system that will enable direct ‘apples-to-apples’ comparisons across architectures, thus enabling UK scientists to better understand which architecture best suits their application.”

Here’s a quick Isambard snapshot:

  • Cray CS-400 system
  • 10,000+ 64-bit ARMv8 cores
  • HPC optimized stack
  • Will be used to compare ARM against x86, Knights Landing, and Pascal processors
  • Cost £4.7 million over three years

The specific ARM chip planned for use was not named, although speculation is that it will likely be a Cavium part. The new machine will be hosted by the U.K. Met Office, the national weather and climate forecasting agency. Paul Selwood, Manager for HPC Optimization at the Met Office, said in the release announcing the project: “This system will enable us, in co-operation with our partners, to accelerate insights into how our weather and climate models need to be adapted for these emerging CPU architectures.” The GW4 Alliance brings together four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter.

The second splash at the BSC meeting was perhaps less spectacular but also important. The Mont-Blanc project has been percolating along since 2011. A smaller prototype was stood up in 2015 and it seems clear much of Europe is hoping that ARM-based processors will offer an HPC alternative and greater European control over its exascale efforts. Cavium’s ThunderX2 chip – a 64-bit ARMv8-A server processor that’s compliant with ARMv8-A architecture specifications and ARM SBSA and SBBR standards – will power the third phase prototype.

Mont-Blanc, of course, is the European effort to explore how ARM can be practically scaled for larger machines including future exascale systems. Atos/Bull is the primary contractor. The third phase of the Mont-Blanc project seeks to:

  • Define the architecture of an Exascale-class compute node based on the ARM architecture, and capable of being manufactured at industrial scale.
  • Assess the available options for maximum compute efficiency.
  • Develop the matching software ecosystem to pave the way for market acceptance.

The CEA-RIKEN collaboration announced last week is yet another ARM ecosystem momentum builder. “We are committed to building the ARM-based ecosystems and we want to send that message to those who are related to ARM so that those people will be excited in getting in contact with us,” said Shig Okaya, director, Flagship 2020, and a project leader for the CEA-RIKEN effort. The collaboration will, among other things, focus on programming environments and languages, runtime systems, and energy-optimized job schedulers. Co-development of codes and code sharing are big parts of the deal. (HPCwire covers the CEA-RIKEN partnership in greater detail here).

Whether the increased attention on ARM will translate into success beyond the mobile and SoC world, where it is now a dominant player, isn’t clear. One of CEA’s goals is to compare ARM with a range of architectures to determine which performs best and for which workloads. Many market watchers are wary of ARM’s potential in HPC, which is still a relatively small market. Then again, less success in HPC wouldn’t necessarily rule out success in traditional servers. We’ll see.

The post ARM Waving: Attention, Deployments, and Development appeared first on HPCwire.

Richard Gerber Named Head of NERSC’s HPC Department

Wed, 01/18/2017 - 11:32

Jan. 18 — Richard Gerber has been named head of NERSC’s High-Performance Computing (HPC) Department, formed in early 2016 to help the center’s 6,000 users take full advantage of new supercomputing architectures – those already here and those on the horizon – and guide and support them during the ongoing transition to exascale.

For the past year, Gerber served as acting head of the department, which comprises four groups: Advanced Technologies, Application Performance, Computational Systems and User Engagement.

“This is an exciting time because the whole HPC landscape is changing with manycore, which is a big change for our users,” said Gerber, who joined NERSC’s User Services Group in 1996 as a postdoc, having earned his PhD in physics from the University of Illinois. “Users are facing a big challenge; they have to be able to exploit the architectural features on Cori (NERSC’s newest supercomputing system), and the HPC Department plays a critical role in helping them do this.”

The HPC Department is also responsible for standing up and supporting world-class systems in a production computing environment and looking to the future. “We work with complex, first-of-a-kind systems that present unique challenges,” Gerber said. “Our staff is constantly providing innovative solutions that make systems more capable and productive for our users. Looking forward, we are evaluating emerging technologies and gathering scientific needs to influence future HPC directions that will best support the science community.”

In addition, NERSC is working to acquire its next large system, NERSC-9, and prepare users to make effective use of it and exascale architectures in general, Gerber noted.

“The challenge really is getting the community to exascale, and there are many aspects to that, including helping users explore different programming models,” he said. “Beyond that we are starting to think about how to prepare for a post-Moore’s Law world when it arrives. We want to help move the community toward exascale and make sure they are ready.”

About NERSC and Berkeley Lab

The National Energy Research Scientific Computing Center (NERSC) is the primary high-performance computing facility for scientific research sponsored by the U.S. Department of Energy’s Office of Science. Located at Lawrence Berkeley National Laboratory, the NERSC Center serves more than 6,000 scientists at national laboratories and universities researching a wide range of problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry, computational biology, and other disciplines. Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the U.S. DOE Office of Science.

Source: NERSC

The post Richard Gerber Named Head of NERSC’s HPC Department appeared first on HPCwire.
