HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Ethernet Alliance Heralds Next Ethernet Era with OFC 2017 Demo

Thu, 03/16/2017 - 17:14

BEAVERTON, Ore., March 16  — The Ethernet Alliance, a global consortium dedicated to the continued success and advancement of Ethernet technologies, today unveiled details of its live, interactive OFC 2017 demo. Featuring one of the largest numbers of participating member companies ever, the Ethernet Alliance’s interoperability demo emphasizes the full spectrum of Ethernet speeds from 1 Gigabit (1G) to 400 Gigabit (400G) and features a live 400G demo interconnecting to four discrete member booths. The Ethernet Alliance can be found in booth 3709 on the OFC 2017 expo floor, March 21 – 23, 2017, at the Los Angeles Convention Center, Los Angeles, Calif.

“It’s an incredibly exciting time in the industry as investments in next-generation Ethernet standards are coming to fruition. Our members – including equipment manufacturers, system and component vendors, test and measurement, and everyone else in-between – are developing the solutions that will enable these standards,” said John D’Ambrosia, chairman, Ethernet Alliance; and senior principal engineer, Huawei. “The diversity of solutions will allow network designers to tailor their network to their individual application’s bandwidth needs and specific requirements. With multiple application spaces refreshing near-simultaneously, we’re witnessing the largest aggregated build-out ever. In short, the next Ethernet era is off to a terrific start.”

As the role of optics in Ethernet is undeniable, the Ethernet Alliance’s OFC 2017 multivendor demo showcases a broad array of optical technologies, featuring 400G optics and form factors, cabling, and other emerging fiber innovations. The organization’s display encompasses two demonstrations, one focusing on a wide range of solutions ranging from 1G to 100G. The second demo integrates live 400G optical network connections from the Ethernet Alliance’s booth to four other autonomous member company booths on the expo floor. Reflecting the whole of the Ethernet ecosystem, these demos incorporate an extensive variety of switches, NICs, servers, cabling, fiber, and cutting-edge test equipment that provides data generation and real-time analysis. These technologies are the latest evolution in the Ethernet portfolio, and the foundation of the next Ethernet era.

With one of the highest rates of member company participation to-date, the Ethernet Alliance’s OFC 2017 demo features equipment and technologies from 16 different organizations, including Amphenol Corporation; Aquantia Corporation; Broadcom Limited; Cisco Systems, Inc.; Finisar Corporation; Ixia; Juniper Networks; Mellanox Technologies, Ltd.; Molex, Inc.; Oclaro, Inc.; Panduit Corp.; Spirent Communications; TE Connectivity Ltd.; Teledyne LeCroy, Inc.; Viavi Solutions Inc.; and Xilinx, Inc.

“This demo is much more than merely hooking up PHYs – it’s a true representation of the disruptive transformations taking place at every level of the Ethernet ecosystem. With member company participation among the highest it has ever been in Ethernet Alliance history, we have everything from interoperable real-world products available for immediate deployment, to forward-looking 400G technologies that will be the cornerstone of tomorrow’s high-speed networks, to state-of-the-art test and measurement tools needed for validating a new generation of links and devices,” said David J. Rodgers, board member and OFC 2017 technical lead, Ethernet Alliance; and senior product marketing manager, Teledyne LeCroy. “Our 400G demonstration highlights how IEEE 802.3 standards facilitate interoperability, even at the pre-ratification stage. It’s another proof-point of Ethernet’s capacity to just plug in and perform as expected.”

In addition to its multivendor demo, the Ethernet Alliance is hosting an OFC 2017 panel entitled The Fracturing and Burgeoning Ethernet Market, where expert panelists will discuss how the 100G market is simultaneously thriving and fracturing into numerous variants. Moderated by Chairman John D’Ambrosia, expert speakers for this Ethernet Alliance panel include Mark Nowell, vice president, Ethernet Alliance; and senior director of engineering, Cisco Systems, Inc.; Chris Cole, vice president, advanced development, Finisar Corporation; and Paul Brooks, product line manager for high-speed transport, Viavi Solutions, Inc. The Fracturing and Burgeoning Ethernet Market panel session will be held from 11am – 12pm PST, Tuesday, March 21, 2017, in Expo Theater III on the OFC expo floor.

To experience the Ethernet Alliance’s live multivendor demo, please visit booth 3709 on the OFC 2017 expo floor. For more information about the Ethernet Alliance, please visit http://www.ethernetalliance.org, follow @EthernetAllianc on Twitter, visit its Facebook page, or join the EA LinkedIn group.

About the Ethernet Alliance

The Ethernet Alliance is a global consortium that includes system and component vendors, industry experts, and university and government professionals who are committed to the continued success and expansion of Ethernet technology. The Ethernet Alliance takes Ethernet standards to market by supporting activities that span from incubation of new Ethernet technologies to interoperability demonstrations and education.


Source: Ethernet Alliance

The post Ethernet Alliance Heralds Next Ethernet Era with OFC 2017 Demo appeared first on HPCwire.

LANL Donation Adding to University of New Mexico Supercomputing Power

Thu, 03/16/2017 - 17:06

March 16, 2017 — A new computing system to be donated to The University of New Mexico Center for Advanced Research Computing (CARC) by Los Alamos National Laboratory (LANL) will put the “super” in supercomputing.

The system is nine times more powerful than the combined computing power of the four machines it is replacing, according to CARC interim director Patrick Bridges.

The machine was acquired from LANL through the National Science Foundation-sponsored PRObE project, which is run by the New Mexico Consortium (NMC). The NMC, comprising UNM, New Mexico State, and New Mexico Tech universities, engages universities and industry in scientific research in the nation’s interest and works to increase the role of LANL in science, education and economic development.

The new system given to UNM from LANL

The system includes:

  • More than 500 nodes, each featuring two quad-core 2.66 GHz Intel Xeon 5550 CPUs and 24 GB of memory
  • More than 4,000 cores and 12 terabytes of RAM
  • 45-50 trillion floating-point operations per second (45-50 teraflops)
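A quick back-of-the-envelope check ties the bullet points together. This is a minimal sketch that assumes exactly 500 nodes and the 4-flops-per-cycle double-precision peak of Nehalem-era Xeons, so it slightly undershoots the quoted 45-50 teraflops, which reflects the "more than 500" nodes:

```python
# Back-of-the-envelope check of the quoted aggregate figures,
# assuming exactly 500 nodes (the article says "more than 500").
nodes = 500
cores_per_node = 2 * 4          # two quad-core Xeon 5550 CPUs per node
mem_per_node_gb = 24
clock_ghz = 2.66
flops_per_cycle = 4             # DP flops/cycle for Nehalem-era Xeons

total_cores = nodes * cores_per_node                            # 4,000 cores
total_ram_tb = nodes * mem_per_node_gb / 1000                   # 12 TB
peak_tflops = total_cores * clock_ghz * flops_per_cycle / 1000  # ~42.6 TF

print(total_cores, total_ram_tb, round(peak_tflops, 1))  # → 4000 12.0 42.6
```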

Additional memory, storage and specialized compute facilities to augment this system are also being planned.

“This is roughly 20 percent more powerful than any other remaining system at UNM,” Bridges said. “Not only will the new machine be easier to administer and maintain, but also easier for students, faculty and staff to use. The machine will provide cutting-edge computation for users and will be the fastest of all the machines.”

Andree Jacobson, chief information officer of the NMC, says that he is pleased the donation will benefit educational efforts.

“Through a very successful collaboration between the National Science Foundation, New Mexico Consortium, and the Los Alamos National Laboratory called PRObE, we’ve been able to repurpose this retired machine to significantly improve the research computing environment in New Mexico,” he said. “It is truly wonderful to see old computers get a new life, and also an outstanding opportunity to assist the New Mexico universities.”

To make space for the new machine, the Metropolis, Pequeña, and Ulam systems at UNM will be phased out over the next couple of months. As they are taken offline, the new machine will be installed and brought online. Users of existing systems and their research will be transitioned to the new machine as part of this process.

Source: The University of New Mexico


Trump Budget Targets NIH, DOE, and EPA; No Mention of NSF

Thu, 03/16/2017 - 14:04

President Trump’s proposed U.S. fiscal 2018 budget issued today sharply cuts science spending while bolstering military spending as he promised during the campaign. Among the big targets are the National Institutes of Health ($6 billion cut from its $34 billion budget), the Department of Energy ($900 million cut from the DOE Office of Science and elimination of the $300 million Advanced Research Projects Agency-Energy), the National Oceanic and Atmospheric Administration (five percent cut) and the Environmental Protection Agency ($2.6 billion cut, or 31.4 percent of its budget).

Perhaps surprisingly, the National Science Foundation – a key funding source for HPC research and infrastructure – was not mentioned in the budget. Science was hardly the only target. The Trump budget closely adhered to the administration’s “America First” tenets, slashing $10 billion from the US Agency for International Development (USAID). Health and Human Services and Education are also targeted for cuts of 28.7 and 16.2 percent respectively.

One of the more thorough examinations of Trump’s proposed budget impact on science is presented in Science Magazine (NIH, DOE Office of Science face deep cuts in Trump’s first budget). The Wall Street Journal also offers a broad review of the full budget (Trump Budget Seeks Big Cuts to Environment, Arts, Foreign Aid) and noted the proposed budget faces bipartisan opposition and procedural hurdles:

“…Already, Republicans have voiced alarm over proposed funding cuts to foreign aid. In addition, Senate rules require 60 votes to advance the annual appropriations bills that set each department’s spending levels. Republicans control 52 Senate seats, meaning the new president will need support from Democrats to advance his domestic spending agenda.

“You don’t have 50 votes in the Senate for most of this, let alone 60,” said Steve Bell, a former GOP budget aide who is now a senior analyst at the Bipartisan Policy Center. “There’s as much chance that this budget will pass as there is that I’m going to have a date with Elle Macpherson.”

A broad chorus of concern is emerging. The Information Technology and Innovation Foundation (ITIF) posted its first take on President Trump’s budget, arguing that it cuts critical investment and eliminates vital programs, and that the preliminary evidence suggests the administration is taking its cues from a deeply flawed framework put forward by the Heritage Foundation.

Overall, ITIF says “The reality is that if the United States is going to successfully manage its growing financial problems and improve living standards for all Americans, it needs to increase its investment in the primary drivers of innovation, productivity, and competitiveness. The Trump budget goes in the opposite direction. If these cuts were to be enacted, they would signal the end of the American century as a global innovation leader.”

Two years ago, the National Strategic Computing Initiative (NSCI) was established by then-President Obama’s executive order (July 2015). It represents a grandly ambitious effort to nourish all facets of the HPC ecosystem in the U.S. That said, after the initial fanfare, NSCI has seemed to languish, although a major element – DOE’s Exascale Computing Program – continues marching forward. It’s not clear how the Trump Administration perceives NSCI, and little additional funding has been funneled into the program since its announcement.

The issuing of the budget closely follows the recent release and media coverage of a December 2016 DOE-NSA Technical Meeting report that declares underinvestment by the U.S. government in HPC and supercomputing puts U.S. computer technology leadership and national competitiveness at risk in the face of China’s steady ascent in HPC. (See HPCwire coverage, US Supercomputing Leaders Tackle the China Question)

The Department of Defense is one of the few winners in the proposed budget with a $53.2 billion jump (10 percent), in keeping with Trump campaign promises. At a top level, NASA is relatively unscathed, but Science reports “At NASA, a roughly $100 million cut to the agency’s earth sciences program would be mostly achieved by canceling four climate-related missions, according to sources. They are the Orbiting Carbon Observatory-3; the Plankton, Aerosol, Cloud, ocean Ecosystem program; the Deep Space Climate Observatory; and the CLARREO Pathfinder. Overall, NASA receives a 1% cut.”

Obviously, it remains early days for the budget battle. There have been suggestions that the proposed cuts, some especially deep, are part of a broad strategy by the Administration to settle for lesser cuts but stronger buy-in from Congress on other Trump policy initiatives.

Link to Science Magazine coverage: http://www.sciencemag.org/news/2017/03/nih-doe-office-science-face-deep-cuts-trumps-first-budget

Link to WSJ article: https://www.wsj.com/articles/trump-budget-seeks-big-cuts-to-environment-arts-foreign-aid-1489636861

Link to Nature coverage: http://www.nature.com/news/us-science-agencies-face-deep-cuts-in-trump-budget-1.21652


CPU-based Visualization Positions for Exascale Supercomputing

Thu, 03/16/2017 - 12:22

Since our first formal product releases of the OSPRay and OpenSWR libraries in 2016, CPU-based Software Defined Visualization (SDVis) has achieved widespread adoption. This rapid uptake is the result of two factors: (1) the general availability of highly optimized CPU-based rendering software – such as the open-source OSPRay ray tracing library and the high-performance OpenSWR raster library in Mesa3D – integrated into popular visualization tools like Kitware’s ParaView and VTK, as well as the community tool VisIt; and (2) SDVis filling the big data visualization community’s need for software that uses runtime visualization algorithms that can handle giga-scale data.

Award-winning results, such as the Best Visualization and Data Analytics Showcase award won by the Los Alamos Data Science at Scale Team at Supercomputing 2016, highlight the fact that CPU-based rendering is now at the forefront of visualization technology. The LANL team’s award-winning asteroid impact visualization is featured as an LANL newsroom picture of the week.

Figure 1: One image from the LANL asteroid impact video (Source: LANL)

Dr. Aaron Knoll (Research Scientist, Scientific Computing and Imaging Institute at the University of Utah) explains that the key change from last year lies in how much OSPRay and other SDVis CPU-based visualization libraries are now being used. “2016 is the year OSPRay became used in practice and production,” he said.

This trend has occurred throughout the scientific community. For example, four out of six finalists at Supercomputing 2016 used OSPRay and/or OpenSWR for their CPU-based visualizations. Knoll also observed that many of the non-finalists – at least 50% – used OSPRay in some fashion. “Before,” he said, “people knew that OSPRay was there – now they just use it by default.”  So, unlike 2015, CPU-based visualizations are no longer a contrary view.

An exascale requirement

The idea behind SDVis is that larger data sets imply higher resolution (and therefore quality), and such data quickly becomes too big for typical GPU memory. Focusing directly on the needs of large-scale visualization rather than first targeting gaming means that SDVis software components can be designed to utilize massive-memory hardware and algorithms that scale as needed across the nodes in a cluster or inside a computational cloud.

Massive data poses a problem as it simply becomes impractical from a runtime point of view to move it around or keep multiple copies. It just takes too much time and memory capacity. This makes in-situ visualization (which minimizes data movement by running the visualization and simulation software on the same hardware) a “must-have.”  As I like to say, “A picture is worth an Exabyte”.
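As a toy illustration of the in-situ pattern just described – simulation and renderer sharing one in-memory buffer, with only small images leaving the node – here is a minimal sketch in plain Python; the function names are hypothetical stand-ins, not any SDVis API:

```python
import random

def simulate_step(field):
    # Stand-in for one timestep of a real solver: smooth the field.
    return [(a + b) / 2 for a, b in zip(field, field[1:] + field[:1])]

def render_in_situ(field):
    # Stand-in renderer: reduce the live data to a tiny "image" (here, a
    # handful of summary pixels) instead of writing the full field to disk.
    n = len(field)
    return [sum(field[i:i + n // 8]) / (n // 8) for i in range(0, n, n // 8)]

field = [random.random() for _ in range(1 << 20)]  # ~1M values, in memory
images = []
for step in range(5):
    field = simulate_step(field)
    images.append(render_in_situ(field))  # reads the same in-memory buffer

# Only the 8-value images ever leave the node; the million-value field never does.
print(len(images), len(images[0]))  # → 5 8
```

The design point is that the renderer consumes the simulation's own buffer; nothing the size of the field is ever serialized.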

Eliminating data movement with in-situ visualization is a hot topic in the scientific literature and is now viewed by experts as a technology requirement for visualization in the exascale era. The paper “An Image-based Approach to Extreme Scale In Situ Visualization and Analysis” by James Ahrens et al. quantifies the data movement challenge as follows: “Imagery is on the order of 10^6 in size, whereas extreme scale simulation data is on the order of 10^15 in size.” Nine orders of magnitude is significant.
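The nine-orders-of-magnitude figure is just exponent arithmetic, sketched here for concreteness:

```python
import math

# The paper's data-movement argument in one line: rendered images are
# ~10^6 bytes, extreme-scale simulation state is ~10^15 bytes.
image_bytes = 10 ** 6
sim_bytes = 10 ** 15
orders = math.log10(sim_bytes) - math.log10(image_bytes)
print(orders)  # → 9.0
```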

Ahrens explained, “We believe very strongly that in-situ is a requirement for exascale supercomputing.” More specifically, “For exascale, we need to be portable across all platforms. It’s an IO plus memory capacity issue.” Knoll agrees that in-situ visualization is a requirement, “the old way of business has to change.”

Managing success: CPU-based SDVis robustly encompasses new algorithmic and software approaches

Dr. Knoll points out that in-situ visualization encompasses a spectrum of technologies, not just software alone. He references the 3D XPoint and Intel Omni-Path architecture. Jointly developed by Micron and Intel, 3D XPoint is a non-volatile storage media that can be used as storage or to augment main memory as the media is byte-addressable. Intel Omni-Path is a high-bandwidth, low-latency communications architecture created by Intel to increase performance and decrease cost.

“Memory is key,” Knoll stresses. He points out that, “An Intel Xeon Phi processor can support up to 24x more DRAM than an equivalent single GPU (NVIDIA Tesla P100 with 16 GB RAM), and an Intel Xeon workstation (e.g., the Brickland-EX platform with 6 TB) up to 384x more. With 3D XPoint the cost of this ‘memory’ will decrease substantially, which goes hand in hand with the benefits of big data runtime algorithms where it does not cost substantially more to access (and render) 6 TB of data than 16 GB of data.”
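Knoll's ratios check out against the round figures in the text (a quick sketch, assuming the 16 GB Tesla P100 as the baseline):

```python
# Verify the quoted memory-capacity ratios from the round figures above.
gpu_gb = 16                # NVIDIA Tesla P100 baseline
phi_gb = 24 * gpu_gb       # Xeon Phi: "up to 24x more" -> 384 GB
workstation_gb = 6 * 1024  # Brickland-EX class platform with 6 TB

print(phi_gb // gpu_gb, workstation_gb // gpu_gb)  # → 24 384
```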

Knoll envisions 3D XPoint working as an in-core file-system at scale that blurs the line between RDMA, in-situ visualization, and distributed file-systems. One example is the CORAL project that, “leverages Intel Crystal Ridge [now known as 3D XPoint] non-volatile memory technology that is configured in DDR4 compatible DIMM form factor with processor load/store access semantics on CORAL point design compute nodes. This software design will allow applications running on any CORAL point design compute node to have a global view of and global access to Crystal Ridge that is on other compute nodes.” [see PDF] “This technology gets me very excited,” Knoll adds. The importance of the communications fabric in this scenario is hopefully obvious.

Focusing visualization solutions on data size rather than gaming usage means that SDVis software components can be designed to utilize massive-memory hardware and scale as needed across the nodes in a cluster or inside a computational cloud. This frees developers to design for the user rather than the hardware. Knoll mentioned that VMD (Visual Molecular Dynamics), one of the two SC16 finalist applications in the visualization competition, wanted to use OSPRay in their SC16 submission, but the integration of OSPRay into VMD had not been completed in time for the SC16 submission. As a result, the SC16 finalist version had to go with the OpenGL-based data usage model. Happily, OSPRay is now integrated into VMD. The other SC16 finalist targeted InfoVis usage for which OSPRay was not required.

Figure 2: The Los Alamos team won the visualization award at SC16 for their SDVis based work

The transition from OpenGL-targeted hardware rasterization to CPU-based rendering means that algorithm designers can exploit large-memory (hundreds of gigabytes or more) visualization nodes to create logarithmic runtime algorithms.

Dr. Knoll stresses the importance of logarithmic runtime algorithms (a subtle but key technical point) as users are faced with orders of magnitude increases in data sizes on the big supercomputers. Logarithmic runtime algorithms are important for big visualizations and exascale computing as the runtime increases slowly (e.g. logarithmically) even when data sizes increase by orders of magnitude. Such algorithms tend to consume large amounts of in-core memory to hold the data and associated data structures. Thus memory capacity and latency are two key hardware metrics.
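A minimal illustration of why logarithmic runtimes matter at this scale: binary search needs only a handful of additional probes each time the data grows a thousandfold. This is plain Python for illustration, not an SDVis algorithm:

```python
def binary_search_steps(data, target):
    """Binary search (bisect_left style) that also counts probes,
    to make the O(log n) growth visible."""
    lo, hi, steps = 0, len(data), 0
    while lo < hi:
        mid = (lo + hi) // 2
        steps += 1
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo, steps

# range() gives O(1)-memory "sorted data", so we can probe a billion elements.
for exponent in (3, 6, 9):
    n = 10 ** exponent
    idx, steps = binary_search_steps(range(n), n - 1)
    print(f"n=10^{exponent}: located last element in {steps} probes")
```

Each thousandfold jump in data size adds only about ten probes, which is exactly the behavior that keeps interactive rendering feasible as data sets grow by orders of magnitude.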

Research at the University of Utah [PDF] shows a single large memory (3 terabyte) workstation can deliver competitive and even superior interactive rendering performance compared to a 128-node GPU cluster; this is paradigm-changing. The group is exploring in-situ visualization using P-k-d trees and other fast, in-core approaches.  In another effort, the University of Utah team is creating spectacular resolution images based on massive data using fiber surfaces, OSPRay, and in-situ visualization. One example image is shown below. These two projects at the University of Utah hopefully make the point that new visualization algorithms are currently a hot research topic.

Figure 3: Fiber surfaces: classifying and summarizing multifields [i]

Our design efforts on OSPRay include the recognition that our software cannot – and does not – exist in a vacuum. The challenge is to provide sufficient modularity so researchers can adapt the package without having to touch the golden-build source code. In other words, OSPRay is designed so researchers can explore new approaches without breaking the code for everyone. Our solution was to extend OSPRay with the aptly named ‘modules’ capability, which first appeared in v1.2.0. In using modules, the University of Utah team notes that modules provide a logical pairing between algorithm and data, where researchers can: (1) write a module and (2) pair it with a data-wrangling API like the VL-3 volume rendering tool. By design, successful and widely utilized modules can be evaluated by the OSPRay team across a number of platforms as possible additions to the main body of the OSPRay code. Such accessibility and portability across CPU platforms highlights the adaptable yet robust characteristics of SDVis software.

Education will likely increase the rate of adoption

The adoption rate over the past year has been phenomenal, but we expect it to increase even further. As Dr. Knoll stated, “2016 is the year OSPRay became used in practice and production.” As a production visualization tool for scientific computing, OSPRay and more generically CPU-based SDVis has clearly come of age. Integration into packages such as ParaView and VisIt has made CPU-based rendering mainstream, which in turn means that using a CPU for visualization can no longer be considered a contrary viewpoint; it’s becoming the norm.

Education is expected to accelerate that adoption. A number of excellent educational resources are available online. For example, view the 2016 Intel HPC Developer software visualization track videos to delve more deeply into the technology and third-party use cases. Of course, hands-on experience and interacting with peers is always of value. Such interactions can be had at the IXPUG May 2017 visualization workshop at the Texas Advanced Computing Center. Immediate hands-on experience can also be had simply by working with VisIt and ParaView, or by downloading the OSPRay code from GitHub and the OpenSWR code via the Mesa3D website. Further background and up-to-date information about Software Defined Visualization is available at our IDZ (Intel Developer Zone) SDVis landing page, and in Chapter 17 of my Morgan Kaufmann book, Intel Xeon Phi Processor High Performance Programming: Knights Landing Edition.

To utilize CPU-based SDVis in your software, look to the following packages, which provide core functionality for current SDVis applications: (1) OSPRay, a scalable and portable ray tracing engine; (2) Embree, a library of high-performance ray-tracing kernels; and (3) OpenSWR, a highly scalable, drop-in OpenGL-replacement software rasterizer for CPUs.

[i] Kui Wu, Aaron Knoll, Ben Isaac, Hamish Carr, and Valerio Pascucci, “Direct Multifield Volume Ray Casting of Fiber Surfaces,” IEEE Visualization 2015.

About the Author

Jim Jeffers is a Principal Engineer and engineering leader at Intel who is passionate about world-changing technology, and is an author and industry expert on parallel computing hardware.


Flatiron Institute to Repurpose SDSC’s Gordon Supercomputer

Thu, 03/16/2017 - 07:42

SAN DIEGO, Calif., March 16, 2017 — The San Diego Supercomputer Center (SDSC) at the University of California San Diego and the Simons Foundation’s Flatiron Institute in New York have reached an agreement under which the majority of SDSC’s data-intensive Gordon supercomputer will be used by Simons for ongoing research following completion of the system’s tenure as a National Science Foundation (NSF) resource on March 31.

Under the agreement, SDSC will provide high-performance computing (HPC) resources and services on Gordon for the Flatiron Institute to conduct computationally-based research in astrophysics, biology, condensed matter physics, materials science, and other domains. The two-year agreement, with an option to renew for a third year, takes effect April 1, 2017.

Under the agreement, the Flatiron Institute will have annual access to at least 90 percent of Gordon’s system capacity. SDSC will retain the rest for use by other organizations including UC San Diego’s Center for Astrophysics & Space Sciences (CASS), as well as SDSC’s OpenTopography project and various projects within the Center for Applied Internet Data Analysis (CAIDA), which is based at SDSC.

“We are delighted that the Simons Foundation has given Gordon a new lease on life after five years of service as a highly sought after XSEDE resource,” said SDSC Director Michael Norman, who also served as the principal investigator for Gordon. “We welcome the Foundation as a new partner and consider this to be a solid testimony regarding Gordon’s data-intensive capabilities and its myriad contributions to advancing scientific discovery.”

“We are excited to have a big boost to the processing capacity for our researchers and to work with the strong team from San Diego,” said Ian Fisk, co-director of the Scientific Computing Core (SCC), which is part of the Flatiron Institute.

David Spergel, director of the Flatiron Institute’s Center for Computational Astrophysics (CCA) said, “CCA researchers will use Gordon both for simulating the evolution and growth of galaxies, as well as for the analysis of large astronomical data sets.  Gordon offers us a powerful platform for attacking these challenging computational problems.”

Simons Array and Simons Observatory

The POLARBEAR project and its successor, The Simons Array – led by UC Berkeley and funded first by the Simons Foundation and then in 2015 by the NSF under a five-year, $5 million grant – will continue to use Gordon as a key resource.

“POLARBEAR and The Simons Array, which will deploy the most powerful CMB (Cosmic Microwave Background) radiation telescope and detector system ever made, are two NSF supported astronomical telescopes that observe CMB, in essence the leftover ‘heat’ from the Big Bang in the form of microwave radiation,” said Brian Keating, a professor of physics at UC San Diego’s Center for Astrophysics & Space Sciences and a co-PI for the POLARBEAR/Simons Array project.

“The POLARBEAR experiment alone collects nearly one gigabyte of data every day that must be analyzed in real time,” added Keating. “This is an intensive process that requires dozens of sophisticated tests to assure the quality of the data. Only by leveraging resources such as Gordon are we able to continue our legacy of success.”

Gordon also will be used in conjunction with the Simons Observatory, a 5-year $40 million project awarded by the Foundation in May 2016 to a consortium of universities led by UC San Diego, UC Berkeley, Princeton University, and the University of Pennsylvania. In the Simons Observatory, new telescopes will join the existing POLARBEAR/Simons Array and Atacama Cosmology Telescopes to produce an order of magnitude more data than the current POLARBEAR experiment. An all-hands meeting for the new project will take place at SDSC this summer.

Delivering the Data

The result of a five-year, $20 million NSF grant awarded in late 2009, Gordon entered production in early 2012 as one of the 50 fastest supercomputers in the world, and the first one to use massive amounts of flash-based memory. That made it many times faster than conventional HPC systems, while having enough bandwidth to help researchers sift through tremendous amounts of data. Gordon also has been a key resource within NSF’s XSEDE (Extreme Science and Engineering Discovery Environment) project. The system will officially end its NSF duties on March 31 following two extensions from the agency.

By the end of February 2017, Gordon had supported research and education by more than 2,000 command-line users and over 7,000 gateway users, primarily through resource allocations from XSEDE.  One of Gordon’s most data-intensive tasks was to rapidly process raw data from almost one billion particle collisions as part of a project to help define the future research agenda for the Large Hadron Collider (LHC). Gordon provided auxiliary computing capacity by processing massive data sets generated by one of the LHC’s two large general-purpose particle detectors used to find the elusive Higgs particle. The around-the-clock data processing run on Gordon was completed in about four weeks’ time, making the data available for analysis several months ahead of schedule.

About the Simons Foundation

The Simons Foundation’s mission is to advance the frontiers of research in mathematics and the basic sciences, supporting discovery-driven scientific research. Co-founded in New York City by Jim and Marilyn Simons, the foundation celebrated its 20th anniversary in 2014. The Foundation makes grants in four program areas: mathematics and physical sciences, life sciences, autism research, and education and outreach. In 2016 the Foundation launched an internal research division called the Flatiron Institute, a multidisciplinary institute focused on computational science.

About SDSC

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s petascale Comet supercomputer continues to be a key resource within the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) program.

Source: SDSC


Researchers Write/Read Bits using Single Atoms

Wed, 03/15/2017 - 15:56

While commercial application isn’t imminent, researchers have successfully scaled storage down to the classical limit of a single atom, according to work reported in Nature last week. An international team of researchers, using a scanning tunneling microscope (STM) at the IBM Almaden Research Center, was able to write and read bits using single holmium (Ho) atoms.

“The single-atom bit represents the ultimate limit of the classical approach to high-density magnetic storage media. So far, the smallest individually addressable bistable magnetic bits have consisted of 3–12 atoms. Long magnetic relaxation times have been demonstrated for single lanthanide atoms in molecular magnets, for lanthanides diluted in bulk crystals, and recently for ensembles of holmium (Ho) atoms supported on magnesium oxide (MgO). These experiments suggest a path towards data storage at the atomic limit,” write the authors of the Nature Letter, “Reading and writing single-atom magnets.”

Many theoretical and practical issues remain. For instance, no one is quite sure of the mechanism by which Ho atoms retain their “relaxation” period. Nevertheless, the researchers, led by Andreas Heinrich, director of the Center for Quantum Nanoscience at the Institute for Basic Science, South Korea, were able to construct a two-bit atomic Ho array (figure below) as a proof of concept.

In describing the work for Nature in its News & Views section, Roberta Sessoli of the University of Florence (who is unaffiliated with the work) called it an unambiguous demonstration that writes and reads at the single-atom scale are possible.

The authors write, “Here we address the magnetic bistability of individual Ho atoms on MgO, which we switch using current pulses and detect through the tunnel magnetoresistance using a spin-polarized scanning tunnelling microscope (STM). We unambiguously prove the magnetic origin of the switching in the tunnelling resistance using STM-enabled single-atom electron spin resonance (ESR) on an adjacent iron (Fe) sensor atom. Additionally, we determine by this method the out-of-plane component of the Ho magnetic moment, and use the long lifetime to store two bits of information in an array of two Ho atoms whose magnetic state can be read locally by magnetoresistance, and remotely by means of ESR on a nearby sensor atom.” (See figure of experimental set-up below.)

Summing up the research, Sessoli wrote: “Although (Fabian) Natterer and colleagues’ work is still far from having real-world applications, their advancement of scanning probe microscopy techniques has shown that the storage and retrieval of magnetic information in a single atom is feasible. Several issues need to be resolved. In terms of reading and writing data, the techniques involved are not the most user-friendly or affordable. Even if other sensing methods are developed, the peculiar magnetic properties of Ho atoms exploited by the authors can be realized only in extreme conditions, such as in an ultrahigh vacuum.”

Link to Nature Letter: http://www.nature.com/nature/journal/v543/n7644/full/nature21371.html

Link to Nature News & Views: http://www.nature.com/nature/journal/v543/n7644/full/nature21371.html

Authors: Fabian D. Natterer1,2, Kai Yang1,3, William Paul1, Philip Willke1,4, Taeyoung Choi1, Thomas Greber1,5, Andreas J. Heinrich6,7 & Christopher P. Lutz1

1, IBM Almaden Research Center, San Jose, California 95120, USA; 2, Institute of Physics, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland; 3, School of Physical Sciences and Key Laboratory of Vacuum Physics, University of Chinese Academy of Sciences, Beijing 100049, China; 4, IV. Physical Institute, University of Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen, Germany; 5, Physik-Institut, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland; 6, Institute of Basic Science, Center for Quantum Nanoscience, Seoul, South Korea; 7, Physics Department, Ewha Womans University, Seoul, South Korea.

The post Researchers Write/Read Bits using Single Atoms appeared first on HPCwire.

US Supercomputing Leaders Tackle the China Question

Wed, 03/15/2017 - 15:15

As China continues to prove its supercomputing mettle via the Top500 list and advances its ambitious plan to stand up an exascale machine by 2020, one to two years ahead of other world tech leaders, the U.S. has yet to mount a truly competitive counter. Many government and industry representatives we’ve spoken with are quick to express concern (usually off the record) about the United States’ position among the global supercomputing powers, but official channels haven’t been very forthcoming.

A U.S. government report, published by the Networking and Information Technology Research and Development (NITRD) program, reflects a new willingness to speak openly about the increased global pressures that impact the competitiveness of U.S. supercomputing and the implications for the nation’s economic prosperity, scientific leadership and military security.

The report summarizes the findings of a joint DOE-NSA Technical Meeting held September 28-29, 2016, which brought approximately 60 experts and leaders from the U.S. HPC community together to assess U.S. plans in light of China’s standing up the number-one ranked 93-petaflops Sunway TaihuLight in June. As we know, that was just the latest show of strength. China had already held the Top500’s top spot with Tianhe-2 for six iterations of the list (beginning with its debut in June 2013), the equivalent of three years. With the addition of TaihuLight, China now claims the number one and number two systems, which provide the list with nearly 19 percent of its total FLOPS. Arguments that question the utility of China’s FLOPS-centric approach in relation to the US, EU and Japanese focus on “sustained application performance” have merit, but only to a point.
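
The “nearly 19 percent” figure is easy to sanity-check. A back-of-the-envelope sketch follows; the Linpack ratings and the roughly 672-petaflop aggregate for the November 2016 list are our assumptions, not figures taken from the report:

```python
# Rough check of the top-two systems' share of total Top500 FLOPS.
# Assumptions: TaihuLight 93.01 PF and Tianhe-2 33.86 PF (Linpack),
# and an aggregate of roughly 672 PF for the November 2016 list.
taihulight_pf = 93.01
tianhe2_pf = 33.86
list_total_pf = 672.0  # assumed aggregate

share = (taihulight_pf + tianhe2_pf) / list_total_pf
print(f"top-two share: {share:.1%}")  # ~18.9%, i.e. "nearly 19 percent"
```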

The Chinese machine is not a stunt machine; the report is explicit about this:

“It is not a stunt. TaihuLight is a significant step up in performance for China; indeed, its 93 petaflop/s is significantly greater than the aggregate flops available to DOE today – Titan, Sequoia, Mira, Trinity (Haswell), etc. More importantly, where previous Chinese HPC systems were unimpressive except for running benchmarks (i.e., LINPACK tests), TaihuLight is being used for cutting-edge research, with real-world applications running well at scale. This year, three of six finalists for the Gordon Bell competition (see Appendix A) are Chinese efforts; China has never been a finalist before.”
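
The aggregate claim is straightforward to check arithmetically. The sketch below uses commonly cited Linpack ratings for the named DOE systems; the exact figures are our assumptions, not values supplied by the report:

```python
# Checking the report's claim that TaihuLight's 93 PF exceeds the
# aggregate FLOPS of DOE's flagship systems.
# Assumed Linpack ratings, in petaflops:
doe_systems = {
    "Titan": 17.59,
    "Sequoia": 17.17,
    "Mira": 8.59,
    "Trinity (Haswell)": 8.10,
}
taihulight_pf = 93.01

doe_total = sum(doe_systems.values())
print(f"DOE aggregate: {doe_total:.2f} PF vs TaihuLight: {taihulight_pf} PF")
# TaihuLight alone is well above the ~51 PF aggregate.
```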

Meeting attendees were also impressed by China’s homegrown processor architecture and the innovative nature of the design, the report notes.

In terms of both Top500 system share and aggregate performance share, China is now on par with the U.S.; prior to 2016, the U.S. had held a clear lead going all the way back to the first Top500 ranking in 1993.

The report reflects this sentiment: “These results indicate that China has attained a near-peer status with the U.S. in HPC. The U.S. asserted its intention to maintain a leadership position in HPC in the July 2015 Executive Order establishing the National Strategic Computing Initiative (NSCI). It is now clear that future U.S. leadership will be challenged by the Chinese. The 2012 Net Assessment of Foreign HPC noted the aggressive development of Chinese HPC capabilities, and, in particular, the accelerated rate of investment that China was making in these areas.”

National security was a high-priority topic. “HPC play[s] a vital role in the design, development, and analysis of many – perhaps almost all – modern weapons systems and national security systems,” the report states, and concludes “[national security] requires the best computing available[;] loss of leadership in HPC will severely compromise our national security.”

But the report authors also provided a thoughtful statement on the nature of China’s motivations:

Participants [especially those from industry] stressed that their personal interactions with Chinese researchers and at supercomputing centers showed a mindset where computing is first and foremost a strategic capability for improving the country: for pulling a billion people out of poverty; for supporting companies that are looking to build better products, or bridges, or rail networks; for transitioning away from a role as a low-cost manufacturer for the world; for enabling the economy to move from “Made in China” to “Made by China.”

Having said that, their focus on using HPC systems and codes to build more advanced nuclear reactors and jet engines suggests an aggressive plan to achieve leadership in high-tech manufacturing, which would undermine profitable parts of the U.S. economy. And such codes, together with their scientific endeavors, are good proxies for the tools needed to design many different weapons systems.


The NSCI, and by extension the Exascale Computing Program, is central to the United States’ plan to ensure its global economic, scientific and military competitiveness. The post-Moore’s Law challenge, led by IARPA and other agencies, is another key part of the U.S. strategy.

“Leadership positions, once lost, are expensive to regain. To maintain U.S. leadership in HPC, a surge of USG investment and action is needed to address HPC priorities,” the report states.


The 2012 Net Assessment of Foreign HPC noted a divergence in R&D investment between the U.S. (slowing) and China (accelerating). Objectives #1, #2, and #3 of the NSCI [refer to Appendix E in report] can be seen as identifying the USG investments necessary to support a healthy HPC ecosystem for the next 30+ years. A notional timeline for the impact of these investments is the following:

+ Today to 2025 – HPC ecosystem nurtured by USG investments to reach a capable Exascale system

+ 2025 to 2035 – HPC ecosystem takes advantage of USG leadership in architectural innovations (described below)

+ 2035 and beyond – HPC ecosystem endures because of USG investments in “Post Moore’s Law” era

At the same time that U.S. supercomputing experts and officials are calling for “an investment surge” and spelling out the consequences of status quo funding levels, the Trump administration has proposed cuts to the DOE that would roll back funding to 2008 levels, guided by a blueprint from right-wing think tank, the Heritage Foundation.

“If these investments are not made, the U.S. can expect an HPC capability gap to emerge and widen in less than a decade,” the report asserts.

NSCI Objectives (source)

1) Accelerate delivery of a capable exascale computing system,

2) Increase coherence between technology for modeling/simulation and data analytics,

3) Establish a viable path forward in the “post-Moore’s Law” era,

4) Increase the capacity and capability of an enduring national HPC ecosystem, and

5) Develop U.S. government, industry, and academic collaborations to share the benefits.

The post US Supercomputing Leaders Tackle the China Question appeared first on HPCwire.

SDSC’s Flash ‘Gordon’ Too Fast for Retirement

Wed, 03/15/2017 - 08:52

They say a dog year is equivalent to about seven human years, but the average supercomputer’s lifespan is even shorter due mainly to the economics of powering and cooling the machines. A typical life cycle for today’s big iron is about five years, but sometimes another opportunity presents itself. Such is the case for “Gordon,” the San Diego Supercomputer Center (SDSC) system that was pioneering for its use of flash technology when it entered production in January 2012.

Five years goes by quickly, and now Gordon is nearing its official retirement date; on March 31, the data-focused machine ends its term as a National Science Foundation (NSF) resource. But that flash-enabled bandwidth, so valuable for today’s data-heavy workloads, is proving to have staying power.

Yesterday, SDSC announced that “Gordon” will live on, thanks to an agreement with the Simons Foundation’s Flatiron Institute in New York. Flatiron will use Gordon’s computational power for ongoing research in astrophysics, biology, materials research and other fields, according to an announcement put out by SDSC.

SDSC will provide high-performance computing (HPC) resources and services on Gordon for the Flatiron Institute as part of a two-year agreement, with an option to renew for a third year. The agreement takes effect April 1, 2017.

The contract guarantees Flatiron annual access to at least 90 percent of the machine’s cycles; SDSC will be able to use the remaining capacity within UC San Diego’s Center for Astrophysics & Space Sciences (CASS), SDSC’s OpenTopography project and various projects within the Center for Applied Internet Data Analysis (CAIDA), which is based at SDSC.

“We are delighted that the Simons Foundation has given Gordon a new lease on life after five years of service as a highly sought after XSEDE resource,” said SDSC Director Michael Norman, who also served as the principal investigator for Gordon. “We welcome the Foundation as a new partner and consider this to be a solid testimony regarding Gordon’s data-intensive capabilities and its myriad contributions to advancing scientific discovery.”

“We are excited to have a big boost to the processing capacity for our researchers and to work with the strong team from San Diego,” said Ian Fisk, co-director of the Scientific Computing Core (SCC), which is part of the Flatiron Institute.

David Spergel, director of the Flatiron Institute’s Center for Computational Astrophysics (CCA) said, “CCA researchers will use Gordon both for simulating the evolution and growth of galaxies, as well as for the analysis of large astronomical data sets.  Gordon offers us a powerful platform for attacking these challenging computational problems.”

Other projects that will continue to benefit from Gordon include the Simons Array (the successor to the POLARBEAR project) and Simons Observatory.

“POLARBEAR and The Simons Array, which will deploy the most powerful CMB (Cosmic Microwave Background) radiation telescope and detector system ever made, are two NSF-supported astronomical telescopes that observe CMB, in essence the leftover ‘heat’ from the Big Bang in the form of microwave radiation,” said Brian Keating, a professor of physics at UC San Diego’s Center for Astrophysics & Space Sciences and a co-PI for the POLARBEAR/Simons Array project.

Through its NSF tenure, Gordon has supported research and education by more than 2,000 command-line users and over 7,000 gateway users, primarily through XSEDE-based resource allocations.

“One of Gordon’s most data-intensive tasks was to rapidly process raw data from almost one billion particle collisions as part of a project to help define the future research agenda for the Large Hadron Collider (LHC),” notes SDSC. “Gordon provided auxiliary computing capacity by processing massive data sets generated by one of the LHC’s two large general-purpose particle detectors used to find the elusive Higgs particle. The around-the-clock data processing run on Gordon was completed in about four weeks’ time, making the data available for analysis several months ahead of schedule.”

This story relied on reporting from SDSC. For more details, refer to the SDSC press release.

The post SDSC’s Flash ‘Gordon’ Too Fast for Retirement appeared first on HPCwire.

Women at SC Awarded the CENIC 2017 Innovations in Networking Award

Wed, 03/15/2017 - 08:02

BERKELEY, Calif. & LA MIRADA, Calif., March 15, 2017 — In recognition of work to expand the diversity of the SCinet volunteer staff and to provide professional development opportunities to highly qualified women in the field of networking, the Women in IT Networking at SC (WINS) program has been selected by CENIC as a recipient of the 2017 Innovations in Networking Award for Experimental Applications. Project members being recognized include Wendy Huntoon (KINBER), Marla Meehl (UCAR), and Kate Petersen Mace, Lauren Rotman, and Jason Zurawski (ESnet).

This powerful collaboration fosters gender diversity in the field of technology, a critical need. By funding women IT professionals to participate in SCinet and to attend the Supercomputing Conference, the program allows the next generation of technology leaders to gain critical skills.

“Until you roll your sleeves up and dig into building and operating SCinet, which is an amazingly robust, high-bandwidth network that exists for just two weeks, it’s hard to imagine just how tough it is — and how rewarding it is,” said Inder Monga, Director of ESnet, the Department of Energy’s Energy Sciences Network. “Many of our ESnet engineers have been members of the SCinet team over the years, bringing back valuable skills in network operations, project management, teamwork, and on-the-spot problem-solving. Our support of WINS is one way of contributing back to the conference and the community’s growth and success.”

In 2016, eight women were selected to be part of the WINS program; three were funded to return to SC16 after participating in the 2015 WINS cohort. Sana Bellamine, a CENIC Core Engineer, was a 2015 WINS award winner and was invited to participate again in SC16. As part of her work on SCinet, she used high-end, state-of-the-art equipment to test 100 Gbps circuits, set up the test environment, and documented the procedure for doing so. In addition to developing technical expertise, Sana formed lasting relationships with other members of the 2015 WINS cohort. They regularly exchange knowledge, code, and advice using a Slack channel (a form of instant messaging), which helps inform their ongoing work within their respective organizations.

As Sana reflects on this experience and its continuing benefits, she notes, “I am thankful to CENIC and to the WINS program for the opportunity to be part of the SCinet team. As one of the SCinet wide-area network team members in 2016, I worked in close collaboration with another awardee on the development of procedures for testing 100GE circuits at line rate. These procedures were used to validate 7x100GE circuits into the Supercomputing show floor. CENIC associates were able to achieve the desired throughput for their planned demos over these 100GE links. The SCinet network is a mature, multi-vendor environment with a rich set of tools. Having direct exposure to the SCinet network enables me to explore new approaches in my daily work at CENIC.”

Kate Petersen Mace, one of the project leaders from ESnet and the SC14 SCinet Chair, notes, “The WINS program has been an overwhelming success for SCinet as a whole. As a long-time SCinet member, I understand through experience the amazing challenges and opportunities that volunteering for SCinet presents. The dedication and diverse set of skills the WINS awardees have brought has been invaluable, and has strengthened the SCinet team. The WINS Management team is thrilled to see CENIC help lead the way in celebrating the value of a diverse workforce through its continued support of unique training and professional development opportunities—such as SCinet—for its employees.”

Participants grow immeasurably through their involvement with this high-capacity network that supports revolutionary HPC applications and experiments. By joining volunteers from academia, government, and industry working together to design and deliver SCinet, they acquire skills and experiences they can use in their daily work at their home institutions.

WINS is funded jointly through a grant from the National Science Foundation (NSF) and direct funding from the Department of Energy’s (DOE) ESnet. WINS awardees are selected from a competitive application process which includes review by an external committee of leaders in the research and education networking community.

Funds from NSF and DOE provide WINS awardees travel support to participate in SCinet staging and set-up, which take place in the weeks leading up to the conference. The awardees continue their work during the entire week of the Supercomputing conference, when SCinet goes live for attendees to use for any networking need—from wireless Internet access to multi-gigabit demonstrations. At the conclusion of the conference, awardees then help tear down the entire infrastructure in approximately 48 hours.

After their hands-on experience at the SC conference, participants receive support to attend community conferences like the Quilt semi-annual member meeting, and regional network meetings such as the CENIC annual meeting, the Internet2 Global Summit, and the National Lab Information Technology (NLIT) meeting, among others. At these events, the WINS awardees participate in panel discussions to share their experiences and continue building their professional networks. This participation has resulted in increased awareness of and dialogue about the diversity gap that continues to persist in the IT community.

“WINS is a creative approach to the problem of increasing the number of talented network engineers, by developing the capabilities and vision of underrepresented female engineers through deep engagement in SCinet,” notes Kevin Thompson, program manager in the NSF’s Office of Advanced Cyberinfrastructure, which provides WINS funding. “The project attacks a visible challenge in the production R&E networking community: gender diversity in the leadership and workforce. This effort will, at a minimum, significantly impact the careers of 15 women, and it has tremendous potential to do much more in the years ahead, especially if its sustainability approach succeeds.”

Innovations in Networking Awards are presented each year by CENIC to highlight the exemplary innovations that leverage ultra-high bandwidth networking, particularly where those innovations have the potential to transform the ways in which instruction and research are conducted or where they further the deployment of broadband in underserved areas.

About ESnet

The Energy Sciences Network (ESnet) is a high-performance, unclassified network built to support scientific research. Funded by the U.S. Department of Energy’s Office of Science (SC) and managed by the Lawrence Berkeley National Laboratory, ESnet provides services to more than 40 DOE research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting DOE-funded scientists to productively collaborate with partners around the world.

About WINS

The Women in IT Networking at SC (WINS) program, introduced in November 2015 at the SC15 conference in Austin, Texas, was developed as a means of addressing the prevalent gender gap in Information Technology (IT), particularly in the fields of network engineering and high-performance computing (HPC). The 2015 program enabled five talented early- to mid-career women from diverse regions of the U.S. research and education IT community to participate in the ground-up construction of SCinet, one of the fastest and most advanced computer networks in the world. WINS is a joint effort between the Energy Sciences Network (ESnet), the Keystone Initiative for Network Based Education and Research (KINBER), and the University Corporation for Atmospheric Research (UCAR).

About UCAR

The University Corporation for Atmospheric Research (UCAR) is a nonprofit consortium of more than 100 North American member colleges and universities focused on research and training in the atmospheric and related earth-system sciences. UCAR manages the National Center for Atmospheric Research with sponsorship by the National Science Foundation. Through its community programs, UCAR supports and extends the capabilities of its academic consortium.


About KINBER

The Keystone Initiative for Network Based Education and Research (KINBER) is a membership organization devoted to fostering collaboration through technology for education, research, healthcare, libraries, public media, workforce development, government, and economic development. KINBER offers connectivity, technology infrastructure solutions and training, and professional development opportunities tailored to support the needs of its members, ranging from libraries and health systems to large university settings. KINBER built and manages the 1,800-mile Pennsylvania Research and Education Network, known as PennREN, which provides advanced data networking to non-profit organizations and fosters collaboration between Pennsylvania-based organizations for value-added services such as Internet2 connectivity, realistic high-definition video, real-time video conferencing, and data sharing. PennREN access points are now in 51 of Pennsylvania’s 67 counties, with initial connections in more than 70 locations over the 1,800-mile network.

Source: WINS

The post Women at SC Awarded the CENIC 2017 Innovations in Networking Award appeared first on HPCwire.

ISC 2017 Dedicates a Day to Deep Learning

Wed, 03/15/2017 - 07:26

FRANKFURT, Germany, March 15, 2017 — ISC High Performance is excited to bring users, as well as academic and industry leaders, to spend a full day together on Wednesday, June 21, to discuss the recent advances in artificial intelligence based on deep learning technology.

This year’s ISC High Performance conference and exhibition will be held at Messe Frankfurt from June 18 – 22, and will be attended by over 3,000 HPC community members, including researchers, scientists and business people.

The overwhelming success of deep learning has triggered a race to build larger artificial neural networks, using growing amounts of training data in order to allow computers to take on more complex tasks. Such work will challenge the computational feasibility of deep learning of this magnitude, requiring massive data throughput and compute power. Hence, implementing deep learning at scale has become an emerging topic for the high performance computing (HPC) community.

The program is designed and chaired by deep learning experts, Dr.-Ing. Janis Keuper, senior scientist at The Fraunhofer Institute for Industrial Mathematics ITWM, and Dr. Damian Borth, director of the deep learning competence center at the German Research Center for Artificial Intelligence (DFKI).

The Deep Learning Day will offer two keynotes, along with a series of talks, to give attendees up-to-date insights on the rapid development in deep learning and also demonstrate how this technology can be enabled with HPC. Also discussed will be how the computational demands of deep learning will affect current and future HPC infrastructure.

The principal topic areas include:

  1. How deep learning is changing the HPC landscape
  2. HPC and big data for autonomous driving and connected vehicles
  3. Future challenges for deep learning and HPC

The organizers have lined up a range of speakers from industry, academia, and the vendor community to share their expertise in this area. The current list includes:

Zeynep Akata, Amsterdam Machine Learning Lab, University of Amsterdam (Keynoter)
Brian van Essen, LLNL
Costas Bekas, IBM Research Zurich
René Wies, BMW Group
Kai Demtröder, BMW Group
Marco Pennachiotti, BMW Group
Mario Tokarz, BMW Group
Naveen Rao, Intel Data Center Group
Achim Noller, Bosch
Gunter Röth, NVIDIA
Mayank Daga, Advanced Micro Devices
Stephan Wolf, Google

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, the Research Poster Sessions, the PhD Forum, the Project Poster Sessions and Exhibitor Forums.

Source: ISC

The post ISC 2017 Dedicates a Day to Deep Learning appeared first on HPCwire.

New Japanese Supercomputing Project Targets Exascale

Tue, 03/14/2017 - 19:28

Supercomputing in Japan is on a roll, pushed by AI synergies. In the span of just a few weeks, HPE and Fujitsu have both been tapped to provide Pascal GPU-based deep learning supercomputers to Japanese research institutions (Tokyo Institute of Technology and RIKEN, respectively).

Japan is also a front-runner in the race for exascale; the nation has promised to stand up its first exascale machine, “Post-K” by early 2022. Post-K is the successor to the K computer, Japan’s current reigning number-cruncher, and will be some 100 times faster.

Japan News this week revealed that another supercomputing project is also in the works, this one from emerging supercomputer maker ExaScaler Inc. and Keio University. Under the direction of ExaScaler CEO Dr. Motoaki Saito, the partners are developing an original supercomputer design with exascale aspirations.

Dr. Saito is the founder of three HPC companies, each targeting a key aspect of extreme-scale supercomputing:

1. PEZY Computing Co. Ltd., which is developing a manycore processor.

2. ExaScaler Inc., focused on highly-efficient liquid-cooling.

3. Ultra Memory, Inc., developing a 3D multi-layer memory system (see patent info).

PEZY and ExaScaler built one of the world’s most energy-efficient supercomputers, Shoubu, which held the number one spot on the Green500 for three iterations of the bi-annual list (June 2015, November 2015 and June 2016). With a rating of 6.67 gigaflops-per-watt, Shoubu is currently number three on the most recent listing (having been surpassed by two Pascal GPU-powered machines). Installed at RIKEN, Shoubu is based on the companies’ ZettaScaler-1.6 architecture. ZettaScaler-2.0 is due out in 2017.
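
For perspective on what that efficiency figure means for the exascale ambitions discussed here, a quick extrapolation (assuming, generously, that efficiency stays flat at Shoubu’s quoted 6.67 gigaflops-per-watt, which real systems would not sustain at scale):

```python
# Power draw implied at exascale if efficiency held at Shoubu's
# Green500 rating. Assumption: 6.67 gigaflops-per-watt, flat at scale.
EFFICIENCY_GFLOPS_PER_WATT = 6.67
EXAFLOP = 1e18  # flop/s

watts = EXAFLOP / (EFFICIENCY_GFLOPS_PER_WATT * 1e9)
print(f"power at exascale: {watts / 1e6:.0f} MW")  # ~150 MW
```

That roughly 150 MW figure illustrates why efficiency gains, not just peak FLOPS, dominate exascale system design.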

All three companies (PEZY, Exascaler, and Ultra Memory) are in joint collaboration to develop a supercomputer system with exascale chops. The new supercomputer will be outfitted with a high-capacity, low-power 3D integrated circuit (IC) developed by Keio University Professor Tadahiro Kuroda. ExaScaler will supply its liquid carbon fluoride cooling technology.

Japan News reports that the approach enables the supercomputer to be downsized to about one meter wide by one meter long. The intention is to connect 18 of these boxes to create a 24 petaflops system (we’re confirming precision level) for installation at the Japan Agency for Marine-Earth Science and Technology’s Yokohama Institute for Earth Sciences.

According to Japan News, “the corporate-academic project team aims to achieve the fastest computing speed in Japan by June, which would make the computer the third-fastest in the world.”

The team’s aspirations also extend to creating the fastest supercomputer in the world, which entails surpassing China’s 93-petaflops Sunway TaihuLight and fending off a number of exascale-focused projects, but more funding will be required to achieve that goal.

The project is supported by the Japan Science and Technology Agency (JST), an independent public body of the Ministry of Education, Culture, Sports, Science and Technology (MEXT). We don’t know exact funding levels, but JST provides up to ¥5 billion for promising technologies. So far, the Japanese government (via MEXT’s Flagship 2020 project) has dedicated ¥110 billion to the post-K project.

“There may be some challenges along the way, but [the ExaScaler/Keio University system] has the potential to become excellent technology in terms of both power consumption and price,” said RIKEN Advanced Institute for Computational Science team leader Junichiro Makino. “These developments may have a revolutionary impact on next-generation supercomputers.”

Japan was one of the first world supercomputing powers, going neck and neck with the US for TOP500 glory since the early days of that list. The nation is home to supercomputing stalwarts like Fujitsu and NEC, and has a strong relationship with SGI Japan (now part of HPE). Its current fastest supercomputer is the K computer, which debuted on the TOP500 list at number one in 2011; it is now in seventh position, capable of 10.5 Linpack petaflops.

The post New Japanese Supercomputing Project Targets Exascale appeared first on HPCwire.

Stony Brook Unlocks New Research with 100 Gbps Connection

Tue, 03/14/2017 - 10:29

STONY BROOK, N.Y., March 14, 2017 — Stony Brook University becomes the first higher education institution in New York State to offer a 100 gigabit-per-second (Gbps) connection to the NYSERNet Research and Engineering network through which it also connects to Internet2, revolutionizing the quality, quantity and speed of digital research.

“This connection supports Stony Brook’s regional leadership position in high-performance computing, while advancing our goal as a top public research university to educate and train future generations of scientists,” said University President Samuel L. Stanley Jr. “I’m looking forward to the many new possibilities this connection will offer for interdisciplinary collaboration, an essential element behind the expansion of our expertise and our success in science, technology, engineering and mathematics as well as in the social sciences and humanities.”

(Learn more about the connection by watching this video.)

The 100 Gbps connection is so fast that Stony Brook researchers can now transfer a complete copy of a human genome file to a lab for testing in just 90 seconds, send 300,000 X-ray images in one minute, or download one e-book for every Stony Brook student (6,250 e-books a second to roughly 26,000 students) in only four seconds. While the faster transfer rate helps expedite Stony Brook’s scientific, engineering and medical research efforts, the 100 Gbps connection will also enable research that could not be done without such a high-speed connection.
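The figures above follow directly from the link rate. A quick back-of-the-envelope sketch (the per-file sizes below are illustrative assumptions chosen to match the article’s timings, not numbers from Stony Brook):

```python
# Ideal transfer times over a 100 Gbps link, ignoring protocol
# overhead and disk I/O. File sizes are illustrative assumptions.

LINK_BPS = 100e9  # 100 gigabits per second

def transfer_seconds(size_bytes: float, link_bps: float = LINK_BPS) -> float:
    """Time to move size_bytes over a link_bps connection."""
    return size_bytes * 8 / link_bps

# A ~1.1 TB raw genome dataset lands near the article's 90 seconds.
print(f"genome dataset: {transfer_seconds(1.1e12):.0f} s")

# 300,000 X-ray images at ~2.5 MB each come to about one minute.
print(f"300k x-rays:    {300_000 * transfer_seconds(2.5e6):.0f} s")
```

Working the e-book figure backwards the same way, 26,000 e-books in four seconds at 100 Gbps implies roughly 2 MB per e-book, which is a plausible average file size.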

“It’s very important for Stony Brook research to have our infrastructure match our aspirations,” said Robert Harrison, director of the Institute for Advanced Computational Science (IACS). “Data’s big and messy, and if you’re going to share it or use it, you have to be able to move it — that’s why this connection is essential for us to continue to play our lead role.”

Other areas of research that will benefit from the 100 Gbps connection immediately include computational astrophysics, in which scientists simulate the explosion of stars, which produces very large quantities of data, and biomolecular imaging, which uses very data-intensive experiments across several areas on campus, including biochemistry, pharmacological science and neurobiology.

The new high-speed connection, which is currently accessible to students, faculty and staff who conduct research using either the SeaWulf or LI-red high-performance computational clusters operated by the IACS, builds the necessary foundation to link additional research locations on campus at high speed in the future, including the Research & Development Park. Several teams worked together to make this possible, including the New York State Education and Research Network (NYSERNet), Internet2, Hewlett-Packard Enterprise, the IACS and the Division of Information Technology (DoIT).

For more information see: Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

About Stony Brook University:

Stony Brook University is going beyond the expectations of what today’s public universities can accomplish. Since its founding in 1957, this young university has grown to become one of only four University Center campuses in the State University of New York (SUNY) system with more than 25,700 students, 2,500 faculty members, and 20 NCAA Division I athletic programs. Our faculty have earned numerous prestigious awards, including the Nobel Prize, Pulitzer Prize, Indianapolis Prize for animal conservation, Abel Prize and the inaugural Breakthrough Prize in Mathematics. The University offers students an elite education with an outstanding return on investment: U.S.News & World Report ranks Stony Brook among the top 40 public universities in the nation. Its membership in the Association of American Universities (AAU) places Stony Brook among the top 62 research institutions in North America. As part of the management team of Brookhaven National Laboratory, the University joins a prestigious group of universities that have a role in running federal R&D labs. Stony Brook University is a driving force in the region’s economy, generating nearly 60,000 jobs and an annual economic impact of $4.65 billion. Our state, country and world demand ambitious ideas, imaginative solutions and exceptional leadership to forge a better future for all. The students, alumni, researchers and faculty of Stony Brook University are prepared to meet this challenge.

About Internet2:

Internet2 is a non-profit, member-driven advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves more than 94,000 community anchor institutions, 317 U.S. universities, 70 government agencies, 43 regional and state education networks, over 900 InCommon participants, 78 leading corporations working with our community, and 61 national research and education network partners that represent more than 100 countries.

Internet2 delivers a diverse portfolio of technology solutions that leverages, integrates, and amplifies the strengths of its members and helps support their educational, research and community service missions. Internet2’s core infrastructure components include the nation’s largest and fastest research and education network that was built to deliver advanced, customized services that are accessed and secured by the community-developed trust and identity framework.

Source: Internet2

The post Stony Brook Unlocks New Research with 100 Gbps Connection appeared first on HPCwire.

Peter Ho Addresses HPC Future Plans at Supercomputing Frontiers

Tue, 03/14/2017 - 10:22

Singapore, March 14, 2017 — Stakeholders of the National Supercomputing Centre (NSCC) Singapore aim to build on its early growth momentum to elevate supercomputing in Singapore into the next phase. This includes dovetailing NSCC’s future plans into the S$19 billion national RIE2020 masterplan, said Peter Ho, Chairman of NSCC Steering Committee.

Giving the opening address at the third edition of the Supercomputing Frontiers (SCF) Conference yesterday, Mr Ho observed that “Singapore is only at the start of developing and implementing a HPC strategic roadmap”.  

“For the NSCC to realise the vision of catalysing supercomputing in Singapore, its team will have to bootstrap on the knowledge and experience of others,” he added.

The key stakeholders of the NSCC Singapore are the Agency for Science Technology and Research (A*STAR), Nanyang Technological University (NTU), National University of Singapore (NUS), and Singapore University of Technology and Design (SUTD).

Singapore enters the petascale supercomputing league, first in Southeast Asia

Addressing the objectives of the four-day conference, Mr Ho said that he hoped the conference would produce “rich discussions”. “It is through such discussions that I hope we will all learn something about what the future holds for supercomputing, and how we can position ourselves, not just in Singapore, but also in our respective countries, to exploit the power of supercomputing to improve competitiveness and the lives of our people,” Mr Ho added.

One key highlight of the SCF2017 opening ceremony is the launch of the inaugural SCF/NSCC Awards, which aim to promote excellence in High Performance Computing (HPC), networking, storage and visualisation in the areas of Singapore’s research, innovation, education and enterprise. This award also aims to give recognition to research and commercial efforts that tap on ASPIRE 1’s computational power to drive innovation, raise productivity and improve lives.

Notable winners of the inaugural awards are:

  • Supercomputing Frontiers Singapore Distinguished Service Award, Mr. Lim Chuan Poh, Chairman, A*STAR
  • Supercomputing Frontiers Singapore Visionary Award, Dr. Raj Thampuran, Managing Director, A*STAR
  • NSCC Outstanding HPC Scientific Award – Data Storage Institute, A*STAR
  • NSCC Outstanding HPC Industry Application Award – Keppel Offshore and Marine Technology Centre (KOMtech); Global Gene Corp Pte Ltd (Honourable Mention)
  • NSCC Outstanding HPC Innovation Award – Institute of High Performance Computing; GenomeAsia100K (Honourable Mention)

The winners were carefully handpicked through a rigorous process by a seven-member judging panel and received the respective awards from Mr Ho. Other notable award submissions include: 

  1. Keppel Offshore & Marine Technology Centre (KOMtech)’s winning project, which made extensive use of NSCC’s HPC resources to optimise the designs of rigs and vessels in the product development process.
  2. Tan Tock Seng Hospital’s use of HPC technology combined with whole-genome-sequencing to provide useful information for on-the-ground infection control, by determining the transmission routes of bacteria that are invisible to the human eye.

The SCF 2017 is held at the Matrix@Biopolis and is attended by more than 450 participants from 12 countries. The focus of this year’s conference is on global trends and innovations in high performance computing.

Source: National Supercomputing Centre Singapore

The post Peter Ho Addresses HPC Future Plans at Supercomputing Frontiers appeared first on HPCwire.

D-Wave, Virginia Tech Partner to Advance Quantum Computing

Tue, 03/14/2017 - 08:26

HANOVER, Md., March 14, 2017 — D-Wave Systems Inc., the leader in quantum computing systems and software, and Virginia Tech have established a joint effort to provide greater access to quantum computers for researchers from the Intelligence Community and Department of Defense. D-Wave and Virginia Tech will work towards the creation of a permanent quantum computing center to house a D-Wave system at the Hume Center for National Security and Technology.

The Hume Center leads Virginia Tech’s research, education, and outreach programs focused on the challenges of cybersecurity and autonomy in the context of national and homeland security. Education programs provide mentorship, internships, and scholarships, and seek to address key challenges in qualified US citizens entering federal service. Current research initiatives include cyber-physical system security, orchestrated missions, and the convergence of cyber warfare and electronic warfare.

“Both D-Wave and Virginia Tech recognize how vital it is that quantum computing be accessible to a broad community of experts focused on solving real-world problems,” said Bo Ewald, president of D-Wave International. “One of the many reasons we chose to work with Virginia Tech is their strong relationships with the intelligence and defense communities. A key area of focus will be to work with federal agencies towards the creation of a quantum computing center at the Hume Center.”

Under the agreement, D-Wave will work with Virginia Tech to enable their staff, faculty, and affiliates to build new applications and software tools for D-Wave quantum computers. Participants will be selected by Virginia Tech and include experts in artificial intelligence, machine learning, optimization, and sampling.

“Establishing a quantum computing center at the Hume Center will advance our mission of supporting national security, and provide access to technology that few researchers can leverage today,” said Mark Goodwin, deputy director and COO of the Hume Center. “Working closely with D-Wave supports that goal in a meaningful, immediate way.”

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems and software, and the world’s only commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing for the world. We believe that quantum computing will enable solutions to the most challenging national defense, scientific, technical, and commercial problems. D-Wave’s systems are being used by some of the world’s most advanced organizations, including Lockheed Martin, Google, NASA Ames, USRA, USC, and Los Alamos National Laboratory. With headquarters near Vancouver, Canada, D-Wave’s U.S. operations are based in Palo Alto, CA and Hanover, MD. D-Wave has a blue-chip investor base including Goldman Sachs, Bezos Expeditions, DFJ, In-Q-Tel, BDC Capital, Growthworks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Limited. For more information, visit: www.dwavesys.com.

About the Hume Center at Virginia Tech

The Hume Center was founded in 2010 through an endowment from Ted and Karyn Hume and is located both in Blacksburg and in the National Capital Region. With support from Virginia Tech’s College of Engineering and Institute for Critical Technologies and Applied Sciences (ICTAS), the Hume Center leads the university’s education and research ecosystem for national security technologies, with an emphasis on the communication and computation challenges of the defense and intelligence communities. Approximately 150 undergraduate students and 50 graduate students participate in Hume Center programs each year and most receive scholarships, fellowships, or research assistantships and are vectored toward careers working for the federal government or its industrial base.

Source: D-Wave

The post D-Wave, Virginia Tech Partner to Advance Quantum Computing appeared first on HPCwire.

SES Partners with Luxembourg Institute of Science and Technology

Tue, 03/14/2017 - 07:32

LUXEMBOURG, March 14, 2017 — SES S.A. announced today that it has signed a partnership agreement with the Luxembourg Institute of Science and Technology (LIST).

The new cooperation framework with LIST complements the existing SES partnership agreement with the Luxembourg University Interdisciplinary Center for Security, Reliability and Trust (SnT), and widens the scope of SES’s international research activities together with other reputable universities. Under the agreement, SES and LIST will cooperate through their international network of research partners with unique expertise in satellite communications (SATCOM), to transform basic research into innovative space applications. LIST will therefore become another close technology partner of SES in the development of pioneering SATCOM commercial products and services to inspire or “disrupt” the market with new satellite platforms, analysis tools and innovative ground infrastructure.

The new partnership agreement further enhances Luxembourg’s technology ecosystem by attracting start-ups to develop their businesses in Luxembourg, and will facilitate the transfer of new technologies stemming from national public and private research. Those activities will be done in close coordination with the existing national funding initiatives, such as the Digital Tech Fund, with SES being a key stakeholder.

The first activities SES and LIST will focus on are related to the ‘Smart Space’ initiative, which includes research and development of applications in the context of High Performance Computing (HPC), aiming to establish a unique space ecosystem by building on Luxembourg’s competitive advantages, including global satellite communications and telecommunications networks, data centers and connectivity, and existing service providers. The parties will develop a European Centre of Excellence to address societal challenges such as climate change, environment, green mobility, security and healthcare. SES and LIST will also work on developing commercial applications in the areas of Internet of Things (IoT), e-platform solutions and optical communications. In addition, SES and LIST will jointly assess the development of competences in other satellite-related application areas, such as connected cars.

“Innovation is not only the driving force of the satellite industry, but also of our society in general. We are therefore proud to be an integral part of a large network of leading institutions and research and development partners, which is paramount in developing our future. Our collaboration with LIST is a perfect illustration of how we can combine and augment our existing SATCOM knowledge in Luxembourg to increase the speed of innovation and to shape the future together,” said Gerhard Bethscheider, Managing Director at SES Techcom Services. “This partnership will contribute to the creation of the space applications ecosystem, and will further reinforce Luxembourg’s leading position in the space domain. It also complements our successful long-term cooperation with the University of Luxembourg through its specific focus on impact-driven applied research.”

Fernand Reinig, Chief Executive Officer at LIST, said “SES’s expertise in the space industry and our research and development activities organically complement each other. We are delighted to partner with this world-leading company and contribute to shaping a better future for the benefit of Luxembourg and society in general.”

About SES

SES is the world-leading satellite operator and the first to deliver a differentiated and scalable GEO-MEO offering worldwide, with more than 50 satellites in Geostationary Earth Orbit (GEO) and 12 in Medium Earth Orbit (MEO). SES focuses on value-added, end-to-end solutions in four key market verticals (Video, Enterprise, Mobility and Government). It provides satellite communications services to broadcasters, content and internet service providers, mobile and fixed network operators, governments and institutions, and businesses worldwide. SES’s portfolio includes the ASTRA satellite system, which has the largest Direct-to-Home (DTH) television reach in Europe, and O3b Networks, a global managed data communications service provider. Another SES subsidiary, MX1, is a leading media service provider and offers a full suite of innovative digital video and media services. Further information available at: www.ses.com

About LIST

The Luxembourg Institute of Science and Technology (LIST) is a mission-driven Research and Technology Organisation (RTO) that develops advanced technologies and delivers innovative products and services to industry and society. As a major engine of the diversification and growth of Luxembourg’s economy through innovation, LIST supports the deployment of a number of solutions to a wide range of sectors, including space, ICT, telecommunications, environment, agriculture, and advanced manufacturing at national and European level. Thanks to its location in an exceptional collaborative environment, namely the Belval Research & Innovation Campus, LIST accelerates time to market by maximising synergies with different actors, including the university, the national funding agency and industrial clusters.

Source: SES

The post SES Partners with Luxembourg Institute of Science and Technology appeared first on HPCwire.

Supermicro Showcases Product Portfolio at Embedded World 2017

Tue, 03/14/2017 - 07:24

NUREMBERG, Germany, March 14, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a compute, storage, networking technologies and green computing company, is demonstrating a broad range of advanced Embedded/IoT Server Building Block Solutions to address both traditional and new embedded market segments in Booth 1-330 at Embedded World 2017 in Nuremberg, Germany from March 14-16.

With the industry’s broadest selection of innovative and first-to-market technologies that are the building blocks for today’s embedded computing platforms, Supermicro is showcasing solutions that scale across performance (up to 16 cores/128GB memory) and power (from below 10 watts), with highly dense form factors and rich feature sets.

Many of these solutions are optimized for Edge Computing with Data Security features like encryption, Data Storage features such as compression, Data Networking with 10G SFP+, visual computing with GT4e and 4K UHD, and Virtualization (VT-d/VT-x).

“Rapid growth in the embedded markets and open standards are driving the need for higher levels of product integration and optimization through network connectivity, remote management, mobile communication, expanded I/O, and device-to-device communications using space and power efficient configurations,” said Charles Liang, President and CEO of Supermicro. “With key product features including wide temperature support from minus 30C to plus 75C, fan-less chassis, multiple display ports, DC power, high memory capacity and long lifecycle support, our fully converged low-power and compact platforms with integrated storage and high-speed communication ports offer scalable solution options for almost any embedded/IoT or edge application.”

Supermicro offers the industry’s most extensive selection of embedded motherboards and servers to support a wide range of embedded markets including DS, DSS, Industrial and Machine Automation, Retail, Transport, Communication and Networking (Security), as well as Warm and Cold Storage.

Some of the equipment being showcased includes:

  • SuperServer E200-9AP is a Compact Embedded Mini ITX Box ideal for a security appliance, video surveillance, digital signage or indoor kiosk
  • SuperServer 5019S-TN4 for VHD, Digital Recording System, Media/Content Streaming Server, Fast Transcoding for Web Streaming, supporting Intel Iris Pro Graphics P580 and up to 18 AVC streams or 8 HEVC streams at 1080p 30FPS
  • SuperServer 1019S-MP for Video Transcoding and Streaming, Intel Iris Pro Graphics P580 with 128MB of on-Package cache for high performance graphics, Digital Signage, Indoor Kiosk, and Interactive information system
  • SuperServer 5029AP-TN2 is an Embedded, Compact Mini Tower supporting two independent displays, an M.2 slot and 7-year product lifecycle
  • Mini-1U SuperServer SYS-E300-8D is a 4-core, M.2-ready system with six GbE LAN ports, two 10G SFP+ ports, and one expansion slot for embedded networking applications, network security appliances, firewalls and virtualization
  • SuperServer E200-8D is a 6-core Xeon D solution for embedded networking applications, network security appliances, firewalls and virtualization applications
  • SuperServer 5018D-FN8T is a front I/O, 1U, 4-core Xeon D server featuring six GbE ports plus two 10G SFP+ ports and a compact design less than ten inches deep for cloud, visualization, network and embedded applications
  • SuperServer 1018D-FRN8T is a 1U 16-Core Xeon D SoC-based OEM solution for network security appliances, firewalls, virtualization, SD-WAN and vCPE applications that offers a seven year life cycle
  • Xeon Motherboard X10SDV-12C-TLN4F+ with Intel Xeon processor D-1557, Single socket FCBGA 1667; 12-Core, 24 Threads, 45W CPU
  • Xeon Motherboard X10SDV-TP8F with Intel Xeon processor D-1518, Single socket FCBGA 1667; 4-Core, 8 Threads, 35W CPU
  • Atom Motherboard A2SAV with Intel Atom processor E3940, SoC, FCBGA 1296

For more information on Supermicro’s complete range of embedded/IoT products, please visit www.supermicro.com/embedded.  Please visit Supermicro booth Hall 1-330 at Embedded World 2017, in Nuremberg, Germany.

For complete information on SuperServer solutions from Supermicro visit www.supermicro.com.

About Super Micro Computer Inc. (NASDAQ: SMCI)

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro

The post Supermicro Showcases Product Portfolio at Embedded World 2017 appeared first on HPCwire.

Top Weather Sites Rely on DDN Storage for Simulations & Forecasts

Tue, 03/14/2017 - 07:13

SANTA CLARA, Calif., March 14, 2017 — DataDirect Networks (DDN) today announced that its data storage solutions for high performance computing (HPC) are driving an increasing number of weather and climate research facilities around the globe to meet the needs for accuracy and timeliness of their forecasts and predictions. Weather and climate modeling centers are ingesting and producing ever-increasing volumes of data and utilizing some of the most powerful supercomputers and innovative HPC technologies available to improve model accuracy and granularity. As a data storage leader in HPC, DDN supports dozens of weather and climate supercomputing organizations and has seen its customer base in this sector grow by more than 60 percent in the past year.

“DDN’s unique ability to handle tough application I/O profiles at speed and scale gives weather and climate organizations the infrastructure they need for rapid, high-fidelity modeling,” said Laura Shepard, senior director of product marketing, DDN. “These capabilities are essential to DDN’s growing base of weather and climate organizations, which are at the forefront of scientific research and advancements – from whole climate atmospheric and oceanic modeling to hurricane and severe weather emergency preparedness to the use of revolutionary, new, high-resolution satellite imagery in weather forecasting.”

New technologies are ushering in higher resolutions as modeling and digital data collection increase in scope. For example, NOAA/NASA recently launched the GOES-16 satellite, which has four times the spatial resolution of previous systems. Weather and climate modeling centers are amassing vast volumes of data as they strive to improve the accuracy and timeliness of their models via more diverse, higher-resolution input data, large data assimilation, multi-model ensemble forecasts and rapid forecast dissemination.

Per the Research Department at the European Centre for Medium-Range Weather Forecasts (ECMWF), a DDN customer, weather and climate prediction are HPC applications with significant societal and economic impact, ranging from disaster response and climate change adaptation strategies to agricultural production and energy policy. Forecasts are based on millions of observations made every day around the globe, which are then input to numerical models. The models represent complex processes that take place on scales from hundreds of meters to thousands of kilometers in the atmosphere, the ocean, the land surface, the cryosphere and the biosphere. Forecast production and dissemination to users is always time-critical, and output data volumes already reach petabytes per week.

More than two dozen of the world’s top supercomputing sites rely on DDN Storage to meet the demanding requirements for weather and climate modeling, including the National Center for Atmospheric Research (NCAR), UK Met Office, Bureau of Meteorology Australia, National Oceanic and Atmospheric Administration (NOAA), Meteorological Research Institute (MRI) Japan, Japan’s National Institute for Environmental Studies (NIES) and the European Centre for Medium-Range Weather Forecasts (ECMWF), among others. Examples include:

  • NCAR utilizes DDN’s SFA14K high-performance hyper-converged storage platform to power its “Cheyenne” supercomputer, driving the performance and delivering the capacity needed for scientific breakthroughs in climate, weather and atmospheric-related science. Sponsored by the National Science Foundation, NCAR brings together researchers from more than 100 colleges and universities and thousands of scientists from across the globe to identify the risks and opportunities associated with changes in the Earth’s atmosphere – from protecting aircraft from wind shear, to investigating changes in the earth’s ozone layer, to linking weather to factors that shape epidemics.
    “DDN Storage enables us to keep pace with the increased number of people trying to do very large data assimilation problems,” said Rich Loft, director of technology development in the computational and information systems laboratory at NCAR. “Earth system research is very data-intensive. NCAR is now able to do more to help scientists go beyond just studying phenomena to making actual predictions through data-intensive simulations that require larger I/O bandwidth and storage performance.”
  • UK Met Office, the United Kingdom’s national weather service, conducts weather forecasting and climate prediction research designed to protect lives and increase prosperity. The institution’s 500 scientists conduct research using data-intensive, high-resolution models to increase forecast accuracy and provide a deeper understanding of climate change. DDN Storage supports UK Met’s Managed Archive Storage System (MASS), which is predicted to grow to about 300 petabytes of weather and climate research data by 2020.
    “The development of high-resolution models is a key component of the Met Office forecast systems; however, it has created a major spike in the need to store and process large volumes of critical data,” said Alan Mackay, IT infrastructure manager, UK Met Office. “By 2020, we estimate our storage archive will grow to about 300PB. With DDN, we can meet our performance and capacity requirements and ensure our scientists and researchers can store data for later analysis and quickly retrieve it when needed.”
  • The Bureau of Meteorology, Australia’s national weather, climate and water agency, relies on DDN’s GRIDScaler Enterprise NAS storage appliance to handle its massive volumes of research data to deliver reliable forecasts, warnings, monitoring and advice spanning the Australian region and Antarctic territory.
    “The Bureau intends to use DDN’s GS14KX to support its new data-intensive computing applications with integrated workflows to the Cray XC40 HPC environment for weather forecasting. We will also consolidate workflows from multiple legacy systems into a high-performance, replicated storage system,” said Tim Pugh, supercomputer programme director at the Bureau of Meteorology Australia.

With DDN’s leadership in parallel file systems at scale and its deep expertise in Lustre* and IBM Spectrum Scale environments, DDN is well positioned to support weather and climate organizations as their unabated data growth continues and as they require acceleration technologies such as flash-native caching to further speed simulations and hot data computations. For example, DDN’s Infinite Memory Engine solution can accelerate performance by up to 3x and make application completion times predictable.

Technologies such as DDN’s flash-native storage cache – Infinite Memory Engine – are boosting weather code performance to process more data, faster. For example, researchers at Ireland’s high-performance computing center, ICHEC, realized a 3x performance boost of the popular Weather Research and Forecasting (WRF) model, with no code changes and with one-tenth the required infrastructure when using Infinite Memory Engine. With this type of accelerated performance, supercomputers can provide a quicker turn time for atmospheric and ocean simulations so that severe weather events can be predicted with sufficient time for preparedness. More performance also allows for better fidelity, with grid sizes reduced to 1 to 2 km on the more granular models. Improved fidelity translates to more accurate forecasts, so localized phenomenon such as tornadoes, hailstorms, and intense downpours can be predicted at more useful scales.

About DDN
DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN Storage

The post Top Weather Sites Rely on DDN Storage for Simulations & Forecasts appeared first on HPCwire.

Women in IT Invited to Apply for WINS Program at SC17 Conference

Mon, 03/13/2017 - 12:41

DENVER, Co., March 13, 2017 — Applications are now being accepted for the Women in IT Networking at SC (WINS) program at the SC17 conference to be held Nov. 12-17 in Denver, Colo. WINS was launched to expand the diversity of the SCinet volunteer staff and provide professional development opportunities to highly qualified women in the field of networking.

The WINS program seeks qualified early- to mid-career female U.S. candidates to join the SCinet volunteer team and help build and operate SCinet for SC17. Selected candidates will receive full travel support and mentoring by well-known engineering experts in the research and education community.

Applications are to be submitted using the 2017 WINS Application Form. The deadline to apply is 11:59 p.m. Friday, March 24, 2017 (Pacific time). More information can be found on the SC17 WINS page.

Each year, volunteers from academia, government and industry work together to design and deliver SCinet. Planning begins more than a year in advance and culminates in a high-intensity, around-the-clock installation in the days leading up to the conference.

Now in its third year, WINS is a collaboration between the University Corporation for Atmospheric Research (UCAR), the Department of Energy’s Energy Sciences Network (ESnet) and the Keystone Initiative for Network Based Education and Research (KINBER). Funding for WINS is provided through a three-year award by the National Science Foundation (NSF) and by ESnet.

The SC conference series is dedicated to promoting equality and diversity and recognizes the role that this has in ensuring the success of the conference series. SC17 is committed to providing an inclusive conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion.

Source: SC17

The post Women in IT Invited to Apply for WINS Program at SC17 Conference appeared first on HPCwire.

Internet2 Supports Kirari! in Video Streaming at SXSW 2017

Mon, 03/13/2017 - 10:41

DENVER, Colo., March 13, 2017 – Internet2, a member-driven advanced technology community and operator of the coast-to-coast research and education network, will help broadcast an interactive live performance between Tokyo, Japan and Austin, Texas during South by Southwest (SXSW) 2017.

  • The live broadcast takes place on March 13, 2017 at 11:30 p.m. EST.
  • The live broadcast is made possible by a collaboration between the Nippon Telegraph and Telephone Corporation Japan (NTT), University of Texas System (UT System), Lonestar Education and Research Network (LEARN), and Greater Austin Area Telecommunication Network (GAATN).

During the hour-long event, close to ten live video streams of artists in Tokyo will be extracted in real time, transported synchronously in various media formats over the Internet2 Network, and then dynamically projected onto double-sided transparent screens (see Fig. 1) at SXSW’s venue in Austin using NTT Japan’s Kirari! technologies. The live stream transmission at SXSW requires a connection to an Advanced Layer 2 Service (AL2S) across Internet2’s network. Unlike commercial networks, the Internet2 Network is configured at the protocol level to ensure maximum performance for high-performance applications and large data transfers over the wide-area network. UT System, LEARN and GAATN will provide the connectivity from SXSW to the Internet2 backbone.

This initiative is the result of an ongoing collaboration within Internet2’s community, which comprises members from higher education, industry, and state and regional networks, as well as partners around the globe.

“Our community has a long history of enabling and advancing collaborations between high-performance networking technologies and applications in the arts and humanities,” said Ana Hunsinger, Vice President of Community Engagement at Internet2. “Collaboration is at the heart of everything we do at Internet2 and supporting the live video streaming at SXSW is one of the many examples of how our members work together for the public good.”

Fig. 1. Digital rendering of the stage setup at Japan Factory during SXSW 2017. Courtesy NTT Japan.

About Internet2:

Internet2 is a non-profit, member-driven advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves more than 94,000 community anchor institutions, 317 U.S. universities, 70 government agencies, 43 regional and state education networks, over 900 InCommon participants, 78 leading corporations working with our community, and 61 national research and education network partners that represent more than 100 countries.

Internet2 delivers a diverse portfolio of technology solutions that leverage, integrate, and amplify the strengths of its members and help support their educational, research and community service missions. Internet2’s core infrastructure components include the nation’s largest and fastest research and education network, built to deliver advanced, customized services that are accessed and secured through the community-developed trust and identity framework.

Source: Internet2

The post Internet2 Supports Kirari! in Video Streaming at SXSW 2017 appeared first on HPCwire.

Flow Science Announces 2017 Americas Users Conference

Mon, 03/13/2017 - 10:33

SANTA FE, N.M., March 13, 2017 — Flow Science, Inc. has announced that it will hold its 2017 FLOW-3D Americas Users Conference in Santa Fe, NM on September 20-21 at the Hotel Santa Fe. All FLOW-3D, FLOW-3D/MP, and FLOW-3D Cast users—and anyone interested in the FLOW-3D product suite—are invited to attend the conference. The keynote speaker at the conference will be Dr. Edward Furlani, Professor, School of Engineering and Applied Sciences, University at Buffalo SUNY, who will share his extensive experience with FLOW-3D. Featured speaker, Dr. C.W. Hirt, Flow Science’s founder and Developer Emeritus, will discuss ongoing developments for the Granular Flow Model.

The conference will feature customer presentations and posters from both industry and academia that focus on validations, benchmarks and case studies, as well as the latest developments for FLOW-3D presented by Flow Science’s VP of Sales and Business Development, Dr. Amir Isfahani.

The call for abstracts is now open. Share your experiences, present your success stories and obtain valuable feedback from your fellow CFD practitioners and Flow Science staff. The deadline to submit an abstract is Friday, August 4. The conference proceedings will be made available to attendees as well as through the Flow Science website.

Flow Science will also offer training for conference attendees the afternoon of September 19. The training will be devoted to optimizing simulation time and accuracy using the various numerical options available in our software packages. Included with conference registration, the training will cover the best numerical options for a wide range of applications. This course will be taught by Dr. Michael Barkhudarov, VP of R&D, and Dr. Ioannis Karampelas, CFD Technical Support Engineer.

Online registration for the conference, training and workshop is now available.

About Flow Science
Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science’s headquarters is located in Santa Fe, New Mexico. Flow Science can be found online at www.flow3d.com. FLOW-3D is a registered trademark in the USA and other countries.

Source: Flow Science

The post Flow Science Announces 2017 Americas Users Conference appeared first on HPCwire.