HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Equus Compute Solutions Launches Servers with Xeon Scalable CPUs

Wed, 07/19/2017 - 12:10

July 19, 2017 — Equus Compute Solutions announced today the availability of servers that use the new Intel Xeon Processor Scalable Family (aka “Purley”). These newly released Intel CPUs provide powerful infrastructure options that represent an evolutionary leap forward in agility and scalability. Disruptive by design, these CPUs set a new benchmark in platform convergence and capabilities across compute, storage, memory, network and security.

“Today our customers can configure and purchase advanced Intel Xeon Scalable CPU-based servers,” said Costa Hasapopoulos, President of Equus Compute Solutions. “We look forward to helping Resellers, Software Vendors and Service Providers understand how to leverage this new technology in their business and realize tangible benefits from these new Intel CPUs using our custom cost-effective solutions.”

Examples of new Equus Servers shown at www.equuscs.com/intel-servers include:

  • R2096Q-2U: Rackmount 2U, dual Intel Xeon Scalable CPUs, 2TB DDR4 memory (Optane support), 8x SAS/SATA drives, 2x 10GBase-T Ethernet, 4x PCIe 3.0 x16 (LP) slots, and 1000W redundant power supplies.
  • M2098Q-2U4N: Rackmount 2U, 4-node, dual Intel Xeon Scalable CPUs per node, 2TB DDR4 memory per node, 3x 3.5-inch SAS/SATA drives per node, on-board Broadcom drive controllers, and 2200W redundant power supplies.
  • M2099Q-2U4N: Rackmount 2U, 4-node, dual Intel Xeon Scalable CPUs per node, 2TB DDR4 memory per node, 6x 2.5-inch SAS/SATA drives per node, on-board Broadcom drive controllers, and 2200W redundant power supplies.
  • R2097Q-Full: Tower 25.5 x 17.2 x 7.0 inches, dual Intel Xeon Scalable CPUs, 2TB DDR4 memory, 8x 3.5-inch SAS/SATA drives, 2x 10GBase-T Ethernet, 4x PCIe 3.0 x16 (LP) slots, and 1200W redundant power supplies.

The new Equus servers are configured using different form factors, CPU sockets, disk storage, I/O, and multi-node capabilities. With up to 28 physical cores per CPU delivering highly enhanced per core performance, significant increases in memory bandwidth (six memory channels), and I/O bandwidth (48 PCIe lanes), even the most data-hungry, latency-sensitive applications, including in-memory databases and HPC, will see impressive improvements enabled by denser compute and faster access to large data volumes.
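For a rough sense of what those figures imply, the sketch below converts the per-socket channel and lane counts into peak bandwidth numbers, assuming DDR4-2666 DIMMs and standard PCIe 3.0 rates (both assumptions; the announcement does not specify memory speed):

```python
# Back-of-envelope peak figures for a dual-socket Xeon Scalable node.
# Assumes DDR4-2666 DIMMs and PCIe 3.0 (~985 MB/s per lane per direction);
# sustained numbers depend on configuration and workload.

MEM_CHANNELS_PER_CPU = 6
DDR4_TRANSFERS_PER_S = 2666e6   # DDR4-2666 (assumed)
BYTES_PER_TRANSFER = 8          # 64-bit channel
PCIE3_LANES_PER_CPU = 48
PCIE3_GB_PER_LANE = 0.985       # GB/s per lane per direction, approx.

mem_bw_per_cpu = MEM_CHANNELS_PER_CPU * DDR4_TRANSFERS_PER_S * BYTES_PER_TRANSFER / 1e9
pcie_bw_per_cpu = PCIE3_LANES_PER_CPU * PCIE3_GB_PER_LANE

print(f"Peak memory bandwidth per CPU : {mem_bw_per_cpu:.0f} GB/s")    # ~128 GB/s
print(f"Peak memory bandwidth per node: {2 * mem_bw_per_cpu:.0f} GB/s")
print(f"Peak PCIe 3.0 per CPU         : {pcie_bw_per_cpu:.0f} GB/s per direction")
```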

Detailed information for the new Equus Intel Xeon Scalable Processors servers is at http://www.equuscs.com/intel-servers.

Source: Equus


Trinity Supercomputer’s Haswell and KNL Partitions Are Merged

Wed, 07/19/2017 - 11:41

Trinity supercomputer’s two partitions – one based on Intel Xeon Haswell processors and the other on Xeon Phi Knights Landing – have been fully integrated and are now available for classified work in the National Nuclear Security Administration (NNSA)’s Stockpile Stewardship Program, according to an announcement today. The KNL partition had been undergoing testing and was previously available for non-classified science work.

“The main benefit of doing open science was to find any remaining issues with the system hardware and software before Trinity is turned over for production computing in the classified environment,” said Trinity project director Jim Lujan. “In addition, some great science results were realized,” he said. “Knights Landing is a multicore processor that has 68 compute cores on one piece of silicon, called a die. This allows for improved electrical efficiency that is vital for getting to exascale, the next frontier of supercomputing, and is three times as power-efficient as the Haswell processors,” noted Bill Archer, Los Alamos Advanced Simulation and Computing (ASC) program director.

The Trinity project is managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the New Mexico Alliance for Computing at Extreme Scale (ACES) partnership. In June 2017, the ACES team took the classified Trinity-Haswell system down and merged it with the KNL partition. The full system, sited at LANL, was back up for production use the first week of July.

The Knights Landing processors were accepted for use in December 2016 and since then they have been used for open science work in the unclassified network, permitting nearly unprecedented large-scale science simulations. Presumably the merge is the last step in the Trinity contract beyond maintenance.

Trinity, based on a Cray XC40, now has 301,952 Xeon and 678,912 Xeon Phi processor cores, along with two pebibytes (PiB) of memory. Besides blending the Haswell and KNL processors, Trinity benefits from the introduction of solid state storage (burst buffers). This changes the ratio of disk and tape necessary to satisfy bandwidth and capacity requirements, and it drastically improves the usability of the system for application input/output. With its new solid-state burst buffer and capacity-based campaign storage, Trinity enables users to iterate more frequently, ultimately reducing the amount of time to produce a scientific result.
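As a quick plausibility check on those core counts, the arithmetic below works backward to implied node counts, assuming one 68-core KNL socket per node and dual 16-core Haswells per node (assumptions inferred from the chip descriptions, not stated in the announcement):

```python
# Sanity check on the published Trinity core counts. Node counts below are
# inferred from assumed per-node configurations, not from the announcement.

KNL_CORES = 678_912
HASWELL_CORES = 301_952

knl_nodes = KNL_CORES / 68          # one 68-core KNL per node -> 9,984 nodes
haswell_nodes = HASWELL_CORES / 32  # 2 x 16-core Haswell per node -> 9,436 nodes

print(f"Implied KNL nodes    : {knl_nodes:,.0f}")
print(f"Implied Haswell nodes: {haswell_nodes:,.0f}")
print(f"Total cores          : {KNL_CORES + HASWELL_CORES:,}")
```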

“With this merge completed, we have now successfully released one of the most capable supercomputers in the world to the Stockpile Stewardship Program,” said Archer. “Trinity will enable unprecedented calculations that will directly support the mission of the national nuclear security laboratories, and we are extremely excited to be able to deliver this capability to the complex.”

Trinity Timeline:

  • June 2015, Trinity first arrived at Los Alamos, Haswell partition installation began.
  • February 12 to April 8, 2016, approximately 60 days of computing access made available for open science using the Haswell-only partition.
  • June 2016, Knights Landing components of Trinity began installation.
  • July 5, 2016, Trinity’s classified side began serving the Advanced Technology Computing Campaign (ATCC-1).
  • February 8, 2017, Trinity Open Science (unclassified) early access shakeout began on the Knights Landing partition before integration with the Haswell partition in the classified network.
  • July 2017, Intel Haswell and Intel Knights Landing partitions were merged, transitioning to classified computing.


BSC Scientists Compare Algorithms That Search for Cancer

Wed, 07/19/2017 - 10:54

BARCELONA, July 19, 2017 – Eduard Porta-Pardo, a senior researcher in the Life Sciences Department at Barcelona Supercomputing Center (BSC), in collaboration with a team of international scientists, has undertaken the first-ever comparative analysis of sub-gene algorithms that mine the genetic information in cancer databases. These powerful data-sifting tools are helping untangle the complexity of cancer and find previously unidentified mutations that are important in creating cancer cells.

The study, published today in Nature Methods, reviews, classifies and describes the strengths and weaknesses of more than 20 algorithms developed by independent research groups. Evaluating cancer genome analysis methods is a key activity for selecting the most suitable strategies to integrate into BSC’s personalized medicine platform.

Despite the increasing availability of genome sequences, analyses commonly treat a gene as a single unit. However, a number of events, such as DNA substitutions, duplications and losses, can occur within a gene—at the sub-gene level. Sub-gene algorithms provide a high-resolution view that can explain why different mutations in the same gene lead to distinct phenotypes, depending on how the mutation impacts specific protein regions. A good example of how different sub-gene mutations influence cancer is the NOTCH1 gene. Mutations in certain regions of NOTCH1 cause it to act as a tumor suppressor in lung, skin, and head and neck cancers, while mutations in a different region can promote chronic lymphocytic leukemia and T cell acute lymphoblastic leukemia. It is therefore incorrect to assume that mutations in a gene will have the same consequences regardless of their location.
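To make the idea concrete, here is a minimal sketch of the kind of question a sub-gene algorithm asks: do mutations cluster in one region of a gene more than a uniform model would predict? The region boundaries, mutation positions and counts below are invented for illustration; no published algorithm is being reproduced.

```python
# Toy illustration of sub-gene analysis: instead of asking "is this gene
# mutated more than expected?", ask the same question per region.
from scipy.stats import binom

GENE_LENGTH = 2555                     # roughly NOTCH1 protein length
regions = {                            # hypothetical region boundaries
    "N-terminal": (1, 1500),
    "middle":     (1501, 2249),
    "PEST":       (2250, 2555),
}
mutations = [150, 980, 2390, 2400, 2412, 2433, 2480, 2500]   # fake positions

total = len(mutations)
for name, (start, end) in regions.items():
    hits = sum(start <= m <= end for m in mutations)
    p_region = (end - start + 1) / GENE_LENGTH   # expected share under uniform null
    p_value = binom.sf(hits - 1, total, p_region)   # P(X >= hits)
    print(f"{name:10s} {hits}/{total} mutations, enrichment p = {p_value:.3g}")
```

Here the invented "PEST" region collects six of eight mutations, far more than its share of the gene length, which is exactly the kind of signal a whole-gene test would dilute away.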

The study researchers applied each sub-gene algorithm to data from The Cancer Genome Atlas (TCGA), a large-scale dataset that includes genome data from 33 different tumor types and more than 11,000 patients. “Our goal was not to determine which algorithm works better than another, because that would depend on the question being asked,” says Eduard Porta-Pardo, first author of the paper. “Instead, we want to inform potential users about how the different hypotheses behind each sub-gene algorithm influence the results, and how the results differ from methods that work at the whole-gene level.” Porta-Pardo is a former postdoc at Sanford Burnham Prebys Medical Discovery Institute (SBP) who recently joined the Life Sciences Department at BSC under the direction of Alfonso Valencia, also a coauthor of this work.

The researchers have made two important discoveries. First, they found that the algorithms are able to reproduce the list of known cancer genes established by cancer researchers—validating the sub-gene approach and the link between these genes and cancer. Second, they found a number of new cancer driver genes—genes that are implicated in the process of oncogenesis—that were missed by whole-gene approaches.

“Finding new cancer driver genes is an important goal of cancer genome analysis,” adds Porta-Pardo. This study should help researchers understand the advantages and drawbacks of sub-gene algorithms used to find new potential drug targets for cancer treatment.

Although Sanford Burnham Prebys Medical Discovery Institute (SBP) led the project, the paper is the result of a collaboration among international institutions including Harvard Medical School, the Institute for Research in Biomedicine (IRB), Universitat Pompeu Fabra (UPF) and the Spanish National Cancer Research Centre, among others.

 

About Barcelona Supercomputing Center

Barcelona Supercomputing Center (BSC) is the national supercomputing center in Spain. BSC specializes in High Performance Computing (HPC) and its mission is two-fold: to provide infrastructure and supercomputing services to European scientists, and to generate knowledge and technology to transfer to business and society.

BSC is a Severo Ochoa Center of Excellence and a first level hosting member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe). BSC also manages the Spanish Supercomputing Network (RES).

BSC is a consortium formed by the Ministry of Economy, Industry and Competitiveness of the Spanish Government, the Business and Knowledge Department of the Catalan Government and the Universitat Politecnica de Catalunya (UPC).

Source: BSC


IARPA Announces Map of the World Challenge for Satellite Imagery

Wed, 07/19/2017 - 09:46

WASHINGTON, July 19, 2017 — The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence, announces the launch of the functional Map of the World (“fMoW”) Challenge. The challenge invites solvers from around the world to develop machine learning algorithms and other automated techniques that accurately detect and classify points of interest in satellite imagery.

The goal of the fMoW Challenge is to promote and benchmark research in object recognition and categorization from satellite imagery for automated identification of facility, building, and land use. A commercially collected satellite imagery dataset with one million annotated points of interest will be provided to researchers and entrepreneurs, enabling them to improve their methods for understanding satellite imagery through novel learning frameworks and multi-modal fusion techniques.

“Although deep learning has been making a really big impact in many areas including image processing, it has not been applied to the satellite imagery domain extensively,” said Dr. HakJae Kim, IARPA Program Manager. “Going into the challenge, these will be the largest functionally annotated databases of satellite imagery made available to the public, and we are excited to see what outcomes will be revealed.”  

IARPA invites experts from across academia, industry, and developer communities—with or without experience in satellite image analysis—to participate in a convenient, efficient, and non-contractual way. IARPA will provide solvers with a predetermined point-of-interest category library and image sets containing numerous unidentified points as training data.

Participants will generate algorithms to detect and categorize facility, building, and land use in the provided satellite images. Throughout the challenge, an online leaderboard will display solvers’ rankings and accomplishments, giving them various opportunities to have their work viewed and appreciated by stakeholders from industry, government, and academic communities. Eligible solvers with the most accurate and complete solutions can win cash prizes from a total prize purse of $100,000.
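For a flavor of the task, the sketch below trains a toy chip classifier end to end. It substitutes synthetic 32x32 "chips" and made-up category names for the real fMoW imagery so that it runs anywhere; an actual entry would train deep networks on the provided dataset.

```python
# Minimal stand-in for a chip-classification baseline. The data is synthetic
# (class-shifted noise), so there is learnable signal but no real imagery.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
CATEGORIES = ["airport", "solar_farm", "stadium"]   # placeholder labels

# 100 fake 32x32 chips per class, flattened to feature vectors.
X = np.concatenate([rng.normal(loc=i, scale=1.0, size=(100, 32 * 32))
                    for i in range(len(CATEGORIES))])
y = np.repeat(np.arange(len(CATEGORIES)), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```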

To learn more about the functional Map of the World Challenge, including rules, criteria and eligibility requirements, visit www.iarpa.gov/challenges/fmow.html. To become a Solver, register today at crowdsourcing.topcoder.com/fmow. For updates and hints, follow @IARPAnews on Twitter and join the conversation using #IARPAfMoW. For questions, contact us at functionalmap@iarpa.gov.

About IARPA

IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community. Additional information on IARPA and its research may be found at https://www.iarpa.gov.

Source: IARPA


Fujitsu Continues HPC, AI Push

Wed, 07/19/2017 - 08:50

Summer is well under way, but the so-called summertime slowdown, linked with hot temperatures and longer vacations, does not seem to have impacted Fujitsu’s output. The Japanese multinational has made a raft of HPC- and AI-related announcements over the last few weeks. One of the most interesting developments is the advance of a custom AI processor, the Deep Learning Unit (DLU). After only a brief appearance in a 2016 press release, a fuller picture of the chip emerged at the International Supercomputing Conference in June.

As revealed in a presentation from Fujitsu’s Takumi Maruyama (senior director, AI Platform business unit), the processor features mixed-precision optimizations (8-bit, 16-bit and 32-bit) and a low-power design, with a stated goal of a 10x performance-per-watt advantage over competitors. The target energy-efficiency gain relies on Fujitsu’s “deep learning integer,” which the company says reaches effective precision on par with 32-bit while using 8- and 16-bit data sizes. The approach is reminiscent of Intel’s Knights Mill processor, with Intel claiming INT32 accuracy from INT16 inputs (using INT32 accumulated output).
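The following numpy sketch illustrates the generic pattern behind such integer arithmetic, narrow operands with a wide accumulator; it is not Fujitsu's implementation, whose internals are unpublished.

```python
# Low-precision multiply with wide accumulation: the general pattern behind
# "deep learning integer" arithmetic. Generic numpy sketch, not DLU code.
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(-128, 127, size=4096, dtype=np.int8)   # 8-bit operands
b = rng.integers(-128, 127, size=4096, dtype=np.int8)

# A product of two int8 values needs up to 16 bits; summing 4096 of them
# needs a wider accumulator, so accumulate in int32 to avoid overflow.
dot_int32 = np.sum(a.astype(np.int32) * b.astype(np.int32))
dot_float = np.dot(a.astype(np.float64), b.astype(np.float64))

print(dot_int32, int(dot_float))   # identical: nothing is lost in integer form
```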


The massively parallel chip employs a few large master cores connected to many Deep Learning Processing Units (DPUs). Each DPU consists of 16 deep learning processing elements (DPEs), and each DPE includes a large register file and wide SIMD execution units. Linked with Fujitsu’s Tofu interconnect technology, the design can scale to very large neural networks.

Fujitsu’s roadmap for the DLU spans multiple generations: a first-gen coprocessor is set to debut in 2018, followed by a second generation in which the DLU is embedded with a host CPU. Further out are potential specialized processors targeting neuromorphic or combinatorial-optimization applications.

Upcoming Installs


Also at ISC, Fujitsu announced it’s building a nearly 3.5 petaflops (peak) system for Taiwan’s National Center for High-performance Computing, National Applied Research Laboratories (NCHC). The supercomputer is expected to come online in May 2018, at which time it will become the fastest computer in the country.

“The new system will serve as the core platform for research and development in Taiwan, fostering the development and growth of Taiwan’s overall industries and economy,” said Fujitsu in an official statement. In addition to accelerating current research, there will be a focus on accommodating new research fields, such as AI and big data.

The 715-node, warm-water-cooled cluster will be equipped with Skylake processors and connected with Intel Omni-Path technology. Nvidia P100 GPUs will be installed in 64 of the nodes, providing over a third (1.35 petaflops) of the system’s total theoretical peak performance of 3.48 petaflops.
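Working backward from those figures suggests how the GPU partition is likely configured; the per-device rating and GPUs-per-node count below are inferred assumptions, not details from the announcement.

```python
# Rough decomposition of the quoted NCHC peak numbers.
P100_TFLOPS_FP64 = 5.3     # Tesla P100 peak double precision (approx.)
GPU_NODES = 64
GPU_PEAK_PF = 1.35
TOTAL_PEAK_PF = 3.48

gpus_per_node = GPU_PEAK_PF * 1000 / GPU_NODES / P100_TFLOPS_FP64
cpu_share = TOTAL_PEAK_PF - GPU_PEAK_PF

print(f"Implied GPUs per node: {gpus_per_node:.1f}")             # ~4
print(f"CPU share of peak    : {cpu_share:.2f} PF ({cpu_share / TOTAL_PEAK_PF:.0%})")
```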

The Research Institute for Information Technology at Kyushu University in Japan has also placed an order for a Fujitsu system, a 10-petaflopper (peak) scheduled for deployment in October.

“This system will consist of over 2,000 servers, including the Fujitsu Server PRIMERGY CX400, the next-generation model of Fujitsu’s x86 server….This will also be Japan’s first supercomputer system featuring a large-scale private cloud environment constructed on a front-end sub system, linked with a computational server of a back-end sub system through a high-speed file system,” according to the release.

The new supercomputer will be integrated with three existing HPC systems at the Research Institute for Information Technology. The goal is to create an environment that “extend[s] beyond the current large-scale computation and scientific simulations, to include usage and research that require extremely large-scale computation, such as AI, big data, and data science.”

New AI-Based Algorithm Monitors Heat Stress

As temperatures rise, the health of employees in active outdoor roles, such as security guards or delivery professionals, is threatened. In Japan, 400-500 workplace casualties are attributed to heat stroke each year, leading companies to take measures to safeguard employees working in extreme conditions.

Fujitsu has developed an algorithm to bolster summer safety in the workplace. Based on Fujitsu’s Human Centric AI platform, Zinrai, the algorithm estimates ongoing heat stress in workers. Fujitsu will release the algorithm as part of its digital business platform, MetaArc, which uses IoT to support on-site safety management, and is conducting an internal trial from June to September at its Kawasaki Plant.


Says the company, “Sites where security and other duties typically take place may be locations where workers are susceptible to heat stress. However, changes in physical condition vary according to the individual, making it difficult to take uniform measures. This newly developed algorithm makes it possible to estimate the accumulation of heat stress on a per person basis, to tailor ways to protect people based on individual conditions.”
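As a concrete illustration of per-person accumulation, the sketch below tracks a heat-strain score against an individual baseline. The model, thresholds and weights are invented for illustration; Fujitsu has not published the Zinrai algorithm's internals.

```python
# Illustrative per-person heat-stress accumulation from wearable readings.
# Not Fujitsu's algorithm: the weights and 28 C threshold are arbitrary.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    resting_hr: float          # individual baseline, beats/min
    strain: float = 0.0        # accumulated stress score

    def update(self, heart_rate: float, ambient_c: float, minutes: float):
        # Accumulate only when pulse elevation and heat are both present.
        hr_elevation = max(0.0, heart_rate - self.resting_hr)
        heat_excess = max(0.0, ambient_c - 28.0)
        self.strain += 0.01 * hr_elevation * heat_excess * minutes

guard = Worker("guard A", resting_hr=62)
for hr, temp in [(95, 33), (110, 34), (120, 35)]:   # three 10-minute readings
    guard.update(hr, temp, minutes=10)
    print(f"{guard.name}: accumulated strain = {guard.strain:.1f}")
```

The point of the per-person baseline is exactly the one Fujitsu makes: the same ambient conditions produce different strain estimates for different individuals.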

Machine Learning Advances Lung Disease Diagnosis

Fujitsu Laboratories Ltd. in partnership with Fujitsu R&D Center Co., Ltd., has developed a technology to improve the diagnosis for a group of lung diseases that includes pneumonia and emphysema. The technology retrieves similar disease cases from a computed tomography (CT) database based on abnormal shadows implicated in these disease states. The technology is especially needed for diffuse lung diseases like pneumonia, where the abnormal shadows are spread throughout the organ in all directions. These three-dimensional problems require a great deal of knowledge and experience on the clinician’s part to interpret and diagnose.


As explained by Fujitsu, “the technology automatically separates the complex interior of the organ into areas through image analysis, and uses machine learning to recognize abnormal shadow candidates in each area. By dividing up the organ spatially into periphery, core, top, bottom, left and right, and focusing on the spread of the abnormal shadows in each area, it becomes possible to view things in the same way doctors do when determining similarities for diagnosis.”
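The retrieval step this enables can be sketched simply: represent each case as a vector of per-region abnormal-shadow scores, then rank stored cases by distance to the query. The region names follow the quote above; the case IDs and scores are invented.

```python
# Minimal case-retrieval sketch: per-region shadow scores, nearest neighbors.
import numpy as np

REGIONS = ["periphery", "core", "top", "bottom", "left", "right"]

# Hypothetical database: case id -> fraction of each region showing shadows.
database = {
    "case-001": np.array([0.70, 0.10, 0.05, 0.60, 0.40, 0.35]),
    "case-002": np.array([0.05, 0.55, 0.50, 0.05, 0.30, 0.30]),
    "case-003": np.array([0.65, 0.15, 0.10, 0.55, 0.45, 0.30]),
}

query = np.array([0.68, 0.12, 0.07, 0.58, 0.42, 0.33])   # new patient's scores

ranked = sorted(database.items(), key=lambda kv: np.linalg.norm(kv[1] - query))
for case_id, vec in ranked:
    print(case_id, f"distance = {np.linalg.norm(vec - query):.3f}")
```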

Early studies using real-world data indicate high accuracy for the approach, which has the potential to save lives by reducing the time it takes to reach a correct diagnosis.

Promoting Open Data Usage in the Japanese Government

On June 28, Fujitsu announced that it will be part of a project run by the Cabinet Secretariat’s National Strategy Office of Information and Communications Technology to promote the use of open data held by national and regional public organizations. The goal is to make open data, such as population statistics, industry compositions, and geographic data, more accessible, and by doing so strengthen national competitiveness.

Fujitsu will leverage its Zinrai platform to develop a test system that can laterally search for data across multiple government systems, relating texts that have the same meaning. The system will also “learn” from users’ search results such that it can fine-tune its suggestions.
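A minimal sketch of cross-catalog text matching follows, using TF-IDF cosine similarity as a stand-in for the richer semantic matching described; the catalog entries and query are invented.

```python
# Cross-database text search sketch: rank catalog entries against a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [                                  # entries from several "systems"
    "population statistics by prefecture, annual census data",
    "industry composition and employment by region",
    "geographic boundary data for municipalities",
]
query = ["regional employment and industry breakdown"]

vectorizer = TfidfVectorizer().fit(catalog + query)
scores = cosine_similarity(vectorizer.transform(query),
                           vectorizer.transform(catalog))[0]
for text, score in sorted(zip(catalog, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {text}")
```

A bag-of-words model like this misses paraphrases ("region" vs. "regional"), which is precisely the gap a system that "relates texts that have the same meaning" aims to close.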


The study, “Creating an AI-Based Multi-Database Search and Best-Response Suggestion System (Research Study on Increasing Usability of Data Catalog Sites),” will run through December 22, 2017. Fujitsu expects the trial to result in a proposal to the Strategy Office of Information and Communications Technology for implementation.

Figure: The Zinrai AI framework (Source: Fujitsu, 2016)


OLCF’s Titan Advances Delivery of Accelerated, High-Res Earth System Model

Tue, 07/18/2017 - 14:29

OAK RIDGE, Tenn., July 18, 2017 — A new integrated computational climate model developed to reduce uncertainties in future climate predictions marks the first successful attempt to bridge Earth systems with energy and economic models and large-scale human impact data. The integrated Earth System Model, or iESM, is being used to explore interactions among the physical climate system, biological components of the Earth system, and human systems.

By using supercomputers such as Titan, a large multidisciplinary team of scientists led by Peter Thornton of the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) had the power required to integrate massive codes that combine physical and biological processes in the Earth system with feedbacks from human activity.

“The model we developed and applied couples biospheric feedbacks from oceans, atmosphere, and land with human activities, such as fossil fuel emissions, agriculture, and land use, which eliminates important sources of uncertainty from projected climate outcomes,” said Thornton, leader of the Terrestrial Systems Modeling group in ORNL’s Environmental Sciences Division and deputy director of ORNL’s Climate Change Science Institute.

Titan is a 27-petaflop Cray XK7 machine with a hybrid CPU-GPU architecture managed by the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

Through the Advanced Scientific Computing Research Leadership Computing Challenge program, Thornton’s team was awarded 85 million compute hours to improve the Accelerated Climate Modeling for Energy (ACME) effort, a project sponsored by the Earth System Modeling program within DOE’s Office of Biological and Environmental Research. Currently, ACME collaborators are focused on developing an advanced climate model capable of simulating 80 years of historic and future climate variability and change in 3 weeks or less of computing effort.

Now in its third year, the project has achieved several milestones — notably the development of ACME version 1 and the successful inclusion of human factors in one of its component models, the iESM.

“What’s unique about ACME is that it’s pushing the system to a higher resolution than has been attempted before,” Thornton said. “It’s also pushing toward a more comprehensive simulation capability by including human dimensions and other advances, yielding the most detailed Earth system models to date.”

The Human Connection

To inform its Earth system models, the climate modeling community has a long history of using integrated assessment models — frameworks for describing humanity’s impact on Earth, including the source of global greenhouse gases, land use and land cover change, and other resource-related drivers of anthropogenic climate change.

Until now, researchers had not been able to directly couple large-scale human activity with an Earth system model. In fact, the novel iESM could mark a new era of complex and comprehensive modeling that reduces uncertainty by incorporating immediate feedbacks to socioeconomic variables for more consistent predictions.

The development of iESM started before the ACME initiative, when a multilaboratory team aimed to add new human dimensions, such as how people affect the planet as they produce and consume energy, to Earth system models. The model, now part of the ACME human dimensions component, is being merged with ACME in preparation for ACME version 2.

Along with iESM, the ACME team has added enhancements to the land, atmosphere, and ocean components of their code. These include a more capable framework for calculating the cyclical flow of chemical elements and compounds like carbon, nitrogen, and water in the environment. The new ACME land model includes a fully-coupled reactive transport scheme for these biogeochemical processes. This capability will provide a more consistent connection between physical (thermal and hydrologic) and biological components of the simulation.
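As a cartoon of what coupled biogeochemistry means in practice, the toy box model below steps carbon and nitrogen pools that feed back on each other. It is purely illustrative, with made-up rate constants, and bears no relation to ACME's actual reactive-transport code.

```python
# Toy two-pool carbon-nitrogen box model, forward Euler integration.
# Decomposition is nitrogen-limited, so the two cycles are coupled.
def step(soil_c, soil_n, dt=0.1):
    decomp = 0.05 * soil_c * (soil_n / (soil_n + 1.0))   # N-limited decomposition
    uptake = 0.02 * soil_n                               # plant nitrogen uptake
    soil_c += dt * (1.0 - decomp)                # litter input minus decomposition
    soil_n += dt * (0.1 * decomp - uptake)       # mineralization minus uptake
    return soil_c, soil_n

c, n = 100.0, 5.0
for year in range(5):
    for _ in range(10):                          # ten 0.1-year steps per year
        c, n = step(c, n)
    print(f"year {year + 1}: soil C = {c:.1f}, soil N = {n:.2f}")
```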

Perhaps the most significant advancement, however, is the introduction of the phosphorus cycle to the code. Phosphorus is an essential nutrient for life, moving from soil and sediment to plants and animals and back. ACME version 1 is the first global Earth system model to include this dynamic.

In addition to increasing the resolution of the model, and thus estimating new parameters, ongoing tuning and optimizing of ACME has brought the team closer to reaching its 80-years-in-3-weeks simulation speed goal. With the advances, the team can now run about 3 or 4 simulated years per day, about twice the output of earlier code versions.

“The overall ACME project not only involves developing these high-resolution models but also optimizing their performance on the high-performance computing platforms that DOE has at its disposal, including Titan, to get to our target of 5 simulated years per day,” Thornton said.
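In simulated-years-per-day (SYPD) terms, the two stated goals line up as follows:

```python
# The two stated ACME throughput goals, expressed in SYPD.
years, weeks = 80, 3
sypd_80_in_3 = years / (weeks * 7)
print(f"80 years in 3 weeks   = {sypd_80_in_3:.1f} SYPD")   # ~3.8
print("current throughput    = 3-4 SYPD")
print("stated stretch target = 5 SYPD")
```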

Increased utilization of Titan’s GPUs is helping the project reach the next level. The OLCF’s Matthew Norman is working with Thornton’s team to offload various parts of ACME to GPUs, which excel at quickly executing repetitive calculations.

“ACME version 2 should make much more use of the GPUs to increase simulation performance, and there are other projects that are spin-off efforts using ACME that are targeting Summit [the OLCF’s next leadership-class machine] and future exascale platforms,” Norman said.

The OLCF is continuing to assist the team with data management via advanced monitoring and workflow tool support to help reduce the amount of time researchers need to get results. OLCF staff, including liaisons Valentine Anantharaj and Norman, are also helping with various tasks like debugging, scaling, and optimizing code.

“The liaisons are crucial for helping us understand where to look for problems when they arise and getting the best performance out of the Titan supercomputer,” Thornton said.

For iESM to take the next step, the representation of land surface between coupled models must become more consistent. The team also aims to include other dimensions, including water management and storage, agricultural productivity, and commodity pricing structures. This will yield better information about potential changes in water resource availability, allocation, and shortages under different climates.

“These improvements are vital since there is concern that fresh water resources might be the pinch point that gets felt first,” Thornton said.

ACME version 1 will be publicly released in late 2017 for analysis and use by other researchers. Results from the model will also contribute to the Coupled Model Intercomparison Project, which provides foundational material for climate change assessment reports.

Source: ORNL


Cloudian, ScaleMatrix & OnRamp BioInformatics Speed Genomic Data Analysis

Tue, 07/18/2017 - 14:26

SAN MATEO, Calif., July 18, 2017 — Cloudian, Inc., a global leader in enterprise object storage systems, today announced it is working with ScaleMatrix and OnRamp BioInformatics to improve efficiencies of data analysis and storage for life sciences companies focused on human genomics.

ScaleMatrix, a hybrid-cloud services provider, operates a 14,000-square-foot life-sciences incubator that couples a CLIA-certified genomics laboratory with a state-of-the-art datacenter powering Cloudian object storage and OnRamp BioInformatics genomic analysis software. The incubator helps reduce the barriers to entry for early-stage companies in the fields of genomics, molecular diagnostics and bioinformatics. Using an on-site Illumina sequencer at the incubator, researchers and biologists can generate genomic data, then use genomic software from OnRamp BioInformatics to rapidly process and interpret the data in ScaleMatrix’s GPU-enabled high-density compute farm and efficiently store valuable insights and data in Cloudian object storage.

The Cloudian deployment enables researchers to store, tag, track and retrieve large amounts of data intelligently and cost-effectively, while the OnRamp BioInformatics software streamlines and simplifies genomic data analysis so that biologists, clinicians and drug developers can unlock valuable insights.

“Our cradle-to-grave experience helps break down major barriers in life sciences, which involve massive data storage and analysis requirements that often aren’t within reach of start-ups,” said Chris Orlando, co-founder of ScaleMatrix. “Cloudian’s petabyte-scalable storage is the ideal platform to quickly and cost-effectively accommodate the data volumes inherent in genomic research. OnRamp BioInformatics completes the solution with an intuitive user experience, workflow automation and comprehensive data and analysis tracking to increase storage efficiency and research productivity.”

By 2025, 40 exabytes of data will be generated annually by DNA sequencers as two billion genomes are sequenced. One human genome generates 250 gigabytes of raw data, which can expand by a factor of three to five as it is processed by any of the 11,000 open-source applications currently available for genome analysis. This data creates enormous management challenges for the future of precision medicine. Cloudian scale-out storage allows labs to store information at 70 percent less cost than with traditional on-premises storage; it also eliminates the variable charges of public cloud storage.
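Making the per-genome arithmetic explicit (the cohort size below is hypothetical, chosen only for illustration):

```python
# The article's per-genome storage arithmetic, made explicit.
RAW_GB = 250                       # raw data per human genome
EXPANSION = (3, 5)                 # processing expands raw data 3-5x

processed = tuple(RAW_GB * f for f in EXPANSION)
print(f"processed data per genome: {processed[0]}-{processed[1]} GB")

cohort = 1000                      # hypothetical lab cohort size
lo, hi = (cohort * p / 1000 for p in processed)   # terabytes
print(f"storage for a {cohort:,}-genome cohort: {lo:,.0f}-{hi:,.0f} TB")
```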

“DNA sequencing is important for our future and for human longevity, but the cost of storing and analyzing genomic data is a significant hurdle,” said Tim Wesselman, CEO of OnRamp Bioinformatics. “Too often, storage costs and data analysis complexity become bottlenecks to scientific breakthroughs by obscuring insights or even by forcing researchers to delete data that may have real value. Unlike public cloud storage, Cloudian has no access charges and easily scales to genomic proportions, so researchers can stay within their budgets and retain the data they need.”

“Object storage is ideal for the scale and management of the large amounts of data produced in the life sciences, and particularly genomics,” said Michael Tso, CEO of Cloudian. “We’re helping power developments in human genomics that can ultimately lead to new discoveries in health, drug development and longevity.”

About Cloudian
Based in Silicon Valley, Cloudian is a leading provider of enterprise object storage systems. Our flagship product, Cloudian HyperStore, enables service providers and enterprises to build reliable, affordable and scalable enterprise storage solutions. Join us on LinkedIn, follow us on Twitter (@CloudianStorage) and Facebook, or visit us at www.cloudian.com.

About ScaleMatrix
ScaleMatrix is a Hybrid Service Provider delivering an array of cloud, colocation, managed services, data protection and connectivity solutions under one simple umbrella. As developers of ground-breaking data center efficiency technology, our company offers a cutting-edge product catalog with white-glove support services at market prices which benefit from these proprietary cost-saving innovations. With a focus on helping clients choose the right platform and performance criteria for a variety of IT workloads, ScaleMatrix aims to be a one-stop shop for those looking to simplify and reliably manage development, production, and disaster recovery workloads with a single partner.

About OnRamp BioInformatics
Based in the Genomics Capital of San Diego, OnRamp BioInformatics provides software and systems to streamline and simplify the analysis, data management and storage of large-scale genomic datasets so that biologists, researchers and drug developers can harness the potential of DNA sequencing while reducing costs and increasing productivity. More than 2,000 analyses have been completed with OnRamp Bio’s solutions, which have been recognized as best practices for bioinformatics by Expert Review of Molecular Diagnostics. Follow us on Twitter (@OnRampBio) and Facebook, or visit us at www.onramp.bio.

Source: Cloudian


Bright Computing, AMT Sign Partnership Agreement

Tue, 07/18/2017 - 13:09

SAN JOSE, Calif., July 18, 2017 — Bright Computing, a global leader in cluster and cloud infrastructure automation software, today announced a reseller agreement with AMT.

Operating in the Information Technology market since 1994, AMT provides IT professional services, infrastructure design and build, and cloud broker services based on multi-cloud providers. Additionally, AMT specializes in HPC services, implementing cloud and on-premise solutions that incorporate the latest HPC technologies and software. For HPC workloads, AMT offers code optimization services, to deliver the best value from hardware resources and to offer infrastructure savings as a whole.

Targeting the oil and gas, life sciences, and manufacturing industries, AMT chose to partner with Bright Computing to add a turnkey infrastructure management solution to its HPC portfolio, to empower its customers to deploy clusters faster and manage them more effectively.

By offering Bright technology to its customer base, the Brazil-based systems integrator intends to combine job schedulers with Bright Cluster Manager to deliver best in class HPC solutions.

Ricardo Lugão, HPC Director at AMT, commented: “We are very impressed with Bright’s technology and we believe it will make a huge difference to our customers’ HPC environments. With Bright, the management of an HPC cluster becomes very straightforward, empowering end users to administer their workloads, rather than relying on HPC experts.”

Jack Hanna, Director of Alliances at Bright Computing, added: “We welcome AMT to the Bright partner community. This is an exciting company that has a lot of traction in the HPC space in Brazil, and we look forward to offering Bright technology to its customer base.”

Source: Bright Computing


SUSE, Supermicro Enter Global Partnership

Tue, 07/18/2017 - 09:47

NUREMBERG, Germany, and SAN JOSE, Calif., July 18, 2017 — SUSE and Supermicro have entered into a global partnership that will provide innovative new enterprise IT solutions based on Supermicro hardware featuring SUSE OpenStack Cloud, SUSE Enterprise Storage, SUSE Linux Enterprise Server for SAP Applications, and embedded Linux. SUSE and Supermicro are responding to market demand to provide converged solutions.

For the first of multiple planned joint solutions, Supermicro and SUSE have done the complex work of integrating OpenStack into a market-ready offering. Built on SUSE YES Certified hardware, this all-in-one private cloud solution is a validated reference architecture, leveraging Supermicro’s Green Computing with best performance per watt, dollar and square foot. This converged cloud and storage solution provides customers a streamlined installation of a production-ready Infrastructure as a Service private cloud with scalability from one rack to many, based on the customer’s needs.

“Supermicro’s NVMe-enabled OpenStack hardware with SUSE’s proven enterprise software suite maximizes compute performance,” said Michael McNerney, vice president, Software Solutions and Infrastructure, at Supermicro. “Combined with space-efficient SimplyDouble storage you get unparalleled block and object services to deliver a scalable cloud datacenter infrastructure. When maximum performance is required, our 1U Ultra SuperServer with 10 hot-swap NVMe drives provides unbeatable storage throughput.”

Phillip Cockrell, SUSE vice president of Worldwide Alliance Sales, said, “Supermicro’s hardware offerings and partner ecosystem teamed with SUSE’s OpenStack cloud and software-defined storage solutions will provide significant value in a very cost-competitive market. These solutions harness the innovation of open source with the backing and support of global providers Supermicro and SUSE. They are supplanting traditional, more-expensive cloud and storage solutions, giving customers more choice and flexibility to meet their business objectives and better serve their own customers.”

For more information about the SUSE and Supermicro alliance, including available solutions and reference architectures, visit www.suse.com/supermicro and www.supermicro.com/solutions/SUSE.cfm. Supermicro and SUSE will discuss more about their innovative joint solutions at SUSECON Sept. 25-29 in Prague.

About SUSE
SUSE, a pioneer in open source software, provides reliable, interoperable Linux, cloud infrastructure and storage solutions that give enterprises greater control and flexibility. More than 20 years of engineering excellence, exceptional service and an unrivaled partner ecosystem power the products and support that help our customers manage complexity, reduce cost, and confidently deliver mission-critical services. The lasting relationships we build allow us to adapt and deliver the smarter innovation they need to succeed – today and tomorrow. For more information, visit www.suse.com.

About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions® for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides customers with the most energy-efficient, environmentally-friendly, solutions available on the market. For more information, please visit, http://www.supermicro.com.

Copyright 2017 SUSE LLC. All rights reserved. SUSE and the SUSE logo are registered trademarks of SUSE LLC in the United States and other countries. All third-party trademarks are the property of their respective owners.

Supermicro, SuperServer, SuperBlade, MicroBlade, BigTwin, Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names and trademarks are the property of their respective owners.

Source: SUSE


International Women in HPC Workshop Issues Call for Posters

Tue, 07/18/2017 - 09:41

July 18, 2017 — Women in HPC will once again attend the Supercomputing conference to discuss diversity and inclusivity topics. Activities will bring together women from across the international HPC community, provide opportunities to network, showcase the work of inspiring women, and discuss how we can all work towards improving the under-representation of women in supercomputing.

The 7th International Women in High Performance Computing (WHPC) workshop at SC17 in Denver brings together the HPC community to discuss the growing importance of increasing diversity in the workplace. This workshop will recognize and discuss the challenges of improving the proportion of women in the HPC community, and is relevant for employers and employees throughout the supercomputing workforce who are interested in addressing diversity.

Sessions include:

  • Improving Diversity in the Workplace: What methods have been put in place and tested to improve workplace diversity and inclusion?

  • Career Development and Mentoring: Skills to thrive; sharing your experiences and advice on how to succeed in the workplace.

  • Virtual Poster Showcase: Highlighting work from women across industry and academia.

Call for posters: Now Open!

Deadline for submissions: August 13th 2017 AOE

As part of the workshop, we invite submissions from women in industry and academia to present their work as a virtual poster. This will promote the engagement of women in HPC research and applications, provide opportunities for peer-to-peer networking, and offer the chance to interact with female role models and employers. Submissions are invited on all topics relating to HPC from users and developers. All abstracts should emphasise the computational aspects of the work, such as the facilities used, the challenges that HPC can help address, and any remaining challenges.

For full details please see: http://www.womeninhpc.org/whpc-sc17/workshop/submit/

Workshop Committee

Chairs

– Workshop Chair: Toni Collis, EPCC, UK and Women in HPC Network, UK

– Poster Chair: Misbah Mubarak, Argonne National Laboratory, USA

– Mentoring Chair: Elsa Gonsiorowski, Lawrence Livermore National Laboratory, USA

– Publicity Chair: Kimberly McMahon, McMahon Consulting, USA

Steering and Organisation Committee

– Sunita Chandrasekaran, University of Delaware, USA

– Trish Damkroger, Intel, USA

– Kelly Gaither, TACC, USA

– Rebecca Hartman-Baker, NERSC, USA

– Daniel Holmes, EPCC, UK

– Adrian Jackson, EPCC, UK

– Alison Kennedy, Hartree Centre, STFC, UK

– Lorna Rivera, CEISMC, Georgia Institute of Technology, USA

Programme Committee (for early career posters)

– Sunita Chandrasekaran, University of Delaware, USA

– Toni Collis, EPCC, UK and Women in HPC Network, UK

– Elsa Gonsiorowski, Lawrence Livermore National Laboratory, USA

– Rebecca Hartman-Baker, NERSC, USA

– Daniel Holmes, EPCC, UK

– Adrian Jackson, EPCC, UK

– Alison Kennedy, Hartree Centre, STFC, UK

– Misbah Mubarak, Argonne National Laboratory, USA

– Lorna Rivera, CEISMC, Georgia Institute of Technology, USA

– Jesmin Jahan Tithi, Parallel Computing Lab, Intel Corporation, USA

Source: Women in HPC


CSIRO Powers Bionic Vision Research with Dell EMC PowerEdge Server-based AI

Tue, 07/18/2017 - 09:33

SYDNEY, Australia, July 18, 2017 — Dell EMC is announcing it will work with the Commonwealth Scientific and Industrial Research Organization (CSIRO) to build a new large-scale scientific computing system to expand CSIRO’s capability in deep learning, a key approach to furthering progress towards artificial intelligence.

The new system is named ‘Bracewell’ after Ronald N. Bracewell, an Australian astronomer and engineer who worked in the CSIRO Radiophysics Laboratory during World War II, and whose work led to fundamental advances in medical imaging.

In addition to artificial intelligence, the system provides capability for research in areas as diverse as virtual screening for therapeutic treatments, traffic and logistics optimization, modelling of new material structures and compositions, machine learning for image recognition and pattern analysis.

CSIRO requested tenders in November 2016 to build the new system with a $4 million budget, and following Dell EMC’s successful proposal, the new system was installed in just five days across May and June 2017. The system is now live and began production in early July 2017.

Greater scale and processing power enables richer, more realistic vision solution

One of the first research teams to benefit from the new processing power will be Data61’s Computer Vision group, led by Associate Professor Nick Barnes. His team develops the software for a bionic vision solution that aims to restore sight for those with profound vision loss, through new computer vision processing that uses large-scale image datasets to learn more effective processing.

Bracewell will help the research team scale their software to tackle new and more advanced challenges, and deliver a richer and more robust visual experience for the profoundly vision impaired.

“When we conducted our first human trial, participants had to be fully supervised and were mostly limited to the laboratory, but for our next trial we’re aiming to get participants out of the lab and into the real world, controlling the whole system themselves,” Assoc. Professor Barnes said.

With access to this new computing capability, Assoc. Professor Barnes and his team will be able to use much larger data sets to help train the software to recognize and process more images, helping deliver a greater contextual meaning to the recipient.

“To make this a reality, we need to build vision processing systems that show accurate visualizations of the world in a broad variety of scenarios. These will enable people to see the world through their bionic vision in a way that enables them to safely and effectively interact with challenging visual environments,” Assoc. Professor Barnes said.

“This new system will provide the greater scale and processing power we need to build our computer vision systems, optimizing processing over broader scenarios, represented by much larger sets of images, to help train the software to understand and represent the world. We’ll be able to take our computer vision research to the next level, solving problems by leveraging large-scale image data in a way that most labs around the world can’t,” Assoc. Professor Barnes said.

Efficient installation speeds time to results

The Bracewell system is built on Dell EMC’s PowerEdge platform, with partner technology including GPUs for computation and InfiniBand networking, which ties all the compute nodes together in a solution with lower latency and higher bandwidth than traditional networking.

Dell EMC ANZ High Performance Computing Lead, Andrew Underwood, said the installation process was streamlined and optimized for deep learning applications, with Bright Cluster Manager technology helping put these frameworks in place faster than ever before.

“Our system removes the complexity from the installation, management and use of artificial intelligence frameworks, and has enabled CSIRO to speed up its time to results for scientific outcomes, which will in turn boost Australia’s competitiveness in the global economy.” Mr. Underwood said.

The system includes:

  • 114 x PowerEdge C4130 with NVIDIA Tesla P100 GPUs, NVLINK, dual Intel Xeon processors and 100Gbps Mellanox EDR InfiniBand
  • Totaling:
    • 1,634,304 CUDA Compute Cores
    • 3,192 Xeon Compute Cores
    • 29TB RAM
  • 13 x 100Gbps 36p EDR InfiniBand switch fabric
  • Bright Cluster Manager Software 8.0
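Those totals are internally consistent with a plausible per-node configuration; the sketch below assumes four Tesla P100s (3,584 CUDA cores each) and two 14-core Xeons per C4130 node, which are inferences rather than published details.

```python
# Cross-checking the published Bracewell totals against assumed per-node specs.
NODES = 114
CUDA_PER_P100 = 3584
GPUS_PER_NODE = 4                  # assumed C4130 configuration
XEON_CORES_PER_NODE = 2 * 14       # assumed dual 14-core CPUs

print(NODES * GPUS_PER_NODE * CUDA_PER_P100)   # 1,634,304 CUDA cores (matches)
print(NODES * XEON_CORES_PER_NODE)             # 3,192 Xeon cores (matches)
print(f"RAM per node for 29TB total: {29e12 / NODES / 1e9:.0f} GB")   # ~254 GB
```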

Doubling the aggregate computational power available to researchers

CSIRO Deputy Chief Information Officer, and Head of Scientific Computing, Angus Macoustra, said the system is crucial to the organization’s work in identifying and solving emerging science problems.

“This is a critical enabler for CSIRO science, engineering and innovation.  As a leading global research organization, it’s important to sustain our global competitiveness by maintaining the currency and performance of our computing and data infrastructures,” Mr. Macoustra said.

“The power of this new system is that it allows our researchers to tackle challenging workloads and ultimately enable CSIRO research to solve real-world issues. The system will nearly double the aggregate computational power available to CSIRO researchers, and will help transform the way we do scientific research and development,” Mr. Macoustra said.

Armughan Ahmad, senior vice president and general manager for Dell EMC Ready Solutions and Alliances, said, “Dell EMC continues to be committed to creating technologies that drive human progress. This is particularly true of our high performance computing (HPC) Ready Solutions offerings, which are leveraged by leading research institutions like CSIRO. The research performed at CSIRO will change the way we live and work in the future for the better. We’re proud to play a part in evolving the work happening at CSIRO and look forward to enabling scientific progress for years to come.”

The system builds on Dell EMC’s varied work in the high-performance computing space, with the Pearcey system installed in 2016 and numerous other systems for Australian universities such as the University of Melbourne ‘Spartan’, Monash University ‘MASSIVE3’ and the University of Sydney ‘Artemis’.

Source: Dell EMC


ANSYS, Saudi Aramco & KAUST Shatter Supercomputing Record

Tue, 07/18/2017 - 09:22

PITTSBURGH, July 18, 2017 — ANSYS (NASDAQ: ANSS), Saudi Aramco and King Abdullah University of Science and Technology (KAUST) have set a new supercomputing milestone by scaling ANSYS Fluent to nearly 200,000 processor cores – enabling organizations to make critical and cost-effective decisions faster and increase the overall efficiency of oil and gas production facilities.

This supercomputing record represents a more than 5x increase over the record set just three years ago, when Fluent first reached the 36,000-core scaling milestone.

The calculations were run on the Shaheen II, a Cray XC40 supercomputer, hosted at the KAUST Supercomputing Core Lab (KSL). By leveraging high performance computing (HPC), ANSYS, Saudi Aramco and KSL sped up a complex simulation of a separation vessel from several weeks to an overnight run. This simulation is critical to all oil and gas production facilities – empowering organizations around the world to reduce design development time and better predict equipment performance under varying operational conditions. Saudi Aramco will apply this technology to make more-informed, timely decisions to retrofit separation vessels to optimize operation throughout an oil field’s lifetime.

“Today’s regulatory requirements and market expectations mean that manufacturers must develop products that are cleaner, safer, more efficient and more reliable,” said Wim Slagter, director of HPC and cloud alliances at ANSYS. “To reach such targets, designers and engineers must understand product performance with higher accuracy than ever before – especially for separation technologies, where an improved separation performance can immediately increase the efficiency and profitability of an oil field. The supercomputing collaboration between ANSYS, Saudi Aramco and KSL enabled enhanced insight in complex gas, water and crude-oil flows inside a separation vessel, which include liquid free-surface, phase mixing and droplets settling phenomena.”

“Our oil and gas facilities are among the largest in the world. We selected a complex representative application – a multiphase gravity separation vessel – to confirm the value of HPC in reducing turnover time, which is critical to our industry,” said Ehab Elsaadawy, computational modeling specialist and oil treatment team leader at Saudi Aramco’s Research and Development Center. “By working with strategic partner, KAUST, we can now run these complex simulations in one day instead of weeks.”

KSL’s Shaheen II supercomputer is a Cray system composed of 6,174 nodes representing 197,568 processor cores tightly integrated with a richly layered memory hierarchy and interconnection network.

“Multiphase problems are complex and require multiple global synchronizations, making them harder to scale than single phase laminar or turbulent flow simulation. Unstructured mesh and complex geometry add further complexity,” said Jysoo Lee, director, KAUST Supercomputing Core Lab. “Our scalability tests are not just designed for the sake of obtaining scalability at scale. This was a typical Aramco separation vessel with typical operation conditions, and larger core counts are added to reduce the time to solution. ANSYS provides a viable tool for Saudi Aramco to solve their design and analysis problems at full capacity of Shaheen. And for KAUST-Aramco R&D collaboration, this is our first development work. There are more projects in the pipeline.”
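For context, the headline numbers work out as follows (the "overnight" duration is approximated at 12 hours, and "several weeks" at three; both are assumptions):

```python
# The headline Fluent scaling numbers relative to the 2014 record.
old_cores, new_cores = 36_000, 197_568
print(f"core-count increase: {new_cores / old_cores:.1f}x")   # ~5.5x

# Turnaround improvement if ~3 weeks shrinks to an overnight (~12 hour) run.
weeks_hours = 3 * 7 * 24
print(f"time-to-solution speedup: ~{weeks_hours / 12:.0f}x")
```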

About ANSYS, Inc.

If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and create products limited only by imagination. Founded in 1970, ANSYS employs thousands of professionals, many of whom are expert M.S.- and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Headquartered south of Pittsburgh, Pennsylvania, U.S.A., ANSYS has more than 75 strategic sales locations throughout the world with a network of channel partners in 40+ countries. Visit www.ansys.com for more information.

 

About Saudi Aramco

Saudi Aramco is the state-owned oil company of the Kingdom of Saudi Arabia and a fully integrated global petroleum and chemicals enterprise. Over the past 80 years, we have become a world leader in hydrocarbons exploration, production, refining, distribution and marketing. Saudi Aramco’s oil and gas production infrastructure leads the industry in scale of production, operational reliability, and technical advances. Our plants and the people who run them make us the world’s largest crude oil exporter, producing roughly one in every eight barrels of the world’s oil supply.

About King Abdullah University of Science and Technology (KAUST)

KAUST advances science and technology through distinctive and collaborative research integrated with graduate education. Located on the Red Sea coast in Saudi Arabia, KAUST conducts curiosity-driven and goal-oriented research to address global challenges related to food, water, energy and the environment. Established in 2009, KAUST is a catalyst for innovation, economic development and social prosperity in Saudi Arabia and the world. The university currently educates and trains over 900 master’s and doctoral students, supported by an academic community of 150 faculty members, 400 postdocs and 300 research scientists. With 100 nationalities working and living at KAUST, the university brings together people and ideas from all over the world. www.kaust.edu.sa

The KAUST Supercomputing Core Lab mission is to inspire and enable scientific, economic and social advances through the development and application of HPC solutions, through collaboration with KAUST researchers and partners, and through the provision of world-class computational systems and services. Visit https://corelabs.kaust.edu.sa/supercomputing/ for more information.

Source: ANSYS


Cavium 25/50Gbps Ethernet Adapter Technology Powers HPE Synergy

Tue, 07/18/2017 - 09:11

SAN JOSE, Calif., July 18, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, cloud, wired and wireless networking, announced today that its FastLinQ 25/50GbE Ethernet technology will power HPE Synergy 480 and 660 Gen10 compute modules. The introduction of this new class of I/O connectivity for HPE Synergy Gen 9 and Gen 10 compute modules provides significant density, cost and power benefits and enables acceleration of enterprise, telco and cloud applications.

Cavium FastLinQ NICs are built from the ground up to bridge traditional and new IT with the agility, speed and continuous delivery needed for today’s applications. Based on the innovative technology from Cavium and HPE, customers can run more demanding workloads and applications that require higher bandwidth and lower latency than existing I/O offerings. This includes hybrid cloud, Big Data, high performance compute and traditional enterprise applications.

Cavium 25/50Gb Ethernet Adapter Technology for HPE

The HPE Synergy 6810C 25/50Gb Ethernet adapter, powered by Cavium’s QL45604 controller, delivers high bandwidth and low latency connectivity for the HPE Synergy platform. This dual-port 25/50Gb adapter provides 2.5 times the available server I/O bandwidth for HPE Synergy Gen 9 and Gen 10 compute modules versus the previous generation, allowing HPE customers to improve workload performance. With support for RDMA over Converged Ethernet (RoCE), I/O latency is significantly reduced to increase application performance.

“Cavium is a leading provider for I/O connectivity for a broad range of HPE servers,” said Rajneesh Gaur, VP and General Manager, Ethernet Adapter Group, at Cavium. “The introduction of the HPE Synergy 6810C 25/50Gb Ethernet adapter based on Cavium technology brings high-performance and low latency Ethernet connectivity for world-class HPE Synergy composable infrastructure. This is an important milestone in enabling enterprise data centers and service providers to increase operational efficiency and application performance enabling an agile IT infrastructure.”

“The ability to provide high-performance 25/50Gb Ethernet connectivity within HPE Synergy, the industry’s first platform for composable infrastructure, enables our customers to greatly improve application performance and virtual machine scalability, while significantly reducing hardware and management complexity. In addition, Cavium’s Ethernet technology supports enhanced security features and secure firmware updates in HPE Gen10 servers, providing customers increased protection against firmware attacks,” said Tom Lattin, Vice President of Server Options for Hewlett Packard Enterprise.

Key Ethernet technologies and features include:

  • HPE Synergy 6810C 25/50Gb Ethernet adapter supports up to 100Gbps bi-directional bandwidth for enhanced network performance.
  • Designed to support HPC clusters, computational, financial and other applications requiring low-latency network connectivity, with support for RDMA over Converged Ethernet (RoCE). Also capable of supporting the Internet Wide Area RDMA Protocol (iWARP) with a future firmware upgrade.
  • DPDK support provides small packet acceleration capability for telco, ecommerce and other applications.
  • Advanced Server Virtualization features that include Single Root I/O virtualization (SR-IOV) for low latency high-performance virtual workloads.
  • Support for PXE boot, IPv6 and HPE Sea of Sensors 3D Technology to simplify connectivity and improve the overall TCO of network connectivity.
  • Acceleration for Networking Virtualization with Stateless Offloads for tunneling protocols including NVGRE, VxLAN and GRE.

Security Features

  • Authenticated updates for NICs validate that signed firmware is correct and trusted to eliminate rogue firmware installation.
  • Secure Boot safeguards the NIC and ensures no rogue drivers are being executed on start-up.
  • Sanitization (Secure User Data Erase) renders user and configuration data on the NIC irretrievable so that NICs can be safely repurposed or disposed.

Availability

The new HPE Synergy 6810C 25/50Gb Ethernet adapter was announced by HPE on July 11, 2017, and will ship in the third quarter of 2017.

For more information, visit

  1. Cavium FastLinQ Ethernet Portfolio for HPE Servers: http://www.qlogic.com/OEMPartnerships/HP/Documents/Why_Cavium_Technology_for_HPE.pdf
  2. https://www.hpe.com/us/en/servers/networking.html  and www.cavium.com.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity, and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs, and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China, and Taiwan. For more information about the Company, please visit: http://www.cavium.com/.

Source: Cavium

The post Cavium 25/50Gbps Ethernet Adapter Technology Powers HPE Synergy appeared first on HPCwire.

DOE’s HPC for Manufacturing Seeks Industry Proposals to Advance Energy Tech

Tue, 07/18/2017 - 09:07

LIVERMORE, Calif., July 18, 2017 — The U.S. Department of Energy’s (DOE) High Performance Computing for Manufacturing Program, designed to spur the use of national lab supercomputing resources and expertise to advance innovation in energy efficient manufacturing, is seeking a new round of proposals from industry to compete for $3 million.

Since its inception, the High Performance Computing for Manufacturing (HPC4Mfg) Program has supported projects partnering manufacturing industry members with DOE national labs to use laboratory HPC systems and expertise to upgrade their manufacturing processes and bring new clean energy technologies to market. The program’s portfolio includes small and large companies representing a variety of industry sectors. This is the fourth round of funding for this rapidly growing program.

The partnerships use world-class supercomputers and the science and technology expertise resident at the national laboratories, including Lawrence Livermore National Laboratory, which leads the program; principal partners Lawrence Berkeley (LBNL) and Oak Ridge (ORNL) national laboratories; and other participating laboratories. An HPC expert at each lab teams up with U.S. manufacturers on solutions to address challenges that could result in advancing clean energy technology. By using HPC in the design of products and industrial processes, U.S. manufacturers can reap such benefits as accelerating innovation, lowering energy costs, shortening testing cycles, reducing waste and rejected parts, and cutting time to market. More information about the program is available on the HPC4Mfg website.

“U.S. manufacturers from a wide array of manufacturing sectors are recognizing that high performance computing can significantly improve their processes,” said Lori Diachin, an LLNL mathematician and director of the HPC4Mfg Program. “The range of ideas and technologies that companies are applying HPC to is expanding at a rapid rate, and they are finding value in both the access to supercomputing resources and the HPC expertise provided by the national laboratories.”

Concept proposals from U.S. manufacturers seeking to use the national labs’ capabilities can be submitted to the HPC4Mfg Program starting June 12. The program expects that another eight to 10 projects, worth approximately $3 million in total, will be funded. Concept paper applications are due July 26.

HPC is showing potential in addressing a range of manufacturing and applied energy challenges of national importance to the United States. The HPC4Mfg Program releases biannual solicitations as part of a multiyear program to grow the HPC manufacturing community by enticing HPC expertise to the field, adding to a high-tech workforce, and enabling these scientists to make a real impact on clean energy technology and the environment. Past HPC4Mfg solicitations have highlighted energy-intensive manufacturing sectors and the challenges identified in the Energy Department’s 2015 Quadrennial Technology Review. In this solicitation, the program continues to have a strong interest in these areas and is adding a special topic area of advanced materials.

A number of companies and their initial concepts will be selected and paired with a national lab HPC expert to jointly develop a full proposal this summer, with final selections to be announced in November. Companies are encouraged to highlight their most challenging problems so the program can identify the most applicable national lab expertise. More information about the HPC4Mfg Program, the solicitation call and submission instructions can be found on the web.

The Advanced Manufacturing Office within DOE’s Office of Energy Efficiency and Renewable Energy provided funding to LLNL to establish the HPC4Mfg Program in March 2015. The Advanced Scientific Computing Research Program within DOE’s Office of Science supports the program with HPC cycles through its Leadership Computing Challenge allocation program. The National Renewable Energy Laboratory (NREL) also provides computing cycles to support this program.

HPC4Mfg recently announced the selection of new projects as part of a previous round of funding, including: LLNL and ORNL partnering with various manufacturers (Applied Materials, GE Global Research and United Technologies Research) to improve additive manufacturing processes that use powder beds, reducing material use, defects and surface roughness and improving the overall quality of the resulting parts; LBNL partnering with Samsung Semiconductor Inc. (USA) to improve the performance of semiconductor devices by enabling better cooling through the interconnects; Ford Motor Company partnering with Argonne National Laboratory to understand how manufacturing tolerances can impact the fuel efficiency and performance of spark-ignition engines; and NREL partnering with 7AC Technologies to model liquid/membrane interfaces to improve the efficiency of air conditioning systems. In addition, one of the projects, a collaboration among LLNL, the National Energy Technology Laboratory and 8 Rivers Capital to study coal-based Allam cycle combustors, will be co-funded by DOE’s Office of Fossil Energy.

Additional information about submitting proposals is available on the FedBizOpps website.

Source: LLNL

The post DOE’s HPC for Manufacturing Seeks Industry Proposals to Advance Energy Tech appeared first on HPCwire.

Researchers Use DNA to Store and Retrieve Digital Movie

Tue, 07/18/2017 - 08:25

From abacus to pencil and paper to semiconductor chips, the technology of computing has always been an ever-changing target. The human brain is probably the computer we use most (hopefully) and understand least. This month in Nature, a group of distinguished DNA researchers report storing and then retrieving a digital movie, a rather famous one at that, in DNA.

Using a DNA editing technique borrowed from nature (CRISPR-Cas), the researchers developed a code for pixels depicting shades of gray using DNA’s four letters, converted the images, frame by frame, into that code, and coaxed the CRISPR editing machinery to insert the coded DNA into bacteria, which did what bacteria do – replicate. The researchers then extracted DNA from the growing population of bacteria, sequenced the newly generated DNA, decoded it, and reproduced the film clip.
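For a sense of how such an encoding can work in principle, consider a minimal Python sketch that quantizes 8-bit grayscale pixel values into triplets of DNA bases: each base carries two bits, so three bases distinguish 64 gray levels. The mapping below is an illustrative assumption for demonstration purposes, not the scheme Shipman et al. actually used:

    # Illustrative only: quantize 8-bit grayscale pixels into DNA triplets.
    # Each base carries 2 bits, so 3 bases encode 64 gray levels. This
    # mapping is a demonstration assumption, not Shipman et al.'s scheme.
    BASES = "ACGT"

    def pixel_to_triplet(value):
        """Quantize an 8-bit gray value (0-255) to 6 bits, emit 3 bases."""
        level = value >> 2                       # 256 levels -> 64 levels
        digits = [(level >> 4) & 3, (level >> 2) & 3, level & 3]
        return "".join(BASES[d] for d in digits)

    def triplet_to_pixel(triplet):
        """Invert the mapping, recovering the quantized gray value."""
        level = 0
        for base in triplet:
            level = (level << 2) | BASES.index(base)
        return level << 2                        # back to the 8-bit range

    row = [0, 64, 128, 255]                      # one row of a tiny frame
    dna = "".join(pixel_to_triplet(p) for p in row)
    back = [triplet_to_pixel(dna[i:i + 3]) for i in range(0, len(dna), 3)]
    print(dna, back)                             # AAACAAGAATTT [0, 64, 128, 252]

Decoding simply inverts the mapping, which is why sequencing DNA extracted from the bacterial population later suffices to recover the quantized frames.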

Their paper – “CRISPR–Cas encoding of a digital movie into the genomes of a population of living bacteria” – is fascinating on many levels. Not only did the researchers succeed in storing the images in reproducing bacteria, they also uncovered more details of the CRISPR process, including ferreting out efficiency factors, and pointed, however distantly, to prospects for a bacteria-DNA-based recording system that might be used as a “black box flight recorder” for cells.

There are several good accounts of the work, including one in The New York Times (“Who Needs Hard Drives? Scientists Store Film Clip in DNA,” written by Gina Kolata) and another from the National Institutes of Health (“Scientists replay movie encoded in DNA”). The researchers started with an image of a human hand and progressed to capturing the 1878 photographic sequence by British photographer Eadweard Muybridge, which showed that running horses do indeed take flight, briefly, with all hooves aloft.


The researchers – Seth L. Shipman, Jeff Nivala, Jeffrey D. Macklis, and George M. Church – are all from Harvard and very familiar to the genomics research community. In the paper they understandably concentrate on the CRISPR technology, which is widely being researched, rather than on applications. That said, the NYT article notes:

“When something goes wrong, when a person gets ill, doctors might extract the bacteria and play back the record. It would be, said Dr. Church, analogous to the black boxes carried by airplanes whose data is used in the event of a crash.

At the moment, all that is “the other side of science fiction,” said Ewan Birney, director of the European Bioinformatics Institute and a member of the group that put Shakespeare’s sonnets in DNA. “Storing information in DNA is this side of science fiction.” Also in the NYT Article, Birney said, “People’s intuition is tremendously poor about just how small DNA molecules are and how much information can be packed into them.”

Storing information in DNA isn’t new. (That’s sort of what it does without regard to traditional computing technology.) It has been used to store computer data (zeros and ones) and even used for certain massively parallel computations before now. Indeed, researchers had already used the CRISPR system to store sequences in bacteria.

The new work is yet another step forward. A good encapsulation of it is provided in the paper’s abstract, excerpted below.

“DNA is an excellent medium for archiving data. Recent efforts have illustrated the potential for information storage in DNA using synthesized oligonucleotides assembled in vitro. A relatively unexplored avenue of information storage in DNA is the ability to write information into the genome of a living cell by the addition of nucleotides over time. Using the Cas1–Cas2 integrase, the CRISPR–Cas microbial immune system stores the nucleotide content of invading viruses to confer adaptive immunity. When harnessed, this system has the potential to write arbitrary information into the genome. Here we use the CRISPR–Cas system to encode the pixel values of black and white images and a short movie into the genomes of a population of living bacteria. In doing so, we push the technical limits of this information storage system and optimize strategies to minimize those limitations.”

Link to New York Times article: https://www.nytimes.com/2017/07/12/science/film-clip-stored-in-dna.html?_r=0

Link to NIH account: https://www.nih.gov/news-events/news-releases/scientists-replay-movie-encoded-dna

Link to video: https://www.youtube.com/watch?v=gK3dcjBaJyo

The post Researchers Use DNA to Store and Retrieve Digital Movie appeared first on HPCwire.

The Exascale FY18 Budget – The Next Step

Mon, 07/17/2017 - 13:23

On July 12, 2017, the U.S. federal budget for the Exascale Computing Initiative (ECI) took its next step forward. On that day, the full Appropriations Committee of the House of Representatives voted to accept the recommendations of its Energy and Water Appropriations Subcommittee for the Fiscal Year 2018 (FY18) budgets of a variety of agencies, including the Department of Energy (DOE). Part of the DOE funding is for the United States effort to develop an exascale computer. Just to be clear, the U.S. government is not defining exascale as 10^18 floating point operations per second (flops) on the Linpack benchmark. Rather, as defined by the National Strategic Computing Initiative (NSCI), exascale means “computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today.”
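Taken literally, the NSCI yardstick actually lands a bit shy of a mathematical exaflop. A quick back-of-the-envelope check in Python makes the point; the 17.6-petaflops baseline is an assumption here, corresponding to Titan’s Linpack score as the top U.S. system when NSCI was issued in 2015:

    # Back-of-the-envelope: the NSCI "50x today's fastest" definition vs.
    # a literal exaflop. The baseline is an assumption: Titan's ~17.6
    # petaflops Linpack score, the top U.S. system when NSCI was issued.
    baseline_flops = 17.6e15
    nsci_target = 50 * baseline_flops
    print(f"NSCI target: {nsci_target:.2e} flops")   # 8.80e+17, just under 1e18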

The news for exascale that was released on July 12 was not terribly dramatic, but it helped to expose a slight mystery. But first, the numbers. As reported earlier, the President’s FY18 budget proposed that the Department of Energy (DOE) spend $508 million on the Exascale Computing Initiative (ECI). This number was divided between the DOE Office of Science (SC) and the semi-autonomous National Nuclear Security Administration (NNSA). The request for the NNSA was a total of $183 million, with the bulk ($161 million) going to the Advanced Simulation and Computing (ASC) program. The remainder ($22 million) would go to building physical infrastructure for the exascale systems. Note that the $161 million ASC request plus the $347 million SC request described below account for the $508 million ECI total; the $22 million in infrastructure funding sits outside that figure. The House Appropriations Committee provided the full budget for the ASC program with very little comment. This means that the House appropriated the NNSA’s full $183 million for exascale.

On the SC side of things, the program funding exascale is the Office of Advanced Scientific Computing Research (ASCR), and the situation is a bit more complicated. The President’s budget proposed that ASCR would get $347 million in support of the ECI, split into two parts. The first part of the request was $197 million for the Exascale Computing Project (ECP). The House Appropriations Committee gave ECP $170 million, a cut of $27 million. The other part of the SC request ($150 million) was for the two leadership computing facilities (one at Oak Ridge and the other at Argonne national laboratories) to prepare them for the exascale systems. The report language does not explicitly address the $150 million for facilities, but based on the overall ASCR numbers, it looks like they got nearly the full requested amount.

But here is where the mystery pops up. Part of the text of the House markup report says, “The Committee is concerned that the deployment plan for an exascale machine has undergone major changes without an appropriately defined cost and performance baseline.” The report goes on to require DOE to provide both Houses of Congress with an updated baseline within 90 days of enactment of the Appropriations Act. The report does not provide any further information about the “major changes” it is referring to.

However, in the original President’s SC budget request, on page 25 of the introduction, there are two short sentences that say, “The ALCF [Argonne Leadership Computing Facility] upgrade project will shift toward an advanced architecture, particularly well-suited for machine learning applications capable of more than an exaflop performance when delivered. This will impact site preparations and requires significant new non-recurring engineering efforts with the vendor to develop features that meet ECI requirements and that are architecturally diverse from the OLCF [Oak Ridge Leadership Computing Facility] exascale system.” It appears that these sentences in the request are part of the House Appropriators’ concerns about “major changes.” Another interesting note is that the previously announced 180-petaflops Argonne computer code-named Aurora is not mentioned in the request. At this point, any speculation about the meaning of these words is useless and any conclusions probably wrong.

The good news is that the House Appropriations Committee supported the requested funding for the exascale program. Once again, this is rather remarkable in light of the other significant budget cuts being made, including the House agreeing with the President to zero out funding for the Advanced Research Projects Agency-Energy (ARPA-E) and to cut funding for the Energy Efficiency and Renewable Energy (EERE) program by nearly 50 percent. The FY18 budget still has a long way to go before it is enacted into law, but last week’s developments help shed light on its trajectory. Things are looking good for the U.S. exascale programs, and that is great news for our country’s national and economic security.

Coming next – we hear from the Senate Appropriators!

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness and the president of Larzelere & Associates Consulting. He is currently a technologist, speaker and author on a number of disruptive technologies, including advanced modeling and simulation, high performance computing, artificial intelligence, the Internet of Things, and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), time in private industry, and the founding of a small business. Throughout that time, he led programs that applied cutting-edge advanced computing technologies to enable high-resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”


The post The Exascale FY18 Budget – The Next Step appeared first on HPCwire.

Intel System Promises to Accelerate Pitt Research

Mon, 07/17/2017 - 11:38
PITTSBURGH, July 17, 2017 — Pitt researchers are among the first in the nation to have access to Intel’s powerful new computing systems.

The system will dramatically increase the speed of computation available to University researchers through Pitt’s Center for Research Computing, said Ralph Roskies, associate vice provost for research computing.

“A key feature in the new architecture is the high-bandwidth memory,” he said, with six memory channels instead of the four used in previous-generation systems.

The University is one of the first two institutions to install a computer cluster with the new Intel Xeon Scalable processors. The system at Pitt will enable users to work three times as fast as on the old system.

To understand the computing power improvements, consider a highway, said Center for Research Computing (CRC) co-director Kenneth Jordan.

“Suppose we have a two-lane highway. Even if there’s a 60-mph speed limit, with lots of cars, it slows to a crawl,” he said. Additional “lanes” of access to memory will speed the traffic flow; adding CPU cores — like swapping cars for buses — will move passengers even more efficiently.

Each 2.6 GHz core can execute up to 41.6 billion instructions per second (16 per clock cycle), and the cluster totals 2,400 of them. Each of the 100 compute nodes has 24 of these “brains,” compared to only four or eight in the systems being retired, and each “brain” performs these billions of tasks while potentially sharing access to the same 192 GB of memory.

With the boost in computational power provided by the new computer systems, Pitt researchers — including those from the University’s biomedical community — will experience faster turnaround on calculations and will be able to tackle more challenging computational problems.

Tech specs

The new cluster, supplied by information technology company Supermicro, has 100 compute nodes, each with two 12-core 2.6 GHz Intel Xeon Scalable processors, plus 192 GB high-speed memory and solid-state drives.

Faculty member John Keith of the Swanson School of Engineering Department of Chemical and Petroleum Engineering said, “With new investments in computational facilities, we can study larger and more complex problems that are more likely to make a real impact on society.”

Keith’s own research group, which is working to effectively convert greenhouse gas into carbon-neutral liquid fuels, will be able to model carbon dioxide conversion mechanisms with greater accuracy than ever before and will be able to run large-scale molecular dynamics simulations to learn which solvents will make chemical reactions more energy-efficient.

Students, too, will have access to the latest high-performance computing through CRC workshops and in courses beyond School of Computing and Information (SCI) data science and computing systems classes.

“The availability of a first-class computing infrastructure is indispensable to foster a new culture of collaborative research across multiple disciplines to elucidate how complex natural, human and engineered systems interact with each other and to gain deeper understanding on how they can be managed,” said Paul Cohen, professor and founding dean of the computing and information school, which he noted is focusing on systems-oriented rather than discipline-oriented science.

Taieb Znati, chair and professor in the computing and information school’s Department of Computer Science, said, “This addition to the existing research computing infrastructure represents a leap forward in terms of performance gains and scalability. The enhanced infrastructure will provide faculty with unique capabilities to gain deeper insights into a range of complex problems across a diverse range of scientific and engineering fields.”

One key problem the new cluster will address is the so-called “memory wall” — the processing bottleneck that’s resulted from gains in processing performance that have increasingly outpaced gains in memory speed.

Roskies said processors have gotten very fast at doing arithmetic. But the numbers first have to be retrieved from the system’s memory, which takes more time than doing the actual arithmetic operation.

“So just speeding up the adder or the multiplier doesn’t make as big a difference as being able to fetch or retrieve numbers from memory faster. That’s what this new system does,” he said.
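The effect is easy to demonstrate on any machine: time the same number of additions over data that must stream in from main memory versus data small enough to stay resident in cache. The NumPy sketch below is purely illustrative; the array sizes are assumptions about a typical workstation, not measurements on the Pitt cluster:

    import time
    import numpy as np

    N = 100_000_000

    # Memory-bound: one pass over 800 MB of doubles, so nearly every
    # operand must be fetched from DRAM.
    big = np.random.rand(N)
    t0 = time.perf_counter()
    big.sum()
    dram_time = time.perf_counter() - t0

    # Cache-friendly: the same 100 million additions, but over an 800 KB
    # array that stays resident in cache after the first pass.
    small = np.random.rand(100_000)
    t0 = time.perf_counter()
    for _ in range(1000):
        small.sum()
    cache_time = time.perf_counter() - t0

    print(f"DRAM-bound sum: {dram_time:.3f}s  cache-resident sum: {cache_time:.3f}s")

On most systems the cache-resident loop finishes several times faster, even though both loops perform 100 million additions; that gap is what additional memory channels are meant to narrow.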

About the Center for Research Computing

Pitt’s CRC is a key resource for advancing important University-based research. Notably, Keith and CRC-affiliated faculty members Giannis Mpourmpakis and Christopher Wilmer of chemical and petroleum engineering and Peng Liu of the Department of Chemistry have won prestigious National Science Foundation 2017 CAREER awards.

“The center has been absolutely critical to our group’s research; we use it for nearly every project,” said Wilmer.

Said Keith: “The support of Pitt’s CRC allows our group to focus its time doing our high-performance computational research projects. The CRC staff also provides highly valuable workshops and their individual expertise to train my students in the skills they need to advance their research — more than I could alone.”

Source: Pitt

The post Intel System Promises to Accelerate Pitt Research appeared first on HPCwire.

CMD-IT Announces 2017 Richard A. Tapia Award Winner Manuel Perez Quinones

Mon, 07/17/2017 - 09:25

July 17, 2017 — Today, CMD-IT announced that the recipient of the 2017 Richard A. Tapia Achievement Award for Scientific Scholarship, Civic Science and Diversifying Computing is Manuel Perez Quinones, Associate Dean, College of Computing and Informatics, and Professor, Department of Software and Information Systems, University of North Carolina at Charlotte. The Richard A. Tapia Award is presented annually to an individual who demonstrates significant leadership, commitment and contributions to diversifying computing.

“Manuel Perez Quinones has a long history of leadership with diversity and inclusion in computer science,” said Valerie Taylor, CMD-IT Executive Director. “In addition to his excellent work in human-computer interaction, Manuel has created and led impactful programs for African-Americans, Latinos, LGBTQ, Native Americans, and women students in his role as an academic leader. Most recently he created a Corporate Mentoring Program at UNC Charlotte for women freshmen students, matching them with female corporate representatives. He continues to co-manage the Hispanics in Computing listserv that he founded, which has over 400 members. Manuel’s work on increasing diversity in computer science has a profound impact on thousands of students, academics and industry professionals.”

The Richard A. Tapia Award will be presented at the 2017 ACM Richard Tapia Celebration of Diversity in Computing Conference. Themed “Diversity: Simply Smarter,” the Tapia Conference will be held September 20-23 in Atlanta, Georgia. The Tapia Conference is the premier venue bringing together students, faculty, researchers and professionals of all backgrounds and ethnicities in computing to promote and celebrate diversity in the field. The conference is sponsored by the Association for Computing Machinery (ACM) and presented by the Center for Minorities and People with Disabilities in IT (CMD-IT).

For more information and to register for the Tapia Conference, visit www.tapiaconference.org.

Source: CMD-IT

The post CMD-IT Announces 2017 Richard A. Tapia Award Winner Manuel Perez Quinones appeared first on HPCwire.

Summer Reading: IEEE Spectrum’s Chip Hall of Fame

Mon, 07/17/2017 - 08:43

Take a trip down memory lane – the Mostek MK4096 4-kilobit DRAM, for instance. Perhaps processors are more to your liking. Remember the Sh-Boom processor (1988), created by Russell Fish and Chuck Moore, and named after the bar in which it was conceived. Intel, AMD, and others paid big licensing fees for its scheme to run processors faster than the clock on the circuit board.

Lists are often fun. Last month, IEEE Spectrum created a Chip Hall of Fame “to honor and tell the stories of these renowned blobs of silicon—and their creators and users.” It is by no means comprehensive, and the first class of inductees draws from Spectrum’s “25 Microchips That Shook the World” article, which appeared in 2009, although there are many other chips as well.

As noted in the introduction to the Chip Hall of Fame, written by Stephen Cass, “[S]ome chips stand out like a celebrity on the red carpet. Many of these integrated circuits found glory by directly powering products that transformed the world, while others cast a long shadow of influence over the computing landscape. And some became cautionary tales in their failed ambitions.”

The KAF-1300 Image Sensor from Kodak (1986) powered the launch of Kodak’s DCS 100 camera. It had 1.3 megapixels; today most mobile phone cameras surpass that.

Don’t look for KNL or P100. None of the new chips on the block are present, but Spectrum promises a growing cast of awardees annually and is seeking input on which ones are deserving. For a bit of summer whimsy, check out the lists.

Here are the current categories:

  • Amplifiers & Audio
  • Interfacing
  • Logic
  • Memory & Storage
  • MEMS & Sensors
  • Processors
  • Wireless

Link to Chip Hall of Fame: http://spectrum.ieee.org/static/chip-hall-of-fame

Feature image: “The 6502 chip wasn’t just faster than its competitors—it was also way cheaper, selling for US $25 while Intel’s 8080 and Motorola’s 6800 were both fetching nearly $200.” Image source: Computer History Museum.

The post Summer Reading: IEEE Spectrum’s Chip Hall of Fame appeared first on HPCwire.

Asetek Announces New Regional OEM and New HPC Installation

Fri, 07/14/2017 - 07:46

OSLO, Norway, July 14, 2017 — Asetek today announced an order from a new European OEM customer to service demand for an HPC (High Performance Computing) installation.

The order is for Asetek’s RackCDU D2C (Direct-to-Chip) liquid cooling solution and is valued at USD 32,000 with delivery scheduled for Q3 2017.

All parties remain undisclosed at this point.

About Asetek

Asetek is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK.OL).

Source: Asetek

The post Asetek Announces New Regional OEM and New HPC Installation appeared first on HPCwire.
