HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

HPC Market Escalates to $33 Billion by 2022 at 5 Percent CAGR

Tue, 08/08/2017 - 09:22

PUNE, India, Aug. 8, 2017 — Market Research Future has published a Half-Cooked Research Report (HCRR) on the “Global High-Performance Computing (HPC) Market Research Report – Forecast to 2022” – Market Analysis, Scope, Stake, Progress, Trends and Forecast to 2022.

IBM Corporation (U.S.), Hewlett-Packard Company (U.S.), Intel Corporation (U.S.), Microsoft Corporation (U.S.), Cisco Systems, Inc. (U.S.), Advanced Micro Devices, Inc. (U.S.), Fujitsu Ltd (Japan), Oracle Corporation (U.S.), Dell Inc. (U.S.), and Hitachi Data System Corporation (U.S.) are some of the prominent players profiled in MRFR Analysis and are at the forefront of competition in the Global High-Performance Computing (HPC) Market.

High Performance Computing (HPC) Market – Overview

The Global High-Performance Computing (HPC) Market is growing at a rapid pace, mainly due to high demand driven by scientific advancement, industrial innovation and economic competitiveness. In 2015, sales of HPC systems were much higher than predicted. According to a recent study report published by Market Research Future, the global HPC market will continue to outpace the overall IT market over the forecast period, reaching approximately USD 33 billion by 2022 at a CAGR of approximately 5% between 2016 and 2022.
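As a quick sanity check of those headline figures (our arithmetic, not the report's), a market reaching USD 33 billion in 2022 after six years of ~5% compound growth implies a 2016 base of roughly USD 24.6 billion:

```python
# Back-of-envelope check of the report's headline figures: a 5% CAGR
# over 2016-2022 (six compounding years) ending at USD 33 billion
# implies the 2016 base computed below. Illustrative arithmetic only.
target_2022 = 33.0           # USD billion, per the report
cagr = 0.05                  # ~5% compound annual growth rate
years = 2022 - 2016          # six compounding periods

base_2016 = target_2022 / (1 + cagr) ** years
print(f"Implied 2016 market size: USD {base_2016:.1f} billion")  # ~24.6
```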

High-performance computing currently plays a significant role in training and simulation; on-board systems for navigation, defence and attack; and command, control, communications, intelligence, computers, surveillance, and reconnaissance. HPC is emerging as a prominent technology for national defence and security requirements. It contributes substantially to the design and development of advanced vehicles and weapons, the planning and execution of operational military scenarios, the intelligence problems of cryptanalysis and image processing, and the maintenance of the nuclear stockpile. As a result, nations and regions across the world, as well as businesses, are increasing their investments in high-performance computing. In addition, the global race to achieve exascale performance will propel HPC market growth, as will the need for HPC in the big data market, i.e., high-performance data analysis (HPDA). HPDA challenges, in turn, have moved HPC to the forefront of R&D for machine learning, deep learning, artificial intelligence, and the Internet of Things.

Request a Sample Report @ https://www.marketresearchfuture.com/sample_request/2698

However, the high computational cost of the technology may act as a restraint on HPC market growth. Conversely, technological advancement across several domains is fueling the high-performance computing market. HPC techniques are used mainly in the defence and aerospace verticals, which are growing at a rapid rate.

The internet revolution, increasing computational complexity, the emergence of big data and substantial governmental interest and investment are some of the factors that foster HPC market growth. Innovative new HPC models indicate huge growth potential across industries; opportunities in cloud computing and improvements in storage and data-management capabilities are other prominent factors that provide impetus to the market.

High-performance computing software offers a solid framework for solving challenging, large-scale mathematical problems. It also supports automatic parallelism, multithreaded programming, multi-process programming on a local grid, grid computing and math-aware programming languages.
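To make the multi-process style concrete, here is a minimal illustrative sketch using Python's standard library (our illustration only; the article names no particular software, and production HPC codes would typically use MPI, OpenMP or a math-aware framework instead):

```python
# Minimal sketch of the "multi-process programming" style mentioned
# above: splitting a large numerical job across local CPU cores.
# Illustrative only -- not tied to any product named in the article.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:           # one worker process per chunk
        total = sum(pool.map(partial_sum, chunks))
    # Cross-check against the closed form for sum of squares of 0..n-1
    assert total == (n - 1) * n * (2 * n - 1) // 6
    print(total)
```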

High Performance Computing (HPC) Market – Competitive Analysis

Characterized by the presence of several well-established and small players, the global High-Performance Computing (HPC) market appears to be highly competitive. Well-established players pursue acquisitions, collaborations, partnerships, expansion and technology launches in order to gain competitive advantage and maintain their market position. Vendors compete on pricing, technology and services. The growth potential demonstrated by the market is likely to attract new entrants, which will further intensify competition in the global market.

Browse Report @ https://www.marketresearchfuture.com/reports/high-performance-computing-market-2698

High Performance Computing (HPC) Market – Segments

The Global High-Performance Computing (HPC) Market is segmented into four key dynamics for an easy grasp and enhanced understanding.

Segmentation by Component: Comprises – Software, Network Devices, Servers, and Storage.

Segmentation by Deployment: Comprises – On-Cloud and On-Premises.

Segmentation by Verticals: Comprises – Retail, Manufacturing, BFSI, IT and Telecommunication, Healthcare, Energy & Utilities, Transportation and Other.

Segmentation by Regions: Comprises Geographical regions – North America, Europe, APAC and Rest of the World.

HPC Market – Regional Analysis

North America leads the high-performance computing market, followed by Europe. Technological advancement in the region and early adoption of technology by several industries have aided the development of the HPC market in North America. The European HPC market follows, owing to heavy investment by both large and small enterprises in the region. Asia-Pacific is expected to witness the highest growth, with HPC servers expected to gain maximum market share over the forecast period. The growth of the APAC region is attributed to heavy investment by government and private sectors: the emergence of big data has increased demand for systems that can handle more data-intensive workloads, and HPC clusters can easily handle vast amounts of data and extensively support high-performance data analysis. The region also hosts leading technology players; Japan, for example, is developing a new supercomputer that could be among the world’s fastest systems.

Browse Related Report

The global High-Performance Data Analytics (HPDA) market is expected to grow at a rate of more than 18% from 2016 to 2022.


About Market Research Future

At Market Research Future (MRFR), we enable our customers to unravel the complexity of various industries through our Cooked Research Report (CRR), Half-Cooked Research Reports (HCRR), Raw Research Reports (3R), Continuous-Feed Research (CFR), and Market Research & Consulting Services.

Source: Market Research Future

The post HPC Market Escalates to $33 Billion by 2022 at 5 Percent CAGR appeared first on HPCwire.

WekaIO, Intel Demonstrate Native Scale-out NVMe-oF System

Tue, 08/08/2017 - 09:19

SAN JOSE, Calif., Aug. 8, 2017 — WekaIO, a high-performance cloud storage software company, is teaming up with Intel at the Flash Memory Summit (FMS) in Santa Clara, Calif., to demonstrate a native NVMe-oF system built on the new “ruler” form factor for Intel SSDs. Combined, the technologies will deliver a revolutionary storage capacity of beyond 1PB in 1U while delivering in excess of 3 million IOPS.

Just last month, WekaIO introduced the industry’s first cloud-native scalable file system that delivers unprecedented performance to applications, scaling to exabytes of data in a single namespace. WekaIO Matrix software will be showcased at FMS running on an NVMe-oF system equipped with the “ruler” form factor for Intel SSDs. The “ruler” form factor is designed from the ground up for data center racks to deliver high performance, space-efficient capacity, and effective management and operations at scale.

Adding to the cloud inspired product line of the Intel SSD DC P4500 storage series, the “ruler” form factor for Intel SSDs uses the WekaIO Matrix scale-out file system and Intel 3D NAND technology, scaling beyond 1PB of storage and 3 million IOPS in a 1U appliance. The 1U form factor delivers exceptional capacity per rack unit enabling rack consolidation at data center scale.
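The announcement does not break down how 1 PB fits in 1U, but the arithmetic is straightforward under illustrative assumptions (the slot count and per-drive capacity below are our assumptions, not Intel specifications):

```python
# How "1 PB in 1U" can be reached with the ruler form factor.
# The release gives no per-drive numbers, so both values below are
# illustrative assumptions, not Intel specifications.
rulers_per_1u = 32      # assumed ruler slots in a 1U chassis
tb_per_ruler = 32       # assumed capacity per ruler SSD, in TB

capacity_pb = rulers_per_1u * tb_per_ruler / 1024   # binary TB -> PB
print(f"~{capacity_pb:.0f} PB per 1U")              # 32 x 32 TB = 1 PB
```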

“Storage subsystem architectures have not kept pace with the rapid evolution of computing architectures. For many data-intensive use cases, input/output (I/O) bottlenecks have hampered performance and scalability,” according to Julia Palmer, Research Director at Gartner, et al. “NVMe-based storage architectures eliminate sequential access barriers and protocol redundancies to provide dramatic efficiencies in bandwidth and latency characteristics. These new storage subsystem architectures can alleviate the I/O bottlenecks experienced in data-intensive use cases, enabling higher performance and scalability. Use cases that can benefit from higher-performance storage architectures include databases, HPC, virtualization and web applications.” – Gartner, The Future of Hyperconverged and Integrated Systems Will Be Shaped by Shared Accelerated Storage, Julia Palmer, Stanley Zaffos, Joseph Unsworth, and Chirag Dekate, May 25th, 2017.

Key features of the WekaIO solution include:

  • A distributed file system to share data and meet full POSIX compliance
  • Storage density scaling to beyond 1PB per rack unit
  • Disaggregated storage that eliminates the need to retrofit existing servers and enables independent scaling of capacity
  • Native NVMe for massive performance gains, lower latency and performance that is faster than traditional filers

“WekaIO is excited to work with Intel to demonstrate how our combined technologies can dramatically reduce a customer’s storage footprint with more capacity per unit and higher file performance that has yet to be seen on the market,” said Dr. Omri Palmon, co-founder and Chief Product Officer at WekaIO. “These technologies have the potential to deliver an unmatched customer experience, making it easy to scale storage and ramp up performance to new levels without having to retrofit their existing hardware.”

“Intel has a long history of storage innovation. The new “ruler” form factor SSDs extend that history, providing new levels of high performance, dense storage at lower cost,” said James Myers, Director of Non-volatile Memory Solutions Architecture at Intel. “Intel® 3D NAND technology based ruler SSDs demonstrate unmatched physical space efficiency with spectacular performance for shared file storage.”

This technology is ideal for organizations that require the highest possible performance in the smallest footprint, supporting data intensive and performance hungry applications such as electronic design automation, IoT and real-time analytics and financial modeling.

See this technology demonstrated at the FMS conference August 8-10 in the Intel Partner Pavilion, Hall C-D, booth 745.

Liran Zvibel, co-founder and CTO at WekaIO, will be presenting “Designing Next-Generation File Systems for NVMe and NVMe-oF” at the FMS conference on Thursday, August 10, from 9:45 a.m. to 10:50 a.m.

About WekaIO

WekaIO leapfrogs legacy infrastructures and improves IT agility by delivering software-centric data storage solutions that unlock the true promise of the cloud. WekaIO Matrix software is ideally suited for performance-intensive workloads such as Web 2.0 application serving, financial modeling, life sciences research, media rendering, Big Data analytics, log management and government or university research. For more information, visit www.weka.io, email us at sales@weka.io, or watch our latest video here.

Source: WekaIO


Mellanox Announces BlueField Storage Solutions to Accelerate NVMe over Fabrics

Tue, 08/08/2017 - 09:17

SUNNYVALE, Calif. & YOKNEAM, Israel, Aug. 8, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX) today announced the availability of storage reference platforms based on its revolutionary BlueField System-on-Chip (SoC), combining a programmable multicore CPU, networking, storage, security, and virtualization acceleration engines into a single, highly integrated device.

BlueField integrates all the technologies needed to connect NVMe over Fabrics flash arrays, with the fastest performance available in the market. BlueField provides 200 Gb/s of throughput and more than 10 million IOPS in a single SoC device. In addition, the powerful on-board multicore ARM processor subsystem enables flexible programmability that allows vendors to differentiate their software-defined storage appliances with advanced capabilities. This makes BlueField the ideal chip to control and connect All Flash Arrays and Just-a-Bunch-Of-Flash (JBOF) systems to InfiniBand and Ethernet Storage fabrics.
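As a rough consistency check of those two headline numbers (our arithmetic, not Mellanox's), 200 Gb/s of throughput sustained across 10 million IOPS implies an average transfer size of about 2.5 KB per I/O, i.e. small-block traffic:

```python
# Sanity-check of the headline figures: 200 Gb/s throughput at
# 10 million IOPS implies the average I/O size computed below.
# Illustrative arithmetic only, not a Mellanox specification.
throughput_gbps = 200
iops = 10_000_000

bytes_per_sec = throughput_gbps * 1e9 / 8      # 25 GB/s
avg_io_bytes = bytes_per_sec / iops
print(f"~{avg_io_bytes:.0f} bytes per I/O")    # ~2500
```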

“BlueField is the most highly integrated NVMe over Fabrics solution,” said Michael Kagan, CTO of Mellanox. “By tightly integrating high-speed networking, programmable ARM cores, PCIe switching, cache, memory management, and smart offload technology all in one chip, the result is improved performance, power consumption, and affordability for flash storage arrays. BlueField is a key part of our Ethernet Storage Fabric solution, which is the most efficient way to network and share high-performance storage.”

“As data takes over the world, networked storage takes over computing,” said Peter Burris, GM and Chief Research Officer of Wikibon. “The BlueField SoC controller is a leading example of the right technology at the right time.”

BlueField and the BlueField flash array reference platform are being shown at Flash Memory Summit, Aug. 8-10, at the Santa Clara Convention Center, Mellanox booth, #138. BlueField samples and storage reference design systems will be available in Q4, 2017.

Supporting Quotes:

“We use Mellanox NICs and RDMA today, and are excited that BlueField integrates all the technologies needed to connect NVMe over Fabrics flash arrays,” said Yaniv Romem, CTO at Excelero. “We are looking forward to seeing the fastest performing SoC come to market in the form of Mellanox BlueField technology.”

“Continuous innovation around NIC and RDMA solutions is why E8 Storage is proud to collaborate with Mellanox to deliver an end-to-end shared NVMe solution for high-performance enterprise storage applications,” said Ziv Serlin, VP System Architecture, E8 Storage. “With 200 Gb/s of throughput and up to 10 million IOPS in a single SoC device, BlueField will bring increased performance and flexible programmability to the industry.”

“BlueField will power a new class of flash storage designs based on Mellanox’s robust SoC,” said Evan Chien, Director of Inventec. “We are pleased to see this advancement as BlueField enables a new level of flexible programmability that will enable us to differentiate our products with additional advanced capabilities.”

“The Mellanox BlueField technology and ConnectX-5 adapters can pass storage traffic directly to our flash controllers and NVRAM devices, thereby greatly enhancing NVMe over Fabrics performance,” said Derek Dicker, vice president and business unit manager for performance storage at Microsemi. “Our SwitchTek PCIe switches and FlashTek SSD controllers are the ideal fit for flash storage systems using the BlueField SoC and we look forward to leveraging this technology.”

“BlueField holds the promise of greatly enhancing our product offerings,” said Eugene McCabe, EVP, Sanmina. “As we continue to improve our offerings, BlueField will enable us to offer our customers increased performance and efficiency, thereby delivering a more robust system solution.”

“BlueField is a brand new, highly integrated device,” said Peter Tung, Chief Operation Officer for Enterprise Business Group, Wistron. “The advanced performance and capabilities makes it an elegant solution for controlling and connecting our All Flash Arrays to Mellanox InfiniBand and Ethernet storage fabrics.”


About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox


DDN, Gatan Partner for Microscopy Workflows

Tue, 08/08/2017 - 08:46

SANTA CLARA, Calif., Aug. 7, 2017 — DataDirect Networks (DDN) today announced a strategic partnership with Gatan, the world’s leading manufacturer of instrumentation and software used to enhance and extend the operation and performance of electron microscopes, to deliver groundbreaking solutions for microscopy research environments and workflows. By combining DDN’s high-performance data storage platform with Gatan’s high-performance cameras, the partnership offers a wide variety of end-user applications a powerful, end-to-end solution to accelerate research, speed time to results, and fully leverage the power of the latest technologies in electron microscopy.

“Massive increases in instrument data are revolutionizing life science and materials research and, at the same time, are placing unprecedented demands on storage,” said Robert Triendl, senior vice president of global sales, marketing, and field services at DDN. “At DDN, we are known for tackling the most data-intensive, technical computing environments. Gatan’s leading camera technologies create the type of demanding, high-performance environments in which DDN’s unique capabilities allow our customers to do more, and to do it faster than traditional enterprise storage. We are excited to team with Gatan to address not only today’s challenges but future ones as these technologies advance.”

In recent years, dramatic advances in cameras for in-situ and cryo-electron microscopy have led to data rates exceeding 10 gigabytes per second. While Gatan’s latest high-resolution microscopy cameras provide ever-improving spatial and temporal resolution to the world’s top research projects, they also produce immense volumes of digital data at unprecedented rates. This data growth from increased resolutions and frame rates has outpaced the ability of traditional storage solutions to adequately support the performance and scale requirements of current microscopy workflows. DDN’s storage uniquely supports reliable, fast instrument data ingest as well as analysis, archive, collaboration, publication and data protection without the need for costly, cumbersome infrastructure silos.
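To put a 10 GB/s instrument stream in perspective (our arithmetic, not from the announcement), a camera sustaining that rate generates data volumes that quickly overwhelm conventional storage:

```python
# Scale of the data problem described above: a camera streaming at a
# sustained 10 GB/s fills storage quickly. Illustrative arithmetic only.
rate_gb_per_s = 10

tb_per_hour = rate_gb_per_s * 3600 / 1000      # GB/s -> TB per hour
pb_per_day = tb_per_hour * 24 / 1000           # TB/hour -> PB per day
print(f"{tb_per_hour:.0f} TB/hour, {pb_per_day:.2f} PB/day")
# 36 TB/hour, 0.86 PB/day
```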

“Gatan cameras are well-known for producing the highest quality image and video available for scientific research and analysis. To take advantage of these capabilities, they need to be paired with data solutions that allow for fast, easy data accessibility,” said Sander Gubbens, president at Gatan. “As we analyzed storage solutions, DDN’s storage platform was a remarkable stand-out in the market. It allows the management of end-to-end microscopy workflows, from high-performance, multi-instrument ingest, to simultaneous high-speed analysis – all within a single platform – and allows Gatan’s high-end cameras to perform at full capacity – now and well into the future.”

Most storage solutions in the market today suffer bottlenecks that will not allow for simultaneous ingest and egress of high-rate microscopy data. Conversely, the combined DDN/Gatan solution allows ingest and egress to happen simultaneously. This simultaneous processing enables organizations to accelerate time to results and to optimize the use of their high-resolution, high-speed sensor equipment – thus increasing return on investment and maximizing the value users can achieve.

DDN storage platforms are ideal for high-resolution microscopy environments in life science, material science and product design, semiconductor design and production, and oil and gas research, among others. DDN solutions can scale to support from one to many high data rate instruments where each instrument may require from megabytes to >10 gigabytes per second each. DDN solutions efficiently grow to hundreds of petabytes of capacity and speeds of up to terabytes per second, delivering high performance, industry-leading density, and scalability for core workflows. DDN storage, combined with Gatan’s suite of high-resolution camera solutions, including the K3, K2, K2 IS, OneView, Rio, STEMx, and 3View, is enabling new levels of productivity and advances in microscopy workflows. Over the past several years, DDN and Gatan have been deployed together in leading life sciences sites like The Scripps Research Institute and Van Andel Research Institute, and are now collaborating on forward-looking products.

Microscopy & Microanalysis 2017 Meeting
Gatan and DDN will be showcasing the combined solution at the upcoming Microscopy & Microanalysis (M&M) show in St. Louis, MO, August 6 − 10, 2017. Stop by Gatan’s booth #504 for additional information.


About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For almost 20 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

About Gatan

Gatan is the world’s leading manufacturer of instrumentation and software used to enhance and extend the operation and performance of electron microscopes. Gatan’s products, which are fully compatible with all brands of electron microscopes, cover the entire range of the analytical process from specimen preparation and manipulation to imaging and analysis. Its customer base spans the complete spectrum of end users of analytical instrumentation typically found in industrial, governmental and academic laboratories. The applications addressed by these scientists and researchers include metallurgy, semiconductors, electronics, biological science, new materials research and biotechnology. The Gatan brand name is recognized and respected throughout the worldwide scientific community and has been synonymous with high-quality products and the industry’s leading technology. Gatan, Inc. is a member of the Medical and Scientific Imaging segment of Roper Technologies.

Source: DDN


Microsemi Collaboration Enables Mellanox, Others to Deliver NVMe-oF Architectures

Tue, 08/08/2017 - 08:39

ALISO VIEJO, Calif., Aug. 8, 2017 — Microsemi Corporation (Nasdaq: MSCC) today announced its collaboration with Mellanox Technologies, Ltd. (Nasdaq: MLNX) and Celestica to develop a unique reference architecture for NVM express over Fabrics (NVMe-oF) applications as part of Microsemi’s Accelerate Ecosystem Program.

Microsemi’s Accelerate Ecosystem speeds development efforts for customers and collaborators through technology alignment, joint marketing and sales acceleration. Collaborating with Microsemi allows companies like Mellanox and Celestica to leverage Microsemi’s peer-to-peer (P2P) memory architecture, which is supported by its Switchtec PCIe switches in combination with its Flashtec NVRAM cards and NVMe controllers to enable large data streams to transfer between NVMe-oF applications without the central processing unit (CPU) in the data plane. This leads to the development of highly optimized NVMe-oF storage subsystems with better throughput, latency and quality of service (QoS). It also enables customers’ data center storage applications such as rack scale architecture, which disaggregates flash and shareable pools of NVMe memory to operate at faster rates.

“With customers designing their next-generation applications around NVMe-oF today, Microsemi has created a strong ecosystem of industry leaders to put together tested and validated solutions for their specific needs,” said Amr Elashmawi, Microsemi’s vice president of corporate and vertical marketing. “The growth potential in this market makes this the perfect time to pair Celestica’s and Mellanox’s expertise with Microsemi’s unique value-add to showcase the NVMe-oF P2P reference architecture accelerating data center and cloud applications.”

The data center market continues to see NVMe storage devices increasingly moved outside the server to centralized locations in order to share NVMe-based storage across multiple servers and CPUs. This enables better utilization in terms of capacity, rack space and power. According to industry research and marketing firm G2M, Inc., the NVMe market will exceed $57 billion by 2020, and nearly 40 percent of all-flash arrays will be NVMe-based by then. The firm also expects NVMe-oF adapter shipments to climb to 740,000 units by 2020.

“Working together with Microsemi through its Accelerate Ecosystem Program allows our team to leverage its performance storage tier, including Switchtec PSX switches, to develop innovative hardware platforms that can be customized for our customers,” said Jason Phillips, senior vice president, Enterprise Solutions at Celestica. “As a result of this important relationship, Celestica successfully launched the first commercially available NVMe dual-port All Flash Array platform, and is preparing to launch our first NVMe-oF solution, powered by Microsemi technology.”

While other companies have seen the benefits of Microsemi’s reference architecture, Microsemi also benefits from these cooperative efforts. With remote direct memory access (RDMA) a key technology in the NVMe-oF ecosystem, working closely with RDMA network interface card (NIC) providers like Mellanox enhances Microsemi’s ability to further serve the needs of its data center, cloud, hyperscale and enterprise original equipment manufacturer (OEM) customers. Such collaborations will enable Microsemi to gain market share for NVMe-oF applications, positioning Microsemi as a key player in the storage revolution.

“Mellanox is excited to collaborate with Microsemi as part of the Accelerate Ecosystem Program, which is delivering solutions with Microsemi’s Switchtec and Flashtec products for NVMe-oF applications,” said Rob Davis, vice president of storage technology at Mellanox Technologies. “Mellanox’s market leading ConnectX Network Adapters, including ConnectX-5 and our new BlueField SOC, combined with Microsemi’s P2P CPU memory offload capabilities, offer a comprehensive reference platform for high performance data plane applications and JBOF implementations.”

About Microsemi’s Product Portfolio for Data Center

Microsemi is a premier supplier of innovative semiconductor, board, system, software and services for enterprise and hyperscale data centers, enabling high performance, secure, low power and reliable infrastructure for scalable deployments. Microsemi technologies drive innovation in applications including storage systems, server storage, NVMe solutions, Ethernet switching, rack scale architecture, data center interconnect, board management, network timing and power subsystems. Building on a track record of technology leadership, Microsemi’s data center infrastructure portfolio is transforming networks that connect, store and move big data, while lowering the total cost of ownership of deploying next generation services.

The portfolio includes high performance NVMe storage controllers, NVRAM drives, SAS/SATA host bus adapters and RAID controllers enabling high capacity storage architectures, high density PCIe switching and firmware for rack scale architectures, PCIe re-drivers, and Ethernet PHYs for intra-rack connectivity. Microsemi’s product portfolio also includes clock and power management, IEEE1588 integrated circuits (ICs) and NTP servers for synchronization across the data center, as well as field programmable gate arrays (FPGAs) and system-on-chip (SoC) FPGAs to perform secure system management of servers and storage. For more information, visit http://www.microsemi.com/applications/data-center.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyperconverged infrastructure. Mellanox’s intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at www.mellanox.com.

About Microsemi

Microsemi Corporation (Nasdaq: MSCC) offers a comprehensive portfolio of semiconductor and system solutions for aerospace & defense, communications, data center and industrial markets. Products include high-performance and radiation-hardened analog mixed-signal integrated circuits, FPGAs, SoCs and ASICs; power management products; timing and synchronization devices and precise time solutions, setting the world’s standard for time; voice processing devices; RF solutions; discrete components; enterprise storage and communication solutions, security technologies and scalable anti-tamper products; Ethernet solutions; Power-over-Ethernet ICs and midspans; as well as custom design capabilities and services. Microsemi is headquartered in Aliso Viejo, California and has approximately 4,800 employees globally. Learn more at http://www.microsemi.com.

Source: Microsemi


Cavium Demos NVMe over Fabrics Performance at Flash Memory Summit

Tue, 08/08/2017 - 08:36

SAN JOSE, Calif., Aug. 8, 2017 — Cavium, Inc. (NASDAQ: CAVM) will showcase a broad range of NVMe over Fabric solutions including FC-NVMe on Gen 6 Fibre Channel and NVMe over Fabrics concurrently over RoCE and iWARP on Cavium FastlinQ 45000/41000 Series Ethernet NICs at the Flash Memory Summit 2017 at Booth #815 from August 8-10, at the Santa Clara Convention Center.

The NVMe over Fabrics (NVMe-oF) specification is an emerging technology that gives data center networks unprecedented access to NVMe SSD storage. It delivers efficient scaling of NVMe-based SSDs over data center fabrics, including Remote Direct Memory Access (RDMA) networks (iWARP and RoCE) as well as Fibre Channel. This enables faster, more scalable connections between servers and storage, as well as between storage controllers and NVMe enclosures.

Cavium is leading the transition to NVMe over Fabrics by enabling a broad range of high-performance solutions that seamlessly move customer infrastructure and applications from legacy storage to current and next-generation flash/NVMe. Cavium is the only solution supplier offering NVMe-oF over both Universal RDMA and Fibre Channel adapters.

The Flash Memory Summit 2017 will feature the latest technology trends, the most exciting products, and the broadest coverage of a rapidly expanding market around flash, fabrics and end-to-end solutions. Cavium's participation in Flash Memory Summit 2017 spans multiple live performance and interoperability demonstrations, speaking sessions by industry experts, and product showcases. Key demonstrations and engagements include:

  • Demo: Breakthrough performance from an end-to-end FC-NVMe solution with QLogic Gen 6 Fibre Channel delivering more than 2 million NVMe IOPS on the innovative SPDK-based software-defined storage framework @ Cavium Booth #815
  • Demo: Industry’s only solution that delivers technology choice and investment protection to customers with concurrent RoCE and iWARP transports for NVMe over Fabrics @ Cavium Booth #815
  • Demo: Concurrent FCP and FC-NVMe end-to-end multi-vendor interoperability demonstration featuring QLogic Gen 6 Fibre Channel @ FCIA Booth #828
  • Breakout Session: “Accelerate Access to Networked Flash with FC-NVMe” @ Forum V-31 on Thursday, August 10th at 8:30 a.m.
  • Expert Panel: “Ultra-Fast NVMe Storage Networks for Next Generation Flash Array” Session 303-B @ Forum W-32 on Thursday, August 10th at 1:30 p.m.
  • Expert Panel: “NVMe over Fabrics Does Networking Part 2b” @ Forum A-12 on Tuesday, August 8th at 5:00 p.m.

Cavium QLogic NVMe over Fibre Channel (FC-NVMe) Solution

Next-generation data-intensive workloads utilize low-latency NVMe flash-based storage to meet ever-increasing user demand. By combining the lossless, highly deterministic nature of Fibre Channel with NVMe, FC-NVMe delivers the performance, application response time, and scalability needed for next-generation data centers while leveraging existing Fibre Channel infrastructure. The currently shipping QLogic 2690 Series Enhanced Gen 5 and 2700 Series Gen 6 Fibre Channel adapters are FC-NVMe ready and are being evaluated in FC-NVMe fabrics by multiple customers and partners across the industry.

Cavium FastLinQ NVMe over Ethernet Universal RDMA (NVMe-oF) Solution

Driven by the performance demands of NVMe, high-performance, low-latency networking is a fundamental requirement for a fabric to scale out. Ethernet-based RDMA fabrics, with their exceptionally low latency and offload capabilities, will become a popular choice for NVMe over Fabrics. Cavium FastLinQ 45000/41000 Series Ethernet NICs support Universal RDMA (RoCE, RoCEv2 and iWARP concurrently) and deliver the ultimate choice to customers for scaling out NVMe over a general-purpose Ethernet fabric.

Cavium FastLinQ 40000 Series 10/25/40/50/100GbE, QLogic 2690 Series Enhanced Gen 5 16GFC and 2700 Series Gen 6 32GFC adapters are available from Cavium and multiple leading OEMs and ODMs.

For more information, visit www.qlogic.com/nvmeof and cavium.com/fastlinq.

To schedule a meeting with Cavium, please contact your local sales account manager or Lilly Ly (lly@cavium.com). Please enter Flash Memory Summit 2017 in the subject line.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

The post Cavium Demos NVMe over Fabrics Performance at Flash Memory Summit appeared first on HPCwire.

James Peery Named Chief Scientist of the Global Security Directorate at ORNL

Mon, 08/07/2017 - 12:18

OAK RIDGE, Tenn., Aug. 7, 2017 – James Peery, who has led critical national security programs at Sandia National Laboratories and Los Alamos National Laboratory, has been selected as the chief scientist of the Global Security Directorate at Oak Ridge National Laboratory.

“James brings more than two decades of experience in creating successful national security initiatives for the U.S. Department of Energy,” said Brent Park, associate laboratory director of global security at ORNL. “In particular, his leadership in cybersecurity, data analytics and high-performance computing will enable him to lead the laboratory’s cybersecurity initiative for the electric grid and beyond.”

Next-generation cybersecurity for the electric grid is a multi-directorate, multi-program effort at ORNL that supports the DOE cybersecurity program for critical energy infrastructure. The initiative aims to enable electric utilities and other components of the nation’s energy supply to defend against emerging and previously unseen cyberattacks.

Peery also will help ORNL researchers draw on the lab’s distinctive capabilities to develop scientific and technological solutions aligned with national security policies and strategies.

“As the lab’s chief scientist for national security challenges, James will lead our talented and passionate staff—with their incredible breadth of capabilities from computing to materials to nuclear science and technology to neutron sciences—with the sense of purpose that comes from serving the country in the compelling mission of national security,” ORNL Director Thomas Zacharia said.

Peery, who is a member of the U.S. Air Force’s Scientific Advisory Board, began his career at Sandia in 1990, the year he graduated from Texas A&M University with a doctorate in nuclear engineering. In one of his first assignments at Sandia, he developed first-generation massively parallel algorithms and tools for use in high-energy physics applications in support of national security. He soon rose to be manager of computational physics and then manager of computational solid mechanics and structural dynamics.

In 2002, he accepted a position at Los Alamos, where he successfully led the lab’s advanced code and computing strategy that supported the highly successful Advanced Scientific Computing element in the National Nuclear Security Administration’s stockpile stewardship program, sustaining annual nuclear weapon certification through predictive simulation and above-ground tests. At Los Alamos, he also led the team that acquired funding for the world’s first petaflop computer.

In 2007, he returned to Sandia. Among his many successes over the next 10 years, he advanced the laboratories’ high-performance computing and research and development in computational sciences, including Sandia’s selection to host NNSA’s high-performance computing platform for sensitive compartmented information. He strengthened the labs’ cybersecurity portfolio and was instrumental in the creation of Sandia’s quantum information sciences program and its Counterfeit Detection Center.

The work environment of his staff is also one of his areas of focus. As vice president of defense systems and assessments at Sandia from 2015 through 2017, he was an integral part of the laboratories’ leadership team that created a work environment that led to Sandia National Laboratories being named by Forbes as one of the nation’s top 20 employers (ranking No. 1 in the aerospace and defense industry category). He also led Sandia’s Wounded Warriors Career Development Program.

UT-Battelle manages ORNL for DOE’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit www.science.energy.gov.

Source: ORNL

The post James Peery Named Chief Scientist of the Global Security Directorate at ORNL appeared first on HPCwire.

ECSS Affiliates Program Begins with Cohort of Four

Mon, 08/07/2017 - 09:55

Aug. 7, 2017 — XSEDE has announced a pilot-phase program meant to give those in the advanced cyberinfrastructure community the opportunity to work with ECSS staff on cutting edge projects. The program is called the ECSS (Extended Collaborative Support Services) Affiliate program and was launched as a result of community interest.

ECSS is a service that allows researchers with XSEDE supercomputer allocations to collaborate with HPC experts to enhance their project workflow and hopefully get better results more efficiently. Affiliates are non-XSEDE staff members who add to the breadth of knowledge in the ECSS program.

“The Affiliates program will provide a vehicle for including qualified contributors on projects, expanding the impact of ECSS and also bringing us expertise in new areas,” said ECSS Affiliates program lead and XSEDE co-PI Nancy Wilkins-Diehr. “The program provides opportunities for Affiliates to get involved in cutting edge projects that they may not see at their own institutions, and it gives ECSS a way to bring in additional expertise.”

The first four ECSS Affiliates are Emre Brookes, Erin Hodgess, Lisa Lowe and Justin Oelgoetz.

Dr. Emre Brookes is an assistant professor in the Department of Biochemistry at the University of Texas Health Science Center at San Antonio, yet his degrees are in computer science and mathematics. His PhD work included developing and implementing new algorithms for analysis of experimental data with a new scalable constrained least squares fitting method and a novel regularization method using genetic algorithms. To provide the scientific community access to these methods, he created the first UltraScan Science Gateway, which has since migrated to Apache Airavata. These methods annually use millions of CPU hours of parallel resources supporting scientific research worldwide. His work concentrates on developing tools for analysis of scientific experimental data. He is the primary developer of the US-SOMO hydrodynamic modeling suite http://somo.uthscsa.edu and is actively involved with the hydrodynamic modeling, small-angle scattering and high performance computational communities. His most recent work, GenApp, focuses on developing an open framework to ease deployment of new and legacy scientific codes.

Erin Hodgess is an XSEDE Campus Champion and associate professor of Statistics at the University of Houston – Downtown. Hodgess has a bachelor’s degree in economics from the University of Dayton, a master’s degree in economics from the University of Pittsburgh, and a master’s and PhD in statistics from Temple University in Philadelphia. Her areas of research include time series, statistical computing, geostatistics, and high performance computing. She is very excited to join the ECSS Affiliate program and is looking forward to working on many projects.

Lisa Lowe has 17 years of experience developing computer codes for scientific applications. She specializes in developing, debugging, and optimizing large complex modeling codes designed to run on high performance computing platforms. Her PhD dissertation involved writing a parallel adaptive mesh refinement multigrid PDE solver for the initial value problem of general relativity, a spectral methods integrator for the evolution equations, and a level-set method for finding apparent horizons.  Lisa also completed a graduate certificate in environmental assessment in order to change fields and contribute to research benefiting human health and the environment. She is currently an HPC scientific programmer, working as a contractor to the Environmental Protection Agency (EPA). Her projects include modeling hypoxia and eutrophication in coastal ocean systems using hydrodynamic and water quality models, groundwater transport modeling using field line tracing, and risk assessment using Bayesian Markov chain Monte Carlo (MCMC) methods.

Justin Oelgoetz is an XSEDE Campus Champion and professor of physics and astronomy at Austin Peay State University in Clarksville, Tenn. He graduated with a B.S. in chemistry from Florida State University in 1998 and earned a PhD in chemical physics from Ohio State University in 2006, where he focused on computational atomic spectroscopy; he continued in the same field from 2006 to 2008 as a postdoc at Los Alamos National Laboratory. Since coming to APSU he has transitioned to computational chemical physics, with an interest in the spectroscopy (X-ray, UV-VIS, IR, Raman, etc.), electronic structure and thermodynamic properties of amorphous solids (glasses) and thin films. Oelgoetz is interested in a wide range of HPC topics and looks forward to working on a broad range of projects.

Source: XSEDE

The post ECSS Affiliates Program Begins with Cohort of Four appeared first on HPCwire.

Rescale Partners with HPC Systems to Deliver Cloud HPC Platform in Japan

Mon, 08/07/2017 - 09:34

SAN FRANCISCO, Calif., Aug. 7, 2017 — Rescale is pleased to announce a channel partnership with HPC Systems Inc., a high-performance computing (HPC) hardware integrator based in Japan, to deliver Rescale’s ScaleX big compute platform to the Japanese market beginning July 2017.

Demand for cloud computing in scientific and technical fields such as physics, chemistry, and engineering has increased in recent years. Rescale will now offer its ScaleX big compute platform through HPC Systems, which specializes in delivering high-performance computing systems to the scientific R&D community. Rescale helps shorten the development period in many scientific disciplines, such as new drug discovery, and the partnership with HPC Systems will enable a wider community to appreciate the many benefits of computing in the cloud.

The agreement will allow HPC Systems to offer cloud services to its customers through a hybrid cloud stack, allowing them to leverage the cloud’s agility to quickly set up HPC environments and secure flexible computing resources at peak times. Via an API call to the ScaleX platform, HPC Systems’ optimization software Reaction plus Pro will integrate with ScaleX’s third-party scientific computing software applications, in addition to HPC Systems’ native applications.
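As a rough illustration of what such an API-driven integration could look like, the sketch below builds (but does not send) a job-submission request to a cloud HPC platform. The endpoint URL, payload field names, and analysis code used here are invented placeholders for illustration only, not Rescale's documented ScaleX API.

```python
import json
import urllib.request

# Hypothetical API base; the real platform's endpoint and schema may differ.
API_BASE = "https://platform.example.com/api/v2"

def build_job_request(token, software_code, command, core_count):
    """Build (but do not send) an HTTP request submitting a batch job to a
    cloud HPC platform. All field names below are illustrative assumptions."""
    payload = {
        "name": "reaction-plus-pro-run",
        "jobanalyses": [{
            "analysis": {"code": software_code},   # which solver to run
            "command": command,                    # shell command for the job
            "hardware": {"coresPerSlot": core_count},
        }],
    }
    return urllib.request.Request(
        url=f"{API_BASE}/jobs/",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_job_request("MY_API_TOKEN", "quantum_chemistry", "run_input.sh", 16)
assert req.get_method() == "POST"
```

In a hybrid setup of the kind described here, an on-premises scheduler would issue such a call only when local resources are saturated, bursting peak demand to the cloud.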

With ScaleX, R&D scientists no longer need to carry out IT tasks such as setting up their computing environment or managing fixed assets. In addition, customers are able to run jobs on the ScaleX platform on-demand without the limitations of job runtime that are commonly enforced in enterprise IT environments to manage limited compute resources.

The extension of HPC to the cloud will benefit Japanese R&D customers. “HPC Systems is excited to partner with Rescale, the world’s leading cloud platform service provider for HPC solutions,” said Teppei Ono, CEO of HPC Systems. “Together, we are combining Rescale’s ScaleX cloud HPC platform with on-premises environments built with our HPC integration technology to deliver a hybrid system to researchers, scientists, and engineers in materials and life sciences. We look forward to offering turnkey software solutions both in Japan and worldwide.”

Rescale’s CEO Joris Poort echoed excitement about the partnership: “We are extremely pleased to be partnering with HPC Systems, whose diverse services and products have a proven track record in Japanese computational science. Our partnership will provide joint customers with new potential for innovation and will accelerate research in materials and drug development and discovery.”

About HPC Systems

HPC Systems develops, manufactures, and sells high-performance computers that perform scientific and technological calculations conducted by research and development institutions in the public and private sectors. HPC Systems provides optimal cluster and parallel file systems, integration, tuning acceleration services, HPC cloud services, computational chemistry solutions, and research and development support. For more information, visit http://www.hpc.co.jp.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale

The post Rescale Partners with HPC Systems to Deliver Cloud HPC Platform in Japan appeared first on HPCwire.

2017 XSEDE Campus Champions Fellows Named

Fri, 08/04/2017 - 11:25

Aug. 4, 2017 — Four researchers from American universities will work with cyberinfrastructure and high-performance computing experts from XSEDE and U.S. research teams to work on real-world science and engineering projects over the next year in the 2017 Campus Champions Fellows program.

The 2017 cohort members are current XSEDE Campus Champions, part of a community of faculty, staff and researchers at over 200 U.S. institutions who advise others on their local campuses on the use of high-end cyberinfrastructure, including but not limited to XSEDE resources. The goal of the Campus Champions Fellows program is to increase expertise on campuses by including Campus Champions as partners in XSEDE’s ECSS projects.

  • Richard Gayler, a professor of computer science at Kennesaw State in Georgia, is paired with Si Liu, a research associate in the high performance computing group at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Gayler has been in academia all his life and was interested in acquiring some non-textbook skills in the development and modernization of HPC codes. Gayler and Liu are supporting PI Gabe Kooperman’s project “Assessing flood risk from threat of Madden-Julian Oscillation amplification.” Kooperman is a postdoctoral researcher in the department of Earth System Science at the University of California, Irvine who is working to understand how flood risks will change in the future by including effects of tropical weather patterns (the Madden-Julian Oscillation) in simulations. The TACC Ranch and Stampede systems will be used as a resource for this project.
  • Chet Langin, IT research coordinator at Southern Illinois University is paired with Alex Ropelewski, Director of the Biomedical Applications Group at the Pittsburgh Supercomputer Center (PSC). They are supporting PI Suping Zhou, Research Professor in the Department of Agricultural and Environmental Sciences at Tennessee State University. Zhou’s project is entitled “Computational Support for Bioinformatics Projects on Assembly Analysis of Fungal Metagenomes for the Discovery of Genes Involved in Important Biological Processes.” Working with Ropelewski, Langin will build skills developing an optimized workflow for Zhou’s polyploidy assembly work – installing programs, writing scripts, and possibly making the workflow and the bioinformatics tools available through the Bridges Galaxy interface.
  • Semir Sarajlic, a Research Computing Specialist at Georgia State University, is paired with Suresh Marru, the Deputy Director of the Science Gateways Research Center at Indiana University’s Pervasive Technology Institute. They are working with PI Mohan Ramamurthy, the Director of the Unidata Program Center at the University Corporation for Atmospheric Research (UCAR), on his project “Atmospheric Science in the Cloud: Enabling Data-Proximate Science” on Jetstream and Wrangler. One goal of this work is to make it straightforward for users pulling data from Unidata to dynamically request resources and burst to Jetstream.
  • Dan Voss, Director of Research Computing at the University of Kansas, is paired with Rich Knepper, Deputy Director at the Center for Advanced Computing, Cornell University. Knepper works in XSEDE’s Capability and Resource Integration group (formerly Campus Bridging), whose activities bridge local campuses and XSEDE resources. Projects include workflow submission systems that send jobs from campus to XSEDE Service Provider resources, the creation of shared virtual compute facilities that allow jobs to be executed on multiple resources, data management for researchers with Globus Connect, the creation of local XSEDE Compatible Cluster Systems, and other projects that use tools to reduce barriers for scaling analyses from campuses to national cyberinfrastructure.

Accepted Fellows, with the support of their home institution, make a 400-hour time commitment and are paid a stipend to allow them to focus time and attention on these collaborations. The program also includes funding for two visits, each ranging from one to two weeks, to an ECSS, PI or conference site to enhance the collaboration. Most Fellows and mentors met at PEARC17, July 9-13 in New Orleans. Fellows will present their work at PEARC18.

For more information on the XSEDE Campus Champions Fellows program, including all past cohorts, visit: https://www.xsede.org/ccfellows.

Source: XSEDE

The post 2017 XSEDE Campus Champions Fellows Named appeared first on HPCwire.

Purdue Professor Leads Effort to Increase Cybersecurity for Nuclear Power Plants

Fri, 08/04/2017 - 09:44

WEST LAFAYETTE, Ind., Aug. 4, 2017 — Cybersecurity for nuclear power plants is the focus of a new collaboration between government and academia, led by Purdue University’s Hany Abdel-Khalik.

The National Academy of Engineering has identified cybersecurity as one of the most complex issues engineering has ever faced. As engineering systems become more sophisticated, it gets harder to think about all the ways they can be compromised.

“Reactors are complex beasts. The nuclear community has a great number of scientists who have spent their entire careers thinking of all the different ways that things could go wrong,” said Abdel-Khalik, an associate professor of nuclear engineering. “Now things are becoming more digital and we’re relying on computers to make decisions, but computers aren’t human. They can make bad decisions if they are given bad data. That’s the advantage hackers could have right now.”

The Department of Energy’s Nuclear Energy University Program (NEUP) has provided funding for the Purdue-led team to develop tools to measure risk and mitigate the effects of a hypothetical security breach. Although nuclear reactors are inherently safe and have several built-in safety mechanisms, Abdel-Khalik and his colleagues want to construct another layer of defense.

For this project, the researchers will assume that hackers have access to the raw data used to control a reactor. Although obtaining this information would be extremely difficult, the team wants to make the reactor control system smart enough to realize it’s being manipulated.

“Say you have a friend and you’re used to them talking to you in a certain way. If they start saying weird things, or you see them do things that are out of the norm for their behavior, then you know there’s something wrong,” Abdel-Khalik said. “It’s the same for a nuclear reactor. You’re expecting the power, coolant flow and steam level to have certain patterns, and if they start deviating, that should signal to the operator that something is wrong and he should intervene.”
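Abdel-Khalik’s analogy amounts to anomaly detection on sensor streams: learn the normal band of a signal, then flag readings that fall far outside it. The toy sketch below illustrates only the general idea; the window size, threshold, and coolant-flow numbers are invented for illustration and are not the project’s actual method.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=20, threshold=3.0):
    """Return a checker that flags readings deviating more than
    `threshold` standard deviations from the recent window of behavior."""
    history = deque(maxlen=window)

    def check(reading):
        anomalous = False
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                anomalous = True
        if not anomalous:
            history.append(reading)  # only learn from trusted readings
        return anomalous

    return check

# Simulated coolant-flow readings hovering near 100 units with small jitter,
# followed by an abrupt, manipulated value.
check = make_detector()
normal = [100 + 0.5 * ((i * 7) % 5 - 2) for i in range(30)]
flags = [check(r) for r in normal]
assert not any(flags)        # steady behavior raises no alarm
assert check(150.0) is True  # the out-of-pattern reading is flagged
```

A real deployment would correlate many signals (power, coolant flow, steam level) and use physics-based models rather than a single z-score, but the principle of comparing incoming data against expected patterns is the same.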

When reactors were initially designed, much more of the operation was manual. If an operator saw something wrong, he or she would intervene right away. Now, control systems are being digitized to make operational and safety-related decisions.

Several nations across the globe have reported cyberattacks on critical infrastructure such as nuclear power plants in recent years, and the frequency and sophistication of the attacks are only increasing. This is because they’re high-reward targets for the attackers, Abdel-Khalik said.

“Attackers are always looking for ways to cause massive destruction, and nuclear reactors are very high-profile,” he said. “If they can say a reactor was hacked, that’s a big win for them, even if no damage happens in the reactor and it is simply restarted.”

Elisa Bertino, professor of computer science and head of the Cyber Space Security Lab at Purdue, will be working with Abdel-Khalik on the project. The two got started on a small grant from Sandia National Laboratory in 2015 to show that hackers could change the state of a reactor if they had access to raw data from the plant, and through the NEUP funding, they gained the resources to develop a solution.

The NEUP funding is unique and effective because it allows labs and universities to work together. Abdel-Khalik believes that to solve such a complex issue, it will take collaboration between academia, government and industry.

“Academia has always been the birthplace for new ideas. Government labs can take an idea and nurture it to production, and then industry can customize a product for the various applications,” he said.

Abdel-Khalik and Bertino will partner with Virginia Wright, Idaho National Laboratory Program Manager for Domestic Nuclear Cyber Security; Dr. Katrina Groth, an expert on risk and reliability analysis of nuclear power plants at Sandia National Laboratories; and Professor Ayman Hawari, director of North Carolina State University’s PULSTAR research reactor.

The award notice can be found here.

Source: Purdue University

The post Purdue Professor Leads Effort to Increase Cybersecurity for Nuclear Power Plants appeared first on HPCwire.

PSSC Labs Integrates New Intel Xeon Scalable Processors

Fri, 08/04/2017 - 09:28

LAKE FOREST, Calif., Aug. 4, 2017 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced it will offer Intel’s new Xeon Scalable Processors in its PowerServe line of HPC servers and PowerWulf line of HPC clusters. The integration provides PSSC Labs customers with breakthrough technology, offering performance capable of handling cutting edge computing tasks including real-time analytics, virtualized infrastructure and high-performance computing.

In addition to the advanced architecture, the new processors feature a rich suite of platform innovations for enhanced application performance, including Intel AVX-512, Intel Mesh Architecture, Intel QuickAssist, Intel Optane SSDs, and Intel Omni-Path Fabric.

PSSC Labs’ PowerServe and PowerWulf HPC solutions offer reliable, flexible, high performance computing solutions for a variety of applications across government, academic, and commercial environments including: Design & Engineering, Life Sciences, Physical Science, Financial Services and Machine/Deep Learning.

“PSSC Labs endeavors to offer our customers the latest and best hardware options in our line of custom turn-key HPC servers and clusters,” said Alex Lesser, Executive Vice President of PSSC Labs. “The new Intel Xeon processors are a major advancement for enhanced performance on resource hungry computing tasks.”

Every PowerServe server and PowerWulf cluster comes with a three-year unlimited phone/email support package (additional years of support available), with all support provided by a US-based team of experienced engineers. Prices start at $2,495.

Intel Xeon Scalable Processor Features:

Advanced Architecture: A new core microarchitecture, new on-die interconnects and new memory controllers mean optimized performance, reliability, security and manageability. The processor also offers better energy efficiency, lower energy costs and greater space efficiency.

Performance: The Intel Xeon Scalable Processors deliver an overall performance increase of up to 1.65x versus the previous generation, and up to 5x on OLTP warehouse workloads versus the current installed base.

Scalability: Up to 28 cores, up to 6 terabytes of system memory, and support for two- to eight-socket systems.

Agility: Optimized compute, network, and storage performance on premises, through a network, or in the cloud. Optional network integration with Intel Omni-Path Architecture.

Security: A 3.1x improvement in cryptography performance compared to the previous generation. Applications can now run with less than 1% overhead with data-at-rest encryption turned on. Intel Key Protection Technology delivers enhanced protection against security key attacks.

About PSSC Labs

For technology powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership. All products are designed and built at the company’s headquarters in Lake Forest, California. For more information, 949-380-7288, www.pssclabs.com, sales@pssclabs.com.

Source: PSSC Labs

The post PSSC Labs Integrates New Intel Xeon Scalable Processors appeared first on HPCwire.

Supermicro Announces Q4 2017 Financial Results

Fri, 08/04/2017 - 09:25

SAN JOSE, Calif., Aug. 4, 2017 — Super Micro Computer, Inc. (NASDAQ:SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced fourth quarter and full-year financial results for the fiscal year ended June 30, 2017. The final results are in line with the preliminary results announced by the Company on July 20, 2017.

Fiscal 4th Quarter Highlights

  • Quarterly net sales of $717.9 million, up 13.7% from the third quarter of fiscal year 2017 and up 36.9% from the same quarter of last year.
  • GAAP net income of $17.1 million, up 2.8% from the third quarter of fiscal year 2017 and up 145.7% from the same quarter of last year.
  • GAAP gross margin was 13.5%, down from 14.0% in the third quarter of fiscal year 2017 and down from 14.1% in the same quarter of last year.
  • Server solutions accounted for 74.3% of net sales compared with 70.0% in the third quarter of fiscal year 2017 and 65.5% in the same quarter of last year.

Net sales for the fourth quarter ended June 30, 2017 totaled $717.9 million, up 13.7% from $631.1 million in the third quarter of fiscal year 2017. No customer accounted for more than 10% of net sales during the quarter ended June 30, 2017.

GAAP net income for the fourth quarter of fiscal year 2017 was $17.1 million or $0.33 per diluted share, an increase of 145.7% from net income of $7.0 million, or $0.13 per diluted share in the same period a year ago. Included in net income for the quarter is $5.1 million of stock-based compensation expense (pre-tax). Excluding this item and the related tax effect, non-GAAP net income for the fourth quarter was $20.7 million, or $0.39 per diluted share, compared to non-GAAP net income of $10.4 million, or $0.20 per diluted share, in the same quarter of the prior year. On a sequential basis, non-GAAP net income increased from the third quarter of fiscal year 2017 by $0.4 million or $0.01 per diluted share.

GAAP and Non-GAAP gross margin for the fourth quarter of fiscal year 2017 was 13.5% compared to 14.1% in the same period a year ago. GAAP and Non-GAAP gross margin for the third quarter of fiscal year 2017 were both 14.0%.

The GAAP income tax provision for the fourth quarter of fiscal year 2017 was $9.6 million or 35.8% of income before tax provision compared to $4.5 million or 39.0% in the same period a year ago and $5.1 million or 23.6% in the third quarter of fiscal year 2017. The effective tax rate for the fourth quarter of fiscal year 2017 was higher compared to the third quarter of fiscal year 2017 primarily due to higher foreign taxes.

The Company’s cash and cash equivalents and short and long term investments at June 30, 2017 were $115.9 million compared to $183.7 million at June 30, 2016. Free cash flow for the year ended June 30, 2017 was $(125.8) million, primarily due to an increase in the Company’s cash used in operating activities.

Fiscal Year 2017 Summary

Net sales for the fiscal year ended June 30, 2017 were $2,529.9 million, up 14.2% from $2,215.6 million for the fiscal year ended June 30, 2016. GAAP net income for fiscal year 2017 was $69.3 million, or $1.34 per diluted share, a decrease of 3.7% from $72.0 million, or $1.39 per diluted share, for fiscal year 2016. Included in net income for the fiscal year ended June 30, 2017 is $19.2 million of stock-based compensation expense (pre-tax). Excluding this item and the related tax effect, non-GAAP net income for fiscal year 2017 was $82.8 million or $1.57 per diluted share, a decrease of 1.3% compared to $83.8 million or $1.59 per diluted share for fiscal year 2016.

Business Outlook & Management Commentary

The Company expects net sales of $625 million to $685 million for the first quarter of fiscal year 2018 ending September 30, 2017. The Company expects non-GAAP earnings per diluted share of approximately $0.30 to $0.40 for the first quarter.

“Supermicro has built a strong foundation for sustained high growth while improving profitability. During the last couple of years we have made significant investments in global production capacity, engineering, quality, global services, and systems and datacenter management software. It is these investments that will power the new Supermicro 3.0,” said Charles Liang, Chairman and Chief Executive Officer. “Supermicro 3.0 positions us as the only Tier 1 IT Infrastructure Provider capable of both first to market product innovation and global scale, quality, services and support to engage our rapidly growing enterprise customer base deeply in their business requirements. The record high revenue and strong 27.6% second half growth over last year is a direct result of these Supermicro 3.0 investments. With the major investments in place and the new Skylake product portfolio shipping, future investment and expenses will begin to flatten driving improved profitability moving forward.”

It is currently expected that the outlook will not be updated until the Company’s next quarterly earnings announcement, notwithstanding subsequent developments. However, the Company may update the outlook or any portion thereof at any time. Such updates will take place only by way of a news release or other broadly disseminated disclosure available to all interested parties in accordance with Regulation FD.

Conference Call Information

Super Micro Computer will discuss these financial results in a conference call at 2:00 p.m. PT, today. To participate in the conference, please call 1-888-352-6793 (International callers dial 1-719-325-4753) 10 minutes prior. A recording of the conference will be available until 11:59 pm (Eastern Time) on Thursday, August 17, 2017, by dialing 1-844-512-2921 (International callers dial 1-412-317-6671) and entering replay PIN 7567416. The live web cast and recording of the call will be available on the Investor Relations section at www.supermicro.com two hours after the conference conclusion. They will remain available until the Company’s next earnings call.

Cautionary Statement Regarding Forward Looking Statements

Statements contained in this press release that are not historical fact may be forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. Such forward-looking statements may relate, among other things, to our expected financial and operating results, our ability to build and grow Super Micro Computer, the benefits of our products and our ability to achieve our goals, plans and objectives. Such forward-looking statements do not constitute guarantees of future performance and are subject to a variety of risks and uncertainties that could cause our actual results to differ materially from those anticipated. These include, but are not limited to: our dependence on continued growth in the markets for X86, blade servers and embedded applications, increased competition, difficulties of predicting timing, introduction and customer acceptance of new products, poor product sales, difficulties in establishing and maintaining successful relationships with our distributors and vendors, shortages or price fluctuations in our supply chain, our ability to protect our intellectual property rights, our ability to control the rate of expansion domestically and internationally, difficulty managing rapid growth and general political, economic and market conditions and events. Additional factors that could cause actual results to differ materially from those projected or suggested in any forward-looking statements are contained in our filings with the Securities and Exchange Commission, including those factors discussed under the caption “Risk Factors” in such filings.

Use of Non-GAAP Financial Measures

Non-GAAP gross margin discussed in this press release excludes stock-based compensation expense. Non-GAAP net income and net income per share discussed in this press release exclude stock-based compensation expense and the related tax effect of the applicable items. Management presents non-GAAP financial measures because it considers them to be important supplemental measures of performance. Management uses the non-GAAP financial measures for planning purposes, including analysis of the Company’s performance against prior periods, the preparation of operating budgets and to determine appropriate levels of operating and capital investments. Management also believes that the non-GAAP financial measures provide additional insight for analysts and investors in evaluating the Company’s financial and operational performance. However, these non-GAAP financial measures have limitations as an analytical tool, and are not intended to be an alternative to financial measures prepared in accordance with GAAP. Pursuant to the requirements of SEC Regulation G, detailed reconciliations between the Company’s GAAP and non-GAAP financial results are provided at the end of this press release. Investors are advised to carefully review and consider this information as well as the GAAP financial results that are disclosed in the Company’s SEC filings.

About Super Micro Computer, Inc.

Supermicro, a global leader in high-performance, high-efficiency server technology and innovation, is a premier provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Simply Double, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO. Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack cabinets/accessories delivering unrivaled performance and value.

Source: Supermicro

The post Supermicro Announces Q4 2017 Financial Results appeared first on HPCwire.

IBM Storage Breakthrough Paves Way for 330TB Tape Cartridges

Thu, 08/03/2017 - 15:23

IBM announced yesterday a new record for magnetic tape storage that it says will keep tape storage density on a Moore’s law-like path far into the next decade. In collaboration with Sony, IBM scientists recorded 201 gigabits on one square inch of prototype magnetic tape, achieving a 20x improvement over the areal density used in current state of the art enterprise tape drives. This works out to a potential standard cartridge capacity of 330 terabytes (TB) of uncompressed data.

The demonstration device combines Sony’s new magnetic tape technology with IBM Research’s newly developed write/read heads and its next-generation servo and signal processing technologies, detailed in the IBM announcement:

+ Innovative signal-processing algorithms for the data channel, based on noise-predictive detection principles, which enable reliable operation at a linear density of 818,000 bits per inch with an ultra-narrow 48nm wide tunneling magneto-resistive (TMR) reader.

+ A set of advanced servo control technologies that, when combined, enable head positioning with an accuracy of better than 7 nanometers. This, combined with a 48nm wide (TMR) hard disk drive read head, enables a track density of 246,200 tracks per inch, a 13-fold increase over a state-of-the-art TS1155 drive.

+ A novel low friction tape head technology that permits the use of very smooth tape media.

It’s the first time that IBM is using sputtered magnetic tape instead of the traditional barium ferrite technology.

Sony’s new magnetic tape technology (Credit: Sony)

As explained by IBM Scientist Mark Lantz (see video at end of article), “sputter tape uses several layers of thin metal films that are coated onto the tape using vacuum sputter technology, similar to that used for integrated circuits.”

In current generation tape drives, a thin film of nanoscale barium ferrite particles is applied to the tape in liquid form, like a thin layer of paint.

“While sputtered tape is expected to cost a little more to manufacture than current commercial tape that uses Barium ferrite (BaFe), the potential for very high capacity will make the cost per TB very attractive,” said IBM Fellow Evangelos Eleftheriou.

IBM envisions tape becoming a viable storage media for cloud, both as a back-up application and an archival tier for infrequently-accessed data. Of course, spinning disc and flash storage have their advantages, but tape’s economics for massive data archives are hard to argue with.

IBM believes that this latest achievement puts the company on track to continue scaling tape technology at near-historic rates, doubling the storage capacity every two years for at least the next ten years.

IBM’s legacy of tape storage innovations stretches back more than 60 years. The company launched its first commercial tape product, the 726 Magnetic Tape Unit, in 1952, providing a speed and capacity advantage over existing cathode ray tube and drum storage devices. The 726 used an oxide-coated, non-metallic tape, approximately a half-inch wide with a density of 100 bits per linear inch.

IBM and Sony presented the results of their collaboration this week at the 28th Magnetic Recording Conference in Japan. Neither company indicated when we might actually see a 330 TB tape drive in the wild. Leading conventional technology can only handle 15 TB per data cartridge.

Feature image caption: IBM scientist Dr. Mark Lantz holds a one square inch piece of Sony Storage Media Solutions sputtered tape, which can hold 201 gigabits (Photo credit: IBM Research)

The post IBM Storage Breakthrough Paves Way for 330TB Tape Cartridges appeared first on HPCwire.

Engility Forms Five ENnovation Centers

Thu, 08/03/2017 - 11:01

CHANTILLY, Va., Aug 3, 2017 — Engility Holdings, Inc. (NYSE:EGL) has established five ENnovation Centers (EC) to develop advanced solutions for customers throughout the industry’s most cutting-edge growth areas, including agile software development, artificial intelligence (AI), cyber, high performance computing (HPC), and modeling and simulation. The purpose of the centers is to leverage employees’ deep domain knowledge and ingenuity to solve the Federal government’s toughest challenges.

“Government agencies have relied on Engility to turn ideas into real-world solutions,” said Lynn Dugle, Engility CEO. “By concentrating our subject matter and technical expertise under these ENnovation Centers, our customers have easier access to the resources needed to apply today’s solutions and test tomorrow’s technologies.”

Engility’s ENnovation Centers consist of virtually-networked experts with cutting-edge tools. “Our goal is to expand the ENnovation Centers as new technologies arise and the government’s challenges and opportunities evolve,” added Gay Porter, vice president of Engility’s Technical Solutions Group.

Agile DevOps ENnovation Center – As the pace of technology advancements continues to accelerate, it is imperative for government agencies and military services to apply proven agile software development expertise. Using tools and processes that streamline, automate and improve the development process, the Agile DevOps EC helps customers significantly reduce costs, increase efficiencies, improve security and enhance the user experience.

Artificial Intelligence ENnovation Center – Engility’s AI EC empowers customers by delivering ways to accelerate, automate and augment manual processes, quickly turning data into actionable intelligence. An example of a solution that the AI EC offers is Synthetic Analyst™, a proprietary AI model that can enhance any mission with low-risk, low-cost integration of data and analytics.

Cyber ENnovation Center – The Cyber EC delivers proven offensive and defensive cybersecurity toolsets, processes and engineers that provide resilient and effective systems to protect and support customers’ missions. Our experts help customers address security requirements, from developing secure systems to providing vulnerability assessments on existing systems.

High Performance Computing ENnovation Center – Engility continues to drive the development of new approaches for HPC solutions. Whether it is big data analysis using high performance data analytics (HPDA) or emerging disruptive technologies such as exascale and quantum computers, Engility’s EC delivers leading-edge insights from academia and industry to leverage HPC to enhance scientific advancements.

Modeling and Simulation ENnovation Center – Modeling and simulation solutions deliver a win-win, cost-effective method to safely and efficiently conduct testing and evaluation. From dynamic modeling and analysis capabilities to integrated live, virtual and constructive (LVC) simulations, Engility offers this EC as a “test bed” for everything modeling and simulation.

For more information about the ENnovation Centers, please visit www.engilitycorp.com/ennovation.

About Engility

Engility (NYSE: EGL) is engineered to make a difference. Built on six decades of heritage, Engility is a leading provider of integrated solutions and services, supporting U.S. government customers in the defense, federal civilian, intelligence and space communities. Our innovative, highly technical solutions and engineering capabilities address diverse client missions. We draw upon our team’s intimate understanding of customer needs, deep domain expertise and technical skills to help solve our nation’s toughest challenges. Headquartered in Chantilly, Virginia, and with offices around the world, Engility’s array of specialized technical service offerings include high-performance computing, cybersecurity, enterprise modernization and systems engineering. To learn more about Engility, please visit www.engilitycorp.com and connect with us on Facebook, LinkedIn and Twitter.

Source: Engility

The post Engility Forms Five ENnovation Centers appeared first on HPCwire.

Cavium Announces Financial Results for Q2 2017

Thu, 08/03/2017 - 09:26

SAN JOSE, Calif., Aug. 2, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable intelligent processing for enterprise, data center, cloud, wired and wireless networking, today announced financial results for the second quarter ended June 30, 2017.

Net revenue in the second quarter of 2017 was $242.1 million, a 5.5% sequential increase from the $229.6 million reported in the first quarter of 2017 and a 125.9% increase from the $107.2 million reported in the second quarter of 2016.

Generally Accepted Accounting Principles (GAAP) Results

Net loss for the second quarter of 2017 was $11.1 million, or ($0.16) per diluted share, compared to $50.5 million, or ($0.75) per diluted share, in the first quarter of 2017. Gross margins were 53.5% in the second quarter of 2017 compared to 40.1% in the first quarter of 2017. GAAP operating loss (GAAP loss from operations as a percentage of revenue) was 2.5% in the second quarter of 2017 compared to 17.0% in the first quarter of 2017. Total cash and cash equivalents were $127.1 million at June 30, 2017.

Non-GAAP Results                  

Cavium believes that the presentation of non-GAAP financial measures provides important supplemental information to management and investors regarding financial and business trends relating to Cavium’s financial condition and results of operations. Cavium believes that these non-GAAP financial measures provide additional insight into Cavium’s ongoing performance and core operational activities and has chosen to provide these measures for more consistent and meaningful comparison between periods. These measures should only be used to evaluate Cavium’s results of operations in conjunction with the corresponding GAAP measures. The reconciliation between GAAP and non-GAAP financial results is provided in the financial statements portion of this release.

In the second quarter of 2017, Non-GAAP net income was $48.9 million, or $0.67 per diluted share, Non-GAAP gross margin was 65.9% and Non-GAAP operating margin (non-GAAP income from operations as a percentage of revenue) was 23.3%.

Recent News Highlights                                           

  • July 25, 2017 – Cavium Expands XPliant Product Portfolio with 10GbE and 25GbE Optimized Programmable Switch Devices
  • July 18, 2017 – Cavium 25/50Gbps Ethernet Adapter Technology Powers Hewlett Packard Enterprise Synergy
  • July 11, 2017 – Cavium FastLinQ Ethernet Enables Universal RDMA for Dell EMC 14th Generation PowerEdge Servers
  • June 28, 2017 – Cavium and China Unicom Announced Trials of M-CORD NFV/5G Platforms in China
  • June 27, 2017 – Cavium Unveiled Industry’s Most Advanced 10/25/40/50Gbps Ethernet NIC Family
  • June 19, 2017 – Cavium and Leading Partners Showcased ThunderX-based Server Platforms & Software for High Performance Computing at ISC 2017
  • June 19, 2017 – Cavium Expands the ThunderX2 Server Ecosystem for Cloud and HPC Applications
  • May 29, 2017 – Cavium and Partners Demonstrated a Range of Efficient, Secure and Scalable Datacenter and Networking Infrastructure Solutions at COMPUTEX 2017
  • May 29, 2017 – Cavium FastLinQ Ethernet Adopted by Major ODMs for Next-Generation Cloud and Telco Datacenters 
  • May 29, 2017 – Inventec Launched New Baymax HyperScale Server Platforms Powered by Cavium ThunderX2 Processors
  • May 29, 2017 – Ingrasys Enables High Performance Computing and Hyperscale Workloads with New Class of Server Platforms Powered by Cavium ThunderX2 Processors
  • May 29, 2017 – GIGABYTE Technology Announced Expansion of their ARM Server Portfolio based on Cavium’s ThunderX2 Workload Optimized Processor Family
  • May 9, 2017 – Cavium QLogic Accelerates NVMe over Fabrics Adoption
  • May 8, 2017 – Cavium Demonstrated Newest Enterprise Connectivity and Datacenter Solutions at Dell EMC World 2017
  • May 8, 2017 – Cavium Showcased Innovative Solutions for Private and Public Cloud Infrastructure and Scale Out Applications at OpenStack 2017
  • May 3, 2017 – Cavium Named Winner of Omega Award for Trailblazing Innovation by ACG Research for 2016
  • May 3, 2017 – Cavium Demonstrated Next-generation NFV, SDN, 5G and Telco Cloud Infrastructure Solutions at NFV World Congress 2017
  • May 2, 2017 – China Mobile, ARM, Cavium and Enea Signed Agreement for Cooperation in China Mobile Open NFV Testlab
  • May 2, 2017 – Cavium Demonstrated Leading Datacenter, HPC and Next-generation Cloud Infrastructure Solutions at Red Hat Summit 2017
  • April 27, 2017 – Online Launched ARMv8-Based Scaleway Public Cloud Service Powered by Cavium’s ThunderX Workload Optimized Processors

Cavium will broadcast its second quarter of 2017 financial results conference call today, August 2, 2017, at 2 p.m. Pacific time (5 p.m. Eastern time). The conference call will be available via a live web cast on the investor relations section of the Cavium website at http://www.cavium.com. Please access the website at least a few minutes prior to the start of the call in order to download and install any necessary audio software. An archived web cast replay of the call will be available on the web site for a limited period of time.

About Cavium

Cavium offers a broad portfolio of integrated, software-compatible processors ranging in performance from 1Gbps to 100Gbps that enable secure, intelligent functionality in Enterprise, Data Center, Broadband, Mobile and Service Provider Equipment; highly programmable switches which scale to 3.2Tbps; and Ethernet and Fibre Channel adapters up to 100Gbps. Cavium processors are supported by ecosystem partners that provide operating systems, tools and application support, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com.

Source: Cavium

The post Cavium Announces Financial Results for Q2 2017 appeared first on HPCwire.

IBM Sets New Record for Magnetic Tape Storage

Wed, 08/02/2017 - 15:00

TSUKUBA, Japan, Aug. 2, 2017 — IBM Research scientists have achieved a new world record in tape storage – their fifth since 2006. The new record of 201 Gb/in2 (gigabits per square inch) in areal density was achieved on a prototype sputtered magnetic tape developed by Sony Storage Media Solutions. The scientists presented the achievement today at the 28th Magnetic Recording Conference (TMRC 2017) here.

Tape storage is currently the most secure, energy efficient and cost-effective solution for storing enormous amounts of back-up and archival data, as well as for new applications such as Big Data and cloud computing.

This new record areal recording density is more than 20 times the areal density used in current state of the art commercial tape drives such as the IBM TS1155 enterprise tape drive, and it enables the potential to record up to about 330 terabytes (TB) of uncompressed data* on a single tape cartridge that would fit in the palm of your hand. 330 terabytes of data are comparable to the text of 330 million books, which would fill a bookshelf stretching from the northeastern to the southwesternmost tips of Japan.

Magnetic tape data storage is currently experiencing a renaissance. With this achievement, IBM scientists demonstrate the viability of continuing to scale the tape roadmap for another decade.

“Tape has traditionally been used for video archives, back-up files, replicas for disaster recovery and retention of information on premise, but the industry is also expanding to off-premise applications in the cloud,” said IBM Fellow Evangelos Eleftheriou. “While sputtered tape is expected to cost a little more to manufacture than current commercial tape that uses Barium ferrite (BaFe), the potential for very high capacity will make the cost per TB very attractive, making this technology practical for cold storage in the cloud.”

To achieve 201 billion bits per square inch, IBM researchers developed several new technologies, including:

  • Innovative signal-processing algorithms for the data channel, based on noise-predictive detection principles, which enable reliable operation at a linear density of 818,000 bits per inch with an ultra-narrow 48nm wide tunneling magneto-resistive (TMR) reader.
  • A set of advanced servo control technologies that, when combined, enable head positioning with an accuracy of better than 7 nanometers. This, combined with a 48nm wide (TMR) hard disk drive read head, enables a track density of 246,200 tracks per inch, a 13-fold increase over a state-of-the-art TS1155 drive.
  • A novel low friction tape head technology that permits the use of very smooth tape media
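The headline areal density follows directly from the linear and track densities listed above; a quick arithmetic check:

```python
# Areal density = linear density (bits/inch) x track density (tracks/inch).
linear_density_bpi = 818_000   # bits per inch along the tape
track_density_tpi = 246_200    # tracks per inch across the tape

areal_density_gb_per_in2 = linear_density_bpi * track_density_tpi / 1e9
print(round(areal_density_gb_per_in2, 1))  # 201.4, matching the 201 Gb/in2 record
```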

IBM has been working closely with Sony Storage Media Solutions for several years, particularly on enabling increased areal recording densities. The results of this collaboration have led to various improvements in the media technology, such as advanced roll-to-roll technology for long sputtered tape fabrication and better lubricant technology, which stabilizes the functionality of the magnetic tape.

Many of the technologies developed and used in the areal density demonstrations are later incorporated into future tape products. Two notable examples from 2007 include an advanced noise predictive maximum likelihood read channel and first generation BaFe tape media.

IBM has a long history of innovation in magnetic tape data storage. Its first commercial tape product, the 726 Magnetic Tape Unit, was announced more than 60 years ago. It used reels of half-inch-wide tape that each had a capacity of about 2 megabytes. The areal density demonstration announced today represents a potential increase in capacity of 165,000,000 times compared with IBM’s first tape drive product. This announcement reaffirms IBM’s ongoing commitment and leadership in magnetic tape technology.
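The 165,000,000x figure is straightforward arithmetic against the 726's roughly 2 MB reel capacity:

```python
# Capacity ratio: projected 330 TB cartridge vs. the 726's ~2 MB reel (1952).
demo_capacity_bytes = 330 * 10**12   # 330 TB, decimal units
reel_726_bytes = 2 * 10**6           # ~2 MB per reel

print(demo_capacity_bytes // reel_726_bytes)  # 165000000
```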

* Assuming the same format overheads as the TS1155 format and taking into account the 6.4% increase in tape length enabled by the thinner demo tape. A TS1155 JD cartridge can hold 15 TB of uncompressed data in a 4.29 in. x 4.92 in. x 0.96 in. (109.0 mm x 125 mm x 24.5 mm) form factor.
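Under the footnote's assumptions, the 330 TB projection is internally consistent: scaling the TS1155's 15 TB by the extra 6.4% of tape length implies an areal density gain of roughly 20.7x, in line with the "more than 20 times" claim. A sanity check (the 20.7x figure is back-derived here, not stated in the release):

```python
# Back-derive the implied areal density gain from the projected capacity.
ts1155_tb = 15.0          # current TS1155 JD cartridge capacity
demo_tb = 330.0           # projected capacity of the demo technology
tape_length_gain = 1.064  # 6.4% longer tape from thinner media (per footnote)

implied_density_gain = demo_tb / (ts1155_tb * tape_length_gain)
print(round(implied_density_gain, 1))  # 20.7 -- i.e., "more than 20 times"
```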

Source: IBM

The post IBM Sets New Record for Magnetic Tape Storage appeared first on HPCwire.

Dell HPC Server Shines on STAC Test

Wed, 08/02/2017 - 12:20

A STAC report released this week indicates the Dell PowerEdge C4130 server set several new records on the STAC-A2 test suite (financial risk analysis). The machine tested had 4x Nvidia Tesla P100 GPU cards and 2x Intel Xeon E5-2690v4 CPUs. This particular server is positioned by Dell EMC to handle demanding workloads such as financial services, oil & gas exploration, scientific imaging/research, and HPC broadly.

Here are the top line results as reported by STAC (Securities Technology Analysis Center). “Compared to other publicly reported systems tested with STAC-A2 to date, this Dell solution had the:

  • Highest space efficiency (STAC-A2.β2.HPORTFOLIO.SPACE_EFF): 1.98x the efficiency of the previously tested system with 4 x P100 GPUs (NVDA161102); 1.95x the efficiency of the previous record holder (INTC170503)
  • Highest energy efficiency (STAC-A2.β2.HPORTFOLIO.ENERG_EFF): 12% higher than the previously tested system with 4 x P100 GPUs (NVDA161102)
  • Highest throughput in the portfolio benchmark (STAC-A2.β2.HPORTFOLIO.SPEED): 25.2 options per second
  • Fastest performance in warm and cold runs of the large problem size Greeks benchmark: 12.7 seconds (STAC-A2.β2.GREEKS.10-100k-1260.TIME.WARM); 5 seconds (STAC-A2.β2.GREEKS.10-100k-1260.TIME.COLD)
  • Tied the best score for warm runs of the baseline Greeks benchmark: 0.051 seconds (STAC-A2.β2.GREEKS.TIME.WARM)
  • Tied the best scores for problem-size capacity: Max assets (STAC-A2.β2.GREEKS.MAX_ASSETS) – 100; Max paths (STAC-A2.β2.GREEKS.MAX_PATHS) – 25,000,000”

According to STAC, the STAC-A2 is the technology benchmark standard based on financial market risk analysis. “Designed by quants and technologists from some of the world’s largest banks, STAC-A2 reports the performance, scaling, quality, and resource efficiency of any technology stack that is able to handle the workload (Monte Carlo estimation of Heston-based Greeks for a path-dependent, multi-asset option with early exercise).”
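STAC-A2's actual workload (Heston dynamics, path dependence, early exercise) is far richer, but the core idea of Monte Carlo Greek estimation can be illustrated with a minimal sketch: the delta of a plain European call under Black-Scholes dynamics, estimated by central finite differences with common random numbers. All names and parameters here are illustrative, not part of the benchmark:

```python
import math
import random

def mc_call_delta(s0, k, r, sigma, t, n_paths=200_000, bump=0.01, seed=7):
    """Monte Carlo delta of a European call via central finite differences.

    Common random numbers: the same normal draw drives both bumped spot
    prices, which sharply reduces the variance of the difference."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    diff_sum = 0.0
    for _ in range(n_paths):
        growth = math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_up = max((s0 + bump) * growth - k, 0.0)
        payoff_dn = max((s0 - bump) * growth - k, 0.0)
        diff_sum += payoff_up - payoff_dn
    return disc * diff_sum / (2 * bump * n_paths)

# At-the-money example; the analytic Black-Scholes delta here is ~0.56.
print(round(mc_call_delta(100.0, 100.0, 0.01, 0.2, 1.0), 2))
```

In a GPU implementation like the SUT above, each path (or block of paths) maps naturally onto a CUDA thread, which is why accelerators dominate this benchmark.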

In this instance key elements of the stack under test (SUT) included:

  • NVIDIA CUDA 8, cuRAND 8, cuBLAS 8
  • CUB library v1.5.2
  • Eigen library v3.2.8
  • Dell C4130 configuration G
  • 2x Intel Xeon CPU E5-2690v4 @ 2.60GHz
  • 4x NVIDIA Tesla P100 GPU accelerators @ 715MHz (memory), 1328MHz (graphics), with ECC enabled
  • Red Hat Enterprise Linux 7.3
  • 16 x 16GB DDR4 SDRAM

Link to the STAC report: https://stacresearch.com/NVDA170718

The post Dell HPC Server Shines on STAC Test appeared first on HPCwire.

Rescale Announces Southeast Asia Expansion

Wed, 08/02/2017 - 10:39

SINGAPORE, Aug 2, 2017 — Rescale, the San Francisco-based global leader for turnkey big compute solutions, is pleased to announce the launch of its Singapore office covering Southeast Asia. The office will be managed by Zac Leow who brings 20 years of regional experience in managing startup company growth related to business applications and hardware infrastructure. He carries a strong track record of bringing new solutions to local markets as well as experience in massively scalable applications, IoT, mobile apps, analytics, security, and cloud-hosted PaaS and SaaS applications. Singapore has fast become a high-tech digital hub for the region, and Leow will be working with local partners to promote Rescale’s leading solutions to end users and IT professionals across multiple industries.

“Rescale is at the leading edge of HPC and big compute in the cloud, and has the widest industry software support,” commented Leow. “I am honored to head the Singapore office and to help customers in Southeast Asia accelerate and optimize their engineering and scientific simulations using Rescale’s innovative platform. Rescale has some of the most amazing engineering talents a company can have.”

“We are very excited to have Zac on board and setting up the local Rescale office,” added Fanny Treheux, Rescale’s Director of Solutions. “Singapore has always been a hub for high-tech, and Rescale is looking forward to engaging on solutions with local partners and tackling big compute problems from local companies.”

About Rescale

Rescale is the global leader for enterprise big compute. Trusted by the Global Fortune 500, Rescale empowers the world’s top executives, IT leaders, engineers, and scientists to securely manage product innovation and perform groundbreaking research and development faster at a lower cost. Rescale’s ScaleX platform solutions transform traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing infrastructure network in the world. Rescale offers hundreds of turnkey software applications on the platform which are instantly cloud-enabled for the enterprise. For more information on Rescale, visit www.rescale.com.

Source: Rescale

The post Rescale Announces Southeast Asia Expansion appeared first on HPCwire.

Global Hybrid FPGA Market to Grow at a CAGR of 9.4% by 2021

Wed, 08/02/2017 - 10:35

DUBLIN, Aug 2, 2017 — The “Global Hybrid FPGA Market 2017-2021” report has been added to Research and Markets’ offering.

The global hybrid FPGA market is expected to grow at a CAGR of 9.44% during the period 2017-2021.

The report, Global Hybrid FPGA Market 2017-2021, has been prepared based on an in-depth market analysis with inputs from industry experts. The report covers the market landscape and its growth prospects over the coming years. The report also includes a discussion of the key vendors operating in this market.

The latest trend gaining momentum in the market is the growing adoption of hybrid FPGAs in the HPC sector. HPC is a combination of processes aimed at delivering the highest and most efficient computing performance. HPC addresses data- and computation-intensive tasks, such as modeling and simulation, that general-purpose computers cannot handle.

An HPC system is a cluster of processors, with cluster sizes typically ranging from 16 to 64 nodes. It uses algorithms, networks, and simulated environments, scaling from small to large supercomputers, and may even be applied to quantum computers in the future. The high-performance computing market is being driven by the need for economic competitiveness and product innovation. Recent developments such as the advent of cloud-based HPC are attracting small and medium enterprises to this domain.

Key vendors

  • Achronix Semiconductor
  • FPGA family
  • Intel
  • Lattice Semiconductor

Other prominent vendors

  • Atmel
  • Flex Logix Technologies
  • Microsemi
  • Texas Instruments

Key Topics Covered:

PART 01: Executive summary

PART 02: Scope of the report

PART 03: Research Methodology

PART 04: Introduction

PART 05: Market landscape

PART 06: Market segmentation by application

PART 07: Market segmentation by product

PART 08: Geo

PART 09: Key leading countries

PART 10: Decision framework

PART 11: Drivers and challenges

PART 12: Market trends

PART 13: Vendor landscape

PART 14: Key vendor analysis

For more information about this report visit https://www.researchandmarkets.com/research/lszq3c/global_hybrid

Source: Research and Markets

The post Global Hybrid FPGA Market to Grow at a CAGR of 9.4% by 2021 appeared first on HPCwire.