HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Supermicro Introduces New BigTwin Server Architecture

Tue, 02/14/2017 - 07:45

SAN JOSE, Calif., Feb. 14 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in compute, storage and networking technologies including green computing, has announced the fifth generation of its Twin family, the new BigTwin server architecture.

The Supermicro BigTwin is a breakthrough multi-node server system with a multitude of innovations and industry firsts. BigTwin maximizes system performance and efficiency by delivering 30% better thermal capacity in a compact 2U form factor, enabling solutions with the highest-performance processors, memory, storage and I/O. Continuing Supermicro’s NVMe leadership, the BigTwin is the first All-Flash NVMe multi-node system. BigTwin doubles the I/O capacity with three PCI-E 3.0 x16 I/O options and provides added flexibility with more than 10 networking options, including 1GbE, 10G, 25G, 100G and InfiniBand, via its industry-leading SIOM modular interconnect. Each node can support current and next-generation dual Intel Xeon processors with up to 3TB of memory; 24 drives of All-Flash NVMe, hybrid NVMe/SATA/SAS, SSD and HDD; and two M.2 NVMe/SATA drives per node. Extending the industry’s largest portfolio of server and storage systems, the BigTwin is ideal for customers looking to create simple-to-deploy, easy-to-manage, blazing-fast, high-density compute infrastructure. The new system is targeted at cloud, big data, enterprise, hyper-converged and IoT workloads that demand maximum performance, efficiency and flexibility.

“Exceeding our customers’ computing performance and efficiency demands has been our hallmark and our new BigTwin server is no exception. As our fifth generation Twin platform, BigTwin optimizes multi-node server density with maximum performance per watt, per square foot and per dollar with support for free-air cooled data centers,” said Charles Liang, President and CEO of Supermicro. “BigTwin is also the first and only multi-node system that supports up to 205-watt Xeon CPUs, a full 24 DIMMs of memory per node and 24 All-Flash NVMe drives ensuring that this architecture is optimized for today and future proofed for the next generation of technology advancements, including next generation Intel Skylake processors.”

BigTwin is a 2U server configuration that supports four compute nodes. Each node supports all of the following: 24 DIMMs of ECC DDR4-2400MHz and higher for up to 3TB of memory; flexible networking with SIOM add-on cards offering quad/dual 1GbE, quad/dual 10GbE/10G SFP+, dual 25G, 100G, and FDR or EDR InfiniBand options; 24 hot-swap 2.5″ NVMe/SAS3/SATA3 drives; two PCI-E 3.0 x16 slots; M.2 and SATADOM; dual Intel Xeon processors from the E5-2600 v4/v3 product families up to 145W; Supermicro’s PowerStick fully redundant high-efficiency power supplies (2200W, 2600W); and support for free-air cooled datacenters. Sold as a complete system for the highest product quality, delivery, and performance, the BigTwin is supported by Supermicro IPMI software and Global Services and is optimized for HPC, data center, cloud and enterprise environments.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro


PRACE Issues Call for Posters for ACM Europe Celebration of Women in Computing

Tue, 02/14/2017 - 06:45

Feb. 14 — The ACM Europe Celebration of Women in Computing: womENcourage 2017 aims to celebrate, connect, inspire, and encourage women in computing. The conference brings together undergraduate, MSc, and PhD students, as well as researchers and professionals, to present and share their achievements and experience in computer science.

WomENcourage solicits posters from all areas of Computer Science. Posters offer the opportunity to engage with other conference attendees, disseminate research work, receive comments, practice presentation skills, and benefit from discussing ideas with other researchers from the same field. Submissions should present novel ideas, designs, techniques, systems, tools, evaluations, scientific investigations, methodologies, social issues or policy issues related to any area of computing. Authors may submit original work or versions of previously published work. Posters are ideal for presenting early-stage research.

Poster abstracts are to be submitted electronically through EasyChair at https://easychair.org/conferences/?conf=womencourage2017. Submissions should introduce the area in which the work has been done and should emphasize the originality and importance of the contribution. All submissions must be in English, in PDF format. They must not exceed one page in length and they must use the ACM conference publication format. This one-page extended abstract must be submitted to EasyChair as a paper which also contains a short (one paragraph) abstract. Poster abstracts that do not follow the submission guidelines will not be reviewed.

All submissions will be peer reviewed by an international Poster Evaluation Committee. Accepted submissions will be archived on the conference website (but there will be no proceedings). The Guide to a Successful Submission provides tips for preparing a good poster and provides information about the reviewing criteria. A submission may have one or more authors of any gender.

At least one author of each accepted submission is expected to attend the conference to present the ideas discussed in the submission. Information about student scholarships is available here.

Important Dates:

  • Poster abstracts due: April 30, 2017
  • Notification of accepted posters: June 5, 2017
  • Final poster abstracts due: July 3, 2017
  • Poster PDF due: July 31, 2017

Source: PRACE


ASC Challenges TaihuLight and Gordon Bell Application

Tue, 02/14/2017 - 01:01

MASNUM_WAVE, a high-resolution global surface wave simulation and a 2016 Gordon Bell Prize finalist, has been entered into the ASC Student Supercomputer Challenge 2017 (ASC17). In the preliminary contest, all teams are to optimize this numerical model on Sunway TaihuLight, the world’s fastest computer.

In the preliminary contest, the MASNUM workload contains two sets of data: one from the Western Pacific Ocean, the other covering all global oceans. Both are real-world data, but with different granularity. ASC supplies each team with over 1,000 cores to use on the Sunway TaihuLight.

Maximizing MASNUM’s scalability on Sunway TaihuLight will be critical for all teams. The Sunway TaihuLight system uses China’s home-grown manycore processor, the SW26010. Excellent performance will require a grasp of this unique computing architecture, the network between nodes, and how to use both efficiently.

Beyond optimizing this Gordon Bell Prize application, each team will also work on an AI traffic prediction task, run the HPL benchmark on a Xeon Phi cluster, and design a supercomputing system within a 3,000W power budget. The finalists of the ASC Student Supercomputer Challenge 2017 will be announced on March 13. The final competition will take place at the National Supercomputing Center in Wuxi from April 24 to April 28.

In ASC17, over 230 teams from 15 countries have registered to compete. Details about the ASC17 preliminary contest can be found at http://www.asc-events.org/ASC17/Preliminary.php


Mellanox Demonstrates Improvement in Crypto Performance With Innova IPsec 40G Ethernet Network Adapter

Mon, 02/13/2017 - 06:53

SUNNYVALE, Calif. and YOKNEAM, Israel, Feb. 13 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced line-rate crypto throughput using Mellanox’s Innova IPsec Network Adapter, demonstrating more than three times higher throughput and more than four times better CPU utilization compared with x86 software-based server offerings. Mellanox’s Innova IPsec adapter provides seamless crypto capabilities and advanced network accelerations to modern data centers, thereby enabling the ubiquitous use of encryption across the network while sustaining unmatched performance, scalability and efficiency. By replacing software-based offerings, Innova can reduce data center expenses by 60 percent or more.

As security concerns in data centers continue to rise, and as CPU-based products struggle to handle today’s exponential data growth, delivering cost-effective, performant, hardware-accelerated crypto solutions on a per-server basis has become paramount to maintaining the integrity and confidentiality of the data exchanged over the network infrastructure.

The Innova IPsec adapter addresses the growing need for security and “encryption by default” by combining Mellanox ConnectX advanced network adapter accelerations with IPsec offload capabilities to deliver end-to-end data protection in a low profile PCIe form factor. The Innova IPsec adapter offers multiple integrated crypto and security protocols and performs the encryption/decryption of data-in-motion, freeing up costly CPU cycles.

“The Innova security adapter product line enables the use of secure communications in a cost effective and a performant manner,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Whether used within an appliance such as firewall or gateway, or as an intelligent adapter that ensures data-in-motion protection, Innova IPsec adapters are the ideal solution for cloud, Web 2.0, telecommunication, high-performance compute, storage systems and other applications.”

Innova products deliver industry-leading technologies and accelerations through the integrated ConnectX-4 Lx network adapter, such as support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, and more.

As part of the Innova product line, the Innova Flex Intelligent Network Adapter enables customers to leverage the flexibility of the embedded FPGA to develop their own logic within the adapter. The Innova Flex and IPsec network adapters are currently available in volume quantities.

Mellanox will be exhibiting at the RSA Conference 2017, Feb. 13-17, booth no. 406, in the South Hall of Moscone Center, San Francisco. At the show, Mellanox will showcase its Innova solutions, as well as the Company’s Ethernet and InfiniBand intelligent interconnect products.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies


Supermicro to Showcase Building Block Solutions at RSA Conference

Mon, 02/13/2017 - 06:45

SAN FRANCISCO, Calif., Feb. 13 — Super Micro Computer, Inc., a global leader in compute, storage, networking technologies and green computing, will display and demonstrate Supermicro’s leading edge technology for Internet security building blocks at the Moscone Convention Center booth #N5015 at RSA Conference 2017 from February 13 to 17. The RSA Conference 2017 is a global event ‘where the world talks security’ and provides valuable insight into the advancements in information and cyber security.

Supermicro’s server Building Block Solutions are fully converged and scalable. The company will have a team of experts on hand to discuss the advantages that each product provides in the security markets. Presentations will include Embedded Solutions, Servers, Motherboards, and Networking Solutions. The team will also discuss Virtualization, Transcoding, Cryptography and Secure Data Storage through encryption and compression with remote management capabilities.

“Cyber security is a perpetual issue that needs the latest technology. Supermicro provides many leading edge embedded products to our Networking and Communications customers providing security solutions. Our 1U SuperServer 1028U-TN10RT+ with dual processors and 10 hybrid drives provides the industry’s most balanced and scalable architecture with 10G/25G/100G networking and NVMe high-speed storage that accelerates security applications via PCI-E expansion slots,” said Charles Liang, President and CEO of Supermicro. “Our fully converged low-power and high-performance compute platforms with integrated storage and high-speed communication ports offer scalable solution options for the most demanding security and networking infrastructure.”

Discover how Supermicro converged infrastructure deployments are assisting security companies in building security appliances for the data center and the edge. Leveraging our expertise and more than 20 years of engineering experience, we provide the most comprehensive and flexible hardware servers for Network Security Appliance and Communication Infrastructure solutions.

Some of the equipment being showcased includes:

  • SuperServer 5019S-MR for HA, web hosting, file/print, mission-critical and light-load VM workloads
  • SuperServer 5091S-MN4 for SMB, web hosting, file/print, network-centric and general-purpose computing, mainstream workloads and domain controllers
  • 2U SuperServer 6028R-T for cloud and virtualization needs, compute-intensive applications, data processing and high-availability storage
  • Mini-1U SuperServer SYS-E300-8D, a 4-core, 8-LAN, M.2-ready system with one expansion slot, for embedded networking applications, network security appliances, firewalls and virtualization
  • SuperServer SYS-E200-8D, a 6-core Xeon D system for embedded networking applications, network security appliances, firewalls and virtualization applications
  • SuperServer 5018D-FN8T, a front-I/O, 1U, 4-core Xeon D system with 8 LAN ports including SFP+, in a compact design less than ten inches deep, for cloud, virtualization, network and embedded applications
  • SuperServer 1018D-FRN8T, a 1U 16-core Xeon D SoC OEM solution for network security appliances, firewalls, virtualization, SD-WAN and vCPE applications, offering a seven-year life cycle
  • Xeon Motherboard X10SDV-12C-TLN4F+ with Intel Xeon processor D-1557: single socket FCBGA 1667, 12 cores, 24 threads, 45W
  • Xeon Motherboard X10SDV-TP8F with Intel Xeon processor D-1518: single socket FCBGA 1667, 4 cores, 8 threads, 35W
  • Atom Motherboard A2SAV with Intel Atom processor E3940, SoC, FCBGA 1296
  • Layer 2/3 Ethernet SuperSwitch SSE-G3648B/SSE-G3648BR, a 1U top-of-rack 1/10G Ethernet switch, bare metal with ONIE installed and Cumulus Linux ready

For more information on Supermicro’s complete range of high-performance, high-efficiency Server, Storage and Networking solutions, please visit www.supermicro.com.

About Super Micro Computer, Inc.

Supermicro (SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Super Micro Computer


Is Liquid Cooling Ready to Go Mainstream?

Mon, 02/13/2017 - 06:42

Lost in the frenzy of SC16 was a substantial rise in the number of vendors showing server-oriented liquid cooling technologies. Three decades ago, liquid cooling was pretty much the exclusive realm of the Cray-2 and IBM mainframe-class products. That’s changing. We are now seeing the emergence of x86-class server products with exotic plumbing technology, ranging from Direct-to-Chip cooling to servers and storage completely immersed in a dielectric fluid.

Most people know that liquid cooling is far more efficient than air cooling in terms of heat transfer. It is also more economical, reducing the cost of power by as much as 40 percent depending on the installation. Being more efficient with electricity can also reduce carbon footprint and contribute positively to the goals of “greenness” in the data center, but there are other compelling benefits as well; more on that later.

Most HPC users are familiar with the Top500 but may not be as familiar with the Green500, which ranks the Top500 supercomputers in the world by energy efficiency. The focus on performance-at-any-cost computing has led to the emergence of supercomputers that consume vast amounts of electrical power and produce so much heat that large cooling facilities must be constructed to ensure proper performance.

To address this trend, the Green500 list puts a premium on energy-efficient performance for sustainable supercomputing. The most recent Green500, released during SC16, has several systems in the top 10 using liquid cooling.

CoolIT CHx650

US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, about two percent of the country’s total energy consumption, according to a 2014 study conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University. Liquid cooling can reduce electrical usage by as much as 40 percent, which would take a huge bite out of datacenter energy consumption.
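
For a rough sense of scale, here is a minimal back-of-envelope sketch in C using the figures above. It assumes, purely hypothetically, that the full 40 percent saving applied to every facility, which no one is claiming; the point is only to show the magnitude involved.

    #include <stdio.h>

    int main(void) {
        /* Figures quoted above; a back-of-envelope upper bound, not a forecast. */
        double us_dc_kwh_2014 = 70e9;  /* ~70 billion kWh used by US data centers in 2014 */
        double max_reduction  = 0.40;  /* liquid cooling: up to 40% lower electrical usage */

        /* Hypothetical best case: every facility achieves the full 40% saving. */
        double kwh_saved = us_dc_kwh_2014 * max_reduction;

        printf("Upper-bound annual savings: %.0f billion kWh\n", kwh_saved / 1e9);
        return 0;
    }

Even a fraction of that 28 billion kWh upper bound would be a meaningful dent in datacenter energy use.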

Liquid cooling can also increase server density. The heat generated by HPC servers rises, and in a full rack of servers those at the top will experience temperature increases and ultimately shut down. Consequently, you cannot completely populate the rack with servers all the way to the top; instead you need additional racks, and must provision extra floor space, to get the processing power you want. Liquid cooling eliminates the need for additional racks, creating higher data center server density using less floor space. This need for processing power and capacity has only been increasing in a race to the top.

Another liquid cooling benefit is higher-speed processing: CPUs and other components can run at higher speeds because they are cooled more efficiently. Also, the servers require no fans, making them operationally silent. The more servers in a datacenter, the more fans are required, and noise levels increase until they hit a painful point, sometimes literally. Liquid cooling eliminates fans and thus reduces acoustic noise levels.

Reliability can also be improved, since mechanical and thermal fatigue are reduced in liquid cooling systems: there are no moving parts or fan vibrations, and the systems are cooled more efficiently. The elimination of hot spots and thermal stresses also leads to improved overall reliability, performance and lifespan.

Liquid Cooling Round-up

Following is a round-up of vendors demonstrating liquid-cooled servers at SC16:

Aquila developed the Aquarius water-cooled server system, offered in an Open Compute Platform (OCP) rack, in partnership with Clustered Systems, utilizing its cold-plate cooling technology. Aquila has also partnered with Houston-based TAS Energy to co-develop an edge data center around the Aquarius platform.

Asetek, a provider of hot-water, direct-to-chip liquid cooling technology, showcased solutions in use at HPC sites worldwide with OEM partners such as Cray, Fujitsu, Format, and Penguin. Liquid cooling solutions for HPE, NVIDIA, Intel, and others were also on display.

Asetek’s direct-to-chip cooling technology is deployed in nine installations on the November 2016 Green500 list. The highest ranked, at #5 on the list, is the University of Regensburg’s QPACE3, a joint research project with the University of Wuppertal and the Jülich Supercomputing Center. Featuring Asetek liquid-cooled Fujitsu PRIMERGY servers, it is one of the first Intel Xeon Phi KNL-based HPC clusters in Europe. Ranked #6 on the Green500, Oakforest-PACS is the highest-performance supercomputer system in Japan and is ranked #6 on the Top500. Fujitsu also deployed HPC clusters with PRIMERGY server nodes at the Joint Center for Advanced High-Performance Computing (JCAHPC), in conjunction with the University of Tokyo and the University of Tsukuba.

Asetek also announced that its liquid cooling technology is cooling eight installations in the November 2016 edition of the TOP500 list of the fastest supercomputers in the world.

CoolIT Systems is a leader in energy-efficient Direct Contact Liquid Cooling (DCLC) solutions for the HPC, cloud and enterprise markets. CoolIT’s solutions target racks of high-density servers. The technology can be deployed with any server in any rack, according to CoolIT.

CoolIT has several OEMs including:

  • Hewlett Packard Enterprise Apollo 2000 System
  • NEC Blue Marlin
  • Dell PowerEdge C6320
  • Lenovo NeXtScale product offering

CoolIT Systems has also partnered with STULZ and showcased their Chip-to-Atmosphere concept within a micro datacenter. CoolIT was recently selected by the University of Toronto to provide custom liquid cooling for its new signal-processing backend, which will support Canada’s largest radio telescope, the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a joint project between the National Research Council of Canada (NRC) and three major universities (McGill, Toronto, UBC).

Ebullient has developed a two-phase cooling system for data center servers. Low-pressure fluid, 3M Novec 7000, is pumped through flexible tubing to sealed modules mounted on the processors in each server. The fluid captures heat from the processors and transports it back to a central unit, where it is either rejected outside the facility or reused elsewhere in the facility or in neighboring facilities. Ebullient’s direct-to-chip systems can cool any server, regardless of make or model.

Ebullient is an early-stage company, founded in 2013 and based on technology developed at the University of Wisconsin; it raised $2.3M in January 2016.

Green Revolution Cooling’s CarnotJet System is a liquid immersion cooling solution for data center servers. Rack-mounted servers from any OEM vendor can be installed in special racks filled with a dielectric mineral oil. On show at the company’s SC16 booth was the Minimus server, its own design intended to further reduce the cost of the server component of the overall system.

In December Green Revolution announced a strategic partnership with Heat Transfer Solutions (HTS), an independent HVAC manufacturers’ representative in North America. As part of the partnership, HTS is making a financial investment in GRC, which will provide growth capital as the company continues to expand its presence in the data center market. In addition, a new CEO was appointed to help grow the company.

LiquidCool Solutions is a technology development firm specializing in cooling electronics by total immersion in 3M Novec dielectric fluid. The company was originally founded in 2006 as Hardcore Computing with a focus on workstations, rebranding in 2012 as LiquidCool Solutions with a focus on servers. The company demonstrated two new liquid-submerged servers based on its Clamshell design: the Submerged Cloud Server, a 2U four-node server designed for cloud computing applications, and the Submerged GPU Server, a 2U dual-node server designed for HPC applications that can be equipped with four GPU cards or four Xeon Phi boards.

LiquidMips showcased a server-cooling concept, a single processor chip immersed in 3M Fluorinert. It’s a long way from being a commercially viable product but represents another company entering the immersive cooling market.

Inspur Systems Inc., part of Inspur Group, showed two types of cooling solutions at SC16: a phase-change cooling solution with ultra-high thermal capacity, and a direct contact liquid cooling solution that allows users to maximize performance and lower operating expenses.

Allied Control specializes in two-phase immersion cooling solutions for HPC applications. Having built the world’s largest (40MW) immersion-cooled data center, with 252kW per rack, which works out to 34.7kW/sqm (3.2kW/sqft) including white space, Allied Control offers performance-centric solutions for ultra-high-density HPC applications. Allied Control utilizes 3M Novec dielectric fluid.

The BitFury Group (the Bitcoin mining giant) acquired Allied Control in 2015. In January 2017, BitFury Group announced a deal with Credit China Fintech Holdings to set up a joint venture that will focus on promoting the technology in China. As part of the deal, Credit China Fintech will invest $30 million in BitFury and in the joint venture, which will sell BitFury’s bitcoin mining equipment.

ExaScaler Inc. specializes in submersion liquid cooling technology. ExaScaler and its sister company PEZY Computing unveiled the ZettaScaler-1.8, the first supercomputer with a performance density of 1.5 PetaFLOPS/m³. The ZettaScaler-1.8 is an advanced prototype of the ZettaScaler-2.0, due to be released in 2017 with a performance density three times higher than that of the ZettaScaler-1.8. ExaScaler’s immersion liquid cooling, using 3M Fluorinert, cools the ZettaScaler-1.8 supercomputer.

Fujitsu demonstrated a new form of data center, which included cloud-based servers, storage, network switches and data center facilities, by combining the liquid immersion cooling technology for supercomputers developed by ExaScaler Inc. with Fujitsu’s know-how in general-purpose computers. Fujitsu is able to capitalize on three decades of liquid cooling expertise, from mainframes to supercomputers to Intel x86 systems.

This new style of data center uses liquid immersion cooling technology that completely immerses IT systems such as servers, storage, and networking equipment in liquid coolant in order to cool the devices.

The liquid immersion cooling technology uses 3M’s Fluorinert, an inert fluid that provides high heat-transfer efficiency and insulation as a coolant. IT devices, including servers and storage, are totally submerged in a dedicated reservoir tank filled with liquid Fluorinert, and the heat generated from the devices is processed by circulating the cooled liquid through the devices. This improves the efficiency of the entire cooling system, thereby significantly reducing power consumption. A further benefit of the immersed cooling is that it provides protection from harsh environmental elements, such as corrosion, contamination, and pollution.

3M offers HPC solutions using its Engineered Fluids, such as Novec and Fluorinert. Perhaps the winner at SC16 for immersed cooling is 3M, as most of the vendors mentioned here use 3M Engineered Fluids. 3M fluids were also featured in some of the networking products at the event. Fully immersed systems can improve energy efficiency, allow for significantly greater computing density, and help minimize thermal limitations during design.

Huawei announced a next-generation FusionServer X6000 HPC server that uses a liquid cooling solution featuring a skive fin micro-channel heat sink for CPU heat dissipation and processing technology in which water flows through the memory modules. The modular board design and 50ºC warm-water cooling offer high energy efficiency and reduce total cost of ownership (TCO).

Other vendors

HPE Apollo

HPE and Dell both introduced liquid cooling server products in 2016. Though they do not have the liquid cooling lineage of Fujitsu, they nevertheless recognize the value liquid cooling delivers to the datacenter.

HPE’s entrance is the Apollo family of high-density servers. These rack-based solutions include compute, storage, networking, power and cooling. Target uses are high-performance computing workloads and big data analytics. At the top of the server lineup, the Apollo 8000 uses a warm-water cooling system, whereas other members of the Apollo family integrate CoolIT Systems’ Closed-Loop DCLC (Direct Contact Liquid Cooling).

Dell, like HPE, does not have the decades of liquid cooling expertise of Fujitsu. Dell took the covers off its Triton water cooling system in mid-2016. Dell’s Extreme Scale Infrastructure team built Triton as a proof of concept for eBay, leveraging Dell’s rack-scale infrastructure. The liquid-cooled cold plates directly contact the CPUs, and the design incorporates liquid-to-air heat exchangers to cool the airborne heat generated by the large number of densely packed processor nodes.

What about existing servers?

Good question, and the answer is that, practically speaking, you cannot retrofit them: adopting liquid cooling only makes sense for new server deployments. That is not to say it is impossible, but a lot of modifications are needed to make water cooling, whether direct-to-chip or fully immersed, work; it is a big maybe and not really recommended. An existing server has cooling fans that need to be disabled and CPU cooling towers that must be removed, and so on. You also need to add plumbing to your existing rack, which can be a pain.

There is no question that a prospective user needs to consider the impact on, and requirements for, existing datacenter infrastructure: the physical bricks, mortar, plumbing, etc. Users considering water-cooled solutions will need to plumb water to the server rack. If you are in a new datacenter, that is one level of effort; but if your datacenter is a large closet in an older building, like 43 percent of North American datacenter/server rooms, it may be a lot more difficult and expensive.

If you are considering a fully immersed solution, such as Fujitsu’s, no plumbing is required; all you need to do is hook up to a chiller. It may be easier and less expensive than water cooling. As a completely sealed unit, it is conceivable that liquid immersion cooling solutions can be deployed almost anywhere, no datacenter required.

Most vendors covered in this market are small, emerging-technology companies. Asetek’s data center revenue was $1.8 million in the third quarter and $3.6 million in the first nine months of 2016, compared with $0.5 million and $1.0 million in the third quarter and first nine months of 2015, respectively. Asetek is forecasting significant data center revenue growth in 2016, up from $1.9M in 2015.

CoolIT reported 2014 revenue of $27M across all product categories. It is worth noting that Asetek’s and CoolIT’s data center revenues are less than 10% of total company revenue; the remaining 90% comes from workstation and PC liquid cooling solutions. Ebullient, LiquidMips, LiquidCool Solutions, Green Revolution and Aquila have very few customers and probably less than $10M in annual revenues each.

The obvious question, since most of the vendors are small and very early stage, is whether there is truly a market for liquid-cooled servers. Industry analysts believe there is, and forecast the market to grow from about $110M in 2015 to almost $960M in 2020, an additional $850M of incremental revenue in just five years.
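
For context, a small sketch of what that forecast implies as an annual growth rate; the dollar figures are the analyst numbers quoted above, and the compound-growth formula is the standard one.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Analyst forecast quoted above: ~$110M (2015) growing to ~$960M (2020). */
        double start = 110e6, end = 960e6;
        int years = 5;

        /* Compound annual growth rate: (end/start)^(1/years) - 1 */
        double cagr = pow(end / start, 1.0 / years) - 1.0;

        printf("Implied growth rate: %.0f%% per year\n", cagr * 100.0);           /* ~54  */
        printf("Incremental revenue: $%.0fM over five years\n", (end - start) / 1e6); /* 850 */
        return 0;
    }

In other words, the forecast implies growth on the order of 50-plus percent per year, which is why even small, early-stage vendors are attracting larger partners and investors.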

With healthy growth prospects ahead, we are starting to see larger players, such as Fujitsu, enter the market. In addition, the HPC system vendors are all OEMing liquid cooling technology to solve big-system cooling issues in the datacenter. With the huge increase in data being generated, artificial intelligence and other applications are needed to mine that data; consequently, more and more server power is required, and new, innovative cooling approaches are needed, making liquid cooling a practical and feasible solution.

As a side note, more and more government RFPs are asking for liquid cooling solutions. Solutions such as the one from Fujitsu can make the crossover from HPC to commercial datacenter a reality.

Could 2017 be the breakout year for liquid cooling, moving it from the innovator stage to early adopters?

The Supercomputing Conference is frequently a window into the future. At SC16, there were over a dozen companies demonstrating server liquid cooling solutions, with technologies ranging from Direct-to-Chip to liquid immersion cooling, where servers and storage are fully immersed in dielectric fluid.

Today the majority of providers are early-stage or startup companies, with one notable exception: Fujitsu. The global IT powerhouse brought over thirty years of liquid cooling experience and demonstrated an immersion cooling solution with Intel-based servers, storage and network switches fully immersed in Fluorinert.

We will see liquid cooling technology move from the confines of high-end supercomputers to a solid niche in the enterprise datacenter for workloads such as big data analytics, AI and high-frequency trading.


TACC’s Rustler and XSEDE ECSS Support Assist With Analyzing Data for Transportation Systems

Mon, 02/13/2017 - 06:40

Feb. 13 — In the next 10 years, you are going to see some form of autonomous or connected vehicle on the streets. Natalia Ruiz-Juri, a research associate with The University of Texas at Austin’s Center for Transportation Research (CTR), is fairly certain of this. She is one of many researchers at CTR and The University of Texas at Austin (UT Austin) who are studying the wide range of technical, social and policy aspects of connected and autonomous vehicle (CAV) technologies.

Fully autonomous vehicles or driverless cars are capable of sensing their environment and navigating without human input. They can detect surroundings using a variety of techniques such as radar, lidar, GPS, odometry, and computer vision. Similarly, connected vehicles (CVs) are vehicles that can exchange messages containing location and other safety-related information with other vehicles, and with devices affixed to roadside infrastructure.

CVs share information in the form of Basic Safety Messages (BSMs) with other vehicles and the infrastructure; these include vehicle position, speed and braking status. Such real-time feedback and information exchange between vehicles is expected to greatly enhance safety, and it opens the door to several possibilities in traffic management.

For example, vehicles could talk to other vehicles that are much further ahead and get warned about congestion or dangerous conditions, thereby allowing a driver to make strategic decisions and take a different path.

Additionally, vehicles could also talk to infrastructure, such as an intersection light, which might be capable of tracking the number of vehicles passing through and potentially adjusting its signal timing plan accordingly. The advent of CVs therefore holds huge promise for improving traffic management and the overall utilization of transportation infrastructure, particularly if vehicle connectivity is considered along with automation.
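
To make the idea concrete, here is a minimal sketch in C of how an application might consume such messages. The structure, threshold values and function names are illustrative only; they loosely mirror the BSM contents described above (position, speed, braking status) and are not the actual SAE J2735 message format used in real CV deployments.

    #include <stdio.h>
    #include <stdbool.h>

    /* Simplified stand-in for a Basic Safety Message: real BSMs carry many
     * more fields, but the core idea is position plus kinematic state. */
    typedef struct {
        double latitude;
        double longitude;
        double speed_mps;      /* metres per second */
        bool   brakes_applied;
    } BasicSafetyMessage;

    /* Toy rule: if most vehicles ahead are braking or crawling, warn the
     * driver so they can take a different path (the scenario described above). */
    bool congestion_ahead(const BasicSafetyMessage msgs[], int n) {
        int slow_or_braking = 0;
        for (int i = 0; i < n; i++) {
            if (msgs[i].brakes_applied || msgs[i].speed_mps < 5.0)
                slow_or_braking++;
        }
        return n > 0 && slow_or_braking * 2 > n;   /* more than half */
    }

    int main(void) {
        BasicSafetyMessage downstream[] = {
            {30.2672, -97.7431,  2.0, true },
            {30.2680, -97.7440,  4.5, false},
            {30.2691, -97.7452, 25.0, false},
        };
        if (congestion_ahead(downstream, 3))
            printf("Warning: congestion reported ahead, consider rerouting.\n");
        return 0;
    }

The real research challenge, as the next sections explain, is less the per-message logic than the sheer volume and complexity of the data these messages generate.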

While the basic goal of CVs, in particular, is safety (experts hypothesize up to 80 percent fewer accidents in the future), the data generated by CVs has enormous potential to support transportation planning and operations.

The Big Data Problem

At this point researchers are still exploring diverse datasets. A number of connected vehicle test beds and autonomous vehicle test sites have been planned, or are already in place. Texas is part of one of the 10 U.S. Department of Transportation-designated autonomous vehicle proving grounds, and research sponsored by other agencies, such as TxDOT and the North Central Texas Council of Governments, is also happening at UT Austin.

“The volume and complexity of CV data are tremendous and present a big data challenge for the transportation research community,” Ruiz-Juri said. While there is uncertainty in the characteristics of the data that will eventually be available, the ability to efficiently explore existing datasets is paramount.

Ruiz-Juri and her colleagues, including Chandra Bhat, James Kuhr and Jackson Archer, were interested in exploring the most comprehensive data set released to date — the Safety Pilot Model Deployment (SPMD) data, produced by a study conducted by The University of Michigan Transportation Research Institute and the National Highway Traffic Safety Administration.

The entire article can be found here.

Source: Faith Singer-Villalobos, TACC


China Marine National Lab Collaborates with Inspur to Move from Petascale to Exascale

Mon, 02/13/2017 - 01:05

China’s fastest supercomputer in the marine field was officially launched on December 15, with computing performance reaching 1 petaflops. The computer was designed and built by Inspur for the Qingdao National Laboratory for Marine Science and Technology (QNLM). The lab will work closely with Inspur to build an exascale system (8x faster than the world’s current #1 supercomputer, Sunway TaihuLight). The supercomputer will be used for marine research and development projects such as Transparent Ocean, Blue Life, Deep Sea at the Pole and Ocean Equipment.

Qingdao National Laboratory for Marine Science and Technology

Wu Lixin, an academician of the Chinese Academy of Sciences and the director of QNLM, expects the exascale supercomputer to help build a world-class “Global Ocean Simulator” at sub-kilometer resolution. This will enable simulation of the Earth’s evolution over the past 20,000 years, as well as ocean weather forecasting at the highest possible accuracy. Wang Endong, an academician of the Chinese Academy of Engineering and Inspur’s Chief Scientist, views marine science as one significant application of exascale supercomputing. He anticipates that, with the help of exascale machines, QNLM can become a world leader in the area of marine computing.

With the advancement of ocean observation technologies like satellite remote sensing, the amount of marine data has leapt from 60PB to over 350PB. Powerful supercomputers are necessary for data processing and simulations in order to support marine research. Countries including the USA, UK, and Russia have long invested in marine research and have developed marine supercomputers and applications. As one of China’s achievements in marine computing, a highly effective, ultra-high-resolution global surface wave numerical simulation developed by the First Institute of Oceanography was nominated for the Gordon Bell Prize. QNLM’s petascale system, along with its exascale plan, is expected to boost China’s research and applications in the marine field.


Cray Posts Best-Ever Quarter, Visibility Still Limited

Fri, 02/10/2017 - 17:02

On its Wednesday earnings call, Cray announced the largest revenue quarter in the company’s history and the second-highest revenue year. The Seattle-based supercomputer maker recorded $346.6 million in revenue for the fourth quarter of 2016, a 30 percent year-over-year improvement. For the full year, Cray booked $629.8 million in total revenue, a shortfall of $94.9 million compared with 2015, a banner year for the company. Net income for 2016 was $10.6 million compared with $27.5 million in 2015. This marks the seventh consecutive year of profitability for Cray.

As we noted in August, wide year-to-year revenue swings are par for the course in the supercomputing space owing to uneven procurement cycles, but there are some specific challenges that the company has had to contend with this year. During its 2016 second quarter earnings call, Cray downsized the year’s revenue forecast by $175 million, citing damage from an electrical event at a Chippewa Falls factory, processor delays and a market slowdown.

The greatest short-term fiscal impact was to the company’s third quarter projection. The quarter’s take was $77.5 million, compared to $191.4 million in revenue for the third quarter of 2015. In August, Cray said that missed earnings for Q3 would move to Q4 or into 2017, and indeed the shift amplified its fourth quarter numbers.

While the July 2016 smoke event, which damaged several systems undergoing pre-shipment testing, impacted Cray’s balance sheet in the short term, the company was fully insured to recoup the losses. More problematic for Cray is market sluggishness and uncertainty, resulting in continued limited visibility.

“While our market can be lumpy, we are continuing to see very slow conditions in supercomputing, as we discussed on our last earnings call, with fewer opportunities, both in total numbers and dollars. In fact, we believe the high-end of the market which we service, also known as our serviceable addressable market for the high-end, was down by more than 25 percent on a revenue basis in 2016 and down even more on a bookings basis. This clearly had a significant impact on our results for the year, despite continued strong win rates and industry-leading market share,” said CEO Peter Ungaro on the Feb. 8 earnings call.

“Further, the timing of a rebound in our market is uncertain, which has significantly reduced our forward-looking visibility. As a result, we are not able to provide a reasonable range of revenue expectations for the full year of 2017 at this point,” he continued.

Cray had several big installations in 2016, with XC40 systems going to Los Alamos National Laboratory, the National Energy Research Scientific Computing Center in California and Kyoto University in Japan. It also completed the installation of the Pascal-based XC50 system at the Swiss National Supercomputing Centre.

Ungaro said that Cray’s cluster business was shy of projections, but the fourth quarter saw notable installations for a U.S.-based aerospace manufacturer, the National Oceanic and Atmospheric Administration, Kyoto University, and a financial services firm. The CEO added that the fourth quarter was a strong one for data analytics and the company’s new Urika-GX platform. It installed two Urika-GX systems at the University of Stuttgart, which will support the aerospace and automotive industries.

Winning new business continues to be a primary goal for Cray. “While we’ve begun to see some positive signs in customer activity, it has continued to lag our expectations for the past few quarters, primarily driven by reduced bid activity across the board,” said Ungaro.

The company’s second goal is to expand into commercial and big data markets. Cray said that commercial customers have roughly doubled its addressable market, and the company expects the commercial sector to continue to be a significant growth driver going forward. Specifically, it sees growth opportunities in energy, manufacturing, financial services, and life sciences.

“We’re taking multiple steps in this area including broadening our product set to be more commercially acceptable, overlying sales and marketing plans to address commercial companies and solutions, and honing our service and support organization to align with commercial company expectations. We’ve done each of these things while also maintaining focus on our traditional, government and academic customers that are at the heart of what we do,” said Cray.

The overall tone struck by Cray as it heads into 2017 is one of cautious optimism.

“We have begun to see some positive signs in the market with new opportunities beginning to open up that were not there a few months ago. However, these opportunities are continuing to evolve slowly, and we haven’t seen this pace accelerate yet. This slowdown in the market is a main driver behind our lack of visibility for the year. We continue to believe that the market is going to rebound, especially at the high end,” said Ungaro.

Cray’s successful fourth quarter has helped to settle the market, with several investment firms moving their recommendations from “sell” to “hold” or “buy.” Going into Wednesday’s report, Cray’s shares were at a three-and-a-half-year low. By market close on Thursday, the price had jumped 19 percent. But the company still has some work to do to regain investor confidence. The stock has a 12-month low of $16.10 and a 12-month high of $43.79. It is currently trading around $22.


Tit for Tat? New $10B Fab Deal Announced for China

Fri, 02/10/2017 - 14:59

GlobalFoundries today announced a $10B project to build an advanced semiconductor manufacturing plant in the central Chinese city of Chengdu. The timing of the deal’s announcement, two days after Intel and President Trump jointly announced Intel’s plans to spend $7B on a U.S.-based fab, is interesting. The battle for preeminence in chip technology has been heating up in recent years, with China striving to become more technologically independent and a leader in computer technology.

An article in the New York Times today (“Plan for $10 Billion Chip Plant Shows China’s Growing Pull”) suggests the announcement is more evidence of the shift of semiconductor technology’s center of gravity toward China. The U.S., under President Obama, enacted trade restrictions on some high-end Intel processors. At least partly in response, China built the Sunway TaihuLight, currently atop the Top500 list, using home-grown components. President Trump further ruffled the waters with provocative statements discounting China’s long-held One China policy, in which Taiwan is seen as a part of China. Trump reversed himself on that issue this week.

GlobalFoundries is based in California. Notably, process details for the new fab were not spelled out, although it is not expected to be cutting edge.

According to the article, China will spend about $100 billion to bring chip factories and research facilities to China. “Almost all of the large semiconductor enterprises in the United States have received investment offers from Chinese state actors,” according to a report from the Mercator Institute for China Studies, a think tank based in Germany. The report added that China’s newest industrial policy, Made in China 2025, had named semiconductors as a crucial area to improve.

The Times also reported: “Jason Gorss, a GlobalFoundries spokesman, declined to provide financial details but said in an email that ‘industry analysts estimate that the total cost of an advanced semiconductor fab is on the order of $10B and this fab will be in that range.’ It is not clear how much investment is being provided by the company and how much by the Chengdu government.” The article, written by Paul Mozur, looks at the shifting landscape.

Link to New York Times article (Plan for $10 Billion Chip Plant Shows China’s Growing Pull): https://www.nytimes.com/2017/02/10/business/china-computer-chips-globalfoundries-investment.html


NVIDIA Reports Financial Results for Fourth Quarter and Fiscal 2017

Fri, 02/10/2017 - 07:54

SANTA CLARA, Calif., Feb. 10 — NVIDIA (NASDAQ: NVDA) has reported revenue for the fourth quarter ended January 29, 2017, of $2.17 billion, up 55 percent from $1.40 billion a year earlier, and up 8 percent from $2.00 billion in the previous quarter.

GAAP earnings per diluted share for the quarter were $0.99, up 183 percent from $0.35 a year ago and up 19 percent from $0.83 in the previous quarter. Non-GAAP earnings per diluted share were $1.13, up 117 percent from $0.52 a year earlier and up 20 percent from $0.94 in the previous quarter.

For fiscal 2017, revenue reached a record $6.91 billion, up 38 percent from $5.01 billion a year earlier. GAAP earnings per diluted share were $2.57, up 138 percent from $1.08 a year earlier. Non-GAAP earnings per diluted share were $3.06, up 83 percent from $1.67 a year earlier.
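
For readers who want to see how the year-over-year percentages above are derived from the reported figures, here is a trivial sketch; the dollar values are taken from the release, and the arithmetic is the standard percent-change calculation with rounding.

    #include <stdio.h>

    /* Percent change between two reported figures. */
    static double pct_change(double from, double to) {
        return (to - from) / from * 100.0;
    }

    int main(void) {
        printf("Q4 GAAP EPS growth, YoY:     %.0f%%\n", pct_change(0.35, 0.99)); /* ~183% */
        printf("Q4 non-GAAP EPS growth, YoY: %.0f%%\n", pct_change(0.52, 1.13)); /* ~117% */
        printf("FY2017 revenue growth:       %.0f%%\n", pct_change(5.01, 6.91)); /* ~38%  */
        return 0;
    }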

“We had a great finish to a record year, with continued strong growth across all our businesses,” said Jen-Hsun Huang, founder and chief executive officer of NVIDIA. “Our GPU computing platform is enjoying rapid adoption in artificial intelligence, cloud computing, gaming, and autonomous vehicles.‎

“Deep learning on NVIDIA GPUs, a breakthrough approach to AI, is helping to tackle challenges such as self-driving cars, early cancer detection and weather prediction. We can now see that ‎GPU-based deep learning will revolutionize major industries, from consumer internet and transportation to health care and manufacturing. The era of AI is upon us,” he said.

Capital Return

During fiscal 2017, NVIDIA paid $739 million in share repurchases and $261 million in cash dividends. As a result, the company returned an aggregate of $1.00 billion to shareholders in fiscal 2017.

For fiscal 2018, NVIDIA intends to return $1.25 billion to shareholders through ongoing quarterly cash dividends and share repurchases.

NVIDIA will pay its next quarterly cash dividend of $0.14 per share on March 17, 2017, to all shareholders of record on February 24, 2017.

NVIDIA’s outlook for the first quarter of fiscal 2018 is as follows:

  • Revenue is expected to be $1.90 billion, plus or minus two percent.
  • GAAP and non-GAAP gross margins are expected to be 59.5 percent and 59.7 percent, respectively, plus or minus 50 basis points.
  • GAAP operating expenses are expected to be approximately $603 million. Non-GAAP operating expenses are expected to be approximately $520 million.
  • GAAP other income and expense, net, is expected to be an expense of approximately $20 million, inclusive of additional charges from early conversions of convertible notes. Non-GAAP other income and expense, net, is expected to be an expense of approximately $4 million.
  • GAAP and non-GAAP tax rates for the first quarter of fiscal 2018 are both expected to be 17 percent, plus or minus one percent, excluding any discrete items.
  • Weighted average shares used in the GAAP and non-GAAP diluted EPS calculations are dependent on the weighted average stock price during the quarter.
  • Capital expenditures are expected to be approximately $50 million to $60 million.

Fourth Quarter Fiscal 2017 Highlights

During the fourth quarter, NVIDIA achieved progress in each of its four major platforms.

Gaming:

  • Introduced GeForce GTX 1050 and 1050 Ti mobile GPUs, which debuted in more than 30 gaming laptops at CES 2017.
  • Launched the new SHIELD TV, integrating Google Assistant for TV, SmartThings Hub technology and the NVIDIA SPOT AI mic.
  • Unveiled the GeForce NOW service, delivering an NVIDIA Pascal gaming PC, on demand, from the cloud to all computers.

Professional Visualization:

  • Launched NVIDIA’s new workstation-product lineup with Quadro GP100, enabling a new class of supercomputing workstations.
  • Introduced Quadro P5000, powering the first VR-ready mobile workstations from Dell and MSI.

Datacenter:

  • Collaborated with Microsoft to accelerate AI with a GPU-accelerated Microsoft Cognitive Toolkit available on the Microsoft Azure cloud and NVIDIA DGX-1.
  • Partnered with the National Cancer Institute and the U.S. Department of Energy to build CANDLE, an AI framework that will advance cancer research.
  • Unveiled the NVIDIA DGX SATURNV AI supercomputer, powered by 124 Pascal-powered DGX-1 server nodes, which is the world’s most efficient supercomputer.

Automotive:

  • Partnered with Audi, to put advanced AI cars on the road by 2020.
  • Partnered with Mercedes-Benz, to bring an NVIDIA AI-powered car to the market.
  • Partnered with Bosch, the world’s largest automotive supplier, to bring self-driving systems to production vehicles.
  • Partnered with Germany’s ZF, to create a self-driving system for cars, trucks and commercial vehicles based on the NVIDIA DRIVE PX 2 AI car computer.
  • Partnered with Europe’s HERE, to develop HERE HD Live Map into a real-time, high-definition mapping solution for autonomous vehicles.
  • Partnered with Japan’s ZENRIN, to develop a cloud-to-car HD map solution for self-driving cars.

CFO Commentary

Commentary on the quarter by Colette Kress, NVIDIA’s executive vice president and chief financial officer, is available at http://investor.nvidia.com/.

About NVIDIA

NVIDIA‘s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.” More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


NCSA Facilitates Performance Comparisons With China’s Top Supercomputer

Fri, 02/10/2017 - 07:42

Feb. 10 — China has topped supercomputer rankings on the international TOP500 list of fastest supercomputers for the past eight years. They have maintained this status with their newest supercomputer, Sunway TaihuLight, constructed entirely from Chinese processors.

While China’s hardware has “come into its own,” as Foreign Affairs wrote in August, no one can say objectively at present how fast this hardware can solve scientific problems compared to other leading systems around the world. This is because the computer is new, having made its debut in June 2016.

Researchers were able to use seed funding provided through the Global Initiative to Enhance @scale and Distributed Computing and Analysis Technologies (GECAT) project, administered by the National Center for Supercomputing Applications’ (NCSA) Blue Waters Project, to port and run codes on leading computers around the world. GECAT is funded by the National Science Foundation’s Science Across Virtual Institutes (SAVI) program, which focuses on fostering and strengthening interaction among scientists, engineers and educators around the globe. Shanghai Jiao Tong University and its NVIDIA Center of Excellence matched the NSF support for this seed project and helped enable the collaboration to have unprecedented full access to Sunway TaihuLight and its system experts.

It takes time to transfer, or “port,” scientific codes built to run on other supercomputer architectures, but an international, collaborative project has already started porting one major code used in plasma particle-in-cell simulations, GTC-P. The accomplishments made and the road toward completion were laid out in a recent paper that won “best application paper” at the HPC China 2016 Conference in October.

“While LINPACK is a well-established measure of supercomputing performance based on a linear algebra calculation, real world scientific application problems are really the only way to show how well a computer produces scientific discoveries,” said Bill Tang, lead co-author of the study and head of the Intel Parallel Computing Center at Princeton University. “Real @scale scientific applications are much more difficult to deploy than LINPACK for the purpose of comparing how different supercomputers perform, but it’s worth the effort.”

The GTC-P code chosen for porting to TaihuLight is a well-traveled code in supercomputing, in that it has already been ported to seven leading systems around the world—a process that ran from 2011 to 2014 when Tang served as the U.S. principal investigator for the G8 Research Council’s “Exascale Computing for Global Scale Issues” Project in Fusion Energy, or “NuFuSE.” It was an international high-powered computing collaboration between the US, UK, France, Germany, Japan and Russia.

A major challenge that the Shanghai Jiao Tong and Princeton collaborative team has already overcome is adapting the modern language (OpenACC-2) in which GTC-P was written to make it compatible with TaihuLight’s “homegrown” compiler, SWACC. An early result of the adaptation is that the new TaihuLight processors were found to be about three times faster than a standard CPU. Tang said the next step is to make the code work across a larger group of processors.

“If GTC-P can build on this promising start to engage a large fraction of the huge number of TaihuLight processors, we’ll be able to move forward to show objectively how this impressive, new, number-one-ranking supercomputer stacks up to the rest of the supercomputing world,” Tang said, adding that metrics like time to solution and associated energy to solution are key to the comparison.

“These are important metrics for policy makers engaged in deciding which kinds of architectures and associated hardware best merit significant investments,” Tang added.
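The arithmetic behind those two metrics is straightforward. Purely as an illustration (the machines and numbers below are hypothetical, not results from the GTC-P study), a few lines of Python show how time to solution and energy to solution would be tabulated for a cross-machine comparison:

```python
# Illustrative sketch only: hypothetical machines and numbers, not data from
# the GTC-P study. Energy to solution is average power multiplied by
# time to solution.
runs = {
    # machine name: (time to solution in seconds, average power draw in kW)
    "Machine A": (3600.0, 8000.0),
    "Machine B": (2400.0, 15000.0),
}

for machine, (time_s, power_kw) in runs.items():
    energy_kwh = power_kw * (time_s / 3600.0)  # kW * hours = kWh
    print(f"{machine}: time to solution = {time_s:,.0f} s, "
          f"energy to solution = {energy_kwh:,.0f} kWh")
```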

The seven leading supercomputers worldwide on which GTC-P runs well reflect diverse hardware investments. For example, NCSA’s Blue Waters has more memory bandwidth than other U.S. systems, while TaihuLight has clearly invested most heavily in powerful new processors.

As Tang said recently in a technical program presentation at the SC16 conference in Salt Lake City, improvements in the GTC-P code have for the first time enabled delivery of new scientific insights. These insights show complex electron dynamics at the scale of the upcoming ITER device, the largest fusion energy facility ever undertaken.

“In the process of producing these new findings, we focused on realistic cross-machine comparison metrics, time and energy to solution,” Tang said. “Moving into the future, it would be most interesting to be able to include TaihuLight in such studies.”

About the National Center for Supercomputing Applications (NCSA)

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

The post NCSA Facilitates Performance Comparisons With China’s Top Supercomputer appeared first on HPCwire.

HPC Cloud Startup Launches ‘App Store’ for HPC Workflows

Thu, 02/09/2017 - 12:15

“Civilization advances by extending the number of important operations which we can perform without thinking about them,” wrote mathematician Alfred North Whitehead in his 1911 text, “An Introduction to Mathematics.” The words presaged the rise of computing and automation that would characterize the 20th century, and they also serve nicely as the guiding principle of new University of Chicago startup Parallel Works.

Like Globus, a University of Chicago/Argonne project with shared roots, Parallel Works is helping to automate the mundane and time-consuming computer science tasks so that stakeholders from the scientific and engineering world can focus on their core activities. Parallel Works describes its eponymous solution as a multi-domain platform to build, deploy and manage scientific workflow applications, a sort of Google Play or Apple App Store model that significantly lowers the barrier to entry for HPC.

Initially, Parallel Works is targeting product design and engineering in manufacturing and the built environment.

“We want to help businesses up-level their R&D processes, their manufacturing processes, their sales and business development processes to really democratize this capability and make it usable and accessible to an entirely new group of people,” said Michela Wilde, co-founder, general manager and head of business development for Parallel Works.

Michael Wilde

Parallel Works is based on the open source Swift parallel scripting language, whose development company founder and CEO Michael Wilde helped guide at the University of Chicago and Argonne National Laboratory starting around 2005-2006.

Swift evolved out of an investigation into the idea of “virtual data,” identified by grid computing pioneer and Globus project founder Ian Foster as a potential way to automate the recipes by which large scientific collaborations do their computational science. If researchers could automate the recipes for creating new datasets, then big datasets wouldn’t have to be transported across the planet but could be re-derived on demand. The computational physics of that didn’t really pan out, said Michael Wilde, since it turned out the computation time itself would in many cases dominate, but that project was the genesis of Swift.

“Very early in the project we realized there were two huge benefits for that approach – automating science workflows and being able to record the provenance by which computational scientists came to scientific conclusions,” said the CEO.

These insights were the foundation of a new programming model, the Virtual Data Language, that expressed the steps of the scientific computational study. The language evolved into Swift, which drew its moniker from its developers’ group, the “Scientific Workflow Team.”

The Parallel Works platform’s primary compute interface for managing compute resources, workflow access and data files.

Over the next few years, with funding grants from the NSF, DOE and NIH, the Swift team, led by Wilde as PI, applied the techniques to a number of scientific problems in climate science, earth system science, cosmology, genomics, protein structure prediction, energy modeling, power grid modeling, and infrastructure modeling. There was also a lot of work in materials science, since Argonne is home to the Advanced Photon Source, one of the world’s major instruments for crystallography and the investigation of new energy materials.

In 2014, the Swift team began pursuing opportunities for Swift in the commercial realm. Michael Wilde found a collaborator in Matthew Shaxted, then a civil engineer with architecture and urban planning firm Skidmore Owings & Merrill (SOM). Shaxted had been independently applying Swift to many different modeling modalities that SOM uses in their daily business, including fluid dynamics and climate modeling, interior and exterior daylight and radiation modeling, and transportation modeling.

“That was kind of our crucible – our first encounter with industry – and it was kind of rich in that, who knew that an architecture company has all these different uses of HPC?” said Michael Wilde.

He and Shaxted shared a vision of a Swift-based platform that would enable people that do not have deep computer science expertise to use high performance computing and very sophisticated modeling and simulation workflow capability.

In 2015, Parallel Works was founded by Michael Wilde, Matthew Shaxted and Michela Wilde to target the HPC needs of the broad space of architecture and civil engineering, urban planning and related disciplines. Sharpening their business plan, the three founders immediately started to look for private investments.

A response curve resulting from a Parallel Works optimization study.

The first key funding soon followed. A $120,000 angel funding award from the University of Chicago Innovation Fund enabled the team to start onboarding customers. In early 2016, they took in a small amount of seed funding and were also awarded a phase one SBIR from the DOE of $150,000. They are currently working on closing a larger seed round and have also applied for a phase two DOE SBIR grant.

In October 2016, the Swift project was awarded $3 million in NSF funding to enhance Swift and engage scientific user communities for the next three years. The award is part of the NSF’s Software Infrastructure for Sustained Innovation (SI2) program. While the grant supports only open source development of a “Swift ecosystem” for the specific needs of scientific communities, Parallel Works and any other company can use Swift and contribute back to its open source code base, thus helping to ensure the technology’s sustainability for all users. Wilde feels that this is the kind of win-win technology transfer that many entrepreneurial incubation programs like the NSF I-Corps, the DOE Lab Corps, and DOE/Argonne’s new program “Chain Reaction Innovations” are helping to nurture.

The Parallel Works Platform

To understand the value of Parallel Works, you first have to understand its engine, the Swift parallel scripting language. Wilde explains that Swift’s main purpose is to orchestrate the parallel execution of application codes into workflow patterns: parameter sweeps, design-of-experiment studies, optimization studies, uncertainty quantification, and complex multi-stage processing pipelines for simulation, data analytics, simulation analytics, or a combination of these.

“Very often [in the scientific process] you would do for example multiple simulations, then you would analyze the results, then you may select some promising candidates from those designs that you studied and look at them with other tools,” said Michael Wilde. “What you get is this whole concept of the scientific and engineering workflow where you have to run anywhere from tens to tens of millions of invocations of higher level tools. Those tools are typically application programs, sometimes they could be a little finer grained, they could be function libraries, like machine learning libraries and things like that, that you need to knit together and orchestrate. So we sometimes call that a coordination language and that’s essentially what Swift is.”

Resulting visualization of a Parallel Works parameter sweep study. Each image is displayed in the HTML viewer and can be downloaded for further evaluation.

It’s important to note that the function of Swift here is not to take an existing application and make it run in parallel. What it does, said Wilde, is take existing applications that may themselves be either serial or parallel codes — written in a variety of programming models, such as OpenMP, MPI or CUDA — and it orchestrates the execution of those codes.
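To make that distinction concrete, the sketch below is not Swift itself; it is a minimal Python illustration of the coordination pattern Wilde describes, in which a hypothetical pre-existing "solver" executable (serial or internally parallel) is left untouched while a thin coordination layer launches many parameterized invocations of it and gathers the outputs.

```python
# Minimal sketch of the coordination pattern described above -- not Swift itself.
# "./solver" is a hypothetical existing application binary; internally it may be
# serial or use MPI/OpenMP/CUDA. The coordination layer only decides which runs
# to launch, with which parameters, and collects their outputs.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

os.makedirs("results", exist_ok=True)

def run_case(pressure, temperature):
    """Launch one invocation of the existing application with its own parameters."""
    out = f"results/p{pressure}_t{temperature}.dat"
    subprocess.run(
        ["./solver", f"--pressure={pressure}", f"--temperature={temperature}", f"--out={out}"],
        check=True,
    )
    return out

# A parameter sweep: every (pressure, temperature) pair becomes an independent task.
cases = [(p, t) for p in (1.0, 2.0, 4.0) for t in (300, 350, 400)]

with ThreadPoolExecutor(max_workers=4) as pool:
    outputs = list(pool.map(lambda case: run_case(*case), cases))

print(f"{len(outputs)} runs completed")
```

In Swift the equivalent sweep is expressed declaratively, and the runtime decides where and when each invocation executes, whether on a workstation, a cloud resource pool or an HPC cluster.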

Parallel Works embeds the Swift engine into a turnkey Web-based software as a service. “We provide supercomputing as a service,” said Wilde. “And that service can provide the big compute for applications that are intrinsically big compute in nature or for big data applications that are somewhat compute-intensive, where applying the computation processes to big datasets is complex and needs a coordination language like Swift.”

Only a year and a half out from their founding, Parallel Works is certainly an early stage startup, but they are open for business. They have several customers already and are actively taking on new ones. “We have the bandwidth and the marketing strategy to pursue any and all leads,” said Michael Wilde. In terms of geographical targets, Wilde said that they will work with customers globally and already have two customers based in the UK that serve global markets.

The CEO believes that despite the many cloud success stories, there’s a scarcity of toolkits that make the cloud really easy to use while keeping the generality of the solutions. “We think we fit really well into that important intersection, that sweet spot,” he said. “Because when you use Parallel Works to get to the cloud, A) a huge number of solutions are right there and ready to go, and B) those solutions are crafted in a very high-level programming model so they’re very readily adaptable without having to touch the messy parts of the cloud and C) when you do want to get into those messy parts, you can go deeper. So in other words, simple things are simple, but complex things are possible and more productive than they were ever before.”

A notable urban design and engineering firm has been a collaborator and early customer. Prior to working with Parallel Works, they were using the cloud to run their modeling studies. Said Michela Wilde, “Basically what they had to do was start up one instance on a cloud service, run one job, start up another instance, run a second job, start up a third instance, run a third job. And each step of the process, because their workload is a multistage very sophisticated complex process, they had to manually go in and take the work on the instance, reconfigure it, get it set for the next step of the process. That would have to happen across each of the different instances that this job was running on, until finally the results would all come back. With Parallel Works, they were able to have Swift orchestrate that entire process start to finish automatically across all of the different compute processes that they wanted to use.”
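A minimal sketch of what that kind of automation looks like in general terms appears below. The stage names and commands are hypothetical placeholders, not the firm’s actual workflow; the point is that each stage consumes the previous stage’s outputs with no manual hand-off per instance.

```python
# Hypothetical multi-stage pipeline: stage names and commands are placeholders,
# not the firm's actual workflow. Each stage consumes the previous stage's
# outputs automatically instead of being reconfigured by hand on each instance.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def stage(cmd, output):
    """Run one stage command and return the path of the file it produced."""
    subprocess.run(cmd + ["--out", output], check=True)
    return output

scenarios = ["north_facade", "south_facade", "roof"]

with ThreadPoolExecutor() as pool:
    # Stage 1: independent preprocessing runs, executed in parallel.
    meshes = list(pool.map(
        lambda s: stage(["./preprocess", s], f"{s}.mesh"), scenarios))
    # Stage 2: one simulation per mesh, again in parallel, fed by stage 1.
    fields = list(pool.map(
        lambda m: stage(["./simulate", m], m.replace(".mesh", ".field")), meshes))

# Stage 3: a single fan-in step that aggregates every stage-2 result.
stage(["./aggregate", *fields], "report.html")
print("pipeline complete -> report.html")
```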

Parallel Works is entering a competitive marketplace that includes companies such as Cycle Computing, Rescale and UberCloud, but Michael Wilde likes to characterize its number one competitor as Ad hoc Inc.

“For example,” said the CEO, “this design and engineering firm had a pile of shell and Python code that was sort of hacked together by very smart scientists but not necessarily professional programmers. They were able to get the job done because most scientists do indeed know how to program, but what they had was not general, not adaptable, not robust, not resilient. It was basically a pile of spaghetti code.

“What they have now is a nice structured code base where their science code is encapsulated within the Swift code and now they can go back up to the Swift layer and play all sorts of what-if studies using Parallel Works to specify higher-level workflows around their core workflow to look at different scenarios, different geographies, different topologies of buildings, in those spaces, explore different solutions and they can do that in parallel where they couldn’t do that before.”

Adding this parallelization means significantly quicker time to results.

“Before, they would get a job from a client and their process would take about two weeks start to finish to run the jobs,” said Michela Wilde. “So even though they were running in the cloud, it would take a decent amount of time to complete. And now they can basically use Parallel Works, go directly into their client’s office, and run these workflows. They take a couple of hours and get the same results that used to take several weeks.”

Parallel Works currently runs customer workloads predominantly on Amazon Web Services, but also has the ability to deploy jobs on the Ohio Supercomputer Center’s HPC resources. Their longer term plan entails connecting to additional computing sites, such as Microsoft Azure, Google Compute Engine, OpenStack clouds and other HPC center resources. The company execs say that thanks to the underlying tech, creating the necessary drivers to connect to these infrastructures will be easy, but they are waiting to get a better sense of their customer needs and demand first.

They’re also working on building up an app developer community.

Said Michael Wilde, “We envision tens then hundreds then thousands of workflow solution creators sitting on top of a market of tens and hundreds of application tools, things like the CFD solvers, and the bioinformatics tools and the molecular dynamics materials science codes.”

Currently, Parallel Works has been developing solutions directly to seed the marketplace and to get immediate customers. They also have customers, including Klimaat Consulting and Innovation, a Canadian engineering consulting firm, that are developing their own solutions. “We didn’t write Klimaat’s solution,” said Wilde. “They wrote it on top of our platform and so we have other companies that are doing the same thing now of creating solutions, some of them for direct internal use, some of them for marketing to their audience as a workflow solution.”

Swift at Exascale

Out of the gate, Parallel Works is targeting the embarrassingly parallel coarse-grained workloads in the design and manufacturing space, but Swift has the potential to power finer-grained computing applications.

Under DOE ASCR funding for an exascale-focused project called X-Stack, the Swift team studied whether they could extend the Swift “many task” programming model to extreme scale and exascale applications, and whether it could actually program fine-grained applications. The project resulted in Swift/T (“T” after its “Turbine” workflow engine), explained Michael Wilde.

“Swift/T runs over MPI on supercomputers and extends Swift’s ability to run over extremely high node-count and core-count systems, handle very high task rates, and to also coordinate in-memory tasks and data objects (in addition to application programs that pass data via files),” he shared.

In 2014, the Swift team published a Supercomputing paper that documented a 500,000 core workflow run with 100-200 microsecond tasks on the Blue Waters Cray supercomputer, achieving 1.5 billion tasks per second with high-level for-loops.

“So when it really gets down to that kind of extreme scale we can really go there,” said Wilde. “Even with our portable Java version of Swift, we are able to pump out 500-600 tasks per second to a resource pool, which goes well beyond what any kind of cluster scheduler can do today.”
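Swift/T compiles its dataflow scripts down to an MPI-based runtime, so the many-task pattern itself will be familiar from MPI programming. Purely as an illustration of the idea (this is mpi4py, not Swift/T’s own syntax or API), a high-level parallel loop farmed out over MPI worker ranks might look like:

```python
# Illustration of the many-task-over-MPI idea using mpi4py -- not Swift/T itself.
# Launch with something like:  mpiexec -n 64 python -m mpi4py.futures many_task.py
from mpi4py.futures import MPIPoolExecutor

def tiny_task(i):
    # Stand-in for a short in-memory task (Swift/T targets tasks down to the
    # microsecond-to-millisecond range).
    return (i * i) % 97

if __name__ == "__main__":
    with MPIPoolExecutor() as pool:
        # A high-level loop expressed as a parallel map over MPI worker ranks.
        results = list(pool.map(tiny_task, range(100_000)))
    print("tasks completed:", len(results))
```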

The post HPC Cloud Startup Launches ‘App Store’ for HPC Workflows appeared first on HPCwire.

Bright Computing Announces Alliance With Curtiss-Wright

Thu, 02/09/2017 - 07:05

SAN JOSE, Calif., Feb. 9 — Bright Computing, a global leader in cluster and cloud infrastructure automation software, today announced a strategic partnership with Curtiss-Wright’s Defense Solutions division.

Curtiss-Wright Defense Solutions is an industry-leading supplier of highly engineered commercial off-the-shelf (COTS) module and system-level products designed for deployment in the harsh environments typical of aerospace, defense and industrial applications.

The relationship between the two companies began in 2015, when Curtiss-Wright launched its OpenHPEC Initiative to take proven development tools from the commercial HPC market and incorporate them into the design of highly scalable, supercomputer-class high performance embedded computing (HPEC) system solutions for the Aerospace & Defense industry. Under the initiative, Curtiss-Wright introduced the OpenHPEC Accelerator Suite, which includes Bright Cluster Manager and other best-in-class software development tools. The OpenHPEC Accelerator Suite includes an industry-leading cluster manager, debugger, and profiler from the HPC domain. It supports 40 Gigabit Ethernet, InfiniBand, and PCIe fabrics, as well as multiple versions of MPI for communications. Vector libraries offer optimized single- and multi-thread functions, for both single and double precision, for demanding, math-intensive applications.

Since 2015, Bright Computing and Curtiss-Wright have collaborated to provide easy-to-deploy and robust HPC solutions, removing obstacles for developers building complex supercomputers in challenging environments.

Today, Bright Computing and Curtiss-Wright share multiple customers that are leading prime contractors in the Aerospace and Defense industry. Together, Bright and Curtiss-Wright solve challenges such as using COTS software instead of custom software to reduce costs, decreasing time to market, and improving performance.

Dan Kuczkowski, SVP of Worldwide Sales at Bright Computing, commented, “We’re delighted to formalize our collaboration with Curtiss-Wright. The Aerospace and Defense industry has very high demands when it comes to HPC, so it’s great to be at the cutting edge with Curtiss-Wright, providing groundbreaking solutions for the latest supercomputing challenges.”

“We are very excited to be working closely with Bright Computing to bring its supercomputing software tools to the embedded Aerospace & Defense market as part of our OpenHPEC Accelerator Suite software development toolset,” said Lynn Bamford, Senior Vice President and General Manager, Defense Solutions division. “Together, we are providing HPEC system integrators with proven and robust development tools from the Commercial HPC market to speed and ease the design of COTS-based highly scalable supercomputer-class solutions.”

Source: Bright Computing

The post Bright Computing Announces Alliance With Curtiss-Wright appeared first on HPCwire.

Cray Reports 2016 Full Year and Fourth Quarter Financial Results

Thu, 02/09/2017 - 06:59

SEATTLE, Wash., Feb. 9 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) has announced financial results for the year and fourth quarter ended December 31, 2016.

For 2016, Cray reported total revenue of $629.8 million, which compares with $724.7 million for 2015. Net income for 2016 was $10.6 million, or $0.26 per diluted share, compared to $27.5 million, or $0.68 per diluted share for 2015. Non-GAAP net income, which adjusts for selected unusual and non-cash items, was $19.9 million, or $0.49 per diluted share for 2016, compared to $53.0 million, or $1.30 per diluted share for 2015.

Revenue for the fourth quarter of 2016 was $346.6 million, which compares with $267.5 million in the fourth quarter of 2015. Net income for the fourth quarter of 2016 was $51.8 million, or $1.27 per diluted share, compared to net income of $20.3 million, or $0.50 per diluted share in the fourth quarter of 2015. Non-GAAP net income was $56.3 million, or $1.38 per diluted share for the fourth quarter of 2016, compared to non-GAAP net income of $32.2 million, or $0.79 per diluted share for the same period of 2015.

Overall gross profit margin on a GAAP and non-GAAP basis for 2016 was 35%. For 2015, GAAP and non-GAAP gross profit margin was 31% and 32%, respectively.

Operating expenses for 2016 were $211.1 million, compared to $184.7 million for 2015. Non-GAAP operating expenses for 2016 were $199.7 million, compared to $173.3 million for 2015.

As of December 31, 2016, cash and restricted cash totaled $225 million. Working capital at the end of the fourth quarter was $392 million, compared to $415 million at December 31, 2015.

“While 2016 wasn’t nearly as strong as we originally targeted, we finished the year well, with the largest revenue quarter in our history and solid cash balances, as well as delivering profitability for the year,” said Peter Ungaro, president and CEO of Cray. “We completed numerous large system installations around the world in the fourth quarter, providing our customers with the most scalable, highest performance supercomputing, storage and analytics solutions in the market. We continue to lead the industry at the high-end and, despite an ongoing downturn in the market, we’re in excellent position to continue to deliver for our customers and drive long-term growth.”

Outlook

Due to current market conditions, the Company has limited visibility into 2017. While a wide range of results remains possible, the Company continues to believe it will be difficult to grow revenue compared to 2016. Revenue in the first quarter of 2017 is expected to be approximately $55 million. GAAP and non-GAAP gross margins for the year are expected to be in the low-mid 30% range. Non-GAAP operating expenses for 2017 are expected to be roughly flat with 2016 levels. For 2017, GAAP operating expenses are anticipated to be about $12 million higher than non-GAAP operating expenses, and GAAP gross profit is expected to be about $1 million lower than non-GAAP gross profit.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

  • In November, Cray launched its latest generation supercomputer, the Cray XC50, the company’s fastest supercomputer ever with a peak performance of one petaflop in a single cabinet. Among the many enhancements of the XC50, this new system adds support for the Nvidia Tesla P100 GPU accelerator as well as for next-generation Intel Xeon and Intel Xeon Phi processors.
  • In January, Cray appointed Stathis Papaefstathiou to the position of senior vice president of R&D. With more than 30 years of high tech experience, Papaefstathiou has held senior-level positions at Aerohive Networks, F5 Networks, and Microsoft.
  • In December, Cray announced the results of a deep learning collaboration between Cray, Microsoft, and the Swiss National Supercomputing Centre (CSCS) that expands the horizons of running deep learning algorithms at scale using the power of Cray supercomputers. Cray has validated and made available several deep learning toolkits on Cray XC and Cray CS-Storm systems to simplify the transition to running deep learning workloads at scale.
  • In November, Cray highlighted recent momentum for the Urika-GX agile analytics platform and previewed ongoing software updates to the system. New customers include a manufacturing collaborative and a customer engagement marketing solution provider, both looking to harness the Urika-GX to deliver enhanced value to their customers.
  • In November, Cray announced it had joined iEnergy, the rapidly growing exploration and production industry community brokered by Halliburton Landmark. iEnergy community members can now choose to run Landmark SeisSpace Seismic Processing Software on a Cray CS400 cluster supercomputer.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

The post Cray Reports 2016 Full Year and Fourth Quarter Financial Results appeared first on HPCwire.

Top Software Developers Tapped to Lead iRODS Consortium

Thu, 02/09/2017 - 06:46

CHAPEL HILL, N.C., Feb. 9 — Two software development experts with nearly 30 years of combined experience have officially taken the lead roles in the iRODS Consortium, the membership-based foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS).

Jason Coposky, who had served as interim executive director of the consortium since last May, was officially named as the permanent executive director of the consortium early in 2017. In that role, he leads efforts to implement iRODS software development strategies, builds and nurtures relationships with existing and potential consortium members, and serves as the chief spokesperson on iRODS development and strategies to the worldwide iRODS community.

Coposky previously served as chief technologist of the iRODS Consortium. He has more than 20 years of industry experience in virtual reality, electronic design automation (EDA), visualization, and data management. He came to RENCI at UNC-Chapel Hill in 2006 as the first member of the institute’s scientific and information visualization team, where he created novel large-format display systems and multi-touch display systems. He became a member of RENCI’s iRODS development team in 2008 and was named chief technologist for the iRODS Consortium when the consortium formed in 2013.

Working closely with Coposky will be Terrell Russell, an iRODS software engineer since 2008, who now takes the role of iRODS Consortium chief technologist. Russell will lead the iRODS development team based at RENCI and will direct the full software development lifecycle. His duties will also include code review, package management, documentation, and high level architecture design.  Russell holds degrees in computer engineering, computer networking, and service organizations from North Carolina State University and a PhD in information science from UNC-Chapel Hill.

Both Coposky and Russell have been essential members of the team that has spent the last few years transforming iRODS from research software, developed over two decades, into production-level software now at version 4.2. Under their leadership, the iRODS team implemented a development infrastructure that supports exhaustive testing on supported platforms, and a plugin architecture that supports microservices, storage systems, authentication, networking, databases, rule engines, and an extensible API.

“With data becoming the currency of the knowledge economy, now is an exciting time to be involved with developing and sustaining a world-class data management platform like iRODS,” said Coposky. “Our consortium membership is growing, and our increasing ability to integrate with commonly used hardware and software is translating into new users and an even more robust product.”

“iRODS has come a long way in the last few years,” added Russell. “We have a growing team and strong sense of purpose.  We’re helping solve hard classes of problems in the open.”

Mark your calendars for the iRODS User Group Meeting

Consortium members, partners, iRODS users, and those interested in learning more about iRODS should plan to attend the 2017 iRODS User Group Meeting June 14 and 15 in Utrecht, the Netherlands. The meeting will feature use case presentations, live demonstrations, and open discussions about requested iRODS features. Participants representing academic, government, and commercial institutions worldwide are expected to attend. A one-day iRODS workshop will take place on June 13. To receive updates on the User Group Meeting and other iRODS events, click here. To learn more about iRODS, visit the iRODS website.

About the iRODS Consortium

The iRODS Consortium is a membership organization that leads development and support of the Integrated Rule-Oriented Data System (iRODS), free open source software for data discovery, workflow automation, secure collaboration, and data virtualization. The iRODS Consortium provides a production-ready iRODS distribution and iRODS professional integration services, training, and support. The world’s top researchers in life sciences, geosciences, and information management use iRODS to control their data. Learn more at irods.org.

The iRODS Consortium is administered by founding member RENCI, a research institute for applications of cyberinfrastructure at the University of North Carolina at Chapel Hill. For more information about RENCI, please visit www.renci.org.

Source: RENCI

The post Top Software Developers Tapped to Lead iRODS Consortium appeared first on HPCwire.

Intel and Trump Announce $7B for Fab 42 Targeting 7nm

Wed, 02/08/2017 - 13:17

In what may be an attempt by President Trump to reset his turbulent relationship with the high tech industry, he and Intel CEO Brian Krzanich today announced plans to invest more than $7 billion to complete Fab 42. When completed, Fab 42 will be the most advanced semiconductor factory in the world, according to Intel. The new fab is targeting use of a 7 nanometer (nm) manufacturing process.

A report in the Wall Street Journal today noted, “It wasn’t immediately clear what role the Trump administration might be playing in facilitating the plant’s opening. Mr. Trump and his aides talk often about reducing the cost of doing business in the U.S. by easing regulatory and other burdens. The official said Wednesday Intel officials are expected to emphasize the administration’s tax and regulatory overhaul agenda.”

There’s been much discussion around 10nm and 7nm process technology and Intel’s plans to use them. Other major semiconductor manufacturers have forged ahead; in the summer of 2015, IBM announced “the world’s first 7nm node test chips with functioning transistors, accomplished via a partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE).” IBM noted that production 7nm chips were at least two years away, but that it had delivered on its promise to develop the process node. (See HPCwire article, IBM First to 7nm Process with Working Transistors)

“Intel is a global manufacturing and technology company, yet we think of ourselves as a leading American innovation enterprise,” said Krzanich in the release announcing the deal. “America has a unique combination of talent, a vibrant business environment and access to global markets, which has enabled U.S. companies like Intel to foster economic growth and innovation. Our factories support jobs — high-wage, high-tech manufacturing jobs that are the economic engines of the states where they are located.”

Krzanich discussed the investment in an email to Intel employees.

Intel is the largest U.S. high-technology capital expenditure investor ($5.1 billion in the U.S. in 2015) and the third largest investor in global R&D ($12.1 billion in 2015). The majority of Intel’s manufacturing and R&D is in the United States. As a result, Intel employs more than 50,000 people in the United States, while directly supporting almost half a million other U.S. jobs across a range of industries, including semiconductor tooling, software, logistics, channels, OEMs and other manufacturers that incorporate our products into theirs.

Intel says completion of Fab 42 in 3 to 4 years will directly create approximately 3,000 high-tech, high-wage Intel jobs for process engineers, equipment technicians, and facilities-support engineers and technicians who will work at the site. Combined with the indirect impact on businesses that will help support the factory’s operations, Fab 42 is expected to create more than 10,000 total long-term jobs in Arizona.

Much of the high tech industry had spoken out against Trump’s recent travel restriction on select countries. (See HPCwire article, Here’s What HPC Leaders Say about Trump Travel Ban)

Links:

Intel announcement (Intel Supports American Innovation with $7 Billion Investment in Next-Generation Semiconductor Factory in Arizona): https://newsroom.intel.com/news-releases/intel-supports-american-innovation-7-billion-investment-next-generation-semiconductor-factory-arizona/

WSJ article (Intel Corp. Announces $7 Billion Investment in Arizona Plant): https://www.wsj.com/articles/intel-corp-announces-7-billion-investment-in-arizona-plant-1486578589

Krzanich email: https://newsroom.intel.com/newsroom/wp-content/uploads/sites/11/2017/02/investing-in-the-future-of-moores-law.pdf

The post Intel and Trump Announce $7B for Fab 42 Targeting 7nm appeared first on HPCwire.

CoolIT Systems Issued U.S. Patent for Modular Heat-Transfer Solutions

Wed, 02/08/2017 - 10:06

Feb. 8 — CoolIT Systems, Inc. (CoolIT), the world leader in energy efficient liquid cooling solutions for HPC, Cloud and Enterprise markets, today announced that the United States Patent and Trademark Office has issued US Patent No. 9,496,200, covering modular heat-transfer solutions to cool an array of independent servers in rack-based data center installations.

This proprietary modular heat-transfer system consists of a rack, heat-transfer elements (heat exchanger), component heat-exchange modules with distributed pumps, a manifold module and a coolant heat-exchange module. CoolIT Systems has exclusive rights to the patented technology.

“U.S. Patent 9,496,200 protects the invention of utilizing a modular, building block approach for datacenter cooling with direct contact liquid cooling,” said Geoff Lyon CEO, CoolIT Systems. “CoolIT’s commitment to developing and patenting unique solutions provides our customers with the assured competitive advantage they are looking for. The 60 patent milestone adds confirmation to CoolIT’s leadership in developing innovative liquid cooling solutions for modern data centers.”

The latest patent comes just a month after CoolIT Systems was issued a related patent (U.S. Patent No. 9,453,691) that defines a fluid heat exchange system used to cool heat generating components of a computer with a fluid heat exchanger that splits a mass flow of coolant. This Split-Flow invention is another one of the company’s key performance differentiators. High levels of performance enable the effective cooling of server CPUs using very warm cooling liquid that eliminates the need for expensive chiller plants and bulky air conditioning units. The potential for reducing both the capital expense and on-going operating expense that direct contact liquid cooling provides is motivating adoption at an increasing rate.

The recently issued patents join CoolIT Systems’ growing intellectual property portfolio, which includes more than 60 issued patents and numerous patent applications, to which CoolIT Systems has exclusive rights in the U.S. and major international markets.

About CoolIT Systems

CoolIT Systems, Inc. is the world leader in energy efficient Direct Contact Liquid Cooling for the Data Center, Server and Desktop markets. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack, making it a truly flexible solution. For more information about CoolIT Systems and its technology, visit http://www.coolitsystems.com/.

Source: CoolIT Systems

The post CoolIT Systems Issued U.S. Patent for Modular Heat-Transfer Solutions appeared first on HPCwire.

UMass Rolls Out New GPU Cluster for Deep Learning

Wed, 02/08/2017 - 10:02

UMass today rolled out its new GPU cluster, Gypsum, aimed at deep learning. Like many institutions, UMass is seeking to attract Ph.D. students drawn to deep learning and artificial intelligence. At 400 GPUs, Gypsum is on the large side for academic GPU clusters, according to the university.

The new systems will be housed at the Massachusetts Green High Performance Computing Center in Holyoke, MA, and are the result of a five-year, $5 million grant to the campus from the Massachusetts Technology Collaborative made last year. The grant represents a one-third match to a $15 million gift supporting data science and cybersecurity research from the MassMutual Foundation of Springfield.

UMass GPU cluster

Gypsum is reported to have 400 GPUs from NVIDIA installed across 100 nodes. Of note, the M40 GPUs in the cluster are based on NVIDIA’s Maxwell architecture and feature high single-precision performance, which is useful for training deep learning networks. Erik Learned-Miller of the College of Information and Computer Sciences (CICS) says this is the first year of the grant, during which about $2 million has been spent on two clusters: “Gypsum” and a smaller cluster of traditional CPU machines dubbed “Swarm II.”

Andrew McCallum, professor and founder of the Center for Data Science at UMass Amherst, says, “This is a transformational expansion of opportunity and represents a whole new era for the center and our college. Access to multi-GPU clusters of this scale and speed strengthens our position as a destination for deep learning research and sets us apart among universities nationally.” UMass currently has research projects that apply deep learning techniques to computational ecology, face recognition, graphics, natural language processing and many other areas.

Link to release (UMass Amherst Boosts Deep Learning Research with Powerful New GPU Cluster): http://www.umass.edu/newsoffice/article/umass-amherst-boosts-deep-learning

The post UMass Rolls Out New GPU Cluster for Deep Learning appeared first on HPCwire.

Applications Being Accepted for International Summer School on HPC Challenges in Computational Sciences

Wed, 02/08/2017 - 08:15

Feb. 8 — Graduate students and postdoctoral scholars from institutions in Canada, Europe, Japan and the United States are invited to apply for the eighth International Summer School on HPC Challenges in Computational Sciences, to be held June 25 to 30, 2017, in Boulder, Colorado, United States of America.

Applications are due March 6, 2017. The summer school is sponsored by Compute/Calcul Canada, the Extreme Science and Engineering Discovery Environment (XSEDE) with funds from the U.S. National Science Foundation, the Partnership for Advanced Computing in Europe (PRACE) and the RIKEN Advanced Institute for Computational Science (RIKEN AICS).

Leading computational scientists and HPC technologists from the U.S., Europe, Japan and Canada will offer instruction on a variety of topics and also provide advanced mentoring. Topics include:

  • HPC challenges by discipline
  • HPC programming proficiencies
  • Performance analysis and profiling
  • Algorithmic approaches and numerical libraries
  • Data-intensive computing
  • Scientific visualization
  • Canadian, EU, Japanese and U.S. HPC-infrastructures

The expense-paid program will benefit scholars from Canadian, European, Japanese and U.S. institutions who use advanced computing in their research. The ideal candidate will have many of the following qualities; however, this list is not meant to be a “checklist” that applicants must meet in full:

  • Familiar with HPC, not necessarily an HPC expert, but rather a scholar who could benefit from including advanced computing tools and methods into their existing computational work
  • A graduate student with a strong research plan or a postdoctoral fellow in the early stages of their research efforts
  • Regular practice with parallel programming (i.e., student utilizes parallel programming generally on a monthly basis or more)
  • May have a science or engineering background, however, applicants from other disciplines are welcome provided their research activities include computational work

Students from groups underrepresented in computing (e.g., women, racial/ethnic minorities, and persons with disabilities) are highly encouraged to apply. If you have any questions regarding your eligibility or how this program may benefit you or your research group, please do not hesitate to contact the individual associated with your region below.

Interested students should apply by March 6, 2017. Meals and housing will be covered for the selected participants, and support for travel will also be provided.

Further information and application can be found at http://www.ihpcss.org/.

About XSEDE

The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. XSEDE accelerates scientific discovery by enhancing the productivity of researchers, engineers, and scholars by deepening and extending the use of XSEDE’s ecosystem of advanced digital services and by advancing and sustaining the XSEDE advanced digital infrastructure. XSEDE is a five-year, $110-million project and is supported by the National Science Foundation. For more information, see xsede.org.

About Compute Canada/Calcul Canada

Compute Canada, in partnership with regional organizations ACENET, Calcul Québec, Compute Ontario and WestGrid, provides state-of-the-art advanced research computing systems, storage and software solutions. We serve Canadian researchers and their collaborators in all academic sectors. Our world-class team of more than 200 experts employed by 37 partner universities and research institutions across the country provide direct support to research teams. Compute Canada receives funding through The Canada Foundation for Innovation, while our provincial partners and academic institutions provide the required matching funds. Canada’s advanced research computing platform is currently undergoing an exciting refresh, with four new systems planned to be installed and become operational in 2017. For more information, see www.computecanada.ca.

About PRACE

The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 5 PRACE members (BSC representing Spain, CINECA representing Italy, CSCS representing Switzerland, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU’s Seventh Framework Programme (FP7/2007-2013) under grant agreement RI-312763 and from the EU’s Horizon 2020 Research and Innovation Programme (2014-2020) under grant agreements 653838 and 730913. For more information, see www.prace-ri.eu.

About RIKEN AICS

RIKEN is one of Japan’s largest research organizations with institutes and centers in locations throughout Japan. The Advanced Institute for Computational Science (AICS) strives to create an international center of excellence dedicated to generating world-leading results through the use of its world-class supercomputer “K computer.” It serves as the core of the “innovative High Performance Computing Infrastructure (HPCI)” project promoted by the Ministry of Education, Culture, Sports, Science and Technology. For more information, see www.aics.riken.jp/en/.

Source: NCSA

The post Applications Being Accepted for International Summer School on HPC Challenges in Computational Sciences appeared first on HPCwire.
