HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

PRACE Announces Mars Mania Mobile Outreach Game

Wed, 05/17/2017 - 09:06

May 17, 2017 — Members of Kajak Games, a cooperative founded by students of Kajaani University of Applied Sciences, have produced the Mars Mania mobile outreach game for PRACE. The game will be used at public events and science fairs around Europe to explain the power of supercomputers to younger audiences. It also highlights Vlasiator, a scientific innovation enabled by PRACE and the European Research Council.

The game developer group consisted of six students of Kajaani University of Applied Sciences: Jyri Honkakoski, Veikka Huttunen, Salla Isola, Mikko Juutinen and Juuso Pönkänen, with Juho Salminen as team leader. Their work started in February 2017, and the final product was handed over to PRACE on 25 April. The game will launch in May 2017.

The group received scientific support from Professor Minna Palmroth’s research group. The game is built around the popular idea of sending a probe to Mars and guiding it past obstacles and unpredictable space weather conditions. It also highlights a scientific breakthrough by Professor Palmroth’s Finnish research group: Vlasiator, the world’s most accurate space weather simulation model. Professor Palmroth will present her work at the PRACE Scientific and Industrial Conference 2017 (PRACEdays17), to be held in Barcelona from 16 to 18 May 2017.

“The game also includes a ‘real science corner’, where the scientific innovation of Vlasiator is explained by the research group. The Vlasiator project has also benefitted from PRACE and CSC supercomputing resources”, says Professor Palmroth.

“The game project has been a great collaboration effort. It has advanced smoothly to the goal within a very short production time. This has been possible with the help of talented students working for a multinational project with enthusiasm, energy and willingness to have an intensive and regular customer dialogue, which is always a key to success”, says Antti Tiiro, CEO of Kajak Games.

“A mobile game is a great opportunity to describe the space environment to a larger public”, says Professor Palmroth.

CSC – IT Center for Science has provided the Kajak Games group with game testing equipment, and the multicultural PRACE project group has provided regular customer feedback in each game development phase.

Kajak Games was founded in 2010 to help game development students in Kajaani publish and promote their games, and to offer students a great chance to gain experience in running a business. More information: info@kajakgames.com

CSC – IT Center for Science is a Finnish center of expertise in ICT providing services at an internationally high level of quality for research, education, culture, public administration and enterprises, to help them thrive and benefit society at large. www.csc.fi

Source: PRACE


Sandia Recognizes Top Female High School Students in Math, Science

Wed, 05/17/2017 - 09:03

LIVERMORE, Calif., May 17, 2017 – Female scholars from the junior classes of San Francisco Bay Area high schools recently gathered at Sandia’s California site for the 26th annual Sandia Math and Science Awards.

The Sandia Math and Science Awards program recognizes high-achieving young women for their accomplishments in STEM (science, technology, engineering and math) subjects and encourages their future studies by pairing them with Sandia National Laboratories mentors. Teachers from 19 northern California high schools in Livermore, Dublin, Pleasanton, Tracy, Lathrop, Manteca and Oakland nominated students they deemed outstanding in math and science.

In her keynote address, Heidi Ammerlahn, director of Homeland Security and Defense Systems, touched upon her academic and professional journey and the role Sandia plays in ensuring a peaceful world.

“At the beginning of my career, I knew I wanted to do something with math and computer science,” Ammerlahn said. “But I also wanted to be involved in public service and serving my country. Sandia has allowed me to do both.”

Ammerlahn also discussed a major theme that emerged in this year’s nominations — mentorship.

“You all aren’t just incredibly hard-working. You also went out of your way to motivate your peers and help others,” she said. “It says so much about you as human beings and future leaders.”

Kelsey Tresemer, an engineer with Sandia’s Advanced and Exploratory Systems group, shared her journey from a freshman theater major to nuclear engineer. She impressed upon the awardees not to be afraid to explore and change their minds.

Sandia Business Development Manager Annie Garcia, who led the Math and Science Awards planning committee for the first time, said she was proud to be part of the program.

“I was drawn to the Math and Science Awards because of its impact on young women during a pivotal time of their lives,” Garcia said. “We all need a little encouragement from time to time, so it is a pleasure to be a part of something that recognizes the achievements of the next generation of STEM leaders.”

The winners of the 2017 Sandia Math and Science Awards include:

Outstanding Achievement in Mathematics

  • April Chen, Amador Valley High School
  • Jailene Lopez, Castlemont High School
  • Christine Haggin, Dublin High School
  • Stephanie Plumb, East Union High School
  • Elena Zhang, Foothill High School
  • Gabriella Bond, Granada High School
  • Danielle Gallo, Lathrop High School
  • Ivy Tang, Livermore High School
  • Genna Vieira, Livermore Valley Charter Preparatory
  • Kaitlynn Funsch, Manteca High School
  • Jade Ou, Merrill F. West High School
  • Islah Zareef-Mustafa, MetWest High School
  • Jaqueline Hurtado, Millennium High School
  • Tiffany Ngo, Oakland High School
  • Lesly Carrillo Cazares, Sierra High School
  • Ivy Tu, Skyline High School
  • Gabrielle Arrieta, Tracy High School

Outstanding Achievement in Science

  • Makenzie Melby, Amador Valley High School
  • Sruthi Mukkamala, Dublin High School
  • Rashim Hakim, East Union High School
  • Peggi Li, Foothill High School
  • Meenakshi Singhal, Granada High School
  • Sara Hawk, John C. Kimball High School
  • Marissa Briseno, Lathrop High School
  • Melia Miller, Livermore High School
  • Ariel Kenfack, Livermore Valley Charter Preparatory
  • Surayya Sakhi, Manteca High School
  • Chaztine-Xiana Embucado, Merrill F. West High School
  • Jasmin Galvan, MetWest High School
  • Yvonne Ng, Millennium High School
  • Zayra Cornejo Ibette Rivera, Oakland Tech High School
  • Emily Cunial, Sierra High School
  • Helen Nguyen, Skyline High School
  • Kiana Soeung, Tracy High School

About Sandia National Laboratories

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia Lab


NSF Issues $60M RFP for “Towards a Leadership-Class” System

Tue, 05/16/2017 - 16:15

In case you missed it, the National Science Foundation issued the RFP for the next ‘Towards a Leadership-Class Computing Facility – Phase 1’ last week. It’s for $60 million and, among other things, specifies a system that delivers at least a two- to three-fold time-to-solution performance improvement over the Blue Waters system at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC).

The RFP is for the acquisition and deployment of an HPC system (Phase 1) with the option of a possible future upgrade to a leadership-class computing facility. As described by NSF, the Phase 1 system would be a robust, well-balanced, and forward-looking computational asset for a broad range of research topics and would “also serve as an evaluation platform for testing and demonstrating the feasibility of an upgrade to a leadership-class facility five years following deployment.”

Obviously the high stakes are likely to draw many responses. Bill Gropp, interim director of NCSA, told a local Urbana news outlet “we plan to win.” No doubt others do too. The fact that there’s a possible Phase 2 called out in the RFP suggests NSF is taking a slightly longer-term planning approach to the competition than is typical.

Listed here are the five characteristics being sought in the RFP:

  • A detailed acquisition plan for deploying a reliable and well-balanced HPC system with at least two- to three-fold time-to-solution performance improvement over the current state of the art, the University of Illinois at Urbana-Champaign’s (UIUC) Blue Waters system, for a broad range of existing and emerging computational and data intensive applications;
  • A thorough operations plan for the Phase 1 system to ensure that it will serve as an effective computational tool for the broad scientific and engineering community, and for the Nation at large;
  • A detailed three- to five-year project plan for scientific and technical evaluation of the Phase 1 system that will lead to an upgrade design of a leadership-class system, called the Phase 2 system, as well as the physical facility that will host it: the Phase 2 system is expected to have a ten-fold or more time-to-solution performance improvement over the Phase 1 system;
  • Clear and compelling science and engineering use cases, as well as detailed strategic project goals for a leadership-class computing facility; and
  • A persuasive articulation of educational and industry outreach, and the achievement of other broader societal impact goals, in the long-term strategic plan for the leadership-class computing facility.

The $60,000,000 awarded in FY 2018 will be used “to fund one award,” and at least “95% of the proposal amount” should be for the system acquisition cost. Up to $2,000,000 in additional funds “are anticipated to be available in FY 2019” for planning activities associated with the conceptual design phase for Phase 2 of the award.

Note that this solicitation requests proposals for the acquisition and operation of a Phase 1 system as well as a project plan for the design of a potential upgrade or replacement to a leadership-class computing facility at the end of the five-year deployment period, subject to the availability of funds. Support for subsequent preliminary design and final design phases for Phase 2 will be provided in separate funding actions.

Link to RFP: https://nsf.gov/pubs/2017/nsf17558/nsf17558.htm


Cray Offers Supercomputing as a Service, Targets Biotechs First

Tue, 05/16/2017 - 13:15

Leading supercomputer vendor Cray and datacenter/cloud provider the Markley Group today announced plans to jointly deliver Supercomputing as a Service. The initial offering provides access to Cray’s Urika GX platform, housed in Markley’s massive Boston datacenter, and is aimed at the many biotechs in the region. The partners say the service is unique, and they plan to address other verticals with a range of Cray products over time.

“We want to take a targeted approach and are going to be really thoughtful about what vertical is next or what type of infrastructure best solves the use case represented by that vertical and where the need or demand is,” said Fred Kohout, senior vice president of products and chief marketing officer, Cray. “We want to be customer-led here.” Certainly the Boston-Cambridge area is a mecca for large and small life sciences organizations in both industry and academia.

Cray Urika GX

Supercomputing as a service, say Cray and Markley, will make supercomputing available to many users who are unable to afford or support such resources themselves or who only need those resources sporadically. They also argue supercomputing delivers a significant performance advantage over traditional HPC clusters, in this case on the order of 5X for the genomics workloads evaluated so far.

“This is supercomputing. It sounds like a marketing term but it’s really different. We are not talking about 1000 Dell blades all in the same datacenter. We are talking about the Cray Aries interconnect and optimizations such as the Cray Graph Engine (CGE) that are qualitatively different than just having lots of CPUs close to each other,” said Patrick Gilmore, chief technology officer, Markley.

No doubt there will be kinks to iron out, but supercomputing as a service is an interesting paradigm shift and potential market expander for Cray. The partners declined to say much about pricing, other than that they were looking at prices for HPC-in-the-cloud resources and that there would be a premium over those; how much wasn’t revealed. It will be more than 10 percent higher but won’t “be a discouraging” premium, say the partners.

Cray bills the Urika GX as the first agile analytics platform that fuses supercomputing abilities with open enterprise standards. Its Cray Graph Engine provides optimized pattern-matching and is tuned to leverage the scalable parallelization and performance of the Urika-GX. These strengths are particularly valuable for many bioinformatics tasks.

“Research and development, particularly within life sciences, biotech and pharmaceutical companies, is increasingly data driven. Advances in genome sequencing technology mean that the sheer volume of data and analysis continues to strain legacy infrastructures,” said Chris Dwan, who led research computing at both the Broad Institute and the New York Genome Center. “The shortest path to breakthroughs in medicine is to put the very best technologies in the hands of the researchers, on their own schedule. Combining the strengths of Cray and Markley into supercomputing as a service does exactly that.”

As explained by Jeff Flanagan, executive vice president, Markley Group, “the service will not be offered as a ‘partition service’ but as a reservation service. Companies and institutions will have the opportunity to reserve time on the various Cray machines and we are starting with the Urika GX.” One attractive aspect of starting with the Urika GX is that it looks a lot like a standard Linux box and has a fair amount of pre-installed software (Hadoop is one example), according to Ted Slater, global head of healthcare and life sciences at Cray.

Markley and Cray have tried to remove much of the heavy lifting required for running finicky supercomputers; still, using the service isn’t trivial. As part of the pre-staging, users need to move their data into the Markley datacenter and also make sure their software will actually run when the user’s scheduled time on the Urika GX arrives. This can take some time (e.g. a month). That said, Markley’s datacenter has a variety of high performance resources (InfiniBand/100Gig Ethernet, petabytes of object storage, fast SSDs, etc.).

Uploading data shouldn’t be a problem for most clients, says Gilmore: “If you are in New England the Markley datacenter is pretty much the center of the Internet – something like 70 percent of the internet fiber goes through that building so there are lots of ways to get into the building. We’ll get you a connection, either a VPN or a direct fiber. Most of the genomics customers we’re talking to are already customers with colocation [here], so there is probably direct fiber from their offices, their sequencers, to the building.”

Data can live on a storage array users already have in the colocation facility or be placed on a Markley array. Cray and Markley have also set up a virtualized version of the Urika GX to serve as a test platform for scripts, and “to make sure they don’t waste time on a very, very expensive, very fast supercomputer, when the reservation comes up.” Markley will then preload the data or make the connection to their array depending upon their preference.

“We are actually going to put the virtual machine behind a load balancer so that when you log onto it and test it, when it’s your turn to come up, you’ll just shut down the virtual machine. We will migrate the virtual machine onto the Cray for you. We’ll reconfigure the load balancing so you don’t actually have to do anything different. It works just like it did when on the virtual machine. We’ll also be transferring all the data for you,” said Gilmore. “So there is a little bit of preorganization to do, but the idea is to make this as easy and seamless as possible. Users log on again, make sure the program works properly, then Monday morning (at the prearranged reserved time) they can turn on the jobs and away they go.”

Cray and Markley say little about the actual projects they used for beta testing; however, the Cray web site has a brief account of a genomics project conducted by the Broad Institute using the Urika GX, which no doubt offers lessons. Hail, an open source scalable framework for exploring and analyzing genetic data at massive scale, was used in the Broad project.

Hail is built on top of Apache Spark, and can analyze terabyte-scale genetic data. “Still under active development, Hail is used in medical and population genomics at the Broad for a variety of diseases. It also serves as the core analysis platform for the Genome Aggregation Database (gnomAD) – the largest publicly available collection of human DNA sequencing data, and a critical resource for the interpretation of disease-causing genetic changes,” according to the Cray document.
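For readers unfamiliar with the framework, here is a minimal sketch of what a Hail analysis script can look like. It is based on the open-source Hail Python API as publicly documented; the dataset path is hypothetical, and the details may differ from the Hail version the Broad used at the time.

    # Illustrative sketch only, not the Broad's actual pipeline. Hail runs on top of
    # Apache Spark; hl.init() starts (or attaches to) a Spark context.
    import hail as hl

    hl.init()

    # Import a hypothetical VCF into Hail's distributed MatrixTable representation.
    mt = hl.import_vcf('gs://example-bucket/cohort.vcf.bgz', reference_genome='GRCh38')

    # Compute per-variant QC metrics (call rate, allele frequencies, ...) in parallel on Spark.
    mt = hl.variant_qc(mt)

    # Keep well-called, non-rare variants and count what survives the filter.
    mt = mt.filter_rows((mt.variant_qc.call_rate > 0.95) & (mt.variant_qc.AF[1] > 0.01))
    print(mt.count_rows())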

Hail is also the tool that’s pre-installed on the Cray-Markley Urika GX offering, although Slater says users can choose to port their tool of choice, such as GATK (also a Broad project). Slater said Cray has also been working with a genome assembler on its XC platform and is working to port it to the GX for evaluation.


HPE’s Memory-centric The Machine Coming into View, Opens ARMs to 3rd-party Developers

Tue, 05/16/2017 - 09:32

Announced three years ago, HPE’s The Machine is said to be the largest R&D program in the venerable company’s history, one that could be progressing toward the epic grandeur envisioned by HP (now HPE) starting in 2014. Certainly, senior HPE managers have high ambitions for the new architecture: nothing less than a new paradigm, called Memory-Driven Computing (MDC), that puts memory, not processing, at the center of the computing platform.

HPE positions The Machine as the architecture for exascale-class performance by the time it’s commercially available in 2019 or 2020, which is roughly the timeframe the U.S. Department of Energy’s Exascale Computing Project has established for delivering an exascale machine. Along with completion of the new platform, HPE hopes, will come a broad ecosystem of complementary development. The prototype unveiled today contains an oceanic 160 terabytes (TB) of memory, capable (according to HPE) of simultaneously working with the data held in every book in the Library of Congress five times over – or approximately 160 million books.

“It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing,” the company said in its announcement.

HPE’s Kirk Bresniker

For all the promise of The Machine, today’s announcement is not startling. HPE has regularly issued updates on The Machine’s development, most recently last November, when the company said it had successfully demonstrated an MDC proof-of-concept prototype. Today’s news: the prototype operates at scale, Kirk Bresniker, Fellow/VP and chief architect of Hewlett Packard Labs, told EnterpriseTech.

“We wanted to build a system big enough to hold really interesting problems in a way that had never been done before,” Bresniker said. “So we somewhat arbitrarily picked a scale – 160 TBs of memory on a memory fabric. Compare that to the paltry 2 GBs of memory on a typical laptop, that’s 80,000 times bigger. No one’s ever constructed a memory system that large before.”

Regarding computational power, the prototype has an optimized Linux-based operating system (OS) running across 40 32-core ThunderX2 processors, Cavium’s flagship second-generation, dual socket-capable, ARMv8-A workload-optimized system on a chip.

In addition, The Machine has photonics/optical communication links, including the new X1 photonics module, which HPE said are online and operational. And it has software programming tools designed to take advantage of abundant persistent memory.

Bresniker said Memory-Driven Computing has great potential because the architecture curtails so much of the data movement required for traditional computing.

“Rather than have a GPU hanging off of a PCI Express link – and you have to manage the data back and forth from the general purpose processor out to the GPU and back again – because I have a memory fabric that has an open interface I can place those acceleration resources directly, in direct communications, on the memory fabric,” he said.

Today’s announcement marks a transition beyond internal proof-of-concept.

“We’ve moved on from proving out that each individual piece is working,” said Bresniker, “to the point where now…we can do the handoff from the teams working on the hardware, the firmware, the operating system, to the application development teams, to begin to flex their minds and muscle around the ramifications for having this kind of a platform available to them for the first time.”

From the start of this project, Bresniker said, HPE has taken the somewhat unconventional and risky approach of sharing information about the new platform so that third parties can do their development work based on The Machine specifications.

“We always knew this had to be bigger than us, that this is a conversation that has to happen across the industry,” he said. “That’s why we started to have the communications so early. When we announced this in 2014, the prototype we’re showing off now was essentially a block diagram scrawled on my whiteboard here in Palo Alto. But we wanted to have the conversation early because we wanted to work with the open source development communities, we needed to engage with them, we needed to engage with our software partners that we’ve traditionally had. We needed to engage with our component supply chain, all the memory, communications and computation components that need to understand how they fit into this memory fabric.”

Based on the current prototype, HPE said it expects the architecture could scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory – 4,096 yottabytes. For context, that is “250,000 times the entire digital universe today,” HPE said in its announcement. “With that amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles; and every data set from space exploration all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.”

“Cavium shares HPE’s vision for Memory-Driven Computing and is proud to collaborate with HPE on The Machine program,” said Syed Ali, president and CEO of Cavium Inc. “HPE’s groundbreaking innovations in Memory-Driven Computing will enable a new compute paradigm for a variety of applications, including the next generation data center, cloud and high performance computing.”


Ellexus Unveils Breeze Healthcheck for Better I/O

Tue, 05/16/2017 - 09:26

CAMBRIDGE, May 16, 2017 — Ellexus, the I/O profiling company, has unveiled Breeze Healthcheck, a new software tool that gives every engineer the knowledge to improve the performance of their applications.

Breeze Healthcheck produces a simple report that tells the user what an application is doing wrong and why. The tool analyses dozens of harmful I/O patterns and lists the top ten worst offenders, along with the impact they are having on an application’s performance and scalability.

Using Breeze Healthcheck, IT managers can hand over the responsibility for good I/O to the users, who can find the problem and fix it themselves. The tool can also be integrated into a company’s software release structure or workflow qualification to catch bad I/O before it becomes a problem.

As well as looking for bad I/O patterns, Breeze Healthcheck will profile the performance of the IT infrastructure, such as file system and network latency. Checks carried out by Breeze Healthcheck include the following (one such pattern is sketched in code after the list):

  • Files or programs used in someone else’s home directory and other hard-coded paths that shouldn’t be there
  • Lots of small temporary files saved on shared storage or not deleted
  • Programs that make very small reads and writes or very large reads and writes
  • Programs that stat() or open() lots of files without using them
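For illustration only, and not Ellexus code: the following minimal Python sketch shows the kind of access patterns a profiler like Breeze Healthcheck flags, alongside a friendlier alternative. The paths, thresholds and function names are hypothetical.

    import os
    import tempfile

    def scan_headers_badly(paths):
        """Anti-pattern: stat() and open() lots of files, reading only a few bytes
        from each. Every call is a round trip to shared storage."""
        headers = []
        for p in paths:
            if os.stat(p).st_size == 0:      # one metadata call per file
                continue
            with open(p, 'rb') as f:
                headers.append(f.read(16))   # tiny read: high overhead per byte moved
        return headers

    def batch_results_locally(chunks):
        """Friendlier pattern: buffer many small outputs into one local temporary
        file instead of scattering tiny files across shared storage."""
        fd, out_path = tempfile.mkstemp(suffix='.bin')   # lands in local /tmp by default
        with os.fdopen(fd, 'wb') as out:
            buffered = bytearray()
            for chunk in chunks:
                buffered.extend(chunk)
                if len(buffered) >= 1024 * 1024:          # flush in ~1 MiB sequential writes
                    out.write(buffered)
                    buffered.clear()
            out.write(buffered)
        return out_path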

The Breeze Healthcheck report contains information about how much time is spent carrying out I/O so you can see how the performance of the file system and network is affecting your application.

Whether you are an IT manager, a software release engineer or a high-performance computing (HPC) user, you need to care about how applications access programs, libraries and files on local and shared file systems. Bad I/O patterns can harm shared storage and will limit application performance, wasting millions in lost engineering time.

Dr Rosemary Francis, CEO of Ellexus, said: “Breeze Healthcheck brings together all our engineers’ expertise in good and bad I/O into one, simple-to-use report. Years of working with a wide variety of HPC customers has given Ellexus a unique, in-depth understanding of what can affect application performance which we are now able to share.

“As organisations target new, more powerful compute architectures and cloud infrastructures, it’s never been more important to optimise the way programs access large datasets – and the responsibility for optimisation shouldn’t just lie in the domain of the experts. It’s all too easy for an application to do something completely stupid, so the focus has got to be on making it easy for all users to understand the mistakes they are making.”

About Ellexus

Ellexus is the I/O profiling company. From a detailed analysis of one application or workflow pipeline to whole-cluster, lightweight monitoring and reporting, we provide solutions that solve all I/O profiling needs.

Our tools provide unique insights into how big data storage clusters are working, enabling IT managers and high-performance computing engineers to multiply productivity and save millions on wasted engineering time. We work across many high-performance and scientific computing sectors, including the semiconductor industry, life sciences, bioinformatics and oil and gas. Customers include ARM, the Sanger Institute and Mentor Graphics.

Unlike any other application profiling software, our unique tools can be run on a live system. We don’t just give you data about what your programs are doing; our tools include expertise on what is going wrong and how you can fix it.

Source: Ellexus


Fujitsu, 1QBit Collaborate on Quantum-Inspired AI Cloud Service

Tue, 05/16/2017 - 09:17

TOKYO and VANCOUVER, May 16, 2017 — Fujitsu Limited and 1QB Information Technologies Inc. announced that starting today they will collaborate on applying quantum-inspired technology to the field of artificial intelligence (AI), focusing on the areas of combinatorial optimization and machine learning. The companies will work together in both the Japanese and global markets to develop applications which address industry problems using AI developed for use with quantum computers.

This collaboration will enable software developed by 1QBit for quantum computers to run on a “digital annealer,” jointly developed by Fujitsu Laboratories Ltd. and the University of Toronto. A digital annealer is a computing architecture that can rapidly solve combinatorial optimization problems using existing semiconductor technology.
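For context on the kind of problem these systems consume: annealing-style hardware, whether quantum or digital, typically accepts problems in QUBO form (quadratic unconstrained binary optimization), i.e. minimizing x^T Q x over binary vectors x. The sketch below is a generic software illustration that solves a tiny max-cut QUBO with plain simulated annealing; it does not use Fujitsu’s or 1QBit’s actual APIs, which are not described in this announcement.

    import math
    import random

    # Max-cut on a 4-node ring, written as an upper-triangular QUBO matrix Q:
    # minimize energy(x) = x^T Q x over binary vectors x.
    Q = [[-2,  2,  0,  2],
         [ 0, -2,  2,  0],
         [ 0,  0, -2,  2],
         [ 0,  0,  0, -2]]

    def energy(x):
        n = len(x)
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    def anneal(n, steps=5000, t0=2.0):
        """Plain software simulated annealing: flip one bit at a time and accept
        uphill moves with a temperature-dependent probability."""
        x = [random.randint(0, 1) for _ in range(n)]
        e = energy(x)
        best_x, best_e = list(x), e
        for step in range(steps):
            t = t0 * (1 - step / steps) + 1e-3   # simple linear cooling schedule
            i = random.randrange(n)
            x[i] ^= 1                            # trial bit flip
            e_new = energy(x)
            if e_new <= e or random.random() < math.exp((e - e_new) / t):
                e = e_new                        # accept the move
                if e < best_e:
                    best_x, best_e = list(x), e
            else:
                x[i] ^= 1                        # revert the flip
        return best_x, best_e

    best_x, best_e = anneal(len(Q))
    print(best_x, best_e)   # an optimal cut of this ring reaches energy -4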

Over the last four years, 1QBit has developed new methods for machine learning, sampling, and optimization based on reformulating problems to meet the unique requirements of interfacing with quantum computers. The company’s research and software development teams have focused on solving sampling, optimization, and machine learning problems to improve applications in industries including finance, energy, advanced materials, and the life sciences. The combination of Fujitsu’s cutting-edge computer architecture and hardware technology, and 1QBit’s software technology, will enable advances in machine learning to solve complicated, large-scale optimization problems.

Fujitsu has systematized its technology and experience with AI, developed over the course of more than thirty years, under the name Zinrai. The platform will support customers in using AI and will be available as the Fujitsu Cloud Service K5 Zinrai Platform Service. Fujitsu will offer the results of this collaboration as an option in the Fujitsu Cloud Service K5 Zinrai Platform Service Zinrai Deep Learning, a Zinrai cloud service, during 2017.

In the future, the two companies will provide a variety of services that combine 1QBit’s software and expertise in building applications which benefit from the capabilities of quantum computers, with Fujitsu’s hardware technology, its customer base – the largest in Japan – and its versatile ICT capabilities, including AI. The partnership aims to contribute to the creation of new businesses and the transformation of existing businesses by introducing new solutions to the computational challenges facing customers in a variety of fields, including finance, life sciences, energy, retail and distribution.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 155,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE: 6702) reported consolidated revenues of 4.5 trillion yen (US$40 billion) for the fiscal year ended March 31, 2017. For more information, please see http://www.fujitsu.com.

About 1QBit

1QBit is dedicated to building quantum and quantum-inspired software to solve the world’s most demanding computational challenges. The company’s hardware-agnostic platforms and services enable the development of applications which scale alongside advances in both classical and quantum computers. 1QBit partners with Fortune 500 clients and leading hardware providers to redefine intractable industry problems in the areas of optimization, simulation, and machine learning. Headquartered in Vancouver, Canada, 1QBit’s interdisciplinary team of 50 comprises mathematicians, physicists, chemists, software developers, and quantum computing experts who develop novel solutions to problems, from research through to commercial application development. For more information, visit: 1qbit.com.

Source: Fujitsu


DDN’s Yvonne Walker Recognized as One of CRN’s 2017 Women of the Channel

Tue, 05/16/2017 - 08:19

SANTA CLARA, Calif., May 16, 2017 — DataDirect Networks (DDN) today announced that CRN, a brand of The Channel Company, has named Yvonne Walker, DDN’s partner and customer relations manager, to its prestigious 2017 Women of the Channel list. Walker was recognized for her leadership in improving DDN’s channel and industry leader partnerships across the globe with new, innovative, educational and tactical marketing elements. With her guidance and support, DDN’s channel partner solutions and programs have helped solidify DDN’s rank as the top storage provider in high performance computing.

During her 10-year tenure in the storage industry, including the past two years at DDN, Walker has led global channel programs as well as vertical marketing strategies. She has strengthened DDN’s key partner relationships, enhanced the company’s PartnerLink program offering and bolstered channel revenue.

“We are committed to providing DDN partners with a strong foundation for commercial success,” Walker said. “Our continually evolving PartnerLink program, combined with innovative technologies and end-to-end solutions that support high-performance workflows, allows our partners to deliver unique, market-leading solutions to their customers that accelerate time to results, scale simply as data sets grow, and provide a real competitive advantage.”

Looking ahead, Walker plans to roll out new partner benefits that will help DDN partners sell more efficiently. These program elements are designed to enhance business planning, improve training, provide clear visibility to annual targets, and will include a new lead distribution program.

CRN editors select the Women of the Channel honorees based on their professional accomplishments, demonstrated expertise and ongoing dedication to the IT channel. Each is recognized for her outstanding leadership, vision and unique role in driving channel growth and innovation.

“These extraordinary executives support every aspect of the channel ecosystem, from technical innovation to marketing to business development, working tirelessly to keep the channel moving into the future,” said Robert Faletra, CEO of The Channel Company. “They are creating and elevating channel partner programs, developing fresh go-to-market strategies, strengthening the channel’s network of partnerships and building creative new IT solutions, among many other contributions. We congratulate all the 2017 Women of the Channel on their stellar accomplishments and look forward to their future success.”

The 2017 Women of the Channel list will be featured in the June issue of CRN Magazine and online at www.CRN.com/wotc

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

Source: DDN


Markley, Cray Partner for Supercomputing as a Service

Tue, 05/16/2017 - 08:14

SEATTLE, Wash., and BOSTON, Mass., May 16, 2017 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) and Markley, a premier provider of data center space and cloud computing services, today announced a partnership to provide supercomputing as a service solutions that combine the power of Cray supercomputers with the premier hosting capabilities of Markley. Through the partnership, Markley will offer Cray supercomputing technologies as a hosted offering, and both companies will collaborate to build and develop industry-specific solutions.

Sought-after supercomputing capabilities, both on-premises and in the cloud, have become increasingly desirable across a range of industries, including life sciences, bio-pharma, aerospace, government, banking, and more – as organizations work to analyze complex data sets and research, and reduce time to market for new products. Through the new supercomputing as a service offering, Cray and Markley will make it easier and more affordable for research scientists, data scientists, and IT executives to access dedicated, powerful compute and analytic capability to accelerate time to discovery and decision.

“The need for supercomputers has never been greater,” said Patrick W. Gilmore, chief technology officer at Markley. “For the life sciences industry especially, speed to market is critical. By making supercomputing and big data analytics available in a hosted model, Markley and Cray are providing organizations with the opportunity to reap significant benefits, both economically and operationally.”

Headquartered in Boston, Markley delivers best-of-breed cloud and data center offerings, including its enterprise-class, on-demand Infrastructure-as-a-Service solution that helps organizations maximize IT performance, reduce upfront capital expenses, increase speed to market, and improve business continuity. In addition, Markley guarantees 100 percent uptime, backed by the industry’s best Service Level Agreement.

“Cray and Markley are changing the game,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “Now any company that has needed supercomputing capability to address their business-critical research and development needs can easily and efficiently harness the power of a Cray supercomputer. We are excited to partner with Markley to create this new market for Cray.”

The first industry solution built by Cray and hosted by Markley will feature the Cray Urika-GX for life sciences – a complete, pre-integrated hardware-software solution. In addition, Cray has integrated the Cray Graph Engine (CGE) with essential pattern-matching capability and tuned it to leverage the highly-scalable parallelization and performance of the Urika-GX platform. Cray and Markley have plans for the collaboration to quickly expand and include Cray’s full range of infrastructure solutions.

The Cray Urika-GX system is the first agile analytics platform that fuses supercomputing abilities with open enterprise standards to provide an unprecedented combination of versatility and speed for high-frequency insights, tailor-made for life sciences research and discovery.

“Research and development, particularly within life sciences, biotech and pharmaceutical companies, is increasingly data driven. Advances in genome sequencing technology mean that the sheer volume of data and analysis continues to strain legacy infrastructures,” said Chris Dwan, who led research computing at both the Broad Institute and the New York Genome Center. “The shortest path to breakthroughs in medicine is to put the very best technologies in the hands of the researchers, on their own schedule. Combining the strengths of Cray and Markley into supercomputing as a service does exactly that.”

“HPC environments are increasingly being used for high-performance analytics use cases that require real-time decision making such as cybersecurity, real-time marketing, digital twins, and emerging needs driven by big data and Internet of Things (IoT) use cases. Augmenting your on-premises infrastructure with HPC clouds enables you to meet your existing SLAs while scaling up performance-driven analytics for emerging use cases,” notes Gartner, in Follow These Three Steps to Optimize Business Value from Your HPC Environments, by Chirag Dekate, September 16, 2016.

Cray and Markley will be hosting meetings to discuss the new supercomputing-as-a-service solution at Bio-IT World Conference and Expo, May 23-25, 2017, in Boston, at booth #452.

Cray and Markley will also host a live webinar, “Power Your Analytics with Supercomputing as a Service,” June 13th at 10:00 a.m. PDT. You can register for the webinar here.

For more information and to speak to a Cray or Markley sales representative, please contact us at Cray.com or at Markley.com

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

About Markley

Markley is a premier provider of mission-critical data center facilities and cloud computing services. The company guarantees clients 100 percent uptime and is trusted by Fortune 1000 companies, major global consumer brands, and the world’s most cutting-edge research firms to deliver high availability, consistent performance, and unparalleled client service. Markley is the only company in the world that hosts a supercomputing-as-a-service solution. Founded in 1992, the company owns and operates nearly 1.5 million square feet of highly secure space and multiple, strategically located cloud point-of-deliveries (PODs) covering all regions and time zones. To learn more about Markley, please visit: www.markleygroup.com.  

Source: Cray


Abaco Unveils 40 Gigabit 3U VPX Ethernet Switch

Tue, 05/16/2017 - 08:10

HUNTSVILLE, Ala., May 16, 2017 — Following the company’s announcement of the first 3U VPX single board computer – the SBC367D – to feature 40 Gigabit Ethernet backplane connectivity, Abaco Systems today announced the NETernity SWE440 3U VPX 10/40 Gigabit Ethernet switch.

The SWE440 provides an Ethernet-based system interconnect that allows single board computers, digital signal processors, graphics cards, sensor I/O cards and others to pass data at multi-Gigabit speeds, enabling the creation of true HPEC (high performance embedded computing) solutions in the 3U VPX form factor. Customers will deploy SWE440-enabled systems for next generation situational awareness and surveillance, electronic warfare, radar/sonar, or any application requiring low-latency, high-speed data transfers.

By using advanced system-on-chip (SoC) technology, the SWE440 consumes minimal power – 40 watts or less. It also offers a wide range of data plane and control plane port configuration choices to match customers’ preferred OpenVPX implementation. The SWE440 supports up to eight 40 Gigabit Ethernet ports or up to 32 10 Gigabit Ethernet ports – or combinations of the two.

The new switch benefits from Abaco’s OpenWare switch management software. Developed by Abaco’s Networking Innovation Center, OpenWare is based on open industry standards and provides customers with significant flexibility for customization, together with extensive security features including denial-of-service attack prevention, user password mechanisms with multiple levels of security, and military-level authorization schemes including 802.1X and sanitization. It also provides a broad range of network protocol support for Layer 2 and Layer 3 functionality, including Layer 3 forwarding, which provides dynamic routing with standard routing protocols – essential for customers with complex networks.

Optional front panel ports on the SWE440 enable its use in lab environments to aid customers during their qualification phase, simplifying and minimizing the transition from development to deployment while reducing cost.

“Our customers face a challenge to deploy very high performance computing in platforms with limited space and power,” said Mrinal Iyengar, Vice President, Product Management at Abaco Systems. “The SWE440 will enable such deployments, allowing Abaco to deliver ‘big iron’ compute performance in the compact 3U form factor. The combination of very high throughput with minimal latency, flexible management software and advanced security uniquely position the SWE440 to provide connectivity for advanced rugged systems.”

The SWE440, which provides a straightforward upgrade path for current 3U VPX switches such as Abaco’s GBX410, is available in five air-cooled and rugged conduction-cooled versions.


About Abaco Systems

With more than 30 years’ experience, Abaco Systems is a global leader in open architecture computing and electronic systems for aerospace, defense and industrial applications. We deliver and support open modular solutions developed to upgrade and enhance the growing data, analytics, communications and sensor processing capabilities of our target applications. This, together with our 800+ professionals’ unwavering focus on our customers’ success, reduces program cost and risk, allows technology insertion with affordable readiness and enables platforms to successfully reach deployment sooner and with a lower total cost of ownership. With an active presence in hundreds of national asset platforms on land, sea and in the air, Abaco Systems is trusted where it matters most. www.abaco.com

Source: Abaco


ISC High Performance Sets Course for 2018 Conference

Tue, 05/16/2017 - 08:06

FRANKFURT, Germany, May 16, 2017 – The ISC Group, the organizer of ISC High Performance, is very pleased to announce the appointment of Prof. Horst Simon of Lawrence Berkeley National Laboratory (Berkeley Lab), USA, as the program chairman for ISC 2018. This new appointment not only enriches the international nature of ISC, it also brings a unique perspective to the yearly conference.

Berkeley Lab’s Wang Hall – computer research facility – exterior photos taken July 6, 2015.

The 2018 conference will once again be held at Forum Messe Frankfurt, from June 24 to June 28, 2018.

As program chair, Horst Simon will be working closely with the ISC program team to define the ISC 2018 topics, whilst also leading the 2018 steering committee in an effort to further elevate the value of ISC High Performance for the high performance computing (HPC) community. He will be replacing the 2017 program chair, Prof. Jack Dongarra of the University of Tennessee. The position of the annually rotating ISC program chair was first created in 2015 to establish a continuous knowledge sharing process with HPC leaders who play a pivotal role in advancing the field of HPC.

Simon is the Deputy Laboratory Director and Chief Research Officer of Berkeley Lab. He serves as management liaison to the University of California, the Department of Energy, and other public and private agencies and programs to represent the laboratory’s programs, accomplishments, and initiatives. Simon has been with Berkeley Lab since 1996, having served previously as Associate Laboratory Director for Computing Sciences and Director of the National Energy Research Scientific Computing Center (NERSC). In his role as Deputy Director, Simon has been instrumental in the creation of new concepts such as Cyclotron Road and CalCharge that support energy innovation and forge stronger connections between the national labs and industry.

Outside his executive role at Berkeley Lab, Simon is an internationally recognized expert in the development of parallel computational methods for the solution of scientific problems of scale. His expertise and research interests are centered on algorithmic development for sparse matrix operations, large-scale eigenvalue problems, and domain decomposition methods. Simon has been honored twice with the prestigious Gordon Bell Prize – in 2009, for the development of innovative techniques that produce new levels of performance on a real application (in collaboration with IBM researchers) and in 1988, in recognition of superior effort in parallel processing research (with others from Cray and Boeing).

Within the HPC community, Simon plays another significant role as the co-editor of the TOP500 project. His TOP500 colleagues rely on his knowledge and connections to verify the existence of systems that vendors claim to have installed and submitted for inclusion in the biannual TOP500 List.

“I have been attending ISC conference series since the early 1990s, and as supercomputing has grown in its importance in scientific and industrial applications, ISC has evolved from a meeting of a small group of technical experts into an international conference of wide geographical reach and deep scientific impact,” remarked Simon. “It is an honor to serve as the ISC 2018 Program Chair, and I will strive to further strengthen the technical program, broaden the community reach, and assure a welcoming and inclusive conference for a diverse group of participants.”

The organizers are confident that next year’s conference will once again emerge as Europe’s most significant HPC forum, offering talks that encompass an array of unique topics and speakers. The organizers also look forward to announcing the 2018 focus topics and the appointment of various committee chairs within the next couple of weeks.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

ISC High Performance attracts engineers, IT specialists, system developers, vendors, scientists, researchers, students, journalists, and other members of the HPC global community. The exhibition draws decision-makers from automotive, finance, defense, aeronautical, gas & oil, banking, pharmaceutical and other industries, as well those providing hardware, software and services for the HPC community. Attendees will learn firsthand about new products and applications, in addition to the latest technological advances in the HPC industry.

Source: ISC


PEARC17 Registration Closes May 31

Tue, 05/16/2017 - 07:30

NEW ORLEANS, May 16, 2017 — May 31 is the deadline for Practice and Experience in Advanced Research Computing (PEARC17) attendees to take advantage of early registration and guaranteed hotel room rates.

Go to http://pearc17.pearc.org to register for the conference and check the latest program schedule. To book rooms, go to http://pearc17.pearc.org/hotel or call 888-421-1442 and reference the conference name.

PEARC17 attendees staying at the Hyatt Regency New Orleans will experience the best of the Big Easy. The hotel is located in the heart of downtown, next to the Mercedes-Benz Superdome, Smoothie King Center and Champions Square. Nearby options include a ride on the Loyola Avenue Streetcar, which passes directly in front of the hotel, or a walk to the historic French Quarter, Arts District, Audubon Aquarium of the Americas, the National World War II Museum, and the scenic Mississippi Riverfront—all located within a mile of the hotel. Attendees can also savor some of the city’s best cuisine at the hotel’s restaurants and dining options: 8 Block Kitchen & Bar, Vitascope Hall, Q Smokery & Cafe, Pizza Consegna and Borgne by celebrity Chef John Besh.

To help plan your visit to the Big Easy, the PEARC17 web site has information about things to do and see in and around New Orleans, as well as additional informative links. See http://pearc17.pearc.org/new-orleans.

About PEARC

The PEARC (Practice & Experience in Advanced Research Computing) conference series is being ushered in with support from many organizations and will build upon earlier conferences’ success and core audiences to serve the broader community. In addition to XSEDE, organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Science Gateways Community Institute (SGCI), the Campus Research Computing Consortium (CaRC), the ACI-REF consortium, the Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC), and Internet2.

Source: PEARC


ORNL, UTK Launch Doctoral Program in Data Science

Mon, 05/15/2017 - 11:26

OAK RIDGE, Tenn., May 15, 2017 — The Tennessee Higher Education Commission has approved a new doctoral program in data science and engineering (DSE) as part of the Bredesen Center for Interdisciplinary Research and Graduate Education.

The Bredesen Center unites resources and capabilities from the University of Tennessee-Knoxville (UTK) and the Department of Energy’s Oak Ridge National Laboratory to promote advanced research and to provide innovative solutions to global challenges in energy, engineering and computation.

The new program, which is expected to begin in the fall, is the brainchild of ORNL Computational Sciences and Engineering Division Director Shaun Gleason, UT Business Analytics Associate Professor Russell Zaretzki, and Bredesen Center Director Lee Riedinger. It will bring new doctoral students from some of the world’s top institutions to East Tennessee for an in-depth education in data science as it applies to specific scientific domains.

The massive amounts of data gathered via cellphones, tablets, sensors and other devices, along with the enormous datasets generated at leading scientific facilities such as the Spallation Neutron Source (SNS), the Manufacturing Demonstration Facility, and the Oak Ridge Leadership Computing Facility (home of the Titan supercomputer) at ORNL, pose a unique set of challenges and opportunities for researchers across the scientific spectrum. Creating a new generation of graduates with an enhanced understanding of how to manage and analyze this data could greatly expedite research breakthroughs and provide novel solutions to long-standing problems.

For example, electronic health records, when analyzed en masse, could reveal better and cheaper ways to treat patients, and the combination of cell phones, GPS technology and traffic sensor data will allow researchers to optimize traffic flow and assist city planners in responding to emergencies more quickly and effectively. Researchers who use scientific facilities such as ORNL’s SNS, which provides the most intense pulsed neutron beams in the world for research and industrial development, will benefit from the ability to analyze data on the fly.

Tennessee, home not only to UTK and ORNL but also to the UT Health Science Center (UTHSC) and UT-Chattanooga (UTC), is, like much of the nation, experiencing increased demand for data specialists. The DSE doctoral program is needed to close a critical skills gap.

The curriculum will seek to integrate candidates’ data science education with seven scientific domains: health and biological sciences, advanced manufacturing, materials science, environmental and climate science, transportation science, national security, and urban systems science. Candidates will work alongside ORNL and UT researchers and emerge with a doctorate tied to a specific scientific specialty.

“The interdisciplinary nature of the program is what makes this new degree so unique,” said Gleason, adding it will also help both UT and ORNL continue to be leaders in the areas of data science and engineering.

The program will include a curriculum heavy in data analytics, computing, policy and entrepreneurship while offering a wide array of electives. Initial plans are to admit 10 to 15 graduate students per year, growing to an enrollment goal of approximately 100 students. Mentoring and research funding support will be divided among ORNL, UTK, UTHSC, and UTC.

The Energy Science and Engineering (ESE) program that provides the basis for the newly launched DSE track has already awarded 24 doctorates in its first five years and now includes more than 125 graduate students. The program’s interdisciplinary curriculum provides student experiences in entrepreneurship and policy relative to energy. One-third of the students focus on entrepreneurship, and some intend to start new energy-related companies in Tennessee once their graduate work is finished.

“The ESE program has been a big success and a new model for interdisciplinary graduate education linking the resources of a major university and a national laboratory,” Riedinger said. “The new DSE program expands this model to another area of national need and is expected to continue the tradition of excellence within the Bredesen Center, ORNL and UT.”

Oak Ridge National Laboratory is supported by the Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: ORNL

The post ORNL, UTK Launch Doctoral Program in Data Science appeared first on HPCwire.

What’s Up with Hyperion as It Transitions From IDC?

Mon, 05/15/2017 - 11:07

If you’re wondering what’s happening with Hyperion Research – formerly the IDC HPC group – apparently you are not alone, says Steve Conway, now senior VP of Research, Hyperion. There’s still a bit of confusion in the HPC community, says Conway, mostly about whether Hyperion will become part of the China-based company, Oceanwide, that is buying all of IDC. The answer is a definite no.

Indeed, says Conway, maintaining Hyperion as a U.S.-owned entity was a major requirement imposed by the U.S. government before allowing the sale of the rest of IDC to Oceanwide to proceed. You may already know this, but according to Conway, many in the HPC community are still asking the Hyperion team. A dedicated Hyperion Research web presence and associated FAQ would probably help, but until the purchase is complete, Hyperion has been reluctant to create one. (See the HPCwire/EnterpriseTech article, IDC’s HPC Group Spun out to Temporary Trusteeship, for details of the deal.)

Earl Joseph, Hyperion Research CEO, formerly leader of IDC HPC Group

Hyperion/IDC has long been a prominent presence in the HPC community, so the swirl of attention and questions around its future is natural. It is a surprisingly small team – five members – given the outsized influence it carries. Part of the slowness in communicating the changes more effectively, says Conway, stems from how busy the robust business has kept the team.

“We’ve been going around as much as we can, meeting with folks and explaining we’re not going to be part of Oceanwide. We are inside of IDC for the moment but comfortably walled off and do not have contact with IDC proper at all. The second important point, which we are very happy about, is that as we brief clients – contracts had to be renewed to Hyperion – not a single client left us, and in a fair number of cases people have said, ‘what more can we do to support you,’ because they wanted to keep us around.”

The U.S. government seems to agree and is taking pains to ensure Hyperion succeeds. Under terms of the sale IDC is prevented from entering the HPC market for three years.

Here are a few data points:

  • International. International business has grown from roughly 10 percent to roughly 50 percent, including big chunks in Asia and Europe.
  • Government work. Hyperion works with several governments and has just signed a 5-year contract with one. Its HPC ROI studies for various national interests, for example, have recently drawn much attention.
  • HPC User Forum. Hyperion has signed a contract through 2018 to continue to run the forum in conjunction with the all-volunteer steering committee which includes many community leaders. “IDC started the HPC User Forum in 2000, at the request of leading HPC sites in government, academia and industry who wanted access to a vendor-neutral user group,” says Conway.
  • Bidders. There are several bidders for Hyperion, all of whom have examined the “books,” says Conway, with a resolution expected in the next few months. Conway would not say anything more specific.
  • Expansion. Driven by extensive travel and growing engagement commitments, Hyperion is looking to expand. Business is up, and the firm plans to add staff cautiously.

The key message here, emphasizes Conway, is two-fold: 1) Hyperion will not be part of Oceanwide; 2) Business is steaming along nicely.

Steve Conway, Hyperion SVP

This picture of health painted by Conway seems to match well with past experience. IDC was long one of the few technology analyst firms closely tracking the HPC market. In recent years, it also tracked the migration of HPC technology into the enterprise. Hyperion/IDC’s twice-annual HPC market/technology updates (segment sales, vendor/technology trends, HPC impact, etc.) are presented at SC and ISC and are eagerly anticipated.

Hyperion will also take advantage of the ownership change to expand its practice more aggressively into so-called proximity markets. “Particularly in areas like big data where it was very nicely covered in the enterprise by the mainstream IDC folks and we handled the HPC part. We already knew we needed to keep track of proximity markets, the markets that were not quite HPC, so we could see what motivates people to cross the boundary to move from enterprise technology to HPC,” says Conway.

“Now we feel freed up. We are not going to move into the mainstream enterprise market. That’s not where we want to be. We want to stay close to HPC but we are paying a lot of attention to proximity markets and to people who are doing things today not on HPC resources with plans to move up to HPC resources in 6 to 18 months.” Like everyone on the planet (or so it seems), Hyperion also plans growing coverage of deep learning and AI in the evolving HPC landscape. (See the HPCwire article, Hyperion (IDC) Paints a Bullish Picture of HPC Future.)

Bob Sorensen, Hyperion

The biggest barrier to growth, says Conway, is not the pipeline of business; it is finding more people who fit the team. In the last couple of years Hyperion/IDC added two analysts: Bob Sorensen, a long-time technology analyst for the U.S. government, and Kevin Monroe, who is finishing up a Ph.D. in economics. Conway expects the team to grow further, albeit at a measured rate.

Whether the Hyperion name will stick is an open question and depends on who ultimately purchases the company. Conway notes there are other IDC-like companies that might be interested. Earl Joseph, formerly head of the IDC HPC unit, is handling the negotiations. In the meantime, it’s business as usual. For example, Hyperion is in the midst of seeking entries for its HPC Innovation Awards, whose primary judges are members of the HPC User Forum. Here is a link to the application form: http://www.hpcuserforum.com/innovationaward/applicationform.html

The post What’s Up with Hyperion as It Transitions From IDC? appeared first on HPCwire.

Penguin Computing Announces Support for Singularity Containers on POD HPC Cloud and Scyld ClusterWare

Mon, 05/15/2017 - 09:33

FREMONT, Calif., May 15, 2017 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced support for Singularity containers on its Penguin Computing On-Demand (POD) HPC Cloud and Scyld ClusterWare HPC management software.

“Our researchers are excited about using Singularity on POD,” said Jon McNally, Chief HPC Architect at ASU Research Computing. “Portability and the ability to reproduce an environment is key to peer reviewed research. Unlike other container technologies, Singularity allows them to run at speed and scale.”

“We’ve long desired to support containers in our public HPC cloud, but the most adopted technology of our users was Docker,” said Will Cottay, Director of Cloud Solutions at Penguin Computing. “For loosely coupled applications in a virtual or private environment Docker is great, but it doesn’t scale up to supercomputers. Singularity provides the flexibility of containers with the security and scalability needed for tightly coupled HPC workflows. We’re very grateful to Greg Kurtzer with the High Performance Computing Services group at Lawrence Berkeley National Laboratory for inventing and developing Singularity.”

Penguin Computing customers are able to build and run Singularity containers on their in-house HPC resources and run the same container on POD, ensuring the same application and OS environment. Entire workflows can be built into a container enabling both bursting and replication for disaster recovery.

Since Singularity supports the import or direct execution of Docker images, users can use their existing Docker assets, or leverage others’ work. A single command will download and run an image from a Docker Hub repository.
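To make that workflow concrete, here is a minimal sketch, driven from Python, of running a command inside a container pulled on the fly from Docker Hub. It assumes only that a recent Singularity binary is on the PATH; the wrapper function, the ubuntu:16.04 image, and the command shown are illustrative choices, not part of Penguin’s announcement.

```python
# Minimal sketch (assumption: Singularity is installed and on PATH).
# Illustrates running a command inside a container built on the fly
# from a Docker Hub image, as described above.
import subprocess

def exec_in_docker_image(image, *command):
    """Run `command` inside a container created from a Docker Hub image."""
    # `singularity exec docker://<image> <command>` downloads the Docker
    # layers, assembles a Singularity container, and executes the command.
    return subprocess.call(["singularity", "exec", "docker://" + image, *command])

if __name__ == "__main__":
    # Example: print the OS release of a stock Ubuntu image.
    exec_in_docker_image("ubuntu:16.04", "cat", "/etc/os-release")
```

On a POD or Scyld ClusterWare node with Singularity installed, the same pattern would apply; only the image name and the command would change.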

Penguin’s POD team is maintaining a public GitHub repository of specification files to make it easy for users to build containers tuned for HPC clusters.

Penguin Computing also now ships Singularity with its Scyld ClusterWare 7 HPC management software. Earlier this year Penguin Computing announced Scyld ClusterWare 7, the latest version of its HPC provisioning software, which adds enhanced functionality for large-scale clusters ranging to thousands of nodes.

Visit https://pod.penguincomputing.com/documentation/Singularity for information and documentation about Singularity on POD.

Visit http://singularity.lbl.gov for more information about Singularity.

About Penguin Computing

Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets. Visit www.penguincomputing.com to learn more about the company and follow @PenguinHPC on Twitter.

Source: Penguin Computing

The post Penguin Computing Announces Support for Singularity Containers on POD HPC Cloud and Scyld ClusterWare appeared first on HPCwire.

University of Waterloo Selects Mellanox InfiniBand for Academic Research

Mon, 05/15/2017 - 08:28

SUNNYVALE, Calif. & YOKNEAM, Israel, May 15, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that the University of Waterloo selected Mellanox EDR 100G InfiniBand solutions to accelerate their new supercomputer. The new supercomputer will support a broad and diverse range of academic and scientific research in mathematics, astronomy, science, the environment and more.

The University of Waterloo is a member of SHARCNET (www.sharcnet.ca), a consortium of 18 universities and colleges operating a network of high-performance compute clusters in southwestern, central and northern Ontario, Canada.

“The growing demands for research and supporting more complex simulations led us to look for the most advanced, efficient, and scalable HPC platforms,” said John Morton, technical manager for SHARCNET. “We have selected the Mellanox InfiniBand solutions because their smart acceleration engines enable high performance, efficiency and robustness for our applications.”

“One of the unique challenges of academic computing lies in a university’s need to support a very broad range of applications and workflows,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox smart InfiniBand solutions deliver the highest performance, scalability and efficiency for a variety of workloads, and also ensure backward and future compatibility, protecting the university’s investment.”

The University of Waterloo system is using Mellanox’s EDR 100Gb/s solutions with smart offloading capabilities to maximize system utilization and efficiency. The system also includes Mellanox’s InfiniBand to Ethernet gateways to provide seamless access to an existing Ethernet-based storage platform.

Located in Southern Ontario, Canada, University of Waterloo’s supercomputer serves a diverse faculty, supporting both undergraduate and graduate research across a wide range of disciplines including Applied Health Sciences, Arts, Engineering, Environment, Math, and Science as well as leading edge research in astronomy.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox’s intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon, that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at www.mellanox.com.

Source: Mellanox

The post University of Waterloo Selects Mellanox InfiniBand for Academic Research appeared first on HPCwire.

GCS Sets Records for Hours Delivered in 17th Large-Scale Call

Mon, 05/15/2017 - 08:19

Berlin/Germany, May 15, 2017—The Gauss Centre for Supercomputing (GCS) approved 30 large-scale projects during the 17th call for large-scale proposals, set to run from May 1, 2017 to April 30, 2018. Combined, these projects received 2.1 billion core hours, marking the highest total ever delivered by the three GCS centres—the High Performance Computing Center Stuttgart (HLRS), Jülich Supercomputing Centre (JSC), and Leibniz Computing Centre of the Bavarian Academy of Sciences and Humanities (LRZ). In addition to delivering record-breaking allocation time, GCS also broke records in proposals received and number of allocations awarded.

GCS awards large-scale allocations to researchers studying earth and climate sciences, chemistry, particle physics, materials science, astrophysics, and scientific engineering, among other research areas of great importance to society.

Of the 30 projects, four were granted allocations exceeding 100 million core-hours—another first for GCS—reflecting how effectively users are exploiting the various GCS centres’ flagship supercomputers.

“As we continue to provide world-class computing resources and user support at our three GCS facilities, our user base continues to expand based on the wide variety of excellent proposals we receive during each successive large-scale call,” said Dr. Dietmar Kröner, University of Freiburg Professor and Chairman of the GCS Scientific Steering Committee. “We have tough decisions to make, as we only have so many core-hours per year, and the proposals continue to get better each year.”

Several of the largest allocations are benefiting from the variety of architectures offered through the three GCS centres.

A team led by Dr. Matthias Meinke of RWTH Aachen University received a total of 335 million core hours—250 million at HLRS and 85 million at JSC—for a project dedicated to understanding turbulence, one of the last major unsolved fluid dynamics problems. The team studies turbulence as it relates to jet engine dynamics, and its research is focused on creating quieter, safer, more fuel efficient jet engines.

A team of astrophysicists led by Dr. Hans-Thomas Janka of the Max Planck Institute for Astrophysics was granted 120 million core hours on LRZ’s SuperMUC system to simulate supernovas—the death and explosion of stars, and one of the main ways that heavy elements travel across the universe.

In previous allocations, the team was able to create one of the first first-principles simulations of a 3D supernova, and plans to expand its research to more accurately understand the volatile, dynamic processes that govern the formation of neutrinos and gravitational waves after a supernova.

Supercomputing has become an indispensable tool in studying the smallest, most fundamental building blocks of matter known to man—the quarks and gluons that make up protons and neutrons, and, in turn, our world. A research group based at the Department of Theoretical Physics at the University of Wuppertal is benefitting from two separate allocations—one of which uses both HLRS and JSC resources, while the other is solely based at JSC—to more deeply understand the mysterious subatomic world:

Dr. Szabolcs Borsányi leads a project aiming to make the first-ever estimate of the shear viscosity of the quark-gluon plasma—a novel state of matter that exists only at extremely high temperatures, making it very hard to study experimentally. The project was granted 35 million core hours on JSC’s JUQUEEN.

Prof. Dr. Zoltán Fodor was granted 130 million core-hours on HLRS’s Hazel Hen and 78 million core-hours on JUQUEEN to support large-scale international experimental work being done at the Large Hadron Collider in Switzerland and the Relativistic Heavy Ion Collider in the United States. The team uses HPC to more fully understand phase transitions within quantum chromodynamics—the behaviour of subatomic particles under extreme pressure or temperature conditions.

For a complete list of projects, please visit: http://www.gauss-centre.eu/gauss-centre/EN/Projects/LargeScaleProjects/call-17.html

About GCS Large-Scale Projects

In accordance with the GCS mission, all researchers in Germany are eligible to apply for computing time on the petascale HPC systems of Germany’s leading supercomputing institution. Projects are classified as “large-scale” if they are allocated more than 35 million core-hours in a given year at a GCS member centre’s high-end system. Computing time is allocated by the GCS Scientific Steering Committee to ground-breaking projects that seek solutions to long-standing, complex science and engineering problems that cannot be solved without access to world-leading computing systems. The projects are evaluated through a strict peer review process on the basis of their scientific and technical excellence.

More information on the application process for a large-scale project can be found at: http://www.gauss-centre.eu/gauss-centre/EN/HPCservices/HowToApply/LargeScaleProjects/largeScaleProjects_node.html

About GCS

The Gauss Centre for Supercomputing (GCS) combines the three German national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s integrated Tier-0 supercomputing institution. Together, the three centres provide the largest, most powerful supercomputing infrastructure in all of Europe to serve a wide range of academic and industrial research activities in various disciplines. They also provide top-tier training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 24 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.

GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia. It is headquartered in Berlin, Germany.

Source: GCS

The post GCS Sets Records for Hours Delivered in 17th Large-Scale Call appeared first on HPCwire.

SDSC’s Comet Helps Replicate Brain Circuitry to Direct Prosthetic Arm

Mon, 05/15/2017 - 08:14

SAN DIEGO, Calif., May 15, 2017 — By applying a novel computer algorithm to mimic how the brain learns, a team of researchers – with the aid of the Comet supercomputer based at the San Diego Supercomputer Center (SDSC) at UC San Diego and the Center’s Neuroscience Gateway – has identified and replicated neural circuitry that resembles the way an unimpaired brain controls limb movement.

The research, published in the March-May 2017 issue of the IBM Journal of Research and Development, lays the groundwork to develop realistic “biomimetic neuroprosthetics” — brain implants that replicate brain circuits and their function — that one day could replace lost or damaged brain cells or tissue from tumors, stroke, or other diseases.

“In patients with motor paralysis, the biomimetic neuroprosthetic could be used to replace the deteriorated motor cortex where it could interact directly with healthy brain pre-motor regions, and send commands and receive feedback via the spinal cord to a prosthetic arm,” said W.W. Lytton, a professor of physiology and pharmacology at State University of New York (SUNY) Downstate Medical Center in Brooklyn, N.Y., and the study’s principal investigator.

Realizing this scenario, described in the IBM paper titled “Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis”, required high-performance computing and the expertise to simulate and evaluate candidate models in an automated way, along with the Neuroscience Gateway (NSG) based at SDSC, which provided access to those resources.

“The increasing complexity of the virtual arm, which included many realistic biomechanical processes, and the more challenging dynamics of the neural system, called for more sophisticated methods and highly parallel computing in a system such as Comet to tackle thousands of model possibilities,” said Amit Majumdar, director of the Data Enabled Scientific Computing division at SDSC, principal investigator of the NSG, and co-author of the IBM Journal paper.

“Combining these computational advantages can be an effective approach to build even more realistic biomimetic neuroprostheses for future clinical applications,” he added.
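To give a flavor of the evolutionary-algorithm approach named in the paper’s title, the sketch below evolves a small population of parameter sets against a placeholder objective. Everything in it, from the parameter names to the toy fitness function, is an illustrative assumption rather than the study’s actual neuroprosthesis model, in which each candidate would correspond to a full simulation of the neural system and virtual arm evaluated in parallel on a system such as Comet.

```python
# Illustrative sketch only: a bare-bones evolutionary search over model
# parameters. The parameter names, ranges, and toy fitness function are
# hypothetical placeholders, not the study's neuroprosthesis model.
import random

PARAM_RANGES = {"learning_rate": (1e-4, 1e-1), "plasticity": (0.0, 1.0)}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate(cand, scale=0.1):
    child = dict(cand)
    key = random.choice(list(PARAM_RANGES))
    lo, hi = PARAM_RANGES[key]
    child[key] = min(hi, max(lo, child[key] + random.gauss(0.0, scale * (hi - lo))))
    return child

def fitness(cand):
    # Placeholder objective. In the real study each candidate would launch a
    # full simulation of the neural model and virtual arm, with many such
    # evaluations run in parallel on an HPC system.
    return -((cand["learning_rate"] - 0.01) ** 2 + (cand["plasticity"] - 0.5) ** 2)

def evolve(generations=50, pop_size=20, n_elite=5):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # best candidates first
        elite = population[:n_elite]                 # keep the top performers
        offspring = [mutate(random.choice(elite)) for _ in range(pop_size - n_elite)]
        population = elite + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```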

Read the full release at: http://www.sdsc.edu/News%20Items/PR20170510_neuroprosthesis.html

Source: SDSC

The post SDSC’s Comet Helps Replicate Brain Circuitry to Direct Prosthetic Arm appeared first on HPCwire.

New Driverless “National CP”

Fri, 05/12/2017 - 10:20

Driving on the highway, your hands are no longer bound to the steering wheel; instead, you can read a favorite book, watch an entertaining movie with your family, or even take a relaxing afternoon nap… The car is no longer simply a means of transportation but genuinely becomes a mobile leisure space: this is where the meaning of ‘driverless’ lies. Inspur is helping Baidu turn this scenario into reality within the next three to five years.

How Long Does Driverless Car Training Take?

According to US standards, driverless vehicle technology is divided into five stages. Current research has reached only L4, which uses artificial intelligence algorithms to achieve fully autonomous driving, relying mainly on high-precision maps together with laser radar (lidar), cameras, millimeter-wave radar, ultrasonic sensors, GPS and other sensors. Among these, the lidar scan serves as the “eyes”: it sweeps the surrounding 100-200 meters for pedestrians, vehicles, traffic signs, distances and other environmental factors during driving, forming a real-time road map that is passed to the onboard computing device. The artificial intelligence algorithms serve as the “brain”, analyzing the data in real time on the vehicle computing platform and making rational judgments about whether to avoid, overtake or take whatever other action suits the situation.

For driverless cars to acquire “intelligence”, deep learning must first be applied to offline model training, so that the machine learns from laser scans to “see” which objects are people, which are animals, which are trees, which are vehicle signals, what traffic signs mean, and so on. However, a machine’s ability to extract abstract features still falls far short of a human’s. A four- or five-year-old child can learn the characteristics of a cat after seeing one just a few times, whereas the Google X lab used more than 16,000 processors and a virtual brain of one billion neural nodes to analyze 10 million frames from random, untagged YouTube video clips; only after 10 days of operation could the machine reliably distinguish images of cats, correctly picking out cat photos from a further input of 20,000 images. The driverless environment is even more complex, requiring the identification of as many as possible of the people and objects a car might encounter on the road. A learning task this large demands strong computing power; otherwise the machine might be training until the end of the world.

Inspur SR-AI Rack Supports Model Training at the Hundred-Billion-Sample Scale

Offline model training initially used stand-alone multi-card (multi-GPU) machines and moved to cooperative parallel computing on large-scale GPU clusters as data volumes grew. Driverless technology may currently be the most complex application of artificial intelligence: its model training already involves hundreds of billions of samples and parameter counts at the trillion level. Traditional training setups, mostly single machines with only 4-8 GPU cards, simply cannot meet the performance requirements of models and parameter stores that large.
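As a point of reference for the “stand-alone multi-card” starting point described above, the sketch below shows single-node, multi-GPU data-parallel training. The framework (PyTorch), the toy model and the synthetic data are assumptions for illustration, not the Inspur/Baidu production stack; scaling beyond one node, as the article goes on to describe, replaces this pattern with distributed training over a high-bandwidth, RDMA-capable interconnect.

```python
# Minimal sketch of single-node, multi-GPU ("multi-card") data-parallel
# training. Toy model and synthetic data only; assumes PyTorch and, for the
# multi-GPU path, a machine with more than one visible GPU.
import torch
import torch.nn as nn

def train(steps=100, batch_size=256, n_classes=10, in_features=3 * 32 * 32):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU(), nn.Linear(256, n_classes))
    if torch.cuda.device_count() > 1:
        # Replicates the model on each visible GPU and splits every batch
        # across them; gradients are gathered back onto the primary device.
        model = nn.DataParallel(model)
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(batch_size, in_features, device=device)        # synthetic "images"
        y = torch.randint(0, n_classes, (batch_size,), device=device)  # synthetic labels
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model

if __name__ == "__main__":
    train(steps=10)
```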

To further advance driverless vehicle technology, Inspur and Baidu jointly developed a hyper-scale AI computing module, the SR-AI Rack Scale Server, designed for large-scale data sets and deep neural networks. The product conforms to the latest Scorpio 2.5 rack standard and is the world’s first AI solution built on a PCIe fabric interconnect architecture. Using PCIe switches and I/O box modules that physically decouple and pool GPUs and CPUs in flexible configurations, a single node can scale to 16 GPUs. The SR-AI Rack Scale Server is also China’s first 100G RDMA GPU cluster: its RDMA (remote direct memory access) support lets GPU and memory data be exchanged directly, without CPU involvement, greatly reducing server-side data-processing delays in network transmission. The result is cluster network latency at the nanosecond level and stand-alone processing capacity of up to 512 TFlops, more than double the performance of conventional GPU servers and 5-40 times that of typical AI solutions.

With the support of the new AI computing equipment, Baidu’s driverless vehicles have achieved an accuracy rate of over 99.9% for traffic light recognition and 95% for pedestrian identification. In road tests, with GPU and algorithm support, Baidu’s driverless cars can accurately identify a pedestrian in 0.25 seconds, and further algorithm optimization is expected to reduce this to 0.07 seconds. When an accident is imminent, even 0.01 seconds can be the difference between life and death.

In data center computing, Inspur and Baidu have maintained a strategic partnership for years, jointly developing AI-related computing architectures, technologies and products, with substantial results. The heterogeneous computing servers and FPGA acceleration modules jointly developed by Inspur and Baidu are widely used across Baidu’s artificial intelligence scenarios, including Baidu driverless vehicles and Baidu Brain.

About Inspur

www.inspursystems.com

The post New Driverless “National CP” appeared first on HPCwire.

HSA Foundation Establishes China Regional Committee for Heterogeneous Computing

Fri, 05/12/2017 - 08:26

XIAMEN, China, May 12, 2017 — The HSA Foundation has announced the formation of the China Regional Committee (CRC), with founding members comprised of 20 renowned institutes, universities and standards authorities throughout China. With a focus on growing the HSA ecosystem, the CRC’s mandate is to enhance the awareness of heterogeneous computing and promote the adoption of standards such as Heterogeneous System Architecture (HSA) in China. Dr. Xiaodong Zhang, from Huaxia General Processor Technologies, will serve as the CRC’s chairman.

“The CRC will help define regional heterogeneous computing needs, obtain advice from local experts, help China market segments become more integrated with continuously expanding HSA technologies, and serve as a gateway for the HSA Foundation to be more proactive and effective in addressing heterogeneous computing opportunities and issues affecting the region,” noted Zhang.

“China’s fast growing role in semiconductor innovation, combined with its skilled talent base, makes it a strategically advantageous location for the HSA Foundation to establish its first regional committee. Our hope is to accelerate China’s heterogeneous computing development in line with the standardization work, as well as to benefit the local industry community with high performance heterogeneous systems with reduced complexity. The establishment of the CRC will help significantly in these efforts,” said HSA Foundation President Dr. John Glossner.

“The HSA ecosystem continues to grow rapidly in China and we look forward to further collaborative ventures with our new CRC colleagues,” said HSA Foundation Chairman and Managing Director Greg Stoner.

Glossner said that the HSA Foundation is gaining increasing traction, with recently announced HSA compliant products worldwide, the introduction of the HSA 1.1 specification, and other key developments.

The CRC’s initial members include CESI, a professional institute for standardization in the field of electronics and IT industry in China under the Ministry of Industry and Information Technology (MIIT), and organizations that play an influential role in the HSA ecosystem in China, especially in the fields of artificial intelligence (AI), machine learning, AR/VR and many others which require support from heterogeneous processing. Founding members of the CRC include:

  • China Electronics Standardization Institute (CESI)
  • Fudan University — State Key Laboratory of ASIC and System
  • Hunan Institute of Science and Technology
  • Institute of Computing Technology (ICT), Chinese Academy of Sciences
  • Jiangsu Research Center of Software Digital Radio
  • Nanjing University — State Key Laboratory for Novel Software Technology
  • Nanjing University of Aeronautics and Astronautics
  • Nanjing University of Posts and Telecommunications
  • Nanjing University of Science and Technology
  • Nantong University
  • Peking University
  • Shanghai Advanced Research Institute, Chinese Academy of Sciences
  • Shanghai Institute of Microsystem and Information Technology (SIMIT), Chinese Academy of Sciences
  • Shanghai Jiao Tong University
  • Shanghai Research Center for Wireless Communications
  • Shanghai University
  • Shenyang Institute of Automation, Chinese Academy of Sciences — State Key Laboratory of Robotics
  • Southeast University — State Key Laboratory of Mobile Communications
  • Sun Yat-sen University
  • University of Science and Technology Beijing

2017 Heterogeneous Architecture Standards and Artificial Intelligence Conference

The first CRC Symposium is part of the 2017 Heterogeneous System Architecture Standards and Artificial Intelligence Conference, which will be held in Xiamen on May 25 – 26. The two-day event is co-hosted by CESI, the HSA Foundation and Chinese Association of Artificial Intelligence, with an organizing committee including Huaxia General Processor Technologies, the HSA Foundation CRC, and Xiamen Integrated Circuit Industry Association.

Renowned scholars and officials from related industry organizations will be invited to exchange ideas and discuss standards and technologies for heterogeneous computing and artificial intelligence. A lineup of outstanding industry leaders will speak at the AI conference, joined by numerous attendees from companies in related fields. For more conference information, a list of speakers and online registration, please visit www.hsa-china.com.

HSA is rapidly becoming a mainstream platform to support the promotion and application of the artificial intelligence industry and to develop standards for the next generation of SoCs and heterogeneous processors. The Symposium will bring together dozens of universities, institutes and companies to discuss the HSA Foundation and its development in China. Topics will include standards, key technologies, collaborative development, and software ecosystem construction, among others.

The CRC will also take an active role in developing the second annual Heterogeneous System Architecture 2017 Global Summit (visit www.hsafoundation.com; details to be posted soon). The two-day 2016 event was co-sponsored by the HSA Foundation and the China Semiconductor Industry Association (CSIA), and was also supported by the Beijing Economic and Technological Development Zone (E-Town), the Ministry of Industry and Information Technology of the People’s Republic of China (MIIT), and Cyberspace Administration of China.

Supporting Quotes

China Electronics Standardization Institute

“Heterogeneous computing is the key technology in the next-generation processor design. China Electronic Standardization Institute (CESI), as the primary non-profit and comprehensive research institution for China’s standardization of electronic information technologies, is very pleased to be a member of the CRC, and together with other CRC members, will drive heterogeneous computing standardization work in China. As a member of the HSA Foundation, we look forward to joining global colleagues to improve the HSA technical standardization and better promote the development of next generation processors worldwide including China.”

  • Baoyou Wang, Director of Basic Product Research Center, China Electronics Standardization Institute

Nanjing University

“The School of Microelectronics at Nanjing University focuses on a variety of core disciplines, some of which include multi-core processing chip architectures and implementations, reconfigurable computing, three-dimensional network-on-chip (NoC) design, SoC design and high-performance VLSI implementations in digital signal processing algorithms. Heterogeneous computing is one of today’s hottest technologies and encompasses important applications such as mobile devices, the Internet of Things (IoT), cloud computing, and artificial intelligence. We look forward to working with the HSA Foundation in effectively using CPU, GPU, DSP, FPGA and other hardware and software resources to support research and development of heterogeneous system architectures. We thank the HSA Foundation for facilitating a dedicated research platform for institutions and universities.”

  • Hongbing Pan, Professor, Nanjing University

Shenyang Institute of Automation, Chinese Academy of Sciences

“The institute’s main research directions include wireless sensor and communication technology, and industrial digital control systems. Our research group is engaged in R&D of industrial bus technology related to communications chips, and system-on-chip with communication functions. We look forward to working with HSA Foundation’s CRC where we will focus on the research of heterogeneous multi-core technology for industrial control SoC’s. With the development of China’s ‘Industry 4.0’, the traditional centralized control is transitioning to a decentralized model. Industrial control systems are composed of heterogeneous cores including micro controllers and DSPs connected by a common bus. HSAF technologies address these types of systems providing flexibility, high performance, integration, and miniaturization. We look forward to adopting HSAF technology and evaluating the effectiveness of HSA for industrial control systems.”

  • Chuang Xie, Senior Engineer and Director of SoC Designs, Shenyang Institute of Automation, Chinese Academy of Sciences

Southeast University

“The establishment of HSA Foundation’s CRC will further promote the rapid development of heterogeneous computing technology in the region. Southeast University has made several innovations in deep learning and cloud computing. Its Laboratory of Image Science and Technology, one of the earliest units in China to be involved in image processing, looks forward to contributing innovative technology solutions. This will enable researchers to focus on algorithm research and evaluate their effectiveness in HSA systems.”

  • Aodong Shen, Assistant Professor, Southeast University

Sun Yat-sen University

“Processors are facing great challenges. Moore’s Law is slowing down, while new applications such as big data and artificial intelligence require higher computation and storage capability. Heterogeneous computing is proposed as ‘CPU+’ architecture. It can significantly improve the system performance and energy efficiency for a wide range of application domains, and is evolving to become the main platform for the next generation computation industry. The HSA Foundation aims to standardize the heterogeneous computing architecture. It’s my honor to participate in HSA Foundation’s CRC. We look forward to providing input to the HSA Foundation with regional requirements and application results that will help develop the next generation standard for HSA, and push forward the research, development, and industrialization of heterogeneous computing in China.”

  • Zhiyi Yu, Professor, Sun Yat-sen University

AMD

“We are glad to see the HSA Foundation is expanding, and we will continue to take an active role to participate in heterogeneous computing activities and its open source efforts via the ROCm platform that bring HSA-enabled drivers, runtimes, compiler and tools to the global developer community. We hope together with the new members to promote more academic research in the China region.”

  • Paul Blinzer, AMD Fellow

Huaxia General Processor Technologies

“As a HSA Foundation member, it is exciting to see that universities, institutes and companies in China are joining the CRC and making it a growing platform for heterogeneous computing in the region. Huaxia GPT focuses on designing and licensing embedded HSA-compatible processors and optimizing them to enable quicker, easier programming of high-performance parallel computing devices in heterogeneous ecosystems. We look forward to the future collaboration with these newly joined forces on the cutting-edge applications in the field of machine vision, Internet of Things (IoT), Machine-to-Machine (M2M), edge computing and deep learning.”

  • Kerry Li, CEO, Huaxia General Processor Technologies

Imagination Technologies

“As a founding member of the HSA Foundation, Imagination works closely with other members to create specifications that make it easier to develop and program heterogeneous SoCs, and we are also developing IP cores enabling the realization of such SoCs. The role of China in designing next-generation semiconductors cannot be underestimated, and the HSA Foundation’s CRC can play a key role increasing awareness within the industry of the challenges and solutions around heterogeneous computing.”

  • James Liu, VP and GM China, Imagination Technologies

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source: HSA Foundation

The post HSA Foundation Establishes China Regional Committee for Heterogeneous Computing appeared first on HPCwire.
