HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Singularity HPC Container Technology Moves Out of the Lab

Thu, 05/04/2017 - 09:38

Last week, Singularity – the fast-growing HPC container technology whose development has been spearheaded by Gregory Kurtzer at Lawrence Berkeley National Lab – took a step ‘out of the lab’ with the formation of SingularityWare LLC by Kurtzer. In this Q&A interview with HPCwire, Kurtzer discusses Singularity’s history, adoption and technology trends, and what the organizational change means for Singularity and its growing user base. Singularity remains firmly in the open domain, says Kurtzer.

Container technology, of course, isn’t new. Docker and its ecosystem of tools have stormed through the enterprise computing environment. Enhanced application portability, reproducibility, and collaborative development are among the many attractions. HPC was late to this party, or perhaps more accurately, Docker was less than ideal for HPC. For example, security issues and an inability to run well in closely-coupled computing environments were prominent stumbling blocks.

Gregory Kurtzer, SingularityWare CEO

Singularity was developed specifically to solve those problems and to accommodate HPC needs, and it has enjoyed surprisingly rapid adoption. Kurtzer started working on the project in November 2015; release 1.0 arrived roughly one year ago in April 2016, and the latest release, 2.2.1, was issued in February 2017.

Kurtzer says that Open Science Grid has already served roughly 20 million containers with Singularity. A listing of existing Singularity users and the computational resources being used is available for download at the Singularity web site (still hosted at LBNL, at least for the moment). Some of the institutions working with Singularity include TACC, San Diego Supercomputer Center, GSI Helmholtz Center for Heavy Ion Research, NIH, Stanford University, and LBNL. There is also a small scattering of commercial users.

More than many, Kurtzer in his role as HPC systems architect and technical lead of the HPC group at LBNL has had a close eye on efforts to adapt containers for use in HPC. Incidentally, he will remain a scientific advisor for LBNL but become CEO of SingularityWare, which is being funded by another start-up, RStor. The SingularityWare-RStor relationship seems a little fuzzy at present, but these are early days. Kurtzer informed the community of the changes under the heading ‘Big Singularity Announcement’ last Friday.

HPCwire: Container use – mostly Docker – grew rapidly in the enterprise/commercial space; what did you see as the need and role for container technology in HPC at the start of the Singularity project?

Greg Kurtzer: This occurred in chronological steps. First, the problem to solve was how to support Docker on our HPC resources. Scientists were asking for it, begging for it in fact, yet no HPC centers were able to install Docker on their traditional HPC systems. After dedicating some time to this, talking to many other resource providers and integrators, and becoming very frustrated at the inherent incompatibilities, I had the novel idea of talking to the scientists: learning from them what they need, understanding what problems Docker was solving for them, and working out how best to address that.

These problems that Docker solved for scientists really boiled down to: software environment reproducibility, environment mobility/agility, the ability to leverage the work of their peers, control of their own software stack, and the ability to do all of the above intuitively. But in HPC and on supercomputers, Docker has been deemed a non-starter due to its usage model. For example, it would allow users to control a root-owned daemon process without appropriate precautions in place to securely access and control data or limit escalation of user contexts. Nor is it generally compatible with resource manager workflows or MPI workloads, among lots of other factors. So while Docker was working fine for scientists using loosely coupled or non-parallel applications on private resources, it is a dead-end path for anyone needing to scale up to supercomputers.

Singularity approached this from the perspective of what problems we really need to solve for science and compute, and what we found is that there is a much more appropriate way to solve these problems than shoe-horning Docker, a solution designed for enterprise micro-service virtualization, onto scientific computing.

HPCwire: Can you give a few numbers to illustrate Singularity’s growth? Roughly how many users does Singularity have today, what’s been the rate of growth, and what segments (academia, labs, etc.) have been the most active and why? What does the typical user(s) look like?

Kurtzer: This is really hard to keep track of. I had someone recently come up to me and say “In less than a year, Singularity went from being unheard of to a standard component on every HPC resource I have access to!”. I had no response, but was jumping up and down excitedly … on the inside.

As part of the effort to keep up, I created a Google form that allows people to self-register their systems. On this voluntary registry you will find some of the largest public HPC resources in the world. One of them, at any given moment, is running 2,000 Singularity containers at a time. The OSG (Open Science Grid) has served up well over twenty million containers with Singularity! I can go on, but the gist is that Singularity has been adopted faster than I could keep up with development, support and maintenance.

HPCwire: What have been the dominant use cases (research, development/prototyping, production, etc.) and how do you see that changing over time? Could you briefly cite a couple of examples of Singularity users and what they are using it for?

Kurtzer: I can elaborate on a couple of very general usage examples:

A scientist has a workload where the dependencies and environment are difficult to (re)create or include some binary components specific to a particular distribution or flavor of Linux. Singularity allows this scientist to build a container that properly addresses the dependencies. Once this has been done, that container can be copied to, and run on, any system (private/local, HPC, or cloud) the scientist has access to, assuming of course that Singularity has been installed on that host.
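
To make that concrete, here is a rough sketch of the workflow using the Singularity 2.x command set that was current at the time of this interview; the definition file, image name, and target host are hypothetical, and exact subcommands and flags vary between releases (newer versions fold create/bootstrap into a single build step).

# Create an empty image and populate it from a (hypothetical) definition file
# that captures the workload's dependencies.
$ singularity create --size 2048 analysis.img
$ sudo singularity bootstrap analysis.img analysis.def

# The container is a single file, so moving it to another system is an ordinary copy...
$ scp analysis.img user@cluster.example.edu:/scratch/user/

# ...and, provided Singularity is installed there, the application inside the
# image runs like any other command (here a hypothetical script baked into the image).
$ singularity exec /scratch/user/analysis.img python /opt/workflow/analysis.py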

Cloud computing resources are becoming more common, and a single Singularity container image can include all of the dependencies necessary to bring a workload from site to site and cloud to center. Because the image is a single file, it is very easy to “carry around” an image that has all of the applications, tools and environment you may need.

Not all scientific workflows are public; many scientific libraries, programs and data are controlled (export, classified, trade secrets, etc.), which makes managing the visibility of, and access to, these containers critical. Because Singularity’s single-file images abide by standard POSIX permissions like any other file on a local file system, Singularity is a very capable technology for this use case.
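
Because the whole container is one file, controlling who can see or run it reduces to ordinary Unix file management; a minimal sketch follows, with a hypothetical image name and project group:

# Owner keeps full access, members of a (hypothetical) project group may read and
# execute the container, and everyone else is locked out.
$ chgrp restricted_proj analysis.img
$ chmod 750 analysis.img

# Verify the permissions the same way you would for any other file.
$ ls -l analysis.img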

HPCwire: Maybe we should briefly describe what Singularity is and its underlying technology. What are the key features today, what feature gaps have been identified, and what’s the technology roadmap going forward? I realize the latter may not yet be fleshed out. How do Power- and ARM-based systems fit into the plans?

Kurtzer: Singularity is a container platform designed to support containers utilizing a single image file, which allows users to have full control of where and how their containers are accessed and used. Embedded in the container image file is the entire encapsulation of the contained environment, and running any program, script, or workflow, or accessing any data within, is as easy as running any command-line program. Singularity can also deal with other container formats, like SquashFS-based containers and Docker containers, but those formats are less optimal as general-purpose Singularity containers.

Docker and other enterprise container systems focus on the necessity for full process isolation and strive to give the illusion of sole occupancy of the physical host. For scientific compute, the goal is almost 180 degrees opposite: we want to leverage the host’s resources as directly and efficiently as possible. That may include file systems, interconnects and GPUs. Isolation from these services becomes a detriment in terms of our needs.

I have already ported Singularity to Power and it worked “out of the box” on ARM, but I don’t have much access to these architectures, so I don’t test much on them.

HPCwire: How should we relate Singularity to Docker? Are they competitive or likely to bump against each other in the HPC community?

Kurtzer: Traditionally, HPC refers to a subset of use cases where the applications are very tightly coupled, based on MPI (or PVM), require non-commodity interconnects for decent performance, and can scale to gigantic, extreme sizes. While this form of scientific computing is relatively small compared to the overall range of scientific computing in general (e.g. the long tails), large centers have to build computing resources capable of supporting this highest uncommon denominator of computing. As a result, for a container system to run on such a resource it must be compatible with this architecture.

Docker is designed for micro-service virtualization. While some of the enterprise feature sets I mentioned previously fit scientific uses, Docker is neither designed for nor compatible with the multi-tenant shared compute architectures typically implemented on traditional HPC systems that I described above. For this reason, in HPC, I see no competition, as Docker just isn’t an option.

But outside of traditional HPC, as we look more to the generalized scientific application stack, we do see Docker being used for local, private use cases. Here scientists now have options, with neither tool being “wrong” when it works. As a result, Singularity is building in native compatibility with Docker; for example, in Singularity you can run a container directly from, or bootstrap a new container image from, a remote Docker registry, without having any Docker bits installed on your host.

Thus commands like:

$ singularity shell --nv docker://tensorflow/tensorflow:latest-gpu

work exactly as expected.

BTW, the above command demonstrates how to run GPU-enabled TensorFlow utilizing Singularity’s native Nvidia GPU support, without having TensorFlow installed on the host. After installing Singularity (at present from the ‘development’ GitHub branch), it takes approximately 30 seconds to start running programs like this.
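
For readers who prefer a persistent local image rather than the ephemeral shell above, a hedged sketch of the 2.x-era path is to point a bootstrap definition file at the Docker registry and let Singularity assemble the image itself, with no Docker daemon involved; the file and image names below are hypothetical, and newer releases replace create/bootstrap with a single build command.

# tensorflow.def - a minimal (hypothetical) definition file pulling from Docker Hub:
#   BootStrap: docker
#   From: tensorflow/tensorflow:latest-gpu

$ singularity create --size 4096 tensorflow.img
$ sudo singularity bootstrap tensorflow.img tensorflow.def

# Run the GPU-enabled container the same way as before.
$ singularity exec --nv tensorflow.img python -c "import tensorflow as tf; print(tf.__version__)"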

HPCwire: Given the move to a for-profit organization, what are the expansion plans for Singularity’s user base/target market?

Kurtzer: While being funded primarily by the government, Singularity had limitations in terms of funding, partners, support and growth. Now that we have a sustainable funding and growth model, it is all about building the team and developing new features! Some of these features will include support for backgrounded processes (daemons), trusted computing, integration with cloud orchestration platforms like Kubernetes and Mesos, as well as optimization for object stores. We will hopefully expand our user base even more by addressing more of the scientific use cases – the above command with Nvidia/CUDA support is an example of that – while the “target market,” project goals and direction will remain the same.

HPCwire: Many job schedulers (e.g. Univa) and “cloud orchestrators” (e.g. Cycle Computing) have worked to become “container” friendly; do you expect the same will happen for Singularity?

Kurtzer: Well, Singularity is job-scheduler neutral. Any user can add Singularity container commands into their own batch scripts (as long as Singularity is installed), so all resource managers are supported. As far as orchestration systems outside of traditional HPC, yes! We are recruiting people right now to help with that.
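
As a concrete illustration of that scheduler neutrality, below is a minimal sketch of a batch script that wraps an MPI run in a Singularity container. Slurm is used purely as an example (any resource manager works the same way), the module, image, and binary names are hypothetical, and the hybrid launch pattern assumes the MPI stack inside the container is compatible with the one on the host.

#!/bin/bash
#SBATCH --job-name=singularity-demo
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00

# To the scheduler this is an ordinary job; Singularity is just another command
# invoked inside the batch script, so no special integration is required.
module load singularity            # hypothetical, site-specific module name
mpirun singularity exec /scratch/$USER/analysis.img /opt/app/bin/solver input.dat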

HPCwire: What does moving Singularity into SingularityWare mean for you personally?

Kurtzer: My previous capacity at LBNL was HPC systems architect and technical lead of the HPC group. After I developed Singularity, my “day job” did not vanish, so Singularity has been a side project that I’ve been working on primarily in the evenings and on weekends. Working with RStor to create SingularityWare, LLC, has enabled me to focus my time and efforts on Singularity development and on building the community and project by making it my primary effort.

HPCwire: As a for-profit entity, SingularityWare’s goals and aspirations are presumably somewhat different?

Kurtzer: SingularityWare, LLC, is a hosting platform for Singularity (and Warewulf) rather than a for-profit entity. As with any thriving project, it is important not to change anything, but only to add value. This means there will be no changes in the existing support, development, contribution or release models that people have become accustomed to. Additionally, because no part of the fiscal sustainability model relies on commercial support contracts, services, consulting or paid licenses, there is no pressure to “sell” Singularity.

Of course, if you find that you need more support services than what is currently provided via the open source community, then our door is always open. The same goes for technology partnerships.

HPCwire: Perhaps review the changes along the lines of how users will/can receive support, what (products) to expect from SingularityWare, and how community contributions will be handled?

Kurtzer: Given that RStor is funding my time (and others’) for Singularity development, I would raise my hand, on behalf of my team at RStor, as a provider of support and services for Singularity. As far as community contributions go, all commits originating from RStor will assign copyright to SingularityWare, LLC. Contributions from other sources will be maintained and accepted exactly as they are now.

HPCwire: So what’s your new title and are you building a staff at SingularityWare?

Kurtzer: At RStor, my title is “Senior Architect” and YES, I am looking to hire developers! C programmers, Python, and Go developers please send me your resumes ASAP! I am also interested in working with academia and helping them receive funding for interns, grads and postdocs to contribute to Singularity. With regard to SingularityWare, LLC… I suppose I have the fancy title of CEO of a one person company.

HPCwire: I couldn’t find much online about RStor. What does partner/support mean here? What’s the relationship between the two? Will you have a position at RStor?

Kurtzer: To reiterate the above, my primary responsibility at RStor is development and leadership of Singularity, for which they are providing me a team. But given that they are doing some fantastic stuff, I find myself drawn to helping make sure their storage platform is highly optimized for the use cases that hit home personally for me (e.g. research data management – RDM).

This is one of the reasons I decided on this path; an appropriate RDM solution is severely lacking in the scientific industry. When I learned what RStor was planning to do, I became very excited about their direction and was impressed by their leadership and vision. I saw that RDM becomes easily tangible thanks to the technologies they are creating, and if you couple RDM with Singularity’s single-image-file-based containerization, you end up with the holy grail of reproducible and agile computing.

HPCwire: What haven’t I asked that I should? Please add what you think is important.

Kurtzer: Warewulf is another project that I lead; it is a widely utilized cluster management and provisioning system. Warewulf has been around for 15+ years and is currently the basis of provisioning for OpenHPC. I have giant, grand visions and have had many discussions with Intel, other national labs, and other companies about how to make Warewulf exascale-ready. I also have a commitment from RStor to facilitate this as well.

Submissions for 2017 ACM SIGHPC Emerging Woman Leader Award Close June 30

Wed, 05/03/2017 - 19:58

May 3, 2017 — In the fields of high performance computing (HPC) and technical computing – as elsewhere in computing – there are fellowships and awards for achievements occurring at the graduate-student, early-career, and mature-career stages. There are very few awards recognizing individuals in the middle stage of their careers, and none aimed specifically at women. These are the years when faculty are working toward promotion and practitioners are moving through middle levels of management, a period which can be especially challenging for women. “Technical computing” includes all of the various fields that are part of what we think of as HPC – areas such as visualization, analytics, operations, scientific application software (creation and porting/tuning), libraries, and so on – as well as professionals working with everything from small, workgroup-sized systems to leading systems on the TOP500 list.

The ACM SIGHPC Emerging Woman Leader in Technical Computing (EWL/TC) is a biennial award open to any woman who has engaged in HPC and technical computing research, education, and/or practice for 5-15 years since receiving her highest degree. This international award creates a new career milestone, and consists of a $2,000 honorarium, travel support to the SC conference, and a recognition plaque. It also establishes a cohort of role models for students and professionals who are just getting started in our field.

  • Submissions open: April 13, 2017
  • Submissions close: June 30, 2017
  • Winner announced: July 2017

The award is presented every two years, with the first presentation in November during SC17.

See our how to nominate section for a description of what is required to nominate a candidate.

Award Committee

Francine Berman,  Rensselaer Polytechnic Institute
Candace Culhane (Chair), Los Alamos National Laboratory
Lori Diachin, Lawrence Livermore National Laboratory
Ron Perrott, University of Oxford

Source: SIGHPC

China Races to Show Quantum Advantage

Wed, 05/03/2017 - 18:15

Quantum computing has been one of those long-promised breakthroughs, forever on the horizon yet just out of reach, but a quickening is upon us with Google, IBM and others vying to cross a major threshold in a matter of months (as reported recently by MIT News). The target they are gunning for is quantum supremacy, the term coined by theoretical physicist John Preskill to define the point at which a quantum processor surpasses the ability of the largest classical supercomputer to carry out a well-defined problem.

A front-runner in the global supercomputing race, China is also proving itself a world leader in quantum research. At a press conference in Shanghai on Wednesday, a quantum research team from Eastern China announced they had hit a milestone in creating a quantum machine that can compete with today’s classical computers.

Team leader Pan Jianwei, a quantum physicist and academician of the Chinese Academy of Sciences, and his colleagues Lu Chaoyang and Zhu Xiaobo (of the University of Science and Technology of China) and Wang Haohua (of Zhejiang University) reported that their quantum processors could solve certain tasks faster than classical machines.

Experimental set-up for multiphoton boson-sampling (Source: Nature Photonics)

The researchers’ quantum device is called a boson sampling machine, considered “a strong candidate to demonstrate ‘quantum computational supremacy’ over classical computers,” according to the team. At the crux of the advance are two primary components: “robust multiphoton interferometers with 99% transmission rate and actively demultiplexed single-photon sources based on a quantum dot–micropillar with simultaneously high efficiency, purity and indistinguishability.”

The scientists’ implementations of three-, four- and five-photon boson sampling achieve sampling rates of 4.96 kHz, 151 Hz and 4 Hz, respectively, reaching speeds 24,000 times faster than previous experiments. “Our architecture can be scaled up for a larger number of photons and with higher sampling rates to compete with classical computers,” they write in their paper, which was published in the scientific journal Nature Photonics on Tuesday.

The Chinese team said their prototype quantum computing machine is 10 to 100 times faster than the first electronic computer, ENIAC, and the first transistor computer, TRADIC, and could “one day outperform conventional computers.”

University of Texas at Austin Professor Scott Aaronson, who proposed the boson sampling machine, reported that the research showed “exciting experimental progress.”

“It’s a step towards boson sampling with say 30 photons or some number that’s large enough that no one will have to squint or argue about whether a quantum advantage has been attained,” he told the South China Morning Post.

Pan Jianwei is also the chief engineer of the world’s first quantum satellite, launched by China in August 2016. The goal of the project is to enable ultra-secure, “hack-proof” quantum communications and to demonstrate features of quantum theory, such as entanglement. In January, Pan stated, “the overall performance has been much better than we expected; it will allow us to conduct all our planned experiments using the satellite ahead of schedule and even add some extra ones.”

NASA Issues a Challenge to Speed Up Its ‘FUN3D’ Supercomputer Code

Wed, 05/03/2017 - 14:21

May 3 — Do you, or someone you know, know how to program computers? NASA has a challenging assignment for you.

NASA’s aeronautical innovators are sponsoring a competition to reward qualified contenders who can manipulate the agency’s FUN3D design software so it runs ten to 10,000 times faster on the Pleiades supercomputer without any decrease in accuracy.

The competition is called the High Performance Fast Computing Challenge (HPFCC).

“This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, director of NASA’s Transformative Aeronautics Concepts Program (TACP). “Helping NASA speed up its software to help advance our aviation research is a win-win for all.”

NASA’s aviation research is based on what is often described as a three-legged stool.

One leg sees initial designs tested with computational fluid dynamics, or CFD, which relies on a supercomputer for numerical analysis and data structures to solve and analyze problems.

Another leg involves building scale models to test in wind tunnels and hopefully confirm previous CFD results.

The third leg takes the research into the air, such as with experimental aircraft – or X-planes – that can fly with or without pilots, to further analyze and demonstrate a particular technology’s capability.

“This challenge is specifically targeted to speed up the CFD portion of our aerospace research,” said Michael Hetle, TACP program executive. “Some concepts are just so complex, it’s difficult for even the fastest supercomputers to analyze these models in real time. Achieving a speed-up in this software by orders of magnitude hones the edge we need to advance our technology to the next level!”

The FUN3D software is written predominantly in Modern Fortran. Since the code is owned by the U.S. government, it carries strict export restrictions requiring all challenge participants to be U.S. citizens over the age of 18.

NASA is looking for qualified people who can download the FUN3D code, analyze the performance bottlenecks, and identify possible modifications that might lead to reducing overall computational time.

An example of a modification would be simplifying a single subroutine so that it runs a few milliseconds faster. If that subroutine is called millions of times, this one change could dramatically speed up the entire program’s runtime.

The HPFCC is supported by two NASA partners – HeroX and TopCoder – and offers two specific opportunities to compete. A prize purse of up to $55,000 will be distributed among first and second finishers in two categories.

To take on this challenge (you don’t need a supercomputer to do so), just visit https://herox.com/HPFCC. Code submissions must be received by 5 p.m. EDT, June 29, and winners will be announced August 9.

For more information about this challenge, the FUN3D software, or the Pleiades supercomputer, send an email to hq-fastcomputingchallenge@mail.nasa.gov.

Source: J.D. Harrington, NASA

SDSC to Double Comet’s Graphic Processor Count

Wed, 05/03/2017 - 14:09
SDSC’s Petascale Comet Supercomputer. Credit: Ben Tolo, SDSC

SAN DIEGO, Calif., May 3, 2017 — The San Diego Supercomputer Center (SDSC) at the University of California San Diego has been granted a supplemental award from the National Science Foundation (NSF) to double the number of graphic processing units, or GPUs, on its petascale-level Comet supercomputer in direct response to growing demand for GPU computing across a wide range of research domains.

Under the supplemental NSF award, valued at just over $900,000, SDSC is expanding the high-performance computing resource with the addition of 36 GPU nodes, each with four NVIDIA P100s, for a total of 144 GPUs. This will double the number of GPUs on Comet from the current 144 to 288. The nodes are expected to be in production by early July.

The expansion will make Comet the largest provider of GPU resources available to the NSF-funded Extreme Science and Engineering Discovery Environment (XSEDE), a national partnership of institutions that provides academic researchers with the most advanced collection of digital resources and services in the world. Prior to this award, the NSF granted SDSC a total of $24 million to develop and operate Comet, which went into production in mid-2015.

Once used primarily for video game display graphics, today’s much more powerful GPUs offer greater accuracy, speed, and accessible memory for scientific applications ranging from phylogenetics and molecular dynamics to some of the most detailed seismic simulations ever made, which help better predict ground motions to save lives and minimize property damage.

“This expansion is reflective of a wider adoption of GPUs throughout the scientific community, which is being driven in large part by the availability of community-developed applications that have been ported to and optimized for GPUs,” said SDSC Director Michael Norman, who is also the principal investigator for the Comet program.

Applications include but are not limited to GPU-memory management systems such as VAST, analysis of data from large scientific instruments, and molecular dynamics software packages such as AMBER, LAMMPS, and BEAST – the latter used extensively by SDSC’s Cyberinfrastructure for Phylogenetic Research (CIPRES) science gateway, which receives the majority of its computing resources from Comet.

Francis Halzen, principal investigator of the IceCube Neutrino Observatory, the first detector of its kind designed to observe the cosmos from deep within the South Pole ice, welcomes the new GPU addition.

“The IceCube neutrino detector transforms natural Antarctic ice at the South Pole into a particle detector,” said Halzen, also a physics professor at the University of Wisconsin-Madison. “Progress in understanding the precise optical properties of the ice leads to increasing complexity in simulating the propagation of photons in the instrument and to a better overall performance of the detector.”

“This expansion will help the XSEDE organization meet increased demand for GPU resources from these areas, as well as prepare for research in new areas, such as machine learning, which has become increasingly important for a wide range of research in areas including image processing, bioinformatics, linguistics, and others,” said SDSC Director of Scientific Applications and Comet Co-PI Bob Sinkovits.

The P100 is NVIDIA’s newest GPU. SDSC benchmarking tests show that for some applications the GPUs achieve speed-ups of two times over the K80 GPUs already in Comet. The new GPU nodes will be added to the existing Comet GPU nodes and become a separately allocable resource.

The amended NSF award number for Comet, including the GPU additions, is FAIN 1341698. The award is estimated to run until March 30, 2020.

Source: Jan Zverina, SDSC

12 Teams Will Compete at ISC17 Student Cluster Competition

Wed, 05/03/2017 - 13:54

May 3, 2017 — Here are the twelve teams that will take part in the three-day student cluster competition, from June 19-21, 2017, during the 32nd annual ISC High Performance 2017 conference and exhibition, in Frankfurt, Germany.

Supporting the global demand for Science, Technology, Engineering and Mathematics (STEM) talent, the sixth annual competition introduces university teams, each composed of six students and up to two advisors. In addition to applying the knowledge gained through their education, SCC participants learn valuable new skills, including developing basic proposals, obtaining required sponsorships, securing industry partnerships, and designing a platform to run benchmarks and applications within a limited power budget. The competition also nurtures a healthy competitive spirit, develops camaraderie, and establishes early professional relationships and even lifelong friendships that might never have occurred otherwise. All of these experiences have become as much a benefit of the SCC as the hands-on live-learning experience itself.

For more information, visit www.hpcadvisorycouncil.com.

Source: ISC High Performance/HPC Advisory Council

One Stop Systems Announces SkyScale

Wed, 05/03/2017 - 12:43

ESCONDIDO, Calif., May 3, 2017 — One Stop Systems (OSS) today announces the launch of SkyScale, a new company that provides HPC as a Service (HPCaaS). For years OSS has been designing and manufacturing the latest in high performance computing and storage systems. Now customers can lease time on these same systems, saving time and money. OSS systems are the distinguishing factor for SkyScale’s HPCaaS offering. OSS has been the first company to successfully produce a system that can operate sixteen of the latest NVIDIA Tesla GPU accelerators connected to a single server. These systems are employed today in deep learning applications and in a variety of industries including defense and oil and gas.

“Employing OSS compute and flash storage systems gives SkyScale an overwhelming advantage over competitive companies offering HPC services,” said Steve Cooper, CEO of OSS. “All of the systems available at SkyScale are the same systems currently used in the field for defense and machine learning applications with proven reliability. By making these systems available on a time-rental basis, we’re letting developers take advantage of the most sophisticated systems to run their algorithms without having to own the equipment.”

“The NVIDIA Tesla GPU computing platform delivers massive leaps in performance compared to CPU-only systems for HPC applications such as deep learning,” said Paresh Kharya, NVIDIA Tesla Product Management Lead. “By leveraging the massively parallel processing capability of our platform, SkyScale is helping its customers address their most demanding computational challenges.”

Please visit www.skyscale.com for more information and call our experienced sales engineers for a quote and to discuss specific requirements. Systems are immediately available for scheduling.

About One Stop Systems

One Stop Systems designs and manufactures supercomputers for high performance computing (HPC) applications such as deep learning, oil and gas exploration, financial trading, defense and any other applications that require the fastest and most efficient data processing. By utilizing the power of the latest GPU accelerators and flash storage cards, our systems stay on the cutting edge of the latest technologies. We have a reputation as innovators using the very latest technology and design equipment to operate with the highest efficiency. Now OSS offers these exceptional systems to customers who prefer to lease time on them instead of, or in addition to, purchasing them. OSS is always working to meet our customers’ greater needs.

About SkyScale

SkyScale is a world-class provider of cloud-based, ultra-fast multi-GPU hardware platforms for lease to customers desiring the fastest performance available as a service anywhere in the world. SkyScale builds, configures, and manages dedicated systems strategically located in maximum-security facilities, allowing customers to focus on results while minimizing capital equipment investment. Learn more at www.SkyScale.com.

Source: One Stop Systems

Cavium to Demonstrate Semiconductor Solutions at NFV World Congress 2017

Wed, 05/03/2017 - 09:15

SAN JOSE, Calif., May 3, 2017 — Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, datacenter, cloud, wired and wireless networking, will demonstrate the company’s next generation NFV, SDN, 5G, and Telco Cloud Infrastructure solutions at NFV World Congress May 3rd – 5th at the DoubleTree Hotel in San Jose, California.

NFV World Congress was founded in 2014 by a group of visionary and senior carrier leaders, in order to form a center of gravity for the rapidly emerging NFV industry, and provide the right forum to accelerate innovation and support ecosystem growth. Over 1,000 delegates attended the launch event in spring 2015, growing to over 1,300 delegates in 2016, along with 50+ exhibitors and over 20 live demos.

Cavium will showcase the following demonstrations in booth #10:

  • M-CORD Demonstration: SDN, NFV, C-RAN and Multi-access Edge Computing
    • Open source SDN/NFV proof-of-concept demonstration based on our collaboration with ON.Lab and multiple network operators. M-CORD is the Mobile Central Office Re-architected as a Datacenter project. This PoC demonstration features Cavium processor-based COTS hardware from the RAN to the core network. In addition, commercial VNFs as Edge services will be demonstrated in a Multi-access Edge Computing scenario running on an NFV infrastructure that is based on Cavium ThunderX ARM-based COTS servers and an XPliant-based SDN white box switch.
  • Multi-mode Basestation On-a-Chip
    • Single-chip Macrocell / Microcell and smart radio head processors supporting 3GPP releases 10 – 13.
  • 64-bit ARM Processors for COTS Servers, Carrier-Grade NFV across Cloud/Edge/On-Premise, IoT Gateway, Embedded Applications
    • ThunderX servers and OCTEON TX ODM boxes featuring rich ecosystems: OS distros, dataplane and security processing acceleration, Ceph storage and more.
  • Telco Cloud Security and NFV Acceleration
    • LiquidIO intelligent adapters enable network virtualization for network and security acceleration including OVS, vRouter, firewall, IPsec, NFV-based virtual appliances and Service Function Chaining. ETSI NFV ISG PoC #41 “Network Function Acceleration” scenarios.
  • QLogic FastLinQ Ethernet Adapter Family
    • 10/25/50/100 GbE connectivity with DPDK, NVMf with Universal RDMA, OVS, SDN tunneling and OpenStack support for telco and private/hybrid cloud applications.
  • Security as a Service for Telco Cloud
    • LiquidSecurity appliance provides key management and crypto offload for virtual network functions such as vADC and vFW.

To schedule a meeting with Cavium, please contact your local sales account manager or Lilly Ly (lilly.ly@cavium.com). Please enter Meeting Request at NFV World Congress 2017 in the subject line.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

Cycle Computing Brings CycleCloud to GPU Technology Conference

Wed, 05/03/2017 - 08:39

NEW YORK, May 3, 2017 —  Cycle Computing, the global leader in Big Compute and Cloud HPC orchestration software, today announced that it will be in attendance at the GPU Technology Conference to be held May 8-11 at the San Jose Convention Center in San Jose, CA. Cycle is set to showcase how its flagship CycleCloud supports GPU in the cloud. Discussions and demos will be held at Cycle’s booth 530, at the show.

GPU (graphics processing unit) instances from cloud service providers are supported for orchestration and monitoring, including GPU-specific metrics. GPUs enable important research, and with CycleCloud users can get the most out of their cloud instances, as the software enables, manages and optimizes large computational workloads.

“Our customers know the power and flexibility that the cloud uniquely provides. Our CycleCloud software provides built in GPU support, enabling customers to quickly get workloads up and running with built in GPU performance monitoring,” said Jason Stowe, CEO, Cycle Computing. “CycleCloud makes it easy to manage GPU-based workflows with the cost controls, performance monitoring, data management, and security features that organizations need.”

The GPU Technology Conference is the largest and most important event of the year for GPU developers. It showcases the most vital work in the computing industry today, including deep learning and AI, big data analytics, virtual reality, and self-driving cars with the brightest scientists, developers, graphic artists, designers, researchers, engineers, and IT managers who use GPUs to tackle computational and graphics challenges. The GPU Technology Conference attracts developers, researchers, and technologists from some of the top companies, universities, research firms and government agencies from around the world that seek to benefit from valuable training, labs and connections to quickly advance understanding in the incredibly dynamic fields that comprise GPU computing.

Cycle Computing’s CycleCloud orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. CycleCloud takes the delays, configuration, administration, and sunk hardware costs out of HPC clusters. CycleCloud easily leverages multi-cloud environments, moving seamlessly between internal clusters, Google Cloud Platform, Microsoft Azure, Amazon Web Services, and other cloud environments.

More information about the CycleCloud cloud management software suite can be found at www.cyclecomputing.com.

About Cycle Computing

Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users, running multiple applications, across multiple clouds, enabling users to never wait for compute and solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in many Global 2000 manufacturing, Big 10 Life Insurance, Big 10 Pharma, Big 10 Hedge Funds, startups, and government agencies, to leverage hundreds of millions of hours of cloud based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com

Source: Cycle Computing

Open Source Test Suite Adds to Broad Toolset for Heterogeneous System Architecture Development

Tue, 05/02/2017 - 17:55

BEAVERTON, Ore., May 2, 2017 – The HSA Foundation has made available to developers the HSA PRM (Programmer’s Reference Manual) conformance test suite as open source software. The test suite is used to validate Heterogeneous System Architecture (HSA) implementations for both the HSA PRM Specification and HSA PSA (Platform System Architecture) specification.

With this addition to the already available HSA Runtime Conformance tests, HSA developers now have a fully open source conformance test suite for validating all aspects of HSA systems.

HSA is a standardized platform design that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices. It allows developers to easily and efficiently apply the hardware resources—including CPUs, GPUs, DSPs, FPGAs, fabrics and fixed function accelerators—in today’s complex systems-on-chip (SoCs).

“The HSA Foundation has always been a strong proponent of open source development tools directly and through its member companies,” said HSA Foundation Chairman Greg Stoner. “Open sourcing worldwide the PRM conformance test suite is yet another example of an expanding array of development tools freely available supporting HSA.”

According to HSA Foundation President Dr. John Glossner, “The decision to open source the conformance test suite is strongly supported by the HSA Foundation and we believe this is an important step for allowing the developer community including non-member China Regional Committee (CRC) participants to test HSA systems. With the ability to develop conformance tests, the community can now contribute to the new test and thus drive the continual improvement of the test quality and consistency.”

“Good quality open source components are crucial in making heterogeneous computing more accessible to programmers and standards adopters. It is great to see that HSA Foundation continues its open source strategy by releasing the important PRM conformance test suite to the public,” said Dr. Pekka Jääskeläinen, CEO of Parmance.

The HSA Foundation, through its member companies and universities, has also released many additional projects, all available on the Foundation’s GitHub site, including:

  • HSAIL Developer Tools: finalizer, debugger, assembler, and simulator
  • GCC HSAIL frontend developed by Parmance and General Processor Technologies (GPT) allowing gcc finalization for any gcc machine target; the frontend is included in the upcoming GCC 7 release
  • Heterogeneous compute compiler (hcc) for single-source compilation of heterogeneous systems
  • Runtime implementations including AMD’s ROCm and phsa-runtime by Parmance and GPT; phsa-runtime can be used together with GCC HSAIL frontend to support the entire HSA programming stack using open source components
  • Portable Computing Language (pocl), an open source implementation of the OpenCL standard with a backend for HSA developed by the Customized Parallel Computing group of Tampere University of Technology (TUT) – an HSA Foundation Academic Center of Excellence

See the complete roster at: https://github.com/HSAFoundation.

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, Academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive. HSA members are building a heterogeneous computing ecosystem, rooted in industry standards, which combines scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance with low power consumption. HSA defines interfaces for parallel computation using CPU, GPU and other programmable and fixed function devices, while supporting a diverse set of high-level programming languages, and creating the foundation for next-generation, general-purpose computing.

Source:  HSA Foundation

Trump Establishes American Technology Council

Tue, 05/02/2017 - 15:48

U.S. President Donald Trump has established the American Technology Council with an executive order. So far there are few details about the ATC’s mission, although Trump will chair the group. Apparently the president signed the order last Friday, although it is dated May 1, 2017.

As stated in the executive order: “It is the policy of the United States to promote the secure, efficient, and economical use of information technology to achieve its missions. Americans deserve better digital services from their Government. To effectuate this policy, the Federal Government must transform and modernize its information technology and how it uses and delivers digital services.” The ATC will guide implementation of the policy.

At first glance, the goals seem government-centric rather than broadly focused on IT use throughout society or its effective advancement as a competitive U.S. strategy. This contrasts with the National Strategic Computing Initiative (NSCI), established by President Obama with an executive order, which laid out a broader HPC-focused vision. Indeed the two may not be comparable, and NSCI, for all its ambition, has seemed to lose steam recently.

CNN reports the council falls under the White House Office of American Innovation, led by Trump’s son-in-law Jared Kushner. In addition to Kushner and his lieutenants Chris Liddell and Reed Cordish, members of the ATC include the President; the Vice President; the Secretary of Defense; the Secretary of Commerce; the Secretary of Homeland Security; the Director of National Intelligence; the OMB Director; the Director of the Office of Science and Technology Policy; and the U.S. CTO.

Blue Waters Study Dives Deep into Performance Details

Tue, 05/02/2017 - 14:55

If you’ve wondered what, exactly, the NCSA supercomputer Blue Waters has been doing since being fired up in 2013, a new report is full of details on workloads, CPU/GPU use patterns, memory and I/O issues, and a plethora of other metrics. Released in March, the study – Final Report: Workload Analysis of Blue Waters – provides a wealth of information on demand and performance. Blue Waters has supplied roughly 17.3 billion core hours to scientists to date.

“When the system was originally configured, it was not clear what balance of CPU or GPU should be in the system. We set the ratio based on analysis of the science teams approved to use Blue Waters and consultation with accelerated computing experts,” said Greg Bauer, applications technical program manager at NCSA. “The workload study shows the balance we went with is very reasonable, and that we were ready to keep up with the demand for the first three years.”

Blue Waters, of course, is the Cray XE6/XK7 supercomputer at the National Center for Supercomputing Applications (NCSA). It’s a formidable 13-petaflops (peak) machine with two types of nodes connected via a single Cray Gemini High Speed Network in a large-scale 3D torus topology. The two node types are XE6 (AMD 6276 Interlagos processors) and XK7 (AMD 6276 plus Nvidia Kepler K20X GPUs). The NCSA supercomputer employs a high performance online storage system with over 25 PB of usable storage (36 PB raw) and over 1 TB/s sustained performance.

As noted in the report, “The workload analysis itself was a challenging computational problem – requiring more than 35,000 node hours (over 1.1 million core hours) on Blue Waters to analyze roughly 95 TB of input data from over 4.5M jobs that ran on Blue Waters during the period of our analysis (April 1, 2013 – September 30, 2016) that spans the beginning of Full Service Operations for Blue Waters to the recent past. In the process, approximately 250 TB of data across 100M files was generated. This data was subsequently entered into MongoDB and a MySQL data warehouse to allow rapid searching, analysis and display in Open XDMoD. A workflow pipeline was established so that data from all future Blue Waters jobs will be automatically ingested into the Open XDMoD data warehouse, making future analyses much easier.”

The report is a rich and also dense read. Here are a few highlights:

  • The National Science Foundation MPS (Math and Physical Sciences) and Biological Sciences directorates are the leading consumers of node hours, typically accounting for more than 2/3 of all node hours used.
  • The number of fields of science represented in the Blue Waters portfolio has increased in each year of its operation – more than doubling since its first year of operation, providing further evidence of the growing diversity of its research base.
  • The applications run on Blue Waters represent an increasingly diverse mix of disciplines, ranging from broad use of community codes to more specific scientific sub-disciplines.
  • The top 10 applications consume about 2/3 of all node hours, with the top 5 (NAMD, CHROMA, MILC, AMBER, and CACTUS) consuming about 50%.
  • Common algorithms, as characterized by Colella’s original seven dwarfs, are roughly equally represented within the applications run on Blue Waters aside from unstructured grids and Monte Carlo methods, which exhibit a much smaller fraction.

The pie chart below depicts the current Blue Waters workload (5/2/17).

One of many interesting questions examined is how use of the different node types varied. Here’s an excerpt:

For XE node jobs, all of the major science areas (> 1 million node hours) run a mix of job sizes and all have very large jobs (> 4096 nodes). The relative proportions of job size vary between different parent science areas. The job size distribution weighted by node hours consumed peaks at 1025 – 2048 for XE jobs. The largest 3% of the jobs (by node hours) account for 90% of the total node-hours consumed.

The majority of XE node hours on the machine are spent running parallel jobs that use some form of message passing for inter-process communication. At least 25% of the workload uses some form of threading, however the larger jobs (> 4096 nodes) mostly use message passing with no threading. There is no obvious trend in the variation of thread usage over time, however, thread usage information is only available for a short time period.

For the XK (GPU) nodes, the parent sciences Molecular Biosciences, Chemistry and Physics are the largest users with NAMD and AMBER the two most prevalent applications. The job size distribution weighted by node hours consumed peaks at 65 – 128 nodes for the XK jobs. Similarly to the XE nodes, the largest 7% of the jobs (by node-hour) account for 90% of the node-hours consumed on the XK nodes.

The aggregate GPU utilization (efficiency) varies significantly by application, with MELD achieving over 90% utilization and GROMACS, NAMD, and MILC averaging less than 30% GPU utilization. However, for each of the applications, the GPU utilization can vary significantly from job to job.

Blue Waters has enabled groundbreaking research in many areas. One project that no other supercomputer could handle was led by Carnegie Mellon University astronomer Tiziana Di Matteo. While it wasn’t her first simulation on a leadership-class supercomputer, it was her most detailed, allowing her to see the first quasars in her simulation of the early universe.

“The Blue Waters project,” Di Matteo wrote in a Blue Waters report, “made possible this qualitative advance, making possible what is arguably the first complete simulation (at least in terms of the hydrodynamics and gravitational physics) of the creation of the first galaxies and large-scale structures in the universe.”

For those wishing a still substantive but less dense look at Blue Waters, NCSA released the 2016 Blue Waters annual report today.

Link to Blue Waters report: https://arxiv.org/ftp/arxiv/papers/1703/1703.00924.pdf

Link to Blue Waters 2016 annual report: https://bluewaters.ncsa.illinois.edu/portal_data_src/BW_AR_16_linked.pdf

Cray Reports First Quarter 2017 Financial Results

Tue, 05/02/2017 - 14:40

SEATTLE, May 02, 2017 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced financial results for its first quarter ended March 31, 2017.

All figures in this release are based on U.S. GAAP unless otherwise noted.  A reconciliation of GAAP to non-GAAP measures is included in the financial tables in this press release.

Revenue for the first quarter of 2017 was $59.0 million, compared to $105.5 million in the first quarter of 2016.  Net loss for the first quarter of 2017 was $19.2 million, or $0.48 per diluted share, compared to a net loss of $5.0 million, or $0.13 per diluted share in the first quarter of 2016.  Non-GAAP net loss was $28.4 million, or $0.71 per diluted share for the first quarter of 2017, compared to non-GAAP net loss of $5.3 million, or $0.13 per diluted share for the same period of 2016.

Overall gross profit margin on a GAAP and Non-GAAP basis for the first quarter of 2017 was 40%, compared to 38% for the first quarter of 2016.

Operating expenses for the first quarter of 2017 were $56.1 million, compared to $49.2 million for the first quarter of 2016.  Non-GAAP operating expenses for the first quarter of 2017 were $53.3 million, compared to $46.3 million for the first quarter of 2016.

As of March 31, 2017, cash, investments and restricted cash totaled $285 million. Working capital at the end of the first quarter was $350 million, compared to $373 million at the end of 2016.

“As expected, we got off to a slower start to the year,” said Peter Ungaro, president and CEO of Cray. “While activity at the high-end of the supercomputing market continues to be relatively slow and our visibility remains limited, our competitive position remains strong.  We were recently awarded several significant new contracts in the worldwide weather and climate segment — a market where our leadership position continues to expand.  We also released our 2017 revenue outlook today which, driven by the ongoing market conditions, is significantly lower than where we finished 2016.  Despite this, we continue to be confident in our ability to drive long-term growth over time.”

Outlook
For 2017, while a wide range of results remains possible, Cray expects revenue to be in the range of $400-$450 million for the year.  Revenue in the second quarter of 2017 is expected to be approximately $60 million.  GAAP and non-GAAP gross margins for the year are expected to be in the low- to mid-30% range.  Non-GAAP operating expenses for 2017 are expected to be roughly flat with 2016 levels.  For 2017, GAAP operating expenses are anticipated to be about $12 million higher than non-GAAP operating expenses, and GAAP gross profit is expected to be about $1 million lower than non-GAAP gross profit.

Actual results for any future periods are subject to large fluctuations given the nature of Cray’s business.

Recent Highlights

  • In May, Cray announced that it was selected to deliver a Cray CS400 system to the Laboratory Computing Resource Center at Argonne National Laboratory.  The new 1.5 petaflops Cray system will serve as the Center’s flagship cluster.
  • In April, Cray announced that it signed a solutions provider agreement with Mark III Systems, to develop, market and sell solutions that leverage Cray’s portfolio of supercomputing and big data analytics systems.
  • In April, Cray completed its office move from downtown St. Paul to The Offices @ MOA in Bloomington, Minnesota.  This office is now fully operational, housing more than 350 Cray employees.
  • In January, Cray was selected by the GW4 Alliance and The Met Office in the UK to deliver the hardware and support for a new Tier 2 high performance computing service for UK-based scientists.  This unique new service will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of processors.
  • In the last several months, Cray was awarded significant new contracts to deliver Cray supercomputers and storage systems to multiple leading weather and climate research centers around the world.  None of these awards has yet been announced individually.

Conference Call Information
Cray will host a conference call today, Tuesday, May 2, 2017 at 1:30 p.m. PDT (4:30 p.m. EDT) to discuss its financial results for the first quarter ended March 31, 2017.  To access the call, please dial into the conference at least 10 minutes prior to the beginning of the call at (855) 894-4205. International callers should dial (765) 889-6838 and use the conference ID #56308196.  To listen to the audio webcast, go to the Investors section of the Cray website at www.cray.com/company/investors.

If you are unable to attend the live conference call, an audio webcast replay will be available in the Investors section of the Cray website for 180 days.  A telephonic replay of the call will also be available by dialing (855) 859-2056, international callers dial (404) 537-3406, and entering the conference ID #56308196.  The conference call replay will be available for 72 hours, beginning at 4:45 p.m. PDT on Tuesday, May 2, 2017.

Use of Non-GAAP Financial Measures
This press release contains “non-GAAP financial measures” under the rules of the U.S. Securities and Exchange Commission (“SEC”).  A reconciliation of U.S. generally accepted accounting principles, or GAAP, to non-GAAP results is included in the financial tables included in this press release.  Management believes that the non-GAAP financial measures that we have set forth provide additional insight for analysts and investors and facilitate an evaluation of Cray’s financial and operational performance that is consistent with the manner in which management evaluates Cray’s financial performance.  However, these non-GAAP financial measures have limitations as an analytical tool, as they exclude the financial impact of transactions necessary or advisable for the conduct of Cray’s business, such as the granting of equity compensation awards, and are not intended to be an alternative to financial measures prepared in accordance with GAAP.  Hence, to compensate for these limitations, management does not review these non-GAAP financial metrics in isolation from its GAAP results, nor should investors.  Non-GAAP financial measures are not based on a comprehensive set of accounting rules or principles.  This non-GAAP information supplements, and is not intended to represent a measure of performance in accordance with, or disclosures required by GAAP.  These measures are adjusted as described in the reconciliation of GAAP to non-GAAP numbers at the end of this release, but these adjustments should not be construed as an inference that all of these adjustments or costs are unusual, infrequent or non-recurring.  Non-GAAP financial measures should be considered in addition to, and not as a substitute for or superior to, financial measures determined in accordance with GAAP.  Investors are advised to carefully review and consider this non-GAAP information as well as the GAAP financial results that are disclosed in Cray’s SEC filings.

Additionally, we have not quantitatively reconciled the non-GAAP guidance measures disclosed under “Outlook” to their corresponding GAAP measures because we do not provide specific guidance for the various reconciling items such as stock-based compensation, adjustments to the provision for income taxes, amortization of intangibles, costs related to acquisitions, purchase accounting adjustments, and gain on significant asset sales, as certain items that impact these measures have not occurred, are out of our control or cannot be reasonably predicted.  Accordingly, reconciliations to the non-GAAP guidance measures are not available without unreasonable effort.  Please note that the unavailable reconciling items could significantly impact our financial results.

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges.  Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability.  Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

The post Cray Reports First Quarter 2017 Financial Results appeared first on HPCwire.

Mark Meili to Keynote CIMdata’s Product & Manufacturing Innovation Workshop

Tue, 05/02/2017 - 13:27

ANN ARBOR, Michigan, May 2, 2017 — CIMdata, Inc., the leading global PLM strategic management consulting and research firm, announces that Mr. Mark Meili, Director of Modeling and Simulation at Procter & Gamble, will make a keynote presentation at the upcoming Product & Manufacturing Innovation Driven by Digital Design & Simulation Workshop. The workshop will take place at the UI LABS Innovation Center, home to DMDII, in Chicago, Illinois, on June 6 and 7.

In-silico is a term that describes science done in a computer as opposed to the more traditional in-vivo or physically-based experimentation. Innovation is being, and will increasingly be, driven by a combination of simulation and data analytics. The quality and quantity of engineering and science that can already be done using simulation is simply amazing. The scope, however, is not well understood by many technical leaders, much less their business partners. As systems continue to increase in complexity, physical prototypes of entire systems are too expensive and time-consuming to use for most technical learning. Purposeful interventions in systems understanding and simulation are needed. In his keynote address, “Simulation Led Innovation – The promise, the pitfalls, and business imperative for ever better products and business execution,” Mr. Meili will talk about the technical skills and capabilities required to change the way we work. He will also discuss the need for this way of working to be a stated business strategy in order to be successful.

CIMdata’s Product & Manufacturing Innovation Driven by Digital Design & Simulation Workshop is the must-attend event for industrial organizations and solution providers interested in learning more about model-driven engineering strategies and the solutions that will enable on-going product and manufacturing innovation to create competitive advantage, minimize total lifecycle costs, and drive top line revenue growth. It will provide attendees with independent experiences from industrial companies and a collaborative networking environment where ideas, trends, experiences, and critical relationships germinate and take root.

CIMdata’s thought-leadership team of Don Tolle, Dr. Keith Meintjes, Dr. Ken Versprille, Dr. Suna Polat, and Frank Popielas, will be on hand in Chicago to facilitate the workshop and associated discussions.

For more information visit http://www.cimdata.com/en/education/knowledge-council-workshops/joint-kc-workshop-2017

About Mark Meili 

Mark A. Meili is Director of Modeling and Simulation for Procter & Gamble in Cincinnati, Ohio. Over his career he has held a variety of technical and management positions in both R&D and Product Supply Engineering. His current role spans technical work processes from research to commercialization to supply chain operation. Mark has been both a practitioner and champion of first principles, understanding the need to reduce risk and enable robust technical decision-making throughout his 30-year career. Mark received bachelor of science degrees from Kansas State University, one in Mechanical Engineering and one in Grain Science.

Source: CIMdata

The post Mark Meili to Keynote CIMdata’s Product & Manufacturing Innovation Workshop appeared first on HPCwire.

Altair Announces Speaker Lineup for 2017 PBS Works User Group

Tue, 05/02/2017 - 13:05

TROY, Mich., May 02, 2017 — Altair’s 2017 PBS Works User Group will be hosted in Las Vegas, NV from May 22-25, featuring an agenda packed with eminent thought and industry leaders. Top speaker presentations include Boeing, NASA Ames, U.S. Department of Defense, the National Computational Infrastructure, General Electric, Intel Corporation, Oracle, Orbital ATK, the University of Nevada Las Vegas, and more. Les Ottolenghi, EVP & Chief Information Officer at Caesars Entertainment, will keynote the event.

This four-day event will be held at the Innevation Center in Las Vegas and includes two days of presentations and panel discussions, plus two days of hands-on workshops. Altair plans to unveil the latest PBS Works Suite functionalities, including PBS Professional updates, new intuitive user interfaces, advanced admin features, cloud bursting capabilities, and more. Product managers and developers will be demonstrating new features to attendees live at the event.

This year Altair is proud to host a May 23rd tour of the Switch Las Vegas Data Center, the most advanced, efficient data center campus in the world. Attendees will also have the opportunity to learn and network with some of the best minds in HPC. The PBS Works User Group (PBSUG) provides a first-rate opportunity to connect with fellow users and learn from cross-industry applications. There is no event more important for PBS Works users and administrators to attend this year than PBSUG 2017. Sessions will deliver valuable tips, insights, and practical information to enhance users’ professional skills.

InsideHPC’s Rich Brueckner will moderate a Q&A Panel featuring this year’s sponsors, Intel, Oracle, and Panasas. Additionally, PBSUG 2017 will feature two discussion panels with Altair PBS Works product managers and engineers. “These open forums are really valuable for both users and Altair,” says Bill Nitzberg, PBS Works CTO. “Attendees can learn about our plans and give truly uncensored feedback directly to the engineering teams. It’s a great chance for the whole community to learn from each other.”

For more information & registration, visit: http://www.pbsworks.com/pbsug/2017/default.aspx.

About Altair

Altair is focused on the development and broad application of simulation technology to synthesize and optimize designs, processes and decisions for improved business performance. Privately held with more than 2,600 employees, Altair is headquartered in Troy, Michigan, USA and operates more than 50 offices throughout 22 countries. Today, Altair serves more than 5,000 corporate clients across broad industry segments. To learn more, please visit www.altair.com.

Source: Altair

The post Altair Announces Speaker Lineup for 2017 PBS Works User Group appeared first on HPCwire.

Supercomputers Assist in Search for New, Better Cancer Drugs

Tue, 05/02/2017 - 12:59

AUSTIN, May 2, 2017 — Surgery and radiation remove, kill, or damage cancer cells in a certain area. But chemotherapy — which uses medicines or drugs to treat cancer — can work throughout the whole body, killing cancer cells that have spread far from the original tumor.

[Image caption: The model of full-length p53 protein bound to DNA as a tetramer. The surface of each p53 monomer is depicted with a different color. Courtesy: Özlem Demir, University of California, San Diego]

Finding new drugs that can more effectively kill cancer cells or disrupt the growth of tumors is one way to improve survival rates for ailing patients.

Increasingly, researchers looking to uncover and test new drugs use powerful supercomputers like those developed and deployed by the Texas Advanced Computing Center (TACC).

“Advanced computing is a cornerstone of drug design and the theoretical testing of drugs,” said Matt Vaughn, TACC’s Director of Life Science Computing. “The sheer number of potential combinations that can be screened in parallel before you ever go in the laboratory makes resources like those at TACC invaluable for cancer research.”

Three projects powered by TACC supercomputers, which use virtual screening, molecular modeling, and evolutionary analyses, respectively, to explore chemotherapeutic compounds, exemplify the type of cancer research that advanced computing enables.

Continue reading at: https://www.tacc.utexas.edu/-/supercomputers-assist-in-search-for-new-better-cancer-drugs

Source: TACC

The post Supercomputers Assist in Search for New, Better Cancer Drugs appeared first on HPCwire.

PEARC17 Details Advanced Research Computing on Campuses Workshop

Tue, 05/02/2017 - 11:41

May 2, 2017 — The Advanced Research Computing on Campuses (ARCC): Best Practices Workshop, co-located with PEARC17 in New Orleans in July, will present a full-day tutorial, a series of lightning talks, and Birds of a Feather sessions. Plenaries, breaks, meals, and other activities will be shared with PEARC17 attendees.

An ARCC-hosted tutorial, “Enabling and Advancing Research Computing on Campuses,” will be held on Monday, July 10, and will provide a broad overview of research computing on campuses, including evolving trends in cyberinfrastructure, computing, and people resources. Other ARCC sessions are scheduled for July 11 and 12. ARCC sessions are open to all PEARC17 attendees.

ACI-REF and the National Center for Supercomputing Applications (NCSA), two of the community organizations supporting the PEARC17 conference, are co-locating the 2017 ARCC best practices workshop with PEARC17. The 2016 ARCC workshop drew more than 100 attendees from universities across the country.

“On behalf of the members of the ACI-REF consortium, we strongly support this effort to bring the broader cyberinfrastructure community together under one roof for an exchange of ideas and new discovery,” said Gwen Jacobs, PhD, Director of Cyberinfrastructure, University of Hawaii. “In ACI-REF we have seen the benefit first hand of the gains made when groups come together to exchange expertise and learn from each other. Co-locating our ARCC meeting with PEARC17 will broaden the scope and impact of the meeting for all participants. We’re excited to be part of this community effort.”

See the ARCC Best Practices Workshop page for details, and register for PEARC17 to attend.

About PEARC

The PEARC (Practice & Experience in Advanced Research Computing) conference series is being ushered in with support from many organizations and will build upon earlier conferences’ success and core audiences to serve the broader community. In addition to XSEDE, organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Science Gateways Community Institute (SGCI), the Campus Research Computing Consortium (CaRC), the ACI-REF consortium, the Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC), and Internet2.

Source: PEARC

The post PEARC17 Details Advanced Research Computing on Campuses Workshop appeared first on HPCwire.

NCSA Reports Blue Waters Breakthroughs

Tue, 05/02/2017 - 11:35

URBANA, Ill., May 2, 2017 — The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign released Tuesday the 2016 Blue Waters Project Annual Report, highlighting a year of scientific exploration and breakthroughs enabled by the project’s leadership class supercomputer and its associated support, training, and education efforts. For the project’s third annual report, research teams were invited to present highlights from their research that leveraged Blue Waters, the National Science Foundation’s most powerful system for computation and data analysis.

Download the 2016 NCSA Blue Waters Annual Report.

The report demonstrates how the Blue Waters Project is accelerating research and impact across a wide range of science and engineering disciplines. This year’s edition contains 30 percent more high-impact result summaries than the 2015 report, which itself showed more than a 40 percent increase over the 2014 report. Readers will explore how researchers, from the most senior experts to undergraduate students, are conducting groundbreaking investigations into topics such as exploding supernovae and a dwarf “dark galaxy”; how sunlight is transformed into chemical energy, understood at unprecedented levels of detail; how influenza, Ebola, and other virus strains infect people; how the entire polar regions are changing, at resolutions and time-to-insight millions of times beyond what was possible just two years ago; how fluids flow in applications from steel casting to blood moving through our bodies, and what happens when particles and ice are mixed with the flow; how earthquakes, plate tectonics, and supervolcanoes evolve and influence people; and how computer solutions can assist political redistricting for better fairness and effectiveness.

“The NCSA Blue Waters Project brings previously impossible or intractable investigations and insights within the reach of researchers across the United States,” said Dr. William “Bill” Gropp, NCSA interim director and co-principal investigator for the Blue Waters Project. “This 2016 NCSA Blue Waters Annual Report demonstrates how the combination of massive computing power and the intellectual might of pioneering scientists and engineers creates opportunities for us to better understand and shape our world. My sincerest gratitude goes to the National Science Foundation, the University of Illinois, and the State of Illinois for financial investment in this critical project to understand and improve lives and develop the nation’s advanced digital workforce.”

Other sections of the report highlight the expanded Petascale Application Improvement Discovery (PAID) program and the project’s education and workforce development efforts. Through PAID, the Blue Waters project provides millions of dollars to science teams and to computational and data experts to improve the performance of applications in a measurable way. The report documents how the project has enabled some teams to achieve orders-of-magnitude improvements in productivity and time-to-solution. Additionally, the project plays a role in educating and developing the next-generation extreme-scale workforce through workshops, symposia, graduate fellowships, undergraduate internships, the original and evolving Virtual School for Computational Science and Engineering, funding for the HPC University, and its own training workshops and allocations.

“While we have now completed our third full-service operational year for this supercomputer and our services, it is very exciting to observe that more and more results are being delivered and the wonder we feel about the doors that are being opened to new discoveries,” said Dr. William “Bill” Kramer, Principal Investigator and Blue Waters Project Director. “I congratulate the research teams for pushing science forward in ways we only dreamed were possible.”

Source: NCSA

The post NCSA Reports Blue Waters Breakthroughs appeared first on HPCwire.

NEC to Release Novel Vector Architecture that Delivers Superior Sustained Application Performance and Power Efficiency

Tue, 05/02/2017 - 10:19

High performance computing workflows in many industries are limited by memory bandwidth requirements, which are not satisfied by standard architectures. If memory bandwidth is a bottleneck, the sustained performance of real scientific applications will be limited.

Workflows impacted by a lack of memory bandwidth include a very broad range of applications such as computational fluid dynamics, structural analysis, and many applications that model physical systems. Much of the code used in these workflows can be well adapted to vector processing. Specifically, the underlying code structures exhibit inherent parallelism, often obscured only by the programming style and not by the underlying mathematics. The lack of architectures that genuinely address the so-called memory wall has been a long-standing concern in the HPC community.

With these issues in mind, NEC will soon introduce a new vector architecture, nicknamed Aurora, a solution that uses its industry-leading vector processor technology tightly coupled with industry-standard scalar technology.

Expanding the use of vector processing

Organizations today must find ways to speed the time to results. The best way to accomplish this is to accelerate modeling and simulation algorithms. This offers many benefits. Individual jobs run faster, freeing up systems to run other workloads. And individuals or groups can more quickly explore different possibilities by modifying parameters, running more variations of the same job. But this speedup should not come at the price of increased coding complexity.

While some workloads benefit from acceleration technology such as GPUs, this benefit can only be achieved through a large porting effort. An alternative that can be applied to broad classes of applications is vector processing. Vector processing can satisfy the underlying need for memory bandwidth and thus leads to higher sustained performance, without the need to rewrite large amounts of code in complicated coding paradigms.

Today there is a growing need to make use of vector processing in a wide variety of industries. In most companies, however, this requires a change in mindset. In the past some organizations have been hesitant about moving to vector processing because they either did not want to change their code or thought it would be too much work.

That is no longer the case. Vectorization is close to inevitable on every platform, so codes are being adapted anyway. And in any case, smart optimization techniques such as cache blocking on scalar platforms are more difficult to apply than vectorization, let alone rewriting a code in OpenCL or CUDA.
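As a rough, NEC-agnostic illustration of what vectorization means in practice (a sketch only, using NumPy as a stand-in for a vectorizing compiler or vector hardware; none of this is NEC's toolchain), the memory-bandwidth-bound "triad" kernel below can be written as an element-by-element loop or as a single whole-array operation. The mathematics is identical; only the second form exposes the inherent parallelism directly:

    import numpy as np

    n = 100_000
    b = np.random.rand(n)
    c = np.random.rand(n)
    scalar = 3.0

    # Scalar-style formulation: the data parallelism is hidden inside a loop
    # that touches one element per iteration.
    def triad_loop(b, c, scalar):
        a = np.empty_like(b)
        for i in range(len(b)):
            a[i] = b[i] + scalar * c[i]
        return a

    # Vector-style formulation: one whole-array operation, the shape that a
    # vectorizing compiler (or a vector processor) can map onto vector registers.
    def triad_vector(b, c, scalar):
        return b + scalar * c

    assert np.allclose(triad_loop(b, c, scalar), triad_vector(b, c, scalar))

Both forms compute the same result, but only the second hands the hardware long, regular streams of data to work with, which is the property a memory-bandwidth-oriented architecture exploits.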

What is needed is a solution that delivers the desired performance benefits while allaying these concerns. Additionally, the system should be efficient in terms of performance per dollar and per watt, and it should enable scientists to work effectively.

A technology partner that can help

While almost all contemporary architectures use single instruction, multiple data (SIMD) parallelism in some way, only one vendor, NEC, offers an architecture with real vector registers: entities that feed data to functional units continuously over a sequence of cycles from just a single instruction.

Building on its long history developing vector processing systems for the most demanding workloads, NEC’s new Aurora vector system is designed to accelerate traditional memory bandwidth-intensive HPC workloads, as well as other applications such as big data analytics.

The solution is not just an accelerator that speeds up a small portion of code, as would be the case with GPUs. Instead, the full code runs on a so-called vector engine; an accompanying x86-based platform simply acts as a frontend and development environment.

Three design concepts guided the development of Aurora. They include:

Industry-leading memory bandwidth performance: The individual cores of the new vector architecture will be quite fast and can run code more efficiently. Similar to NEC’s SX Series, the solution offers industry-leading memory bandwidth per processor, core, power, and price. The cores are tailored to memory-intensive scientific applications and big data analytics applications.

Ease of use: Aurora offers a dedicated vector hardware/software environment. Specifically, NEC’s optimized vector processing hardware and software are combined with a de facto standard environment such as Xeon clusters. This allows a workflow to start on an x86 system and, unlike with accelerators, the entire application is passed to the vector engine. The x86 system supports the vector engine as a frontend, taking on all work that does not relate to the application itself, such as daemons and administrative processes.

Flexibility afforded by a hybrid solution: Aurora offers closely aligned scalar and vector machines. They can be used in hybrid configurations to tackle every kind of application, providing the appropriate hardware for each kind of code in a workload or workflow. This capability can be used, for example, in a workflow that requires pre- and post-processing of data, or in a climate simulation whose ocean code needs vector processing while its atmospheric chemistry code runs better on a scalar system. Software integration will include a common parallel filesystem, common scheduling, and an MPI that allows organizations to use both scalar and vector nodes in one application.

Summary

Organizations in many industries currently run simulation, modeling, and big data analytics applications that are limited by memory bandwidth. Vector processors can deliver the necessary performance, but a solution must also be easy to use, offering a standard environment and familiar Xeon clusters. Additionally, since many workflows include a mix of code, where some algorithms benefit from vector processing and others run more efficiently on scalar processors, any solution must be flexible enough to allow for such hybrid operation.

These are all areas where NEC Aurora can help. Aurora is a next-generation product that is designed to expand the use of the technology from traditional HPC problems to include a broader class of memory bandwidth-intensive applications used in organizations today.

NEC plans to make Aurora available for different environments, including rack-mounted server and supercomputer models as well as a tower server model, so that every scientist can develop code without continuous access to a high-end system.

For more information about meeting the demands of your memory bandwidth-intensive applications and workloads, visit: http://uk.nec.com/en_GB/emea/products/hpc/index.html

 

The post NEC to Release Novel Vector Architecture that Delivers Superior Sustained Application Performance and Power Efficiency appeared first on HPCwire.

Shortest Distance Calculation No Sweat for this Road ‘Oracle’

Mon, 05/01/2017 - 15:46

As big data analytics takes to the roads, the need for accurate distance calculations has blossomed too. However, traditional approaches to this problem face considerable challenges with accuracy and scalability. Now a group of researchers at the University of Maryland, College Park think they have found an accurate and scalable solution with their Distance Oracle.

Roads may seem like a pedestrian topic for big data analytics. After all, roads are just strips of concrete or asphalt that we use to drive from one place to another. What’s so complicated about that?

But roads are surprisingly complex when considered in aggregate. When you consider that this country has more than one million named roads, constituting more than 4 million miles of paved byways, you begin to understand the massive scale and complexity of our road network.

Anybody who traverses the roads for business or pleasure has a need to calculate the distances they’ll travel. Whether it’s a Blue Apron driver delivering a meal in a big city or a family headed to the beach for Memorial Day, the number of miles on the route between points A and B is a fundamental number that will impact many aspects of the journey.

The simplest way to calculate distance is to put a ruler down against a paper map. This Euclidean distance will be useful when we all have self-flying cars, but until then its usefulness will be limited mainly to crows. It may surprise you, however, to know that many online services, including Google Maps, often utilize the Euclidean distance when a user submits a common query like “Where are the 10 closest Chinese restaurants?”
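For concreteness, here is a small, hypothetical sketch (not from the researchers) of that straight-line, as-the-crow-flies calculation, using the standard haversine formula on latitude/longitude coordinates:

    from math import radians, sin, cos, asin, sqrt

    def crow_flies_miles(lat1, lon1, lat2, lon2):
        """Straight-line (great-circle) distance between two points, in miles."""
        earth_radius_miles = 3958.8
        p1, p2 = radians(lat1), radians(lat2)
        dlat = radians(lat2 - lat1)
        dlon = radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
        return 2 * earth_radius_miles * asin(sqrt(a))

    # College Park, MD to downtown Washington, DC (approximate coordinates):
    # roughly 7 miles as the crow flies; the actual road distance is longer.
    print(round(crow_flies_miles(38.9807, -76.9369, 38.9072, -77.0369), 1))

The gap between that number and the real driving distance is exactly why the road-network distance discussed next matters.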

The more useful number is the distance we’ll travel across the existing roads. A graph database can be a useful tool for calculating this distance. If you transpose the roads into a graph database and consider every intersection of two roads (sometimes three) to be a vertex (or node), you can then use a shortest-path algorithm, such as Dijkstra’s algorithm, to find the shortest distance between two vertices.
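A minimal sketch of that graph-plus-Dijkstra approach, on a tiny made-up road network (distances in miles; purely illustrative, not the University of Maryland code), might look like this:

    import heapq

    def dijkstra(graph, source):
        """Shortest road distance from source to every reachable vertex.

        graph: dict mapping vertex -> list of (neighbor, edge_length) pairs.
        """
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Tiny hypothetical road network: intersections A-D, edge lengths in miles.
    roads = {
        "A": [("B", 2.0), ("C", 5.0)],
        "B": [("A", 2.0), ("C", 1.5), ("D", 4.0)],
        "C": [("A", 5.0), ("B", 1.5), ("D", 1.0)],
        "D": [("B", 4.0), ("C", 1.0)],
    }
    print(dijkstra(roads, "A")["D"])  # 4.5, via A -> B -> C -> D

This works fine for a single query; the problem, as the article goes on to explain, is what happens when you need it for every pair of vertices on a continent-sized road network.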

While this approach works and is accurate, it’s computationally expensive and therefore not scalable. If you wanted to calculate the distances between 1,000 different places (or vertices in graph speak), then you’d have 1,000 times 1,000, or potentially up to 1 million, paths to check. And the amount of space you’d need to compute that would be N cubed, or a billion.

“This is for a small problem,” says University of Maryland Computer Science Professor Hanan Samet. “If you look at a road network of the United States, it has about 24 million vertices. These numbers get way beyond anything you can conceive of.”

Professor Samet and two of his students at the University of Maryland’s Institute for Advanced Computer Studies, Jagan Sankaranarayanan and Shangfu Peng, worked on this problem and came up with what they believe is a groundbreaking solution that could find application in any number of real-world big data analytic problems companies are facing today.

The solution boils down to precomputing distances between pairs of points in an intelligent manner, storing the results in an SQL table, and then figuring out how to serve the distance values very quickly.
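The article does not describe the Distance Oracle’s actual schema or precomputation strategy, but the general shape of "precompute, store in SQL, serve fast" can be sketched with SQLite, reusing the hypothetical roads graph and dijkstra function from the earlier sketch (a real oracle is far more selective about which pairs it materializes):

    import sqlite3

    # Assumes `roads` and `dijkstra` from the earlier sketch are in scope.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE road_distance ("
        "  src TEXT, dst TEXT, miles REAL,"
        "  PRIMARY KEY (src, dst))"
    )

    # Precompute once (brute force over all pairs here, just to show the idea).
    for src in roads:
        for dst, miles in dijkstra(roads, src).items():
            conn.execute("INSERT INTO road_distance VALUES (?, ?, ?)", (src, dst, miles))
    conn.commit()

    # Serving a distance is now an indexed lookup rather than a graph traversal.
    row = conn.execute(
        "SELECT miles FROM road_distance WHERE src = ? AND dst = ?", ("A", "D")
    ).fetchone()
    print(row[0])  # 4.5 on the toy network above

The hard part, of course, is doing that precomputation intelligently at road-network scale rather than by brute force, which is where the researchers’ approach comes in.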

Read the entire article at Datanami.

The post Shortest Distance Calculation No Sweat for this Road ‘Oracle’ appeared first on HPCwire.
