HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Hedge Funds (with Supercomputing help) Rank First Among Investors

Mon, 05/22/2017 - 12:48

In case you didn’t know, The Quants Run Wall Street Now, or so says a headline in today’s Wall Street Journal. Quant-run hedge funds now control the largest share (27 percent) of stock trading of any investor type, according to the article. That’s up from 2010 when quant-based trading was tied with bank trades for the bottom share. Algorithm-based trading is, of course, the ‘sine qua non’ of hedge funds and has helped lift them to the top of the investing crowd.

The WSJ article, written by Gregory Zuckerman and Bradley Hope, quickly reviews the rise of quants in the financial industry and showcases quantitative finance’s still-growing appeal as a lucrative career for algorithm stars who might otherwise have headed into computer science. Here’s an excerpt:

“A decade ago, the brightest graduates all wanted to be traders at Wall Street investment banks, but now they’re climbing over each other to get into quant funds,” says Anthony Lawler, who helps run quantitative investing at GAM Holding AG. The Swiss money manager last year bought British quant firm Cantab Capital Partners for at least $217 million to help it expand into computer-powered funds.

“Guggenheim Partners LLC built what it calls a “supercomputing cluster” for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim’s quant investment funds, says Marcos Lopez de Prado, a Guggenheim senior managing director. Electricity for the computers costs another $1 million a year.

“Algorithmic trading has been around for a long time but was tiny. An article in The Wall Street Journal in 1974 featured quant pioneer Ed Thorp. In 1988, the Journal profiled a little-known Chicago options-trading firm that had a secret computer system. Journal reporter Scott Patterson wrote a best-selling book in 2010 about the rise of quants.”

Link to full article: https://www.wsj.com/articles/the-quants-run-wall-street-now-1495389108

The post Hedge Funds (with Supercomputing help) Rank First Among Investors appeared first on HPCwire.

PEARC17 Announces Complete Conference Schedule

Mon, 05/22/2017 - 11:05

NEW ORLEANS, May 22, 2017 — The PEARC17 organizers today announced the full schedule for the Practice & Experience in Advanced Research Computing conference in New Orleans, July 9–13, 2017. PEARC17’s robust technical program will address issues and challenges facing those who manage, develop, and use advanced research computing throughout the nation and the world.

Featuring content submitted by the community for the community, PEARC17 will include more than 50 technical papers by professionals in the field of advanced research computing.

The conference will kick off with 20 full-day and half-day tutorials on Monday, July 10. The tutorials will cover a wide range of topics, including advancing on-campus research computing, many-core programming, cloud computing and more.

Attendees will have the opportunity to attend panel discussions focusing on workforce issues, visualization, and national-scale infrastructure. Included in PEARC17’s dynamic program will also be a poster reception, the Visualization Showcase, numerous Birds-of-a-Feather sessions, and a student program.

Also included with PEARC17 registration is the Advanced Research Computing on Campuses (ARCC) Best Practices Workshop, which is co-located with this year’s conference. ARCC will present a full-day tutorial on Monday; attendees need to sign up for it as part of tutorial registration. During the main conference, ARCC will offer focused technical sessions and panels, as well as Birds-of-a-Feather sessions.

PEARC17 Registration and Hotel Booking Deadline is May 31

May 31 is the deadline for PEARC17 attendees to register and book a room at guaranteed hotel room rates. There will be no extension of these deadlines—late registration fees will apply beginning June 1.

Go to http://pearc17.pearc.org to register for the conference and check the latest program schedule. To book rooms, go to http://pearc17.pearc.org/hotel or call 888-421-1442 and reference the conference name.


Being held in New Orleans July 9-13, PEARC17—Practice & Experience in Advanced Research Computing 2017—is for those engaged with the challenges of using and operating advanced research computing on campuses or for the academic and open science communities. This year’s inaugural conference offers a robust technical program, as well as networking, professional growth and multiple student participation opportunities.

Organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC); XSEDE; the Science Gateways Community Institute (SGCI); the Campus Research Computing Consortium (CaRC); the ACI-REF consortium; the Blue Waters project; ESnet; Open Science Grid; Compute Canada; the EGI Foundation; the Coalition for Academic Scientific Computation (CASC); and Internet2.

Source: PEARC17


Bright Computing Announces Integration with BeeGFS from ThinkParQ

Mon, 05/22/2017 - 11:00
AMSTERDAM, May 22, 2017 – Bright Computing today announced that integration with BeeGFS has been included in Bright Cluster Manager 8.0.

BeeGFS, developed at the Fraunhofer Center for High Performance Computing in Germany and delivered by ThinkParQ, is a parallel cluster file system with a strong focus on performance and flexibility, and is designed for very easy installation and management. BeeGFS is free of charge, and transparently spreads user data across multiple servers. By increasing the number of servers and disks in the system, users can simply scale performance and capacity of the file system to the level needed, from small clusters up to enterprise-grade systems with thousands of nodes.
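The way a parallel file system “transparently spreads user data across multiple servers” can be illustrated with a small sketch. This is a conceptual round-robin striping demo only, not BeeGFS’s actual on-disk layout or API; the chunk size and server count are arbitrary choices for the example:

```python
# Conceptual sketch of how a parallel file system stripes a file
# across storage servers (round-robin). Chunk size and server count
# are illustrative values, not BeeGFS defaults.
CHUNK_SIZE = 4   # bytes per chunk (tiny, for demonstration)
SERVERS = 3      # number of storage servers

def stripe(data: bytes, chunk_size: int = CHUNK_SIZE, servers: int = SERVERS):
    """Split data into fixed-size chunks and assign them round-robin to servers."""
    placement = {s: [] for s in range(servers)}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        placement[(i // chunk_size) % servers].append(chunk)
    return placement

def reassemble(placement, servers: int = SERVERS) -> bytes:
    """Read chunks back in round-robin order to reconstruct the original file."""
    iters = [iter(placement[s]) for s in range(servers)]
    out, s = [], 0
    while True:
        try:
            out.append(next(iters[s]))
        except StopIteration:
            break  # round-robin guarantees the first exhausted server ends the file
        s = (s + 1) % servers
    return b"".join(out)

data = b"parallel file systems stripe data"
placement = stripe(data)
assert reassemble(placement) == data
```

Because each server holds only every Nth chunk, reads and writes of a large file hit all servers in parallel, which is why adding servers and disks scales both bandwidth and capacity.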

This latest development from Bright Computing is in response to an increasing number of requests from Bright’s customer base to integrate with BeeGFS.

The new integration offers:

  • Setup utility – The process of setting up BeeGFS within Bright Cluster Manager 8.0 has been streamlined, by introducing an easy-to-use wizard that takes you through the process
  • Configuration – BeeGFS can be configured in Bright Cluster Manager 8.0 through CMDaemon roles
  • Metrics, health-checks, and service management – Bright ensures that BeeGFS is always working properly by reporting on the performance and health of BeeGFS in BrightView, the central management console

Martijn de Vries, CTO at Bright Computing, commented: “BeeGFS is a highly respected and popular parallel file system; many of our customers have been using it for a number of years. It makes perfect sense to formalize the integration to ensure that setup and management of BeeGFS in Bright Cluster Manager is quick, easy, and error-free, and adds immediate value to our customers’ cluster management experience.”

Sven Breuner, CEO of ThinkParQ, added: “Leveraging the full performance of the available hardware while still keeping all components easy to manage has always been our primary goal with BeeGFS. Seeing how easy it is with the new Bright Cluster Manager to configure BeeGFS and to monitor the system utilization is really impressive and will help system administrators to significantly improve the application runtime of their cluster users. And since many of the BeeGFS customers are using Bright Cluster Manager already and all others can now enable it with just a few clicks, this integration provides an immediate benefit to the community.”

Source: Bright Computing


PSSC Labs Announces “Greenest” New Eco Blade Server Platform for HPC

Mon, 05/22/2017 - 07:47

LAKE FOREST, Calif., May 22, 2017 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced its new Eco Blade server platform, the most energy efficient high performance blade server on the market.

The Eco Blade is a unique server platform engineered specifically for high-performance, high-density computing environments, simultaneously increasing compute density while decreasing power use. Eco Blade offers two complete, independent servers within 1U of rack space. Each independent server supports up to 64 Intel Xeon processor cores and 1.0 TB of enterprise memory, for a total of up to 128 cores and 2 TB of memory per 1U.

There is no shared power supply or backplane, a unique design feature that translates to power savings of up to 46% in a per-server comparison with servers of similar capabilities from leading brands. By lowering the power consumption of these servers, PSSC Labs is offering the greenest server of its kind on the market, translating to lower lifetime ownership costs for institutions that adopt Eco Blade servers.

According to the EPA, volume servers were responsible for 68% of all electricity consumed by IT equipment in data centers in 2006. A US government study found that in 2014 US data centers consumed about 70 billion kilowatt-hours of electricity, about 2% of the country’s total energy consumption. By using power supplies that are over 90% efficient and combining them with power-saving features, the Eco Blade servers deliver significant savings in energy costs over the lifetime of the product. Along with the significant reduction in power use, the Eco Blade is built from 55% recyclable material, a move that cements PSSC Labs’ commitment to sustainable enterprise server solutions that reduce waste and power progress.

“The global thirst for more computing power and storage means demand for volume servers, and the resulting energy consumption, will continue to rise. As an industry, it is our responsibility to find ways to reduce power consumption while still providing the computing ability needed to fuel cutting-edge research and groundbreaking enterprises,” said Alex Lesser, Vice President of PSSC Labs. “PSSC Labs has taken a big step in engineering an HPC / data center server that does not compromise on performance but will significantly reduce power consumption. By deploying the Eco Blade server instead of traditional servers from other manufacturers, companies will reduce their capex and opex while being good stewards of the environment.”

In addition to the lifetime savings in energy costs, the Eco Blade servers allow higher-density rack configurations, which reduce the amount of infrastructure and networking equipment required. This translates to significant cost savings at initial purchase as well as lower recurring maintenance costs.

Eco Blade Technical Specs:

  • Supports 2 x Intel Xeon E5-2600v4 Processors
    • Up to 128 Processor Cores in just 1U of rack space (with hyperthreading enabled)
  • Up to 1.0TB of High Performance ECC Registered System Memory Per Server
  • 2 x Redundant SSD Operating System Hard Drives
  • Network Connectivity Options include 10GigE, 40GigE and 100GigE
    • Support for Mellanox Infiniband and Intel Omnipath
  • Remote Management through Dedicated IPMI 2.0 Network
  • Certified Compatible with Red Hat Linux, CentOS Linux, Ubuntu Linux, Microsoft Operating Systems

Application Compatibility

  • Docker
  • Kubernetes
  • Mesos
  • OpenStack
  • Joyent
  • Rancher
  • Chef
  • Puppet
  • High Performance Computing (HPC) workloads

All PSSC Labs server installations include a one-year unlimited phone / email support package (additional years of support available), with fully US-based support staff. Prices for custom-built Eco Blade servers start at $2,299. For more information on the Eco Blade, visit http://www.pssclabs.com/servers/virtualization-servers/.

About PSSC Labs

For technology powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership. All products are designed and built at the company’s headquarters in Lake Forest, California. For more information, 949-380-7288, www.pssclabs.com, sales@pssclabs.com.

Source: PSSC Labs


AI: Breaking Down Performance Barriers With Intel® Scalable System Framework

Mon, 05/22/2017 - 01:01

Interest in artificial intelligence (AI) is rising fast, as practical applications in speech, image analysis, and other complex pattern recognition tasks have now surpassed human accuracy. Once trained, a neural network can be deployed on edge devices or in the cloud to meet demands in near real-time. Yet training a deep neural network within a reasonable time frame requires the speed and scale of a high performance computing (HPC) system.

Like many other HPC workloads, neural network training is both compute- and data-intensive. It stresses every aspect of cluster design, from floating-point performance, to memory-bandwidth and capacity, to message latency and network bandwidth. Powerful processors are essential, but not sufficient to meet these intense demands.

Figure 1. Intel® Scalable System Framework simplifies the design of efficient, high-performing clusters that optimize the value of HPC investments.

Intel® Scalable System Framework (Intel® SSF) is designed to address the technology barriers that currently limit performance for neural network training and other HPC workloads. The framework accomplishes this by delivering balanced high-performance at every layer of the solution stack—compute, memory, storage, fabric, and software. This holistic, system-level approach simplifies the design of optimized clusters and helps organizations take advantage of disruptive new technologies with less effort and lower risk.

That’s a good thing, because disruptive new technologies are coming fast.

Powerful Compute for AI Acceleration

Intel® Xeon® processors and Intel® Xeon Phi™ processors are key compute components of Intel SSF. Intel announcements at SC16 highlighted their success versus GPUs in addressing the performance and scalability challenges of deep neural networks. This is just the beginning. The Intel processor roadmap is poised to deliver a 100X increase in neural network training performance within the next three years[1], shattering the performance barriers that currently slow AI innovation. Intel SSF will help to unleash the full potential of these and other Intel processor innovations.

Groundbreaking Memory and Storage Technologies

The performance gap between processors and memory/storage solutions has been widening for decades, requiring ever-more complex workarounds.  Beginning with the current Intel Xeon Phi processor family, Intel is offering up to 16 GB of fast on-chip memory to help resolve the data access bottleneck. This, too, is just a beginning. Intel breakthroughs in memory and storage technology are beginning to enter the market now, and are pivotal to the Intel SSF roadmap. These innovations will allow memory and storage to finally catch up with processor performance, enabling transformative new efficiencies that will redefine what is possible and affordable in AI and other data-driven fields.

A Fabric for the Future of AI

Neural network training is a tightly-coupled application that alternates compute-intensive number crunching with cluster-wide data sharing, so fabric performance is critical. Intel SSF addresses this challenge with Intel® Omni-Path Architecture (Intel® OPA), which matches the line speed of EDR InfiniBand and includes optimizations to improve message passing efficiency, fabric scalability, and cost models. Today’s Intel Xeon Phi processors offer integrated Intel OPA controllers to further reduce latency and cost. Ongoing processor and fabric integration will increase efficiency at every scale and provide a cost-effective foundation for the massive neural networks of tomorrow.

Optimized Software that Ties It All Together

Although foundational AI algorithms have been around since the mid-1960s, they were designed for functionality, not performance. Intel is working with vendors and the open source community to deliver software that is highly optimized for performance on Intel architecture across the full breadth of AI and HPC needs. This includes everything from core math libraries and machine learning frameworks, to memory- and logic-based AI applications. It also includes essential system software, such as Intel® HPC Orchestrator, which helps to simplify the design, deployment, and use of high-performing systems that can scale cost-effectively to support extreme requirements.

A Launching Pad for AI Innovation

The next wave of AI innovation will require enormous new computing capability. Intel SSF provides a unified platform that enables a leap forward in performance and efficiency for AI and a host of other HPC workloads, including big data analytics, data visualization, and digital simulation.

As innovation heats up, the advantages will grow. Intel SSF will help AI innovators ride the wave of escalating performance while maintaining application compatibility[2], so they can focus on driving deeper and more useful intelligence into almost everything they create.

We can’t wait to see the results.

Stay tuned for more articles focusing on the benefits Intel SSF brings to AI at each level of the solution stack through balanced innovation in compute, memory, storage, fabric, and software technologies.

[1] https://www.hpcwire.com/2016/11/21/intel-details-ai-hardware-strategy/

[2] The aim of Intel SSF is to help drive generation-by-generation performance gains that benefit existing software without requiring a recompile. Additional, and in some cases massive, gains may become possible through software optimization.



IBM, D-Wave Report Quantum Computing Advances

Thu, 05/18/2017 - 23:00

IBM said this week it has built and tested a pair of quantum computing processors, including a prototype of a commercial version.

That progress follows an announcement earlier this week that commercial quantum computer developer D-Wave Systems has garnered venture funding that could total up to $50 million to build its next-generation machine with up to 2,000 qubits.

Also this week, Hewlett Packard Enterprise Labs introduced what it claims is the largest single-memory computer. The premise behind HPE’s approach is putting memory rather than the processor at the center of the computing architecture. The prototype unveiled this week contains 160 terabytes of main memory spread across 40 nodes that are connected via a high-speed fabric protocol.

Meanwhile, IBM researchers continue to push the boundaries of quantum computing as part of the IBM Q initiative, launched in March to promote development of a “universal” quantum computer. Access to a 16-qubit processor via the IBM cloud will allow developers and researchers to run quantum algorithms. The new version replaces an earlier 5-qubit processor.

The company also rolled out on Wednesday (May 17) the first prototype of a 17-qubit commercial processor, making it IBM’s most powerful quantum device. The prototype will serve as the foundation of IBM Q’s commercial access program. The goal is to eventually scale future prototypes to 50 or more qubits.

IBM has provided researchers with free cloud access to its quantum processors to test algorithms and develop new applications ranging from modeling financial data and machine learning to cloud security. (The flip side of the data security equation, critics argue, is that quantum computers could some day be used to defeat current data encryption methods.)

Another commercial developer, D-Wave Systems, said this week it has landed $30 million in investor funding to build a next-generation quantum computer with “more densely-connected qubits” for machine learning and other applications. The funding was provided by Canada’s Public Sector Pension Investment Board, which plans to invest an additional $20 million if D-Wave achieves certain development milestones.

Earlier this year, cyber-security specialist Temporal Defense Systems purchased the first D-Wave 2000Q system.


OCF Delivers 600 Teraflop Supercomputer for University of Bristol

Thu, 05/18/2017 - 14:13

May 18, 2017 — For over a decade the University of Bristol has been contributing to world-leading and life-changing scientific research using High Performance Computing (HPC), having invested over £16 million in HPC and research data storage. To continue meeting the needs of its researchers working with complex and large amounts of data, the University will now benefit from a new HPC machine, named BlueCrystal 4 (BC4). Designed, integrated and configured by the HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores, making it the largest UK university system by core count, and a theoretical peak performance of 600 teraflops.

Over 1,000 researchers in areas such as paleobiology, earth science, biochemistry, mathematics, physics, molecular modeling, life sciences, and aerospace engineering will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

“We have researchers looking at whole-planet modeling with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,” said Dr Christopher Woods, EPSRC Research Software Engineer Fellow, University of Bristol. “Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement that’ll be a great benefit to research at the University.”

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4 GHz Intel Broadwell CPUs with 128 GiB of RAM. It also includes 32 nodes with two NVIDIA Pascal P100 GPUs each, plus one GPU login node, designed into the rack by Lenovo’s engineering team to meet the specific requirements of the University.
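As a back-of-envelope sanity check on the quoted 600-teraflop figure, the CPU partition’s theoretical peak can be estimated from the stated core count and clock speed. The 16 double-precision FLOPs per core per cycle (Broadwell with two 256-bit FMA units) is our assumption, not a number from the article:

```python
# Rough theoretical-peak estimate for BC4's CPU partition.
# Assumption (not from the article): each Broadwell core can retire
# 16 double-precision FLOPs per cycle (2 x 256-bit FMA units).
cores = 15_000            # "more than 15,000 cores" per the article
clock_hz = 2.4e9          # 2.4 GHz Broadwell
flops_per_cycle = 16      # assumed AVX2 + FMA peak per core

peak_flops = cores * clock_hz * flops_per_cycle
print(f"CPU peak: {peak_flops / 1e12:.0f} TFLOPS")  # CPU peak: 576 TFLOPS
```

That ~576 TFLOPS from the CPUs alone, before counting the 64 P100 GPUs, is consistent with the 600-teraflop headline figure.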

Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100Gb/s. BC4’s storage is composed of one PetaByte of disk provided by DDN’s GS7k and IME systems running the parallel file system Spectrum Scale from IBM.

Effective benchmarking and optimisation, using the capabilities of Lenovo’s HPC research centre in Stuttgart, the first of its kind, has ensured that BC4 is highly efficient in terms of physical footprint while fully utilising the 30 kW per-rack power limit. Lenovo’s commitment to third-party integration has allowed the University to avoid vendor lock-in while permitting new hardware to be added easily between refresh cycles.

Dr Christopher Woods continues: “To help with the interactive use of the cluster, BC4 has a visualisation node equipped with NVIDIA Grid vGPUs so it helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.”

Housed at VIRTUS’ LONDON4, the UK’s first shared data centre for research and education in Slough, BC4 is the first of the University’s supercomputers to be held at an independent facility. The system is directly connected to the Bristol campus via JISC’s high speed Janet network. Kelly Scott, account director, education at VIRTUS Data Centres said, “LONDON4 is specifically designed to have the capacity to host ultra high density infrastructure and high performance computing platforms, so an ideal environment for systems like BC4. The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.”

Applications running on the University’s previous cluster, currently numbering in the hundreds, will be replicated onto the new system, allowing researchers to create more applications and better-scaling software. Applications can be moved directly onto BC4 without re-engineering.

“We’re now in our tenth year of using HPC in our facility. We’ve endeavored to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,” commented Caroline Gardiner, Academic Research Facilitator at the University of Bristol.

Simon Burbidge, Director of Advanced Computing, comments: “It is with great excitement that I take on the role of Director of Advanced Computing at this time, and I look forward to enabling the University’s ambitious research programmes through the provision of the latest computational techniques and simulations.”

Due to be launched at an event on 24th May at the University of Bristol, BC4 will support over 1,000 system users, carried over from BlueCrystal Phase 3.

Source: OCF


PRACEdays 2017 Wraps Up in Barcelona

Thu, 05/18/2017 - 13:01

Barcelona has been absolutely lovely; the weather, the food, the people. I am, sadly, finishing my last day at PRACEdays 2017 with two sessions: an in-depth look at the oil and gas industry’s use of HPC and a panel discussion on bridging the gap between scientific code development and exascale technology.

Henri Calandra of Total SA spoke on the challenges of increased HPC complexity and value delivery for the oil and gas industry.

The main challenge Total and other oil and gas companies are finding is that discoveries of oil deposits are becoming more rare. To stay competitive, they need to first and foremost open new frontiers for oil discovery, but do this while reducing risk and costs.

In the 1980s, seismic data was reviewed in two dimensions. The 1990s saw the development of 3D seismic depth imaging. Continuing into the 2000s, 3D depth imaging was improved as wave equations were added to the traditional imaging. The 2010s brought more physics, more accurate images, and more complex processes for visually viewing the seismic data.

Henri Calandra of Total SA

The industry continues to see drastic improvements. A seismic simulation that in 2010 took four weeks to run, in 2016 takes one day. Images have significantly higher resolution and the amount of detail seen in the images enables Total to be more precise in identifying seismic fields and potential hazards in drilling.

If you look closely at the pictures (shown on the slide), you can make out improvements in the image quality. Although they may seem slight to our eyes, geoscientists can see the small nuances in the images that help them be more precise, identify hazards, and achieve a better positive acquisition rate.

How did this change over the last 30+ years happen? Improved technology, integrating more advanced technologies, improved processes, more physics, more complex algorithms – basically more HPC.

Using HPC, Total has been able to reduce their risks, become more precise and selective on their explorations, identify potential oil fields faster, and optimize their seismic depth imaging.

What’s next: opening new frontiers, enabled by better appraisal of potential new opportunities. HPC has enabled seismic depth imaging methods that can do more iterations, more physics, and more complex approximations. Models are larger, there are multiple resolutions, and 4D data. Interactive processing happens during drilling, and these multiple real-time simulations allow adjustments to the drilling, improving the success rate of finding oil.

Developing new algorithms is a long-term process that typically spans several generations of supercomputers. Of course, the oil and gas industry is looking forward to exascale. But the future is complex: in compute, with many-core processors, accelerators, and heterogeneous systems; in storage, with an abundance of data moving between tiers via multiple storage technologies; and in tools such as OpenCL, CUDA, OpenMP, OpenACC, and compilers. There is a need for standardized tools that hide the hardware complexity and help the users of HPC systems.

None of this can be addressed without HPC specialists. Application development cannot be done without a strong collaboration between the physicist, scientist, and HPC team. This constant progress will continue to improve the predictions Total relies on for finding productive oil fields.

The second session of the day was a panel moderated by Inma Martinez titled “Bridging the gap between scientific code development and exascale technology.” Much of the focus was on the software challenges for extreme-scale computing faced by the community.

The panelists:

Henri Calandra, Total

Lee Margetts, NAFEMS

Erik Lindahl, PRACE Scientific Steering Committee

Frauke Gräter, Heidelberg Institute for Theoretical Studies

Thomas Skordas, European Commission

This highly anticipated session looked at the gap between hardware, software, and application advances and the role of industry, academia and the European Commission in the development of software for HPC systems.

Thomas Skordas pointed out that driving leadership in exascale is important and it’s about much more than hardware. It’s the next generation code, training, and understanding the opportunities exascale can accomplish.

Frauke Gräter sees data as a significant challenge; the accumulation of more and more data and the analysis of that data. In the end, scientists are looking for insights and research organizations will invest in science.

Parallelizing the algorithms is the key action, according to Erik Lindahl. There is too much focus on the exascale machine, but algorithms need to be good to make the best use of the hardware. Exascale, expected to happen around 2020, is not expected to be a staple in commercial datacenters until 2035. There is not a supercomputer in the world that does not run open source software, and exascale machines will follow this practice.

Lee Margetts talked of “monster machines” — the large compute clusters in every datacenter. As large vendors adopt artificial intelligence and machine learning, will we see the end of the road for the large “monster” machines? We have very sophisticated algorithms and are using very sophisticated computing. What if this technology that is used in something like oil and gas were used to predict volcanoes or earthquakes — the point being, can technologies be used for more than one science?

Henri Calandra noted that data analytics and storage will become a huge issue. If we move to exascale, we’ll have to deal with thousands of compute nodes and update code for all these machines.

The biggest challenge is the software challenge.

When asked about the new science we will see, the panel had answers that fit their sphere of knowledge. Thomas spoke of brain modeling and self-driving cars. Frauke added genome assembly and new scientific disciplines such as personalized medicine. She says, “To attract young people, we need to marry machine learning and deep learning into HPC.” Erik notes that we have a revolution of data because of accelerators. Data and accelerators enabling genome resource will drive research in this area. Lee spoke of integrating machine learning into manufacturing processes.

Kim McMahon, XAND McMahon

As Lee said, “Diversity in funding through the European commission is really important – we need to fund the mavericks as well as the crazy ones.”

My takeaway is that building an exascale machine is not, by itself, the goal that will drive the technology forward. It’s the analysis of the data. The algorithms. Parallelizing code. There will be some who will buy the exascale machine, but it will be years after it’s available before it’s broadly adopted. As Lee said, “the focus is not the machine, the algorithms or the software, but delivering on the science. Most people in HPC are domain scientists who are trying to solve a problem.”

The post PRACEdays 2017 Wraps Up in Barcelona appeared first on HPCwire.

MIT Grad Earns ACM Doctoral Dissertation Award

Thu, 05/18/2017 - 12:38

NEW YORK, May 17, 2017 – Haitham Hassanieh is the recipient of the Association for Computing Machinery (ACM) 2016 Doctoral Dissertation Award. Hassanieh developed highly efficient algorithms for computing the Sparse Fourier Transform, and demonstrated their applicability in many domains including networks, graphics, medical imaging and biochemistry. In his dissertation “The Sparse Fourier Transform: Theory and Practice,” he presented a new way to decrease the amount of computation needed to process data, thus increasing the efficiency of programs in several areas of computing.

In computer science, the Fourier transform is a fundamental tool for processing streams of data. It identifies frequency patterns in the data, a task that has a broad array of applications. For many years, the Fast Fourier Transform (FFT) was considered the most efficient algorithm in this area. With the growth of Big Data, however, the FFT cannot keep up with the massive increase in datasets. In his doctoral dissertation Hassanieh presents the theoretical foundation of the Sparse Fourier Transform (SFT), an algorithm that is more efficient than FFT for data with a limited number of frequencies. He then shows how this new algorithm can be used to build practical systems to solve key problems in six different applications including wireless networks, mobile systems, computer graphics, medical imaging, biochemistry and digital circuits. Hassanieh’s Sparse Fourier Transform can process data at a rate that is 10 to 100 times faster than was possible before, thus greatly increasing the power of networks and devices.
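The intuition behind the SFT is easy to demonstrate: when a signal contains only a handful of frequencies, almost all of its Fourier coefficients are (near) zero, so computing every one of them is wasted work. The toy sketch below is illustrative only; it uses a naive DFT rather than Hassanieh’s algorithm, and simply shows that a 64-sample signal built from two sinusoids has only a few significant coefficients:

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform, for illustration only.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

N = 64
# A signal containing only two frequencies: 3 and 7 cycles per window.
signal = [math.sin(2 * math.pi * 3 * t / N) + math.sin(2 * math.pi * 7 * t / N)
          for t in range(N)]

spectrum = dft(signal)
# Each real sinusoid contributes a symmetric pair of peaks (k and N - k);
# every other coefficient is numerically near zero.
significant = [k for k, c in enumerate(spectrum) if abs(c) > 1.0]
print(significant)  # → [3, 7, 57, 61]
```

A sparse algorithm exploits exactly this structure, locating the few large coefficients without computing the near-zero ones.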

Hassanieh is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his MS and PhD in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). A native of Lebanon, he earned a BE in Computer and Communications Engineering from the American University of Beirut. Hassanieh’s Sparse Fourier Transform algorithm was chosen by MIT Technology Review as one of the top 10 breakthrough technologies of 2012. He has also been recognized with the Sprowls Award for Best Dissertation in Computer Science, and the SIGCOMM Best Paper Award.

Honorable Mention for the 2016 ACM Doctoral Dissertation Award went to Peter Bailis of Stanford University and Veselin Raychev of ETH Zurich. The 2016 Doctoral Dissertation Award recipients will be formally recognized at the annual ACM Awards Banquet on June 24 in San Francisco, CA.

In Bailis’s dissertation, “Coordination Avoidance in Distributed Databases,” he addresses a perennial problem in a network of multiple computers working together to achieve a common goal: Is it possible to build systems that scale efficiently (process ever-increasing amounts of data) while ensuring that application data remains provably correct and consistent? These concerns are especially timely as Internet services such as Google and Facebook have led to a vast increase in the global distribution of data. In addressing this problem, he introduces a new framework, invariant confluence, that mitigates the fundamental tradeoffs between coordination and consistency. His dissertation breaks new conceptual ground in the areas of transaction processing and distributed consistency—two areas thought to be fully understood. Bailis is an Assistant Professor of Computer Science at Stanford University. He received a PhD in Computer Science from the University of California, Berkeley and his AB in Computer Science from Harvard College.

Raychev’s dissertation, “Learning from Large Codebases,” introduces new methods for creating programming tools based on probabilistic models of code that can solve tasks beyond the reach of current methods. As the size of publicly available codebases has grown dramatically in recent years, so has interest in developing programming tools that solve software tasks by learning from these codebases. His dissertation takes a novel approach to addressing this challenge that combines advanced techniques in programming languages with machine learning practices. In the thesis, Raychev lays out four separate methods that detail how machine learning approaches can be applied to program analysis in order to produce useful programming tools. These include: code completion with statistical language models; predicting program properties from big code; learning programs from noisy data; and learning statistical code completion systems. Raychev’s work is regarded as having the potential to open up several promising new avenues of research in the years to come. Raychev is currently a co-founder of the company DeepCode. He received a PhD in Computer Science from ETH Zurich. A native of Bulgaria, he received MS and BS degrees from Sofia University.
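To give a flavor of what “statistical language models” for code can mean in the simplest case, the hedged toy sketch below (not Raychev’s actual methods, and with an invented corpus) trains a bigram model that counts which token tends to follow which, then suggests the most frequent successor:

```python
from collections import Counter, defaultdict

# Hypothetical training corpus of tokenized code snippets.
corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["for", "i", "in", "range", "(", "10", ")", ":"],
]

# Train a bigram model: for each token, count the tokens that follow it.
follows = defaultdict(Counter)
for tokens in corpus:
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

def complete(prev_token):
    """Suggest the most likely next token after prev_token, or None."""
    candidates = follows.get(prev_token)
    return candidates.most_common(1)[0][0] if candidates else None

print(complete("in"))  # → "range" ("range" follows "in" twice, "items" once)
```

Real systems of this kind operate over parse trees and far richer models, but the principle of learning likely completions from corpus statistics is the same.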

About the ACM Doctoral Dissertation Award

The Doctoral Dissertation Award is presented annually to the author(s) of the best doctoral dissertation(s) in computer science and engineering, and is accompanied by a prize of $20,000; the Honorable Mention Award is accompanied by a prize totaling $10,000. Financial sponsorship of the award is provided by Google. Winning dissertations will be published in the ACM Digital Library as part of the ACM Books Series.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post MIT Grad Earns ACM Doctoral Dissertation Award appeared first on HPCwire.

US, Europe, Japan Deepen Research Computing Partnership

Thu, 05/18/2017 - 12:08

On May 17, 2017, a ceremony was held during the PRACEdays 2017 conference in Barcelona to announce the memorandum of understanding (MOU) between PRACE in Europe, RIST in Japan, and XSEDE in the United States. The MOU allows for the promotion and sharing of resources between the organizations, including PRACE’s federated resources in Europe, the K computer and other systems in Japan, and XSEDE’s network of HPC systems and advanced digital services in the US.

Discussing details of the enhanced partnership were Dr. Anwar Osseyran, council chair of the Partnership for Advanced Computing in Europe (PRACE); John Towns, principal investigator and project director for the Extreme Science and Engineering Discovery Environment (XSEDE); and Masahiro Seki, president of the Research Organization for Information Science and Technology (RIST).

XSEDE PI John Towns (left) with RIST President Masahiro Seki (center) and PRACE Council Chair Anwar Osseyran (right).

“The aim is to stimulate collaboration in the area of research and computational science by sharing information on the usage of supercomputers,” said Dr. Osseyran. “The collaboration will be of mutual benefit, reciprocity and equality and we will identify the capabilities of cooperation in the areas of science, technology and industry. [Further, the MOU] will reinforce the HPC ecosystems for all of us.”

The agreement builds on the partners’ work with the International HPC Summer School. (The eighth such event will take place June 25 to June 30, 2017, in Boulder, Colorado, United States. Compute Canada is also a partner.)

“As research becomes much more an international endeavor, the need for infrastructure to collaborate closely and support those research endeavors becomes even more important,” said John Towns. “Having the agreement such as the one we have signed now facilitates the collaboration of the infrastructures and allows us to promote science and engineering and industry work and the use of HPC resources and very importantly the associated services and support staff that surround them. Being able to effectively use these resources is quite important and often they are very difficult as the technology moves very rapidly, so having access to the expertise is also critical. I’m very happy to be a part of this and I look forward to our work [together].”

“The three parties — PRACE, XSEDE and RIST — have recognized the importance of trilateral collaboration,” said Masahiro Seki. “Finalizing the MOU today makes me happier than anything else. In the new MOU, we will continuously implement…in the area of promotional shared use of supercomputers; at the same time our collaboration will be accelerated through the users of all the members of the partnering organizations, and especially the trilateral union will be of great help to promote advanced supercomputing in the field of sensor technology and in industry.”

The ceremony commemorates the official signing which took place on April 4, 2017. The agreement contains the following elements:

(1) Exchange of information: Mutual exchange of experiences and knowledge in user selection and user support etc. is helpful for the three parties in order to execute their projects more effectively and efficiently.

(2) Interaction amongst the staff of the parties in pursuing any identified collaboration opportunities: Due to the complex and international nature of science, engineering and analytics challenge problems that require highly advanced computing solutions, collaborative support between RIST, PRACE and XSEDE will enhance the productivity of globally distributed research teams.

(3) Holding technical meetings: Technical meetings will be held to support cross organizational information exchange and collaboration.

The post US, Europe, Japan Deepen Research Computing Partnership appeared first on HPCwire.

NSF, IARPA, and SRC Push into “Semiconductor Synthetic Biology” Computing

Thu, 05/18/2017 - 09:59

Research into how biological systems might be fashioned into computational technology has a long history with various DNA-based computing approaches explored. Now, the National Science Foundation has fired up a new program – Semiconductor Synthetic Biology for Information Processing and Storage Technologies – and just issued a solicitation in which eight to ten grants totaling around $4 million per year for three years are expected to be awarded.

The program is a joint effort between NSF, the Intelligence Advanced Research Projects Activity (IARPA), and Semiconductor Research Corporation (SRC). It has grand ambitions, and was the subject of a Computing Community Consortium blog post published yesterday by Mitra Basu, the program director: “New information technologies can be envisioned that are based on biological principles and that use biomaterials in the fabrication of devices and components; it is anticipated that these information technologies could enable stored data to be retained for more than 100 years and storage capacity to be 1,000 times greater than current capabilities. These could also facilitate compact computers that will operate with substantially lower power than today’s computers.”

Five goals are specified and each submission must include elements of at least three (proposals are due in October 2017):

  • Advancing basic and fundamental research by exploring new programmable models of computation, communication, and memory based on synthetic biology.
  • Enriching the knowledge base and addressing foundational questions at the interface of biology and semiconductors.
  • Promoting the frontier of research in the design of new bio-nano hybrid devices based on sustainable materials, including carbon-based systems that test the physical size limit in transient electronics.
  • Designing and fabricating hybrid semiconductor-biological microelectronic systems based on living cells for next-generation information processing functionalities.
  • Integrating scaling-up and manufacturing technologies involving electronic and synthetic biology characterization instruments with CAD-like software tools.

The solicitation notes that “semiconductor and information technologies are facing many challenges as CMOS/Moore’s Law approaches its physical limits, with no obvious replacement technologies in sight. Several recent breakthroughs in synthetic biology have demonstrated the suitability of biomolecules as carriers of stored digital data for memory applications…[T]he (SemiSynBio) solicitation seeks to explore synergies between synthetic biology and semiconductor technologies. Today is likely to mark the beginning of a new technological boom to merge and exploit the two fields for information processing and storage capacity.”
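To make the idea of “biomolecules as carriers of stored digital data” concrete, here is a common toy encoding used in DNA-storage discussions: each of the four nucleotides carries two bits, so one byte maps to four bases. The mapping below is an illustrative assumption, not anything specified by the solicitation; real DNA-storage codecs add error correction and avoid problematic base runs:

```python
# Toy 2-bits-per-base encoding (the A/C/G/T ordering is an assumption
# made for illustration, not part of the NSF/SemiSynBio program).
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, 4 bases per byte, MSB first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a DNA string produced by bytes_to_dna back into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"HPC")
print(encoded)  # → "CAGACCAACAAT"
```

At two bits per nucleotide, the theoretical density is enormous; the engineering challenges the program targets lie in synthesis, readout, and reliability rather than in the encoding itself.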

As described in the solicitation the program’s goal is to “foster exploratory, multi-disciplinary, longer-term basic research leading to novel high-payoff solutions for the information technology industry based on recent progress in synthetic biology and the know-how of semiconductor technology. It is also anticipated that research in synthetic biology will benefit by leveraging semiconductor capabilities in design and fabrication of hybrid and complex material systems for extensive applications in biological and information processing technologies. In addition, the educational goal is to train a new cadre of students and researchers.”

A bit tongue-in-cheek, and certainly not observed for the first time: it’s safe to say nature has already figured out how to do this at least once, at a high level (perhaps), with human computers conditioned by deep learning and programmed to survive, explore, and continue learning.

Link to NSF solicitation: https://www.nsf.gov/pubs/2017/nsf17557/nsf17557.htm

Link to CCC blog: http://www.cccblog.org/2017/05/17/new-nsf-program-solicitation-on-semiconductor-synthetic-biology-for-information-processing-and-storage-technologies-semisynbio/

The post NSF, IARPA, and SRC Push into “Semiconductor Synthetic Biology” Computing appeared first on HPCwire.

Rescale Named a “Cool Vendor” by Gartner

Thu, 05/18/2017 - 09:09

SAN FRANCISCO, Calif., May 18, 2017 — Rescale, the turnkey platform provider in cloud high performance computing, today announced that it has been named a “Cool Vendor” based on the May 2017 report “Cool Vendors in Cloud Infrastructure, 2017” by leading industry analyst firm Gartner.

The report makes recommendations for infrastructure and operations (I&O) leaders seeking to modernize and exploit more agile solutions, including the following:

  • “I&O leaders should examine these Cool Vendors closely and leverage the opportunities that they provide.”
  • “As enterprises grapple with the right mix of on-premises, off-premises and native cloud, choosing a cloud infrastructure vendor becomes more critical.”

“Rescale is very excited to be named a Gartner ‘Cool Vendor’ for 2017,” said Jonathan Oakley, VP of Marketing at Rescale. “High-performance computing (HPC) is the fastest growing compute segment of the cloud market with significant pent-up demand as CIOs look for efficiencies and agility in their IT infrastructure during their cloud transformation, while at the same time satisfying the increasing HPC demands of end-users. Those HPC end-users have become mission critical for the Global Fortune 500 leaders in aerospace, automotive, energy, financial services, industrials, life sciences, and semiconductor industry segments.”

Rescale’s ScaleX platform provides the enterprise with a turnkey multi-cloud solution accessing the largest network of HPC capacity globally, with over 60 data center locations, as well as hybrid solutions, allowing CIOs to leverage existing infrastructure assets. There are no compromises with Rescale’s suite of solutions. Rescale works with all major public cloud providers, including Amazon Web Services, Google Cloud Platform, IBM Cloud, and Microsoft Azure, along with over 220 natively integrated software solutions from leading vendors including ANSYS, Dassault Systemes, Siemens, and many others.

Rescale enables immediate benefits across the enterprise, such as:

  • Faster time to market – shortened design cycles and improved software deployment
  • Transformed IT agility – instant access to HPC infrastructure and global collaboration
  • Integrated solutions – hybrid cloud, private cloud, public cloud, and on-premises compute
  • Optimized cost structure – pay as you go for only what you need


Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale

The post Rescale Named a “Cool Vendor” by Gartner appeared first on HPCwire.

In-Memory Computing Summit Previews Keynotes for First Annual European Conference

Thu, 05/18/2017 - 09:07

FOSTER CITY, Calif., May 18, 2017 — GridGain Systems, provider of enterprise-grade in-memory computing platform solutions based on Apache Ignite, today announced the keynote addresses for the In-Memory Computing Summit Europe, the premier In-Memory Computing (IMC) conference for users from across Europe and Asia. The IMC Summit Europe will take place at the Mövenpick Hotel Amsterdam City Centre, June 20-21, 2017. Attendees can receive a 10 percent Early Bird discount on the registration fee when registering by May 21, 2017.

The In-Memory Computing Summit Europe is the only industry-wide event focusing on the full range of in-memory computing technologies. It brings together computing visionaries, decision makers, experts and developers for the purpose of education, discussion and networking.

The keynote addresses for this year’s event include:

  • In-Memory Computing, Digital Transformation, and the Future of Business — Abe Kleinfeld, GridGain Systems
  • A new platform for collaboration between fintechs, academics and the finance industry — Felix Grevy, Misys
  • In Memory Computing: High performance and highly efficient web application scaling for the travel industry — Chris Goodall, CG Consultancy
  • SNIA and Persistent Memory — Alex McDonald, SNIA Europe
  • Panel Discussion: The Future of In-Memory Computing — Rob Barr, Barclays; Lieven Merckx, ING; Chris Goodall, CG Consultancy; Sam Lawrence, FSB Technology; and Nikita Ivanov, GridGain Systems

Super Saver Registration Discounts
Attendees can receive a 10 percent discount by registering now. The Early Bird admission rate of EUR 449 ends on May 21, 2017. Register via the conference website, or email attendance and registration questions to info@imcsummit.org.

By sponsoring the In-Memory Computing Summit Europe, organizations gain a unique opportunity to enhance their visibility and reputation as leaders in in-memory computing products and services. They can interact with key in-memory computing business and technical decision makers, connect with technology purchasers and influencers, and help shape the future of Fast Data.

Sponsorship packages are available. Visit the conference website for more information on sponsorship benefits and pricing and to download a prospectus. Current sponsors include:

  • Platinum Sponsors — GridGain Systems
  • Gold Sponsors — ScaleOut Software
  • Silver Sponsors — Fujitsu, Hazelcast
  • Foundation/Association Sponsor — SNIA
  • Media Sponsors — IT for CEOs, Jet Info Magazine

About the In-Memory Computing Summits

The In-Memory Computing Summits in Europe and North America are the only industry-wide events focused on in-memory computing-related technologies and solutions. They are the perfect opportunity to connect with technical decision makers, IT implementers, and developers who make or influence purchasing decisions in the areas of in-memory computing, Big Data, Fast Data, IoT, web-scale applications and high performance computing (HPC). Attendees include CEOs, CIOs, CTOs, VPs, IT directors, IT managers, data scientists, senior engineers, senior developers, architects and more. The Summits are unique forums for networking, education and the exchange of ideas — ideas that are powering the Digital Transformation and future of Fast Data. For more information, visit https://imcsummit.org and follow the event on Twitter @IMCSummit.

About GridGain Systems

GridGain Systems is revolutionizing real-time data access and processing by offering enterprise-grade in-memory computing solutions built on Apache Ignite. GridGain solutions are used by global enterprises in financial, software, ecommerce, retail, online business services, healthcare, telecom and other major sectors. GridGain solutions connect data stores (SQL, NoSQL and Apache Hadoop) with cloud-scale applications and enable massive data throughput and ultra-low latencies across a scalable, distributed cluster of commodity servers. GridGain is the most comprehensive, enterprise-grade in-memory computing platform for high volume ACID transactions, real-time analytics and hybrid transactional/analytical processing. For more information, visit gridgain.com.

Source: GridGain Systems

The post In-Memory Computing Summit Previews Keynotes for First Annual European Conference appeared first on HPCwire.

BioTeam Test Lab at TACC Deploys Avere Systems

Thu, 05/18/2017 - 09:04

PITTSBURGH, Penn., May 18, 2017 — Avere Systems, a leading provider of hybrid cloud enablement solutions, announced today that BioTeam, Inc. is incorporating Avere FXT Edge filers into its Convergence Lab, a testing environment hosted at the Texas Advanced Computing Center (TACC), in Austin, Texas. In cooperation with vendors and TACC, BioTeam utilizes the lab to evaluate solutions for its clients by standing up, configuring and testing new infrastructure under conditions relevant to life sciences in order to deliver on its mission of providing objective, vendor agnostic solutions to researchers. The life sciences community is producing increasingly large amounts of data from sources ranging from laboratory analytical devices, to research, to patient data, which is putting IT organizations under pressure to support these growing workloads.

Avere’s technology offers life science organizations the ability to flexibly process and store these growing datasets where it makes the most sense — at performance levels that help to improve the rate of discovery. Avere Edge filers allow seamless integration of multiple storage destinations, including multiple public clouds and on-premises data centers, increasing the options that organizations like BioTeam can provide its customers for data center optimization. BioTeam plans on utilizing the FXT filers to test burst buffer workloads and hybrid storage strategies for life sciences data and workloads in order to develop effective recommendations for their customers under the right conditions.

Avere’s technology provides many world-renowned life science research facilities with flexibility and performance benefits, in addition to the ability to support the large data sets common to BioIT workflows. By reducing the dependency on traditional storage and facilitating modernization with hybrid cloud infrastructures, Avere also helps organizations keep their IT costs in check.

The BioTeam Lab takes an integrative approach to streamlining computer-aided research from the lab bench to knowledge. Solutions are driven by BioTeam’s clients and tailored to meet the scientific needs of the organization. Inside the lab, BioTeam works with vendors to understand the end-to-end experience of using their technologies and handles everything including the racking, installations, configuration, testing and integration, vendor communication and return shipping. Remote access to the lab is available from virtually any location with an internet connection. TACC provides the space, power, cooling, connectivity, support and deep collaboration on lab projects.

“BioTeam is a fast-growing consulting company that is comprised of a highly cross-functional and creative group of scientists and engineers. Our unique cross section of experience allows us to enable computer-aided discovery in life sciences by creating and adapting IT infrastructure and services to fit the scientific goals of the organizations we work with,” said Ari Berman, Vice President and General Manager of Consulting, BioTeam. “As part of our larger suite of hardware and software, having Avere in our lab gives us the hands-on ability to test Avere-based hybrid storage scenarios in a controlled and optimized life sciences environment, utilizing real workloads. These scenarios will allow BioTeam to understand where Avere technology best fits in the life sciences and healthcare domain and will allow us to innovate next-generation strategies for storage and analytics workflows. Having this opportunity allows us to deepen our understanding of the overall storage landscape and to be able to recommend fit for purpose solutions to our customers.”

“Working with BioTeam is a natural fit for Avere. Our technology has a solid track record of helping life science organizations leverage the cloud for large workloads for both cloud compute and storage resources,” said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. “We look forward to collaborating with the BioTeam and continuing to help the industry effectively integrate cloud into their data center strategies and seamlessly use multiple cloud vendors.”

Next week at the BioIT World Conference in Boston, BioTeam and Avere will co-present “Freeing Data: How to Win the War with Hybrid Clouds.” BioTeam Senior Scientific Consultant Adam Kraut and Avere CEO Ron Bianchini will take the stage on May 25, 2017 at 12:20pm ET. Avere Systems is exhibiting at the show, booth #536, from May 23 – 25, 2017.

About Avere Systems

Avere helps enterprise IT organizations enable innovation with high-performance data storage access, and the flexibility to compute and store data where necessary to match business demands. Customers enjoy easy reach to cloud-based resources, without sacrificing the consistency, availability or security of enterprise data. A private company based in Pittsburgh, Pennsylvania, Avere is led by industry experts to support the demanding, mission-critical hybrid cloud systems of many of the world’s most recognized companies and organizations. Learn more at www.averesystems.com.

About BioTeam, Inc.

BioTeam, Inc. has a well-established history of providing complete, and forward-thinking solutions to the life sciences. With a cross-section of expertise that includes classical laboratory scientific training, applications development, informatics, large data center installations, HPC, enterprise and scientific network engineering, and high-volume as well as high-performance storage, BioTeam leverages the right technologies customized to its client’s unique needs in order to enable them to reach their scientific objectives. For more information, please visit the company website.

About Texas Advanced Computing Center

TACC designs and deploys the world’s most powerful advanced computing technologies and innovative software solutions to enable researchers to answer complex questions like these and many more. Every day, researchers rely on our computing experts and resources to help them gain insights and make discoveries that change the world. Find out more at https://www.tacc.utexas.edu/.

Source: Avere Systems

The post BioTeam Test Lab at TACC Deploys Avere Systems appeared first on HPCwire.

Hyperion Research Adds New HPC Innovation Awards for Datacenters

Wed, 05/17/2017 - 14:45

ST. PAUL, Minn., May 17, 2017 – Hyperion Research, the new name for the former IDC HPC group, today announced it is adding two new categories to its global awards program for high performance computing (HPC) innovation. Both new categories are for innovations benefiting HPC use in data centers—either dedicated HPC data centers or the growing number of enterprise data centers that are exploiting HPC server and storage systems for advanced analytics. The new categories complement Hyperion’s long-standing innovation awards for HPC users:

  1. The first new award category rewards applied HPC innovations for which data centers are primarily responsible.
  2. The second new category rewards HPC vendors for HPC innovations that have proven to benefit data centers.

Hyperion also welcomes submissions for HPC innovations resulting from collaborations between data centers and vendors, and for innovations involving private, hybrid or public clouds.

“Hyperion Research welcomes award submissions at any time of year and announces awards twice a year, at the annual ISC European supercomputing conference in June and the annual SC worldwide supercomputing conference in November,” according to Hyperion Research CEO Earl Joseph. “The first round of winners of the new awards will be made public at the ISC’17 conference that will be held in June 2017 in Frankfurt, Germany.”

Submission forms are available at Hyperion’s website: http://www.hpcuserforum.com/innovationaward/applicationform.html

About Hyperion Research

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide. As Hyperion Research, the team continues all the worldwide activities that have made it the world’s most respected HPC industry analyst group for more than 25 years, including HPC and HPDA market sizing and tracking, subscription services, custom studies and papers, and operating the HPC User Forum. For more information, see www.hpcuserforum.com.

Source: Hyperion Research

The post Hyperion Research Adds New HPC Innovation Awards for Datacenters appeared first on HPCwire.

TACC Simulations Advance Cancer Immunotherapy

Wed, 05/17/2017 - 14:40

AUSTIN, May 17, 2017 — The body has a natural way of fighting cancer – it’s called the immune system, and it is tuned to defend our cells against outside infections and internal disorder. But occasionally, it needs a helping hand.

Immunotherapy fights cancer by supercharging the immune system’s natural defenses or contributing additional immune elements that can help the body kill cancer cells.

In recent decades, immunotherapy has become an important tool in treating a wide range of cancers, including breast cancer, melanoma and leukemia.

But alongside its successes, scientists have discovered that immunotherapy sometimes has powerful — even fatal — side-effects. Much still needs to be learned about how the immune system fights cancer, and in this area, supercomputers play an important role.

Identifying Patient-Specific Immune Treatments

Not every immune therapy works the same way in every patient. Differences in an individual’s immune system may mean one treatment is more appropriate than another. Furthermore, tweaking a patient’s immune system might heighten the efficacy of certain treatments.

Researchers from Wake Forest School of Medicine and Zhejiang University in China developed a novel mathematical model to explore the interactions between prostate tumors and common immunotherapy approaches, individually and in combination. In a study published in February 2016 in Nature Scientific Reports, they used their model to predict how prostate cancer would react to four common immunotherapies:

  • Androgen deprivation therapy — used to control prostate cancer cell growth by suppressing or blocking the production and action of the hormone androgen in men;
  • Vaccines — which train the immune system to recognize and destroy harmful substances;
  • Treg depletion — where the subpopulation of T cells, which modulate the immune system, are reduced to increase the efficacy of immunotherapy treatments; and
  • IL-2 neutralization — which neutralizes interleukin-2, a signaling molecule in the immune system.

To study the systematic effects of these four treatments, the researchers incorporated data from animal studies into their complex mathematical models and simulated tumor responses to the treatments using the Stampede supercomputer at the Texas Advanced Computing Center (TACC).

Model construction for predicting treatment outcomes of prostate cancer. [Courtesy: Huiming Peng, Weiling Zhao, Hua Tan, Zhiwei Ji, Jingsong Li, King Li & Xiaobo Zhou, Scientific Reports 6, Article number: 21599 (2016)]

“We do a lot of modeling which relies on millions of simulations,” said Jing Su, a researcher at the Center for Bioinformatics and Systems Biology at Wake Forest School of Medicine and assistant professor in the Department of Diagnostic Radiology. “To get a reliable result, we have to repeat each computation at least 100 times. We want to explore the combinations and effects and different conditions and their results.”
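Su’s “repeat each computation at least 100 times” protocol is a standard ensemble approach for stochastic simulations: run many realizations, then average. As a toy illustration only (this is not the Wake Forest model; the growth, carrying-capacity, and kill parameters below are hypothetical), a minimal Python sketch:

```python
import random
import statistics

def simulate_tumor(days=100, growth=0.08, capacity=1.0, kill=0.05, seed=None):
    """Toy logistic tumor-growth model with a constant treatment 'kill'
    term and small random perturbations (all parameters hypothetical)."""
    rng = random.Random(seed)
    v = 0.01  # initial tumor volume, arbitrary units
    for _ in range(days):
        noise = rng.gauss(0.0, 0.001)
        v += growth * v * (1.0 - v / capacity) - kill * v + noise
        v = max(v, 0.0)  # volume cannot go negative
    return v

# Repeat the stochastic computation 100 times and summarize, mirroring
# the repeat-and-average protocol described above.
runs = [simulate_tumor(seed=i) for i in range(100)]
mean_volume = statistics.mean(runs)
spread = statistics.stdev(runs)
```

On a real HPC system like Stampede, each of the 100 realizations would be an independent job or task, which is why such ensemble workflows parallelize so naturally.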

The researchers found that the depletion of Treg cells and the neutralization of interleukin-2 can have a stronger effect when combined with androgen deprivation therapy and vaccines.

The study highlights a potential therapeutic strategy that may manage prostate tumor growth more effectively. It also provides a framework for studying tumor-related immune mechanisms and the selection of therapeutic regimens in other types of cancer.

In separate work published in Nature Scientific Reports in July 2016, the team used data-driven methods to identify two distinct groups of breast cancer patients: one that displayed a wide range of immune pathways and could be effectively treated with hormone therapy and chemotherapy; and another that displayed less immunity and was best treated with surgery.

Read the full article at: https://www.tacc.utexas.edu/-/advancing-cancer-immunotherapy-with-computer-simulations-and-data-analysis

Source: Aaron Dubrow, TACC

The post TACC Simulations Advance Cancer Immunotherapy appeared first on HPCwire.

DOE’s HPC4Mfg Leads to Paper Manufacturing Improvement

Wed, 05/17/2017 - 11:18

Papermaking ranks third behind only petroleum refining and chemical production in terms of energy consumption. Recently, simulations made possible by the U.S. Department of Energy’s HPC4Mfg program helped a group of paper companies develop a strategy likely to cut energy costs by 10-20 percent.

“This was true ‘HPC for manufacturing,’” said David Trebotich, a computational scientist in the Computational Research Division at Berkeley Lab and co-PI on the project. “We used 50,000-60,000 cores at NERSC to do these simulations. It’s one thing to take a research code and tune it for a specific application, but it’s another thing to make it effective for industry purposes. Through this project we have been able to help engineering-scale models be more accurate by informing better parameterizations from micro-scale data.”

The effort was run jointly with the companies and Lawrence Livermore and Lawrence Berkeley national laboratories. Simulations were run on the National Energy Research Scientific Computing Center’s (NERSC) Edison system. A brief account of the project is on the NERSC web site (HPC4Mfg Paper Manufacturing Project Yields First Results). The first phase targeted “wet pressing,” an energy-intensive process in which water is squeezed out of the wood pulp by mechanical pressure into press felts, which absorb water like a sponge, before the sheet is sent through a drying process.

“The major purpose is to leverage our advanced simulation capabilities, high performance computing resources and industry paper press data to help develop integrated models to accurately simulate the wet paper pressing process,” said Yue Hao, an LLNL scientist and co-principal investigator. Trebotich ran a series of production runs on NERSC’s Edison system and was successful in providing his LLNL colleagues with numbers from these microscale simulations at compressed and uncompressed pressures, which improved their model.

“I used the flow and transport solvers in Chombo-Crunch to model flow in paper press felt, which is used in the drying process,” Trebotich explained. “The team at LLNL has an approach that can capture the larger scale pressing or deformation as well as the flow in bulk terms. However, not all of the intricacies of the felt and the paper are captured by this model, just the bulk properties of the flow and deformation. My job was to improve their modeling at the continuum scale by providing them with an upscaled permeability-to-porosity ratio from pore scale simulation data.”
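Upscaling from pore-scale simulation to a continuum parameter of this kind typically reduces to applying Darcy’s law, q = (k/μ)·ΔP/L, to the resolved flow field, then dividing the resulting permeability by porosity. A minimal Python sketch of that step, using hypothetical flux and porosity values rather than actual Chombo-Crunch output:

```python
import statistics

def darcy_permeability(flux, viscosity, length, dp):
    """Estimate permeability k (m^2) from Darcy's law,
    q = (k / mu) * (dP / L), given the superficial flux q (m/s)."""
    return flux * viscosity * length / dp

# Hypothetical pore-scale results for three felt subvolumes:
# (mean flux in m/s, porosity) at a fixed pressure drop.
samples = [(2.0e-6, 0.42), (1.6e-6, 0.38), (2.4e-6, 0.45)]
mu = 1.0e-3     # water viscosity, Pa*s
length = 1.0e-3  # 1 mm sample thickness
dp = 1.0e3      # 1 kPa pressure drop

# Upscaled permeability-to-porosity ratio handed to the continuum model.
ratios = [darcy_permeability(q, mu, length, dp) / phi for q, phi in samples]
upscaled_ratio = statistics.mean(ratios)
```

In the actual project this ratio would be computed from resolved pore-scale velocity fields at both compressed and uncompressed felt states, giving the continuum model a physically informed parameterization.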

Link to article: http://www.nersc.gov/news-publications/nersc-news/science-news/2017-2/hpc4mfg-paper-manufacturing-project-yields-first-results/

Image: The researchers used a computer simulation framework, developed at LLNL, that integrates mechanical deformation and two-phase flow models, and a full-scale microscale flow model, developed at Berkeley Lab, to model the complex pore structures in the press felts. Image: David Trebotich, Berkeley Lab

The post DOE’s HPC4Mfg Leads to Paper Manufacturing Improvement appeared first on HPCwire.

PRACEdays 2017: The start of a beautiful week in Barcelona

Wed, 05/17/2017 - 10:38

Touching down in Barcelona on Saturday afternoon, it was warm, sunny, and oh so Spanish. I was greeted at my hotel with a glass of Cava to sip and treated to a tour of the historic hotel. A short rest, walk around Barcelona, and a little bit of work filled the time until dinner — at 8pm.

On Tuesday morning, PRACEdays 2017 commenced as part of the European HPC Summit Week. The program began with a welcome by Sergi Girona, EXDCI coordinator, and Serge Bogaerts, managing director of PRACE, outlining the week of plenaries, keynotes, breakout sessions, BoFs, and poster sessions. There will be a lot to see and learn this week in Barcelona!

PRACE Council Chair Anwar Osseyran was next with a detailed overview of PRACE achievements and the challenges ahead. PRACE prides itself on providing open access to the best HPC systems for European scientists. Its criterion: scientific excellence.

Anwar Osseyran at PRACEdays 2017

The PRACE partnership includes seven “Tier 0” systems (top systems available for international use), including the recent addition Piz Daint, currently number eight on the Top500 list. Together, these seven world-class systems deliver over 60 petaflops of peak performance, enabling 524 scientific projects.

Anwar grouped the challenges PRACE sees in adapting and modernizing HPC infrastructure into four quadrants:

  • European open science cloud: Enabling persistent access to data. This is a huge challenge affecting health care.
  • Strong HPC infrastructures for data processing.
  • Adapting HPC solutions for cloud environments to make it easy and accessible for scientists.
  • How to achieve exascale.

As PRACE considers these challenges, the question of funding comes in. How will PRACE fund all its ambitions? If it can’t do it all, what technologies and applications should it focus on? As Anwar says, consider the “mundane versus heavenly science. It’s about choices.”

On more than one occasion during his presentation, Anwar discussed the concept of collaboration among the communities versus the benefits of competition, suggesting that competition among scientists produces better results. I would have thought that collaboration among the supercomputing centers would be more of a norm: sharing resources and results, all contributing to better science.

As Anwar said in his closing statement: “It’s about finding a balance between traditional, disruptive, and fundamental science.”

The first keynote was delivered by Minna Palmroth, titled Understanding Near-Earth Space in Six Dimensions.

Minna Palmroth at PRACEdays 2017

The thing I really love about events like this is the opportunity to learn more about the science, the big science problems, and things I’ve never thought about. Minna hit the mark in her presentation on near-space problems.

The Earth has radiation belts. Navigation and weather satellites sail in plasma around the Earth, traversing the radiation belts. Two types of phenomena affect spacecraft and satellites: single event upsets (sudden system failures), and the aging of spacecraft due to the harsh radiation they experience in the radiation belts.

The radiation belt situation is already extremely important, and will become more so as the number of spacecraft grows. The challenge in a nutshell: simulating the large and ever-increasing number of spacecraft in the radiation belts requires a dense grid and complex grid calculations in multiple dimensions simultaneously.

Minna is a research professor and unit head in the Department of Physics at the University of Helsinki, Finland. Her team tackles parallelization on three levels:

  • Across nodes on clusters and supercomputers using MPI.
  • Across multiple cores within a node using OpenMP.
  • Within cores with vectorization.

Their most recent development adds support for multiple ion species and an optimised boundary-conditions implementation, resulting in improved scaling. This has given them the processing power and speed to do the math needed for the near-space problems they have identified.

The simulations Minna shared of solar winds and radiation belts as they hit the Earth’s atmosphere are fascinating. The solar winds create significant amounts of heat that dissipate and spread around the Earth’s atmosphere.

Vlasiator is a newly developed large-scale space physics model. The goal is to model the entire near-Earth space, going far beyond existing large-scale plasma simulations. This will extend the modeling from today’s solar-wind and radiation-belt simulations to space weather and spacecraft instrument optimization. Vlasiator has already been used to discover phenomena no one thought existed, and with continued improvements such as the addition of machine learning, it will be an important tool for understanding space phenomena and for protecting spacecraft, technological systems, and human life in space.

The second keynote, Using Big-Data Methodologies in the Chemical Industry, was given by Telli van der Lei.

The information shared by Telli is not surprising; we have long known that modeling supply chains can produce positive results. Regardless, this is a topic that can’t be discussed enough, especially at a conference that is heavily research- and academic-oriented (approximately 73 percent of attendees). Talking about the business application of the science being modeled, and the improvements it enables, is a good thing: it takes science and computation developed in one place, applies and enhances it in another, and demonstrates the results back to the first.

Telli van der Lei at PRACEdays 2017

Telli is an academic now working in industry, as a senior scientist in Supply Chain and Process Modeling at DSM. Doing this modeling in industry, Telli professes, can be quite hard. In her presentation, she covered the industry issues DSM thinks about, the results achieved with supply chain modeling, and the challenges she sees going forward.

For industry issues, health is number one; nearly every country in the world faces an aging population, rising healthcare demands, and questions of optimal food composition. After health comes nutrition: how to feed a growing population while urbanization clusters people together and shrinks farming space. Lastly, resource constraints on the materials feeding into the supply chain are a major issue.

Kim McMahon, XAND McMahon

DSM uses computer modeling to simulate the supply chain from raw materials to manufacturing, warehouse to the client. Using the modeling and incorporating the process into their supply chain, they have realized some amazing results in correctness of orders, reduction in supply chain costs, reduction of inventory, and a more efficient, flexible, and responsive supply chain.

Out of all this, they now have modeling and advanced analytics capabilities built on proven successes. Challenges remain: from a modeling perspective, how do you optimize the input, output, and runtime of existing models, or incorporate business choices into them? You can use HPC to simulate, but how do you convince yourself the results have value? As Telli said: “It’s not yay, here we go [with our results], it’s how you change your business.”

The post PRACEdays 2017: The start of a beautiful week in Barcelona appeared first on HPCwire.

TACC, Texas Digital Library Join Chronopolis Digital Preservation Network

Wed, 05/17/2017 - 10:03

AUSTIN, Texas, May 17, 2017 — The Texas Digital Library (TDL), along with the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, has joined the Chronopolis digital preservation network, becoming the first new node since the network’s inception in 2008. Other nodes in the TRAC-certified digital preservation network, which is administered by the UC San Diego Library, include the University of California San Diego; the National Center for Atmospheric Research; and the University of Maryland Institute for Advanced Computer Studies.

“By collaborating with other mission-aligned institutions in the Chronopolis network, we are advancing our collective goal of digitally preserving our cultural and scientific heritage for this and future generations,” said Kristi Park, Executive Director of the Texas Digital Library. “In Texas, in particular, this partnership gives our state’s institutions another trusted, non-commercial option for secure long-term storage of their uniquely valuable digital materials.”

“This partnership, along with TACC’s participation in the national Digital Preservation Network, demonstrates our commitment to supporting solutions for the long term preservation of our digital intellectual heritage,” said Chris Jordan, leader of the Data Management and Collections Group at TACC. “Our users create and utilize petabytes of irreplaceable digital data on a daily basis, and it is important for us to support solutions for that data at all stages of the research life cycle, including long-term preservation and access for future students and researchers.”

Partnering with TACC to provide a local Chronopolis replication node and access to petabyte-scale storage on the Corral data management resource, TDL will offer digital preservation services to its members using DuraCloud™@TDL for simple ingest and management. Chronopolis services will be part of a broad range of TDL Digital Preservation Services that also include managed commercial storage in the Amazon cloud, as well as Digital Preservation Network (DPN) services. Chronopolis, the first DPN node to offer production services, is one of TDL’s efforts to provide community-driven long-term preservation alternatives to Amazon storage.

“Having TDL as a partner is a strategic collaboration that makes sense for a number of reasons,” said Brian E. C. Schottlaender, Principal Administrator for Chronopolis and UC San Diego’s University Librarian. “Having TDL on board will increase the geographical diversity of the Chronopolis network, advance our shared mission to preserve critical digital materials, and extend digital preservation services throughout Texas.”

Chronopolis has the capacity to preserve hundreds of terabytes of digital data of any type, with minimal requirements of the data provider. The system leverages high-speed networks, mass-scale storage capabilities, and the expertise of the partners, to provide a geographically distributed, heterogeneous, and highly redundant preservation repository system. Features of the network include: three geographically distributed copies of deposited data; curatorial audit reporting; and application of contemporary best practices for data packaging and sharing. Chronopolis has been certified as a “trustworthy digital repository” by the Center for Research Libraries (CRL), and meets accepted best practices in the management of digital repositories.

The Texas Digital Library is a consortium of Texas institutions that builds capacity for preserving, managing, and providing access to unique digital collections of enduring value. TDL’s empowering technology infrastructure, services, and community programs support research, teaching, and digital curation efforts at member institutions; facilitate collaboration amongst the TDL community and with external partners; and connect local work to a global ecosystem of digital library efforts.

About TACC

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is a leading research center for advanced computational science, engineering and technology. TACC’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. To fulfill this mission, TACC provides comprehensive advanced computing resources and support services to researchers in Texas and across the nation. TACC conducts research and development in applications and algorithms, in computing systems design/architecture, and in programming tools and environments to produce new technologies that expand the capabilities of researchers for knowledge discovery. TACC also educates the next generation of computational researchers, and promotes awareness of the importance and impact of computing to science and society. Visit TACC’s website at: www.tacc.utexas.edu

Source: TACC

The post TACC, Texas Digital Library Join Chronopolis Digital Preservation Network appeared first on HPCwire.

Cycle Computing Presenting at Bio IT World

Wed, 05/17/2017 - 09:30

NEW YORK, May 17, 2017 — Cycle Computing, the global leader in Big Compute and Cloud HPC orchestration software, today announced that it will be demonstrating its latest version of the CycleCloud software suite at the 15th Bio-IT World Conference & Expo to be held May 23rd-25th at the Seaport World Trade Center in Boston. Highlights to be demonstrated include full support for launching and managing GPU instances, complete multi-cloud cost management, and security.

The CycleCloud demonstration will highlight how the latest release of CycleCloud delivers simple, managed access to Cloud HPC and Data Science in minutes. Customers can seamlessly run workloads across Microsoft Azure, Google Cloud Platform, and Amazon Web Services. Key features of this latest release include:

  • Multi-cloud workflows for compute & data
  • Templated life science apps
  • Active Directory and LDAP support
  • Management of tens of clusters and thousands of cores
  • Consistent security and encryption
  • Cost control & reporting per user or group

Additionally, Cycle Computing’s CEO, Jason Stowe, will make two presentations during the event. The first is the keynote introduction to the “Plenary Keynote Session CIO Panel” on May 24th at 8:00 am ET. Also on May 24th, Jason will present “How Cloud Has Changed Life Sciences” as part of the “Cloud Computing: Implementing Cloud” track. His presentation, from 12:00-12:30 PM ET, will focus on how cloud computing, with its flexibility and scale, has provided cost-effective and powerful compute access to life sciences researchers.

Last year, Bio-IT World Conference & Expo brought together more than 3,300 attendees from 41 countries to navigate the new era of precision medicine and build collaboration across the industry. With over 13 tracks, 14 pre-conference workshops, and three industry awards, the 2017 Bio-IT World Conference & Expo promises to be bigger than ever with more expert content, more industry insights, and more opportunities to build relationships.

Cycle Computing’s CycleCloud software will be on display at the show at booth 361. CycleCloud orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. It takes the delays, configuration burden, administration, and sunk hardware costs out of HPC clusters and leverages multi-cloud environments, moving seamlessly between internal clusters, Google Cloud Platform, Microsoft Azure, Amazon Web Services, and other cloud environments. More information about the CycleCloud cloud management software suite can be found at www.cyclecomputing.com.

About Cycle Computing

Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users, running multiple applications, across multiple clouds, enabling users to never wait for compute and solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in many Global 2000 manufacturing, Big 10 Life Insurance, Big 10 Pharma, Big 10 Hedge Funds, startups, and government agencies, to leverage hundreds of millions of hours of cloud based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com

Source: Cycle Computing

The post Cycle Computing Presenting at Bio IT World appeared first on HPCwire.