HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

OCF Supports Scientific Research at the Atomic Weapons Establishment

Mon, 04/24/2017 - 07:40

LONDON, April 24, 2017 — High Performance Computing (HPC) storage and data analytics integrator OCF is supporting scientific research at the UK Atomic Weapons Establishment (AWE) with the design, testing and implementation of a new HPC cluster and a separate big data storage system.

AWE has been synonymous with science, engineering and technology excellence in support of the UK’s nuclear deterrent for more than 60 years. AWE, working to the required Ministry of Defence programme, provides and maintains warheads for the Trident nuclear deterrent.

The new HPC system is built on IBM’s POWER8 architecture and a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both server and storage systems use IBM Spectrum Protect for data backup and recovery.

“Our work to maintain and support the Trident missile system is undertaken without actual nuclear testing, which has been the case ever since the UK became a signatory to the Comprehensive Nuclear Test Ban Treaty (CTBT); this creates extraordinary scientific and technical challenges – something we’re tackling head on with OCF,” comments Paul Tomlinson, HPC Operations at AWE. “We rely on cutting-edge science and computational methodologies to verify the safety and effectiveness of the warhead stockpile without conducting live testing. The new HPC system will be vital in this ongoing research.”

From initial design and concept through manufacture, assembly, in-service support, decommissioning and disposal, AWE works across the entire life cycle of its warheads, ensuring maximum safety and protecting national security at all times.

The central data storage system, Cedar 3, will be used by scientists across the AWE campus, with data replicated across the site.

“The work of AWE is of national importance and so its team of scientists need complete faith and trust in the HPC and big data systems in use behind the scenes, and the people deploying the technology,” says Julian Fielden, managing director, OCF. “Through our partnership with IBM, and the people, skills and expertise of our own team, we have been able to deliver a system which will enable AWE to maintain its vital research.”

The new HPC system runs on a suite of IBM POWER8 processor-based Power Systems servers running the IBM AIX V7.1 and Red Hat Enterprise Linux operating systems. The HPC platform consists of IBM Power E880, IBM Power S824L, IBM Power S812L and IBM Power S822 servers, providing ample processing capability to support all of AWE’s computational needs, plus an IBM tape library device to back up computation data.

Cedar 3, AWE’s parallel file system storage, is an IBM Storwize storage system. IBM Spectrum Scale is in use to enable AWE to more easily manage data access amongst multiple servers.

About the Atomic Weapons Establishment (AWE)

The Atomic Weapons Establishment has been central to the defence of the United Kingdom for more than 60 years through its provision and maintenance of the warheads for the country’s nuclear deterrent. This encompasses the initial concept, assessment and design of the nuclear warheads, through component manufacture and assembly, in-service support, decommissioning and then disposal.

Around 4,500 staff are employed at the AWE sites together with over 2,000 contractors. The workforce consists of scientists, engineers, technicians, crafts-people and safety specialists, as well as business and administrative experts – many of whom are leaders in their field. The AWE sites and facilities are government owned but the UK Ministry of Defence (MOD) has a government-owned contractor-operated contract with AWE Management Limited (AWE ML) to manage the day-to-day operations and maintenance of the UK’s nuclear stockpile. AWE ML is formed of three shareholders – Lockheed Martin, Serco and Jacobs Engineering Group. For further information, visit: http://www.awe.co.uk

About OCF

OCF specialises in supporting the significant big data challenges of private and public UK organisations. Our in-house team and extensive partner network can design, integrate, manage or host the high-performance compute, storage hardware and analytics software necessary for customers to extract value from their data. With a 14-year heritage in HPC and in managing big data challenges, OCF now works with over 20 per cent of the UK’s Universities, Higher Education Institutes and Research Councils, as well as commercial clients from the automotive, aerospace, financial, manufacturing, media, oil & gas, pharmaceutical and utilities industries.

Source: OCF

The post OCF Supports Scientific Research at the Atomic Weapons Establishment appeared first on HPCwire.

Internet2 Announces Winners of 2017 Gender Diversity Award

Mon, 04/24/2017 - 07:36

WASHINGTON, D.C., April 24, 2017 — Internet2 today announced six recipients of the Gender Diversity Award and two recipients of the Network Startup Resource Center (NSRC)-Internet2 Fellowship ahead of its annual meeting, the Internet2 Global Summit, taking place this week in Washington, D.C. from April 23-26. The Global Summit meeting hosts nearly 1,000 C-level information technology decision-makers and high-level influencers from higher education, government and scientific research organizations. This year’s winners and fellows are:

  • Zeynep Ondin, Virginia Tech, gender diversity award winner
  • Meloney Linder, University of Wisconsin, gender diversity award winner
  • Courtney Fell, University of Colorado Boulder, gender diversity award winner
  • Kerry Havens, University of Colorado Boulder, gender diversity award winner
  • Claire Stirm, Purdue University, gender diversity award winner
  • Jieyu Gao, Purdue University, gender diversity award winner
  • Sarah Kiden, Uganda Christian University, NSRC-Internet2 fellow
  • Dr. Kanchana Kanchanasut, Asian Institute of Technology in Thailand, NSRC-Internet2 fellow

According to a recent report by the National Center for Science and Engineering Statistics, while women have reached parity with men among science and engineering degree recipients overall, they constitute disproportionately smaller percentages of employed scientists and engineers than they do of the U.S. population.

The Gender Diversity Award was established in 2014 by the Internet2 community as part of a larger Gender Diversity Initiative, with the aim of improving gender diversity in the information technology field within research and education. It provides awardees the opportunity to engage in discussions around the latest applied innovations and best-practices for their campuses, as well as access to mentors and a network of women IT and technology professionals. The Gender Diversity Award is offered twice a year, once at the Internet2 Global Summit meeting and once at the Internet2 Technology Exchange meeting.

Since 2011, the NSRC and Internet2 have worked with universities, network service providers, and industry and government agencies in Africa, Asia, Europe, the Pacific Islands, the Middle East, Latin America, and the Caribbean to provide support to research and education communities in countries underserved by the current research and education networking infrastructure.

“We continue to see a growing number of talented nominees each year and I’m so grateful for our community’s continued efforts to promote diversity and support our colleagues who are just starting their career or thinking about growing their career in the IT and technology field,” said Ana Hunsinger, Internet2’s vice president of community engagement. “These awards and fellowships are significant because they remove financial barriers from women’s participation in timely discussions around applied innovations and best-practices in their profession, and give them access to a new experience of professional growth and development for their career. It’s also an opportunity for our community to engage with talented individuals from both the U.S. and abroad, and help mentor the next generation of community leadership.”

Both the award and fellowship cover travel expenses, hotel accommodation, and conference registration for the 2017 Global Summit. Funding for two of this year’s award is made possible by the Internet2 Gender Diversity Initiative, while Cisco Systems and ServiceNow, in their capacity as industry sponsors of the 2017 Global Summit, are funding one award each. The University of Colorado Boulder and Purdue University are providing travel support for one of their respective award winners. Funding for the two fellowships is provided by NSRC and Internet2.

Ondin, Linder, Fell, Havens, Stirm, Gao, Kiden, and Dr. Kanchanasut will be recognized during the 2017 Global Summit General Session on Wednesday, April 26 at 10:30 a.m. EST. A full list of the 2017 Internet2 Gender Diversity Award winners and NSRC-Internet2 fellows, along with their bios, appears below:

Zeynep Ondin, Ph.D., has been a user experience and interaction designer for the IT Experience & Engagement unit within the Division of Information Technology at Virginia Tech since 2016. In her current role, she works to improve the user experience across the division’s various platforms and mechanisms of user engagement in order to provide a consistent user experience for all students, faculty, and staff who interact with IT systems and services. Prior to joining Virginia Tech, she spent 10 years working in various IT roles at higher education institutions.

Meloney Linder serves as associate dean for communications, facilities and technology for the University of Wisconsin – Madison, Wisconsin School of Business (WSB). Meloney’s responsibilities include strategic oversight of WSB’s brand and consumer insights, integrated marketing communications, information technology services, academic technology and web, and building and conference services for the school. She is committed to advancing higher education and the mission of the UW-Madison and Wisconsin School of Business through collaboration. Meloney serves as an advisor on WSB Dean’s Leadership Team and currently serves as the chair of the University of Wisconsin – Madison’s divisional technology advisory group.

Courtney Fell is a learning experience designer at the University of Colorado (CU) Boulder. She first came to CU in 2007 as a Spanish instructor and soon began leveraging technology to create interactive online lessons for her language students. From there, Courtney left the classroom to support other faculty in the sound incorporation of technology in their classrooms. Courtney now works for CU’s Office of Information Technology where she partners with campus leaders to find human-centered solutions to the university’s most complex challenges. In the last few years, she has led a number of successful and transformative initiatives for CU Boulder including: moving new student orientation online for domestic and international students, developing an innovative cross-campus large lecture experience for space studies, and exploring the use of robotic technologies paired with video conferencing software to provide a flexible learning solution for CU students.

Kerry Havens is an ambitious and caring working mother and perpetual student. Working in the Office of Information Technology at the University of Colorado (CU) Boulder for the past 16 years, she developed a passion for finding broad solutions that fit many needs. She continually finds herself at a crossroads between working with people and technology and is currently seeking opportunities to solidify her path towards a career in leadership in an organization that helps kids and young adults find purpose, gratefulness, and kindness.

Claire Stirm is a science gateway manager with HUBzero in the Academic Research Computing Department at Purdue University. Stirm graduated from Purdue University in 2016 with a degree in Professional Writing and a degree in Classical Studies. Stirm is currently earning a Master’s of Science in Communication with a focus in strategic communication from the Brian Lamb School of Communication at Purdue University. Since joining the HUBzero Team, Stirm has worked with researchers in plant genomics, healthcare and volcanology. In her free time Stirm enjoys reading, writing and camping.

Jieyu Gao joined the Emerging IT Leaders program with Information Technology department at Purdue upon her graduation from Purdue’s Applied Statistics and Economics (Honor) program in 2016. She works with researchers and faculty to help resolve their data analysis concerns. She is interested in learning new technologies and machine learning algorithms and applications.


Sarah Kiden is the head of systems at Uganda Christian University and a facilitator at Research and Education Network for Uganda (RENU). She loves to learn, build and support systems/networks, and has been involved in coordinating capacity building initiatives for universities and research institutions in Uganda since 2014. In her free time, she volunteers with the Internet Society Uganda Chapter, through which she picked interest and became active in internet policy development at ICANN. She recently co-founded DigiWave Africa, a non-profit organization which supports the safe and responsible use of technology for youth and children. Sarah holds an MSc in information systems and BSc in information technology.

Dr. Kanchana Kanchanasut is a professor in computer science at the Asian Institute of Technology (AIT), Thailand. Starting in 1988, she was among the first to bring the internet to Thailand, and has worked closely with the research and education (R&E) networks in Thailand and in the Asia-Pacific region. Nearly 20 years ago, Dr. Kanchanasut established the Internet Education and Research Laboratory (intERLab) at AIT to provide much-needed capacity building for internet engineers in the region. In recognition of her pioneering role in the early days of internet development, driving cross-border R&E networks, and starting the first open and neutral Internet Exchange Point in Southeast Asia in 2015 – the Bangkok Neutral Internet Exchange (BKNIX), Dr. Kanchanasut was inducted in the Internet Hall of Fame as the first representative from Thailand. In 2016, she was awarded the prestigious Jon Postel Service Award for her many years of service for R&E and internet development in Asia. Currently she is a researcher at the Internet Education and Research Laboratory where she is focusing on challenged networks research and community wireless mobile network deployments.

For more information on the Global Summit, taking place April 23-26 at the Renaissance Washington, D.C. Downtown Hotel, visit https://meetings.internet2.edu/2017-global-summit/

About Internet2

Internet2 is a non-profit, member-driven advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves 317 U.S. universities, 70 government agencies, 43 regional and state education networks, and through them supports more than 94,000 community anchor institutions, over 900 InCommon participants, 78 leading corporations working with our community, and more than 60 national research and education network partners representing more than 100 countries.

Internet2 delivers a diverse portfolio of technology solutions that leverages, integrates, and amplifies the strengths of its members and helps support their educational, research and community service missions. Internet2’s core infrastructure components include the nation’s largest and fastest research and education network, built to deliver advanced, customized services that are accessed and secured by the community-developed trust and identity framework.

Source: Internet2


Musk’s Latest Startup Eyes Brain-Computer Links

Fri, 04/21/2017 - 21:08

Elon Musk, the auto and space entrepreneur and severe critic of artificial intelligence, is forming a new venture that reportedly will seek to develop an interface between the human brain and computers.

The initial goal is aiding the disabled, but the visionary inventor reportedly views the AI startup as a way of forging non-verbal forms of communication while at the same time promoting ethical AI research.

Details of the new venture are sketchy, but according to several reports this week, the startup, called Neuralink Corp., would assist researchers in keeping up with steady advancements in machine intelligence. Details were first reported by the Wall Street Journal.

Neuralink’s proposed interface reportedly involves implanting “tiny electrodes in human brains.” On Thursday (April 20), Musk confirmed details of the startup, saying he would serve as chief executive. The startup’s initial goal is developing links between computers and the brain that could be used to assist the disabled.

Ultimately, Neuralink’s goal is to forge a new language Musk calls “consensual telepathy.”

More details about the neural startup emerged this week on the web site Wait But Why. Based on the assumption that spoken words are merely “compressed approximations of uncompressed thoughts,” Musk explained the notion of consensual telepathy this way:

“If I were to communicate a concept to you, you would essentially engage in consensual telepathy. You wouldn’t need to verbalize unless you want to add a little flair to the conversation or something, but the conversation would be conceptual interaction on a level that’s difficult to conceive of right now.”

Asked about a timeline, Musk said a computer-brain interface for applications beyond the disabled remains nearly a decade away. “Genetics is just too slow, that’s the problem,” Musk asserted, according to the web site. “For a human to become an adult takes twenty years. We just don’t have that amount of time.”

Raising concerns about the societal implications of AI, Musk helped launch OpenAI in 2015 to redirect research toward “safe artificial general intelligence.” In launching OpenAI, Musk and his co-founders noted: “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Also in 2015, Musk donated $10 million to the Future of Life Institute that seeks to mitigate the “existential risks” posed by advanced AI.


MIT Mathematician Spins Up 220,000-Core Google Compute Cluster

Fri, 04/21/2017 - 17:59

On Thursday, Google announced that MIT math professor and computational number theorist Andrew V. Sutherland had set a record for the largest Google Compute Engine (GCE) job. Sutherland ran the massive mathematics workload on 220,000 GCE cores using preemptible virtual machine instances. This is the largest known high-performance computing cluster to run in the public cloud, according to Google’s Alex Barrett and Michael Basilyan.

Andrew Sutherland, principal research scientist at MIT, photographed at the MIT campus in Cambridge, MA

Sutherland used Google’s cloud to explore generalizations of the Sato-Tate Conjecture and the conjecture of Birch and Swinnerton-Dyer to curves of higher genus, write Barrett and Basilyan on the Google Cloud Platform blog. “In his latest run, he explored 10^17 hyperelliptic curves of genus 3 in an effort to find curves whose L-functions can be easily computed, and which have potentially interesting Sato-Tate distributions. This yielded about 70,000 curves of interest, each of which will eventually have its own entry in the L-functions and Modular Forms Database (LMFDB),” they explain.

Sutherland compared the quest to find suitable genus 3 curves to “searching for a needle in a fifteen-dimensional haystack.” It’s highly compute-intensive research that can require evaluating a 50 million term polynomial in 15 variables.
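To make the objects concrete: a genus-3 hyperelliptic curve can be written y^2 = f(x) with f of degree 7 or 8, and counting its points modulo many primes is the raw ingredient of its L-function and Sato-Tate distribution. The following is a purely illustrative toy sketch of a single point count; Sutherland's actual pipeline uses far more sophisticated algorithms, and the example curve here is chosen arbitrarily:

```python
# Toy illustration (not AWE's or Google's code): count affine points on a
# genus-3 hyperelliptic curve y^2 = f(x) over the finite field F_p.

def count_points(coeffs, p):
    """coeffs are the coefficients of f from lowest to highest degree."""
    # Precompute which residues are squares mod p.
    squares = {(y * y) % p for y in range(p)}
    total = 0
    for x in range(p):
        fx = sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        if fx == 0:
            total += 1          # one point, with y = 0
        elif fx in squares:
            total += 2          # two points, y and -y
    return total

# Example: y^2 = x^7 + x + 1 (degree 7, hence genus 3) over F_11
print(count_points([1, 1, 0, 0, 0, 0, 0, 1], 11))  # → 13
```

Repeating such counts over many primes, for each of 10^17 candidate curves, is what makes the workload both embarrassingly parallel and enormously compute-hungry.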

Before moving to the public cloud platform, Sutherland conducted his research locally on a 64-core machine but runs would take months. Using MIT clusters was another option, but there were sometimes access and software limitations. With Compute Engine, Sutherland can create a cluster with his preferred operating system, libraries and applications, the Google blog authors note.

According to Google, the preemptible VMs that Sutherland used are “full-featured instances that are priced up to 80 percent less than regular equivalents, but can be interrupted by Compute Engine.”

Since the computations are embarrassingly parallel, interruptions have limited impact and the workload can also grab available instances across Google Cloud Regions. Google reports that in a given hour, about 2-3 percent of jobs are interrupted and automatically restarted.

Coordinating instances was done with a combination of Cloud Storage and Datastore, which assigns tasks to instances based on requests from the Python client API. “Instances periodically checkpoint their progress on their local disks from which they can recover if preempted, and they store their final output data in a Cloud Storage bucket, where it may undergo further post-processing once the job has finished,” write the blog authors. Pricing for the 220,000-core cluster was not shared.
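The checkpoint-and-recover pattern the blog authors describe can be sketched in a few lines. This is an illustrative stand-in, not Sutherland's code: the file name and task function are invented, and a real worker would push final output to a Cloud Storage bucket rather than local disk.

```python
# Sketch of a preemptible-VM worker: periodically save progress to local
# disk so a restarted instance resumes where the preempted one left off.
import json
import os

CHECKPOINT = "checkpoint.json"   # hypothetical local checkpoint file

def load_checkpoint():
    """Resume from the last saved position, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_index": 0, "results": []}

def save_checkpoint(state):
    # Write-then-rename so a preemption mid-write cannot corrupt the file.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run_tasks(tasks, process, checkpoint_every=100):
    state = load_checkpoint()
    for i in range(state["next_index"], len(tasks)):
        state["results"].append(process(tasks[i]))
        state["next_index"] = i + 1
        if state["next_index"] % checkpoint_every == 0:
            save_checkpoint(state)
    save_checkpoint(state)   # final output; real code would upload it
    return state["results"]

print(run_tasks(list(range(10)), lambda t: t * t, checkpoint_every=3))
```

Because the work is embarrassingly parallel, losing an instance costs at most the work done since its last checkpoint, which is why 2-3 percent hourly interruption rates have limited impact.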

Sutherland is already planning an even larger run of 400,000 cores, noting that when you “can ask a question and get an answer in hours rather than months, you ask different questions.”

There have been several other notably large cloud runs conducted by HPC cloud specialist Cycle Computing over the years. In late 2013, Cycle spun up a 156,000-core AWS cluster for Schrödinger and the University of Southern California to power a quantum chemistry application. The year prior, Cycle Computing created a 50,000-core virtual supercomputer on AWS to facilitate Schrödinger’s search for novel drug compounds for cancer research. In November 2014, Cycle customer HGST ran a one-million-simulation job in eight hours to help identify an optimal advanced drive head design. At peak, the cluster incorporated 70,908 Ivy Bridge cores with a peak performance of 729 teraflops.

Cycle has also leveraged the Google Compute Engine (GCE). In 2015, Cycle ran a 50,000-core cancer gene analysis workload for the Broad Institute using preemptible virtual machine instances.

Amazon has also benchmarked several self-made clusters for the Top500 list. The most recent, a 26,496 core Intel Xeon cluster, entered the list in November 2013 at position 64 with 484 Linpack teraflops. As of November 2016, the cluster was in 334th position.


Academic Communities Join Forces to Safeguard Against Cyberattacks

Fri, 04/21/2017 - 08:51

DENVER, Colo., April 21, 2017 — The increase in the risk from cyberattacks has received significant attention from the research and education (R&E) community and has spurred many campuses to adopt new security controls and implement additional tools to protect their institutions. These risks include:

  • ransomware attacks, which typically attack a system or computer with the intent to disrupt operations or block access to data until a ransom is paid;
  • distributed denial-of-service (DDoS) attacks, which are intended to interfere with the availability of a campus’s network or applications.

“There are always new threats to cybersecurity, and the threats often evolve faster than the safeguards,” said Kim Milford, executive director of the Research and Education Networking Information Sharing and Analysis Center (REN-ISAC), which started in 2002 and coordinates information sharing about computer security threats and countermeasures among higher education institutions. “Through active sharing among the research and education community about the most current threats, we can collectively defend against them by updating our processes and finding safeguards that most effectively protect against those threats.”

Higher education institutions can be susceptible to cyberthreats for many different reasons, but they are likely targets due to their computing resources, intellectual property, and the vast amount of personal information belonging to their students, faculty, and staff. In 2016, REN-ISAC sent out 67,000 notifications to R&E institutions about machines potentially compromised by vulnerability exploits.

Milford will be presenting an annual assessment of the current risks along with practical and operational advice to the research and education community at the 2017 Internet2 Global Summit on Tuesday, April 25 from 4:30 – 5:30 p.m. EST at the Renaissance Downtown Washington, D.C. Hotel.

EDITOR’S NOTE: Interviews are available to members of the media upon request. Reporters interested in obtaining a press badge for the 2017 Global Summit should contact Sara Aly, saly@internet2.edu

Read the full press release here.

About REN-ISAC


Established in 2004 as part of the National Council of Information Sharing and Analysis Centers (ISACs), the Research and Education Networking Information Sharing and Analysis Center (REN-ISAC) is a member organization committed to aiding and promoting cybersecurity protection in the research and education (R&E) community. With over 500 member institutions and 1,600 active participants, REN-ISAC helps to analyze cybersecurity threat trends and protection techniques that impact R&E. REN-ISAC analyzes this information, along with information provided in publicly available resources such as the Verizon Data Breach Report, and provides R&E IT professionals with alerts, advisories, ongoing discussions and recommendations to help reduce risks. For more information, visit www.ren-isac.net

Source: Internet2


Royal Canadian Institute for Science recognizes IBM’s commitment to STEM

Fri, 04/21/2017 - 08:49

MARKHAM, Ontario, April 21, 2017 — The Royal Canadian Institute for Science (RCIS) will award IBM Canada (NYSE: IBM) the 2017 William Edmond Logan Award tonight at an award ceremony held at the MaRS Discovery Centre in Toronto. The award recognizes outstanding contributions to the public awareness of science by an institution or an organization.

The William Edmond Logan Award recognizes IBM Canada’s demonstrated commitment to promoting Science, Technology, Engineering and Mathematics (STEM) to youth. “The large number of students reached through these initiatives, in particular aboriginal students, and the manner in which these programs engage IBM staff is impressive,” said RCIS President Peter Love in the award statement. “We are delighted to recognize another venerable institution for its work in building public awareness of science.”

IBM Canada’s STEM 4 Girls initiative officially launched in 2016 with a commitment to youth across Canada from grade six through post-secondary, with a particular focus on female students. Programs are designed to teach students specific STEM skills ranging from robotics to cybersecurity, with a strong focus on mentorship, outreach, and female-focused recruitment in post-secondary schools. In its first year alone, IBM Canada’s STEM program reached more than 1,000 students and has doubled that number within the first few months of 2017.

“By encouraging youth to innovate and problem-solve, IBM’s STEM program equips them with skills that will stay with them as they move forward in their education,” said IBM Canada President Dino Trevisani. “Year after year we see the demand for jobs related to STEM increasing, and preparing our young people for the new collar jobs of the future is incredibly important.”

About IBM

IBM is one of Canada’s top ten private R&D investors, and in 2016 contributed more than $478 million to Canadian research activities. IBM has a unique approach to collaboration that provides academic researchers, small and large business, start-ups and developers with business strategies and computing tools they need to innovate. Areas of focus include health, agile computing, water, energy, cities, mining, advanced manufacturing, digital media and cybersecurity. For more information about IBM’s continued investments in Canadian innovation, please visit: http://www.ibm.com/ibm/ca/en/canadian-innovation.html

Source: IBM


Nvidia P100 Shows 1.3-2.3x Speedup Over K80 GPU on Financial Apps

Thu, 04/20/2017 - 17:52

When it comes to the true performance of the latest silicon, every end user knows that the best processor is the one that works best for their application. An interim step between touted “peak” specs and real-world performance is where benchmarking comes in as a useful exercise that can reveal new insights.

With Nvidia’s Pascal-based Tesla P100 GPU now deployed in several Top500 supercomputers and hitting its cloud stride, many end users have already made the jump from the previous Kepler generation or are considering it. To provide guidance to high-performance computing professionals in the financial services sector, the HPC software specialists at Xcelerit conducted a comparison study of these two accelerators using selected applications from their in-house Xcelerit Quant Benchmark Suite.

The Pascal-based P100 provides 1.6x more double-precision flops than the Kepler-generation K80: 4.7 teraflops for the PCIe-based P100 versus 2.91 teraflops for the K80. Further, “P100’s stacked memory features 3x the memory bandwidth of the K80, an important factor for memory-intensive applications,” says Xcelerit. NVLink-connected P100s, although not used for this benchmarking study, offer a 2-3x improvement in GPU-GPU communication (bandwidth) compared to PCIe.

“Beyond compute instructions, many other factors influence performance, such as memory and cache latencies, thread synchronisation, instruction-level parallelism, GPU occupancy, and branch divergence,” the team writes.

The benchmarking relied on selected applications from the Xcelerit Quant Benchmark Suite, a representative set of applications widely used in quantitative finance. The hand-tuned set includes LIBOR Swaption Portfolio (Monte-Carlo), American Options (Binomial Lattice), European Options (Closed form), and Barrier Options (Monte-Carlo).

The test machine featured:

CPU: 2 sockets, Haswell (Intel Xeon E5-2698 v3)
GPU: NVIDIA Tesla K80 and NVIDIA Tesla P100 (ECC on)
OS: RedHat Enterprise Linux 7.2 (64bit)
RAM: 128GB (K80 system) and 256GB (P100 system)
CUDA Version: 8.0
CPU Backend Compiler: GCC 4.8
GPU clock: maximum boost
Precision: double

The full algorithm execution time was recorded from inputs to outputs, including GPU setup and data transfers. The results indicate each GPU’s speedup over a sequential implementation on a single CPU core.

Source: Xcelerit

On average, the P100 delivered a 1.7x boost overall, with application speedups ranging from 1.3x to 2.3x. “This high variation of the speedup across applications can be explained by the different application characteristics, in particular the relation of compute instructions to memory access operations,” writes Xcelerit.

The most compute-heavy applications with the fewest memory accesses — the LIBOR swaption portfolio and Black-Scholes option pricers — saw the least speedup, while the memory-intensive Binomial American option pricer had the biggest gain.
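The spec ratios quoted earlier (roughly 1.6x the flops, 3x the bandwidth) bracket this behavior. A back-of-the-envelope model, assuming execution time simply splits between compute and memory traffic (the split fractions below are invented for illustration), reproduces the general shape:

```python
# Spec ratios from the article: P100 (PCIe) vs. K80
flops_ratio = 4.7 / 2.91   # double-precision teraflops, ~1.6x
bw_ratio = 3.0             # stacked-memory bandwidth, 3x per the article

def expected_speedup(compute_fraction):
    """Crude bound: K80 runtime splits into a compute share and a memory
    share, and each share shrinks by its hardware improvement ratio."""
    memory_fraction = 1.0 - compute_fraction
    new_time = compute_fraction / flops_ratio + memory_fraction / bw_ratio
    return 1.0 / new_time

# A mostly compute-bound app vs. a mostly memory-bound one
compute_bound = expected_speedup(0.9)   # ≈ 1.7x
memory_bound = expected_speedup(0.2)    # ≈ 2.6x
```

Real kernels overlap compute and memory access, so this is only a rough bound, but it shows why memory-bound codes gain more from the P100’s stacked memory.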

See the complete study here.

The post Nvidia P100 Shows 1.3-2.3x Speedup Over K80 GPU on Financial Apps appeared first on HPCwire.

Quantum Adds Global Smarts to StorNext File System

Thu, 04/20/2017 - 15:00

Companies that use Quantum’s StorNext platform to store massive amounts of data this week got a glimpse of new storage capabilities that should make it easier to access their data horde from anywhere in the world.

StorNext is a scale-out storage platform that combines a parallel, shared-disk file system – or what Quantum sometimes calls a “streaming file system” – with a data management layer that automates many administrative tasks.

Originally created to provide fast data transfer between Windows and SGI IRIX workstations, the platform today supports an array of protocols, including Fibre Channel, InfiniBand, iSCSI and Ethernet. It can front-end large and sophisticated storage area network (SAN) clusters or work with individual network attached storage (NAS) devices that use the NFS or CIFS protocols.

The platform supports a variety of storage media, including flash, spinning disk, tape, and cloud repositories, and is used extensively in the media and entertainment, oil and gas, genomics, and surveillance industries, where large file sizes and high-performance demands thwart simpler storage approaches.

Quantum, which acquired StorNext in 2006 from Advanced Digital Information Corporation, this week unveiled version 6 of the platform. Key new data storage and management capabilities are delivered via the new FlexSync and FlexSpace features.

FlexSync provides a way to synchronize data between multiple StorNext systems in an automated fashion. The feature leverages StorNext’s existing metadata monitoring capabilities to immediately recognize when a file has changed and replicate that change to other systems, even if the file is in use or locked.

Customers can set FlexSync up in a number of configurations, including one-to-one, one-to-many, and many-to-one file scenarios, the company says. They can also create policies that automatically trigger file replication tasks based on various conditions, thereby ensuring that stakeholders get the freshest data possible, no matter where they’re located.

Meanwhile, the new FlexSpace feature gives globally distributed teams fast access to a centrally located copy of data. The feature ensures that multiple instances of StorNext located anywhere in the world can access a single archive repository. “Users at different sites can store files in the shared archive, as well as browse and pull data from the repository,” Quantum says.

FlexSpace also supports public cloud object stores like AWS S3, Microsoft Azure, and Google Cloud via the FlexTier capability that Quantum unveiled in StorNext version 5.6 late last year. Users can also use FlexSpace to access their own private cloud object stores, including ones based on Quantum’s own Lattus object storage, as well as third-party object stores like NetApp StorageGRID, IBM Cleversafe, and Scality RING.

FlexSpace syncs multiple individual StorNext repositories to a master repository, which can even be an object store in the cloud (image courtesy Quantum)

Molly Presley (Rector), who joined Quantum last fall as its new vice president of global marketing, says the StorNext enhancements deliver benefits where traditional NAS and general-purpose, scale-out storage offerings for unstructured data fall short. “We designed StorNext 6 to give businesses and other organizations the ability to interact with their data in new ways and thereby drive greater creativity, productivity, innovation and insight,” she says in a press release.

Version 6 also brings a new quality of service (QoS) feature that lets users tune the performance of the storage repositories on a machine-by-machine basis, the company says. This can help assure that workstations that are hungry for storage bandwidth can get the data they need, while scaling back the bandwidth usage of less-critical applications.

Other new features in StorNext 6 include:

  • a new copy expiration feature that automatically removes file copies from more expensive storage tiers;
  • a selectable retrieve function that dictates the order of retrieval of remaining copies;
  • more efficient tracking of changes in files.

Quantum plans to ship StorNext 6 on its Xcellis, StorNext M-Series, and Artico archive appliances. General availability is expected this summer.

The post Quantum Adds Global Smarts to StorNext File System appeared first on HPCwire.

IBM Reports 2017 First-Quarter Results

Thu, 04/20/2017 - 14:26

ARMONK, N.Y., April 20, 2017 — IBM (NYSE: IBM) today announced first-quarter earnings results.


• Diluted EPS from continuing operations: GAAP of $1.85; Operating (non-GAAP) of $2.38
• Revenue from continuing operations of $18.2 billion
• Strategic imperatives revenue of $7.8 billion in the quarter, up 12 percent (up 13 percent adjusting for currency)
• Strategic imperatives revenue of $33.6 billion over the last 12 months represents 42 percent of IBM revenue
• Cloud revenue of $14.6 billion over the last 12 months
— Cloud as-a-Service annual exit run rate of $8.6 billion in the quarter, up 59 percent year to year (up 61 percent adjusting for currency)
• Maintains full-year EPS and free cash flow expectations.

“In the first quarter, both the IBM Cloud and our cognitive solutions again grew strongly, which fueled robust performance in our strategic imperatives,” said Ginni Rometty, IBM chairman, president and chief executive officer.  “In addition, we are developing and bringing to market emerging technologies such as blockchain and quantum, revolutionizing how enterprises will tackle complex business problems in the years ahead.”

FIRST QUARTER 2017                     Diluted EPS   Net Income   Gross Profit Margin
GAAP from Continuing Operations        $1.85         $1.8B        42.8%
   Year/Year                           -11%          -13%         -3.7Pts
Operating (Non-GAAP)                   $2.38         $2.3B        44.5%
   Year/Year                           1%            -1%          -3.0Pts

REVENUE                                Total IBM     Strategic Imperatives   Cloud
As reported (US$)                      $18.2B        $7.8B                   $3.5B
   Year/Year                           -3%           12%                     33%
   Year/Year adjusting for currency    -2%           13%                     35%

“We continued to make investments in the first quarter to expand our cognitive and cloud platform and we increased our research and development spending,” said Martin Schroeter, IBM senior vice president and chief financial officer.  “At the same time we returned more than $2.6 billion to shareholders through dividends and gross share repurchases.”

Strategic Imperatives

First-quarter cloud revenues increased 33 percent (up 35 percent adjusting for currency) to $3.5 billion.  Cloud revenue over the last 12 months was $14.6 billion.  The annual exit run rate for cloud as-a-service revenue increased to $8.6 billion from $5.4 billion in the first quarter of 2016.  Revenues from analytics increased 6 percent (up 7 percent adjusting for currency).  Revenues from mobile increased 20 percent (up 22 percent adjusting for currency) and revenues from security increased 9 percent (up 10 percent adjusting for currency).
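The “annual exit run rate” is simply the most recent quarter’s as-a-service revenue annualized. A quick sanity check of the reported figures (the quarterly number below is back-derived from the $8.6 billion rate, not stated in the release):

```python
# Annual exit run rate = latest quarter's as-a-service revenue x 4
# (illustrative arithmetic; quarterly figure is an assumption)
quarterly_as_a_service = 2.15          # $ billions, back-derived
exit_run_rate = quarterly_as_a_service * 4   # = 8.6

year_ago_rate = 5.4                    # $ billions, Q1 2016 (from the release)
growth = exit_run_rate / year_ago_rate - 1   # ≈ 0.59, i.e. up ~59%
```

The ~59 percent growth that falls out matches the year-to-year increase IBM reports for the metric.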

Full-Year 2017 Expectations

The company continues to expect operating (non-GAAP) diluted earnings per share of at least $13.80 and GAAP diluted earnings per share of at least $11.95.  Operating (non-GAAP) diluted earnings per share exclude $1.85 per share of charges for amortization of purchased intangible assets, other acquisition-related charges and retirement-related charges.  IBM continues to expect free cash flow to be relatively flat year to year.

Cash Flow and Balance Sheet

In the first quarter, the company generated net cash from operating activities of $4.0 billion, or $1.9 billion excluding Global Financing receivables.  IBM’s free cash flow was $1.1 billion, down year to year, consistent with the amount of the Japan tax refund received in the first quarter of 2016.  IBM returned $1.3 billion in dividends and $1.3 billion of gross share repurchases to shareholders.  At the end of March 2017, IBM had $3.8 billion remaining in the current share repurchase authorization.

IBM ended the first quarter of 2017 with $10.7 billion of cash on hand.  Debt, including Global Financing debt of $28.5 billion, totaled $42.8 billion.  Core (non-Global Financing) debt totaled $14.3 billion.  The balance sheet remains strong and is well positioned to support the business over the long term.

Segment Results for First Quarter

  • Cognitive Solutions (includes Solutions Software and Transaction Processing Software) — revenues of $4.1 billion, up 2.1 percent (up 2.8 percent adjusting for currency) were driven by growth in analytics and security, which include Watson-related offerings.
  • Global Business Services (includes Consulting, Global Process Services and Application Management) — revenues of $4.0 billion, down 3.0 percent (down 1.9 percent adjusting for currency).  Strategic imperatives grew double digits led by the cloud and mobile practices.
  • Technology Services & Cloud Platforms (includes Infrastructure Services, Technical Support Services and Integration Software) — revenues of $8.2 billion, down 2.5 percent (down 2.0 percent adjusting for currency) with strong growth in strategic imperatives driven by hybrid cloud services.
  • Systems (includes Systems Hardware and Operating Systems Software) — revenues of $1.4 billion, down 16.8 percent (down 16.1 percent adjusting for currency).
  • Global Financing (includes financing and used equipment sales) — revenues of $405 million, down 1.2 percent (down 2.1 percent adjusting for currency).

Tax Rate

For the first quarter, IBM’s ongoing effective GAAP tax rate was approximately 12 percent. The ongoing effective operating (non-GAAP) tax rate was approximately 15 percent, which is within the expected range of 15 percent plus or minus 3 points provided earlier this year.  IBM’s reported tax rates include the effect from a discrete tax benefit disclosed earlier this year.

Source: IBM

The post IBM Reports 2017 First-Quarter Results appeared first on HPCwire.

New HPC Cluster Installed at Clarkson University

Thu, 04/20/2017 - 14:23

April 20, 2017 — An Advanced Clustering team was on-site last week at Clarkson University to install a new cluster we built for Dr. Chen Liu, Assistant Professor of Electrical and Computer Engineering. Dr. Liu was awarded a three-year grant from the National Science Foundation to lead a four-person research team dedicated to acquiring a heterogeneous high-performance computing cluster.

“Our project is a small-scale supercomputer with a lot of horsepower for computation ability,” Liu said. “It has many servers, interconnected to look like one big machine. Research involving facial recognition, iris recognition and fingerprint recognition requires a lot of computing power, so we’re investigating how to perfect that capability and make biometrics run faster.”

Liu is the principal investigator. His colleagues Joseph Skufca, Professor and Chair of Mathematics, and Paynter-Krigman Endowed Professor in Engineering Science Stephanie Schuckers are co-PIs. Clarkson University Chief Information Officer Joshua Fiske rounds out the team.

Source: Advanced Clustering

The post New HPC Cluster Installed at Clarkson University appeared first on HPCwire.

Scaling an HPC Career in Nepal Can Be a Steep Climb

Thu, 04/20/2017 - 13:37

Umesh Upadhyaya works as an IT Associate at the International Centre for Integrated Mountain Development (ICIMOD) in Nepal, which supports the country’s one and only HPC facility. He is directly involved in an initiative that focuses on climate change and atmosphere modeling, an area that has particular relevance to the country’s dependence on its agricultural production and hydroelectric power.

Part of what Umesh wants to accomplish at ICIMOD is acquiring the necessary technical skills so that he can assist research scientists in setting up and supporting HPC resources at the Nepal facility. Unfortunately, at this point the government doesn’t have the funds to allocate for training or workshops to help him acquire such skills.

Umesh Upadhyaya

The conference organizers for ISC High Performance became aware of his plight and are offering Umesh free registration for the tutorials, the conference and workshops at this year’s conference in June. STEM-Trek, a non-profit group that supports professional development for individuals from underserved regions who are trying to establish themselves in the HPC workforce, is trying to help him secure travel funding. The hope is that an ISC exhibitor will come forward to help sponsor his trip to Frankfurt, Germany.

ISC’s Nages Sieslack recently got an opportunity to speak with Umesh about his work, ICIMOD’s mission, and what he would like to achieve if he could attend the ISC conference this year.

Why are you interested in attending ISC 2017?

Umesh Upadhyaya: Scientific computing is such an exciting realm of technology, and there is a severe lack of skills in Nepal in this particular area. By attending ISC 2017, I would have the opportunity to network with academics, researchers, and representatives from industry, and to bring a lot of experience back to my organization via interactions in the various workshops, tutorials and conference sessions.

The ISC 2017 platform will also help me learn about advancements in infrastructure design, state-of-the-art hardware, breakthroughs in computational sciences, and the latest use cases of HPC.

Can you tell us a bit about ICIMOD and its purpose?

Umesh: The International Centre for Integrated Mountain Development is a regional intergovernmental learning and knowledge-sharing center, based in Kathmandu, Nepal. It serves the eight regional member countries of the Hindu Kush Himalayas – Afghanistan, Bangladesh, Bhutan, China, India, Myanmar, Nepal, and Pakistan. ICIMOD aims to assist mountain people to understand changes to their environment, adapt to them, and make the most of new opportunities, while also addressing upstream-downstream issues.

ICIMOD supports transboundary programs through partnership with regional partner institutions, facilitates the exchange of experience, and serves as a regional knowledge hub. It also strengthens networking among regional and global centers of excellence. Overall, we are working to develop an economically and environmentally sound mountain ecosystem to improve the living standards of local populations and to sustain vital services for the billions of people living downstream now and in the future.

Can you describe your center’s current HPC capabilities?

Umesh: The High-Performance Center provides a unique ability to access the latest systems, CPUs, and networking technologies. ICIMOD has installed and is operating a high performance computing cluster based on Dell blade servers equipped with Intel Xeon processors. The center hosts a Linux environment and is specifically used for atmosphere modeling. Air quality scientists currently run the latest versions of WRF, WRF-Chem, and STEM in this HPC environment. The in-house Dell blade servers and storage system comprise 160 cores, 512 GB of memory and 100 TB of disk storage.

ICIMOD High-Performance Center also offers an environment for developing, testing, benchmarking and optimizing software. The center provides on-site technical support and enables secure sessions locally or remotely.

What kind of research are you and your project involved in, and how do you use HPC at the center?

Umesh: The cluster is currently available to ICIMOD scientists and PhD fellows for academic and research purposes, especially those related to weather and pollution models. The modeling software is currently supported by the GCC, Intel and PGI compilers.

I am working with research scientists to determine and compare run times of WRF v3.8 compiled with gfortran versus commercial Intel and PGI compilers. Currently, ICIMOD uses MPICH2 and gfortran for multi-CPU runs for WRF v3.8 in a multi-clustered environment, and recently we subscribed to PGI and Intel compilers. All our compute nodes are bare metal and have low-latency interconnects for better parallel processing performance. My research emphasis is on the performance of WRF on different software compilers using our 160-core Dell cluster.

What would you like to achieve by attending ISC 2017? 

Umesh: Attending ISC 2017 would be a professionally rewarding experience for someone like me, who is beginning his career in HPC. Sharing the same space with attendees at ISC will help me engage with the larger HPC community and hopefully return with new ideas that make me more effective at my work. Overall, I believe the conference will inspire me to grow and challenge myself in many areas of HPC.

ISC exhibitors interested in funding Umesh Upadhyaya can contact Elizabeth Leake of STEM-Trek at info@stem-trek.org.

The post Scaling an HPC Career in Nepal Can Be a Steep Climb appeared first on HPCwire.

HPC Unlocks Secret to Drought-Resistant Crops

Thu, 04/20/2017 - 12:40
This network shows the cross-species co-expression relationships between genes in Arabidopsis and Agave. Dark green nodes represent Agave genes, light green nodes represent Arabidopsis genes, blue edges represent positive co-expression relationships, and red edges represent negative co-expression relationships. The co-expression network was used in the paper to investigate the co-expression relationships of genes within the same gene family.

OAK RIDGE, Tenn., April 20, 2017 — A multi-institution research team has used supercomputing to understand processes leading to increased drought resistance in food and fuel crops.

Photosynthesis, the method plants use to convert energy from the sun into food, is a ubiquitous process many people learn about in elementary school. Almost all plants use photosynthesis to gather energy and stay alive.

Not all photosynthetic processes are the same, though. In recent years, researchers have grown increasingly interested in desert plants’ preferred method of photosynthesis—crassulacean acid metabolism (CAM), a process named after the Crassulaceae family of plants, which includes succulents like friendship plants, pig’s ears, and hens and chicks.

These plants caught researchers’ attention because of their seemingly opposite photosynthetic schedule, and understanding this process may be the genetic key to helping plants of all kinds conserve water. With a more fundamental understanding of CAM, scientists aim to help the plants upon which society relies for food and fuel become more drought resistant, thereby expanding the area where crops can grow and thrive.

“One of the benefits of CAM photosynthesis is water efficiency,” said Oak Ridge National Laboratory (ORNL) computational biologist Dan Jacobson, who is part of a multi-institutional team that recently published a CAM study in Nature Plants. “When you think of bioenergy and food crops, you want them to be able to tolerate drought stress or grow in areas that aren’t currently arable land. That means they have to be able to withstand some kind of environmental stress, most commonly drought stress. CAM species are very good at this.”

To that end, Jacobson works with a large group of experimentalists and computational scientists to more fully understand the CAM process. This cross-omics team (combining expertise in metabolomics, proteomics, and genomics) uses computing resources at the Oak Ridge Leadership Computing Facility (OLCF)—a US Department of Energy Office of Science User Facility located at ORNL—to catalog how plants’ CAM processes vary and ultimately uncover how CAM processes may be genetically engineered into feed stock, food crops, and crops for bioenergy applications.

Shining a light on photosynthesis

When most people think of photosynthesis, they are actually thinking of a specific form, called C3 photosynthesis. This process follows the Calvin Cycle, in which plants capture light energy during the day and convert it into energy-bearing adenosine triphosphate (ATP).

ATP helps plants split water molecules into their hydrogen and oxygen constituents. Meanwhile, a C3 photosynthetic plant opens up small pores—called stomata—to absorb carbon dioxide from the atmosphere. Then at night, the newly freed hydrogen combines with the carbon dioxide absorbed during the day to create the carbohydrates plants use to live and grow.

CAM photosynthesis works the same way, but stomata open for respiration at night and stay tightly closed during the day, allowing plants to conserve more water. This helps plants like cacti and Agave survive in climates where water is scarce.

Less than 10 percent of known plant species use this specialized form of photosynthesis, but researchers hope that by understanding how CAM works, they can apply this water-saving method to other plants. To do that, though, researchers need to understand how molecules interact during CAM photosynthesis and how metabolites and proteins change over time.

Data-intensive design

In addition to simulating processes too dangerous or complex for experiments, supercomputers also help scientists make connections in vast amounts of data. For this project, researchers from ORNL, the University of Tennessee, Newcastle University in the United Kingdom, and the University of Nevada, Reno gathered photosynthesis data from Agave (a CAM plant) and compared it with data from the Arabidopsis genus of plants (C3 plants). To compare Agave with a C3 plant, the team selected thale cress, an Arabidopsis plant that was among the first to have its genome sequenced and a good candidate for plant studies.

The team then studied which genes’ expression controls stomata opening and closing in both CAM and C3 plants and how proteins regulate this process. Collecting this data in both a common CAM species and a C3 species allowed the team to distinguish traits ubiquitous to CAM plants from species-specific traits. However, finding these connections required a machine capable of comparing large data sets against themselves.

Jacobson and his collaborators used the OLCF’s Eos analysis cluster to run “all-versus-all” comparisons of the team’s data sets. These comparisons scan large data sets and compare each individual plant’s data with all others. This helps the team form relationships between the metabolic processes underpinning CAM in individual Agave specimens as well as the differences between Agave’s CAM properties from thale cress’s C3 properties.

“These all-against-all vector comparisons for correlation networks allowed us to look for different types of patterns and different times of day where the [gene expression] transcripts are correlated with each other, where they were correlated to proteins or metabolites, or times of the day where they shift dramatically,” Jacobson said.
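Conceptually, an all-against-all correlation pass computes a similarity score for every pair of time-course profiles, an O(n²) workload that motivates the use of an analysis cluster. A toy sketch of the idea (names and data invented; not the team’s code):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length time-course profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def all_vs_all(series):
    """Correlate every profile against every other: the O(n^2) step."""
    names = list(series)
    return {(a, b): pearson(series[a], series[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

# Hypothetical transcript/protein/metabolite profiles, 4 time points each
profiles = {
    "transcript_A": [1.0, 2.0, 3.0, 4.0],
    "protein_A":    [2.0, 4.0, 6.0, 8.0],   # perfectly co-expressed
    "metabolite_B": [4.0, 3.0, 2.0, 1.0],   # anti-correlated
}
corr = all_vs_all(profiles)
```

With tens of thousands of transcripts, proteins, and metabolites per time point, the pair count explodes, which is why the team ran these comparisons on the OLCF’s Eos cluster rather than a workstation.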

The team members gained access to OLCF resources through the OLCF’s Director’s Discretionary program, and after familiarizing themselves with Titan’s hybrid architecture, they plan to expand research into other CAM species, comparing larger data sets and more fully cataloging CAM processes. “As we gain more knowledge from these various approaches, we hope to tease apart the underlying mechanisms for CAM and how it is regulated,” Jacobson said. “That starts to build toward having enough knowledge to deploy CAM in a new species.”

Jacobson also indicated that without access to high-performance computing, the team would not have been able to find these meaningful connections in a timely manner. “This is the first study looking at a cross-omics, time-course experiment to try and explore CAM at this molecular detail,” he said. “I think the ability to use supercomputing infrastructure enabled things that wouldn’t have been possible otherwise. We were able to have a pretty big impact on the analysis of this work because of those resources.”

Related Publication: P. Abraham, H. Yin, A. Borland, D. Weighill, et al., “Transcript, Protein, and Metabolite Temporal Dynamics in the CAM Plant Agave.” Nature Plants 12, no. 2 (2016): 1–10, doi:10.1038/nplants.2016.178.

About Oak Ridge National Laboratory

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: ORNL

The post HPC Unlocks Secret to Drought-Resistant Crops appeared first on HPCwire.

PEARC17 Poster, Visualization Showcase, and BoF Deadline is May 1

Thu, 04/20/2017 - 11:51

April 20, 2017 — The deadline for PEARC17 Poster, Visualization Showcase, and Birds-of-a-Feather submissions is May 1.

Submissions should emphasize experiences and lessons derived from operation and use of advanced research computing on campuses or provided for the academic and open science communities. Submissions that align with the conference theme—Sustainability, Success, and Impact—are particularly encouraged. See the Call for Participation for more information.

Students with accepted posters are eligible for financial support to cover the costs of airfare, shared lodging, and registration fees.

Source: PEARC17

The post PEARC17 Poster, Visualization Showcase, and BoF Deadline is May 1 appeared first on HPCwire.

PNNL Unlocks Hardware’s Hidden Talent for Rendering 3D Graphics

Thu, 04/20/2017 - 11:43

RICHLAND, Wash., April 20, 2017 — When Shuaiwen Leon Song boots up Doom 3 and Half-life 2, he does so in the name of science. Song studies high performance computing at Pacific Northwest National Laboratory, with the goal of making computers smaller, faster and more energy efficient. A more powerful computer, simply put, can solve greater scientific challenges. Like modeling complex chemical reactions or monitoring the electric power grid.

The jump from supercomputers to video games began when Song asked if hardware called 3D stacked memory could do something it was never designed to do: help render 3D graphics. 3D rendering has advanced science with visualizations, models and even virtual reality. It’s also the stuff of video games.

“We’re pushing the boundaries of what hardware can do,” Song said. “And though we tested our idea on video games, this improvement ultimately benefits science.”

Song collaborated with researchers from the University of Houston to develop a new architecture for 3D stacked memory that increases 3D rendering speeds by up to 65 percent. The researchers exploited the hardware’s “processing in memory” feature, and presented the results at the 2017 IEEE Symposium on High Performance Computer Architecture (HPCA).

A normal graphics card uses a graphics processing unit, or GPU, to create images from data stored on memory. 3D stacked memory has an added logic layer that allows for the memory to do some processing too — hence the name “processing in memory.” This essentially reduces the data that has to travel from memory to GPU cores. And like an open highway, less traffic means faster speeds.

The researchers found that the last step in rendering — anisotropic filtering — creates the most traffic. By moving anisotropic filtering to the first step in the pipeline and performing that process in memory, they achieved the greatest performance boost.

Song tested the architecture on popular games such as Doom 3 and Half-life 2. Virtual aliens and demons aside, this research is not so different from Song’s other work. For example, Song is exploring how high performance computers can model changing networks of information, and how to predict changes in these graphs. With research questions like these, Song means to push the boundaries of what computers can do.

The work was funded by DOE’s Office of Science.

About Pacific Northwest National Laboratory

Interdisciplinary teams at Pacific Northwest National Laboratory address many of America’s most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy’s Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, Instagram, LinkedIn and Twitter.

Source: PNNL

The post PNNL Unlocks Hardware’s Hidden Talent for Rendering 3D Graphics appeared first on HPCwire.

Hyperion (IDC) Paints a Bullish Picture of HPC Future

Thu, 04/20/2017 - 11:24

Hyperion Research – formerly IDC’s HPC group – yesterday painted a fascinating and complicated portrait of the HPC community’s health and prospects at the HPC User Forum held in Albuquerque, NM. HPC sales are up and growing ($22 billion, all HPC segments, 2016). Global exascale plans are solidifying (who, what, when, and how much ($)). The new kid on the block – all things ‘big’ data driven – is becoming an adolescent and behaving accordingly. And HPC ROI, at least as measured by Hyperion, is $551 in revenue per $1 invested and $52 in profit per $1 invested.

This new version of HPC has been taking shape for some time, and most of the themes are familiar (see the 2015 HPCwire article, IDC: The Changing Face of HPC): industry consolidation, with SGI’s acquisition by HPE and the Dell EMC merger the most recent examples; accelerated computing versus Moore’s Law; the growing appetite of HPC technology suppliers for expansion into the enterprise; and big data’s transformation into a more nuanced, multi-faceted blend of technologies and applications, making it a form of HPC. These are just a few of the major trends laid out by Hyperion at its HPC User Forum.

All netted down, HPC is still expected to be a growth market, according to Earl Joseph, now CEO of Hyperion, which is expected to be acquired by year’s end. Joseph cited the following drivers:

  • Growing recognition of HPC’s strategic value.
  • HPDA, including ML/DL, cognitive and AI.
  • HPC in the cloud will lift the sector writ large.

“There’s a lot of growth in the upper half of the market and we are back to a slowdown in the lower half of the market,” said Joseph. “Supercomputers are showing a very good recovery, but they still haven’t hit the high point (~$5 billion) of three or four years ago.” They likely won’t get back to that level until 2022 or 2023, Joseph suggested.

Overall, the HPC market segments have tended to hold their positions. Storage ($4,316 million) remained the largest non-server segment and the fastest-growing segment overall, with 7.8 percent annual growth expected over the next five years.

Vendor jockeying will continue, he noted. Consolidation has been a major factor. HPE topped the revenue list in 2016 and will likely do so again in 2017 when SGI’s revenue is added. Dell EMC would no doubt question that, and it will be interesting to watch this rivalry. IBM has never recovered its position after jettisoning its x86 businesses. The battle between x86 offerings, IBM Power, and ARM continues, with both Europe and Japan making substantial bets on ARM for HPC uses. Indeed, the rise of heterogeneous computing generally is creating new opportunities for a variety of accelerators and accelerated systems.

These are the top HPC server suppliers by revenue ($ millions), according to Hyperion: HPE/HP ($3,878), Dell ($2,014), Lenovo ($909), IBM ($492), Cray ($461), Sugon ($315), Fujitsu ($226), SGI ($169), NEC ($166), Bull Atos ($118), and Other ($2,453). It is interesting to note that “Other” is the second-largest revenue total.

Not surprisingly, Hyperion looked closely at the intensifying race to field exascale machines. China, for example, has three efforts on the path to exascale. Joseph expects China to be first to stand up an exascale system. “They are saying 2019 but we’re not sure they will hit that date. We’re saying 2020,” said Joseph. The major players – the U.S., the EU, Japan, and China – are all speeding up their efforts. In the U.S., for example, PathForward awards are expected soon.

Many questions remain. China is still selecting final vendors, something that was supposed to be done last fall, said Joseph. Japan's design is the closest to being “locked in,” with prime contractor Fujitsu having settled on an ARM-based architecture. But that project has experienced some delay, and its financing is not yet fixed.

“According to Japan’s latest announcement, their machine will be up in 2023 but we really expect it to be 2024. The cost may be a bit higher too, $800 million to $900-plus million range. Also, the Japanese government has not yet agreed to fund the whole system. They are funding it one year at a time,” said Joseph.

Nevertheless, exascale funds are starting to flow and plans are taking firmer shape. As shown here, Hyperion has characterized the major exascale programs and forecast likely costs, technology choices, and timetables. Paul Messina, director of the U.S. Exascale Computing Project, provided an update at the HPC User Forum, and HPCwire will have detailed coverage of the U.S. effort shortly.

Predictably, the Hyperion presentation covered a lot of ground drawn from Hyperion/IDC's ongoing research. Steve Conway, another IDC veteran and now Hyperion SVP of research, reviewed the adoption of HPDA and zeroed in on two of its drivers, deep learning and machine learning. You may recall that IDC was one of the first to recognize the rise of data analytics as part of HPC. Clearly there are many potential use cases, Conway said. Today, the HPC-HPDA convergence is taken for granted and is depicted in the slide below.

Hyperion has just created four new data-intensive segments, bulleted here, with more to follow:

  • Fraud and anomaly detection. Example use cases span government (intelligence, cyber security) and industry (credit card fraud, cyber security).
  • Affinity marketing. Discern potential customers' demographics, buying preferences, and habits.
  • Business intelligence. Identify opportunities to advance market position and competitiveness.
  • Precision medicine. Personalized approaches to improve outcomes and control costs.
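To make the first category concrete, here is a minimal sketch of statistical anomaly detection. This is a hypothetical illustration, not any vendor's actual method; the transactions and threshold below are invented:

```python
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    """Return values whose z-score against the sample exceeds the threshold."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [x for x in amounts if abs(x - mean) / stdev > z_threshold]

# Hypothetical card transactions: mostly small purchases, one large outlier.
transactions = [12.5, 9.99, 15.0, 11.2, 8.75, 14.3, 10.0, 950.0]
print(flag_anomalies(transactions))  # prints [950.0]
```

Production fraud systems layer far more sophisticated models on top of screens like this, but the shape of the problem – scoring events against a learned baseline – is the same.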

“Fraud and anomaly detection are the largest today. Business intelligence is growing quickly. The tortoise that will probably win the race is precision medicine because of the size of health care over time,” said Conway, noting the HPDA market is growing two to three times faster than the traditional HPC market overall.

Not surprisingly, deep learning is the darling of this frontier and also the most technically challenging. Singling out precision medicine as a promising area for DL, Conway said, “IBM Watson is the name that’s known here but I promise you x86 clusters are doing the same thing.”

Making the shift from machine learning to deep learning is a difficult journey, said Conway. The big challenge is having enough data, both to train deep learning systems and to infer high-fidelity decisions once they are put into practice. “If you are in the realm of Google or Baidu or Facebook, you have plenty of data. If you are outside of that realm you are in trouble. In most of these realms you do not have enough data to do deep learning,” said Conway.

“One case in point, and we have many of them: We talked to the United Health Group which has about 100 million people that it covers; that’s not nearly enough to do the deep learning they need and they know it. They have built a facility in Cambridge, Mass., and invited competitors to come in and to pool anonymized data to try to get to the point where they can actually start playing with deep learning. This is a big issue.”

Aside from having enough data, there’s the computation challenge. Today, GPUs “rule the roost in these ecosystems, with the software built around them, but we expect to see other things, like Intel Phis and the remarkable resurgence of FPGAs, have a role. Another big issue vendors are having here is there really aren’t good benchmarks, and they spend too much time just trying to decide what would be satisfactory results,” Conway said.

In earlier studies, HPC users’ willingness to deploy in the cloud often seemed tepid. Costs, security, and adequate performance (data movement, computation, and storage) were all concerns, especially for public clouds. Hyperion suggested attitudes are changing and reported a jump in the number of HPC sites using public clouds – 64 percent now, up from 13 percent in 2011. Conway cautioned that the size and number of cloud jobs still represent a small proportion of any given user’s needs. Conversely, he suggested, private and hybrid cloud use is growing fast and holds more near-term promise.

Despite the great flux within HPC, many areas have changed little. For example, software problems (management s/w, parallel s/w, license issues, etc.) remain the number one pain point in HPC adoption and use, according to Hyperion research. This prompted a member of the audience to say, “Earl, this looks like exactly the same IDC slide I saw ten years ago.” It sort of is.

Storage access time was the number two complaint, followed by clusters being too hard to use and manage.

Hyperion presented a fair amount of detail concerning its ROI study and is making the full data available on request. (Download results: www.hpcuserforum.com/ROI)

Slides courtesy of Hyperion Research.

The post Hyperion (IDC) Paints a Bullish Picture of HPC Future appeared first on HPCwire.

CSRA Wins $58 Million Contract to Support the EPA’s HPC Systems

Thu, 04/20/2017 - 08:50

FALLS CHURCH, Va., April 20, 2017 — CSRA Inc. (NYSE: CSRA) today announced that it has secured a $58 million contract with the Environmental Protection Agency (EPA). CSRA will support the EPA’s High Performance Computing (HPC) systems and Environmental Modeling and Visualization Laboratory (EMVL) projects. As part of the contract, CSRA will be responsible for provisioning, maintaining, and supporting the EPA’s HPC environment, as well as its scientific visualization hardware and software. CSRA will also provide technical support for projects involving scientific computing.

“We are thrilled to secure this new contract with the EPA and continue providing the agency with the best scientific and technical resources to carry out its mission,” said Executive Vice President Paul Nedzbala, head of CSRA’s Health and Civil Group. “CSRA continues to lead the federal HPC market by providing industry-leading insights and leveraging partnerships that provide our customers with the latest Next-Generation technology.”

CSRA has led the industry in HPC support for years and this contract adds to the company’s portfolio of federal customers, which currently includes NASA, NOAA, the CDC, and NIH. The agreement also reinforces CSRA’s leadership in the civilian HPC market and its ability to provide superior Next Gen IT solutions to federal customers.

Computational modeling and simulation are important tools that allow the EPA to perform the scientific research needed to guide decisions and better protect human health and the environment. These techniques are increasingly used to solve complex research problems quickly and cost-effectively. Examples of this modeling include:

  • Monitoring the movement of airborne contaminants
  • Using “in silico” methods to screen for potential toxicity
  • Using visualization techniques to “see” how a chemical compound is processed at the cellular/organism level

CSRA’s industry-leading expertise in HPC will help the EPA get maximum efficiency from this capability. The company’s ability to deliver personnel with in-depth subject-matter expertise will let EPA scientists make full use of the supercomputing environment and apply computational tools cost-efficiently while carrying out the agency’s regulatory responsibilities.

About CSRA Inc.

CSRA (NYSE: CSRA) solves our nation’s hardest mission problems as a bridge from mission and enterprise IT to Next Gen, from government to technology partners, and from agency to agency.  CSRA is tomorrow’s thinking, today. For our customers, our partners, and ultimately, all the people our mission touches, CSRA is realizing the promise of technology to change the world through next-generation thinking and meaningful results. CSRA is driving towards achieving sustainable, industry-leading organic growth across federal and state/local markets through customer intimacy, rapid innovation and outcome-based experience. CSRA has over 18,000 employees and is headquartered in Falls Church, Virginia. To learn more about CSRA, visit www.csra.com. Think Next. Now.

Source: CSRA


GRC’s Cooling System at PIC Supports Data Processing for LHC at CERN

Wed, 04/19/2017 - 17:57

AUSTIN, Texas, April 19, 2017 — Green Revolution Cooling (GRC), a leader in immersion cooling, today announced key performance and reliability results from its installation at Port d’Informació Científica (PIC) in Barcelona, Spain. The ultra-efficient cluster, installed in October 2015, has since been used to process dozens of petabytes of data from CERN’s Large Hadron Collider and from leading-edge astrophysics projects.

“The GRC system has beaten all expectations in terms of performance and reliability,” said Vanessa Acin Portella, IT Team Leader at PIC. “We’ve had zero server or cooling failures in the 18 months that the system has been running.”

GRC’s oil immersion cooling system, as the name suggests, immerses servers in an oil bath. The mineral-oil-based coolant, called ElectroSafe, is an electrical insulator with 1,200x the heat capacity of air, making it ideal for cooling IT equipment. Immersion cooling offers several advantages over traditional air cooling: it eliminates the need for air conditioning or specialized facility design, reducing the upfront cost of building a data center while cutting energy use by up to 50%. Beyond the cost savings, the system also improves server reliability by protecting hardware from hot spots, dust, moisture, oxygen, and vibration.
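The 1,200x figure is best read as a per-volume comparison. A quick sanity check with typical textbook property values (our assumptions, not GRC's published numbers) lands in the same range:

```python
# Back-of-the-envelope check of the per-volume heat-capacity ratio.
# Property values are typical textbook figures, not GRC specifications.
oil_density = 850.0        # kg/m^3, typical mineral oil
oil_specific_heat = 1.9    # kJ/(kg*K)
air_density = 1.2          # kg/m^3 at room conditions
air_specific_heat = 1.005  # kJ/(kg*K)

oil_volumetric = oil_density * oil_specific_heat  # ~1,615 kJ/(m^3*K)
air_volumetric = air_density * air_specific_heat  # ~1.2 kJ/(m^3*K)
ratio = oil_volumetric / air_volumetric
print(f"Oil stores roughly {ratio:,.0f}x more heat per unit volume than air")
```

Per unit volume, an oil bath holds on the order of a thousand times more heat than air for the same temperature rise, which is what lets a rack shed tens of kilowatts without moving enormous volumes of air.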

Another key feature of the PIC facility is that it consumes zero water. Data center water utilization has been a growing concern; traditional air-cooled data centers consume large amounts of water to support chillers and air conditioning systems. GRC’s immersion cooling technology, on the other hand, can eliminate water consumption by using dry coolers instead of evaporative cooling towers in most climates. “Water is the next frontier for data center efficiency and sustainability,” said Christiaan Best, founder and CTO of GRC. “Delivering waterless cooling year-round is just another way we’re helping environmentally conscious customers [like PIC] achieve their data center and sustainability goals.”

“The [GRC] system’s ability to support close to 50kW of IT load per rack, without any air conditioning, refrigerant, or water use, is what made it attractive to us,” added Acin Portella. “We had limited space, power, and cooling. GRC’s technology made it possible to add capacity in a storage area while reducing power requirements by 30%.”

Given the superior reliability and performance of the GRC system, PIC plans to explore the use of custom whitebox hardware to further exploit the cost savings offered by the GRC System.

About PIC

Created in 2003, PIC is a joint undertaking of the Spanish and Catalan governments through CIEMAT and IFAE. PIC has been designated by the Spanish government as its LHC Tier-1 centre, and it is the main (Tier-0) data centre for the MAGIC telescope and the PAU dark energy survey. PIC is also a Scientific Data Center for the EUCLID satellite of the European Space Agency and is ramping up its support for the next-generation Cherenkov Telescope Array (CTA). PIC maintains a cross-cutting innovation activity, with many significant results over the years relating to software, hardware, monitoring, and energy efficiency.

Visit www.pic.es for more information.

Source: GRC Cooling


Intel Open Sources All Lustre Work, Brent Gorda Exits

Wed, 04/19/2017 - 13:25

In a letter to the Lustre community posted on the Intel website, Trish Damkroger, vice president of Intel’s Data Center Group, announced that effective immediately the company will contribute all Lustre development to the open source community. Damkroger also announced that Brent Gorda, general manager of Intel’s High Performance Data Division, is leaving the company. Gorda is the former CEO of Whamcloud, the Lustre specialist Intel acquired in 2012.

Damkroger, who is responsible for the high performance computing portfolio at Intel, writes:


Starting today, Intel will contribute all Lustre features and enhancements to the open source community. This will mean that we will no longer provide Intel-branded releases of Lustre, and instead align our efforts and support around the community release.

These changes are designed to increase Intel’s involvement in the community and to accelerate technical innovation in Lustre. For the community as a whole this will mean easier access to the latest stable Lustre releases and an acceleration of the technical roadmap.

By open-sourcing all of our work – including the Intel Manager for Lustre software, our work on Hadoop, and features for the Intel-optimized releases – Intel will provide the opportunity for deeper collaboration in the ongoing development of Lustre tools, and a broader adoption of the technology.

We will continue to support our broad base of existing partners and customers, but will be focused on growing Lustre adoption with the added benefit of our alignment around the latest community releases. Our support offering post transition will be refocused on our traditional Lustre L3 offering. We will also continue to play a leading role in supporting the OpenSFS and EOFS communities, and in engaging partnerships to further the development of open source storage software such as Lustre and DAOS. By contributing our work directly to the community, we hope to further accelerate the development of software and tools to continue to grow Lustre, and future solutions for I/O heavy workloads.


We are proud of our role in making Lustre as popular as it is, and look forward to continuing to drive its growth. We will be platinum sponsors at the upcoming Lustre User Group meeting at the end of May, and you can expect strong participation in the event, with 7 talks scheduled for the main conference and our technical team active in the collocated developer events before the conference.

In closing, I want to thank a dear friend and 10-year colleague, Brent Gorda, for his contribution to Intel and the Lustre community. He has been a committed and energetic evangelist for Lustre and HPDD for the past 7 years, and we are fortunate to have had him with us at Intel. I look forward to working with him over the next few months as we manage this transition.

The letter can be read in full here.


Call for Registrations for Scaling to Petascale Institute 2017

Wed, 04/19/2017 - 10:46

April 19, 2017 — Registration is now open for the “Scaling to Petascale Institute” to be held June 26-30, 2017.  Details are at https://bluewaters.ncsa.illinois.edu/petascale-summer-institute.

This institute is for people developing, modifying, and supporting research projects who seek to enhance their knowledge and skills for scaling software to petascale and emerging extreme-scale computing systems.  Participants should be familiar with Linux, with programming in Fortran, C, C++, Python, or a similar language, and with MPI (the Message Passing Interface).  Many of the sessions will include hands-on activities.

Presentations will be made by faculty and professionals from Argonne Leadership Computing Facility (ALCF), the Blue Waters project at the National Center for Supercomputing Applications (NCSA), National Energy Research Scientific Computing Center (NERSC), Oak Ridge Leadership Computing Facility (OLCF), Stony Brook University, and the Texas Advanced Computing Center (TACC).

The agenda will address the following topics:

  • MPI – Introduction and Advanced topics
  • OpenMP
  • Scaling, code profiling, and debugging
  • GPU programming
  • OpenACC
  • Phi programming
  • Software libraries
  • Parallel I/O
  • HDF5
  • Globus
  • Software engineering

There are two options for participation.  Both require participants to register.

  1. Attend the institute at one of the collaborating host sites to receive full support.  Participants will be able to verbally ask questions of the presenters through two-way video conferencing facilities.  Participants will receive training accounts on the Blue Waters, NERSC, and TACC systems.  Staff will be available at each site to assist during hands-on sessions.  Seating at each site is limited, and registration is handled on a first-come, first-served basis.  If your organization is not a collaborating host site, you may encourage it to apply to become one by May 1.
  2. View the institute sessions via YouTube Live, with a reduced level of support.  Participants will be able to submit written questions through the social media tools supported by the institute.  Due to account allocation policies, participants will NOT receive accounts on the institute computing systems.

The institute is led by Argonne Leadership Computing Facility (ALCF), the Blue Waters project at the National Center for Supercomputing Applications (NCSA), the National Energy Research Scientific Computing Center (NERSC), the Oak Ridge Leadership Computing Facility (OLCF), and the Texas Advanced Computing Center (TACC).

Source: NCSA


Mateo Valero Named Recipient of 2017 IEEE Computer Society Charles Babbage Award

Wed, 04/19/2017 - 07:43

LOS ALAMITOS, Calif., April 19, 2017 — Mateo Valero, professor in the Computer Architecture Department at Polytechnic University of Catalonia and director of the Barcelona Supercomputing Center, has been selected to receive the 2017 IEEE Computer Society (IEEE-CS) Charles Babbage Award. The new award recognizes “contributions to parallel computation through brilliant technical work, mentoring PhD students, and building an incredibly productive European research environment.”

The award, consisting of a certificate and a $1,000 honorarium, will be presented on 31 May 2017 at the annual IEEE-CS International Parallel and Distributed Processing Symposium (IPDPS 2017). Valero will give a keynote speech on Runtime-Aware Architectures at the conference on 1 June 2017.

Valero’s research focuses on high-performance architectures. An IEEE and ACM Fellow and an Intel Distinguished Research Fellow, he has published approximately 700 papers, served in the organization of more than 300 international conferences, and given more than 500 invited talks. Valero is also the director of the Barcelona Supercomputing Center—the National Center of Supercomputing in Spain.

Valero has been honored with several awards, including the 2007 IEEE/ACM Eckert-Mauchly Award, the 2015 IEEE-CS Seymour Cray Award, the 2009 IEEE Harry Goode Award, the 2012 ACM Distinguished Service Award, the 2015 Euro-Par Achievement Award, the Spanish National Award Julio Rey Pastor, the Spanish National Award Leonardo Torres Quevedo, the King Jaime I Award given by the Valencian government, and the Research Award given by the Catalan Foundation for Research and Innovation. He has been named Honorary Doctor by the Universities of Chalmers, Belgrade, Las Palmas de Gran Canaria, Zaragoza, Complutense de Madrid, Cantabria, Granada, Veracruz, and CINVESTAV. He is also a Hall of Fame member of the ICT European Program (selected in 2008 as one of the 25 most influential European researchers in IT from 1983–2008). He received the Aragón Award in 2008, which is the highest recognition granted by the government of Aragón, and the Creu de Sant Jordi in 2016, which is the highest recognition granted by the Catalan government.

Valero became a founding member of the Royal Academy of Engineering of Spain in 1994. He was also elected a member of the Royal Academy of Sciences and Arts of Barcelona (2006) and the Academy of Europe (2009), as well as Correspondent Academic of the Spanish Royal Academy of Exact, Physical, and Natural Sciences (2005) and the Mexican Academy of Sciences (2012).

Valero obtained his telecommunications engineering degree from the Technical University of Madrid (UPM) in 1974 and his PhD in telecommunications from the Polytechnic University of Catalonia (UPC) in 1980. He has been teaching at UPC since 1974, and has been a full professor in the Computer Architecture Department since 1983. He has also been a visiting professor at ENSIMAG in France and at the University of California, Los Angeles. He has been chair of the Computer Architecture Department at UPC (1983-1984, 1986-1987, 1989-1990, and 2001-2005) and the Dean of the Computer Engineering School (1984-1985).

In 1998, Valero won a “Favourite Son” Award from his home town, Alfamén (Zaragoza), and in 2006, Alfamén named their public college after him.

The new IEEE-CS Charles Babbage Award was established in memory of Charles Babbage in recognition of significant contributions in the field of parallel computation. This award covers all aspects of parallel computing including computational aspects, novel applications, parallel algorithms, theory of parallel computation, and parallel computing technologies, among others.

For more information about the award, including a list of past recipients, visit www.computer.org/web/awards/charles-babbage.

For more information on IEEE-CS Awards program, visit www.computer.org/awards.

About IEEE Computer Society

IEEE Computer Society, the computing industry’s unmatched source for technology information and career development, offers a comprehensive array of industry-recognized products, services, and professional opportunities. Known as the community for technology leaders, IEEE Computer Society’s vast resources include membership, publications, a renowned digital library, training programs, conferences, and top-trending technology events. Visit www.computer.org for more information on all products and services.

Source: IEEE
