Feed aggregator

SDSC Comet: Lustre filesystem issues

XSEDE News - Tue, 06/27/2017 - 15:43

We are currently seeing some problems with the network to the storage servers for the Lustre filesystems (/oasis/scratch/comet, /oasis/projects/nsf). This is causing intermittent access issues on both filesystems. We will update once the problem is resolved. Please email help@xsede.org if you have any questions.

Higgins gives insight into drinking water contamination

Colorado School of Mines - Tue, 06/27/2017 - 14:14

Colorado School of Mines Associate Professor of Civil and Environmental Engineering Chris Higgins was recently interviewed for an article in Chemical & Engineering News on how polymers can help remove contaminants from drinking water. The article, "Polymer network captures drinking water contaminant," focuses on how cross-linked cyclodextrin removes the perfluorinated chemical PFOA from water.

Categories: Partner News

STEM-Trekker Badisa Mosesane Attends CERN Summer Student Program

HPC Wire - Tue, 06/27/2017 - 13:11

Badisa Mosesane, an undergraduate scholar who studies computer science at the University of Botswana in Gaborone, recently joined other students from developing nations around the world in Geneva, Switzerland to participate in the European Organization for Nuclear Research (CERN) Summer Student Program.

Each year, advanced undergraduate and beginning graduate students from developing countries who study physics, computing and engineering are encouraged to apply—and it’s very competitive! In 2016, 137 students from 60 countries took part, and more than 1,000 have participated since the program began in 2003.

For eight weeks this summer, Badisa will attend lectures, and work side-by-side with student-peers and scientists from a range of disciplines on some of the world’s biggest experiments. The students will have the opportunity to foster a multinational, interdisciplinary professional network that will prove useful throughout their careers. Badisa is assigned to the Experimental Physics Neutrino group where he will assist with the development of a web-based app that will visualize data from the ProtoDUNE project.

Badisa wrote to tell us about his first week at CERN. “I’m involved with a massive OS installation across ~300 nodes on an (Experimental Hall for Neutrino) computing cluster, and was assigned the task of integrating Cobbler with GLPI and OCS inventory software to inventory Linux services and software,” he said. “Later this week, I’ll learn about ROOT, a toolkit that’s widely used in the high energy physics arena for data analysis, storage and visualization,” he added.

Badisa is a rising star among African undergraduate computer science students. His passion for high performance computing (HPC) has allowed him to successfully compete with graduate and PhD-level students for limited travel funds, and seats at advanced computational and data science workshops. In June 2016, he participated in the South African HPC Winter School at Nelson Mandela Metropolitan University, offered by the Centre for HPC (CHPC) in South Africa. In January, he attended the 7th CHPC Scientific Programming School at the Hartebeesthoek Radio Astronomy Observatory (HartRAO), where he perfected his Linux and Python skills.

While Badisa’s living expenses are covered by the CERN program this summer, he lacked support for the purchase of a round-trip flight, and that’s where STEM-Trek was able to help thanks to a generous donation from Cray Computer Corporation.

“Cray, STEM-Trek and CERN recognize that science diplomacy and a well-trained science and engineering workforce are crucial to every nation’s economy,” said STEM-Trek Executive Director Elizabeth Leake. “But, even with a full scholarship, there are often last-mile expenses that are difficult for some students to manage, and that’s where STEM-Trek helps when we can,” she added.

To learn more about the program and to hear testimonials from past participants, visit the CERN web site and watch this video.

Badisa (center) with colleagues from the University of Botswana and the South African CHPC. U-Botswana Professor Tshiamo Motshegwa (far right) encouraged Badisa to apply for the program. Dr. Motshegwa is the Southern African Development Community (SADC) HPC Forum Chair.

2016 CERN Summer School Programme.

The post STEM-Trekker Badisa Mosesane Attends CERN Summer Student Program appeared first on HPCwire.

Wilcox recognized for research into carbon, mercury capture

Colorado School of Mines - Tue, 06/27/2017 - 11:58

A Colorado School of Mines associate professor of chemical and biological engineering has been recognized for her research into capturing mercury and carbon dioxide from coal-fired power plants and preventing their release into the atmosphere.

Jennifer Wilcox was awarded the 2017 Arthur C. Stern Award for Distinguished Paper, which is given annually for an outstanding contribution to the Journal of the Air & Waste Management Association. The paper, titled “Heterogeneous Mercury Reaction Chemistry on Activated Carbon,” was published in 2011 with coauthors Erdem Sasmaz, Abbigail Kirchofer and Sang-Sup Lee.

The work examines materials that can oxidize mercury, allowing it to be captured. “Coal burning is the number one anthropogenic source of mercury emissions worldwide,” Wilcox said. “This work leads to a deeper understanding of how materials may be modified for more effective mercury removal from exhaust streams of coal-fired power plants,” read the citation from the Air & Waste Management Association.

The award is based on the publication of a paper in JA&WMA that has greatly advanced science and technology; is technical, scientific or management in nature, while advancing the mission of JA&WMA; and is considered to be a substantial contribution toward improving our understanding of air pollution and waste management problems, their impact on environment and health, and the use of sustainable practices in reducing our environmental footprint.

Wilcox also received a Best Presentation Award in the Fall 2016 session of the American Chemical Society, which led to an invitation to publish in the journal Industrial & Engineering Chemistry Research. The paper, titled “Effect of Water on the CO2 Adsorption Capacity of Amine-Functionalized Carbon Sorbents," was subsequently featured on the cover of the journal’s May 31, 2017, issue. Wilcox’s coauthors were Peter Psarras and Jiajun He.

The exhaust of coal-fired power plants is composed mostly of nitrogen, with near-equal amounts of water vapor and CO2, Wilcox said. Because water is often more reactive than CO2, it is important to design materials that have an affinity for carbon dioxide. “This work, through a combination of modeling and experiments, shows a novel material with promise for the selective removal of CO2 from coal-fired power plant exhaust in the presence of water vapor and acid gases,” Wilcox said.

Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Ashley Spurgeon, Assistant Editor, Mines Magazine | 303-273-3959 | aspurgeon@mines.edu

Categories: Partner News

Interdisciplinary research team receives ARMA award

Colorado School of Mines - Tue, 06/27/2017 - 10:59

Colorado School of Mines Civil and Environmental Engineering Professor Marte Gutierrez, Petroleum Engineering Professor Azra Tutuncu and alumnus Luke Frash have been awarded the 2017 Applied Rock Mechanics Research Award by the American Rock Mechanics Association.

Luke Frash and Marte Gutierrez showcase their research during a visit from Darren Mollot, Director of the Office of Clean Energy Systems in the Department of Energy’s (DOE) Office of Fossil Energy.

Frash earned bachelor’s and master’s degrees in engineering with specialties in civil engineering and a PhD in civil and environmental engineering from Mines, studying under Gutierrez. He is now a researcher at Los Alamos National Laboratory in New Mexico.

The team is receiving the award for their 2015 publication, “True-Triaxial Hydraulic Fracturing of Niobrara Carbonate Rock as an Analogue for Complex Oil and Gas Reservoir Stimulation.” The main topics of research, funded partially by the U.S. Department of Energy and the Unconventional Natural Gas and Oil Institute, were development of enhanced geothermal systems and hydraulic fracturing in shale oil and gas reservoirs.

“Well stimulation by hydraulic fracturing is a common method for increasing the injectivity and productivity of wells,” Gutierrez said. “This method is beneficial for many applications, including oil, gas, geothermal energy and CO2 sequestration; however, hydraulic fracturing in shale and other similarly complex geologies remains poorly understood.”

Seeking to bridge the gap in understanding, the team conducted research on large natural rock specimens using true-triaxal stresses, intended to represent field-scale complexities of known oil and gas reservoirs.

“Results from such large-scale hydraulic experiments, particularly on naturally heterogeneous rock samples, remain very limited,” Gutierrez said.

The research team developed special equipment to conduct these innovative field-scale experiments, and Gutierrez says “the results from the scale-model hydraulic fracturing experiments are envisioned to be of important value to the practice of hydraulic fracturing in several fields.”

The award will be presented during the 51st U.S. Rock Mechanics/Geomechanics Symposium in San Francisco, California, on June 25-28, 2017.

Support for the research was provided by the Unconventional Natural Gas and Oil Institute (UNGI) Coupled Integrated Multi Scale Measurements and Modeling Consortium (CIMMM), and the U.S. Department of Energy under DOE Grant No. DE-FE0002760, “Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems.”

Contact: Agata Bogucka, Communications Manager, College of Earth Resource Sciences & Engineering | 303-384-2657 | abogucka@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Linux Cluster Institute (LCI) Workshop

XSEDE News - Tue, 06/27/2017 - 08:58

Advanced Training Available in August at the Intermediate Linux Cluster Institute (LCI) Workshop

For the first time, LCI is offering an Intermediate Workshop that provides more in-depth training on running HPC clusters. If you have some experience as an HPC system administrator and want to expand your skills, this is the workshop for you!

August 14-18, 2017
Georgia Institute of Technology
Atlanta, GA
More information: http://www.linuxclustersinstitute.org/workshops/aug2017/
Register: http://www.linuxclustersinstitute.org/workshops/aug2017/register.php

In just five days you will:
• Strengthen your overall knowledge of HPC system administration
• Focus in depth on file systems and storage, HPC networks, and job schedulers
• Get hands-on training and discuss real-life stories with experienced HPC administrators

Those who have attended an Introductory LCI workshop in the past are especially encouraged to attend!

Breakfast, lunch, snacks, and beverages are provided daily at LCI workshops as part of your registration fee!

Contact Leslie Froeschl at lfroesh@illinois.edu with any questions!

Silicon Mechanics Integrates AMD’s EPYC 7000 Series

HPC Wire - Tue, 06/27/2017 - 08:54

BOTHELL, Wash., June 27, 2017 — Silicon Mechanics, a system integrator and custom design manufacturer that provides the expertise necessary to scale open technology throughout an organization, has announced immediate availability of AMD’s new EPYC family of CPUs. Starting today, Silicon Mechanics offers Supermicro 1U and 2U Ultra servers, plus the BigTwin multi-node server, based on the versatile AMD EPYC platform.

“With the industry excitement surrounding this new AMD release, and as a result of our long-standing hardware partnership with Supermicro, Silicon Mechanics is ready to immediately deploy systems outfitted with EPYC,” said Silicon Mechanics Chief Marketing Officer, Sue Lewis.

EPYC, formerly known in the industry as Naples, offers the following new features:

  • Up to 32 Zen cores
  • 8 DDR4 channels/CPU — up to 2666 MT/s
  • Up to 2TB memory per CPU
  • 128 PCIe lanes
  • Dedicated security subsystem
  • Integrated chipset
  • Socket compatibility with next-gen EPYC processors

“AMD’s EPYC processors support the rapid evolution of performance requirements for data centers, and will serve to enable customer innovation in software-defined storage, web services and machine learning,” said Silicon Mechanics Chief Technology Officer, Daniel Chow. “The 128 lanes of PCIe connectivity offer flexibility and performance in a wide array of server configurations. Our customers are excited to exercise these new capabilities.”


About Silicon Mechanics

Silicon Mechanics is a system integrator and custom design manufacturer that provides the expertise necessary to scale open technology throughout an organization, from building out HPC or storage clusters to the latest in virtualization, containerized services and more. For more than 15 years, Silicon Mechanics has provided consistent execution in delivering innovative open technology solutions for commercial enterprises, government organizations and the research market. Learn more about maximizing the potential of open technology by visiting www.siliconmechanics.com.

Source: Silicon Mechanics

The post Silicon Mechanics Integrates AMD’s EPYC 7000 Series appeared first on HPCwire.

Envenio Secures $1.3M Investment

HPC Wire - Tue, 06/27/2017 - 08:52

NEW BRUNSWICK, June 27, 2017 — Envenio has announced that it has secured investment from Celtic House Venture Partners, Green Century Investments and New Brunswick Innovation Foundation, to the value of $1.3 million.

The Canadian CFD software developer has announced that the funding will be used to grow and strengthen the sales and engineering teams, in line with an ambitious and exciting business plan to increase the use of its cloud-hosted, on-demand CFD platform, EXN/Aero.

The on-demand nature of EXN/Aero has already received widespread praise from large organizations and engineering consultancies alike.

Aside from vital financing, each of the three investors brings decades of industry experience and credibility, promising to strengthen the exciting, innovative and ambitious plans Envenio has held since its inception.

With over 20 years’ experience in nurturing Canadian technology companies, Celtic House Venture Partners has over $4.5 billion worth of exits (acquisitions/IPOs) and is widely regarded as one of the most active investors in technology and innovation.

“We share Envenio’s belief that the billion-dollar global CFD industry is positioned for disruption from new cloud-based and GPU-based approaches that offer unparalleled performance coupled with new service delivery models derived from consumer internet technology,” says Tomas Valis of Celtic House Venture Partners.

Green Century Investments brings extensive experience from a number of sectors, and while its headquarters are in Toronto, its reach extends far beyond Canada, to countries including China. GCI holds a strong belief that sustainability is vital for business as well as the environment, and Envenio will play a key role in its overall goal of building an ecosystem for continuing global success.

The not-for-profit New Brunswick Innovation Foundation (NBIF) adds this investment to its $70 million portfolio, alongside $380 million leveraged from other sources. With a strong record of helping to create over 90 companies and fund 400 applied research projects since its inception in 2003, the corporation currently has 47 companies on its books.

Speaking about the investment, Scott Walton, VP of Envenio, said, “Since the company was founded, we have funded most of the product development through engineering consulting.”

“Now that the product is on the market, we are looking to accelerate its adoption. It’s the world’s first HPC-optimized, cloud-hosted, on-demand CFD tool. It is our honor to be funded by some of Canada’s leading technology investment firms who have a long history of success in Software-as-a-Service products,” he added.

Envenio & EXN/Aero

Envenio is a Canadian CFD software developer, responsible for the creation of the on-demand, cloud-hosted CFD tool EXN/Aero. EXN/Aero is a general-purpose, cloud-hosted computational fluid dynamics solver that speeds up simulation runs by an order of magnitude. Compatible with most meshing tools and using open-source post-processing, it offers a range of on-demand options that help users overcome common limitations in their everyday work. Ideal for CFD consulting, this software is sure to be an asset to companies and CFD freelancers alike.


Celtic House Venture Partners

Celtic House has collaborated with management teams and repeat entrepreneurs to develop technology companies from the inception phase through to exit, generating 25 initial public offerings and successful acquisitions. From offices in Toronto and Ottawa, Celtic House manages in excess of $425 million across three funds.


Green Century Investment

GCI focuses on sustainability on a wider scale than simply environmental protection. With a clear goal to support sustainable business across multiple sectors, the company is actively building an ecosystem to continue its success globally. Headquartered in Toronto, the company’s reach extends as far afield as China.


New Brunswick Innovation Foundation

NBIF is a private, not-for-profit corporation that invests in startup companies and R&D. With over $70 million invested, plus $380 million more leveraged from other sources, NBIF has helped to create over 90 companies and fund 400 applied research projects since its inception in 2003, with a current portfolio of 47 companies. All of NBIF’s investment returns go back into the Foundation to be re-invested in other new startup companies and research initiatives.


Source: Envenio

The post Envenio Secures $1.3M Investment appeared first on HPCwire.

Carnegie Mellon Launches Artificial Intelligence Initiative

HPC Wire - Tue, 06/27/2017 - 08:47

PITTSBURGH, June 27, 2017 — Carnegie Mellon University’s School of Computer Science (SCS) has launched a new initiative, CMU AI, that marshals the school’s work in artificial intelligence (AI) across departments and disciplines, creating one of the largest and most experienced AI research groups in the world.

“For AI to reach greater levels of sophistication, experts in each aspect of AI, such as how computers understand the way people talk or how computers can learn and improve with experience, will increasingly need to work in close collaboration,” said SCS Dean Andrew Moore. “CMU AI provides a framework for our ongoing AI research and education.”

From self-driving cars to smart homes, AI is poised to change the way people live, work and learn, Moore said.

“AI is no longer something that a lone genius invents in the garage,” Moore added. “It requires a team of people, each of whom brings a special expertise or perspective. CMU researchers have always excelled at collaboration across disciplines, and CMU AI will enable all of us to work together in unprecedented ways.”

CMU AI harnesses more than 100 faculty members involved in AI research and education across SCS’s seven departments. Moore is directing the initiative with Jaime Carbonell, the Newell University Professor of Computer Science and director of the Language Technologies Institute; Martial Hebert, director of the Robotics Institute; Computer Science Professor Tuomas Sandholm; and Manuela Veloso, the Herbert A. Simon University Professor of Computer Science and head of the Machine Learning Department.

Carnegie Mellon has been at the forefront of AI since creating the first AI computer program, Logic Theorist, in 1956. It created the first and only Machine Learning Department, studying how software can make discoveries and learn with experience. CMU scientists pioneered research into how machines can understand and translate human languages, and how computers and humans can interact with each other. Carnegie Mellon’s Robotics Institute has been a leader in enabling machines to perceive, decide and act in the world, including a renowned computer vision group that explores how computers can understand images.

CMU AI will focus on educating a new breed of AI scientist and on creating new AI capabilities, from smartphone assistants that learn about users by making friends with them to video technologies that can alter characters to appear older, younger or even as a different actor.

“CMU has a rich history of thought leadership in every aspect of artificial intelligence. Now is exactly the right time to bring this all together for an AI strategy to benefit the world,” Moore said.

That expertise, spread across several departments, has enabled CMU to develop such technologies as self-driving cars; question-answering systems, including components of IBM’s Jeopardy-playing Watson; world-champion robot soccer players; 3-D sports replay technology; and even an AI smart enough to beat four of the world’s top poker players.

“AI is a broad field that involves extremely disparate disciplines, from optimization and symbolic reasoning to understanding physical systems,” Hebert said. “It’s difficult to have state-of-the art expertise in all of those aspects in one place. CMU AI delivers that and makes it centrally accessible.”

Recent developments in computer hardware and software make it possible to reunite elements of AI that have grown independently and create powerful new AI technologies. These developments have created incredible demand from industry for computer scientists with AI know-how.

“Students who study AI at CMU have an opportunity to work on projects that unite multiple disciplines — to study AI in its depth and multidisciplinary, integrative aspects. They generally leave CMU for positions of great leadership, and they lead global AI efforts both in terms of starting new ventures and joining innovative companies that tremendously value our education and research,” Veloso said. “CMU students at all levels have a big impact on what AI is doing for society.”

Nearly 1,000 CMU students are involved in AI research and education. CMU also is vigorously engaged in outreach programs that introduce students in elementary and high school to AI topics and encourage their skills in that area.

“We’re teaching and engaging with those who will improve lives through technology, and who have taken responsibility for what happens in the rest of the century,” Moore said. “Exposing these hugely talented human beings to the best AI resources and researchers is imperative for creating the technologies that will advance mankind. This is the first of many steps CMU will take to ensure AI is accessible to all.”

About Carnegie Mellon University

Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.

Source: Carnegie Mellon

The post Carnegie Mellon Launches Artificial Intelligence Initiative appeared first on HPCwire.

Atos Wins Contract with Safran for IT Infrastructure

HPC Wire - Tue, 06/27/2017 - 08:46

PARIS, June 27, 2017 – Atos, a global leader in digital transformation, has been selected by Safran, a leader in the aeronautics and aerospace sectors, as its partner to optimize datacenters worldwide. The four-year contract runs until 2021 and has the option of a two-year extension.

By awarding Atos the contract to optimize its datacenters, Safran is accelerating its digital transformation by securing the best solutions on the market.

Atos will deploy a flexible hybrid cloud orchestration service, as well as standardized process management to harmonize Safran’s management of all traditional infrastructures across public and private clouds. For Europe, Atos will work with its operational centers based in France, as well as in Romania and Poland, to provide a strong private cloud platform: Atos Canopy Digital Private Cloud. Services for the United States will be provided locally.

“With this contract, we are aiming to rapidly transform our entire Information System over the cloud. The collaboration between the Safran and Atos teams will help us spring into this new era,” said Thierry Milhé – VP International Production of IT Services at Safran.

The security solution will transform Safran’s current standard model into a data-centric model that interfaces with the assets in place at Safran, reinforcing them and controlling all access. Surveillance focuses on data flow, taking into account each country’s specific regulatory requirements.

“By getting Atos to optimize our data centres, we are transforming our IT foundations in order to be able to offer our various core businesses a range of flexible and secure services. We expect to see some technological breakthroughs with these innovative digital solutions,” explains Loïc Bournon, Chief Information Officer at Safran.

“We are happy to contribute to Safran’s performance by optimizing its data centres in a secure way across the entire group. Thanks to our proven experience in manufacturing and aeronautics, we are using our expertise to deploy an efficient industrial model and an ambitious transformation path that respects the constraints of Safran’s core businesses,” says Eric Grall, Executive Vice-President and Head of Global Operations at Atos.

These activities constitute the IT foundation required to guide Safran through the process of growth, performance, and innovation.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around €12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index. www.atos.net

Source: Atos

The post Atos Wins Contract with Safran for IT Infrastructure appeared first on HPCwire.

Science Gateways Bootcamp -- Applications Accepted Through 7/28/17!

XSEDE News - Tue, 06/27/2017 - 07:56

The Science Gateways Community Institute’s Incubator team is offering an intensive Bootcamp that will take place October 2-6, 2017 at the Purdue Research Park of Indianapolis, IN.

Science Gateways Bootcamp: Strategies for Developing, Operating, and Sustaining Your Gateway is designed for leaders of innovative digital offerings, sometimes called gateways, who are seeking to further develop and scale their work.

Participants will engage in hands-on activities to help them articulate the value of their work to key stakeholders and to create a strong development, operations, and sustainability plan. The Bootcamp will include:

• Core business strategy skills
• Technology best practices
• Long-term sustainability strategies

By the end of the Bootcamp, participants will have developed a working hypothesis of their sustainability strategy and identified the key action steps to get there.

APPLY NOW: http://sciencegateways.org/bootcamp
Applications will be accepted through 7/28/2017.

Read about the experiences of our first cohort of Bootcamp attendees here: https://sciencegateways.org/-/reflections-from-the-inaugural-science-gateways-bootcamp-in-april-2017

The EU Human Brain Project Reboots but Supercomputing Still Needed

HPC Wire - Mon, 06/26/2017 - 13:59

The often contentious, EU-funded Human Brain Project, whose initial aim was fixed firmly on full-brain simulation, is now in the midst of a reboot targeting a more modest goal: development of informatics tools and a data/knowledge repository for brain research. Think Google search engine and associated repository for brain researchers. It’s still a massive effort.

There’s a fascinating article in IEEE Spectrum (The Human Brain Project Reboots: A Search Engine for the Brain Is in Sight) touching on the highs, lows, and emerging aspirations of the HBP. High performance computing, not surprisingly, is a core component of the HBP and not restricted just to traditional computing paradigms – both the SpiNNaker and BrainScaleS neuromorphic platforms are HBP efforts.

According to the IEEE Spectrum article, “Sheer computing muscle is one thing that won’t be a problem, says Boris Orth, the head of the High Performance Computing in Neuroscience division at the Jülich Supercomputing Center. Orth walks between the monolithic black racks of the JuQueen supercomputer, his ears muffled against the roar of cooling fans. This is one of the big machines that HBP researchers are using today. Jülich recently commissioned JURON and JULIA, two pilot supercomputers designed with extra memory, to help neuroscientists interact with a simulation as it runs.”

The original plan, spearheaded by Henry Markram, spurred debate and backlash in the brain research community. You may recall Markram also led the Swiss Blue Brain Project at EPFL. Here’s another excerpt from the article:

“As soon as the HBP was funded, things got messy. Some scientists derided the aspiration as both too narrow and too complex. Several labs refused to join the HBP; others soon dropped out. Then, in July 2014, more than 800 neuroscientists signed an open letter to the European Commission threatening to boycott HBP projects unless the commission had an independent panel review “both the science and the management of the HBP.”

“The commission ordered an overhaul, and a year later an independent panel published a 53-page report [PDF] that criticized the project’s science and governance alike. It concluded that the HBP should focus on goals that can be “realistically achieved” and “concentrate on enabling methods and technologies.”

The Human Brain Project reboot is being likened to the international Human Genome Project, which produced a full, searchable genome and associated tools. The HBP will emulate this approach. The ambitious project is scheduled to end in 2023, ten years after it began. The IEEE Spectrum article is fascinating as well as a quick read.

Link to IEEE Spectrum article: http://spectrum.ieee.org/computing/hardware/the-human-brain-project-reboots-a-search-engine-for-the-brain-is-in-sight

Feature image:
3D Reconstruction: Data from the polarized light imaging of the brain is pieced together by a computer to produce a 3D image of the neuronal fiber tracts (shown here as tubes). Credit: Katrin Amunts and Markus Axer/Jülich Research Center

The post The EU Human Brain Project Reboots but Supercomputing Still Needed appeared first on HPCwire.

Bill Gropp Named NCSA Director

HPC Wire - Mon, 06/26/2017 - 12:37

URBANA, Ill., June 26, 2017 — Dr. William “Bill” Gropp, Interim Director and Chief Scientist of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, will become the center’s fifth Director on July 16, 2017, pending Board of Trustees approval. Gropp was appointed to the roles of acting and then interim director of NCSA by Vice Chancellor for Research Peter Schiffer when former NCSA director Dr. Ed Seidel stepped up to serve as Vice President for Economic Development and Innovation for the University of Illinois System.

Dr. William “Bill” Gropp

“Bill has provided solid and forward-looking leadership as acting and interim director during the past ten months,” said Dr. Peter Schiffer, Vice Chancellor for Research at the University of Illinois at Urbana-Champaign. “I have every confidence that he will guide NCSA into the next era of scientific research and the application of advanced digital resources.”

Gropp, who joined the Urbana-Champaign faculty in 2007, holds the Thomas M. Siebel Chair in Computer Science and has served as NCSA’s chief scientist since 2015. He is a co-principal investigator of Blue Waters, the fastest supercomputer on an academic campus, which enables scientists from across the country to make discoveries not otherwise possible. Gropp was recently named principal investigator of the NSF-funded Midwest Big Data Hub, a growing network of partners investing in data and data sciences to address grand challenges for society and science.

Gropp is a leader in the advanced computing community who co-chaired the National Academies’ Committee on Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science. His most widely known contribution to the scientific computing community was the development of the MPICH implementation of the Message Passing Interface (MPI), which he designed with collaborators at Argonne National Laboratory. MPI allows large-scale computations to be run on thousands to millions of processor cores simultaneously and for the results of those computations to be efficiently shared. Gropp has authored more than 187 technical publications, including co-authoring the book Using MPI, which is in its third edition and has sold over 19,000 copies.
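To make that idea concrete, here is a minimal sketch of the scatter/compute/reduce pattern that MPI enables, using Python’s standard-library thread pool as a stand-in for MPI ranks. The function names and round-robin chunking are illustrative only; real MPI codes, such as those built on MPICH, run separate processes across many nodes and use collectives like MPI_Scatter and MPI_Reduce.

```python
# Illustrative sketch only -- not MPI itself. MPI programs scatter data
# across many ranks, let each rank compute a partial result, then reduce
# the partials into one answer. A thread pool stands in for the ranks.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each "rank" computes independently on its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, nranks=4):
    # Scatter: deal the data out round-robin, one chunk per rank.
    chunks = [data[r::nranks] for r in range(nranks)]
    with ThreadPoolExecutor(max_workers=nranks) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)  # compute
    return sum(partials)                                     # reduce

print(parallel_sum_of_squares(range(1000)))  # -> 332833500
```

The same three-phase structure (distribute, compute locally, combine) is what lets MPI applications scale to very large core counts.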

Gropp was recognized as the recipient of the 2016 ACM/IEEE Computer Society Ken Kennedy Award for his highly influential contributions to the programmability of high performance parallel and distributed computers.

“I am honored to be appointed the director of this amazing organization as we drive NCSA’s mission of being a world-class integrative center for transdisciplinary research, education, and innovation into a new era,” said Gropp. “I am excited by the many opportunities that NCSA is uniquely able to pursue in order to solve grand challenges for the benefit of science and society. Our strength is in our experience, our broad range of expertise, and our strong and growing connections with the University of Illinois at Urbana-Champaign campus. We will leverage these strengths to innovate and provide advanced computing and data infrastructure to the nation, partnering with the campus in new initiatives, particularly in data and health sciences, and in strengthening our historic partnerships in engineering, humanities, and the sciences.”

Gropp held the positions of Assistant (1982-1988) and Associate (1988-1990) Professor in the Computer Science Department at Yale University. In 1990, he joined the Numerical Analysis group at Argonne, where he was a Senior Computer Scientist in the Mathematics and Computer Science Division, a Senior Scientist in the Department of Computer Science at the University of Chicago, and a Senior Fellow in the Argonne-Chicago Computation Institute. From 2000 through 2006, he was also Associate Director of the Mathematics and Computer Science Division at Argonne.

Gropp received his B.S. in Mathematics from Case Western Reserve University in 1977, an MS in Physics from the University of Washington in 1978, and a Ph.D. in Computer Science from Stanford in 1982. Gropp is a Fellow of ACM, IEEE, and SIAM and received the Sidney Fernbach Award from the IEEE Computer Society in 2008. Gropp is a member of the National Academy of Engineering.

About the National Center for Supercomputing Applications (NCSA)

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

The post Bill Gropp Named NCSA Director appeared first on HPCwire.

Geology graduate student awarded research grants

Colorado School of Mines - Mon, 06/26/2017 - 11:15

Geology graduate student Rosemarie (Rosie) Fryer has been awarded two grants from national organizations for her research on the submarine lobe deposits of Point Loma in San Diego, California.

Fryer received a $2,500 grant from the American Association of Petroleum Geologists (AAPG) Grants-in-Aid Program, and a $1,775 grant from the Geological Society of America.

The AAPG program provides financial assistance to graduate geoscience students to promote research in petroleum and energy mineral resources or related to environmental geology issues, awarding scholarships ranging from $500-$3,000 to approximately 100 graduate students nationwide every year.

The goal of the GSA student research grant program is to support geoscience master’s and doctoral thesis research, awarding approximately 400 grants averaging $1,752 to graduate students across the United States each year.

Fryer plans to use her grant money to fund field trips to the Point Loma study area during the 2017-2018 academic year. “I am extremely excited that these funds will be used directly towards a field season in the fall, for creating thin sections and laser grain size analysis for my master’s thesis,” she said. 

As these sand-rich submarine lobe deposits form significant hydrocarbon reservoirs, Fryer’s research could prove extremely beneficial to the oil and gas industry by allowing for more accurate geological reservoir models. According to Fryer, the project has immediate applicability to reservoirs currently hosted in submarine lobe deposits, such as the Deepwater Wilcox Reservoirs in the Gulf of Mexico and others in the North Sea, West Africa and the Permian Basin.

Agata Bogucka, Communications Manager, College of Earth Resource Sciences & Engineering | 303-384-2657 | abogucka@mines.edu
Ashley Spurgeon, Assistant Editor, Mines Magazine | 303-273-3959 | aspurgeon@mines.edu

Categories: Partner News

Asetek Receives Follow-on Order From Penguin Computing

HPC Wire - Mon, 06/26/2017 - 09:59

AALBORG, Denmark, June 26, 2017 — Asetek today announced a further order from Penguin Computing, an established data center OEM, for an undisclosed HPC (High Performance Computing) installation.

“This repeat order reflects our strong partnership with Penguin Computing. It is also another confirmation of the increasing need for liquid cooling in high density HPC clusters,” said André Sloth Eriksen, CEO and Founder of Asetek.

On Friday 23 June, Asetek and Penguin Computing announced that Asetek was selected to provide liquid cooling for NVIDIA’s P100 GPU accelerators, the most advanced GPUs yet produced by NVIDIA, as part of Penguin’s Tundra ES (Extreme Scale) platform.

Today’s follow-on order is for Asetek’s RackCDU Direct-to-Chip (D2C) liquid cooling solution and includes additional loops to cool NVIDIA’s P100 GPU accelerators.

The order has a value of USD 140,000 with delivery to be completed in Q3 2017.

Asetek signed a global purchasing agreement with Penguin Computing in 2015.

Source: ASETEK

The post Asetek Receives Follow-on Order From Penguin Computing appeared first on HPCwire.

DOE Launches Chicago Quantum Exchange

HPC Wire - Mon, 06/26/2017 - 09:53

While many of us were preoccupied with ISC 2017 last week, the launch of the Chicago Quantum Exchange went largely unnoticed. So what is it? The Chicago Quantum Exchange is a Department of Energy-sponsored collaboration between the University of Chicago, Fermi National Accelerator Laboratory, and Argonne National Laboratory to “facilitate the exploration of quantum information and the development of new applications with the potential to dramatically improve technology for communication, computing and sensing.”

The new hub will be within the Institute for Molecular Engineering (IME) at UChicago. Quantum mechanics, of course, governs the behavior of matter at the atomic and subatomic levels in exotic and unfamiliar ways compared to the classical physics used to understand the movements of everyday objects. The engineering of quantum phenomena could lead to new classes of devices and computing capabilities, permitting novel approaches to solving problems that cannot be addressed using existing technology.

Lately, it seems work on quantum computing has ratcheted up considerably with IBM, Google, D-Wave, and Microsoft leading the charge. The Chicago Quantum Exchange seems to be a more holistic endeavor to advance the entire “quantum” research ecosystem and industry.

“The combination of the University of Chicago, Argonne National Laboratory and Fermi National Accelerator Laboratory, working together as the Chicago Quantum Exchange, is unique in the domain of quantum information science,” said Matthew Tirrell, dean and Founding Pritzker Director of the Institute for Molecular Engineering and Argonne’s deputy laboratory director for science. “The CQE’s capabilities will span the range of quantum information, from basic solid state experimental and theoretical physics, to device design and fabrication, to algorithm and software development. CQE aims to integrate and exploit these capabilities to create a quantum information technology ecosystem.”

According to the official announcement, the CQE collaboration will benefit from UChicago’s Polsky Center for Entrepreneurship and Innovation, which supports the creation of innovative businesses connected to UChicago and Chicago’s South Side. The CQE will have a strong connection with a major Hyde Park innovation project that was announced recently as the second phase of the Harper Court development on the north side of 53rd Street, and will include an expansion of Polsky Center activities. This project will enable the transition from laboratory discoveries to societal applications through industrial collaborations and startup initiatives.

Companies large and small are positioning themselves to make a far-reaching impact with this new quantum technology. Alumni of IME’s quantum engineering PhD program have been recruited to work for many of these companies. The creation of CQE will allow for new linkages and collaborations with industry, governmental agencies and other academic institutions, as well as support from the Polsky Center for new startup ventures.

IME’s quantum engineering program is already training a new workforce of “quantum engineers” to meet the need of industry, government laboratories, and universities. The program now consists of eight faculty members and more than 100 postdoctoral scientists and doctoral students. Approximately 20 faculty members from UChicago’s Physical Sciences Division also pursue quantum research.

Link to University of Chicago article: https://news.uchicago.edu/article/2017/06/20/chicago-quantum-exchange-create-technologically-transformative-ecosystem

Feature image: Courtesy of Nicholas Brawand

The post DOE Launches Chicago Quantum Exchange appeared first on HPCwire.

Julia Computing Awarded $910,000 Grant by Alfred P. Sloan Foundation

HPC Wire - Mon, 06/26/2017 - 09:49

CAMBRIDGE, Mass., June 26, 2017 — Julia Computing has been granted $910,000 by the Alfred P. Sloan Foundation to support open-source Julia development, including $160,000 to promote diversity in the Julia community.

The grant will support Julia training, adoption, usability, compilation, package development, tooling and documentation.

The diversity portion of the grant will fund a new full-time Director of Diversity Initiatives plus travel, scholarships, training sessions, workshops, hackathons and Webinars. Further information about the new Director of Diversity Initiatives position is below for interested applicants.

Julia Computing CEO Viral Shah says, “Diversity of backgrounds increases diversity of ideas. With this grant, the Sloan Foundation is setting a new standard of support for diversity which we hope will be emulated throughout STEM.”

Diversity efforts in the Julia community have been led by JuliaCon Diversity Chair, Erica Moszkowski. According to Moszkowski, “This year, we awarded $12,600 in diversity grants to help 16 participants travel to, attend and present at JuliaCon 2017. Those awards, combined with anonymous talk review, directed outreach, and other efforts have paid off. To give one example, there are many more women attending and presenting than in previous years, but there is a lot more we can do to expand participation from underrepresented groups in the Julia community. This support from the Sloan Foundation will allow us to scale up these efforts and apply them not just at JuliaCon, but much more broadly through Julia workshops and recruitment.”

Julia Computing seeks job applicants for Director of Diversity Initiatives. This is a full-time salaried position. The ideal candidate would have the following characteristics:

  • Familiarity with Julia
  • Strong scientific, mathematical or numeric programming skills required – e.g. Julia, Python, R
  • Eager to travel, organize and conduct Julia trainings, conferences, workshops and hackathons
  • Enthusiastic about outreach, developing and leveraging relationships with universities and STEM diversity organizations such as YesWeCode, Girls Who Code, Code Latino and Black Girls Code
  • Strong organizational, communication, public speaking and training skills required
  • Passionate evangelist for Julia, open source computing, scientific computing and increasing diversity in the Julia community and STEM
  • This position is based in Cambridge, MA

Interested applicants should send a resume and statement of interest to jobs@juliacomputing.com.

Julia is the fastest modern high performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence. Julia combines the functionality and ease of use of Python, R, Matlab, SAS and Stata with the speed of C++ and Java. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. With more than 1 million downloads and +161% annual growth, Julia is one of the top 10 programming languages developed on GitHub and adoption is growing rapidly in finance, insurance, energy, robotics, genomics, aerospace and many other fields.

Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Comcast, Disney, Facebook, Ford, Google, Grindr, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC, Raytheon and Uber.

  1. Julia is lightning fast. Julia provides speed improvements up to 1,000x for insurance model estimation, 225x for parallel supercomputing image analysis and 10x for macroeconomic modeling.
  2. Julia provides unlimited scalability. Julia applications can be deployed on large clusters with a click of a button and can run parallel and distributed computing quickly and easily on tens of thousands of nodes.
  3. Julia is easy to learn. Julia’s flexible syntax is familiar and comfortable for users of Python, R and Matlab.
  4. Julia integrates well with existing code and platforms. Users of C, C++, Python, R and other languages can easily integrate their existing code into Julia.
  5. Elegant code. Julia was built from the ground up for mathematical, scientific and statistical computing. It has advanced libraries that make programming simple and fast and dramatically reduce the number of lines of code required – in some cases, by 90% or more.
  6. Julia solves the two language problem. Because Julia combines the ease of use and familiar syntax of Python, R and Matlab with the speed of C, C++ or Java, programmers no longer need to estimate models in one language and reproduce them in a faster production language. This saves time and reduces error and cost.

About Julia Computing

Julia Computing was founded in 2015 by the creators of the open source Julia language to develop products and provide support for businesses and researchers who use Julia.

About The Alfred P. Sloan Foundation

The Alfred P. Sloan Foundation is a not-for-profit grantmaking institution based in New York City.  Founded by industrialist Alfred P. Sloan Jr., the Foundation makes grants in support of basic research and education in science, technology, engineering, mathematics, and economics.  This grant was provided through the Foundation’s Data and Computational Research program, which makes grants that seek to leverage developments in digital information technology to maximize the efficiency and trustedness of research. sloan.org

Source: Julia Computing


The post Julia Computing Awarded $910,000 Grant by Alfred P. Sloan Foundation appeared first on HPCwire.

Atos Highlights Opportunities in New Era of Supercomputing

HPC Wire - Mon, 06/26/2017 - 09:47

LONDON, June 26, 2017 – Atos, a leader in digital transformation, declares the world is at the dawning of a new Age of Data in its Digital Vision for Supercomputing and Big Data thought leadership paper.

Speaking ahead of its launch today at a reception in the Houses of Parliament attended by over 100 MPs, Adrian Gregory, CEO, Atos UK&I said: “We all are privileged to be living through the fourth industrial revolution and to witness the world evolve at a rapid pace due to technology. This is especially true when we look at the developments in supercomputing and Big Data and the impact this is already having on the business landscape.

“Such advances mean that technology is no longer merely a facilitator; it is an engine driving the transformation of businesses and public services and is the defining force for new operating models across all sectors. It is crucial to organisations everywhere that we harness this potential, and as the leading European supercomputing manufacturer, Atos has chosen to take the lead,” added Adrian.

Digital Vision for Supercomputing and Big Data deconstructs the key developments and explores ways in which organisations can drive performance gains and deliver new products and services more quickly to enhance the experience of customers and citizens.

Julian David, CEO, techUK, said: “The potential of data analytics, backed by the power of High Performance Computing is still to be fully realised. Providing organisations across the public and private sectors with a fuller understanding of the associated opportunities in analytics is key to ensuring the UK remains at the forefront of this digital revolution, and competitive on a global scale.”

Presented by some of the leading subject-matter experts from Atos and across the public and private sectors, including Intel, the STFC Hartree Centre and Cambium LLP, the paper discusses topics as diverse as the convergence of High Performance Computing and Big Data, self-learning cyber security, and the risks and rewards of quantum computing.

Building on previous Digital Vision publications covering London, Government and Health, Digital Vision for Supercomputing & Big Data explains how data will increasingly be collected and traded as part of a burgeoning data economy, and how the processing and storage of vast amounts of data opens up new possibilities, with agile analytics critical to those wishing to exploit new opportunities and drive growth.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around € 12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

The post Atos Highlights Opportunities in New Era of Supercomputing appeared first on HPCwire.

UMass Dartmouth Reports on HPC Day 2017 Activities

HPC Wire - Mon, 06/26/2017 - 09:42

UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) organized and hosted the third annual “HPC Day 2017” on May 25. This annual event showcases ongoing scientific research in Massachusetts that is enabled through high-performance computing (HPC). This year the participants came from institutions all over the state: Boston University, Harvard, MIT, Northeastern University, Tufts University, WPI, the UMass campuses at Amherst, Boston, Dartmouth and Lowell, the UMass Medical School, and even industry.

The event featured a total of 13 talks presenting the application of HPC in research areas ranging from biological systems to cosmology. The conference was well attended, with 139 attendees pre-registered and another 20 registering on-site. A special poster session with awards for student projects was included as well. Over 20 posters were presented, showcasing top-notch student research from all over the state. Five awards were granted, made possible through generous donations by Nvidia, Dell and MathWorks. The conference lunch was sponsored by Microway Inc., while the two coffee breaks were sponsored by Dell.

Dartmouth HPC Day 2017

There were two keynote speakers this year. The first was Dr. Sushil Prasad from the National Science Foundation, who talked about his vision for an impactful curricular change to Computer Science programs in the country. His talk was titled “Developing IEEE TCPP Parallel and Distributed Computing Curriculum and NSF Advanced Cyberinfrastructure Learning and Workforce Development Programs.” On the same theme, there was also an interactive Education Panel that included stakeholders from industry and academia to discuss issues associated with HPC education and training. The second keynote speaker was Dr. Luke Kelley from Harvard, who gave an exciting and visually engaging talk titled “Predictions of future Gravitational Wave Observations using Simulations of the Universe.” This is a very special time for the gravitational physics research community, following the recent first-ever discovery of gravitational waves by the LIGO detector.

The CSCVR also used this event to debut a small prototype GPGPU computing system that is powered purely by solar panels. The unique feature of this system is its extremely high power efficiency — an order of magnitude better than traditional systems, made possible by leveraging highly efficient consumer electronics (in particular, Nvidia Shield TV “set-top” units). The CSCVR has a history of developing innovative supercomputers, from gaming consoles to, more recently, video-gaming graphics cards and mobile devices.

The CSCVR provides undergraduate and graduate students with high quality, discovery-based educational experiences that transcend the traditional boundaries of academic fields, and foster collaborative research in the computational sciences. The CSCVR’s computational resources are being utilized to solve complex problems in the sciences ranging from the modeling of ocean waves to uncovering the mysteries of black hole physics.

Prof. Gaurav Khanna is a physics professor at the University of Massachusetts Dartmouth who serves as the associate director of the campus’ Center for Scientific Computing & Visualization Research.

The post UMass Dartmouth Reports on HPC Day 2017 Activities appeared first on HPCwire.

AI: Scaling Neural Networks Through Cost-Effective Memory Expansion

HPC Wire - Mon, 06/26/2017 - 07:55

Neural networks offer a powerful new resource for analyzing large volumes of complex, unstructured data. However, most of today’s Artificial Intelligence (AI) deep learning frameworks rely on in-core processing, which means that all the relevant data must fit into main memory. As the size and complexity of a neural network grows, cost becomes a limiting factor. DRAM memory is simply too expensive.

Of course, memory bottlenecks are hardly new in intensive-computing environments such as High Performance Computing (HPC). Transferring large data sets to large numbers of high-performance cores has been an increasing challenge for decades. Fortunately, that is beginning to change. New Intel memory and storage technologies are being integrated into the Intel® Scalable System Framework (Intel® SSF) to help reverse this trend. They do this by moving high volume data closer to the processing cores, and by accelerating data movement at each tier of the memory and storage hierarchy.

Moving Data Closer to Compute

To accelerate the flow of data into the compute cores, Intel is integrating high-speed memory directly into Intel® Xeon® Phi™ processors and future Intel® Xeon® processors. By moving memory closer to compute resources, these solutions help to optimize core utilization. They also help to improve workload scaling. Intel Xeon Phi processors, for example, have demonstrated up to 97 percent scaling efficiency for deep learning workloads on up to 32 nodes1.

Transforming the Economics of Memory

Intel® Optane™ technology provides even more far-reaching advantages for data movement. This groundbreaking, non-volatile memory technology combines the speed of DRAM with the capacity and cost efficiency of NAND.  Based on Intel® Optane™ technology, Intel® Optane™ SSDs are designed to provide 5-8x faster performance than Intel’s fastest NAND-based SSDs2.  Intel Optane SSDs can be combined with Intel® Memory Drive Technology to extend memory and provide cost-effective, large-memory pools.

When connected over the PCIe bus, an Intel Optane SSD provides an efficient extension to system memory. Behind the scenes, the Intel Memory Drive Technology transparently integrates the SSD into the memory subsystem and orchestrates data movement. “Hot” data is automatically pushed onto the DRAM to maximize performance. The OS and applications see a single high-speed memory pool, so no software changes are required.
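As a rough software analogy only (this is not Intel Memory Drive Technology), memory-mapping a file shows the same idea: a process addresses more data than fits in DRAM as if it were ordinary memory, while the OS transparently keeps hot pages cached in RAM and evicts cold ones to the backing store. The helper function below is hypothetical.

```python
# Analogy sketch: a file-backed buffer behaves like one large memory pool.
# The OS pages hot data into RAM on access -- conceptually similar to how
# an SSD-backed memory pool can be presented to applications unchanged.
import mmap
import os
import tempfile

def make_backed_buffer(nbytes):
    """Create a file-backed buffer addressable like a big byte array."""
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, nbytes)     # reserve capacity on "slow" storage
    buf = mmap.mmap(fd, nbytes)  # map it into the process address space
    return buf, fd, path

buf, fd, path = make_backed_buffer(16 * 1024 * 1024)  # 16 MiB demo pool
buf[0:5] = b"hello"              # writes land in pages cached in RAM
assert buf[0:5] == b"hello"      # reads see ordinary memory semantics
buf.close()
os.close(fd)
os.remove(path)
```

The application code never distinguishes between pages resident in RAM and pages on the backing device, which is the property that makes such memory extension transparent.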

Figure 1. You can extend memory cost-effectively using high-speed Intel® Optane™ SSDs and Intel® Memory Drive Technology.

How good is performance? Based on Intel internal testing, the DRAM + Intel Optane SSD combination provides roughly 75 to 80 percent of the performance of a comparable DRAM-only solution3. The outlook may be even better for deep learning applications. Intel engineers found that the DRAM + Intel Optane SSD combination can optimize data locality and minimize cross-socket traffic, which could result in better performance4 than the DRAM-only solution. This is the case for big datasets distributed across all system memory, where every application thread has access to all data. One example is the General Matrix Multiplication (GEMM) benchmark, which represents a core portion of deep learning algorithms.

Accelerating Storage

With today’s exploding data volumes, transferring data from bulk storage to local storage to cluster memory can lead to operational bottlenecks at any point. Intel Optane SSDs can be used as high-speed buffers to break through these barriers. A relatively small number of Intel® Optane™ SSDs can dramatically reduce data transfer times. They can also improve performance for applications that are constrained by excessive storage latency or insufficient storage bandwidth.

Figure 2. Intel® Scalable System Framework simplifies the design of efficient, high-performing clusters that optimize the value of HPC investments.

Simplifying Integration with Intel® Scalable System Framework (Intel® SSF)

By accelerating data movement, Intel Optane SSDs—and future Intel products based on Intel Optane technology—will help to transform many aspects of HPC and AI.  Their inclusion in Intel SSF will make it easier for organizations to take advantage of emerging memory and storage solutions based on this new technology.

Intel SSF provides a scalable blueprint for efficient clusters that deliver higher value through increased integration and balanced designs. This system-level focus helps Intel synchronize innovation across all layers of the HPC and AI solution stack, so new technologies can be integrated more easily by system vendors and end-user organizations.

As deep learning emerges as a mainstream HPC workload, these balanced, large-memory cluster solutions will help organizations deploy massive neural networks to analyze some of the world’s largest and most complex datasets.

Stay tuned for additional articles focusing on the benefits Intel SSF brings to AI at each level of the solution stack through balanced innovation in compute, fabric, storage, and software technologies.


1 https://syncedreview.com/2017/04/15/what-does-it-take-for-intel-to-seize-the-ai-market/

2 https://www.intel.com/content/www/us/en/solid-state-drives/optane-ssd-dc-p4800x-brief.html

3 Based on Intel internal testing using SGEMM MKL from the Intel® Math Kernel Library. System under test (DRAM + SSD): 2 X Intel® Xeon® processor E5-2699 v4, Intel® Server Board S2600WT, 128 GB DDR4 memory + 4 X Intel® Optane SSD SSDPED1K375GA), Cent OS 7.3.1611. Baseline system (all DRAM): 2 X Intel® Xeon® processor E5-2699 v4, Intel® Server Board S2600WT, 768 GB DDR4 memory, Cent OS 7.3.1611.

4 Achieving higher performance while using less DRAM memory was made possible by Intel® Memory Drive Technology, which automatically takes advantage of NUMA technology in Intel processors to enhance data placement not only across the hybrid memory space, but also within the available DRAM memory.

The post AI: Scaling Neural Networks Through Cost-Effective Memory Expansion appeared first on HPCwire.


Subscribe to www.rmacc.org aggregator