HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

STEM-Trekker Badisa Mosesane Attends CERN Summer Student Program

Tue, 06/27/2017 - 13:11

Badisa Mosesane, an undergraduate scholar who studies computer science at the University of Botswana in Gaborone, recently joined other students from developing nations around the world in Geneva, Switzerland to participate in the European Organization for Nuclear Research (CERN) Summer Student Program.

Each year, advanced undergraduate and beginning graduate students from developing countries who study physics, computing and engineering are encouraged to apply—and it’s very competitive! In 2016, 137 students representing 60 countries took part, and more than 1,000 have participated since the program began in 2003.

For eight weeks this summer, Badisa will attend lectures, and work side-by-side with student-peers and scientists from a range of disciplines on some of the world’s biggest experiments. The students will have the opportunity to foster a multinational, interdisciplinary professional network that will prove useful throughout their careers. Badisa is assigned to the Experimental Physics Neutrino group where he will assist with the development of a web-based app that will visualize data from the ProtoDUNE project.

Badisa wrote to tell us about his first week at CERN. “I’m involved with a massive OS installation across ~300 nodes on an Experimental Hall for Neutrino computing cluster, and was assigned the task of integrating Cobbler with GLPI and OCS Inventory software to inventory Linux services and software,” he said. “Later this week, I’ll learn about ROOT, a toolkit that’s widely used in the high energy physics arena for data analysis, storage and visualization,” he added.

Badisa is a rising star among African undergraduate computer science students. His passion for high performance computing (HPC) has allowed him to successfully compete with graduate and PhD-level students for limited travel funds and seats at advanced computational and data science workshops. In June 2016, he participated in the South African HPC Winter School, offered by the Centre for High Performance Computing (CHPC) of South Africa at Nelson Mandela Metropolitan University. In January, he attended the 7th CHPC Scientific Programming School at the Hartebeesthoek Radio Astronomy Observatory (HartRAO), where he perfected his Linux and Python skills.

While Badisa’s living expenses are covered by the CERN program this summer, he lacked support for the purchase of a round-trip flight, and that’s where STEM-Trek was able to help thanks to a generous donation from Cray Computer Corporation.

“Cray, STEM-Trek and CERN recognize that science diplomacy and a well-trained science and engineering workforce are crucial to every nation’s economy,” said STEM-Trek Executive Director Elizabeth Leake. “But, even with a full scholarship, there are often last-mile expenses that are difficult for some students to manage, and that’s where STEM-Trek helps when we can,” she added.

To learn more about the program and to hear testimonials from past participants, visit the CERN web site and watch this video.

Badisa (center) with colleagues from the University of Botswana and the South African CHPC. U-Botswana Professor Tshiamo Motshegwa (far right) encouraged Badisa to apply for the program. Dr. Motshegwa is the Southern African Development Community (SADC) HPC Forum Chair.

2016 CERN Summer School Programme, photo courtesy

The post STEM-Trekker Badisa Mosesane Attends CERN Summer Student Program appeared first on HPCwire.

Silicon Mechanics Integrates AMD’s EPYC 7000 Series

Tue, 06/27/2017 - 08:54

BOTHELL, Wash., June 27, 2017 — Silicon Mechanics, a system integrator and custom design manufacturer that provides the expertise necessary to scale open technology throughout an organization, has announced immediate availability of AMD’s new EPYC family of CPUs. Starting today, Silicon Mechanics offers Supermicro 1U and 2U Ultra servers, plus the BigTwin multi-node server, based on the versatile AMD EPYC platform.

“With the industry excitement surrounding this new AMD release, and as a result of our long-standing hardware partnership with Supermicro, Silicon Mechanics is ready to immediately deploy systems outfitted with EPYC,” said Silicon Mechanics Chief Marketing Officer, Sue Lewis.

EPYC, formerly known in the industry as Naples, offers the following new features:

  • Up to 32 Zen cores
  • 8 DDR4 channels/CPU — up to 2666 MT/s
  • Up to 2TB memory per CPU
  • 128 PCIe lanes
  • Dedicated security subsystem
  • Integrated chipset
  • Socket compatibility with next-gen EPYC processors

“AMD’s EPYC processors support the rapid evolution of performance requirements for data centers, and will serve to enable customer innovation in software-defined storage, web services and machine learning,” said Silicon Mechanics Chief Technology Officer, Daniel Chow. “The 128 lanes of PCIe connectivity offer flexibility and performance in a wide array of server configurations. Our customers are excited to exercise these new capabilities.”

For more information on AMD’s EPYC product release, please click here. To self-configure your next server infrastructure solution with AMD EPYC, please click here.

About Silicon Mechanics

Silicon Mechanics is a system integrator and custom design manufacturer that provides the expertise necessary to scale open technology throughout an organization, from building out HPC or storage clusters to the latest in virtualization, containerized services and more. For more than 15 years, Silicon Mechanics has provided consistent execution in delivering innovative open technology solutions for commercial enterprises, government organizations and the research market. Learn more about maximizing the potential of open technology by visiting www.siliconmechanics.com.

Source: Silicon Mechanics

The post Silicon Mechanics Integrates AMD’s EPYC 7000 Series appeared first on HPCwire.

Envenio Secures $1.3M Investment

Tue, 06/27/2017 - 08:52

NEW BRUNSWICK, June 27, 2017 — Envenio has announced that it has secured investment from Celtic House Venture Partners, Green Century Investments and the New Brunswick Innovation Foundation, to the value of $1.3 million.

The Canadian CFD software developer says the funding will be used to grow and strengthen its sales and engineering teams, in line with an ambitious business plan to increase adoption of its cloud-hosted, on-demand CFD platform, EXN/Aero.

The on-demand nature of EXN/Aero has already received widespread praise from large organizations and engineering consultancies alike.

Aside from vital financing, each of the three investors brings decades of industry experience and credibility, strengthening the innovative and ambitious plans Envenio has pursued since its inception.

With over 20 years’ experience in nurturing Canadian technology companies, Celtic House Venture Partners has over $4.5 billion worth of exits (acquisitions/IPOs) and is widely regarded as one of the most active investors in technology and innovation.

“We share Envenio’s belief that the billion-dollar global CFD industry is positioned for disruption from new cloud-based and GPU-based approaches that offer unparalleled performance coupled with new service delivery models derived from consumer internet technology,” says Tomas Valis of Celtic House Venture Partners.

Green Century Investments brings extensive experience from a number of sectors, and while its headquarters are in Toronto, its reach extends far beyond Canada to countries including China. The firm holds a strong belief that sustainability is vital for business as well as the environment, and Envenio will play a key role in its overall goal of building an ecosystem for continuing global success.

The not-for-profit New Brunswick Innovation Foundation (NBIF) adds this investment to its $70 million portfolio, alongside $380 million leveraged from other sources. With a strong record of helping to create over 90 companies and fund 400 applied research projects since its inception in 2003, the corporation currently has 47 companies on its books.

Speaking about the investment, Scott Walton, VP of Envenio, said, “Since the company was founded, we have funded most of the product development through engineering consulting.”

“Now that the product is on the market, we are looking to accelerate its adoption. It’s the world’s first HPC-optimized, cloud-hosted, on-demand CFD tool. It is our honor to be funded by some of Canada’s leading technology investment firms, which have a long history of success with Software-as-a-Service products,” he added.

Envenio & EXN/Aero

Envenio is a Canadian CFD software developer and the creator of the on-demand, cloud-hosted CFD tool EXN/Aero. EXN/Aero is a general-purpose, cloud-based computational fluid dynamics solver that speeds up simulation runs by an order of magnitude. Compatible with most meshing tools and using open source post-processing, it offers a range of on-demand options that help users overcome common limitations in their everyday work. Ideal for CFD consulting, the software is an asset to companies and CFD freelancers alike.

www.envenio.ca

Celtic House Venture Partners

Celtic House has collaborated with management teams and repeat entrepreneurs to develop technology companies from the inception phase through to exit, generating 25 initial public offerings and successful acquisitions. From offices in Toronto and Ottawa, Celtic House manages in excess of $425 million across three funds.

www.celtic-house.com

Green Century Investments

GCI focuses on sustainability on a wider scale than simply environmental protection. With a clear goal to support sustainable business across multiple sectors, the company is actively building an ecosystem to continue success globally. Headquartered in Toronto, the company has global reach including as far afield as China.

www.greencenturyinvestment.com

New Brunswick Innovation Foundation

NBIF is a private, not-for-profit corporation that invests in startup companies and R&D. With over $70 million invested, plus $380 million more leveraged from other sources, NBIF has helped to create over 90 companies and fund 400 applied research projects since its inception in 2003, with a current portfolio of 47 companies. All of NBIF’s investment returns go back into the Foundation to be re-invested in other new startup companies and research initiatives.

www.nbif.ca

Source: Envenio

The post Envenio Secures $1.3M Investment appeared first on HPCwire.

Carnegie Mellon Launches Artificial Intelligence Initiative

Tue, 06/27/2017 - 08:47

PITTSBURGH, June 27, 2017 — Carnegie Mellon University’s School of Computer Science (SCS) has launched a new initiative, CMU AI, that marshals the school’s work in artificial intelligence (AI) across departments and disciplines, creating one of the largest and most experienced AI research groups in the world.

“For AI to reach greater levels of sophistication, experts in each aspect of AI, such as how computers understand the way people talk or how computers can learn and improve with experience, will increasingly need to work in close collaboration,” said SCS Dean Andrew Moore. “CMU AI provides a framework for our ongoing AI research and education.”

From self-driving cars to smart homes, AI is poised to change the way people live, work and learn, Moore said.

“AI is no longer something that a lone genius invents in the garage,” Moore added. “It requires a team of people, each of whom brings a special expertise or perspective. CMU researchers have always excelled at collaboration across disciplines, and CMU AI will enable all of us to work together in unprecedented ways.”

CMU AI harnesses more than 100 faculty members involved in AI research and education across SCS’s seven departments. Moore is directing the initiative with Jaime Carbonell, the Newell University Professor of Computer Science and director of the Language Technologies Institute; Martial Hebert, director of the Robotics Institute; Computer Science Professor Tuomas Sandholm; and Manuela Veloso, the Herbert A. Simon University Professor of Computer Science and head of the Machine Learning Department.

Carnegie Mellon has been at the forefront of AI since creating the first AI computer program, Logic Theorist, in 1956. It created the first and only Machine Learning Department, studying how software can make discoveries and learn with experience. CMU scientists pioneered research into how machines can understand and translate human languages, and how computers and humans can interact with each other. Carnegie Mellon’s Robotics Institute has been a leader in enabling machines to perceive, decide and act in the world, including a renowned computer vision group that explores how computers can understand images.

CMU AI will focus on educating a new breed of AI scientist and on creating new AI capabilities, from smartphone assistants that learn about users by making friends with them to video technologies that can alter characters to appear older, younger or even as a different actor.

“CMU has a rich history of thought leadership in every aspect of artificial intelligence. Now is exactly the right time to bring this all together for an AI strategy to benefit the world,” Moore said.

That expertise, spread across several departments, has enabled CMU to develop such technologies as self-driving cars; question-answering systems, including components of IBM’s Jeopardy-playing Watson; world-champion robot soccer players; 3-D sports replay technology; and even an AI smart enough to beat four of the world’s top poker players.

“AI is a broad field that involves extremely disparate disciplines, from optimization and symbolic reasoning to understanding physical systems,” Hebert said. “It’s difficult to have state-of-the-art expertise in all of those aspects in one place. CMU AI delivers that and makes it centrally accessible.”

Recent developments in computer hardware and software make it possible to reunite elements of AI that have grown independently and create powerful new AI technologies. These developments have created incredible demand from industry for computer scientists with AI know-how.

“Students who study AI at CMU have an opportunity to work on projects that unite multiple disciplines — to study AI in its depth and multidisciplinary, integrative aspects. They generally leave CMU for positions of great leadership, and they lead global AI efforts both in terms of starting new ventures and joining innovative companies that tremendously value our education and research,” Veloso said. “CMU students at all levels have a big impact on what AI is doing for society.”

Nearly 1,000 CMU students are involved in AI research and education. CMU also is vigorously engaged in outreach programs that introduce students in elementary and high school to AI topics and encourage their skills in that area.

“We’re teaching and engaging with those who will improve lives through technology, and who have taken responsibility for what happens in the rest of the century,” Moore said. “Exposing these hugely talented human beings to the best AI resources and researchers is imperative for creating the technologies that will advance mankind. This is the first of many steps CMU will take to ensure AI is accessible to all.”

About Carnegie Mellon University

Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.

Source: Carnegie Mellon

The post Carnegie Mellon Launches Artificial Intelligence Initiative appeared first on HPCwire.

Atos Wins Contract with Safran for IT Infrastructure

Tue, 06/27/2017 - 08:46

PARIS, June 27, 2017 – Atos, a global leader in digital transformation, has been selected by Safran, a leader in the aeronautics and aerospace sectors, as its partner to optimize Safran’s datacenters worldwide. The four-year contract runs until 2021, with the option of a two-year extension.

By awarding Atos the contract to optimize its datacenters, Safran is accelerating its digital transformation by securing the best solutions on the market.

Atos will deploy a flexible hybrid cloud orchestration service, as well as standardized process management, to harmonize Safran’s management of all traditional infrastructures across public and private clouds. For Europe, Atos will work with its operational centers based in France, as well as in Romania and Poland, to provide a strong private cloud platform: Atos Canopy Digital Private Cloud. Services for the United States will be provided locally.

“With this contract, we are aiming to rapidly transform our entire Information System over the cloud. The collaboration between the Safran and Atos teams will help us spring into this new era,” said Thierry Milhé – VP International Production of IT Services at Safran.

The security solution will transform Safran’s current standard model into a data-centric model that interfaces with the assets in place at Safran, reinforcing them and controlling all access. Surveillance focuses on data flow, taking into account each country’s specific regulatory requirements.

“By getting Atos to optimize our data centres, we are transforming our IT foundations in order to be able to offer our various core businesses a range of flexible and secure services. We expect to see some technological breakthroughs with these innovative digital solutions,” explains Loïc Bournon, Chief Information Officer at Safran.

“We are happy to contribute to Safran’s performance by optimizing its data centres in a secure way across the entire group. Thanks to our proven experience in manufacturing and aeronautics, we are using our expertise to deploy an efficient industrial model and an ambitious transformation path that respects the constraints of Safran’s core businesses,” says Eric Grall, Executive Vice-President and Head of Global Operations at Atos.

These activities constitute the IT foundation required to guide Safran through the process of growth, performance, and innovation.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around € 12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index. www.atos.net

Source: Atos

The post Atos Wins Contract with Safran for IT Infrastructure appeared first on HPCwire.

The EU Human Brain Project Reboots but Supercomputing Still Needed

Mon, 06/26/2017 - 13:59

The often contentious, EU-funded Human Brain Project, whose initial aim was fixed firmly on full-brain simulation, is now in the midst of a reboot targeting a more modest goal: development of informatics tools and a data/knowledge repository for brain research. Think Google search engine and associated repository for brain researchers. It’s still a massive effort.

There’s a fascinating article in IEEE Spectrum (The Human Brain Project Reboots: A Search Engine for the Brain Is in Sight) touching on the highs, lows, and emerging aspirations of the HBP. High performance computing, not surprisingly, is a core component of the HBP and not restricted just to traditional computing paradigms – both the SpiNNaker and BrainScaleS neuromorphic platforms are HBP efforts.

According to the IEEE Spectrum article, “Sheer computing muscle is one thing that won’t be a problem, says Boris Orth, the head of the High Performance Computing in Neuroscience division at the Jülich Supercomputing Center. Orth walks between the monolithic black racks of the JuQueen supercomputer, his ears muffled against the roar of cooling fans. This is one of the big machines that HBP researchers are using today. Jülich recently commissioned JURON and JULIA, two pilot supercomputers designed with extra memory, to help neuroscientists interact with a simulation as it runs.”

The original plan, spearheaded by Henry Markram, spurred debate and backlash in the brain research community. You may recall Markram also led the Swiss Blue Brain Project at EPFL. Here’s another excerpt from the article:

“As soon as the HBP was funded, things got messy. Some scientists derided the aspiration as both too narrow and too complex. Several labs refused to join the HBP; others soon dropped out. Then, in July 2014, more than 800 neuroscientists signed an open letter to the European Commission threatening to boycott HBP projects unless the commission had an independent panel review “both the science and the management of the HBP.”

“The commission ordered an overhaul, and a year later an independent panel published a 53-page report [PDF] that criticized the project’s science and governance alike. It concluded that the HBP should focus on goals that can be “realistically achieved” and “concentrate on enabling methods and technologies.”

The Human Brain Project reboot is being likened to the international Human Genome Project, which produced a full, searchable genome and associated tools; the HBP will emulate this approach. The ambitious project is scheduled to end in 2023, ten years after it began. The IEEE Spectrum article is fascinating as well as a quick read.

Link to IEEE Spectrum article: http://spectrum.ieee.org/computing/hardware/the-human-brain-project-reboots-a-search-engine-for-the-brain-is-in-sight

Feature image:
3D Reconstruction: Data from the polarized light imaging of the brain is pieced together by a computer to produce a 3D image of the neuronal fiber tracts (shown here as tubes). Credit: Katrin Amunts and Markus Axer/Jülich Research Center

The post The EU Human Brain Project Reboots but Supercomputing Still Needed appeared first on HPCwire.

Bill Gropp Named NCSA Director

Mon, 06/26/2017 - 12:37

URBANA, Ill., June 26, 2017 — Dr. William “Bill” Gropp, Interim Director and Chief Scientist of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, will become the center’s fifth Director on July 16, 2017, pending Board of Trustees approval. Gropp was appointed to the roles of acting and then interim director of NCSA by Vice Chancellor for Research Peter Schiffer when former NCSA director Dr. Ed Seidel stepped up to serve as Vice President for Economic Development and Innovation for the University of Illinois System.

Dr. William “Bill” Gropp

“Bill has provided solid and forward-looking leadership as acting and interim director during the past ten months,” said Dr. Peter Schiffer, Vice-Chancellor for Research at the University of Illinois at Urbana-Champaign. “I have every confidence that he will guide NCSA into the next era of scientific research and the application of advanced digital resources.”

Gropp, who joined the Urbana-Champaign faculty in 2007, holds the Thomas M. Siebel Chair in Computer Science and has served as NCSA’s chief scientist since 2015. He is a co-principal investigator of Blue Waters, the fastest supercomputer on an academic campus, which enables scientists from across the country to make discoveries not otherwise possible. Gropp was recently named principal investigator of the NSF-funded Midwest Big Data Hub, a growing network of partners investing in data and data sciences to address grand challenges for society and science.

Gropp is a leader in the advanced computing community who co-chaired the National Academies’ Committee on Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science. His most widely known contribution to the scientific computing community was the development of the MPICH implementation of the Message Passing Interface (MPI), which he designed with collaborators at Argonne National Laboratory. MPI allows large-scale computations to be run on thousands to millions of processor cores simultaneously and the results of those computations to be shared efficiently. Gropp has authored more than 187 technical publications, including co-authoring the book Using MPI, which is in its third edition and has sold over 19,000 copies.
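
For readers who have never seen MPI in action, the model is simple: every process in a job gets a rank, computes on its own share of the work, and collective operations combine the results. Below is a minimal sketch of that pattern using the mpi4py Python bindings (a thin wrapper over C implementations such as MPICH); the script name and the partial-result computation are illustrative, not taken from the article.

    # Run with an MPI launcher, e.g.: mpiexec -n 4 python sum_of_squares.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of cooperating processes

    local = rank * rank      # each rank computes its own partial result

    # A collective operation combines the partial results efficiently;
    # only rank 0 receives the final sum.
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Sum of squares across {size} ranks: {total}")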

Gropp received the 2016 ACM/IEEE Computer Society Ken Kennedy Award for his highly influential contributions to the programmability of high performance parallel and distributed computers.

“I am honored to be appointed the director of this amazing organization as we drive NCSA’s mission of being a world-class integrative center for transdisciplinary research, education, and innovation into a new era,” said Gropp. “I am excited by the many opportunities that NCSA is uniquely able to pursue in order to solve grand challenges for the benefit of science and society. Our strength is in our experience, our broad range of expertise, and our strong and growing connections with the University of Illinois at Urbana-Champaign campus. We will leverage these strengths to innovate and provide advanced computing and data infrastructure to the nation, partnering with the campus in new initiatives, particularly in data and health sciences, and in strengthening our historic partnerships in engineering, humanities, and the sciences.”

Gropp held the positions of Assistant (1982-1988) and Associate (1988-1990) Professor in the Computer Science Department at Yale University. In 1990, he joined the Numerical Analysis group at Argonne, where he was a Senior Computer Scientist in the Mathematics and Computer Science Division, a Senior Scientist in the Department of Computer Science at the University of Chicago, and a Senior Fellow in the Argonne-Chicago Computation Institute. From 2000 through 2006, he was also Associate Director of the Mathematics and Computer Science Division at Argonne.

Gropp received his B.S. in Mathematics from Case Western Reserve University in 1977, an M.S. in Physics from the University of Washington in 1978, and a Ph.D. in Computer Science from Stanford in 1982. Gropp is a Fellow of ACM, IEEE, and SIAM and received the Sidney Fernbach Award from the IEEE Computer Society in 2008. Gropp is a member of the National Academy of Engineering.

About the National Center for Supercomputing Applications (NCSA)

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers, and students together to solve grand challenges at rapid speed and scale.

Source: NCSA

The post Bill Gropp Named NCSA Director appeared first on HPCwire.

Asetek Receives Follow-on Order From Penguin Computing

Mon, 06/26/2017 - 09:59

AALBORG, Denmark, June 26, 2017 — Asetek today announced a further order from Penguin Computing, an established data center OEM, for an undisclosed HPC (High Performance Computing) installation.

“This repeat order reflects our strong partnership with Penguin Computing. It is also another confirmation of the increasing need for liquid cooling in high density HPC clusters,” said André Sloth Eriksen, CEO and Founder of Asetek.

On Friday 23 June, Asetek and Penguin Computing announced that Asetek was selected to provide liquid cooling for NVIDIA’s P100 GPU accelerators, the most advanced GPUs yet produced by NVIDIA, as part of Penguin’s Tundra ES (Extreme Scale) platform.

Today’s follow-on order is for Asetek’s RackCDU Direct-to-Chip (D2C) liquid cooling solution and includes additional loops to cool NVIDIA’s P100 GPU accelerators.

The order has a value of USD 140,000 with delivery to be completed in Q3 2017.

Asetek signed a global purchasing agreement with Penguin Computing in 2015.

Source: ASETEK

The post Asetek Receives Follow-on Order From Penguin Computing appeared first on HPCwire.

DOE Launches Chicago Quantum Exchange

Mon, 06/26/2017 - 09:53

While many of us were preoccupied with ISC 2017 last week, the launch of the Chicago Quantum Exchange went largely unnoticed. So what is it? It is a Department of Energy-sponsored collaboration among the University of Chicago, Fermi National Accelerator Laboratory, and Argonne National Laboratory to “facilitate the exploration of quantum information and the development of new applications with the potential to dramatically improve technology for communication, computing and sensing.”

The new hub will sit within the Institute for Molecular Engineering (IME) at UChicago. Quantum mechanics, of course, governs the behavior of matter at the atomic and subatomic levels in exotic and unfamiliar ways compared to the classical physics used to understand the movements of everyday objects. The engineering of quantum phenomena could lead to new classes of devices and computing capabilities, permitting novel approaches to solving problems that cannot be addressed using existing technology.

Lately, it seems work on quantum computing has ratcheted up considerably with IBM, Google, D-Wave, and Microsoft leading the charge. The Chicago Quantum Exchange seems to be a more holistic endeavor to advance the entire “quantum” research ecosystem and industry.

“The combination of the University of Chicago, Argonne National Laboratory and Fermi National Accelerator Laboratory, working together as the Chicago Quantum Exchange, is unique in the domain of quantum information science,” said Matthew Tirrell, dean and Founding Pritzker Director of the Institute for Molecular Engineering and Argonne’s deputy laboratory director for science. “The CQE’s capabilities will span the range of quantum information, from basic solid state experimental and theoretical physics, to device design and fabrication, to algorithm and software development. CQE aims to integrate and exploit these capabilities to create a quantum information technology ecosystem.”

According to the official announcement, the CQE collaboration will benefit from UChicago’s Polsky Center for Entrepreneurship and Innovation, which supports the creation of innovative businesses connected to UChicago and Chicago’s South Side. The CQE will have a strong connection with a major Hyde Park innovation project that was announced recently as the second phase of the Harper Court development on the north side of 53rd Street, and will include an expansion of Polsky Center activities. This project will enable the transition from laboratory discoveries to societal applications through industrial collaborations and startup initiatives.

Companies large and small are positioning themselves to make a far-reaching impact with this new quantum technology. Alumni of IME’s quantum engineering PhD program have been recruited to work for many of these companies. The creation of CQE will allow for new linkages and collaborations with industry, governmental agencies and other academic institutions, as well as support from the Polsky Center for new startup ventures.

IME’s quantum engineering program is already training a new workforce of “quantum engineers” to meet the need of industry, government laboratories, and universities. The program now consists of eight faculty members and more than 100 postdoctoral scientists and doctoral students. Approximately 20 faculty members from UChicago’s Physical Sciences Division also pursue quantum research.

Link to University of Chicago article: https://news.uchicago.edu/article/2017/06/20/chicago-quantum-exchange-create-technologically-transformative-ecosystem

Feature image: Courtesy of Nicholas Brawand

The post DOE Launches Chicago Quantum Exchange appeared first on HPCwire.

Julia Computing Awarded $910,000 Grant by Alfred P. Sloan Foundation

Mon, 06/26/2017 - 09:49

CAMBRIDGE, Mass., June 26, 2017 — Julia Computing has been granted $910,000 by the Alfred P. Sloan Foundation to support open-source Julia development, including $160,000 to promote diversity in the Julia community.

The grant will support Julia training, adoption, usability, compilation, package development, tooling and documentation.

The diversity portion of the grant will fund a new full-time Director of Diversity Initiatives plus travel, scholarships, training sessions, workshops, hackathons and Webinars. Further information about the new Director of Diversity Initiatives position is below for interested applicants.

Julia Computing CEO Viral Shah says, “Diversity of backgrounds increases diversity of ideas. With this grant, the Sloan Foundation is setting a new standard of support for diversity which we hope will be emulated throughout STEM.”

Diversity efforts in the Julia community have been led by JuliaCon Diversity Chair, Erica Moszkowski. According to Moszkowski, “This year, we awarded $12,600 in diversity grants to help 16 participants travel to, attend and present at JuliaCon 2017. Those awards, combined with anonymous talk review, directed outreach, and other efforts have paid off. To give one example, there are many more women attending and presenting than in previous years, but there is a lot more we can do to expand participation from underrepresented groups in the Julia community. This support from the Sloan Foundation will allow us to scale up these efforts and apply them not just at JuliaCon, but much more broadly through Julia workshops and recruitment.”

Julia Computing seeks job applicants for Director of Diversity Initiatives. This is a full-time salaried position. The ideal candidate would have the following characteristics:

  • Familiarity with Julia
  • Strong scientific, mathematical or numeric programming skills required – e.g. Julia, Python, R
  • Eager to travel, organize and conduct Julia trainings, conferences, workshops and hackathons
  • Enthusiastic about outreach, developing and leveraging relationships with universities and STEM diversity organizations such as YesWeCode, Girls Who Code, Code Latino and Black Girls Code
  • Strong organizational, communication, public speaking and training skills required
  • Passionate evangelist for Julia, open source computing, scientific computing and increasing diversity in the Julia community and STEM
  • This position is based in Cambridge, MA

Interested applicants should send a resume and statement of interest to jobs@juliacomputing.com.

Julia is the fastest modern high performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence. Julia combines the functionality and ease of use of Python, R, Matlab, SAS and Stata with the speed of C++ and Java. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. With more than 1 million downloads and +161% annual growth, Julia is one of the top 10 programming languages developed on GitHub and adoption is growing rapidly in finance, insurance, energy, robotics, genomics, aerospace and many other fields.

Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Comcast, Disney, Facebook, Ford, Google, Grindr, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC, Raytheon and Uber.

  1. Julia is lightning fast. Julia provides speed improvements up to 1,000x for insurance model estimation, 225x for parallel supercomputing image analysis and 10x for macroeconomic modeling.
  2. Julia provides unlimited scalability. Julia applications can be deployed on large clusters with a click of a button and can run parallel and distributed computing quickly and easily on tens of thousands of nodes.
  3. Julia is easy to learn. Julia’s flexible syntax is familiar and comfortable for users of Python, R and Matlab.
  4. Julia integrates well with existing code and platforms. Users of C, C++, Python, R and other languages can easily integrate their existing code into Julia.
  5. Elegant code. Julia was built from the ground up for mathematical, scientific and statistical computing. It has advanced libraries that make programming simple and fast and dramatically reduce the number of lines of code required – in some cases, by 90% or more.
  6. Julia solves the two language problem. Because Julia combines the ease of use and familiar syntax of Python, R and Matlab with the speed of C, C++ or Java, programmers no longer need to estimate models in one language and reproduce them in a faster production language. This saves time and reduces error and cost.

About Julia Computing

Julia Computing was founded in 2015 by the creators of the open source Julia language to develop products and provide support for businesses and researchers who use Julia.

About The Alfred P. Sloan Foundation

The Alfred P. Sloan Foundation is a not-for-profit grantmaking institution based in New York City.  Founded by industrialist Alfred P. Sloan Jr., the Foundation makes grants in support of basic research and education in science, technology, engineering, mathematics, and economics.  This grant was provided through the Foundation’s Data and Computational Research program, which makes grants that seek to leverage developments in digital information technology to maximize the efficiency and trustedness of research. sloan.org

Source: Julia Computing

 

The post Julia Computing Awarded $910,000 Grant by Alfred P. Sloan Foundation appeared first on HPCwire.

Atos Highlights Opportunities in New Era of Supercomputing

Mon, 06/26/2017 - 09:47

LONDON, June 26, 2017 – Atos, a leader in digital transformation, declares that the world is at the dawn of a new Age of Data in its Digital Vision for Supercomputing and Big Data thought leadership paper.

Speaking ahead of its launch today at a reception in the Houses of Parliament attended by over 100 MPs, Adrian Gregory, CEO, Atos UK&I, said: “We all are privileged to be living through the fourth industrial revolution and to witness the world evolve at a rapid pace due to technology. This is especially true when we look at the developments in supercomputing and Big Data and the impact this is already having on the business landscape.

“Such advances mean that technology is no longer merely a facilitator; it is an engine driving the transformation of businesses and public services and is the defining force for new operating models across all sectors. It is crucial to organisations everywhere that we harness this potential, and as the leading European supercomputing manufacturer, Atos has chosen to take the lead,” Gregory added.

Digital Vision for Supercomputing and Big Data deconstructs the key developments and explores ways in which organisations can drive performance gains and deliver new products and services more quickly to enhance the experience of customers and citizens.

Julian David, CEO, techUK, said: “The potential of data analytics, backed by the power of High Performance Computing is still to be fully realised. Providing organisations across the public and private sectors with a fuller understanding of the associated opportunities in analytics is key to ensuring the UK remains at the forefront of this digital revolution, and competitive on a global scale.”

Presented by some of the leading subject matter experts within Atos and across the public and private sectors, including Intel, the STFC Hartree Centre and Cambium LLP, the paper discusses topics as diverse as the convergence of High Performance Computing and Big Data, self-learning cyber security, and the risks and rewards of quantum computing.

Building on previous Digital Vision publications for London, Government, and Health, Digital Vision for Supercomputing & Big Data explains how, increasingly, data will be collected and traded as part of a burgeoning data economy, and how the processing and storage of vast amounts of data open up new possibilities, with agile analytics critical for those wishing to exploit new opportunities and drive growth.

About Atos

Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around € 12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payment industry. With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.

Source: Atos

The post Atos Highlights Opportunities in New Era of Supercomputing appeared first on HPCwire.

UMass Dartmouth Reports on HPC Day 2017 Activities

Mon, 06/26/2017 - 09:42

UMass Dartmouth’s Center for Scientific Computing & Visualization Research (CSCVR) organized and hosted the third annual “HPC Day 2017” on May 25. This annual event showcases ongoing scientific research in Massachusetts that is enabled through high-performance computing (HPC). This year the participants came from institutions all over the state: Boston University, Harvard, MIT, Northeastern University, Tufts University, WPI, the UMass campuses (Amherst, Boston, Dartmouth, Lowell, and Medical School), and even industry.

The event featured a total of 13 talks presenting the application of HPC in research areas ranging from biological systems to cosmology. The conference was well attended, with 139 attendees pre-registered and 20 more registering on-site. A special poster session with awards for student projects was included as well; over 20 posters showcasing top-notch student research from all over the state were presented at the conference. Five awards were granted, made possible through generous donations from Nvidia, Dell and MathWorks. The conference lunch was sponsored by Microway Inc., while the two coffee breaks were sponsored by Dell.

Dartmouth HPC Day 2017

There were two keynote speakers this year. The first was Dr. Sushil Prasad from the National Science Foundation, who talked about his vision for an impactful curricular change to Computer Science programs in the country. His talk was titled “Developing IEEE TCPP Parallel and Distributed Computing Curriculum and NSF Advanced Cyberinfrastructure Learning and Workforce Development Programs.” On the same theme, an interactive Education Panel brought together stakeholders from industry and academia to discuss issues associated with HPC education and training. The second keynote speaker was Dr. Luke Kelley from Harvard, who gave an exciting and visually engaging talk titled “Predictions of Future Gravitational Wave Observations Using Simulations of the Universe.” This is a very special time for the gravitational physics research community, following the recent first-ever direct detection of gravitational waves by the LIGO detector.

The CSCVR also used this event to debut a small prototype GPGPU computing system that is powered purely by solar panels. The unique feature of this system is its extremely high power efficiency — an order of magnitude greater than that of traditional systems, made possible by leveraging highly efficient consumer electronics (in particular, Nvidia Shield TV “set-top” units). The CSCVR has a history of developing innovative supercomputers, from gaming consoles to, more recently, video-gaming graphics cards and mobile devices.

The CSCVR provides undergraduate and graduate students with high quality, discovery-based educational experiences that transcend the traditional boundaries of academic fields, and foster collaborative research in the computational sciences. The CSCVR’s computational resources are being utilized to solve complex problems in the sciences ranging from the modeling of ocean waves to uncovering the mysteries of black hole physics.

Prof. Gaurav Khanna is a physics professor at the University of Massachusetts Dartmouth who serves as the associate director of the campus’ Center for Scientific Computing & Visualization Research.

The post UMass Dartmouth Reports on HPC Day 2017 Activities appeared first on HPCwire.

AI: Scaling Neural Networks Through Cost-Effective Memory Expansion

Mon, 06/26/2017 - 07:55

Neural networks offer a powerful new resource for analyzing large volumes of complex, unstructured data. However, most of today’s Artificial Intelligence (AI) deep learning frameworks rely on in-core processing, which means that all the relevant data must fit into main memory. As the size and complexity of a neural network grows, cost becomes a limiting factor. DRAM memory is simply too expensive.

Of course, memory bottlenecks are hardly new in intensive-computing environments such as High Performance Computing (HPC). Transferring large data sets to large numbers of high-performance cores has been an increasing challenge for decades. Fortunately, that is beginning to change. New Intel memory and storage technologies are being integrated into the Intel® Scalable System Framework (Intel® SSF) to help reverse this trend. They do this by moving high volume data closer to the processing cores, and by accelerating data movement at each tier of the memory and storage hierarchy.

Moving Data Closer to Compute

To accelerate the flow of data into the compute cores, Intel is integrating high-speed memory directly into Intel® Xeon® Phi™ processors and future Intel® Xeon® processors. By moving memory closer to compute resources, these solutions help to optimize core utilization. They also help to improve workload scaling. Intel Xeon Phi processors, for example, have demonstrated up to 97 percent scaling efficiency for deep learning workloads on up to 32 nodes1.
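
As a point of reference, scaling efficiency is simply the measured speedup divided by the number of nodes. A short Python sketch, using hypothetical timings (not figures from Intel), shows how a number like 97 percent is arrived at:

    def scaling_efficiency(t_one_node, t_n_nodes, n_nodes):
        """Speedup relative to one node, divided by the node count."""
        speedup = t_one_node / t_n_nodes
        return speedup / n_nodes

    # Hypothetical timings: 64.0 hours on 1 node, 2.06 hours on 32 nodes.
    print(scaling_efficiency(64.0, 2.06, 32))  # ~0.97, i.e. 97% efficiency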

Transforming the Economics of Memory

Intel® Optane™ technology provides even more far-reaching advantages for data movement. This groundbreaking, non-volatile memory technology combines the speed of DRAM with the capacity and cost efficiency of NAND.  Based on Intel® Optane™ technology, Intel® Optane™ SSDs are designed to provide 5-8x faster performance than Intel’s fastest NAND-based SSDs2.  Intel Optane SSDs can be combined with Intel® Memory Drive Technology to extend memory and provide cost-effective, large-memory pools.

When connected over the PCIe bus, an Intel Optane SSD provides an efficient extension to system memory. Behind the scenes, the Intel Memory Drive Technology transparently integrates the SSD into the memory subsystem and orchestrates data movement. “Hot” data is automatically pushed onto the DRAM to maximize performance. The OS and applications see a single high-speed memory pool, so no software changes are required.

Figure 1. You can extend memory cost-effectively using high-speed Intel® Optane™ SSDs and Intel® Memory Drive Technology.

How good is performance? Based on Intel internal testing, the DRAM + Intel Optane SSD combination provides roughly 75 to 80 percent of the performance of a comparable DRAM-only solution3. The outlook may be even better for deep learning applications. Intel engineers found that the DRAM + Intel Optane SSD combination can optimize data locality and minimize cross-socket traffic, which could result in better performance4 than the DRAM-only solution. This is the case for big datasets distributed across all system memory, where every application thread has access to all data. One such example is the General Matrix Multiplication (GEMM) benchmark, which represents a core kernel of many deep learning algorithms.
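
To make the GEMM connection concrete: the benchmark times a dense matrix-matrix multiply, the kernel behind fully connected layers (and, after lowering, convolutions). A minimal sketch in Python with NumPy, using an illustrative matrix size rather than anything from Intel’s test setup:

    import time
    import numpy as np

    n = 2048  # illustrative size; real benchmarks sweep many shapes
    A = np.random.rand(n, n).astype(np.float32)
    B = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    C = A @ B                 # dispatches to the BLAS SGEMM routine
    dt = time.perf_counter() - t0

    flops = 2.0 * n**3        # n^3 multiply-add pairs = 2n^3 floating point ops
    print(f"{flops / dt / 1e9:.1f} GFLOP/s")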

Accelerating Storage

With today’s exploding data volumes, transferring data from bulk storage to local storage to cluster memory can lead to operational bottlenecks at any point. Intel Optane SSDs can be used as high-speed buffers to break through these barriers. A relatively small number of Intel® Optane™ SSDs can dramatically reduce data transfer times. They can also improve performance for applications that are constrained by excessive storage latency or insufficient storage bandwidth.

Figure 2. Intel® Scalable System Framework simplifies the design of efficient, high-performing clusters that optimize the value of HPC investments.

Simplifying Integration with Intel® Scalable System Framework (Intel® SSF)

By accelerating data movement, Intel Optane SSDs—and future Intel products based on Intel Optane technology—will help to transform many aspects of HPC and AI.  Their inclusion in Intel SSF will make it easier for organizations to take advantage of emerging memory and storage solutions based on this new technology.

As deep learning emerges as a mainstream HPC workload, these balanced, large-memory cluster solutions will help organizations deploy massive neural networks to analyze some of the world’s largest and most complex datasets.

Intel SSF provides a scalable blueprint for efficient clusters that deliver higher value through increased integration and balanced designs. This system-level focus helps Intel synchronize innovation across all layers of the HPC and AI solution stack, so new technologies can be integrated more easily by system vendors and end-user organizations.

Stay tuned for additional articles focusing on the benefits Intel SSF brings to AI at each level of the solution stack through balanced innovation in compute, fabric, storage, and software technologies.

 

1 https://syncedreview.com/2017/04/15/what-does-it-take-for-intel-to-seize-the-ai-market/

2 https://www.intel.com/content/www/us/en/solid-state-drives/optane-ssd-dc-p4800x-brief.html

3 Based on Intel internal testing using SGEMM MKL from the Intel® Math Kernel Library. System under test (DRAM + SSD): 2 x Intel® Xeon® processor E5-2699 v4, Intel® Server Board S2600WT, 128 GB DDR4 memory + 4 x Intel® Optane™ SSD (SSDPED1K375GA), CentOS 7.3.1611. Baseline system (all DRAM): 2 x Intel® Xeon® processor E5-2699 v4, Intel® Server Board S2600WT, 768 GB DDR4 memory, CentOS 7.3.1611.

4 Achieving higher performance while using less DRAM memory was made possible by Intel® Memory Drive Technology, which automatically takes advantage of NUMA technology in Intel processors to enhance data placement not only across the hybrid memory space, but also within the available DRAM memory.

The post AI: Scaling Neural Networks Through Cost-Effective Memory Expansion appeared first on HPCwire.

US Air Force Research Lab Taps IBM to Build Brain-Inspired AI Supercomputing System

Sat, 06/24/2017 - 19:33

ARMONK, N.Y. and ROME, N.Y., June 24 — IBM and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts.

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism,” where multiple data sources can be run in parallel against the same neural network, and “model parallelism,” where independent neural networks form an ensemble that can be run in parallel on the same data.
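
The distinction is easy to express in code. The sketch below, in plain Python with hypothetical infer/model placeholders, is purely conceptual and implies nothing about TrueNorth’s actual programming interface:

    from concurrent.futures import ProcessPoolExecutor

    def infer(model, batch):
        """Stand-in for running one neural network over one batch of inputs."""
        return [model(x) for x in batch]

    def data_parallel(model, shards):
        """Data parallelism: the same network evaluates different data shards."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(infer, [model] * len(shards), shards))

    def model_parallel(models, batch):
        """Model parallelism: independent networks (an ensemble) see the same data."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(infer, models, [batch] * len(models)))

(Models passed across processes must be picklable, e.g. module-level functions.)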

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower energy than a conventional computer running inference on the same neural network.

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

About IBM Research

For more than seven decades, IBM Research has defined the future of information technology with more than 3,000 researchers in 12 labs located across six continents. Scientists from IBM Research have produced six Nobel Laureates, 10 U.S. National Medals of Technology, five U.S. National Medals of Science, six Turing Awards, 19 inductees in the National Academy of Sciences and 20 inductees into the U.S. National Inventors Hall of Fame. For more information about IBM Research, visit www.ibm.com/research.

About Air Force Research Laboratory

With headquarters in Rome, N.Y., the Information Directorate (RI) develops novel and affordable Command, Control, Communications, Computing, Cyber, and Intelligence (C4I) technologies. RI is recognized as a national asset and leader in C4I. Refining data into information and knowledge for decision makers to command and control forces is what we do. This knowledge gives our air, space, and cyberspace forces the competitive advantage needed to protect and defend this great nation. For more information about AFRL, visit http://www.wpafb.af.mil/afrl/ri.aspx

Source: IBM

The post US Air Force Research Lab Taps IBM to Build Brain-Inspired AI Supercomputing System appeared first on HPCwire.

PEARC17 Student Program; Diverse Cohort of 66

Sat, 06/24/2017 - 19:21

June 24 — The Practice & Experience in Advanced Research Computing Conference Student Program is pleased to announce that 66 students will attend PEARC17 in New Orleans, July 9-13, 2017.

“Since this is PEARC’s maiden voyage, we’re especially pleased at the number and diversity of students who qualified to participate,” said Student Program Chair Alana Romanella (Virginia Tech). “It was our goal to attract candidates from a variety of research domains and demographics that are traditionally under-represented in computational and data science degree programs and careers. Adding diversity to the national advanced research computing workforce pipeline is also a priority for organizations that supported PEARC student travel, including STEM-Trek, XSEDE, Google, Micron Foundation, Science Gateway Community Institute, and San Diego Supercomputer Center,” she added.

XSEDE15 Student Program Committee and Student Volunteer Leads. PEARC17 Student Program Chair Alana Romanella (front, left).

Among the 46 students who will receive travel support to attend PEARC17, 32 percent are female, and 50 percent are from demographics that are under-represented in research computing academic tracks and careers. Twenty students are entirely self-funded, and those with partial support received an average of $450 from their home institutions. Fifty-one are expected to participate in the general conference technical program and in targeted student program activities, including a presentation by Federal Bureau of Investigation (FBI) agents on cybersecurity and FBI careers; an intensive collaborative modeling and analysis challenge; a session on careers in modeling and large data analytics; a mentorship program; and volunteer opportunities to assist with conference activities.

ABOUT PEARC17

Being held in New Orleans July 9-13, PEARC17—Practice & Experience in Advanced Research Computing 2017—is for those engaged with the challenges of using and operating advanced research computing on campuses or for the academic and open science communities. This year’s inaugural conference offers a robust technical program, as well as networking, professional growth and multiple student participation opportunities.

Organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC); XSEDE; the Science Gateways Community Institute (SGCI); the Campus Research Computing Consortium (CaRC); the ACI-REF consortium; the Blue Waters project; ESnet; Open Science Grid; Compute Canada; the EGI Foundation; the Coalition for Academic Scientific Computation (CASC); and Internet2.

See http://pearc17.pearc.org/ for details, and follow PEARC on Twitter (@PEARC_17) and on Facebook (PEARChpc).

The post PEARC17 Student Program; Diverse Cohort of 66 appeared first on HPCwire.

FAU Students Win Highest Linpack Award at ISC17’s Student Cluster Competition

Fri, 06/23/2017 - 11:07

FRANKFURT, Germany, June 23, 2017 — GCS-sponsored team FAU Boyzz, six students of Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany, walked away from the Student Cluster Competition (SCC), held in the framework of the International Supercomputing Conference 2017 (ISC), with a highly coveted championship title. Team FAU Boyzz, featuring bachelor students of computational engineering, computer science, and medical engineering, captured the trophy in the hotly contested SCC High Performance Linpack (HPL) benchmark challenge. Their HPL score of 37.05 teraflops (1 teraflop = 1 trillion floating point operations per second), delivered on the students’ self-assembled Hewlett Packard Enterprise (HPE) cluster featuring 12 NVIDIA P100 GPUs, marks a new all-time high in the history of ISC’s SCC, nearly tripling the previous year’s SCC Linpack record at ISC.

The HPL benchmark traditionally enjoys special attention among the challenges the student teams face over the course of the gruelling three-day competition, an integral part of the annual ISC, the international counterpart of SC, the world’s largest HPC conference, held in the US.

“This competition is quite fun and quite challenging,” said Jannis Wolf, team captain of “FAU Boyzz.” “We have been preparing for this for a year, and we’ve met people that we otherwise never would have—our team had different disciplines coming together.”

Through the contest, teams of undergraduate students are exposed to a variety of application codes and asked to solve a range of problems related to HPC. Teams are judged on their clusters’ energy efficiency and power consumption, application performance and accuracy, and interviews with subject matter experts who assess their knowledge of their systems and applications.

“One of the best parts is the practical knowledge that comes from this process,” said team member Lukas Maron. Indeed, the teams are given real-world applications, and work closely with mentors who are already active in the HPC community. This type of experience is invaluable for students’ future career prospects, and also for exposing them to possible new avenues to explore.

“I think this is a great opportunity for students to get a feeling for what it is like at an HPC conference, to deal with a wide variety of applications, and to get to design a cluster from scratch,” said FAU researcher and team mentor Alexander Ditter. “Of course, it would not be possible for us to participate in these kinds of friendly competitions were there no support from the research community as well as industry. Thus I would like to express big thanks to our sponsors GCS and SPPEXA, who helped us financially, and to our hardware sponsors HPE and NVIDIA. We hope our success made them proud.”

The complete list of teams participating in the ISC Student Cluster Competition:
• Centre for High Performance Computing (South Africa)
• Nanyang Technological University (Singapore)
• EPCC University of Edinburgh (UK)
• Friedrich-Alexander University Erlangen–Nuremberg (Germany)
• University of Hamburg (Germany)
• National Energy Research Scientific Computing Center (USA)
• Universitat Politècnica De Catalunya Barcelona Tech (Spain)
• Purdue and Northeastern University (USA)
• The Boston Green Team (Boston University, Harvard University, Massachusetts Institute of Technology (MIT), University of Massachusetts – Boston (UMass Boston) (USA)
• Beihang University (China)
• Tsinghua University (China)

“The Gauss Centre for Supercomputing, which by definition is highly interested in drawing young talents’ attention toward high performance computing, is always open to supporting up-and-coming HPC talent, including in the framework of these kinds of events,” explains Claus Axel Müller, Managing Director of GCS. “We are well aware of the financial constraints students face when trying to participate in international competitions, especially when travel and related expenses are involved. Thus we are happy to be of help, and we would like to sincerely congratulate the FAU Boyzz on their great achievements at ISC.”

Team FAU Boyzz of Friedrich-Alexander-Universität Erlangen-Nürnberg, proud winner of the LINPACK benchmark challenge at ISC17’s SCC. From left: Phillip Suffa, team captain Jannis Wolf, Benedikt Oehlrich, Lukas Maron, Fabian Fleischer, Egon Araujo.
Copyright: GCS

About GCS

The Gauss Centre for Supercomputing (GCS) combines the three German national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany’s integrated Tier-0 supercomputing institution. Together, the three centres provide the largest, most powerful supercomputing infrastructure in all of Europe to serve a wide range of academic and industrial research activities in various disciplines. They also provide top-tier training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 24 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.

Source: GCS

The post FAU Students Win Highest Linpack Award at ISC17’s Student Cluster Competition appeared first on HPCwire.

Asetek Announces Order to Cool NVIDIA’s P100 GPU Accelerators

Fri, 06/23/2017 - 09:12

AALBORG, Denmark, June 23, 2017 — Asetek today announced a new order from Penguin Computing, Asetek’s longstanding OEM partner, for a new, undisclosed HPC (High Performance Computing) installation.

Asetek’s proprietary Direct-to-Chip (D2C) liquid cooling technology was selected by Penguin Computing to cool NVIDIA’s P100 GPU accelerators, the most advanced GPUs yet produced by NVIDIA.

“Utilizing Penguin’s Tundra ES platform with Asetek’s D2C liquid cooling technology will allow our government clients to push the boundaries of these GPUs for deep learning research,” said Ken Gudenrath, Director, Federal Division, Penguin Computing.

“Asetek has been chosen to cool the world’s most advanced GPUs. It is an important validation of our offering and we are pleased that our OEM partner, Penguin Computing, continues to select Asetek technology. This is an initial order, and we expect additional deliveries to follow,” said André Sloth Eriksen, CEO and founder of Asetek.

Asetek signed a global purchasing agreement with Penguin Computing in 2015.

This initial order is for 140 loops to be used with Asetek’s RackCDU Direct-to-Chip (D2C) liquid cooling solution and has a value of USD 40,000, with delivery in August 2017.

About Asetek

Asetek is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange (ASETEK). For more information, visit www.asetek.com

Source: Asetek

The post Asetek Announces Order to Cool NVIDIA’s P100 GPU Accelerators appeared first on HPCwire.

TACC Supercomputers Help Study Snake Evolution

Fri, 06/23/2017 - 09:06

AUSTIN, June 23, 2017 — Evolution takes eons, but it leaves marks on the genomes of organisms that can be detected with DNA sequencing and analysis.

As methods for studying and comparing genetic data improve, scientists are beginning to decode these marks to reconstruct the evolutionary history of species, as well as how variants of genes give rise to unique traits.

A research team at the University of Texas at Arlington led by assistant professor of biology Todd Castoe has been exploring the genomes of snakes and lizards to answer critical questions about these creatures’ evolutionary history. For instance, how did they develop venom? How do they regenerate their organs? And how do evolutionarily derived variations in genes lead to variations in how organisms look and function?

“Some of the most basic questions drive our research. Yet trying to understand the genetic explanations of such questions is surprisingly difficult considering most vertebrate genomes, including our own, are made up of literally billions of DNA bases that can determine how an organism looks and functions,” says Castoe. “Understanding these links between differences in DNA and differences in form and function is central to understanding biology and disease, and investigating these critical links requires massive computing power.”

To uncover new insights that link variation in DNA with variation in vertebrate form and function, Castoe’s group uses supercomputing and data analysis resources at the Texas Advanced Computing Center (TACC), one of the world’s leading centers for computational discovery.

Recently, they used TACC’s supercomputers to understand the mechanisms by which Burmese pythons regenerate their organs — including their heart, liver, kidney, and small intestines — after feeding.

Burmese pythons (as well as other snakes) massively downregulate their metabolic and physiological functions during extended periods of fasting. During this time their organs atrophy, saving energy. However, upon feeding, the size and function of these organs, along with their ability to generate energy, dramatically increase to accommodate digestion.

Within 48 hours of feeding, Burmese pythons can undergo up to a 44-fold increase in metabolic rate and the mass of their major organs can increase by 40 to 100 percent.

Writing in BMC Genomics in May 2017, the researchers described their efforts to compare gene expression in pythons that were fasting, one day post-feeding, and four days post-feeding. They sequenced gene expression in pythons in these three states and identified 1,700 genes whose expression differed significantly pre- and post-feeding. They then performed statistical analyses to identify the key drivers of organ regeneration across different types of tissues.
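
The paper details the team’s actual pipeline; purely as an illustration of what a per-gene significance test of this kind looks like, here is a minimal Python sketch on synthetic data. The sample counts, distributions, and thresholds below are hypothetical, and production analyses typically use dedicated tools such as DESeq2 or edgeR:

    # Illustrative only: per-gene differential expression on synthetic data,
    # not the pipeline used in the BMC Genomics study.
    import numpy as np
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_genes = 5000
    fasted = rng.lognormal(mean=5.0, sigma=1.0, size=(n_genes, 4))           # 4 fasted samples
    fed = fasted * rng.lognormal(mean=0.3, sigma=0.3, size=(n_genes, 4))     # 4 post-feeding samples

    # Per-gene t-test on log-scale expression, then false-discovery-rate correction.
    _, pvals = ttest_ind(np.log2(fasted), np.log2(fed), axis=1)
    reject, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"{reject.sum()} genes significant at 5% FDR")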

What they found was that a few sets of genes were influencing the wholesale change of pythons’ internal organ structure. Key proteins, produced and regulated by these important genes, activated a cascade of diverse, tissue-specific signals that led to regenerative organ growth.

Intriguingly, even mammalian cells have been shown to respond to serum produced by post-feeding pythons, suggesting that the signaling function is conserved across species and could one day be used to improve human health.

“We’re interested in understanding the molecular basis of this phenomenon to see what genes are regulated related to the feeding response,” says Daren Card, a doctoral student in Castoe’s lab and one of the authors of the study. “Our hope is that we can leverage our understanding of how snakes accomplish organ regeneration to one day help treat human diseases.”

Source: Aaron Dubrow, TACC

The post TACC Supercomputers Help Study Snake Evolution appeared first on HPCwire.

The Ultra-High Density AI Supercomputer AGX-2 Demonstrated at ISC17

Fri, 06/23/2017 - 01:01

Frankfurt, Germany, June 23, 2017 – Inspur, a leading HPC & AI total solutions provider, demonstrated the ultra-high density AI supercomputer AGX-2, which is dedicated to accelerating artificial intelligence computing. The latest product unveiled by Inspur and NVIDIA at GTC17 last month, the AGX-2 is the world’s first 2U server with eight GPUs and NVLink 2.0 enabled, designed to provide maximum throughput and superior application performance for science and engineering computing, taking AI computing to the next level.

The Ultra-High Density AI Supercomputer AGX-2

The AGX-2 supports up to eight NVIDIA® Tesla® P100 GPUs, offering either a PCIe interface or NVLink 2.0 for faster links between CPU and GPU, with peak throughput of up to 150GB/s. The AGX-2 provides strong I/O expansion capability, supporting 8x NVMe/SAS/SATA hot-swap drives and high-speed cluster interconnects with up to 4x 100Gbps EDR InfiniBand™ adapter cards. It supports both air cooling and on-chip liquid cooling to optimize power efficiency and performance.

According to LINPACK benchmark results, the AGX-2 achieves 29.33 TFLOPS, 2.47 times the result measured on the NF5288M4, an Inspur 2U server with four GPUs. As for real-world AI training performance, the AGX-2 delivers 1,165 images/s when training the GoogLeNet model with TensorFlow, 2.49 times faster than the NF5288M4 with four Tesla M40 GPUs.
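
For context, the quoted speedups imply these approximate baselines for the 4-GPU NF5288M4 (simple arithmetic from the figures above, not numbers published by Inspur):

    # Implied NF5288M4 baselines from Inspur's quoted AGX-2 results.
    print(29.33 / 2.47)  # ~11.9 TFLOPS LINPACK on the 4-GPU NF5288M4
    print(1165 / 2.49)   # ~468 images/s training GoogLeNet on 4x Tesla M40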

Leijun Hu, Vice President of Inspur Group

“NVIDIA is the world leader in visual computing and is reshaping the next era of AI computing,” said Leijun Hu, Vice President of Inspur Group. “Inspur is proud to partner with NVIDIA to announce the new and innovative AGX-2 GPU server, offering high computing density and faster, easier multi-GPU computing. The cooperation between the two companies also shows Inspur’s capability to develop high performance computing servers to propel AI, deep learning and advanced analytics, and we hope to provide even more energy-efficient computing solutions to serve customers around the world.”

Marc Hamilton, VP, Solutions Architecture and Engineering at NVIDIA

“Inspur, a long-time partner of NVIDIA, has rich R&D and practical experience in computing systems for deep learning,” said Marc Hamilton, VP, Solutions Architecture and Engineering at NVIDIA. “The launch of the AGX-2, an ultra-dense server that employs NVIDIA’s top-of-the-line Tesla P100 GPUs and high-speed NVLink interconnect technology, will comprehensively improve the performance and energy efficiency of AI and scientific and engineering computing, and it will provide both Chinese and global enterprises with leading high-performance computing capability.”

Inspur is a leading global server manufacturer, providing total computing solutions to the world’s leading AI and cloud computing companies, such as Baidu, Alibaba and Tencent. From building the world’s fastest supercomputer to being a leading server provider in China and across the world, Inspur is well positioned to be the fastest growing vendor of cloud data center solutions worldwide.

The post The Ultra-High Density AI Supercomputer AGX-2 Demonstrated at ISC17 appeared first on HPCwire.

How ‘Knights Mill’ Gets Its Deep Learning Flops

Thu, 06/22/2017 - 13:40

Intel, the subject of much speculation regarding the delayed, rewritten or potentially canceled “Aurora” contract (the Argonne Lab part of the CORAL “pre-exascale” award), parsed out additional information about the upcoming deep learning-targeted Knights Mill processor during ISC 2017 in Frankfurt this week. The Knights Mill will get at least a 2-4X speedup for deep learning workloads thanks to new instructions that provide optimizations for single, half and quarter-precision.

When Intel announced Knights Mill (KNM), the AI-focused Knights Landing (KNL) derivative, last August, the company didn’t offer much in the way of details. It would be self-hosted like Knights Landing, said Intel at the time, but would have AI-targeted design elements such as enhanced variable precision compute and high capacity memory. As Intel gets closer to its target production date, Q4 of this year, it is slowly pulling back the covers on Knights Mill. Attendees of HP-CAST were briefed ahead of ISC and a detailed presentation was delivered at the Inter-experimental Machine Learning (IML) Working Group workshop in March.

According to the IML presentation slides, the addition of Quad Fused Multiply Add (QFMA) instructions enables a 2x performance gain for Knights Mill over Knights Landing on 32-bit floating point operations. Variable-precision instructions enable higher throughput for machine learning tasks. With the Quad Virtual Neural Network Instruction (QVNNI), 16-bit INT operations are four times faster per clock than KNL FP32, claims Intel. And thanks to INT32 accumulated output, Intel says users can achieve “similar accuracy to single-precision.”

The new instruction sets also provide optimizations for 8-bit integer arithmetic, said Trish Damkroger, Intel VP and GM of the technical computing initiative, in a pre-show briefing with HPCwire. Our understanding is that this is accomplished within the 16-bit registers, where lanes are split to get three 8-bit operations and the fourth lane is used to do bit-mapping between registers.
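
Intel has not published the microarchitectural details, but the accuracy argument for low-precision multiplies with 32-bit accumulation can be sketched in NumPy — the quantization scheme and sizes below are hypothetical, chosen only to illustrate the principle:

    # Sketch (not Intel code): int16 multiplies accumulated in int32
    # reproduce an FP32 dot product on quantized values.
    import numpy as np

    rng = np.random.default_rng(1)
    scale = 1 / 128.0
    a = rng.integers(-128, 128, size=4096, dtype=np.int16)
    b = rng.integers(-128, 128, size=4096, dtype=np.int16)

    # Each product fits easily in 32 bits; 4096 * 127 * 127 < 2**31,
    # so the int32 accumulator cannot overflow here.
    acc_int32 = np.sum(a.astype(np.int32) * b.astype(np.int32), dtype=np.int32)

    # FP32 reference on the dequantized values.
    ref_fp32 = np.dot(a.astype(np.float32) * scale, b.astype(np.float32) * scale)
    print(acc_int32 * scale * scale, ref_fp32)  # agree to within FP32 rounding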

There are also frequency, power and efficiency enhancements that contribute to the performance improvement of Knights Mill, but the biggest change is the deep learning optimized instructions.

“Knights Mill uses the same overarching architecture and package as Knights Landing. Both CPUs are a second-generation Intel Xeon Phi and use the same platform,” writes Intel’s Barry Davis in a blog post.

Customers will have a choice to make based on their precision requirements.

“Knights Mill uses different instruction sets to improve lower-precision performance at the expense of the double-precision performance that is important for many traditional HPC workloads,” Davis continues, addressing the differentiation. “This means Knights Mill is targeted at deep learning workloads, while Knights Landing is more suitable for HPC workloads and other workloads that require higher precision.”

Here we see Intel differentiating its products for HPC versus AI, and the Nervana-based Lake Crest neural net processor also follows that strategy. Compare this with Nvidia’s Volta: despite being highly deep learning-optimized with new Tensor cores, the Tesla V100 is also a double-precision monster offering 7.5 FP64 teraflops.

Nvidia’s strategy is one GPU to rule them all, something VP of accelerated computing Ian Buck was clear about when we spoke this week.

“Our goal is to build one GPU for HPC, AI and graphics,” he shared. “That’s what we achieved in Volta. In the past we did different products for different segments, FP32-optimized products like the P40, double-precision with the P100. In Volta, we were able to combine all that, so we have one processor that’s leading performance for double-precision, single-precision and AI, all in one. Folks who are in general HPC not only get leading HPC double-precision performance, but they also get the benefits of AI in the same processor.”

So which strategy will ultimately win the hearts, minds and pocketbooks of end users and their funding bodies? In addition to its HPC success, Nvidia has captured the lion’s share of deep learning workloads, but the buzz over Google’s TPUs, activity around ASICs and FPGAs, and the proliferation of AI-silicon efforts, like Intel’s Lake Crest and the Knights Crest that will follow, reflect the huge groundswell towards application-optimized processing.

The post How ‘Knights Mill’ Gets Its Deep Learning Flops appeared first on HPCwire.
