HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

UCSD Web-based Tool Tracking CA Wildfires Generates 1.5M Views

Mon, 10/16/2017 - 13:31

Tracking the wildfires raging in northern CA is an unpleasant but necessary part of guiding efforts to fight the fires and safely evacuate affected residents. One such tool is Firemap, a web-based tool developed by UC San Diego to perform data-driven predictive modeling and real-time tracking of such fires. Firemap has attracted about 115,000 unique visitors and about 1.5 million views during the latest outbreak.

“Views of Firemap have dramatically – and understandably – increased in recent days as firefighters and other first responders do their best to contain this unusually high concentration and rapid spread of wildfires that has already caused enormous damage and caused the loss of more than 30 lives,” said Ilkay Altintas, chief data science officer at the San Diego Supercomputer Center (SDSC) at UC San Diego and principal investigator for WIFIRE. “It is our hope that Firemap continues to help both first responders and residents to better cope in this time of extreme crisis.”

Firemap was developed as part of UCSD’s ‘WIFIRE’ collaboration and enables a ‘what-if’ analysis of fire scenarios ahead of time as well as real-time fire forecasting. “The overall goal of WIFIRE, the result of a multi-year National Science Foundation (NSF) grant, is to make data and predictive models readily available so that the direction and rate of fire spread can be known as early as possible to assist in rescue and containment efforts,” notes an article posted on the San Diego Supercomputer Center web site.

As of early Friday, at least 31 people had died and some 3,500 homes and businesses had been destroyed by the blazes, which could become the deadliest and most destructive in California history, according to the Associated Press. More than 8,000 firefighters were still battling the blazes, which started October 8 and soon leveled entire neighborhoods in parts of Sonoma and Napa counties. As some 20 wildfires raged for a fifth day and many remained out of control, the flames spanned more than 300 square miles (777 square kilometers), an area equivalent to the size of New York City’s five boroughs, said the AP.

This image shows another run of simulations from this week’s wildfires in Sonoma and Napa, CA using the WIFIRE project’s Firemap tool. Source: John Graham, Qualcomm Institute/WIFIRE.

Firemap has already attracted the interest of a number of fire departments. Since late 2015, the WIFIRE team has partnered with the Los Angeles Fire Department (LAFD) on a pilot study to use WIFIRE’s new Firemap tool in real-time fire situations. The WIFIRE team and LAFD tested the operational aspects of the technology in 2016 while monitoring the Sand, Blue Cut, and Soberanes fires, which burned more than 200,000 acres combined in California last year; the comparison between the fires’ actual daily progression and WIFIRE’s real-time predictions was extremely close.
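The article does not describe how WIFIRE scored its predictions against the fires’ actual progression, but a common way to quantify how close a spread forecast is to observation is to compare burned areas on a grid. The sketch below is illustrative only — the grid cells and the function are invented for this example, not WIFIRE’s actual validation method:

```python
def burn_overlap(predicted, observed):
    """Jaccard index (intersection over union) of two burn masks.

    Each mask is a set of (row, col) grid cells marked as burned.
    Returns a value in [0, 1]; 1.0 means a perfect match.
    """
    if not predicted and not observed:
        return 1.0
    return len(predicted & observed) / len(predicted | observed)

# Toy example: the forecast and the observed perimeter share 3 of 5 cells.
predicted = {(0, 0), (0, 1), (1, 0), (1, 1)}
observed = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(burn_overlap(predicted, observed))  # 0.6
```

A score near 1.0 on daily perimeters would correspond to the “extremely close” agreement the article reports.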

WIFIRE’s Firemap data resource also provides easy access to information on past fires, past and current weather conditions and forecasts, satellite detections as soon as they are received, HPWREN camera images, and information on vegetation and landscapes from a variety of sources. These datasets were previously scattered across different websites; viewers can now see them all in one place, access them programmatically via web services, and use them for planning fire response and managing natural resources well ahead of time.
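Fire-data web services of this kind typically return features such as satellite hotspot detections in a geographic interchange format like GeoJSON. The sketch below parses such a payload with Python’s standard library; the payload and field names are invented for illustration and are not Firemap’s actual schema:

```python
import json

# Hypothetical GeoJSON payload of the sort a fire-detection service might
# return; the "confidence" property and coordinates are made up.
payload = """
{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [-122.7, 38.4]},
     "properties": {"source": "satellite", "confidence": 0.91}},
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [-122.5, 38.5]},
     "properties": {"source": "satellite", "confidence": 0.42}}
  ]
}
"""

data = json.loads(payload)
# Keep only high-confidence detections, e.g. for display on a map layer.
hotspots = [f["geometry"]["coordinates"]
            for f in data["features"]
            if f["properties"]["confidence"] >= 0.8]
print(hotspots)  # [[-122.7, 38.4]]
```

Exposing each dataset this way is what lets a tool like Firemap overlay detections, weather, and vegetation layers in one view.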

The WIFIRE project includes researchers from SDSC, as well as the university’s California Institute for Telecommunications and Information Technology’s (Calit2) Qualcomm Institute, and the Mechanical and Aerospace Engineering (MAE) department at the Jacobs School of Engineering. The University of Maryland’s Department of Fire Protection Engineering is also a project participant.

Link to UCSD article: http://www.sdsc.edu/News%20Items/PR20171013_wifire.html

Caption for feature image: WIFIRE model of this week’s Sonoma and Napa Valley, CA wildfires over a six-hour timespan, showing their fast progression. Each color is half an hour of activity.  Source: Daniel Crawl, SDSC/WIFIRE

Source: UCSD

The post UCSD Web-based Tool Tracking CA Wildfires Generates 1.5M Views appeared first on HPCwire.

ISC High Performance 2018 Now Open for Submissions

Mon, 10/16/2017 - 08:15

Oct. 16, 2017 —  ISC has announced that ISC High Performance is now open for tutorial and workshop proposal submissions. If you possess knowledge and skills in a particular field of high performance computing (HPC) and enjoy sharing them, the ISC 2018 Tutorials Committee looks forward to hearing from you. Along the same lines, the ISC 2018 Workshops Committee also calls on HPC community members to send in their proposals for workshops.

Proposals for workshops that will issue their own call for papers and run a peer-review process must be submitted by December 7, 2017. Proposals for regular workshops without a call for papers must be submitted by February 21, 2018. Tutorial proposals will be accepted through February 13, 2018.

The 2018 conference will be held at Forum Messe Frankfurt from June 24 – 28. The ISC tutorials will be held on Sunday, June 24, 2018 as either half-day or full-day sessions. The ISC workshops will take place on Thursday, June 28, and will also be either half-day or full-day.

The tutorials attract close to 300 attendees, while the workshops are attended by over 600 people. Overall attendance for the 2018 conference is expected to be 3,500.

ISC 2018 Tutorials

The ISC tutorials are interactive educational courses focusing on key topics in HPC, networking, storage and data science. Instructors are encouraged to give attendees a comprehensive introduction to the topic, as well as provide a closer look at specific problems. Tutorials are encouraged to include a “hands-on” component to allow attendees to practice prepared materials.

Submitted tutorial proposals will be reviewed by the ISC 2018 Tutorials Committee (http://www.isc-hpc.com/isc-committees.html), which is headed by Tutorials Chair Dr. Rosa Badia of the Barcelona Supercomputing Center. Sandra Wienke, from RWTH Aachen University, is the committee’s Deputy Chair.

The 2018 tutorials are intended to cover all areas of interest as listed in the ISC 2018 call for research papers (http://www.isc-hpc.com/research-papers-2018.html). Tutorials on topics related to emerging technologies are encouraged. The committee also encourages tutorials of broad applicability over those that focus solely on research in a limited domain or a particular group. Practical tutorials will be preferred to completely theoretical ones, and we urge organizers to incorporate hands-on sessions where appropriate.

Please visit the tutorial webpage (http://www.isc-hpc.com/tutorials-2018.html) for the submission and review process, and terms and conditions, as well as travel funding.

ISC 2018 Workshops

The goal of the workshops is to provide attendees with a platform for presentations, discussions and interactions in a particular subject area. The Workshop Committee (http://www.isc-hpc.com/isc-committees.html) invites workshop proposals on topics related to all aspects of research, development, and application of large-scale, high-performance experimental and commercial systems. Topics include HPC computer architecture and hardware, programming models, system software, applications, deep learning, artificial intelligence, solutions for heterogeneity, reliability, and power efficiency, as well as how those areas relate to big data and cloud computing.

Submitted workshop proposals will be reviewed by the ISC 2018 Workshops Committee, which includes John Shalf of Lawrence Berkeley National Laboratory as the Workshop Chair; Sadaf Alam, from the Swiss National Supercomputing Center (CSCS), as the Deputy Chair; Dr. Rio Yokota, of the Tokyo Institute of Technology, as the Workshop Proceedings Chair; and Dr. Michèle Weiland, from EPCC – The University of Edinburgh, as the Workshop Proceedings Deputy Chair.

Publication

The committee will organize joint workshop proceedings to be published with Springer, similar to the ISC 2018 research papers proceedings. These post-conference proceedings give workshop organizers the opportunity to tentatively accept papers for presentation at ISC and to make final acceptance decisions during or shortly after the conference, based on potentially revised papers.

The proceedings will be organized in a light-weight fashion and published after the conference, but the committee will collect preliminary versions of the papers and make them available during the workshop to the workshop attendees. Workshop organizers are also free to publish the paper proceedings on their own.

For submission and review process, and terms and conditions, please visit the workshops web page.

About ISC High Performance

First held in 1986, ISC High Performance is a conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments. ISC High Performance attracts engineers, IT specialists, system developers, vendors, scientists, researchers, students, journalists, and other members of the global HPC community. The exhibition draws decision-makers from automotive, finance, defense, aeronautical, gas & oil, banking, pharmaceutical and other industries, as well as those providing hardware, software and services for the HPC community. Attendees will learn firsthand about new products and applications, in addition to the latest technological advances in the HPC industry.

Source: ISC High Performance


XSEDE Announces Collaboration Opportunities for NSF “Track 1” Systems Proposals

Fri, 10/13/2017 - 15:17

URBANA, Ill., Oct. 13, 2017 — The Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation (NSF)-funded network of open and accessible cyberinfrastructure resources and autonomous Service Providers, today announced collaboration opportunities and engagement guidelines that may be leveraged by institutions developing proposals for the NSF’s Towards a Leadership-Class Computing Facility solicitation (NSF17-558).

XSEDE has released a list of items, efforts, and activities on which it is willing to collaborate with all potential proposers. Proposers may request a letter of commitment from XSEDE PI John Towns covering the collaborations included in their proposal, with the understanding that they select only from the menu of options provided. Collaboration options not listed in the menu may be requested for XSEDE’s consideration. If those requests are approved, XSEDE must offer any and all options to all potential proposers in order to avoid conflicts of interest. Through this approach, XSEDE will provide opportunities for collaboration while remaining blind to the specific plans of LCCF proposers.

“The vast collaborative network provided by XSEDE is a vital component to advancing the science enterprise for researchers across the nation and this includes supporting efforts to develop the next leadership-class system,” said John Towns, XSEDE principal investigator and Executive Director for Science and Technology at NCSA. “As such, maintaining an open and transparent collaboration process is not only important, but necessary to ensure that XSEDE resources are equally accessible to all parties developing LCCF proposals.”

Potential collaboration areas:

  • Education, Training, Outreach, and Community Engagement
    • Collaboration with researchers, educators and students to integrate XSEDE resources, training, and campus engagement into curricula, including curricula that engage underrepresented communities.
  • Extended Collaborative Support Services (ECSS)
    • Collaboration with researchers and ECSS consultants with a wide range of skills, including optimizing code, integrating XSEDE resources into science gateways, delivering training, and working with new communities to enhance their use of proposed resources.
  • Resource Allocation Services (RAS)
    • Collaboration with XSEDE’s RAS, which helps to manage allocations, track usage, and allow usage via XSEDE’s Single Sign-On Hub.
  • Infrastructure Services and Integration Support
    • Collaboration with XSEDE Operations and Cyberinfrastructure Integration, which focuses on cybersecurity, networking, data transfer, enterprise services, and providing an operations center for prompt frontline user support.

Potential proposers should direct all communications with XSEDE related to the LCCF proposal to XSEDE PI John Towns (jtowns@ncsa.illinois.edu). For full details on XSEDE’s LCCF collaboration opportunities and engagement guidelines, visit www.xsede.org/about/lccf-options.

About XSEDE

The Extreme Science and Engineering Discovery Environment (XSEDE) is the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. It is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. XSEDE accelerates scientific discovery by enhancing the productivity of researchers, engineers, and scholars by deepening and extending the use of XSEDE’s ecosystem of advanced digital services and by advancing and sustaining the XSEDE advanced digital infrastructure. XSEDE is a five-year, $110-million project and is supported by the National Science Foundation.

Source: XSEDE


Exascale Imperative: New Movie from HPE Makes a Compelling Case

Fri, 10/13/2017 - 13:32

Why is pursuing exascale computing so important? In a new video – Hewlett Packard Enterprise: Eighteen Zeros – four HPE executives, a prominent national lab HPC researcher, and HPCwire managing editor Tiffany Trader explain in layman’s terms why and how exascale computing will drive the next big advances in scientific discovery and applied technologies such as precision medicine.

HPE is one of six leading technology vendors selected by the Department of Energy to spearhead the U.S. Exascale Computing Project. The short movie does a nice job explaining exascale’s potential with a few concrete examples – think the Square Kilometre Array (SKA) project and cancer-fighting research. It also touches on some of the obstacles on the path to exascale, such as power; an exascale machine built by aggregating today’s biggest systems would require roughly 650 megawatts of power.
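The 650-megawatt figure implies an energy-efficiency number worth spelling out. A rough back-of-the-envelope check (our own arithmetic, not HPE’s):

```python
# An exascale machine performs 1e18 floating-point operations per second.
# At the article's quoted 650 MW, the implied efficiency of aggregated
# current-generation systems would be:
exaflops = 1e18   # operations per second at exascale
power_w = 650e6   # 650 megawatts

efficiency_gflops_per_w = exaflops / power_w / 1e9
print(f"{efficiency_gflops_per_w:.2f} GFLOPS/W")  # prints "1.54 GFLOPS/W"
```

Commonly cited exascale power envelopes are a few tens of megawatts, so reaching exascale practically requires improving energy efficiency by more than an order of magnitude over the aggregated-systems baseline above.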

While the ideas presented are deeply ingrained in the HPC and science communities, they are often less familiar to general audiences. Speakers include HPE execs Nicolas Dube (chief strategist), Paolo Faraboschi (fellow), Bill Mannel (VP and general manager), and Mike Vildibill (VP of the advanced technology group), along with Donna Crawford, associate director for computation, emeritus, at Lawrence Livermore National Laboratory, and Trader.

HPE’s vision for memory-driven computing is also summarized, with SKA as a well-chosen exemplar of the challenge: SKA is expected to generate an exabyte of data per day. The video runs a little over six minutes.


Bulgaria Signs European Declaration on High-Performance Computing

Fri, 10/13/2017 - 10:29

Oct. 13, 2017 — The European declaration on high-performance computing (HPC) was signed today in Sofia by Bulgarian Minister of Education and Science Krasimir Valchev, in the presence of Commissioner Gabriel. Bulgaria is the tenth Member State to join the European effort to build the next generation of computing and data infrastructures.

Mariya Gabriel, European Commissioner for Digital Economy and Society, said: “I am very pleased to welcome Bulgaria in this bold European initiative. High-performance computing is pervasive in our daily lives: from personalised medicine to weather forecasting and cybersecurity, and to the simulation and design of cars and planes. Access to HPC resources is essential for public and private users. As no Member State has the capacity to develop such computing power quickly and on its own, strong cooperation and support at European level is a must.”

Krasimir Valchev, Minister of Education and Science, added: “According to the Bulgarian National Strategy for Research Development 2017-2030, Bulgaria should in a short term modernize its research system to ensure that the needs of the Bulgarian scientific community, the Bulgarian industry and the Bulgarian citizens are met.  By signing this Declaration, Bulgaria joins the club of the Member States engaged in digitizing Europe with the help of high-performance computing power. This is a step in the right direction for our country, which will help us to further develop our research, innovation and industrial potential.”

The EuroHPC declaration was launched and signed by seven Member States in Rome in March 2017 (see the press statement, speech and blog post by Vice-President Ansip). Two other countries signed it in June and July 2017. The objective of the declaration is to establish a joint cooperation framework among the signatory countries to acquire and deploy an integrated supercomputing infrastructure capable of at least 10^18 calculations per second (so-called exascale computers). The countries have agreed to work together to develop a world-class HPC ecosystem based on European technology and relying on energy-efficient computing via low-power chips. The aim is to have EU exascale supercomputers in the global top three by 2022.

Top class HPC infrastructure and services will then be available to support a wide range of users: scientific communities, large industry and SMEs, as well as the public sector. The HPC initiative will also support the European Open Science Cloud (EOSC) and will allow millions of our researchers to share and analyse data in a trusted environment across technologies, disciplines and borders. Ultimately, such European world-class HPC infrastructure will boost scientific leadership, industry competitiveness and EU’s innovation capacity to meet societal and scientific challenges.

Next steps

The European Commission, together with the countries that have signed the declaration, is preparing a roadmap, due by the end of 2017, with implementation milestones for deploying the European exascale supercomputing infrastructure.

Switzerland is expected to be the next country to join the European effort on 20 October 2017. All other Member States are encouraged to join EuroHPC and work together, and with the European Commission, in this initiative.

Source: European Commission


Internet2 Announces 2017 Inclusivity Initiative Scholarship Recipients

Fri, 10/13/2017 - 10:23

WASHINGTON, Oct. 13, 2017 — Internet2 announced six recipients of the Inclusivity Initiative Scholarship ahead of its annual technical meeting, the Internet2 Technology Exchange, taking place next week in San Francisco from October 15-18. The scholarship recognizes talented individuals seeking opportunities to gain hands-on technical experience, and spotlights women in the field of information technology and their efforts to use technology to serve research and education at their individual institutions.

This year’s winners are:

  • Gabriella Perez, research technology compliance specialist, University of Iowa
  • Forough Ghahramani, associate director, Rutgers Discovery Informatics Institute
  • Jessica Shaffer, network support engineer, Georgia Institute of Technology
  • Julia Staats, associate core engineer, CENIC
  • Kayla Pierson, designer and user experience specialist, University of Montana
  • Sarvani Chadalapaka, HPC administrator, University of California, Merced

The scholarship covers travel expenses, hotel accommodation, and conference registration for the 2017 Technology Exchange meeting. Funding for this year’s award is made possible by Cirrus Identity, Cisco Systems, Duo Security, Internet2, and Fortinet.

“The main goal of the Inclusivity Initiative Scholarship is to increase the meaningful participation of people who are underrepresented in the information technology field, from both the national and global research and education communities, at conferences and technical meetings,” said Ana Hunsinger, Internet2’s vice president of community engagement. “All the winners were nominated by a senior administrator at their home institution who believes in the importance of supporting inclusivity and mentoring colleagues who are just starting their career or thinking about ways to grow in the profession. Ensuring their attendance at technical conferences gives them the opportunity to engage with the larger community on the shared implementation challenges and best common practices.”

The Technology Exchange convenes over 650 attendees from more than 250 institutions, 17 countries, and 46 states, including network engineers, technologists, architects, scientists, operators, and administrators in the fields of advanced networking, trust and identity, information security, applications for research, and web-scale computing.

Perez, Ghahramani, Shaffer, Staats, Pierson, and Chadalapaka will be recognized during the keynote address on Monday, October 16. A full list of the 2017 Internet2 Inclusivity Initiative Scholarship winners, along with their bios, appears below:

Gabriella Perez

Gabriella Perez joined the ITS-Research Services at the University of Iowa as research technology compliance specialist in May 2017. While her position continues to develop and evolve, Gabriella works as a campus liaison – identifying and marketing new IT services, assisting with data management planning, working with the IT Security Office to ensure research data is kept safe, serving on the Institutional Review Board as a technology advocate, and helping researchers find compliant technology solutions. More recently, she’s been focusing her efforts on creating IT requirements for export-controlled data as well as aiding in the implementation of the NIST 800-171 security requirements for several institutional software services. After Gabriella graduated from the University of Iowa, she worked at Epic and Transamerica before returning to the Iowa campus.

Forough Ghahramani

Forough Ghahramani is associate director of the Rutgers Discovery Informatics Institute (RDI2). Prior to her role in academia, she was a principal in life sciences computing and held senior engineering and management positions at Hewlett Packard. As a leader in higher education, a technologist, and an entrepreneur, Forough has diversified career experience spanning higher education management, program development, project management, strategic alliances, software engineering, bioinformatics, and organization-wide information technology planning.

Forough has a doctorate in higher education management from University of Pennsylvania, an MBA in marketing from DePaul University, master’s degree in computer science from Villanova University, and bachelor’s degree in mathematics with a minor in biology from Pennsylvania State University. As a female trailblazer in STEM innovation and entrepreneurship, Forough is consulted on the state, national, and international levels in various capacities, including workforce development strategies and entrepreneurship programs for women. Forough is also the chair of Women Impacting Public Policy Education Foundation Board, chair of Institute of Electrical and Electronics Engineers’ Women in Engineering Princeton, chair of Association of University Technology Managers’ Women Inventors Metrics Committee, member of Society of Women Engineers Public Policy, and member of Women’s Center for Entrepreneurship Corporation.

Kayla Pierson

Kayla is a designer and user experience specialist with the web team at the University of Montana, assisting with design, development, and support. She is the university’s design lead focused on brand continuity and user experience for web presence and mobile applications. She plays a large role in continuing to grow UM’s content management system and developing a modular framework in which university websites are built. Additionally, Kayla heads up the training and support effort by supervising student employees, and teaching short training courses. She has technical expertise in Velocity, Less, Sass, Adobe Creative Cloud, and InVision, and has some experience developing applications in the university’s ecosystem using Laravel, jQuery and RequireJS along with course work in Python, Java, and R. Kayla holds a bachelor’s degree in media arts and is currently pursuing a master’s in computer science.

Inclusivity Initiative Award in recognition of Carrie Regenstein recipient:

Sarvani Chadalapaka

Sarvani Chadalapaka is the high performance computing administrator with the Office of Information Technology at the University of California, Merced. In her current role since 2016, she enables researchers affiliated with UC Merced to use campus-wide and regional HPC resources, and manages the hardware and software of the campus cluster, the MERCED Cluster (NSF Award #1429783). As an XSEDE Campus Champion, she participates in Campus Champion information-sharing sessions and acts as a bridge between the local campus and XSEDE resources. Every week, Sarvani facilitates a hands-on HPC clinic where users can get one-on-one help and engage in peer mentoring. Through her efforts, the number of HPC users on campus has increased by over 300%, and the MERCED cluster has more than doubled its cores while also expanding the software it supports. Sarvani holds a master’s degree in electrical engineering from the University of Texas-Arlington and a bachelor’s degree in science from India. She is passionate about women in STEM and participates in numerous campus community activities, such as the Polynesian dance troupe.

Women in IT Networking at SC (WINS) recipients:

Jessica Shaffer

Jessica Shaffer began working for the Georgia Institute of Technology (Georgia Tech) in 2011 as a co-op student with the Office of Information Technology’s (OIT) network services team. After graduating from Georgia Tech, she accepted a full-time offer from OIT and now serves as a network support engineer supervisor. In addition to configuring, installing, and troubleshooting campus network devices, she manages two full-time engineers and coordinates the Network Services Co-operative Program. Jessica was selected to participate in the Women in IT Networking at SC (WINS) program for the SC16 high performance computing conference, and she feels very fortunate for the continued mentorship and support she has received from her home institution, the WINS committee, and the SC16 team to expand her networking experience and encourage gender diversity discussions among IT organizations.

Julia Staats

Julia is an associate core engineer at CENIC who is customer-focused, detail-oriented, and highly productive. She joined CENIC as a network engineer in operations, working in CENIC’s 24×7 network operations center. By demonstrating initiative in developing her technical skills, she earned a promotion to the core team. Her day-to-day responsibilities include, but are not limited to, handling circuit deployment, establishing Layer 1/2/3 connectivity for CENIC and Pacific Wave customers and peers, implementing backbone upgrades, and providing technical consultation services to customers. Last year Julia was selected to participate in the 2016 Women in IT Networking at SC (WINS) program, and joined the SCinet DevOps team at SC16 to support the Supercomputing conference. Julia is passionate about technology, innovation and education. She grew up in Beijing and holds a bachelor’s degree in economics and an MBA.

Featured diversity and inclusivity sessions at this year’s Technology Exchange include:

A panel will introduce the topic of work-life integration and address how the panelists’ organizations have approached the work-life integration challenge. What policies and practices does an organization engage in to create a culture that either encourages or discourages work-life integration? Retention and job satisfaction can be closely tied to the way an organization creates a work-life integration culture. The panel will describe what has worked in their organizations, what needed to be modified, and why, so that attendees can consider what is possible in their own organizations.

CODE, a provocative documentary film by Robin Hauser Reynolds, exposes the dearth of American female and minority software engineers and explores the reasons for this gender gap. The film raises the question: what would society gain from having more women and minorities code? Following the screening, one of the contributors to the film, Avis Yates (NCWIT), will be available for questions and discussion on this important topic.

The 2017 Internet2 Technology Exchange is co-hosted by the Corporation For Education Network Initiatives In California (CENIC), Energy Sciences Network (ESnet), and the University of California, Berkeley. For more information on the event program or to register to attend, visit https://meetings.internet2.edu/2017-technology-exchange.

About Internet2

Internet2 is a non-profit, member-driven advanced technology community founded by the nation’s leading higher education institutions in 1996. Internet2 serves 324 U.S. universities, 59 government agencies, and 43 regional and state education networks, and through them supports more than 94,000 community anchor institutions. It also serves over 900 InCommon participants, 78 leading corporations working with our community, and 61 national research and education network partners that represent more than 100 countries.

Source: Internet2


New Method to Detect Spin Current in Quantum Materials Unlocks Possibilities

Fri, 10/13/2017 - 10:17

OAK RIDGE, Tenn., Oct. 13, 2017 – A new method that precisely measures the mysterious behavior and magnetic properties of electrons flowing across the surface of quantum materials could open a path to next-generation electronics.

Found at the heart of electronic devices, silicon-based semiconductors rely on the controlled electrical current responsible for powering electronics. These semiconductors can only access the electrons’ charge for energy, but electrons do more than carry a charge. They also have intrinsic angular momentum known as spin, which is a feature of quantum materials that, while elusive, can be manipulated to enhance electronic devices.

A team of scientists, led by An-Ping Li at the Department of Energy’s Oak Ridge National Laboratory, has developed an innovative microscopy technique to detect the spin of electrons in topological insulators, a new kind of quantum material that could be used in applications such as spintronics and quantum computing.

“The spin current, namely the total angular momentum of moving electrons, is a behavior in topological insulators that could not be accounted for until a spin-sensitive method was developed,” Li said.

Electronic devices continue to evolve rapidly and require more power packed into smaller components. This prompts the need for less costly, energy-efficient alternatives to charge-based electronics. A topological insulator carries electrical current along its surface, while deeper within the bulk material, it acts as an insulator. Electrons flowing across the material’s surface exhibit uniform spin directions, unlike in a semiconductor where electrons spin in varying directions.

“Charge-based devices are less energy efficient than spin-based ones,” said Li. “For spins to be useful, we need to control both their flow and orientation.”

To detect and better understand this quirky particle behavior, the team needed a method sensitive to the spin of moving electrons. Their new microscopy approach was tested on a single crystal of Bi2Te2Se, a material containing bismuth, tellurium and selenium. It measured how much voltage was produced along the material’s surface as the flow of electrons moved between specific points, while sensing the voltage for each electron’s spin.

The new method builds on a four-probe scanning tunneling microscope—an instrument that can pinpoint a material’s atomic activity with four movable probing tips—by adding a component to observe the spin behavior of electrons on the material’s surface. This approach not only adds spin sensitivity; it also confines the current to a small area on the surface, which helps keep electrons from escaping beneath the surface, providing high-resolution results.

“We successfully detected a voltage generated by the electron’s spin current,” said Li, who coauthored a paper published by Physical Review Letters that explains the method. “This work provides clear evidence of the spin current in topological insulators and opens a new avenue to study other quantum materials that could ultimately be applied in next-generation electronic devices.”

Additional coauthors of the study titled, “Detection of the Spin-Chemical Potential in Topological Insulators Using Spin-Polarized Four-Probe STM,” include ORNL’s Saban Hus, Giang Nguyen, Wonhee Ko and Arthur Baddorf; X.-G. Zhang of the University of Florida; and Yong Chen of Purdue University.

This research was conducted at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility. The development of the novel microscopy method was funded by ORNL’s Laboratory Directed Research and Development program.

ORNL is managed by UT-Battelle for DOE’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

Source: ORNL

The post New Method to Detect Spin Current in Quantum Materials Unlocks Possibilities appeared first on HPCwire.

SC17 Video Sheds Light on How Supercomputers Are Unraveling the Mystery of the Human Brain

Fri, 10/13/2017 - 10:12

Oct. 13, 2017 — A video produced by the SC17 conference highlights how the massive European-based Human Brain Project (HBP), comprising a veritable orchestra of scientists, collaborates to deliver the most exquisitely detailed human brain models ever created.

According to experts, detailed and accurate brain models are a game-changer for neuroscience and related research. What if instead of behavioral research, scientists could use the models to study cognition, simulate and study disease, and learn from the ways of the brain—everything from how it uses energy to how memory, representation and even consciousness itself are constructed?

Through the HBP, teams of scientists, doctors and researchers have partnered with supercomputing and data scientists to pursue these and other ambitious and lofty goals. Ultimately, this team aims to make major advances in fighting brain cancer, Alzheimer’s, CTE and other neurological disorders.

“The human brain is organized on many different levels, from molecules to cells to small circuits and large circuits; and to really understand how all of these different levels are related, and also to understand what makes us human, is one of the biggest challenges of the 21st century,” said Prof. Dr. Katrin Amunts, Director of the Institute of Neuroscience and Medicine (INM) of Research Centre Jülich, Germany and Scientific Research Director of the Human Brain Project.

The project kicked off in 2013 and is scheduled to continue for a decade. “We have partners from about 24 European countries—people who come from physics, medicine, psychology,” Amunts said. Using the brains of donors embedded in paraffin, the researchers slice the brains into microscopic layers, recording the brains in great detail. “We have to create an ‘atlas’ (of the brain) that has a very large size in terms of bits and bytes,” Amunts said.

And that’s where her supercomputing colleagues come in.

“The Human Brain Project is a new approach of using supercomputers to understand the brain by modeling it in completely different ways,” said Prof. Dr. Dirk Pleiter, Research Group Leader at the Jülich Supercomputing Centre and Professor of Theoretical Physics at the Regensburg University. “We have to be able to store vast amounts of data for very fast access; to analyze the data, we have to allow for very quick visualizations with very quick turnarounds. We have to be able to schedule jobs in different ways than we did before, and then the supercomputer becomes more of a useful instrument than it was in the past,” said Pleiter. “Step by step, we are getting there,” he said.

Source: Brian Ban, SC

The post SC17 Video Sheds Light on How Supercomputers Are Unraveling the Mystery of the Human Brain appeared first on HPCwire.

International Team Reconstructs Nanoscale Virus Features from Correlations of Scattered X-rays

Fri, 10/13/2017 - 10:07

Oct. 13, 2017 — As part of an international research team, Jeff Donatelli, Peter Zwart and Kanupriya Pande of the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory (Berkeley Lab) contributed key algorithms which helped achieve a goal first proposed more than 40 years ago – using angular correlations of X-ray snapshots from non-crystalline molecules to determine the 3D structure of important biological objects. This technique has the potential to allow scientists to shed light on biological structure and dynamics that were previously impossible to observe with traditional X-ray methods.

The breakthrough resulted from a single-particle diffraction experiment conducted at the Department of Energy’s (DOE’s) Linac Coherent Light Source (LCLS) by the Single-Particle Initiative organized by the SLAC National Accelerator Laboratory.  As part of this initiative, the CAMERA team combined efforts with Ruslan Kurta, a physicist at the European XFEL (X-ray free electron laser) facility in Germany, to analyze angular correlations from the experimental data and use CAMERA’s multi-tiered iterative phasing (M-TIP) algorithm to perform the first successful 3D virus reconstructions from experimental correlations. The results were described in a paper published Oct. 12 in Physical Review Letters.

Reconstructions of a rice dwarf virus (top) and a PR772 bacteriophage (bottom) from experimental correlation data using M-TIP. The images on the right show asymmetries in the internal genetic material for each virus reconstruction. (Image Credit: Jeff Donatelli, Berkeley Lab)

“For the past 40 years, this was considered a problem that could not be solved,” said Peter Zwart, co-author on the paper and a physical bioscientist who is a member of CAMERA based out of the Molecular Biophysics and Integrated Imaging Division at Berkeley Lab. “But it turns out that the mathematical tools that we developed are able to leverage extra information hidden in the problem that had been previously overlooked. It is gratifying to see our theoretical approach lead to a practical tool.”

New Research Opportunities Enabled by XFELs

For much of the last century, the go-to technique for determining high-resolution molecular structure has been X-ray crystallography, where the sample of interest is arranged into a large periodic lattice and exposed to X-rays which scatter off and form diffraction patterns that are collected on a detector. Even though crystallography has been successful at determining many high-resolution structures, it is challenging to use this technique to study structures which are not susceptible to crystallization or structural changes that do not naturally occur within a crystal.

The creation of XFEL facilities, including the Linac Coherent Light Source (LCLS) and the European XFEL, has created opportunities for new experiments that can overcome the limitations of traditional crystallography. In particular, XFEL beams are several orders of magnitude brighter and have much shorter pulse lengths than traditional X-ray light sources, which allows them to collect measurable diffraction signal from smaller, uncrystallized samples and to study fast dynamics. Single-particle diffraction is one such emerging experimental technique enabled by XFELs, in which diffraction images are collected from single molecules instead of crystals. These single-particle techniques can be used to study molecular structure and dynamics that have been difficult to observe with traditional imaging techniques.

Overcoming Limitations in Single-Particle Imaging via Angular Correlations

One major challenge of single-particle imaging is that of orientation determination. “In a single-particle experiment, you don’t have control over rotation of the particles as they are hit by the X-ray beam, so each snapshot from a successful hit will contain information about the sample from a different orientation,” said co-author Jeff Donatelli, an applied mathematician in CAMERA who developed many of the algorithms in the new framework. “Most approaches to single-particle analysis have so far been based on trying to determine these particle orientations from the images; however, the best resolution achievable from these analyses is restricted by how precisely these orientations can be determined from noisy data.”

Instead of trying to directly determine these orientations, the team took a different approach based on an idea originally proposed in the 1970s by Zvi Kam. “Rather than examine the individual data intensities in an attempt to find the correct orientation for each measured frame, we eliminate this step altogether by using so-called cross-correlation functions,” Kurta said.

This approach, known as fluctuation X-ray scattering, is based on analyzing the angular correlations of ultrashort, intense X-ray pulses scattered from non-crystalline biomolecules. “The beauty of using correlation data is that it contains a comprehensive fingerprint of the 3D structure of an object that extends traditional solution scattering approaches,” Zwart said.
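
As a purely illustrative sketch (not the team’s actual pipeline), the cross-correlation idea can be shown in a few lines of NumPy: resample each diffraction snapshot onto polar rings about the beam center and average each ring’s azimuthal autocorrelation over many shots, so that the unknown per-shot particle orientation averages out. The `to_polar` resampler and all parameters here are simplifying assumptions.

```python
import numpy as np

def to_polar(img, n_q, n_phi):
    """Resample a square detector image onto (q, phi) polar rings
    by nearest-neighbor lookup about the image center."""
    n = img.shape[0]
    cx = cy = (n - 1) / 2.0
    q = np.linspace(1, n // 2 - 1, n_q)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    x = (cx + np.outer(q, np.cos(phi))).round().astype(int)
    y = (cy + np.outer(q, np.sin(phi))).round().astype(int)
    return img[y, x]

def angular_cross_correlation(images, n_q=32, n_phi=180):
    """Average angular autocorrelation C(q, dphi) over many snapshots.

    Because each shot sees the particle in a random orientation, the
    per-ring correlation converges to an orientation-independent
    fingerprint of the 3D diffraction intensity."""
    c = np.zeros((n_q, n_phi))
    for img in images:
        rings = to_polar(img, n_q, n_phi)
        # autocorrelate each ring over the azimuth via FFT
        # (Wiener-Khinchin theorem)
        f = np.fft.fft(rings, axis=1)
        c += np.fft.ifft(f * np.conj(f), axis=1).real / n_phi
    return c / len(images)
```

In a real experiment the averaging runs over thousands of noisy snapshots; the point of the sketch is only that no per-image orientation is ever estimated.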

Reconstructing 3D Structure from Correlations with CAMERA’s M-TIP Algorithm

The team’s breakthrough in reconstructing 3D structure from correlation data was enabled by the multi-tiered iterative phasing (M-TIP) algorithm developed by CAMERA. “Among the prominent advantages of M-TIP is its ability to solve the structure directly from the correlation data without having to rely on any symmetry constraints, and, more importantly, without the need to solve the orientation determination problem,” Donatelli said.

Donatelli, CAMERA director James Sethian and Zwart developed their M-TIP framework by developing a mathematical generalization of a class of algorithms known as iterative phasing techniques, which are used for determining structure in a simpler problem, known as phase retrieval. A paper describing the original M-TIP framework was published August 2015 in the Proceedings of the National Academy of Sciences.
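
To give a flavor of the iterative phasing family that M-TIP generalizes, here is a minimal sketch of the classic error-reduction algorithm (Gerchberg–Saxton/Fienup) for the simpler phase retrieval problem: recovering a signal from its Fourier magnitudes plus a known real-space support. This is a textbook instance for illustration, not the M-TIP code itself.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=500, seed=0):
    """Alternating-projection phase retrieval: repeatedly impose the
    measured Fourier magnitudes and a known real-space support.
    M-TIP generalizes this projection structure to the much harder
    correlation-to-structure problem."""
    rng = np.random.default_rng(seed)
    # start from the measured magnitudes with random phases
    phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    g = np.fft.ifft(magnitudes * phase).real
    for _ in range(n_iter):
        g = g * support                              # real-space constraint
        G = np.fft.fft(g)
        G = magnitudes * np.exp(1j * np.angle(G))    # Fourier constraint
        g = np.fft.ifft(G).real
    return g * support
```

The result satisfies the support constraint exactly and matches the measured magnitudes approximately; like all phase retrieval, it is only determined up to trivial ambiguities such as translation.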

“Advanced correlation analyses in combination with ab-initio reconstructions by M-TIP clearly define an efficient route for structural analysis of nanoscale objects at XFELs,” Zwart said.

Future Directions for Correlation Analysis and M-TIP

The team notes that methods used in this analysis can also be applied to analyze diffraction data when there is more than one particle per shot.

“Most algorithms for single-particle imaging can only handle one molecule at a time, thus limiting signal and resolution. Our approach, on the other hand, is scalable so that we should also be able to measure more than one particle at a time,” said Kurta. Imaging with more than one particle per shot will allow scientists to achieve much higher hit rates, since it is easier to use a wide beam and hit many particles at a time, and will also avoid the need to separate out single-particle hits from multiple-particle hits and blank shots, which is another challenging requirement in traditional single-particle imaging.

As part of CAMERA’s suite of computational tools, the team has also developed a different version of M-TIP that solves the orientation problem and can classify images into conformational states, and consequently can be used to study small biological differences in the measured sample. This alternate version of M-TIP was described in a paper published June 26, 2017, in the Proceedings of the National Academy of Sciences and is part of a new collaboration between SLAC National Accelerator Laboratory, CAMERA, the National Energy Research Scientific Computing Center (NERSC) and Los Alamos National Laboratory under DOE’s Exascale Computing Project (ECP).

This work was supported by the offices of Advanced Scientific Computing Research and Basic Energy Sciences in the Department of Energy’s Office of Science and the National Institute of General Medical Sciences at the National Institutes of Health. LCLS and NERSC are both DOE Office of Science User Facilities.

The Office of Science supports Berkeley Lab. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Source: Berkeley Lab

The post International Team Reconstructs Nanoscale Virus Features from Correlations of Scattered X-rays appeared first on HPCwire.

NSF Awards TACC and University of Louisville $600,000 Grant for Data Science Education

Thu, 10/12/2017 - 13:27

Oct. 12, 2017 — Colleges and universities across the US are creating data science programs to train future professionals to manage the massive amounts of digital data created by a range of sources – from web traffic to digital cameras. This data analysis frequently requires large-scale cyberinfrastructure – advanced computing systems that can deal with terabytes or even petabytes of data. However, few programs teach students how to use such resources effectively.

A new, three-year, $600,000 grant from the National Science Foundation’s (NSF) Education and Human Resources directorate to the Texas Advanced Computing Center (TACC) and the University of Louisville (UofL) will support the development of training, tools, and a cloud-based virtual environment to teach data science at the largest scales and provide computational resources for education. The grant is part of NSF’s “Improving Undergraduate STEM Education” (IUSE) program.

“The fast pace of technology and software developments makes keeping up with knowledge about big data analytics a challenge not only for the students but also for the educators,” said Weijia Xu, a research scientist at TACC and the principal investigator on the project. “TACC and the University of Louisville, both leaders in big data and cloud computing, are uniquely positioned to develop tools to help train students and teachers nationwide.”

The grant will allow the team from TACC (Xu, along with Ruizhu Huang, and Rosalia Gomez) and UofL (led by Hui Zhang) to create lightweight tools, training modules and exercises focusing on useful, open-source software for data science including R, Hadoop, Spark, and TensorFlow.

“The project will deliver a full set of interactive documents and video tutorials on using and configuring the platform,” said Huang. “The educational activities will use graphical, interactive, simulation-based, and experiential learning components to teach data science concepts and computing skills, accessed through the cloud-based platform. The project aims to help students develop critical workforce skills in data science.”

The training will cover both data analytics and machine learning and will introduce students and educators to emerging technologies, such as containers — a form of virtualization that allows data scientists to work in reproducible environments of their choosing and design.

The training and tools will be available for use on existing campus computing infrastructure and also can leverage resources available at TACC, which has some of the most powerful advanced computing systems in the world.

Students and professors will access these learning tools through a cloud-based virtual environment that TACC and UofL will develop. The project will complement existing curriculum in data science and will enhance the learning experience for students regardless of whether they are at a top data science program or a small minority-serving institution. The materials will be designed for both in-person instruction and for remote, online use.

The project will train diverse students in this critical area. The University of Texas at Austin, where TACC is based, is one of the nation’s top 10 universities in terms of the number of Hispanic undergraduate degrees awarded, while UofL was ranked by U.S. News & World Report as one of the best schools for African-American students outside historically black colleges and universities.

Education and outreach specialists at TACC will partner with K-12 STEM programs to take advantage of the cloud-based virtual environment to reach students as early as possible. TACC staff plan to create an activity using the cloud-based virtual environment that targets the approximately 200 underrepresented high school students who participate in the CODE @ TACC summer programs each year.

The research team will also collaborate with Campus Champions from the Extreme Science and Engineering Discovery Environment (XSEDE), who serve as local experts on campuses nationwide, to disseminate training opportunities. Their presence at two-year and four-year institutions will ensure rich diversity among students.

Said Rosalia Gomez, TACC Education & Outreach Manager: “To address the high demand for advanced computing resources, which are currently limited in classrooms across the country, this award will help us develop curriculum and learning frameworks and provide campus-wide access to resources that will impact students of diverse backgrounds.”

Source: TACC

The post NSF Awards TACC and University of Louisville $600,000 Grant for Data Science Education appeared first on HPCwire.

Sandia Labs Researchers Identify Novel Behavior of Cool Flames

Thu, 10/12/2017 - 10:50

LIVERMORE, Calif., Oct. 12, 2017 — A “cool flame” may sound contradictory, but it’s an important element of diesel combustion — one that, once properly understood, could enable better engine designs with higher efficiency and fewer emissions.

Sandia National Laboratories mechanical engineer Jackie Chen and colleagues Alex Krisman and Giulio Borghesi recently identified novel behavior of a key, temperature-dependent feature of the ignition process called a cool flame in the fuel dimethyl ether.

The adjective cool is relative: the cool flame burns at less than 1,150 Kelvin (1,610 degrees Fahrenheit), about half the typical flame burning temperature of 2,200 Kelvin. While cool flames were first observed in the early 1800s, their properties and usefulness for diesel engine design have only recently been investigated.

“We’re trying to quantify the influence of cool flames in stratified turbulent jets during the ignition and flame stabilization processes. The insights gleaned will contribute to more efficient, cleaner burning engines,” Chen said. “Our holy grail is to understand the physics of turbulent mixing coupled with high-pressure ignition chemistry, to aid in developing predictive computational fluid dynamics models that can be used to optimize engine design.”

The team’s research has shown that during autoignition (the spontaneous ignition of injected fuel in a combustion engine), cool flames accelerate the formation of ignition kernels — tiny localized sites of high temperature that seed a fully burning flame — in fuel-lean regions. The work was performed at Sandia’s Combustion Research Facility using Direct Numerical Simulations, a powerful numerical experiment that resolves all turbulence scales, and was published in the Proceedings of the Combustion Institute with Krisman as the lead author. The work was supported by the Department of Energy’s (DOE’s) Office of Basic Energy Sciences.

Borghesi further extended the cool flame study by performing a three-dimensional study on n-dodecane, a diesel surrogate fuel that has been the recent focus of Sandia’s Engine Combustion Network on spray combustion in diesels (the study that Krisman authored with dimethyl ether, a simpler fuel, was in two dimensions). Borghesi’s paper is pending publication. Taken together, both Krisman’s and Borghesi’s papers will form a comprehensive study of low-temperature chemistry in autoignitive flames at different stages of ignition.

Cool flames can improve engine design

The details of starting an engine are often taken for granted. Unlike a gasoline engine, in which the fuel-air mixture is ignited with a spark plug, in a diesel engine the fuel must auto-ignite when it is injected into the hot, compressed air that is in the piston at the top of the piston stroke. As the fuel is injected into the engine cylinder, rapid mixing and combustion combine to burn the fuel and drive the engine. While this lasts mere fractions of a second, the conditions of the flame that start this powerful process are crucial for improving engine efficiency and minimizing pollution formation.

The cool flame studies were performed at the DOE’s Oak Ridge Leadership Computing Facility on Titan, a 27-petaflop supercomputer, using a computational grant from DOE INCITE, or Innovative and Novel Computational Impact on Theory and Experiment. Computations using some of the world’s largest supercomputers, such as Titan, are required to produce an accurate and detailed calculation of the autoignition process.

“Combustion processes are challenging to study because the fuel itself is quite complicated,” Borghesi said. “Fuel oxidation chemistry consists of hundreds of species and thousands of chemical reactions. A realistic simulation of diesel combustion needs to capture this complex chemistry accurately in an overall model that includes turbulent mixing and heat transfer.”

As part of the DOE Exascale Computing Program, the team collaborates with outside institutions (including NVIDIA; Lawrence Berkeley, Oak Ridge, Argonne and Los Alamos national laboratories; the National Renewable Energy Laboratory; and Stanford University) to develop performance-portable algorithms to enhance the computing efficiency for Direct Numerical Simulation combustion studies.

Team focus turns to speed, structure of flames in diesel engines

In the future, the team would like to investigate basic questions about the speed and structure of flames at diesel engine conditions and study the relationship between spray evaporation, ignition, mixing and soot processes associated with multicomponent fuels. These basic questions will contribute to studying the cool flame’s crucial role in engine energy production and exercise the valuable capabilities of Direct Numerical Simulations running on exascale supercomputers as a highly precise and detailed numerical simulation method.

About Sandia National Laboratories

Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.

Source: Sandia National Laboratories

The post Sandia Labs Researchers Identify Novel Behavior of Cool Flames appeared first on HPCwire.

Google Compute Engine Announces Machine Types With Up to 96 vCPUs

Thu, 10/12/2017 - 08:52

Oct. 12, 2017 — Google has announced new machine types that have up to 96 vCPUs and 624 GB of memory—a 50% increase in compute resources per Google Compute Engine VM. These machine types run on Intel Xeon Scalable processors (codenamed Skylake), and offer the most vCPUs of any cloud provider on that chipset. Skylake in turn provides up to 20% faster compute performance, 82% faster HPC performance, and almost 2X the memory bandwidth compared with the previous generation Xeon.

96 vCPU VMs are available in three predefined machine types:

  • Standard: 96 vCPUs and 360 GB of memory
  • High-CPU: 96 vCPUs and 86.4 GB of memory
  • High-Memory: 96 vCPUs and 624 GB of memory

You can also use custom machine types and extended memory with up to 96 vCPUs and 624 GB of memory, allowing you to create exactly the machine shape you need, avoid wasted resources, and pay for only what you use.
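
As a rough illustration of what “the machine shape you need” means, the sketch below encodes custom machine type constraints as commonly documented for n1 VMs — even vCPU counts (or 1), memory in 256 MB increments, and 0.9–6.5 GB of memory per vCPU unless extended memory is requested. Treat the exact limits as assumptions for illustration, not an authoritative validator.

```python
def valid_n1_custom(vcpus, memory_gb, extended=False):
    """Rough validity check for an n1 custom machine shape, under the
    assumed rules: vCPU count is 1 or even (max 96), memory comes in
    256 MB steps, and memory per vCPU stays between 0.9 GB and 6.5 GB
    unless extended memory is requested."""
    if vcpus != 1 and vcpus % 2:
        return False                       # vCPUs must be 1 or even
    if vcpus > 96:
        return False                       # new per-VM ceiling
    if (memory_gb * 1024) % 256:
        return False                       # 256 MB granularity
    per_vcpu = memory_gb / vcpus
    if per_vcpu < 0.9:
        return False                       # too little memory per vCPU
    return extended or per_vcpu <= 6.5     # extended memory lifts the cap
```

Note that the predefined high-memory shape (96 vCPUs, 624 GB) sits exactly at the 6.5 GB-per-vCPU ceiling.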

The new 624 GB Skylake instances are certified for SAP HANA scale-up deployments. And if you want to run even larger HANA analytical workloads, scale-out configurations of up to 9.75TB of memory with 16 n1-highmem-96 nodes are also now certified for data warehouses running BW/4HANA.

You can use these new 96-core machines in beta today in four GCP regions: Central US, West US, West Europe, and East Asia. To get started, visit your GCP Console and create a new instance.

Source: Google

The post Google Compute Engine Announces Machine Types With Up to 96 vCPUs appeared first on HPCwire.

Spectra Logic Launches LTO-8 Pre-Purchase Program

Wed, 10/11/2017 - 10:53

BOULDER, Colo., Oct. 11, 2017 — Spectra Logic today announced the launch of its fourth LTO tape technology pre-purchase program that provides its customers with access to LTO-7 drives and media for use until LTO-8 technology becomes available. Pre-purchase customers will be priority recipients of LTO-8 drives and media, allowing them to be among the first to gain the capacity and performance advantages of LTO-8.

LTO-8 tape technology doubles the capacity of LTO-7 to an astonishing 30 TB compressed (12 TB native) per cartridge, and improves performance by 20 percent, up to 360 MB/s. The additional capacity equates to fewer tape cartridges required to store the same amount of data, while the performance boost translates into fewer tape drives needed to do the same amount of work. In addition, the new LTO-8 drives are backward compatible with LTO-7 tape media, allowing users to read/write any LTO-7 media.

Once available, LTO-8 tape drives and media will ship with Spectra’s entire line of tape libraries and be fully compatible with Spectra’s BlueScale® library software. When fully populated with LTO-8 drives and media, all of Spectra’s tape libraries, including its most compact, the Spectra® T50e Tape Library, will support at least one petabyte of compressed storage capacity. The Spectra TFinity® ExaScale® Tape Library configured with LTO-8 will deliver 58PB of compressed storage capacity in a three-frame footprint and up to 1.6EB of compressed storage capacity in Spectra’s largest configuration of 44 frames. In addition, LTO-8 tape technology will support LTFS, WORM and AES 256-bit hardware encryption.

As LTO advancements outpace the capacity and performance increases of other storage technologies, tape continues to be relied on by the world’s largest organizations for long-term storage and archive.  LTO-8 will utilize new and improved tunneling magnetoresistive (TMR) drive heads, as opposed to giant magnetoresistive (GMR) drive heads used in previous generations of LTO tape drives. This technological advancement solidifies LTO’s future as the lowest cost storage in the industry and ensures its viability for generations to come.

“LTO tape is the only data storage technology that has five more generations of double capacity growth in its future,” said Nathan Thompson, Spectra Logic CEO, founder and author of Society’s Genome.  “Spectra foresees the availability of LTO-9 at 24TB per tape cartridge in two years; LTO-10 at 48TB in four years; LTO-11 at 96TB in six or seven years; and LTO-12 at 190+TB in eight to nine years. I firmly believe that no other commercial data storage technology available now or on the horizon, will keep pace with or fulfill the world’s increasing demand for cost-effective, long-term data storage like tape technology.”

Spectra Logic is also introducing its new MigrationPass™ program which provides customized options for customers with LTO-4, LTO-5 and LTO-6 tape to easily migrate data to the newest LTO tape technology. This professional services offering combined with tape drives and partitioning will allow customers to move data from older generations to the latest LTO-8 technology.

Product specifications for LTO-8 include:

  • Capacity: 12 TB of native storage capacity, 30 TB compressed (2.5:1 compression)
  • Performance: Up to 360 MB/sec native throughput
  • Reliability: 1 x 10^-19 bit error rate
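
A quick back-of-the-envelope sketch shows what the doubled capacity and 20 percent throughput boost mean in practice, using the native figures above together with LTO-7’s 6 TB native capacity and 300 MB/s native throughput; the 1.2 PB archive size is a hypothetical example.

```python
def cartridges_needed(data_tb, native_tb_per_cart):
    """Cartridges required to hold data_tb at native capacity
    (ceiling division)."""
    return -(-data_tb // native_tb_per_cart)

def hours_to_write(data_tb, mb_per_sec):
    """Single-drive hours to stream data_tb at native throughput
    (1 TB = 1e6 MB)."""
    return data_tb * 1e6 / mb_per_sec / 3600

archive_tb = 1200                           # hypothetical 1.2 PB archive
lto7_carts = cartridges_needed(archive_tb, 6)    # LTO-7: 6 TB native
lto8_carts = cartridges_needed(archive_tb, 12)   # LTO-8: 12 TB native
lto7_hours = hours_to_write(archive_tb, 300)     # LTO-7: 300 MB/s
lto8_hours = hours_to_write(archive_tb, 360)     # LTO-8: 360 MB/s
```

Halving the cartridge count (200 down to 100 here) and shaving roughly a sixth off the drive-hours is exactly the “fewer cartridges, fewer drives” argument made above.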

“Spectra Logic’s pre-purchase program, which has been offered to customers for the past four generations of LTO upgrades, is a great option for anyone looking to purchase a new tape library, or maximize the benefits of their existing Spectra tape libraries,” said Spectra Logic CTO Matt Starr. “Upgrading from previous versions of LTO media to LTO-8 will provide dramatic increases in capacity, improve performance, and future-proof the investment customers have in their tape library technology, enabling them to keep pace with continuous data growth.”

LTO customers experience the lowest cost, most energy-efficient tape technology with ongoing advancements driven by high volume and competition. Commitment to tape by Spectra and the LTO Consortium is underpinned by the LTO roadmap which extends to a tenth generation of drives and media. View the most recent LTO roadmap here.

Availability

Spectra Logic’s pre-purchase program is available immediately to customers and channel partners worldwide and covers LTO-8 drives that will integrate with Spectra’s complete line of tape libraries, including the Spectra TFinity ExaScale, T950, T680, T380, T200, T120 and T50e.

Less than two years after the release of LTO-7, LTO-8 tape technology is expected to ship in the Fall of 2017.

About Spectra Logic Corporation

Spectra Logic develops data storage solutions that solve the problem of short- and long-term digital preservation for business and technology professionals dealing with exponential data growth. Dedicated solely to storage innovation for nearly 40 years, Spectra Logic’s uncompromising product and customer focus is proven by the adoption of its solutions by industry leaders in multiple vertical markets globally. Spectra enables affordable, multi-decade data storage and access by creating new methods of managing information in all forms of storage—including archive, backup, cold storage, private cloud and public cloud. To learn more, visit www.SpectraLogic.com.

Source: SpectraLogic

The post Spectra Logic Launches LTO-8 Pre-Purchase Program appeared first on HPCwire.

PSSC Labs Partners with Biosoft Integrators to Debut Specialized Genetic Research Cluster at ASHG 2017

Wed, 10/11/2017 - 07:53

LAKE FOREST, Calif., Oct. 11, 2017 — PSSC Labs, a developer of custom High-Performance Computing (HPC) and Big Data computing solutions, today announced it will be showcasing its PowerWulf Bio Titanium Cluster at ASHG 2017. PSSC Labs will be on the main exhibition floor at booth #830.

The American Society of Human Genetics (ASHG) is the primary professional membership organization for human genetics specialists worldwide. With nearly 8,000 members, ASHG gathers researchers, academicians, clinicians, laboratory practice professionals, genetic counselors, and others who have a special interest in the field of human genetics. Its annual meeting, attended by more than 6,000 people each year, advances genetic research and education and attracts displays of cutting-edge services and support for the genetic research community.

PSSC Labs has partnered with Biosoft Integrators (BSI) to create a superior, complete turn-key hardware and software platform designed specifically for genetic research, which it will debut at this year’s ASHG meeting. The PowerWulf Bio Titanium Cluster is a plug-and-play supercomputing solution proven compatible with all leading sequencing platforms.

Biosoft Integrators (BSI) works with researchers around the world to integrate laboratory technology platforms. With extensive experience in laboratory settings, BSI gives researchers greater efficiency and manageability by providing tools that unify the laboratory and laboratory informatics, eliminating the need to manually shuttle work and data between different software and equipment. Combining BSI’s deep research knowledge with PSSC Labs’ customizable hardware, the resulting PowerWulf Bio Titanium Cluster provides an unprecedented level of integration and support. With a fully customizable suite of hardware and software, and available onsite installation and training, it is the simplest way for research scientists to set up or expand their research computing capability.

“Our partnership with PSSC Labs over the last 6 years has allowed us to provide the fastest and most reliable computing solutions to our customers across the globe,” said Stu Shannon, Co-Founder and COO of BSI. “PSSC’s expertise in High Performance Computing components and server architecture allowed us to configure a best-in-class turn-key solution for our worldwide customer base to support rapid time to answer, high availability and complete customer satisfaction.”

Every PowerWulf Bio Titanium Cluster includes a three-year unlimited phone/email support package (additional years of support are available), with all support provided by a US-based team of experienced engineers. For more information, visit https://www.pssclabs.com/solutions/hpc-cluster/.

About Biosoft Integrators

Biosoft Integrators (BSI) works with researchers around the world to integrate laboratory technology platforms. Founded by Henry Marentes and Stu Shannon who between them have over 40 years in the fields of software, hardware and laboratory integration, BSI has a full range of solutions and services to meet the needs of your research lab or manufacturing facility. BSI’s HPC systems, laboratory information management systems, and analysis tools allow today’s lab to quickly move into the world of NGS.

About PSSC Labs

PSSC Labs offers hand-crafted HPC and Big Data computing solutions that deliver performance with low total cost of ownership. All products are designed and built at the company’s headquarters in Lake Forest, California. For more information, call 949-380-7288, visit www.pssclabs.com or email sales@pssclabs.com.

Source: PSSC Labs

The post PSSC Labs Partners with Biosoft Integrators to Debut Specialized Genetic Research Cluster at ASHG 2017 appeared first on HPCwire.

ACM’s Council on Women in Computing Appoints Jodi Tims Chair

Wed, 10/11/2017 - 07:39

NEW YORK, Oct. 11, 2017 — ACM-W, the Association for Computing Machinery’s Council on Women in Computing, today announced that Jodi Tims, Professor of Computer Science at Baldwin Wallace University (US), has been named Chair. Concurrent with the announcement of Tims’s appointment, ACM-W announced that Natasa Milic-Frayling, Professor of Data Science at the University of Nottingham (UK), has been named Chair of the ACM-W Europe Committee, and Arati Dixit, Associate Professor of Computer Science at Savitribai Phule Pune University (India), has been tapped as Chair of the ACM-W India Committee.

ACM-W supports, celebrates and advocates internationally for the full engagement of women in all aspects of computing, providing a wide range of programs and services to ACM members and working in the larger community to advance the contributions of technical women. ACM-W activities are organized around local chapters, regional Celebrations of Women in Computing conferences, and student scholarships.

ACM-W chapters foster a wide range of events including networking and mentoring opportunities, panel sessions, career fairs, and hackathons. Presently, there are more than 180 ACM-W student and professional chapters around the world. ACM-W Celebrations are regional conferences, typically involving keynote speakers, workshops, panels, student presentations and posters and career fairs. Attendees network, learn and share. ACM-W supports these events with funding (in partnership with Microsoft), website hosting, handling of finances and guidance based on years of practice. More than 30 ACM-W Celebrations took place around the world last year. ACM-W also provides a scholarship program that enables women students to attend important ACM research conferences.

“I am honored to be named Chair of ACM-W during this exciting time,” said Tims of her appointment. “In the last seven years, ACM-W’s membership, chapters, Celebrations and geographic reach have enjoyed phenomenal growth. In the near future, we hope to increase the number of ACM-W Professional Chapters and build a stronger ACM-W presence in regions such as Africa, South America, and China.”

Tims believes that more ACM-W Professional Chapters will address an important need for women practitioners, who can sometimes feel isolated. She also hopes that new Professional Chapter members will connect to local ACM-W Student Chapters to provide mentoring, internships, shared events and more.

Jodi Tims is a Professor of Computer Science, and Chair of the Department of Computer Science at Baldwin Wallace University in Berea, Ohio. Prior to becoming ACM-W Chair, Tims served as Vice-chair, and led ACM-W’s Celebrations initiative.

Natasa Milic-Frayling is Professor and Chair of Data Science at the School of Computer Science, University of Nottingham (UK). Prior to joining University of Nottingham in 2015, she worked as a Principal Researcher at Microsoft Research in Cambridge, UK. Before becoming Chair of the ACM-W Europe Committee, Milic-Frayling served on the ACM-W Europe Executive Committee.

Arati Dixit is an Associate Professor at the Department of Technology, Savitribai Phule Pune University and a visiting faculty member at PVPIT, Bavdhan. Prior to becoming Chair, Dixit served on the ACM-W India Executive Committee. She is the Vice-chair of ACM iSIGCSE (India Special Interest Group on Computer Science Education).

About ACM-W

ACM-W is the ACM Council on Women in Computing (http://women.acm.org). ACM-W supports, celebrates, and advocates internationally for the full engagement of women in all aspects of the computing field, providing a wide range of programs and services to ACM members and working in the larger community to advance the contributions of technical women.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post ACM’s Council on Women in Computing Appoints Jodi Tims Chair appeared first on HPCwire.

Intel Delivers 17-Qubit Quantum Chip to European Research Partner

Tue, 10/10/2017 - 18:56

On Tuesday (Oct. 10), Intel delivered a 17-qubit superconducting test chip to research partner QuTech, the quantum research institute of Delft University of Technology (TU Delft) in the Netherlands. The announcement marks the latest milestone in the 10-year, $50-million collaborative relationship with TU Delft and TNO, the Dutch Organization for Applied Research, to accelerate advancements in quantum computing.

Like IBM, Microsoft and Google, Intel is developing quantum computing technologies with the goal of building a commercial universal quantum computer thousands of times larger than today’s prototypes. Quantum supremacy — the threshold at which quantum machines outperform their classical counterparts on select problems — is expected to be reached at roughly 50 qubits, but delivering on quantum’s promise for applications like chemistry, materials science and cryptography will require machines at least 1,000 times that scale.
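As a rough illustration of why roughly 50 qubits is the oft-cited supremacy threshold: brute-force classical simulation of n qubits means storing 2^n complex amplitudes, a number that blows past any machine’s memory somewhere around n = 50. A back-of-envelope sketch (assuming 16-byte double-precision complex amplitudes; the function name is illustrative):

```python
# Memory needed to hold the full state vector of an n-qubit system
# on a classical machine: 2**n complex amplitudes, each 16 bytes
# at double precision.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (17, 50):
    pib = state_vector_bytes(n) / 2 ** 50  # convert bytes to pebibytes
    print(f"{n} qubits -> {pib:g} PiB")
```

At 17 qubits (the size of Intel’s new test chip) the state vector fits in a couple of megabytes; at 50 qubits it needs about 16 PiB, beyond the memory of any existing system.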

Intel asserts that its fabrication and packaging expertise give it a leg up on its competitors in the space.

Intel’s director of quantum hardware, Jim Clarke, holds the new 17-qubit superconducting test chip. (Credit: Intel Corporation)

“We tapped into our existing knowledge of both fabrication and packaging here at Intel to build a packaged 17-qubit chip that has been optimized for the low-temperature [20 millikelvin – 250 times colder than deep space] environment,” said Jim Clarke, Intel’s director of quantum hardware.

The heart of the advance is a new architecture that improves reliability and thermal performance, and reduces radio frequency (RF) interference between qubits, said Clarke, while a scalable interconnect scheme allows for 10-100 times more signals into and out of the chip as compared to wirebonded chips. Intel emphasized its “advanced processes, materials and designs that enable [the company’s] packaging to scale for quantum integrated circuits, which are much larger than conventional silicon chips.”

The quantum supremacy horizon will likely be reached in the next couple years, but building a broadly useful quantum computer is likely to require thousands or millions of qubits (the quantum version of a classical bit). That could take a decade to achieve. “We are at mile one in a marathon,” said Clarke, “there’s a lot of learning to do, but we’re in it for the long-haul. So when we design these systems we’re not designing a system for something that probably won’t be useful today; we’re designing the whole system for something that will hit the commercial viability of a large-scale system.

“When I say system, what I mean is it’s more than a chip,” he said. “If I have a million qubit chip today I wouldn’t have the infrastructure to run it. This means the control electronics, the architecture, the algorithms and the software. At Intel, we’re working on all parts of the stack because we recognize that ultimately something that’s going to be relevant to the general population and commercial value to Intel is to build that complete system.”

Despite quantum computing’s very long rampup and recent investment and development spurt, the field is full of open questions. It’s far from clear what the superior qubit design will be, so Intel is investigating multiple qubit types. Superconducting qubits are incorporated into its newest test chip, but the company has also been working on an alternative type called a spin qubit in silicon, similar to a single-electron transistor in a magnetic field. The spin qubit in silicon technology leverages Intel’s transistor expertise, whereas the superconducting qubits rely heavily on innovations in its packaging space.

With both of these systems Intel’s goal is to build a universal processor. “Both systems have advantages and disadvantages and neither system has been completely solved,” Clarke told us. “There’s still fundamental physics that has to be proven on both. We have a set of metrics that we’re trying to characterize for both types and, to a certain extent, we’re hedging our bets. When one technology shows itself to be more viable than the other, we would probably pick one and run with it.”

Intel says its partnership with QuTech, begun in 2015, has enabled it to go from design and fabrication to test much more quickly. “Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities,” said Dr. Michael Mayberry, corporate vice president and managing director of Intel Labs.

“With this test chip, we’ll focus on connecting, controlling and measuring multiple, entangled qubits towards an error correction scheme and a logical qubit,” said Professor Leo DiCarlo from QuTech. “This work will allow us to uncover new insights in quantum computing that will shape the next stage of development.”

The new test chip is about the size of a quarter in a package about the size of a half-dollar coin. In the unboxing video from QuTech’s Leo DiCarlo and Intel’s Dave Michalak, the duo report that the next step is to “test and characterize all the qubits in the device [to assess] how each performs individually and also how they all perform together when they’re entangled.”

The post Intel Delivers 17-Qubit Quantum Chip to European Research Partner appeared first on HPCwire.

Fujitsu Tapped to Build 37-Petaflops ABCI System for AIST

Tue, 10/10/2017 - 16:53

Fujitsu announced today it will build the long-planned AI Bridging Cloud Infrastructure (ABCI), which is set to become the fastest supercomputer in Japan when it begins operation in fiscal 2018 (which starts in April). ABCI will use Intel Xeon Gold processors and Nvidia V100 GPUs to deliver 550 petaflops of theoretical peak half-precision performance and 37 petaflops of peak double-precision performance. The award comes from Japan’s National Institute of Advanced Industrial Science and Technology (AIST).

The latest contract win means Fujitsu is now riding two CPU horses in the high-stakes supercomputer race toward exascale. It is also building Japan’s Post-K supercomputer, which is based on ARM processors. The Post-K machine, part of Japan’s Flagship 2020 Project, has encountered delays reportedly related to ARM development issues.

The new ABCI datacenter will be located on the Kashiwa II campus of the University of Tokyo. If this system had competed in the latest Top500 ranking of supercomputers, published in June 2017, it would have taken the top position in Japan and third place globally. First reports roughly a year ago indicated the ABCI target spec would be 33 petaflops double-precision or 130 petaflops half-precision (see the HPCwire article, Japan Plans Super-Efficient AI Supercomputer). The V100 tensor cores, which had not been announced when the plans became public, account for the much higher FP16 capability.

“The most noteworthy detail in the ABCI announcement is that it is being hailed – and configured – as a general-purpose supercomputer, not restricted to AI applications. The announcement highlights its double-precision performance, which is generally associated with scientific applications, as opposed to the single- or half-precision benchmarks that have come to be associated with deep learning,” said Addison Snell, CEO, Intersect360 Research.

“This win is also an important stepping stone for Fujitsu toward its Post-K architecture for exascale computing. Rather than SPARC processors, the ABCI system will use Intel Xeon processors with Nvidia Tesla GPU accelerators. Fujitsu can leverage this experience toward its eventual deployments that are ARM-based, with acceleration.”

The next Top500 list is due out at SC17 next month in Denver, and expectations are for shuffling at the top. In September, China released details of the upgrade to Tianhe-2 (MilkyWay-2), now Tianhe-2A. It will use a proprietary accelerator (the Matrix-2000) and a proprietary network, and will provide support for OpenMP and OpenCL. The upgrade is about 25 percent complete and expected to be fully functional by November 2017, according to a report by Jack Dongarra.

“The most significant enhancement to the system is the upgrade to the TianHe-2 nodes; the old Intel Xeon Phi Knights Corner (KNC) accelerators will be replaced with a proprietary accelerator called the Matrix-2000. In addition, the network has been enhanced, the memory increased, and the number of cabinets expanded. The completed system, when fully integrated with 4,981,760 cores and 3.4 PB of primary memory, will have a theoretical peak performance of 94.97 petaflops, which is roughly double the performance of the existing Tianhe-2 system. NUDT also developed the heterogeneous programming environment for the Matrix-2000 with support for OpenMP and OpenCL,” wrote Dongarra (Report on The TianHe-2A System).

It will be interesting to see if ABCI is stood up in time for next June’s Top500 list and where it lands. Its 37 petaflops (peak) should secure a top-10 or even top-5 placement, but its enormous AI capability and low power draw will be the bigger story for many. According to today’s announcement, AIST has been planning to deploy ABCI as a global open innovation platform that will enable high-speed AI processing by combining algorithms, big data and computational power.

“As a cloud platform for AI applications offering the world’s top class machine learning processing capability, high performance computational capability, and energy efficiency, ABCI is expected to create new applications in a variety of fields. Furthermore, the system is foreseen to promote the utilization of cutting-edge AI technology by industry, including transfer of the latest cloud platform technology to the public through an open design,” said Fujitsu.

ABCI will feature a “high-performance computational system, a high-capacity storage system, and a variety of networking technology,” according to Fujitsu:

PRIMERGY CX2570 M4

“[The core of ABCI] will consist of 1,088 PRIMERGY CX2570 M4 servers, mounted in Fujitsu’s PRIMERGY CX400 M4 multi-node servers. Each server will feature the latest components, including two Intel Xeon Gold processor CPUs (a total of 2,176 CPUs) and four NVIDIA Tesla V100 GPU computing cards (a total of 4,352 GPUs), as well as Intel SSD DC P4600 series based on an NVMe standard, as local storage.

“Moreover, the 2U size chassis PRIMERGY CX400 M4 can each mount two PRIMERGY CX2570 M4 server nodes with GPU computing cards, offering high installation density. In addition, by utilizing “hot water cooling” for its servers, this system can also realize significant power savings.”
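The quoted peak figures can be sanity-checked from the component counts above. A rough sketch, assuming Nvidia’s published per-GPU ratings for the Tesla V100 (SXM2) of about 7.8 teraflops FP64 and 125 teraflops FP16 on the tensor cores, and ignoring the Xeon contribution:

```python
# Sanity-check ABCI's quoted peak figures from the announced
# component counts. Per-GPU ratings below are Nvidia's published
# numbers for the Tesla V100 (SXM2).
NODES = 1088
GPUS_PER_NODE = 4
GPU_FP64_TF = 7.8    # teraflops, double precision
GPU_FP16_TF = 125.0  # teraflops, tensor-core half precision

gpus = NODES * GPUS_PER_NODE         # 4,352 GPUs, as announced
fp64_pf = gpus * GPU_FP64_TF / 1000  # GPU-only FP64 peak, petaflops
fp16_pf = gpus * GPU_FP16_TF / 1000  # tensor FP16 peak, petaflops

print(f"{gpus} GPUs: ~{fp64_pf:.1f} PF FP64, ~{fp16_pf:.0f} PF FP16 tensor")
```

The GPU-only FP64 figure of roughly 34 petaflops sits just under the quoted 37-petaflops system peak (the Xeon Gold CPUs supply the remainder), and 4,352 × 125 TF gives 544 petaflops, consistent with the announced 550-petaflops half-precision number.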

Fujitsu has been investing heavily in AI and deep learning in recent years; that includes development of a custom AI processor, the Deep Learning Unit (see the HPCwire article, Fujitsu Continues HPC, AI Push). Fujitsu’s roadmap for the DLU includes multiple generations over time: a first-gen coprocessor is set to debut in 2018, followed by a second generation embedded in a host CPU. Further out are potential specialized processors targeting neuromorphic or combinatorial optimization applications. There was no mention of the DLU in today’s announcement.

Fujitsu says it plans to apply its AI and HPC technology to meet “AIST’s high system requirement standards for both hardware and software.” The company also plans to carry lessons learned from the ABCI project into its Human Centric AI Zinrai initiative.

The post Fujitsu Tapped to Build 37-Petaflops ABCI System for AIST appeared first on HPCwire.

OmniScale Launches PR, Marketing Services for Advanced IT

Tue, 10/10/2017 - 11:55

VANCOUVER, Wash., Oct. 10, 2017 — Co-founders Isaac Lopez and Matt Walters today announced the launch of OmniScale Media, a full-service media agency specializing in strategic marketing and communications programs for advanced technology companies. Focused on growing awareness and driving demand, OmniScale Media leverages the founders’ media, marketing, editorial, live events, and sales backgrounds around High-Performance Computing (HPC), Big Data, Artificial Intelligence (AI), and the derivative technologies of these fast-paced market segments. The agency’s mission is to help companies that produce complex, leading-edge technology solutions navigate increasingly complex marketing communications environments to build scalable and highly effective marketing and communications processes designed to produce measurable results.

OmniScale Media’s approach is focused on helping corporate marketing leaders shape messages and both qualify and quantify the customer conversation lifecycle at every stage in the process. The agency works on behalf of its clients to identify key influencers and target them with disruptive insights that shape their thinking and inspire them to action.

“Modern technology currents are tremendously transforming businesses, adding considerable pressure on IT, and bringing more customer stakeholders into the conversation,” said Walters. “With more decision-makers at the table, marketers involved with disruptive technology have a tougher job influencing consensus and bringing customers to a collective ‘yes.’ Our focus is on providing the marketing framework needed to help pave the way so marketing doesn’t just ‘support’ sales, it creates momentum and drives it.”

About the OmniScale Media Founders

Co-founders Isaac Lopez and Matt Walters have been creating successful media and marketing programs for advanced tech companies and audiences together since 2010. Working previously at Tabor Communications, a leading advanced-scale computing media and events company focused on the top of the computing pyramid, the duo found their talents to be complementary.

About OmniScale Media, LLC

OmniScale Media, LLC is a full-service media agency specializing in creating engagement that drives adoption of advanced technology solutions. With 25 years of combined experience in high-end tech PR, marketing, demand generation, event production and media, OmniScale Media is the go-to agency for advanced technology companies looking to create disruptive insights that inspire people to engage and take action. OmniScale Media can be found online at www.OmniScaleMedia.com.

Source: OmniScale Media, LLC

The post OmniScale Launches PR, Marketing Services for Advanced IT appeared first on HPCwire.

HPC Chips – A Veritable Smorgasbord?

Tue, 10/10/2017 - 09:23

No, this isn’t about the song from Charlotte’s Web or the Scandinavian predilection for open sandwiches; it’s about the apparent newfound choice in the HPC CPU market.

For the first time since AMD’s ill-fated launch of Bulldozer the answer to the question, ‘Which CPU will be in my next HPC system?’ doesn’t have to be ‘Whichever variety of Intel Xeon E5 they are selling when we procure’.

In fact, it’s not just in the x86 market where there is now a genuine choice. Soon we will have at least two credible ARM v8 ISA CPUs (from Cavium and Qualcomm respectively) and IBM have gone all in on the Power architecture (having at one point in the last ten years had four competing HPC CPU lines – x86, Blue Gene, Power and Cell).

In fact, it may even be Intel that is left wondering which horse to back in the HPC CPU race with both Xeon lines looking insufficiently differentiated going forward. A symptom of this dilemma is the recent restructuring of the Xeon line along with associated pricing and feature segmentation.

I’m also quite deliberately avoiding the potentially disruptive appearance of a number of radically different computational solutions being honed for machine learning and which will inevitably have some bearing on HPC in the future.

Have we seen peak Intel?

Intel’s 90+ percent market share in the datacentre has for years worried many observers. While their products have undoubtedly been very good, when you have an effective monopoly, the evolutionary pressure that drives innovation and price competitiveness understandably wanes.

“Success breeds complacency. Complacency breeds failure. Only the paranoid survive.” – Andy Grove

The re-emergence of credible competition can only be a good thing for the wider market, but in HPC things are less clear cut. Intel still holds a strong hand in the game of poker that is HPC procurement, namely AVX-512, but since some of the larger Top500 systems tend to be heterogeneous in nature, is this going to be enough to fend off the challenge from the following pack in other parts of the HPC ecosystem?

IBM and Nvidia are clearly hoping to make significant inroads at the top table of HPC with their CORAL-generation systems, and Qualcomm and Cavium will also be hoping to chip away at Intel’s monopoly (though they are probably not directly aiming at HPC), but these non-x86 alternatives face significant problems when it comes to demonstrating their capabilities in the HPC space.

AMD have a great opportunity to make gains in the HPC space with their EPYC line (the only x86 competitor) and early signs are encouraging that they will take the fight to Intel and not just on price-performance grounds.

Inertia in HPC is a funny thing

We mainly think of inertia as a property of physical objects but in the HPC industry there is a similar phenomenon relating to application code bases (and languages), instruction sets (and optimised software library ecosystems) and how hard it is to justify doing something different. In the case of HPC, this is really an argument about the barrier to entry for the new HPC CPU vendors, and what they have to be able to demonstrate in order to displace the incumbent (i.e. Intel).

Without trying to evade the question, we all hope that the non-Intel vendors can find the right combination of price and performance to chip away at the current Intel dominance in the datacentre. Not because we want to see Intel fail, but because we want its challengers to succeed. Healthy competition is definitely good for users, though less obviously so for Intel’s shareholders.

If all you have is a hammer

“Ah-ha!” I hear you cry, “We already embrace different ISAs and heterogeneity in the Top500.” and indeed we do.  In fact the latest Green500 list is testament to how effective this approach can be. We also know that LINPACK is a historically poor predictor for most actual HPC application performance but we still use it as a flagship benchmark, predominantly because it does a good job of stress testing the computational elements of system architecture. With the march towards exascale now looking more like the retreat from Moscow, there is increasing need to improve the system efficiency for applications that don’t exhibit LINPACK-esque scaling characteristics. Machine learning looks to be the new yardstick so it will be interesting to see the rapid evolution of new solutions and benchmarks.

Moore’s Law in ICU

We should also acknowledge the increasing challenges facing silicon fabrication and process technology. Keeping the Moore’s Law show on the road is hard. This isn’t news to folk in HPC but it is one of the reasons why exascale in under 20MW (anything else looks prohibitively expensive) looks to be an exceedingly challenging goal in the next five years.

Intel are still at the vanguard when it comes to eking out the increasingly esoteric improvements needed, but when you have to re-state what aspects of process naming conventions should matter, you are already rapidly approaching the point of diminishing returns.

Moore’s law is an engine that has historically driven significant growth across the board and enabled the in silico renaissance that most HPC users are engaged in, but it is faltering at just the moment that exascale computing systems need a significant uplift in system efficiency. There still need to be huge improvements in parallelism, memory and storage efficiency, and data transmission, and that’s before you even get to fault recovery and software complexity for such huge systems.

We’ve been fairly good at scrambling over the various ‘walls’ we’ve encountered in the last couple of decades but does anyone else have a feeling that we are at the cusp of a period of innovation in HPC that we haven’t seen for some time?

Benchmark, benchmark, benchmark

For the first time in at least five years, comparative benchmarking, conducted as part of your pre-tender and tender process, looks to be an absolutely essential step in delivering the best value. Rather than just providing a little more confidence that the vendors have tuned the MPI implementation and fabric topology, and that you know which compiler flags to flip, it will shine a light into some of the dark, musty corners that more complacent software developers and vendors have chosen to ignore. If for no other reason, it will ensure that the supported pricing you get from your suppliers is as keen as it should be.

Dairsie Latimer is a Managing Consultant for Red Oak Consulting.

The post HPC Chips – A Veritable Smorgasbord? appeared first on HPCwire.

Child Care at SC17, Deadline is Oct. 16

Tue, 10/10/2017 - 08:59

I’ve been thinking about this child care thing – on a couple of different fronts. First, if you are a working parent who would like to come to the conference but can’t because of a lack of child care, it’s a huge deal. But there’s a non-trivial ripple effect to this benefit, because the HPC community as a whole is richer for the greater participation that might otherwise be lost.

I’m also thinking about how to get the word out. The SC team has written some about this, but this is such an important topic that I’m writing about it again.

First – the facts

Note the details. If you are thinking about using the child care benefit at SC17, you need to know these things:

  • Deadline to sign up is 13-October. This is a HARD deadline, due to staffing and licensing of the temporary child care facility at the conference.
  • Space is limited due to the room allocated to childcare and regulations on child care facilities. So, there is a finite number of children that can be accommodated.
  • Child care is available for children from 6 months to 12 years of age. (Children 6 months or under may be brought into all conference sessions subject to the terms and conditions of the SC17 Child Policy which aims to ensure the safety of all attendees.)
  • Cost:
    • $6 an hour per child for attendees and exhibitors
    • $5 per hour for SC17 committee members
    • $3 per hour for student attendees
  • You’ve signed up, but then need to cancel – what happens then? If you cancel before the start of the conference, you get 50% of your fee refunded.

Full details can be found on the website.

Second – the why

In my opinion, one of the top reasons the SC conference should be doing this is because it’s the right thing to do. Conferences need to do things to help make it easier for working parents to attend. I am purposely using “working parents” instead of “working mothers” (you will see a strong propensity to gender diversity in HPC on the SC17 Inclusivity website) because the benefit should be for anyone who needs it.

It is projected that the HPC industry will have a shortfall of 1 million workers by 2020. I’m not a researcher and I didn’t come up with the number – I’m repeating the number that has been reported by multiple luminaries in our industry, including John West, Lorna Rivera, and Toni Collis. (If you want the actual number, email me and I’ll work with Toni and Lorna to get a firm statistic and source.) But whether the number is 1 million or 750,000, it’s still a big number. The gap is growing because the younger generation is going into a variety of scientific fields other than HPC. Why? We don’t know for certain. But we need to make it easy for them to stay in HPC, come to conferences, learn, and network.

I must note that most major conferences offer some type of on-site child care. That alone is not a reason to offer it. But it is a consideration in ensuring that the SC conference remains competitive and keeps up with current trends.

Third – spread the word

I’m writing this article as a way to get the word out. Very kindly, HPCwire is publishing the article and promoting it. I’m tweeting about it via my Twitter handle and the SC17 Inclusivity handle. I’d appreciate you sharing this information with your network – whether it’s in your company newsletter, your company Twitter, or your own personal Twitter. Tag @SCInclusivity, @kamcmahon, or use #SC17. I’ll notice and share your communication.

In Summary

I’m proud to be part of the SC17 Inclusivity Committee and to be working with the Steering Committee and conference organizers to make the conference more accessible for everyone. I welcome your ideas and suggestions. You can DM me on Twitter @kamcmahon or email me at kmcmahon@mcmahonconsulting.com.

See you in Denver for SC17!

https://sc17.supercomputing.org/inclusivity/site-child-care/

The post Child Care at SC17, Deadline is Oct. 16 appeared first on HPCwire.
