HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nimbix Unveils Expanded Cloud Product Strategy

Wed, 01/18/2017 - 07:25

RICHARDSON, Tex., Jan. 18 — Nimbix, a leading provider of high performance and cloud supercomputing services, announced today its new combined product strategy for enterprise computing, end users and developers.  This new strategy will focus on three key capabilities – JARVICE Compute for high performance processing, including Machine Learning, AI and HPC workloads; PushToCompute for application developers creating and monetizing high performance workflows; and MaterialCompute, a brand new intuitive user interface, featuring the industry’s largest high performance application marketplace available from a cloud provider.

Nimbix’s JARVICE platform powers the Nimbix Cloud and is capable of processing massively parallel turnkey workflows ranging from enterprise simulation to machine learning, serving all major industries and organizations.  Unlike other cloud providers that leverage virtualization technology to provide slices of physical machines to users, JARVICE delivers high performance computation on bare-metal supercomputing systems using Nimbix’s patented Reconfigurable Cloud Computing technology and fully containerized application components for agility and security.  JARVICE is also available as a product for both hosted and on-premises private cloud deployments.

PushToCompute, released in September 2016, is the fastest, easiest way for developers to onboard commercial or open source compute-intensive applications into the cloud.  Using the industry-standard Docker format, PushToCompute seamlessly interfaces with major third-party registries such as Docker Hub, Google Container Registry, and others, as well as private registries. PushToCompute is available as a subscription service and will expand to include build capabilities for both x86 and POWER architectures in the first half of 2017.  With these new capabilities, PushToCompute will offer end-to-end continuous integration as well as continuous delivery services for developers of compute-intensive workflows such as machine learning and other complex algorithms.  Once deployed, these workflows can be made available in the public marketplace for monetization in an on-demand fashion.

MaterialCompute, Nimbix’s newest offering, sets a new standard for ease of use and accessibility of high-end computing services.  MaterialCompute aims to reduce clicks, improve flows, and optimize display on both desktop and mobile devices.  With MaterialCompute, users choose applications and workflows from the marketplace and execute them on the Nimbix Cloud at any scale, from any network, on any device, leveraging advanced computing technologies such as the latest GPUs from NVIDIA and FPGAs from Xilinx.  Developers can also leverage MaterialCompute to create and manage applications, and interface seamlessly with PushToCompute mechanisms.

“Delivering optimized technology capabilities to different communities is key to a successful public cloud offering,” said Nimbix Chief Technology Officer Leo Reiter.  “With this unified approach, Nimbix delivers discrete product capabilities to different audiences while maximizing value to all parties with the underlying power of the JARVICE platform.”

JARVICE and PushToCompute are available with both on-demand and subscription pricing.  MaterialCompute will be available for public access in February 2017 and will serve as the primary front-end for all Nimbix Cloud services.

About Nimbix

Nimbix is the leading provider of purpose-built cloud computing for big data and computation. Powered by JARVICE, the Nimbix Cloud provides high performance software as a service, dramatically speeding up data processing for Energy, Life Sciences, Manufacturing, Media and Analytics applications. Nimbix delivers unique accelerated high-performance systems and applications from its world-class datacenters as a pay-per-use service. Additional information about Nimbix is included in the company overview, which is available on the Nimbix website at https://www.nimbix.net.

Source: Nimbix

The post Nimbix Unveils Expanded Cloud Product Strategy appeared first on HPCwire.

Michela Taufer Named SC19 Chair

Wed, 01/18/2017 - 07:00

Jan. 18 — The University of Delaware’s Michela Taufer has been elected general chair of the 2019 International Conference for High Performance Computing, Networking, Storage and Analysis (SC19).

Sponsored by the Association for Computing Machinery and IEEE, SC is the primary international high-performance computing (HPC) conference.

“We are excited to have the benefit of Dr. Taufer’s leadership for SC19,” says John West, director of strategic initiatives at the Texas Advanced Computing Center and chair of the SC Steering Committee.

“This conference has a unique role in our community, and we depend upon the energy, drive, and dedication of talented leaders to keep SC fresh and relevant after nearly 30 years of continuous operation. The Steering Committee also wants to express its gratitude for the commitment that the University of Delaware is making by supporting Michela in this demanding service role.”

Taufer has been involved with the SC conference since 2007 and has served in many roles, including reviewer, technical papers area chair, doctoral showcase chair, and technical program co-chair. She is currently on the Student Cluster Competition Reproducibility committee and the Reproducibility Advisory Board of the Steering Committee. She is the finance chair for 2017, and she was elected to the Steering Committee in 2015.

In addition to her work with the SC conference, Taufer has been involved in other major conferences in the HPC field. In 2015 she co-chaired the IEEE International Conference on Cluster Computing, and in 2017, she will be general chair of the IEEE International Parallel and Distributed Processing Symposium.

“This is a well-deserved honor for Prof. Taufer and marks her as one of a few recognized leaders in the field of HPC,” says Kathy McCoy, chair of the Department of Computer and Information Sciences.

“This brings tremendous recognition to Prof. Taufer and her contributions, and it shines a spotlight on all of Delaware’s HPC efforts. We are thankful for her leadership.”

About the SC conference series

Established in 1988, the annual SC conference has grown in size and impact each year. Approximately 5,000 people participate in the technical program, with about 11,000 people overall.

SC has built a diverse community of participants including researchers, scientists, application developers, computing center staff and management, computing industry staff, agency program managers, journalists, and congressional staffers.

The SC technical program has addressed virtually every area of scientific and engineering research, as well as technological development, innovation, and education. Its presentations, tutorials, panels, and discussion forums have included breakthroughs in many areas and inspired new and innovative areas of computing.

Source: Diane Kukich, University of Delaware

The post Michela Taufer Named SC19 Chair appeared first on HPCwire.

HiPEAC Conference Begins January 23

Wed, 01/18/2017 - 06:45

Jan. 18 — Taking place in Stockholm from January 23-25, the 12th HiPEAC conference will bring together Europe’s top thinkers on computer architecture and compilation to tackle the key issues facing the computing systems on which we depend. HiPEAC17 will see the launch of the HiPEAC Vision 2017, a technology roadmap which lays out how technology affects our lives and how it can, and should, respond to the challenges facing European society and economies, such as the ageing population, climate change and shortages in the ICT workforce.

The Vision 2017 proposes a reinvention of computing. “We are at a crossroads, as our current way of making computers and their associated software is reaching its limit,” says Editor of the Vision, Marc Duranton of CEA. “New domains such as cyber-physical systems, which entangle the cyber and physical worlds, and artificial intelligence require us to trust systems and so develop more efficient approaches to cope with the challenges of safety, security, privacy, energy efficiency and increasing complexity. It really is the right time to reinvent computing!”

The Vision 2017 also highlights the economic importance of Europe remaining at the forefront of technological innovation. In that vein, HiPEAC17 is not a traditional academic conference; the network brings together computing systems research teams based in universities and research labs with those based in industry so as to ensure that research is relevant to market needs.  Indeed, the network has recently given a Technology Transfer Award to Horacio Pérez-Sánchez of the Universidad Católica de Murcia for his team’s work on computational drug discovery technologies, work supported by the EU-funded Tetracom initiative, which facilitated the transfer of research results from university labs to commercial application.

HiPEAC17 will also serve as a platform for HiPEAC’s recruitment service, which aims to help match European companies and research teams with the people with the skills they need, something that often proves to be a hurdle to business development.

Highlights of the conference include:

  • Launch of Matryx Computers, pre-integrated (hardware and fully-featured OS) computer platforms based on FPGA, by Embedded Computing Specialists (Brussels);
  • New startup Zeropoint Technologies (Stockholm), which is innovating ultrafast memory compression systems;
  • RWTH Aachen spinoff SILEXICA, just awarded $8 million in series A funding and celebrating the release of its next generation SLX Tool Suite for multicore platforms;
  • Keynotes from well-known experts Kathryn McKinley (Microsoft), Sarita Adve (University of Illinois) and Sandro Gaycken (Digital Society Institute, ESMT Berlin) will focus on data centre tail latency, memory hierarchies in the era of specialization, and the ‘as yet unsolvable problem’ of cybersecurity.

The City of Stockholm will host a conference evening reception at the famous Stockholm City Hall, home of the Nobel Prize banquet. Once again, the biggest international names in technology have shown their confidence in HiPEAC by generously supporting the conference.

Source: Barcelona Supercomputing Center

The post HiPEAC Conference Begins January 23 appeared first on HPCwire.

NEC Joins Forces With Micro Strategies

Wed, 01/18/2017 - 06:40

IRVING, Tex., Jan. 18 — NEC Corporation of America (NEC), a leading provider and integrator of advanced IT, communications, networking and biometric solutions, today announced that it has significantly strengthened its channel in data networking with the addition of Parsippany, New Jersey-based Micro Strategies Inc., a leading provider of enterprise technology solutions for over 30 years. Micro Strategies specializes in the implementation of Networking, Mobility, Analytics, Security, Cloud, Infrastructure, Software, ECM, and High Availability solutions.

“We are delighted to join with Micro Strategies, one of the fastest growing companies in our space over the past 12 years, averaging annual revenue growth of around twelve percent,” said Larry Levenberg, Vice President, NEC Corporation of America. “This complementary relationship combines our strength in Infrastructure, SDN, and cloud services with Micro Strategies’ growing footprint in multiple facets of IT.”

Micro Strategies has two innovation centers in New Jersey and Pennsylvania, and the NEC relationship will initially focus on delivering a broad range of converged infrastructure technology solutions and backup.

Starting with its mainframes 40 years ago, NEC has engineered highly efficient storage solutions that reduce the ever-growing cost of storing business critical data. NEC storage solutions deliver high performance, superior scalability, and higher data resiliency. Virtualization extends storage infrastructure investments to reduce costs and simplify manageability. NEC’s Express5800 Server Series provides innovative features that address today’s complex IT infrastructure computing needs. Powered by energy efficient and reliable Intel Xeon processors, Express5800 servers deliver the proven performance and advanced functionality that reduce procurement and operational costs.

“We are very pleased to join forces with NEC,” said Anthony Bongiovanni, president and CEO of Micro Strategies. “This is an incredibly exciting time of growth for Micro Strategies and fundamental to our success is our customer-centric focus and the broad range of solutions we are able to offer through our partner relationships. We looked with diligence at how the addition of a partner can benefit our customers and NEC met all the criteria. We feel NEC aligns with our strategy going forward with a similar business philosophy they refer to as ‘Smart Enterprise’.”

Source: NEC

The post NEC Joins Forces With Micro Strategies appeared first on HPCwire.

Women Coders from Russia, Italy, and Poland Top Study

Tue, 01/17/2017 - 16:27

According to a study posted on HackerRank today, the best women coders as judged by performance on HackerRank challenges come from Russia, Italy, and Poland. The U.S. placed 14th. Countries with the largest proportions of women coders participating in the challenges are India, United Arab Emirates, and Romania. The U.S. was 11th.

Attracting women to STEM careers generally and HPC specifically is an ongoing challenge although progress is being made (see HPCwire interview: A Conversation with Women in HPC Director Toni Collis). In the HackerRank study, roughly 17 percent of all coders participating in its challenges are women. Interestingly, the 17 percent figure roughly mirrors the proportion of women in technical positions at Google (17 percent) and Facebook (15 percent) according to HackerRank.

As with all such studies, this one must be taken with a grain of salt. “We began our analysis with an attempt to assess exactly how many HackerRank test takers are female. Though we don’t collect gender data from our users, we were able to assign a gender to about 80% of users based on their first name. We did not include first names with equal gender distributions,” reports HackerRank.
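
As a rough illustration of that kind of name-based labeling, the sketch below tallies a tiny, invented name table and assigns a gender only when one label clearly dominates; the names, counts and 90 percent cutoff are assumptions for the example, not HackerRank’s actual implementation.

    # Hypothetical first-name lookup: observed gender counts per name.
    name_counts = {
        "maria": {"F": 980, "M": 20},
        "ivan":  {"F": 10,  "M": 990},
        "alex":  {"F": 480, "M": 520},  # roughly even split; will be left unlabeled
    }

    def infer_gender(first_name, min_share=0.9):
        counts = name_counts.get(first_name.lower())
        if not counts:
            return None  # unknown name: leave the user unlabeled
        gender, top = max(counts.items(), key=lambda kv: kv[1])
        return gender if top / sum(counts.values()) >= min_share else None

    for name in ["Maria", "Ivan", "Alex", "Sam"]:
        print(name, "->", infer_gender(name))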

To determine the top performers, HackerRank reviewed scores on algorithms challenges, which account for more than 40 percent of all HackerRank tests. Algorithms challenges include sorting data, dynamic programming, searching for keywords, and other logic-based tasks. “Scores typically range from 0 to 115 points, although scores can reach as high as 10,000. We examined the 20 countries with the most female users in order to have large sample sizes. Russia’s female developers, who only account for 7.8 percent of Russian HackerRank users, top the list with an average score of 244.7 on algorithms tests,” according to the blog.

More details can be found in the full blog: https://www.hackerrank.com/work/tech-recruiting-insights/blog/female-developers

The post Women Coders from Russia, Italy, and Poland Top Study appeared first on HPCwire.

Spurred by Global Ambitions, Inspur in Joint HPC Deal with DDN

Tue, 01/17/2017 - 12:30

Inspur, the fast-growth cloud computing and server vendor from China that has several systems on the current Top500 list, and DDN, a leader in high-end storage, have announced a joint sales and marketing agreement to produce solutions based on DDN storage platforms integrated with servers, networking, software and services from Inspur.

The two companies said they will jointly target oil and gas, life sciences, financial services, academia and other sectors.

The two companies have a track record of working together on joint deals, primarily in Asia.

“Inspur has worked closely with DDN on projects across China for many years, and we are excited to expand our collaboration with DDN to deliver joint solutions to customers worldwide,” said Vangel Bojaxhi, Inspur’s worldwide business development manager.

Inspur, founded in 2000, is headquartered in Jinan, Shandong Province, and has 26,000 employees. It has the largest share (18.2 percent) of China’s server market and is, according to the company, the largest server provider for Alibaba and Baidu. According to industry watcher Gartner Group, Inspur was the world’s fastest growing server vendor for the first three quarters of 2016, with server shipment year-on-year growth of 28 percent during that period.

Privately held DDN has evolved from a nearly 100 percent partnerships sales model, as of two years ago, to a 50-50 balance between partnerships and direct sales, according to Larry Jones, DDN’s partner manager for the Inspur relationship. He said the deal was spurred by Inspur’s ambition to expand its reach beyond China, hiring an international business development manager who is familiar with DDN and has worked with Jones in the past. The joint agreement grew from there.

“It’s really exciting for both companies,” he said. “Inspur has its own storage organization, but like most server manufacturers they don’t have an HPC storage offering that’s anywhere near what DDN can do.

“For us, we’ve done some deals with Inspur in China but never on a global basis,” said Jones. The relationship is “in its infancy, but we’re hoping to grow it slowly and build on our mutual relationships with clients and take advantage of the expertise and core competencies of each company.”

While the partnership will help Inspur gain a toehold in the U.S. market, for DDN it’s intended to help the company reach markets in China and Europe, according to Jones. He said this is the first partnership of this kind in the U.S. for Inspur.

“I think it’s going to start more in the commercial marketplace,” he said, “then as time goes on it will progress into the traditional HPC market as Inspur is accepted on a global basis. The things they do in China in the high-end, traditional HPC space, they do with Chinese components. But they are also a Western-component server vendor, so they make computers out of Western components, (x86) machines that look very much like a Dell EMC or an HPE or Lenovo or IBM kind of thing, with Intel and Nvidia processors, as opposed to machines based on all Chinese technologies.

“We’ll be offering a fully integrated stack,” added Jones. “So if you know what we do with Lustre and Spectrum Scale file systems offerings, here’s a set of equipment that has been built, tested, we know it works, deployed, supported, so everything is there. Inspur is in a position to say: ‘Here’s a complete, integrated solution that includes DDN storage.’”

The post Spurred by Global Ambitions, Inspur in Joint HPC Deal with DDN appeared first on HPCwire.

Tabor Communications and nGage Events Announce Convergence of Leverage Big Data and EnterpriseHPC Summits

Tue, 01/17/2017 - 09:41

SAN DIEGO, Calif., Jan. 17 — Tabor Communications and nGage Events today announced the combining of the Leverage Big Data and EnterpriseHPC summits, reflecting the convergence happening as enterprises increasingly leverage High Performance Computing (HPC) to solve modern scaling challenges of the big data era. The goal of the joined events will be to foster a deeper understanding of the advanced scale solutions increasingly being employed to solve Big Data challenges across industries and achieve performance beyond the capabilities of their traditional IT environments. The event is scheduled for March 19-21, 2017 at the Ponte Vedra Inn & Club in Ponte Vedra Beach, Florida.

The summit, “Leverage Big Data + EnterpriseHPC 2017,” will focus on bridging the challenges that CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions face as they work to build systems and applications that require increasing amounts of performance and throughput. Focus areas will include streaming analytics, modeling and simulation, machine learning/AI and the myriad analytics approaches that require real-time performance. As big data models that drive new value mature, many enterprises are hitting the system performance ceiling of traditional IT, and are unaware of options available to them through the High Performance Computing (HPC) and related cloud computing paradigms that have matured in government, science, and academia over the last 30 years, and that are beginning to be employed by leading enterprises today.

The theme of the combined event will be “Integrating High Performance Computing in the Enterprise and Building Big Data Solutions that Scale.”

“Streaming analytics and high-performance computing loom large in the future of enterprises that are realizing the scaling limitations of their legacy environments,” said Tom Tabor, CEO of Tabor Communications. “As organizations develop analytic models that require increasing levels of compute, throughput and storage, there is a growing need to understand how businesses can leverage high performance computing architectures that can meet the increasing demands being put on their infrastructure.”

“In combining the two events,” Tabor continues, “we look to support the leaders who are trying to navigate their scaling challenges, and connect them with others who are finding new and novel ways to succeed.”

“Currently in our fourth year of hosting both of these events, we’ve found the same issues and challenges are being faced on both sides of the infrastructure and business solution equation,” said Philip McKay, President and CEO of nGage Events. “A common denominator is the need for increased dialogue around high performance/high productivity approaches and solutions.

“These are big technology investments into both solutions and infrastructure, not to mention a significant culture shift,” McKay continued. “It’s important for the right people to have the right conversations informed by what some of the trailblazers across the verticals are doing. This will be a holistic look at a transformational phenomenon happening in business, industry and science today.”

“Enterprises that have a Big Data or streaming analytics strategy could well be served to augment their traditional enterprise and cloud architectures with High Performance Computing architectures and hardware,” said Alex Woodie, Co-Chair of the combined summit. “CTOs and technical managers who do not embrace new technologies or create dynamic corporate cultures will find themselves at a competitive disadvantage in the near future as the full impact of the technical talent crunch comes to bear.  This summit focuses on exploring solutions to enterprise Big Data problems and provides a comprehensive overview of approaches for decision makers to ensure their enterprises are successful in the rapidly changing tech landscape.”

The converged Leverage Big Data + EnterpriseHPC 2017 Summit brings together the leaders who are tackling these streaming and high-performance challenges and are responsible for driving the new vision forward. Attendees of this invitation-only summit will interact with leaders across industries faced with similar technical challenges for a summit that aims to build dialogue and share solutions and approaches to delivering both systems and software performance in this emerging era of computing.

The summit will be co-chaired by EnterpriseTech Managing Editor, Doug Black, and Datanami Managing Editor, Alex Woodie.

Attending the Summit

This is an invitation-only hosted summit that is fully paid for qualified attendees, including flight, hotel, meals and summit badge. Targets of the summit include CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions. To apply for an invitation to this exclusive event, please fill out the qualification form at the following link: Hosted Attendee Interest Form.

Summit Sponsors

Current sponsors for the summit include ANSYS, ASRock Rack, Birst, Caringo, Cray, DDN Storage, HDF Group, Impetus, Lawrence Livermore National Lab, Paxata, Quantum, Redline Performance, Striim, Verne Global, with more to be announced. For sponsorship opportunities, please contact us at summit@enterprisehpc.com.

The summit is hosted by Datanami, EnterpriseTech and HPCwire through a partnership between Tabor Communications and nGage Events, the leader in host-based, invitation-only business events.

Source: Tabor Communications

The post Tabor Communications and nGage Events Announce Convergence of Leverage Big Data and EnterpriseHPC Summits appeared first on HPCwire.

DDN and Inspur Sign Joint Sales and Marketing Agreement

Tue, 01/17/2017 - 07:32

SANTA CLARA, Calif., Jan. 17 — DataDirect Networks (DDN) today announced that it has signed a joint sales and marketing agreement with Inspur, a leading China-based, cloud-computing and total-solution-and-services provider, in which the companies will leverage their core strengths and powerful computing technologies to offer industry-leading high-performance computing solutions to HPC customers worldwide.

“DDN is delighted to expand our work with Inspur globally and to build upon the joint success we have achieved in China,” said Larry Jones, DDN’s partner manager for the Inspur relationship. “DDN’s leadership in massively scalable, high-performance storage solutions, combined with Inspur’s global data center and cloud computing solutions, offer customers extremely efficient, world-class infrastructure options.”

Under the terms of the agreement, DDN and Inspur will offer complete, rigorously tested solutions based on DDN storage platforms integrated with servers, networking, software and services provided by Inspur. The jointly-marketed systems will allow customers throughout the world to deploy powerful, cost-effective infrastructure solutions simply, easily and from a single source.

With a focus on innovation, Inspur has proven capabilities in HPC R&D, system design, manufacturing, deployment, operation, service and maintenance of the most advanced petascale supercomputers. Inspur offers complete HPC software, hardware and unique professional services for application development. Inspur HPC and Deep Learning solutions have contributed to solving the world’s most complex scientific, engineering and data analysis problems.

DDN and Inspur will target several key industries with their joint solutions, including oil and gas, life sciences, financial services and academia. Experts from DDN and Inspur are now working on new IT architecture solutions for several key customers.

“Inspur has worked closely with DDN on projects across China for many years, and we are excited to expand our collaboration with DDN to deliver joint solutions to customers worldwide,” said Vangel Bojaxhi, worldwide business development manager, Inspur Technologies. “Together, we offer customers some of the most effective and compelling HPC, Deep Learning, Big Data and Cloud Computing solutions in the market.”

About DDN

DataDirect Networks (DDN) is the world’s leading big data storage supplier to data-intensive, global organizations. For more than 18 years, DDN has designed, developed, deployed and optimized systems, software and storage solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage the power of DDN storage technology and the deep technical expertise of its team to capture, store, process, analyze, collaborate and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include many of the world’s leading financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers. For more information, go to www.ddn.com or call 1-800-837-2298.

About Inspur

Inspur, founded in 1945, is a global leader in total IT solutions and services with 26,000+ employees worldwide. Inspur provides products and complete solutions that are well known for their exceptional quality, performance, competitive costs, energy efficiency, and optimization for specific high workloads in high performance cloud data centers worldwide. As a leading total solutions and service provider, Inspur is proficient in providing IaaS, PaaS, and SaaS solutions including: high-end servers and compute cluster systems, high performance and enterprise mass storage, excellent cloud operating system, and trustworthy information security technology. Inspur is ranked as the largest server manufacturer in China and ranked in the top 5 largest in the world. Currently, it is also the fastest growing server vendor globally. Inspur is also the largest server provider for the worldwide data centers of IT giants like Alibaba and Baidu. Inspur’s business has expanded to 102 countries and regions with more than 8,000 partners. Inspur delivers exceptional hardware, software, and IT services worldwide with R&D centers in China, United States, Japan, Taiwan, Hong Kong, and established business representative branches in 26 countries. For more information, visit http://en.inspur.com.

Source: DDN

The post DDN and Inspur Sign Joint Sales and Marketing Agreement appeared first on HPCwire.

Universities Join Industry Partners to Develop New HPC Service for UK-Based Scientists

Tue, 01/17/2017 - 07:25

Jan. 17 — The GW4 Alliance, together with Cray Inc. and the Met Office, has been awarded £3m by EPSRC to deliver a new Tier 2 high performance computing (HPC) service for UK-based scientists. This unique new service, named ‘Isambard’ after the renowned Victorian engineer Isambard Kingdom Brunel, will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms.

The team will unveil the Isambard project at the Mont-Blanc HPC conference in Barcelona today, in front of an audience of leading academics and organisations including the European Commission.

“This is an exciting time in high performance computing,” said Prof Simon McIntosh-Smith, leader of the project and Professor of High Performance Computing at the University of Bristol. “Scientists have a growing choice of potential computer architectures to choose from, including new 64-bit ARM CPUs, graphics processors, and many-core CPUs from Intel. Choosing the best architecture for an application can be a difficult task, so the new Isambard GW4 Tier 2 HPC service aims to provide access to a wide range of the most promising emerging architectures, all using the same software stack. Isambard is a unique system that will enable direct ‘apples-to-apples’ comparisons across architectures, thus enabling UK scientists to better understand which architecture best suits their application.”

Professor Nick Talbot, Chair of the Board for the GW4 Alliance and Deputy Vice-Chancellor for Research and Impact at the University of Exeter, said: “We are delighted to collaborate with respected industry partners Cray and the Met Office on this multi-million pound project, which will benefit scientists across the UK. This is a clear example of how GW4 can harness the strengths of its universities and industrial partners across the region to produce pioneering solutions to some of our greatest global challenges.”

The GW4 Isambard project exemplifies university-industry collaboration and the world-leading capability of the South West England and South East Wales region in digital innovation, in line with the findings of the recent Science and Innovation Audit.

“At Cray, our mission is to help our customers solve the most demanding technical and scientific problems, and we are constantly evaluating new technologies that can help achieve that,” said Adrian Tate, director of Cray’s EMEA Research Lab. “We are excited to be a part of this important collaboration with GW4 and the Met Office as we work together to explore and evaluate diverse processing technologies within a unified architecture. By building a Centre of Excellence with GW4 and technology partners, we expect deep insights into application efficiency using new processing technologies, and we relish the opportunity to share these insights with the UK scientific community.”

Paul Selwood, Manager for HPC Optimisation at the Met Office, said: “The Met Office is very excited to be involved with this project, which builds on existing collaborations with both Cray and the GW4 Alliance. This system will enable us, in co-operation with our partners, to accelerate insights into how our weather and climate models need to be adapted for these emerging CPU architectures.”

Established in 2013, the GW4 Alliance brings together four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter. It aims to strengthen the economy across the region through undertaking pioneering research with industry partners.

About GW4 Alliance

From the creative arts to the physical sciences, the GW4 Alliance has world-leading scholarship, infrastructure and faculty. The GW4 Alliance has a combined turnover of over £1.8bn, employs over 8,000 staff and trains over 23,000 postgraduate students. The GW4 Alliance aims to cultivate the regional economy, develop a highly skilled workforce and build a research and innovation ecosystem for the South West and Wales.  For more information about GW4 see www.gw4.ac.uk or follow @GW4Alliance.

About the South West England and South East Wales Science and Innovation Audit

GW4 Alliance, University of the West of England (UWE Bristol), Plymouth University, key businesses and Local Enterprise Partnerships across South West England and South East Wales developed the Science and Innovation Audit (SWW-SIA) for the Department for Business, Energy and Industrial Strategy. The report found that the region can lead the UK and compete with the world in advanced engineering and digital innovation. For more information see http://gw4.ac.uk/sww-sia/

About the Engineering and Physical Sciences Research Council (EPSRC)

As the main funding agency for engineering and physical sciences research, our vision is for the UK to be the best place in the world to Research, Discover and Innovate.

By investing £800 million a year in research and postgraduate training, we are building the knowledge and skills base needed to address the scientific and technological challenges facing the nation. Our portfolio covers a vast range of fields from healthcare technologies to structural engineering, manufacturing to mathematics, advanced materials to chemistry. The research we fund has impact across all sectors. It provides a platform for future economic development in the UK and improvements for everyone’s health, lifestyle and culture.

Source: GW4 Alliance

The post Universities Join Industry Partners to Develop New HPC Service for UK-Based Scientists appeared first on HPCwire.

TMT Releases New Version of SequenceL

Tue, 01/17/2017 - 07:00

AUSTIN, Tex., Jan. 17 — Texas Multicore Technologies (TMT) announced the company has released a major new version of its SequenceL functional programming language and auto-parallelizing compiler and tool set.

TMT provides computer programming tools and services to modernize software to run on multicore computing platforms with optimal performance and portability. SequenceL is a compact, powerful functional programming language and auto-parallelizing compiler that quickly and easily converts algorithms to robust, massively parallel code. TMT has worked closely with its strategic platform partners Intel, AMD, Dell, HPE, IBM, and ARM to do the hard work of building low-level platform optimizations into its tools so the broad base of software developers, engineers, and scientists don’t have to.

“The need to exploit parallelism in software is growing rapidly. Why? Because the problems we need to solve at speed are growing both in respect of data volumes and complexity. I’m talking about applications in the sphere of advanced analytics/AI/machine learning/cognitive computing,” said Robin Bloor, Chief Analyst at The Bloor Group. “Tools that can make parallel software easier to construct, efficient and faster in operation are gold dust – especially for the domain experts. An automated approach to software parallelism has become a necessity. As the processor vendors continue to add cores to their products, they are making it increasingly difficult, if not impossible, for anyone to program them effectively.”

Important features in SequenceL v3.0 include:

  • Addition of a free Community Edition available for immediate download. The commercial version is now named Professional Edition and a free trial remains available for download.
  • Significant performance enhancements, including:
    • Improved compiler optimizations for parallel code
    • Improved vectorization in generated code
    • Compatibility with standard BLAS and LAPACK libraries
    • New FFT library based on FFTW
  • Online documentation and tutorials.
  • Support for all popular compute platforms, including x86 (Windows, Mac OS X, Linux), POWER (Linux), and ARM (Linux).

“Our goal has always been to enable all people developing software – not just a gifted few ‘parallel ninjas’ with a lot of time and deep expertise – to quickly and easily unleash the full performance potential of multicore platforms,” said Doug Norton, Chief Marketing Officer of TMT. “The new free Community Edition and support for all popular platforms is a major step to deliver on this goal, building on the successes we have had with our large enterprise customers.”

About Texas Multicore Technologies (TMT)

TMT provides auto-parallelizing computer programming tools and services to modernize software to run on multicore computing platforms with optimal performance and portability. Founded in 2009, the company delivers easy to use, auto-parallelizing, race-free programming solutions based on the powerful SequenceL functional programming language to enable faster and better applications sooner. For more information, visit texasmulticore.com.

Source: TMT

The post TMT Releases New Version of SequenceL appeared first on HPCwire.

Mont-Blanc Project Selects Cavium ThunderX2 for ARM-based HPC Platform

Mon, 01/16/2017 - 10:02

BARCELONA, Jan. 16 — The Mont-Blanc European project has selected Cavium’s (NASDAQ:CAVM) ThunderX2 ARM server processor to power its new high performance computing (HPC) prototype. The ambition of the Mont-Blanc project is to define the architecture of an Exascale-class compute node based on the ARM architecture, and capable of being manufactured at industrial scale. The project takes a holistic approach, encompassing not just hardware, but also operating system and tools, and applications. The new platform will therefore be a key asset to all Mont-Blanc partners, to assess options for maximum compute efficiency, to further develop the software ecosystem for ARM HPC platforms, and to implement life-size tests.

The ThunderX2 product family comprises Cavium’s second-generation 64-bit ARMv8-A server processor SoCs for high performance computing in data center and cloud applications. With fully out-of-order, high-performance custom cores supporting single- and dual-socket configurations, ThunderX2 is optimized to deliver the highest computational performance along with outstanding memory bandwidth and memory capacity. The ThunderX2 processor family is fully compliant with the ARMv8-A architecture specifications as well as ARM’s SBSA and SBBR standards and is widely supported by industry-leading OS, hypervisor, and software tool and application vendors.

The new Mont-Blanc prototype will be built by Atos, the coordinator of phase 3 of Mont-Blanc, using its Bull expertise and products. The platform will leverage the infrastructure of the Bull sequana pre-exascale supercomputer range for network, management, cooling, and power. Atos and Cavium signed an agreement to collaborate to develop this new platform, thus making Mont-Blanc an Alpha-site for ThunderX2.

“ThunderX2 is a server-class chip designed for high compute performance. With the adoption of this new generation of power- and performance-efficient processors, we are entering a new and exciting dimension of the Mont-Blanc project. This already gives us a glimpse of what a European exascale-class HPC platform could be in the near future,” says Etienne Walter, coordinator of phase 3 of the Mont-Blanc project.

“As the race to Exascale intensifies, we are pleased to be the vendor of choice to partner with Atos to deliver the Mont-Blanc platform,” said Rishi Chugh, Director of Marketing, Data Center Processor Group at Cavium. “ThunderX2 builds on the established architecture and ecosystem of ThunderX, delivering performance competitive with the next generation of incumbent processors.”

About the Mont-Blanc project

The current third phase of the Mont-Blanc project continues to take a holistic approach, encompassing hardware, operating system and tools, and applications, with the following targets:

  • Defining the architecture of an Exascale-class compute node based on the ARM architecture, and capable of being manufactured at industrial scale;
  • Assessing the available options for maximum compute efficiency;
  • Developing the matching software ecosystem to pave the way for market acceptance of ARM solutions.

The project is run by a European consortium that includes:

  • Industrial hardware/software technology providers: Atos, using its expertise in supercomputing & Big data following the acquisition of Bull (coordinator – France); ARM, the world leader in embedded high-performance processors (United Kingdom); and AVL, the world’s largest independent company for the development, simulation and testing technology of powertrains (Austria);
  • Academic/research HPC centres: Barcelona Supercomputing Centre (Spain); Swiss Federal Institute of Technology in Zurich (Switzerland); CNRS (CNRS/LIRMM – France); University of Stuttgart (HLRS -Germany); University of Cantabria (Spain); University of Graz (Austria); University of Versailles Saint Quentin (France).

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 671697.

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of integrated, software compatible processors ranging in performance from 1Gbps to 100Gbps that enable secure, intelligent functionality in Enterprise, Data Center, Broadband/Consumer, Mobile and Service Provider Equipment, highly programmable switches which scale to 3.2Tbps and Ethernet and Fibre Channel adapters up to 100Gbps. Cavium processors are supported by ecosystem partners that provide operating systems, tools and application support, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, China and Taiwan. For more information, visit http://www.cavium.com.

Source: Cavium, Inc.

The post Mont-Blanc Project Selects Cavium ThunderX2 for ARM-based HPC Platform appeared first on HPCwire.

Exascale Computing Project Announces 2017 Argonne Training Program on Extreme-Scale Computing

Mon, 01/16/2017 - 07:00

Jan. 16 — The U.S. Exascale Computing Project has announced its 2017 Argonne Training Program on Extreme-Scale Computing (ATPESC) scheduled for July 30 – August 11, 2017, near Chicago, Ill., and computational scientists now have the opportunity to apply.

With the challenges posed by the architecture and software environments of today’s most powerful supercomputers, and even greater complexity on the horizon from next-generation and exascale systems, there is a critical need for specialized, in-depth training for the computational scientists poised to facilitate breakthrough science and engineering using these amazing resources.

This program provides intensive hands-on training on the key skills, approaches and tools to design, implement, and execute computational science and engineering applications on current supercomputers and the HPC systems of the future. As a bridge to that future, this two-week program fills many gaps that exist in the training computational scientists typically receive through formal education or shorter courses. The 2017 ATPESC program will be held at a new location this year: the Q Center, one of the largest conference facilities in the Midwest, located just outside Chicago.

Instructions for applying to the program can be found at http://extremecomputingtraining.anl.gov/, and the deadline for applicant submissions is Friday, March 10, 2017.

Program Curriculum

Renowned scientists, HPC experts and leaders will serve as lecturers and will guide the hands-on laboratory sessions. The core curriculum will address:

  • Computer architectures and their predicted evolution
  • Programming methodologies effective across a variety of today’s supercomputers and that are expected to be applicable to exascale systems
  • Approaches for performance portability among current and future architectures
  • Numerical algorithms and mathematical software
  • Performance measurement and debugging tools
  • Data analysis, visualization, and methodologies and tools for Big Data applications
  • Approaches to building community codes for HPC systems

Eligibility and Application

Doctoral students, postdocs, and computational scientists interested in attending ATPESC can review eligibility and application details on the event website.

Cost

There are no fees to participate. Domestic airfare, meals, and lodging are provided.

Source: Exascale Computing Project

The post Exascale Computing Project Announces 2017 Argonne Training Program on Extreme-Scale Computing appeared first on HPCwire.

Distributed Computing Project Einstein@Home Discovers New Gamma-Ray Pulsars

Fri, 01/13/2017 - 12:38

Jan. 13 — An analysis that would have taken more than a thousand years on a single computer has found within one year more than a dozen new rapidly rotating neutron stars in data from the Fermi gamma-ray space telescope. With computing power donated by volunteers from all over the world, an international team led by researchers at the Max Planck Institute for Gravitational Physics in Hannover, Germany, searched for tell-tale periodicities in 118 Fermi sources of unknown nature. In 13 of them they discovered a rotating neutron star at the heart of the source. While these all are – astronomically speaking – young, with ages between tens and hundreds of thousands of years, two are spinning surprisingly slowly – slower than any other known gamma-ray pulsar. Another discovery experienced a “glitch”, a sudden change of unknown origin in its otherwise regular rotation.

“We discovered so many new pulsars for three main reasons: the huge computing power provided by Einstein@Home; our invention of novel and more efficient search methods; and the use of newly-improved Fermi-LAT data. These together provided unprecedented sensitivity for our large survey of more than 100 Fermi catalog sources,” says Dr. Colin Clark, lead author of the paper now published in The Astrophysical Journal.

Neutron stars are compact remnants from supernova explosions and consist of exotic, extremely dense matter. They measure about 20 kilometers across and weigh as much as half a million Earths. Because of their strong magnetic fields and fast rotation they emit beamed radio waves and energetic gamma rays like a cosmic lighthouse. If these beams point towards Earth once or twice per rotation, the neutron star becomes visible as a pulsating radio or gamma-ray source – a so-called pulsar.

“Blindly” detecting gamma-ray pulsars

Finding these periodic pulsations from gamma-ray pulsars is very difficult. On average only 10 photons per day are detected from a typical pulsar by the Large Area Telescope (LAT) onboard the Fermi spacecraft. To detect periodicities, years of data must be analyzed, during which the pulsar might rotate billions of times. For each photon one must determine exactly when during a single split-second rotation it was emitted. This requires searching over years-long data sets with very fine resolution in order not to miss a signal. The computing power required for these “blind searches” – when little to no information about the pulsar is known beforehand – is enormous.
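
To make the idea concrete, here is a minimal sketch of such a periodicity search in Python: photon arrival times are folded at a trial spin frequency and the clustering of the resulting phases is scored with a Rayleigh test. The toy data, the 10 Hz signal and the one-day span are assumptions for illustration only; the Einstein@Home pipeline uses far more sophisticated methods over years of data and billions of trials.

    import numpy as np

    def rayleigh_power(times, trial_freq):
        # Fold arrival times at the trial spin frequency and measure how strongly
        # the resulting rotational phases cluster (Rayleigh Z^2 statistic).
        phases = 2.0 * np.pi * ((times * trial_freq) % 1.0)
        n = len(times)
        return (2.0 / n) * (np.cos(phases).sum() ** 2 + np.sin(phases).sum() ** 2)

    # Toy data: one day of photons from a hypothetical 10 Hz pulsar plus background.
    rng = np.random.default_rng(42)
    span, f_true = 86400.0, 10.0
    pulsed = (rng.integers(0, int(span * f_true), 300) + rng.normal(0, 0.02, 300)) / f_true
    background = rng.uniform(0.0, span, 700)
    times = np.sort(np.concatenate([pulsed, background]))

    # Trial frequencies must be spaced roughly 1/span apart, so years of Fermi-LAT
    # data mean vastly more trials; that is where the enormous computing cost arises.
    freqs = np.arange(f_true - 0.005, f_true + 0.005, 0.1 / span)
    powers = np.array([rayleigh_power(times, f) for f in freqs])
    print("best trial frequency: %.6f Hz" % freqs[powers.argmax()])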

Previous similar blind searches have detected 37 gamma-ray pulsars in Fermi-LAT data. All blind-search discoveries in the past 4 years have been made by Einstein@Home, which has found a total of 21 gamma-ray pulsars this way – more than a third of all such objects discovered through blind searches.

Computing resource Einstein@Home

Enlisting the help of tens of thousands of volunteers from all around the world who donate idle compute cycles on their home computers, the team was able to conduct a large-scale survey with the distributed computing project Einstein@Home. In total this search required about 10,000 years of CPU core time. It would have taken more than one thousand years on a single household computer. On Einstein@Home it finished within one year – even though it only used part of the project’s resources.

The scientists selected their targets from 1000 unidentified sources in the Fermi-LAT Third Source Catalog by their gamma-ray energy distribution as the most “pulsar-like” objects. For each of the 118 selected sources, they used novel, highly efficient methods to analyze the detected gamma-ray photons for hidden periodicities.

The entire article can be found here

Source: Albert Einstein Institute Hannover 

The post Distributed Computing Project Einstein@Home Discovers New Gamma-Ray Pulsars appeared first on HPCwire.

Weekly Twitter Roundup (Jan. 12, 2017)

Thu, 01/12/2017 - 14:03

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

Algorithms for #HPC scientific simulation developed by @KAUST_ECRC team pic.twitter.com/KJuaq6PMnw

— Bilel Hadri (@mnoukhiya) January 11, 2017

A @NERSC staffer found Gerty Cori stamps issued by @USPS in 2007 & had 2share. DYK Cori is namesake of the @ENERGY facility's flagship #HPC? pic.twitter.com/cdE5ZxvsLy

— Berkeley Lab CS (@LBNLcs) January 9, 2017

Will CPU affinity ever get fixed, because it's so wonky still and yet core count keeps going up. #HPCquestions #HPC

— Fernanda Foertter (@hpcprogrammer) January 10, 2017

CTO Matt Starr discusses #Spectra's deep #storage solutions for #HPC and what's in store for 2017. #WorldwideSalesMeetings @StarrFiles pic.twitter.com/lP1Vj3XXLo

— Spectra Logic (@spectralogic) January 10, 2017

Rich Computational ressources available @KAUST_News #hpcmatters #hpc pic.twitter.com/waSM4maYx5

— Bilel Hadri (@mnoukhiya) January 11, 2017

Lustre read volumes over the last few days on Cori's scratch file system. Last few bursts integrated to > 1 PiB of data read each–yikes! pic.twitter.com/hhYAKLdwig

— Glenn K. Lockwood (@glennklockwood) January 11, 2017

Our "Basics of Supercomputing" workshop is happening now! Learn more about future events Email: rc-announce-join@colorado.edu #CU_RC #HPC pic.twitter.com/cMxUb36NXF

— CU-Boulder RC (@CUBoulderRC) January 9, 2017

We are honored to receive the @Ford IT Innovation Award, helping speed their dev cycle w/ #DGX1 built on Tesla P100 GPUs #AI #deeplearning. pic.twitter.com/u7GcQrgvLM

— NVIDIA Data Center (@NVIDIADC) January 12, 2017

https://twitter.com/ORNL/status/819183753398521856

KAUST's Computational Ecosystem talk by Prof. Keyes #WEP2017 @KAUST_News #HPC pic.twitter.com/OUheAmZIuD

— Bilel Hadri (@mnoukhiya) January 11, 2017

Bright is proud to be a member of the #Teratec association and take part in the 23rd Teratec General Assembly in #Paris today #hpc pic.twitter.com/sBD3gEOhv6

— Bright – EMEA (@BrightEMEA) January 12, 2017

Today we are excited to be at the @HPE 2017 Federal Partner Summit #HPC pic.twitter.com/xV0Kj2iRZS

— ComnetCo (@ComnetCo_Inc) January 10, 2017

Click here to view the top tweets from last week.

The post Weekly Twitter Roundup (Jan. 12, 2017) appeared first on HPCwire.

D-Wave Systems Releases Open-Source Quantum Software Tool

Thu, 01/12/2017 - 10:49

Jan. 12 — D-Wave Systems Inc., the leader in the development and delivery of quantum computing systems and software, has released an open-source quantum software tool as part of its strategic initiative to build and foster a quantum software development ecosystem. The new tool, qbsolv, enables developers to build higher-level tools and applications leveraging the quantum computing power of systems provided by D-Wave, without the need to understand the complex physics of quantum computers. The promise of qbsolv and quantum acceleration is to enable faster solutions to larger and more complex problems. Users given early access to qbsolv have validated its benefits, and started to develop and release new software tools and applications building on it.

Qbsolv is used to solve large optimization problems, useful in a wide range of important applications. Qbsolv handles large problems by automatically breaking them down into smaller segments that can run individually on D-Wave’s quantum processor, then combining the individual answers into one overall solution. To date, users have shown that qbsolv enables solution of problems up to twenty times larger than could be solved on a D-Wave processor without using qbsolv. As the power of the D-Wave system continues to increase, the size of the individual problem segments will increase, allowing solution of even larger problems in less time.  Qbsolv, along with a technical white paper, is available now on GitHub, the online software repository, at github.com/dwavesystems/qbsolv.

Users given early access to qbsolv have already validated its use in several domains, including:

  • Scientists at Los Alamos National Laboratory used qbsolv with a D-Wave system to find better ways of splitting the molecules on which they performed electronic structure calculations, among the most computationally intensive of all scientific calculations. In some cases this new method gave better results than the industry-standard graph partitioner and the winner of last year’s graph-partitioning challenge.
  • Scientists at a research institute are using qbsolv to find a faster solution to the multiple-sequence-alignment (MSA) problem from genomics, a computationally hard problem used to study the evolution and function of DNA, RNA and protein.

D-Wave’s release of qbsolv adds to a growing software ecosystem that enables application developers to use D-Wave’s quantum systems more quickly and easily.  Quantum software companies such as 1QBit, QC Ware and QxBranch as well as users of D-Wave’s systems are building tools and applications. D-Wave often works with these diverse groups to foster development of software that is both useful and effectively leverages quantum resources.

“Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community,” said Bo Ewald, president, D-Wave International Inc. “D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers.”

The quantum applications developer community includes noteworthy players such as Fred Glover, the inventor of the tabu search solver. “Qbsolv opens a world of possibility for developers,” said Glover. “Quantum applications are a new frontier and we’re just beginning to scratch the surface of what can be accomplished. We are looking forward to implementing new hybrid quantum/classical algorithms working with D-Wave’s qbsolv developers.”

Scott Pakin, a computer scientist at Los Alamos National Laboratory, has built a quantum macro assembler, available on GitHub, that leverages qbsolv to create programs that would otherwise be too large to implement on the D-Wave system. Other prominent national labs are also using qbsolv to develop quantum computing frameworks that they hope to open source.

In order to encourage widespread use of the tool, qbsolv is designed to ingest a quadratic unconstrained binary optimization (QUBO) format that is familiar and accessible to application developers. The QUBO form has been used with classical computing to solve many different kinds of problems. As an example, in one clinical trial an application developed in QUBO form was successful in predicting epileptic seizures 20-40 minutes in advance of their occurrence. Another study using the QUBO form explored grouping machines and parts together in a flexible manufacturing system in order to facilitate economies in time and cost.
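To make the QUBO form concrete, the sketch below encodes a toy number-partitioning task as the same kind of coefficient dictionary a QUBO solver consumes, then checks it by brute force. The problem, the numbers and the helper names are illustrative assumptions, not taken from the qbsolv white paper or the studies mentioned above.

```python
# Hedged example of the QUBO form: minimize x^T Q x over binary x.
# Toy problem: split a list of numbers into two groups with equal sums.
import itertools

def partition_qubo(values):
    """Build QUBO coefficients for number partitioning.

    Minimizing (2 * sum_i s_i*x_i - S)^2 expands, up to a constant, to
    Q[i][i] = 4*s_i*(s_i - S) and Q[i][j] = 8*s_i*s_j for i < j.
    """
    S = sum(values)
    Q = {}
    for i, si in enumerate(values):
        Q[(i, i)] = 4 * si * (si - S)
        for j in range(i + 1, len(values)):
            Q[(i, j)] = 8 * si * values[j]
    return Q

def brute_force_minimum(Q, n):
    """Exhaustively find the lowest-energy binary assignment (fine for tiny n)."""
    def energy(x):
        return sum(w * x[i] * x[j] for (i, j), w in Q.items())
    return min(itertools.product((0, 1), repeat=n), key=energy)

values = [3, 1, 1, 2, 2, 1]
best = brute_force_minimum(partition_qubo(values), len(values))
left = [v for v, bit in zip(values, best) if bit]
right = [v for v, bit in zip(values, best) if not bit]
print(left, sum(left), "|", right, sum(right))   # two groups with equal sums (5 and 5)
```

A hybrid solver such as qbsolv consumes the same kind of coefficient dictionary, but handles problems far too large for the brute-force check used here.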

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems and software, and the world’s only commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing to solve the most challenging national defense, scientific, technical, and commercial problems. D-Wave’s systems are being used by some of the world’s most advanced organizations, including Lockheed Martin, Google, NASA Ames and Los Alamos National Laboratory. With headquarters near Vancouver, Canada, D-Wave’s U.S. operations are based in Palo Alto, CA, and Hanover, MD. D-Wave has a blue-chip investor base including Goldman Sachs, Bezos Expeditions, DFJ, In-Q-Tel, BDC Capital, Growthworks, Harris & Harris Group, International Investment and Underwriting, and Kensington Partners Limited. For more information, visit: www.dwavesys.com.

Source: D-Wave Systems

The post D-Wave Systems Releases Open-Source Quantum Software Tool appeared first on HPCwire.

NSF Seeks Input on Cyberinfrastructure Advances Needed

Thu, 01/12/2017 - 10:47

In case you missed it, the National Science Foundation posted a “Dear Colleague Letter” (DCL) late last week seeking input on needs for the next generation of cyberinfrastructure to support science and engineering. “With this DCL, NSF seeks input that provides a holistic view of the future needs for advanced cyberinfrastructure for advancing the Nation’s research enterprise,” states the letter.

Here is an excerpt from the DCL:

“The National Science Foundation (NSF) embraces an expansive, ecosystem view of research cyberinfrastructure – spanning advanced computing resources, data and software infrastructure, workflow systems and approaches, networking, cybersecurity and associated workforce development – elements whose design and deployment are motivated by evolving research priorities as well as the dynamics of the scientific process.

“The critical role of this broad spectrum of shared cyberinfrastructure resources, capabilities and services – and their integration – in enabling science and engineering research has been reaffirmed by the National Strategic Computing Initiative, which was announced in July 2015, and in the National Academies’ 2016 report on Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020. While these efforts are computing-centric, they expose the inherent inseparability of computing from the larger cyber ecosystem. With this DCL, NSF seeks input that provides a holistic view of the future needs for advanced cyberinfrastructure for advancing the Nation’s research enterprise.”

NSF asks responders to answer three guiding questions:

  • Question 1 (maximum 1200 words) – Research Challenge(s). Describe current or emerging science or engineering research challenge(s), providing context in terms of recent research activities and standing questions in the field.
  • Question 2 (maximum 1200 words) – Cyberinfrastructure Needed to Address the Research Challenge(s). Describe any limitations or absence of existing cyberinfrastructure, and/or specific technical advancements in cyberinfrastructure (e.g. advanced computing, data infrastructure, software infrastructure, applications, networking, cybersecurity), that must be addressed to accomplish the identified research challenge(s).
  • Question 3 (maximum 1200 words, optional) – Other considerations. Any other relevant aspects, such as organization, process, learning and workforce development, access, and sustainability, that need to be addressed; or any other issues that NSF should consider.

“We invite you to step outside of the immediate demands of your current research and to think boldly about the opportunities for advancing your discipline in the next decade. We look forward to your contribution to our plans for the future of advanced cyberinfrastructure for the NSF-supported community,” states the DCL. For questions concerning this effort and submission of input, please contact William Miller, Science Advisor, NSF Office of Advanced Cyberinfrastructure, at the following address: nsfci2030rfi@nsf.gov.

Full directions are included with the letter: https://www.tacc.utexas.edu/-/national-science-foundation-dear-colleague-letter

The post NSF Seeks Input on Cyberinfrastructure Advances Needed appeared first on HPCwire.

NSF Approves Bridges Phase 2 Upgrade for Broader Research Use

Thu, 01/12/2017 - 09:36

The recently completed phase 2 upgrade of the Bridges supercomputer at the Pittsburgh Supercomputing Center (PSC) has been approved by the National Science Foundation (NSF), making it available for research allocations to the national scientific community, according to an announcement posted this week on the XSEDE web site.

“Bridges’ new nodes add large-memory and GPU resources that enable researchers who have never used high-performance computing to easily scale their applications to tackle much larger analyses,” says Nick Nystrom, principal investigator in the Bridges project and Senior Director of Research at PSC. “Our goal with Bridges is to transform researchers’ thinking from ‘What can I do within my local computing environment?’ to ‘What problems do I really want to solve?'”

New nodes introduced in this upgrade include two with 12 TB (terabytes) of random-access memory (RAM), 34 with 3 TB of RAM, and 32 with two NVIDIA Tesla P100 GPUs. The configuration of Bridges, using different types of nodes optimized for different computational tasks, represents a new step to provide powerful “heterogeneous computing” to fields that are entirely new to high-performance computing (HPC) as well as to “traditional” HPC users, according to the article.

Bridges offers a distinct and heterogeneous set of computing resources:

  • The large-memory 12- and 3-TB nodes, featuring new “Broadwell” Intel Xeon CPUs, greatly expand Bridges’ capacity for DNA sequence assembly of large genomes and metagenomes (collections of genomes of species living in an environment); execution of applications implemented to use shared memory; and scaling analyses using popular research software packages such as MATLAB, R, and other high-productivity programming languages.
  • The GPU nodes provide the research community with early access to P100 GPUs and significantly expand Bridges’ capacity for “deep learning” in artificial intelligence research and accelerating applications across a wide range of fields in the physical and social sciences and the humanities.
  • The upgrade increases Bridges’ long-term storage capacity to 10 PB (petabytes, or thousands of terabytes), strengthening Bridges’ support for community data collections, advanced data management and project-specific data storage.

“Bridges is a national computing resource and project uniquely configured to support ‘Big Data’ capabilities at scale as well as HPC simulations,” says Irene Qualters, the director of the NSF’s Office of Advanced Cyberinfrastructure. “It explores coherence of these two computing modalities within a reliable and robust platform for science and engineering researchers, and through its participation in XSEDE, it promotes interoperability, broad access and outreach that encourages workflows and data sharing across and beyond research cyberinfrastructure.”

Together with the already operational Phase 1 hardware, the Phase 2 upgrade increases Bridges’ speed to 1.35 Pf/s (Petaflops, or quadrillions of 64-bit floating-point operations per second)—about 29,000 times as fast as a high-end laptop.
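As a quick sanity check on that comparison (the laptop figure below is an assumption for illustration, not part of the announcement), the implied laptop speed is consistent with a modern high-end machine:

```python
# Back-of-the-envelope check of the "about 29,000 times a high-end laptop" figure.
bridges_flops = 1.35e15                     # 1.35 Pf/s after the Phase 2 upgrade
laptop_flops = bridges_flops / 29_000       # implied laptop performance (assumption)
print(f"Implied laptop speed: {laptop_flops / 1e9:.1f} Gflop/s")   # ~46.6 Gflop/s
```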

The complete technical specification of Bridges is available at: https://www.psc.edu/index.php/bridges/user-guide/system-configuration

Full XSEDE article: https://www.xsede.org/bridges-superocomputer-completed

The post NSF Approves Bridges Phase 2 Upgrade for Broader Research Use appeared first on HPCwire.

IBM Introduces New All-Flash Storage Solutions

Thu, 01/12/2017 - 08:47

ARMONK, N.Y., Jan. 12 — IBM (NYSE: IBM) today announced new all-flash storage solutions designed for midrange and large enterprises where high availability, continuous uptime, and performance are critical. The new systems are built to provide the speed and reliability needed for workloads ranging from enterprise resource planning (ERP) and financial transactions to cognitive applications like machine learning and natural language processing. The solutions announced today are designed to support cognitive workloads, which clients can use to uncover trends and patterns that help improve decision-making, customer service and ROI.

IBM continues to push the boundaries of flash design, engineering solutions with the performance to manage the most demanding workloads while delivering “six nines” availability, or continuous operation 99.9999 percent of the time. Through deep integration between IBM Storage and IBM z Systems, these new solutions embed co-developed software that provides data protection, remote replication and optimization for midrange and large enterprises. This advanced microcode is ideal for cognitive workloads on z Systems and Power Systems requiring the highest availability and system reliability possible.
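“Six nines” is a standard availability figure rather than an IBM-specific metric; the short calculation below shows how little downtime per year it permits:

```python
# Downtime allowed by 99.9999 percent ("six nines") availability -- standard
# availability arithmetic, not an IBM-specific commitment.
availability = 0.999999
seconds_per_year = 365.25 * 24 * 3600
downtime_seconds = (1 - availability) * seconds_per_year
print(f"Allowed downtime: {downtime_seconds:.1f} seconds per year")   # ~31.6 s
```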

“The DS8880 All-Flash family is targeted at users that have experienced poor storage performance due to latency, low server utilization, high energy consumption, low system availability and high operating costs. These same users have been listening, learning and understand the data value proposition of being a cognitive business,” said Ed Walsh, general manager, IBM Storage and Software Defined Infrastructure. “In the coming year we expect an awakening by companies to the opportunity that cognitive applications, and hybrid cloud enablement, bring them in a data driven marketplace.”

Today IBM is announcing a new family of DS8880 all-flash systems designed to address a wide variety of business applications, workloads, and use cases where microsecond response times and uncompromised availability are sought. The family includes:

  • Business Class Storage – the IBM DS8884F has been designed for traditional applications such as ERP, order processing, database transactions, customer relationship management and human resources information systems. It offers the lowest entry cost for midrange enterprises with 256 GB Cache (DRAM) and between 6.4-154 TB of Flash Capacity.
  • Enterprise Class Storage – the IBM DS8886F has been engineered for high speed transactional operations like high-performance online transaction processing, high-speed commercial data processing, high-performance data warehouse and data mining and critical financial transaction systems. It provides users 2 TB Cache (DRAM) and between 6.4-614.4 TB of Flash Capacity.
  • Analytic Class Storage – the IBM DS8888F is ideal for cognitive and real-time analytics and decision making including predictive analytics, real time optimization, machine learning and cognitive systems, natural language speech and video processing. To support this it delivers 2 TB Cache (DRAM) and between 6.4 TB-1.22 PB of Flash Capacity providing superior performance and capacity able to address the most demanding business workload requirements.

Working through a network of offices and supported by a team of over 850, the Health Insurance Institute of Slovenia (Zavod za zdravstveno zavarovanje Slovenije) provides health insurance to approximately two million customers. To successfully manage its new customer-facing applications (e.g. electronic order processing and electronic receipts), its storage system required additional capacity and performance. After evaluating solutions capable of managing these applications – including offerings from both Hitachi and EMC – the organization deployed the IBM DS8886 along with IBM DB2 for z/OS data server software to provide an integrated data backup and restore system.

“As long-time users of IBM storage infrastructure and mainframes, our upgrade to the IBM DS8000 with IBM business partner Comparex was an easy choice. Since then, its high performance and reliability have led us to continually deploy newer DS8000 models as new features and functions have provided us new opportunities,” said Bojan Fele, CIO of Health Insurance Institute of Slovenia. “Our DS8000 implementation has improved our reporting capabilities by reducing time to actionable insights. Furthermore, it has increased employee productivity, ensuring we can better serve our clients.”

According to Scott Sinclair, a senior analyst at ESG, IBM’s new family of all-flash DS8880 solutions is impressive in terms of performance gains, and the move to offer flash across the portfolio is a significant step. He added that by moving data along a new path, IBM is taking innovation to the next level in order to take advantage of next-generation technologies, and that the classification of the new DS8880 family into business, enterprise, and analytic solutions makes a lot of sense.

Availability 

The new family of DS8880 all-flash data systems will be available worldwide on January 20, 2017 from IBM and through IBM Business Partners. Visit the IBM DS8880 landing page for more information about the solutions announced today.

Source: IBM

The post IBM Introduces New All-Flash Storage Solutions appeared first on HPCwire.

Researchers Use TACC Supercomputer to Create All-Atom Simulation of Genome Editing in Action

Thu, 01/12/2017 - 07:21

Jan. 12 — One of the most talked about biological breakthroughs in the past decade was the discovery of the genome editing tool CRISPR/Cas9, which can alter DNA and potentially remove the root causes of many hereditary diseases.

Originally found as part of the immune system of the bacterium Streptococcus pyogenes, CRISPR-associated protein 9 (Cas9), in its native state, recognizes foreign DNA sequences and disables them.

In bacteria, the system targets foreign viral DNA from bacteriophages – DNA that the cell has already recognized as an enemy over its evolutionary history and of which it has incorporated a record into its own genome.

CRISPR (clustered regularly interspaced short palindromic repeats, pronounced “crisper”) refers to segments of DNA that contain short repetitions of base sequences followed by short segments of “spacer DNA” derived from previous exposures to foreign DNA. The CRISPR/Cas9 complex consists of proteins that unwind DNA, others that cut the double helix at a specific location, and a guide RNA that recognizes enemy DNA in the cell.

Researchers studying this ancient immune system realized that, by changing the sequence of the guide RNA to match a given target, it could be used to cut not just viral DNA, but any DNA sequence at a precisely chosen location. Furthermore, new sections of DNA could be introduced to join to the newly cut sections.

The method was first conceived and developed by Jennifer Doudna (University of California, Berkeley) and Emmanuelle Charpentier (Umeå University) and has been used in cultured cells — including stem cells — and in fertilized eggs to create transgenic animals with targeted mutations that help researchers study gene function.

CRISPR/Cas9 can affect many genes at once, allowing for the treatment of diseases that involve the interaction of multiple genes.

The method is improving rapidly and is expected to one day have applications in basic research, drug development, agriculture, and the clinical treatments of human patients with genetic diseases.

However, creating targeted CRISPR/Cas9 mutations is currently expensive and time-consuming, particularly for large-scale studies. The process is also error-prone, limiting its widespread use. These problems stem, in part, from a lack of full understanding of how CRISPR/Cas9 works at the molecular level.

In November, a research team from the University of North Texas (UNT) led by Jin Liu used the Maverick supercomputer at the Texas Advanced Computing Center (TACC) to perform the first all-atom molecular dynamics simulations of Cas9-catalyzed DNA cleavage in action.

The simulations, reported in the Nature journal Scientific Reports, shed light on the process of Cas9 genome editing. They also helped resolve controversies about specific aspects of the cutting, such as where precisely the edits occur and whether Cas9 generates blunt-ended breaks or staggered breaks with overhangs in the DNA.

“Right now there are quite a few problems in how we use this in the therapeutic applications. The specificity and efficiency of the enzyme are not high,” Liu said. “It is also difficult to deliver the enzyme to the position of the gene editing. To solve these problems, first, we need to know how this enzyme works. Our research is providing the foundation for the understanding of the mechanism of Cas9.”

The entire article can be found on the TACC website.

Source: Aaron Dubrow, TACC

The post Researchers Use TACC Supercomputer to Create All-Atom Simulation of Genome Editing in Action appeared first on HPCwire.

Inspur Joins Open Compute Project as Platinum Member

Thu, 01/12/2017 - 06:44

Jan. 12 — Inspur, a leading cloud computing total solutions provider, announced today that it has joined the Open Compute Project (OCP) as a Platinum member, reflecting its dedication and strategic focus on driving innovation in hyperscale computing and cloud-based solutions. OCP was initiated by Facebook in 2011 with the mission to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing — reducing the environmental impact of data centers. Since then, OCP has consistently innovated around open source contributions for networking, servers, storage and Open Rack. In strong support of OCP’s mission as a Platinum member, Inspur plans to lead the move toward convergence among the different Open Rack Scale platforms currently available in the market, providing a way to balance complexity, energy consumption, space and cost considerations against the demands of compute-intensive applications.

“We are thrilled to welcome the Inspur team as a new Platinum member, and look forward to their assistance in accelerating OCP’s innovation and adoption worldwide,” stated Corey Bell, CEO of the Open Compute Project Foundation.

“Our innovative Inspur Converged Rack Scale Motherboard modular design is inspired by the Facebook Tioga Pass and other Open Rack Scale standards, and takes open motherboard designs to the next level,” said Dolly Wu, vice GM of Inspur’s American region. “We believe this is a major breakthrough in making convergence possible for Open Rack scale platforms. This revolutionary design will help to further lower TCO for the Cloud Datacenter and enable hyperscale deployments to be even faster and more efficient.”

Inspur plans to attend the upcoming OCP Summit on March 8-9 in Santa Clara, CA. More information about Inspur and its current products and solutions can be found at www.inspursystems.com.

About Open Compute Project Foundation

The Open Compute Project Foundation is a 501(c)(6) organization which was founded in 2011 by Facebook, Intel, Rackspace, Goldman Sachs and Andy Bechtolsheim. Our mission is to apply the benefits of open source to hardware and rapidly increase the pace of innovation in, near and around the data center and beyond.

About Inspur

Inspur is a leading global IT total solutions and services provider, founded in 1945 with 26,000+ employees worldwide. Inspur is ranked by Gartner as one of the top 5 largest server manufacturers in the world, is #1 in China, and is currently the fastest growing server vendor globally. Inspur provides its customers with data center products and storage solutions that are Tier1 quality and performance, energy efficient, cost effective and customized for actual workloads and data center environments. As a leader in cloud computing and big data solutions, Inspur is committed to providing the best total solutions at the IaaS, PaaS and SaaS level with high-end servers, mass storage systems, cloud operating system and information security technology. For more information, visit http://www.inspursystems.com.

Source: Inspur

The post Inspur Joins Open Compute Project as Platinum Member appeared first on HPCwire.
