HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

HPC Advisory Council and ISC Group Announce 2018 Student Cluster Competition

Tue, 09/12/2017 - 07:41

SUNNYVALE, Calif., Sept. 12, 2017 — The HPC Advisory Council (HPCAC), the community-led organization dedicated to high-performance computing (HPC) research, outreach and education, and the ISC Group, organizers of Europe’s premier HPC forum, today announced that they have officially kicked off the ISC-HPCAC Student Cluster Competition (SCC) with an open call for team entries inviting international STEM student teams currently enrolled in four-year higher education and undergraduate programs to submit proposals for the 2018 competition. Submissions will be accepted through November 10, 2017. The top twelve teams selected will be announced on November 15, 2017 and face off in Frankfurt, Germany, during the annual ISC High Performance Conference and Exhibition scheduled for June 24-28, 2018.

Now in its seventh year, the Student Cluster Competition enables international teams to take part in a real-time contest focused on advancing STEM disciplines and HPC skills development. To take home top honors, the teams will have to showcase systems of their own design, adhere to strict power constraints and achieve the highest performance across a series of standard HPC benchmarks and applications.

Showcased at the conference’s closing plenary session, the intense three-day competition will culminate in front of thousands of conference attendees. Students will take center stage, alongside HPC luminaries, for a live ceremony to award and recognize each of the participating teams.

“The Student Cluster Competition provides a real-world hands-on education that directly benefits students and their individual studies,” noted Gilad Shainer, chairman of the HPC Advisory Council. “Team members gain access to a wealth of industry expertise, training and tools and hands-on exposure to a range of technologies and techniques they’ll use for competition and throughout their careers. By helping advance their knowledge and capabilities, the entire HPC community benefits.”

“SCC is an opportunity to contribute to the ISC tradition of supporting education and our future workforce,” said Martin Meuer, co-chairman of ISC High Performance. “All of our competitors have access to the full conference and our student programs. This is our way of giving back to the community, to the students, their current studies, and future success. We look forward to welcoming the incoming teams and wish all of the entries good luck.”

“Always fierce rivals in the heat of competition, teams establish peer relationships and lifelong friendships. They’re basically building the next generation of the HPC community,” said Pak Lui, principal architect at Huawei Technologies. “That overall experience is an extremely powerful influencer. It fuels the ongoing rivalries, the number of entries submitted by established teams, and the attraction of entirely new teams,” said the competition’s veteran SCC director. “It’s the combination of the individual students’ and teams’ total experience, their combined contributions of ingenuity, unity and spirit that makes for such a great competition and is what makes this SCC a world-class championship.”

Visit the HPCAC’s 2018 Student Cluster Competition site for more detailed information and criteria and to submit team entries. Preparation for competition includes working with technology partners to design and build a competitive system from commercially available components, and working with advisors and mentors to master the HPC applications, tools, techniques and tricks that are critical to a team’s overall performance during the live competition.

Advocate the Future of HPC as a Sponsor of the ISC-HPCAC Student Cluster Competition

Interested companies are urged to join the HPCAC and ISC Group in ongoing support of STEM student development to further advance the skills of these next-generation HPC experts. Become an SCC sponsor.

Sponsor proceeds go exclusively toward supporting student teams and the costs associated with the live competitions.

To unite in this visionary collaboration and help fulfill the growing global demand for STEM expertise, a range of sponsor packages and promotions is available, including featured brand placements, social media, media coverage and much more.

About ISC Group

The ISC Group organizes ISC High Performance, the world’s oldest and Europe’s premier conference and networking event for the international HPC community. The group’s portfolio includes the TOP500 site featuring the TOP500 List, which is updated twice a year and provides a well-accepted ranking of the 500 most powerful computer systems in the world.

About ISC High Performance

The annual ISC High Performance conference series offers a comprehensive five-day technical symposium focusing on HPC and R&D disciplines, technological development and its application in scientific fields, and adoption in commercial environments.

About the HPC Advisory Council

Founded in 2008, the non-profit HPC Advisory Council (HPCAC) is an international organization with over 400 members committed to education and outreach. Members share expertise, lead special interest groups and have access to the technology center to explore opportunities and evangelize the benefits of HPC technologies, applications and future development. The HPCAC hosts multiple annual conferences and STEM challenges worldwide including the RDMA Student Competition in China and the Student Cluster Competition in Germany. Membership is free of charge and obligation. More information: www.hpcadvisorycouncil.com.

Source: HPCAC & ISC Group


Make the iSLC Superior Connection with Innodisk – Visit Booth 2932 at G2E 2017

Mon, 09/11/2017 - 13:09

Innodisk is showing the latest iSLC products at G2E 2017 in Las Vegas! Visit booth 2932 at the Sands Expo on October 3-5 and compare the cost savings and reliability you get with iSLC versus SLC and MLC. This white paper outlines Innodisk’s superior flash solution at a low price. Download now or visit Innodisk at G2E!


Cavium and Enea Deliver Optimized Platform for uCPE

Mon, 09/11/2017 - 08:46

SAN JOSE, Calif. and STOCKHOLM, Sweden, Sept. 11, 2017 — Cavium, Inc. (NASDAQ: CAVM), a provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking, and Enea (NASDAQ Stockholm: ENEA), a global supplier of network software platforms and world class services, helping customers develop amazing functions for the connected society, today announced availability of optimized uCPE solutions based on Cavium OCTEON TX ARMv8 processor SoC families and Enea NFV Access software platform.

Using universal customer premises equipment (uCPE), service providers can offer their enterprise customers on-demand deployment with the flexibility to choose from a range of virtualized network appliances, both in terms of specific virtual network functions (VNFs) and VNF vendors. In addition to these agile and flexible deployment benefits, the uCPE software platform provides a consistent management interface regardless of the selected network functions, and service-function-chaining of the subscribed VNFs for individual enterprise subscribers.

“uCPE provides on-demand deployment of multiple network functions, each previously implemented as its own purpose-built appliance, plus a virtualization software platform, in cost-effective single-box COTS systems,” said Raj Singh, Vice President & General Manager of the Network & Communication Group at Cavium. “Cavium’s highly integrated OCTEON TX ARMv8 processor SoC families, with high-performance processing, exceptional power efficiency, hardware accelerators abstracted by standard APIs like DPDK, standard software ecosystems and cost-effective, scalable white-box hardware ecosystems, offer highly optimized uCPE processing solutions.”

“Enea NFV Access offers out-of-the-box VNF management and service function chaining for orchestrating on-demand agile VNF deployment – without the overhead that solutions originally developed for data centers carry with them,” said Karl Mörner, SVP Product Management, Enea. “Enea NFV Access is a versatile virtualization software platform streamlined for high networking performance and minimal footprint, and contributes to uCPE agility and innovation, reducing cost and complexity for computing at the network edge.”

Further Reading

Read more about Cavium Octeon TX here: http://www.cavium.com/OCTEON-TX_ARM_Processors.html
Read more about Enea NFV Access here: https://www.enea.com/enea-nfv-access/

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Data Center and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware-reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan. For more information, please visit: http://www.cavium.com.

About Enea 

Enea is a global supplier of network software platforms and world class services, with a vision of helping customers develop amazing functions in a connected society. We are committed to working together with customers and leading hardware vendors as a key contributor in the open source community, developing and hardening optimal software solutions. Every day, more than three billion people around the globe rely on our technologies in a wide range of applications in multiple verticals – from Telecom and Automotive, to Medical and Avionics. We have offices in Europe, North America and Asia, and are listed on Nasdaq Stockholm. Discover more at www.enea.com and start a conversation at info@enea.com.

Source: Cavium


SimScale Hits the 100,000 Users Mark

Mon, 09/11/2017 - 08:40

MUNICH, Germany, Sept. 11, 2017 – Five years after its launch, SimScale has announced that its platform has reached the 100,000-user milestone. In 2013, the company launched the world’s first cloud-based simulation platform, taking a big step toward the democratization of engineering simulation.

Engineering simulation—also known as Computer-Aided Engineering (CAE)—is the usage of computer software to aid in engineering analysis tasks. Consisting mainly of computational fluid dynamics, finite element analysis, and thermal simulation, CAE is based on physical equations and allows an accurate prediction of the behavior of fluids and structures, and as such enables engineers to virtually test the reliability and performance of their products early in the design process.

Until recently, the CAE software market has been dominated by a handful of major players offering traditional on-premises software solutions. The cost of licenses and hardware—often reaching $40,000-60,000—as well as the knowledge and experience necessary to use these solutions, put them out of reach for the majority of small and medium-sized businesses.

Things changed in 2013, when the five founders of SimScale launched an affordable and intuitive engineering simulation solution accessible over the web. Their goal was to enable every engineer and designer to design better products faster with the help of virtual prototyping. To get closer to that goal, toward the end of 2015 the company announced the launch of its Community Plan, which granted completely free access to the full functionality of the platform to any user willing to share their projects publicly. According to SimScale, this decision was an important enabler of user growth and of reaching the 100,000-user milestone the company announced today. The 100,000 users comprise designers, engineers, hobbyists, and students from all over the world who take advantage of the SimScale library of over 1,000 high-quality, publicly available simulation projects, which can be used for free as templates when working on an individual simulation setup.

“We are proud to see that we are getting closer to achieving our initial goal. Within five years, SimScale became an integral part of the design validation process for thousands of successful companies, as well as individual and academic users. Reaching 100,000 users is a milestone proving that we are moving in the right direction. However, there is still a long way ahead of us, knowing that only 1 out of 10 engineers who could benefit from simulation technology is currently using it,” said Agata Krzysztofik, chief marketing officer at SimScale.

About SimScale

SimScale is a provider of powerful web-based 3D simulation technology which is changing the way engineers, designers, and students design products. With a founding team of mechanical engineers, computer scientists, and mathematicians, SimScale’s goal is to enable everyone to design better products faster and cheaper by putting engineering simulation tools into the hands of a broader range of users.

Founded in 2012 and based in Munich, Germany, SimScale is led by five founders: David Heiny, Vincenz Dölle, Alexander Fischer, Johannes Probst, and Anatol Dammer. Currently, SimScale has 70,000 users worldwide. For more information, visit www.simscale.com.

Source: SimScale


GW4 Shortlisted for Times Higher Education Award for ‘Isambard’ Supercomputer

Fri, 09/08/2017 - 20:47

Sept. 8 — The GW4 Alliance [made up of four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter] has been shortlisted under the category of Technological Innovation of the Year at this year’s Times Higher Education [THE] Awards for its world-first supercomputer, Isambard.

The THE Awards are often called the Oscars of the higher education sector. Each year they attract hundreds of entries from UK universities, honouring creativity, efficiency and innovation in the higher education sector.

The world’s first ARM-based production supercomputer

The EPSRC awarded the GW4 Alliance, together with Cray Inc. and the Met Office, £3m to deliver a new Tier 2 high performance computing (HPC) service to benefit scientists across the UK. This collaboration has produced the world’s first ARM-based production supercomputer, named ‘Isambard’ after the renowned Victorian engineer Isambard Kingdom Brunel.

Isambard will enable researchers to choose the best hardware system for their specific scientific problem, improving efficiency and cost-effectiveness. The supercomputer is able to provide system comparison at high speed as it includes over 10,000 high-performance 64-bit ARM cores, making it one of the largest machines of its kind anywhere in the world.

“Testament to the power of industry and academic collaboration”

Professor Simon McIntosh-Smith, lead academic on the project at the University of Bristol, commented: “Since we announced the system we’ve been contacted by a wide range of world-class academic and industrial HPC users asking for access to the service. Isambard could be the first of a new generation of ARM-based supercomputers and it is exciting to see this potential recognised by Times Higher Education.”

Professor Nick Talbot, Chair of GW4 Board and Deputy Vice-Chancellor (Research) at University of Exeter, said: “We are delighted that GW4’s supercomputer, Isambard, has been shortlisted for this prestigious award. The project is a testament to the power of industry and academic collaboration, and we are proud to share in this success with partners Cray, the Met Office and ARM.  Isambard lives up to its venerable namesake in catalysing our region’s expertise in engineering and innovation, and looks set to provide huge benefits to scientists across the UK, and beyond.”

The Times Higher Awards 2017 gala ceremony will take place on Thursday 30 November 2017 at the Grosvenor House Hotel, London.

Further information

Established in 2013, the GW4 Alliance brings together four leading research-intensive universities: Bath, Bristol, Cardiff and Exeter. It aims to strengthen the economy across the region through undertaking pioneering research with industry partners.

Source: GW4 Alliance


XTREME Design Inc. Engaged David Barkai as Adviser for its Entry to the US Market

Fri, 09/08/2017 - 20:30

Sept. 8 — XTREME Design Inc., a Japanese startup offering a cloud-based, virtual, supercomputing-on-demand service, announced that it engaged Dr. David Barkai as an adviser for entering the US market with its own HPC cloud service. In this role, Dr. Barkai will leverage his deep experience in HPC technology to provide the fast-growing company with strategic guidance as it expands into the US market. XTREME Design Inc. is planning to exhibit at Supercomputing 2017 to be held in Denver in November (Booth #1485).

Dr. David Barkai:

David is a long-time HPC practitioner with over 35 years of experience. He entered the field shortly after receiving a Ph.D. in theoretical physics. A common thread of David’s career is a focus on the relationships between applications/workloads and architecture. He worked at several HPC companies during their heyday – Control Data, Floating Point Systems, Cray Research and Supercomputing Systems Inc. – as well as a stint at NASA Ames. More recently, he spent 15 years at Intel, most of it as one of its HPC experts. After retiring from Intel, Dr. Barkai stayed involved with HPC through a couple of years each at Cray and SGI.

About XTREME Design Inc.

XTREME Design Inc. is a well-funded startup with over 15 years of experience in high-performance computing and cloud technologies. Their IaaS computing services deliver an easy-to-use customer experience through a robust UI/UX and cloud management features. The XTREME DNA product delivers high-end cloud-based compute capabilities supporting private, public, and hybrid cloud, featuring the latest CPUs, GPUs, and interconnect options. Applications include Computer Aided Engineering (CAE), machine learning, deep learning, high performance data analysis, and the Internet of Things (IoT). For additional information, please contact: info@xd-lab.net https://xd-lab.net/

Source: XTREME Design Inc.


URISC@SC17 and a Tale of Four Unicorns

Fri, 09/08/2017 - 17:53

This will be STEM-Trek’s third year to support a workshop during the annual supercomputing conference, or SC. This year’s program is titled “Understanding Risk in Shared Cyberecosystems,” or URISC@SC17. We’re collaborating with Von Welch, who leads the U.S. National Science Foundation (NSF)-supported Center for Trustworthy Scientific Cyberinfrastructure at Indiana University, and with specialists from the South African Centre for High Performance Computing (CHPC).

We acknowledge that universities struggle to provide professional staff with conference-related travel and advanced training opportunities. Therefore, early-career professionals who work as campus technology facilitators or sysadmins at regional-serving public universities in the U.S. and Africa are invited to apply for travel support by Sept. 11, 2017.

Our SC co-located workshops have had NSF and private support. This year we are thankful that Google is helping, but we’re still fundraising to bridge gaps—flight costs are likely to increase due to petro industry damage caused by Hurricane Harvey, and we would love to support a larger cohort. Anyone interested in helping may contact info@stem-trek.org.

As for return on investment, some prospective donors might think this demographic represents weak sales potential. Few are in the market for new systems or services since they operate on a shoestring; some support hardware that’s ten or more years old with no replacement in sight. What keeps them up at night? Most say it’s a challenge to keep older hardware running, they can’t afford or take time off to train, they don’t have enough time for outreach and education, and everyone struggles with cybersecurity.

What prospective donors may not realize is that the demographic we support represents industry growth in important and often new directions. This might be more difficult to understand for those who haven’t worked with this community as long as we have (I have 13 years of experience with campus tech/10 years global HPC external relations/5 years with African projects). Some are building new centers from scratch, and local advocacy often rises to meet unique regional industry needs. Once trained, many relocate to find better-paying jobs with academic, government or commercial facilities that appreciate their resourcefulness and creativity. I call them “unicorn-generalists” who are likely to become lead decision-makers in the $44 billion HPC industry.[1]

I can think of many unicorn-generalist examples, but I’ll share four today.

Nick Thorne was the lead trainer at the CHPC in Cape Town, and now works as a research engineer with the Large Scale HPC Group at the Texas Advanced Computing Center (TACC).

When I met Thorne in 2012, he was laser-focused on building CHPC Director Happy Sithole’s vision. Dr. Sithole understands the importance of student development and international outreach better than anyone I know. Thorne contributed substantially to the CHPC Advanced Computer Engineering Lab’s effort to establish a student cluster competition for South African universities. This project, aimed at SA human capital development, allowed a winning team to represent SA at the annual International Supercomputing Conference Student Cluster Competition in Germany. SA has placed first or second at ISC since it began competing in 2014. Thorne also supported the Southern African Development Community (SADC) CHPC Ecosystems project, which began with donated decommissioned hardware (part of TACC’s Ranger system, and another system donated by the University of Cambridge). He supervised the refurbishing, distribution and cluster installation at five national and three international sites; the five in SA support student cluster teams and light research. Thorne possesses the rare combination of interpersonal, diplomatic and technical aptitude that makes him a good director candidate. It wouldn’t surprise me if someday Thorne becomes the TACC director (give him another 15-20 years—he’s young, and so is TACC Director Dan Stanzione).

When I first met Chungu Ngolwe, he worked as a systems engineer for ZAMREN, Zambia’s national research and education network. As project leader for the setup of Eduroam for the Zambian Federation, he facilitated workshops on campus network design, routing, switching and Eduroam before assuming the role of ZAMREN’s HPC sysadmin.

Ngolwe attended several workshops that STEM-Trek and CHPC supported between 2012 and 2016, and I have always been impressed with his motivation and enthusiasm. Following a brief interview in 2016, I suggested he might consider specializing in cybersecurity (that epiphany had to do with questions he asked, and his personal interests). With a small child at home, I knew night classes would be inconvenient, so I shared a link to the collection of free or low-cost online instructional materials listed on our web site—training he could pursue in his spare time. My hunch must have been correct. I recently learned that Ngolwe left ZAMREN and now supports cybersecurity for Copperbelt Energy Corporation, one of the largest energy providers on the African continent.

As a high school junior who grew up in South African townships, Zama Mtshali stood at a crossroads. Her grades were excellent, so she knew she would be college-bound. Unfortunately, she didn’t know what to study. With a University of Cape Town prospectus in hand, she searched for academic tracks that would provide her with employable skills. It was her goal to support herself and her family in the future. One sister is a soap opera actress, and her brothers have professional jobs, but none work in STEM fields. No one influenced Mtshali to pursue computer science, but she read that mathematical aptitude was helpful—and she loved math! This was the decision tree that ultimately led Mtshali to her current role as a sysadmin for the largest HPC system for open research on the African continent, Lengau (which means “cheetah” in the Tswana language; the fastest land animal in the world).

Mtshali has great role models since many women hold leadership positions in South African government science and technology divisions (and throughout the SADC region). She also has encouragement and support from CHPC Director Happy Sithole, and the respect of fellow CHPC sysadmins. STEM-Trek is committed to increasing workforce diversity, but we’re often disappointed by the small number of women who apply for travel support to attend HPC workshops; they are the rarest unicorns! When it becomes normalized for women everywhere to sit in the HPC “bullpen,” we expect more will follow in Mtshali’s footsteps.

Scott Yockel (Interim Director of Research Computing, Harvard University) is one of many U.S. success stories. We invited Yockel to talk about Harvard’s new green data center at the HPC on Common Ground @SC16 workshop (OCG).

While STEM-Trek can’t take credit for Yockel’s career trajectory, his path has not been unlike many who participate in our workshops. He grew up in Oklahoma, and attended Oklahoma Baptist University where he earned an undergraduate degree in chemistry before completing his graduate degree at the University of North Texas. As a computational chemist, he returned to UNT to manage their HPC facility before accepting a position at Harvard.

Even though Texas isn’t an EPSCoR state (NSF’s Established Program to Stimulate Competitive Research), and UNT benefits from the urban cultural and employment opportunities found in nearby Dallas, his HPC career was influenced as an undergrad by training programs offered by EPSCoR and XSEDE partners in Oklahoma (NSF Extreme Science and Engineering Discovery Environment). He could therefore relate to OCG participants, and recognizes that they’re also driven, resourceful, creative, and mobile. They must be all of these things in order to succeed, and many do!

To learn more about STEM-Trek and to follow URISC@SC17, please visit our website.

[1] EnterpriseTech report, June 21, 2017, “Intersect 360 at ISC: HPC Industry at $44 billion by 2021,” accessed Sept. 6, 2017


Call for Papers: Supercomputing Asia 2018

Fri, 09/08/2017 - 07:49

Sept. 8, 2017 — Supercomputing Asia (SCA) is an inaugural annual conference that brings together an umbrella of notable supercomputing events, with the key objective of promoting a vibrant and relevant HPC ecosystem in Asian countries. It will be held from 26 to 29 March 2018 at the Resorts World Convention Centre, Singapore.

Riding on the success of the previous year, Supercomputing Frontiers will be rebranded as Supercomputing Frontiers Asia (SCFA), which serves as the technical programme for SCA18. The technical programme consists of four tracks (more detail here):

  • Application, Algorithms & Libraries
  • Programming System Software
  • Architecture, Network/Communications & Management
  • Data, Storage & Visualisation

Paper Submission

Abstract submissions due: 6 Oct 2017

Paper submissions due: 13 Oct 2017

All submissions must follow Springer’s LNCS format (http://www.springer.com/computer/lncs/lncs+authors) without changing default margins, fonts, etc. The total page limit is 18 pages, excluding references. Supplementary materials that facilitate verification of the results, e.g., source code, proof details, etc., may be appended without a page limit or uploaded as separate files, but reviewers are neither required to read them nor will they be printed in the proceedings. Hence submissions must be complete, intelligible and self-contained within the 18-page bound.

Papers should have page numbers to facilitate their review. In LaTeX, this can be achieved, for instance, using \pagestyle{plain}. Each submission must be a single PDF file. Papers should present original research and should provide sufficient background material to make them accessible to the broader community. They must not be submitted in parallel to any other conference or journal. All manuscripts will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the conference. At least one author of an accepted paper must attend SCA18 to present the paper.

Proceedings

The conference proceedings will be published in Springer Nature’s Lecture Notes in Computer Science (LNCS). The camera-ready versions need to be submitted via Springer Nature’s Online Conference Service (OCS) system. The link to the submission site will be provided soon. Please contact the Programme Chairs for any questions/clarifications.

Source: SCFA


QY Research Groups Release HPC Market Study

Fri, 09/08/2017 - 07:39

Sept. 8, 2017 — QY Research Groups has released a new report based on thorough research on the Supercomputer Market. This in-depth report discusses the industry in terms of overview/definition, application, classification, and predictions pertaining to value and volume. It also covers the current situation and outlook from industrial and financial perspectives. Furthermore, it comprises current events, the latest market trends, and a schematic representation of the global companies with their prime developments, mergers & acquisitions, deals and agreements, expansions and investments, etc. Additionally, it discusses vital prospects such as market restraints, growth drivers, challenges and potential opportunities that may affect the overall Supercomputer Market.

Free Sample PDF Copy of Supercomputer Market research report @ www.qyresearchgroups.com/request-sample/501476

Due to the constantly changing business landscape and enhancements in technology, communication processes in many organizations have become intricate. Moreover, customers’ demands have increased and expanded, which in turn requires efficient and effective communication inside an organization. Escalating use of social networking websites, increased use of smartphones, and rising demand for improved enterprise efficiency are the prime growth factors of the Supercomputer market. Enhanced technologies and various information technology tools improve business productivity and functional efficiency. Additionally, many services and solutions are put to use in various industry verticals, including banking and insurance (BFSI), the public sector, travel & hospitality, healthcare, energy & utilities, education, IT & telecom, transportation and logistics, retail, and other sectors such as media and communications.

The methodology used to estimate and forecast the Supercomputer Market starts with gathering data on key vendors through secondary research from various trusted sources, including news articles, presentations, journals and paid databases. In addition, information provided by the vendors is taken into consideration to analyze the segmentation of the market.

The Supercomputer Market report covers:

• Industry summary with market definition and key elements such as market restraints, drivers, potential opportunities, challenges, market trends, etc.
• Market segmentation by product, application, geographical region and competitive market share
• Market size, estimates and forecasts for the stated time frame
• Distribution channel assessment
• Competitive analysis of crucial market manufacturers, trends, company profiles, strategies, etc.
• Factors accountable for the growth of the market
• Thorough geographic assessment of the prime markets
• Factual information, insights and market data backed by statistics and industry sources

Browse the full report of the Global Supercomputer Market with Table of Contents @ www.qyresearchgroups.com/report/global-supercomputer-mark…

This report focuses on top manufacturers in the global market:

• IBM
• Fujitsu
• Lenovo
• NEC
• Dell
• Bull Atos
• Cray
• HPE
• Silicon Graphics International (SGI)
• Sugon Information Industry (Dawning)

The report is worth buying because:

This report on the Supercomputer Market assists in analyzing the condition and situation of the market in the primary regions of the world. Apart from rendering an overview of product manufacturing processes, the research report also covers industry strategy, the latest technological developments, cost structures, product specifications, etc. Future predictions based on the development of this industry are also covered. The report also reviews micro and macro factors vital for new entrants along with the current market players.

Table of contents:

Global Supercomputer Market Research Report 2017
1 Supercomputer Market Overview
1.1 Product Overview and Scope of Supercomputer
1.2 Supercomputer Segment by Type (Product Category)
1.2.1 Global Supercomputer Production and CAGR (%) Comparison by Type (Product Category)(2012-2022)
1.2.2 Global Supercomputer Production Market Share by Type (Product Category) in 2016
1.2.3 Linux
1.2.4 Unix
1.2.5 Other
1.3 Global Supercomputer Segment by Application
1.3.1 Supercomputer Consumption (Sales) Comparison by Application (2012-2022)
1.3.2 Commercial Industries
1.3.3 Research Institutions
1.3.4 Government Entities
1.3.5 Other
1.4 Global Supercomputer Market by Region (2012-2022)
1.4.1 Global Supercomputer Market Size (Value) and CAGR (%) Comparison by Region (2012-2022)
1.4.2 United States Status and Prospect (2012-2022)
1.4.3 EU Status and Prospect (2012-2022)
1.4.4 China Status and Prospect (2012-2022)
1.4.5 Japan Status and Prospect (2012-2022)
1.4.6 South Korea Status and Prospect (2012-2022)
1.4.7 Taiwan Status and Prospect (2012-2022)
1.5 Global Market Size (Value) of Supercomputer (2012-2022)
1.5.1 Global Supercomputer Revenue Status and Outlook (2012-2022)
1.5.2 Global Supercomputer Capacity, Production Status and Outlook (2012-2022)

Check Best Discount offers for your region on Supercomputer Market Research Report @ www.qyresearchgroups.com/check-discount/501476

About Us

QY Research Groups is a company that simplifies how analysts and decision makers get industry data for their business. Our unique colossal technology has been developed to offer refined search capabilities designed to exploit the long tail of free market research whilst eliminating irrelevant results. QY Research Groups is the collection of market intelligence products and services on the Web. We offer reports and update our collection daily to provide you with instant online access to the world’s most complete and current database of expert insights on Global industries, companies, products, and trends.

Source: QY Research Groups


EPFL Physicists Construct New Particle Detector for Large Hadron Collider

Fri, 09/08/2017 - 07:26

Sept. 8, 2017 — EPFL’s physicists are moving forward in their efforts to solve the mysteries of the universe. A particle detector made up of 10,000 kilometers of scintillating fiber is under construction and will be added onto CERN’s particle accelerator.

The Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, produces hundreds of millions of proton collisions per second. But researchers working on the Large Hadron Collider beauty (LHCb) experiment, which involves physicists from EPFL, can only record 2,000 of those collisions, using one of the detectors installed on the accelerator. So in the end, this technological marvel leaves the physicists wanting more. They are convinced that the vast volume of uncaptured data holds the answers to several unresolved questions.

In elementary particle physics, the Standard Model – the theory that best describes phenomena in this field – has been well and truly tried and tested, yet the researchers know that the puzzle is not complete. That’s why they are looking for phenomena that are not accounted for by the Standard Model. This quest for “new physics” could explain the disappearance of antimatter after the Big Bang and the nature of the dark matter that, although it represents around 30% of the universe, can only be detected by astronomical measurements at this point.

“To extract more information from the LHC data, we need new technologies for our LHCb detector,” says Aurelio Bay from EPFL’s Laboratory for High Energy Physics. EPFL has teamed up with several research institutes to develop the new equipment that will upgrade the experiment in 2020.

Using scintillating fiber to detect particles

After five years of work, EPFL’s physicists, together with some 800 international researchers involved in the LHCb project, have just taken an important preliminary step towards significantly enhancing their experimental equipment. They have decided to build a new detector – a scintillating fiber tracker dubbed SciFi.

Construction of the tracker, which incorporates 10,000 kilometers of scintillating fibers each with a diameter of 0.25mm, has already begun. When particles travel through them, the fibers will give off light signals that will be picked up by light-amplifying diodes. The scintillating fibers will be arranged in three panels measuring five by six meters, installed behind a magnet, where the particles exit the LHC accelerator collision point. The particles will pass through several of these fiber ‘mats’ and deposit part of their energy along the way, producing some photons of light that will then be turned into an electric signal.

Data on how the particles traverse the fibers will be enough to reconstruct their trajectory. The physicists will then use this information to restore their primitive physical state. “What we will essentially be doing is tracing these particles’ journey back to their starting point. This should give us some insight into what happened 14 billion years ago, before antimatter disappeared, leaving us with the matter we have today,” says Bay.

Huge data flows

SciFi is a key component for acquiring data at the highest speed, as it includes filters that are designed to preserve only useful data. In an ideal world, the physicists would collect and analyze all of the data without needing to use too many filters. But that would involve a massive amount of data.

“We may already be at the limit, because we of course have to save the data somewhere. First we use magnetic storage and then we distribute the data on the LHC GRID, which includes machines in Italy, the Netherlands, Germany, Spain, at CERN, and in France and the UK. Many countries are taking part, and numerous studies on this data are being run simultaneously,” adds Bay. He points to his computer screen: red is used to denote programs that are not working well or those that have been trying for several days to be included among the priorities.

Bay neatly puts this initiative into a physicist’s perspective: “If the LHC doesn’t have enough energy to uncover new physics, it’s all over for my generation of physicists! We will have to come up with a new machine, for the next generation.”

Source: Sandy Evangelista, EPFL


Appentra’s CEO to Participate as Mentor at the EuroHack 2017

Thu, 09/07/2017 - 19:59

Sept. 7 — Appentra’s CEO Manuel Arenaz has traveled to Lugano, Switzerland, to attend EuroHack 2017, the third GPU Hackathon organized by the Swiss National Supercomputing Centre.

This event is part of the series of 2017 GPU Hackathons organized by the Oak Ridge National Laboratory in the world-class supercomputing facilities at Juelich, Brookhaven, NASA, CSCS and OLCF.

EuroHack 2017 provides an opportunity for groups of users of large hybrid CPU-GPU systems to either port their (potentially) scalable application to GPU accelerators, or optimize an existing GPU-enabled application, on a state-of-the-art GPU system. The goal is that the development teams leave at the end of the week with applications running on GPUs, or at least with a clear roadmap of how to get there.

Dr. Manuel Arenaz is attending as mentor of the ALYA team from the Barcelona Supercomputing Center (BSC). He is helping the BSC team develop an OpenACC version of the software package ALYA, a simulation code for high-performance computational mechanics that solves coupled multiphysics problems. He is using an approach based on the parallel design patterns supported by Appentra’s tool Parallware Trainer, an interactive real-time editor that facilitates the learning, usage and implementation of parallel programming using OpenMP and OpenACC. This approach has already been successfully applied in the GPU mini-hackathon organized at the Supercomputing Center of Galicia (CESGA) earlier this year.
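
For readers unfamiliar with the directive-based approach described above, here is a minimal sketch of what an OpenACC loop offload looks like in plain C. It is an illustrative example only — not code from ALYA and not output of Parallware Trainer — and it assumes an OpenACC-capable compiler such as PGI/NVIDIA (the compiler name and flags in the comment are assumptions, not project requirements).

    /* Illustrative OpenACC sketch only -- a generic loop offload, not ALYA code
     * or Parallware Trainer output. Compile with an OpenACC compiler, e.g.
     * "pgcc -acc -Minfo=accel axpy.c" (flags assumed; check your toolchain).
     * Without OpenACC support the pragma is ignored and the code runs serially. */
    #include <stdio.h>

    /* y = y + a*x, offloaded to the accelerator with a single directive. */
    static void axpy(int n, double a, const double *restrict x, double *restrict y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    int main(void)
    {
        enum { N = 1 << 20 };
        static double x[N], y[N];
        for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }
        axpy(N, 3.0, x, y);
        printf("y[0] = %.1f (expected 5.0)\n", y[0]);
        return 0;
    }

The same loop could just as well carry an OpenMP directive; producing either form from one annotated source is the kind of workflow the article describes Parallware Trainer as supporting.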

“EuroHack 2017 will allow us to get feedback from the participants to continue improving Parallware Trainer. The feedback will be used to develop new features that are of interest in world-class training courses on OpenMP and OpenACC. Moreover, this is a great opportunity to learn how to improve the organization of the second GPU Hackathon at CESGA, which we will organize next year,” said the company.

“In general, this is a great experience! We think that hackathons like this are really important so that scientists and engineers get the knowledge and tools to solve essential problems efficiently.”

Source: Appentra Solutions, S.L.


EU Funds 20 Million Euro ARM+FPGA Exascale Project

Thu, 09/07/2017 - 17:57

At the Barcelona Supercomputer Centre on Wednesday (Sept. 6), 16 partners gathered to launch the EuroEXA project, which invests €20 million over three-and-a-half years into exascale-focused research and development.

Funded under the Horizon 2020 program, EuroEXA picks up the banner of a triad of partner projects — ExaNeSt, EcoScale and ExaNoDe — building on their work to develop a complete HPC system based on ARM Cortex processors and Xilinx Ultrascale FPGAs. The goal is to deploy an energy-efficient petaflops system by 2020 and lay a path to achieve exascale capability in the 2022-23 timeframe.

All told, the European Commission is planning a €50 million investment for the EuroEXA group of projects, spanning “research, innovation and action across applications, system software, hardware, networking, storage, liquid cooling and data centre technologies.”

John Goodacre, professor of computer architectures at the University of Manchester, said, “To deliver the demands of next generation computing and Exa-Scale HPC, it is not possible to simply optimize the components of the existing platform. In EuroEXA, we have taken a holistic approach to break down the inefficiencies of the historic abstractions and bring significant innovation and co-design across the entire computing stack.”

EuroEXA has set an objective to bring “multiple European HPC projects and partners together with the industrial SME focus of [Maxeler] for FPGA data-flow; [Iceotope] for infrastructure; [Allinea] for HPC tooling and [ZeroPoint Technologies] to collapse the memory bottleneck; to co-design a ground-breaking platform capable of scaling peak performance to 400 PFLOP in a peak system power envelope of 30MW; over four times the performance at four times the energy efficiency of today’s HPC platforms. Further, we target a PUE parity rating of 1.0 through use of renewables and immersion-based cooling.”

An integrated and operational prototype machine will be validated with applications from across climate/weather, physics/energy and life-science/bioinformatics domains.

The project is coordinated at the Institute of Communication and Computer Systems in Greece and has 15 project partners across eight countries:

Spain — Barcelona Supercomputing Center
United Kingdom — ARM-UK, Iceotope, Maxeler Technologies, The University Of Manchester, The Hartree Centre of STFC, ECMWF (European Centre For Medium-Range Weather Forecasts)
Greece — FORTH (Foundation For Research And Technology Hellas), Synelixis Solutions Ltd
Belgium — IMEC
Sweden — ZeroPoint Technologies
Netherlands — Neurasmus
Italy — INFN (Istituto Nazionale Di Fisica Nucleare), INAF (Istituto Nazionale Di Astrofisica)
Germany — Fraunhofer-Gesellschaft


Creating a Modular, Building-Block Architecture for Life Science Workflows

Thu, 09/07/2017 - 16:47

As genomic data becomes ubiquitous, infrastructure bottlenecks for life sciences organizations are narrowing. But speedy analysis and real-time decision making don’t have to remain out of reach: modern end-to-end systems are emerging as flexible solutions for a competitive edge.


CoolIT Supports Launch of Canadian Hydrogen Intensity Mapping Experiment

Thu, 09/07/2017 - 16:03

KALEDEN, British Columbia, Sept. 7, 2017 – Canada’s newest and largest radio telescope, the Canadian Hydrogen Intensity Mapping Experiment (CHIME), formally launched today at the Dominion Radio Astrophysical Observatory in Penticton, BC. Proudly supported by CoolIT Systems with a custom Direct Liquid Cooling solution, the CHIME telescope will map out the entire northern sky each day, aiming to constrain the properties and evolution of Dark Energy over a broad swath of cosmic history.

The Honorable Kirsty Duncan, Minister of Science, accompanied by CHIME representatives, today installed the final piece (the last receiver needed to complete construction) of the radio telescope, which will bring to light some of the mysteries of the universe. A tour of the installation followed the announcement.

CHIME is the first research telescope to be built in Canada in more than 30 years and is the product of a collaboration that includes the University of British Columbia (UBC), the University of Toronto, McGill University, and the National Research Council of Canada (NRC). The CHIME collaboration realized early in its planning process that cooling the custom GPU-intensive servers with traditional air conditioning would be difficult and costly, and began exploring liquid-cooled solutions.

“We chose to work with CoolIT Systems because their solutions are modular and robust, and as a result the most flexible and efficient for our situation,” says Dr. Keith Vanderlinde, University of Toronto. “With the custom liquid cooling solution, we can drastically reduce CHIME’s energy consumption and squeeze additional processing out of the GPUs.”

“CHIME ‘sees’ in a fundamentally different way from other telescopes. A massive supercomputer is used to process incoming radio light and digitally piece together an image of the radio sky,” comments Vanderlinde. “All that computing power also lets us do things that were previously impossible: we can look in many directions at once, run several experiments in parallel, and leverage the power of this new instrument in unprecedented ways.”

To complete its primary cosmological mission, mapping out the largest volume of space ever attempted in a survey, CHIME requires a powerful signal processing backend, capable of sustaining real-time correlation of high-cadence radio data. Given the scale of the telescope, with 400 MHz of bandwidth and 2,048 receiving elements, this requires ~8×10¹⁵ integer operations per second (~8 Pop/s) operating 24/7 on a 6.4 Tb/s input stream. All nodes must be able to operate in high ambient temperatures: up to 45°C for extended periods of time.
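
As a rough plausibility check on those figures, the short sketch below redoes the arithmetic. The assumptions — a full correlation over all element pairs including autocorrelations, eight integer operations per complex multiply-accumulate, and 8-bit complex samples — are ours, not CHIME’s published breakdown, so treat the result as an order-of-magnitude estimate.

    /* Back-of-envelope check of the quoted CHIME correlator numbers.
     * Assumptions (ours, not CHIME's): correlation over all element pairs
     * including autos, 8 integer ops per complex multiply-accumulate,
     * and 8-bit (4+4-bit) complex samples. */
    #include <stdio.h>

    int main(void)
    {
        const double n_inputs     = 2048;    /* receiving elements                */
        const double bandwidth_hz = 400e6;   /* ~complex samples/s per input      */
        const double ops_per_cmac = 8.0;     /* 4 multiplies + 4 adds (assumed)   */
        const double bits_per_smp = 8.0;     /* 4-bit real + 4-bit imag (assumed) */

        double baselines = n_inputs * (n_inputs + 1.0) / 2.0;        /* ~2.1e6  */
        double ops_per_s = baselines * bandwidth_hz * ops_per_cmac;  /* ~6.7e15 */
        double input_bps = n_inputs * bandwidth_hz * bits_per_smp;   /* ~6.6e12 */

        printf("correlation rate: %.1e integer ops/s (article: ~8e15)\n", ops_per_s);
        printf("input data rate : %.1f Tb/s (article: 6.4 Tb/s)\n", input_bps / 1e12);
        return 0;
    }

Both numbers land within roughly 20 percent of the quoted ~8 Pop/s and 6.4 Tb/s, which is about as close as round-number assumptions can be expected to get.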

CoolIT Systems’ custom Rack DCLC implementation provides a net cooling effect on room temperature. The liquid-cooled system consists of 256 rack-mounted General Technics GT0180 custom 4U servers housed in 26 racks managed by CoolIT Systems Rack DCLC CHx40 Heat Exchange Modules. The custom direct contact cooling loops manage 100% of the heat generated by the single Intel Xeon E5-2620 v3 CPU and dual AMD FirePro S9300 x2 GPUs in each server, while simultaneously pulling heat from the ambient air into the liquid coolant loops.

About CoolIT Systems, Inc.

CoolIT Systems Inc. (CoolIT) is a world leader in Direct Contact Liquid Cooling (DCLC) for the Data Center, Server and Desktop markets. As an experienced innovator with 50 patents and more than 2 million liquid cooling units deployed, CoolIT brings a wealth of design, engineering, and manufacturing knowledge to the table. CoolIT’s Rack DCLC platform is a modular, rack-based, advanced cooling solution that allows for dramatic increases in rack densities, component performance, and power efficiencies. The technology can be deployed with any server and in any rack making it a truly flexible solution that allows for an edge in today’s highly competitive marketplace.

About CHIME

The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a radio telescope composed of four 20m x 100m parabolic cylindrical reflectors, each with 256 dual-polarization radio receiving elements. The system will map out the entire northern sky each day, aiming to constrain the properties and evolution of Dark Energy over a broad swath of cosmic history.

Source: CoolIT Systems


MIT-IBM Watson AI Lab Targets Algorithms, AI Physics

Thu, 09/07/2017 - 13:51

Investment continues to flow into artificial intelligence research, especially in key areas such as AI algorithms that promise to move the technology from specialized tasks to broader applications that leverage big data and exploit machine and “continuous” learning capabilities.

With those and other goals in mind—including the societal implications of AI—IBM and the Massachusetts Institute of Technology announced an AI research partnership on Thursday (Sept. 7). Along with critical AI algorithms and related deep learning software, the 10-year, $240 million investment in the new MIT-IBM Watson AI Lab will focus on hardware development and applications such as health care and cyber-security.

IBM and university researchers also will “explore the economic and ethical implications of AI on society,” the partners noted.

The AI lab will be co-chaired by Anantha Chandrakasan, dean of MIT’s School of Engineering, and Dario Gil, vice president of AI at IBM Research. Along with algorithms, the partners will seek proposals from MIT researchers and company scientists on AI physics, applications and “advancing shared prosperity through AI.”

MIT President L. Rafael Reif, left, and John Kelly III, IBM senior vice president, Cognitive Solutions and Research, shake hands at the conclusion of a signing ceremony establishing the new MIT–IBM Watson AI Lab. Photo credit: Jake Belcher

The final category is a nod to growing concerns about the economic impact of AI on labor markets as more tasks are automated.

The partners also stressed they would seek to take AI advances to the next level by encouraging MIT faculty and students to launch startups designed to commercialize technologies developed by the new lab.

The AI lab builds on earlier collaboration between IBM and MIT in cognitive sciences designed to advance research in areas such as machine vision.

IBM’s previous research efforts built around its Watson cognitive computing have so far achieved mixed results. According to a report by the web site statnews.com, a three-year effort to promote the technology for recommending cancer treatments has fallen short of expectations. A cancer specialist who has used it told the web site: “Watson for Oncology is in their toddler stage.”

The report also noted that IBM researchers have yet to publish any scientific papers about clinical outcomes based on the technology. Nevertheless, IBM and MIT said they would continue pursuing “optimum treatment paths for specific patients” along with medical applications such as image analysis and securing private medical data.

Separately, the company also is collaborating with MIT and Harvard University on AI and genomics.

Hence, IBM’s new research partnership with MIT looks like an attempt to go back to the drawing board to focus research on fundamentals such as algorithm development and AI “physics.” The latter category would focus on materials, devices and architectures that would support future approaches to AI model training. Among the current techniques is broader use of neural processing units that are incorporated into chipsets to accelerate the execution of AI algorithms.

IBM also said the partnership is intended to exploit the intersection between machine learning and quantum computing, including new quantum devices and using quantum machines as another way to accelerate machine-learning algorithms.

“Today, it takes an enormous amount of time to train high-performing AI models to sufficient accuracy,” IBM’s Gil noted in a blog post. “For very large models, it can be upwards of weeks of compute time on GPU-enabled clusters.”

Meanwhile, algorithm research will focus on moving beyond specialized tasks such as facial recognition and computer vision to address more complex problems. “Researchers will invent new algorithms that can not only leverage big data when available, but also learn from limited data to augment human intelligence,” the partners said.

Acknowledging growing unease about the societal implications of AI technology, the partners also emphasized they would take a step back to consider the economic impact of AI technology. Among the issues addressed will be the creation of “AI systems that can detect and mitigate human biases,” Gil explained, along with “ensuring that AI systems [complement] worker skills that might be in short supply and exploring how productivity gains will be distributed across firms, workers and consumers.”


Oracle Layoffs Reportedly Hit SPARC and Solaris Hard

Thu, 09/07/2017 - 13:21

Oracle’s latest layoffs have many wondering if this is the end of the line for SPARC processor and Solaris OS development. As reported by multiple sources, Oracle filed a notice with the California Employment Development Department on September 1 of plans to lay off 983 employees, including 615 in hardware development, at its Santa Clara facility. This follows cuts of 415 from the SPARC and Solaris teams earlier this year.

The first revelation came on Saturday (Sept. 2) from Simon Phipps, who was involved in the development of Solaris at Sun from 2005 to 2010, in a tweet: “For those unaware, Oracle laid off ~ all Solaris tech staff yesterday in a classic silent EOL of the product.”

IEEE Spectrum ran a brief piece yesterday – R.I.P. SPARC and Solaris – in which it quoted comments from thelayoff.com: “SPARC people are out.” “The entire SPARC core team has been let go as of Friday. It’s gone. No more SPARC. You can’t have a SPARC w/o a team to develop the core.”

Oracle hasn’t officially commented. The SPARC chip, developed in the 1980s at Sun Microsystems, used a novel reduced instruction set computing (RISC) architecture. The IEEE Spectrum article noted, “The SPARC workstation, based on that processor, was a rogue project spearheaded by Andy Bechtolsheim; it came out in 1989 and became Sun’s best-selling product.”

Fujitsu M12 Servers

Oracle agreed to acquire Sun in 2009, and questions have swirled around SPARC’s future for some time as the technology landscape has changed. Fujitsu’s switch from SPARC to ARM for its post-K supercomputer added fuel to speculation of SPARC’s decline. Still, just this past April Fujitsu introduced two new servers – the M12-2 and M12-2S – featuring the new SPARC64 XII chip.

The new servers, said Fujitsu, would “achieve the world’s highest per CPU core performance in arithmetic processing, offering dramatic improvements for a wide range of database workloads, from mission-critical systems on premises to big data processing in the cloud.”

A big part of the intended message at the time was to demonstrate Fujitsu’s ongoing commitment to the SPARC ecosystem. “We feel this is a good empirical marker to show we are continuing to invest in the SPARC platform. This is not a softball product release. These are all significant advances and represent a lot of time and effort,” said Alex Lam, vice president and head of North America strategy. (See HPCwire article, Fujitsu Launches M12 Servers; Emphasizes Commitment to SPARC.)

The San Jose Mercury News report on the layoffs (Oracle slashes at least 900 Santa Clara jobs, more worldwide) suggested Oracle’s pivot to the cloud was a driver of the cuts. Tim Bajarin, principal analyst with Campbell-based Creative Strategies, was quoted: “When Oracle bought Sun Microsystems, Sun was primarily a workstation company. But the workstation demand faded as new technologies became dominant, including software on demand and software as a service offerings. The cloud is the future for technology companies.”

According to the IEEE report, “[Oracle] is cutting other hardware-related positions in the U.S. and around the world, for a total of 2,500 employees terminated. Just a week ago Oracle announced plans to hire 5,000 engineers, sales, and support people for its cloud computing business.”

SPARC processor roadmap (chart, January 2017)

According to an anonymous commenter at thelayoff.com: “M8 will be announced september 19th…; M9 is officially cancelled.”

Link to IEEE Spectrum article: https://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/rip-sparc-and-solaris

Link to San Jose Mercury article: http://www.mercurynews.com/2017/09/05/oracle-slashes-more-than-900-santa-clara-jobs-more-worldwide/

Link to SiliconAngle article: https://siliconangle.com/blog/2017/09/05/oracle-layoffs-signal-end-life-sparc-solaris-products/

Link to thelayoff.com: https://www.thelayoff.com/oracle

The post Oracle Layoffs Reportedly Hit SPARC and Solaris Hard appeared first on HPCwire.

Noblis Releases Online Portal for Bioinformatics Research, Tools

Thu, 09/07/2017 - 13:08

“It’s crucial for government agencies and labs alike to harness the power of life science data, which has the ability to combat and protect against bioterrorism, counter the spread of disease, customize healthcare and medical countermeasures and protect our global food supply chain, among other possibilities,” said Walter Berger, Noblis BioPortal program manager.

Noblis’ bioinformatics research and tools bring the best of the company’s microbiology, bioinformatics, and data science expertise together to solve pressing bioinformatics challenges. Noblis’ tools run on a robust high performance computing infrastructure, and enable advanced bioinformatic analysis in a variety of domains. This allows users to analyze very large datasets, such as next-generation sequencing reads, to turn genomic data into actionable information.

“Noblis is a leader in pairing multidisciplinary teams with high performance computing,” said Dr. Roger Mason, senior vice president of National Security and Intelligence at Noblis. “We are proud to showcase the tangible ways we are helping clients leverage the power of genomic data through Noblis BioPortal.”

The portal also includes information on Noblis research, technical presentations, and life science news, as well as links to various community-developed open source applications that can help solve a variety of complex problems quickly.

“As a nonprofit, investing in research and development that solves our nation’s most pressing problems is at our core,” said Amr ElSawy, Noblis President and CEO. “We are proud to share how we’re advancing bioinformatics research and development with the launch of Noblis BioPortal, and look forward to continuing to work with our federal, academic, and research partners to solve the next generation of bioinformatics challenges.”

To explore Noblis BioPortal, visit noblis.bioportal.org.

About Noblis     

Noblis, Inc. is a nonprofit science, technology, and strategy organization that brings the best of scientific thought, management, and engineering expertise in an environment of independence and objectivity. We work with a wide range of government and industry clients in the areas of national security and intelligence, transportation and telecommunications, citizen services, environmental sustainability, and health. Together with our wholly owned subsidiaries, Noblis ESI and Noblis NSP, we tackle the nation’s toughest problems and support our clients’ most critical missions.

Source: Noblis

The post Noblis Releases Online Portal for Bioinformatics Research, Tools appeared first on HPCwire.

Researchers Report Inventing a New ‘Flip-Flop’ Qubit Technology

Thu, 09/07/2017 - 10:54

Researchers from the University of New South Wales, Australia, yesterday reported a method for creating ‘flip-flop’ qubits, made of single atoms, that can be spaced farther apart than was previously possible while still being packed densely enough for large-scale devices – a sweet spot, if you will. The capability may help overcome a key obstacle to actually building scalable quantum computers and could pave the way for larger spin-based machines.

Writing in Nature Communications (Silicon quantum processor with robust long-distance qubit couplings), the researchers neatly summarize the challenge and how their work may overcome it:

“Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices.

“We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest-neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.”
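To unpack the terminology in the abstract, a rough sketch of the encoding in our own notation (based on the paper’s description, not a formula reproduced from it): the logical states are the two antiparallel electron-nuclear spin configurations of the phosphorus donor,

$$|0\rangle \;\equiv\; |\!\downarrow\,\Uparrow\rangle, \qquad |1\rangle \;\equiv\; |\!\uparrow\,\Downarrow\rangle,$$

where the single arrows label the donor electron spin and the double arrows the ³¹P nuclear spin. Driving the qubit “flip-flops” the two spins in opposite directions, which is what allows it to be controlled with microwave electric fields rather than magnetic ones.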

An account of the work on Phys.org (Flip-flop qubits: Radical new quantum computing design invented) points out that at the other end of the qubit spectrum are superconducting circuits – pursued, for instance, by IBM and Google – and ion traps. These systems are larger and easier to fabricate than atomic-scale qubits, and they currently lead in the number of qubits that can be operated. However, because of their larger dimensions, they may in the long run face challenges in assembling and operating the millions of qubits required by the most useful quantum algorithms.

“Our new silicon-based approach sits right at the sweet spot,” said Andrea Morello, a professor of quantum engineering at UNSW and one of the paper’s authors. “It’s easier to fabricate than atomic-scale devices, but still allows us to place a million qubits on a square millimeter.” In the single-atom qubit approach used by Morello’s team, to which Tosi’s new design applies, a silicon chip is covered with a layer of insulating silicon oxide, on top of which rests a pattern of metallic electrodes that operate at temperatures near absolute zero and in the presence of a very strong magnetic field.

Lead author Guilherme Tosi, a research fellow at the Centre for Quantum Computation and Communication Technology (CQC2T) in Sydney, developed the pioneering concept along with Morello and co-authors Fahd Mohiyaddin, Vivien Schmitt and Stefanie Tenberg of CQC2T, with collaborators Rajib Rahman and Gerhard Klimeck of Purdue University in the U.S.

Link to paper: https://www.nature.com/articles/s41467-017-00378-x

Link to Phys.org article: https://phys.org/news/2017-09-flip-flop-qubits-radical-quantum.html

The post Researchers Report Inventing a New ‘Flip-Flop’ Qubit Technology appeared first on HPCwire.

First Volta-based Nvidia DGX Systems Ship to Boston-based Healthcare Providers

Thu, 09/07/2017 - 10:36

The Center for Clinical Data Science (CCDS), Boston, is at the confluence of major technology trends driving the healthcare industry: AI-based diagnostics of large volumes of medical images, shared among multiple medical institutions, utilizing GPU-based neural networks.

Founded by Massachusetts General Hospital and later joined by Brigham & Women’s Hospital, CCDS today announced it has received what it calls a purpose-built AI supercomputer from Nvidia’s portfolio of DGX systems built on Volta, which Nvidia describes as the biggest GPU on the market.

Later this month, CCDS will also receive a DGX Station, Nvidia’s “personal AI supercomputer,” that the organization will use to develop new training algorithms “and bring the power of AI directly to doctors” in the form of a desk-side system. CCDS is the first Nvidia customer to receive Volta-based DGX systems.

The idea is to provide Boston-area radiologists with AI “assistants” integrated into their daily workflows, helping them more quickly and accurately diagnose disease from MRIs, CAT scans, X-rays and other medical images. CCDS said the trained neural networks residing on DGX-1 systems in its data center “are in a constant state of learning, continually ingesting countless medical images worldwide.”

CCDS’s founding institutions, Mass General and Brigham & Women’s, are Harvard Medical School teaching hospitals and founding members of Partners Healthcare, a Boston-based healthcare network that last April announced a collaboration with Persistent Systems to develop a shared-resource industry cloud – what the companies describe as an industry-wide open-source platform for knowledge exchange and the development of decision-support apps in clinical environments. The platform is based on SMART (an open, standards-based technology platform for integrating apps with electronic health records) and FHIR (Fast Healthcare Interoperability Resources, a standard developed by the Health Level Seven organization). The platform will enable provider systems to deploy “industry-leading best practices in clinical care across their ecosystems.”

CCDS’s GPU-based diagnostics capability could play a significant role in that ecosystem.
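For readers unfamiliar with SMART on FHIR, the sketch below shows roughly what app-level access to such a platform looks like: a plain HTTPS request for a FHIR resource using a bearer token obtained through a SMART authorization flow. The base URL, token and patient ID are hypothetical placeholders, not details of the Partners/Persistent platform.

```python
# Minimal, hypothetical sketch of fetching one FHIR resource over REST.
# Assumptions: a SMART-on-FHIR server at FHIR_BASE and an OAuth2 access
# token already obtained via a SMART authorization flow (not shown here).
import requests

FHIR_BASE = "https://fhir.example.org/R4"   # placeholder endpoint
ACCESS_TOKEN = "REPLACE_WITH_SMART_TOKEN"   # placeholder token

def get_patient(patient_id: str) -> dict:
    """Return a FHIR Patient resource as parsed JSON."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = get_patient("example-patient-id")
    print(patient.get("resourceType"), patient.get("id"))
```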

Nvidia DGX-1 with Volta

“Because trained neural networks can provide superhuman pixel-by-pixel image evaluation and analyze scores of other data with incredible speed, doctors can make more accurate diagnoses and treatment plans,” stated CCDS in an Nvidia blog post. For example, radiologists typically review medical images in the order in which they were received. “However, with AI-assisted imaging, it’s possible to triage the images, bringing the most troubling to the top of a radiologist’s queue.”
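As a concrete, purely illustrative picture of that triage idea, the toy sketch below reorders a worklist by a model’s concern score rather than by arrival time; the study IDs, scores and scoring model are invented for the example and are not CCDS code.

```python
# Illustrative sketch: a toy priority queue that orders studies by a model's
# "concern" score instead of arrival time. Studies and scores are made up.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    priority: float                          # negative concern score: lower pops first
    accession_id: str = field(compare=False) # identifier, excluded from ordering

def triage(studies_with_scores):
    """studies_with_scores: iterable of (accession_id, concern_score in [0, 1])."""
    heap = [Study(priority=-score, accession_id=sid) for sid, score in studies_with_scores]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap).accession_id

# Example: the most troubling study jumps to the top of the queue.
worklist = [("CT-1001", 0.12), ("XR-1002", 0.91), ("MR-1003", 0.47)]
print(list(triage(worklist)))  # ['XR-1002', 'MR-1003', 'CT-1001']
```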

“Today’s practitioners have a barrage of data thrown at them — lab reports, MRIs, CAT scans, family health histories and more — which makes it incredibly difficult to make decisions,” said CCDS Executive Director Dr. Mark Michalski. “So, having technology that can aid them in this effort can be incredibly transformative.”

By all accounts, Volta is a processing powerhouse. Its architecture offers a new type of compute unit, the Tensor Core, designed to accelerate AI workloads. With 640 Tensor Cores (eight per streaming multiprocessor), the Tesla V100 delivers 120 tensor teraflops.

The upgraded DGX-1 contains eight Volta V100s, providing a combined 960 tensor teraflops. Its smaller cousin, the DGX Station, harnesses four Volta V100s to deliver half that performance in a water-cooled form factor that fits desk-side.
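Those peak figures follow directly from the per-GPU rating; a quick back-of-the-envelope check (our arithmetic, using the 120-tensor-teraflop V100 number cited above):

```python
# Back-of-the-envelope check of the figures quoted above (not an Nvidia spec sheet):
# aggregate peak tensor throughput scales linearly with GPU count.
V100_TENSOR_TFLOPS = 120

def aggregate_tensor_tflops(num_gpus: int) -> int:
    """Peak tensor teraflops for a system with num_gpus Tesla V100s."""
    return num_gpus * V100_TENSOR_TFLOPS

print(aggregate_tensor_tflops(8))  # DGX-1: 960 tensor teraflops
print(aggregate_tensor_tflops(4))  # DGX Station: 480, i.e. half the DGX-1
```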

The post First Volta-based Nvidia DGX Systems Ship to Boston-based Healthcare Providers appeared first on HPCwire.

Dell Technologies Reports Fiscal Year 2018 Second Quarter Financial Results

Thu, 09/07/2017 - 09:16

ROUND ROCK, Texas, Sept. 7, 2017 — Dell Technologies (NYSE: DVMT) announces its fiscal 2018 second quarter results. For the second quarter, consolidated revenue was $19.3 billion and non-GAAP revenue was $19.6 billion. During the quarter, the company generated an operating loss of $1.0 billion, with non-GAAP operating income of $1.6 billion. The company generated cash flow from operations of $1.8 billion.

“Today we celebrate one year since the historic combination between Dell and EMC. We’ve experienced great progress in bringing together our family of businesses and offering our customers and partners the most comprehensive set of solutions,” said Tom Sweet, chief financial officer, Dell Technologies. “In the second quarter, we generated strong cash flow and made progress on our de-levering goal. We were pleased with the growth velocity of our client, server, hyperconverged and all-flash array offerings. We have the right strategy, portfolio and investments in place to deliver long-term growth.”

Since Sept. 7, 2016, Dell Technologies has delivered significant results, including:

  • Combining two great companies, creating the essential IT infrastructure company with more than 140,000 employees
  • Combining two salesforces into one powerful go-to-market motion and creating an integrated channel program, both of which are driving velocity and revenue synergies across all segments
  • Expansion of the Dell Financial Services (DFS) portfolio, now the exclusive originator of Dell EMC business and the VMware preferred finance partner
  • Industry leadership in newer and fast-growing categories, including all-flash and hyperconverged infrastructure

Fiscal second quarter 2018 results

(in millions, except percentages; unaudited)

                                                 Three Months Ended                     Six Months Ended
                                        Aug. 4, 2017  July 29, 2016  Change    Aug. 4, 2017  July 29, 2016  Change
Net revenue                                  $19,299        $13,080    48 %         $37,115        $25,321    47 %
Operating income (loss)                       $(979)            $67      NM        $(2,479)          $(72)      NM
Net loss from continuing operations           $(978)         $(262)  (273)%        $(2,361)         $(686)  (244)%
Non-GAAP net revenue                         $19,634        $13,145    49 %         $37,805        $25,464    48 %
Non-GAAP operating income                     $1,552           $756   105 %          $2,749         $1,295   112 %
Non-GAAP net income from
  continuing operations                         $873           $362   141 %          $1,454           $626   132 %
Adjusted EBITDA                               $1,866           $884   111 %          $3,433         $1,527   125 %

Information about Dell Technologies’ use of non-GAAP financial information is provided under “Non-GAAP Financial Measures” below. All comparisons in this press release are year-over-year unless otherwise noted.
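For readers who want to reproduce the “Change” column, it is simple year-over-year percentage growth. The snippet below (our own arithmetic, not part of Dell’s release) checks a few rows from the table above.

```python
# Year-over-year change, rounded to whole percentages as in the table (values in $M).
def yoy_change(current: float, prior: float) -> str:
    return f"{round((current - prior) / abs(prior) * 100)}%"

print(yoy_change(19_299, 13_080))  # Net revenue, three months: 48%
print(yoy_change(19_634, 13_145))  # Non-GAAP net revenue, three months: 49%
print(yoy_change(1_866, 884))      # Adjusted EBITDA, three months: 111%
```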

Operating segments summary

Client Solutions Group (Dell) continued to take share globally while growing profitably. Dell outperformed the market worldwide, posting 3.7 percent unit growth during the calendar quarter. Revenue for the second fiscal quarter was $9.9 billion, up 7 percent year over year and the highest since the same quarter of fiscal 2015. Operating income was $566 million for the quarter, a 17 percent increase, or 5.7 percent of revenue.

Key highlights:

  • Increased PC shipments by 3.7 percent, with 18 consecutive quarters of year-over-year PC unit share growth and the highest market share since 2006
  • Strong notebook momentum and double-digit revenue growth across all high-end commercial and consumer product lines
  • Ranked No. 1 workstation vendor worldwide
  • No. 1 displays provider worldwide for the 16th consecutive quarter, with double-digit revenue growth

Infrastructure Solutions Group (Dell EMC) generated $7.4 billion in revenue, up 7 percent quarter over quarter. Server and networking revenue was $3.7 billion, a quarter-over-quarter and year-over-year increase of 16 percent, and storage revenue was $3.7 billion. Operating income for the quarter was $430 million.

Key highlights:

  • Continued triple-digit demand growth for hyperconverged portfolio, including VxRail, which has more than 2,000 customers and 14,000 nodes deployed to date
  • Launched and shipped new 14G servers; strong overall server demand growth in each of the major regions
  • Strong all-flash growth at scale, more than 2x the nearest competitor
  • Double-digit demand growth in next-generation Isilon scale-out NAS with new Infinity architecture
  • Strong demand for our flexible consumption and utility models, signing several large, multi-year strategic deals

VMware segment revenue for the second quarter was $1.9 billion, with operating income of $561 million, or 29.4 percent of revenue.

Additional financial highlights

The company ended the quarter with a cash and investments balance of $15.3 billion. In the second quarter, Dell Technologies paid down $1.0 billion of core debt. Additionally, subsequent to quarter-end, the company paid down the $1.5 billion bridge facility. Including these latest debt payments, the company has repaid approximately $9.5 billion of gross debt, excluding DFS-related debt, since closing the EMC transaction.

Also since closing the EMC transaction, the company has repurchased a total of 19.7 million shares of Class V common stock for $1.1 billion, under both the previously announced Class V Group and DHI Group repurchase programs. The company also has announced its board has approved an amendment to the Class V Group repurchase program for up to an additional $300 million of repurchases over six months. This will be funded from proceeds of sales of VMware Class A common stock under a new stock purchase agreement with VMware.

Conference call information

As previously announced, the company will hold a conference call to discuss its second quarter performance today at 7 a.m. CDT. The conference call will be broadcast live over the internet and can be accessed at investors.delltechnologies.com. For those unable to listen to the live broadcast, an archived version will be available at the same location for 30 days.

A slide presentation containing additional financial and operating information may be downloaded from Dell Technologies’ website at investors.delltechnologies.com.

About Dell Technologies

Dell Technologies is a unique family of businesses that provides the essential infrastructure for organizations to build their digital future, transform IT and protect their most important asset, information. The company services customers of all sizes across 180 countries – ranging from 98 percent of the Fortune 500 to individual consumers – with the industry’s most comprehensive and innovative portfolio from the edge to the core to the cloud.

Source: Dell

The post Dell Technologies Reports Fiscal Year 2018 Second Quarter Financial Results appeared first on HPCwire.
