HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

NCSA’s Cybersecurity and Data Expertise Engaged for ‘Smart State’ Initiative

Thu, 02/02/2017 - 06:39

Feb. 2 — The National Center for Supercomputing Applications’ (NCSA) world-renowned cybersecurity and large-scale data capabilities are being called upon to advance Illinois as the nation’s premier “Smart State.” On Thursday the state of Illinois and the University of Illinois System announced a new partnership that will bring together next-generation technology and highly skilled expertise to improve quality of life, grow the state’s economy and retain and attract residents. The first phase of the initiative will partner the Illinois Department of Innovation & Technology (DoIT) with NCSA to tackle Big Data and cybersecurity issues such as how to better secure transportation systems and citizens’ data.

“Illinois is nationally recognized as the first U.S. state to have a vision and roadmap for becoming a smarter state,” said Hardik Bhatt, Secretary Designate and state CIO of the Department of Innovation & Technology (DoIT). “The goal is to use technology, Internet of Things, analytics, and cybersecurity to improve operational efficiency and find new and more cost-effective ways to serve our customers.”

Illinois launched its Smart State initiative under the leadership of Governor Bruce Rauner with the release of a white paper titled “Introducing the Smart State: Illinois Leads the Way” in early 2016. This was followed by two Smart State workshops in April and December that connected leaders from the public and private sectors to explore how becoming a smart state will improve government efficiency and access to services, and create a climate that promotes the growth of business and industry.

DoIT’s Chief Information Security Officer Kirk Lonbom commented, “A partnership between DoIT and NCSA will bring great benefits to Illinois businesses and citizens in the area of cybersecurity. The threat posed by cyber-attackers grows exponentially by the day and collaborations such as these accelerate the pace of cybersecurity progress.”

“At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science, industry, and society,” said Bill Gropp, acting director of NCSA. “We are excited about leveraging these resources to modernize infrastructure in order to better serve the citizens of Illinois and uplift the state’s economy.”

About the National Center for Supercomputing Applications

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. NCSA has been advancing one third of the Fortune 50 for more than 30 years by bringing industry, researchers and students together to solve grand challenges at rapid speed and scale.

About the University of Illinois System

The University of Illinois System is a world leader in research and discovery, the largest educational institution in the state with more than 81,000 students, more than 24,000 faculty and staff, and universities in Urbana-Champaign, Chicago and Springfield. The U of I System awards more than 20,000 undergraduate, graduate and professional degrees annually.

Source: University of Illinois

The post NCSA’s Cybersecurity and Data Expertise Engaged for ‘Smart State’ Initiative appeared first on HPCwire.

Asetek Achieves 200 Million Hours of Fault-Free Pump Operation at Datacenter Installations

Thu, 02/02/2017 - 06:35

Feb. 2 — Asetek has announced that its server pump has achieved 200 million hours of reliable operation in real-world use. Installed at end-user locations as diverse as Singapore and Norway, Asetek pumps have run fault-free for the equivalent of 22,000 years.
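
As a quick check on that figure (an illustrative Python calculation, not from the announcement), 200 million pump-hours converts to roughly 22,800 cumulative years of operation:

    # Unit conversion behind the "22,000 years" figure (illustrative only).
    pump_hours = 200_000_000
    hours_per_year = 24 * 365              # ignoring leap years
    print(f"~{pump_hours / hours_per_year:,.0f} cumulative years")   # ~22,831 years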

“To date, we have not had a single server pump failure at any of our data center installations around the world,” said Mette Nørmølle, Vice President of Engineering.  “Our low-pressure architecture is the key to enabling a cost-effective solution that is relied on by data centers demanding unrivaled performance and maximum uptime.”

At the heart of Asetek liquid cooling is the Direct-to-Chip (D2C) CPU Cooler. The CPU Cooler is a patented integrated pump and cold plate assembly used to cool server CPUs. Because a single pump has sufficient power to circulate cooling water in a server node, servers with more than one CPU have multiple pumps, providing built-in redundancy.

With over 3.5 million units deployed worldwide in desktop PCs and servers, Asetek’s cooler pumps incorporate features designed to meet customers’ strict demands for reliability, performance and uptime. Pumps are mechanically sealed, with the impeller, the only moving part, suspended in the lubricating cooling liquid. As a result, high reliability and low cost are both inherent in the pump design.

Asetek’s reliable data center solutions include RackCDU D2C and Server Level Sealed Loop (ServerLSL). RackCDU D2C provides cooling energy savings greater than 50% and density increases of 2.5x-5x. ServerLSL provides liquid assisted air cooling for server nodes, replacing less efficient air coolers and enabling the servers to incorporate the highest performing CPUs and GPUs.

Asetek liquid cooling is currently available to data centers around the globe through its network of OEM partners.

About Asetek

Asetek (ASETEK.OL) is the global leader in liquid cooling solutions for data centers, servers and PCs. Asetek’s server products enable OEMs to offer cost effective, high performance liquid cooling data center solutions. Its PC products are targeted at the gaming and high performance desktop PC segments. With over 3.5 million liquid cooling units deployed, Asetek’s patented technology is being adopted by a growing portfolio of OEMs and channel partners. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. For more information visit http://www.asetek.com.

Source: Asetek

The post Asetek Achieves 200 Million Hours of Fault-Free Pump Operation at Datacenter Installations appeared first on HPCwire.

Volunteer Computing Project Helps Smash Childhood Cancer

Wed, 02/01/2017 - 15:32

On Tuesday, IBM announced that its World Community Grid will provide free virtual supercomputing power to a global team of scientists engaged in the fight against childhood cancers.

Every year, approximately 300,000 children and teens are diagnosed with cancer and about 80,000 die of it. Although the outlook for children diagnosed with cancer has improved greatly, the disease remains the number one cause of death by disease in this population beyond infancy. Thanks to a partnership with the World Community Grid, scientists with the Smash Childhood Cancer project will be able to run large-scale drug simulations using donated compute cycles from thousands of volunteer devices, advancing the search for treatments and cures.

The Smash Childhood Cancer project formed to identify drug candidates to treat neuroblastoma and other childhood cancers. Dr. Akira Nakagawara, an internationally renowned pediatric oncologist, molecular biologist and CEO of the Saga Medical Center KOSEIKAN, in Japan, leads the project. In 2014, Dr. Nakagawara established the Childhood Cancer project, which used IBM’s World Community Grid to identify several promising drug candidates to fight neuroblastoma.

“We were excited by the idea of such massive computing power being available for our research,” Dr. Nakagawara said Tuesday in a blog post. “We also liked the community aspect: World Community Grid is for everyone, and anyone with a computer and an internet connection can participate. With the help of computing power donated by volunteers, we were able to make a breakthrough discovery of seven potential drug candidates that destroyed neuroblastoma cells in mice, and crucially, did so without causing any apparent side effects.”

In addition to advancing potential neuroblastoma treatments, the new Smash Childhood Cancer project will expand the search to other forms of childhood cancer, including brain tumors, Wilms’ tumor (a malignant tumor of the kidney), germ cell tumors (which affect the reproductive and central nervous systems), hepatoblastoma (cancer of the liver) and osteosarcoma (cancer of the bone).

The global Smash Childhood Cancer team includes expert researchers from Japan (Chiba University and Kyoto University); China (The University of Hong Kong in Hong Kong); and the United States (Connecticut Children’s Medical Center, The Jackson Laboratory, and the University of Connecticut School of Medicine).

Like other volunteer computing efforts, such as SETI@home and Folding@home, the World Community Grid comprises thousands of PCs and mobile devices that execute “embarrassingly parallel” workloads while connected over the web. Since 2004, the IBM-operated grid has harnessed the power of more than 3 million computing devices, assisting worthy causes with over one million years of computing time. To date, some 28 projects have been supported, furthering investigations into cancer, HIV/AIDS and tropical diseases and advancing solar technology and low-cost water filtration systems.

It’s easy for concerned citizen scientists to contribute. All it takes is signing up, then downloading and installing a free app on your computer or Android device. When the device is idle, spare compute cycles run experiments on behalf of the research team and the results are transmitted back to researchers who then analyze the data.
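
Conceptually, each volunteer device runs a simple fetch-compute-report loop over independent work units. The Python sketch below illustrates that pattern only; the function names and placeholder logic are hypothetical stand-ins and are not IBM’s actual World Community Grid (BOINC-based) client code.

    # Minimal sketch of a volunteer-computing work loop (hypothetical names;
    # not the actual World Community Grid/BOINC client).
    import random
    import time

    def fetch_work_unit():
        # In reality the project server hands out an independent chunk of work,
        # e.g., one candidate compound to screen against a cancer target.
        return {"compound_id": random.randint(1, 10_000)}

    def run_simulation(work_unit):
        # Stand-in for the real computation that runs on spare, idle CPU cycles.
        time.sleep(0.1)
        return {"compound_id": work_unit["compound_id"], "score": random.random()}

    def report_result(result):
        # Stand-in for uploading the finished result back to the researchers.
        print(f"reporting {result}")

    # Work units are independent ("embarrassingly parallel"), so volunteer
    # devices only ever talk to the project server, never to each other.
    for _ in range(3):
        report_result(run_simulation(fetch_work_unit()))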

The post Volunteer Computing Project Helps Smash Childhood Cancer appeared first on HPCwire.

HPC Career Notes (Feb. 2017)

Wed, 02/01/2017 - 12:13

In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the high performance computing community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!

Richard Gerber 

Richard Gerber has been named head of NERSC’s HPC department after serving as acting head for the past year. Gerber’s career has revolved around HPC for almost 30 years, and he has been with Lawrence Berkeley National Laboratory (LBNL) since 1996. Prior to joining LBNL, he spent time at the NASA Ames Research Center as a National Research Council Postdoctoral Fellow.

“We work with complex, first-of-a-kind systems that present unique challenges,” said Gerber. “Our staff is constantly providing innovative solutions that make systems more capable and productive for our users. Looking forward, we are evaluating emerging technologies and gathering scientific needs to influence future HPC directions that will best support the science community.”

Stathis Papaefstathiou 

Stathis Papaefstathiou has been named the senior vice president of research and development at Cray. In his new position he will be responsible for leading the hardware and software engineering efforts for all R&D projects. Papaefstathiou has over 30 years of experience in the tech industry and joins Cray from Aerohive Networks where he served as the senior vice president of engineering.

“My admiration and respect for Cray goes back to my days as a university research fellow, and throughout my career I have continued to hold the company’s engineering and R&D capabilities in very high regard,” said Papaefstathiou. “Leading the R&D teams at Cray is both an honor and an exciting opportunity, and I look forward to working with this talented group to expand the boundaries of what can be made possible with a Cray supercomputer.”

Click here to view our recent interview with Stathis.

Jeff Cotten

Rackspace has promoted Jeff Cotten to the position of President. He most recently served as senior vice president and general manager of Fanatical Support for AWS and has held a variety of leadership positions since joining the company in 2008. Prior to joining Rackspace, Cotten worked at EDS, an HP company, for about eight years.

“Jeff is a Racker success story — a veteran who has led at every level in the company,” said Taylor Rhodes, Rackspace CEO. “In every leadership role that we’ve given him, he has delivered expertise and support that customers couldn’t get anywhere else, all the while inspiring industry-leading levels of engagement from the Rackers in his care. Jeff is uniquely qualified to serve as our President.”

Jorge Titinger

Jorge Titinger has joined TransparentBusiness as chief strategy officer. Titinger is well-known in the HPC industry for his former role as the CEO of SGI before the company was acquired by Hewlett Packard Enterprise. Prior to SGI, Titinger was president and CEO at Verigy.

“I’m pleased to join the company which has established itself as a leader in remote work process management and coordination,” said Titinger. “I believe TransparentBusiness can help accelerate the adoption of a distributed workforce; this can result in significant bottom line benefits for the companies that embrace this new direction and bring the work to where the talent is.”

Martin Fink

Western Digital has named Martin Fink CTO. Fink most recently served as CTO and director of HP Labs at Hewlett Packard Enterprise. He joined HP in 1985 and stayed with the company until 2015, when he moved over to HPE. In addition to his new position at Western Digital, Fink also serves on the boards of the Wild Beer Co. and Hortonworks.

“Martin is a respected technologist who will play an important, strategic role in the ongoing growth and transformation of Western Digital,” said Steve Milligan, CEO, Western Digital. “He has been a leading voice on the value and promise of memory-driven computing and will lead our continued innovation focus areas, including the commercialization of Storage Class Memory solutions.”

Mark Adams

Seagate Technology has appointed Mark Adams to the company’s board of directors. From 2012 to 2016, he served as the president of Micron Technology before resigning for personal health reasons. Prior to joining Micron in 2006, Adams was the chief operating officer at Lexar Media.

“On behalf of the full Board, we are pleased to welcome Mark to Seagate,” said Steve Luczo, Seagate’s CEO. “Mark has extensive semiconductor industry executive and board experience and we look forward to leveraging his strategic guidance and operational insights.”

 

Do you know someone who should be included in next month’s list? If so, send us an email at Thomas@taborcommunications.com. We look forward to hearing from you.

The post HPC Career Notes (Feb. 2017) appeared first on HPCwire.

Here’s What HPC Leaders Say about Trump Travel Ban

Wed, 02/01/2017 - 10:29

On Friday, President Trump signed an executive order barring citizens of seven Muslim-majority nations from entering the United States for at least 90 days, kicking off a swift and strong reaction from the science and technology community. High-profile tech companies, including Apple, Google and HPE, have issued statements opposing the ban. That same ripple of concern is rushing through the scientific community.

As reported in the Washington Post on Monday, thousands of academics, including 50 Nobel laureates, have joined together to protest the ban. A petition denouncing the action was signed by 14,800 verified U.S. faculty members and more than 18,000 supporters as of Wednesday morning.

The Association for Computing Machinery, the world’s largest scientific and educational computing society, also expressed its concern over President Trump’s order and urged an end to the ban. In a statement issued Monday, the ACM said it “supports the statute of International Council for Science in that the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being, [and] such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists. All individuals are entitled to participate in any ACM activity.”

To capture the rising chorus of voices on this issue, we reached out to the HPC leadership community and found a number of people willing to go on record and others who declined, citing wariness about possible reprisals; one person spoke to us on the condition of anonymity. We also collected some of the sentiments from the larger tech community.

Thomas Sterling, Director, Center for Research in Extreme Scale Technologies, Indiana University

“Science discovery, knowledge, and understanding is reserved for no single self-selected elitist group but is a shared fabric of all societies as are their benefits to all of humanity. Only artificial barriers such as political boundaries, restrictive belief systems, and economic obstacles impede the dissemination and free flow of ideas and their creative application to common challenges among all peoples such as health, climate, food production, and lack of want. HPC is a tool, both a product and enabler of the universal culture of science and engineering, and ultimately human knowledge. Where any one body is precluded from the natural exchange of concepts and the advancement of methods such as HPC, all suffer to a degree due to limits on creativity and human productivity.

“The HPC community as it impacts a diversity of fields is an international body exemplified by the dynamics of cooperation through the movement of peoples in all directions whether of senior experts for short forums around the world, students for extended stays at universities across continents for periods of study, or the immigration of trained practitioners residing permanently in new adopted lands. This ebb and flow of human capital enriches all societies and refreshes their capabilities. The last week has seen a disruption of international communities and cooperation including science and engineering.

“In April, I and others from a number of countries were to be invited to participate in an international forum on topics related to computing including HPC in Tehran, Iran. Building bridges with our colleagues there and welcoming them into our world societies without borders is a wonderful opportunity to facilitate the likelihood of world prosperity and is a responsibility of all thought-leaders contributing to advances of our shared civilization. Now this small step is being withdrawn with both the U.S. and Iran blocking travel of each other’s citizens to their respective nations. The acts precipitating these circumstances are neither noble nor of any profit. They satisfy only narrow views of small minds with short horizons, without perspective or vision of a better world nearly in our grasp but possibly lost for a generation as we drift back into our tribal caves, dark and dank without enlightened images of a greater world.”

Unnamed source, a prominent and long-time member of the HPC community with experience in the federal government, in private industry and academia

“It’s a 90-day ban, and there are all kinds of court challenges that have already started. It’s not clear what implementation long-term would even look like. I think people are upset and that’s probably a good thing, that’s what keeps a balance in our political system. But I think it’s not clear, to me at least…it’s a 90-day ban, there’s a lot of things changing right now, we don’t know enough to say anything helpful. So I feel like everybody is sort of, they were waiting for something to be upset about and here we are. So they’re all ready to go. But I’m not sure it’s the right thing.

“I don’t know if it’s an overreaction, it’s an early reaction. It’s too early. We just don’t know enough for people to be this upset in high end computing. Now maybe in other areas, such as civil rights or areas of the business community where travel is a lot more fluid. Maybe they do know enough and have already seen enough where it’s already impacted them. But for us, half of these countries are on the terrorist countries Watch List anyway, or at least some of them. And for government HPC facilities, those people are already not permitted to participate. Countries like Iran are already severely restricted in terms of what they can do in the defense and security space, which is where a lot of the supercomputers are.

“For people who were tired of things being the same, they might get a little relief from that emotion. Things will change, I just think it’s too early to know whether they’ll be on balance good or bad, and I’m willing to wait and see. A lot of people around me aren’t, they are very excited.”

Steve Conway, Research Vice President in IDC’s High Performance Computing group

“I have worked for large technology companies in the U.S., and in order to compete they have to be able to hire the best and the brightest from around the world. If there are any restrictions that aren’t necessary, that really inhibits American companies’ ability to compete globally, and they can fall behind. We have to be able to hire the best and the brightest from anywhere in the world. And of course all employers in all countries have the right to exclude people who have proven that they are not trustworthy, and so forth, and that’s OK. But a ban can be too broad and too unspecific if it really filters out people who can really help the U.S. economy.

“I’ve said this in meetings in Washington, that arguably the single biggest advantage America has in the whole area of business competition is our university system, particularly at the graduate level. There’s nothing in the world that compares with it. And it’s a magnet, it attracts not just people from the U.S. but people from all over the world. That’s an investment by our country, and we ought to be able to hold onto as many of the best people coming out of our educational system as we can. If they want to contribute to our economy, we certainly shouldn’t be turning them away.”

John Gustafson, Visiting Scientist, A*STAR – Agency for Science, Technology and Research, inventor of Gustafson’s law

“After 18 months in Singapore, I can say with confidence that Singapore is a model for how to handle immigration and travel. Of the 5.5 million people living in Singapore, 2 million are not Singaporeans, the highest non-citizen percentage of any country. The Singapore government is on good terms with the rest of the world, but it is very selective and careful with who is allowed in as a long-term resident, with screening that takes months by the Ministry of Manpower. They have just the right balance between caution and openness. Despite the amount of time they take, they are actually quite efficient and effective, and no one malicious ever seems to make it through that filter. Once you get the corruption out of government, it’s amazing what it is able to accomplish.”

Bob Sorensen, Research Vice President in IDC’s High Performance Computing group

“Throughout its history, U.S. high technology capabilities in the academic, government, and commercial sectors have always benefitted from access to the best and brightest minds in the world. Indeed, the ability to attract highly skilled scientists and engineers from around the world is one of the United States’ most important competitive advantages. Any barriers that impede the free flow of those people, and the intellectual capability they engender, can only diminish the ability of the U.S. to remain at the forefront of global scientific and technological development, which in the long run could have serious implications for both U.S. national security concerns and its global economic competitiveness.”

Shahin Khan, Founding Partner, OrionX.net & Founder, StartupHPC.com

“I usually take several steps back on these things and try to see them in the larger context while remaining sensitive to immediate issues.

“The supercomputing community has always been a model of how government, academic, and industrial organizations can cooperate to advance humanity, not just in science and technology but also in business and policy. This is especially important if we look at current global challenges in the context of the transition from the Industrial Age to the Information Age. There are left-over problems of the Industrial Age: most notably climate change, but also some social and economic constructs; and there are new problems stemming from digitization: automation, globalization, awareness, and digital mistrust. The sheer complexity of these old and new grand-challenge problems, and to solve them while avoiding unintended consequences, demands supercomputing and its unique cooperative model.”

The Tech Industry Response

Amazon

“We’re a nation of immigrants whose diverse backgrounds, ideas, and points of view have helped us build and invent as a nation for over 240 years,” wrote Amazon founder and CEO Jeff Bezos in a company-wide email. “No nation is better at harnessing the energies and talents of immigrants. It’s a distinctive competitive advantage for our country—one we should not weaken.

“To our employees in the US and around the world who may be directly affected by this order, I want you to know that the full extent of Amazon’s resources are behind you.”

Google

“It’s painful to see the personal cost of this executive order on our colleagues,” said Google CEO Sundar Pichai in a memo to employees. Google reports more than 100 employees are affected by the order, according to Bloomberg.

Microsoft

“As an immigrant and as a CEO, I’ve both experienced and seen the positive impact that immigration has on our company, for the country, and for the world,” wrote Satya Nadella, Microsoft’s chief executive, in a LinkedIn post. “We will continue to advocate on this important topic.”

HPE

Meg Whitman, CEO of Hewlett Packard Enterprise and chairperson of HP, sent an email to HPE employees on Monday morning (source: Axios):

“HPE will continue to support its diverse and global family of employees through these challenging times. We are in this together. We will also continue to advocate for immigration policies that recognize America’s core principles and the contributions immigrants make to our collective strength and prosperity. Even while securing its borders, America must not turn its back on the ideals that have motivated generations and inspired the world.”

IBM

“IBM has long believed in diversity, inclusion and tolerance. As we shared with IBMers this weekend, we have always sought to enable the balance between the responsible flow of people, ideas, commerce and information with the needs of security, everywhere in the world,” IBM said in a memo.

“As IBMers, we have learned, through era after era, that the path forward – for innovation, for prosperity and for civil society – is the path of engagement and openness to the world. Our company will continue to work and advocate for this.”

Intel

The note that Intel CEO Brian Krzanich shared with all of the company’s employees was documented by The Oregonian.

Intel Employees,

I wanted to get a note out to you that goes beyond the statement on our Policy blog or my latest tweet, about the recent directives around immigration. First, as the grandson of immigrants and the CEO of a company that was co-founded by an immigrant, we believe that lawful immigration is critical to the future of our company and this nation. One of the founding cultural behaviors at Intel is constructive confrontation where you focus on the issue, and not on the person or organization. The statement we submitted today does just that. It focuses on the issues. We will continue to make our voice heard that we believe immigration is an important part of making Intel and America all that we can be. I have heard from many of you and share your concern over the recent executive order and want you to know it is not a policy we can support.

At Intel we believe that immigration is an important part of our diversity and inclusion efforts. Inclusion is about making everyone feel welcome and a part of our community. There are employees at Intel that are directly affected by this order. The HR and Legal teams are working with them in every way possible and we will continue to support them until their situations are resolved. I know I can count on all of you to role model our culture and support these employees and their families.

I am committing to all of you – as employees of what I believe to be the greatest company on the planet – that we will not back down from these values and commitments. There will always be forces from outside of the company that will try and distract us from our mission. The key to our success will be our unrelenting focus. As our founder Robert Noyce said: “Do not be encumbered by the past, go out and do something wonderful today.” Each of us can go out and do something wonderful to role model our values.

As mentioned in Quartz, some companies and tech leaders were noteworthy for their silence. “Yahoo CEO Marissa Mayer, Trump advisors IBM CEO Ginni Rometty and Oracle co-CEO Safra Catz, and Google cofounder Larry Page were notably absent from those speaking out,” said Quartz.

A broad coalition of tech companies has formed to challenge the new immigration order. A group of technology firms was expected to meet yesterday (Tuesday) to discuss legal strategies for blocking the travel ban. GitHub organized the meeting, according to Reuters, and Google, Netflix, Yelp, Salesforce and SpaceX were among the companies invited.

The post Here’s What HPC Leaders Say about Trump Travel Ban appeared first on HPCwire.

Industry Voice Added to U.S. Drive to Exascale

Wed, 02/01/2017 - 09:53

A federally funded effort to deliver HPC systems 50 times faster than today’s supercomputers has added a business perspective to the multi-year development project, which one industry observer said could reduce the runtime for complex simulations from a year to two hours and “really ramp up U.S. industrial R&D.”

The Exascale Computing Project (ECP) has formed an Industry Council composed of executives from major U.S. corporations across a range of industries, chaired by a senior vice president at United Technologies. ECP is led by six Department of Energy national labs, with project management at Oak Ridge National Laboratory.

According to Steve Conway, Research VP, HPC, International Data Corp., the council points up the value of supercomputing to the R&D work of American corporations, something that is often under-appreciated.

“A lot of people, even some senior government officials, think that getting to exascale computing is only about science, only about basic scientific research – which is very important – but it’s also about helping America’s industrial and economic competitiveness,” Conway told EnterpriseTech. “Some of America’s biggest companies…really depend on HPC for their advanced research that is incredibly important to their ability to compete with companies outside of the U.S. But even small- and medium-sized businesses are going to benefit from exascale… The technical advances that will be happening inside this program in order to reach exascale are going to benefit companies that only use one or two racks of HPC systems, and also companies that just use the cloud to do their HPC… I don’t think that’s very well known.”

Conway said calculations that might take a year to process on today’s supercomputers could potentially be run in two hours or less on an exascale system. This means highly complex simulations related to cancer research and aircraft design, to name two examples, could be run at much higher levels of resolution than is possible today. “This is a big contrast for people who assume what these big computers are all about is figuring out whether a proton turns left or right,” he said.
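
For context, here is a rough back-of-the-envelope reading of that claim (illustrative Python arithmetic only, not from the article):

    # How large a runtime reduction does "a year down to two hours" imply?
    hours_per_year = 24 * 365              # 8,760 hours
    new_runtime_hours = 2
    reduction = hours_per_year / new_runtime_hours
    print(f"Implied runtime reduction: ~{reduction:,.0f}x")   # ~4,380x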

The companies participating in the ECP Industry Council include:

  • Altair Engineering, Incorporated
  • ANSYS, Incorporated
  • Cascade Technologies, Incorporated
  • Chevron Energy Technology Company
  • Cummins Inc.
  • The Dow Chemical Company
  • DreamWorks Animation
  • Eli Lilly and Company
  • Exxon Mobil Corporation
  • FedEx Corporation
  • General Electric
  • General Motors Company
  • Mars, Inc.
  • Procter & Gamble Company
  • Tri Alpha Energy, Inc.
  • United Technologies Corporation
  • Westinghouse Electric Co.
  • Whirlpool Corporation

According to a prepared statement from ECP, the council “will provide guidance and feedback on ECP’s strategic direction, project scope, technical requirements and progress, providing the perspective of private industry as it relates to the emerging need for exascale-level computation and the formation of a holistic exascale ecosystem.”

Dr. J. Michael McQuade, Senior Vice President, Science & Technology, United Technologies Corporation, will serve as the first chair of the council.

“Exascale-level computing will help industry address ever more complex, competitively important problems, ones that are beyond the reach of today’s leading edge computing systems,” McQuade said. “We compete globally for scientific, technological and engineering innovations. Maintaining our lead at the highest level of computational capability is essential for our continued success.”

According to ECP Director Paul Messina, “The external Industry Council is vitally important to keeping the project in sync with the real-world needs of the HPC industrial user community. These experienced executives will bring deep insight to the requirements of the U.S. industrial sector and help us ensure future exascale capabilities are designed to address a wide range of industrial applications.”

Conway pointed out that even for the largest industrial companies, it’s not economically feasible to buy and maintain a world-class supercomputer.

“A lot of those companies listed in the announcement use DoE supercomputers for their most advanced research because no matter how big these companies are, they can’t justify buying a gigantic supercomputer for the part of it that they would use,” Conway said. “So this is a great boon to the big US industrial firms, and soon that’s going to include companies that are involved in healthcare and lots of other endeavors that depend on very big data.”

The post Industry Voice Added to U.S. Drive to Exascale appeared first on HPCwire.

Keynote Speakers Announced for Leverage Big Data + EnterpriseHPC 2017 Summit

Wed, 02/01/2017 - 08:25

SAN DIEGO, Calif., Feb. 1 — The Leverage Big Data + EnterpriseHPC 2017 Summit, a live hosted event dedicated to exploring the convergence happening as enterprises increasingly leverage High Performance Computing (HPC) to solve the scaling challenges of the big data era, today announced its lineup of keynote speakers. The summit aims to foster a deeper understanding of the advanced scale solutions increasingly being employed to solve Big Data challenges across industries and achieve performance beyond the capabilities of traditional IT environments, with topics ranging from HPC environments and real-time security analysis to emerging technologies and more. The keynotes will feature The BioTeam, Inc.’s Asya Shklyar, IDC’s Bob Sorensen, Ford’s Sanjeev Kapoor, and Capital One’s Sagar Gaikwad.

The summit, scheduled for March 19-21, 2017 at the Ponte Vedra Inn & Club in Ponte Vedra Beach, Florida, will focus on the challenges that CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions face as they work to build systems and applications that require increasing amounts of performance and throughput.

The theme of the combined summit will be “Integrating High Performance Computing in the Enterprise and Building Big Data Solutions that Scale,” and the program will feature the following keynote sessions (with announcements of further keynotes and panels to follow):

“Are All High Performance Environments Created Equal?”

Keynote Presenter: Asya Shklyar – Senior Scientific Consultant for Infrastructure, The BioTeam, Inc. (formerly SpaceX)

“A Transformational Journey to Autonomous Vehicles through Big Data Analytics and Technology Lenses”

Keynote Presenter: Sanjeev Kapoor – Senior Project Manager, Emerging Technologies for Digital Transformation, Ford Motor Company

“Cyber Security Strategies & Approaches and the Emerging Role of HPC in Cyber Security”

Keynote Presenter: Bob Sorensen – Research Vice President, High Performance Computing Group, IDC

“Approaches to Achieving Realtime Ingestion and Analysis of Security Events”

Keynote Presenter: Sagar Gaikwad – Manager, Big Data CyberTech, Capital One

Through this collection of keynotes, the converged Leverage Big Data + EnterpriseHPC 2017 Summit unites leaders from across industries who are overcoming streaming and high-performance challenges and driving their organizations to success. Attendees of this invitation-only summit will engage with luminaries facing similar technical challenges, build dialogue and share solutions for delivering both system and software performance in this emerging era of computing.

The summit will be co-chaired by EnterpriseTech Managing Editor, Doug Black, and Datanami Managing Editor, Alex Woodie.

ATTENDING THE SUMMIT

This is an invitation-only hosted summit; costs are fully covered for qualified attendees, including flight, hotel, meals and summit badge. The summit targets CTOs, CIOs, database, systems & solutions architects, and other decision-makers involved in the build-out of scalable big data solutions. To apply for an invitation to this exclusive event, please fill out the qualification form at the following link: Hosted Attendee Interest Form

SUMMIT SPONSORS

Current sponsors for the summit include ANSYS, ASRock Rack, Birst, Caringo, Cray, DDN Storage, HDF Group, Impetus, Lawrence Livermore National Lab, Paxata, Quantum, Redline Performance, Striim, Verne Global, with more to be announced. For sponsorship opportunities, please contact us at summit@enterprisehpc.com.

The summit is hosted by Datanami, EnterpriseTech and HPCwire through a partnership between Tabor Communications and nGage Events, the leader in host-based, invitation-only business events.

Source: Tabor Communications

The post Keynote Speakers Announced for Leverage Big Data + EnterpriseHPC 2017 Summit appeared first on HPCwire.

Univa Introduces Unisight v4.1

Wed, 02/01/2017 - 07:00

CHICAGO, Ill., Feb. 1 — Univa, a leading innovator of workload management products, today announced the general availability of its Unisight v4.1 product, providing simple and extensible metric collection for all types of Univa Grid Engine data, including NVIDIA GPUs and software licenses. Unisight v4.1 is a comprehensive monitoring and reporting tool that gives Grid Engine cluster admins the ability to measure resource utilization and use facts to plan additional server and application purchases.

Bundled with Univa Grid Engine software, Univa’s powerful and highly scalable solution collects current and historical data on jobs, applications, container images, users, GPUs, software licenses and hosts. Unisight is used to generate and share reports that provide unmatched visibility into overall performance, efficiency and actual use of cluster resources.

Key new features in Unisight v4.1 include the ability to combine multiple data attributes on a single graph and to collect data automatically from any Univa Grid Engine complex entry. Upgrading from Unisight v4.0 to Unisight v4.1 is straightforward for current users.

“With Unisight v4.1, organizations take an important step toward improving data center automation choices by understanding infrastructure utilization and workflow,” said Fritz Ferstl, CTO and Business Development, EMEA at Univa. “With built-in reports, customers can monitor resource usage – including software licenses – to obtain the deep insights required to make informed long-term IT strategy and budget decisions from server architecture to memory requirements.”

Unisight v4.1 release supports Univa Grid Engine v8.3.1p12 or later patches, or v8.4.1 or later.

Key new features include:

  • Comparing multiple values from an attribute in an object, e.g., comparing ‘running’ and ‘pending’ jobs on the same graph
  • Collecting metrics automatically from the Grid Engine GPU load sensor for one or more NVIDIA GPU cards
  • Gathering data automatically for Docker-enabled Univa Grid Engine hosts and creating reports and graphs to show Docker-enabled hosts and recently used Docker images
  • Importing older Univa Grid Engine Reporting or Accounting files (newer than UGE 8.2.X) into Unisight v4.1 for reports and graphs
  • Automatically detecting complex entries added to or removed from Univa Grid Engine
  • Collecting data on Univa Grid Engine complex entries or self-defined metrics to be used as filters and metrics

Univa Unisight 

Univa Unisight is a monitoring and reporting tool that allows enterprises to track, measure and analyze the efficiency of dynamic and shared clusters.

Availability

Univa Unisight v4.1 is available now and is in production in some of the world’s most demanding environments.  For more information, contact Univa at: sales@univa.com.

About Univa Corporation

Univa is the leading innovator of workload management products that optimize performance of applications, services and users. Univa increases utilization and enables enterprises to scale compute resources, including containers, across on-premise, hybrid, and cloud infrastructures. Advanced reporting and monitoring capabilities provide insights to make scheduling modifications and achieve even faster time-to-results. Univa’s solutions help hundreds of companies to manage thousands of applications and run billions of tasks every day. Univa is headquartered in Chicago, with offices in Canada and Germany. For more information, please visit www.univa.com.

Source: Univa

The post Univa Introduces Unisight v4.1 appeared first on HPCwire.

AMD Reports Fourth Quarter and Annual 2016 Financial Results

Wed, 02/01/2017 - 06:50

SUNNYVALE, Calif., Feb. 1 — AMD (NASDAQ: AMD) has announced revenue for the fourth quarter of 2016 of $1.11 billion, an operating loss of $3 million and a net loss of $51 million, or $0.06 per share. Non-GAAP operating income was $26 million, non-GAAP net loss was $8 million and non-GAAP loss per share was $0.01.

“We met our strategic objectives in 2016, successfully executing our product roadmaps, regaining share in key markets, strengthening our financial foundation, and delivering annual revenue growth,” said Dr. Lisa Su, AMD president and CEO. “As we enter 2017, we are well positioned and on-track to deliver our strongest set of high-performance computing and graphics products in more than a decade.”

Q4 2016 Results

  • Q4 2016 was a 14-week fiscal quarter compared to 13-week fiscal quarters for Q3 2016 and Q4 2015.
  • Revenue of $1.11 billion was up 15 percent year-over-year, primarily due to higher GPU sales. Revenue was down 15 percent sequentially, primarily driven by seasonally lower sales of semi-custom SoCs.
  • On a GAAP basis, gross margin was 32 percent, up 2 percentage points year-over-year and up 27 percentage points sequentially, as Q3 2016 gross margin was negatively impacted by a $340 million charge (WSA charge) related to the sixth amendment of the wafer supply agreement with GLOBALFOUNDRIES. Operating loss was $3 million compared to an operating loss of $49 million a year ago and an operating loss of $293 million in the prior quarter. The year-over-year improvement was primarily due to higher revenue and an IP monetization licensing gain, while the sequential improvement was primarily due to the absence of the WSA charge, offset by lower fourth quarter revenue. Net loss was $51 million compared to a net loss of $102 million a year ago and a net loss of $406 million in the prior quarter. Loss per share was $0.06 compared to a loss per share of $0.13 a year ago and a loss per share of $0.50 in the prior quarter.
  • On a non-GAAP basis, gross margin was 32 percent, up 2 percentage points year-over-year and up 1 percentage point sequentially primarily due to higher Computing and Graphics segment revenue. Operating income was $26 million compared to an operating loss of $39 million a year ago and operating income of $70 million in the prior quarter. Operating income was lower in the current quarter due to lower revenue. Net loss was $8 million compared to net loss of $79 million a year ago and net income of $27 million in the prior quarter. Loss per share was $0.01 compared to a loss per share of $0.10 a year ago and earnings per share of $0.03 in the prior quarter.
  • Cash and cash equivalents were $1.26 billion at the end of the quarter, up $6 million from the end of the prior quarter.

2016 Annual Results

  • Revenue of $4.27 billion was up 7 percent year-over-year, with increases in both reportable segments.
  • On a GAAP basis, gross margin was 23 percent, down 4 percentage points from the prior year primarily due to the WSA charge. Operating loss was $372 million compared to an operating loss of $481 million in the prior year. Operating loss improvement was due to higher revenue, lower restructuring charges, and an IP monetization licensing gain, offset by the WSA charge. Net loss was $497 million compared to a net loss of $660 million in the prior year. Loss per share was $0.60 compared to a loss per share of $0.84 in 2015.
  • On a non-GAAP basis, gross margin was 31 percent, up 3 percentage points year-over-year  primarily due to improved product mix and an inventory write-down recorded in Q3 2015. Operating income was $44 million compared to an operating loss of $253 million in the prior year. Operating income improvement was primarily related to higher revenue and the IP monetization licensing gain. Net loss was $117 million compared to a net loss of $419 million in the prior year. Loss per share was $0.14 compared to a loss per share of $0.54 in 2015.
  • Cash and cash equivalents were $1.26 billion at the end of the year, up from $785 million at the end of the prior year.

Quarterly Financial Segment Summary

  • Computing and Graphics segment revenue was $600 million, up 28 percent year-over-year and 27 percent sequentially. The year-over-year increase was primarily driven by higher GPU sales. The sequential increase was primarily due to higher GPU and client processor sales.
    • Operating loss was $21 million, compared to an operating loss of $99 million in Q4 2015 and an operating loss of $66 million in Q3 2016. The year-over-year and sequential improvements were driven primarily by higher revenue.
    • Client average selling price (ASP) was down year-over-year driven by desktop processors, and down sequentially driven by desktop and mobile processors.
    • GPU ASP increased year-over-year due to higher desktop and professional graphics ASPs. GPU ASP increased sequentially due to higher mobile and professional graphics ASPs.
  • Enterprise, Embedded and Semi-Custom segment revenue was $506 million, up 4 percent year-over-year primarily driven by higher embedded and semi-custom SoC revenue. Sequentially, revenue decreased 39 percent due to seasonally lower sales of semi-custom SoCs.
    • Operating income was $47 million compared to $59 million in Q4 2015 and $136 million in Q3 2016. The year-over-year decrease was primarily driven by higher R&D investments in Q4 2016, partially offset by an IP monetization licensing gain. The sequential decrease was primarily due to seasonally lower sales of semi-custom SoCs.
  • All Other operating loss was $29 million compared with an operating loss of $9 million in Q4 2015 and an operating loss of $363 million in Q3 2016. The year-over-year operating loss increase was primarily related to higher stock-based compensation charges in Q4 2016. The sequential improvement was primarily due to the absence of the WSA charge.

Q4 2016 Highlights

  • AMD disclosed new details on its upcoming CPU and GPU architectures and offerings:
    • AMD delivered new details on the architecture, go-to-market plans, and performance of upcoming “Zen”-based processors:
      • Revealed Ryzen, the brand that will span “Zen”-based desktop (codenamed “Summit Ridge”) and notebook (codenamed “Raven Ridge”) products.
      • Introduced AMD SenseMI technology, a set of sensing, adapting, and learning features built into AMD Ryzen processors. AMD SenseMI technology is a key enabler of AMD’s landmark generational increase of greater than 40 percent in instructions per clock with its “Zen” core architecture.
      • Delivered a first look at the impressive gaming capabilities of an AMD Ryzen CPU and Vega GPU-based desktop system running Star Wars:  Battlefront – Rogue One in 4K at more than 60 frames per second.
      • Showcased ecosystem readiness and the breadth of partner support for forthcoming Ryzen desktop processors with new AM4 motherboards and ‘Dream PCs’ from global system integrators (SIs), as well as upcoming third-party AM4 thermal solutions.
    • AMD introduced preliminary details of its forthcoming Vega GPU architecture designed to address the most data- and visually-intensive next-generation workloads. Key architecture advancements include a differentiated memory subsystem, next-generation geometry pipeline, new compute engine, and a new pixel engine. GPU products based on the Vega architecture are expected to ship in the second quarter of 2017.
  • AMD announced a new collaboration with Google, making Radeon GPU technology available to Google Cloud Platform users worldwide starting in 2017 to help accelerate Google Compute Engine and Google Cloud Machine Learning services.
  • To accelerate the machine intelligence era in server computing, AMD unveiled the Radeon Instinct initiative, a new suite of GPU hardware and open-source software offerings designed to dramatically increase performance, efficiency, and ease of implementation of deep learning and high-performance compute (HPC) workloads. Radeon Instinct products are expected to ship in 1H 2017.
  • AMD introduced several new products and technologies in the quarter, including:
    • New 7th Generation AMD PRO Processor-based commercial desktops and notebooks from Lenovo.
    • Radeon Pro WX Series of professional graphics cards based on the Polaris architecture, featuring fourth-generation Graphics Core Next (GCN) technology, and engineered on the 14nm FinFET process.
    •  A new family of power-efficient graphics processors, the Radeon Pro 400 Series, first available in the all-new 15-inch Apple MacBook Pro.
    • Radeon FreeSync 2 technology, the next major milestone in delivering smooth gameplay and advanced pixel integrity to gamers, with planned availability to consumers in 1H 2017, adding to the 100+ FreeSync monitors already available today.
    • Radeon Pro Software Enterprise, Radeon Software Crimson ReLive Edition, and updates to the Radeon Open Compute Platform (ROCm) software solutions.

Current Outlook

For Q1 2017, AMD expects revenue to decrease 11 percent sequentially, plus or minus 3 percent. The midpoint of guidance would result in Q1 2017 revenue increasing approximately 18 percent year-over-year. For additional details regarding AMD’s results and outlook please see the CFO commentary posted at quarterlyearnings.amd.com.
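
As a rough illustration of the guidance arithmetic (an illustrative Python calculation; the year-ago revenue base is inferred from the stated percentages rather than quoted from the release):

    # Sanity check of the Q1 2017 guidance math (illustrative only).
    q4_2016_revenue = 1.11e9                    # reported Q4 2016 revenue, USD
    midpoint = q4_2016_revenue * (1 - 0.11)     # 11% sequential decline at the midpoint
    low = q4_2016_revenue * (1 - 0.14)          # 14% decline (midpoint minus 3 points)
    high = q4_2016_revenue * (1 - 0.08)         # 8% decline (midpoint plus 3 points)
    implied_q1_2016 = midpoint / 1.18           # ~18% year-over-year growth at the midpoint
    print(f"Guidance midpoint: ${midpoint / 1e9:.2f}B (range ${low / 1e9:.2f}B to ${high / 1e9:.2f}B)")
    print(f"Implied Q1 2016 revenue base: ~${implied_q1_2016 / 1e9:.2f}B")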

About AMD

For more than 45 years, AMD has driven innovation in high-performance computing, graphics, and visualization technologies — the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses, and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.

Source: AMD

The post AMD Reports Fourth Quarter and Annual 2016 Financial Results appeared first on HPCwire.

Fibre Channel Industry Association Elects 2016/17 Board of Directors

Wed, 02/01/2017 - 06:35

Feb. 1 — Supporting continued advancement of the purpose-built, data center proven network infrastructure for storage, the Fibre Channel Industry Association (FCIA) today announced its 2016-17 Board of Directors.

“With their strong backgrounds in technology roles, the directors of the FCIA Board provide valuable insight and assistance in promoting the continued growth of Fibre Channel for use in 2017 and beyond,” said Mark Jones, president and chairman of the board, FCIA. “Fibre Channel is known by data center professionals for its high bandwidth, low latency and extreme reliability and we will continue to be a focal point of information, standards and education to maintain that trust in the industry.”

The members of the 2016/2017 FCIA Board of Directors are:

FCIA Officers:

  • Chairman and President: Mark Jones, Broadcom Limited
  • Treasurer: Greg McSorley, Amphenol
  • Secretary: J Metz, Cisco

Members at Large:

  • Marketing Chair: Rupin Mohan, Hewlett Packard Enterprise
  • Craig Carlson, Cavium
  • Kevin Ehringer, DCS
  • Jay Neer, Molex
  • Steven Wilson, Brocade

In 2016, FCIA launched several new initiatives that extend Fibre Channel’s position as the industry’s most reliable and robust storage networking solution, including:

  • First public demonstration of NVM Express over Fabrics using Gen 6 32G Fibre Channel at the 2016 Flash Memory Summit
  • A Plugfest held during the week of June 20th, 2016 at the University of New Hampshire InterOperability Lab (UNH-IOL) with 11 companies participating
  • The release of an updated Fibre Channel Roadmap, showing the historic speeds and feeds of Fibre Channel and future speeds up to Terabit Fibre Channel (TFC)
  • Continued development of Gen 6 Fibre Channel, the fastest industry-standard networking protocol, enabling storage area network speeds of up to 128GFC
  • A proposed 64GFC specification that is set to double data bandwidth over existing Gen 6 32GFC and 128GFC. When completed, it will come in a single-lane serial variant supporting up to a 12,800 MB/s data rate and a four-lane parallel variant (256GFC) supporting up to a 51,200 MB/s data rate (see the arithmetic check after this list).
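
A quick check of those quoted data rates (illustrative Python arithmetic, not from the FCIA announcement):

    # The four-lane figure is simply four single lanes aggregated.
    single_lane_mb_s = 12_800          # proposed 64GFC single-lane data rate, MB/s
    lanes = 4
    parallel_mb_s = single_lane_mb_s * lanes
    assert parallel_mb_s == 51_200     # matches the quoted 256GFC four-lane rate
    print(f"{lanes} lanes x {single_lane_mb_s:,} MB/s = {parallel_mb_s:,} MB/s")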

FCIA BrightTALK Webcasts

In 2017, FCIA is also launching a series of BrightTALK webcasts in which thought leaders will share their insights and provide up-to-date information on the Fibre Channel industry. Don’t miss the opportunity to attend the first webcast, scheduled for Thursday, February 16 at 11:00 a.m. PST and titled “Introducing Fibre Channel NVMe.” J Michel Metz, R&D engineer in the Office of the CTO at Cisco and FCIA board member, and Craig Carlson, senior technologist at Cavium and FCIA board member, will lead the discussion, followed by a Q&A session.

Register today at: http://bit.ly/2jc55H9.

About FCIA

The Fibre Channel Industry Association (FCIA) is a non-profit international organization whose sole purpose is to act as the independent technology and marketing voice of the Fibre Channel industry. We are committed to helping member organizations promote and position Fibre Channel, and to providing a focal point for Fibre Channel information, standards advocacy, and education.  FCIA members include manufacturers, system integrators, developers, vendors, industry professionals, and end users. Our member-led working groups and committees focus on creating and championing the Fibre Channel technology roadmaps, targeting applications that include data storage, video, networking, and storage area network (SAN) management. For more info, go to http://www.fibrechannel.org.

Source: FCIA

The post Fibre Channel Industry Association Elects 2016/17 Board of Directors appeared first on HPCwire.

New UChicago Startup Offering HPC Solutions to Businesses

Tue, 01/31/2017 - 14:49

CHICAGO, Ill., Jan. 31 — A new University of Chicago startup is delivering supercomputing-as-a-service to small- and medium-sized businesses and helping these firms more readily compete with industry behemoths.

High-performance computing drives innovation and efficiency in the fast-growing industries that are rapidly transforming our lives — smart cities, Internet of Things, self-driving cars, advanced materials, environmental sustainability and personalized medicine.

Using technology developed at Argonne National Laboratory and the University of Chicago, Parallel Works has democratized this powerful computing practice, which is typically limited to the upper echelons of industry due to cost, complexity and resource constraints.

Parallel Works customers can run advanced simulations on large-scale computing resources without requiring specialized skills in parallel programming and computer science.

Klimaat Consulting and Innovation, an engineering consulting firm focused on climate responsive design, uses the Parallel Works platform for detailed urban micro-climate simulations. This helps their clients, including some of the world’s leading architectural firms, make quick, informed urban design and ecology decisions and create healthier city habitats.

“Our belief is that big computing will be the next wave after big data,” said Parallel Works CEO Mike Wilde. “Most engineering and scientific investigations require a tremendous amount of computing power. We’re enabling those studies for companies that previously could not afford it, and making it much more productive for those large companies that are already deeply dependent on large-scale computation.”

Wilde is a software architect at Argonne and senior fellow at the University of Chicago’s Computation Institute. The company has received funding from the University of Chicago Innovation Fund and the U.S. Department of Energy; it is housed at the University’s Polsky Center for Entrepreneurship and Innovation.

This funding has enabled Parallel Works to onboard early customers. The company is currently raising a seed round to accelerate sales and continue platform development.

The Parallel Works platform offers scalable computing as a service, adjusting computing power based on demand. A large energy simulation that once took 20 days can now finish in less than an hour, by using 5,000 processing units working in parallel.

Parallel Works is built on the Swift parallel scripting system, which enables scientists and researchers to more easily deploy large and complex simulation studies on supercomputers.
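Swift has its own scripting language, so the snippet below is only a plain-Python illustration of the many-task pattern such systems automate: run one simulation per parameter set in parallel and gather the results. The function and parameter names are hypothetical placeholders, not the Parallel Works or Swift API.

    # Minimal sketch of the many-task pattern that systems like Swift automate.
    # 'run_simulation' and its parameters are hypothetical placeholders.
    from concurrent.futures import ProcessPoolExecutor

    def run_simulation(wind_speed_mps: float, building_height_m: float) -> float:
        """Stand-in for an expensive solver; returns a fake comfort score."""
        return wind_speed_mps / (1.0 + building_height_m / 100.0)

    def parameter_sweep():
        cases = [(w, h) for w in (2.0, 5.0, 8.0) for h in (50.0, 150.0, 300.0)]
        with ProcessPoolExecutor() as pool:
            # One task per parameter set, executed concurrently.
            scores = list(pool.map(run_simulation, *zip(*cases)))
        return dict(zip(cases, scores))

    if __name__ == "__main__":
        for case, score in parameter_sweep().items():
            print(case, round(score, 3))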

“Supercomputing ensures that a product’s design is the best it can be and that the optimal  answer is found fast,” Wilde said. “We’re increasing the value of a manufacturer’s most valuable asset, its engineers, by eliminating the often frustrating challenges of coding and managing computing hardware.”

Source: University of Chicago

The post New UChicago Startup Offering HPC Solutions to Businesses appeared first on HPCwire.

Intersect360 on 2016 Top Performers and Trends

Tue, 01/31/2017 - 14:06

HPC supplier consolidation in 2016 is showing up on market watcher scorecards, with Dell EMC (15.7 percent) edging ahead of HPE (14.7 percent) as the top HPC system supplier, according to Intersect360 Research’s just-released report, Top of All Things in HPC, 2016. Once HPE absorbs SGI (7.3 percent), it will likely edge back into first. Lenovo (10.1 percent) and Cray (4.9 percent) were also in the top five. NVIDIA again dominated the accelerator market (systems and number of accelerators in use).

The Intersect360 report is based on HPC technology use as reported in a survey of 487 HPC user sites, not on vendor-reported revenue. For the purposes of the report, ‘top’ is defined as the companies that together account for 50 percent market share, or the top five, whichever comes first. The survey was conducted in the second and third quarters of 2016.
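That selection rule is simple to state precisely; the sketch below uses made-up share numbers, not the report’s survey data.

    # Hedged sketch of the report's 'top' rule: take vendors in descending
    # share order until they jointly reach 50% of the market, or five vendors,
    # whichever comes first. Shares are illustrative only.
    def top_vendors(shares: dict[str, float], cutoff: float = 50.0, cap: int = 5) -> list[str]:
        top, running = [], 0.0
        for vendor, share in sorted(shares.items(), key=lambda kv: kv[1], reverse=True):
            if running >= cutoff or len(top) >= cap:
                break
            top.append(vendor)
            running += share
        return top

    print(top_vendors({"A": 16.0, "B": 15.0, "C": 10.0, "D": 5.0, "E": 4.0, "F": 3.0}))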

Other prominent trends were:

  • Mellanox gained share of mentions in every locale for system interconnect and networks, with storage showing the largest gain.
  • Middleware profile remained consistent with the prior year with programming environment, job management, and compilers being the top three subcategories.
  • Top five ISV application packages and the top five Open Source application packages remained the same between 2015 and 2016.

There’s a lot to watch in the frothy processor market, where accelerators of all stripes continue to gain more use and ARM and IBM Power offerings strive to win market share from Intel. “The most significant is the battle between Xeon Phi and NVIDIA GPUs for share of the many-core market, which will be fought in both HPC and Hyperscale markets. For emerging processors, we’ll be keeping close tabs on both ARM and POWER, which had similar end-user outlooks one year ago. This year will start to show which of them will get more traction,” said Addison Snell, CEO, Intersect360 Research.

“FPGAs are still relevant, particularly for highly scalable applications that are predominantly text or integer-based, rather than floating point. They could get a boost in certain hyperscale applications, such as search, where GPUs are more targeted to machine learning and deep learning. The overall trend is one of architectural diversity, in which end users will match application workloads to the architectures that suit them best. 88% of HPC users expect to support multiple architectures going forward.”

After accounting for the merged Dell EMC’s rise in market share, the storage segment remained much the same. It’s still fairly fragmented in terms of vendors and products. Data Direct Networks (14.8 percent), Dell EMC (12.7 percent) and IBM (11.0 percent) were the top providers, followed by NetApp (8.5 percent) and HPE (7.2 percent). The ‘others’ category remains the largest slice (45.8 percent), and Intersect360 reports Amazon Web Services is creeping up, growing from 1 percent in 2014 to 1.7 percent in 2016.

Storage software use remains fragmented as well. SpectrumScale/GPFS, Lustre, and PanFS are the top storage software packages, and in total 57 unique packages were cited by respondents. This agrees well with an observation by Ari Berman of BioTeam, who noted that in life sciences, “[W]e came up with 48 viable, active types of file systems out there that people are using actively in life science. And they all have vastly different characteristics – management potential, scalability, throughput speed, replication, data safety, all that stuff.” (See HPCwire article: BioTeam’s Berman Charts 2017 HPC Trends in Life Sciences)

Snell said, “One of the most noteworthy things to pull from the survey responses is the strength of IBM Spectrum Scale (GPFS) relative to Lustre. Lustre often gets more of the attention, but IBM has been quietly successful with its data-centric strategy and has a strong position in high-performance storage. DDN also perennially does well in our surveys, with a survey share higher than the company’s true revenue share, demonstrating that DDN’s customers strongly identify as ‘HPC.’”

All said, the rough and tumble of the storage market continues with no real sign of consolidation yet.

The interconnect segment is growing more interesting. As noted earlier, Ethernet use grew in the Top500 this year, although InfiniBand “continues to be the performance system of choice,” according to the Intersect360 report. Intel’s Omni-Path Architecture hasn’t shown up in significant numbers yet, but Intersect360 expects to see increased OPA market share in 2017.

“Frankly the past year’s surveys would have been too early to really get a sense of Omni-Path adoption. In a survey in early 2016, 51% of HPC users had a favorable forward-looking impression of Omni-Path (versus 76% favorable for InfiniBand),” said Snell.

Of course everyone, including the HPC industry, is watching President Trump for clues about how the administration’s actions will affect them. Much is still uncertain. “I would revert back to what I said in the HPCwire video at SC16 – expect disinvestment from public sector research (DOE, NOAA, NASA, NSF, NIH, …), but conversely, policies may encourage spending in key commercial vertical markets for HPC. For example, the administration favors investment in oil and gas and manufacturing, and deregulation in finance and pharmaceuticals.”

The post Intersect360 on 2016 Top Performers and Trends appeared first on HPCwire.

Carnegie Mellon AI Beats Top Poker Pros

Tue, 01/31/2017 - 07:18

Jan. 31 — Libratus, an artificial intelligence developed by Carnegie Mellon University, made history by defeating four of the world’s best professional poker players in a marathon 20-day poker competition called “Brains Vs. Artificial Intelligence: Upping the Ante” at Rivers Casino in Pittsburgh.

Once the last of 120,000 hands of Heads-up, No-Limit Texas Hold’em were played on Jan. 30, Libratus led the pros by a collective $1,766,250 in chips. The developers of Libratus — Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in computer science — said the sizable victory is statistically significant and not simply a matter of luck.

“The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” Sandholm said.

This new milestone in artificial intelligence has implications for any realm in which information is incomplete and opponents sow misinformation, said Frank Pfenning, head of the Computer Science Department in CMU’s School of Computer Science. Business negotiation, military strategy, cybersecurity and medical treatment planning could all benefit from automated decision-making using a Libratus-like AI.

“The computer can’t win at poker if it can’t bluff,” Pfenning said. “Developing an AI that can do that successfully is a tremendous step forward scientifically and has numerous applications. Imagine that your smartphone will someday be able to negotiate the best price on a new car for you. That’s just the beginning.”

The pros — Dong Kim, Jimmy Chou, Daniel McAulay and Jason Les — will split a $200,000 prize purse based on their respective performances during the event.

McAulay, of Scotland, said Libratus was a tougher opponent than he expected, but it was exciting to play against it.

“Whenever you play a top player at poker, you learn from it,” McAulay said.

Les, of Costa Mesa, Calif., agreed that superior opponents help poker players improve.

“Usually, you have to lose a lot and pay a lot of money for the experience,” he said. “Here, at least I’m not losing any money.”

“This experiment demanded that we assemble some of the world’s best professional poker players who specialize in Heads-up No-Limit Texas Hold’em and that they would play to the best of their abilities throughout the long contest,” Brown said. “These players more than met that description and proved to be a tenacious team of opponents for Libratus, studying and strategizing together throughout the event.”

Libratus’ victory was made possible by the Pittsburgh Supercomputing Center’s Bridges computer, on which the AI computed its strategy before and during the event, and by Rivers Casino, which hosted the event.

“Rivers Casino was proud to partner with Carnegie Mellon University and the Pittsburgh Supercomputing Center to host the Brains Vs. Artificial Intelligence: Upping the Ante competition,” said Craig Clark, general manager of Rivers Casino. “History-making events like this are very important as they increase awareness of how companies in Pittsburgh are impacting the world.”

The event was surrounded by speculation about how Libratus was able to improve day to day during the competition. It turns out it was the pros themselves who taught Libratus about its weaknesses.

“After play ended each day, a meta-algorithm analyzed what holes the pros had identified and exploited in Libratus’ strategy,” Sandholm said. “It then prioritized the holes and algorithmically patched the top three using the supercomputer each night. This is very different than how learning has been used in the past in poker. Typically researchers develop algorithms that try to exploit the opponent’s weaknesses. In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy.”
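CMU has not detailed the meta-algorithm in this article, so the following is only a loose, hypothetical sketch of the prioritize-and-patch loop Sandholm describes; the data structures and scoring rule are illustrative, not Libratus internals.

    # Hypothetical sketch of the nightly loop: rank the 'holes' the pros
    # exploited that day and patch the top three. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Hole:
        situation: str        # abstract description of the game situation
        times_exploited: int  # how often the pros hit it that day
        chips_lost: float     # chips lost when they did

    def nightly_patch(holes: list[Hole], budget: int = 3) -> list[Hole]:
        ranked = sorted(holes, key=lambda h: h.times_exploited * h.chips_lost, reverse=True)
        to_patch = ranked[:budget]
        for hole in to_patch:
            # In the real system this step re-solves part of the strategy on the
            # supercomputer overnight; here it is only a placeholder.
            print(f"re-solving strategy around: {hole.situation}")
        return to_patch

    nightly_patch([
        Hole("overbet on paired river boards", 14, 900.0),
        Hole("small-blind limp lines", 6, 300.0),
        Hole("4-bet bluff frequency", 9, 1200.0),
        Hole("turn check-raise response", 3, 150.0),
    ])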

Sandholm also said that Libratus’ end-game strategy, which was computed live with the Bridges computer for each hand, was a major advance.

“The end-game solver has a perfect analysis of the cards,” he said.

It was able to update its strategy for each hand in a way that ensured any late changes would only improve the strategy. Over the course of the competition, the pros responded by making more aggressive moves early in the hand, no doubt to avoid playing in the deep waters of the endgame where the AI had an advantage, he added.

Sandholm will be sharing all of Libratus’ secrets now that the competition is over, beginning with invited talks at the Association for the Advancement of Artificial Intelligence meeting Feb. 4-9 in San Francisco and in submissions to peer-reviewed scientific conferences and journals.

Throughout the competition, Libratus recruited the raw power of approximately 600 of Bridges’ 846 compute nodes. Bridges’ total speed is 1.35 petaflops, about 7,250 times as fast as a high-end laptop, and its memory is 274 terabytes, about 17,500 times as much as you’d get in that laptop. This computing power gave Libratus the ability to play four of the best Texas Hold’em players in the world at once and beat them.
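Those laptop comparisons are easy to sanity-check with back-of-the-envelope arithmetic; the implied laptop figures below are approximations derived from the ratios in the text, not PSC’s published specifications.

    # Back-of-the-envelope check of the laptop comparisons quoted above.
    bridges_flops = 1.35e15            # 1.35 petaflops
    bridges_memory_tb = 274            # terabytes

    implied_laptop_gflops = bridges_flops / 7_250 / 1e9         # ~186 gigaflops
    implied_laptop_ram_gb = bridges_memory_tb * 1_000 / 17_500  # ~16 GB

    print(f"implied laptop speed:  ~{implied_laptop_gflops:.0f} gigaflops")
    print(f"implied laptop memory: ~{implied_laptop_ram_gb:.0f} GB")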

“We designed Bridges to converge high-performance computing and artificial intelligence,” said Nick Nystrom, PSC’s senior director of research and principal investigator for the National Science Foundation-funded Bridges system. “Libratus’ win is an important milestone toward developing AIs to address complex, real-world problems. At the same time, Bridges is powering new discoveries in the physical sciences, biology, social science, business and even the humanities. With its unique emphasis on usability, new projects are always welcome.”

Sandholm said he will continue his research push on the core technologies involved in solving imperfect-information games and in applying these technologies to real-world problems. That includes his work with Optimized Markets, a company he founded to automate negotiations.

“CMU played a pivotal role in developing both computer chess, which eventually beat the human world champion, and Watson, the AI that beat top human Jeopardy! competitors,” Pfenning said. “It has been very exciting to watch the progress of poker-playing programs that have finally surpassed the best human players. Each one of these accomplishments represents a major milestone in our understanding of intelligence.”

Brains Vs. AI was sponsored by GreatPoint Ventures, Avenue4Analytics, TNG Technology Consulting GmbH, the journal Artificial Intelligence, Intel and Optimized Markets, Inc. Carnegie Mellon’s School of Computer Science partnered with Rivers Casino, the Pittsburgh Supercomputing Center (PSC) through a peer-reviewed XSEDE allocation, and Sandholm’s Electronic Marketplaces Laboratory for the event.

Heads-Up No-Limit Texas Hold’em is an exceedingly complex game, with 10^160 (the number 1 followed by 160 zeroes) information sets — each set being characterized by the path of play in the hand as perceived by the player whose turn it is. That’s vastly more information sets than the number of atoms in the universe.

The AI must make decisions without knowing all of the cards in play, while trying to sniff out bluffing by its opponent. As “no-limit” suggests, players may bet or raise any amount up to all of their chips.

About Carnegie Mellon University

Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.

About the Pittsburgh Supercomputing Center

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

About Rivers Casino

Opened in 2009, Rivers Casino has been voted a “Best Place to Work” in the Pittsburgh Business Times, a “Top Workplace” in the Pittsburgh Post-Gazette, “Best Overall Gaming Resort” in Pennsylvania by Casino Player magazine and “Best Overall Casino” in Pennsylvania by Strictly Slots magazine. The casino features more than 2,900 slots, 83 table games, a 30-table poker room, nine distinctive restaurants and bars, a riverside amphitheater, a multipurpose event space, live music performances, free parking and multiple promotions and giveaways daily. Already, more than $631 million in jackpots have been awarded to players at Rivers Casino. For more information, visit riverscasino.com.

Source: Carnegie Mellon University

The post Carnegie Mellon AI Beats Top Poker Pros appeared first on HPCwire.

Fujitsu Reports Fiscal 2016 Third Quarter Financial Results

Tue, 01/31/2017 - 07:15

TOKYO, Japan, Jan. 31 — Fujitsu today reported a profit for the third quarter attributable to owners of the parent of 20.3 billion yen, representing an improvement of 15.0 billion yen compared to the third quarter of fiscal 2015.

Consolidated revenue for the third quarter of fiscal 2016 was 1,115.4 billion yen, down 51.4 billion yen from the third quarter of fiscal 2015, but was essentially unchanged on a constant-currency basis. Revenue in Japan rose 3.5%. Revenue in the Services sub-segment increased, primarily from system integration business, and revenue from network products also rose. On the other hand, revenue outside of Japan decreased 15.0%. Results were significantly impacted by foreign exchange movements, and, in addition, there was a decline in revenue from infrastructure services in Europe. Compared to the same period in the prior fiscal year, the appreciation of the yen against the US dollar, the British pound, and other currencies served to reduce revenue by roughly 60.0 billion yen.

Fujitsu recorded an operating profit of 37.3 billion yen, up 23.2 billion yen from the third quarter of fiscal 2015. Operating profit improved because of cost reductions in PCs and mobile phones, and operating profit from network products in Japan benefited from higher revenue. In addition, business model transformation expenses fell by about 10.1 billion yen, from 17.6 billion yen to 7.4 billion yen.

Net financial income was 5.5 billion yen, representing an improvement of 2.9 billion yen from the same period in fiscal 2015, primarily from foreign exchange gains. Income from investments accounted for using the equity method was a loss of 0.7 billion yen, representing a deterioration of 4.0 billion yen from the same period in the prior fiscal year, primarily because a reserve was recorded to cover potential losses from an affiliated company in Japan.

As a result, profit for the period before income taxes was 42.1 billion yen, an increase of 22.1 billion yen from the third quarter of the previous fiscal year.

Business Segment Financial Results

Revenue in the Technology Solutions segment amounted to 764.5 billion yen, a decrease of 4.5% from the third quarter of fiscal 2015. Revenue in Japan rose 6.5%. In the Services sub-segment, revenue from system integration services and revenue from infrastructure services both rose. In the System Platforms sub-segment, revenue from network products rose on sales of mobile phone base stations to telecommunications carriers. Revenue outside Japan fell 20.3%. In addition to the impact of foreign exchange movements, revenue from infrastructure services fell on weak sales in Europe and the US. The segment posted an operating profit of 50.6 billion yen, up 15.6 billion yen compared to the same period in fiscal 2015. Despite the impact of lower revenue from the Services sub-segment outside Japan, operating profit increased, primarily due to the effects of higher revenue in the Services sub-segment in Japan and from network products. Business model transformation expenses declined by 9.5 billion yen, from 15.9 billion yen in the third quarter of fiscal 2015 (related to the realignment of the hardware products business in Europe) to 6.4 billion yen in the third quarter of fiscal 2016 (related to a shift of resources to digital services at European locations).

Revenue in the Ubiquitous Solutions segment was 259.6 billion yen, essentially unchanged from the third quarter of fiscal 2015. Revenue in Japan rose by 4.3%. For PCs, revenue rose on the back of continuing strong sales of enterprise PCs. Revenue outside Japan fell by 7.2%. Excluding foreign exchange movements, revenue was essentially unchanged from the same period the previous year. The segment posted an operating profit of 9.6 billion yen, an improvement of 10.7 billion yen over the same period in fiscal 2015. For PCs, operating profit improved because of the impact of higher revenue in Japan as well as cost efficiencies, in addition to ongoing component cost reductions at locations in Japan because of the continued strength of the yen against the US dollar.

Revenue in the Device Solutions segment amounted to 137.0 billion yen, down 9.6% from the third quarter of fiscal 2015. The segment posted an operating profit of 4.3 billion yen, down 1.4 billion yen from the third quarter of fiscal 2015. In addition to the impact of lower revenue from LSI devices, particularly for use in smartphones, operating profit declined for both LSI devices and electronic components due to the impact of lower revenue as a result of the continuing strength of the yen against the US dollar.

Fiscal 2016 Consolidated Projections

Fujitsu has revised its full-year fiscal 2016 financial forecast announced on October 27, 2016, as follows.

There has been no change to the consolidated totals in the previously announced forecast: revenue of 4,500.0 billion yen, operating profit of 120.0 billion yen, and profit for the period attributable to owners of the parent of 85.0 billion yen. Forecasts for individual segments, however, have been revised.

The forecast for revenue for Technology Solutions has been reduced by 40.0 billion yen. At the same time, the forecast for Ubiquitous Solutions has been increased by 30.0 billion yen, while Other/Elimination and Corporate has been increased by 10.0 billion yen.

With regard to operating profit, 7.0 billion yen in business model transformation expenses (shifting resources to businesses related to digital services at locations in Europe) that, in the forecast announced last October, were included in the Other/Elimination and Corporate segment have been reallocated, with 6.0 billion to Technology Solutions and 1.0 billion to Ubiquitous Solutions. Besides this, reflecting business variability, the forecast for Technology Solutions has been reduced by 9.0 billion yen. The forecast for Ubiquitous Solutions, Device Solutions and Other/Elimination and Corporate has been increased by 1.0 billion yen, 7.0 billion yen and 1.0 billion yen respectively.

About Fujitsu

Fujitsu is the leading Japanese information and communication technology (ICT) company, offering a full range of technology products, solutions, and services. Approximately 159,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702; ADR:FJTSY) reported consolidated revenues of 4.7 trillion yen (US$41 billion) for the fiscal year ended March 31, 2016. For more information, please see http://www.fujitsu.com.

Source: Fujitsu

The post Fujitsu Reports Fiscal 2016 Third Quarter Financial Results appeared first on HPCwire.

Broad Portfolio of Cavium QLogic Technology Now Available on HPE Synergy Platform

Tue, 01/31/2017 - 06:59

SAN JOSE, Calif., Jan. 31 — Today, Cavium, Inc. (NASDAQ: CAVM), a leading provider of semiconductor products that enable intelligent processing for enterprise and cloud data centers, announced that a broad portfolio of its next-generation QLogic Fibre Channel and Ethernet technologies is now available on the Hewlett Packard Enterprise (HPE) Synergy Composable Infrastructure blade server platform. HPE Synergy is the world’s first platform architected for Composable Infrastructure — built from the ground up to bridge traditional and new IT with the agility, speed and continuous delivery needed for today’s applications.

Cavium QLogic is the leading I/O innovator with HPE on the HPE Synergy platform and is the exclusive provider of 10GbE and 20GbE I/O with the HPE Synergy 2820C and HPE Synergy 3820C Converged Network Adapters (CNAs). QLogic technology also provides 16Gb Fibre Channel connectivity for HPE Synergy customers with the HPE Synergy 3830C 16Gb FC Adapter. In addition, QLogic FastLinQ Ethernet technology provides the internal networking within the HPE Synergy frame through the QLogic 57840S ASIC LOM on the HPE Synergy Composer modules, which delivers the scalability and flexibility required to build the composable infrastructure and provides iSCSI SAN connectivity within the HPE Synergy frame.

Cavium QLogic I/O Technologies for HPE Synergy include:

  • The HPE Synergy 2820C 10Gb Converged Network Adapter is a key element in the HPE Composable fabric, connecting pools of compute resources to networks with reliable, high-performing converged 10Gbps Ethernet connectivity. With Flex-10 Technology, it converges Ethernet and FCoE onto a single connection, simplifying hardware and reducing costs. Concurrent with storage I/O functionality, this adapter also enables single root I/O virtualization (SR-IOV) capabilities for networking functions.
  • The HPE Synergy 3820C 10/20Gb Converged Network Adapter is another key element in the HPE Composable fabric, connecting pools of compute resources to networks with reliable, high-performing converged 10Gb or 20Gb Ethernet connectivity. With Flex-20 Technology, the Synergy 3820C converges Ethernet, iSCSI and FCoE onto a single connection, simplifying hardware management and reducing costs by up to 60%. The Synergy 3820C is an ideal choice for any virtualized or converged data center.
  • Designed for the HPE Synergy Composable fabric, the HPE Synergy 3830C 16Gb Fibre Channel Host Bus Adapter connects Synergy compute resource pools to SANs over 16Gb native Fibre Channel (FC) fabrics. It provides high-performance connectivity to HPE Synergy Virtual Connect FC Modules and Brocade FC Switch Modules. The Synergy 3830C supports advanced virtualization, security, port isolation, dynamic power management and low CPU utilization features. QLogic StorFusion technology built into the Synergy 3830C integrates with Brocade Gen 5 16Gb FC fabrics, enabling rapid deployment and orchestration, advanced diagnostics and improved reliability and resiliency for HPE Synergy frames connecting to shared SAN storage. Doubling the I/O performance of 8Gb FC HBAs, the Synergy 3830C is ideal for FC storage-intensive workloads.
  • The Cavium QLogic 57840S ASIC LOM on the HPE Synergy Composer modules utilizes QLogic NIC partitioning (NPAR) technology to virtualize the physical connections within the HPE Synergy Composer, which in turn delivers the scalability and flexibility required to build the composable infrastructure.

The expanded suite of Fibre Channel and Ethernet-based adapters for HPE Synergy is available and shipping today. For more information, please visit the QLogic HPE Partnership microsite.

“Cavium QLogic is a strategic partner for HPE,” said Raghib Hussain, Chief Operating Officer, Cavium. “The introduction of HPE’s flagship Synergy platform and QLogic FastLinQ Ethernet and Gen 5 Fibre Channel adapters is a major milestone in this partnership. Now, enterprise data centers and service providers can increase operational velocity, deliver frictionless IT and reduce costs.”

“With HPE Synergy, compute, storage and fabric are now always available as single pools of resources that can be instantly configured according to the specific needs of each application,” said Tom Lattin, Vice President of Server Options, Hewlett Packard Enterprise. “The Synergy Ethernet adapters developed through our partnership with Cavium QLogic deliver a robust feature set and advanced performance offload capability which enables HPE customers to realize the true potential of the HPE Synergy Composable infrastructure.”

For more information, visit www.qlogic.com

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of integrated, software compatible processors ranging in performance from 1Gbps to 100Gbps that enable secure, intelligent functionality in Enterprise, Data Center, Broadband/Consumer, Mobile and Service Provider Equipment, highly programmable switches which scale to 3.2Tbps and Ethernet and Fibre Channel adapters up to 100Gbps. Cavium processors are supported by ecosystem partners that provide operating systems, tools and application support, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, China and Taiwan.

Source: Cavium

The post Broad Portfolio of Cavium QLogic Technology Now Available on HPE Synergy Platform appeared first on HPCwire.

Mellanox Introduces IDG4400 Flex Network Platform Based on New Indigo Network Processor

Tue, 01/31/2017 - 06:50

SUNNYVALE, Calif. & YOKNEAM, Israel, Jan. 31 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the IDG4400 Flex Network platform based on Indigo, Mellanox’s newest network processor (previously known as NPS-400). The Indigo high-end network processor is capable of sophisticated packet processing combined with unprecedented performance. Indigo’s L2-L7 packet processing solution offers powerful capabilities, positioning it to become a world-leading platform for a wide range of applications, including router-type functions, intrusion prevention and detection, application recognition, firewall, DDoS prevention and more. Indigo’s hardware acceleration features, coupled with powerful software libraries, have demonstrated stateful packet processing at record rates of 500Gb/s and deep packet inspection (DPI) at 320Gb/s over millions of flows, roughly 20 times higher than other offerings at this scale.

A single IDG4400 network processor platform is capable of realizing the DPI processing capability of a full rack of servers. In addition, the Indigo platform may be used in conjunction with Mellanox’s Spectrum Ethernet switch systems for increased scalability. The Spectrum switch systems provide Ethernet connectivity of 10, 25, 40, 50 and 100Gb/s, and their deterministic zero-packet-loss performance and scale make them highly efficient data center building blocks. By combining Indigo IDG4400 and Spectrum Ethernet switching solutions, data center managers gain a cost-efficient, comprehensive L2-L7 switching and packet processing solution capable of analyzing data in depth as it passes through the network.

“Today IT managers run their security and network applications over expensive compute servers,” said Yael Shenhav, vice president of product marketing at Mellanox Technologies. “The Indigo IDG4400 Flex platform, combined with Spectrum Ethernet switch systems, enables network professionals to effectively offload these applications, thereby reducing data center capital and operating expenses while improving overall return on investment.”

Mellanox provides a complete software infrastructure solution for data center solutions developers, including stateful flow table (SFT) for stateful packet processing and deep packet inspection (DPI) software packages. These packages enable ease of implementation and faster time to market while utilizing the Indigo acceleration capabilities to their fullest. In addition, these libraries are compatible with the OpenNPU SDK released through the open source software initiative opennpu.org. The Indigo IDG4400 1U platform is a C programmable Linux-based platform, which delivers 10, 40 and 100GbE network connectivity, allowing for maximum flexibility. For Indigo IDG4400 Flex availability, please contact Mellanox.
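Mellanox’s SFT package is a hardware-accelerated library, but the core idea, keeping per-flow state keyed on the connection 5-tuple, can be sketched conceptually. The Python below is an illustration only, not the OpenNPU SDK or any Mellanox API.

    # Conceptual sketch of a stateful flow table (SFT): per-flow state keyed on
    # the 5-tuple, the structure Indigo accelerates in hardware.
    from dataclasses import dataclass

    FiveTuple = tuple[str, int, str, int, str]  # src IP, src port, dst IP, dst port, protocol

    @dataclass
    class FlowState:
        packets: int = 0
        bytes: int = 0
        tcp_state: str = "NEW"

    class FlowTable:
        def __init__(self):
            self._flows: dict[FiveTuple, FlowState] = {}

        def update(self, key: FiveTuple, length: int, syn: bool = False) -> FlowState:
            # Create state on first sight of a flow, then accumulate counters.
            state = self._flows.setdefault(key, FlowState())
            state.packets += 1
            state.bytes += length
            if syn and state.tcp_state == "NEW":
                state.tcp_state = "SYN_SEEN"
            return state

    table = FlowTable()
    key = ("10.0.0.1", 51514, "10.0.0.2", 443, "tcp")
    table.update(key, length=60, syn=True)
    print(table.update(key, length=1500))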

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies

The post Mellanox Introduces IDG4400 Flex Network Platform Based on New Indigo Network Processor appeared first on HPCwire.

Stampede Simulations Show Better Way to Predict Blood Clots

Mon, 01/30/2017 - 15:52

The heart is a wonder of design – a pump that can function for 80 years, and billions of heartbeats, without breaking down. But when it does malfunction, the results can be dire.

In research reported in the International Journal of Cardiology this month, scientists from Johns Hopkins University and Ohio State University presented a new method for predicting those most at risk for thrombus, or blood clots, in the heart.

Image: A hemodynamic profile of a patient with a history of left ventricular thrombus (blood clotting), derived from computational fluid dynamic modeling. Credit: Rajat Mittal, Jung Hee Seo and Thura Harfi

The critical factor, the researchers found, is the degree to which the mitral jet – a stream of blood shot through the mitral valve – penetrates into the left ventricle of the heart. If the jet doesn’t travel deep enough into the ventricle, it can prevent the heart from properly flushing blood from the chamber, potentially leading to clots, strokes and other dangerous consequences.

The findings were based on simulations performed using the Stampede supercomputer at the Texas Advanced Computing Center and validated using data from patients who both did and did not experience post-heart attack blood clots. The work was supported by a grant from the National Science Foundation.

The metric that characterizes the jet penetration, which the researchers dub the E-wave propagation index (EPI), can be ascertained using standard diagnostic tools and clinical procedures that are currently used to assess patient risk of clot formation, but is much more accurate than current methods.

“The beauty of the index is that it doesn’t require any additional measurements. It simply reformulates echocardiogram data into a new metric,” said Rajat Mittal, a computational fluid dynamics expert and professor of mechanical engineering at Johns Hopkins University and one of the principal investigators on the research. “The clinician doesn’t have to do any additional work.”

Heart disease is the leading cause of death in the U.S. and by far the most expensive disease in terms of health care costs. Heart attacks cause some deaths; others result from blood clots, frequently the result of a heart weakened by disease or a traumatic injury.

Clots can occur whenever blood remains stagnant. Since the chambers of the heart are the largest reservoirs of blood in the body, they are the areas most at risk for generating clots.

Predicting when a patient is in danger of developing a blood clot is challenging for physicians. Patients recovering from a heart attack are frequently given anticoagulant drugs to prevent clotting, but these drugs have adverse side-effects.

Cardiologists currently use the ejection fraction – the percentage of blood flushed from the heart with each beat – as well as a few other factors, to predict which patients are at risk of a future clot.

For healthy individuals, 55 to 70 percent of the volume of the chamber is ejected out of the left ventricle with every heartbeat. For those with heart conditions, the ejection fraction can be reduced to as low as 15 percent and the risk of stagnation rises dramatically.

Though an important factor, the ejection fraction does not appear to be an accurate predictor of future clotting risk.

Computational fluid dynamics results show that the mitral jet propagates towards the apex mainly during the E-wave. A mitral jet that propagates further towards the apex during the E-wave will produce significant apical washout. Thus, the propagation distance of the mitral jet into the left ventricle by the end of the E-wave indexed by the length of the left ventricle should correlate well with apical “washout,” and therefore, with left ventricle thrombus risk.
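As a hedged sketch of the index described above (the propagation distance of the mitral jet by the end of the E-wave, normalized by left-ventricle length), one plausible formulation uses the E-wave velocity-time integral as the propagation distance. The values below are hypothetical, and the published paper should be consulted for the exact clinical definition and thresholds.

    # Hedged sketch: jet propagation distance by end of E-wave, normalized by
    # LV length. Values are hypothetical; consult the paper for the exact
    # clinical definition of the E-wave propagation index.
    def e_wave_propagation_index(e_wave_vti_cm: float, lv_length_cm: float) -> float:
        """e_wave_vti_cm: velocity-time integral of the mitral E-wave (cm),
        i.e. the jet's propagation distance; lv_length_cm: LV long-axis length."""
        return e_wave_vti_cm / lv_length_cm

    # A value near or above 1 would mean the jet traverses the full LV length
    # (good apical washout); a low value suggests stagnation near the apex.
    print(round(e_wave_propagation_index(e_wave_vti_cm=6.0, lv_length_cm=9.0), 2))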

“Because we understood the fluid dynamics in the heart using our computational models, we reached the conclusion that the ejection fraction is not a very accurate measure of flow stasis in the left ventricle,” Mittal said. “We showed very clearly that the ejection fraction is not able to differentiate a large fraction of these patients and stratify risk, whereas this E-wave propagation index can very accurately stratify who will get a clot and who will not,” he said.

The results were the culmination of many years of investigation by Mittal and his collaborators into the fundamental relationship between the structure and function of the heart. To arrive at their hypothesis, the researchers captured detailed measurements from 13 patients and used those to construct high-fidelity, patient-specific models of the heart that take into account fluid flow, physical structures and bio-chemistry.

These models led, in turn, to new insights into the factors that correlate most closely to stagnation in the left ventricle, chief among them, mitral jet penetration.

Working in collaboration with clinicians, including lead author Thura Harfi of Ohio State University, the team tested their hypothesis using data from 75 individuals: 25 healthy patients, 25 patients who experienced clots in their left ventricle, and 25 patients who had a compromised heart but who didn’t have any clots.

The researchers found that, based on the EPI measurement and pending validation in a larger cohort of patients, one in every five patients with severe cardiomyopathy who are currently not being treated with anticoagulation would be at risk of a left ventricular clot and would benefit from anticoagulation.

“Physicians and engineers don’t interact as often as they should and that creates a knowledge gap that can be closed with this type of collaborative research,” Harfi said. “Computational fluid dynamics is such an established way of studying phenomena in mechanical engineering, but has rarely been tried in humans. But now, with the development of high-resolution cardiac imaging techniques like cardiac computed tomography (CT) and the availability of supercomputing power, we can apply the power of computational fluid dynamics simulations to study blood flow in human beings. The information you get from a computer simulation you cannot get otherwise.”

Mittal and his team required large computing resources to derive and test their hypothesis. Each simulation ran in parallel on 256 to 512 processors and took several hundred thousand computing hours to complete.

“This work cannot be done by simulating a single case. Having a large enough sample size to base conclusions on was essential for this research,” Mittal said. “We could never come close to being able to do what we needed to do if it weren’t for Stampede.”

Time on Stampede was provided through the Extreme Science and Engineering Discovery Environment (XSEDE).

Mittal foresees a time where doctors will perform patient-specific heart simulations routinely to determine the best course of treatment. However, hospitals would need systems hundreds of times faster than a current desktop computer to be able to figure out a solution locally in a reasonable timeframe.

In addition to establishing the new diagnostic tool for clinicians, Mittal’s research helps advance new, efficient computational models that will be necessary to make patient-specific diagnostics feasible.

The team plans to continue to test their hypothesis, applying the EPI metric to a larger dataset. They hope in the future to run a clinical study with prospective, rather than retrospective, analysis.

With a better understanding of the mechanics of blood clots and ways to predict them, the researchers have turned their attention to other sources of blood clots, including bio-prosthetic heart valves and atrial fibrillation (AFib) – a quivering or irregular heartbeat that affects 2.7 million Americans.

“These research results are an important first step to move our basic scientific understanding of the physics of how blood flows in the heart to real-time predictions and treatments for the well-being of patients,” said Ronald Joslin, NSF Fluid Dynamics program director.

“The potential for impact in this area is very motivating,” Mittal said, “not just for me but for my collaborators, students and post-docs as well.”

Source:  Aaron Dubrow, Texas Advanced Computing Center (TACC)

The post Stampede Simulations Show Better Way to Predict Blood Clots appeared first on HPCwire.

Early Science Projects for Aurora Supercomputer Announced

Mon, 01/30/2017 - 15:38

LEMONT, Ill., Jan. 30 — The Argonne Leadership Computing Facility (ALCF), a Department of Energy Office of Science User Facility, has selected 10 computational science and engineering research projects for its Aurora Early Science Program starting this month. Aurora, a massively parallel, manycore Intel-Cray supercomputer, will be ALCF’s next leadership-class computing resource and is expected to arrive in 2018.

The 10 investigator-led projects originate from universities and national laboratories from across the country and span a wide range of disciplines. Collectively, these projects represent a typical system workload at the ALCF and cover key scientific areas and numerical methods. The teams will receive hands-on assistance to port and optimize their applications for the new architecture using systems available today.

The Early Science Program helps lay the path for hundreds of other users by doing actual science, using real scientific applications, to ready a future machine. “As with any bleeding edge resource, there’s testing and debugging that has to be done,” said ALCF Director of Science Katherine Riley. “And we are doing that with science.”

The Aurora Early Science Program follows in the ALCF tradition of delivering science on day one. Early Science programs also helped usher in earlier ALCF computers, including Theta, an Intel-Cray system that came online last year, and Mira, an IBM Blue Gene/Q. Both machines continue to serve the scientific research community today. Aurora, a future system based on Intel’s third-generation Xeon Phi processor, code-named Knights Hill (KNH), Intel’s second-generation Omni-Path interconnect, and Cray’s Shasta platform, is expected to deliver at least 20 times the computational performance of Mira.

For the next couple of years, ALCF will host numerous training events to help the Aurora Early Science project teams and the computational community prepare their codes for the architecture and scale of the coming system, with assistance from Intel and Cray. Each Early Science team is also paired with a dedicated postdoctoral researcher from the ALCF.

The Early Science teams will use Theta, a 9.65 petaflops system based on Intel’s second-generation Xeon Phi processor and Cray’s Aries interconnect. “The Theta system is well suited for targeting KNH as well as non-hardware-specific development work, such as new algorithms or physics modules needed for the proposed early science runs,” said Tim Williams, an ALCF computational scientist who manages the Early Science Program.

In addition, the project teams will have access to training and hardware at the DOE’s Oak Ridge Leadership Computing Facility and DOE’s National Energy Research Scientific Computing Center as alternative development platforms to encourage application code portability among heterogeneous architectures.

AURORA EARLY SCIENCE PROJECTS

Extending Moore’s Law computing with quantum Monte Carlo
Investigator: Anouar Benali, Argonne National Laboratory

For decades, massively parallel supercomputers have reaped the benefits—predicted by Moore’s Law—of the relentless increase in density of components on chips that also rapidly improved performance of PCs and smartphones. This project aims to give something back, by attacking a fundamental materials problem impacting the latest and future chips: electrical current leakage through the HfO2-silicon interface. HfO2 is used widely as a dielectric in Si-CMOS chips like the Aurora CPUs. Simulating this problem with the highly accurate quantum Monte Carlo (QMC) method is only now becoming computationally possible with supercomputers like Aurora.

Design and evaluation of high-efficiency boilers for energy production using a hierarchical V/UQ approach
Investigator: Martin Berzins, The University of Utah

This project will simulate and evaluate the design of a next-generation, 500-megawatt advanced ultra-supercritical coal boiler. In a coal-fired power plant, this design promises to reduce the boiler footprint by 50%, saving costs and improving efficiency (53% efficiency, compared with a traditional boiler’s 35% efficiency), and also reducing CO2 emissions by 50% relative to a traditional boiler. Simulations on Aurora using the Uintah asynchronous many-task software will incorporate validation and uncertainty quantification (V/UQ), predicting thermal efficiency with uncertainty bounds constrained by observed data.

High-fidelity simulation of fusion reactor boundary plasmas
Investigator: C. S. Chang, Princeton Plasma Physics Laboratory

The behavior of plasma at the outer edge of a tokamak fusion reactor is critically important to the success of future fusion reactors such as ITER, now under construction in France. Misbehavior at the edge can lead to disruptions bombarding a small area of the divertor plates—metal structures at the bottom of the tokamak designed to absorb ejected heat—at levels beyond what the divertor material can withstand. This project will use particle simulations of the plasma, including impurities and the important magnetic field geometry at the edge, to predict the behavior of ITER plasmas and to help guide future experimental parameters.

NWChemEx: Tackling chemical, materials and biochemical challenges in the exascale era
Investigator: Thomas Dunning, Pacific Northwest National Laboratory

The NWChemEx code is providing the framework for next-generation molecular modeling in computational chemistry and for implementing critical computational chemistry methods. This project will apply it to two problems in development of advanced biofuels: design of feedstock for efficient production of biomass; and design of new catalysts for converting biomass-derived chemicals into fuels.

Extreme-scale cosmological hydrodynamics
Investigator: Katrin Heitmann, Argonne National Laboratory

This project will simulate large fractions of the universe, including not only gravity acting on dark matter, but also baryons (which make up visible matter such as stars), and gas dynamics using a new, smoothed particle hydrodynamics method. These simulations are deeply coupled with guiding and interpreting observations from present and near-future cosmological surveys.

Extreme-scale unstructured adaptive CFD
Investigator: Kenneth Jansen, University of Colorado at Boulder

This project will use unprecedented high-resolution fluid dynamics simulations to model dynamic flow control over airfoil surfaces at realistic flight conditions and to model bubbly flow of coolant in nuclear reactors. Synthetic jet actuators, tiny cavities with speaker-like diaphragms that alternately expel and intake air, can alter and control airflow across surfaces such as plane tail rudders, allowing much stronger force (turning force, for a rudder). The reactor fluid flow problems will simulate realistic reactor geometries with far more accurate multiphase flow modeling than today’s state of the art, yielding valuable information on thermal management to improve safety of existing light-water reactors and inform the design of next-generation systems.

Benchmark simulations of shock-variable density turbulence and shock-boundary layer interactions with applications to engineering modeling
Investigator: Sanjiva Lele, Stanford University

What do inertial confinement fusion (ICF) and supersonic aircraft have in common? Both involve the flow of gases in extreme conditions, including shock waves and turbulence. This project aims to advance scientific understanding of variable density turbulence and mixing, including shock interactions and near-wall effects. These apply to the mixing of the fuel capsule surface with the imploding plasma in ICF, and shock interaction with fuel streams in a supersonic jet engine as a way to improve combustion.

Lattice quantum chromodynamics calculations for particle and nuclear physics
Investigator: Paul Mackenzie, Fermilab

This project will deliver calculations urgently needed by experimental programs of high energy and nuclear physics, based on the computational methods of lattice quantum chromodynamics (lattice QCD). QCD embodies our most fundamental understanding of the strong nuclear force and associated particles, a key component of the more general Standard Model of particle physics. In high energy physics, lattice calculations are required to extract the fundamental parameters of the standard model (such as quark masses) from experiment. Evidence for physics beyond the Standard Model can be discovered if discrepancies are found between different methods for determining these parameters.

Metascalable layered materials genome
Investigator: Aiichiro Nakano, University of Southern California

Functional materials, as the name implies, have behaviors useful in science and industry. There is great interest today in engineering materials to have desired behaviors. One approach involves stacking extremely thin layers of different materials to achieve a complex molecular interplay throughout the stack. The resulting behavior of the stack cannot be explained by traditional theories and can only be predicted by directly simulating the layers as collections of molecules interacting with each other. Massive quantum mechanical and reactive molecular dynamics simulations on Aurora will be validated by experiments on the same materials using a free-electron X-ray laser.

Free energy landscapes of membrane transport proteins
Investigator: Benoit Roux, The University of Chicago

Membrane transport protein molecules play key roles in cellular biology functions. This includes natural processes as well as drug delivery and drug resistance. How these “molecular devices” perform their function is extremely complex. The proteins move into dramatically different conformations in the process. Modeling the myriad possibilities with atomistic molecular dynamics, even using the best statistical approaches, is at the forefront of what’s possible. These calculations on Aurora will advance that front.

About Argonne National Laboratory

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

Source: Argonne National Laboratory

The post Early Science Projects for Aurora Supercomputer Announced appeared first on HPCwire.

Four Co-design Centers Support Exascale Project

Mon, 01/30/2017 - 13:37

It’s clear co-design is a vital component among activities required to achieve exascale computing. The leadership and early directions of the four co-design centers so far established in support of DOE’s Exascale Computing Project were summarized late last week in an article posted on the Argonne National Laboratory web site.

The four centers created include:

  • Co-design Center for Online Data Analysis and Reduction at the Exascale (CODAR)
  • Center for Efficient Exascale Discretizations (CEED)
  • Co-design Center for Particle Applications (CoPA)
  • Block-Structured Adaptive Mesh Refinement Co-design Center (BSAMR)

The term ‘co-design’ describes the integrated development and evolution of hardware technologies, computational applications and associated software. In pursuit of ECP’s mission to help people solve realistic application problems through exascale computing, each co-design center targets different features and challenges relating to exascale computing. The full article, “Co-design centers to help make next-generation exascale computing a reality,” written by Joan Koka, identifies the leaders and briefly touches on the goals of each center.

1. CODAR
Ian Foster, a University of Chicago professor and Argonne Distinguished Fellow, leads the CODAR effort. “Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” he said. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”

There are many powerful techniques for doing data reduction, and CODAR researchers are studying various approaches. One example is lossy compression, which attempts to remove unnecessary or redundant information to reduce overall data size. This technique is what’s used to transform the detail-rich images captured on our phone camera sensors into JPEG files, which are small in size. While data is lost in the process, the most important information ― the amount needed for our eyes to interpret the images clearly ― is maintained, and as a result, we can store hundreds more photos on our devices.
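As a toy illustration of the same trade-off for simulation output, quantizing values to a fixed error bound and then compressing losslessly exchanges a small, controlled loss of precision for a much smaller payload; this is a conceptual sketch, not one of CODAR’s production compressors.

    # Toy lossy reduction: quantize floats to a fixed error bound, then apply
    # lossless compression. Conceptual sketch only.
    import random
    import struct
    import zlib

    def lossy_compress(values: list[float], error_bound: float) -> bytes:
        # Each value is reconstructed to within error_bound / 2.
        quantized = [round(v / error_bound) for v in values]
        return zlib.compress(struct.pack(f"{len(quantized)}i", *quantized))

    def lossy_decompress(blob: bytes, error_bound: float) -> list[float]:
        raw = zlib.decompress(blob)
        return [q * error_bound for q in struct.unpack(f"{len(raw) // 4}i", raw)]

    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(10_000)]
    blob = lossy_compress(data, error_bound=0.01)
    restored = lossy_decompress(blob, error_bound=0.01)
    print("raw bytes:", len(data) * 8, "compressed bytes:", len(blob))
    print("max error:", max(abs(a - b) for a, b in zip(data, restored)))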

2. CEED
CEED is looking at the process of discretization, in which the physics of the problem is represented by a finite number of grid points that form a model of the system. “Determining the best layout of the grid points and representation of the model is important for rapid simulation,” said computational scientist Misun Min, the Argonne lead in CEED.

Discretization enables researchers to numerically represent physical systems, like nuclear reactors, combustion engines or climate systems. How researchers discretize the systems they’re studying affects the amount and speed of computation at exascale. CEED is focused particularly on high-order discretizations that require relatively few grid points to accurately represent physical systems.
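A generic numerical example (not CEED’s actual discretizations) shows the payoff of higher order: with the same number of points, a higher-order rule approximates a smooth integral far more accurately than a lower-order one.

    # Generic illustration of the high-order payoff: Simpson's rule (higher
    # order) vs. the trapezoid rule (lower order) on a smooth function.
    import math

    def trapezoid(f, a, b, n):
        h = (b - a) / n
        return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

    def simpson(f, a, b, n):          # n must be even
        h = (b - a) / n
        s = f(a) + f(b)
        s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
        return s * h / 3

    exact = math.exp(1) - 1           # integral of e^x on [0, 1]
    for n in (8, 16, 32):
        print(n,
              f"trapezoid error {abs(trapezoid(math.exp, 0, 1, n) - exact):.2e}",
              f"simpson error {abs(simpson(math.exp, 0, 1, n) - exact):.2e}")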

3. CoPA
Researchers are studying methods that model natural phenomena using particles, such as molecules, electrons or atoms. Particle methods span a wide range of application areas, including materials science, chemistry, cosmology, molecular dynamics and turbulent flows. When using particle methods, researchers characterize the interactions of particles with other particles and with their environment in terms of short-range and long-range interactions.

“The idea behind the co-design center is that, instead of everyone bringing their own specialized methods, we identify a set of building blocks, and then find the right way to deal with the common problems associated with these methods on the new supercomputers,” Salman Habib, the Argonne lead in CoPA and a senior member of the Kavli Institute for Cosmological Physics at the University of Chicago, said.
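One such shared building block is the short-range pair interaction evaluated with a cutoff radius; the sketch below is a generic illustration with toy Lennard-Jones parameters, not CoPA code.

    # Generic sketch of a shared particle-method building block: sum a
    # short-range pair interaction, ignoring pairs beyond a cutoff radius.
    import math
    import random

    def short_range_energy(positions: list[tuple[float, float, float]],
                           cutoff: float) -> float:
        energy = 0.0
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                xi, yi, zi = positions[i]
                xj, yj, zj = positions[j]
                dx, dy, dz = xi - xj, yi - yj, zi - zj
                r = math.sqrt(dx * dx + dy * dy + dz * dz)
                if r < cutoff:  # long-range contributions are handled separately
                    # Toy Lennard-Jones pair energy (sigma = epsilon = 1).
                    energy += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
        return energy

    random.seed(1)
    atoms = [(random.uniform(0, 5), random.uniform(0, 5), random.uniform(0, 5))
             for _ in range(200)]
    print(round(short_range_energy(atoms, cutoff=2.5), 3))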

4. BSAMR
AMR allows an application to achieve a higher level of precision at specific points or locations of interest within the computational domain and lower levels of precision elsewhere. “Without AMR, calculations would require so much more resources and time,” said Anshu Dubey, the Argonne lead in the Block-Structured AMR Center and a fellow of the Computation Institute. AMR is already used in applications such as combustion, astrophysics and cosmology; now researchers in the Block-Structured AMR co-design center are focused on enhancing and augmenting it for future exascale platforms.
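The core decision AMR automates, refining only where an error estimate is large and staying coarse elsewhere, can be sketched in one dimension; this is a toy illustration, not the BSAMR framework.

    # 1-D toy of the AMR idea: flag cells whose estimated error exceeds a
    # threshold and split only those cells.
    def refine(cells: list[tuple[float, float]], error, threshold: float):
        """cells: list of (left, right) intervals; error: callable on a cell."""
        refined = []
        for left, right in cells:
            if error(left, right) > threshold:
                mid = (left + right) / 2
                refined += [(left, mid), (mid, right)]   # higher precision here
            else:
                refined.append((left, right))            # keep coarse elsewhere
        return refined

    def estimate(a: float, b: float) -> float:
        """Toy error indicator: variation of f(x) = x**3 across the cell."""
        return abs(b ** 3 - a ** 3)

    grid = [(i / 4, (i + 1) / 4) for i in range(4)]      # 4 coarse cells on [0, 1]
    for _ in range(3):                                   # three refinement passes
        grid = refine(grid, estimate, threshold=0.05)
    print(len(grid), "cells; finest near x=1:", grid[-3:])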

Link to full article: http://www.anl.gov/articles/co-design-centers-help-make-next-generation-exascale-computing-reality

The post Four Co-design Centers Support Exascale Project appeared first on HPCwire.

Tapia Call for Participation Deadline Extended to February 10

Mon, 01/30/2017 - 13:05

Jan. 30 — There is still time to submit to Tapia 2017.  The call for participation deadline has been extended to Friday, February 10th.

You can submit your content to our three conference tracks:

  • Professional development
  • Technical
  • Broadening participation

In these three tracks, you can submit a variety of sub-program submissions: birds of a feather (BOFs), workshops, panels, doctoral consortium, student posters and student posters for the ACM Student Research Competition (SRC).

Have questions about Program Submissions?

Check out our video from our first Facebook Live event hosted by 2017 Program Committee Chair Tao Xie, Associate Professor, University of Illinois at Urbana-Champaign, and 2016 Poster Presenter Angello Astorga. Watch the video today to learn all about best practices for submitting to the 2017 Tapia Conference Call for Participation.

All program submissions for the Tapia 2017 Conference are due by February 10th at 11:59 PM Pacific [Extended].

Tapia Conference Scholarships

Need support getting to the conference? Apply for a Tapia Scholarship. The Tapia Conference provides travel scholarships for community college students, undergraduate and graduate students, post-docs, and a limited number of faculty at colleges/universities in the U.S. and U.S. Territories.

Applications for Poster and Doctoral Consortium Program submitters are due Friday February 10 at 11:59 pm PT [Extended].

All other scholarship applications are due February 28th at 11:59 pm PT.

The Tapia Conferences are sponsored by the Association for Computing Machinery (ACM) and presented by the Center for Minorities and People with Disabilities in Information Technology (CMD-IT). The conferences are in-cooperation with the Computing Research Association (CRA).


Source: CMD-IT

The post Tapia Call for Participation Deadline Extended to February 10 appeared first on HPCwire.
