Feed aggregator

Seetharaman selected for distinguished lecturer award

Colorado School of Mines - Fri, 10/20/2017 - 15:10

Colorado School of Mines Metallurgical and Materials Engineering Professor Sridhar Seetharaman has been selected as the recipient of the 2019 Extraction and Processing Division’s Distinguished Lecturer Award.

The award recognizes an outstanding scientific leader in the field of nonferrous extraction and processing metallurgy. Seetharaman will have the opportunity to present a lecture to more than 4,000 attendees at the 2019 annual meeting of The Minerals, Metals and Materials Society (TMS).

Seetharaman joined Mines in 2017 from the U.S. Department of Energy, where he served as a senior technical advisor, and was previously a professor at Carnegie Mellon University and Warwick University. His research interests include materials production using clean and energy-efficient processes and developing materials for clean energy production.

Seetharaman will receive the award at the 148th TMS annual meeting in San Antonio, Texas, in March 2019.


Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Mines students get on-the-ground engineering experience in Guatemala

Colorado School of Mines - Fri, 10/20/2017 - 14:13

Landslide risk is a fact of life for hundreds of thousands of Guatemalans residing in settlements on the slopes of steep ravines. 

How well the available tools, techniques and programs manage that risk is the subject of a Colorado School of Mines graduate student project—research that got an infusion of help from a group of Mines undergraduate students earlier this year. 

Six undergraduate students studying geological, civil, environmental and humanitarian engineering traveled to Guatemala for two weeks in August, helping conduct field interviews in impacted communities and analyzing data at the local university, Universidad de San Carlos de Guatemala in Guatemala City.

Mines graduate student David LaPorte has been in Guatemala since January, thanks to a 10-month Fulbright grant, and worked with Mines faculty and staff back in Golden to make the international engineering experience possible. 

“We can build retaining walls all day but if people don’t change their behaviors, their risk won’t be lowered—that was a big component of the trip, giving undergrads the opportunity to come and see this, what engineering looks like on the ground, especially in another country,” LaPorte said.

A master’s candidate in the Department of Geology and Geological Engineering, LaPorte is evaluating the current landslide risk management initiatives put in place by the Guatemalan government and NGOs. That has meant a lot of field work, talking with local residents and stakeholders to better understand how they perceive risk.

“It’s been challenging and rewarding in a lot of different ways,” LaPorte said. “One of the biggest things for me has been to just learn how to work in another culture, the difference in time, the importance of relationships, the way things are organized and managed. It’s been a steep learning curve but I’m really going to take away a lot.”

So did the undergraduate students who traveled to Guatemala to help with the research. 

“I can’t stress enough how great of a trip it was and how wonderful it was to see an actual connection, a tangible connection between engineering and humanitarian work,” said Vy Duong, a junior studying civil and humanitarian engineering and a 2016-2017 Shultz Scholar.

A key component of Mines’ Humanitarian Engineering program is the importance of community engagement and how to do it in a meaningful way as part of engineering projects. While in Guatemala, the students spent five days in the field, conducting interviews and meeting with residents of three different communities. Another five days were spent at the local university, analyzing data and building landslide susceptibility maps. Students also got to hike an active volcano and visit the historic town of Antigua.

One thing the students didn’t do during their trip is build something.

“Our goal wasn’t immediately tangible. Our goal was to help them help themselves in the coming years and help the different organizations work together,” said Matt Kelly, a junior studying geological engineering. “It was hard not getting to say, ‘Oh, I built that retaining wall and those five homes are good.’ But what we did in the end was much more helpful.”

That difference was part of the appeal of sending students to Guatemala, said Juan Lucena, professor and director of the Humanitarian Engineering program, located in the Engineering, Design and Society Division at Mines. He also serves as one of LaPorte’s research advisors.

“Different than the more popular humanitarian engineering projects where students build gadgets—water pumps, bridges, wheelchairs, etc.—this project was about applying risk mitigation research on vulnerable communities in Guatemala,” Lucena said. “This shows that humanitarian principles and criteria can also guide engineering research and its application.” 

A central part of LaPorte’s project is working out how to package science in a way that can be used by the local population, and tracking and improving how they actually use it, said Paul Santi, professor of geology and geological engineering and LaPorte’s faculty advisor. 

LaPorte’s efforts also build on the work of Mines graduate Ethan Faber MS’16. As a geology and geological engineering master’s student, Faber developed a landslide risk evaluation tool that helps Guatemalan residents quantify their own vulnerability, working with in-country advisor Edy Manolo Barillas-Cruz, MS ’06, a Guatemalan native who returned to his country after graduation to become the national risk advisor.

“Our earlier work in Guatemala City focused on developing and validating methods of mitigation that those in poor communities could actually implement with limited means,” Santi said. “David’s goal is to figure out if people are actually doing this, why or why not, and how to best educate and encourage them to take appropriate actions to reduce landslide risk to their homes.” 

Two of the students who traveled to Guatemala were sponsored by the Colorado-Wyoming Alliance for Minority Participation, of which Mines is a member. The alliance’s mission is to increase the number of historically and currently underrepresented African American, Native American, Hispanic, Pacific Islander, and Alaska Native students earning bachelor's degrees in the STEM fields. Another received funding from the Shultz Family Leadership in Humanitarian Engineering Fund.  

Duong, who has been involved with Mines Without Borders since her freshman year, said the field interviews in particular were a great learning experience that built on her coursework in humanitarian engineering. 

“I learned a lot from these interviews and I was always surprised by what I learned,” Duong said. “The community members, they have a feeling of when a landslide is going to happen—all these things we learn in theory, we have all the scientific backing for why it happens, but it’s not like these people without the formal education don’t know the signs.”

For Kelly, the trip really cemented his desire to find ways after graduation to use what he has learned at Mines to do engineering projects that really help people in a sustainable manner. He’s also hoping to become fluent in Spanish. 

“There's no engineering project that happens in a vacuum. It affects everyone and everything around them,” said Kelly, an Army veteran. “Now that I have real-world experience being in another country, examining landslide hazards and risk assessment, it will help me with future courses and future employment.”

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Two ORNL-Led Research Teams Receive $10.5 Million to Advance Quantum Computing

HPC Wire - Fri, 10/20/2017 - 10:43

OAK RIDGE, Tenn., Oct. 20, 2017 — By harnessing the power of quantum mechanics, researchers hope to create quantum computers capable of simulating phenomena at a scale and speed unthinkable on traditional architectures, an effort of great interest to agencies such as the Department of Energy tasked with tackling some of the world’s most complex science problems.

DOE’s Office of Science has awarded two research teams, each headed by a member of Oak Ridge National Laboratory’s Quantum Information Science Group, more than $10 million over five years to both assess the feasibility of quantum architectures in addressing big science problems and to develop algorithms capable of harnessing the massive power predicted of quantum computing systems. The two projects are intended to work in concert to ensure synergy across DOE’s quantum computing research spectrum and maximize mutual benefits.

Caption: ORNL’s Pavel Lougovski (left) and Raphael Pooser will lead research teams working to advance quantum computing for scientific applications. Credit: Oak Ridge National Laboratory, U.S. Dept. of Energy

ORNL’s Raphael Pooser will oversee an effort titled “Methods and Interfaces for Quantum Acceleration of Scientific Applications,” part of the larger Quantum Computing Testbed Pathfinder program funded by DOE’s Advanced Scientific Computing Research office.

Pooser’s team, which includes partners from IBM, commercial quantum computing developer IonQ, Georgia Tech and Virginia Tech, received $7.5 million over five years to evaluate the performance of a suite of applications on near-term quantum architectures.

The idea, Pooser said, is to work with industry leaders to understand the potential of quantum architectures in solving scientific challenges on the scale of those being tackled by DOE. ORNL will focus on scientific applications spanning three fields of study: quantum field theory, quantum chemistry and quantum machine learning.

“Quantum applications that are more exact and faster than their classical counterparts exist or have been proposed in all of these fields, at least theoretically,” said Pooser. “Our job is to determine whether we can get them to work on today’s quantum hardware and on the hardware of the near future.”

Many of these applications have never been programmed for quantum architectures before, which presents a unique challenge. Because today’s quantum computers are relatively small, applications must be tuned to the hardware to maximize performance and accuracy. This requires a deep understanding of the uniquely quantum areas of the programs, and it requires running them on various quantum architectures to assess their validity, and ultimately their feasibility.

“Many new quantum programming techniques have evolved to address this problem,” said Pooser, adding that his team would “implement new programming models that leverage the analog nature of quantum simulators.”

To increase their chances of success, Pooser’s team will work closely with his ORNL colleague Pavel Lougovski who is overseeing the “Heterogeneous Digital-Analog Quantum Dynamics Simulations” effort, which has received $3 million over three years.

Lougovski has partnered with the University of Washington’s Institute for Nuclear Theory and the University of the Basque Country UPV/EHU in Bilbao, Spain, to develop quantum simulation algorithms for applications in condensed matter and nuclear physics, specifically large-scale, many-body systems of particular interest to DOE’s Office of Science.

Lougovski’s team will pursue an algorithm design approach that combines the best features of digital and analog quantum computing, with the end goal of matching the complexity of quantum simulation algorithms to available quantum architectures. Because development and deployment of quantum hardware is a nascent field compared to traditional computing platforms, the team will also harness the power of hybrid quantum systems that use a combination of quantum computers and traditional processors.
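To make the hybrid idea concrete, here is a toy variational loop in which a classical optimizer tunes one parameter of a simulated single-qubit circuit. This is a deliberately minimal sketch, not the ORNL teams' actual algorithms, and the "quantum" evaluation is a closed-form simulation standing in for real hardware:

```python
# Toy hybrid quantum-classical loop: a classical optimizer tunes the
# parameter of a simulated one-qubit circuit. Purely illustrative -- the
# "quantum" part is simulated in closed form here.
import math

def expectation_z(theta):
    # State RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    # <Z> = cos^2(theta/2) - sin^2(theta/2) = cos(theta)
    return math.cos(theta)

def hybrid_minimize(steps=200, lr=0.1):
    theta = 0.5  # initial guess (avoid the flat point at theta = 0)
    for _ in range(steps):
        # Parameter-shift rule: the gradient comes from two extra
        # "circuit evaluations" rather than symbolic differentiation.
        grad = 0.5 * (expectation_z(theta + math.pi / 2)
                      - expectation_z(theta - math.pi / 2))
        theta -= lr * grad  # classical update step
    return theta, expectation_z(theta)

theta, energy = hybrid_minimize()
print("theta = %.3f, <Z> = %.3f" % (theta, energy))  # minimum <Z> = -1 at theta = pi
```

The division of labor mirrors the hybrid systems described above: the quantum device (here, `expectation_z`) only evaluates expectation values, while a conventional processor drives the optimization.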

“We have assembled a multidisciplinary team of computer scientists, applied mathematicians, scientific application domain experts, and quantum computing researchers,” Lougovski said. “Quantum simulation algorithms, much like our team, are a melting pot of various quantum and classical computing primitives. Striking a right balance between them and available hardware will enable new science beyond the reach of conventional approaches.”

ORNL’s quantum information researchers have decades of quantum computing research experience, and the laboratory has also made significant investments across the quantum spectrum, including in quantum communications and quantum sensing, and has strong relationships with industry leaders. The lab’s Quantum Computing Institute brings together expertise across the quantum spectrum and fosters collaboration across domains, from nanotechnology to physics to chemistry to biology.

These assets, along with ORNL’s rich history in traditional high-performance computing and ramping up applications to exploit powerful computing resources, will be critical in realizing the potential of the quantum platform to greatly accelerate scientific understanding of the natural world.

ORNL is managed by UT-Battelle for the DOE Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.

Source: ORNL

The post Two ORNL-Led Research Teams Receive $10.5 Million to Advance Quantum Computing appeared first on HPCwire.

NCSA Calls on HTCondor Partnership to Process Data for DES

HPC Wire - Fri, 10/20/2017 - 07:54

Oct. 20, 2017 — The Laser Interferometer Gravitational-Wave Observatory (LIGO) has detected gravitational waves from a neutron star-neutron star merger. This event reveals a direct association between the merger and the galaxy where it occurred. Scientists have been trying for decades to show that this happens, but this is the first time they’ve been able to prove it. What makes this event unique among the gravitational waves detected so far, however, is that this neutron star-neutron star merger was detected in three different ways.

LIGO detected gravitational waves. The Fermi satellite detected gamma rays, and hours later, as the sun set in Chile, the Dark Energy Camera saw an optical source (light) from the neutron star-neutron star merger. This multi-messenger astronomy event was the first detection of its kind in history. The images from the Dark Energy Camera were processed using the Dark Energy Survey (DES) data reduction pipelines at NCSA using HTCondor.

“HTCondor has made it possible for us to take raw data from a telescope and process and disseminate the results within hours of the observations occurring,” said Professor Robert Gruendl, production scientist for DES and senior research scientist at NCSA.

HTCondor is a specialized workload management system for compute-intensive jobs. Unlike simple batch systems, HTCondor has the ability to distribute workloads across many sites. DES is actively running workloads on Blue Waters, the Illinois Campus Cluster, and the Open Science Grid at Fermilab. “HTCondor’s central role in the production system is to make data available to scientists within hours of being observed,” said Miron Livny.
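HTCondor jobs are driven by a plain-text submit description file. The fragment below is a purely illustrative sketch of that format; the executable name, paths, resource figures, and job count are hypothetical, not DES's production configuration:

```
# Hypothetical HTCondor submit description -- names and paths are
# illustrative, not DES's actual pipeline setup.
universe       = vanilla
executable     = process_exposure.sh
arguments      = $(Process)
output         = logs/exposure_$(Process).out
error          = logs/exposure_$(Process).err
log            = pipeline.log
request_cpus   = 1
request_memory = 4GB
queue 100
```

The `queue 100` statement submits 100 independent jobs, each receiving a distinct `$(Process)` index; HTCondor then matches them to available slots wherever they exist, which is what lets one description fan work out across multiple sites.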

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign and the HTCondor team at the University of Wisconsin-Madison Center for High Throughput Computing (CHTC) have been collaborating on projects for 30 years.

“This collaboration will be a powerful means to develop high-throughput computing (HTC) data processing and analysis capability that is also beginning to address the unique and evolving needs of the LSST community while advancing the state of the art of HTC,” said Miron Livny, senior researcher in distributed computing at the University of Wisconsin-Madison. “This mutually beneficial partnership will deliver better astronomy science and distributed data-intensive computing science.” The Condor project also contributed to the TeraGrid and GRIDS projects, both of which involved significant NCSA participation.

Originally known simply as Condor, this system reprocessed radio data from the Berkeley-Illinois-Maryland Association (BIMA) in the late eighties. This collaboration between the Universities of California, Illinois, and Maryland built and operated the BIMA radio telescope array, which at its 1986 debut was the premier millimeter-wavelength imaging instrument in radio astronomy.

The Dark Energy Survey Data Management (DESDM), led by NCSA, has relied on HTCondor software to enable data processing on the Blue Waters supercomputer for DES. DES is an international, collaborative effort to map hundreds of millions of galaxies, detect thousands of supernovae, and find patterns of cosmic structure in an effort to understand dark matter and the expansion of the Universe.

Source: NCSA

The post NCSA Calls on HTCondor Partnership to Process Data for DES appeared first on HPCwire.

Scientists Use CSCS Supercomputer to Search for “Memory Molecules”

HPC Wire - Fri, 10/20/2017 - 07:00

LUGANO, Switzerland, Oct. 20, 2017 — Until now, searching for genes related to memory capacity has been comparable to seeking out the proverbial “needle in a haystack”. Scientists at the University of Basel made use of the CSCS supercomputer “Piz Daint” to discover interrelationships in the human genome that might simplify the search for “memory molecules” and eventually lead to more effective medical treatment for people with diseases that are accompanied by memory disturbance.

Every human being’s physical and mental constitution is the outcome of a complex interaction between environmental factors and the individual genetic make-up (DNA). The complete set of genes and the genetic information it stores is called the genotype. This genotype influences, among other things, a person’s memory and hence the ability to remember the past. In addition to the DNA coding, there are other factors contributing to memory capacity, such as nutrition, schooling and family home life.

Memories are made of… what?

Scientists at the University of Basel from the Transfaculty Research Platform Molecular and Cognitive Neuroscience (MCN) study processes related to memory performance by investigating the molecular basis of memory. “Molecular neuroscience is a dynamic field, with work being done around the world by a community that ranges from mathematicians and computer scientists to applied psychologists,” explains Annette Milnik, a post-doctoral fellow in the research group under Professor Andreas Papassotiropoulos, co-head of the research platform. The goal of Milnik’s research is to find patterns in genes that are related to memory capacity and that might explain how memory works and how it can be influenced. “There is no such thing as ‘the’ memory gene, but rather many variations in the genome that, combined with numerous other factors, form our memory,” says Milnik.

To investigate memory capacity, researchers from the fields of medical science, psychiatry, psychology and biology make use of brainwave measurements, memory tests and imaging techniques while the brain is subjected to various stimuli. The researchers also make use of animal models, as well as genetic and epigenetic studies. The latter examine phenomena and mechanisms that cause chemical changes in the chromosomes and genes without altering their actual DNA sequence.

One quadrillion tests

To decipher the molecular basis of memory capacity, researchers “zoom” deep into the human DNA. For this purpose, Milnik examines particular gene segments and their variants. Milnik originally studied psychology and human medicine, but five years ago she traded her doctor’s career for research and the associated statistical analysis. In her current work, the number of statistical tests together amounted to one quadrillion (10^15). Analysing such a quantity of data would not be possible without a supercomputer like “Piz Daint”, she notes. Yet her results might significantly simplify future analysis of large datasets in the search for the “memory molecule”.

Although the genetic code (DNA) is fixed in all cells, mechanisms like epigenetic processes exist that regulate which parts of the code are expressed. As an example, kidney and liver cells each use different parts of the genome. One process that performs this “functional attribution” is called methylation. “You could imagine flags marking the spots on the human genome where methylation takes place,” explains Milnik. A pattern of flags typical for a particular gene thus identifies a given cell function like a pointer. The cell function is influenced by how the genes are specifically read. Moreover, according to Milnik, the environment too may influence the flag-patterns. “We are facing highly complex relationships between genes, the environment, and how they interact. This is why we want to take a step back and seek out a simplified model for these relationships,” says Milnik.

Influence of genetic variations 

Using material sampled from healthy young volunteers, Milnik and her colleagues examined 500,000 genetic variations known as single nucleotide polymorphisms (SNPs), single-base variations in the DNA sequence, in conjunction with 400,000 flag-patterns. They wanted to investigate the impact of the genetic code on methylation. According to the researchers, the results of their study show not only that single SNPs located near the flags have an impact on the flag-pattern (methylation), but also that combinations of genetic variants, both in proximity and farther apart in the genome, affect this flag-pattern. “This shows us that genetic variants exert a complex influence on methylation,” says Milnik. The flag-pattern thus unifies the impact of a larger set of genetic variants that is then represented in one signal.
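The pairwise nature of this kind of scan can be illustrated with a toy example. The sketch below uses tiny synthetic data and plain Pearson correlation as the association statistic; it is an assumption-laden stand-in for the study's actual statistical pipeline, not a reproduction of it:

```python
# Minimal sketch of an SNP-methylation association scan on synthetic data.
# Illustrative only -- not the Basel group's actual methods or data.
import random

random.seed(0)
N_SAMPLES, N_SNPS, N_SITES = 50, 20, 10

# Genotypes coded 0/1/2 (copies of the minor allele).
snps = [[random.choice([0, 1, 2]) for _ in range(N_SAMPLES)] for _ in range(N_SNPS)]
# Methylation levels ("flags") as fractions in [0, 1].
meth = [[random.random() for _ in range(N_SAMPLES)] for _ in range(N_SITES)]

def pearson(x, y):
    # Pearson correlation coefficient, with a guard for constant inputs.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

# One test per (SNP, methylation site) pair -- on the study's 500,000 x
# 400,000 grid this all-pairs structure is what drives the test count
# into supercomputer territory.
results = [(i, j, pearson(snps[i], meth[j]))
           for i in range(N_SNPS) for j in range(N_SITES)]
strongest = max(results, key=lambda t: abs(t[2]))
print(len(results), "tests; strongest |r| = %.3f" % abs(strongest[2]))
```

The point of the exercise is the combinatorics: even this 20 x 10 toy performs 200 tests, and the cost grows as the product of the two panel sizes, which is why an intermediate filter such as the flag-patterns is so valuable.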

This means for Milnik that she has found a kind of intermediate filter that reduces the large datasets that have been used to investigate memory capacity. In the past, each single genetic variation has been related individually to memory capacity. But now, according to Milnik, we know that the flags accumulate information from a complex system of SNP-effects. So in the future, instead of using all of the individual SNPs to explore memory capacity and other complex human characteristics, the methylation flag-pattern could be used as well. This approach as it currently stands is still basic research. However, once the molecules relevant to memory capacity can be identified in this way, a next step could be to investigate whether medicines exist that interact with the corresponding gene products and which might be able to influence memory capacity, explains Milnik. This would offer a gleam of hope for treatment of diseases that are accompanied by memory disturbance, such as dementia.

Source: CSCS

The post Scientists Use CSCS Supercomputer to Search for “Memory Molecules” appeared first on HPCwire.

OpenSFS Offers Maintenance Release 2.10.1 for the Lustre File System

HPC Wire - Thu, 10/19/2017 - 17:25

Oct. 19 — In a move to solidify and expand the use of the open-source Lustre file system for the greater high-performance computing (HPC) community, Open Scalable File Systems, Inc., or OpenSFS, today announced the first Long Term Support (LTS) maintenance release of Lustre 2.10.1.

This latest advance, in effect, fulfills a commitment made by Intel last April to align its efforts and support around the community release, including efforts to design and release a maintenance version. This transition enables growth in the adoption and rate of innovation for Lustre.

“When OpenSFS transitioned to being community-driven, it was exactly things like this that we were hoping for,” stated Steve Simms, former OpenSFS president. “It’s a real milestone for Lustre as an open-source project.”

“The Lustre file system is a critical tool for HPC to meet the growing demands of users who are using vast amounts of data to tackle increasingly complex problems,” added Trish Damkroger, Vice President of Technical Computing at Intel. “Intel continues to invest in the Lustre community as shown by the 2.10.1 release and is looking forward to continued collaboration with OpenSFS on the Lustre 2.11 release.”

Aside from Intel, other members of the Lustre Working Group (LWG), a technical group of senior Lustre contributors from multiple organizations, played major roles in the development of this new software package. (See list below).

“We really appreciate all the contributions to the Lustre code base from all the different organizations of LWG,” said Dustin Leverman, LWG co-chair.

Though excitement about new software often revolves around new features and performance, this new release focuses on stability and reliability and is critical for researchers and other HPC users who want to focus on HPC rather than fixing bugs. This maintenance release improves the Lustre 2.10.0 code base, which included several new features, such as progressive file layouts, multi-rail LNet, and project quotas. The next feature release will be Lustre 2.11, currently slated for Q1 of 2018.

The new release supports a variety of recently updated popular open-source technologies including Red Hat 7.4, CentOS 7.4, and ZFS 0.7.1. This should help expand the adoption of Lustre in many organizations, particularly smaller universities and HPC centers that rely on open-source software for their advanced computing needs. The announcement further underscores Lustre’s status as community-supported open-source software. (A full list of changes is linked below.)

“It is exciting to see the development community and the vendors supporting Lustre come together to deliver the first maintenance release of the 2.10 LTS stream. This is a very important achievement for the Lustre community,” said Sarp Oral, current OpenSFS president.

OpenSFS encourages contributions from any group interested in supporting Lustre, not just through software development. Contributors to the Lustre code base and LWG include:

Canonical, CEA, Cray, DDN, Hewlett Packard Enterprise, Indiana University, Intel HPDD, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, Seagate, SuperMicro.

About OpenSFS

OpenSFS is a nonprofit organization founded in 2010 to advance Lustre development, ensuring it remains vendor-neutral, open, and freely downloadable (http://lustre.org/download/). OpenSFS participants include vendors and customers who employ the world’s best Lustre file system experts, implementing and supporting Lustre solutions across HPC and commercial enterprises. OpenSFS actively promotes the growth, stability and vendor neutrality of the Lustre file system.

OpenSFS web site: http://opensfs.org

Lustre Working Group: http://wiki.opensfs.org/Lustre_Working_Group

Lustre 2.10.1 Changelog: http://wiki.lustre.org/Lustre_2.10.1_Changelog

The post OpenSFS Offers Maintenance Release 2.10.1 for the Lustre File System appeared first on HPCwire.

Data Vortex Users Contemplate the Future of Supercomputing

HPC Wire - Thu, 10/19/2017 - 16:01

Last month (Sept. 11-12), HPC networking company Data Vortex held its inaugural users group meeting at Pacific Northwest National Laboratory (PNNL), bringing together about 30 participants from industry, government and academia to share their experiences with Data Vortex machines and have a larger conversation about transformational computer science and what future computers are going to look like.

Coke Reed and John Johnson with PEPSY at PNNL

The meeting opened with Data Vortex Founder and Chairman Dr. Coke Reed describing the “Spirit of Data Vortex,” the self-routing congestion-free computing network that he invented. Reed’s talk was followed by a series of tutorials and sessions related to programming, software, and architectural decisions for the Data Vortex. A lively panel discussion got everyone thinking about the limits of current computing and the exciting potential of revolutionary approaches. Day two included presentations from the user community on the real science being conducted on Data Vortex computers. Beowulf cluster inventor Thomas Sterling gave the closing keynote, tracing the history of computer science from antiquity to the present.

“This is a new technology but it’s mostly from my perspective an opportunity to start rethinking from the ground up and move a little bit from the evolutionary to the revolutionary aspect,” shared user meeting host PNNL research scientist Roberto Gioiosa in an interview with HPCwire. “It’s an opportunity to start doing something different and working on how you design your algorithm, run your programs. The idea that it’s okay to do something revolutionary is an important driver and it makes people start thinking differently.”

Roberto Gioiosa with JOLT at PNNL

“You had that technical exchange that you’d typically see in a user group,” added John Johnson, PNNL’s deputy director for the computing division. “But since we’re looking at a transformational technology, it provided the opportunity for folks to step back and look at computing at a broader level. There was a lot of discussion about how we’re reaching the end of Moore’s law and what’s beyond Moore’s computing – the kind of technologies we are trying to focus on, the transformational computer science. The discussion actually was in some sense, do we need to rethink the entire computing paradigm? When you have new technologies that do things in a very very different way and are very successful in doing that, does that give you the opportunity to start rethinking not just the network, but rethinking the processor, rethinking the memory, rethinking input and output and also rethinking how those are integrated as well?”

The heart of the Data Vortex supercomputer is the Data Vortex interconnection network, designed for both traditional HPC and emerging irregular and data analytics workloads. Consisting of a congestion-free, high-radix network switch and a Vortex Interconnection Controller (VIC) installed on commodity compute nodes, the Data Vortex network enables the transfer of fine-grained network packets at a high injection rate.

The approach stands in contrast to existing crossbar-based networks. Reed explained, “The crossbar switch is set with software and as the switches grow in size and clock-rate, that’s what forces packets to be so long. We have a self-routing network. There is no software management system of the network and that’s how we’re able to have packets with 64-bit headers and 64-bit payloads. Our next-gen machine will have different networks to carry different sized packets. It’s kind of complicated really but it’s really beautiful. We believe we will be a very attractive network choice for exascale.”

Data Vortex is targeting all problems that require either massive data movement, short packet movement or non-deterministic data movement — examples include sparse linear algebra, big data analytics, branching algorithms and fast Fourier transforms.

The inspiration for the Data Vortex Network came to Dr. Reed in 1976. That was the year that he and Polish mathematician Dr. Krystyna Kuperberg solved Problem 110 posed by Dr. Stanislaw Ulam in the Scottish Book. The idea of Data Vortex as a data carrying, dynamical system was born and now there are more than 30 patents on the technology.

Data Vortex debuted its demonstration system, KARMA, at SC13 in Denver. A year later, the Data Vortex team publicly launched DV206 during the Supercomputing 2014 conference in New Orleans. Not long after, PNNL purchased its first Data Vortex system and named it PEPSY in honor of Coke Reed and as a nod to Python scientific libraries. In 2016, CENATE — PNNL’s proving ground for measuring, analyzing and testing new architectures — took delivery of another Data Vortex machine, which they named JOLT. In August 2017, CENATE received its second machine (PNNL’s third), MOUNTAIN DAO.

MOUNTAIN DAO comprises sixteen compute nodes (2 Supermicro F627R3-FTPT+ FatTwin Chassis with 4 servers each), each containing two Data Vortex interface cards (VICs), and 2 Data Vortex Switch Boxes (16 Data Vortex 2-level networks, on 3 switch boards, configured as 4 groups of 4).

MOUNTAIN DAO is the first multi-level Data Vortex system. Until this generation, Data Vortex systems were all one-level machines, capable of scaling up to 64 nodes. Two-level systems extend the potential node count to 2,048. The company is also planning three-level systems scalable up to 65,536 nodes, which will push it closer to its exascale goals.
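The quoted capacities grow by a factor of 32 per added level: 64 nodes at one level, 2,048 at two, and (reading the three-level figure as 65,536, the power of two consistent with the pattern) 65,536 at three. A quick check of that inferred pattern — the factor-of-32 rule is an inference from the article's numbers, not a confirmed Data Vortex specification:

```python
def max_nodes(levels, base=64, per_level_factor=32):
    # Assumed scaling rule: a one-level network reaches 64 nodes and each
    # additional network level multiplies capacity by 32.
    return base * per_level_factor ** (levels - 1)

for levels, expected in ((1, 64), (2, 2048), (3, 65536)):
    assert max_nodes(levels) == expected
```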

With all ports utilized on the two-level MOUNTAIN DAO, applications show negligible performance differences between the one-level and two-level networks.

PNNL scientists Gioiosa and Johnson are eager to be exploring the capabilities of their newest Data Vortex system.

“If you think about traditional supercomputers, the application has specific characteristics and parameters that have evolved to match those characteristics. Scientific simulation workloads tend to be fairly regular; they send fairly large messages so the networks we’ve been using so far are very good at doing that, but we are facing a new set of workloads coming up — big data, data analytics, machine learning, machine intelligence — these applications do not look very much like traditional scientific computing, so it’s not surprising that the hardware we’ve been using so far is not performing very well,” said Gioiosa.

“Data Vortex provides an opportunity to run both sets of workloads, traditional scientific applications and data analytics applications, in an efficient way, so we were very interested to see how that was actually working in practice,” Gioiosa continued. “As we received the first and second systems, we started porting workloads and applications. We have done a lot of different implementations of the same algorithm to see what is the best way to implement things on these systems, and we learned while doing this, making mistakes and talking to the vendor. The more we understood about the system, the more we changed our programs and the more efficient they became. We implemented these algorithms in ways that we couldn’t on traditional supercomputers.”

Johnson explained that having multiple systems lets them focus on multiple aspects of computer science. “On the one hand, you want to take a system and understand how to write algorithms that take advantage of its existing hardware and structure. But the other type of research we like to do is to get in there and sort of rewire it, put in sensors and probes and all different things, which can help you bring different technologies together but would get in the way of porting algorithms directly to the existing architecture. So we have different machines that serve different purposes. It goes back to one of the philosophies we have: looking at the computer as a very specialized scientific instrument. As such, we want it to be able to perform optimally on the greatest scientific challenges in energy, environment and national security, but we also want to make sure that we are helping to design and construct and tune that system so that it can do that.”

The PNNL researchers emphasized that even though these are exploratory systems they are already running production codes.

“We can run very large applications,” said Gioiosa. “These applications are on the order of hundreds of thousands of lines of code. These are production applications, not test apps that we are just running to extract the FLOPS.”

At the forum, researchers shared how they were using Data Vortex for cutting-edge applications: quantum computer simulation and density functional theory, a core component of computational chemistry. “These are big science codes, the kind you would expect to see running on leadership-class systems, and we heard from users who ported either the full application or parts of the application to Data Vortex,” said Johnson.

“This system is usable,” said Gioiosa. “You can run your application, you can do real science. We saw a simulation of quantum computers and people in the audience who are actually using a quantum computer said this is great because in quantum computing we cannot see the inside of the computer, we only see outside. It’s advancing understanding of how quantum algorithms work and how quantum machines are progressing and what we need to do to make them mainstream. I call it science, but this means production for us; we don’t produce carts but we produce tests and problems and come up with solutions and increase discovery and knowledge so that is our production.”

Having held a successful first user forum, the organizers are looking ahead to future gatherings. “There are events that naturally bring us together, like Supercomputing and other big conferences, but we are keen to have this forum once every six months or every year depending on how fast we progress,” said Gioiosa. “We expect it will grow as more people who attend will go back to their institution and say, oh this was great, next time you should come too.”

What’s Next for Data Vortex

The next major step on the Data Vortex roadmap is to move away from the commodity server approach they have employed in all their machines so far to something more “custom.”

“What we had in this generation is a method of connecting commodity processors,” said Dr. Reed. “We did Intel processors connected over an x86 (PCIe) bus. Everything is fine grained in this computer except the Intel processor and the x86 bus and so the next generation we’re taking the PCIe bus out of the critical path. Our exploratory units [with commodity components] have done well but now we’re going full custom. It’s pretty exciting. We’re using exotic memories and other things.”

Data Vortex expects to come out with an interim approach using FPGA-based compute nodes by this time next year. Xilinx technology is being given serious consideration, but specific details of the implementation are still under wraps. (We expect more will be revealed at SC17.) Current generation Data Vortex switches and VICs are built with Altera Stratix V FPGAs and future network chip sets will be built with Altera Stratix 10 FPGAs.

Data Vortex has up to this point primarily focused on big science and Department of Defense style problems, but now they are looking at expanding the user space to explore anywhere there’s a communication bottleneck. Hyperscale and embedded systems hold potential as new market vistas.

In addition to building its own machines, Data Vortex is inviting other people to use its interconnect in their computers or devices. In fact, the company’s primary business model is not to become a deliverer of systems. “We’ve got the core communication piece so we’re in a position now where we’re looking at compatible technologies and larger entities to incorporate this differentiating piece to their current but more importantly next-generation designs,” Data Vortex President Carolyn Coke Reed Devany explained. “What we’re all about is fine-grained data movement and that doesn’t necessarily have to be in a big system, that can be fine-grained data movement in lots of places.”

The post Data Vortex Users Contemplate the Future of Supercomputing appeared first on HPCwire.

AI Self-Training Goes Forward at Google DeepMind

HPC Wire - Thu, 10/19/2017 - 14:23

Imagine if all the atoms in the universe could be added up into a single number. Big number, right? Maybe the biggest number conceivable. But wait, there’s a bigger number out there. We’re told that Go, the world’s oldest board game, has more possible board positions than there are atoms in the universe. Urban myth? All right, let’s say Go has half as many positions as there are atoms. Make it a tenth. The point is: Go complexity is beyond measure.

DeepMind, Google’s AI research organization, announced today in a blog that AlphaGo Zero, the latest evolution of AlphaGo (the first computer program to defeat a Go world champion) trained itself within three days to play Go at a superhuman level (i.e., better than any human) – and to beat the old version of AlphaGo – without leveraging human expertise, data or training.

The absence of human training may have “liberated” AlphaGo Zero to find new ways to play Go that humans don’t know, putting the new system beyond the talents of the human-trained AlphaGo.

Richard Windsor, analyst at Edison Investment Research, London, notes that today’s announcement is an important step forward on one of the three big AI challenges which, he said, are:

  • AI systems that can be trained with less data
  • AI that takes lessons learned from one task and applies it across multiple tasks
  • AI that builds its own models

“DeepMind has been able to build a new Go (AlphaGo Zero) algorithm that relies solely on self-play to improve and within 36 hours was able to defeat AlphaGo Lee (the one that beat [professional Go player] Lee Sedol) 100 games to 0…,” Windsor said. “DeepMind’s achievement represents a huge step forward in addressing the first challenge as AlphaGo Zero used no data at all…”

According to DeepMind, previous versions of AlphaGo were trained on the basis of thousands of human games. But AlphaGo Zero “skips this step and learns to play simply by playing games against itself, starting from completely random play.” In doing so, it quickly surpassed human level of play and went undefeated against AlphaGo.

The new self-training algorithm, according to the DeepMind blog, is significant for AI systems to take on problems for which “human knowledge may be too expensive, too unreliable or simply unavailable. As a result, a long-standing ambition of AI research is to bypass this step, creating algorithms that achieve superhuman performance in the most challenging domains with no human input.”

DeepMind said AlphaGo Zero uses a novel form of reinforcement learning in which the system starts off with a neural network that knows nothing about Go. “It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.”

AlphaGo has become progressively more efficient thanks to hardware gains and, more recently, algorithmic advances (Source: DeepMind)

The updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again, improving incrementally with each game. (The algorithmic change also significantly improves system efficiency, see graphic at right.)
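The loop DeepMind describes — play against yourself, score each position by the eventual winner, then play again with the improved evaluations — can be sketched with a toy game standing in for Go. Everything here is an illustrative simplification, not DeepMind's method: a "21 takeaway" game replaces Go, a tabular value dictionary replaces the neural network, and simple epsilon-greedy exploration replaces the tree search.

```python
import random

random.seed(0)  # reproducible toy run

# Toy game: players alternate removing 1-3 stones from a pile of 21;
# whoever takes the last stone wins.
TAKE = (1, 2, 3)
value = {}  # state -> estimated outcome in [-1, 1] for the player to move

def v(s):
    return value.get(s, 0.0)

def pick(s, greedy=False, eps=0.25):
    # Taking t stones hands the opponent position s - t, so prefer the move
    # that leaves the opponent in the worst-scored position.
    opts = [t for t in TAKE if t <= s]
    if not greedy and random.random() < eps:
        return random.choice(opts)      # exploration during self-play
    return min(opts, key=lambda t: v(s - t))

def self_play(games=20000, lr=0.1):
    for _ in range(games):
        s, hist = 21, []
        while s > 0:
            hist.append(s)              # position faced by the player to move
            s -= pick(s)
        outcome = 1.0                   # the player who just moved won
        for state in reversed(hist):
            # Nudge each position's score toward the eventual result, from
            # the mover's perspective (sign alternates between the players).
            value[state] = v(state) + lr * (outcome - v(state))
            outcome = -outcome

self_play()
# The learned table matches the known theory for this game: positions that
# are multiples of 4 are lost for the mover; from 5 stones, take 1 to win.
assert value[4] < 0 < value[5]
assert pick(5, greedy=True) == 1
```

Starting from no knowledge beyond the rules, the evaluations improve purely from self-play outcomes, which is the essence of the tabula-rasa claim (AlphaGo Zero additionally retrains a deep network and uses it inside a search at every move).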

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself,” said DeepMind.

Put another way by Windsor: “It is almost as if the use of human data limited the potential of the machine’s ability to maximize its potential.”

While the new system makes strides against the self-training Big AI Challenge, Windsor expressed doubts that it addresses the third challenge (automated model building) because it used a model already used by the previous version of AlphaGo.

“…the system of board assessment and move prediction (but not the experience) used in AlphaGo Lee was also built into AlphaGo Zero,” said Windsor. “Hence, we think that this system was instead using a framework that had already been developed to play and applying reinforcement learning to improve, rather than building its own models.”

But this isn’t to minimize the achievement of AlphaGo Zero, nor to quell those (such as Elon Musk) who worry that human intelligence will eventually be dwarfed by AI, with potential dystopic implications.

“What will really have the likes of Elon Musk quaking in their boots is the fact that AlphaGo Zero was able to obtain a level of expertise of Go that has never been achieved by a human mind,” Windsor said.

Having said that, include Windsor among those who don’t believe machines will enslave the human race. He also said that DeepMind may have trouble applying its achievement elsewhere.

“Many of the other digital ecosystems have been trying to use computer generated images to train image and video recognition algorithms but there has been no real success to date and we suspect that taking what DeepMind has achieved and applying it to real world AI problems like image and video recognition will be very difficult,” he said, explaining that “the Go problem is based on highly structured data in a clearly defined environment whereas images, video, text, speech and so on are completely unstructured.”

But DeepMind sounded a more optimistic note on the broader applicability of AlphaGo Zero teaching itself new and incredibly complicated tricks.

“These moments of creativity give us confidence that AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges humanity is facing…. If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society.”

The post AI Self-Training Goes Forward at Google DeepMind appeared first on HPCwire.

SC17 Video: How Supercomputing Helps Explain the Ocean’s Role in Weather and Climate

HPC Wire - Thu, 10/19/2017 - 13:10

DENVER, Oct. 19, 2017 — Using the power of today’s high performance computers, Earth scientists are working hand in hand with visualization experts to bring exquisitely detailed views of Earth’s oceans into sharper focus than ever before.

A video just released by the SC17 conference relates how scientists are zooming in on one of the highest-resolution computer simulations in the world to explore never-before-seen features of global ocean eddies and circulation.

“The ocean is what makes life possible on this beautiful planet,” said Dr. Dimitris Menemenlis, Research Scientist in the Earth Science Section at NASA’s Jet Propulsion Laboratory (JPL), Pasadena, Calif. “We should therefore try to understand and study and know how it works.”

Menemenlis has been doing just that—collaborating with other experts for two decades to continually improve data assimilation and numerical modeling techniques in order to achieve increasingly accurate descriptions of the global ocean circulation. Numerical global ocean simulations today have horizontal grid cells spaced 1 to 2 kilometers apart, compared to 25 to 100 kilometers 20 years ago.
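To put those resolutions in perspective, a rough back-of-the-envelope calculation (the area figures below are common approximations, not from the article): the horizontal cell count over a fixed ocean area scales with the inverse square of the grid spacing.

```python
EARTH_SURFACE_KM2 = 510e6      # approximate total surface area of Earth
OCEAN_FRACTION = 0.71          # oceans cover roughly 71% of that surface

def horizontal_cells(spacing_km):
    # Number of horizontal grid cells needed to tile the global ocean.
    return OCEAN_FRACTION * EARTH_SURFACE_KM2 / spacing_km ** 2

# Refining from 25 km to 2 km spacing multiplies the horizontal cell count
# by (25 / 2)^2 = 156.25 — before counting depth levels or the smaller time
# steps a finer grid requires, so the real compute cost grows even faster.
assert abs(horizontal_cells(2) / horizontal_cells(25) - 156.25) < 1e-9
```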

“We are working with people at NASA centers, universities, and labs around the world who are looking for answers to important questions such as how ocean heat interacts with land and sea ice, how ice melt could raise sea levels and affect coastal areas, how carbon in the atmosphere is changing seawater chemistry, and how currents impact the ocean carbon cycle,” stated Menemenlis.

The new simulation accurately represents temperature and salinity variations in the ocean caused by a wide range of processes, from mesoscale eddies to internal tides. This simulation gives scientists a better picture of how ocean currents carry nutrients, carbon dioxide, and other chemicals to various locations around the world. These improvements are made possible by evolving supercomputer capabilities, satellite and other observational methods, and visualization methods.

In particular, visualization and data analysis experts in the NASA Advanced Supercomputing (NAS) Division at NASA’s Ames Research Center in Silicon Valley have developed an interactive visualization technique that allows scientists to explore the entire global ocean on NAS’s 128-screen hyperwall and then zoom in on specific regions in near-real-time. Menemenlis says the new capability helps to quickly identify interesting ocean phenomena in the numerical simulation that would otherwise be difficult to discover.

Scientists making satellite and in situ ocean observations can use the results from the simulation to better understand the observations and what they tell us about the ocean’s role in our planet’s weather and climate. The ultimate goal is to create a global, full-depth, time-evolving description of ocean circulation that is consistent with the model equations as well as with all the available observations.

“The ocean is vast and there are still a lot of unknowns. We still can’t represent all the conditions and are pushing the boundaries of current supercomputer power,” said Menemenlis. “This is an exciting time to be an oceanographer who can use satellite observations and numerical simulations to push our understanding of ocean circulation forward.”

Source: SC17

The post SC17 Video: How Supercomputing Helps Explain the Ocean’s Role in Weather and Climate appeared first on HPCwire.

Findley receives ASM International Award

Colorado School of Mines - Thu, 10/19/2017 - 11:30

A Colorado School of Mines professor has received an award for distinguished contributions in the field of materials science and engineering.

Kip Findley, an associate professor in the Department of Metallurgical and Materials Engineering, received ASM International’s Silver Medal Award for outstanding contributions to developing a physically based understanding of deformation, fatigue and fracture in high-performance steels.

“It is a tremendous honor to be recognized by ASM International,” Findley said. “This recognition encompasses the work we do in the Advanced Steel Processing and Products Research Center. Our center works at the interface of users and producers of steel to develop steel alloys and processing for enhanced performance, including fatigue, fracture and deformation. Our research and cooperation with industry leads to advancements in steel products for these applications to enable increased fuel efficiency and safer pipelines, for example.”

Findley received the award at the MS&T17 conference October 8-12 in Pittsburgh, Pennsylvania. The silver medal recognizes mid-career researchers for contributions and service to the field. Only one academic and one non-academic may receive this honor each year. Judging is based on technical or business accomplishments, beneficial impact of contributions to industry or society and volunteer professional service.


Joe DelNero, Digital Media and Communications Manager, Communications and Marketing | 303-273-3326 | jdelnero@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Sharp invited to Kavli Frontiers of Science Symposium

Colorado School of Mines - Thu, 10/19/2017 - 11:03
An associate professor of civil and environmental engineering at Colorado School of Mines has been invited to participate in a National Academy of Sciences symposium designed to encourage collaboration between distinguished young scientists.

Jonathan Sharp will participate in the U.S. Kavli Frontiers of Science symposium, which takes place Feb. 15-17, 2018, in Irvine, California.

Selected by a committee of Academy members, attendees are young researchers who have already made recognized contributions to science, including recipients of major national fellowships and awards, and who have been identified as future leaders in science.

The symposium series was established in 1989, and more than 200 of its “alumni” have been elected to the National Academy of Sciences. Twelve have been awarded Nobel Prizes.

The 2018 symposium includes eight sessions featuring formal presentations of cutting-edge research:

  • 3D Genome
  • Bio-interfaces – Blurring the Borders between Biological and Digital Computing
  • Brain-Machine Interfaces
  • Green Chemistry
  • Humans and Pathogens in an Evolutionary Context
  • Materials by Design
  • Ocean Anoxia
  • The Search for Life Near and Far

The meeting also includes time for informal discussions and poster sessions.

Sharp joined Mines in 2009 and holds a bachelor’s degree from Princeton University and master’s and doctoral degrees in civil and environmental engineering from the University of California, Berkeley. His research integrates facets of microbiology, engineering, geochemistry and hydrology to enhance understanding of the natural and built environment, particularly the impact of biological processes on water quality.

In 2011, Sharp received an NSF CAREER Award for a project titled “Cleaner Water Through Microbial Stress: An Integrated Research and Education Plan.”

CONTACT
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Categories: Partner News

Intel FPGAs Power Acceleration-as-a-Service for Alibaba Cloud

HPC Wire - Thu, 10/19/2017 - 08:03

Oct. 19, 2017 — Intel today announced that Intel field programmable gate arrays (FPGAs) are now powering the Acceleration-as-a-Service of Alibaba Cloud, the cloud computing arm of Alibaba Group. The acceleration service, which can be launched from the Alibaba Cloud website, enables customers to develop and deploy accelerator solutions in the cloud for Artificial Intelligence inference, video streaming analytics, database acceleration and other fields where intense computing is required.

The Acceleration-as-a-Service with Intel FPGAs, also known as Alibaba Cloud’s F1 Instance, provides users access to cloud acceleration in a pay-as-you go model, with no need for upfront hardware investments.

“Intel FPGAs offer us a more cost-effective way to accelerate cloud-based application performance for our customers who are running business applications and demanding data and scientific workloads,” said Jin Li, vice president of Alibaba Cloud. “Another key value of FPGAs is that they provide high performance at low power, and the flexibility for managing diverse computing workloads.”

“Our collaboration with Alibaba Cloud brings forward FPGA-based accelerator capabilities and tools that will be offered to developers and end users as they work on large and intense computing workloads,” said John Sakamoto, vice president, Communications and Data Center Solutions, Intel Programmable Solutions Group. “A public cloud environment offers developers a place to start the FPGA journey, with virtually no initial capital outlay and a low-risk environment to experiment, that can scale to meet growing capacity requirements.”

As part of the Intel deployment, Alibaba Cloud users will have access to the Acceleration Stack for Intel Xeon CPU with FPGAs, which offers a common developer interface, abstracted hardware design, and development tools that support hardware or software development flows (OpenCL or RTL) that the developer is most familiar with. Users will also have access to a rich ecosystem of IP for genomics, machine learning, data analytics, cyber security, financial computation and video transcoding.

Source: Intel

The post Intel FPGAs Power Acceleration-as-a-Service for Alibaba Cloud appeared first on HPCwire.

Emeritus professor of hydrology receives NGWA honors

Colorado School of Mines - Wed, 10/18/2017 - 17:12

An emeritus professor of geology and geological engineering at Colorado School of Mines has been honored for her work in groundwater modeling by the National Groundwater Association.

Eileen Poeter is the 2017 recipient of the NGWA M. King Hubbert Award, presented to a person “who has made a major science or engineering contribution to the groundwater industry.” Poeter was also honored with the NGWA Lifetime Member Award and NGWA Fellow Designation.

During her time at Mines, Poeter was the director of the International Ground Water Modeling Center, now the Integrated GroundWater Modeling Center. Poeter led the center from 1997 to 2011, having started at Mines as an assistant professor in 1987 and working her way up the ranks to full professor by 1996.

Under Poeter’s directorship, the IGWMC hosted the first MODFLOW and More conference in 1998. Mines has hosted the conference every two to three years since then, establishing it as the premier conference for practical applications of groundwater modeling. Rowlinson Professor of Hydrology Reed Maxwell assumed leadership of both the IGWMC and the MODFLOW conference in 2011.

“I took a position at Colorado School of Mines because of its focus on practical applications and because it recognized geological engineering as a unique discipline worthy of department status,” said Poeter. “My groundwater hydrology career strove to bring more realistic geology into groundwater models and to help geologists see that hydrologic data could improve geologic interpretation.” 

John McCray, director of the Civil and Environmental Engineering Program at Mines, called Poeter his “personal mentor and professional hero.”

“When I joined Colorado School of Mines as faculty in 1998, she was really the only groundwater hydrogeologist at Mines. Now, I believe we have a program that is one of the best in the U.S.,” said McCray. “I would argue that her impacts in this area have been among the most important in the history of ground water.”

Poeter will receive all three honors during an awards ceremony at the 2017 NGWA Groundwater Week in Nashville, Tennessee, December 5-7, 2017.

Agata Bogucka, Communications Manager, College of Earth Resource Sciences & Engineering | 303-384-2657 | abogucka@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu
Categories: Partner News

DECTech wins Mayor’s Gold Mine Award for Excellence

Colorado School of Mines - Wed, 10/18/2017 - 13:38

A Colorado School of Mines outreach program to engage girls in STEM has been awarded the Mayor’s 2017 Gold Mine Award for Excellence.

DECTech was one of five award recipients at the City of Golden Mayor’s 2017 Community Celebration on Oct. 3. The Gold Mine Award recognizes contributions to the betterment of Colorado School of Mines and the Golden community. 

City officials said DECTech was chosen “for their efforts to make an impact on local girls, and show them the fun and importance of science, technology, engineering and math (STEM).”

Short for “Discover-Explore-Create Technology,” DECTech was founded in 2012 by Tracy Camp, professor and head of the Computer Science Department at Mines, in response to studies that show girls’ interest in science and engineering starts to decline the closer they get to middle school. The program, designed to foster and continue that interest through creative and interactive activities, originally targeted girls in 3rd-6th grades but has since grown to include middle and high school students. Female Mines students serve as the program’s leaders. 

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Mines alum George Saunders wins Man Booker Prize

Colorado School of Mines - Wed, 10/18/2017 - 11:47

George Saunders ’81 has won one of the most prestigious awards in English literature, the Man Booker Prize, for his first novel, “Lincoln in the Bardo.”

Saunders studied geophysics at Colorado School of Mines, working after graduation as a field geophysicist in Indonesia. He later enrolled in an MFA program at Syracuse University, where he still teaches.

Saunders recently talked to the Lawrence Journal-World about how his engineering background informs his creative writing.

“It was a gradual sort of transformation, but it did really help me. I think one way was it got me out into the world a little bit and got me overseas and understanding how many of my ideas were just sort of provincial and small. I think also the logic of that kind of work (engineering) also got into my own writing. There’s a kind of a rigor in engineering. Engineering doesn’t care how hard you try—if the answer’s wrong, it’s wrong. So, that was something that’s been helpful to me over the years. I can do a lot of drafts of a story. After draft No. 400, if it’s still stupid, then I’m like, ‘OK, gotta do another one,’ you know? I don’t have that expectation that putting in effort necessarily yields a result. You have to keep pushing.”

Published in February, “Lincoln in the Bardo” tells the story of President Lincoln’s visit to his son’s crypt in a Washington cemetery and has been heralded for its unconventional form. The Man Booker Prize, first awarded in 1969, recognizes the best original novel written in the English language and published in the U.K. Eligibility was previously restricted to authors from Great Britain, Ireland and the Commonwealth nations before being expanded in 2014 to any novel written in English and published in Britain.

National and international media wrote about Saunders’ win: The New York Times, The Guardian, The Atlantic, BBC News, NPR, Washington Post, The Telegraph.

Image credit: Booker Prize Foundation

Emilie Rusch, Public Information Specialist, Communications and Marketing | 303-273-3361 | erusch@mines.edu
Mark Ramirez, Managing Editor, Communications and Marketing | 303-273-3088 | ramirez@mines.edu

Categories: Partner News

Dassault Systèmes’ Living Heart Project Reaches Next Milestones

HPC Wire - Wed, 10/18/2017 - 10:31

HOLLYWOOD, Fla., Oct. 18, 2017 — Dassault Systèmes (Paris:DSY) (Euronext Paris: #13065, DSY.PA) today outlined, at the 3DEXPERIENCE Forum North America, multiple milestones in its Living Heart Project aimed to drive the creation and use of simulated 3D personalized hearts in the treatment, diagnosis and prevention of heart diseases. As the scientific and medical community seeks faster and more targeted ways to improve patient care, the Living Heart Project is extending its reach through new partnerships and applications while lowering the barriers to access.

The Living Heart is now available through the 3DEXPERIENCE platform on the cloud, offering the speed and flexibility of high-performance computing (HPC) to even the smallest medical device companies. Any life sciences company can immediately access a complete, on-demand HPC environment to scale up virtual testing securely and collaboratively while managing infrastructure costs. This also crosses an important boundary toward the use of the Living Heart directly in a clinical setting.

“Medical devices need thousands of tests in the development stage,” said Joe Formicola, President and Chief Engineer, Caelynx. “With the move of the Living Heart to the cloud, effectively an unlimited number of tests of a new design can be carried out simultaneously using the simulated heart rather than one at a time, dramatically lowering the barrier to innovation, not to mention the time and cost.”

Since signing a 5-year agreement with the FDA in 2014, Dassault Systèmes continues to align with the regulatory agency on the use of simulation and modeling to accelerate approvals. Bernard Charles, CEO and vice chairman of the board of directors of Dassault Systèmes, gave the keynote at the 4th Annual FDA Scientific Computing Day in October 2016. Later, in July 2017, FDA Commissioner Dr. Scott Gottlieb publicly outlined the FDA plan to help consumers capitalize on advances in science stating, “Modeling and simulation plays a critical role in organizing diverse data sets and exploring alternate study designs. This enables safe and effective new therapeutics to advance more efficiently through the different stages of clinical trials.”

The Living Heart Project has grown to more than 95 member organizations worldwide including medical researchers, practitioners, device manufacturers and regulatory agencies united in a mission of open innovation to solve healthcare challenges. The project has supported 15 research grant proposals by providing access to the model, associated technologies and project expertise. Novel use of the model to understand heart disease and study the safety and effectiveness of medical devices has appeared in eight articles published in peer-reviewed journals to date.

For the first time, the Living Heart was used to simulate detailed drug interactions affecting the entire organ function. Researchers at Stanford University working with UberCloud recently used the Living Heart as a platform for a model that would enable pharmaceutical companies to test drugs for the risk of inducing cardiac arrhythmias, the leading negative side effect preventing FDA approval.

“The Living Heart Project is a strategic part of a broader effort by Dassault Systèmes to leverage its advanced simulation applications to push the boundaries of science,” said Jean Colombel, Vice President Life Sciences, Dassault Systèmes. “By creating both a community and a transformational platform, we are beginning to see the advances from the Living Heart Project being used for additional aspects of cardiovascular research as well as for other parts of the body, for example the brain, the spine, the foot, and the eye, to reach new frontiers in patient care.”

Source: Dassault Systèmes

The post Dassault Systèmes’ Living Heart Project Reaches Next Milestones appeared first on HPCwire.

