FREMONT, Calif., Jan. 3 — Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced a new version of its Scyld ClusterWare high performance computing cluster management solution with enhanced functionality for support of large scale clusters.
“The release of Scyld ClusterWare 7 continues the growth of Penguin’s HPC provisioning software and enables support of large scale clusters ranging to thousands of nodes,” said Victor Gregorio, Senior Vice President of Cloud Services at Penguin Computing. “We are pleased to provide this upgraded version of Scyld ClusterWare to the community for Red Hat Enterprise Linux 7, CentOS 7 and Scientific Linux 7.”
Scyld ClusterWare 7 also provides support for Intel Omni-Path and Mellanox InfiniBand architectures. Penguin Computing’s years of HPC expertise allow ClusterWare to be highly tuned for HPC workloads by offering pre-bundled OS optimizations for cluster performance, a single system install for straightforward change management, tools for monitoring cluster health, ready-to-run HPC schedulers, a wide array of optimized MPI implementations, and all the middleware needed to effectively run a compute cluster.
The announcement reinforces a total solutions approach by Penguin Computing. For example, implementing the company’s OCP-compliant Tundra platform with Scyld ClusterWare 7 and Intel Omni-Path or Mellanox InfiniBand fabrics provides an exceptional infrastructure for a large, dense compute environment.
Scyld ClusterWare is designed for extremely rapid provisioning paired with the ability to instantly update compute clusters by applying changes to a single master node, allowing simplified management of HPC clusters without consuming system administration cycles.
About Penguin Computing
Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America and has built and operates the leading specialized public HPC cloud service Penguin Computing On-Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions that are based on open architectures and comprise non-proprietary components from a variety of vendors. Penguin Computing is also one of a limited number of authorized Open Compute Project (OCP) solution providers leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line, which applies the benefits of OCP to high performance computing. Penguin Computing has systems installed with more than 2,500 customers in 40 countries across eight major vertical markets. Visit http://www.penguincomputing.com to learn more about the company and follow @PenguinHPC on Twitter.
Source: Penguin Computing
The promise of machine learning has a science fiction flavor to it: computer programs that learn from their experiences and get better and better at what they do. So is machine learning fact or fiction?
The global marketplace answers this question emphatically: Machine learning is not just real; it is a booming field of technology that is being applied in countless artificial intelligence (AI) applications, ranging from crop monitoring and drug development to fraud detection and autonomous vehicles. Collectively, the global AI market is expected to be worth more than $16 billion by 2022, according to the research firm MarketsandMarkets.
In the life sciences arena, researchers are leveraging machine learning in their work to drive groundbreaking discoveries that may help improve the health and wellbeing of people. This research is taking place around the world.
In the United States, for example, researchers at the MIT Lincoln Laboratory Supercomputing Center (LLSC) are applying the power of machine learning algorithms and a new Dell EMC Top500 supercomputer to ferret out patterns in massive amounts of patient data collected from publicly available sources. These scientific investigations could potentially lead to faster personalized treatments and the discovery of cures.
In one such project, researchers affiliated with the LLSC used the new Top500 supercomputer to gain insights from an enormous amount of data collected from an intensive care unit over 10 years. “We did analytics and analysis on this data that was not possible before,” says LLSC researcher Vijay Gadepally. “We were able to reduce the amount of time taken to do analysis, such as finding patients who have similar waveforms, by a factor of two to 10.”
In China, Dell EMC is collaborating with the Chinese Academy of Sciences on a joint artificial intelligence and advanced computing laboratory. This lab focuses on research and applications of new computing architectures in the fields of brain information processing and artificial intelligence. Research conducted in the lab spans cognitive function simulation, deep learning, brain computer simulation, and related new computing systems. The lab also supports the development of brain science and intelligence technology research, promoting Chinese innovation and breakthroughs at the forefront of science. In fact, Dell China was recently honored with an “Innovation Award of Artificial Intelligence in Technology & Practice” in recognition of the collaboration.
In Europe, meanwhile, the University of Pisa is using deep learning technologies and systems from Dell EMC for DNA sequencing, encoding DNA as an image. Examples like these could go on and on, because the application of machine learning techniques in the life sciences has tremendous momentum in laboratories around the world.
So why does this matter? In short, because we need to gain insights from massive amounts of data, and this process requires systems that exceed human capabilities. Machine learning algorithms can dig through mountains of data to ferret out patterns that might not otherwise be recognizable. Moreover, machine learning algorithms get better over time, because they learn from their experiences.
In the healthcare arena, machine learning promises to drive life-saving advances in patient care. “While robots and computers will probably never completely replace doctors and nurses, machine learning/deep learning and AI are transforming the healthcare industry, improving outcomes, and changing the way doctors think about providing care,” notes author Bernard Marr, writing in Forbes. “Machine learning is improving diagnostics, predicting outcomes, and just beginning to scratch the surface of personalized care.”
Machine learning is also making wide inroads in diverse industries and commercial applications. MasterCard, for example, is using machine learning to detect fraud, while Facebook is putting machine learning technologies to work via a facial recognition algorithm that continually improves its performance.
“Machine learning has become extremely popular,” says Jeremy Kepner, Laboratory Fellow and head of the MIT Lincoln Laboratory Supercomputing Center. “Computers can see now. That’s something that I could not say five years ago. That technology is now being applied everywhere. It’s so much easier when you can point a camera at something and it can then produce an output of all the things that were in that image.”
Here’s the bottom line: Machine learning is no longer the stuff of science fiction. It’s very real, it’s here today and it’s getting better all the time—in life sciences and fields beyond.
Ready to get started with machine learning?
Here are some ways to further your understanding of what machine learning systems could do for your organization:
- Many courses in machine learning are available.
- A growing number of open source communities are driving advances in machine learning. You can find links to communities and other resources in the Intel Developer Zone.
- Dell EMC | Intel HPC Innovation Centers around the world offer opportunities for technical collaboration and early access to technology.
MarketsandMarkets. “Artificial Intelligence Market by Technology (Deep Learning, Robotics, Digital Personal Assistant, Querying Method, Natural Language Processing, Context Aware Processing), Offering, End-User Industry, and Geography – Global Forecast to 2022.” November 2016.
Bernard Marr. “How Machine Learning, Big Data And AI Are Changing Healthcare Forever.” Forbes. Sept. 23, 2016.
The post Capitalizing on machine learning—from life sciences to financial services appeared first on HPCwire.
The Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), headquartered in Bremerhaven, Germany, is one of the country’s premier research institutes within the Helmholtz Association of German Research Centres, and is an internationally respected center of expertise for polar and marine research. In November 2015, AWI awarded Cray a contract to install a cluster supercomputer that would help the institute accelerate time to discovery. Now the effort is starting to pay off.
The new Cray CS400 system, nicknamed “Ollie” by AWI staff, was installed in April 2016 and is being phased in for use by researchers across AWI. Ollie entered the Top500 list in June 2016 (at No. 365) and most recently appeared in November (at No. 473). The system uses the Intel Xeon processor E5-2600 v4 (Broadwell) as well as Intel’s Omni-Path Architecture (OPA) fabric. The file system chosen was the BeeGFS (formerly FhGFS) parallel cluster file system, which spreads user data across multiple servers to improve performance and capacity scaling.
AWI now uses its new supercomputer to run advanced research applications related to climate and environmental studies, including global circulation models, regional atmospheric models, glaciology studies and other computing-intensive, numerical simulations such as bioinformatics protein simulations.
“We have just started running on the Cray HPC system, have ported the main ice flow models, and are starting to do paleo ice sheet simulations on it,” said Thomas Kleiner, whose glaciology research contributes to the understanding of ice sheet dynamics in the earth system and the impact of climate change. “The new system is much larger and allows us to run more detailed simulations, such as simulations of Antarctica at 5 km resolution, which was not possible on our older systems. It also allows us to do many simulations at the same time, which helps in our research.”
“However, we also want to run simulations further back in time, which is very important for climate change modeling at AWI. Compared to other components in the earth system (e.g. atmosphere or ocean), ice sheet models are relatively inexpensive in terms of computational resources if they run for only a few thousand years, but ice sheets have a long memory of the past climate, and therefore models need to be run over very long time scales (several glacial cycles).
“Running a 1,000-year simulation of Antarctica at 10 km resolution (573 x 489 x 101 grid nodes) for climate and glacier research requires 114 CPU hours on the Cray CS400 system. However, we need a resolution of 5 km to get adequate detail, and we also need to run multiple simulations with varying parameters for model uncertainty estimates. What we currently have to do is run simulations at a coarser resolution for many thousands of years, until around 10,000 years before the present time, and then run simulations at 5 km resolution, where the 5 km setup (1145 x 977 x 201 grid nodes) already requires 420 CPU hours per 100 years. Every model improvement in terms of the physics considered requires a complete recalibration of the model to match observations (albeit very sparse ones). The parameter space is huge and needs to be investigated carefully.
“We have relevant small-scale (less than a kilometer) processes that are controlling ice sheet internal dynamics, and on the other side global atmospheric and ocean models that deliver climatic boundary conditions to the ice sheet on coarse grids but require very short time steps (hours to seconds). Thus, HPC systems of the future are needed to allow us to bridge the gap between the different scales (spatially and temporally) in fully coupled Earth system models including ice sheets.”
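The scaling implied by the figures Kleiner quotes can be checked with quick arithmetic (all numbers are taken from the text above; this is a rough sketch, since actual runtime does not depend on grid size alone):

```python
# Grid sizes quoted for the two Antarctica setups.
nodes_10km = 573 * 489 * 101   # 10 km setup: ~28.3 million grid nodes
nodes_5km = 1145 * 977 * 201   # 5 km setup: ~224.9 million grid nodes

# The 5 km grid has roughly 8x as many grid nodes as the 10 km grid.
node_ratio = nodes_5km / nodes_10km

# Quoted costs: 114 CPU hours per 1,000 simulated years at 10 km,
# and 420 CPU hours per 100 simulated years at 5 km.
cost_10km_per_1000y = 114
cost_5km_per_1000y = 420 * 10

# Per simulated millennium, the 5 km setup is ~37x more expensive.
cost_ratio = cost_5km_per_1000y / cost_10km_per_1000y

print(f"{node_ratio:.1f}x more grid nodes")  # 7.9x more grid nodes
print(f"{cost_ratio:.1f}x more CPU hours")   # 36.8x more CPU hours
```

The cost grows faster than the node count, which is consistent with finer grids also demanding shorter time steps, and it is why the coarse-resolution spin-up strategy described above is necessary.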
In 2004, AWI established a bioinformatics group to provide services to projects requiring bioinformatics and data analysis expertise. This group participates in data analyses in diverse projects including phylogenetics, phylogenomics, population genetics, RNA-Seq and metatranscriptomics.
Lars Harms, a bioinformatics researcher at AWI and a newcomer to HPC systems, is using the Cray system to speed up metatranscriptomics research and, in some cases, to enable the analysis at all. “Our metatranscriptomics research helps to analyze the functional diversity and the state of organism communities in terms of their taxonomic composition and their response patterns to environmental change, gradients, or ecological dynamics. Processing the associated large datasets on the Cray HPC system helps to speed up otherwise time-consuming tasks. Furthermore, the multi-purpose concept of the AWI Cray system, including some high-memory nodes, is a big advantage for our research, enabling us to assemble large-scale metatranscriptomes, which is not possible on the existing small-scale servers due to lack of memory.”
Harms is also performing functional annotations using the BLAST and HMMER codes on the Cray CS400 system. “BLAST and HMMER are typically very time consuming to run. We are using the Cray system to process these tasks in a highly parallel manner, which provides a huge speedup compared to our existing platforms.” Harms found a way to speed up the analysis of large datasets of protein sequences with HMMER3 even further by copying the entire HMM database onto solid state drives (SSDs) attached directly to the system nodes. This resulted in a huge speedup due to the faster data transfer rates of the SSDs in comparison to the file system of the Cray system (Figure 1).
One challenge that Harms still faces is the need to optimize software. He said, “Much of the existing software code and tools were developed to work on a single server and need to be optimized to take advantage of parallel processing capabilities of modern processors and HPC systems.”
Given the high cost of energy in Europe, maximizing energy efficiency is a top AWI priority. Malte Thoma, AWI system administrator, emphasized that energy efficiency was a major consideration when selecting a new HPC system. The Cray CS400 is an air-cooled system that can control energy consumption on a per-job level by allowing users and administrators to set the maximum frequency of the processors. This is done via a CPU frequency setting in the slurm.epilog and slurm.prolog files, as well as an AWI-written bash script that reduces maximum performance if the temperature in the room exceeds specific limits. The CS400 system also provides the ability to set a general power limit for all nodes, or a fraction of them, to conserve energy using features of the Intel Node Manager (server firmware that provides fine-grained power control).
With every node and CPU busy, the Cray CS400 system consumes approximately 150 kW of power. If the system is idle but the CPUs remain in HPC performance mode, it consumes 100 kW. If all nodes are switched into power save mode and the computer is idle, energy usage goes down to 55 kW, roughly a third of the full-load figure. The system can also switch between performance mode and power save mode automatically: when a user starts a job, the nodes are put into performance mode, and they switch back to power save mode when the job finishes.
Unique Modeling and System Profiling Tools
AWI scientists develop software systems, tools and libraries to support AWI staff in their individual research. The researchers, system administrator and IT team use the Cray and Intel compilers, as well as other tools, to optimize code. A number of existing AWI projects, such as FESOM, MITgcm and MPI-ESM, run on other platforms and are not yet run on the Cray system. The team is also performing benchmarking and profiling work on MPI and OpenMP codes, modifying them to improve parallelization and vectorization.
According to Dr. Natalja Rakowsky, “A major optimization was performed when the sea-ice-ocean model FESOM was redesigned, switching from finite elements to finite volumes. The data structure was improved considerably. Both codes operate on a grid that is unstructured-triangular in the horizontal and consists of layers in 3D (Z coordinates). FESOM collects the variables along the horizontal first, layer by layer. This results in indirect addressing in all loops, and in a lot of cache misses, because many computations are performed along the vertical. FESOM2 has the vertical as the first dimension, allowing direct addressing along the inner loop, and vectorization often becomes possible. Cache misses remain an issue in all 2D (horizontal) computations. Here, we found a way to renumber the grid nodes to reduce the number of cache misses; see http://epic.awi.de/30632/1/IMUM2011_SFC_Rakowskyetal.pdf (the code presented there, TsunAWI, can be regarded as a simplified, 2D-only branch of FESOM).”
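The layout change Rakowsky describes can be sketched with a toy flattened array (hypothetical sizes; FESOM itself is not written in Python, but the addressing pattern is the same): putting the vertical index innermost turns a strided gather into a contiguous, unit-stride read, which is what makes the inner loop cache-friendly and vectorizable.

```python
# Toy sketch of the two data layouts (hypothetical sizes, not FESOM's real grids).
n_horiz, n_levels = 6, 4                 # horizontal grid nodes, vertical layers
field = list(range(n_horiz * n_levels))  # one flattened 3D scalar field

def column_old_layout(node):
    # FESOM layout: the horizontal index varies fastest, so reading one
    # vertical column strides through memory by n_horiz (cache-unfriendly).
    return [field[layer * n_horiz + node] for layer in range(n_levels)]

def column_new_layout(node):
    # FESOM2 layout: the vertical index varies fastest, so a vertical
    # column is a contiguous slice (unit stride, vectorizable).
    start = node * n_levels
    return field[start:start + n_levels]

print(column_old_layout(2))  # stride-6 gather: [2, 8, 14, 20]
print(column_new_layout(2))  # contiguous read: [8, 9, 10, 11]
```

The values differ because each layout stores the field in a different order; the point is the access pattern, where the new layout touches adjacent memory locations in the inner (vertical) loop.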
Other optimizations in preparation are:
- reduce load imbalance through better domain decomposition (achieving a high-quality equal distribution of 2D and 3D nodes is not easy, and sea ice is not yet taken into account)
- introduce asynchronous MPI communication
- check major loops for vectorization and avoid some divisions (replacing them with precomputed inverses)
- move from serial to parallel I/O
The glaciology, bioinformatics and other research at AWI continues to generate huge amounts of data that can take advantage of the HPC resources. “For our research, we must find ways to process all of this data. Supercomputers can help us solve the issue of processing more data quickly, allowing us to do research that was not possible before,” states Harms.
Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
The post AWI Uses New Cray Cluster for Earth Sciences and Bioinformatics appeared first on HPCwire.
Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.
Happy holidays and we’ll see you next year!
— Asian Scientist (@asianscientist) December 20, 2016
— HPC Guru (@HPC_Guru) December 21, 2016
— Dave Moss (@DaveMossTechBod) December 20, 2016
— NCSAatIllinois (@NCSAatIllinois) December 21, 2016
Stampede sends season's greetings! pic.twitter.com/C1SW3DZrnW
— TACC (@TACC) December 19, 2016
— SGI (@sgi_corp) December 21, 2016
— Fernanda Foertter (@hpcprogrammer) December 20, 2016
— NCCS User News (@NASA_NCCS) December 16, 2016
— Berkeley Lab (@BerkeleyLab) December 21, 2016
— SGI (@sgi_corp) December 22, 2016
OAK RIDGE, Tenn., Dec. 22 — Jeffrey S. Vetter, a researcher at the Department of Energy’s Oak Ridge National Laboratory, has been recognized by two professional societies for his achievements in computational science.
Vetter, a researcher in the ORNL Computer Science & Mathematics Division’s Future Technologies group, was cited by the IEEE Board of Directors “for contributions to high performance computing.”
The IEEE (Institute of Electrical & Electronics Engineers) selects fewer than 0.1 percent of its voting members for elevation to fellow. Vetter’s IEEE fellowship is effective January 1.
ORNL is managed by UT-Battelle for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
LIVERMORE, Calif., Dec. 22 — Sandia National Laboratories has formed an industry-funded Spray Combustion Consortium to better understand fuel injection by developing modeling tools. Control of fuel sprays is key to the development of clean, affordable fuel-efficient engines.
Intended for industry, software vendors and national laboratories, the consortium provides a direct path from fundamental research to validated engineering models ultimately used in combustion engine design. The three-year consortium agreement builds on Department of Energy (DOE) research projects to develop predictive engine fuel injector nozzle flow models and methods and couple them to spray development outside the nozzle.
Consortium participants include Sandia and Argonne national laboratories, the University of Massachusetts at Amherst, Toyota Motor Corp., Renault, Convergent Science, Cummins, Hino Motors, Isuzu and Ford Motor Co. Data, understanding of the critical physical processes involved and initial computer model formulations are being developed and provided to all participants.
Sandia researcher Lyle Pickett, who serves as Sandia’s lead for the consortium, said predictive spray modeling is critical in the development of advanced engines.
“Most pathways to higher engine efficiency rely on fuel injection directly into the engine cylinder,” Pickett said. “While industry is moving toward improved direct-injection strategies, they often encounter uncertainties associated with fuel injection equipment and in-cylinder mixing driven by fuel sprays. Characterizing fuel injector performance for all operating conditions becomes a time-consuming and expensive proposition that seriously hinders engine development.”
Industry has consequently identified predictive models for fuel sprays as a high research priority supporting the development and optimization of higher-efficiency engines. Sprays affect fuel-air mixing, combustion and emission formation processes in the engine cylinder; understanding and modeling the spray requires detailed knowledge about flow within the fuel injector nozzle as well as the dispersion of liquid outside of the nozzle. However, nozzle flow processes are poorly understood and quantitative data for model development and validation are extremely sparse.
“The Office of Energy Efficiency and Renewable Energy Vehicle Technologies Office supports the unique research facility utilized by the consortium to elucidate sprays and also supports scientists at Sandia in performing experiments and developing predictive models that will enable industry to bring more efficient engines to market,” said Gurpreet Singh, program manager at the DOE’s Vehicle Technologies Office.
Performing experiments to measure, simulate, model
Consortium participants already are conducting several experiments using different nozzle shapes, transparent and metal nozzles and gasoline and diesel type fuels. The experiments provide quantitative data and a better understanding of the critical physics of internal nozzle flows, using advanced techniques like high-speed optical microscopy, X-ray radiography and phase-contrast imaging.
The experiments and detailed simulations of the internal flow, cavitation, flash-boiling and liquid breakup processes are used as validation information for engineering-level modeling that is ultimately used by software vendors and industry for the design and control of fuel injection equipment.
The goals of the research are to reveal the physics that are general to all injectors and to develop predictive spray models that will ultimately be used for combustion design.
“Predictive spray modeling is a critical part of achieving accurate simulations of direct injection engines,” said Kelly Senecal, co-founder of Convergent Science. “As a software vendor specializing in computational fluid dynamics of reactive flows, the knowledge gained from the data produced by the consortium is invaluable to our future code-development efforts.”
Industry-government cooperation to deliver results
Consortium participants meet quarterly to share information and provide updates.
“The consortium addresses a critical need impacting the design and optimization of direct injection engines,” Pickett said. “The deliverables of the consortium will offer a distinct competitive advantage to both engine companies and software vendors.”
Source: Sandia National Laboratories
Aaron Dubrow of the Texas Advanced Computing Center has written a brief history of NSF supercomputing efforts, appearing in the Huffington Post this week. 2016, of course, marks the thirtieth anniversary of the original NSF-backed supercomputing centers, and to mark the milestone Dubrow calls out thirty scientific advances that have been achieved through the use of NSF-supported supercomputers (the most recent eleven are bulleted below).
The five original NSF-funded SC centers, very familiar to most in the HPC community, include:
- The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign;
- The Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh;
- The San Diego Supercomputer Center (SDSC) at the University of California, San Diego;
- The Cornell Theory Center at Cornell University (later to become the Cornell University Center for Advanced Computing);
- The John von Neumann Center at Princeton University (which was discontinued after 5 years).
Here’s an excerpt from Dubrow’s article posted this week: “These centers, which celebrated their 30th anniversaries this year, have served as cornerstones of the nation’s high-performance computing and communications strategy. They helped push the limits of advanced computing hardware and software, even as they provided supercomputer access to a broad cross-section of academic researchers, enabling the study of everything from subatomic particles to the structure of the early universe.
“In the intervening years, NSF has supported new centers and university programs — including the Texas Advanced Computing Center (TACC) at the University of Texas at Austin and the National Institute for Computational Sciences (NICS) at the University of Tennessee, Knoxville — as well as major programs at Indiana University, Purdue, Rice University and many other leading research institutions.”
Among the advances noted by Dubrow are these eleven:
- 2006: Team led by University of Illinois researcher Klaus Schulten simulates an entire life form for the first time. (NCSA)
- 2007: Astrophysicist Volker Bromm and his team model the first billion years of the universe, shedding light on the cosmic past and future. (TACC)
- 2008: During Hurricane Ike, researchers use the Ranger supercomputer to develop storm surge forecasts and safeguard coastal communities. (TACC)
- 2009: Researchers use advanced computing to show how individual social security numbers can be guessed from public information on the Web. (PSC)
- 2010: Researchers aid oil spill containment effort after Deep Water Horizon explosion using satellite and supercomputing technologies. (TACC, LONI)
- 2011: Extreme Science and Engineering Discovery Environment (XSEDE) awarded $121 million by NSF to bring advanced cyberinfrastructure, digital services and expertise to the nation’s scientists and engineers. (NSF)
- 2012: University of Illinois researchers use PSC systems to show how large-scale traders used small stock purchases to game the system; discovery leads to rule changes in the NYSE and NASDAQ exchanges. (PSC)
- 2013: 3D image data enables University of South Carolina researchers to create patient-specific tissue structures. (NICS)
- 2014: A widely published global genome study using XSEDE resources and expertise shows how avian lineages diverged after the extinction of dinosaurs. (SDSC, NICS, TACC)
- 2015: Wake Forest researchers publish virtual crash test study, helping auto manufacturers design safer vehicles and restraint systems. (PSC)
- 2016: XSEDE resources from TACC and SDSC help confirm discovery of gravitational waves by Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors.
Link to his Huffington Post article: http://www.huffingtonpost.com/entry/three-decades-of-making-impossible-research-possible_us_58501c95e4b0016e50430775
The post Retrospective: NSF-funded SCs Make Impossible Research Possible appeared first on HPCwire.
Dec. 22 — According to a new research report, “High Performance Computing Market by Components Type (Servers, Storage, Networking Devices, & Software), Services, Deployment Type, Server Price Band, Vertical, & Region – Global Forecast to 2020”, the High Performance Computing (HPC) market is estimated to grow from USD 28.08 billion in 2015 to USD 36.62 billion by 2020, at a Compound Annual Growth Rate (CAGR) of 5.45% during the forecast period.
The HPC market is growing as it interests all kinds of businesses, with the most common end users of these systems being researchers, scientists, engineers, educational institutes, government and the military, all of whom rely on HPC for complex applications. However, HPC is not limited to these verticals or departments; it is also gaining traction among enterprises.
“HPC Servers to gain market prominence by next five years”
By component type, HPC servers are expected to hold the largest market share during the forecast period. Almost half of the total HPC market revenue is contributed by servers, and the trend is expected to continue during the entire forecast period. The significant rise in the usage of servers is driven by the growth of increasingly complex applications requiring large clusters to solve complex scientific and mathematical problems.
“The APAC HPC market is expected to be the fastest-growing region”
Considering the regional trends of the HPC market, North America is projected to hold the largest market size. The market in APAC is in its growth phase and is the fastest-growing region of the global HPC market. This is mainly attributed to market players’ growing focus on the significant opportunities in this region, along with its increasing number of data centers and developing infrastructure. On the other hand, the Latin America region is in the introductory phase in terms of adoption of HPC solutions.
Various companies are coming up with innovative and efficient HPC solutions and services, and they have opportunities in this market due to the global need for improved management and business operations. The major players offering HPC solutions and services include AMD, Intel, SGI, Dell, Atos SE and Fujitsu, among others.
The scope of the report covers detailed information regarding the major factors influencing the growth of the HPC market, such as drivers, restraints, opportunities, and challenges. A detailed analysis of the key industry players has been done to provide insights into their business overview, products and services, key strategies, new product launches, partnerships, collaborations, expansions, and competitive landscape associated with the HPC market.
More information can be found here.
MarketsandMarkets is the world’s No. 2 firm in terms of annually published premium market research reports. Serving 1,700 global Fortune enterprises with more than 1,200 premium studies a year, M&M caters to a multitude of clients across eight different industrial verticals. We specialize in consulting assignments and business research across high-growth markets, cutting-edge technologies and newer applications. Our 850 full-time analysts and SMEs at MarketsandMarkets track global high-growth markets following the “Growth Engagement Model – GEM”. The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write “attack, avoid and defend” strategies, and identify sources of incremental revenue for both the company and its competitors.
The post New Report Says HPC Market Expected to Reach $36.62 Billion by 2020 appeared first on HPCwire.
Dec. 21 — Australia’s national research computing facility, National Computational Infrastructure (NCI), has become the first Australian organisation to join the OpenPOWER Foundation, a global open technical community enabling collaborative development and industry growth. NCI has additionally purchased four of IBM’s latest Power System servers for High Performance Computing (HPC) to underpin its research efforts through artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads.
The news means NCI will for the first time introduce an open architecture solution and IBM Power Systems for HPC technology into its data centre, providing increased flexibility, optimisation, efficiency and a bespoke solution that directly supports its needs.
Today’s announcement follows a collaborative development process with the IBM Australia Development Laboratory (ADL) and its Linux and Open Technology team, based in Canberra. The ADL provides OpenPOWER development capability and locally develops IBM’s Power System firmware. NCI’s decision to purchase the new IBM Power System servers was strongly influenced by its direct access to the local IBM Power Systems development team.
NCI provides world-class services to Australian researchers, industry and government. A wide range of applications are run on NCI to support crucial national research projects, including climate and weather modelling, satellite data for environmental monitoring and genomics research. NCI will use the initial four IBM Power Systems HPC servers to run its top five graphics processing unit (GPU) based workloads to assess their performance.
“To be the first ever Australian organisation to join the OpenPOWER Foundation provides recognition of NCI’s standing, and represents a step toward a more heterogeneous architecture,” said Allan Williams, Associate Director (Services and Technologies), NCI.
“Having the local IBM Power development team at our fingertips in Australia and being able to work with them in a truly collaborative fashion was critical to our decision to purchase the new IBM Power System S822LC for HPC servers. The new Power architecture provides the ideal infrastructure for GPU-based workloads.”
Released in September 2016, the new series of IBM servers is designed to help propel AI and cognitive workloads and to drive greater data center efficiency. Featuring a new chip, the open-architecture lineup incorporates innovations from the OpenPOWER community that deliver higher levels of performance and greater computing efficiency than available on any x86-based server.
“In order to tackle the challenges of today’s world – from cancer to climate change – organisations need accelerated computing that can drive big data workloads. NCI plays a critical role in supporting some of Australia’s largest research projects, and this new system and architecture will be key for it to achieve higher levels of performance and greater computing efficiency,” said Mike Schulze, Director, IBM Australia Development Laboratory.
The post NCI Becomes First Australian Organization to Join OpenPOWER Foundation appeared first on HPCwire.
Some years quietly sneak by – 2016 not so much. It’s safe to say there are always forces reshaping the HPC landscape but this year’s bunch seemed like a noisy lot. Among the noisemakers: TaihuLight, DGX-1/Pascal, Dell EMC & HPE-SGI et al., KNL to market, OPA-IB chest thumping, Fujitsu-ARM, new U.S. President-elect, BREXIT, JR’s Intel Exit, Exascale (whatever that means now), NCSA@30, whither NSCI, Deep Learning mania, HPC identity crisis…You get the picture.
Far from comprehensive and in no particular order – except perhaps starting with China’s remarkable double helping atop the Top500 List – here’s a brief review of ten 2016 trends and a few associated stories covered in HPCwire along with occasional 2017 SWAGs. By the way, we hope you are enjoying HPCwire’s new look; it’s still a work in progress and intended to add impact, functionality, and mobility.
1. Make Way for Sunway
China’s standing up of the 90-plus Petaflops Sunway (TaihuLight) on the June Top500 List caused a lot of consternation outside China including too many dismissive comments: It was this. It wasn’t that. It’s a trick. Never happened. I like Thomas Sterling’s characterization in his ISC16 closing keynote. To paraphrase (and with apologies to Thomas) – Get over it and enjoy the engineering triumph for a moment.
Are there legitimate questions? Sure. Nevertheless, built with homegrown processors as well as other components, TaihuLight marks China’s breakout in terms of going it alone. The recent U.S. election seems poised to heap more fuel on China’s inner fire for control over its technology destiny. China’s Tianhe-2 (34 Pflops) is still solidly in the number two spot. China and the U.S. each had 171 computers on the most recent Top500, the first time for such parity. HPCwire managing editor Tiffany Trader’s piece on China’s standing up of Sunway looks at the achievement, and her second piece from SC16 looks at the new rivalry (links below).
2. GPU Fever
In April NVIDIA announced its latest and greatest GPU, the impressive P100 (Pascal architecture), and its DGX-1 development box. It promptly stood up a clustered version of the latter – the DGX SATURNV (124 DGX-1 servers, each with eight P100s) – which placed 28th on the November 2016 Top500. Clearly the rise of GPU computing is continuing and gaining strength. Indeed, the rise of accelerator-assisted heterogeneous computing is broadly gaining momentum. The SATURNV was also remarkably energy-efficient at 8.17 gigaflops/watt.
Given the rapid emergence of deep learning in all of its forms – and yes, training is generally quite different than inferencing – it seems very likely the GPU craze will continue, including various mixed-precision offerings for better DL performance. Intel, IBM, ARM, pick-your-favorite company: all are in the accelerated computing hunt. FPGAs. SoCs. Soon neuromorphic options. Right now, NVIDIA is the clear leader. Here are two looks at the Pascal and the DGX-1.
3. Who’s Who in HPC
This is a question, not the usual introduction to a list of prominent players. Mergers and acquisitions are reshaping the technology supplier world (again) and at the largest scales. Does that mean we’re in a period of consolidation? HPE buys SGI. Dell ‘buys’ EMC. SoftBank acquires ARM. Mobile chip king Qualcomm’s $39B purchase of automotive chip supplier NXP is being billed as the largest semiconductor deal ever. (One wonders how Cray will fare in a land of giants. Cray has cutting-edge supercomputing technology – is that enough?) There were many more deals, but here’s a look at three majors.
Let’s start with HPE-SGI. HPE, of course, is itself the result of venerable Hewlett-Packard’s split-up a year ago. SGI was kept alive by a Rackable deal some years ago. Pairing the two creates a giant with strength at all levels of HPC, with HPE seeking, among other things, to leverage SGI’s shared memory technology, enterprise HANA platforms, and high-end supercomputer offerings. We’ll see. Here are two looks at this most recent of deals (closed in November).
Dell EMC. Has a certain ring to it. Dell, now private, is looking to leverage scale – always a Dell goal – as well as add technologies needed to offer complete compute-storage solutions. Both Dell and EMC were already juggernauts, and they now declare that Dell EMC’s combined revenues make it the top provider in HPC by revenue, displacing HPE (servers). Like most system suppliers, Dell EMC is increasingly focused on “packaged HPC solutions,” to the extent such a thing is possible, to help drive advanced scale computing adoption in the enterprise. Combined revenue is expected to be in the $70-plus billion range.
Here are two brief articles examining Dell EMC plans.
Perhaps more surprising was the SoftBank acquisition of ARM. The worry was that SoftBank might meddle in ARM’s openness, but that doesn’t seem to be the case. Indeed, ARM, like others, seems to be gaining some momentum. Fujitsu, of course, is switching from Sparc to ARM for its post-K computer, and around SC16 ARM announced it is being supported in the latest OpenHPC stack. Then just a week or so ago, ARM announced plans to purchase toolmaker Allinea. That move suggests SoftBank is making good on its promise to infuse new resources.
On the technology front, ARM introduced its Scalable Vector Extensions (SVE) for HPC in August which provides the flexibility to implement vector units with a broad choice of widths (128 to 2048 bits at 128-bit increments); applications can be compiled once for the ARMv8-A SVE architecture, and executed correctly without modification on any SVE-capable implementation, irrespective of that implementation’s chosen SVE vector width. This is another arrow in ARM’s HPC quiver and integral to Fujitsu’s plans to use the processor.
Market signals from ARM chip suppliers have been a bit more mixed and it will be interesting to watch ARM traction in 2017, not least traction in China. Here are three articles looking at ARM’s progress and that SoftBank purchase.
4. Containers Start Penetrating HPC
This is perhaps not an earth-shaking item, but it is nevertheless important. Virtualization technologies are hardly new, and containers, mostly from Docker, have been sweeping through the enterprise. HPC has taken the hint with a pair of offerings introduced recently – Singularity and Shifter – which may help individual researchers and small research teams gain easier access to NSF resources.
Shifter was developed first (2015) at NERSC for use with the Edison supercomputer. Singularity (2016) came out of Lawrence Berkeley National Laboratory and has aspirations of making the use of containers possible across many leadership-class computers, at centers such as TACC and SDSC for example. Gregory Kurtzer (LBNL) is Singularity’s leader. The idea is simple but not so easy to execute on leadership-class computers: create ‘containers’ in which researchers can put their complete application environment and reproducibly run it anywhere Singularity is installed. Adoption has been surprisingly fast, says Kurtzer.
Here’s an interview by Trader with Kurtzer from this past fall on Singularity progress as well as a backgrounder on Shifter.
5. Deep Learning – The New Everything
Earlier this year, Google DeepMind’s AlphaGo platform defeated one of the world’s top Go players. Just training the system took a couple of years. No matter. Expectations are crazy high for DL and machine learning re: autonomous driving, precision medicine, recommender systems of all varieties, authentication and recognition (voice and image), etc. We are clearly just at the start of the DL/ML era. Let’s leave aside true AI for the moment, and all the scary and hopeful voices surrounding it.
Deep learning and machine learning are doing productive things now. Sometimes it’s as simple as complicated pattern recognition (I enjoyed writing that). Other times it is blended with traditional simulation & modeling applications to speed and refine the simulation computation. In many cases it’s the only way to realistically make sense of huge datasets.
The DL language is all over the place. Vendors and users alike seem to have nuanced preferences for AI over DL over ML over cognitive computing. There’s also a growing froth of technologies competing for sway: GPUs, FPGAs, DSPs, brain-inspired architectures. If SC15 last year seemed like a loud pivot towards driving HPC into the enterprise, SC16 was nearly totally focused on DL and data analytics of one or another sort – chip makers, systems builders, job schedulers and provisioning tools all seemed to be chanting the same (or at least similar) verse.
Here are six articles from the 2016 archive on various aspects of DL (use, technology, etc.).
6. Murky Public Policy
So much of HPC depends on public policy and government funding. The change in administration muddies what was already pretty murky water. One example is tentative new Secretary of Energy Rick Perry’s various slips of the tongue over what the agency is, and his previously expressed desire to abolish the DoE. Hmm. Not sure he had the agency’s full compass in mind when he spoke. Anyway, the conventional wisdom is that Washington’s emphasis will shift from R&D to decidedly D and defense-related work.
One initiative that’s been hanging in the wind through 2016 is NSCI, the National Strategic Computing Initiative. Greeted with enthusiastic fanfare at its inception by Presidential Executive Order at the end of July 2015, later regarded somewhat skeptically because of inaction, and now mostly reborn as an umbrella label for pre-existing programs – it is easy to wonder if NSCI will survive. No doubt many of its pre-existing components will, and one hopes several of the newer goals, such as HPC workforce expansion, will also have legs (and funding).
The DoE Exascale project – which now, perhaps rightfully, takes pains to set useful exascale-level computing rather than a numeric LINPACK score as its goal – has accelerated its schedule somewhat. The first machine is now expected in 2021. There have been suggestions more changes are coming, so it is perhaps premature to say much. Maybe more funding or urgency will follow, given that the international landscape seems increasingly dominated by nationalistic agendas versus regional cooperation. China’s ascent may also trigger more HPC R&D spending…or not.
The extent to which NSCI frames broad goals for HPC advancement makes it an interesting, though perhaps fading, indicator and wish list. Here are five articles looking at NSCI and the U.S. Exascale Project, including a recent interview with Paul Messina, director of U.S. exascale efforts.
7. How’s ‘HPC’ Business
While we’re talking policy, it’s worth looking at the business (and broad technology) climate. IDC is one of the key keepers of the notes here – remember the saying that he/she who keeps the notes has power – and IDC reports 2016 was a good year and 2017 looks better. 2016 clocked in at roughly $11.6B (server revenue), with 2017 projected at $12.5B, nice growth.
Speaking at the SC16 IDC breakfast update, Bob Sorensen, VP Research, noted, “The HPC universe is expanding in ways that are not directly observed…because we haven’t quite decided what the definition of HPC should be.” He pointed to work being done with new hardware and software for deep learning: “From the training phase, the computationally intensive part where you go and train a deep neural network to understand a tough problem – exaflops regression kind of training,” involving GPUs, (Xeon) Phis and FPGAs.
The times they are a changin’. Here at HPCwire the ‘what-constitutes-HPC-today’ discussion has been especially vigorous this year. Here are IDC’s two market updates from SC16 and ISC16, along with a ten-year retrospective from Addison Snell, CEO, Intersect360 Research, which notched its ten-year anniversary right around the SC16 timeframe. (Congrats Addison!)
8. Surveying the Tronscape
Turning from business to the technology landscape, it’s worth noting there are few speakers better able to capture and prioritize the breadth of HPC technology at any given moment than Thomas Sterling, director of CREST – though he would demur from any such notion. His annual closing keynote at ISC is substantive and entertaining, and perhaps casts a somewhat wider and more technology-in-the-trenches net than the items called out here earlier.
Here’s an account of Sterling’s ISC16 talk. It’s a fast read and well worth the effort. Typically Thomas, he covers a lot of ground and with clarity.
9. OpenPOWER Pushes Toward Liftoff
2017 has to be IBM and OpenPOWER’s liftoff year. Last year HPCwire put a rather harsh lens over the effort, describing its many challenges:
“2016 promises to be pivotal in the IBM/OpenPOWER effort to claim a non-trivial chunk of the Intel-dominated high-end server landscape. Big Blue’s stated goal of 20-to-30 percent market share is huge. Intel currently enjoys 90-plus percent share and has seemed virtually unassailable. In an ironic twist of the old mantra ‘no one ever got fired buying IBM’ it could be that careers at Big Blue rise or fall based upon progress,” I wrote.
There are now several Power8 and Power8+ (NVLink) systems available. IBM has optimized one system (PowerAI) for DL/ML, everyone’s darling target, and worked with HPC cloud provider Nimbix to put the latest Power technology in its cloud, including tools to make using it easier. The Power9 roadmap has been more fully described, and the first Power9 chips are expected this year, including support of the CORAL project. There are quite a few more items checked off on the IBM/OpenPOWER done list.
HPCwire will again review IBM/OpenPOWER’s progress early in 2017, but based on a conversation with Ken King and Brad McCredie at SC16, the pieces of the puzzle – technology, product, channel, growing interest from hyperscalers, lower-price-point products from partners, continuing investment in advances such as OpenCAPI – are in place. Sales are what’s needed next. Below is a link to last year’s broad article and one describing the PowerAI offering.
10. Marvelous Marvin Remembered
Before ending it is good to recall that last January (2016), artificial intelligence pioneer Marvin Minsky died at age 88 – a sad way to start a year so thoroughly dominated by discussion around AI precursors deep learning, machine learning, and cognitive computing (whatever your preferred definition).
The New York Times obituary by Glenn Rifkin is well worth reading. Here’s a brief excerpt: “Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.”
11. Bonus
What would an end-of-year article be without a couple of plaudits? Bill Gropp was elevated to Acting Director of NCSA and also won the 2016 Ken Kennedy Award. Well deserved. NCSA celebrated turning thirty. James Reinders left Intel, making us all wonder where he will reappear and in what capacity. Here’s his brief farewell published in HPCwire. There are many more, but we’ll stop here.
Quantum and neuromorphic computing efforts kept gaining momentum. Two big neuromorphic computing systems were stood up in the spring, and IBM sold a TrueNorth-based system to LBNL for collaboration. The quantum picture still seems a bit fuzzy to me, but Bo Ewald shows no slowdown in evangelizing (article), and Los Alamos National Lab is racing to develop a broader range of applications for its D-Wave machine. The Center for Advanced Technology Evaluation (CENATE) at PNNL ramped up fast (update here) and will hold its first workshop in 2017.
Less of a bonus trend and more under the expected label, Knights Landing (KNL) systems began cropping up everywhere in the second half of the year (Intel’s SC16 recap). The Omni-Path vs. InfiniBand competition continued in force. Seagate continued its drive into HPC; DataDirect Networks maintained its strength at the high end, continued its push into the enterprise, and adopted a software-defined strategy with vigor. OpenHPC delivered version 2.0 of its open source HPC stack and tools and now supports ARM as well as x86. One wonders if IBM will take the plunge.
There are many more important trends warranting notice here – the growth of NSF cyberinfrastructure, for example – but sadly my deadline is here too. See you in 2017.
BEAVERTON, Ore., Dec. 21 — The OpenFabrics Alliance (OFA) has published a Call for Sessions for its 13th annual OFA Workshop, taking place March 27-31, 2017 in Austin, TX. The OFA Workshop is a collaborative event designed to generate lively exchanges among OFA members, developers, users, and research and business professionals who share a vested interest in high performance networks. The Alliance has also opened early bird registration for the workshop. For more information and to support the OFA Workshop 2017, visit the event website.
Call for Sessions:
The OFA Workshop 2017 Call for Sessions encourages industry experts to spark lively event discussions by presenting on critical high performance networking issues. Sessions are designed to educate attendees on current development opportunities, troubleshooting techniques, and disruptive technologies affecting the deployment of high performance computing environments.
The deadline to submit session proposals is February 3, 2017 at 5:00 p.m. PST. For a list of recommended session topics, formats, and submission instructions, download the official OFA Workshop 2017 Call for Sessions flyer.
Early bird registration is now open for all participants of the OFA Workshop 2017. For more information on event registration and lodging, visit the registration webpage.
- Dates: March 27-31, 2017
- Location: Hyatt Regency, Austin, TX
- Registration Site: http://bit.ly/OFA2017REG
- Registration Fee: $595 (Early Bird to February 13, 2017), $695 (Regular)
- Lodging: Hyatt Regency room discounts available until March 6, 2017.
About the OpenFabrics Alliance
The OpenFabrics Alliance (OFA) is a 501(c)(6) non-profit company that develops, tests, licenses and distributes the OpenFabrics Software (OFS) – multi-platform, high performance, low-latency and energy-efficient open-source RDMA software. OpenFabrics Software is used in business, operational, research and scientific infrastructures that require fast fabrics/networks, efficient storage and low-latency computing. OFS is free and is included in major Linux distributions, as well as Microsoft Windows Server 2012. In addition to developing and supporting this RDMA software, the Alliance delivers training, workshops and interoperability testing to ensure all releases meet multivendor enterprise requirements for security, reliability and efficiency. For more information about the OFA, visit www.openfabrics.org.
Source: OpenFabrics Alliance
The post Call for Sessions and Registration Now Open for 13th Annual OpenFabrics Alliance Workshop appeared first on HPCwire.
Google researcher Moritz Hardt and colleagues have developed an approach for testing whether machine learning algorithms inject bias, such as gender or racial bias, into their decisions. There has been worry for some time that ML algorithms might deliberately or inadvertently inject bias in applications spanning advertising, credit, employment, education, and criminal justice.
The paper, Equality of Opportunity in Supervised Learning, is written by Hardt with colleagues Eric Price (University of Texas, Austin) and Nathan Srebro (University of Chicago) and posted on arXiv.org. Back in 2014, the Obama Administration’s Big Data Working Group released a report arguing that discrimination can sometimes “be the inadvertent outcome of the way big data technologies are structured and used” and pointed toward “the potential of encoding discrimination in automated decisions.”
The authors note, “Despite the demand, a vetted methodology for avoiding discrimination against protected attributes in machine learning is lacking. A naive approach might require that the algorithm ignore all protected attributes such as race, color, religion, gender, disability, or family status. However, this idea of ‘fairness through unawareness’ is ineffective due to the existence of redundant encodings, ways of predicting protected attributes from other features.”
The group’s work “depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.”
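The paper’s central criterion, equalized odds, has a concrete operational reading: a predictor’s true-positive and false-positive rates should match across groups defined by the protected attribute. A minimal sketch of that check (illustrative code and toy data, not the authors’ implementation):

```python
# Sketch of an "equalized odds" audit in the spirit of Hardt et al.:
# a predictor is fair under this notion when its true-positive rate
# (among y == 1) and false-positive rate (among y == 0) are equal
# across protected groups. The function names and data are invented
# here for illustration.

def positive_rate(preds, labels, on_label):
    """Share of examples whose true label equals on_label that were predicted 1."""
    selected = [p for p, y in zip(preds, labels) if y == on_label]
    return sum(selected) / len(selected) if selected else 0.0

def equalized_odds_gap(preds, labels, groups):
    """Worst-case difference in TPR or FPR across groups (0.0 means fair)."""
    gap = 0.0
    for on_label in (0, 1):  # 0 -> compare FPRs, 1 -> compare TPRs
        rates = [
            positive_rate(
                [p for p, g in zip(preds, groups) if g == grp],
                [y for y, g in zip(labels, groups) if g == grp],
                on_label,
            )
            for grp in sorted(set(groups))
        ]
        gap = max(gap, max(rates) - min(rates))
    return gap

# Toy data: group "b" enjoys a much higher true-positive rate than group "a".
preds  = [1, 0, 0, 0, 1, 1, 0, 0]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["a"] * 4 + ["b"] * 4
print(equalized_odds_gap(preds, labels, groups))  # → 0.5
```

Note that this audit needs only the predictions, true outcomes, and group labels, which matches the authors’ point that their test “depends only on the joint statistics of the predictor, the target and the protected attribute.”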
Details of the approach can be found in the paper: https://arxiv.org/abs/1610.02413
The post Google Develops Method for Testing ML Algorithms for Bias appeared first on HPCwire.
Dec. 20 — Boston IT Solutions is attending the 23rd IEEE International Conference on High Performance Computing (HiPC Hyderabad 2016). At stand #4, the company will be showcasing a selection of innovative HPC solutions including deep learning, cloud and Lustre systems.
Solutions exhibited include:
vScaler
vScaler is a hyperconverged solution for simplifying data center infrastructure, integrating and delivering server, storage, and networking resources at the click of a mouse. With a deployment period of as little as 15 minutes, a variety of applications at any scale can be tested and deployed swiftly on this turnkey appliance. Powered by OpenStack technology, the vScaler platform enables organizations to quickly deploy scalable, production-ready private cloud environments that can interoperate with other public clouds.
Boston Venom 1501-0T (KNL Development Kit)
Keeping the requirements of developers in mind, Boston in partnership with Supermicro designed the Boston Venom 1501-0T, KNL Development Kit. This kit comes integrated with an Intel Xeon Phi x200 Processor, supporting up to a staggering 72 cores in a single socket, each performing at up to 1.7GHz (with Turbo Boost). One such highly scalable processor can provide 3 trillion floating point operations per second. It also includes 16GB of high-speed MCDRAM on-package memory, which can act as cache or be used in conjunction with legacy DDR4 DRAM for enhanced performance. Further capacity can be added with traditional DDR4 ECC LRDIMMs, enabling a total of up to 384GB for memory-hungry applications. An integrated liquid cooling solution and six low-noise cooling fans keep the processor and other components at optimal temperature and your working environment quiet.
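The 3-teraflop figure follows from straightforward peak-rate arithmetic. A quick back-of-envelope sketch, assuming the top-bin Xeon Phi 7290’s published 1.5 GHz base clock and double-precision AVX-512 FMA throughput (the announcement itself doesn’t give the breakdown):

```python
# Peak double-precision rate for a 72-core Knights Landing part.
# Each core has two AVX-512 vector units; each unit can retire one
# 8-wide double-precision fused multiply-add (2 flops per lane) per cycle.
# The 1.5 GHz base clock is an assumption (turbo is higher).

cores = 72
base_clock_hz = 1.5e9        # assumed base frequency
vpus_per_core = 2            # AVX-512 vector units per core
dp_lanes = 8                 # 512 bits / 64-bit doubles
flops_per_fma = 2            # one multiply + one add per lane

peak_dp_flops = cores * base_clock_hz * vpus_per_core * dp_lanes * flops_per_fma
print(f"{peak_dp_flops / 1e12:.2f} TFLOPS")  # → 3.46 TFLOPS
```

That works out to roughly 3.5 double-precision teraflops at base clock, consistent with the release’s “3 trillion floating point operations a second” claim.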
Boston ANNA Pascal (Artificial Neural Network Accelerator)
This latest deep learning platform from Boston Limited is an industry leading, latest-generation GPU based server and a master class of server design and innovation. NVIDIA Tesla P100 GPUs power the Boston ANNA Pascal to deliver the highest absolute performance for HPC and deep learning workloads with infinite computing needs.
The streamlined design eliminates complex cabling and GPU pre-heat for maximum airflow, cooling and the highest level of performance per watt. The high-density 1U server includes 2 x PCI-E Gen 3 slots for InfiniBand to enable strong RDMA performance and can support up to 4x GPUs, making it an optimal system for scalable GPU accelerator bound applications where density matters.
NVLink provides a high-speed 80Gb/s interconnect path wholly devoted to peer GPU-to-GPU connections. Combining this with RDMA via InfiniBand or Omni-Path provides an extremely powerful parallel computing environment, delivering a significant increase in performance compared to the previous-generation K80 GPUs.
The reimagined appliance is crafted with innovation at each level from silicon to software, with over 5 ground-breaking technologies providing a dramatic jump in performance up to a 12X leap in neural network training, thus reducing training time from weeks to just hours.
Boston DataScaler-L SFF (Lustre appliance)
The Boston DataScaler-L SFF appliance brings all the features of Intel Enterprise Edition for Lustre into a 4U form factor. This provides a lower cost/capacity entry point for Lustre storage systems. The DataScaler-L SFF is optimized for small or mid-sized clusters, with capacities as low as 18TB (scaling to 720TB).
The Boston DataScaler-L SFF has been developed and engineered in response to customer demand for higher-performance HPC solutions at a more attractive price point.
Manoj Nayee, Managing Director, Boston IT Solutions says, “Attending HiPC is an opportunity for Boston to showcase and launch our newest solutions, in partnership with Supermicro, to the HPC, data and analytics markets. All of the solutions on show are available immediately and have a variety of use-cases. I encourage attendees to visit our booth to discover more.”
About Boston Limited
Boston Limited has been providing first-to-market technology to a diverse client base since 1992. Boston’s high performance, mission-critical server, storage and workstation solutions can be tailored for every client. With expertly trained engineers and a dedicated R&D labs facility, we are able to fully customize the specification, design and branding of a solution in order to help clients solve their toughest business challenges simply and effectively. Since its founding in London, UK, Boston has expanded operations globally. Following the successful launch of Boston IT Solutions India in 2009, Boston launched Boston Server & Storage Solutions GmbH in Germany a year later, with offices opening on the East Coast of America in 2013. For more information about Boston Limited (India), please visit http://www.bostonindia.in and follow @bostonindia on Twitter.
Source: Boston IT Solutions (India)
The post Boston IT Solutions Showcases Array of HPC Offerings at HiPC Hyderabad 2016 appeared first on HPCwire.
Dec. 20 — Today, Green Revolution Cooling (GRC), a pioneer and leader in the immersion cooling market, announces a strategic partnership with Heat Transfer Solutions (HTS), the largest independent HVAC manufacturers’ representative in North America. As part of the partnership, HTS is making a financial investment in GRC which will provide growth capital as the company continues to expand its presence in the data center market.
“We’re excited about the opportunity to provide GRC with the long-term financial support required to grow the company to become an enabler for our customers’ sustainability goals,” HTS Principal Derek Gordon said. “With our experience designing custom HVAC solutions for data center markets, HTS understands the complications of traditional data center cooling, and sees the value in the proven technology developed by GRC.”
To lead GRC into its next stage of development, Peter Poulin has been appointed as the company’s new CEO. Poulin is a 30-year IT industry veteran, having spent the first half of his career in various sales, marketing, and general management roles at Compaq Computer Corporation. He has also served as the VP of North American Sales & Marketing for APC both prior to, and after, the acquisition of APC by Schneider Electric. Over the last 15 years, Poulin has focused on helping small companies drive growth and navigate transformational changes, including, as CEO of Motion Computing, leading the sale of the company’s product lines to Xplore Technologies.
The appointment of Poulin enables Christiaan Best, founder and former CEO, to transition to the role of company CTO, where he will focus on custom designs and expanding GRC’s product portfolio with additional patented technologies.
“With the explosion of data center density and capacity requirements, driven by IoT, Big Data, and HPC trends, our customers are increasingly challenged with reducing their carbon footprint, rapidly expanding capacity, and quickly deploying compute power to the edge of the network, where environmental conditions may not be compatible with traditional data center infrastructure requirements,” said Poulin. “Our customers are experiencing material reductions in capital costs, deployment times, and energy costs attributable to our CarnotJet systems. I am excited to be with a leader that is protecting both our customers’ computing assets and our planet’s environment.”
HTS is the largest independent commercial HVAC manufacturers’ representative in North America. The company represents more than 100 HVAC manufacturers and employs approximately 600 employees (engineers, technicians and support staff) in 16 cities across Canada and the United States. Delivering Real Success to all involved in its projects, HTS provides HVAC and refrigeration solutions to commercial, institutional, data center, and industrial markets from leading manufacturers such as Daikin, Epsilon, AcoustiFLO, and Haakon Industries. For more information about HTS, visit http://www.hts.com/ or connect via LinkedIn, Twitter, and Facebook.
Green Revolution Cooling is a pioneer and leader in the liquid immersion cooling market for data centers. GRC’s CarnotJet System, a rack-based immersion cooling system for servers, uses a mineral oil based dielectric coolant that eliminates the need for chillers, air conditioners, and air handlers, thereby helping cut data center construction costs by up to 60% while reducing data center cooling energy by up to 95%. GRC’s solutions have helped some of the largest cloud, HPC, and telecom organizations build extremely efficient, cost effective, and resilient data centers across the globe. Visit www.grcooling.com for more information.
The post Green Revolution Cooling Secures Partnership With HTS appeared first on HPCwire.
We caught up with Addison Snell, CEO of HPC industry watcher Intersect360, at SC16 last month, and Snell had his expected, extensive list of insights into trends driving advanced-scale technology in both the commercial and research sectors.
Possibly the most significant trend is what Snell calls “technology disaggregation,” the proliferation of alternative processing architectures along with an array of new storage, fabric and communications technologies. Taken together, these technologies have created a “Wild West” environment and, compared with the relatively straightforward times of the past, “the worst of all worlds,” Snell said, for application programmers and systems administrators.
Snell also shared his views on the democratization of HPC, along with some of the more notable M&A activity in the industry of late. He said he was impressed by Microsoft Azure, putting it in “the most improved” category over the past year, jumpstarted by its adoption of Linux and by populating instances with GPUs for AI and machine learning.
Finally, Snell offered up his views on HPC winners and losers under the approaching administration of President Trump, as well as his thoughts on the U.S. drive to exascale. The interview was conducted by Doug Black, managing editor of EnterpriseTech, HPCwire’s sister publication.
The post Addison Snell: The ‘Wild West’ of HPC Disaggregation appeared first on HPCwire.
Allinea Software, whose cross-platform development and performance analysis tools are used by 80 percent of the world’s top 25 supercomputers, has been acquired by ARM Ltd., which said the move strengthens its HPC offering for both scientific and business computing by extending its portfolio of development tools for the HPC, machine learning and data analytics markets.
The acquisition could signal broadening acceptance of alternative processors used in advanced scale computing environments. While ARM processors are typically used in mobile devices because of their smaller size, reduced complexity and lower power consumption, the architecture is being aggressively pushed into server markets by ARM processor vendors, such as Applied Micro, Cavium, and Qualcomm.
“It’s self-evident that HPC – and, more widely, parallel and distributed computing – is at a fascinating, exciting point,” said Allinea Founder and CEO David Lecomber in a blog post, noting that his organization will integrate with the ARM HPC compiler and libraries engineering teams within ARM’s Development Solutions Group. “Today we can see that the reach of ‘our kind of computing’ is no longer the preserve of scientific research.”
Allinea tools support multiple CPU architectures used in HPC environments, and its customers include the US Department of Energy, NASA, supercomputing national labs and universities, and private companies using HPC systems for scientific computation. The tools help developers deal with systems ranging from hundreds to hundreds of thousands of cores. The product suite includes the developer tool suite Allinea Forge, which incorporates an application debugger called Allinea DDT and a performance analyzer called Allinea MAP, and an analysis tool for system owners, users and administrators called Allinea Performance Reports.
ARM said the acquisition reflects the company’s long-term growth strategy in HPC and builds on ARM’s recent success with Fujitsu’s 64-bit ARMv8-A powered Post K supercomputer, and the launch of the ARMv8-A Scalable Vector Extension. It follows the announcement that ARMv8-A will be the first alternative architecture with support for the OpenHPC, the Intel-led consortium of the Linux Foundation, and the release of ARM Performance Libraries for software development and portability to ARMv8-A server platforms.
“Writing and deploying software that exploits the ever increasing computing power of clusters and supercomputers is a demanding challenge – it needs to run fast, and run right, and that’s exactly what our suite of tools is designed to enable,” said Lecomber. “As part of ARM, we’ll continue to work with the HPC community, our customers and our partners to advance the development of our cross-platform technology, and take advantage of product synergies between ARM’s compilers, libraries and advisory tools and our existing and future debugging and analysis tools.”
“As systems and servers grow in complexity, developers in HPC are facing new challenges that require advanced tools designed to enable them to continue to innovate,” said Javier Orensanz, general manager, development solutions group, ARM. “Allinea’s ability to debug and analyze many-node systems is unique, and with this acquisition we are ensuring that this capability remains available to the whole ARM ecosystem, and to the other CPU architectures prevalent in HPC, as well as in future applications, such as artificial intelligence, machine learning and advanced data analytics.”
The post Targeting HPC and AI, ARM Acquires Tools Vendor Allinea appeared first on HPCwire.
SANTA FE, NM, Dec. 16 — Flow Science, Inc. has announced that it will hold its 17th annual FLOW-3D European Users Conference on June 5-7, 2017 in Barcelona, Spain, at the Avenida Palace Hotel. The conference will be co-hosted by Simulaciones y Proyectos, the official distributor of FLOW-3D products in Spain and Portugal.
The conference is open to FLOW-3D, FLOW-3D Cast and FLOW-3D/MP users and other persons throughout Europe who are interested in learning more about the family of FLOW-3D products. The meeting will feature user presentations from a variety of industrial and research applications that focus on validations, benchmarks and case studies, as well as the latest product developments.
Flow Science will offer a half-day training course devoted to optimizing simulation time and accuracy using the various numerical options available in its software packages. The training is included with conference registration and will cover the best numerical options for a wide range of applications. The course will be taught by Dr. Michael Barkhudarov, VP of R&D, and Dr. Ioannis Karampelas, CFD Technical Support Engineer.
Call for Abstracts
The call for abstracts is now open. Users are invited to share their experiences, present their success stories and obtain valuable feedback from their fellow users and Flow Science technical staff. The deadline to submit an abstract is Friday, April 21, 2017. The conference proceedings will be made available to attendees as well as through the Flow Science website.
Online registration for the conference and free training is now available. Register by April 21, 2017, to receive the early-bird rate.
About Flow Science
Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science’s headquarters is located in Santa Fe, New Mexico. Flow Science can be found on the internet at www.flow3d.com.
Source: Flow Science, Inc.
NEW BRUNSWICK, NJ, Dec. 16 – Rutgers Senior Vice President Christopher Molloy and researchers from universities statewide are among those expected to participate in a celebration of Caliburn, Rutgers’ new supercomputer, today (December 15) on the university’s Busch Campus in Piscataway.
Caliburn is the most powerful such system in New Jersey. It was built with a $10 million award from the New Jersey Higher Education Leasing Fund. The lead contractor was HighPoint Solutions of Sparta, N.J., which was chosen after a competitive bidding process. The system manufacturer and integrator is Super Micro Computer Inc. of San Jose, California.
“This new system will give Rutgers the high-performance computing capacity that our world-class faculty needs and deserves, particularly as the use of computation and big data have become key enablers in nearly every field of research,” Christopher J. Molloy, senior vice president for research and economic development, said. “We are extremely appreciative of the state’s support for this initiative, which is a great investment in the university and ultimately the future of New Jersey.”
Manish Parashar, distinguished professor of computer science at Rutgers and founding director of the Rutgers Discovery Informatics Institute (RDI2), led the effort to build the system. Parashar and Ivan Rodero, RDI2’s associate director of technical operations, designed the system with a unique architecture and capabilities. It is built on Intel’s new Omni-Path network interconnect and is among the first clusters to pair the Omni-Path fabric with the latest Intel processors.
“This system provides state-of-the-art advanced cyber infrastructure that will dramatically increase the computation power, provide greater speeds and offer expanded storage capacity to faculty, researchers and students across Rutgers and the state,” Parashar said. “This system will significantly elevate the competitiveness of Rutgers researchers in computational and data-enabled science, engineering and medicine, as well as those in social science and humanities disciplines.”
Along with users at Rutgers, the system will be accessible to researchers at other New Jersey universities and industry users. RDI2 will work with the New Jersey Big Data Alliance, which was founded by Rutgers and seven other universities in the state, to build an industry users program. The capabilities of this new system will establish New Jersey’s reputation in advanced computing and benefit a broad spectrum of industry sectors and academic disciplines.
The updated Top 500 ranking of the world’s most powerful supercomputers, issued last month, places Rutgers’ new supercomputer at No. 242. The Top 500 project provides a reliable basis for tracking and detecting trends in high-performance computing; twice a year it assembles and releases a list of the 500 most powerful computer systems in the world.
The project was built in three phases. Phase I went live in January and provides approximately 150 teraflops of computational and data analytics capabilities and one petabyte of storage to faculty and staff researchers throughout the university. To date there have been more than 100 users from 17 departments universitywide. The system has delivered over 12 million computing hours and 100 terabytes (TB) of storage to the Rutgers community over the past few months. Among the heaviest users have been researchers in the Waksman Institute of Microbiology, the Departments of Physics at New Brunswick and Camden, Department of Chemistry at Newark, and the Center for Integrative Proteomics Research.
Phase II included a new self-contained modular data center at Rutgers University–New Brunswick. Phase III encompasses the installation of the Caliburn supercomputer and the final elements of the network, which provides high-speed access to users. The Supermicro solution is based on a FatTwin SuperServer system. It has 560 nodes, each with two Intel Xeon E5-2695 v4 (Broadwell) processors, 256 gigabytes (GB) of RAM, and a 400 GB Intel NVMe drive. Overall, the system has 20,160 cores, 140 TB of memory and 218 TB of non-volatile memory. Measured performance is 603 TFLOPS, against a peak of 677 TFLOPS.
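As a quick sanity check (not part of the release itself), the aggregate figures above follow directly from the per-node specification; the 18-core count per Xeon E5-2695 v4 is published Intel data, and the NVMe total matches when the per-node 400 GB drives are tallied in binary terabytes:

```python
# Derive Caliburn's aggregate specs from its per-node configuration.
nodes = 560
cores_per_node = 2 * 18              # two 18-core E5-2695 v4 CPUs per node
ram_gb_per_node = 256
nvme_gb_per_node = 400

total_cores = nodes * cores_per_node
total_ram_tb = nodes * ram_gb_per_node / 1024    # binary TB
total_nvme_tb = nodes * nvme_gb_per_node / 1024

print(total_cores)               # 20160, as quoted
print(total_ram_tb)              # 140.0 TB, as quoted
print(f"{total_nvme_tb:.2f}")    # 218.75 TB, consistent with the quoted 218 TB
```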
“As the leading provider of high-performance computing (HPC) solutions, Supermicro is very pleased to help enable this state-of-the-art HPC solution at Rutgers based on our multi-node FatTwin architecture,” said Tau Leng, vice president and GM of HPC at Super Micro Computer, Inc. “Key features of these FatTwin SuperServers include support for E5-2600 v4 processors, NVMe, 100 Gbps Omni-Path fabric, and an innovative cooling architecture to deliver maximum performance while reducing the TCO for the Caliburn supercomputer which will supply high-performance computational and data analytics capabilities to researchers for years to come.”
Rutgers Discovery Informatics Institute (RDI2) is a Rutgers-wide multidisciplinary institute for Advanced Computation and Data Sciences, with the overarching goal of establishing a comprehensive and internationally competitive Computational and Data-Enabled Science and Engineering program at Rutgers that can nurture the fundamental integration of research, education, and infrastructure. RDI2 aims to bridge traditional research boundaries and catalyze socio-technical changes in research across all fields of science and engineering, stimulating new thinking and new practices essential to addressing grand challenges in science, engineering, and industry.

RDI2 is strategically positioned to engage leading researchers in innovative, interdisciplinary collaborations, and has established successful research collaborations with computational groups across Rutgers and beyond, including industry. The institute has a strong research program with more than 50 grants totaling over $40 million.

RDI2 was instrumental in establishing the universitywide ACI strategy that resulted in the formation of the Rutgers Office of Advanced Research Computing, plays a leadership role in New Jersey’s cyberinfrastructure and Big Data efforts and the formation of the statewide Big Data Alliance, and spearheads the Discovery Science spoke of the NSF-funded Northeast Regional BigData Innovation Hub. It has architected and deployed the largest research-computing platform in Rutgers history, and has designed, deployed and operates a production data cyberinfrastructure for the National Science Foundation’s Ocean Observatories Initiative (OOI).
Source: Rutgers Discovery Informatics Institute (RDI2)
CAMBRIDGE, UK, Dec. 16 – ARM has acquired Allinea Software, an industry leader in development and performance analysis tools that maximize the efficiency of software for high performance computing (HPC) systems. Currently, 80 percent of the world’s top 25 supercomputers use Allinea’s tools, with key customers including the US Department of Energy, NASA, a range of supercomputing national labs and universities, and private companies using HPC systems for their own scientific computation.
“As systems and servers grow in complexity, developers in HPC are facing new challenges that require advanced tools designed to enable them to continue to innovate,” said Javier Orensanz, general manager, development solutions group, ARM. “Allinea’s ability to debug and analyze many-node systems is unique, and with this acquisition we are ensuring that this capability remains available to the whole ARM ecosystem, and to the other CPU architectures prevalent in HPC, as well as in future applications such as artificial intelligence, machine learning and advanced data analytics.”
This acquisition further enhances ARM’s long-term growth strategy in HPC and builds on ARM’s recent success with Fujitsu’s 64-bit ARMv8-A powered Post K supercomputer, and the launch of the ARMv8-A Scalable Vector Extension. It follows the announcement that ARMv8-A will be the first alternative architecture with OpenHPC support, and the release of ARM Performance Libraries, which provide ease of software development and portability to ARMv8-A server platforms. As this momentum continues, bringing Allinea’s expertise into ARM will continue to provide partners with access to a comprehensive software tools suite that addresses increasingly complex system challenges.
“Writing and deploying software that exploits the ever increasing computing power of clusters and supercomputers is a demanding challenge – it needs to run fast, and run right, and that’s exactly what our suite of tools is designed to enable,” said David Lecomber, CEO, Allinea. “As part of ARM, we’ll continue to work with the HPC community, our customers and our partners to advance the development of our cross-platform technology, and take advantage of product synergies between ARM’s compilers, libraries and advisory tools and our existing and future debugging and analysis tools. Our combined expertise and understanding of the challenges this market faces will deliver new solutions to this growing ecosystem.”
Allinea’s tools give developers the ability to work with systems ranging from hundreds of cores to hundreds of thousands. The product suite includes the developer tool suite Allinea Forge, which incorporates an application debugger called Allinea DDT and a performance analyzer called Allinea MAP, and an analysis tool for system owners, users and administrators called Allinea Performance Reports.
Allinea will be integrated into the ARM business with all functions and Allinea’s Warwick and Eastleigh locations retained. Allinea’s former CEO David Lecomber will join the ARM development solutions group management team.
The post ARM Extends HPC Offering with Acquisition of Tools Provider Allinea Software appeared first on HPCwire.
At the international IEDM 2016 conference earlier this month, Purdue University researchers revealed a number of technologies and concepts aimed at transforming tomorrow’s semiconductors. Some of the endeavors are set to boost the performance of silicon-based transistors, while others portend a path beyond silicon CMOS.
Sustaining the progress ensconced in Moore’s law over the last 50 years is top of mind for these researchers. That observation-turned-prophecy made by Gordon Moore, that transistor density on integrated circuits would double every year (later revised to every two years), has driven the modern era of ubiquitous computing. While transistor density may technically be on track, the benefits (smaller, faster, cheaper, more energy-efficient silicon) are starting to lag as feature sizes push against the limits of physics.
Interpretations of Moore’s law aside, there is consensus that new technologies are needed to ensure continued computational progress.
“For the past 50 years, ever more electronic devices envelop us in our day-to-day life, and electronic-device innovation has been a major economic factor in the U.S. and world economy,” said Gerhard Klimeck, a professor of electrical and computer engineering and director of Purdue’s Network for Computational Nanotechnology in the university’s Discovery Park. “These advancements were enabled by making the basic transistors in computer chips ever smaller. Today the critical dimensions in these devices are just some 60 atoms thick, and further device size reductions will certainly stop at small atomic dimensions.”
Purdue researchers presented five papers at the annual International Electron Devices Meeting (IEDM 2016), which took place Dec. 5-7 in San Francisco.
Two papers describe novel approaches for suppressing self-heating and enhancing the performance of conventional CMOS chips. The remaining papers focus on creating devices that generate less heat. Explored are networks of nanomagnets, extremely thin layers of a material called black phosphorus, and “tunnel” field-effect transistors, or FETs.
“There are two approaches, one is that we change the materials, use different materials or more advanced materials to replace silicon, second is we change the transistor concepts to hopefully make it much faster or energy efficient,” said Peide Ye, the Richard J. and Mary Jo Schwartz Professor of Electrical and Computer Engineering (see YouTube video below).
Ye is working to develop CMOS devices with black phosphorus. The material shows promise as a post-silicon semiconductor, capable of passing large amounts of current with ultra-low resistance while maintaining good switching performance.
Read more about this important research at http://www.purdue.edu/newsroom/releases/2016/Q4/innovations-offer-peek-into-the-future-of-electronic-devices.html.
Feature image caption: A device is made from the semiconductor germanium in research led by Purdue Professor Peide Ye (source: Purdue University image/Erin Easterling)