Data-intensive science is not a new phenomenon, as the high-energy physics and astrophysics communities can certainly attest, but today more and more scientists face steep data and throughput challenges fueled by soaring data volumes and the demands of global-scale collaboration. With data generation outpacing network bandwidth improvements, moving data digitally from point A to point B, whether for processing, storage or analysis, is by no means a solved problem, as evidenced by the continuation, or even revitalization, of sneakernets.
Even for those scientists fortunate to have access to the highest-speed networks, like the 100 Gigabit Ethernet research and education infrastructure, Internet2, it takes a certain level of expertise to maximize data transfers. Recognizing that their advanced networking capabilities were not always fully exploited, a group of Clemson University researchers has come up with a way to optimize transfers for everyone.
Not surprisingly, the work is coming out of the Clemson genetics and biochemistry department, which has had a front-row seat to the past decade’s data deluge. In a news writeup, Clemson’s Jim Melvin observes that while high-energy physics is often cited as the poster child for data-intensive science, genomics is catching up. And as in the computational physics community, long-distance data sharing and collaboration is essential for life science researchers.
To maximize data transfer speeds across the Internet2 backbone and the attached campus network, the Clemson scientists developed an open-source software platform called Big Data Smart Socket (BDSS). As described in the Clemson media release, “the groundbreaking software takes advantage of specialized infrastructure such as parallel file systems, which distribute data across multiple servers, and advanced software-defined networks, which allow administrators to build, tune and curate groups of researchers into a virtual organization.”
“What used to take days now takes hours – or even less,” said Alex Feltus, associate professor in genetics and biochemistry in Clemson University’s College of Science. The software runs on any computer and although it was designed to optimize the transfer of large bioscience data sets, Feltus says the same methods will work for any large modern data sets.
As users generate data transfer requests, BDSS rewrites each request in a more optimal form, adding parallelism all the way down to the hard drives to enable faster and more efficient data transfers.
“We’ve found the right buffer size, number of parallel data streams and the optimal parallel file system to perform the transfers,” said Feltus, who is director of the Clemson Systems Genetics Lab. “It’s very important that end-to-end data movement – and not just network speed – is optimized. Otherwise, bottlenecks on the sending or receiving side can slow transfers to a crawl. Our BDSS software enables researchers to receive data – optimized for the architecture of their own computer systems – far more quickly than before. Previously, researchers were having to move rivers of information through small pipes at the sending and receiving ends. Now, we’ve enhanced those pipes, which vastly improves information flow.”
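The tuning Feltus describes – a well-chosen buffer size plus multiple parallel data streams – can be sketched in miniature. The following is an illustrative Python example of the general technique (parallel, buffered range transfers), not actual BDSS code; the function names and default values are invented for the sketch:

```python
# Illustrative sketch (not BDSS itself): move a file using N parallel
# streams, each copying its own byte range with a tuned buffer size.
import os
from concurrent.futures import ThreadPoolExecutor

def copy_range(src, dst, offset, length, buf_size):
    """Copy one byte range from src to dst using buf_size-sized reads."""
    with open(src, "rb") as fin, open(dst, "r+b") as fout:
        fin.seek(offset)
        fout.seek(offset)
        remaining = length
        while remaining > 0:
            chunk = fin.read(min(buf_size, remaining))
            if not chunk:
                break
            fout.write(chunk)
            remaining -= len(chunk)

def parallel_copy(src, dst, streams=4, buf_size=1 << 20):
    """Split the file into `streams` ranges and copy them concurrently."""
    size = os.path.getsize(src)
    if size == 0:
        open(dst, "wb").close()
        return
    # Pre-allocate the destination so each stream can seek independently.
    with open(dst, "wb") as f:
        f.truncate(size)
    step = -(-size // streams)  # ceiling division
    with ThreadPoolExecutor(max_workers=streams) as pool:
        futures = [
            pool.submit(copy_range, src, dst, off,
                        min(step, size - off), buf_size)
            for off in range(0, size, step)
        ]
        for fut in futures:
            fut.result()  # propagate any worker errors
```

In a real end-to-end transfer the same pattern applies over the network rather than local disks, and the stream count and buffer size would be tuned to the file system and path in use, which is precisely the optimization BDSS automates.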
Read the Clemson announcement and find links to related papers here.
SEATTLE, Wash., Jan. 11 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) today announced the appointment of Stathis Papaefstathiou to the position of senior vice president of research and development. Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray’s research and development projects.
With more than 30 years of high tech experience, Papaefstathiou has held senior-level positions at Aerohive Networks, F5 Networks, and Microsoft. In his most recent position at Aerohive, Papaefstathiou was the senior vice president of engineering and led product development across the company’s entire product portfolio, including network hardware, embedded operating systems, cloud-enabled network management solutions, big data analytics, and DevOps.
Prior to Aerohive, Papaefstathiou served as vice president of product development at F5 Networks and was responsible for defining the strategies for dynamic datacenters, cloud, and virtualization. Previously, Papaefstathiou held several technical and senior management positions at Microsoft in high performance computing, distributed operating systems, datacenter automation, as well as in Microsoft Research. He received his Ph.D. in computer science from the University of Warwick in the United Kingdom.
“At our core, we are an engineering company, and we’re excited to have Stathis’ impressive and diverse technical expertise in this key leadership position at Cray,” said Peter Ungaro, president and CEO of Cray. “Leveraging the growing convergence of supercomputing and big data, Stathis will help us continue to build unique and innovative products for our broadening customer base.”
“My admiration and respect for Cray goes back to my days as a university research fellow, and throughout my career I have continued to hold the company’s engineering and R&D capabilities in very high regard,” said Papaefstathiou. “Leading the R&D teams at Cray is both an honor and an exciting opportunity, and I look forward to working with this talented group to expand the boundaries of what can be made possible with a Cray supercomputer.”
Papaefstathiou will take over the position currently held by Peg Williams, who is retiring from Cray. Papaefstathiou will work with Williams through a transition period prior to her retirement.
“Peg has played an instrumental role in shaping the company and our products throughout her career at Cray, and I sincerely thank her for her efforts and contributions,” said Ungaro. “She will be missed, and all of us at Cray wish her the best in retirement.”
About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.
The post Cray Appoints Stathis Papaefstathiou to Senior VP of Research and Development appeared first on HPCwire.
Jan. 11 — Blue Waters Graduate Fellowships provide PhD students with a year of support, including a $38,000 stipend, up to $12,000 in tuition allowance, an allocation of up to 50,000 node-hours on the powerful Blue Waters petascale computing system, and funds for travel to a Blue Waters Symposium to present research progress and results.
For the Fellowships, preference will be given to candidates engaged in multidisciplinary research projects that combine disciplines such as computer science, applied mathematics, and computational science applications. Applicants should be in the second or later year of their graduate program with a well-developed, related research proposal. Applicants must be U.S. citizens or permanent residents of the U.S. by the time of the application deadline.
Applications are evaluated based on:
- Academic record from undergraduate and graduate work
- GRE scores
- Related experience and service
- Research plan and its relationship to use of the Blue Waters supercomputer
- Letters of reference
Questions? Contact firstname.lastname@example.org
Application instructions can be found here.
The post Blue Waters Graduate Fellowship Applications Due Feb. 3 appeared first on HPCwire.
Jan. 11 — Summer of HPC is a PRACE programme that offers summer placements at HPC centres across Europe. Up to 21 top applicants from across Europe will be selected to participate. Participants will spend two months working on projects related to PRACE scientific or industrial work and ideally produce a visualisation or video of their results. The programme will run from July 1st to August 31st, 2017. At the end of the programme, the two best participants will receive awards for their contributions: Best Visualisation and HPC Ambassador.
Flights, accommodation & a stipend will be provided to all successful applicants; all you need to bring is your interest in computing and some enthusiasm!
Participating in PRACE Summer of HPC
Applications are welcome from all disciplines. Previous experience in HPC is not required. Some coding knowledge is a prerequisite but the most important attribute is a desire to learn, and share, more about HPC. A strong visual flair and an interest in blogging, video blogging or social media are desirable. Applications are open from January 11, 2017, to February 19, 2017.
Eligibility to the Programme
The following eligibility criteria apply. Applicants must:
- Be studying at a European institution at the time of application.
- Be late-stage undergraduate or early-stage postgraduate students.
- Be over the age of 18.
- Attach their latest CV.
- Have the minimum prerequisites outlined by projects.
- Be able to pass the Code Test.
- Select at least two projects of their choice.
- Have a recommendation returned before the deadline.
Final-year students will be accepted as long as they are registered with a European institution at the time of application.
More information can be found in the FAQ.
How to Apply
To apply you will need to complete an application form and provide us with a copy of your CV, reference and code test.
You should receive a confirmation email immediately. If you don’t receive confirmation within a few hours, please get in touch.
The application form is available online at https://events.prace-ri.eu/event/sohpc2017.
The post Applications Being Accepted for PRACE’s Summer of HPC appeared first on HPCwire.
To a large degree IBM and the OpenPOWER Foundation have done what they said they would – assembling a substantial and growing ecosystem and bringing Power-based products to market, all in about three years. Now, says Ken King, general manager, OpenPOWER, for IBM Systems Group, it’s time for all that organizational and technology rubber to hit the road and be converted into volume sales. That’s a tall, but very doable order, he argues.
IBM’s broad ambition, of course, is to make a big dent in Intel’s ironclad grasp of the x86 server landscape. “This year is about scale. We’re [IBM/OpenPOWER] only going to be effective if we get to 10-15-20 percent of the Linux socket market. Being at one or two percent won’t [do it],” says King. In a wide-ranging conversation with HPCwire, King and Brad McCredie, IBM Fellow with responsibility for technology oversight of IBM Power Systems, discussed what’s been accomplished and what remains to be done to make IBM/OpenPOWER successful. King also shares some interesting insights on China market dynamics.
There’s no shortage of doubters. Taking on Intel is not for the faint hearted. IBM has perhaps sounded a strident note around the Power initiative in past years. As 2017 gets rolling King’s comments seem tempered by the tremendous efforts required so far. Make no mistake, IBM remains very confident, suggests King, but also clear-eyed about the challenges and encouraged by how much has been accomplished.
Here is a very brief sampling of important IBM/OpenPOWER achievements so far:
- OpenPOWER Membership has grown to 280-plus since its incorporation in December 2013 by signature founders IBM, NVIDIA, Mellanox, Google, and Tyan. Basically all elements of the IT technology supplier landscape (accelerator chips, networking, storage, software, etc.) are represented. Says King, “We don’t even think about how many members there are anymore.” There’s enough.
- Processors and CAPI. The Power8 and Power8+ (NVLink) processors are out and in products. CAPI (Coherent Accelerator Processor Interface) technology for Power8 is out, and work on OpenCAPI, launched in October 2016, is well underway – “It’s an entirely new open standard that has unique physical and protocol layers. The one thing that we did was preserve the APIs the accelerator sees,” says McCredie.
- Power-based Products. In September IBM launched three new Power8/8+ servers including Minsky – the first commercial product with NVIDIA’s Pascal P100 GPU. Around ten Power-based systems from OpenPOWER partners (including from HPC stalwart Supermicro) were announced or launched at SC16. IBM launched PowerAI bundling of Minsky optimized for deep learning and intended to make DL adoption in the enterprise easier – King says more “packaged” solutions (finance, manufacturing, etc) are in development.
- Hyperscale Wins. Google (and Rackspace) announced in the spring plans for a Power9-based server supporting OCP. More recently Tencent – China’s largest Internet portal – announced plans to include Power-based systems in its mix. The market is still watching to see how quickly these announcements turn into real offerings for hyperscale customers and sales for IBM/OpenPOWER. IBM’s own cloud may be seen as a competitor, though it’s unclear how much effect on OpenPOWER adoption it will have.
- Developer Clouds. IBM launched Supervessel, its Power-based cloud development platform, in Europe and China last year. At SC16 Big Blue launched a collaboration with HPC cloud specialist Nimbix, which has put the PowerAI platform and associated developer tools in its cloud.
- Power8/9 Roadmap. As noted, the Power8+ chip implemented NVLink for GPU communications. Power9 is due in 2017 and is expected to support NVLink and PCIe4 and also leverage OpenCAPI. Unlike the Power8 chip launch, which was strictly an IBM affair, the Power9 debut (four initial versions) will be an OpenPOWER family affair, says King, with numerous partner contributions and implementation commitments announced together.
The stakes are clearly ratcheting up. It’s also probably worth repeating that accelerator-assisted computing, driven by innovations from diverse ecosystem partners, is the fundamental IBM/OpenPOWER vision. This contrasts, argues King, with centralized control over architectural advance as practiced by Intel. No doubt Intel sees it differently. In any case, the ‘decline’ of Moore’s Law and the rise of heterogeneous computing as the necessary route for advancing high performance (in science and the enterprise) is a core IBM/OpenPOWER value proposition.
Asked whether IBM is satisfied with the progress, King is surprisingly candid.
“There’s no one answer to that. There are multiple answers in my opinion. My bosses are, of course, going to say not fast enough on the monetization, right. That said, as we work with our partners – and some of our partners are very large partners that understand this market really well – in a couple of cases when we’ve met with them and I express my concern about monetization not happening fast enough, their feedback is, are you kidding? ‘What are you expecting? You are starting from ground zero for the most part and in two years look where this has come. It takes five years to really scale from a monetization perspective,’” says King.
Five years seems an eternity in technology, yet many analysts agree that’s probably what it takes given Intel’s dominant position and the challenges customers face in making a switch.
Addison Snell, CEO, Intersect360 Research, says, “IBM has all the right pieces in place for POWER, and now the challenge is that they actually have to make sales. IBM did announce some good wins in both HPC and hyperscale markets around SC16, and we’ll get our first look at how much the total volume picked up when we do our total market model over the next few months, and of course, in our annual HPC Site Census survey. Our 2016 Site Census survey didn’t pick up any significant number of new POWER-based installations, but that was before Minsky was available, and IBM could see things picking up from here.”
King recognizes the challenges.
“Just to finish the thought. On the deployment side, as I said earlier, you are seeing now a lot of significant growth. This year is scaling. The challenge we are working through is the switching cost. On existing workloads you’ve got clients already entrenched in their datacenters with x86-optimized solutions. We have to show significant performance advantage to justify the switching costs associated with it. We as an ecosystem, not just IBM, that’s the hurdle we are working on clearing. We aren’t all the way there. If we were all the way there we would be at 20-30 percent market share of Linux on Power.
“Where we’re succeeding is delivering innovations that are showing the necessary level of differentiation. It can’t be 1.1X TCO. It’s got to be 1.5X or 2.0X to justify, ‘OK, I am going to move to another platform. I am going to port my applications to another platform. I am going to optimize them on that.’ That’s the hurdle we’re working on clearing,” says King.
Citing a 2016 Intersect360 special study on processor architectures, Snell says, “When HPC users rated the importance of technical features on a five-point scale, the most important characteristic was memory bandwidth, followed by double-precision performance and memory latency. These are areas in which the POWER architecture has an opportunity to gain. Furthermore, over half the HPC users responding indicated they would be using or evaluating Power/OpenPOWER over the next few years. If IBM can effectively sell the benefits of POWER to those evaluators, it stands a good chance to take a fair market share as a toehold in the HPC industry.”
King emphasizes a critical success element is this idea of building a complete ecosystem in which everyone benefits and the flexibility it offers. A good example, he says, is how the Tencent deal happened, starting with product design all the way to sales channel used.
“It was a system that was configured and manufactured by Supermicro at a cost point that made sense for Tencent and with that then becoming an IBM product. We then licensed or provided it as a product to reseller Inspur who then sold it to the customer because the customer’s procurement list only had Chinese companies,” says King.
“We couldn’t do that in the old model. The OpenPOWER model creates tons of flexibility for us through our partnerships. Now Inspur has decided they will develop their own OpenPOWER systems, initially for the China market, and then more globally for the broader market as part of seeing the progress and seeing the interest from hyperscale clientele.”
China, of course, is a tricky geography to sell into at the moment, full of promise and market growth opportunities, but also closely controlled. China has worked hard to develop and expand its technology prowess generally and supercomputer capability specifically (See HPCwire article, US, China Vie for Supercomputing Supremacy).
King declined to comment on what effects, if any, the change in US administration will trigger. President-elect Trump’s overture to Taiwan and U.S. border tax talk seem likely to stir political and trade tensions. Currently, a big control point is China’s use of ‘local, secure, indigenous, and controllable technology’ rules. “The more that tightens up the harder it is for multinationals. The more the expectation is for the multinationals to provide their intellectual property to China. You got to find the balance,” says King.
“The positive for us is it creates a very interesting door opener for Power, and ARM as well, and alternative technologies. They are testing everything. But you have to be careful. Every major player – chip providers, etc. – is in China, and everyone in China is trying to be the partner that’s got that one alternative. If we’re partnering with one company, then somebody else is going to partner with AMD, and somebody with ARM. China will continue to be a battleground for the next few years and I don’t know when or how it will end,” says King.
Leaving aside extended discussion of IBM’s multi-prong deep learning strategy as the wave of the future – that was a common theme among many technology suppliers at SC16 – it seems clear attacking the Linux market means a heavy emphasis on the wider enterprise. The convergence of HPC and HPDA is providing more opportunity for accelerated analytics, says King, noting that one large retailer has deployed a Power system with a GPU-accelerated database application. “They will probably allow us to talk about them next year once they have got some numbers and results,” he says.
IBM announced earlier its partnership with Kinetica, which describes its platform as “a distributed, in-memory database accelerated by GPUs that can simultaneously ingest, analyze, and visualize streaming data for truly real-time actionable intelligence. Kinetica leverages the power of many core devices (such as GPUs) to deliver results orders of magnitude faster than traditional databases on a fraction of the hardware.” IBM says it will work with accelerated DB vendors “as we see fit.”
IBM’s PowerAI solution (Minsky) has already been optimized for a number of the machine learning/deep learning frameworks – Theano, Caffe and Torch – with more expected. But don’t mistake the obvious emphasis on penetration of the enterprise as a lack of interest or commitment in the Top500 and leadership-class machines, says King.
“We are heavily invested and focused in helping move down the path to exascale. So CORAL was a first step. There are other projects I can’t talk about now,” King says, adding, “We wouldn’t have won that in our traditional Power environment. It was the open architecture. It was the data-centric computing model. It was the partnership with NVIDIA and Mellanox. All of that enabled us to win it. That continues to be a strategy that we think that those kinds of agencies find very attractive.”
All netted down, the table is set for success, say King and McCredie. Power-based products, technology innovations, channels, (a few) signature customers, expanding ecosystem, and – not least – a philosophy that puts a premium on independently-driven innovation by partners are all present.
IBM’s fire to succeed is undiminished they insist, though the rhetoric seems a little less strident. What’s needed now are sales. One more year seems perhaps too ambitious to turn that tide, but enough to provide a strong indicator.
The post For IBM/OpenPOWER: Success in 2017 = (Volume) Sales appeared first on HPCwire.
Jan. 10 — High-Performance Computing (HPC) and Big Data technology provided by Atos will speed up the processing of omics information and epidemiological studies at the Institute, a world leading centre of research into viral diseases of livestock and those that spread from animals to people (zoonoses).
The technology is an essential tool to enable the Institute to continue to process and analyse huge amounts of information generated by its research projects, building further capability and significantly contributing to the reduction of the impact of viral diseases in both livestock and humans.
Genome research manages massive amounts of data, which requires vast computer processing and storage capacity. By eliminating the bottlenecks that often occur in data analysis and storing data more efficiently, it is possible to generate information about increasing numbers of viral diseases.
Supercomputers are also critical when modelling the spread of disease with realistic simulations that take many factors into account, including work on the foot-and-mouth disease outbreak in the UK in 2001. Such simulations are essential to increase preparedness and inform policy makers in case of future outbreaks.
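Outbreak simulations of this sort typically build on compartmental epidemic models. As a toy illustration of the idea (not Pirbright's actual models, and with invented, uncalibrated parameter values), a minimal SIR (susceptible-infected-recovered) model can be integrated in a few lines:

```python
# Minimal SIR compartmental model, integrated with explicit Euler steps.
# Parameters are illustrative only; real outbreak models add spatial
# structure, farm networks, stochasticity and many other factors.
def simulate_sir(population, infected0, beta, gamma, days, dt=0.1):
    """Integrate dS/dt = -bSI/N, dI/dt = bSI/N - gI, dR/dt = gI."""
    s, i, r = float(population - infected0), float(infected0), 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt  # new infections this step
        new_rec = gamma * i * dt                  # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r
```

With beta = 0.3 and gamma = 0.1 (a basic reproduction number of 3), such a model predicts that most of a fully susceptible population is eventually infected, which is why early containment, the focus of Pirbright's work, matters so much.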
Pirbright is strategically funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and is a unique national centre that enhances the UK’s capability to control, contain and eliminate viral diseases of animals through its highly innovative fundamental and applied bioscience. With surveillance, vaccine production and informed support to policy making, Pirbright boosts the competitiveness of livestock and poultry producers in the UK and abroad, thereby improving the quality of life of both animals and people.
The capability provided by Pirbright is essential in the context of the ever-changing nature of viral disease threats emerging from the globalisation of trade, environmental change and expanding human and animal populations.
Computational and bioinformatics facilities are the key link in the long chain of cutting edge research facilities at Pirbright, notably a number of high- and low-containment laboratories, a bioimaging suite and a unit for sequencing in containment. To meet the Institute’s requirements, Atos provided a Bull system featuring several types of computer nodes in order to be able to deal with the complexity of the tasks carried out in the different departments of the Institute.
“Atos came up with a combination of different platforms that will allow the institute to analyse and collate the broad range of datasets generated by our research programmes. We were looking for an IT partner with a broad expertise in life science projects,” said Bryan Charleston, Interim Director of The Pirbright Institute.
“Purchasing a supercomputer is not like going to a supermarket and picking something off a shelf. It required a lot of design and discussion to define the system we wanted. Atos has enabled us to fully support our computational needs. We can assemble genomes, study the interaction between virus and host, understand how viruses evolve and model how they spread between individuals and farms,” said Paolo Ribeca, Head of Integrative Biology and Bioinformatics at Pirbright.
“Atos is determined to solve the technical challenges that arise in life sciences projects, to help scientists to focus on making breakthroughs and forget about technicalities. We know that one size doesn’t fit all and that is the reason why we studied carefully The Pirbright Institute’s challenges to design a customised and unique architecture. It is a pleasure for us to work with Pirbright and to contribute in some way to reduce the impact of viral diseases,” said Natalia Jiménez, WW Life Sciences lead at Atos.
Andy Grant, Head of Big Data and HPC, Atos UK&I, said, “We are really excited to see our HPC and analytics technologies being deployed in such a critical area of science, fundamental to the health and wellbeing of both our animal and human populations. The Bull supercomputer and storage environment deployed has been designed specifically to tackle the kinds of data-intensive computing challenges that The Pirbright Institute undertakes, removing bottlenecks to dramatically increase the throughput of analytical jobs.”
The post Pirbright Institute Selects Atos Supercomputer to Help Fight Viral Diseases of Animals and People appeared first on HPCwire.
SUNNYVALE, Calif. and YOKNEAM, Israel, Jan. 10 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters have been selected by Baidu, the leading Chinese language Internet search provider, for Baidu’s machine learning platforms. The need for higher data speeds and more efficient data movement made Spectrum and RDMA-enabled ConnectX-4 adapters key components of these machine learning platforms. With the Mellanox solutions, Baidu was able to demonstrate a 200 percent improvement in machine learning training times, resulting in faster decision making.
“We are pleased to continue working with Mellanox to enable the most efficient platforms for our applications,” said Mr. Liu Ning, system department deputy director at Baidu. “Mellanox Ethernet solutions with RDMA allow us to fully leverage our Machine Learning platform and work with various machine models while saving valuable CPU cycles and associated computing costs.”
“Machine Learning has become a critical predictive and computational tool for many businesses worldwide,” said Amir Prescher, senior vice president of business development, Mellanox Technologies. “Working with Baidu, the premier Internet search provider in China, has enabled Mellanox to showcase the advantages and cost effectiveness of our Spectrum switches and ConnectX-4 100Gb/s adapter solutions to enable the most efficient machine learning platforms.”
In 2013, Baidu established the Institute of Deep Learning (IDL) with the goal of better leveraging machine learning as it applies to image recognition, voice recognition and search, and advertising CTR forecasting (i.e., Click Through Rate prediction, pCTR). To support IDL’s networking requirements, the Baidu RDMA solution guarantees business data transfers continue with no disruption or downtime. Even if RDMA encounters problems, the network can automatically switch over to TCP in order to guarantee continuous system operation. Baidu RDMA application solutions and Mellanox RDMA technology will continue to support Baidu’s IDL to drive future innovation and breakthrough technology solutions.
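The automatic RDMA-to-TCP switchover described above is an instance of a general transport-fallback pattern. The following sketch illustrates that pattern only; the class and function names are invented for illustration and are not Mellanox or Baidu APIs:

```python
# Hypothetical sketch of transport fallback: try the fast (RDMA-style)
# path first, and drop back to TCP if it fails, so the application
# keeps running without disruption. Transports are stand-in objects.
class TransportError(Exception):
    """Raised by a transport when its link cannot deliver the payload."""
    pass

def send_with_fallback(payload, transports):
    """Try each transport in preference order; return (name, result)."""
    last_err = None
    for transport in transports:
        try:
            return transport.name, transport.send(payload)
        except TransportError as err:
            last_err = err  # remember the failure, try the next transport
    raise RuntimeError(f"all transports failed: {last_err}")
```

In a production system the fallback decision is made inside the network stack rather than per call, but the design goal is the same: the application never sees the transport failure, only slower delivery until the fast path recovers.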
Baidu Inc. is the leading Chinese language Internet search provider. As a technology-based media company, Baidu aims to provide the best and most equitable way for people to find what they’re looking for. In addition to serving individual Internet search users, Baidu provides an effective platform for businesses to reach potential customers.
Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.
Source: Mellanox Technologies
The post Mellanox Ethernet Solutions Selected to Accelerate Baidu’s Machine Learning Platforms appeared first on HPCwire.
FREMONT, Calif., Jan. 10 — AMAX, a leading provider of HPC and Data Center solutions, announced that it has become an official Intel Technology Provider, Cloud Data Center Specialist. This recognition validates AMAX’s competency and capability in delivering cutting-edge cloud technology to its global customer base within the Americas, APAC and EMEA regions.
This program recognizes AMAX’s history of excellence as a technology partner for the design and deployment of fully-integrated rack level platforms to support software-defined cloud infrastructures, particularly around OpenStack. AMAX’s design concepts revolve around leveraging white box server and networking platforms along with open standards such as OpenStack, Open Compute and Open Networking, empowering its customers with future-proofed infrastructure designs optimized for maximum usability and cost efficiency.
As a total solution provider, AMAX’s cloud offerings include its CloudMax OpenStack framework, a comprehensive and customizable OpenStack starter kit to kickstart OpenStack development, as well as [SMART] DC Data Center Manager, a cross-platform out-of-band DCIM solution to streamline and manage a highly-efficient, modern day data center.
AMAX’s unique business model revolves around custom-tailoring both solutions and programs to meet customer requirements. AMAX is capable of the design, production and worldwide deployment of solutions, with global production facilities and a full menu of value-added services such as New Product Introduction (for companies wishing to productize branded solutions), global logistics, and on-site installation and support.
“AMAX is excited to be recognized by Intel for its experience in global cloud deployments,” said James Huang, Product Marketing Manager, AMAX. “We look forward to continued collaboration with industry leaders and technology partners to develop groundbreaking SDI designs geared towards lowering the overall cost of cloud ownership.”
As a certified Intel Technology Provider, Cloud Data Center Specialist, HPC Data Center Specialist, as well as a partner of the Intel Technology Provider Program, AMAX works closely with Intel to develop solutions around the latest technologies, solution architectures and up-to-date insights. As a Cloud Data Center Specialist, AMAX has been validated by Intel as a trusted partner who can demonstrate commitment and excellence in deploying x86-based data center solutions, tailored to best meet customers’ needs.
The post AMAX Joins Intel Technology Provider Program as Cloud Data Center Specialist appeared first on HPCwire.
SEATTLE, Wash., Jan. 10 — Global supercomputer leader Cray Inc. (Nasdaq: CRAY) has announced selected preliminary 2016 financial results. The 2016 anticipated results presented in this release are based on preliminary financial data and are subject to change until the year-end financial reporting process is complete.
Based on preliminary results, total revenue for 2016 is expected to be about $630 million, in the range of the previously provided guidance, and the Company expects to be profitable on both a GAAP and non-GAAP basis for 2016.
As of December 31, 2016, cash and investments are expected to total about $225 million.
For 2017, while a wide range of results remains possible, the Company currently believes it will be difficult to grow over 2016.
“While 2016 was challenging, we finished the year on a high-note, delivering the largest revenue quarter in our history,” said Peter Ungaro, president and CEO of Cray. “We achieved all of the large system acceptances and most of the smaller ones we were working toward in the quarter, installing high-end supercomputers and analytics solutions at numerous sites around the world. We are not yet able to provide detailed 2017 guidance as we continue to lack visibility for the year, but we remain confident in our competitive position and our ability to drive long-term growth.”
About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.
The post Cray Announces Selected Preliminary 2016 Financial Results appeared first on HPCwire.
SUNNYVALE, Calif., Jan. 9 — DriveScale, the company that is pioneering flexible, scale-out computing for the enterprise using standard servers and commodity storage, today announced it has become a certified technology partner in Dell EMC’s Technology Partner Program. By certifying the DriveScale System on Dell EMC PowerEdge Servers, Ethernet switches and Direct Attached Storage (DAS), customers are able to simplify and optimize data centers targeted at cloud-native or cloud-ready applications with dynamic workloads. Implementing the DriveScale System reduces hardware acquisition costs and provides significant improvement in server and storage utilization — allowing organizations to implement more big data workloads.
Cloud-native infrastructure is designed for distributed scale-out applications such as big data. These modern applications require built-in resiliency and data locality. However, traditional data center infrastructures — such as DAS on server nodes — do not permit data center managers to change the ratio of compute to storage on demand. As a certified Dell EMC technology partner, DriveScale now delivers a combined solution that can be easily adopted by organizations that are adjusting to the exponential growth of big data workloads and the limits imposed by using traditional architectures that are not designed for optimal utilization of shared resources and dynamic allocation.
“By using the DriveScale System with Dell EMC PowerEdge Servers, Ethernet switches and Direct Attached Storage, customers are able to future-proof their infrastructure against the scalability challenges that have plagued enterprise companies in recent years,” says S.K. Vinod, VP of Product Management at DriveScale. “Our scale-out architecture gives enterprise companies the ability to maximize existing hardware resources — building a private cloud infrastructure for big data that optimizes these resources. Joining the Dell EMC Technology Partner Program provides DriveScale with the additional resources needed to continue to innovate the data center and in turn allows organizations to continually deliver a higher quality of service to end users by responding more quickly without having to make massive capital expenditures.”
The latest version of the DriveScale System, released in September, further empowers organizations to custom build a data center that fits their needs, providing improved storage-ready functionality, expanded server OS support for building big data infrastructures as well as proactive monitoring and issue rectification via real-time alerting. By significantly improving the economics of acquisition and operation costs of data centers, DriveScale changes the way customers purchase and deploy computing hardware through disaggregation of storage and compute. Using DriveScale’s management software, the customer can combine compute with the right amount of storage on demand depending on the kind of workload they need to run.
“In the healthcare industry, it’s critical that everything move in real time, adapting to the needs of patients, practitioners and organizations. With our healthcare analytics solutions, we need to ensure that customers get the most efficient, scalable and cost effective infrastructure available,” said Charles Boicey, Chief Innovation Officer at Clearsense. “DriveScale allows us to optimize and adjust our data center infrastructure as our needs change in the face of rapid growth. By achieving certification on additional platforms — such as Dell EMC offerings — DriveScale continues to strive for a better solution to the major challenges that face the enterprise data center.”
To learn more about the DriveScale System, visit: https://www.drivescale.com/solutions/
DriveScale is leading the charge in bringing hyperscale computing capabilities to mainstream enterprises. Its composable data center architecture transforms rigid data centers into flexible and responsive scale-out deployments. Using DriveScale, data center administrators can deploy independent pools of commodity compute and storage resources, automatically discover available assets, and combine and recombine these resources as needed. The solution is provided via a set of on-premises and SaaS tools that coordinate between multiple levels of infrastructure. With DriveScale, companies can more easily support Hadoop deployments of any size as well as other modern application workloads. DriveScale is founded by a team with deep roots in IT architecture and that has built enterprise-class systems such as Cisco UCS and Sun UltraSparc. Based in Sunnyvale, California, the company was founded in 2013. Investors include Pelion Venture Partners, Nautilus Venture Partners and Ingrasys, a wholly owned subsidiary of Foxconn. For more information, visit www.drivescale.com or follow us on Twitter at @DriveScale_Inc.
About Dell EMC Technology Partner Program
The Dell EMC Technology Partner Program helps Independent Software Vendors (ISVs), Independent Hardware Vendors (IHVs) and Solution Providers build innovative and highly competitive solutions using Dell EMC platforms. The program provides a global platform for engagement and access to Dell EMC training, tools and expertise to help technology partners deliver value to their customers. More information can be found at www.dell.com/technologypartner.
The post DriveScale Joins Dell EMC Technology Partner Program appeared first on HPCwire.
In this SC16 video interview, HPCwire Managing Editor Tiffany Trader sits down with Toni Collis, the director and founder of the Women in HPC (WHPC) network, to discuss the strides made since the organization’s debut in 2014. One of the most important takeaways from our conversation is that Women in HPC is for everyone. As Collis says, “It is not just for women in HPC; it is about women in HPC. It’s for the entire community to move forward and to recognize that by building a diverse workforce, we improve scientific output.” An applications consultant in HPC Research and Industry at EPCC at the University of Edinburgh, Collis has been named Inclusivity Committee Chair for SC17.
HPCwire: Hi Toni, first of all, allow me to congratulate you on your recent HPCwire award wins. You received both “The Readers’ and Editors’ Choice for Workforce Diversity Leadership Award” and “The Readers’ Choice for Outstanding Leadership in HPC.” This is a testament to what you and Women in HPC have accomplished. Can you tell us how Women in HPC got its start and what your mission is?
Toni Collis: Women in HPC was started when we realized that nobody was looking at the representation of women in our community. Many of the more traditional fields have huge resources put into addressing the under-representation of women and other groups but nobody was looking at it in our field. Partly because we are part of computer science, we’re part of physics, we’re part of chemistry. But we have our own unique challenges. So that’s where it came from when I realized no one else was looking at it.
When we set out, we set out to find out what the problem was. And what we’ve realized is it’s not just about what the problem is; it’s now far more about building fellowship and advocates and building this message of “let’s move the representation of women forward, let’s put it on the agenda and make it part of what HPC is currently caring about.”
HPCwire: Who is Women in HPC for and how can people get involved?
Collis: Women in HPC is for everyone. And that’s one of the things I’m incredibly passionate about. It is not just for women in HPC; it is about women in HPC. It’s for the entire community to move forward and recognize that by building a diverse workforce, we improve scientific output. So this is about everybody; it’s about the benefits to our community and it’s also about giving women who are here the support and fellowship that they don’t receive if they don’t know any other women. So a lot of what I do is for example connect women with other women so they have a mutual network of support, which men with lots of male colleagues have automatically.
HPCwire: There are a lot of different ways that women can get involved in HPC – it’s not just in computer science and engineering, right?
Collis: Absolutely – I think one of the fascinating things we’re finding with the research we are doing to explore where women are and where they come from into the HPC community is that women seem to be more likely to take an unusual route – they are not just coming from a computer science program with a PhD in computer science. For example, I’m a physicist. Physics is a fairly common route into HPC, but there are many, many women who have come from history and geology and meteorology who are moving in, and it’s about making everybody aware that you don’t have to be a computer science graduate; you don’t even have to have a PhD. Sometimes you don’t even have to have a first degree. It’s about recognizing that everybody has a place here; everybody has something to contribute irrespective of where they come from. What matters is your passion for HPC and what you can contribute now, not your scientific background or lack thereof.
HPCwire: What has the momentum been like and the progress been like for Women in HPC over the last three years? It seems every SC and ISC, WHPC has a larger and larger presence.
Collis: Indeed. I will be honest that when we set out in early 2014 with our launch in Edinburgh, we intended this to be a UK community. We did not expect to go international. We did come to SC14, but really because it was the sensible thing to do — this is where a huge number of people will be, both from the UK and the world. And then we realized that everybody wanted this, not just the UK, and so we have grown from strength to strength. At SC14, we had one workshop and one BOF. This year at SC16, we’ve had a workshop, we’ve had two BOFs, we’ve had several panels. And we’ve been talking at booths across the show floor, raising awareness of the message. And I’m delighted to say that there are now inclusivity and diversity messages embedded in both the ISC program in Frankfurt and SC program as well. That’s testament to the fact that this is a growing community, a growing interest of the HPC environment that we all need to be taking care of this.
What we are hoping to do going forward is provide more activities outside the SC and ISC shows as we recognize that many of the people we are trying to reach don’t have the opportunity to come to these conferences. And actually coming to events where they build their networks is one of the single most important activities for career development – building your network and building your connections – so we do need to provide activities going forward at small events around the world, and that’s something I’m hoping to develop further in 2017.
HPCwire: So as you said, you had a lot of different activities here. You had your workshop, your networking event, your BoFs. Can you share some highlights from those events?
Collis: This is the first time we’ve run a full-day workshop and I admit we had concerns. We were competing with the technical program; do people really want to take a whole day out of the technical workshop and technical tutorial program to spend it with Women in HPC? The answer was a resounding yes. We structured two key core messages in our workshop which we then continued as the week went on in our other activities. The first message has been for early career women to give them the skills, give them the confidence to move forward.
I think a lot of the time we talk about how we need to give women more confidence as if they’re broken, but actually part of the issue is when you’re in a male-dominated environment the reason you need confidence is sometimes it’s incredibly stifling to be in a male-dominated environment, and I think a lot of us who’ve been in this community for a while have forgotten that. The story I always like to tell at this point is of a male colleague of mine who walked into a BoF on Women in HPC a couple years ago and promptly walked out again, telling me afterwards that he felt uncomfortable. He didn’t walk back in until I walked in with him. I think a lot of the time we forget how it feels early on in our careers to walk into a room full of men when you’re the only woman in the room. So our early career program is about helping women feel like they belong at SC, giving them the skills to thrive in a community where they might not feel like they belong.
The other item that we covered extensively in our workshop was methods that work for employers to improve diversity because if we’re talking about this but we’re not offering advice we’re not going to change anything. So we’ve talked a lot about hiring practices, we’ve talked about biases, both explicit and implicit, and the unknown things that are going on – changing culture, embracing diversity in the workplace and giving employers real positive steps they can take to improve things for everybody. And I should point out that when you improve things for women, it really does improve things for everybody as well.
HPCwire: I understand that with everything else you have going on, you are also going to be making an announcement about founding members.
Collis: So we want to work with corporate founding members to recognize the industry sector and all their engagement with what we’re doing. We have been incredibly lucky so far to be supported by industry and what we’re doing because mainly this is a volunteer effort. And what we want to do is work with them to recognize that and to improve diversity as broadly as we can. We have a program that we’re hoping to move forwards in the next couple of months to work with industry partners as founding members of Women in HPC to support the work we’re doing, work with them, to change HPC far more than just in the academic and non-profit sector. It’s moved outside that. Because HPC is fantastic in that a lot of people move back and forth between academia and industry and non-profits so they’re as much a part of that as the non-profit sector is and we want to work with them to move that message forward.
HPCwire: You mentioned that Women in HPC is largely a volunteer effort. Who else is involved in this effort?
Collis: I have a fantastic list of volunteers, far longer than I can recite today. My advisory board in particular from people throughout the world has been absolutely fundamentally important in helping shape the vision of Women in HPC as we’ve grown and evolved because when we set out, we didn’t know what we were taking on to be honest. And helping us with awareness raising, which is one of our single biggest challenges which is to get the message out there, to start this conversation. Our volunteers are crucial to that. Our other volunteers are those who turn up to our events and help run them. To run our Women in HPC workshop on Sunday took 20 volunteers and another 15 mentors. It is a non-trivial exercise and we couldn’t do it without all of those people putting in a non-trivial amount of effort to get them going.
HPCwire: Another big congratulations to you because you’ve also been named the SC17 Inclusivity Committee Chair. This year Trish Damkroger is the Chair for the SC16 Diversity Committee. Next year, the committee program (with a revised name) will be in its second year and you’ll be taking the reins. Would you like to share about how SC is promoting and encouraging diversity?
Collis: The key thing here is that [diversity] is now a core part of the SC message; diversity is important. And [we’re] getting the message across that diversity breeds innovation, breeds scientific output and improves everything for everybody. We want the show floor to be a welcoming place for everybody. We want the papers to reflect the diversity of our community. And at the same time it’s really important to us that we do not dilute the quality of SC. It’s not about favoring one group over another; this is about embedding best practice into our community and recognizing that everybody has something to contribute. Something I was delighted to see happening this year is double-blind reviews in the technical program and I’m delighted to say that they are going to take that forward next year.
We will be doing other things. This year we’ve had child care. We will be expanding that program next year. There’s also a parents’ room. We need far more publicity about the fact that there are facilities here for parents with children. Expanding family day, encouraging more families to come to the show to find out what HPC is all about. And a lot of what the Diversity and Inclusivity Committee will be doing is messaging; it’s about getting the message out, what we’re doing, how we’re counting, measuring and talking about the issues around diversity, talking about the fact that business as usual is not working and we all need to embrace change in order to bring about more inclusivity and more diversity.
The post A Conversation with Women in HPC Director Toni Collis appeared first on HPCwire.
Bio-processor developer Edico Genome is collaborating with storage specialist Dell EMC to bundle computing and storage for analyzing gene-sequencing data.
The partners said this week the platform will be based on Edico Genome’s DRAGEN processor that uses an FPGA for hardware acceleration. That processor will be integrated with a Dell 4130 server for souped up genome analysis along with Dell EMC’s Isilon network attached storage for genomic data storage.
The bundle is touted as enabling analysis of an entire genome in as little as 22 minutes. Standard software currently requires as much as a day to complete a full genomic analysis. The partners said applications for rapid genomic analysis include faster diagnoses for cancer patients and critically ill newborns along with faster results for drug developers and researchers.
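Taking the quoted figures at face value, the implied speedup is easy to check: an analysis that drops from roughly a day to 22 minutes is about a 65x improvement. A quick sanity-check of that arithmetic:

```python
# Back-of-envelope check of the claimed speedup from the quoted
# figures only ("as much as a day" vs. "as little as 22 minutes").
baseline_minutes = 24 * 60   # ~1 day with standard software
accelerated_minutes = 22     # claimed DRAGEN-accelerated runtime
speedup = baseline_minutes / accelerated_minutes
print(f"~{speedup:.0f}x")    # ~65x
```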
Along with the Dell server and Isilon storage, the FPGA-based platform is coupled with the storage giant’s Virtustream storage cloud. The genomics platform also supports third-party cloud providers.
The DRAGEN processor leverages an FPGA to provide hardware-accelerated implementations of genome pipeline algorithms. Hardware-based algorithms are used for mapping, alignment and sorting of genomic data, San Diego-based Edico Genome said.
DRAGEN, which stands for Dynamic Read Analysis for Genomics, is based on a configurable “bio-IT” processor architecture. The flexible platform enables development of custom algorithms as well as refining existing algorithm pipelines.
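For readers unfamiliar with the pipeline stages being accelerated, here is a deliberately tiny, pure-Python sketch of the idea behind the mapping stage: exact k-mer seeding against a reference index. Everything here (the function names, the toy reference string) is an illustrative assumption and is unrelated to Edico’s actual hardware algorithms, which handle inexact matches, quality scores and vastly larger references.

```python
# Toy illustration of seed-based read mapping (NOT Edico's implementation).

def build_index(reference, k=4):
    """Index every k-mer in the reference by its start positions."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def map_read(read, reference, index, k=4):
    """Seed with the read's first k-mer, then verify the full read."""
    hits = []
    for pos in index.get(read[:k], []):
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

reference = "ACGTACGTTAGC"
index = build_index(reference)
print(map_read("ACGTT", reference, index))  # [4]
```

Real mappers replace the exact-match verification with dynamic-programming alignment; DRAGEN’s contribution is moving those inner loops into FPGA logic.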
The partners noted that the DRAGEN engine addresses unmet computing and storage needs for big data genomics. The result is a scalable platform for secondary analysis used in genomics applications along with cheaper storage of soaring genomic data sets.
The FPGA-based platform also addresses the reanalysis of raw sequencing data as updated algorithms and applications are released. That approach has resulted in huge computational loads and associated costs.
The other problem addressed by the engine is big data storage, the partners stressed. Raw data files are typically maintained and duplicate copies of files are retained as backups, generating huge data storage requirements for each initial sample.
The partners said the bundle is available in three tiers ranging from up to 100 terabytes for applications such as neonatal intensive care units implementing genome-sequencing capabilities to 1 petabyte of data throughput for “major sequencing centers.”
The DRAGEN engine also is being offered in on-premise, cloud and hybrid cloud versions. The hybrid version allows genomic big data to be processed, stored and moved to the cloud.
Dell EMC said its bundled Isilon storage platform consolidates large, semi-structured file-based workloads into a single system that supports Edico’s multiple DRAGEN pipeline workflows. The single volume storage system supports the DRAGEN input formats as well as industry standard output formats.
The partners further asserted that the bundled approach reduces the need for clusters of larger servers, thereby reducing costs related to storage space and IT infrastructure.
In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the high performance computing community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!
The Gauss Centre for Supercomputing (GCS) has awarded Britta Nestler the 2017 Gottfried Wilhelm Leibniz Prize. Nestler was recognized for her research in computer-based materials science and her efforts in developing new material models. She is a professor at the Karlsruhe Institute of Technology and used GCS HPC tools to assist her research.
“For our research and progress in computational materials science, supercomputing centers are the essential infrastructure to conduct extreme-scale computations. The high performance computing power provided by GCS enables our team to explore new dimensions of microstructure simulations and to gain insight into complex multiphysics and multiscale processes in material systems under various influences. I am very thankful for the support and professional assistance of GCS that my team has experienced over the past years, facilitating our research on up-to-date systems,” said Nestler.
Steve Lionel, also known as “Dr. Fortran,” has announced his retirement. Lionel ends his career with Intel where he spent the last 15 years. Prior to joining Intel in 2001, Lionel worked as a consulting software engineer at the Digital Equipment Corporation. His career has revolved around the Fortran business and he joined the Fortran standards committee in 2008.
In a blog post announcing his retirement, Lionel writes, “I’ve decided that, at the end of 2016, I’m going to retire, after more than 38 years in the Fortran business. It was a difficult decision – I love the work I do and the people I work with – both inside and outside Intel – but I had been thinking about this for a while and it just felt like the right thing to do. I want to emphasize that Intel very much wanted me to stay, but I’m comfortable with my choice.”
Stacy Repult has been named CFO of Nimbix where she will oversee all corporate financial accounting, HR, and financial operations for the company. She joins Nimbix from Omnitracs where she served as the vice president of finance.
“I am excited to join the Nimbix team and look forward to helping to manage the financial controls and operational discipline which is paramount to our continued success in scaling the business and growing profitably,” said Repult.
Sunita Chandrasekaran has been honored with the 2016 IEEE-CS TCHPC Award for Excellence for Early Career Researchers in High Performance Computing. The award recognizes those who have made remarkable contributions in the HPC field within five years of receiving their doctoral degrees. Chandrasekaran is an assistant professor of computer science at the University of Delaware. She is one of three winners nationwide.
“Sunita is very deserving of this award,” says Kathleen McCoy, chair of the Department of Computer and Information Sciences. “She brings tremendous enthusiasm to her teaching, and she is forging valuable collaborations with researchers at national labs and in industry.”
The IEEE Board of Directors has named Jeffrey Vetter an IEEE Fellow. Vetter is a researcher at the Department of Energy’s Oak Ridge National Laboratory (ORNL) and was recognized for his accomplishments in computational science, specifically “for contributions to high performance computing.” Vetter is a distinguished R&D staff member at ORNL and is also the founding group leader of the Future Technologies Group in the Computational Mathematics Division of ORNL.
Do you know someone that should be included in next month’s list? If so, send us an email at Thomas@taborcommunications.com. We look forward to hearing from you.
SUNNYVALE, Calif., Jan. 6 — AMD (NASDAQ: AMD) has unveiled preliminary details of its forthcoming GPU architecture, Vega. Conceived and executed over 5 years, Vega architecture enables new possibilities in PC gaming, professional design and machine intelligence that traditional GPU architectures have not been able to address effectively. Data-intensive workloads are becoming the new normal, and the parallel nature of the GPU lends itself ideally to tackling them. However, processing these huge new datasets requires fast access to massive amounts of memory. The Vega architecture’s revolutionary memory subsystem enables GPUs to address very large data sets spread across a mix of memory types. The high-bandwidth cache controller in Vega-based GPUs can access on-package cache and off-package memories in a flexible, programmable fashion using fine-grained data movement.
“It is incredible to see GPUs being used to solve gigabyte-scale data problems in gaming to exabyte-scale data problems in machine intelligence. We designed the Vega architecture to build on this ability, with the flexibility to address the extraordinary breadth of problems GPUs will be solving not only today but also five years from now. Our high-bandwidth cache is a pivotal disruption that has the potential to impact the whole GPU market,” said Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD.
Highlights of the Vega GPU architecture’s advancements include:
- The world’s most advanced GPU memory architecture: The Vega architecture enables a new memory hierarchy for GPUs. This radical new approach comes in the form of a new high-bandwidth cache and its controller. The cache features leading-edge HBM2 technology which is capable of transferring terabytes of data every second, doubling the bandwidth-per-pin over the previous generation HBM technology. HBM2 also enables much greater capacity at less than half the footprint of GDDR5 memory. Vega architecture is optimized for streaming very large datasets and can work with a variety of memory types with up to 512TB of virtual address space.
- Next-generation geometry pipeline: Today’s games and professional applications make use of incredibly complex geometry enabled by the extraordinary increase in the resolutions of data acquisition devices. The hundreds of millions of polygons in any given frame have meshes so dense that there are often many polygons being rendered per pixel. Vega’s next-generation geometry pipeline enables the programmer to extract incredible efficiency in processing this complex geometry, while also delivering more than 200% of the throughput-per-clock over previous Radeon architectures. It also features improved load-balancing with an intelligent workload distributor to deliver consistent performance.
- Next-generation compute engine: At the core of the Vega architecture is a new, next-generation compute engine built on flexible compute units that can natively process 8-bit, 16-bit, 32-bit or 64-bit operations in each clock cycle. These compute units are optimized to attain significantly higher frequencies than previous generations and their support of variable datatypes makes the architecture highly versatile across workloads.
- Advanced pixel engine: The new Vega pixel engine employs a Draw Stream Binning Rasterizer, designed to improve performance and power efficiency. It allows for “fetch once, shade once” of pixels through the use of a smart on-chip bin cache and early culling of pixels invisible in a final scene. Vega’s pixel engine is now a client of the onboard L2 cache, enabling considerable overhead reduction for graphics workloads which perform frequent read-after-write operations.
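At a very high level, the high-bandwidth cache controller described above does what any demand-fetch cache does: keep hot pages in fast on-package memory, fetch from larger off-package memory on a miss, and evict stale entries. A generic LRU page-cache simulation in Python (the class and all names are hypothetical illustrations, not AMD’s controller logic):

```python
from collections import OrderedDict

class PageCache:
    """Generic LRU demand-fetch cache: small fast store over a big slow one."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> data, in recency order
        self.hits = 0
        self.misses = 0

    def access(self, page_id, backing_store):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # mark most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[page_id] = backing_store[page_id]  # "slow" fetch
        return self.pages[page_id]

backing = {i: f"page-{i}" for i in range(1000)}  # large off-package memory
cache = PageCache(capacity=4)                    # small on-package cache
for p in [0, 1, 2, 0, 3, 4, 0]:                  # streaming access pattern
    cache.access(p, backing)
print(cache.hits, cache.misses)  # 2 5
```

The point of the sketch is only the structure of the idea; Vega’s controller additionally does fine-grained, programmable data movement across multiple memory types in hardware.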
GPU products based on the Vega architecture are expected to ship in the first half of 2017.
For more than 45 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies ― the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses, and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, Facebook and Twitter pages.
On Wednesday, after rampaging through China’s top Go playing landscape for seven days and defeating many of the world’s top players online, a mystery player named Master was unveiled as Google’s AlphaGo. Yes, that AlphaGo. Apparently, there’s a new (sort of) top dog in the world of Go today.
A report in today’s Wall Street Journal (Humans Mourn Loss After Google Is Unmasked as China’s Go Master) looks at AlphaGo’s latest exploits. Here’s a brief excerpt:
“Master played with inhuman speed, barely pausing to think. With a wide-eyed cartoon fox as an avatar, Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China.
“It was clear by then that Master must be a computer. But whose computer? Master revealed itself Wednesday as an updated version of AlphaGo, an artificial-intelligence program designed by the DeepMind unit of Alphabet Inc.’s Google.” Master’s record—60 wins, 0 losses over the seven days ending Wednesday.
Back in March the Google AlphaGo platform shook the Go world with a decisive win over Lee Sedol of South Korea, one of the world’s top Go players. Go is an ancient Chinese strategy board game whose number of possible move sequences is vast – 10^761 compared with the 10^120 possible in chess – making it an extremely complex game despite its relatively simple rules. The final score in the AlphaGo–Sedol match, which lasted roughly a week, was 4-1.
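Those exponents are worth pausing on. A quick sanity check of the gap, using the article’s own figures:

```python
# The article's figures: ~10^761 possible games of Go vs. ~10^120 for chess.
go_games = 10 ** 761
chess_games = 10 ** 120

# Order of magnitude of the ratio: Go's figure is 10^641 times larger.
ratio_magnitude = len(str(go_games // chess_games)) - 1
print(ratio_magnitude)  # 641
```

In other words, the gap between the two games is itself hundreds of orders of magnitude – which is why brute-force search, viable in chess, is hopeless in Go.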
More from the WSJ: “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”
Link to earlier HPCwire article (Google’s AlphaGo Defeats Go Star Lee Sedol): https://www.hpcwire.com/2016/03/16/googles-alphago-defeats-go-champion-lee-sadol/
The post Humans Beware: There’s a New Alpha Dog on the Prowl appeared first on HPCwire.
Hewlett Packard Labs researchers have put together a test optical chip that implements an Ising machine, according to an article in IEEE Spectrum this week. Ising machines are based on a century-old model of magnetic spin, repurposed for computation. A variety of technical and scaling problems have so far hampered their development. The work by HP Labs (now part of Hewlett Packard Enterprise) uses optical components and may help overcome past obstacles.
Here’s a brief excerpt from the IEEE Spectrum article (HPE’s New Chip Marks a Milestone in Optical Computing):
Silicon integrated circuits containing parts that can manipulate light are not new. But this chip, which integrates 1,052 optical components, is the biggest and most complex in which all the photonic components work together to perform a computation, says team member Dave Kielpinski, a senior research scientist at Hewlett Packard Labs (now a part of Hewlett Packard Enterprise, or HPE). “We believe that it is by a wide margin,” he says.
The chip, which was developed through the U.S. Defense Advanced Research Projects Agency’s Mesodynamic Architectures program and was still undergoing testing as IEEE Spectrum went to press, is an implementation of an Ising machine—an approach to computation that could potentially solve some problems, such as the infamous “traveling salesman problem,” faster than conventional computers can.
The Ising approach is based on a century-old model for how the magnetic fields of atoms interact to give rise to magnetism. The model envisions every atom as having a property called “spin” that prefers to point either up or down. In a ferromagnetic material, above a certain temperature, these spins are oriented randomly and are flipped repeatedly by heat. But when the temperature falls below a certain threshold, the interactions between the atoms dominate, and most of the spins settle down to point in the same direction.
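The settling behavior described above is easy to reproduce in software. Below is a minimal Metropolis Monte Carlo sketch of the 2D ferromagnetic Ising model – the conventional software approach, not HPE’s photonic hardware – showing spins staying aligned at low temperature and disordering at high temperature:

```python
import math
import random

def ising_step(spins, n, beta):
    """One Metropolis update on an n x n grid of +/-1 spins (coupling J = 1)."""
    i, j = random.randrange(n), random.randrange(n)
    # Sum of the four nearest neighbors, with periodic boundary conditions.
    neighbors = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
                 spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
    dE = 2 * spins[i][j] * neighbors  # energy cost of flipping spin (i, j)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        spins[i][j] *= -1

def magnetization(spins):
    n = len(spins)
    return abs(sum(sum(row) for row in spins)) / (n * n)

def run(n, beta, steps, seed=0):
    random.seed(seed)
    spins = [[1] * n for _ in range(n)]  # start fully aligned
    for _ in range(steps):
        ising_step(spins, n, beta)
    return magnetization(spins)

print(run(16, beta=1.0, steps=100000))  # cold (high beta): stays near 1, ordered
print(run(16, beta=0.2, steps=100000))  # hot (low beta): falls toward 0, disordered
```

An Ising machine runs this logic in reverse: an optimization problem is encoded in the couplings between spins, and the hardware’s natural relaxation toward low-energy configurations does the searching.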
Turning Ising machines into practical realities has proven difficult. Acoustic noise, for example, can disrupt the optical processing circuits used, requiring complicated feedback mechanisms to make adjustments. The HPE chip does not require such feedback. More detail is spelled out in the IEEE article.
The post HPE Testing an Optical Chip that Implements Ising Machine appeared first on HPCwire.
Jan. 5 — Nor-Tech just announced the integration of Intel Omni-Path into its HPC clusters, its leading-edge demo cluster, and simulationclusters.com, its web utility for demonstrating the ROI of upgrading from a workstation to a cluster.
For more than a decade, Nor-Tech has been building easy-to-deploy HPC clusters with no-wait-time support for the world’s finest research institutions, Fortune 100 businesses and exciting innovators with applications that include aerospace, automotive, life sciences and a range of enterprise and scientific uses. Nor-Tech’s cluster capabilities range from 1,000+ core supercomputers to smartly optimized entry-level clusters.
Nor-Tech President and CEO David Bollig said, “Intel works hard to continually stay on the leading edge. Our partnership has also allowed us to stay on the forefront of innovation and deliver maximum value to our clients.”
The Intel Omni-Path Architecture (Intel OPA), an element of Intel Scalable System Framework, can scale to tens of thousands of nodes at a price competitive with today’s fabrics. The Intel OPA 100 Series product line is an end-to-end solution of PCIe adapters, silicon, switches, cables, and management software.
Intel OPA is designed specifically to address issues of poor HPC performance and expensive scaling in existing standards-based high performance fabrics. The enhancements of Intel OPA will facilitate the progression toward Exascale while cost-effectively supporting clusters of all sizes.
Intel Omni-Path, along with Intel Orchestrator and other utilities, is integrated into Nor-Tech’s innovative demo cluster and is also available for a test drive at simulationclusters.com, a collaboration between Intel, Dassault Systèmes, and Nor-Tech. As with Nor-Tech’s demo cluster, this is a no-cost, no-strings opportunity.
Nor-Tech’s HPC clusters pair well with Omni-Path and are available in iterations that include datacenter clusters, ultra-quiet clusters, ruggedized portable clusters, entry-level clusters and clusters with a customized combination of attributes. Recent orders for Nor-Tech’s HPC clusters include a major commercial aircraft manufacturer, over 20 large universities, pharmaceutical companies, and vehicle manufacturers.
A 2016 HPCwire award finalist, Nor-Tech is renowned throughout the scientific, academic, and business communities for easy-to-deploy turnkey clusters and expert, no-wait-time support. All of Nor-Tech’s technology is made by Nor-Tech in Minnesota and supported by Nor-Tech around the world. In addition to HPC clusters, Nor-Tech’s custom technology includes workstations, desktops, and servers for a range of applications including CAE, CFD, and FEA. Nor-Tech engineers average 20+ years of experience and are responsible for significant high performance computing innovations. The company has been in business since 1998 and is headquartered in Burnsville, Minn., just outside of Minneapolis. To contact Nor-Tech call 952-808-1000/toll free: 877-808-1010. To take advantage of the demo cluster, visit: http://www.nor-tech.com/solutions/hpc/demo-cluster. The online utility is at: http://www.simulationclusters.com.
The post Nor-Tech Announces Integration of Intel Omni-Path into HPC Clusters appeared first on HPCwire.
Jan. 5 — Four of the world’s best professional poker players will compete against artificial intelligence developed by Carnegie Mellon University in an epic rematch to determine whether a computer can beat humans playing one of the world’s toughest poker games.
In “Brains Vs. Artificial Intelligence: Upping the Ante,” beginning Jan. 11 at Rivers Casino, poker pros will play a collective 120,000 hands of Heads-Up No-Limit Texas Hold’em over 20 days against a CMU computer program called Libratus.
The pros — Jason Les, Dong Kim, Daniel McAulay and Jimmy Chou — are vying for shares of a $200,000 prize purse. The ultimate goal for CMU computer scientists, as it was in the first Brains Vs. AI contest at Rivers Casino in 2015, is to set a new benchmark for artificial intelligence.
“Since the earliest days of AI research, beating top human players has been a powerful measure of progress in the field,” said Tuomas Sandholm, professor of computer science. “That was achieved with chess in 1997, with Jeopardy! in 2009 and with the board game Go just last year. Poker poses a far more difficult challenge than these games, as it requires a machine to make extremely complicated decisions based on incomplete information while contending with bluffs, slow play and other ploys.”
A previous CMU computer program, called Claudico, collected fewer chips than three of the four pros who competed in the 2015 contest. The 80,000 hands played then proved to be too few to establish the superiority of human or computer with statistical significance, leading Sandholm and the pros to increase the number of hands by 50 percent for the rematch.
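The sample-size arithmetic behind that decision is straightforward: a small per-hand edge has to outgrow noise that shrinks only with the square root of the number of hands. A rough sketch, with illustrative numbers that are assumptions rather than figures from the contest:

```python
import math

def hands_needed(edge_bb_per_hand, stdev_bb_per_hand, z=1.96):
    """Roughly how many hands are needed before an edge of the given size
    clears a z-sigma noise band, treating hands as i.i.d. (a simplification
    of real match statistics)."""
    return math.ceil((z * stdev_bb_per_hand / edge_bb_per_hand) ** 2)

# A 5-big-blinds-per-100-hands edge (0.05 bb/hand) against a per-hand
# standard deviation of ~10 bb needs on the order of 150,000 hands to
# reach 95 percent confidence.
print(hands_needed(0.05, 10.0))
```

This is why 80,000 hands left the 2015 result statistically inconclusive, and why even 120,000 hands only resolves the question if the gap between human and machine is large enough.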
“I’m very excited to see what this latest AI is like,” said Les, a pro based in Costa Mesa, Calif. “I thought Claudico was tough to play; knowing the resources and the ideas that Dr. Sandholm and his team have had available in the 20 months since the first contest, I assume this AI will be even more challenging.”
Brains Vs. AI is sponsored by GreatPoint Ventures, Avenue4Analytics, TNG Technology Consulting GmbH, the journal Artificial Intelligence, Intel and Optimized Markets, Inc. Carnegie Mellon’s School of Computer Science has partnered with Rivers Casino, the Pittsburgh Supercomputing Center (PSC) through a peer-reviewed XSEDE allocation, and Sandholm’s Electronic Marketplaces Laboratory for this event.
“We were thrilled to host the first Brains Vs. AI competition with Carnegie Mellon’s School of Computer Science at Rivers Casino, and we are looking forward to the rematch,” said Craig Clark, general manager of Rivers Casino. “The humans were the victors last time, but with a new AI from the No. 1 graduate school for computer science, the odds may favor the computer. It will be very interesting to watch and see if man or machine develops an early advantage.”
Les said it’s hard to predict the outcome. Not only is the AI presumably better, but the pros themselves are playing better.
“From the human side, poker has gotten much tougher in the last 20 months,” Les said. That’s because pros generally have embraced publicly available game theory tools that have elevated game play, he explained.
“Though some casual poker fans may not know all of them, Les, Kim, McAulay and Chou are among the very best Heads-Up No-Limit Texas Hold’em players in the world,” said Phil Galfond, a pro whose total live tournament winnings exceed $2.3 million and who owns the poker training site Runitonce.com. Unlike the multi-player poker tournaments popular on television, professional one-on-one No-Limit Texas Hold’em is often played online.
“Your favorite poker player almost surely wouldn’t agree to play any of these guys for high stakes, and would lose a lot of money if they did,” Galfond added. “Each of the four would beat me decisively.”
The Libratus AI encompasses new ideas and is being built with far more computation than any previous pokerbot, Sandholm said. To create it, he and his Ph.D. student Noam Brown started from scratch.
“We don’t write the strategy,” Sandholm said. “We write the algorithm that computes the strategy.” He and Brown have developed a new algorithm for computing strong strategies for imperfect-information games and are now using the Pittsburgh Supercomputing Center’s Bridges supercomputer to calculate what they hope will be the winning strategy.
“We’re pushing on the supercomputer like crazy,” Sandholm said, noting they have used around 15 million core hours of computation to build Libratus, compared with the 2-3 million core hours used for Claudico. That computing process will continue up to and during the contest.
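The article doesn’t name the algorithm behind Libratus, but the standard family for computing strategies in imperfect-information games is counterfactual regret minimization (CFR), whose core update is regret matching. A toy illustration on rock-paper-scissors, where the equilibrium strategy is the uniform mix:

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff to the player choosing a against b: +1 win, -1 loss, 0 tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def expected_payoff(a, opp_strategy):
    return sum(opp_strategy[b] * payoff(a, b) for b in range(ACTIONS))

def train(iterations):
    # Asymmetric starting regrets so the dynamics don't sit at equilibrium.
    regrets = [[1.0, 0.0, 0.0], [0.0] * ACTIONS]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        for p in range(2):
            opp = strategies[1 - p]
            ev = sum(strategies[p][a] * expected_payoff(a, opp)
                     for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += expected_payoff(a, opp) - ev
                strategy_sum[p][a] += strategies[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # time-averaged strategy

print([round(p, 2) for p in train(100000)])  # close to [0.33, 0.33, 0.33]
```

Note that it is the time-averaged strategy, not the final one, that converges toward equilibrium – the per-iteration strategies cycle. In CFR proper this update runs at every decision point of a poker-sized game tree, which is where the millions of supercomputer core hours go.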
The entire article can be found here.
Source: Byron Spice, Carnegie Mellon School of Computer Science
The post Top Poker Pros Face Off Against Artificial Intelligence appeared first on HPCwire.
Twenty years ago high performance computing was nearly absent from life sciences. Today it’s used throughout life sciences and biomedical research. Genomics and the data deluge from modern lab instruments are the main drivers, but so is the longer-term desire to perform predictive simulation in support of Precision Medicine (PM). There’s even a specialized life sciences supercomputer, ‘Anton’ from D.E. Shaw Research, and the Pittsburgh Supercomputing Center is standing up its second Anton 2 and actively soliciting project proposals. There’s a lot going on.
“The ultimate goal is to simulate your body,” says Ari Berman, vice president and general manager of life sciences computing consultancy BioTeam, whose clients span government, academia, and biopharma. “If we can do that, we can target what’s wrong and not only increase lifespan but also our health-span. The health span is still quite short in the US even though people are living longer.”
Of course we aren’t there yet, but the needle is moving in the right direction and fast according to Berman. In this conversation with HPCwire, Berman examines the changing dynamics in HPC use in life sciences research today along with thoughts on the future. In 2015, he predicted that 25 percent of life scientists would require access to HPC resources – a forecast he now says was correct. By the end of 2017 the number will rise to around 35 percent and could be 50 percent by the end of 2018. The reason uptake has been relatively slow, says Berman, is that HPC remains an unfriendly or at least unfamiliar place for the majority of LS researchers.
Long-term it won’t matter. The science requires it and the emergence of scientist-friendly gateways such as CyVerse (formerly iPlant) are accelerating HPC adoption in life sciences by simplifying access, says Berman. In this whirlwind tour of HPC in the life sciences, Berman talks about several important themes, starting with broad trends in HPC use followed by specific trends in key technologies:
- Life Science HPC Use Today
- Genomics Data isn’t the Biggest Driver
- Trends in LS Core Compute – Think Density
- Data Management & Storage Challenge
- Networking – Not So Easy Anymore
- Processor Frenzy? Not!
Theme 1 – Spreading HPC Use Via Portals; LS’s Changing Data Drivers
Characterizing HPC use in life sciences can be problematic, notes Berman: “It depends on what you define as HPC. If spinning up a CfnCluster at Amazon is HPC, then the number has grown a lot larger. If we are looking at traditional HPC facilities – wholly owned datacenters managed by HPC technicians – then it’s a bit smaller just because those resources aren’t as readily available. So I am going to go with the wider definition on this one because a lot of HPC these days is being done in the various clouds under various conditions. In cancer research, for instance, they’ve got a full data commons project running out of NCI, and each of those has a really nice graphic interface for the use of HPC resources, both on-prem and in the cloud. I think things like that are going to become more prevalent.
“In 2015 I thought that 25 percent of life scientists would require access to HPC and at least at NIH that is absolutely true and at most places it was true. I verified [the estimate] with the guys who run the NIH HPC resources at Biowulf (main NIH HPC cluster, ~60K cores). We’ve had a number of accounts that have gone exactly the same way. At this point the biggest rate-limiting factor is the lack of knowledge of command line and how to operate with it among lab scientists. I believe that life sciences usage would be more like half if that wasn’t a barrier. HPC is traditionally not an easy thing to use, even when you are not writing your own software.
“What we’re starting to evangelize and I think that what’s going to happen is the proliferation of science gateways, this idea that was started by Nancy Wilkins-Diehr (Assoc. Dir., San Diego Supercomputer Center). That idea is going to continue to grow but on a wider scale and enable bench scientists who just don’t have the sophistication or time to learn command line and queuing systems but want to get to some standard stuff on really high powered computers. We’re building a few of these gateways for some customers to enable wide scale HPC access in very unsophisticated computational environments. I think that will bring down the barrier for general usage in life sciences.
“For 2017 I’m going to go a bit more conservative than I want to and say it will probably jump to 35 percent – another ten points – as the availability and use of those resources goes way up by not requiring command line access and by enabling the use of common tools. In fact there’s likely going to be a period of abstraction where some life scientists don’t know they are using HPC, but are, with resources like CyVerse and other portals that actually access and utilize high performance computing on the back end. My guess is by the end of 2018 at least half will be using HPC.”
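The abstraction Berman describes is mechanically simple: a gateway turns a web form into a batch job the user never sees. A minimal sketch of what a portal’s “run alignment” button might do behind the scenes – the partition name, reference path, and bwa command line are illustrative assumptions, not taken from any particular gateway:

```python
import subprocess

def render_script(fastq_path, genome="hg38", cpus=16):
    """Render the batch script a gateway might generate from form fields."""
    return (
        "#!/bin/bash\n"
        "#SBATCH --partition=normal\n"
        f"#SBATCH --cpus-per-task={cpus}\n"
        f"bwa mem -t {cpus} /refs/{genome}.fa {fastq_path} > {fastq_path}.sam\n"
    )

def submit(script):
    """Hand the rendered script to Slurm; the user sees only a job id,
    never the command line or the queueing system."""
    result = subprocess.run(["sbatch"], input=script, text=True,
                            capture_output=True)
    return result.stdout.strip()

print(render_script("/data/sample.fastq"))
```

Everything HPC-specific – scheduler directives, core counts, file staging – lives in the template, which is exactly the knowledge bench scientists currently have to acquire by hand.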
Theme 2 – Genomics Data is No Longer Top HPC Need Driver
“What’s happening now is that genomics is not the only big kid on the block. [In large measure] sequencing platforms are normalizing in terms of data output – the style, quantity and size of the files being generated – and are starting to standardize a bit. Now the optical technology that made next generation sequencing possible is moving to other devices, such as microscopes, creating new streams of data.
“So the new heavy hitters are these light sheet microscopes and one of these devices with like 75 percent usage can generate up to 25TB of data a week and that’s more than sequencers. And it does it easily and quickly and gives you just enormous amounts of image data and there’s a whole number of these things hitting the lab. I can tell you, as a person who formerly spent lots of time on confocal microscopes, I would have loved these because it saves you an enormous amount of time and gives you higher resolutions.
“Light sheet microscopy is one of the things displacing next gen sequencing as a leading generator of data; closely behind that is cryogenic electron microscopy (cryo-EM), where they use very high resolution scanning electron microscopes against cryopreserved slices of tissue. This allows them not to have to fix and stain a section of tissue or whatever they are looking at. It allows them to just freeze it very quickly and look at it without chemical modifications, which allows researchers to do things like actually see protein structures of viruses, DNA, very small molecules, and all sorts of interesting things. Cryo-EM instruments can generate 5TB of data per day. So there’s a lot of data coming out, and the analysis of that information is expanding quite a bit as well – it’s all image recognition. Really the imaging field is nipping at the heels of, if not surpassing, the data generation potential of next generation sequencing.”
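Taking the quoted rates at face value (sustained utilization, which real instruments won’t maintain), a quick normalization shows the scale involved:

```python
# Instrument data rates quoted in the interview, put on a common unit.
light_sheet_tb_per_day = 25 / 7  # "up to 25TB of data a week"
cryo_em_tb_per_day = 5.0         # "5TB of data per day"

# Two such instruments running flat out approach 3 PB per year.
yearly_tb = (light_sheet_tb_per_day + cryo_em_tb_per_day) * 365
print(round(light_sheet_tb_per_day, 1), round(yearly_tb))  # 3.6 3129
```

Even heavily discounted for real-world duty cycles, a handful of imaging instruments can outpace what a sequencing core produces – which is the storage and data management problem Berman returns to below.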
Managing and analyzing this imaging data has moved life sciences computing beyond traditional genomics and bioinformatics and gets into phenotyping and correlation and structural biology – all of which require more computational power, specifically HPC. “These other types of research domains extend the capability for using HPC for primary analysis and for just plain data management for these volumes of data. You can’t do it on a workstation.”
As a brief interesting aside, Berman suggests the capability of applying machine learning to this kind of imaging data analysis is still fairly limited. The new flood of imaging data is certainly driving increased GPU use (touched on later in this article) but use of ML to interpret the imaging data isn’t ready for prime time.
“The problem with machine learning is that the more complex the model the less likely it is to resolve. The number of variables you have in any sort of supervised or unsupervised machine learning model – supervised does better with a greater number of variables if you train it first – but the problem with using ML for region of interest selection and things like that is the variability can be incredibly high in life sciences. You are not looking for something that is necessarily of a uniform shape or size or color variation things like that.
“The more tightly you can define your matrix in a machine learning algorithm the better it works. So the answer is maybe. I am sure someone is trying but I don’t know of it, certainly Facebook does this to some degree. But faces are an easy-to-predict shape out of a lot of noise so selecting a region of interest of a face out of a picture is a much easier thing than trying to select for something that no one really knows what it looks like and isn’t easy to picture, like a virus. Maybe over time that model can be built.”
Interestingly, there is currently a major effort to advance machine learning infrastructure as part of the NCI Cancer Moonshot program (See HPCwire article, Enlisting Deep Learning in the War on Cancer). Hopes are high but it is still early days there.
Theme 3 – Trends in LS Core Compute: Think Density
“Core compute in life sciences was pretty uninteresting for quite a while. It was a solved problem and easy to do. The challenge in life sciences was the heterogeneity of the systems and configurations because they handle an incredibly wide range of computational needs. It generally had very little to do with the CPUs and more to do with I/O capacity, memory bandwidth, memory availability and things like that.
“But it’s pretty clear that we have finally started to reach the point we just can’t cram any more transistors in a CPU without making it slower and taking more energy,” says Berman echoing the mantra heard throughout HPC these days. “The operating frequency of CPUs has flattened and started actually to go down. They are still getting faster but that’s because they’re making more optimizations than in the past. We are also at the point where you really can’t get that many more cores on a die without significantly affecting your power budget and your cooling and things like that. I’m not sure there’s going to be a lot more core density coming up in the future, but compute requirements continue to increase and density matters in that case.”
One result, he says, is a growing push, at least with regard to space and energy budgets, towards greater system density. Again, this is something rippling through all of advanced scale computing generally and not restricted to life sciences.
“I was doing a tour of the San Diego Supercomputer Center [recently] and was amazed at the compute density. I’d seen Comet there before, but it’s so tiny, yet it has almost as many cores as Stampede (TACC), which takes up eight full-length aisles in the datacenter. Comet takes two small aisles. It’s a really interesting comparison to see how the density of compute has increased. I think that’s one of the things that is going to catch on more. You’re going to just have to cram more system-level architectures into a smaller space. Unfortunately that means quite a lot of power and cooling to deal with. My guess is at some point people are going to say air is a terrible way to cool things, and the ultra-high-density designs that Cray and SGI and those folks do that are water cooled are probably going to catch on more in this space to improve the density and decrease energy needed.”
Asked if this was just a big lab phenomenon, Berman said, “Honestly, I think that same trend, at least for the hyper-density compute, is taking hold for local on-prem stuff as well as the national labs, and for the same reasons: power is expensive, space is at a premium, and if you are going to make an investment you want to shove as much into a rack as possible. [Not only supercomputers] but I think local on-premise deployments are going to start to adopt, if they can, the use of 48U racks instead of 42U racks where you can just get more stuff into them. I’ve seen a number of smaller centers and server rooms being renovated to be able to handle those rack sizes because it changes how you can wire up your room and cool it.
“Another trend is that GPUs have caught on in a really big way in life sciences for a number of applications and especially with all the imaging. The deconvolution matrices and some other resolution enhancement tools can be very much GPU-driven and I think that as more and more imaging comes into play the need for GPUs to process the data is going to be key. I am seeing a lot more GPUs go in locally and that’s a small number of nodes.”
Back in 2015, Berman talked about the diversity of nodes – fat and thin – being used in life sciences and the fact that many core compute infrastructures were being purpose-built for specific use cases. That practice is changing, he says.
“As far as the heterogeneity of nodes used, that seems to be simplifying down to just a few building blocks: standard compute nodes – thin nodes, though not terribly thin, with something like 16 to 20 cores – and high-memory nodes ranging from 1.5TB to 6TB that make up some portion, maybe 5 to 10 percent, of the cluster. Then you have GPU nodes; sometimes they are spread evenly through the cluster [and are] just assigned by a queue, or they are dedicated high-density nodes.”
Berman says the latest generation of GPUs, notably NVIDIA’s Pascal P100, will be game changers for applications in the molecular dynamics and simulation space. “The P100s have come out in really hyper-dense offerings, something like 180 teraflops of performance in a single machine. Just insane. So those are incredibly expensive, but people who are trying to be competitive with something like an Anton are going to start updating [with the new GPU systems].”
Given CPU bottlenecks, it’s perhaps not surprising Berman is also seeing efforts to reduce overhead on system tasks. “We are seeing, at least in newer applications, more use of Intel’s on-package features, namely the encryption offloading and the data plane where you literally take network transmission and offload it from the CPU. I think that when Intel comes out with chips with on-package FPGAs [to handle those tasks], that might change things a lot.”
Berman is less hopeful about FPGAs as LS application accelerators. “I’m not sure it’s going to accelerate algorithms because there’s still a lot involved in configuring an FPGA to do an algorithm. I think that’s why they haven’t really caught on in life sciences. And I don’t think they will, because the speed increase isn’t worth the effort. You might as well just get a whole lot more compute. I think that performing system-level things – imagine a Linux kernel starting to take advantage of FPGAs for stuff that is very high overhead – makes sense.”
One persistent issue dogging FPGA use in life sciences, he says, is the constant change of algorithms. That hasn’t stopped companies from trying. Convey Computer, for example, had a variety of FPGA-based bioinformatics solutions. A more recent hopeful is Edico Genome and its DRAGEN processor (board and FPGA) which has a couple of marquee wins (e.g. Earlham Institute, formerly TGAC).
“I see this about every two years where someone [an FPGA-based solution provider] will get into four or five high impact environments but usually not more. People have realized that doing chip customization and hardware description languages is not a common skill set. And it’s an expensive skill set. We’ve talked to them (DRAGEN). We probably still are going to get one of their units in our lab as a demo and really check it out. Because it does sound really promising, but still the field hasn’t normalized on a set of algorithms that are stable enough to release a stable package in an FPGA. I honestly think it’s less about the viability of the technology and more about the sheer sprawl of algorithms in the field. The field is not mature enough for it yet.”
Theme 4 – The Data Management & Storage Challenge
Perhaps not surprisingly, “Storage and data management are still the two biggest headaches that BioTeam runs into. The really interesting thing is that data management is becoming sort of The Issue, really fast. People were sort of hemming and hawing about it – it doesn’t really matter – but really this year data management became a real problem for most people and there’s no solution for it.
“On storage itself, Aaron Gardner (senior scientific consultant, BioTeam) and I just gave a half-day workshop on the state of storage in life sciences. It’s such a complex field right now because there are all these vendors, all offering something, and they all think their thing is the greatest. The reality is there are – I think we came up with 48 – viable, active types of file systems out there that people are using actively in life sciences. And they all have vastly different characteristics – management potential, scalability, throughput speed, replication, data safety, all that stuff.”
“We saw a surge of Lustre for a little bit and then everyone realized it is simply not ready for life sciences. The roadmap looks really good. But we’ve built a number of these and installed a number of these and it’s just not there. There are too many problems. It was very much designed for high volume, highly parallel workloads, and not for the opposite, which is what a lot of life sciences is running. Things like single-client throughput being deliberately low make Lustre nearly useless in the life sciences environment. So I am seeing a fall-off there, with people moving to GPFS, which can work well in most environments – and honestly the code is more mature and there’s better support.”
Data hoarding continues in life sciences – no one is willing to discard data – and that’s prompting a need for careful tiering of storage, says Berman. “Tier 1 and 2 should be picked with the smallest possible storage footprint and have only active data, and combined with another much larger tier that is less expensive where people store stuff. Those other tiers are turning out to be anything from scale out NAS to even object storage. It’s an incredibly complicated environment and once you tier, you still want to make it appear as a single namespace because otherwise you are very much complicating the lives of your users.”
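The shape of such a tiering policy is simple even if production implementations are not. A minimal sketch – hypothetical mount points and thresholds, nothing vendor-specific – that demotes idle files to a cheaper tier while preserving relative paths, so the tiers can still be presented as one namespace:

```python
import os
import shutil
import time

# Hypothetical mount points and threshold; a production deployment would
# drive this from a policy engine such as iRODS rather than a script.
HOT_TIER = "/storage/tier1"
COLD_TIER = "/archive/tier3"
MAX_IDLE_DAYS = 90

def demote_idle_files(hot_root, cold_root, max_idle_days):
    """Move files not accessed in max_idle_days to the cold tier, keeping
    relative paths so a union view can still present one namespace.
    (Relies on atime, which filesystems mounted noatime won't update.)"""
    cutoff = time.time() - max_idle_days * 86400
    for dirpath, _, filenames in os.walk(hot_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime < cutoff:
                dst = os.path.join(cold_root, os.path.relpath(src, hot_root))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
```

The hard parts Berman alludes to are everything this sketch omits: richer metadata than access times, policies spanning more than two tiers, and keeping the unified namespace intact while data moves underneath it.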
To really pull that stuff together, across many domains and possibly four different tiers of storage, is a hard thing to do because vendors tend to live within their own domains and only help you find what they have. So all of them are trying to corner you into only buying their stuff, and there are not a lot of commercially supported ways of binding more than two tiers together without multiple software packages.
“We’re really seeing a resurgence in tools, like iRODS that can function as both a data management layer and a policy engine that can collect and operate on rather extensive metadata collections to make smart decisions. In the rather complex life sciences storage environment iRODS is about the only tool we see that really works integratively across everything as both a metadata management layer and policy instrument, and it has got a lot more mature and is reasonably safe to put into production environments.”
“It’s supported through RENCI and the iRODS consortium. You can do relatively sophisticated metadata curation with it to make smart decisions and federate your data across multiple tiers of storage and multiple types of storage. [Also] because of recent changes in the iRODS data mover, it’s become an interesting target for moving data as a data transfer tool, and it’s now as fast as GridFTP in Globus. There are some really interesting use cases we are starting to explore as an abstraction tool.”
As a practical matter, says Berman, most life science users are not inclined to, or skilled at, tracking data storage. “You might have five different storage systems underneath, but no one cares. I think that abstraction is sort of where the whole field is going next. When you interoperate with data, you don’t care where it is or where it is being computed on, and that data lives in this sort of API-driven environment that can be accessed in a whole lot of ways.”
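The metadata-driven, single-namespace behavior Berman describes can be sketched in a few lines. This is an illustrative toy, not the iRODS API: the tier names, the idle-time thresholds, and the class names are all invented for the example, and a real policy engine would act on far richer metadata.

```python
from dataclasses import dataclass
from time import time

# Hypothetical tier labels; a real site might back these with a parallel
# file system, a scale-out NAS, and object storage, respectively.
HOT, WARM, COLD = "fast", "nas", "object"

@dataclass
class FileRecord:
    path: str
    size_bytes: int
    last_access: float
    tier: str = HOT

class Namespace:
    """One logical namespace; files migrate between tiers behind it."""

    def __init__(self):
        self._files = {}

    def put(self, path, size_bytes):
        # New data always lands on the hot (smallest-footprint) tier.
        self._files[path] = FileRecord(path, size_bytes, time())

    def tier_of(self, path):
        return self._files[path].tier

    def apply_policy(self, now=None):
        # Demote by idle time; the user-visible path never changes,
        # which is the point of keeping a single namespace.
        now = time() if now is None else now
        for rec in self._files.values():
            idle_days = (now - rec.last_access) / 86400
            if idle_days > 90:
                rec.tier = COLD
            elif idle_days > 7:
                rec.tier = WARM
```

Because callers only ever see the path, the policy engine can shuffle data among arbitrarily many tiers without “complicating the lives of your users.”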
Theme 5 – Networking: Not So Easy Anymore.
“Networking on every level is where I am spending my time,” says Berman, “and networking within clusters is becoming an interesting challenge. For a while InfiniBand was the only thing you wanted to use because it was cost effective and fast, but now all of a sudden Arista and Juniper have come out with extraordinarily cost effective 100 Gigabit Ethernet environments that start to rival the Mellanox operating environment in cost and performance. Then you don’t have the challenges of trying to integrate RDMA with Ethernet (RoCE – RDMA over Converged Ethernet). So a number of organizations are starting to make decisions that involve 100 Gig Ethernet, and Arista is making a lot of great deals to break into that environment; honestly their Ethernet has some of the lowest latencies on the market today.”
“There are really interesting decisions here and implications for cluster design; given the challenges of things like Lustre, even if you are using RDMA over InfiniBand, those setups may not have benefits over Ethernet. The one advantage we’re seeing is a surge from the storage side in using NFS over RDMA, which is actually incredibly fast if you have a reasonably high performance scale-out NAS of some sort – a highly tuned ZFS system, for instance.”
“I think InfiniBand is still a really interesting target there because you can do NFS over RDMA. We’ve played with that a little bit. So the back end of clusters is still something to think about, and Mellanox was interesting for a long time because you could mix Ethernet and IB; they’ve gone away from that, I think because they are trying to consolidate their packages, and now you have to buy one or the other. But at least you had the option there.”
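For readers curious what the NFS-over-RDMA setup looks like in practice, a Linux client on an RDMA-capable fabric can mount an export over RDMA with standard mount options. The server name and paths below are placeholders, and the server must separately be configured to listen for NFS/RDMA.

```shell
# Load the client-side NFS/RDMA transport module (name on modern kernels).
modprobe xprtrdma

# 20049 is the registered NFS-over-RDMA port; "zfs-server" and the
# paths here are hypothetical.
mount -t nfs -o vers=3,proto=rdma,port=20049 zfs-server:/tank/data /mnt/data
```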
The InfiniBand versus Omni-Path Architecture (OPA) battle has been loudly raging this year. So far, says Berman, OPA has hit rough sledding. “In my mind, it still isn’t in the game at all except at the national supercomputing level, [and that’s] because the promises of it still aren’t actually in the offering. There’s a 2.0 timeline now. Also they are not planning on offering any sort of Ethernet gateway – you’ll have to build some sort of routing device to be able to move stuff between that backend fabric and wide area Ethernet. So from a cluster point of view that’s an interesting divergence in trends, because for a while we were designing and building purely IB backbones because you could use the Ethernet gateways. Now we are sort of reverting back a little bit, and others are too.”
Berman noted a rising trend of “organizations biting the bullet” and building high performance science DMZs to serve science clients. “Most of them, even if they don’t have the data need right now, are starting with 10 Gig networks but using 100 Gig capable hardware, so it is pretty easy to swap to 100 Gig if they see that need. And that whole network field just diversified its offerings. Instead of just 10, 40 and 100 Gigabit Ethernet, now there’s 10, 25, 40, 50 and 100 Gigabit Ethernet available, and prices have come down.
“As much as I love science DMZs — and I spend most of my time designing and implementing them right now — I still think they are a band-aid for a bigger problem. At the enterprise [level], in supporting this type of stuff in a dynamic way, people are basically behind [the curve]. You lose opportunities if you don’t design a traditional enterprise network to be able to virtualize your environments and use software-defined networking, virtual circuits, and virtual routers – things like that can really make your environment much more flexible and way more supportive of lots of different use cases, including the whole secure enterprise.”
Other networking trends:
- “We are also spending a lot of time trying to solve very wide area network movement problems and binding organizations that are spread across great distances. We are really starting to get into ‘how do you move petabytes of data from the U.S. to Europe?’ ESnet has been really helpful with that. That’s not a simple problem to solve by any means.”
- “The other thing that we are starting to see is that even Cisco users are starting to realize that Cisco is designed as an enterprise stack, not a high performance stack – it will do it, but you have to force it. I think some people are starting to get away from their good old standard a little bit and starting to consider things like Arista and Ciena and Brocade and Juniper; basically, that other 15 percent of the market has much more potential in the high performance space than in the enterprise space.”
Theme 6 – Processor Frenzy? Not!
Few technology areas have received more attention recently than the frothy processor technology landscape. Stalwart Intel is facing challenges from IBM (Power), ARM, and even NVIDIA (P100). Berman is pretty unambiguous. “For system level processors, clearly Intel is just winning in every way. IBM’s decision to divest the x86 architecture was an interesting one, as we all know, and it turns out that in the x86 market and that space, Lenovo is actually not doing very well.
“The IBM Power architecture serves an extremely narrow use case as far as I can tell. It’s the same fear as with people who are afraid of moving away from Cisco in the networking space. Everyone’s stuff is compiled and works on Intel. They know it. No one wants to take the time to reintegrate all of their stuff for a new architecture. The Power stuff has its advantages in certain environments and disadvantages in others. The space where Power8 excels above Intel is really high-precision floating point, which is not the majority of life sciences. The majority of life sciences requirements are integer based, except for the simulation space and the predictive stuff.
All netted out, he says, “I see zero Power8 in the life sciences field. I haven’t come across any of it. I see a couple of donated servers in the national supercomputer centers but they are not even doing it. Power8 is most prevalent in IBM’s cloud of course and that’s the biggest installation anywhere that I know of outside of the DoD but no one can know about that, right. Unless something major changes, I don’t see enough market pressure for Power8 to take any hold in a real way in the life sciences computing market. There’s just too much effort to change over to it.
“ARM is kind of the same thing. In fact it is exactly the same thing. You know, it’s a completely different architecture – completely different from Power and from Xeon. It’s kind of interesting in niche environments, especially field environments and far-flung environments where [obtaining steady power can be an issue]. People keep playing with it, but it is some weird fraction of a percent that’s out there. I’ve not seen any real move towards it in life sciences at all. Not in any environments, not in cloud, not in anything.
“So I don’t think life sciences is really the place for that particular architecture; it would require the same type of software integration and rewrites as GPUs did back in the day – and it took those so long to be adopted – for it to take hold, in my mind. Most people aren’t going to hold their publication off for a year or a year and a half while they try to rewrite or revalidate programs to run on ARM. It’s far more likely that someone will use Power8 or SPARC.
“When the rubber hits the road, it’s about what the end users can actually get done and what’s the risk-benefit of doing it. In life sciences, organizations don’t get into things they haven’t done before without really doing this cost-benefit analysis: the cost of those architectures, both in human effort and in recoding, and of trying something new, versus just keeping your head down and getting it done the old-fashioned way because you know it is going to work – that is often the tradeoff.”
The post BioTeam’s Berman Charts 2017 HPC Trends in Life Sciences appeared first on HPCwire.
Jan. 4 — Bright Computing, a global leader in cluster and cloud infrastructure automation software, today announced the launch of a tiered partner program for value added reseller partners in Europe, Middle East, Africa, and Asia Pacific.
The new tiered program has been designed to build, recognize, and reward loyalty amongst the Bright partner community. The program acknowledges the contribution that the Bright partner community makes to Bright’s business, and provides the tools to improve partner profitability while empowering end customers to build dynamic datacenter infrastructures.
The program comprises three participation levels. Partners that invest more in their relationship with Bright will receive higher-value benefits and resources across several categories:
- Premier partners, who demonstrate significant achievements as Bright resellers, will be rewarded with a number of exclusive commercial, resource and marketing program benefits.
- Advanced partners, who have achieved success as Bright resellers, will be rewarded with a proportional number of program benefits.
- All new partners join at the Member tier; this introductory level to the Bright Partner Program enables partners to get up to speed quickly and easily, rewarding them with a number of standard benefits.
Bill Wagner, CEO at Bright Computing, commented: “The new tiered partner program is an exciting evolution for Bright, and is a result of the importance we place on our resellers and the growing impact that the channel is having on business revenue contribution. We want to reward partners who demonstrate loyalty to Bright, and I believe that this program is designed to do exactly that.”
The tiered partner program will be rolled out to the Americas later in 2017.
About Bright Computing
Bright Computing is a global leader in cluster and cloud infrastructure automation software. Bright Cluster Manager, Bright Cluster Manager for Big Data, and Bright OpenStack provide a unified approach to installing, provisioning, configuring, managing, and monitoring HPC clusters, big data clusters, and OpenStack clouds. Bright’s products are currently deployed in more than 650 data centers around the world. Bright Computing’s customer base includes global academic, governmental, financial, healthcare, manufacturing, oil/gas/energy, and pharmaceutical organizations such as Boeing, Intel, NASA, Stanford University, and St. Jude Children’s Research Hospital. Bright partners with Amazon, Cray, Dell, Intel, Nvidia, SGI, and other leading vendors to deliver powerful, integrated solutions for managing advanced IT infrastructure such as high performance computing clusters, big data clusters, and OpenStack-based private clouds. www.brightcomputing.com
Source: Bright Computing
The post Bright Computing Rolls Out Tiered Partner Program for Resellers in EMEA and APAC appeared first on HPCwire.