About eight months ago Bill Gropp was elevated to acting director of the National Center for Supercomputing Applications (NCSA). He is, of course, already highly accomplished; development (with colleagues) of the MPICH implementation of MPI is one example. Gropp was NCSA's chief scientist, a role he retains, when director Ed Seidel was tapped to serve as interim vice president for research for the University of Illinois System, and Gropp was appointed acting NCSA director.
Don't be misled by the "acting" and "interim" qualifiers. They are accurate but hardly diminish the aspirations of the new leaders, jointly and independently. In getting ready for this interview with HPCwire, Gropp wrote, "Our goal for NCSA is nothing less than to lead the transformation of all areas of scholarship in making use of advanced computing and data." That seems an ambitious but perhaps appropriate goal for the home of Blue Waters and XSEDE.
During our interview – his first major interview since taking the job – Gropp sketched out the new challenges and opportunities he is facing. While in Gropp-like fashion emphasizing the collaborative DNA running deep throughout NCSA, he also stepped out of his comfort zone when asked what he hopes his legacy will be – a little early for that question perhaps, but his response is revealing:
"If you look at us now we have three big projects. We've got Blue Waters. We have XSEDE. We hope to have the Large Synoptic Survey Telescope (LSST) data facility. These are really good models. They take a long time to develop and may require a fair amount of early investment. I would really like to lay the groundwork for our fourth big thing. That's what I can contribute most to the institution."
NCSA is a good place to think big. Blue Waters, of course, is the centerpiece of its computing infrastructure. Deployed in 2012, Blue Waters is a roughly 13-petaflops Cray XE6/XK7 hybrid machine supported with about 1.6 PB of system memory and 26 PB of usable storage (with an aggregate bandwidth of roughly 1.1 TB/s). No doubt attention is turning to what's ahead. The scope of scientific computing and industry collaboration that goes on at NCSA in concert with the University of Illinois is big by any standard.
It's also worth remembering that NCSA and Gropp are in the thick of U.S. policy development. The most recent report from the National Academies of Sciences, Engineering, and Medicine – Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science in 2017-2020 – was co-chaired by Gropp and Robert Harrison of Stony Brook University. Not surprisingly, it argues strongly for NSF to produce a clear long-term roadmap supporting advanced computing, and it was cited in testimony two weeks ago at Congressional hearings (National Science Foundation Part II: Future Opportunities and Challenges for Science).
In preparing for the interview, Gropp sent a brief summary of his thinking about the future, which said in part:
“This is a challenging time. Technically, computing is going through a major transition as the community copes with the end of Dennard (frequency) scaling and the consequences for hardware, software, and algorithms. The rapid expansion of data science has broadened the demand for computational resources while at the same time bringing about new and disruptive ways to provide those services, as well as increasing the demand for skilled workers in all areas of computing. Funding remains tight, with support for sustaining the human and physical infrastructure still depending mostly on ad hoc and irregular competitions for hardware; in addition, while everyone talks about the long-term value of science data, there is little appetite to pay for it.
"But these challenges produce opportunities to make critical impact and are exactly why this is also an exciting time to be in computing, and to be leading NCSA. Last August, NCSA Director Ed Seidel was asked by the president of the University of Illinois System to be interim vice president of research, and to guide that office as it refocuses on innovating for economic development by building on the many strengths of the University System. Ed and I have a similar vision for NCSA, and I was honored to step in as acting director to guide NCSA while Ed is helping the University System."
HPCwire: Thanks for your time Bill. It's hard to know where to start. NCSA has many ongoing projects – participation in the National Data Service Consortium, the Illinois Smart State Initiative, the Visualization Laboratory, and XSEDE all come to mind. Perhaps provide an example or two of NCSA projects to give a sense of the range of NCSA activities?
Gropp: It's hard to pick just one and somebody is going to be mad at me. Let me say a little bit about the industry program. One of the things we've been doing there is focusing more on how the industry program can build on our connections with campus to provide opportunities, for example, for our students to work with companies and companies to work with our students. That's been very attractive to our partners. I was not at all surprised that the rarest commodity is talent, and we are at just a fabulous institution that has very strong, very entrepreneurial students, so that's been a great connection.
One of the reasons for mentioning the industry program is that it was really through that view of connections that we became involved in the Smart State initiative. It was one of the things we discussed with the state, including the opportunity for students to be involved in projects, which in some cases could have a significant impact on improving the life of the state. We are really in the initial phases. It is going to be fun to see how it develops and what works and what doesn't. It was exciting to see the kinds of opportunities the state was interested in pursuing and its flexibility about working not just with NCSA but also with students through programs like this that mirror what we did with industry. (Interview by Illinois Public Media with Gropp on the Smart State effort)
HPCwire: There must be a list of projects?
Gropp: Of course there is, but it's not quite ready to release to the public. We're working on developing that. Another interesting thing is that the state is quite understanding of the fact that, in many cases, the sorts of projects they are looking at are quite ambitious, so the work is being structured as a number of reasonable steps rather than some five-year proposal to solve all the state's problems. It's being structured in much more reasonable pieces where we can perform the pieces, see where we've gotten, and figure out what the next steps are.
HPCwire: Given the attention being paid to the rise of data-driven science can you talk a little about the National Data Service consortium (NDS)? I believe NCSA is a steering committee member. What is it exactly?
Gropp: Really it's just what it says, a consortium trying to help us find commonality and common ground in providing data services. There have been seven semi-annual meetings, and there's the NDS Lab, which is an effort to provide software – frameworks may not be quite the right word – to start looking at ways you can provide support for dealing with the five Vs of big data. We sort of know how to deal with velocity and volume; I am oversimplifying, but to some extent, that's just money. Veracity is another tricky one, including provenance and so forth. You can maybe slide reproducibility under that. We have work in that area, particularly with Victoria Stodden, who's an NCSA affiliate and one of our quite brilliant faculty.
The really tricky one is variety. There are so many different cases to deal with. We need frameworks and places to discuss how we deal with that, as well as how we deal with making resources available over time. How do we ensure data doesn't get lost? Having a consortium gives us a place to talk about these things, a place to start organizing and developing cooperative projects so we are working together instead of separately – a thousand flowers blooming is good, but at some point you need to be able to pull this together. One of the things that has been put together that is so powerful is our ability to federate different data sources and draw information out of collections.
My role as NCSA director has been more working with the NDS to ensure it has a long-term sustainable direction because NDS will only be useful if it can help deliver these resources over the time we expect the data to be valuable. I think that’s one of the biggest challenges of big data compared to big computing. When we are doing big computing, you do your computation, you have your results, and you’re done, again oversimplifying. With the data you create the data and it retains value even increasing value and so managing over long lifetimes is again going to be a challenge. It’s important to think of the national data service not as something that one institution is offering to the nation but as collaboration among some of the people who want to support data science in this country getting together to solve these problems.
HPCwire: Sort of a big data related question: can you talk a little about the Large Synoptic Survey Telescope project NCSA is planning to support? Its expected output is staggering – 10 million alerts, 1,000 pairs of exposures, and 15 terabytes of data every night.
Gropp: That's an important project in our future and was really begun under Dan Reed (former NCSA director). NSF wants those projects divided into a construction project and an operations project; the latter has not yet been awarded, but that proposal will go in later this year. It will do many things: it will operate the LSST facility itself but also the other facilities, including the archiving and processing centers. This is a significant big data activity that we fully expect to be involved in and, in fact, to lead on the data resource side.
I don't have the numbers in front of me but there is a lot of data that comes out of the telescope, an 8-meter telescope. The data is filtered a little bit at the telescope and sent by network from Chile to Illinois, where it gets processed and archived, and we have to be able to process it in real time. The real-time requirement, I think, is in seconds or minutes, if not less – very quick processing of the data to discover and send out alerts on changes. It's a very different kind of computing than the FLOPS-heavy HPC computing that we usually think about. That will most likely be one of the things that occupies our datacenter, the National Petascale Computing Facility (NPCF).

Currently under construction in Chile, the LSST is designed to conduct a ten-year survey of the dynamic universe. LSST can map the entire visible sky in just a few nights; each panoramic snapshot with the 3200-megapixel camera covers an area 40 times the size of the full moon. Images will be immediately analyzed to identify objects that have changed or moved: from exploding supernovae on the other side of the Universe to asteroids that might impact the Earth. In the ten-year survey lifetime, LSST will map tens of billions of stars and galaxies. With this map, scientists will explore the structure of the Milky Way, determine the properties of dark energy and dark matter, and make discoveries that we have not yet imagined. – LSST.org
HPCwire: Given the many resources NCSA operates, what's required simply to keep all your systems running? What is the scope of the systems supported, and what ongoing support, change, and maintenance activities does Blue Waters require?
Gropp: We were just talking about that this morning. At no time in our history have we been operating so many HPC scale systems. There’s not just Blue Waters. There are systems for the industry program and for some of the other research groups. There’s also a campus cluster, which is officially operated by a different organization but is actually operated by our staff. [It’s an] exciting time to be running all these systems. The big thing we are waiting for is the RFP for the next track one system, and we are still quite optimistic about that.
Some of the systems reach a point of retirement and replacement so we have gone through that cycle a number of times. There was one system that we eventually broke apart and recycled parts out to various faculty members. There are things like that always going on.
For systems like Blue Waters, we have a maintenance agreement with Cray, which has actually been quite reliable. Keeping things up to date is always an issue; for example, our security systems are state of the art. There's a lot of work along those lines, which of course I can't describe in detail. The biggest challenge for us, a big challenge for all of us in the community, is the lack of predictable schedules from our sponsors for keeping these systems up to date. So we are still waiting for the RFP for the next track one system, and that remains a real challenge. That's why the Academies report called on NSF to produce a roadmap, because we have to plan for that.
We also have a lot of stuff going into the building (National Petascale Computing Facility), and we have a committee right now that is engaged in thinking about whether we have sufficient room in the building, whether we have enough power and cooling, and what we do when we fill that building up. Those things are made much more difficult when there are so many uncertainties.
HPCwire: I probably should have started with this question. So what’s it like being director? What is the range of your responsibilities and what’s surprised you?
Gropp: I will say every day is different; that’s one of the things that is fun about the job. There are a lot of upper management sorts of things, so I spend time every day on budget and policy and personnel and implementing our strategic plan, but I also spend time interacting with more people on campus in a deeper way and also with some of our industry partners. Meeting new people from the state was really quite eye opening, both in terms of what the state is already doing but also in terms of what the opportunities are.
Last week I went to the state capital and gave some rare good news on the return on investment that they made in Blue Waters. The state provided funding for the datacenter. That was a new experience for me, going to a subcommittee hearing and being able to say that the investment you made in the University of Illinois and NCSA has paid off. Investing in science is a good thing to do and here are the numbers. It’s definitely an intense experience but I found it quite stimulating and different than what I have been doing.
On surprises, even though I have been here for quite a while (since 2007), I really didn't know the breadth and depth of all the things going on. Everyone has the tendency to see the stuff they are interested in, and now I am responsible for everything, so I have to be aware of everything. That was a very pleasant surprise. I wish I could say that dealing with our state budget situation was a surprise, but it wasn't; it's just been difficult. Really, I think just coming to understand how much is going on here is more than I expected. In terms of goals, I have some very tactical things I want to get done – somewhat boring but important changes to policy to better support engagement with campus and make it easier for students to work with us – and you've already seen some of that in the directions we have gone with the industry program.
HPCwire: With your many director responsibilities are you still able to carry on research?
Gropp: I still have my students. I just met with one before our call. I have a number of grants. I am still the lead PI on our Center for Exascale Simulation of Plasma-Coupled Combustion (XPACC). That one I find really interesting because we are looking at different ways of developing software for large-scale applications rather than going to new programming models. We're trying to look at how to take the models we have, the existing code bases, and augment them with tools that help automate the tasks that skilled programmers find most challenging. I have another project where we have been looking at developing better algorithms, and a third looking at techniques for making use of multicore processors for large sparse linear systems and the non-linear systems they represent. That's been fun.
Even before I took this [position], we were co-advising students. That's another thing I find really enjoyable here at Illinois: the faculty collaborate frequently and we have lots of joint projects. It's fun, and I think it is good for the students because it gives them several different perspectives, and they don't run the risk of being quite so narrowly trained. One of the other things we have been doing, jumping back to our discussion of technology, is bringing in new technology and experimenting with it, whether hardware or software, with faculty, staff, and students, and we are continuing to do that. In my role as director I get more involved in making decisions about which directions we are going, which projects. We have one proposal that has been submitted that involves certain kinds of deep learning. That was fun because of the tremendous upwelling of interest from campus.
So I think there will be lots of new things to do. If I had not been in this job, I would have heard about them and said, gee, that sounds interesting, I wish I had time for it. In this job I say, gee, that sounds great, it's my job to make it happen.
HPCwire: What haven't I asked that I should?
Gropp: I think the one question you didn’t ask is “what keeps me up at night.” I’m really very concerned about national support for research in general and high performance or maybe I should say advanced computing writ broadly. We see this in the delay of the RFP. We see it in a fairly modest roadmap going forward from NSF. We see it in hesitancy by other agencies to commit to the resources that are needed. I think we have so much to offer to the scientific community and the nation [and] it has been frustrating that there’s so little long-term consistent planning available. I know that we (NCSA) are not unique in this.
A trade we'll accept is less money if you give us consistency, so we don't have to figure out what we are getting every other year. If we had a longer-term plan, we'd be willing to accept a little less. So that's the sort of thing: the uncertainty, and the lack of recognition of the value of what we do at the scale that would benefit the country. That's something all of us spend time trying to change.
HPCwire: The new Trump administration and the economic environment generally doesn’t seem bullish on research spending, particularly basic research. How worried are you about support for advanced computing and science?
Gropp: I think we said many of these things in the Academies report (summary) and I still stand behind them. I think we have lots of opportunities, but other countries – and that's not just China – recognize the value of HPC; they recognize the value, in fact, in a diversity of technologies. I was pleased to hear, in the response to a question asked at the NSF hearing this week, NSF quote our report, saying there is a need for one or more large tightly-coupled machines, and that they took that recommendation seriously.
It would be great if there were more than one NSF track one system. It would be great if there were more than a handful of track two systems. If you look at Japan, for example, they have nine large university advanced computing systems, not counting the Flagship 2020 system in their plans, and that, honestly, is more than we’ve got. So there is a concern we will not provide the level of support that will allow us to maintain broad leadership in the sciences. That’s been a continuous concern.
HPCwire: What’s your take on the race to Exascale? China has stirred up attention with its latest machine while the U.S. program has hit occasional speed bumps. Will we hit the 2022-2023 timeframe goal?
Gropp: Yes, I think the U.S. will get there in 2023. It will be a challenge but the routes that we are going down will allow us to get there. These high-end machines, they aren’t really general purpose, but they are general enough so that there are a sufficient number of science problems that they can solve. I think that will remain true. There will be some things that we will be able to accomplish on an exascale peak machine; there will be a challenge for those problems that don’t map into the sorts of architecture directions that we’re being pushed in order to meet those targets. I think that’s something that we all have to bear in mind. Reaching exascale doesn’t mean for all problems we can run them on one exascale peak system. It’s really going to be, are there enough, which I believe there are. It’s going to be a subset of problems that we can run and that set will probably shrink as we move from the pre-exascale machines to the exascale machines.
“For EHPCSW17 we took on the important challenge of accommodating numerous HPC-related workshops. Our aim is to make the European HPC Summit Week a reference in the HPC ecosystem, and to create synergies between stakeholders,” says Sergi Girona, EXDCI project coordinator.
The programme starts on Monday with the EXDCI workshop, which will give an overview of EXDCI recent activities including the HPC vision and recommendations to improve the overall HPC ecosystem. PRACEdays17, the fourth edition of the PRACE Scientific and Industrial Conference, will take place from Tuesday morning until midday Thursday, and, under the motto HPC for Innovation: when Science meets Industry, will have several high-level international keynote talks, parallel sessions and a panel discussion.
On Tuesday afternoon, and in parallel to PRACEdays17, three additional workshops organised by HPC Centers of Excellence will take place:
- CompBioMed: HPC‐based computational biomedicine;
- HPC for renewable energies: new programming models and strategies for the emerging exascale architectures and
- ENES / ESiWACE HPC Workshop.
In the late afternoon, a poster session, with a welcome reception sponsored by PRACE, will close the events of the day. On Wednesday afternoon, EuroLab4HPC will organise its workshop The Future of High-Performance Computing in parallel to the workshop Mathematics for Exascale and Digital Science.
On Thursday afternoon, the European Technology Platform for HPC (ETP4HPC) will organise a round-table entitled Exploiting the Potential of European HPC Stakeholders in Extreme-Scale Demonstrators in parallel to the EUDAT workshop: Coupling HPC and Data Resources and services together. The week will finish with scientific workshops organised by FETHPC projects and Centers of Excellence:
- HPCAFE-2017: High-Performance Computing Approaches for Monitoring, Exploring, Optimizing and Autotuning
- NextGenIO /SAGE workshop: Working towards Exascale IO
- POP User Forum: Help us, help you! – Help us improve the POP Service so that we can help you improve your HPC Applications
The full programme is available online at https://exdci.eu/events/european-hpc-summit-week-2017. The registration fee is €60 and those interested in attending the full week (or part of it) should fill out the centralised registration form by 5 May: https://exdci.eu/events/european-hpc-summit-week-2017
About the EHPCSW conference series
EXDCI coordinates the conference series “European HPC Summit Week”. Its aim is to gather all related European HPC stakeholders (institutions, service providers, users, communities, vendors and consultants) in a single week to foster synergies. Each year, EXDCI opens a call for contributions to all HPC-related actors who would like to participate in the week through a workshop.
The first edition took place in 2016 (EHPCSW16) in Prague, Czech Republic. EHPCSW16 gathered a total of 238 attendees, with nearly all European nationalities represented. The four-day summit comprised a number of HPC events running concurrently: an EXDCI Workshop, PRACEdays16, the "EuroLab4HPC: Why is European HPC running on US hardware?" workshop and the ETP4HPC Extreme-Scale Demonstrators Workshop, as well as a number of private collaborative meetings.
The post European HPC Summit Week 2017 in Barcelona to Gather HPC Stakeholders appeared first on HPCwire.
ASTI, Italy, March 28, 2017 — NICE is pleased to announce the general availability of EnginFrame 2017, our powerful and easy to use web front-end for accessing Technical and Scientific Applications on-premises and in the cloud.
Since the NICE acquisition by Amazon Web Services (AWS), many customers have asked us how to make the HPC experience in the cloud as simple as the one they have on premises, while still leveraging the elasticity and flexibility the cloud provides. While we stay committed to delivering new and improved capabilities for on-premises deployments, like the new support for Citrix XenDesktop and the new HTML5 file transfer widgets, EnginFrame 2017 is our first step toward making HPC easier to deploy and use in AWS, even without an in-depth knowledge of its APIs and rich service offering.
What’s new in EnginFrame 2017:
- Easy procedure for deployment on AWS Cloud: you can create a fully functional HPC cluster with a simple Web interface, including:
- Amazon Linux support
- Virtual Private Cloud to host all components of an HPC system
- Application Load Balancer for encrypted access to EnginFrame
- Elastic File System for spoolers and applications
- Directory Services for user authentication
- CfnCluster integration for elastic HPC infrastructure deployment
- Simpler EnginFrame license and process for evaluations on AWS
- HTML5 file upload widget with support for server-side file caching, replacing the previous applet-based implementations
- Service Editor capability to create new actions on files using services. Administrators can publish services associated with specific file patterns, which users can find in the context-sensitive menu in the spooler and file manager panels.
- New Java Client API for managing interactive sessions: customers and partners can now implement interactive session management in their applications.
- Citrix XenDesktop integration: support for graphical applications running on XenDesktop infrastructure.
- Improved DCV and VNC session token management, with automatic token invalidation based on a time-to-live.
- Many other fixes and enhancements.
The new features are immediately available for all the EnginFrame product lines:
- EnginFrame Views: Manage interactive sessions, collaboration and VDI
- EnginFrame HPC: In addition to the Views features, easily submit and monitor the execution of HPC applications and their data
- EnginFrame Enterprise: Both EnginFrame Views and HPC can be upgraded to the Enterprise version, to support fault-tolerant and load-balanced deployments.
With immediate availability, all NICE customers with a valid support contract can download the new release, access the documentation and the support helpdesk.
The post NICE Announces General Availability of EnginFrame 2017 appeared first on HPCwire.
Is it possible to detect who might be vulnerable to depression before its onset using brain imaging?
David Schnyer, a cognitive neuroscientist and professor of psychology at The University of Texas at Austin, believes it may be. But identifying its tell-tale signs is no simple matter. He is using the Stampede supercomputer at the Texas Advanced Computing Center (TACC) to train a machine learning algorithm that can identify commonalities among hundreds of patients using magnetic resonance imaging (MRI) brain scans, genomics data and other relevant factors, to provide accurate predictions of risk for those with depression and anxiety.

Researchers have long studied mental disorders by examining the relationship between brain function and structure in neuroimaging data.
“One difficulty with that work is that it’s primarily descriptive. The brain networks may appear to differ between two groups, but it doesn’t tell us about what patterns actually predict which group you will fall into,” Schnyer says. “We’re looking for diagnostic measures that are predictive for outcomes like vulnerability to depression or dementia.”
In March 2017, Schnyer, working with Peter Clasen (University of Washington School of Medicine), Christopher Gonzalez (University of California, San Diego) and Christopher Beevers (UT Austin), published their analysis of a proof-of-concept study in Psychiatry Research: Neuroimaging that used a machine learning approach to classify individuals with major depressive disorder with roughly 75 percent accuracy.
Machine learning is a subfield of computer science that involves the construction of algorithms that can “learn” by building a model from sample data inputs, and then make independent predictions on new data.
The type of machine learning that Schnyer and his team tested is called Support Vector Machine Learning. The researchers provided a set of training examples, each marked as belonging to either healthy individuals or those who have been diagnosed with depression. Schnyer and his team labelled features in their data that were meaningful, and these examples were used to train the system. A computer then scanned the data, found subtle connections between disparate parts, and built a model that assigns new examples to one category or the other.
In the recent study, Schnyer analyzed brain data from 52 treatment-seeking participants with depression and 45 healthy control participants. To compare the two, a subset of depressed participants was matched with healthy individuals based on age and gender, bringing the sample size to 50.
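The support vector machine workflow described above – label training examples from two groups, fit a model, then score held-out cases – can be sketched with scikit-learn. This is an illustrative toy only: the data below is synthetic, standing in for the study's actual MRI-derived features, and the feature dimensions and group separation are invented for the example.

```python
# Toy sketch of SVM classification in the spirit of the study above.
# The "brain features" are synthetic random data, NOT the real dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic features for two groups of participants:
# 50 labeled depressed (1) and 50 healthy controls (0).
depressed = rng.normal(loc=0.5, scale=1.0, size=(50, 20))
controls = rng.normal(loc=-0.5, scale=1.0, size=(50, 20))
X = np.vstack([depressed, controls])
y = np.array([1] * 50 + [0] * 50)

# Hold out a test set, train the classifier, and evaluate it on
# examples the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the hard part is not the classifier but the feature engineering and validation: with only ~100 participants and thousands of voxels per scan, careful cross-validation is needed to avoid overfitting.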
Source: Aaron Dubrow, TACC
Title: Data Services Specialist
Deadline to Apply: 2017-04-02
Deadline to Remove: 2017-04-03
Job Summary: The Data Visualization Specialist will help researchers and project teams understand and use data visualization techniques to support their work. The specialist will support the research agendas of faculty and students, enhance curricula, and encourage research innovation across the University relating to data visualization and visual thinking. The specialist will provide consulting services to faculty and students and collaborate with other campus units already providing visualization and design services. The specialist will work closely with the Data Management Services Librarian, the Digital Humanities Librarian, the GIS Specialist, the Head of Digital Initiatives, and others in both the Research Commons and elsewhere in University Libraries to increase understanding of visual data issues, ranging from preparing data for visualization to limitations of data visualization. The specialist will be responsible for leveraging research software and technology resources to enhance course development and research innovation. This position reports to the Manager of the Research Commons.
Job URL: https://www.jobsatosu.com/postings/77380
Job Location: Columbus, OH
Institution: The Ohio State University
Requisition Number: 426701
Posting Date: 2017-03-27
Job Posting Type: Job
Please visit http://hpcuniversity.org/careers/ to view this job on HPCU.
Please contact email@example.com with questions.
OSLO, Norway, Mar. 28, 2017 — Asetek today announced confirmation of an order from one of its existing OEM partners for its RackCDU D2C (Direct-to-Chip) liquid cooling solution. The order is part of a new installation for an undisclosed HPC (High Performance Computing) customer.
“I am very pleased with the progress we are making in our emerging data center business segment. This repeat order, from one of our OEM partners, to a new end customer confirms the trust in our unique liquid cooling solutions and that adoption is growing,” said André Sloth Eriksen, CEO and founder of Asetek.
The order will result in revenue to Asetek in the range of USD 300,000 for approximately 15 racks with delivery in Q2 2017. The OEM partner as well as the installation site will be announced at a later date.
Asetek (ASETEK.OL) is the global leader in liquid cooling solutions for data centers, servers and PCs. Founded in 2000, Asetek is headquartered in Denmark and has operations in California, Texas, China and Taiwan. Asetek is listed on the Oslo Stock Exchange. For more information, visit www.asetek.com
The post New HPC Installation to Deploy Asetek Liquid Cooling Solution appeared first on HPCwire.
NEW YORK, N.Y., March 28, 2017 — Cycle Computing today announced that its CEO, Jason Stowe, will address attendees at the 253rd National Meeting and Exposition of the American Chemical Society, being held April 2-6 in San Francisco, CA.
Jason is scheduled to speak on Sunday, April 2, starting at 9:00 am local time. His session, titled “Lessons learned in using cloud Big Compute to transform computational chemistry research,” is part of an overall session endeavoring to answer the question: Should I move my computational chemistry or informatics tools to the Cloud? Jason’s discussion will focus on how scientists, engineers, and researchers are leveraging CycleCloud software to unlock the Big Compute capabilities of the public cloud, performing larger, more accurate, and more complete workloads than ever before. Real-world use cases will include big pharma, materials science, and manufacturing researchers who have accelerated science using 160 to 160,000 cores on the cloud. Attendees of Jason’s discussion will gain a clear understanding of where cloud resources can, or cannot, help their work.
The American Chemical Society (ACS) mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and its people. As the largest scientific society in the world, the ACS is a leading and authoritative source of scientific information. The ACS supports and promotes the safe, ethical, responsible, and sustainable practice of chemistry coupled with professional behavior and technical competence. The society recognizes its responsibility to safeguard the health of the planet through chemical stewardship and serves more than 157,000 members globally, providing educational and career development programs, products, and services.
Cycle Computing’s CycleCloud software orchestrates Big Compute and Cloud HPC workloads, enabling users to overcome the challenges typically associated with large workloads. CycleCloud takes the delays, configuration, administration, and sunk hardware costs out of HPC clusters. CycleCloud easily leverages multi-cloud environments, moving seamlessly between internal clusters, Amazon Web Services, Google Cloud Platform, Microsoft Azure and other cloud environments. Researchers and scientists can use CycleCloud to size the infrastructure to the technical question or computation at hand.
More information about the CycleCloud cloud management software suite can be found at: www.cyclecomputing.com
About Cycle Computing
Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute. CycleCloud is the enterprise software solution for managing multiple users, running multiple applications, across multiple clouds, enabling users to never wait for compute and solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in many Global 2000 manufacturing, Big 10 Life Insurance, Big 10 Pharma, Big 10 Hedge Funds, startups, and government agencies, to leverage hundreds of millions of hours of cloud based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com
Source: Cycle Computing
The post Cycle Computing CEO to Address National Meeting of the ACS appeared first on HPCwire.
Stampede will not be available from 8 am to 7:30 pm (CDT) on Tuesday, April 4, 2017. System maintenance will be performed during this time. – TACC Team
On April 14, 2017, TACC will offer the following training courses via webcast to the XSEDE community:
MPI Foundations I – 8:30AM – 12:00PM CT
Learn the basics of the Single Program, Multiple Data (SPMD) programming model through the use of the Message Passing Interface (MPI). This three-hour course will provide attendees with an understanding of the SPMD model and how to use MPI collectives to move data between multiple processes. Familiarity with C/C++ or Fortran is expected.
MPI Foundations II – 1:00PM – 4:30PM CT
Building on the knowledge gained in MPI Foundations I, learn how to use both blocking and non-blocking MPI point-to-point communication to transmit information between parallel processes. An understanding of the SPMD model and familiarity with C/C++ or Fortran are expected.
You are welcome to register for either or both courses.
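The SPMD pattern these courses teach can be sketched even without an MPI installation: every worker runs the same function on its own slice of the data, and the partial results are combined at the end, much as an MPI reduction would combine them. The Python stand-in below uses multiprocessing purely for illustration; a real MPI course would use C/C++ or Fortran with an MPI library (or mpi4py) and an MPI launcher.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "rank" executes the same function on its own slice of the
    # data -- the essence of the Single Program, Multiple Data model.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1000))
    nranks = 4
    # Strided decomposition: rank i owns elements i, i+4, i+8, ...
    chunks = [data[i::nranks] for i in range(nranks)]
    with Pool(nranks) as pool:
        partials = pool.map(partial_sum, chunks)  # gather partial results
    total = sum(partials)  # combine, like MPI_Reduce with MPI_SUM
    print(total)
```

The final combining step is what an MPI collective such as `MPI_Reduce` performs across ranks, without the explicit gather-then-sum shown here.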
We would like to make you aware of the following workshop to be held at the Pittsburgh Supercomputing Center this spring:
HANDS-ON WORKSHOP ON COMPUTATIONAL BIOPHYSICS
Workshop Dates: May 30 – June 2, 2017
Application Deadline: May 8, 2017
For more details, or to apply visit:
The “Hands-On Workshop on Computational Biophysics” is a joint effort between the Theoretical and Computational Biophysics Group at UIUC [www.ks.uiuc.edu] and the National Center for Multiscale Modeling of Biological Systems (MMBioS) [mmbios.org]. Drs. Ivet Bahar (Pitt), Emad Tajkhorshid (UIUC), and Zan Luthey-Schulten (UIUC) will lead the instruction for this interactive four-day event. The workshop will cover a wide range of physical models and computational approaches for the simulation of biological systems using NAMD, VMD, and ProDy. Space is limited and applications will be accepted until May 8. This workshop is supported by MMBioS, an NIH Biomedical Technology and Research Resource (BTRR).
BERKELEY, Calif., March 27, 2017 — A team of researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab), Pacific Northwest National Laboratory (PNNL) and Intel are working hard to make sure that computational chemists are prepared to compute efficiently on next-generation exascale machines. Recently, they achieved a milestone, successfully adding thread-level parallelism on top of MPI-level parallelism in the planewave density functional theory method within the popular software suite NWChem.
“Planewave codes are useful for solution chemistry and materials science; they allow us to look at the structure, coordination, reactions and thermodynamics of complex dynamical chemical processes in solutions and on surfaces,” says Bert de Jong, a computational chemist in the Computational Research Division (CRD) at Berkeley Lab.
Developed approximately 20 years ago, the open-source NWChem software was designed to solve challenging chemical and biological problems using large-scale parallel ab initio, or first-principles, calculations. De Jong and his colleagues will present a paper on this latest parallelization work at the May 29-June 2 IEEE International Parallel and Distributed Processing Symposium in Orlando, Florida.
Multicore vs. “Manycore”: Preparing Science for Next-Generation HPC
Since the 1960s, the semiconductor industry has looked to Moore’s Law—the observation that the number of transistors on a microprocessor chip doubles about every two years—to set targets for their research and development. As a result, chip performance sped up considerably, eventually giving rise to laptop computers, smartphones and the Internet. But like all good things, this couldn’t last.
As more and more silicon circuits are packed into the same small area, an increasingly unwieldy amount of heat is generated. So about a decade ago, microprocessor designers latched onto the idea of multicore architectures—putting multiple processors called “cores” on a chip—similar to getting five people to carry your five bags of groceries home, rather than trying to get one stronger person to go five times faster and making separate trips for each bag.
Supercomputing took advantage of these multicore designs, but today they are still proving too power-hungry, and instead designers are using a larger number of smaller, simpler processor cores in the newest supercomputers. This “manycore” approach—akin to a small platoon of walkers rather than a few runners—will be taken to an extreme in future exaflop supercomputers. But achieving a high level of performance on these manycore architectures requires rewriting software, incorporating intensive thread and data-level parallelism and careful orchestration of data movement. In the grocery analogy, this addresses who will carry each item, can the heavier ones be divided into smaller parts, and should items be handed around mid-way to avoid overtiring anyone—more like a squad of cool, slow-walking, collaborative jugglers.
Getting Up to Speed on Manycore
The first step to ensuring that their codes will perform efficiently on future exascale supercomputers is to make sure that they take full advantage of the manycore architectures being deployed today. De Jong and his colleagues have been working for over a year to get the NWChem planewave code optimized and ready for science, just in time for the arrival of NERSC’s latest supercomputer, Cori.
The recently installed Cori system at the Department of Energy’s (DOE’s) National Energy Research Scientific Computing Center (NERSC) reflects one of these manycore designs. It contains about 9,300 Intel Xeon Phi (Knights Landing) processors and, according to the November 2016 Top500 list, is the largest system of its kind, representing NERSC’s move toward exascale. De Jong and his colleagues were able to gain early access to Cori through the NERSC Exascale Science Applications Program, and the new NWChem code has been shown to perform well on the new machine.
According to de Jong, the NWChem planewave methods primarily comprise fast Fourier transform (FFT) algorithms and matrix multiplications of tall-skinny matrix products. Because current Intel math libraries don’t efficiently solve the tall-skinny matrix products in parallel, Mathias Jacquelin, a scientist in CRD’s Scalable Solvers Group, developed a parallel algorithm and optimized manycore implementation for calculating these matrices and then integrated that into the existing planewave codes.
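The tall-skinny products at issue have the shape C = AᵀB with far more rows than columns, so each rank or thread can form a small k-by-k contribution from its own block of rows, and the contributions are then summed in a reduction. A minimal NumPy sketch of that blocking (serial here for clarity; the actual NWChem implementation distributes the blocks across cores and is not shown in the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 16          # tall-skinny: many rows, few columns
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, k))

# Split the rows into blocks, as parallel workers would own them;
# each block yields a small k-by-k partial product.
blocks = np.array_split(np.arange(n), 8)
partials = [A[idx].T @ B[idx] for idx in blocks]

# Reduction: summing the partials reproduces the full product.
C = np.sum(partials, axis=0)
assert np.allclose(C, A.T @ B)
```

Because each partial product is tiny (k × k) relative to the inputs, the reduction is cheap and the row blocks can be processed independently, which is what makes the kernel amenable to manycore threading.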
When trying to squeeze the most performance from new architectures, it is helpful to understand how much headroom is left—how close are you to computing or data movement limits of the hardware, and when will you reach the point of diminishing returns in tuning an application’s performance. For this, Jacquelin turned to a tool known as a Roofline Model, developed several years ago by CRD computer scientist Sam Williams.
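The roofline bound itself is one line of arithmetic: attainable performance is the minimum of the machine's peak compute rate and its memory bandwidth multiplied by the kernel's arithmetic intensity (flops per byte moved). A sketch with illustrative, approximate Knights Landing-class numbers (assumptions, not official specifications):

```python
def roofline_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOP/s under the roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

PEAK = 3000.0  # GFLOP/s double precision (illustrative)
BW = 450.0     # GB/s from on-package high-bandwidth memory (illustrative)

# Low arithmetic intensity: the memory bandwidth "roof" binds.
print(roofline_gflops(PEAK, BW, 0.25))   # 112.5
# High arithmetic intensity: the compute "roof" binds.
print(roofline_gflops(PEAK, BW, 10.0))   # 3000.0
```

Plotting attainable performance against intensity gives the characteristic sloped-then-flat "roof"; where a measured kernel sits relative to that line shows how much tuning headroom remains.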
Jacquelin developed an analysis of the matrix factorization routine within a roofline model for the Knights Landing nodes. In a test case that simulated a solution with 64 water molecules, the team found that their code easily scaled up to all 68 cores available in a single massively parallel Intel Xeon Phi Knights Landing node. They also found that the new, completely threaded version of the planewave code performed three times faster on this manycore architecture than on current generations of Intel Xeon cores, which will allow computational chemists to model larger, more complex chemical systems in less time.
“Our achievement is especially good news for researchers who use NWChem because it means that they can exploit multicore architectures of current and future supercomputers in an efficient way,” says Jacquelin. “Because there are other areas of chemistry that also rely on tall-skinny matrices, I believe that our work could potentially be applied to those problems as well.”
“Getting this level of performance on the Knights Landing architecture is a real accomplishment and it took a team effort to get there,” says de Jong. “Next, we will be focusing on running some large scale simulations with these codes.”
This work was done with support from DOE’s Office of Science and Intel’s Parallel Computing Center at Berkeley Lab. NERSC is a DOE Office of Science User Facility. In addition to de Jong and Jacquelin, Eric Bylaska of PNNL was also a co-author on the paper.
Source: Linda Vu, Lawrence Berkeley National Laboratory
The post Berkeley Lab Researchers Target Chem Code for Knights Landing appeared first on HPCwire.
SAN JOSE, Calif., March 27, 2017 — Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in compute, storage and networking technologies including green computing, announces a new Rack Scale Design (RSD) solution that empowers cloud service providers, telecoms, and Fortune 500 companies to build their own agile, efficient, software-defined data centers. Supermicro RSD is a total solution comprised of Supermicro server/storage/networking hardware and an optimized rack level management software that represents a superset of the open source RSD software framework from Intel and industry standard Redfish RESTful APIs developed by DMTF (Distributed Management Task Force).
Supermicro RSD solves the hardware management and resource utilization challenges of data centers large or small, which often have tens of thousands of servers distributed across hundreds of racks managed with the traditional 1-to-1 server management tool IPMI (Intelligent Platform Management Interface). Designed with the whole rack as the new management unit, Supermicro RSD leverages open Redfish APIs to support composable infrastructure and enable interoperability among potential RSD offerings from different vendors. With industry standard Redfish APIs, Supermicro RSD can be further integrated into data center automation software such as Ansible or Puppet, or private cloud software such as OpenStack or VMware.
Supermicro RSD rack level management software is based on the Intel RSD framework, which provides the scale and efficiency for cloud operators to perform operations such as pooling and composability, in addition to the necessary telemetry and maintenance functions of the pod (a collection of racks) to manage allocated resources in a large scale data center environment. Users can provision, manage and power-on the composed node as if it were one physical node. When the task is complete, the user simply deletes the composed node to return the resource to the pools for other workloads.
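The compose-use-delete lifecycle described above can be sketched as Redfish-style REST calls. The endpoint paths and payload fields below follow the general shape of Intel's RSD PodM REST API but are assumptions for illustration, not taken from Supermicro's announcement; the helpers only build the requests rather than sending them.

```python
import json

POD_MANAGER = "https://podm.example.com"  # hypothetical PodM endpoint

def allocate_node_request(cpu_cores, memory_gib):
    """Build a request to compose a node from pooled resources
    (RSD-style 'Allocate' action; field names are illustrative)."""
    return {
        "method": "POST",
        "url": f"{POD_MANAGER}/redfish/v1/Nodes/Actions/Allocate",
        "body": json.dumps({
            "Processors": [{"TotalCores": cpu_cores}],
            "Memory": [{"CapacityMiB": memory_gib * 1024}],
        }),
    }

def delete_node_request(node_id):
    """Deleting the composed node returns its resources to the pools."""
    return {"method": "DELETE",
            "url": f"{POD_MANAGER}/redfish/v1/Nodes/{node_id}"}

req = allocate_node_request(cpu_cores=16, memory_gib=64)
print(req["url"])  # the hypothetical Allocate endpoint
```

Once the Allocate action succeeds, the composed node behaves like a physical server until the DELETE call disbands it, which is the workflow the paragraph above describes.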
A unique advantage Supermicro RSD solution offers is that it does not require purpose-built new hardware. In fact, the Supermicro RSD solution runs on all existing X10 (Broadwell) generation as well as new X11 generation server, storage and networking hardware. Furthermore, Supermicro MicroBlade offers a future-proof, disaggregated hardware that allows customers to independently refresh compute module (CPU + memory) hardware while keeping the remaining server investment intact resulting in substantial savings and flexibility.
“Supermicro RSD makes it easy for companies of any size to build cloud infrastructure that until now has been limited to the leading large public and private cloud providers,” said Charles Liang, President and CEO of Supermicro. “The Supermicro RSD solution enables more customers to build large scale modern data centers leveraging Supermicro’s best-of-breed server, storage and networking product portfolio.”
“The launch of Supermicro’s platform, incorporating Intel Rack Scale Design, brings open, industry standards-based, hyperscale-inspired capabilities such as resource discovery, composability and telemetry to cloud, communications and enterprise data centers,” said Charlie Wuischpard, General Manager of Intel’s Scalable Data Center Solutions Group. “Supermicro’s solution, based on Intel RSD, enables flexible, economical solutions for data centers, supported by Intel architecture and technologies.”
Supermicro RSD software includes the following components:
- Pod Manager (PodM): A pod is a collection of physical racks. Pod Manager sits at the top of the logical software hierarchy and uses the Redfish API to communicate with the racks that make up the pod. It manages and aggregates the hardware resources within the racks of the pod by polling the respective PSMEs and RMMs.
- Rack Management Module (RMM): RMM manages power and thermal resources within a rack by polling rack hardware, and reports this information to PodM through the Redfish API.
- Pooled System Management Engine (PSME): PSME acts as the drawer or chassis manager. PSME communicates with each BMC controller in the drawer/chassis and reports aggregated information, such as telemetry and asset information, through the Redfish API to PodM.
- Web UI: A browser-based graphical user interface that simplifies the management of RSD.
A minimum Supermicro RSD hardware configuration includes the following components:
- A 1U management appliance that bundles all RSD related software or a software only distribution
- Two Supermicro 1G management switches for connecting the baseboard management controllers (BMC)
- One Supermicro data switch
- Supermicro’s broad X10 and X11 server portfolio. Popular server choices include but are not limited to TwinPro, BigTwin, FatTwin, MicroBlade, SuperBlade and GPU servers
Popular Storage Choices include:
- 2U Ultra with 24 NVMe drives as hot storage
- 2U SSG with 24 SAS drives as warm storage
- 45-bay, 60-bay, or 90-bay JBODs as cold storage
For more information on Supermicro’s complete range of rack scale design solutions, please visit https://www.supermicro.com/solutions/SRSD.cfm.
About Super Micro Computer, Inc. (NASDAQ: SMCI)
Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.
The post Supermicro Announces Rack Scale Design for Datacenters appeared first on HPCwire.
Six high performance computing centers will be formally launched in the U.K. later this week, intended to widen access to HPC resources for U.K. industry and academics. This expansion of HPC resources, and of access to them, is being funded with £20 million from the Engineering and Physical Sciences Research Council (EPSRC), which plays a role in the U.K. somewhat similar to that of the National Science Foundation in the U.S.
The centers are located at the universities of Cambridge, Edinburgh, Exeter, and Oxford, Loughborough University, and University College London. According to today’s pre-launch announcement, some of the centers will be available free of charge to any EPSRC-supported researcher, and some will give access to UK industry. Some of the infrastructure is in place and has been in use for a while.
“The new centers provide a diversity of computing architectures, which are driven by science needs and are not met by the national facilities or universities. This is because the National HPC Service must meet the needs of the whole U.K. community and so cannot specialize in specific novel architectures or novel requirements,” according to the release.
It’s worth noting the U.K. move to bolster its HPC resources and use in both academia and industry is happening at a time of uncertainty around research funding in the U.S. The move is also occurring as the U.K. prepares to implement Brexit, its withdrawal from the European Union.
Here’s a brief snapshot of the new centers:
- GW4 Tier-2 HPC Centre for Advanced Architectures. The new service will be the first production system of its kind in the world, and will be named Isambard after Victorian engineer Isambard Kingdom Brunel. It will use an ARM processor system to provide access to a wide range of the most promising emerging architectures. Led by: Professor Simon McIntosh-Smith, University of Bristol. EPSRC grant: £3,000,000. Partners: Universities of Bristol, Bath, Cardiff and Exeter, Cray, and Met Office.
- Peta-5: A National Facility for Petascale Data Intensive Computation and Analytics. This multi-disciplinary facility will provide large-scale data simulation and high performance data analytics designed to enable advances in material science, computational chemistry, computational engineering and health informatics. Led by: Professor Paul Alexander, University of Cambridge. EPSRC grant: £5,000,000. Partners: Universities of Cambridge, Oxford and Southampton, Leicester and Bristol, UCL, Imperial College London, DiRAC, King’s College London, and The Alan Turing Institute.
- Tier-2 Hub in Materials and Molecular Modeling. The facility will be available to members of the Materials and Molecular Modeling (MMM) Hub as well as the wider MMM and Tier-2 communities. It will be called Thomas, after the polymath Thomas Young, and will have applications in energy, healthcare and the environment. Led by: Professor Angelos Michaelides, UCL. EPSRC grant: £4,000,000. Partners: UCL, Imperial College London, King’s College London, Queen Mary University of London, Queen’s University of Belfast, Universities of Cambridge, Oxford, Kent and Southampton, and OCF.
- JADE: Joint Academic Data science Endeavour. The largest GPU facility in the UK, with compute nodes each containing eight NVIDIA Tesla P100 GPUs tightly coupled through the high-speed NVLink interconnect, JADE will focus on machine learning and related data science areas, and molecular dynamics. It will have applications in areas such as natural language understanding, autonomous intelligent machines, medical imaging and drug design. Led by: Professor Mike Giles, University of Oxford. EPSRC grant: £3,000,000. Partners: Universities of Oxford, Edinburgh, Southampton, Sheffield and Bristol, Queen Mary University of London, UCL and King’s College London, and NVIDIA.
- HPC Midlands Plus. The HPC facility will be based at a new centre of excellence at Loughborough University’s Science and Enterprise Park. It will be used by universities, research organizations and businesses to undertake complex simulations and process vast quantities of data in fields ranging from engineering and manufacturing to healthcare and energy. Led by: Professor Steven Kenny, Loughborough University. EPSRC grant: £3,200,000. Partners: Loughborough University, Aston University, Universities of Birmingham, Leicester, Nottingham and Warwick, and Queen Mary University of London.
- EPCC Tier-2 HPC Service. The Edinburgh Parallel Computing Centre (EPCC) is expanding its new industry HPC system, named Cirrus, to five times its current size to provide a state-of-the-art multi-core HPC service for science and industry. A next-generation research data store, dedicated to Tier-2 users, is being installed to allow researchers to store data, share it and move it between different supercomputers. Led by: Professor Mark Parsons, University of Edinburgh. EPSRC grant: £2,400,000. Partners: Universities of Edinburgh, Bristol, Leeds and Strathclyde, and UCL.
The centers will be officially launched on Thursday 30 March at the Thinktank science museum in Birmingham.
CHAMPAIGN, Ill., March 27, 2017 — The OpenMP Architecture Review Board (ARB) this year celebrates the 20th anniversary of its incorporation and of the release of the first OpenMP API specification for parallel processing.
Since its advent in 1997, the OpenMP programming model has proved to be a key driver behind parallel programming for shared-memory architectures. Its powerful and flexible programming model has allowed researchers from various domains to enable parallelism in their applications. Over the two decades of its existence, OpenMP has tracked the evolution of hardware and the complexities of software to ensure that it stays as relevant to today’s high performance computing community as it was in 1997.
The OpenMP ARB has recently announced the first two events that will form part of the official celebrations. OpenMPCon and the International Conference on OpenMP (IWOMP) 2017 combine to provide OpenMP developers, researchers, and thought leaders with the opportunity to share their work and experiences. The two events take place at Stony Brook University in New York, USA between the 18th and 22nd of September and are now taking submissions.
OpenMPCon 2017 (18-20 Sept) provides a unique forum for OpenMP developers to present and discuss the development of real-world applications, libraries, tools and techniques that leverage the performance of parallel processing through the use of OpenMP. The submission deadline for OpenMPCon is June 1. http://openmpcon.org
IWOMP 2017 (21-22 Sept) focuses on the presentation of unpublished academic research and takes place immediately after OpenMPCon. The submission deadline for IWOMP is April 28. https://you.stonybrook.edu/iwomp2017/
“Developers attending this year’s OpenMPCon and IWOMP conferences will have the added bonus of joining us to celebrate the vital contribution OpenMP has made by enabling high-performance computing over the past two decades, and will also help us to shape OpenMP’s next twenty years,” said Michael Klemm, OpenMP CEO.
About the OpenMP ARB
The OpenMP ARB has a mission to standardize directive-based multi-language high-level parallelism that is performant, productive and portable. Jointly defined by a group of major computer hardware and software vendors, the OpenMP API is a portable, scalable model that gives parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from embedded systems and accelerator devices to multicore systems and shared-memory systems. The OpenMP ARB owns the OpenMP brand, oversees the OpenMP specification and produces and approves new versions of the specification. Further information can be found at http://www.openmp.org/.
Source: OpenMP ARB
The post The OpenMP ARB Celebrates 20 Years of Parallel HPC at Key OpenMP appeared first on HPCwire.
ARGONNE, Ill., March 27, 2017 — Computer scientist Valerie Taylor has been appointed as the next director of the Mathematics and Computer Science (MCS) division at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, effective July 3, 2017.
“Valerie brings with her a wealth of leadership experience, computer science knowledge and future vision,” said Rick Stevens, Argonne Associate Laboratory Director for Computing, Environment and Life Sciences. “We feel strongly that her enthusiasm and drive will serve her well in her new role, and are pleased to have her joining our staff.”
Taylor has received numerous awards for distinguished research and leadership and authored or co-authored more than 100 papers in the area of high performance computing, with a focus on performance analysis and modeling of parallel scientific applications. She is a fellow of the Institute of Electrical and Electronics Engineers and Association for Computing Machinery. Taylor most recently served as the senior associate dean of academic affairs in the College of Engineering and a Regents Professor and the Royce E. Wisenbaker Professor in the Department of Computer Science and Engineering at Texas A&M University.
In 2003, she joined Texas A&M as the head of Computer Science and Engineering, which she led until 2011. Under her leadership, the department saw unprecedented growth in faculty and research expenditures. Taylor started the Industries Affiliates Program, which continues to be a signature way to engage industry partners. Prior to joining Texas A&M, Taylor was a professor in the Electrical Engineering and Computer Science Department at Northwestern University for 11 years. While at Northwestern, she held a guest appointment with Argonne’s MCS Division. Taylor also serves as the Executive Director of the Center for Minorities and People with Disabilities in IT.
In order to help solve some of the nation’s most critical scientific problems, Argonne’s MCS Division produces next-generation technologies and software that exploit high-performance computing and tackle the challenges of big data generated by high-performance computing and large, experimental facilities.
About Argonne National Laboratory
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
About The U.S. Department of Energy’s Office of Science
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.
Source: Argonne National Laboratory
The post Valerie Taylor Named Argonne’s Mathematics and CS Division Director appeared first on HPCwire.
Just as AI has become the leitmotif of the advanced scale computing market, infusing much of the conversation about HPC in commercial and industrial spheres, it also is impacting high-level management changes in the industry.
This week saw two headliner announcements:
- Naveen Rao, former CEO of AI company Nervana, acquired by Intel last year, announced he will lead Intel’s new Artificial Intelligence Products Group (AIPG), a strategic, “cross-Intel organization.”
- Andrew Ng, one of the highest profile of players in AI, announced that he has resigned his post as chief scientist at Baidu. His destination: unknown.
In addition, Nvidia announced that Tencent Cloud will integrate its Tesla GPU accelerators and deep learning platform, along with Nvidia NVLink technology, into Tencent’s public cloud platform.
Rao announced his new position and AIPG in a blog post (“Making the Future Starts with AI”) that underscores Intel’s AI push, along with its recent $15B acquisition of Mobileye. Formation of AIPG adds fodder to the drumbeat among industry observers that the company views AI, broadly defined, as its next big growth market. In addition, the company’s processor roadmap emphasizes co-processors (aka accelerators) used for AI workloads. To date, Nvidia GPUs have enjoyed the AI processor spotlight. But in commenting on Intel’s x86-based roadmap at this week’s Leverage Big Data+Enterprise HPC event in Florida, a senior IT manager at a financial services company said he believes Intel will mount a major competitive response in the AI market. “I wouldn’t want to be Nvidia right now,” he said.
Rao himself referred to Intel as “a data company.”
“The new organization (AIPG) will align resources from across the company to include engineering, labs, software and more as we build on our current leading AI portfolio: the Intel Nervana platform, a full-stack of hardware and software AI offerings that our customers are looking for from us,” Rao said.
“Just as Intel has done in previous waves of computational trends, such as personal and cloud computing, Intel intends to rally the industry around a set of standards for AI that ultimately brings down costs and makes AI more accessible to more people – not only institutions, governments and large companies, as it is today,” he said.
Nvidia had significant news of its own this week, announcing Tencent Cloud’s adoption of its Tesla GPU accelerators to help advance AI for enterprise customers.
“Tencent Cloud GPU offerings with NVIDIA’s deep learning platform will help companies in China rapidly integrate AI capabilities into their products and services,” said Sam Xie, vice president of Tencent Cloud. “Our customers will gain greater computing flexibility and power, giving them a powerful competitive advantage.”
As part of the companies’ collaboration, Tencent Cloud said it will offer a range of cloud products that will include GPU cloud servers incorporating Nvidia Tesla P100, P40 and M40 GPU accelerators and Nvidia deep learning software.
As for Andrew Ng, he did not state what his next career step will be, only saying “I will continue my work to shepherd in this important societal change.
“In addition to transforming large companies to use AI, there are also rich opportunities for entrepreneurship as well as further AI research,” he said on Twitter. “I want all of us to have self-driving cars; conversational computers that we can talk to naturally; and healthcare robots that understand what ails us. The industrial revolution freed humanity from much repetitive physical drudgery; I now want AI to free humanity from repetitive mental drudgery, such as driving in traffic. This work cannot be done by any single company — it will be done by the global AI community of researchers and engineers.”
Ng, who was a founder of the Google Brain project, joined Baidu in 2014 to work on AI, and since then, he said, Baidu’s AI group has grown to roughly 1,300 people.
“Our AI software is used every day by hundreds of millions of people,” said Ng. “My team birthed one new business unit per year each of the last two years: autonomous driving and the DuerOS Conversational Computing platform. We are also incubating additional promising technologies, such as face-recognition (used in turnstiles that open automatically when an authorized person approaches), Melody (an AI-powered conversational bot for healthcare) and several more.”
The post AI in the News: Rao in at Intel, Ng out at Baidu, Nvidia on at Tencent Cloud appeared first on HPCwire.
Wrangler experienced issues with the DSSD nodes supporting the /gpfs/flash filesystem. A number of jobs were running at the time; many were not using this filesystem and so should be unaffected. If your jobs were using /gpfs/flash and failed because of the outage, please submit a ticket and let us know.
A NASA space observatory assembled by students and faculty in the Physics Department at Colorado School of Mines could launch this weekend from Wanaka, New Zealand.
The Extreme Universe Space Observatory Super Pressure Balloon would fly at 110,000 feet and make the first fluorescence observations of high-energy cosmic ray extensive air showers by looking down at Earth’s atmosphere from near space.
“We expect to have our launch window open on Saturday [Friday MST], although the weather forecast for the weekend is not so good,” said Physics Professor Lawrence Wiencke, who has overseen the undergraduate and graduate students assembling the gondola that will hold the project’s instrumentation. After building the gondola in Golden, Colo., Wiencke’s team brought it to NASA’s scientific balloon facility in Palestine, Texas, before shipping it to New Zealand in December 2016.
NASA conducted its final tests May 23 in New Zealand, making sure all primary balloon systems—tracking, telemetry, communications and flight termination—as well as redundant systems are functioning properly.
“Today’s test is the culmination of more than a year of preparation work all leading up to the team declaring the balloon and payload as flight ready for the mission,” said Gabe Garde, NASA mission manager for the launch. “After today, much will be in the hands of Mother Nature as well as in receiving overflight clearance permissions from a handful of countries.”
NASA leadership granted “Approval to Proceed” for the mission earlier that day.
“The level of public interest here is extremely high,” said Wiencke, who remains in New Zealand. Local newspapers have written about the project, while acting United States Ambassador to New Zealand Candy Green and other dignitaries attended an open house.
Engineering physics undergraduate student Rachel Gregg, PhD candidate Johannes Eser, postdoctoral researcher Simon Bacholle and former postdoc Lech Piotrowski were members of Wiencke’s team in New Zealand for testing and other preparations. Bacholle has returned to Mines to set up the US operations center for the balloon, while Piotrowski is setting up an operations center in Japan.
A similar balloon flew for almost 47 days in 2016; Wiencke hopes for 50 days of flight for this one. The project seeks to help determine where the highest-energy subatomic particles in the universe come from and how they travel to Earth, both major open questions in astroparticle physics.