HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Nor-Tech Introduces HPC Cluster Support Solution

Wed, 02/08/2017 - 07:16

Feb. 8 — Nor-Tech, renowned throughout the technology community for industry-leading technology and support, just announced contract-based HPC cluster service and support. Called the Nor-Tech HPC Cluster Support Solution, the service is designed as an entry point for prospects interested in transitioning to Nor-Tech clusters.

Nor-Tech President and CEO David Bollig said, “This is an excellent opportunity for anyone with an existing cluster to experience our superior cluster support services firsthand before they make a commitment to buy from us.”

An excellent market for this service is HPC cluster and workstation users who have purchased from a large manufacturer but are not getting the service attention that larger buyers enjoy; large cluster manufacturers typically won’t allocate service resources for smaller cluster buyers.

“Many small to mid-sized companies have to struggle to get even basic support services,” Bollig said. “This means they routinely get stuck on hold, don’t get call backs, and sometimes are shut out of support entirely. This is a common refrain from clients that bought their first cluster from a larger manufacturer and a big part of the reason they buy their second cluster from us.”

The service and support that Nor-Tech offers to clients and non-clients alike features the following:

  • U.S.-based: all support inquiries are routed to Nor-Tech’s Minnesota headquarters.
  • Flexibility: Nor-Tech provides both remote and onsite help.
  • No Wait Time: Nor-Tech does not put clients on hold.
  • Expertise: Nor-Tech’s certified support staff averages 10+ years of experience.
  • Familiarity: Very often the same team members that built the cluster are available for support.
  • Patience: Nor-Tech engineers take the time to thoroughly listen before diagnosing the issue. They also explain solutions in terms that those without a deep technology background can understand.
  • Detailed, Customized Documentation: Nor-Tech’s HPC cluster and workstation clients receive manuals and Quick Start Guides that are customized down to the graphics.

The Nor-Tech HPC Cluster Support Solution includes prepaid blocks of time in 10-, 25-, and 50-hour increments and is available to any HPC cluster user that is not getting the support they need from the manufacturer.

“I am incredulous that anyone should have to look outside the manufacturer for support,” Bollig said. “At Nor-Tech, we take pride in the level of support that we include with the clusters we build. Along with quality and reputation, outstanding support is one of the primary reasons that our clients cite for their long-term loyalty.”

For example, Nor-Tech began working with a large automotive company that had purchased a cluster from a major manufacturer; the manufacturer deployed the cluster and provided almost no support after that. Frustrated, company officials purchased a Nor-Tech HPC Cluster Support Solution time block. Impressed with the service, the company purchased its next cluster from Nor-Tech.

About Nor-Tech

A 2016 HPCwire award finalist, Nor-Tech is renowned throughout the scientific, academic, and business communities for easy-to-deploy turnkey clusters and expert, no-wait-time support. All of Nor-Tech’s technology is made by Nor-Tech in Minnesota and supported by Nor-Tech around the world. In addition to HPC clusters, Nor-Tech’s custom technology includes workstations, desktops, and servers for a range of applications including CAE, CFD, and FEA. Nor-Tech engineers average 20+ years of experience and are responsible for significant high performance computing innovations. The company has been in business since 1998 and is headquartered in Burnsville, Minn., just outside of Minneapolis. To contact Nor-Tech, call 952-808-1000 (toll free: 877-808-1010) or visit http://www.nor-tech.com.

Source: Nor-Tech


ORNL Enhances Data Integrity and Accessibility With Active Archive Solutions

Wed, 02/08/2017 - 06:50

BOULDER, Colo., Feb. 8 — The Active Archive Alliance today announced that Oak Ridge National Laboratory (ORNL) has upgraded its active archive solutions to enhance the integrity and accessibility of its vast amount of data. The new solutions allow ORNL to meet its increasing data demands and enable fast file recall for its users.

ORNL is home to the United States’ most powerful supercomputer for open science, Titan. Titan is capable of 27 petaflops, or 27 quadrillion calculations per second, for scientific simulations. More than 1,200 users have access to the supercomputer and its file storage systems, where simulation data is stored so that users can quickly and efficiently access datasets as needed.

“These active archive upgrades were crucial to ensuring our users’ data is both accessible and fault-tolerant so they can continue performing high-priority research at our facilities,” said Jack Wells, director of science for the National Center for Computational Sciences at ORNL. “Our storage-intensive users have been very pleased with our new data storage capabilities.”

The active archive solutions include Redundant Array of Independent Tapes (RAIT) technology as well as new enterprise-class tape drives and an 18 PB disk cache. The center currently has more than 120 tape drives and the ability to house 60,000 tapes. The archive has 107 PB of tape storage capacity, of which 59 PB is currently in use, and it can scale to a total data capacity of 498 PB.

“We are looking at best-of-breed solutions all the time, whether those be for the disk cache or tape layer or for the application managing those hierarchical storage systems,” said Quinn Mitchell, High-Performance Computing Storage System Administrator at the Oak Ridge Leadership Computing Facility (OLCF). “We are always evaluating our current storage system to find the best active archive solutions to meet both our center’s needs and the needs of the next generation of computational scientists.”

Read the full ORNL case study here.

About The Active Archive Alliance

The Active Archive Alliance is a collaborative industry alliance dedicated to promoting active archives for simplified access to all of your data, all of the time. Launched in early 2010, the Active Archive Alliance is a vendor-neutral organization open to leading providers of active archive technologies, including file systems, active archive applications, cloud storage, and high-density tape and disk storage. Active Archive Alliance members provide active archive solutions, best practices, and industry testimonials so that organizations can achieve fast, active access to all their data in the most cost-effective manner. Members include DDN Storage, Fujifilm Recording Media USA, HGST, Quantum, Spectra Logic and StrongBox Data Solutions. Visit www.activearchive.com for more information.

Source: Active Archive Alliance


SC17 Student Cluster Competition Applications Now Being Accepted

Tue, 02/07/2017 - 14:37

SALT LAKE CITY, Utah, Feb. 7 — Students interested in demonstrating their high-performance computing skills on a global stage are invited to team up and sign up to compete in the tenth anniversary Student Cluster Competition at the SC17 Conference to be held Nov. 12-17, 2017, in Denver, Colo. SC17 is the premier international conference on high performance computing, networking, storage and analysis.

The Student Cluster Competition (SCC) is a high-energy event featuring student supercomputing talent from around the world competing to build and operate powerful cluster computers, all in view of thousands of HPC experts. Applications are now being accepted, and the deadline for team submissions is Friday, April 7, 2017.

Launched at SC07 to showcase student expertise in a friendly yet spirited competition, the Student Cluster Competition aims to introduce the next generation of students to the high-performance computing community. Over the last couple of years, the competition has drawn teams from around the world, including Australia, Canada, China, Costa Rica, Germany, Russia, Taiwan and the United States.

The SC17 competition will again include the SCC Reproducibility Initiative, in which students will be challenged to reproduce a paper rather than run prescribed datasets. Although the students perform tasks similar to those in previous competitions, they see the work from an entirely new perspective: as a component of the scientific process.

“We added this challenge at SC16 to help students understand, early in their careers, the important role reproducibility plays in research,” said SCC Chair Stephen Harrell. “This not only adds another layer of competition, but also brings more real-world experience to the event.”

Team proposals must be submitted via the SC17 submission site at https://submissions.supercomputing.org/.

How the Challenge Works

In this real-time, non-stop, 48-hour competition, teams of undergraduate and/or high school students assemble small cluster computers on the SC17 exhibit floor and race to complete a real-world workload across a series of applications and impress HPC industry judges. Prior to the competition, teams work with their advisor and vendor partners to design and build a cutting-edge cluster from commercially available components that does not exceed a 3000-watt power limit (26-amp at 120-volt), and work with application experts to tune and run the competition codes.

Questions should be sent to student-cluster-competition@info.supercomputing.org

About SC17

SC17, sponsored by the ACM (Association for Computing Machinery) and the IEEE Computer Society, offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world class exhibit area, demonstrations and opportunities for hands-on learning.

Source: SC17


ISC High Performance Keynote Forecasts Future Role of HPC in Weather and Climate Prediction

Tue, 02/07/2017 - 11:20

FRANKFURT, Germany, Feb. 7 — The ISC High Performance organizers are pleased to announce that this year’s Tuesday keynote will be delivered by Dr. Peter Bauer, who will underline the role high performance computing plays in the very pressing topic of weather and climate prediction, and also reveal the ambitions the European Centre for Medium-Range Weather Forecasts (ECMWF) has for exascale computing.

In the keynote to be held on June 20th, Bauer, who is the Deputy Director of the Research Department at ECMWF in Reading, UK, will address over 3,000 conference attendees on the existing challenges in this domain. Specifically, he will discuss the computing and data challenges, as well as the current avenues the weather and climate prediction community is taking in preparing for the new computing era. This year’s conference will take place in Frankfurt, Germany, from June 18 through June 22, 2017.

During his career, Bauer was awarded post-doctoral and research fellowships by the University Corporation for Atmospheric Research (UCAR) and the National Aeronautics and Space Administration (NASA). He worked for the German Aerospace Center, leading a research team on satellite meteorology before joining ECMWF in 2000. He is the author and co-author of over 100 peer-reviewed scientific journal papers and a member of several advisory committees for national weather services, the World Meteorological Organization and European space agencies. His current duties also include the management of the Scalability Programme that will prepare ECMWF for exascale computing.

Dr. Peter Bauer

In his keynote abstract, Bauer says: “Meeting the future requirements for forecast reliability and timeliness needs 100-1000 times bigger HPC resources than today – and towards exascale. To meet these requirements, the weather and climate prediction community is undergoing one of its biggest revolutions since its foundation in the early 20th century.” He goes on to explain that the revolution encompasses a fundamental redesign of mathematical algorithms and numerical methods, the ingestion of new programming models, the implementation of dynamic and resilient workflows and the efficient post-processing and handling of big data.

“Weather and climate prediction are HPC applications with significant societal and economic impact, ranging from disaster response and climate change adaptation strategies to agricultural production and energy policy. Forecasts are based on millions of observations made every day around the globe, which are then input to numerical models. The models represent complex processes that take place on scales from hundreds of metres to thousands of kilometres in the atmosphere, the ocean, the land surface, the cryosphere and the biosphere. Forecast production and dissemination to users is always time critical, and output data volumes already reach petabytes per week.”

Submission Deadlines for ISC High Performance are Fast Approaching

Submission deadlines are approaching for a number of conference program elements, including workshops, tutorials, birds-of-a-feather (BoF) sessions, the PhD forum, research posters, project posters, and the student volunteer program. If you are interested in participating in the program, please visit the ISC High Performance website.

About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

Over 400 hand-picked expert speakers and 150 exhibitors, consisting of leading research centers and vendors, will greet attendees at ISC High Performance. A number of events complement the Monday – Wednesday keynotes, including the Distinguished Speaker Series, the Industry Track, the Machine Learning Track, Tutorials, Workshops, the Research Paper Sessions, Birds-of-a-Feather (BoF) Sessions, the Research Poster Sessions, the PhD Forum, the Project Poster Sessions and the Exhibitor Forums.

Source: ISC High Performance


Van Andel Research Optimizes HPC Pipeline with DDN

Tue, 02/07/2017 - 11:15

For more than a decade the swelling output from life sciences experimental instruments has been overwhelming research computing infrastructures intended to support them. DNA sequencers were the first – instrument capacities seemed to jump monthly. Today it’s the cryo electron microscope – some of them 13TB a day beasts. Even a well-planned brand new HPC environment can find itself underpowered by the time it is switched on.

A good example of the challenge and nimbleness required to cope is Van Andel Research Institute’s (VARI) initiative to build a new HPC environment to support its work on epigenetic, genetic, molecular and cellular origins of cancer – all of which require substantial computational resources. VARI (Grand Rapids, MI) is part of Van Andel Institute.

With the HPC building project largely finished, Zack Ramjan, research computing architect for VARI, recalled wryly, “About 10 months ago, we decided we were going to get into the business of cryo-EM. That was news to me and maybe news to many of us here. That suite of three instruments has huge data needs. So we went back and, luckily, the design that we had was rock solid, so that’s where we kind of started adding.” He’d been recruited from USC in late 2014 specifically to lead the effort to create an HPC environment for scientific computing.

Titan Krios

The response was to re-examine the storage system, which would absorb the bulk of the new workload strain, and deploy expanded DDN storage – GS7K appliances and WOS – to cope with demand expected from three new cryo-EMs (an FEI Titan Krios, an FEI Arctica, and a smaller instrument for QC). Taken together, the original HPC building effort and the changes made later on the fly showcase the rapidly changing choices often confronted by “smaller” research institutions mounting HPC overhauls.

Working with DDN, Silicon Mechanics, and Bright Computing, VARI developed a modest-size hybrid cluster-cloud environment with roughly 2,000 cores, 2.2 petabytes of storage, and 40Gb Ethernet throughout. Major components include private-cloud hosting with OpenStack, Big Data analytics, petabyte-scale distributed/parallel storage, and cluster/grid computing. The work required close collaboration with VARI researchers – roughly 32 groups of varying size – to design and support computing workloads in genomics, epigenetics, next-gen sequencing, molecular dynamics, bioinformatics and biostatistics.

As for many similar-sized institutions, bringing order to the storage architecture was a major challenge. Without centralized HPC resources in-house, individual investigators (and groups) tend to go it alone, creating a chaotic, disconnected storage landscape.

“These pools of storage were scattered and independent. They were small, not scalable, and intended for specific use cases,” he recalled. “I wanted to replace all that with a single solution that could support HPC because it’s not just about the storage capacity; we also need to support access to that data in a high performance way, [for] moving data very fast, in parallel, to many machines at once.”

A wide range of instruments – sequencers and cryo-EMs are just two – required access to storage. Workflows were mixed. Data from external collaborators and other consortia were often brought in-house and had a way of “multiplying after being worked on.” Ramjan’s plan was to centralize and simplify. Data would stream directly from instruments to storage. Investigator-created data would likewise be captured in one place.

“There’s no analysis storage and instrument storage, it’s all one storage. The data goes straight to a DDN device. My design was to remove copy and duplications. It comes in one time and users are working on it. It’s a tiered approach. So data goes straight into the highest performing tier, from there, there is no more movement.” DDN GS7K devices comprise this higher performing tier.

As the data ‘cools’ and investigators move to new projects, “We may have to retain the data due to obligations or the user wants to keep it around; then we don’t want to keep ‘cold’ data on our highest performing device. Behind the scenes this data is automatically moved to a slower and more economical tier,” said Ramjan. This is the WOS controlled tier. It’s also where much of the cryo-EM data ends up after initial processing.

DDN GRIDScaler-GS7K

Physically there are actually four places the data can be, although the user only sees one, emphasized Ramjan. “It’s either on our mirrored pool – we have two GS7Ks, one on either side of the building, for disaster recovery in case of a flood or tornado, something like that. If the data doesn’t need to have that level of protection, it will be on one of the GS7Ks or it will be replicated on WOS. There are two WOS devices also spread out in the same way, so the data could be sitting mirrored, replicated, on either side. The lowest level of protection would be a single WOS device.”

“Primary data – meaning data we’re making here, it came off a machine, or there’s no recreating it because the sample is destroyed – we consider that worthy of full replication, sitting in two places on the two GS7Ks. If the user lets it cool down, it will go to the two WOS devices, and inside those devices there is also RAID, so you can say the replication factor is 2-plus. We maintain that for our instrument data.”

Data movement is largely controlled by policy capabilities in the file system. Automating data flow from instruments in this way, for example, greatly reduces steps and admin requirements. Choosing an effective parallel file system is a key component in such a scheme and reduces the need for additional tools.
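
GPFS/Spectrum Scale, the file system ultimately chosen (see below), expresses this kind of movement as rules evaluated against file attributes such as last access time. The Python sketch that follows only mimics the tier-selection logic described above (hot data on the GS7K tier, cooled data on WOS, primary instrument data always kept in two copies); it is not the file system's actual policy language, and the 90-day threshold and sample paths are assumptions made for illustration.

```python
# Illustrative sketch of the tiering decision described in the article: hot data
# stays on the high-performance GS7K tier, data that has "cooled" moves to WOS.
# The 90-day threshold is an assumed example; in practice these rules live in the
# file system's policy engine, not in application code like this.
from dataclasses import dataclass
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=90)   # assumed cooling threshold

@dataclass
class FileRecord:
    path: str
    last_access: datetime
    is_primary: bool              # irreplaceable instrument data

def target_tier(f: FileRecord, now: datetime) -> str:
    """Pick a storage tier for one file, mirroring the policy described above."""
    if now - f.last_access < COLD_AFTER:
        # Hot data lives on the fast tier; primary data is mirrored across both GS7Ks.
        return "GS7K (mirrored)" if f.is_primary else "GS7K"
    # Cooled data drops to the economical tier; primary data keeps two copies.
    return "WOS (replicated)" if f.is_primary else "WOS"

if __name__ == "__main__":
    now = datetime(2017, 2, 7)
    demo = [
        FileRecord("/data/cryoem/run42.mrc", datetime(2017, 2, 1), True),
        FileRecord("/data/rnaseq/old_counts.tsv", datetime(2016, 6, 1), False),
    ]
    for f in demo:
        print(f.path, "->", target_tier(f, now))
```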

“There are really only three” options for very high performance file systems, said Ramjan: “GPFS (now Spectrum Scale from IBM), Lustre, and OneFS (Dell EMC/Isilon).” OneFS, which VARI had earlier experience with, was cost-prohibitive compared to the other choices, he said. He also thinks Lustre is more difficult to work with than GPFS and lacks key features.

“We had Isilon before. I won’t say anything bad about it, but pricewise it was pretty painful. I spent a lot of time exploring both of the others. Lustre is by no means a bad option, but for us the right fit was GPFS. I needed something that was more appliance-based. You know, we’re not the size of the University of Michigan or USC or a massive institute with 100 guys in the IT department ready to work on this. We wanted to bring something in quick that would be well supported.

“I felt Lustre would require more labor and time than I was willing to spend and it didn’t have some of the things GPFS does like tiering and rule-based tiering and easier expansion. DDN could equally have sold us a Lustre GSK too if we wanted,” he said.

Zack Ramjan-VARI

On balance, “Deploying DDN’s end-to-end storage solution has allowed us to elevate the standard of protection, increase compliance and push boundaries on a single, highly scalable storage platform,” said Ramjan. “We’ve also saved hundreds of thousands of dollars by centralizing the storage of our data-intensive research and a dozen data-hungry scientific instruments on DDN.”

Interesting side note: “The funny thing was the vendors of the microscopes didn’t know anything about IT, so they couldn’t actually tell us concretely what we’d need. For example, would a 10Gig network be sufficient? They couldn’t answer any of those questions and they still can’t, unfortunately. It put me in quite a bind. I ended up talking with George Vacek at DDN and he pointed me towards three other cryo-EM users also using DDN, which turned out to be a great source of support.”

Storage, of course, is only part of the HPC puzzle. Ramjan was replacing a system that had more in common with traditional corporate enterprise systems than with scientific computing platforms. Starting from scratch, he had a fair degree of freedom in selecting the architecture and choosing components. He says going with a hybrid cluster/cloud architecture was the correct choice.

Silicon Mechanics handled the heavy lifting with regard to hardware and integration. The Bright Computing provisioning and management platform was used. There are also heterogeneous computing elements although accelerators were not an early priority.

“The genomics stuff – sequencing, genotyping, etc. – that we’ve been doing doesn’t benefit much from GPUs, but the imaging analysis we are getting into does. So we do have a mix of nodes, some with accelerators, although they are all very similar at the main processor. The nodes all have Intel Xeons with a lot of memory, fast SSD, and fast network connections. We have some [NVIDIA] K80s and are bringing in some of the new GTX 1080s. I’m pretty excited about the 1080s because they are a quarter of the cost and in our use case seem to be performing just as well if not a little bit better,” said Ramjan.

“I had the option of using InfiniBand, but said listen, we know Ethernet, we can do Ethernet in a high performance way, let’s just stick with it at this time. Now there’s up to 100 Gig Ethernet.”

In going with the hybrid HPC cluster/cloud route, Ramjan evaluated public cloud options. “I wanted to be sure it made sense to do it in-house (OpenStack) when I could just put it in Google’s cloud or Amazon or Microsoft. We ran the numbers and I think cloud computing is great for someone doing a little bit of computing a few times a year, but not for us.” It’s not the cost of cycles; they are cheap enough. It’s the data movement and storage charges.

Cloud bursting to the public cloud is an open question for Ramjan. He is already working with Bright Computing on a system environment update, expected to go live in March, that will have cloud bursting capability. He wonders how much it will be used.

“It’s good for rare cases. Still you have to balance that against just acquiring more nodes. The data movement in and out of the cloud is where they get you on price. With a small batch I could see it being economical but I have an instrument here that can produce 13 TB a day – moving that is going to be very expensive. We have people doing molecular dynamics, low data volume, low storage volume, but high CPU requirements. But even then latency is a factor.”
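
A rough back-of-the-envelope calculation shows why the 13 TB-per-day figure dominates the decision. The per-gigabyte egress rate below is an assumed, illustrative value; actual public cloud pricing varies and is not quoted in the article.

```python
# Back-of-the-envelope cost of moving one instrument's daily output out of a
# public cloud. The per-GB egress rate is an assumption for illustration only.
TB_PER_DAY = 13             # stated cryo-EM output
GB_PER_TB = 1000
EGRESS_RATE_PER_GB = 0.09   # assumed egress price, USD/GB (illustrative)

daily_cost = TB_PER_DAY * GB_PER_TB * EGRESS_RATE_PER_GB
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 30:,.0f}/month just to move the data")
# Roughly $1,170/day, or about $35,000/month, before any compute or storage charges.
```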

System adoption has been faster than expected. “I thought utilization would ramp up slowly, but [already] we’re sitting at 80 percent utilization on a constant basis often at 100 percent. It surprised me how fast and how hungry our investigators were for these resources. If you would have asked them beforehand ‘do you need this’ they probably would have said no.”


Wrangler Supercomputer at TACC Supports Information Retrieval Projects

Tue, 02/07/2017 - 07:08

Feb. 7 — Much of the data of the World Wide Web hides like an iceberg below the surface. The so-called ‘deep web’ has been estimated to be 500 times bigger than the ‘surface web’ seen through search engines like Google. For scientists and others, the deep web holds important computer code and its licensing agreements. Nestled further inside the deep web, one finds the ‘dark web,’ a place where images and video are used by traders in illicit drugs, weapons, and human trafficking. A new data-intensive supercomputer called Wrangler is helping researchers obtain meaningful answers from the hidden data of the public web.

The Wrangler supercomputer got its start in response to the question, can a computer be built to handle massive amounts of I/O (input and output)? The National Science Foundation (NSF) in 2013 got behind this effort and awarded the Texas Advanced Computing Center (TACC), Indiana University, and the University of Chicago $11.2 million to build a first-of-its-kind data-intensive supercomputer. Wrangler’s 600 terabytes of lightning-fast flash storage enabled the speedy reads and writes of files needed to fly past big data bottlenecks that can slow down even the fastest computers. It was built to work in tandem with number crunchers such as TACC’s Stampede, which in 2013 was the sixth fastest computer in the world.

While Wrangler was being built, a separate project came together headed by the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense. Back in 1969, DARPA had built the ARPANET, which eventually grew to become the Internet, as a way to exchange files and share information. In 2014, DARPA wanted something new – a search engine for the deep web. They were motivated to uncover the deep web’s hidden and illegal activity, according to Chris Mattmann, chief architect in the Instrument and Science Data Systems Section of the NASA Jet Propulsion Laboratory (JPL) at the California Institute of Technology.

“Behind forms and logins, there are bad things. Behind the dynamic portions of the web like AJAX and Javascript, people are doing nefarious things,” said Mattmann. They’re not indexed because the web crawlers of Google and others ignore most images, video, and audio files. “People are going on a forum site and they’re posting a picture of a woman that they’re trafficking. And they’re asking for payment for that. People are going to a different site and they’re posting illicit drugs, or weapons, guns, or things like that to sell,” he said.

Mattmann added that an even more inaccessible portion of the deep web called the ‘dark web’ can only be reached through a special browser client and protocol called TOR, The Onion Router. “On the dark web,” said Mattmann, “they’re doing even more nefarious things.” They traffic in guns and human organs, he explained. “They’re basically doing these activities and then they’re tying them back to terrorism.”

In response, DARPA started a program called Memex. Its name blends ‘memory’ with ‘index’ and has roots in an influential 1945 Atlantic magazine article penned by U.S. engineer and Raytheon co-founder Vannevar Bush. His futuristic essay imagined putting all of a person’s communications – books, records, and even all spoken and written words – within fingertip reach. The DARPA Memex program sought to make the deep web accessible. “The goal of Memex was to provide search engines the information retrieval capacity to deal with those situations and to help defense and law enforcement go after the bad guys there,” Mattmann said.

Karanjeet Singh is a University of Southern California graduate student who works with Chris Mattmann on Memex and other projects. “The objective is to get more and more domain-specific (specialized) information from the Internet and try to make facts from that information,” Singh said. He added that agencies such as law enforcement continue to tailor their questions to the limitations of search engines. In some ways the cart leads the horse in deep web search. “Although we have a lot of search-based queries through different search engines like Google,” Singh said, “it’s still a challenge to query the system in a way that answers your questions directly.”

Once the Memex user extracts the information they need, they can apply tools such as named entity recognition, sentiment analysis, and topic summarization. These tools can help law enforcement agencies like the U.S. Federal Bureau of Investigation find links between different activities, such as illegal weapon sales and human trafficking, Singh explained.

“Let’s say that we have one system directly in front of us, and there is some crime going on,” Singh said. “The FBI comes in and they have some set of questions or some specific information, such as a person with such hair color, this much age. Probably the best thing would be to mention a user ID on the Internet that the person is using. So with all three pieces of information, if you feed it into the Memex system, Memex would search in the database it has collected and would yield the web pages that match that information. It would yield the statistics, like where this person has been or where it has been sited in geolocation and also in the form of graphs and others.”

“What JPL is trying to do is automate all of these processes into a system where you can just feed in the questions and we get the answers,” Singh said. For that he worked with an open source web crawler called Apache Nutch. It retrieves and collects web page and domain information from the deep web. The MapReduce framework powers those crawls with a divide-and-conquer approach to big data that breaks it up into small pieces that run simultaneously. The problem is that even the fastest computers like Stampede weren’t designed to handle the input and output of millions of files needed for the Memex project.
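
The divide-and-conquer pattern described here can be sketched without Nutch or Hadoop themselves: a list of pages is split across worker processes (the map step), each worker extracts what it can, and the partial results are merged (the reduce step). The snippet below is a simplified stand-in for that pattern, not Nutch's or Memex's actual job code; the seed URLs and the per-page extraction are placeholders.

```python
# Minimal MapReduce-style sketch of how a crawl workload is divided into pieces
# that run simultaneously and then merged. This illustrates the pattern only;
# it is not Apache Nutch or Hadoop code, and the seeds/parsing are placeholders.
from collections import Counter
from multiprocessing import Pool
from urllib.parse import urlparse

SEED_URLS = [                                  # placeholder seed list
    "http://example.org/page1",
    "http://example.org/page2",
    "http://example.net/forum/thread42",
]

def map_fetch(url):
    """Map step: process one page and emit partial counts keyed by domain.

    A real crawler would download the page and parse links, images and text;
    here we only derive the domain so the sketch stays self-contained."""
    return Counter({urlparse(url).netloc: 1})

def reduce_counts(partials):
    """Reduce step: merge per-page partial results into one summary."""
    total = Counter()
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # the pieces run simultaneously
        partials = pool.map(map_fetch, SEED_URLS)
    print(reduce_counts(partials))             # e.g. {'example.org': 2, 'example.net': 1}
```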

The Wrangler data-intensive supercomputer avoids data overload by virtue of its 600 terabytes of speedy flash storage. What’s more, Wrangler supports the Hadoop framework, which runs using MapReduce. “Wrangler, as a platform, can run very large Hadoop-based and Spark-based crawling jobs,” Mattmann said. “It’s a fantastic resource that we didn’t have before as a mechanism to do research; to go out and test our algorithms and our new search engines and our crawlers on these sites; and to evaluate the extractions and analytics and things like that afterwards. Wrangler has been an amazing resource to help us do that, to run these large-scale crawls, to do these type of evaluations, to help develop techniques that are helping save people, stop crime, and stop terrorism around the world.”

Click here to view the entire article.

Source: Jorge Salazar, TACC


Supermicro Deploys 30,000+ MicroBlade Servers to Enable One of the World’s Highest Efficiency Datacenters

Mon, 02/06/2017 - 07:53

SAN JOSE, Calif., Feb. 6 — Super Micro Computer, Inc., a global leader in compute, storage and networking technologies including green computing, has announced deployment of its disaggregated MicroBlade systems at one of the world’s highest density and energy efficient data centers.

A technology-leading Fortune 100 company has deployed over 30,000 Supermicro MicroBlade servers at its Silicon Valley data center facility, which has a Power Usage Effectiveness (PUE) of 1.06, to support the company’s growing compute needs. Compared to a traditional data center running at a PUE of 1.49 or more, the new datacenter achieves an 88 percent improvement in overall energy efficiency. When the build-out is complete at a 35 megawatt IT load, the company is targeting $13.18M in savings per year in total energy costs across the entire datacenter.
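
The 88 percent figure follows from comparing the non-IT overhead implied by the two PUE values, and the stated dollar savings are consistent with the 35 MW IT load if an electricity rate of roughly $0.10/kWh is assumed (the rate is not given in the release). A quick sketch of that arithmetic:

```python
# Rough sketch of the efficiency arithmetic implied by the figures above.
# The electricity rate is an assumption for illustration; it is not stated by Supermicro.
IT_LOAD_MW = 35          # stated IT load at full build-out
PUE_NEW = 1.06           # new MicroBlade-based facility
PUE_OLD = 1.49           # "traditional" data center baseline
RATE_PER_KWH = 0.10      # assumed electricity price, USD/kWh (illustrative)
HOURS_PER_YEAR = 8760

# Overhead (cooling, power conversion, etc.) is the energy drawn beyond the IT load itself.
overhead_new = PUE_NEW - 1.0   # 0.06 W of overhead per W of IT load
overhead_old = PUE_OLD - 1.0   # 0.49 W of overhead per W of IT load
print(f"Overhead reduction: {(overhead_old - overhead_new) / overhead_old:.0%}")   # ~88%

# Annual savings = difference in total facility power x hours x price
saved_kw = IT_LOAD_MW * 1000 * (PUE_OLD - PUE_NEW)
print(f"Estimated annual savings: ${saved_kw * HOURS_PER_YEAR * RATE_PER_KWH / 1e6:.2f}M")  # ~$13.18M
```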

The Supermicro MicroBlade system represents an entirely new type of computing platform. It is a powerful and flexible extreme-density 3U or 6U all-in-one total system that features 14 or 28 hot-swappable MicroBlade Server blades. The system delivers an 86 percent improvement in power/cooling efficiency with common shared infrastructure, a 56 percent improvement in system density, and a lower initial investment versus 1U servers. The solution packs 280 Intel Xeon processor-based servers per rack and achieves 45 percent to 65 percent CAPEX savings per refresh cycle with a disaggregated rack-scale design.

“With 280 Intel Xeon processor-servers in a 9-foot rack, and up to 86 percent improvement in system cooling efficiency, the MicroBlade system is a game changer,” said Charles Liang, President and CEO of Supermicro. “Leveraging our Silicon Valley based engineering team and global service capabilities, Supermicro collaborated closely with the company’s IT department and delivered a solution from design concept to optimally tuned, high-quality product with full supply chain and large-scale delivery support in five weeks.  With our new MicroBlade and SuperBlade, we have changed the game of blade architecture to make blades the lowest in initial acquisition cost for our customers, not just the best in terms of computation, power efficiency, cable-less design, and TCO.”

The Supermicro MicroBlade disaggregated architecture breaks the interdependence between the major server subsystems, enabling the independent upgrade of CPU+Memory, I/O, Storage, and Power/Cooling. Now each component can be refreshed on time to maximize Moore’s Law improvements in performance and efficiency versus waiting for a single monolithic server refresh cycle.

“A disaggregated server architecture enables the independent upgrades of the compute modules without replacing the rest of the enclosure including networking, storage, fans and power supplies, which refresh at a slower rate,” said Shesha Krishnapura, Intel Fellow and Intel IT CTO. “By disaggregating CPU and memory, each resource can be refreshed independently allowing data centers to reduce refresh cycle costs. When viewed over a three to five year refresh cycle, an Intel Rack Scale Design disaggregated server architecture will deliver, on-average, higher-performing and more-efficient servers at lower costs than traditional rip-and-replace models by allowing data centers to independently optimize adoption of new and improved technologies.”

The MicroBlade provides the perfect building block for a Rack-Scale design data center solution. The networking across all server blades is aggregated into just two ports for uplink through an integrated switch, eliminating the need for Top-of-Rack (ToR) switches and complex cabling. With up to 99% cabling reduction for the MicroBlade system, airflow is significantly improved, which in turn reduces the load on the cooling fans, resulting in even lower OPEX. Up to 86 percent cooling fan power efficiency improvement is achieved by sharing four cooling fans and integrated power modules across all 14 MicroBlade server blades. The MicroBlade enclosure is configured with a Chassis Management Module for unified management and redundant 2,000 watt Titanium Level certified digital power supplies for 96% energy efficiency. Supermicro MicroBlade is shipped with industry standard IPMI 2.0 and Redfish API designed to lower management overhead in large scale data centers. The MicroBlade also supports DP Intel Xeon E5-2600 v4/v3 processors (MBI-6128R-T2/T2X blade server part numbers).

In addition to the MicroBlade, Supermicro is introducing a new SuperBlade architecture with more deployment options. The X10 Generation SuperBlade supports up to 10/14/20 E5-2600 v4 dual-processor nodes per 7U chassis with many of the same features as the MicroBlade system. The new 8U SuperBlade systems use the same Ethernet switches, chassis management modules, and software as the MicroBlade for improved reliability, serviceability, and affordability. The systems are designed to support DP and MP processors up to 205 watts in half-height and full-height blades, respectively. Similarly, the new 4U SuperBlade systems maximize performance and efficiency while enabling up to 140 dual-processor servers or 280 single-processor servers per 42U rack.

About the MicroBlade 3U Enclosure Deployed

  • 30,000+ Intel Xeon Processor based Supermicro MicroBlade Server blades
  • Each 3U MicroBlade enclosure consists of 14 hot-swap server blades
  • 56% better data center space utilization / density compared to previous solution deployed
  • Up to 96.5% cable reduction
  • Up to 2 Ethernet switches with 2x 40Gb/s QSFP or 8x 10Gb/s SFP+ uplinks per enclosure
  • High efficiency shared Titanium Level (96%+) digital power supplies
  • 45% to 65% CAPEX savings due to disaggregated hardware architecture

About the MicroBlade 6U Enclosure

  • Up to 28 hot-swap blade servers (56 UP or 28 Xeon DP nodes)
  • Up to 98% cable reduction
  • Up to 2 GbE switches with 2x 40Gb/s QSFP or 8x 10Gb/s SFP+ uplinks per enclosure
  • Up to 8 (N+1 or N+N redundant) 2000W Titanium certified high-efficiency (96%) digital power supplies

New 8U SuperBlade Enclosure

  • Up to 20 half-height 2-socket blade servers
  • Up to 10 full-height 4-socket blade servers
  • One 100G EDR IB or Omni-Path switch
  • Up to 2 Ethernet switches (1G, 10G) for Ethernet connectivity
  • One Chassis Management Module (CMM)
  • Up to 8x (N+1 or N+N redundant) 2200W Titanium (96%) digital power supplies

New 4U SuperBlade Enclosure

  • Up to 14 half-height 2-socket blade servers
  • Up to 28 single-socket blade server nodes
  • Up to 2 Ethernet (1G, 10G, or 25G) switches
  • Up to 4 (N+1 or N+N redundant) 2200W Titanium (96%) digital power supplies
  • One Chassis Management Module (CMM)

Typical MicroBlade Blade Servers

3U/6U MicroBlade — designed for best advantages over many industry standard architectures with all-in-one total solution, ultra high density, ultra low power consumption, best performance per watt per dollar, high scalability, and best ease of service. The MicroBlade enclosure can incorporate up to 2x Ethernet switches (10G or 1G) and up to 2 Chassis Management Modules. It can incorporate up to 4 or 8 redundant (N+1 or N+N) 2000W Titanium Level high-efficiency (96%+) power supplies with cooling fans.

MBI-6119G-C4/T4 — 28 Intel Xeon processor E3-1200 v5 product family nodes per 6U (up to 196 nodes per 42U) or 14 nodes per 3U with 4x 2.5″ SAS SSD, RAID 0,1,1E,10 or 4x 2.5″ SATA HDD.

MBI-6219G-T — 56 Intel Xeon processor E3-1200 v5 product family nodes per 6U (up to 392 computing nodes per 42U rack) or 28 nodes per 3U with 2x 2.5″ SSD per node.

MBI-6218G-T41X/-T81X — 56 Intel Xeon Processor D-1581/1541 product family nodes per 6U (up to 392 computing nodes per 42U rack) or 28 nodes per 3U with up to 16 cores and integrated 10GbE per node.

MBI-6118G-T41X/-T81X — 28 Intel Xeon Processor D-1541/D-1581 product family nodes per 6U (up to 196 computing nodes per 42U rack) or 14 nodes per 3U with 8 cores and integrated 2x 10GbE.

MBI-6219G-T7LX/-T8HX — 56 Intel Xeon Processor E3-1578L v5/E3-1585 v5 product family nodes per 6U (up to 392 computing nodes per 42U rack) or 28 nodes per 3U with Intel Iris Pro Graphics P580 and integrated 10GbE per node.

MBI-6119G-T41X/-T8HX — 28 Intel Xeon Processor E3-1578L v5/E3-1585 v5  product family nodes per 6U (up to 196 computing nodes per 42U rack) or 14 nodes per 3U with Intel Iris Pro Graphics P580 and integrated 2x 10GbE.

MBI-6128R-T2/-T2X — 28 Intel Xeon Processor E5-2600 v4 product family DP nodes per 6U (up to 196 computing nodes per 42U rack) or 14 nodes per 3U with 1GbE and 10GbE options.

For more on Supermicro MicroBlade solutions visit: https://www.supermicro.com/products/MicroBlade/.

For more information on Supermicro’s complete range of high-performance, high-efficiency Server, Storage and Networking solutions, please visit http://www.supermicro.com/.

About Super Micro Computer, Inc.

Supermicro (SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Source: Supermicro


NVIDIA Introduces New Quadro Products Based on Pascal Architecture

Mon, 02/06/2017 - 07:43

LOS ANGELES, Calif., Feb. 6 — NVIDIA (NASDAQ: NVDA) has introduced a range of Quadro products, all based on its Pascal architecture, that transform desktop workstations into supercomputers with breakthrough capabilities for professional workflows across many industries.

Workflows in design, engineering and other areas are evolving rapidly to meet the exponential growth in data size and complexity that comes with photorealism, virtual reality and deep learning technologies. To tap into these opportunities, the new NVIDIA Quadro Pascal-based lineup provides an enterprise-grade visual computing platform that streamlines design and simulation workflows with up to twice the performance of the previous generation, and ultra-fast memory.

“Professional workflows are now infused with artificial intelligence, virtual reality and photorealism, creating new challenges for our most demanding users,” said Bob Pette, vice president of Professional Visualization at NVIDIA. “Our new Quadro lineup provides the graphics and compute performance required to address these challenges. And, by unifying compute and design, the Quadro GP100 transforms the average desktop workstation with the power of a supercomputer.”

Benefits of Quadro Pascal Visual Computing Platform

The new generation of Quadro Pascal-based GPUs — the GP100, P4000, P2000, P1000, P600 and P400 — enables millions of engineers, designers, researchers and artists to:

  • Unify simulation, HPC, rendering and design – The GP100 combines unprecedented double precision performance with 16GB of high-bandwidth memory (HBM2) so users can conduct simulations during the design process and gather realistic multiphysics simulations faster than ever before. Customers can combine two GP100 GPUs with NVLink technology and scale to 32GB of HBM2 to create a massive visual computing solution on a single workstation.
  • Explore deep learning – The GP100 provides more than 20 TFLOPS of 16-bit floating point precision computing — making it an ideal development platform to enable deep learning in Windows and Linux environments.
  • Incorporate VR into design and simulation workflows – The “VR Ready” Quadro GP100 and P4000 have the power to create detailed, lifelike, immersive environments. Larger, more complex designs can be experienced at scale.
  • Reap the benefits of photorealistic design – Pascal-based Quadro GPUs can render photorealistic images more than 18 times faster than a CPU.
  • Create expansive visual workspaces – Visualize data in high resolution and HDR color on up to four 5K displays.
  • Build massive digital signage configurations cost effectively – Up to 32 4K displays can be configured through a single chassis by combining up to eight P4000 GPUs and two Quadro Sync II cards.

The new cards complete the NVIDIA Quadro Pascal lineup, which also includes the previously announced P6000, P5000 and mobile GPUs. The entire lineup supports the latest NVIDIA CUDA 8 compute platform, providing developers access to powerful new Pascal features in developer tools, performance enhancements and new libraries, including nvGraph.

NVIDIA Quadro at SOLIDWORKS World

The entire NVIDIA Quadro family of desktop GPUs will be on display at SOLIDWORKS World, starting today at the Los Angeles Convention Center, NVIDIA booth 628. NVIDIA Quadro will be powering the most demanding CAD workflows from physically based rendering to virtual reality. Visit our partners’ booths to try the latest mobile workstations powered by our new Quadro mobile GPUs.

Availability

The new NVIDIA Quadro products will be available starting in March from leading workstation OEMs, including Dell, HP, Lenovo and Fujitsu, and authorized distribution partners, including PNY Technologies in North America and Europe, ELSA/Ryoyo in Japan and Leadtek in Asia Pacific.

About NVIDIA

NVIDIA‘s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.” More information at http://nvidianews.nvidia.com/.

Source: NVIDIA


Mellanox Ships More Than 100,000 Cables for Next Generation 100Gb/s Networks

Mon, 02/06/2017 - 07:06

SUNNYVALE, Calif. & YOKNEAM, Israel, Feb. 6 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that it has shipped more than 100,000 units of its Direct Attach Copper Cables (DACs) to serve the growing demand of hyperscale Web 2.0 and cloud 100Gb/s networks.

“Hyperscale customers are selecting Mellanox cables due to our advanced manufacturing automation technologies which enable us to achieve higher quality, lower costs and to deliver in high volume,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “Copper cables are the most cost effective way to connect new 25G and 50G servers to TOR switches as they enable the entire new generation of 100Gb/s networks.”

Mellanox offers a full line of 10, 25, 40, 50 and 100Gb/s copper cabling for server and storage interconnect. The two most popular options are splitter cables, which feature a 100Gb/s connector at one end for plugging into a switch port and either two 50Gb/s connectors or four 25Gb/s connectors at the other end for connecting to 25G or 50G servers. Widely used by hyperscale customers to connect servers to the top-of-rack (TOR) switch, DACs have lower cost and zero power consumption when compared to optical cables and transceivers. The superior performance and low 1E-15 bit error rate (BER) eliminate the need for FEC, which would add latency to the critical server-TOR link.
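
To put the 1E-15 figure in perspective, a link running flat out at 100 Gb/s carries about 10^11 bits per second, so at that error rate a single bit error is expected only once every few hours. A quick sanity check (the full-load assumption is ours):

```python
# Quick sanity check on what a 1e-15 bit error rate means for a saturated 100 Gb/s link.
LINE_RATE_BPS = 100e9        # 100 Gb/s line rate
BER = 1e-15                  # stated bit error rate

seconds_per_error = 1.0 / (LINE_RATE_BPS * BER)   # expected time between bit errors
print(f"Roughly one bit error every {seconds_per_error / 3600:.1f} hours at full load")
# ~2.8 hours between expected single-bit errors, which is why FEC can be skipped here.
```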

Mellanox has ramped up into high volume production for 100Gb/s products faster than its competitors because of in-house design expertise for key interconnect components. Mellanox designs its own copper cables, as well as the drivers, TIAs, silicon photonics chips, packaging, and modules for optical products. All qualification and testing is performed by Mellanox. Deploying advanced manufacturing automation technologies helps to achieve higher quality and lower costs. This end-to-end control of the supply chain and manufacturing lines ensures Mellanox can scale to meet market demand.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox Technologies


JuliaPro Added to Windows Data Science Virtual Machine

Mon, 02/06/2017 - 06:40

REDMOND, Wash., Feb. 6 — Microsoft has added JuliaPro to Windows Data Science Virtual Machine (DSVM), making it available on Microsoft Azure. Julia is now available for the first time on the two largest cloud environments, following the December 2016 launch of Julia on Amazon Web Services.

According to Viral Shah, CEO of Julia Computing, “We are thrilled to partner with Microsoft to make JuliaPro available to Microsoft Azure users via Windows Data Science Virtual Machine (DSVM). Now Julia users in finance, engineering, manufacturing, biomedical research and other areas of data science and scientific computing can access JuliaPro in both of the top two public cloud computing environments: Amazon Web Services and Microsoft Azure.”

This latest version of JuliaPro launched in December 2016 and includes the Julia Compiler, Debugger, Profiler, Juno Integrated Development Environment, more than 100 curated packages, data visualization and plotting. Integration with Excel, customer support and indemnity are available with JuliaPro Enterprise and JuliaFin. JuliaFin also includes Bloomberg integration, advanced time series analytics and Miletus, a custom Julia package for developing and executing complex trading strategies.

About Julia Computing and Julia

Julia Computing was founded in 2015 by the co-creators of the Julia language to provide support to businesses and researchers who use Julia.

Julia is the fastest modern high performance open source computing language for data and analytics. It combines the functionality and ease of use of R, Python, Matlab, SAS and Stata with the speed of Java and C++. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity.

  1. Julia is lightning fast.  Julia provides speed improvements up to 1,000x for insurance model estimation, 225x for parallel supercomputing image analysis and 11x for macroeconomic modeling.
  2. Julia is easy to learn.  Julia’s flexible syntax is familiar and comfortable for users of Python and R.
  3. Julia integrates well with existing code and platforms.  Users of Python, R and other languages can easily integrate their existing code into Julia.
  4. Elegant code.  Julia was built from the ground up for mathematical, scientific and statistical computing, and has advanced libraries that make coding simple and fast, and dramatically reduce the number of lines of code required – in some cases, by 90% or more.
  5. Julia solves the two language problem.  Because Julia combines the ease of use and familiar syntax of Python and R with the speed of C, C++ or Java, programmers no longer need to estimate models in one language and reproduce them in a faster production language.  This saves time and reduces error and cost.

Source: Julia Computing


ANSYS Adds Cycle Orchestration for Enterprise Cloud HPC

Sat, 02/04/2017 - 15:47

The waiting is the hardest part.

When design engineers need to run complex simulations, too often they find that the HPC resources required for those workloads are already being used. The problem: most on-prem data centers are provisioned for steady-state, not high-demand, needs. When demand increases and HPC resources aren’t available, the engineer puts in a request with the job scheduler. Here, to paraphrase the opening of “Casablanca,” the fortunate ones, through money, or influence, or luck, might obtain access to HPC resources. But the others wait in scheduling limbo. And wait, and wait, and wait….

Now ANSYS, the popular CAE software vendor whose users increasingly turn to HPC for complex simulation workloads, has partnered with Cycle Computing and its CycleCloud software to leverage dynamic cloud capacity and auto-scaling. CycleCloud will provide HPC orchestration for ANSYS’s Enterprise Cloud HPC offering, an engineering simulation platform delivered on Amazon Web Services. CycleCloud enables cloud migration of CAE workloads requiring HPC, including storage and data management and access to resources for interactive and batch execution that scales on demand.

According to ANSYS, more customers are turning to the cloud as the locale for the full simulation and design life cycle.

“We have periods when we have need for many more cores than our data centers can manage,” Judd Kaiser, ANSYS cloud computing program manager, said. “Or we’re moving to increasingly variable workloads and we’re looking to cloud now as a possible solution. On the other end, we have customers who are growing into HPC, who’d like to take advantage of HPC, but building a data center isn’t their core business, so they want to know how they can use cloud to their advantage.”

Cycle addresses both needs, he said.

“We didn’t have much experience in provisioning cloud resources and managing HPC on cloud infrastructures, and that’s what Cycle brought to the table,” he said. “ANSYS Enterprise Cloud is intended to be a virtual simulation data center; it just happens to be backed by public cloud hardware. It means we can provision for a customer and they can have it up and running next week, serving the needs of dozens of engineers running very large workloads. If that same customer asked us for a recommendation of what we need for a data center, from specs for the system, to ordering the hardware, to rack and stack and installing software and rolling it out to the engineers, that typically takes many months.”

CycleCloud is intended to ensure optimal AWS Spot instance usage and that appropriate resources are used for the right amount of time in the ANSYS Enterprise Cloud. With CycleCloud handling auto-scaling, he explained, “the engineer submits a job to the cluster…and the cluster scales up to meet the demands of the job. So resources are provisioned specifically to serve the needs of that individual job, the job runs almost immediately, and then when it’s complete those resources are decommissioned.”
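
The scale-up, run, decommission behavior described here follows a common orchestration pattern: watch the scheduler queue, provision enough nodes to cover outstanding demand, then release nodes once the work drains. The loop below is a generic, heavily simplified illustration of that pattern, not CycleCloud's actual API; provision_nodes, release_nodes and the 16-core node size are hypothetical placeholders.

```python
# Generic autoscaling loop illustrating the provision-on-demand pattern described
# above. This is NOT the CycleCloud API; provision_nodes/release_nodes are
# hypothetical placeholders for whatever calls the orchestrator actually exposes.

def pending_core_demand(queue):
    """Cores requested by jobs still waiting in the scheduler queue."""
    return sum(job["cores"] for job in queue if job["state"] == "pending")

def provision_nodes(n):
    print(f"provisioning {n} node(s)")
    return [f"node-{i}" for i in range(n)]

def release_nodes(nodes):
    print(f"releasing {len(nodes)} node(s)")

def autoscale(queue, running_nodes, cores_per_node=16):
    demand = pending_core_demand(queue)
    capacity = len(running_nodes) * cores_per_node
    if demand > capacity:
        needed = -(-(demand - capacity) // cores_per_node)   # ceiling division
        running_nodes += provision_nodes(needed)              # scale up for the job
    elif demand == 0 and running_nodes:
        release_nodes(running_nodes)                          # decommission when idle
        running_nodes.clear()
    return running_nodes

if __name__ == "__main__":
    queue = [{"state": "pending", "cores": 64}, {"state": "pending", "cores": 8}]
    nodes = autoscale(queue, [])      # 72 cores pending -> provisions 5 nodes
    for job in queue:
        job["state"] = "done"         # pretend the jobs completed
    nodes = autoscale(queue, nodes)   # queue drained -> nodes are released
```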

Kaiser said there is already some misunderstanding that the combined ANSYS-Cycle offering targets burst-to-the-cloud demand situations.

“It’s more than that,” he said. “People imagine burst capabilities…, it sounds great. They think: ‘I have an on-prem job, I’ll submit it to the cloud and when it’s done I’ll bring it back.’ But therein lies the problem: Bringing it back.”

Not only do ANSYS jobs use a significant amount of compute resources, he said, but once that job is complete the resulting data set can be extremely large. “So if the idea is to bring that data set back on prem and finish the simulation process there…, for most of our software that’s done interactively. You get the data, you load it onto a graphical workstation, you slice and dice it…and extract the useful information. That last part is graphical in nature. So if your vision is to launch to the cloud for the HPC and then bring the results back, you’ve got a data transfer problem. Our results files are routinely huge.”

The answer, he said, is to conduct the entire simulation process in the cloud. “Without moving the data after it’s computed, you spin up a graphical workstation in the cloud and do your post processing with the data in place, still in the cloud. You’re using some sort of thin client locally to interact with the software, but it’s all physically running in the cloud.”

The post ANSYS Adds Cycle Orchestration for Enterprise Cloud HPC appeared first on HPCwire.

SDSC’s Comet Supercomputer Surpasses 10,000 Users Milestone

Fri, 02/03/2017 - 07:39

Feb. 3 — Comet, the petascale supercomputer at the San Diego Supercomputer Center (SDSC), an Organized Research Unit of UC San Diego, has easily surpassed its target of serving at least 10,000 researchers across a diverse range of science disciplines, from astrophysics to redrawing the “tree of life”.

In fact, about 15,000 users have used Comet to run science gateway jobs alone since the system went into production less than two years ago. A science gateway is a community-developed set of tools, applications, and data services and collections that are integrated through a web-based portal or suite of applications. Another 2,600 users have accessed the high-performance computing (HPC) resource via traditional runs. The target was established by SDSC as part of its cooperative agreement with the National Science Foundation (NSF), which awarded funding for Comet in late 2013.

“Comet was designed to meet the needs of what is often referred to as the ‘long tail’ of science – the idea that the large number of modest-sized computationally-based research projects represent, in aggregate, a tremendous amount of research that can yield scientific advances and discovery,” said SDSC Director Michael Norman, principal investigator for the Comet project.

Comet, which went into operation in mid-2015, has been one of the most widely used supercomputers in the NSF’s XSEDE (Extreme Science and Engineering Discovery Environment) program, which provides researchers with an advanced collection of integrated digital resources and services.

SDSC will hold a webinar on February 15 to provide a detailed overview of Comet’s capabilities and system upgrades. First-time and current users are invited to attend. More information about the webinar can be found here.

“Now in its second year of serving the national research community, Comet is exceeding our expectations and we encourage new users to learn more about how Comet can support their research,” said Norman. “Feedback from our current user base – both anecdotally and through their expressed use on the system, as well as examining the data we’ve been collecting – underscores a strong need for systems such as Comet that serve what we call the ‘99 percent’ of the research community.”

In addition to Comet’s design, its allocation and operational policies are geared toward rapid access, quick turnaround, and an overall focus on scientific productivity. Comet also features large memory nodes, GPUs and local flash, which taken together, provide a highly usable and flexible computing environment for a wide range of domains.

Providing ‘Science Gateways’ for Researchers

Surpassing the 10,000-user milestone in less than two years of operations is due in large part to researchers accessing Comet via science gateways, which provide scientists with access to many of the tools used in cutting-edge research – telescopes, seismic shake tables, supercomputers, sky surveys, undersea sensors, and more – and connect often diverse resources in easily accessible ways that save researchers and institutions time and money.

Science gateways make it possible to run the available applications on supercomputers such as Comet so results come quickly, even with large data sets. Moreover, browser access offered by gateways allows researchers to focus on their scientific problem without having to learn the details of how supercomputers work and how to access and organize the data needed.
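
As a rough illustration of what sits behind that browser convenience, the sketch below shows one way a gateway backend might turn a user's web request into a batch job on a system like Comet. The function, partition name, and example command are hypothetical; production gateways such as CIPRES add authentication, allocation accounting, and data staging on top of a step like this.

    # Hypothetical gateway backend step: write a Slurm batch script from the
    # user's form inputs and submit it with sbatch on their behalf.
    import subprocess
    import tempfile

    def submit_gateway_job(app_cmd, cores=4, hours=2, partition="shared"):
        script = (
            "#!/bin/bash\n"
            f"#SBATCH --partition={partition}\n"
            f"#SBATCH --ntasks={cores}\n"
            f"#SBATCH --time={hours}:00:00\n"
            "#SBATCH --job-name=gateway-job\n"
            f"{app_cmd}\n"
        )
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(script)
            path = f.name
        # sbatch prints "Submitted batch job <id>" on success
        result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # e.g. submit_gateway_job("raxmlHPC -s aln.phy -n run1 -m GTRGAMMA -p 12345")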

In mid-2016, a collaborative team led by SDSC was awarded a five-year $15 million NSF grant to establish a Science Gateways Community Institute to accelerate the development and application of highly functional, sustainable science gateways that address the needs of researchers across the full spectrum of NSF directorates. The award was part of a larger NSF announcement in which the agency committed $35 million to create two Scientific Software Innovation Institutes (S2I2) that will serve as long-term hubs for scientific software development, maintenance and education.

“It’s possible to support gateways across many disciplines because of the variety of hardware and support for complex, customized software environments on Comet,” said Nancy Wilkins-Diehr, an associate director of SDSC and co-director of XSEDE’s Extended Collaborative Support Services. “This is a great benefit to researchers who value the ease of use of high-end resources via such gateways.”

One of the most popular science gateways across the entire XSEDE resource portfolio is the CIPRES science gateway, created as a portal under the NSF-funded Cyberinfrastructure for Phylogenetic Research (CIPRES) project in late 2009. The gateway is used by scientists to explore evolutionary relationships by comparing DNA sequence information between species.

In 2013, SDSC received a $1.5 million NSF award to extend the project to make supercomputer access simpler and more flexible for phylogenetics researchers. Typically, about 200 CIPRES jobs are running simultaneously on Comet.

“The scheduling policy on Comet allows us to make big gains in efficiency because we can use anywhere between one and 24 cores on each node,” said Mark Miller, a bioinformatics researcher with SDSC and principal investigator of the CIPRES gateway. “When you are running 200 small jobs 24/7, those savings really add up in a hurry.”
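
A quick, illustrative calculation (assumed numbers, not CIPRES accounting) shows why packing small jobs onto shared nodes adds up:

    # 200 single-core jobs on 24-core nodes: exclusive vs. shared scheduling.
    import math

    jobs, cores_per_job, cores_per_node = 200, 1, 24
    nodes_exclusive = jobs                                            # one whole node per job
    nodes_shared = math.ceil(jobs * cores_per_job / cores_per_node)   # jobs packed onto shared nodes
    print(nodes_exclusive, nodes_shared)   # 200 nodes vs. 9 nodes for the same workload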

To date, the CIPRES science gateway has supported more than 20,000 users conducting phylogenetic studies involving species in every branch of the “tree of life”. The gateway is used by researchers on six continents, and their results have appeared in more than 3,000 scientific publications since 2010, including Cell, Nature, and PNAS.

I-TASSER: A New Protein Structure Gateway Available via Comet

In late 2016, a new science gateway called I-TASSER (Iterative Threading ASSEmbly Refinement), developed by researchers in the Zhang Lab at the University of Michigan’s Medical School, began accepting users. I-TASSER is a hierarchical approach to protein structure and function prediction. Structural templates are first identified from the Protein Data Bank using LOMETS (Local Meta-Threading-Server), an online web service for protein structure prediction. Full-length atomic models are then constructed by iterative template-fragment assembly simulations. Finally, insights into the target’s function are derived by threading the 3D models through BioLiP, a protein function database.
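
Read as a pipeline, the three stages look like the Python skeleton below. The function names are placeholder stubs for illustration only; the real I-TASSER server implements each stage with its own codes and databases.

    # Schematic of the I-TASSER stages described above (placeholder stubs).
    def thread_with_lomets(sequence):
        """Stage 1: identify structural templates from the PDB via LOMETS (stub)."""
        return ["template_1", "template_2"]

    def assemble_fragments(sequence, templates):
        """Stage 2: build full-length models by iterative fragment assembly (stub)."""
        return [f"model_from_{t}" for t in templates]

    def match_against_biolip(models):
        """Stage 3: derive function insights by threading models through BioLiP (stub)."""
        return {m: "putative_function" for m in models}

    def predict_structure_and_function(sequence):
        templates = thread_with_lomets(sequence)
        models = assemble_fragments(sequence, templates)
        return models, match_against_biolip(models)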

Since October 2016, I-TASSER has been accessed via Comet – the only resource within the XSEDE portfolio to do so – by more than 8,000 unique users, according to Yang Zhang, a U-M professor of computational medicine and bioinformatics as well as biological chemistry, and I-TASSER’s principal investigator. In total, I-TASSER currently has more than 76,000 registered users from 130 countries.

“With the increasing requests from the community for protein structure and function modeling, one of the major bottlenecks of the I-TASSER Server has been the limit in supporting computer resources of our laboratory that was originally funded by the Department of Computational Medicine and Bioinformatics at the University of Michigan,” said Zhang. “The generous grant of computing resources from XSEDE is very helpful in improving the capacity of the I-TASSER system to serve the broader biomedical community by providing faster and higher quality simulations of protein models.”

Including I-TASSER, 29 science gateways are available via XSEDE’s resources, each one designed to address the computational needs of a particular community such as computational chemistry, phylogenetics, and the neurosciences. SDSC alone has delivered 77 percent of all gateway cycles since the start of the XSEDE project in 2011.

About SDSC 

As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and cyberinfrastructure, providing resources, services, and expertise to the national research community, including industry and academia. Cyberinfrastructure refers to an accessible, integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. SDSC’s Comet joins the Center’s data-intensive Gordon cluster; both are part of the National Science Foundation’s XSEDE (Extreme Science and Engineering Discovery Environment) program.

Source: SDSC

The post SDSC’s Comet Supercomputer Surpasses 10,000 Users Milestone appeared first on HPCwire.

Allen Malony Receives Fulbright Distinguished Chair Award

Fri, 02/03/2017 - 07:26

Feb. 3 — Winning a Fulbright award is nothing new for Allen Malony — a professor in the UO’s Department of Computer and Information Science, who now has three such awards — but his latest achievement, a Fulbright-Tocqueville Distinguished Chair, is the top honor handed out by the Fulbright organization.

As a 2016-17 Distinguished Chair, Malony will teach courses, do research and participate in conferences and seminars related to his expertise in high-performance computing while serving as a visiting professor at the University of Versailles Saint-Quentin-en-Yvelines in Versailles, France.

“The emphasis on both teaching and research encompasses the full spirit of the Fulbright Distinguished Chair Award, and I am very honored to receive it,” Malony said.

Malony’s teaching has largely been focused on parallel computing theory and practice. Parallel computers utilize multiple processors to execute parts of programs at the same time, making it possible for applications to run faster. The world’s most powerful computer systems today, so-called supercomputers, rely on parallel computing.
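
For readers new to the idea, here is a toy Python illustration of the principle; the workload is an arbitrary stand-in and has nothing to do with Malony's research tools.

    # Split independent chunks of work across processes so they execute at the same time.
    from multiprocessing import Pool

    def simulate_chunk(seed):
        total = 0
        for i in range(1_000_000):     # stand-in for a compute-heavy piece of a simulation
            total += (i * seed) % 7
        return total

    if __name__ == "__main__":
        with Pool(processes=4) as pool:              # four workers run chunks concurrently
            results = pool.map(simulate_chunk, range(8))
        print(sum(results))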

“At the heart of my academic and research work is the sincere belief that high-performance computing matters, for science and society,” Malony said. “I enjoy the research interactions that I have with other people and enjoy thinking about the role of high-performance computing in our world and in the future and what it can mean for our ability to solve scientific, social and engineering problems that we have — for the betterment of humanity and our lives.”

While in France, Malony hopes to inspire students and colleagues with his vision of high-performance computing and its potential for next-generation discoveries. He will teach a course on parallel performance engineering methods and conduct research seminars for graduate students.

In addition to having recently launched a new high-performance computing center, Malony and his UO research group work on projects funded by the Department of Energy and the National Science Foundation to develop parallel performance measurement and analysis tools. Malony also directs the UO’s NeuroInformatics Center, which develops advanced integrated neuroimaging tools for next-generation brain analysis.

Malony sees applications for high-performance computing in disciplines ranging from molecular biology to astrophysics to genome sequencing. His interdisciplinary work at the UO includes modeling of the electromagnetics of the human head with the UO Neuroinformatics Center, examining the dynamics of polymers with UO chemistry professor Marina Guenza, computing the paths of sound waves in marine seismic tomography with UO geological science professor Doug Toomey and simulating the mountain pine beetle epidemic with UO geography professor Chris Bone.

“Almost any area of research has opportunities for using parallel computing systems,” Malony said.

Malony came to the UO in 1991 after serving as a senior software engineer at the University of Illinois Center for Supercomputing Research and Development. He spent a year as a Fulbright Research Scholar and visiting professor at Utrecht University in the Netherlands and was awarded the NSF National Young Investigator award in 1994.

In 1999 he was a Fulbright Research Scholar to Austria at the University of Vienna. In 2002, he was awarded the Alexander von Humboldt Research Award for Senior U.S. Scientists.

The Fulbright program was established in 1946 to increase mutual understanding between the U.S. and other countries through the exchange of students and scholars. The Fulbright-Tocqueville Distinguished Chair was created by the Franco-American Fulbright Commission, in partnership with the French Ministry of Higher Education and Research and the U.S. Department of State. It commemorates Alexis de Tocqueville’s 200th birthday and Senator J. William Fulbright’s 100th birthday.

Source: University of Oregon

The post Allen Malony Receives Fulbright Distinguished Chair Award appeared first on HPCwire.

SC17 Workshop Proposals Due Feb. 7

Thu, 02/02/2017 - 15:27

Feb. 2 — The SC17 conference is accepting proposals for independently planned full- or half-day workshops. The deadline for submitting a proposal is Tuesday, February 7. SC17 will be held Nov. 12-17 in Denver, Colorado.

SC17 will include about 30 full-day and half-day workshops that complement the overall Technical Program events, expand the knowledge base of their subject areas, and extend their impact by providing greater depth of focus. Workshop proposals will be academically peer-reviewed, with a focus on submissions that will inspire deep and interactive dialogue on topics of interest to the HPC community.

The SC conference series is dedicated to promoting equality and diversity and recognizes the role this plays in ensuring the conference’s success. We welcome submissions from all sectors of society. SC17 is committed to providing an inclusive conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion.

This year’s dates for workshop submissions and notifications are earlier than usual to accommodate the expanded focus on peer review.

Source: SC17

The post SC17 Workshop Proposals Due Feb. 7 appeared first on HPCwire.

Weekly Twitter Roundup (Feb. 2, 2017)

Thu, 02/02/2017 - 14:35

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below.

Cheers to student cluster competition Team South Africa, visiting today @SCCompSC @ISChpc @Dell pic.twitter.com/ZxANFzJwtT

— TACC (@TACC) January 31, 2017

Congrats to @SCSatCMU #AI #Libratus for winning the #BrainsVsAI tourney! Can't wait to hear what applications are next for Libratus! pic.twitter.com/f3G0xZuPD8

— PSC (@psc_live) January 31, 2017

Lee Carter from @BrightComputing and Dr Ben Bennett from @sgi_corp explaining how to maximise your investment in #hpc at #UKHPC today pic.twitter.com/76Gujpwf1S

— Bright – EMEA (@BrightEMEA) February 1, 2017

Two hours video with main focus on #BurstBuffer training for the KAUST early access users https://t.co/Uio8Of114b #HPC @KAUST_HPC @cray_inc

— George Markomanolis (@geomark) January 31, 2017


Proud to be HPCWire's "people to watch" for the second time. interview was also enjoyable revealing the true "me"! https://t.co/wKdzN1hBmS

— Satoshi Matsuoka (@ProfMatsuoka) January 30, 2017

Participants in @matheimadvent competition had a chance to meet "Konni," a model of the HLRN supercomputer "Konrad" https://t.co/IuZ3STHS4z pic.twitter.com/idqMTLNJPi

— Cray Inc. (@cray_inc) January 31, 2017

#intelai day in Munich with a lot of inspiring presentations on another technology building on a #HPC foundation. @HPCatDell #GoBigWinBig pic.twitter.com/09Te29fAFX

— Martin Hilgeman (@MartinHilgeman) February 1, 2017

.@hpcprogrammer @SCInclusivity IMHO given state of play in US the most inclusive option for @Supercomputing now is move to Canda ASAP.. #HPC

— Chris Samuel (@chris_bloke) January 29, 2017

TSUBAME3 ended up to be exactly my initial design, kudos to HPE/SGI who won the contract. Exciting announcements later incl. design details.

— Satoshi Matsuoka (@ProfMatsuoka) January 30, 2017

Liberatus runs on @psc_live's supercomputer to crush world-class poker pros. Our #HPC tech is helping #AI go all in. https://t.co/qTl8KLgJLB pic.twitter.com/XJRCKZHkYC

— HPE (@HPE) February 1, 2017

Ahh, the log-log graph, friend of the performance blagger, the speedup merchant, the hiders of truth, for many a year #hpc #performancecrime pic.twitter.com/xPGQKXEUBK

— Adrian Jackson (@adrianjhpc) January 31, 2017

Great PRACE 5-IP Kick off meeting in Athens! Thanks for hosting us @grnet_gr #HPC #eInfrastructures #H2020 pic.twitter.com/f0Fy33Hjuv

— PRACE (@PRACE_RI) February 2, 2017

Click here to view the top tweets from last week.

The post Weekly Twitter Roundup (Feb. 2, 2017) appeared first on HPCwire.

Quantum: ORNL Sets Data Density Transmission Record

Thu, 02/02/2017 - 10:36

Using a technique called superdense coding, researchers from Oak Ridge National Laboratory have set a new record for data density transmission – 1.67 bits per qubit, or quantum bit – over a fiber optic cable. Notably, they used relatively non-exotic components, suggesting the technique may be moving closer, albeit slowly, toward practical use.

Brian Williams, ORNL

A report on the work (Superdense coding over optical fiber links with complete Bell-state measurements) by ORNL researchers Brian Williams, Ronald Sadlier and Travis Humble was published yesterday in Physical Review Letters. The research was selected as an “Editor’s Suggestion,” a distinction reserved for approximately one in six PRL papers.

Quantum behavior offers many tantalizing prospects for computing and communications. Whereas classical computers transmit information in the form of bits (usually a 1 or 0), qubits can employ two states simultaneously (superposition) and represent more information than a traditional bit. The physics of this quantum communication task employed by Williams and his team is similar to that used by quantum computers, which use qubits to arrive at solutions to extremely complex problems faster than their bit-laden counterparts.

(Left) The original four-color 100 × 136 pixel 3.4 kB image. (Right) The image received using superdense coding. The calculated fidelity was 87%.

A brief article on the work is posted on the ORNL web site and a synopsis is on the APS Physics website. As a demonstration of the technique’s effectiveness, the team transmitted the ORNL logo, an oak leaf, between two end points in the laboratory.

A significant part of the challenge in superdense coding such as that used by the ORNL team is the need to perform a complete Bell-state measurement (BSM) on the photon pair, which is not possible using only linear optics and a single degree of shared entanglement. Non-linear optics can be used for a successful BSM but has proven inefficient and complicated to implement.

In the paper, the researchers write, “Our novel interferometric design allows ‘off-the-shelf’ single-photon detectors to enable the complete Bell-state discrimination instead of the number-resolving detectors required by previous experiments. To our knowledge, this is the first demonstration of superdense coding over an optical fiber and a step towards the practical realization of superdense coding. Alongside our demonstration of a hybrid quantum-classical transfer protocol, these results represent a step toward the future integration of quantum communication with fiber-based networks.” See figure from the paper below.

Quantum communication and computing are indeed fascinating but also puzzling for most of us. It’s best to read the original paper. That said, here’s an excerpt from the APS synopsis (by Michael Schirber) describing the ORNL work:

“Suppose Alice wants to send a two-bit message to Bob. She could send two photons with the message encoded in their polarizations. Or, using superdense coding, she could send one polarized photon qubit whose polarization state encodes both bits. The latter option requires that the two parties initially share a pair of photons with entangled polarization. Alice performs one of four operations on her photon and then sends it to Bob, who combines it with his photon to measure which operation Alice performed.

“If Bob simply measures polarization, then he won’t recover the full message. One solution is to entangle the photons in some additional degree of freedom, such as orbital angular momentum. But so far, these hyperentangled states have been unable to survive transmission through optical fibers. Williams and his colleagues have devised a superdense coding system that is fiber compliant. In this case, Alice and Bob’s photons pass through an interferometer whose arms incorporate time delays that entangle the arrival times of the photons at the detectors. Using polarization and arrival-time measurements, Bob can recover Alice’s message at a density of 1.67 bits per qubit. This is not yet the maximum density of 2, but it sets a new record for a system using single photons and linear optics.”
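
The encoding step described in the synopsis can be checked numerically in a few lines. The snippet below is a generic textbook illustration of superdense coding with abstract qubits, not a model of the ORNL experiment's time-bin interferometer.

    # Applying one of four Pauli operations to Alice's half of a shared Bell pair
    # yields four mutually orthogonal two-qubit states, so a complete Bell-state
    # measurement recovers two classical bits from the one qubit Alice sends.
    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    phi_plus = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
    encoded = [np.kron(U, I) @ phi_plus for U in (I, X, Z, X @ Z)]   # Alice's four messages

    overlaps = np.abs([[a @ b for b in encoded] for a in encoded])
    print(np.round(overlaps, 3))   # identity matrix: the four states are fully distinguishable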

There are still many challenges. For example, the ORNL logo (see figure) was only transmitted with ~87 percent fidelity. “[E]rrors in the received image result from drift in the interferometer during transmission, phase miscalibration, and imperfect state generation.”

Link to paper: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.050501

Link to ORNL article: https://www.ornl.gov/news/ornl-researchers-break-data-transfer-efficiency-record

Link to APS Physics synopsis: http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.118.050501

The post Quantum: ORNL Sets Data Density Transmission Record appeared first on HPCwire.

NCSES Publishes Latest Women, Minorities, and Persons With Disabilities in Science and Engineering Report

Thu, 02/02/2017 - 08:33

Feb. 2 — The National Center for Science and Engineering Statistics (NCSES), a division of the National Science Foundation (NSF), has released the 2017 Women, Minorities, and Persons with Disabilities in Science and Engineering (WMPD) report, the federal government’s most comprehensive look at the participation of these three demographic groups in science and engineering education and employment.

The report shows the degree to which women, people with disabilities and minorities from three racial and ethnic groups – black, Hispanic and American Indian or Alaska Native – are underrepresented in science and engineering (S&E). Women have reached parity with men in educational attainment but not in S&E employment. Underrepresented minorities account for disproportionately smaller percentages in both S&E education and employment.

Congress mandated the biennial report in the Science and Engineering Equal Opportunities Act as part of the National Science Foundation’s (NSF) mission to encourage and strengthen the participation of underrepresented groups in S&E.

“An important part of fulfilling our mission to further the progress of science is producing current, accurate information about the U.S. STEM workforce,” said NSF Director France Córdova. “This report is a valuable resource to the science and engineering policy community.”

NSF maintains a portfolio of programs aimed at broadening participation in S&E, including ADVANCE: Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers; LSAMP: the Louis Stokes Alliances for Minority Participation; and NSF INCLUDES, which focuses on building networks that can scale up proven approaches to broadening participation.

The digest provides highlights and analysis in five topic areas: enrollment, field of degree, occupation, employment status and early career doctorate holders. That last topic area includes analysis of pilot study data from the Early Career Doctorates Survey, a new NCSES product. NCSES also maintains expansive WMPD data tables, updated periodically as new data become available, which present the latest S&E education and workforce data available from NCSES and other agencies. The tables provide the public access to detailed, field-by-field information that includes both percentages and the actual numbers of people involved in S&E.

“WMPD is more than just a single report or presentation,” said NCSES Director John Gawalt. “It is a vast and unique information resource, carefully curated and maintained, that allows anyone (from the general public to highly trained researchers) ready access to data that facilitate and support their own exploration and analyses.”

Key findings from the new digest include:

  • The types of schools where students enroll vary among racial and ethnic groups. Hispanics, American Indians or Alaska Natives and Native Hawaiians or Other Pacific Islanders are more likely to enroll in community colleges. Blacks and Native Hawaiians or Other Pacific Islanders are more likely to enroll in private, for-profit schools.
  • Since the late 1990s, women have earned about half of S&E bachelor’s degrees. But their representation varies widely by field, ranging from 70 percent in psychology to 18 percent in computer sciences.
  • At every level – bachelor’s, master’s and doctorate – underrepresented minority women earn a higher proportion of degrees than their male counterparts. White women, in contrast, earn a smaller proportion of degrees than their male counterparts.
  • Despite two decades of progress, a wide gap in educational attainment remains between underrepresented minorities and whites and Asians, two groups that have higher representation in S&E education than they do in the U.S. population.
  • White men constitute about one-third of the overall U.S. population; they comprise half of the S&E workforce. Blacks, Hispanics and people with disabilities are underrepresented in the S&E workforce.
  • Women’s participation in the workforce varies greatly by field of occupation.
  • In 2015, scientists and engineers had a lower unemployment rate compared to the general U.S. population (3.3 percent versus 5.8 percent), although the rate varied among groups. For example, it was 2.8 percent among white women in S&E but 6.0 percent for underrepresented minority women.

For more information, including access to the digest and data tables, see the updated WMPD website at https://nsf.gov/statistics/wmpd.

Source: NSF

The post NCSES Publishes Latest Women, Minorities, and Persons With Disabilities in Science and Engineering Report appeared first on HPCwire.

Cycle Computing Collaborates With ANSYS on Enterprise Cloud HPC Offering

Thu, 02/02/2017 - 07:19

NEW YORK, N.Y., Feb. 2 — Cycle Computing, the global leader in Big Compute and Cloud HPC orchestration, today announced that ANSYS has officially chosen its CycleCloud product to spearhead the orchestration and management behind the ANSYS Enterprise Cloud. ANSYS is the global leader in engineering simulation bringing clarity and insight to its customers’ most complex design challenges.

Many ANSYS customers require simulation workloads to be migrated to the cloud, as customers look to leverage dynamic cloud capacity to accelerate time to result, shorten product development cycles and reduce costs. ANSYS Enterprise Cloud, an enterprise-level engineering simulation platform delivered on the Amazon Web Services (AWS) global platform using the CycleCloud software platform, enables this migration, including secure storage and data management, along with access to resources for interactive and batch execution that scales on demand within a virtual private cloud (VPC) for enterprise simulation.

“Our collaboration with Cycle Computing enables the ANSYS Enterprise Cloud to meet the elastic capacity and security requirements of enterprise customers,” said Ray Milhem, vice president, Enterprise Solutions and Cloud, ANSYS. “CycleCloud has run some of the largest Cloud Big Compute and Cloud HPC projects in the world, and we are excited to bring their associated, proven software capability to our global customers with the ANSYS Enterprise Cloud.”

Cycle Computing’s CycleCloud will optimize the ANSYS Enterprise Cloud by orchestrating cloud HPC clusters running ANSYS software applications, ensuring optimal AWS Spot instance usage and that appropriate resources are used for the right amount of time in the ANSYS Enterprise Cloud.

“Our CycleCloud software brings over one hundred and sixty engineering years of development, and of course, the history of managing Cloud Big Compute and Cloud HPC environments for some of the world’s most innovative organizations,” said Jason Stowe, CEO, Cycle Computing. “We are excited to be chosen by ANSYS to orchestrate secure Big Compute cloud infrastructure, optimize costs, and help enable disruptive time to result for its Enterprise Cloud clients.”

More information about the CycleCloud cloud management software suite can be found at www.cyclecomputing.com.

About Cycle Computing

Cycle Computing is the leader in Big Compute software to manage simulation, analytics, and Big Data workloads. Cycle turns the Cloud into an innovation engine for your organization by providing simple, managed access to Big Compute and Cloud HPC. CycleCloud is the enterprise software solution for managing multiple users, running multiple applications, across multiple clouds, enabling users to never wait for compute and solve problems at any scale. Since 2005, Cycle Computing software has empowered customers in many Global 2000 manufacturing, Big 10 Life Insurance, Big 10 Pharma, Big 10 Hedge Funds, startups, and government agencies, to leverage hundreds of millions of hours of cloud based computation annually to accelerate innovation. For more information visit: www.cyclecomputing.com

Source: Cycle Computing

The post Cycle Computing Collaborates With ANSYS on Enterprise Cloud HPC Offering appeared first on HPCwire.

Mellanox Reports Fourth Quarter 2016 Financial Results

Thu, 02/02/2017 - 07:08

SUNNYVALE, Calif., Feb. 2 — Mellanox Technologies, Ltd. (NASDAQ: MLNX) has announced financial results for its fourth quarter ended December 31, 2016.

“During the fourth quarter we saw continued sequential growth in our InfiniBand business, driven by robust customer adoption of our 100 Gigabit EDR solutions into artificial intelligence, machine learning, high-performance computing, storage, database and more. Our quarterly, and full-year 2016 results, highlight InfiniBand’s continued leadership in high-performance interconnects,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Customer adoption of our 25, 50, and 100 gigabit Ethernet solutions continued to grow in the fourth quarter. Adoption of Spectrum Ethernet switches by customers worldwide generated positive momentum exiting 2016. Our fourth quarter and full-year 2016 results demonstrate Mellanox’s diversification, and leadership in both Ethernet and InfiniBand. We anticipate growth in 2017 from all Mellanox product lines.”

Fourth Quarter and Fiscal 2016 Highlights

  • Revenues were $221.7 million in the fourth quarter, and $857.5 million in fiscal year 2016.
  • GAAP gross margins were 66.8 percent in the fourth quarter, and 64.8 percent in fiscal year 2016.
  • Non-GAAP gross margins were 71.9 percent in the fourth quarter, and 71.6 percent in fiscal year 2016.
  • GAAP operating income was $13.4 million, or 6.0 percent of revenue, in the fourth quarter, and operating income was $30.6 million, or 3.6 percent of revenue, in fiscal year 2016.
  • Non-GAAP operating income was $44.4 million, or 20.0 percent of revenue, in the fourth quarter, and $180.4 million, or 21.0 percent of revenue, in fiscal year 2016.
  • GAAP net income was $9.0 million in the fourth quarter, and $18.5 million in fiscal year 2016.
  • Non-GAAP net income was $41.3 million in the fourth quarter, and $169.5 million in fiscal year 2016.
  • GAAP net income per diluted share was $0.18 in the fourth quarter, and $0.37 in fiscal year 2016.
  • Non-GAAP net income per diluted share was $0.82 in the fourth quarter, and $3.43 in fiscal year 2016.
  • $54.0 million in cash was provided by operating activities during the fourth quarter.
  • $196.1 million in cash was provided by operating activities during fiscal year 2016.
  • Cash and investments totaled $328.4 million at December 31, 2016.

First Quarter 2017 Outlook

We currently project:

  • Quarterly revenues of $200 million to $210 million
  • Non-GAAP gross margins of 71 percent to 72 percent
  • An increase in non-GAAP operating expenses of 3 percent to 5 percent
  • Share-based compensation expense of $15.8 million to $16.3 million
  • Non-GAAP diluted share count of 50.3 million to 50.8 million shares

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end Ethernet and InfiniBand intelligent interconnect solutions and services for servers, storage, and hyper-converged infrastructure. Mellanox’s intelligent interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance. Mellanox offers a choice of high performance solutions: network and multicore processors, network adapters, switches, cables, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage, network security, telecom and financial services. More information is available at www.mellanox.com.

Source: Mellanox Technologies

The post Mellanox Reports Fourth Quarter 2016 Financial Results appeared first on HPCwire.

Micron CEO Announces Upcoming Retirement

Thu, 02/02/2017 - 06:40

BOISE, Idaho, Feb. 2 — Micron Technology, Inc., (NASDAQ: MU) today announced the upcoming retirement of its Chief Executive Officer, Mark Durcan. The Board of Directors has formed a special committee to oversee the succession process and has initiated a search, with the assistance of an executive search firm, to identify and vet candidates. The Board has not established a timeframe for this process and intends to conduct a deliberate review of candidates who can contribute to Micron’s future success. Mark Durcan will continue to lead Micron as CEO during this process and will assist the company with its search and subsequent leadership transition.

“Mark Durcan recently discussed with the Board his desire to retire from Micron when the time and conditions were right for the company,” said Robert E. Switz, Chairman of the Board and a member of the search committee. “As CEO, he has successfully guided Micron’s strategy and growth for the past five years and has allowed the company to initiate this transition from a position of strength. The Board is committed to thoughtful long-term succession planning and takes seriously its responsibility to maintain a high-caliber management team and to ensure successful executive leadership transition. We expect Mark to play an instrumental role in securing and transitioning his replacement.”

About Micron Technology

Micron Technology, Inc., is a global leader in advanced semiconductor systems. Micron’s broad portfolio of high-performance memory technologies—including DRAM, NAND and NOR Flash—is the basis for solid state drives, modules, multichip packages and other system solutions. Backed by more than 35 years of technology leadership, Micron’s memory solutions enable the world’s most innovative computing, consumer, enterprise storage, networking, mobile, embedded and automotive applications. Micron’s common stock is traded on the NASDAQ under the MU symbol. To learn more about Micron Technology, Inc., visit www.micron.com.

Source: Micron

The post Micron CEO Announces Upcoming Retirement appeared first on HPCwire.
