HPCwire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Cray to Install CS400 Cluster Supercomputer at Argonne National Laboratory

Mon, 05/01/2017 - 15:16

SEATTLE, May 01, 2017 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced it has been awarded a contract to deliver a Cray CS400 cluster supercomputer to the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory. The new Cray system will serve as the Center’s flagship cluster, and in continuing with LCRC’s theme of jazz-music inspired computer names, the Cray CS400 system is named “Bebop.”

Argonne National Laboratory established the LCRC in 2002 to enable and promote the use of high-performance computing (HPC) across the Laboratory in support of its varied research missions. The LCRC is available to the entire Argonne user community, and its integrated computing and data resources will include the new 1.5 petaflop Cray CS400 system. These systems are stepping stones in the development of petascale codes that will run on systems such as Theta, a Cray XC40 supercomputer at the Argonne Leadership Computing Facility (ALCF).

“At its core, the mission of the LCRC is to provide Argonne’s users with supercomputing resources that expand research horizons, provide the training and assistance for more productive research projects, and enable larger and more complex studies,” said Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences. “Supercomputers are important tools for the Laboratory’s efforts in many areas, including energy storage, new materials, nuclear energy, climate change, and efficient transportation.”

“Cray supercomputers continue to power the amazing research conducted by the Argonne user community, and we are honored that the LCRC has selected a Cray CS400 as the next flagship system for this important program,” said Peter Ungaro, president and CEO of Cray. “We are proud of our ongoing partnership with Argonne, and with Theta and the upcoming Aurora system, and now Bebop, we look forward to an exciting future with this important customer.”

The Cray CS400 cluster supercomputers are scalable, flexible systems built from industry-standard technologies into a unified, fully-integrated system. Available with air- or liquid-cooled configurations, Cray CS400 systems provide superior price/performance, energy efficiency and configuration flexibility. The Cray CS400 systems are integrated with Cray’s HPC software stack and include software tools compatible with most open source and commercial compilers, schedulers, and libraries.

The Cray CS400 system at the LCRC is expected to be put into production in mid-2017.

For more information on the Cray CS cluster supercomputers, please visit www.cray.com.

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

The post Cray to Install CS400 Cluster Supercomputer at Argonne National Laboratory appeared first on HPCwire.

Microsoft Research Head Jeannette Wing to Lead Columbia Data Science Institute

Mon, 05/01/2017 - 14:31

NEW YORK, May 1, 2017 — Columbia University President Lee C. Bollinger today announced that Jeannette Wing, currently corporate vice president of Microsoft Research, will become the Avanessians Director of Columbia’s Data Science Institute and Professor of Computer Science.

“Jeannette Wing is a pioneering figure in the world of computer science research and education, and her addition to the University’s academic leadership team reflects the continuing expansion of our work in this field,” said Bollinger. “Our Data Science Institute is indispensable to virtually every scholarly initiative at the University dedicated to addressing a societal problem.  The benefits to be derived from Jeannette’s leadership and her presence here will be immense.”

Wing spent the last four years leading a global network of research labs extending from Boston to Bangalore as a corporate vice president of Microsoft Research.  A longtime advocate of interdisciplinary and collaborative research, Wing will continue to expand the Data Science Institute’s impact on research and education in a variety of ways, from precision medicine and public policy, to the humanities and the professions.  She will take the reins of this Columbia-wide initiative, reporting directly to President Bollinger, in early July.

“I am thrilled to be joining Columbia and returning to my academic roots, working with colleagues to help the university fulfill its commitment to solving major challenges in our world,” she said.  “Through rapid advances in computer science, statistics and operations research, we are just beginning to explore the power of data-driven discovery and decision-making.  At the same time, the automated collection and analysis of personal data raise important new ethical concerns.  Columbia is ideally positioned to lead this exploration.”

Launched in 2012, the Data Science Institute has grown from its base at Columbia Engineering to include more than 200 affiliated researchers across the university. Under its founding Executive Director, Kathleen McKeown, and Associate Director, Patricia Culligan, the Institute has emerged as a leader in both the foundations of data science and its interdisciplinary applications. Columbia became one of the first universities to offer a master’s degree in data science and to develop specific data science courses now offered throughout the university.

President Bollinger and Columbia Engineering Dean Mary Boyce saluted McKeown and Culligan for their outstanding leadership. McKeown, the Henry and Gertrude Rothschild Professor of Computer Science, and Culligan, the Robert A.W. and Christine S. Carleton Professor of Civil Engineering, will return to their full-time teaching and research positions at the School and to their innovative interdisciplinary work.  “So many of us across the University are indebted to Kathy and Trish for their profound contributions to the initial conception and formative development of the Data Science Institute,” Bollinger said.  “They and their many dedicated colleagues are responsible for creating the foundation on which we continue to build.”

Before joining Microsoft in 2013, Wing held leadership positions at Carnegie Mellon University and the National Science Foundation.  From 2007 to 2010, she oversaw the National Science Foundation’s computer and information science and engineering directorate, developing programs and setting funding priorities for research and education in academia. One of her initiatives, Expeditions in Computing, invited researchers to pursue big, risky bets and served as a model for a similar program she launched at Microsoft.  Also during her time at NSF, she promoted an effort to design an advanced placement course focusing on computational thinking for the College Board, which later became a model for the nation’s high schools.

Wing led Carnegie Mellon’s Computer Science Department before and after her time at NSF, and served for five years as Carnegie Mellon’s associate dean for Academic Affairs, overseeing educational programs offered by the university’s School of Computer Science. She received her undergraduate, master’s and doctoral degrees from the Massachusetts Institute of Technology.

Wing’s areas of research expertise are in security and privacy, formal methods, programming languages, and distributed and concurrent systems.  She is best known for defining mathematical logics and models to reason about correctness properties of computing systems.  Through her research, Wing invents ways to ensure that the computing systems we use daily are reliable, safe, and secure.

Anticipating the rise of “Big Data,” Wing was an early advocate for the influence of computer science research and education and the need to inform other disciplines with this learning.  In the wake of the dot-com bust and falling enrollments in computer science departments, her influential 2006 essay, Computational Thinking, helped reinvigorate computer science research and teaching.

Wing is on the board of the Institute for Pure and Applied Mathematics at UCLA and on the steering committee for DARPA’s Information Science and Technology Board.  She has served as chair or member of dozens of academic, industry, government and international advisory boards and scholarly journal boards.  Her work at NSF, and in promoting the value of computational thinking, has been recognized with distinguished service awards from the Computing Research Association and the Association for Computing Machinery (ACM).  She is a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the ACM and the Institute of Electrical and Electronics Engineers.

She will lead a Columbia Data Science Institute that, since its founding with seed funding from New York City’s Applied Sciences NYC initiative, has worked to foster innovation and collaboration.  The Institute works in areas where the rise of powerful algorithms and massive data has opened new opportunities and threats in fields including cybersecurity, journalism, smart cities, finance and medicine.  Through its growing Industry Affiliates program, the Institute has helped launch successful start-ups and partnered with the nation’s leading technology companies. While national and global in scope, the Institute has worked to deepen and extend Columbia’s role in ongoing efforts to develop New York City as a thriving center of the tech-driven innovation economy.

About Columbia University

Among the world’s leading research universities, Columbia University in the City of New York continuously seeks to advance the frontiers of scholarship and foster a campus community deeply engaged in the complex issues of our time through teaching, research, patient care and public service. Founded in 1754 as King’s College, the University today comprises 16 undergraduate, graduate and professional schools, four affiliated colleges and seminaries in Manhattan, and a wide array of research institutes and global centers around the world. Columbia’s Data Science Institute is one of several university initiatives helping to drive economic dynamism and technology innovation in New York and nationally through teaching, applied research and collaborations with government and industry. Others include Columbia Entrepreneurship, which supports the development of new startups through programs on and off campus, and Columbia Technology Ventures, whose record of bringing innovations to market led to Columbia being ranked number two for tech transfer among U.S. universities by the Milken Institute.

Source: Columbia University

The post Microsoft Research Head Jeannette Wing to Lead Columbia Data Science Institute appeared first on HPCwire.

NERSC Uses Roofline to Optimize Code for KNL

Mon, 05/01/2017 - 13:28

Roofline, a software performance model first developed (~2005) by Sam Williams while he was a Ph.D. student at the University of California, Berkeley, is now being pressed into service at the National Energy Research Scientific Computing Center (NERSC) to optimize code for use on manycore systems, notably Intel Knights Landing (KNL).

“The idea of Roofline is two-fold,” said Lenny Oliker, a senior computer scientist in Lawrence Berkeley National Lab’s Computational Research Division who worked closely with Williams, now a staff scientist in CRD’s Performance Algorithms Research Group, to refine and expand Roofline’s capabilities. “First, we need to understand the underlying hardware architecture (of the supercomputer), its capabilities and the performance of real codes running on it. Then we want to characterize actual applications and graph them onto the Roofline chart.”

In practice, the Roofline model is a graph whose x axis is arithmetic intensity—a measure of FLOPs per byte—and whose y axis is attainable performance, explained Jack Deslippe, acting group lead for NERSC’s Application Performance Group, who is working with Williams and other Berkeley Lab colleagues to extend the model and its applications. “In computer code what this means is how many floating point operations (FLOPs) do you do for every byte of data that you have to bring in from memory,” Deslippe said. “What the Roofline curve tells you is what performance you can expect from the system given the characteristics of your application or a subroutine of the application.”
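The ceiling Deslippe describes can be sketched in a few lines: attainable performance is the lesser of the machine's peak compute rate and the memory bandwidth times the kernel's arithmetic intensity. The peak-FLOP and bandwidth numbers below are illustrative placeholders, not measured figures for KNL or any NERSC system:

```python
def roofline_gflops(arithmetic_intensity, peak_gflops, peak_bw_gbs):
    """Attainable performance (GFLOP/s) under the Roofline model.

    A kernel is capped either by memory bandwidth
    (peak_bw_gbs * arithmetic_intensity) or by the machine's
    peak compute rate, whichever is lower.
    """
    return min(peak_gflops, peak_bw_gbs * arithmetic_intensity)

# Illustrative machine parameters (placeholders, not KNL measurements)
PEAK_GFLOPS = 2000.0   # peak double-precision compute, GFLOP/s
PEAK_BW = 400.0        # peak memory bandwidth, GB/s

# A kernel doing 0.5 FLOPs per byte is memory-bandwidth bound:
print(roofline_gflops(0.5, PEAK_GFLOPS, PEAK_BW))   # 200.0
# A kernel doing 10 FLOPs per byte hits the compute ceiling:
print(roofline_gflops(10.0, PEAK_GFLOPS, PEAK_BW))  # 2000.0
```

The crossover ("ridge point") sits at an arithmetic intensity of peak FLOPs divided by peak bandwidth; kernels to the left of it benefit most from reducing data movement, kernels to the right from better vectorization.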

An account of NERSC’s expanded use of Roofline – formally known as the Empirical Roofline Toolkit (ERT) – is on the NERSC web site (Roofline Model Boosts Manycore Code Optimization Efforts).

While the Roofline model has been used for a number of years to characterize supercomputing systems and architectures, over the past year it has been expanded to both visualize and guide application optimization, and new tools have been developed to support this, according to Deslippe. As part of this effort, NERSC’s Doug Doerfler has extended and applied the technology to the Knights Landing processors in the Cori KNL system.

“We are using this model to frame the conversation with users about where their application stands,” Deslippe said. “It’s a good way to communicate with users about what they need to work on with a given application or subroutine. It takes a little of the mystery out of code optimization.”

Over the past year, the Roofline team has been introducing NERSC users, including those involved in the NERSC Exascale Science Applications Program (NESAP), to the Roofline model to help them gauge performance improvements. For example, Tuomas Koskela, a NERSC postdoc who joined the center in 2016 to work on XGC1 (a fusion particle-in-cell code) as part of a NESAP project, has been using Roofline to improve the code’s performance on Cori.

“We started talking about using Roofline last spring,” Koskela said. “It was interesting to me because I had a problem with my code that we didn’t understand why it wasn’t getting very good performance.” After using Roofline via Intel Advisor to optimize the performance of kernels of the XGC1 code for the KNL architecture, Koskela was able to dramatically improve the code’s performance on Cori.

Link to full article: http://www.nersc.gov/news-publications/nersc-news/science-news/2017-2/roofline-model-boosts-manycore-code-optimization-efforts/

The post NERSC Uses Roofline to Optimize Code for KNL appeared first on HPCwire.

PSSC Labs Introduces All Flash Storage for Hadoop

Mon, 05/01/2017 - 13:21

LAKE FOREST, Calif., May 1, 2017 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced it now offers the option of complete flash storage for its CloudOOP 12000 enterprise server, built specifically for Hadoop. The faster storage option means PSSC Labs can now offer increased data analytics speeds and improved performance, making the CloudOOP 12000 the ideal enterprise Hadoop solution.

The CloudOOP 12000 now supports the new Micron 5100 series SATA III solid state drives (SSDs), with up to 14 Micron SSDs (2 drives dedicated to the operating system and 12 drives for data storage). The all-flash storage option provides durability, reliability, and faster performance at a price point that makes the technology affordable to more enterprise users. The Micron SSDs offer faster speeds than traditional non-flash storage and, when combined with the CloudOOP 12000’s made-for-Hadoop server design, can achieve near real-time performance for data analytics.

Using the largest-capacity Micron SSDs, the CloudOOP 12000 can support up to 96 TB of storage and is tailored to meet the needs of read-intensive video streaming, latency-sensitive transactional databases and write-intensive logging applications.  It also makes an excellent platform for edge computing, with the ability to consume large amounts of data very quickly from IoT devices and sensors.

The CloudOOP 12000 is the only server specifically designed for Hadoop, Kafka, Big Data and IoT. It offers 2x the density and up to 35% lower power draw than traditional manufacturers’ servers, as well as a near 50% increase in data throughput performance. With an over 90% efficiency rating, the reduced power draw means a smaller data center footprint and a significantly lower total cost of ownership.

“The CloudOOP 12000 is PSSC Labs’ unique platform for enterprise users running applications like Hadoop, Spark and Kafka streaming. PSSC Labs has already successfully deployed over 100 PBytes for Hadoop using the CloudOOP 12000 platform, and after a stringent review of the SSD options on the market, we’ve certified Micron’s new 5100 series of SSDs, which will allow us to offer our customers a high capacity, high performance, durable system with the absolute lowest cost of ownership,” said Alex Lesser, Vice President of PSSC Labs.

Micron 5100 Series SSD Features include

High Capacity

  • Unique range of solutions with up to 8TB of storage in a 2.5-inch form factor and 2TB in an M.2

High Performance

  • Three models optimized for varying workloads with consistent, steady state random writes at 74,000 IOPS.

Secure Encryption

  • Built-in AES-256-bit encryption and TCG Enterprise protection with FIPS 140-2 validation – available on the 5100 MAX.

Greater Flexibility

  • Micron’s FlexPro firmware architecture can be used to actively tune capacity to optimize drive performance and endurance

Best Reliability

  • Unmatched 99.999% quality of service (QoS) compared to spinning media. MTTF of 2 million device hours

CloudOOP 12000 Features Include

High Processing Power

  • The CloudOOP 12000 supports up to 2 x Intel Xeon E5 Series processors & up to 256GB high performance memory – get higher performance and reduced computing time

Direct Connect IO technology

  • Unique design gives each hard drive its own independent path to the motherboard – removing unnecessary components that restrict data pathways and improving data ingestion & IO rates.

Connectivity Options

  • GigE, 10GigE, 40GigE and Infiniband network connectivity options available. Dual GigE network bandwidth comes standard, with addition network adapters from Intel, Mellanox, Solarflare and others available.

Operating System Compatibility

  • Supports Microsoft Windows, Red Hat, CentOS, Ubuntu & most other Linux distributions.

All CloudOOP 12000 server configurations include service and support from PSSC Labs’ US-based, expert in-house engineers. Prices for a custom CloudOOP 12000 server start at $5,000.

For more information see http://www.pssclabs.com/products/big-data/hadoop/high-density-hadoop-server/.

About PSSC Labs

For technology-powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership.  All products are designed and built at the company’s headquarters in Lake Forest, California. For more information, call 949-380-7288, visit www.pssclabs.com, or email sales@pssclabs.com.

Source: PSSC Labs

The post PSSC Labs Introduces All Flash Storage for Hadoop appeared first on HPCwire.

IBM Inventor Lisa Seacat DeLuca To Be Inducted into the Women in Technology Hall of Fame

Mon, 05/01/2017 - 13:20

ARMONK, N.Y., May 1, 2017 — IBM (NYSE: IBM) has announced that Lisa Seacat DeLuca, Technology Strategist, IBM Watson Customer Engagement, is being inducted into the Women in Technology Hall of Fame. Sponsored by the Women in Technology International (WITI) Foundation, the induction ceremony will take place on Monday, June 12.

At 34 years old, DeLuca is the most prolific female inventor in IBM history, and her accomplishments have been recognized widely throughout the industry. In 2016, she was named by the Internet of Things (IoT) Institute as one of the Most Influential Women in IoT. Prior to that, DeLuca was named one of MIT’s 35 Innovators Under 35 and one of Fast Company’s 100 Most Creative People in Business. She currently serves as a technology strategist in the Cognitive Incubation Lab for IBM Watson Customer Engagement.

WITI’s Hall of Fame was launched in 1996 as a U.S.-based outreach initiative supported by the Clinton Administration. WITI is dedicated to providing a forum to recognize, celebrate and publicize women’s exceptional contributions to the science and technology fields.

Past honorees include women who have made scientific and technological breakthroughs, who use science and technology to improve the human condition, and whose environmental endeavors help protect the planet. Other IBM inductees include Harriet Green, General Manager, IBM Watson Internet of Things, Customer Engagement and Education, and Marie Wieck, General Manager, IBM Blockchain.

For more information on the WITI Hall of Fame and previous inductees, please visit: http://www.witi.com/halloffame.

Source: IBM

The post IBM Inventor Lisa Seacat DeLuca To Be Inducted into the Women in Technology Hall of Fame appeared first on HPCwire.

Online Launches ARMv8-Based Scaleway Public Cloud Powered by Cavium ThunderX

Mon, 05/01/2017 - 12:45

SAN JOSE, Calif., May 1, 2017 — Online, a wholly-owned subsidiary of the leading French Telecom company Iliad Group and one of the leading web hosting providers, announced today the commercial deployment of server platforms based on Cavium’s (NASDAQ: CAVM) ThunderX workload optimized processors as part of their Scaleway cloud service offering.

Online offers a range of services to Internet customers worldwide including domain names, web hosting, dedicated servers and hosting in their datacenter. With several hundred thousand servers deployed in their datacenter, Online is one of the largest web hosting providers in Europe.

The ThunderX product family is Cavium’s 64-bit ARMv8-A server processor for datacenter and cloud applications, and features high performance custom cores, single- and dual-socket configurations, high memory bandwidth and large memory capacity. The product family also includes integrated hardware accelerators, feature-rich high-bandwidth network and storage IO, fully virtualized cores and IO, and a scalable high-bandwidth, low-latency Ethernet fabric, which together afford ThunderX best-in-class performance per dollar. The processors are fully compliant with the ARMv8-A architecture specification as well as ARM’s SBSA and SBBR standards, and are widely supported by industry-leading OS, hypervisor and software tool and application vendors.

Online is deploying dual-socket, 96-core ThunderX based platforms as part of its Scaleway IaaS cloud offering.  As part of this deployment, Online.net is introducing three starter ARMv8 servers at an attractive starting price of €0.006 per hour, less than one third the price of its current offering.  The Scaleway cloud platform is fully supported by Ubuntu 16.04, including the LAMP stack, Docker, Puppet, Juju, Hadoop, MAAS, and more. The platforms also support all standard features of the Scaleway Cloud, including flexible IPs, native IPv6, snapshots and images.

“Online’s success in the hosting server industry is built on providing disruptive technology with best-in-class customer experience. This requires us to deploy the most advanced, highest performance and highly scalable servers in our infrastructure,” said Yann Léger, VP Cloud Computing at Online. “Cavium’s ThunderX workload optimized servers provide an ideal vehicle to enable highly optimized platforms for scalable cloud workloads. We expect ThunderX based servers to deliver significant benefits in performance and TCO, thereby providing better performance and cost-efficiency than all existing solutions in the industry.”

“ThunderX ARMv8 CPUs were designed to deliver best-in-class performance and TCO for targeted workloads and are being deployed at multiple hosting datacenters,” said Gopal Hegde, VP/GM, Datacenter Processor Group at Cavium. “We are pleased to partner with one of Europe’s elite hosting providers on server platforms for their next generation cloud datacenters. This partnership demonstrates continued acceptance of ThunderX platforms across the largest and most demanding datacenters.”

About Cavium

Cavium, Inc. (NASDAQ: CAVM), offers a broad portfolio of infrastructure solutions for compute, security, storage, switching, connectivity and baseband processing. Cavium’s highly integrated multi-core SoC products deliver software compatible solutions across low to high performance points enabling secure and intelligent functionality in Enterprise, Datacenter and Service Provider Equipment. Cavium processors and solutions are supported by an extensive ecosystem of operating systems, tools, application stacks, hardware reference designs and other products. Cavium is headquartered in San Jose, CA with design centers in California, Massachusetts, India, Israel, China and Taiwan.

Source: Cavium

The post Online Launches ARMv8-Based Scaleway Public Cloud Powered by Cavium ThunderX appeared first on HPCwire.

Rescale Announces LS-DYNA On-Demand in Europe

Mon, 05/01/2017 - 12:40

SAN FRANCISCO, May 1, 2017 — Rescale and DYNAmore are excited to announce that hourly, on-demand licenses of the popular finite element analysis (FEA) software LS-DYNA are now available in Europe on ScaleX Enterprise, Rescale’s enterprise cloud platform for big compute. The joint launch builds on the success of on-demand licensing in the United States, which accounts for 99% of LS-DYNA jobs on ScaleX in that country. DYNAmore, an LS-DYNA European distributor, will take care of orders and billing, while Rescale will deliver the software on its cloud platform.

Rescale provides ScaleX, a cloud computing platform for simulation and other software that require high-performance computing (HPC). Over 200 third-party software packages, including LS-DYNA, are integrated onto the ScaleX platform, which users can leverage on the cloud via an intuitive SaaS graphical user interface. Rescale partners with major public cloud providers, including AWS and Microsoft Azure, to allow users to run simulations on a global network of the latest HPC hardware.

LS-DYNA, one of the most popular software packages on the ScaleX platform, was previously available to European customers under a “bring-your-own-license (BYOL)” model that permitted customers to use their annual or paid-up licenses on the Rescale platform. With the addition of on-demand licensing, European LS-DYNA customers can now instantly purchase hourly licenses on the cloud to meet their variable simulation requirements and pay by the hour for the licenses they use. In conjunction with Rescale’s multi-cloud network of on-demand HPC hardware, on-demand licenses will allow European LS-DYNA customers to fully leverage the elasticity of the cloud. “Engineers at European enterprises now have the freedom to scale out their LS-DYNA simulations in the blink of an eye, giving their organizations the IT agility that directly corresponds with ROI,” said Joris Poort, Rescale’s CEO.

DYNAmore’s Software Solutions Manager Uli Göhner anticipates the news will boost LS-DYNA sales in Europe as the software licenses become more accessible and easy to purchase. “We already have a lot of requests from our existing customer base for short-term HPC resources. Our flexible licensing strategy allows customers to lease additional licenses for a short term or to purchase licenses on a pay-per-use basis. This new licensing option was implemented especially for our LS-DYNA cloud offering and allows our customers to use their HPC resources effectively.”

Rescale is a Gold Sponsor of the 11th European LS-DYNA Conference in Salzburg, Austria on May 9-11, 2017. Visit the Rescale booth for a live demo of how to buy on-demand licenses on ScaleX.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

About DYNAmore

DYNAmore is the main partner for consulting, training, support and sales services concerning the finite element software LS-DYNA. The product portfolio consists of LS-DYNA, LS-OPT, LS-PrePost, GENESIS, additional complementary programs as well as numerous FE models for crash simulation.

DYNAmore is the first choice for pilot and development projects dealing with the simulation of nonlinear dynamic problems. Secured and qualified support for all application fields, FEM calculation services and general consulting on the subject of structural dynamics are among the services. The educational offering covers a wide range of seminars, infodays and conferences. The services provided also include software development for finite element solver technology and simulation data management as well as consulting and support for modern, massively parallel computer systems.

Source: Rescale

The post Rescale Announces LS-DYNA On-Demand in Europe appeared first on HPCwire.

TACC Announces HPC Services Partnership with NASA JPL

Mon, 05/01/2017 - 11:52

AUSTIN, May 1, 2017 — The Texas Advanced Computing Center (TACC) has announced the formation of a five-year advanced computing partnership with NASA’s Jet Propulsion Laboratory for up to $3.1 million. Under the contract, TACC will provide resources, including supercomputers, networks, storage, and expertise in applying computational methods to fundamental research, science and engineering challenges.

Advanced computing is foundational to the success of a wide range of modern science and engineering efforts relevant to JPL, from design of next generation flight systems to processing real-time data streams from large scale instruments, like telescopes. JPL is the leading U.S. center for robotic exploration of the solar system, and has 19 spacecraft and 10 major instruments carrying out planetary, Earth science, and space-based astronomy missions.

JPL’s requirements for specialized and increasingly capable HPC resources grow every year. The partnership will significantly extend JPL’s existing computational services and expertise with resources at TACC. Services to be offered to JPL include HPC capabilities and scientific, technical, and operational consulting services.

“We believe this is an interesting space where we have something to contribute and JPL has things to teach us as well,” said John West, director of Strategic Initiatives at TACC. “We always look for opportunities that are high on the intellectual engagement scale. It makes us a better organization and better at serving our customers.”

TACC’s environment includes a comprehensive cyberinfrastructure ecosystem of leading-edge resources in high performance computing (HPC), visualization, data analysis, storage, archive, cloud, data-driven computing, connectivity, tools, APIs, algorithms, consulting, and software. In addition, TACC’s skilled experts work with thousands of researchers on more than 3,000 projects each year at more than 450 institutions across the country.

JPL’s workloads sourced to TACC are expected to represent the full spectrum of JPL’s fundamental research, development and operational areas.

About TACC

The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is a leading research center for advanced computational science, engineering and technology. TACC’s mission is to enable discoveries that advance science and society through the application of advanced computing technologies. To fulfill this mission, TACC provides comprehensive advanced computing resources and support services to researchers in Texas and across the nation. TACC conducts research and development in applications and algorithms, in computing systems design/architecture, and in programming tools and environments to produce new technologies that expand the capabilities of researchers for knowledge discovery. TACC also educates the next generation of computational researchers, and promotes awareness of the importance and impact of computing to science and society. Visit TACC’s website at: www.tacc.utexas.edu

Source: TACC

The post TACC Announces HPC Services Partnership with NASA JPL appeared first on HPCwire.

LLNL Offers 3D Design Summer Academy

Mon, 05/01/2017 - 11:03

LIVERMORE, California, May 1, 2017 — Register now for the Lawrence Livermore National Laboratory (LLNL) 3D Design Summer Academy for enrolled, degree-pursuing community college students. This weeklong workshop will be held June 12-16 from 9 a.m. to 4 p.m. daily at the Edward Teller Education Center (ETEC) at LLNL.

Participating students will learn about cutting-edge scientific research conducted at the Laboratory and experience the nature of science through direct involvement and use of equipment, processes and practices found in research labs. This hands-on workshop will explore the principles behind 2D and 3D printing, with students designing and printing their own projects during the week. Students will tour the Additive Manufacturing facility inside the Laboratory to see real-world printing in practice and interface with Lab scientists and engineers.

No previous experience is required. Registrants must be 18 years or older and a U.S. citizen. Register online by May 15. There is a $25 registration fee.

The Community College 3D Design Summer Academy is sponsored by LLNL’s University Relations and Science Education Program. Visit the LLNL Science Education Program website for more information about this and other educational outreach programs.

For more information, contact Joanna Albala, LLNL Education Program manager, (925) 422-6803.

Source: LLNL

The post LLNL Offers 3D Design Summer Academy appeared first on HPCwire.

Tsinghua University Wins ASC17 Championship Big Time

Fri, 04/28/2017 - 09:45

On April 28, the final round of the 2017 ASC Student Supercomputer Challenge (ASC17) concluded in Wuxi. Tsinghua University stood out from 20 teams from around the world after a fierce one-week competition, emerging as grand champion and winning the e Prize.

Tsinghua University secured the ASC17 championship

As the world’s largest supercomputing competition, ASC17 received applications from 230 universities around the world, 20 of which advanced through the qualifying rounds to the final held this week at the National Supercomputing Center in Wuxi. During the final round, the student teams had to independently design a supercomputing system within a 3,000 W power-consumption limit. They also had to run and optimize standard international benchmarks and a variety of cutting-edge scientific and engineering applications, including AI-based traffic prediction, genome assembly, and materials science. Moreover, they were required to complete a high-resolution maritime simulation on the world’s fastest supercomputer, “Sunway TaihuLight.”

The grand champion, the Tsinghua University team, completed deep parallel optimization of the high-resolution maritime simulation model MASNUM on TaihuLight, scaling the original program to 10,000 cores and achieving a 392x speedup. This result won the Tsinghua University team the e Prize award. MASNUM was nominated in 2016 for the Gordon Bell Prize, the top international prize in the supercomputing applications field.
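For context, those scaling figures imply a modest parallel efficiency, which is typical for strong scaling of irregular codes at this core count. A quick back-of-envelope check (assuming the 392x speedup is measured against a single-core baseline, which the release does not state):

```python
# Parallel efficiency implied by the reported MASNUM result: a 392x
# speedup on 10,000 cores. The single-core baseline is an assumption.
def parallel_efficiency(speedup: float, cores: int) -> float:
    """Achieved speedup as a fraction of ideal linear speedup."""
    return speedup / cores

eff = parallel_efficiency(392, 10_000)
print(f"parallel efficiency: {eff:.1%}")  # prints "parallel efficiency: 3.9%"
```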

The runner-up, Beihang University, gave an outstanding performance in the popular AI field. After building a supercomputing system and training their self-developed deep neural network model on historical traffic data provided by Baidu, the team produced the most accurate prediction of road conditions during the morning rush hour.

The first-time finalist, the Weifang University team, constructed a highly optimized heterogeneous supercomputing system with Inspur’s supercomputing servers and ran the international HPL benchmark, setting a new world record of 31.7 TFLOPS for floating-point computing speed. The team turned out to be the biggest surprise of the event and won the award for best computing performance.

Moreover, Ural Federal University, National Tsing Hua University, Northwestern Polytechnical University and Shanghai Jiao Tong University won the application innovation award. The popular choice award was shared by Saint-Petersburg State University and Zhengzhou University.

“It is great to see the presence of global teams in this event,” Jack Dongarra, chairman of the ASC Expert Committee, founder of the TOP500 list that ranks the world’s 500 most powerful supercomputer systems, professor at the University of Tennessee and distinguished research staff member at Oak Ridge National Laboratory, said in an interview. “This event inspires students to gain advanced scientific knowledge. TaihuLight is an amazing platform for this event. Just imagine the interconnected computation of everyone’s computer in a gymnasium housing 100,000 people; TaihuLight’s capacity is 100 times that of such a gym. This is something none of the teams will ever be able to experience again.”

According to Wang Endong, initiator of the ASC competition, academician of the Chinese Academy of Engineering, and chief scientist of Inspur Group, the rapid development of AI is significantly changing human society. At the core of this development are computing, data and algorithms. With this trend, supercomputers will become important infrastructure for the intelligent society of the future, and their pace of development will be closely tied to social development, improvements in livelihood, and the progress of civilization. The ASC competition is committed to cultivating future-oriented, interdisciplinary supercomputing talent and extending the benefits of supercomputing to the greater population.

ASC17 is jointly organized by the Asian Supercomputing Community, Inspur Group, the National Supercomputing Center in Wuxi, and Zhengzhou University. Initiated by China, the ASC supercomputing challenge aims to be the platform to promote exchanges among young supercomputing talent from different countries and regions, as well as to groom young talent. It also aims to be the key driving force in promoting technological and industrial innovations by improving the standards in supercomputing applications and research.

The post Tsinghua University Wins ASC17 Championship Big Time appeared first on HPCwire.

ISC 2017 Early-Bird Registration Ends May 10

Fri, 04/28/2017 - 09:07

FRANKFURT, Germany, April 28, 2017 — Early-bird registration for the 2017 ISC High Performance conference ends in less than two weeks, and we encourage participants not to wait until the last minute and miss the opportunity to save over 45 percent off the on-site rates.

The conference will once again be held at Messe Frankfurt in Germany and is expected to have an attendance of over 3,000 participants from around the globe. So far 146 companies have signed up to exhibit, and with very few booth spaces still left for booking, we will likely end up with a total of 150 exhibitors on the 2017 show floor.

Here is an overview of the different passes available to ISC 2017 attendees:

  • Conference Pass: The conference pass gives access to all sessions from Monday, June 19, through Wednesday, June 21, as well as the exhibition and all social events.
  • Exhibition Pass: Pass holders have access to the exhibition (and all related activities), all Birds-of-a-Feather (BoF) sessions, the PhD Forum, the Vendor Showdown, the ISC Student Cluster Competition award ceremony and the Welcome Party on Monday, June 19.
  • Tutorial Pass: This pass provides access to the 13 interactive tutorials on Sunday, June 18.
  • Workshop Pass: This pass gives access to the 21 workshops that will take place on Thursday, June 22 at the Frankfurt Marriott Hotel.

This year the conference will cover a broad array of topics, such as exascale development, deep learning, big data, extreme-scale algorithms and new programming models. The conference also provides insights into a wide range of performance-demanding applications, including product prototyping, earthquake prediction, transportation logistics, energy exploration, and drug design, to name a few.

ISC Tutorials (Sunday, June 18)

If you are interested in broadening your knowledge of key HPC, networking and storage topics, consider attending the ISC tutorials. Renowned experts in their respective fields will give attendees a comprehensive introduction to each topic as well as a closer look at specific problems, incorporating hands-on components where appropriate. The organizers are expecting over 300 attendees. Here is an overview of the five full-day and eight half-day tutorials.

ISC Workshops (Thursday, June 22)

The ISC Workshops typically attract over 600 researchers and commercial users interested in learning more about current developments in specific areas of HPC. There are 21 unique workshops to choose from, eight of which are full-day workshops. Click here to find out more about the individual workshops.


About ISC High Performance

First held in 1986, ISC High Performance is the world’s oldest and Europe’s most important conference and networking event for the HPC community. It offers a strong five-day technical program focusing on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments.

ISC High Performance attracts engineers, IT specialists, system developers, vendors, scientists, researchers, students, journalists, and other members of the HPC global community. The exhibition draws decision-makers from automotive, finance, defense, aeronautical, gas & oil, banking, pharmaceutical and other industries, as well those providing hardware, software and services for the HPC community. Attendees will learn firsthand about new products and applications, in addition to the latest technological advances in the HPC industry.

Source: ISC

The post ISC 2017 Early-Bird Registration Ends May 10 appeared first on HPCwire.

Intel, JSC Collaborate to Deploy Next-Gen Modular Supercomputer

Fri, 04/28/2017 - 08:06

JÜLICH, April 28, 2017 — Intel and Forschungszentrum Jülich, together with ParTec and Dell, today announced a cooperation to develop and deploy a next-generation modular supercomputing system. Leveraging the experience and results gained in the EU-funded DEEP and DEEP-ER projects, in which three of the partners have been strongly engaged, the group will develop the mechanisms required to augment JSC’s JURECA cluster with a highly scalable component named “Booster,” based on Intel’s Scalable System Framework (Intel SSF).

“This will be the first-ever demonstration in a production environment of the Cluster-Booster concept, pioneered in DEEP and DEEP-ER at prototype level, and a considerable step towards the implementation of JSC’s modular supercomputing concept,” explains Prof. Thomas Lippert, director of the Jülich Supercomputing Centre. Modular supercomputing is a new paradigm that directly reflects, in the architecture of the supercomputer, the diversity of execution characteristics found in modern simulation codes. Instead of a homogeneous design, different modules with distinct hardware characteristics are exposed via a homogeneous global software layer that enables optimal resource assignment.

Code parts of a simulation that can only be parallelized up to a limited concurrency level stay on the Cluster, equipped with faster general-purpose processor cores, while the highly parallelizable parts run on the weaker Booster cores at much higher concurrency. In this way, increased scalability and significantly higher efficiency with lower energy consumption can be achieved, addressing both big data analytics and exascale simulation capabilities.
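The placement idea can be sketched as a toy rule (illustrative only; the function name and threshold are hypothetical, not the actual ParaStation scheduling logic):

```python
# Toy sketch of Cluster-Booster placement: code parts whose concurrency is
# limited stay on the Cluster's fast general-purpose cores; highly
# parallelizable parts go to the Booster's many (weaker) cores.
# The threshold value is purely illustrative.
def place(max_concurrency: int, booster_threshold: int = 1024) -> str:
    """Pick a module for a code part based on how far it can scale."""
    return "Cluster" if max_concurrency < booster_threshold else "Booster"

print(place(64))       # low-concurrency solver part -> Cluster
print(place(100_000))  # highly parallel kernel      -> Booster
```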

Technical Specifications

The JURECA Booster will use Intel Xeon Phi 7250-F processors with on-package Intel Omni-Path Architecture interfaces. The system will be delivered by Intel with its subcontractor Dell, utilizing Dell’s PowerEdge C6320p servers. Once installed, it will provide a peak performance of 5 petaflops. The system was co-designed by Intel and JSC to enable maximum scalability for large-scale simulations. The JURECA Booster will be directly connected to the JURECA cluster, a system delivered by T-Platforms in 2015, and both modules will be operated as a single system. As part of the project, the partners will develop a novel high-speed bridging mechanism between JURECA’s InfiniBand EDR and the Booster’s Intel Omni-Path Architecture interconnect. Together with the modularity features of ParTec’s ParaStation ClusterSuite, this will enable efficient use of the whole system by applications flexibly distributed across the modules.

Supercomputer JURECA at the Jülich Supercomputing Centre (JSC)
Copyright: Forschungszentrum Jülich / Ralf-Uwe Limbach

Source: Jülich Supercomputing Centre

The post Intel, JSC Collaborate to Deploy Next-Gen Modular Supercomputer appeared first on HPCwire.

Supermicro Announces 3rd Quarter 2017 Financial Results

Fri, 04/28/2017 - 07:53

SAN JOSE, Calif., April 28, 2017 — Super Micro Computer, Inc. (NASDAQ:SMCI), a global leader in high-performance, high-efficiency server, storage technology and green computing, today announced third quarter fiscal 2017 financial results for the quarter ended March 31, 2017.

Fiscal 3rd Quarter Highlights

  • Quarterly net sales of $631.1 million, down 3.2% from the second quarter of fiscal year 2017 and up 18.5% from the same quarter of last year.
  • GAAP net income of $16.7 million, down 24.2% from the second quarter of fiscal year 2017 and equal to the same quarter of last year.
  • GAAP gross margin was 14.0%, down from 14.3% in the second quarter of fiscal year 2017 and down from 14.9% in the same quarter of last year.
  • Server solutions accounted for 70.0% of net sales compared with 68.1% in the second quarter of fiscal year 2017 and 69.9% in the same quarter of last year.

Net sales for the third quarter ended March 31, 2017 totaled $631.1 million, down 3.2% from $652.0 million in the second quarter of fiscal year 2017. No customer accounted for more than 10% of net sales during the quarter ended March 31, 2017.

GAAP net income for the third quarter of fiscal year 2017 and for the same period a year ago was $16.7 million, or $0.32 per diluted share, in both periods. Included in net income for the quarter is $4.8 million of stock-based compensation expense (pre-tax). Excluding this item and the related tax effect, non-GAAP net income for the third quarter was $20.3 million, or $0.38 per diluted share, compared to non-GAAP net income of $19.0 million, or $0.36 per diluted share, in the same quarter of the prior year. On a sequential basis, non-GAAP net income decreased from the second quarter of fiscal year 2017 by $4.7 million, or $0.10 per diluted share.
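The reconciliation arithmetic above can be checked directly (the implied tax effect of the stock-based compensation add-back is derived here, not stated in the release):

```python
# Non-GAAP reconciliation from the figures above, in $ millions:
# non-GAAP net income = GAAP net income + stock comp - related tax effect.
gaap_net_income = 16.7
stock_comp_pretax = 4.8
non_gaap_net_income = 20.3

implied_tax_effect = gaap_net_income + stock_comp_pretax - non_gaap_net_income
print(f"implied tax effect: ${implied_tax_effect:.1f}M")  # ~$1.2M
```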

GAAP gross margin for the third quarter of fiscal year 2017 was 14.0% compared to 14.9% in the same period a year ago. Non-GAAP gross margin for the third quarter was 14.0% compared to 14.9% in the same period a year ago. GAAP gross margin for the second quarter of fiscal year 2017 was 14.3% and Non-GAAP gross margin for the second quarter of fiscal year 2017 was 14.4%.

The GAAP income tax provision for the third quarter of fiscal year 2017 was $5.1 million or 23.6% of income before tax provision compared to $7.4 million or 30.7% in the same period a year ago and $9.3 million or 29.7% in the second quarter of fiscal year 2017. The effective tax rate for the third quarter of fiscal year 2017 was lower primarily due to a tax benefit resulting from the completion of an income tax audit in a foreign jurisdiction.

The Company’s cash and cash equivalents and short and long term investments at March 31, 2017 were $110.5 million compared to $183.7 million at June 30, 2016. Free cash flow for the nine months ended March 31, 2017 was $(113.5) million, primarily due to an increase in the Company’s cash used in operating activities.

Business Outlook & Management Commentary

The Company expects net sales of $655 million to $715 million for the fourth quarter of fiscal year 2017 ending June 30, 2017. The Company expects non-GAAP earnings per diluted share of approximately $0.40 to $0.50 for the fourth quarter.

“We are pleased to report third quarter revenues that exceeded our guidance in a quarter complicated by shortages in memory and SSD. Our resurgent revenue growth and market share gains are a result of our strategy of developing vertical markets that expand our TAMs. Storage, IOT, Accelerated Computing, Enterprise and Asia contributed to the 18.5% growth from last year,” said Charles Liang, Chairman and Chief Executive Officer. “Supermicro’s preparation for the upcoming new Xeon processor launches has never been stronger and our traction with new customer engagement for seeding and early deployment has been outstanding. We expect to lead the industry with the most innovative platform architectures, the broadest product array and total solutions during the upcoming technology transitions.”

It is currently expected that the outlook will not be updated until the Company’s next quarterly earnings announcement, notwithstanding subsequent developments. However, the Company may update the outlook or any portion thereof at any time. Such updates will take place only by way of a news release or other broadly disseminated disclosure available to all interested parties in accordance with Regulation FD.

Use of Non-GAAP Financial Measures

Non-GAAP gross margin discussed in this press release excludes stock-based compensation expense. Non-GAAP net income and net income per share discussed in this press release exclude stock-based compensation expense and the related tax effect of the applicable items. Management presents non-GAAP financial measures because it considers them to be important supplemental measures of performance. Management uses the non-GAAP financial measures for planning purposes, including analysis of the Company’s performance against prior periods, the preparation of operating budgets and to determine appropriate levels of operating and capital investments. Management also believes that the non-GAAP financial measures provide additional insight for analysts and investors in evaluating the Company’s financial and operational performance. However, these non-GAAP financial measures have limitations as analytical tools, and are not intended to be an alternative to financial measures prepared in accordance with GAAP. Pursuant to the requirements of SEC Regulation G, detailed reconciliations between the Company’s GAAP and non-GAAP financial results are provided at the end of this press release. Investors are advised to carefully review and consider this information as well as the GAAP financial results that are disclosed in the Company’s SEC filings.

About Super Micro Computer, Inc.

Supermicro is a provider of end-to-end green computing solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro’s advanced Server Building Block Solutions offer a vast array of components for building energy-efficient, application-optimized, computing solutions. Architecture innovations include Twin, TwinPro, FatTwin, Ultra Series, MicroCloud, MicroBlade, SuperBlade, Simply Double, Double-sided Storage, Battery Backup Power (BBP) modules and WIO/UIO. Products include servers, blades, GPU systems, workstations, motherboards, chassis, power supplies, storage, networking, server management software and SuperRack cabinets/accessories delivering unrivaled performance and value.

Source: Supermicro

The post Supermicro Announces 3rd Quarter 2017 Financial Results appeared first on HPCwire.

IARPA Launches ‘RAVEN’ to Develop Rapid Integrated Circuit Imaging Tools

Thu, 04/27/2017 - 09:46

WASHINGTON, D.C., April 27, 2017 — The Intelligence Advanced Research Projects Activity, within the Office of the Director of National Intelligence, announced today the Rapid Analysis of Various Emerging Nano-electronics—“RAVEN”—program, a multi-year research effort to develop tools to rapidly image current and future integrated circuit chips.

“As semiconductor technology continues to follow Moore’s Law, each new generation of chips has smaller geometries and more transistors. The ability to quickly image advanced chips has become extremely challenging. Maintaining this capability is critical for failure analysis, process manufacturing verification, and identification of counterfeit chips in these latest technologies,” said Carl E. McCants, RAVEN program manager at IARPA.

The goal of the RAVEN program is to develop a prototype analysis tool for acquiring images and reconstructing all layers (up to 13 metal layers) of a 10-nanometer integrated circuit chip within an analysis area of 1 square centimeter in less than 25 days. To be successful, the performer teams must create and integrate solutions to four primary challenges: image acquisition speed and resolution, rapid processing of extremely large files for image reconstruction, file manipulation and storage, and sample preparation.
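Those targets imply a demanding sustained imaging rate. A rough estimate, assuming an illustrative 5 nm pixel pitch (the announcement specifies no sampling resolution):

```python
# Back-of-envelope sustained pixel rate for the RAVEN target: 13 layers of
# a 1 cm^2 chip imaged in 25 days. The 5 nm pixel pitch is an assumed value.
PIXEL_NM = 5
CM_IN_NM = 1e7                                  # 1 cm = 1e7 nm
pixels_per_layer = (CM_IN_NM / PIXEL_NM) ** 2   # pixels covering 1 cm^2
total_pixels = 13 * pixels_per_layer
seconds = 25 * 24 * 3600
rate_mpix = total_pixels / seconds / 1e6
print(f"~{rate_mpix:.0f} Mpixel/s sustained")   # roughly 24 Mpixel/s
```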

The RAVEN program is divided into three phases. While each IARPA-funded research team offers a unique approach, the teams must achieve a demanding set of metrics for time, resolution, accuracy, and repeatability by the end of each phase.

Through a competitive Broad Agency Announcement process, IARPA has awarded research contracts in support of the RAVEN program to teams led by the University of Southern California-Information Sciences Institute, Varioscale, Inc., BAE Systems, and the Massachusetts Institute of Technology.

About IARPA

IARPA invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community. Additional information on IARPA and its research may be found at https://www.iarpa.gov.

Source: IARPA

The post IARPA Launches ‘RAVEN’ to Develop Rapid Integrated Circuit Imaging Tools appeared first on HPCwire.

Inspur Launches 16-GPU-Capable AI Computing Box

Thu, 04/27/2017 - 09:40

On April 26, 2017, Inspur and Baidu jointly launched the super-large-scale AI computing platform “SR-AI Rack” (SR-AI) for huge-scale datasets and deep neural networks at Inspur Partner Forum 2017 (IPF2017).

Inspur SR-AI Rack Computing Module

Compatible with China’s latest Scorpio 2.5 standard, Inspur SR-AI is the world’s first AI solution based on a PCIe fabric interconnect architecture. Coordination between the PCIe switch and the I/O box, together with physical decoupling and pooling of GPUs and CPUs, enables expansion nodes of 16 GPUs. The solution can support a maximum of 64 GPUs with a peak processing capability of 512 TFlops, 5-10 times faster than regular AI solutions, making it possible to support model training with hundreds of billions of samples and trillions of parameters.
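As a sanity check on the headline numbers (assuming the 512 TFlops figure is the aggregate peak across the maximum 64-GPU configuration; the release does not break it down per GPU):

```python
# Implied per-GPU peak from the SR-AI figures: 512 TFlops across a maximum
# of 64 GPUs, grouped into 16-GPU expansion nodes.
total_tflops = 512
max_gpus = 64
gpus_per_node = 16

per_gpu_tflops = total_tflops / max_gpus
num_nodes = max_gpus // gpus_per_node
print(f"{per_gpu_tflops} TFlops per GPU across {num_nodes} expansion nodes")
# prints "8.0 TFlops per GPU across 4 expansion nodes"
```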

Breaking the tight GPU/CPU coupling of traditional servers, Inspur SR-AI connects the uplink CPU computing/scheduling nodes and the downlink GPU box through PCIe switch nodes. This arrangement allows independent CPU/GPU expansion and avoids excessive component redundancy in traditional architecture upgrades. As a result, more than 5 percent of cost can be saved, an advantage that grows as the scale expands, since GPU expansion requires no high-cost IT resources.

Meanwhile, Inspur SR-AI is also a 100G RDMA GPU cluster. Its RDMA (Remote Direct Memory Access) technology can exchange GPU and memory data directly, without CPU involvement, achieving nanosecond-level network latency within the cluster, 50 percent faster than traditional GPU expansion methods.

SR-AI Rack Topology

Through continuous exploration of AI in recent years, Inspur has developed strong computing platforms and innovation capabilities. Currently, Inspur supplies the most diversified array of GPU servers (2U2/4/8) and accounted for more than 60 percent of the AI computing market share in 2016. Thanks to deep cooperation on systems and applications with Baidu, Alibaba, Tencent, iFLYTEK, Qihoo 360, Sogou, Toutiao, Face++, and other leading AI companies, Inspur helps customers achieve substantial improvements in application performance in voice, image, video, search, and networking workloads.

Inspur provides users and partners with advanced computing platforms, system management tools, performance optimization tools, and basic algorithm integration platform software, such as face and voice recognition and other common algorithm components, as well as the Caffe-MPI deep learning framework and the AI-Station deep learning management tool. In addition, Inspur offers integrated solutions for scientific research institutions and other general users. The integrated deep learning machine D1000, released in 2016, is a multi-GPU server cluster system running Caffe-MPI.

The annual Inspur Partner Forum is an important event for Inspur’s partners. IPF2017 was held at the Wuzhen Internet International Conference & Exhibition Center in Zhejiang province, China. The forum attracted around 2,000 partners from across the nation, including ISVs, SIs, and distributors from various sectors.

The post Inspur Launches 16-GPU-Capable AI Computing Box appeared first on HPCwire.

Mellanox Reports First Quarter 2017 Results

Thu, 04/27/2017 - 08:26

SUNNYVALE, Calif. & YOKNEAM, Israel, April 27, 2017 — Mellanox Technologies, Ltd. (NASDAQ: MLNX) has announced financial results for its first quarter ended March 31, 2017.

“Our first quarter InfiniBand revenues were down year-over-year, impacted by delays in the general availability of next generation x86 CPUs, seasonal trends in high-performance computing, and technology transitions occurring across several end users and OEM customers. We believe InfiniBand has maintained share in HPC, and expect revenues will see sequential growth in the coming quarters driven by current backlog and additional pipeline opportunities,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our first quarter Ethernet revenues grew across all product families sequentially, driven by the adoption of our 25/50/100 gigabit solutions. We expect 2017 to be a growth year for Mellanox.”

First Quarter 2017 -Highlights

  • Revenues of $188.7 million decreased 4.1 percent, compared to $196.8 million in the first quarter of 2016.
  • GAAP gross margins of 65.8 percent in the first quarter, compared to 64.2 percent in the first quarter of 2016.
  • Non-GAAP gross margins of 71.7 percent in the first quarter, compared to 71.4 percent in the first quarter of 2016.
  • GAAP operating loss was $12.6 million, compared to GAAP operating income of $3.9 million in the first quarter of 2016.
  • Non-GAAP operating income was $15.7 million, or 8.3 percent of revenue, compared to $41.3 million, or 21.0 percent of revenue in the first quarter of 2016.
  • GAAP net loss was $12.2 million, compared to GAAP net income of $7.2 million in the first quarter of 2016.
  • Non-GAAP net income was $14.7 million, compared to $39.3 million in the first quarter of 2016.
  • GAAP net loss per diluted share was $0.25 in the first quarter, compared to GAAP net income per diluted share of $0.15 in the first quarter of 2016.
  • Non-GAAP net income per diluted share was $0.29 in the first quarter, compared to $0.81 in the first quarter of 2016.
  • $35.0 million in cash was provided by operating activities, compared to $48.6 million in the first quarter of 2016.
  • Cash and investments totaled $325.2 million at March 31, 2017, compared to $328.4 million at December 31, 2016.

Second Quarter 2017 Outlook

We currently project:

  • Quarterly revenues of $205 million to $215 million
  • Non-GAAP gross margins of 70.5 percent to 71.5 percent
  • An increase in non-GAAP operating expenses of 3 percent to 5 percent
  • Share-based compensation expense of $17.3 million to $17.8 million
  • Non-GAAP diluted share count of 50.8 million to 51.3 million shares

Recent Mellanox Press Release Highlights

  • April 24, 2017: Mellanox InfiniBand Delivers up to 250 Percent Higher Return on Investment for High Performance Computing Platforms
  • April 19, 2017: Mellanox Announces New Executive Appointments
  • April 18, 2017: Mellanox 25Gb/s Ethernet Adapters Chosen By Major ODMs to Enable Next Generation Hyperscale Data Centers
  • March 20, 2017: Mellanox Doubles Silicon Photonics Ethernet Transceiver Speeds to 200Gb/s
  • March 20, 2017: Mellanox Introduces New 100Gb/s Silicon Photonics Optical Engine Product Line
  • March 16, 2017: Mellanox Ships More Than 200,000 Optical Transceiver Modules for Next Generation 100Gb/s Networks
  • March 8, 2017: Mellanox to Showcase Cloud Infrastructure Efficiency with Production-Ready SONiC over Spectrum Open Ethernet Switches
  • March 7, 2017: Mellanox Enables Industry’s First PCIe Gen-4 OpenPOWER-Based Rackspace OCP Server with 100Gb/s Connectivity
  • March 7, 2017: Mellanox Announces Industry-Leading OCP-Based ConnectX-5 Adapters for Qualcomm Centriq 2400 Processor-Based Platforms
  • Feb 26, 2017: Mellanox and ECI Smash Virtual CPE Performance Barriers with Indigo-Based Platform

About Mellanox

Mellanox Technologies is a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software, cables and silicon that accelerate application runtime and maximize business results for a wide range of markets including high-performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at www.mellanox.com.

Source: Mellanox

The post Mellanox Reports First Quarter 2017 Results appeared first on HPCwire.

Parallware Trainer 0.3 Now Available for Early Access

Thu, 04/27/2017 - 08:21

April 27, 2017 — Appentra today announced that Parallware Trainer 0.3 is now available through its Early Access Program. Users will have full and free access to the tool and will be able to learn parallel programming while improving their code.

New Features in Parallware Trainer 0.3

• New support for offloading to GPU and Xeon Phi using OpenMP 4.5

• New suggestions for guided parallelization:

  • Ranking of strategies to parallelize reduction operations (scalar and sparse reductions)

  • List of variables to offload to the accelerator that need user intervention (e.g. specifying array ranges for data transfers)

• New support for multiple compiler suites: Intel, GNU and PGI

• Improvements in usability for project management:

  • Display of the number of parallel versions of a sequential source file

  • Exclusion of files using regular expressions

  • Drag and drop from several file managers (e.g. Nautilus)

• Bugfixes in compilation and execution of Fortran source code

• Bugfixes in the GUI

Click here to join the Parallware Trainer Early Access Program.

Source: Appentra

The post Parallware Trainer 0.3 Now Available for Early Access appeared first on HPCwire.

HERMES Team Simulates Health, Economic Impacts of Heat-Stable Vaccines

Wed, 04/26/2017 - 15:46

PITTSBURGH, April 26, 2017 — Health care workers in low-income nations often have to deliver vaccines on rugged footpaths, via motorcycle or over river crossings. On top of this, vaccines must be kept refrigerated or they may degrade and become useless, which makes getting vaccines to the mothers and children who need them challenging.

That’s why researchers at Doctors Without Borders and the HERMES Logistics Team of the Global Obesity Prevention Center at the Johns Hopkins Bloomberg School of Public Health and the Pittsburgh Supercomputing Center at Carnegie Mellon University carried out the first computer simulation of the health and economic impacts of introducing heat-stable vaccines in India and in Benin and Niger in Africa. The simulation offered good news. Not only would vaccines that don’t require refrigeration help increase vaccination rates in these countries, the cost savings of decreased spoilage and improved health would more than cover the cost of making the vaccines stable, even at twice or three times the current cost per dose.

Click here to read the full release from Doctors Without Borders.

About PSC 

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

Source: PSC

The post HERMES Team Simulates Health, Economic Impacts of Heat-Stable Vaccines appeared first on HPCwire.

TACC Helps ROSIE Bioscience Gateway Expand its Impact

Wed, 04/26/2017 - 14:18

Biomolecule structure prediction has long been challenging, not least because the relevant software and workflows often require high-end HPC systems that many bioscience researchers cannot easily access. One bioscience gateway – ROSIE – has been established as part of XSEDE (Extreme Science and Engineering Discovery Environment) to expand access to the popular Rosetta suite of prediction software. So far 5,000 users have run more than 30,000 jobs, and ROSIE organizers hope recent additions will expand use further.

A fundamental issue here is that bioscience researchers often face the twin hurdles of limited computational expertise and limited access to HPC. ROSIE – the Rosetta Online Server that Includes Everyone (quite the name) – lets researchers run their jobs through a straightforward interface without necessarily knowing the work is being done on supercomputing resources such as TACC’s Stampede. The idea isn’t brand new; ROSIE is the latest evolution of what was the RosettaCommons.

An account (Rosetta Modeling Software and the ROSIE Science Gateway) of the expansion of ROSIE is posted on the TACC site.

Structure prediction is fundamental to much of bioscience research. Think of biomolecules as expert contortionists whose shape critically influences their function. For example, the 3D shape of a protein is critical to its function and is determined by the sequence of its constituent amino acids; however, predicting the shape from the amino acid sequence is (still) challenging and computationally intensive. The same can be said for many classes of biomolecules.

“One of the most widely used such [structure prediction] programs is Rosetta. Originally developed as a structure prediction tool more than 17 years ago in the laboratory of David Baker at the University of Washington, Rosetta has been adapted to solve a wide range of common computational macromolecular problems. It has enabled notable scientific advances in computational biology, including protein design, enzyme design, ligand docking, and structure predictions for biological macromolecules and macromolecular complexes,” according to the TACC article.

Jeffrey Gray, Johns Hopkins University

“The structure prediction problem is to take a sequence and ask, ‘What does it look like?'” said Jeffrey Gray, a professor of Chemical and Biomolecular Engineering at Johns Hopkins University and a collaborator on the project. “The design problem asks ‘What sequence would fold into this structure?’ That’s at the heart of Rosetta, but Rosetta does a lot of other things,” Gray said. Over the years, Rosetta evolved from a single tool, to a collection of tools, to a large collaboration called RosettaCommons, which includes more than 50 government laboratories, institutes, and research centers (only nonprofits).

Gray had used TACC resources as a graduate student in Texas in the late 1990s, so he knew about TACC and some of the other NSF supercomputing facilities. “We’ve been using Stampede and applied for it through XSEDE,” Gray said. “We have a Stampede allocation for my lab and we have a separate allocation for ROSIE.”

First described in PLOS One in May 2013, ROSIE continues to add new elements. In January 2017, a team of researchers, including Gray, reported in Nature Protocols on the latest additions to the gateway: antibody modeling and docking tools called RosettaAntibody and SnugDock that can run fully automated via the ROSIE web server or manually, with user control, on a personal computer or cluster.

Link to TACC article: https://www.tacc.utexas.edu/-/rosetta-modeling-software-and-the-rosie-science-gateway

The post TACC Helps ROSIE Bioscience Gateway Expand its Impact appeared first on HPCwire.

ASC17 Challenge Established A New HPL Record

Wed, 04/26/2017 - 09:40

On April 26, the very first day of the ASC Student Supercomputer Challenge (ASC17) finals, the team from Weifang University in China set a new student-competition HPL record of 31.70 TFLOPS. Weifang University is an ordinary college in China’s Shandong Province; this is the team’s second time participating in the ASC challenge and its first time reaching the finals.

Final of ASC17 Challenge

The HPL test in ASC17 has strict rules: competing teams must build a supercomputing system within a total power constraint of 3,000 W using equipment provided by the organizing committee, including Inspur supercomputing nodes, high-speed networks and self-configured accelerator cards. The team from Weifang University designed a heterogeneous supercomputing system using 5 Inspur supercomputing servers and 10 P100 GPU accelerator cards to achieve a sustained floating-point performance of 31.7 TFLOPS.

The Weifang University team leader, Cao Jian, said that they switched their team’s supercomputer design from a CPU to a GPU system just one day before the competition. Nevertheless, the team had prior experience with power-consumption testing on GPU cards in a cluster back at school, so they were confident even with the last-minute change in strategy. Ultimately, Weifang University was able to keep the power consumption of individual GPU cards within 173 W, more than 30% below the 250 W power rating, proving the team’s strong hands-on capabilities. According to Cao Jian, although breaking the student record for HPL performance gave them great joy, team members were most attracted by the opportunity to operate Sunway TaihuLight, China’s very own world number one supercomputer, and to exchange notes with and learn from supercomputing talents the world over.

Team Weifang University

Initiated in China, the ASC Student Supercomputer Challenge is the largest student supercomputer challenge in the world. This year’s ASC17 Challenge is organized by the ASC Community, Inspur, the National Supercomputing Center in Wuxi, and Zhengzhou University, with 230 teams from all over the world having taken part in the competition. The 20 finalist teams will design and build a cluster under 3,000 W using Inspur supercomputing nodes to run AI-based traffic prediction, Falcon, LAMMPS, Saturne and the HPL and HPCG benchmarks, and will run MASNUM on Sunway TaihuLight.

The post ASC17 Challenge Established A New HPL Record appeared first on HPCwire.

Pages