HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Women in HPC Luncheon Shines Light on Female-Friendly Hiring Practices

Thu, 07/13/2017 - 15:43

The second annual Women in HPC luncheon was held on June 20, 2017, during the International Supercomputing Conference in Frankfurt, Germany. The luncheon provides participants the opportunity to network with industry leaders and meet new contacts as well as brainstorm about ways to improve diversity and inclusivity for women within the HPC community.

As keynote speaker Angelo Apa of Lenovo noted, “If this was an easy problem to solve we would have fixed it already, so we need to generate ideas. My request of you today is, in exchange for lunch, help us solve these problems.”

Angelo Apa, Lenovo

Apa, technical sales and business development director in Lenovo’s Enterprise Business Group, shared the reality that between 18 and 30 percent of the tech industry workforce is female, depending upon the country. “This is rubbish any way you look at it,” he said. “We really need to do something about this.”

Lenovo’s numbers are better: 37 percent of its total global workforce is female, as are 14 percent of executives and 30 percent of managers. But, as Apa is quick to point out, these numbers are skewed by China, where representation of women is far above the global average.

“There is inherently a culture of diversity at all times in Lenovo,” said Apa, who’s worked in tech for over 30 years. The business was started in 1984 in Beijing by 10 people, seven men and three women. “Because they wanted to grow so quickly they realized they had to become multi-country, multi-cultural very quickly, so the whole diversity thing generally is pretty strong culturally inside of Lenovo. Now what we need to do is work on how we can develop that so we see a higher degree of general diversity than we see today in Europe.”

“We need to make sure that we are mirroring society because if we’re not, then we’re not relevant,” said Apa. “It doesn’t matter if you make the best servers in the world; it doesn’t matter if you make the cheapest servers in the world. It makes absolutely no difference whatsoever. If you’re not mirroring your customer, then that customer will not buy from you. So that’s something that we really need to work on from a business perspective as well as everything else.”

In 2007, Catherine Ladousse, executive director of communication EMEA at Lenovo, started the Women in Lenovo Leadership (WILL) program, which includes an executive training program that fast-tracks talented women in the company, providing them with individual coaching and professional development sessions.

At a WILL breakfast event in Milan held earlier this year, Apa was speaking with Paul Rector who runs the Lenovo global accounts business worldwide about recruiting women applicants. Apa’s job advertisements were failing to attract women. Rector recalled the advice of a hiring consultant he had worked with: write job descriptions in a female-friendly way.

“But surely a job description is just factual,” Apa thought. “What does it mean to write it in a female-friendly way?”

Apa recently went through a female recruiter to fill an open position, a pre-sales technical role. Going through all of the usual channels resulted in not a single female applicant.

“Can you help me understand what’s going on here and how to improve recruitment efforts?” Apa asked the room.

Rebecca Hartman-Baker, Berkeley Lab (center)

“Sounds like you need new channels,” said Rebecca Hartman-Baker of Berkeley Lab. “It’s like if you were going to go out fishing and you went to your fishing hole and there were no fish there, you’d find another fishing hole where you can find the fish.”

Allison Kennedy, co-founder and senior advisor of Women in HPC and director of the Hartree Centre, emphasized the need for more effective outreach. “I think you have to contact women directly, go through networks. Women are more likely to know other women.”

Another idea was to advertise with Women in HPC, which is completely free.

The potential for sexism on the part of the recruiter, even a female one, was also raised, along with the importance of implicit bias training.

The 70 or so attendees, mostly women along with a few male colleagues, were not short on ideas for crafting a female-friendly job description. If the job description specifies expert-level experience or cites an extensive list of requirements, women are more likely than men to take themselves out of the running. “Make sure you’re not making a laundry list where you’re looking for a purple unicorn with a rainbow tail, something that doesn’t exist,” added Hartman-Baker. “Women tend to look at a list like that and if they don’t have 100 percent of all of those qualifications they’re not going to apply. With men, it’s 40 percent. If they have 40 percent, they will apply.”

Micron’s Richard Murphy added that in his experience teaching undergraduate classes, even when the smartest person in the class is a woman, the person most likely to answer a question is a guy who was guessing. “So it does not surprise me that a woman would look at a list like that and think they had to check all of those boxes,” he said.

Toni Collis, EPCC and Women in HPC

Research conducted by Women in HPC backs up this point. A study of International HPC Summer School applicants found that even with similar experience levels, women consistently rated themselves lower than men when asked to rate their knowledge and skills.

“I think one of the barriers is when we present it as ‘you must be an expert’,” said Toni Collis, director of Women in HPC. “How many times have we read a job description where you must be an expert in MPI, and how many women would call themselves an expert compared to their male colleagues?”

Cristin Merrit, Alces Flight

Cristin Merrit, partner manager for Alces Flight, questioned the wisdom of gatekeeping based on formal technical education. “When you are hiring for pre-sales, you are already asking for hen’s teeth. You don’t want to shrink your pool before you even get out the door.” Here, cover letters allow applicants to explain their background and skill sets when they don’t fit traditional boxes.

In wrapping up the networking event, Collis shared, “It’s all well and good HR telling us what to do, but if it’s not working for us why are we doing it?”

“You’re here because you get this, right?” Collis told the audience. “The people who really need to be here aren’t here. So I ask you to go and be ambassadors for Women in HPC as an initiative. If you aren’t already a member, join right now and get your male colleagues to join.”

Membership for individuals is free and includes a monthly newsletter with updates on what Women in HPC is doing and why they’re doing it, which, according to Collis, is the crucial thing.

“I wake up every morning to change the world — that’s why I get out of bed in the morning, that’s why I do this,” said Collis. “Don’t get me wrong — it’s fantastic that occasionally I get to play with the fastest machines in the world, but the reason I do it is because I’m here to change the world. I think as a community, HPC and supercomputing, we spend an awful lot of time telling people outside of our community and the taxpayers at the end of the day that we do it because we can make this big machine; we forget the why, we forget the science that it enables, we forget the fact that really we can save lives with climate change studies and with weather prediction and so much more. We can do so much with HPC, but we don’t sell that message enough. And if we sell that message, it’s not just going to improve the situation with funding, it’s going to bring in women.

“I always use my husband as an example here,” Collis continued. “He’s in HPC as well. We’re two sides of the same coin. He is a computer scientist at heart, it’s where he belongs. He wakes up every morning because he just loves coding. I mean he just can’t get enough of it. I wake up every morning to change the world, the coding is just something that happens. But I do this because of what it can do and everybody is here for a different reason, but if we’re only given one side of it, the coding side, we’re missing a huge swath of people who are attracted by the fact that we make a difference. So please go away and be an ambassador.”

Women in HPC is entirely funded by donations and support, including that of EPCC, which set up Women in HPC. The Women in HPC luncheon was made possible through the support of ISC, PRACE, Lenovo and Xand McMahon. Find out more about Lenovo’s initiative to encourage gender diversity in the workplace at www.lenovowomen.com.

The post Women in HPC Luncheon Shines Light on Female-Friendly Hiring Practices appeared first on HPCwire.

Supercomputers Help Decode RNA Structure

Thu, 07/13/2017 - 12:09

ARGONNE, Ill., July 13, 2017 — A cure for cancer, HIV and other stubborn diseases has evaded the brightest minds for generations. But with supercomputers – computing systems that can calculate, analyze and visualize extremely large amounts of data – researchers are gaining a leg up in the fight for better treatments and cures.

Researchers at the National Cancer Institute (NCI) are using supercomputers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory to advance disease studies by enhancing our understanding of RNA, biological polymers that are fundamentally involved in health and disease.

In collaboration with staff from the Argonne Leadership Computing Facility (ALCF), researchers have perfected a technique that accurately computes the 3-D structure of RNA sequences. This method, which relies on a computer program known as RS3D running on Mira – the ninth fastest supercomputer in the world – gives researchers studying cancer and other diseases structural insights about relevant RNAs that can be used to advance computer-assisted drug design and development.

RNA not only functions as a messenger that carries DNA’s instructions for protein fabrication, but also plays a multifaceted role in regulating gene expression – such as when, where and how efficiently a gene is expressed. For this reason, researchers are actively seeking to understand the functions of novel RNA sequences. And to get a complete picture, they need to know the biologically active forms of RNA, which are reflected in the complex 3-D structures that RNA sequences fold into after they’re created.

“We already know the basic chemical groups for RNA and how they’re composed, but what we don’t know is what conformational structures they take,” said Wei Jiang, a researcher at the Argonne Leadership Computing Facility who is one of the computational leads in the project.

“Getting the real functional structure, which is the 3-D structure, is very difficult to do experimentally, because the RNA polymer is too flexible,” he said. “This is why we rely on computational simulation. Simulations can be used to explore hundreds or thousands of possible conformational states that would eventually lead us to the most likely 3-D structure.”

The computer program RS3D was developed by a National Cancer Institute research team, led by researcher Yun-Xing Wang and postdoctoral fellow Yuba Bhandari, and optimized by ALCF researchers to run on Mira; Jiang played a central role in scaling the RS3D code to run on a large fraction of Mira, which improved its performance significantly.

As an input, RS3D uses known RNA sequence information and experimental data from small angle X-ray scattering, a technique that provides important structural information, such as particle size and shape, based on the scattering pattern that is generated when X-ray beams are applied to a target sample. With these inputs, RS3D outputs a low-resolution 3-D image of the topological structure of RNA that provides the most likely folding patterns.
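The fitting idea described above can be sketched in miniature. The following Python is a hypothetical illustration, not the actual RS3D algorithm: it generates random coarse-grained candidate conformations, computes each one's scattering profile with the classic Debye formula, and keeps the candidate whose profile best fits a synthetic "experimental" SAXS curve. All function names, the bead model and the parameters are invented for this sketch.

```python
import numpy as np

def pair_distances(coords):
    """All pairwise distances between coarse-grained residue beads."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return d[np.triu_indices(len(coords), k=1)]

def scattering_profile(coords, q):
    """Debye formula for a uniform-bead model: I(q) = N + 2*sum sin(q*d)/(q*d)."""
    d = pair_distances(coords)
    qd = np.outer(q, d)
    return len(coords) + 2.0 * np.sinc(qd / np.pi).sum(axis=1)

def chi_square(i_model, i_exp, sigma):
    """Goodness of fit after optimal linear scaling of the model curve."""
    scale = (i_exp * i_model / sigma**2).sum() / (i_model**2 / sigma**2).sum()
    return (((i_exp - scale * i_model) / sigma) ** 2).mean()

rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.5, 50)  # scattering vector magnitudes (illustrative units)

# Pretend "experimental" curve computed from a reference 20-bead fold.
target = rng.normal(size=(20, 3)).cumsum(axis=0) * 5.0
i_exp = scattering_profile(target, q)
sigma = 0.01 * i_exp

# Explore random candidate conformations; keep the best-scoring one.
best = min(
    (rng.normal(size=(20, 3)).cumsum(axis=0) * 5.0 for _ in range(200)),
    key=lambda c: chi_square(scattering_profile(c, q), i_exp, sigma),
)
print(chi_square(scattering_profile(best, q), i_exp, sigma))
```

In a production method, the random sampling would be replaced by physics-based conformational simulation and the candidate pool would number in the thousands, which is where a machine like Mira comes in.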

“Since the biologically active form of RNA is a 3-D structure, going from understanding the primary sequence and the two-dimensional layout of an RNA to understanding the 3-D form is a big stepping-stone that gives us a lot of useful information about biological functions,” said Bhandari, one of the leaders of the project. “Understanding the structural basis provides a foundation for further investigating molecular interactions and biological pathways in various diseases.”

The researchers validated their technique by using it to compute the 3-D structures of 18 RNA polymers whose structures are known. These select RNAs fold into a wide variety of structures that represent common folding architectures. Additionally, researchers used RS3D along with experimental data recorded at Argonne’s synchrotron light source, the Advanced Photon Source, to compute the structure of the adenine riboswitch, an RNA known to regulate gene expression.

“One of the unique and advantageous features of this technique is the fact that it’s fully automated, meaning it does not require the user to input an initial 3-D structural template to work. This sets it apart from other methods that perform similar calculations,” Bhandari said. “This helps us eliminate any potential limitations or biases that could be introduced through a template, and make the whole approach easier to apply.”

The researchers are now in the process of publishing their technique; the source code will be made available to researchers thereafter. A brief summary of their computational work, presented in an article titled “Modeling RNA topological structures using small angle X-ray scattering,” is published in Methods.

This work is funded by the Intramural Research Programs of the National Cancer Institute. This work employed resources at the Argonne Leadership Computing Facility and the Advanced Photon Source, both DOE Office of Science User Facilities. Experimental data for adenine riboswitch RNA was recorded at Sector 12 of the Advanced Photon Source. Computing time was awarded through the ALCF’s Director’s Discretionary Program.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.

Source: Joan Koka, ANL

The post Supercomputers Help Decode RNA Structure appeared first on HPCwire.

Netcope Releases New 200G Programmable Smart NIC

Thu, 07/13/2017 - 11:52

CZECH REPUBLIC, Prague, July 13, 2017 — Netcope Technologies, a company specializing in the development and distribution of high-speed network solutions, has announced the release of a new product in its NFB product line. The new NFB-200G2QL is a smart network interface card (smart NIC) ready to process packets at 200 Gbps.

The new network card is powered by the latest Xilinx FPGA chip, the Virtex UltraScale+, a high-performance device that unlocks the card’s capability of transferring 200 Gbps of data to software at wire speed, with zero packet loss. It can effectively distribute traffic to the two CPUs in a dual-CPU system, bypassing QPI, which is often considered a bottleneck. It connects to one or two PCIe Gen3 x16 slots. Last but not least, it comes in a low-profile design, so it fits into smaller servers.

OEM manufacturers of security solutions, communication service providers, pioneers of the NFV industry and even traders on electronic exchanges can benefit from this dramatic improvement in bandwidth and processing power on the hardware side of Netcope’s offering: packet processing acceleration on the fastest networks. In essence, Netcope’s solutions comprise hardware and firmware that work together as the point of contact between the network and the server’s CPUs, where they preprocess traffic to save valuable CPU cycles and reduce the total cost of ownership of the final product.

“The new low-profile card leverages a higher density of computing resources, and thanks to the latest manufacturing technologies it reduces the overall power consumption of our customers’ solutions,” says Viktor Puš, CTO of Netcope. “The programmability of packet processing in P4, or the use of the C language to describe low-latency trading strategies, helps engineers integrate this new powerful card into their solutions without extensive knowledge of FPGA technology,” he adds.

About Netcope

Netcope Technologies is a leading manufacturer and provider of high-performance network solutions. We excel in packet capture and packet processing technologies and low-latency trading solutions. Our focus is on delivering state-of-the-art solutions for high-speed and low-latency networks. Our products are deployed world-wide. The Netcope Technologies portfolio covers the whole field of products for the hardware acceleration of network traffic processing using FPGA technology. These products are ideal for all OEM vendors, R&D customers and end customers to build, develop and deploy hardware-accelerated solutions. The use of our network adapters with advanced features enables our customers and partners to gain a competitive advantage in the world of high-speed and low-latency networks.

Source: Netcope

The post Netcope Releases New 200G Programmable Smart NIC appeared first on HPCwire.

NEC Partners with ebb3 for HPC Virtualization

Thu, 07/13/2017 - 10:32

MANCHESTER, LONDON, July 13, 2017 — NEC Europe Ltd. today announced a strategic partnership with technology services business ebb3 to deploy 3D business applications virtually using ebb3’s innovative technology.

The partnership will combine ebb3’s in-depth knowledge of networks and virtualisation with NEC’s global market presence, bringing a High Performance Virtual Computer (HPVC) to NEC’s EMEA customer base.

Mark Vickers, CEO of ebb3, said: “We’re proud to be supporting NEC in all markets where ebb3 can enable more specialists with the ability to innovate their working practices. Accessing the most powerful applications remotely has been complicated and challenging to manage until now, and we’re pleased to bring a transformative solution to NEC’s customer base.”

The solution will be used in the manufacturing industry, which is dependent on high powered workstations for Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) software. Until now, delivering the functionality of this software remotely has been complex, limiting workstation users to fixed locations.

The companies will work together to provide the manufacturing industry with the ability to run 3D graphical applications that require very high levels of computational power from any location with a minimum of a 4G data connection. The solution will also be used in other sectors reliant on such workstations, including oil and gas, engineering, architecture, construction, automotive and design.

Mark Jackson, Head of Manufacturing Industrial Solutions Europe at NEC Europe Ltd., said: “We are delighted to work with ebb3 to deliver workstation-class performance to our customers on any device, wherever they are and whatever specialism they happen to be in. The partnership with ebb3 will initially allow our manufacturing customers to use the most innovative technology available, providing cutting-edge graphics technology to transform the way they work, collaborate and access data securely.”

The partnership represents a key stage in the growth of both companies, and follows ebb3’s recent announcement of a £1m investment from Maven Capital Partners. As a result of the strategic partnership, ebb3 and NEC plan to solidify their presence as key Digital Transformation enablers in the manufacturing sector.

About NEC

NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross utilise the company’s experience and global resources, NEC’s advanced technologies meet the complex and ever-changing needs of its customers. NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society.

Source: NEC

The post NEC Partners with ebb3 for HPC Virtualization appeared first on HPCwire.

IRON Deploys ADVA FSP 3000 in R&E Network

Thu, 07/13/2017 - 10:25

BOISE, Idaho, July 13, 2017 — ADVA Optical Networking announced today that the Idaho Regional Optical Network (IRON) has deployed its 100Gbit/s core technology to respond to soaring bandwidth demand from Idaho’s research and education (R&E) institutions. The upgraded backbone network delivers secure high-capacity services across the state, including remote rural areas. The new solution features ADVA Optical Networking’s flexible transport technology, enabling IRON to provide universities, laboratories and health care centers with 10Gbit/s services. Built on the ADVA FSP 3000, the network offers phenomenal ease of use and massive scalability, ensuring that IRON’s infrastructure will satisfy the needs of the R&E community both now and in the future. IRON has also subscribed to ADVA Optical Networking’s Bronze Hardware and Software Maintenance package for technical support and extended repair coverage.

“When it came to upgrading our infrastructure, ADVA Optical Networking’s 100Gbit/s core technology offered precisely what we were looking for. One key benefit is the ADVA FSP 3000’s plug-and-play simplicity. Integrating the new equipment into our network was straightforward and we were immediately able to deliver upgraded services. Future-proofing was also a vital requirement so we’ve invested in a solution that can scale alongside rising demand,” said Michael Guryan, general manager, IRON. “Fast, reliable connectivity is an essential tool for today’s R&E institutions. By giving learners and scientists throughout the state access to high-bandwidth applications and enhanced data sharing, we’re closing the digital divide in remote areas, creating invaluable opportunities for research teams and helping to boost the region’s economy.”

By utilizing the ADVA FSP 3000 in its backbone network, IRON is minimizing costs and ensuring maximum efficiency across every section of its transport infrastructure. The new backbone network now supports 100Gbit/s transport so that higher education institutions and research centers even in remote rural areas can take advantage of ultra-fast broadband services. This enables technical data transfer and storage objectives that would otherwise be impossible in many regions. ADVA Optical Networking’s modular solution provides flexibility and cost-efficiency as it transmits, multiplexes and protects high-speed data. Its high-density design guarantees power efficiency and the smallest possible footprint. What’s more, the ADVA FSP 3000’s scalable modular architecture ensures that IRON’s new network is future-proofed against further growth in demand.

“Research and education communities need to be at the cutting edge of technology. With this deployment, IRON is ensuring that laboratories and academic institutions across the state can access high-bandwidth applications and further push the boundaries of what’s possible,” commented John Scherzinger, senior VP, sales, North America, ADVA Optical Networking. “Built on our FSP 3000, this new transport network has the capacity, resiliency and scalability IRON needs to help Idaho play a key role in the global community of learning and scientific discovery. With the capacity to quickly and easily turn up new services and the ability to scale to 100Gbit/s and beyond, IRON’s new infrastructure will satisfy the needs of universities, laboratories and research hospitals for a long time to come.”

Watch this video for information on the ADVA FSP 3000: http://adva.li/3dfsp3000.

Source: ADVA

The post IRON Deploys ADVA FSP 3000 in R&E Network appeared first on HPCwire.

Cray to Provide Urika-GX System to Alan Turing Institute

Thu, 07/13/2017 - 10:20

SEATTLE, Wash., July 13, 2017 — Cray Inc. (Nasdaq: CRAY) today announced the Company will provide a Cray Urika-GX system to the Alan Turing Institute through a collaboration between Cray, Intel, and the Institute. Hosted at the University of Edinburgh in the Edinburgh Parallel Computing Centre (EPCC), the Cray Urika-GX system will provide researchers at the Alan Turing Institute with a dedicated analytics hardware platform, enabling the development of advanced applications across a number of scientific fields including engineering and technology, defense and security, smart cities, financial services and life sciences.

The Alan Turing Institute is the United Kingdom’s national institute for data science, and brings together researchers from a range of disciplines to tackle core challenges in data science theory and application. The Institute is named in honor of Alan Turing, whose pioneering work spanned theoretical and applied mathematics, engineering and computing, the disciplines considered to comprise data science. With the addition of the Cray Urika-GX system, the Institute’s researchers will have access to Cray’s agile analytics platform, which fuses supercomputing technologies with an open, enterprise-ready software framework for big data analytics.

“The Alan Turing Institute was created to advance the world-changing potential of data science,” said Sir Alan Wilson, CEO of the Alan Turing Institute. “Our researchers require powerful computing technology in order to enable their research, and the Cray system, based in the University of Edinburgh, one of our founding university partners, will be an important addition to the Turing’s data science toolkit. We look forward to opening it up to our community of researchers and enabling their innovation to thrive.”

“The Alan Turing Institute is quickly becoming a major force in the data sciences community worldwide, and we are thrilled the Cray Urika-GX system will support the Institute’s mission of advancing data science research to change the world for the better,” said Peter Ungaro, president and CEO of Cray. “The rise of data-intensive computing – where big data analytics, artificial intelligence, and supercomputing converge – has opened up a new domain of real-world, complex analytics applications, and the Cray Urika-GX gives our customers a powerful platform for solving this new class of data-intensive problems.”

“The convergence of HPC and analytics are unleashing a global wave of discovery and innovation. The Alan Turing Institute aims to use advanced data science and powerful technology to improve the lives of everyone,” said Trish Damkroger, Vice President of Technical Computing at Intel. “Solution leaders like Cray with their Intel-based Cray Urika-GX system provide the technical foundation to deliver the ever-increasing capabilities needed by leading researchers and scientists.”

The Cray Urika-GX agile analytics platform features a scalable analytics software environment designed to support large-scale data science activity. An exclusive feature of the Cray Urika-GX system is the Cray Graph Engine, which leverages the high-speed Aries network interconnect to provide unprecedented, large-scale graph pattern matching and discovery operations across complex collections of data. Also supported are the Apache® Spark™ cluster engine and the Apache Hadoop® software library, both included to provide the tools necessary for large-scale analytics and machine learning operations. When combined, the three environments – Spark, Hadoop and the Cray Graph Engine – enable customers to build complete end-to-end analytics workflows and avoid unnecessary data movement. Underlying the analytics stack is an open high-performance system featuring the Intel® Xeon® processor E5 v4 product family, up to 22 terabytes of DRAM memory, and up to 176 terabytes of local Intel P3700 series SSD storage capacity.

For more information on the Cray Urika-GX system, please visit the Cray website at www.cray.com. 

About Cray Inc.
Global supercomputing leader Cray Inc. (Nasdaq: CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray

The post Cray to Provide Urika-GX System to Alan Turing Institute appeared first on HPCwire.

Satellite Advances, NSF Computation Power Rapid Mapping of Earth’s Surface

Thu, 07/13/2017 - 09:50

New satellite technologies have completely changed the game in mapping and geographical data gathering, reducing costs and placing a new emphasis on time series and timeliness in general, according to Paul Morin, director of the Polar Geospatial Center at the University of Minnesota.

In the second plenary session of the PEARC conference in New Orleans on July 12, Morin described how access to the DigitalGlobe satellite constellation, the NSF XSEDE network of supercomputing centers and the Blue Waters supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign have enabled his group to map Antarctica—an area of 5.4 million square miles, compared with the 3.7 million square miles of the “lower 48” United States—at 1-meter resolution in two years. Nine months later, then-president Barack Obama announced a joint White House initiative involving the NSF and the National Geospatial Intelligence Agency (NGIA) in which Morin’s group mapped a similar area in the Arctic including the entire state of Alaska in two years.

“If I wrote this story in a single proposal I wouldn’t have been able to write [any proposals] afterward,” Morin said. “It’s that absurd.” But the leaps in technology have made what used to be multi-decadal mapping projects—when they could be done at all—into annual events, with even more frequent updates soon to come.

The inaugural Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Sustainability, Success and Impact—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. Organizations supporting this new HPC conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Extreme Science and Engineering Development Environment (XSEDE), the Science Gateways Community Institute, the Campus Research Computing (CaRC) Consortium, the Advanced CyberInfrastructure Research and Education Facilitators (ACI-REF) consortium, the National Center for Supercomputing Applications’ Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC) and Internet2.

Follow the Poop

One project made possible with the DigitalGlobe constellation—a set of Hubble-like multispectral orbiting telescopes “pointed the other way”—was a University of Minnesota census of emperor penguin populations in Antarctica.

“What’s the first thing you do if you get access to a bunch of sub-meter-resolution [orbital telescopes covering] Antarctica?” Morin asked. “You point them at penguins.”

Thanks in part to a lack of predators, the birds over-winter on the ice, huddling in colonies for warmth. Historically these colonies were discovered by accident: Morin’s project enabled the first continent-wide survey to find and estimate the population size of all the colonies.

The researchers realized that they had a relatively easy way to spot the colonies in the DigitalGlobe imagery: Because the penguins eat beta-carotene-rich krill, their excrement stains the ice red.

“You can identify their location by looking for poo,” Morin said. The project enabled the first complete population count of emperor penguins: 595,000 birds, plus or minus 14 percent.
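As a toy illustration of this kind of multispectral screening (not the group's actual method), stained ice can be flagged with a simple band-ratio threshold. The band arrays, values and threshold below are invented for the sketch:

```python
import numpy as np

def flag_guano_pixels(red, green, blue, red_ratio=1.3):
    """Flag pixels where the red band clearly dominates the other
    visible bands -- a crude proxy for krill-stained guano on ice."""
    eps = 1e-6  # avoid division by zero on dark pixels
    ratio = red / (np.maximum(green, blue) + eps)
    return ratio > red_ratio

# Toy 3x3 "scene": only the center pixel is strongly red-dominant.
red   = np.array([[0.2, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.2]])
green = np.array([[0.2, 0.2, 0.2], [0.2, 0.3, 0.2], [0.2, 0.2, 0.2]])
blue  = green.copy()

mask = flag_guano_pixels(red, green, blue)
print(int(mask.sum()))  # → 1
```

A real workflow would operate on calibrated multispectral reflectance and validate any threshold against known colony locations.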

“We started to realize we were onto something,” he added. His group began to wonder if they could leverage the sub-meter-resolution, multispectral, stereo view of the constellation’s WorldView I, II and III satellites to derive the topography of the Antarctic, and later the Arctic. One challenge, he knew, would be finding the computational power to extract topographic data from the stereo images in a reasonable amount of time. He found his answer at the NSF and the NGIA.

“We proposed to a science agency and a combat support agency that we were going to map the topography of 30 degrees of the globe in 24 months.”

Blue Waters on the Ice

Morin and his collaborators found themselves in the middle of a seismic shift in topographic technology.

“Eight years ago, people were doing [this] from the ground,” with a combination of land-based surveys and accurate but expensive LIDAR mapping from aircraft, he said. These methods made sense in places where population and industrial density made the cost worthwhile. But this had left the Antarctic and Arctic largely unmapped.

Deriving topographic information from the photographs posed a computational problem well beyond the capabilities of a campus cluster. The group did initial computations at the Ohio Supercomputer Center, but needed to expand for the final data analysis. In 2014, XSEDE Project Director John Towns offered XSEDE’s help in tackling the massive scale of data that would come from an array of satellites collecting topographic images. From 2014 to 2015, Morin used XSEDE resources, most notably Gordon at the San Diego Supercomputer Center and XSEDE’s Extended Collaborative Support Service, to carry out his initial computations. XSEDE then helped his group acquire an allocation on Blue Waters, an NSF-funded Cray Inc. system at Illinois and NCSA with 49,000 CPUs and a peak performance of 13.3 petaflops.
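The geometric core of extracting elevation from stereo imagery can be sketched with the textbook rectified-stereo relation Z = f·B/d (depth = focal length × baseline / disparity). The actual satellite photogrammetry pipeline is far more elaborate, and the numbers below are purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo depth via similar triangles: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: separation of the
    two viewpoints in meters; disparity_px: horizontal pixel shift
    of the same ground point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point shifting 10 px between views taken 0.5 m apart at f = 1000 px:
print(depth_from_disparity(1000.0, 0.5, 10.0))  # → 50.0
```

Elevation then follows by subtracting depth from the sensor altitude; repeating this per pixel across continental imagery is what demanded petascale resources.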

Collecting the equivalent area of California daily, a now-expanded group of subject experts made use of the polar-orbiting satellites and Blue Waters to derive elevation data. They completed a higher-resolution map of Alaska—the earlier version of which had taken the U.S. Geological Survey 50 years—in a year. While the initial images are licensed for U.S. government use only, the group was able to release the resulting topographic data for public use.

Mapping Change

Thanks to the one-meter resolution of their initial analysis, the group quickly found they could identify many man-made structures on the surface. They could also spot vegetation changes such as clearcutting. They could even quantify vegetation regrowth after replanting.

“We’re watching individual trees growing here.”

Another set of images he showed in his PEARC17 presentation were before-and-after topographic maps of Nuugaatsiaq, Greenland, which was devastated by a tsunami last month. The Greenland government is using the images, which show both human structures and the landslide that caused the 10-meter tsunami, to plan recovery efforts.

The activity of the regions’ ice sheets was a striking example of the technology’s capabilities.

“Ice is a mineral that flows,” Morin said, and so the new topographic data offer much more frequent information about ice-sheet changes driven by climate change than previously available. “We not only have an image of the ice but we know exactly how high it is.”

Morin also showed an image of the Larsen Ice Shelf revealing a crack that had appeared in the glacier. The real news, though, was that the crack—which created an iceberg the size of the big island of Hawaii—was less than 24 hours old. It had appeared sometime after midnight on July 12.

“We [now] have better topography for Siberia than we have for Montana,” he noted.

New Directions

While the large, high-resolution satellites have already transformed the field, further innovations are coming that could create another shift, Morin said.

“This is not your father’s topography,” he noted. “Everything has changed; everything is time sensitive; everything is on demand.” In an interview later that morning, he added, “XSEDE, Blue Waters and NSF have changed how earth science happens now.”

One advance won’t require new technology, just a little more time. While the current topographic dataset is at 1-meter resolution, the resolution can be tightened with more computation. The satellite images actually have 30-centimeter resolution, which would allow the project to shift from imaging objects the size of automobiles to those the size of a coffee table.

At that point, he said, “instead of [just the] presence or absence of trees we’ll be able to tell what species of tree. It doesn’t take re-collection of imagery; it just takes reprocessing.”

The new, massive constellations of CubeSats, such as the Planet company’s toaster-sized Dove satellites now being launched, promise an even more disruptive advance. A swarm of these satellites will provide much more frequent coverage of the entire Earth’s surface than is possible with the large telescopes.


“The quality isn’t as good, but right now we’re talking about coverage,” Morin said. His group’s work has taken advantage of a system that allows mapping of a major portion of the Earth in a year. “What happens when we have monthly coverage?”

Feature image caption: Buildings in Juneau, Alaska, as shown in the University of Minnesota topographic survey of the Arctic region. The airport runway can be seen at the bottom.


Ken Chiacchia, Senior Science Writer, Pittsburgh Supercomputing Center

Tiffany Jolley, Content Producer, National Center for Supercomputing Applications

The post Satellite Advances, NSF Computation Power Rapid Mapping of Earth’s Surface appeared first on HPCwire.

Intel Skylake: Xeon Goes from Chip to Platform

Thu, 07/13/2017 - 09:36

With yesterday’s New York unveiling of the new “Skylake” Xeon Scalable processors, Intel made multiple runs at multiple competitive threats and strategic markets. Skylake will carry Intel’s flag in the fight for leadership in the emerging advanced data center encompassing highly demanding network workloads, cloud computing, real time analytics, virtualized infrastructures, high-performance computing and artificial intelligence.

Most interesting, Skylake takes a big step toward accommodating what one industry analyst has called “the wild west of technology disaggregation,” life in the post-CPU-centric era.

“What surprised me most is how much platform goodness Intel brought to the table,” said industry watcher Patrick Moorhead, Moor Insights & Strategy, soon after the launch announcement. “I wasn’t expecting so many enhancements outside of the CPU chip itself.”

In fact, Moorhead said, Skylake turns Xeon into a platform, one that “consists of CPUs, chipset, internal and external accelerators, SSD flash and software stacks.”

The successor to the Intel Xeon processor E5 and E7 product lines, Skylake has up to 28 high-performance cores and provides platform features with, according to Intel, significant performance increases, including:

  • Artificial Intelligence: Delivers 2.2x higher deep learning training and inference compared to the previous generation, according to Intel, and 113x deep learning performance gains compared to a three-year-old non-optimized server system when combined with software optimizations accelerating delivery of AI-based services.
  • Networking: Delivers up to 2.5x increased IPSec forwarding rate for networking applications compared to the previous generation when using Intel QuickAssist Technology and the Data Plane Development Kit (DPDK).
  • Virtualization: Operates up to approximately 4.2x more virtual machines versus a four-year-old system, enabling faster service deployment, better server utilization, lower energy costs and improved space efficiency.
  • High Performance Computing: Provides up to a 2x FLOPs/clock improvement with Intel AVX-512 (the 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture) as well as integrated Intel Omni-Path Architecture ports, delivering improved compute capability, I/O flexibility and memory bandwidth, Intel said.
  • Storage: Processes up to 5x more IOPS while reducing latency by up to 70 percent versus out-of-the-box NVMe SSDs when combined with Intel Optane SSDs and Storage Performance Development Kit, making data more accessible for advanced analytics.
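The 2x FLOPs/clock figure for AVX-512 follows from doubling the SIMD width from 256 to 512 bits. A back-of-the-envelope peak-FLOPS sketch, assuming double precision, two FMA units per core, and a fixed clock (a simplification: AVX-512 workloads often run at reduced frequencies, and FMA-unit counts vary by SKU):

```python
def peak_gflops(cores, ghz, simd_bits, fma_units=2, flops_per_fma=2):
    """Theoretical double-precision peak: cores * clock * DP lanes
    * FMA units * 2 FLOPs (multiply + add) per FMA."""
    dp_lanes = simd_bits // 64  # 64-bit doubles per vector register
    return cores * ghz * dp_lanes * fma_units * flops_per_fma

avx2_peak   = peak_gflops(28, 2.5, 256)  # 256-bit vectors
avx512_peak = peak_gflops(28, 2.5, 512)  # 512-bit vectors
print(avx512_peak, avx512_peak / avx2_peak)  # → 2240.0 2.0
```

The ratio, not the absolute number, is the robust part of this arithmetic; delivered FLOPS depend on sustained clocks and memory bandwidth.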

Overall, Intel said, Skylake delivers a performance increase of up to 1.65x versus the previous generation of Intel processors, and up to 5x on OLTP warehouse workloads versus the current install base.

The company also introduced Intel Select Solutions, aimed at simplifying deployment of data center and network infrastructure, with initial solutions delivered on Canonical Ubuntu, Microsoft SQL 16 and VMware vSAN 6.6. Intel said this is an expansion of its Intel Builders ecosystem collaborations; it will offer Intel-verified configurations for specific workloads, such as machine learning inference, sold and marketed as packages by OEMs and ODMs under the “Select Solution” sub-brand.

Intel said the Xeon Scalable platform is supported by an ecosystem of hundreds of partners, more than 480 Intel Builders and 7,000-plus software vendors, including support from Amazon, AT&T, BBVA, Google, Microsoft, Montefiore, Technicolor and Telefonica.

But it’s Intel’s support for multiple processing architectures that drew the most attention.

Moorhead said Skylake enables heterogeneous compute in several ways. “First off, Intel provides the host processor, a Xeon, as you can’t boot to an accelerator. Inside of Xeon, they provide accelerators like AVX-512. Inside Xeon SoCs, Intel has added FPGAs. The PCH contains a QAT accelerator. Intel also has PCIe accelerator cards for QAT and FPGAs.”

In the end, Moorhead said, the Skylake announcement is directed at datacenter managers “who want to run their apps and do inference on the same machines using the new Xeons.” He cited Amazon’s support for this approach, “so it has merit.”

The post Intel Skylake: Xeon Goes from Chip to Platform appeared first on HPCwire.

Study Demonstrates Potential for AI and Whole Genome Sequencing

Wed, 07/12/2017 - 13:27

NEW YORK, July 11, 2017 — In a study published today in the July 11, 2017 issue of Neurology Genetics, an official journal of the American Academy of Neurology, researchers at the New York Genome Center (NYGC), The Rockefeller University and other NYGC member institutions, and IBM (NYSE: IBM) have illustrated the potential of IBM Watson for Genomics to analyze complex genomic data from state-of-the-art DNA sequencing of whole genomes. The study compared multiple techniques – or assays – used to analyze genomic data from a glioblastoma patient’s tumor cells and normal healthy cells.

The proof of concept study used a beta version of Watson for Genomics technology to help interpret whole genome sequencing (WGS) data for one patient. In the study, Watson was able to provide a report of potential clinically actionable insights within 10 minutes, compared to 160 hours of human analysis and curation required to arrive at similar conclusions for this patient.

The study also showed that WGS identified more clinically actionable mutations than the current standard of examining a limited subset of genes, known as a targeted panel. WGS currently requires significantly more manual analysis, so combining this method with artificial intelligence could help doctors identify potential therapies from WGS for more patients in less time.

Interpretation of genome sequencing data is a significant challenge because of the volume of genomic data to sift through, as well as the large, growing body of research on molecular drivers of cancer and potential targeted therapies. This informatics challenge is often a critical bottleneck when dealing with deadly cancers such as glioblastoma, with a median survival of less than 15 months following diagnosis.

“Our partnership has explored cutting-edge challenges and opportunities in harnessing genomics to help cancer patients. We provide initial insights into two critical issues: what clinical value can be extracted from different commercial and academic cancer genomic platforms, and how to think about scaling access to that value,” noted the study’s Principal Investigator, Robert Darnell, MD, PhD, Robert and Harriet Heilbrunn Professor and Senior Attending Physician at The Rockefeller University and Founding Director of the New York Genome Center.

In the study, NYGC researchers and bioinformatics experts analyzed DNA and RNA from a glioblastoma tumor specimen and DNA from the patient’s normal blood, and compared potentially actionable insights to those derived from a commercial targeted panel that had previously been performed. The whole genome and RNA sequencing data were analyzed by a team of bioinformaticians and oncologists at the NYGC as well as a beta version of IBM Watson for Genomics, an automated system for prioritizing somatic variants and identifying potential therapies.

The beta version of Watson for Genomics processed abstracts and in some cases, full text articles from PubMed, a comprehensive source of more than 27 million citations for biomedical literature. With this information, the NYGC and Watson collaborated to identify gene alterations that can be therapeutically targeted.

“This study documents the strong potential of Watson for Genomics to help clinicians scale precision oncology more broadly,” said Vanessa Michelini, Watson for Genomics Innovation Leader, IBM Watson Health. “Clinical and research leaders in cancer genomics are making tremendous progress towards bringing precision medicine to cancer patients, but genomic data interpretation is a significant obstacle, and that’s where Watson can help.”

The study was part of the NYGC’s and its Institutional Founding Members’ ongoing efforts to advance the use of next-generation sequencing, particularly WGS, in precision medicine. The NYGC and its founding member institutions are conducting additional studies involving Watson to help accelerate the discovery of potentially actionable sequence variants in various types of cancer, including an ongoing study that involves DNA and RNA from a larger cohort of glioblastoma patients, and a study of 200 patients with different types of cancer.

This study, conducted from 2015-2016, utilized a beta version of Watson for Genomics, which is now commercially available for genomic data interpretation through partnerships with Quest Diagnostics, Illumina, or as a cloud-based software for clinicians and researchers. Watson for Genomics is also used in clinical practice at the VA Health System.

About the New York Genome Center
The New York Genome Center is an independent, nonprofit academic research institution at the forefront of transforming biomedical research with the mission of advancing clinical care. A collaboration of premier academic, medical and industry leaders across the globe, the New York Genome Center has as its goal to translate genomic research into the development of new treatments, therapies and therapeutics against human disease. NYGC member organizations and partners are united in this unprecedented collaboration of technology, science and medicine, designed to harness the power of innovation and discoveries to advance genomic services. Their shared objective is the acceleration of medical genomics and precision medicine to benefit patients around the world. For more information, visit our website at http://www.nygenome.org.

Member institutions include: Albert Einstein College of Medicine, American Museum of Natural History, Cold Spring Harbor Laboratory, Columbia University, Hospital for Special Surgery, The Jackson Laboratory, Memorial Sloan Kettering Cancer Center, Icahn School of Medicine at Mount Sinai, NewYork-Presbyterian Hospital, The New York Stem Cell Foundation, New York University, Northwell Health, Princeton University, The Rockefeller University, Roswell Park Cancer Institute, Stony Brook University, Weill Cornell Medicine and IBM.

About IBM Watson Health
Watson is the first commercially available cognitive computing capability representing a new era in computing. The system, delivered through the cloud, analyzes high volumes of data, understands complex questions posed in natural language, and proposes evidence-based answers. Watson continuously learns, gaining in value and knowledge over time, from previous interactions. In April 2015, the company launched IBM Watson Health and the Watson Health Core cloud platform (now Watson Platform for Health). The new unit will help improve the ability of doctors, researchers and insurers to innovate by surfacing insights from the massive amount of personal health data being created and shared daily. The Watson Platform for Health can mask patient identities and allow for information to be shared and combined with a dynamic and constantly growing aggregated view of clinical, research and social health data. For more information on IBM Watson, visit: ibm.com/watson. For more information on IBM Watson Health, visit: ibm.com/watsonhealth.

Source: IBM

The post Study Demonstrates Potential for AI and Whole Genome Sequencing appeared first on HPCwire.

Stanford Researchers Tackle Cardiac Arrhythmia Detection with Machine Learning

Wed, 07/12/2017 - 10:45

Using machine learning techniques, Stanford University researchers reported developing an algorithm for identifying cardiac arrhythmias that performs as well as or better than cardiologists. Training the model, as usual, was the big hurdle. The researchers used a 34-layer convolutional neural network (CNN) to train a model able to distinguish 14 types of arrhythmias.

The new work is from Stanford’s Machine Learning Group, which is led by Andrew Ng, and was reported last week in an arXiv paper (Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks) and in an article on the Stanford site. The study and its results are more evidence of machine learning’s rapid spread into diverse applications.

“Given that more than 300 million ECGs are recorded annually, high-accuracy diagnosis from ECG can save expert clinicians and cardiologists considerable time and decrease the number of misdiagnoses. Furthermore, we hope that this technology coupled with low-cost ECG devices enables more widespread use of the ECG as a diagnostic tool in places where access to a cardiologist is difficult,” write the paper’s authors.

The effective use of CNNs and a large database were instrumental to the project’s success. “We build a dataset (30,000 unique patients) with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of board-certified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value),” report the researchers.
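Recall (sensitivity) and precision (positive predictive value) can be computed directly from predicted and reference labels. The rhythm-class labels in this sketch are invented for illustration:

```python
def precision_recall(predicted, actual, positive):
    """Per-class precision and recall over paired label sequences."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred  = ["AFIB", "AFIB", "NSR", "AFIB"]   # model output (hypothetical)
truth = ["AFIB", "NSR",  "NSR", "AFIB"]   # expert consensus (hypothetical)
p, r = precision_recall(pred, truth, "AFIB")
print(round(p, 3), r)  # → 0.667 1.0
```

The paper reports these metrics per rhythm class; beating the average cardiologist on both simultaneously is the notable result, since the two typically trade off.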

Stanford worked with iRhythm, a provider of cardiac monitoring systems, on the study. Data were collected from iRhythm’s wearable ECG monitor. Patients wear a small chest patch for two weeks and carry out their normal day-to-day activities while the device records each heartbeat for analysis. The group took approximately 30,000 thirty-second clips from various patients that represented a variety of arrhythmias.

As quoted in the Stanford article written by Taylor Kubota, “The differences in the heartbeat signal can be very subtle but have massive impact in how you choose to tackle these detections. For example, two forms of the arrhythmia known as second-degree atrioventricular block look very similar, but one requires no treatment while the other requires immediate attention,” said Pranav Rajpurkar, a graduate student and co-lead author of the paper.

To test accuracy of the algorithm, “the researchers gave a group of three expert cardiologists 300 undiagnosed clips and asked them to reach a consensus about any arrhythmias present in the recordings. Working with these annotated clips, the algorithm could then predict how those cardiologists would label every second of other ECGs with which it was presented, in essence, giving a diagnosis.”

Link to paper: https://arxiv.org/pdf/1707.01836.pdf

Link to Stanford article, written by Taylor Kubota: http://news.stanford.edu/2017/07/06/algorithm-diagnoses-heart-arrhythmias-cardiologist-level-accuracy/

The post Stanford Researchers Tackle Cardiac Arrhythmia Detection with Machine Learning appeared first on HPCwire.

GIGABYTE Brings Intel’s Next-Generation Processors To Market

Wed, 07/12/2017 - 09:50

TAIPEI, Taiwan, July 12, 2017 — GIGABYTE today announced its latest generation of servers based on Intel’s Skylake Purley architecture. This new generation brings a wealth of new options in scalability – across compute, network and storage – to deliver solutions for any application, from the enterprise to the data center to HPC.

This server series adopts Intel’s new product family, officially named the ‘Intel Xeon Scalable family’, and leverages its ability to meet the increasingly diverse requirements of the industry, from entry-level HPC to large-scale clusters. The major development in this platform is the improved features and functionality at both the host and fabric levels. These enable performance improvements, both natively on chip and through future extensibility via compute, network and storage peripherals. In practical terms, these new CPUs will offer up to 28 cores and 48 PCIe lanes per socket.

GIGABYTE‘s new product offerings take advantage of the performance benefits that Intel has built in to target a range of segments. In particular, these target enterprise and cloud applications, with extensibility for advanced future HPC applications:

Increased Memory Bandwidth
The CPU architecture allows for up to 6 memory channels per socket (a 50% generation-on-generation increase) and a bandwidth of up to 2666MT/s. The architecture is also future-ready, with the potential to support Apache Pass DIMMs in 2018, opening up a potential memory bandwidth of 2933MT/s (1DP) on certain SKUs.
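These figures can be sanity-checked with simple arithmetic: peak DDR4 bandwidth per socket is channels × transfer rate × 8 bytes per 64-bit transfer. A quick sketch of the theoretical peaks (real-world bandwidth is lower):

```python
def peak_mem_bw_gbs(channels, mt_per_s, bytes_per_transfer=8):
    """Theoretical peak memory bandwidth in GB/s (decimal units)."""
    return channels * mt_per_s * bytes_per_transfer / 1000.0

prev_gen = peak_mem_bw_gbs(4, 2400)   # 4 channels at DDR4-2400
this_gen = peak_mem_bw_gbs(6, 2666)   # 6 channels at DDR4-2666
print(prev_gen, this_gen)  # → 76.8 127.968
```

The generational gain here comes from both the extra channels and the higher transfer rate; the "50%" figure in the text refers to channel count alone.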

Increased Memory Capacity
Memory capacity is also increased, with up to 1.5TB available per socket, a 2x gen-on-gen increase. The platform also adds AVX-512 to boost performance.

Innovative New Storage and Networking Options
This CPU family is designed to enable:
– innovative storage through Intel’s Optane low-latency, non-volatile storage
– integrated QAT for improved compression and security
– improved networking through Omni-Path interconnect fabric as a CPU-integrated or standalone device
This also means that the 10% increase in I/O is not compromised by the fabric and can be used for additional storage or accelerators.

GIGABYTE’s R281-NO0 2U Flagship All-Flash Server

GIGABYTE‘s experienced design team has developed a series of systems incorporating both the advancements of Intel’s Xeon Scalable family and the unique design features that GIGABYTE is recognized for.

What’s New?

Optimized for high TDP – these systems are thermally designed with the highest rated bins in mind so that performance is impressive whichever Xeon Scalable CPU you choose. All systems in this series are supplied with stereo holes. They are also powered by 80+ Platinum (or above) PSUs to ensure over 90% power supply efficiency.

High Density Add-On Slots: GIGABYTE brings you the most options in Full-Height/Half-Length, Low-Profile and OCP slots for Intel Xeon Scalable systems on the market.

Modularised Backplanes: All systems have a uniform backplane able to support exchangeable expanders offering SAS, U.2 or a combination of both, allowing custom expanders to meet different customer needs.

Environmental Compliance: Immediate adherence to the new RoHS regulations introduced in July 2017.

In addition, GIGABYTE’s systems continue to ship with:
– Tool-less design for ease of installation and management
– IPMI and Redfish compatibility designed in

“GIGABYTE is excited to build on our close co-operation with Intel and act as an early supplier of this new scalable architecture,” said Etay Lee, GM, GIGABYTE. “We look forward to working with Intel to address new market segments with this innovative and extensible platform.”

Initially, GIGABYTE will offer 4 new 1U form-factor and 4 new 2U form-factor systems, as well as 2 motherboard SKUs that support the Intel Xeon Scalable series. These offer a range of options for storage and expansion slots.
See the Product Info links below for more information and stay tuned for news on our upcoming GPGPU and 2U, 4 node systems based on this architecture!

GIGABYTE’s MD71-HB0 Xeon Scalable Motherboard

Product Information:
b2b.gigabyte.com/Rack-Server/Intel-Xeon-Scalable (Systems)
b2b.gigabyte.com/Server-Motherboard/Intel-Xeon-Scalable (Motherboard)

GIGABYTE, headquartered in Taipei, Taiwan, is recognized as a global leading brand in the IT industry, with employees and business channels in almost every country. Founded in 1986, GIGABYTE started as a research and development team and has since taken the lead in the world’s motherboard and graphics card markets. On top of Motherboards and Graphics cards, GIGABYTE further expanded its product portfolio to include PC Components, PC Peripherals, Laptops, Desktop PCs, Network Communications, Servers & Datacenter systems and Mobile Phones to serve each facet of the digital life in the home and office. Every day GIGABYTE aims to “Upgrade Your Life” with innovative technology, exceptional quality, and unmatched customer service.


The post GIGABYTE Brings Intel’s Next-Generation Processors To Market appeared first on HPCwire.

ASRock Rack Introduces 5 Motherboard Designs Based on Xeon Scalable Processors

Wed, 07/12/2017 - 09:45

TAIPEI, Taiwan, July 12, 2017 — As Artificial Intelligence (AI) becomes more pervasive in the market, ASRock Rack is introducing 5 new motherboard designs based on the new Intel Xeon Scalable Processor platform, which can deliver up to 2.2x more machine learning inference and training performance for AI.

The new Intel Xeon Scalable Processor marks a significant milestone in processor architecture for Intel, with upgrades from AVX2 to AVX-512 instructions and a 1.5x increase in memory channels, from 4 to 6. The new processor supports an integrated host fabric interface (HFI) and up to 100Gbps of bandwidth with the Omni-Path architecture. It significantly improves scalability and resilience with new I/O capabilities, expanding to 48 PCIe lanes, which allows more possibilities for hardware acceleration and high-end storage. Meanwhile, the server platform supports the Intel Ethernet Connection X722, which provides 4x10GbE with RDMA.

In collaboration with Intel, ASRock Rack has launched 7 different, new server motherboards and 2 barebone systems based on the Intel Xeon Scalable Processor. Each different motherboard has targeted market segments served by a variety of form factors from rack, to blade and pedestal. ASRock Rack’s goal is to always provide leading server technology and to provide the ability to upgrade a wide suite of existing applications.

EPC622D24LM — Key Segments and Specification

  • Enterprise Mission Critical Application / Datacenter / E-commerce / Virtualization
  • L shape 16.7”x17”
  • Dual Socket P support Intel Xeon Scalable Processor
  • 24 x DDR4 2666/2400 RDIMM or LRDIMM slots
  • 14 x SATA3 6.0Gb/s by Intel C622 (incl. 2 x M.2 slots)
  • 2 x PCI-E3.0x16 (Slot1:x16 CPU0, Slot2:x16 from CPU1), 1xPCI-E3.0x8 from CPU1
  • USB 3.0 ports (2 on the rear side, 2 from an internal header, 1 internal Type A)

EP2C622D16NM — Key Segments and Specification

  • High Performance and High-density HPC / Mission Critical / Hyper-Converged / Database or Hadoop Server with SSD Acceleration / I/O Virtualization / Co-location
  • EEB 12”x13”
  • Dual Socket P support Intel Xeon Scalable Processor
  • 16 x DDR4 2666/2400 RDIMM or LRDIMM slots
  • 14 x SATA3 6.0Gb/s by Intel C622 (incl. 1 x SATA DOM port and 1 x M.2 slot)
  • 2 x PCI-E3.0x16
  • Support OCP Mezzanine type A/B/C

EP2C621D8A — Key Segments and Specification

  • UP Solution for Application Server / High-end Workstation / GPGPU Server / Multi-display Server
  • ATX 12” x 9.6”
  • Single Socket P supports Intel Xeon Scalable Processor
  • 8 x DDR4 2666/2400 R and LRDIMM slots
  • 14 x SATA3 6.0 Gb/s by Intel C621 (incl. 1 x SATA DOM and 1 x M.2)
  • 2 x PCI-E3.0 x16, 3 x PCI-E3.0 x8, 1 x PCI-E3.0 x4

EP2C622D24HM — Key Segments and Specification

  • HPC Application / Scientific Computing / Real-time Finance / Research & Labs
  • Half Width (OCP Blade) 6.5” x 20”
  • Supports Intel Xeon Scalable Processor ( with FPGA package TDP 205W)
  • 24 x DDR4 2666/2400 RDIMM/ LR DIMM
  • Intel C622: 13 x SATA3 (by 3x mini SAS HD, 1xM.2)
  • 1 x PCI-E3.0x24, 1 x PCI-E3.0x8
  • 1 x USB3.0 port
  • Support OCP Mezzanine type A/B/C

EP2C622D16HM — Key Segments and Specification

  • High Performance Computing Node / Virtualization Server / DPI Server / Internet Security
  • Half Width 20” x 6.5”
  • Dual Socket P support Intel Xeon Scalable Processor
  • 16 x DDR4 2666/2400 R and LRDIMM
  • 14 x SATA3 6.0 Gb/s by Intel C622 (incl. 1 x SATA DOM and 1 x M.2)
  • 2 x PCI-E3.0x16

EP2C622D16FM — Key Segments and Specification

  • High Performance Server / High-density CPU Rendering Machine / Scale-out HPC Platform / GPGPU Server
  • SSI CEB 12” x 10.5”
  • Intel Xeon Scalable Processor
  • 14 SATA3 (12 from 3 x miniSAS connector + 2 x M.2 ports from PCH)
  • 2 x USB3.0 ports
  • Integrated IPMI2.0 with KVM and dedicated LAN(RTL8211E)
  • Support OCP Mezzanine type A/B/C

EP2C622D12HM — Key Segments and Specification

  • High Performance Computing Node / Virtualization Server / DPI Server / Internet Security
  • Half Width(OCP Blade)6.5”x20”
  • Intel Xeon Scalable Processor
  • 13 SATA3 (12 from 3 x miniSAS connector + 1 x sSATA shared with M.2)
  • 1xPCI-E3.0x24, 1xPCI-E3.0x8
  • 1 x USB3.0 ports
  • Support OCP Mezzanine type A/B/C

1U12LX-C622RPSU — Key Segments and Specification

  • High-density Web 2.0 Cloud Datacenter Storage
  • 1U Chassis support 12×3.5’’ SATA3/SAS2 HDD Hot-Swap Bay +2 x 2.5’’ HDD Bay with 700W Redundant PSU
  • Intel Xeon Scalable Processor
  • Supports Dual Channel DDR4 2133/2400/2666 RDIMM, LRDIMM, NVDIMM ECC, 16 slots
  • Support 14 x SATA3 by C622
  • ASPEED 2500. Integrated IPMI 2.0 with KVM, vMedia and Dedicated LAN (RTL8211E)
  • Supports 2 x x8 mezzanine slots; Type C slot supports OCP mezzanine cards

1U12LX-C622SPSU — Key Segments and Specification

  • High-density Web 2.0 Cloud Datacenter Storage
  • 1U Chassis support 12×3.5’’ SATA3/SAS2 HDD Hot-Swap Bay +2 x 2.5’’ HDD Bay with 700W Single PSU
  • Intel Xeon Scalable Processor
  • Supports Dual Channel DDR4 2133/2400/2666 RDIMM, LRDIMM, NVDIMM ECC, 16 slots
  • Supports 14 x SATA3 by Intel C622
  • ASPEED 2500; integrated IPMI 2.0 with KVM, vMedia and dedicated LAN (RTL8211E)
  • Supports 2 x x8 Mezzanine slots; Type C slot supports OCP mezzanine cards

“ASRock Rack” is the official trademark of ASRock Rack, Inc.; all coverage must retain the original brand name without modification. All other brands, names and trademarks are the property of their respective owners.

About ASRock Rack

ASRock Rack Inc., established in 2013, specializes in providing high-performance and high-efficiency server technology in the fields of Cloud Computing, Enterprise IT, HPC and Datacenter. The company adopted ASRock’s design concept of “Creativity, Consideration, Cost-effectiveness” and brings the same out-of-the-box thinking to the server industry. Leveraging ASRock’s growing momentum and distribution channels, this young and vibrant company targets the booming cloud computing market and is committed to serving it with user-friendly, eco-friendly and do-it-yourself server technology, featuring flexible and reliable products.

Source: ASRock Rack


The post ASRock Rack Introduces 5 Motherboard Designs Based on Xeon Scalable Processors appeared first on HPCwire.

Asetek Receives Incremental Order for Government Contract

Wed, 07/12/2017 - 09:37

AALBORG, Denmark, July 12, 2017 — Asetek today announced an increase of its contract with the United States Department of Defense (DoD) Environmental Security Technology Certification Program (ESTCP).

Asetek’s contract with the DoD, initiated in October 2013, is focused on studying the benefits of Asetek’s RackCDU Direct-to-Chip (D2C) liquid cooling technology in a DoD data center.

This amendment supports installation of Asetek’s In-RackCDU product line at a new site: one of the DoD’s largest HPC data centers.

The DoD has therefore increased contract funding by USD 1.2 million to a total value of USD 3.7 million. The incremental funds are expected to be spent over the coming three quarters. The program is extended through May 2018.

Source: Asetek

The post Asetek Receives Incremental Order for Government Contract appeared first on HPCwire.

WekaIO Unveils Cloud-Native Scalable File System

Wed, 07/12/2017 - 09:33

SAN JOSE, Calif., July 12, 2017 — WekaIO, a venture backed high-performance cloud storage software company, today emerged from stealth to introduce the industry’s first cloud-native scalable file system that delivers unprecedented performance to applications, scaling to Exabytes of data in a single namespace. Headquartered in San Jose, CA, WekaIO has developed the first software platform that harnesses flash technology to create a high-performance parallel scale out file storage solution for both on-premises servers and public clouds.

WekaIO is the world’s fastest distributed file system, processing four times the workload compared to IBM Spectrum Scale measured on Standard Performance Evaluation Corp. (SPEC) SFS 2014, an independent industry benchmark. Utilizing only 120 cloud compute instances with locally attached storage, WekaIO completed 1,000 simultaneous software builds compared to 240 on IBM’s high-end FlashSystem 900. The WekaIO software utilized only 5% of the AWS compute instance resources, leaving 95% available to run customer applications.

“TGen is dedicated to the next revolution in precision medicine — with the goal of better patient outcomes driving our core principles,” said Nelson Kick, manager of HPC operations at TGen. “Future-thinking companies like WekaIO, complement our core principle of accelerating research and discovery. The ability to run more concurrent high performance genomic workloads will significantly advance our time to discovery.”

Scott Sinclair, Senior Analyst at Enterprise Strategy Group states, “ESG tested WekaIO and validated millions of IOPs and GBs of throughput for common I/O sizes, while linear scalability was achieved in a cloud deployment consisting of up to 120 nodes. WekaIO enables seamless data movement from on-prem to public cloud environments delivering cloud-like scale and agility with all-flash storage performance. As performance is a key consideration in choosing converged infrastructure — WekaIO is well-positioned to deliver.”

WekaIO is attracting attention from Fortune 500 organizations because it challenges conventional wisdom when it comes to performance, scalability and cloud economics. Unlike traditional storage solutions, WekaIO eliminates bottlenecks and storage silos by aggregating local SSDs inside the servers into one logical pool, which is then presented as a single namespace to the host applications. A transparent tiering layer offloads cold data to any S3 or Swift cloud object store for unlimited capacity scaling, under the same single namespace. The resulting solution helps organizations eliminate their storage challenges in a fundamentally different way. WekaIO leapfrogs legacy storage infrastructure through this radically new software defined environment.
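The tiering behavior described above can be illustrated with a minimal sketch. This is a hypothetical policy for exposition only, not WekaIO's actual algorithm: files in the hot flash tier that have gone unaccessed past a threshold are offloaded to a cold object store, while every file remains visible under the single namespace regardless of where its data lives. The threshold, file names, and scoring of "cold" are all assumptions.

```python
import time

# Assumed policy: data untouched for 30 days is considered cold.
COLD_AFTER_SECONDS = 30 * 24 * 3600

def tier_files(namespace, now):
    """Return the placement of each file: 'flash' or 'object-store'.

    `namespace` maps file path -> last-access timestamp. The namespace
    itself is unchanged; only the backing tier differs per file.
    """
    placement = {}
    for path, last_access in namespace.items():
        if now - last_access > COLD_AFTER_SECONDS:
            placement[path] = "object-store"   # cold data offloaded to S3/Swift
        else:
            placement[path] = "flash"          # hot data stays on local SSDs
    return placement

now = time.time()
namespace = {
    "/genomes/run1.bam": now - 90 * 24 * 3600,  # untouched for 90 days -> cold
    "/genomes/run2.bam": now - 3600,            # accessed an hour ago -> hot
}
placement = tier_files(namespace, now)
```

The key design point the sketch captures is that tiering is transparent: applications address the same paths whether the bytes sit on flash or in the object store.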

“WekaIO’s innovative approach to high performance storage solves a critical need for organizations while simplifying their storage process,” said Quinn Li, vice president and global head of Qualcomm Ventures, the investment arm of Qualcomm Incorporated. “We are excited to support their efforts as they continue to transform the legacy storage industry.”

WekaIO is experiencing high levels of interest and demand from industries such as media and entertainment, life sciences, engineering design and manufacturing, where large bandwidth, high IOPs and low-latency requirements are critical for fast time to results.

“Data is at the heart of every business but many industries are hurt by the performance limitations of their storage infrastructure,” said Michael Raam, president and CEO of WekaIO. “We are heralding a new era of storage, having developed a true scale-out data infrastructure that puts independent, on-demand capacity and performance control into the hands of our customers. It’s exciting to be part of a company that delivers a true revolution for the storage industry.”

With a deep pedigree in storage technologies, Liran Zvibel, Omri Palmon, and Maor Ben-Dayan founded WekaIO in 2013, as their second venture following XIV, which was acquired by IBM in 2008. In addition to their offices in California, WekaIO has global R&D located in Tel Aviv, Israel.

WekaIO and its global partners are accepting purchase orders for its software effective immediately.

About WekaIO

WekaIO leapfrogs legacy infrastructures and improves IT agility by delivering software-centric data storage solutions that unlock the true promise of the cloud. WekaIO Matrix software is ideally suited for performance intensive workloads such as Web 2.0 application serving, financial modeling, life sciences research, media rendering, Big Data analytics, log management and government or university research. For more information, visit www.weka.io, email us at sales@weka.io, or watch our latest video here.

Source: WekaIO

The post WekaIO Unveils Cloud-Native Scalable File System appeared first on HPCwire.

Perverse Incentives? How Economics (Mis-)shaped Academic Science

Wed, 07/12/2017 - 09:22

The unintended consequences of how we fund academic research—in the U.S. and elsewhere—are strangling innovation, putting universities into debt and creating numerous PhD graduates and postdoctoral fellows who will not be able to get jobs in their chosen fields, according to economist Paula Stephan of Georgia State University.

The good news, Stephan said at the opening plenary session of the PEARC17 conference in New Orleans on July 11, is that researchers probably needn’t go back to the politicians to ask for more money. The bad news: the current system is so ingrained it’s hard to be optimistic.

“I don’t think it would take more funding to [encourage] more risk,” she said, but “unless we change the incentives in the system we’re going to continue to overbuild and over train.”

Stephan identified three major effects of the perverse incentives governing academic research: over-training, risk aversion and over-building of physical infrastructure. All three are problems in their own right but also feed back to make the situation worse.

“Economics is about incentives and cost,” Stephan explained, and both are problematic in most national funding systems. She particularly examined that of the U.S.

The Inaugural Practice and Experience in Advanced Research Computing (PEARC) conference—with the theme Sustainability, Success and Impact—stresses key objectives for those who manage, develop and use advanced research computing throughout the U.S. and the world. Organizations supporting this new HPC conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC), the Extreme Science and Engineering Development Environment (XSEDE), the Science Gateways Community Institute, the Campus Research Computing (CaRC) Consortium, the Advanced CyberInfrastructure Research and Education Facilitators (ACI-REF) consortium, the National Center for Supercomputing Applications’ Blue Waters project, ESnet, Open Science Grid, Compute Canada, the EGI Foundation, the Coalition for Academic Scientific Computation (CASC) and Internet2.

Over-training: A Plague of PhDs

Increasingly, Stephan argued, universities are following a “high-end shopping mall” model in which they “lease” space to researchers—the “stores.” Physical building, particularly during the funding increases of the 1990s, became a priority as universities vied to attract top-performing (read: highly funded) research faculty. One down side to this model, though, is that individual principal investigators took on so much of the risk. With about 95 percent of research faculty paying their own salaries through soft money, funding has become existential and devours increasing amounts of the average lab head’s time: One study estimated that PIs spend 42% of their professional time on grant administration and writing.

“This raises the issue of how you’re going to staff your lab,” Stephan said. While few researchers make a conscious decision to bias hiring toward some types of research workers, the economic pressures often give little choice.

The issue is stark in the decision of whether to employ graduate students, postdoctoral fellows or staff scientists to conduct lab research. Nationally, graduate students receive an average stipend of about $26,000 annually; in addition, they cost roughly $16,000 or more in tuition and other student expenses. Their hourly “pay rate,” then, can be between $19.50 and $27.50.

Postdoctoral fellows are paid more. But they also have no tuition costs and at most universities have few additional benefits. Assuming a university follows the NIH benchmark of $43,692 for a first-year postdoc, their hourly rate comes to around $17 to $18, depending on the field.

Staff scientists start at about $60,000 to $75,000, coming out to an hourly rate of about $30.00. But that doesn’t reflect their full cost, which includes much more extensive benefits than students or postdocs.
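The hourly comparisons above follow from a back-of-the-envelope calculation. The annual-cost figures below come from the article; the weekly-hours values are assumptions chosen to show how the quoted hourly figures can arise, since the talk did not state the hours used.

```python
# Hourly "pay rate" = total annual cost / assumed annual hours worked.
def hourly_rate(annual_cost, hours_per_week, weeks_per_year=50):
    return annual_cost / (hours_per_week * weeks_per_year)

grad_cost = 26_000 + 16_000   # stipend plus tuition/student costs (article)
postdoc_cost = 43_692         # NIH first-year postdoc benchmark (article)
staff_cost = 60_000           # low end of starting staff-scientist salary (article)

grad_rate = hourly_rate(grad_cost, 40)        # ~$21/hour at an assumed 40 h/week
postdoc_rate = hourly_rate(postdoc_cost, 50)  # ~$17.50/hour at an assumed 50 h/week
staff_rate = hourly_rate(staff_cost, 40)      # ~$30/hour at an assumed 40 h/week
```

Note how sensitive the comparison is to the hours assumption: the longer hours typically worked by postdocs are exactly what pushes their effective rate below a graduate student's.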

Given this incentive structure, Stephan explained, it isn’t hard to understand the relative scarcity of staff scientists. Her own study found that at least 72 percent of academic research papers had postdocs or grad students as their first author. In the NSF’s annual survey, life science PhD graduates with definite job commitments have fallen from a peak of 70% in 1994 to 58% in 2014—and most of those are going to postdoc positions, not permanent jobs.

With the scarcity of permanent positions for these postdocs to go to next, “academe has become the alternate career track” for PhDs, particularly in physics and the physical and life sciences, she said.

“Training [has become] less about the future supply and more about getting research and teaching done now,” Stephan said.

Aversion to Risk

Along with the oversupply of PhDs, the funding structure has created an atmosphere in which risk-taking is discouraged in the funding process. In an influential Proceedings of the National Academy of Sciences paper, biomedical giants Alberts, Kirschner, Tilghman and Varmus criticized biomedical research funding as overly risk averse. Researchers have perceived a similar problem in the physical sciences: Even DARPA, which once self-identified as funding risky projects, has been criticized for being over-cautious.

At the stage of grant application reviews, the common requirement for preliminary data among many reviewers tilts the field against high-risk projects. So does the use of bibliometric measures of author impact. The short-term nature of the funding cycle also discourages novelty: “It’s hard to recover from failure in three years,” Stephan said. And since the success rate for grant continuations is higher than that for new grants, the system encourages researchers to “stay in their lanes.”

“The stress on ‘translational’ outcomes” that provide immediate practical applications “also discourages risk,” she added.

Another study showed that highly novel papers tend to show pronounced payoffs at 13 years after publication but little at three years. Non-novel papers, on the other hand, pay off better at three-year cycles—but don’t improve over time.

If You Build It, They Will Not Necessarily Come

Overbuilding—the construction of unneeded university brick and mortar—came with the NIH budget doubling in the late 1990s. Universities, assuming continued growth, embarked on a “building binge” to attract top grant-attracting faculty. They borrowed to do so, partly because interest payments for debt service can be included in calculating indirect costs charged against those grants—and thus it would, presumably, “pay for itself.”

From 1988 to 2011, biomedical research floor space at the average university increased from 40,000 square feet to 90,000 square feet.

When funding declined in real dollars, unrecoverable debt and even facility mothballing followed. The annual average university debt service grew from $3.5 million in 2003 to $6.9 million in 2008. It created an economic drag on many research universities that will be hard to escape.

“All disciplines will pay for this, not just the biomedical sciences,” Stephan said.

The Way Out?

The irony, of course, is that the primary justification for government-funded research is to take risks that industry can’t.

For economists, the case for academe starts with a concept called “market failure,” Stephan explained. It’s the term used to describe the way most firms avoid overly risky projects; the difficulty of capturing financial benefits from fundamental discovery is a particular disincentive to pursue that which does not pay off in the near term.

“But the risky stuff shifts knowledge frontiers, eventually contributing to economic growth,” she said.

Excellence is not the same thing as risk-taking, Stephan took pains to add. Not all excellent research takes big risk; not all risky research is of high quality.

“I think as a country we need a portfolio,” she said. “It does not mean that there is not a substantial role for what we call ‘normal’ research.” But unless we change the incentive structures of our funding process (rewarding outcomes over longer time periods, encouraging permanent rather than temporary jobs, and making life on “soft money” less precarious), we won’t see the kind of innovation in which academic research was supposed to specialize.

“I’ve been working at this for too long, so I’m not wildly optimistic,” Stephan admitted.

Ken Chiacchia, Senior Science Writer, Pittsburgh Supercomputing Center, is following a non-traditional career path for science PhDs.

The post Perverse Incentives? How Economics (Mis-)shaped Academic Science appeared first on HPCwire.

Driving AI Forward with Intel® Xeon® Scalable Processors

Wed, 07/12/2017 - 01:05

AI compute cycles are expected to grow by a factor of 20 within the next three years as the intelligence revolution goes mainstream.  Intel is working to fuel this growth at every level, delivering a major leap in AI performance with new Intel® Xeon® Scalable processors and targeting a game-changing 100X increase in machine learning performance by 2020 with the Intel® Nervana™ Platform.

Figure 1. The Intel® Scalable System Framework simplifies the design of efficient, high-performing clusters that optimize the value of HPC investments.

These leaps in performance will need to be accompanied by comparable leaps in scalability to support growing data volumes and larger neural networks. To address this need, Intel is driving innovation across the entire HPC solution stack through the Intel® Scalable System Framework (Intel® SSF). Tight integration and synchronized innovation across compute, memory, storage, fabric, and software will help organizations scale cost-effectively as their own AI revolution unfolds.

Superior Performance Today

The move toward faster, more scalable machine learning is already under way. Intel is working with vendors and the open source community to optimize popular software frameworks and algorithms for higher performance on Intel architecture. The performance benefits can be transformative[1].

Choosing the right processors for specific workloads is important. Current options include:

  • Intel® Xeon® Scalable processors for inference engines and for some training workloads. Intel Xeon processors already support 97 percent of all AI workloads[2]. With more cores, more memory bandwidth, an option for integrated fabric controllers, and ultra-wide 512-bit vector support, the new Intel Xeon Scalable processors (formerly code name Skylake) provide major advances in performance, scalability, and flexibility. They are ideal for deploying AI inference engines at scale and, in many cases, for tackling the heavier demands of neural network training. With their broad interoperability, they also provide a powerful and agile foundation for integrating AI solutions into other business and technical applications.
  • Intel® Xeon Phi™ processors for training large and complex neural networks. With up to 72 cores, 288 threads, and 512-bit vector support—plus integrated high bandwidth memory and fabric controllers—these processors offer performance and scalability advantages versus GPUs for neural network training. Since they function as host processors and run standard x86 code, they simplify implementation and eliminate the inherent latencies of PCIe-connected accelerator cards. An AI-optimized upgrade to the Intel Xeon Phi processor family (code name Knights Mill) will be in production in the fourth quarter of 2017, and is expected to provide a significant increase in deep learning training performance to help unleash a new wave of innovation.
  • Optional accelerators for agile infrastructure-optimization. Intel offers a range of workload-specific accelerators, including programmable Intel FPGAs that can evolve along with workloads to meet changing requirements. These optional add-ons for Intel Xeon processor-based servers bring new flexibility and efficiency for supporting AI and many other critical workloads. They open new doors for innovation, and can help organizations reduce data center power and space requirements for their most demanding workloads.
Unprecedented Performance Tomorrow

If today’s Intel processors push the boundaries of machine learning performance, tomorrow’s will shatter them. The Intel Nervana Platform is a complete solution stack that is designed for the sole purpose of delivering unprecedented performance and density for neural networks. High-speed memory and powerful interconnects are built into each chip to deliver extreme performance that can be scaled across multiple chips and multiple chassis without performance loss.

At the same time, Intel SSF is evolving to help enable cutting-edge performance at every scale, from small workgroup clusters to the world’s largest supercomputers. In combination with rapid, ongoing advances in the x86 software ecosystem—including applications, frameworks, and optimized Intel libraries—this ramp up in computing capability will provide the foundation for a tidal wave of AI innovation. In relatively short order, this groundbreaking new technology will become a mainstream resource that can be deployed with confidence by virtually every organization.

Learn more: Read previous and future articles about the benefits Intel SSF brings to AI through synchronized innovation across the complete HPC solution stack:  Overview, memory, compute, fabric, storage, software.

[1] For more information on the value of running optimized AI software on Intel Architecture, read the Intel article: “Intel® Xeon Phi™ Delivers Competitive Performance for Deep Learning—And Getting Better Fast,” September 26, 2016. https://software.intel.com/en-us/articles/intel-xeon-phi-delivers-competitive-performance-for-deep-learning-and-getting-better-fast

[2] Based on Intel internal estimates.

The post Driving AI Forward with Intel® Xeon® Scalable Processors appeared first on HPCwire.

Code @ TACC Robotics Camp Students Solve Real-World Traffic Problems

Tue, 07/11/2017 - 16:17

July 11, 2017 — On a hot and breezy June day in Austin, parents, friends, brothers and sisters navigated through main campus at The University of Texas at Austin and helped carry luggage for the new arrivals to their dorm rooms. Thirty-four high school students from mostly low-income Title I schools in Central Texas, some from as far away as Houston, said good-bye to their families.

The students came for a different kind of summer camp, where for one week they became part of a science team that used computer programming and internet-connected technologies to solve a real-world problem. They had high hopes to walk away with experiences that would help them become future scientists and engineers.

From June 11 to 16, 2017, the Texas Advanced Computing Center (TACC) hosted Code @TACC Robotics, a week-long summer camp funded by the Summer STEM Funders Organization under the supervision of the KDK Harmon Foundation. The 34 students received instruction from five staff scientists at TACC and two guest high school teachers from Dallas and Del Valle, as well as round-the-clock supervision from five undergraduate proctors. Leading the camp was Joonyee Chuah, Outreach Coordinator at TACC.

“The goal of the camp is to provide these students with their first experiences with programming, to jumpstart them and get them further ahead to things that are current in the computing world,” Chuah said.

The students divided themselves into teams, each with specific roles of principal investigator, validation engineer, software developer, and roboticist. They assembled a robotic car from a kit and learned how to program the software that controls it. The robotic cars had sensors that measured the distance to objects in front, and they could be programmed to respond to that information by stopping or turning or even relaying that information to another car near it. Teams were assigned a final project based on a real-world problem, such as what action to take when cars arrive together at a four-way stop.
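The sensor-driven behavior described above can be sketched in a few lines. This is a hypothetical illustration of the kind of rule the students might have programmed, not the camp's actual code; the distance thresholds and action names are invented for the example.

```python
# Assumed thresholds, in centimeters, for the front distance sensor.
STOP_CM = 10
TURN_CM = 30

def decide(front_distance_cm):
    """Map one front-sensor reading to an action for the robotic car."""
    if front_distance_cm <= STOP_CM:
        return "stop"    # obstacle directly ahead: halt (and could notify nearby cars)
    if front_distance_cm <= TURN_CM:
        return "turn"    # obstacle approaching: steer around it
    return "drive"       # path clear: keep going

# One decision per sensor reading, e.g. as readings arrive in a control loop.
decisions = [decide(d) for d in (5, 20, 100)]
```

A four-way-stop project like the one mentioned above would layer coordination on top of this: each car publishes its arrival time and yields to whichever peer arrived first.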

The Code @TACC Robotics camp went a step further than the typical introductory Lego-based robotics program by using maker-based electronics that connected to the cloud using the Particle platform. The robots assembled for the camp were three-wheeled cars that communicated via the internet and could relay events and interact with services such as Gmail, Twitter, and Facebook.

“The platform allows these robots to do a lot of communication with each other that facilitates projects that you wouldn’t normally be able to do in a standard high school classroom using off-the-shelf toy robotics,” Chuah said. The robotic cars presented a simplified version of the cutting-edge autonomous vehicles being developed today by leading companies such as Google.

Industry outreach was an important part of the camp, and the students toured the offices of IBM in Austin, where they participated in student activities that explored the IBM Watson supercomputer and robotics connected to it. The students also visited engineering departments and computer science departments at UT Austin, as well as TACC’s world-renowned Visualization Laboratory. “They get a full experience of both college as well as future industry,” Chuah said. “It’s important for students to understand that there are economic and intellectual opportunities out there.”

Read the rest of the story at the TACC website.

Source: Jorge Salazar, TACC

The post Code @ TACC Robotics Camp Students Solve Real-World Traffic Problems appeared first on HPCwire.

IBM, Citizen-Scientists to Contribute Equivalent of up to $200M for Climate Research

Tue, 07/11/2017 - 14:36

ARMONK, N.Y., July 10, 2017 — As climate change accelerates, IBM (NYSE: IBM) is galvanizing the global science community with a massive infusion of computing resources, weather data, and cloud services to help researchers examine the effects of climate change, and explore strategies to mitigate its effects. IBM pledges to help direct the equivalent of up to $200 million for up to five climate-related projects judged to offer the greatest potential impact, and will then broadly share the experiments’ results.

IBM is inviting members of the global science community to propose research projects that could benefit from World Community Grid, an IBM Citizenship initiative that provides researchers with enormous amounts of free computing power to conduct large-scale environmental and health-related investigations.

This resource is powered by the millions of devices of more than 730,000 worldwide volunteers who sign up to support scientific research. World Community Grid volunteers download an app to their computers and Android devices, and, whenever they are otherwise not in full use, the computers automatically perform virtual experiments, with the aim of dramatically accelerating foundational scientific research.
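The volunteer-computing pattern described above can be sketched as a simple split/compute/merge loop. This is a deliberate simplification, not World Community Grid's actual protocol: a coordinator divides an experiment into independent work units, idle volunteer devices each process one, and the results are merged back together. The parameter sweep and scoring function here are hypothetical.

```python
def split_work(parameters, unit_size):
    """Divide a parameter sweep into work units of at most unit_size items."""
    return [parameters[i:i + unit_size]
            for i in range(0, len(parameters), unit_size)]

def run_unit(unit):
    """Stand-in for one 'virtual experiment': score each candidate parameter."""
    return {p: p * p for p in unit}   # hypothetical scoring function

parameters = list(range(10))          # the full experiment
units = split_work(parameters, 3)     # work units handed out to volunteers

results = {}
for unit in units:                    # in reality, each unit runs on a volunteer device
    results.update(run_unit(unit))    # coordinator merges returned results
```

Because the units are independent, throughput scales with the number of volunteer devices, which is what let the Harvard project screen roughly 25,000 molecules a day.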

Scientists who submit proposals for climate-related experiments may also apply to receive free IBM cloud storage resources, so that they can work with their experiment data in a secure, responsive, and convenient manner. They may also apply to receive free access to data about historical, current, and forecasted meteorological conditions around the globe from The Weather Company, an IBM Business.

The in-kind, donated resources offered by IBM can support many potential areas of inquiry. These might include gauging the impacts on watersheds and fresh water resources; tracking and predicting human or animal migration patterns based on changing weather conditions; analyzing weather that affects pollution or clean-up efforts; analyzing and improving crop or livestock resilience and yields in regions with extreme weather conditions, and more.

IBM’s World Community Grid has previously hosted numerous environment-related projects led by scientists around the world. For example, Harvard University identified 36,000 carbon-based compounds with the potential to perform at approximately double the efficiency of most organic solar cells currently in production.

“World Community Grid enabled us to find new possibilities for solar cells on a timescale that matters to humanity–in other words, in a few years instead of decades,” said Dr. Alán Aspuru-Guzik, Professor of Chemistry and Chemical Biology, Harvard University. “Usually, computational chemists who try to do this type of thing are studying 10 or 20 molecules at a time. World Community Grid allowed us to screen about 25,000 molecules every day. We had to start thinking in terms of millions of molecules and formulate new ideas based on this massive scale.”

Other environmental initiatives hosted on IBM’s World Community Grid have included a project led by Tsinghua University in China, which uncovered a phenomenon that could lead to more efficient water filtration using nanotechnology. Scientists have also used IBM’s World Community Grid to better understand crop resiliency to extreme weather, and to model the impact of water management practices on sensitive watersheds.

IBM has a long history of environmental leadership. Just last week, IBM announced that it achieved two major commitments four years ahead of schedule in its effort to help combat climate change. Earlier this month, IBM also reaffirmed its support for the Paris Climate Agreement and signed on to the #WeAreStillIn pledge, expressing its commitment to help continue leading the global fight against climate change.

“Computational research is a powerful tool for advancing research on climate change and related environmental challenges,” said Jennifer Ryan Crozier, Vice President of IBM Corporate Citizenship and President of the IBM International Foundation. “IBM is proud to help advance essential efforts to combat climate change by providing scientists with free access to massive computing power, cloud resources, and weather data.”

IBM will select up to five projects to receive support. Proposals will be evaluated for scientific merit, potential to contribute to the global community’s understanding of specific climate and environmental challenges or development of effective strategies to mitigate them, and the capacity of the research team to manage a sustained research project. Resources provided are valued at up to $40 million per project, for a total of approximately $200 million USD.

IBM will accept applications here on a rolling basis, with a first-round deadline of September 15, 2017. Scientists from around the world are encouraged to apply. Up to five winning research teams will be announced beginning in Fall 2017.

Since its founding in 2004, World Community Grid has supported 28 research projects on cancer, HIV/AIDS, Zika, clean water, renewable energy and other humanitarian challenges. To date, World Community Grid, hosted in IBM’s Cloud, has connected researchers to one half-billion U.S. dollars’ worth of free supercomputing power. More than 730,000 individuals and 430 institutions from 80 countries have donated more than one million years of computing time from more than three million desktops, laptops and Android devices. Volunteer participation has helped researchers to identify potential treatments for childhood cancer, more efficient solar cells and more efficient water filtration.

To learn more about World Community Grid and volunteer to contribute your unused computing power, please visit https://www.worldcommunitygrid.org/

To learn more about IBM Cloud, please visit https://www.ibm.com/cloud-computing/

For more information about The Weather Company, an IBM Business, please visit http://www.theweathercompany.com

Source: IBM

The post IBM, Citizen-Scientists to Contribute Equivalent of up to $200M for Climate Research appeared first on HPCwire.

Koi Computers Announces Product Lineup Powered by Xeon Scalable Processors

Tue, 07/11/2017 - 12:27

Koi Computers’ new Intel-based product lineup is available in rack-mounted servers, storage solutions, and workstations. “As an Intel Technology Platinum Provider and HPC Data Center Specialist, Koi Computers is extending its portfolio with a wide range of workload optimized solutions based on the new Intel Xeon Scalable platform,” said Catherine Ho, Federal Business Development Manager at Koi Computers.

“These solutions from Koi Computers will take advantage of the performance, efficiency and scalability of the new Intel Xeon processor family to meet the most intensive demands of our customer’s workloads in the modern data center,” said Jennifer Huffstetler, Sr. Director, data center product marketing at Intel Corporation.

The new Intel Xeon Scalable processors represent the most significant platform innovation in this decade, incorporating unique features for compute, network, and storage workloads. The Intel Xeon Scalable processors deliver performance gains of up to 4.2x for virtualized workloads compared with the 4-year-old systems widely used in the market today, allowing customers to run more workloads with greater efficiency on each system.

Some of the feature improvements of the new Scalable family over the previous generation of processors include:

  • Integrated performance accelerators such as Intel Advanced Vector Extensions 512 (Intel AVX-512) and Intel QuickAssist Technology (Intel QAT);
  • 1.5x memory bandwidth increase (6 channels vs. 4 in previous generation);
  • Intel Volume Management Device (Intel VMD), a new platform capability designed to deliver seamless management of PCIe-based (NVMe) solid state drives; Intel VMD also enables a “hot plug” capability that minimizes service interruptions during drive swaps; and
  • Optional Integrated Network / Fabric
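Whether a given system actually exposes these accelerators can be checked from software. The sketch below (an illustration, not part of either vendor's tooling) parses the kind of space-separated flag string that appears on the `flags` line of `/proc/cpuinfo` on Linux; the flag lists shown are abbreviated examples, not complete output from any real CPU:

```python
def has_features(cpu_flags, wanted=("avx512f", "avx512dq")):
    """Report whether a space-separated CPU flag string (as found on the
    'flags' line of /proc/cpuinfo on Linux) advertises the given features."""
    present = set(cpu_flags.split())
    return {f: (f in present) for f in wanted}

# Abbreviated, illustrative flag strings; a real system's list is far longer.
xeon_scalable = "fpu sse sse2 avx avx2 avx512f avx512dq avx512cd avx512bw avx512vl"
older_xeon    = "fpu sse sse2 avx avx2"

print(has_features(xeon_scalable))  # {'avx512f': True, 'avx512dq': True}
print(has_features(older_xeon))     # {'avx512f': False, 'avx512dq': False}
```

On a live Linux host, the real flag line can be read with `open("/proc/cpuinfo")` and passed to the same function.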

The Intel Xeon Scalable processors offer four levels of performance and capabilities, with a new tiered model based on metals (Bronze, Silver, Gold and Platinum) to make the options simple and efficient to choose. Customers also gain flexibility in configuration, with a choice of which integrations and accelerators to include.

Koi Computers’ new server, storage, and workstation solutions designed for the new Intel Xeon Scalable platform are now available to order. For more information or inquiries, please contact our sales team at sales(at)koicomputers(dot)com or (888) LOVE-KOI. The new product line-up is also available on all of Koi Computers’ Federal Government Contracts: GSA IT Schedule 70 (GS-35F-0488U), NASA SEWP V – Group A (NNG15SD50B); and NITAAC CIO-CS (HHSN316201500039W).

About Koi Computers

For more than 20 years, Koi Computers has been working with top technology manufacturers to deliver scalable high performance computing and technology solutions that improve the efficiency, reliability and speed of our customers’ work. Our team specializes in building custom IT solutions that fit your needs today and your vision for tomorrow. Koi Computers has deployed clusters across the U.S. Federal Government and has placed systems on both the Top500 and Green500 lists. Koi Computers is a Prime Contract Holder of the NASA SEWP V, NITAAC CIO-CS, and GSA IT Schedule 70 contracts. To learn more, visit http://www.koicomputers.com and follow us on Twitter @koicomputers.

Source: Koi Computers

The post Koi Computers Announces Product Lineup Powered by Xeon Scalable Processors appeared first on HPCwire.

Advanced Clustering Technologies Announces Systems Based on Xeon Scalable Processors

Tue, 07/11/2017 - 12:24

Advanced Clustering’s ACT series of HPC solutions, which includes ACTserv (servers), ACTstor (storage), and ACTblade (blades), provides the HPC market with high scalability and performance for the most demanding computing requirements. These systems will now integrate the Intel Xeon Scalable processors, which are designed to deliver powerful capabilities for HPC workloads including genomic sequencing, seismic modeling, computational fluid dynamics and high-frequency trading.

“Integrating this new breakthrough generation of Intel Xeon processors into our systems means we can provide our customers with a powerful platform that has been designed specifically to deliver advanced HPC capabilities,” said Advanced Clustering Technologies President Kyle Sheumaker. “Researchers and data scientists will be able to unlock data and scientific insights faster than ever before because of the advancements Intel Xeon Scalable processors bring across compute, storage, memory and I/O.”

Advanced Clustering’s ACT HPC servers are based on the new line of Intel Server Boards, which deliver high levels of compute density, scalability, and storage and memory capacity. These features, combined with the enhancements in the new Intel Xeon Scalable processors, make ACT HPC systems highly effective for higher education computing resources and in a range of commercial sectors including financial services, climate and weather research, manufacturing and automotive design.

“Innovative technologies in compute, memory, fabric, storage, and system software are needed to provide balanced performance at scale and enable the next generation of HPC innovation,” said Al Diaz, VP and general manager, Product Collaboration and Systems Division, Intel Data Center Group. “Support for Intel Xeon Scalable processors and Intel’s latest server platforms in the ACT series provides Advanced Clustering’s customers with the powerful performance, maximum scalability and energy efficiency required for increasingly demanding HPC workloads.”

Intel Xeon Scalable processors feature multiple enhancements for HPC workloads. The new processors deliver advanced performance with up to 28 cores and significant increases in memory and I/O bandwidth (with six memory channels and 48 PCIe lanes) to handle extremely large compute- and data-intensive workloads. Integrated Intel QuickAssist Technology (Intel QAT) with hardware acceleration for cryptography and data compression frees the host processor to focus on other critical tasks. The new Intel Advanced Vector Extensions 512 (Intel AVX-512), with up to double the flops per clock cycle compared to the previous-generation Intel AVX2, boosts performance for the most compute-intensive workloads. Integrated Intel Omni-Path Architecture (Intel OPA) provides a 100Gbps high-bandwidth, low-latency fabric for HPC clusters.
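The "double the flops per clock cycle" claim follows from SIMD width: a 512-bit register holds eight double-precision lanes versus four for 256-bit AVX2, and each fused multiply-add counts as two floating-point operations. The sketch below works through the standard peak-FLOPS arithmetic; the 28-core count comes from the text, but the 2.5 GHz clock and two-FMA-unit assumption are illustrative placeholders (real clocks and FMA unit counts vary by SKU, and AVX-512 code typically runs at reduced frequency):

```python
def peak_gflops(cores, ghz, simd_width_bits, fma_units=2, precision_bits=64):
    """Theoretical peak GFLOPS:
    cores x clock (GHz) x SIMD lanes x FMA units x 2 flops per FMA."""
    lanes = simd_width_bits // precision_bits  # DP lanes per register
    flops_per_fma = 2                          # one multiply + one add
    return cores * ghz * lanes * fma_units * flops_per_fma

# Hypothetical 28-core part at an assumed 2.5 GHz with two FMA units:
avx512 = peak_gflops(28, 2.5, 512)  # 8 DP lanes per register
avx2   = peak_gflops(28, 2.5, 256)  # 4 DP lanes per register
print(avx512, avx2, avx512 / avx2)  # 2240.0 1120.0 2.0
```

The 2.0 ratio is exactly the "double the flops per clock cycle" figure; real sustained performance depends on memory bandwidth, frequency scaling under AVX-512 load, and how well the workload vectorizes.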

For more information about Advanced Clustering’s Intel Xeon Scalable processor-integrated HPC solutions, visit http://www.advancedclustering.com/technologies/skylake/.

About Advanced Clustering Technologies 

Advanced Clustering Technologies builds customized, turn-key HPC solutions including clusters, servers, storage solutions and workstations. The company is an official Intel Technology Provider and HPC Data Center Specialist. The designations recognize Advanced Clustering as a partner adept in delivering quality HPC solutions for customers. Visit the company’s site at http://www.advancedclustering.com.

Source: Advanced Clustering Technologies

The post Advanced Clustering Technologies Announces Systems Based on Xeon Scalable Processors appeared first on HPCwire.