Feed aggregator

Hedge Funds (with Supercomputing help) Rank First Among Investors

HPC Wire - Mon, 05/22/2017 - 12:48

In case you didn’t know, The Quants Run Wall Street Now, or so says a headline in today’s Wall Street Journal. Quant-run hedge funds now control the largest share (27 percent) of stock trading of any investor type, according to the article. That’s up from 2010 when quant-based trading was tied with bank trades for the bottom share. Algorithm-based trading is, of course, the ‘sine qua non’ of hedge funds and has helped lift them to the top of the investing crowd.

The WSJ article, written by Gregory Zuckerman and Bradley Hope, quickly reviews the rise of quants in the financial industry and showcases its still-growing attraction as a lucrative career for algorithm stars formerly headed for computer science. Here’s an excerpt:

“A decade ago, the brightest graduates all wanted to be traders at Wall Street investment banks, but now they’re climbing over each other to get into quant funds,” says Anthony Lawler, who helps run quantitative investing at GAM Holding AG. The Swiss money manager last year bought British quant firm Cantab Capital Partners for at least $217 million to help it expand into computer-powered funds.

“Guggenheim Partners LLC built what it calls a “supercomputing cluster” for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim’s quant investment funds, says Marcos Lopez de Prado, a Guggenheim senior managing director. Electricity for the computers costs another $1 million a year.

“Algorithmic trading has been around for a long time but was tiny. An article in The Wall Street Journal in 1974 featured quant pioneer Ed Thorp. In 1988, the Journal profiled a little-known Chicago options-trading firm that had a secret computer system. Journal reporter Scott Patterson wrote a best-selling book in 2010 about the rise of quants.”

Link to full article: https://www.wsj.com/articles/the-quants-run-wall-street-now-1495389108

The post Hedge Funds (with Supercomputing help) Rank First Among Investors appeared first on HPCwire.

PEARC17 Announces Complete Conference Schedule

HPC Wire - Mon, 05/22/2017 - 11:05

NEW ORLEANS, May 22, 2017 — The PEARC17 organizers today announced the full schedule for the Practice & Experience in Advanced Research Computing conference in New Orleans, July 9–13, 2017. PEARC17’s robust technical program will address issues and challenges facing those who manage, develop, and use advanced research computing throughout the nation and the world.

Featuring content submitted by the community for the community, PEARC17 will include more than 50 technical papers by professionals in the field of advanced research computing.

The conference will kick off with 20 full-day and half-day tutorials on Monday, July 10. The tutorials will cover a wide range of topics, including advancing on-campus research computing, many-core programming, cloud computing and more.

Attendees will have the opportunity to attend panel discussions focusing on workforce issues, visualization, and national-scale infrastructure. Included in PEARC17’s dynamic program will also be a poster reception, the Visualization Showcase, numerous Birds-of-a-Feather sessions, and a student program.

Also included with PEARC17 registration is the Advanced Research Computing on Campuses (ARCC) Best Practices Workshop, which is co-located with this year’s conference. ARCC will present a full-day tutorial on Monday. Attendees need to sign up for the ARCC tutorial as part of tutorial registration. During the main conference, ARCC will offer focused technical sessions and panels, as well as Birds-of-a-Feather sessions.

PEARC17 Registration and Hotel Booking Deadline is May 31

May 31 is the deadline for PEARC17 attendees to register and book a room at guaranteed hotel room rates. There will be no extension of these deadlines—late registration fees will apply beginning June 1.

To register for the conference and check the latest program schedule, go to http://pearc17.pearc.org. To book rooms, go to http://pearc17.pearc.org/hotel or call 888-421-1442 and reference the conference name.

ABOUT PEARC17

Being held in New Orleans July 9-13, PEARC17—Practice & Experience in Advanced Research Computing 2017—is for those engaged with the challenges of using and operating advanced research computing on campuses or for the academic and open science communities. This year’s inaugural conference offers a robust technical program, as well as networking, professional growth and multiple student participation opportunities.

Organizations supporting the new conference include the Advancing Research Computing on Campuses: Best Practices Workshop (ARCC); XSEDE; the Science Gateways Community Institute (SGCI); the Campus Research Computing Consortium (CaRC); the ACI-REF consortium; the Blue Waters project; ESnet; Open Science Grid; Compute Canada; the EGI Foundation; the Coalition for Academic Scientific Computation (CASC); and Internet2.

Source: PEARC17

The post PEARC17 Announces Complete Conference Schedule appeared first on HPCwire.

Bright Computing Announces Integration with BeeGFS from ThinkParQ

HPC Wire - Mon, 05/22/2017 - 11:00

AMSTERDAM, May 22, 2017 – Bright Computing today announced that integration with BeeGFS has been included in Bright Cluster Manager 8.0.

BeeGFS, developed at the Fraunhofer Center for High Performance Computing in Germany and delivered by ThinkParQ, is a parallel cluster file system with a strong focus on performance and flexibility, and is designed for very easy installation and management. BeeGFS is free of charge, and transparently spreads user data across multiple servers. By increasing the number of servers and disks in the system, users can simply scale performance and capacity of the file system to the level needed, from small clusters up to enterprise-grade systems with thousands of nodes.

This latest development from Bright Computing is in response to an increasing number of requests from Bright’s customer base to integrate with BeeGFS.

The new integration offers:

  • Setup utility – The process of setting up BeeGFS within Bright Cluster Manager 8.0 has been streamlined by introducing an easy-to-use wizard that guides you through the process
  • Configuration – BeeGFS can be configured in Bright Cluster Manager 8.0 through CMDaemon roles
  • Metrics, health-checks, and service management – Bright ensures that BeeGFS is always working properly by reporting on the performance and health of BeeGFS in BrightView, the central management console
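
As an illustration of the kind of check the metrics and health-check integration performs, the following is a minimal, standalone sketch that verifies a BeeGFS mount point exists and reports its capacity. The mount path and threshold are assumptions for the example; this is not Bright’s CMDaemon implementation.

```python
import shutil
from pathlib import Path

# Hypothetical BeeGFS mount point and threshold; adjust for your cluster.
BEEGFS_MOUNT = Path("/mnt/beegfs")
CAPACITY_WARN_PCT = 90.0   # warn once the filesystem is more than 90% full


def check_beegfs_mount(mount: Path, warn_pct: float) -> bool:
    """Return True if the mount exists and usage is below the warning threshold."""
    if not mount.is_mount():
        print(f"FAIL: {mount} is not mounted")
        return False
    usage = shutil.disk_usage(mount)
    used_pct = 100.0 * (usage.total - usage.free) / usage.total
    print(f"{mount}: {used_pct:.1f}% used of {usage.total / 1e12:.1f} TB")
    return used_pct < warn_pct


if __name__ == "__main__":
    ok = check_beegfs_mount(BEEGFS_MOUNT, CAPACITY_WARN_PCT)
    raise SystemExit(0 if ok else 1)
```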

Martijn de Vries, CTO at Bright Computing, commented: “BeeGFS is a highly respected and popular parallel file system; many of our customers have been using it for a number of years. It makes perfect sense to formalize the integration to ensure that setup and management of BeeGFS in Bright Cluster Manager is quick, easy, and error-free, and adds immediate value to our customers’ cluster management experience.”

Sven Breuner, CEO of ThinkParQ, added: “Leveraging the full performance of the available hardware while still keeping all components easy to manage has always been our primary goal with BeeGFS. Seeing how easy it is with the new Bright Cluster Manager to configure BeeGFS and to monitor the system utilization is really impressive and will help system administrators to significantly improve the application runtime of their cluster users. And since many of the BeeGFS customers are using Bright Cluster Manager already and all others can now enable it with just a few clicks, this integration provides an immediate benefit to the community.”

Source: Bright Computing

The post Bright Computing Announces Integration with BeeGFS from ThinkParQ appeared first on HPCwire.

Senior Scientific Applications Engineer

XSEDE News - Mon, 05/22/2017 - 09:47

OSC is hiring a Senior Scientific Applications Engineer with a focus on application support for OSC’s industrial engagement and research programs. This individual will:
1) Optimize relevant open source modeling and simulation codes for high performance execution on OSC and other HPC systems.
2) Provide consulting on code optimization, parallel programming or accelerator programming to OSC clients.
3) Perform application software builds, investigate software development tools, make improvements to OSC’s software deployment infrastructure, and create user-facing documentation.

More details can be found at the job posting here:
https://www.jobsatosu.com/postings/78836

PSSC Labs Announces “Greenest” New Eco Blade Server Platform for HPC

HPC Wire - Mon, 05/22/2017 - 07:47

LAKE FOREST, Calif., May 22, 2017 — PSSC Labs, a developer of custom HPC and Big Data computing solutions, today announced its new Eco Blade server platform, the most energy efficient high performance blade server on the market.

The Eco Blade is a unique server platform engineered specifically for high performance, high density computing environments – simultaneously increasing compute density while decreasing power use. Eco Blade offers two complete, independent servers within 1U of rack space. Each independent server supports up to 64 Intel Xeon processor cores and 1.0 TB of enterprise memory, for a total of up to 128 cores and 2 TB of memory per 1U.

There is no shared power supply or backplane, a unique design feature that translates to power savings of up to 46% in per-server comparisons with similarly capable servers from leading brands. By lowering the power consumption of these servers, PSSC Labs is offering the greenest server of its kind on the market, translating to lower lifetime ownership costs for institutions that adopt the Eco Blade.

According to the EPA, volume servers were responsible for 68% of all electricity consumed by IT equipment in data centers in 2006. A study by the US government found that in 2014 US data centers consumed about 70 billion kilowatt-hours of electricity, roughly 2% of the country’s total electricity consumption. By using power supplies that are more than 90% efficient and combining them with power-saving features, the Eco Blade delivers significant savings in energy costs over the lifetime of the product. Along with the significant reduction in power use, the Eco Blade is built from 55% recyclable material, a move that cements PSSC Labs’ commitment to sustainable enterprise server solutions that reduce waste and power progress.

“The global thirst for more computing power and storage means demand for volume servers, and the resulting energy consumption, will continue to rise. As an industry, it is our responsibility to find ways to reduce power consumption while still providing the computing ability needed to fuel cutting-edge research and groundbreaking enterprises,” said Alex Lesser, Vice President of PSSC Labs. “PSSC Labs has taken a big step in engineering an HPC/data center server that does not compromise on performance but will significantly reduce power consumption. By deploying the Eco Blade server instead of traditional servers from other manufacturers, companies will reduce their capex and opex while being good stewards of the environment.”

In addition to the lifetime savings in energy costs, the Eco Blade allows higher-density rack configurations, which reduce the amount of infrastructure and networking equipment required, translating to large cost savings at initial purchase as well as lower recurring maintenance costs.
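
To make the savings claim concrete, here is a back-of-the-envelope sketch of the annual and lifetime energy-cost reduction implied by a 46% cut in per-server power draw. The baseline wattage, electricity price, and service life are illustrative assumptions, not PSSC Labs figures.

```python
# Rough estimate of the energy-cost savings implied by a 46% reduction in
# per-server power draw. All inputs below are illustrative assumptions.
BASELINE_WATTS = 500          # assumed average draw of a comparable 1U server
POWER_SAVING = 0.46           # "up to 46%" per-server saving cited above
PRICE_PER_KWH = 0.10          # assumed electricity price in USD
HOURS_PER_YEAR = 24 * 365
SERVICE_LIFE_YEARS = 5        # assumed deployment lifetime

baseline_kwh = BASELINE_WATTS * HOURS_PER_YEAR / 1000
saved_kwh = baseline_kwh * POWER_SAVING
saved_dollars = saved_kwh * PRICE_PER_KWH

print(f"Annual energy saved per server: {saved_kwh:,.0f} kWh")
print(f"Annual cost saved per server:   ${saved_dollars:,.2f}")
print(f"Over {SERVICE_LIFE_YEARS} years: ${saved_dollars * SERVICE_LIFE_YEARS:,.2f}")
```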

Eco Blade Technical Specs:

  • Supports 2 x Intel Xeon E5-2600v4 Processors
    • Up to 128 Processor Cores in just 1U of rack space (with hyperthreading enabled)
  • Up to 1.0TB of High Performance ECC Registered System Memory Per Server
  • 2 x Redundant SSD Operating System Hard Drives
  • Network Connectivity Options include 10GigE, 40GigE and 100GigE
    • Support for Mellanox Infiniband and Intel Omnipath
  • Remote Management through Dedicated IPMI 2.0 Network
  • Certified Compatible with Red Hat Linux, CentOS Linux, Ubuntu Linux, Microsoft Operating Systems

Application Compatibility

  • Docker
  • Kubernetes
  • Mesos
  • OpenStack
  • Joyent
  • Rancher
  • Chef
  • Puppet
  • High Performance Computing (HPC) workloads

All PSSC Labs server installations include a one-year unlimited phone/email support package (additional years of support available), with fully US-based support staff. Prices for custom-built Eco Blade servers start at $2,299. For more information on the Eco Blade, visit http://www.pssclabs.com/servers/virtualization-servers/.

About PSSC Labs

For technology-powered visionaries with a passion for challenging the status quo, PSSC Labs is the answer for hand-crafted HPC and Big Data computing solutions that deliver relentless performance with the absolute lowest total cost of ownership. All products are designed and built at the company’s headquarters in Lake Forest, California. For more information, call 949-380-7288, visit www.pssclabs.com, or email sales@pssclabs.com.

Source: PSSC Labs

The post PSSC Labs Announces “Greenest” New Eco Blade Server Platform for HPC appeared first on HPCwire.

AI: Breaking Down Performance Barriers With Intel® Scalable System Framework

HPC Wire - Mon, 05/22/2017 - 01:01

Interest in artificial intelligence (AI) is rising fast, as practical applications in speech, image analysis, and other complex pattern recognition tasks have now surpassed human accuracy. Once trained, a neural network can be deployed on edge devices or in the cloud to meet demands in near real-time. Yet training a deep neural network within a reasonable time frame requires the speed and scale of a high performance computing (HPC) system.

Like many other HPC workloads, neural network training is both compute- and data-intensive. It stresses every aspect of cluster design, from floating-point performance, to memory-bandwidth and capacity, to message latency and network bandwidth. Powerful processors are essential, but not sufficient to meet these intense demands.

Figure 1. Intel® Scalable System Framework simplifies the design of efficient, high-performing clusters that optimize the value of HPC investments.

Intel® Scalable System Framework (Intel® SSF) is designed to address the technology barriers that currently limit performance for neural network training and other HPC workloads. The framework accomplishes this by delivering balanced high-performance at every layer of the solution stack—compute, memory, storage, fabric, and software. This holistic, system-level approach simplifies the design of optimized clusters and helps organizations take advantage of disruptive new technologies with less effort and lower risk.

That’s a good thing, because disruptive new technologies are coming fast.

Powerful Compute for AI Acceleration

Intel® Xeon® processors and Intel® Xeon Phi™ processors are key compute components of Intel SSF. Intel announcements at SC16 highlighted their success versus GPUs in addressing the performance and scalability challenges of deep neural networks. This is just the beginning. The Intel processor roadmap is poised to deliver a 100X increase in neural network training performance within the next three years[1], shattering the performance barriers that currently slow AI innovation. Intel SSF will help to unleash the full potential of these and other Intel processor innovations.

Groundbreaking Memory and Storage Technologies

The performance gap between processors and memory/storage solutions has been widening for decades, requiring ever-more complex workarounds. Beginning with the current Intel Xeon Phi processor family, Intel is offering up to 16 GB of fast on-package memory to help resolve the data access bottleneck. This, too, is just a beginning. Intel breakthroughs in memory and storage technology are beginning to enter the market now, and are pivotal to the Intel SSF roadmap. These innovations will allow memory and storage to finally catch up with processor performance, enabling transformative new efficiencies that will redefine what is possible and affordable in AI and other data-driven fields.

A Fabric for the Future of AI

Neural network training is a tightly-coupled application that alternates compute-intensive number crunching with cluster-wide data sharing, so fabric performance is critical. Intel SSF addresses this challenge with Intel® Omni-Path Architecture (Intel® OPA), which matches the line speed of EDR InfiniBand and includes optimizations to improve message passing efficiency, fabric scalability, and cost models. Today’s Intel Xeon Phi processors offer integrated Intel OPA controllers to further reduce latency and cost. Ongoing processor and fabric integration will increase efficiency at every scale and provide a cost-effective foundation for the massive neural networks of tomorrow.
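
The communication pattern described here can be illustrated with a minimal data-parallel training sketch: each rank computes local gradients, then an allreduce over the fabric averages them before the next step. This is a generic mpi4py/NumPy illustration of the pattern, not Intel’s software stack; the model size, step count, and learning rate are arbitrary.

```python
# Synchronous data-parallel training pattern: local compute, then a
# cluster-wide allreduce of gradients over the fabric.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_params = 1_000_000                       # stand-in for the model size
weights = np.zeros(n_params, dtype=np.float64)

for step in range(10):
    # Compute-intensive phase: stand-in for a local forward/backward pass.
    local_grad = np.random.default_rng(rank + step).standard_normal(n_params)

    # Communication phase: sum gradients across all ranks, then average.
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    global_grad /= size

    weights -= 0.01 * global_grad          # simple SGD update

if rank == 0:
    print(f"completed 10 steps on {size} ranks")
```

Launched with, say, `mpirun -n 4 python train_sketch.py` (the filename is arbitrary), the time each step spends inside the allreduce is exactly the cost that a lower-latency, higher-bandwidth fabric reduces.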

Optimized Software that Ties It All Together

Although foundational AI algorithms have been around since the mid-1960s, they were designed for functionality, not performance. Intel is working with vendors and the open source community to deliver software that is highly optimized for performance on Intel architecture across the full breadth of AI and HPC needs. This includes everything from core math libraries and machine learning frameworks, to memory- and logic-based AI applications. It also includes essential system software, such as Intel® HPC Orchestrator, which helps to simplify the design, deployment, and use of high-performing systems that can scale cost-effectively to support extreme requirements.

A Launching Pad for AI Innovation

The next wave of AI innovation will require enormous new computing capability. Intel SSF provides a unified platform that enables a leap forward in performance and efficiency for AI and a host of other HPC workloads, including big data analytics, data visualization, and digital simulation.

As innovation heats up, the advantages will grow. Intel SSF will help AI innovators ride the wave of escalating performance while maintaining application compatibility[2], so they can focus on driving deeper and more useful intelligence into almost everything they create.

We can’t wait to see the results.

Stay tuned for more articles focusing on the benefits Intel SSF brings to AI at each level of the solution stack through balanced innovation in compute, memory, storage, fabric, and software technologies.

[1] https://www.hpcwire.com/2016/11/21/intel-details-ai-hardware-strategy/

[2] The aim of Intel SSF is to help drive generation by generation performance gains that benefit existing software without requiring a recompile. Additional and in some cases massive gains may become possible through software optimization.

 

The post AI: Breaking Down Performance Barriers With Intel® Scalable System Framework appeared first on HPCwire.

SDSC Comet: Lustre scratch filesystem issue resolved

XSEDE News - Sun, 05/21/2017 - 09:04

There was an issue with the Lustre scratch filesystem (/oasis/scratch/comet) this morning and it is impacting jobs on the system. We are looking into the problem and will update once we have resolution. Please email help@xsede.org if you have any questions.

IBM, D-Wave Report Quantum Computing Advances

HPC Wire - Thu, 05/18/2017 - 23:00

IBM said this week it has built and tested a pair of quantum computing processors, including a prototype of a commercial version.

That progress follows an announcement earlier this week that commercial quantum computer developer D-Wave Systems has garnered venture funding that could total up to $50 million to build its next-generation machine with up to 2,000 qubits.

Also this week, Hewlett Packard Enterprise Labs introduced what it claims is the largest single-memory computer. The premise behind HPE’s approach is putting memory rather than the processor at the center of the computing architecture. The prototype unveiled this week contains 160 terabytes of main memory spread across 40 nodes that are connected via a high-speed fabric protocol.

Meanwhile, IBM researchers continue to push the boundaries of quantum computing as part of its IBM Q initiative launched in March to promote development of a “universal” quantum computer. Access to a 16-qubit processor via the IBM cloud would allow developers and researchers to run quantum algorithms. The new version replaces an earlier 5-qubit processor.

The company also rolled out on Wednesday (May 17) the first prototype of a 17-qubit commercial processor, making it IBM’s most powerful quantum device. The prototype will serve as the foundation of IBM Q’s commercial access program. The goal is to eventually scale future prototypes to 50 or more qubits.

IBM has provided researchers with free cloud access to its quantum processors to test algorithms and develop new applications ranging from modeling financial data and machine learning to cloud security. (The flip side of the data security equation, critics argue, is that quantum computers could some day be used to defeat current data encryption methods.)

Another commercial developer, D-Wave Systems, said this week it has landed $30 million in investor funding to build a next-generation quantum computer with “more densely-connected qubits” for machine learning and other applications. The funding was provided by Canada’s Public Sector Pension Investment Board, which plans to invest an additional $20 million if D-Wave achieves certain development milestones.

Earlier this year, cyber-security specialist Temporal Defense Systems purchased the first D-Wave 2000Q system.

The post IBM, D-Wave Report Quantum Computing Advances appeared first on HPCwire.

OCF Delivers 600 Teraflop Supercomputer for University of Bristol

HPC Wire - Thu, 05/18/2017 - 14:13

May 18, 2017 — For over a decade the University of Bristol has been contributing to world-leading and life-changing scientific research using High Performance Computing (HPC), having invested over £16 million in HPC and research data storage. To continue meeting the needs of its researchers working with complex and large amounts of data, the University will now benefit from a new HPC machine, named BlueCrystal 4 (BC4). Designed, integrated and configured by the HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores, making it the largest UK university system by core count, and a theoretical peak performance of 600 teraflops.

Over 1,000 researchers in areas such as paleobiology, earth science, biochemistry, mathematics, physics, molecular modeling, life sciences, and aerospace engineering will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

“We have researchers looking at whole-planet modeling with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,” said Dr Christopher Woods, EPSRC Research Software Engineer Fellow, University of Bristol. “Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement that’ll be a great benefit to research at the University.”

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4 GHz Intel Broadwell CPUs with 128 GiB of RAM. It also includes 32 nodes with two NVIDIA Pascal P100 GPUs each, plus one GPU login node, designed into the rack by Lenovo’s engineering team to meet the specific requirements of the University.
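
For readers who want to sanity-check the headline figure, a rough theoretical-peak estimate can be derived from the node description above. The node count below is an assumption chosen to give roughly 15,000 CPU cores (it is not stated in the article), and the P100 GPU nodes would add further peak performance on top of the CPU figure.

```python
# All inputs except the per-node CPU configuration are assumptions; the node
# count is chosen to give roughly 15,000 cores and is not stated in the article.
CPU_NODES = 550            # assumed
CORES_PER_NODE = 2 * 14    # two 14-core Broadwell CPUs per node
CLOCK_GHZ = 2.4
FLOPS_PER_CYCLE = 16       # AVX2 with FMA: 16 double-precision FLOPs/cycle/core

cores = CPU_NODES * CORES_PER_NODE
peak_tflops = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000   # GFLOPS -> TFLOPS

print(f"{cores:,} CPU cores, ~{peak_tflops:,.0f} TFLOPS theoretical CPU peak")
```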

Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100 Gb/s. BC4’s storage comprises one petabyte of disk provided by DDN’s GS7k and IME systems running the Spectrum Scale parallel file system from IBM.

Effective benchmarking and optimisation, using the capabilities of Lenovo’s HPC research centre in Stuttgart, the first of its kind, has ensured that BC4 is highly efficient in terms of physical footprint while fully utilising the 30 kW per-rack power limit. Lenovo’s commitment to third-party integration has allowed the University to avoid vendor lock-in while permitting new hardware to be added easily between refresh cycles.

Dr Christopher Woods continues: “To help with the interactive use of the cluster, BC4 has a visualisation node equipped with NVIDIA Grid vGPUs so it helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.”

Housed at VIRTUS’ LONDON4, the UK’s first shared data centre for research and education in Slough, BC4 is the first of the University’s supercomputers to be held at an independent facility. The system is directly connected to the Bristol campus via JISC’s high speed Janet network. Kelly Scott, account director, education at VIRTUS Data Centres said, “LONDON4 is specifically designed to have the capacity to host ultra high density infrastructure and high performance computing platforms, so an ideal environment for systems like BC4. The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.”

Currently numbering in the hundreds, the applications running on the University’s previous cluster will be replicated on the new system, allowing researchers to create more applications and better-scaling software. Applications can be moved directly onto BC4 without re-engineering.

“We’re now in our tenth year of using HPC in our facility. We’ve endeavored to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,” commented Caroline Gardiner, Academic Research Facilitator at the University of Bristol.

Simon Burbidge, Director of Advanced Computing comments: “It is with great excitement that I take on the role of Director of Advanced Computing at this time, and I look forward to enabling the University’s ambitious research programmes through the provision of the latest computational techniques and simulations.”

Due to be launched at an event on 24th May at the University of Bristol, BC4 will serve over 1,000 system users carried over from BlueCrystal Phase 3.

Source: OCF

The post OCF Delivers 600 Teraflop Supercomputer for University of Bristol appeared first on HPCwire.

Bozdag contributes to revolutionary 3D model of Earth's interior

Colorado School of Mines - Thu, 05/18/2017 - 13:49

Ebru Bozdag, Assistant Professor in the Department of Geophysics, is working with an international team of researchers to develop better models of the Earth’s interior. These models will help scientists understand the layers of the earth and how the inner workings of the planet affect the life upon it.

From the story:

Seismologists usually use a technique known as seismic tomography, similar to a CAT scan of the human body, to map our planet’s inner structure.

Categories: Partner News

PRACEdays 2017 Wraps Up in Barcelona

HPC Wire - Thu, 05/18/2017 - 13:01

Barcelona has been absolutely lovely; the weather, the food, the people. I am, sadly, finishing my last day at PRACEdays 2017 with two sessions: an in-depth look at the oil and gas industry’s use of HPC and a panel discussion on bridging the gap between scientific code development and exascale technology.

Henri Calandra of Total SA spoke on the challenges of increased HPC complexity and value delivery for the oil and gas industry.

The main challenge Total and other oil and gas companies are finding is that discoveries of oil deposits are becoming more rare. To stay competitive, they need to first and foremost open new frontiers for oil discovery, but do this while reducing risk and costs.

In the 1980s, seismic data was reviewed in two dimensions. The 1990s saw the development of 3D seismic depth imaging. Through the 2000s, 3D depth imaging improved as wave equations were added to traditional imaging. The 2010s brought more physics, more accurate images, and more complex processes for visualizing the seismic data.

Henri Calandra of Total SA

The industry continues to see drastic improvements. A seismic simulation that took four weeks to run in 2010 took just one day in 2016. Images have significantly higher resolution, and the amount of detail visible in the images enables Total to be more precise in identifying seismic fields and potential hazards in drilling.

If you look closely at the pictures (shown on the slide), you can make out improvements in the image quality. Although they may seem slight to our eyes, the geoscientists can see the small nuances in the images that help them be more precise, identify hazards, and achieve a better positive acquisition rate.

How did this change over the last 30+ years happen? Improved technology, integrating more advanced technologies, improved processes, more physics, more complex algorithms – basically more HPC.
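The “more physics” here refers to wave-equation-based imaging. As a flavor of the kind of kernel involved, below is a minimal 1D finite-difference sketch of acoustic wave propagation; production seismic imaging codes are 3D, far more sophisticated, and run at vastly larger scale, and every parameter here is an illustrative assumption.

```python
# 1D acoustic wave equation, second-order finite differences (leapfrog in time).
# Grid, time step, velocities, and source are all illustrative assumptions.
import numpy as np

nx, nt = 500, 1000            # grid points, time steps
dx, dt = 5.0, 0.0005          # metres, seconds (CFL number ~0.35, stable)
c = np.full(nx, 2500.0)       # P-wave velocity (m/s)
c[nx // 2:] = 3500.0          # a single velocity contrast acting as a reflector

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[50] = 1.0                   # impulsive source near one end of the model

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_prev, p = p, p_next

print(f"max |pressure| after {nt} steps: {np.abs(p).max():.3e}")
```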

Using HPC, Total has been able to reduce their risks, become more precise and selective on their explorations, identify potential oil fields faster, and optimize their seismic depth imaging.

What’s next: opening new frontiers, enabled by better appraisal of potential new opportunities. HPC has enabled seismic depth imaging methods that can do more iterations, more physics, and more complex approximations. Models are larger, with multiple resolutions and 4D data. Interactive processing happens during drilling, and these multiple real-time simulations allow adjustments to the drilling, improving the success rate of finding oil.

Developing new algorithms is a long-term process that typically lasts across several generations of supercomputers. Of course, the oil and gas industry is looking forward to exascale. But the future is complex: complexity in compute, in the form of manycore processors, accelerators, and heterogeneous systems; complexity in storage, with an abundance of data moving between tiers built on multiple storage technologies; and complexity in tools such as OpenCL, CUDA, OpenMP, OpenACC, and compilers. There is a need for standardized tools to hide the hardware complexity and help the users of HPC systems.

None of this can be addressed without HPC specialists. Application development cannot be done without strong collaboration between physicists, scientists, and the HPC team. This constant progress will continue to improve the predictions Total relies on for finding productive oil fields.

The second session of the day was a panel moderated by Inma Martinez titled “Bridging the gap between scientific code development and exascale technology.” Much of the focus was on the software challenges for extreme-scale computing faced by the community.

The panelists:

  • Henri Calandra, Total
  • Lee Margetts, NAFEMS
  • Erik Lindahl, PRACE Scientific Steering Committee
  • Frauke Gräter, Heidelberg Institute for Theoretical Studies
  • Thomas Skordas, European Commission

This highly anticipated session looked at the gap between hardware, software, and application advances and the role of industry, academia and the European Commission in the development of software for HPC systems.

Thomas Skordas pointed out that driving leadership in exascale is important and that it’s about much more than hardware: it’s about next-generation code, training, and understanding what exascale makes possible.

Frauke Gräter sees data as a significant challenge: the accumulation of more and more data and the analysis of that data. In the end, scientists are looking for insights, and research organizations will invest in science.

Parallelizing the algorithms is the key action, according to Erik Lindahl. There is too much focus on the exascale machine itself; algorithms need to be good to make the best use of the hardware. Exascale, expected to arrive around 2020, is not likely to be a staple in commercial datacenters until 2035. There is not a supercomputer in the world that does not run open source software, and exascale machines will follow this practice.

Lee Margetts talked of “monster machines” — the large compute clusters in every datacenter. As large vendors adopt artificial intelligence and machine learning, will we see the end of the road for the large “monster” machines? We have very sophisticated algorithms and are using very sophisticated computing. What if this technology that is used in something like oil and gas were used to predict volcanoes or earthquakes — the point being, can technologies be used for more than one science?

Henri Calandra noted that data analytics and storage will become a huge issue. If we move to exascale, we’ll have to deal with thousands of compute nodes and update code for all these machines.

The biggest challenge is the software challenge.

When asked about the new science we will see, the panelists gave answers that fit their spheres of knowledge. Thomas spoke of brain modeling and self-driving cars. Frauke added genome assembly and new scientific disciplines such as personalized medicine. She says, “To attract young people, we need to marry machine learning and deep learning into HPC.” Erik notes that we have a revolution of data because of accelerators; data and accelerators enabling genome research will drive work in this area. Lee spoke of integrating machine learning into manufacturing processes.

Kim McMahon, XAND McMahon

As Lee said, “Diversity in funding through the European commission is really important – we need to fund the mavericks as well as the crazy ones.”

My takeaway is that the accomplishment of an exascale machine is not the goal that will drive the technology forward. It’s the analysis of the data. The algorithms. Parallelizing code. There will be some who will buy the exascale machine, but it will be years after it’s available before it’s broadly accepted. As Lee said, “the focus is not the machine, the algorithms or the software, but delivering on the science. Most people in HPC are domain scientists who are trying to solve a problem.”

The post PRACEdays 2017 Wraps Up in Barcelona appeared first on HPCwire.

MIT Grad Earns ACM Doctoral Dissertation Award

HPC Wire - Thu, 05/18/2017 - 12:38

NEW YORK, May 17, 2017 – Haitham Hassanieh is the recipient of the Association for Computing Machinery (ACM) 2016 Doctoral Dissertation Award. Hassanieh developed highly efficient algorithms for computing the Sparse Fourier Transform, and demonstrated their applicability in many domains including networks, graphics, medical imaging and biochemistry. In his dissertation “The Sparse Fourier Transform: Theory and Practice,” he presented a new way to decrease the amount of computation needed to process data, thus increasing the efficiency of programs in several areas of computing.

In computer science, the Fourier transform is a fundamental tool for processing streams of data. It identifies frequency patterns in the data, a task that has a broad array of applications. For many years, the Fast Fourier Transform (FFT) was considered the most efficient algorithm in this area. With the growth of Big Data, however, the FFT cannot keep up with the massive increase in datasets. In his doctoral dissertation Hassanieh presents the theoretical foundation of the Sparse Fourier Transform (SFT), an algorithm that is more efficient than FFT for data with a limited number of frequencies. He then shows how this new algorithm can be used to build practical systems to solve key problems in six different applications including wireless networks, mobile systems, computer graphics, medical imaging, biochemistry and digital circuits. Hassanieh’s Sparse Fourier Transform can process data at a rate that is 10 to 100 times faster than was possible before, thus greatly increasing the power of networks and devices.
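
The spectral sparsity that the SFT exploits can be seen with a toy NumPy example (this uses the ordinary FFT, not Hassanieh’s algorithm): a signal composed of only three tones yields a spectrum in which almost every coefficient is negligible.

```python
# A signal built from just three tones has a spectrum in which almost every
# FFT coefficient is negligible -- the structure the SFT exploits.
import numpy as np

n = 4096
t = np.arange(n) / n
signal = (2.0 * np.sin(2 * np.pi * 50 * t)
          + 1.0 * np.sin(2 * np.pi * 120 * t)
          + 0.5 * np.sin(2 * np.pi * 300 * t))

spectrum = np.fft.fft(signal)
magnitude = np.abs(spectrum)

significant = np.flatnonzero(magnitude > 0.01 * magnitude.max())
print(f"{len(significant)} of {n} coefficients are significant")      # 6 of 4096
print("dominant frequency bins:", significant[significant < n // 2])  # 50, 120, 300
```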

Hassanieh is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his MS and PhD in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). A native of Lebanon, he earned a BE in Computer and Communications Engineering from the American University of Beirut. Hassanieh’s Sparse Fourier Transform algorithm was chosen by MIT Technology Review as one of the top 10 breakthrough technologies of 2012. He has also been recognized with the Sprowls Award for Best Dissertation in Computer Science, and the SIGCOMM Best Paper Award.

Honorable Mention for the 2016 ACM Doctoral Dissertation Award went to Peter Bailis of Stanford University and Veselin Raychev of ETH Zurich. The 2016 Doctoral Dissertation Award recipients will be formally recognized at the annual ACM Awards Banquet on June 24 in San Francisco, CA.

In Bailis’s dissertation, “Coordination Avoidance in Distributed Databases,” he addresses a perennial problem in a network of multiple computers working together to achieve a common goal: Is it possible to build systems that scale efficiently (process ever-increasing amounts of data) while ensuring that application data remains provably correct and consistent? These concerns are especially timely as Internet services such as Google and Facebook have led to a vast increase in the global distribution of data. In addressing this problem, he introduces a new framework, invariant confluence, that mitigates the fundamental tradeoffs between coordination and consistency. His dissertation breaks new conceptual ground in the areas of transaction processing and distributed consistency—two areas thought to be fully understood. Bailis is an Assistant Professor of Computer Science at Stanford University. He received a PhD in Computer Science from the University of California, Berkeley and his AB in Computer Science from Harvard College.

Raychev’s dissertation, “Learning from Large Codebases,” introduces new methods for creating programming tools based on probabilistic models of code that can solve tasks beyond the reach of current methods. As the size of publicly available codebases has grown dramatically in recent years, so has interest in developing programming tools that solve software tasks by learning from these codebases. His dissertation takes a novel approach to addressing this challenge that combines advanced techniques in programming languages with machine learning practices. In the thesis, Raychev lays out four separate methods that detail how machine learning approaches can be applied to program analysis in order to produce useful programming tools. These include: code completion with statistical language models; predicting program properties from big code; learning programs from noisy data; and learning statistical code completion systems. Raychev’s work is regarded as having the potential to open up several promising new avenues of research in the years to come. Raychev is currently a co-founder of the company DeepCode. He received a PhD in Computer Science from ETH Zurich. A native of Bulgaria, he received MS and BS degrees from Sofia University.
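
As a flavor of what “code completion with statistical language models” means in its simplest form, here is a toy bigram model that counts which token tends to follow which in a tiny corpus and suggests the most likely continuation. It is a deliberately minimal sketch of the general idea, not Raychev’s actual techniques, and the corpus is invented.

```python
# A bigram model over a tiny, invented token corpus: count which token follows
# which, then suggest the most frequent continuation.
from collections import Counter, defaultdict

corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["while", "i", "<", "n", ":"],
]

bigrams = defaultdict(Counter)
for tokens in corpus:
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1


def suggest(prev_token):
    """Return the most frequent continuation seen after prev_token, or None."""
    counts = bigrams.get(prev_token)
    return counts.most_common(1)[0][0] if counts else None


print(suggest("in"))    # -> "range"
print(suggest("for"))   # -> "i" (ties broken by first occurrence)
```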

About the ACM Doctoral Dissertation Award

The Doctoral Dissertation Award is presented annually to the author(s) of the best doctoral dissertation(s) in computer science and engineering. The award is accompanied by a prize of $20,000, and the Honorable Mention Award is accompanied by a prize totaling $10,000. Financial sponsorship of the award is provided by Google. Winning dissertations will be published in the ACM Digital Library as part of the ACM Books Series.

About ACM

ACM, the Association for Computing Machinery (www.acm.org), is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

Source: ACM

The post MIT Grad Earns ACM Doctoral Dissertation Award appeared first on HPCwire.

Spear facilitated work on new climate change report

Colorado School of Mines - Thu, 05/18/2017 - 12:29

A new joint report, "Microbes and Climate Change," from the American Society for Microbiology (ASM) and the American Geophysical Union (AGU) explores how microorganisms have both changed and been changed by the climate throughout Earth's past. John Spear, professor of Civil and Environmental Engineering at Mines, facilitated the work of the authoring committee for the report, which was the output of a one-day research colloquium.

Categories: Partner News

US, Europe, Japan Deepen Research Computing Partnership

HPC Wire - Thu, 05/18/2017 - 12:08

On May 17, 2017, a ceremony was held during the PRACEdays 2017 conference in Barcelona to announce the memorandum of understanding (MOU) between PRACE in Europe, RIST in Japan, and XSEDE in the United States. The MOU allows for the promotion and sharing of resources between the organizations, including PRACE’s federated resources in Europe, the K computer and other systems in Japan, and XSEDE’s network of HPC systems and advanced digital services in the US.

Discussing details of the enhanced partnership were Dr. Anwar Osseyran, council chair of the Partnership for Advanced Computing in Europe (PRACE); John Towns, principal investigator and project director for the Extreme Science and Engineering Discovery Environment (XSEDE); and Masahiro Seki, president of the Research Organization for Information Science and Technology (RIST).

XSEDE PI John Towns (left) with RIST President Masahiro Seki (center) and PRACE Council Chair Anwar Osseyran (right).

“The aim is to stimulate collaboration in the area of research and computational science by sharing information on the usage of supercomputers,” said Dr. Osseyran. “The collaboration will be of mutual benefit, reciprocity and equality and we will identify the capabilities of cooperation in the areas of science, technology and industry. [Further, the MOU] will reinforce the HPC ecosystems for all of us.”

The agreement builds on the partners’ work with the International HPC Summer School. (The eighth such event will take place June 25 to June 30, 2017, in Boulder, Colorado, United States. Compute Canada is also a partner.)

“As research becomes much more an international endeavor, the need for infrastructure to collaborate closely and support those research endeavors becomes even more important,” said John Towns. “Having the agreement such as the one we have signed now facilitates the collaboration of the infrastructures and allows us to promote science and engineering and industry work and the use of HPC resources and very importantly the associated services and support staff that surround them. Being able to effectively use these resources is quite important and often they are very difficult as the technology moves very rapidly, so having access to the expertise is also critical. I’m very happy to be a part of this and I look forward to our work [together].”

“The three parties — PRACE, XSEDE and RIST — have recognized the importance of trilateral collaboration,” said Masahiro Seki. “Finalizing the MOU today makes me happier than anything else. In the new MOU, we will continuously implement…in the area of promotional shared use of supercomputers; at the same time our collaboration will be accelerated through the users of all the members of the partnering organizations, and especially the trilateral union will be of great help to promote advanced supercomputing in the field of sensor technology and in industry.”

The ceremony commemorates the official signing which took place on April 4, 2017. The agreement contains the following elements:

(1) Exchange of information: Mutual exchange of experiences and knowledge in user selection and user support etc. is helpful for the three parties in order to execute their projects more effectively and efficiently.

(2) Interaction amongst the staff of the parties in pursuing any identified collaboration opportunities: Due to the complex and international nature of science, engineering and analytics challenge problems that require highly advanced computing solutions, collaborative support between RIST, PRACE and XSEDE will enhance the productivity of globally distributed research teams.

(3) Holding technical meetings: Technical meetings will be held to support cross organizational information exchange and collaboration.

The post US, Europe, Japan Deepen Research Computing Partnership appeared first on HPCwire.

NSF, IARPA, and SRC Push into “Semiconductor Synthetic Biology” Computing

HPC Wire - Thu, 05/18/2017 - 09:59

Research into how biological systems might be fashioned into computational technology has a long history with various DNA-based computing approaches explored. Now, the National Science Foundation has fired up a new program – Semiconductor Synthetic Biology for Information Processing and Storage Technologies – and just issued a solicitation in which eight to ten grants totaling around $4 million per year for three years are expected to be awarded.

The program is a joint effort between NSF, the Intelligence Advanced Research Projects Activity (IARPA), and Semiconductor Research Corporation (SRC). It has grand ambitions and was the subject of a Computing Community Consortium blog posted yesterday by Mitra Basu, the program director: “New information technologies can be envisioned that are based on biological principles and that use biomaterials in the fabrication of devices and components; it is anticipated that these information technologies could enable stored data to be retained for more than 100 years and storage capacity to be 1,000 times greater than current capabilities. These could also facilitate compact computers that will operate with substantially lower power than today’s computers.”

Five goals are specified and each submission must include elements of at least three (proposals are due in October 2017):

  • Advancing basic and fundamental research by exploring new programmable models of computation, communication, and memory based on synthetic biology.
  • Enriching the knowledge base and addressing foundational questions at the interface of biology and semiconductors.
  • Promoting the frontier of research in the design of new bio-nano hybrid devices based on sustainable materials, including carbon-based systems that test the physical size limit in transient electronics.
  • Designing and fabricating hybrid semiconductor-biological microelectronic systems based on living cells for next-generation information processing functionalities.
  • Integrating scaling-up and manufacturing technologies involving electronic and synthetic biology characterization instruments with CAD-like software tools.

The solicitation notes that “semiconductor and information technologies are facing many challenges as CMOS/Moore’s Law approaches its physical limits, with no obvious replacement technologies in sight. Several recent breakthroughs in synthetic biology have demonstrated the suitability of biomolecules as carriers of stored digital data for memory applications…[T]he (SemiSynBio) solicitation seeks to explore synergies between synthetic biology and semiconductor technologies. Today is likely to mark the beginning of a new technological boom to merge and exploit the two fields for information processing and storage capacity.”
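
To give a sense of how “biomolecules as carriers of stored digital data” works at the most basic level, here is a toy sketch that maps bits onto the four DNA bases, two bits per nucleotide. Real DNA-storage schemes add error correction and avoid problematic sequences; this is only an illustration of the mapping, not anything specified in the solicitation.

```python
# Two bits per nucleotide: a toy mapping of digital data onto DNA bases.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}


def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))


def decode(strand: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))


strand = encode(b"HPC")
print(strand)                      # CAGACCAACAAT
assert decode(strand) == b"HPC"
```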

As described in the solicitation, the program’s goal is to “foster exploratory, multi-disciplinary, longer-term basic research leading to novel high-payoff solutions for the information technology industry based on recent progress in synthetic biology and the know-how of semiconductor technology. It is also anticipated that research in synthetic biology will benefit by leveraging semiconductor capabilities in design and fabrication of hybrid and complex material systems for extensive applications in biological and information processing technologies. In addition, the educational goal is to train a new cadre of students and researchers.”

A bit tongue in cheek, and certainly not noticed for the first time, it’s safe to say nature has already figured out how to do this at least once, at a high level (perhaps), with human computers conditioned by deep learning and programmed to survive, explore, and continue learning.

Link to NSF solicitation: https://www.nsf.gov/pubs/2017/nsf17557/nsf17557.htm

Link to CCC blog: http://www.cccblog.org/2017/05/17/new-nsf-program-solicitation-on-semiconductor-synthetic-biology-for-information-processing-and-storage-technologies-semisynbio/

The post NSF, IARPA, and SRC Push into “Semiconductor Synthetic Biology” Computing appeared first on HPCwire.

Rescale Named a “Cool Vendor” by Gartner

HPC Wire - Thu, 05/18/2017 - 09:09

SAN FRANCISCO, Calif., May 18, 2017 — Rescale, the turnkey platform provider in cloud high performance computing, today announced that it has been named a “Cool Vendor” based on the May 2017 report “Cool Vendors in Cloud Infrastructure, 2017” by leading industry analyst firm Gartner.

The report makes recommendations for infrastructure and operations (I&O) leaders seeking to modernize and exploit more agile solutions, including the following:

  • “I&O leaders should examine these Cool Vendors closely and leverage the opportunities that they provide.”
  • “As enterprises grapple with the right mix of on-premises, off-premises and native cloud, choosing a cloud infrastructure vendor becomes more critical.”

“Rescale is very excited to be named a Gartner ‘Cool Vendor’ for 2017,” said Jonathan Oakley, VP of Marketing at Rescale. “High-performance computing (HPC) is the fastest growing compute segment of the cloud market with significant pent-up demand as CIOs look for efficiencies and agility in their IT infrastructure during their cloud transformation, while at the same time satisfying the increasing HPC demands of end-users. Those HPC end-users have become mission critical for the Global Fortune 500 leaders in aerospace, automotive, energy, financial services, industrials, life sciences, and semiconductor industry segments.”

Rescale’s ScaleX platform provides the enterprise with a turnkey multi-cloud solution accessing the largest network of HPC capacity globally, with over 60 data center locations, as well as hybrid solutions that allow CIOs to leverage existing infrastructure assets. There are no compromises with Rescale’s suite of solutions. Rescale works with all major public cloud providers, including Amazon Web Services, Google Cloud Platform, IBM Cloud, and Microsoft Azure, along with over 220 natively integrated software solutions from leading vendors ANSYS, Dassault Systemes, Siemens, and many others.

Rescale enables immediate benefits across the enterprise, such as:

  • Faster time to market – shortened design cycles and improved software deployment
  • Transformed IT agility – instant access to HPC infrastructure and global collaboration
  • Integrated solutions – hybrid cloud, private cloud, public cloud, and on-premises compute
  • Optimized cost structure – pay as you go for only what you need

Disclaimer:

Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Rescale

Rescale is the global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale

The post Rescale Named a “Cool Vendor” by Gartner appeared first on HPCwire.

In-Memory Computing Summit Previews Keynotes for First Annual European Conference

HPC Wire - Thu, 05/18/2017 - 09:07

FOSTER CITY, Calif., May 18, 2017 — GridGain Systems, provider of enterprise-grade in-memory computing platform solutions based on Apache Ignite, today announced the keynote addresses for the In-Memory Computing Summit Europe, the premier In-Memory Computing (IMC) conference for users from across Europe and Asia. The IMC Summit Europe will take place at the Mövenpick Hotel Amsterdam City Centre, June 20-21, 2017. Attendees can receive a 10 percent Early Bird discount on the registration fee when registering by May 21, 2017.

The In-Memory Computing Summit Europe is the only industry-wide event focusing on the full range of in-memory computing technologies. It brings together computing visionaries, decision makers, experts and developers for the purpose of education, discussion and networking.

The keynote addresses for this year’s event include:

  • In-Memory Computing, Digital Transformation, and the Future of Business — Abe Kleinfeld, GridGain Systems
  • A new platform for collaboration between fintechs, academics and the finance industry — Felix Grevy, Misys
  • In Memory Computing: High performance and highly efficient web application scaling for the travel industry — Chris Goodall, CG Consultancy
  • SNIA and Persistent Memory — Alex McDonald, SNIA Europe
  • Panel Discussion: The Future of In-Memory Computing — Rob Barr, Barclays; Lieven Merckx, ING; Chris Goodall, CG Consultancy; Sam Lawrence, FSB Technology; and Nikita Ivanov, GridGain Systems

Super Saver Registration Discounts
Attendees can receive a 10 percent discount by registering now. The Early Bird admission rate of EUR 449 ends on May 21, 2017. Register via the conference website, or email attendance and registration questions to info@imcsummit.org.

Sponsorships
By sponsoring the In-Memory Computing Summit Europe, organizations gain a unique opportunity to enhance their visibility and reputation as leaders in in-memory computing products and services. They can interact with key in-memory computing business and technical decision makers, connect with technology purchasers and influencers, and help shape the future of Fast Data.

Sponsorship packages are available. Visit the conference website for more information on sponsorship benefits and pricing and to download a prospectus. Current sponsors include:

  • Platinum Sponsors — GridGain Systems
  • Gold Sponsors — ScaleOut Software
  • Silver Sponsors — Fujitsu, Hazelcast
  • Foundation/Association Sponsor — SNIA
  • Media Sponsors — IT for CEOs, Jet Info Magazine

About the In-Memory Computing Summits

The In-Memory Computing Summits in Europe and North America are the only industry-wide events focused on in-memory computing-related technologies and solutions. They are the perfect opportunity to connect with technical decision makers, IT implementers, and developers who make or influence purchasing decisions in the areas of in-memory computing, Big Data, Fast Data, IoT, web-scale applications and high performance computing (HPC). Attendees include CEOs, CIOs, CTOs, VPs, IT directors, IT managers, data scientists, senior engineers, senior developers, architects and more. The Summits are unique forums for networking, education and the exchange of ideas — ideas that are powering the Digital Transformation and future of Fast Data. For more information, visit https://imcsummit.org and follow the event on Twitter @IMCSummit.

About GridGain Systems

GridGain Systems is revolutionizing real-time data access and processing by offering enterprise-grade in-memory computing solutions built on Apache Ignite. GridGain solutions are used by global enterprises in financial, software, ecommerce, retail, online business services, healthcare, telecom and other major sectors. GridGain solutions connect data stores (SQL, NoSQL and Apache Hadoop) with cloud-scale applications and enable massive data throughput and ultra-low latencies across a scalable, distributed cluster of commodity servers. GridGain is the most comprehensive, enterprise-grade in-memory computing platform for high volume ACID transactions, real-time analytics and hybrid transactional/analytical processing. For more information, visit gridgain.com.

Source: GridGain Systems

The post In-Memory Computing Summit Previews Keynotes for First Annual European Conference appeared first on HPCwire.

BioTeam Test Lab at TACC Deploys Avere Systems

HPC Wire - Thu, 05/18/2017 - 09:04

PITTSBURGH, Penn., May 18, 2017 — Avere Systems, a leading provider of hybrid cloud enablement solutions, announced today that BioTeam, Inc. is incorporating Avere FXT Edge filers into its Convergence Lab, a testing environment hosted at the Texas Advanced Computing Center (TACC) in Austin, Texas. In cooperation with vendors and TACC, BioTeam uses the lab to evaluate solutions for its clients, standing up, configuring, and testing new infrastructure under conditions relevant to life sciences in keeping with its mission of providing objective, vendor-agnostic solutions to researchers. The life sciences community is producing ever-larger volumes of data, from laboratory analytical instruments to research projects and patient records, putting IT organizations under pressure to support these growing workloads.

Avere’s technology gives life science organizations the ability to flexibly process and store these growing datasets where it makes the most sense, at performance levels that help improve the rate of discovery. Avere Edge filers allow seamless integration of multiple storage destinations, including multiple public clouds and on-premises data centers, increasing the options that organizations like BioTeam can offer their customers for data center optimization. BioTeam plans to use the FXT filers to test burst-buffer workloads and hybrid storage strategies against realistic life sciences data and workloads, in order to develop effective recommendations for its customers.
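
To make the burst-buffer idea concrete, the generic Java sketch below shows the basic pattern: absorb a write burst on a fast storage tier, then drain the data to a slower, larger capacity tier off the application’s critical path. The directory paths and file size are hypothetical, and the sketch is not a description of how an FXT Edge filer works; the appliance handles this kind of caching and tiering transparently behind standard file protocols, so applications need no such code.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class BurstBufferSketch {
        // Hypothetical mount points: a fast scratch tier (e.g. local NVMe or an
        // edge-filer cache) and a slower capacity tier (e.g. NAS or object storage).
        private static final Path FAST_TIER = Paths.get("/scratch/fast");
        private static final Path CAPACITY_TIER = Paths.get("/archive/capacity");

        public static void main(String[] args) throws IOException {
            Files.createDirectories(FAST_TIER);
            Files.createDirectories(CAPACITY_TIER);

            // 1. Absorb the burst on the fast tier so the instrument or compute
            //    job is not throttled by the slower backing store.
            Path burstFile = FAST_TIER.resolve("run-001.dat");
            Files.write(burstFile, new byte[16 * 1024 * 1024]);

            // 2. Drain to the capacity tier afterwards, off the critical path.
            Files.copy(burstFile, CAPACITY_TIER.resolve(burstFile.getFileName()),
                    StandardCopyOption.REPLACE_EXISTING);

            // 3. Free the fast tier for the next burst.
            Files.delete(burstFile);
        }
    }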

Avere’s technology provides many world-renowned life science research facilities with flexibility and performance benefits, in addition to the ability to support the large data sets common to BioIT workflows. By reducing the dependency on traditional storage and facilitating modernization with hybrid cloud infrastructures, Avere also helps organizations keep their IT costs in check.

The BioTeam Lab takes an integrative approach to streamlining computer-aided research from the lab bench to knowledge. Solutions are driven by BioTeam’s clients and tailored to meet the scientific needs of each organization. Inside the lab, BioTeam works with vendors to understand the end-to-end experience of using their technologies and handles everything from racking, installation, configuration, testing, and integration to vendor communication and return shipping. Remote access to the lab is available from virtually any location with an internet connection. TACC provides the space, power, cooling, connectivity, support, and deep collaboration on lab projects.

“BioTeam is a fast-growing consulting company that is comprised of a highly cross-functional and creative group of scientists and engineers. Our unique cross section of experience allows us to enable computer-aided discovery in life sciences by creating and adapting IT infrastructure and services to fit the scientific goals of the organizations we work with,” said Ari Berman, Vice President and General Manager of Consulting, BioTeam. “As part of our larger suite of hardware and software, having Avere in our lab gives us the hands-on ability to test Avere-based hybrid storage scenarios in a controlled and optimized life sciences environment, utilizing real workloads. These scenarios will allow BioTeam to understand where Avere technology best fits in the life sciences and healthcare domain and will allow us to innovate next-generation strategies for storage and analytics workflows. Having this opportunity allows us to deepen our understanding of the overall storage landscape and to be able to recommend fit for purpose solutions to our customers.”

“Working with BioTeam is a natural fit for Avere. Our technology has a solid track record of helping life science organizations leverage the cloud for large workloads for both cloud compute and storage resources,” said Jeff Tabor, Senior Director of Product Management and Marketing at Avere Systems. “We look forward to collaborating with the BioTeam and continuing to help the industry effectively integrate cloud into their data center strategies and seamlessly use multiple cloud vendors.”

Next week at the BioIT World Conference in Boston, BioTeam and Avere will co-present “Freeing Data: How to Win the War with Hybrid Clouds.” BioTeam Senior Scientific Consultant Adam Kraut and Avere CEO Ron Bianchini will take the stage on May 25, 2017 at 12:20pm ET. Avere Systems is exhibiting at the show, booth #536, from May 23 – 25, 2017.

About Avere Systems

Avere helps enterprise IT organizations enable innovation with high-performance data storage access, and the flexibility to compute and store data where necessary to match business demands. Customers enjoy easy reach to cloud-based resources, without sacrificing the consistency, availability or security of enterprise data. A private company based in Pittsburgh, Pennsylvania, Avere is led by industry experts to support the demanding, mission-critical hybrid cloud systems of many of the world’s most recognized companies and organizations. Learn more at www.averesystems.com.

About BioTeam, Inc.

BioTeam, Inc. has a well-established history of providing complete and forward-thinking solutions to the life sciences. With a cross-section of expertise that includes classical laboratory scientific training, applications development, informatics, large data center installations, HPC, enterprise and scientific network engineering, and high-volume as well as high-performance storage, BioTeam leverages the right technologies, customized to its clients’ unique needs, to enable them to reach their scientific objectives. For more information, please visit the company website.

About Texas Advanced Computing Center

TACC designs and deploys the world’s most powerful advanced computing technologies and innovative software solutions to enable researchers to answer complex questions. Every day, researchers rely on our computing experts and resources to help them gain insights and make discoveries that change the world. Find out more at https://www.tacc.utexas.edu/.

Source: Avere Systems

The post BioTeam Test Lab at TACC Deploys Avere Systems appeared first on HPCwire.

Hyperion Research Adds New HPC Innovation Awards for Datacenters

HPC Wire - Wed, 05/17/2017 - 14:45

ST. PAUL, Minn., May 17, 2017 – Hyperion Research, the new name for the former IDC HPC group, today announced it is adding two new categories to its global awards program for high performance computing (HPC) innovation. Both new categories are for innovations benefiting HPC use in data centers—either dedicated HPC data centers or the growing number of enterprise data centers that are exploiting HPC server and storage systems for advanced analytics. The new categories complement Hyperion’s long-standing innovation awards for HPC users:

  1. The first new award category rewards applied HPC innovations for which data centers are primarily responsible.
  2. The second new category rewards HPC vendors for HPC innovations that have proven to benefit data centers.

Hyperion also welcomes submissions for HPC innovations resulting from collaborations between data centers and vendors, and for innovations involving private, hybrid or public clouds.

“Hyperion Research welcomes award submissions at any time of year and announces awards twice a year, at the annual ISC European supercomputing conference in June and the annual SC worldwide supercomputing conference in November,” according to Hyperion Research CEO Earl Joseph. “The first round of winners of the new awards will be made public at the ISC’17 conference, to be held in June 2017 in Frankfurt, Germany.”

Submission forms are available at Hyperion’s website: http://www.hpcuserforum.com/innovationaward/applicationform.html

About Hyperion Research

Hyperion Research is the new name for the former IDC high performance computing (HPC) analyst team. IDC agreed with the U.S. government to divest the HPC team before the recent sale of IDC to Chinese firm Oceanwide. As Hyperion Research, the team continues all the worldwide activities that have made it the world’s most respected HPC industry analyst group for more than 25 years, including HPC and HPDA market sizing and tracking, subscription services, custom studies and papers, and operating the HPC User Forum. For more information, see www.hpcuserforum.com.

Source: Hyperion Research

The post Hyperion Research Adds New HPC Innovation Awards for Datacenters appeared first on HPCwire.
