HPC Wire

Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Colovore Announces 2 MW Phase 3 Colocation Expansion


SANTA CLARA, Calif., Jan. 17, 2018 — Colovore has announced that it has begun construction on Phase 3, adding another 2 MW of capacity to its Santa Clara data center. Since launching in 2014, Colovore has grown rapidly by providing the highest power densities in Bay Area colocation, exceptional uptime and service quality, and a cost-effective, pay-by-the-kW pricing model. As with Phase 2, all cabinets in Phase 3 will support 35 kW of critical load. The additional capacity is expected to be delivered in early Q3, and Colovore is now marketing Phase 3, adding much-needed high-density capacity to the tight Bay Area colocation marketplace.

Highlights / Key Facts

  • Customers utilize Colovore to host their high-performance computing (HPC) and Big Data infrastructure, private/hybrid cloud deployments, and internal lab environments
  • With power densities of 35 kW per rack, Colovore provides the highest footprint efficiency and lowest TCO in Bay Area colocation; customers can pack their racks full of servers and operate in a much smaller, cost-effective footprint than legacy colos
  • Colovore’s pay-by-the-kW pricing model allows customers to match their costs directly to their IT requirements as they go, providing significant cost savings and easy scalability, 1 kW at a time
  • With 9 MW of total power available at its facility, Colovore has plenty of capacity for future expansion beyond this 2 MW Phase 3
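The footprint and pricing claims above reduce to simple arithmetic. The sketch below uses the announced 35 kW/rack density; the 5 kW/rack legacy density, the 280 kW example load, and the $200/kW monthly rate are hypothetical figures chosen only to illustrate the comparison.

```python
import math

def racks_needed(total_load_kw: float, density_kw_per_rack: float) -> int:
    """Number of racks required to host a given critical IT load."""
    return math.ceil(total_load_kw / density_kw_per_rack)

def monthly_cost(total_load_kw: float, rate_per_kw: float) -> float:
    """Pay-by-the-kW pricing: cost scales directly with metered load."""
    return total_load_kw * rate_per_kw

load_kw = 280  # hypothetical deployment: 280 kW of IT load

print(racks_needed(load_kw, 35))      # 8 racks at 35 kW/rack
print(racks_needed(load_kw, 5))       # 56 racks at a legacy 5 kW/rack
print(monthly_cost(load_kw, 200.0))   # 56000.0 at a hypothetical $200/kW
```

At the same total load, the high-density facility hosts the deployment in one-seventh the racks, which is the "smaller, cost-effective footprint" argument in the bullet list.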

“We are clearly seeing increasing rollout of power-hungry computing platforms supporting a number of fast-growing HPC applications,” stated Sean Holzknecht, President and Co-Founder of Colovore. “Artificial intelligence, Big Data, self-driving cars, and the Internet of Things are exploding and customers need data centers with next-generation power and cooling capabilities to support the underlying IT infrastructure. That is our specialty at Colovore.”

To learn more about how you can benefit from Colovore’s high-performance colocation solutions, contact Ben Coughlin at Colovore (tel. 408-330-9290) or email info@colovore.com.

About Colovore
Colovore is a leading provider of high-performance colocation services. Our 9 MW state-of-the-art data center in Santa Clara features power densities of 35 kW per rack and a pay-by-the-kW pricing model. We offer colocation the way you want it—cost-efficient, scalable, and robust. Colovore is profitable and backed by industry leaders including Digital Realty Trust. For more information please visit www.colovore.com.

Source: Colovore

The post Colovore Announces 2 MW Phase 3 Colocation Expansion appeared first on HPCwire.

Quantum Corporation Names Patrick Dennis CEO

Tue, 01/16/2018 - 18:46

SAN JOSE, Calif., Jan. 16, 2018 — Quantum Corp. today announced that its board of directors has appointed Patrick Dennis as president and CEO, effective today. Dennis was most recently president and CEO of Guidance Software and has also held senior executive roles in strategy, operations, sales, services and engineering at EMC. He succeeds Adalio Sanchez, a member of Quantum’s board who had served as interim CEO since early November 2017. Sanchez will remain on the board and assist with the transition.

“Patrick has been a successful public company CEO and brings a broad range of experience in storage and software, including a proven track record leading business transformations,” said Raghu Rau, Quantum’s chairman. “The other board members and I look forward to working closely with him to drive growth, cost reductions, and profitability and deliver long-term shareholder value. We also want to thank Adalio for stepping in and leading the company during a critical transition period.”

“During my time as CEO, I’ve greatly appreciated the commitment to change I’ve seen from team members across Quantum and will be supporting Patrick in any way I can to build on the important work we started,” said Sanchez.

Dennis served as president and CEO of Guidance Software, a provider of cyber security software solutions, from May 2015 until its acquisition by OpenText last September. During his tenure, he turned the company around, growing revenue and significantly improving profitability. Before joining Guidance Software, Dennis was senior vice president and chief operating officer, Products and Marketing, at EMC, where he led the business operations of its $10.5 billion enterprise and mid-range systems division, including management of its cloud storage business. Dennis spent 12 years at EMC, including as vice president and chief operating officer of EMC Global Services, overseeing a 3,500-person technical sales force. In addition to his time at EMC, he served as group vice president, North American Storage Sales, at Oracle, where he turned around a declining business.

“With its long-standing expertise in addressing the most demanding data management challenges, Quantum is well-positioned to help customers maximize the strategic value of their ever-growing digital assets in a rapidly changing environment,” said Dennis. “I’m excited to be joining the company as it looks to capitalize on this market opportunity by leveraging its strong solutions portfolio in a more focused way, improving its cost structure and execution, and continuing to innovate.”

About Quantum

Quantum is a leading expert in scale-out tiered storage, archive and data protection, providing solutions for capturing, sharing, managing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. Quantum’s end-to-end, tiered storage foundation enables customers to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.

Source: Quantum Corp.


New C-BRIC Center Will Tackle Brain-Inspired Computing

Tue, 01/16/2018 - 15:17

WEST LAFAYETTE, Ind., Jan. 16, 2018 — Purdue University will lead a new national center to develop brain-inspired computing for intelligent autonomous systems such as drones and personal robots capable of operating without human intervention.

The Center for Brain-inspired Computing Enabling Autonomous Intelligence, or C-BRIC, is a five-year project supported by $27 million in funding from the Semiconductor Research Corp. (SRC) via its Joint University Microelectronics Program, which pools funding from a consortium of industrial sponsors as well as from the Defense Advanced Research Projects Agency. The SRC operates research programs in the United States and globally that connect industry to university researchers, deliver early results to enable technological advances, and prepare a highly trained workforce for the semiconductor industry. Additional funds include $3.96 million from Purdue as well as support from other participating universities. At the state level, the Indiana Economic Development Corporation will provide funds, pending board approval, to establish an intelligent autonomous systems laboratory at Purdue.

C-BRIC, which begins operating in January 2018, will be led by Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering (ECE), with Anand Raghunathan, Purdue professor of ECE, as associate director. Other Purdue faculty involved in the center include Suresh Jagannathan, professor of computer science and ECE, and Eugenio Culurciello, associate professor of biomedical engineering, ECE and mechanical engineering. Pending final contracts, the center will also involve seven other universities (Arizona State University, Georgia Institute of Technology, Pennsylvania State University, Portland State University, Princeton University, University of Pennsylvania, and University of Southern California), around 17 faculty, and around 85 graduate students and postdoctoral researchers.

“The center’s goal is to develop neuro-inspired algorithms, architectures and circuits for perception, reasoning and decision-making, which today’s standard computing is unable to do efficiently,” Roy said.

Efficiency here refers to energy use. For example, while advanced computers such as IBM’s Watson and Google’s AlphaGo have beaten humans at high-level cognitive tasks, they also consume hundreds of thousands of watts of power to do so, whereas the human brain requires only around 20 watts.
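The gap the article describes is easy to quantify. The ~20 W brain estimate is the figure quoted above; the machine's power draw below is an assumed round number standing in for "hundreds of thousands of watts."

```python
# Back-of-the-envelope version of the efficiency gap described above.
machine_watts = 200_000   # assumed round figure for a large cognitive-computing system
brain_watts = 20          # approximate human brain power budget, as quoted

gap = machine_watts / brain_watts
print(f"Efficiency gap: {gap:,.0f}x")  # Efficiency gap: 10,000x
```

Even granting the machine a generous margin, the brain is roughly four orders of magnitude more energy-efficient at these tasks, which is the gap C-BRIC aims to narrow.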

“We have to narrow this huge efficiency gap to enable continued improvements in artificial intelligence in the face of diminishing benefits from technology scaling,” Raghunathan said. “C-BRIC will develop technologies to perform brain-like functions with brain-like efficiency.”

In addition, the center will enable next-generation autonomous intelligent systems capable of accomplishing both “end-to-end” functions and completion of mission-critical tasks without human intervention.

“Autonomous intelligent systems will require real-time closed-loop control, leading to new challenges in neural algorithms, software and hardware,” said Venkataramanan (Ragu) Balakrishnan, Purdue’s Michael and Katherine Birck Head and Professor of Electrical and Computer Engineering. “Purdue’s long history of preeminence in related research areas such as neuromorphic computing and energy-efficient electronics positions us well to lead this effort.”

“Purdue is up to the considerable challenges that will be posed by C-BRIC,” said Suresh Garimella, Purdue’s executive vice president for research and partnerships and the R. Eugene and Susie E. Goodson Distinguished Professor of Mechanical Engineering. “We are excited that our faculty and students are embarking on this ambitious mission to shape the future of intelligent autonomous systems.”

Mung Chiang, Purdue’s John A. Edwardson Dean of the College of Engineering, said, “C-BRIC represents a game-changer in artificial intelligence. These outstanding colleagues in Electrical and Computer Engineering and other departments at Purdue will carry out transformational research on efficient, distributed intelligence.”

To achieve their goals, C-BRIC researchers will improve the theoretical and mathematical underpinnings of neuro-inspired algorithms.

“This is very important,” Raghunathan said. “The underlying theory of brain-inspired computing needs to be better worked out, and we believe this will lead to broader applicability and improved robustness.”

At the same time, new autonomous systems will have to possess “distributed intelligence” that allows various parts, such as the multitude of “edge devices” in the so-called Internet of Things, to work together seamlessly.

“We are excited to bring together a multi-disciplinary team with expertise spanning algorithms, theory, hardware and system-building, that will enable us to pursue a holistic approach to brain-inspired computing, and to hopefully deliver an efficiency closer to that of the brain,” Roy said.

Information about the SRC can be found at https://www.src.org/.

Source: Purdue University


New Center at Carnegie Mellon University to Build Smarter Networks to Connect Edge Devices to the Cloud

Tue, 01/16/2018 - 15:14

PITTSBURGH, Jan. 16, 2018 — Carnegie Mellon University will lead a $27.5 million Semiconductor Research Corporation (SRC) initiative to build more intelligence into computer networks.

Researchers from six U.S. universities will collaborate in the CONIX Research Center headquartered at Carnegie Mellon. For the next five years, CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.

“The extent to which IoT will disrupt our future will depend on how well we build scalable and secure networks that connect us to a very large number of systems that can orchestrate our lives and communities. CONIX will develop novel architectures for large-scale, distributed computing systems that have immense implications for social interaction, smart buildings and infrastructure, and highly connected communities, commerce, and defense,” says James H. Garrett Jr., dean of Carnegie Mellon College of Engineering.

CONIX, an acronym for Computing on Network Infrastructure for Pervasive Perception, Cognition, and Action, is directed by Anthony Rowe, associate professor of Electrical and Computer Engineering at Carnegie Mellon. The assistant director, Prabal Dutta, is an associate professor at the University of California, Berkeley.

IoT has pushed a major focus onto edge devices. These devices make our homes and communities smarter through connectivity, and they are capable of sensing, learning, and interacting with humans. In most current IoT systems, sensors send data to the cloud for processing and decision-making. However, massive amounts of sensor data coupled with technical constraints have created bottlenecks in the network that curtail efficiency and the development of new technologies, especially when timing is critical.

“There isn’t a seamless way to merge cloud functionality with edge devices without a smarter interconnect, so we want to push more intelligence into the network,” says Rowe. “If networks were smarter, decision-making could occur independent of the cloud at much lower latencies.”

The cloud’s centralized nature makes it easier to optimize and secure; however, there are tradeoffs. “Large systems that are centralized tend to struggle in terms of scale and have trouble reacting quickly outside of data centers,” explains Rowe. CONIX researchers will look at how machine-learning techniques that are often used in the context of cloud computing can be used to self-optimize networks to improve performance and even defend against cyberattacks.

Developing a clean-slate distributed computing network will take an integrated view of sensing, processing, memory, dissemination and actuation. CONIX researchers intend to define the architecture for such networks now before attempts to work around current limitations create infrastructure that will be subject to rip-and-repair updates, resulting in reduced performance and security.

CONIX’s research is driven by three applications:

Smart and connected communities—Researchers will explore the mechanisms for managing and processing millions of sensor feeds in urban environments. They will deploy CONIX edge devices across participating universities to monitor and visualize the flow of pedestrians. At scale, this lays the groundwork for all kinds of infrastructure management.

Enhanced situational awareness at the edge—Efforts here will create on-demand information feeds for decision makers by dispatching human-controlled swarming drones to provide aerial views of city streets. Imagine a system like Google Street View, only with live real-time data. This would have both civilian and military applications. For example, rescue teams in a disaster could use the system to zoom in on particular areas of interest at the click of a button.

Interactive Mixed Reality—Physical and virtual reality systems will merge in a collaborative digital teleportation system.  Researchers will capture physical aspects about users in a room, such as their bodies and facial expressions. Then, like a hologram, this information will be shared with people in different locations. The researchers will use this technology for meetings, uniting multiple CONIX teams. This same technology will be critical to support next-generation augmented reality systems being used in applications ranging from assisted surgery and virtual coaching to construction and manufacturing.

In addition to Carnegie Mellon and the University of California, Berkeley, other participants include the University of California, Los Angeles; the University of California, San Diego; the University of Southern California; and the University of Washington, Seattle.

CONIX is one of six research centers funded by the SRC’s Joint University Microelectronics Program (JUMP), which represents a consortium of industrial participants and the Defense Advanced Research Projects Agency (DARPA).

About the College of Engineering at Carnegie Mellon University

The College of Engineering at Carnegie Mellon University is a top-ranked engineering college known for its intentional focus on cross-disciplinary collaboration in research and for working on problems of both scientific and practical importance. Our “maker” culture is ingrained in all that we do, leading to novel approaches and transformative results. Our acclaimed faculty focus on innovation management and engineering to yield transformative results that will drive the intellectual and economic vitality of our community, nation and world.

About the SRC

Semiconductor Research Corporation (SRC), a world-renowned, high-technology consortium, serves as a crossroads of collaboration between technology companies, academia, government agencies, and SRC’s highly regarded engineers and scientists. Through its interdisciplinary research programs, SRC plays an indispensable part in addressing global challenges using research and development strategies, advanced tools and technologies. SRC’s sponsors work synergistically, gaining access to research results, fundamental IP, and highly experienced students to compete in the global marketplace and build the workforce of tomorrow. Learn more at: www.src.org.

Source: Carnegie Mellon University


SRC Spends $200M on University Research Centers

Tue, 01/16/2018 - 15:10

The Semiconductor Research Corporation, as part of its JUMP initiative, has awarded $200 million to fund six research centers whose areas of focus span cognitive computing, memory-centric computing, high-speed communications, nanotechnology, and more. It’s not a bad way to begin 2018 for the winning institutions, which include the University of Notre Dame, University of Michigan, University of Virginia, Carnegie Mellon University, Purdue University, and UC Santa Barbara.

SRC’s JUMP (Joint University Microelectronics Program) is a collaborative network of research centers sponsored by U.S. industry participants and DARPA. As described on the SRC website, “[JUMP’s] mission is to enable the continued pace of growth of the microelectronics industry with discoveries which release the evolutionary constraints of traditional semiconductor technology development. JUMP research, guided by the university center directors, tackles fundamental physical problems and forges a nationwide effort to keep the United States and its technology firms at the forefront of the global microelectronics revolution.”

The six projects, funded over five years, were launched on January 1st and are listed below with short descriptions. Links to press releases from each center are at the end of the article:

  • ASCENT (Applications and Systems driven Center for Energy-Efficient Integrated NanoTechnologies at Notre Dame). “ASCENT focuses on demonstration of foundational material synthesis routes and device technologies, novel heterogeneous integration (package and monolithic) schemes to support the next era of functional hyper-scaling. The mission is to transcend the current limitations of high-performance transistors confined to a single planar layer of integrated circuit by pioneering vertical monolithic integration of multiple interleaved layers of logic and memory.”
  • ADA (Applications Driving Architectures Center at University of Michigan). “[ADA will drive] system design innovation by drawing on opportunities in application driven architecture and system-driven technology advances, with support from agile system design frameworks that encompass programming languages to implementation technologies. The center’s innovative solutions will be evaluated and quantified against a common set of benchmarks, which will also be expanded as part of the center efforts. These benchmarks will be initially derived from core computational aspects of two application domains: visual computing and natural language processing.”
  • CRISP (Center for Research on Intelligent Storage and Processing-in-memory at University of Virginia). “Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” says Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible.”
  • CONIX (Computing On Network Infrastructure for Pervasive Perception, Cognition, and Action at Carnegie Mellon University). “CONIX will create the architecture for networked computing that lies between edge devices and the cloud. The challenge is to build this substrate so that future applications that are crucial to IoT can be hosted with performance, security, robustness, and privacy guarantees.”
  • CBRIC (Center for Brain-inspired Computing Enabling Autonomous Intelligence at Purdue University). Charged with delivering key advances in cognitive computing, with the goal of enabling a new generation of autonomous intelligent systems, “CBRIC will address these challenges through synergistic exploration of Neuro-inspired Algorithms and Theory, Neuromorphic Hardware Fabrics, Distributed Intelligence, and Application Drivers.”
  • ComSenTer (Center for Converged TeraHertz Communications and Sensing at UCSB). “ComSenTer will develop the technologies for a future cellular infrastructure using hubs with massive spatial multiplexing, providing 1-100 Gb/s to the end user and, with 100-1000 simultaneous independently-modulated beams, aggregate hub capacities in the 10’s of Tb/s. Backhaul for this future cellular infrastructure will be a mix of optical links and Tb/s-capacity point-point massive MIMO links.”
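The ComSenTer capacity figures follow directly from multiplying per-beam rate by beam count. The ranges (1-100 Gb/s per user, 100-1000 beams) come from the description above; the specific combinations below are illustrative points within those ranges.

```python
def aggregate_tbps(per_beam_gbps: float, beams: int) -> float:
    """Aggregate hub capacity in Tb/s from per-beam rate and beam count."""
    return per_beam_gbps * beams / 1000.0  # Gb/s -> Tb/s

print(aggregate_tbps(100, 100))   # 10.0 Tb/s: top rate, fewer beams
print(aggregate_tbps(10, 1000))   # 10.0 Tb/s: modest rate, many beams
print(aggregate_tbps(100, 1000))  # 100.0 Tb/s: upper corner of both ranges
```

Mid-range operating points land in the tens of Tb/s quoted for the hubs, which is also why the backhaul must itself offer Tb/s-class links.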

Links to individual press releases/program descriptions:

ASCENT, Notre Dame: https://www.src.org/newsroom/press-release/2018/921/

ADA, University of Michigan: https://www.src.org/newsroom/press-release/2018/922/

CRISP, University of Virginia: https://www.src.org/newsroom/press-release/2018/920/

CONIX, Carnegie Mellon: https://www.prnewswire.com/news-releases/new-center-headquartered-at-carnegie-mellon-university-will-build-smarter-networks-to-connect-edge-devices-to-the-cloud-300582210.html

CBRIC, Purdue: https://www.src.org/newsroom/press-release/2018/919/

ComSenTer, UCSB: https://www.src.org/program/jump/comsenter/


UVA Engineering Tapped to Lead $27.5 Million Center to Reinvent Computing

Tue, 01/16/2018 - 15:09

CHARLOTTESVILLE, Va., Jan. 16, 2018 — The University of Virginia School of Engineering & Applied Science has been selected to establish a $27.5 million national center to remove a bottleneck built into computer systems 70 years ago that is increasingly hindering technological advances today.

UVA Engineering’s new Center for Research in Intelligent Storage and Processing in Memory, or CRISP, will bring together researchers from eight universities to remove the separation between memories that store data and processors that operate on the data.

That separation has been part of all mainstream computing architectures since 1945, when John von Neumann, one of the pioneering computer scientists, first outlined how programmable computers should be structured. Over the years, processor speeds have improved much faster than memory and storage speeds, and also much faster than the speed at which wires can carry data back and forth.

These trends lead to what computer scientists call the “memory wall,” in which data access becomes a major performance bottleneck. The need for a solution is urgent, because of today’s rapidly growing data sets and the potential to use big data more effectively to find answers to complex societal challenges.
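The memory wall described above is often summarized with a roofline-style bound: achievable performance is capped by either compute throughput or memory bandwidth, whichever binds first. This is a minimal sketch of that model; all numbers are hypothetical, chosen only to illustrate the effect.

```python
def attainable_gflops(peak_gflops: float, mem_bw_gbs: float,
                      arithmetic_intensity: float) -> float:
    """Roofline bound: arithmetic_intensity is FLOPs per byte moved from memory."""
    return min(peak_gflops, mem_bw_gbs * arithmetic_intensity)

peak, bw = 1000.0, 100.0  # assumed: 1 TFLOP/s compute vs 100 GB/s memory

# Low intensity (e.g. streaming through a large dataset): memory-bound.
print(attainable_gflops(peak, bw, 0.25))  # 25.0 GFLOP/s, 2.5% of peak
# High intensity (e.g. dense matrix multiply): compute-bound.
print(attainable_gflops(peak, bw, 50.0))  # 1000.0 GFLOP/s, full peak
```

Data-intensive workloads of the kind CRISP targets sit at the low-intensity end, where moving computation closer to the data raises the effective bandwidth term rather than the (already idle) compute term.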

“Certain computations are just not feasible right now due to the huge amounts of data and the memory wall,” said Kevin Skadron, who chairs UVA Engineering’s Department of Computer Science and leads the new center. “One example is in medicine, where we can imagine mining massive data sets to look for new indicators of cancer. The scale of computation needed to make advances for health care and many other human endeavors, such as smart cities, autonomous transportation, and new astronomical discoveries, is not possible today. Our center will try to solve this problem by breaking down the memory-wall bottleneck and finally moving beyond the 70-year-old paradigm. This will enable entirely new computational capabilities, while also improving energy efficiency in everything from mobile devices to datacenters.”

CRISP is part of a $200 million, five-year national program that will fund centers led by six top research universities: UVA, University of California at Santa Barbara, Carnegie Mellon University, Purdue University, the University of Michigan and the University of Notre Dame. The Joint University Microelectronics Program is managed by North Carolina-based Semiconductor Research Corporation, a consortium that includes engineers and scientists from technology companies, universities and government agencies.

Each research center will examine a different challenge in advancing microelectronics, a field that is crucial to the U.S. economy and its national defense capabilities. The centers will collaborate to develop solutions that work together effectively. Each center will have liaisons from the program’s member companies, collaborating on the research and supporting technology transfer.

“The trifecta of academia, industry and government is a great model that benefits the country as a whole,” Skadron said. “Close collaboration with industry and government agencies can help identify interesting and relevant problems that university researchers can help solve, and this close collaboration also helps accelerate the impact of the research.”

The program includes positions for about a dozen new Ph.D. students at UVA Engineering, and altogether, about 100 Ph.D. students across the entire center. The center will also create numerous opportunities for undergraduate students to get involved in research. The program provides all these students with professional development opportunities and internships with companies that are program sponsors.

Engineering Dean Craig Benson said the new center expresses UVA Engineering’s commitment to research and education that add value to society.

“Most of the grand challenges the National Academy of Engineering has identified for humanity in the 21st century will require effective use of big data,” Benson said. “This investment affirms the national research community’s confidence that UVA has the vision and expertise to lead a new era for technology.”

Pamela Norris, UVA Engineering’s executive associate dean for research, said the center is also an example of the bold ideas that propelled the School to a nearly 36 percent increase in research funding in fiscal year 2017, compared to the prior year.

“UVA Engineering has a culture of collaborative, interdisciplinary research programs,” Norris said. “Our researchers are determined to use this experience to address some of society’s most complex challenges.”

UVA’s center will include researchers from seven other universities, working together in a holistic approach to solve the data bottleneck in current computer architecture.

“Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating ‘intelligent’ memory and storage architectures that do as much of the computing as possible as close to the bits as possible,” Skadron said.

This starts at the chip level, where computer processing capabilities will be built inside the memory storage. Processors will also be paired with memory chips in 3-D stacks. UVA Electrical and Computer Engineering Professor Mircea Stan, an expert on the design of high-performance, low-power chips and circuits, will help lead the center’s research on 3-D chip architecture, thermal and power optimization, and circuit design.

CRISP researchers also will examine how other aspects of computer systems will have to change when computer architecture is reinvented, from operating systems to software applications to data centers that house entire computer system stacks. UVA Computer Science Assistant Professor Samira Khan, an expert in computer architecture and its implications for software systems, will help guide the center’s efforts to rethink how the many layers of hardware and software in current computer systems work together.

CRISP also will develop new system software and programming frameworks so computer users can accomplish their tasks without having to manage complex hardware details, and so that software is portable across diverse computer architectures. All this work will be developed in the context of several case studies to help guide the hardware and software research to practical solutions and real-world impact. These include searching for new cancer markers; mining the human gut microbiome for new insights on interactions among genetics, environment, lifestyle and wellness; and data mining for improving home health care.

“Achieving a vision like this requires a large team with diverse expertise across the entire spectrum of computer science and engineering, and such a large-scale initiative is very hard to put together without this kind of investment,” Skadron said. “These large, center-scale programs profoundly enhance the nation’s ability to maintain technological leadership, while simultaneously training a large cohort of students who will help address the nation’s rapidly growing need for technology leadership. This is an incredibly exciting opportunity for us.”

Source: University of Virginia


Notre Dame to Lead $26 Million Multi-University Research Center Developing Next-Generation Computing Technologies

Tue, 01/16/2018 - 15:03

Jan. 16, 2018 — In today’s age of ubiquitous computing, society produces roughly the same amount of data in 10 minutes that would have previously taken 100 years. Within the next decade, experts anticipate the ability to create, share and store a century’s worth of data in less than 10 seconds.

To get there, researchers and technologists must overcome data-transfer bottlenecks and improve the energy efficiency of current electronic devices.

Now, a new $26 million center led by the University of Notre Dame will focus on conducting research that aims to increase the performance, efficiency and capabilities of future computing systems for both commercial and defense applications.

At the state level, the Indiana Economic Development Corporation (IEDC) has offered to provide funding for strategic equipment, pending final approval from the IEDC Board of Directors, to support execution of the program’s deliverables.

“We have assembled a group of globally recognized technical leaders in a wide range of areas — from materials science and device physics to circuit design and advanced packaging,” said Suman Datta, director of the Applications and Systems-driven Center for Energy-Efficient integrated Nano Technologies (ASCENT) and Frank M. Freimann Professor of Engineering at Notre Dame. “Working together, we look forward to developing the next generation of innovative device technologies.”

The multidisciplinary research center will develop and utilize advanced technologies to sustain the semiconductor industry’s goals of increasing performance and reducing costs. Researchers have been steadily advancing toward these goals via relentless two-dimensional scaling as well as the addition of performance boosters to complementary metal oxide semiconductors, or CMOS technology. Both approaches have provided enhanced performance to energy consumption ratios.

The exponentially increasing demand for connected devices, big data analytics, cloud computing and machine-learning technologies, however, requires future innovations that transcend the impending limits of current CMOS technology.

ASCENT comprises 20 faculty members from 13 of the nation’s leading research universities, including Arizona State University, Cornell University, Georgia Institute of Technology, Purdue University, Stanford University, University of Minnesota, University of California-Berkeley, University of California-Los Angeles, University of California-San Diego, University of California-Santa Barbara, University of Colorado, and the University of Texas-Dallas.

Sayeef Salahuddin, professor of electrical engineering and computer science, at the University of California-Berkeley, will serve as the center’s associate director.

Datta said the center’s research agenda has been shaped by valuable lessons learned from past research conducted at Notre Dame’s Center for Nano Science and Technology (NDnano), as well as the Notre Dame-led Center for Low Energy Systems Technology (LEAST) and the Midwest Institute for Nanoelectronics Discovery (MIND), which stemmed from the Semiconductor Research Corporation’s (SRC) STARnet program and Nanoelectronics Research Initiative, respectively.

Researchers at ASCENT will pursue four areas of technology including three-dimensional integration of device technologies beyond a single planar layer (vertical CMOS); spin-based device concepts that combine processing and memory functions (beyond CMOS); heterogeneous integration of functionally diverse nano-components into integrated microsystems (heterogeneous integration fabric); and hardware accelerators for data intensive cognitive workloads (merged logic-memory fabric).

“The problems that Professor Datta and his team will try to solve are among the most challenging and important facing the electronics industry,” said Thomas G. Burish, Charles and Jill Fischer Provost of Notre Dame. “The selection committee in their feedback was highly complimentary of the vision, technical excellence, diverse talent and collaborative approach that Suman and his colleagues have undertaken. Notre Dame is delighted to be able to host this effort.”

ASCENT is one of six research centers funded by the SRC’s Joint University Microelectronics Program (JUMP), which represents a consortium of industrial participants and the Defense Advanced Research Projects Agency (DARPA). Information about the SRC can be found at https://www.src.org/.

Source: University of Notre Dame

The post Notre Dame to Lead $26 Million Multi-University Research Center Developing Next-Generation Computing Technologies appeared first on HPCwire.

UMass Center for Data Science Partners with Chan Zuckerberg Initiative to Accelerate Science and Medicine

Tue, 01/16/2018 - 14:34

AMHERST, Mass., Jan. 16, 2018 — Distinguished scientist and professor Andrew McCallum, director of the Center for Data Science at the University of Massachusetts Amherst, will lead a new partnership with the Chan Zuckerberg Initiative to accelerate science and medicine. The goal of this project, called Computable Knowledge, is to create an intelligent and navigable map of scientific knowledge using a branch of artificial intelligence known as knowledge representation and reasoning.

The Computable Knowledge project will facilitate new ways for scientists to explore, navigate, and discover potential connections between millions of new and historical scientific research articles. Once complete, the service will be accessible through Meta, a free CZI tool, and will help scientists track important discoveries, uncover patterns, and deliver insights among an up-to-date collection of published scientific texts, including more than 60 million articles.

“We are excited for the opportunity to advance our research in deep learning, representation and reasoning for such a worthy challenge,” said McCallum. “We believe the result will be a first-of-its-kind guide for every scientist, just as map apps are now indispensable tools for navigating the physical world. We hope our results will help solve the mounting problem of scientific knowledge complexity, democratize scientific knowledge, and put powerful reasoning in the hands of individual scientists.”

The Chan Zuckerberg Initiative (CZI) is building a team of AI scientists to collaborate on the project, and has made an initial grant of $5.5 million to the university’s Center for Data Science. It is CZI’s first donation and partnership with the University of Massachusetts Amherst.

McCallum expects CZI’s investment to result in hiring software engineers in Western Massachusetts to work on the project. It will also support the related research of several graduate, Ph.D. and postdoctoral students in the Center for Data Science and create internships for UMass Amherst students at other CZI projects worldwide.

“We are very pleased CZI selected UMass Amherst to play a major role in this groundbreaking initiative that will give scientists tremendous power to share their research around the world,” Massachusetts Governor Charlie Baker said. “Massachusetts’ renowned research and health care institutions make the Commonwealth an attractive location to advance CZI’s work, and we welcome their engagement here.”

“We are grateful for CZI’s generous support and recognition of UMass Amherst’s leadership in artificial intelligence,” said UMass Amherst Chancellor Subbaswamy. “Andrew McCallum and his colleagues are engaged in extraordinary and innovative research, and we are thrilled to be partners with CZI in their goal to cure, prevent, or manage all diseases by the end of the century.”

“This project has the potential to accelerate the work of millions of scientists around the globe,” said Cori Bargmann, president of science at the Chan Zuckerberg Initiative. “Andrew McCallum and the Center for Data Science at UMass Amherst are global leaders in artificial intelligence and natural language processing. Andrew will bring deep knowledge and expertise to this effort, and we are honored to partner with him.”

About Professor Andrew McCallum

McCallum, who joined the UMass Amherst faculty in 2002, focuses his research on statistical machine learning applied to text, including information extraction, social network analysis, and deep neural networks for knowledge representation. He served as president of the International Society of Machine Learning and is a Fellow of the Association for the Advancement of Artificial Intelligence as well as the Association for Computing Machinery. Recognized as a pre-eminent researcher in the field, he has published more than 150 papers and received over 50,000 citations from fellow researchers. He was named the founding director of the UMass Amherst Center for Data Science in 2015.

About the Chan Zuckerberg Initiative

The Chan Zuckerberg Initiative was founded by Facebook founder and CEO Mark Zuckerberg and his wife Priscilla Chan in December 2015. The philanthropic organization brings together world-class engineering, grant-making, impact investing, policy, and advocacy work. Its initial areas of focus include supporting science through basic biomedical research and education through personalized learning. It is also exploring other issues tied to the promotion of equal opportunity including access to affordable housing and criminal justice reform.

Source: UMass Amherst

The post UMass Center for Data Science Partners with Chan Zuckerberg Initiative to Accelerate Science and Medicine appeared first on HPCwire.

US Seeks to Automate Video Analysis

Tue, 01/16/2018 - 12:11

U.S. military and intelligence agencies continue to look for new ways to use artificial intelligence to sift through huge amounts of video imagery in hopes of freeing analysts to identify threats and otherwise put their skills to better use.

The latest AI effort announced last week by the research arm of the U.S. intelligence apparatus focuses on video surveillance and using machine vision to automate video monitoring. The initiative is similar to a Pentagon effort to develop computer vision algorithms to scan full-motion video.

The new effort unveiled by the Intelligence Advanced Research Projects Activity (IARPA) would focus on public safety applications such as securing government facilities or monitoring public spaces that have become targets for terror attacks.

Program officials said last week they have selected six teams to develop machine vision techniques to scan video under a new program called Deep Intermodal Video Activity, or DIVA. The U.S. National Institute of Standards and Technology along with contractor Kitware Inc., an HPC visualization specialist, will evaluate research data and test proposed DIVA systems, the research agency said.

Among the goals is developing an automated capability to detect threats and, failing that, to quickly locate attacks using machine vision and automated video monitoring. “There [are] an increasing number of cases where officials, and the communities they represent, are tasked with viewing large stores of video footage, in an effort to locate perpetrators of attacks, or other threats to public safety,” Terry Adams, DIVA program manager, noted in a statement announcing the effort.

“The resulting technology will provide the ability to detect potential threats while reducing the need for manual video monitoring,” Adams added.

The agency also stressed that the surveillance technology would not be used to track the identity of individuals and “will be implemented to protect personal privacy.” Program officials did not elaborate.

IARPA was established in 2006 to coordinate research across the National Security Agency, CIA and other U.S. spy agencies. The office is modeled after the Defense Advanced Research Projects Agency, which funds risky but promising technology development. Those efforts have focused on the ability to process the enormous video and data haul generated by spy satellites and, increasingly, drones and sensor networks.

Similarly, the Pentagon launched an AI effort last year dubbed Project Maven to accelerate DoD’s integration of big data and machine learning into its intelligence operations. The first computer vision algorithms focused on parsing full-motion video were scheduled for release by the end of 2017.

These and other efforts are aimed at automating the tedious task of poring over hours of surveillance data to detect threats. Among IARPA’s research thrusts is speeding the analysis of sensor data “to maximize insight from the information we collect,” the agency said.

The post US Seeks to Automate Video Analysis appeared first on HPCwire.

RAIDIX 4.6 Ensures Data Integrity on Power Down

Tue, 01/16/2018 - 10:46

Jan. 16, 2018 — Data storage vendor RAIDIX launches a new edition of the software-defined storage technology – RAIDIX 4.6. The RAIDIX volume management software powers commodity hardware to create fault-tolerant high-performance data storage systems for data-intensive applications. Building on in-house RAID algorithms, advanced data reconstruction and smart QoS, RAIDIX enables peak GB/s and IOPS in Media & Entertainment, HPC, CCTV, and Enterprise with minimal hardware overheads.

RAIDIX 4.5 (shipped in October 2017) focused on hybrid storage performance, efficient SSD caching and virtualization of siloed SAN storage devices. Ver. 4.5 further improved multi-thread data processing and employed proprietary intelligent algorithms to avoid redundant write levels. Adding to the previous major edition, RAIDIX 4.6 enables the use of NVDIMM-N, ensures support for new 100Gbit adapters and brings along more features and improvements.

In version 4.6, the RAIDIX R&D team implemented write-back cache protection leveraging non-volatile dual in-line memory modules (NVDIMM-N). RAIDIX-based systems now prevent data loss in case of power failure or other faults on the node. Unlike hardware controllers, NVDIMM does not require battery replacement, and storage built on non-volatile memory does not require a second controller to ensure reliability. NVDIMM-powered solutions also deliver higher performance, since guaranteed cache synchronization in dual-controller mode introduces unavoidable latency. The new protection mechanism thus combines the high write speeds of write-back caching with the reliability of synchronous writes.
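The general idea behind NVDIMM-backed write-back caching can be sketched in a few lines. The model below is illustrative only, not RAIDIX’s proprietary implementation, and all class and method names are hypothetical: a write is acknowledged once it lands in the non-volatile log, so an outage before destaging loses nothing.

```python
# Illustrative model of write-back caching with a non-volatile log,
# showing why NVDIMM-N protects acknowledged writes across power loss.
# NOT RAIDIX's implementation; all names here are hypothetical.

class NVDIMMBackedCache:
    def __init__(self):
        self.nvdimm_log = {}      # stands in for non-volatile DIMM contents
        self.volatile_cache = {}  # ordinary DRAM cache, lost on power loss

    def write(self, block, data):
        # Write-back: acknowledge once the NVDIMM copy is durable,
        # without waiting for the (slow) disk write.
        self.nvdimm_log[block] = data
        self.volatile_cache[block] = data

    def flush_to_disk(self, disk):
        # Destage cached blocks; only then may log entries be retired.
        for block, data in list(self.nvdimm_log.items()):
            disk[block] = data
            del self.nvdimm_log[block]

    def recover_after_power_loss(self, disk):
        # DRAM contents are gone, but the NVDIMM log survives:
        # replay it so no acknowledged write is lost.
        self.volatile_cache = {}
        for block, data in self.nvdimm_log.items():
            disk[block] = data


disk = {}
cache = NVDIMMBackedCache()
cache.write(7, b"payload")            # acknowledged, not yet on disk
cache.recover_after_power_loss(disk)  # simulate an outage before destaging
assert disk[7] == b"payload"          # the acknowledged write survived
```

A dual-controller design achieves the same guarantee by mirroring the cache to a partner node, but every acknowledgement must then wait on that synchronization; the non-volatile log removes that round trip.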

Enhancing the interoperability matrix, RAIDIX 4.6 adds the ability to connect to a Linux client through high-speed 100Gbit InfiniBand Mellanox ConnectX-4 interfaces. This delivers accelerated performance and minimal latency in Big Data, HPC and corporate environments. On the ease-of-use front, RAIDIX 4.6 includes a host of interface tweaks for better control and manageability.

Established in 2009, RAIDIX is an SDS vendor that empowers system integrators and end customers to design and operate high-performance and cost-effective data storage systems. Flexible RAIDIX configurations ranging from entry-level systems up to multi-petabyte clusters are employed by the global partner network in 35 countries. IT solution providers utilize RAIDIX as the key component in turnkey projects or deliver industry-tailored appliances powered by RAIDIX.


About RAIDIX

RAIDIX (www.raidix.com) is a leading solution provider and developer of high-performance data storage systems. The company’s strategic value builds on patented erasure coding methods and innovative technology designed by the in-house research laboratory. The RAIDIX Global Partner Network encompasses system integrators, storage vendors and IT solution providers offering RAIDIX-powered products for professional and enterprise use.

Source: RAIDIX

The post RAIDIX 4.6 Ensures Data Integrity on Power Down appeared first on HPCwire.

Cray Announces Selected Preliminary 2017 Financial Results

Tue, 01/16/2018 - 10:32

SEATTLE, Jan. 16, 2018 — Global supercomputer leader Cray Inc. (Nasdaq:CRAY) today announced selected preliminary 2017 financial results. The 2017 anticipated results presented in this release are based on preliminary financial data and are subject to change until the year-end financial reporting process is complete.

Based on preliminary results, total revenue for 2017 is expected to be about $390 million.

While a wide range of outcomes remains possible for 2018, based on the Company’s preliminary 2017 results Cray expects revenue to grow by 10-15% for the year. Revenue is expected to be about $50 million for the first quarter of 2018.

“With a strong effort across the company and in partnership with our customers, we completed all our large acceptances during the fourth quarter,” said Peter Ungaro, president and CEO of Cray. “A couple of smaller acceptances that we did not finish are now expected to be completed early in 2018. While 2017 was challenging, we’re beginning to see early signs of a rebound in our core market and I’m proud of the progress we made during the year to position the company for long-term growth.”

Based on currently available information, Cray estimates that the impact of the Tax Cuts and Jobs Act (Tax Legislation) passed in December 2017 will result in a reduction to the Company’s GAAP earnings for the fourth quarter and year ended December 31, 2017 in the range of $30-35 million.  The large majority of this charge is due to the remeasurement of the Company’s U.S. deferred tax assets at lower enacted corporate tax rates.  The charge may differ from this estimate, possibly materially, due to, among other things, changes in interpretations and assumptions the Company has made, and guidance that may be issued. This charge has no impact on the Company’s previously provided non-GAAP guidance.  Going forward, the Company does not expect an increase in its non-GAAP tax rates as a result of the Tax Legislation.

About Cray Inc.

Global supercomputing leader Cray Inc. (Nasdaq:CRAY) provides innovative systems and solutions enabling scientists and engineers in industry, academia and government to meet existing and future simulation and analytics challenges. Leveraging more than 40 years of experience in developing and servicing the world’s most advanced supercomputers, Cray offers a comprehensive portfolio of supercomputers and big data storage and analytics solutions delivering unrivaled performance, efficiency and scalability. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to meet the market’s continued demand for realized performance. Go to www.cray.com for more information.

Source: Cray Inc.

The post Cray Announces Selected Preliminary 2017 Financial Results appeared first on HPCwire.

Mellanox ConnectX-5 Ethernet Adapter Wins Analyst Award for Best Networking Chip

Tue, 01/16/2018 - 08:09

SUNNYVALE, Calif. & YOKNEAM, Israel, Jan. 16, 2018 — Mellanox Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced that the Mellanox ConnectX-5 Ethernet Adapter IC has received The Linley Group’s Analyst Choice Award for “Best Networking Chip” in 2017.

The ConnectX-5 industry-leading dual port 10, 25, 40, 50 and 100Gb/s Ethernet adapter has set new performance records, delivering as low as 700ns latency and up to 140 million packets per second (Mpps) of forwarding message rate running the open source Data Plane Development Kit (DPDK). ConnectX-5 integrates a programmable flow-based Ethernet switch subsystem, an embedded PCIe Gen4 switch and Mellanox’s Multi-Host technology thereby enabling network connectivity with up to 4 hosts on a single adapter for improved data center total cost of ownership. ConnectX-5 acceleration engines, such as remote direct memory access (RDMA), NVMe-over-Fabrics, open virtual switching (OVS), routing, load-balancing and advanced DPDK offloads deliver world-leading performance for the cloud, enterprise data center, Web2.0, deep learning, database, financial applications and more.
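For context on the 140 Mpps figure, the theoretical maximum packet rate of a 100Gb/s Ethernet link follows from standard Ethernet arithmetic (this calculation is general wire math, not Mellanox data): each minimum-size 64-byte frame also occupies 20 bytes of line overhead for the preamble, start-of-frame delimiter and inter-frame gap.

```python
# Theoretical packet rate of a 100Gb/s Ethernet link with minimum-size
# (64-byte) frames. Each frame carries 20 extra bytes on the wire:
# 7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
LINE_RATE_BPS = 100e9
FRAME_BYTES = 64
OVERHEAD_BYTES = 20

max_pps = LINE_RATE_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)
print(f"Theoretical max: {max_pps / 1e6:.1f} Mpps")
```

Against that ceiling of roughly 148.8 Mpps, the quoted 140 Mpps DPDK forwarding rate amounts to about 94% of what a 100GbE wire can physically carry with minimum-size packets.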

“We perform a rigorous review of the networking silicon reaching production each year, and the Mellanox ConnectX-5 EN stands out as the most feature-rich and high performance Ethernet adapter silicon,” said Bob Wheeler, principal analyst at The Linley Group. “Mellanox has consistently set a high bar when it comes to maximizing server networking efficiency, and they are delivering unique offloads for the fast-growing NVMe over Fabrics market.”

“We are proud to be the recipient of this year’s Analyst Choice Award for the ConnectX-5 Ethernet adapter,” said Yael Shenhav, vice president, product marketing, Mellanox Technologies. “Mellanox has consistently innovated with our market-leading adapter technologies, each generation offering higher throughputs, lower latencies and more of the accelerations our customers need to maximize their data center’s potential and improve their total cost of ownership.”

The ConnectX family of network adapters supports both Ethernet and InfiniBand, offering unmatched RDMA features and capabilities to future-proof data center investments. With full support for RDMA over Converged Ethernet (RoCE) and advanced tunneling offloads such as VxLAN, NVGRE and Geneve, ConnectX enables servers and appliances to support the latest networking and storage protocols.

The Linley Group Analysts’ Choice Awards recognize the top semiconductor products of 2017 in several categories, including embedded processors, server processors, networking chips, and related technology. To choose each winner, The Linley Group’s team of technology analysts gather to discuss the merits of the top offerings that entered production in 2017. They evaluate the combined advantages in power, performance, features, and cost of each device for their target end application and market.

About The Linley Group

The Linley Group is a leading source for independent technology analysis of semiconductors for networking, communications, mobile, and data-center applications. The company provides strategic consulting services, in-depth analytical reports, and conferences focused on advanced technologies for chip and system design. The Linley Group also publishes the weekly Microprocessor Report. For insights on recent industry news, subscribe to the company’s free email newsletter: Linley Newsletter.

About Mellanox

Mellanox Technologies (NASDAQ: MLNX) is a leading supplier of end-to-end InfiniBand and Ethernet smart interconnect solutions and services for servers and storage. Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability. Mellanox offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application runtime and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services. More information is available at: www.mellanox.com.

Source: Mellanox

The post Mellanox ConnectX-5 Ethernet Adapter Wins Analyst Award for Best Networking Chip appeared first on HPCwire.

New Center at University of Michigan Aims to Democratize the Design and Manufacturing of Next-Generation Systems

Mon, 01/15/2018 - 11:15

ANN ARBOR, Mich., Jan. 15, 2018 — As the computing industry struggles to maintain its historically rapid pace of innovation, a new, $32 million center based at the University of Michigan aims to streamline and democratize the design and manufacturing of next-generation computing systems.

The Center for Applications Driving Architectures, or ADA, will develop a transformative, “plug-and-play” ecosystem to encourage a flood of fresh ideas in computing frontiers such as autonomous control, robotics and machine-learning.

Today, analysts worry that the industry is stagnating, caught between physical limits to the size of silicon transistors and the skyrocketing costs and complexity of system design.

“The electronic industry is facing many challenges going forward, and we stand a much better chance of solving these problems if we can make hardware design more accessible to a large pool of talent,” said Valeria Bertacco, an Arthur F. Thurnau professor of computer science and engineering at U-M and director of the ADA Center.  “We want to make it possible for anyone with motivation and a good idea to build novel high-performance computing systems.”

The center is a five-year project that’s led by U-M and includes researchers from a total of seven universities, pending final contracts: Harvard University, MIT, Stanford University, Princeton University, University of Illinois and University of Washington.

ADA is funded by a consortium that is led by the Semiconductor Research Corporation and includes the Defense Advanced Research Projects Agency. The center is one of six new centers recently announced as part of the Joint University Microelectronics Program, organized by the Semiconductor Research Corporation.

ADA aims to democratize the development and deployment of advanced computing systems in several ways: It will develop a modular approach to system hardware and software design, where applications’ internal algorithms are mapped to highly efficient and reusable accelerated hardware components. This faster and more effective approach will require that the entire design framework—from system software to architecture to design tools—be reimagined and rebuilt.

“You shouldn’t need a Ph.D. to design new computing systems,” Bertacco said. “Five years from now, I’d like to see freshly minted college grads doing hardware startups.”

Computing has had a monumental impact on society, but the path forward is uncertain. Researchers are looking for creative approaches to extend the utility of traditional silicon beyond the Moore’s Law era, a long-standing but waning trend in which chips become cheaper to manufacture, and more powerful, each year.

ADA researchers see customized silicon for specific applications—like chips optimized for image search or data analytics—as a promising approach. But the biggest industrial customized silicon successes to date, such as smartphone systems-on-a-chip or graphics processing units, have required the immense resources of large, deeply integrated, vertical design teams. ADA’s goal is to change that. The center is organized into three themes:

Agile system development: The team will identify patterns in the core algorithms of emerging applications—such as virtual reality, machine learning and augmented reality—in order to map those algorithms to new, tailored computational blocks. This approach would slash design costs by building ready-to-use components that usher designs all the way from high-level computational languages to fully packaged systems.

Algorithms-driven architectures: The researchers will develop reusable, highly efficient algorithmic hardware accelerators for the computational blocks. Instead of targeting the application itself, designs will target the underlying algorithms. Special-purpose hardware designs can improve the efficiency-per-operation by several orders of magnitude over a general-purpose chip. Such special-purpose hardware design occurs today, but it can take a decade after a need is identified before mature and efficient solutions are available, and it requires extremely specialized expertise, the researchers say.

Technology-driven systems: A key aspect of this theme involves developing an open-source chip scaffold for these new, accelerator-centric systems. The scaffolds would include all the necessary support subsystems—such as general-purpose cores, on-chip communication fabric, and memories—to facilitate a “plug-and-play” flow. “One will no longer need to send a design to the fab and wait for a chip to come back. They may still need a clean room to assemble a system, but this will be much simpler and more economical,” Bertacco said. Researchers will also explore technology innovations independent of silicon scaling.

“This is a daring and progressive approach to system design that stands to revolutionize the computing industry,” said Alec Gallimore, who is the Robert J. Vlasic Dean of Engineering, the Richard F. and Eleanor A. Towner Professor, an Arthur F. Thurnau Professor, and a professor both of aerospace engineering and of applied physics. “The work of this new center will empower generations of engineers and computer scientists to design and build the systems that can bring their ideas to life.”

DARPA and the Semiconductor Research Corporation will contribute $27.5 million to this project, with the remaining funds provided by the participating institutions. The Semiconductor Research Corporation is a global, high technology-based consortium that serves as a crossroads of collaboration between technology companies, academia, government agencies, and SRC’s engineers and scientists.

Source: University of Michigan

The post New Center at University of Michigan Aims to Democratize the Design and Manufacturing of Next-Generation Systems appeared first on HPCwire.

HiPEAC Conference Seeks to Advance Computing in Face of Crisis

Mon, 01/15/2018 - 11:10

GHENT, Belgium, Jan. 15, 2018 — From 22-24 January in Manchester, the HiPEAC conference will once again bring together the best minds in computer architecture and compilation to exploit the enormous potential of new computing paradigms while minimizing the very real risks. At a time of global crisis in computing systems, with chip-level security flaws exposing the vulnerability of our ever-more connected society and the end of Moore’s Law threatening to slow the progress brought about by faster, cheaper, more powerful processing, HiPEAC’s network of experts will once again showcase their solutions for everything from machine learning to secure critical real-time systems.

‘The HiPEAC conference is the flagship networking event of our 2000-strong community of computing experts,’ says HiPEAC coordinator Koen de Bosschere of Ghent University. ‘This year we are very happy to have two leading European companies (ARM for mobile computing and DeepMind for deep learning) as the main sponsors of the event. They are creating the key technological components of future smart devices,’ he adds.

Keynote talks from Maria Girone (CERN openlab) on computing challenges at the Large Hadron Collider, Dileep Bhandarkar (Qualcomm Datacenter Technologies) on emerging data centre trends and Dan Belov (DeepMind) on machine learning will kick off each day.

Further highlights from the conference include:

  • Innovative interconnect solutions at the AISTECS workshop, including the launch of prototype memory disaggregation for cloud services developed by IBM Research – Ireland, as described in this blog post and video.
  • The Heterogeneity Alliance, coordinated by the TANGO project, which aims to bring heterogeneous architecture into mainstream markets.

Beyond academic excellence, the conference also facilitates the transformation of cutting-edge research results into market-ready innovations. As well as providing a hub for researchers, industry representatives and policy makers to exchange ideas, the conference features a specific TETRAMAX workshop on technology transfer. This follows the recent HiPEAC Tech Transfer Awards, which recognized ten projects where concrete research results have been put into industrial practice.

HiPEAC18 will also once again feature HiPEAC’s tailored recruitment support, including a travelling careers unit, which helps companies find candidates with the specialist skills to bring about the computing systems of the future. For the first time, the conference will also feature a STEM (Science, Technology, Engineering and Mathematics) Student Day, with the aim of preparing the next generation of computer scientists who will ensure Europe’s enduring competitiveness.

With the Manchester ‘Baby’, the world’s first stored-program computer, celebrating its 70th birthday this year, the northern city provides a particularly apt location for the conference, which is testament to the power of collaborative European research in the face of political uncertainty.  

Once again, the biggest international names in technology, including Arm, DeepMind, Atos and Samsung, have shown their confidence in HiPEAC by generously supporting the conference. Full list of sponsors below.

About HiPEAC

Since 2004, the HiPEAC (High Performance and Embedded Architecture and Compilation) project has provided a hub for European researchers in computing systems; today, its network, the biggest of its kind in the world, numbers around 2000 specialists. The project offers training, mobility support and dissemination and recruitment services, along with numerous networking facilities to its members. The latest incarnation of the project, HiPEAC 5, began on 1 December 2017 and is delivered by 13 partners, led by Ghent University. It is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 779656.  

HiPEAC organizes four networking events per year: the HiPEAC conference, two Computing Systems Weeks and a summer school. The HiPEAC conference attracts around 600 participants, and the 2018 edition is organized by the University of Manchester. The following organizations are generously supporting the conference: Arm, DeepMind, Atos, Samsung, AXIOM, Barco, dividiti, Embedded Computing Specialists, Kaleao, Polly Labs, Springer, Sundance, SYSGO, Thales, and Think Silicon.

Source: HiPEAC

The post HiPEAC Conference Seeks to Advance Computing in Face of Crisis appeared first on HPCwire.

White Paper Addresses Impact of Meltdown and Spectre Vulnerabilities on HPC

Mon, 01/15/2018 - 10:50

AMSTERDAM, Netherlands, Jan. 15, 2018 — Two security vulnerabilities in modern CPUs have recently been publicised, causing widespread concern in the computing industry. Because the vulnerabilities are hardware-based, their impact is potentially very broad and cannot easily be patched without side effects. Colloquially referred to as “Meltdown” and “Spectre,” the vulnerabilities enable attacks in which malicious programs can steal data from the memory of other programs.

A new white paper written by Daniël Mantione, HPC System Architect at ClusterVision, addresses some of the implications of Meltdown and Spectre for HPC. These vulnerabilities have only been discovered recently, so information is still developing. Therefore, this document should not be interpreted as a complete overview of the situation but as an informative view of the potential impact on HPC.
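
For administrators wondering whether a given node is affected or already mitigated, recent Linux kernels (4.15 and later, with backports to several distribution kernels) expose the status via sysfs. A minimal sketch, assuming a Linux system; older kernels simply lack these files:

```shell
# Report which CPU vulnerabilities the running kernel knows about and
# whether mitigations (e.g. KPTI for Meltdown) are active. The sysfs
# interface appeared in Linux 4.15; the fallback covers older kernels.
VULN_DIR=/sys/devices/system/cpu/vulnerabilities
if [ -d "$VULN_DIR" ]; then
    for f in "$VULN_DIR"/*; do
        printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
    done
else
    echo "Kernel predates sysfs vulnerability reporting; status unknown."
fi
```

Typical entries on a patched system read e.g. `meltdown: Mitigation: PTI`, while an unpatched one reports `Vulnerable`.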

ClusterVision’s experts have already run several experiments on its clusters to analyse the impact of the kernel patch on performance.
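
The main cost of the Meltdown kernel patch (KPTI) is paid on every user/kernel transition, so syscall-heavy workloads show the largest slowdown. A minimal sketch of the kind of before/after probe such experiments rely on — the loop count and the choice of syscall are illustrative, not ClusterVision’s actual benchmark:

```python
import os
import time

# Time a syscall-heavy loop: KPTI adds overhead to every entry into the
# kernel, so the achieved rate drops on a patched kernel. Run the same
# probe on a patched and an unpatched node and compare the rates.
N = 200_000
start = time.perf_counter()
for _ in range(N):
    os.getpid()  # cheap syscall; its cost is dominated by kernel entry
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} getpid() calls per second")
```

Real HPC workloads that are I/O- or communication-heavy stress this transition path far harder than compute-bound kernels do, which is why reported slowdowns vary so widely by application.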

You can view, read, and download the white paper here.

About ClusterVision

ClusterVision specialises in high performance computing (HPC) solutions. We design, build, manage and support supercomputers and HPC software. By combining cutting-edge hardware and software components with a range of customised professional services and a highly-skilled engineering team, ClusterVision creates for its customers the most efficient, technologically-forward, and reliable HPC solutions. Our solutions stand out from the HPC crowd because every installation is tailor-fit around the customer’s needs and wants.

Source: ClusterVision


ANSYS and Rescale Offer On-Demand, Pay-Per-Use ANSYS Software on Rescale’s ScaleX Cloud HPC Platform

Mon, 01/15/2018 - 10:45

SAN FRANCISCO, Jan. 15, 2018 — Rescale, a leading provider of enterprise big compute and cloud HPC, is pleased to announce that ANSYS Elastic Licensing can now be purchased directly through Rescale’s ScaleX, a SaaS platform purpose-built for solving the world’s most challenging engineering, scientific and mathematical problems with HPC in the cloud. ANSYS Elastic Licensing is an on-demand, pay-per-use licensing model which unlocks the full ANSYS engineering simulation portfolio including structures, fluids, and electronics solutions. The delivery of ANSYS simulation software on ScaleX provides organizations in virtually every industry with the ability to transform the way products are engineered and brought to market.

Rescale has enabled customers to use ANSYS Elastic Units (AEUs) in the cloud, backed by more than 60 data centers worldwide, since their introduction by ANSYS in 2016. AEUs can now be purchased directly through Rescale’s ScaleX platform, providing an efficient, single-vendor procurement of on-demand access to cloud HPC and simulation software. ANSYS Elastic Units are available in three pack sizes, which provide lower unit cost with larger volumes.

With Rescale, customers gain maximum flexibility: they can run existing traditional licenses alongside pay-per-use licensing on ScaleX, and access a variety of hardware architectures via bare-metal and virtual servers as well as their own on-premises infrastructure. In addition, the ScaleX administration portal allows company administrators to monitor and control their company’s AEU usage by setting budgets at the user, project, and company levels.

“Pay-per-use licenses from ANSYS have been very popular with Rescale customers since we introduced ANSYS Elastic Licensing to the platform in 2016. Now we’re delivering the additional flexibility of being able to seamlessly purchase elastic units on-demand directly through the Rescale platform. I’m confident that our customers will be delighted by the frictionless on-demand and flexible licensing experience,” said Joris Poort, CEO of Rescale.

“Simulation is becoming ubiquitous, allowing engineers to assess more design possibilities and make better decisions throughout the product life cycle. Access to ANSYS on the ScaleX platform can be deployed in hours to increase their simulation throughput – empowering companies to innovate more and increase product quality while cutting costs and time to market,” said Todd McDevitt, Director, Product Management at ANSYS. “We also see tremendous value in the Rescale administrative portal, which provides IT and engineering managers detailed usage metrics and control to manage hardware and software budgets.”

ANSYS customers interested in purchasing on-demand ANSYS Elastic Licensing through Rescale should contact their Rescale account executive.

About Rescale

Rescale is a global leader for high-performance computing simulations and deep learning in the cloud. Trusted by the Global Fortune 500, Rescale empowers the world’s top scientists and engineers to develop the most innovative new products and perform groundbreaking research and development faster and at lower cost. Rescale’s ScaleX platform transforms traditional fixed IT resources into flexible hybrid, private, and public cloud resources—built on the largest and most powerful high-performance computing network in the world. For more information on Rescale’s ScaleX platform, visit www.rescale.com.

Source: Rescale


Gartner Says Worldwide Semiconductor Revenue Forecast to Grow 7.5 Percent in 2018

Mon, 01/15/2018 - 10:39

Jan. 15, 2018 — Worldwide semiconductor revenue is forecast to total $451 billion in 2018, an increase of 7.5 percent from $419 billion in 2017, according to Gartner, Inc. This represents a near doubling of Gartner’s previous estimate of 4 percent growth for 2018.

“Favorable market conditions for memory sectors that gained momentum in the second half of 2016 prevailed through 2017 and look set to continue in 2018, providing a significant boost to semiconductor revenue,” said Ben Lee, principal research analyst at Gartner. “Gartner has increased the outlook for 2018 by $23.6 billion compared with the previous forecast, of which the memory market accounts for $19.5 billion. Price increases for both DRAM and NAND flash memory are raising the outlook for the overall semiconductor market.”
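
The headline figures are internally consistent, modulo rounding; a quick sanity check using only the rounded billions quoted above:

```python
# Gartner's rounded totals, in billions of USD.
rev_2017 = 419
rev_2018 = 451

# Growth implied by the rounded totals; Gartner quotes 7.5 percent,
# presumably computed from unrounded underlying figures.
growth = (rev_2018 - rev_2017) / rev_2017 * 100
print(f"Implied 2018 growth: {growth:.1f}%")  # ~7.6%, vs. the stated 7.5%

# Of the $23.6 billion upward revision, memory accounts for $19.5 billion,
# i.e. the bulk of the change in outlook comes from one sector.
memory_share = 19.5 / 23.6 * 100
print(f"Memory share of the revision: {memory_share:.0f}%")
```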

However, these price increases will put pressure on margins for system vendors of key semiconductor demand drivers, including smartphones, PCs and servers. Gartner predicts that component shortages, a rising bill of materials (BOM) and the resulting prospect of having to raise average selling prices (ASPs) will create a volatile market through 2018.

Despite the upward revision for 2018, the quarterly growth profile for 2018 is expected to fall back to a more normal pattern with a mid-single-digit sequential decline in the first quarter of the year, followed by a recovery and buildup in both the second and third quarters of 2018, and a slight decline in the fourth quarter.

On January 3, a security vulnerability that spans all microprocessor vendors was revealed, impacting nearly all types of personal and data center computing devices. While the vulnerability is obscure and difficult to exploit, the potential of a high-impact security issue cannot be ignored and must be mitigated.

“The current mitigation solution is via firmware and software updates, and has a potential processor performance impact. This may result in an increased demand for high-performance data center processors in the short term, but Gartner expects that in the longer term, microprocessor architectures will be redesigned, reducing the performance impact of the software mitigations and limiting the long-term forecast impact,” said Alan Priestley, research director at Gartner.

Taking the memory sectors out of the equation, the semiconductor market is forecast to grow 4.6 percent in 2018 (compared with 9.4 percent in 2017) with field-programmable gate array (FPGA), optoelectronics, application-specific integrated circuits (ASICs) and nonoptical sensors leading the semiconductor device categories.

The other significant device category driving the 2018 forecast higher is application-specific standard products (ASSPs). The predicted growth in ASSPs was influenced by an improved outlook for graphics cards used in gaming PCs and high-performance computing applications, a broad increase in automotive content and a stronger wired communications forecast.

“The mixed fortunes of semiconductor vendors in recent years serve as a reminder of the fickleness of the memory market,” said Mr. Lee. “After growing by 22.2 percent in 2017, worldwide semiconductor revenue will revert to single-figure growth in 2018 before a correction in the memory market results in revenue declining slightly in 2019.”

Gartner clients can read more in the report “Forecast Analysis: Electronics and Semiconductors, Worldwide, 4Q17 Update.”

Source: Gartner


Eideticom Demonstrates First NVMe Over RDMA and TCP/IP using Broadcom’s NetXtreme Ethernet SoC

Mon, 01/15/2018 - 08:09

CALGARY, Jan. 15, 2018 — Eidetic Communications Inc. (Eideticom) announced it is collaborating with Broadcom Limited on NVMe over Fabrics (NVMe-oF) with TCP/IP transport. The platform provides disaggregation of compute and storage resources by allowing Eideticom’s NoLoad NVMe accelerator to be accessed and shared across the network using standard TCP/IP transport protocols. Utilizing Broadcom’s highly-optimized BCM58800 NetXtreme S-Series storage target SoC to support NVMe-oF with RDMA and TCP/IP, the solution optimizes throughput, power consumption and cost for scale-out storage applications.

Leveraging the ubiquitous TCP/IP protocol present in today’s data center network, NVMe-oF with TCP/IP allows data center operators to deploy NVMe-oF over their existing IP networks, thereby lowering the costs and complexity of implementing NVMe-oF. This will allow data center operators to take advantage of composable/disaggregated storage to address large, data-intensive workloads for advanced applications, including high performance computing and deep learning.
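
From the operator’s side, the appeal is that an NVMe-oF target reachable over TCP/IP needs no RDMA-capable fabric. A hypothetical nvme-cli session might look like the following — the address, port and NQN are placeholders, and TCP transport requires a kernel and nvme-cli build that support it (the NVMe/TCP transport binding was still being standardized at the time of this announcement):

```shell
# Illustrative only: discover and connect to an NVMe-oF target over
# plain TCP instead of RDMA. '|| true' lets the sketch run harmlessly
# where no such target (or no nvme-cli) exists.
if command -v nvme >/dev/null 2>&1; then
    # List subsystems the target exports on the standard NVMe-oF port
    nvme discover -t tcp -a 192.168.1.50 -s 4420 || true
    # Attach a specific subsystem by its NVMe Qualified Name (NQN)
    nvme connect -t tcp -a 192.168.1.50 -s 4420 \
        -n nqn.2018-01.com.example:noload-target || true
else
    echo "nvme-cli not installed; commands shown for illustration only"
fi
```

On an RDMA fabric the same commands apply with `-t rdma`, which is exactly the transport flexibility the two companies highlight.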

“Eideticom is excited to be collaborating with Broadcom on our compute and storage disaggregation solution,” said Roger Bertschmann, president and co-founder of Eideticom. “With Broadcom’s high-performance NetXtreme Ethernet SoC, we have been able to demonstrate NVMe-oF with RDMA and TCP/IP on our platform. The combination of Eideticom NoLoad FPGA and Broadcom BCM58800 provides a compelling solution for acceleration of storage and other compute intensive workloads in enterprise data centers and the cloud.”

“Being able to showcase the benefits of NVME-oF with both TCP/IP and RDMA is key to broader industry adoption,” said Fazil Osman, distinguished engineer at Broadcom Limited. “Leveraging Eideticom’s peer to peer NoLoad FPGA with our NetXtreme S-Series SoC we can accelerate various storage functions, and now can do that with either protocol which is groundbreaking.  This gives customers the flexibility to choose what works best for their environment.”

About Eideticom

Eideticom develops leading edge storage, compute and application acceleration products targeting programmable platforms on the cloud or at the network edge. Eideticom’s extensive experience developing and deploying enterprise grade products enables our customers and partners to confidently and successfully deliver their products to market.

Source: Eideticom


Honored Physicist Steven Chu Selected as AAAS President-Elect

Tue, 01/09/2018 - 14:40

Jan. 9, 2018 — Nobel laureate and former Energy Secretary Steven Chu has been chosen as president-elect of the American Association for the Advancement of Science. Chu will start his three-year term as an officer and member of the Executive Committee of the AAAS Board of Directors at the 184th AAAS Annual Meeting in Austin, Texas, in February.

“As Secretary of Energy, I was reminded daily that science must continue to be elevated and integrated into our national life and throughout the world. The work of AAAS in connecting science with society, public policy, human rights, education, diplomacy and journalism – through its superb journals and programs – is essential,” said Chu in his candidacy statement.

“Never has there been a more important time than today for AAAS to communicate the advances in science, the methods we use to acquire this knowledge and the benefits of these discoveries to the public and our policymakers,” he said.

Chu cited his role in key reports by National Academies and the American Academy of Arts and Sciences on the competitiveness of the U.S. scientific enterprise and the state of fundamental research, studies that “sounded alarms that the health of science, science education and integration of science into public decision-making in the U.S. was in peril and heading in the wrong direction,” he said in his candidacy statement. “Concern among scientists and friends of science is even greater today and we in AAAS have our work cut out for us.”

AAAS must continue its efforts to communicate the benefits of scientific progress, Chu noted, saying the world’s largest general scientific organization must continue to ensure scientists and students have access to the free exchange of ideas and the ability to pursue discovery across national boundaries.

Chu currently serves as the William R. Kenan Jr. Professor of Physics and Professor of Molecular and Cellular Physiology at Stanford University. Prior to rejoining Stanford in 2013, Chu was secretary of energy during President Barack Obama’s first term, the first scientist to head the Department of Energy, the home of the nation’s 17 National Laboratories.

Prior to his appointment as energy secretary, Chu was director of the Lawrence Berkeley National Laboratory as well as a professor of physics and molecular and cell biology at University of California, Berkeley. He first joined Stanford University in 1987, where he was a professor of physics until 2004.

Between 1978 and 1987, Chu worked at Bell Labs, where he ultimately led its Quantum Electronics Research Department. At Bell Labs, Chu carried out research on laser cooling and atom trapping, work that would earn him – along with Claude Cohen-Tannoudji and William Daniel Phillips – the Nobel Prize for Physics in 1997. Their new methods for using laser light to “trap” and slow down atoms to study them in greater detail “contributed greatly to increasing our knowledge of the interplay between radiation and matter,” the Nobel Committee said in 1997.

Chu received bachelor’s degrees in mathematics and physics from the University of Rochester and a Ph.D. in physics from the University of California, Berkeley.

He was named an elected fellow of AAAS in 2000 and has been a member of AAAS since 1995. He served on the AAAS Committee on Nominations, which selects the annual slate of candidates for AAAS president-elect and Board of Directors elections, from 2009 to 2011.

The current AAAS president-elect, Margaret Hamburg, will begin her term as AAAS president at the close of the 2018 Annual Meeting. Hamburg is foreign secretary of the National Academy of Medicine. The current president, Susan Hockfield, will become chair of the AAAS Board of Directors. Hockfield is president emerita of the Massachusetts Institute of Technology.

Source: AAAS


Micron and Intel Announce End to NAND Memory Joint Development Program

Tue, 01/09/2018 - 11:20

BOISE, Idaho, and SANTA CLARA, Calif., Jan. 8, 2018 – Micron and Intel today announced an update to their successful NAND memory joint development partnership that has helped the companies develop and deliver industry-leading NAND technologies to market.

The announcement involves the companies’ mutual agreement to work independently on future generations of 3D NAND. The companies have agreed to complete development of their third-generation 3D NAND technology, which is expected to be delivered toward the end of this year, with deliveries extending into early 2019. Beyond that technology node, both companies will develop 3D NAND independently in order to better optimize the technology and products for their individual business needs.

Micron and Intel expect no change in the cadence of their respective development of future 3D NAND technology nodes. The two companies are currently ramping products based on their second-generation, 64-layer 3D NAND technology.

Both companies will also continue to jointly develop and manufacture 3D XPoint at the Intel-Micron Flash Technologies (IMFT) joint venture fab in Lehi, Utah, which is now entirely focused on 3D XPoint memory production.

“Micron’s partnership with Intel has been a long-standing collaboration, and we look forward to continuing to work with Intel on other projects as we each forge our own paths in future NAND development,” said Scott DeBoer, executive vice president of Technology Development at Micron. “Our roadmap for 3D NAND technology development is strong, and we intend to bring highly competitive products to market based on our industry-leading 3D NAND technology.”

“Intel and Micron have had a long-term successful partnership that has benefited both companies, and we’ve reached a point in the NAND development partnership where it is the right time for the companies to pursue the markets we’re focused on,” said Rob Crooke, senior vice president and general manager of Non-Volatile Memory Solutions Group at Intel Corporation. “Our roadmap of 3D NAND and Optane technology provides our customers with powerful solutions for many of today’s computing and storage needs.”

Source: Intel
