High Performance Computing: Collaborations power research & learning

Sam Fried

In a temperature-controlled room located in an out-of-the-way part of campus, rows of computers are humming, blinking and whirring in a High Performance Computing (HPC) Center called the DEAC (Distributed Environment for Academic Computing) Cluster.

Last year, this centralized HPC system processed over 21 million core-hours from more than 650,000 tasks submitted by Wake Forest researchers. The system also hosts a large volume of centralized research data, supported and maintained by the HPC team within the University’s Information Systems (IS) department.

According to EdTech Magazine, high performance resources are a competitive differentiator for institutions seeking to hire top researchers—especially at universities that support research, teaching and learning and regularly upgrade their networks. A centrally maintained HPC Center means departmental funds and professors’ grants can be used to support teaching and research rather than technology.

Two decades ago, the Wake Forest Physics department formed the first HPC cluster on the Reynolda campus, which has since evolved into a centralized Information Systems resource used by several departments as well as by the downtown campus and medical school. Since then, IS has invested heavily in the cluster, expanding the team and updating hardware as research needs have grown. Empowering and accelerating researchers and research remains a high priority for the department, as outlined in the IT Strategic Plan.

HPC is changing the future of research  

For researchers, what makes high performance computing powerful is, in part, its ability to split data into partitions and accelerate analysis by running more than one variation of code at a time.

Adam Carlson, a senior HPC systems administrator in IS, compares HPC to checking out at the grocery store on a busy day. “If there is only one line open, the process takes a long time. But once multiple lanes open, carts can be scanned simultaneously rather than one at a time, and you’re done quickly.”
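A cluster scheduler coordinates this at far greater scale, but the checkout-lane idea can be sketched in a few lines of Python. In this illustrative sketch (not a reflection of DEAC’s actual software stack), a hypothetical `simulate` function stands in for a researcher’s code, the dataset is split into partitions, and a pool of worker processes scans all the “carts” simultaneously, each running its own variation of the parameters:

```python
from multiprocessing import Pool

def simulate(job):
    """Stand-in for one unit of research work (hypothetical, for
    illustration only; not part of DEAC's software)."""
    setting, data_partition = job
    return sum(x * setting for x in data_partition)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Split the dataset into partitions: one per "checkout lane."
    n_workers = 4
    chunk = len(data) // n_workers
    partitions = [data[i * chunk:(i + 1) * chunk] for i in range(n_workers)]

    # Pair each partition with a different parameter variation,
    # then let the worker pool process all of them at the same time.
    jobs = list(zip([0.5, 1.0, 1.5, 2.0], partitions))
    with Pool(processes=n_workers) as pool:
        results = pool.map(simulate, jobs)

    print(results)
```

With one worker, the jobs would run back to back, like a single open register; with four, they finish in roughly a quarter of the time, which is the same speedup an HPC cluster delivers across hundreds or thousands of cores.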

Carlson and fellow administrators Sean Anderson and Cody Stevens form the HPC team. They upload or create code, troubleshoot and problem-solve for faculty and students from any discipline who want fast ways to analyze large amounts of data.

“For many of us, the HPC Center is an essential component of our research and teaching efforts,” said physics professor Natalie Holzwarth, a founding member of the physics department cluster in 2002. She uses HPC to model the properties of materials that might be candidates for solid-state electrolytes for use in battery technology.
