Why Standards-Based Parallel Programming Should be in Your HPC Toolbox

By Jeff Larkin, Principal HPC Application Architect at NVIDIA

September 5, 2022

HPC application developers have long relied on programming abstractions that were developed and used almost exclusively within the realm of traditional HPC. OpenMP was created more than 25 years ago to simplify shared-memory parallel computing because programming languages of the day had few to no such features and vendors were developing their own, incompatible abstractions for symmetric multiprocessing.

CUDA C was designed and released by NVIDIA in 2007 as extensions to the C language to support programming massively parallel GPUs, again because the C language lacked the necessary features to support parallelism directly. Both of these programming models have been highly successful because they provide the necessary abstractions to overcome the shortcomings of the languages that they extended in a user-friendly manner.

The landscape has changed a lot, however, in the years since these models were initially released and it’s time to reevaluate where they should fit in a programmer’s toolbox. In this post I discuss why you should be parallel programming natively with ISO C++ and ISO Fortran.

Parallel Programming is Becoming the Standard

Parallel programming was once a niche field reserved only for government labs, research universities, and certain forward-looking industries, but today it is a requirement for all industries. Because of this, mainstream programming languages now support parallel programming natively, and an increasing number of developer tools support these features. It is now possible for applications to be developed to support parallelism from the start, with no need for a serial baseline code.

Such parallel-first codes can be taken to any computer system, whether it’s based on multi-core CPUs, GPUs, FPGAs, or some other novel processor we haven’t thought of yet, and be expected to run on day one. This frees developers from the need to port applications to new systems and enables them to focus on productively optimizing their application or expanding its capabilities instead.

NVIDIA delivers multiple approaches to programming HPC systems, including enhanced standard language support, incremental directives-based optimization, CUDA platform specialization, and GPU-accelerated libraries.

NVIDIA provides three composable approaches to programming our platform: accelerated standard languages, portable directives-based solutions, and platform-specific solutions. All of them are layered on the foundation of our decades-long investment in accelerated libraries and compilers, and all of them are fully composable, giving developers the choice of how best to balance their productivity, portability, and performance goals.

ISO Languages Achieve Performance and Portability

New application development should be performed using ISO standard programming languages and the parallel features they provide. There is no better example of portable programming models than the ISO languages, so developers should expect that applications written to these standards will run anywhere. Many of the developers we’ve worked with have found that the performance gains from refactoring their applications using standards-based parallelism in C++ or Fortran are already as good as or better than their existing code.

Some developers have elected to perform further optimizations by introducing portable directives, OpenACC, or OpenMP, to improve data movement or asynchrony and obtain even higher performance. This results in application code that’s still fully portable and high-performance. Developers who want to obtain the highest performance in key parts of their applications may choose to take the additional step of optimizing portions of the application with a lower-level approach, such as CUDA, and take advantage of everything the hardware has to offer. And, of course, all of these approaches interact nicely with our expert-tuned accelerated libraries.

Expanding the Standards to Leverage Innovations

There’s a misconception in the industry that CUDA is the language used by NVIDIA to lock-in users, but in fact it’s our language for innovating and exposing the features of our hardware most directly. CUDA C++ and Fortran are in many ways co-design languages, where we can expose hardware innovations and iterate on the programming model quickly. As best practices are developed in the CUDA programming model, we believe they can and should be codified in standards.

For instance, due to the successes of our customers in utilizing mixed-precision arithmetic, we worked with the C++ committee to standardize extended floating-point types in C++23. Thanks in large part to the work of our math libraries team, we have worked with the community to propose a C++ extension for a standardized linear algebra interface that will map well not only to our libraries but also to community-based and proprietary libraries from other vendors. We strive to improve parallel programming and asynchrony in the ISO standard languages because it’s the best thing for our customers and the community at large.

What Do Developers Think?

Professor Jonas Latt at the University of Geneva uses nvc++ and the C++ parallel algorithms in the Palabos library and said, “The result produces state-of-the-art performance, is highly didactical, and introduces a paradigm shift in cross-platform CPU/GPU programming in the community.”

Dr. Ron Caplan of Predictive Science Inc. said of his experience using nvfortran and Fortran Do Concurrent, “I can now write far fewer directives and still expect high performance from my Fortran applications.”

And Simon McIntosh-Smith from the University of Bristol said when presenting his team’s results using nvc++ and parallel algorithms, “The ISO C++ versions of the code were simpler, shorter, easier to write, and should be easier to maintain.”

These are just a few of the developers already reaping the rewards of using standards-based parallelism in their development.

Standards-Based Parallel Programming Resources

NVIDIA has a range of resources to help you fall in love with standards-based parallelism.

Our HPC Software Development Kit (SDK) is a free software package that includes:

  • NVIDIA HPC compilers for C, C++, and Fortran
  • The CUDA NVCC Compiler
  • A complete set of accelerated math libraries, communication libraries, and core libraries for data structures and algorithms
  • Debuggers and profilers

The HPC SDK is freely available on x86, Arm, and OpenPOWER platforms, regardless of whether you own an NVIDIA GPU, and is even Amazon’s HPC software stack for Graviton3.

NVIDIA On-Demand also has several relevant recordings to get you started (try “No More Porting: Coding for GPUs with Standard C++, Fortran, and Python”), as well as our posts on the NVIDIA Developer Blog.

Finally, I encourage you to register for GTC Fall 2022, where you’ll find even more talks about our software and hardware offerings, including more information on standards-based parallel programming.


About Jeff Larkin

Jeff is a Principal HPC Application Architect in NVIDIA’s HPC Software team. He is passionate about the advancement and adoption of parallel programming models for High Performance Computing. He was previously a member of NVIDIA’s Developer Technology group, specializing in performance analysis and optimization of high performance computing applications. Jeff is also the chair of the OpenACC technical committee and has worked in both the OpenACC and OpenMP standards bodies. Before joining NVIDIA, Jeff worked in the Cray Supercomputing Center of Excellence, located at Oak Ridge National Laboratory.
