
There is always the promise of throwing more computing power at a single task. Your computer almost certainly has multiple CPU cores. Your video card has even more. Your computer is probably networked to a slew of other machines, too. But how do you write software that takes advantage of all that? There are plenty of complex frameworks, of course, but there's also Chapel.
Chapel is a reasonably simple programming language, but it supports parallelism in several forms. Its runtime handles how the compute resources, whatever they happen to be, talk to one another, so you can have code running on your local CPU cores, your GPU, and other processing elements across the network without much work on your part.
What’s it look like? Here’s a simple distributed program from the project’s homepage:
// print a message per compute node
coforall loc in Locales do
  on loc do
    writeln("Hello from locale ", loc.id);

// print a message per core per compute node
coforall loc in Locales do
  on loc do
    coforall tid in 0..<here.maxTaskPar do
      writeln("Hello from task ", tid, " on locale ", loc.id);
As you might guess, Locales is an array of locale objects, each describing one of the compute resources the program can run on. The coforall statement launches a separate task for each iteration of the loop, so the code above runs once per locale, and then once per core on each locale.
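To see that task-per-iteration model doing something beyond printing, here's a minimal local sketch (no cluster required; the array and the arithmetic are just for illustration):

// launch one task per core; each task fills its own slot of the array
const nTasks = here.maxTaskPar;
var partial: [0..<nTasks] int;
coforall tid in 0..<nTasks do
  partial[tid] = tid * tid;   // each iteration runs as its own concurrent task
writeln("partial results: ", partial);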
You can even write GPU kernels. Here's a fragment from one of the project's examples that spreads work across the GPUs on a node:

coforall (gpu, row) in zip(here.gpus, localRowStart..) do on gpu {
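That fragment is just an excerpt from a larger program. For something small enough to compile on its own, here's a hedged sketch of the GPU model, assuming a GPU-enabled Chapel build: code inside an on block that targets a GPU sublocale runs on that GPU, and order-independent loops inside it are compiled into kernels.

// target the first GPU of the current locale (requires GPU support)
on here.gpus[0] {
  var A: [1..1000000] real;     // allocated for the GPU sublocale
  forall i in 1..1000000 do     // order-independent loop becomes a GPU kernel
    A[i] = 2.0 * i;
  writeln("A[42] = ", A[42]);   // read a value back to check the result
}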
You can try it in your browser, but for best results you'll want to download it or run it in a container. The license is Apache 2.0, so you can even contribute if you want. If you really want to do distributed work, be sure to grab a package built with GASNet or Slurm support.
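Once installed, the workflow is the usual compile-and-run cycle, with the number of locales picked at launch time. A sketch, with a made-up file name (a multilocale build is needed for -nl to spread work across nodes):

chpl hello.chpl -o hello   # compile
./hello -nl 4              # run across four locales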
While it is something new to learn, you might find it easier and more generally applicable than something like CUDA.