Tuesday, May 26, 2015

International HPC Conferences in Brazil this Year: CARLA and SBAC-PAD

If you have been searching for places to publish the latest results of your research in HPC, or to improve your networking with fellow Latin American researchers, 2015 provides two opportunities in Brazilian cities: CARLA and SBAC-PAD.


The Latin America High Performance Computing Conference (CARLA) will hold its second edition this year, taking place on August 26-28 in the city of Petrópolis, Rio de Janeiro. As previously covered in HPCLA, CARLA is a joint effort that merges two earlier Latin American conferences, HPCLaTam and CLCAR.

Besides its technical program, CARLA has three confirmed international keynotes: Luiz DeRose, from Cray Inc.; Harold Castro, from the University of Los Andes, Colombia; and Michael Wilde, from Argonne National Laboratory. Additionally, CARLA will host the HPC Advisory Council Brazil Conference (HPCAC) and the Regional School in High Performance Computing of the state of Rio de Janeiro (ERAD/RJ).

CARLA is still accepting paper submissions until June 14. Accepted full papers will be published in Springer's Communications in Computer and Information Science (CCIS) series.


The 27th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD) will be hosted this year in the city of Florianópolis, Santa Catarina, on October 18-21. Since SBAC-PAD is hosted in Brazil only every two years, this is an ideal opportunity to participate.

SBAC-PAD has three confirmed international keynotes so far: Geoffrey Fox, from Indiana University; Satoshi Matsuoka, from Tokyo Institute of Technology; and Onur Mutlu, from Carnegie Mellon University. SBAC-PAD will also host two international workshops - namely the 4th Workshop on Parallel Programming Models (MPP), and the 6th Workshop on Applications for Multi-Core Architectures (WAMCA) - and the Brazilian Symposium in High Performance Computing Systems (WSCAD).

SBAC-PAD's abstract submission deadline is June 7, with full papers due June 15. Accepted papers will be published in the conference proceedings in the IEEE Xplore Digital Library.

Update 1: CARLA's deadline was postponed to June 14 (originally May 31).
Update 2: SBAC-PAD's deadlines were postponed to June 7 and June 15 (originally June 1st and June 8).

Tuesday, May 19, 2015

Designing an Undergraduate Class on High Performance Computing

Since the advent of multicore processors in the mid-2000s, parallel architectures have become mainstream. These days it is extremely difficult to find a single-core processor, even in cell phones, tablets, and laptops. As a result, there is growing interest in training students to program parallel architectures, and in particular, classes on high-performance computing (HPC) are gaining popularity in universities. Part of this interest comes from the fact that HPC is almost synonymous with parallel computing.

Here are some of the challenges of designing an undergraduate-level class on HPC in Latin America:
  • List of Topics. This is probably the first big decision to be made: which topics are in, and which are out? The answer depends on many factors, including the instructor's expertise, the availability of resources, and the students' background. As a general guide, a class on HPC should target junior and senior students (3rd and 4th year of the program) in computer science or computer engineering; this ensures students have enough background in core computer science concepts (algorithms, systems, programming). The recommendation is to include at least these topics: architecture, interconnects, performance models, parallel algorithms, shared-memory programming, distributed-memory programming, accelerator programming, and some form of parallel programming patterns.
  • Programming Languages. The point above listed three programming platforms, and there should be at least one programming language for each. The recommendation is to use open-source compilers. A safe bet is C/C++, languages most students are familiar with. Shared-memory systems can be programmed using the OpenMP standard; most C/C++ compilers provide an implementation of OpenMP, and the GNU compiler, gcc, offers a good one. Distributed-memory systems can be programmed using MPI, the de-facto standard for this type of system; among its many implementations, MPICH and Open MPI are two open-source, well-maintained options. Finally, accelerators can be programmed using CUDA, NVIDIA's platform for programming its GPUs; the nvcc compiler is free and handles CUDA code well. An alternative to CUDA is OpenCL, a standard that targets accelerator architectures in general.
  • High Performance Computing Resources. This is the biggest challenge. Some Latin American institutions have supercomputers available; in that case, the system administrators may be willing to grant the instructor an education allocation on the system. If no supercomputer is readily available, it is possible to get an allocation on international supercomputers through agreements or access programs (check this post or this other post for more information). Finally, if none of the above is an option, a simple desktop provides a minimal testbed for at least shared-memory and distributed-memory programming; if the desktop is equipped with a GPU, the puzzle is complete. Some laboratories may even feature multiple desktop computers with this configuration.
Finally, a piece of advice for keeping the class dynamic: start with programming examples as soon as possible. A simple yet powerful language for introducing data and loop parallelism is Cilk Plus; even compilers such as gcc and Clang now recognize Cilk constructs.