Tuesday, October 28, 2014

Latin American HPC Team Competing for Charity

Latin America will be represented by seven brave people in this year's edition of the Intel® Parallel Universe Computing Challenge (PUCC), an event where teams compete for the prize of a $25,000 donation to a charitable organization. The competition is organized as a single-elimination tournament combining trivia rounds, where teams answer 15 questions with 30 seconds per question, and parallel code optimization rounds, where teams have 10 minutes to speed up a program.


The Latin American team, named SC3 after the Super Computing and Scientific Computing (Supercomputación y Cálculo Científico) laboratory at the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia, includes members from Brazil, Colombia, Costa Rica, Mexico, and Venezuela. The team is managed by Gilberto Javier Diaz Toro, infrastructure chief of SC3 at UIS. This is the first time Latin Americans have participated in the event. Participating in and organizing HPC competitions can help bring visibility to HPC in Latin America and motivate new talent to enter the field.

On their first day, SC3 will face Coding Illini, a team representing the National Center for Supercomputing Applications and the University of Illinois at Urbana–Champaign that made it to the finals of last year's competition.

PUCC will be held within SC14, from November 17 to November 20. SC, the International Conference for High Performance Computing, Networking, Storage and Analysis, also known as the Supercomputing Conference, is one of the major HPC events in the world. This year's edition received almost 400 paper submissions (of which 21% were accepted) and will host over 350 exhibitors. More than 10,000 attendees are expected.


Tuesday, October 21, 2014

Create an Intel Parallel Computing Center in your country

With the acquisition of interconnect assets from Cray Inc. and aggressive development of co-processors, the giant microchip manufacturer Intel is making it clear they mean business in HPC. The company is betting on the evolution of its line of co-processors, named Intel Xeon Phi, to dominate the market for HPC accelerators and dethrone NVIDIA and its GPUs from the top. To ensure that Intel Xeon Phi enjoys wide acceptance, Intel created an initiative called Intel Parallel Computing Centers (IPCC).

An IPCC is a unit inside an academic institution or a company that is devoted to accelerating scientific and engineering simulations with Intel Xeon Phi co-processors. To exploit the full potential of a co-processor, applications have to be analyzed to expose more parallelism and to identify regions of code that can be vectorized. This is not an easy task, and Intel acknowledges that: each IPCC focuses on specific codes and has the help of Intel experts.

The IPCC program accepts applications year-round to create new IPCCs in academia, government, or industry. The award includes a renewable two-year grant of $100K to $200K USD per year. The 7-10 page proposal should emphasize the impact of optimizing a science or engineering application with Intel Xeon Phi co-processors. For more information on the application process, visit https://software.intel.com/en-us/ipcc.

There are currently 40 IPCCs established in many countries. Their areas of specialization include:
  • Climate Modeling.
  • Computational Chemistry.
  • Materials Modeling.
  • Data Analytics.
  • Molecular Dynamics.
  • Computational Fluid Dynamics.
  • Linear Algebra and Multi-Physics Codes.

There is a wide distribution of IPCCs around the globe: of the 40 IPCCs, 50% are located in the United States of America, 30% in Europe, 15% in Asia, and 5% in Latin America. The two Latin American IPCCs are both located in Brazil. One is at the Federal University of Rio de Janeiro and specializes in seismic imaging for oil and gas. The other is at São Paulo State University and focuses on physics simulations of particles and matter.

The Intel Parallel Computing Centers represent a good opportunity to foster collaborative research in computational science by accelerating code on an architecture that Intel intends to push as its flagship HPC hardware in the future. Do you want to ride this wave?

Tuesday, October 14, 2014

Leftraru supercomputer is here, and you can try it!


The Leftraru supercomputer is finally here and ready to produce groundbreaking science. Leftraru (also known as Lautaro) is a name from the Mapuche language meaning "swift hawk", which is very appropriate for a machine that will "fly" over oceans of data at high speed and literally survey the sky thanks to the astronomy models, data mining, and visualization tools that will be deployed on the system in the coming months.

The machine is composed of 132 Hewlett-Packard computing nodes, each with two 10-core Intel Xeon Ivy Bridge processors (2,640 cores in total) and a combined 6.25 terabytes of memory. In addition, it showcases 12 Intel Xeon Phi 5110P co-processors. The system has 274 terabytes of Lustre storage coupled with an InfiniBand FDR network at 56 Gbps.

Installed last June at the National Laboratory for High Performance Computing (NLHPC) in Santiago de Chile, the machine is set to be the most powerful supercomputer in Chile, with a theoretical peak performance of 70 teraflops (70 trillion mathematical operations per second).

NLHPC, in collaboration with the Center for Mathematical Modeling (CMM) of the University of Chile (UChile), has launched a call for users to test the supercomputer. Among all applicants, four fortunate groups will be selected this Friday, October 17. The winners will be granted one month of access to the supercomputer, during which they will be able to launch scientific simulations using up to 512 cores and 2 terabytes of storage. The idea behind this great initiative is to showcase the scientific capabilities of the supercomputer.

NLHPC is a laboratory led by the CMM, with UChile as its sponsoring institution, in association with Universidad Católica del Norte (UCN), Universidad Técnica Federico Santa María (UTFSM), Pontificia Universidad Católica de Chile (PUC), Universidad de Santiago (USACH), Universidad de Talca (UTalca), Universidad de la Frontera (UFRO), and REUNA.

Tuesday, October 7, 2014

The First Latin American Joint Conference on HPC

If you have ever wondered where you could meet fellow researchers working on High Performance Computing in Latin America, then the Latin America High Performance Computing Conference (CARLA) is the place you were searching for.

CARLA is a joint effort that combines two earlier Latin American conferences, namely the Latin America Symposium on High Performance Computing (HPCLaTam) and the Latin American Conference on High-Performance Computing (CLCAR), both of which would have been in their seventh edition this year. This union presents a promising opportunity for the development of a strong and active HPC community.

The first edition of the joint conference will take place on October 20-22, 2014, in Valparaíso, Chile. This year's sponsors are Intel and NVIDIA.

If you are interested in keynotes, speakers from Argentina, Chile, Germany, Mexico, Spain, and Uruguay have already been confirmed. And if your focus is on improving the visibility of your research, you should know that selected papers will be invited for submission to Springer's Cluster Computing journal this year.

For more information about CARLA, check its webpage at http://carla2014.hpclatam.org/.

Thursday, October 2, 2014

LARTop50: The Fastest Supercomputers in Latin America

According to LARTop50, the fastest supercomputer in Latin America is Miztli, installed at the Universidad Nacional Autónoma de México (UNAM). It has a maximum performance of roughly 80 teraflops. Miztli comprises 5,280 cores and features an InfiniBand interconnect. The top five spots on the list contain systems from countries such as Brazil, Chile, and Argentina.

The LARTop50 project collects performance information and ranks the fastest supercomputers in Latin America. Similar to its worldwide counterpart, the Top500 list, it presents the main features of each machine: name, site, country, vendor, system type, processor type, node and core counts, maximum and peak performance, power information, network type, and more.

To rank the different systems, LARTop50 uses the same benchmark as the Top500 list: LINPACK, a set of linear-algebra routines common to many numerical methods, originally written in the 1970s. The version used to rank supercomputers is High Performance Linpack (HPL), a parallel implementation for distributed-memory systems that solves a large, random, dense system of linear equations. Despite some criticism, LINPACK and HPL continue to be the standard for comparing performance across a wide range of systems.

The LARTop50 project was originally conceived in 2011 at the Universidad Nacional de San Luis, Argentina, and is now composed of members from multiple Latin American organizations. The project aims to collect performance information on HPC systems, disseminate that information within industrial, academic, and governmental communities, promote training in HPC, and organize events to develop HPC in general.

Initiatives such as LARTop50 are very important because they provide access to information that is otherwise nearly impossible to gather. HPC systems in Latin America lag behind the fastest supercomputers in the world, so it is hard for Latin American systems to make it onto the Top500 list. LARTop50 brings visibility to Latin American supercomputers. In addition, it gives supercomputing facilities an incentive to adopt a standard mechanism, LINPACK in this case, to tune and measure the performance of their systems.

The current list shows only 13 systems. This number is expected to grow as the HPC community in the region becomes more integrated and the use of HPC resources becomes mainstream. Also, as the project matures, more statistics will be available and more powerful projections can be drawn from the data.

For more information about LARTop50, visit its webpage at http://lartop50.org.