Sunday, December 21, 2014

Create a CUDA Center in your country

In a previous post, we reported on the Intel Parallel Computing Centers (IPCC) and the opportunities that arise from bringing a specialized laboratory on accelerator technology to your country. This time, it is the turn of the CUDA centers, a similar and well-established initiative by NVIDIA.

The most recent release of the Top500 list reveals the relevance of accelerators in the construction of modern supercomputers. Out of the top 10 systems, 5 use accelerators, and 3 of those use NVIDIA graphics processing units (GPUs). Moreover, 16 systems in the first 100 use NVIDIA GPUs. Power concerns are driving a wider adoption of GPUs in new systems: a look at the top 10 systems of the Green500 list (the supercomputers with the best FLOPS/Watt ratio) shows that 8 of them use NVIDIA GPUs. In addition, the Department of Energy program CORAL, which will provide the next set of elite supercomputers in the United States in 2017, will include IBM hybrid machines combining POWER processors and NVIDIA GPUs.

At the heart of NVIDIA's strategy to democratize the use of GPUs is a language extension for offloading computation to the GPU, called the Compute Unified Device Architecture (CUDA). Programs written in C, C++, or Fortran can be modified to use the graphics card in a computer as a general-purpose processor. CUDA thus makes it possible to accelerate kernels in traditional scientific computing routines (see the sketch after the list below), and new scientific fields are exploring CUDA to measure its impact on performance. NVIDIA offers several academic programs to foster the use of CUDA in an institution. Among those programs are the CUDA centers, which aim to improve education, research, and collaboration around CUDA. There are three types of CUDA centers:

  • Teaching center: designed to improve training on CUDA programming, support research, and complement advanced parallel computing education. The benefits of a teaching center include the donation of CUDA-enabled GPUs, education material, teaching assistant funds, and technical support on CUDA technology.
  • Research center: aimed at pushing the envelope on scientific applications by using CUDA-enabled GPUs. Among the benefits of a research center are participation in conferences, the assignment of a CUDA liaison, donation of GPUs, inclusion in a larger network of CUDA developers, and promotion in several NVIDIA programs.
  • Center of excellence: oriented towards the recognition of groundbreaking research in GPU computing. This last type of center looks for a stronger relationship between NVIDIA and the institution and provides a collaboration channel for the development of new projects.
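
To give a flavor of the programming model these centers teach and research: a CUDA program launches a kernel, a function executed in parallel by thousands of GPU threads, each handling a different piece of the data. CUDA itself extends C, C++, and Fortran, but the same model is exposed in Python through the Numba library. Here is a minimal, hypothetical sketch (a vector addition, not taken from any center's code), which requires a CUDA-capable GPU to run:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # global index of this GPU thread
    if i < out.size:              # guard threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # kernel launch

assert np.allclose(out, a + b)
```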

There are dozens of CUDA centers around the world. In particular, Latin America has a good number of centers distributed across 7 countries. The figure below shows the distribution of CUDA centers in several Latin American countries. A clear trend in all countries is to have more teaching centers than research centers. This trend is possibly explained by the way CUDA maturity evolves: it starts with training, followed by programming projects, then research, and finally state-of-the-art CUDA development. Brazil seems to be the most advanced country in terms of CUDA maturity. In fact, it is the only country featuring a center of excellence.


The Computer Science Institute at the Universidade Federal Fluminense in Rio de Janeiro, Brazil, hosts the CUDA Center of Excellence, the only one of its kind in Latin America (so far). Led by Prof. Esteban Clua, the center focuses on the development of applications for cosmology, oil exploration, and software engineering. The center also provides training opportunities for those willing to enter the CUDA world.

If you wish to apply for a CUDA center in your institution, visit https://developer.nvidia.com/academic-programs for more information.

Tuesday, December 16, 2014

Supercomputing Center in Colombia will host Aerospace Data for the European Galileo Project

The Center for High Performance and Scientific Computing (SC3) of the Industrial University of Santander in Colombia hosts the Galileo Information Center for Latin America on behalf of the European Union's Galileo Programme, and supports a number of data analytics projects for applications based on Global Navigation Satellite Systems. To learn more about this, we interviewed Professor Raul Ramos-Pollan, who leads Big Data research at SC3 and is responsible for the Galileo Information Center.

Q: What is the Galileo Project and who is leading it? 

A: The Galileo program is the Global Navigation Satellite System (GNSS) being built by the European Union and the European Space Agency (ESA). It will consist of a constellation of 30 satellites providing worldwide positioning services complementary to the GPS system from the US, the GLONASS system from Russia, and the Beidou system from China (expected to be completed by 2020). Together, these constellations are expected to provide high-precision positioning based on signals from satellites of different systems. Currently the Galileo constellation has 6 satellites in orbit and is expected to be fully deployed by 2020.

Q: What is the purpose of the Galileo Information Center for Latin America?

A: In this multi-constellation context, a wealth of opportunities arises for new applications across industry and academia (transportation, agriculture, air navigation, etc.), providing sub-meter positioning accuracy to common users and not only to specialized ones. In this sense, the Galileo Information Center for Latin America has several goals: (1) to raise awareness of the development of the Galileo constellation, (2) to identify applications of specific interest given the particularities of the region, and (3) to be a feedback channel from the region to the EU for considering future funding opportunities (especially the Horizon 2020 R&D program).

Q: Why is it important to have a partnership between government, industry and academia? 

A: The GNSS industry has traditionally been fostered through tight cooperation among industry, government, and academia, including the EU space program developed by ESA. As opportunities for multi-constellation GNSS applications are identified, this tight partnership is key to enabling application development, from research to production.

Q: What is the role of SC3 in this partnership and how is this beneficial for the region? 

A: SC3 hosts the Galileo Information Center for Latin America at the Guatiguara Technological Park, an initiative of the Industrial University of Santander to endow itself with the resources to attract industry and foster collaboration with academia. This scenario was considered to be well aligned with the goals of the Galileo Information Center, and we expect (as is already happening) a special impact in the region in terms of access to opportunities, information channels, and expertise.

Q: What opportunities does this project bring in terms of data-intensive applications?

A: As multi-constellation GNSS receivers start to emerge (or current ones are adapted), a new order of magnitude of data generation is expected, if only because a receiver will see at least 4 satellites from each constellation, and all of this data has to be used to obtain accurate positioning. Apart from this, GNSS monitoring networks are also going to produce more and more data, enabling opportunities in industry and academia for data-intensive applications. Just to mention a few, these include ionospheric modelling, climate, seismic research, personalized location-based services, transportation in urban canyons, precision agriculture, etc.
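
As a technical aside on why at least 4 satellites are needed: each pseudorange measurement constrains the receiver's three position coordinates plus its clock bias, four unknowns in total. Below is a minimal, self-contained sketch of the least-squares solve behind every GNSS fix; the satellite coordinates and clock bias are hypothetical values chosen only for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (meters, Earth-centered frame).
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
truth = np.array([1_112e3, 2_223e3, 3_334e3])  # "unknown" receiver position
bias = 8.5e-5                                   # receiver clock bias, seconds
pranges = np.linalg.norm(sats - truth, axis=1) + C * bias

# Gauss-Newton iteration on the 4 unknowns (x, y, z, clock bias).
est = np.zeros(4)
for _ in range(10):
    pos, b = est[:3], est[3]
    rho = np.linalg.norm(sats - pos, axis=1)
    residual = pranges - (rho + C * b)
    J = np.hstack([-(sats - pos) / rho[:, None], C * np.ones((len(sats), 1))])
    est += np.linalg.lstsq(J, residual, rcond=None)[0]

print(est[:3], est[3])  # should converge to the true position and clock bias
```

With multi-constellation receivers, the same solver simply gains more measurement rows, one per visible satellite, which is part of the data growth described above.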

Tuesday, December 9, 2014

Profile of national HPC developments in Latin America - Part I

In this series of posts, we will present some of the national developments in high performance computing seen in Latin American countries. Today, we will focus on the two largest countries of Latin America (in area): Argentina and Brazil.

Argentina - SNCAD
Argentina's National High Performance Computing System (SNCAD, Sistema Nacional de Computación de Alto Desempeño) was created in 2010 for the purpose of recommending policies and coordinating actions to maintain and update computational resources in the country. It currently supports seventeen computing centers distributed over nine provinces by helping fund machines and support personnel.

SNCAD computing resources are not integrated. That means scientists cannot easily run parallel software over different resources, and are required to request accounts at each computing center. Accounts may not even be possible in some cases, as centers are independent and have their own local regulations. In this context, a sensible advance for SNCAD would be to assist in the development of a national research grid.

Brazil - SINAPAD
The Brazilian National High Performance Computing System (SINAPAD, Sistema Nacional de Processamento de Alto Desempenho) is composed of nine HPC national centers (CENAPADs) hosted by federal universities, institutes, or laboratories distributed over seven different states. These HPC national centers hold some of the fastest supercomputers in Latin America. SINAPAD was formally created in 2004, while the oldest of its national centers started back in 1992.

SINAPAD's computational resources include over 10,000 CPU cores and 28,000 GPU cores for a total of 171 TFLOPS of peak performance. Most computational resources are Bull, SGI, and Sun platforms. These resources are available through specific web portals only. Each portal enables the execution of specific non-interactive software, such as Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) and a probabilistic primality test. Although limited, this scheme enables research by scientists who would otherwise not have enough resources for their experiments.

Tuesday, December 2, 2014

MPI4Py: From Latin America to the World!

The Python programming language has become one of the big three languages in HPC (along with Fortran and C/C++). Thanks to an extended ecosystem of scientific libraries, development tools, and applications, Python has been widely adopted by many scientific communities as a high-level, easy-to-use scripting language. Plugging Python code into pre-existing programs in other languages is straightforward. Therefore, Python makes prototyping new computational methods easy and powerful. It comes as no surprise that your favorite HPC library, whatever that might be, has a Python interface.

The same is true for the Message-Passing Interface (MPI), the standard mechanism to implement large-scale parallel applications in HPC. The last decade brought several competing alternatives for using MPI with Python: PyMPI, PyPar, and MPI4Py, among others. All those libraries shared the same goal of providing an implementation of the MPI standard to Python programmers. However, after years of competition, MPI4Py emerged as the clear winner. Interestingly, MPI4Py was developed in Latin America by Lisandro Dalcin. Lisandro is a researcher at the Research Center in Computational Methods in Argentina and holds a PhD in Engineering from Universidad Nacional del Litoral. He is the author of MPI4Py, the most popular implementation of MPI in Python, as well as of PETSc4Py and SLEPc4Py. He is a contributor to the Cython and PETSc libraries, and was part of the PETSc team that won the R&D 100 Award in 2009.

MPI4Py offers a convenient, yet powerful way to use the MPI-2 standard in Python. The design of MPI4Py makes it possible to seamlessly incorporate message-passing parallelism into a Python program. One important feature of MPI4Py is its ability to communicate any built-in or user-defined Python object, taking advantage of the pickle module to marshal and unmarshal objects. Thus, turning an object-oriented Python code into a parallel program does not involve a huge effort on serialization methods. In addition, MPI4Py efficiently implements the two-sided communication operations of the MPI-2 standard (including non-blocking and persistent calls). To top it all off, MPI4Py provides dynamic process management, one-sided communication, and parallel I/O operations.
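
As a minimal sketch of what this looks like in practice (a hypothetical two-process example, not taken from the MPI4Py documentation): the lowercase send/recv methods pickle arbitrary Python objects behind the scenes, while the uppercase Send/Recv variants move buffer-like objects such as NumPy arrays without that overhead.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Any picklable object travels as-is: no hand-written serialization needed.
    payload = {"step": 42, "samples": [3.14, 2.71], "label": "demo"}
    comm.send(payload, dest=1, tag=0)
elif rank == 1:
    payload = comm.recv(source=0, tag=0)
    print(f"rank 1 received: {payload}")
```

Saved as demo.py, this would run with something like mpiexec -n 2 python demo.py.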

The MPI4Py project demonstrates the potential of hard, collaborative work in the region. For more information about MPI4Py, visit the webpage http://mpi4py.scipy.org.

Tuesday, November 25, 2014

Inside the Broader Engagement program of the Supercomputing Conference

The Supercomputing Conference (SC) that took place last week in New Orleans was a highly successful event where the brightest minds in high performance computing presented outstanding research. This event should not be missed by those working on cutting-edge supercomputer research. However, the flight to the USA plus a week of lodging and other expenses is a significant cost that not all young scientists can afford. Fortunately, the SC series offers help to those passionate about supercomputers who want to learn, exchange, and present novel ideas about HPC: the Broader Engagement (BE) program tries to increase the diversity of the HPC community by offering financial support to a large number of people from different backgrounds (as long as they have a legitimate interest in HPC).

In order to offer our readers a clear idea of what the BE program of the SC conference has to offer, we interviewed Javier Iparraguirre, who just attended SC14 under the BE program. Javier received a Bachelor of Science degree in Electrical Engineering from Universidad Nacional del Sur in Argentina. He was awarded a Master's scholarship by the Fulbright Commission in 2005 and received a Master's in Computer Science from the University of Illinois at Chicago. He currently teaches programming to Electrical and Computer Engineering students at Universidad Tecnológica Nacional and Universidad Nacional del Sur in Argentina.

Q: How did you know about the SC Conference? 

A: While talking to colleagues, I heard that SC is the best HPC conference in the world. From that moment on, I started following all the news related to the event (social media, RSS feeds, and blogs).

Q: How did you learn about the BE program?

A: I learned about the BE program from the SC14 website.

Q: How was your experience at SC14? Was the BE program useful to you?

A: The experience was life-changing. First of all, I met the most famous people in the supercomputing world, such as Jack Dongarra and Bill Gropp. Then, the BE program helped me to understand the current state of research and development in HPC. Finally, I met a lot of people with similar objectives and motivations.

Q: What did you like the most? What could be improved?

A: The most relevant thing I want to highlight is the opportunity to meet people with similar interests. It is an excellent place to interact and learn about HPC. One aspect to improve is to add multiple tracks to the BE program. The selected participants covered a wide range of ages and education levels: on one end of the spectrum there were undergraduate students, and on the other there were professors. Adding multiple tracks may be helpful for all BE participants.

Q: Is there any advice you would like to share for future SC and BE attendants?

A: My advice is to apply. After your submission is accepted, get involved in all activities proposed in the BE program. You will not regret it. 

Tuesday, November 18, 2014

STIC-AmSud - Funding collaborations in Latin America

One of the main difficulties for the development of HPC research collaborations in Latin America is funding, as investments in infrastructure and research missions are subpar. Nevertheless, there is no reason to despair and give up, as there are frameworks to improve collaborations, such as STIC-AmSud.

The Regional Program STIC-AmSud is a cooperation initiative between France and South American countries started in 2005. It currently includes Argentina, Brazil, Chile, Paraguay, Peru, Uruguay, and Venezuela. Its main objective is to promote and strengthen collaborations in the field of Information and Communication Science and Technology. This is done by funding missions and exchanges among Latin American countries, and also with France. In order to promote long-lasting cooperation networks, the participation of young researchers is favored. Although STIC-AmSud is not focused solely on HPC, it has funded HPC projects before, such as the South-American Grid for the Brazilian Regional Atmospheric Modeling System and Semantic Web Analytics: Processing Big Data on Energy Consumption.

STIC-AmSud has yearly calls for proposals for two-year projects. Proposals must include at least one French research group and at least two Latin American research groups belonging to different participating countries. Approved projects receive between 10,000 € and 15,000 € per year to pay solely for travel expenses (flights, lodging, meals) between the participating institutions.

Although the call for proposals for projects starting in 2015 has already closed, the next call, for 2016, should open in the coming months and close by mid-2015. To stay informed, please check STIC-AmSud's official website and keep an eye on your own national funding agency.

Tuesday, November 11, 2014

RISC Project: let's build HPC together

"If you don't compute, you don't compete", that is how Mateo Valero, a seasoned HPC expert and the director of the Barcelona Supercomputing Center, summarizes the fundamental need to develop HPC infrastructure and develop science, engineering, government, and industry in the modern society. However, for some regions - Latin America among them - it is hard to keep up with the relentless development of powerful machines in other latitudes. To bridge this gap, sometimes it is necessary to build a network of stakeholders that share a common goal, agree to team up, and work in conjunction to provide all the actors in society with the right computing infrastructure that make all of them competitive.

That goal was among the main drivers behind the RISC project. A group of European and Latin American organizations worked together to understand the limitations, actors, and opportunities of HPC in Latin America. For more than two years, these groups worked on identifying the key components of a successful HPC initiative in Latin America. The specific goals of the RISC project included: a) strengthening the cooperation between Europe and Latin America through the construction of a community to increase capacity, awareness, networking, and training; b) assessing the drivers and needs of HPC in Latin America; c) drafting a roadmap for HPC development in Latin America; d) identifying research clusters in Latin America; and e) paving the road for a dialogue between the research communities and policy makers.

The RISC project identified research communities that were hungry for computing power and would benefit from access to HPC resources. The main areas pinned down by the project were: a) computational biology, b) oil exploration modeling, c) modeling and simulation of natural disasters, and d) technological innovation.

Another important contribution of the RISC project was a green paper with recommendations to develop HPC in Latin America. The paper suggests reducing the region's gap in HPC infrastructure relative to the global leaders by consolidating a joint infrastructure for the whole region. Such an infrastructure should follow a hierarchy with three layers of HPC resources (national, regional, and local), with the first layer of most powerful supercomputers being shared among users in the different countries. A more urgent recommendation in the paper is to obtain access to leadership facilities elsewhere to avoid the exodus of well-trained and competitive scientists and engineers. In addition, the paper stresses the importance of developing training programs, constructing open-source software, pushing forward the discussion on policy making for HPC infrastructure, consolidating cooperation agreements with other entities, and more.

To learn more about the RISC project and the green paper, please visit the project's webpage http://www.risc-project.eu.

Tuesday, November 4, 2014

Successful SuperComputing-Camp at the Coffee Triangle in Colombia

Over 70 young minds gathered and camped at the Super Computing and Distributed Computing Camp (SC-Camp) this year. The event was hosted by the Bioinformatics and Computational Biology Center (BIOS) located at the Natural Park Los Yarumos in Manizales, Colombia. The park lies in the Colombian Coffee Growing Axis (also known as the Coffee Triangle), which produces the majority of the world-famous Colombian coffee. Speakers from the USA, Brazil, and Venezuela gave state-of-the-art talks in various domains, including parallel computing and computational science. Students from 9 Colombian universities had the opportunity to learn cutting-edge technologies and make new friendships that can lead to future research collaborations.

The event was strongly supported by important industry actors such as Intel, Hewlett Packard, and Microsoft. Representatives from these companies presented to the young generation the new high performance technologies they are developing and pushing into the market. “It is very exciting to have the opportunity to approach the new players in the programming world; one can feel this very good energy, the capacity and desire to learn,” said Hugues Hugo Morin from Intel.

For the students, it was a memorable experience. “I leave SC-Camp with new friends and more knowledge; this was the first step toward something big,” said Julian Hernando Henao, a participant from the University of Caldas. “I had the opportunity to learn new methodologies in topics related to computer science, parallel programming, and big data, opening several exploratory directions where one can develop new research,” said Maria Alejandra Munoz, a participant from the National University of Colombia in Manizales. For many of them, the experience of SC-Camp will continue as they develop collaborations on new research topics relevant to their careers and the local needs of their communities.

For BIOS, it was a great honor to host SC-Camp, which gained national interest as large Colombian media groups spread the news about the event and the support of the local authorities for such initiatives. It is important to keep promoting these types of activities in order to pave the way for the future generation of high performance computing professionals, so that they can have a strong positive impact on the region, the country, and hopefully the world.

Tuesday, October 28, 2014

Latin American HPC Team Competing for Charity

Latin America is going to be represented by seven brave people in this year's edition of the Intel® Parallel Universe Computing Challenge (PUCC), an event where teams compete for the prize of a $25,000 donation to a charitable organization. The competition is organized as a single-elimination tournament involving trivia, where teams answer 15 questions with 30 seconds per question, and parallel code optimization, where teams have 10 minutes to speed up a program.


The Latin American team, named SC3 after the Supercomputing and Scientific Computing (Supercomputación y Cálculo Científico in Spanish) laboratory at the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia, includes members from Brazil, Colombia, Costa Rica, Mexico, and Venezuela. The team is managed by Gilberto Javier Diaz Toro, infrastructure chief of SC3 at UIS. This is the first time Latin Americans have participated in the event. The participation in and organization of HPC competitions can help bring visibility to HPC in Latin America and motivate new talent to enter the field.

On their first day, SC3 will face Coding Illini, a team representing the National Center for Supercomputing Applications and the University of Illinois at Urbana–Champaign that made it into the finals of last year's competition.

PUCC is going to be held within SC14, from November 17 until November 20. The International Conference for High Performance Computing, Networking, Storage and Analysis, also known as the Supercomputing Conference, is one of the major HPC events in the world. This year's edition had almost 400 paper submissions (of which 21% were accepted) and will have over 350 exhibitors. Over 10,000 attendees are expected to participate.


Tuesday, October 21, 2014

Create an Intel Parallel Computing Center in your country

With the acquisition of interconnect assets from Cray Inc. and the aggressive development of co-processors, the giant microchip manufacturer Intel is making clear they mean business in HPC. The company is betting on the evolution of its line of co-processors, named Intel Xeon Phi, to dominate the market of accelerators for HPC and dethrone NVIDIA and its GPUs from the top. To ensure that the Intel Xeon Phi enjoys wide acceptance, Intel created an initiative called the Intel Parallel Computing Centers (IPCC).

An IPCC is a unit inside an academic institution or a company that is devoted to accelerating scientific and engineering simulations with Intel Xeon Phi co-processors. To exploit the full potential of a co-processor, applications have to be analyzed to expose more parallelism and to identify regions of code that can be vectorized. This is not an easy task, and Intel acknowledges that. A particular IPCC will focus on specific codes and will have the help of Intel experts.
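
Xeon Phi codes are typically tuned in C or Fortran with OpenMP pragmas and compiler reports, so the snippet below is only a conceptual sketch, in Python/NumPy, of what "exposing vectorizable work" means: the same computation written element by element versus as a single whole-array expression that a compiler or runtime can map onto wide SIMD units.

```python
import numpy as np

x = np.random.rand(100_000)

# Scalar form: one element at a time, with little parallelism exposed.
y_loop = np.empty_like(x)
for i in range(x.size):
    y_loop[i] = 2.0 * x[i] * x[i] + 3.0 * x[i] + 1.0

# Vectorized form: one whole-array expression, trivially mapped to SIMD lanes.
y_vec = 2.0 * x * x + 3.0 * x + 1.0

assert np.allclose(y_loop, y_vec)
```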

The IPCC program receives applications all year round to create new IPCCs in academia, government, or industry. The award includes a renewable 2-year grant of $100K to $200K USD per year. The 7-10 page proposal should emphasize the impact of optimizing a science or engineering application with Intel Xeon Phi co-processors. For more information on the application process, please visit https://software.intel.com/en-us/ipcc.

There are currently 40 IPCCs established across many countries. The areas of specialization of those centers include:
  • Climate Modeling.
  • Computational Chemistry.
  • Materials Modeling.
  • Data Analytics.
  • Molecular Dynamics.
  • Computational Fluid Dynamics.
  • Linear Algebra and Multi-Physics Codes.

There is a wide distribution of IPCCs around the globe. Out of the 40 IPCCs, 50% are located in the United States of America, 30% in Europe, 15% in Asia, and 5% in Latin America. The 2 IPCCs in Latin America are both located in Brazil: one at the Federal University of Rio de Janeiro, specialized in seismic imaging for oil and gas, and the other at São Paulo State University, focused on physics simulations of particles and matter.

The Intel Parallel Computing Centers represent a good opportunity to foster collaborative research in computational science by accelerating code on an architecture that Intel will push as its flagship HPC hardware in the future. Do you want to ride this wave?

Tuesday, October 14, 2014

Leftraru supercomputer is here, and you can try it!


The Leftraru supercomputer is finally here and ready to produce groundbreaking science. Leftraru (also known as Lautaro) is a Mapuche name meaning "swift hawk", a very appropriate name for a machine that will "fly" over oceans of data at high speed and survey the sky thanks to the astronomy models, data mining, and visualization tools that will be deployed on the system in the coming months.

The machine is composed of 132 Hewlett-Packard computing nodes, each with two 10-core Intel Xeon Ivy Bridge processors (2,640 cores in total) and a combined 6.25 terabytes of memory. In addition, it showcases 12 Intel Xeon Phi 5110P co-processors. The system has 274 terabytes of Lustre storage coupled with an InfiniBand FDR network at 56 Gbps.

Installed last June at the National Laboratory for High Performance Computing (NLHPC) in Santiago de Chile, the machine is set to be the most powerful supercomputer in Chile, with a theoretical peak performance of 70 teraflops (70 trillion mathematical operations per second).

NLHPC, in collaboration with the Center for Mathematical Modeling (CMM) of the University of Chile (UChile), has launched a call for users to test the supercomputer. Among all applicants, four fortunate groups will be selected this Friday, October 17th. The winners will be granted one month of access to the supercomputer, during which they will be able to launch scientific simulations using up to 512 cores and 2 terabytes of storage. The idea behind this great initiative is to showcase the scientific capabilities of the supercomputer.

NLHPC is a laboratory led by the CMM with UChile as sponsoring institution, in association with Universidad Católica del Norte (UCN), Universidad Técnica Federico Santa María (UTFSM), Pontificia Universidad Católica de Chile (PUC), Universidad de Santiago (USACH), Universidad de Talca (UTalca), Universidad de la Frontera (UFRO), and REUNA.

Tuesday, October 7, 2014

The First Latin American Joint Conference on HPC

If you ever wondered where you could meet fellow researchers working on High Performance Computing in Latin America, then the Latin America High Performance Computing Conference (CARLA) is the place you were searching for.

CARLA is a joint effort that combines two earlier Latin American conferences, namely the Latin America Symposium on High Performance Computing (HPCLaTam) and the Latin American Conference on High-Performance Computing (CLCAR), each of which would have reached its seventh edition this year. This union presents a promising opportunity for the development of a strong and active HPC community.

The first edition of the joint conference will take place on October 20-22, 2014, in Valparaiso, Chile. This year's sponsors are Intel and NVIDIA.

If you are interested in the keynotes, speakers from Argentina, Chile, Germany, Mexico, Spain, and Uruguay have already been confirmed. And if your focus is on improving the visibility of your research, you should know that selected papers will be invited for submission to Springer's Cluster Computing journal this year.

For more information about CARLA, check its webpage on http://carla2014.hpclatam.org/.

Thursday, October 2, 2014

LARTop50: The Fastest Supercomputers in Latin America

According to LARTop50, the fastest supercomputer in Latin America is Miztli, installed at the Universidad Nacional Autónoma de México (UNAM). It has a maximum performance of roughly 80 TeraFLOPS. Miztli comprises 5,280 cores and features an InfiniBand interconnect. The remaining top 5 spots on the list are occupied by systems from countries such as Brazil, Chile, and Argentina.

The LARTop50 project collects performance information and ranks the fastest supercomputers in Latin America. Similar to its worldwide counterpart, the Top500 list, it presents the main features of each machine: name, site, country, vendor, system type, processor type, node and core counts, maximum and peak performance, power information, network type, and more.

To rank the different systems, LARTop50 uses the same benchmark as the Top500 list: LINPACK. LINPACK is a set of linear-algebra routines common to many numerical methods, originally written in the 1970s. A parallel implementation of LINPACK for distributed-memory systems, called High Performance Linpack (HPL), is the one used to rank supercomputers; it solves a dense system of random linear equations. Despite some criticism, LINPACK and HPL continue to be the standard for comparing performance across a wide range of systems.
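
HPL itself is a distributed C/MPI code, but the measurement at its core is easy to mimic on a single node. Here is a toy sketch (the problem size n is an arbitrary choice for illustration; real HPL runs use matrices tuned to fill the machine's memory):

```python
import time
import numpy as np

n = 2000
A = np.random.rand(n, n)       # random dense system, as in HPL
b = np.random.rand(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)      # LU factorization plus triangular solves
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3     # dominant cost of the LU factorization
print(f"residual norm: {np.linalg.norm(A @ x - b):.2e}")
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS for the {n}x{n} dense solve")
```

The ratio of the known floating-point operation count to the measured wall-clock time is the FLOPS figure that the rankings report.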

The LARTop50 project was originally conceived in 2011 at the Universidad Nacional de San Luis, Argentina. It is now composed of members from multiple Latin American organizations. The project aims at collecting performance information on HPC systems, disseminating that information within industrial, academic, and governmental communities, promoting training on HPC, and organizing events to develop HPC in general.

Initiatives such as LARTop50 are very important to provide access to information that is otherwise nearly impossible to gather. HPC systems in Latin America lag behind the fastest supercomputers in the world, so it is hard for a Latin American HPC system to make it onto the Top500 list. LARTop50 brings visibility to Latin American supercomputers. In addition, LARTop50 provides an incentive for supercomputing facilities to adopt a standard mechanism, LINPACK in this case, to tune and measure the performance of their systems.

The current results on the list show only 13 systems. This number is expected to grow as the HPC community in the region gets more integrated and the use of HPC resources becomes mainstream. Also, as the project matures, more statistics will be available and stronger projections can be drawn from the data.

For more information about LARTop50, visit its webpage http://lartop50.org.