Wednesday, December 9, 2015

The 27th International Symposium on Computer Architecture and High Performance Computing

The latest edition of the International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD) was held in Florianópolis, Brazil, on 18-21 October 2015. It was an excellent event to get exposure to a wide range of interesting research topics, to get up to speed with the latest technologies from major HPC vendors, to network, and to enjoy fantastic Brazilian food (let's not forget the pão de queijo).

SBAC-PAD welcomed over 200 participants, who enjoyed the main event, its two tutorials, three keynote lectures, seven sponsor talks, and seven satellite events (including international workshops such as the 4th Workshop on Parallel Programming Models and the 6th Workshop on Applications for Multi-Core Architectures, and Brazilian events such as the 16th Symposium on High Performance Computational Systems). In total, 26 papers were presented in the main event, while another 82 presentations (including posters) took place in the satellite events.

The keynote speakers this time were Prof. Geoffrey Fox (Indiana University), who talked largely about the Big Data environment, from applications to systems; Prof. Satoshi Matsuoka (Tokyo Tech), who gave a perspective on the convergence of Big Data and HPC in terms of the system architecture features needed for both fields; and Prof. Onur Mutlu (Carnegie Mellon University), who presented new perspectives on memory system design for data-intensive computing.

This SBAC-PAD edition had six parallel events. One of them was WSCAD (Symposium on High Performance Computational Systems, from its Portuguese acronym). This was the 16th edition of the symposium and, even though it is a Brazilian event held mostly in Portuguese, 7 of its 22 papers were written in English. The authors of the best papers will be invited to submit extended versions of their work to the Concurrency and Computation: Practice and Experience journal.

For the last few years, SBAC-PAD has also been held in countries other than Brazil, in an effort to make the conference more international and attract more people. The next SBAC-PAD will be held in Los Angeles. This is definitely a good Latin American conference, one with the potential to become an important reference not only locally but globally.



Thursday, November 5, 2015

Google selects 12 research teams in Latin America to award USD$1M in research grants

As published in our previous post, Google is giving over USD$1M in grants to Latin American researchers developing cutting-edge technology in big data and high performance computing. A total of 12 projects were selected (less than 5% of the submitted projects): 8 from Brazil, 2 from Mexico, 1 from Colombia, and 1 from Chile. Winners will receive funds for a year, with the option to renew their grants for up to two years for Master's students and three years for PhD students. PhD students will receive USD$1,200/month and their supervisors USD$750/month; Master's students will receive USD$750/month and their supervisors USD$675/month.

Here is the list of winners and their projects:

Altigran Soares da Silva, Universidade Federal do Amazonas (Brazil)
Project: “An Active Learning Approach to Match Networked Schemas”
Using classifiers and active learning, this project develops and evaluates a method to enable the integration of networked schemas by establishing which pairs of schema elements have the same semantics.

Anna Helena Reali Costa, Universidade de São Paulo (Brazil)
Project: “Improving Deep Reinforcement Learning through Knowledge Transfer”
This project focuses on improving the performance of Deep Reinforcement Learning (DRL) agents with the use of abstractions, generalizations and knowledge transfer (Transfer Learning - TL) in the area of Machine Learning. The ultimate goal is to present a new DRL algorithm that can learn a variety of tasks using knowledge acquired with TL.  

Carlos Gershenson, Universidad Nacional Autónoma de México (Mexico)
Project: “Urban Coordination of Autonomous Vehicles”
The goal is to design and test coordination algorithms for autonomous vehicles at intersections to maximize flow and safety. An open source simulator will be deployed and made available to the public.  

Catalina Elizabeth Stern Forgach, Universidad Nacional Autónoma de México (Mexico)
Project: “Interconnected Dual Biosensor for Type II Diabetes Mellitus”
This project will develop a biosensor that measures glucose and insulin levels simultaneously in real time, in order to diagnose and monitor type 2 diabetes mellitus even in stages where there are no obvious symptoms. The data will be stored online via a universally accessible app, useful for building a database for further analysis.

Diego de Freitas Aranha, Universidade Estadual de Campinas (Brazil)
Project: “Machine learning over ciphertexts using homomorphic encryption”
This project develops and implements homomorphic versions of two algorithms widely used in machine learning, Principal Component Analysis (PCA) and K-Nearest Neighbors (K-NN), so that they can be evaluated over encrypted data using somewhat homomorphic encryption schemes.

Diego Raphael Amancio, Universidade de São Paulo (Brazil)
Project: “Word sense disambiguation via topological and dynamical analysis of complex networks”
The problem of interest is word sense disambiguation (WSD), i.e., how to solve ambiguities in texts. For that, the authors use the framework of complex networks to combine structural and semantic contextual information.

Éric Tanter, Universidad de Chile (Chile)
Project: “Gradual Security Typing for the Web”
The project will contribute to security type system design, with formal models and proofs, as well as implement extensions to Dart and develop secure versions of existing Web applications written in Dart. 

Gustavo Enrique de Almeida Prado Alves Batista, Universidade de São Paulo (Brazil)
Project: “Controlling Dengue Fever Mosquitoes using Intelligent Sensors and Traps”
This project proposes the construction of an inexpensive device that will empower the population with knowledge of Aedes aegypti (urban mosquito) densities to motivate local mosquito control activities.

Jussara Marques de Almeida, Universidade Federal de Minas Gerais (Brazil)
Project: “Beyond Relevance: Addressing Novelty, Diversity and Personalization in Tag Recommendation”
With tagging being one of the best ways of associating metadata with media objects on the Web, the main goal is to develop new tag recommendation strategies that tackle all four aspects of the problem: relevance, diversity, novelty and personalization. 

Marcos André Gonçalves, Universidade Federal de Minas Gerais (Brazil)
Project: “Boosting Out-of-Bag Estimators for Learning to Rank”
This project aims at solving the L2R (Learning to Rank) problem by developing a novel RF-based algorithm that smoothly combines properties of bagging and boosting procedures.

Pablo Arbelaez, Universidad de Los Andes (Colombia)
Project: “Learning Dynamic Action Units for Three-dimensional Facial Expression Recognition”
This project will develop convolutional neural network architectures for fine-grained localization, RGB-D scene understanding and video analysis to improve human-computer interaction. 

Sandra Maria Aluísio, Universidade de São Paulo (Brazil)
Project: “ANAA-Dementia: Automated neuropsychological assessments for Brazilian citizens during their lifetime”
The objective of this project is to create an automated personal neuropsychological test to detect cognitive issues, accessible via web or from mobile devices. 

Sunday, October 4, 2015

Self-sustained Supercomputing Centers in Latin America

Some supercomputing centers in the region start with initial seed funds coming from the government, academic institutions, or donations. However, they receive a clear mandate: be self-sufficient. That is the case of Mexico's National Supercomputing Center (CNS) at the Instituto Potosiano de Investigación Científica y Tecnológica (Ipicyt).

The CNS houses a powerful supercomputer named Thubat-Kaal ("the first and fastest" in the Tenek language). It is an IBM machine that runs LINPACK at 107 TFlops. It comprises 5,640 cores: there are 140 nodes with two 8-core Intel Xeon processors each, and 25 nodes equipped with an additional pair of Intel Xeon Phi 5110 coprocessors (the total appears to include the 60-core Xeon Phi chips, i.e., 165 × 16 = 2,640 Xeon cores plus 25 × 2 × 60 = 3,000 Xeon Phi cores). It has a total storage capacity of 1.5 petabytes and uses an InfiniBand interconnect. Thubat-Kaal appeared in the Top500 list in June 2013. This machine is used by the CNS to provide a range of services for companies in energy, finance, life sciences, digital media, engineering, and design. In particular, CNS assists companies in product modeling, data analytics, and general simulation.

Along with the supercomputing services, the CNS provides several other services to fulfill its mission of self-sustainability. More specifically, CNS also develops applications, provides IT consulting, offers a data center, and assists in the installation of networks and telecommunications.

Although the CNS does not have a research branch itself, it provides allocation grants on its supercomputer for academic use. Therefore, scientists and engineers from Mexico can access the full power of Thubat-Kaal and exploit this useful resource to accelerate their discoveries.


Wednesday, September 9, 2015

First Petascale supercomputer in Latin America now installed at LNCC

The Brazilian National Scientific Computing Laboratory (LNCC) has received an upgrade to its processing power with the installation of a petascale supercomputer from Bull (Atos Technologies). Named Santos Dumont, in reference to the Brazilian aviation pioneer, the supercomputer has a capacity of 1.1 PFLOPS, putting it above all other HPC platforms in Latin America.

The Santos Dumont supercomputer (source: LNCC)
This processing power comes from three different kinds of nodes in the supercomputer:
  • 32 bullx chassis forming a total of 576 Thin Compute Nodes (Intel Ivy Bridge processors only)
  • 32 bullx chassis forming a total of 288 Hybrid Compute Nodes (including 396 Nvidia K40 GPUs and 108 Intel Xeon Phi 7110X)
  • 1 bullx chassis forming one Fat Compute Node (16 Intel Ivy Bridge processors and 6 TB of shared memory).
The computing nodes will have access to 1.5 PB of storage using Lustre, with a bandwidth of over 30 GB/s for both reads and writes.

Current tests with Linpack put the aggregate performance of Santos Dumont at around 0.8 PFLOPS, which would leave it around the 85th position in the Top500.

With a purchase process started in 2009, the Brazilian government invested R$60 million (~16 million USD) in the supercomputer and its infrastructure. Its presence in the city of Petrópolis (RJ) is expected to foster a supercomputing reference center and to attract new companies to the region (a Bull research center is already confirmed).

The presence of a petascale machine in Latin America is of great importance. Nevertheless, it also shows how far behind the region is compared to others. For instance, North America reached this milestone in 2008, while Europe and Asia reached it in 2009.

The supercomputer is planned to be integrated into the Brazilian National High Performance Computing System (SINAPAD) in the near future. This upgrade corresponds to a more than six-fold increase in the computing power currently available to researchers. As its test period comes to an end, we expect to see new research developments in areas such as energy, civil engineering, nanotechnology, meteorology, oceanography, and life sciences soon.

Monday, August 31, 2015

Google to award over USD$1M to Latin American researchers

Google is well known for supporting the development of science and technology around the globe. This time, the Internet giant is looking for world-class scientists in the Latin American region to support and to work with on research topics of mutual interest. In particular, researchers in the domain of computational science are likely to have common interests with Google and its in-house research programs. These are the research areas of interest to Google (every proposal should be classified into one of these areas):


  • Geo/maps
  • Human-computer interaction
  • Information retrieval, extraction and organization
  • Internet of things
  • Machine learning and data mining 
  • Mobile 
  • Natural language processing 
  • Physical interfaces and immersive experiences 
  • Privacy
  • Other topics related to web research  


Google has offered over one million dollars to support scientists in the region. This represents a significant cash injection into the Latin American research community. Each project should have one student, either a PhD or a Master's student, and one faculty member. For PhD students, the student will receive an allowance of USD$1200/month and the faculty member USD$750/month for one year. For Master's students, the student will receive USD$750/month and the faculty member USD$675/month.

Thousands of students and faculty members sent their applications last month. Precise guidelines on how to build a good application were given: Google senior researchers wrote a guide on how to write a strong proposal, which was a great place for applicants to start. Notifications will be sent directly to the fortunate winners in the coming weeks. We should keep an eye on those teams to see what great research results come out of this in the coming months.


Monday, August 17, 2015

HPC Research in Applications

Roughly speaking, there are three big areas of research in HPC: hardware, software, and applications. Any well-versed HPC researcher or practitioner must know a fair amount about each one, but usually focuses on a single one. I personally believe that Latin American HPC research should focus on applications, because this area has the potential to impact the most people.

Two areas in which I have been working in recent years are good examples of applications that are important and for which LA already has a lot of know-how: weather/climate and natural resources. In addition, those areas would improve in quality and scale with the help of new methods and techniques from HPC.

In weather and climate modeling, meteorologists usually simplify the physics of the processes involved so that their simulations can run in a feasible amount of time. However, these simplifications result in a loss of accuracy and precision. Human resources are needed to redesign the applications for large-scale machines so that more realistic models can run. Those models can be used to improve people's lives, for example to plan electricity production.

In natural resources, there is much research, for example, in oil and gas. In LA, there are many companies that perform large simulations to better extract oil and gas without damaging the environment. For example, in Bucaramanga, Colombia, Ecopetrol has a research center: http://www.ecopetrol.com.co/especiales/Portafolio%20ICP/portafolio/centro/index.htm . It has 20 labs with over 100 researchers. They improve safety by using simulations and would definitely benefit from HPC to run larger and more realistic simulations.

Another possible application in natural resources is fracking. Simulating fractures is difficult, and using realistic models to perform those simulations is expensive. New methods to accelerate these techniques are important for the industry to allow safer operations. I have been particularly interested in the XFEM (extended finite element) method, since it does not require an explicit representation of fractures, which would be difficult to maintain for complex geometries.

Finally, I believe that much can be done in basic software and hardware in LA, but applications should probably be central in HPC research here, primarily because they directly produce results used by society. Some sites in LA where software and data about weather/climate models can be obtained are:

Monday, July 27, 2015

Recruiting Research Assistants through a Graduate-level HPC Class in Latin America

During the introductory seminars to graduate school, a professor at the University of Illinois at Urbana-Champaign used to equate the process of finding an advisor (or finding students, for that matter) to that of finding a spouse. A good student-advisor relationship was like a good marriage. It is not easy to really find whom to work with in your research; similarly, it is not easy to find whom to share your life with. Although I found the analogy quite amusing, I am not sure I would go to that extent when it comes to defining the process of finding students to work in your research group. However, something is clear: if you want to build a productive research team, you had better be on the lookout for good students. How do you recruit them?

For those young assistant professors in Latin America starting a new research team, here is a little piece of advice. A graduate course is a very efficient mechanism to find good matches for your research group. Here are some of the reasons why:

  • The course extends for a relatively long period (usually one semester), giving you time to get to know your students, both at the technical and personal level.
  • The course exposes students to a fairly rich body of knowledge about a particular topic. Therefore, by the end of the semester the student will have a better idea of whether the topic is interesting or not.
  • The graduate class is usually more flexible in its format. That opens up an opportunity to explore particular research problems that may engage students and bring them up to speed on your research agenda.
  • The class usually has more students than there are available spots in your research group, so it gives you the chance to choose good candidates from a bigger sample.
Note that the course works both ways: professors identify good candidates for their groups, and students find attractive research groups. So the course is a meeting point to establish long-lasting relationships and spark new and interesting collaborations. Make sure you carefully design your graduate class. Here are some recommendations:
  • Provide a multiple-session introduction to the topic, where you cover most of the fundamentals of the discipline.
  • Select intriguing papers (both recent and classic) to make up the reading list for the class. 
  • Provide a small collection of project descriptions that students can pick from for their term project. That way you will have them working in your research direction and can determine whether they are a good match for your team. Make sure all projects generate a scientific write-up with the findings of the project.
  • Schedule presentations from the students, so you can refine their presentation skills. As one of my colleagues says about the contribution of a paper: "half is what you do, the other half is what you show".

Given that many universities in Latin America have a graduate program in Computer Science, you have a good chance of offering a graduate course in HPC and starting to build your research group.

Wednesday, July 15, 2015

Discussion on HPC collaboration between EU and LA tomorrow (July 16)!

As we covered last month, the Workshop on HPC collaboration between Europe and Latin America will take place tomorrow (July 16, 2015) at the International Supercomputing Conference (ISC) in Frankfurt, Germany. The workshop will happen in the Candela Room of the Frankfurt Marriott Hotel (5th floor).

If you happen to be at the conference or are just curious, you can check the updated schedule for the workshop here.

Thursday, July 9, 2015

Help us map supercomputing in Latin America

Since we started the endeavor that is HPCLA, we have gathered a lot of news and information about Latin American supercomputers, workshops and conferences in High Performance Computing, and international projects related to the region, among others. Although we are still at the start of our mission of gathering and reporting on this large subject, we have already found issues in the way this information is organized online. In this context, we would like to ask for your help to make things better!

We have started to map the institutions in Latin America that are related to High Performance Computing, such as laboratories and universities. For all institutions mapped so far, we have also added a link to the posts that discuss topics related to them. We invite you to help make this map more complete, with new information, photos and links to institutions already listed or missing. You can send your new information to us through hpclatinamerica at gmail. You can expect more of this soon!

Monday, July 6, 2015

16th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid2016) to be held in Cartagena, Colombia

After Shenzhen, China, the International Symposium on Cluster, Cloud, and Grid Computing (CCGrid) is moving to Cartagena, Colombia. Indeed, CCGrid2016 will be held on the Caribbean coast of Colombia. We contacted Professor Carlos Jaime Barrios, general co-chair of CCGrid2016, who coordinates the high performance computing center at the Universidad Industrial de Santander, in Bucaramanga, Colombia. Here are the answers he gave us about the upcoming CCGrid2016 conference in Colombia.

Q: CCGrid has been evolving over the last few years. What are the most relevant topics for next year, and how are they related to the Latin American region?

A: The spirit of CCGRID is to bring together people from all over the world around cluster, cloud, and grid computing and distributed systems, to show advances and collaborate in interesting projects that can better the quality of life (the last conference was in China). If you look at the history of CCGRID, it is perhaps the only conference with such a wide range of impact, holding CCGRIDs in countries on all continents (not yet Africa, but soon). For Latin America, participation in HPC and large-scale computing projects has been growing for some years. The Brazilian CCGRID in 2007 in Rio de Janeiro and next year's CCGRID 2016 in Cartagena de Indias, Colombia, stand as the Latin American hosts. CCGRID 2016 is an interesting space for us, as the Latin American HPC community, to join the worldwide HPC community more closely, to meet with our peers from different countries, and to welcome guests and participants in our beautiful cities.

Q: It has been over 8 years since the last time the CCGrid conference was held in South America (Brazil, 2007). What does this opportunity mean for the region?

A: CCGRID 2016 in Cartagena de Indias is a recognition of the importance of the region in the worldwide HPC and large-scale computing community. For the region, it is a good opportunity to meet specialists, to show developments, and to join more projects directly than before. Other conferences, such as ISCA, have been held in Latin America, and proposals to organize future workshops and conferences around IPDPS, eScience, HPDC, and others are in preparation for different Latin American countries (not only Colombia).

Q: Should we expect a strong presence of the Latin American community (students, engineers, researchers) at CCGrid2016?

A: Yes. Beyond the regular conference and traditional workshops, we will propose different satellite activities: tutorials, a Latin America Research and Innovation Workshop in HPC (addressed specifically to the Latin American community), the SCALE contest, a large-scale visualization showcase, and more. On the other hand, Cartagena de Indias is a beautiful city, with the best of the Spanish colonial era, the best of modern Latin American development, and spectacular nature. It is not complicated to get to Cartagena, with plenty of direct flight options to different parts of the world. So we expect strong participation not only from the Latin American community, but also from the whole world.

Sunday, June 21, 2015

VMware vForum in Chile

The VMware vForum is one of the best opportunities to interact with experts in virtualization, cloud computing, and mobility. The event has been running for 8 years already, and this year's edition took place in Santiago de Chile on June 9th.

The keynote "One Cloud, Any Application, Any Device" was given by Bob Shultz, Chief Strategy Lead and VP of the End User Computing group of VMware. Several breakout sessions took place in the morning and the afternoon, with great talks from leaders in the distributed computing industry, such as, Cisco, HP, Intel, EMC among others. In addition, several demos were presented at the "Engineer Salon". Finally, the meeting was closed with a Cocktail and prizes in the exhibition area.

During the meeting, recent novelties were presented, and the IT experts discussed cutting-edge technology and the opportunities that can be leveraged in the coming years thanks to it. If you could not attend the meeting, we have good news for you: all the slides have been collected and put online, open to the public. So if you missed the VMware vForum in Chile on this occasion, we invite you to check out the talks online and learn about the latest advances in cloud and mobility computing.




Wednesday, June 10, 2015

HPC is not only about computer science

HPC has been a hot topic for everybody since the arrival of multicore processors and accelerators. Much attention has been given to new technologies related to these architectures and their supporting software. Not only in Latin American universities but in many places, the focus has been on teaching the technology only. However, there is a very important aspect that universities here should pay more attention to, and that is the applications and the techniques used to develop them, in what is typically called Computational Science and Engineering.

High Performance Computing is a means, not an end in itself; therefore, our professionals will usually have to design and develop software to be used on HPC machines. These tasks require not only knowledge about the system they use but also about what they are actually developing. Industry, which is where most HPC students will eventually work, is interested in the application and in how to use HPC systems to solve real problems.

Some people may claim that computer scientists do not need to know the gory details of any scientific or data application. I agree with them. Surely, HPC professionals typically work in teams that include application experts who know the problem and the methods used to solve it deeply. However, HPC professionals need to know the basic techniques, methods, and language used by those experts, so that they can be more productive.

The list of topics one should pay attention to may be large and intimidating. But there are some basic subjects every student interested in HPC should be encouraged to study. My list is:

  - Linear Algebra and its applications;
  - Probability / Statistics (and also its applications);
  - Numerical methods;
  - Optimization;
  - Data Analytics;
  - Visualization.

This list is not meant to be complete, but it covers a lot of the background many HPC practitioners will need. But how can one learn about these topics? Computer science departments may not offer courses related to them. However, the diligent student can still find on the Internet all she/he needs to learn about Computational Science and Engineering. Here are some examples:


There are definitely more available, but this is a good sample of what can be used. Finally, I believe that our professionals will have more impact if they manage to use HPC to solve real problems, and that starts with a better understanding of the problems and the solution methods that are available.

Tuesday, June 2, 2015

Workshop on HPC collaboration between Europe and Latin America

International collaborations play a critical role in our path to achieving exascale computing. This is true for all regions, including North America, Europe, Asia, and Latin America as well. All of these regions have established multiple collaborations to develop software and investigate new research directions in order to push the boundaries of extreme-scale computing. Most of those collaborations have achieved goals that could not be reached by a single research group alone.

Europe is one of the regions that has developed the most important collaborations with research institutions in Latin America. For many years, Europe and Latin America have been developing joint projects in HPC and data-intensive applications. Projects such as EELA, RISC, and OPENBIO, among others, have had a clear impact on the academic community and industry. In addition, the fruits of those collaborations have helped develop the infrastructure and computing facilities in Latin America, sparking a flame of interest among young undergraduate and graduate students. It is important to continue those research exchanges in order to build on the achievements they have produced in recent years.

With this objective in mind, researchers from Europe and Latin America will be hosting the Workshop on HPC collaboration between Europe and Latin America at the International Supercomputing Conference (ISC). The workshop will take place on Thursday, July 16, 2015, at the Marriott Hotel Frankfurt in Germany. The talks are divided into three main topics: Research and Academic Collaboration, Training Collaboration, and Industrial Collaboration.

The schedule of the workshop is as follows:

  • 9:00 - 9:10 Welcome (Carlos J. Barrios, SCALAC-UIS)
  • 9:10 - 9:30 EU/LatAm Collaboration Networks (Salma Jaliffe, SCALAC-CUDI)
  • 9:30 - 10:00 Sustainable ultrascale computing: a challenge for research (Jesus Carretero, NESUS Network)
  • 10:00 - 10:30 EU-LA collaboration on HPC and HTC through Frameworks Programs (Rafael Mayo, CIEMAT)
  • 10:30 - 11:00 RISC: towards a stable EU-LATAM scientific cooperation on HPC (Ulises Cortes, BSC-CNS)
  • 11:00 - 11:30 Coffee Break
  • 11:30 - 11:45 EU/LatAm Academic and Research Collaboration Proposals (Alvaro de la Ossa, SCALAC-UCR)
  • 11:45 - 12:15 The joint SISSA/ICTP master in HPC: an opportunity to create a new class of HPC professionals between Europe and Latin America (Stefano Cozzini, SISSA-Democritos-ICTP)
  • 12:15 - 12:45 High Performance Computing for Geophysics Applications Overview and results of HPC GA (FP7 IRSES 2011-2014) (Jean-François Mehaut, UJF)
  • 12:45 - 13:15 HPC on societal challenges to enhance EU-LAC cooperation (Yolanda Ursa, EU-INMARK)
  • 13:15 - 14:00 Lunch
  • 14:00 - 14:15 EU/LatAM Research, Development and Innovation Partnership (Renato Miceli, FIEB-SENAMI)
  • 14:15 - 14:45 eXtended Discrete Element Method: overview, applications and perspectives (Xavier Besseron, University of Luxembourg) 
  • 14:45 - 15:15 High performance computing and data management driven by highly demanding applications (Philippe Navaux, SCALAC-UFRGS) 
  • 15:15 - 15:30 Bringing our Center for Excellence in Parallel Programming to Latin America (Rafael Escovar, Bull Atos Group)
  • 15:20 - 15:30 Experiences and capacities of Colombian Universities for HPC Innovation (Harold Castro, UNIANDES – SCAD_Colombia)
  • 16:00 - 16:30 Coffee Break
  • 16:30 - 17:00 On some projects being developed at ABACUS the Center for Applied Mathematics and High Performance Computing of the Center for Research and Advanced Studies of IPN (Cinvestav) in Mexico: collaboration a key for success (Isidoro Gitler, SCALAC-ABACUS-CINVESTAV)
  • 17:00 - 17:45 Final Panel: Next Steps for the EU/LatAm Collaboration: 2015 – 2020 (Conducted by Philippe Navaux, SCALAC-UFRGS)
  • 17:45 - 18:00 Close and Conclusions

If you plan to attend ISC2015, don't miss this opportunity to participate in the workshop and be part of the collaboration between Europe and Latin America. Remember that you need to register for the conference to participate in the workshop.

Update: Schedule updated.

Tuesday, May 26, 2015

International HPC Conferences in Brazil this Year: CARLA and SBAC-PAD

If you have been searching for places to publish the latest results of your research in HPC, or to improve your networking with fellow Latin American researchers, 2015 provides two opportunities in Brazilian cities: CARLA and SBAC-PAD.

CARLA

The Latin America High Performance Computing Conference (CARLA) will have its second edition this year. It will take place on August 26-28 in the city of Petrópolis, Rio de Janeiro. As previously covered in HPCLA, CARLA is a joint effort that combines two earlier Latin American conferences, HPCLaTam and CLCAR.

Besides its technical program, CARLA has three confirmed international keynotes: Luiz DeRose, from Cray Inc.; Harold Castro, from the University of Los Andes, Colombia; and Michael Wilde, from Argonne National Laboratory. Additionally, CARLA will host the HPC Advisory Council Brazil Conference (HPCAC) and the Regional School in High Performance Computing of the state of Rio de Janeiro (ERAD/RJ).

CARLA is still taking paper submissions until June 14. Accepted full papers will be published in Springer's Communications in Computer and Information Science (CCIS) series.

SBAC-PAD

The 27th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD) will be hosted this year in the city of Florianópolis, Santa Catarina, and will take place on October 18-21. As SBAC-PAD is hosted in Brazil only every two years, this is the ideal opportunity to participate.

SBAC-PAD has three confirmed international keynotes so far: Geoffrey Fox, from Indiana University; Satoshi Matsuoka, from Tokyo Institute of Technology; and Onur Mutlu, from Carnegie Mellon University. SBAC-PAD will also host two international workshops - namely the 4th Workshop on Parallel Programming Models (MPP), and the 6th Workshop on Applications for Multi-Core Architectures (WAMCA) - and the Brazilian Symposium in High Performance Computing Systems (WSCAD).

SBAC-PAD's abstract submission deadline is June 7, with papers due June 15. Accepted papers will be published in the conference proceedings in the IEEE Xplore Digital Library.

Update 1: CARLA's deadline was postponed to June 14 (originally May 31).
Update 2: SBAC-PAD's deadlines were postponed to June 7 and June 15 (originally June 1st and June 8).

Tuesday, May 19, 2015

Designing an Undergraduate Class on High Performance Computing

Since the advent of multicore processors in the mid-2000s, parallel architectures have become mainstream. These days, it is extremely difficult to find a single-core processor, even in cell phones, tablets, and laptops. Therefore, there is a growing interest in training students to program for parallel architectures. In particular, a class on high-performance computing (HPC) is gaining popularity at universities. Part of this interest comes from the fact that HPC is almost synonymous with parallel computing.

Here are some of the challenges in designing an undergraduate-level class on HPC in Latin America:
  • List of Topics. This is probably the first big decision that has to be made. Which topics are in, and which are out? The answer depends on many factors, ranging from the expertise of the instructor to the availability of resources and the background of the students. As a general guide, a class on HPC should target junior and senior students (3rd and 4th year into the program) in computer science or computer engineering. That is a safe move to guarantee that students have enough background in computer science concepts (algorithms, systems, programming). The recommendation is to include at least these topics: architecture, interconnects, performance models, parallel algorithms, shared-memory programming, distributed-memory programming, accelerator programming, and some sort of parallel programming patterns.
  • Programming Languages. The point above listed three programming platforms, and there should be at least one programming language for each. The recommendation is to use open-source compilers. A safe bet is to go with C/C++, a language most students are familiar with. Shared-memory systems can be programmed using the OpenMP standard; most C/C++ compilers provide an implementation of OpenMP, and the GNU compiler, gcc, offers a good one. Distributed-memory systems can be programmed using MPI, the de-facto standard for this type of system; there are many implementations, but MPICH and OpenMPI are two open-source, well-maintained ones. Finally, accelerators can be programmed using CUDA, the NVIDIA mechanism for programming their GPUs. The nvcc compiler is free and extremely powerful in compiling CUDA code. An alternative to CUDA is OpenCL, a standard that targets general accelerator architectures. (A minimal shared-memory example is sketched right after this list.)
  • High Performance Computing Resources. This is the biggest challenge. Sometimes, supercomputers are available at Latin American institutions. In that case, the system administrators may be willing to provide the instructor with an education allocation on the system. If no supercomputers are readily available, it is possible to get an allocation on international supercomputers through agreements or access programs (check this post or this other post for more information). Finally, if none of the above is an option, then a simple desktop will provide a minimal testbed for at least shared-memory and distributed-memory programming. If the desktop is equipped with a GPU, then the puzzle is complete. Some laboratories may actually feature multiple desktop computers with the aforementioned configuration.
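
To make the shared-memory option concrete, here is a minimal sketch of the kind of first example that works well in such a class: estimating pi with a parallel loop in C and OpenMP. The file name, problem size, and build line are illustrative choices, not taken from any particular course.

    /* pi_omp.c -- minimal OpenMP example for a first HPC class.
     * Illustrative build line: gcc -fopenmp -O2 pi_omp.c -o pi_omp */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const long n = 100000000;       /* number of rectangles (midpoint rule) */
        const double h = 1.0 / n;
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; OpenMP combines them. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            double x = (i + 0.5) * h;
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi ~= %.12f (computed with up to %d threads)\n",
               h * sum, omp_get_max_threads());
        return 0;
    }

The same numerical exercise translates naturally to MPI (each rank sums a strip of the interval and the partial results are combined with MPI_Reduce) and to CUDA (each thread handles a strip, followed by a reduction), which makes it a convenient vehicle for comparing the three programming models.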
Finally, one piece of advice for keeping the class dynamic is to start with programming examples as soon as possible. A language extension that is simple (yet powerful) for introducing data and loop parallelism is Cilk Plus. Even compilers such as gcc and Clang now recognize Cilk constructs.
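
For reference, here is a rough sketch of what Cilk-style loop parallelism looks like, assuming a compiler with Cilk Plus support (for instance, a gcc version that accepts -fcilkplus); the array sizes and values are arbitrary choices for the illustration.

    /* saxpy_cilk.c -- illustrative Cilk Plus loop parallelism (y = a*x + y).
     * Illustrative build line: gcc -fcilkplus -O2 saxpy_cilk.c -o saxpy_cilk */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cilk/cilk.h>

    int main(void)
    {
        const int n = 1 << 20;
        const float a = 2.0f;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);

        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* cilk_for lets the runtime distribute iterations among worker threads. */
        cilk_for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %.1f\n", y[0]);   /* expected: 4.0 */
        free(x);
        free(y);
        return 0;
    }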






Monday, April 27, 2015

Supercomputer in Chile helps scientists to discover 61 supernovas in only 6 nights

The Leftraru supercomputer, the second most powerful in Latin America, helped scientists from the Center for Mathematical Modeling (CMM) of the University of Chile detect 61 supernova explosions within just a few days of their appearance in the sky. For comparison, such explosions were previously spotted only about once every hundred years. To achieve this spectacular result, the team led by Francisco Förster started by monitoring the sky from the Cerro Tololo Observatory with the Dark Energy Camera (DECam), the second best in the world for this kind of measurement. Almost a hundred thousand images were sent over about 500 km to the Leftraru supercomputer at CMM. The supercomputer then analyzed the images (over a billion pixels) with an algorithm that has been developed and optimized since 2013 to recognize supernovas, asteroids, and other objects. The scientists have been "teaching" the computer how to recognize supernovas, using machine learning and supervised learning.

The parallel algorithm was initially created in 2013. A first experiment was carried out in 2014, when the team discovered 12 supernovas. The program was optimized during 2014, and this recent experiment in 2015 showed that the new version can detect over five times more supernovas than the previous one. Once a supernova is detected, the supercomputer raises an alert and a notification is sent to other observatories around the world, giving the coordinates the telescopes should point to in order to observe the new cosmic event.

The results were so impressive that Förster and his team were invited to present their work at the DECam community science workshop in Arizona, USA. This success story not only shows the impact of HPC on our society but also that Latin America can be an important player in the quest for scientific discovery.

Wednesday, April 22, 2015

The 44th edition of the Argentinian computer science seminars

The Argentine Society for Computer Science (SADIO) is organizing the 44th edition of the Argentinian computer science seminars (JAIIO). They will take place from August 31 to September 4th at the Rosario University Campus. International researchers, professors, and undergraduate and graduate students are expected to gather for this event. In the last edition, about 8000 participants presented 195 works and posters, which, added to the other internal workshops and panels, made for almost 290 presentations and activities. Activities such as the Argentinian seminar on Big Data will be of great interest to students and researchers specialized in large-scale computing and data-intensive systems. A complete list of the activities can be found at the JAIIO 2015 website. Last year, the event featured talks from Intel and Facebook, among other big players in the data-intensive computing community.

Many activities are organized, including several student contests. In a previous edition, engineering students Cecilia Balesteri, Araceli Martin, and Cecilia Romitti presented a neural network model that could estimate the risk of student withdrawal in their engineering faculty. They won one of the prizes given at the event, and described the experience as a wonderful opportunity to extend their knowledge and to meet many academic and industrial researchers.

This is a wonderful opportunity for students to meet students from other universities and exchange their experiences and visions of the domain. It is also a good occasion to interact with international researchers for those looking for internship opportunities or planning to continue their studies abroad.

Tuesday, April 7, 2015

Supercomputing Laboratory Starting in Record Time


As we previously covered here, Mexico approved its third national laboratory in May 2014. Now, less than one year after its announcement, the National Supercomputing Laboratory of the Southeast of Mexico (LNS) started its operations in March 2015. LNS is hosted at the Meritorious Autonomous University of Puebla (BUAP) and is expected to help develop high-impact projects in the region.

Following its vision of "being a national reference with international presence in highly specialized computing services, self-sustainable and at the technological vanguard", LNS is expected to support 25 to 30 research projects in its first stage. This will be done by providing training courses and access to its supercomputer, also referred to as LNS.

LNS's supercomputer was acquired from Fujitsu and features a mostly homogeneous infrastructure. It provides around 150 TFlops of computing power, 14 TB of memory, and 200 TB of storage. The resources are split over 210 compute nodes with four different setups:
  • 204 thin nodes with two 12-core Haswell processors and 128 GB of memory;
  • 2 accelerated nodes similar to thin nodes but with two Xeon Phi 7120p coprocessors each;
  • 2 accelerated nodes similar to thin nodes but with two Kepler K40 coprocessors each; and
  • 2 fat nodes with four 15-core Haswell processors and 1 TB of memory.
Besides providing an HPC platform for local researchers, LNS will help in international projects involving institutions like the High Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory and the European Organization for Nuclear Research (CERN). Researchers should be able to register new user accounts soon.

Thursday, April 2, 2015

HPC Resources for the Latin American Integration

"A University without Boundaries" is the motto of the Federal University of Latin American Integration (UNILA, from its Portuguese acronym), a public university created in 2010 that harmonizes the education and research needs of several countries in Latin America in areas related to social studies and natural resources, among others. UNILA's courses are taught in Portuguese and Spanish to accommodate its students and faculty members coming from countries such as Argentina, Brazil, Colombia, Paraguay, and Uruguay. UNILA is strategically located in Foz do Iguaçu, Brazil, a city with borders with Argentina and Paraguay.

In order to enable research in applied sciences, UNILA has operated its High Performance Computing Laboratory (LCAD) since 2012. The laboratory seeks to provide computational support to researchers from any education or research institution in Latin America, not being restricted to UNILA staff only. This is done mainly by providing access to its cluster, named HPC-Lattes.

HPC-Lattes is a Bull cluster with three kinds of nodes that support applications with different requirements. Their kinds and numbers are listed below.
  • Fifteen thin nodes are composed of two 6-core Westmere processors and 24 GB of memory. They are used for general parallel applications.
  • Eight accelerated nodes add two M2050 Fermi GPUs to the usual thin node configuration. They are available for accelerated applications and applications with higher processing requirements.
  • Five fat nodes are composed of four 8-core Nehalem processors and 256 GB of memory. They are used for shared memory applications and applications with higher memory requirements.
In total, HPC-Lattes provides 12.16 TFLOPS distributed over 404 CPU and 7168 GPU cores, 1.8 TB of memory and 36 TB of storage. If you are now considering requesting an account on HPC-Lattes to develop your research, the application form can be found here.

Wednesday, March 25, 2015

The National System of High Performance Computing in Argentina

Policies at the national level are a pivotal element in fostering the development of high performance computing in a country. For instance, it was the National Science Foundation Network (NSFNet), a nation-wide effort in the United States of America, that laid the foundation during the '80s for solid HPC development. The NSFNet project helped in the creation of now highly reputable HPC centers such as the Pittsburgh Supercomputing Center (PSC), the National Center for Supercomputing Applications (NCSA), and the San Diego Supercomputer Center (SDSC).
Argentina has developed a national strategy to increase its HPC arsenal, increase HPC literacy among scientists, and establish fruitful international connections. The program is called the National System of High Performance Computing, or SNCAD by its acronym in Spanish. The program is attached to the Ministry of Science, Technology and Productive Innovation under a major initiative on big instrumentation and databases.
The SNCAD aims at satisfying the growing requests of scientific and academic institutions regarding storage, cloud computing, HPC, visualization, and emerging technologies. Its main goals are:

  • Create policies that maximize the use of equipment funded by public grants.
  • Contribute to the development of funding strategies to improve HPC services.
  • Offer at least a funding program for institutions to acquire new hardware or renovate equipment.
  • Promote training on HPC through nation-wide programs.
  • Foster the integration of national centers with the international community.
  • Create a system to issue certificates on HPC.

So far, the SNCAD has invested more than $2,124,695 in 17 different centers in Argentina. Those centers represent 36 computational systems that are already in production, and they are supporting research in many scientific disciplines: atmospheric science, theoretical chemistry, biological systems, mathematics, astronomy, and more. For more information, please follow this link: http://www.supercalculo.mincyt.gob.ar

Tuesday, March 17, 2015

International Supercomputing Conference in Mexico

The 6th International Supercomputing Conference in Mexico took place in Mexico City last week. From March 9th until March 13th, hundreds of the best researchers in the HPC community gathered at the Fiesta Americana Hotel to discuss some of the most pressing topics in the supercomputing community. The conference featured a well-rounded technical program and poster session, together with workshops where students could learn and experiment with new techniques and tools for large-scale computing. In addition, thanks to the support of the industry partners, the conference could offer free registration to a number of students interested in attending.

The conference featured a large list of well-known researchers from all corners of the world, including the USA, Japan, Germany, Russia, Spain, and Brazil, who presented the latest developments and new insights into challenges related to data-intensive computing, power-aware programming, and heterogeneous computing:
  • Ilkay Altintas - Director for the Workflows for Data Science (WorDS) Center of Excellence, San Diego Supercomputer Center (SDSC), USA 
  • José M. Cela Espín - Director of the Computer Applications in Science and Engineering (CASE) Department, Barcelona Supercomputing Center (BSC), Spain
  • Ian Foster - Director of the Computation Institute, University of Chicago, USA 
  • Jean Luc Gaudiot - Professor of Department of Electrical Engineering and Computer Science University of California – Irvine, USA
  • William Gropp - Director of the Parallel Computing Institute, National Academy of Engineering, University of Illinois Urbana-Champaign, USA
  • Ryutaro Himeno - Director of Advanced Center for Computing and Communication (ACCC) RIKEN, Japan
  • Thomas Lippert - Director of the Institute for Advanced Simulation, Head of Jülich Supercomputing Centre, Germany
  • Philippe O. A. Navaux - Professor Titular, Federal University of Rio Grande do Sul, Brazil
  • Edward Seidel - Director of the National Center for Supercomputing Applications, USA
  • Mateo Valero Cortés - Director of Barcelona Supercomputing Center (BSC), Spain
  • Vladimir Voevodin - Deputy Director of the Research Computing Center at Lomonosov Moscow State University, Russia
  • Nancy Wilkins-Diehr - Associate Director at SDSC and co-director of XSEDE's Extended Collaborative Support program, San Diego Supercomputer Center, USA
The International Supercomputing Conference in Mexico (ISUM) series started in 2010 with a first meeting in Guadalajara and was followed by events in San Luis Potosí, Guanajuato, Colima, and Ensenada in the following years. The list of keynote speakers this year was impressive, and the conference continues to gain strong attendance. This event is of major benefit to the HPC community in Mexico, as it opens new opportunities for research collaborations and dialogue with industrial partners.

Wednesday, March 11, 2015

Profile of national HPC developments in Latin America - Part IV


In this series of posts, we present some of the national developments in high performance computing seen in Latin American countries. Following the discussion of the developments in Argentina and Brazil, Chile and Colombia, and Mexico, today we discuss Peru and Uruguay.

Peru

Although many universities and research institutions exist on Peruvian soil, no integrated national effort in HPC has been developed lately. For instance, the Peruvian Academic Network (RAAP) provides connectivity among national research institutions and to others in Latin America, but lacks computing resources to provide to its own members.

Nevertheless, not all hope is lost, as the San Pablo Catholic University (UCSP) is heading the proposal of a Center of Excellence in "High Performance Computing for the Research, Development and Technological Research for Urban Centers' Problems". This initiative is competing with five other center-of-excellence proposals to be funded by the Peruvian National Council of Science, Technology and Technological Innovation (CONCYTEC). The Center of Excellence in HPC will be developed in cooperation with Brazilian, French, German, and North American universities, and Peruvian companies. Besides its research focus, the center also aims to promote the training and education of Master's, doctoral, and post-doctoral personnel with the support of the collaboration network.

Uruguay

Uruguay does not have national policies and agencies to support the development of high performance computing research in the country. Additionally, most of its universities and research centers are in Montevideo, the country's capital. Until recently, the country had only one public university - the University of the Republic (URU), which has over 100,000 students and 10,000 faculty members. In this sense, the HPC efforts developed at URU can be seen as national developments.

The University of the Republic expanded its efforts in HPC with the creation of the Interdisciplinary Center for Scientific High Performance Computing (NICCAD) in 2010, through the joint work of 20 different research groups. NICCAD aims to promote the integration of researchers with varied backgrounds in order to solve diverse scientific problems using HPC techniques. To run scientific applications in domains such as fluid dynamics and biomolecular simulations, researchers rely solely on the FING cluster, a heterogeneous cluster composed of Dell Power and HP ProLiant nodes funded by the university itself, which provides 5 TFlops of performance over 440 cores and 848 GB of memory.

Tuesday, February 24, 2015

SCALAC: Advanced Computing Services for Latin America and the Caribbean

Nation- or even continent-wide programs are a fundamental piece in developing high performance computing expertise. Two examples of that are XSEDE in the United States and PRACE in Europe. XSEDE is a program that assembles HPC resources from several supercomputing sites. At the time of this writing, XSEDE lists 10 supercomputers, 2 visualization clusters, 6 storage systems, and 1 high-throughput cluster. XSEDE has sparked a huge number of applications and new research directions. Similarly, PRACE comprises 6 supercomputers and has 25 member countries in Europe. Programs of this nature provide fertile ground to develop new and powerful HPC applications, mainly because it is easier to find people with the right expertise in a particular field, the right computing infrastructure, and a formidable source of feedback.

In the same vein, SCALAC, Spanish for "Advanced Computing Services for Latin America and the Caribbean", aims at providing the region with a program that integrates consolidated advanced computing centers. Such a program would integrate HPC experts, service providers, and supercomputers. Therefore, the necessary resources would be within reach for scientists and engineers eager to develop groundbreaking simulations.

The SCALAC project was formally established in March 2013 with the Bucaramanga Declaration, signed by the original members at the Industrial University of Santander, in Bucaramanga, Colombia. The SCALAC project is supported by a bigger program in the region called RedCLARA. Short for Latin American Cooperation in Advanced Networks, RedCLARA has implemented a high-speed interconnect among several countries in Latin America. SCALAC was inspired by other initiatives in the region, namely the GISELA project (Grid Initiatives for e-Science virtual communities in Europe and Latin America) and the grid computing projects EELA and EELA-2. The operational center of SCALAC is a distributed system installed in the three major participants: Mexico, Colombia, and Brazil. In addition, Costa Rica and Ecuador provide small-scale services. As a supplement, the Barcelona Supercomputing Center (BSC) and the Center of Research in Energy, Environment and Technology (CIEMAT) in Spain provide international support.

The goal of SCALAC is threefold:
  • Support high-level scientific endeavors by providing supercomputing platforms for applied and fundamental research projects.
  • Support academic activity by training HPC specialists at different levels (technical, specialized, scientific).
  • Support HPC technology transfer and use of HPC platforms for projects focused on the most important needs of the region (climate, health, and security).
For more information on the project, please visit http://www.cenat.ac.cr/computacion-avanzada/scalac.

Tuesday, February 17, 2015

Latin American children meet HPC

As a follow-up to our previous post about teaching HPC to undergraduate students, in this post we go beyond that: we want to reach children! Of course, we are not talking about asking a ten-year-old to program a hardware device with thousands of cores. We are talking about the importance of exposing children to the HPC world from the early stages of education. In the same way children are capable of understanding what a telescope or a microscope is, they should also be able to have some basic notions about one of the most important tools for scientific discovery: supercomputers. We should not ask them to understand the intricate technical details of how a supercomputer works; what is important is to explain to them what a supercomputer is, what it is used for, and the advantages it offers to many scientific domains. This is critical to awaken the curiosity of young generations and to motivate them to go into science and technology careers.

On this front, Chile is already several steps ahead. The National Laboratory for High Performance Computing (home of the Leftraru supercomputer) organized several visits by high school students to the laboratory. First, about fifteen girls from the Liceo Carmela Carvajal de Prat were received by the scientific director Jaime San Martin, the project manager Susana Cabello, and the technology manager Gines Guerrero. They attended several informal talks explaining what supercomputers are for, what they look like, and how they are used for different purposes. In addition, the engineer Alex Di Genova explained why HPC is important for bioinformatics research. The gentlemen's turn came several days later, when about thirty boys from the National Institute visited the premises of the NLHPC. Again, the main concepts of HPC were explained in short talks, but this time astrophysics served as the emblematic scientific application: Dr. Francisco Corster explained to them the critical importance of scientific modeling for understanding the behavior of the universe and its cosmic bodies.

Other laboratories across Latin America should follow similar strategies. These visits are not only extremely stimulating for children, but they also help pave the way for a bright future generation of Latin American scientists and engineers.

Friday, February 13, 2015

Opinion Piece: Teaching HPC to Undergraduates

Teaching high performance computing concepts to undergraduate students is one of the most important tasks of HPC educators and researchers in Latin America. Early exposure and thorough instruction would not only help train future researchers and technicians to work on the computational challenges of our society, but also strengthen the workforce for areas such as embedded, real-time, and distributed systems. All students should graduate with enough knowledge to develop an efficient parallel program that runs on one of the many parallel platforms that surround us, be it a mobile phone or a supercomputer.

Students should be instructed on three different fronts: parallel architectures, parallel programming, and software optimization. Several of the concepts involved are not difficult to grasp, and students around the second year, with basic programming skills and computer architecture knowledge, could start learning them.
  • Parallel architectures are the basic platforms for HPC. Students from the second year onward should start learning about multithreading, multicore processors, accelerators, and clusters. This knowledge would help them evaluate the machine requirements of a given application and guide software development decisions.
  • Parallel programming is required to fully utilize parallel platforms. Parallel programming concepts should be presented in introductory programming classes to avoid locking students into a sequential mentality, while advanced parallel constructs, languages, and frameworks could be left to the second half of a degree. Courses should not restrict themselves to the OS view of concurrent programming (threads and processes, synchronization and mutual exclusion) but should embrace modern APIs and high-level programming models (see the small sketch after this list).
  • Software optimization improves the integration of parallel programs and architectures. Advanced algorithms and code optimization classes can be left to a degree's final years, but should not be left out. This would ensure that graduates are qualified to make full use of the resources they are given and can lead the development of new software solutions.
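
To make this concrete, here is a minimal sketch of the kind of exercise a second-year student could work through: a numerical estimate of pi parallelized with OpenMP. The file name, workload size, and program itself are made up purely for illustration, and OpenMP is only one of many possible APIs for a first contact with parallel programming.

    /* pi_omp.c -- a minimal OpenMP exercise (illustrative only).
     * Estimate pi by numerical integration of 4/(1+x^2) over [0,1].
     * Compile with: gcc -O2 -fopenmp pi_omp.c -o pi_omp
     * Run with:     OMP_NUM_THREADS=4 ./pi_omp
     */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000;   /* number of integration steps */
        const double h = 1.0 / n;   /* width of each step          */
        double sum = 0.0;

        double start = omp_get_wtime();

        /* Each thread accumulates part of the sum; the reduction
         * clause takes care of the synchronization.               */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            double x = (i + 0.5) * h;
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi ~= %.12f computed in %.3f s with up to %d threads\n",
               sum * h, omp_get_wtime() - start, omp_get_max_threads());
        return 0;
    }

Timing the same loop with 1, 2, 4, and 8 threads gives students a first-hand feel for speedup and scalability, and the very same code runs on a laptop, on a node of a homemade cluster, or on a supercomputer.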
One of the main issues in teaching and learning HPC is the time needed to cover all of these themes, as they cannot be confined to a single course. The subjects would have to be spread across different courses, with some coordination between instructors. Nevertheless, such a reform should not be abandoned just because it is laborious.

Another common issue in Latin American universities is the absence of high performance platforms on which undergraduate students can run their experiments. This problem could be circumvented by building clusters out of old machines or cheap single-board computers together with the students. Another solution would be to run experiments on commercial clouds.

Finally, exposure to HPC should not be limited to the classroom. Summer courses and tutorials, nearby conferences and small events, parallel programming competitions... all play a role in bringing people in and growing HPC in Latin America.

Wednesday, February 4, 2015

The Role of University Centers in Promoting HPC in Latin America

The development of a competitive high performance computing platform in a country requires the concerted effort of different agents: government, academia, society, and industry. Ideally, a coordinated strategy among all the aforementioned parties would put HPC on the fast track. If governmental agencies design policies that effectively support investment in HPC equipment and human capital, academic institutions and companies will efficiently organize teams to chase the funding opportunities. Society, in turn, must ensure that its taxes are well spent on initiatives that improve quality of life and open new business directions.

There is, however, an entity that has historically played a key role in developing HPC projects with high impact for society. University centers, attached either to academic, administrative, or technical departments, are units strategically located to spark fundamental initiatives in HPC. Here are some of the reasons why:
  • Proximity to scientists and engineers, which makes university centers aware of the most relevant and challenging problems in science and engineering that require huge amounts of computation, storage, and analysis to solve. Therefore, university centers are the most likely to understand the need for HPC.
  • Funding availability, either because the center has an annual budget that includes equipment, training, and even discretionary funds, or because it is close to upper management where the case can be made to provide HPC platforms for the university. Thus, university centers may be able to cover the cost of HPC.
  • The right size and dynamics, because university centers are usually relatively small units that can be reconfigured in a short amount of time to accommodate the requirements of new platforms for advanced computing. In a world where technologies may change frequently and abruptly, university centers are able to cope with the dynamics of HPC.
In summary, university centers have all the connections necessary to promote HPC at the institutional level. For instance, the centers may have contacts with equipment and service providers that may be able to install advanced computing infrastructure. Also, the centers may understand what problems in society have a higher impact and prioritize those problems in their agenda.

If a university center decides to take on the HPC challenge, it must come up with a plan to wisely invest its resources in addressing the problems with the highest impact. There are some fundamental tasks in any plan to develop HPC. Here is a list of them:
  • Equipment. There is no HPC without the right advanced computing infrastructure. Providing the scientific and engineering community with the right hardware is fundamental to achieving a highly efficient HPC platform. To do so, the center's directors must clearly understand the technical requirements of their potential users: computation, network, storage, applications, frameworks, and more. Surveying those requirements is important since different communities may lie at different ends of the big-compute/big-data spectrum.
  • Training. Reaching high efficiency on the HPC platform requires users to apply the right programming tools in the right way. That is why the university center should also offer a rich portfolio of training opportunities. Whether by developing local experts or by bringing in someone from abroad, proper training sessions may inspire new ways to solve problems and help people overcome the learning curve.
  • Networking. One of the often overlooked roles of the centers is to serve as a meeting point for computational scientists and engineers. By organizing workshops, conferences, and competitions, the centers can achieve the important goal of bringing people together and starting collaborations on fascinating problems.
  • Alliances. As the center becomes more savvy in HPC matters, it will find itself in a good position to contact equipment providers, scientists and engineers, application developers, policy makers, students, and more. Strengthening those alliances will certainly extend the reach of its goals.
Not surprisingly, Latin America offers several examples of university centers that have taken on the task of leading the promotion of HPC in their institutions. One example is the Center for Mathematical Modeling (CMM) at the University of Chile. Created in 2000, the CMM has pushed the idea of using advanced computing infrastructure to simulate challenging mathematical models. That quest has led the CMM to install some of the most powerful supercomputers in Chile in recent years, and the CMM is now the leader in the development of the National Laboratory for High Performance Computing in Chile. Another example is the Center for High Performance Computing (CCAD) of the University of Córdoba, Argentina. The CCAD has been a constant advocate of using HPC resources to solve important problems in science and engineering. In fact, the CCAD hosts Mendieta, the most powerful Argentinian supercomputer.

Tuesday, January 27, 2015

Intel to open new R&D laboratory in Costa Rica

A new Intel "mega-laboratory" for research and development will open this year in Costa Rica. The new mega-lab will carry out testing and quality control operations for the entire product portfolio before they go on to manufacturing. The new laboratory should start operations in a couple of months, as announced by Intel, and is planning to open about 350 new positions. In addition to the new R&D laboratory, there are ongoing discussions between the Costa Rican government and Intel to open a new laboratory for small and medium sized businesses to share knowledge and collaborate, as a way to spark new initiatives and promote the entrepreneurship in the country.

This new R&D laboratory is a great opportunity to incubate and expand HPC knowledge in the region. Since the purpose of the lab is to test and validate Intel products, highly skilled engineers will develop further expertise in code parallelization and in optimization techniques that improve the efficiency of HPC applications, not only in terms of time to completion but also in terms of energy consumption. Furthermore, the new mega-lab is expected to fuel HPC development in the region, as undergraduate and graduate students do internships in the lab and universities develop collaboration projects with Intel that can lead to new high performance software leveraging the features of the Intel products being tested. This initiative could even spark a new Intel Parallel Computing Center (IPCC) in the region.

The new laboratory comes as a breath of fresh air to Costa Rica, as the giant chip manufacturer closed its chip assembly plant in the country last year. A total of 1,500 employees lost their jobs as part of that restructuring operation. "The best long-term solution to maximize global operational efficiency and effectiveness is to close its assembly and testing operations in Costa Rica," announced Intel in a statement last year. Intel has been operating in Costa Rica since 1997 and produced over $2 billion in annual exports, which represented about 20% of Costa Rica's annual exports.

While the number of new jobs opened by the R&D laboratory is just a fraction of those at the previous manufacturing line, it still represents a great opportunity for engineers and future graduates in the region. Joining the movement, the cloud computing company VMware recently announced that it will expand its team in Costa Rica to reach 400 employees in 2015. VMware started operations in Costa Rica in 2012 with only 3 employees, and its quick expansion in the region has been seen as a positive sign for the cloud computing market in Latin America.

Wednesday, January 21, 2015

Profile of national HPC developments in Latin America - Part III

In this series of posts, we present some of the national developments in high performance computing seen in Latin American countries. Following parts I (Argentina and Brazil) and II (Chile and Colombia), today we discuss Mexico's developments.

Mexico - National Supercomputing Centers and Network

Mexico is one of the countries most engaged in high performance computing in Latin America. Its commitment at the national level is visible in different ways. For instance, the Mexican Supercomputing Network (RedMexSu) interconnects seventeen supercomputing centers, universities, and research institutions in the country. RedMexSu's activities in HPC include the development of infrastructure, services, and training. In order to promote collaboration among its members, it has funding to support missions for researchers at the graduate level and above.

Mexico has two national laboratories, namely the supercomputing efforts at the National Autonomous University of Mexico (UNAM) and the National Supercomputing Center (CNS) at the San Luis Potosi Institute of Scientific Research and Technology (IPICyT). LARTop50 lists UNAM's Miztli supercomputer as the fastest supercomputer in Latin America, with a theoretical peak performance of 120 TFlops. Nevertheless, access to Miztli is restricted to UNAM faculty. Meanwhile, CNS provides access to its own supercomputer, named Thubat-Kaal, to Mexican researchers and foreign collaborators. Thubat-Kaal delivers 115 TFlops of performance split across 140 dual-socket Intel Xeon nodes and 25 nodes with the same processors plus two Xeon Phi accelerators each. This kind of access to supercomputing infrastructure is very important for the development of research in Latin America.

Finally, Mexico has already announced the development of a third national laboratory. The National Supercomputing Laboratory of the Southeast of Mexico (Laboratorio Nacional de Supercómputo del Sureste de México, or LNS), as it is named, will be hosted at the Meritorious Autonomous University of Puebla (BUAP). This national center is expected to help in the development of the southeastern part of Mexico. For that, it will feature a supercomputer recently purchased from Fujitsu, which is anticipated to provide between 100 and 200 TFlops of computing power and to include both Intel Xeon Phi and Nvidia CUDA accelerators. Expect to read more about LNS as its development unfolds this year.

Sunday, January 18, 2015

Latin America hosts one of the most powerful supercomputers in the world

That's right: a supercomputer as powerful as the number-two machine on the most recent Top500 list has its home in the Atacama desert in Chile. Although it may not run traditional HPC software, it certainly enables cutting-edge scientific exploration.

The Atacama Large Millimeter/submillimeter Array (ALMA) is a collection of high-precision antennas that work together as a giant telescope. With its high resolution and sensitivity, ALMA provides a window into the origin of the universe. The antennas are installed in the dry Atacama desert at more than 5,000 meters above sea level, conditions that are ideal for the type of instrument ALMA embodies.

The general idea of the antenna array is to capture a signal from the sky with two or more antennas and combine the measurements to analyze the signal and learn more about its source. Images result from combining the radio waves collected by the different antennas, so ALMA is effectively able to photograph the sky and provide valuable information about the life of galaxies. Orienting the antenna array with the required precision demands heavy computations: they make it possible to have all antennas pointing precisely at the same region of the sky and to keep their signals coherent, so that they can later be combined into a single image. This computation is carried out by a supercomputer called the ALMA Correlator.

The correlator can be thought of as ALMA's brain; without it, the antenna array wouldn't work properly. The correlator takes the signals from the antennas as input and produces astronomical data for further analysis. The core of this process is to multiply the signals from the antennas; the results are saved into files called visibilities, which are later used to make the images. The correlator contains 134 million processors capable of performing 17 quadrillion operations per second. It requires 140 kilowatts just to cool the processors, a power consumption due in part to the thin air of the Atacama desert. The high altitude also precludes the use of hard disks, hence the correlator is diskless.
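
To give a flavor of what "multiplying the signals" means, here is a deliberately tiny sketch of the cross-multiplication step of an interferometer, written in plain C. The real correlator works on channelized data streams in specialized hardware at enormous rates; the antenna count, sample count, and synthetic signals below are made-up assumptions purely for illustration.

    /* toy_correlator.c -- illustrative only, not how ALMA actually does it.
     * For every pair of antennas (a baseline), multiply one signal by the
     * complex conjugate of the other and average over time; the averaged
     * products are the "visibilities" used later to form images.
     */
    #include <stdio.h>
    #include <complex.h>

    #define N_ANT  4      /* made-up number of antennas (ALMA has 66) */
    #define N_SAMP 1024   /* made-up samples per integration period   */

    int main(void) {
        static double complex signal[N_ANT][N_SAMP]; /* digitized voltages */
        double complex vis[N_ANT][N_ANT] = {0};      /* visibilities       */

        /* Fabricate a common signal with a small per-antenna phase shift,
         * just so the correlation produces something non-trivial.         */
        for (int a = 0; a < N_ANT; a++)
            for (int s = 0; s < N_SAMP; s++)
                signal[a][s] = cexp(I * (0.01 * s + 0.1 * a));

        /* Cross-multiply and accumulate every antenna pair. */
        for (int a = 0; a < N_ANT; a++)
            for (int b = a + 1; b < N_ANT; b++) {
                for (int s = 0; s < N_SAMP; s++)
                    vis[a][b] += signal[a][s] * conj(signal[b][s]);
                vis[a][b] /= N_SAMP;  /* time average */
                printf("baseline %d-%d: amplitude %.3f, phase %.3f rad\n",
                       a, b, cabs(vis[a][b]), carg(vis[a][b]));
            }
        return 0;
    }

Each averaged product encodes the amplitude and phase difference between the signals arriving at the two antennas of a baseline, which is exactly the information the imaging software later turns into pictures of the sky.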

The correlator was built and installed by the National Radio Astronomy Observatory (NRAO) and funded by the US National Science Foundation (NSF). It is a fundamental piece of the ALMA puzzle and is already providing the information necessary to understand how planets, galaxies, and stars form.

For more information about the project, please visit http://www.almaobservatory.org.