Mon, 11 Dec
This course takes half a day.
Time & Location
11 Dec, 09:00 – 12:00 CET
About the Event
The Python programming language is increasingly popular. It is a versatile language for general-purpose programming that is accessible to novice programmers. However, it is also increasingly used for scientific computing, and it can be used to develop code that runs on GPGPUs. Additionally, a number of libraries commonly used in scientific computing, data science and machine learning can use GPGPUs to improve performance.
Python is making inroads into the HPC landscape. However, writing Python code for efficient scientific computing is not entirely trivial. In this course a variety of techniques and libraries useful in this context will be discussed. Subjects covered include profiling code to discover opportunities for optimization; using Cython, a Python extension that translates critical code sections into efficient C; wrapping C/C++/Fortran libraries in Python; multithreaded/multiprocess Python; distributed programming using mpi4py; and pySpark for data science.
Target audience
This training is for you if you want to speed up your Python code by using GPUs.
Previous knowledge
You will need experience programming in Python. This is not a training that starts from scratch. Some familiarity with numpy is required as well.
If you plan to do Python GPU programming in a Linux or HPC environment (and you should), then familiarity with these environments is required as well.
Result/Objectives
When you complete this training you will:
- have an understanding of the architecture and features of GPGPUs,
- be able to transfer data between the host and the GPGPU device,
- be able to do linear algebra computations on GPGPUs using scikit-cuda,
- be able to generate random numbers on a GPGPU using cuRAND,
- be able to define your own kernels to run on GPGPUs,
- use numba to generate kernels to run on GPGPUs,
- run machine learning algorithms on GPGPUs,
- speed up data science tasks using RAPIDS.