
MPI over PCI Express

Message Passing Interface (MPI) is a standardized and portable message-passing system designed by researchers from academia and industry to function on a wide variety of parallel computers.
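
As a minimal illustration of the message-passing model, the short C program below sends a single integer from rank 0 to rank 1 using the standard MPI point-to-point calls (compile with mpicc, run with mpirun -np 2):

    /* Minimal MPI point-to-point example: rank 0 sends an int to rank 1. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Send one int to rank 1 with message tag 0. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }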

[Image: Dolphin PXH830 PCIe adapter]

MPI is used in many small and large clusters and supercomputers, and it supports both multiprocessor environments and heterogeneous architectures with GPUs and other accelerators. Several MPI implementations today offer support for transport plugins.

This master's project is available for one or two students. The tasks would be to:

  • Select one or two open source MPI libraries.
  • Create a basic transport using the standard PIO and RDMA functionality offered by Dolphin's SISCI API (a PIO sketch follows this list).
  • Optimize collective operations using the PCI Express multicast functionality (see the broadcast example after this list).
  • Integrate and test with CUDA (NVIDIA GPUDirect) using the PCI Express peer-to-peer functionality (see the CUDA-aware sketch after this list).
  • Collaborate with the MPI open source development teams to submit the results back to the open source projects.
  • Benchmark and analyze the results.
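
As a rough sketch of the PIO path from the second task: the SISCI model is to connect to a memory segment exported by a remote node, map it into the local address space, and then move data with plain CPU stores. The fragment below follows that pattern; the adapter, node, and segment ids and the NO_FLAGS constant are placeholder assumptions, and the exact signatures and flag names should be checked against Dolphin's sisci_api.h.

    /* Hedged sketch: PIO write into a remote SISCI segment.
     * Assumes the remote node has already created, prepared, and made
     * available a segment with id SEGMENT_ID. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "sisci_api.h"

    #define NO_FLAGS    0     /* placeholder, as in Dolphin's examples */
    #define ADAPTER_NO  0     /* local adapter number (assumption)     */
    #define REMOTE_NODE 4     /* remote node id (assumption)           */
    #define SEGMENT_ID  42    /* agreed-on segment id (assumption)     */
    #define SEGMENT_SZ  4096

    int main(void)
    {
        sci_desc_t           sd;
        sci_remote_segment_t segment;
        sci_map_t            map;
        volatile int        *remote;
        sci_error_t          err;

        SCIInitialize(NO_FLAGS, &err);
        SCIOpen(&sd, NO_FLAGS, &err);

        /* Connect to the segment exported by the remote node. */
        SCIConnectSegment(sd, &segment, REMOTE_NODE, SEGMENT_ID, ADAPTER_NO,
                          NULL, NULL, SCI_INFINITE_TIMEOUT, NO_FLAGS, &err);
        if (err != SCI_ERR_OK) { fprintf(stderr, "connect failed\n"); exit(1); }

        /* Map the remote segment; ordinary stores through this pointer
         * become PIO writes over PCI Express. */
        remote = SCIMapRemoteSegment(segment, &map, 0, SEGMENT_SZ, NULL,
                                     NO_FLAGS, &err);
        remote[0] = 42;    /* remote write, no explicit send call */

        SCIUnmapSegment(map, NO_FLAGS, &err);
        SCIDisconnectSegment(segment, NO_FLAGS, &err);
        SCIClose(sd, NO_FLAGS, &err);
        SCITerminate();
        return 0;
    }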
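
Collective operations are the natural target for PCI Express multicast, since a single posted write can reach several nodes at once. Nothing changes at the MPI level; a standard broadcast like the one below is what a multicast-enabled transport would accelerate underneath:

    /* Standard MPI broadcast: rank 0's buffer is replicated to all ranks.
     * A multicast-capable transport could implement this with one PCIe
     * multicast write instead of a software broadcast tree. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data[4] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) { data[0] = 1; data[1] = 2; data[2] = 3; data[3] = 4; }

        /* Root (rank 0) broadcasts 4 ints to every rank in the communicator. */
        MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d: data[0] = %d\n", rank, data[0]);
        MPI_Finalize();
        return 0;
    }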
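
For the GPUDirect task, a CUDA-aware MPI lets device pointers be passed directly to the usual MPI calls, so a peer-to-peer capable transport can move data GPU to GPU over PCI Express without staging through host memory. A minimal sketch, assuming an MPI build with CUDA support:

    /* Hedged sketch: passing a GPU device pointer directly to MPI.
     * Requires a CUDA-aware MPI (e.g. Open MPI built with CUDA support). */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        float *d_buf;                 /* device memory */
        const int n = 1024;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaMalloc((void **)&d_buf, n * sizeof(float));

        if (rank == 0) {
            /* The device pointer goes straight to MPI_Send; a GPUDirect
             * capable transport can DMA it peer-to-peer over PCIe. */
            MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }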

Dolphin will provide the necessary guidance on using SISCI and the PCI Express interconnect cards, as well as the hardware required for testing and benchmarking.


Qualifications

A good understanding of low-level computer systems. The student should have completed INF3151 or an equivalent course.

Published Aug. 31, 2018 12:50 - Last modified Sep. 16, 2019 15:42

Supervisor(s)

Scope (ECTS credits)

60