High-Performance Computing

MPI and CUDA mixed programming; general CUDA programming

March 26, 2010

http://forums.nvidia.com/index.php?showtopic=98213
I am running Ubuntu 8.04 with the CUDA 2.0 toolkit and driver version 177.73 with OpenMPI. With this configuration, everything works fine and I am able to compile and execute MPI code by simply replacing g++/gcc with mpic++ in common.mk.

My issue is that when I try to upgrade my driver to version 180.22 (to get support for my new 295 cards), I get an immediate segmentation fault with even the most trivial programs (an empty int main). This problem happens only when I am compiling with the CUDA template. Other programs compiled with only the mpic++ command line run fine, and when I go back to driver v177.73, everything works again. This issue occurs with a nearly identical software configuration on 5 different workstations with different mobo/CPU, chipset, and graphics cards.

Has anyone had this issue in the past? I suspect that there may be a compiler flag that I can pass to fix this issue, but that is way outside my pay grade. I have found that things compile and run if I switch to MPICH and the mpicc wrapper.
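The common.mk change the poster describes can be sketched as follows. This is a sketch only: the variable names follow the CUDA 2.x SDK's common.mk and may differ between SDK versions.

```make
# common.mk (CUDA SDK) -- use the MPI compiler wrappers as the host
# compilers so the MPI headers and libraries are picked up automatically.
# The original lines were roughly: CXX := g++ / CC := gcc
CXX  := mpic++
CC   := mpicc
LINK := mpic++ -fPIC
```

An equivalent manual build compiles the `.cu` files with `nvcc` and lets `mpic++` do the final link, along the lines of `nvcc -c kernel.cu` followed by `mpic++ main.cpp kernel.o -L/usr/local/cuda/lib -lcudart` (paths here are assumptions; adjust to your CUDA install).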

http://forums.nvidia.com/index.php?showtopic=30741

http://forums.nvidia.com/index.php?showtopic=96620&hl=mpi
http://forums.nvidia.com/index.php?showtopic=75796&hl=mpi
http://forums.nvidia.com/index.php?showtopic=71498&hl=mpi
http://forums.nvidia.com/index.php?showtopic=159179

> How to compile MPI CUDA code?

A complete guide to mixed CUDA and MPI programming


Mixing CUDA and openMPI


The GPU I was using with MPI was in protected mode, which prevented me from running.
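The "protected mode" mentioned above is the driver's compute mode: in an exclusive or prohibited mode, additional processes (such as extra MPI ranks) cannot create a context on the GPU. A sketch of checking and resetting it with nvidia-smi (assumes a reasonably recent nvidia-smi; changing the mode requires root):

```shell
# Show the current compute mode for each GPU
nvidia-smi -q | grep -i "compute mode"

# Set GPU 0 back to DEFAULT (mode 0) so several processes,
# e.g. multiple MPI ranks, can share it -- requires root
sudo nvidia-smi -i 0 -c 0
```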

CUDA and autoconf



compiling MPI and CUDA C


 MPI and CUDA C


Elementary CUDA question: MPI and CUDA, can one run a program already written in standard MPI?

how to compile MPI and CUDA.

CUDA and MPI

cuda + openmpi

 CUDA with OpenMPI on Ubuntu 8.04, libcudart.so.2: cannot open shared object file: No such file or directory

Mixed CUDA and MPI programming



MPI causing trouble in memory allocation?

Question about using cudaMemcpy in mixed CUDA/MPI Programming

 CUDA visual profiler using mpi?

Sharing 1 GPU between MPI tasks: works fine with 4 MPI tasks, but cudaMalloc gives "unknown error" with

Sort-of MPI on the Tesla, Development of high-level routines

CUDA multicore/mpi
