Segmentation fault - MPI FFTW

user10954947

I have a very simple 2D MPI FFTW code, shown below.

#include <fftw3-mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv){
  int N0 = 4, N1 = 4;
  fftw_plan plan;
  fftw_complex *data; //local data of course
  ptrdiff_t alloc_local, local_n0, local_0_start;

  MPI_Init(&argc, &argv);
  fftw_mpi_init();

  int id, p, ierr, count;
  ierr = MPI_Comm_size ( MPI_COMM_WORLD, &p );
  ierr = MPI_Comm_rank ( MPI_COMM_WORLD, &id );

  /* get local data size and allocate */
  alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                       &local_n0, &local_0_start);

  data = fftw_alloc_complex(alloc_local);
  memset( data, 0, alloc_local*sizeof(fftw_complex));
  printf("Processor %d of %d - Local row index starts at %ld with %ld * %d size\n",id,p, local_0_start,local_n0,N1);
  /* create plan for forward DFT */
  plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                  FFTW_FORWARD, FFTW_ESTIMATE);

  /* initialize data to some function my_function(x,y) */
  count = 0;
  for (int i = local_0_start; i < local_0_start + local_n0; ++i) for (int j = 0; j < N1; ++j){
    data[i*N1 + j][0]=local_0_start;
    data[i*N1 + j][1]=i;
    //printf("Processor %d of %d - (%d,%d) - %d - %f + %f i \n",id,p,i+1,j+1,count,data[i*N1 +j][0],data[i*N1 +j][1]);
    count +=1;
  }

  fftw_execute(plan);
  fftw_destroy_plan(plan);

  fftw_free(data);
  MPI_Finalize();
  printf("finalize\n");
  return 0;
}

I compile and run it with

mpicc -I /usr/local/include -L /usr/local/lib  ${PRJ}.c -o ${PRJ} -lfftw3_mpi -lfftw3 -lm 
mpirun -np 2 ./simple_mpi_example

When I try to initialize some data into my data matrix, I get a segmentation fault during mpirun on macOS, with the following error output:

Processor 1 of 2 - Local row index starts at 2 with 2 * 4 size
Processor 0 of 2 - Local row index starts at 0 with 2 * 4 size
[Sanaths-MacBook-Air:82223] *** Process received signal ***
[Sanaths-MacBook-Air:82223] Signal: Segmentation fault: 11 (11)
[Sanaths-MacBook-Air:82223] Signal code:  (0)
[Sanaths-MacBook-Air:82223] Failing at address: 0x0
[Sanaths-MacBook-Air:82223] [ 0] 0   libsystem_platform.dylib            0x00007fff7d48bb3d _sigtramp + 29
[Sanaths-MacBook-Air:82223] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node Sanaths-MacBook-Air exited on signal 11 (Segmentation fault: 11).
--------------------------------------------------------------------------
make: *** [run] Error 139

I tried running the same code on a Linux machine, and it also fails, with the error below.

Processor 1 of 2 - Local row index starts at 2 with 2 * 4 size
Processor 0 of 2 - Local row index starts at 0 with 2 * 4 size
double free or corruption (!prev)
[sanath-X550LC:05802] *** Process received signal ***
[sanath-X550LC:05802] Signal: Aborted (6)
[sanath-X550LC:05802] Signal code:  (-6)
^C^CMakefile:7: recipe for target 'run' failed

This error only occurs when I try to access data with the nested for loop; when I don't, the program runs without error. That makes me think I'm going out of bounds somewhere, but I can't see where.

Any suggestions/hints would be greatly appreciated. Thanks!

Stephen Becker

I'm guessing your code is based on this MPI example code. That example shows that each local process works on only a slice of the whole array:

for (int     i = 0; i < local_n0; ++i) {
    for (int j = 0; j < N1;       ++j) {
        data[i*N1 + j][0] = complex_real_part(local_0_start + i, j);
        data[i*N1 + j][1] = complex_imag_part(local_0_start + i, j);
    }
}

But your code assumes it can index into the whole global array, even though you only allocated memory for the local slice:

data = fftw_alloc_complex(alloc_local);
memset(data, 0, alloc_local*sizeof(fftw_complex));
...
for (int     i = local_0_start; i < local_0_start + local_n0; ++i) {
    for (int j = 0;        j < N1;                  ++j) {
        data[i*N1 + j][0] = complex_real_part(i, j);
        data[i*N1 + j][1] = complex_imag_part(i, j);
    }
}
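
A minimal sketch of the fix, keeping the initialization values from the question (untested against your exact build): loop over the local row index 0 .. local_n0-1 when addressing data, and form local_0_start + i only where you need the global row coordinate.

/* index the local buffer with local row indices only; the global row
   coordinate is local_0_start + i, used here just for the stored value */
for (int i = 0; i < local_n0; ++i) {
    for (int j = 0; j < N1; ++j) {
        data[i*N1 + j][0] = local_0_start;      /* real part, as in the question */
        data[i*N1 + j][1] = local_0_start + i;  /* imaginary part: global row index */
    }
}

With only local indices used to address data, each rank stays within the alloc_local complex elements it actually owns, so rank 1 no longer writes past the end of its buffer.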
