CUDA - compilation error in kernel call

Keywords: compilation, error, kernel call, CUDA      Updated: 2023-10-16

Hi, I want to port Stam's code from the CPU to a GPU version. It isn't really necessary to understand the whole code, so I will only show the relevant fragments; everything (source code and description) can be found here: http://www.dgp.toronto.edu/people/stam/reality/Research/pub.html => "Real-Time Fluid Dynamics for Games".

This is probably an easy thing, but I haven't used C++ in a long time and am only just learning CUDA, so it's hard for me. I've been trying for a long time with no result.

CPU version (working):

#define IX(i,j) ((i)+(N+2)*(j))
...
void lin_solve(int N, int b, float * x, float * x0, float a, float c)
{
    for (int k = 0; k<20; k++) 
    {
        for (int i = 1; i <= N; i++) 
        {
            for (int j = 1; j <= N; j++) 
            {
                x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
            }
        }

        set_bnd(N, b, x);
    }
}

My GPU version (does not compile):

#define IX(i,j) ((i)+(N+2)*(j))
__global__
void GPU_lin_solve(int *N, int *b, float * x, float * x0, float *a, float *c)
{
    int i = threadIdx.x * blockIdx.x + threadIdx.x;
    int j = threadIdx.y * blockIdx.y + threadIdx.y;
    if (i < N && j < N)
    x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
}
void lin_solve(int N, int b, float * x, float * x0, float a, float c)
{
    for (int k = 0; k<20; k++) 
    {
        int *d_N, *d_b;
        float **d_x, **d_x0;
        float *d_a, *d_c, *d_xx, *d_xx0;
        *d_xx = **d_x;
        *d_xx0 = **d_x0;
        cudaMalloc(&d_N, sizeof(int));
        cudaMalloc(&d_b, sizeof(int));
        cudaMalloc(&d_xx, sizeof(float));
        cudaMalloc(&d_xx0, sizeof(float));
        cudaMalloc(&d_a, sizeof(float));
        cudaMalloc(&d_c, sizeof(float));
        cudaMemcpy(d_N, &N, sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, &b, sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(d_xx, &*x, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_xx0, &*x0, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_a, &a, sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_c, &c, sizeof(float), cudaMemcpyHostToDevice);
        GPU_lin_solve<<<1, 1>>>(d_N, d_b, d_xx, d_xx0, d_a, d_c);
        // the compiler flags the line above:
        // Error 23 error : argument of type "int *" is incompatible with parameter of type "int"
        cudaMemcpy(&*x, d_xx, sizeof(float), cudaMemcpyDeviceToHost); 

        cudaFree(d_N);
        cudaFree(d_b);
        cudaFree(d_xx);
        cudaFree(d_xx0);
        cudaFree(d_a);
        cudaFree(d_c);

        set_bnd(N, b, x);
    }
}

The compiler reports this error:

Error 23 error : argument of type "int *" is incompatible with parameter of type "int"

at the kernel launch:

GPU_lin_solve<<<1, 1>>>(d_N, d_b, d_xx, d_xx0, d_a, d_c);

What am I doing wrong?

if (i < N && j < N)
    x[IX(i, j)] = (x0[IX(i, j)] + a*(x[IX(i - 1, j)] + x[IX(i + 1, j)] + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
}

`N` in your condition and in your macro is a pointer, and you are treating it as an integer. Try dereferencing it? The same mismatch applies to `a` and `c`, which are also pointers here but used as plain floats in the update expression.
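Following that suggestion through, here is a minimal sketch of what the port could look like if the scalars (`N`, `b`, `a`, `c`) are simply passed by value and the whole grids, rather than single floats, are copied to the device. The block shape, the thread-index computation, and the host round trip for `set_bnd` are assumptions for illustration, not the original author's code:

```cuda
#define IX(i,j) ((i)+(N+2)*(j))

// Scalars are passed by value; only the grid arrays live in device memory.
__global__ void GPU_lin_solve(int N, float *x, const float *x0, float a, float c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x + 1;  // +1 skips the boundary cells
    int j = blockIdx.y * blockDim.y + threadIdx.y + 1;
    if (i <= N && j <= N)
        x[IX(i, j)] = (x0[IX(i, j)]
                     + a * (x[IX(i - 1, j)] + x[IX(i + 1, j)]
                          + x[IX(i, j - 1)] + x[IX(i, j + 1)])) / c;
}

void lin_solve(int N, int b, float *x, float *x0, float a, float c)
{
    size_t bytes = (N + 2) * (N + 2) * sizeof(float);   // the whole grid, not one float
    float *d_x, *d_x0;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_x0, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_x0, x0, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);                                 // assumed block shape
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    for (int k = 0; k < 20; k++)
    {
        GPU_lin_solve<<<grid, block>>>(N, d_x, d_x0, a, c);
        cudaMemcpy(x, d_x, bytes, cudaMemcpyDeviceToHost);
        set_bnd(N, b, x);                               // host boundary fix-up
        cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    }
    cudaFree(d_x);
    cudaFree(d_x0);
}
```

One caveat: the CPU loop is a Gauss-Seidel sweep (each update reads already-updated neighbours), whereas updating all cells in parallel turns it into a Jacobi-style iteration, so convergence behaviour may differ slightly.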