CUDA: shared data member of struct and member of reference to that struct have different addresses, values

OK, here's the problem:
Using a CUDA compute capability 1.1 GPU, I am trying to maintain, for each thread, a set of indices (potentially varying in number, fixed at 4 here), held as a reference kept in a member of a struct variable.
My problem is that taking a reference to the struct gives incorrect results when accessing the member array: I initialize the member array values to 0. When I read an array value through the original struct variable, I get the correct value (0), but when I read it through the reference to the struct variable, I get garbage (-8193). This happens even if I use a class instead of a struct.

Why is tmp below not equal to 0?

C++ is not my primary language, so this may be a conceptual problem on my part, or it may be a quirk of how CUDA works.

struct DataIdx {
    int numFeats;
    int* featIdx;
};
extern __shared__ int sharedData[];
// forward declarations so the calls to the subfunctions below compile
__device__ void mySubfn(struct DataIdx *myIdx);
__device__ void mySubfn2(struct DataIdx &myIdx);
__global__ void myFn(){
    int tidx = blockIdx.x * blockDim.x + threadIdx.x;
    
    DataIdx myIdx;  //instantiate the struct var in the context of the current thread
    myIdx.numFeats = 4;
    size_t idxArraySize = sizeof(int)*4;
    //get a reference to my array for this thread. Parallel Nsight debugger shows myIdx.featIdx address = 0x0000000000000000e0
    myIdx.featIdx = (int*)(&sharedData[tidx*idxArraySize]);  
    
    myIdx.featIdx[0] = 0x0;  //set first value to 0 
    int tmp = myIdx.featIdx[0];  // tmp is correctly eq to 0 in Nsight debugger -- As Expected!!
    tmp = 2*tmp;    myIdx.featIdx[0] = tmp; //ensure the compiler doesn't elide tmp
    
    DataIdx *tmpIdx = &myIdx;  //create a reference (pointer) to my struct var
    tmp = tmpIdx->featIdx[0];  // expected 0, but tmp = -8193 in debugger !! why?  debugger shows address of tmpIdx->featIdx = __devicea__ address=8
    tmpIdx->featIdx[0] = 0x0;
    tmp = tmpIdx->featIdx[0];  // tmp = -1; can't even read back what we just set
    
    //forcing the same pointer value as myIdx.featIdx still gives a problem! debugger shows address of tmpIdx->featIdx = __devicea__ address=8
    tmpIdx->featIdx = (int*)(&sharedData[tidx*idxArraySize]);
    tmp = tmpIdx->featIdx[0]; //tmp = -8193!! why != 0?
    DataIdx tmpIdxAlias = myIdx;
    tmp = tmpIdxAlias.featIdx[0]; //aliasing the original var gives correct results, tmp=0
    
    
     myIdx.featIdx[0] = 0x0;
     mySubfn(&myIdx); //this is a problem because it happens when passing the struct by reference to subfns
     mySubfn2(myIdx);
}
__device__ void mySubfn(struct DataIdx *myIdx){
  int tmp = myIdx->featIdx[0]; //tmp == -8193!! should be 0
}
__device__ void mySubfn2(struct DataIdx &myIdx){
  int tmp = myIdx.featIdx[0]; //tmp == -8193!! should be 0
}

I had to modify your code to get it to compile. On the line

tmpIdx->featIdx[0] = 0x0

the compiler cannot tell that the pointer points to shared memory. Instead of performing a store to shared memory (R2G), it performs a store to the out-of-bounds global address 0x10.

    DataIdx *tmpIdx = &myIdx;
0x000024c8  MOV32 R2, R31;  
0x000024cc  MOV32 R2, R2;  
    tmp = tmpIdx->featIdx[0];
    tmpIdx->featIdx[0] = 0x0;
0x000024d0  MOV32 R3, R31;  
0x000024d4  MOV32 R2, R2;  
0x000024d8  IADD32I R4, R2, 0x4;  
0x000024e0  R2A A1, R4;  
0x000024e8  LLD.U32 R4, local [A1+0x0];  
0x000024f0  IADD R4, R4, R31;  
0x000024f8  SHL R4, R4, R31;  
0x00002500  IADD R4, R4, R31;  
0x00002508  GST.U32 global14 [R4], R3;   // <<== GLOBAL STORE vs. R2G (register to global register file)
    tmp = tmpIdx->featIdx[0];

The Nsight CUDA Memory Checker catches the out-of-bounds store to global memory.

Memory Checker detected 1 access violations.
error = access violation on store (global memory)
blockIdx = {0,0,0}
threadIdx = {0,0,0}
address = 0x00000010
accessSize = 0

If you compile for compute_10, sm_10 (really anything <= 1.3), you should see the following warning for every line where the compiler cannot determine that the access is to shared memory:

kernel.cu(46): warning : Cannot tell what pointer points to, assuming global memory space

If you add a cudaDeviceSynchronize after the launch, you should see an error code of cudaErrorUnknown caused by the out-of-bounds memory access.
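
For example, a minimal host-side check along these lines should surface the failure (the 32-thread launch configuration here is purely an illustrative assumption, not part of the original post):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Illustrative launch: 32 threads with enough dynamic shared memory for 4 ints per thread.
    const int threads = 32;
    const size_t shmemBytes = threads * 4 * sizeof(int);

    myFn<<<1, threads, shmemBytes>>>();

    // Synchronizing forces the error from the bad global store to be reported here.
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess) {
        printf("kernel failed: %s\n", cudaGetErrorString(err));
    }
    return 0;
}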

__shared__ is a variable memory-space qualifier, not a type qualifier, so I do not know of a way to tell the compiler that featIdx will always point to shared memory. On CC >= 2.0 the compiler should convert featIdx to a generic pointer.
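
One possible workaround on those older targets (my own sketch, not from the answer above, and assuming the same extern __shared__ int sharedData[] declaration as in the question) is to keep a per-thread offset in the struct instead of a raw pointer, so every access is written directly against the __shared__ array and the memory space is unambiguous to the compiler:

// Sketch of a workaround for CC <= 1.3: store an offset into sharedData
// rather than an int*, so each access goes through the __shared__ array
// itself and the compiler knows it is a shared-memory access.
struct DataIdxOffset {
    int numFeats;
    int base;  // offset of this thread's indices within sharedData
};

__device__ int readFeat(const DataIdxOffset &idx, int i) {
    return sharedData[idx.base + i];  // compiled as a shared-memory load
}

__global__ void myFnOffset() {
    int tidx = blockIdx.x * blockDim.x + threadIdx.x;

    DataIdxOffset myIdx;
    myIdx.numFeats = 4;
    myIdx.base = tidx * 4;            // 4 ints per thread

    sharedData[myIdx.base + 0] = 0;
    int tmp = readFeat(myIdx, 0);     // correct even when the struct is passed by reference
    sharedData[myIdx.base + 1] = tmp; // keep tmp live so it is not elided
}

On CC >= 2.0 the generic addressing mentioned above should make the original pointer-based version behave correctly, so this indirection is only a crutch for the pre-Fermi architectures.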