Weird result calculating memory bandwidth from nvprof output

How do I calculate GPU memory bandwidth given:

  1. the data sample size (in GB), and
  2. the kernel execution time (from the nvprof output)?

GPU: GTX 1050 Ti
CUDA: 8.0
OS: Windows 10
IDE: Visual Studio 2015

Normally I would use this formula: bandwidth [GB/s] = data_size [GB] / average_time [s]

However, when I apply this equation to the get_mem_kernel() kernel, I get a result that cannot be right: 441.93 [GB/s].

I believe this result is wrong, because the technical specification of the GTX 1050 Ti lists a global memory bandwidth of 112 [GB/s].
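For comparison, plugging the kernel averages from the nvprof output below into the same formula gives roughly:

    get_mem_kernel: 0.1 GB / 226.28 us ≈ 441.9 GB/s   (well above what the hardware can do)
    set_mem_kernel: 0.1 GB / 972.54 us ≈ 102.8 GB/s   (plausible for a 112 GB/s card)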

Where am I making a mistake, or is there something else I am not understanding?

Example code:

// cpp libs:
#include <iostream>
#include <sstream>
#include <fstream>
#include <iomanip>
#include <stdexcept>
// cuda libs:
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
#define ERROR_CHECK(CHECK_) if (CHECK_ != cudaError_t::cudaSuccess) { std::cout << "cuda error" << std::endl; throw std::runtime_error("cuda error"); }
using data_type = double;
// integer division rounded up (used below to compute the number of blocks)
template <typename T> constexpr __forceinline__
T div_s(T dividend, T divisor)
{
    using P = double;
    return static_cast <T> (static_cast <P> (dividend + divisor - 1) / static_cast <P> (divisor));
}
// set_mem_kernel: writes the thread index into every element of in_data
__global__
void set_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < size)
    {
        in_data[idx] = static_cast <data_type> (idx);
    }
}
// get_mem_kernel: reads every element of in_data into a local variable that is not used afterwards
__global__
void get_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data_type val = 0;
    if (idx < size)
    {
        val = in_data[idx];
    }
}
struct quit_program
{
public:
    ~quit_program()
    {
        try
        {
            ERROR_CHECK(cudaDeviceReset());
        }
        catch (...) {}
    }
} quit;
int main()
{
    unsigned int size = 12500000; // 100 MB of doubles
    size_t       byte = size * sizeof(data_type);
    dim3 threads (256, 1, 1);
    dim3 blocks  (div_s(size, threads.x), 1, 1);
    std::cout << size << std::endl;
    std::cout << byte << std::endl;
    std::cout << std::endl;
    std::cout << threads.x << std::endl;
    std::cout << blocks.x  << std::endl;
    std::cout << std::endl;
    // data:
    data_type * d_data = nullptr;
    ERROR_CHECK(cudaMalloc(&d_data, byte));
    // launch both kernels 20000 times so the profiler reports averages over many runs
    for (int i = 0; i < 20000; i++)
    {
        set_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());
        get_mem_kernel <<<blocks, threads>>> (size, d_data);
        ERROR_CHECK(cudaDeviceSynchronize());
        ERROR_CHECK(cudaGetLastError());
    }
    // Exit:
    ERROR_CHECK(cudaFree(d_data));
    ERROR_CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}

nvprof results:

D:\Dev\visual_studio\nevada_test_site\x64\Release>nvprof ./cuda_test.exe
12500000
100000000
256
48829
==10508== NVPROF is profiling process 10508, command: ./cuda_test.exe
==10508== Warning: Unified Memory Profiling is not supported on the current configuration because a pair of devices without peer-to-peer support is detected on this multi-GPU setup. When peer mappings are not available, system falls back to using zero-copy memory. It can cause kernels, which access unified memory, to run slower. More details can be found at: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-managed-memory
==10508== Profiling application: ./cuda_test.exe
==10508== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
 81.12%  19.4508s     20000  972.54us  971.22us  978.32us  set_mem_kernel(unsigned int, double*)
 18.88%  4.52568s     20000  226.28us  224.45us  271.14us  get_mem_kernel(unsigned int, double*)
==10508== API calls:
Time(%)      Time     Calls       Avg       Min       Max  Name
 97.53%  26.8907s     40000  672.27us  247.98us  1.7566ms  cudaDeviceSynchronize
  1.61%  443.32ms     40000  11.082us  5.8340us  183.43us  cudaLaunch
  0.51%  141.10ms         1  141.10ms  141.10ms  141.10ms  cudaMalloc
  0.16%  43.648ms         1  43.648ms  43.648ms  43.648ms  cudaDeviceReset
  0.08%  22.182ms     80000     277ns       0ns  121.07us  cudaSetupArgument
  0.06%  15.437ms     40000     385ns       0ns  24.433us  cudaGetLastError
  0.05%  12.929ms     40000     323ns       0ns  57.253us  cudaConfigureCall
  0.00%  1.1932ms        91  13.112us       0ns  734.09us  cuDeviceGetAttribute
  0.00%  762.17us         1  762.17us  762.17us  762.17us  cudaFree
  0.00%  359.93us         1  359.93us  359.93us  359.93us  cuDeviceGetName
  0.00%  8.3880us         1  8.3880us  8.3880us  8.3880us  cuDeviceTotalMem
  0.00%  2.5520us         3     850ns     364ns  1.8230us  cuDeviceGetCount
  0.00%  1.8240us         3     608ns     365ns  1.0940us  cuDeviceGet

CUDA Samples\v8.0\1_Utilities\bandwidthTest results:

[CUDA Bandwidth Test] - Starting...
Running on...
 Device 0: GeForce GTX 1050 Ti
 Quick Mode
 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11038.4
 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     11469.6
 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     95214.0
Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

The compiler was optimizing away the memory reads. Robert Crovella pointed this out. Thank you for the help - I would never have guessed it.

Details:
My compiler was optimizing out the val variable and, by extension, the memory read itself, so the timing of get_mem_kernel did not reflect any real global memory traffic.
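For example, a version of get_mem_kernel along these lines keeps the read alive (this rewritten kernel is my own sketch, not something from the original code; the guarded store is never executed for the data written by set_mem_kernel, but the compiler cannot prove that, so it has to keep the load):

__global__
void get_mem_kernel(const unsigned int size, data_type * const in_data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data_type val = 0;
    if (idx < size)
    {
        val = in_data[idx];
        // give the loaded value an observable use: set_mem_kernel only writes
        // values >= 0, so this store never happens at run time, but the compiler
        // cannot prove that and therefore must perform the global load
        if (val < static_cast <data_type> (0))
        {
            in_data[idx] = val - static_cast <data_type> (1);
        }
    }
}

With a change like this, the measured time of get_mem_kernel should move into the same range as set_mem_kernel, and the bandwidth computed from it should land near the ~112 GB/s specification instead of 441.93 GB/s.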