SSE2 optimization for converting from RGB565 to RGB888 (no alpha channel)

I'm trying to convert a bitmap buffer from 16 bits per pixel:

RGB565: rrrrrggggggbbbbb|rrrrr...

to 24 bits per pixel:

RGB888: rrrrrrrrggggggggbbbbbbbb|rrrrrrrr...

I have a fairly well optimized algorithm, but I'm curious how this could be done with SSE; it seems like a good candidate. Let's assume the input is an array of 16bpp pixels, memory aligned, 64x64 pixels in size, so that it fits nicely: a 64*64*16-bit buffer is converted into a 64*64*24-bit buffer.
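
For reference, a minimal scalar version of the conversion is sketched below (rgb565_to_rgb888_scalar is an illustrative name, and the B,G,R output byte order is an assumption):

// Minimal scalar sketch: expand one RGB565 pixel per iteration.
// The low bits of each 8-bit channel are left at zero; replicating the high
// source bits into them instead would map 0x1F to 0xFF exactly.
void rgb565_to_rgb888_scalar(const uint16_t *src, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint16_t p = src[i];
        dst[3*i + 0] = (p & 0x1F) << 3;        // b: 5 -> 8 bits
        dst[3*i + 1] = ((p >> 5) & 0x3F) << 2; // g: 6 -> 8 bits
        dst[3*i + 2] = (p >> 11) << 3;         // r: 5 -> 8 bits
    }
}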

If I load the initial buffer of 16bpp colors into an __m128i register (and then iterate), I can process 8 pixels at a time. Using masks and shifts, I can extract each component into a separate register (pseudocode):

e.g. for r_c:

Input buffer: c565
Output buffer: c888
__m128i* ptr = (__m128i*)c565; // Original byte buffer, RGB565
__m128i r_mask_16 = _mm_set_epi8(0xF8, 0, 0xF8...);
__m128i r_c = _mm_and_si128(*ptr, r_mask_16);

result:

__m128i r_c = [r0|0|r1|0|....r7|0]
__m128i g_c = [g0|0|g1|0|....g7|0]
__m128i b_c = [b0|0|b1|0|....b7|0]

But if I extract the components manually, all the performance is lost:
c888[0] = r_c[0];
c888[1] = g_c[0];
c888[2] = b_c[0];
c888[3] = r_c[1];
...

I imagine the right approach is to combine them into another register and store that register directly into c888, rather than writing each component out individually. But I'm not sure how to do that efficiently.

Note: this question is not a duplicate of the question about optimizing RGB565 to ARGB8888 conversion with SSE2. Converting from RGB565 to ARGB8888 is not the same as converting from RGB565 to RGB888. That question relies on the instructions

punpcklbw
punpckhbw

These instructions work great when you have the pairs (XMM)(RB), (XMM)(GA) -> (XMM)(RGBA) x2, since they take the RB bytes from one register and the GA bytes from the other and interleave them into two XMM registers of RGBA pixels. But the case I'm presenting is the one where you don't want an alpha component.
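
To make the difference concrete, the interleave that the ARGB8888 approach builds on looks roughly like this (a sketch of the general technique, not that answer's actual code):

// br holds interleaved blue/red bytes, ga holds interleaved green/alpha bytes:
//   br = [B0 R0 B1 R1 ... B7 R7]
//   ga = [G0 A0 G1 A1 ... G7 A7]
// _mm_unpacklo_epi8(br, ga) = [B0 G0 R0 A0 ... B3 G3 R3 A3] -> first 4 BGRA pixels
// _mm_unpackhi_epi8(br, ga) = [B4 G4 R4 A4 ... B7 G7 R7 A7] -> last 4 BGRA pixels
// With 3-byte pixels there is no alpha byte to fill the fourth lane, so this
// pairing trick doesn't carry over directly.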

Unfortunately, SSE doesn't have a good way to write out packed 24-bit integers, so we need to pack the pixel data ourselves.

24bpp pixels take up 3 bytes each, but XMM registers are 16 bytes, which means we need to handle 16 pixels = 48 bytes = three whole XMM registers at a time, so that we never have to store just part of an XMM register.

First we need to load a vector of 16bpp data and convert it to a pair of vectors of 32bpp data. I do this by unpacking the data into a vector of uint32s, then shifting and masking that vector to extract the red, green, and blue channels. ORing these together is the final step of the conversion to 32bpp. This part could be replaced with the code from the linked question if that turns out to be faster; I haven't measured the performance of my solution.
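
Concretely, the shift amounts and masks line up like this per zero-extended pixel (a scalar restatement of the same math; expand565 is just an illustrative name):

// In the 16-bit source, r sits in bits 11..15, g in bits 5..10, b in bits 0..4.
uint32_t expand565(uint32_t p) // p = one RGB565 value, zero-extended to 32 bits
{
    uint32_t r = (p << 8) & 0x00F80000; // bits 11..15 -> bits 19..23
    uint32_t g = (p << 5) & 0x0000FC00; // bits  5..10 -> bits 10..15
    uint32_t b = (p << 3) & 0x000000F8; // bits  0..4  -> bits  3..7
    return r | g | b; // one 0x00RRGGBB pixel; the low bits of each byte stay 0
}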

Once we have 16 pixels converted into vectors of 32bpp pixels, those vectors need to be packed together and written out to the result array. I chose to mask off each pixel individually and use _mm_bsrli_si128 / _mm_bslli_si128 (byte shifts) to move it to its final position in each of the three result vectors. ORing these pixels together again gives the packed data, which is then written to the result array.
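
For reference, the first 16-byte store assembles this byte layout (little-endian, blue in the low byte; pixel numbers match the mask names in the code below):

// byte:   0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
// value: B1 G1 R1 B2 G2 R2 B3 G3 R3 B4 G4 R4 B5 G5 R5 B6
// Pixel 6 straddles the store boundary: mask_0_6th keeps only its B byte, and
// mask_1_6th picks up its remaining G and R bytes in the second store.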

I've tested that this code works, but I haven't taken any performance measurements, and I wouldn't be surprised if there's a faster way to do this, especially if you allow yourself to use something beyond SSE2.

This writes the 24bpp data with the red channel as the MSB of each pixel, i.e. in B,G,R byte order in memory.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>
#define SSE_ALIGN 16
int main(int argc, char *argv[]) {
    // Create a small test buffer
    // We process 16 pixels at a time, so size must be a multiple of 16
    size_t buf_size = 64;
    uint16_t *rgb565buf = aligned_alloc(SSE_ALIGN, buf_size * sizeof(uint16_t));
    // Fill it with recognizable data
    for (size_t i = 0; i < buf_size; i++) {
        uint8_t r = 0x1F & (i + 10);
        uint8_t g = 0x3F & i;
        uint8_t b = 0x1F & (i + 20);
        rgb565buf[i] = (r << 11) | (g << 5) | b;
    }
    // Create a buffer to hold the data after translation to 24bpp
    uint8_t *rgb888buf = aligned_alloc(SSE_ALIGN, buf_size * 3*sizeof(uint8_t));
    // Masks for extracting RGB channels
    const __m128i mask_r = _mm_set1_epi32(0x00F80000);
    const __m128i mask_g = _mm_set1_epi32(0x0000FC00);
    const __m128i mask_b = _mm_set1_epi32(0x000000F8);
    // Masks for extracting 24bpp pixels for the first 128b write
    const __m128i mask_0_1st  = _mm_set_epi32(0,          0,          0,          0x00FFFFFF);
    const __m128i mask_0_2nd  = _mm_set_epi32(0,          0,          0x0000FFFF, 0xFF000000);
    const __m128i mask_0_3rd  = _mm_set_epi32(0,          0x000000FF, 0xFFFF0000, 0         );
    const __m128i mask_0_4th  = _mm_set_epi32(0,          0xFFFFFF00, 0,          0         );
    const __m128i mask_0_5th  = _mm_set_epi32(0x00FFFFFF, 0,          0,          0         );
    const __m128i mask_0_6th  = _mm_set_epi32(0xFF000000, 0,          0,          0         ); 
    // Masks for the second write
    const __m128i mask_1_6th  = _mm_set_epi32(0,          0,          0,          0x0000FFFF);
    const __m128i mask_1_7th  = _mm_set_epi32(0,          0,          0x000000FF, 0xFFFF0000);
    const __m128i mask_1_8th  = _mm_set_epi32(0,          0,          0xFFFFFF00, 0         );
    const __m128i mask_1_9th  = _mm_set_epi32(0,          0x00FFFFFF, 0,          0         );
    const __m128i mask_1_10th = _mm_set_epi32(0x0000FFFF, 0xFF000000, 0,          0         );
    const __m128i mask_1_11th = _mm_set_epi32(0xFFFF0000, 0,          0,          0         );
    // Masks for the third write
    const __m128i mask_2_11th = _mm_set_epi32(0,          0,          0,          0x000000FF);
    const __m128i mask_2_12th = _mm_set_epi32(0,          0,          0,          0xFFFFFF00);
    const __m128i mask_2_13th = _mm_set_epi32(0,          0,          0x00FFFFFF, 0         );
    const __m128i mask_2_14th = _mm_set_epi32(0,          0x0000FFFF, 0xFF000000, 0         );
    const __m128i mask_2_15th = _mm_set_epi32(0x000000FF, 0xFFFF0000, 0,          0         );
    const __m128i mask_2_16th = _mm_set_epi32(0xFFFFFF00, 0,          0,          0         );
    // Convert the RGB565 data into RGB888 data
    __m128i *packed_rgb888_buf = (__m128i*)rgb888buf;
    for (size_t i = 0; i < buf_size; i += 16) {
        // Need to do 16 pixels at a time -> the smallest number of 24bpp pixels that fills whole XMM registers
        __m128i rgb565pix0_raw = _mm_load_si128((__m128i *)(&rgb565buf[i]));
        __m128i rgb565pix1_raw = _mm_load_si128((__m128i *)(&rgb565buf[i+8]));
        // Extend the 16b ints to 32b ints
        __m128i rgb565pix0lo_32b = _mm_unpacklo_epi16(rgb565pix0_raw, _mm_setzero_si128());
        __m128i rgb565pix0hi_32b = _mm_unpackhi_epi16(rgb565pix0_raw, _mm_setzero_si128());
        // Shift each color channel into the correct position and mask off the other bits
        __m128i rgb888pix0lo_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix0lo_32b, 8)); // Block 0 low pixels
        __m128i rgb888pix0lo_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix0lo_32b, 5));
        __m128i rgb888pix0lo_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix0lo_32b, 3));
        __m128i rgb888pix0hi_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix0hi_32b, 8)); // Block 0 high pixels
        __m128i rgb888pix0hi_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix0hi_32b, 5));
        __m128i rgb888pix0hi_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix0hi_32b, 3));
        // Combine each color channel into a single vector of four 32bpp pixels
        __m128i rgb888pix0lo_32b = _mm_or_si128(rgb888pix0lo_r, _mm_or_si128(rgb888pix0lo_g, rgb888pix0lo_b));
        __m128i rgb888pix0hi_32b = _mm_or_si128(rgb888pix0hi_r, _mm_or_si128(rgb888pix0hi_g, rgb888pix0hi_b));
        // Same thing as above for the next block of pixels
        __m128i rgb565pix1lo_32b = _mm_unpacklo_epi16(rgb565pix1_raw, _mm_setzero_si128());
        __m128i rgb565pix1hi_32b = _mm_unpackhi_epi16(rgb565pix1_raw, _mm_setzero_si128());
        __m128i rgb888pix1lo_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix1lo_32b, 8)); // Block 1 low pixels
        __m128i rgb888pix1lo_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix1lo_32b, 5));
        __m128i rgb888pix1lo_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix1lo_32b, 3));
        __m128i rgb888pix1hi_r = _mm_and_si128(mask_r, _mm_slli_epi32(rgb565pix1hi_32b, 8)); // Block 1 high pixels
        __m128i rgb888pix1hi_g = _mm_and_si128(mask_g, _mm_slli_epi32(rgb565pix1hi_32b, 5));
        __m128i rgb888pix1hi_b = _mm_and_si128(mask_b, _mm_slli_epi32(rgb565pix1hi_32b, 3));
        __m128i rgb888pix1lo_32b = _mm_or_si128(rgb888pix1lo_r, _mm_or_si128(rgb888pix1lo_g, rgb888pix1lo_b));
        __m128i rgb888pix1hi_32b = _mm_or_si128(rgb888pix1hi_r, _mm_or_si128(rgb888pix1hi_g, rgb888pix1hi_b));
        // At this point, rgb888pix_32b contains the pixel data in 32bpp format, need to compress it to 24bpp
        // Use the _mm_bs*li_si128(__m128i, int) intrinsic to shift each 24bpp pixel into its final position
        // ...then mask off the other pixels and combine the result together with or
        __m128i pix_0_1st = _mm_and_si128(mask_0_1st,                 rgb888pix0lo_32b     ); // First 4 pixels
        __m128i pix_0_2nd = _mm_and_si128(mask_0_2nd, _mm_bsrli_si128(rgb888pix0lo_32b, 1 ));
        __m128i pix_0_3rd = _mm_and_si128(mask_0_3rd, _mm_bsrli_si128(rgb888pix0lo_32b, 2 ));
        __m128i pix_0_4th = _mm_and_si128(mask_0_4th, _mm_bsrli_si128(rgb888pix0lo_32b, 3 ));
        __m128i pix_0_5th = _mm_and_si128(mask_0_5th, _mm_bslli_si128(rgb888pix0hi_32b, 12)); // Second 4 pixels
        __m128i pix_0_6th = _mm_and_si128(mask_0_6th, _mm_bslli_si128(rgb888pix0hi_32b, 11));
        // Combine each piece of 24bpp pixel data into a single 128b variable
        __m128i pix128_0 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_0_1st, pix_0_2nd), pix_0_3rd), 
                                        _mm_or_si128(_mm_or_si128(pix_0_4th, pix_0_5th), pix_0_6th));
        _mm_store_si128(packed_rgb888_buf, pix128_0);
        // Repeat the same for the second 128b write
        __m128i pix_1_6th  = _mm_and_si128(mask_1_6th,  _mm_bsrli_si128(rgb888pix0hi_32b, 5 ));
        __m128i pix_1_7th  = _mm_and_si128(mask_1_7th,  _mm_bsrli_si128(rgb888pix0hi_32b, 6 ));
        __m128i pix_1_8th  = _mm_and_si128(mask_1_8th,  _mm_bsrli_si128(rgb888pix0hi_32b, 7 ));
        __m128i pix_1_9th  = _mm_and_si128(mask_1_9th,  _mm_bslli_si128(rgb888pix1lo_32b, 8 )); // Third 4 pixels
        __m128i pix_1_10th = _mm_and_si128(mask_1_10th, _mm_bslli_si128(rgb888pix1lo_32b, 7 ));
        __m128i pix_1_11th = _mm_and_si128(mask_1_11th, _mm_bslli_si128(rgb888pix1lo_32b, 6 ));
        __m128i pix128_1 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_1_6th, pix_1_7th),  pix_1_8th ), 
                                        _mm_or_si128(_mm_or_si128(pix_1_9th, pix_1_10th), pix_1_11th));
        _mm_store_si128(packed_rgb888_buf+1, pix128_1);
        // And again for the third 128b write
        __m128i pix_2_11th = _mm_and_si128(mask_2_11th, _mm_bsrli_si128(rgb888pix1lo_32b, 10));
        __m128i pix_2_12th = _mm_and_si128(mask_2_12th, _mm_bsrli_si128(rgb888pix1lo_32b, 11));
        __m128i pix_2_13th = _mm_and_si128(mask_2_13th, _mm_bslli_si128(rgb888pix1hi_32b,  4)); // Fourth 4 pixels
        __m128i pix_2_14th = _mm_and_si128(mask_2_14th, _mm_bslli_si128(rgb888pix1hi_32b,  3));
        __m128i pix_2_15th = _mm_and_si128(mask_2_15th, _mm_bslli_si128(rgb888pix1hi_32b,  2));
        __m128i pix_2_16th = _mm_and_si128(mask_2_16th, _mm_bslli_si128(rgb888pix1hi_32b,  1));
        __m128i pix128_2 = _mm_or_si128(_mm_or_si128(_mm_or_si128(pix_2_11th, pix_2_12th), pix_2_13th), 
                                        _mm_or_si128(_mm_or_si128(pix_2_14th, pix_2_15th), pix_2_16th));
        _mm_store_si128(packed_rgb888_buf+2, pix128_2);
        // Update pointer for next iteration
        packed_rgb888_buf += 3;
    }
    for (size_t i = 0; i < buf_size; i++) {
        uint8_t r565 = (i + 10) & 0x1F;
        uint8_t g565 = i & 0x3F;
        uint8_t b565 = (i + 20) & 0x1F;
        printf("%2zu] RGB = (%02x,%02x,%02x), should be (%02x,%02x,%02x)\n", i, rgb888buf[3*i+2],
                rgb888buf[3*i+1], rgb888buf[3*i], r565 << 3, g565 << 2, b565 << 3);
    }
    return EXIT_SUCCESS;
}

EDIT: Here is a second method for packing the 32bpp pixel data down to 24bpp. I haven't tested whether it's actually faster, though I would assume so, since it executes fewer instructions and doesn't need a tree of ORs at the end to combine the results.

It's a bit less clear how it works, though.

In this version, a combination of shifts and shuffles is used to move whole blocks of pixels into place together, rather than masking off and moving each pixel individually. The method for converting 16bpp to 32bpp is unchanged.

First, I define a helper function that left-shifts the low uint32 of each 64-bit half of an __m128i by one byte:

__m128i bslli_low_dword_once(__m128i x) {
    // Multiply the low dword of each 64-bit half by 256 to shift it left 8 bits
    const __m128i shift_multiplier = _mm_set1_epi32(1<<8);
    // Keep the high dword of each 64-bit half unchanged
    const __m128i mask = _mm_set_epi32(0xFFFFFFFF, 0, 0xFFFFFFFF, 0);
    return _mm_or_si128(_mm_and_si128(x, mask), _mm_mul_epu32(x, shift_multiplier));
}
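
As a sanity check on what the helper produces (dword 0 is the lowest; each pixel has the form 0x00RRGGBB):

// If x = [ p0, p1, p2, p3 ] as dwords, then
// bslli_low_dword_once(x) = [ p0<<8, p1, p2<<8, p3 ].
// _mm_mul_epu32 multiplies dwords 0 and 2 by 256 into 64-bit products; since
// the top byte of each 32bpp pixel is zero, each product still fits in 32 bits,
// so nothing spills into dwords 1 and 3, which the mask + or preserve.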

The only other change, then, is the code that packs the 32bpp data down to 24bpp:

// At this point, rgb888pix_32b contains the pixel data in 32bpp format, need to compress it to 24bpp
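// Recurring pattern below: bslli_low_dword_once + _mm_srli_epi64 packs each
// pair of 32bpp pixels into the low 6 bytes of its 64-bit half, leaving a
// 2-byte gap per half; _mm_shufflelo_epi16 then moves the low half's gap to
// the front so a single whole-register byte shift can close both gaps.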
__m128i pix_0_block0lo = bslli_low_dword_once(rgb888pix0lo_32b);
        pix_0_block0lo = _mm_srli_epi64(pix_0_block0lo, 8);
        pix_0_block0lo = _mm_shufflelo_epi16(pix_0_block0lo, _MM_SHUFFLE(2, 1, 0, 3));
        pix_0_block0lo = _mm_bsrli_si128(pix_0_block0lo, 2);
__m128i pix_0_block0hi = _mm_unpacklo_epi64(_mm_setzero_si128(), rgb888pix0hi_32b);
        pix_0_block0hi = bslli_low_dword_once(pix_0_block0hi);
        pix_0_block0hi = _mm_bslli_si128(pix_0_block0hi, 3);
__m128i pix128_0 = _mm_or_si128(pix_0_block0lo, pix_0_block0hi);
_mm_store_si128(packed_rgb888_buf, pix128_0);
// Do the same basic thing for the next 128b chunk of pixel data
__m128i pix_1_block0hi = bslli_low_dword_once(rgb888pix0hi_32b);
        pix_1_block0hi = _mm_srli_epi64(pix_1_block0hi, 8);
        pix_1_block0hi = _mm_shufflelo_epi16(pix_1_block0hi, _MM_SHUFFLE(2, 1, 0, 3));
        pix_1_block0hi = _mm_bsrli_si128(pix_1_block0hi, 6);
__m128i pix_1_block1lo = bslli_low_dword_once(rgb888pix1lo_32b);
        pix_1_block1lo = _mm_srli_epi64(pix_1_block1lo, 8);
        pix_1_block1lo = _mm_shufflelo_epi16(pix_1_block1lo, _MM_SHUFFLE(2, 1, 0, 3));
        pix_1_block1lo = _mm_bslli_si128(pix_1_block1lo, 6);
__m128i pix128_1 = _mm_or_si128(pix_1_block0hi, pix_1_block1lo);
_mm_store_si128(packed_rgb888_buf+1, pix128_1);
// And again for the final chunk
__m128i pix_2_block1lo = bslli_low_dword_once(rgb888pix1lo_32b);
        pix_2_block1lo = _mm_bsrli_si128(pix_2_block1lo, 11);
__m128i pix_2_block1hi = bslli_low_dword_once(rgb888pix1hi_32b);
        pix_2_block1hi = _mm_srli_epi64(pix_2_block1hi, 8);
        pix_2_block1hi = _mm_shufflelo_epi16(pix_2_block1hi, _MM_SHUFFLE(2, 1, 0, 3));
        pix_2_block1hi = _mm_bslli_si128(pix_2_block1hi, 2);
__m128i pix128_2 = _mm_or_si128(pix_2_block1lo, pix_2_block1hi);
_mm_store_si128(packed_rgb888_buf+2, pix128_2);