
What to pass to avcodec_decode_video2 for H.264 Transport Stream?


I want to decode H.264 video from MPEG-2 transport stream packets, but I'm not clear on what to pass to avcodec_decode_video2.

The documentation says to pass "the input AVPacket containing the input buffer".

But what should be in the input buffer?

A PES packet will be spread across the payload portions of several TS packets, and the NALU(s) are inside the PES. So do I pass a TS fragment? The whole PES? Just the PES payload?

This sample code mentions:

But some other codecs (msmpeg4, mpeg4) are inherently frame based, so you must call them with all the data for one frame exactly. You must also initialize 'width' and 'height' before initializing them.

But I can't find any information about what "all the data" means...

Passing a fragment of a TS packet's payload does not work:

AVPacket avDecPkt;
av_init_packet(&avDecPkt);
avDecPkt.data = inbuf_ptr;
avDecPkt.size = esBufSize;
len = avcodec_decode_video2(mpDecoderContext, mpFrameDec, &got_picture, &avDecPkt);
if (len < 0)
{
    printf("  TS PKT #%.0f. Error decoding frame #%04d [rc=%d '%s']\n",
        tsPacket.pktNum, mDecodedFrameNum, len, av_make_error_string(errMsg, 128, len));
    return;
}

Output:

[h264 @ 0x81cd2a0] no frame!
TS PKT #2973. Error decoding frame #0001 [rc=-1094995529 'Invalid data found when processing input']

EDIT

Using WLGFX's excellent hints, I made this simple program to try decoding TS packets. As input, I prepared a file containing only the TS packets from the video PID.

It feels close, but I don't know how to set up the format. The code below segfaults in av_read_frame() (at ret = s-&gt;iformat-&gt;read_packet(s, pkt)) because s-&gt;iformat is zero.

Suggestions?

EDIT II - Sorry, source code added after the fact. EDIT III - Updated sample code to simulate reading from a TS packet queue.

/*
 * Test program for video decoder
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
extern "C" {
#ifdef __cplusplus
    #define __STDC_CONSTANT_MACROS
    #ifdef _STDINT_H
        #undef _STDINT_H
    #endif
    #include <stdint.h>
#endif
}
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
}

class VideoDecoder
{
public:
    VideoDecoder();
    bool rcvTsPacket(AVPacket &inTsPacket);
private:
    AVCodec         *mpDecoder;
    AVCodecContext  *mpDecoderContext;
    AVFrame         *mpDecodedFrame;
    AVFormatContext *mpFmtContext;
};
VideoDecoder::VideoDecoder()
{
    av_register_all();
    // FORMAT CONTEXT SETUP
    mpFmtContext = avformat_alloc_context();
    mpFmtContext->flags = AVFMT_NOFILE;
    // ????? WHAT ELSE ???? //
    // DECODER SETUP
    mpDecoder = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!mpDecoder)
    {
        printf("Could not load decoder\n");
        exit(11);
    }
    mpDecoderContext = avcodec_alloc_context3(mpDecoder);
    if (avcodec_open2(mpDecoderContext, mpDecoder, NULL) < 0)
    {
        printf("Cannot open decoder context\n");
        exit(1);
    }
    mpDecodedFrame = av_frame_alloc();
}
bool
VideoDecoder::rcvTsPacket(AVPacket &inTsPkt)
{
    bool ret = true;
    if ((av_read_frame(mpFmtContext, &inTsPkt)) < 0)
    {
        printf("Error in av_read_frame()\n");
        ret = false;
    }
    else
    {
        // success.  Decode the TS packet
        int got;
        int len = avcodec_decode_video2(mpDecoderContext, mpDecodedFrame, &got, &inTsPkt);
        if (len < 0)
            ret = false;
        if (got)
            printf("GOT A DECODED FRAME\n");
    }
    return ret;
}
int
main(int argc, char **argv)
{
    if (argc != 2)
    {
        printf("Usage: %s tsInFile\n", argv[0]);
        exit(1);
    }
    FILE *tsInFile = fopen(argv[1], "rb");
    if (!tsInFile)
    {
        perror("Could not open TS input file");
        exit(2);
    }
    unsigned int tsPktNum = 0;
    uint8_t      tsBuffer[256];
    AVPacket     tsPkt;
    av_init_packet(&tsPkt);
    VideoDecoder vDecoder;
    while (!feof(tsInFile))
    {
        tsPktNum++;
        tsPkt.size = 188;
        tsPkt.data = tsBuffer;
        fread(tsPkt.data, 188, 1, tsInFile);
        vDecoder.rcvTsPacket(tsPkt);
    }
}

I have some code snippets that might help you with MPEG-TS.

Starting with my packet reader thread, which checks each packet against the stream IDs I have already found and for which I have obtained codec contexts:

void *FFMPEG::thread_packet_function(void *arg) {
    FFMPEG *ffmpeg = (FFMPEG*)arg;
    for (int c = 0; c < MAX_PACKETS; c++)
        ffmpeg->free_packets[c] = &ffmpeg->packet_list[c];
    ffmpeg->packet_pos = MAX_PACKETS;
    Audio.start_decoding();
    Video.start_decoding();
    Subtitle.start_decoding();
    while (!ffmpeg->thread_quit) {
        if (ffmpeg->packet_pos != 0 &&
                Audio.okay_add_packet() &&
                Video.okay_add_packet() &&
                Subtitle.okay_add_packet()) {
            pthread_mutex_lock(&ffmpeg->packet_mutex); // get free packet
            AVPacket *pkt = ffmpeg->free_packets[--ffmpeg->packet_pos]; // pre decrement
            pthread_mutex_unlock(&ffmpeg->packet_mutex);
            if ((av_read_frame(ffmpeg->fContext, pkt)) >= 0) { // success
                int id = pkt->stream_index;
                if (id == ffmpeg->aud_stream.stream_id) Audio.add_packet(pkt);
                else if (id == ffmpeg->vid_stream.stream_id) Video.add_packet(pkt);
                else if (id == ffmpeg->sub_stream.stream_id) Subtitle.add_packet(pkt);
                else { // unknown packet
                    av_packet_unref(pkt);
                    pthread_mutex_lock(&ffmpeg->packet_mutex); // put packet back
                    ffmpeg->free_packets[ffmpeg->packet_pos++] = pkt;
                    pthread_mutex_unlock(&ffmpeg->packet_mutex);
                    //LOGI("Dumping unknown packet, id %d", id);
                }
            } else {
                av_packet_unref(pkt);
                pthread_mutex_lock(&ffmpeg->packet_mutex); // put packet back
                ffmpeg->free_packets[ffmpeg->packet_pos++] = pkt;
                pthread_mutex_unlock(&ffmpeg->packet_mutex);
                //LOGI("No packet read");
            }
        } else { // buffers full so yield
            //LOGI("Packet reader on hold: Audio-%d, Video-%d, Subtitle-%d",
            //  Audio.packet_pos, Video.packet_pos, Subtitle.packet_pos);
            usleep(1000);
            //sched_yield();
        }
    }
    return 0;
}

Each of the decoders for audio, video and subtitles has its own thread; they receive packets from the thread above via ring buffers. I had to split the decoders into their own threads because CPU usage started climbing once I began using the deinterlace filter.

My video decoder reads packets from the buffer and, once it has finished with a packet, sends it back so it can be used again. Balancing the packet buffers doesn't take much time once everything is running.

Here is a snippet from my video decoder:

void *VideoManager::decoder(void *arg) {
    LOGI("Video decoder started");
    VideoManager *mgr = (VideoManager *)arg;
    while (!ffmpeg.thread_quit) {
        pthread_mutex_lock(&mgr->packet_mutex);
        if (mgr->packet_pos != 0) {
            // fetch first packet to decode
            AVPacket *pkt = mgr->packets[0];
            // shift list down one
            for (int c = 1; c < mgr->packet_pos; c++) {
                mgr->packets[c-1] = mgr->packets[c];
            }
            mgr->packet_pos--;
            pthread_mutex_unlock(&mgr->packet_mutex); // finished with packets array
            int got;
            AVFrame *frame = ffmpeg.vid_stream.frame;
            avcodec_decode_video2(ffmpeg.vid_stream.context, frame, &got, pkt);
            ffmpeg.finished_with_packet(pkt);
            if (got) {
#ifdef INTERLACE_ALL
                if (!frame->interlaced_frame) mgr->add_av_frame(frame, 0);
                else {
                    if (!mgr->filter_initialised) mgr->init_filter_graph(frame);
                    av_buffersrc_add_frame_flags(mgr->filter_src_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    int c = 0;
                    while (true) {
                        AVFrame *filter_frame = ffmpeg.vid_stream.filter_frame;
                        int result = av_buffersink_get_frame(mgr->filter_sink_ctx, filter_frame);
                        if (result == AVERROR(EAGAIN) ||
                                result == AVERROR_EOF ||
                                result < 0)
                            break;
                        mgr->add_av_frame(filter_frame, c++);
                        av_frame_unref(filter_frame);
                    }
                    //LOGI("Interlaced %d frames, decode %d, playback %d", c, mgr->decode_pos, mgr->playback_pos);
                }
#elif defined(INTERLACE_HALF)
                if (!frame->interlaced_frame) mgr->add_av_frame(frame, 0);
                else {
                    if (!mgr->filter_initialised) mgr->init_filter_graph(frame);
                    av_buffersrc_add_frame_flags(mgr->filter_src_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    int c = 0;
                    while (true) {
                        AVFrame *filter_frame = ffmpeg.vid_stream.filter_frame;
                        int result = av_buffersink_get_frame(mgr->filter_sink_ctx, filter_frame);
                        if (result == AVERROR(EAGAIN) ||
                                result == AVERROR_EOF ||
                                result < 0)
                            break;
                        mgr->add_av_frame(filter_frame, c++);
                        av_frame_unref(filter_frame);
                    }
                    //LOGI("Interlaced %d frames, decode %d, playback %d", c, mgr->decode_pos, mgr->playback_pos);
                }
#else
                mgr->add_av_frame(frame, 0);
#endif
            }
            //LOGI("decoded video packet");
        } else {
            pthread_mutex_unlock(&mgr->packet_mutex);
        }
    }
    LOGI("Video decoder ended");
}

As you can see, I use a mutex when passing packets back and forth.

Once a frame has been decoded, I just copy the YUV buffers from the frame into another buffer list for later use. I don't convert the YUV; I use a shader that converts YUV to RGB on the GPU.

This next snippet adds my decoded frame to my buffer list. It may help you understand how to deal with the data.

void VideoManager::add_av_frame(AVFrame *frame, int field_num) {
    int y_linesize = frame->linesize[0];
    int u_linesize = frame->linesize[1];
    int hgt = frame->height;
    int y_buffsize = y_linesize * hgt;
    int u_buffsize = u_linesize * hgt / 2;
    int buffsize = y_buffsize + u_buffsize + u_buffsize;
    VideoBuffer *buffer = &buffers[decode_pos];
    if (ffmpeg.is_network && playback_pos == decode_pos) { // patched 25/10/16 wlgfx
        buffer->used = false;
        if (!buffer->data) buffer->data = (char*)mem.alloc(buffsize);
        if (!buffer->data) {
            LOGI("Dropped frame, allocation error");
            return;
        }
    } else if (playback_pos == decode_pos) {
        LOGI("Dropped frame, ran out of decoder frame buffers");
        return;
    } else if (!buffer->data) {
        buffer->data = (char*)mem.alloc(buffsize);
        if (!buffer->data) {
            LOGI("Dropped frame, allocation error.");
            return;
        }
    }
    buffer->y_frame = buffer->data;
    buffer->u_frame = buffer->y_frame + y_buffsize;
    buffer->v_frame = buffer->y_frame + y_buffsize + u_buffsize;
    buffer->wid = frame->width;
    buffer->hgt = hgt;
    buffer->y_linesize = y_linesize;
    buffer->u_linesize = u_linesize;
    int64_t pts = av_frame_get_best_effort_timestamp(frame);
    buffer->pts = pts;
    buffer->buffer_size = buffsize;
    double field_add = av_q2d(ffmpeg.vid_stream.context->time_base) * field_num;
    buffer->frame_time = av_q2d(ts_stream) * pts + field_add;
    memcpy(buffer->y_frame, frame->data[0], (size_t) (buffer->y_linesize * buffer->hgt));
    memcpy(buffer->u_frame, frame->data[1], (size_t) (buffer->u_linesize * buffer->hgt / 2));
    memcpy(buffer->v_frame, frame->data[2], (size_t) (buffer->u_linesize * buffer->hgt / 2));
    buffer->used = true;
    decode_pos = (decode_pos + 1) % MAX_VID_BUFFERS;
    //if (field_num == 0) LOGI("Video %.2f, %d - %d",
    //        buffer->frame_time - Audio.pts_start_time, decode_pos, playback_pos);
}

If there's anything else I may be able to help with, just shout. :-)

EDIT:

A snippet showing how I open the video stream context, which automatically determines the codec, whether it's H264, MPEG2 or something else:

void FFMPEG::open_video_stream() {
    vid_stream.stream_id = av_find_best_stream(fContext, AVMEDIA_TYPE_VIDEO,
                                                -1, -1, &vid_stream.codec, 0);
    if (vid_stream.stream_id == -1) return;
    vid_stream.context = fContext->streams[vid_stream.stream_id]->codec;
    if (!vid_stream.codec || avcodec_open2(vid_stream.context,
            vid_stream.codec, NULL) < 0) {
        vid_stream.stream_id = -1;
        return;
    }
    vid_stream.frame = av_frame_alloc();
    vid_stream.filter_frame = av_frame_alloc();
}

EDIT 2:

This is how I open the input stream, whether it's a file or a URL. The AVFormatContext is the main context for the stream.

bool FFMPEG::start_stream(char *url_, float xtrim, float ytrim, int gain) {
    aud_stream.stream_id = -1;
    vid_stream.stream_id = -1;
    sub_stream.stream_id = -1;
    this->url = url_;
    this->xtrim = xtrim;
    this->ytrim = ytrim;
    Audio.volume = gain;
    Audio.init();
    Video.init();
    fContext = avformat_alloc_context();
    if ((avformat_open_input(&fContext, url_, NULL, NULL)) != 0) {
        stop_stream();
        return false;
    }
    if ((avformat_find_stream_info(fContext, NULL)) < 0) {
        stop_stream();
        return false;
    }
    // network stream will overwrite packets if buffer is full
    is_network =  url.substr(0, 4) == "udp:" ||
                  url.substr(0, 4) == "rtp:" ||
                  url.substr(0, 5) == "rtsp:" ||
                  url.substr(0, 5) == "http:";  // added for wifi broadcasting ability
    // determine if stream is audio only
    is_mp3 = url.substr(url.size() - 4) == ".mp3";
    LOGI("Stream: %s", url_);
    if (!open_audio_stream()) {
        stop_stream();
        return false;
    }
    if (is_mp3) {
        vid_stream.stream_id = -1;
        sub_stream.stream_id = -1;
    } else {
        open_video_stream();
        open_subtitle_stream();
        if (vid_stream.stream_id == -1) { // switch to audio only
            close_subtitle_stream();
            is_mp3 = true;
        }
    }
    LOGI("Audio: %d, Video: %d, Subtitle: %d",
            aud_stream.stream_id,
            vid_stream.stream_id,
            sub_stream.stream_id);
    if (aud_stream.stream_id != -1) {
        LOGD("Audio stream time_base {%d, %d}",
            aud_stream.context->time_base.num,
            aud_stream.context->time_base.den);
    }
    if (vid_stream.stream_id != -1) {
        LOGD("Video stream time_base {%d, %d}",
            vid_stream.context->time_base.num,
            vid_stream.context->time_base.den);
    }
    LOGI("Starting packet and decode threads");
    thread_quit = false;
    pthread_create(&thread_packet, NULL, &FFMPEG::thread_packet_function, this);
    Display.set_overlay_timout(3.0);
    return true;
}

EDIT: (constructing an AVPacket)

Building an AVPacket to send to the decoder...

AVPacket packet;
av_init_packet(&packet);
packet.data = myTSpacketdata; // pointer to the TS packet
packet.size = 188;

You should be able to reuse the packet. It might need unref'ing.

You must first use the avcodec library to get the compressed frames out of the file. Then you can decode them with avcodec_decode_video2. Take a look at this tutorial: http://dranger.com/ffmpeg/