
How to write a Live555 FramedSource to allow me to stream H.264 live


I have been trying to write a class derived from Live555's FramedSource that will let me stream live data from my D3D9 application to an MP4 container or similar.

What I do each frame is grab the backbuffer into system memory as a texture, convert it from RGB to YUV420P, encode it with x264, and then, ideally, pass the NAL packets on to Live555. I created a class called H264FramedSource that derives from FramedSource, basically by copying the DeviceSource file. Instead of the input being an input file, it is a NAL packet that I update every frame.

I am quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like this?

memcpy(fTo, nal->p_payload, nal->i_payload)

Am I right in assuming that the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that is at least close to what I am trying to do, I would love to see it. This is all new to me, and it is a little tricky to figure out what is happening; Live555's documentation is pretty much the code itself, which does not exactly make it easy to work out.
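
In other words, is the copy into fTo supposed to look roughly like this? This is just a sketch of what I think the pattern is, with fMaxSize handled the way DeviceSource.cpp does, and nal standing for whichever x264_nal_t is currently pending:

// Sketch only: honour fMaxSize so oversized NAL units are truncated rather than overrun.
unsigned newFrameSize = nal->i_payload;
if (newFrameSize > fMaxSize) {
    fFrameSize = fMaxSize;
    fNumTruncatedBytes = newFrameSize - fMaxSize;
} else {
    fFrameSize = newFrameSize;
    fNumTruncatedBytes = 0;
}
memcpy(fTo, nal->p_payload, fFrameSize);
gettimeofday(&fPresentationTime, NULL);
FramedSource::afterGetting(this);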

OK, I finally had some time to spend on this and got it working! Since I am sure there are others who will want to know how to do it, here it is.

You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I provide some of the source code for this below.

Essentially, plug your FramedSource into an H264VideoStreamDiscreteFramer, then plug that into an H264RTPSink. Something like this (the RTP sink part is sketched right after the snippet):

scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer =
    H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// Initialise the RTP sink stuff here; look at
// testH264VideoStreamer.cpp to find out how.
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
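
For completeness, the sink setup that comment points at looks roughly like the following, lifted from testH264VideoStreamer.cpp. Treat it as a sketch: the ports, buffer size and bandwidth figure here are placeholders, not values this answer prescribes.

// Hedged sketch of the RTP/RTCP sink setup, following testH264VideoStreamer.cpp.
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
const unsigned short rtpPortNum = 18888;
const unsigned short rtcpPortNum = rtpPortNum + 1;
const unsigned char ttl = 255;
const Port rtpPort(rtpPortNum);
const Port rtcpPort(rtcpPortNum);
Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock.multicastSendOnly();
Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
rtcpGroupsock.multicastSendOnly();
OutPacketBuffer::maxSize = 600000; // placeholder; large enough for big NAL units
RTPSink* videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);
// RTCP instance (needed by some clients for reception reports):
const unsigned estimatedSessionBandwidth = 500; // kbps, placeholder
const unsigned maxCNAMElen = 100;
unsigned char CNAME[maxCNAMElen + 1];
gethostname((char*)CNAME, maxCNAMElen);
CNAME[maxCNAMElen] = '\0';
RTCPInstance::createNew(*env, &rtcpGroupsock, estimatedSessionBandwidth,
                        CNAME, videoSink, NULL /* we're a server */, True /* SSM */);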

Now, in your main render loop, hand the backbuffer you have saved into system memory over to your FramedSource so it can be encoded and so on. For more information on how to set up the encoding side, check out this answer: How does one encode a series of images into H264 using the x264 C API?
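
By "hand the backbuffer over" I mean something roughly like this in the render loop. It is only a sketch: pDevice, framedSource, W and H are the names I use here, and the surface format has to match whatever source format you create the SwsContext with (my code below assumes tightly packed RGB24 going into sws_scale).

// Hedged sketch: copy the current backbuffer into a lockable system-memory
// surface and hand the raw pixels to the FramedSource for conversion/encoding.
IDirect3DSurface9* backBuffer = NULL;
IDirect3DSurface9* sysMemSurface = NULL;
pDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
pDevice->CreateOffscreenPlainSurface(W, H, D3DFMT_X8R8G8B8,
                                     D3DPOOL_SYSTEMMEM, &sysMemSurface, NULL);
pDevice->GetRenderTargetData(backBuffer, sysMemSurface); // GPU -> CPU copy
D3DLOCKED_RECT rect;
if (SUCCEEDED(sysMemSurface->LockRect(&rect, NULL, D3DLOCK_READONLY)))
{
    // AddToBuffer() (shown below) copies the pixels and runs them
    // through sws_scale and x264; it is fed one full frame per call.
    framedSource->AddToBuffer((uint8_t*)rect.pBits, rect.Pitch * H);
    sysMemSurface->UnlockRect();
}
sysMemSurface->Release();
backBuffer->Release();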

My implementation is very hacky and not optimised at all; because of the encoding, my D3D application runs at around 15 fps, ouch, so I will have to look into that. But for all intents and purposes this StackOverflow question is answered, because I was mainly after how to stream. I hope this helps somebody else.

As for my FramedSource, it looks a little something like this:

concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;

EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
                                              unsigned preferredFrameSize,
                                              unsigned playTimePerFrame)
{
    return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
                                   unsigned preferredFrameSize,
                                   unsigned playTimePerFrame)
    : FramedSource(env),
      fPreferredFrameSize(preferredFrameSize),
      fPlayTimePerFrame(playTimePerFrame),
      fLastPlayTime(0),
      fCurIndex(0)
{
    if (referenceCount == 0)
    {
        // Any one-time initialisation shared by all instances would go here.
    }
    ++referenceCount;

    // x264 encoder setup, tuned for low-latency streaming.
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_threads = 1;
    param.i_width = 720;
    param.i_height = 960;
    param.i_fps_num = 60;
    param.i_fps_den = 1;
    // Intra refresh:
    param.i_keyint_max = 60;
    param.b_intra_refresh = 1;
    // Rate control:
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.f_rf_constant = 25;
    param.rc.f_rf_constant_max = 35;
    param.i_sps_id = 7;
    // For streaming:
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    x264_param_apply_profile(&param, "baseline");

    encoder = x264_encoder_open(&param);
    pic_in.i_type      = X264_TYPE_AUTO;
    pic_in.i_qpplus1   = 0;
    pic_in.img.i_csp   = X264_CSP_I420;
    pic_in.img.i_plane = 3;

    // Picture buffer and colour-space converter must match the stream dimensions (720x960).
    x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);
    convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24,
                                720, 960, PIX_FMT_YUV420P,
                                SWS_FAST_BILINEAR, NULL, NULL, NULL);

    if (eventTriggerId == 0)
    {
        // Event used to wake the Live555 event loop when a new NAL is queued.
        eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
    }
}
H264FramedSource::~H264FramedSource()
{
    --referenceCount;
    if (referenceCount == 0)
    {
        // Reclaim our 'event trigger'
        envir().taskScheduler().deleteEventTrigger(eventTriggerId);
        eventTriggerId = 0;
    }
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
    // Take a private copy of the RGB surface handed in by the render loop.
    uint8_t* surfaceData = new uint8_t[surfaceSizeInBytes];
    memcpy(surfaceData, buf, surfaceSizeInBytes);

    // Convert RGB24 -> YUV420P into the x264 input picture.
    int srcstride = W * 3;
    sws_scale(convertCtx, &surfaceData, &srcstride, 0, H,
              pic_in.img.plane, pic_in.img.i_stride);

    x264_nal_t* nals = NULL;
    int i_nals = 0;
    int frame_size = -1;

    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
    if (frame_size >= 0)
    {
        static bool alreadydone = false;
        if (!alreadydone)
        {
            // On the very first frame, fetch the SPS/PPS header NALs
            // (note that this replaces that frame's NALs with the headers).
            x264_encoder_headers(encoder, &nals, &i_nals);
            alreadydone = true;
        }
        // Queue every NAL unit produced for this frame.
        for (int i = 0; i < i_nals; ++i)
        {
            m_queue.push(nals[i]);
        }
    }
    delete [] surfaceData;
    surfaceData = NULL;

    // Wake the Live555 event loop so it can deliver the queued NALs.
    envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
    deliverFrame();
}

void H264FramedSource::deliverFrame0(void* clientData)
{
    ((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
    x264_nal_t nalToDeliver;

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds / 1000000;
            fPresentationTime.tv_usec = uSeconds % 1000000;
        }
        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    if (!m_queue.empty())
    {
        m_queue.wait_and_pop(nalToDeliver);
        uint8_t* newFrameDataStart = (uint8_t*)nalToDeliver.p_payload;
        unsigned newFrameSize = nalToDeliver.i_payload;

        // Deliver the data, truncating if the NAL is larger than the sink's buffer:
        if (newFrameSize > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = newFrameSize - fMaxSize;
        } else {
            fFrameSize = newFrameSize;
        }
        memcpy(fTo, newFrameDataStart, fFrameSize);

        // Tell the downstream object that data is ready.
        FramedSource::afterGetting(this);
    }
}

Oh, and for those wondering what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
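
If you don't want to follow the link, a minimal sketch of such a queue with the interface I use above (push, empty, wait_and_pop), written with C++11 primitives instead of the Boost ones in the article, would be something like:

#include <queue>
#include <mutex>
#include <condition_variable>

// Thread-safe FIFO protected by a mutex; wait_and_pop blocks until data arrives.
template<typename T>
class concurrent_queue
{
public:
    void push(const T& value)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_data.push(value);
        m_cond.notify_one();
    }
    bool empty() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_data.empty();
    }
    void wait_and_pop(T& value)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_data.empty(); });
        value = m_data.front();
        m_data.pop();
    }
private:
    std::queue<T> m_data;
    mutable std::mutex m_mutex;
    std::condition_variable m_cond;
};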

Good luck!

The deliverFrame method is missing the following check at its start:

if (!isCurrentlyAwaitingData()) return;    

See DeviceSource.cpp in the live555 ("live") source tree.