
WebRTC Native, AudioTrackSinkInterface added to track, but OnData is never called


I have been working on a product that uses WebRTC to exchange audio between a browser and a native client, the native side being implemented in C++. Currently I have built the most recent stable release of WebRTC (branch: branch-heads/65).

So far I am able to get the peers to connect, and audio is captured and rendered correctly on the browser side. However, OnData is never called on the native client, even though the Chrome debugging tools indicate that data is being sent from the browser to the native client.

The following code is definitely called, and the track is being added as expected.

void Conductor::OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamInterface> stream)
{
    webrtc::AudioTrackVector atracks = stream->GetAudioTracks();
    for (auto track : atracks)
    {
        remote_audio.reset(new Native::AudioRenderer(this, track));
        track->set_enabled(true);
    }
}
// Audio renderer derived from webrtc::AudioTrackSinkInterface
// In the audio renderer constructor, AddSink is called on the track.
AudioRenderer::AudioRenderer(AudioCallback* callback, webrtc::AudioTrackInterface* track) : track_(track), callback_(callback)
{
// Can confirm this point is reached.
    track_->AddSink(this);
}
AudioRenderer::~AudioRenderer()
{
    track_->RemoveSink(this);
}
void AudioRenderer::OnData(const void* audio_data, int bits_per_sample, int sample_rate, size_t number_of_channels,
        size_t number_of_frames)
{
// This is never hit, despite the connection starting and streams being added.
    if (callback_ != nullptr)
    {
        callback_->OnAudioData(audio_data, bits_per_sample, sample_rate, number_of_channels, number_of_frames);
    }
}

I can also confirm that both offers include the option to receive audio:

Browser client offer:

// Create offer
var offerOptions = {
    offerToReceiveAudio: 1,
    offerToReceiveVideo: 0
};
pc.createOffer(offerOptions)
    .then(offerCreated);

Native client answer:

webrtc::PeerConnectionInterface::RTCOfferAnswerOptions o;
{
    o.voice_activity_detection = false;
    o.offer_to_receive_audio = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
    o.offer_to_receive_video = webrtc::PeerConnectionInterface::RTCOfferAnswerOptions::kOfferToReceiveMediaTrue;
}
peer_connection_->CreateAnswer(this, o);
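It is also worth checking what direction was actually negotiated: you can serialize the local and remote descriptions (e.g. via SessionDescriptionInterface::ToString) and look at the direction attribute of the audio m-section. An answer whose audio section ends up `sendonly` from the native side would explain why no inbound audio reaches the sink. Here is a toy helper (a plain string scan, not a real SDP parser) for inspecting that attribute:

```cpp
#include <cassert>
#include <string>

// Toy helper (not a real SDP parser): returns the direction attribute of the
// first m=audio section, "sendrecv" if none is present (the implicit default
// per the offer/answer model), or "none" if there is no audio section at all.
std::string AudioDirection(const std::string& sdp) {
    const std::size_t audio = sdp.find("m=audio");
    if (audio == std::string::npos) return "none";
    // Limit the search to this m-section only.
    const std::size_t next = sdp.find("\nm=", audio + 1);
    const std::string section = sdp.substr(
        audio, next == std::string::npos ? std::string::npos : next - audio);
    for (const char* dir :
         {"a=sendrecv", "a=sendonly", "a=recvonly", "a=inactive"}) {
        if (section.find(dir) != std::string::npos) return dir + 2;  // strip "a="
    }
    return "sendrecv";
}
```

Feeding the string returned by ToString into a check like this (or just eyeballing the logged SDP) quickly confirms whether the negotiation itself is the problem.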

I have been unable to find any recent information about this problem, and consuming received audio in a client application seems like it would be a common use case. Any ideas about mistakes I might be making while listening for inbound audio, or strategies I could use to investigate why this isn't working?

Many thanks!

I have managed to find an alternative way of getting audio data out of WebRTC that allows one to work around this problem.

  1. Implement a custom webrtc::AudioDeviceModule. Look at the WebRTC source code to see how this can be done.
  2. Capture the audio transport in the RegisterAudioCallback method, which is called when the call is established.

Snippet:

int32_t AudioDevice::RegisterAudioCallback(webrtc::AudioTransport * transport)
{
    transport_ = transport;
    return 0;
}
  3. Add a custom method to the device class that pulls audio from the audio transport using the NeedMorePlayData method. (Note: this seems to work with ntp_time_ms set to 0; it doesn't appear to be required.)

Snippet:

int32_t AudioDevice::NeedMorePlayData(
    const size_t nSamples,
    const size_t nBytesPerSample,
    const size_t nChannels,
    const uint32_t samplesPerSec,
    void* audioSamples,
    size_t& nSamplesOut,
    int64_t* elapsed_time_ms,
    int64_t* ntp_time_ms) const
{
    return transport_->NeedMorePlayData(nSamples,
                                        nBytesPerSample,
                                        nChannels,
                                        samplesPerSec,
                                        audioSamples,
                                        nSamplesOut,
                                        elapsed_time_ms,
                                        ntp_time_ms);
}
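WebRTC's playout path delivers audio in 10 ms chunks, so a thread driving a pull method like the one above would typically request samplesPerSec / 100 frames per call and size its buffer to match. The sketch below shows that arithmetic with a hypothetical fake transport (names and the bytes-per-sample convention are assumptions for illustration, not the real webrtc::AudioTransport):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for webrtc::AudioTransport: fills the buffer with
// silence and reports how many frames it produced.
struct FakeTransport {
    std::int32_t NeedMorePlayData(std::size_t nSamples,
                                  std::size_t nBytesPerSample,
                                  std::size_t nChannels,
                                  std::uint32_t samplesPerSec,
                                  void* audioSamples, std::size_t& nSamplesOut,
                                  std::int64_t* elapsed_time_ms,
                                  std::int64_t* ntp_time_ms) {
        (void)samplesPerSec;
        std::memset(audioSamples, 0, nSamples * nBytesPerSample * nChannels);
        nSamplesOut = nSamples;
        if (elapsed_time_ms) *elapsed_time_ms = 0;
        if (ntp_time_ms) *ntp_time_ms = 0;  // 0 seems to be acceptable here
        return 0;
    }
};

// One 10 ms pull of 16-bit interleaved audio: at 48 kHz stereo this is
// 480 frames, i.e. 960 int16 samples (1920 bytes).
std::size_t PullTenMs(FakeTransport& transport, std::uint32_t sample_rate,
                      std::size_t channels, std::vector<std::int16_t>& out) {
    const std::size_t frames = sample_rate / 100;  // 10 ms worth of frames
    out.resize(frames * channels);                 // interleaved sample buffer
    std::size_t frames_out = 0;
    std::int64_t elapsed = 0, ntp = 0;
    transport.NeedMorePlayData(frames, sizeof(std::int16_t), channels,
                               sample_rate, out.data(), frames_out,
                               &elapsed, &ntp);
    return frames_out;
}
```

Running such a pull on a steady 10 ms timer is essentially what the real audio device does internally; getting the chunk size wrong tends to produce choppy or silent playout rather than an error.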