How to record webcam and audio using webRTC and a server-based Peer connection

I want to record the user's webcam and audio and save it to a file on the server. These files would then be made available to other users.

Playback is not a problem, but recording the content is.

My understanding is that the getUserMedia .record() function has not been written yet; so far there is only a proposal for it.

I would like to use the PeerConnection API to create a peer connection on my server. I understand this is a bit hacky, but I think it should be possible to create a peer on the server and record what the client-side peer sends.

If this is possible, I should then be able to save that data as FLV or any other video format.

My preference would actually be to record the webcam + audio client-side, and to let the client re-record the video before uploading if they don't like their first attempt. This would also tolerate interruptions in the network connection. I have seen some code that records individual "images" from the webcam by sending the data to a canvas, which is cool, but I need the audio too.

Here is the client-side code I have so far:

  <video autoplay></video>
<script language="javascript" type="text/javascript">
function onVideoFail(e) {
    console.log('webcam fail!', e);
  };
function hasGetUserMedia() {
  // Note: Opera is unprefixed.
  return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia || navigator.msGetUserMedia);
}
if (hasGetUserMedia()) {
  // Good to go!
} else {
  alert('getUserMedia() is not supported in your browser');
}
window.URL = window.URL || window.webkitURL;
navigator.getUserMedia  = navigator.getUserMedia || navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia || navigator.msGetUserMedia;
var video = document.querySelector('video');
var streamRecorder;
var webcamstream;
if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true, video: true}, function(stream) {
    video.src = window.URL.createObjectURL(stream);
    webcamstream = stream;
//  streamrecorder = webcamstream.record();
  }, onVideoFail);
} else {
    alert ('failed');
}
function startRecording() {
    // note: MediaStream.record() is only a proposal and is not implemented in browsers
    streamRecorder = webcamstream.record();
    setTimeout(stopRecording, 10000);
}
function stopRecording() {
    streamRecorder.getRecordedData(postVideoToServer);
}
function postVideoToServer(videoblob) {
/*  var x = new XMLHttpRequest();
    x.open('POST', 'uploadMessage');
    x.send(videoblob);
*/
    var data = {};
    data.video = videoblob;
    data.metadata = 'test metadata';
    data.action = "upload_video";
    jQuery.post("http://www.foundthru.co.uk/uploadvideo.php", data, onUploadSuccess);
}
function onUploadSuccess() {
    alert ('video uploaded');
}
</script>
<div id="webcamcontrols">
    <a class="recordbutton" href="javascript:startRecording();">RECORD</a>
</div>

You should definitely take a look at Kurento. It provides a WebRTC server infrastructure that lets you record from a WebRTC feed and much more. You can also find some examples there for the application you are planning. Adding recording capabilities to a demo and storing the media file in a URI (local disk or wherever) is really easy.
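To give an idea of what that looks like, here is a minimal sketch of a recording pipeline using the kurento-client Node module (the WebSocket URL and file URI are placeholders, and the SDP/ICE negotiation with the browser is omitted):

const kurento = require('kurento-client');

kurento('ws://localhost:8888/kurento', (error, client) => {
  if (error) return console.error(error);
  client.create('MediaPipeline', (error, pipeline) => {
    if (error) return console.error(error);
    pipeline.create('WebRtcEndpoint', (error, webRtcEndpoint) => {
      if (error) return console.error(error);
      // ...negotiate the SDP offer/answer and ICE candidates with the browser here...
      pipeline.create('RecorderEndpoint', { uri: 'file:///tmp/recording.webm' }, (error, recorder) => {
        if (error) return console.error(error);
        webRtcEndpoint.connect(recorder, (error) => {
          if (error) return console.error(error);
          recorder.record(); // media received from the browser is now written to the URI
        });
      });
    });
  });
});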

The project is licensed under Apache 2.0 (formerly LGPL).

EDIT 1

Since this post, we have added new tutorials showing how to add the recorder in several scenarios:

  • kurento-hello-world-recording: simple recording tutorial, showing the different capabilities of the recording endpoint.
  • kurento-one2one-recording: how to record one-to-one communications in the media server.
  • kurento-hello-world-repository: use an external repository to record the file.

Disclaimer: I'm part of the Kurento development team.

I think using Kurento or any other MCU just to record videos would be a bit of overkill, especially considering that Chrome has had MediaRecorder API support since v47 and Firefox since v25. So at this point you may not even need an external JS library to do the job. Try this demo I made for recording video/audio with MediaRecorder:

Demo - works in Chrome and Firefox (the code for pushing the blob to the server was intentionally left out)

Github Code Source

If you are running Firefox, you can test it here (Chrome requires HTTPS):

'use strict'
let log = console.log.bind(console),
  id = val => document.getElementById(val),
  ul = id('ul'),
  gUMbtn = id('gUMbtn'),
  start = id('start'),
  stop = id('stop'),
  stream,
  recorder,
  counter = 1,
  chunks,
  media;
gUMbtn.onclick = e => {
  let mv = id('mediaVideo'),
    mediaOptions = {
      video: {
        tag: 'video',
        type: 'video/webm',
        ext: '.webm', // match the 'video/webm' blob type above
        gUM: {
          video: true,
          audio: true
        }
      },
      audio: {
        tag: 'audio',
        type: 'audio/ogg',
        ext: '.ogg',
        gUM: {
          audio: true
        }
      }
    };
  media = mv.checked ? mediaOptions.video : mediaOptions.audio;
  navigator.mediaDevices.getUserMedia(media.gUM).then(_stream => {
    stream = _stream;
    id('gUMArea').style.display = 'none';
    id('btns').style.display = 'inherit';
    start.removeAttribute('disabled');
    recorder = new MediaRecorder(stream);
    recorder.ondataavailable = e => {
      chunks.push(e.data);
      if (recorder.state == 'inactive') makeLink();
    };
    log('got media successfully');
  }).catch(log);
}
start.onclick = e => {
  start.disabled = true;
  stop.removeAttribute('disabled');
  chunks = [];
  recorder.start();
}
stop.onclick = e => {
  stop.disabled = true;
  recorder.stop();
  start.removeAttribute('disabled');
}
function makeLink() {
  let blob = new Blob(chunks, {
      type: media.type
    }),
    url = URL.createObjectURL(blob),
    li = document.createElement('li'),
    mt = document.createElement(media.tag),
    hf = document.createElement('a');
  mt.controls = true;
  mt.src = url;
  hf.href = url;
  hf.download = `${counter++}${media.ext}`;
  hf.innerHTML = `download ${hf.download}`;
  li.appendChild(mt);
  li.appendChild(hf);
  ul.appendChild(li);
}
      button {
        margin: 10px 5px;
      }
      li {
        margin: 10px;
      }
      body {
        width: 90%;
        max-width: 960px;
        margin: 0px auto;
      }
      #btns {
        display: none;
      }
      h1 {
        margin-bottom: 100px;
      }
<link type="text/css" rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
<h1> MediaRecorder API example</h1>
<p>For now it is supported only in Firefox(v25+) and Chrome(v47+)</p>
<div id='gUMArea'>
  <div>
    Record:
    <input type="radio" name="media" value="video" checked id='mediaVideo'>Video
    <input type="radio" name="media" value="audio">audio
  </div>
  <button class="btn btn-default" id='gUMbtn'>Request Stream</button>
</div>
<div id='btns'>
  <button class="btn btn-default" id='start'>Start</button>
  <button class="btn btn-default" id='stop'>Stop</button>
</div>
<div>
  <ul class="list-unstyled" id='ul'></ul>
</div>
<script src="https://code.jquery.com/jquery-2.2.0.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>

Please check out RecordRTC.

RecordRTC is MIT licensed on GitHub.
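A minimal sketch of how RecordRTC is typically used for webcam + microphone recording (the 10-second timeout mirrors the flow in the question; uploading or downloading the resulting Blob is left to you):

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function (stream) {
  // in browsers with MediaRecorder support this yields a single WebM blob
  var recorder = RecordRTC(stream, { type: 'video' });
  recorder.startRecording();

  // stop after 10 seconds, then hand the finished recording to a callback
  setTimeout(function () {
    recorder.stopRecording(function () {
      var blob = recorder.getBlob();
      console.log('recorded', blob.size, 'bytes');
      // POST the blob to the server (e.g. FormData + fetch) or offer it for download
    });
  }, 10000);
}).catch(function (err) {
  console.error('getUserMedia failed', err);
});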

And yes, as you mentioned, MediaStreamRecorder is currently not implemented.

MediaStreamRecorder is a WebRTC API for recording getUserMedia() streams. It allows web apps to create a file from a live audio/video session.

Alternatively, you could do something like http://ericbidelman.tumblr.com/post/31486670538/creating-webm-video-from-getusermedia, but audio is the missing part there.

You can use RecordRTC-together, which is based on RecordRTC.

It supports recording video and audio in separate files. You will need a tool like FFmpeg to merge the two files into one on the server.
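For the server-side merge, a minimal Node.js sketch that shells out to FFmpeg could look like this (the file names and codec choice are assumptions; adjust them to whatever your recorder actually produces):

// merge a video-only and an audio-only recording into a single WebM file
const { execFile } = require('child_process');

execFile('ffmpeg', [
  '-i', 'video.webm',   // video-only file from the client
  '-i', 'audio.wav',    // audio-only file from the client
  '-c:v', 'copy',       // keep the VP8 video stream as-is
  '-c:a', 'libvorbis',  // re-encode the audio so it fits the WebM container
  'merged.webm'
], (err, stdout, stderr) => {
  if (err) return console.error('ffmpeg failed:', stderr);
  console.log('merged.webm written');
});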

Web Call Server 4 can record WebRTC audio and video to a WebM container. The recording uses the Vorbis codec for audio and the VP8 codec for video. The original WebRTC codecs are Opus or G.711 and VP8. So server-side recording requires either Opus/G.711 to Vorbis server-side transcoding, or VP8 to H.264 transcoding if it is necessary to use another container, i.e. AVI.

I don't have enough knowledge on this either, but I found this on GitHub:

<!DOCTYPE html>
 <html>
<head>
  <title>XSockets.WebRTC Client example</title>
  <meta charset="utf-8" />

<style>
body {
  }
.localvideo {
position: absolute;
right: 10px;
top: 10px;
}
.localvideo video {
max-width: 240px;
width:100%;
margin-right:auto;
margin-left:auto;
border: 2px solid #333;
 }
 .remotevideos {
height:120px;
background:#dadada;
padding:10px; 
}
.remotevideos video{
max-height:120px;
float:left;
 }
</style>
</head>
<body>
<h1>XSockets.WebRTC Client example </h1>
<div class="localvideo">
    <video autoplay></video>
</div>
<h2>Remote videos</h2>
<div class="remotevideos">
</div>
<h2>Recordings  ( Click on your camera stream to start record)</h2>
<ul></ul>

<h2>Trace</h2>
<div id="immediate"></div>
<script src="XSockets.latest.js"></script>
<script src="adapter.js"></script>
<script src="bobBinder.js"></script>
<script src="xsocketWebRTC.js"></script>
<script>
    var $ = function (selector, el) {
        if (!el) el = document;
        return el.querySelector(selector);
    }
    var trace = function (what, obj) {
        var pre = document.createElement("pre");
        pre.textContent = JSON.stringify(what) + " - " + JSON.stringify(obj || "");
        $("#immediate").appendChild(pre);
    };
    var main = (function () {
        var broker;
        var rtc;
        trace("Ready");
        trace("Try connect the connectionBroker");
        var ws = new XSockets.WebSocket("wss://rtcplaygrouund.azurewebsites.net:443", ["connectionbroker"], {
            ctx: '23fbc61c-541a-4c0d-b46e-1a1f6473720a'
        });
        var onError = function (err) {
            trace("error", arguments);
        };
        var recordMediaStream = function (stream) {
            if ("MediaRecorder" in window === false) {
                trace("Recorder not started MediaRecorder not available in this browser. ");
                return;
            }
            var recorder = new XSockets.MediaRecorder(stream);
            recorder.start();
            trace("Recorder started.. ");
            recorder.oncompleted = function (blob, blobUrl) {
                trace("Recorder completed.. ");
                var li = document.createElement("li");
                var download = document.createElement("a");
                download.textContent = new Date();
                download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
                download.setAttribute("href", blobUrl);
                li.appendChild(download);
                $("ul").appendChild(li);
            };
        };
        var addRemoteVideo = function (peerId, mediaStream) {
            var remoteVideo = document.createElement("video");
            remoteVideo.setAttribute("autoplay", "autoplay");
            remoteVideo.setAttribute("rel", peerId);
            attachMediaStream(remoteVideo, mediaStream);
            $(".remotevideos").appendChild(remoteVideo);
        };
        var onConnectionLost = function (remotePeer) {
            trace("onconnectionlost", arguments);
            var peerId = remotePeer.PeerId;
            var videoToRemove = $("video[rel='" + peerId + "']");
            $(".remotevideos").removeChild(videoToRemove);
        };
        var oncConnectionCreated = function () {
            console.log(arguments, rtc);
            trace("oncconnectioncreated", arguments);
        };
        var onGetUerMedia = function (stream) {
            trace("Successfully got some userMedia , hopefully a goat will appear..");
            rtc.connectToContext(); // connect to the current context?
        };
        var onRemoteStream = function (remotePeer) {
            addRemoteVideo(remotePeer.PeerId, remotePeer.stream);
            trace("Opps, we got a remote stream. lets see if its a goat..");
        };
        var onLocalStream = function (mediaStream) {
            trace("Got a localStream", mediaStream.id);
            attachMediaStream($(".localvideo video "), mediaStream);
            // if user click, video , call the recorder
            $(".localvideo video ").addEventListener("click", function () {
                recordMediaStream(rtc.getLocalStreams()[0]);
            });
        };
        var onContextCreated = function (ctx) {
            trace("RTC object created, and a context is created - ", ctx);
            rtc.getUserMedia(rtc.userMediaConstraints.hd(false), onGetUerMedia, onError);
        };
        var onOpen = function () {
            trace("Connected to the brokerController - 'connectionBroker'");
            rtc = new XSockets.WebRTC(this);
            rtc.onlocalstream = onLocalStream;
            rtc.oncontextcreated = onContextCreated;
            rtc.onconnectioncreated = oncConnectionCreated;
            rtc.onconnectionlost = onConnectionLost;
            rtc.onremotestream = onRemoteStream;
            rtc.onanswer = function (event) {
            };
            rtc.onoffer = function (event) {
            };
        };
        var onConnected = function () {
            trace("connection to the 'broker' server is established");
            trace("Try get the broker controller form server..");
            broker = ws.controller("connectionbroker");
            broker.onopen = onOpen;
        };
        ws.onconnected = onConnected;
    });
    document.addEventListener("DOMContentLoaded", main);
</script>
</body>
</html>

Around line 89 of the code above, in my case, the recorder's oncompleted callback attaches a link to the recorded file. If you click that link it starts the download, and you can then save it to your server as a file.

The recording code looks like this:

recorder.oncompleted = function (blob, blobUrl) {
                trace("Recorder completed.. ");
                var li = document.createElement("li");
                var download = document.createElement("a");
                download.textContent = new Date();
                download.setAttribute("download", XSockets.Utils.randomString(8) + ".webm");
                download.setAttribute("href", blobUrl);
                li.appendChild(download);
                $("ul").appendChild(li);
            };

blobUrl holds the path. I solved my problem this way; hope someone finds it useful.

Browsers now support recording on the client side:

https://webrtc.github.io/samples/

The recorded file can be pushed to the server via an HTTP upload request once the connection ends.
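A minimal sketch (with a hypothetical '/upload' endpoint) of pushing the Blob produced by MediaRecorder to the server using fetch and FormData once recording has stopped; chunk collection works as in the samples linked below.

function uploadRecording(chunks) {
  // assemble the recorded chunks into a single Blob
  const blob = new Blob(chunks, { type: 'video/webm' });
  const form = new FormData();
  form.append('video', blob, 'recording.webm'); // hypothetical field and file name
  return fetch('/upload', { method: 'POST', body: form }) // hypothetical endpoint
    .then(response => {
      if (!response.ok) throw new Error('Upload failed: ' + response.status);
      console.log('Recording uploaded');
    });
}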

https://webrtc.github.io/samples/src/content/getusermedia/record/
https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/record

This has some drawbacks: if the user simply closes the tab and these operations are not run in the backend, the file may not be fully uploaded to the server.

As a more stable solution, Ant Media Server can record the stream on the server side; recording is one of Ant Media Server's basic features.

antmedia.io

Note: I'm a member of the Ant Media team.

Technically, you can use FFmpeg on the backend to mux the video and audio.