I am trying to stream an MP3 song in real time using Node.js and socket.io. I basically split the MP3 into segments of a few bytes and send them to the client, where the Web Audio API receives them, decodes them, and starts playing. The problem is that the sound does not play continuously: there is a 0.5-second gap between each buffer segment. How can I fix this so the Web Audio API plays the sound without gaps?
// buffer is 2 seconds of decoded audio, ready to be played
// the function is called whenever a new buffer is received
function stream(buffer)
{
    // create a new buffer source and connect it to the AudioContext
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.loop = false;
    // schedule it and advance the time cursor
    source.start(time + context.currentTime);
    time += buffer.duration; // time is a global variable, initially set to zero
}
where stream() is called from:
// bufferQ is an array of decoded MP3 data
// stream() is called after every 3 segments are received
// the Web Audio API plays them with gaps between the sounds
if (bufferQ.length == 3)
{
    for (var i = 0, n = bufferQ.length; i < n; i++)
    {
        stream(bufferQ.splice(0, 1)[0]);
    }
}
Should I use a different API than the Web Audio API, or is there a way to schedule my buffers so that they play continuously?
When is stream() called? – guest271314
@guest271314 Sorry, I did not make that clear. I have edited my question. Could you take a look? –
You can start the next `bufferSource` at the end of the previous `bufferSource`. You can also use `MediaSource` and the `updateend` event to append `ArrayBuffer` representations of the media segments to the media playback; see [HTML5 audio stream: precisely measure latency?](https://stackoverflow.com/q/38768375/) – guest271314
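The first suggestion in the comment above (start each `bufferSource` exactly where the previous one ends) can be sketched as below. This is a hedged rewrite of the question's `stream()` function, not the poster's actual fix: it keeps one absolute `playTime` cursor in `AudioContext` time and only reads `context.currentTime` when the cursor has fallen behind, instead of adding `context.currentTime` on every call (which drifts by the network/decode delay and causes the gaps). The 50 ms safety margin is an assumption.

```javascript
// Gapless scheduling sketch. Assumes a browser AudioContext bound to `context`.
let playTime = 0; // absolute AudioContext time where the next segment starts

function scheduleSegment(buffer) {
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // If we fell behind real time (e.g. a slow network), restart the cursor
  // slightly in the future rather than requesting a start time in the past.
  if (playTime < context.currentTime) {
    playTime = context.currentTime + 0.05; // 50 ms safety margin (assumption)
  }
  source.start(playTime);
  playTime += buffer.duration; // next segment begins exactly where this one ends
}
```

Because `playTime` is advanced by exactly `buffer.duration`, consecutive segments are scheduled back to back on the audio clock regardless of when `scheduleSegment()` happens to be called.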