The Stagefright Framework (1): The Video Playback Flow
On Android, the default multimedia framework is OpenCORE. OpenCORE's strengths are its cross-platform portability and the extensive validation it has received, which makes it relatively stable; its drawback is that it is large and complex, and takes considerable effort to maintain. Starting with Android 2.0, Google introduced the somewhat leaner Stagefright, which has been gradually replacing OpenCORE (Note 1).
[Figure 1] Stagefright's position in the Android multimedia architecture.
[Figure 2] The modules covered by Stagefright (Note 2).
Let us first look at how Stagefright plays back a video file.
Stagefright exists in Android as a shared library (libstagefright.so), and its module AwesomePlayer handles video/audio playback (Note 3). AwesomePlayer exposes many APIs that upper-layer applications (Java/JNI) can call; the following simple program illustrates the video playback flow.
In Java, to play a video file we write:
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ........................ (2), (3)
mp.start(); .......................... (4)
In Stagefright, the corresponding handling looks like this:
(1) Assign the file's absolute path to mUri
status_t AwesomePlayer::setDataSource(const char* uri, ...)
{
return setDataSource_l(uri, ...);
}
status_t AwesomePlayer::setDataSource_l(const char* uri, ...)
{
mUri = uri;
}
(2) Start mQueue, which serves as the event handler
status_t AwesomePlayer::prepare()
{
return prepare_l();
}
status_t AwesomePlayer::prepare_l()
{
prepareAsync_l();
while (mFlags & PREPARING)
{
mPreparedCondition.wait(mLock);
}
}
status_t AwesomePlayer::prepareAsync_l()
{
mQueue.start();
mFlags |= PREPARING;
mAsyncPrepareEvent = new AwesomeEvent(
this,
&AwesomePlayer::onPrepareAsyncEvent);
mQueue.postEvent(mAsyncPrepareEvent);
}
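The blocking behaviour above — prepare_l() spins on the PREPARING flag until the asynchronously-run prepare event clears it — can be sketched with standard C++ threading primitives. The class below is purely illustrative (the member names mirror AwesomePlayer, but the real event queue and handler are far richer):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Minimal model of the prepare_l()/prepareAsync_l() handshake: prepare()
// blocks on a condition variable until the asynchronously-run "prepare
// event" clears the PREPARING flag.
class MiniPlayer {
public:
    static constexpr int PREPARING = 1;

    ~MiniPlayer() {
        if (mThread.joinable())
            mThread.join();
    }

    int prepare() {
        std::unique_lock<std::mutex> lock(mLock);
        prepareAsync_l();               // post the async "event"
        while (mFlags & PREPARING)      // same loop as prepare_l()
            mPreparedCondition.wait(lock);
        return 0;
    }

    bool prepared() const { return (mFlags & PREPARING) == 0; }

private:
    void prepareAsync_l() {
        mFlags |= PREPARING;
        // Stand-in for mQueue.postEvent(): run the handler on another thread.
        mThread = std::thread([this] { onPrepareAsyncEvent(); });
    }

    void onPrepareAsyncEvent() {
        std::lock_guard<std::mutex> lock(mLock);
        // ... finishSetDataSource_l(), initVideoDecoder(), initAudioDecoder() ...
        mFlags &= ~PREPARING;
        mPreparedCondition.notify_all();  // wake the thread blocked in prepare()
    }

    std::mutex mLock;
    std::condition_variable mPreparedCondition;
    std::thread mThread;
    int mFlags = 0;
};
```

This is the classic flag-plus-condition-variable pattern: the synchronous API is just a wait wrapped around the asynchronous one.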
(3) onPrepareAsyncEvent is triggered
void AwesomePlayer::onPrepareAsyncEvent()
{
finishSetDataSource_l();
initVideoDecoder(); ...... (3.3)
initAudioDecoder();
}
status_t AwesomePlayer::finishSetDataSource_l()
{
dataSource = DataSource::CreateFromURI(mUri.string(), ...);
sp<MediaExtractor> extractor =
MediaExtractor::Create(dataSource); ..... (3.1)
return setDataSource_l(extractor); ......................... (3.2)
}
(3.1) Parse the file specified by mUri and choose the matching extractor based on its header
sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...)
{
source->sniff(&tmp, ...);
mime = tmp.string();
if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4))
{
return new MPEG4Extractor(source);
}
else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG))
{
return new MP3Extractor(source);
}
else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB))
{
return new AMRExtractor(source);
}
}
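The sniff step that produces the MIME string can be modelled as a confidence vote: each registered sniffer inspects the header bytes and reports a MIME type with a confidence score, and the highest-confidence answer decides which extractor gets built. The sketch below is a simplification under that assumption (the real sniffers live inside the individual extractors and the API differs):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Illustrative signature: inspect header bytes, report (mime, confidence).
using SniffFunc =
    std::function<float(const std::vector<uint8_t>&, std::string*)>;

std::string sniffMime(const std::vector<SniffFunc>& sniffers,
                      const std::vector<uint8_t>& header) {
    std::string best;
    float bestConfidence = 0.0f;
    for (const auto& sniff : sniffers) {
        std::string mime;
        float confidence = sniff(header, &mime);
        if (confidence > bestConfidence) {   // keep the most confident match
            bestConfidence = confidence;
            best = mime;
        }
    }
    return best;  // empty string: no sniffer recognized the header
}

// An illustrative MP4 sniffer: ISO base-media files carry "ftyp" at offset 4.
float sniffMp4(const std::vector<uint8_t>& h, std::string* mime) {
    if (h.size() >= 8 && h[4] == 'f' && h[5] == 't' && h[6] == 'y' && h[7] == 'p') {
        *mime = "video/mp4";
        return 0.4f;
    }
    return 0.0f;
}
```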
(3.2) Use the extractor to split the file into A/V tracks (mVideoTrack/mAudioTrack)
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor)
{
for (size_t i = 0; i < extractor->countTracks(); ++i)
{
sp<MetaData> meta = extractor->getTrackMetaData(i);
CHECK(meta->findCString(kKeyMIMEType, &mime));
if (!haveVideo && !strncasecmp(mime, "video/", 6))
{
setVideoSource(extractor->getTrack(i));
haveVideo = true;
}
else if (!haveAudio && !strncasecmp(mime, "audio/", 6))
{
setAudioSource(extractor->getTrack(i));
haveAudio = true;
}
}
}
void AwesomePlayer::setVideoSource(sp<MediaSource> source)
{
mVideoTrack = source;
}
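The track split above boils down to a case-insensitive prefix test on each track's MIME type: the first "video/" track becomes mVideoTrack and the first "audio/" track becomes mAudioTrack. A minimal sketch of that selection logic (the track list here is illustrative, standing in for the extractor's metadata):

```cpp
#include <string>
#include <vector>
#include <strings.h>  // strncasecmp

struct Tracks { std::string video, audio; };

// Mirrors the loop in AwesomePlayer::setDataSource_l(extractor): scan the
// tracks in order and keep the first video and first audio track found.
Tracks splitTracks(const std::vector<std::string>& mimes) {
    Tracks t;
    for (const auto& mime : mimes) {
        if (t.video.empty() && !strncasecmp(mime.c_str(), "video/", 6))
            t.video = mime;                  // first video track wins
        else if (t.audio.empty() && !strncasecmp(mime.c_str(), "audio/", 6))
            t.audio = mime;                  // first audio track wins
    }
    return t;
}
```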
(3.3) Choose the video decoder (mVideoSource) based on the encoding type of mVideoTrack
status_t AwesomePlayer::initVideoDecoder()
{
mVideoSource = OMXCodec::Create(mClient.interface(),
mVideoTrack->getFormat(),
false,
mVideoTrack);
}
(4) Post mVideoEvent to mQueue to start decoding and playback; mVideoRenderer draws the decoded frames
status_t AwesomePlayer::play()
{
return play_l();
}
status_t AwesomePlayer::play_l()
{
postVideoEvent_l();
}
void AwesomePlayer::postVideoEvent_l(int64_t delayUs)
{
mQueue.postEventWithDelay(mVideoEvent, delayUs);
}
void AwesomePlayer::onVideoEvent()
{
mVideoSource->read(&mVideoBuffer, &options);
// [check timestamp]
mVideoRenderer->render(mVideoBuffer);
postVideoEvent_l();
}
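The timestamp check before rendering decides how long to wait before the next self-posted mVideoEvent: an early frame waits until its presentation time, a late frame is shown immediately. A minimal sketch of that scheduling decision (the function and its policy are illustrative; the real onVideoEvent also drops badly late frames):

```cpp
#include <cstdint>

// Returns the delay (in microseconds) before rendering / re-posting:
//  - frame already due or overdue -> render now (0 delay)
//  - frame early                  -> wait until its presentation time
int64_t videoEventDelayUs(int64_t frameMediaTimeUs, int64_t clockUs) {
    int64_t latenessUs = clockUs - frameMediaTimeUs;
    if (latenessUs >= 0)
        return 0;            // due or overdue: show it immediately
    return -latenessUs;      // early: sleep until the frame's timestamp
}
```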
(Note 1) Starting with Android 2.3 (Gingerbread), the default multimedia framework is Stagefright.
(Note 2) Stagefright's architecture is still evolving; this series does not cover every module.
(Note 3) Audio playback is handled by AudioPlayer; see "Stagefright (6): The Audio Playback Flow".
The Stagefright Framework (2): Interaction with OpenMAX
Stagefright's encode/decode capability is built on the OpenMAX framework, and it actually uses OpenCORE's OMX implementation. Let us look at how Stagefright and OMX work together.
(1) OMX_Init
OMXClient mClient;
AwesomePlayer::AwesomePlayer()
{
mClient.connect();
}
status_t OMXClient::connect()
{
mOMX = service->getOMX();
}
sp<IOMX> MediaPlayerService::getOMX()
{
mOMX = new OMX;
}
OMX::OMX() : mMaster(new OMXMaster)
OMXMaster::OMXMaster()
{
addPlugin(new OMXPVCodecsPlugin);
}
OMXPVCodecsPlugin::OMXPVCodecsPlugin()
{
OMX_MasterInit();
}
OMX_ERRORTYPE OMX_MasterInit() // implemented in OpenCORE
{
return OMX_Init();
}
(2) OMX_SendCommand
OMXCodec::function_name()
{
mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
}
status_t OMX::sendCommand(node, cmd, param)
{
return findInstance(node)->sendCommand(cmd, param);
}
status_t OMXNodeInstance::sendCommand(cmd, param)
{
OMX_SendCommand(mHandle, cmd, param, NULL);
}
(3) Other commands issued to OMX components
Other commands issued to OMX components follow the same call path as OMX_SendCommand; see the table below:
OMXCodec          OMX               OMXNodeInstance
useBuffer         useBuffer         useBuffer (OMX_UseBuffer)
getParameter      getParameter      getParameter (OMX_GetParameter)
fillBuffer        fillBuffer        fillBuffer (OMX_FillThisBuffer)
emptyBuffer       emptyBuffer       emptyBuffer (OMX_EmptyThisBuffer)
(4) Callback Functions
OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks =
{
&OnEvent, <--------------- omx_message::EVENT
&OnEmptyBufferDone, <----- omx_message::EMPTY_BUFFER_DONE
&OnFillBufferDone <------- omx_message::FILL_BUFFER_DONE
}
The Stagefright Framework (3): Selecting the Video Decoder
In "Stagefright (1): The Video Playback Flow", we did not describe in detail how Stagefright chooses a suitable video decoder for a given file type; let us take a look now.
(1) The video decoder is decided in initVideoDecoder, called from onPrepareAsyncEvent
OMXCodec::Create() returns the video decoder, which is assigned to mVideoSource.
status_t AwesomePlayer::initVideoDecoder()
{
mVideoSource = OMXCodec::Create(mClient.interface(),
mVideoTrack->getFormat(),
false,
mVideoTrack);
}
sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName)
{
meta->findCString(kKeyMIMEType, &mime);
findMatchingCodecs(mime, ..., &matchingCodecs); ........ (2)
for (size_t i = 0; i < matchingCodecs.size(); ++i)
{
componentName = matchingCodecs[i].string();
softwareCodec =
InstantiateSoftwareCodec(componentName, ...); ..... (3)
if (softwareCodec != NULL) return softwareCodec;
err = omx->allocateNode(componentName, ..., &node); ... (4)
if (err == OK)
{
codec = new OMXCodec(..., componentName, ...); ...... (5)
return codec;
}
}
}
(2) Based on the MIME type of mVideoTrack, pick the suitable components from kDecoderInfo
void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs)
{
for (int index = 0;; ++index)
{
componentName = GetCodec(
kDecoderInfo,
sizeof(kDecoderInfo)/sizeof(kDecoderInfo[0]),
mime,
index);
matchingCodecs->push(String8(componentName));
}
}
static const CodecInfo kDecoderInfo[] =
{
...
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
{ MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
...
}
GetCodec picks out, in table order, every component name registered in kDecoderInfo for the given MIME type, and the names are stored in matchingCodecs.
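The table scan can be sketched as follows. The table contents are a small illustrative subset (the MIME strings assume the AOSP values, e.g. MEDIA_MIMETYPE_VIDEO_MPEG4 being "video/mp4v-es"):

```cpp
#include <string>
#include <vector>
#include <strings.h>  // strcasecmp

struct CodecInfo { const char* mime; const char* component; };

// Illustrative subset of kDecoderInfo: (MIME, component) pairs, where table
// order encodes preference (hardware components first, software last).
static const CodecInfo kDecoderInfo[] = {
    { "video/mp4v-es", "OMX.qcom.video.decoder.mpeg4" },
    { "video/mp4v-es", "OMX.TI.Video.Decoder" },
    { "video/mp4v-es", "M4vH263Decoder" },
    { "audio/mpeg",    "MP3Decoder" },
};

// Collect, preserving table order, every component registered for the MIME.
std::vector<std::string> findMatchingCodecs(const char* mime) {
    std::vector<std::string> matching;
    for (const CodecInfo& info : kDecoderInfo)
        if (!strcasecmp(info.mime, mime))
            matching.push_back(info.component);
    return matching;
}
```

Because the table order is preserved, the caller naturally tries the preferred (typically hardware) components before falling back to software ones.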
(3) Following the order of components in matchingCodecs, first check whether each one is a software decoder
static sp<MediaSource> InstantiateSoftwareCodec(name, ...)
{
FactoryInfo kFactoryInfo[] =
{
...
FACTORY_REF(M4vH263Decoder)
...
};
for (i = 0; i < sizeof(kFactoryInfo)/sizeof(kFactoryInfo[0]); ++i)
{
if (!strcmp(name, kFactoryInfo[i].name))
return (*kFactoryInfo[i].CreateFunc)(source);
}
}
Every software decoder is listed in kFactoryInfo; the name passed in is matched against that list to find the right decoder.
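The factory-table idea can be sketched with plain function pointers. Everything here is illustrative (the creation functions just return strings standing in for decoder instances):

```cpp
#include <cstring>
#include <string>

typedef std::string (*CreateFunc)();

// Stand-ins for the real FACTORY_REF'd constructors.
static std::string createM4vH263Decoder() { return "M4vH263Decoder instance"; }
static std::string createMp3Decoder()     { return "MP3Decoder instance"; }

struct FactoryInfo { const char* name; CreateFunc create; };

static const FactoryInfo kFactoryInfo[] = {
    { "M4vH263Decoder", &createM4vH263Decoder },
    { "MP3Decoder",     &createMp3Decoder },
};

// Mirrors InstantiateSoftwareCodec(): a name found in the table yields a
// decoder; anything else falls through, so the caller tries OMX allocation.
std::string instantiateSoftwareCodec(const char* name) {
    for (const FactoryInfo& f : kFactoryInfo)
        if (!strcmp(name, f.name))
            return f.create();
    return "";  // not a software codec: caller will try omx->allocateNode()
}
```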
(4) If the component is not a software decoder, try to allocate the corresponding OMX component
status_t OMX::allocateNode(name, ..., node)
{
mMaster->makeComponentInstance(
name,
&OMXNodeInstance::kCallbacks,
instance,
handle);
}
OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...)
{
plugin->makeComponentInstance(name, ...);
}
OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...)
{
return OMX_MasterGetHandle(..., name, ...);
}
OMX_ERRORTYPE OMX_MasterGetHandle(...)
{
return OMX_GetHandle(...);
}
(5) If the component is an OMX decoder, return it; otherwise continue with the next component
The Stagefright Framework (4): The Video Buffer Transfer Flow
This article describes how Stagefright exchanges buffers with the OMX video decoder.
(1) At the start, OMXCodec's read function sends undecoded data to the decoder and asks the decoder to send the decoded data back
status_t OMXCodec::read(...)
{
if (mInitialBufferSubmit)
{
mInitialBufferSubmit = false;
drainInputBuffers(); <----- OMX_EmptyThisBuffer
fillOutputBuffers(); <----- OMX_FillThisBuffer
}
...
}
void OMXCodec::drainInputBuffers()
{
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
for (i = 0; i < buffers->size(); ++i)
{
drainInputBuffer(&buffers->editItemAt(i));
}
}
void OMXCodec::drainInputBuffer(BufferInfo *info)
{
mOMX->emptyBuffer(...);
}
void OMXCodec::fillOutputBuffers()
{
Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
for (i = 0; i < buffers->size(); ++i)
{
fillOutputBuffer(&buffers->editItemAt(i));
}
}
void OMXCodec::fillOutputBuffer(BufferInfo *info)
{
mOMX->fillBuffer(...);
}
(2) The decoder reads the data from its input port, decodes it, and sends back EmptyBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg)
{
switch (msg.type)
{
case omx_message::EMPTY_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
drainInputBuffer(&buffers->editItemAt(i));
}
}
}
After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next chunk of undecoded data to the decoder.
(3) The decoder delivers the decoded data to its output port and sends back FillBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg)
{
switch (msg.type)
{
case omx_message::FILL_BUFFER_DONE:
{
IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
fillOutputBuffer(info);
mFilledBuffers.push_back(i);
mBufferFilled.signal();
}
}
}
After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, signals mBufferFilled, and asks the decoder to keep sending data.
(4) The tail of the read function waits on the mBufferFilled signal. Once mFilledBuffers has been filled, read assigns the buffer to the buffer pointer and returns it to AwesomePlayer
status_t OMXCodec::read(MediaBuffer **buffer, ...)
{
...
while (mFilledBuffers.empty())
{
mBufferFilled.wait(mLock);
}
BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
info->mMediaBuffer->add_ref();
*buffer = info->mMediaBuffer;
}
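The hand-off between on_message() and read() is a standard condition-variable producer/consumer exchange. A minimal model of just that exchange (member names mirror OMXCodec, but the class is a simplified stand-in, with buffer indices in place of MediaBuffers):

```cpp
#include <condition_variable>
#include <list>
#include <mutex>

class MiniCodec {
public:
    // Decoder side: what on_message() does for FILL_BUFFER_DONE.
    void onFillBufferDone(size_t index) {
        std::lock_guard<std::mutex> lock(mLock);
        mFilledBuffers.push_back(index);   // decoded buffer is ready
        mBufferFilled.notify_one();        // wake a blocked read()
    }

    // Client side: the tail of OMXCodec::read().
    size_t read() {
        std::unique_lock<std::mutex> lock(mLock);
        while (mFilledBuffers.empty())
            mBufferFilled.wait(lock);      // block until a buffer arrives
        size_t index = mFilledBuffers.front();
        mFilledBuffers.pop_front();
        return index;                      // hand the buffer to the caller
    }

private:
    std::mutex mLock;
    std::condition_variable mBufferFilled;
    std::list<size_t> mFilledBuffers;
};
```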
The Stagefright Framework (5): Video Rendering
Besides obtaining decoded data via OMXCodec::read, AwesomePlayer::onVideoEvent must also hand that data (mVideoBuffer) to the video renderer so it can be drawn on the screen.
[code excerpt missing: the creation of mVideoRenderer, steps (1)-(3), ending with mISurface]
Judging from the code at (1) above, AwesomeRemoteRenderer is essentially created by OMX::createRenderer. createRenderer first tries to create a hardware renderer, SharedVideoRenderer (libstagefrighthw.so); if that fails, it creates a software renderer, SoftwareRenderer (surface).
(3) If the video decoder is a software component, an AwesomeLocalRenderer is created as mVideoRenderer
...
...
(3) When the audio output is opened, AudioPlayer registers a callback function with it; thereafter, each time the callback fires, AudioPlayer reads decoded data from the audio decoder
The reading of decoded audio data is thus driven by the callback function, but how the audio output drives the callback is not apparent from the code. On the other hand, the code fragment above shows that after fillBuffer copies the data (mInputBuffer) into data, the audio output presumably consumes data.
Having covered the audio and video processing flows, we now turn to audio/video synchronization. OpenCORE's approach is to set up a master clock that both audio and video use as their output reference. In Stagefright, by contrast, audio output is driven by the callback function, and video synchronizes itself to the audio timestamps. The details follow:
(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps: mPositionTimeMediaUs and mPositionTimeRealUs
...
mPositionTimeRealUs = ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;
...
mPositionTimeMediaUs is the timestamp carried in the data itself; mPositionTimeRealUs is the actual playback time of that data (derived from the frame count and the sample rate).
...
...
AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.
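The two quantities reduce to simple arithmetic, sketched below as standalone helpers (the function names are illustrative; the formula for the real-time position matches the mPositionTimeRealUs expression shown earlier):

```cpp
#include <cstdint>

// Real playback time: frames actually played, converted to microseconds at
// the output's sample rate (mirrors the mPositionTimeRealUs computation).
int64_t positionTimeRealUs(int64_t numFramesPlayed, int64_t sampleRate) {
    return (numFramesPlayed * 1000000) / sampleRate;
}

// The delta AwesomePlayer derives: real clock minus media timestamp. Video
// later uses this to decide how early or late each frame is.
int64_t timeSourceDeltaUs(int64_t realTimeUs, int64_t mediaTimeUs) {
    return realTimeUs - mediaTimeUs;
}
```

For example, 44100 frames played at 44.1 kHz corresponds to exactly one second of real playback time.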
...
...