How does MediaCodec find the codec inside the framework in Android?

I am trying to understand how MediaCodec is used for hardware decoding.
My knowledge of Android internals is very limited.
Here are my findings:
There is an XML file that describes the codec details in the Android system, for example device/ti/omap3evm/media_codecs.xml.
This means that if we create a codec from a Java application with MediaCodec
MediaCodec codec = MediaCodec.createDecoderByType(type);
it should find the respective codec with the help of that XML file.
What I am doing:
I am trying to figure out which part of the code reads the XML and finds the codec based on the given 'type'.
1) Application Layer :
MediaCodec codec = MediaCodec.createDecoderByType(type);
2) MediaCodec.java -> [ frameworks/base/media/java/android/media/MediaCodec.java ]
public static MediaCodec createDecoderByType(String type) {
return new MediaCodec(type, true /* nameIsType */, false /* encoder */);
}
3)
private MediaCodec(
String name, boolean nameIsType, boolean encoder) {
native_setup(name, nameIsType, encoder); --> JNI Call.
}
4)
JNI Implementation -> [ frameworks/base/media/jni/android_media_MediaCodec.cpp ]
static void android_media_MediaCodec_native_setup (..) {
.......
const char *tmp = env->GetStringUTFChars(name, NULL);
sp<JMediaCodec> codec = new JMediaCodec(env, thiz, tmp, nameIsType, encoder); ---> Here
}
from frameworks/base/media/jni/android_media_MediaCodec.cpp
JMediaCodec::JMediaCodec( ..) {
....
mCodec = MediaCodec::CreateByType(mLooper, name, encoder); //Call goes to libstagefright
.... }
sp<MediaCodec> MediaCodec::CreateByType(
const sp<ALooper> &looper, const char *mime, bool encoder) {
sp<MediaCodec> codec = new MediaCodec(looper);
if (codec->init(mime, true /* nameIsType */, encoder) != OK) { --> HERE.
return NULL;
}
return codec;
}
status_t MediaCodec::init(const char *name, bool nameIsType, bool encoder) {
// MediaCodec
}
I am stuck at this point in the flow. If someone could point out how to take it forward, it would help a lot.
Thanks.

Let's take the flow step by step.
MediaCodec::CreateByType will create a new MediaCodec object
MediaCodec constructor would create a new ACodec object and store it as mCodec
When MediaCodec::init is invoked, it internally instructs the underlying ACodec to allocate the OMX component through mCodec->initiateAllocateComponent.
ACodec::initiateAllocateComponent would invoke onAllocateComponent
ACodec::UninitializedState::onAllocateComponent would invoke OMXCodec::findMatchingCodecs to find the codecs matching the MIME type passed from the caller.
In OMXCodec::findMatchingCodecs, there is a call to retrieve an instance of MediaCodecList as MediaCodecList::getInstance().
In MediaCodecList::getInstance, there is a check if there is an existing MediaCodecList or else a new object of MediaCodecList is created.
In the constructor of MediaCodecList, there is a call to parseXMLFile with the file name as /etc/media_codecs.xml.
parseXMLFile reads the contents and stores the different component names etc. into MediaCodecList, which can then be used by any other codec instance too. The helper function employed for the parsing is startElementHandler. A function of interest could be addMediaCodec.
Through these steps, the XML file contents are translated into a list which can be employed by any other module. MediaCodecList is also exposed at the Java layer.
I have skipped a few hops wherein MediaCodec and ACodec employ messages to actually communicate and invoke methods, but the flow presented should give a good idea about the underlying mechanism.
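For completeness, the list that MediaCodecList builds from media_codecs.xml is also visible at the Java layer, so an app can inspect what the XML parsing produced. Below is a small sketch using the older static MediaCodecList API (API 16+); the class name and log tag are just illustrative:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public final class CodecDump {
    // Logs every decoder that claims support for the given MIME type, e.g. "video/avc".
    public static void dumpDecodersFor(String mimeType) {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (info.isEncoder()) continue; // only decoders are of interest here
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase(mimeType)) {
                    Log.d("CodecDump", mimeType + " -> " + info.getName());
                }
            }
        }
    }
}

Calling CodecDump.dumpDecodersFor("video/avc") should list the OMX components that media_codecs.xml registered for H.264, which is the pool createDecoderByType ultimately chooses from.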

Related

How to get frame by frame from MP4? (MediaCodec)

Actually I am working with OpenGL, and I would like to put all my textures into an MP4 in order to compress them.
Then I need to read them back from the MP4 on my Android device.
So I need to somehow decode the MP4 and get the frames one by one, on request.
I found this MediaCodec
https://developer.android.com/reference/android/media/MediaCodec
and this MediaMetadataRetriever
https://developer.android.com/reference/android/media/MediaMetadataRetriever
But I did not see an approach for requesting frames one by one...
If there is someone who has worked with MP4, please point me in the right direction.
P.S. I am working the native way (JNI), so it does not matter how it's done, Java or native; I just need to find a way.
EDIT1
I am making a kind of movie (just one 3D model), so I am changing my geometry as well as my textures every 32 milliseconds. So it seems reasonable to me to use MP4 for the textures, because each new frame (every 32 milliseconds) is very similar to the previous one...
Right now I use 400 frames for one model. For geometry I use .mtr files and for textures I use .pkm files (because they are optimized for Android), so I have around 350 .mtr files (some files include subindices) and 400 .pkm files...
This is why I am going to use MP4 for the textures: one MP4 is much smaller than 400 .pkm files.
EDIT2
Please take a look at EDIT1.
Actually, all I need to know is whether there is an Android API that can read an MP4 frame by frame. Maybe some kind of getNextFrame() method?
Something like this:
MP4Player player = new MP4Player(PATH_TO_MY_MP4_FILE);

void readMP4() {
    Bitmap b;
    while (player.hasNext()) {
        b = player.getNextFrame();
        ///.... my code here ...///
    }
}
EDIT3
I made the following implementation in Java:
public static void read(@NonNull final Context iC, @NonNull final String iPath)
{
    long time;
    int fileCount = 0;

    //Create a new Media Player
    MediaPlayer mp = MediaPlayer.create(iC, Uri.parse(iPath));
    time = mp.getDuration() * 1000;
    Log.e("TAG", String.format("TIME :: %s", time));

    MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(iPath);

    long a = System.nanoTime();

    //frame rate 10.03/sec, 1/10.03 = in microseconds 99700
    for (int i = 99700; i <= time; i = i + 99700)
    {
        Bitmap b = mRetriever.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
        if (b == null)
        {
            Log.e("TAG", String.format("BITMAP STATE :: %s", "null"));
        }
        else
        {
            fileCount++;
        }
        long curTime = System.nanoTime();
        Log.e("TAG", String.format("EXECUTION TIME :: %s", curTime - a));
        a = curTime;
    }
    Log.e("TAG", String.format("COUNT :: %s", fileCount));
}
and here are the execution times:
E/TAG: EXECUTION TIME :: 267982039
E/TAG: EXECUTION TIME :: 222928769
E/TAG: EXECUTION TIME :: 289899461
E/TAG: EXECUTION TIME :: 138265423
E/TAG: EXECUTION TIME :: 127312577
E/TAG: EXECUTION TIME :: 251179654
E/TAG: EXECUTION TIME :: 133996500
E/TAG: EXECUTION TIME :: 289730345
E/TAG: EXECUTION TIME :: 132158270
E/TAG: EXECUTION TIME :: 270951461
E/TAG: EXECUTION TIME :: 116520808
E/TAG: EXECUTION TIME :: 209071269
E/TAG: EXECUTION TIME :: 149697230
E/TAG: EXECUTION TIME :: 138347269
These times are in nanoseconds, i.e. roughly +/- 200 milliseconds per frame... It is very slow... I need around 30 milliseconds per frame.
So I think this method executes on the CPU; the question is whether there is a method that executes on the GPU?
EDIT4
I found out that there is a MediaCodec class:
https://developer.android.com/reference/android/media/MediaCodec
I also found a similar question here: MediaCodec get all frames from video
I understood that there is a way to read bytes, but not frames...
So the question remains: is there a way to read an MP4 video frame by frame?
The solution would look something like the ExtractMpegFramesTest, in which MediaCodec is used to generate "external" textures from video frames. In the test code, the frames are rendered to an off-screen pbuffer and then saved as PNG. You would just render them directly.
There are a few problems with this:
MPEG video isn't designed to work well as a random-access database.
A common GOP (group of pictures) structure has one "key frame" (essentially a JPEG image) followed by 14 delta frames, which just hold the difference from the previous decoded frame. So if you want frame N, you may have to decode frames N-14 through N-1 first. Not a problem if you're always moving forward (playing a movie onto a texture) or you only store key frames (at which point you've invented a clumsy database of JPEG images).
As mentioned in comments and answers, you're likely to get some visual artifacts. How bad these look depends on the material and your compression rate. Since you're generating the frames, you may be able to reduce this by ensuring that, whenever there's a big change, the first frame is always a key frame.
The firmware that MediaCodec interfaces with may want several frames before it starts producing output, even if you start at a key frame. Seeking around in a stream has a latency cost. See e.g. this post. (Ever wonder why DVRs have smooth fast-forward, but not smooth fast-backward?)
MediaCodec frames passed through SurfaceTexture become "external" textures. These have some limitations vs. normal textures -- performance may be worse, can't use as color buffer in an FBO, etc. If you're just rendering it once per frame at 30fps this shouldn't matter.
MediaMetadataRetriever's getFrameAtTime() method has less-than-desirable performance for the reasons noted above. You're unlikely to get better results by writing it yourself, although you can save a bit of time by skipping the step where it creates a Bitmap object. Also, you passed OPTION_CLOSEST_SYNC in, but that will only produce the results you want if all your frames are sync frames (again, clumsy database of JPEG images). You need to use OPTION_CLOSEST; a one-line example is shown a few paragraphs below.
If you're just trying to play a movie on a texture (or your problem can be reduced to that), Grafika has some examples. One that may be relevant is TextureFromCamera, which renders the camera video stream on a GLES rect that can be zoomed and rotated. You can replace the camera input with the MP4 playback code from one of the other demos. This'll work fine if you're only playing forward, but if you want to skip around or go backward you'll have trouble.
The problem you're describing sounds pretty similar to what 2D game developers deal with. Doing what they do is probably the best approach.
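To make that last MediaMetadataRetriever point concrete, here is the one-line change (a sketch; retriever and timeUs stand in for the retriever and frame timestamp from the question's EDIT3 code):

// OPTION_CLOSEST decodes up to the requested frame instead of snapping to the nearest sync frame,
// so it returns the exact frame at the cost of decoding the intervening delta frames.
Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);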
I can see why it might seem easy to have all your textures in a single file, but this is a really really bad idea.
MP4 is a video format whose codecs are highly optimised for a list of frames that have a high level of similarity to adjacent frames, i.e. motion. They are also optimised to be decompressed in sequential order, so using a 'random access' approach will be very inefficient.
To give a bit more detail, video codecs store key frames (say one per second, but the rate changes) and delta frames the rest of the time. The key frames are independently compressed just like separate images, but the delta frames are stored as the difference from one or more other frames. The algorithm assumes this difference will be fairly minimal after motion compensation has been performed.
So if you want to access a single delta frame, your code will have to decompress a nearby key frame and all the delta frames that connect it to the frame you want; this will be much slower than just using single-frame JPEGs.
In short, use JPEG or PNG to compress your textures and add them all to a single archive file to keep it tidy.
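If you go that route, loading all the textures from one archive is straightforward. Below is a minimal sketch, assuming the PNGs are packed into an assets file called textures.zip (the file name and class name are just illustrative) and that this runs on a thread with a current GL context:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public final class TextureArchive {
    // Decodes every image in assets/textures.zip and uploads each one to a new GL texture.
    // Returns the generated texture ids in archive order.
    public static List<Integer> loadAll(Context context) throws IOException {
        List<Integer> textureIds = new ArrayList<>();
        try (ZipInputStream zis = new ZipInputStream(context.getAssets().open("textures.zip"))) {
            ZipEntry entry;
            byte[] chunk = new byte[8192];
            while ((entry = zis.getNextEntry()) != null) {
                // Read the whole entry into memory so BitmapFactory sees a complete stream.
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                int n;
                while ((n = zis.read(chunk)) > 0) bos.write(chunk, 0, n);
                Bitmap bmp = BitmapFactory.decodeByteArray(bos.toByteArray(), 0, bos.size());
                if (bmp == null) continue; // skip entries that are not images

                int[] tex = new int[1];
                GLES20.glGenTextures(1, tex, 0);
                GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
                GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
                bmp.recycle();
                textureIds.add(tex[0]);
            }
        }
        return textureIds;
    }
}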
Yes, there is a way to extract single frames from MP4 video.
In principle, you seem to be looking for an alternative way to load textures, where the usual way is GLUtils.texImage2D (which fills a texture from a Bitmap).
First, you should consider what others advise, and expect visual artifacts from compression. But assuming that your textures are related (e.g. frames of an explosion), getting them from a video stream makes sense. For unrelated images you'll get better results using JPG or PNG. Also note that MP4 video doesn't have an alpha channel, which is often used in textures.
For this task, you can't use MediaMetadataRetriever; it won't give you the accuracy needed to extract all frames.
You'll have to work with the MediaCodec and MediaExtractor classes. The Android documentation for MediaCodec is detailed; a minimal decode loop is sketched below.
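To give an idea of what that involves, here is a minimal sketch of a MediaExtractor + MediaCodec decode loop (it uses the API 21+ buffer accessors; path and outputSurface are assumptions of this example, and error handling is omitted):

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;

public final class FrameStepper {
    // Decodes every video frame of the file at 'path' and renders it to 'outputSurface'.
    public static void decodeAllFrames(String path, Surface outputSurface) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);

        // Find and select the first video track.
        MediaFormat format = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            String mime = f.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i);
                format = f;
                break;
            }
        }
        if (format == null) throw new IOException("no video track");

        MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, outputSurface, null, 0); // pass a null Surface for ByteBuffer output
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false, outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int inIndex = decoder.dequeueInputBuffer(10000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int outIndex = decoder.dequeueOutputBuffer(info, 10000);
            if (outIndex >= 0) {
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true;
                // 'true' renders the frame to outputSurface; a frame-step player would block
                // here until the caller asks for the next frame.
                decoder.releaseOutputBuffer(outIndex, true);
            }
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }
}

A frame-step player essentially wraps this loop so that the next releaseOutputBuffer call only happens when the caller requests the next frame.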
In effect you'll need to implement a kind of customized video player and add one key function: frame step.
The closest thing to this is Android's MediaPlayer, which is a complete player, but 1) it lacks frame-step, and 2) it is rather closed-source, because it's implemented by a lot of native C++ libraries which are impossible to extend and hard to study.
I advise this with the experience of having created a frame-by-frame video player; I did it by adapting MediaPlayer-Extended, which is written in plain Java (no native code), so you can include it in your project and add the function you need. It works with Android's MediaCodec and MediaExtractor.
Somewhere in the MediaPlayer class you'd add a function for frameStep, and add another signal + function in PlaybackThread to decode just one next frame (in paused mode). The implementation of this would be up to you, though. The result is that you let the decoder obtain and process a single frame, consume the frame, then repeat with the next frame. I did it, so I know this approach works.
The other half of the task is obtaining the result. A video player (with MediaCodec) outputs frames into a Surface. Your task is to get the pixels out.
I know one way to read an RGB bitmap from such a surface: create an OpenGL pbuffer EGLSurface, let MediaCodec render into that surface (via Android's SurfaceTexture), then read the pixels from that surface. This is another nontrivial task: you need to create a shader to render the OES (external) texture, and use GLES20.glReadPixels to obtain the RGB pixels in a ByteBuffer. You'd then upload these RGB bitmaps into your own textures; a minimal readback sketch follows.
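For the readback step specifically, the core is a glReadPixels call made while the pbuffer EGL surface is current, right after the external (OES) texture has been drawn onto it. A small sketch (width and height are the video dimensions; the class name is just illustrative):

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class FrameReadback {
    // Call with the pbuffer EGL surface current, right after drawing the frame quad into it.
    public static ByteBuffer readRgba(int width, int height) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                                      .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        pixels.rewind();
        // 'pixels' now holds the frame as RGBA with the bottom row first; flip or convert it
        // as needed before uploading into your own GL_TEXTURE_2D texture.
        return pixels;
    }
}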
However, since you want to load textures, you may find an optimized way to render the video frame directly into your own textures and avoid moving pixels around.
Hope this helps, and good luck with the implementation.
I actually want to post my current implementation.
Here is the header file:
#include <jni.h>
#include <memory>
#include <opencv2/opencv.hpp>
#include "looper.h"
#include "media/NdkMediaCodec.h"
#include "media/NdkMediaExtractor.h"
#ifndef NATIVE_CODEC_NATIVECODECC_H
#define NATIVE_CODEC_NATIVECODECC_H
// Originally taken from https://github.com/googlesamples/android-ndk/tree/master/native-codec
// Conversion taken from https://github.com/kueblert/AndroidMediaCodec/blob/master/nativecodecvideo.cpp
class NativeCodec
{
public:
NativeCodec() = default;
~NativeCodec() = default;
void DecodeDone();
void Pause();
void Resume();
bool createStreamingMediaPlayer(const std::string &filename);
void setPlayingStreamingMediaPlayer(bool isPlaying);
void shutdown();
void rewindStreamingMediaPlayer();
int getFrameWidth() const
{
return m_frameWidth;
}
int getFrameHeight() const
{
return m_frameHeight;
}
void getNextFrame(std::vector<unsigned char> &imageData);
private:
struct Workerdata
{
AMediaExtractor *ex;
AMediaCodec *codec;
bool sawInputEOS;
bool sawOutputEOS;
bool isPlaying;
bool renderonce;
};
void Seek();
ssize_t m_bufidx = -1;
int m_frameWidth = -1;
int m_frameHeight = -1;
cv::Size m_frameSize;
Workerdata m_data = {nullptr, nullptr, false, false, false, false};
};
#endif //NATIVE_CODEC_NATIVECODECC_H
Here is the .cc file:
#include "native_codec.h"
#include <cassert>
#include "native_codec.h"
#include <jni.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <cerrno>
#include <climits>
#include "util.h"
#include <android/log.h>
#include <string>
#include <chrono>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <string>
#include <chrono>
// for native window JNI
#include <android/native_window_jni.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
using namespace std;
using namespace std::chrono;
bool NativeCodec::createStreamingMediaPlayer(const std::string &filename)
{
AMediaExtractor *ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSource(ex, filename.c_str());
if (err != AMEDIA_OK)
{
return false;
}
size_t numtracks = AMediaExtractor_getTrackCount(ex);
AMediaCodec *codec = nullptr;
for (int i = 0; i < numtracks; i++)
{
AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
int format_color;
AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_COLOR_FORMAT, &format_color);
bool ok = AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_WIDTH, &m_frameWidth);
ok = ok && AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_HEIGHT,
&m_frameHeight);
if (ok)
{
m_frameSize = cv::Size(m_frameWidth, m_frameHeight);
} else
{
//Asking format for frame width / height failed.
}
const char *mime;
if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime))
{
return false;
} else if (!strncmp(mime, "video/", 6))
{
// Omitting most error handling for clarity.
// Production code should check for errors.
AMediaExtractor_selectTrack(ex, i);
codec = AMediaCodec_createDecoderByType(mime);
AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
m_data.ex = ex;
m_data.codec = codec;
m_data.sawInputEOS = false;
m_data.sawOutputEOS = false;
m_data.isPlaying = false;
m_data.renderonce = true;
AMediaCodec_start(codec);
}
AMediaFormat_delete(format);
}
return true;
}
void NativeCodec::getNextFrame(std::vector<unsigned char> &imageData)
{
if (!m_data.sawInputEOS)
{
m_bufidx = AMediaCodec_dequeueInputBuffer(m_data.codec, 2000);
if (m_bufidx >= 0)
{
size_t bufsize;
auto buf = AMediaCodec_getInputBuffer(m_data.codec, m_bufidx, &bufsize);
auto sampleSize = AMediaExtractor_readSampleData(m_data.ex, buf, bufsize);
if (sampleSize < 0)
{
sampleSize = 0;
m_data.sawInputEOS = true;
}
auto presentationTimeUs = AMediaExtractor_getSampleTime(m_data.ex);
AMediaCodec_queueInputBuffer(m_data.codec, m_bufidx, 0, sampleSize,
presentationTimeUs,
m_data.sawInputEOS ?
AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
AMediaExtractor_advance(m_data.ex);
}
}
if (!m_data.sawOutputEOS)
{
AMediaCodecBufferInfo info;
auto status = AMediaCodec_dequeueOutputBuffer(m_data.codec, &info, 0);
if (status >= 0)
{
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
{
__android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM",
                    "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM :: %s", "output EOS");
m_data.sawOutputEOS = true;
}
if (info.size > 0)
{
// size_t bufsize;
uint8_t *buf = AMediaCodec_getOutputBuffer(m_data.codec,
static_cast<size_t>(status), /*bufsize*/nullptr);
cv::Mat YUVframe(cv::Size(m_frameSize.width, static_cast<int>
(m_frameSize.height * 1.5)), CV_8UC1, buf);
cv::Mat colImg(m_frameSize, CV_8UC3);
cv::cvtColor(YUVframe, colImg, CV_YUV420sp2BGR, 3);
auto dataSize = colImg.rows * colImg.cols * colImg.channels();
imageData.assign(colImg.data, colImg.data + dataSize);
}
AMediaCodec_releaseOutputBuffer(m_data.codec, static_cast<size_t>(status),
info.size != 0);
if (m_data.renderonce)
{
m_data.renderonce = false;
return;
}
} else if (status < 0)
{
getNextFrame(imageData);
} else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED)
{
__android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED",
                    "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED :: %s", "output buffers changed");
} else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED)
{
auto format = AMediaCodec_getOutputFormat(m_data.codec);
__android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED",
                    "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED :: %s", AMediaFormat_toString(format));
AMediaFormat_delete(format);
} else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER)
{
__android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_TRY_AGAIN_LATER",
                    "AMEDIACODEC_INFO_TRY_AGAIN_LATER :: %s", "no output buffer right now");
} else
{
__android_log_print(ANDROID_LOG_ERROR, "UNEXPECTED INFO CODE",
                    "UNEXPECTED INFO CODE :: %zd", status);
}
}
}
void NativeCodec::DecodeDone()
{
if (m_data.codec != nullptr)
{
AMediaCodec_stop(m_data.codec);
AMediaCodec_delete(m_data.codec);
AMediaExtractor_delete(m_data.ex);
m_data.sawInputEOS = true;
m_data.sawOutputEOS = true;
}
}
void NativeCodec::Seek()
{
AMediaExtractor_seekTo(m_data.ex, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaCodec_flush(m_data.codec);
m_data.sawInputEOS = false;
m_data.sawOutputEOS = false;
if (!m_data.isPlaying)
{
m_data.renderonce = true;
}
}
void NativeCodec::Pause()
{
if (m_data.isPlaying)
{
// flush all outstanding codecbuffer messages with a no-op message
m_data.isPlaying = false;
}
}
void NativeCodec::Resume()
{
if (!m_data.isPlaying)
{
m_data.isPlaying = true;
}
}
void NativeCodec::setPlayingStreamingMediaPlayer(bool isPlaying)
{
if (isPlaying)
{
Resume();
} else
{
Pause();
}
}
void NativeCodec::shutdown()
{
m_bufidx = -1;
DecodeDone();
}
void NativeCodec::rewindStreamingMediaPlayer()
{
Seek();
}
So, according to this implementation, for the format conversion (in my case from YUV to BGR) you need to set up OpenCV. To understand how to do that, check these two sources:
https://www.youtube.com/watch?v=jN9Bv5LHXMk
https://www.youtube.com/watch?v=0fdIiOqCz3o
And as a sample, I leave my CMakeLists.txt file here:
#For add OpenCV take a look at this video
#https://www.youtube.com/watch?v=jN9Bv5LHXMk
#https://www.youtube.com/watch?v=0fdIiOqCz3o
#Look at the video than compare with this file and make the same
set(pathToProject C:/Users/tetavi/Downloads/Buffer/OneMoreArNew/arcore-android-sdk/samples/hello_ar_c)
set(pathToOpenCv C:/OpenCV-android-sdk)
cmake_minimum_required(VERSION 3.4.1)
set(CMAKE_VERBOSE_MAKEFILE on)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
include_directories(${pathToOpenCv}/sdk/native/jni/include)
# Import the ARCore library.
add_library(arcore SHARED IMPORTED)
set_target_properties(arcore PROPERTIES IMPORTED_LOCATION
${ARCORE_LIBPATH}/${ANDROID_ABI}/libarcore_sdk_c.so
INTERFACE_INCLUDE_DIRECTORIES ${ARCORE_INCLUDE}
)
# Import the glm header file from the NDK.
add_library(glm INTERFACE)
set_target_properties(glm PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES
${ANDROID_NDK}/sources/third_party/vulkan/src/libs/glm
)
# This is the main app library.
add_library(hello_ar_native SHARED
src/main/cpp/background_renderer.cc
src/main/cpp/hello_ar_application.cc
src/main/cpp/jni_interface.cc
src/main/cpp/video_render.cc
src/main/cpp/geometry_loader.cc
src/main/cpp/plane_renderer.cc
src/main/cpp/native_codec.cc
src/main/cpp/point_cloud_renderer.cc
src/main/cpp/frame_manager.cc
src/main/cpp/safe_queue.cc
src/main/cpp/stb_image.h
src/main/cpp/util.cc)
add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION
${pathToProject}/app/src/main/jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libopencv_java3.so)
target_include_directories(hello_ar_native PRIVATE
src/main/cpp)
target_link_libraries(hello_ar_native ${log-lib} lib_opencv
android
log
GLESv2
glm
mediandk
arcore)
Usage:
You need to create the streaming media player with this method:
NativeCodec::createStreamingMediaPlayer(pathToYourMP4file);
and then just use
NativeCodec::getNextFrame(imageData);
Feel free to ask questions.

android MediaSource not recognized as a class in JNI code

I am trying to use the Android NDK to develop a simple decoder/player application. I created one project using the Android SDK and then created a folder named jni in my project directory.
Inside the jni directory I created one omx.cpp file, and I want to write my own class inside it which inherits Android's MediaSource from stagefright. I have also included the stagefright header files in my project. I am loading libstagefright.so by using dlopen in my omx.cpp file.
The code I am using is as follows:
using android::sp;
namespace android
{
class ImageSource : public MediaSource {
public:
ImageSource(int width, int height, int colorFormat)
: mWidth(width),
mHeight(height),
mColorFormat(colorFormat)
{
}
public:
int mWidth;
int mHeight;
int mColorFormat;
virtual status_t start(MetaData *params = NULL) {}
virtual status_t stop() {}
// Returns the format of the data output by this media source.
virtual sp<MetaData> getFormat() {}
virtual status_t read(
MediaBuffer **buffer, const MediaSource::ReadOptions *options) {
}
/*protected:
virtual ~ImageSource() {}*/
};
void Java_com_exampleomxvideodecoder_MainActivity(JNIEnv *env, jobject obj, jobject surface)
{
void *dlhandle;
dlhandle = dlopen("d:\libstagefright.so", RTLD_NOW);
if (dlhandle == NULL) {
printf("Service Not Found: %s\n", dlerror());
}
int width = 720;
int height = 480;
int colorFormat = 0;
sp<MediaSource> img_source = new ImageSource(width, height, colorFormat);
sp<MetaData> enc_meta = new MetaData;
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
// enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
enc_meta->setInt32(kKeyWidth, width);
enc_meta->setInt32(kKeyHeight, height);
enc_meta->setInt32(kKeySampleRate, kFramerate);
enc_meta->setInt32(kKeyBitRate, kVideoBitRate);
enc_meta->setInt32(kKeyStride, width);
enc_meta->setInt32(kKeySliceHeight, height);
enc_meta->setInt32(kKeyIFramesInterval, kIFramesIntervalSec);
enc_meta->setInt32(kKeyColorFormat, colorFormat);
sp<MediaSource> encoder =
OMXCodec::Create(
client.interface(), enc_meta, true, image_source);
sp<MPEG4Writer> writer = new MPEG4Writer("/sdcard/screenshot.mp4");
writer->addSource(encoder);
// you can add an audio source here if you want to encode audio as well
sp<MediaSource> audioEncoder =
OMXCodec::Create(client.interface(), encMetaAudio, true, audioSource);
writer->addSource(audioEncoder);
writer->setMaxFileDuration(kDurationUs);
CHECK_EQ(OK, writer->start());
while (!writer->reachedEOS()) {
fprintf(stderr, ".");
usleep(100000);
}
err = writer->stop();
}
}
I have the following doubts:
1. In a JNI function, is it okay if we create some class objects and use them to call functions of, say, the MediaSource class, or do we have to create separate .cpp and .h files? If we use separate files, how do we call/reference them from the JNI function?
2. Is this the right approach, making our own wrapper class which inherits from MediaSource, or is there another way?
Basically I want to make an application which takes a .mp4/.avi file, demuxes it into separate audio/video, decodes it, and renders/plays it using Android stagefright and OpenMAX only.
If ffmpeg is suggested for the source/demuxing, then how do I integrate it with the Android stagefright framework?
Regards
To answer your first question: yes, it is possible to define a class in the same source file and instantiate it in a function below. A good example of such an implementation is the DummySource of the recordVideo utility, which can be found in the cmds directory.
However, your file should include the MediaSource.h file either directly or indirectly, as can be seen in the aforementioned example too.
The second question is more of an implementation choice or religion. For some developers, defining a new class and inheriting from MediaSource might be the right way, as you have tried in your example.
There is an alternate implementation where you can create the source and typecast it into a MediaSource strong pointer, as shown in the example below.
sp<MediaSource> mVideoSource;
mVideoSource = new ImageSource(width, height, colorformat);
where ImageSource implements the start and read methods. I feel the recordVideo example above is a good reference.
Regarding the last paragraph, I will respond on your other query, but I feel there is a fundamental mismatch between your objective and the code. The objective is to create a parser or MediaExtractor and a corresponding decoder, but the code above instantiates an ImageSource, which I presume gives YUV frames, and creates an encoder, since you are passing true for encoder creation.
I will also add further comments on the NDK possibilities on the other thread.

Save recorded audio to file - OpenSL ES - Android

I'm trying to record from the microphone, add some effects, and then save this to a file.
I've started with the example native-audio included in the Android NDK.
I've managed to add some reverb and play it back, but I haven't found any examples or help on how to accomplish this.
Any and all help is welcome.
OpenSL is not a framework for file formats and access. If you want a raw PCM file, simply open it for writing and put all the buffers from the OpenSL callback into the file. But if you want encoded audio, you need your own codec and format handler. You can use the ffmpeg libraries, or the built-in stagefright.
Update: write playback buffers to a local raw PCM file.
We start with native-audio-jni.c:
#include <stdio.h>
FILE* rawFile = NULL;
int bClosing = 0;
...
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
assert(bq == bqPlayerBufferQueue);
assert(NULL == context);
// for streaming playback, replace this test by logic to find and fill the next buffer
if (--nextCount > 0 && NULL != nextBuffer && 0 != nextSize) {
SLresult result;
// enqueue another buffer
result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, nextBuffer, nextSize);
// the most likely other result is SL_RESULT_BUFFER_INSUFFICIENT,
// which for this code example would indicate a programming error
assert(SL_RESULT_SUCCESS == result);
(void)result;
// AlexC: here we write:
if (rawFile) {
fwrite(nextBuffer, nextSize, 1, rawFile);
}
}
if (bClosing) { // it is important to do this in a callback, to be on the correct thread
fclose(rawFile);
rawFile = NULL;
}
// AlexC: end of changes
}
...
void Java_com_example_nativeaudio_NativeAudio_startRecording(JNIEnv* env, jclass clazz)
{
bClosing = 0;
rawFile = fopen("/sdcard/rawFile.pcm", "wb");
...
void Java_com_example_nativeaudio_NativeAudio_shutdown(JNIEnv* env, jclass clazz)
{
bClosing = 1;
...
Pass the raw vector from C to Java and encode it to MP3 with MediaRecorder. I don't know if you can set the audio source from a raw vector, but maybe...

Is it possible to get a byte buffer directly from an audio asset in OpenSL ES (for Android)?

I would like to get a byte buffer from an audio asset using the OpenSL ES FileDescriptor object, so I can enqueue it repeatedly to a SimpleBufferQueue, instead of using SL interfaces to play/stop/seek the file.
There are three main reasons why I would like to manage the sample bytes directly:
OpenSL uses an AudioTrack layer to play/stop/etc. for Player Objects. This not only introduces unwanted overhead, but it also has several bugs, and rapid starts/stops of the player cause lots of problems.
I need to manipulate the byte buffer directly for custom DSP effects.
The clips I'm going to be playing are small, and can all be loaded into memory to avoid file I/O overhead. Plus, enqueueing my own buffers will allow me to reduce latency by writing 0's to the output sink, and simply switching to sample bytes when they are playing, rather than STOPPING, PAUSING and PLAYING the AudioTrack.
Okay, so justifications complete - here's what I've tried - I have a Sample struct which contains, essentially, an input and output track, and a byte array to hold the samples. The input is my FileDescriptor player, and the output is a SimpleBufferQueue object. Here's my struct:
typedef struct Sample_ {
// buffer to hold all samples
short *buffer;
int totalSamples;
SLObjectItf fdPlayerObject;
// file descriptor player interfaces
SLPlayItf fdPlayerPlay;
SLSeekItf fdPlayerSeek;
SLMuteSoloItf fdPlayerMuteSolo;
SLVolumeItf fdPlayerVolume;
SLAndroidSimpleBufferQueueItf fdBufferQueue;
SLObjectItf outputPlayerObject;
SLPlayItf outputPlayerPlay;
// output buffer interfaces
SLAndroidSimpleBufferQueueItf outputBufferQueue;
} Sample;
After initializing a file player fdPlayerObject and malloc-ing memory for my byte buffer with
sample->buffer = malloc(sizeof(short)*sample->totalSamples);
I'm getting its BufferQueue interface with
// get the buffer queue interface
result = (*(sample->fdPlayerObject))->GetInterface(sample->fdPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &(sample->fdBufferQueue));
Then I instantiate an output player:
// create audio player for output buffer queue
const SLInterfaceID ids1[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req1[] = {SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &(sample->outputPlayerObject), &outputAudioSrc, &audioSnk,
1, ids1, req1);
// realize the output player
result = (*(sample->outputPlayerObject))->Realize(sample->outputPlayerObject, SL_BOOLEAN_FALSE);
assert(result == SL_RESULT_SUCCESS);
// get the play interface
result = (*(sample->outputPlayerObject))->GetInterface(sample->outputPlayerObject, SL_IID_PLAY, &(sample->outputPlayerPlay));
assert(result == SL_RESULT_SUCCESS);
// get the buffer queue interface for output
result = (*(sample->outputPlayerObject))->GetInterface(sample->outputPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
&(sample->outputBufferQueue));
assert(result == SL_RESULT_SUCCESS);
// set the player's state to playing
result = (*(sample->outputPlayerPlay))->SetPlayState(sample->outputPlayerPlay, SL_PLAYSTATE_PLAYING);
assert(result == SL_RESULT_SUCCESS);
When I want to play the sample, I am using:
Sample *sample = &samples[sampleNum];
// THIS WORKS FOR SIMPLY PLAYING THE SAMPLE, BUT I WANT THE BUFFER DIRECTLY
// if (sample->fdPlayerPlay != NULL) {
// // set the player's state to playing
// (*(sample->fdPlayerPlay))->SetPlayState(sample->fdPlayerPlay, SL_PLAYSTATE_PLAYING);
// }
// fill buffer with the samples from the file descriptor
(*(sample->fdBufferQueue))->Enqueue(sample->fdBufferQueue, sample->buffer,sample->totalSamples*sizeof(short));
// write the buffer to the outputBufferQueue, which is already playing
(*(sample->outputBufferQueue))->Enqueue(sample->outputBufferQueue, sample->buffer, sample->totalSamples*sizeof(short));
However, this causes my app to freeze and shut down. Something is wrong here.
Also, I would prefer to not get the samples from the File Descriptor's BufferQueue each time. Instead, I'd like to permanently store it in a byte array, and Enqueue that to the output whenever I like.
Decoding to PCM is available at API level 14 and higher.
When you create the decoder player, you need to set an Android simple buffer queue as the data sink:
// For init use something like this:
SLDataLocator_AndroidFD locatorIn = {SL_DATALOCATOR_ANDROIDFD, decriptor, start, length};
SLDataFormat_MIME dataFormat = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
SLDataSource audioSrc = {&locatorIn, &dataFormat};
SLDataLocator_AndroidSimpleBufferQueue loc_bq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
SLDataSink audioSnk = { &loc_bq, NULL };
const SLInterfaceID ids[2] = {SL_IID_PLAY, SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req[2] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
SLresult result = (*engineEngine)->CreateAudioPlayer(engineEngine, &(sample->fdPlayerObject), &audioSrc, &audioSnk, 2, ids, req);
For the decoder queue, you need to enqueue a set of empty buffers to the Android simple buffer queue, which will be filled with PCM data.
You also need to register a callback handler with the decoder queue, which will be called when PCM data is ready. The callback handler should process the PCM data, re-enqueue the now-empty buffer, and then return. The application is responsible for keeping track of decoded buffers; the callback parameter list does not include sufficient information to indicate which buffer was filled or which buffer to enqueue next.
Decode to PCM supports pause and initial seek. Volume control, effects, looping, and playback rate are not supported.
Read Decode audio to PCM from OpenSL ES for Android for more details.

How to encode non-camera video in Android

I am working on an android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android Media Recorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use a FFMPEG port (based on the Rockplayer build) but am running into difficulties with missing functions since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure if they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap. You can extend the ViewBase class and adapt it for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you will have the following function, which you can use. It's pretty much hard work, but the best I can think of.
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);

    if (Utils.matToBitmap(picture, bmp))
        return bmp;

    bmp.recycle();
    return null;
}
You can use a pure Java open source library called JCodec ( http://jcodec.org ).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses the JCodec low-level API and should be what you need (CORRECTED):
public class SequenceEncoder {
private SeekableByteChannel ch;
private Picture toEncode;
private RgbToYuv420 transform;
private H264Encoder encoder;
private ArrayList<ByteBuffer> spsList;
private ArrayList<ByteBuffer> ppsList;
private CompressedTrack outTrack;
private ByteBuffer _out;
private int frameNo;
private MP4Muxer muxer;
public SequenceEncoder(File out) throws IOException {
this.ch = NIOUtils.writableFileChannel(out);
// Transform to convert between RGB and YUV
transform = new RgbToYuv420(0, 0);
// Muxer that will store the encoded frames
muxer = new MP4Muxer(ch, Brand.MP4);
// Add video track to muxer
outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
// Allocate a buffer big enough to hold output frames
_out = ByteBuffer.allocate(1920 * 1080 * 6);
// Create an instance of encoder
encoder = new H264Encoder();
// Encoder extra data ( SPS, PPS ) to be stored in a special place of
// MP4
spsList = new ArrayList<ByteBuffer>();
ppsList = new ArrayList<ByteBuffer>();
}
public void encodeImage(BufferedImage bi) throws IOException {
if (toEncode == null) {
toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
}
// Perform conversion
for (int i = 0; i < 3; i++)
Arrays.fill(toEncode.getData()[i], 0);
transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
// Encode image into H.264 frame, the result is stored in '_out' buffer
_out.clear();
ByteBuffer result = encoder.encodeFrame(_out, toEncode);
// Based on the frame above form correct MP4 packet
spsList.clear();
ppsList.clear();
H264Utils.encodeMOVPacket(result, spsList, ppsList);
// Add packet to video track
outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
frameNo++;
}
public void finish() throws IOException {
// Push saved SPS/PPS to a special storage in MP4
outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
// Write MP4 header and finalize recording
muxer.writeHeader();
NIOUtils.closeQuietly(ch);
}
public static void main(String[] args) throws IOException {
SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
for (int i = 1; i < 100; i++) {
BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
encoder.encodeImage(bi);
}
encoder.finish();
}
}
You can get the JCodec jar from the project web-site.
