android MediaSource not recognized as a class in JNI code - android

I am trying to use the Android NDK to develop a simple decoder/player application. I created a project using the Android SDK and then created a folder named jni in my project directory.
Inside the jni directory I created one omx.cpp file, and I want to write my own class in it which inherits from Android's MediaSource in stagefright. I have also included the stagefright header files in my project. I am loading libstagefright.so with dlopen in my omx.cpp file.
The code I am using is as follows:
using android::sp;

namespace android
{

class ImageSource : public MediaSource {
public:
    ImageSource(int width, int height, int colorFormat)
        : mWidth(width),
          mHeight(height),
          mColorFormat(colorFormat) {
    }

    int mWidth;
    int mHeight;
    int mColorFormat;

    // Stubs of non-void functions must return a value; an empty body
    // here is undefined behaviour.
    virtual status_t start(MetaData *params = NULL) { return OK; }
    virtual status_t stop() { return OK; }

    // Returns the format of the data output by this media source.
    virtual sp<MetaData> getFormat() { return NULL; }

    virtual status_t read(
            MediaBuffer **buffer, const MediaSource::ReadOptions *options) {
        return ERROR_END_OF_STREAM;
    }

protected:
    virtual ~ImageSource() {}
};
void Java_com_exampleomxvideodecoder_MainActivity(JNIEnv *env, jobject obj, jobject surface)
{
    // dlopen needs an on-device path; "d:\..." is a Windows path, and the
    // backslash is an escape character in C string literals anyway.
    void *dlhandle = dlopen("/system/lib/libstagefright.so", RTLD_NOW);
    if (dlhandle == NULL) {
        printf("Service Not Found: %s\n", dlerror());
    }

    int width = 720;
    int height = 480;
    int colorFormat = 0;

    sp<MediaSource> img_source = new ImageSource(width, height, colorFormat);

    sp<MetaData> enc_meta = new MetaData;
    // enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
    // enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
    enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
    enc_meta->setInt32(kKeyWidth, width);
    enc_meta->setInt32(kKeyHeight, height);
    enc_meta->setInt32(kKeySampleRate, kFramerate);
    enc_meta->setInt32(kKeyBitRate, kVideoBitRate);
    enc_meta->setInt32(kKeyStride, width);
    enc_meta->setInt32(kKeySliceHeight, height);
    enc_meta->setInt32(kKeyIFramesInterval, kIFramesIntervalSec);
    enc_meta->setInt32(kKeyColorFormat, colorFormat);

    // "client" must be a connected OMXClient; note the source variable is
    // img_source (the original code mixed img_source and image_source).
    sp<MediaSource> encoder =
        OMXCodec::Create(client.interface(), enc_meta, true, img_source);

    sp<MPEG4Writer> writer = new MPEG4Writer("/sdcard/screenshot.mp4");
    writer->addSource(encoder);

    // you can add an audio source here if you want to encode audio as well;
    // encMetaAudio and audioSource are assumed to be set up elsewhere
    sp<MediaSource> audioEncoder =
        OMXCodec::Create(client.interface(), encMetaAudio, true, audioSource);
    writer->addSource(audioEncoder);

    writer->setMaxFileDuration(kDurationUs);
    CHECK_EQ(OK, writer->start());
    while (!writer->reachedEOS()) {
        fprintf(stderr, ".");
        usleep(100000);
    }
    status_t err = writer->stop();
}
}
I have the following doubts:
1. In a JNI function, is it okay to create class objects and use them to call functions of, say, the MediaSource class, or do we have to create separate .cpp and .h files? If we use separate files, how do we call/reference them from the JNI function?
2. Is this the right approach to make our own wrapper class which inherits from MediaSource, or is there another way?
Basically I want to make an application which takes an .mp4/.avi file, demuxes it to separate audio/video, decodes and renders/plays it, using Android stagefright and OpenMAX only.
If ffmpeg is suggested for the source and demuxing, then how do I integrate it with the Android stagefright framework?
Regards

To answer your first question: yes, it is possible to define a class in the same source file and instantiate it in a function below. A very good example of such an implementation is the DummySource of the recordVideo utility, which can be found in the cmds directory.
However, your file should include the MediaSource.h header either directly or indirectly, as can be seen in the aforementioned example too.
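For illustration, a minimal set of includes for such a file might look like this (a sketch; the paths assume an AOSP-style include tree and can vary between Android releases):

#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaBufferGroup.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/MediaErrors.h>  // ERROR_END_OF_STREAM etc.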
The second question is more of an implementation choice, or religion. For some developers, defining a new class and inheriting from MediaSource might be the right way, as you have tried in your example.
There is an alternate implementation where you create the source and typecast it into a MediaSource strong pointer, as shown in the example below.
sp<MediaSource> mVideoSource;
mVideoSource = new ImageSource(width, height, colorFormat);
where ImageSource implements the start and read methods. I feel the recordVideo example above is a good reference.
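To make that concrete, here is a hedged sketch of what a minimal read() could look like, loosely modelled on DummySource (mGroup is an assumed MediaBufferGroup member initialized in start(); the YUV420 size computation is only illustrative):

virtual status_t read(
        MediaBuffer **buffer, const MediaSource::ReadOptions *options) {
    // Hand out one YUV frame per call from a pre-allocated buffer group.
    status_t err = mGroup.acquire_buffer(buffer);
    if (err != OK) {
        return err;
    }
    // ... fill (*buffer)->data() with one frame here ...
    (*buffer)->set_range(0, mWidth * mHeight * 3 / 2);  // YUV420 frame size
    return OK;
}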
Regarding the last paragraph, I will respond on your other query, but I feel there is a fundamental mismatch between your objective and your code. The objective is to create a parser (MediaExtractor) and a corresponding decoder, but the code above instantiates an ImageSource, which I presume produces YUV frames, and creates an encoder, since you are passing true for encoder creation.
I will also add further comments on the NDK possibilities on the other thread.

Related

Firemonkey TSpinBox height

I'm using C++ Builder 10.3 and my application is for Android; please note I'm very new to C++ Builder.
I'm trying to change the font size and height of a TSpinBox, but I'm unable to change the height.
I tried my best to port the following Delphi solution,
Firemonkey TEdit height, but with no joy, and I'm at a total loss.
AdjustFixedSize is declared private, so I don't think it's being overridden. I have also tried creating a setter and calling it, but yet again I was unable to get it to work. The biggest problem I have is my lack of C++ Builder knowledge.
Header
class TMySpinBox : public TSpinBox {
public:
    TMySpinBox();
protected:
    virtual void AdjustFixedSize(const TControl Ref);
};
CPP
TMySpinBox::TMySpinBox() : TSpinBox(0) {}
void TMySpinBox::AdjustFixedSize(const TControl Ref) {
    SetAdjustType(TAdjustType::None);
}
Code
TMySpinBox* SpinBox1 = new TMySpinBox();
SpinBox1->ControlType = TControlType::Platform;
SpinBox1->Parent = Panel1->Parent;
SpinBox1->Position->Y = 16.0;
SpinBox1->Position->X = 16.0;
SpinBox1->Min = 2;
SpinBox1->Max = 99;
SpinBox1->Font->Size = 48;
SpinBox1->Visible = true;
SpinBox1->Value = 2;
SpinBox1->Align = TAlignLayout::None;
SpinBox1->Height = 100;
SpinBox1->Width = 100;
I gave it a try and moved a few things around - mostly into the constructor of the customized TSpinBox. I skipped using AdjustFixedSize since it doesn't seem necessary.
myspinbox.h
#ifndef myspinboxH
#define myspinboxH
//---------------------------------------------------------------------------
#include <FMX.SpinBox.hpp>
class TMySpinBox : public TSpinBox {
protected:
    // The correct signature but commented out since I didn't use it:
    //void __fastcall AdjustFixedSize(TControl* const ReferenceControl) override;
public:
    // C++ Builder constructors can be virtual and override, which is not
    // standard C++. This is afaik only important if you make a custom component
    // to integrate with the IDE to support streaming it, but I'll make it
    // virtual anyway.
    // This component sets Owner and Parent to the same component. You can change
    // that if you'd like to keep them separate.
    virtual __fastcall TMySpinBox(Fmx::Types::TFmxObject* OwnerAndParent);
};
#endif
myspinbox.cpp
#pragma hdrstop
#include "myspinbox.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
__fastcall TMySpinBox::TMySpinBox(Fmx::Types::TFmxObject* OwnerAndParent) :
    TSpinBox(OwnerAndParent) // set owner
{
    // set properties
    this->Parent = OwnerAndParent;
    this->Position->Y = 16.0;
    this->Position->X = 16.0;
    this->Min = 2;
    this->Max = 99;
    this->Value = this->Min;
    this->Height = 100;
    this->Width = 100;
    // Remove the styled setting for Size to enable setting our own font size
    this->StyledSettings >>= Fmx::Types::TStyledSetting::Size;
    this->Font->Size = 48;
}
Code
// Let Panel1 own and contain the spinbox and destroy it when it itself is destroyed
TMySpinBox* SpinBox1 = new TMySpinBox(Panel1);
Disclaimer: Only tested on Windows

How MediaCodec finds the codec inside the framework in Android?

I am trying to understand how MediaCodec is used for hardware decoding.
My knowledge of Android internals is very limited.
Here are my findings:
There is an XML file which describes the codec details in the Android system,
device/ti/omap3evm/media_codecs.xml for example.
This means that if we create a codec from the Java application with
MediaCodec codec = MediaCodec.createDecoderByType(type);
it should find the respective codec with the help of the XML file.
What am I doing?
I am trying to figure out which part of the code reads the XML and finds the codec based on the given 'type'.
1) Application layer:
MediaCodec codec = MediaCodec.createDecoderByType(type);
2) MediaCodec.java -> [ frameworks/base/media/java/android/media/MediaCodec.java ]
public static MediaCodec createDecoderByType(String type) {
    return new MediaCodec(type, true /* nameIsType */, false /* encoder */);
}
3)
private MediaCodec(
        String name, boolean nameIsType, boolean encoder) {
    native_setup(name, nameIsType, encoder); // --> JNI call
}
4)
JNI implementation -> [ frameworks/base/media/jni/android_media_MediaCodec.cpp ]
static void android_media_MediaCodec_native_setup(..) {
    .......
    const char *tmp = env->GetStringUTFChars(name, NULL);
    sp<JMediaCodec> codec = new JMediaCodec(env, thiz, tmp, nameIsType, encoder); // ---> Here
}
From frameworks/base/media/jni/android_media_MediaCodec.cpp:
JMediaCodec::JMediaCodec( .. ) {
    ....
    mCodec = MediaCodec::CreateByType(mLooper, name, encoder); // Call goes to libstagefright
    ....
}
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const char *mime, bool encoder) {
    sp<MediaCodec> codec = new MediaCodec(looper);
    if (codec->init(mime, true /* nameIsType */, encoder) != OK) { // --> HERE
        return NULL;
    }
    return codec;
}
status_t MediaCodec::init(const char *name, bool nameIsType, bool encoder) {
    // MediaCodec
}
I am stuck at this point in the flow. If someone could point out how to take it forward, it would help a lot.
Thanks.
Let's take the flow step by step.
MediaCodec::CreateByType will create a new MediaCodec object
MediaCodec constructor would create a new ACodec object and store it as mCodec
When MediaCodec::init is invoked, it internally instructs the underlying ACodec to allocate the OMX component through mCodec->initiateAllocateComponent.
ACodec::initiateAllocateComponent would invoke onAllocateComponent
ACodec::UninitializedState::onAllocateComponent would invoke OMXCodec::findMatchingCodecs to find the codecs matching the MIME type passed from the caller.
In OMXCodec::findMatchingCodecs, there is a call to retrieve an instance of MediaCodecList as MediaCodecList::getInstance().
In MediaCodecList::getInstance, there is a check if there is an existing MediaCodecList or else a new object of MediaCodecList is created.
In the constructor of MediaCodecList, there is a call to parseXMLFile with the file name as /etc/media_codecs.xml.
parseXMLFile reads the contents and stores the different component names etc. into MediaCodecList, which can be used for any other codec instance too. The helper function employed for the parsing is startElementHandler. A function of interest could be addMediaCodec.
Through these steps, the XML file contents are translated into a list which can be employed by any other module. MediaCodecList is exposed at the Java layer too, as can be seen here.
I have skipped a few hops wherein MediaCodec and ACodec employ messages to actually communicate and invoke methods, but the flow presented should give a good idea about the underlying mechanism.
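For illustration, entries in media_codecs.xml map a MIME type to a component name roughly like this (a sketch only; component names and the exact schema vary per device and release):

<MediaCodecs>
    <Decoders>
        <MediaCodec name="OMX.TI.DUCATI1.VIDEO.DECODER" type="video/avc" />
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>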

fopen/fread APK Assets from NativeActivity on Android

I have only been able to find solutions dated 2010 and earlier, so I wanted to see if there is a more up-to-date stance on this.
I'd like to avoid using Java and purely use C++ to access files (some under, some over 1 MB) stored away in the APK. Using AssetManager means I can't access files like every other file on every other operating system (including iOS).
If not, is there a method in C++ where I could somehow map fopen/fread to the AssetManager APIs?
I actually found a pretty elegant answer to the problem and blogged about it here.
The summary is:
The AAssetManager API has NDK bindings. This lets you load assets from the APK.
It is possible to combine a set of functions that know how to read/write/seek against anything and disguise them as a file pointer (FILE*).
If we create a function that takes an asset name, uses AssetManager to open it, and then disguises the result as a FILE* then we have something that's very similar to fopen.
If we define a macro named fopen we can replace all uses of that function with ours instead.
My blog has a full write-up and all the code you need to implement it in pure C. I use this to build lua and libogg for Android.
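For reference, here is a minimal sketch of that trick (the names android_fopen, asset_read, etc. are hypothetical, and g_assetMgr is assumed to be an AAssetManager* obtained from your NativeActivity or via JNI):

#include <android/asset_manager.h>
#include <stdio.h>

static AAssetManager* g_assetMgr;  // assumed to be set elsewhere

static int asset_read(void* cookie, char* buf, int size) {
    return AAsset_read((AAsset*)cookie, buf, size);
}
static fpos_t asset_seek(void* cookie, fpos_t offset, int whence) {
    return AAsset_seek((AAsset*)cookie, offset, whence);
}
static int asset_close(void* cookie) {
    AAsset_close((AAsset*)cookie);
    return 0;
}

// Open an asset and disguise it as a FILE* via funopen (a BSD API that
// Android's Bionic libc provides). Returns NULL if the asset is missing.
FILE* android_fopen(const char* name) {
    AAsset* asset = AAssetManager_open(g_assetMgr, name, AASSET_MODE_STREAMING);
    if (asset == NULL) return NULL;
    return funopen(asset, asset_read, NULL /* read-only */, asset_seek, asset_close);
}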
Short answer
No. AFAIK, mapping fread/fopen in C++ to AAssetManager is not possible. And if it were, it would probably limit you to files in the assets folder. There is a workaround, however, but it's not straightforward.
Long Answer
It IS possible to access any file anywhere in the APK using zlib and libzip in C++.
Requirements: some Java, zlib and/or libzip (for ease of use, which is what I settled for). You can get libzip here: http://www.nih.at/libzip/
libzip may need some tinkering to get it to work on Android, but nothing serious.
Step 1: retrieve the APK location in Java and pass it to JNI/C++
String PathToAPK;
ApplicationInfo appInfo = null;
PackageManager packMgmr = parent.getPackageManager();
try {
    appInfo = packMgmr.getApplicationInfo("com.your.application", 0);
} catch (NameNotFoundException e) {
    e.printStackTrace();
    throw new RuntimeException("Unable to locate APK...");
}
PathToAPK = appInfo.sourceDir;
Passing PathToAPK to C++/JNI
JNIEXPORT jlong JNICALL Java_com_your_app(JNIEnv *env, jobject obj, jstring PathToAPK)
{
    // convert strings
    const char *apk_location = env->GetStringUTFChars(PathToAPK, 0);
    // Do some assigning, data init, whatever...
    // insert code here
    // release strings
    env->ReleaseStringUTFChars(PathToAPK, apk_location);
    return 0;
}
Assuming that you now have a std::string with your APK location and you have zlib and libzip working, you can do something like this:
if(apk_open == false)
{
    apk_file = zip_open(apk_location.c_str(), 0, NULL);
    if(apk_file == NULL)
    {
        LOGE("Error opening APK!");
        result = ASSET_APK_NOT_FOUND_ERROR;
    }
    else
    {
        apk_open = true;
        result = ASSET_NO_ERROR;
    }
}
And to read a file from the APK:
if(apk_file != NULL) {
    // file you wish to read; **any** file from the APK, you're not limited to regular assets
    const char *file_name = "path/to/file.png";
    int file_index;
    zip_file *file;
    struct zip_stat file_stat;

    file_index = zip_name_locate(apk_file, file_name, 0);
    if(file_index == -1)
    {
        zip_close(apk_file);
        apk_open = false;
        return;
    }

    file = zip_fopen_index(apk_file, file_index, 0);
    if(file == NULL)
    {
        zip_close(apk_file);
        apk_open = false;
        return;
    }

    // get the file stats
    zip_stat_init(&file_stat);
    zip_stat(apk_file, file_name, 0, &file_stat);
    char *buffer = new char[file_stat.size];

    // read the file
    int result = zip_fread(file, buffer, file_stat.size);
    if(result == -1)
    {
        delete[] buffer;
        zip_fclose(file);
        zip_close(apk_file);
        apk_open = false;
        return;
    }

    // do something with the file
    // code goes here

    // delete the buffer, close the file and apk
    delete[] buffer;
    zip_fclose(file);
    zip_close(apk_file);
    apk_open = false;
}
Not exactly fopen/fread, but it gets the job done. It should be pretty easy to wrap this in your own file-reading function to abstract away the zip layer.
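As a sketch of such a wrapper (read_apk_file is a hypothetical helper; it assumes the apk_file handle opened above and collapses error handling into a boolean), reading an arbitrary APK entry into memory might look like this:

#include <zip.h>
#include <string>
#include <vector>

// Reads the named entry from the already-opened APK into 'out'.
bool read_apk_file(zip* apk_file, const std::string& name, std::vector<char>& out)
{
    struct zip_stat st;
    zip_stat_init(&st);
    if (zip_stat(apk_file, name.c_str(), 0, &st) != 0)
        return false;                      // entry not found
    zip_file* f = zip_fopen(apk_file, name.c_str(), 0);
    if (f == NULL)
        return false;
    out.resize(st.size);
    bool ok = zip_fread(f, out.data(), st.size) == (zip_int64_t)st.size;
    zip_fclose(f);
    return ok;
}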

How to get MJPG stream video from android IPWebcam using opencv

I am using the IP Webcam program on Android and receiving its stream on my PC over WiFi. What I want is to use OpenCV in Visual Studio, C++, to get that video stream; there is an option to get an MJPG stream at the following URL: http://MyIP:port/videofeed
How do I get it using OpenCV?
Old question, but I hope this can help someone (same as my answer here)
OpenCV expects a filename extension for its VideoCapture argument,
even though one isn't always necessary (like in your case).
You can "trick" it by passing in a dummy parameter which ends in the
mjpg extension:
So perhaps try:
VideoCapture ipCam;
ipCam.open("http://MyIP:port/videofeed/?dummy=param.mjpg");
Install IP Camera Adapter and configure it to capture the video stream. Then install ManyCam and you'll see "MPEG Camera" in the camera section. (You'll see the same instructions if you go to the link on how to set up IPWebCam for Skype.)
Now you can access your MJPG stream just like a webcam through OpenCV. I tried this with OpenCV 2.2 + QT and it works well.
I think this helps.
I did a dirty patch to make OpenCV work with the Android IP Webcam:
In the file OpenCV-2.3.1/modules/highgui/src/cap_ffmpeg_impl.hpp
In the function bool CvCapture_FFMPEG::open( const char* _filename )
replace:
int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
with:
AVInputFormat* iformat = av_find_input_format("mjpeg");
int err = av_open_input_file(&ic, _filename, iformat, 0, NULL);
ic->iformat = iformat;
and comment out:
err = av_seek_frame(ic, video_stream, 10, 0);
if (err < 0)
{
    filename = (char*)malloc(strlen(_filename)+1);
    strcpy(filename, _filename);
    // reopen videofile to 'seek' back to first frame
    reopen();
}
else
{
    // seek seems to work, so we don't need the filename,
    // but we still need to seek back to filestart
    filename = NULL;
    int64_t ts = video_st->first_dts;
    int flags = AVSEEK_FLAG_FRAME | AVSEEK_FLAG_BACKWARD;
    av_seek_frame(ic, video_stream, ts, flags);
}
That should work. Hope it helps.
This is the solution (I'm using IP Webcam on Android):
CvCapture* capture = 0;
capture = cvCaptureFromFile("http://IP:Port/videofeed?dummy=param.mjpg");
I am not able to comment, so I'm posting a new post. In the original answer there is an error: a / was used before dummy. Thanks for the solution.
Working example for me
// OpenCVTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"
/**
 * @function main
 */
int main( int argc, const char** argv )
{
    // open the video stream once, not once per frame
    CvCapture* capture = cvCaptureFromFile("http://192.168.1.129:8080/webcam.mjpeg");
    IplImage* frame = 0;
    // create a window to display the stream
    cvNamedWindow("Sample Program", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        // read the next frame of the video stream
        frame = cvQueryFrame( capture );
        if (!frame) break;
        // display it
        cvShowImage("Sample Program", frame);
        int c = cvWaitKey(10);
        if( (char)c == 27 ) { break; }
    }
    // clean up and release resources; frames returned by cvQueryFrame are
    // owned by the capture and must not be released individually
    cvReleaseCapture(&capture);
    return 0;
}
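For comparison, here is a minimal sketch of the same loop using the C++ API (cv::VideoCapture), which supersedes the legacy CvCapture interface; the URL is the same IP Webcam feed with the dummy-parameter trick:

#include "opencv2/highgui/highgui.hpp"

int main()
{
    // open the MJPG stream once
    cv::VideoCapture cap("http://192.168.1.129:8080/videofeed?dummy=param.mjpg");
    if (!cap.isOpened()) return 1;
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("IP Webcam", frame);
        if ((char)cv::waitKey(10) == 27) break;  // Esc to quit
    }
    return 0;
}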
Broadcast MJPEG from a webcam with VLC, as described at http://tumblr.martinml.com/post/2108887785/how-to-broadcast-a-mjpeg-stream-from-your-webcam-with

How to encode non-camera video in Android

I am working on an Android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android MediaRecorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use an FFmpeg port (based on the Rockplayer build) but am running into difficulties with missing functions, since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure if they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap. You can extend the ViewBase class and adapt it for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you will have the following function, which you can use. It's pretty much hard work, but it's the best I can think of.
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    if (Utils.matToBitmap(picture, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
You can use a pure Java open-source library called JCodec (http://jcodec.org).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses the JCodec low-level API and should be what you need (CORRECTED):
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);
        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);
        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);
        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);
        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);
        // Create an instance of encoder
        encoder = new H264Encoder();
        // Encoder extra data ( SPS, PPS ) to be stored in a special place of
        // MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }
        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);
        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);
        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);
        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));
        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}
You can get the JCodec jar from the project web-site.
