I have an IP camera that streams video in MJPEG format. My aim is to receive the stream and display it in my own custom Android app. For this I see three programming alternatives on the Android platform:
Using the built-in Android MediaPlayer class
Using the FFmpeg library in native C, accessed through JNI
Using the GStreamer port for Android to receive the stream
Which of these is the better solution?
I have no experience with FFmpeg or GStreamer, so how feasible is each approach?
Use GStreamer for this.
I have used GStreamer on a BeagleBoard with a 1 GHz processor to take images from two cameras at 30 fps with very low CPU load.
GStreamer can merge images, overlay text and change formats, and it easily presents what you want in the stream. The only thing you need to do is connect its black boxes (elements) to each other.
You can add these black boxes either dynamically or statically.
If your stream does not need to change depending on the input to your program, I suggest using the static approach. But I am not sure whether it works on Android.
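For example, a minimal MJPEG receive pipeline in gst-launch syntax might look like the sketch below (the camera URL is a placeholder, and it assumes the souphttpsrc, multipartdemux and jpegdec plugins are part of your GStreamer build):
souphttpsrc location=http://192.168.0.1/video.mjpg ! multipartdemux ! jpegdec ! autovideosink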
To test the third option (GStreamer) you can use this app: https://play.google.com/store/apps/details?id=pl.effisoft.rpicamviewer2. You can also open a GStreamer preview from your own code using the following:
Intent intent = new Intent("pl.effisoft.rpicamviewer2.PREVIEW");
int camera =0;
//--------- Basic settings
intent.putExtra("full_screen", true);
intent.putExtra("name"+camera, "My pipeline name");
intent.putExtra("host"+camera, "192.168.0.1");
intent.putExtra("port"+camera, 5000);
intent.putExtra("description"+camera, "My pipeline description");
intent.putExtra("uuid"+camera, UUID.randomUUID().toString() );
intent.putExtra("aspectRatio"+camera, 1.6);
intent.putExtra("autoplay"+camera, true);
//--------- Enable advanced mode
intent.putExtra("advanced"+camera, true); //when advanced=true, then custom_pipeline will be played
//when advanced=false, then pipeline will be generated from host, port (use only for backward compatibility with previous versions)
intent.putExtra("custom_pipeline"+camera, "videotestsrc ! warptv ! autovideosink");
//--------- Enable application extra features
intent.putExtra("extraFeaturesEnabled"+camera, false);
//--------- Add autoaudiosink to featured pipeline
intent.putExtra("extraFeaturesSoundEnabled"+camera, false);
//--------- Scale Video Stream option
intent.putExtra("extraResizeVideoEnabled"+camera, false);
intent.putExtra("width"+camera, 320); //used only when extraResizeVideoEnabled=true
intent.putExtra("height"+camera, 200); //used only when extraResizeVideoEnabled=true
//--------- Add plugins
ArrayList<String> plugins = new ArrayList<String>();
intent.putExtra("plugins"+camera, plugins);
intent.setPackage("pl.effisoft.rpicamviewer2");
startActivityForResult(intent, 0);
In my Android application, I am encoding some media in webm (vp8) format using MediaCodec. The encoding is working as expected. However, I need to ensure that I create a sync frame once in a while. Here is what I do:
encoder.queueInputBuffer(..., MediaCodec.BUFFER_FLAG_SYNC_FRAME);
Later in the code, I check for a sync frame:
encoder.dequeueOutputBuffer(bufferInfo, 0);
boolean isSyncFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
The problem is that isSyncFrame never gets a true value.
I am wondering if I am making a mistake in my encoding configuration. Maybe there is a better way to tell the encoder to create a sync frame once in a while.
I hope it is not a bug in MediaCodec. Thank you in advance for your help.
There is no way (as of Android 4.3) to request an on-demand sync frame using MediaCodec encoders. This is partly due to OMX, the underlying codec framework in Android, which does not provide a way to specify which input frame should be encoded as a sync frame, although it does have a way to trigger a sync frame "in the near future".
feisal's answer is the only currently supported way to control sync frames, but you have to do it at configuration time.
Edit (re: jesup's comment):
You can trigger a sync frame in the near future using MediaCodec.setParameters:
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mCodec.setParameters(params);
Unfortunately, there is no (reliable) way in MediaCodec to tell whether an encoded buffer is a sync frame, other than doing it on your own by inspecting the bitstream bytes.
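For VP8 you can at least take a rough guess from the bitstream itself: per RFC 6386 the lowest bit of the first byte of a VP8 frame is 0 for a key frame. A minimal sketch, assuming bufferInfo and the outputBuffers array from your encoder's dequeue loop:
ByteBuffer encoded = outputBuffers[outputBufferIndex];
// VP8 frame tag: the lowest bit of the first byte is the frame type, 0 = key frame
boolean looksLikeKeyFrame = (encoded.get(bufferInfo.offset) & 0x01) == 0;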
You can set the rate of I-frames in the MediaFormat object of your encoder with setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, secondsBetweenIFrames).
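A minimal configuration sketch along those lines (the resolution, bit rate and color format here are only illustrative values, not requirements):
MediaFormat format = MediaFormat.createVideoFormat("video/x-vnd.on2.vp8", 640, 480);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
format.setInteger(MediaFormat.KEY_BIT_RATE, 1000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // request one sync frame per second
MediaCodec encoder = MediaCodec.createEncoderByType("video/x-vnd.on2.vp8");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);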
I am doing a video compression project for Android and I am thinking of implementing it by designing a new video codec from scratch (I have designed the algorithm). I have already read the basics of video compression, the relevant algorithms and codec basics. I have also found that FFmpeg may serve as a quite good solution on Android.
Now my questions come:
How do I write a new video codec as in FFmpeg? I am still a beginner at writing codecs, but how do I start? I have a rough idea that you have to write at least a demuxer first and then the specific encoder and decoder, etc. (Asking for references here, please.)
Since my codec doesn't simply adjust video properties like fps, resolution or bit-rate: is reading the MediaCodec API and MediaPlayer API in the official Android SDK enough for writing new codecs? (Because last time I checked, they only supported MPEG-4 SP, H.263 and H.264, and I was unable to find whether you can directly write your own classes and functions.)
Thanks .
You can use ffmpeg as a tool or the ffmpeg set of libraries (libavcodec, libavformat, …) on Android. You can add or change ffmpeg codecs in a cross-platform manner, because this project puts a strong emphasis on platform independence. You could use the MediaCodec API instead, but there is no way to extend the MediaCodec API (update: it is possible to extend MediaCodec; it is documented at http://source.android.com/devices/media.html#codecs) and no easy way to let ffmpeg use this API.
If you are a newbie and "just want to do it in SW", then just do it in SW. I am assuming your algorithm does not need to be real-time and compress video data on the fly; otherwise you would need to use a HW codec.
This is from the Android MediaCodec reference:
MediaCodec codec = MediaCodec.createDecoderByType(type);
codec.configure(format, ...);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
for (;;) {
int inputBufferIndex = codec.dequeueInputBuffer(timeoutUs);
if (inputBufferIndex >= 0) {
// fill inputBuffers[inputBufferIndex] with valid data
...
codec.queueInputBuffer(inputBufferIndex, ...);
}
int outputBufferIndex = codec.dequeueOutputBuffer(bufferInfo, timeoutUs);
if (outputBufferIndex >= 0) {
// outputBuffer is ready to be processed or rendered.
...
codec.releaseOutputBuffer(outputBufferIndex, ...);
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
outputBuffers = codec.getOutputBuffers();
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// Subsequent data will conform to new format.
MediaFormat format = codec.getOutputFormat();
...
}
}
codec.stop();
codec.release();
codec = null;
On the line that reads "// outputBuffer is ready to be processed or rendered" apply your codec.
That is, each call to dequeueOutputBuffer gives you an index into outputBuffers, and the decoded data for that frame lives in outputBuffers[outputBufferIndex], starting at bufferInfo.offset and running for bufferInfo.size bytes. Copy it out before calling releaseOutputBuffer, because the codec reuses these buffers.
Something like this:
// outputBuffer ready: copy the decoded frame out of the codec-owned buffer
ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
byte[] processBuffer = new byte[bufferInfo.size];
outputBuffer.position(bufferInfo.offset);
outputBuffer.get(processBuffer, 0, bufferInfo.size);
outputBuffer.clear(); // reset position/limit so the buffer can be reused
// run your own compression algorithm on processBuffer here
Here is a good example. You may want to look into MediaMetadataRetriever to get information about the input video (height, width, etc., and bytes per pixel) if you want your encoder to be robust to different types of video. Anyway, that should get you started.
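A minimal sketch of querying the input video with MediaMetadataRetriever (the file path is just a placeholder):
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/input.mp4"); // placeholder path
// extractMetadata may return null for some files, so check before parsing in real code
int width = Integer.parseInt(retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
int height = Integer.parseInt(retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
String durationMs = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
retriever.release();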
I strongly recommend MATLAB (or GNU Octave) for prototyping a video codec. It will save you a ton of time: make sure your intended codec algorithm works before trying to implement it on a platform that is nearly impossible to debug, like Android.
Hope this helps.
If someone stumbles across this old question, the answer is:
Write your Program.
Where you want the "Codec" to go simply add a 'null Codec' (copy Input to Output); a minimal sketch of such a pass-through follows the steps below.
Test that your Program still works and that you can read the (so-called) encoded File.
Add your Codec where the 'null Codec' was (call a Function to avoid big edits to a working File).
Re-test your Program to ensure it still works and read the Output to make sure it is correct.
That is all. ;)
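A minimal sketch of such a 'null Codec' pass-through (encodeFrame is a hypothetical placeholder for where your real compression step will later go):
// 'null Codec': returns the input frame unchanged, so the surrounding
// program (file I/O, muxing, playback) can be tested on its own
byte[] encodeFrame(byte[] rawFrame) {
    return rawFrame; // later: replace with your real compression
}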
Things to consider:
A "Video Player" can drop Frames; a "Video Recorder" had better NOT drop Frames.
A 'Software Codec' (no Hardware assist) will be slow; run it on a different Core, if available.
A Hardware Codec (called from Software) will be necessary unless you are just making a Demo.
Split your Program into pieces that can run separately so it can be threaded and those Threads can be assigned to different Cores. You will need to detect the number of Cores and assess their speed so you can do some of the partitioning dynamically at Runtime.
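A minimal sketch of detecting the core count and sizing a thread pool accordingly (how you actually partition the work is up to your design):
int cores = Runtime.getRuntime().availableProcessors();
java.util.concurrent.ExecutorService pool = java.util.concurrent.Executors.newFixedThreadPool(cores);
// submit one slice of the frame, or one pipeline stage, per task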
Use of the NDK and Assembly Language Programming will be necessary to get enough speed to compress a decent sized Video at a wanted frame rate (i.e. you do not want your finished Program to only support 320x176 @ 5 FPS Videos). The Compressor MUST run faster than its Input arrives.
Designing your own Codec to beat an existing Codec (x265) will take you years (without help).
If you're a Wiz at Java, C, and ARM Assembly (and a Software Engineer) it will take more than a couple of months of work, so commit or quit. Try to find some Open Source as a base to start from.
I am working on a player based on the GStreamer tutorials. For this I created a pipeline using:
pipeline = gst_pipeline_new("audio-player");
//adding also 3 gstreamer elements
appsrc = gst_element_factory_make("appsrc", "source");
decoder = gst_element_factory_make("faad", "aac-decoder");
sink = gst_element_factory_make("autoaudiosink", "audio-output");
//adding and linking the elements to the pipeline
gst_bin_add_many (GST_BIN (pipeline), appsrc, decoder, sink, NULL);
gst_element_link_many(appsrc, decoder,sink, NULL);
//for appsrc was added a callback function need_data_cb
g_signal_connect(appsrc, "need-data", (GCallback)need_data_cb, data);
//state of pipeline is set to playing
gst_element_set_state(pipeline, GST_STATE_PLAYING);
In the need_data_cb function I have a buffer that I want to be played:
g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
My problem is this: I have the same code on Linux and on Android. On Linux the buffer is played correctly each time it enters the callback function need_data_cb. On Android it plays the buffer only the first time it enters need_data_cb, and after that there is no sound. Why does this happen when I have the same code in both versions? If, in the Android version, I set the pipeline state to paused and back to playing in need_data_cb before pushing the buffer to appsrc, it plays the buffer every time, but with some interruptions between calls.
//the first 2 lines were added in the Android version so that each buffer plays
gst_element_set_state(pipeline, GST_STATE_PAUSED);
gst_element_set_state(pipeline, GST_STATE_PLAYING);
g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
The question is: why does it work fine on Linux without these lines, but not on Android?
On Linux I installed GStreamer 0.10, and on Android I used the libs from the GStreamer SDK tutorials. Do you have any hint for my problem?
Thanks,
Radu
The problem was due to the emulator. On a device everything was OK. Do not use emulators; debug directly on the device!
Is it possible to compare a voice with an already recorded voice on the phone? Based on the comparison we could give a rating like Good, Very Good, Excellent, etc. The closest-matching sound gets the highest rating.
Does anybody know if this is possible on Android?
Help is highly appreciated.
For a general audio processing library I can recommend Marsyas. Unfortunately the official home page is currently down.
Marsyas even provides a sample Android application. After getting a proper signal-analysis framework, you need to analyse your signal. For example, the AimC implementation for Marsyas can be used to compare voices.
I recommend installing Marsyas on your computer and fiddling with the Python example scripts.
For your voice analysis, you could use a network like this:
vqNetwork = ["Series/vqlizer", [
"AimPZFC/aimpzfc",
"AimHCL/aimhcl",
"AimLocalMax/aimlocalmax",
"AimSAI/aimsai",
"AimBoxes/aimBoxes",
"AimVQ/vq",
"Gain/g",
]]
This network takes your audio data and transforms it the way it would be processed by a human ear. After that it uses vector quantization to reduce the many possible vectors to a very specific codebook with 200 entries. You can then translate the output of the network to readable characters (UTF-8, for example), which you can then compare using something like a string edit distance (e.g. Levenshtein distance).
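A minimal sketch of such an edit-distance comparison on the resulting symbol strings (plain Levenshtein distance; the two strings are assumed to be the codebook symbols produced for each recording):
// Levenshtein distance: the number of insertions, deletions and
// substitutions needed to turn string a into string b
static int levenshtein(String a, String b) {
    int[][] d = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) d[i][0] = i;
    for (int j = 0; j <= b.length(); j++) d[0][j] = j;
    for (int i = 1; i <= a.length(); i++) {
        for (int j = 1; j <= b.length(); j++) {
            int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
            d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
        }
    }
    return d[a.length()][b.length()];
}
A lower distance means the two recordings are closer, which you can then map to a rating such as Good / Very Good / Excellent.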
Another possibility is to use MFCCs (Mel-Frequency Cepstral Coefficients), which Marsyas supports as well, for speech recognition, and to use something like Dynamic Time Warping to compare the outputs. This document describes the process pretty well.
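A minimal sketch of Dynamic Time Warping over two MFCC sequences (each frame is a double[] of coefficients; the MFCC extraction itself is assumed to come from Marsyas or another front end):
// DTW cost between two MFCC sequences, with Euclidean distance
// between individual frames as the local cost
static double dtw(double[][] seqA, double[][] seqB) {
    int n = seqA.length, m = seqB.length;
    double[][] cost = new double[n + 1][m + 1];
    for (double[] row : cost) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
    cost[0][0] = 0.0;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            double local = euclidean(seqA[i - 1], seqB[j - 1]);
            cost[i][j] = local + Math.min(cost[i - 1][j], Math.min(cost[i][j - 1], cost[i - 1][j - 1]));
        }
    }
    return cost[n][m]; // lower cost = more similar
}

static double euclidean(double[] a, double[] b) {
    double sum = 0.0;
    for (int k = 0; k < a.length; k++) {
        double diff = a[k] - b[k];
        sum += diff * diff;
    }
    return Math.sqrt(sum);
}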
Using the 'musicg' library you can compare two voice files (.wav format).
Use a Wave object to load each wave file in your program, then use FingerprintSimilarity: you pass it the pre-recorded wav files and get the output.
But you should know that the 'musicg' library deals only with .wav files, so if you have an .mp3 file, for example, you need to convert it to a wave file first.
android gradle dependency:
implementation group: 'com.github.fracpete', name: 'musicg', version: '1.4.2.2'
for more:
https://github.com/loisaidasam/musicg
sample code:
private void compareTempFile(String str) {
    // load the two pre-recorded .wav files from external storage
    Wave w1 = new Wave(Environment.getExternalStorageDirectory().getAbsolutePath() + "/sample1.wav");
    Wave w2 = new Wave(Environment.getExternalStorageDirectory().getAbsolutePath() + "/sample2.wav");
    System.out.println("Wave 1 = " + w1.getWaveHeader());
    System.out.println("Wave 2 = " + w2.getWaveHeader());
    // fingerprint-based comparison of the two recordings
    FingerprintSimilarity fpsc1 = w2.getFingerprintSimilarity(w1);
    float scorec = fpsc1.getScore();
    float simc = fpsc1.getSimilarity();
    tvSim.setText(" Similarity = " + simc + "\nScore = " + scorec); // tvSim is a TextView in the layout
    System.out.println("Score = " + scorec);
    System.out.println("Similarity = " + simc);
}
I just have a question about how to use ffmpeg/libavcodec/libstagefright.cpp: I call avcodec_open2(st->codec, codec) after using ffmpeg to set codec->id to CODEC_ID_H264 and codec->name to libstagefright_h264, which means I will open the AVCodec ff_libstagefright_h264_decoder.
But when it gets to Stagefright_init -> OMXCodec::Create -> configureCodec -> initOutputFormat(meta), the process just quits! It is a bazinga!
I know that meta is MetaData and that its data comes from codec->extradata, which here means the SPS and PPS, am I right?
How can I use libstagefright successfully in ffmpeg? Can somebody give me an example?
I'm currently working on providing Stagefright support in my FFmpeg library for Android. I made some changes to the original libstagefright.cpp from ffmpeg/libav, but it is still not stable. After stabilizing it I will create a pull request for the ffmpeg/libav team. You can look around my project, in the "hwaccel" branch.
It is available in the AndroidFFmpeg/FFmpegLibrary/jni/ffstagefright.cpp directory.
To use this library, call the standard ffmpeg methods but open the libstagefright_h264 codec instead of the standard h264 codec:
AVCodec *codec = avcodec_find_decoder_by_name("libstagefright_h264");
It works on ICS 4.0.3 (Moto XT910) with FFmpeg 0.7.
I use extradata to store MediaFileName, then get the metadata with this code:
DataSource::RegisterDefaultSniffers();
sp<MediaSource> source;
source = createSource((char *)MediaFileName);
if (source == NULL) {
    return -1;
}
meta = source->getFormat();
if (!meta->findData(kKeyAVCC, &type, &data, &data_size)) {
    return -1;
}
meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
Then you can call OMX::create (there are some differences between Android 2.3 and ICS).