Adjusting Android H.264 Codec and running it on a phone

First of all, I should let you know that I am new to the Android system.
I would like to slightly adjust the existing H.264/AVC codec that Android uses. Specifically, I would like to change how the codec processes the data it gets from an input buffer before sending it to an output buffer.
As I took a look at the Android media architecture, it seems that Stagefright is only some kind of wrapper, and I cannot find the source code for the hardware OMX IL H.264 implementation.
So my first question, I guess, is: where does the actual conversion from bits into a picture happen?
The second part of the problem is getting this adjusted codec onto a mobile device. I guess I just overwrite the existing file (once I find it)? As far as I understand, I do not need to follow "Implementing custom codecs", as I would like to keep the changed codec registered under the same name.
At this point I should tell you that I am not hoping this will work in general use, only in a custom app for research purposes.
Edit 1: I am not expecting an answer with the exact solution, but I would appreciate some guidelines on where to start my exploration, with the goal of modifying the codec.
Edit 2: I will be using an unsecured (rooted) device.
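Edit 3: For reference, here is a minimal sketch of how I can see which component actually handles "video/avc" on a given device (assuming API 21+; only the MediaCodecList/MediaCodecInfo calls are standard, the logging and class name are just illustration). The vendor OMX component this prints is presumably the implementation I would need to find and modify, while the Google software codec shows up as a separate entry:

```java
// Lists every registered H.264/AVC encoder and decoder component.
// On most devices the registration itself is driven by the media_codecs.xml
// files, which is also where a modified component would have to keep its name.
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class AvcCodecLister {
    public static void listAvcCodecs() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            for (String type : info.getSupportedTypes()) {
                if ("video/avc".equalsIgnoreCase(type)) {
                    Log.d("AvcCodecLister",
                            (info.isEncoder() ? "encoder: " : "decoder: ") + info.getName());
                }
            }
        }
    }
}
```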

Related

Android - Choosing between MediaRecorder, MediaCodec and Ffmpeg

I am working on a video recording and sharing application for Android. The specifications of the app are as follows:-
Recording a 10 second (maximum) video from inside the app (not using the device's camera app)
No further editing on the video
Storing the video in a Firebase Cloud Storage (GCS) bucket
Downloading and playing of the said video by other users
From the research I did on SO and other sources, I have found the following (please correct me if I am wrong):-
The three options and their respective features are:-
1. FFmpeg
Capable of achieving the above goal and has extensive answers and explanations on sites like SO, however
Increases the APK size by 20-30 MB (large library)
Runs the risk of not working properly on certain 64-bit devices
2. MediaRecorder
Reliable and supported by most devices
Will store files in .mp4 format (unless converted to h264)
Easier for playback (no decoding needed)
Adds the mp4 and 3gp headers
Increases latency according to this question
3. MediaCodec
Low level
Will require MediaCodec, MediaMuxer, and MediaExtractor
Output in H.264 (without using MediaMuxer for playback)
Good for video manipulations (though, not required in my use case)
Not supported by pre 4.3 (API 18) devices
More difficult to implement and code (my opinion - please correct me if I am wrong)
Unavailability of extensive information, tutorials, answers or samples (Bigflake.com being the only exception)
After spending days on this, I still can't figure out which approach suits my particular use case. Please elaborate on what I should do for my application. If there's a completely different approach, then I am open to that as well.
My biggest criteria are that the video encoding process be as efficient as possible and the video to be stored in the cloud should have the lowest possible space usage without compromising on the video quality.
Also, I'd be grateful if you could suggest the appropriate format for saving and distributing the video in Firebase Storage, and point me to tutorials or samples of your suggested approach.
Thank you in advance! And sorry for the long read.
Your overview of this topic is pretty much spot on.
I'll just add my 2 cents on a few points you might have missed:
1. FFmpeg
+/-If you build your own .so then you can reduce the size down to about 2-3 MB, depending on the use-case of course. Editing a 6000-line build script takes time and effort, though
++Supports wide range of formats (almost everything)
++Results are the same for every device
++Any resolution supported
--High energy consumption due to SW en-/decoding, which also makes it slow. There is a plugin to support libstagefright, but it doesn't work on many devices (as of May 2016)
--Licensing can be problematic depending on your location and use-case. I'm not a lawyer, but we had legal consulting on this topic and it's quite complex.
2. MediaRecorder
++Easiest to implement (simplified access to MediaCodec/libstagefright). Raw data gets passed to the encoder directly, so no messing around there
++HW-accelerated on most devices, which makes it fast and energy saving.
++Delay only applies to live streaming
--Dependent on implementation of HW-manufacturers
--Results may vary from device to device
++No licensing problems
3. MediaCodec
+/-Most of 2.MediaRecorder applies to this as well (apart from ease of use)
++Most flexible access to HW-en-/decoding
--Hard to use for cases that were not thought of (e.g. mixing videos from different sources)
+/-Delay for streaming can be eliminated (is tricky though)
--HW manufacturers sometimes don't implement things correctly (e.g. the Samsung Galaxy S5 sometimes produces a SIGSEGV if live data from certain DSLRs is fed to the encoder. It works fine for a while, then all of a sudden it's a SIGSEGV. This might be the DSLR's fault, but the crash is not avoidable and takes down the app, which in the end is the app developer's fault ;) )
--If used without MediaMuxer, you either need a good understanding of media containers or have to rely on 3rd-party libraries
The list is obviously not complete and some points might not be correct. The last time I worked with video was almost half a year ago.
As for your use-case, I would recommend using MediaRecorder since it is the easiest to implement, is supported on all devices, and offers a good range of quality/size options. FFmpeg produces better results for the same storage size, but takes longer (in one extreme case, DSLR live footage was encoded about 30 times faster by the hardware path) and is more energy consuming.
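To make that concrete, here is a minimal sketch of the MediaRecorder path for your 10-second clips (just an illustration: camera handling, permissions and the exact size/bitrate values are assumptions you would tune for your own quality/size target):

```java
// Records up to 10 seconds of camera video + AAC audio straight into an MP4 file.
// The resulting file can be uploaded to Firebase Storage as-is and played back
// by other users without any further processing.
import android.hardware.Camera;
import android.media.MediaRecorder;
import java.io.IOException;

public class ClipRecorder {
    private MediaRecorder recorder;

    public void start(Camera camera, String outputPath) throws IOException {
        camera.unlock();                                 // hand the camera over to MediaRecorder
        recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoSize(1280, 720);                // trade quality vs. upload size here
        recorder.setVideoEncodingBitRate(2_000_000);     // ~2 Mbit/s, tune for your storage budget
        recorder.setMaxDuration(10_000);                 // hard 10-second cap from the requirements
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
    }

    public void stop() {
        recorder.stop();
        recorder.reset();
        recorder.release();
    }
}
```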
As far as I understand your use-case, there is no need to fiddle around with MediaCodec since you want to encode and decode only.
I suggest using VP8 or VP9, since you won't run into licensing problems. Again, I'm no lawyer, but distributing H.264 from your own server might make you a broadcasting station, or so I was told.
Hope this helps you in your decision making.

AVC HW encoder with MediaCodec Surface reliability?

I'm working on an Android app that uses MediaCodec to encode H.264 video using the Surface method. I am targeting Android 5.0, and I've followed all the examples and samples from bigflake.com (I started working on this project two years ago, so I kind of went through all the gotchas and other issues).
All is working nice and well on the Nexus 6 (which uses the Qualcomm hardware encoder for this), and I'm able to record 1440p video with AAC audio flawlessly in real time, to a multitude of outputs (from local MP4 files up to HTTP streaming).
But when I try to use the app on a Sony Android TV (running Android 5.1) which uses a Mediatek chipset, all hell breaks loose, starting at the encoding level. To be more specific:
It's basically impossible to make the hardware encoder (that is, "OMX.MTK.VIDEO.ENCODER.AVC") work properly. With the most basic setup (which succeeds at MediaCodec's level), I will almost never get output buffers out of it, only weird, spammy logcat error messages stating that the driver has encountered errors each time a frame should be encoded, like these:
01-20 05:04:30.575 1096-10598/? E/venc_omx_lib: VENC_DrvInit failed(-1)!
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] cannot set param
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] EncSettingH264Enc fail
Sometimes, trying to configure it to encode at a 360 by 640 pixel resolution will succeed in making the encoder actually encode something, but the first problem I'll notice is that it only ever creates one keyframe: the very first video frame. After that, no more keyframes are ever created, only P-frames. Of course, the I-frame interval was set to a decent value and works with no issues on other devices. Needless to say, this makes it impossible to create seekable MP4 files, or any kind of streamable solution on top (the only workaround I could come up with, requesting sync frames by hand, is sketched at the end of this post).
Most of the time, after releasing the encoder, logcat will start spamming endlessly with "Waiting for input frame to be released...", which basically requires a reboot of the device, since nothing will work from that point on anyway.
In the cases where it doesn't go haywire after a simple release(), no problem - the hardware encoder makes sure it cannot be created a second time, and the framework falls back to the generic software AVC Google encoder, which of course is basically a mockup encoder that does little more than spit out an error when asked to encode anything larger than 160p video...
So, my question is: is there any hope of making this MediaCodec API actually work on such a device? My understanding was that there are CTS tests performed by Google/manufacturers (in this case, Sony) that would let a developer assume an API is actually supported on a device that prides itself on running Android 5.1. Am I missing something obvious here? Has anyone actually tried doing this (a simple MediaCodec video encoding test) and succeeded? It's really frustrating!
PS: it's worth mentioning that Sony doesn't yet provide any recording capability for this TV set either, which many people are complaining about anyway. So my guess is that this is more of a Mediatek problem, but still, what exactly is Android's CTS for in this case anyway?
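For completeness, the only mitigation I could come up with for the single-keyframe behaviour is to request sync frames by hand via setParameters(). A sketch (PARAMETER_KEY_REQUEST_SYNC_FRAME is a standard MediaCodec parameter since API 19, but I have no idea whether the MTK driver actually honors it; the 2-second interval is just an example value):

```java
// Periodically asks the encoder for a sync (IDR) frame instead of relying on
// the i-frame-interval the encoder was configured with.
import android.media.MediaCodec;
import android.os.Bundle;

public class SyncFrameNudger {
    private static final long INTERVAL_US = 2_000_000L;  // request an IDR roughly every 2 seconds
    private long lastRequestUs = 0;

    public void maybeRequestSyncFrame(MediaCodec encoder, long presentationTimeUs) {
        if (presentationTimeUs - lastRequestUs >= INTERVAL_US) {
            Bundle params = new Bundle();
            params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
            encoder.setParameters(params);
            lastRequestUs = presentationTimeUs;
        }
    }
}
```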

Android - keep sending information to a server

I built an app similar to Shazam; however, it only works by sending an entire file of 10 seconds of audio.
My question is: in Android, is there any way to keep matching the way Shazam does, while the music is still playing and the database is being searched? Or is that Shazam's own proprietary service technology?
Shazam developed that audio fingerprint matching technology itself; it's not available in the default Android SDK.
The Shazam technology is proprietary, but the base algorithm has since been documented by its creator:
The algorithm uses a combinatorially hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which multiple tracks mixed together may each be identified.
This is very novel and efficient, but the principles of fingerprinting audio stay the same. Among them is certainly an FFT (fast Fourier transform), at the very least to detect the BPM. It's even possible to convert sound into an image (the simplest being a spectrogram), which can then be processed further by audio-unrelated software.
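To illustrate the combinatorial hashing the paper describes (purely a sketch of the published idea, not Shazam's actual code; the fan-out, bit widths and class names are made up for illustration): spectrogram peaks are paired, and each (anchor frequency, target frequency, time delta) triple is packed into a compact hash that can be stored and matched on the server side.

```java
// Turns a list of spectrogram peaks into combinatorial hashes: each "anchor" peak
// is paired with a few peaks that follow it, and the pair plus their time distance
// is packed into a single int. Peak picking and the spectrogram are assumed to
// exist elsewhere.
import java.util.ArrayList;
import java.util.List;

public class ConstellationHasher {
    public static class Peak {
        public final int timeFrame;  // spectrogram frame index
        public final int freqBin;    // FFT bin of a local magnitude maximum
        public Peak(int timeFrame, int freqBin) { this.timeFrame = timeFrame; this.freqBin = freqBin; }
    }

    private static final int FAN_OUT = 5;   // pair each anchor with up to 5 later peaks
    private static final int MAX_DT  = 63;  // only pair peaks up to 63 frames apart

    // Peaks must be sorted by timeFrame; frequency bins are assumed to fit into 10 bits.
    public static List<Integer> hashes(List<Peak> peaks) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < peaks.size(); i++) {
            Peak anchor = peaks.get(i);
            for (int j = i + 1, paired = 0; j < peaks.size() && paired < FAN_OUT; j++) {
                Peak target = peaks.get(j);
                int dt = target.timeFrame - anchor.timeFrame;
                if (dt > MAX_DT) break;
                // pack f1 (10 bits) | f2 (10 bits) | dt (6 bits) into one int
                out.add((anchor.freqBin & 0x3FF)
                        | ((target.freqBin & 0x3FF) << 10)
                        | ((dt & 0x3F) << 20));
                paired++;
            }
        }
        return out;
    }
}
```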
If you need an audio analysis library written in Java, you could look into MusicG, for example, which is said to be simple to use on Android.

How do I simultaneously record audio from every mic on a microphone array into a separate buffer in Android?

Many tablets and some smartphones use an array of microphones for things like noise cancellation. For example, the Motorola Droid X uses a three-microphone array and even allows you to set "audio scenes". An example is discussed here.
I want to be able to record from all the microphones that are available on the tablet/phone at the same time. I found that using AudioSource we can choose a mic (I do not know which mic this is specifically, but it might be the one facing the user) or the mic that is in the same orientation as the video camera, but I could not find any way of accessing the other mics in the array. Any help that points me in the right direction to investigate this would be great. Thanks in advance for your time.
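To show what I mean, the only selection I have found in the standard API is the AudioSource constant passed to AudioRecord; a minimal sketch (which physical microphone each constant maps to is decided by the device's audio HAL, not by the app, and RECORD_AUDIO permission is assumed):

```java
// Opens an AudioRecord on one of the documented audio sources. There is no
// documented constant for "mic #2 of the array" - the HAL picks the element(s).
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicSourceExample {
    public static AudioRecord open(int audioSource) {
        int sampleRate = 44100;
        int bufferSize = AudioRecord.getMinBufferSize(
                sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        return new AudioRecord(audioSource, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    }

    public static void example() {
        // "primary" mic (probably the one facing the user)
        AudioRecord defaultMic = open(MediaRecorder.AudioSource.MIC);
        // ... record, then release before switching sources
        defaultMic.release();

        // mic oriented like the video camera
        AudioRecord camcorderMic = open(MediaRecorder.AudioSource.CAMCORDER);
        camcorderMic.release();
    }
}
```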
It seems like you've verified that there isn't a standard Android API for accessing specific mics in an array. I couldn't find anything either.
As is the case with custom additions to the Android system, it's up to the manufacturer to release developer APIs. Motorola has done this before. I took a look at all of the ones they have listed, and it seems they simply don't expose it. Obviously, they have code somewhere that can do it (the "audio scenes" feature uses it).
So the quick answer: you're out of luck.
The more involved answer: you can go spelunking around the source code for the Droid X because it's released as open source. If you can actually find it, understand that you're using an undocumented API which could be changed at any time. Plus, you'll have to do this for every device you want to support.

android spectrum analysis of streaming input

For a school project I am trying to make an Android application that, once started, will perform a spectrum analysis of live audio received from the microphone or a Bluetooth headset. I know I should be using an FFT, and I have been looking at moonblink's open source audio analyzer ( http://code.google.com/p/moonblink/wiki/Audalyzer ), but I am not familiar with Android development, and his code is turning out to be too difficult for me to work with.
So I suppose my questions are: are there any easier Java-based or open source Android apps that do spectrum analysis that I can reference? Or is there any helpful information you can give, such as the steps needed to get the microphone input, put it through an FFT algorithm, and then display a graph of frequency and pitch over time from its output?
Any help would be appreciated, thanks.
Suggestion....
It depends what you want to use it for. If you don't need the entire spectrum, then you might only need a filter, easily achieved using an FIR filter. Note that you can get 3 bands (LP, BP and HP) very cheaply by realizing that the HP uses the same multipliers as the LP, just with some of the values subtracted instead of added. Likewise, the BP is obtained by subtracting the LP and HP from the original data (all-pass). So, if you code it right, you can get a very fast 3-band analyzer... if that's all you need (see the sketch below).
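Here is a sketch of that shared-multiplier trick (the 9-tap kernel is just a placeholder smoothing kernel to show the structure, not a carefully designed filter):

```java
// One symmetric (linear-phase) low-pass FIR; the high-pass reuses the very same
// products with alternating signs (spectral inversion), and the band-pass is
// "input minus LP minus HP", exactly as described above.
public class ThreeBandFir {
    // symmetric low-pass taps (sum to 1.0)
    private static final float[] LP = {
            0.02f, 0.05f, 0.12f, 0.20f, 0.22f, 0.20f, 0.12f, 0.05f, 0.02f };

    // Returns {low, band, high} per sample; edge samples are left at zero for brevity.
    public static float[][] process(float[] x) {
        int half = LP.length / 2;                  // filter is centered on sample n
        float[] low = new float[x.length];
        float[] band = new float[x.length];
        float[] high = new float[x.length];
        for (int n = half; n < x.length - half; n++) {
            float lp = 0f, hp = 0f;
            for (int k = 0; k < LP.length; k++) {
                float v = LP[k] * x[n - half + k]; // one multiply, shared by LP and HP
                lp += v;
                hp += ((k & 1) == 0) ? v : -v;     // same products, some subtracted instead of added
            }
            low[n] = lp;
            high[n] = hp;
            band[n] = x[n] - lp - hp;              // all-pass minus LP minus HP
        }
        return new float[][] { low, band, high };
    }
}
```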
If you want to use an FFT, you might first look to see whether there isn't already an FFT available for Java, or one written in C that you can call through the JNI (NDK) interface. This will be much faster than writing your own in Java.
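And a rough sketch of the capture-then-transform pipeline (RECORD_AUDIO permission assumed; the DFT below is a deliberately naive O(N^2) stand-in, which is exactly the part you would replace with a proper FFT):

```java
// Pulls one block of PCM from the microphone and converts it into a magnitude
// spectrum that can be fed to a graph/view. Loop the read for a live display.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class MicSpectrum {
    private static final int SAMPLE_RATE = 8000;
    private static final int BLOCK = 256;          // samples per spectrum

    public static double[] captureOneSpectrum() {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                Math.max(minBuf, BLOCK * 2));
        short[] pcm = new short[BLOCK];
        rec.startRecording();
        rec.read(pcm, 0, BLOCK);
        rec.stop();
        rec.release();
        return magnitudeSpectrum(pcm);             // bin i corresponds to i * SAMPLE_RATE / BLOCK Hz
    }

    // Naive DFT magnitude - a stand-in for a real FFT implementation.
    static double[] magnitudeSpectrum(short[] x) {
        int n = x.length;
        double[] mag = new double[n / 2];
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mag[k] = Math.sqrt(re * re + im * im);
        }
        return mag;
    }
}
```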
Hope that helps.
