VP8 Encoding results in grayscale image on Google Glass - android

The application I am working on is developed for Google Glass but runs on Android tablets as well. It uses VP8 encoding to transfer camera images to a remote application.
The preview format parameter on the camera is set to ImageFormat.YV12.
The VP8 encoder is initialized with the VPX_IMG_FMT_YV12 parameter.
When the application's .apk file is installed and run on the Glass, the image is displayed in grayscale on the remote application.
When the same .apk file is installed on a tablet or a phone, the image is displayed in proper colors.
Does anyone have an idea where the problem could lie? Regards.

I finally figured out what is happening.
There is a bug in the Google Glass camera module. Although it gladly accepts a requested image format of YV12, the preview buffer actually contains data in NV21 format.
I had to dump the camera preview buffer to a file and examine each byte just to figure this out :-(.
If you intend to request YV12, you may be better off treating the buffer as NV21 for now, until this bug is fixed.
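If you still want to hand the encoder a YV12 buffer (so you can keep passing VPX_IMG_FMT_YV12), repacking the NV21 data is straightforward. A minimal sketch, assuming even dimensions and a Y stride equal to the width (real Android YV12 buffers may pad rows to 16-byte alignment):

    // Repack an NV21 preview buffer (Y plane + interleaved V,U) into
    // YV12 (Y plane + V plane + U plane). Assumes no row padding.
    static void nv21ToYv12(byte[] nv21, byte[] yv12, int width, int height) {
        int ySize = width * height;
        System.arraycopy(nv21, 0, yv12, 0, ySize); // Y plane is identical
        int vuOffset = ySize;            // NV21: V,U pairs follow the Y plane
        int vOffset = ySize;             // YV12: full V plane first...
        int uOffset = ySize + ySize / 4; // ...then full U plane
        for (int i = 0; i < ySize / 4; i++) {
            yv12[vOffset + i] = nv21[vuOffset + 2 * i];     // V
            yv12[uOffset + i] = nv21[vuOffset + 2 * i + 1]; // U
        }
    }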

Related

how to retrieve NV21 data from DJI camera Phantom 3 Professional drone

As I described in a previous post, I'm working on an Android mobile app for real-time augmented visualization of a drone's camera view (specifically a DJI Phantom 3 Professional with its SDK), using the Wikitude framework for the AR part. Thanks to Alex's response, I implemented my own Wikitude Input Plugin in combination with DJI's Video Stream Decoding.
I have some issues now. First of all, DJI's Video Stream Decoding demo uses FFmpeg for video frame parsing and MediaCodec for hardware decoding, so it parses the video frames, decodes the raw stream data from the DJI camera, and outputs YUV data. You advised me to "get the raw video data from the dji sdk and pass it to the Wikitude SDK": since the Wikitude Input Plugin needs YUV 420 data arranged to comply with the NV21 standard in order to provide the custom camera, I should pass it the YUV output of the MediaCodec, right?
On this point, I tried to retrieve byte buffers from the MediaCodec output (which is possible by setting the Surface parameter to null in configure(), causing a callback to pass the buffers out to an external listener), but I'm having issues with the colours in the visualization: the decoded video's colour is wrong (blue and red seem to be reversed, and there is too much noise when the camera moves). (Note that when I pass a non-null Surface, after the instruction codec.releaseOutputBuffer(outIndex, true) MediaCodec renders the frames on it and shows the video stream properly, but I need to pass the stream to the Wikitude Plugin, so I must set the Surface to null.)
I tried setting different MediaFormat.KEY_COLOR_FORMAT values, but none of them works properly. How can I solve this?
When decoding into bytebuffers with MediaCodec, you can't decide what color format the buffer uses; the decoder decides, and you have to deal with it. Each decoder can use a different format; some of them can be a standard format like COLOR_FormatYUV420Planar (corresponding to I420) or COLOR_FormatYUV420SemiPlanar (corresponding to NV12 - not NV21), while others can use completely proprietary formats.
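A minimal sketch of dealing with this, assuming codec is a started video decoder configured with a null Surface (the timeout value is a placeholder):

    // Drain one output buffer and react to the color format the decoder chose.
    static void drainOnce(MediaCodec codec) {
        final long TIMEOUT_US = 10_000; // placeholder
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = codec.dequeueOutputBuffer(info, TIMEOUT_US);
        if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            MediaFormat fmt = codec.getOutputFormat();
            int color = fmt.getInteger(MediaFormat.KEY_COLOR_FORMAT);
            if (color == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar) {
                // I420: Y plane, then U plane, then V plane
            } else if (color == MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar) {
                // NV12: Y plane, then interleaved U,V - U first, unlike NV21
            } else {
                // vendor-specific format: must be handled explicitly
            }
        } else if (outIndex >= 0) {
            ByteBuffer buf = codec.getOutputBuffer(outIndex); // API 21+
            // ... repack buf into NV21 for Wikitude based on the detected format
            codec.releaseOutputBuffer(outIndex, false);
        }
    }

Reversed blue and red is exactly what you would see if NV12 output were treated as NV21, so swapping the interleaved chroma bytes while repacking may already fix your colours.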
See e.g. https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#401 for an example of which decoder output formats are commonly handled, and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#963 for a reference showing that decoders are allowed to return proprietary formats.
You can have a look at e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/qcom.c;h=301e9150ae66075ca264e83566504802ed57578c;hb=bdc690e9c0e2516c00a6d3733a77a87a25d9b6e3 for an example on how to interpret one common proprietary color format.

Grayscale video streaming from Google Glass using WebRTC

I'm trying to stream audio and video from Google Glass to a browser. The browser just has to receive the video and audio.
I compiled the WebRTC source code following the instructions at http://www.webrtc.org/native-code/android.
So far, it works. But I'm having an issue with the video: it displays in grayscale, and I'm not sure what changes I should make to the source code to fix this.
Here is a screenshot of the problem: [screenshot not reproduced here]
I found two related questions on stackoverflow.com, but they didn't give me the solution:
VP8 Encoding results in grayscale image on Google Glass
VP8 encode/decode on android results in black and white image with red, green and blue squares
Thanks very much for any help that you can provide!
Per the first question you linked, you likely need to compensate for a bug in the camera code for Glass: the image capture code probably thinks it's getting YV12 but is actually getting NV21. The simplest thing to do is to convert the NV21 data to something else, like I420 (the common internal video representation). Alternatively, mark the frame objects as NV21 and let the rest of the code handle it.
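A minimal sketch of that conversion, assuming even dimensions and no row padding (I420 stores the full U plane before V, while NV21 interleaves V before U):

    static void nv21ToI420(byte[] nv21, byte[] i420, int width, int height) {
        int ySize = width * height;
        System.arraycopy(nv21, 0, i420, 0, ySize); // Y plane is unchanged
        int uOffset = ySize;             // I420: full U plane first...
        int vOffset = ySize + ySize / 4; // ...then full V plane
        for (int i = 0; i < ySize / 4; i++) {
            i420[vOffset + i] = nv21[ySize + 2 * i];     // V comes first in NV21
            i420[uOffset + i] = nv21[ySize + 2 * i + 1]; // then U
        }
    }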

Android video frame processing

I am working on an application that does real-time image processing on camera frames. For that, I use the preview callback's onPreviewFrame method. This works fine for cameras whose preview frames have a resolution of at least 640x480. But when the camera does not support such a large preview resolution, the application is programmed to refuse to process the frames. The problem I have now is with phones like the Sony Xperia Go: a very nice device that can record video at up to 1280x720, but whose maximum camera preview size is unfortunately 480x320, which is too small for my needs.
What I would like to know is how to obtain these larger camera frames (up to 1280x720 or more). It has to be possible, because the built-in camera application can record video at that resolution, so it must somehow access those larger frames. How can I do the same from my application?
The application has to support Android 2.1 and later, but I would be very happy even with a solution that only works on Android 4.0 or newer.
This question is similar to http://stackoverflow.com/questions/8839109/processing-android-video-frame-by-frame-while-recording, but I don't need to save the video; I only need those high-resolution video frames...
It seems the only thing you can do is decode frames from the MediaRecorder data.
You can use ffmpeg to decode the recorder output read from a LocalSocket.
These open-source projects may help (a rough sketch of the socket trick follows the links):
ipcamera-for-android: https://code.google.com/p/ipcamera-for-android/
spydroid-ipcamera: https://code.google.com/p/spydroid-ipcamera/
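The core trick both projects use is pointing MediaRecorder's output at a LocalSocket file descriptor instead of a file. A rough sketch, assuming an already opened and unlocked Camera (the socket name and video size are placeholders, and note that an MP4 written to a non-seekable socket needs header fix-ups, which those projects implement):

    // Stream MediaRecorder output through a local socket pair.
    static void startSocketRecorder(Camera camera) throws IOException {
        LocalServerSocket server = new LocalServerSocket("camera-stream");
        LocalSocket sender = new LocalSocket();
        sender.connect(new LocalSocketAddress("camera-stream"));
        LocalSocket receiver = server.accept();

        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera); // camera.unlock() must have been called
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(1280, 720);
        recorder.setOutputFile(sender.getFileDescriptor());
        recorder.prepare();
        recorder.start();
        // Read the encoded stream from receiver.getInputStream() on another
        // thread and feed it to a decoder (e.g. ffmpeg) to get raw frames.
    }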
You should probably take a look at the OpenCV library.
It has methods that allow you to receive full frames.
My impression is that the video preview size is small and the preview is slow, slower than the configured video recording frame rate.
I once looked for solutions to this, and it seems a better way is to take the stream from the video recorder and process its data directly.
You can find some examples in Android IP-camera projects.
You can use this library:
https://github.com/natario1/CameraView
This library has an addFrameProcessor listener whose process function receives a Frame parameter.
If you need to record video while processing frames, use CameraView's takeVideoSnapshot function; takeVideo stops frame processing until the recording completes, at least in the latest version I tested (2.6.4).
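A minimal usage sketch (class and method names as in CameraView 2.x; verify against the version you use):

    // Inside your Activity's onCreate(), with a CameraView in the layout.
    CameraView cameraView = findViewById(R.id.camera);
    cameraView.setLifecycleOwner(this);
    cameraView.addFrameProcessor(new FrameProcessor() {
        @Override
        public void process(Frame frame) {
            // With the default Camera1 engine, getData() returns an NV21 byte[]
            byte[] data = frame.getData();
            // process the frame here; keep this fast or frames will be dropped
        }
    });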

Images to Video converter in Android

I would like to convert multiple images (frames) into a video (MP4) on an Android device, and also convert a video (MP4) back into multiple images (one per frame). I have limited knowledge of FFmpeg, and building FFmpeg for Android may consume a lot of time. I would like to ask experienced engineers to suggest a strategy that can complete this task in less time, and to point me to some open-source code that I can modify for it.
First you need to convert each image to YUV format using an image decoder.
Then you can feed each YUV image as video input to the media recorder engine.
Go through the MediaRecorder source code for more information.
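For the first step, a minimal sketch of converting a decoded image's ARGB pixels to NV21 (YUV 4:2:0) with the usual BT.601 approximation; how the YUV frames are then pushed into the recorder or encoder depends on the engine you use:

    // argb: one pixel per int (e.g. from Bitmap.getPixels()).
    static byte[] argbToNv21(int[] argb, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int yIndex = 0, vuIndex = width * height;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = argb[j * width + i];
                int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
                int y = (299 * r + 587 * g + 114 * b) / 1000;
                nv21[yIndex++] = (byte) Math.min(255, Math.max(0, y));
                if (j % 2 == 0 && i % 2 == 0) { // 2x2 chroma subsampling
                    int v = (500 * r - 419 * g - 81 * b) / 1000 + 128;
                    int u = (-169 * r - 331 * g + 500 * b) / 1000 + 128;
                    nv21[vuIndex++] = (byte) Math.min(255, Math.max(0, v)); // V first
                    nv21[vuIndex++] = (byte) Math.min(255, Math.max(0, u));
                }
            }
        }
        return nv21;
    }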

encoding images to video with ffmpeg

We are working on Android 3D Animation App.
We need to identify images, then save and encode them to video using FFmpeg (since the Android API does not support this). Once the video is generated, audio is appended to it.
We are facing 2 problems on this.
The first is a memory leak when saving the identified images for encoding; the emulator's CPU gets overloaded. Is FFmpeg invoked every time an image is selected? How can we resolve this?
Second (assuming we get past the first), we are unable to encode the selected images correctly; the result is a green video. What could be the reason for this?
Is there any tool other than FFmpeg for encoding images to H.264 video?
Will the image type (raster or vector) impact the video encoding?
Does the Android OS version matter?
Any valuable inputs on this will be greatly appreciated.
Thanks
I also played with that idea, using ffmpeg on an Android phone, but I would suggest doing it on a server, which has much more power and where you don't need to worry about a smartphone's CPU load.
In general, to get help improving your ffmpeg run you need to post the exact ffmpeg calls. ffmpeg is quite complex, and the order of the parameters directly affects efficiency.
I don't know which container format you prefer, but maybe the simple MJPEG codec could work for you. AFAIK it is just the JPEG frames concatenated, which should be much simpler than encoding a video to H.264 (ffmpeg uses x264 for that).
A combination of both would be to generate an MJPEG stream and convert it server-side to an H.264 video that the client downloads. Whether that makes sense depends on the length of the video, if you don't want to waste your customers' traffic.
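For reference, the two routes look roughly like this on the command line (file patterns, rates, and quality settings are placeholders):

    ffmpeg -framerate 25 -i frame%04d.jpg -c:v mjpeg -q:v 3 out.avi
    ffmpeg -framerate 25 -i frame%04d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4

As an aside, a green output video often means the encoder is misinterpreting the pixel format or stride of the input frames; explicitly forcing a pixel format such as yuv420p is a common fix.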
