I'm trying to stream audio and video from a Google Glass to a browser. The browser just has to receive the video and audio.
I compiled the Google WebRTC source code following the instructions here: http://www.webrtc.org/native-code/android.
So far, it works, but I'm having an issue with the video: it's displayed in grayscale, and I'm not sure what changes I should make to the source code to fix this.
Here is a screenshot of the problem:
I found two related questions on Stack Overflow, but neither gave me a solution:
VP8 Encoding results in grayscale image on Google Glass
VP8 encode/decode on android results in black and white image with red, green and blue squares
Thanks very much for any help that you can provide!
Per the first question you linked, you likely need to compensate for a bug in the camera code for Glass. The image-capture code probably thinks it's getting YV12 when it's actually getting NV21, so the simplest fix is to convert NV21 to something else (like I420, the common internal video representation). Alternatively, change the frame objects to say they're NV21 and let the rest of the code handle it.
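The conversion itself is just a repacking of the chroma plane. A minimal sketch (the class and method names are illustrative, not from the WebRTC sources):

```java
// Minimal sketch: repack an NV21 frame (Y plane followed by interleaved
// V/U samples) into I420 (Y plane, then U plane, then V plane).
public class Nv21ToI420 {
    public static byte[] convert(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int chromaSize = ySize / 4; // one U and one V sample per 2x2 block
        byte[] i420 = new byte[ySize + 2 * chromaSize];

        // The luma plane is identical in both layouts.
        System.arraycopy(nv21, 0, i420, 0, ySize);

        // NV21 stores chroma as V,U,V,U,...; I420 wants all U, then all V.
        for (int i = 0; i < chromaSize; i++) {
            i420[ySize + i]              = nv21[ySize + 2 * i + 1]; // U
            i420[ySize + chromaSize + i] = nv21[ySize + 2 * i];     // V
        }
        return i420;
    }
}
```

If the wrong-format theory is correct, feeding the output of this into the encoder path should bring the color back, since a grayscale image is exactly what you get when the chroma planes are misread.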
Related
Currently I'm using the sample app from Sony for their action camera. The sample app connects to the action camera and fetches images over HTTP; the images are in the HTTP payload, and I'm able to draw them on a SurfaceView. What I'm trying to do is take the frames from the SurfaceView and encode them as H.264. I've read it can be done via MediaCodec, but I'm a bit confused and the documentation doesn't explain much. Is there a MediaCodec expert out here who can help me?
You might want to try Intel INDE Media for Mobile. It has a GLCapture class which accepts textures, encodes them, and packs them into a stream; it also has built-in streaming to a Wowza server.
Tutorials are here: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
Samples are on github: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
There are samples for game capturing, but it should be easy to switch from drawing game scenes to copying the camera bitmap to a texture.
Please go through EncodeDecodeTest.java from http://bigflake.com/mediacodec/ and Grafika (https://github.com/google/grafika); these will help you do this. Sony devices usually have some color-format issues that you have to take care of.
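The color-format issues mentioned above are usually handled by querying the encoder's supported formats before configuring MediaCodec and converting your frames to whatever it accepts. A pure-Java sketch of the selection step (the constant values mirror Android's MediaCodecInfo.CodecCapabilities; on a device you would read the array from getCapabilitiesForType("video/avc").colorFormats):

```java
// Sketch of picking a color format the encoder supports, in preference
// order. Returns -1 if none of the formats we can produce is supported,
// in which case you need another conversion path.
public class ColorFormatPicker {
    // Values mirror MediaCodecInfo.CodecCapabilities on Android.
    public static final int COLOR_FormatYUV420Planar = 19;     // I420
    public static final int COLOR_FormatYUV420SemiPlanar = 21; // NV12

    public static int pick(int[] supportedFormats) {
        int[] preferred = { COLOR_FormatYUV420Planar, COLOR_FormatYUV420SemiPlanar };
        for (int want : preferred) {
            for (int have : supportedFormats) {
                if (have == want) return want;
            }
        }
        return -1;
    }
}
```

Whatever format this returns is what you set on the MediaFormat before calling configure(), and it dictates how you lay out the pixel data you queue into the encoder's input buffers.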
I am working on video conversion with FFmpeg. I want to replace a face in a video with my own image. I've researched the task and describe my approach below; please let me know if I'm wrong, and suggest a more proper procedure if so.
1) I can extract all images from a video frame by frame.
2) Then detect the face in each image.
3) Morph an image onto the face.
4) Then again make a video with these images through FFMPEG.
Am I right? If yes, what happens to the audio in this process? And if wrong, where am I mistaken?
ffmpeg can help with the video/audio handling part. For the face replacement you need a dedicated image-processing tool: OpenCV (http://opencv.org/) comes to mind, but you can search further. Good luck.
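To sketch the ffmpeg side of steps 1 and 4, including the audio question: you extract frames to numbered images, and when rebuilding you give ffmpeg both the edited frames and the original file, taking video from the former and copying the audio from the latter. The commands below are built as argument lists for ProcessBuilder; file names and the frame rate are placeholders.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the two ffmpeg invocations: extractFrames dumps every frame
// to numbered PNGs; rebuildVideo makes a video from the edited frames
// while copying the audio track from the original file untouched.
public class FaceSwapPipeline {
    public static List<String> extractFrames(String input) {
        return Arrays.asList("ffmpeg", "-i", input, "frames/%05d.png");
    }

    public static List<String> rebuildVideo(String framePattern, String original,
                                            int fps, String output) {
        return Arrays.asList(
            "ffmpeg",
            "-framerate", String.valueOf(fps),
            "-i", framePattern,           // edited frames
            "-i", original,               // original file, for its audio
            "-map", "0:v", "-map", "1:a", // video from frames, audio from original
            "-c:a", "copy",               // keep audio as-is
            output);
    }
}
```

Run each list with `new ProcessBuilder(cmd).inheritIO().start()`. The `-map`/`-c:a copy` pair is what answers the audio question: the sound track never goes through the image pipeline at all.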
Hi guys, I am trying to develop a video-chat application using the H.264 encoder for video, but I am facing some issues, as the video seems a bit unclear. I have attached an image below in which you can see some shade above the eyebrows. Can anybody tell me what the possible problem could be? It occurs only with the front camera and works fine with the rear camera.
Hoping for a response.
The encoding world is huge: there are a lot of ways to store, manipulate, and transfer data, and in the imaging world the options simply multiply.
Your problem reminds me of this topic. It also looks like there are some shadows, as if that bluish shadow were part of a previous frame with the shape in a different position.
Also remember that H.264 is a patent-encumbered codec, and you may have to pay licensing fees if you want to use it commercially.
Is it possible to record video with an overlay view? While recording the video I display a small image in an overlay view. What I want is for that overlay image to be recorded along with the video, so that when I open the recorded video I can see the overlaid image in it.
Friends, I need a solution for this ASAP. Please suggest a proper one :)
Unfortunately, there is no way in the current Android API to get between the camera input and the encoder. Any solution would involve capturing frames from the video source, overlaying the additional image, and then encoding the captured frames yourself. Even in native code with NEON optimizations on a fast system, this is going to be a slow process. Alternatively, the whole stream could be post-processed in a similar fashion, but this would also require a decoder.
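The compositing step itself is straightforward once you have a frame as pixels. A sketch of alpha-blending a small ARGB overlay onto a frame, using plain int[] buffers in place of Android Bitmaps (in real code you would draw with Canvas.drawBitmap instead):

```java
// Sketch of the overlay step: alpha-blend a small ARGB image onto an
// ARGB frame at position (ox, oy). Plain int[] buffers stand in for
// Android Bitmap objects so the math is visible.
public class Overlay {
    public static void blend(int[] frame, int fw, int fh,
                             int[] overlay, int ow, int oh, int ox, int oy) {
        for (int y = 0; y < oh; y++) {
            for (int x = 0; x < ow; x++) {
                int fx = ox + x, fy = oy + y;
                if (fx < 0 || fy < 0 || fx >= fw || fy >= fh) continue;
                int src = overlay[y * ow + x];
                int a = src >>> 24; // overlay alpha, 0..255
                int dst = frame[fy * fw + fx];
                int r = ((src >> 16 & 0xFF) * a + (dst >> 16 & 0xFF) * (255 - a)) / 255;
                int g = ((src >> 8  & 0xFF) * a + (dst >> 8  & 0xFF) * (255 - a)) / 255;
                int b = ((src       & 0xFF) * a + (dst       & 0xFF) * (255 - a)) / 255;
                frame[fy * fw + fx] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }
}
```

Doing this per frame in Java is exactly the slow path described above; it only illustrates what the overlay operation is, not how to make it fast.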
For future reference: This is possible using the CameraView library, at least in "snapshot video" mode.
I know how to capture video on an Android device, but I would like to capture video, add some other information to it (e.g. a funny time clock), and save it all to a file, so the person watching the video will see the exact time of capture. I would also like to add a watermark.
Do you know how I can do this, and is it possible on an Android device? I read the API but couldn't find anything that could help me.
I was asked this question a short time ago, and we came up with a backup plan: send your video to a server and let it (using ffmpeg?) apply the watermark, save the file, and send a link back to the phone. Maybe that's a route to take?
edit:
There seems to be an Android port of FFmpeg; see for instance this link: http://gitorious.org/~olvaffe/ffmpeg/ffmpeg-android
I haven't had the time to compile it myself, but it seems you can either use normal FFmpeg with the NDK or use this version to compile for Android. It's a bit more work, but looks doable.
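For the server-side route, the watermark and the running timestamp can be done in one ffmpeg call with the overlay and drawtext filters. A hedged sketch, built as an argument list for ProcessBuilder (paths and positions are placeholders, and drawtext assumes an ffmpeg build with fontconfig so no fontfile is needed):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of one ffmpeg invocation: overlay a watermark image at the
// top-left and burn a running hh:mm:ss timestamp into the frame.
public class Watermark {
    public static List<String> command(String input, String logo, String output) {
        // The colon inside %{pts:hms} must be escaped for the filter parser.
        String filter = "overlay=10:10,"
                + "drawtext=text='%{pts\\:hms}':x=10:y=h-40:fontcolor=white";
        return Arrays.asList("ffmpeg", "-i", input, "-i", logo,
                "-filter_complex", filter, output);
    }
}
```

This burns the clock into the pixels, so it survives any player, which is what the question asks for; the trade-off versus doing it on the device is the round trip to the server.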
I actually don't think that's possible. You can fetch video frames from a camera preview, but there's no good way to encode them into a video. The standard video recorder (MediaRecorder) can only record the direct camera input into a video file.