Hi guys, I am trying to develop a video chat application using the H.264 encoder for video, but I am facing an issue: the video seems a bit unclear. I have attached an image below in which you can see some shading above the eyebrows. Can anybody tell me what the possible problem could be? It occurs only with the front camera and works fine with the rear camera.
Hoping for a response.
The encoding world is huge: there are a lot of ways to store, manipulate and transfer data, and when it comes to imaging, the options simply multiply.
Your problem reminds me of this topic. It also looks like there are some shadows, as if that bluish shadow were part of a previous frame with the shape in a different position.
Also remember that H.264 is a patent-encumbered codec, so depending on how you use and distribute it, you may have to pay licensing fees.
I'm trying to stream audio and video from Google Glass to a browser. The browser just has to receive the video and audio.
I compiled the Google WebRTC source code following the instructions here: http://www.webrtc.org/native-code/android.
So far it works, but I'm having an issue with the video: it displays in grayscale, and I'm not sure what changes I should make to the source code to fix this.
Here is a screenshot of the problem:
I found two related questions on Stack Overflow, but neither gave me the solution:
VP8 Encoding results in grayscale image on Google Glass
VP8 encode/decode on android results in black and white image with red, green and blue squares
Thanks very much for any help that you can provide!
Per the answer to the first question you linked, you likely need to compensate for a bug in the camera code on Glass. The image capture code probably thinks it's getting YV12 when it's actually getting NV21, so the simplest thing to do is convert NV21 to something else (like I420, the common internal video representation). Alternatively, change the frame objects to say they're NV21 and let the rest of the code handle it.
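For illustration, here is a minimal sketch of that first option in Java, assuming the frames arrive as plain NV21 byte arrays with even width and height (the method name is just for this example):

```java
// NV21 is the Y plane followed by interleaved V/U pairs; I420 is the
// Y plane, then the full U plane, then the full V plane.
public static byte[] nv21ToI420(byte[] nv21, int width, int height) {
    int ySize = width * height;
    int chromaSize = ySize / 4;        // one U and one V sample per 2x2 block
    byte[] i420 = new byte[ySize + 2 * chromaSize];

    // The Y plane is identical in both formats.
    System.arraycopy(nv21, 0, i420, 0, ySize);

    int uOffset = ySize;               // I420: U plane starts right after Y
    int vOffset = ySize + chromaSize;  // I420: V plane starts after U
    for (int i = 0; i < chromaSize; i++) {
        i420[vOffset + i] = nv21[ySize + 2 * i];     // NV21 stores V first...
        i420[uOffset + i] = nv21[ySize + 2 * i + 1]; // ...then U, interleaved
    }
    return i420;
}
```

One extra copy per frame should be cheap compared to the encode itself, so this shouldn't noticeably hurt your frame rate.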
We are working on an Android 3D animation app.
We need to identify images, then save and encode them to video using FFmpeg (since the Android API does not support this). Once the video is generated, audio is appended to it.
We are facing two problems with this.
First is a memory leak when saving the identified images for encoding; the emulator's CPU gets overloaded. Is FFmpeg called every time an image is selected? How can we resolve this issue?
Second (in case we get through the first one), we are not able to encode the selected images, since this generates a green-colored video. What could be the reason for this?
Is there any tool other than FFmpeg for encoding images to H.264 video?
Will the image format (raster or vector) impact the video encoding?
Does the Android OS version matter?
Any valuable inputs on this will be greatly appreciated.
Thanks
I also played with that idea of using ffmpeg on an Android phone, but I would suggest doing it on a server, which has much more power. On a server you don't need to think about the CPU load of a smartphone.
In general, to get help improving your ffmpeg run, you need to post the ffmpeg calls themselves. ffmpeg is quite complex, and the order of the parameters directly affects efficiency.
I don't know which container format you prefer, but maybe a simple MJPEG codec could work for you. AFAIK it is just the JPEG frames concatenated to each other, which should be much simpler than encoding a video to H.264/x264 (ffmpeg uses the latter).
A combination of both would be to generate an MJPEG stream that gets converted on the server side to an H.264 video, which the client can then download. But that really depends on the length of the video, if you don't want to waste your customers' bandwidth.
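If you do stay with ffmpeg on the device, here is a minimal sketch of driving a bundled ffmpeg binary from Java; the binary path, input pattern and frame rate are all assumptions for this example. Forcing `-pix_fmt yuv420p` is worth trying, since an unexpected pixel format is a common cause of green or otherwise corrupted output:

```java
import java.io.IOException;

// Sketch only: encode a numbered JPEG sequence to H.264 by running an
// ffmpeg binary shipped with the app. All paths here are hypothetical.
public final class FfmpegRunner {
    public static int encodeJpegsToMp4() throws IOException, InterruptedException {
        String[] cmd = {
            "/data/data/com.example.app/files/ffmpeg", // hypothetical binary location
            "-framerate", "25",                        // assumed input frame rate
            "-i", "/sdcard/frames/frame_%04d.jpg",     // hypothetical image pattern
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",                     // common fix for green/garbled color
            "/sdcard/out.mp4"
        };
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        return p.waitFor();                            // 0 means ffmpeg succeeded
    }
}
```

Note that this runs ffmpeg once for the whole sequence. If your current code spawns ffmpeg every time an image is selected, that alone would explain the CPU overload you are seeing.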
Hi, I'm using the game engine AndEngine, and I want to be able to stream live video from a webcam on a robot to my Android app. The reason I'm using AndEngine is that I need game controls to drive my robot. However, I have no idea how to stream video when using AndEngine (or even when not using it, for that matter). The controls and the video feed need to be on the same screen (unless there's absolutely no other way). My question is: how would one put a video stream on top of an AndEngine scene, and/or how would one format that feed so that it doesn't obscure the controls? (They're oriented in the bottom left and top right of the screen, which is a pain, I know, but I don't think I can change that due to some problems with multi-touch on my device.)
Thanks.
Look at the Augmented Reality example on GitHub:
https://github.com/nicolasgramlich/AndEngineExamples
It could be of use to you. However, I know that this example was problematic and didn't work when I tried it, but maybe you'll have more luck.
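If that example doesn't pan out, one generic fallback is to stack a plain Android VideoView on top of the engine's render view in a FrameLayout, since children added later are drawn above earlier ones. This is only a sketch: the stream URL is hypothetical, and `gameSurfaceView` stands in for whatever view AndEngine renders into on your setup.

```java
import android.net.Uri;
import android.widget.FrameLayout;
import android.widget.VideoView;

// Inside your Activity's onCreate(), once the engine's render view exists.
FrameLayout root = new FrameLayout(this);
root.addView(gameSurfaceView);                        // game scene at the bottom

VideoView video = new VideoView(this);
video.setVideoURI(Uri.parse("rtsp://robot.local/stream")); // hypothetical URL
root.addView(video, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.WRAP_CONTENT,        // don't fill the screen,
        FrameLayout.LayoutParams.WRAP_CONTENT));      // so the controls stay usable

setContentView(root);
video.start();
```

Keeping the video WRAP_CONTENT (or giving it explicit bounds) leaves your bottom-left and top-right controls uncovered.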
Is it possible to record video with an overlay view? While recording the video I display a small image on an overlay view. What I want is for that overlay image to be recorded along with the video, so that when I open the recorded video I can see the overlaid image in it as well.
Friends, I need this solution ASAP. Please suggest a proper solution :)
Unfortunately, there is no way in the current Android API to get between the camera input and the encoder. Any solution would involve capturing frames from the video source, overlaying the additional image, and then encoding the composited frames yourself. Even in native code with NEON optimizations on a fast system, this is going to be a slow process. Alternatively, the whole stream could be post-processed in a similar fashion, but this would also require a decoder.
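For the overlay step itself, here is a minimal sketch of the compositing, assuming each captured frame has already been converted to a mutable RGB Bitmap (compositing directly in YUV would skip that conversion but is considerably more work):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

public final class FrameCompositor {
    /** Draws overlay onto frame at (x, y); frame must be a mutable Bitmap. */
    public static Bitmap composite(Bitmap frame, Bitmap overlay, int x, int y) {
        Canvas canvas = new Canvas(frame);   // draws straight into frame's pixels
        canvas.drawBitmap(overlay, x, y, null);
        return frame;                        // hand the composited frame to your encoder
    }
}
```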
For future reference: This is possible using the CameraView library, at least in "snapshot video" mode.
What I am attempting to do is create an application that adds effects to videos while recording. Is there any way to have a callback method receive each frame, apply an effect to it, and have that frame recorded? There is currently an application on the Android Market (Videocam Illusion) that claims to be the only application that can do this. Does anyone know how Videocam Illusion does it, or have links to tutorials on video processing for Android?
This is a similar question that is unanswered:
Android preview processing while video recording
Unfortunately (unless I'm unaware of some other method provided by the API), the way this is done is by opening a direct stream to the camera and manipulating it with some sort of native code. I've done something similar to this before when I was working on an eye tracker, so I'll tell you how it basically works:
1. Open a stream to the camera using the NDK (or possibly the Java API, depending on the implementation).
2. Modify the bytes of the stream: each frame is sent as a separate packet, so you have to grab each packet from the camera and modify it. You can replace colors or translate pixels, and you can also use OpenGL to modify the image entirely, adding things like glass effects (see the sketch at the end of this answer for grabbing frames in Java).
3. Flatten the images back out.
4. Send the image over to the view controller to be displayed.
One thing you have to be mindful of is that the loading and sending of the packets and images has to happen in about 1/30th of a second for each frame, so the code has to be extremely optimized.
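As a starting point for step 2, here is a minimal sketch of grabbing frames in Java with the old android.hardware.Camera preview callback (frames arrive as NV21 by default; threading, error handling and parameter setup are omitted):

```java
import android.hardware.Camera;
import android.view.SurfaceHolder;
import java.io.IOException;

// Sketch only: the caller must run this on a suitable thread and
// release() the returned camera when finished.
public final class FrameGrabber {
    public static Camera startGrabbing(SurfaceHolder previewSurface) throws IOException {
        Camera camera = Camera.open();
        camera.setPreviewDisplay(previewSurface);  // the camera needs a surface to stream to
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // 'data' is one NV21 frame: replace colors here, hand it to
                // native code, or upload it to an OpenGL texture. At 30 fps
                // you have roughly 33 ms per frame, so keep this path fast.
            }
        });
        camera.startPreview();
        return camera;
    }
}
```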