I am working on FFMPEG video conversion, and I want to replace a face in a video with my own image. I did some research and came up with the procedure described below. Please let me know if I am wrong, and suggest a better procedure for the task.
1) I can extract all the frames from the video as images.
2) Then detect the face in each image.
3) Morph my image onto the detected face.
4) Then rebuild a video from those images with FFMPEG.
Am I right? If yes, what happens to the audio in this process? And if not, where am I mistaken?
ffmpeg can help with the video/audio handling part; for the face replacement you need a dedicated image-processing tool: OpenCV (http://opencv.org/) comes to mind, but you can do further research. Good luck.
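To make the split concrete, here is a rough sketch of the ffmpeg side of that workflow; it also answers the audio question, since the original audio track can be kept aside and re-muxed at the end. File names, the 25 fps frame rate, and the codec choices are placeholders to adapt to your source.

```
# 1) extract every frame as an image (the frames/ directory must exist)
ffmpeg -i input.mp4 frames/frame_%05d.png

# keep the original audio track aside without re-encoding
# (assumes the source audio is AAC; adjust the extension or re-encode otherwise)
ffmpeg -i input.mp4 -vn -acodec copy audio.aac

# 4) rebuild the video from the processed frames and re-mux the audio
ffmpeg -framerate 25 -i frames/frame_%05d.png -i audio.aac \
       -c:v libx264 -pix_fmt yuv420p -c:a copy -shortest output.mp4
```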
I'm trying to stream audio and video from a Google Glass to a browser. The browser just has to receive the video and audio.
I compiled the Google source code following the instructions here: http://www.webrtc.org/native-code/android.
So far it works, but I'm having an issue with the video: it's displayed in grayscale, and I'm not sure what changes I should make to the source code to fix this.
Here is a screenshot of the problem:
I found two related questions on stackoverflow.com, but I didn't find the solution in them:
VP8 Encoding results in grayscale image on Google Glass
VP8 encode/decode on android results in black and white image with red, green and blue squares
Thanks very much for any help that you can provide!
Per the answer on the first question you linked, you likely need to compensate for a bug in the camera code on Glass. The image-capture code probably thinks it's getting YV12 but is actually getting NV21, so the simplest thing to do is convert NV21 to something else (such as I420, the common internal video representation). Alternatively, change the frame objects to say they're NV21 and let the rest of the code handle it.
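If you go the conversion route, the repacking itself is straightforward. A minimal sketch in Java, assuming the buffers are tightly packed with no row padding:

```java
public final class YuvRepack {
    /** Repack an NV21 frame (Y plane + interleaved V/U) into I420 (Y, then U, then V). */
    public static byte[] nv21ToI420(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int chromaSize = ySize / 4;
        byte[] i420 = new byte[ySize + 2 * chromaSize];

        System.arraycopy(nv21, 0, i420, 0, ySize);   // luma plane is identical

        int uOffset = ySize;                          // U plane follows Y
        int vOffset = ySize + chromaSize;             // V plane follows U
        for (int i = 0; i < chromaSize; i++) {
            i420[vOffset + i] = nv21[ySize + 2 * i];      // NV21 stores V first...
            i420[uOffset + i] = nv21[ySize + 2 * i + 1];  // ...then U
        }
        return i420;
    }
}
```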
I am working on a project that reads a video file from the SD card, processes the frames, and re-displays them as video in real time. So far I haven't managed to find a way to extract frames directly from the MediaPlayer, i.e. something like MediaPlayer.getCurrentFrame(). MediaMetadataRetriever.getFrameAtTime() is super slow, so it's difficult to get a decent frame rate.
The only thing I have right now is using a TextureView surface with the MediaPlayer. I start the MediaPlayer and, in real time, read the bitmap from the TextureView with TextureView.getBitmap(), then process the Bitmap and display it on another ImageView. This gives me a decent frame rate.
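For reference, a minimal sketch of that loop (mediaPlayer, textureView and imageView are assumed fields set up elsewhere; processFrame() is a hypothetical hook for the per-frame processing):

```java
// Inside the Activity, after the MediaPlayer has been prepared and started.
private void startFrameLoop() {
    final Handler handler = new Handler(Looper.getMainLooper());
    handler.post(new Runnable() {
        @Override public void run() {
            if (mediaPlayer.isPlaying()) {
                Bitmap frame = textureView.getBitmap();     // current video frame
                if (frame != null) {
                    imageView.setImageBitmap(processFrame(frame));
                }
            }
            handler.postDelayed(this, 33);                  // poll at roughly 30 fps
        }
    });
}
```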
The problem is that the TextureView has to be in the XML layout and has to be visible, which I do not want.
Can someone please shed some light here? Is it possible to somehow hide the TextureView attached to the MediaPlayer, without fake-hiding it with tricks like RelativeLayouts :)? iOS has a solution for this, AVPlayerItemVideoOutput; I need something like that on Android.
Or is there any other workaround to extract frames from a video file?
Thank you
For video processing you can use the FFMPEG library to get frames from a video, but for that you need some knowledge of Android native integration.
I hope this will help you.
Is there a way to record square (640x640) videos and concatenate them on Android? I looked around on the Internet and found some solutions. The solution seems to be "ffmpeg". However, to use ffmpeg I need to dive into the NDK and build ffmpeg from source. Is there a solution by only using the Android SDK?
My basic needs are:
Record multiple videos (square format)
Resize the captured videos (e.g. 480x480 to 640x640)
Concatenate the captured videos
Rotate the final video (90° clockwise)
Final output will be in mp4 or mpg format
Is there a solution by only using the Android SDK?
Not really.
Your primary video recording option is MediaRecorder, and it supports exactly nothing of what you list. For example, there is no requirement for any Android device to support taking square videos.
You are also welcome to use the camera preview stuff to assemble your own videos from individual frames. Vine does this, AFAIK. There, you could perhaps use existing Bitmap facilities to handle the cropping, resizing, and rotating. However, this will be slow, and doing this work in a way that can keep up with a reasonable frame rate will be difficult. Also, I do not know if there is a library that can stitch those frames together into a video, or blend in any sort of audio (camera previews are pure images).
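If you do go the per-frame Bitmap route, the crop/resize/rotate part is the easy bit. A minimal sketch (the 640x640 target and the 90° rotation come from your list; the source Bitmap is assumed to be a decoded preview frame):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;

public final class SquareFrames {
    /** Center-crop to a square, scale to 640x640, then rotate 90° clockwise. */
    public static Bitmap toRotatedSquare(Bitmap src) {
        int side = Math.min(src.getWidth(), src.getHeight());
        int x = (src.getWidth() - side) / 2;
        int y = (src.getHeight() - side) / 2;

        Bitmap square = Bitmap.createBitmap(src, x, y, side, side);       // center crop
        Bitmap scaled = Bitmap.createScaledBitmap(square, 640, 640, true);

        Matrix rotate = new Matrix();
        rotate.postRotate(90);                                             // clockwise
        return Bitmap.createBitmap(scaled, 0, 0, 640, 640, rotate, true);
    }
}
```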
How can I put text or a transparent image into a video? I can display text overlaid on the camera output, but how can I record it? Using OpenCV is an alternative, but I don't really want to use OpenCV Manager (or a 25+ MB binary).
Is there a way to record overlaid video with the Android SDK or a third-party library? What are my options?
Update: I'm not looking for a "record to disk, then load the recorded video and process every frame" solution. I'm trying to find a way to process every camera frame before recording, something like OpenCV does.
You can get help from here to get the video byte array for each frame from the camera, and then save the frames using some third-party encoder. You can create a Bitmap from the byte array and draw overlay text on it. Example code is here, and here is the link to the third-party encoder, AndroidFFmpeg.
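A minimal sketch of that middle step, assuming the usual NV21 preview format: decode the frame into a Bitmap via a JPEG round trip, then draw the text with a Canvas before handing the frame to the encoder. The text position and size are arbitrary placeholders.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.ImageFormat;
import android.graphics.Paint;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

public final class FrameOverlay {
    /** Decode an NV21 preview frame and draw overlay text on it. */
    public static Bitmap overlayText(byte[] nv21, int width, int height, String text) {
        // NV21 -> Bitmap via a JPEG round trip (simple, not the fastest path).
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length)
                .copy(Bitmap.Config.ARGB_8888, true);       // mutable copy to draw on

        Canvas canvas = new Canvas(frame);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        paint.setTextSize(48f);
        canvas.drawText(text, 20f, height - 20f, paint);     // placeholder position/size
        return frame;
    }
}
```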
Is it possible to record video with an overlay view? While recording the video I display a small image on an overlay view. What I want is for that overlay image to be recorded along with the video, so that when I open the recorded video I can see the overlaid image in it as well.
Friends, I need this solution ASAP. Please suggest a proper solution :)
Unfortunately, there is no way in the current Android API to get between the camera input and the encoder. Any solution would involve capturing frames from the video source, overlaying the additional image, and then encoding the captured frames yourself. Even in native code with NEON optimizations on a fast system, this is going to be a slow process. Alternatively, the whole stream could be post-processed in a similar fashion, but that would also require a decoder.
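For the overlay step itself, compositing the image onto a captured frame is simple once you have both as Bitmaps; the expensive parts are the capture and the encode. A minimal sketch (both Bitmaps are assumed to be ARGB_8888, and left/top are wherever your overlay view sits):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

public final class OverlayCompositor {
    /** Draw a (transparent) overlay Bitmap onto a captured frame before encoding it. */
    public static Bitmap composite(Bitmap frame, Bitmap overlay, float left, float top) {
        Bitmap result = frame.copy(Bitmap.Config.ARGB_8888, true); // mutable copy
        Canvas canvas = new Canvas(result);
        canvas.drawBitmap(overlay, left, top, null);                // overlay alpha is respected
        return result;
    }
}
```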
For future reference: This is possible using the CameraView library, at least in "snapshot video" mode.