I've implemented an Android app that implements the CvCameraViewListener interface. In the onCameraFrame(Mat inputFrame) method I process the inputFrame captured from the camera.
Now to my problem: is there a way to use a saved video file on my phone as input instead of getting the frames directly from the camera? In other words, I would like to read a video file frame by frame in Mat format.
Is there a possible way to do that?
Thanks for your answers.
I haven't tested this and I don't have much experience with OpenCV on Android, but you can try something like this:
// FD: file descriptor or path of the video file; frameTimeUs: desired frame time in microseconds
Bitmap myBitmapFrame;
Mat myCVMat = new Mat();
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(FD);
    // grab the frame closest to the requested time as a Bitmap
    myBitmapFrame = retriever.getFrameAtTime(frameTimeUs);
    // convert the Bitmap to an OpenCV Mat for further processing
    Utils.bitmapToMat(myBitmapFrame, myCVMat);
} catch (Exception e) {
    e.printStackTrace();
} finally {
    retriever.release();
}
You may have to implement some callback system, since you can only work with OpenCV after it has been initialized. Also, since getFrameAtTime() takes a timestamp, you can convert a frame number to a time-code using the video's frame rate.
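For that initialization callback, here is a minimal sketch using the standard async loader from the OpenCV4Android SDK (startFrameExtraction() is a hypothetical method where you would run the retriever loop above):

private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            // OpenCV is ready; it is now safe to create Mats and call Utils.bitmapToMat()
            startFrameExtraction(); // hypothetical method that runs the retriever loop
        } else {
            super.onManagerConnected(status);
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_6, this, mLoaderCallback);
}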
Good Luck and Happy Coding. :)
In my Android application, I use an ImageReader surface to capture an image from the camera and dump it into a file. Here is the relevant code:
void saveImage(Image img) {
    // JPEG images have a single plane containing the compressed data
    ByteBuffer buf = img.getPlanes()[0].getBuffer();
    buf.rewind();
    byte[] data = new byte[buf.remaining()];
    buf.get(data);
    saveToFile(data);
}
The generated file, when viewed through avplay, seems to display the image just fine. However, when I load the same content using Qt's QPixmap::loadFromData on Ubuntu, the method fails. The errors I get are:
Corrupt JPEG data: premature end of data segment
Invalid JPEG file structure: two SOI markers
I am wondering if anyone has any insight on how to overcome this problem. I am not sure whether the problem lies in the Android MediaCodec class or in a bug in the JPEG library that Qt uses internally. Regards.
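One thing that might be worth checking (just a guess, not verified for this case): whether buf.remaining() hands back more bytes than the actual JPEG, so that trailing data ends up in the saved file. A quick diagnostic is to truncate the array at the first end-of-image marker (0xFF 0xD9) before saving and see whether Qt will then load it:

// Hypothetical diagnostic: keep only the bytes up to and including the first
// JPEG end-of-image marker (0xFF 0xD9), in case the buffer carries trailing data.
static byte[] trimToEoi(byte[] data) {
    for (int i = 0; i + 1 < data.length; i++) {
        if ((data[i] & 0xFF) == 0xFF && (data[i + 1] & 0xFF) == 0xD9) {
            return java.util.Arrays.copyOf(data, i + 2);
        }
    }
    return data; // no EOI marker found; return the data unchanged
}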
I am writing an app to grab every frame from a video so that I can do some CV processing.
According to the Android API docs, I should set the MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. This really does work on some devices and with some videos.
However, on my Nexus 5 (API 24-25) I get an almost entirely green image when onImageAvailable fires.
I have checked the byte[] in the image's YUV planes and discovered that the bytes I read from the video must be wrong somehow: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but its grabber is too slow.)
I fixed my issue!
When using Image, we should use getCropRect to get the valid area of the Image.
For example, I get image.width == 1088 when I decode a 1920*1080 frame; I should use image.getCropRect() to get the correct size of the image, which will be 1920x1080.
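To make that concrete, here is a rough sketch of reading the Y plane with the crop rectangle applied. It assumes a YUV_420_888 Image obtained from an ImageReader (reader), and that the Y plane's pixel stride is 1 (it is for that format); the U/V planes would be handled the same way with their own strides:

Image image = reader.acquireLatestImage();
Rect crop = image.getCropRect();          // valid region, e.g. 1920x1080 inside a padded buffer
int width = crop.width();
int height = crop.height();

Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();

byte[] y = new byte[width * height];
for (int row = 0; row < height; row++) {
    // skip the decoder padding outside the crop rectangle on each row
    yBuffer.position((crop.top + row) * rowStride + crop.left);
    yBuffer.get(y, row * width, width);
}
// ... repeat for the U and V planes, then close the image
image.close();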
Currently I'm working on an Android program using Mobile Vision. I am using the TextRecognizer class, and one of its methods is detect(Frame frame). Right now I have an image I want to pass into it; however, the image is of type Bitmap. I have tried to convert it to a Frame by casting it, but that hasn't worked. If anyone has any suggestions, it would be much appreciated.
Use the setBitmap method in the Frame.Builder class:
Frame outputFrame = new Frame.Builder().setBitmap(myBitmap).build();
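From there the frame can go straight into the detector; a short hedged example, assuming textRecognizer is an already-initialized TextRecognizer:

// run text recognition on the frame built from the Bitmap
SparseArray<TextBlock> textBlocks = textRecognizer.detect(outputFrame);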
I would like to perform face detection/tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision GitHub), but nothing on video files.
I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't find the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I could just play the video file on a Surface and then pass that to a pipeline.
Alternatively, I can see that Frame.Builder has the functions setImageData and setTimestampMillis. If I were able to read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? I guess this question is similar, but it has no answers. Similarly, I could decode the video into Bitmap frames and pass those to setBitmap.
Ideally I don't want to render the video to the screen, and the processing should happen as fast as the FaceDetector API is capable of.
Alternatively I can see that Frame.Builder has functions setImageData and setTimestampMillis. If I was able to read in the video as ByteBuffer, how would I pass that to the FaceDetector API?
Simply call SparseArray<Face> faces = detector.detect(frame); where detector has to be created like this:
FaceDetector detector = new FaceDetector.Builder(context)
.setProminentFaceOnly(true)
.build();
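If you go the ByteBuffer route from the question, a rough sketch could look like the following. It assumes frameData holds one decoded frame already converted to NV21, and that frameWidth, frameHeight and timestampMs come from your own decoding code:

// wrap the raw NV21 bytes and build a Frame the detector can consume
ByteBuffer buffer = ByteBuffer.wrap(frameData);
Frame frame = new Frame.Builder()
        .setImageData(buffer, frameWidth, frameHeight, ImageFormat.NV21)
        .setTimestampMillis(timestampMs)
        .build();
SparseArray<Face> faces = detector.detect(frame);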
If processing time is not an issue, MediaMetadataRetriever.getFrameAtTime solves the problem. As Anton suggested, you can also use FaceDetector.detect:
Bitmap bitmap;
Frame frame;
SparseArray<Face> faces;
MediaMetadataRetriever mMMR = new MediaMetadataRetriever();
mMMR.setDataSource(videoPath);
String timeMs = mMMR.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION); // video duration, in ms
int totalVideoTime = 1000 * Integer.valueOf(timeMs); // total video time, in us
for (int time_us = 1; time_us < totalVideoTime; time_us += deltaT) {
    bitmap = mMMR.getFrameAtTime(time_us, MediaMetadataRetriever.OPTION_CLOSEST_SYNC); // extract a bitmap from the key frame closest to time_us
    if (bitmap == null) break;
    frame = new Frame.Builder().setBitmap(bitmap).build(); // build a "Frame" object, which can be fed to a face detector
    faces = detector.detect(frame); // detect the faces (detector is a FaceDetector)
    // TODO ... do something with "faces"
}
where deltaT = 1000000/fps and fps is the desired number of frames per second; for example, if you want to extract 4 frames every second, deltaT = 250000.
(Note that faces will be overwritten on every iteration, so you should do something with the results (store/report them) inside the loop.)
I am developing an app for a custom android device. It is still early in the development and it is possible that the camera may physically be rotated 90 degrees to the rest of the device. This means that there is scope for great confusion between portrait and landscape for any images it takes. For this reason I would like absolute control over the Exif data in any images that the camera takes. The portrait vs landscape information in the camera parameters may be incorrect. For this reason I would like to be able to force a change in the Exif data inside onPictureTaken, before the image is saved. Is this possible, and if so how?
I am struggling because examples of playing with exif data seem to either work by changing camera parameters, or by working on an already saved file - so that's either too early or too late!
public void onPictureTaken(byte[] jpg_data, Camera camera)
{
    // can I change exif data here?
    try
    {
        FileOutputStream buf = new FileOutputStream(filename);
        buf.write(jpg_data);
        //... etc.
EDIT: Maybe I am misunderstanding something here... is there Exif data already contained within the jpg_data that gets passed to onPictureTaken? Or is it optionally added?
The standard way of writing Exif data in Android is by using the ExifInterface, which sadly only works on files that have already been written to disk.
If you wish to do the exif write without using a library, then you would have to do it after your FileOutputStream is finished writing the file.
If you don't mind using a library, Sanselan (now Apache Commons Imaging, http://commons.apache.org/proper/commons-imaging/) might have the ability to do it on a byte[] array, but the documentation is pretty limited.
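For illustration, a hedged sketch of the ExifInterface route mentioned above: write the JPEG first, then patch the orientation tag in place. The ROTATE_90 value is only an example, and both the stream and the Exif calls throw IOException, so keep this inside the existing try block:

// write the JPEG data to disk first
FileOutputStream out = new FileOutputStream(filename);
out.write(jpg_data);
out.close();

// re-open the saved file and overwrite the orientation tag
ExifInterface exif = new ExifInterface(filename);
exif.setAttribute(ExifInterface.TAG_ORIENTATION,
        String.valueOf(ExifInterface.ORIENTATION_ROTATE_90));
exif.saveAttributes();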