On my Android device I can take videos in one of two ways:
1) I can write a custom class / set of methods that lets me use the camera directly and gives me direct control over its features,
or
2) I can use an intent to open the video camera and take a recording.
This question is with regard to the latter (using an intent). This is the intent code I am using to take a video:
private void takeVideo(Activity activity, Context context){
    // Actual intent used to capture the video
    Intent takeVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    // 30 second time limit (can be anything; just using 30 here)
    takeVideoIntent.putExtra(MediaStore.EXTRA_DURATION_LIMIT, 30);
    // Utility method I have for generating a Uri to use for the video
    android.net.Uri myUri = generateVideoUri(context);
    // Uri to write the video to
    takeVideoIntent.putExtra(MediaStore.EXTRA_OUTPUT, myUri);
    // Cap it at 20 MB
    takeVideoIntent.putExtra(MediaStore.EXTRA_SIZE_LIMIT, 20000000L);
    // HERE IS WHERE MY QUESTION LIES: 0 is low, 1 is high
    takeVideoIntent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 0);
    activity.startActivityForResult(takeVideoIntent, 100);
}
The above code works just fine. My question is about the quality.
If I send in 0 as the quality, the result is absolutely horrible (i.e., 10 seconds of video is about 0.3 MB, which works out to roughly 240 kbit/s and appears to be the lowest quality the phone can handle). If I send in 1, it is ungodly large and unwieldy (i.e., 10 seconds of video is about 50 MB, roughly 40 Mbit/s).
Bearing in mind that some phone makers will ignore the intent extras sent, how do I go about getting medium-quality video to use for uploads, messaging, etc.?
Do I need to run some sort of compression on the large files?
Can this only be done via a custom video/camera class?
Is there some other way I am not thinking of?
What would you recommend I do to get a "medium quality" video, as opposed to the two extremes of way too large or way too small?
Thanks all!
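(Not part of the original question, but for contrast, a rough sketch of what option 1 above, the custom class with direct control, might look like using MediaRecorder, where the bitrate can be set explicitly to land between the two extremes. The camera/preview plumbing and error handling are omitted, and outputPath plus every numeric value below are assumptions, not recommendations:)
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoSize(1280, 720);            // "medium" resolution, assumed
recorder.setVideoEncodingBitRate(2000000);   // ~2 Mbit/s, between the ~240 kbit/s and ~40 Mbit/s extremes
recorder.setMaxDuration(30 * 1000);          // 30 seconds, matching the intent example
recorder.setMaxFileSize(20000000L);          // 20 MB, matching the intent example
recorder.setOutputFile(outputPath);          // outputPath is an assumed String path
recorder.prepare();
recorder.start();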
Related
I want to open the video camera with an intent and record in HD mode only (the phone has Full HD, HD, and TV (very low) modes). I just open the video camera as shown below, but I cannot set any parameters on it.
Intent videoCapture = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
startActivityForResult(videoCapture, 1);
MediaStore.class
/**
* The name of the Intent-extra used to control the quality of a recorded video. This is an
* integer property. Currently value 0 means low quality, suitable for MMS messages, and
* value 1 means high quality. In the future other quality levels may be added.
*/
public final static String EXTRA_VIDEO_QUALITY = "android.intent.extra.videoQuality";
I think there is no solution for now.
You can use this to set parameters for your video capture:
Intent videoCapture = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
videoCapture.putExtra(MediaStore.EXTRA_DURATION_LIMIT, 60); // Duration in Seconds
videoCapture.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1); // Quality High (0 : Quality Low)
videoCapture.putExtra(MediaStore.Video.Thumbnails.HEIGHT, 320);
videoCapture.putExtra(MediaStore.Video.Thumbnails.WIDTH, 240);
You can find more information on the MediaStore page.
Hope it helps!
I am writing an app to grab every frame from a video, so that I can do some CV processing on it.
According to the Android API docs, I should set the MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. And it really does work on some devices and some videos.
However, on my Nexus 5 (API 24-25), I get almost all green pixels when the image becomes available.
I have checked the byte[] in the image's YUV planes, and the bytes I read from the video must be wrong somehow: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried javacv, but its grabber is too slow.)
I fixed my problem!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, I get an image width of 1088 when I decode a 1920*1080 frame; I should use image.getCropRect() to get the right size of the image, which will be 1920x1080.
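A rough sketch of applying that fix when copying the Y plane out of the Image (assuming image is the android.media.Image delivered to onImageAvailable(); the chroma planes need the same treatment):
Rect crop = image.getCropRect();                 // valid region, e.g. 1920x1080 inside a padded buffer
Image.Plane yPlane = image.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();
byte[] yData = new byte[crop.width() * crop.height()];
for (int row = 0; row < crop.height(); row++) {
    // Copy only the valid pixels of each row, skipping the padding beyond crop.width()
    yBuffer.position((crop.top + row) * rowStride + crop.left);
    yBuffer.get(yData, row * crop.width(), crop.width());
}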
I want to create a GIF from an mp4 video, so I need to extract the frames from the video first. Here is the code I use to extract frames:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(mFilePath);
Bitmap bitmap = retriever.getFrameAtTime(i,
MediaMetadataRetriever.OPTION_CLOSEST);
Note that variable i is time in microseconds. Since I want to get 24 frames/second, I call retriever.getFrameAtTime() with i = 42000, 84000, .... (microseconds).
The problem is: when I assemble the extracted frames into a video, I see only 4-5 distinct frames. In other words, I don't get a smooth video. It seems that MediaMetadataRetriever often returns the same frame for different requested times. Please help me!
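For reference, a sketch of the extraction loop described above (mFilePath is the question's own variable; the GIF assembly step is omitted):
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(mFilePath);
long durationMs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));
long frameIntervalUs = 1000000L / 24;    // ~41,667 microseconds per frame at 24 fps
List<Bitmap> frames = new ArrayList<>();
for (long t = 0; t < durationMs * 1000; t += frameIntervalUs) {
    // OPTION_CLOSEST asks for the frame nearest to t, not just the nearest sync frame
    frames.add(retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST));
}
retriever.release();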
This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16 MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
    // Decoding the full-resolution JPEG allocates width * height * 4 bytes on the heap
    Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image at 6 megapixels takes up 24 MB of heap (6,000,000 pixels x 4 bytes). (Cue memory overflow.)
Now I am very much aware of the ability to decode with parameters, to effectively reduce the size of the image. I even have a method which will scale it down to a desired size.
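For readers following along, the "decode with parameters" approach mentioned above looks roughly like this (the sample factor of 4 is just an example):
// inSampleSize = 4 decodes at 1/4 of the width and height, i.e. roughly 1/16 of the memory
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 4;
Bitmap scaled = BitmapFactory.decodeByteArray(_data, 0, _data.length, opts);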
But what about the scenario where I want to use the camera as a full-quality camera?
I have no idea how to get this image into the database. As soon as I decode, it errors.
Note: I need(?) to convert it to Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on Android
I can only think of a couple of things that might help, none of which is optimal:
Do your rotations server side
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk using the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this would be time consuming, since I/O operations against SD cards are not exactly fast.
When you decode, drop the alpha channel. This may not solve your issue, though, and if you are using a matrix to rotate the image then you would need a target/source bitmap anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode the raw camera bytes into a bitmap with no alpha channel (2 bytes per pixel)
bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
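If the matrix route mentioned above is what you end up using, the rotation itself might look roughly like this (the 90-degree angle is just an example):
// Rotate the RGB_565 bitmap decoded above; note this still needs heap for a second bitmap,
// so it only helps once the decode itself fits in memory.
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotated = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), matrix, true);
if (rotated != bmp) {
    bmp.recycle();   // free the source as soon as possible
}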
There may be a better way to do this, but since your device is so limited in heap, etc., I can't think of one.
It would be nice if there were an optional file-based matrix method (which, in general, is what I am suggesting as option 2) or some kind of "paging" system for Android, but that's the best I can come up with.
First save it to the filesystem, do your operations with the file from the filesystem...
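A minimal sketch of that "save first" idea, assuming the byte[] from onPictureTaken() is the camera's JPEG output (the output path is just an example):
File outFile = new File(Environment.getExternalStorageDirectory(), "my_image.jpg");
FileOutputStream fos = new FileOutputStream(outFile);
try {
    fos.write(_data);   // write the JPEG bytes as-is: no Bitmap decode, no 24 MB heap spike
} finally {
    fos.close();
}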
I've written a simple app that opens the camera and supplies a path for saving any captured images.
The code basically looks like this:
File file = new File(Environment.getExternalStorageDirectory() + "/myimages/", "my_image.jpg");
Uri outputUri = Uri.fromFile(file);
Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputUri);
This works perfectly on my Droid. Images are consistently saved at 2592x1936. However, when testing on the Milestone, the images are saved at much smaller sizes such as 320x240 and 1280 x 1900. Using adb logcat, I can see that the image size is set as soon as the photo is taken.
It seems like there is a default setting on the Milestone causing this behavior.
Any help would be greatly appreciated.
Thanks,
~Jeremy
Every phone has its own manufacturer-written camera app. Every camera app decides how much to resize and/or compress any images taken. If you test on more phones, you will probably find that you get a variety of different image sizes and qualities. As long as you are using intents, it is up to whatever Activity the user chooses to handle that intent (which could be different from the default camera app anyway) to decide what to do with it.
If you want to enforce a particular image size or quality, you need to write your own camera activity. Keep in mind that different phones are going to have different hardware that takes images at a huge variety of different resolutions anyway. You should be able to get better than 320x240, but you will not be able to get a consistent size across all phones because the hardware is different.
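If you do go the "own camera activity" route, a rough sketch of picking an explicit picture size with the android.hardware.Camera API of that era (the selection policy below is just an example):
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
List<Camera.Size> sizes = params.getSupportedPictureSizes();
Camera.Size chosen = sizes.get(0);
for (Camera.Size size : sizes) {
    // Example policy: pick the largest supported size; substitute whatever your app needs
    if (size.width * size.height > chosen.width * chosen.height) {
        chosen = size;
    }
}
params.setPictureSize(chosen.width, chosen.height);
camera.setParameters(params);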