I'm using a MediaCodec to encode some video frames to H.264. Everything seems OK on most devices, but on some devices my output gets a lot of "diagonal" noise (it looks like an offset/alignment issue).
The output resolution of the MediaCodec is calculated using the functions getWidthAlignment and getHeightAlignment. The documentation states that the values returned by those functions are powers of two that the codec's output width and height need to be multiples of. Both functions return 2 on the devices that do not work properly.
So, the device I am testing on (which does not work properly) has a native resolution of 1280x800. If I request that the output of the MediaCodec be 1152x720 (keeping the same aspect ratio), everything works. If I request an output of 1224x720 (matching the aspect ratio of the screen with the nav bar showing, so not the same AR as the native resolution), I get the "diagonal lines." Both 1224 and 720 are multiples of the width/height alignments that the MediaCodec's VideoCapabilities returns.
So, my question is: what is the best way to get a supported output resolution for a MediaCodec? Using getWidth/HeightAlignment doesn't seem to behave properly on a lot of lower-end devices (perhaps those devices do not implement the functions correctly?).
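One plausible explanation, offered here as an assumption rather than something the codec reports: many H.264 encoders effectively require 16-pixel alignment (the codec works in 16x16 macroblocks) even when getWidthAlignment returns 2, and notably 1152 is a multiple of 16 while 1224 is not. A sketch of rounding a requested size down to such a boundary:

```java
public class AlignSize {
    // Round a dimension down to the nearest multiple of `alignment`.
    static int alignDown(int value, int alignment) {
        return (value / alignment) * alignment;
    }

    public static void main(String[] args) {
        // 1152 is already 16-aligned; 1224 is not and gets rounded to 1216.
        System.out.println(alignDown(1152, 16)); // 1152
        System.out.println(alignDown(1224, 16)); // 1216
    }
}
```

If the diagonal noise disappears at the rounded size, the device's reported alignment is simply not trustworthy.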
Related
The goal is to crop the preview on the surface for frames that come from a Camera2 API capture session, but not to crop the video that will be created itself.
For example, I have a streaming resolution of 1920x1080 (16:9) and a screen size of (for instance) 2000x3000 (2:3). I'd like my video to stay at the original streaming resolution, 1920x1080, but I want my preview to fill all the available space without resizing the View. So the preview frame should be scaled up to 5333x3000 (bumping the size up to cover the screen while keeping the stream's aspect ratio), and then "cut" down to 2000x3000 (removing (5333 - 2000) / 2 pixels from each side).
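The arithmetic above is a standard center-crop: scale the frame so that both screen dimensions are covered, then trim the overflow evenly. A minimal sketch of just that computation (nothing Camera2-specific; the numbers reproduce the example):

```java
public class CenterCrop {
    // Scale (srcW x srcH) up so it covers (dstW x dstH), keeping aspect ratio.
    // Returns {scaledW, scaledH, cropPerSide}, where cropPerSide is the number
    // of pixels trimmed from each overflowing edge.
    static int[] coverAndCrop(int srcW, int srcH, int dstW, int dstH) {
        double scale = Math.max((double) dstW / srcW, (double) dstH / srcH);
        int scaledW = (int) Math.round(srcW * scale);
        int scaledH = (int) Math.round(srcH * scale);
        int cropX = (scaledW - dstW) / 2; // horizontal overflow per side
        int cropY = (scaledH - dstH) / 2; // vertical overflow per side
        return new int[]{scaledW, scaledH, Math.max(cropX, cropY)};
    }

    public static void main(String[] args) {
        // 1920x1080 covering a 2000x3000 screen: scaled to 5333x3000,
        // trimming (5333 - 2000) / 2 = 1666 px from each side.
        int[] r = coverAndCrop(1920, 1080, 2000, 3000);
        System.out.println(r[0] + "x" + r[1] + ", crop " + r[2] + " per side");
    }
}
```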
Is it possible?
P.S.: the bad thing is that the Google sample for the Camera2 API resizes the view itself, and those "blank areas" are undesirable for me. I haven't found anything that even closely matches my problem.
P.P.S.: AFAIU, this SO solution crops the frame that comes from the camera itself, but I need my video to be in the original resolution.
If you're using a TextureView, you can probably adjust its transform matrix to scale up the preview (and cut off the edges in the process). Read the existing matrix, fix up the aspect ratio, scale it up, and then set the new matrix with setTransform.
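The scale factors such a transform needs can be computed with plain arithmetic. A sketch assuming center-crop ("fill and trim") behavior; on Android the two factors below would go into a Matrix via setScale (pivoting at the view center) and then into TextureView.setTransform, but the computation itself runs anywhere:

```java
public class PreviewScale {
    // Scale factors that make a (bufW x bufH) preview cover a (viewW x viewH)
    // TextureView. TextureView initially stretches the buffer to fill the view,
    // so the correcting transform is the cover scale divided by that implicit
    // per-axis stretch.
    static double[] coverScale(int bufW, int bufH, int viewW, int viewH) {
        double fitX = (double) viewW / bufW;   // implicit horizontal stretch
        double fitY = (double) viewH / bufH;   // implicit vertical stretch
        double cover = Math.max(fitX, fitY);   // scale needed to cover the view
        return new double[]{cover / fitX, cover / fitY};
    }

    public static void main(String[] args) {
        // 1920x1080 buffer in a 2000x3000 view: the transform scales X by
        // ~2.667 to undo the horizontal squeeze and leaves Y at 1.0.
        double[] s = coverScale(1920, 1080, 2000, 3000);
        System.out.println(s[0] + " " + s[1]);
    }
}
```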
But please note that saving a different field of view than what you're showing to the user is probably going to get you negative reactions - please use the preview to frame what they want to record, and if what you're saving has extra stuff, the recorded video won't match expectations. Of course, maybe this isn't a concern for your use case.
I am following the ExtractMpegFramesTest post to extract PNG frames from video.
This works fine with videos that are recorded in landscape mode but doesn't work with videos that are recorded in portrait mode.
Does anybody know how to generate PNG frames from a portrait video using the solution provided in the above link?
I have tested this with 720p and 1080p videos.
A couple of things I observed:
MediaExtractor reports a width and height of 1280 and 720 for a 720p video regardless of orientation. This should be 1280 x 720 in landscape and 720 x 1280 in portrait; the same happens with 1080p videos.
The other thing is that when I pass false for the invert parameter of the drawFrame method, the PNG frame is fine but upside down.
Edit:
With ExtractMpegFramesTest I'm getting these results:
Landscape video with the invert parameter true gives a perfect image:
http://postimg.org/image/qdliypuj5/
Portrait video with the invert parameter true gives a distorted image:
http://postimg.org/image/vfb7dwvdx/
Portrait video with the invert parameter false gives a perfect upside-down image. (According to #Peter Tran's answer, the output can be fixed by rotating the Bitmap.)
http://postimg.org/image/p7km4iimf/
In ExtractMpegFramesTest, the comment for saveFrame states:
// Making this even more interesting is the upside-down nature of GL, which means
// our output will look upside-down relative to what appears on screen if the
// typical GL conventions are used. (For ExtractMpegFrameTest, we avoid the issue
// by inverting the frame when we render it.)
This is why there is the boolean parameter for drawFrame that you mentioned.
So it sounds like what you want to do is invert the bitmap before saving to PNG. This can be done by applying a Matrix (with preScale) to the Bitmap. You would need to modify the code in saveFrame after the call to bmp.copyPixelsFromBuffer.
Please refer to this answer for mirroring a bitmap; since your image is upside down rather than mirrored, use preScale(1, -1) so the flip happens on the vertical axis.
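For intuition, a vertical flip just reverses the order of the pixel rows. A plain-Java sketch of what the matrix-based flip does to the pixel buffer (on Android the Matrix/createBitmap route above does this for you; this standalone version only illustrates the row reversal):

```java
public class FlipVertical {
    // Reverse the row order of a width*height pixel buffer.
    static int[] flip(int[] pixels, int width, int height) {
        int[] out = new int[pixels.length];
        for (int row = 0; row < height; row++) {
            // Row `row` of the source becomes row `height - 1 - row` of the output.
            System.arraycopy(pixels, row * width,
                             out, (height - 1 - row) * width, width);
        }
        return out;
    }

    public static void main(String[] args) {
        // A 2x3 image with rows [1,1], [2,2], [3,3] flips to [3,3], [2,2], [1,1].
        int[] flipped = flip(new int[]{1, 1, 2, 2, 3, 3}, 2, 3);
        System.out.println(java.util.Arrays.toString(flipped));
    }
}
```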
I am not familiar with Android programming.
I wish to know whether there is a way to set the video resolution when it is filmed inside an Android application.
Or, failing that, is there a way to reduce the resolution later?
We need to reduce the file size of the video that we capture.
Thanks
Shani
There are three things you can control to manage the resulting file size when recording video. All three are available as methods of the MediaRecorder class:
Frame size (width x height). Use the setVideoSize(int width, int height) method. The smaller the frame size, the smaller the video file.
Encoding bit rate - this controls the compression quality of each frame. Use the setVideoEncodingBitRate(int bitRate) method. A lower bit rate results in higher compression, which in turn leads to lower quality and a smaller video file. This is only available from API level 8 and above.
Video "speed" - how many frames per second are captured. Use the setVideoFrameRate(int rate) method. The lower the rate, the fewer frames you'll be capturing, resulting in a smaller video file. This is only available from API level 11 and above. Remember, though, that for smooth video you need at least 24 frames per second.
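As a rough sanity check on how these settings interact, the size of the video track alone is approximately bit rate times duration divided by eight (audio and container overhead come on top); a sketch:

```java
public class VideoSizeEstimate {
    // Approximate size in bytes of the video track alone
    // (ignores the audio track and container overhead).
    static long estimateBytes(int bitRateBps, int durationSeconds) {
        return (long) bitRateBps * durationSeconds / 8;
    }

    public static void main(String[] args) {
        // 2 Mbps for 60 seconds -> about 15 MB of video data.
        System.out.println(estimateBytes(2_000_000, 60));
    }
}
```

Note that frame size and frame rate do not appear in this formula directly: with a fixed encoding bit rate they trade off against per-frame quality instead of file size.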
Have a look at the documentation for MediaRecorder for more information.
I'm in the process of attempting to use the camera and some motion tracking AS3 classes to detect movement in front of a ViewSonic Smart Display, for the sake of a demo. I've gotten the app and detection to function on other Android devices, but the 'Smart Display' is presenting me with some odd issues.
Taking a long shot that someone might've encountered this, but this is the very simple camera set up code I reduced the issue down to:
var camera:Camera = Camera.getCamera();
camera.setMode(stage.stageWidth, stage.stageHeight, 30, true);
var video:Video = new Video(stage.stageWidth, stage.stageHeight);
video.attachCamera(camera);
My problem lies at the point of "video.attachCamera"
For some reason, this device takes this call as "display the video in a tiny window in the upper right-hand corner" and ignores all other code, dominating the screen with blank black and a tiny (maybe 40x20 px) rectangle of video stream.
Image of it occurring...
Any help is much appreciated, thanks
The problem might be the values that you are passing to the camera with the setMode() method. You are trying to set the camera to capture at the width/height of the stage.
The camera likely does not have such a capture resolution, and as the documentation for setMode() states, it will try to find something that is close to what you have specified:
Sets the camera capture mode to the native mode that best meets the specified requirements. If the camera does not have a native mode that matches all the parameters you pass, the runtime selects a capture mode that most closely synthesizes the requested mode. This manipulation may involve cropping the image and dropping frames.
Now, it is granted that you would expect Flash to have picked a resolution that is bigger than what is shown in your screenshot. But given the myriad of camera devices/drivers, it's possible this is not working too well in your case.
You might start off by experimenting with more typical resolutions for capturing the video: 480x320, 640x480, 800x600, or at most 1024x768. Most applications on the web probably use the first or second of those capture resolutions.
So change:
camera.setMode(stage.stageWidth, stage.stageHeight, 30, true);
To:
camera.setMode(640, 480, 30, true);
Note you can display the video in any size you want, but the capture resolutions you can use depend on your camera hardware/drivers/OS/etc. Typical resolutions have a 4:3 aspect ratio and are relatively small (not the full dimensions of the screen/stage). The capture resolution you use affects the quality of video and the amount of network bandwidth you need to stream the video. Generally (for streaming), you don't want to use a big capture resolution, but maybe it's not so important in your motion capture use case.
I am attempting to allow users to record video that is a different size than the actual on-screen preview they can see while recording. This seems to be possible based on the documentation for the getSupportedVideoSizes function, which states:
If the returned list is not null, the returned list will contain at
least one Size and one of the sizes in the returned list must be
passed to MediaRecorder.setVideoSize() for camcorder application if
camera is used as the video source. In this case, the size of the
preview can be different from the resolution of the recorded video
during video recording.
This suggests that some phones will return null from this function (in my experience the Galaxy SIII does), but for those that do not, it is possible to provide a preview with a different resolution than the actual video. Is this understanding correct? Do some phones allow this behavior while others do not?
Attempting a Solution:
In the official description of the setPreviewDisplay function, which is used in the lengthy process of setting up for video recording, it is mentioned that:
If this method is called with null surface or not called at all, media
recorder will not change the preview surface of the camera.
This seems to be what I want, but unfortunately if I do this, the whole video recording process is completely messed up. I am assuming that this function cannot be passed null, and cannot go uncalled, while recording video. Perhaps in other contexts that is okay; unfortunately, it does not seem to help me here.
My only next step is to look into TextureViews: use a preview texture instead of a typical SurfaceView implementation, use OpenGL to stretch the texture to my desired size (cropping any excess off the screen), and then construct a Surface for setPreviewDisplay with the Surface(SurfaceTexture surfaceTexture) constructor. I would like to avoid using a TextureView due to incompatibility below ICS, and also because it adds significant complexity.
This seems like a delicate process, but I am hoping someone can offer some advice in this area.
Thank you.
a. Assume the user sets x,y as the video size.
b. Now, with the getSupportedVideoSizes function, get the entire list and see whether x,y falls in one of the sizes; if it does, call MediaRecorder.setVideoSize(). If x,y does not fall in the getSupportedVideoSizes list, then set the default profile for the video recording.
This is about the video size.
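The membership check in step b is plain list traversal. A minimal plain-Java sketch (on Android the list would hold Camera.Size objects from getSupportedVideoSizes(); int pairs are used here so the sketch runs on its own):

```java
import java.util.Arrays;
import java.util.List;

public class SizeCheck {
    // Returns true if (w, h) appears in the supported-size list.
    static boolean isSupported(List<int[]> supported, int w, int h) {
        for (int[] s : supported) {
            if (s[0] == w && s[1] == h) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Hypothetical supported list for illustration only.
        List<int[]> sizes = Arrays.asList(new int[]{1280, 720}, new int[]{720, 480});
        System.out.println(isSupported(sizes, 1280, 720)); // true
        System.out.println(isSupported(sizes, 1224, 720)); // false
    }
}
```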
Now, coming to the preview size, there are not many workaround options.
Take a RelativeLayout which holds the SurfaceView:
<android.view.SurfaceView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/preview"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    />
preview is the id of the SurfaceView.
Here I have given a sample of resizing it to half of the width and height:
resetCamera(); //reset the camera
ViewGroup.LayoutParams params = preview.getLayoutParams();
RelativeLayout myRelLayout = (RelativeLayout) findViewById(R.id.myRelLayout);
params.width = (int) (myRelLayout.getWidth()/2);
params.height = (int)(myRelLayout.getHeight()/2);
preview.setLayoutParams(params);
initCamera(); //initiate the camera(open camera, set parameter, setPreviewDisplay,startPreview)
Please look at the resolution of the preview and then scale the height or width down accordingly, based on the video size.
Hope it helps.
As you mention, this is only possible when getSupportedVideoSizes() returns a non-null list.
But if you do see a non-null list, then this simple approach should work:
Set the desired preview resolution with setPreviewSize; the size you select has to be one of the sizes given from getSupportedPreviewSizes.
Set the preview display to your SurfaceView or SurfaceTexture with setPreviewDisplay or setPreviewTexture, respectively.
Start preview.
Create the media recorder, and set its video size either directly with setVideoSize using one of the sizes from getSupportedVideoSizes, or use one of the predefined Camcorder profiles to configure all the media recorder settings for a given quality/size.
Pass the camera object to MediaRecorder's setCamera call, configure the rest of the media recorder, and start recording.
On devices with a non-null getSupportedVideoSizes list, this should result in preview staying at the resolution set by your setPreviewSize call, with recording operating at the set video size/camcorder profile resolution. On devices with no supported video sizes, the preview size will be reset by the MediaRecorder to match the recording size. You should be able to test this by setting a very low preview resolution and a high recording resolution (say, 160x120 for preview, 720p for recording). It should be obvious if the MediaRecorder switches the preview resolution to 720p when recording starts, as the preview quality will jump substantially.
Note that the preview size is not directly linked to the dimensions of the display SurfaceView; the output of the camera preview will be scaled to fit into the SurfaceView. So if your SurfaceView's dimensions are, say, 100x100 pixels due to your layout and device, whatever preview resolution you use will be scaled to 100x100 for display. You therefore still need to keep the SurfaceView's aspect ratio correct so that the preview is not distorted.
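A sketch of sizing the SurfaceView so its aspect ratio matches the preview (pure arithmetic; on Android you would apply the result through the view's LayoutParams, as in the earlier answer's resizing sample):

```java
public class AspectFit {
    // Largest (w, h) that fits inside (maxW x maxH) while keeping the
    // aspect ratio of (previewW x previewH).
    static int[] fit(int previewW, int previewH, int maxW, int maxH) {
        if ((long) maxW * previewH <= (long) maxH * previewW) {
            // Width-limited: use the full width, derive the height.
            return new int[]{maxW, maxW * previewH / previewW};
        }
        // Height-limited: use the full height, derive the width.
        return new int[]{maxH * previewW / previewH, maxH};
    }

    public static void main(String[] args) {
        // A 1280x720 preview inside a 1080x1920 portrait screen
        // becomes a 1080x607 view (letterboxed vertically).
        int[] v = fit(1280, 720, 1080, 1920);
        System.out.println(v[0] + "x" + v[1]);
    }
}
```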
And for power efficiency, you should not use a preview resolution much higher than the actual number of pixels in your SurfaceView, since the additional resolution will be lost when fitting the preview into the SurfaceView. This is, of course, only possible when getSupportedVideoSizes() returns a non-null value.
First, I will try to answer your specific questions.
it is possible to provide a preview with a different resolution than the actual video. Is this understanding correct?
Yes, the preview size is usually different from the recording size. The preview size is typically tied to your display size. So if a phone has a CIF display (352 x 288) but is capable of recording D1 (720 x 480), then the preview size and recording size will be different. I feel that the other experts have answered sufficiently on this point.
Do some phones allow the behavior and others not?
Most of the latest phones support this feature except maybe a few low-end ones.
Along with setPreviewDisplay, we have to consider this point also:
The one exception is that if the preview surface is not set (or set to null) before startPreview() is called, then this method may be called once with a non-null parameter to set the preview surface. (This allows camera setup and surface creation to happen in parallel, saving time.) The preview surface may not otherwise change while preview is running.
Could you please share the issue you faced when setPreviewDisplay is invoked with a null surface?