I'm writing an Android application that makes use of stereoscopic image data from the camera on the HTC Evo 3D. I access the data using the standard Android API together with some 3D-specific functions provided by the HTC OpenSense API. So far, I can open the camera in stereoscopic mode and grab the image data via the onPreviewFrame() callback.
However, the "raw" image data (the data[] byte array) delivered to onPreviewFrame() is not complete. The image I get is a correct side-by-side stereoscopic picture, but its horizontal size is halved. For example, when I set the camera preview size to 1280x720 px, I expect a 2560x720 px image (two images at the requested 1280x720 px resolution). What I actually get is a 1280x720 picture, half of which comes from the right camera and the other half from the left. I don't understand why the horizontal resolution is reduced.
There is a similar thread on this forum, but the answer doesn't really solve the problem. Although DisplaySetting.setStereoscopic3DFormat() returns true in my program, it doesn't seem to have any effect on the display or on the image data.
Has anyone any experience with this issue?
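For reference, a minimal sketch of how I grab the frames with the standard Camera API (the HTC OpenSense 3D setup and the preview surface are omitted); the size check in the callback is how I confirmed that the delivered buffer really is 1280x720 rather than 2560x720:

```java
import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.util.Log;

// Sketch only: request a 1280x720 NV21 preview and check the size of the
// buffer handed to onPreviewFrame(). NV21 uses 12 bits per pixel, so a true
// side-by-side 2560x720 frame would be 2,764,800 bytes, while a 1280x720
// frame is 1,382,400 bytes -- which is what actually arrives.
public class PreviewSizeCheck implements Camera.PreviewCallback {

    private Camera camera;

    public void start() {
        camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPreviewSize(1280, 720);
        params.setPreviewFormat(ImageFormat.NV21);
        camera.setParameters(params);
        camera.setPreviewCallback(this);
        // A real app must also attach a preview surface (SurfaceHolder or
        // SurfaceTexture) before startPreview(); omitted here for brevity.
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int expected = size.width * size.height * 3 / 2;
        Log.d("StereoPreview", "buffer=" + data.length + " expected=" + expected
                + " (" + size.width + "x" + size.height + ")");
    }
}
```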
The resolution halving is by design: the parallax barrier display causes 3D photos to be captured at half the horizontal resolution.
I have tried writing my own code for accessing the camera via the camera2 API on Android instead of using Google's example. On the one hand, I've wasted way too much time understanding what exactly is going on, but on the other hand, I have noticed something quite odd:
I want the camera to produce portrait images. However, despite the fact that the ImageReader is initialized with a height larger than its width, the Image I get in onCaptureCompleted has the same size, except it is rotated 90 degrees. I struggled to understand my mistake, so I went exploring Google's code, and what I found is that their images are rotated 90 degrees as well! They compensate for that by setting the JPEG_ORIENTATION key on the CaptureRequest.Builder (if you comment out that single line, the saved images come out rotated). This is unrelated to device orientation - in my case, screen rotation is disabled for the app entirely.
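To make it concrete, the compensation in question looks roughly like this (a paraphrased sketch, not the sample's exact code; the formula is the back-facing-camera case from the JPEG_ORIENTATION documentation):

```java
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CaptureRequest;

// Sketch: JPEG_ORIENTATION tells the pipeline how far the captured frame must
// be rotated clockwise so the saved JPEG displays upright. Removing the set()
// call is exactly what makes the saved files come out rotated.
final class JpegOrientation {

    // deviceRotationDegrees: clockwise device rotation from its natural
    // orientation (0/90/180/270). Front-facing cameras would negate it.
    static void apply(CaptureRequest.Builder captureBuilder,
                      CameraCharacteristics characteristics,
                      int deviceRotationDegrees) {
        int sensorOrientation =
                characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        int jpegRotation = (sensorOrientation + deviceRotationDegrees + 360) % 360;
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, jpegRotation);
    }
}
```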
The problem is that, for the purposes of the app I am making, I need uncompressed, precise data from the camera, so since JPEG both compresses images and does so lossily, I cannot use it. Instead I use the YUV_420_888 format, which I later convert to a Bitmap. But while the JPEG_ORIENTATION key can fix the orientation of JPEG images, it seems to do nothing for YUV ones. So how do I get the images to come out correctly rotated?
One obvious solution is to rotate the resulting Bitmap, but I'm unsure what angle I should rotate it by on different devices. And more importantly, what causes such strange behavior?
Update: rotating the Bitmap and scaling it to the proper size takes way too much time for the preview (the context: I need both high-res images from the camera to process and a downscaled version of those same images for the preview - let's just say I'm making something similar to QR code recognition). I have even tried using RenderScript to manipulate the images efficiently, but it is still too slow. Also, I've read here that when I set multiple output surfaces simultaneously, the same resolution will be used for all of them, which is quite bad for me.
Android stores every image, no matter whether it was taken in landscape or in portrait, in landscape orientation. It also stores metadata that tells you whether the image should be displayed in portrait or landscape.
If you don't rotate the image according to that metadata, you will end up with every image in landscape. I had that problem too (but I wanted compression, so my solution doesn't work for you).
You need to read the metadata and rotate the image accordingly.
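For the camera2/YUV case specifically there is no EXIF to read, but the same idea applies: ask the CameraCharacteristics how the sensor is mounted and rotate by that amount. A minimal sketch, assuming you already have the decoded Bitmap and the CameraCharacteristics (names are illustrative):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.hardware.camera2.CameraCharacteristics;

// Sketch: rotate a frame decoded from YUV into a Bitmap so that it is upright.
// SENSOR_ORIENTATION is the clockwise angle (usually 90 or 270) by which the
// sensor is mounted relative to the device's natural orientation.
final class FrameRotation {

    static Bitmap toUpright(Bitmap frame, CameraCharacteristics characteristics) {
        int sensorOrientation =
                characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        if (sensorOrientation == 0) {
            return frame; // already upright
        }
        Matrix matrix = new Matrix();
        matrix.postRotate(sensorOrientation);
        return Bitmap.createBitmap(frame, 0, 0,
                frame.getWidth(), frame.getHeight(), matrix, true);
    }
}
```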
I hope this helps at least a bit.
1) The Camera previews at 1920 x 1080
2) I record at 960 x 540
3) I want to be able to specify what portion of the 1920 x 1080 preview should be saved into the video and change this on-the-fly.
In effect this would give me the ability to do digital zooming as well as digital panning of the camera. What APIs or code samples could help me out here?
I've looked at the Camera2 API and samples. Looks like you can only set one viewport for the device, not per output.
You'll have to implement this zooming yourself; the camera API produces the same field of view on all of its outputs, regardless of the resolution of each output (though it does crop different aspect ratios differently, to avoid stretching). The camera2 SCALER_CROP_REGION (used for digital zoom) will zoom/pan all outputs equally.
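For completeness, this is roughly what digital zoom via SCALER_CROP_REGION looks like; note that the crop applies to every output surface attached to the request, which is exactly why it can't give you a record-only crop (a sketch, variable names are illustrative):

```java
import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CaptureRequest;

// Sketch: center-crop the active sensor array by zoomFactor and apply it to a
// capture request. Every surface in that request (preview, MediaRecorder,
// ImageReader, ...) then sees the same zoomed/panned field of view.
final class DigitalZoom {

    static void applyZoom(CaptureRequest.Builder builder,
                          CameraCharacteristics characteristics,
                          float zoomFactor) {
        Rect active = characteristics.get(
                CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
        int cropWidth = (int) (active.width() / zoomFactor);
        int cropHeight = (int) (active.height() / zoomFactor);
        int left = active.centerX() - cropWidth / 2;
        int top = active.centerY() - cropHeight / 2;
        builder.set(CaptureRequest.SCALER_CROP_REGION,
                new Rect(left, top, left + cropWidth, top + cropHeight));
    }
}
```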
The simplest way to do this is probably to send the 1080p output to the GPU and then, from the GPU, render the full field of view to the screen while rendering only the region you want to record to the media recorder's surface.
It's not terribly straightforward, since you'll need to write quite a bit of OpenGL code to accomplish this.
I recently started learning Unity 2D, coming from an iOS background. I am using the FWVGA Landscape (854x480) dimensions. I resized a 1136x640 background image (which I use for iOS) to 854x480 and made a sprite with it, but it only took up a small portion of the screen, and I had to resize the image again in Unity. What are the general rules for converting dimensions from iOS to Android in Unity so that assets fit?
1136x640 is roughly a 16:9 ratio, as is 1920x1080 (1080p). A quick way to check: 16/9 ≈ 1.78, 1920/1080 ≈ 1.78, 1136/640 ≈ 1.78, 854/480 ≈ 1.78 - all effectively the same 16:9 ratio.
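The same check in code, just to make the arithmetic concrete (plain Java, nothing Unity-specific):

```java
// Prints the width/height ratio of each resolution; they all come out near
// 1.78, i.e. approximately 16:9 (16/9 = 1.777...).
public class RatioCheck {
    public static void main(String[] args) {
        int[][] resolutions = { {1136, 640}, {1920, 1080}, {854, 480} };
        for (int[] r : resolutions) {
            System.out.printf("%dx%d -> %.4f%n", r[0], r[1], (double) r[0] / r[1]);
        }
    }
}
```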
So, for Android phones that are also 16:9, you don't "need" to resize your assets unless, say, you want to increase the resolution for quality's sake on more powerful Android devices.
If you want to do it quickly and easily, draw your game with a camera at 1136x640 and then scale the camera view to match the resolution of the device you are running on (be it 1920x1080, 854x480, etc. - all the same ratio).
You get problems with devices that are not 16:9, like tablets or phones that are, say, 16:10, 5:3, or 3:2, or the iPads and Nexus 9 tablets at 4:3. This is a long subject, though, and there are lots of guides around to help. Try searching for "Unity 2d multiple screen ratios".
I think there are just too many variables to make this work "simply".
854x480 doesn't sound like a 'standard' resolution to me, and Unity is a 3D game engine, which means its camera sits on the Z axis, "looking down upon" a 2D game.
So you have the camera's Z position (which Unity apparently tries to 'adjust' to match the phone), the phone's physical size, the size of the image you are using, and so on. There are just a lot of variables. I would recommend https://stackoverflow.com/a/21375938 to see if that helps.
I am diving into Camera based app development and am looking into capturing a list of resolutions a phone's camera can support.
Is getSupportedPreviewSizes() or getSupportedPictureSizes() the best way to go? If so, what is the difference between them?
You should most probably use both. The preview size is the size at which the camera displays what it is seeing on your screen, so the data coming through the camera's sensor might be shown at, say, 1920x1080. The picture size, on the other hand, is the size at which the image will actually be captured. So you may be showing the scene to the user at 1920x1080 while they take the shot, but the image that gets captured could be 4160x3120 or whatever the camera supports.
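A quick sketch of querying both with the old Camera API (which is where those method names come from):

```java
import android.hardware.Camera;
import android.util.Log;

// Sketch: log every preview size and picture size the default camera reports.
// Preview sizes bound what you can show on screen; picture sizes bound what
// takePicture() can capture.
public class SupportedSizes {

    public static void dump() {
        Camera camera = Camera.open();
        try {
            Camera.Parameters params = camera.getParameters();
            for (Camera.Size s : params.getSupportedPreviewSizes()) {
                Log.d("Sizes", "preview: " + s.width + "x" + s.height);
            }
            for (Camera.Size s : params.getSupportedPictureSizes()) {
                Log.d("Sizes", "picture: " + s.width + "x" + s.height);
            }
        } finally {
            camera.release();
        }
    }
}
```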
My application takes pictures of documents. I specify a picture size of 1600x1200 on the camera, request a JPEG image, and set a quality of 30.
The iPhone developer working on the iPhone version of the application does the same thing (1600 x 1200, quality = 30).
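For reference, the Android side sets the parameters roughly like this (a sketch with the old Camera API; names are illustrative):

```java
import android.graphics.ImageFormat;
import android.hardware.Camera;

// Sketch of the capture settings described above: 1600x1200 JPEG at quality 30.
public class DocumentCaptureSetup {

    public static Camera configure() {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPictureSize(1600, 1200);
        params.setPictureFormat(ImageFormat.JPEG);
        params.setJpegQuality(30); // same nominal quality used on the iPhone side
        camera.setParameters(params);
        return camera;
    }
}
```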
Image sizes:
On an iPhone 4S: ~100 KB
On a Samsung Galaxy Nexus: ~240 KB
On a Samsung Galaxy S3: ~600 KB
I verified that all the images are in JPEG format and all are 1600 x 1200. When I visually inspect the image quality they look roughly the same.
Why the difference in file sizes? What surprised me the most is the difference between the Nexus and S3, since they're running the exact same Android code.
The same image on two phones can end up with different file sizes if the JPEG encoders use different default quantization tables. Quantization is the only lossy step in JPEG encoding, and it has the biggest impact on the final file size - http://en.wikipedia.org/wiki/JPEG#Quantization
By the way, did you try visually inspecting the images at native resolution? A bigger screen might help reveal subtle differences in image quality, if any exist.
It's all a matter of how the phone itself is built to handle the raw image taken from the camera. It has to convert the image to JPG as best it can from what it actually saw. When I process pictures taken with my DSLR in RAW, I can output JPGs of varying file sizes (same picture size), just by messing with the exposure, color curves, etc. I'm doing that manually, but the phones have all of that logic built in to try and produce a picture that is generally pleasing to most people. Point-and-shoot cameras do the same thing.
Based on this site, it looks like the iPhone does a bit more compression and color guessing than the S3. If you look closely, you can see that the colors are smoother in the picture taken by the S3, which takes more memory to store. I have a feeling the Nexus uses a similar algorithm to the S3; it just doesn't have as good a camera.