My team is developing an app for HTC phones that uses the stereo camera. In order to do our processing, we require the still images taken by the 3D camera to be in MPO format. By default it is returning JPS images.
How can I make the camera return MPO images? Is there someplace that this is documented?
I have spent a while on HTC's site but was unable to find source code for their API or camera app that might help (since their camera app can produce MPO files).
I don't know of an API for this, but it is pretty straightforward to do yourself. The JPS format is just a single image, with the left half coming from one camera and the right half from the other. So the first step is to convert it into two separate images: create new bitmaps from it using rectangles for either side:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap,%20int,%20int,%20int,%20int,%20android.graphics.Matrix,%20boolean)
The MPO format is just two JPG images in one file, one after the other. So next, write two JPG images to the same file output stream using the compress method:
http://developer.android.com/reference/android/graphics/Bitmap.html#compress(android.graphics.Bitmap.CompressFormat,%20int,%20java.io.OutputStream)
You can find a lot of sample code online for Android for cropping images and saving them to JPG, which is pretty much all you need.
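A minimal sketch of those two steps, assuming the JPS file is a plain side-by-side JPEG that BitmapFactory can decode (the JpsSplitter class name, file paths, and quality value are illustrative). This only produces the bare concatenation described above; as the next answer points out, a spec-compliant MPO also carries extra metadata:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

import java.io.FileOutputStream;
import java.io.IOException;

public class JpsSplitter {

    /**
     * Splits a decoded side-by-side JPS frame into its two halves and writes
     * both halves as consecutive JPEGs into a single output file. Note: a
     * spec-compliant MPO additionally carries MP Format metadata (see the
     * CIPA DC-007 standard), which this sketch does not produce.
     */
    public static void writeSideBySideJpegs(String jpsPath, String outPath) throws IOException {
        Bitmap full = BitmapFactory.decodeFile(jpsPath);
        int halfWidth = full.getWidth() / 2;
        int height = full.getHeight();

        // Crop the two views with createBitmap(source, x, y, width, height).
        // Which half corresponds to which eye depends on how the JPS was written.
        Bitmap leftHalf = Bitmap.createBitmap(full, 0, 0, halfWidth, height);
        Bitmap rightHalf = Bitmap.createBitmap(full, halfWidth, 0, halfWidth, height);

        FileOutputStream out = new FileOutputStream(outPath);
        try {
            // Two JPEG streams written back to back into the same file.
            leftHalf.compress(Bitmap.CompressFormat.JPEG, 90, out);
            rightHalf.compress(Bitmap.CompressFormat.JPEG, 90, out);
        } finally {
            out.close();
        }
    }
}
```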
The MPO format is not just "two JPG images in one file, one after the other."
http://www.cipa.jp/english/hyoujunka/kikaku/pdf/DC-007_E.pdf
English translation of the 2009-02-04 standard from the Camera & Imaging Products Association (CIPA) Standard Development Working Group, Multi-Picture Format Sub-Working Group.
https://www.htcdev.com/devcenter/opensense-sdk/stereoscopic-3d/s3d-sample-code/
Shows some sample code for working with the HTC EVO 3D camera.
Related
I am making an app, and part of it will view PDFs. The PDFs are on a web server and downloaded to the app. I am trying to understand what happens when a viewer loads a PDF. If it is converted to an image, then I would like to try converting the PDF to a PNG on the server and just using that copy to view in the app, as PNG is a lot less hassle to deal with.
The only reason I would not convert to PNG is if an Android PDF viewer maintains the vector nature of the file, as zooming is critical and I want a nice crisp image.
Ultimately, someone is always going to convert the PDF to an image, or rather to pixels, because that is what you need to display on the tablet screen (or any screen that I know of). The question is who does the conversion, when it is done, and how well and quickly it is done.
For a tablet viewer, the challenge is to do the conversion quickly enough not to bother the user with load times, and that often comes at the price of quality. There are virtually no PDF viewing applications on either iOS or Android at this point that do a really good job of showing all the intricacies of the PDF file format.
That being said, the quality is usually good enough, and if the viewer is well implemented, zooming a PDF file should be a no-brainer. For the application, zooming simply means that the viewport (the part of the PDF page that is visible) is different; it doesn't really change the algorithm used to convert the PDF page elements into pixels.
That is also the difference between you converting the PDF to an image on the server and the app converting it to pixels on the device. Your server likely has more computing power (at least it might have :-)), but the application knows at which resolution and for which part of the page it needs to generate pixels. A good viewing application can use these details to adapt how it does the conversion; there are lots of optimisation algorithms that only render the visible elements and take shortcuts based on knowing exactly what resolution will be used for rendering.
In short, yes, you can do the rendering on the server and feed an image to your viewer. But keep in mind that - especially while allowing the user to zoom - you'll get lots of data and probably poorer quality than when you let a good viewer handle things in PDF...
I have an app that uses a lot of images for drawables. However, because they are high-definition, they take up a lot of space. Is there any way in Android to convert these images into text, or, since I am using Maya, to save them in a different format so that my app can draw them? In other words, is there a way to write code that draws the image from points given as text and still has good quality?
I am developing an Android camera app. The camera pictures are later processed by OCR, so the pictures must be as sharp as possible.
If you shake the camera, it looks as if the digital camera overlays multiple images, to create the effect of motion blur:
Example 1: http://i.stack.imgur.com/nqrmd.jpg
Example 2: http://i.stack.imgur.com/ZBx6F.jpg
If you examine the pictures closely, the motion blur appears to consist of 2 or 3 images taken in quick succession and blended together to simulate a longer light exposure. I understand that this is simply how digital cameras work.
But I'd prefer having a single crisp image rather than a properly exposed one. The app can use histogram corrections to make the text readable again for OCR. The image does not have to appeal to the human eye.
Is there a way to better control the camera to get this sort of raw image snapshot?
I had some limited success using the "Action" scene mode on the camera. Not much, but it's as far as you can get.
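For reference, a sketch of how that scene mode can be requested through the (legacy) android.hardware.Camera API; the SceneModeHelper wrapper is just an illustrative name:

```java
import android.hardware.Camera;

import java.util.List;

public class SceneModeHelper {

    /**
     * Requests the "Action" scene mode mentioned above. The mode is only
     * applied if the device actually reports support for it.
     */
    public static void enableActionSceneMode(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        List<String> modes = params.getSupportedSceneModes();
        if (modes != null && modes.contains(Camera.Parameters.SCENE_MODE_ACTION)) {
            params.setSceneMode(Camera.Parameters.SCENE_MODE_ACTION);
            camera.setParameters(params);
        }
    }
}
```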
Using an Android (2.3.3) phone, I can use the camera to retrieve a preview with the onPreviewFrame(byte[] data, Camera camera) method to get the YUV image.
For some image processing, I need to convert this data to an RGB image and show it on the device. Using the basic Java / Android approach, this runs at a horrible rate of less than 5 fps...
Now, using the NDK, I want to speed things up. The problem is: How do I convert the YUV array to an RGB array in C? And is there a way to display it (using OpenGL perhaps?) in the native code? Real-time should be possible (the Qualcomm AR demos showed us that).
I cannot use setTargetDisplay and put an overlay on it!
I know Java, recently started with the Android SDK, and have zero experience in C.
Have you considered using OpenCV's Android port? It can do a lot more than just color conversion, and it's quite fast.
A Google search returned this page for a C implementation of YUV->RGB565. The author even included the JNI wrapper for it.
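As an illustration of the OpenCV route, here is a sketch that wraps the NV21 preview bytes in a Mat and lets cvtColor do the conversion in native code. It assumes the OpenCV Java bindings (2.4 or later) have been initialised elsewhere; the YuvConverter class name is mine:

```java
import org.opencv.android.Utils;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

import android.graphics.Bitmap;

public class YuvConverter {

    /** Converts an NV21 preview frame to an ARGB_8888 Bitmap via OpenCV. */
    public static Bitmap nv21ToBitmap(byte[] data, int width, int height) {
        // NV21 stores the Y plane followed by interleaved V/U samples,
        // i.e. 1.5 bytes per pixel, hence the height * 1.5 single-channel Mat.
        Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
        yuv.put(0, 0, data);

        Mat rgb = new Mat();
        Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(rgb, bitmap);
        return bitmap;
    }
}
```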
You can also succeed by staying with Java. I did this for the image detection in the androangelo app.
I used the sample code which you can find here by searching for "decodeYUV".
For processing the frames, the essential thing to consider is the image size. Depending on the device you may get quite large images: on the Galaxy S2, for example, the smallest supported preview size is 640x480, which is a lot of pixels.
What I did was to use only every second row and every second column after YUV-to-RGB decoding. Processing a 320x240 image works quite well and allowed me to get frame rates of 20 fps (including some noise reduction, a color conversion from RGB to HSV, and circle detection).
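A sketch of that approach, using the fixed-point constants from the well-known decodeYUV420SP sample code and assuming the preview data is in the default NV21 format; here the every-second-row/column subsampling is folded directly into the decode (the class and method names are mine):

```java
public class YuvDecode {

    /**
     * Decodes an NV21 preview frame to ARGB, taking only every second row
     * and column, so a 640x480 frame becomes a 320x240 image.
     *
     * @param rgb output buffer of size (width / 2) * (height / 2)
     */
    public static void decodeNv21Half(int[] rgb, byte[] nv21, int width, int height) {
        final int frameSize = width * height;
        int outIndex = 0;

        for (int j = 0; j < height; j += 2) {
            // Each pair of source rows shares one row of interleaved V/U samples.
            int uvRow = frameSize + (j >> 1) * width;
            for (int i = 0; i < width; i += 2) {
                int y = (0xff & nv21[j * width + i]) - 16;
                if (y < 0) y = 0;
                int v = (0xff & nv21[uvRow + i]) - 128;
                int u = (0xff & nv21[uvRow + i + 1]) - 128;

                // Fixed-point YUV -> RGB conversion.
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;

                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;

                rgb[outIndex++] = 0xff000000
                        | ((r << 6) & 0x00ff0000)
                        | ((g >> 2) & 0x0000ff00)
                        | ((b >> 10) & 0x000000ff);
            }
        }
    }
}
```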
In addition, you should carefully check the size of the image buffer you hand to the preview callback (setPreviewCallbackWithBuffer with addCallbackBuffer). If it is too small, garbage collection will spoil everything.
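A sketch of sizing that buffer from the preview parameters, assuming the buffered preview callback is used (PreviewBuffer is an illustrative name):

```java
import android.graphics.ImageFormat;
import android.hardware.Camera;

public class PreviewBuffer {

    /**
     * Allocates a callback buffer that exactly matches one preview frame,
     * so the camera never has to allocate frames behind your back and the
     * garbage collector stays quiet.
     */
    public static void registerBuffer(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        Camera.Size size = params.getPreviewSize();
        int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
        int bufferSize = size.width * size.height * bitsPerPixel / 8;
        camera.addCallbackBuffer(new byte[bufferSize]);
    }
}
```

In onPreviewFrame, hand the same buffer back with addCallbackBuffer(data) so it gets reused for the next frame.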
For the result, you can check the calibration screen of the androangelo app, where the detected image is overlaid on the camera preview.
Given a video file, is there any way to access single pixel values?
I have two cases where I would like to access the pixels:
From the video camera
From a video file
What I need is to get the pixel information at a certain position, with something like getPixel(posX, posY) returning the RGB information.
I have an algorithm that detects blobs (homogeneous parts) in an image, and I would like to run it both in real time using the Android video camera and offline by analyzing a video file.
Yes, but you'll need to do some work.
Extract a video frame from the source file with a tool such as FFmpeg. The result will be a JPEG or other such image file.
Use an image processing tool, like ImageMagick, to extract the information for a pixel.
Presto!
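If the processing is to stay on the Android device rather than go through FFmpeg and ImageMagick, a rough on-device equivalent is to grab a frame with MediaMetadataRetriever and read pixels from the resulting Bitmap (the VideoPixelReader class name is illustrative):

```java
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;

import java.io.IOException;

public class VideoPixelReader {

    /**
     * Pulls a single frame out of a video file and reads one pixel from it.
     * The returned int is packed as 0xAARRGGBB.
     */
    public static int getPixel(String videoPath, long timeUs, int posX, int posY) throws IOException {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(videoPath);
            Bitmap frame = retriever.getFrameAtTime(timeUs);
            if (frame == null) {
                throw new IOException("Could not extract a frame at " + timeUs + " us");
            }
            return frame.getPixel(posX, posY);
        } finally {
            retriever.release();
        }
    }
}
```

The channels can be unpacked with android.graphics.Color.red(), .green(), and .blue(). For the real-time camera case, the preview-callback approach discussed in the previous question is the better fit.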