is there a way to access the pixels of a video? - android

Given a video file, is there any way to access single pixel values?
I have two cases where I would like to access the pixels:
From the video camera
From a video file
What I need is to get the pixel information at a certain position, with something like getPixel(posX, posY) returning the RGB values.
I have an algorithm that detects blobs (homogeneous regions) in an image, and I would like to run it both in real time on the Android camera and offline on a video file.

Yes, but you'll need to do some work.
Extract a video frame from the source file with a tool such as FFmpeg. The result will be a JPEG or similar image file.
Use an image processing tool, like ImageMagick, to extract the information for a pixel.
Presto!
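If you would rather stay on-device, the same two steps can be sketched with Android's own MediaMetadataRetriever, roughly like this (the path, timestamp and method name are placeholders):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;

// Grab the frame nearest timeUs from a local video file and read one
// pixel as packed ARGB (use android.graphics.Color to unpack R, G, B).
int getVideoPixel(String videoPath, long timeUs, int posX, int posY) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
        retriever.setDataSource(videoPath);
        Bitmap frame = retriever.getFrameAtTime(timeUs);
        return frame != null ? frame.getPixel(posX, posY) : 0;
    } finally {
        retriever.release();
    }
}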

Related

Adding a bitmap to a video

How do I add a bitmap to a video file on Android? I have a .png file which I decoded into a bitmap. Now I want to add that bitmap to a video file, so that when the video plays it also shows the bitmap I added.
If you just want to display the bitmap on top of the video while it is playing on your Android device, then the easiest approach is to display it in a separate view on top of the video, for example using a relative layout in your layout file. This avoids CPU- and battery-consuming video manipulation.
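For example, a minimal programmatic sketch of the layered-view idea (a RelativeLayout in XML works just as well; the resource name and video setup are placeholders):

import android.app.Activity;
import android.graphics.BitmapFactory;
import android.widget.FrameLayout;
import android.widget.ImageView;
import android.widget.VideoView;

// Stack an ImageView over a VideoView so the bitmap is drawn on top of
// the playing video without touching the video data itself.
void showVideoWithOverlay(Activity activity) {
    FrameLayout container = new FrameLayout(activity);
    VideoView video = new VideoView(activity);
    ImageView overlay = new ImageView(activity);
    overlay.setImageBitmap(
            BitmapFactory.decodeResource(activity.getResources(), R.drawable.overlay));
    container.addView(video);   // drawn first (bottom)
    container.addView(overlay); // drawn last (top)
    activity.setContentView(container);
}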
If you actually want to add the image to the video so it is saved, then one way to do this on Android is to use ffmpeg. There are several ways to use ffmpeg, but one of the easier ones is to use a well-supported ffmpeg wrapper, e.g.:
https://github.com/WritingMinds/ffmpeg-android-java
This allows you to use the standard command-line syntax for the ffmpeg instructions themselves, which you will find is well documented and supported on the web. For example, the following covers overlaying an image onto a video:
https://ffmpeg.org/ffmpeg.html (look for 'overlay an image' near the bottom)
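As a rough sketch of how that wrapper is typically driven (paths are placeholders, and the exact execute()/handler signatures vary by library version, so check its README):

import android.content.Context;
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;

// Burn logo.png into input.mp4 at position (10,10) with the ffmpeg
// "overlay" filter.
void addImageToVideo(Context context) throws Exception {
    String[] cmd = {
        "-i", "/sdcard/input.mp4",
        "-i", "/sdcard/logo.png",
        "-filter_complex", "overlay=10:10",
        "/sdcard/output.mp4"
    };
    FFmpeg ffmpeg = FFmpeg.getInstance(context);
    ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override
        public void onSuccess(String message) { /* output.mp4 written */ }
        @Override
        public void onFailure(String message) { /* inspect the ffmpeg log */ }
    });
}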
Note that adding the image is processing intensive, so you may well find your device is not able to do this in real time, i.e. not able to add it while you are watching. If you have the luxury of being able to add the image to the video first and then watch it, this may be more practical. You could even consider doing it on the server side after uploading the video, where you have much more processing power.
If you must see the image and the video in real time, then a combination of the above approaches may work for you: simply display the image over the video in real time, and then add the image to the video afterwards before storing it.

MPO Images from HTC Evo 3D

My team is developing an app for HTC phones that uses the stereo camera. In order to do our processing, we require the still images taken by the 3D camera to be in MPO format. By default it is returning JPS images.
How can I make the camera return MPO images? Is there someplace that this is documented?
I have spent a while on HTC's site but was unable to find source code for their API or camera app that might help (since their camera app can do MPO files).
I don't know of an API for this, but it is pretty straightforward to do yourself. The JPS format is just a single image with the left half from one camera and the right half from the other. So the first step is to split it into two separate images: create new bitmaps from it with rectangles for either side:
http://developer.android.com/reference/android/graphics/Bitmap.html#createBitmap(android.graphics.Bitmap,%20int,%20int,%20int,%20int,%20android.graphics.Matrix,%20boolean)
The MPO format is just two JPG images in one file, one after the other. So next, write two JPG images to the same file output stream using the compress method:
http://developer.android.com/reference/android/graphics/Bitmap.html#compress(android.graphics.Bitmap.CompressFormat,%20int,%20java.io.OutputStream)
You can find a lot of Android sample code online for cropping images and saving them to JPG, which is pretty much all you need.
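A minimal sketch of the split-and-concatenate approach just described (paths are placeholders; as the answer below points out, this is only an approximation of a real MPO):

import java.io.FileOutputStream;
import java.io.IOException;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Split a side-by-side JPS frame into left/right halves and write them
// back-to-back into one output file. A spec-compliant MPO would also
// need MP Extensions metadata.
void jpsToTwoJpegs(String jpsPath, String outPath) throws IOException {
    Bitmap jps = BitmapFactory.decodeFile(jpsPath);
    int halfWidth = jps.getWidth() / 2;
    Bitmap left = Bitmap.createBitmap(jps, 0, 0, halfWidth, jps.getHeight());
    Bitmap right = Bitmap.createBitmap(jps, halfWidth, 0, halfWidth, jps.getHeight());
    FileOutputStream out = new FileOutputStream(outPath);
    left.compress(Bitmap.CompressFormat.JPEG, 90, out);  // first image
    right.compress(Bitmap.CompressFormat.JPEG, 90, out); // second image
    out.close();
}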
The MPO format is not just "two JPG images in one file, one after the other."
http://www.cipa.jp/english/hyoujunka/kikaku/pdf/DC-007_E.pdf
An English translation of the 2009-02-04 standard from the Camera & Imaging Products Association's Standard Development Working Group, Multi-Picture Format Sub-Working Group.
https://www.htcdev.com/devcenter/opensense-sdk/stereoscopic-3d/s3d-sample-code/
Shows some sample code for working with the HTC EVO 3D camera.

What format is the raw pictureCallback data from the Android camera?

I am trying to use the picture data from the Android camera. I do not like the JPEG format, since eventually I will use grayscale data. The YUV format is fine with me, since the first part of it is the grayscale (luminance) plane.
From the Android developer documentation:
public final void takePicture (Camera.ShutterCallback shutter,
Camera.PictureCallback raw, Camera.PictureCallback postview,
Camera.PictureCallback jpeg)
Added in API level 5
Triggers an asynchronous image capture. The camera service will
initiate a series of callbacks to the application as the image capture
progresses. The shutter callback occurs after the image is captured.
This can be used to trigger a sound to let the user know that image
has been captured. The raw callback occurs when the raw image data is
available (NOTE: the data will be null if there is no raw image
callback buffer available or the raw image callback buffer is not
large enough to hold the raw image). The postview callback occurs when
a scaled, fully processed postview image is available (NOTE: not all
hardware supports this). The jpeg callback occurs when the compressed
image is available. If the application does not need a particular
callback, a null can be passed instead of a callback method.
It talks about "the raw image data". However, I can find no information anywhere about the format of this raw image data.
Do you have any idea about that?
I want the grayscale data of the captured picture while it is still in the phone's memory, so that no time is spent writing/reading image files or converting between image formats. Or maybe I have to sacrifice something to get it?
After some search, I think I found the answer:
From the Android tutorial:
"The raw callback occurs when the raw image data is available (NOTE:
the data will be null if there is no raw image callback buffer
available or the raw image callback buffer is not large enough to hold
the raw image)."
See this link (2011/05/10)
Android: Raw image callback supported devices
Not all devices support raw pictureCallback.
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/ZRkeoCD2uyc (2009)
Google employee Dave Sparks said:
"The original intent was to return an uncompressed RGB565 frame, but
this proved to be impractical. " "I am inclined to deprecate that API
entirely and replace it with hooks for native signal processing. "
Many people report a similar problem. See:
http://code.google.com/p/android/issues/detail?id=10910
Since many image processing algorithms are based on grayscale images, I am hoping for grayscale raw data in memory for each picture taken by the Android camera.
You may have some luck with getSupportedPictureFormats(). If it lists some YUV format, you can use setPictureFormat() and the desired resolution, and counterintuitively you will get the uncompressed high-quality image in the JpegPreview callback, from which the grayscale (a.k.a. luminance) can easily be extracted.
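A minimal sketch of that check, using the old android.hardware.Camera API this question is about (NV21 chosen here as the YUV format to look for):

import java.util.List;
import android.graphics.ImageFormat;
import android.hardware.Camera;

// Ask the camera whether it can deliver an uncompressed YUV picture;
// fall back to JPEG otherwise.
void pickPictureFormat() {
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    List<Integer> formats = params.getSupportedPictureFormats();
    if (formats.contains(ImageFormat.NV21)) {
        params.setPictureFormat(ImageFormat.NV21); // luminance plane comes first
    } else {
        params.setPictureFormat(ImageFormat.JPEG); // most devices only offer this
    }
    camera.setParameters(params);
}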
Most devices will only list JPEG as a valid choice, because they perform compression in hardware, on the camera side. Note that the data transfer from the camera to application RAM is often the bottleneck; if you can use the Stagefright hardware JPEG decoder, you will actually get the result faster.
The biggest problem with using the raw callback is that many developers have trouble getting anything returned at all on many phones.
If you are satisfied with just the YUV array, your camera preview SurfaceView can implement PreviewCallback, and you can add the onPreviewFrame method to your class. This method gives you direct access to the YUV array for every frame, which you can fetch whenever you choose.
EDIT: I should clarify that I was assuming you were building a custom camera application in which you extended SurfaceView for a custom camera preview surface. To follow my advice you will need to build a custom camera. If you are trying to do things quickly, though, I suggest building a new bitmap out of the JPEG data and implementing the grayscale conversion yourself.
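For reference, a minimal sketch of the preview-callback approach (assuming the default NV21 preview format, whose first width*height bytes are the luminance plane):

import android.hardware.Camera;

// Preview frames arrive as NV21 by default: the first width*height
// bytes are the Y (luminance) plane, i.e. the grayscale image.
class GrayscalePreview implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int pixels = size.width * size.height;
        byte[] gray = new byte[pixels];
        System.arraycopy(data, 0, gray, 0, pixels);
        // gray[y * size.width + x] is now the 0..255 luminance at (x, y)
    }
}
// Register with: camera.setPreviewCallback(new GrayscalePreview());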

processing video in android

I am trying to figure out the right way to approach processing video (from a file or the camera) frame by frame on Android.
I need to get each frame, convert it to RGB, process each color channel separately, and send the result to the screen.
Has anyone done this? How could it be done (preferably without any native-code processing)?
Look at this post, where I suggest using OpenCV for Android. With OpenCV you can grab a video frame in RGB format and process each of the components individually in real time.
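For example, a rough sketch using OpenCV4Android's CvCameraViewListener2 (assuming the usual camera-view setup; the class name here is illustrative):

import java.util.ArrayList;
import java.util.List;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Core;
import org.opencv.core.Mat;

// Per-frame processing: rgba() hands you each camera frame as an RGBA
// Mat, which Core.split breaks into separate single-channel planes for
// independent processing.
class FrameProcessor implements CameraBridgeViewBase.CvCameraViewListener2 {
    @Override
    public void onCameraViewStarted(int width, int height) {}
    @Override
    public void onCameraViewStopped() {}
    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        List<Mat> channels = new ArrayList<Mat>();
        Core.split(rgba, channels); // channels: R, G, B, A
        // ... process channels.get(0..2) individually here ...
        Core.merge(channels, rgba);
        return rgba; // the returned Mat is drawn to the screen
    }
}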

How to compare an image stored on the device with a frame captured by the camera?

I have an image stored on the Android device. I need to compare that image with the video stream captured using the device's camera, and display a message if a match is found.
How can I do this?
This may not be so trivial a question. You should definitely research the available image processing libraries online; otherwise, extract the raw pixel data of the stored image (as RGB or ARGB) and compare it against the streamed frames, allowing some tolerance percentage.
This is just an idea, not a complete answer.
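To make the idea concrete, a crude sketch that scales a frame to the stored image's size and counts pixels whose channels all fall within a tolerance (illustrative only, not a robust matcher):

import android.graphics.Bitmap;
import android.graphics.Color;

// Returns the fraction (0.0 to 1.0) of pixels whose R, G and B values
// each differ by at most `tolerance` between the two images.
double similarity(Bitmap stored, Bitmap frame, int tolerance) {
    Bitmap scaled = Bitmap.createScaledBitmap(frame, stored.getWidth(), stored.getHeight(), true);
    long matches = 0;
    for (int y = 0; y < stored.getHeight(); y++) {
        for (int x = 0; x < stored.getWidth(); x++) {
            int a = stored.getPixel(x, y);
            int b = scaled.getPixel(x, y);
            if (Math.abs(Color.red(a) - Color.red(b)) <= tolerance
                    && Math.abs(Color.green(a) - Color.green(b)) <= tolerance
                    && Math.abs(Color.blue(a) - Color.blue(b)) <= tolerance) {
                matches++;
            }
        }
    }
    return matches / (double) (stored.getWidth() * stored.getHeight());
}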
