I want the color of certain pixels in the RGB image of every camera frame in Vuforia, but according to this, just one byte holds all the data for each pixel. Am I right?
So how can I get the color of each pixel?
Thanks in advance
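For reference, a minimal sketch of reading per-pixel RGB values (this assumes the frame format has been registered as RGB888, which uses three bytes per pixel; the class and method names below are from the Vuforia Java SDK, so verify them against your SDK version):

    import com.vuforia.Frame;
    import com.vuforia.Image;
    import com.vuforia.PIXEL_FORMAT;
    import com.vuforia.State;
    import java.nio.ByteBuffer;

    public class PixelReader {
        // Once, during setup, ask Vuforia to deliver RGB888 frames:
        // Vuforia.setFrameFormat(PIXEL_FORMAT.RGB888, true);

        // Reads the (r, g, b) of one pixel from the current frame.
        static int[] readPixel(State state, int x, int y) {
            Frame frame = state.getFrame();
            for (int i = 0; i < frame.getNumImages(); i++) {
                Image image = frame.getImage(i);
                if (image.getFormat() != PIXEL_FORMAT.RGB888) continue;

                ByteBuffer pixels = image.getPixels();
                int offset = y * image.getStride() + x * 3; // 3 bytes per pixel
                int r = pixels.get(offset) & 0xFF;
                int g = pixels.get(offset + 1) & 0xFF;
                int b = pixels.get(offset + 2) & 0xFF;
                return new int[] {r, g, b};
            }
            return null; // no RGB888 image in this frame
        }
    }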
I'm writing a small Android app for my P30 Pro in which I want to find an object in the color image (a face, a visual marker, ...) and then get its relative position by using the dense depth image from the Time-of-Flight camera. To simultaneously track the camera, I run ARCore and use a shared session to access the color image as well as the depth image via frame.acquireDepthImage(). (Both work well: I get a high-resolution color image and a 120x160 depth image.)
For the color image, I get the intrinsic calibration via camera.getImageIntrinsics(), so I can map between a pixel and the corresponding ray.
However, I found no corresponding function for the depth camera, so I can't create a point cloud from the depth image or get the corresponding depth for a pixel in the color image.
So: how can I find the corresponding 3D point for a given pixel in the color image by using the dense depth image?
I previously worked on a project that used the P20 Pro's dense depth maps, and I found they were already aligned with the high-resolution color image, even though they had a much lower resolution.
In other words, you should just upsample the depth map so it matches the color image's resolution. Once you do that, you should find that the interpolated depth value at (r,c) in the upsampled depth image corresponds to the pixel at (r,c) in the color image.
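A minimal sketch of that lookup, using nearest-neighbour sampling (this assumes the image from frame.acquireDepthImage() is in DEPTH16 format, i.e. 16-bit values in millimeters; masking off the top three bits follows the DEPTH16 spec, where they may encode confidence, and is harmless otherwise):

    import android.media.Image;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Depth (in meters) for a pixel of the color image, assuming the depth
    // map is aligned with the color image and differs only in resolution.
    static float depthAtColorPixel(Image depthImage, int colorX, int colorY,
                                   int colorWidth, int colorHeight) {
        // Scale the color coordinate down to depth-map coordinates
        // (nearest neighbour; bilinear interpolation is a refinement).
        int dx = colorX * depthImage.getWidth() / colorWidth;
        int dy = colorY * depthImage.getHeight() / colorHeight;

        Image.Plane plane = depthImage.getPlanes()[0];
        ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
        int byteIndex = dy * plane.getRowStride() + dx * plane.getPixelStride();

        // DEPTH16: the low 13 bits are the range in millimeters.
        int depthMm = buffer.getShort(byteIndex) & 0x1FFF;
        return depthMm / 1000f;
    }

With the depth z from this lookup and the color intrinsics (fx, fy, cx, cy) from camera.getImageIntrinsics(), the 3D point in the camera frame is then (X, Y, Z) = ((x - cx) * z / fx, (y - cy) * z / fy, z).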
I am using the Camera1 API, and I understand that I can query the supported preview frame-rate ranges and set one of them. However, I want the preview to run at a very low frame rate (e.g. 5 frames per second).
I can't set that, as it is below every supported range. Is there a way I can drop certain frames from the preview? With setPreviewCallbackWithBuffer() I do get each frame, but at that point it has already been displayed. Is there any way I can just skip frames from the preview?
Thank you
No, you cannot intervene in the frames on the preview surface. You could do some tricks if you use a SurfaceTexture for the preview, but if you need a very low frame rate, you can simply draw the frames (preferably, to an OpenGL texture) yourself. You will get the YUV frames in the onPreviewFrame() callback and can pass them on for display whenever you want. You can use a shader to display the YUV frames without wasting CPU on converting them to RGB.
Usually, we want to skip preview frames because we want to run some CV algorithm on them, and often we want to display the frames modified by that CV processing, e.g. with bounding boxes around detected objects. But even if you keep the box coordinates aside and want to display the preview frame as-is, using your own renderer has the advantage that there is no time lag between the picture and the overlay.
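A minimal throttling sketch along those lines (MyYuvRenderer and its drawYuvFrame() are hypothetical placeholders for your own YUV-shader renderer):

    import android.hardware.Camera;
    import android.os.SystemClock;

    class ThrottledPreviewCallback implements Camera.PreviewCallback {
        private static final long FRAME_INTERVAL_MS = 200; // ~5 fps

        private final MyYuvRenderer myRenderer; // hypothetical YUV-shader renderer
        private long lastShownMs = 0;

        ThrottledPreviewCallback(MyYuvRenderer renderer) {
            this.myRenderer = renderer;
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            long now = SystemClock.elapsedRealtime();
            if (now - lastShownMs >= FRAME_INTERVAL_MS) {
                lastShownMs = now;
                myRenderer.drawYuvFrame(data); // hypothetical: draws YUV via shader
            }
            camera.addCallbackBuffer(data);    // hand the buffer back for reuse
        }
    }

The camera keeps delivering frames at its supported rate; you simply choose which ones to draw.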
Hi, I am developing a camera application in which I have to do black-and-white image processing. I googled and found only grayscale image processing. I want to convert my image into black and white like CamScanner does. I also tried OpenCV, but the result was not up to our expectations. If anybody has solved this, please let me know. Thank you.
You will start with a gray-value int[] or byte[] array with intensity values in the range [0, 255]. What you need is a threshold thres, so that all pixels with an intensity below that threshold are set to black (0) and all pixels with an intensity equal to or above that threshold are set to white (255). For determining the optimal threshold, the Otsu method is a well-established approach, and it is rather intuitive: since the threshold divides the pixels into two subsets, you take the threshold value that minimizes the variance within the two subsets, which is the same as maximizing the variance between the two subsets. As you can see from the Wikipedia link, the calculation is rather simple, and they also provide the Java code. I work with this too, and it is rather efficient.
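A self-contained sketch of that computation (plain Java, no library assumed):

    // A straightforward Otsu implementation over an 8-bit grayscale image;
    // binarizes in place: below the computed threshold -> 0, else -> 255.
    static void otsuBinarize(int[] gray) {
        int[] hist = new int[256];
        for (int v : gray) hist[v]++;

        long sumAll = 0;
        for (int i = 0; i < 256; i++) sumAll += (long) i * hist[i];

        int total = gray.length;
        long sumB = 0;       // intensity sum of the "background" class
        int weightB = 0;     // pixel count of the "background" class
        double bestVariance = -1.0;
        int threshold = 0;

        for (int t = 0; t < 256; t++) {
            weightB += hist[t];              // background: intensities 0..t
            if (weightB == 0) continue;
            int weightF = total - weightB;   // foreground: intensities t+1..255
            if (weightF == 0) break;

            sumB += (long) t * hist[t];
            double meanB = (double) sumB / weightB;
            double meanF = (double) (sumAll - sumB) / weightF;

            // Between-class variance (up to a constant factor); Otsu picks
            // the threshold that maximizes it.
            double variance = (double) weightB * weightF
                    * (meanB - meanF) * (meanB - meanF);
            if (variance > bestVariance) {
                bestVariance = variance;
                threshold = t + 1;           // "below threshold" = class 0..t
            }
        }

        for (int i = 0; i < gray.length; i++) {
            gray[i] = gray[i] < threshold ? 0 : 255;
        }
    }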
I have applied some effects to the camera preview with OpenGL ES 2.0 shaders.
Next, I want to save these processed pictures (grayscale, negative, ...).
I call glReadPixels() in onDrawFrame(), create a bitmap from the pixels I read out of the OpenGL framebuffer, and then save it to device storage.
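Roughly what I do now (simplified):

    import android.graphics.Bitmap;
    import android.graphics.Matrix;
    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Called from onDrawFrame() after rendering; width/height are the
    // preview surface size, which is why the result is only e.g. 800x480.
    private Bitmap readFramebuffer(int width, int height) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4); // RGBA
        buffer.order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        buffer.rewind();
        bitmap.copyPixelsFromBuffer(buffer);

        // glReadPixels returns rows bottom-up, so flip vertically before saving.
        Matrix flip = new Matrix();
        flip.postScale(1f, -1f);
        return Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, true);
    }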
However, in this way I only get a "snapshot" of the camera-effect preview. In other words, the saved image resolution (e.g. 800x480) is not the same as that of an image taken via Camera.PictureCallback (e.g. 1920x1080).
I know the preview size can be changed with setPreviewSize(), but it still cannot be made equal to the picture size.
So, is it possible to use a GLSL shader to post-process the image obtained from Camera.PictureCallback directly? Or is there another way to achieve the same goal?
Any suggestion will be greatly appreciated.
Thanks.
Justin, that was my question about setPreviewTexture(), not a suggestion. If you send the pixel data as received from the onPreviewFrame() callback, it will naturally be limited by the supported preview sizes.
You can use the same logic to push the pixels from onPictureTaken() to a texture, but you will have to decode them from JPEG into RGB first.
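A sketch of that path (the GL calls must run on the GL thread, e.g. via GLSurfaceView.queueEvent(); error handling omitted):

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.opengl.GLES20;
    import android.opengl.GLUtils;

    // Decodes the JPEG bytes from onPictureTaken() and uploads them to a
    // GL texture. Afterwards, render with the same shader into an offscreen
    // framebuffer of the picture's size and call glReadPixels() as before.
    static int uploadPictureToTexture(byte[] jpegData) {
        // Full-resolution RGB bitmap decoded from the JPEG bytes.
        Bitmap picture = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);

        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, picture, 0);

        picture.recycle();   // the pixel data now lives in the texture
        return textures[0];
    }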
I am capturing each onPreviewFrame into a byte[] and saving it to compare with the next captured frame. Although the camera is pointed at an area with no change happening, and I can see on the display that there is no change, successive frames differ element by element. Any suggestion why this should be so? Also, I only want the grayscale data, and I understand that the first height * width bytes are the grayscale data; is this correct?
Statistically, there cannot be an exact match: sensor noise alone makes successive frames differ slightly, so compare with a tolerance rather than byte-for-byte. Check out Image comparison - fast algorithm for a way of comparing images. And yes, for the default NV21 preview format, the first width * height bytes are the Y (grayscale) plane.
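A sketch of a noise-tolerant comparison over just the grayscale plane (the tolerance value is an assumption you would tune for your camera):

    // Mean absolute difference over the Y (luma) plane of two NV21 preview
    // frames. For NV21, the first width * height bytes are the grayscale data.
    static boolean framesSimilar(byte[] prev, byte[] curr,
                                 int width, int height, double tolerance) {
        long diff = 0;
        int pixels = width * height;
        for (int i = 0; i < pixels; i++) {
            diff += Math.abs((prev[i] & 0xFF) - (curr[i] & 0xFF));
        }
        return (double) diff / pixels < tolerance;
    }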