There are a few problems I will list here.
I am using an Omnivision image sensor to get raw video and images. I have to convert the raw images to bitmap, and the raw video to MJPEG.
I got the data using a Uri, then an InputStream, then a byte[] of shape N x 1, where I got about a million values. I am not sure whether this is the right way to get the image. Then I tried to decode it using Imgcodecs: I used bitwise shifts and added the values, but it took a lot of time and the app crashed. Instead, I reshaped the array into m x n and tried to display it as a bitmap, but the bitmap was null. I tried demosaicing, but could not proceed. I tried decoding it directly as a bitmap too, and the app crashed again.
Is there any way I could stream it directly in Android Studio? I need to convert this raw video format into MJPEG. I tried to stream it in Python just like a webcam, which gave a "can't grab frame" error and something to do with MSMF.
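The bit-shifting and reshaping step described above can be done in plain Java without crashing, as long as the loop avoids per-pixel object allocation. This is a sketch under assumptions not stated in the question: each raw pixel is two bytes, little-endian, and the width and height are known from the sensor configuration.

```java
// Sketch: combine a flat byte[] from the sensor into 16-bit pixel rows.
// Assumptions (not from the question): two bytes per pixel, little-endian,
// and width/height known from the sensor configuration.
public class RawReshape {
    public static int[][] toPixels(byte[] raw, int width, int height) {
        int[][] pixels = new int[height][width];
        for (int i = 0; i < width * height; i++) {
            int lo = raw[2 * i] & 0xFF;       // low byte (mask to avoid sign extension)
            int hi = raw[2 * i + 1] & 0xFF;   // high byte
            pixels[i / width][i % width] = (hi << 8) | lo;
        }
        return pixels;
    }
}
```

The `& 0xFF` mask matters: Java bytes are signed, and shifting them without masking is a common cause of garbage pixel values in exactly this kind of conversion.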
I want to get the depth image during playback of an mp4 file in ARCore.
First, I recorded live camera frames (depth and color) to an mp4 file.
This mp4 file has 3 tracks: 1280*720 (color), 640*480 (color), and 640*480 (depth).
Next, I start playback of this mp4 file using the session.setPlaybackDataSet() function in ARCore,
and I tried to get the color image and depth image using the functions below:
textureImage = frame.acquireCameraImage();
depthImage = frame.acquireDepthImage();
In this case, textureImage's size is 640*480, but depthImage's size is 640*360 (cropped).
But I want to get a 640*480 (non-cropped) depth image.
I searched for a function to change the frame's size before starting playback,
but I cannot find any solution.
How can I get a non-cropped depth image? My test device is a Samsung Galaxy S20+.
Please help me.
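Note that 640*360 is exactly the 16:9 aspect ratio of the 1280*720 track, while 640*480 is 4:3, so the depth frame appears to cover only a 16:9 band of the color frame. If no API restores the full frame, one workaround is to letterbox the depth back into a 4:3 buffer. This is a sketch, assuming the crop is vertically centered (which may not hold on every device):

```java
// Workaround sketch (assumption: the 640x360 depth is a vertically centered
// 16:9 band of the 4:3 color frame): pad it back to 640x480, filling the
// missing rows with 0, which ARCore treats as "no depth available".
public class DepthPad {
    public static short[] padTo4x3(short[] depth, int width, int height, int targetHeight) {
        short[] out = new short[width * targetHeight]; // zero-filled = invalid depth
        int top = (targetHeight - height) / 2;          // rows of padding above
        System.arraycopy(depth, 0, out, top * width, width * height);
        return out;
    }
}
```

This does not recover real depth for the padded rows; it only restores the 640*480 geometry so depth and color pixels line up again.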
In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format specification, and it looks like the depth data is stored in the JPG file as XMP metadata. Now my question is: how do I get this depth map from the file, or directly from the Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH jpeg, the depth image is stored in the bytes immediately after the main jpeg image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers.
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following:
Uses com.drew.imaging.ImageMetadataReader to read the EXIF data.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you may not need, depending on your use case).
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
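The marker search mentioned above can be sketched in plain Java. This is a hypothetical helper, not the linked parser itself: it scans the byte buffer for the JPEG SOI marker (0xFF 0xD8) and EOI marker (0xFF 0xD9), so the embedded depth JPEG can be located after the primary image ends.

```java
// Sketch of the marker search described above: locate embedded JPEGs by
// scanning for SOI (0xFF 0xD8) and EOI (0xFF 0xD9) markers.
// Hypothetical helper, not the linked JpegParser.
public class JpegMarkers {
    // Index of the first SOI marker at or after `from`, or -1 if none.
    public static int findSoi(byte[] buf, int from) {
        for (int i = from; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD8) return i;
        }
        return -1;
    }

    // Index of the first EOI marker at or after `from`, or -1 if none.
    public static int findEoi(byte[] buf, int from) {
        for (int i = from; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD9) return i;
        }
        return -1;
    }
}
```

A naive scan like this can false-positive on 0xFFD8 appearing inside compressed data, which is why the answer's approach of working backwards from the trailer sizes in the XMP metadata is more robust.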
I am decoding an ARGB image that is 8000x8000 pixels, so in uncompressed form it reaches
(2 + 2 + 2 + 2) * (8000 * 8000) = 488 MB
and crashes the app. I don't want to downsample the image, because I am converting the bitmap to a byte array and sending it in a PUT request. I have tried decodeRegion, but I don't know how to stitch the data (i.e., the byte arrays) back together, since each region has header info at the start and just concatenating them isn't helping.
Use an HTTP client library that allows you to upload from a file or stream, so that you do not need to decode the image and try to hold it in memory. OkHttp has options for this; see this recipe for streaming a POST request, this recipe for POSTing a file, or this recipe for multipart POSTs. Those techniques should be adaptable to a PUT request.
Why are you reading in a large image, decoding it, and then posting the byte array? That's the wrong way to do it.
If your API actually requires the decoded bytes, fix it. More likely it wants the file's raw data, in which case you just need any networking API that gives you an OutputStream: read the file's data 1 MB at a time from the File's InputStream and write it to the socket's OutputStream.
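The chunked copy described above can be sketched as follows; the stream classes are standard java.io, only the 1 MB chunk size is an arbitrary choice:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the streaming approach described above: copy the file's bytes to
// the socket's OutputStream in 1 MB chunks, never holding the whole image in memory.
public class StreamCopy {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024 * 1024]; // 1 MB chunk
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {  // read() returns -1 at end of stream
            out.write(buffer, 0, n);           // write only the bytes actually read
            total += n;
        }
        out.flush();
        return total;
    }
}
```

Memory use stays at one buffer regardless of file size, which is the whole point of streaming instead of decoding to a Bitmap first.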
I am writing an app to grab every frame from a video, so that I can do some CV processing.
According to the Android API docs, I should set the MediaPlayer's surface to ImageReader.getSurface(), so that I can get every video frame in the OnImageAvailableListener callback. And it really does work on some devices with some videos.
However, on my Nexus 5 (API 24-25), I get almost all green pixels when the image becomes available.
I have checked the byte[] in the Image's YUV planes, and I discovered that the bytes I read from the video must be wrong: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but its grabber is too slow.)
I fixed my problem!
When using an Image, we should call getCropRect() to get the valid area of the Image.
For example, I get a buffer dimension of 1088 when I decode a 1920*1080 frame (the decoder pads the buffer); I should use image.getCropRect() to get the right size of the image, which will be 1920x1080.
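The padding described above can be stripped in plain Java once the crop size is known. This is a hypothetical equivalent of reading a plane while honoring getCropRect(), assuming the crop starts at the plane origin and the plane has the given row stride:

```java
// Sketch of applying the crop outside the Android Image class: the decoder
// pads each row/column (e.g. 1080 rows padded to 1088), and only the crop
// region holds valid pixels. Assumes the crop starts at offset (0, 0).
public class CropPlane {
    public static byte[] cropRows(byte[] plane, int rowStride, int cropWidth, int cropHeight) {
        byte[] out = new byte[cropWidth * cropHeight];
        for (int row = 0; row < cropHeight; row++) {
            // Copy only the valid cropWidth bytes from each padded row.
            System.arraycopy(plane, row * rowStride, out, row * cropWidth, cropWidth);
        }
        return out;
    }
}
```

On a real Image you would pass Plane.getRowStride() for `rowStride` and the getCropRect() dimensions for the crop size; copying the full padded buffer instead is exactly what produces garbage at the edges.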
I am uploading an image (JPEG) from an Android phone to a server. I tried these two methods:
Method 1 :
int bytes=bitmap.getByteCount();
ByteBuffer byteBuffer=ByteBuffer.allocate(bytes);
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] byteArray = byteBuffer.array();
outputStream.write(byteArray, 0, bytes-1);
Method 2 :
bitmap.compress(Bitmap.CompressFormat.JPEG,100,outputStream);
In Method 1, I am converting the bitmap to a byte array and writing it to the stream. In Method 2, I have called the compress function but given the quality as 100 (which I guess means no loss).
I expected both to give the same result, but the results are very different. On the server, the following happened:
Method 1 (the uploaded file on the server):
A file of size 3.8 MB was uploaded to the server. The uploaded file is unrecognizable and does not open in any image viewer.
Method 2 (the uploaded file on the server):
A JPEG file of 415 KB was uploaded to the server. The uploaded file was in JPEG format.
What is the difference between the two methods? How did the sizes differ so much even though I gave the compression quality as 100? And why was the file from Method 1 not recognizable by any image viewer?
I expected both to give the same result.
I have no idea why.
What is the difference between the two methods.
The second approach creates a JPEG file. The first one does not. The first one merely makes a copy of the bytes that form the decoded image to the supplied buffer. It does not do so in any particular file format, let alone JPEG.
How did the size differ so much even though I gave the compression quality as 100?
Because the first approach applies no compression. A JPEG quality of 100 does not mean "uncompressed".
Also why was the file not recognizable by any image viewer in method 1?
Because the bytes copied to the buffer are not being written in any particular file format, and certainly not JPEG. That buffer is not designed to be written to disk. Rather, that buffer is designed to be used only to re-create the bitmap later on (e.g., for a bitmap passed over IPC).
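The distinction can be demonstrated in desktop Java, with javax.imageio standing in for Android's Bitmap.compress() (an assumption for illustration only): a raw pixel dump has no file format at all, while the JPEG encoder writes a real file that begins with the JPEG SOI marker any viewer can recognize.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

// Illustration of the answer above: a raw pixel dump vs a real JPEG encoder.
public class DumpVsJpeg {
    // Like copyPixelsToBuffer: just the decoded pixel bytes, no header, no format.
    public static byte[] rawArgbDump(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        byte[] raw = new byte[w * h * 4]; // 4 bytes per ARGB pixel
        int i = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = img.getRGB(x, y);
                raw[i++] = (byte) (p >>> 24); // A
                raw[i++] = (byte) (p >>> 16); // R
                raw[i++] = (byte) (p >>> 8);  // G
                raw[i++] = (byte) p;          // B
            }
        }
        return raw;
    }

    // Like Bitmap.compress(JPEG, ...): produces an actual JPEG file.
    public static byte[] jpegEncode(BufferedImage img) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "jpg", out);
        return out.toByteArray();
    }
}
```

The raw dump is always exactly width * height * 4 bytes and starts with whatever the first pixel happens to be, while the encoded output starts with the 0xFF 0xD8 marker and is typically far smaller, which matches the 3.8 MB vs 415 KB observation in the question.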