In my android application, I used ImageReader surface to capture the image from the camera and dump it into a file. Here is the relevant code:
void saveImage(Image img) {
    // For ImageFormat.JPEG the encoded data is delivered in a single plane.
    ByteBuffer buf = img.getPlanes()[0].getBuffer();
    buf.rewind();
    byte[] data = new byte[buf.remaining()];
    buf.get(data);
    saveToFile(data);
}
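For completeness, saveToFile is just a verbatim byte-for-byte write. A minimal sketch of such a helper (the real one is not shown in the post, so the file name and location here are assumptions):

// Hypothetical sketch of saveToFile: writes the buffer to disk unchanged.
// Assumes app-private external storage and a fixed file name.
void saveToFile(byte[] data) {
    File out = new File(getExternalFilesDir(null), "capture.jpg"); // assumed path
    try (FileOutputStream fos = new FileOutputStream(out)) {
        fos.write(data);
    } catch (IOException e) {
        Log.e("ImageSaver", "failed to write image", e);
    }
}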
The generated file, when viewed through avplay, seems to display the image just fine. However, when I load the same content using Qt's QPixmap::loadFromData on Ubuntu, the method fails. The errors I get are:
Corrupt JPEG data: premature end of data segment
Invalid JPEG file structure: two SOI markers
I am wondering if anyone has any insight on how to overcome this problem. I am not sure whether it is a problem with the Android MediaCodec class or a bug in the JPEG library Qt uses internally. Regards.
There are a few problems I am listing here.
I am using an Omnivision image sensor to get raw video and images. I have to convert the raw image to a bitmap, or the raw video to MJPEG.
I got the data through a Uri, then an InputStream, and ended up with an N x 1 byte[] of roughly a million values. I am not sure whether this is the right way to get the image. I then tried to decode it with imgcodecs: I bit-shifted and added the values, but it took a long time and the app crashed. Instead, I reshaped the data into m x n and tried to display it as a bitmap, but the bitmap came out null (a rough sketch of that reshape step is below). I also tried demosaicing, which I could not get working, and decoding it directly as a bitmap, which crashed the app as well.
Is there any way I can stream it directly in Android Studio? I need to convert this raw video format into MJPEG. I also tried streaming it in Python like a webcam, but that failed with a "can't grab frame" error related to MSMF.
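A minimal sketch of the reshape-and-display step described above, assuming the raw frame is 8-bit grayscale with a known width and height (the post does not state the sensor's output format, so those are assumptions; a Bayer-pattern frame would need demosaicing first):

// Sketch only: assumes an 8-bit grayscale raw frame of known dimensions.
static Bitmap rawToBitmap(byte[] raw, int width, int height) {
    int[] argb = new int[width * height];
    for (int i = 0; i < width * height; i++) {
        int v = raw[i] & 0xFF;                            // unsigned byte value
        argb[i] = 0xFF000000 | (v << 16) | (v << 8) | v;  // replicate into R, G, B
    }
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}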
I want to upload an internal PNG image to my backend, but the API supplied with the backend only allows byte[] data to be uploaded.
So far, I haven't found a way of extracting byte[] data from a texture. I'm not sure whether it matters that it's an internal resource.
What ways are there to achieve this using the Libgdx framework?
The image I want to use is loaded with the AssetManager.
Before trying to do this, make sure to understand the following:
A Texture is an OpenGL resource which resides in video memory (VRAM). The texture data itself is not (necessarily) available in RAM, so you cannot access it directly. Transferring that data from VRAM to RAM is comparable to taking a screenshot; in general it is something you want to avoid.
However, if you load the image using AssetManager, then you are loading it from file and thus already have the data available in RAM. In that case it is not a Texture but a Pixmap. Getting the data from the Pixmap goes like this:
Pixmap pixmap = new Pixmap(Gdx.files.internal(filename));
ByteBuffer nativeData = pixmap.getPixels();            // raw pixel data in native memory
byte[] managedData = new byte[nativeData.remaining()];
nativeData.get(managedData);                           // copy into a managed byte[]
pixmap.dispose();
Note that you can load the Pixmap using AssetManager as well (in that case you would unload it instead of disposing it). The nativeData buffer contains the raw memory; most APIs can use that directly, so check whether you can pass it as-is. Otherwise, use the managedData byte array.
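A minimal sketch of the AssetManager variant, assuming "image.png" stands in for your internal asset path (AssetManager registers a PixmapLoader by default):

// Same extraction as above, but with the Pixmap's lifecycle managed by AssetManager.
AssetManager manager = new AssetManager();
manager.load("image.png", Pixmap.class);
manager.finishLoading();

Pixmap pixmap = manager.get("image.png", Pixmap.class);
ByteBuffer nativeData = pixmap.getPixels();
byte[] managedData = new byte[nativeData.remaining()];
nativeData.get(managedData);

manager.unload("image.png");   // unload instead of pixmap.dispose()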
I am uploading an image (JPEG) from an Android phone to a server. I tried these two methods:
Method 1:
int bytes = bitmap.getByteCount();
ByteBuffer byteBuffer = ByteBuffer.allocate(bytes);
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] byteArray = byteBuffer.array();
outputStream.write(byteArray, 0, bytes - 1);
Method 2:
bitmap.compress(Bitmap.CompressFormat.JPEG,100,outputStream);
In Method 1, I convert the bitmap to a byte array and write it to the stream. In Method 2, I call the compress function but pass a quality of 100 (which I guess means no loss).
I expected both to give the same result, but the results are very different. On the server, the following happened:
Method 1 (the uploaded file on the server):
A file of size 3.8 MB was uploaded to the server. The uploaded file is unrecognizable and does not open in any image viewer.
Method 2 (the uploaded file on the server):
A JPEG file of 415 KB was uploaded to the server. The uploaded file was in JPEG format.
What is the difference between the two methods? How did the sizes differ so much even though I gave a compression quality of 100? Also, why was the file not recognizable by any image viewer in Method 1?
I expected both to give the same result.
I have no idea why.
What is the difference between the two methods?
The second approach creates a JPEG file. The first one does not. The first one merely copies the bytes of the decoded image into the supplied buffer. It does not do so in any particular file format, let alone JPEG.
How did the size differ so much even though I gave the compression quality as 100?
Because the first approach applies no compression. 100 for JPEG quality does not mean "not compressed".
Also, why was the file not recognizable by any image viewer in Method 1?
Because the bytes copied to the buffer are not written in any particular file format, and certainly not JPEG. That buffer is not designed to be written to disk; rather, it is designed to be used only to re-create the bitmap later on (e.g., for a bitmap passed over IPC).
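If the goal is a JPEG byte[] rather than a direct stream write, a minimal sketch using the standard android.graphics.Bitmap API (note that quality 100 is still lossy JPEG):

// Encode the bitmap as JPEG in memory; the resulting byte[] is the contents of a valid .jpg file.
static byte[] toJpegBytes(Bitmap bitmap) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos);
    return baos.toByteArray();
}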
I've implemented an Android app that implements the CvCameraListener interface. In the onCameraFrame(Mat inputFrame) method I process the captured inputFrame from the camera.
Now to my problem: is there a way I can use a saved video file on my phone as input instead of getting the frames directly from the camera? That is, I would like to read a video file frame by frame in Mat format.
Is there a possible way to do that?
Thanks for your answers.
This is not tested and I don't have much experience with OpenCV on Android, but you can try something like this:
// FD: a file descriptor or path to the video file.
Bitmap myBitmapFrame;
Mat myCVMat = new Mat();
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(FD);
    // timeUs is the position of the wanted frame, in microseconds
    myBitmapFrame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
    Utils.bitmapToMat(myBitmapFrame, myCVMat);
} catch (Exception e) {
    e.printStackTrace();
} finally {
    retriever.release();
}
You may have to implement some callback system, since you can only work with OpenCV after it has been initialized. Also, you can convert a frame number to a time-code, as sketched below.
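A rough sketch of that frame-number-to-time conversion, assuming a constant, known frame rate (frameRate and totalFrames are assumptions here; MediaMetadataRetriever addresses frames by timestamp in microseconds, not by index):

// Convert frame indices to microsecond timestamps for getFrameAtTime().
// Assumes a constant frame rate; variable-frame-rate videos will drift.
double frameRate = 30.0;   // assumed; read it from the video metadata if available
for (int frameNo = 0; frameNo < totalFrames; frameNo++) {
    long timeUs = (long) (frameNo * 1_000_000L / frameRate);
    Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
    if (frame == null) break;              // no frame at or after this position
    Mat mat = new Mat();
    Utils.bitmapToMat(frame, mat);
    // ... process 'mat' just as you would in onCameraFrame() ...
}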
Good Luck and Happy Coding. :)
Background
We have just swapped to the UIL library, which seems fantastic. Unfortunately, we have to support CMYK images (against our will) and have attempted to modify an existing ImageDecoder called BaseImageDecoder.
The code for this can be found here: http://pastebin.com/NqbSr0w3
We had an existing AsyncTask (http://pastebin.com/5aq6QrRd) that used an ImageMagick wrapper described in this SO post (Convert Image byte[] from CMYK to RGB?). This worked fine in our previous setup.
The Problem
The current decoder fails to load the cached image from the file system, which results in a decoding error. We have looked through the source code and believe we are using the right functions. We also thought that adding our extra level of decoding at this point in the process was ideal, since the image may already have been resized and stored on the file system.
File cachedImageFile = ImageLoader.getInstance().getDiscCache().get(decodingInfo.getImageUri());
if (!cachedImageFile.exists()) {
    Log.v("App", "FILE DOES NOT EXIST");
    return null;
}
The above check always reports that the file does not exist.
The Question
Are we wrong to process our CMYK images at this point, and if not, why can't we get the image from the file-system cache?