I want to upload an internal PNG image to my backend, and the API supplied with the backend only allows byte[] data to be uploaded.
But so far I haven't found a way of extracting byte[] data from a texture. I'm not sure whether it matters if it's an internal resource or not?
So what ways are there to achieve this using the libGDX framework?
The image I want to use is loaded with the AssetManager.
Before trying to do this, make sure to understand the following:
A Texture is an OpenGL resource which resides in video memory (VRAM). The texture data itself is not (necessarily) available in RAM, so you cannot access it directly. Transferring that data from VRAM to RAM is comparable to taking a screenshot. In general it is something you want to avoid.
However, if you load the image using AssetManager then you are loading it from file and thus have the data available in RAM already. In that case it is not called a Texture but a Pixmap. Getting the data from the Pixmap goes like this:
// Load the image file into main memory (RAM) as a Pixmap
Pixmap pixmap = new Pixmap(Gdx.files.internal(filename));
// getPixels() exposes the raw pixel data as a native ByteBuffer
ByteBuffer nativeData = pixmap.getPixels();
// Copy the native buffer into a managed byte array
byte[] managedData = new byte[nativeData.remaining()];
nativeData.get(managedData);
// Free the native memory once the data has been copied
pixmap.dispose();
Note that you can load the Pixmap using AssetManager as well (in that case you would unload it instead of disposing it). The nativeData contains the raw memory; most APIs can use that as well, so check whether you can use it directly. Otherwise you can use the managedData managed byte array.
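For completeness, here is a minimal sketch of the AssetManager variant, assuming the image was queued under an illustrative path "image.png":
AssetManager manager = new AssetManager();
// Load the file as a Pixmap asset (a Pixmap loader is registered by default)
manager.load("image.png", Pixmap.class);
manager.finishLoading();

Pixmap pixmap = manager.get("image.png", Pixmap.class);
ByteBuffer nativeData = pixmap.getPixels();
byte[] managedData = new byte[nativeData.remaining()];
nativeData.get(managedData);

// Unload via the manager instead of disposing the pixmap directly
manager.unload("image.png");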
In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format specification, and it looks like the depth data is stored in the JPG file as XMP metadata. Now my question is how to get this depth map from the file, or directly from an Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH jpeg, the depth image is stored in the bytes immediately after the main jpeg image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers.
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following (a simplified sketch of the marker scan follows the list):
Uses com.drew.imaging.ImageMetadataReader to read the EXIF.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you don't necessarily need, depending on your use case).
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
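To make the marker search concrete, here is a simplified sketch (not the linked parser itself): it finds the end-of-image marker (0xFF 0xD9) of the primary JPEG and returns everything from the next start-of-image marker (0xFF 0xD8) onward. It assumes a single trailer image; the real parser uses the XMP trailer sizes to handle several.
static byte[] extractTrailingJpeg(byte[] data) {
    // Find the first end-of-image marker, i.e. the end of the primary JPEG
    int eoi = -1;
    for (int i = 0; i < data.length - 1; i++) {
        if ((data[i] & 0xFF) == 0xFF && (data[i + 1] & 0xFF) == 0xD9) {
            eoi = i + 2;
            break;
        }
    }
    if (eoi < 0) return null;
    // The next start-of-image marker begins the trailing (depth) JPEG
    for (int i = eoi; i < data.length - 1; i++) {
        if ((data[i] & 0xFF) == 0xFF && (data[i + 1] & 0xFF) == 0xD8) {
            return java.util.Arrays.copyOfRange(data, i, data.length);
        }
    }
    return null;
}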
There is Bitmap for Android and UIImage for iOS. Is there a way to display both somehow in the Xamarin.Forms Image control?
Obviously I need the DependencyService. I will have two implementations that create either a Bitmap or a UIImage using some source, but how do I bring those two products together in a single Forms control? Both the Android and iOS methods have to return something that the Image control can understand and display. I don't know what that might be.
Edit: I'm looking for a way that doesn't use storage space, if possible.
Edit2:
I tried Jason's suggestion and it works fine.
I create a bitmap in the Android project and return a MemoryStream object:
// Compress the bitmap as PNG into an in-memory stream (quality is ignored for PNG)
MemoryStream stream = new MemoryStream();
newImage.Compress(Bitmap.CompressFormat.Png, 0, stream);
return stream;
Then I consume it in my Xamarin.Forms Image control:
var stream = DependencyService.Get<ICrossPlatformImageProcesor>().Combine_Images(imagePath);
stream.Position = 0;
img_ImageView.Source = Xamarin.Forms.ImageSource.FromStream(() => stream);
I will have two implementations that create either a Bitmap or a UIImage using some source, but how do I bring those two products together in a single Forms control?
You can simply use the Image control of Xamarin.Forms; images can be loaded specifically for each platform, or they can be downloaded for display.
For more information, you can refer to Working with Images.
I'm looking for a way that doesn't use storage space, if possible.
I don't quite understand this. If you mean you don't want to use memory, then I think that is not possible. If you mean your images are not saved in storage, then presumably you have an internet URL for your images?
Anyway, the Image control in Xamarin.Forms supports image sources from an ImageSource instance, a file, a URI, and resources. To load an image from a URI, you can simply write:
var webImage = new Image { Aspect = Aspect.AspectFit };
webImage.Source = ImageSource.FromUri(new Uri("https://xamarin.com/content/images/pages/forms/example-app.png"));
I am working with a customizable database with pictures. Right now I am taking pictures as-is from the SD card, encoding them as base64 Strings, and then putting them in the database. But whenever I try to decode one and show it in my view, I get an OutOfMemory error. Can anyone tell me the best procedure for this? Should I change the size of the pictures before encoding them?
I want to resize all of the pictures to 512*512.
Image-to-base64 is a very heavy operation on Android. Consider saving the images to external/internal storage and saving the file path in the SQLite database.
You can convert your image to a byte array, then store the values in SQL using the BLOB type, and vice versa.
As you mentioned you want to resize the images to 512*512, you can scale the image using the code below.
Create a bitmap from the captured image and then use the line below:
Bitmap resizedBitmap = Bitmap.createScaledBitmap(myBitmap, 512, 512, false);
This will give you a smaller image. You can also consider compressing the image to reduce its size:
OutputStream imagefile = new FileOutputStream("/your/file/name.jpg");
// Write 'bitmap' to file using JPEG and 50% quality hint for JPEG:
bitmap.compress(CompressFormat.JPEG, 50, imagefile);
Now you have two options (a minimal sketch of both follows the list):
Save the scaled and compressed image to a file and save the path of that file in the db. (The better way.)
Convert the scaled and compressed image to a base64 string and save that in the db.
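Here is a minimal sketch of both options; myBitmap, context, and the file name are illustrative, and exception handling is omitted:
// Scale down to 512*512, then compress as JPEG at 50% quality
Bitmap resized = Bitmap.createScaledBitmap(myBitmap, 512, 512, false);

// Option 1: write to a file and store only the path in the db
File file = new File(context.getFilesDir(), "photo.jpg");
OutputStream out = new FileOutputStream(file);
resized.compress(Bitmap.CompressFormat.JPEG, 50, out);
out.close();
String pathForDb = file.getAbsolutePath();

// Option 2: a byte[] for a BLOB column (or a base64 string, if you must)
ByteArrayOutputStream baos = new ByteArrayOutputStream();
resized.compress(Bitmap.CompressFormat.JPEG, 50, baos);
byte[] blobForDb = baos.toByteArray();
String base64ForDb = Base64.encodeToString(blobForDb, Base64.DEFAULT);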
Although base64 is, as many answers said, a heavy operation on Android, if done properly it should not be a problem.
There are many reasons a bitmap could be required to be saved to a DB (a photo of an invoice ticket, for example?), and this is the way I do it.
First, create a new, smaller bitmap as @Swapnil commented.
Second, correctly use the bitmap transformation methods. I've been using the two linked below so far and haven't had any memory issues on many different devices.
link to my BitmapUtils transformation methods
I am uploading an image (JPEG) from an Android phone to a server. I tried these two methods.
Method 1:
int bytes = bitmap.getByteCount();
// Copy the raw (decoded) pixel data into a buffer
ByteBuffer byteBuffer = ByteBuffer.allocate(bytes);
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] byteArray = byteBuffer.array();
outputStream.write(byteArray, 0, bytes);
Method 2:
bitmap.compress(Bitmap.CompressFormat.JPEG,100,outputStream);
In method 1, I am converting the bitmap to a byte array and writing it to the stream. In method 2, I called the compress function but gave the quality as 100 (which I guess means no loss).
I expected both to give the same result, but the results are very different. On the server, the following happened:
Method 1 (the uploaded file on the server):
A file of size 3.8MB was uploaded to the server. The uploaded file is unrecognizable; it does not open in any image viewer.
Method 2 (the uploaded file on the server):
A JPEG file of 415KB was uploaded to the server. The uploaded file was in JPEG format.
What is the difference between the two methods? How did the size differ so much even though I gave the compression quality as 100? And why was the file not recognizable by any image viewer in method 1?
I expected both to give the same result.
I have no idea why.
What is the difference between the two methods.
The second approach creates a JPEG file. The first one does not. The first one merely makes a copy of the bytes that form the decoded image to the supplied buffer. It does not do so in any particular file format, let alone JPEG.
How did the size differ so much even though I gave the compression quality as 100?
Because the first approach applies no compression. 100 for JPEG quality does not mean "not compressed"; even at quality 100, JPEG encoding is still lossy and far more compact than raw pixels.
Also why was the file not recognizable by any image viewer in method 1?
Because the bytes copied to the buffer are not being written in any particular file format, and certainly not JPEG. That buffer is not designed to be written to disk. Rather, that buffer is designed to be used only to re-create the bitmap later on (e.g., for a bitmap passed over IPC).
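For illustration, here is a minimal sketch of what that buffer is actually good for: re-creating the bitmap within the same process (the names are illustrative):
// Copy the decoded pixels out of the bitmap...
int byteCount = bitmap.getByteCount();
ByteBuffer buffer = ByteBuffer.allocate(byteCount);
bitmap.copyPixelsToBuffer(buffer);
buffer.rewind();

// ...and back into a new Bitmap with the same size and config.
// No file format (JPEG, PNG, ...) is involved at any point.
Bitmap copy = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
copy.copyPixelsFromBuffer(buffer);
If you need bytes that an image viewer can open, the second approach, compress(), is the one to use.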
This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16 MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
    Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image at 6 megapixels (6,000,000 pixels × 4 bytes) takes up 24MB of heap. (Cue memory overflow.)
Now I am very much aware of the ability to decode with parameters, to effectively reduce the size of the image. I even have a method which will scale it down to a desired size.
But what about the scenario where I want to use the camera as a quality camera?
I have no idea how to get this image into the database. As soon as I decode it, it errors.
Note: I need(?) to convert it to a Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on Android
I can only think of a couple of things that might help; none of them are optimal.
Do your rotations server-side.
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk using the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this would be time-consuming since IO operations against SD cards are not exactly fast. (A minimal sketch of the direct-save step appears at the end of this answer.)
When you decode, drop the alpha channel. This may not solve your issue, though, and if you are using a matrix to rotate the image then you would need a target/source anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel instead of 4
// Decode the raw camera data to a bitmap with no alpha channel
bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
There may be a better way to do this, but since your device is so limited in heap, etc., I can't think of one.
It would be nice if there were an optional file-based matrix method (which in general is what I am suggesting as option 2) or some kind of "paging" system for Android, but that's the best I can come up with.
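Here is a minimal sketch of option 2's first step (and of the answer below): write the already-encoded JPEG bytes straight to disk without decoding, and record the rotation as EXIF metadata rather than rotating pixels. The path and the fixed 90-degree angle are illustrative, and this only helps if whatever later displays the image honors the EXIF orientation tag.
public void onPictureTaken(byte[] data, Camera camera) {
    // The camera delivers already-encoded JPEG bytes: writing them to disk
    // requires no Bitmap and therefore no 24MB heap allocation
    File file = new File(Environment.getExternalStorageDirectory(), "photo.jpg");
    try {
        FileOutputStream out = new FileOutputStream(file);
        out.write(data);
        out.close();

        // Store the rotation as metadata instead of transforming pixels
        ExifInterface exif = new ExifInterface(file.getAbsolutePath());
        exif.setAttribute(ExifInterface.TAG_ORIENTATION,
                String.valueOf(ExifInterface.ORIENTATION_ROTATE_90));
        exif.saveAttributes();
    } catch (IOException e) {
        e.printStackTrace();
    }
}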
First save it to the filesystem, do your operations with the file from the filesystem...