Android OpenGL ES read PHONE inconsistent with EMULATOR - android

Hey,
I wrote a small client application that receives images from a server and then displays them on a rotating cube using OpenGL ES.
This works just fine in the emulator, but on a real phone (SGS) blank white images are displayed instead.
What could be the problem?
The photos are saved using:
fos = openFileOutput(i+".jpg",MODE_WORLD_READABLE);
and then read back and converted to a Bitmap using:
File myImage= context.getFilesDir();
String imgPath=myImage.getAbsolutePath();
BitmapDrawable bmd = new BitmapDrawable(imgPath+"/"+face+".jpg");
bitmap[face]= bmd.getBitmap();
The rendering code used is the same as in Example 6a: Photo-Cube, under MYGLRenderer.java.
Thanks in advance.

This sounds similar to a situation I faced some time ago. The reason was that I hadn't resized the Bitmap to power-of-two dimensions before sending it to the GL context.
https://gamedev.stackexchange.com/questions/10829/loading-png-textures-for-use-in-android-opengl-es1
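Roughly what that fix looked like for me; a minimal sketch, and the helper names here are just illustrative (they are not part of the Photo-Cube example):
// Scale a bitmap up to the next power-of-two size so older GPUs
// (which require power-of-two textures) accept it as a texture.
private static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) {
        p <<= 1;
    }
    return p;
}

private static Bitmap toPowerOfTwo(Bitmap src) {
    int w = nextPowerOfTwo(src.getWidth());
    int h = nextPowerOfTwo(src.getHeight());
    if (w == src.getWidth() && h == src.getHeight()) {
        return src; // already power-of-two, nothing to do
    }
    // filter = true gives a smoother result when scaling
    return Bitmap.createScaledBitmap(src, w, h, true);
}
Calling something like toPowerOfTwo(bitmap[face]) before handing the bitmap to the texture upload should make the textures show up on the phone as well.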

Related

Got wrong data from ImageReader when reading from a video?

I am writing an app to grab every frame from a video so that I can do some CV processing.
According to the Android API documentation, I should set the MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. This really does work on some devices and with some videos.
However, on my Nexus 5 (API 24-25) I get almost entirely green pixels when onImageAvailable fires.
I have checked the byte[] in the Image's YUV planes and discovered that the bytes I read from the video must be wrong: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but the grabber is too slow.)
I fixed my question!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, I get image.width == 1088 when I decode a 1920x1080 frame; I should use image.getCropRect() to get the right size of the image, which will be 1920x1080.
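A rough sketch of how the crop rectangle is used in the OnImageAvailableListener (variable names are illustrative, not from my actual code):
ImageReader.OnImageAvailableListener listener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) return;
        try {
            // The decoder may pad the buffer (e.g. 1088 instead of 1080),
            // so the valid pixels are described by the crop rectangle.
            Rect crop = image.getCropRect();
            int validWidth = crop.width();   // 1920 for a 1920x1080 video
            int validHeight = crop.height(); // 1080 for a 1920x1080 video
            // Copy only the cropped region out of the Y/U/V planes,
            // honouring each plane's rowStride / pixelStride.
        } finally {
            image.close();
        }
    }
};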

How to convert Bitmap into a Frame?

Currently I'm working on an Android program using Mobile Vision. I am using the TextRecognizer class, and one of its methods is .detect(Frame frame). Right now I have an image I want to feed into it; however, the image is a Bitmap. I have tried to convert it to a Frame by casting, but that hasn't worked. If anyone has any suggestions, it would be much appreciated.
Use the setBitmap method in the Frame.Builder class:
Frame outputFrame = new Frame.Builder().setBitmap(myBitmap).build();
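From there the Frame can be passed straight to the recognizer; a rough usage sketch (assuming a TextRecognizer built for the current context):
// Build the recognizer, check it is ready, and run detection on the frame.
TextRecognizer textRecognizer = new TextRecognizer.Builder(context).build();
if (textRecognizer.isOperational()) {
    Frame outputFrame = new Frame.Builder().setBitmap(myBitmap).build();
    SparseArray<TextBlock> blocks = textRecognizer.detect(outputFrame);
    for (int i = 0; i < blocks.size(); i++) {
        Log.d("OCR", blocks.valueAt(i).getValue());
    }
}
textRecognizer.release();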

Bitmaps from Gallery are blurred and jagged

I'm currently working on an application which requires users to upload a photo to the app from their gallery.
I've looked at a number of solutions, including coming up with my own. However, I can't figure out why the ImageView showing the final Bitmap is blurred and its edges are jagged.
As an example, I'm currently working from this SO post (the marked answer): Quality problems when resizing an image at runtime.
The photo I'm testing with was taken on my LG G3, so the quality is pretty crisp, if that helps.
I had the same problem a few months ago. Maybe this is the answer that you are looking for. Give it a try:
BitmapFactory.Options op = new BitmapFactory.Options();
op.inScaled = false;
Bitmap a = BitmapFactory.decodeFile(imgPath.getAbsolutePath(), op);
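If the decoded bitmap then needs to be shrunk to fit the ImageView, a single filtered scale usually avoids the jagged edges; a small illustrative sketch (the target size here is just an example):
// One filtered scale to the view's width; filter = true smooths the edges.
int targetW = imageView.getWidth();   // assumes the view is already laid out
int targetH = Math.round(targetW * (a.getHeight() / (float) a.getWidth()));
Bitmap scaled = Bitmap.createScaledBitmap(a, targetW, targetH, true);
imageView.setImageBitmap(scaled);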

Android: difference between BitmapFactory.decodeResource() and BitmapFactory.decodeFile() result?

I'm noticing a crash (in an external native library that does some image processing) when I pass it the pixel data returned from bitmap.getPixels().
If I package the image in the app, in the drawables folder and load the Bitmap with
BitmapFactory.decodeResource()
then grab the pixel data with
bitmap.getPixels()
there's no crash, and everything works as expected. However, if I load the same image from the file system with
BitmapFactory.decodeFile()
then grab the pixels with
bitmap.getPixels()
and hand that off, the native lib crashes.
Is there a difference between the way these two calls process the image into a Bitmap?
Reading the Android sources, there is one interesting difference: the decodeFile method may call a different native bitmap decoder if the passed file is an asset, while decodeResource will never do this.
if (is instanceof AssetManager.AssetInputStream) {
    bm = nativeDecodeAsset(((AssetManager.AssetInputStream) is).getAssetInt(),
            outPadding, opts);
}
However, the crash is most likely a bug in your native code. Messing up the stack frame with bad pointers and/or buffer overruns typically results in weird crashes like this. Try to check all the native code that runs before the crash and see if you can spot any memory issues like that.
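One way to rule out the Java side first is to check the decoded bitmap and hand the native call an exactly sized buffer; a rough sketch (processPixels is a hypothetical stand-in for your external native method):
// Sanity-check the decoded bitmap before passing its pixels to native code.
Bitmap bitmap = BitmapFactory.decodeFile(path);
Log.d("PixelCheck", "config=" + bitmap.getConfig()
        + " size=" + bitmap.getWidth() + "x" + bitmap.getHeight());

// Allocate exactly width * height ints so the native side cannot overrun it.
int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0,
        bitmap.getWidth(), bitmap.getHeight());

// processPixels(...) is a hypothetical stand-in for the external native call.
processPixels(pixels, bitmap.getWidth(), bitmap.getHeight());
If the config or size differs between the resource and file paths, that points at where the native assumptions break.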

Android: load only detail of large image

I'm working on a project that needs to use a large image as a map. The image is about 95MB and has a resolution of 12100 x 8000 pixels.
I don't need the whole image at once; I just need a detail of 1000 x 1000 pixels (it's not always the same detail, so always grabbing the same part is not a solution I can use). That also means I can't just sample it down with BitmapFactory.Options.
I looked around and found the idea of creating a FileInputStream (the image is on the SD card) and then loading just the detail with decodeStream(InputStream is, Rect outPadding, BitmapFactory.Options opts); that way I wouldn't load the whole thing into memory. I tried it, but it just crashes when I try to load the image. Here's my code:
FileInputStream stream = null;
try {
    stream = new FileInputStream(path);
} catch (Exception e) {
    Log.e("inputstream", e.toString());
}
Rect rect = new Rect(a, b, c, d);
return BitmapFactory.decodeStream(stream, rect, null);
When I try to load the image, the activity closes and LogCat shows java.lang.OutOfMemoryError. Why does it crash? I thought that with the stream it would work on the image "on the fly", but the only explanation I have for the error is that it tries to load the whole image into memory. Does anybody have an idea how I can load the detail out of the image, or why this idea doesn't work?
It crashes because all of those 95 MB are pulled into memory for processing. This call does not ignore parts of the stream: it will put the whole thing into memory and then try to manipulate it. The only solutions you have are some sort of server-side code that does the same manipulation or, if you don't want to do it on a server, providing thumbnails of your large image. And I would strongly advise against pulling the whole 95 MB at any time anyway.
Does BitmapRegionDecoder not help (I realise it's API level 10+)?
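For reference, a rough sketch of what a BitmapRegionDecoder-based load might look like, reusing the rect coordinates from the question (a sketch only, not tested against your image):
BitmapRegionDecoder decoder = null;
try {
    // isShareable = false: the decoder keeps its own reference to the input.
    decoder = BitmapRegionDecoder.newInstance(path, false);
    Rect region = new Rect(a, b, c, d);   // the 1000 x 1000 detail to load
    Bitmap detail = decoder.decodeRegion(region, new BitmapFactory.Options());
    // Use 'detail' here; only this region is decoded into memory.
} catch (IOException e) {
    Log.e("regiondecoder", e.toString());
} finally {
    if (decoder != null) {
        decoder.recycle();
    }
}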
