OpenGL multiple model rendering overlapping each other rather than displaying normally - Android

I am using OpenGL and Eclipse to build an Android app that loads PLY models and renders them. But when I tried rendering two files together, one being transparent and the other being opaque, the result I got was rather abnormal.
front view
As you can see, the hair is overlapping the face rather than simply displaying.
Please help.

For the body, you'll want to disable blending and enable depth testing, as you said.
For the hair, you need to enable alpha blending, of course, but you still need depth testing enabled; otherwise all of the hair will be visible regardless of whether it's behind the body or not.
But if the frontmost strand of hair is rendered first, all hair behind it won't get drawn at all, since the depth test now fails.
The solution is to keep depth testing enabled but disable writing to the depth buffer with glDepthMask. This renders everything that's in front of the body, regardless of ordering, but nothing that's behind it.
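In onDrawFrame that ordering looks roughly like this (a minimal sketch assuming GLES 2.0; drawBody() and drawHair() are placeholders for however you actually render your PLY models):

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

public class SceneRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        // 1. Opaque body: depth test on, depth writes on, blending off.
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);
        GLES20.glDepthMask(true);
        GLES20.glDisable(GLES20.GL_BLEND);
        drawBody();   // placeholder for your opaque PLY model draw call

        // 2. Transparent hair: keep the depth test so hair behind the body
        //    is rejected, but disable depth writes so strands drawn first
        //    don't block strands behind them.
        GLES20.glDepthMask(false);
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        drawHair();   // placeholder for your transparent PLY model draw call

        // Restore depth writes so next frame's depth clear works.
        GLES20.glDepthMask(true);
    }

    private void drawBody() { /* bind program/buffers and draw the body */ }
    private void drawHair() { /* bind program/buffers and draw the hair */ }
}
```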

Related

How to stop objects being pixelated in Gear VR app?

I imported the Vuforia sample for Gear VR in Unity and replaced the objects with Blender objects. When I put the object very close to the camera in my scene, it looks alright. But when I am in the AR mode, or I look at the object from a distance, the edges of the objects seem very pixelated and move a little bit (blink). I have anti-aliasing on the highest setting, but that didn't change a thing. The blinking would indicate to me that the depth buffer is confused, but since I am new to Unity I have no idea what to do about that. I also read about mipmapping and that the resolution of the textures might be wrong, but I use plain colored materials, so I can't imagine which settings to change. Please help! Any suggestion would be very welcome!
You should set your rendering scale from the default 50% to 100%. Do you have any screenshots? They would help.
How do I do that?
If you are using Oculus SDK 1.6+, you might want to check the OVRManager and make sure that "Use Recommended MSAA" is turned off. You can also try adjusting the "Min & Max Render Scale".
https://i.stack.imgur.com/yrya1.png

Android - drawLines with hardware acceleration and antialiasing causes artifacts

I am working on an Android custom graph view that uses Canvas#drawLines and a paint object that has antialiasing turned on. My view has hardware acceleration turned on. Occasionally when I pinch zoom in/out, some of the lines in my graph will appear disjointed and they sort of taper off into a gradient. If I change to software layer or disable antialiasing, the issue goes away. Is this a bug with drawLines or does someone have an idea of what might be going on?
The first image exhibits the issue; the second image was moved slightly and demonstrates how the graph looks most of the time, with fully joined lines.
(image demonstrates issue)
(image showing how the graph should look - still a couple of minor gaps)
I think this post by Romain Guy answers part of your question: http://android-developers.blogspot.com/2011/03/android-30-hardware-acceleration.html
Essentially, anti-aliasing is not supported by drawLines when hardware acceleration is turned on. Also remember that hardware acceleration won't always be 'better' for your app. If what you are drawing can be accelerated, your app will benefit from it, but for certain operations it might be worse.
I believe that explains why your lines appear disjointed when hardware accelerated. I'm not too sure it explains why it works when you turn anti-aliasing off, though. I'd imagine it would appear disjointed even with anti-aliasing off, but clearly that is not the case!
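If you need the anti-aliased look, the usual workaround is to either turn anti-aliasing off on the Paint or drop just this one view back to a software layer. A minimal sketch of both options (GraphView and linePaint are illustrative names, not your actual code):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class GraphView extends View {   // stands in for your custom graph view
    private final Paint linePaint = new Paint();

    public GraphView(Context context, AttributeSet attrs) {
        super(context, attrs);
        linePaint.setStrokeWidth(2f);

        // Option A: keep hardware acceleration but give up anti-aliasing
        // on the batched lines.
        linePaint.setAntiAlias(false);

        // Option B: keep anti-aliasing and render only this view in
        // software, leaving the rest of the window accelerated.
        // setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // canvas.drawLines(points, linePaint);  // points computed elsewhere
    }
}
```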
Try forcing a refresh after the resize gestures.
Have a look at my old Accelerometer Toy app. (Yeah, it REALLY needs updating...) If you don't see the problem with that app then I can probably help.

What does TwoPassFilter GPUImage actually do?

I am trying to re-create the GPUImageTwoPassFilter from GPUImage (iOS) for Android. I am working off the work done here for an Android port of GPUImage. The port actually works great for many of the filters. I have ported over many of the shaders, basically line for line, with great success.
The problem is that to port some of the filters, you have to extend the GPUImageTwoPassFilter from GPUImage, which the author of the Android version hasn't implemented yet. I want to take a stab at writing it, but unfortunately the iOS version is largely undocumented, so I'm not really sure what the two-pass filter is supposed to do.
Does anyone have any tips for going about this? I have limited knowledge of OpenGL, but very good knowledge of Android and iOS. I'm definitely looking for a pseudocode-level description here.
I guess I need to explain my thinking here.
As the name indicates, rather than just applying a single operation to an input image, this runs two passes of shaders against that image, one after the other. This is needed for operations like Gaussian blurs, where I use a separable kernel to perform one vertical blur pass and then a horizontal one (cuts down texture reads from 81 to 18 on a 9-hit blur). I also use it to reduce images to their luminance component for edge detection, although I recently made the filters detect if they were receiving monochrome content to make that optional.
Therefore, this extends the base GPUImageFilter to use two framebuffers and two shader programs instead of just one of each. In the first pass, rendering happens just like it would with a standard GPUImageFilter. However, at the end of that, instead of sending the resulting texture to the next filter in the chain, that texture is taken in as input for a second render pass. The filter switches to the second shader program and runs that against the first output texture to produce a second output texture, which is finally passed on as the output from this filter.
The filter overrides only the methods of GPUImageFilter required to do this. One tricky thing to watch out for is the fact that I correct for the rotation of the input image in the first stage of the filter, but the second stage needs to not rotate the image again. That's why there's a difference in texture coordinates used for the first and second stages. Also, filters like the blurs that sample in a single direction may need to have their sampling inputs flipped depending on whether the first stage is rotating the image or not.
There are also some memory optimization and shader caching things in there, but you can safely ignore those when porting this to Android.
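This isn't the actual GPUImage source, but a stripped-down sketch of that flow in raw GLES20 may help when porting. Here programOne, programTwo and drawFullScreenQuad() are placeholders for things the Android port already manages per filter:

```java
import android.opengl.GLES20;

// Sketch of a two-pass filter: render the input through the first shader
// into an intermediate FBO, then render that result through the second
// shader to the real output.
public class TwoPassSketch {
    private int programOne, programTwo;      // assumed to be linked elsewhere
    private int intermediateFbo;             // FBO owned by this filter
    private int intermediateTexture;         // colour attachment of that FBO

    public void setupIntermediateTarget(int width, int height) {
        int[] ids = new int[1];

        // Texture that will receive the first pass.
        GLES20.glGenTextures(1, ids, 0);
        intermediateTexture = ids[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, intermediateTexture);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        // Framebuffer that renders into that texture.
        GLES20.glGenFramebuffers(1, ids, 0);
        intermediateFbo = ids[0];
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, intermediateFbo);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D,
                intermediateTexture, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    // inputTexture is the texture handed to this filter; outputFbo is where
    // the next filter in the chain expects to read from (0 = screen).
    public void draw(int inputTexture, int outputFbo) {
        // Pass 1: input texture -> intermediate FBO, using the first shader.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, intermediateFbo);
        GLES20.glUseProgram(programOne);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, inputTexture);
        drawFullScreenQuad();

        // Pass 2: intermediate texture -> real output, using the second shader.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, outputFbo);
        GLES20.glUseProgram(programTwo);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, intermediateTexture);
        drawFullScreenQuad();
    }

    private void drawFullScreenQuad() {
        // Bind vertex/texcoord buffers and glDrawArrays here; omitted for brevity.
    }
}
```

Note that the second pass would use the non-rotating texture coordinates mentioned above, since the rotation correction already happened in the first pass.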

How to fix intermittent/jerky android paint updates

(I tried to stuff the question with keywords in case someone else has this issue - I couldn't find much help.)
I have a custom View in Android that contains an LED bargraph that displays levels received via socket communication. It's basically just a clipped image. The higher the level, the less clipped the image is.
When I update the level and then invalidate the View, some devices seem to "collect" multiple updates and render them in chunks. The screen visibly hesitates for say 1/10th of a second, then rapidly paints multiple frames, and then hesitates again. It looks like it's overwhelmed and dropping frames.
However, when changing another UI control on the screen, the LED bargraph paints much more frequently and smoothly. I'm thinking Android is trying to help me by "collecting" multiple invalidations and then doing them all at once. Perhaps by manipulating controls, I'm "increasing" my frame rate simply by giving it "more to do" so it delays less between actual paints.
Unlike animation (with smooth transitions), I want to show the absolute latest value as quickly as possible. My data samples aren't faster than 10-20 fps anyway.
Is there an easy way to "force" a paint at certain points, or is this a limit of how Views work? Should I be implementing this in a SurfaceView instead? (I have not played with that yet... want advice first.) Thanks in advance for suggestions.
(Later that same day...)
Update: I found a page in the Docs that does suggest implementing my widget as a SurfaceView is the way to go:
http://developer.android.com/guide/topics/graphics/2d-graphics.html
(An hour after that...)
SurfaceView seems overkill for what I want to do. The best-practice method is to "own" the whole canvas, but I have already developed the rest of my controls and layouts and they work well. It must be possible to get some better performance with what I have, especially since interacting with the UI makes the redraw speed satisfactory.
It turns out SurfaceView was the way to go. I was benchmarking on an older phone, which didn't help. (The frame rate using a standard View was fine on an ASUS eeePad.) I had to throw away some code, but the end result is smoother and faster with SurfaceView. Further, I was able to re-use more code than I expected, and I actually dramatically simplified my multitouch handling code (since everything I want to touch is in the same SurfaceView).
FYI: I'm still only getting about 15fps on Droid X, but half of the CPU load appears to be data packet processing. The eeePad is doing almost 40fps now -- and my data rate is only 20 samples/sec.
So... a win I guess. I want the Droid X to run better, but it flies on a real tablet.
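For anyone else landing here, the pattern that ended up working is the classic SurfaceView plus dedicated drawing thread. This is a minimal sketch, not my actual code; LevelMeterView, setLevel() and drawBargraph() are illustrative names:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class LevelMeterView extends SurfaceView implements SurfaceHolder.Callback {
    private volatile float level;        // latest sample, written by the socket code
    private volatile boolean running;
    private Thread drawThread;

    public LevelMeterView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    public void setLevel(float newLevel) {
        level = newLevel;                // just store it; the loop below paints it
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        drawThread = new Thread(() -> {
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas != null) {
                    try {
                        drawBargraph(canvas, level);   // clipped LED image draw
                    } finally {
                        holder.unlockCanvasAndPost(canvas);
                    }
                }
            }
        });
        drawThread.start();
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try {
            drawThread.join();
        } catch (InterruptedException ignored) { }
    }

    private void drawBargraph(Canvas canvas, float currentLevel) {
        // clip and draw the LED image according to currentLevel; omitted here
    }
}
```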

Weird graphical effect on Android

I'm developing a small Android maze game and I'm experiencing a strange effect which I can only describe via screenshot: http://www.virtualalbum.eu/fu/39/cepp1523110123182951.jpg
At first I thought I needed to set up anti-aliasing, but the advice I followed to enable it changed nothing, and the effect appears to be a little too pronounced to be that anyway.
The labyrinth is composed of rectangular pieces for the walls and small square-based pillars between walls and on edges, plus a big square as the floor.
There are 4 lights; I don't know if that matters.
I've been thinking about removing the small pillar faces adjacent to walls as you shouldn't see them anyway, but that would mean writing a lot of code and still wouldn't fix the zigzag with the floor.
Thanks a lot,
J
EDIT: After some more testing I'm starting to think it may be a z-fighting issue. Does anyone have any idea how to increase the depth buffer precision on Android?
I managed to fix it. Setting gl.glDepthFunc(GL10.GL_LEQUAL); made the zigzag on the floor disappear (as it's the first thing I draw). I was still having issues with the walls, but for those I wrote some extra code (it wasn't that much after all), and I'm also saving some triangles.
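For reference, here's roughly where that change sits in a GL10 renderer (MazeRenderer is an illustrative name and the actual scene drawing is omitted):

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

public class MazeRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_DEPTH_TEST);
        // GL_LEQUAL lets fragments at exactly the same depth as already-drawn
        // geometry (the floor, drawn first) pass instead of z-fighting with it.
        gl.glDepthFunc(GL10.GL_LEQUAL);
        gl.glClearDepthf(1.0f);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        // draw the floor, then the walls and pillars
    }
}
```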
