What does TwoPassFilter GPUImage actually do? - android

I am trying to re-create the GPUImageTwoPassFilter from GPUImage (iOS) for Android. I am working off the work done here on an Android port of GPUImage. The port actually works great for many of the filters; I have ported over many of the shaders, basically line for line, with great success.
The problem is that to port some of the filters, you have to extend GPUImageTwoPassFilter from GPUImage, which the author of the Android version hasn't implemented yet. I want to take a stab at writing it, but unfortunately the iOS version is largely undocumented, so I'm not really sure what the two-pass filter is supposed to do.
Does anyone have any tips for going about this? I have limited knowledge of OpenGL, but very good knowledge of Android and iOS. I'm definitely looking for a pseudocode-level description here.

I guess I need to explain my thinking here.
As the name indicates, rather than just applying a single operation to an input image, this runs two passes of shaders against that image, one after the other. This is needed for operations like Gaussian blurs, where I use a separable kernel to perform one vertical blur pass and then a horizontal one (cuts down texture reads from 81 to 18 on a 9-hit blur). I also use it to reduce images to their luminance component for edge detection, although I recently made the filters detect if they were receiving monochrome content to make that optional.
Therefore, this extends the base GPUImageFilter to use two framebuffers and two shader programs instead of just one of each. In the first pass, rendering happens just like it would with a standard GPUImageFilter. However, at the end of that, instead of sending the resulting texture to the next filter in the chain, that texture is taken in as input for a second render pass. The filter switches to the second shader program and runs that against the first output texture to produce a second output texture, which is finally passed on as the output from this filter.
The filter overrides only the methods of GPUImageFilter required to do this. One tricky thing to watch out for is the fact that I correct for the rotation of the input image in the first stage of the filter, but the second stage needs to not rotate the image again. That's why there's a difference in texture coordinates used for the first and second stages. Also, filters like the blurs that sample in a single direction may need to have their sampling inputs flipped depending on whether the first stage is rotating the image or not.
There are also some memory optimization and shader caching things in there, but you can safely ignore those when porting this to Android.
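To make the flow concrete, here is a rough structural sketch in plain Java/GLES20 of what such a two-pass filter has to manage. The class and method names are illustrative only (they are not the Android port's actual API), and the attribute/uniform setup inside drawQuad is assumed to be whatever the single-pass base filter already does.

    import android.opengl.GLES20;

    // Structural sketch only: the real base filter owns the program and
    // attribute handling; the names below are illustrative.
    public class TwoPassFilterSketch {

        private int firstProgram;   // compiled from the first vertex/fragment shader pair
        private int secondProgram;  // compiled from the second pair
        private final int[] firstPassFbo = new int[1];
        private final int[] firstPassTexture = new int[1];

        // Allocate the intermediate framebuffer that the first pass renders into.
        public void onOutputSizeChanged(int width, int height) {
            GLES20.glGenFramebuffers(1, firstPassFbo, 0);
            GLES20.glGenTextures(1, firstPassTexture, 0);

            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, firstPassTexture[0]);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                    0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, firstPassFbo[0]);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_2D, firstPassTexture[0], 0);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }

        // Pass 1: first program, input texture -> intermediate framebuffer.
        // Pass 2: second program, intermediate texture -> the filter's normal output.
        public void onDraw(int inputTextureId, int outputFramebufferId) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, firstPassFbo[0]);
            GLES20.glUseProgram(firstProgram);
            drawQuad(inputTextureId);            // rotation-correcting texture coordinates

            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, outputFramebufferId);
            GLES20.glUseProgram(secondProgram);
            drawQuad(firstPassTexture[0]);       // plain, non-rotating texture coordinates
        }

        private void drawQuad(int textureId) {
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            // Bind the position/texture-coordinate attributes here, exactly as the
            // single-pass base filter already does, then draw the full-screen quad.
            GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        }
    }

The second drawQuad call is where the rotation issue mentioned above shows up: it needs texture coordinates that do not rotate the image a second time.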

Related

Detecting pathways in a video using BoofCV on Android

For my application I have been looking into using BoofCV to detect if I am on a pathway or not. The pathway is just gravel so it is the color of a standard roadway. I'm not sure exactly what image processing technique to use. The BoofCV demo app has a lot of features, but I would like to know which one is appropriate for what I'm trying to do.
Ultimately I'd like to have a toast appear on the screen when I am on a pathway.
From your question, I'm guessing that you're using a regular camera, with real-time input from a moving object. In that case you may need to:
Calibrate and stabilize your input frames (since your pathway is made of gravel and the camera is moving); BoofCV provides libraries for this.
Adjust exposure, contrast, or brightness (for night/low-light cameras or low-contrast frames).
Use BoofCV's Binary Image Ops, according to your app's needs (image thresholding, binary labeling, etc.); a minimal thresholding sketch follows after this list.
Use a classifier for 2 classes ("inside pathway", "outside pathway").
Process your output and feed the results back to your "decision operator" to make a choice and guide your moving object.
More details about your project may help for a better answer.
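If it helps, here is a minimal sketch of the thresholding step from the Binary Image Ops suggestion above, assuming you have already converted a camera frame to a BoofCV GrayU8 gray image (older BoofCV releases call this class ImageUInt8). The threshold value and the 0.6 cutoff are placeholders to tune against real footage, and a real solution would still want the classifier step.

    import boofcv.alg.filter.binary.ThresholdImageOps;
    import boofcv.struct.image.GrayU8;

    public class PathwayCheck {

        // Crude "am I on the pathway?" test: threshold the gray frame and measure how
        // much of the lower part of the image (the ground in front of the camera)
        // falls into the expected gravel intensity range.
        public static boolean looksLikePathway(GrayU8 gray, int grayThreshold) {
            GrayU8 binary = new GrayU8(gray.width, gray.height);

            // Pixels at or below the threshold become 1, the rest 0. BoofCV also
            // offers automatic threshold selection if a fixed value proves too brittle.
            ThresholdImageOps.threshold(gray, binary, grayThreshold, true);

            // Only look at the bottom third of the frame, where the ground is.
            int startY = 2 * gray.height / 3;
            int hits = 0;
            for (int y = startY; y < gray.height; y++) {
                for (int x = 0; x < gray.width; x++) {
                    hits += binary.unsafe_get(x, y);
                }
            }
            double fraction = hits / (double) ((gray.height - startY) * gray.width);

            // The 0.6 cutoff is a placeholder; tune it against real footage.
            return fraction > 0.6;
        }
    }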

How to equalize brightness, contrast, histograms between two images using EMGUCV

What I am doing is attempting to use EMGU to perform an AbsDiff of two images.
Given the following conditions:
User starts their webcam and with the webcam stationary takes a picture.
User moves into the frame and takes another picture (WebCam has NOT moved).
AbsDiff works well but what I'm finding is that the ISO adjustments and White Balance adjustments made by certain cameras (even on Android and iPhone) are uncontrollable to a degree.
Therefore, instead of fighting a losing battle, I'd like to attempt some image post-processing to see if I can equalize the two.
I found the following thread but it's not helping me much: How do I equalize contrast & brightness of images using opencv?
Can anyone offer specific details of what functions/methods/approach to take using EMGUCV?
I've tried using things like _EqualizeHist(). This yields very poor results.
Instead of equalizing the histograms for each image individually, I'd like to compare the brightness/contrast values and come up with an average that gets applied to both (a rough sketch of this idea follows below).
I'm not looking for someone to do the work for me (although a code example would certainly be appreciated). I'm looking for either exact guidance or some way to point the ship in the right direction.
Thanks for your time.
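One way to sketch the averaging idea above: measure each image's mean brightness and contrast (standard deviation), average the two, and remap both images toward the shared statistics. The helper below is plain Java on 8-bit grayscale values rather than EmguCV calls (EmguCV exposes the same statistics through its mean and standard-deviation routines), so treat it as an outline of the math, not a drop-in implementation.

    // Matches the brightness (mean) and contrast (standard deviation) of two
    // grayscale images to their shared average, so neither dominates the AbsDiff.
    public class BrightnessContrastMatch {

        public static void matchToAverage(int[] pixelsA, int[] pixelsB) {
            double meanA = mean(pixelsA), meanB = mean(pixelsB);
            double stdA = stdDev(pixelsA, meanA), stdB = stdDev(pixelsB, meanB);

            double targetMean = (meanA + meanB) / 2.0;
            double targetStd = (stdA + stdB) / 2.0;

            remap(pixelsA, meanA, stdA, targetMean, targetStd);
            remap(pixelsB, meanB, stdB, targetMean, targetStd);
        }

        private static void remap(int[] p, double mean, double std,
                                  double targetMean, double targetStd) {
            double gain = std > 1e-6 ? targetStd / std : 1.0;
            for (int i = 0; i < p.length; i++) {
                int v = (int) Math.round((p[i] - mean) * gain + targetMean);
                p[i] = Math.max(0, Math.min(255, v));   // clamp to the valid 8-bit range
            }
        }

        private static double mean(int[] p) {
            double sum = 0;
            for (int v : p) sum += v;
            return sum / p.length;
        }

        private static double stdDev(int[] p, double mean) {
            double sum = 0;
            for (int v : p) sum += (v - mean) * (v - mean);
            return Math.sqrt(sum / p.length);
        }
    }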

In-App screen recording on android to capture 15 frames per second

After a lot of searching and days of experimenting, I haven't found a straightforward solution.
I'm developing an app in which the user will interact with a pet on the screen, and I want to let them save that interaction as a video.
Is there any "simple" way to capture the screen of the app itself?
I found a workaround (saving some bitmaps every second and then passing them to an encoder), but it seems too heavy. I would be happy even with a framerate of 15 fps.
It seems to be possible; there is a similar app that does this, called "Talking Tom".
It really depends on the way you implement your "pet view". Are you drawing on a Canvas? OpenGL ES? The normal Android view hierarchy?
Anyway, there is no magical "recordScreenToVideo()" like one of the comments said.
You need to:
Obtain bitmaps representing your "frames".
This depends on how you implement your view. If you draw yourself (Canvas or OpenGL), then save your raw pixel buffers as frames.
If you use the normal view hierarchy, override onDraw in your view and save the "frames" you get on the canvas (a minimal frame-grabber sketch is included further down). The system only calls onDraw when the view actually needs redrawing, so you may end up with fewer frames than you want; if needed, duplicate frames afterwards to fill out a 15 fps video.
Encode your frames. It seems like you already have a mechanism to do that; you just need it to be more efficient.
Ways you can optimize encoding:
Cache your bitmaps (frames) and encode afterwards. This will only work if the expected video is relatively short; otherwise you'll run out of storage.
Record only at the framerate that your app actually generates (depending on the way you draw) and use an encoder parameter to generate a 15fps video (without actually supplying 15 frames per second).
Adjust quality settings to current device. Can be done by performing a hidden CPU cycle test on app startup and defining some thresholds.
Encode only the most relevant portion of the screen.
Again, it really depends on the way you implement it: if you can save some "history data" and then convert that to frames without having to do it in real time, that would be best.
For example, "move", "smile", "change color", or whatever your business logic is (you didn't elaborate on that). Your "generate movie" function would then animate this history data as a frame sequence (without drawing to the screen) and encode it.
Hope that helps

Android image and color blending using iOS blend modes

I am currently porting an application from iOS to Android, and I have run into some difficulties when it comes to image processing.
I have a filter class that is composed of ImageOverlays and ColorOverlays, which are applied in a specific order to a given base Bitmap. Each ColorOverlay has an RGB color value, a BlendModeId, and an alpha value. Each ImageOverlay has an image Bitmap, a BlendModeId, and an alpha/intensity value.
My main problem is that I need to support the following blend modes taken from iOS:
CGBlendModeNormal
CGBlendModeMultiply
CGBlendModeScreen
CGBlendModeOverlay
CGBlendModeDarken
CGBlendModeLighten
CGBlendModeColorDodge
Some of these have corresponding PorterDuff.Mode types in Android while others do not. What's worse, some of the modes that do exist were introduced in recent versions of Android and I need to run on API level 8.
Trying to build the modes from scratch is extremely inefficient.
Additionally, even with the modes that do exist in API 8, I was unable to find methods that blend two images while letting you specify the intensity of the mask (the alpha value from ImageOverlay). The same goes for ColorOverlays.
The iOS functions I am trying to replicate in Android are
CGContextSetBlendMode(...)
CGContextSetFillColorWithColor(...)
CGContextFillRect(...) - This one is easy
CGContextSetAlpha(...)
I have started looking at small third party libraries that support these blend modes and alpha operations. The most promising one was poelocesar's lib-magick which is supposedly a port of ImageMagick.
While lib-magick did offer most of the desired blend modes (called CompositeOperator) I was unable to find a way to set intensity values or to do a color fill with a blend mode.
I'm sure somebody has had this problem before. Any help would be appreciated. BTW, Project specifications forbid me from going into OpenGLES.
Even though I helped you via e-mail, I thought I'd post to your question too in case someone wanted some more explanation :-)
Android 2.2 is API level 8, which supports the "libjnigraphics" library in the NDK; that gives you access to the pixel buffer of Bitmap objects. You can do these blends manually; they are pretty simple math calculations and can be done really quickly.
Check out this site for Android JNI bitmap information.
It's really simple: just create a JNI method blend() with any parameters you need (either the color values or possibly another Bitmap object to blend with), lock the pixel buffer for that bitmap, do the calculation needed, and unlock the bitmap.
Care needs to be taken with the format of the bitmap in memory, though, as the shifting/calculation for 565 will be different than for 8888. Keep that in mind if it doesn't look exactly correct!
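To show how simple the math is, here is the multiply mode with an intensity fade, written in plain Java on ARGB_8888 pixel arrays so the formula is easy to read; the same arithmetic ports directly to the JNI version, which is where you would actually run it for speed.

    // Multiply blend with an intensity fade: blended = base * overlay / 255,
    // then mixed back toward the original base pixel by 'intensity' (0..1).
    public class MultiplyBlend {

        public static void blend(int[] base, int[] overlay, double intensity) {
            for (int i = 0; i < base.length; i++) {
                int b = base[i];
                int o = overlay[i];

                int r  = blendChannel((b >> 16) & 0xFF, (o >> 16) & 0xFF, intensity);
                int g  = blendChannel((b >> 8) & 0xFF, (o >> 8) & 0xFF, intensity);
                int bl = blendChannel(b & 0xFF, o & 0xFF, intensity);

                // Keep the base alpha, replace the color channels.
                base[i] = (b & 0xFF000000) | (r << 16) | (g << 8) | bl;
            }
        }

        private static int blendChannel(int base, int overlay, double intensity) {
            int multiplied = (base * overlay) / 255;                 // multiply blend
            return (int) (base + (multiplied - base) * intensity);   // fade by intensity
        }
    }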
It turned out that implementing it in JNI wasn't nearly as painful as previously expected. The following link had all the details:
How does photoshop blend two images together?

Manipulating android activity window using renderscript?

I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like 3dCarousel, with an activity running in every window?
I have been researching this for a long time, and all the examples I found are for manipulating bitmaps on the screen. If that is true, and RenderScript is only meant for images, then what is used in SPB Shell 3D? Or are those panels not actual activities?
It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge, there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities; the control actually works the other way around. You could, however, build a series of fragments containing RenderScript surface views, but the processing load of that would be horrific, to say the least. I am also unsure how you would take those fragments or activities and draw them as a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be (and likely am) mistaken in that regard.
