Android: draw using fb0 on eink device

I've been looking around a lot and can't find anything clear. The context: I have a stylus-enabled e-ink tablet that I program on for fun, and I'd love to use a native library to read the pen events and draw directly on the framebuffer. The vendor provides one, undocumented (a .so JNI lib that you can call with just a size parameter).
I suppose this is intended to activate direct drawing on the framebuffer, possibly with a refresh, but I can't grasp how it's supposed to compose with SurfaceFlinger and Android...
Does anyone have experience with generic e-ink tricks to display from JNI via ioctl who could explain why I don't see the pixels changing unless I draw myself in Java (I can change the update mode and draw quite fast, but... I want the fastest)?
How could I verify writes to the framebuffer? Can an Android app be "overlaid" by pixels written directly to the framebuffer?

I figured it out in the end. Using strace to catch calls to fb0 via ioctl, plus lsof and remote debugging of the official drawing applications, showed me that I was wrong.
This particular tablet's vendor software, unlike the reMarkable's, does NOT draw on the framebuffer directly via magic ioctl commands. All it does is register an area of the screen as non-refreshable by Android's normal display primitives. That lets them avoid configuring each and every view in the hierarchy to be in Direct Update e-ink mode, while using the Android framework with as little custom code as possible.
I could see the command passed that way:
[pid 3997] openat(AT_FDCWD, "/dev/graphics/fb0", O_RDWR|O_LARGEFILE) = 30
[pid 3997] ioctl(30, _IOC(0, 0x6d, 0x25, 0x00), 0x1) = 0
[pid 3997] close(30)
This simply puts the screen on lockdown and stops all Android refreshes.
This means they could likely be much faster if they did draw on the framebuffer directly.
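For reference, here is a minimal C sketch that replays the call seen in the trace. The request number decodes to _IO(0x6d, 0x25); what it actually does is vendor-specific and undocumented, so treat this purely as a way to reproduce the observed ioctl, not as a documented API:

/* Replays the ioctl observed with strace on /dev/graphics/fb0.
 * The command _IO(0x6d, 0x25) and the argument 1 come straight from
 * the trace above; their semantics are vendor-specific. */
#include <fcntl.h>
#include <linux/ioctl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/graphics/fb0", O_RDWR);
    if (fd < 0) { perror("open fb0"); return 1; }

    /* dir = _IOC_NONE, type = 0x6d, nr = 0x25, size = 0, arg passed by value */
    if (ioctl(fd, _IO(0x6d, 0x25), 1) < 0)
        perror("ioctl");

    close(fd);
    return 0;
}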

I suggest you write a basic pixel-plotting program in C for the framebuffer. Run it as a standalone executable first, then wrap the same code in a JNI call. The speed difference is negligible.
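Something along these lines, as a rough sketch: it assumes /dev/graphics/fb0 is accessible (usually requires root) and a 32bpp mode; adjust the pixel packing for 16bpp (RGB565) panels, and keep in mind that on e-ink you still need the vendor's refresh ioctl to make the change visible:

/* fbplot.c - minimal framebuffer pixel plotter (sketch). */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/graphics/fb0", O_RDWR);   /* /dev/fb0 on non-Android Linux */
    if (fd < 0) { perror("open fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) { perror("ioctl"); return 1; }

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Plot a 100x100 black square at (50, 50), assuming 32 bits per pixel. */
    for (uint32_t y = 50; y < 150 && y < var.yres; y++) {
        uint32_t *row = (uint32_t *)(fb + y * fix.line_length);
        for (uint32_t x = 50; x < 150 && x < var.xres; x++)
            row[x] = 0xFF000000;   /* opaque black */
    }

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}

Running it as a plain executable first (e.g. from adb shell as root) takes JNI out of the equation; wrapping the same code in a JNI function afterwards should show that the per-pixel work dominates, not the call overhead.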

Related

Android: allocate a graphic once and move it around on the screen without rewriting pixels

I have a Google Pixel 4, rooted, and I have the AOSP code building successfully. Inside an Android app, I'd like to gralloc an extra-large area of memory (probably 4x as large as the 1080x2280 screen) and draw a simple graphic (for example, a square) into the middle of the buffer. Then, on each frame, I just want to slide the pointer around on the buffer to make it look like the square is moving around on the screen (since the rest of the buffer will be blank).
I'm not sure if this is feasible. So far, I have a completely native Android app. At the beginning of android_main(), I malloc a region 4x as large as the screen, and I draw a red square in the middle. On each frame, I call
ANativeWindow_lock()
to get the pointer to the gralloced memory, which is accessed in ANativeWindow_Buffer->bits. Then I use memcpy to copy pixels from my big malloced buffer into the bits address, adjusting the src pointer in my memcpy call to slide around within the buffer and make it seem like the square is moving around on the screen. Finally, I call
ANativeWindow_unlockAndPost()
to release CPU access and post the buffer to the display.
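For context, this is roughly what my per-frame loop looks like (a sketch: big_buf, big_stride, off_x and off_y are placeholder names for my large malloc'd buffer and the current scroll offset, and the window format is RGB_565, i.e. 2 bytes per pixel, set earlier with ANativeWindow_setBuffersGeometry()):

#include <android/native_window.h>
#include <stdint.h>
#include <string.h>

void blit_frame(ANativeWindow *win,
                const uint16_t *big_buf, int32_t big_stride, /* big buffer, row length in pixels */
                int32_t off_x, int32_t off_y)                /* current offset into the big buffer */
{
    ANativeWindow_Buffer buf;
    if (ANativeWindow_lock(win, &buf, NULL) != 0)   /* CPU access to the gralloc'd buffer */
        return;

    /* buf.stride is in pixels and may be larger than buf.width, so copy row by row. */
    for (int32_t y = 0; y < buf.height; y++) {
        uint16_t *dst = (uint16_t *)buf.bits + y * buf.stride;
        const uint16_t *src = big_buf + (off_y + y) * big_stride + off_x;
        memcpy(dst, src, buf.width * sizeof(uint16_t));
    }

    ANativeWindow_unlockAndPost(win);               /* release CPU access, queue to the display */
}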
However, I'm trying to render as fast as possible. There are 2462400 pixels (2 bytes each) for the screen, so memcpy is copying 5MB of data for each frame, which takes ~10ms. So, I want to avoid the memcpy and access the ORIGINAL pointer to the dma_buf, or whatever it is that Gralloc3 originally allocates for the GraphicBuffer (the ANativeWindow is basically just a Surface which uses a GraphicBuffer, which uses GraphicBufferMapper, which uses GrallocMapper in gralloc). This is complicated by the fact that the system appears to be triple-buffering, putting three gralloced buffers in BufferQueue and rotating between them.
By adding log statements to my AOSP build, I can see when Gralloc3 allocates buffers and how big they are. So I can allocate extra-large buffers. But it's manually adjusting the pointers and having that reflect on the display where I'm getting stuck. ANativeWindow_lock() gets a copy of the original pointer to the pixel buffer, so I figured if I can trace that call all the way down, then I can find the original pointer. I traced it down into hardware/interfaces/graphics/mapper/3.0/IMapper.hal and IAllocator.hal, which are used by Gralloc3 to interact with memory. But I don't know where to go after this. The HAL file is basically a header that's implemented by some other vendor-specific file, I guess....
Screenshot: checking out ANativeWindow_lock() using Android Studio's CPU profiler
Based on this picture, it seems like some QtiMapper.cpp file might be implementing the HAL. There are a few files called QtiMapper, in hardware/qcom. (I'm guessing I should be looking in the sm8150 folder because the Pixel 4 uses Snapdragon 855.) Then lower down, it looks like the IonAlloc::CleanBuffer and BufferManager::LockBuffer might be in the gr_ files in hardware/qcom/display/msmxxxx/gralloc folders. But I can't be sure where the calls are being routed exactly because if I try to modify or add log statements to these files, I get problems with the AOSP build. Any directions on how to mod these would be very helpful, too. If these are the actual files being used by the system, it looks like I could possibly use them for my app because I can see the ioctl and mmap calls in them.
Using the Linux Direct Rendering Manager, I was able to write directly to the display in a C file by shutting down SurfaceFlinger and mmapping some memory. See my demo here. So if I shut down the Android framework, I can accomplish what I want to do. But I want to keep Android up, because I'm looking to use the accelerometers and maybe other APIs in my app. (The goal is to use the accelerometer readings to stabilize text on the display as fast as possible.) It's also annoying because starting up the display server again does some kind of reboot.
First of all, is what I want to do even worth it? It seems like it should be, because the display on the Pixel can refresh every 10 milliseconds, and taking the time to copy the pixel memory is pointless in this case.
Second of all, does anyone know of a way I can, within my app, adjust the low-level pointer to the pixel buffer in memory and still make it push to the display?

Can an Android phone run an image processing app 24/7?

I'm developing an image processing app on Android phones, which is expected to run 24/7. I've managed to do the following:
Use the Camera2 interface to get better fps.
Grab raw frames, use RenderScript to convert them to RGB, and do the image processing with OpenCV in a background service (no preview). I get around 20 fps after conversion to RGB at 1280x960 on an LG G4.
So my questions are:
Is there anything else I need to optimize to minimize the memory and CPU usage?
Any chance that this application can run 24/7? Is delegating all the camera operations and processing to the background service sufficient to allow it to run 24/7? When I leave it running, I can still feel heat from the camera and the surrounding area.
Any suggestion would be appreciated. Thanks.
UPDATE 1
The app runs on an LG G4 using the Camera2 interface and does image processing in the background with the screen off. It got too hot and the phone turned itself off after a few hours. What can I do to overcome this?
About the second question: I think the app cannot run 24/7, because the phone will shut itself down due to the heat.
Before answering your question I must say that I am also new to image processing on Android (but not to the image processing field).
For question one:
Maybe. Because image processing tasks are very memory intensive, you may need to optimize your app to avoid things like memory leaks (even though the Android runtime performs routine garbage collection).
Check the following links; they may be useful:
link one
When it comes to pixel-level operations (when you avoid the built-in functions of OpenCV, or whatever library you are using, and access and process pixels manually), it will be too slow. I am saying this based on my experience on my laptop. Assuming you are using OpenCV for your app, take a look at the following OpenCV page (it is for Python, but you can get the idea):
take an idea from this
and also this SO answer: SO answer
A tip I can remember: try to reduce Mat copying (copying Mat objects into other Mat objects) as much as you can.
Question number two:
I will go with the answer given by user7746903. That answer also relates to the memory consumed by your background app; there will be other, more memory-intensive apps running in the background, so it depends. Thank you.
For the first question:
I feel it's worth mentioning that you should bypass Java as much as possible, i.e. use Java only as the interface layer and do the processing loop in JNI C.
e.g.:
Get the texture from Camera2 > supply the texture to a C function > call RenderScript / compute shaders and other processing functions from C > call a Java function to render to the screen.
This improves CPU performance and reduces memory warnings (especially with rapid allocation and freeing of memory).
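As a rough illustration of that idea (all names here are made up for the sketch), the Java side hands each frame to native code as a direct ByteBuffer, e.g. a plane obtained from an ImageReader, so the per-frame work happens in C without extra Java-side allocations:

/* Hypothetical JNI bridge: Java is only the interface layer, C does the loop. */
#include <jni.h>
#include <stddef.h>
#include <stdint.h>

JNIEXPORT void JNICALL
Java_com_example_vision_NativeProcessor_processFrame(JNIEnv *env, jclass clazz,
                                                     jobject frame,      /* direct ByteBuffer (e.g. Y plane) */
                                                     jint width, jint height, jint rowStride)
{
    uint8_t *pixels = (uint8_t *)(*env)->GetDirectBufferAddress(env, frame);
    if (pixels == NULL)
        return;  /* not a direct buffer */

    /* Example in-place operation: a simple threshold on the luma plane. */
    for (jint y = 0; y < height; y++) {
        uint8_t *row = pixels + (size_t)y * rowStride;
        for (jint x = 0; x < width; x++)
            row[x] = row[x] > 128 ? 255 : 0;
    }
}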

Is it possible to debug shaders in Android OpenGL ES 2?

Is there a way to debug shaders (fragment and vertex) in an Android application with OpenGL ES 2?
Since we only pass a string of code and a bunch of variables to be replaced with handles, it is very tedious to find the proper changes that need to be made.
Is it possible to write to the Android log, as with Log.d()?
Is it possible to use breakpoints and inspect the current values in the shader calculations?
I am simply not used to writing code with a pen anymore, and that's what it feels like to code inside the shader text.
This is an old question but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or debugging like we do in Java or Kotlin is not possible, this doesn't mean shaders cannot be debugged at all. There used to be a tool in the now-deprecated Android Monitor that let you see a trace of your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include debugging frame by frame in a future update.
If the Android GPU Inspector doesn't have what you need, you can go with vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.)
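While you still can't print from inside a running shader, the GLSL compiler's error log can at least be pulled out and written to logcat from native code, which helps with the "find the proper changes" part when a shader fails to compile. A minimal sketch, assuming an NDK GLES2 setup (the log tag is arbitrary):

#include <GLES2/gl2.h>
#include <android/log.h>

static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);  /* compiler error text */
        __android_log_print(ANDROID_LOG_ERROR, "ShaderDebug", "compile failed: %s", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}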
No. Remember that the GPU is going to execute every program millions of times (once per vertex, and once per fragment), often with hundreds of threads running concurrently, so any concept of "connect a debugger" is pretty much impossible.

Can a smartphone (likely Android) be used as an image processing unit?

I'm new to image processing.
I have a camera (not the one built into the smartphone) that would use a smartphone (likely Android) as its processing unit. The camera will be placed on the car's back or maybe the car's roof (let's call this car X), and the smartphone should alert if any other car approaches car X or if another car drives strangely (swerves right and left)...
My question is: can I use a smartphone as the processing unit for this kind of purpose, or will I need a server that processes the images and sends the result to the smartphone?
1 - If you think that a smartphone (likely Android) could NOT manage this kind of image processing, please tell me why.
2 - If you think that a smartphone (likely Android) COULD manage this, what tools can I use for the purpose?
It certainly can be done. I've used an Eee PC (1.4 GHz Atom processor) for image processing (3D reconstruction) and it worked very well. The system as a whole wasn't powerful enough, but the issue there was other stuff not directly related to the image processing portion (path finding, etc.). Depending on what you're going to do, you shouldn't have any issues processing images at 15, 30 or even 60 Hz.
As a note: have you ever checked Android's camera app (the default one)? Newer versions offer a "background" mode for video recording, replacing the actual backdrop with other videos. This is essentially image processing.
As for tools: I'm not sure if there's an OpenCV port yet, but this really depends on what you want to do (and how). Simple tracking, depth detection, etc. can definitely be done without such libraries and without having to rewrite too much.

Android Skia remote surface

Hi, I was researching the possibility of transporting the "not yet rendered" rendering calls to a second screen from Android. While researching I found out that Skia is behind SurfaceFlinger and the Canvas.draw() method. So my question is: what would be the best interception point to branch off the calls in order to use them for a second screen / machine? The second device does not have to be a pure replay device, but can be another Android device.
First I used VNC for that concept, but quickly found out that it performs badly due to the double-buffering effect. It is also possible to modify the Android code so that it omits the double buffering, but it is still of interest to actually use the pre-rendered calls on a second, possibly scaled, device.
Thanks
