Continuously capture Android screen for processing in an app - android

I need a live stream or some method to see what is currently being displayed on screen.
I do not want to save a screenshot to a file, or record a video etc.
I do not want to stream the screen to another device (I'm not trying to screencast/Miracast); I need the screen contents locally, in an Android application.
Why would I need that?
I want to write an Android application that manages a Lightpack by adapting the lights to what is being displayed on the Android device.
Requiring root is fine (I expected root to be required).
Needing additional lower level binaries is also acceptable.
The image does not have to match the screen exactly; a much lower resolution will still be acceptable.
If you say it can't be done, you're wrong. If you download the Prismatik app for Android you'll see exactly what I want: it grabs the screen and manages my Lightpack without any performance issues while doing so. I just want to create my own application that does something similar, except I will make it an open source GitHub project...
Edit: I think using /dev/graphics/fb0 could be part of an answer. I don't know what the performance of using it will be, though.

I have found an answer to my own question.
Android (and Linux in general) has a framebuffer device (/dev/graphics/fb0 on Android) which you can read, and which contains the current framebuffer for the entire screen. In most cases it will actually contain 2 frames, for double buffering. This file changes constantly as the screen contents change (for what I want to use it for, that is fine; I don't need the exact sub-millisecond contents, I only need to roughly know what colors are displayed where on screen).
This file contains the graphics in a raw format. On my Galaxy S3 it contained 4 bytes per pixel.
If you need to know how many bytes you should read to get your current screen (only 1 frame), you need to do the math. My screen is 720x1280 pixels.
Therefore I need to read:
720 x 1280 pixels = 921,600 pixels
921,600 pixels * 4 bytes per pixel = 3,686,400 bytes
From this you can extract each color channel, and do the calculations you need.
The method you use to read the data is really up to you. You can write a bit of native code to read the buffer and do the manipulations, or you can read the buffer from Android code with an InputStream (see the sketch below).
Depending on the ROM loaded on your device you might need root to access this file (in most cases you will need root).
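To make that concrete, here is a minimal sketch (not part of the original answer) of reading one frame with an InputStream and averaging the colour channels. It assumes a 720x1280 screen, 4 bytes per pixel in RGBA order, and that the process is allowed to read /dev/graphics/fb0 (on most ROMs that means running as root); verify all of these on your own device.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class FramebufferSampler {
    // Assumed geometry and pixel layout; check fb0's actual format on your device.
    private static final int WIDTH = 720;
    private static final int HEIGHT = 1280;
    private static final int BYTES_PER_PIXEL = 4; // e.g. RGBA_8888

    // Reads one frame from the framebuffer and returns the average colour as 0xRRGGBB.
    public static int averageColor() throws IOException {
        byte[] frame = new byte[WIDTH * HEIGHT * BYTES_PER_PIXEL];
        DataInputStream in = new DataInputStream(new FileInputStream("/dev/graphics/fb0"));
        try {
            in.readFully(frame); // only the first frame; the back buffer follows it
        } finally {
            in.close();
        }
        long r = 0, g = 0, b = 0;
        for (int i = 0; i < frame.length; i += BYTES_PER_PIXEL) {
            r += frame[i] & 0xFF;
            g += frame[i + 1] & 0xFF;
            b += frame[i + 2] & 0xFF;
        }
        int pixels = WIDTH * HEIGHT;
        return ((int) (r / pixels) << 16) | ((int) (g / pixels) << 8) | (int) (b / pixels);
    }
}

For a Lightpack-style use you would average per screen region rather than over the whole frame, but the reading pattern is the same.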

Related

Play and split video on a wall of screens

I am working on an Android TV app project. I need to split one video across 2, 3 or X screens in equal parts. Each screen has an Android TV stick plugged into it, running my app.
For example:
If we have 2 screens each one will show 50% of the video played.
If we have 3 screens each one will show 33.33% of the video played.
If we have 4 screens each one will show 25% of the video played.
Here is one image to have a better understanding of my expectations:
The video is played simultaneously on each screen of the wall, and I have already thought about this point: one screen will be the NTP (Network Time Protocol) master and the other screen(s) will be the slave(s), to keep the players synchronized.
My first idea is to have the complete video in each app, play it, and make only the part that I need visible. How can I achieve that? Is it possible?
Thank you in advance for your help.
I'm not clear how you'll handle the height (e.g., if you have a 1080p video but span it across four screens, you're going to have to cut off 3/4 of the pixels to "zoom in" on it across the screens), but some thoughts:
If you don't have to worry about HDCP, an HDMI splitter might work. If not, but it's for a one-off event (e.g., setting up a kiosk for a trade show), then it's probably least risky and easiest to create separate video files with them actually split how you'd want. If this has to be more flexible/robust, then it's going to be a bit of a journey with some options.
Simplest
You should be able to set up a SurfaceView as large as you need with the offsets adjusted for each device. For example, screen 2 might have a SurfaceView set with a width of #_of_screens * 1920 (or whatever the appropriate resolution is) and an X starting position of -1920. The caveat is that I don't know how large of a SurfaceView this could support. For example, this might work great for just two screens but not work for ten screens.
You can try using VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING to scale the video output based on how big you need it to display.
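As a rough, hedged sketch of those two points (screenIndex, numScreens, the 1920x1080 panel size and R.id.video_surface are placeholders, not anything from the original answer), the Activity on each stick might do something like:

// Position of this device in the wall and the panel resolution (placeholder values).
int screenIndex = 1;   // 0-based index of this screen in the wall
int numScreens = 3;
int panelWidth = 1920;
int panelHeight = 1080;

SurfaceView surfaceView = (SurfaceView) findViewById(R.id.video_surface);
ViewGroup.LayoutParams lp = surfaceView.getLayoutParams();
lp.width = numScreens * panelWidth;   // the SurfaceView spans the whole virtual wall
lp.height = panelHeight;
surfaceView.setLayoutParams(lp);
surfaceView.setTranslationX(-screenIndex * panelWidth); // shift so only this screen's slice is visible

MediaPlayer player = new MediaPlayer();
// Crop/scale the video to fill the oversized surface instead of letterboxing it.
player.setVideoScalingMode(MediaPlayer.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING);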
For powerful devices
If the devices you're working with are powerful enough, you may be able to render to a SurfaceTexture off screen then copy the portion of the texture to a GLSurfaceView. If this is DRMed content, you'll also have to check for the EGL_EXT_protected_content extension.
For Android 10+
If the devices are running Android 10 or above, SurfaceControl may work for you. You can use a SurfaceControl.Transaction to manipulate the SurfaceControl, including the way the buffer coordinates are mapped. The basic code ends up looking like this:
new SurfaceControl.Transaction()
.setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
.apply();
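Here sourceRect is the slice of the decoded video buffer this screen should show and destRect is where it lands on the local display. A hedged sketch of how they might be built for screen screenIndex out of numScreens (all variable names are placeholders):

// Slice of the video buffer this screen is responsible for (buffer coordinates).
Rect sourceRect = new Rect(
        screenIndex * videoWidth / numScreens, 0,
        (screenIndex + 1) * videoWidth / numScreens, videoHeight);
// Map that slice onto the whole local display.
Rect destRect = new Rect(0, 0, displayWidth, displayHeight);

These two rectangles then feed straight into the setGeometry call above.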
There's also a SurfaceControl sample in the ExoPlayer v2 demos: https://github.com/google/ExoPlayer/tree/release-v2/demos/surface

How to get all intermediate stages of image processing in Android?

If I use the camera2 API to capture an image I will get the "final" image after image processing, i.e. after noise reduction, color correction, vendor-specific algorithms, etc.
I should also be able to get the raw camera image following this.
The question is: can I get the intermediate stages of the image as well? For example, let's say the raw image is stage 0, noise reduction is stage 1, color correction is stage 2, etc. I would like to get all of those stages and present them to the user in an app.
In general, no. The actual hardware processing pipelines vary a great deal between different chip manufacturers and chip versions even from the same manufacturer. Plus each Android device maker then adds their own software on top of that.
And often, it's not possible to dump outputs from every step of the process, only some of them.
So making a consistent API for fetching this isn't very feasible, and the camera2 API doesn't have support for it.
You can somewhat simulate it by turning things like noise reduction entirely off (if supported by the device) and capturing multiple images, but that of course isn't as good as multiple versions of a single capture.
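For example, a hedged sketch of switching noise reduction off for a single capture when the device reports support for it (it assumes you already hold a CameraCharacteristics object and a CaptureRequest.Builder named captureBuilder; edge enhancement can be disabled the same way via EDGE_MODE):

int[] modes = characteristics.get(
        CameraCharacteristics.NOISE_REDUCTION_AVAILABLE_NOISE_REDUCTION_MODES);
boolean offSupported = false;
if (modes != null) {
    for (int mode : modes) {
        if (mode == CameraMetadata.NOISE_REDUCTION_MODE_OFF) {
            offSupported = true;
        }
    }
}
if (offSupported) {
    // Ask the processing pipeline to skip its noise-reduction stage for this request.
    captureBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE,
            CameraMetadata.NOISE_REDUCTION_MODE_OFF);
}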

Android steganography detection LSB

I am trying to detect LSB steganography using the real-time camera on a mobile phone. So far I haven't had much luck detecting the LSB steganography, whether on printed material or on a PC screen.
I tried using OpenCV to convert each frame to RGB and then read the bits from each pixel, but that never detects the steganography.
I also tried using the Camera functionality and checking, pixel by pixel in onFrame, whether the starting string is recognized, so that I can read the actual hidden data from the remaining pixels.
This gave a positive result a few times, but then reading the data was impossible.
Any suggestions how to approach this?
A little bit more information on the hidden data:
1. It is all over the image, and I know the algorithm works: if I just read the exact image through a Bitmap in the app, the steganography is detected and decoded, but when I try to use the camera, no such luck.
2. It is in a grid of 8x5 pixels repeated all over the image, so it is not the case that it sits in only one specific area of the image and therefore cannot be detected in the camera view.
I can post some code as well if needed.
Thanks.
You still haven't clarified the specifics of how you do it, but I assume you do some flavour of the following:
embed a secret in a digital image,
print this stego image or have it displayed on a pc, and
take a photograph of that and detect the embedded secret.
For all practical purposes, this can't work. LSB pixel embedding steganography is a very fragile technique. You require a perfect copy of the stego image's pixels for extraction to work. Even a simple digital manipulation is enough to destroy your secret: scaling, cropping and rotation, to name a few. Then you have to worry about the angle you take the photo from and the ambient light. And we're not even touching upon how the colours are rendered on a PC monitor or in a printed photo.
The only reason you get positives for the starting sequence is that you use a short one and you're bound to get lucky. Assuming the photographed stego image introduces a random deviation in each pixel from its true value, you'll still get lucky sometimes. Imagine the first pixel had the value 250 and after being photographed it's 248. Well, the LSB in both cases is still 0.
On top of that, some sequences are more likely to come up. In most photos neighbouring pixels are correlated, because the colour gradient is smooth. This means that if the top left of a photo is dark and the top right is bright, the colour will change slowly. For example, the first 4 pixels have the value 10, then the next few have 11, and so on. In terms of LSBs, you have the pattern 00001111 and as I've just explained, that's likely to come up fairly frequently regardless of what image you photograph out there.
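To see how fragile plain LSB extraction is, a small sketch with illustrative values only: a deviation of just one or two intensity levels per pixel, which a camera will easily introduce, already corrupts the recovered bits.

// Extract the least significant bit of each channel value (e.g. the blue channel per pixel).
static int[] extractLsbs(int[] channelValues) {
    int[] bits = new int[channelValues.length];
    for (int i = 0; i < channelValues.length; i++) {
        bits[i] = channelValues[i] & 1;
    }
    return bits;
}

// Original stego values vs. what the camera might actually capture.
int[] original = {250, 251, 250, 251};  // LSBs: 0 1 0 1
int[] captured = {248, 252, 249, 251};  // LSBs: 0 0 1 1 -- half the payload bits are now wrong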

android make video of screen -need to take application video

In one of my applications I need to record a video of my own screen.
I need to let the user take a video of the app itself. How is that possible?
First question: is it possible at all? If yes, then how? Any useful links or some help?
Thanks
Pragna Bhatt
Yes, it is possible, but with certain limitations. I have done it in one of my projects.
Android 4.4 adds support for screen recording. See here
If you are targeting lower versions, you can achieve it, but it will be slow, and there is no direct or easy way to do it. What you would do is create a drawable from your view/layout, convert that drawable to YUV format and feed it to the camera/recording pipeline (look for a library that lets you supply a custom YUV image as camera input); it will be encoded like a movie, which you can save to storage. Use threads to increase the frame rate (newer multi-core devices will reach a higher frame rate).
Only create the drawable (from the view) when there is a change in the view or its children; you can use a global layout listener for that (see the sketch below). Otherwise, send the same YUV image again.
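A rough sketch of the view-to-bitmap step described above (names are illustrative, and the YUV conversion/encoding part is left out): capture the root view into a Bitmap, and only re-capture when a global layout pass signals that something changed.

final View root = findViewById(android.R.id.content);  // the view hierarchy being recorded
root.getViewTreeObserver().addOnGlobalLayoutListener(
        new ViewTreeObserver.OnGlobalLayoutListener() {
            @Override
            public void onGlobalLayout() {
                if (root.getWidth() == 0 || root.getHeight() == 0) {
                    return; // not laid out yet
                }
                // Draw the current view hierarchy into an off-screen bitmap.
                Bitmap frame = Bitmap.createBitmap(
                        root.getWidth(), root.getHeight(), Bitmap.Config.ARGB_8888);
                root.draw(new Canvas(frame));
                // Convert frame to YUV and hand it to the encoder here.
            }
        });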
Limitation:
You cannot record more than one activity (at a time), or the transitions between them, because you are creating the image from a view. (Work on it yourself; maybe you'll find a way.)
You cannot increase the frame rate past a certain point, because it depends on the hardware of your device.

Photo Editing Mobile App for iOS and Android

We are trying to build a photo app for a client where large photos need to be fetched using a web service. These photos will be high-resolution JPGs, roughly 5-7 MB each.
The issue we're facing is how to fetch a batch of photos (say 10-15), store them locally in the app, and allow the user to perform editing tasks on them. What I understood from my team is that editing the high-resolution photos will crash the app because of memory limits. This means we will have to reduce the resolution and size of the photos, which is reasonable, but could take a while. What is the best practice for downloading and reducing the photos so that a good user experience is maintained?
To give some background, we are building the app for both Android and iOS. The expected features are the typical swipe and pinch gestures, basic editing, and advanced editing like frames, text overlay, etc.
Not sure this is a UX question so much as about app architecture.
Maybe better suited to StackOverflow or another stack exchange site instead, but I'll try to approach it from a UX angle...
USER EXPECTATIONS
Do your users expect to edit high-res & have control over maintaining maximum quality? Or are they casual users just interested in making funny pix & won't care about loss of quality?
If they expect to have control, you could check disc space or device capability before downloading & offer them a choice of smaller size vs. slower response time.
For example, if they're on an older non-retina/low-pixel-density device, display an alert that editing high-res images might be difficult & offer a smaller version as an alternative.
How will saving/uploading edited versions work? Users might be upset if they overwrite originals w/lower quality versions & weren't given an option to "save as" or set quality level.
USE CASES & DEVICE SPECIFICS
Assumption: A user on a mobile device will only work on 1 image (maybe 2) at a time.
No mobile device is large enough to show multiple high-res images on screen at once anyway. Keep current image in memory; only show thumbnails of others (saved on disc) until requested for editing & then swap; release/reload resources as necessary.
If your users are using older hardware (pre-retina iPhone 3GS or iPad 2 for example), then a 5-7MB image (anything >3000px per side) might be a bit slow, but newer devices take/handle 8-12MP pictures themselves. Should be well within the device's capability to open/edit one at a time.
Are you saying this is not the case?? Can't even open 1 image? Is it being saved to disc first, or opened in-app directly from web service?
Verify adequate storage space either for the whole batch beforehand, or as each image is saved
If device storage is full, cancel remaining downloads & alert user which images are missing
USABILITY & RESPONSIVENESS
Download the images asynchronously to avoid blocking the UI
Create much smaller low-res thumbnails to act as placeholders for the high-res versions. Download & show thumbnails first to give a sense of progress, but differentiate between an image that's still loading & one that's available for editing (with a progress bar, transparency, etc). A decoding sketch follows this list.
Download in the background (as you might an "in-app purchase") and save to disc.
Download individually & save to a shared location. This keeps them organized as a batch of 10-15, but lets the user start working as soon as the 1st image is available. Don't make them wait for all of them.
Could use a separate "downloads" view w/progress bars & let user continue work in another tab/view
Only once the user selects a thumbnail do you need to worry about loading/displaying the large version from disc. You can release thumbnail/loading view from memory & free up resources if necessary while the large image is being edited. Reload only as necessary.
Auto-save to disc in background to prevent loss of work & take opportunity to clean up caches & whatnot.
If working memory is already a concern, you won't have many options for undo/redo. Most image-editing apps manage this ok though, so there's a way.
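As a hedged sketch of the Android side of the thumbnail/downsampling idea mentioned in the list above (path and the 1024 px target are placeholder values): read just the image bounds first, pick a power-of-two sample size, then decode a reduced version for the editor to work with.

// First pass: read dimensions only, no pixel memory is allocated.
BitmapFactory.Options bounds = new BitmapFactory.Options();
bounds.inJustDecodeBounds = true;
BitmapFactory.decodeFile(path, bounds);

// Pick the largest power-of-two sample size that keeps both sides at or above ~1024 px.
int targetSize = 1024;
int sampleSize = 1;
while (bounds.outWidth / (sampleSize * 2) >= targetSize
        && bounds.outHeight / (sampleSize * 2) >= targetSize) {
    sampleSize *= 2;
}

// Second pass: decode the downsampled bitmap that the editor actually works with.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = sampleSize;
Bitmap editable = BitmapFactory.decodeFile(path, opts);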
