Is it possible, from within my Android Java app, to capture an image of what is on the screen, even if it was drawn using native (NDK) code? I do not wish to take screenshots of other apps, just my own. I can already capture an image of a canvas that I am aware of, but is there a view or canvas or something like it that always represents what is on the screen, so that a) I don't have to capture the separate views' images and recombine them, and b) I can see what my native (JNI) code is doing with the graphics too?
There is no way to access the "raw" framebuffer from an application. On some devices you can get at it with sufficient permissions (e.g. the DDMS screen dump), but not even that will work on all devices.
If you are using view.getRootView(), then although it is still a view, as you were complaining about, it gives you the whole hierarchy of views on the screen, so you should not need to recombine anything; at least I know I don't when I use it. Sorry, I don't think it will be much help for seeing what the native code is doing to the on-screen graphics, though.
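For what it's worth, a minimal sketch of capturing the root view into a Bitmap (note that SurfaceView/GLSurfaceView content, which is where NDK/OpenGL rendering usually ends up, is composited separately and typically comes out black with this approach):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;

    // Inside an Activity, after layout has completed:
    View root = findViewById(android.R.id.content).getRootView();
    Bitmap shot = Bitmap.createBitmap(root.getWidth(), root.getHeight(),
            Bitmap.Config.ARGB_8888);
    root.draw(new Canvas(shot)); // renders the view hierarchy into the bitmap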
I'd like to create an Android app that can modify the whole display, even when the app is not being used directly. This is one example of an app that seems to do this.
Ideally, I'd not only want to be able to tint the screen, but to perform arbitrary operations on the pixels being shown on the display, ranging from making the entire screen a solid color, to inverting the colors (so that e.g. black becomes white), to blurring the screen. (I could imagine this level of access in the wrong hands could make somebody's phone unusable, so maybe not all of these are possible.)
Any pointers on how to do this?
You want to let your app draw over other apps. There is a special set of requirements for such applications.
Take a look here and here.
There is also a simple tutorial.
And an open-source app that looks pretty similar to the one you've linked above.
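As a rough illustration of the overlay approach, here is a sketch of a full-screen tint (TYPE_APPLICATION_OVERLAY requires API 26+; older versions used TYPE_SYSTEM_ALERT, and either way the user must grant the "draw over other apps" permission):

    import android.graphics.Color;
    import android.graphics.PixelFormat;
    import android.view.View;
    import android.view.WindowManager;

    // Inside a Service or Activity, after the overlay permission is granted
    // (see Settings.ACTION_MANAGE_OVERLAY_PERMISSION):
    View tint = new View(this);
    tint.setBackgroundColor(Color.argb(100, 255, 0, 0)); // translucent red

    WindowManager.LayoutParams params = new WindowManager.LayoutParams(
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.MATCH_PARENT,
            WindowManager.LayoutParams.TYPE_APPLICATION_OVERLAY,
            WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE
                    | WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE,
            PixelFormat.TRANSLUCENT);

    WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
    wm.addView(tint, params);

Note that this only lets you draw on top of the screen; it does not let you read or transform the pixels other apps have drawn, so effects like inversion or blur of arbitrary content are not achievable this way.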
I am currently working on an application in Android Studio that messes with the colors of the live camera feed from a phone's camera. For example, I may want to filter out all the reds, or maybe I want to make the displayed camera image black-and-white.
However, I haven't really found much on how to do this. I've found tutorials on using both the deprecated Camera class and the android.hardware.camera2 API. My preferred sample code was for camera2, found directly here (the link takes you directly to the Java class files, not the whole project).
So does anyone know how to use camera2 to do what I want? Do I need to use the deprecated Camera class instead? My idea is that I need an activity whose main job is displaying images; behind the scenes the phone camera is running, sending each image (in whatever format, e.g. Bitmap) to have its colors messed with (by some code I will write), which then sends the image to be displayed in the main activity.
So that is three main pieces: (1) Camera to Bitmap, to get what is currently seen by the phone Camera and store it in code; (2) mess with the colors of the Bitmap to distort the current view in my desired way; and (3) then a way of taking the resulting distorted view and displaying that on the screen. Of course, as mentioned, it's the first and last of the three just mentioned that I really need help with.
Please let me know what other details would be helpful.
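For piece (2), the kind of color manipulation I have in mind would be something like this (a sketch using ColorMatrix; it's pieces (1) and (3), the camera plumbing, that I'm stuck on):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.ColorMatrix;
    import android.graphics.ColorMatrixColorFilter;
    import android.graphics.Paint;

    // Returns a black-and-white copy of a camera frame.
    Bitmap toGrayscale(Bitmap src) {
        Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
                Bitmap.Config.ARGB_8888);
        ColorMatrix cm = new ColorMatrix();
        cm.setSaturation(0f); // 0 = grayscale; other matrices can drop the reds, etc.
        Paint paint = new Paint();
        paint.setColorFilter(new ColorMatrixColorFilter(cm));
        new Canvas(out).drawBitmap(src, 0, 0, paint);
        return out;
    }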
I am wondering if it is possible to use Android RenderScript to manipulate activity windows. For example, is it possible to implement something like 3dCarousel, with an activity running in every window?
I have been researching for a long time, and all the examples I have found are for manipulating bitmaps on the screen. If that is true, and RenderScript is only meant for images, then what is used in SPB Shell 3D, or are those panels not actual activities?
It does not appear to me that those are activities. To answer your question directly: to the best of my knowledge there is no way to do what you want with RenderScript, as the nature of how it works prevents it from controlling activities. The control actually works the other way around... You could build a series of fragments containing RenderScript surface views, but the processing load of this would be horrific, to say the least. I am also unsure of how you would take those fragments or activities and then draw a carousel.
What I *think* they are doing is using RenderScript or OpenGL to draw a carousel and then placing the icon images where they need to be. But I have never made a home screen app, so I could be, and likely am, mistaken in that regard.
I want to write an activity that:
- Shows the camera preview (viewfinder), and has a "capture" button.
- When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that work on every device? A link to a simple open-source application that takes pictures would be the ideal answer.
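In other words, roughly this contract from the capture activity's side (the names here are just placeholders):

    import android.content.Intent;
    import android.net.Uri;

    // In the capture activity, once the picture has been saved:
    void returnResult(Uri savedImageUri) {
        Intent result = new Intent();
        result.setData(savedImageUri); // hand the image back to the caller
        setResult(RESULT_OK, result);
        finish();
    }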
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
- Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent. See this question.
- Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
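For reference, the basic flow of approach 1 is roughly this (a sketch; device behavior varies as described above):

    import android.content.Intent;
    import android.graphics.Bitmap;
    import android.provider.MediaStore;

    static final int REQUEST_IMAGE_CAPTURE = 1;

    void dispatchTakePictureIntent() {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
            // Without EXTRA_OUTPUT, a small thumbnail comes back in the extras.
            Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
        }
    }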
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and to completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the Camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
- Handling autofocus
- Handling flash
- Supporting devices with a front camera, back camera or both
- Each device has a different combination of screen resolution, preview resolutions (they don't always match the screen resolution) and picture resolutions (see the size-selection sketch after this list).
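Preview-size selection, for example, typically looks something like this (a sketch; devices rarely support arbitrary sizes, so never assume an exact match):

    import android.hardware.Camera;
    import java.util.List;

    // Pick the supported preview size closest in area to the target surface.
    Camera.Size bestPreviewSize(Camera.Parameters params, int w, int h) {
        Camera.Size best = null;
        long bestDiff = Long.MAX_VALUE;
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        for (Camera.Size s : sizes) {
            long diff = Math.abs((long) s.width * s.height - (long) w * h);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }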
So in general, I'd not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
"With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black."
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well as-is, but here are the issues I ran into when modifying it for my use case:
1. Never take a second picture before the first has completed, in other words before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose (see the sketch after this list).
2. Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic, otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
3. To add more components to the preview screen, a FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
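Regarding #1, the guard looks roughly like this (paraphrased from memory, not a verbatim copy of the CommonsWare example):

    import android.hardware.Camera;

    boolean inPreview = false; // true while the preview is running

    void capture() {
        if (inPreview) {
            inPreview = false; // ignore further clicks until the callback fires
            camera.takePicture(null, null, photoCallback);
        }
    }

    Camera.PictureCallback photoCallback = new Camera.PictureCallback() {
        public void onPictureTaken(byte[] data, Camera camera) {
            // ... persist the JPEG data ...
            camera.startPreview(); // the preview stops after takePicture()
            inPreview = true;      // safe to capture again
        }
    };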
I also posted a minor issue relating to Camera.open() on GitHub.
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.
The way I read the documentation, using EXTRA_OUTPUT tells the camera to save the file in a specific location. That's great, but it also says you get a full-size image. That's not so great.
How can I get just a small image but still specify the filename?
After trying to work with the built-in camera activity for some time now, I can advise you not to expect anything good from it, because:
- The built-in activity differs from version to version. For example, in the 2.2 emulator it even crashes when you try to take a (dummy) picture.
- The camera activity on real devices like the Samsung Galaxy S is different, i.e. it not only looks different, it has different code and a different set of bugs.
- The original built-in camera activity has a CROP feature, but it is not part of the public API, and thus it is not a good idea to use it.
So far I have found that to be safe when working with the camera I need to:
- create my own custom camera activity that lacks the fancy stuff like filters, etc., but is more configurable (I don't have it yet). I've tried to find a third-party camera app, but every one of them seems to be targeted at normal users, not developers, i.e. has many "cool" features but is slow / bloated / buggy / has a bad UI.
- create thumbnail images myself, outside of the camera activity, for more control (see the sketch below).
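For the thumbnails, something like this is enough (a sketch using ThumbnailUtils; the 96x96 size is just an example):

    import android.graphics.Bitmap;
    import android.media.ThumbnailUtils;

    // Center-crop and scale the full-size capture down to a thumbnail.
    Bitmap makeThumbnail(Bitmap fullSize) {
        return ThumbnailUtils.extractThumbnail(fullSize, 96, 96);
    }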
I really hope that I am missing something here and someone will correct me in the comments with an appropriate solution...
I ended up just dealing with the large images by always scaling on the read. It would have been nice not to have to do that, as I read in more than one place, but... oh well.
Problem solved, although far from elegant.
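For the record, the scale-on-read approach boils down to this (a sketch using inSampleSize):

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    Bitmap readScaled(String path, int targetW, int targetH) {
        // First pass: decode bounds only, no pixel allocation.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, opts);

        // Largest power-of-two sample that keeps both dimensions >= target.
        int sample = 1;
        while (opts.outWidth / (sample * 2) >= targetW
                && opts.outHeight / (sample * 2) >= targetH) {
            sample *= 2;
        }

        // Second pass: decode for real at the reduced size.
        opts = new BitmapFactory.Options();
        opts.inSampleSize = sample;
        return BitmapFactory.decodeFile(path, opts);
    }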