I'm a newbie in Android.
I'm using a Nexus 7 reference device and I've downloaded the full source code from source.android.com.
I have an engineering system image, so I can build a system application.
The /system/bin/screencap utility works well for me to capture the screen.
I want to get the pixel data directly in my application, using the code in screencap.cpp.
When I use the screencap utility, the process looks like this:
1) Capture the screen and save it as an image.
2) Open the image file.
3) Decode the file to a Bitmap.
4) Get the pixel data (an int array) from the Bitmap.
I want to remove steps 1, 2 and 3 and just call an API that returns the screen's pixel data directly.
How can I do that?
If you're running with system privileges, you can just ask SurfaceComposerClient for the pixel data rather than launching a separate process to do it for you.
Looking at the screencap source code, all you really need is the Binder initialization:
ProcessState::self()->startThreadPool();
and the SurfaceComposerClient IPC call:
ScreenshotClient screenshot;
sp<IBinder> display = SurfaceComposerClient::getBuiltInDisplay(displayId);
if (display != NULL && screenshot.update(display, Rect(), false) == NO_ERROR) {
    base = screenshot.getPixels();   // pointer to the raw pixel data
    w = screenshot.getWidth();
    h = screenshot.getHeight();
    s = screenshot.getStride();      // stride in pixels, may be larger than the width
    f = screenshot.getFormat();      // e.g. PIXEL_FORMAT_RGBA_8888
    size = screenshot.getSize();
}
You can safely ignore all the /dev/graphics/fb0 stuff below it -- it's an older approach that no longer works. The rest of the code is just needed for the PNG compression.
If you're not running with system privileges, you can't capture the entire screen. You can capture your app though.
If you are writing a Java app, just call /system/bin/screencap from your application (using java.lang.Process) and read the result into memory as a binary stream. You can see the binary structure in screencap.cpp, but it's just the width, height, and format as four-byte integers, followed by the pixel data.
Note to other readers: this is only possible if your app is a system app.
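A minimal sketch of that approach might look like the following. It assumes the raw (non-PNG) output of screencap.cpp: width, height and pixel format as native-endian (little-endian on Android devices) 32-bit integers, followed by the pixels, typically in RGBA_8888 (4 bytes per pixel). Verify those assumptions against the screencap.cpp of your platform version, and call this from a method that declares IOException and InterruptedException.

// Run /system/bin/screencap and parse its raw output.
Process process = new ProcessBuilder("/system/bin/screencap").start();
DataInputStream in = new DataInputStream(process.getInputStream());

byte[] header = new byte[12];
in.readFully(header);
ByteBuffer hdr = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
int width  = hdr.getInt();
int height = hdr.getInt();
int format = hdr.getInt();      // e.g. 1 == PIXEL_FORMAT_RGBA_8888

byte[] pixels = new byte[width * height * 4];   // assumes 4 bytes per pixel
in.readFully(pixels);
process.waitFor();

// ARGB_8888 bitmaps store their pixels as RGBA in memory, so the raw bytes
// can be copied straight into one if the format really is RGBA_8888.
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(pixels));

// Finally, get the pixel data as an int array, as the question asks.
int[] pixelInts = new int[width * height];
bitmap.getPixels(pixelInts, 0, width, 0, 0, width, height);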
1) You can transfer data from your screencap utility to your App over the network by using sockets.
2) Android NDK can be used for direct function calls of your utility from your App.
I want to test, on Android, a model trained in Python; I'm on TensorFlow 0.9.
To do so, I froze my graph into a single .pb file containing the graph and the weights. I used queues to manage my training batches.
When running my session on Android, I specify the input tensor by its name "input_node", which is the data layer as input in my network.
X = tf.reshape(X, [-1, W, H, 1], name="input_node")
and call the "output_node" layer :
output = tf.reshape(h_fc11, shape=[-1, 8], name="output_node")
Here is the call in tensorflow_jni.cc:
std::vector<std::pair<std::string, tensorflow::Tensor> > input_tensors({{"input_node", input_tensor}});
s = session->Run(input_tensors, output_names, {}, &output_tensors);
The batch generation is done beforehand, so it should not be used when testing.
But I have the following error:
tensorflow_jni.cc:312 Error during inference: Invalid argument: No OpKernel was registered to support Op 'RandomShuffleQueue' with these attrs
[[Node: shuffle_batch/random_shuffle_queue = RandomShuffleQueue[capacity=10750, component_types=[DT_FLOAT, DT_FLOAT], container="", min_after_dequeue=10000, seed=0, seed2=0, shapes=[[10000], [8]], shared_name=""]]]
It seems that the batch generation layer is called (my images are 100x100 and I have 8 outputs), but I don't know why.
When I test the same model with the same input/output layers through image_labelling.cc directly on a Mac (building with Bazel), I don't get the error.
I do not understand why the RandomShuffleQueue is needed when testing. Am I missing something to specify the part of the graph I want to use? Are all the layers of the graph verified even if not used?
Thanks.
I'm still working on the documentation for this, but I think the optimize_for_inference script should help you here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py
You pass in a frozen graph, the input nodes, and output nodes, and it removes all of the other ops that aren't needed.
I'm also trying to get access to the color data bytes from the Tango's color camera. With the Java API I got stuck: I could connect the Tango camera to a surface for display, but that was only good for display, with no easy access to the raw data or a timestamp. So I finally switched to the C API in native code (latest FERMAT lib and headers) and followed a recommendation I found on Stack Overflow, adapting the sample code to register a connectOnFrameAvailable() callback (I started from the PointCloudActivity sample for that test).
The first problem I found is a side effect of registering that callback: the callback itself usually works fine (it fires regularly), but another callback that I also registered, to get XYZ point clouds, then starts failing to fire. As in the sample code I mentioned, the clouds arrive through an onXYZijAvailable() callback that the app registers with TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback doesn't fail every time, but it does about half the time during my tests. An awful workaround is to send the app to the background and then bring it to the foreground again... which is curious: is this "recovery" related to low-level onPause/onResume behavior? If someone has clues...
By the way, the same side effect was observed with the Java API, once the camera texture was connected for display (through the corresponding Tango API).
But here is my second "problem", back to acquiring YV12 color data from the camera:
I register through TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable)
and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // i.e. = 0x32315659 // YCrCb 4:2:0 Planar
    //LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ....
}
The problem is that the width, height and stride information in the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time and is not the expected magic number (here 0x32315659)...
Am I doing something wrong there? (The other fields are OK...)
Also, apparently only one data format (YV12) is defined here, but the fisheye images from the demo app look like grey-level images - does the fisheye camera use the same (color) format at the low-level capture stage as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did - the only access to the image data is through the C API.
2) Regarding the image - I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code. The format is naked, i.e. it's purely an organizational structure and has no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here (there is also a sketch after this answer).
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition - what works for me only does so sometimes, and may not work at all for you.
Tango does seem to have a remarkable knack for simply stopping producing point clouds. I think a lot of it has to do with very sensitive internal timing (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up that timing, where:
A. Debugging at the C level can make point clouds stop coming.
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can make point clouds stop coming.
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases, the silvery pattern goes away, and point clouds come back; more commonly the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At that point at least a restart of the app is required for me, and a full tablet reboot every third time or so.
Summarizing, those are my suspicions and countermeasures, but it's all based purely on personal experience.
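Regarding point 2, here is a rough sketch, in Java to match the other snippets in this thread, of how a naked YV12 buffer is commonly laid out and converted. It assumes the stride equals the width, the plane order Y then V then U, and full-range conversion coefficients - treat those as assumptions to verify against the Tango documentation for your device.

// Hypothetical YV12 -> ARGB conversion. Plane layout assumed:
// Y plane (width*height bytes), then V (width/2 * height/2), then U.
static int[] yv12ToArgb(byte[] yv12, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    int quarter = frameSize / 4;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yIdx = y * width + x;
            int uvIdx = (y / 2) * (width / 2) + (x / 2);
            int Y = yv12[yIdx] & 0xFF;
            int V = yv12[frameSize + uvIdx] & 0xFF;           // Cr
            int U = yv12[frameSize + quarter + uvIdx] & 0xFF; // Cb
            // Full-range YCrCb -> RGB (coefficients are an assumption here).
            int r = (int) (Y + 1.402 * (V - 128));
            int g = (int) (Y - 0.344 * (U - 128) - 0.714 * (V - 128));
            int b = (int) (Y + 1.772 * (U - 128));
            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));
            argb[yIdx] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}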
It is easy to do with this code:
Bitmap bitmap;
View v1 = MyView.getRootView();
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
and it works great, but only when there is an Activity.
How can I take a screenshot from a Service?
My goal is to take a screenshot once an hour, i.e. every hour - for example at 12, then at 1, then at 2, and so on.
To capture a screenshot of your activity you need a View from your activity, and that is not available in your service. So you have to set up a TimerTask that calls into your activity every hour; the activity responds with its currently visible view, and you can capture the screenshot from that. (I think this is the only solution to your problem.)
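A rough sketch of the scheduling side, assuming a reference to your Activity called activity and a hypothetical captureScreenshot() method in it that runs the drawing-cache code from the question (Timer callbacks run on a background thread, so the capture is posted back to the UI thread):

Timer timer = new Timer();
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        activity.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                activity.captureScreenshot();   // hypothetical helper in your Activity
            }
        });
    }
}, 0, 60 * 60 * 1000);   // fire once an hour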
Or, if you want to take a screenshot of the current device screen (any application), then you need root permission and have to read the framebuffer, which gives you the raw data of the current screen; you can then convert it to a bitmap or any picture file. That you can do in your service.
Android Screenshot Library (ASL) provides a means of taking snapshots of the phone's screen without the need to sign your application or have privileged (root) access to the Android system.
Click here for ASL
So I am using the Android camera to take pictures within an Android app. About 90% of my users have no issues, but the other 10% get a picture that returns pure black or a weird jumbling of pixels.
Has anyone else seen this behavior, or have any ideas why it happens?
Examples (images not reproduced here): one capture that is pure black, and one with jumbled pixels.
I've had similar problems.
The problem, in short, is missing data.
It happens to a Bitmap/stream if the data stream was interrupted for too long or suddenly became unavailable.
Another example where it may occur: downloading and uploading images.
If the user suddenly disables Wi-Fi/mobile data, no more data can be transmitted.
You end up with a splattered image.
The image will appear to view okay (where "okay" means black/splattered - it's still viewable!) but is internally invalid (missing or corrupted information).
If it's not too critical, you can try to move all the data into a Bitmap object (BitmapFactory.decode*) and test whether the returned Bitmap is null. If it is, the data is possibly corrupted.
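For example, a minimal check along those lines, assuming the raw bytes are already in a byte[] called data:

Bitmap check = BitmapFactory.decodeByteArray(data, 0, data.length);
if (check == null) {
    // Decoding failed: the data is probably incomplete or corrupted.
}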
This is just solving the consequences of the problem, as you can guess.
The better way would be to tackle the problem at its root:
Ensure a good connection to your data source (a large enough, robust buffer).
Try to avoid unnecessary casts (e.g. from char to int).
Use the correct type of buffers (either Reader/Writer for character streams, or InputStream/OutputStream for byte streams).
From Android 4.0 on, hardwareAccelerated defaults to true in the manifest. A hardware-accelerated Canvas does not support Picture objects, and you will get a black screen...
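If that is the cause, one commonly used workaround (my suggestion, not part of the original answer) is to force software rendering for just the affected view (API 11+), so the hardware-accelerated canvas limitations don't apply to it:

// Render this particular view with a software layer.
myView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);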
Please also check whether you use a BitmapFactory.Options object when generating the bitmap, because a few settings on that object can also corrupt the bitmap.
I spent a lot of time debugging different problems that were reproducible only on specific devices.
For instance, I gave up my attempts to take a picture from the camera using an Intent, because only a limited set of devices behave as expected.
Another example is when I use the byte array from the onPictureTaken callback:
public void onPictureTaken(byte[] data, Camera camera) {
    byte[] tempData = new byte[data.length];
    System.arraycopy(data, 0, tempData, 0, data.length);
    ///...
}
So if I don't make a copy but use the original "data" array some time later, I run into trouble, because some devices clean this array up after a while. Other devices don't do such cleaning, so there it works perfectly without making a copy.
One more example:
Some devices return null when:
Camera.Parameters params = camera.getParameters();
List<Camera.Size> sizes = params.getSupportedPreviewSizes();
// sizes is null
But most devices (I think) return a list of supported sizes.
So I wonder: is there any kind of knowledge base / FAQ assembled for such problems? If not, let's post here the issues we have faced.
I'm unaware of one. But the byte array you are receiving is mmapped and under the control of another (native) application, so the data may go away at the camera application's discretion if it reuses that buffer.
The best way is to copy it to a safe location ASAP.
As for preview sizes - they are a mess. Even if you get the list, not all resolutions are actually supported (I got segfaults on bigger resolutions - somehow the preview buffer did not fit). The only way is to probe whether a given preview size is actually supported, by activating the sizes in turn and catching the exception.
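A rough sketch of that probing approach (it guards against the null list from the question, assumes a preview surface or texture has already been set on the camera, and the exact exception thrown varies by device, so treat the catch clause as an assumption):

List<Camera.Size> sizes = camera.getParameters().getSupportedPreviewSizes();
Camera.Size workingSize = null;
if (sizes != null) {
    for (Camera.Size size : sizes) {
        try {
            Camera.Parameters params = camera.getParameters();
            params.setPreviewSize(size.width, size.height);
            camera.setParameters(params);
            camera.startPreview();
            camera.stopPreview();
            workingSize = size;     // this size actually works on this device
            break;
        } catch (RuntimeException e) {
            // Advertised but not actually usable here; try the next size.
        }
    }
}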