As the title says, I'm using HERE Maps.
I'm trying to serve custom raster tiles from an S3 server instead. Following the documentation, there are two classes that support this: UrlMapRasterTileSourceBase and MapRasterTileSource.
In my case, I subclassed MapRasterTileSource and overrode the getTileWithError(int x, int y, int zoomLevel) method to load my tile images from the S3 server.
The problem is that loading an image takes too long, and the UI lags.
The documentation says:
Note: Ensure that getTileWithError() returns within a reasonable amount of time. If your operation takes a longer period of time, launch an asynchronous operation and return the TileResult.Error.NOT_READY error code while the operation is in progress.
But I have no idea how to apply that. I have tried loading the image with a callback, but I don't know what to do once I have the result.
Could you give me a hand with this, please?
How are you loading raster tiles from S3? Is it a synchronous HTTP request that you call from getTileWithError?
The proper flow should be:
- getTileWithError() is called for a particular x, y, z.
- Kick off the tile fetch from your S3 layer asynchronously on a thread or AsyncTask.
- Meanwhile, getTileWithError() will keep being called for that tile; return TileResult.Error.NOT_READY while the fetch is in progress.
- When the image is fully downloaded, return it to the runtime (see the sketch below).
If getTileWithError() takes too long to return, the tile source will be disabled automatically.
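A minimal sketch of that flow. I'm assuming a TileResult can be constructed from an error code and a byte[] payload, and that the success code is TileResult.Error.NONE; check the exact constructors in your SDK version. downloadFromS3() is a hypothetical helper standing in for your HTTP/S3 call.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class S3TileSource extends MapRasterTileSource {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    @Override
    public TileResult getTileWithError(int x, int y, int zoomLevel) {
        final String key = zoomLevel + "/" + x + "/" + y;
        byte[] data = cache.get(key);
        if (data != null) {
            // Tile already downloaded: hand it to the runtime.
            return new TileResult(TileResult.Error.NONE, data);
        }
        if (inFlight.add(key)) {
            // First request for this tile: start the fetch off the map thread.
            executor.execute(() -> {
                byte[] bytes = downloadFromS3(x, y, zoomLevel); // hypothetical helper
                if (bytes != null) {
                    cache.put(key, bytes);
                }
                inFlight.remove(key);
            });
        }
        // Fetch still in progress: tell the runtime to ask again later.
        return new TileResult(TileResult.Error.NOT_READY, null);
    }
}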
I need to implement text recognition using MLKit on Android, and I have decided to use the new CameraX API as the camera library. I am struggling with the correct "pipeline" of classes, or data flow of the image, because CameraX is quite new and not many resources are out there. The use case is that I take a picture, crop it in the middle by some bounds that are visible in the UI, and then pass this cropped image to MLKit for processing.
Given that, is there some place for the ImageAnalysis.Analyzer API? From my understanding, this analyzer is used only for previews, not for the captured image.
My first idea was to use the takePicture method that accepts an OnImageCapturedCallback, but when I tried to access e.g. ImageProxy.height, the app crashed with java.lang.IllegalStateException: Image is already closed, and I could not find any fix for that.
Then I decided to use another overload of the takePicture method, and now I save the image to a file, read it into a Bitmap, crop it, and finally have an image that can be passed to MLKit. But when I look at FirebaseVisionImage, which is passed to FirebaseVisionTextRecognizer, it has a factory method that accepts the Image I get from OnImageCapturedCallback, which suggests I am doing some unnecessary steps.
So my questions are:
Is there some class (CaptureProcessor?) that will take care of cropping the taken image? I suppose I could then use OnImageCapturedCallback and receive an already cropped image.
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing, only post-processing?
I suppose I can achieve what I want with my current approach, but I feel I could be using much more of CameraX than I currently am.
Thanks!
Is there some class (CaptureProcessor?) that will take care of cropping the taken image?
You can set the crop aspect ratio after you build the ImageCapture use case by using the setCropAspectRatio(Rational) method. This method crops from the center of the rotated output image. So basically what you'd get back after calling takePicture() is what I think you're asking for.
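For illustration, a hedged sketch in Java: the builder class name varies across CameraX versions (older releases used ImageCaptureConfig), and the 16:9 value is just an example.

import android.util.Rational;
import androidx.camera.core.ImageCapture;

ImageCapture buildCenterCroppingCapture() {
    ImageCapture imageCapture = new ImageCapture.Builder().build();
    // Crops from the center of the rotated output image:
    imageCapture.setCropAspectRatio(new Rational(16, 9));
    return imageCapture;
}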
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing, only post-processing?
No, it wouldn't make sense in your scenario. As you mentioned, only when doing real-time image processing would you want to use ImageAnalysis.Analyzer.
PS: I'd be interested in seeing the code you use for takePicture() that caused the IllegalStateException.
[Edit]
Taking a look at your code:
imageCapture?.takePicture(executor, object : ImageCapture.OnImageCapturedCallback() {
    override fun onCaptureSuccess(image: ImageProxy) {
        // 1
        super.onCaptureSuccess(image)
        // 2
        Log.d("MainActivity", "Image captured: ${image.width}x${image.height}")
    }
})
At (1), if you take a look at super.onCaptureSuccess(imageProxy)'s implementation, it actually closes the imageProxy passed to the method. Accessing the image's width and height at (2) therefore throws an exception, which is expected, since the image has been closed. The documentation states:
The application is responsible for calling ImageProxy.close() to close the image.
So when using this callback, you should probably not call super...; just use the imageProxy, then manually close it (ImageProxy.close()) before returning from the method.
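A hedged sketch of that pattern in Java (the same idea applies in Kotlin); imageCapture and executor are assumed to be in scope.

import android.util.Log;
import androidx.annotation.NonNull;
import androidx.camera.core.ImageCapture;
import androidx.camera.core.ImageProxy;
import java.util.concurrent.Executor;

void captureOnce(ImageCapture imageCapture, Executor executor) {
    imageCapture.takePicture(executor, new ImageCapture.OnImageCapturedCallback() {
        @Override
        public void onCaptureSuccess(@NonNull ImageProxy image) {
            // Do NOT call super.onCaptureSuccess(image): it closes the image.
            try {
                // Use the image while it is still open, e.g. hand it to MLKit here.
                Log.d("MainActivity", "Captured: " + image.getWidth() + "x" + image.getHeight());
            } finally {
                image.close(); // the app is responsible for closing the ImageProxy
            }
        }
    });
}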
The keywords for this topic:
CustomSurfaceView: three custom SurfaceViews for three different levels.
Canvas: lockCanvas/unlockCanvasAndPost methods (I'm not using a custom bitmap).
Multithreading (each surface runs on a separate thread).
Shapes (shapes drawn on the canvas).
Client/server (architecture).
Flickering (WHY I'M HERE).
We are developing a client/server application and I'm working on the client side. I'm receiving messages from the server containing general data (coordinates, color, width [...]) about paths such as circles, rectangles, lines and other shapes. The web application lets the user draw these on an HTML5 canvas and sends the data to an Android device, which parses the messages and redraws all the shapes. From my own experience on this subject, I learned that the best way to keep control of everything you draw on the canvas is to save it all into a buffer, array, list or something similar, and then reuse it whenever you want (for example, you can use an older path to show, hide, move or simply change something on the canvas). In my opinion, the Android application follows Android development best practices and the OOP paradigm, so I'm not assuming the errors come from bad architecture. In this case, I'm saving the messages on the web client side. When the user draws on the HTML5 canvas, the messages containing the shape info are perfectly reported to the Android canvas, but the problem appears when:
[example]
Consider that you draw 10 objects (10 messages) and you want to delete only one object on the web app canvas. The only way is to clear the whole canvas and redraw all the previous shapes without the deleted one (so you resend 9 messages to the client by looping over the message buffer). This method works perfectly for the web app but causes a flickering problem on the Android client. After too many experiments I found a workaround: a Thread.sleep(100) (whoa! 100 ms is far too much) to parse the messages slowly enough that the SurfaceView thread can read the data correctly (data access goes through a singleton) and write to the canvas's double buffer. Well, it's slow and ugly, but it works! I really don't like this "horrible" workaround, so please help me find an exit strategy.
This is the piece of code where the canvas gets data from the shape containers and draws it if present. The data in each container comes from server messages.
@Override
public void run() {
Canvas canvas = null;
while (running) {
//this is the surface's canvas
try {
canvas = shapesSurfaceHolder.lockCanvas();
synchronized (shapesSurfaceHolder) {
if (shapesSurfaceHolder.getSurface().isValid()) {
if(!Parser.cmdClear){
//draw all the data present
canvas.drawPath(PencilData.getInstance().getPencilPath(),
PencilData.getInstance().getPaint());
canvas.drawPath(RectData.getInstance().getRectPath(),
RectData.getInstance().getPaint());
canvas.drawPath(CircleData.getInstance().getCirclePath(),
CircleData.getInstance().getPaint());
canvas.drawPath(LineData.getInstance().getLinePath(),
LineData.getInstance().getPaint());
canvas.drawText(TextData.getInstance().getText(),
TextData.getInstance().getX(),
TextData.getInstance().getY(),
TextData.getInstance().getPaint());
} else {
//remove all canvas content and clear data.
canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);
for (int i = 0; i < AbstractFactory.SHAPE_NUM; i++) {
abstracFactory.getShape(i).clearData();
}
}
}
}
} finally {
if (canvas != null) {
shapesSurfaceHolder.unlockCanvasAndPost(canvas);
}
}
}
}//end_run()
I can summarize that, apparently, my problem is drawing too quickly.
Note:
Similar concept: Android thread controlling multiple texture views causes strange flickering
Hardware acceleration is enabled.
minSdkVersion 17
Tested on
Tablet Samsung SM-T113
Google Nexus 5
The TextureView issue was due to a bug specific to TextureView. You're using SurfaceView, so that does not apply here.
When drawing on a SurfaceView's Surface, you must update every pixel inside the dirty rect (i.e. the optional arg passed to lockCanvas()) every time. If you don't provide a dirty rect, that means the entire screen must be updated. This is because the Surface is double- or triple-buffered, and swapped when you call unlockCanvasAndPost(). If you lock / clear / unlock, then the next time you lock / draw / unlock, you will not be drawing into the buffer you previously cleared.
If you want to do incremental rendering, you should point your Canvas at an off-screen Bitmap and do all your rendering there. Then just blit the entire bitmap between lock and unlock. The alternative is to store up the drawing commands, starting with the initial clear, and play them all back between lock/unlock.
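A minimal sketch of the off-screen Bitmap approach, assuming the bitmap matches the Surface dimensions; the class and method names here are illustrative.

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.SurfaceHolder;

class OffscreenRenderer {
    private final Bitmap offscreen;
    private final Canvas offscreenCanvas;

    OffscreenRenderer(int width, int height) {
        offscreen = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        offscreenCanvas = new Canvas(offscreen);
    }

    Canvas getOffscreenCanvas() {
        // All incremental drawing (paths, text, clears) goes here,
        // never directly to the Surface.
        return offscreenCanvas;
    }

    void blit(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas(); // no dirty rect: we repaint every pixel
        if (canvas == null) {
            return;
        }
        try {
            canvas.drawBitmap(offscreen, 0, 0, null); // full-frame blit
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }
}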
The phrase "three custom surfaceview" is somewhat concerning if they're all on screen at once. If you have them all at different Z depths (default, media overlay, top) then they will behave correctly, but the system is generally more efficient if you can put everything on one.
Also trying to get access to the color data bytes from the Tango's color camera, I got stuck with the Java API: I was able to connect the Tango camera to a surface for display (but it was only OK for display; there is no easy access to the raw data, nor to the timestamp)... so finally I switched to the C API in native code (latest FERMAT lib and headers) and followed a recommendation I found on Stack Overflow, registering a derived sample code to connectOnFrameAvailable()... (I started from the PointCloudActivity sample for that test).
The first problem I found is somewhat a side effect of registering to that callback: the callback itself usually works fine (it fires regularly), but another callback I also registered, to get the XYZ point clouds, starts failing to fire. As in the sample code I mentioned, the clouds are obtained through an onXYZijAvailable() callback, which the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback doesn't always fail to fire, but it does about half the time during tests. An awful workaround is to send the app to the background and then bring it to the foreground again... this is curious; is this "recovery" related to onPause/onResume low-level stuff? If someone has clues...
By the way, in the Java API the same side effect was observed once the camera texture was connected for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring the YV12 color data from the camera: I register with TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable) and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if the data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // i.e. = 0x32315659 // YCrCb 4:2:0 planar
    //LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ....
}
The problem is that the width, height, and stride information in the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time and is not the expected magic number (here 0x32315659)...
Am I doing something wrong there? (The other info is OK...)
Also, there is apparently only one data format defined here (YV12), but seeing the fisheye images from the demo app, they look like grey-level images; does the fisheye camera use the same (color) format for low-level capture as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did: the only access to the image data is through the C API.
2) Regarding the image: I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code. The format is naked, i.e. it's an organizational structure with no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here.
3) Regarding point cloud returns:
Please note this information is anecdotal and, to some degree, the product of superstition: what works for me only does so sometimes, and may not work at all for you.
Tango does seem to have a remarkable knack for simply ceasing to produce point clouds. I think a lot of it has to do with very sensitive timing internally (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up the timing, where:
A. Debugging at the C level can make point clouds stop coming.
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can cause point clouds to stop coming.
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system will recover if the load decreases, the silvery pattern goes away, and point clouds come back; more commonly the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At least a restart of the app is required for me, and a full tablet reboot every third time or so.
Summarizing, those are my suspicions and countermeasures, but they're based completely on personal experience.
I'm using OpenCV to detect an image. Here is my problem: my function detect_image(mRgba) needs some time to perform its operations and give results. While the function is computing, the camera preview is frozen, because it only shows a new image when the code reaches return inputFrame.rgba(). I would like to know how to make those operations parallel, so the function computes in the background while the camera preview runs at normal speed.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    detect_image(mRgba);
    return inputFrame.rgba();
}
To get a first taste of parallelization, the simple approach would be to use an AsyncTask to process your images:
AsyncTask reference page
A more friendly introduction can be found here:
http://android-developers.blogspot.co.il/2010/07/multithreading-for-performance.html
while this:
http://developer.att.com/developer/forward.jsp?passedItemId=11900176
is a nice all-around introduction to multi-threading on Android.
If you want to just get started, a simple algorithm should work like this (see the sketch after this list):
- from within your onCameraFrame method, check whether an AsyncTask for processing the image is already running
- if the answer is "yes", just show mRgba in the preview window and return
- if the answer is "no", start a new AsyncTask and let it run detect_image on mRgba, making sure the results are saved in the onPostExecute method
With this algorithm, if your system can detect 4 images per second while capturing a preview at 60 fps (for example), you will get smooth video with a new result roughly every 20-30 frames on a single-processor device, under the realistic assumption that detect_image is CPU-intensive while the camera preview/display is I/O-bound.
Capture: x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x....
Processing: 1.......1.......1.......1.....1.......1....
time ------------------------------------>
Starting with Honeycomb, a more refined approach would be to account for the number of cores in your CPU (multicore phones/tablets are increasingly common) and start N AsyncTasks in parallel (one per core), feeding a different preview image to each (perhaps using a thread pool...).
If you stagger the threads by a fixed delay (about the duration of detectImage/N), you should get a constant stream of results at a frequency that is a multiple of the single-threaded version; a sketch follows the diagram below.
Capture: x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x....
Processing: 1.2.3.4.1.2.3.4.1.2.3.4.1.2.3.4.1.2.3.4....
time ------------------------------------>
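Since Honeycomb the default AsyncTask executor runs tasks serially, so parallel execution has to be requested explicitly. A hedged sketch: DetectTask is a hypothetical AsyncTask like the one above, assumed to decrement the counter in its onPostExecute; the counter guard is illustrative.

import android.os.AsyncTask;
import java.util.concurrent.atomic.AtomicInteger;
import org.opencv.core.Mat;

private final int cores = Runtime.getRuntime().availableProcessors();
private final AtomicInteger inFlight = new AtomicInteger(0);

void maybeStartDetection(Mat frameCopy) {
    if (inFlight.getAndIncrement() < cores) {
        // THREAD_POOL_EXECUTOR runs tasks in parallel (the default executor is serial).
        new DetectTask(inFlight).executeOnExecutor(
                AsyncTask.THREAD_POOL_EXECUTOR, frameCopy);
    } else {
        inFlight.decrementAndGet(); // all cores busy: drop this frame
    }
}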
Hope this helps
So I am using the Android camera to take pictures within an Android app. About 90% of my users have no issues, but the other 10% get a picture that comes back pure black or as a weird jumble of pixels.
Has anyone else seen this behavior, or have any ideas why it happens?
Examples (images omitted): one pure black, one jumbled.
I've had similar problems.
The problem, in short, is missing data.
It happens to a Bitmap/stream when the data stream is interrupted for too long or suddenly becomes unavailable.
Another example of where it may occur: downloading and uploading images. If the user suddenly disables Wifi/mobile data, no more data can be transmitted, and you end up with a splattered image.
The image will still display (where "display" means black/splattered; it's still viewable!) but is invalid internally (missing or corrupted information).
If it's not too critical, you can try to decode all the data into a Bitmap object (BitmapFactory.decode*) and test whether the returned Bitmap is null. If it is, the data is probably corrupted.
This only addresses the consequences of the problem, as you can guess.
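A minimal sketch of that check, assuming the capture arrives as a JPEG byte array; the method name is illustrative.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

boolean isDecodable(byte[] jpegBytes) {
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    if (bitmap == null) {
        return false; // undecodable: the data is likely corrupted
    }
    bitmap.recycle(); // we only needed the validity check
    return true;
}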
The better way is to tackle the problem at its root:
Ensure a good connection to your data source (a large enough, robust buffer).
Try to avoid unnecessary casts (e.g. from char to int).
Use the correct type of buffers (either Reader/Writer for character streams, or InputStream/OutputStream for byte streams).
From Android 4.0 on, hardwareAccelerated is set to true by default in the manifest. A hardware-accelerated Canvas does not support Picture drawing, and you will get a black screen...
Please also check whether you use a BitmapFactory.Options object to generate the bitmap, because a few settings on that object can also corrupt the bitmap.