I'm using the popular ZXing project to enable barcode scanning on my Android application.
I want to manually set the width and height of my viewfinder, so I used the following:
intent.putExtra("SCAN_WIDTH", 400);
intent.putExtra("SCAN_HEIGHT", 300);
before sending the intent. However, the app crashes with a NullPointerException at line 279 in CameraManager.java. Some debugging showed that the screenResolution member of configManager is never initialized. Digging further, it looks like surfaceCreated() is not called in time (this is supposed to happen through a callback); at least, that is what it seems like to me, since surfaceCreated() in CaptureActivity.java is responsible for initializing those members of configManager.
I searched here and on Google, but people don't seem to use the SCAN_WIDTH and SCAN_HEIGHT intent extras; they manually set the MIN and MAX width/height values within the ZXing code, which I am trying to avoid. Any help would be appreciated.
The scanner works fine when I am not setting those width/height values via intent.
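For context, the full launch sequence looks roughly like this (a minimal sketch; the "com.google.zxing.client.android.SCAN" action and the SCAN_* extras come from ZXing's Intents class, and the request code is arbitrary):
// Inside an Activity; requires import android.content.Intent;
Intent intent = new Intent("com.google.zxing.client.android.SCAN");
// SCAN_WIDTH / SCAN_HEIGHT request a manually sized viewfinder.
intent.putExtra("SCAN_WIDTH", 400);
intent.putExtra("SCAN_HEIGHT", 300);
// The result (or cancellation) arrives in onActivityResult.
startActivityForResult(intent, 0); // request code is arbitrary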
EDIT: After updating my version of the ZXing library, this is no longer an issue. It also fixed the front camera issue I was having with the 2012 Nexus 7.
screenResolution is definitely set, in initFromCameraParameters. It happens when the driver opens. It's OK if surfaceCreated happens a bit later since the onResume method registers a callback to initialize the camera after the surface is created, if it's not already available.
onResume calls setManualFramingRect even if the driver isn't initialized yet, but in that case it just saves the request in requestedFramingRectWidth and requestedFramingRectHeight and applies it later.
I think this case is handled correctly, but as ever I can't be 100% sure there's not an oversight. Maybe you can say more about where you think the problem is given this info.
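For reference, the deferral looks roughly like this (paraphrased from ZXing's CameraManager, so treat details as approximate; buildCenteredRect is my shorthand for the clamping/centering math in the real source):
public synchronized void setManualFramingRect(int width, int height) {
    if (initialized) {
        // Driver is open, so screenResolution is known: build the rect now.
        framingRect = buildCenteredRect(width, height);
    } else {
        // Driver not ready yet: stash the request; it is applied later,
        // once initFromCameraParameters has set screenResolution.
        requestedFramingRectWidth = width;
        requestedFramingRectHeight = height;
    }
}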
I'm setting the target resolution as below:
var imageResolution = Size(480, 640)
imageCapture = ImageCapture.Builder()
    .setTargetResolution(imageResolution)
    .build()
Now, I need to change the resolution. So, I tried
var imageResolution = Size(1200, 1600)
imageCapture?.updateSuggestedResolution(imageResolution)
but it gives these errors:
error 1
UseCase.updateSuggestedResolution can only be called from within the same library group (groupId=androidx.camera)
error 2
Photo capture failed: The completer object was garbage collected - this future would otherwise never complete. The tag was: issueTakePicture[stage=0]
I wasn't able to figure out when errors 1 and 2 occur, but I noticed that error 2 prevents the image from being saved.
Also, if I tried to change the resolution at least once, every image that was taken and saved successfully came out at 1080*1080 only. If I never tried to change the resolution after the ImageCapture.Builder() step, images kept the resolution I specified.
Why does this happen, and how do I avoid these errors?
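Regarding error 1: updateSuggestedResolution is annotated as restricted to the androidx.camera library group, so apps are not meant to call it. A sketch of the supported approach is to build a new use case and rebind it (this assumes the cameraProvider, cameraSelector, preview, and lifecycleOwner objects of a typical CameraX setup; Java):
// CameraX use cases are effectively immutable once bound; to change the
// resolution, build a fresh ImageCapture and swap it in.
ImageCapture newCapture = new ImageCapture.Builder()
        .setTargetResolution(new Size(1200, 1600))
        .build();
cameraProvider.unbind(imageCapture);   // detach the old use case
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, newCapture);
imageCapture = newCapture;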
ok, I had the same issue today (Error 2).
I was handling onActivityResult the old (deprecated) way: after receiving the result, the app opens the camera and takes the picture. For some reason, if the camera is started after the activity result, it throws the garbage-collected error.
To solve it, I switched to the new way of handling activity results.
These links helped me:
OnActivityResult method is deprecated, what is the alternative?
https://developer.android.com/training/basics/intents/result#java
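For reference, a minimal sketch of that replacement API (Java; the launcher field name is mine):
// Register once, e.g. as a field in the Activity/Fragment; the callback
// replaces the deprecated onActivityResult override.
private final ActivityResultLauncher<Intent> cameraLauncher =
        registerForActivityResult(new ActivityResultContracts.StartActivityForResult(), result -> {
            if (result.getResultCode() == android.app.Activity.RESULT_OK) {
                // Start the camera capture from here, not from a
                // deprecated onActivityResult override.
            }
        });
// Later, instead of startActivityForResult(intent, REQUEST_CODE):
// cameraLauncher.launch(intent);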
I instantiate the following gameObject, which contains an Animator with "Always Animate" mode on. The animation runs for 340 ms, after which I destroy the gameObject.
The gameObject Inspector properties:
I instantiate it using the following code:
instancia = (Instantiate(cardAnimation, new Vector3(0, 0, 0), Quaternion.identity) as GameObject).GetComponent<Image>();
instancia.rectTransform.SetParent(transform);
StartCoroutine(KillOnAnimationEnd());
Here is the Coroutine:
private IEnumerator KillOnAnimationEnd()
{
yield return new WaitForSeconds(0.34f);
DestroyImmediate(instancia);
}
Here is how the animation looks when simulated in Unity (PC, Windows):
But on Android, after I open the chest it waits 340 ms with nothing happening and then shows the information above. Does this have something to do with the platform, or is it some Unity or code-related issue?
NOTE: I also have another animation in another scene that is just an already-instantiated gameObject in the Hierarchy with Always Animate on, and it works on Android.
--EDIT--
So I ran the newest version of the app in an emulator at roughly 1080x480 and the animation showed just as on the PC; running on a 720p smartphone also worked. The only problem I'm still having is with the QuadHD resolution of my Galaxy S6: everything shows except the animation. I even tried running the animation in a loop without any script, but it still doesn't show up on the Galaxy's screen.
Given this new information, I think it might change the perspective of the answers a bit and perhaps help someone else solve the same problem in the future.
Okay, I figured out the problem: it's something to do with "rotation" in animations using Unity3D in 2D mode. I'm going to report it to Unity so it gets fixed.
The solution: animate your UI using only scale/position; if rotation is used, the animation will not show on high-resolution displays.
I am pretty sure your WaitForSeconds(0.34f) is not working properly because there is no such thing as a yield keyword in Java. I recommend you use an Invoke call instead to run the method that destroys your GameObject.
According to an official Google team statement, manually changing CONTROL_AE_EXPOSURE_COMPENSATION is broken on Android 5.1. I've been looking for a workaround for a couple of days, and the only one I found involves SENSOR_INFO_SENSITIVITY_RANGE. However, I ran into some difficulties using it. My code looks like this:
if (!modeDisabled) {
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
    modeDisabled = true;
}
range1 = characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
minmin = range1.getLower();
maxmax = range1.getUpper();
int iso = ((i * (maxmax - minmin)) / 100 + minmin);
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, iso);
mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler);
Of course, the 'i' value is the progress value taken from the SeekBar, and everything is enclosed in the onProgressChanged function.
The problem is that there are no visible changes when manipulating the SeekBar. I'd be really grateful for any help.
CONTROL_AE_EXPOSURE_COMPENSATION isn't broken on Android 5.1 in general; it was disabled only on the Nexus 6 (and will be re-enabled in a future update).
If you're disabling auto-exposure, you probably also need to set the exposure time, in addition to the sensitivity. You should preferably set the frame duration as well, though the defaults for both are probably 1/30 s, which is reasonable. You can also copy the latest values for those from the most recent capture result produced while auto-exposure was still running.
That said, you should still see some sort of change here. Is it possible that you're overwriting your capture request elsewhere right after you set this one as the repeating request? You can check the returned capture results to see what sensitivity setting the camera device is actually receiving.
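To illustrate, a minimal sketch that sets all three together, reusing the fields from the question (the 1/30 s values are placeholders; clamp everything to the ranges reported by CameraCharacteristics):
// Disable AE, then set sensitivity, exposure time, and frame duration together.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, iso);
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 33333333L);  // ~1/30 s, in nanoseconds
mPreviewRequestBuilder.set(CaptureRequest.SENSOR_FRAME_DURATION, 33333333L); // ~30 fps, in nanoseconds
mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler);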
I'm also trying to get access to the color data bytes from the Tango's color camera. With the Java API I got stuck: I could connect the Tango camera to a surface for display, but that was only good for display, with no easy access to the raw data or the timestamp. So I finally switched to the C API in native code (latest Fermat lib and headers) and followed a recommendation I found on Stack Overflow, registering a callback derived from the sample code with connectOnFrameAvailable() (I started from the PointCloudActivity sample for this test).
The first problem I found is a side effect of registering that callback: it usually works fine (the callbacks get fired regularly), but then another callback I also registered, to get XYZ clouds, starts failing to fire. As in the sample code I mentioned, clouds are delivered through an onXYZijAvailable() callback, which the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback doesn't always fail to fire, but it does roughly half the time during tests. An awful workaround is to send the app to the background and then bring it to the foreground again. This is curious: is this "recovery" related to onPause/onResume low-level behavior? If someone has clues...
By the way, I observed the same side effect with the Java API once I connected the camera texture for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring YV12 color data from the camera. I register with TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable) and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if the data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // i.e. = 0x32315659 // YCrCb 4:2:0 Planar
    //LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ...
}
The problem is that the width, height, and stride information in the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time and is not the expected magic number (here 0x32315659).
Am I doing something wrong there? (The other info is OK...)
Also, there is apparently only one data format defined here (YV12), but judging from the fisheye images in the demo app, they look like grey-level images. Does the fisheye camera use the same low-level (color) capture format as the RGB cam?
1) Regarding the image from the camera, I came to the same conclusion you did: the only access to image data is through the C API.
2) Regarding the image: I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code. The format is naked, i.e. it's purely an organizational structure with no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here; a minimal decode sketch also follows at the end of this answer.
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition - what works for me only does that sometimes, and may not work at all for you
Tango does seem to have a remarkable knack for simply ceasing to produce point clouds. I think a lot of it has to do with very sensitive timing internally (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up that timing, where:
A. Debugging at the C level can make point clouds stop coming
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can make point clouds stop coming
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases; the silvery pattern goes away and point clouds come back. More commonly, the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At that point at least a restart of the app is required for me, and a full tablet reboot every third time or so.
Summarizing, those are my suspicions and countermeasures, but they're based completely on personal experience.
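As referenced in point 2, here is a minimal YV12-to-ARGB decode sketch in Java (assuming a tightly packed buffer where stride == width; the helper names are mine):
// YV12 is planar: a full-resolution Y plane, then a quarter-resolution
// V (Cr) plane, then a quarter-resolution U (Cb) plane.
static int[] yv12ToArgb(byte[] data, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    int quarter = frameSize / 4;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yVal = data[y * width + x] & 0xFF;
            int c = (y / 2) * (width / 2) + (x / 2); // chroma index (2x2 subsampled)
            int v = (data[frameSize + c] & 0xFF) - 128;           // Cr
            int u = (data[frameSize + quarter + c] & 0xFF) - 128; // Cb
            // BT.601 conversion
            int r = clamp(yVal + (int) (1.402f * v));
            int g = clamp(yVal - (int) (0.344f * u + 0.714f * v));
            int b = clamp(yVal + (int) (1.772f * u));
            argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }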
So I am using the Android camera to take pictures within an Android app. About 90% of my users have no issues, but the other 10% get back a picture that is pure black or a weird jumble of pixels.
Has anyone else seen this behavior, or have any ideas why it happens?
Examples:
Black:
Jumbled:
I've had similar problems like this.
The problem, in short, is missing data.
It happens to a Bitmap/stream when the data stream is interrupted for too long or suddenly becomes unavailable.
Another example where it may occur: downloading and uploading images.
If the user suddenly disables Wi-Fi or the mobile network, no more data can be transmitted.
You end up with a splattered image.
The image will still display (where "display" can mean black or splattered; it's still viewable!) but is internally invalid (missing or corrupted information).
If it's not too critical, you can try to load all the data into a Bitmap object (BitmapFactory.decode*) and test whether the returned Bitmap is null. If it is, the data is probably corrupted; a sketch of this check follows below.
This only treats the symptom of the problem, as you can guess.
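A minimal sketch of that check (the method name is mine):
// Returns null when the bytes do not decode to a valid image, which
// suggests the data is truncated or corrupted.
static Bitmap decodeOrNull(byte[] data) {
    if (data == null || data.length == 0) return null;
    return BitmapFactory.decodeByteArray(data, 0, data.length);
}
// Usage: if decodeOrNull(bytes) == null, re-capture or re-download
// instead of saving the broken image.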
The better way would be to tackle the problem at its root:
Ensure a good connection to your data source (a large enough, robust buffer).
Avoid unnecessary casts (e.g. from char to int).
Use the correct type of buffer (either Reader/Writer for character streams, or InputStream/OutputStream for byte streams).
From Android 4.0 they set hardwareAccelerated to true by default in the manifest. A hardware-accelerated Canvas does not support Pictures, and you will get a black screen...
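If you do need Picture drawing, one workaround is to force software rendering for the affected view (setLayerType is the standard View API for this; myCustomView is a placeholder):
// Fall back to a software layer so unsupported hardware-accelerated
// Canvas operations (such as drawPicture) render again.
myCustomView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);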
Please also check whether you use a BitmapFactory.Options object to generate the bitmap, because some of that object's settings can also leave the bitmap corrupted.