Does Ionic Camera Preview plugin provide a full-quality pic? - android

There are 2 ways (that I'm aware of) to capture images/videos from the camera in an Ionic app:
Native camera app
'Camera Preview' library
I know the first option will allow users to maximise the camera's potential (quality, megapixels, etc.), but I need the flexibility of adding an overlay (basically I need the flexibility of the second option).
Question
In the docs I can only see a 'quality' argument as part of the 'takePicture' call; how would the maximum value of 100 here compare to the quality of a picture I'd have got from the native app?
I know this is called 'camera preview', but ideally I need the best image quality the camera is capable of capturing (the same as the native app).
https://github.com/cordova-plugin-camera-preview/cordova-plugin-camera-preview

Take snapshot of the camera preview. The resulting image will be the same size as specified in startCamera options. The argument quality defaults to 85 and specifies the quality/compression value: 0=max compression, 100=max quality.
CameraPreview.takePicture({width: 640, height: 640, quality: 100}, function(base64PictureData) {
  // The callback receives base64-encoded JPEG data (or a file path, depending on the startCamera options).
  // One simple example: if you are going to use it inside an HTML img src attribute, you would do the following:
  var imageSrcData = 'data:image/jpeg;base64,' + base64PictureData;
  console.log(imageSrcData);
});
For more info, check the documentation.

Related

Mobile - Scan text from camera, without taking a picture

Is there a known API or way to SCAN the text from a card without actually manually saving (and uploading) the picture? (iOS and Android)
Then I would need to know if that API can determine the marquee within the camera that should be scanned.
I want behaviour similar to that of QR scanners or augmented reality apps, where the user just points the camera and the action occurs.
I have printed cards with a redeem code as text, and adding a QR code would require changing the current card production.
The text is inside a white box, which may make it easier to recognise.
On iOS, you would use CIDetector with an AVCaptureSession. It can process capture session output buffers as they come in from the camera, without having to take a picture, and provide text scanning.
For text detection, using CIDetector with CIDetectorTypeText will return areas that are likely to have text in them, but you would have to perform additional processing for Optical Character Recognition.
You could also use OpenCV for a solution that is not out of the box.
You can try this: https://github.com/gali8/Tesseract-OCR-iOS
Usage:
// Specify the image Tesseract should recognize on
tesseract.image = [[UIImage imageNamed:@"image_sample.jpg"] g8_blackAndWhite];
// Optional: Limit the area of the image Tesseract should recognize on to a rectangle
tesseract.rect = CGRectMake(20, 20, 100, 100);
// Optional: Limit recognition time to a few seconds
tesseract.maximumRecognitionTime = 2.0;
// Start the recognition
[tesseract recognize];

connectOnFrameAvailable() provides TangoImageBuffer with curious format infos

While also trying to get access to the color data bytes from the Tango's color camera, I was stuck with the Java API: I could connect the Tango camera to a surface for display, but that was only good for display, with no easy access to the raw data, nor to a timestamp. So I finally switched to the C API in native code (latest FERMAT lib and headers) and followed a recommendation I found on Stack Overflow, registering a callback adapted from the sample code with connectOnFrameAvailable() (I started from the PointCloudActivity sample for that test).
The first problem I found is something of a side effect of registering that callback: it usually works fine (the callback fires regularly), but then another callback that I also registered, to get XYZ point clouds, starts failing to fire. As in the sample code I mentioned, the clouds are obtained through an onXYZijAvailable() callback that the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback doesn't fail to fire every time, but it does about half of the time during tests, and the only (awful) workaround is to send the app to the background and then bring it to the foreground again... This is curious: is this "recovery" related to low-level onPause/onResume behaviour? If someone has clues...
By the way, the same side effect was observed with the Java API once the camera texture was connected for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring YV12 color data from the camera:
I register through TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable)
and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if the data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // i.e. = 0x32315659  // YCrCb 4:2:0 planar
    // LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ...
}
The problem is that the width, height, and stride information in the received TangoImageBuffer structure seems valid (1280x720, ...), BUT the format returned changes every time, and is not the expected magic number (here 0x32315659)...
Am I doing something wrong there? (The other info is OK...)
Also, there is apparently only one data format defined here (YV12), but the fisheye images from the demo app look like grey-level images; does the fisheye camera use the same (color) format for low-level capture as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did - the only access to the image data is through the C API.
2) Regarding the image - I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code - the format is naked, i.e. it's an organizational structure and has no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here.
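For reference, here is a minimal Java sketch of how a YV12 buffer is typically laid out and converted to ARGB. It assumes the chroma planes use half the luma stride and standard BT.601 coefficients; the real Tango buffer (and the code linked above) may pad the planes differently, so treat this as an illustration of the layout rather than a drop-in decoder.

// Generic YV12 (YCrCb 4:2:0 planar) to ARGB conversion sketch.
// Assumed layout: full-resolution Y plane (yStride * height bytes),
// followed by a quarter-resolution Cr (V) plane, then a Cb (U) plane,
// each with a stride of yStride / 2 (an assumption - check buffer->stride).
public final class Yv12Decoder {

    public static int[] toArgb(byte[] data, int width, int height, int yStride) {
        int chromaStride = yStride / 2;
        int ySize = yStride * height;
        int vOffset = ySize;                                // Cr (V) plane comes first in YV12
        int uOffset = ySize + chromaStride * (height / 2);  // Cb (U) plane follows
        int[] argb = new int[width * height];

        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = (data[row * yStride + col] & 0xFF) - 16;
                if (y < 0) y = 0;
                int chromaIndex = (row / 2) * chromaStride + (col / 2);
                int v = (data[vOffset + chromaIndex] & 0xFF) - 128;
                int u = (data[uOffset + chromaIndex] & 0xFF) - 128;

                // BT.601 integer approximation of the YCbCr -> RGB transform
                int r = clamp((1192 * y + 1634 * v) >> 10);
                int g = clamp((1192 * y - 833 * v - 400 * u) >> 10);
                int b = clamp((1192 * y + 2066 * u) >> 10);
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int value) {
        return value < 0 ? 0 : Math.min(value, 255);
    }
}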
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition - what works for me only does that sometimes, and may not work at all for you
Tango does seem to have a remarkable knack for simply stopping production of point clouds. I think a lot of it has to do with very sensitive timing internally (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up the timing, where:
A. Debugging at the C level can make point clouds stop coming
B. Bugs in the native or Java code that cause hiccups in the threads that are handling the callbacks can cause point clouds to stop coming
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming - this is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system will recover if the load decreases, the silvery pattern goes away, and point clouds come back - more commonly the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image - at least a restart of the app is required for me, and a full tablet reboot every 3rd time or so
Summarizing, those are my suspicions and countermeasures, but they're based completely on personal experience.

CWAC Camera - What's the best way to customize the ImageCleanupTask?

I'm using the cwac-camera library to take photos with a custom in-app camera.
I'm overriding adjustPreviewParameters in SimpleCameraHost and am setting the JPEG quality.
@Override
public Parameters adjustPreviewParameters(Parameters parameters) {
    super.adjustPreviewParameters(parameters);
    parameters.setJpegQuality(80);
    return (parameters);
}
Unfortunately, as per this question, the setJpegQuality method doesn't work on some devices (e.g. the S3).
I can see that the cwac-camera ImageCleanupTask always saves the manipulated image at 100% JPEG quality.
What's the best way to customize the ImageCleanupTask?
Should I expose a setJpegQuality method in PictureTransaction? Or do we want a more versatile solution (like allowing the ImageCleanupTask to be injected)?
I can see that the cwac-camera ImageCleanupTask always saves the manipulated image at 100% JPEG quality.
Ideally, that would be configurable. There are lots of things the library would ideally do. :-)
What's the best way to customize the ImageCleanupTask?
If you mean "how would one get the JPEG percentage in there?", augment PictureTransaction.
Should I expose a setJpegQuality method in PictureTransaction?
I would do jpegQuality(), as PictureTransaction uses the builder/fluent API pattern.
Note that with this change, you would want to remove parameters.setJpegQuality(80); from your existing code. Otherwise, the image will be degraded twice, once on capture (for devices that support it) and once when the image is written to disk, and that's probably not what you want.
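For illustration, a rough sketch of what that augmentation might look like. The field and accessor below are hypothetical additions, not part of the published cwac-camera API, and the actual ImageCleanupTask wiring would differ:

// Hypothetical fragment: only the added members of PictureTransaction are shown,
// following the library's builder/fluent style.
public class PictureTransaction {
    private int jpegQuality = 100;  // current behaviour: images are written at 100% quality

    public PictureTransaction jpegQuality(int quality) {
        this.jpegQuality = quality;  // 0-100, handed to Bitmap.compress() at save time
        return (this);
    }

    int getJpegQuality() {
        return (jpegQuality);
    }
}

ImageCleanupTask would then read getJpegQuality() from the transaction when it writes the manipulated image with Bitmap.compress(), instead of hard-coding 100.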

GPUimageVideoCamera for android

I am using the GPUImage library to compress a video in my iOS app (GPUImageVideoCamera):
https://github.com/BradLarson/GPUImage/
I have worked with it on iOS and it is very fast.
I want to do the same in my Android app, but it seems that the GPUImageMovie class doesn't exist in the Android library:
https://github.com/CyberAgent/android-gpuimage/tree/master/library/src/jp/co/cyberagent/android/gpuimage
It seems that the Android library only works on images (no video).
Does anyone know if this library can do the job? If not, has someone developed the full GPUImage library for Android? If not, what is the best library I can use that can do the job as fast as the GPUImage library does?
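For context, the Android port's image-oriented API looks roughly like this. This is a minimal sketch based on my reading of the android-gpuimage README; class and package names may differ between versions:

import android.content.Context;
import android.graphics.Bitmap;
import jp.co.cyberagent.android.gpuimage.GPUImage;
import jp.co.cyberagent.android.gpuimage.GPUImageGrayscaleFilter;

public class StillImageFilterExample {
    // Applies a GPU filter to a single Bitmap; there is no equivalent of
    // GPUImageVideoCamera / GPUImageMovie for live video in this API.
    public static Bitmap applyGrayscale(Context context, Bitmap source) {
        GPUImage gpuImage = new GPUImage(context);
        gpuImage.setImage(source);
        gpuImage.setFilter(new GPUImageGrayscaleFilter());
        return gpuImage.getBitmapWithFilterApplied();
    }
}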
This is what GPUImageVideoCamera does on iOS (filtering live video):
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, like the following:
videoCamera.audioEncodingTarget = movieWriter;
Is there a library that can do the same in android?

Android Front Facing Camera - empty (0kb) file or start failed on some devices

I had this problem with my app (ScareApp), which uses the front-facing camera to record video. I "think" I've finally resolved the issue, so I thought I would post it here for any developers who run into the same thing...
Basically..
The Android MediaRecorder allows you to define the video and audio encoder, and according to the docs, DEFAULT can be used for each.
However, this refers to the main camera's settings, which are often of a far higher spec than the front-facing camera's.
DEFAULT on the Droid Razr, for example, selects an encoding (MPEG_4_SP) that isn't available for the front-facing camera, and this results in an empty (0kb) file being produced (or, on some other devices, a "Camera 100 - start failed" error).
My other option was to use the CamcorderProfile.get method to look up the QUALITY_HIGH settings, but again, this by default uses the main camera.
To get around this, you can pass the ID of the front-facing camera by using
CamcorderProfile.get(<CameraID>, CamcorderProfile.QUALITY_HIGH);
My current work around is as follows:
CamcorderProfile profile = CamcorderProfile.get(FrontFacingCameraId, CamcorderProfile.QUALITY_HIGH);
if (profile != null) {
    _recorder.setAudioEncoder(profile.audioCodec);
    _recorder.setVideoEncoder(profile.videoCodec);
} else {
    // Default to basic H263 and AMR_NB if the profile is not found
    _recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    _recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
}
Or, alternatively, you can skip setting the encoders and just use
_recorder.setProfile(profile);
But as my app allows the user to select the resolution, I need to set the encoders.
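In case it helps, here is one way the FrontFacingCameraId used above could be obtained. This is a sketch using the legacy android.hardware.Camera API (the API generation this answer targets), not code from the original post:

// Sketch: find the ID of the front-facing camera with the legacy android.hardware.Camera API.
// Returns -1 if the device has no front-facing camera.
private static int findFrontFacingCameraId() {
    Camera.CameraInfo info = new Camera.CameraInfo();
    for (int id = 0; id < Camera.getNumberOfCameras(); id++) {
        Camera.getCameraInfo(id, info);
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            return id;
        }
    }
    return -1;
}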
Hopefully this will help someone and save the time and hassle it has caused me!
Cheers,
Mark
