Crop Image in Titanium in the Preview Screen - android

I am creating an application which uses the camera/gallery. I take a photo using the camera, and once the picture is taken, iOS automatically displays a preview screen that lets me move and scale the image as required. On Android, I created a preview window manually.
But I want to crop the image to a resolution of 610x320 pixels.
Here is the code for taking the image:
Ti.Media.showCamera({
    success: function(event) {
        if (event.mediaType == Ti.Media.MEDIA_TYPE_PHOTO) {
            var image = event.media;
            var ImageFactory = require('ti.imagefactory');
            var newBlob = ImageFactory.imageAsCropped(image, { width: 610, height: 320 });
            imgvwCapturedImage.image = newBlob; // imgvwCapturedImage is an image view
        }
    },
    cancel: function() {},
    error: function(error) {
        alert("Sorry, unable to process now. Please retry later.");
    },
    saveToPhotoGallery: true,
    allowEditing: true,
    mediaTypes: [Ti.Media.MEDIA_TYPE_PHOTO]
});
I was able to crop the image using the ImageFactory module only after selecting the photo from the preview screen. Is there any way to do the same on the preview screen itself, so that the user can see which area is going to be cropped?
Any help will be appreciated.

Did you try an overlay? Just create a resizable view that the user can manipulate (to select a portion of the image) and add it as the overlay of CameraOptionsType.
http://docs.appcelerator.com/titanium/latest/#!/api/CameraOptionsType-property-overlay

I have created my own preview screen for iOS and cropped the image with the help of a scrollView and the ImageFactory module. Now it's working perfectly. You may find sample code here. However, this will not work for Android devices.

Related

Android GPUImage setImage and getBitmapWithFilterApplied cause screen to flicker

I have been using this GitHub project as the starting point for my code: https://github.com/xizhang/camerax-gpuimage
The code shows the camera view with GPUImage filters applied to it.
I also want to analyze the bitmap with the filter applied in order to get some analytics (percent red/green/blue in the image).
I have been successful in showing the default camera view to the user as well as the filter I created.
By commenting out the setImage line of code, I have been able to get the analytics of the filtered image, but when I try to do both at the same time the screen flickers. I changed the startCameraIfReady function to get the filtered image as follows:
@SuppressLint("UnsafeExperimentalUsageError")
private fun startCameraIfReady() {
    if (!isPermissionsGranted() || cameraProvider == null) {
        return
    }
    val imageAnalysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    imageAnalysis.setAnalyzer(executor, ImageAnalysis.Analyzer {
        var bitmap = allocateBitmapIfNecessary(it.width, it.height)
        converter.yuvToRgb(it.image!!, bitmap)
        it.close()
        gpuImageView.post {
            // These two lines conflict, causing the screen to flicker.
            // I can comment out one or the other and it works great,
            // but running both at the same time causes issues.
            gpuImageView.gpuImage.setImage(bitmap)
            val filtered = gpuImageView.gpuImage.getBitmapWithFilterApplied(bitmap)
            /*
               Analyze the filtered image...
               Print details about the image here
            */
        }
    })
    cameraProvider!!.unbindAll()
    cameraProvider!!.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, imageAnalysis)
}
When I try to get the filtered bitmap, it seems to conflict with the setImage line of code and causes the screen to flicker, as shown in the video below. I can either show the preview to the user or analyze the image, but not both at the same time. I have tried running them synchronized as well as each on its own background thread. I have also tried adding another ImageAnalysis analyzer and binding it to the camera lifecycle (one for the preview and the other to get the filtered bitmap); the screen still flickers, but less often.
https://imgur.com/a/mXeuEhe
If you are going to get the Bitmap with the filter applied, you don't even need the GPUImageView. You can just get the Bitmap and then set it on a regular ImageView. This is how your analyzer could look:
ImageAnalysis.Analyzer {
    var bitmap = allocateBitmapIfNecessary(it.width, it.height)
    converter.yuvToRgb(it.image!!, bitmap)
    it.close()
    // gpuImage is a plain GPUImage instance here; no GPUImageView involved
    val filteredBitmap = gpuImage.getBitmapWithFilterApplied(bitmap)
    regularImageView.post {
        regularImageView.setImageBitmap(filteredBitmap)
    }
}
Please be aware that the original GitHub sample is inefficient, and the sample above is even worse, because they convert the output to a Bitmap before feeding it back to the GPU. For the best performance when augmenting the preview stream, please see CameraX's core test app for how to access the preview Surface via OpenGL.

How to get ARCore AcquireCameraImageBytes() in color?

I've been stuck on this problem for days. I am trying to do some image processing in my project. I have looked over the computer vision example ARCore provides, but it only shows how to access the camera frame in black and white. I need color for my images. I've already looked at Save AcquireCameraImageBytes() from Unity ARCore to storage as an image and had no luck.
I have a function called GetImage() that is supposed to return the camera frame as a texture.
public Texture GetImage() {
    if (!Frame.CameraImage.AcquireCameraImageBytes().IsAvailable) {
        return null;
    }

    var ptr = Frame.CameraImage.AcquireCameraImageBytes();
    var bufferSize = ptr.Width * ptr.Height * 4;

    if (_tex == null) {
        _tex = new Texture2D(ptr.Width, ptr.Height, TextureFormat.RGBA32, false, false);
    }
    if (_bytes == null) {
        _bytes = new byte[bufferSize];
    }

    Marshal.Copy(ptr.Y, _bytes, 0, bufferSize);
    _tex.LoadRawTextureData(_bytes);
    _tex.Apply();
    return _tex;
}
I run into two problems with this code.
First, after a few seconds my app freezes with the error "failed to acquire camera image with status error resources exhausted".
The second issue is that, if I do manage to display the image, it is not in color and is repeated four times, like in this post: ARCore for Unity save camera image.
Does anyone have a working example of accessing ARCore images in color or know what is wrong with my code?
With "Frame.CameraImage.AcquireCameraImageBytes();" you get a greyscale image in the YUV-420-888 format (see doc) So with the right YUV to RGB transformation you should get a color camera image. (But I didn't find a right YUV to RGB transformation, yet)
I had a similiar questsion: ARCore Save Camera Image (Unity C#) on Button click in my updated question I posted code which get me the greyscale image from AcquireCameraImagesBytes().
If you want to do image processing, did you already looked at the Unity Computer Vision Example. example? See the _OnImageAvailable With the TextureReader API you can get colored images.
Also this github issue could be helpful: https://github.com/google-ar/arcore-unity-sdk/issues/221

Possible to make android watch face a picture from my gallery?

Is it possible to make an app that will take a picture from a user's phone gallery and convert it into an Android Wear watch face?
I've been reading up on these Android articles:
https://developer.android.com/training/wearables/watch-faces/index.html
https://developer.android.com/training/wearables/watch-faces/drawing.html
and it seems that if I can get a user to select a picture from the gallery and convert it to a bitmap, it should be plausible to set that as the watch face. I'm definitely a beginner programmer when it comes to Android and APKs.
Confirmation from a more advanced Android developer would be great.
Now where I'm getting confused is whether the picking of the picture happens on the user's phone, which then sends it to the Android Wear app, or whether the wearable app can access the phone's gallery and select it directly. Does anyone know if wearable apps can access the gallery of a user's phone?
Assuming I already have a reference to the selected image, it would be something like this? Correct me if I'm wrong. (Taken from the second article, under "Initialize watch face elements".)
@Override
public void onCreate(SurfaceHolder holder) {
    super.onCreate(holder);
    // configure the system UI (see next section)
    ...
    // load the background image
    Resources resources = AnalogWatchFaceService.this.getResources();
    // at this point the user should have already picked the picture they want,
    // so set "backgroundDrawable" to the picture the user selected
    int idOfUserSelectPicture = GetIdOfUserSelectedPictureSomehow();
    Drawable backgroundDrawable = resources.getDrawable(idOfUserSelectPicture, null);
    // original implementation from the article:
    // Drawable backgroundDrawable = resources.getDrawable(R.drawable.bg, null);
    mBackgroundBitmap = ((BitmapDrawable) backgroundDrawable).getBitmap();
    // create graphic styles
    mHourPaint = new Paint();
    mHourPaint.setARGB(255, 200, 200, 200);
    mHourPaint.setStrokeWidth(5.0f);
    mHourPaint.setAntiAlias(true);
    mHourPaint.setStrokeCap(Paint.Cap.ROUND);
    ...
    // allocate a Calendar to calculate local time using the UTC time and time zone
    mCalendar = Calendar.getInstance();
}
Thank you for any and all help.
The way to implement this would be to create a configuration Activity that runs on the phone and picks an image from your device. You can then send this image as an Asset via the Data Layer (http://developer.android.com/training/wearables/data-layer/index.html); it will be received on the watch side, where you can then make it the background of the watch face.
It is not possible for an Android Wear device to see the photo collection on your phone; they are totally separate devices, and nothing is shared by default unless you write an application that does this.
The Data Layer sample shows how to take a photo on the phone, and then send it to the wearable: https://github.com/googlesamples/android-DataLayer
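As a rough sketch of the phone side (this uses the GoogleApiClient-based Wearable Data Layer API from those docs; the path "/watchface/background" and the key "background" are made-up names for illustration):
// Hypothetical helper: pack the chosen Bitmap into an Asset.
private static Asset toAsset(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    return Asset.createFromBytes(stream.toByteArray());
}

// Put the Asset on the Data Layer so the watch side can receive it.
private void sendBackground(GoogleApiClient client, Bitmap userPicture) {
    PutDataMapRequest request = PutDataMapRequest.create("/watchface/background"); // illustrative path
    request.getDataMap().putAsset("background", toAsset(userPicture));             // illustrative key
    Wearable.DataApi.putDataItem(client, request.asPutDataRequest());
}
On the watch side, a listener for onDataChanged() can load the Asset back into a Bitmap and assign it to mBackgroundBitmap in the watch face service.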

How to get camera screen in android?

In my Android app, I am trying to do some image recognition, and I want to open the camera and see the camera-generated images before even taking a picture, like the camera's browse (preview) mode.
Basically, as you browse, I want to continuously grab the current frame, read it, and generate some object that updates some text on the screen. Something like this:
@Override
// this gets called every 5 seconds while in camera browse mode;
// bitmap is the current image from the camera
public void getScreen(Bitmap bitmap) {
    MyData data = myalgorithm(bitmap);
    displayCountOnScreen(data);
}
I saw this app https://play.google.com/store/apps/details?id=com.fingersoft.cartooncamera&hl=en
and in camera browse mode, they change the screen and put some other GUI stuff on the screen. I want to do that too.
Anyone know how I can do this?
Thanks
If all you want to do is put some GUI elements on the screen, then there is no need to fetch all the preview frames as Bitmaps (though you could do that as well, if you want):
Create a layout with a SurfaceView for where you want the video data to appear, and then put other views on top.
In onCreate, you can get it like this:
surfaceView = (SurfaceView)findViewById(R.id.cameraSurfaceView);
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this); // For when you need to know when it changes.
surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
When you create a camera, you need to pass it a Surface to display the preview on, see:
http://developer.android.com/reference/android/hardware/Camera.html#setPreviewDisplay(android.view.SurfaceHolder):
Camera camera = ...
...
camera.setPreviewDisplay(surfaceHolder);
camera.startPreview();
If you want to do image recognition, what you want are the raw image bytes to work with. In this case, register a PreviewCallback; see here:
http://developer.android.com/reference/android/hardware/Camera.html#setPreviewCallbackWithBuffer(android.hardware.Camera.PreviewCallback)
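A minimal sketch with the android.hardware.Camera API from those docs (preview frames arrive in NV21 format by default; the analysis itself is just a placeholder):
// Allocate one reusable buffer sized for NV21 preview frames, then register the callback.
Camera.Size previewSize = camera.getParameters().getPreviewSize();
int bufferSize = previewSize.width * previewSize.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // "data" holds the raw NV21 frame; run your recognition algorithm on it here
        // (convert it via YuvImage/BitmapFactory first if you need a Bitmap).
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});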

Making screenshot of Google Map bigger than my device screen

I'm developing an Android application which uses Google Maps API v2 and shares the map screen through a socket connection with another Android device. It all works fine, but I want it to take a screenshot of a map that is bigger than the map on my screen, so that it fits perfectly on the bigger screen of the device receiving the screenshots. For example, my app screen is 540x719, and the bitmap the second device receives has width 540 and height 719. How can I make it send screenshots that fit perfectly on the second device?
The method which I use to take screenshots of the Google Map:
public void CaptureMapScreen() {
    SnapshotReadyCallback callback = new SnapshotReadyCallback() {
        Bitmap bitmap1;
        Bitmap bitmap;

        @Override
        public void onSnapshotReady(Bitmap snapshot) {
            bitmap1 = snapshot;
            try {
                // some code
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    mGoogleMap.snapshot(callback);
}
It seems impossible to me to make an actual screenshot of something that is not on the screen, for the simple reason that a "screenshot" captures the screen, not what is off it.
Also, it is not possible to make an actual screenshot with a higher resolution than the capturing device's screen.
If you are not worried that much about resolution, you could try to programmatically zoom out your map, then take the screenshot, and then zoom back in again. That way you capture more of the map.
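Roughly like this (untested sketch; mGoogleMap is the GoogleMap from your question, and the snapshot would be sent over your socket where the comment is):
final float originalZoom = mGoogleMap.getCameraPosition().zoom;
// Zoom out one level, take the snapshot once the animation finishes, then restore the zoom.
mGoogleMap.animateCamera(CameraUpdateFactory.zoomTo(originalZoom - 1), new GoogleMap.CancelableCallback() {
    @Override
    public void onFinish() {
        mGoogleMap.snapshot(new GoogleMap.SnapshotReadyCallback() {
            @Override
            public void onSnapshotReady(Bitmap snapshot) {
                // send "snapshot" to the second device here
                mGoogleMap.animateCamera(CameraUpdateFactory.zoomTo(originalZoom));
            }
        });
    }

    @Override
    public void onCancel() {
    }
});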
Another thing you could do is capture multiple different parts of the map and later merge them to make a single Bitmap out of them.
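The merging step could look like this (sketch only; topHalf and bottomHalf stand for two snapshots of the same width that you captured after moving the camera between them):
// Stitch two equally wide snapshots vertically into one larger Bitmap.
Bitmap merged = Bitmap.createBitmap(topHalf.getWidth(),
        topHalf.getHeight() + bottomHalf.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(merged);
canvas.drawBitmap(topHalf, 0f, 0f, null);
canvas.drawBitmap(bottomHalf, 0f, topHalf.getHeight(), null);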
