I'm developing an Android application that uses the Google Maps API v2 and shares the map screen over a socket connection with another Android device. It all works fine, but I want it to take a screenshot of a map that is bigger than the map on my screen, so that it fits perfectly on the bigger screen of the device receiving the screenshots. For example: my app screen is 540x719, and the bitmap the second device receives is 540 wide and 719 high. How can I make it send screenshots that fit the second application perfectly?
The method I use to take screenshots of the Google Map:
public void CaptureMapScreen() {
    SnapshotReadyCallback callback = new SnapshotReadyCallback() {
        Bitmap bitmap1;
        Bitmap bitmap;

        @Override
        public void onSnapshotReady(Bitmap snapshot) {
            bitmap1 = snapshot;
            try {
                //some code
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    mGoogleMap.snapshot(callback);
}
It seems impossible to me to make an actual screenshot of something that is not on the screen, for the simple reason that a "screenshot" captures the screen, not what lies outside it. It is also not possible to take an actual screenshot at a higher resolution than the capturing device's screen.
If you are not too worried about resolution, you could programmatically zoom the map out, take the screenshot, and then zoom back in; that way you capture more of the map.
Another thing you could do is capture multiple different parts of the map and later merge them into a single Bitmap, as sketched below.
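For the merging approach, something like the following could work once you have collected the individual snapshots. This is a minimal sketch under my own assumptions (equally sized tiles captured in row-major order; how you pan the camera between snapshots is up to you):

import android.graphics.Bitmap;
import android.graphics.Canvas;

public class SnapshotMerger {
    // Merges equally sized tiles (row-major order) into one larger bitmap
    // by drawing each tile at its grid position on a shared canvas.
    public static Bitmap merge(Bitmap[] tiles, int cols, int rows) {
        int tileW = tiles[0].getWidth();
        int tileH = tiles[0].getHeight();
        Bitmap result = Bitmap.createBitmap(cols * tileW, rows * tileH,
                Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(result);
        for (int i = 0; i < tiles.length; i++) {
            int col = i % cols;
            int row = i / cols;
            canvas.drawBitmap(tiles[i], col * tileW, row * tileH, null);
        }
        return result;
    }
}

You would call merge() from onSnapshotReady once the last tile has arrived, then send the combined bitmap over your socket as before.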
I need to make an app for label printing like this. I am following this tutorial.
But PrintHelper has very simple and limited features: I can only use two scale modes, SCALE_MODE_FILL and SCALE_MODE_FIT.
My bitmap image is 512px x 512px, and I might need to adjust its size to match the label sticker size. Or I need to choose the paper size (e.g. 100mm x 100mm); either way, both approaches above give the same result.
When I try this code, it opens the print settings activity.
private void doPhotoPrint(Bitmap bitmap) {
    PrintHelper printHelper = new PrintHelper(this);
    printHelper.setColorMode(PrintHelper.COLOR_MODE_MONOCHROME);
    // Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.droids);
    printHelper.printBitmap("droids.jpg - test print", bitmap);
}
However, I want to implement the print function without opening that settings screen: when I tap 'print' in my application, it should immediately print one or more bitmap images in sequence, using the default settings I chose (image size, black & white/color, connected printer, paper size).
Is there any way to make a function like the one in the video above?
With PrintHelper you get the system print dialog, so there is no way to print silently: the user has to pick the printer and the print attributes from the dialog. To work like the video, you'd need to implement the discovery and printing functionality against the printer directly.
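For illustration: if the printer itself is reachable on the network, many label printers accept raw jobs on TCP port 9100, which is the level at which "silent" printing usually happens. This is only a sketch of the idea, with a made-up address parameter and the payload format left to your printer's documentation (ZPL, ESC/POS, etc.); it is not part of the PrintHelper API:

import java.io.OutputStream;
import java.net.Socket;

public class RawPrinter {
    // Sends pre-rendered printer data straight to a printer listening on
    // the common raw-print port 9100. Must run off the main thread on
    // Android, or you'll get a NetworkOnMainThreadException.
    public static void print(String printerIp, byte[] payload) throws Exception {
        try (Socket socket = new Socket(printerIp, 9100);
             OutputStream out = socket.getOutputStream()) {
            out.write(payload);
            out.flush();
        }
    }
}

Vendor SDKs (Zebra, Brother, etc.) wrap this kind of connection and also handle discovery, which is usually the easier route.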
I'm using the nativescript-google-maps-sdk plugin to create a Google map.
Everything works fine, but I've got a problem with my custom marker icons: as you can see in these pictures, the icon size is not preserved on Android, making the markers so small you can barely see them. This happens both in the emulators and on a real phone.
On iOS, however, the size is fine, as you can see in the second image. The icon images are 16x16 pixels and in .png format.
I haven't been able to find any solution to this, so this is my last resort: does anyone know why this might be happening?
This is the code I use to create the markers:
getImage(this.getWarningIcon(warning.status)).then((result) => {
    const icon = new Image();
    icon.imageSource = result;

    const marker = new Marker();
    marker.position = warning.centerOfPolygon;
    marker.icon = icon;
    marker.flat = true;
    marker.anchor = [0.5, 0.5];
    marker.visible = warning.isVisible;
    marker.zIndex = zIndexOffset;
    marker.infoWindowTemplate = 'markerTemplate';
    marker.userData = {
        description: warning.description,
        startTime: warning.startTime,
        completionTime: warning.completionTime,
        freeText: warning.freeText
    };

    this.layers.push(marker);
    this.map.addMarker(marker);
});
In that case, 16px sounds too small for a high-density device. Increase the size of the image sent from the server, or resize the image locally before passing it to the marker.
You may also consider generating a scaled bitmap natively if you are familiar with the Android APIs; image processing is always somewhat complicated on Android. Using drawables is recommended, at least when your images are static.
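If you go the native route, the scaling itself is short. A rough Java sketch (the 16dp target, mirroring the original 16px asset, is my assumption):

import android.content.Context;
import android.graphics.Bitmap;
import android.util.TypedValue;

public class IconScaler {
    // Scales a bitmap so it renders at the given dp size on any screen
    // density, instead of a fixed 16 raw pixels.
    public static Bitmap scaleToDp(Context context, Bitmap source, float dp) {
        int px = (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP,
                dp, context.getResources().getDisplayMetrics());
        return Bitmap.createScaledBitmap(source, px, px, true);
    }
}

In the NativeScript code above, the equivalent would be resizing the ImageSource before assigning it to marker.icon.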
After implementing the camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching, I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images are fine on the Galaxy S7, but on the Galaxy S6 they are still blurry. Has anyone experienced this kind of issue with Samsung devices?
EDIT:
To complement @rcsumners' comment: I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER,
        SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session,
            SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure shot, where the user has to take an image of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO:
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
and I am also enabling auto flash when it is available:
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE,
        SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API; I mostly followed the SDK's example app.
There could be a number of issues causing this problem, and one prominent one is the dimensions of your output image.
I ran the Camera2 API and the preview was clear, but the output was quite blurry:
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        ?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width
var height = imageDimension.height
if (size != null) {
    width = size[0].width
    height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a dimension of about 245x144, which was far too small to be handed to the image reader; the output was being stretched to fit, which made it end up blurry. Therefore I removed it:
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG)
Setting the width and height manually resolved the issue.
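If you'd rather keep the query than hard-code dimensions, a common pattern is to pick the largest advertised JPEG size instead of blindly taking the first element. A sketch in Java (the answer above uses Kotlin, but the calls are the same):

import android.graphics.ImageFormat;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.ImageReader;
import android.util.Size;

public class JpegSizePicker {
    // Chooses the largest JPEG output size the camera advertises, so the
    // ImageReader is never configured with a tiny (stretched, blurry) one.
    public static ImageReader newJpegReader(CameraCharacteristics characteristics) {
        StreamConfigurationMap map =
                characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        Size[] sizes = map.getOutputSizes(ImageFormat.JPEG);
        Size largest = sizes[0];
        for (Size s : sizes) {
            if ((long) s.getWidth() * s.getHeight()
                    > (long) largest.getWidth() * largest.getHeight()) {
                largest = s;
            }
        }
        return ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                ImageFormat.JPEG, 5);
    }
}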
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying that the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is simply too close for the lens, or the scene is too confusing.)
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some period of time. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure that many devices support MACRO to begin with.
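In plain camera2 terms (the Samsung SDK mirrors these names with an S prefix), the usual pattern is to fire the trigger and then watch CONTROL_AF_STATE in the capture results rather than assuming focus is done. A minimal sketch; the helper shape and the onLocked callback are my own framing:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;
import android.os.Handler;

public class FocusLockHelper {
    // Fires an AF trigger and runs onLocked once autofocus reports a locked
    // state. In practice, attach the same callback to the repeating preview
    // request too, since the lock often lands a few frames after the trigger.
    public static void lockFocus(CameraCaptureSession session,
            CaptureRequest.Builder builder, Handler handler, Runnable onLocked)
            throws CameraAccessException {
        CameraCaptureSession.CaptureCallback callback =
                new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(CameraCaptureSession s,
                    CaptureRequest request, TotalCaptureResult result) {
                Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
                if (afState != null
                        && (afState == CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED
                            || afState == CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED)) {
                    onLocked.run(); // focus settled; safe to take the still capture
                }
            }
        };
        builder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                CaptureRequest.CONTROL_AF_TRIGGER_START);
        session.capture(builder.build(), callback, handler);
    }
}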
Is it possible to make an app that takes a picture from the user's phone gallery and converts it into an Android Wear watch face?
I've been reading up on these Android articles
https://developer.android.com/training/wearables/watch-faces/index.html
https://developer.android.com/training/wearables/watch-faces/drawing.html
and it seems that if I can get a user to select a picture from the gallery and convert it to a bitmap, it would then be plausible to set it as the watch face. I'm definitely a beginner programmer when it comes to Android and APKs.
Confirmation from a more advanced Android developer would be great.
Where I'm getting confused is whether the picking of the picture happens on the user's phone, which then sends it to the wearable app, or whether the wearable app can access the phone's gallery and select a picture directly. Does anyone know if wearable apps can access the gallery of a user's phone?
Assuming I already have a reference to the selected image, it would be something like this? Correct me if I'm wrong. (Taken from the second article, under "Initialize watch face elements".)
@Override
public void onCreate(SurfaceHolder holder) {
    super.onCreate(holder);
    // configure the system UI (see next section)
    ...

    // load the background image
    Resources resources = AnalogWatchFaceService.this.getResources();
    // at this point the user should have already picked the picture they want,
    // so set "backgroundDrawable" to the image the user picked
    int idOfUserSelectPicture = GetIdOfUserSelectedPictureSomehow();
    Drawable backgroundDrawable = resources.getDrawable(idOfUserSelectPicture, null);
    // original implementation from the article:
    // Drawable backgroundDrawable = resources.getDrawable(R.drawable.bg, null);
    mBackgroundBitmap = ((BitmapDrawable) backgroundDrawable).getBitmap();

    // create graphic styles
    mHourPaint = new Paint();
    mHourPaint.setARGB(255, 200, 200, 200);
    mHourPaint.setStrokeWidth(5.0f);
    mHourPaint.setAntiAlias(true);
    mHourPaint.setStrokeCap(Paint.Cap.ROUND);
    ...

    // allocate a Calendar to calculate local time using the UTC time and time zone
    mCalendar = Calendar.getInstance();
}
Thank you for any and all help.
The way to implement this would be to create a configuration Activity that runs on the phone and picks an image on your device. You can then send this image as an Asset via the Data Layer (http://developer.android.com/training/wearables/data-layer/index.html); it will be received on the watch side, and you can then make it the background of the watch face.
It is not possible for an Android Wear device to see the photo collection on your phone: they are totally separate devices, and nothing is shared by default unless you write an application that does so.
The Data Layer sample shows how to take a photo on the phone and then send it to the wearable: https://github.com/googlesamples/android-DataLayer
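On the phone side, the core of that flow looks roughly like this. A sketch using the Data Layer API; the "/watch_face_background" path and "background" key are arbitrary names of mine:

import android.content.Context;
import android.graphics.Bitmap;
import com.google.android.gms.wearable.Asset;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;
import java.io.ByteArrayOutputStream;

public class BackgroundSender {
    // Packs the chosen bitmap into an Asset and syncs it to the watch, where
    // a listener for this path can unpack it and use it as the watch face
    // background.
    public static void sendBackground(Context context, Bitmap bitmap) {
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        Asset asset = Asset.createFromBytes(stream.toByteArray());

        PutDataMapRequest dataMap = PutDataMapRequest.create("/watch_face_background");
        dataMap.getDataMap().putAsset("background", asset);
        PutDataRequest request = dataMap.asPutDataRequest();
        Wearable.getDataClient(context).putDataItem(request);
    }
}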
I am creating an application which uses the camera/gallery. When I take a picture with the camera, iOS automatically displays a preview screen that allows me to move and scale the image as required; on Android, I manually created a preview window.
But I want to crop the image to a resolution of 610x320 pixels.
Here is the code for taking the image:
Ti.Media.showCamera({
    success: function(event) {
        if (event.mediaType == Ti.Media.MEDIA_TYPE_PHOTO) {
            var image = event.media;
            var ImageFactory = require('ti.imagefactory');
            var newBlob = ImageFactory.imageAsCropped(image, { width: 610, height: 320 });
            imgvwCapturedImage.image = newBlob; // imgvwCapturedImage is an image view
        }
    },
    cancel: function() {},
    error: function(error) {
        alert("Sorry, unable to process now. Please retry later.");
    },
    saveToPhotoGallery: true,
    allowEditing: true,
    mediaTypes: [Ti.Media.MEDIA_TYPE_PHOTO]
});
I was able to crop the image using the imageFactory module, but only after selecting the photo from the preview screen. Is there any way to do the same on the preview screen itself, so that the user can see which area is going to be cropped?
Any help will be appreciated.
Did you try an overlay? Just create a resizable view that the user can manipulate (to select a portion of the image) and add it via the overlay property of CameraOptionsType:
http://docs.appcelerator.com/titanium/latest/#!/api/CameraOptionsType-property-overlay
I created my own preview screen for iOS and cropped the image with the help of a scrollView and the image factory module. Now it's working perfectly. You may find sample code here. However, this will not work on Android devices.