I want to create an application that shows an offline map using a GIS shapefile (.shp). Does anyone have an idea of how to use a shapefile to display a map in Android?
Thanks in advance.
If you are loading data from the SD card, you can work with OpenMap: it has a ShapeFile class. Try transforming your shapefile into a GraphicsLayer; it will work, I've already done it.
static public GraphicsLayer SHPtoPOINT(String shpfile) {
    // Spatial reference of the source data; 26192 is EPSG:26192 (Merchich / Nord Maroc).
    // Use the SRID that matches your own shapefile.
    SpatialReference lSR = SpatialReference.create(26192);
    // Create an extent for the graphics layer from the shapefile's bounding box.
    Envelope lEnvelope = getSHPEnvelope(shpfile);
    GraphicsLayer graphicLayer = new GraphicsLayer(lSR, lEnvelope);
    try {
        File file = new File(shpfile);
        ShapeFile shp = new ShapeFile(file);
        SimpleMarkerSymbol c_point = new SimpleMarkerSymbol(Color.BLACK, 1, STYLE.CIRCLE);
        // Iterate over all point records and add each one to the layer.
        ESRIPointRecord e = (ESRIPointRecord) shp.getNextRecord();
        while (e != null) {
            graphicLayer.addGraphic(new Graphic(new Point(e.getX(), e.getY()), c_point));
            e = (ESRIPointRecord) shp.getNextRecord();
        }
        shp.close();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return graphicLayer;
}
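The getSHPEnvelope helper used above is not shown in the original answer. Here is a minimal sketch of what it could look like, assuming OpenMap's ShapeFile exposes the bounding box from the .shp header via getBoundingBox() (verify the ESRIBoundingBox field names against the OpenMap version you use):

static private Envelope getSHPEnvelope(String shpfile) throws IOException {
    // Read the bounding box stored in the shapefile header.
    ShapeFile shp = new ShapeFile(new File(shpfile));
    ESRIBoundingBox box = shp.getBoundingBox();
    shp.close();
    // Build an ArcGIS Envelope (xmin, ymin, xmax, ymax) from it.
    return new Envelope(box.min.x, box.min.y, box.max.x, box.max.y);
}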
Source
EDIT:
BBN Technologies' OpenMap™ package is an Open Source JavaBeans™-based programmer's toolkit. Using OpenMap, you can quickly build applications and applets that access data from legacy databases and applications. OpenMap provides the means to allow users to see and manipulate geospatial information.
Link to OpenMap info.
Related
In my app I print some parts to a PDF for the user. I do this by using a PrintedPdfDocument.
The code looks, in short, like this:
// create a new document
val printAttributes = PrintAttributes.Builder()
.setMediaSize(mediaSize)
.setColorMode(PrintAttributes.COLOR_MODE_COLOR)
.setMinMargins(PrintAttributes.Margins.NO_MARGINS)
.build()
val document = PrintedPdfDocument(context, printAttributes)
// add pages
for ((n, pdfPageView) in pdfPages.withIndex()) {
val page = document.startPage(n)
Timber.d("Printing page " + (n + 1))
pdfPageView.draw(page.canvas)
document.finishPage(page)
}
// write the document content
try {
val out: OutputStream = FileOutputStream(outputFile)
document.writeTo(out)
out.close()
Timber.d("PDF written to $outputFile")
} catch (e: IOException) {
return
}
It all works fine. However, now I want to add another page at the end. The only difference is that this will be a pre-generated PDF file from the assets. I only need to append it, so no additional rendering etc. should be necessary.
Is there any way of doing this via the PdfDocument class from the Android SDK?
https://developer.android.com/reference/android/graphics/pdf/PdfDocument#finishPage(android.graphics.pdf.PdfDocument.Page)
I assumed it might be a question similar to this one: how can i combine multiple pdf to convert single pdf in android?
But is this true? That answer was not accepted and is 3 years old. Any suggestions?
Alright, I'm going to answer my own question here.
It looks like there are not many options; at least I couldn't find anything native. There are some PDF libraries in the Android framework, but they all seem to support only creating new pages, not operations on existing documents.
So this is what I did:
First of all, there don't seem to be any good native Android libraries for this, but I found this one, which ports Apache PDFBox to Android. Add this to your Gradle file:
implementation 'com.tom_roush:pdfbox-android:1.8.10.3'
In your code you can now import
import com.tom_roush.pdfbox.multipdf.PDFMergerUtility
and I added a method whose core looks like this:
// Merge the original document with the appendix from the assets.
val ut = PDFMergerUtility()
ut.addSource(file)
val assetManager: AssetManager = context.assets
var inputStream: InputStream? = null
try {
    inputStream = assetManager.open("appendix.pdf")
    ut.addSource(inputStream)
} catch (e: IOException) {
    ...
}
// Write the destination file over the original document
ut.destinationFileName = file.absolutePath
ut.mergeDocuments(true)
That way the appendix page is loaded from the assets and appended at the end of the document.
It then gets written back to the same file as it was before.
My goal is to add a TileOverlay in MBTiles format and render some geometric objects (mainly lines and polygons) in KML format.
The problem is that the MapBox map covers my KML polygons, and I don't know how to manage the rendering order.
In code I tried loading the MBTiles file onto the map first and then the KML polygons, with no luck.
I attach the code for further consideration:
TileOverlayOptions opts = new TileOverlayOptions();
MapBoxOfflineTileProvider provider = new MapBoxOfflineTileProvider("/path/to/file.mbtiles");
opts.tileProvider(provider);
mbTileOverlay = mMap.addTileOverlay(opts);

KmlLayer layer = null;
try {
    layer = new KmlLayer(mMap, R.raw.mypolygons, mContext);
    layer.addLayerToMap();
} catch (XmlPullParserException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Thanks for your help
Here is a link to the KML used
The polygons defined in your KML don't have the drawOrder property set (see this change), so their zIndex is set to the default (0).
Since you don't define a zIndex for your MapBoxOfflineTileProvider overlay either, it also defaults to 0. Try defining your TileOverlayOptions like this:
TileOverlayOptions opts = new TileOverlayOptions();
opts.zIndex(-1);
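Applied to the code from your question, the setup would look like this (a sketch; the negative zIndex just has to be lower than the KML polygons' default of 0):

TileOverlayOptions opts = new TileOverlayOptions();
opts.zIndex(-1); // draw the MBTiles overlay beneath the KML polygons
opts.tileProvider(new MapBoxOfflineTileProvider("/path/to/file.mbtiles"));
mbTileOverlay = mMap.addTileOverlay(opts);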
I have been trying to implement the flashlight/torch feature of the camera using the Google Play Services Vision API (via NuGet from Visual Studio) for the past few days, without success. I have noticed that there is a GitHub implementation of this API which has such functionality, but it is only available to Java users.
I was wondering if there is anything similar for C# Xamarin users.
The Camera object is not made available in this API, so I am not able to alter the camera parameters needed to activate the flashlight.
I would like to be sure that this functionality is not available, so I don't waste more time on it. It might just be that the Xamarin developers have not attended to this functionality yet and will in the near future.
UPDATE
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/BarcodeCaptureActivity.java
There you can see that on line 214 we have this method call:
mCameraSource = builder.setFlashMode(useFlash ? Camera.Parameters.FLASH_MODE_TORCH : null).build();
SetFlashMode is not a method of CameraSource in the NuGet package, but it is in the GitHub (open-source) version.
The Xamarin Vision Library didn't expose the method to set the flash mode.
Workaround:
Using reflection, you can get the Camera object from the CameraSource, add the flash parameter, and then set the updated parameters back on the camera.
This should be called after the surface view has been created.
Code:
public Camera getCameraObject (CameraSource _camSource)
{
    // The CameraSource keeps its Camera in a private, obfuscated field
    // ("zzbNN" in this release; the name changes between library versions).
    Field [] cFields = _camSource.Class.GetDeclaredFields ();
    Camera _cam = null;
    try {
        foreach (Field item in cFields) {
            if (item.Name.Equals ("zzbNN")) {
                Console.WriteLine ("Camera");
                item.Accessible = true;
                try {
                    _cam = (Camera)item.Get (_camSource);
                } catch (Exception e) {
                    Logger.LogException (this, e);
                }
            }
        }
    } catch (Exception e) {
        Logger.LogException (this, e);
    }
    return _cam;
}

public void setFlash (bool isEnable)
{
    try {
        isTorch = !isEnable;
        var _cam = getCameraObject (mCameraSource);
        if (_cam == null) return;
        var _pareMeters = _cam.GetParameters ();
        var _listOfSuppo = _cam.GetParameters ().SupportedFlashModes;
        // Note: indexing into SupportedFlashModes is fragile, since the list
        // order is device-dependent; prefer looking up the torch mode explicitly.
        _pareMeters.FlashMode = isTorch ? _listOfSuppo [0] : _listOfSuppo [3];
        _cam.SetParameters (_pareMeters);
    } catch (Exception e) {
        Logger.LogException (this, e);
    }
}
Basically, anything you can do with Android can be done with Xamarin.Android; all the underlying APIs are available.
Since you have existing Java code, you can create a binding project that enables you to call the code from your Xamarin.Android project. Here's a good article on how to get started: Binding a Java Library.
On the other hand, I don't think you need a library to do what you want. If you only want torch/flashlight functionality, you just need to adapt the Java code from this answer to work in Xamarin.Android with C#.
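For reference, a minimal sketch of the classic torch toggle such answers describe, using the deprecated android.hardware.Camera API (porting it to C# is mostly a matter of switching to the Xamarin binding names, e.g. Camera.Open() and Parameters.FlashMode):

// Turn the torch on with the pre-Camera2 API (deprecated since API 21).
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
params.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
camera.setParameters(params);
camera.startPreview(); // some devices only light the LED while a preview is running

// ...and later, to turn it off and release the hardware:
params.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
camera.setParameters(params);
camera.stopPreview();
camera.release();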
I want to upload an image to Google Cloud Storage from my Android app. I searched and found that the GCS JSON API provides this feature. I did a lot of research for an Android sample that demonstrates its use. On the developer site they have provided a code example that only supports Java; I don't know how to use that API in Android. I referred to this and this links but couldn't get much of an idea. Please guide me on how I can use this API in an Android app.
OK guys, so I solved it and got my images uploaded to Cloud Storage just fine.
This is how.
Note: I used the XML API; it is pretty much the same.
First, you will need to download a lot of libraries.
The easiest way to do this is to create a Maven project and let it download all the required dependencies, using this sample project:
Sample Project
The libraries should be the Google API client libraries that the sample pulls in (google-api-client, google-http-client and google-oauth-client, plus their dependencies).
Second, you must be familiar with Cloud Storage using the API console:
you must create a project, create a bucket, give the bucket permissions, etc.
You can find more details about that here.
Third, once you have all those things ready, it is time to start coding.
Let's say we want to upload an image.
Cloud Storage works with OAuth; that means you must be an authenticated user to use the API. For that, the best way is to authorize using a service account. Don't worry about it; the only thing you need to do is create a service account in the API console.
We will use this service account in our code.
Fourth, let's write some code to upload an image.
For this code to work you must put the key generated in step 3 in the assets folder; I named it "key.p12".
I don't recommend doing this in your production version, since you would be giving out your key.
try {
    httpTransport = new com.google.api.client.http.javanet.NetHttpTransport();
    // Grab the key and turn it into a File.
    AssetManager am = context.getAssets();
    InputStream inputStream = am.open("key.p12"); // you should not put the key in assets in the prod version.
    // Convert the key from InputStream to File in an aux class.
    File file = UserProfileImageUploadHelper.createFileFromInputStream(inputStream, context);
    // Google credentials
    GoogleCredential credential = new GoogleCredential.Builder().setTransport(httpTransport)
            .setJsonFactory(JSON_FACTORY)
            .setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
            .setServiceAccountScopes(Collections.singleton(STORAGE_SCOPE))
            .setServiceAccountPrivateKeyFromP12File(file)
            .build();
    String URI = "https://storage.googleapis.com/" + BUCKET_NAME + "/" + imagename + ".jpg";
    HttpRequestFactory requestFactory = httpTransport.createRequestFactory(credential);
    GenericUrl url = new GenericUrl(URI);
    // The byte array holds the data, in this case the image I want to upload.
    HttpContent contentsend = new ByteArrayContent("image/jpeg", byteArray);
    HttpRequest putRequest = requestFactory.buildPutRequest(url, contentsend);
    com.google.api.client.http.HttpResponse response = putRequest.execute();
    String content = response.parseAsString();
    Log.d("debug", "response is:" + response.getStatusCode());
    Log.d("debug", "response content is:" + content);
} catch (Exception e) {
    Log.d("debug", "Error in user profile image uploading", e);
}
This will upload the image to your cloud bucket.
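The constants referenced in the snippet are not defined in the original answer. Plausible definitions, with placeholder values for the account email and bucket (JacksonFactory comes from the google-http-client-jackson2 artifact):

static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
static final String SERVICE_ACCOUNT_EMAIL = "my-service-account@my-project.iam.gserviceaccount.com";
static final String STORAGE_SCOPE = "https://www.googleapis.com/auth/devstorage.read_write";
static final String BUCKET_NAME = "my-bucket";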
For more info on the API, check this link: Cloud XML API
First, you should get the information below by registering your application in the GCP console.
private final String pkcsFile = "xxx.json";//private key file
private final String bucketName = "your_gcp_bucket_name";
private final String projectId = "your_gcp_project_id";
Once you have the credentials, you should put the private key (.p12 or .json) in your assets folder; I'm using a JSON-format private key file. Also, you should update the image location to upload.
@RequiresApi(api = Build.VERSION_CODES.O)
public void uploadImageFile(String srcFileName, String newName) {
    Storage storage = getStorage();
    File file = new File(srcFileName); // your image location
    byte[] fileContent;
    try {
        fileContent = Files.readAllBytes(file.toPath());
    } catch (IOException e) {
        e.printStackTrace();
        return;
    }
    if (fileContent == null || fileContent.length == 0)
        return;
    BlobInfo.Builder newBuilder = BlobInfo.newBuilder(BucketInfo.of(bucketName), newName);
    BlobInfo blobInfo = newBuilder.setContentType("image/png").build();
    Blob blob = storage.create(blobInfo, fileContent);
    String bucket = blob.getBucket();
    String contentType = blob.getContentType();
    Log.e("TAG", "Upload File: " + contentType);
    Log.e("File ", srcFileName + " uploaded to bucket " + bucket + " as " + newName);
}
private Storage getStorage() {
    InputStream credentialsStream;
    Credentials credentials;
    try {
        credentialsStream = mContext.getAssets().open(pkcsFile);
        credentials = GoogleCredentials.fromStream(credentialsStream);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
    return StorageOptions.newBuilder()
            .setProjectId(projectId).setCredentials(credentials)
            .build().getService();
}
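A hypothetical call site (the path and object name are placeholders; the Storage client performs network I/O, so keep this off the main thread):

// Run on a background thread; Storage calls perform network I/O.
new Thread(() -> uploadImageFile("/sdcard/Pictures/photo.png", "photos/photo.png")).start();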
I followed this example to parse a local GPX file in Android:
http://android-coding.blogspot.pt/2013/01/get-latitude-and-longitude-from-gpx-file.html
It works fine for accessing "lat" and "long", but I also need to get the "ele" value, and all my attempts were unsuccessful.
Can anyone give me some hints on how to do that?
Thanks in advance!
Best regards,
NR.
I will add my library for GPX parsing to these answers: https://github.com/ticofab/android-gpx-parser. It provides two ways to parse your GPX file: once you obtain or create a GPXParser object (mParser in the examples below), you can either parse your GPX file directly
Gpx parsedGpx = null;
try {
    InputStream in = getAssets().open("test.gpx");
    parsedGpx = mParser.parse(in);
} catch (IOException | XmlPullParserException e) {
    e.printStackTrace();
}
if (parsedGpx == null) {
    // error parsing track
} else {
    // do something with the parsed track
}
or you can parse a remote file:
mParser.parse("http://myserver.com/track.gpx", new GpxFetchedAndParsed() {
    @Override
    public void onGpxFetchedAndParsed(Gpx gpx) {
        if (gpx == null) {
            // error parsing track
        } else {
            // do something with the parsed track
        }
    }
});
Contributions are welcome.
you have the "Node node = nodelist_trkpt.item(i);" in your first loop.
Get the child elements from this node an run through these child elements.
e.g.:
NodeList nList = node.getChildNodes();
for (int j = 0; j < nList.getLength(); j++) {
    Node el = nList.item(j);
    if (el.getNodeName().equals("ele")) {
        System.out.println(el.getTextContent());
    }
}
Update: I've added parsing of the "ele" element as well, so this code should match your requirements.
I will propose a different approach: https://gist.github.com/kamituel/6465125.
In my approach I don't create an ArrayList of all track points (as is done in the example you posted). Such a list can consume quite a lot of memory, which can be an issue on Android.
I've even given up on regex parsing to avoid allocating too many objects (which causes the garbage collector to run).
As a result, running Java with a 16 MB heap size and parsing a GPX file with over 600 points, the garbage collector runs only 12 times. I'm sure one could go lower, but I haven't optimized it heavily yet.
Usage:
GpxParser parser = new GpxParser(new FileInputStream(file));
TrkPt point = null;
while ((point = parser.nextTrkPt()) != null) {
    // point.getLat()
    // point.getLon()
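    // point.getEle() (assuming the updated parser exposes elevation, per the update note above)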
}
I've successfully used this code to parse around 100 MB of GPX files on Android. Sorry it's not in a regular repo; I didn't plan to share it just yet.
I've ported the GPXParser library by ghitabot to Android:
https://github.com/urizev/j4gpx