I'm interested in writing a visualization program for the road data in the 2009 TIGER/Line Shapefiles. I'd like to draw the line data to display all the roads in my county.
The ESRI Shapefile, or simply a shapefile, is a popular geospatial vector data format for geographic information systems software. It is developed and regulated by ESRI as a (mostly) open specification for data interoperability among ESRI and other software products. A "shapefile" commonly refers to a collection of files with ".shp", ".shx", ".dbf", and other extensions on a common prefix name (e.g., "lakes.*"). The actual shapefile relates specifically to the file with the ".shp" extension; however, this file alone is incomplete for distribution, as the other supporting files are required.
Does anyone know of existing libraries for parsing and reading in the line data for Shapefiles?
GeoTools will do it. There are a ton of jars and you don't need most of them. However, reading the shapefile is just a few lines.
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.FeatureSource;
import org.geotools.feature.FeatureCollection;
import org.geotools.feature.FeatureIterator;
import org.opengis.feature.Feature;
import org.opengis.feature.GeometryAttribute;

File file = new File("myshapefile.shp");
try {
    Map<String, String> connect = new HashMap<String, String>();
    connect.put("url", file.toURI().toString());

    DataStore dataStore = DataStoreFinder.getDataStore(connect);
    String[] typeNames = dataStore.getTypeNames();
    String typeName = typeNames[0];
    System.out.println("Reading content " + typeName);

    FeatureSource featureSource = dataStore.getFeatureSource(typeName);
    FeatureCollection collection = featureSource.getFeatures();
    FeatureIterator iterator = collection.features();
    try {
        while (iterator.hasNext()) {
            Feature feature = iterator.next();
            // each feature's default geometry holds the road's line data
            GeometryAttribute sourceGeometry = feature.getDefaultGeometryProperty();
        }
    } finally {
        iterator.close();
    }
} catch (Throwable e) {
    e.printStackTrace(); // don't silently swallow read errors
}
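Since the goal is to draw the roads, the useful part of each feature is its geometry: the default geometry property holds a JTS (multi)linestring whose vertices are the points you connect with line segments. A minimal sketch of pulling those vertices out, assuming the older com.vividsolutions JTS packages bundled with this GeoTools version (newer releases moved to org.locationtech.jts):

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Geometry;
import org.opengis.feature.Feature;

// Hypothetical helper: return the vertices of one road feature. Each
// Coordinate's x is the longitude and y is the latitude; project them to
// screen space and connect consecutive points to draw the road.
static Coordinate[] roadVertices(Feature feature) {
    Geometry geometry = (Geometry) feature.getDefaultGeometryProperty().getValue();
    return geometry.getCoordinates();
}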
Openmap has a Java API that provides read and write access to ESRI files.
There is GeoTools, or more precisely its ShapefileDataStore class.
You could try the Java ESRI Shape File Reader library. It's small, easy to install, and has a very simple API.
The only drawback is that it does not read the other mandatory and optional files (.shx, .dbf, etc.) that usually ship with a shapefile.
You can also use GUI GIS tools directly, so there is no need to change the source code of GeoTools.
I use QGIS, which does everything GeoTools does (and more).
Quantum GIS is an open source Geographic Information System for editing, merging and simplifying shapefile maps. See also: creating maps with multiple layers using Quantum GIS.
I have been trying for a while to get the pretrained model working on Android. The problem is that I only have the ckpt and meta files for the pretrained net. As far as I can tell, I need a .pb file for the Android app, so I tried to convert the given files into one.
First I tried freeze_graph.py, but without success. So I used the example code from https://github.com/openimages/dataset/blob/master/tools/classify.py and modified it to store a .pb file after loading:
if not os.path.exists(FLAGS.checkpoint):
    tf.logging.fatal(
        'Checkpoint %s does not exist. Have you download it? See tools/download_data.sh',
        FLAGS.checkpoint)

g = tf.Graph()
with g.as_default():
    input_image = tf.placeholder(tf.string)
    processed_image = PreprocessImage(input_image)

    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, end_points = inception.inception_v3(
            processed_image, num_classes=FLAGS.num_classes, is_training=False)

    predictions = end_points['multi_predictions'] = tf.nn.sigmoid(
        logits, name='multi_predictions')

    init_op = control_flow_ops.group(tf.global_variables_initializer(),
                                     tf.local_variables_initializer(),
                                     data_flow_ops.initialize_all_tables())
    saver = tf_saver.Saver()
    sess = tf.Session()
    saver.restore(sess, FLAGS.checkpoint)

    # freeze the graph: turn variables into constants and keep only the
    # nodes needed to compute 'multi_predictions'
    output_filename = 'output_graph.pb'
    # output_graph_def = sess.graph.as_graph_def()
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["multi_predictions"])
    with gfile.FastGFile(output_filename, 'wb') as f:
        f.write(output_graph_def.SerializeToString())
Now my problem is that I have the .pb file, but I have no idea what the input node name is, and I am not sure whether multi_predictions is the right output name. In the example Android app I have to specify both, and the app crashes with:
tensorflow_inference_jni.cc:138 Could not create Tensorflow Graph: Invalid argument: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs.
I don't know whether there are more problems hiding behind the .pb issue. If anyone knows a better way to port the ckpt and meta files to a .pb file in my case, or knows a source for the final file with input and output names, please give me a hint so I can complete this task.
Thanks
You'll need to use the optimize_for_inference.py script to strip out the unused nodes in your graph. "decodeJpeg" is not supported on Android -- pixel values should be fed in directly. ClassifierActivity.java has more detail about the specific nodes to use for inception v3.
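For reference, once the graph has been frozen and optimized, feeding pixel values directly from Java could look roughly like the sketch below. This is only an assumption-laden sketch, not the demo's actual code: the input node name ("Mul"), the asset file name, and the 299x299 input size are guesses you must verify against your own graph (for example with TensorFlow's summarize_graph tool); only "multi_predictions" comes from the freezing script above.

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class OpenImagesClassifier {
    // All of these constants are assumptions -- check them against your graph.
    private static final String MODEL_FILE = "file:///android_asset/output_graph.pb";
    private static final String INPUT_NODE = "Mul";
    private static final String OUTPUT_NODE = "multi_predictions";
    private static final int INPUT_SIZE = 299;

    private final TensorFlowInferenceInterface inference;

    public OpenImagesClassifier(AssetManager assets) {
        inference = new TensorFlowInferenceInterface(assets, MODEL_FILE);
    }

    // floatValues must hold INPUT_SIZE * INPUT_SIZE * 3 preprocessed RGB values,
    // i.e. the same normalization PreprocessImage applied on the Python side.
    public float[] classify(float[] floatValues, int numClasses) {
        float[] outputs = new float[numClasses];
        inference.feed(INPUT_NODE, floatValues, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] { OUTPUT_NODE });
        inference.fetch(OUTPUT_NODE, outputs);
        return outputs;
    }
}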
I'm using mp4parser to mux an H.264 file and an AAC file that were re-encoded from the original video file. How can I write the metadata of the original video to the new MP4 file? Or is there a common method for writing metadata to an MP4 file?
Metadata and MP4 is a real problem: there is no generally supported specification. But that is only one part of the issue.
Problem (1): When to write the metadata
Problem (2): What to write
Problem (1) is relatively easy to solve: just extend the DefaultMp4Builder or the FragmentedMp4Builder yourself and override
protected ParsableBox createUdta(Movie movie) {
    return null;
}
with something meaningful. E.g.:
protected ParsableBox createUdta(Movie movie) {
    UserDataBox udta = new UserDataBox();
    CopyrightBox copyrightBox = new CopyrightBox();
    copyrightBox.setCopyright("All Rights Reserved, me, myself and I, 2015");
    copyrightBox.setLanguage("eng");
    udta.addBox(copyrightBox);
    return udta;
}
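To actually use the override, you can extend the builder (anonymously, for instance) and mux the movie as usual. A minimal sketch, assuming the org.mp4parser 1.9.x package layout (older isoparser releases use com.googlecode.mp4parser / com.coremedia.iso.boxes names instead) and a Movie that has already been assembled from your H.264 and AAC tracks:

import java.io.FileOutputStream;
import org.mp4parser.Container;
import org.mp4parser.ParsableBox;
import org.mp4parser.boxes.iso14496.part12.CopyrightBox;
import org.mp4parser.boxes.iso14496.part12.UserDataBox;
import org.mp4parser.muxer.Movie;
import org.mp4parser.muxer.builder.DefaultMp4Builder;

public class MuxWithUdta {
    // Builds the MP4 with a udta box carrying a copyright entry.
    public static void writeWithCopyright(Movie movie, String outPath) throws Exception {
        DefaultMp4Builder builder = new DefaultMp4Builder() {
            @Override
            protected ParsableBox createUdta(Movie movie) {
                UserDataBox udta = new UserDataBox();
                CopyrightBox copyrightBox = new CopyrightBox();
                copyrightBox.setCopyright("All Rights Reserved, me, myself and I, 2015");
                copyrightBox.setLanguage("eng");
                udta.addBox(copyrightBox);
                return udta;
            }
        };
        Container container = builder.build(movie);
        try (FileOutputStream fos = new FileOutputStream(outPath)) {
            container.writeContainer(fos.getChannel());
        }
    }
}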
Some people have used that to write Apple-compatible metadata, but even though there are some classes for it in my code, I never really figured out what works and what does not. You might want to have a look at Apple's specification here.
And yes: I'm posting this a year too late.
It seems that the 'mp4parser' library (https://code.google.com/p/mp4parser/) supports writing metadata to MP4 files on Android. However, I've found there's little to no documentation on how to do this beyond a few examples in their codebase. I've had some luck with the following example, which writes XML metadata into the 'moov/udta/meta' box:
https://github.com/copiousfreetime/mp4parser/blob/master/examples/src/main/java/com/googlecode/mp4parser/stuff/ChangeMetaData.java
If you consider the alternatives you might want to look at JCodec for this purpose. It now has the org.jcodec.movtool.MetadataEditor API (and a matching CLI org.jcodec.movtool.MetadataEditorMain).
Their documentation contains many samples: http://jcodec.org/docs/working_with_mp4_metadata.html
So basically when you want to add some metadata you need to know what key(s) it corresponds to. One way to find out is to inspect a sample file that already has the metadata you need. For this you can run the JCodec's CLI tool that will just print out all the existing metadata fields (keys with values):
./metaedit <file.mp4>
Then when you know the key you want to work with you can either use the same CLI tool:
# Changes the author of the movie
./metaedit -f -si ©ART=New\ value file.mov
or the same thing via the Java API:
MetadataEditor mediaMeta = MetadataEditor.createFrom(new File("file.mp4"));
Map<Integer, MetaValue> meta = mediaMeta.getItunesMeta();
meta.put(0xa9415254, MetaValue.createString("New value")); // fourcc for '©ART'
mediaMeta.save(false); // fast mode is off
To delete a metadata field from a file:
MetadataEditor mediaMeta = MetadataEditor.createFrom(new File("file.mp4"));
Map<Integer, MetaValue> meta = mediaMeta.getItunesMeta();
meta.remove(0xa9415254); // removes the '©ART'
mediaMeta.save(false); // fast mode is off
To convert string to integer fourcc you can use something like:
byte[] bytes = "©ART".getBytes("iso8859-1");
int fourcc = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN).getInt();
If you want to edit/delete the Android metadata you'll need to use a different set of functions (because it's stored differently from iTunes metadata):
./metaedit -sk com.android.capture.fps,float=25.0 file.mp4
OR alternatively the same through the API:
MetadataEditor mediaMeta = MetadataEditor.createFrom(new File("file.mp4"));
Map<String, MetaValue> meta = mediaMeta.getKeyedMeta();
meta.put("com.android.capture.fps", MetaValue.createFloat(25.));
mediaMeta.save(false); // fast mode is off
I am working on Android and using docx4j to view docx, pptx and xlsx files in my application.
I am unable to view the PowerPoint (pptx) files: I am getting a compile-time error at the SvgExporter class, which is not in the docx4j library.
How can I get the SvgExporter class library, build my application, and get the SVG HTML to load in a WebView for pptx files? My code is as follows:
String inputfilepath = System.getProperty("user.dir") + "/sample-docs/pptx/pptx-basic.xml";

// Where to save images
SvgExporter.setImageDirPath(System.getProperty("user.dir") + "/sample-docs/pptx/");

PresentationMLPackage presentationMLPackage =
        (PresentationMLPackage) PresentationMLPackage.load(new java.io.File(inputfilepath));

// TODO - render slides in document order!
Iterator partIterator = presentationMLPackage.getParts().getParts().entrySet().iterator();
while (partIterator.hasNext()) {
    Map.Entry pairs = (Map.Entry) partIterator.next();
    Part p = (Part) pairs.getValue();
    if (p instanceof SlidePart) {
        System.out.println(
                SvgExporter.svg(presentationMLPackage, (SlidePart) p)
        );
    }
}
// NB: file suffix must end with .xhtml in order to see the SVG in a browser
SvgExporter uses XSLT and Xalan extension functions to do its thing.
IIRC, there were problems getting Xalan working on Android (you should verify this yourself).
If that remains the case, then you'll need to write a version of SvgExporter which does the traversal in Java code, as opposed to relying on Xalan to do this.
That should be quite feasible; there are "NonXSLT" examples in the docx4j code base.
I have yet another hurdle to climb with my Google Drive SDK Android app. I am uploading scanned images with tightly controlled index fields - user-defined 'tags' from a local dictionary. For instance, XXX.JPG has the index words "car" + "insurance". Here is a simplified code snippet:
...
body.setTitle("XXX.JPG");
body.setDescription("car, insurance");
body.setIndexableText(new IndexableText().setText("car insurance"));
body.setMimeType("image/jpeg");
body.setParents(Arrays.asList(new ParentReference().setId(...)));
FileContent cont = new FileContent("image/jpeg", new java.io.File(fullPath("xxx.jpg")));
File gooFl = _svc.files().insert(body, cont).execute();
...
Again, everything works great, except that when I start a search, I get results that apparently come from some OCR post-process, rendering my system's dictionary unusable. I assume I could use a custom MIME type, but then the JPEG images become invisible to users who use the standard Google Drive application (local, browser-based, ...). So the question is: can I upload MIME "image/jpeg" files with custom indexes (either the indexable text or the description field) but stop Google from OCR-ing my files and adding indexes I did not intend to have?
To be more specific: I search for "car insurance" and, instead of the 3 files I indexed this way, I get an unmanageable pile of other results (scanned JPEG documents) that happen to have "car" and "insurance" somewhere in them. Not what my app wants.
Thank you in advance, sean
...
Based on Burcu's advice below, I modified my code to something like this (stripped to the bare bones):
// define meta-data
File body = new File();
body.setTitle("xxx.jpg");
body.setDescription(tags);
body.setIndexableText(new IndexableText().setText(tags));
body.setMimeType("image/jpeg");
body.setParents(Arrays.asList(new ParentReference().setId(_ymID)));
body.setModifiedDate(DateTime.parseRfc3339(ymdGOO));

FileContent cont =
        new FileContent("image/jpeg", new java.io.File(fullPath("xxx.jpg")));

String sID = findOnGOO(driveSvc, body.getTitle());
if (sID == null) {
    // file not found on gooDrive - upload and fix the date
    File gooFl = driveSvc.files().insert(body, cont).setOcr(false).execute();
    driveSvc.files().patch(gooFl.getId(), body).setOcr(false).setSetModifiedDate(true).execute();
} else {
    // file found on gooDrive - modify metadata and/or body
    if (contentModified) {
        // modify content + metadata
        driveSvc.files().update(sID, body, cont).setOcr(false).setSetModifiedDate(true).execute();
    } else {
        // only metadata (tags, ...)
        driveSvc.files().patch(sID, body).setOcr(false).setSetModifiedDate(true).execute();
    }
}
...
It is a block that uploads or modifies a Google Drive file. The two non-standard operations are:
1/ resetting the file's 'modified' date in order to force the date of file creation - tested, works OK
2/ stopping the OCR process that interferes with my app's indexing scheme - will test shortly and update here
For the sake of simplicity, I did not include the implementation of the findOnGOO() method. It is a simple two-liner and I can supply it upon request.
sean
On insertion, set the ocr parameter to false:
service.files().insert(body, content).setOcr(false).execute();
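In the context of the code in the question, that would look roughly like the sketch below (Drive v2 Java client; _svc, body and fullPath() are the objects from the question):

// Insert the JPEG with OCR disabled so that only your own description /
// indexable text is searchable, not text Drive extracts from the image.
FileContent cont = new FileContent("image/jpeg", new java.io.File(fullPath("xxx.jpg")));
File gooFl = _svc.files().insert(body, cont)
        .setOcr(false)   // keep Drive from OCR-ing the scanned image
        .execute();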
I'm having a problem with iText. Other people say that iText is for PDF creation only and that it cannot read or extract text from a PDF. Is that true?
If it is true, what other options can I choose to extract text from a PDF file and save it in a variable or display it on an Android device?
If iText is capable of extracting text from PDF, then how?
iText can extract text from PDFs. While it is true that it originated as a tool to create new and manipulate existing PDFs, in recent years it has also become better and better at extracting text. This obviously implies that you should use a current iText version (5.3.x) for text extraction.
The book "iText in Action, second edition" by the main iText developer, Bruno Lowagie, explains basic iText text extraction in chapter 15, and the samples from that chapter are available in the iText Sourceforge SVN repository, cf. Samples for chapter 15. A good starting point is ExtractPageContentSorted2 which extracts the text of a whole page.
If you have special requirements, you may use ExtractPageContentSorted1 as a starting point, which explicitly defines a text extraction strategy; depending on your requirements you will need your own strategy. If you only want the text from a specific region, look at ExtractPageContentArea.
To really fine-tune the text extraction capabilities of iText, you should have a look at the itext-questions mailing list archive (e.g. at nabble.com), as the iText text extraction API was recently extended to serve additional use cases.
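For reference, the whole-page extraction that ExtractPageContentSorted2 demonstrates boils down to something like this sketch against the iText 5.x API (the file path is just a placeholder):

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;

public class SimpleExtraction {
    // Extracts the text of every page; getTextFromPage uses
    // LocationTextExtractionStrategy by default, which sorts the
    // text fragments into reading order.
    public static String extractAll(String path) throws Exception {
        PdfReader reader = new PdfReader(path);
        StringBuilder sb = new StringBuilder();
        try {
            for (int page = 1; page <= reader.getNumberOfPages(); page++) {
                sb.append(PdfTextExtractor.getTextFromPage(reader, page)).append('\n');
            }
        } finally {
            reader.close();
        }
        return sb.toString();
    }
}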
Use the code below to extract text from a PDF:
// data is the Intent returned from the file picker (onActivityResult)
String pat = data.getData().getPath();
File f = new File(pat); // f is the file path of the PDF file

PdfReader reader = new PdfReader(new FileInputStream(f));
PdfReaderContentParser parser = new PdfReaderContentParser(reader);
StringWriter strw = new StringWriter();

// extract the text of every page
for (int page = 1; page <= reader.getNumberOfPages(); page++) {
    TextExtractionStrategy strategy =
            parser.processContent(page, new SimpleTextExtractionStrategy());
    strw.write(strategy.getResultantText());
}
reader.close();

String da = strw.toString();
// set the extracted text from the PDF file on the EditText
edt1.setText(da);