I'm building an app that reads EXIF data from images and overlays it on the image, so you can share your camera settings as a nice graphic rather than typing them out manually (e.g. "F/1.4 at 1/200 ISO400").
I'm using AndroidX ExifInterface 1.1.0-beta01, and the code below works to get every piece of data except LensMake and LensModel, which are always null.
I've tried reverting to ExifInterface 1.0.0, but that made no difference; it behaves identically.
I note that the documentation for ExifInterface describes LensMake and LensModel as returning an "ASCII String", whereas Camera Make and Camera Model just return a "String", so I've tried different variations of getAttribute without success.
These exact files work fine on the iOS version of the app I previously built, and I've tried files from multiple different cameras (Fuji X-T3, Canon 5D III).
var stream: InputStream? = null
try {
    stream = contentResolver.openInputStream(uri)
    val exifInterface = ExifInterface(stream!!)
    // These all come back with the expected values.
    FS = exifInterface.getAttribute(ExifInterface.TAG_F_NUMBER)!!
    SS = exifInterface.getAttribute(ExifInterface.TAG_EXPOSURE_TIME)!!
    ISO = exifInterface.getAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY)!!
    val lensMake = exifInterface.getAttribute(ExifInterface.TAG_LENS_MAKE) // THIS APPEARS TO BE ALWAYS NULL :(
    val lensModel = exifInterface.getAttribute(ExifInterface.TAG_LENS_MODEL) // THIS APPEARS TO BE ALWAYS NULL :(
    val cameraMake = exifInterface.getAttribute(ExifInterface.TAG_MAKE)
    val cameraModel = exifInterface.getAttribute(ExifInterface.TAG_MODEL)
} finally {
    stream?.close()
}
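For context, once those attributes do come back, assembling the overlay string is straightforward. A minimal sketch (my own helper, not part of the code above; note that getAttribute returns the exposure time as a decimal string such as "0.005"):

// Builds a caption like "F/1.4 at 1/200 ISO400" from raw EXIF attribute strings.
fun buildCaption(fNumber: String, exposureTime: String, iso: String): String {
    val seconds = exposureTime.toDouble()
    // Show sub-second exposures in the familiar 1/x form, longer ones in seconds.
    val shutter = if (seconds < 1.0) "1/${Math.round(1.0 / seconds)}" else "${seconds}s"
    return "F/$fNumber at $shutter ISO$iso"
}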
I'd like to be able to read the lens information; I know it's in the file, but this library doesn't seem to expose it.
There is an open bug filed on the issue tracker, which states:
Although the constants are available for LensMake and LensModel, the getter does not return the actual values from the file. It seems like proper support is missing. I think the reason is that ExifTag[] IFD_EXIF_TAGS does not contain an array item for lens make and model. Adding the following lines at the right place in the aforementioned array seems to fix things:
new ExifTag(TAG_LENS_MAKE, 42035, IFD_FORMAT_STRING),
new ExifTag(TAG_LENS_MODEL, 42036, IFD_FORMAT_STRING),
Not sure how reliable this is, but it is at least a solution approach.
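Patching IFD_EXIF_TAGS means building ExifInterface from source, so until the fix lands a less invasive workaround is to read those two tags with a different EXIF parser. Here's a minimal sketch, assuming you add the third-party metadata-extractor library (com.drewnoakes:metadata-extractor) as a dependency; the helper function is mine:

import com.drew.imaging.ImageMetadataReader
import com.drew.metadata.exif.ExifDirectoryBase
import com.drew.metadata.exif.ExifSubIFDDirectory
import java.io.InputStream

// Reads LensMake/LensModel via metadata-extractor instead of ExifInterface.
// Either value is null if the tag is absent from the file.
fun readLensInfo(stream: InputStream): Pair<String?, String?> {
    val metadata = ImageMetadataReader.readMetadata(stream)
    val subIfd = metadata.getFirstDirectoryOfType(ExifSubIFDDirectory::class.java)
    val lensMake = subIfd?.getString(ExifDirectoryBase.TAG_LENS_MAKE)
    val lensModel = subIfd?.getString(ExifDirectoryBase.TAG_LENS_MODEL)
    return lensMake to lensModel
}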
I recently used this great TensorFlow Lite sample on Android.
The project works correctly, but I also want to estimate poses on single images (not just in real-time mode). So I tried to reach that goal, but unfortunately couldn't. Here is my disappointing code:
private fun runOnSimpleImage() {
    // Create a MoveNet detector and keep a reference to it.
    simpleDetector = MoveNet.create(this, device, ModelType.Lightning)

    // Estimate poses on the static bitmap and draw the keypoints onto it.
    simpleDetector?.estimatePoses(templateBitmap)?.let { persons ->
        VisualizationUtils.drawBodyKeypoints(
            templateBitmap,
            persons, false
        )
    }
    showOutputBitmap(templateBitmap)
}
I also searched and found this, but it didn't solve my problem.
My result is something like this:
Fortunately, my code is not wrong: it works correctly and you can use it!
The problem was in the method I used to convert my drawable image to a Bitmap.
I used to use this code:
val drawable = ResourcesCompat.getDrawable(resources, R.drawable.resized,theme)
templateBitmap = (drawable as BitmapDrawable).bitmap
But when I changed it to something like this:
templateBitmap = BitmapFactory.decodeResource(resources,R.drawable.resized)
my problem was solved.
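My guess as to why (an assumption on my part; I haven't traced it through the sample's source) is that the bitmap cached inside a resource BitmapDrawable can be immutable or pre-scaled for screen density, while the visualization code needs a plain mutable ARGB_8888 bitmap to draw on. If decodeResource alone doesn't work for you, a defensive variant is to force such a copy explicitly:

// Force a mutable, software ARGB_8888 copy before running the detector.
// R.drawable.resized is the same resource as above.
val source = BitmapFactory.decodeResource(resources, R.drawable.resized)
templateBitmap = source.copy(Bitmap.Config.ARGB_8888, true)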
Using TensorFlow 1.0.1, it's fine to read an optimized graph and a quantized graph on Android using the TensorFlowImageClassifier.create method, such as:
classifier = TensorFlowImageClassifier.create(
c.getAssets(),
MODEL_FILE,
LABEL_FILE,
IMAGE_SIZE,
IMAGE_MEAN,
IMAGE_STD,
INPUT_NAME,
OUTPUT_NAME);
But according to Pete Warden's blog (https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/), it's recommended to use a memory-mapped graph on mobile to avoid memory-related crashes.
I built the memmapped graph using
bazel-bin/tensorflow/contrib/util/convert_graphdef_memmapped_format \
--in_graph=/tf_files/rounded_graph.pb \
--out_graph=/tf_files/mmapped_graph.pb
and it was created fine, but when I tried to load the file with TensorFlowImageClassifier.create(...) it said the file is not a valid graph file.
On iOS, it's fine to load the file with
LoadMemoryMappedModel(
model_file_name, model_file_type, &tf_session, &tf_memmapped_env);
since it has a method for reading a memory-mapped graph.
So I guess there's a similar function for Android, but I couldn't find it.
Could someone guide me on how to load a memory-mapped graph on Android?
Since the file from the memmapped tool is no longer a standard GraphDef protobuf, you need to make some changes to the loading code. You can see an example of this in the iOS Camera demo app, the LoadMemoryMappedModel() function:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/ios_examples/camera/tensorflow_utils.mm#L159
The same code (with the Objective C calls for getting the filenames substituted) can be used on other platforms too. Because we’re using memory mapping, we need to start by creating a special TensorFlow environment object that’s set up with the file we’ll be using:
std::unique_ptr<tensorflow::MemmappedEnv> memmapped_env;
memmapped_env.reset(
    new tensorflow::MemmappedEnv(tensorflow::Env::Default()));
tensorflow::Status mmap_status =
    memmapped_env->InitializeFromFile(file_path);
You then need to pass in this environment to subsequent calls, like this one for loading the graph.
tensorflow::GraphDef tensorflow_graph;
tensorflow::Status load_graph_status = ReadBinaryProto(
    memmapped_env.get(),
    tensorflow::MemmappedFileSystem::kMemmappedPackageDefaultGraphDef,
    &tensorflow_graph);
You also need to create the session with a pointer to the environment you’ve created:
tensorflow::SessionOptions options;
options.config.mutable_graph_options()
->mutable_optimizer_options()
->set_opt_level(::tensorflow::OptimizerOptions::L0);
options.env = memmapped_env.get();
tensorflow::Session* session_pointer = nullptr;
tensorflow::Status session_status =
tensorflow::NewSession(options, &session_pointer);
One thing to notice here is that we're also disabling automatic optimizations, since in some cases these will fold constant sub-trees and so create copies of tensor values that we don't want, using up more RAM. This setup also means it's hard to use a model stored as an APK asset on Android, since assets are compressed and don't have normal filenames. Instead you'll need to copy your file out of the APK onto a normal filesystem location.
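For that last step, here is a minimal Kotlin sketch of copying a bundled asset out to app-private storage; the function and asset name are illustrative, not part of the TensorFlow API:

import android.content.Context
import java.io.File
import java.io.FileOutputStream

// Copies an APK asset to a real file path that the memory-mapped loader can open.
// Returns the absolute path of the copied file.
fun copyAssetToFile(context: Context, assetName: String): String {
    val outFile = File(context.filesDir, assetName)
    if (!outFile.exists()) {
        context.assets.open(assetName).use { input ->
            FileOutputStream(outFile).use { output ->
                input.copyTo(output)
            }
        }
    }
    return outFile.absolutePath
}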
Once you’ve gone through these steps, you can use the session and graph as normal, and you should see a reduction in loading time and memory usage.
I use the random forest estimator, implemented in TensorFlow, to predict whether a text is English or not. I trained my model on a dataset with 2k samples and two class labels, 0/1 (Not English/English), and saved it using the following code (the train_input_fn function returns features and class labels):
model_path = 'model/'
estimator = TensorForestEstimator(params, model_dir=model_path)
estimator.fit(input_fn=train_input_fn, max_steps=1)
After running the above code, graph.pbtxt and the checkpoints are saved in the model folder. Now I want to use it on Android. I have two problems:
As the first step, I need to freeze the graph and checkpoints into a .pb file for use on Android. I tried freeze_graph (I used the code here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py). When I call freeze_graph on my model, I get the following error and the code cannot create the final .pb graph:
File "/Users/XXXXXXX/freeze_graph.py", line 105, in freeze_graph
_ = tf.import_graph_def(input_graph_def, name="")
File "/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 258, in import_graph_def
op_def = op_dict[node.op]
KeyError: u'CountExtremelyRandomStats'
This is how I call freeze_graph:
def save_model_android():
    checkpoint_state_name = "model.ckpt-1"
    input_graph_name = "graph.pbtxt"
    output_graph_name = "output_graph.pb"
    checkpoint_path = os.path.join(model_path, checkpoint_state_name)
    input_graph_path = os.path.join(model_path, input_graph_name)
    input_saver_def_path = None
    input_binary = False
    output_node_names = "output"
    restore_op_name = "save/restore_all"
    filename_tensor_name = "save/Const:0"
    output_graph_path = os.path.join(model_path, output_graph_name)
    clear_devices = True
    freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                              input_binary, checkpoint_path,
                              output_node_names, restore_op_name,
                              filename_tensor_name, output_graph_path,
                              clear_devices, "")
I also tried freezing on the iris dataset from tf.contrib.learn.datasets.load_iris and got the same error, so I believe it is not related to the dataset.
As a second step, I need to use the .pb file on the phone to classify a text. I found the camera demo example by Google, but it contains a lot of code. I wonder if there is a step-by-step tutorial on how to use a TensorFlow model on Android: passing in a feature vector and getting back the class label.
Thanks in advance!
UPDATE
Using a recent version of TensorFlow (0.12), the problem is solved. However, the question now is what I should pass to output_node_names. How can I find out which nodes in the graph are the output nodes?
Re (1): it looks like you are running freeze_graph on a build of TensorFlow which does not have access to contrib ops. Maybe try explicitly importing tensorforest before calling freeze_graph?
Re (2): I don't know of a simpler example.
CountExtremelyRandomStats is one of TensorForest's custom ops, and exists in tensorflow/contrib. As was pointed out, TF switched to including contrib ops by default at some point. I don't think there's an easy way to include the contrib custom ops in the global registry in the previous releases, because TensorForest uses the method of building a .so file that is included as a data file which is loaded at runtime (a method that was the standard when TensorForest was created, but may not be any longer). So there are no easily-included python build rules that will properly link in the C++ custom ops. You can try including tensorflow/contrib/tensor_forest:ops_lib as a dep in your build rule, but I don't think it will work.
In any case, you can try installing the nightly build of tensorflow. The alternative includes modifying how tensorforest custom ops are built, which is pretty nasty.
I'm a novice at Xamarin and I'm trying to use ShinobiCharts in Xamarin.Android to show candlestick data.
Code from .axml:
<fragment
    class="com.shinobicontrols.charts.ChartFragment"
    android:id="@+id/chart"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
This is the fragment where the chart should be shown.
var chartFragment = (ChartFragment)FragmentManager.FindFragmentById(Resource.Id.chart);
var chart = chartFragment.ShinobiChart;
chart.SetLicenseKey("license_key");
chart.AddXAxis(new DateTimeAxis
{
GesturePanningEnabled = true,
GestureZoomingEnabled = true
});
chart.AddYAxis(new NumberAxis
{
GesturePanningEnabled = true,
GestureZoomingEnabled = true
});
var dataPoints =
quotes.Select(
quoteCandle =>
new MultiValueDataPoint(DateUtils.ConvertToJavaDate(TimeStamp.UnixToDate(quoteCandle.Timestamp)),
(double) quoteCandle.Low, (double) quoteCandle.High,
(double) quoteCandle.Open, (double) quoteCandle.Close)).ToList();
var series = new OHLCSeries { DataAdapter = new SimpleDataAdapter() };
series.DataAdapter.AddAll(dataPoints);
chart.AddSeries(series);
chart.XAxis.Style.GridlineStyle.GridlinesShown = true;
chart.YAxis.Style.GridlineStyle.GridlinesShown = true;
chart.RedrawChart();
This is the code that creates the chart.
The problem is that the added series is not shown on the chart. The style changes are applied, but there is no series. What am I doing wrong? I hope someone can help.
Sorry for the question; candlesticks are not available in the trial version of ShinobiControls, only in the premium version.
As sammyd said, CandlestickSeries are available in the trial version of ShinobiCharts for Android (as are all the other premium features). Have you managed to get the CandlestickChart sample running? The sample is included in the download bundle.
On the face of it your code looks fine (without seeing the rest of it) but there are a couple of things I'd recommend checking:
Have you replaced the string license_key with the trial license key we would have emailed you when you downloaded the bundle? The trial license key is a really long string of characters and digits.
Is your data coming in as expected and does it make sense (e.g. are your low values less than your open values etc.)?
I'm not sure it'll make much difference, as they're pretty much interchangeable, but you're actually creating an OHLCSeries in code while you mention a CandlestickSeries.
Have you set the series' style object in a way that would stop it from showing, i.e. have you set it to be transparent in colour?
Are you getting an actual error, and if so what message is being logged?
Hopefully the above will help you get your candlestick chart up and running!
Edit:
If you're using the Android Emulator, your AVD needs to have GPU emulation turned on (as we use OpenGL ES 2.0 for rendering the charts). There's more information on using the emulator with OpenGL on the Android developer site.
Disclaimer: I work for ShinobiControls
I'm porting a simple Tetris-like XNA app to Android using Mono for Android and MonoGame; I have followed the suggested steps in this link, and so far everything compiles well and no relevant warnings fire. However, upon loading the content, a null-parameter exception breaks the program at the point below:
protected override void LoadContent() {
// ...
_font = Content.Load<Microsoft.Xna.Framework.Graphics.SpriteFont>("SpriteFont1");
// ...
}
The content root directory is set in the game constructor class:
public Game2()
{
    Content.RootDirectory = "Content";
    //Content.RootDirectory = "Assets/Content"; // Also tried this, among others.
    // ...
}
And I have tried several combinations, all to no avail.
I have also tried setting the xnb files' Build Action to Content as well as to AndroidAsset, and having them linked, copied always, copied only if newer, etc.
Either way, my problem is that I don't really understand WHY and HOW I should do this. I'm rather new to the platform and to XNA as well, so this may very well be a newbie question, but the truth is that after several hours banging my head and fists against the monitor/keyboard I feel stuck and need your help.
I have a library that supports variable-width fonts (generated by BMFont) on MonoGame. Unfortunately it is a renderer and so has other code around it. However, the basic idea is very simple. You can take a look at the loader here and the mesh builder (given a string) here. This builder supports fonts that spread characters across multiple pages, too.
Hope this helps!
MonoGame (2.5.1) throws NotImplementedException in ContentManager.Load for the SpriteFont type. I have the same unresolved problem and am trying to avoid DrawString.
For loading textures in a Win32 application I use:
Content.RootDirectory = @"../../Content";
var sampleTexture = Content.Load<Texture2D>("Sample.png");
You don't even have to add it to the solution.
For an Android (MonoDroid) application you must add a "Content" folder to your solution and set the Build Action of "Sample.png" to "AndroidAsset" in its properties.
Content.RootDirectory = "Content";
var sampleTexture = Content.Load<Texture2D>("Sample.png");
See also:
http://monogame.codeplex.com/discussions/360468
http://monogame.codeplex.com/discussions/267900