I have completed training a simple linear regression model in a Jupyter notebook using TensorFlow, and I am able to save and restore the saved variables like so:
Now I'm trying to use the model on an android application.
Following the tutorial here, I am able to get to the stage where I import the TensorFlow library like so:
Now I'm at the point where I want to give the model input data and get an output value (refer to the application flow below). However, they use a .pb file (I have no clue what this is) in their application. Among the 4 files:
that I got from saving my model, there is no .pb file, which left me dumbfounded.
What the application does:
Predicts the SoC with a pre-trained TensorFlow model using the user's input value of height.
The linear regression equation used is: y = Wx + b
y - SoC
W - weight
x - height
b - bias
All variables are float values.
Android application flow:
User inputs height value in textbox, and presses "Predict" button.
Application uses the weight, bias & height values of the saved model to predict SoC.
Application displays predicted SoC in textview.
So my question is: how do I import and use my model in an Android application using Android Studio 2.3.1?
Here are my .ipynb and .csv data files.
I may have misunderstood the question but:
Given that the model is pre-trained, the weight and bias are not going to change, so you can simply use the W and b values calculated in the Jupyter notebook and hard-code them in a simple expression:
<soc> = -56.0719*<height> + 98.3029
There is no need to import a TensorFlow model for this.
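A minimal Java sketch of that hard-coded prediction (class and method names are my own invention; the constants are the W and b values from the expression above):

```java
public class SocPredictor {
    // W and b taken from the trained linear regression: y = Wx + b
    private static final float WEIGHT = -56.0719f;
    private static final float BIAS = 98.3029f;

    // Predicts SoC from a height value, matching the notebook's equation.
    public static float predictSoc(float height) {
        return WEIGHT * height + BIAS;
    }

    public static void main(String[] args) {
        // Example: height entered by the user in the app's textbox
        System.out.println(predictSoc(1.0f));
    }
}
```

In the app, the "Predict" button handler would parse the height from the textbox, call a method like this, and write the result to the textview.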
UPDATE
To ensure the question is answered, the *.pb file comes from freezing the checkpoint file with the graph - refer to the second code panel in the linked tutorial for how to do this.
For an explanation of what freezing is, refer here.
Related
I am quite new to this platform, so please be kind if my question is stupid. Currently I am trying to integrate a deep learning model using SNPE to detect human pose. The architecture of the model is as follows:
Input -> CNN layers -> split into two different sets of CNN layers -> 2 different output layers
So, basically my network starts from input data and then generates two different outputs (output1 and output2), but when I try to execute the network in SNPE, it seems to only have information about the output2 layer. Does any of you have any idea about this situation, and is it possible for me to look at the output of output1? Thank you all in advance!
I assume you have successfully converted the model to DLC and are trying to run the network with the snpe-net-run tool. To get multiple outputs while running snpe-net-run, you need to specify the output layers (in addition to the input) in the file that is given to the --input_list argument.
Let's assume outputlayer1 and outputlayer2 are the names of 2 output layers and ~/test/example_input.raw is the path of the input, then the input list file format for the same is as follows:
#outputlayer1 outputlayer2
~/test/example_input.raw
In the first line, # is followed by the output layer names, separated by whitespace. The next line contains the path to the input (single-input case). You can also add multiple input files, one line per iteration. If there is more than one input per iteration, whitespace is used as a delimiter.
The general format of the input list file is as follows:
#<output_name>[<space><output_name>]
<input_layer_name>:=<input_layer_path>[<space><input_layer_name>:=<input_layer_path>]
…
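As a concrete sketch of that format (layer and file names here are hypothetical), an input list requesting both outputs with two inputs per iteration could look like:

```
#outputlayer1 outputlayer2
input_a:=~/test/in_a.raw input_b:=~/test/in_b.raw
```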
You can refer to snpe-net-run documentation for more information.
This is my code for a TensorFlow Lite model imported into Android Studio:
[screenshot: model code]
And this is the output when I run the app:
[screenshot: app output]
I don't understand it. How can I get the model output?
Update:
The output is a float array of 6 elements, but what I want is the index of the largest element. I tried this code:
[screenshot: code attempt]
Is it right? I'm getting the same output, 1, on every prediction.
Looks like your TFLite model generates a feature vector as a float array, which represents the characteristics of the image data.
See also https://brilliant.org/wiki/feature-vector/
The feature vector is usually considered intermediate data, and it often needs to be fed into an additional model for image classification or other tasks.
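That said, if the goal is simply the index of the largest of the 6 output scores, a plain argmax over the float array does it. A minimal sketch (class name and the example array are my own, not from the question's screenshots):

```java
public class ArgMax {
    // Returns the index of the largest element in the model's output array.
    public static int argMax(float[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) {
                best = i;
            }
        }
        return best;
    }
}
```

If this always returns the same index, the likely cause is upstream: the same input being fed each time, or the output buffer not being read correctly, rather than the argmax itself.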
I'm trying to use the model from this tutorial in an Android app. I wanted to modify DetectorActivity.java and TensorFlowMultiBoxDetector.java, found here, but it seems I am missing some parameters such as imageMean, imageStd, inputName, outputLocationsName and outputScoresName.
From what I understand, inputName is the name of the input for the model, and both outputs are the names for the position and score outputs, but what do imageMean and imageStd stand for?
I don't need to use the model with a camera, I just need to detect objects on bitmaps.
Your understanding of the input/output names is correct. They are TensorFlow node names that can receive input and will contain the outputs at the end. imageMean and imageStd are used to scale the RGB values of the image to a mean of 0 and a standard deviation of 1. See the 8 lines starting from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowMultiBoxDetector.java#L208
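The scaling itself is just (pixel - imageMean) / imageStd per channel. A self-contained sketch of that idea (class and method names are my own; 128/128 are example values, not necessarily what your model was trained with):

```java
public class ImageNormalizer {
    // Converts packed ARGB pixels into a float array of R, G, B channel
    // values, scaling each 0-255 value by (value - imageMean) / imageStd.
    public static float[] normalize(int[] pixels, float imageMean, float imageStd) {
        float[] out = new float[pixels.length * 3];
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            out[i * 3]     = (((p >> 16) & 0xFF) - imageMean) / imageStd; // R
            out[i * 3 + 1] = (((p >> 8)  & 0xFF) - imageMean) / imageStd; // G
            out[i * 3 + 2] = ((p         & 0xFF) - imageMean) / imageStd; // B
        }
        return out;
    }
}
```

The mean/std pair must match whatever normalization the model saw during training, which is why they appear as per-model parameters in the demo.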
The TensorFlow Android demo app you are referring to has been updated. It now supports MobileNets. Check it out on GitHub: commit 53aabd5cb0ffcc1fd33cbd00eb468dd8d8353df2.
I'm a newbie to Meteor, but I have created, under localhost, a simple passenger counter that I intend to use in an Android app for a passenger survey (holding a laptop in an airport isn't a particularly wise idea). It works, but there is no export function. It basically uses a Meteor.Collection as simple rows of data: one row being one passenger with a date/time stamp, plus two buttons, Add Passenger and Reset. I have tried looking everywhere for a simple answer. What I want to do now is add to the client side (in the browser, but ultimately on the mobile) a button called Export, which, when clicked, takes each row in the Meteor.Collection and exports it to a text file, line by line. Very simple. Alas, right now I haven't got a clue how to proceed. I've seen some references to FS.Collection, but I don't think that is what I want. This is ultimately for a mobile application.
Thanks.
JSON.stringify each object and concatenate the results into one plain-text string, say bytes.
Create a blob from that text object, ex: blob = new Blob([bytes]);
Use the filesaver package to save to a file locally.
A little backstory: I'm working on an Android application with OpenGL ES 2.0, and some time ago I faced a problem with line width. It finally turned out that the glLineWidth() implementation is vendor specific, and the range of possible values is not guaranteed. For example, on the Adreno 200 it is 1-18, and on the emulator I got 1-100.
I'm wondering if it is possible to get the list of such methods.
The list of limits with vendor specific values is in the spec document. To find that:
Go to https://www.khronos.org/ (Khronos is the consortium responsible for the OpenGL ES standard).
Click on "OpenGL ES" in the tabs above the top pane on the page.
Click on "Specs & Headers" at the bottom of the pane. This will bring you to https://www.khronos.org/registry/gles/.
Find the section "OpenGL ES 2.0 Specifications and Documentation", and click on "Full Specification". Or better yet, download the PDF file to have it handy for future use.
In this PDF file, look for section "6.2 State Tables", which starts on page 134. The information you're looking for is then in "Table 6.18 Implementation Dependent Values".
This table lists the name of each value, and the function to use for querying the value for your specific implementation. Also very useful, it lists the minimum value guaranteed to be supported by all implementations.
For your specific example, you will find a value ALIASED_LINE_WIDTH_RANGE, which is the 6th entry in the table, with GetFloatv for the function name, 1,1 for the minimum supported value, and this for the description:
Range (lo to hi) of aliased line widths
Based on this, you know that implementations can have a limit as low as 1 for the maximum line width (i.e. they do not support wide lines at all), and you can query the limit for the implementation you are using with:
GLfloat widthRange[2];
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, widthRange);
You can get all such data from glGet while the program is running. For example, calling glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidthRange); returns the line width range.
The OpenGL ES 2.0 specification lists all the minimum requirements in its section 6.2. From there we can see that the line width range is only guaranteed to include [1,1]; everything beyond that is implementation specific.
I am not aware of a list that would compare "all" implementations according to attribute values.