How to deploy a pre-trained model using SageMaker? - android

I have a pre-trained model that will translate text from English to Marathi. You can find it here...
git clone https://github.com/shantanuo/Word-Level-Eng-Mar-NMT.git
Clone it and run the notebook. I am looking for a way to deploy it so that users can use it as an API.
The guidelines for deploying the model can be found here...
https://gitlab.com/shantanuo/dlnotebooks/blob/master/sagemaker/01-Image-classification-transfer-learning-cifar10.ipynb
I would like to know the steps to follow in order to deploy the model.
Is it possible to create an android app for this?

Great news! Your model has already been deployed: that happened when you created the endpoint. Make sure you DON'T run the sage.delete_endpoint(EndpointName=endpoint_name) at the end of the notebook!
Now you can call that endpoint via command line or an SDK like boto3.
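For instance, here is a boto3 sketch of calling the endpoint directly (the endpoint name, payload shape, and ContentType are assumptions; match them to what your model actually accepts):

```python
import json

def build_invoke_args(endpoint_name, text):
    """Assemble the keyword arguments for a sagemaker-runtime invoke_endpoint call."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",  # assumption; use your model's real type
        "Body": json.dumps({"text": text}),
    }

def translate(endpoint_name, text):
    import boto3  # imported here so the helper above stays stdlib-only
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(**build_invoke_args(endpoint_name, text))
    return json.loads(response["Body"].read())
```

The translate helper is only a sketch; an endpoint created from the image-classification notebook would expect raw image bytes with ContentType application/x-image instead.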
To attach it to a public API endpoint I recommend leveraging API Gateway, Lambdas, and Sagemaker in something similar to this tutorial.
API Gateway will handle the hosting and security/tokens (if desired). After the HTTP request hits API Gateway, it is caught by a designated lambda. The lambda's job is to verify the incoming data, call the SageMaker endpoint, and return the response in the correct format.
Step 1: Build Lambda
To correctly deploy your lambda you will need to create a Serverless Framework Service.
1) First install Serverless Framework
2) Navigate to the directory where you want to store the API Gateway and Lambda files
3) In the command line run:
serverless create --template aws-python
4) Create a new file named lambdaGET.py to be deployed inside your lambda
lambdaGET.py
```python
import os
import io
import csv
import json

import boto3

'''
ENDPOINT_NAME should be the same string that was created in your
notebook on the line:
    endpoint_name = job_name_prefix + '-ep-' + timestamp
'''
ENDPOINT_NAME = 'your-endpoint-name'

client = boto3.client('sagemaker-runtime')
# client = boto3.client('runtime.sagemaker') should also be acceptable


def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    # I recommend you verify the incoming data here, although it is not critical.
    '''
    I'm assuming you're going to attach this to a GET, in which case the
    payload will be passed inside "queryStringParameters".
    '''
    payload = event["queryStringParameters"]
    print(payload)

    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        # Body must be bytes or a string; adjust the serialization and
        # ContentType to whatever your endpoint actually expects.
        Body=json.dumps(payload),
        ContentType='application/x-image',
        Accept='Accept'
    )
    result = json.loads(response['Body'].read())
    print(result)

    '''
    After the lambda has obtained the result, it needs to be correctly
    formatted before being passed back across API Gateway.
    '''
    return {
        "isBase64Encoded": False,
        "statusCode": 200,
        "headers": {},
        "body": json.dumps(result)
    }
```
Step 2: Build Serverless.yml
In this step you need to build the serverless file to deploy the lambda, API Gateway, and connect them together.
```yaml
service: Word-Level-Eng-Mar-NMT

provider:
  name: aws
  runtime: python2.7
  timeout: 30
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  profile: ${opt:profile, 'default'}
  apiName: Word-Level-Eng-Mar-NMT-${self:provider.stage}
  environment:
    region: ${self:provider.region}
    stage: ${self:provider.stage}
  stackTags:
    Owner: shantanuo
    Project: Word-Level-Eng-Mar-NMT
    Service: Word-Level-Eng-Mar-NMT
    Team: shantanuo
  stackPolicy: # This policy allows updates to all resources
    - Effect: Allow
      Principal: "*"
      Action: "Update:*"
      Resource: "*"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "sagemaker:InvokeEndpoint"
      Resource:
        - "*"
      # Note: Having * for the resource is highly frowned upon; you should change
      # this to your actual account number and EndpointArn when you get the chance

functions:
  lambdaGET:
    handler: lambdaGET.lambda_handler
    events:
      - http:
          method: GET
          path: /translate
          resp: json
```
Step 3: Deploy
1) In this step you will need the Serverless Framework installed (see Step 1)
2) Install the AWS command line interface
3) Run aws configure to set up your AWS credentials
4) Make sure your directories are set up correctly
(lambdaGET.py and serverless.yml should be in the same folder)
```
-ServiceDirectory
--- lambdaGET.py
--- serverless.yml
```
5) Navigate to the ServiceDirectory folder and in the command line run:
sls deploy
Step 4: Test
Your API can now be invoked using browsers or programs such as Postman
The base URL for all of your service's API endpoints can be found in the console under API Gateway > Service (in your case 'Word-Level-Eng-Mar-NMT') > Dashboard.
Almost there... Now that you have the base URL, you need to add the extension we placed on our endpoint: /translate
Now you can place this entire URL in Postman and send the same payload you used during the creation and testing conducted in your notebooks. In your case it will be the file
test.jpg
TAAA DAAA!!
If your model handled text or relatively small payloads, this would be the end of the story. But because you are trying to pass an entire image, it is possible that you will be over the size limit for API Gateway. In that case we will need an alternative plan: upload the image to an accessible location (an S3 bucket, for example) and pass the URI via the API. The lambda then has to retrieve the image from the bucket and invoke the model. Still doable, just a little more complex.
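That S3 detour can be sketched like this (the helper names and bucket layout are my own assumptions; the URI parsing is shown in full, with the AWS calls kept minimal):

```python
def parse_s3_uri(uri):
    """Split 's3://bucket/key/parts' into (bucket, key)."""
    if not uri.startswith("s3://"):
        raise ValueError("not an S3 URI: " + uri)
    bucket, _, key = uri[len("s3://"):].partition("/")
    return bucket, key

def invoke_with_s3_image(client, endpoint_name, s3_uri):
    # The client uploads the image to S3 and sends only the short URI
    # through API Gateway; the lambda fetches the actual bytes itself.
    import boto3
    bucket, key = parse_s3_uri(s3_uri)
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
    return client.invoke_endpoint(
        EndpointName=endpoint_name,
        Body=body,
        ContentType="application/x-image",
    )
```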
I hope this helps.

The basic idea of deploying a model to SageMaker is:
1) Containerize your model.
2) Publish your model to ECR repository and grant SageMaker necessary permissions.
3) Call CreateModel, CreateEndpointConfig and CreateEndpoint to deploy your model to SageMaker.
Per your training notebook, you didn't use the SageMaker SDK to containerize your model automatically, so it is more complicated to start from scratch.
You may consider using any of the following sample notebooks with Keras to containerize your model first:
https://github.com/awslabs/amazon-sagemaker-examples
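The three API calls in step 3 can be sketched with boto3 (every name below, plus the image URI, role ARN, and instance type, is a placeholder you would substitute):

```python
def endpoint_config_request(config_name, model_name, instance_type="ml.m4.xlarge"):
    """Build the CreateEndpointConfig request (all names are placeholders)."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InitialInstanceCount": 1,
            "InstanceType": instance_type,
        }],
    }

def deploy(model_name, image_uri, role_arn):
    import boto3
    sm = boto3.client("sagemaker")
    # 1) CreateModel points SageMaker at your container image in ECR.
    sm.create_model(ModelName=model_name,
                    PrimaryContainer={"Image": image_uri},
                    ExecutionRoleArn=role_arn)
    # 2) CreateEndpointConfig picks the serving instance type and count.
    sm.create_endpoint_config(**endpoint_config_request(model_name + "-cfg", model_name))
    # 3) CreateEndpoint provisions the HTTPS endpoint (takes a few minutes).
    sm.create_endpoint(EndpointName=model_name + "-ep",
                       EndpointConfigName=model_name + "-cfg")
```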

Related

How should the structure be built for Firebase Requests

Brief information: I am working on a quiz application for Android. The database is on Firebase and users log in anonymously. When the user opens the application, they are signed in automatically.
My question is about Firebase: I could not work out the logic for the Firebase requests.
When the application is opened:
1) signInAnonymously (the Firebase function) should be called first.
2) Then I check whether the signed-in user has a saved point in the Firebase database.
3) If the user does not have a point, it is generated.
4) Then I send a request to get the user's point.
In each step I send a request to Firebase via its async methods. The sequence is important because the output of any step can be the input for the next step.
I handle this via callbacks, but I do not know whether that is the best way.
screenshots of callbacks for these steps
Can you give me advice on this? If I do not use callbacks, problems occur because of the asynchronous Firebase methods. The reason I opened this issue is these nondeterministic problems. I can learn and build any other algorithm to make it better. Thank you.
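The dependency between the four steps is the crux. Sketched language-agnostically with Python's asyncio (the Firebase calls are hypothetical stand-ins), each step simply awaits the previous one instead of nesting callbacks:

```python
import asyncio

# Hypothetical stand-ins for the Firebase calls described in the steps above.
async def sign_in_anonymously():
    return "uid-123"

async def fetch_point(uid):
    return None  # None models "no saved point yet"

async def create_point(uid):
    return 0

async def startup_flow():
    # Each await completes before the next step starts, mirroring the
    # callback chain: sign in -> check point -> create if missing -> read.
    uid = await sign_in_anonymously()
    point = await fetch_point(uid)
    if point is None:
        point = await create_point(uid)
    return uid, point
```

On Android the analogous tool is chaining the Tasks that the Firebase SDK returns, so each step starts only after the previous one succeeds.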
It looks like you are using nested callbacks, and I am not a Java programmer, but you may want to take it easy on yourself and not go that route.
If by signing in anonymously you mean a One-Time-Password authentication flow, such as just providing a phone number, that would definitely be a good approach.
You can use Google Cloud Functions, but the functions would have to be written in Nodejs, Python or Go.
Either way take a look at this flow below:
User requests OTP
Acknowledge the request
Generate code, save the code on backend (GCF)
Text user the code
User sends you the correct code
Compare codes on the server
Send user some kind of token or as you say a "point" to identify them.
I do believe Java does have support for JSON Web Tokens.
So after you set up your GCF project, you are going to get some folders and files like so:
.firebaserc: a hidden file that helps you quickly switch between projects with firebase use.
firebase.json: describes properties for your project.
functions/: this folder contains all the code for your functions.
functions/package.json: an NPM package file describing your Cloud Functions.
functions/index.js: the main source for your Cloud Functions code.
functions/node_modules/: the folder where all your NPM dependencies are installed.
You want to import the needed modules and initialize the app:
```javascript
const admin = require("firebase-admin");
const functions = require("firebase-functions");
const serviceAccount = require("./service_account.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://my-project.firebaseio.com"
});
```
That service_account.json is something you need to create; it's not a library.
It will have a bunch of private and public keys that you get from your Firebase console. Make sure you also add that file to your .gitignore.
So I am skipping some crucial details here that you will have to figure out so as to get to your main question.
First, you need the idea of a user, so you need to create a user inside GCF. That in itself is going to be a function, and as you mentioned, Firebase is asynchronous, so it would look something like this:
```javascript
const admin = require("firebase-admin");

module.exports = function(req, res) {
  // Verify the user provided a phone
  if (!req.body.phone) {
    return res.status(422).send({ error: "Bad Input" });
  }
  // Format the phone number to remove dashes and parens
  const phone = String(req.body.phone).replace(/[^\d]/g, "");
  // Create a new user account using that phone number, then
  // respond to the user's request saying the account was made
  admin
    .auth()
    .createUser({ uid: phone })
    .then(user => res.send(user))
    .catch(err => res.status(422).send({ error: err }));
};
```
I grabbed the code above from a previous project of mine. For you this part will also be in JavaScript (Node.js specifically), since, again, Node.js, Go and Python are the only languages supported by GCF.
So the comments are self-explanatory but I feel compelled to explain that the first thing I had to resolve is how to pass in information to this function in a request.
To do that I had to reference the req.body object, as you see above. req.body contains all the different data that was passed to this function when the user called it. I was not sure if you knew that. So before you go and copy-paste what I have above, do a res.send(req.body); with nothing else inside module.exports = function(req, res) {} so you can get a good sense of how this all works.
For every function you create you need to run a deploy, e.g. firebase deploy (or firebase deploy --only functions:yourFunctionName for a single function).
After you feel you have a handle on this and it's all working successfully, you can create your Android Studio project and add the database dependency like so:
compile 'com.google.firebase:firebase-database:10.2.1'
And then you will probably want to create your User model, maybe like this:
```java
public class User {
    public String phone;

    public User() {
        // Default constructor required for calls to DataSnapshot.getValue(User.class)
    }

    public User(String phone) {
        this.phone = phone;
    }
}
```
Anyway, I hope that gives you a good enough idea to get you going. Best of luck. I know I failed to take time out to explain that the regex in my code sanitizes the phone number, and probably some other stuff. So again, don't just copy-paste what I offered; study it.

call a Google Spreadsheets Apps Script function from Google Sheets API v4

I have a Spreadsheet with some Apps Script functions bound to it.
I also have an Android client using Google Sheets API v4 to interact with that spreadsheet.
Is there a way for the Android client to call/run some function in the Apps Script code?
The reason I need the code to be run on the Apps Script side, and not simply on the Android client, is because I'm sending an email when something happens to the doc, and I would like the email to be sent from the owner account of the spreadsheet, not from the Android user authenticated via the API.
I know I can trigger functions implicitly like by adding rows to a doc, but is there a way to directly run a specific function?
Yes. You can make GET and POST requests to Google Apps Scripts from anywhere that can make REST-type calls, including clients. If you need authentication, there are also the Apps Script client libraries. I wrote a short script for emailing from a request from one Apps Script to another here. But it would also work if you called the emailing script from your client.
Deploy your Google Apps Script as a Web App (see the reference); this way you can run doGet(e) or doPost(e) and invoke other functions inside one of them with conditions.
You might have gotten the answer to your question. Just in case you have not, below are some points that may help with your development:
1) Create the server side script (i.e., Google Apps Script) function like usual:
```javascript
function myFunction(inputVar) {
  // do something
  return returnVar;
}
```
2) Create a doGet(e) or doPost(e) function like below; it can be in the same .gs file as the function from 1):
```javascript
function doGet(e) {
  var returnVar = "";
  if (e.parameter.par1 != null) {
    var inputVar = e.parameter.par1;
    returnVar = myFunction(inputVar);
  }
  return HtmlService.createHtmlOutput(returnVar);
}
```
3) Publish and deploy your project as a web app. Note the deployed URL.
4) From your Android client, make an HTTP call to the URL in the form: your_webapp_url?par1="input value"
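For step 4, the call URL should be assembled with proper escaping. A stdlib Python sketch (the deployment ID is a placeholder; on Android you would do the same with Uri.Builder or your HTTP library):

```python
from urllib.parse import urlencode

def webapp_url(base_url, **params):
    """Build the deployed web-app URL with URL-encoded query parameters."""
    return base_url + "?" + urlencode(params)

# A GET to this URL reaches doGet(e) with e.parameter.par1 set:
url = webapp_url("https://script.google.com/macros/s/DEPLOYMENT_ID/exec",
                 par1="input value")
```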

Getting python Cloud Endpoint Enum values on an Android client

I have a python class that inherits from Cloud Endpoints Enum and is included in a Message for transmission to an Android client.
```python
class Status(messages.Enum):
    SUCCESS = 1
    NOT_IN_MATCH = 2
    ALREADY_MATCHED = 3
    FAILURE = 4
```
Is there any way to get these constant strings ("SUCCESS", "NOT_IN_MATCH", "ALREADY_MATCHED", "FAILURE") in the Android client? I don't see them anywhere in the generated Java source code when I use get_client_lib.
Note: I have seen this post that gives a solution in Java. That is not applicable when using python Cloud Endpoints.
According to a Google Cloud Endpoints dev, this is currently not possible.
Unfortunately, there currently isn't [a way to get these constant strings in the Android client].
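A common server-side workaround is to ship the names yourself as plain strings in a message field the client already understands. Sketched here with the stdlib enum module standing in for messages.Enum (an assumption; protorpc's API differs in details):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    NOT_IN_MATCH = 2
    ALREADY_MATCHED = 3
    FAILURE = 4

def status_names():
    # Serialize the enum names so a client can receive them as plain strings.
    return [member.name for member in Status]
```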

Get url & parameters of a POST request made from an android app. (Hack an application)

1) Is it possible to fetch the endpoint of an HTTP POST request made from an Android app?
2) Is it possible to fetch the parameters (keys & values) of that request?
If it is not possible to fetch the exact endpoint & the parameter list for a POST request made from an Android device, can we assume that it is very hard to hack that particular endpoint?
Edit 1 :
Say, in my android app, I am using an end-point like - http://abc.xyz.com/buyItem with 2 parameters : itemCode=value1, price=value2
How can the URL, parameter list & values be fetched by a hacker?
Yes, it is possible to monitor network traffic and get those values.
It is pretty easy to set up with something like a basic (cheap) network hub (not a switch), an attached PC, and a few network tools like tcpdump or ngrep.
A tcpdump example would be:
tcpdump -A -i eth3 > t.dump
Change eth3 to your network interface. You can look over the file t.dump in a text editor or use less or more.
NOTE: SSL / HTTPS connections are encrypted, so tcpdump will only give you parameters over HTTP.
There are other ways as well.
You could get lucky and simply unpack the APK and grep for something like ://. For example:
grep -R '://' ./unpacked-apk/*
Update: Added a tcpdump example.
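The same :// search can also be done programmatically. A small Python sketch of scanning extracted text for URL-like tokens (the pattern is a rough heuristic, not a full URL grammar):

```python
import re

# scheme '://' followed by anything up to whitespace or quoting characters
URL_PATTERN = re.compile(r"[a-z][a-z0-9+.\-]*://[^\s\"'<>]+", re.IGNORECASE)

def find_urls(text):
    """Return URL-like tokens in text, e.g. strings pulled from an unpacked APK."""
    return URL_PATTERN.findall(text)
```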
I would recommend using Fiddler for this purpose; it is a tool with a nice UI, and you would not need to write any complicated command-line commands. Here is a documentation article on how to configure Fiddler to capture traffic on a Nexus device.

Django Media files and Retrofit

We are working on a project with an Android frontend and a django-rest-framework backend.
The media files were served through Django's media files mechanism, and we could cache media files and see them in the app when it was offline.
```python
urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),
    # post username & password to get token
    url(r'auth/login/', 'rest_framework_jwt.views.obtain_jwt_token', name='jwt_login'),
    ...
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
The problem was that we needed to apply authorization to media files, so I removed the media paths from the URLs and added a view to do the job:
```python
@api_view(['GET'])
def media_image_handler(request, url):
    # extra code before serving media
    ...
    ...
    # read and return the media file in the response
```
And urls.py changed to this:
```python
urlpatterns = patterns('',
    url(r'^media/(?P<url>.*)/$', media_image_handler, name='media'),
    url(r'^admin/', include(admin.site.urls)),
    url(r'auth/login/', 'rest_framework_jwt.views.obtain_jwt_token', name='jwt_login'),
)
```
Now we have 2 problems:
Due to the extra code, response time has become higher
Cached files cannot be loaded offline
Now the question:
Is there any suitable method that can be used instead?
For example, instead of full authentication, use randomly generated file names that cannot be guessed easily, or whatever?
We would appreciate any helpful opinion.
P.S. We are using Retrofit and Picasso on Android
First of all, static files should be served by servers like nginx or lighttpd. As for your question: I think you are talking about controlled downloads. This feature is called X-Sendfile and is implemented in nginx and other servers. You can read about it in the nginx documentation.
https://www.nginx.com/resources/wiki/start/topics/examples/xsendfile/
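The "randomly generated file names" idea from the question can also work. A Python sketch using the stdlib secrets module (the helper name is my own):

```python
import secrets
from pathlib import PurePosixPath

def obfuscated_media_name(original_name):
    """Rename an upload to an unguessable random token, keeping its extension."""
    suffix = PurePosixPath(original_name).suffix
    return secrets.token_urlsafe(32) + suffix
```

Note that the name itself then carries no authorization: anyone holding the full URL can fetch the file, so treat the token like a capability and serve it only over HTTPS.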
