Workout Movement Counting using google_ml_kit flutter - android

I am creating an app using
google_ml_kit
for face recognition. I have successfully implemented the face recognition using Flutter (front end), Node.js (back end), and a MongoDB database. Now I have to create a Workout Movement Counting example (dumbbell rep count). Can anyone please let me know whether this is possible with the google_ml_kit package? If yes, please share some tips; they would help me a lot.
Thanks in advance!

ML Kit's Android vision quickstart app provides an example of rep counting with Pose Detection: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Search for "Pose Detection" on the page linked above and see the instructions on how to enable classification and counting in the demo app.
Here is the most relevant code: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart/app/src/main/java/com/google/mlkit/vision/demo/java/posedetector
The implementation is in Java, but it is just an algorithm, so you should be able to port it to Dart, I would guess (I am not personally familiar with Dart).
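To illustrate the counting idea: the quickstart classifies poses and counts transitions between an "up" and a "down" state. The sketch below is a simplified, hypothetical version that counts a dumbbell curl rep when the elbow angle crosses two thresholds; the threshold values and the state-machine structure are my assumptions, not the quickstart's actual k-NN-based classifier.

```java
// Hypothetical angle-threshold rep counter. The quickstart's real code
// classifies poses with k-NN, but the up/down state machine is the same idea.
public class RepCounter {
    private static final double DOWN_ANGLE = 160.0; // arm extended (assumed threshold)
    private static final double UP_ANGLE = 60.0;    // arm curled (assumed threshold)
    private boolean isDown = true;
    private int reps = 0;

    /** Angle at point B (e.g. the elbow) formed by segments BA and BC, in degrees. */
    static double angle(double ax, double ay, double bx, double by,
                        double cx, double cy) {
        double abx = ax - bx, aby = ay - by;
        double cbx = cx - bx, cby = cy - by;
        double dot = abx * cbx + aby * cby;
        double mag = Math.hypot(abx, aby) * Math.hypot(cbx, cby);
        return Math.toDegrees(Math.acos(dot / mag));
    }

    /** Feed the elbow angle for each camera frame; returns the running rep count. */
    int update(double elbowAngle) {
        if (isDown && elbowAngle < UP_ANGLE) {
            isDown = false;          // arm curled up
        } else if (!isDown && elbowAngle > DOWN_ANGLE) {
            isDown = true;           // arm extended again: one full rep
            reps++;
        }
        return reps;
    }
}
```

In a Flutter app you would compute the elbow angle from the shoulder, elbow, and wrist landmarks that the pose detector returns per frame, and call `update` on each result; hysteresis (two separate thresholds) keeps jitter near a single threshold from double-counting.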

I have implemented the dumbbell count using the tflite plugin. If you need the source code or support, comment below.

Related

Facial Recognition (not detection) using TensorFlow Lite (on-device) in Android

Using TensorFlow Lite, I am trying to find a way to do facial recognition (not detection) on a camera-captured picture.
I googled everything related to this, but all the results are about detecting faces.
I followed these links:
https://firebase.google.com/docs/ml-kit/android/detect-faces
https://medium.com/devnibbles/facial-recognition-with-android-1-4-5e043c264edc
Any help will be appreciated.
Thanks a lot in advance.
https://medium.com/@tomdeore/facenet-on-mobile-cb6aebe38505 may be of interest to you. You'll likely need to find the corresponding TensorFlow model and convert it yourself instead of using an out-of-the-box solution.
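For context on how the FaceNet approach differs from detection: the model maps each face crop to an embedding vector (128-D for classic FaceNet), and "recognition" is then just comparing distances between embeddings. The class below is an illustrative sketch of that comparison step only; the class name, the threshold value, and the use of Euclidean distance are assumptions for the example, not part of any specific library API.

```java
// Illustrative only: compares two face embedding vectors (e.g. 128-D FaceNet
// outputs) by Euclidean distance; same person if below a tuned threshold.
public class FaceMatcher {
    static double distance(float[] a, float[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /** The threshold is model-specific and must be tuned on your own data. */
    static boolean samePerson(float[] a, float[] b, double threshold) {
        return distance(a, b) < threshold;
    }
}
```

In practice you would run the converted TFLite model on each detected face to get the embeddings, store one reference embedding per enrolled person, and match a new face against the stored set.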

Recognize a person by their voice using TensorFlow and ML Kit

I want to know the feasibility of an Android app which I am going to build for my college project.
The app I am trying to build is for class attendance via voice recognition or face detection.
For this, I suppose I first need to collect a data set for all the students in the class and then train on it.
So, is it feasible to build such an app, and how should I approach it?
I am new to TensorFlow and ML and have searched the internet for this but found nothing, so please help me out. Your help is appreciated.
You will have to train and use a custom model for this.
ML Kit offers face detection but does not offer face recognition or voice recognition at the moment, so you will have to collect data and train a model yourself. You can look at the quickstart samples for iOS and Android on GitHub to learn about using mobile-optimized custom models in your app.

How to rapidly prototype an image recognition application using Machine learning & neural network?

Note that I'm very new to machine learning.
I was thinking about a real-world example of using machine learning and neural networks in an application, and I want to try it with a mobile application that can handle image recognition with the front camera after taking a picture of something (a cat, for example).
I really need advice on tools I can use to rapidly build a prototype of this application with a Python backend that I will call via REST.
Thanks in advance.
If you are new to machine learning algorithms, I suggest you use an API from Google or Microsoft and get familiar with the flow and how it works. Once you understand what the inputs and outputs are, you can try to replace the API with your own neural net, train it properly, and collect results.
Machine learning is not an easy subject, and if you start big, there is a good chance you'll get discouraged before you finish building anything. The API will give you a functional prototype very quickly and thus help you stay motivated to pursue it further.
But to answer your question more directly, TensorFlow by Google is probably the most sophisticated tool for machine learning in general right now.
There is an excellent course on deep learning with TensorFlow made by Google on Udacity.
You can follow PyImageSearch. It has a lot of material on image processing, like face recognition and license plate recognition systems, and it also uses neural networks.
Use an image recognition API, like Google Vision.
It is easy and fast to put into an application, and a lot more effective if you do not have experience and resources in ML.
I have done something similar for our company website. It is based on Caffe, though.
You can go through the source code here.
However, it is a segmentation demo, so you will need to modify it a little.

Using ARToolkit with android

A project has been assigned to me in which I need to use ARToolKit, but I am quite confused about how to use it and how to get started with it.
Is it the same as Metaio, Vuforia, and Total Immersion? Please help me get started with it. I would be thankful for some startup tutorials and sample examples of ARToolKit.
Any kind of help would be appreciated.
Here is the ARToolKit demo provided on Google Code, which might guide you: Check Out
Also check out the documentation here.
For Android:
You can give droidAR a try:
droidAR
For iOS:
You can give VRToolkit a try. This app uses ARToolKitPlus to detect markers in the video frames and then overlays 3D objects following the movements of the marker.
You can scan all 4096 BCH markers, as well as thin-border markers, after setting the corresponding property to YES.
VRToolKit github

3D waving flag in Android (using cloth simulation)

My goal is to create a waving flag in Android using OpenGL, but I don't have any clue where to start.
Cloth modeling seems to produce the neatest effect, but I couldn't find any implementation for Android on the web.
I hope someone knows some tutorials, resources, etc. that could help me solve this problem.
If you know easier or other ways to create a decent-looking waving flag, let me know. I'm open to anything.
Thanks in advance!
First result on Google:
http://code.google.com/p/waving-flag-android/
A good tutorial about cloth simulation is Jesper Mosegaard's: http://cg.alexandra.dk/2009/06/02/mosegaards-cloth-simulation-coding-tutorial/
The pioneering work on cloth simulation is from Darwin 3D: http://www.darwin3d.com/gdm1999.htm#gdm0599
An open-source project on Google Code provides a variety of algorithms: http://code.google.com/p/opencloth/
All of these are written in C/C++, which is the natural programming language for OpenGL. Since you are working on an Android project, you will have to rewrite the algorithms in Java. Once you figure out the mechanism, you should be fine. Good luck!
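Since the port to Java is the main hurdle, here is a bare-bones Java sketch of the core technique from Mosegaard's tutorial: Verlet integration of particle positions followed by a few rounds of distance-constraint relaxation between neighbors. It simulates only a single chain of particles (one row of the flag) in 2-D for brevity; a real flag is a 2-D grid with structural and shear constraints, and the pin, gravity direction, and iteration count here are illustrative choices.

```java
// Minimal 2-D Verlet cloth step: integrate, then relax distance constraints.
// Particle 0 is pinned (the flag's attachment to the pole); y grows downward.
public class ClothSim {
    final int n;                 // particles in the chain
    final double[] x, y, px, py; // current and previous positions
    final double rest;           // rest distance between neighbors

    ClothSim(int n, double rest) {
        this.n = n; this.rest = rest;
        x = new double[n]; y = new double[n];
        px = new double[n]; py = new double[n];
        for (int i = 0; i < n; i++) { x[i] = px[i] = i * rest; }
    }

    void step(double dt, double gravity) {
        // Verlet integration: newPos = 2*cur - prev + accel*dt^2
        for (int i = 1; i < n; i++) {            // skip the pinned particle
            double nx = 2 * x[i] - px[i];
            double ny = 2 * y[i] - py[i] + gravity * dt * dt;
            px[i] = x[i]; py[i] = y[i];
            x[i] = nx; y[i] = ny;
        }
        // Relax constraints a few times so each link returns to rest length
        for (int iter = 0; iter < 5; iter++) {
            for (int i = 0; i < n - 1; i++) {
                double dx = x[i + 1] - x[i], dy = y[i + 1] - y[i];
                double dist = Math.hypot(dx, dy);
                double diff = (dist - rest) / dist / 2; // half correction each side
                if (i > 0) { x[i] += dx * diff; y[i] += dy * diff; }
                x[i + 1] -= dx * diff; y[i + 1] -= dy * diff;
            }
        }
    }
}
```

The waving motion comes from adding a time-varying wind force alongside gravity in the integration step; the resulting particle grid is then rendered as a textured triangle mesh in OpenGL ES.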
