I am trying to develop an Android app that records a video, sends it to Google Colab for a classification task, and then receives the result back in the app. I am a complete beginner and totally lost, as I could not find any suitable materials online. I found some material that uses a TensorFlow Lite model via Firebase, but I am coding in PyTorch and my model is pretty heavy and would not run on mobile, so sending the video file to Colab is the only option. Could you please help with any kind of suggestions? Thanks.
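One common way to set this up is to run a small HTTP endpoint next to the model and have the app POST the recorded file to it (from Colab you would still need a tunnel such as ngrok so the phone can reach the notebook). The sketch below uses only the standard library; the endpoint path, field names, and the `classify` placeholder are illustrative assumptions, not a fixed API.

```python
# Minimal sketch of the server side: an HTTP endpoint that accepts a video
# upload and returns a classification result as JSON. In Colab you would
# expose this through a tunnel (e.g. ngrok) so the Android app can reach it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(video_bytes: bytes) -> dict:
    # Placeholder for the real PyTorch model; here we just report the size.
    return {"label": "unknown", "bytes_received": len(video_bytes)}

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        video = self.rfile.read(length)          # raw video bytes from the app
        result = json.dumps(classify(video)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):                # keep the sketch quiet
        pass

# To run standalone:
# HTTPServer(("0.0.0.0", 8000), UploadHandler).serve_forever()
```

On the Android side the recorded file would be sent with a plain HTTP POST (e.g. `HttpURLConnection` or OkHttp) and the JSON reply parsed for the label.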
Related
I have been developing an Android app that uses the speech recognition service, but the Android device has no Google app installed. For that reason I'm using the Vosk API for speech recognition, but for better accuracy I need to use a larger model, which takes a lot of space in the app's assets. So, how can I access the Vosk model without bundling it in the assets, for example by using it from an online server directly?
Edit:
I have seen Kaldi's WebSocket support in Vosk. Can this help me use Vosk from an online server (https://github.com/just-ai/aimybox-android-sdk/tree/master/kaldi-speechkit#online-mode)? They explain how to use the WebSocket and give an example, but I do not understand how to write the WebSocket client.
Any help regarding this is appreciated!
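The WebSocket flow for a Vosk/Kaldi server (such as vosk-server) follows a simple framing convention: the client first sends a JSON config message, then streams raw PCM audio chunks, then an EOF marker, reading a JSON result after each chunk. Here is a sketch of that framing; the URL, port and chunk size are assumptions for illustration, and the `recognize` coroutine (which needs the third-party `websockets` package) is an untested outline.

```python
# Sketch of a client for a Vosk/Kaldi WebSocket server (e.g. vosk-server):
# first a JSON config message, then raw PCM chunks, then an EOF marker.
import json

def config_message(sample_rate: int = 16000) -> str:
    """First message the client sends: tells the server the audio format."""
    return json.dumps({"config": {"sample_rate": sample_rate}})

def chunk_audio(pcm: bytes, chunk_size: int = 8000):
    """Split raw PCM audio into chunks suitable for streaming."""
    for i in range(0, len(pcm), chunk_size):
        yield pcm[i:i + chunk_size]

async def recognize(pcm: bytes, url: str = "ws://localhost:2700"):
    """Stream audio to the server and return the final result (untested sketch)."""
    import websockets                          # pip install websockets
    async with websockets.connect(url) as ws:
        await ws.send(config_message())
        for chunk in chunk_audio(pcm):
            await ws.send(chunk)
            await ws.recv()                    # partial result per chunk
        await ws.send(json.dumps({"eof": 1}))  # tell the server we're done
        return json.loads(await ws.recv())     # final result
```

On Android the same message sequence can be sent with any WebSocket client library (e.g. OkHttp's WebSocket support), feeding it 16 kHz mono PCM from the microphone.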
I need to build a chatbot which does not require any online connectivity.
I am using:
ChatterBot (Python) to build conversation dialogues.
Android's Google offline speech recognition to convert speech to text and vice versa.
I want to train the model on my PC and use the generated database.sqlite3 file on Android.
The complete flow of the process is as follows:
A pretrained model generates database.sqlite3, which is placed in the Android app.
Voice -> Text -> local Android server running a Python script that uses database.sqlite3 and generates a response (text) -> Text to Voice
Now I have the problem of running Python on Android with the full environment needed for the script. Kindly help me out with this.
I have searched and found that a local server can be set up on Android using NanoHTTPD/AndroidAsync. Now I want to use this server to run the Python script.
If you have any better alternative to any of the steps above, kindly suggest.
In my experience, trying to get Python running on Android isn't the best way to accomplish this. I'd recommend splitting your project into two parts:
1. A web application hosted somewhere
You can create a regular web application using a Python framework like Django or Flask. This application can provide a RESTful API that allows other applications to exchange information with your chat bot.
ChatterBot has built-in support for Django, and numerous examples of the two being used together are available. You can also take a look at the "How do I deploy my chat bot to the web?" section for a brief overview and some tips on how to get started.
2. The Android app
The app can access Android's native speech recognition technologies to interpret verbal information before it sends the recognized text to your chat bot API server.
Recently I've been struggling a lot with WebRTC. I was able to build a very simple WebRTC web application based on the WebRTC codelab, which consists of a simple signaling server (basically step 8 in the codelab tutorial).
My next target is to build a native Android application that does the same thing: make video calls with the web application using the same simple signaling server. I am very new to WebRTC and could not find any good tutorial or guide for building a simple native Android application.
I've searched for similar questions on Stack Overflow, but most of them are outdated and do not provide the answers I need.
I'm hoping that anyone in the Stack Overflow community who knows a good source or tutorial for building a simple, basic native WebRTC Android application can share it. Thank you so much.
I suggest you build the AppRTCMobile target in WebRTC (see https://webrtc.org/native-code/android for details on how to build, etc.), then deploy your own instance of AppRTC (https://github.com/webrtc/apprtc) if you want full control over the signaling. Otherwise you can just use the one publicly available at https://appr.tc.
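If you do host your own signaling, the core of it is just relaying SDP offers/answers and ICE candidates between the peers in a room; the transport (usually WebSockets) is incidental. A minimal, transport-agnostic routing sketch, with illustrative names only:

```python
# Sketch of signaling-server routing logic: every message from one peer is
# forwarded to all other peers in the same room. A real server would wrap
# this in a WebSocket layer and handle disconnects.
class SignalingRooms:
    def __init__(self):
        self.rooms = {}                      # room id -> list of peer ids

    def join(self, room: str, peer: str) -> None:
        self.rooms.setdefault(room, []).append(peer)

    def route(self, room: str, sender: str, message: dict) -> list:
        """Return (recipient, message) pairs for everyone else in the room."""
        if message.get("type") not in ("offer", "answer", "candidate"):
            raise ValueError("unknown signaling message type")
        return [(peer, message)
                for peer in self.rooms.get(room, []) if peer != sender]
```

The Android and web clients then only need to agree on the room id and the JSON shape of the three message types.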
I am going through WebRTC for Android. I need to develop a video and audio chat application. I have read many things about WebRTC, but I am confused about where to start and did not find a proper link. Many said to refer to the site below:
https://webrtc.org/reference/getting-started
But I could not find that page itself. Please help me build WebRTC for Android.
Note: I want open-source code. I don't want any licensed libraries.
Thank you for the help.
Check this out; you can start by studying this sample Android app.
I am trying to develop an Android application that will stream video from an Android mobile to the web (similar to Qik). I have looked at the RED5, MAMMOTH and RTMPD servers.
My question is: which web server should I use? Which is best supported on Android? Is there any other alternative for doing this?
If a tutorial or code is available, please point me to it.
Thanks
For testing purposes, I developed a small application in C# which serves as the server. It captures the streamed data and displays it in video format.
For the actual server implementation, I used GStreamer. An implementation of GStreamer is also available for Android.
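As a concrete example of the GStreamer approach, a receive-and-display pipeline on the server side might look like the following. This is a sketch: the UDP port, the H.264 codec, and the RTP payload type are assumptions and must match whatever the Android side actually sends.

```shell
# Receive an RTP/H.264 stream on UDP port 5000 and display it
# (port, caps and codec are illustrative assumptions):
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
    ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
```

Swapping `autovideosink` for a `filesink` or a re-streaming element turns the same pipeline into a recorder or relay.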