I'm doing a project that consists of controlling a drone (a Tello) with a mobile phone. I chose React Native as the hybrid app framework for this project, and I had to embed Node.js inside the application (Node.js for Mobile Apps, React Native) because the drone only communicates over UDP, and I also need a package to decode the video stream.
The drone exposes three UDP ports: one receives instructions, another sends the drone's status, and the last one sends the video.
The video I get from that stream is raw, so I need a package to decode or otherwise transform it.
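For reference, the UDP side in the embedded Node.js layer looks roughly like the sketch below. It only uses the built-in dgram module; the addresses, ports (commands to 192.168.10.1:8889, state on 8890, video on 11111) and command strings are the ones commonly documented for the Tello SDK, so treat them as assumptions and check them against your drone's docs.

    // Minimal sketch of the three Tello UDP sockets from Node.js (dgram is built in).
    // Ports/addresses follow the commonly documented Tello SDK values -- verify them.
    import * as dgram from "dgram";

    const TELLO_HOST = "192.168.10.1";
    const COMMAND_PORT = 8889;

    const commandSocket = dgram.createSocket("udp4");
    const stateSocket = dgram.createSocket("udp4");
    const videoSocket = dgram.createSocket("udp4");

    function send(cmd: string): void {
      commandSocket.send(cmd, COMMAND_PORT, TELLO_HOST);
    }

    // Replies from the drone come back to this socket.
    commandSocket.bind(COMMAND_PORT);
    commandSocket.on("message", (msg) => console.log("reply:", msg.toString()));

    // Periodic status strings (battery, attitude, ...).
    stateSocket.bind(8890);
    stateSocket.on("message", (msg) => console.log("state:", msg.toString()));

    // Raw H.264 arrives here once streaming is enabled.
    videoSocket.bind(11111);
    videoSocket.on("message", (chunk) => {
      // feed `chunk` into whatever decodes the H.264 stream (see the ffmpeg sketch further down)
    });

    send("command");   // switch the drone into SDK mode
    send("streamon");  // ask it to start sending video to port 11111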
I was testing things out because there isn't much documentation on this topic.
I downloaded ffmpeg, used it to decode the H.264 data, and finally I could see the video.
After this introduction I would like to ask you:
Is there any way I can use the same technique on the mobile without needing ffmpeg?
Is there any way to import ffmpeg into Android and communicate with Node.js?
Is there any way I can use the same technique on the mobile without needing ffmpeg?
Yes, you can use the native video decoders: MediaCodec on Android and VideoToolbox on iOS.
Is there any way to import ffmpeg into Android and communicate with Node.js?
Yes, Node.js has bindings for C, and I'm sure there are open-source bindings out there. You could also use something like child_process.
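As a rough illustration of the child_process route (a sketch, not a drop-in solution): spawn an ffmpeg binary that you've bundled with the app or otherwise made available on the device, pipe the raw H.264 from the drone's video port into its stdin, and read the converted output from stdout. The output options below (MPEG-TS with MPEG-1 video, so a JavaScript player on the React Native side could draw it) are just one possible choice.

    // Sketch: feed the Tello's raw H.264 into an ffmpeg child process and read
    // the transcoded stream back. Assumes an ffmpeg binary is reachable (on PATH
    // or via a full path to the bundled binary) and that the video arrives on
    // UDP port 11111 as in the earlier sketch.
    import { spawn } from "child_process";
    import * as dgram from "dgram";

    const ffmpeg = spawn("ffmpeg", [
      "-i", "pipe:0",            // raw H.264 elementary stream on stdin
      "-f", "mpegts",            // remux/transcode into something a JS player can draw
      "-codec:v", "mpeg1video",
      "-b:v", "800k",
      "-r", "30",
      "pipe:1",                  // transcoded stream on stdout
    ]);

    ffmpeg.stderr.on("data", (d) => console.error(d.toString()));

    const videoSocket = dgram.createSocket("udp4");
    videoSocket.bind(11111);
    videoSocket.on("message", (chunk) => ffmpeg.stdin.write(chunk));

    ffmpeg.stdout.on("data", (frame) => {
      // forward `frame` to the React Native side, e.g. over a local WebSocket,
      // and render it there with a JS decoder/player of your choice
    });

The main design choice is where the decoding happens: a child ffmpeg process keeps everything inside the Node layer, while MediaCodec/VideoToolbox moves the work to the platform's native decoder.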
Is there any other solution where I can use another Node instance that doesn't have to run on the phone?
Sure, tons.
Related
I have to build an Android application that streams video and audio to a desktop application through a server. Latency is important. I also have to make sure the Android stream can be controlled from the PC (the user should be able to switch the camera or turn off the microphone).
I thought about using the WebRTC protocol for the communication, but it seems I'm going to have to write the signalling server myself to support the requirement mentioned above.
Is there a better way to implement this whole thing? Also, I can't find any good docs or libraries for Android streaming (no Retrofit analogues, obviously).
P.S. I'm thinking about using JavaFX via TornadoFX for the desktop application.
You certainly don't need to create your own signaling server. I would suggest using something like the Kurento streaming server or a derivation of Kurento like OpenVidu. It's open source and free and has lots of great, active support via Google Groups. Depending on how much specific customization you need, one or the other might be better for you: OpenVidu allows for less customization, since most of the stuff under the hood is already done for you, whereas Kurento lets you modify and customize almost everything under the hood and on the front end, starting from examples that can be changed at the code level. I have used it extensively on projects in the past and think it meets most, if not all, of your requirements. Scaling can be a bit challenging, but it is still much easier than pure P2P WebRTC, since everything is relayed through a central server, and it is most certainly doable depending on your requirements and implementation. Additionally, you can record, process and transcode video server side.
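To give an idea of the server side, here is a very rough sketch using the kurento-client Node.js package. The ws://localhost:8888/kurento address and the handleOffer() helper are assumptions for illustration, and the event name may vary between Kurento versions; how the SDP offer reaches this code and how the answer and ICE candidates get back to the phone/desktop is your signaling layer (Socket.io, plain WebSockets, etc.).

    // Rough sketch of answering a WebRTC offer through Kurento Media Server.
    // Assumes `npm install kurento-client` and a KMS instance at the URL below.
    // kurento-client is require()d here and treated as untyped.
    const kurento = require("kurento-client");

    async function handleOffer(sdpOffer: string): Promise<string> {
      const client = await kurento("ws://localhost:8888/kurento");
      const pipeline = await client.create("MediaPipeline");
      const endpoint = await pipeline.create("WebRtcEndpoint");

      endpoint.on("OnIceCandidate", (event: any) => {
        // relay event.candidate to the remote peer over your signaling channel
      });

      const sdpAnswer = await endpoint.processOffer(sdpOffer);
      await endpoint.gatherCandidates();
      return sdpAnswer; // send this back to the peer that made the offer
    }

Camera switching and mute can then be plain control messages on the same signaling channel, handled by the Android client.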
I'm developing a WebRTC project; the goal is to have 8 random people on the same channel, sharing video and audio, while possibly being on different platforms (iOS, Android, PC, etc.).
So far, so good: I have finished developing the browser client and the server (using Socket.io and Node.js), and it is working fine.
The problem is that I used a WebRTC abstraction library for the browser client instead of the AppRTC libraries, and the library I used doesn't support native apps.
My question is: should I try to write a new mobile library based on the abstraction library I used (around 6k lines of code), try to find a way to connect peers running the browser abstraction library with peers using an Android/iOS abstraction library, or should I rewrite all the client-side code from the AppRTC samples?
The goal here is to have everything working as fast as possible and have the possibility to optimize later on.
A few notes on my project:
-> The interface will be really simple: all the user has to do is click a button; I will then check for video and send them to a queue on the server (through Socket.io).
-> The server will then find 7 other people to connect the peer to.
-> All peers receive information from the server (either a channel or the other peers' client information) and set up a video and audio conference; a minimal matchmaking sketch is below.
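To make the queue/matchmaking part concrete, here is a minimal Socket.io sketch (assuming socket.io v4; the port and the "join-queue"/"room-ready" event names are made up for illustration). It only does the grouping; the actual WebRTC offer/answer and ICE exchange would ride on the same sockets.

    // Minimal matchmaking sketch: queue sockets until 8 are waiting, then drop
    // them into a shared room and tell each one which peers to connect to.
    import { Server } from "socket.io";

    const ROOM_SIZE = 8;
    const io = new Server(3000);           // standalone server on port 3000
    const queue: string[] = [];            // socket ids waiting for a match

    io.on("connection", (socket) => {
      socket.on("join-queue", () => {
        queue.push(socket.id);
        if (queue.length >= ROOM_SIZE) {
          const members = queue.splice(0, ROOM_SIZE);
          const room = `room-${Date.now()}`;
          for (const id of members) {
            io.sockets.sockets.get(id)?.join(room);
            // each peer learns the room name and the ids it should call
            io.to(id).emit("room-ready", {
              room,
              peers: members.filter((m) => m !== id),
            });
          }
        }
      });

      socket.on("disconnect", () => {
        const i = queue.indexOf(socket.id);
        if (i !== -1) queue.splice(i, 1);
      });
    });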
I am trying to write a metronome application in Python, and I intend to publish the application for Android and iOS. I have found a few cross-platform frameworks like Kivy, but their audio support is lacking. More specifically, I need very precise audio timing and I can't rely on thread timing or events. I want to write audio data directly to the device's audio output, or create a MIDI file that can be played on the fly. The problem is, I cannot find any suitable framework for this task.
I know that many games have been written for Android in Python, and those games have excellent and precise sound timing. I need help finding either:
a way to create and play MIDI files on the fly in Android with Python,
a Python framework for Android with a suitable audio API to write sound directly to an audio device, or at least play audio with very accurate timing.
Thanks!
I'm looking for the same thing, and I too am looking at Kivy. The possible solution I can see for audio is hooking in a 3rd-party application as a "recipe" in Kivy.
There is aubio, which apparently can be compiled for iOS/Android (see the Stack Overflow question about this), but I believe you have to provide your own audio source for it, which could potentially be handled by the audiostream subproject in Kivy.
It appears that Kivy/audiostream imports the core libpd project, so you can use the libpd Python bindings. I think this is the path of least resistance, but I had issues when trying to run the examples.
Both of these approaches could work, I think, but both need some effort before you can start using them.
I want to develop simple two-way video call functionality and integrate it into my app.
I found two solutions:
Using Android SIP - I will need to handle sending and receiving the streams
Using XMPP (Jingle) - I will need to implement the whole protocol
The problem is that I am pretty new to SIP and don't really understand what the SIP stack on Android already handles and how much development will be needed. I know, on the other hand, that XMPP on Android is not easy either, especially when working with video streams.
I would love to hear people's thoughts on which solution would be best to implement, knowing that I want:
1. a simple working two-way video chat at first
2. to extend the functionality to a system of users (I was thinking that using XMPP with Openfire would cover this easily, but I'm kind of scared about the amount of work needed to integrate Jingle)
If you have any easier solution for integrating audio/video functionality on Android, I would be glad to hear it.
Both solutions are the same in a lot of ways.
SIP and XMPP both take care of the signaling only. The media part (video streams, UDP, etc.) is handled "elsewhere" and with the same set of protocols: RTP and RTCP for transport and control, H.264/VP8 for the video codec, and some other codec for voice.
I'd look into WebRTC to see if it has any available code on Android - that would take care of the media parts nicely.
After carrying out a lot of research, I have come to the conclusion that Java and the Java Media Framework (JMF) are not suitable for developing a streaming server that supports the RTSP protocol on the server side for streaming video and audio. I have read very good things about the Live555 media server, and about using the testOnDemandRTSPServer source code as a basis for the design. My only worry is that this is written in C++ and I am predominantly a Java programmer. This server is a large portion of my final-year project at university, so my degree kind of hangs on its successful implementation, and I am running out of time. If anyone has any experience with implementing an RTSP server that can stream to an Android handset, or believes they can point me in the right direction to learn how to do it, please let me know. Thanks in advance.
My project also has an RTSP server module to be run on an Android phone. I think we can build the RTSP library as a .so file and interface with Java by using JNI.
This also works for Android!
http://net7mma.codeplex.com/
You can see the article on CodeProject at http://www.codeproject.com/Articles/507218/Managed-Media-Aggregation-using-Rtsp-and-Rtp
The Live555 RTSP server is a fully fledged RTSP server that implements most payloads (H.263, H.264, MPEG-2, PCM, AMR, AAC, etc.). You can check on the website whether it already supports the media types you want to stream. It also features an RTSP client. With respect to streaming to an Android handset: that is the whole point of RTSP; it doesn't matter what type of client you're streaming to. As for the server-side development, there isn't really much to do unless you need to implement an unsupported media type. The code can be quite complex if you're not well versed in C++, but it sounds like your goal is more about setting up streaming to Android than about implementing the RTSP server and client yourself. So check whether Live555 supports your media types, and if it does, I wouldn't bother writing one in Java; that can be quite involved. If you do choose to go that route, your best friend is of course the RFC (http://tools.ietf.org/html/rfc2326).
As for the client, I'm not sure whether Android already has an RTSP library/client. The other thing you have to consider is which media types are supported by Android.