I am developing a recording app that includes a pause/resume option.
I tried both MediaRecorder and AudioRecord.
With AudioRecord, the recorded audio is very large; for example, one minute of recording consumes 40 to 50 MB, and it is really painful to combine the clips by converting them to .raw files and sending them to a PHP server.
So I tried MediaRecorder. It produces much smaller files, but I am not able to combine them the way I handled it with AudioRecord.
Next I tried the Android NDK, where even the setup process was really painful.
Now my question is: which is the best way to combine recorded audio files?
Using the Android NDK?
Or reading the byte data from the audio and combining it? If I use this, there is a problem with the headers of the recording formats (AMR, WAV, and so on).
Also, if I try this, the javax.sound package is not available on Android, so I tried plugins instead, but had no luck.
Please suggest the best way to do this. I have also tried all of the following links:
Audio Link 1
Audio Link 2
Audio Link 3
Audio Link 4
Please provide a good tutorial, samples, or links. Thanks.
For something like this, your best bet would be to develop native C++ code using the NDK.
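That said, the header problem mentioned in the question is manageable without the NDK when the clips come from AudioRecord, because AudioRecord produces headerless raw PCM: combining clips is mostly a matter of concatenating the raw bytes and writing a single WAV header with the correct sizes. A minimal sketch in plain Java (no Android APIs; the `WavCombiner` class and its helpers are illustrative names, and it assumes every clip shares the same sample rate, channel count, and 16-bit samples):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class WavCombiner {

    // Concatenate raw 16-bit PCM clips and prepend one WAV header.
    public static byte[] combine(int sampleRate, int channels, byte[]... pcmClips) throws IOException {
        int dataLen = 0;
        for (byte[] clip : pcmClips) dataLen += clip.length;

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeWavHeader(out, sampleRate, channels, dataLen);
        for (byte[] clip : pcmClips) out.write(clip);
        return out.toByteArray();
    }

    // Standard 44-byte RIFF/WAVE header for uncompressed PCM.
    private static void writeWavHeader(OutputStream out, int sampleRate, int channels, int dataLen) throws IOException {
        int byteRate = sampleRate * channels * 2;       // 16-bit = 2 bytes per sample
        out.write(new byte[]{'R', 'I', 'F', 'F'});
        writeIntLE(out, 36 + dataLen);                  // total chunk size after this field
        out.write(new byte[]{'W', 'A', 'V', 'E', 'f', 'm', 't', ' '});
        writeIntLE(out, 16);                            // fmt chunk size
        writeShortLE(out, 1);                           // audio format 1 = PCM
        writeShortLE(out, channels);
        writeIntLE(out, sampleRate);
        writeIntLE(out, byteRate);
        writeShortLE(out, channels * 2);                // block align
        writeShortLE(out, 16);                          // bits per sample
        out.write(new byte[]{'d', 'a', 't', 'a'});
        writeIntLE(out, dataLen);
    }

    private static void writeIntLE(OutputStream out, int v) throws IOException {
        out.write(v); out.write(v >> 8); out.write(v >> 16); out.write(v >> 24);
    }

    private static void writeShortLE(OutputStream out, int v) throws IOException {
        out.write(v); out.write(v >> 8);
    }
}
```

This only works for PCM/WAV; compressed formats such as AMR or AAC cannot be joined by byte concatenation, which is where MediaRecorder output becomes difficult.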
Let me rephrase my question; I wrote it in a hurry.
Current situation:
I have set up a digital video recorder to record broadcasts delivered via DVB-C. It runs on a Raspberry Pi 3B using TVHeadend and jetty/cling to provide UPnP and other ways of accessing the media files. For watching recordings, I wrote an Android player app using IJKPlayer, which runs on smartphones, Fire TV, and Android TV.
One hassle when playing media files that are currently being recorded is that IJKPlayer does not support timeshifting. That means when I start playing a file that is still being recorded, I can only watch up to the length known to the player at that moment; anything recorded afterwards cannot be played, and I have to exit the player activity and start it again. I have resolved that issue by "simulating" a completed recording using a custom servlet implementation. Since the complete length of the recording is already known, I can use ffmpeg to accomplish this.
Future situation:
I plan to move from IJKPlayer to ExoPlayer, because it supports hardware playback and is much faster when playing H.264 media. I can of course use the same solution as above, but as far as I have found out so far, ExoPlayer can support media files that are currently being recorded by using the Timeline class. However, I can find neither useful documentation nor a good example. Hence, I would appreciate any help with the Timeline object.
Regards
Harry
Looks like my approach won't work; at least, I didn't find a solution. The problem is that the server returns the stream size as it was when the player started, and I didn't find a way to update the media duration for "regular" files.
However, I can solve the problem by changing the server side. Instead of serving a regular file, I convert the file to an m3u8 (HLS) playlist in real time using ffmpeg. I then hand the m3u8 URI to the player, and it updates the duration of the stream while playing, without the need for any additional code on the client side.
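The exact ffmpeg invocation depends on the setup; a sketch of the kind of command meant here, with illustrative paths and segment length, could look like:

```sh
# Remux a growing recording into an HLS playlist without re-encoding.
# -hls_list_size 0 keeps every segment in the playlist so the
# duration grows as the recording does.
ffmpeg -i recording.ts -c copy -f hls \
    -hls_time 10 -hls_list_size 0 \
    /var/www/stream/recording.m3u8
```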
My android app plays videos in Exoplayer 2, and now I'd like to play a video backwards.
I searched around a lot and found only the idea of converting the video to a GIF, and this approach from WeiChungChang.
Is there a more straightforward solution? Another player or a library that implements this for me is probably too much to ask, but converting to a reversed GIF gave me a lot of memory problems, and I don't know what to do with the WeiChungChang idea. Playing only MP4 in reverse would be enough, though.
Videos are frequently encoded such that the encoding of a given frame depends on one or more frames before it, and sometimes also on one or more frames after it.
In other words, to reconstruct a frame correctly, the decoder may need to refer to one or more previous and one or more subsequent frames.
This allows a video encoder to reduce file or transmission size by fully encoding only the reference frames, sometimes called I-frames, and storing only the delta to the reference frames for the frames before and/or after them.
Playing a video backwards is not a common player function, and the player would typically have to decode the video as usual (i.e. forwards) to get the frames and then play them in the reverse order.
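As a toy illustration of why the decode must run forwards (this is not a real codec, just delta coding over integers with illustrative names): once every frame after the first is stored only as a difference to its predecessor, the last frame can only be recovered by a forward pass, after which the frames can be emitted in reverse.

```java
public class DeltaToy {

    // Toy "encoder": keep the first value fully, then store only deltas.
    public static int[] encode(int[] frames) {
        int[] enc = frames.clone();
        for (int i = frames.length - 1; i > 0; i--) enc[i] = frames[i] - frames[i - 1];
        return enc;
    }

    // Decoding must run forwards even though we want the frames backwards.
    public static int[] decodeReversed(int[] enc) {
        int[] dec = enc.clone();
        for (int i = 1; i < dec.length; i++) dec[i] += dec[i - 1]; // forward pass
        int[] rev = new int[dec.length];
        for (int i = 0; i < dec.length; i++) rev[i] = dec[dec.length - 1 - i];
        return rev;
    }
}
```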
You could extend ExoPlayer to do this yourself, but it may be easier to manipulate the video on the server side first, if possible - there are tools that will reverse a video so your players can then play it as normal, for example https://www.videoreverser.com, https://www.kapwing.com/tools/reverse-video etc.
If you need to reverse it on the device for your use case, then you could use ffmpeg on the device to achieve this - see an example ffmpeg command to do this here:
https://video.stackexchange.com/a/17739
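The command in that answer is along these lines (file names are illustrative); note that the reverse filters buffer the whole stream in memory, so this is only practical for short clips:

```sh
# Reverse both the video and audio streams of a short clip.
ffmpeg -i input.mp4 -vf reverse -af areverse reversed.mp4
```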
If you are using ffmpeg, it is generally easiest to use it via a wrapper on Android such as this one, which will also allow you to test the command before you add it to your app:
https://github.com/WritingMinds/ffmpeg-android-java
Note that video manipulation is time- and processor-hungry, so this may be slow and consume more battery than you would like on a mobile device if the video is long.
I am looking for a way to tag the start and end of a song(s) in a video file.
I am targeting the video formats below for now.
1) 3GPP (.3gp)
2) MPEG-4 (.mp4)
I referred to the article http://bigflake.com/mediacodec/ and "Android Extract Decode Encode Mux Audio", and was able to get an idea of how to extract the demuxed, encoded audio data; however, I am not sure how to identify the start of music (as opposed to normal audio such as speech) in this audio stream.
Target OS is Marshmallow.
Please suggest whether this is possible. The answer I am looking for may require audio signal processing, unless there is an easier way to do it.
A theoretical way would be to use Transloadit, a service that can turn a song into a waveform image. Then create a program (I think Python would do decently well) that can identify the start of the waveform and load it into a library. In that library, you store the songs together with their waveform charts; when a song plays, you can again use Python to detect the currently playing audio, find the matching chart, and get the corresponding song name.
This will take a lot of time.
Transloadit waveform generator
Using PIL (the Python Imaging Library) to detect an image on screen, from Stack Overflow - the first answer there will help you.
If you need help with the libraries, then just ask me.
I'm sorry, but I don't have any real Android knowledge, so you might need to search for tutorials or try to use Python.
I want to write an app on Android to record snoring sounds of a sleeper and analyze it afterwards (i.e., not in real-time) for signs of a medical condition called obstructive sleep apnea.
The Android devices I've experimented with have voice recorders that produce a file format called .3ga. I want to programmatically read in the audio file and look at the amplitude of each individual time sample, then analyze that for patterns. Would this be easier if I converted the recording to a different format, e.g. MP3, and if so, how can I do that programmatically?
I did a Google search on this and most of the hits seemed to be related to audio recording or playback which are unrelated to what I'm trying to do. I haven't coded anything yet because I don't know how to get started.
You are looking to do sample-based analysis on a raw audio signal, but the formats you mention are compressed. You will need to either deal with raw samples directly, or decompress the audio and then analyze.
Since you said you can do this work after-the-fact, why not upload to a server and analyze there?
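If you do the analysis yourself, the sample-level part is simple once the audio has been decoded to raw 16-bit PCM (for example to a WAV file with ffmpeg, or on-device via MediaCodec/MediaExtractor). A sketch in plain Java, with illustrative class and method names:

```java
public class AmplitudeAnalyzer {

    // Convert little-endian 16-bit PCM bytes to samples in [-1, 1].
    public static double[] toSamples(byte[] pcm) {
        double[] s = new double[pcm.length / 2];
        for (int i = 0; i < s.length; i++) {
            int lo = pcm[2 * i] & 0xFF;
            int hi = pcm[2 * i + 1];           // high byte keeps its sign
            s[i] = ((hi << 8) | lo) / 32768.0;
        }
        return s;
    }

    // RMS amplitude per fixed-size window; peaks in this envelope are
    // candidate loud events (e.g. snores) to inspect more closely.
    public static double[] rmsEnvelope(double[] samples, int window) {
        double[] env = new double[samples.length / window];
        for (int w = 0; w < env.length; w++) {
            double sum = 0;
            for (int i = 0; i < window; i++) {
                double v = samples[w * window + i];
                sum += v * v;
            }
            env[w] = Math.sqrt(sum / window);
        }
        return env;
    }
}
```

With a window of, say, 100 ms worth of samples, the envelope is small enough to scan for the periodic loud/quiet pattern of snoring, or for the long silences that are of interest for apnea.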
I am working on an application in which I want to analyze various sounds. I know how to record audio on Android; now what I want to do is read a sound and, based on its frequency, display something on the screen. So how can I determine the frequency in Hz or kHz?
You will need to perform a discrete Fourier transform on your recorded audio samples. You can write the code yourself or use a library. Unfortunately, I have no idea which FFT libraries exist for Java, but I am sure you can google that; I found two in two minutes:
http://www.ee.ucl.ac.uk/~mflanaga/java/FourierTransform.html
https://sites.google.com/site/piotrwendykier/software/jtransforms
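To illustrate what the transform gives you, here is a minimal, naive DFT sketch in plain Java (illustrative names; a real app should use one of the FFT libraries above, which compute the same thing in O(n log n) rather than O(n²)):

```java
public class FrequencyDetector {

    // Magnitude of DFT bin k for real-valued samples (naive, O(n) per bin).
    static double magnitude(double[] x, int k) {
        double re = 0, im = 0;
        for (int n = 0; n < x.length; n++) {
            double angle = 2 * Math.PI * k * n / x.length;
            re += x[n] * Math.cos(angle);
            im -= x[n] * Math.sin(angle);
        }
        return Math.hypot(re, im);
    }

    // Dominant frequency in Hz: the strongest bin below the Nyquist limit.
    public static double dominantFrequencyHz(double[] x, int sampleRate) {
        int best = 1;
        double bestMag = magnitude(x, 1);
        for (int k = 2; k < x.length / 2; k++) {
            double m = magnitude(x, k);
            if (m > bestMag) { best = k; bestMag = m; }
        }
        return (double) best * sampleRate / x.length;
    }
}
```

Bin k corresponds to the frequency k * sampleRate / N, so the frequency resolution improves with longer analysis windows; real signals contain many frequencies at once, and picking the single strongest bin is only meaningful for roughly tonal sounds.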