I would like to extract a selected interval of an audio track (e.g. 10s A-B interval) and then export it in a consistent format (regardless of the source file format).
I started by trying to load a local file as a stream; the idea was then to save the audio buffer and use MediaRecorder to encode it. But after a lot of struggling and virtually no progress, I hope someone can at least point me in the right direction.
Note: workarounds where the user has to play the segment while the app records it via the microphone will not be accepted.
Related
I am currently building a streaming video player app, and for that I want to use DASH streaming. I have a normal URI for a video in my Firebase Storage, but for DASH streaming I think I need a file that ends with .mpd.
ExoPlayer player = new ExoPlayer.Builder(context).build();
player.setMediaItem(MediaItem.fromUri(dashUri));
player.prepare();
What do I have to do to convert a normal URI into one that ends with .mpd? How can I do that?
You actually have to convert the video file to a fragmented format, and you will typically want to make it available in multiple bit rates, which means transcoding it as well.
The reason for this is that DASH is an ABR (adaptive bit rate) protocol: it breaks multiple renditions of a video into chunks of equal duration, and the player can then request chunk by chunk, choosing the best bit-rate version of each chunk depending on the current network conditions and the device type.
See here for more info: https://stackoverflow.com/a/42365034/334402
Open source tools exist to create DASH files from mp4 - see some examples here (links correct at the time of writing), with sample commands after the list:
https://github.com/gpac/gpac/wiki/DASH-Support-in-MP4Box
https://www.ffmpeg.org/ffmpeg-formats.html#dash-2
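For example, here are minimal single-rendition packaging commands, one per tool (file names are placeholders, and options vary between versions, so check each tool's documentation for your build):

MP4Box -dash 4000 -rap input.mp4 -out manifest.mpd

ffmpeg -i input.mp4 -c copy -f dash manifest.mpd

Both produce an .mpd manifest plus segment files; host those and point the ExoPlayer MediaItem at the manifest URL.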
I've been exploring the documentation and examples at http://bigflake.com/mediacodec/ by Fadden, and applied the patch http://bigflake.com/mediacodec/0001-Record-game-into-.mp4.patch to the breakout game. Unfortunately, after compiling the code, I realized it doesn't work: it produces video files that aren't streamable.
I see the following error:
"The mp4 file will not be streamable."
According to Fadden, this should be fixed by checking the mBufferInfo.flags (https://stackoverflow.com/questions/23934087/non-streamable-video-file-created-with-mediamuxer), which is already done in his code, so I'm at a complete loss. Did anyone else get the video recording patch to work?
The warning you're seeing is just a warning, nothing more. MP4 files aren't streamable anyway in most cases, in the sense of being able to pass the written MP4 over a pipe and have the other end play it back (unless you resort to a lot of extra trickery, or use fragmented MP4, which the Android MP4 muxer doesn't normally write). What streamable means here is that once you have the final MP4 file, you can start playing it back without having to seek to the end of the file (which playback over HTTP can do, e.g. with HTTP byte range requests).
To write a streamable MP4, the muxer tries to guess how large your file will be, and reserves a correspondingly large area at the start of the file to write the file index to. If the file turns out to be larger, so that the index doesn't fit into the reserved area, the index needs to be written at the end of the file instead. See lines 506-519 in https://android.googlesource.com/platform/frameworks/av/+/lollipop-release/media/libstagefright/MPEG4Writer.cpp for more info about this guess. Basically the guess boils down to: "The default MAX_MOOV_BOX_SIZE value is based on about 3 minute video recording with a bit rate about 3 Mbps, because statistics also show that most of the video captured are going to be less than 3 minutes."
If you want to turn such a non-streamable MP4 file into a streamable one, you can use the qt-faststart tool from libav/ffmpeg, which just reorders the blocks in the file.
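The tool lives in ffmpeg's tools directory and takes an input and an output file (names here are placeholders):

qt-faststart input.mp4 output.mp4

ffmpeg itself can do the same remux:

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4

Either way the moov index ends up at the front of the file without re-encoding.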
You can check Intel INDE Media for Mobile; it supports game capturing and streaming to the network:
https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
Simplest capturing:
https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials-video-capturing-for-opengl-applications
YouTube streaming:
https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials-video-streaming-from-device-to-youtube
Is it possible to record the internal sound generated by the app?
My app allows you to create and play back musical sequences.
soundPool.play(soundIds[i], 1f, 1f, 1, 0, Constants.TIME_RATE);
I'd like to be able to record the sequence and export to mp3.
I've looked into Audio Capture, but setAudioSource(int audio_source) only seems to accept MIC recording.
Thanks
No, there's no API for getting the audio output, even for your own app (actually that's not entirely true, because you can get it through the Visualizer API, but it would be of such low quality that I doubt it would be of any use for you).
If you want that kind of functionality you'll have to implement it yourself. That is: as you start playback of the sounds, also mix them and write the result to a file. If the sounds are compressed, you'll also have to take care of decoding them yourself.
Note that there's no MP3 encoder included with Android, so you'd have to supply your own MP3 encoder anyway if that's the format you want to export in.
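As a minimal sketch of that mix-and-write idea (my own illustration, not an Android API; it assumes you already have each sound decoded to 16-bit mono PCM in a short[]):

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmMixer {

    // Mix two 16-bit PCM buffers sample by sample, clipping to the short range.
    public static short[] mix(short[] a, short[] b) {
        short[] out = new short[Math.max(a.length, b.length)];
        for (int i = 0; i < out.length; i++) {
            int sum = (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0);
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }

    // Write mono 16-bit PCM as a canonical WAV file (44-byte header).
    public static void writeWav(String path, short[] pcm, int sampleRate) throws IOException {
        int dataSize = pcm.length * 2;
        ByteBuffer buf = ByteBuffer.allocate(44 + dataSize).order(ByteOrder.LITTLE_ENDIAN);
        buf.put("RIFF".getBytes()).putInt(36 + dataSize).put("WAVE".getBytes());
        buf.put("fmt ".getBytes()).putInt(16)
           .putShort((short) 1)                  // PCM format
           .putShort((short) 1)                  // mono
           .putInt(sampleRate)
           .putInt(sampleRate * 2)               // byte rate = rate * channels * 2
           .putShort((short) 2)                  // block align
           .putShort((short) 16);                // bits per sample
        buf.put("data".getBytes()).putInt(dataSize);
        for (short s : pcm) buf.putShort(s);
        try (FileOutputStream fos = new FileOutputStream(path)) {
            fos.write(buf.array());
        }
    }
}

You can then run the resulting WAV through an external MP3 encoder such as LAME (e.g. via the NDK) if MP3 is the required output format.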
As Michael said, you need to implement your own encoder and decoder for that. The Visualizer provides very low-quality data because it is intended for driving custom views and effects synchronized with an equalizer.
At the link below you will find a simple decoder and encoder for MP3 files, which read data from an MP3 file and write it into a new MP3 file. They added support for some other extensions too.
http://code.google.com/p/ringdroid/source/browse/#svn%2Fbranches%2Fgingerbread%2Fsrc%2Fcom%2Fringdroid
According to http://xzpeter.org/?p=254 it's possible to capture internal sound playback if you modify the Android sources, particularly the write function of the AudioFlinger::MixerThread class. (Note that the article is a little bit old: on the latest Android versions AudioFlinger was reorganized, and the write code can now be found in the threadLoop_write() function.)
Quoting original solution author:
AudioFlinger is implemented under dir frameworks/base/services/audioflinger/. What we are going to do is to find the mixer output. In the file AudioFlinger.cpp, we can see AudioFlinger::MixerThread::threadLoop(), which is the working thread of the mixer, and this MixerThread is inherited from AudioFlinger::BaseThread. Then, just search the keyword mOutput->write with your best editor (vim, emacs, gedit, whatever), and we will find something like this under the threadLoop() function:
mLastWriteTime = systemTime();
mInWrite = true;
mBytesWritten += mixBufferSize;
int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
if (bytesWritten < 0) mBytesWritten -= mixBufferSize;
mNumWrites++;
mInWrite = false;
That is the very point where the mixer output buffer is transferred to the hardware-related code, I think. The audio clip is in mMixBuffer, with size mixBufferSize. The buffer contains raw PCM audio data: 44100 Hz sample rate, 2 channels, 16-bit little endian.
If you write this buffer out to a file, like /data/wav.raw, you can just use adb pull to retrieve the data file to your host machine and play it with aplay:
aplay -t raw -c 2 -f S16_LE -r 44100 wav.raw
Anyway, in order to convert it to MP3 you will have to use an external encoder, as Michael stated.
I'm developing an app where the user can record a sound, maybe a clap or some other sound. The Android system should save it until the user deletes it. The user can then attach commands to that recorded sound, for example having the device show an intent. Now my question: while the microphone is on, is it possible for the device to start the intent when it hears the recorded sound, and if so, how can I do that? I think I might do it with getMaxAmplitude(), but I need a method to decide the length of that amplitude.
https://code.google.com/p/musicg/ may help you.
You can find the demo app on the site.
The features described on the site are as follows (a detection sketch follows the list):
Clap Api - Detect whether the input audio is a clap
Whistle Api - Detect whether the input audio is a whistle
Read PCM WAVE Headers
Read audio data
Trim the audio data
Save the edited audio file
Read amplitude-time domain data
Read frequency-time domain data
Render audio wave form image (Requires Java 2D & Java Image I/O, Android non-compatible)
Render audio spectrogram image (Requires Java 2D & Java Image I/O, Android non-compatible)
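A rough sketch of wiring the clap detection into your recording loop (class and method names follow the musicg demo code, so verify them against the library version you actually download; the WaveHeader values must match your AudioRecord configuration):

import com.musicg.api.ClapApi;
import com.musicg.wave.WaveHeader;

public class ClapDetector {
    private final ClapApi clapApi;

    public ClapDetector() {
        // Describe the PCM format your AudioRecord produces.
        WaveHeader header = new WaveHeader();
        header.setChannels(1);
        header.setBitsPerSample(16);
        header.setSampleRate(44100);
        clapApi = new ClapApi(header);
    }

    // Call this with the raw bytes of each AudioRecord.read() buffer;
    // start your Intent when it returns true.
    public boolean isClap(byte[] audioBytes) {
        return clapApi.isClap(audioBytes);
    }
}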
I want to write an app on Android to record snoring sounds of a sleeper and analyze it afterwards (i.e., not in real-time) for signs of a medical condition called obstructive sleep apnea.
The Android devices I've experimented with have voice recorders that produce a file format called .3ga. I want to programmatically read in the audio file and look at the amplitude for each individual time-sample. Then I can analyze that for patterns. Would this be easier if I converted this to a different format, e.g., MP3, and if so how can I do that programmatically?
I did a Google search on this and most of the hits seemed to be related to audio recording or playback which are unrelated to what I'm trying to do. I haven't coded anything yet because I don't know how to get started.
You are looking to do sample-based analysis on a raw audio signal, but the formats you mention are compressed. You will need to either deal with raw samples directly, or decompress the audio first and then analyze it.
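As a hedged sketch of the decompress route using Android's standard MediaExtractor/MediaCodec APIs (getInputBuffer/getOutputBuffer need API 21+; the 10 ms timeouts and buffering everything in memory are simplifications of my own):

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
import java.util.ArrayList;
import java.util.List;

public class PcmDecoder {

    // Decode the first audio track of a file (.3ga/AMR, AAC, MP3, ...) into
    // 16-bit PCM chunks, one short per sample, ready for amplitude analysis.
    public static List<short[]> decodeToPcm(String path) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);

        // Select the first audio track.
        MediaFormat format = null;
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat f = extractor.getTrackFormat(i);
            if (f.getString(MediaFormat.KEY_MIME).startsWith("audio/")) {
                extractor.selectTrack(i);
                format = f;
                break;
            }
        }

        MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        codec.configure(format, null, null, 0);
        codec.start();

        List<short[]> chunks = new ArrayList<>();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        boolean outputDone = false;

        while (!outputDone) {
            if (!inputDone) {
                int in = codec.dequeueInputBuffer(10000);
                if (in >= 0) {
                    ByteBuffer inBuf = codec.getInputBuffer(in);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size < 0) {
                        codec.queueInputBuffer(in, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        codec.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int out = codec.dequeueOutputBuffer(info, 10000);
            if (out >= 0) {
                ByteBuffer outBuf = codec.getOutputBuffer(out);
                ShortBuffer samples = outBuf.order(ByteOrder.nativeOrder()).asShortBuffer();
                short[] chunk = new short[samples.remaining()];
                samples.get(chunk);
                chunks.add(chunk);
                codec.releaseOutputBuffer(out, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    outputDone = true;
                }
            }
        }
        codec.stop();
        codec.release();
        extractor.release();
        return chunks;
    }
}

From the short[] chunks you can compute per-sample amplitudes (or RMS over windows) directly; converting to MP3 first wouldn't help, since MP3 is also compressed.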
Since you said you can do this work after-the-fact, why not upload to a server and analyze there?