I want to create an app that plays music backwards, using the Android MediaExtractor to decode the audio and join the reversed pieces.
Since I cannot get MediaExtractor to step backwards, I tried cutting a chunk of a certain byte size from the end of the track, reversing it, playing it, then cutting the next chunk from the end, and so on.
To do this I need MediaExtractor.seekTo to jump back, but the position to seek to is ambiguous. I tried seeking to a time in microseconds calculated from the sample rate and the chunk's byte size, but when using the MediaExtractor.SEEK_TO_CLOSEST_SYNC flag the join point is slightly shifted and noise occurs at the seam.
So I think this problem could be solved if I knew the byte position of each sync sample, or if I could seekTo by byte offset.
Any help would be appreciated.
Since you want it in reverse, couldn't you use SEEK_TO_PREVIOUS_SYNC instead of SEEK_TO_CLOSEST_SYNC for a better match?
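For what it's worth, here is a minimal sketch of that idea in Java (assuming the extractor already has its data source set and the audio track selected): seek with SEEK_TO_PREVIOUS_SYNC, then read back where the extractor actually landed so the decoder can trim the overshoot instead of guessing.

import android.media.MediaExtractor;

class ReverseSeeker {
    // SEEK_TO_PREVIOUS_SYNC lands on the sync sample at or before the target;
    // getSampleTime() then reports where we actually landed.
    static long seekToChunkStart(MediaExtractor extractor, long targetUs) {
        extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
        long actualUs = extractor.getSampleTime(); // -1 if nothing is available
        // Decode forward from actualUs, but discard decoded samples that fall
        // before targetUs, so consecutive reversed chunks join without a shift.
        return actualUs;
    }
}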
My Android app plays videos in ExoPlayer 2, and now I'd like to play a video backwards.
I searched around a lot and found only the idea of converting it to a GIF, and this approach from WeiChungChang.
Is there any more straightforward solution? Another player or a library that implements this for me is probably too much to ask, but converting to a reversed GIF gave me a lot of memory problems, and I don't know what to do with the WeiChungChang idea. Playing only MP4 in reverse would be enough, though.
Videos are frequently encoded such that the encoding of a given frame depends on one or more frames before it, and sometimes on one or more frames after it as well.
In other words, to reconstruct a frame correctly you may need to refer to one or more previous frames and one or more subsequent frames.
This allows a video encoder to reduce the file or transmission size by fully encoding only the reference frames, sometimes called I-frames, and storing just the deltas against those reference frames for the frames before and/or after them.
Playing a video backwards is not a common player function; a player typically has to decode the video as usual (i.e. forwards) to get the frames and then present them in reverse order.
You could extend ExoPlayer to do this yourself, but it may be easier to manipulate the video on the server side first if that is possible. There are tools that will reverse a video so your players can then play it as normal, for example https://www.videoreverser.com or https://www.kapwing.com/tools/reverse-video.
If you need to reverse it on the device for your use case, you could use ffmpeg there to achieve this - see an example ffmpeg command here:
https://video.stackexchange.com/a/17739
If you are using ffmpeg, it is generally easiest to use it via an Android wrapper such as this one, which will also allow you to test the command before you add it to your app:
https://github.com/WritingMinds/ffmpeg-android-java
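As a rough sketch of how that wrapper is typically invoked (the class names below are from the library's README as I recall it, so treat them as assumptions; the reverse/areverse filters themselves are standard ffmpeg):

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

// Reverse both the video and audio streams of a local file. The reverse
// filter buffers the whole stream in memory, so this only suits short clips.
String[] cmd = {"-i", inputPath, "-vf", "reverse", "-af", "areverse", outputPath};
FFmpeg ffmpeg = FFmpeg.getInstance(context);
try {
    ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
        @Override public void onSuccess(String message) { /* play outputPath */ }
        @Override public void onFailure(String message) { /* log and fall back */ }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
    // only one ffmpeg command can run at a time with this wrapper
}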
Note that video manipulation is time- and processor-hungry, so this may be slow and consume more battery than you'd like on a mobile device if the video is long.
Using Android MediaMuxer, what would be a decent way to add my own PCM track as the audio track in the final movie?
In the movie, at a certain time, I slow the video down, stop it, then accelerate and restart it. For the video part it's easy to directly adjust the presentation time, but the audio is processed chunk by chunk, which makes a slow-down, a stop, and a restart in the audio track less intuitive to handle.
Currently, when iterating through the buffers received from the source, I slow down the whole track like this:
// Multiply the presentation time by the slow-down ratio (3 here).
audioEncoderOutputBufferInfo.PresentationTimeUs =
    audioEncoderOutputBufferInfo.PresentationTimeUs * ratio;
// Expand the sample data by 3. (I just realized I haven't respected the
// sample alignment, but anyway, the problem is not about white noise...)
encoderOutputBuffer = Slowdown(encoderOutputBuffer, 3);
// Then write it to the muxer.
muxer.WriteSampleData(outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
But this just doesn't play. Of course, if the MediaFormat from the source is copied unchanged to the destination, its declared duration will be three times shorter than the actual audio data.
Could I just take the whole PCM from an input, edit the byte[] array, and add it as a track to the MediaMuxer?
If you want to slow down your audio samples, you need to do it before you encode them, i.e. before you queue the input buffer of your audio codec.
From my experience, audio presentation timestamps are ignored by most of the players out there (I tried VLC and ffplay). If you want to make sure that audio and video stay in sync, you must actually have enough audio samples to fill the gap between two PTS values; otherwise the player will just start playing the following samples regardless of their PTS.
Furthermore, you cannot just mux raw PCM samples using the MediaMuxer; you need to encode them first.
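To make that concrete, here is a naive sketch in plain Java of stretching raw 16-bit PCM before it goes into the encoder's input buffer. Repeating whole frames keeps the channel alignment intact but lowers the pitch; a pitch-preserving slow-down needs a real time-stretch algorithm (WSOLA or similar), but the point is that the change happens on the PCM, not on the encoded output.

// Stretch 16-bit PCM by an integer factor by repeating each frame.
static byte[] stretchPcm16(byte[] pcm, int channels, int factor) {
    int frameBytes = 2 * channels;           // 2 bytes per 16-bit sample
    int frames = pcm.length / frameBytes;
    byte[] out = new byte[frames * frameBytes * factor];
    int offset = 0;
    for (int f = 0; f < frames; f++) {
        for (int r = 0; r < factor; r++) {
            System.arraycopy(pcm, f * frameBytes, out, offset, frameBytes);
            offset += frameBytes;
        }
    }
    return out;
}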
I'm working on an Android app that plays video (using a VideoView). The video is meant to have both music (left and right) and narration, but I want to be able to selectively turn off the narration track in the MediaPlayer.
Is the correct way to do this to encode my MP4 video file with 3 audio tracks (right, left, and narration) and then turn off the narration track with deselectTrack()?
It is not clear to me from the documentation whether MediaPlayer can handle more than 2 audio tracks.
If the audio tracks are limited to 2, would it make sense to run two MediaPlayers simultaneously (syncing them up with seekTo()) when I want the narration track to play?
Thanks.
Sorry to burst your bubble, but...
1) You have a misunderstanding about what a "track" denotes. A track can have multiple channels (e.g., a stereo track has left and right channels). As I understand it, stereo is the extent of the Android AudioTrack implementation at present. I haven't yet checked if the OpenSL implementation is more extensive than the Java API.
2) Only 1 audio track can be selected at a time, so you wouldn't be able to have background and narration simultaneously in the way you were thinking.
3) Audio tracks can only be selected in the prepared state (i.e., not after playback has started). The documentation mentions this limitation is not ideal, so it will probably change in the future. If not for this problem, your goal could be accomplished with two audio tracks encoded in the stream, one with both background & narration, the other just background.
You will probably find it difficult to synchronize two MediaPlayers, but I haven't tried. Maybe this approach would be acceptable for your situation, although be forewarned that the seekTo method isn't accurate; it depends on the encoding of the files.
Something I would try if I were you is to have two complete encoded videos, one with narration and the other without. Use two MediaPlayers and keep them both prepared. When you want to switch, use seekTo to put the correct one at (or near) the desired location. That way you don't have to worry about synchronization. If the video is large, though, this method could use significantly more resources.
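A minimal sketch of that switch, assuming both players are prepared on the same timeline (one file with narration, one without):

import android.media.MediaPlayer;

// Pause the active player and cue the other at (or near) the same position.
// seekTo is only as accurate as the stream's sync points, so expect a small
// jump on some encodings.
static void switchPlayers(MediaPlayer from, MediaPlayer to) {
    int positionMs = from.getCurrentPosition();
    from.pause();
    to.seekTo(positionMs);
    to.start();
}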
I need to be able to play two or more (let's say up to 5) short OGG files simultaneously, and by simultaneously I mean in perfect synchrony. I can load them into SoundPool and play them, but this sometimes creates a noticeable difference in playback start time, which I want to get rid of.
From my understanding this can be avoided by mixing the PCM data into one buffer and playing that. But OGG files are not PCM and need to be decoded efficiently before playing, and the latency must be very low, ideally starting as soon as the user presses a button. So I figured I need a way to stream OGG into PCM, and as I receive the buffers I would mix them and feed them to an AudioTrack. My requirement is Android 2.3.3+, so I cannot use any of the new codecs provided in Jelly Bean.
Also, although the OGG files themselves are small, there are a lot of them, so keeping them all decoded in memory (SoundPool or some pre-decoding) may cause problems too.
Can someone give me a tip on where to dig? Can OpenSL ES do this for me? Or should I think about integrating ffmpeg? And is it even possible to stream simultaneous files with low latency?
Thanks
You can play sounds using AssetPlayers, but yes, this sometimes creates a noticeable difference in playback start time.
So I recommend decoding the OGG with Ogg Vorbis (like here) and then using the resulting PCM buffer with a BufferPlayer.
By the way, check out these OpenSL ES wrappers:
https://github.com/Suvitruf/Android-ndk/tree/master/OpenSLES
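Whatever does the decoding, the mixing step described in the question is simple in itself: sum the decoded 16-bit PCM buffers sample by sample with clipping, then write the result to a single AudioTrack so every sound starts inside the same write call. A sketch:

// Mix several decoded 16-bit PCM buffers into one, clipping on overflow.
static short[] mixPcm16(short[][] sources, int length) {
    short[] mixed = new short[length];
    for (int i = 0; i < length; i++) {
        int sum = 0;
        for (short[] source : sources) {
            if (i < source.length) sum += source[i];
        }
        // Clamp to the 16-bit range to avoid wrap-around distortion.
        mixed[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
    return mixed;
}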
I am trying to stream audio through a server. I have set up everything, and it works fine when recording and playing back static audio, but when I try to stream audio there is a delay on the playback side.
I did a Google search but couldn't find the proper way of doing this. I am using the AudioRecord and AudioTrack Android media APIs for sending and receiving the audio data. Can anybody tell me how to handle this delay?
I have added my code on a Google Group to give a clearer picture.
I tried holding 5 chunks of the audio data arriving from the server in a buffer, playing back once the 5 chunks are filled, then fetching and filling the next 5 chunks, and so on, until 1024 bytes of data have accumulated (at which point it writes to the AudioTrack and the play method is called). This too has a delay. Any other solutions?
If you're really trying to do this unbuffered, make sure whatever playback tool you're using is actually playing back without a buffer; you will be hard-pressed to avoid a delay. Nothing on TV, radio, etc. is truly 'live'; there is always some kind of delay. With internet streams you're sending a large amount of data constantly. Besides the travel time, all of this data has to be kept in a particular order, and nobody wants choppy playback while the end user's machine tries to keep up. I've had Flash players for major networks keep massive cache files on my computer while handling playback, yet their players did not skip or pause to buffer. (If you load up a stream and notice a few hundred MB of extra memory in use, maybe even more during playback, that's what that is.)
You might be able to get away with a very small buffer (the standard in the past used to be 30-60 seconds, and a lot of players still default to this) using VLC; I have been able to set its buffer very low, but only on incredibly low-quality streams/videos. The big problem, I'd guess, is that your playback side sets the buffer: if your player is set to a 60-second buffer, it doesn't matter what you do server-side; the client will wait until it has that much of a chunk and then begin playback.
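On the Android side, the smallest playback buffer you can ask for looks something like this (a sketch assuming 44.1 kHz mono 16-bit PCM; smaller buffers cut client-side delay at the cost of a higher risk of audible underruns):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Create a streaming AudioTrack with the minimum buffer the device allows.
int minBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBufferSize, AudioTrack.MODE_STREAM);
track.play();
// Write the network chunks as they arrive:
// track.write(chunk, 0, chunk.length);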