How can I stream video from a ByteArray?
fun streamVideoListener(frame: ByteArray) {
    // Receiving H.264 frames every 100 ms.
}
I tried the FFmpeg library: I merged 100 frames into a few-second video and added it to an ExoPlayer playlist, but the performance is not good at all.
I also tried the NanoHTTPD library. I can serve a simple .mp4 video file and play it with VLC or MX Player, but I don't know how to stream a growing video file (without refreshing the page).
You'll need to implement a custom DataSource that implements the com.google.android.exoplayer.upstream.DataSource interface, or extend BaseDataSource from the ExoPlayer library. Store the byte array and serve it from the read method. You can see this pattern in use in the RtmpDataSource class of the ExoPlayer library.
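For illustration, here is a minimal sketch of that idea against the ExoPlayer 2 interface (com.google.android.exoplayer2.upstream.DataSource). The class name and the enqueue() helper are made up, and you would still need an extractor/media source that understands your H.264 elementary stream:

import android.net.Uri
import com.google.android.exoplayer2.C
import com.google.android.exoplayer2.upstream.DataSource
import com.google.android.exoplayer2.upstream.DataSpec
import com.google.android.exoplayer2.upstream.TransferListener
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical in-memory source fed from streamVideoListener().
class ByteArrayStreamDataSource : DataSource {

    private val frames = LinkedBlockingQueue<ByteArray>()
    private var current: ByteArray? = null
    private var position = 0
    private var uri: Uri? = null

    // Call this from streamVideoListener(frame) as frames arrive.
    fun enqueue(frame: ByteArray) = frames.put(frame)

    override fun addTransferListener(transferListener: TransferListener) {
        // No-op in this sketch; a real implementation should forward transfer events.
    }

    override fun open(dataSpec: DataSpec): Long {
        uri = dataSpec.uri
        return C.LENGTH_UNSET.toLong() // live stream: total length unknown
    }

    override fun read(buffer: ByteArray, offset: Int, readLength: Int): Int {
        if (readLength == 0) return 0
        // Block until the next frame arrives, then serve it piecewise.
        val frame = current ?: frames.take().also { current = it; position = 0 }
        val toCopy = minOf(readLength, frame.size - position)
        System.arraycopy(frame, position, buffer, offset, toCopy)
        position += toCopy
        if (position == frame.size) current = null
        return toCopy
    }

    override fun getUri(): Uri? = uri

    override fun close() {
        uri = null
    }
}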
I am building an application in which I need to trim videos. It is possible to do this using FFmpeg, but I can't use it because it is under the GPL license.
I tried using MediaCodec but couldn't get the code samples I found to work.
How can I trim videos on Android?
I had to develop trim functionality in my app a few months back and found that FFmpeg is very heavy and wasn't as accurate as MediaCodec.
None of the examples helped me, but as I was developing in Kotlin I had to rewrite them anyway.
Here is the breakdown of how to use MediaCodec (a sketch follows after these steps):
Pass the file to your MediaCodec class
Extract the video from the file
Create your buffer size
Seek to where you want the file to be trimmed from or to
Mux your audio and video together
We tried to find a way to handle the start and finish times together, but we ended up just duplicating the clip first and passing both copies in, one with a start time and one with an end time.
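As a rough sketch of those steps: the actual extraction and muxing are done by MediaExtractor and MediaMuxer (MediaCodec itself only becomes necessary if you re-encode). trim() and its parameters are made up for illustration, and this assumes an .mp4 input that can be remuxed without re-encoding:

import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Hypothetical trim helper following the steps above.
fun trim(srcPath: String, dstPath: String, startUs: Long, endUs: Long) {
    val extractor = MediaExtractor().apply { setDataSource(srcPath) } // 1. pass the file in
    val muxer = MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

    // 2. select every track (video and audio) and register it with the muxer
    val trackMap = HashMap<Int, Int>()
    for (i in 0 until extractor.trackCount) {
        extractor.selectTrack(i)
        trackMap[i] = muxer.addTrack(extractor.getTrackFormat(i))
    }

    val buffer = ByteBuffer.allocate(1 shl 20) // 3. buffer size (1 MiB assumed)
    val info = MediaCodec.BufferInfo()

    extractor.seekTo(startUs, MediaExtractor.SEEK_TO_CLOSEST_SYNC) // 4. seek to the trim start
    val baseUs = extractor.sampleTime // seeking snaps to the nearest sync frame
    muxer.start()
    while (true) {
        info.size = extractor.readSampleData(buffer, 0)
        if (info.size < 0 || extractor.sampleTime > endUs) break // end of file or trim end
        info.presentationTimeUs = extractor.sampleTime - baseUs
        info.flags = extractor.sampleFlags
        muxer.writeSampleData(trackMap.getValue(extractor.sampleTrackIndex), buffer, info) // 5. mux
        extractor.advance()
    }
    muxer.stop()
    muxer.release()
    extractor.release()
}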
You'll need to post your code and show where you're having the issue with MediaCodec for people to help you.
I'm using two MediaPlayer instances to send two different streams to each channel. Half the time it works fine, but sometimes there is lag between the left and right channels which is clearly audible. Is there any alternative except SoundPool on Android to play multiple audio files simultaneously and in sync? SoundPool is not suitable for my application since the audio files are large (approx. 20 MiB each). The audio file format in question is FLAC.
I found that there are no proper built-in mixing capabilities in the Android API. I ended up using WAV files instead of FLAC and mixing them on the fly as needed. Here is a high-level description of how I achieved it:
Read both WAV files and save the data parts in byte arrays (don't forget to strip out the header bytes)
Mix them byte by byte to generate a unified WAV file (see the sketch after this list)
In my use case, I just needed to mix the left and right channels, but one can do all sorts of transformations as needed.
Create a temporary file to hold the mixed data
Play the temporary file with MediaPlayer
One can also use AudioTrack to play the mixed bytes without storing them in a temporary file, but I chose MediaPlayer for its built-in seekTo functionality.
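For the mixing step, here is a minimal sketch, assuming both inputs are 16-bit little-endian PCM at the same sample rate with the 44-byte canonical WAV headers already stripped (mixPcm is a made-up name):

// Sum corresponding 16-bit samples and clamp to avoid wrap-around distortion.
fun mixPcm(a: ByteArray, b: ByteArray): ByteArray {
    val out = ByteArray(minOf(a.size, b.size))
    var i = 0
    while (i + 1 < out.size) {
        // Decode one little-endian 16-bit sample from each source.
        val sa = ((a[i + 1].toInt() shl 8) or (a[i].toInt() and 0xFF)).toShort()
        val sb = ((b[i + 1].toInt() shl 8) or (b[i].toInt() and 0xFF)).toShort()
        val mixed = (sa + sb).coerceIn(-32768, 32767)
        out[i] = (mixed and 0xFF).toByte()
        out[i + 1] = ((mixed shr 8) and 0xFF).toByte()
        i += 2
    }
    return out
}

Prepend a valid WAV header describing the result before handing the temporary file to MediaPlayer.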
Hope this approach is helpful.
I'm using the Android ExoPlayer to play a video, but I want to implement some custom encoding mechanism.
The idea is to change certain bytes in the video file so that it's not playable by standard players, and then have ExoPlayer undo the change on the fly (without actually modifying the stored file).
How could I do such a thing? Using a custom DataSource? Any tips on this would be much appreciated!
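One way to sketch this (against ExoPlayer 2's com.google.android.exoplayer2.upstream.DataSource; XorDataSource is a made-up name, and it assumes the "encoding" is flipping the first bit of each byte): wrap the stock FileDataSource and undo the transformation inside read(), so the bytes on disk are never touched:

import android.net.Uri
import com.google.android.exoplayer2.upstream.DataSource
import com.google.android.exoplayer2.upstream.DataSpec
import com.google.android.exoplayer2.upstream.FileDataSource
import com.google.android.exoplayer2.upstream.TransferListener

// Decodes the byte-level "encryption" on the fly while ExoPlayer reads.
class XorDataSource(private val upstream: DataSource = FileDataSource()) : DataSource {

    override fun addTransferListener(transferListener: TransferListener) =
        upstream.addTransferListener(transferListener)

    override fun open(dataSpec: DataSpec): Long = upstream.open(dataSpec)

    override fun read(buffer: ByteArray, offset: Int, readLength: Int): Int {
        val read = upstream.read(buffer, offset, readLength)
        for (i in offset until offset + maxOf(read, 0)) {
            buffer[i] = (buffer[i].toInt() xor 0x80).toByte() // flip the first bit back
        }
        return read
    }

    override fun getUri(): Uri? = upstream.uri

    override fun close() = upstream.close()
}

Supply a DataSource.Factory that returns this class when you build your MediaSource.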
I have a video file that is encoded: for example, the first bit of each byte is reversed. I want to read this video file, flip the first bits back, and send the decoded result to MediaPlayer.
How can I do that? How can I create such a stream and pass it to MediaPlayer without saving the decoded data to storage?
Importantly, I do not want to save a decoded copy of the video and play that in MediaPlayer. I want to play the encoded video directly in MediaPlayer using streams or other possible means.
Short answer: NO, there is no way to do that (from my point of view, obviously).
You cannot play from a "custom" stream by manipulating the data just before passing it to MediaPlayer.
Why?
The official MediaPlayer API which is closest to the one needed to achieve your goal is the following:
MediaPlayer mp = new MediaPlayer();
FileInputStream fis = new FileInputStream(yourFile);
mp.setDataSource(fis.getFD()); // the data source is really the underlying file descriptor
//...
This snippet plays a file starting from a FileInputStream, or more precisely from its underlying FileDescriptor. FileDescriptor is a final class (reasonably so, because it has to deal with the underlying OS), so you cannot override anything in it.
Possible workarounds?
As you already pointed out, you can try to modify the real file "in place" while playing the video with the standard MediaPlayer (without creating a deep/separate copy of it): it's very tricky but plausible.
Try to use another player object: ExoPlayer (which is a new standard Android API) or Vitamio
Try a pure native solution (NDK + Android source), which I would not recommend ;)
UPDATE: detail about the 1st workaround
Assuming that "the first bit of each byte is reversed", you can use a FileChannel to manipulate the whole file "in place" while reading it. Use a FileChannel created from a RandomAccessFile opened in "rw" mode so you can read and write simultaneously.
This pre-elaboration task can run on a separate thread (or inside an IntentService, which is cleaner and more reliable); you can wait a few seconds after the elaboration begins and then start playback by passing the File reference to the standard MediaPlayer (you need to tune this waiting period based on how fast the elaboration is, like stream buffering but easier, because performance is almost stable).
This way you don't need to wait for the pre-elaboration to finish before starting playback.
When playback stops or you close the app, you need to undo your work by running the same pre-elaboration task on the played file in order to restore it to its original state.
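A sketch of that pre-elaboration task (decodeInPlace is a made-up name; it assumes the transformation is flipping the first bit of each byte, which is its own inverse, so the very same function also restores the file afterwards):

import java.io.RandomAccessFile
import java.nio.ByteBuffer

// Flips the first bit of every byte in place through one read/write channel.
fun decodeInPlace(path: String) {
    RandomAccessFile(path, "rw").use { raf ->
        val channel = raf.channel
        val buffer = ByteBuffer.allocate(64 * 1024) // 64 KiB chunks (arbitrary size)
        var position = 0L
        while (true) {
            buffer.clear()
            val read = channel.read(buffer, position)
            if (read <= 0) break
            for (i in 0 until read) {
                buffer.put(i, (buffer.get(i).toInt() xor 0x80).toByte())
            }
            buffer.flip()
            channel.write(buffer, position) // write the chunk back where it came from
            position += read
        }
    }
}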
I hope this hint is useful. Comments and clarifications on my answer are welcome; I will update my post if I find more information.
I need to save a video file generated from two video streams coming from two different sources. I'm using RTSP over TCP/IP, and the videos are encoded with h.264.
I need to first record the video from the first source and then continue with the second source.
So what I tried was to declare two AVFormatContext instances and initialize both with avformat_open_input(&context, "rtsp://......",NULL,&options),
then read frames with av_read_frame(context,&packet)
and write them to the video file with av_write_frame(oc,&packet);
This works fine for saving the video from the first source, but if, for example, I saved y frames from the first context, then when I try reading and saving frames from the second context into the same file, av_write_frame(oc,&packet2);
returns -22 for the first y frames and does not add them to the file.
I think the problem is that the context remembers how many frames were read and gives every read packet an identification number to make sure it isn't written twice. When I use a new context those identification numbers reset, but the AVOutputFormat or the AVFormatContext retains the id of the packet it expects to receive, and it will not write anything until it receives a packet with that id.
Now I'm wondering how I could solve this. I can't find any setter for that id, or any way to reuse the same context. I thought about modifying the FFmpeg sources, but they are pretty complex and I couldn't find what I was looking for.
An alternative would be to save the two videos in two different files, but I don't know how to append them afterwards, as FFmpeg can only append videos encoded with MPEG, and re-encoding the video isn't really an option since it would take too much time. Also, I couldn't find any other working way to append two mp4 videos encoded with h.264.
I'd be happy to hear any kind of usable idea for this problem.
If you are saving raw h.264 streams, why not simply store two separate streams and then concatenate the file chunks with a shell command: system("cat file1 file2 > finalfile")
If your output is one of the following, you can append directly using cat:
Transport streams [.ts] with the same codecs
.mpg files
raw h.264 files
raw mpeg4 files which have exactly the same encoding headers [same dimensions, profile, and toolsets mentioned in the header]
H.263 streams
You cannot directly concatenate mp4 or 3gpp files.
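If you would rather concatenate from code than shell out to cat, a plain byte-level append does the same thing for the stream types listed above (concatStreams is a made-up helper):

import java.io.File
import java.io.FileOutputStream

// Byte-level equivalent of `cat file1 file2 > finalfile`.
fun concatStreams(first: File, second: File, output: File) {
    FileOutputStream(output).use { out ->
        first.inputStream().use { it.copyTo(out) }  // copy the first stream verbatim
        second.inputStream().use { it.copyTo(out) } // then append the second
    }
}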