I have an Android implementation that sets the media player's data source to a byte range of a file, using an offset and a length in bytes:
FileInputStream fis = new FileInputStream(Environment.getExternalStorageDirectory() + "path to video file");
mMediaPlayer.setDataSource(fis.getFD(),1000,40000000);
Is there any alternative to do the same in iOS?
I don't think you can set a byte-based range in iOS, but you can specify the video's playback range based on time. An example is given in this link.
You have to calculate the time range manually by combining the following two values:
// 1. Get the file size of the video file
NSError *attributesError = nil;
NSDictionary *fileAttributes = [[NSFileManager defaultManager] attributesOfItemAtPath:URL error:&attributesError];
NSNumber *fileSizeNumber = [fileAttributes objectForKey:NSFileSize];
long long fileSize = [fileSizeNumber longLongValue];

// 2. Get the estimated data rate (bits per second) of the asset's video track
AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:URL]];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
float estimatedDataRate = videoTrack.estimatedDataRate;
From the file size and the estimated data rate you can convert a byte offset into an approximate time offset (offsetInBytes * 8 / estimatedDataRate seconds), build a CMTime from it, and pass that to the player accordingly.
Hope this helps.
I have an RTMP stream I want to play in my app using the ExoPlayer library. My setup for that is as follows:
TrackSelector trackSelector = new DefaultTrackSelector();
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new ExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000,  // min buffer duration (ms)
        3000,  // max buffer duration (ms)
        1000,  // buffer required before playback starts (ms)
        2000,  // buffer required after a rebuffer (ms)
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addListener(mVideoStreamHandler);
mPlayer.addVideoListener(new VideoListener() {
    @Override
    public void onVideoSizeChanged(int width, int height, int unappliedRotationDegrees, float pixelWidthHeightRatio) {
        Log.d("hasil", "onVideoSizeChanged: w:" + width + ", h:" + height);
        String res = width + "x" + height;
        resolution.setText(res);
    }

    @Override
    public void onRenderedFirstFrame() {
    }
});
Where createSource() is as follows:
private void createSource() {
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720));
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL));
}
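The post doesn't show how the sources are handed to the player; presumably each one is prepared roughly like this when switching quality or to audio-only (this snippet is an assumption, not part of the original code):

mPlayer.prepare(mMediaSourceAudio);  // or mMediaSource180 / mMediaSource360 / mMediaSource720
mPlayer.setPlayWhenReady(true);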
My current problem is that only the first three ExtractorMediaSources work fine in ExoPlayer. The mMediaSourceAudio refuses to play in ExoPlayer, but works just fine in VLC Media Player for Android.
Right now my suspicion is that the format is AAC-LTP, or some other AAC variant that requires a codec available in VLC but not in stock Android. However, I do not have access to the encoding process, so I don't know for sure.
If this isn't the case, what is it?
EDIT:
I've been debugging the BandwidthMeter and added a MediaSourceEventListener. When I use the normal Video sources, onDownstreamFormatChanged() gets called, but not when I use that Audio Stream source.
In addition, the BandwidthMeter works fine: bytes are downloaded throughout the stream, and more bytes arrive when the video stream comes in. However, with the audio-only stream, mPlayer.getBufferedPosition() always returns 0. Also, when I use the audio stream source, no OMX code is called, so no decoders are set up.
Am I seeing a malformed audio stream, or do I need to change my ExoPlayer settings?
EDIT 2:
Further debugging reveals that the same FlvExtractor is used for both the video streams and the audio stream, even though the video streams have an avc video track and an mp4a-latm audio track. Is this normal?
It turns out the stream was recognized as having two tracks/SampleQueues: one audio track, and one track with a null format. That null track was supposed to be the video track, which was supposed to exist according to the stream's flvHeader flags.
For now, I get around this by creating a custom MediaSource with a custom MediaPeriod. That custom MediaPeriod contains code to separate the video and audio tracks of the SampleQueues, and then uses the audio-only SampleQueue[] instead of the source SampleQueue[] when I want to play the audio-only stream.
Though this gives me another point of concern: there is something one can do to alter the 'has audio track (flag & 0x04)' and 'has video track (flag & 0x01)' flags in the RTMP stream, right?
Thanks for the comments; I'm new to ExoPlayer, but your comments helped me debug this and find multiple workarounds for the issue.
I tried using a custom MediaSource and custom MediaPeriod to address this audio issue. I observed the video format data arriving after the audio data in the case of a video+audio Wowza stream, so maybeFinishPrepare() will wait until it has both the video and audio format tag data before invoking onPrepared() when video tagData is received first. When audio data is received first, it won't wait and will call onPrepared() straight away.
With the above changes, I was able to play both audio-only and video+audio Wowza streams, where the RTMP tagHeaders' tagTypes arrive as video tagData first, followed by audio data.
I wasn't able to use the same patch with an SRS server to play both audio-only and video+audio streams, because the SRS server sends the tagData in the order of audio first and then video.
So I debugged further in FlvExtractor. In readFlvHeader, I overrode the hasAudio and hasVideo variables so that they are set from the first few tagHeaders (5 or 6) instead. I used peekFully on the input six times in a loop; in each iteration, after reading the tagType and tagDataSize, the tagDataSize is passed to input.advancePeekPosition() and the tagType is used to identify whether the tagData holds audio or video format data. After peeking at the first six consecutive tagHeaders I had the actual values of hasAudio and hasVideo, and I ignored flvHeaders.flags, which were originally used to set these variables.
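A rough sketch of that peek loop (hypothetical, and assuming a copied/custom FlvExtractor where the readFlvHeader logic can be changed; the peek position is assumed to sit just after the 9-byte FLV header, so each iteration covers a 4-byte PreviousTagSize plus an 11-byte tag header, and end-of-input handling is omitted):

// Hypothetical helper inside a copied FlvExtractor: peek the first few tag
// headers to detect which track types are actually present, instead of
// trusting the hasAudio/hasVideo bits in flvHeader.flags.
private void detectTracksByPeeking(ExtractorInput input) throws IOException, InterruptedException {
    byte[] header = new byte[15]; // 4-byte PreviousTagSize + 11-byte FLV tag header
    boolean hasAudio = false;
    boolean hasVideo = false;
    for (int i = 0; i < 6; i++) {
        input.peekFully(header, 0, header.length);
        int tagType = header[4] & 0x1F; // 8 = audio, 9 = video, 18 = script data
        int tagDataSize = ((header[5] & 0xFF) << 16)
                | ((header[6] & 0xFF) << 8)
                | (header[7] & 0xFF);
        if (tagType == 8) {
            hasAudio = true;
        } else if (tagType == 9) {
            hasVideo = true;
        }
        input.advancePeekPosition(tagDataSize); // skip the tag body to reach the next header
    }
    input.resetPeekPosition();
    // use hasAudio / hasVideo here instead of flvHeader.flags when creating the SampleQueues
}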
The custom FlvExtractor workaround looked cleaner than the custom MediaSource/MediaPeriod one, because only as many tracks as necessary are created, since proper hasVideo/hasAudio values are set.
I am trying to play an encrypted MP4 video using my own decryption logic. Fully decrypting the file before playback takes too long, because the decrypted file is too large to create and then play. What I've found is that it should be possible to play the video while decrypting it through an InputStream with ExoPlayer, but it's too difficult for me to apply at my level. I've been stuck on this for two days without any results, so I'm asking for help here.
What I'm looking for is a helpful reference. I have to read and decrypt the data in 4096-byte blocks, and I don't know where that code should go.
The flow I have in mind is as follows:
1. Complete the ExoPlayer UI.
2. Encrypt the downloaded file using my encryption logic (the buffer size is 4096).
3. Have an InputStream read the file, decrypt it on the fly, and play it (streaming).
I can manage steps 1 and 2 somehow, but step 3 is very difficult for me. Do you have any specific code or explanation? If you know how, please do me this favor. Thank you.
// Runs inside an AsyncTask's doInBackground(String... params); ios, fos,
// fileLength, startTime and readb are fields of the task.
// params[0] = path of the source file, params[1] = output file name.
try {
    ios = new FileInputStream(params[0]);
    fos = context.openFileOutput(params[1] + ".mp4", MODE_PRIVATE);
    ScatteringByteChannel sbc = ios.getChannel();
    GatheringByteChannel gbc = fos.getChannel();

    File file = new File(params[0]);
    fileLength = file.length();
    startTime = System.currentTimeMillis();

    int read = 0;
    readb = 0;
    ByteBuffer bb = ByteBuffer.allocate(4096);
    while ((read = sbc.read(bb)) != -1) {
        bb.flip();
        // Decrypt the block and write it out. Note: bb.array() is always
        // 4096 bytes long, even if the final read returned fewer bytes.
        gbc.write(ByteBuffer.wrap(enDecryptVideo.combineByteArray(bb.array())));
        bb.clear();
        readb += read;
        if (readb % (4096 * 1024 * 3) == 0) {
            publishProgress((int) (readb * 100 / fileLength));
        } else if (readb == fileLength) {
            publishProgress(101);
        }
    }
    ios.close();
    fos.close();
} catch (Exception e) {
    Log.e(TAG, "doInBackground: ", e); // log the failure instead of discarding it
} finally {
    Log.d(TAG, "doInBackground: " + (System.currentTimeMillis() - startTime));
}
This is the code I used when I decrypted the file to disk first and then played it. Now I have to play back while decrypting, without creating a file. I am very eager: I've only been at this job for a month and I've been handed something beyond my level, but I really want to hit this target... Please teach me.
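For reference, the kind of component step 3 asks about is usually a custom ExoPlayer DataSource that decrypts each block as the player reads it. The sketch below is hypothetical and not from the original thread: it assumes the pre-2.9 DataSource interface (open/read/getUri/close), that every 4096-byte block can be decrypted independently by the poster's own helper (the EnDecryptVideo type name is assumed), and that decryption preserves the block length.

import android.net.Uri;
import com.google.android.exoplayer2.C;
import com.google.android.exoplayer2.upstream.DataSource;
import com.google.android.exoplayer2.upstream.DataSpec;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

// Hypothetical sketch only: serves decrypted bytes to ExoPlayer block by block.
public final class DecryptingFileDataSource implements DataSource {

    private static final int BLOCK_SIZE = 4096;

    private final EnDecryptVideo enDecryptVideo; // the poster's existing decryption helper (assumed type name)

    private RandomAccessFile file;
    private Uri uri;
    private long bytesRemaining;              // plaintext bytes still to be served
    private byte[] plainBlock = new byte[0];  // currently decrypted block
    private int blockOffset;                  // read position inside plainBlock

    public DecryptingFileDataSource(EnDecryptVideo enDecryptVideo) {
        this.enDecryptVideo = enDecryptVideo;
    }

    @Override
    public long open(DataSpec dataSpec) throws IOException {
        uri = dataSpec.uri;
        file = new RandomAccessFile(dataSpec.uri.getPath(), "r");
        // Start at the block boundary containing the requested position.
        long blockStart = (dataSpec.position / BLOCK_SIZE) * BLOCK_SIZE;
        file.seek(blockStart);
        bytesRemaining = dataSpec.length != C.LENGTH_UNSET
                ? dataSpec.length
                : file.length() - dataSpec.position;
        refill();
        blockOffset = (int) (dataSpec.position - blockStart); // skip into the first block
        return bytesRemaining;
    }

    // Read and decrypt the next block (assumes length-preserving decryption).
    private void refill() throws IOException {
        byte[] cipherBlock = new byte[BLOCK_SIZE];
        int read = file.read(cipherBlock);
        plainBlock = read <= 0
                ? new byte[0]
                : enDecryptVideo.combineByteArray(Arrays.copyOf(cipherBlock, read));
        blockOffset = 0;
    }

    @Override
    public int read(byte[] buffer, int offset, int readLength) throws IOException {
        if (bytesRemaining == 0) {
            return C.RESULT_END_OF_INPUT;
        }
        if (blockOffset == plainBlock.length) {
            refill();
            if (plainBlock.length == 0) {
                return C.RESULT_END_OF_INPUT;
            }
        }
        int toCopy = (int) Math.min(readLength,
                Math.min(plainBlock.length - blockOffset, bytesRemaining));
        System.arraycopy(plainBlock, blockOffset, buffer, offset, toCopy);
        blockOffset += toCopy;
        bytesRemaining -= toCopy;
        return toCopy;
    }

    @Override
    public Uri getUri() {
        return uri;
    }

    @Override
    public void close() throws IOException {
        if (file != null) {
            file.close();
            file = null;
        }
    }
}

A matching DataSource.Factory would then be handed to ExtractorMediaSource.Factory in place of a network data source factory. Whether this is workable depends entirely on how the file was actually encrypted, so treat it purely as a starting point; the answer below avoids the problem altogether.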
You can actually leverage the platform's built-in encryption functionality for streamed video, either using a commercial DRM or using 'clear key' encryption.
If these meet your needs, they should be much easier to work with, as you won't have to implement the encryption and decryption yourself.
This answer provides an example for creating both an HLS / AES stream and a DASH clearkey stream:
https://stackoverflow.com/a/45103073/334402
This does not provide the same security as DRM, as the keys themselves are not encrypted, but it may be sufficient for your needs.
These streams can then be played with the standard iOS, Android or HTML5 players.
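As a rough illustration (not from the original answer), playing an AES-protected HLS stream with ExoPlayer then requires no custom decryption code at all, because the key URI inside the .m3u8 playlist is handled by the player itself. This assumes a 2.x ExoPlayer with the exoplayer-hls module; the URL and user agent are placeholders, and context is any available Context:

// com.google.android.exoplayer2.* imports omitted for brevity
DataSource.Factory dataSourceFactory = new DefaultHttpDataSourceFactory("my-app");
MediaSource hlsSource = new HlsMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/video/master.m3u8"));

SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(context, new DefaultTrackSelector());
player.prepare(hlsSource);
player.setPlayWhenReady(true);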
How can I get the byte offset in a video file corresponding to a given playback time offset? For example, given a playback offset of 15 seconds into a video, I'd like to know the byte offset at that time.
The reason for this is because I'd like to be able to "trim" a clip from a video. I'd like to be able to save a video clip from 00:00:20 to 00:00:35 of the video.
At the moment, here is what I have, but this saves the entire video from the URL to the device:
URL url = new URL(http_url_path);
URLConnection ucon = url.openConnection();

// Read from the URLConnection through a buffered stream (BUFFER_SIZE is the
// download buffer, e.g. 5 KB) and write everything to the output file.
InputStream is = ucon.getInputStream();
BufferedInputStream in = new BufferedInputStream(is, BUFFER_SIZE);
FileOutputStream out = new FileOutputStream(file);

byte[] buff = new byte[BUFFER_SIZE];
int len = 0;
while ((len = in.read(buff)) != -1) {
    out.write(buff, 0, len);
}
out.close();
in.close();
If you don't mind cutting at the nearest key frame (a/k/a sync frame), you can use MediaExtractor to extract the frames, using getSampleTime() to check the PTS, and MediaMuxer to put it back together minus the unwanted frames.
The video must start with a key frame, so you can't cut the stream at an arbitrary point unless you're willing to re-encode that GOP.
MP4 video files are not just a series of frames (I assume you're not operating on raw H.264 data). MediaMuxer will take care of rewriting the header and other supporting data.
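For illustration, here is a hedged sketch of that MediaExtractor + MediaMuxer approach (the file paths and the 00:00:20 to 00:00:35 window are placeholder values; IOException handling is omitted):

// Uses android.media.MediaExtractor, android.media.MediaMuxer,
// android.media.MediaCodec.BufferInfo and java.nio.ByteBuffer.
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource("/sdcard/input.mp4");                 // placeholder path
MediaMuxer muxer = new MediaMuxer("/sdcard/clip.mp4",         // placeholder path
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

long startUs = 20 * 1000000L; // 00:00:20
long endUs   = 35 * 1000000L; // 00:00:35

// Select every track in the source and remember its index in the muxer.
int trackCount = extractor.getTrackCount();
int[] muxerTrack = new int[trackCount];
for (int i = 0; i < trackCount; i++) {
    extractor.selectTrack(i);
    muxerTrack[i] = muxer.addTrack(extractor.getTrackFormat(i));
}
muxer.start();

// Jump to the key frame at or before the start time, then copy samples
// until the end time is passed.
extractor.seekTo(startUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
long firstSampleUs = extractor.getSampleTime();
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    info.size = extractor.readSampleData(buffer, 0);
    long sampleTimeUs = extractor.getSampleTime();
    if (info.size < 0 || sampleTimeUs > endUs) {
        break;
    }
    info.offset = 0;
    info.presentationTimeUs = sampleTimeUs - firstSampleUs;
    info.flags = (extractor.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
            ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;
    muxer.writeSampleData(muxerTrack[extractor.getSampleTrackIndex()], buffer, info);
    extractor.advance();
}

muxer.stop();
muxer.release();
extractor.release();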
You can try INDE Media for Mobile - https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It provides transcoding/remuxing functionality via its MediaComposer class, as well as the ability to select segments for the resulting files. Since it uses the MediaCodec API internally, it is battery friendly and works as fast as possible. Samples are here: https://github.com/INDExOS/media-for-mobile
I'd like to capture the outgoing audio from a game and record it into an audio file as it plays. Is this possible within the OpenSL ES framework, for example by connecting the OutputMix to an AudioRecorder, or something like that?
You could register a callback on the buffer queue and obtain the output buffer just before/after it is enqueued for output. You could keep a wavBuffer (a short array the length of the buffer) that is written into each time a new buffer is enqueued, and then append its contents to a file.
outBuffer = p->outputBuffer[ p->currentOutputBuffer ]; // obtain the float output buffer

for ( int i = 0; i < bufferSize; ++i )
    wavBuffer[ i ] = ( short )( outBuffer[ i ] * 32767 ); // convert float [-1..1] to 16-bit PCM

// now append the contents of wavBuffer to the file
The basic OpenSL setup for the queue callback is explained in some detail on this page.
And a very basic means of creating a WAV file in C++ can be found here; note that you need a fairly exact idea of the total size of the WAV file up front, as it is part of the file's header.
Can anyone tell me how to combine/merge two media files into one?
I found some topics about AudioInputStream, but it's not supported on Android, and all the code I found is for desktop Java.
On Stack Overflow I found this link here, but I couldn't find a solution there; those links are only about streaming audio. Can anyone help me?
P.S. And why can't I start a bounty? :(
import java.io.*;

public class TwoFiles
{
    public static void main(String args[]) throws IOException
    {
        FileInputStream fistream1 = new FileInputStream("C:\\Temp\\1.mp3");   // first source file
        FileInputStream fistream2 = new FileInputStream("C:\\Temp\\2.mp3");   // second source file
        SequenceInputStream sistream = new SequenceInputStream(fistream1, fistream2);
        FileOutputStream fostream = new FileOutputStream("C:\\Temp\\final.mp3"); // destination file

        int temp;
        while ((temp = sistream.read()) != -1)
        {
            // System.out.print((char) temp); // to print at DOS prompt
            fostream.write(temp); // to write to file
        }

        fostream.close();
        sistream.close();
        fistream1.close();
        fistream2.close();
    }
}
Consider two cases for .mp3 files:
Files with the same sampling frequency and number of channels
In this case, we can simply append the second file to the end of the first. This can be achieved using the standard file I/O classes available on Android, as in the code above.
Files with a different sampling frequency or number of channels
In this case, one of the clips has to be re-encoded so that both files have the same sampling frequency and number of channels. To do this, we would need to decode the MP3, get the PCM samples, process them to change the sampling frequency, and then re-encode to MP3. As far as I know, Android does not expose transcoding or MP3 re-encoding APIs. One option is to use an external library like LAME or FFmpeg via JNI for the re-encode.