I'm trying to make an Android application that uses FFmpeg to play online streams. I went through the Dolphin Player source, and with its help I managed to play UDP streams on Android. Now I am trying to play multiple streams in my player. For example, I have a list activity with 10 names, and clicking each name should play the corresponding video. How can I achieve this with avformat_open_input? I am very new to FFmpeg. Please help me achieve this. Thanks in advance.
My C code is as follows:
static int decode_module_init(void *arg)
{
    VideoState *is = (VideoState *)arg;
    AVFormatContext *pFormatCtx;
    int err;
    int ret = -1;
    int i;
    int video_index = -1;
    int audio_index = -1;

    is->videoStream = -1;
    is->audioStream = -1;
    char* Filename = "udp://.....";

    global_video_state = is;

    pFormatCtx = avformat_alloc_context();
    pFormatCtx->interrupt_callback.callback = decode_interrupt_cb;
    pFormatCtx->interrupt_callback.opaque = is;

    err = avformat_open_input(&pFormatCtx, Filename, NULL, NULL);
    if (err < 0) {
        __android_log_print(ANDROID_LOG_INFO, "message", "File open failed");
        ret = -1;
        goto decode_module_init_fail;
    }
    __android_log_print(ANDROID_LOG_INFO, "message", "File open successful");
}
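A minimal sketch of one way to wire this up (the JNI function name, the Java class, and the filename field on VideoState are illustrative assumptions, not taken from the Dolphin Player source): the list activity passes the URL of the clicked item down through JNI, and decode_module_init() then opens that URL instead of the hard-coded "udp://..." literal.

#include <jni.h>
#include <string.h>

/* Hypothetical JNI entry point: called from the list activity with the URL of
 * the clicked item.  Assumes VideoState carries a fixed-size filename buffer
 * that decode_module_init() reads instead of the hard-coded string. */
JNIEXPORT jint JNICALL
Java_com_example_player_PlayerActivity_nativeOpenStream(JNIEnv *env, jobject thiz,
                                                        jstring jurl)
{
    const char *url = (*env)->GetStringUTFChars(env, jurl, NULL);
    if (url == NULL)
        return -1;                        /* OutOfMemoryError already pending */

    VideoState *is = global_video_state;  /* as used in the code above */

    /* copy the URL before releasing the JNI string, because the pointer is
       invalid after ReleaseStringUTFChars */
    strncpy(is->filename, url, sizeof(is->filename) - 1);
    is->filename[sizeof(is->filename) - 1] = '\0';
    (*env)->ReleaseStringUTFChars(env, jurl, url);

    /* decode_module_init() would then call
       avformat_open_input(&pFormatCtx, is->filename, NULL, NULL);
       (in the real player this call would be handed to the decode thread) */
    return decode_module_init(is);
}

On the Java side, the list activity's item-click handler would simply call nativeOpenStream(urls[position]) for the clicked row, after stopping and releasing any stream that is already playing.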
I am using the native-codec sample app provided by Google: https://github.com/googlesamples/android-ndk/tree/master/native-codec.
The app has an assets folder which contains some sample videos to play.
My goal is to read videos from the phone's internal storage (e.g. /sdcard/filename.mp4).
I added these two lines to the manifest file, but this hasn't fixed the issue yet:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
I modified the code to take the video filename as an argument passed in via adb shell.
Here is the code:
mSourceString = getIntent().getStringExtra("arg");

if (!mCreated) {
    if (mSourceString != null) {
        mCreated = createStreamingMediaPlayer(getResources().getAssets(), mSourceString);
    }
}
if (mCreated) {
    mIsPlaying = !mIsPlaying;
    setPlayingStreamingMediaPlayer(mIsPlaying);
}
The native method that opens the video from the given filename is the following:
jboolean Java_com_example_mohammed_myapplication_MainActivity_createStreamingMediaPlayer(JNIEnv* env, jclass clazz, jobject assetMgr, jstring filename)
{
    LOGV("### create");

    // convert Java string to UTF-8
    const char *utf8 = env->GetStringUTFChars(filename, NULL);
    LOGV("opening %s", utf8);

    off_t outStart, outLen;
    int fd = AAsset_openFileDescriptor(AAssetManager_open(AAssetManager_fromJava(env, assetMgr), utf8, 0),
                                       &outStart, &outLen);

    env->ReleaseStringUTFChars(filename, utf8);
    if (fd < 0) {
        LOGE("failed to open file: %s %d (%s)", utf8, fd, strerror(errno));
        return JNI_FALSE;
    }

    data.fd = fd;

    workerdata *d = &data;

    AMediaExtractor *ex = AMediaExtractor_new();
    media_status_t err = AMediaExtractor_setDataSourceFd(ex, d->fd,
                                                         static_cast<off64_t>(outStart),
                                                         static_cast<off64_t>(outLen));
    close(d->fd);
    if (err != AMEDIA_OK) {
        LOGV("setDataSource error: %d", err);
        return JNI_FALSE;
    }

    int numtracks = AMediaExtractor_getTrackCount(ex);
    AMediaCodec *codec = NULL;
    LOGV("input has %d tracks", numtracks);
    for (int i = 0; i < numtracks; i++) {
        AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
        const char *s = AMediaFormat_toString(format);
        LOGV("track %d format: %s", i, s);
        const char *mime;
        if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {
            LOGV("no mime type");
            return JNI_FALSE;
        } else if (!strncmp(mime, "video/", 6)) {
            // Omitting most error handling for clarity.
            // Production code should check for errors.
            AMediaExtractor_selectTrack(ex, i);
            codec = AMediaCodec_createDecoderByType(mime);
            AMediaCodec_configure(codec, format, d->window, NULL, 0);
            d->ex = ex;
            d->codec = codec;
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            d->isPlaying = false;
            d->renderonce = true;
            AMediaCodec_start(codec);
        }
        AMediaFormat_delete(format);
    }

    mlooper = new mylooper();
    mlooper->post(kMsgCodecBuffer, d);

    return JNI_TRUE;
}
The app plays the videos successfully when they are in the "assets" folder, i.e. inside the app. But when a video is outside the app (in internal/external storage), the app stops working.
Is there a solution for this issue?
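Separately from the permissions, note that AAssetManager_open() only resolves names that are packaged inside the APK's assets folder, so a device path such as /sdcard/filename.mp4 has to come from a plain open() call rather than the asset manager. A minimal sketch of feeding the same extractor such a path (the helper name is illustrative, and it assumes the storage permission has actually been granted):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <media/NdkMediaExtractor.h>

/* Illustrative helper, not from the native-codec sample: open a regular file
 * with open() and hand its descriptor to AMediaExtractor, instead of going
 * through AAssetManager (which only knows about APK assets). */
static media_status_t setDataSourceFromPath(AMediaExtractor *ex, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        return AMEDIA_ERROR_UNKNOWN;   /* missing file or missing storage permission */
    }

    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return AMEDIA_ERROR_UNKNOWN;
    }

    /* whole file: offset 0, length = file size */
    media_status_t err = AMediaExtractor_setDataSourceFd(ex, fd, 0, st.st_size);
    close(fd);                         /* safe to close here, just like the sample does */
    return err;
}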
Apart from adding the storage permissions to the manifest, the user also needs to grant the permission manually at runtime.
For testing purposes, you can go to Settings -> Apps -> your app -> Permissions and enable the storage permission. It should work fine then.
For production, you should request the permission via a runtime permission dialog. There are plenty of tutorials for that.
I want to analyse an audio file (an MP3 in particular) which the user can select, and determine which notes are played, when they are played, and at what frequency.
I already have some working code for my computer, but I want to be able to use this on my phone as well.
In order to do this, however, I need access to the bytes of the audio file. On my PC I could just open a stream, use AudioFormat to decode it, and then read() the bytes frame by frame.
Looking at the Android Developer forums I can only find classes and examples for playing a file (without access to the bytes) or recording to a file (whereas I want to read from a file).
I'm pretty confident that I can set up a file chooser, but once I have the Uri from that, I don't know how to get a stream or the bytes.
Any help would be much appreciated :)
Edit: Is a solution similar to this possible? Android - Read a File
I don't know if I could decode the audio file that way, or whether there would be any problems with the Android API...
So I solved it in the following way:
Get an InputStream with
final InputStream inputStream = getContentResolver().openInputStream(selectedUri);
Then pass it to this function and decode it using classes from JLayer:
private synchronized void decode(InputStream in)
        throws BitstreamException, DecoderException {
    ArrayList<Short> output = new ArrayList<>(1024);

    Bitstream bitstream = new Bitstream(in);
    Decoder decoder = new Decoder();

    float total_ms = 0f;
    float nextNotify = -1f;

    boolean done = false;
    while (!done) {
        Header frameHeader = bitstream.readFrame();

        if (total_ms > nextNotify) {
            mListener.OnDecodeUpdate((int) total_ms);
            nextNotify += 500f;
        }

        if (frameHeader == null) {
            done = true;
        } else {
            total_ms += frameHeader.ms_per_frame();

            SampleBuffer buffer = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream); // CPU intense
            if (buffer.getSampleFrequency() != 44100 || buffer.getChannelCount() != 2) {
                throw new DecoderException("mono or non-44100 MP3 not supported", null);
            }

            short[] pcm = buffer.getBuffer();
            for (int i = 0; i < pcm.length - 1; i += 2) {
                short l = pcm[i];
                short r = pcm[i + 1];
                short mono = (short) ((l + r) / 2f);
                output.add(mono); // RAM intense
            }
        }
        bitstream.closeFrame();
    }
    bitstream.close();

    mListener.OnDecodeComplete(output);
}
The full project (in case you want to look up the particulars) can be found here:
https://github.com/S7uXN37/MusicInterpreterStudio/
I am trying to decode H.264 video using FFmpeg and the Stagefright library. I'm using this example.
The example shows how to decode MP4 files, but I want to decode a raw H.264 stream only.
Here is a piece of my code:
AVFormatSource::AVFormatSource(const char *videoPath)
{
    av_register_all();
    mDataSource = avformat_alloc_context();
    avformat_open_input(&mDataSource, videoPath, NULL, NULL);

    for (int i = 0; i < mDataSource->nb_streams; i++)
    {
        if (mDataSource->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            mVideoIndex = i;
            break;
        }
    }
    mVideoTrack = mDataSource->streams[mVideoIndex]->codec;

    size_t bufferSize = (mVideoTrack->width * mVideoTrack->height * 3) / 2;
    mGroup.add_buffer(new MediaBuffer(bufferSize));

    mFormat = new MetaData;
    if (mVideoTrack->codec_id == CODEC_ID_H264)
    {
        mConverter = av_bitstream_filter_init("h264_mp4toannexb");
        mFormat->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
        if (mVideoTrack->extradata[0] == 1) // SIGSEGV here
        {
            mFormat->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            mFormat->setData(kKeyAVCC, kTypeAVCC, mVideoTrack->extradata,
                             mVideoTrack->extradata_size);
        }
    }

    mFormat->setInt32(kKeyWidth, mVideoTrack->width);
    mFormat->setInt32(kKeyHeight, mVideoTrack->height);
}
mVideoTrack->extradata is NULL. What am I doing wrong? My question is: what should be in mVideoTrack->extradata for kKeyAVCC?
Please help me, I need your help.
Thanks in advance.
If your input is a raw H.264 file, it is already in annex B format, so you do not need the "h264_mp4toannexb" conversion. In addition, in annex B the SPS/PPS are sent inline with the first (or every) IDR frame, so no extradata is needed. Read more here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
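A rough sketch of how that distinction could be handled in the constructor above (FFmpeg side only; the variable names are taken from the question's code, so treat it as illustrative rather than drop-in):

/* Only MP4/MOV-style input carries avcC extradata and needs h264_mp4toannexb;
 * a raw annex B .h264 file has no extradata at all, so guard the access that
 * currently crashes. */
if (mVideoTrack->codec_id == CODEC_ID_H264) {
    if (mVideoTrack->extradata != NULL && mVideoTrack->extradata_size > 0
            && mVideoTrack->extradata[0] == 1) {
        /* avcC header present (MP4/MOV): convert each packet to annex B and
         * hand the avcC blob to the decoder as kKeyAVCC */
        mConverter = av_bitstream_filter_init("h264_mp4toannexb");
    } else {
        /* raw annex B input: no bitstream filter and no kKeyAVCC needed;
         * the SPS/PPS arrive in-band with the IDR frames */
        mConverter = NULL;
    }
}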
I have found an open-source video player for Android which uses FFmpeg to decode video.
I have a problem with the audio, which sometimes plays with stutters, while the video picture is shown fine. The basic idea of the player is that audio and video are decoded in two separate threads and then handed back in a third thread: the video picture is drawn on a SurfaceView, and the audio samples are passed as a byte array to an AudioTrack, which plays them. But sometimes the sound is lost or stutters. Can anyone give me a starting point for what to do (some basic concepts)? Maybe I should change the buffer size for the AudioTrack, or add some flags to it. Here is the piece of code where the AudioTrack is created.
private AudioTrack prepareAudioTrack(int sampleRateInHz,
        int numberOfChannels) {

    for (;;) {
        int channelConfig;
        if (numberOfChannels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else if (numberOfChannels == 2) {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        } else if (numberOfChannels == 3) {
            channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                    | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                    | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
        } else if (numberOfChannels == 4) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
        } else if (numberOfChannels == 5) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                    | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
        } else if (numberOfChannels == 6) {
            channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
        } else if (numberOfChannels == 8) {
            channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        try {
            Log.d("MyLog", "Creating Audio player");
            int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                    minBufferSize, AudioTrack.MODE_STREAM);
            return audioTrack;
        } catch (IllegalArgumentException e) {
            if (numberOfChannels > 2) {
                numberOfChannels = 2;
            } else if (numberOfChannels > 1) {
                numberOfChannels = 1;
            } else {
                throw e;
            }
        }
    }
}
And here is the piece of native code where the sound bytes are written to the AudioTrack:
int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
        int64_t pts, uint8_t *data, int data_size, int original_data_size) {
    struct Player *player = decoder_data->player;
    int stream_no = decoder_data->stream_no;
    int err = ERROR_NO_ERROR;
    int ret;
    AVCodecContext * c = player->input_codec_ctxs[stream_no];
    AVStream *stream = player->input_streams[stream_no];
    LOGI(10, "player_write_audio Writing audio frame")

    jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
    if (samples_byte_array == NULL) {
        err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
        goto end;
    }

    if (pts != AV_NOPTS_VALUE) {
        player->audio_clock = av_rescale_q(pts, stream->time_base, AV_TIME_BASE_Q);
        LOGI(9, "player_write_audio - read from pts")
    } else {
        int64_t sample_time = original_data_size;
        sample_time *= 1000000ll;
        sample_time /= c->channels;
        sample_time /= c->sample_rate;
        sample_time /= av_get_bytes_per_sample(c->sample_fmt);
        player->audio_clock += sample_time;
        LOGI(9, "player_write_audio - added")
    }

    enum WaitFuncRet wait_ret = player_wait_for_frame(player,
            player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
    if (wait_ret == WAIT_FUNC_RET_SKIP) {
        goto end;
    }

    LOGI(10, "player_write_audio Writing sample data")

    jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
            NULL);
    memcpy(jni_samples, data, data_size);
    (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);

    LOGI(10, "player_write_audio playing audio track");
    ret = (*env)->CallIntMethod(env, player->audio_track,
            player->audio_track_write_method, samples_byte_array, 0, data_size);
    jthrowable exc = (*env)->ExceptionOccurred(env);
    if (exc) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3, "Could not write audio track: reason in exception");
        // TODO maybe release exc
        goto free_local_ref;
    }
    if (ret < 0) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3,
                "Could not write audio track: reason: %d look in AudioTrack.write()", ret);
        goto free_local_ref;
    }

free_local_ref:
    LOGI(10, "player_write_audio releasing local ref");
    (*env)->DeleteLocalRef(env, samples_byte_array);

end:
    return err;
}
I would be glad for any help. Thank you very much!
I had the same problem. The issue is the starting point of the audio data that gets written to the audio player. In PCM data, every 2 bytes form one audio sample (little-endian). For correct playback, the samples have to be assembled and written to the audio player starting from the right byte. If the read buffer does not start on the first byte of a sample, the samples cannot be reconstructed correctly and the sound is garbled. In my case I was reading samples from a file, and sometimes the read started on the second byte of a sample, so all the data read after that point was decoded incorrectly. I solved the problem by checking the start offset: if it is an odd number, I increase it by one to make it even.
Excuse my English.
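The same idea written out as a small sketch (the helper is illustrative, not from the player's source): before a chunk of little-endian 16-bit PCM is handed to AudioTrack.write(), make sure it starts on a sample/frame boundary and contains a whole number of frames.

#include <stddef.h>
#include <stdint.h>

/* Align a byte range of 16-bit PCM to whole frames (channels * 2 bytes each).
 * Returns the number of bytes that can safely be written and stores the
 * corrected start offset in *aligned_offset. */
static size_t align_pcm_chunk(size_t offset, size_t length,
                              int channels, size_t *aligned_offset)
{
    size_t frame_bytes = (size_t)channels * sizeof(int16_t);

    /* round the start up to the next frame boundary ... */
    size_t start = (offset + frame_bytes - 1) / frame_bytes * frame_bytes;

    /* ... and trim the length down to a whole number of frames */
    size_t usable = (offset + length > start) ? (offset + length - start) : 0;
    usable -= usable % frame_bytes;

    *aligned_offset = start;
    return usable;
}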
I'm using the following C function to decode packets in Android (via JNI). When I play an MP3 file the code works fine; however, a WMA file results in choppy audio. I suspect the issue may be with the swr_convert call and the data_size I'm using, but I'm not sure. Does anyone know why this would be happening?
int decodeFrameFromPacket(AVPacket *aPacket) {
    int n;
    AVPacket *pkt = aPacket;
    AVFrame *decoded_frame = NULL;
    int got_frame = 0;

    if (aPacket->stream_index == global_audio_state->audio_stream) {

        if (!decoded_frame) {
            if (!(decoded_frame = avcodec_alloc_frame())) {
                __android_log_print(ANDROID_LOG_INFO, TAG, "Could not allocate audio frame\n");
                return -2;
            }
        }

        if (avcodec_decode_audio4(global_audio_state->audio_st->codec, decoded_frame, &got_frame, aPacket) < 0) {
            __android_log_print(ANDROID_LOG_INFO, TAG, "Error while decoding\n");
            return -2;
        }

        int data_size = 0;

        if (got_frame) {
            /* if a frame has been decoded, output it */
            data_size = av_samples_get_buffer_size(NULL, global_audio_state->audio_st->codec->channels,
                    decoded_frame->nb_samples,
                    global_audio_state->audio_st->codec->sample_fmt, 1);
        }

        swr_convert(global_audio_state->swr, (uint8_t **) &gAudioFrameRefBuffer, decoded_frame->nb_samples, (uint8_t const **) decoded_frame->data, decoded_frame->nb_samples);

        avcodec_free_frame(&decoded_frame);

        gAudioFrameDataLengthRefBuffer[0] = data_size;
        return AUDIO_DATA_ID;
    }

    return 0;
}
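One thing worth checking, in line with the suspicion above (a hedged guess, not a confirmed diagnosis): data_size is computed from the decoder's channel count and sample format, while the bytes placed in gAudioFrameRefBuffer are whatever swr_convert produced in the output format the SwrContext was created with, and swr_convert does not necessarily emit exactly nb_samples samples. A sketch of sizing the reported buffer from the converted output instead, assuming the context was set up for interleaved 16-bit stereo (adjust out_channels and out_fmt to the real configuration):

/* Size the reported buffer from what swr_convert actually produced, not from
 * the decoder-side format. */
int out_samples = swr_convert(global_audio_state->swr,
                              (uint8_t **) &gAudioFrameRefBuffer,
                              decoded_frame->nb_samples,
                              (uint8_t const **) decoded_frame->data,
                              decoded_frame->nb_samples);
if (out_samples < 0) {
    __android_log_print(ANDROID_LOG_INFO, TAG, "swr_convert failed\n");
    return -2;
}

int out_channels = 2;                            /* assumption: stereo output */
enum AVSampleFormat out_fmt = AV_SAMPLE_FMT_S16; /* assumption: packed S16    */
int out_size = av_samples_get_buffer_size(NULL, out_channels, out_samples,
                                           out_fmt, 1);
gAudioFrameDataLengthRefBuffer[0] = out_size;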