I am working on a player based on the GStreamer tutorials. For this I created a pipeline using:
pipeline = gst_pipeline_new("audio-player");
// create the three GStreamer elements
appsrc = gst_element_factory_make("appsrc", "source");
decoder = gst_element_factory_make("faad", "aac-decoder");
sink = gst_element_factory_make("autoaudiosink", "audio-output");
// add the elements to the pipeline and link them
gst_bin_add_many (GST_BIN (pipeline), appsrc, decoder, sink, NULL);
gst_element_link_many(appsrc, decoder, sink, NULL);
// a need_data_cb callback was connected to appsrc
g_signal_connect(appsrc, "need-data", (GCallback)need_data_cb, data);
// set the pipeline state to playing
gst_element_set_state(pipeline, GST_STATE_PLAYING);
In the need_data_cb function I have a buffer that I want to be played:
g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
My problem is this: I have the same code on Linux and on Android. On Linux the buffer is played correctly each time need_data_cb is entered. On Android it plays the buffer only the first time need_data_cb is entered, and after that there is no sound. Why does this happen when I have the same code in both versions? If, in the Android version of need_data_cb, I change the pipeline state to paused and back to playing before pushing the buffer to appsrc, the buffer is played every time, but with some interruptions between calls.
// the first two lines were added in the Android version to make the buffer play each time
gst_element_set_state(pipeline, GST_STATE_PAUSED);
gst_element_set_state(pipeline, GST_STATE_PLAYING);
g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
The question is: why does it work fine on Linux without these lines, but not on Android?
On Linux I installed GStreamer 0.10, and on Android I used the libraries from the GStreamer SDK tutorials. Do you have any hint for my problem?
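For context, my need_data_cb is shaped roughly like this (a simplified sketch assuming GStreamer 0.10; the CustomData struct and where the AAC chunks come from are placeholders for my real code):
#include <string.h>
#include <gst/gst.h>
/* Placeholder application state; the real fields depend on where the AAC data comes from. */
typedef struct {
    guint8 *next_chunk;
    guint   chunk_size;
} CustomData;
static void need_data_cb(GstElement *appsrc, guint unused_size, gpointer user_data)
{
    CustomData *data = (CustomData *) user_data;
    GstFlowReturn ret;
    /* Allocate a buffer and copy the next chunk of encoded AAC into it
       (GStreamer 0.10 API; in 1.x you would use gst_buffer_fill instead). */
    GstBuffer *buffer = gst_buffer_new_and_alloc(data->chunk_size);
    memcpy(GST_BUFFER_DATA(buffer), data->next_chunk, data->chunk_size);
    /* Hand the buffer to appsrc; ret reports whether it was accepted. */
    g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
    gst_buffer_unref(buffer);
}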
Thanks,
Radu
The problem was due to the emulator. On a real device everything was OK. Do not use emulators; debug directly on a device!
I am considering using libVLC for Android, built the officially recommended way.
I went through the compilation process without problems (though it took some time).
I'd like to have snapshot functionality, but I've found some very old (2-3 year old) threads which state that this feature is still not available (2016), or at least "not out of the box" according to this thread (2014).
Snapshot functionality is available on other platforms.
Also there are some solutions where they switch from SurfaceView to TextureView.
However I prefer sticking to SurfaceView as TextureView brings some performance drawbacks (according to this topic).
Also, an official Android page states:
In API 24 and higher, it's recommended to implement SurfaceView instead of TextureView.
In 2014 there were only two dependencies of the snapshot function, based on the thread I mentioned earlier:
enabling the sout module
enabling png as an encoder
When looking the "VLC-Android" repository of VideoLAN, there is a file responsible for building libVLC.
In line 396, sout module seems to be enabled by default.
Before compiling, I enabled png as an encoder in vlc/contrib/src/ffmpeg/rules.mak, as pointed out in the forum.
However, there is still no function related to snapshots in either org.videolan.libvlc.MediaPlayer or org.videolan.libvlc.VLCVideoLayout.
The question is: how can I create a snapshot (either into a file or into a buffer) on Android with libVLC, without using TextureView?
Update 1:
Fact 1:
I found the reason why it's unavailable on Android. In VLC's core source tree, in the file lib/video.c at line 145, there is the snapshot function with a massive FIXME warning:
/* FIXME: This is not atomic. All parameters should be passed at once
* (obviously _not_ with var_*()). Also, the libvlc object should not be
* used for the callbacks: that breaks badly if there are concurrent
* media players in the instance. */
var_Create( p_vout, "snapshot-width", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-width", i_width);
var_Create( p_vout, "snapshot-height", VLC_VAR_INTEGER );
var_SetInteger( p_vout, "snapshot-height", i_height );
var_Create( p_vout, "snapshot-path", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-path", psz_filepath );
var_Create( p_vout, "snapshot-format", VLC_VAR_STRING );
var_SetString( p_vout, "snapshot-format", "png" );
var_TriggerCallback( p_vout, "video-snapshot" );
vlc_object_release( p_vout );
Fact 2:
I wanted to go in another direction with this. If the snapshot function is not usable (and not wise to use), I thought of some fallback solutions:
there is a video filter in VLC named scene. It produces still images of the video to a specific path. I tried using it, but video filters cannot be changed at runtime, so this attempt failed.
I also tried to do it from the MediaPlayer (via Media.addOption), but video filters likewise cannot be changed at the MediaPlayer level on Android.
I then tried passing the filter config as an argument to libVLC initialization, and that finally succeeded; however, creating a new libVLC instance every time I need a screenshot is not an effective solution.
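Roughly, this is what I mean by passing the filter config at libVLC initialization (a sketch using the C API; the path and ratio are just example values, and on Android the equivalent option strings would go into the options list handed to the org.videolan.libvlc.LibVLC constructor):
#include <vlc/vlc.h>
/* Sketch: create a libVLC instance with the scene filter enabled from the start. */
const char *args[] = {
    "--video-filter=scene",
    "--scene-format=png",
    "--scene-path=/sdcard/snapshots/",  /* placeholder output directory */
    "--scene-prefix=snap",
    "--scene-ratio=30"                  /* write roughly every 30th frame */
};
libvlc_instance_t *vlc = libvlc_new(sizeof(args) / sizeof(args[0]), args);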
A few ways to go about this...
Here's a cross-platform thumbnailer example using libVLC: https://code.videolan.org/mfkl/libvlcsharp-samples/blob/master/PreviewThumbnailExtractor.Skia/Program.cs. It should work on Android without much editing, since it doesn't use any OS-specific feature. You should be able to translate it to Java/Kotlin as well.
There is a libVLC function that allows you to take a snapshot. Just seek to the time you want and call it: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__video.html#ga9b0a3870ce962aa0358050b2d5a59143
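A minimal sketch of calling it through the C API (the path is a placeholder; per the linked documentation, passing 0 for both width and height keeps the original size):
/* mp is an existing, playing libvlc_media_player_t*; 0 selects the first video output. */
int err = libvlc_video_take_snapshot(mp, 0, "/sdcard/snapshot.png", 0, 0);
if (err != 0) {
    /* -1 is returned if no video output is available yet. */
}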
In VLC Android, the medialibrary now manages thumbnails.
LibVLC 4 now bundles a thumbnailer https://github.com/videolan/vlc/blob/d40eb012b10cc355ea9ad7a13eaf494b8e826d78/include/vlc/libvlc_media.h#L845
Good luck.
I'm using the Android oboe library for high performance audio in a music game.
In the assets folder I have two .raw files (both 48000 Hz, 16-bit PCM, about 60 kB each):
std_kit_sn.raw
std_kit_ht.raw
These are loaded into memory as SoundRecordings and added to a Mixer. kSampleRateHz is 48000:
stdSN = SoundRecording::loadFromAssets(mAssetManager, "std_kit_sn.raw");
stdHT = SoundRecording::loadFromAssets(mAssetManager, "std_kit_ht.raw");
mMixer.addTrack(stdSN);
mMixer.addTrack(stdHT);
// Create a builder
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
builder.setSampleRate(kSampleRateHz);
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
LOGD("After creating a builder");
// Open stream
Result result = builder.openStream(&mAudioStream);
if (result != Result::OK){
LOGE("Failed to open stream. Error: %s", convertToText(result));
}
LOGD("After openstream");
// Reduce stream latency by setting the buffer size to a multiple of the burst size
mAudioStream->setBufferSizeInFrames(mAudioStream->getFramesPerBurst() * 2);
// Start the stream
result = mAudioStream->requestStart();
if (result != Result::OK){
LOGE("Failed to start stream. Error: %s", convertToText(result));
}
LOGD("After starting stream");
They are triggered to play at the required times with standard code (as per the Google tutorials):
stdSN->setPlaying(true);
stdHT->setPlaying(true); //Nasty Sound
The audio callback is standard (as per Google tutorials):
DataCallbackResult SoundFunctions::onAudioReady(AudioStream *mAudioStream, void *audioData, int32_t numFrames) {
// Play the stream
mMixer.renderAudio(static_cast<int16_t*>(audioData), numFrames);
return DataCallbackResult::Continue;
}
std_kit_sn.raw plays fine, but std_kit_ht.raw has a nasty distortion. Both play with low latency. Why does one play fine while the other is badly distorted?
I loaded your sample project and I believe the distortion you hear is caused by clipping/wraparound during mixing of sounds.
The Mixer object from the sample is a summing mixer. It just adds the values of each track together and outputs the sum.
You need to add some code to reduce the volume of each track to avoid exceeding the limits of an int16_t (although you're welcome to file a bug on the oboe project and I'll try to add this in an upcoming version). If you exceed this limit you'll get wraparound which is causing the distortion.
Additionally, your app is hardcoded to run at 22050 frames/sec. This will result in sub-optimal latency across most mobile devices because the stream is forced to upsample to the audio device's native frame rate. A better approach would be to leave the sample rate undefined when opening the stream - this will give you the optimal frame rate for the current audio device - then use a resampler on your source files to supply audio at this frame rate.
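To illustrate the first point, here is a minimal sketch of summing with per-track attenuation and clamping (the function and parameter names are placeholders, not the sample's actual Mixer code):
#include <cstdint>
// Mix two source buffers into the output with a gain per track, clamping the
// sum so it cannot wrap around the int16_t range.
void mixTwoTracks(const int16_t *trackA, const int16_t *trackB,
                  int16_t *out, int32_t numFrames, float gain /* e.g. 0.5f */) {
    for (int32_t i = 0; i < numFrames; ++i) {
        int32_t sum = static_cast<int32_t>(trackA[i] * gain)
                    + static_cast<int32_t>(trackB[i] * gain);
        if (sum > 32767)  sum = 32767;   // clamp instead of wrapping
        if (sum < -32768) sum = -32768;
        out[i] = static_cast<int16_t>(sum);
    }
}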
In a Xamarin Android (not Xamarin Forms) application on KitKat (API 19), using Visual Studio 2015 Update 3, I'm having trouble with playback of sounds from a SoundPool. Sometimes my sound plays fine. Other times it stutters.
Weirdest of all, on rare occasion, playback fails completely with a warning in the debug log like:
W/SoundPool(30751): sample 2 not READY
This is despite the fact that I have absolutely positively waited for the sample to be loaded before trying to play it, and despite the same sample ID playing just fine immediately before and after the "sample X not READY" failure.
The TL;dr version of the code:
var Pool = new SoundPool(6, Stream.Music, 0);
var ToneID = await Pool.LoadAsync(Android.App.Application.Context, Resource.Raw.tone, 1);
beepButton.Click += (sender, e) => Pool.Play(ToneID, 1.0F, 1.0F, 1, 0, 1.0F);
The Resource.Raw.tone is little-endian 16-bit PCM mono audio with a sample rate of 44100Hz and a Microsoft "WAV" header. It is a 0.25s 440Hz sine wave created using Audacity 2.1.3.
I know all about SoundPool.Builder. It was introduced in API 21. I'm targeting API 19.
The actual code disables the button for 500ms after each press, so there's no chance I'm trying to play the sample more than once concurrently. Indeed, I can reproduce the problem with individual presses some tens of seconds apart.
Things I have tried without success:
using smaller or larger maxStreams (first arg) values when calling the SoundPool ctor
using different Android.Media.Stream enumeration values for streamType (second arg) when calling the SoundPool ctor
using different srcQuality (third arg) values when calling the SoundPool ctor (despite the Android documentation saying it does nothing and to always use zero)
using different priority (third arg) values when calling the SoundPool.Load() instance method
using sub-unity volume arguments (like 0.99F) when calling the SoundPool.Play() instance method
using different priority (fourth arg) values when calling the SoundPool.Play() instance method
using different file formats (MP3, OggVorbis) for my sound resource
Here is a .zip file containing my entire ready-to-build application. This is essentially just the template app except for the MainActivity.cs.
What the app should do: beep each time you press the (one and only) button on the UI. It will probably do just that the first dozen presses or so. But then you'll hear "BaBeep" or "BeeeeBip" or "Bee<pause>beep" or silence.
FWIW, the same resource plays back just fine every time using MediaPlayer.
(Edited to link a much-simplified and cleaned-up .zip file example.)
In my Android application, I am encoding some media in webm (vp8) format using MediaCodec. The encoding is working as expected. However, I need to ensure that I create a sync frame once in a while. Here is what I do:
encoder.queueInputBuffer(..., MediaCodec.BUFFER_FLAG_SYNC_FRAME);
Later in the code, I check for sync frame:
encoder.dequeueOutputBuffer(bufferInfo, 0);
boolean isSyncFrame = (bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0;
The problem is that isSyncFrame never gets a true value.
I am wondering if I am making a mistake in my encoding configuration. Maybe there is a better way to tell the encoder to create a sync frame once in a while.
I hope it is not a bug in MediaCodec. Thank you in advance for your help.
There is no way (current as of Android 4.3) to request an on-demand sync frame using MediaCodec encoders. This is partly due to OMX, the underlying codec implementation in Android, which does not provide a way to specify which input frame should be encoded as a sync frame, although it does have a way to trigger a sync frame "in the near future".
feisal's answer is the only currently supported way to control sync frames, but you have to do it at configuration time.
Edit (re: jesup):
You can trigger a sync frame in the near future using MediaCodec.setParameters:
Bundle params = new Bundle();
params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
mCodec.setParameters(params);
Unfortunately, there is no (reliable) way to tell in MediaCodec whether an encoded buffer is a sync frame, other than doing it on your own by inspecting the bytes.
You can set the rate of I-frames in the MediaFormat object of your encoder by calling setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, secsBetweenIFrames), where secsBetweenIFrames is the desired interval in seconds.
I have an IP camera which is streaming video in MJPEG format. My aim is to receive the stream and display it in my own custom Android app. For this I see three programming alternatives on the Android platform:
Using the built-in Android MediaPlayer class
Using the FFmpeg library in native C and accessing it through JNI
Using the GStreamer port for Android to receive the stream
Which of these would be the better solution?
I have no experience with FFmpeg or GStreamer, so how feasible are they for this task?
Use GStreamer for it.
I have used GStreamer on a BeagleBoard with a 1 GHz processor to capture images from two cameras at 30 fps with very little CPU load.
GStreamer can merge images, overlay text, and change formats, and it easily presents what you want in the stream. The only thing you need to do is connect its black-box elements to each other.
You can add these boxes either dynamically or statically.
If your stream does not need to change depending on input to your program, I suggest using the static approach. But I am not sure whether it works on Android.
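As a rough sketch of the GStreamer route for an MJPEG camera (the camera URL is a placeholder, and which elements are available depends on the plugins included in your Android GStreamer build):
#include <gst/gst.h>
/* Receive an HTTP MJPEG stream, split the multipart parts,
   decode the JPEG frames and display them. */
GError *error = NULL;
GstElement *pipeline = gst_parse_launch(
    "souphttpsrc location=http://192.168.0.10/video.mjpg ! "
    "multipartdemux ! jpegdec ! autovideosink",
    &error);
if (pipeline)
    gst_element_set_state(pipeline, GST_STATE_PLAYING);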
To test the third option (GStreamer) you can use this app: https://play.google.com/store/apps/details?id=pl.effisoft.rpicamviewer2. You can also open a GStreamer preview from your own code using the following:
Intent intent = new Intent("pl.effisoft.rpicamviewer2.PREVIEW");
int camera = 0;
//--------- Basic settings
intent.putExtra("full_screen", true);
intent.putExtra("name"+camera, "My pipeline name");
intent.putExtra("host"+camera, "192.168.0.1");
intent.putExtra("port"+camera, 5000);
intent.putExtra("description"+camera, "My pipeline description");
intent.putExtra("uuid"+camera, UUID.randomUUID().toString() );
intent.putExtra("aspectRatio"+camera, 1.6);
intent.putExtra("autoplay"+camera, true);
//--------- Enable advanced mode
intent.putExtra("advanced"+camera, true); //when advanced=true, then custom_pipeline will be played
//when advanced=false, then pipeline will be generated from host, port (use only for backward compatibility with previous versions)
intent.putExtra("custom_pipeline"+camera, "videotestsrc ! warptv ! autovideosink");
//--------- Enable application extra features
intent.putExtra("extraFeaturesEnabled"+camera, false);
//--------- Add autoaudiosink to featured pipeline
intent.putExtra("extraFeaturesSoundEnabled"+camera, false);
//--------- Scale Video Stream option
intent.putExtra("extraResizeVideoEnabled"+camera, false);
intent.putExtra("width"+camera, 320); //used only when extraResizeVideoEnabled=true
intent.putExtra("height"+camera, 200); //used only when extraResizeVideoEnabled=true
//--------- Add plugins
ArrayList<String> plugins = new ArrayList<String>();
intent.putExtra("plugins"+camera, plugins);
intent.setPackage("pl.effisoft.rpicamviewer2");
startActivityForResult(intent, 0);