We would like to integrate libVLC into our project, replacing the Android MediaPlayer.
The project compiles and libVLC initializes without errors, but nothing happens.
Instead of rendering video on an Android widget such as a SurfaceView, GLSurfaceView or TextureView, we must render to a SurfaceTexture whose GL texture is created in the JNI part. We are using this texture to render a sky dome in our game.
Right now we decode video to a texture with the Android MediaPlayer and it runs perfectly, so the issue is related to the libVLC integration. No trace logs are written, so it is quite difficult to find where the problem is.
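For reference, the Android MediaPlayer path that currently works is roughly the following (a sketch; the GL texture id comes from our JNI/GL side and the exact names are illustrative):
// Sketch of the working Android MediaPlayer -> SurfaceTexture path (names illustrative).
// glTextureId is a GL_TEXTURE_EXTERNAL_OES texture created in the JNI/GL layer.
SurfaceTexture surfaceTexture = new SurfaceTexture(glTextureId);
Surface surface = new Surface(surfaceTexture);

android.media.MediaPlayer androidPlayer = new android.media.MediaPlayer();
androidPlayer.setSurface(surface);       // decoded frames end up in the GL texture
androidPlayer.setDataSource(mediaPath);  // (IOException handling omitted)
androidPlayer.prepareAsync();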
To summarize our code:
1 - Set up libvlc library
ArrayList<String> options = new ArrayList<String>();
options.add("-vvv"); // verbosity
libVLC = new LibVLC(options);
libVLC.setOnNativeCrashListener(this);
mediaPlayer = new MediaPlayer(libVLC);
mediaPlayer.setEventListener(this);
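(We do not attach the SurfaceTexture to the player at this point. If libVLC needs an explicit video-output attach, it would presumably go through the IVLCVout API, roughly like the sketch below; the calls are assumed from the org.videolan.libvlc bindings, and this may be exactly the piece we are missing.)
// Assumed org.videolan.libvlc API: attach the SurfaceTexture to the video output.
// textureWidth/textureHeight are illustrative values.
IVLCVout vlcVout = mediaPlayer.getVLCVout();
vlcVout.setVideoSurface(surfaceTexture);            // the SurfaceTexture backing our GL texture
vlcVout.setWindowSize(textureWidth, textureHeight);
vlcVout.attachViews();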
2 - Load movie source
if (mediaPath.startsWith("http")) {
    media = new Media(libVLC, mediaUri);
    this.isLoaded = (media.parse(Media.Parse.FetchNetwork) && getMediaInformation()) ||
                    (media.parse(Media.Parse.ParseNetwork) && getMediaInformation());
} else if (mediaPath.startsWith("android.resource://")) {
    media = new Media(libVLC, mediaUri);
} else {
    media = new Media(libVLC, mediaPath);
    this.isLoaded = (media.parse(Media.Parse.FetchLocal) && getMediaInformation()) ||
                    (media.parse(Media.Parse.ParseLocal) && getMediaInformation());
}
3 - Update texture in render loop
synchronized (this) {
    if (isFrameNew) {
        if (surfaceTexture != null) surfaceTexture.updateTexImage();
        isFrameNew = false;
        isMoviedone = false;
        return true;
    }
    return false;
}
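For completeness, isFrameNew is set from the SurfaceTexture frame callback, which is plain Android API and independent of the player backend; a sketch (the enclosing class name is illustrative):
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // Mark that a new frame is ready for updateTexImage() in the render loop.
        synchronized (VideoTextureRenderer.this) { // hypothetical enclosing class
            isFrameNew = true;
        }
    }
});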
But nothing happens. The texture stays empty and it seems that libVLC is not doing anything internally.
Has anyone had the same issue?
Thanks in advance.
I am making a project where you are supposed to be able to change the delay with which the feed from the cell phone camera is shown, so that people can see how their brains handle the delay/latency. I have managed to show the camera feed on a canvas that follows the camera around and fills the whole view of the Google Cardboard, but I am wondering how I could delay this video feed. Perhaps by using an image array of some sort?
I have tried searching for solutions online, but I have come up short of an answer. I have tried a Texture2D array, but the performance was really bad (I tried a modified version of this).
private bool camAvailable;
private WebCamTexture backCam;
private Texture defaultBackground;
public RawImage background;
public AspectRatioFitter fit;
// Start is called before the first frame update
void Start()
{
defaultBackground = background.texture;
WebCamDevice[] devices = WebCamTexture.devices;
if (devices.Length == 0 )
{
Debug.Log("No camera detected");
camAvailable = false;
return;
}
for (int i = 0; i < devices.Length; i++)
{
if (!devices[i].isFrontFacing)
{
backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height); // Used to find the correct camera
}
}
if (backCam == null)
{
Debug.Log("Unable to find back camera");
return;
}
backCam.Play();
background.texture = backCam;
camAvailable = true;
} // Tell me if this is not enough code; I don't really have a lot of experience in Unity, so I am unsure of how much is required for a minimal reproducible example
Should I use some sort of frame buffer or image/texture array for delaying the video? (Start "recording", wait a specified amount of time, start playing the "video" on the screen)
Thanks in advance!
Currently, I have a server that streams four RTMP MediaSources: one with a 720p video source, one with a 360p video source, one with a 180p video source, and one audio-only source. If I want to switch resolutions, I have to stop the ExoPlayer instance, prepare the track I want to switch to, and then play.
The code I use to prepare the ExoPlayer instance:
TrackSelection.Factory adaptiveTrackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
TrackSelector trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new AVControlExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
//noinspection deprecation
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
1000, // min buffer
2000, // max buffer
1000, // playback
1000, //playback after rebuffer
DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addAnalyticsListener(mAnalyticsListener);
With createSource() being:
private void createSource() {
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_BOTH_AV);
mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180()));
mMediaSource180.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource180"));
mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360()));
mMediaSource360.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource360"));
mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720()));
mMediaSource720.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource720"));
factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_AUDIO_ONLY);
mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL()));
mMediaSourceAudio.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSourceAudio"));
}
private void releaseSource() {
mMediaSource180.releaseSource(null);
mMediaSource360.releaseSource(null);
mMediaSource720.releaseSource(null);
mMediaSourceAudio.releaseSource(null);
}
And the code I currently use to switch between these MediaSources is:
private void changeTrack(MediaSource source) {
if (currentMediaSource == source) return;
try {
this.currentMediaSource = source;
mPlayer.stop(true);
mPlayer.prepare(source, true, true);
mPlayer.setPlayWhenReady(true);
if (source == mMediaSourceAudio) {
if (!audioOnly) {
try {
TransitionManager.beginDelayedTransition(rootView);
} catch (Exception ignored) {
}
layAudioOnly.setVisibility(View.VISIBLE);
vwExoPlayer.setVisibility(View.INVISIBLE);
audioOnly = true;
try {
GameQnAFragment fragment = findFragment(GameQnAFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
try {
GamePollingFragment fragment = findFragment(GamePollingFragment.class);
if (fragment != null) {
fragment.signAudioOnly();
}
} catch (Exception e) {
Trace.e(e);
}
}
} else {
if (audioOnly) {
TransitionManager.beginDelayedTransition(rootView);
layAudioOnly.setVisibility(View.GONE);
vwExoPlayer.setVisibility(View.VISIBLE);
audioOnly = false;
}
}
} catch (Exception ignore) {
}
}
I wanted to implement seamless switching between these MediaSources so that I don't need to stop and re-prepare, but it appears that this feature is not supported by ExoPlayer.
In addition, logging each MediaSource structure with the following code:
MappingTrackSelector.MappedTrackInfo info = ((DefaultTrackSelector)trackSelector).getCurrentMappedTrackInfo();
if(info != null) {
for (int i = 0; i < info.getRendererCount(); i++) {
TrackGroupArray trackGroups = info.getTrackGroups(i);
if (trackGroups.length != 0) {
for(int j = 0; j < trackGroups.length; j++) {
TrackGroup tg = trackGroups.get(j);
for(int k = 0; k < tg.length; k++) {
Log.i("track_info_"+i+"-"+j+"-"+k, tg.getFormat(k)+"");
}
}
}
}
}
Just nets me 1 video format and 1 audio format each.
My current workaround is to prepare another ExoPlayer instance in the background, replace the currently running instance with it once preparation is complete, and release the old instance (a rough sketch of this follows below). That reduces the lag between the MediaSources somewhat, but doesn't come close to achieving seamless resolution changes like YouTube.
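Roughly, that double-instance swap looks like the sketch below (ExoPlayer 2.x-era API; the field names and the helper methods for building the selector and load control are assumptions, not the exact production code):
// Sketch of the double-instance workaround (names assumed).
private void swapToSource(final MediaSource source) {
    final SimpleExoPlayer nextPlayer =
            ExoPlayerFactory.newSimpleInstance(mActivity, buildTrackSelector(), buildLoadControl());
    nextPlayer.addListener(new Player.DefaultEventListener() {
        @Override
        public void onPlayerStateChanged(boolean playWhenReady, int playbackState) {
            if (playbackState == Player.STATE_READY) {
                nextPlayer.removeListener(this);  // only swap once
                SimpleExoPlayer oldPlayer = mPlayer;
                mPlayer = nextPlayer;
                vwExoPlayer.setPlayer(mPlayer);   // hand the view over to the prepared instance
                mPlayer.setPlayWhenReady(true);
                if (oldPlayer != null) oldPlayer.release();
            }
        }
    });
    nextPlayer.prepare(source, true, true);
}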
Should I implement my own TrackSelector and jam-pack all four sources into it, implement another MediaSource that handles all four sources, or just tell the colleague who maintains the streams to switch to a single RTMP MediaSource with some sort of manifest listing all the available resolutions, so that AdaptiveTrackSelection can switch between them?
Adaptive Bit Rate Streaming is designed to allow easy switching between different bit rate streams, but it requires the streams to be segmented and the player to download the video segment by segment.
In this way the player can decide which bit rate to choose for the next segment depending on the current network conditions (and the device display size and type). The player is then able to move from one bit rate to another seamlessly, apart from the change in bitrate and quality itself.
See here for some more info: https://stackoverflow.com/a/42365034/334402
All the above relies on a delivery protocol which supports this segmentation and different bit rate streams. The most common ones today are HLS and MPEG-DASH.
The easiest way to support what I think you are looking for would be for your colleague who is supplying the stream to supply it using HLS and/or DASH.
Note that at the moment both HLS and DASH are required, as Apple devices require HLS while other devices tend to default to DASH. Traditionally HLS used TS as the container for the video in the segments and DASH used fragmented MP4, but there is now a move for both to use CMAF, which is essentially fragmented MP4.
So in theory a single set of bit rate videos can now be used for both HLS and DASH. In practice this will depend on whether your content is encrypted or not, as HLS and Apple used one encryption mode and everyone else another in the past. This is also changing, but it will take time before all devices support the new approach in which every device can use the same encryption mode, so if your streams are encrypted this is an added complication at the moment.
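On the player side, once the stream is published as HLS (or DASH), ExoPlayer's adaptive track selection handles the quality switching by itself. A minimal sketch using the same ExoPlayer 2.x-era APIs as the question (the manifest URL is a placeholder):
// Sketch: adaptive HLS playback, letting AdaptiveTrackSelection switch renditions.
DefaultBandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
TrackSelection.Factory adaptiveFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
DefaultTrackSelector trackSelector = new DefaultTrackSelector(adaptiveFactory);

SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector);

DataSource.Factory dataSourceFactory = new DefaultHttpDataSourceFactory("my-app", bandwidthMeter);
MediaSource hlsSource = new HlsMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/game/master.m3u8")); // placeholder URL

vwExoPlayer.setPlayer(player);
player.prepare(hlsSource);
player.setPlayWhenReady(true);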
I am having some trouble creating a pipeline for remote mp3 playback. If I construct the pipeline like this:
data->pipeline = gst_parse_launch("souphttpsrc location=https://xxxx ! mad ! autoaudiosink", &error);
It plays fine. However, I need a dynamic location, so I have to keep a reference to the source element. This is what I have:
data->source = gst_element_factory_make("souphttpsrc", "source");
decoder = gst_element_factory_make ("mad","decoder");
sink = gst_element_factory_make ("autoaudiosink", "sink");
data->pipeline = gst_pipeline_new ("mp3-player");
if (!data->pipeline || !data->source || !decoder || !sink) {
g_printerr ("Not all elements could be created.\n");
return NULL;
}
gst_bin_add_many (GST_BIN (data->pipeline), data->source, decoder, sink, NULL);
if (!gst_element_link_many (data->source, decoder, sink, NULL)) {
gchar *message = g_strdup_printf("Unable to build pipeline: %s", error->message);
g_clear_error (&error);
set_ui_message(message, data);
g_free (message);
return NULL;
}
Then I set the location via
g_object_set(data->source, "location", char_uri, NULL);
However nothing plays. I also see the following output:
ERROR/GStreamer+opensles_sink(17065): 0:00:00.401251335 0x6f43f520 openslessink.c:152:_opensles_query_capabilities: engine.GetInterface(IODeviceCapabilities) failed(0x0000000c)
Has anyone had experience with this before? The only solution I can think of is to rebuild the pipeline every time I want to change the source location, but that seems like overkill.
After setting the location, set the pipeline to the PLAYING state:
gst_element_set_state (data->pipeline, GST_STATE_PLAYING);
If you later want to change the URI, wait for EOS on the bus, call gst_element_set_state (data->pipeline, GST_STATE_READY);, set the new URI, and go back to PLAYING again.
I need to rotate a video to fit some of my needs. I'll explain the details in the following list.
I'm creating a Vine-like app. I have to record video segments and then merge all the parts into a single file. I'm doing this without issue in an Android app using the mp4parser library, latest version 1.0-RC-26, using the example provided on their website: here
The append-video example works fine if all the videos have the same orientation, but I discovered some issues recording video from the front camera, so the quick solution was to set the recording orientation to 270. The bad part of this solution is that segments with this orientation appear with the wrong orientation in the merged video.
My possible solution is to rotate the video as needed in the different situations, but I don't have a working example with my code. Searching the internet I found solutions like this one here. The problem with that code is that it is not compatible with the latest version (it gives a compilation error). I also tried to understand the logic of the library, but I'm not getting results. For example, I experimented with the setMatrix instruction on the Movie object, but it simply doesn't work.
public static void mergeVideo(int SegmentNumber) throws Exception {
Log.d("PM", "Merge process started");
Movie[] inMovies = new Movie[SegmentNumber] ;
//long[] Matrix = new long[SegmentNumber];
for (int i = 1 ; i <= SegmentNumber; i++){
File file = new File(getCompleteFilePath(i));
if (file.exists()){
FileInputStream fis = new FileInputStream(getCompleteFilePath(i));
inMovies[i - 1] = MovieCreator.build(fis.getChannel());
// Set rotation: I tried to experiment with this instruction, but it is not working.
// (It has to run after MovieCreator.build(), otherwise the array entry is still null.)
inMovies[i - 1].setMatrix(Matrix.ROTATE_90);
Log.d("PM", "Video " + i + " merged" );
}
//fis.close();
}
List<Track> videoTracks = new LinkedList<Track>();
List<Track> audioTracks = new LinkedList<Track>();
for (Movie m : inMovies) {
for (Track t : m.getTracks()) {
if (t.getHandler().equals("soun")) {
audioTracks.add(t);
}
if (t.getHandler().equals("vide")) {
videoTracks.add(t);
}
}
}
Movie result = new Movie();
if (audioTracks.size() > 0) {
result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
}
if (videoTracks.size() > 0) {
result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
}
Container out = new DefaultMp4Builder().build(result);
//out.getMovieBox().getMovieHeaderBox().setMatrix(Matrix.ROTATE_180); //set orientation, default merged video have wrong orientation
// Create a media file name
//
String filename = getCompleteMergedVideoFilePath() ;
FileChannel fc = new RandomAccessFile(String.format(filename), "rw").getChannel();
out.writeContainer(fc);
fc.close();
//don't leave until the file is on his place
File file = new File (filename);
do {
if (! file.exists()){
Log.d("PM", "Result file not ready");
}
} while (! file.exists() );
//
Log.d("PM", "Merge process finished");
}
Has anyone rotated video with the very latest version of mp4parser? English is not my native language, so I apologize for any grammar errors.
for (int i = 1; i <= SegmentNumber; i++) {
IsoFile isoFile = new IsoFile(getCompleteFilePath(i));
Movie m = new Movie();
List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(
TrackBox.class);
for (TrackBox trackBox : trackBoxes) {
trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_90);
m.addTrack(new Mp4TrackImpl(trackBox));
}
inMovies[i - 1] = m;
}
This is what I did to rotate a video.
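The difference from the commented-out line in the question is where the matrix is applied: the TrackHeaderBox matrix above rotates only that segment's track, while the MovieHeaderBox matrix rotates the whole merged presentation. For contrast, the whole-movie variant is just the call the question already has commented out:
// Rotates the entire merged output instead of individual segment tracks.
Container out = new DefaultMp4Builder().build(result);
out.getMovieBox().getMovieHeaderBox().setMatrix(Matrix.ROTATE_180);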
How to write (wrap) MPEG-4 data into an MP4 file in Android?
I am doing some video processing on the Android platform, but I don't know how to write the processed data (encoded in some standard, like MPEG-4) back into a video file such as MP4. I think it is best to use an API to do this, but I can't find the API I need.
Does anyone have any ideas?
mp4parser can only work with fully created frame streams; you can't write frame by frame with it. Correct me if I'm wrong.
H264TrackImpl h264Track = new H264TrackImpl(new BufferedInputStream(some input stream here));
Movie m = new Movie();
m.addTrack(h264Track); // the track has to be added, otherwise the resulting MP4 is empty
IsoFile out = new DefaultMp4Builder().build(m);
File file = new File("/sdcard/encoded.mp4");
FileOutputStream fos = new FileOutputStream(file);
out.getBox(fos.getChannel());
fos.close();
Now we need to know how to write frame by frame there.
OpenCV might be a little too much for the job, but I can't think of anything easier. OpenCV is a computer vision library that offers an API for C, C++ and Python.
Since you are using Android, you'll have to download a Java wrapper for OpenCV named JavaCV, which is a third-party API. I wrote a small post with instructions to install OpenCV/JavaCV on Windows and use it with NetBeans, but in the end you'll have to search for a tutorial that shows how to install OpenCV/JavaCV for the Android platform.
This is a C++ example that shows how to open an input video and copy the frames to an output file. But since you are using Android, an example using JavaCV is better, so the following code copies frames from an input video and writes them to an output file named out.mp4:
package opencv_videowriter;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_highgui.*;
public class OpenCV_videowriter
{
public static void main(String[] args)
{
CvCapture capture = cvCreateFileCapture("cleanfish47.mp4");
if (capture == null)
{
System.out.println("!!! Failed cvCreateFileCapture");
return;
}
int fourcc_code = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FOURCC);
double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
int w = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);
int h = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);
CvVideoWriter writer = cvCreateVideoWriter("out.mp4", // filename
fourcc_code, // video codec
fps, // fps
cvSize(w, h), // video dimensions
1); // is colored
if (writer == null)
{
System.out.println("!!! Failed cvCreateVideoWriter");
return;
}
IplImage captured_frame = null;
while (true)
{
// Retrieve frame from the input file
captured_frame = cvQueryFrame(capture);
if (captured_frame == null)
{
System.out.println("!!! Failed cvQueryFrame");
break;
}
// TODO: write code to process the captured frame (if needed)
// Store frame in output file
if (cvWriteFrame(writer, captured_frame) == 0)
{
System.out.println("!!! Failed cvWriteFrame");
break;
}
}
cvReleaseCapture(capture);
cvReleaseVideoWriter(writer);
}
}
Note: frames in OpenCV store pixels in the BGR order.
Your question doesn't make 100% sense. MPEG-4 is a family of specifications (ISO/IEC 14496-*) and MP4 is the file format specified in ISO/IEC 14496-14.
If you want to create an MP4 file from a raw AAC and/or H.264 stream, I would suggest using the mp4parser library. There is an example that shows how to mux AAC and H.264 into an MP4 file.
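A condensed sketch of that muxing example (mp4parser 1.1.x API; the input file names are placeholders, and both inputs are raw elementary streams rather than files already inside a container):
// Sketch: mux a raw H.264 stream and a raw AAC stream into an MP4 file.
H264TrackImpl h264Track = new H264TrackImpl(new FileDataSourceImpl("video.h264")); // raw Annex-B H.264
AACTrackImpl aacTrack = new AACTrackImpl(new FileDataSourceImpl("audio.aac"));     // raw AAC (ADTS)

Movie movie = new Movie();
movie.addTrack(h264Track);
movie.addTrack(aacTrack);

Container out = new DefaultMp4Builder().build(movie);
FileChannel fc = new FileOutputStream("output.mp4").getChannel();
out.writeContainer(fc);
fc.close();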
// Full working solution:
// 1. Add to app/build.gradle -> implementation 'com.googlecode.mp4parser:isoparser:1.1.22'
// 2. Add to your code:
try {
File mpegFile = new File("..."); // ... your mpeg file
File mp4file = new File("...");  // ... your mp4 file
DataSource channel = new FileDataSourceImpl(mpegFile);
IsoFile isoFile = new IsoFile(channel);
List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(TrackBox.class);
Movie movie = new Movie();
for (TrackBox trackBox : trackBoxes) {
movie.addTrack(new Mp4TrackImpl(channel.toString()
+ "[" + trackBox.getTrackHeaderBox().getTrackId() + "]", trackBox));
}
movie.setMatrix(isoFile.getMovieBox().getMovieHeaderBox().getMatrix());
Container out = new DefaultMp4Builder().build(movie);
FileChannel fc = new FileOutputStream(mp4file).getChannel();
out.writeContainer(fc);
fc.close();
isoFile.close();
channel.close();
Log.d("TEST", "file mpeg " + mpegFile.getPath() + " was changed to " + mp4file.getPath());
// mpegFile.delete(); // if you wish!
} catch (Exception e) {
e.printStackTrace();
}
// That's all! Happy coding =)