I'm trying to play a UDP stream using GStreamer on Android (I've used this tutorial from the official GStreamer website). I can play RTSP and HTTPS streams, but when I pass a UDP URI (like this: udp://#238.0.0.1:1234) nothing happens, just a black screen. In the log I have: Error received from element uridecodebin1: Your GStreamer installation is missing a plug-in. I've found some documentation here about installing plugins, but I don't understand how to do that.
Here is the piece of the code I use:
data->context = g_main_context_new ();
g_main_context_push_thread_default (data->context);

/* Build pipeline */
data->pipeline = gst_parse_launch ("playbin", &error);
if (error) {
  gchar *message =
      g_strdup_printf ("Unable to build pipeline: %s", error->message);
  g_clear_error (&error);
  set_ui_message (message, data);
  g_free (message);
  return NULL;
}
and the second one:
/* Set playbin2's URI */
void
gst_native_set_uri (JNIEnv * env, jobject thiz, jstring uri)
{
  CustomData *data = GET_CUSTOM_DATA (env, thiz, custom_data_field_id);
  if (!data || !data->pipeline)
    return;
  const gchar *char_uri = (*env)->GetStringUTFChars (env, uri, NULL);
  GST_DEBUG ("Setting URI to %s", char_uri);
  if (data->target_state >= GST_STATE_READY)
    gst_element_set_state (data->pipeline, GST_STATE_READY);
  g_object_set (data->pipeline, "uri", char_uri, NULL);
  (*env)->ReleaseStringUTFChars (env, uri, char_uri);
  data->duration = GST_CLOCK_TIME_NONE;
  data->is_live |=
      (gst_element_set_state (data->pipeline,
          data->target_state) == GST_STATE_CHANGE_NO_PREROLL);
}
Full code is here
This is the first thing I'm doing in C and JNI, so I would be grateful for a working code snippet.
OK, I've accidentally found the solution. I just modified my Android.mk file by adding $(GSTREAMER_PLUGINS_CODECS_RESTRICTED) to the GSTREAMER_PLUGINS line. Now UDP streams work fine!
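For anyone wondering what that change looks like: the tutorial's Android.mk collects plugin groups into a single GSTREAMER_PLUGINS line, and the fix appends the restricted-codecs group to it. A sketch of the result, assuming the groups the official tutorials typically list (yours may differ):

GSTREAMER_PLUGINS := $(GSTREAMER_PLUGINS_CORE) $(GSTREAMER_PLUGINS_PLAYBACK) \
                     $(GSTREAMER_PLUGINS_CODECS) $(GSTREAMER_PLUGINS_NET) \
                     $(GSTREAMER_PLUGINS_SYS) $(GSTREAMER_PLUGINS_CODECS_RESTRICTED)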
I'm struggling to read a file stream inside the Android environment using the C++ standard library.
I believe I've set all the permissions correctly (1st figure), and I'm getting the file path using an Android internal library. Would you please give me a snippet that correctly reads a file using std::getline() on a std::ifstream?
For instance, I get "/document/1EF7-1509:Download/015_1440.jpg" for the image file existing in the 2nd figure under the "Download" folder. This path is the raw value returned by Intent.getData().getPath() with ACTION_GET_CONTENT.
extern "C" JNIEXPORT jstring JNICALL Java_com_example_testpplication_MainActivity_testGeneralEncryption(JNIEnv* env, jobject, jstring myString)
{
const char *nativeString = env->GetStringUTFChars(myString, nullptr);
std::string filePath = std::string(nativeString);
std::string buffer;
std::ifstream fStreamIn(filePath);
if(fStreamIn.is_open())
{
std::getline(fStreamIn, buffer);
}
else
{
bool exists = fStreamIn.good();
if(exists)
{
buffer = "Exists";
}
else
{
buffer = "Non-existing";
}
}
return env->NewStringUTF((buffer + ":" + filePath).c_str());
}
Thanks to @blackapps, I googled a different keyword and found the answer below. Basically, my question is a duplicate.
Inspired by this answer:
https://stackoverflow.com/a/49221353/1770003
The idea is that Android doesn't directly return an absolute path the way other operating systems such as Windows do. A different approach is needed.
The approach I took is:
Add this library to my solution: https://android-arsenal.com/details/1/8142#!package
Edit the project as described here: https://github.com/onimur/handle-path-oz/wiki/Java-Single-Uri
In my case, I moved the caller into onRequestHandlePathOz, which is added by implementing "HandlePathOzListener.SingleUri".
@Override
public void onRequestHandlePathOz(PathOz pathOz, Throwable throwable)
{
    txt_pathShow.setText(testGeneralEncryption(pathOz.getPath()));
}
The method "testGeneralEncryption" is defined in the original question and pathOz.getPath() is passed as an argument to the parameter "jstring myString".
I would like to play back a byte stream with the media player in libVLC for Android, but I can't find any interface or class where I could "inject" a byte stream. The only options for playback are providing a file descriptor, a path to a file, or a URI.
Android's native media player provides the interface setDataSource(MediaDataSource dataSource), where a byte stream can be injected by extending the class MediaDataSource. Do I have a similar possibility in libVLC for Android?
The libVLC API you are looking for is libvlc_media_new_callbacks.
However, it seems it is not currently exposed to Java to be used with a Java stream parameter. This would need to be implemented by you in the libvlcjni bindings, I believe.
You could take inspiration from this existing code, which makes use of that API:
void
Java_org_videolan_libvlc_Media_nativeNewFromFdWithOffsetLength(
    JNIEnv *env, jobject thiz, jobject libVlc, jobject jfd, jlong offset, jlong length)
{
    vlcjni_object *p_obj;
    int fd = FDObject_getInt(env, jfd);

    if (fd == -1)
        return;

    p_obj = VLCJniObject_newFromJavaLibVlc(env, thiz, libVlc);
    if (!p_obj)
        return;

    p_obj->u.p_m =
        libvlc_media_new_callbacks(p_obj->p_libvlc,
                                   media_cb_open,
                                   media_cb_read,
                                   media_cb_seek,
                                   media_cb_close,
                                   p_obj);

    if (Media_nativeNewCommon(env, thiz, p_obj) == 0)
    {
        vlcjni_object_sys *p_sys = p_obj->p_sys;
        p_sys->media_cb.fd = fd;
        p_sys->media_cb.offset = offset;
        p_sys->media_cb.length = length >= 0 ? length : UINT64_MAX;
    }
}
https://github.com/videolan/vlc-android/blob/f05db3f9b51e64061ff73c794e6a7bfb44f34f65/libvlc/jni/libvlcjni-media.c#L284-L313
LibVLCSharp has this implemented, including for Android, but it's .NET.
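For orientation, here is a minimal sketch of what such callbacks could look like when serving media from an in-memory byte buffer. Only the libvlc_media_new_callbacks signature and the callback shapes come from the libVLC C API; the mem_stream struct and all names here are made up for illustration:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <vlc/vlc.h>

typedef struct {
    const uint8_t *data;  /* the byte stream to play */
    size_t size;          /* total size in bytes */
    size_t pos;           /* current read position */
} mem_stream;

/* Called when the media is opened: pass our state through and report the size. */
static int media_open_cb(void *opaque, void **datap, uint64_t *sizep)
{
    mem_stream *s = opaque;
    s->pos = 0;
    *datap = s;
    *sizep = s->size;
    return 0;
}

/* Called to read up to len bytes; returns the number of bytes actually read. */
static ssize_t media_read_cb(void *opaque, unsigned char *buf, size_t len)
{
    mem_stream *s = opaque;
    size_t left = s->size - s->pos;
    size_t n = len < left ? len : left;
    memcpy(buf, s->data + s->pos, n);
    s->pos += n;
    return (ssize_t) n;
}

/* Called to seek to an absolute byte offset. */
static int media_seek_cb(void *opaque, uint64_t offset)
{
    mem_stream *s = opaque;
    if (offset > s->size)
        return -1;
    s->pos = (size_t) offset;
    return 0;
}

/* Called when the media is closed; nothing to free in this sketch. */
static void media_close_cb(void *opaque)
{
    (void) opaque;
}

/* Usage, given a libvlc_instance_t *instance and a filled-in mem_stream stream:
 *   libvlc_media_t *m = libvlc_media_new_callbacks(instance, media_open_cb,
 *           media_read_cb, media_seek_cb, media_close_cb, &stream);
 */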
I want to create an Android application that can locate a video file (which is more than 300 MB) and compress it to a smaller MP4 file.
I already tried to do it with this.
That tutorial is only effective when you're compressing a small video (below 100 MB).
So I tried to implement it using JNI.
I managed to build FFmpeg using this.
But what I currently want to do is compress videos. I don't have very good knowledge of JNI, but I tried to understand it using the following link.
If someone can guide me through the steps to compress a video after opening it using JNI, that would be really great. Thanks.
Assuming you've got the String path of the input file, we can accomplish your task fairly easily. I'll assume you have an understanding of the NDK basics: how to connect a native .c file to native methods in a corresponding .java file (let me know if that's part of your question). Instead, I'll focus on how to use FFmpeg within the context of Android / JNI.
High-Level Overview:
#include <jni.h>
#include <android/log.h>
#include <string.h>
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"

#define LOG_TAG "FFmpegWrapper"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)

void Java_com_example_yourapp_yourJavaClass_compressFile(JNIEnv *env, jobject obj, jstring jInputPath, jstring jInputFormat, jstring jOutputPath, jstring jOutputFormat){
    // One-time FFmpeg initialization
    av_register_all();
    avformat_network_init();
    avcodec_register_all();

    const char* inputPath = (*env)->GetStringUTFChars(env, jInputPath, NULL);
    const char* outputPath = (*env)->GetStringUTFChars(env, jOutputPath, NULL);
    // Format names are hints. See the available options on your host machine via $ ffmpeg -formats
    const char* inputFormat = (*env)->GetStringUTFChars(env, jInputFormat, NULL);
    const char* outputFormat = (*env)->GetStringUTFChars(env, jOutputFormat, NULL);

    AVFormatContext *outputFormatContext = avFormatContextForOutputPath(outputPath, outputFormat);
    AVFormatContext *inputFormatContext = avFormatContextForInputPath(inputPath, inputFormat /* not necessary since the file can be inspected */);

    copyAVFormatContext(&outputFormatContext, &inputFormatContext);
    // Modify outputFormatContext->codec parameters to your liking
    // See http://ffmpeg.org/doxygen/trunk/structAVCodecContext.html

    int result = openFileForWriting(outputFormatContext, outputPath);
    if(result < 0){
        LOGE("openFileForWriting error: %d", result);
    }

    writeFileHeader(outputFormatContext);

    // Copy input to output packet by packet
    AVPacket *inputPacket;
    inputPacket = av_malloc(sizeof(AVPacket));

    int continueRecording = 1;
    int avReadResult = 0;
    int writeFrameResult = 0;
    int frameCount = 0;
    while(continueRecording == 1){
        avReadResult = av_read_frame(inputFormatContext, inputPacket);
        frameCount++;
        if(avReadResult != 0){
            if (avReadResult != AVERROR_EOF) {
                LOGE("av_read_frame error: %s", stringForAVErrorNumber(avReadResult));
            }else{
                LOGI("End of input file");
            }
            continueRecording = 0;
            break;  // don't write the (invalid) packet after a failed read
        }
        writeFrameResult = av_interleaved_write_frame(outputFormatContext, inputPacket);
        if(writeFrameResult < 0){
            LOGE("av_interleaved_write_frame error: %s", stringForAVErrorNumber(writeFrameResult));
        }
    }

    // Finalize the output file
    int writeTrailerResult = writeFileTrailer(outputFormatContext);
    if(writeTrailerResult < 0){
        LOGE("av_write_trailer error: %s", stringForAVErrorNumber(writeTrailerResult));
    }
    LOGI("Wrote trailer");
}
For the full content of all the auxiliary functions (the ones in camelCase), see my full project on GitHub. Got questions? I'm happy to elaborate.
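As one example of those helpers: the error-to-string lookup used above can be built on libavutil's av_strerror. This is just a sketch of what stringForAVErrorNumber might look like; the real one lives in the linked project:

#include "libavutil/error.h"

// Sketch: turn an FFmpeg error code into a printable message.
// Uses a static buffer, so it is not thread-safe.
static const char* stringForAVErrorNumber(int errorNumber){
    static char errorBuffer[AV_ERROR_MAX_STRING_SIZE];
    if (av_strerror(errorNumber, errorBuffer, sizeof(errorBuffer)) < 0)
        return "Unknown error";
    return errorBuffer;
}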
From the examples I got the basic idea of this code. However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
Can I expect a frame that was retrieved by calling one of the decoder functions, e.g. avcodec_decode_audio4, to hold suitable values to put directly into an encoder, or is the resampling step mentioned in the comment mandatory? (A sketch of that conversion follows these questions.)
Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function open_file_for_input, there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it's very hard to tell whether the general approach is right, or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.
ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for these terms. Is it possible to write working code without deep knowledge of audio?
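For reference, the planar-to-packed conversion that the demuxing.c comment recommends would look roughly like this with libswresample. This is only a sketch under assumptions of my own (stereo S16P input, packed S16 output); in real code you would also create the context once and reuse it rather than building one per frame:

#include "libavcodec/avcodec.h"
#include "libswresample/swresample.h"
#include "libavutil/channel_layout.h"

/* Sketch: convert one decoded planar stereo frame (AV_SAMPLE_FMT_S16P) into a
 * packed S16 buffer. 'packed' must hold frame->nb_samples * 2 channels * 2 bytes.
 * Returns the number of samples converted per channel, or a negative value on error. */
static int planar_to_packed(const AVFrame *frame, int sample_rate, uint8_t *packed)
{
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16,  sample_rate,  /* output */
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16P, sample_rate,  /* input */
            0, NULL);
    if (!swr)
        return -1;
    if (swr_init(swr) < 0) {
        swr_free(&swr);
        return -1;
    }

    uint8_t *out[] = { packed };
    int samples = swr_convert(swr, out, frame->nb_samples,
            (const uint8_t **) frame->extended_data, frame->nb_samples);
    swr_free(&swr);
    return samples;
}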
Here is my code so far; it currently crashes at avcodec_encode_audio2, and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
    jboolean isCopy;
    jclass configClass = (*env)->FindClass(env, "com/fscz/ffmpeg/Config");  // JNI class names use '/' separators
    jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
    logging = (*env)->GetStaticIntField(env, configClass, fid);

    /// open input
    const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
    AVFormatContext* pInputCtx;
    AVStream* pInputStream;
    open_input(sourceFile, &pInputCtx, &pInputStream);

    // open output
    const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
    const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
    const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
    AVFormatContext* pOutputCtx;
    AVOutputFormat* pOutputFmt;
    AVStream* pOutputStream;
    open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

    /// decode/encode
    int error = avformat_write_header(pOutputCtx, NULL);
    DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

    AVFrame* frame = avcodec_alloc_frame();
    DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
    frame->pts = 0;

    LOGI("allocate packet");
    AVPacket pktIn;
    AVPacket pktOut;
    LOGI("done");

    int got_frame, got_packet, len, frame_count = 0;
    int64_t processed_time = 0, duration = pInputStream->duration;

    while (av_read_frame(pInputCtx, &pktIn) >= 0) {
        do {
            len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
            DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
            if (len < 0) break;
            len = FFMIN(len, pktIn.size);
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
            if (got_frame) {
                do {
                    av_init_packet(&pktOut);
                    pktOut.data = NULL;
                    pktOut.size = 0;
                    LOGI("encode frame");
                    DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                    DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                    DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                    LOGI("tests done");
                    len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                    LOGI("encode done");
                    DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                } while (!got_packet);

                // write packet
                LOGI("write packet");
                /* Write the compressed frame to the media file. */
                error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                av_free_packet(&pktOut);
            }
            pktIn.data += len;
            pktIn.size -= len;
        } while (pktIn.size > 0);
        av_free_packet(&pktIn);
    }

    LOGI("write trailer");
    av_write_trailer(pOutputCtx);
    LOGI("end");

    /// close resources
    avcodec_free_frame(&frame);
    avcodec_close(pInputStream->codec);
    av_free(pInputStream->codec);
    avcodec_close(pOutputStream->codec);
    av_free(pOutputStream->codec);
    avformat_close_input(&pInputCtx);
    avformat_free_context(pOutputCtx);

    return 0;
}
Meanwhile I have figured this out and written an Android library project that does this (for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details
Is there a way for my Android app to retrieve and set extended user attributes of files? Is there a way to use java.nio.file.Files on Android? Is there any way to use setfattr and getfattr from my Dalvik app? I know that Android uses the ext4 file system, so I guess it should be possible. Any suggestions?
The Android Java library and the bionic C library do not support it. So you have to use native code with Linux syscalls for that.
Here is some sample code to get you started, tested on Android 4.2 and Android 4.4.
XAttrNative.java
package com.appfour.example;

import java.io.IOException;

public class XAttrNative {
    static {
        System.loadLibrary("xattr");
    }

    public static native void setxattr(String path, String key, String value) throws IOException;
}
xattr.c
#include <string.h>
#include <jni.h>
#include <unistd.h>
#include <asm/unistd.h>
#include <errno.h>

void Java_com_appfour_example_XAttrNative_setxattr(JNIEnv* env, jclass clazz,
        jstring path, jstring key, jstring value) {
    const char* pathChars = (*env)->GetStringUTFChars(env, path, NULL);
    const char* keyChars = (*env)->GetStringUTFChars(env, key, NULL);
    const char* valueChars = (*env)->GetStringUTFChars(env, value, NULL);
    int res = syscall(__NR_setxattr, pathChars, keyChars, valueChars,
            strlen(valueChars), 0);
    if (res != 0) {
        jclass exClass = (*env)->FindClass(env, "java/io/IOException");
        (*env)->ThrowNew(env, exClass, strerror(errno));
    }
    (*env)->ReleaseStringUTFChars(env, path, pathChars);
    (*env)->ReleaseStringUTFChars(env, key, keyChars);
    (*env)->ReleaseStringUTFChars(env, value, valueChars);
}
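The question also asks about getfattr. A matching getter could follow the same pattern via the getxattr syscall; this is only a sketch, with a fixed-size value buffer for simplicity, and the Java-side declaration in the comment is an assumed addition to XAttrNative.java:

// Sketch: read an extended attribute and return its value as a Java String.
// Pairs with an assumed declaration:
//   public static native String getxattr(String path, String key) throws IOException;
jstring Java_com_appfour_example_XAttrNative_getxattr(JNIEnv* env, jclass clazz,
        jstring path, jstring key) {
    const char* pathChars = (*env)->GetStringUTFChars(env, path, NULL);
    const char* keyChars = (*env)->GetStringUTFChars(env, key, NULL);
    char buf[4096];
    jstring result = NULL;
    int res = syscall(__NR_getxattr, pathChars, keyChars, buf, sizeof(buf) - 1);
    if (res >= 0) {
        buf[res] = '\0';  // getxattr returns the value length; NUL-terminate it
        result = (*env)->NewStringUTF(env, buf);
    } else {
        jclass exClass = (*env)->FindClass(env, "java/io/IOException");
        (*env)->ThrowNew(env, exClass, strerror(errno));
    }
    (*env)->ReleaseStringUTFChars(env, path, pathChars);
    (*env)->ReleaseStringUTFChars(env, key, keyChars);
    return result;
}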
This works fine on internal storage, but not on (emulated) external storage, which uses the sdcardfs filesystem or other kernel mechanisms to disable features the FAT filesystem does not support, such as symlinks and extended attributes. Arguably they do this because external storage can be accessed by connecting the device to a PC, and users expect that copying files back and forth preserves all information.
So this works:
File dataFile = new File(getFilesDir(), "test");
dataFile.createNewFile();
XAttrNative.setxattr(dataFile.getPath(), "user.testkey", "testvalue");
while this throws an IOException with the error message "Operation not supported on transport endpoint":
File externalStorageFile = new File(getExternalFilesDir(null), "test");
externalStorageFile.createNewFile();
XAttrNative.setxattr(externalStorageFile.getPath(), "user.testkey", "testvalue");