I am porting my Android AOSP-based distribution from Android K to Android N. It includes a modified version of the Media Player that decodes DVD subtitles.
The architecture of the Media Player evolved a lot between these two versions. In particular, it is now split into three processes (see https://source.android.com/devices/media/framework-hardening).
I am thus trying to use shared memory to make the MediaCodecService send decoded bitmap subtitles to the MediaServer. I modified the structure that MediaCodecService previously created, adding a subtitle_fd attribute: a file descriptor referring to the decoded bitmap subtitle. When a message is received by the MediaServer's NuPlayer for rendering, the code tries to map that file descriptor.
Unfortunately, the call to ::mmap always returns MAP_FAILED.
Do you have an idea of what I missed?
Code of the MediaCodecService part
AVSubtitleRect *rect = sub->rects[0];
size_t len = sizeof(*rect);

// Create an ashmem region large enough for the rect and map it locally.
int fd = ashmem_create_region("subtitle rect", len);
ashmem_set_prot_region(fd, PROT_READ | PROT_WRITE);
void* ptr = ::mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ptr == MAP_FAILED) {
    ALOGI("%s[%d] dvb ptr == MAP_FAILED", __FUNCTION__, __LINE__);
} else {
    ALOGI("Success creating FD with value %d", fd);
    // Only copy into the region once the mapping is known to be valid.
    memcpy(ptr, rect, len);
}
sub->subtitle_fd = fd;
sub->subtitle_size = len;
Code of the MediaServer part
int fd = mSubtitle->subtitle_fd;
size_t len = mSubtitle->subtitle_size;
ALOGI("Trying to map shared memory with FD = %d", fd);
void* ptr = ::mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ptr == MAP_FAILED) {
    ALOGI("Subtitle mmap ptr==MAP_FAILED %s", strerror(errno));
} else {
    ALOGI("Subtitle get ptr %p", ptr);
}
AVSubtitleRect *rect = (AVSubtitleRect *)ptr;  // only meaningful if mmap succeeded
Thank you so much!
I am trying to run 3DR texturing, but it always uses only vertex colors in the texture.
On every frame I store the frame as a PNG:
RGBImage frame(t3dr_image, 4);
std::ostringstream ss;
ss << dataset_.c_str();
ss << "/";
ss << poses_.size();
ss << ".png";
frame.Write(ss.str().c_str());
poses_.push_back(t3dr_image_pose);
timestamps_.push_back(t3dr_image.timestamp);
In the Save method I try to run the texturing:
1) I extract the full mesh from the context
Tango3DR_Mesh* mesh = 0;
Tango3DR_Status ret;
ret = Tango3DR_extractFullMesh(t3dr_context_, &mesh);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);
2) Create a texturing context using the extracted mesh
Tango3DR_ConfigH textureConfig;
textureConfig = Tango3DR_Config_create(TANGO_3DR_CONFIG_TEXTURING);
ret = Tango3DR_Config_setDouble(textureConfig, "min_resolution", 0.01);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);

Tango3DR_TexturingContext context;
context = Tango3DR_createTexturingContext(textureConfig, dataset.c_str(), mesh);
if (context == nullptr)
    std::exit(EXIT_SUCCESS);

Tango3DR_Config_destroy(textureConfig);
3) Call Tango3DR_updateTexture with the data I stored before (this does not work)
for (unsigned int i = 0; i < poses_.size(); i++) {
    std::ostringstream ss;
    ss << dataset_.c_str();
    ss << "/";
    ss << i;
    ss << ".png";
    RGBImage frame(ss.str());

    Tango3DR_ImageBuffer image;
    image.width = frame.GetWidth();
    image.height = frame.GetHeight();
    image.stride = frame.GetWidth() * 3;
    image.timestamp = timestamps_[i];
    // the data is definitely in this format
    image.format = TANGO_3DR_HAL_PIXEL_FORMAT_RGB_888;
    image.data = frame.GetData();

    ret = Tango3DR_updateTexture(context, &image, &poses_[i]);
    if (ret != TANGO_3DR_SUCCESS)
        std::exit(EXIT_SUCCESS);
}
4) Texture the mesh
ret = Tango3DR_Mesh_destroy(mesh);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);
mesh = 0;

ret = Tango3DR_getTexturedMesh(context, &mesh);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);
5) Save it as an OBJ (the resulting texture contains only vertex-color data; why?)
ret = Tango3DR_Mesh_saveToObj(mesh, filename.c_str());
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);

ret = Tango3DR_destroyTexturingContext(context);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);

ret = Tango3DR_Mesh_destroy(mesh);
if (ret != TANGO_3DR_SUCCESS)
    std::exit(EXIT_SUCCESS);
All methods returned TANGO_3DR_SUCCESS.
Full code here: https://github.com/lvonasek/tango
Thanks for reaching out and providing the detailed code breakdown.
The error is on our end: the library currently doesn't support RGB texture inputs; it assumes YUV for all input images. I've opened a ticket to track this bug, and we'll fix it in the next release by allowing RGB input and providing better return values for invalid image formats.
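In the meantime, a possible workaround is to convert each frame to YUV before handing it to the library. A rough sketch (the rgbToNv21 helper is illustrative, and the TANGO_3DR_HAL_PIXEL_FORMAT_YCrCb_420_SP constant name is an assumption based on the 3DR headers; the conversion uses the integer BT.601 approximation):

// Convert an RGB_888 frame to NV21: a full-resolution Y plane followed by
// an interleaved V/U plane at quarter resolution (one pair per 2x2 block).
static void rgbToNv21(const uint8_t* rgb, int width, int height, uint8_t* nv21) {
    uint8_t* vu = nv21 + width * height;  // chroma starts after the luma plane
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = rgb + (y * width + x) * 3;
            int r = p[0], g = p[1], b = p[2];
            nv21[y * width + x] = (uint8_t)(((66 * r + 129 * g + 25 * b + 128) >> 8) + 16);
            if ((y % 2 == 0) && (x % 2 == 0)) {
                *vu++ = (uint8_t)((( 112 * r -  94 * g -  18 * b + 128) >> 8) + 128); // V
                *vu++ = (uint8_t)(((-38 * r  -  74 * g + 112 * b + 128) >> 8) + 128); // U
            }
        }
    }
}

// Usage sketch: the NV21 buffer is width * height * 3 / 2 bytes.
// rgbToNv21(frame.GetData(), image.width, image.height, nv21_buf);
// image.data = nv21_buf;
// image.stride = image.width;                             // luma stride
// image.format = TANGO_3DR_HAL_PIXEL_FORMAT_YCrCb_420_SP; // assumed constant name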
Edit: Found another bug on our end. The API states image_pose should be the pose of the image, but our implementation actually expects the pose of the device. I've opened a bug, and this will be fixed in the next release (release-H).
You can try working around this for now by passing in the device pose without multiplying in the device-to-camera extrinsic calibration, although of course that's just a temporary band-aid.
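In code terms the workaround looks roughly like this (a sketch; device_pose stands for wherever the raw device pose for the frame is kept):

// Pass the raw device pose for the frame, deliberately NOT composing it
// with the device-to-camera extrinsic:
// image_pose = device_pose * device_T_color_camera;  <- skip this for now
ret = Tango3DR_updateTexture(context, &image, &device_pose);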
I am trying to access a file on Android from a native method, but I get "Invalid argument" after calling the read or write function. data_ptr is aligned to 512 bytes and is declared as a byte array in Java.
JNIEXPORT jint JNICALL
Java_com_aa_bb_NativeRead(JNIEnv* env, jobject clazz, jbyteArray data_ptr, jint length) {
    int ret = 0;
    jsize len = (*env)->GetArrayLength(env, data_ptr);
    jbyte *body = (*env)->GetByteArrayElements(env, data_ptr, 0);
    // fd and filePath are defined elsewhere
    fd = open(filePath, O_CREAT | O_RDWR | O_DIRECT | O_SYNC, S_IRUSR | S_IWUSR);
    ret = read(fd, body, length);
    if (ret < 0) {
        LOGE("errno: %s\n", strerror(errno));
    }
    (*env)->ReleaseByteArrayElements(env, data_ptr, body, 0);
    return ret;
}

JNIEXPORT jint JNICALL
Java_com_aa_bb_NativeWrite(JNIEnv* env, jobject clazz, jbyteArray data_ptr, jint length) {
    int ret = 0;
    jsize len = (*env)->GetArrayLength(env, data_ptr);
    jbyte *body = (*env)->GetByteArrayElements(env, data_ptr, 0);
    fd = open(filePath, O_CREAT | O_RDWR | O_DIRECT | O_SYNC, S_IRUSR | S_IWUSR);
    ret = write(fd, body, length);
    if (ret < 0) {
        LOGE("errno: %s\n", strerror(errno));
    }
    (*env)->ReleaseByteArrayElements(env, data_ptr, body, 0);
    return ret;
}
Edit: If I use open(filePath, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR); the error disappears. But I want to use O_DIRECT to bypass the cache and buffers and access the hardware directly.
O_DIRECT requires reads and writes to be aligned to multiples of the underlying filesystem's logical block size. From the open(2) man page:
The O_DIRECT flag may impose alignment restrictions on the length and address of user-space buffers and the file offset of I/Os. In Linux alignment restrictions vary by filesystem and kernel version and might be absent entirely. However there is currently no filesystem-independent interface for an application to discover these restrictions for a given file or filesystem. Some filesystems provide their own interfaces for doing so, for example the XFS_IOC_DIOINFO operation in xfsctl(3).

Under Linux 2.4, transfer sizes, and the alignment of the user buffer and the file offset must all be multiples of the logical block size of the filesystem. Under Linux 2.6, alignment to 512-byte boundaries suffices.
GetByteArrayElements provides no such guarantees. It merely returns the address of the base of the primitive array of elements; in this case that's the address of the bytes in the byte array, which are allocated by the Java memory manager. You'll either have to copy the bytes (defeating the purpose of O_DIRECT), remove O_DIRECT, or use some other strategy for allocating the memory (such as allocating it yourself with mmap(..., MAP_ANON)).
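For example, a sketch of that last option (on Linux 2.6+, where 512-byte alignment suffices; mmap returns page-aligned memory, which more than satisfies it, and direct_read here is a hypothetical helper):

#include <jni.h>
#include <sys/mman.h>
#include <unistd.h>

static jint direct_read(JNIEnv* env, jbyteArray data_ptr, int fd, jint length) {
    // mmap hands back page-aligned (>= 4096-byte-aligned) memory, so the
    // buffer satisfies O_DIRECT's alignment rules; length must still be a
    // multiple of 512 for the read itself.
    void* aligned = mmap(NULL, length, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (aligned == MAP_FAILED) return -1;

    int ret = read(fd, aligned, length);
    if (ret > 0) {
        // Copy into the Java array; this extra copy is the price of keeping
        // O_DIRECT happy while still exposing a Java-managed byte[].
        (*env)->SetByteArrayRegion(env, data_ptr, 0, ret, (const jbyte*) aligned);
    }
    munmap(aligned, length);
    return ret;
}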
From the examples I got the basic idea of this code. However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
Can I expect a frame that was retrieved by calling one of the decoder functions, e.g. avcodec_decode_audio4, to hold values suitable for feeding directly into an encoder, or is the resampling step mentioned in the comment mandatory?

Am I taking the right approach? ffmpeg is very asymmetric: if there is a function open_file_for_input, there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it is very hard to tell whether the general approach is right, or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.

ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for these terms. Is it possible to write working code without deep knowledge of audio?
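For illustration: a packed (interleaved) stereo buffer stores samples as L R L R ..., while a planar format keeps a separate buffer per channel (all L samples in one plane, all R samples in another). A minimal libswresample conversion from planar float (a common decoder output) to packed S16 could look like this (a sketch for stereo 44.1 kHz, using the swr_alloc_set_opts API of that ffmpeg generation; packed_buf is a caller-allocated buffer):

#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>

SwrContext *swr = swr_alloc_set_opts(NULL,
        AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16,  44100,  // out: packed S16
        AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLTP, 44100,  // in: planar float
        0, NULL);
swr_init(swr);

// For planar audio, frame->extended_data holds one pointer per channel;
// the packed output needs only a single interleaved buffer.
uint8_t *out[1] = { packed_buf };
int out_samples = swr_convert(swr, out, frame->nb_samples,
                              (const uint8_t **) frame->extended_data,
                              frame->nb_samples);
swr_free(&swr);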
Here is my code so far; right now it crashes at avcodec_encode_audio2 and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
    jboolean isCopy;
    jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
    jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
    logging = (*env)->GetStaticIntField(env, configClass, fid);

    /// open input
    const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
    AVFormatContext* pInputCtx;
    AVStream* pInputStream;
    open_input(sourceFile, &pInputCtx, &pInputStream);

    // open output
    const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
    const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
    const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
    AVFormatContext* pOutputCtx;
    AVOutputFormat* pOutputFmt;
    AVStream* pOutputStream;
    open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

    /// decode/encode
    error = avformat_write_header(pOutputCtx, NULL);
    DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

    AVFrame* frame = avcodec_alloc_frame();
    DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
    frame->pts = 0;

    LOGI("allocate packet");
    AVPacket pktIn;
    AVPacket pktOut;
    LOGI("done");

    int got_frame, got_packet, len, frame_count = 0;
    int64_t processed_time = 0, duration = pInputStream->duration;

    while (av_read_frame(pInputCtx, &pktIn) >= 0) {
        do {
            len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
            DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
            if (len < 0) break;
            len = FFMIN(len, pktIn.size);
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));

            if (got_frame) {
                do {
                    av_init_packet(&pktOut);
                    pktOut.data = NULL;
                    pktOut.size = 0;
                    LOGI("encode frame");
                    DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                    DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                    DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                    LOGI("tests done");
                    len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                    LOGI("encode done");
                    DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                } while (!got_packet);

                // write packet;
                LOGI("write packet");
                /* Write the compressed frame to the media file. */
                error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                av_free_packet(&pktOut);
            }
            pktIn.data += len;
            pktIn.size -= len;
        } while (pktIn.size > 0);
        av_free_packet(&pktIn);
    }

    LOGI("write trailer");
    av_write_trailer(pOutputCtx);
    LOGI("end");

    /// close resources
    avcodec_free_frame(&frame);
    avcodec_close(pInputStream->codec);
    av_free(pInputStream->codec);
    avcodec_close(pOutputStream->codec);
    av_free(pOutputStream->codec);
    avformat_close_input(&pInputCtx);
    avformat_free_context(pOutputCtx);
    return 0;
}
Meanwhile I have figured this out and written an Android library project that does this (for audio files): https://github.com/fscz/FFmpeg-Android. See the file /jni/audiodecoder.c for details.
I am using the following code to read files in the /assets/ folder:
// AAssetManager* mgr comes in as a parameter.
AAsset* asset = AAssetManager_open(mgr, file_name, AASSET_MODE_BUFFER);
if (NULL == asset) {
    __android_log_print(ANDROID_LOG_ERROR, "hdrijni", "_ASSET_NOT_FOUND_");
    return;
}
long size = AAsset_getLength(asset);
char* buffer = (char*) malloc(sizeof(char) * size);
int byteRead = AAsset_read(asset, buffer, size);
AAsset_close(asset);
I can get the content, but sometimes the content has some special characters appended.
Actually this problem is not caused by the Asset Manager, but by the shader code I am using. After I read the shader content, I create the shader like this:
GLuint shader = glCreateShader(type);
glShaderSource(shader, 1, &buffer2, NULL);
The asset buffer or content buffer transferred by JNI might not be NULL-terminated, so you need to use the 'length' parameter when calling glShaderSource. -- From others.
Just change the call to:
glShaderSource(shader, 1, &buffer2, &length);
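Putting it together, the shader loading could look like this (a sketch; mgr, file_name and type come from the surrounding code):

// The asset buffer is NOT null-terminated, so capture its length up front
// and pass that length to glShaderSource instead of relying on a '\0'.
AAsset* asset = AAssetManager_open(mgr, file_name, AASSET_MODE_BUFFER);
GLint length = (GLint) AAsset_getLength(asset);
char* buffer2 = (char*) malloc(length);
AAsset_read(asset, buffer2, length);
AAsset_close(asset);

GLuint shader = glCreateShader(type);
glShaderSource(shader, 1, (const GLchar* const*) &buffer2, &length);
glCompileShader(shader);
free(buffer2);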
I wrote a simple Android native function that gets a filename and some more arguments and reads the file by mmapping its memory.
Because it is mmapped, I don't really need to call read(); I just memcpy() from the address returned by mmap().
But somewhere I'm getting a SIGSEGV, probably because I'm trying to access memory I'm not permitted to. I don't understand why though: I asked for the whole file's memory to be mapped!
I'm attaching my code and the error I got:
EDIT: I fixed the non-terminating loop, but I am still getting a SIGSEGV after 25001984 bytes have been read.
The function is called with these arguments:
jn_bytes = 100,000,000
jbuffer_size = 8192
jshared=jpopulate=jadvice=0
void Java_com_def_benchmark_Benchmark_testMmapRead(JNIEnv* env, jobject javaThis,
        jstring jfile_name, unsigned int jn_bytes, unsigned int jbuffer_size,
        jboolean jshared, jboolean jpopulate, jint jadvice) {
    const char *file_name = env->GetStringUTFChars(jfile_name, 0);

    /* *** start count *** */
    int fd = open(file_name, O_RDONLY);

    // get the size of the file
    size_t length = lseek(fd, 0L, SEEK_END);
    lseek(fd, 0L, SEEK_SET);
    length = length > jn_bytes ? jn_bytes : length;

    // man 2 mmap: MAP_POPULATE is only supported for private mappings since Linux 2.6.23
    int flags = 0;
    if (jshared) flags |= MAP_SHARED; else flags |= MAP_PRIVATE;
    if (jpopulate) flags |= MAP_POPULATE;
    //int flags = MAP_PRIVATE;

    int * addr = reinterpret_cast<int *>(mmap(NULL, length, PROT_READ, flags, fd, 0));
    if (addr == MAP_FAILED) {
        __android_log_write(ANDROID_LOG_ERROR, "NDK_FOO_TAG", strerror(errno));
        return;
    }
    int * initaddr = addr;

    if (jadvice > 0)
        madvise(addr, length, jadvice == 1 ? (MADV_SEQUENTIAL | MADV_WILLNEED) : (MADV_DONTNEED));
    close(fd);

    char buffer[jbuffer_size];
    void *ret_val = buffer;
    int read_length = length;
    while (ret_val == buffer || read_length < jbuffer_size) {
        /***** GETTING SIGSEGV SOMEWHERE HERE IN THE WHILE ************/
        ret_val = memcpy(buffer, addr, jbuffer_size);
        addr += jbuffer_size;
        read_length -= jbuffer_size;
    }
    munmap(initaddr, length);
    /* stop count */

    env->ReleaseStringUTFChars(jfile_name, file_name);
}
and the error log:
15736^done
(gdb)
15737 info signal SIGSEGV
&"info signal SIGSEGV\n"
~"Signal Stop\tPrint\tPass to program\tDescription\n"
~"SIGSEGV Yes\tYes\tYes\t\tSegmentation fault\n"
15737^done
(gdb)
15738-stack-list-arguments 0 0 0
15738^done,stack-args=[frame={level="0",args=[]}]
(gdb)
15739-stack-list-locals 0
15739^done,locals=[]
(gdb)
There is a big problem here:
addr+=jbuffer_size;
You're bumping addr by sizeof(int) * jbuffer_size bytes whereas you just want to increment it by jbuffer_size bytes.
My guess is sizeof(int) is 4 on your system, hence you crash at around 25% of the way through your loop, because you're incrementing addr by a factor of 4x too much on each iteration.
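A minimal fix is to do the arithmetic on a byte-sized pointer type:

// With a char*, addr += jbuffer_size advances jbuffer_size bytes,
// not jbuffer_size * sizeof(int) as it does with an int*.
char* addr = reinterpret_cast<char*>(mmap(NULL, length, PROT_READ, flags, fd, 0));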
This loop never terminates because ret_val always equals buffer:
void *ret_val = buffer;
int read_length = length;
while(ret_val == buffer || read_length<jbuffer_size) {
/*****GETTING SIGSEGV SOMWHERE HERE IN THE WHILE************/
ret_val = memcpy(buffer, addr,jbuffer_size);
addr+=jbuffer_size;
read_length -= jbuffer_size;
}
memcpy always returns its first argument, so ret_val never changes.
The while loop is infinite:
while(ret_val == buffer || read_length<jbuffer_size) {
ret_val = memcpy(buffer, addr,jbuffer_size);
addr+=jbuffer_size;
read_length -= jbuffer_size;
}
as memcpy() always returns the destination buffer, so ret_val == buffer will always be true (and is therefore useless as part of the terminating condition). This means that addr keeps being advanced by jbuffer_size int-sized elements on every iteration and passed to memcpy(), eventually resulting in an access to invalid memory.
The condition in while (ret_val == buffer || read_length < jbuffer_size) is wrong: ret_val == buffer is always true, and if read_length < jbuffer_size is true when the loop is reached, it will always remain true, because read_length is only ever reduced (well, until it underflows INT_MIN).
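A corrected loop (a sketch, assuming addr was declared as a char* as the other answer suggests) counts down the bytes that remain and never reads past the mapping:

char* p = addr;                // byte pointer into the mapping
size_t remaining = length;     // bytes left to copy
while (remaining > 0) {
    // copy at most one buffer's worth, clamped to what is actually left
    size_t chunk = remaining < jbuffer_size ? remaining : jbuffer_size;
    memcpy(buffer, p, chunk);
    p += chunk;
    remaining -= chunk;
}
munmap(addr, length);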