I need to pass the FFmpeg 'raw' data back to my Java code in order to display it on the screen.
I have a native method that deals with FFmpeg and then calls a method in Java that takes a byte[] (so far) as an argument.
The byte array that is passed is read by Java, but calling BitmapFactory.decodeByteArray(bitmap, 0, bitmap.length); returns null. I have printed out the array and I get 200k elements (which is expected), but it cannot be decoded. So far what I'm doing is taking the data from AVFrame->data, casting it to unsigned char *, and then casting that to jbyteArray. After all the casting, I pass the jbyteArray as an argument to my Java method. Is there something I'm missing here? Why won't BitmapFactory decode the array into an image for display?
EDIT 1.0
Currently I am trying to obtain my image via
public void setImage(ByteBuffer bmp) {
bmp.rewind();
Bitmap bitmap = Bitmap.createBitmap(1920, 1080, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(bmp);
runOnUiThread(() -> {
ImageView imgViewer = findViewById(R.id.mSurfaceView);
imgViewer.setImageBitmap(bitmap);
});
}
But I keep getting an exception
JNI DETECTED ERROR IN APPLICATION: JNI NewDirectByteBuffer called with pending exception java.lang.RuntimeException: Buffer not large enough for pixels
at void android.graphics.Bitmap.copyPixelsFromBuffer(java.nio.Buffer) (Bitmap.java:657)
at void com.example.asmcpp.MainActivity.setSurfaceImage(java.nio.ByteBuffer)
EDIT 1.1
So, here is the full code that executes every time a frame comes in. Note that the ByteBuffer is created and passed from within this method:
void VideoClientInterface::onEncodedFrame(video::encoded_frame_t &encodedFrame) {
AVFrame *filt_frame = av_frame_alloc();
auto frame = std::shared_ptr<video::encoded_frame_t>(new video::encoded_frame_t,
[](video::encoded_frame_t *p) { if (p) delete p; });
if (frame) {
frame->size = encodedFrame.size;
frame->ssrc = encodedFrame.ssrc;
frame->width = encodedFrame.width;
frame->height = encodedFrame.height;
frame->dataType = encodedFrame.dataType;
frame->timestamp = encodedFrame.timestamp;
frame->frameIndex = encodedFrame.frameIndex;
frame->isKeyFrame = encodedFrame.isKeyFrame;
frame->isDroppable = encodedFrame.isDroppable;
frame->data = new char[frame->size];
if (frame->data) {
memcpy(frame->data, encodedFrame.data, frame->size);
AVPacket packet;
av_init_packet(&packet);
packet.dts = AV_NOPTS_VALUE;
packet.pts = encodedFrame.timestamp;
packet.data = (uint8_t *) encodedFrame.data;
packet.size = encodedFrame.size;
int ret = avcodec_send_packet(m_avCodecContext, &packet);
if (ret == 0) {
ret = avcodec_receive_frame(m_avCodecContext, m_avFrame);
if (ret == 0) {
m_transform = sws_getCachedContext(
m_transform, // previous context ptr
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_YUV420P, // src
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_RGB24, // dst
SWS_BILINEAR, nullptr, nullptr, nullptr // options
);
auto decodedFrame = std::make_shared<video::decoded_frame_t>();
decodedFrame->width = m_avFrame->width;
decodedFrame->height = m_avFrame->height;
decodedFrame->size = m_avFrame->width * m_avFrame->height * 3;
decodedFrame->timeStamp = m_avFrame->pts;
decodedFrame->data = new unsigned char[decodedFrame->size];
if (decodedFrame->data) {
uint8_t *dstSlice[] = {decodedFrame->data,
0,
0};// outFrame.bits(), outFrame.bits(), outFrame.bits()
const int dstStride[] = {decodedFrame->width * 3, 0, 0};
sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
0, m_avFrame->height, dstSlice, dstStride);
auto m_rawData = decodedFrame->data;
auto len = strlen(reinterpret_cast<char *>(m_rawData));
if (frameCounter == 10) {
jobject newArray = GetJniEnv()->NewDirectByteBuffer(m_rawData, len);
GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, newArray);
frameCounter = 0;
}
frameCounter++;
}
} else {
av_packet_unref(&packet);
}
} else {
av_packet_unref(&packet);
}
}
}
}
I am not entirely sure I am even doing that part correctly. If you see any errors in this, feel free to point them out.
You cannot cast a native byte array to jbyteArray and expect it to work. A byte[] is an actual Java object, with a length field, a reference count, and so on.
Use NewDirectByteBuffer instead to wrap your native buffer in a Java ByteBuffer. Note that a direct buffer has no backing byte[] (so .array() will not work); on the Java side, read from it with get() or pass it straight to APIs that accept a Buffer.
Note that this JNI operation is relatively expensive, so if you expect to do this on a per-frame basis, you might want to pre-allocate some bytebuffers and tell FFmpeg to write directly into those buffers.
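For illustration, here is a minimal sketch of that pre-allocation (the helpers initPixelBuffer/deliverFrame and the globals are hypothetical names; GetJniEnv(), m_obj and setSurfaceImage come from the question). It assumes the Java side keeps the ARGB_8888 bitmap from the edit, so the buffer needs width * height * 4 bytes and sws_scale would have to output a matching format such as AV_PIX_FMT_RGBA rather than RGB24:
static uint8_t *g_pixels = nullptr;      // destination that sws_scale() writes into
static jobject g_pixelBuffer = nullptr;  // global ref to the direct ByteBuffer wrapping g_pixels

void initPixelBuffer(JNIEnv *env, int width, int height) {
    const size_t size = (size_t) width * height * 4; // ARGB_8888 is 4 bytes per pixel
    g_pixels = new uint8_t[size];
    jobject local = env->NewDirectByteBuffer(g_pixels, (jlong) size);
    g_pixelBuffer = env->NewGlobalRef(local);        // keep it alive across JNI calls
    env->DeleteLocalRef(local);
}

// Call this once a frame has been converted into g_pixels.
void deliverFrame(JNIEnv *env, jobject m_obj, jmethodID setSurfaceImage) {
    env->CallVoidMethod(m_obj, setSurfaceImage, g_pixelBuffer);
}
With this, the Java side can rewind() the same ByteBuffer and call copyPixelsFromBuffer() for every frame, and no per-frame JNI buffer creation is needed.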
Background
I need to parse some zip files of various types (getting the content of some inner files for one purpose or another, including getting their names).
Some of the files are not reachable via a file path, as Android uses a Uri to reach them, and sometimes the zip file is inside another zip file. With the push to use SAF, it's even less possible to use a file path in some cases.
For this, we have two main classes to handle it: ZipFile and ZipInputStream.
The problem
When we have a file path, ZipFile is a perfect solution, and it's also very fast.
However, for the rest of the cases, ZipInputStream can run into issues, such as this one with a problematic zip file, and throw this exception:
java.util.zip.ZipException: only DEFLATED entries can have EXT descriptor
at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:321)
at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:124)
What I've tried
The only always-working solution would be to copy the file somewhere else, where you could parse it using ZipFile, but this is inefficient, requires free storage, and requires removing the file when you are done with it.
So what I've found is that Apache has a nice, pure-Java library (here) to parse zip files, and for some reason its InputStream solution (called "ZipArchiveInputStream") seems even more efficient than the native ZipInputStream class.
As opposed to what we have in the native framework, the library offers a bit more flexibility. I could, for example, load the entire zip file into a byte array and let the library handle it as usual, and this works even for the problematic zip files I've mentioned:
org.apache.commons.compress.archivers.zip.ZipFile(SeekableInMemoryByteChannel(byteArray)).use { zipFile ->
    for (entry in zipFile.entries) {
        val name = entry.name
        ... // use the zipFile like you do with the native framework
    }
}
gradle dependency:
// http://commons.apache.org/proper/commons-compress/ https://mvnrepository.com/artifact/org.apache.commons/commons-compress
implementation 'org.apache.commons:commons-compress:1.20'
Sadly, this isn't always possible, because it depends on the heap being able to hold the entire zip file, and on Android this is even more limited, because the heap size can be relatively small (the heap could be 100MB while the file is 200MB). Unlike a PC, where a huge heap can be configured, on Android it's not flexible at all.
So I searched for a JNI-based solution instead, to load the entire ZIP file into a byte array there, not on the heap (at least not entirely). This could be a nicer workaround, because if the ZIP fits into the device's RAM instead of the heap, it could prevent me from reaching OOM while also not requiring an extra file.
I've found a library called "larray" which seems promising, but sadly when I tried using it, it crashed, because its requirements include having a full JVM, meaning it is not suitable for Android.
EDIT: Seeing that I can't find any library or any built-in class, I tried to use JNI myself. Sadly I'm very rusty with it, so I looked at an old repository I made a long time ago that performs some operations on Bitmaps (here). This is what I came up with:
native-lib.cpp
#include <jni.h>
#include <android/log.h>
#include <cstdio>
#include <android/bitmap.h>
#include <cstring>
#include <unistd.h>
class JniBytesArray {
public:
uint32_t *_storedData;
JniBytesArray() {
_storedData = NULL;
}
};
extern "C" {
JNIEXPORT jobject JNICALL Java_com_lb_myapplication_JniByteArrayHolder_allocate(
JNIEnv *env, jobject obj, jlong size) {
auto *jniBytesArray = new JniBytesArray();
auto *array = new uint32_t[size];
for (int i = 0; i < size; ++i)
array[i] = 0;
jniBytesArray->_storedData = array;
return env->NewDirectByteBuffer(jniBytesArray, 0);
}
}
JniByteArrayHolder.kt
class JniByteArrayHolder {
external fun allocate(size: Long): ByteBuffer
companion object {
init {
System.loadLibrary("native-lib")
}
}
}
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
thread {
printMemStats()
val jniByteArrayHolder = JniByteArrayHolder()
val byteBuffer = jniByteArrayHolder.allocate(1L * 1024L)
printMemStats()
}
}
fun printMemStats() {
val memoryInfo = ActivityManager.MemoryInfo()
(getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager).getMemoryInfo(memoryInfo)
val nativeHeapSize = memoryInfo.totalMem
val nativeHeapFreeSize = memoryInfo.availMem
val usedMemInBytes = nativeHeapSize - nativeHeapFreeSize
val usedMemInPercentage = usedMemInBytes * 100 / nativeHeapSize
Log.d("AppLog", "total:${Formatter.formatFileSize(this, nativeHeapSize)} " +
"free:${Formatter.formatFileSize(this, nativeHeapFreeSize)} " +
"used:${Formatter.formatFileSize(this, usedMemInBytes)} ($usedMemInPercentage%)")
}
This doesn't seem right, because if I try to create a 1GB byte array using jniByteArrayHolder.allocate(1L * 1024L * 1024L * 1024L), it crashes without any exception or error logs.
The questions
Is it possible to use JNI for Apache's library, so that it will handle the ZIP file content which is contained within JNI's "world"?
If so, how can I do it? Is there any sample of how to do it? Is there a class for it? Or do I have to implement it myself? If so, can you please show how it's done in JNI?
If it's not possible, what other way is there to do it? Maybe an alternative to what Apache has?
As for the JNI solution, why doesn't it work well? How could I efficiently copy the bytes from the stream into the JNI byte array (my guess is that it would be via a buffer)?
I took a look at the JNI code you posted and made a couple of changes, mainly passing the real size argument to NewDirectByteBuffer and using malloc().
Here is the log output after allocating 800 MB:
D/AppLog: total:1.57 GB free:1.03 GB used:541 MB (34%)
D/AppLog: total:1.57 GB free:247 MB used:1.32 GB (84%)
After the allocation, the debugger shows the buffer with a limit of 800 MB, which is what we expect.
My C is very rusty, so I am sure that there is some work to be done. I have updated the code to be a little more robust and to allow for the freeing of memory.
native-lib.cpp
extern "C" {
static jbyte *_holdBuffer = NULL;
static jobject _directBuffer = NULL;
/*
This routine is not re-entrant and can handle only one buffer at a time. If a buffer is
allocated then it must be released before the next one is allocated.
*/
JNIEXPORT
jobject JNICALL Java_com_example_zipfileinmemoryjni_JniByteArrayHolder_allocate(
JNIEnv *env, jobject obj, jlong size) {
if (_holdBuffer != NULL || _directBuffer != NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Call to JNI allocate() before freeBuffer()");
return NULL;
}
// Max size for a direct buffer is the max of a jint even though NewDirectByteBuffer takes a
// long. Clamp max size as follows:
if (size > SIZE_T_MAX || size > INT_MAX || size <= 0) {
jlong maxSize = SIZE_T_MAX < INT_MAX ? SIZE_T_MAX : INT_MAX;
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Native memory allocation request must be >0 and <= %lld but was %lld.\n",
maxSize, size);
return NULL;
}
jbyte *array = (jbyte *) malloc(static_cast<size_t>(size));
if (array == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to allocate %lld bytes of native memory.\n",
size);
return NULL;
}
jobject directBuffer = env->NewDirectByteBuffer(array, size);
if (directBuffer == NULL) {
free(array);
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to create direct buffer of size %lld.\n",
size);
return NULL;
}
// memset() is not really needed but we call it here to force Android to count
// the consumed memory in the stats since it only seems to "count" dirty pages. (?)
memset(array, 0xFF, static_cast<size_t>(size));
_holdBuffer = array;
// Get a global reference to the direct buffer so Java isn't tempted to GC it.
_directBuffer = env->NewGlobalRef(directBuffer);
return directBuffer;
}
JNIEXPORT void JNICALL Java_com_example_zipfileinmemoryjni_JniByteArrayHolder_freeBuffer(
JNIEnv *env, jobject obj, jobject directBuffer) {
if (_directBuffer == NULL || _holdBuffer == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Attempt to free unallocated buffer.");
return;
}
jbyte *bufferLoc = (jbyte *) env->GetDirectBufferAddress(directBuffer);
if (bufferLoc == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to retrieve direct buffer location associated with ByteBuffer.");
return;
}
if (bufferLoc != _holdBuffer) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"DirectBuffer does not match that allocated.");
return;
}
// Free the malloc'ed buffer and the global reference. Java can not GC the direct buffer.
free(bufferLoc);
env->DeleteGlobalRef(_directBuffer);
_holdBuffer = NULL;
_directBuffer = NULL;
}
}
I also updated the array holder:
class JniByteArrayHolder {
external fun allocate(size: Long): ByteBuffer
external fun freeBuffer(byteBuffer: ByteBuffer)
companion object {
init {
System.loadLibrary("native-lib")
}
}
}
I can confirm that this code along with the ByteBufferChannel class provided by Botje here works for Android versions before API 24. The SeekableByteChannel interface was introduced in API 24 and is needed by the ZipFile utility.
The maximum buffer size that can be allocated is the maximum value of a jint, due to a JNI limitation. Larger data can be accommodated (if available) but would require multiple buffers and a way to handle them.
Here is the main activity for the sample app. An earlier version assumed that the InputStream read buffer was always filled completely and errored out when trying to put it into the ByteBuffer. This was fixed.
MainActivity.kt
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
}
fun onClick(view: View) {
button.isEnabled = false
status.text = getString(R.string.running)
thread {
printMemStats("Before buffer allocation:")
var bufferSize = 0L
// testzipfile.zip is not part of the project but any zip can be uploaded through the
// device file manager or adb to test.
val fileToRead = "$filesDir/testzipfile.zip"
val inStream =
if (File(fileToRead).exists()) {
FileInputStream(fileToRead).apply {
bufferSize = getFileSize(this)
close()
}
FileInputStream(fileToRead)
} else {
// If testzipfile.zip doesn't exist, we will just look at this one which
// is part of the APK.
resources.openRawResource(R.raw.appapk).apply {
bufferSize = getFileSize(this)
close()
}
resources.openRawResource(R.raw.appapk)
}
// Allocate the buffer in native memory (off-heap).
val jniByteArrayHolder = JniByteArrayHolder()
val byteBuffer =
if (bufferSize != 0L) {
jniByteArrayHolder.allocate(bufferSize)?.apply {
printMemStats("After buffer allocation")
}
} else {
null
}
if (byteBuffer == null) {
Log.d("Applog", "Failed to allocate $bufferSize bytes of native memory.")
} else {
Log.d("Applog", "Allocated ${Formatter.formatFileSize(this, bufferSize)} buffer.")
val inBytes = ByteArray(4096)
Log.d("Applog", "Starting buffered read...")
while (inStream.available() > 0) {
byteBuffer.put(inBytes, 0, inStream.read(inBytes))
}
inStream.close()
byteBuffer.flip()
ZipFile(ByteBufferChannel(byteBuffer)).use {
Log.d("Applog", "Starting Zip file name dump...")
for (entry in it.entries) {
Log.d("Applog", "Zip name: ${entry.name}")
val zis = it.getInputStream(entry)
while (zis.available() > 0) {
zis.read(inBytes)
}
}
}
printMemStats("Before buffer release:")
jniByteArrayHolder.freeBuffer(byteBuffer)
printMemStats("After buffer release:")
}
runOnUiThread {
status.text = getString(R.string.idle)
button.isEnabled = true
Log.d("Applog", "Done!")
}
}
}
/*
This function is a little misleading since it does not reflect the true status of memory.
After native buffer allocation, it waits until the memory is used before counting it as
used. After release, it doesn't seem to count the memory as released until garbage
collection. (My observations only.) Also, see the comment for memset() in native-lib.cpp
which is a member of this project.
*/
private fun printMemStats(desc: String? = null) {
val memoryInfo = ActivityManager.MemoryInfo()
(getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager).getMemoryInfo(memoryInfo)
val nativeHeapSize = memoryInfo.totalMem
val nativeHeapFreeSize = memoryInfo.availMem
val usedMemInBytes = nativeHeapSize - nativeHeapFreeSize
val usedMemInPercentage = usedMemInBytes * 100 / nativeHeapSize
val sDesc = desc?.run { "$this:\n" }
Log.d(
"AppLog", "$sDesc total:${Formatter.formatFileSize(this, nativeHeapSize)} " +
"free:${Formatter.formatFileSize(this, nativeHeapFreeSize)} " +
"used:${Formatter.formatFileSize(this, usedMemInBytes)} ($usedMemInPercentage%)"
)
}
// Not a great way to do this but not the object of the demo.
private fun getFileSize(inStream: InputStream): Long {
var bufferSize = 0L
while (inStream.available() > 0) {
val toSkip = inStream.available().toLong()
inStream.skip(toSkip)
bufferSize += toSkip
}
return bufferSize
}
}
A sample GitHub repository is here.
You can steal LWJGL's native memory management functions. It is BSD3 licensed, so you only have to mention somewhere that you are using code from it.
Step 1: given an InputStream is and a file size ZIP_SIZE, slurp the stream into a direct byte buffer created by LWJGL's org.lwjgl.system.MemoryUtil helper class:
ByteBuffer bb = MemoryUtil.memAlloc(ZIP_SIZE);
byte[] buf = new byte[4096]; // Play with the buffer size to see what works best
int read = 0;
while ((read = is.read(buf)) != -1) {
bb.put(buf, 0, read);
}
Step 2: wrap the ByteBuffer in a ByteChannel.
Taken from this gist. You possibly want to strip the writing parts out.
package io.github.ncruces.utils;
import java.nio.ByteBuffer;
import java.nio.channels.NonWritableChannelException;
import java.nio.channels.SeekableByteChannel;
import static java.lang.Math.min;
public final class ByteBufferChannel implements SeekableByteChannel {
private final ByteBuffer buf;
public ByteBufferChannel(ByteBuffer buffer) {
if (buffer == null) throw new NullPointerException();
buf = buffer;
}
@Override
public synchronized int read(ByteBuffer dst) {
if (buf.remaining() == 0) return -1;
int count = min(dst.remaining(), buf.remaining());
if (count > 0) {
ByteBuffer tmp = buf.slice();
tmp.limit(count);
dst.put(tmp);
buf.position(buf.position() + count);
}
return count;
}
@Override
public synchronized int write(ByteBuffer src) {
if (buf.isReadOnly()) throw new NonWritableChannelException();
int count = min(src.remaining(), buf.remaining());
if (count > 0) {
ByteBuffer tmp = src.slice();
tmp.limit(count);
buf.put(tmp);
src.position(src.position() + count);
}
return count;
}
@Override
public synchronized long position() {
return buf.position();
}
@Override
public synchronized ByteBufferChannel position(long newPosition) {
if ((newPosition | Integer.MAX_VALUE - newPosition) < 0) throw new IllegalArgumentException();
buf.position((int)newPosition);
return this;
}
@Override
public synchronized long size() { return buf.limit(); }
@Override
public synchronized ByteBufferChannel truncate(long size) {
if ((size | Integer.MAX_VALUE - size) < 0) throw new IllegalArgumentException();
int limit = buf.limit();
if (limit > size) buf.limit((int)size);
return this;
}
@Override
public boolean isOpen() { return true; }
@Override
public void close() {}
}
Step 3: Use ZipFile as before:
ZipFile zf = new ZipFile(new ByteBufferChannel(bb));
Enumeration<ZipArchiveEntry> entries = zf.getEntries();
while (entries.hasMoreElements()) {
    ZipArchiveEntry ze = entries.nextElement();
    ...
}
Step 4: Manually release the native buffer (preferably in a finally block):
MemoryUtil.memFree(bb);
I am encoding raw data on Android using ffmpeg libraries. The native code reads the audio data from an external device and encodes it into AAC format in an mp4 container. I am finding that the audio data is successfully encoded (I can play it with Groove Music, my default Windows audio player). But the metadata, as reported by ffprobe, has an incorrect duration of 0.05 secs - it's actually several seconds long. Also the bitrate is reported wrongly as around 65kbps even though I specified 192kbps.
I've tried recordings of various durations, but the result is always similar: a very small reported duration and bitrate. I've tried various other audio players, such as QuickTime, but they play only the first 0.05 secs or so of the audio.
I've removed error-checking from the following. The actual code checks every call and no problems are reported.
Initialisation:
void AudioWriter::initialise( const char *filePath )
{
AVCodecID avCodecID = AVCodecID::AV_CODEC_ID_AAC;
int bitRate = 192000;
char *containerFormat = "mp4";
int sampleRate = 48000;
int nChannels = 2;
mAvCodec = avcodec_find_encoder(avCodecID);
mAvCodecContext = avcodec_alloc_context3(mAvCodec);
mAvCodecContext->codec_id = avCodecID;
mAvCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
mAvCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
mAvCodecContext->bit_rate = bitRate;
mAvCodecContext->sample_rate = sampleRate;
mAvCodecContext->channels = nChannels;
mAvCodecContext->channel_layout = AV_CH_LAYOUT_STEREO;
avcodec_open2( mAvCodecContext, mAvCodec, nullptr );
mAvFormatContext = avformat_alloc_context();
avformat_alloc_output_context2(&mAvFormatContext, nullptr, containerFormat, nullptr);
mAvFormatContext->audio_codec = mAvCodec;
mAvFormatContext->audio_codec_id = avCodecID;
mAvOutputStream = avformat_new_stream(mAvFormatContext, mAvCodec);
avcodec_parameters_from_context(mAvOutputStream->codecpar, mAvCodecContext);
if (!(mAvFormatContext->oformat->flags & AVFMT_NOFILE))
{
avio_open(&mAvFormatContext->pb, filePath, AVIO_FLAG_WRITE);
}
if ( mAvFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
{
mAvCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
avformat_write_header(mAvFormatContext, NULL);
mAvAudioFrame = av_frame_alloc();
mAvAudioFrame->nb_samples = mAvCodecContext->frame_size;
mAvAudioFrame->format = mAvCodecContext->sample_fmt;
mAvAudioFrame->channel_layout = mAvCodecContext->channel_layout;
av_samples_get_buffer_size(NULL, mAvCodecContext->channels, mAvCodecContext->frame_size,
mAvCodecContext->sample_fmt, 0);
av_frame_get_buffer(mAvAudioFrame, 0);
av_frame_make_writable(mAvAudioFrame);
mAvPacket = av_packet_alloc();
}
Encoding:
// SoundRecording is a custom class with the raw samples to be encoded
bool AudioWriter::encodeToContainer( SoundRecording *soundRecording )
{
int ret;
int frameCount = mAvCodecContext->frame_size;
int nChannels = mAvCodecContext->channels;
float *buf = new float[frameCount*nChannels];
while ( soundRecording->hasReadableData() )
{
//Populate the frame
int samplesRead = soundRecording->read( buf, frameCount*nChannels );
// Planar data
int nFrames = samplesRead/nChannels;
for ( int i = 0; i < nFrames; ++i )
{
for (int c = 0; c < nChannels; ++c )
{
samples[c][i] = buf[nChannels*i +c];
}
}
// Fill a gap at the end with silence
if ( samplesRead < frameCount*nChannels )
{
for ( int i = samplesRead; i < frameCount*nChannels; ++i )
{
for (int c = 0; c < nChannels; ++c )
{
samples[c][i] = 0.0;
}
}
}
encodeFrame( mAvAudioFrame );
}
finish();
}
bool AudioWriter::encodeFrame( AVFrame *frame )
{
//send the frame for encoding
int ret;
if ( frame != nullptr )
{
frame->pts = mAudFrameCounter++;
}
ret = avcodec_send_frame(mAvCodecContext, frame );
while (ret >= 0)
{
ret = avcodec_receive_packet(mAvCodecContext, mAvPacket);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF )
{
break;
}
else
if (ret < 0) {
return false;
}
av_packet_rescale_ts(mAvPacket, mAvCodecContext->time_base, mAvOutputStream->time_base);
mAvPacket->stream_index = mAvOutputStream->index;
av_interleaved_write_frame(mAvFormatContext, mAvPacket);
av_packet_unref(mAvPacket);
}
return true;
}
void AudioWriter::finish()
{
// Flush by sending a null frame
encodeFrame( nullptr );
av_write_trailer(mAvFormatContext);
}
Since the resultant file contains the recorded music, the code to manipulate the audio data seems to be correct (unless I am overwriting other memory somehow).
The inaccurate duration and bitrate suggest that information concerning time is not being properly managed. I set the pts of the frames using a simple increasing integer. I'm unclear what the code that sets the timestamp and stream index achieves - and whether it's even necessary: I copied it from supposedly working code but I've seen other code without it.
Can anyone see what I'm doing wrong?
The timestamps need to be correct. Set the time_base to 1/sample_rate and increment the timestamp by 1024 each frame. Note: 1024 is AAC-specific; if you change codecs, you need to change the frame size.
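As a rough sketch against the question's member names (mAvCodecContext, mAvOutputStream, mAvAudioFrame, mAudFrameCounter), taking the increment from frame_size rather than hard-coding 1024:
// In initialise(): give the codec context a sample-rate based time base before
// avcodec_open2(), and give the stream the same time base before avformat_write_header().
mAvCodecContext->time_base = av_make_q(1, mAvCodecContext->sample_rate);
mAvOutputStream->time_base = av_make_q(1, mAvCodecContext->sample_rate);

// Per frame, before avcodec_send_frame(): advance pts by the number of
// samples per frame (frame_size is 1024 for AAC), not by 1.
mAvAudioFrame->pts = mAudFrameCounter;
mAudFrameCounter += mAvCodecContext->frame_size;
The muxer may still change the stream's time base inside avformat_write_header(), which is why the existing av_packet_rescale_ts() call from the codec time base to mAvOutputStream->time_base stays in place.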
In an Android project using GLES 3.0 VBOs, the application sometimes crashes when invoking GLES30.glBufferData. There is no crash if I use simple data, but it crashes when I load the data from a file.
int[] vboIDs = new int[1];
GLES30.glGenBuffers(1, vboIDs, 0);
GLES30.glBindBuffer(GLES30.GL_ELEMENT_ARRAY_BUFFER, vboIDs[0]);
GLES30.glBufferData(GLES30.GL_ELEMENT_ARRAY_BUFFER, size, buffer, GLES30.GL_STATIC_DRAW);//size=1296 buffer.capacity()=1296
GLES30.glBindBuffer(GLES30.GL_ELEMENT_ARRAY_BUFFER, 0);
It just crashes with no exception log. Is the format of the buffer wrong? Below is how I get the buffer instance; the parameter byteBuffer is read from a binary file:
public static ByteBuffer createSlice(
ByteBuffer byteBuffer, int position, int length)
{
if (byteBuffer == null)
{
return null;
}
int oldPosition = byteBuffer.position();
int oldLimit = byteBuffer.limit();
try
{
int newLimit = position + length;
if (newLimit > byteBuffer.capacity())
{
throw new IllegalArgumentException(
"The new limit is " + newLimit + ", but the capacity is "
+ byteBuffer.capacity());
}
byteBuffer.limit(newLimit);
byteBuffer.position(position);
ByteBuffer slice = byteBuffer.slice();
slice.order(byteBuffer.order());
return slice;
}
finally
{
byteBuffer.limit(oldLimit);
byteBuffer.position(oldPosition);
}
}
A crash in glBufferData normally means you've run off the end of the buffer and accessed an unmapped page, so check that size is correct for the buffer you are uploading.
I have a problem with this library:
https://github.com/fyhertz/libstreaming
It allows streaming the camera over a wireless connection, and it uses three methods: two with MediaCodec and one with MediaRecorder.
I would like to modify it, and I have to use only MediaCodec; however, first of all I tried the code of example 2 of the library, but I always get the same error:
The log tells me that the device can use MediaCodec; it sets up the encoder, but when it tests the decoder it fails and the buffer is filled with -1.
This is the method in the EncoderDebugger class where the exception occurs. Can some kind soul help me, please?
private long decode(boolean withPrefix) {
int n =3, i = 0, j = 0;
long elapsed = 0, now = timestamp();
int decInputIndex = 0, decOutputIndex = 0;
ByteBuffer[] decInputBuffers = mDecoder.getInputBuffers();
ByteBuffer[] decOutputBuffers = mDecoder.getOutputBuffers();
BufferInfo info = new BufferInfo();
while (elapsed<3000000) {
// Feeds the decoder with a NAL unit
if (i<NB_ENCODED) {
decInputIndex = mDecoder.dequeueInputBuffer(1000000/FRAMERATE);
if (decInputIndex>=0) {
int l1 = decInputBuffers[decInputIndex].capacity();
int l2 = mVideo[i].length;
decInputBuffers[decInputIndex].clear();
if ((withPrefix && hasPrefix(mVideo[i])) || (!withPrefix && !hasPrefix(mVideo[i]))) {
check(l1>=l2, "The decoder input buffer is not big enough (nal="+l2+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(mVideo[i],0,mVideo[i].length);
} else if (withPrefix && !hasPrefix(mVideo[i])) {
check(l1>=l2+4, "The decoder input buffer is not big enough (nal="+(l2+4)+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(new byte[] {0,0,0,1});
decInputBuffers[decInputIndex].put(mVideo[i],0,mVideo[i].length);
} else if (!withPrefix && hasPrefix(mVideo[i])) {
check(l1>=l2-4, "The decoder input buffer is not big enough (nal="+(l2-4)+", capacity="+l1+").");
decInputBuffers[decInputIndex].put(mVideo[i],4,mVideo[i].length-4);
}
mDecoder.queueInputBuffer(decInputIndex, 0, l2, timestamp(), 0);
i++;
} else {
if (VERBOSE) Log.d(TAG,"No buffer available !7");
}
}
// Tries to get a decoded image
decOutputIndex = mDecoder.dequeueOutputBuffer(info, 1000000/FRAMERATE);
if (decOutputIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
decOutputBuffers = mDecoder.getOutputBuffers();
} else if (decOutputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
mDecOutputFormat = mDecoder.getOutputFormat();
} else if (decOutputIndex>=0) {
if (n>2) {
// We have successfully encoded and decoded an image !
int length = info.size;
mDecodedVideo[j] = new byte[length];
decOutputBuffers[decOutputIndex].clear();
decOutputBuffers[decOutputIndex].get(mDecodedVideo[j], 0, length);
// Converts the decoded frame to NV21
convertToNV21(j);
if (j>=NB_DECODED-1) {
flushMediaCodec(mDecoder);
if (VERBOSE) Log.v(TAG, "Decoding "+n+" frames took "+elapsed/1000+" ms");
return elapsed;
}
j++;
}
mDecoder.releaseOutputBuffer(decOutputIndex, false);
n++;
}
elapsed = timestamp() - now;
}
throw new RuntimeException("The decoder did not decode anything.");
}
Here are my suggestions:
(1) Check the settings of the encoder and decoder and make sure that they match. For example, the resolution and color format must be the same.
(2) Make sure the very first packet generated by the encoder has been sent and pushed into the decoder. This packet defines the basic settings of the video stream.
(3) The decoder usually buffers 5-10 frames, so the data in the buffer is invalid for a few hundred ms.
(4) When initializing the decoder, set the surface to null. Otherwise the output buffer will be consumed by the surface and probably released automatically.
I'm trying to set up an OpenSL AudioPlayer to use memory I've allocated to play back a wav file. I want to do this so I can have multiple AudioPlayers that share the same data and conserve memory.
I've tried giving OpenSL the entire file and telling it that it is a WAVE with format_mime:
SLDataLocator_Address loc_fd = {SL_DATALOCATOR_ADDRESS, data, size};
SLDataFormat_MIME format_mime = { SL_DATAFORMAT_MIME, (SLchar*)"audio/x-wav",SL_CONTAINERTYPE_WAV};
SLDataSource audioSrc = { &loc_fd, &format_mime };
// configure audio sink
SLDataLocator_OutputMix loc_outmix = { SL_DATALOCATOR_OUTPUTMIX,outputMixObject };
SLDataSink audioSnk = { &loc_outmix, 0 };
// create audio player
const SLInterfaceID ids[2] = { SL_IID_SEEK, SL_IID_PLAYBACKRATE };
const SLboolean req[2] = { SL_BOOLEAN_FALSE, SL_BOOLEAN_FALSE };
result = (*engineEngine)->CreateAudioPlayer(engineEngine,&uriPlayerObject[cntSOUND],&audioSrc, &audioSnk, 0, ids, req);
and I have parsed the WAVE data myself and loaded format_pcm
SLDataFormat_PCM format_pcm;
format_pcm.formatType = SL_DATAFORMAT_PCM;
char* wavParser = isWAVE(data);
if(wavParser == NULL)
{
Log("NOT A WAVE!");
return -1;
}
char* fmtChunk = getChunk("fmt ", data, size);
parsefmtChunk(fmtChunk, &format_pcm);
char* dataChunk = getChunk("data",data, size);
dataChunk += 4;
unsigned int dataSize = *((unsigned int*)dataChunk);
dataChunk += 4;
format_pcm.channelMask = 0;
format_pcm.containerSize = 16;
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
loc_fd.pAddress = dataChunk;
loc_fd.length = dataSize;
The parsefmtChunk function is
void parsefmtChunk(char* fmtchunk, SLDataFormat_PCM* pcm)
{
char* data = fmtchunk + 8;
unsigned short audioFormat = *((unsigned short*)data);
if(audioFormat != 1)
{
Log("Not PCM!");
Log("Reached Line:%d in File %s", __LINE__, __FILE__);
return;
}
data += 2;
pcm->numChannels = *((unsigned short*)data);
data += 2;
pcm->samplesPerSec = *((unsigned int*)data);
data += 4;
//Byte Rate
data += 4;
//Block Align
data += 2;
//BitsPerSample
pcm->bitsPerSample = *((unsigned short*)data);
}
(Are Byte Rate and Block Align supposed to be used somehow to fill out the pcm struct?)
but whenever I create the audioplayer I get SL_RESULT_CONTENT_UNSUPPORTED
This is what I log from my parsefmtChunk function:
Channels:2
samplesPerSec:44100
bitsPerSample:16
from android-ndk-r8b/docs/opensles/index.html
PCM data format
The PCM data format can be used with buffer queues only.
So SLDataFormat_PCM CANNOT be used with SLDataLocator_Address like I assumed.
I can do what I want with a Buffer Queue instead by using just one big queue like so
bufferqueueitf->Enqueue(bufferqueueitf,dataChunk,dataSize);
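For completeness, a minimal sketch of that buffer-queue route, reusing format_pcm, engineEngine, outputMixObject, dataChunk and dataSize from the code above (error checking omitted; the player still has to be set to SL_PLAYSTATE_PLAYING through its SLPlayItf before anything plays):
// Source: an Android simple buffer queue carrying the parsed PCM format.
SLDataLocator_AndroidSimpleBufferQueue loc_bufq =
        {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 1}; // one buffer: the whole data chunk
SLDataSource audioSrc = {&loc_bufq, &format_pcm};

// Sink: the output mix, as before.
SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};

// Ask for the buffer queue interface when creating the player.
const SLInterfaceID ids[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
const SLboolean req[1] = {SL_BOOLEAN_TRUE};
SLObjectItf playerObject;
(*engineEngine)->CreateAudioPlayer(engineEngine, &playerObject, &audioSrc, &audioSnk,
                                   1, ids, req);
(*playerObject)->Realize(playerObject, SL_BOOLEAN_FALSE);

// Grab the queue interface and enqueue the whole PCM chunk once.
SLAndroidSimpleBufferQueueItf bufferqueueitf;
(*playerObject)->GetInterface(playerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
                              &bufferqueueitf);
(*bufferqueueitf)->Enqueue(bufferqueueitf, dataChunk, dataSize);
Because Enqueue() only references the memory, several players can point at the same dataChunk, which keeps a single copy of the samples in memory as intended.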
Have you tried this?
SLDataFormat_MIME format_mime = {SL_DATAFORMAT_MIME, NULL, SL_CONTAINERTYPE_UNSPECIFIED};
The Android implementation of OpenSL ES isn't totally compliant and http://mobilepearls.com/labs/native-android-api/ndk/docs/opensles/ recommends the following:
The Android implementation of OpenSL ES requires that mimeType be initialized to either NULL or a valid UTF-8 string, and that containerType be initialized to a valid value. In the absence of other considerations, such as portability to other implementations, or content format which cannot be identified by header, we recommend that you set the mimeType to NULL and containerType to SL_CONTAINERTYPE_UNSPECIFIED.
Also, make sure you're giving it a valid URI.