Android: read assets resources in native code, special characters at the end - android

I am using the following code to read files in the /assets/ folder:
//AAssetManager* mgr from parameter.
AAsset* asset = AAssetManager_open(mgr, filename, AASSET_MODE_BUFFER);
if (NULL == asset) {
__android_log_print(ANDROID_LOG_ERROR, "hdrijni", "_ASSET_NOT_FOUND_");
return;
}
long size = AAsset_getLength(asset);
char * buffer = (char*) malloc(sizeof(char)*size);
int byteRead = AAsset_read(asset, buffer, size);
AAsset_close(asset);
I can get the content, but sometimes it has extra garbage characters appended at the end.

It turns out this problem is not caused by the Asset Manager but by the shader code I am using.
After reading the shader content, I create the shader like this:
GLuint shader = glCreateShader(type);
glShaderSource(shader, 1, &buffer2, NULL);
The asset buffer, or a content buffer transferred via JNI, might not be NULL
terminated, so you need to use the 'length' parameter when calling
glShaderSource. -- From others.
Just change the call to
glShaderSource(shader, 1, &buffer2, &length);
where length is a GLint holding the number of bytes actually read from the asset (the value returned by AAsset_read).
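Alternatively, you can NUL-terminate the buffer yourself when reading the asset, so the source can be passed without an explicit length. A minimal sketch under that assumption (variable names are illustrative, not from the original code):
AAsset* asset = AAssetManager_open(mgr, filename, AASSET_MODE_BUFFER);
if (asset == NULL) {
    return;
}
off_t size = AAsset_getLength(asset);
char* buffer = (char*) malloc(size + 1);    // one extra byte for the terminator
int bytesRead = AAsset_read(asset, buffer, size);
AAsset_close(asset);
if (bytesRead < 0) {
    free(buffer);
    return;
}
buffer[bytesRead] = '\0';                   // now safe to treat as a C string
const char* src = buffer;
glShaderSource(shader, 1, &src, NULL);      // a NULL length is fine for a NUL-terminated string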

Related

Android NDK: Read large file using AAssetManager

I am trying to read a large file using the asset manager in the Android NDK.
The problem is that the code I have written does not read the entire content, only a portion of it. When I implement the same functionality in Java, it gives correct results.
This is the code that I have written:
std::string file = hats::files::SOURCE_DATASET_FILENAME;
AAssetManager *mgr = AAssetManager_fromJava(env, assetManager);
AAsset *asset = AAssetManager_open(mgr, file.c_str(), AASSET_MODE_BUFFER);
size_t assetLength = AAsset_getLength(asset);
char *buffer = (char *) malloc(assetLength + 1);
int nbytes{0};
while ((nbytes = AAsset_read(asset, buffer, assetLength)) > 0) {
    LOGD("%s", buffer);
    AAsset_seek(asset, nbytes, SEEK_CUR);
}
AAsset_close(asset);
return env->NewStringUTF(file.c_str());
I am not able to understand why only a partial file is being read. I also cannot find any proper tutorial for the Android NDK.
Please help me out.
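For reference, one likely culprit is that AAsset_read already advances the read position, so the extra AAsset_seek skips data, and LOGD needs a NUL-terminated string. A rough sketch of reading the whole asset into one buffer, under those assumptions:
AAsset *asset = AAssetManager_open(mgr, file.c_str(), AASSET_MODE_BUFFER);
size_t assetLength = AAsset_getLength(asset);
char *buffer = (char *) malloc(assetLength + 1);
size_t offset = 0;
int nbytes = 0;
// AAsset_read advances the position itself; keep reading until it returns 0 (EOF) or < 0 (error)
while (offset < assetLength &&
       (nbytes = AAsset_read(asset, buffer + offset, assetLength - offset)) > 0) {
    offset += nbytes;
}
AAsset_close(asset);
buffer[offset] = '\0';  // required before logging with "%s"
// buffer now holds the complete asset; note that logging it in one LOGD call will still look
// truncated, because logcat limits the length of a single message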

C++, Android NDK: How to save my raw audio data to file properly and load it again

I'm working on an Android app that plays back audio. To minimize latency I'm playing the audio via JNI in C++, using the oboe library.
Currently, before playback, the app has to decode the given file (e.g. an mp3) and then play back the decoded raw audio stream. This leads to a waiting time before playback starts if the file is large.
So I would like to do the decoding beforehand, save the result, and when playback is requested just play the decoded data from the saved file.
I have next to no knowledge of how to do proper file I/O in C++ and have a hard time wrapping my head around it. It is possible that my problem can be solved just with the right library; I'm not sure.
So currently I am saving my file like this:
bool Converter::doConversion(const std::string& fullPath, const std::string& name) {
// here I'm setting up the extractor and necessary inputs. Omitted since not relevant
// this is where the decoder is called to decode a file to raw audio
constexpr int kMaxCompressionRatio{12};
const long maximumDataSizeInBytes = kMaxCompressionRatio * (size) * sizeof(int16_t);
auto decodedData = new uint8_t[maximumDataSizeInBytes];
int64_t bytesDecoded = NDKExtractor::decode(*extractor, decodedData);
auto numSamples = bytesDecoded / sizeof(int16_t);
auto outputBuffer = std::make_unique<float[]>(numSamples);
// This block is necessary to get the correct format for oboe.
// The NDK decoder can only decode to int16, we need to convert to floats
oboe::convertPcm16ToFloat(
reinterpret_cast<int16_t *>(decodedData),
outputBuffer.get(),
bytesDecoded / sizeof(int16_t));
// This is how I currently save my outputBuffer to a file. This produces a file on the disc.
std::string outputSuffix = ".pcm";
std::string outputName = std::string(mFolder) + name + outputSuffix;
std::ofstream outfile(outputName.c_str(), std::ios::out | std::ios::binary);
outfile.write(reinterpret_cast<const char *>(&outputBuffer), sizeof outputBuffer);
return true;
}
So I believe I take my float array, convert it to a char array and save it. I am not certain this is correct, but that is my best understanding of it.
There is a file afterwards, anyway.
Edit: As I found out when analyzing my saved file, I only store 8 bytes.
Now how do I load this file again and restore the contents of my outputBuffer?
Currently I have this bit, which is clearly incomplete:
StorageDataSource *StorageDataSource::openPCM(const char *fileName, AudioProperties targetProperties) {
long bufferSize;
char * buffer;
std::ifstream stream(fileName, std::ios::in | std::ios::binary);
stream.seekg (0, std::ios::beg);
bufferSize = stream.tellg();
buffer = new char [bufferSize];
stream.read(buffer, bufferSize);
stream.close();
If this is correct, what do I have to do to restore the data to its original type? If I am doing it wrong, how do I do it the right way?
I figured out how to do it thanks to @Michael's comments.
This is how I save my data now:
bool Converter::doConversion(const std::string& fullPath, const std::string& name) {
// here I'm setting up the extractor and necessary inputs. Omitted since not relevant
// this is where the decoder is called to decode a file to raw audio
constexpr int kMaxCompressionRatio{12};
const long maximumDataSizeInBytes = kMaxCompressionRatio * (size) * sizeof(int16_t);
auto decodedData = new uint8_t[maximumDataSizeInBytes];
int64_t bytesDecoded = NDKExtractor::decode(*extractor, decodedData);
auto numSamples = bytesDecoded / sizeof(int16_t);
// converting to float has moved to the reading function, so now I save decodedData directly.
std::string outputSuffix = ".pcm";
std::string outputName = std::string(mFolder) + name + outputSuffix;
std::ofstream outfile(outputName.c_str(), std::ios::out | std::ios::binary);
outfile.write((char*)decodedData, numSamples * sizeof (int16_t));
return true;
}
And this is how I read the stored file again:
long bufferSize;
char * inputBuffer;
std::ifstream stream;
stream.open(fileName, std::ifstream::in | std::ifstream::binary);
if (!stream.is_open()) {
// handle error
}
stream.seekg (0, std::ios::end); // seek to the end
bufferSize = stream.tellg(); // get size info, will be 0 without seeking to the end
stream.seekg (0, std::ios::beg); // seek to beginning
inputBuffer = new char [bufferSize];
stream.read(inputBuffer, bufferSize); // the actual reading into the buffer. would be null without seeking back to the beginning
stream.close();
// done reading the file.
auto numSamples = bufferSize / sizeof(int16_t); // calculate my number of samples, so the audio is correctly interpreted
auto outputBuffer = std::make_unique<float[]>(numSamples);
// the decoding bit now happens after the file is open. This avoids confusion
// The NDK decoder can only decode to int16, we need to convert to floats
oboe::convertPcm16ToFloat(
reinterpret_cast<int16_t *>(inputBuffer),
outputBuffer.get(),
bufferSize / sizeof(int16_t));
// here I continue working with my outputBuffer
The important bits of information about C++ that I didn't have were:
a) the size of a pointer is not the same as the size of the data it points to, and
b) how seeking in a stream works: I needed to put the needle back to the start before I would find any data in my buffer.
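Point a) is exactly what produced the original 8-byte file: sizeof applied to the smart pointer measures the pointer object, not the array it owns. A small self-contained illustration (the file name is made up):
#include <cstdio>
#include <fstream>
#include <memory>

int main() {
    const size_t numSamples = 1024;
    auto samples = std::make_unique<float[]>(numSamples);

    // sizeof samples is the size of the unique_ptr object itself (typically 8 bytes),
    // not numSamples * sizeof(float).
    std::printf("sizeof samples             = %zu\n", sizeof samples);
    std::printf("numSamples * sizeof(float) = %zu\n", numSamples * sizeof(float));

    std::ofstream out("samples.raw", std::ios::out | std::ios::binary);
    // Correct: write from the raw data pointer, with the byte count derived
    // from the element count.
    out.write(reinterpret_cast<const char *>(samples.get()),
              numSamples * sizeof(float));
    return 0;
}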

How does ANeuralNetworksMemory_createFromFd work?

The Android Neural Networks API docs say: "Creates a shared memory object from a file descriptor."
But I can't find anywhere that specifies what the format of this file is. In the TensorFlow Lite (TFL) source code:
allocation.cc:
MMAPAllocation::MMAPAllocation(const char* filename,
                               ErrorReporter* error_reporter)
    : Allocation(error_reporter), mmapped_buffer_(MAP_FAILED) {
  mmap_fd_ = open(filename, O_RDONLY);
  if (mmap_fd_ == -1) {
    error_reporter_->Report("Could not open '%s'.", filename);
    return;
  }
  struct stat sb;
  fstat(mmap_fd_, &sb);
  buffer_size_bytes_ = sb.st_size;
  mmapped_buffer_ =
      mmap(nullptr, buffer_size_bytes_, PROT_READ, MAP_SHARED, mmap_fd_, 0);
  if (mmapped_buffer_ == MAP_FAILED) {
    error_reporter_->Report("Mmap of '%s' failed.", filename);
    return;
  }
}
nnapi_delegate.cc
NNAPIAllocation::NNAPIAllocation(const char* filename,
                                 ErrorReporter* error_reporter)
    : MMAPAllocation(filename, error_reporter) {
  if (mmapped_buffer_ != MAP_FAILED)
    CHECK_NN(ANeuralNetworksMemory_createFromFd(buffer_size_bytes_, PROT_READ,
                                                mmap_fd_, 0, &handle_));
}
So TFL opens the file and hands it to NNAPI. What I need to know is the format of the file that stores the tensors: is it a FlatBuffers file like the TFL format?
Edit:
This is a sample from NNAPI doc:
ANeuralNetworksMemory* mem1 = NULL;
int fd = open("training_data", O_RDONLY);
ANeuralNetworksMemory_createFromFd(file_size, PROT_READ, fd, 0, &mem1);
How must the content of this training_data file be structured for NNAPI to understand it?
ANeuralNetworksMemory_createFromFd(file_size, PROT_READ, fd, 0, &mem1) - This API maps the model file into an ANeuralNetworksMemory object.
The mapped memory handle is stored in mem1 (passed by reference!).
The trained values stored in mem1 (the ANeuralNetworksMemory object) are then read by pointing at the appropriate offset and copied into the tensors of the neural network model.
ANeuralNetworksModel_setOperandValueFromMemory(model_, tensor0, mem1, offset, size);
ANeuralNetworksModel_setOperandValueFromMemory(model_, tensor1, mem1, offset+size, size);
tensor0 - pointing at offset
tensor1 - pointing at offset+size
The loading of the model file and the parsing of it are done separately. This makes it easier to mix-and-match between different memory models and different file formats. It also makes it possible to use these building blocks for other functions, like loading inputs from a file.
ANeuralNetworksMemory_createFromFd() is just used to load the model file into memory.
FlatBufferModel::BuildFromFile() gets an allocation (memory block) that represents the model. This is where ANeuralNetworksMemory_createFromFd() gets called. Then it news up a FlatBufferModel object. This calls tflite::GetModel() which is in the schema subdirectory. The schema subdirectory deserializes the flat-buffer-model from the .tflite model loaded into memory.
When NNAPIDelegate::Invoke() is called, the schema Model object is used to build the model in the Android-NN layer using calls like ANeuralNetworksModel_addOperand().
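Putting that together: the file handed to ANeuralNetworksMemory_createFromFd has no prescribed format; NNAPI just maps the bytes, and your own code decides what lives at which offset (for TFLite that happens to be a .tflite FlatBuffer, but it could be any layout you define). A rough sketch with a made-up file name, offsets and sizes; model, tensor0 and tensor1 are assumed to have been set up earlier with ANeuralNetworksModel_create / _addOperand:
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <android/NeuralNetworks.h>

// "weights.bin", kOffset0, kSize0, kSize1 are illustrative; the layout is whatever
// was chosen when the file was written - NNAPI only sees raw bytes at offsets.
int fd = open("weights.bin", O_RDONLY);
struct stat sb;
fstat(fd, &sb);

ANeuralNetworksMemory* mem = nullptr;
ANeuralNetworksMemory_createFromFd(sb.st_size, PROT_READ, fd, 0, &mem);

// Each operand is told where its bytes live inside the mapped file.
ANeuralNetworksModel_setOperandValueFromMemory(model, tensor0, mem, kOffset0, kSize0);
ANeuralNetworksModel_setOperandValueFromMemory(model, tensor1, mem, kOffset0 + kSize0, kSize1);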

use ffmpeg api to convert audio files. crash on avcodec_encode_audio2

From the examples I got the basic idea of this code.
However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c
all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
Can I expect a frame that was retrieved by calling one of the decoder functions, e.g.
avcodec_decode_audio4, to hold suitable values to put directly into an encoder, or is
the resampling step mentioned in the comment mandatory?
Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function
open_file_for_input there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming
schemes, so it's very hard to tell whether the general approach is right, or is actually an
ugly mixture of techniques that were used at different version bumps of ffmpeg.
ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for these terms. Is it possible to write working code without deep knowledge of audio?
Here is my code so far that right now crashes at avcodec_encode_audio2
and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
    jboolean isCopy;
    jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
    jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
    logging = (*env)->GetStaticIntField(env, configClass, fid);

    /// open input
    const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
    AVFormatContext* pInputCtx;
    AVStream* pInputStream;
    open_input(sourceFile, &pInputCtx, &pInputStream);

    // open output
    const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
    const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
    const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
    AVFormatContext* pOutputCtx;
    AVOutputFormat* pOutputFmt;
    AVStream* pOutputStream;
    open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

    /// decode/encode
    error = avformat_write_header(pOutputCtx, NULL);
    DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));
    AVFrame* frame = avcodec_alloc_frame();
    DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
    frame->pts = 0;
    LOGI("allocate packet");
    AVPacket pktIn;
    AVPacket pktOut;
    LOGI("done");
    int got_frame, got_packet, len, frame_count = 0;
    int64_t processed_time = 0, duration = pInputStream->duration;

    while (av_read_frame(pInputCtx, &pktIn) >= 0) {
        do {
            len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
            DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
            if (len < 0) break;
            len = FFMIN(len, pktIn.size);
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
            if (got_frame) {
                do {
                    av_init_packet(&pktOut);
                    pktOut.data = NULL;
                    pktOut.size = 0;
                    LOGI("encode frame");
                    DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                    DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                    DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                    LOGI("tests done");
                    len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                    LOGI("encode done");
                    DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                } while (!got_packet);
                // write packet;
                LOGI("write packet");
                /* Write the compressed frame to the media file. */
                error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                av_free_packet(&pktOut);
            }
            pktIn.data += len;
            pktIn.size -= len;
        } while (pktIn.size > 0);
        av_free_packet(&pktIn);
    }
    LOGI("write trailer");
    av_write_trailer(pOutputCtx);
    LOGI("end");

    /// close resources
    avcodec_free_frame(&frame);
    avcodec_close(pInputStream->codec);
    av_free(pInputStream->codec);
    avcodec_close(pOutputStream->codec);
    av_free(pOutputStream->codec);
    avformat_close_input(&pInputCtx);
    avformat_free_context(pOutputCtx);
    return 0;
}
In the meantime I have figured this out and written an Android library project that does this
(for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details.
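On the planar vs. packed question: whether a decoded frame can go straight into the encoder depends on whether the encoder accepts the decoder's sample format; if it does not, the frame has to be converted, e.g. with libswresample. A rough sketch (not taken from the linked project) for converting a possibly planar decoded frame to packed S16, with error handling omitted and names illustrative:
#include <libswresample/swresample.h>
#include <libavutil/samplefmt.h>
#include <libavutil/mem.h>

// inFrame comes from avcodec_decode_audio4; codecCtx is the decoder context.
SwrContext *swr = swr_alloc_set_opts(NULL,
        codecCtx->channel_layout, AV_SAMPLE_FMT_S16, codecCtx->sample_rate,    // output
        codecCtx->channel_layout, codecCtx->sample_fmt, codecCtx->sample_rate, // input
        0, NULL);
swr_init(swr);

uint8_t *outBuf = NULL;
av_samples_alloc(&outBuf, NULL, codecCtx->channels, inFrame->nb_samples,
                 AV_SAMPLE_FMT_S16, 0);
int converted = swr_convert(swr, &outBuf, inFrame->nb_samples,
                            (const uint8_t **) inFrame->extended_data,
                            inFrame->nb_samples);
// outBuf now holds 'converted' packed S16 samples per channel, ready to hand to the encoder.
av_freep(&outBuf);
swr_free(&swr);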

Android NDK Pointer Arithmetic

I am trying to load a TGA file in Android NDK.
I open the file using AssetManager, read in the entire contents of the TGA file into a memory buffer, and then I try to extract the pixel data from it.
I can read the TGA header part of the file without any problems, but when I try to advance the memory pointer past the TGA header, the app crashes. If I don't try to advance the memory pointer, it does not crash.
Is there some sort of limitation in Android NDK for pointer arithmetic?
Here is the code:
This function opens the asset file:
char* GEAndroid::OpenAssetFile( const char* pFileName )
{
    char* pBuffer = NULL;
    AAssetManager* assetManager = m_pState->activity->assetManager;
    AAsset* assetFile = AAssetManager_open(assetManager, pFileName, AASSET_MODE_UNKNOWN);
    if (!assetFile) {
        // Log error as 'error in opening the input file from apk'
        LOGD( "Error opening file %s", pFileName );
    }
    else
    {
        LOGD( "File opened successfully %s", pFileName );
        const void* pData = AAsset_getBuffer(assetFile);
        off_t fileLength = AAsset_getLength(assetFile);
        LOGD("fileLength=%d", fileLength);
        pBuffer = new char[fileLength];
        memcpy( pBuffer, pData, fileLength * sizeof( char ) );
        AAsset_close(assetFile);  // release the asset once its contents have been copied
    }
    return pBuffer;
}
And down here in my texture class I try to load it:
char* pBuffer = g_pGEAndroid->OpenAssetFile( fileNameWithPath );
TGA_HEADER textureHeader;
char *pImageData = NULL;
unsigned int bytesPerPixel = 4;
textureHeader = *reinterpret_cast<TGA_HEADER*>(pBuffer);
// I double check that the textureHeader is valid and it is.
bytesPerPixel = textureHeader.bits/8; // Divide By 8 To Get The Bytes Per Pixel
m_imageSize = textureHeader.width*textureHeader.height*bytesPerPixel; // Calculate The Memory Required For The TGA Data
pImageData = new char[m_imageSize];
// the line below causes the crash
pImageData = reinterpret_cast<char*>(pBuffer + sizeof( TGA_HEADER)); // <-- causes a crash
If I replace the line above with the following line (even though it is incorrect), the app runs, although obviously the texture is messed up.
pImageData = reinterpret_cast<char*>(pBuffer); // <-- does not crash, but obviously texture is messed up.
Anyone have any ideas?
Thanks.
Why reinterpret_cast? You're adding an integer to a char*; that operation produces a char*. No typecast necessary.
One caveat for pointer juggling on Android (and on ARM devices in general): ARM cannot read/write unaligned data from memory. If you read/write an int-sized variable, it needs to be at an address that's a multiple of 4; for short, a multiple of 2. Bytes can be at any address. This does not, as far as I can see, apply to the presented snippet. But do keep in mind. It does throw off binary format parsing occasionally, especially when ported from Intel PCs.
Simply assigning an unaligned value to a pointer does not crash. Dereferencing it might.
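To illustrate the alignment caveat (it does not apply to the snippet above, since new char[] returns suitably aligned storage, but it bites when reading multi-byte fields at arbitrary offsets in a binary file): the safe pattern is to memcpy the value out rather than dereference a cast pointer. A small sketch, not part of the original code:
#include <cstddef>
#include <cstdint>
#include <cstring>

// Read a uint32_t that may sit at an unaligned offset inside a byte buffer.
// memcpy lets the compiler generate whatever access the CPU permits, whereas
// dereferencing (uint32_t*)(buffer + offset) can fault on strict-alignment ARM.
uint32_t readU32(const char* buffer, size_t offset) {
    uint32_t value;
    std::memcpy(&value, buffer + offset, sizeof value);
    return value;
}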
Sigh, I just realized the mistake. I allocate memory for pImageData, then set the pointer to the buffer. This does not sit well when I try to create an OpenGL texture with the pixel data. Modifying it so I memcpy the pixel data from (pBuffer + sizeof( TGA_HEADER )) to pImageData fixes the problem.
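In code, the fix described above looks roughly like this (a sketch based on the snippet from the question, not verified against the full project):
textureHeader = *reinterpret_cast<TGA_HEADER*>(pBuffer);
bytesPerPixel = textureHeader.bits / 8;
m_imageSize = textureHeader.width * textureHeader.height * bytesPerPixel;
pImageData = new char[m_imageSize];
// Copy the pixel data that follows the header into the newly allocated buffer,
// instead of overwriting pImageData with a pointer into pBuffer (which leaked
// the allocation and later broke the OpenGL texture upload).
memcpy(pImageData, pBuffer + sizeof(TGA_HEADER), m_imageSize);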
