I am working on a karaoke Android app; here is an example:
https://github.com/koseonjae/Karaoke
It is based on Google's sample. Please take a look at this chunk:
if(fos == nullptr)
fos = fopen("/sdcard/recorded_audio.pcm", "wb");
fwrite(buf->buf_, 1, buf->size_, fos);
This chunk of code works on my Samsung S7 device but crashes on other devices, so I changed it to save the buffer to a vector and write it out when recording has stopped:
bool EngineService(void *ctx, uint32_t msg, void *data) {
assert(ctx == &engine);
switch (msg) {
case ENGINE_SERVICE_MSG_RETRIEVE_DUMP_BUFS: {
*(static_cast<uint32_t *>(data)) = dbgEngineGetBufCount();
break;
}
case ENGINE_SERVICE_MSG_RECORDED_AUDIO_AVAILABLE: {
// adding audio delay effect
sample_buf *buf = static_cast<sample_buf *>(data);
assert(engine.fastPathFramesPerBuf_ ==
buf->size_ / engine.sampleChannels_ / (engine.bitsPerSample_ / 8));
engine.delayEffect_->process(reinterpret_cast<int16_t *>(buf->buf_),
engine.fastPathFramesPerBuf_);
// TODO: crashes on some devices with a slow sdcard -> save the data to a vector instead
/*if(fos == nullptr)
fos = fopen("/sdcard/recorded_audio.pcm", "wb");
fwrite(buf->buf_, 1, buf->size_, fos);*/
sample_buf output = sample_buf();
output.buf_ = new uint8_t();
*output.buf_ = *buf->buf_;
output.size_ = buf->size_;
bucket.push_back(output);
break;
}
default:
assert(false);
return false;
}
return true;
}
In the stop function stopPlay(JNIEnv *env, jclass type) I added:
if(fos == nullptr)
fos = fopen("/sdcard/recorded_audio.pcm", "wb");
LOGE("====stopPlay %u", bucket.size());
for(auto const& value: bucket) {
fwrite(value.buf_, 1, value.size_, fos);
}
fclose(fos);
fos = nullptr;
I don't know why the fwrite that writes directly to the sdcard works in EngineService, but when my vector bucket is written out the file looks fine, yet I cannot hear anything - just a beep beep sound...
Can someone explain it? Is something wrong here?
P.S.: I assume writing to memory (a vector) is faster than writing to the sdcard - that's true, isn't it? But it is not working.
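For reference, a likely culprit is in the vector path itself: output.buf_ = new uint8_t(); allocates a single byte, and *output.buf_ = *buf->buf_; copies only the first byte of the recorded buffer, while output.size_ still claims the full size, so stopPlay() ends up writing mostly uninitialized memory. Below is a minimal sketch of a deep copy (assuming bucket is a std::vector<sample_buf>, that memcpy from <cstring> is available, and that the copies are freed after they are written out):

case ENGINE_SERVICE_MSG_RECORDED_AUDIO_AVAILABLE: {
    sample_buf *buf = static_cast<sample_buf *>(data);
    engine.delayEffect_->process(reinterpret_cast<int16_t *>(buf->buf_),
                                 engine.fastPathFramesPerBuf_);
    sample_buf output = sample_buf();
    output.buf_ = new uint8_t[buf->size_];       // allocate the whole buffer, not one byte
    memcpy(output.buf_, buf->buf_, buf->size_);  // copy every recorded byte
    output.size_ = buf->size_;
    bucket.push_back(output);
    break;
}
// ... and in stopPlay(), after fwrite(value.buf_, 1, value.size_, fos):
// delete[] value.buf_;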
I need to pass data to MediaExtractor; for this purpose I use the setDataSourceFd method:
https://developer.android.com/ndk/reference/group/media#amediaextractor_setdatasourcefd
Like this:
int32_t NDK_extractor::decode()
{
FILE *fp = nullptr;
media_status_t err;
AMediaExtractor *ex = AMediaExtractor_new();
fp = fopen("/storage/emulated/0/Android/data/com.test.debug/files/Models/test.mp3", "rb");
if (fp)
{
err = AMediaExtractor_setDataSourceFd(ex, fileno(fp), 0, dataSize);
}
else
{
LOGE("Failed open file");
return 0;
}
if (err != AMEDIA_OK)
{
LOGE("SOUND :: Error setting ex data source, err %d", err);
return 0;
}
...
}
And it works fine, but now I need to work with a pointer to the data and the data size, so I changed the method like this:
int32_t NDK_extractor::decode()
{
FILE *fp = nullptr;
media_status_t err;
AMediaExtractor *ex = AMediaExtractor_new();
fp = fopen("/storage/emulated/0/Android/data/com.test.debug/files/Models/test.mp3", "rb");
fseek(fp, 0, SEEK_END);
long lSize = ftell(fp);
rewind(fp);
void *buf = new unsigned char[lSize];
fread(buf, 1, lSize, fp);
fclose(fp);
fp = fmemopen(buf, lSize, "r");
if (fp)
{
err = AMediaExtractor_setDataSourceFd(ex, fileno(fp), 0, dataSize);
}
else
{
LOGE("Failed open file");
return 0;
}
if (err != AMEDIA_OK)
{
LOGE("SOUND :: Error setting ex data source, err %d", err);
return 0;
}
...
}
So I am reading the same data (as in the previous example) into a buffer, getting its size, and then opening it with fmemopen, and as a result I get the error AMEDIA_ERROR_BASE.
What is the problem here? Why does it work in one case and not in the other, even though the two are almost the same? What am I missing?
As it turned out, the problem is that the AMediaExtractor_setDataSourceFd method accepts a file descriptor as a parameter. To get a file descriptor from a FILE, you call fileno() on it; that works for a FILE opened with fopen(), but if the FILE was opened with fmemopen(), fileno() returns -1. I tried to do it through a pipe (https://stackoverflow.com/a/1559018/5709159), but this approach does not work for AMediaExtractor_setDataSourceFd (I think because a pipe does not support seek()). I also tried a custom MediaExtractor (one of its setDataSource() methods), but that was introduced only with API 29, which does not suit me. In the end I used a workaround: I take the bytes, write them to a temporary file, open that file with fopen(), call fileno() to get the file descriptor, and pass it to setDataSourceFd().
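For illustration, a minimal sketch of that temporary-file workaround (the helper name, the cache-directory parameter, and the reuse of AMEDIA_ERROR_BASE as a generic error value are assumptions, not part of the original code):

#include <cstdio>
#include <string>
#include <media/NdkMediaExtractor.h>

// Hypothetical helper: feed an in-memory buffer to AMediaExtractor through a temp file,
// because fileno() only yields a usable descriptor for files opened with fopen().
static media_status_t setDataSourceFromMemory(AMediaExtractor *ex,
                                              const void *data, size_t size,
                                              const char *cacheDir) {
    std::string tmpPath = std::string(cacheDir) + "/extractor_tmp.bin";

    FILE *tmp = fopen(tmpPath.c_str(), "wb");
    if (!tmp) return AMEDIA_ERROR_BASE;
    fwrite(data, 1, size, tmp);
    fclose(tmp);

    FILE *fp = fopen(tmpPath.c_str(), "rb");    // reopen so fileno() returns a real fd
    if (!fp) return AMEDIA_ERROR_BASE;
    media_status_t err = AMediaExtractor_setDataSourceFd(ex, fileno(fp), 0,
                                                         (off64_t) size);
    fclose(fp);  // the Google native-codec sample also closes the fd right after this call
    return err;
}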
I am using the native-codec sample app provided by Google (https://github.com/googlesamples/android-ndk/tree/master/native-codec).
The app has a folder (assets) which contains some video samples to play.
My goal is to read videos from the internal storage of the phone (e.g. /sdcard/filename.mp4).
I added these two lines to the manifest file, but this hasn't fixed the issue yet:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
I modified the code to get the video filename as an argument given by adb shell.
Here is the code:
mSourceString = getIntent().getStringExtra("arg");
if (!mCreated) {
if (mSourceString != null) {
mCreated = createStreamingMediaPlayer(getResources().getAssets(), mSourceString);
}
}
if (mCreated) {
mIsPlaying = !mIsPlaying;
setPlayingStreamingMediaPlayer(mIsPlaying);
}
The native code of the method which reads the video filename is the following:
jboolean Java_com_example_mohammed_myapplication_MainActivity_createStreamingMediaPlayer(JNIEnv* env, jclass clazz, jobject assetMgr, jstring filename)
{
LOGV("### create");
// convert Java string to UTF-8
const char *utf8 = env->GetStringUTFChars(filename, NULL);
LOGV("opening %s", utf8);
off_t outStart, outLen;
int fd = AAsset_openFileDescriptor(AAssetManager_open(AAssetManager_fromJava(env, assetMgr), utf8, 0),
&outStart, &outLen);
env->ReleaseStringUTFChars(filename, utf8);
if (fd < 0) {
LOGE("failed to open file: %s %d (%s)", utf8, fd, strerror(errno));
return JNI_FALSE;
}
data.fd = fd;
workerdata *d = &data;
AMediaExtractor *ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSourceFd(ex, d->fd,
static_cast<off64_t>(outStart),
static_cast<off64_t>(outLen));
close(d->fd);
if (err != AMEDIA_OK) {
LOGV("setDataSource error: %d", err);
return JNI_FALSE;
}
int numtracks = AMediaExtractor_getTrackCount(ex);
AMediaCodec *codec = NULL;
LOGV("input has %d tracks", numtracks);
for (int i = 0; i < numtracks; i++) {
AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
const char *s = AMediaFormat_toString(format);
LOGV("track %d format: %s", i, s);
const char *mime;
if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {
LOGV("no mime type");
return JNI_FALSE;
} else if (!strncmp(mime, "video/", 6)) {
// Omitting most error handling for clarity.
// Production code should check for errors.
AMediaExtractor_selectTrack(ex, i);
codec = AMediaCodec_createDecoderByType(mime);
AMediaCodec_configure(codec, format, d->window, NULL, 0);
d->ex = ex;
d->codec = codec;
d->renderstart = -1;
d->sawInputEOS = false;
d->sawOutputEOS = false;
d->isPlaying = false;
d->renderonce = true;
AMediaCodec_start(codec);
}
AMediaFormat_delete(format);
}
mlooper = new mylooper();
mlooper->post(kMsgCodecBuffer, d);
return JNI_TRUE;
}
The app plays the videos successfully when they are in the "assets" folder, i.e. inside the app. But when a video is outside the app (internal/external storage), the app stops working.
Is there a solution for this issue?
Apart from declaring the storage permission in the manifest, the user also needs to grant it manually.
For testing purposes, you can go to Settings -> Apps -> your app -> Permissions -> enable the storage permission. It should work fine then.
For production, you should request the permission via a runtime dialog. There are plenty of tutorials for that.
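Worth noting: AAssetManager_open() only resolves files bundled in the APK's assets folder, so for a path like /sdcard/filename.mp4 the file also has to be opened directly. A minimal sketch of that route, slotting into createStreamingMediaPlayer() in place of the asset-descriptor code (and assuming READ_EXTERNAL_STORAGE has already been granted):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Open a file from external storage with a plain file descriptor instead of AAssetManager.
int fd = open("/sdcard/filename.mp4", O_RDONLY);
if (fd < 0) {
    LOGE("failed to open file: %s", strerror(errno));
    return JNI_FALSE;
}
struct stat st;
fstat(fd, &st);  // file size is needed because setDataSourceFd takes an explicit length
AMediaExtractor *ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSourceFd(ex, fd, 0, st.st_size);
close(fd);  // mirrors the sample, which closes the fd right after setDataSourceFd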
I am trying to render a raw H.264 video to a surface (after decoding) and write it to a file at the same time.
The rendering works fine, but when I try to get the current output buffer, it always has a size of 8 and the output file ends up at only 3.87 KB.
It seems like the output buffer is locked by the surface (ANativeWindow)?
Can anyone give me advice on how to do this without creating another codec?
The codec is configured with an output surface:
if (AMEDIA_OK == AMediaCodec_configure(d->codec, d->format, d->window /* the native window */, NULL, 0))
Here is the code snippet where I try to get the output buffer:
if (!d->sawOutputEOS) {
AMediaCodecBufferInfo info;
auto status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, -1);
if (status >= 0) {
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGV("output EOS");
d->sawOutputEOS = true;
d->isPlaying = false;
}
int64_t delay = 333000;
usleep((useconds_t )delay / 15);
size_t size;
// here i get the output buffer
uint8_t *outputbuffer = AMediaCodec_getOutputBuffer(d->codec,status,&size);
write(d->fd1,outputbuffer,size); // the output is always 0
LOGV("%d",size); // the size is always 8
LOGV("FRAME num : %d", counter[d->nb]++);
AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
if (d->renderonce) {
d->renderonce = false;
return;
}
} else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
LOGV("output buffers changed");
} else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
auto format = AMediaCodec_getOutputFormat(d->codec);
LOGV("format changed to: %s", AMediaFormat_toString(format));
AMediaFormat_delete(format);
d->formatChanged = true;
} else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
LOGV("no output buffer right now");
} else {
LOGV("unexpected info code: %zd", status);
}
}
Thanks in advance
It's not locked; you asked the decoder to work for display, so it used the fastest route to display, without exposing the pixels to readable memory. You may find that the format is the opaque COLOR_FormatSurface, as explained here.
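If dumping the decoded frames matters more than zero-copy rendering, one option (a sketch, not the only way) is to configure the decoder without an output surface so the frames arrive in readable buffers; you then have to render or convert the YUV data yourself. Here mime, format, info and d->fd1 are assumed to come from the surrounding code:

// Sketch: decoder configured with no surface -> output buffers hold the raw YUV frame.
AMediaCodec *codec = AMediaCodec_createDecoderByType(mime);
AMediaCodec_configure(codec, format, /*surface=*/ nullptr, /*crypto=*/ nullptr, 0);
AMediaCodec_start(codec);

AMediaCodecBufferInfo info;
ssize_t status = AMediaCodec_dequeueOutputBuffer(codec, &info, 2000);
if (status >= 0) {
    size_t capacity = 0;
    uint8_t *frame = AMediaCodec_getOutputBuffer(codec, status, &capacity);
    // info.offset / info.size delimit the valid bytes of this frame
    write(d->fd1, frame + info.offset, info.size);
    AMediaCodec_releaseOutputBuffer(codec, status, false);  // nothing to render on a surface
}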
I want to analyse an audio file (MP3 in particular) which the user can select, and determine what notes are played, when they are played, and at what frequency.
I already have some working code for my computer, but I want to be able to use this on my phone as well.
In order to do this however, I need access to the bytes of the audio file. On my PC I could just open a stream and use AudioFormat to decode it and then read() the bytes frame by frame.
Looking at the Android Developer Forums I can only find classes and examples for playing a file (without access to the bytes) or recording to a file (I want to read from a file).
I'm pretty confident that I can set up a file chooser, but once I have the Uri from that, I don't know how to get a stream or the bytes.
Any help would be much appreciated :)
Edit: Is a similar solution to this possible? Android - Read a File
I don't know if I could decode the audio file that way or if there would be any problems with the Android API...
So I solved it in the following way:
Get an InputStream with
final InputStream inputStream = getContentResolver().openInputStream(selectedUri);
Then pass it to this function and decode it using classes from JLayer:
private synchronized void decode(InputStream in)
throws BitstreamException, DecoderException {
ArrayList<Short> output = new ArrayList<>(1024);
Bitstream bitstream = new Bitstream(in);
Decoder decoder = new Decoder();
float total_ms = 0f;
float nextNotify = -1f;
boolean done = false;
while (! done) {
Header frameHeader = bitstream.readFrame();
if (total_ms > nextNotify) {
mListener.OnDecodeUpdate((int) total_ms);
nextNotify += 500f;
}
if (frameHeader == null) {
done = true;
} else {
total_ms += frameHeader.ms_per_frame();
SampleBuffer buffer = (SampleBuffer) decoder.decodeFrame(frameHeader, bitstream); // CPU intense
if (buffer.getSampleFrequency() != 44100 || buffer.getChannelCount() != 2) {
throw new DecoderException("mono or non-44100 MP3 not supported", null);
}
short[] pcm = buffer.getBuffer();
for (int i = 0; i < pcm.length-1; i += 2) {
short l = pcm[i];
short r = pcm[i+1];
short mono = (short) ((l + r) / 2f);
output.add(mono); // RAM intense
}
}
bitstream.closeFrame();
}
bitstream.close();
mListener.OnDecodeComplete(output);
}
The full project (in case you want to look up the particulars) can be found here:
https://github.com/S7uXN37/MusicInterpreterStudio/
https://github.com/BelledonneCommunications/mediastreamer2
https://github.com/BelledonneCommunications/linphone-android
Using only the mediastreamer2 library, I am able to start an audio call with a given remote IP and port by calling the respective methods of audiostream.c.
I then needed to start a video call as well, so I initialised videostream.c, used its respective methods, and provided it with surfaces to render the remote and local camera feeds. I am able to start the video stream successfully with the remote port and IP given.
The problem is when I start both streams together: the sound stops and the video streaming stops as well; only the local camera feed works.
I have one magical method that does all of this for me. If I comment out the video part of it, the audio call works fine, and if I comment out the audio part, the video call works fine. But when I start both: no sound, no streaming.
Yet we still get the "AudioStream started successfully" and "VideoStream started successfully" logs.
Can someone with linphone experience help figure out the correct sequence of methods, or what we are doing wrong? Here is our method.
JNIEXPORT jint JNICALL Java_com_myapp_services_LinPhoneMSEngine_LinPhoneMSVE_1AudioStreamStartFull
(JNIEnv *env, jclass self, jstring remote_ip, jint remote_port, jint localport, jint payloadindex, jboolean isLowEnd)
{
int bRetVal = 0;
MSVideoSize size = {320, 240};
char rtcp_tool[128]={0};
int ret;
//jboolean copy;
char cname[128]={0};
const char *cremote_ip;
ortp_warning("Audio Stream Start Full");
LOGD("Audio Stream Start Full");
cremote_ip = (*env)->GetStringUTFChars(env, remote_ip, NULL);
ortp_warning("Cremote_ip= %s", cremote_ip);
LOGD("Cremote_ip= %s", cremote_ip);
// ms_filter_enable_statistics(TRUE);
veData->queue = ortp_ev_queue_new();
veData->soundCard = NULL;
set_playback_device();
ortp_warning("sound: playback_dev_id: %s", ms_snd_card_get_string_id(veData->soundCard));
LOGD("sound: playback_dev_id: %s", ms_snd_card_get_string_id(veData->soundCard));
veData->CaptureCard = NULL;
set_capture_device();
ortp_warning("sound: capture_dev_id: %s", ms_snd_card_get_string_id(veData->CaptureCard));
LOGD("sound: capture_dev_id: %s", ms_snd_card_get_string_id(veData->CaptureCard));
veData->audioStream = audio_stream_new(msFactory ,localport, localport + 1, false);
audio_stream_enable_adaptive_bitrate_control(veData->audioStream, true);
audio_stream_enable_adaptive_jittcomp(veData->audioStream, true);
rtp_session_set_jitter_compensation(veData->audioStream->ms.sessions.rtp_session, 50);
rtp_session_enable_rtcp_mux(veData->audioStream->ms.sessions.rtp_session, true);
ret=AUDIO_STREAM_FEATURE_VOL_SND | \
AUDIO_STREAM_FEATURE_VOL_RCV;
if (!isLowEnd)
{
ret = ret | AUDIO_STREAM_FEATURE_EC | AUDIO_STREAM_FEATURE_EQUALIZER | AUDIO_STREAM_FEATURE_DTMF | AUDIO_STREAM_FEATURE_DTMF_ECHO;
audio_stream_set_features(veData->audioStream, ret);
ortp_warning("Setting Echo Canceller params");
LOGD("Setting Echo Canceller params");
rtp_session_enable_jitter_buffer(veData->audioStream->ms.sessions.rtp_session, TRUE);
audio_stream_set_echo_canceller_params(veData->audioStream, 60, 0, 128);
audio_stream_enable_gain_control(veData->audioStream, true);
audio_stream_enable_automatic_gain_control(veData->audioStream, true);
}
else
{
audio_stream_set_features(veData->audioStream, ret);
ortp_warning("No Echo Canceller params!!");
LOGD("No Echo Canceller params!!");
rtp_session_enable_jitter_buffer(veData->audioStream->ms.sessions.rtp_session, FALSE);
}
if( veData->audioStream == NULL){
ortp_warning("AudioStream is Null");
LOGD("AudioStream is Null");
bRetVal = -1;
return -1;
}
audio_stream_play_received_dtmfs(veData->audioStream, true);
snprintf(rtcp_tool,sizeof(rtcp_tool)-1,"%s-%s","Android","2.8.0");
snprintf(cname,sizeof(cname)-1,"%s-%d", cremote_ip, remote_port);
ortp_warning("cname value: %s",cname);
LOGD("cname value: %s",cname);
audio_stream_prepare_sound(veData->audioStream, veData->soundCard, veData->CaptureCard);
if(0== audio_stream_start_full(veData->audioStream,veData->prof, cremote_ip, remote_port, cremote_ip, remote_port + 1, 114, 50,NULL,NULL,veData->soundCard,veData->CaptureCard, !isLowEnd))
{
veData->rtpSession = veData->audioStream->ms.sessions.rtp_session;
ortp_warning("AudioStreamStartFull Success");
post_audio_config(veData->audioStream);
audio_stream_set_rtcp_information(veData->audioStream, cname, rtcp_tool);
}
else
{
ortp_warning("AudioStream start failed");
bRetVal = -1;
}
// init video stream
veData->videoStream = video_stream_new(msFactory, localport,localport+1,false);
video_stream_enable_adaptive_bitrate_control(veData->videoStream, true);
video_stream_enable_adaptive_jittcomp(veData->videoStream, true);
rtp_session_enable_rtcp_mux(veData->videoStream->ms.sessions.rtp_session, true);
video_stream_use_video_preset(veData->videoStream, "custom");
video_stream_set_sent_video_size(veData->videoStream, size);
video_stream_set_preview_size(veData->videoStream, size);
video_stream_enable_self_view(veData->videoStream, TRUE);
ortp_message("Video Stream : [%p] & native window id : [%p]",veData->videoStream, veData->native_window_id);
video_stream_set_native_window_id(veData->videoStream, veData->native_window_id);
ortp_message("Video Stream : [%p] & preview window id : [%p]",veData->videoStream, veData->native_preview_window_id);
video_stream_set_native_preview_window_id(veData->videoStream, veData->native_preview_window_id);
video_stream_use_preview_video_window(veData->videoStream, TRUE);
video_stream_set_device_rotation(veData->videoStream, 0);
video_stream_set_fps(veData->videoStream, 10.0);
// link audio with video
audio_stream_link_video(veData->audioStream, veData->videoStream);
ms_message("Setting webcam as %p", veData->msWebCam);
if(bRetVal != -1 && video_stream_start(veData->videoStream, veData->prof,
cremote_ip,
remote_port,
cremote_ip,
remote_port + 1,
101,
60,
veData->msWebCam) >=0 ) {
ortp_warning("VideoStream started successfully");
veData->rtpSession = veData->videoStream->ms.sessions.rtp_session;
video_stream_set_rtcp_information(veData->videoStream, cname,rtcp_tool);
}
else
{
ortp_warning("VideoStream start failed");
bRetVal = -1;
}
(*env)->ReleaseStringUTFChars(env, remote_ip, cremote_ip);
return bRetVal;
}
Okay, finally, with the help of #belledonne-communications, we figured out that we were sending both streams on the same port, which is not possible. They need to be sent on different ports. We corrected it and it worked.
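For reference, a sketch of the corrected port layout (the concrete offsets are just an illustration; the point is that the audio and video streams each get their own local RTP/RTCP pair, and the remote ports passed to audio_stream_start_full() and video_stream_start() have to differ in the same way):

/* Audio and video must not share a port: give each stream its own RTP/RTCP pair. */
int audio_rtp_port  = localport;        /* e.g. 7078 */
int audio_rtcp_port = localport + 1;    /*      7079 */
int video_rtp_port  = localport + 2;    /*      7080, distinct from the audio ports */
int video_rtcp_port = localport + 3;    /*      7081 */

veData->audioStream = audio_stream_new(msFactory, audio_rtp_port, audio_rtcp_port, false);
/* ... */
veData->videoStream = video_stream_new(msFactory, video_rtp_port, video_rtcp_port, false);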