I am trying to solve a big problem but am stuck on a very small issue. I am trying to read the audio streams inside a video file with the help of ffmpeg, but the loop that should traverse all of the file's streams only runs a couple of times. I cannot figure out what the issue is, as others have used this code very similarly.
Here is my code, please check:
JNIEXPORT jint JNICALL Java_ru_dzakhov_ffmpeg_test_MainActivity_logFileInfo
(JNIEnv * env,
jobject this,
jstring filename
)
{
AVFormatContext *pFormatCtx;
int i,j,k, videoStream, audioStream;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
AVFrame *pFrame;
AVPacket packet;
int frameFinished;
float aspect_ratio;
AVCodecContext *aCodecCtx;
AVCodec *aCodec;
//uint8_t inbuf[AUDIO_INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
j=0;
av_register_all();
char *str = (*env)->GetStringUTFChars(env, filename, 0);
LOGI(str);
// Open video file
if(av_open_input_file(&pFormatCtx, str, NULL, 0, NULL)!=0)
;
// Retrieve stream information
if(av_find_stream_info(pFormatCtx)<0)
;
LOGI("Separating");
// Find the first video stream
videoStream=-1;
audioStream=-1;
for(i=0; i<&pFormatCtx->nb_streams; i++) {
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
{
LOGI("Audio Stream");
audioStream=i;
}
}
av_write_header(pFormatCtx);
if(videoStream==-1)
LOGI("Video stream is -1");
if(audioStream==-1)
LOGI("Audio stream is -1");
return i;}
You may be having an issue related to library loading and unloading, and how that relates to repeated calls through JNI. I'm not sure from your symptom alone, but if you have no solution, try reading:
here
and here
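If repeated calls through JNI do turn out to be the problem, one common pattern (a minimal sketch, assuming a single native library; not taken from the code above) is to move FFmpeg's one-time global initialization into JNI_OnLoad, which runs exactly once when System.loadLibrary() loads the .so, instead of redoing it inside the method that gets called repeatedly:

#include <jni.h>
#include "libavformat/avformat.h"

JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
    // Global, one-time FFmpeg setup; keeping it out of the per-call
    // native methods avoids re-registering everything on every call.
    av_register_all();
    return JNI_VERSION_1_6;
}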
I want to create an Android application which can locate a video file (more than 300 MB) and compress it to a lower-size mp4 file.
I already tried to do it with this.
That tutorial is only effective when you're compressing a small video (below 100 MB).
So I tried to implement it using JNI.
I managed to build ffmpeg using this.
But currently what I want to do is compress videos. I don't have very good knowledge of JNI, but I tried to understand it using the following link.
If someone can guide me through the steps to compress a video after opening it using JNI, that would be really great. Thanks.
Assuming you've got the String path of the input file, we can accomplish your task fairly easily. I'll assume you have an understanding of the NDK basics: how to connect a native .c file to native methods in a corresponding .java file (let me know if that's part of your question). Instead, I'll focus on how to use FFmpeg within the context of Android / JNI.
High-Level Overview (the steps the code below walks through):
Open the input file and create an output AVFormatContext for the target container.
Copy the stream and codec parameters from input to output, then adjust the output codec parameters to get the compression you want.
Open the output file for writing, write the header, copy packets from input to output in a read/write loop, and finally write the trailer.
#include <jni.h>
#include <android/log.h>
#include <string.h>
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#define LOG_TAG "FFmpegWrapper"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
void Java_com_example_yourapp_yourJavaClass_compressFile(JNIEnv *env, jobject obj, jstring jInputPath, jstring jInputFormat, jstring jOutputPath, jstring jOutputFormat){
// One-time FFmpeg initialization
av_register_all();
avformat_network_init();
avcodec_register_all();
const char* inputPath = (*env)->GetStringUTFChars(env, jInputPath, NULL);
const char* outputPath = (*env)->GetStringUTFChars(env, jOutputPath, NULL);
// format names are hints. See available options on your host machine via $ ffmpeg -formats
const char* inputFormat = (*env)->GetStringUTFChars(env, jInputFormat, NULL);
const char* outputFormat = (*env)->GetStringUTFChars(env, jOutputFormat, NULL);
AVFormatContext *outputFormatContext = avFormatContextForOutputPath(outputPath, outputFormat);
AVFormatContext *inputFormatContext = avFormatContextForInputPath(inputPath, inputFormat /* not necessary since file can be inspected */);
copyAVFormatContext(&outputFormatContext, &inputFormatContext);
// Modify outputFormatContext->codec parameters per your liking
// See http://ffmpeg.org/doxygen/trunk/structAVCodecContext.html
int result = openFileForWriting(outputFormatContext, outputPath);
if(result < 0){
LOGE("openFileForWriting error: %d", result);
}
writeFileHeader(outputFormatContext);
// Copy input to output frame by frame
AVPacket *inputPacket;
inputPacket = av_malloc(sizeof(AVPacket));
int continueRecording = 1;
int avReadResult = 0;
int writeFrameResult = 0;
int frameCount = 0;
while(continueRecording == 1){
avReadResult = av_read_frame(inputFormatContext, inputPacket);
frameCount++;
if(avReadResult != 0){
if (avReadResult != AVERROR_EOF) {
LOGE("av_read_frame error: %s", stringForAVErrorNumber(avReadResult));
}else{
LOGI("End of input file");
}
continueRecording = 0;
continue; // don't fall through and write a packet that was never read
}
AVStream *outStream = outputFormatContext->streams[inputPacket->stream_index];
writeFrameResult = av_interleaved_write_frame(outputFormatContext, inputPacket);
if(writeFrameResult < 0){
LOGE("av_interleaved_write_frame error: %s", stringForAVErrorNumber(avReadResult));
}
}
// Finalize the output file
int writeTrailerResult = writeFileTrailer(outputFormatContext);
if(writeTrailerResult < 0){
LOGE("av_write_trailer error: %s", stringForAVErrorNumber(writeTrailerResult));
}
LOGI("Wrote trailer");
}
For the full content of all the auxiliary functions (the ones in camelCase), see my full project on GitHub. Got questions? I'm happy to elaborate.
From the examples I got the basic idea of this code. However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
Can I expect a frame that was retrieved by calling one of the decoder functions, e.g. avcodec_decode_audio4, to hold suitable values to put directly into an encoder, or is the resampling step mentioned in the comment mandatory?
Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function open_file_for_input there might not be a function open_file_for_output. Also, there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it's very hard to tell whether the general approach is right or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.
ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for these terms. Is it possible to write working code without deep knowledge of audio?
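For reference: 'packed' means the samples of all channels are interleaved in a single buffer (L R L R ...), while 'planar' means each channel lives in its own plane (frame->extended_data[ch]). If the decoder gives you a planar format and the encoder wants a packed one, libswresample does the conversion; a minimal sketch, assuming a stereo input and a packed signed 16-bit target (the helper name and the fixed 44100 Hz output rate are made up for illustration), could look like this:

#include "libavcodec/avcodec.h"
#include "libswresample/swresample.h"
#include "libavutil/channel_layout.h"

// Hypothetical helper: convert one decoded (possibly planar) frame into
// packed S16 samples in out_buf. Returns samples written, or -1 on error.
static int convert_frame_to_packed_s16(AVFrame *frame, uint8_t *out_buf, int out_buf_samples) {
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 44100,            /* output: packed */
            frame->channel_layout, (enum AVSampleFormat)frame->format,
            frame->sample_rate,                                       /* input as decoded */
            0, NULL);
    if (!swr || swr_init(swr) < 0)
        return -1;
    uint8_t *out[] = { out_buf };
    int samples = swr_convert(swr, out, out_buf_samples,
                              (const uint8_t **)frame->extended_data, frame->nb_samples);
    swr_free(&swr);   /* in real code, create the context once and reuse it per frame */
    return samples;
}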
Here is my code so far, which right now crashes at avcodec_encode_audio2, and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
jboolean isCopy;
jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
logging = (*env)->GetStaticIntField(env, configClass, fid);
/// open input
const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
AVFormatContext* pInputCtx;
AVStream* pInputStream;
open_input(sourceFile, &pInputCtx, &pInputStream);
// open output
const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
AVFormatContext* pOutputCtx;
AVOutputFormat* pOutputFmt;
AVStream* pOutputStream;
open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);
/// decode/encode
error = avformat_write_header(pOutputCtx, NULL);
DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));
AVFrame* frame = avcodec_alloc_frame();
DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
frame->pts = 0;
LOGI("allocate packet");
AVPacket pktIn;
AVPacket pktOut;
LOGI("done");
int got_frame, got_packet, len, frame_count = 0;
int64_t processed_time = 0, duration = pInputStream->duration;
while (av_read_frame(pInputCtx, &pktIn) >= 0) {
do {
len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
if (len < 0) break;
len = FFMIN(len, pktIn.size);
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
if (got_frame) {
do {
av_init_packet(&pktOut);
pktOut.data = NULL;
pktOut.size = 0;
LOGI("encode frame");
DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
LOGI("tests done");
len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
LOGI("encode done");
DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
} while (!got_packet);
// write packet;
LOGI("write packet");
/* Write the compressed frame to the media file. */
error = av_interleaved_write_frame(pOutputCtx, &pktOut);
DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
av_free_packet(&pktOut);
}
pktIn.data += len;
pktIn.size -= len;
} while (pktIn.size > 0);
av_free_packet(&pktIn);
}
LOGI("write trailer");
av_write_trailer(pOutputCtx);
LOGI("end");
/// close resources
avcodec_free_frame(&frame);
avcodec_close(pInputStream->codec);
av_free(pInputStream->codec);
avcodec_close(pOutputStream->codec);
av_free(pOutputStream->codec);
avformat_close_input(&pInputCtx);
avformat_free_context(pOutputCtx);
return 0;
}
Meanwhile I have figured this out and written an Android Library Project that does this (for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details
Hi, I am trying to change the sample rate of the audio streams of a video file using ffmpeg, but I am not able to change the audio of the original file. So far I am able to read the audio and video streams separately and can show the audio sample rate, but I don't know how to apply this change permanently. Following is my Java code.
public class MainActivity extends Activity {
public static native float logFileInfo(String inputfilename);
static
{
System.loadLibrary("mylib");
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
TextView tv= new TextView(this);
String path= Environment.getExternalStorageDirectory().getPath();
path+="/test2_modified.mp4";
float x=logFileInfo(path);
String y=Float.toString(x);
tv.setText(y);
setContentView(tv);
}
}
And here is my native file:
JNIEXPORT jfloat JNICALL Java_ru_dzakhov_ffmpeg_test_MainActivity_logFileInfo
(JNIEnv * env,
jobject this,
jstring filename
)
{
AVFormatContext *pFormatCtx;
int i, videoStream, audioStream;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
AVFrame *pFrame;
AVPacket packet;
int frameFinished;
float aspect_ratio;
AVCodecContext *aCodecCtx;
AVCodec *aCodec;
av_register_all();
char *str = (*env)->GetStringUTFChars(env, filename, 0);
LOGI(str);
// Open video file
if(av_open_input_file(&pFormatCtx, str, NULL, 0, NULL)!=0)
;
// Retrieve stream information
if(av_find_stream_info(pFormatCtx)<0)
;
LOGI("Separating");
// Find the first video stream
videoStream=-1;
audioStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++) {
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO &&
videoStream < 0) {
videoStream=i;
}
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO &&
audioStream < 0) {
audioStream=i;
}
}
if(videoStream==-1)
LOGI("Video stream is -1");
if(audioStream==-1)
LOGI("Audio stream is -1");
//pFormatCtx->streams[audioStream]->codec->sample_rate=16000;
aCodecCtx=pFormatCtx->streams[audioStream]->codec;
//aCodecCtx->sample_rate=16000;
jfloat sr=aCodecCtx->sample_rate;
return sr;}
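Assigning to pFormatCtx->streams[audioStream]->codec->sample_rate (the commented-out lines above) only changes the in-memory codec context; the file on disk is untouched. Making the change permanent means decoding, resampling (libswresample, as in the sketch earlier on this page), re-encoding at the new rate, and muxing to a new file. As a rough illustration of just the encoder-setup step, with a hypothetical codec and values, not a complete answer:

#include "libavcodec/avcodec.h"
#include "libavutil/channel_layout.h"

// Hypothetical: allocate and open an encoder context for the new target rate.
// (On very old builds the codec id is spelled CODEC_ID_AAC instead.)
static AVCodecContext* open_resample_encoder(void) {
    AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_AAC);
    AVCodecContext *encCtx = avcodec_alloc_context3(enc);
    encCtx->sample_rate    = 16000;               /* the new rate you want */
    encCtx->channel_layout = AV_CH_LAYOUT_STEREO;
    encCtx->channels       = 2;
    encCtx->sample_fmt     = enc->sample_fmts[0]; /* first format this encoder supports */
    encCtx->bit_rate       = 64000;
    if (avcodec_open2(encCtx, enc, NULL) < 0)
        return NULL;
    return encCtx;
}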
I am using the standard code pasted below (ref: http://dranger.com/ffmpeg/) to use ffmpeg on Android via the NDK. My code works fine on Ubuntu 10.04 using gcc, but I am facing an issue on Android: av_read_frame(pFormatCtx, &packet) always returns packet.stream_index=0. I have tested my code with various rtsp URLs and I see the same behaviour in all cases. I do not have any linking or compiling issues, as everything else seems to be working fine. I have been trying to solve this for the last 2 days but I am badly stuck. Please point me in the right direction.
#include <jni.h>
#include <android/log.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <stdio.h>
#define DEBUG_TAG "mydebug_ndk"
jint Java_com_example_tut2_MainActivity_myfunc(JNIEnv * env, jobject this,jstring myjurl) {
AVFormatContext *pFormatCtx = NULL;
int i, videoStream;
AVCodecContext *pCodecCtx = NULL;
AVCodec *pCodec = NULL;
AVFrame *pFrame = NULL;
AVPacket packet;
int frameFinished;
AVDictionary *optionsDict = NULL;
struct SwsContext *sws_ctx = NULL;
jboolean isCopy;
const char * mycurl = (*env)->GetStringUTFChars(env, myjurl, &isCopy);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:p2: [%s]", mycurl);
// Register all formats and codecs
av_register_all();
avformat_network_init();
// Open video file
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:before_open");
if(avformat_open_input(&pFormatCtx, mycurl, NULL, NULL)!=0)
return -1;
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "start: %d\t%d\n",pFormatCtx->raw_packet_buffer_remaining_size,pFormatCtx->max_index_size);
(*env)->ReleaseStringUTFChars(env, myjurl, mycurl);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:before_stream");
// Retrieve stream information
if(avformat_find_stream_info(pFormatCtx, NULL)<0)
return -1;
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:after_stream");
// Find the first video stream
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1)
return -1; // Didn't find a video stream
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:after_videostream");
// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:after_codec_context");
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:after_decoder");
if(pCodec==NULL)
return -1;
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:found_decoder");
// Open codec
if(avcodec_open2(pCodecCtx, pCodec, &optionsDict)<0)
return -1;
// Allocate video frame
pFrame=avcodec_alloc_frame();
sws_ctx = sws_getContext(pCodecCtx->width,pCodecCtx->height,pCodecCtx->pix_fmt,pCodecCtx->width,
pCodecCtx->height,PIX_FMT_YUV420P,SWS_BILINEAR,NULL,NULL,NULL);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:before_while");
int count=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "entered while: %d %d %d\n", packet.duration,packet.stream_index,packet.size);
if(packet.stream_index==videoStream) {
// Decode video frame
//break;
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,&packet);
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:in_while");
if(frameFinished) {
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:gng_out_of_while");
break;
}
}
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
if(++count>1000)
return -2; //infinite while loop
}
__android_log_print(ANDROID_LOG_DEBUG, DEBUG_TAG, "NDK:after_while");
// Free the YUV frame
av_free(pFrame);
// Close the codec
avcodec_close(pCodecCtx);
// Close the video file
avformat_close_input(&pFormatCtx);
return 0;
}
I had the same problem too. After some testing I found my mistake; I hope this helps you.
My av_read_frame() also always returned packets with stream index 0.
The problem was that I had mixed two versions of the ffmpeg libs: I had previously installed ffmpeg 0.9.x and later installed ffmpeg 1.2, so some libs came from 0.9.x and others from 1.2. Hence the error.
My solution: uninstall every version of ffmpeg, then reinstall a single version (in my case 1.2).
After that, av_read_frame() works correctly. :)
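If you suspect the same kind of mix-up, one quick sanity check (a sketch, not part of the code above) is to log the versions your headers were compiled against next to the versions of the shared libraries actually loaded at runtime; if the pairs differ, two FFmpeg installations are being mixed:

#include <android/log.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static void log_ffmpeg_versions(void) {
    // Header (compile-time) version vs. library (run-time) version.
    __android_log_print(ANDROID_LOG_INFO, "ffmpeg-check",
        "libavcodec:  headers %u, runtime %u", LIBAVCODEC_VERSION_INT, avcodec_version());
    __android_log_print(ANDROID_LOG_INFO, "ffmpeg-check",
        "libavformat: headers %u, runtime %u", LIBAVFORMAT_VERSION_INT, avformat_version());
}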
I am making an Android-to-Android VoIP (loudspeaker) app using the AudioRecord and AudioTrack classes, along with Speex via the NDK to do echo cancellation. I was able to successfully pass data into and retrieve data from Speex's speex_echo_cancellation() function, but the echo remains.
Here is the relevant Android thread code that records/sends and receives/plays audio:
//constructor
public MyThread(DatagramSocket socket, int frameSize, int filterLength){
this.socket = socket;
nativeMethod_initEchoState(frameSize, filterLength);
}
public void run(){
short[] audioShorts, recvShorts, recordedShorts, filteredShorts;
byte[] audioBytes, recvBytes;
int shortsRead;
DatagramPacket packet;
//initialize recorder and player
int samplingRate = 8000;
int managerBufferSize = 2000;
AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, samplingRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, managerBufferSize, AudioTrack.MODE_STREAM);
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, samplingRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, managerBufferSize);
recorder.startRecording();
player.play();
//record first packet
audioShorts = new short[1000];
shortsRead = recorder.read(audioShorts, 0, audioShorts.length);
//convert shorts to bytes to send
audioBytes = new byte[shortsRead*2];
ByteBuffer.wrap(audioBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(audioShorts);
//send bytes
packet = new DatagramPacket(audioBytes, audioBytes.length);
socket.send(packet);
while (!this.isInterrupted()){
//recieve packet/bytes (received audio data should have echo cancelled already)
recvBytes = new byte[2000];
packet = new DatagramPacket(recvBytes, recvBytes.length);
socket.receive(packet);
//convert bytes to shorts
recvShorts = new short[packet.getLength()/2];
ByteBuffer.wrap(packet.getData(), 0, packet.getLength()).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(recvShorts);
//play shorts
player.write(recvShorts, 0, recvShorts.length);
//record shorts
recordedShorts = new short[1000];
shortsRead = recorder.read(recordedShorts, 0, recordedShorts.length);
//send played and recorded shorts into speex,
//returning audio data with the echo removed
filteredShorts = nativeMethod_speexEchoCancel(recordedShorts, recvShorts);
//convert filtered shorts to bytes
audioBytes = new byte[shortsRead*2];
ByteBuffer.wrap(audioBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(filteredShorts);
//send off bytes
packet = new DatagramPacket(audioBytes, audioBytes.length);
socket.send(packet);
}//end of while loop
}
Here is the relevant NDK / JNI code:
void nativeMethod_initEchoState(JNIEnv *env, jobject jobj, jint frameSize, jint filterLength){
echo_state = speex_echo_state_init(frameSize, filterLength);
}
jshortArray nativeMethod_speexEchoCancel(JNIEnv *env, jobject jObj, jshortArray input_frame, jshortArray echo_frame){
//create native shorts from java shorts
jshort *native_input_frame = (*env)->GetShortArrayElements(env, input_frame, NULL);
jshort *native_echo_frame = (*env)->GetShortArrayElements(env, echo_frame, NULL);
//allocate memory for output data
jint length = (*env)->GetArrayLength(env, input_frame);
jshortArray temp = (*env)->NewShortArray(env, length);
jshort *native_output_frame = (*env)->GetShortArrayElements(env, temp, 0);
//call echo cancellation
speex_echo_cancellation(echo_state, native_input_frame, native_echo_frame, native_output_frame);
//convert native output to java layer output
jshortArray output_shorts = (*env)->NewShortArray(env, length);
(*env)->SetShortArrayRegion(env, output_shorts, 0, length, native_output_frame);
//cleanup and return
(*env)->ReleaseShortArrayElements(env, input_frame, native_input_frame, 0);
(*env)->ReleaseShortArrayElements(env, echo_frame, native_echo_frame, 0);
(*env)->ReleaseShortArrayElements(env, temp, native_output_frame, 0);
return output_shorts;
}
This code runs fine, and audio data is definitely being sent/received/processed/played from Android to Android. Given the audio sample rate of 8000 Hz and a packet size of 2000 bytes / 1000 shorts, I've found that a frameSize of 1000 is needed for the played audio to be smooth. Most values of filterLength (aka tail length, according to the Speex docs) will run, but seem to have no effect on the echo removal.
Does anyone understand AEC well enough to give me some pointers on implementing or configuring Speex? Thanks for reading.
Your code is right but is missing something in the native code. I modified the init method and added the Speex preprocessor after the echo cancellation, and then your code worked well (I tried it on Windows).
Here is the native code:
#include <jni.h>
#include "speex/speex_echo.h"
#include "speex/speex_preprocess.h"
#include "EchoCanceller_jniHeader.h"
SpeexEchoState *st;
SpeexPreprocessState *den;
JNIEXPORT void JNICALL Java_speex_EchoCanceller_open
(JNIEnv *env, jobject jObj, jint jSampleRate, jint jBufSize, jint jTotalSize)
{
//init
int sampleRate=jSampleRate;
st = speex_echo_state_init(jBufSize, jTotalSize);
den = speex_preprocess_state_init(jBufSize, sampleRate);
speex_echo_ctl(st, SPEEX_ECHO_SET_SAMPLING_RATE, &sampleRate);
speex_preprocess_ctl(den, SPEEX_PREPROCESS_SET_ECHO_STATE, st);
}
JNIEXPORT jshortArray JNICALL Java_speex_EchoCanceller_process
(JNIEnv * env, jobject jObj, jshortArray input_frame, jshortArray echo_frame)
{
//create native shorts from java shorts
jshort *native_input_frame = (*env)->GetShortArrayElements(env, input_frame, NULL);
jshort *native_echo_frame = (*env)->GetShortArrayElements(env, echo_frame, NULL);
//allocate memory for output data
jint length = (*env)->GetArrayLength(env, input_frame);
jshortArray temp = (*env)->NewShortArray(env, length);
jshort *native_output_frame = (*env)->GetShortArrayElements(env, temp, 0);
//call echo cancellation
speex_echo_cancellation(st, native_input_frame, native_echo_frame, native_output_frame);
//preprocess output frame
speex_preprocess_run(den, native_output_frame);
//convert native output to java layer output
jshortArray output_shorts = (*env)->NewShortArray(env, length);
(*env)->SetShortArrayRegion(env, output_shorts, 0, length, native_output_frame);
//cleanup and return
(*env)->ReleaseShortArrayElements(env, input_frame, native_input_frame, 0);
(*env)->ReleaseShortArrayElements(env, echo_frame, native_echo_frame, 0);
(*env)->ReleaseShortArrayElements(env, temp, native_output_frame, 0);
return output_shorts;
}
JNIEXPORT void JNICALL Java_speex_EchoCanceller_close
(JNIEnv *env, jobject jObj)
{
//close
speex_echo_state_destroy(st);
speex_preprocess_state_destroy(den);
}
You can find useful samples, such as encoding, decoding and echo cancellation, in the Speex library's source (http://www.speex.org/downloads/).
Are you properly aligning the far-end signal (what you call recv) and the near-end signal (what you call record)? There is always some playback/record latency which needs to be accounted for. This generally requires buffering the far-end signal in a ring buffer for some specified period of time. On PCs this is usually about 50-120 ms; on Android I suspect it's much higher, probably in the range of 150-400 ms. I would recommend using a 100 ms tail length with Speex and adjusting the size of your far-end buffer until the AEC converges. These changes should allow the AEC to converge, independently of the inclusion of the preprocessor, which is not required here.
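To illustrate the alignment idea (a sketch under assumptions, not Speex API: the frame size and delay below are hypothetical and need tuning), you can delay the far-end samples by a fixed number of frames before handing them to speex_echo_cancellation() as the echo reference:

#include <string.h>
#include <stdint.h>

#define FRAME_SIZE   160   /* e.g. 20 ms at 8 kHz; match your Speex frame size */
#define DELAY_FRAMES 16    /* ~320 ms of far-end buffering; tune until the AEC converges */

static int16_t far_buf[DELAY_FRAMES][FRAME_SIZE]; /* simple ring buffer of far-end frames */
static int far_head = 0;

/* Push the frame just handed to the speaker and get back the frame played
 * DELAY_FRAMES ago; the delayed frame is what should be fed to
 * speex_echo_cancellation() as the far-end/echo reference. */
static void push_far_end(const int16_t *played, int16_t *delayed_out) {
    memcpy(delayed_out, far_buf[far_head], sizeof(int16_t) * FRAME_SIZE);
    memcpy(far_buf[far_head], played, sizeof(int16_t) * FRAME_SIZE);
    far_head = (far_head + 1) % DELAY_FRAMES;
}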