Android custom ROM: force software decoders

I'm building a ROM from AOSP, running on Nexus 5X (bullhead).
I want to completely disable the hardware audio/video decoders and make the platform route everything through software, so it behaves exactly as it does on the emulator.
I've tried editing audio_policy_configuration.xml and media_codecs.xml to remove the hardware decoders, but I get error messages in logcat and no audio plays, and I have no idea whether this is even the right direction.

I achieved what I wanted by editing the following functions in hardware/qcom/audio/hal/audio_hw.c:
    static uint32_t out_get_sample_rate(const struct audio_stream *stream)
    {
        /* struct stream_out *out = (struct stream_out *)stream;
        return out->sample_rate; */
        return 44100;
    }

    static uint32_t out_get_channels(const struct audio_stream *stream)
    {
        /* struct stream_out *out = (struct stream_out *)stream;
        return out->channel_mask; */
        return AUDIO_CHANNEL_OUT_STEREO;
    }

    static audio_format_t out_get_format(const struct audio_stream *stream)
    {
        /* struct stream_out *out = (struct stream_out *)stream;
        return out->format; */
        return AUDIO_FORMAT_PCM_16_BIT;
    }

Related

libusb on Lollipop - fails to get device list

A few days ago I tested my app on Android Lollipop and it stopped working. After debugging I found that libusb fails to get the device list:
    struct libusb_device **devs;
    int devs_count = libusb_get_device_list(ctx, &devs);
I continued to dig through the sources and found the next failure (in linux_usbfs.c):
    DIR *buses = opendir(usbfs_path); // '/dev/bus/usb', correct
    struct discovered_devs *discdevs = *_discdevs;
    int r = 0;

    if (!buses) {
        usbi_err(ctx, "opendir buses failed errno=%d", errno);
        return LIBUSB_ERROR_IO; // my case!
    }
usbfs_path is correct (/dev/bus/usb) and my device is /dev/bus/usb/003/002.
How can I work with a USB device using libusb on Android Lollipop?
Previously I iterated over the devices, found mine by VID and PID, requested its endpoints, and worked with it as usual. Now I can't get the struct libusb_device ** list via libusb_get_device_list, and that blocks everything. What can I do, given the device path and an open connection file descriptor from Android?
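The flow that worked before Lollipop was roughly this (a from-memory sketch; MY_VID and MY_PID stand in for the real IDs and error handling is trimmed):
    #define MY_VID 0x1234   /* placeholder */
    #define MY_PID 0x5678   /* placeholder */

    struct libusb_device **devs;
    struct libusb_device_handle *handle = NULL;
    ssize_t count = libusb_get_device_list(ctx, &devs);   /* this is the call that now fails */

    for (ssize_t i = 0; i < count; i++) {
        struct libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(devs[i], &desc) < 0)
            continue;
        if (desc.idVendor == MY_VID && desc.idProduct == MY_PID) {
            libusb_open(devs[i], &handle);   /* then get endpoints, claim the interface, transfer */
            break;
        }
    }
    libusb_free_device_list(devs, 1);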
PS. testing on Nexus 9 with Android 5.1.1
PPS. I can't use libusb_open_device_with_vid_pid either, as it requests the device list too:
    DEFAULT_VISIBILITY
    libusb_device_handle * LIBUSB_CALL libusb_open_device_with_vid_pid(
        libusb_context *ctx, uint16_t vendor_id, uint16_t product_id)
    {
        struct libusb_device **devs;
        struct libusb_device *found = NULL;
        struct libusb_device *dev;
        struct libusb_device_handle *handle = NULL;
        size_t i = 0;
        int r;

        if (libusb_get_device_list(ctx, &devs) < 0)
            return NULL;

Use ffmpeg API to convert audio files - crash on avcodec_encode_audio2

From the examples I got the basic idea of this code.
However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
    inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
1. Can I expect a frame retrieved by one of the decoder functions, e.g. avcodec_decode_audio4, to hold values suitable for passing directly to an encoder, or is the resampling step mentioned in the comment mandatory?
2. Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function open_file_for_input there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it's very hard to tell whether the general approach is right or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.
3. ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for them. Is it possible to write working code without deep knowledge of audio?
Here is my code so far; right now it crashes at avcodec_encode_audio2 and I don't know why.
    int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
        jboolean isCopy;
        jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
        jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
        logging = (*env)->GetStaticIntField(env, configClass, fid);

        /// open input
        const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
        AVFormatContext* pInputCtx;
        AVStream* pInputStream;
        open_input(sourceFile, &pInputCtx, &pInputStream);

        // open output
        const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
        const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
        const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
        AVFormatContext* pOutputCtx;
        AVOutputFormat* pOutputFmt;
        AVStream* pOutputStream;
        open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

        /// decode/encode
        error = avformat_write_header(pOutputCtx, NULL);
        DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

        AVFrame* frame = avcodec_alloc_frame();
        DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
        frame->pts = 0;

        LOGI("allocate packet");
        AVPacket pktIn;
        AVPacket pktOut;
        LOGI("done");

        int got_frame, got_packet, len, frame_count = 0;
        int64_t processed_time = 0, duration = pInputStream->duration;

        while (av_read_frame(pInputCtx, &pktIn) >= 0) {
            do {
                len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
                DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
                if (len < 0) break;
                len = FFMIN(len, pktIn.size);
                size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
                LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));

                if (got_frame) {
                    do {
                        av_init_packet(&pktOut);
                        pktOut.data = NULL;
                        pktOut.size = 0;
                        LOGI("encode frame");
                        DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                        DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                        DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                        LOGI("tests done");
                        len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                        LOGI("encode done");
                        DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                    } while (!got_packet);

                    // write packet
                    LOGI("write packet");
                    /* Write the compressed frame to the media file. */
                    error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                    DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                    av_free_packet(&pktOut);
                }

                pktIn.data += len;
                pktIn.size -= len;
            } while (pktIn.size > 0);
            av_free_packet(&pktIn);
        }

        LOGI("write trailer");
        av_write_trailer(pOutputCtx);
        LOGI("end");

        /// close resources
        avcodec_free_frame(&frame);
        avcodec_close(pInputStream->codec);
        av_free(pInputStream->codec);
        avcodec_close(pOutputStream->codec);
        av_free(pOutputStream->codec);
        avformat_close_input(&pInputCtx);
        avformat_free_context(pOutputCtx);
        return 0;
    }
Meanwhile I have figured this out and written an Android library project that does this (for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details.
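Regarding the planar/packed confusion: the conversion that the demuxing.c comment refers to can be done with libswresample. Here is a minimal sketch, not taken from my library, assuming the decoder delivers planar signed 16-bit stereo at 44100 Hz and the encoder expects the packed equivalent (adjust the layouts, formats and rates to the real codec contexts; frame is the AVFrame filled by avcodec_decode_audio4):
    #include <libswresample/swresample.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/samplefmt.h>

    /* set up once, after the decoder and encoder contexts are known */
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16,  44100,   /* output: packed */
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16P, 44100,   /* input:  planar */
            0, NULL);
    swr_init(swr);

    /* per decoded frame: convert the planar samples into one packed buffer */
    uint8_t *packed = NULL;
    int linesize;
    av_samples_alloc(&packed, &linesize, 2, frame->nb_samples, AV_SAMPLE_FMT_S16, 0);
    int converted = swr_convert(swr, &packed, frame->nb_samples,
                                (const uint8_t **)frame->extended_data, frame->nb_samples);
    /* 'packed' now holds 'converted' interleaved samples ready for the encoder */
    av_freep(&packed);

    /* when the whole file has been processed */
    swr_free(&swr);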

Android & libusb submit ISOC transfer error

I'm writing an Android application using libusb-1.0.9 and the NDK (Android 4.0.4+) which has to read audio data from a USB audio card. The USB device opens successfully through libusb, and I can retrieve all of its interfaces and endpoints. But when I submit an ISOC transfer I run into the following error, which is unclear to me.
The C code that makes the transfer:
    static uint8_t buf[12];

    static void cb_xfr(struct libusb_transfer *xfr)
    {
        LOGD("USB callback\n");
        libusb_free_transfer(xfr);
    }

    JNIEXPORT jlong JNICALL
    Java_com_usbtest_libusb_Libusb_makeISOCTransfer(JNIEnv *env, jobject this, jlong ms)
    {
        static struct libusb_transfer *xfr;
        int num_iso_pack = 1;
        unsigned char ep = 0x84;

        xfr = libusb_alloc_transfer(num_iso_pack);
        if (!xfr) {
            LOGD("libusb_alloc_transfer: errno=%d\n", errno);
            return -1000;
        }

        LOGD("libusb_fill_iso_transfer: ep=%x, buf=%d, num iso=%d\n", ep, sizeof(buf), num_iso_pack);
        libusb_fill_iso_transfer(xfr, handle, ep, buf, sizeof(buf), num_iso_pack, cb_xfr, NULL, 0);
        libusb_set_iso_packet_lengths(xfr, sizeof(buf) / num_iso_pack);

        int res = libusb_submit_transfer(xfr);
        LOGD("libusb_submit_transfer: %d, errno=%d\n", res, errno);

        struct timeval tv;
        tv.tv_sec = 1;
        tv.tv_usec = 0;
        libusb_handle_events_timeout(NULL, &tv);
        return res;
    }
After the call to libusb_submit_transfer I get an error: libusb_submit_transfer: -1, errno -2
And these log messages:
    need 1 32k URBs for transfer
    first URB failed, easy peasy
Endpoint 0x84 is indeed the audio-in endpoint. The buf[] size of 12 matches the wMaxPacketSize field of this endpoint, and I request 1 transfer. xfr->status is 0, but the callback is never invoked.
I thought it might be necessary to increase buf to 32K; I did, but it didn't help.
I also tried increasing the number of transfers. Same error.
Can someone point me to where the error might be?
I found the solution to this problem.
In this thread http://en.it-usenet.org/thread/14995/14985/ there was a similar question, and the fix turned out to be detaching the kernel driver from the device's HID USB interfaces.
I used libusb_detach_kernel_driver to detach the two HID drivers, and libusb_submit_transfer returned 0!
Concretely, I called libusb_detach_kernel_driver(handle, interface) for every interface of the device (0..3) right after opening it, and after that libusb_submit_transfer returned SUCCESS.
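As a rough sketch, the detach step right after opening the device looks like this (assuming the device exposes interfaces 0..3, as in my case):
    /* after libusb_open(): detach any kernel drivers (here the HID ones) holding the interfaces */
    int i;
    for (i = 0; i <= 3; i++) {
        if (libusb_kernel_driver_active(handle, i) == 1) {
            int r = libusb_detach_kernel_driver(handle, i);
            LOGD("detach kernel driver, interface %d: %d\n", i, r);
        }
    }
    /* then claim the audio streaming interface and submit the ISOC transfer as before */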

How to get output from a Linux command via C/C++, suitable for Android?

I'm trying to run a Linux command and read its output from C/C++ code.
I looked at exec, but it doesn't deal with input/output.
What I'm trying to achieve is to get information about the wireless LAN by invoking the iwconfig command from C/C++ code.
I also need the code to be suitable for use as a library on Android via the NDK.
I see that the Android open source project calls this function. What do you think about this code?
    int wpa_ctrl_request(struct wpa_ctrl *ctrl, const char *cmd, size_t cmd_len,
                         char *reply, size_t *reply_len,
                         void (*msg_cb)(char *msg, size_t len))
    {
        DWORD written;
        DWORD readlen = *reply_len;

        if (!WriteFile(ctrl->pipe, cmd, cmd_len, &written, NULL))
            return -1;
        if (!ReadFile(ctrl->pipe, reply, *reply_len, &readlen, NULL))
            return -1;
        *reply_len = readlen;
        return 0;
    }
this is the link
You could try running the command, redirecting its output to a file, and then reading that file:
    system("iwconfig > temp.txt");
    FILE *fp = fopen("temp.txt", "r");   /* open for reading ("w" would truncate the captured output) */
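Alternatively, popen() gives you the command's output as a stream without going through a temporary file. A minimal sketch (it assumes an iwconfig binary is actually present and executable on the device, which is not guaranteed on Android):
    #include <stdio.h>

    FILE *pipe = popen("iwconfig 2>&1", "r");   /* run the command and read its output */
    if (pipe != NULL) {
        char line[256];
        while (fgets(line, sizeof(line), pipe) != NULL) {
            /* parse or store each line of iwconfig output here */
        }
        pclose(pipe);
    }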

Too small ffmpeg rtsp decoding buffer

I'm decoding RTSP on Android with ffmpeg, and I quickly see pixelation when the image updates quickly or at a high resolution.
After googling, I found that it might be related to the UDP buffer size. I then recompiled the ffmpeg library with the following values in ffmpeg/libavformat/udp.c:
    #define UDP_TX_BUF_SIZE 327680
    #define UDP_MAX_PKT_SIZE 655360
It seems to improve things, but decoding still starts to fail at some point. Any idea which buffer I should increase, and how?
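A note on the recompile approach: many ffmpeg builds also expose the receive buffer as a demuxer option, so it can be requested at runtime instead of being hard-coded. Whether it takes effect depends on the ffmpeg version and on the OS socket-buffer limit (net.core.rmem_max), so treat this as a sketch only (the URL is a placeholder):
    #include <libavformat/avformat.h>

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "buffer_size", "655360", 0);   /* requested receive buffer, in bytes */

    AVFormatContext *fmt_ctx = NULL;
    if (avformat_open_input(&fmt_ctx, "rtsp://example.com/stream", NULL, &opts) < 0) {
        /* handle the error */
    }
    av_dict_free(&opts);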
For my problem (http://libav-users.943685.n4.nabble.com/UDP-Stream-Read-Pixelation-Macroblock-Corruption-td4655270.html), I was trying to capture from a multicast UDP stream that had been set up by someone else. Because I didn't have the ability to mess with the source, I ended up switching from libav to libvlc as a wrapper, and it worked perfectly. Here is the summary of what worked for me:
stream.h:
    #include <vlc/vlc.h>
    #include <vlc/libvlc.h>

    typedef unsigned char uchar;   /* assumed typedef; in my project it came from elsewhere (e.g. OpenCV) */

    struct ctx {
        uchar* frame;   /* buffer libvlc renders decoded frames into */
    };
stream.cpp:
    /* called by libvlc before it decodes a frame into our buffer */
    void* lock(void* data, void** p_pixels) {
        struct ctx* ctx = (struct ctx*)data;
        *p_pixels = ctx->frame;
        return NULL;
    }

    /* called by libvlc when the frame in the buffer is ready */
    void unlock(void* data, void* id, void* const* p_pixels) {
        struct ctx* ctx = (struct ctx*)data;
        uchar* pixels = (uchar*)*p_pixels;
        assert(id == NULL);
    }
main.cpp:
    struct ctx* context = (struct ctx*)malloc(sizeof(*context));

    const char* const vlc_args[] = {"-vvv",
                                    "-q",
                                    "--no-audio"};

    libvlc_media_t* media = NULL;
    libvlc_media_player_t* media_player = NULL;
    libvlc_instance_t* instance = libvlc_new(sizeof(vlc_args) / sizeof(vlc_args[0]), vlc_args);

    media = libvlc_media_new_location(instance, "udp://#123.123.123.123:1000");
    media_player = libvlc_media_player_new(instance);
    libvlc_media_player_set_media(media_player, media);
    libvlc_media_release(media);

    /* RV24 = 3 bytes per pixel; the buffer must match the format set below */
    context->frame = new uchar[VIDEOHEIGHT * VIDEOWIDTH * 3];

    libvlc_video_set_callbacks(media_player, lock, unlock, NULL, context);
    libvlc_video_set_format(media_player, "RV24", VIDEOWIDTH, VIDEOHEIGHT, VIDEOWIDTH * 3);
    libvlc_media_player_play(media_player);
