Can't untangle the eglSwapBuffers function - android

I am trying to trace a path from user-space into kernel-space to find somewhere I can hook in from kernel-space and pull some information for my CPU driver, and along the way I am trying to understand the user-space side a little. I am looking to detect frame buffer swaps so that I can track FPS from within the kernel (hopefully). I am working with an Odroid XU3 running Android 4.4.4 on a 3.10.9 kernel.
From what I can tell, the buffer swap in the EGL library happens in eglApi.cpp, in the function EGLBoolean eglSwapBuffers(EGLDisplay dpy, EGLSurface draw), pasted below. My problem is that I cannot understand how this function works. It appears to call itself recursively, since following the return s->cnx->egl.eglSwapBuffers(dp->disp.dpy, s->surface) points me back to the same function, because of
struct egl_t {
#include "EGL/egl_entries.in"
};
and from the source then
#define EGL_ENTRY(_r, _api, ...) #_api,
EGL_ENTRY(EGLBoolean, eglSwapBuffers, EGLDisplay, EGLSurface)
The complete function (minus the trace code), pasted from the source file eglApi.cpp:
EGLBoolean eglSwapBuffers(EGLDisplay dpy, EGLSurface draw)
{
    ATRACE_CALL();
    clearError();

    const egl_display_ptr dp = validate_display(dpy);
    if (!dp) return EGL_FALSE;

    SurfaceRef _s(dp.get(), draw);
    if (!_s.get())
        return setError(EGL_BAD_SURFACE, EGL_FALSE);

#if EGL_TRACE
    ...
#endif

    egl_surface_t const * const s = get_surface(draw);

    if (CC_UNLIKELY(dp->traceGpuCompletion)) {
        EGLSyncKHR sync = eglCreateSyncKHR(dpy, EGL_SYNC_FENCE_KHR, NULL);
        if (sync != EGL_NO_SYNC_KHR) {
            FrameCompletionThread::queueSync(sync);
        }
    }

    if (CC_UNLIKELY(dp->finishOnSwap)) {
        uint32_t pixel;
        egl_context_t * const c = get_context( egl_tls_t::getContext() );
        if (c) {
            // glReadPixels() ensures that the frame is complete
            s->cnx->hooks[c->version]->gl.glReadPixels(0,0,1,1,
                    GL_RGBA,GL_UNSIGNED_BYTE,&pixel);
        }
    }

    return s->cnx->egl.eglSwapBuffers(dp->disp.dpy, s->surface);
}
I am sure I am missing something blatantly obvious; I hope someone can point out where this function actually performs the buffer swap, so I can keep following the rabbit hole down towards my safe place in kernel-space.
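A note on the apparent recursion (reconstructed from the AOSP libEGL wrapper sources, so treat the details as a sketch rather than a quote): egl_entries.in is expanded more than once with different definitions of EGL_ENTRY. The #_api, definition quoted above builds a table of API name strings; the definition used for struct egl_t turns each entry into a function pointer. Hand-expanded for eglSwapBuffers, the two expansions boil down to roughly this:

// Hand-expanded sketch of the two EGL_ENTRY expansions (not the verbatim AOSP source).

#include <EGL/egl.h>

// Expansion 1 -- the one quoted in the question:
//   #define EGL_ENTRY(_r, _api, ...) #_api,
// produces a table of API *names* used by the loader:
static char const * const egl_names[] = {
    "eglSwapBuffers",
    // ... one string per line of egl_entries.in ...
};

// Expansion 2 -- used for struct egl_t:
//   #define EGL_ENTRY(_r, _api, ...) _r (*(_api))(__VA_ARGS__);
// produces one function *pointer* per entry:
struct egl_t {
    EGLBoolean (*eglSwapBuffers)(EGLDisplay dpy, EGLSurface surface);
    // ... one pointer per line of egl_entries.in ...
};

The loader fills those pointers by looking the symbols up (dlopen/dlsym) in the vendor's EGL/GLES driver under /system/lib/egl/, so the return statement jumps into the GPU vendor's implementation rather than back into eglApi.cpp. The swap itself ends with the driver queueing the surface's buffer back to SurfaceFlinger through the ANativeWindow (queueBuffer), which is where a kernel-side FPS hook would most plausibly pick things up (the display/vsync path), not in libEGL.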

Related

How to implement Glium's Backend for Android's SurfaceTexture (Render with Rust into SurfaceTexture)

Rust's glium lib is a nice OpenGL wrapper that facilitates lots of stuff. In order to implement a new backend for it, you must implement the Backend trait: https://github.com/glium/glium/blob/cacb970c8ed2e45a6f98d12bd7fcc03748b0e122/src/backend/mod.rs#L36
I want to implement Android's SurfaceTexture as a Backend
Looks like I need to implement a new Backend for SurfaceTexture: https://github.com/glium/glium/blob/master/src/backend/mod.rs#L36
Here are the C++ functions of SurfaceTexture https://developer.android.com/ndk/reference/group/surface-texture#summary
I think that Backend::make_current(&self); maps to ASurfaceTexture_attachToGLContext(ASurfaceTexture *st, uint32_t texName)
and Backend::is_current(&self) -> bool can be simulated somehow based on each SurfaceTexture being marked as active or not when this is called.
Maybe Backend::get_framebuffer_dimensions(&self) -> (u32, u32) is the size of the SurfaceTexture which is defined at creation so I can use that. I just don't know what to do with Backend::swap_buffers(&self) -> Result<(), SwapBuffersError>
and maybe Backend::unsafe fn get_proc_address(&self, symbol: &str) -> *const c_void can call some Android API that gets the address of the OpenGL functions
However, ASurfaceTexture_updateTexImage(ASurfaceTexture *st) looks important and needed, and I don't know what to map it to in the Backend. Also, what about ASurfaceTexture_detachFromGLContext(ASurfaceTexture *st)?
PS: I know there are other ways to render to an Android widget, but I need to render to a Flutter widget, and the only way is through a SurfaceTexture.
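A note before the answer below (which ends up avoiding ASurfaceTexture entirely): if the Rust/glium side is the producer that renders into the SurfaceTexture, one plausible mapping is to take the SurfaceTexture's ANativeWindow and drive it through EGL, so swap_buffers maps to eglSwapBuffers and make_current to eglMakeCurrent, while attachToGLContext / updateTexImage / detachFromGLContext belong to the consumer side (the side that samples the texture, e.g. Flutter) and would not appear in the Backend at all. A hedged C++ sketch of that setup, using only the NDK calls linked from the question (API 28+, names illustrative):

#include <android/surface_texture.h>
#include <android/surface_texture_jni.h>
#include <android/native_window.h>
#include <EGL/egl.h>

// Producer-side setup sketch: turn a Java SurfaceTexture into an EGL window surface.
// Backend::swap_buffers                -> eglSwapBuffers(dpy, surf)
// Backend::make_current                -> eglMakeCurrent(dpy, surf, surf, ctx)
// Backend::get_framebuffer_dimensions  -> ANativeWindow_getWidth/Height(win)
// Backend::get_proc_address            -> eglGetProcAddress(symbol)
EGLSurface surfaceFromSurfaceTexture(JNIEnv* env, jobject javaSurfaceTexture,
                                     EGLDisplay dpy, EGLConfig cfg) {
    ASurfaceTexture* st = ASurfaceTexture_fromSurfaceTexture(env, javaSurfaceTexture);
    ANativeWindow* win = ASurfaceTexture_acquireANativeWindow(st);  // producer endpoint
    return eglCreateWindowSurface(dpy, cfg, win, nullptr);
}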
I managed to make this work some time ago with a hack-ish solution; maybe it still works, since glium has not been changing much lately.
But in my experience ASurfaceTexture yields unreliable results. Maybe that is because I used it wrongly, or maybe Android manufacturers do not pay much attention to it, I don't know; I also didn't see any real program using it. So I decided to use the well-tested Java GLSurfaceView instead, plus a bit of JNI to connect everything.
class MyGLView extends GLSurfaceView
        implements GLSurfaceView.Renderer {

    public MyGLView(Context context) {
        super(context);
        setEGLContextClientVersion(2);
        setEGLConfigChooser(8, 8, 8, 0, 0, 0);
        setRenderer(this);
    }
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        GLJNILib.init();
    }
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLJNILib.resize(width, height);
    }
    public void onDrawFrame(GL10 gl) {
        GLJNILib.render();
    }
}
Here com.example.myapp.GLJNILib is the JNI binding to the Rust native library, where the magic happens. The interface is quite straightforward:
package com.example.myapp;

public class GLJNILib {
    static {
        System.loadLibrary("myrustlib");
    }
    public static native void init();
    public static native void resize(int width, int height);
    public static native void render();
}
Now, this Rust library can be designed in several ways. In my particular project, since it was a simple game with a single full-screen view, I just created the glium context and stored it in a global variable. More sophisticated programs could store the Backend inside a Java object, but that complicates the lifetimes and I didn't need it.
use std::rc::Rc;

struct Data {
    dsp: Rc<glium::backend::Context>,
    size: (u32, u32),
}

static mut DATA: Option<Data> = None;
But first we have to implement the trait glium::backend::Backend, which happens to be surprisingly easy, if we assume that every time one of the Rust functions is called the proper GL context is always current:
use std::ffi::CString;
use std::os::raw::{c_char, c_void};

struct Backend;

extern "C" {
    fn eglGetProcAddress(procname: *const c_char) -> *const c_void;
}

unsafe impl glium::backend::Backend for Backend {
    fn swap_buffers(&self) -> Result<(), glium::SwapBuffersError> {
        // GLSurfaceView swaps for us after onDrawFrame(), so nothing to do here.
        Ok(())
    }
    unsafe fn get_proc_address(&self, symbol: &str) -> *const c_void {
        let cs = CString::new(symbol).unwrap();
        eglGetProcAddress(cs.as_ptr())
    }
    fn get_framebuffer_dimensions(&self) -> (u32, u32) {
        let data = unsafe { DATA.as_ref().unwrap() };
        data.size
    }
    fn is_current(&self) -> bool {
        // We only ever touch GL from the GLSurfaceView render thread.
        true
    }
    unsafe fn make_current(&self) {}
}
And now we can implement the JNI init function:
use jni::{
    JNIEnv,
    objects::JClass,
    sys::jint,
};

#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_init(_env: JNIEnv, _class: JClass) {
    unsafe {
        DATA = None;
    }
    let backend = Backend;
    let dsp = unsafe {
        glium::backend::Context::new(backend, false, Default::default()).unwrap()
    };
    // Use dsp to create additional GL objects: programs, textures, buffers...
    // and store them inside `DATA` or another global.
    unsafe {
        DATA = Some(Data {
            dsp,
            size: (256, 256), // dummy size, updated in resize()
        });
    }
}
The size will be updated when the size of the view changes (not that glium uses that value so much):
#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_resize(_env: JNIEnv, _class: JClass, width: jint, height: jint) {
    let data = unsafe { DATA.as_mut().unwrap() };
    data.size = (width as u32, height as u32);
}
And similarly the render function:
use glium::Surface;

#[no_mangle]
#[allow(non_snake_case)]
pub extern "system"
fn Java_com_example_myapp_GLJNILib_render(_env: JNIEnv, _class: JClass) {
    let data = unsafe { DATA.as_ref().unwrap() };
    let dsp = &data.dsp;
    let mut target = glium::Frame::new(dsp.clone(), dsp.get_framebuffer_dimensions());
    // use dsp and target at will, such as:
    target.clear_color(0.0, 0.0, 1.0, 1.0);
    let (width, height) = target.get_dimensions();
    //...
    target.finish().unwrap();
}
Note that target.finish() is still needed although glium is not actually doing the swap.

How to get frame by frame from MP4? (MediaCodec)

Actually I am working with OpenGL, and I would like to put all my textures into an MP4 in order to compress them.
Then I need to read them back from the MP4 on my Android device.
So I need to somehow decode the MP4 and get its frames on request.
I found MediaCodec
https://developer.android.com/reference/android/media/MediaCodec
and MediaMetadataRetriever
https://developer.android.com/reference/android/media/MediaMetadataRetriever
but I did not see an approach for requesting frames one by one...
If someone has worked with MP4, please point me in the right direction.
P.S. I am working natively (JNI), so it does not matter whether the solution is Java or native; I just need to find a way.
EDIT1
I am rendering a kind of movie (just one 3D model), so I change my geometry as well as my textures every 32 milliseconds. It therefore seems reasonable to use MP4 for the textures, because each new frame (every 32 milliseconds) is very similar to the previous one...
Right now I use 400 frames per model. For geometry I use .mtr files and for textures I use .pkm (because it is optimized for Android), so I have around 350 .mtr files (some files include a subindex) and 400 .pkm files...
This is why I want to use MP4 for the textures: one MP4 is much smaller than 400 .pkm files.
EDIT2
Please take a look at EDIT1.
Actually, all I need to know is whether there is an Android API that can read an MP4 frame by frame. Maybe some kind of getNextFrame() method?
Something like this
MP4Player player = new MP4Player(PATH_TO_MY_MP4_FILE);

void readMP4() {
    Bitmap b;
    while (player.hasNext()) {
        b = player.getNextFrame();
        ///.... my code here ...///
    }
}
EDIT3
I made the following implementation in Java:
public static void read(@NonNull final Context iC, @NonNull final String iPath)
{
    long time;
    int fileCount = 0;

    //Create a new Media Player
    MediaPlayer mp = MediaPlayer.create(iC, Uri.parse(iPath));
    time = mp.getDuration() * 1000;
    Log.e("TAG", String.format("TIME :: %s", time));

    MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(iPath);

    long a = System.nanoTime();

    //frame rate 10.03/sec, 1/10.03 = in microseconds 99700
    for (int i = 99700; i <= time; i = i + 99700)
    {
        Bitmap b = mRetriever.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
        if (b == null)
        {
            Log.e("TAG", String.format("BITMAP STATE :: %s", "null"));
        }
        else
        {
            fileCount++;
        }
        long curTime = System.nanoTime();
        Log.e("TAG", String.format("EXECUTION TIME :: %s", curTime - a));
        a = curTime;
    }
    Log.e("TAG", String.format("COUNT :: %s", fileCount));
}
and here are the execution times:
E/TAG: EXECUTION TIME :: 267982039
E/TAG: EXECUTION TIME :: 222928769
E/TAG: EXECUTION TIME :: 289899461
E/TAG: EXECUTION TIME :: 138265423
E/TAG: EXECUTION TIME :: 127312577
E/TAG: EXECUTION TIME :: 251179654
E/TAG: EXECUTION TIME :: 133996500
E/TAG: EXECUTION TIME :: 289730345
E/TAG: EXECUTION TIME :: 132158270
E/TAG: EXECUTION TIME :: 270951461
E/TAG: EXECUTION TIME :: 116520808
E/TAG: EXECUTION TIME :: 209071269
E/TAG: EXECUTION TIME :: 149697230
E/TAG: EXECUTION TIME :: 138347269
These times are in nanoseconds, i.e. roughly +/- 200 milliseconds per frame... That is very slow... I need around 30 milliseconds per frame.
So I think this method executes on the CPU; is there a method that executes on the GPU?
EDIT4
I found out that there is the MediaCodec class:
https://developer.android.com/reference/android/media/MediaCodec
I also found a similar question here: MediaCodec get all frames from video
I understood that there is a way to read by bytes, but not by frames...
So the question remains: is there a way to read an MP4 video frame by frame?
The solution would look something like the ExtractMpegFramesTest, in which MediaCodec is used to generate "external" textures from video frames. In the test code, the frames are rendered to an off-screen pbuffer and then saved as PNG. You would just render them directly.
There are a few problems with this:
MPEG video isn't designed to work well as a random-access database.
A common GOP (group of pictures) structure has one "key frame" (essentially a JPEG image) followed by 14 delta frames, which just hold the difference from the previous decoded frame. So if you want frame N, you may have to decode frames N-14 through N-1 first. Not a problem if you're always moving forward (playing a movie onto a texture) or you only store key frames (at which point you've invented a clumsy database of JPEG images).
As mentioned in comments and answers, you're likely to get some visual artifacts. How bad these look depends on the material and your compression rate. Since you're generating the frames, you may be able to reduce this by ensuring that, whenever there's a big change, the first frame is always a key frame.
The firmware that MediaCodec interfaces with may want several frames before it starts producing output, even if you start at a key frame. Seeking around in a stream has a latency cost. See e.g. this post. (Ever wonder why DVRs have smooth fast-forward, but not smooth fast-backward?)
MediaCodec frames passed through SurfaceTexture become "external" textures. These have some limitations vs. normal textures -- performance may be worse, can't use as color buffer in an FBO, etc. If you're just rendering it once per frame at 30fps this shouldn't matter.
MediaMetadataRetriever's getFrameAtTime() method has less-than-desirable performance for the reasons noted above. You're unlikely to get better results by writing it yourself, although you can save a bit of time by skipping the step where it creates a Bitmap object. Also, you passed OPTION_CLOSEST_SYNC in, but that will only produce the results you want if all your frames are sync frames (again, clumsy database of JPEG images). You need to use OPTION_CLOSEST.
If you're just trying to play a movie on a texture (or your problem can be reduced to that), Grafika has some examples. One that may be relevant is TextureFromCamera, which renders the camera video stream on a GLES rect that can be zoomed and rotated. You can replace the camera input with the MP4 playback code from one of the other demos. This'll work fine if you're only playing forward, but if you want to skip around or go backward you'll have trouble.
The problem you're describing sounds pretty similar to what 2D game developers deal with. Doing what they do is probably the best approach.
I can see why it might seem easy to have all your textures in a single file, but this is a really really bad idea.
MP4 is a video format; its codecs are highly optimised for a list of frames which have a high level of similarity to adjacent frames, i.e. motion. They are also optimised to be decompressed in sequential order, so using a 'random access' approach will be very inefficient.
To give a bit more detail, video codecs store key frames (roughly one per second, though the rate varies) and delta frames the rest of the time. The key frames are independently compressed just like separate images, but the delta frames are stored as the difference from one or more other frames. The algorithm assumes this difference will be fairly small once motion compensation has been performed.
So if you want to access a single delta frame, your code will have to decompress a nearby key frame and all the delta frames that connect it to the frame you want, which will be much slower than just using single-frame JPEGs.
In short, use JPEG or PNG to compress your textures and add them all to a single archive file to keep it tidy.
Yes, there is a way to extract single frames from MP4 video.
In principle, you seem to be looking for an alternative way to load textures, where the usual way is GLUtils.texImage2D (which fills a texture from a Bitmap).
First, you should consider what others advise, and expect visual artifacts from compression. But assuming your textures are related (e.g. frames of an explosion), getting them from a video stream makes sense. For unrelated images you'll get better results using JPG or PNG. Also note that MP4 video doesn't have an alpha channel, which is often used in textures.
For this task you can't use MediaMetadataRetriever; it won't give you the accuracy needed to extract all frames.
You'll have to work with the MediaCodec and MediaExtractor classes. The Android documentation for MediaCodec is detailed.
Actually, you'll need to implement a kind of customized video player and add one key function: frame step.
The closest thing to this is Android's MediaPlayer, which is a complete player, but it 1) lacks frame-step, and 2) is rather closed, because it's implemented by a lot of native C++ libraries which are impossible to extend and hard to study.
I advise this from the experience of creating a frame-by-frame video player; I did it by adapting MediaPlayer-Extended, which is written in plain Java (no native code), so you can include it in your project and add the function you need. It works with Android's MediaCodec and MediaExtractor.
Somewhere in the MediaPlayer class you'd add a frameStep function, and add another signal + function in PlaybackThread to decode just one more frame (in paused mode). The implementation of this is up to you, but the result is that you let the decoder obtain and process a single frame, consume the frame, then repeat with the next frame. I did it, so I know that this approach works.
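For reference, the same frame-step idea expressed with the NDK classes that appear later in this thread (AMediaExtractor / AMediaCodec). This is a hedged sketch rather than code from the answer: it seeks to the preceding sync (key) frame, then decodes forward and discards output until the requested timestamp is reached; error and end-of-stream handling are omitted:

#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

// Sketch: decode forward from the previous key frame until we reach targetUs.
// Assumes 'ex' and 'codec' are already configured as in the NativeCodec code below.
void seekToFrame(AMediaExtractor* ex, AMediaCodec* codec, int64_t targetUs) {
    AMediaExtractor_seekTo(ex, targetUs, AMEDIAEXTRACTOR_SEEK_PREVIOUS_SYNC);
    AMediaCodec_flush(codec);

    bool done = false;
    while (!done) {
        // Feed one encoded sample to the decoder.
        ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 2000);
        if (inIdx >= 0) {
            size_t cap;
            uint8_t* in = AMediaCodec_getInputBuffer(codec, (size_t)inIdx, &cap);
            ssize_t n = AMediaExtractor_readSampleData(ex, in, cap);
            int64_t pts = AMediaExtractor_getSampleTime(ex);
            AMediaCodec_queueInputBuffer(codec, (size_t)inIdx, 0, n < 0 ? 0 : (size_t)n, pts,
                                         n < 0 ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
            AMediaExtractor_advance(ex);
        }

        // Drain decoded frames; drop everything earlier than the target.
        AMediaCodecBufferInfo info;
        ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 2000);
        if (outIdx >= 0) {
            done = info.presentationTimeUs >= targetUs;
            // Render (or copy) only the frame we actually wanted.
            AMediaCodec_releaseOutputBuffer(codec, (size_t)outIdx, done /*render*/);
        }
    }
}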
The other half of the task is obtaining the result. A video player (with MediaCodec) outputs frames into a Surface; your task is to get the pixels out.
I know one way to read an RGB bitmap from such a surface: create an OpenGL pbuffer EGLSurface, let MediaCodec render into a SurfaceTexture, then draw that texture into the pbuffer and read the pixels back. This is another nontrivial task: you need a shader that renders the external (OES) texture, and GLES20.glReadPixels to obtain the RGB pixels into a ByteBuffer. You'd then upload these RGB bitmaps into your textures.
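A hedged sketch of the pbuffer half of that recipe (EGL setup plus glReadPixels; the actual draw of the decoder's external OES texture is elided with a comment, since that shader step is what Grafika / ExtractMpegFramesTest demonstrate):

#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <vector>

// Sketch: create an off-screen pbuffer, make it current, and read RGBA pixels back.
std::vector<unsigned char> readPixelsFromPbuffer(int width, int height) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg; EGLint numCfg;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

    const EGLint pbAttribs[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
    EGLSurface pbuffer = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);
    eglMakeCurrent(dpy, pbuffer, pbuffer, ctx);

    // ... here: draw the decoded video frame (the decoder's SurfaceTexture, a
    // GL_TEXTURE_EXTERNAL_OES texture) onto a full-screen quad with a
    // samplerExternalOES shader ...

    std::vector<unsigned char> rgba(static_cast<size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return rgba;
}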
However, since you want to load textures, you may find an optimized way to render the video frame directly into your textures and avoid moving pixels around.
Hope this helps, and good luck with the implementation.
I want to post my implementation as it currently stands.
Here is the .h file:
#ifndef NATIVE_CODEC_NATIVECODECC_H
#define NATIVE_CODEC_NATIVECODECC_H

#include <jni.h>
#include <memory>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "looper.h"
#include "media/NdkMediaCodec.h"
#include "media/NdkMediaExtractor.h"

// Originally taken from https://github.com/googlesamples/android-ndk/tree/master/native-codec
// Conversion taken from https://github.com/kueblert/AndroidMediaCodec/blob/master/nativecodecvideo.cpp

class NativeCodec
{
public:
    NativeCodec() = default;
    ~NativeCodec() = default;

    void DecodeDone();
    void Pause();
    void Resume();
    bool createStreamingMediaPlayer(const std::string &filename);
    void setPlayingStreamingMediaPlayer(bool isPlaying);
    void shutdown();
    void rewindStreamingMediaPlayer();

    int getFrameWidth() const
    {
        return m_frameWidth;
    }

    int getFrameHeight() const
    {
        return m_frameHeight;
    }

    void getNextFrame(std::vector<unsigned char> &imageData);

private:
    struct Workerdata
    {
        AMediaExtractor *ex;
        AMediaCodec *codec;
        bool sawInputEOS;
        bool sawOutputEOS;
        bool isPlaying;
        bool renderonce;
    };

    void Seek();

    ssize_t m_bufidx = -1;
    int m_frameWidth = -1;
    int m_frameHeight = -1;
    cv::Size m_frameSize;
    Workerdata m_data = {nullptr, nullptr, false, false, false, false};
};

#endif //NATIVE_CODEC_NATIVECODECC_H
Here is the .cc file:
#include "native_codec.h"
#include <cassert>
#include "native_codec.h"
#include <jni.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <cerrno>
#include <climits>
#include "util.h"
#include <android/log.h>
#include <string>
#include <chrono>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <android/log.h>
#include <string>
#include <chrono>
// for native window JNI
#include <android/native_window_jni.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
using namespace std;
using namespace std::chrono;
bool NativeCodec::createStreamingMediaPlayer(const std::string &filename)
{
    AMediaExtractor *ex = AMediaExtractor_new();
    media_status_t err = AMediaExtractor_setDataSource(ex, filename.c_str());
    if (err != AMEDIA_OK)
    {
        return false;
    }

    size_t numtracks = AMediaExtractor_getTrackCount(ex);
    AMediaCodec *codec = nullptr;

    for (int i = 0; i < numtracks; i++)
    {
        AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);

        int format_color;
        AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_COLOR_FORMAT, &format_color);

        bool ok = AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_WIDTH, &m_frameWidth);
        ok = ok && AMediaFormat_getInt32(format, AMEDIAFORMAT_KEY_HEIGHT, &m_frameHeight);
        if (ok)
        {
            m_frameSize = cv::Size(m_frameWidth, m_frameHeight);
        } else
        {
            //Asking format for frame width / height failed.
        }

        const char *mime;
        if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime))
        {
            return false;
        } else if (!strncmp(mime, "video/", 6))
        {
            // Omitting most error handling for clarity.
            // Production code should check for errors.
            AMediaExtractor_selectTrack(ex, i);
            codec = AMediaCodec_createDecoderByType(mime);
            AMediaCodec_configure(codec, format, nullptr, nullptr, 0);
            m_data.ex = ex;
            m_data.codec = codec;
            m_data.sawInputEOS = false;
            m_data.sawOutputEOS = false;
            m_data.isPlaying = false;
            m_data.renderonce = true;
            AMediaCodec_start(codec);
        }
        AMediaFormat_delete(format);
    }
    return true;
}
void NativeCodec::getNextFrame(std::vector<unsigned char> &imageData)
{
    if (!m_data.sawInputEOS)
    {
        m_bufidx = AMediaCodec_dequeueInputBuffer(m_data.codec, 2000);
        if (m_bufidx >= 0)
        {
            size_t bufsize;
            auto buf = AMediaCodec_getInputBuffer(m_data.codec, m_bufidx, &bufsize);
            auto sampleSize = AMediaExtractor_readSampleData(m_data.ex, buf, bufsize);
            if (sampleSize < 0)
            {
                sampleSize = 0;
                m_data.sawInputEOS = true;
            }
            auto presentationTimeUs = AMediaExtractor_getSampleTime(m_data.ex);

            AMediaCodec_queueInputBuffer(m_data.codec, m_bufidx, 0, sampleSize,
                                         presentationTimeUs,
                                         m_data.sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
            AMediaExtractor_advance(m_data.ex);
        }
    }

    if (!m_data.sawOutputEOS)
    {
        AMediaCodecBufferInfo info;
        auto status = AMediaCodec_dequeueOutputBuffer(m_data.codec, &info, 0);
        if (status >= 0)
        {
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM)
            {
                __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM",
                                    "AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM :: %s", "output EOS");
                m_data.sawOutputEOS = true;
            }

            if (info.size > 0)
            {
                uint8_t *buf = AMediaCodec_getOutputBuffer(m_data.codec,
                                                           static_cast<size_t>(status), /*bufsize*/ nullptr);
                cv::Mat YUVframe(cv::Size(m_frameSize.width, static_cast<int>(m_frameSize.height * 1.5)),
                                 CV_8UC1, buf);
                cv::Mat colImg(m_frameSize, CV_8UC3);
                cv::cvtColor(YUVframe, colImg, CV_YUV420sp2BGR, 3);
                auto dataSize = colImg.rows * colImg.cols * colImg.channels();
                imageData.assign(colImg.data, colImg.data + dataSize);
            }

            AMediaCodec_releaseOutputBuffer(m_data.codec, static_cast<size_t>(status), info.size != 0);
            if (m_data.renderonce)
            {
                m_data.renderonce = false;
                return;
            }
        }
        // The AMEDIACODEC_INFO_* codes are negative, so they must be checked before any
        // generic "status < 0" branch (the original ordering made them unreachable).
        else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED)
        {
            __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED",
                                "AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED :: %s", "output buffers changed");
            getNextFrame(imageData); // no frame produced yet, try again
        } else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED)
        {
            auto format = AMediaCodec_getOutputFormat(m_data.codec);
            __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED",
                                "AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED :: %s", AMediaFormat_toString(format));
            AMediaFormat_delete(format);
            getNextFrame(imageData); // no frame produced yet, try again
        } else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER)
        {
            __android_log_print(ANDROID_LOG_ERROR, "AMEDIACODEC_INFO_TRY_AGAIN_LATER",
                                "AMEDIACODEC_INFO_TRY_AGAIN_LATER :: %s", "no output buffer right now");
            getNextFrame(imageData); // feed more input and retry
        } else
        {
            __android_log_print(ANDROID_LOG_ERROR, "UNEXPECTED INFO CODE",
                                "UNEXPECTED INFO CODE :: %zd", status);
        }
    }
}
void NativeCodec::DecodeDone()
{
    if (m_data.codec != nullptr)
    {
        AMediaCodec_stop(m_data.codec);
        AMediaCodec_delete(m_data.codec);
        AMediaExtractor_delete(m_data.ex);
        m_data.sawInputEOS = true;
        m_data.sawOutputEOS = true;
    }
}

void NativeCodec::Seek()
{
    AMediaExtractor_seekTo(m_data.ex, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
    AMediaCodec_flush(m_data.codec);
    m_data.sawInputEOS = false;
    m_data.sawOutputEOS = false;
    if (!m_data.isPlaying)
    {
        m_data.renderonce = true;
    }
}

void NativeCodec::Pause()
{
    if (m_data.isPlaying)
    {
        // flush all outstanding codecbuffer messages with a no-op message
        m_data.isPlaying = false;
    }
}

void NativeCodec::Resume()
{
    if (!m_data.isPlaying)
    {
        m_data.isPlaying = true;
    }
}

void NativeCodec::setPlayingStreamingMediaPlayer(bool isPlaying)
{
    if (isPlaying)
    {
        Resume();
    } else
    {
        Pause();
    }
}

void NativeCodec::shutdown()
{
    m_bufidx = -1;
    DecodeDone();
}

void NativeCodec::rewindStreamingMediaPlayer()
{
    Seek();
}
So, for the format conversion in this implementation (in my case from YUV to BGR) you need to set up OpenCV. To understand how to do that, check these two sources:
https://www.youtube.com/watch?v=jN9Bv5LHXMk
https://www.youtube.com/watch?v=0fdIiOqCz3o
And as a sample, I also leave here my CMakeLists.txt file:
#For adding OpenCV take a look at these videos
#https://www.youtube.com/watch?v=jN9Bv5LHXMk
#https://www.youtube.com/watch?v=0fdIiOqCz3o
#Watch the videos, then compare with this file and do the same

set(pathToProject C:/Users/tetavi/Downloads/Buffer/OneMoreArNew/arcore-android-sdk/samples/hello_ar_c)
set(pathToOpenCv C:/OpenCV-android-sdk)

cmake_minimum_required(VERSION 3.4.1)

set(CMAKE_VERBOSE_MAKEFILE on)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")

include_directories(${pathToOpenCv}/sdk/native/jni/include)

# Import the ARCore library.
add_library(arcore SHARED IMPORTED)
set_target_properties(arcore PROPERTIES IMPORTED_LOCATION
        ${ARCORE_LIBPATH}/${ANDROID_ABI}/libarcore_sdk_c.so
        INTERFACE_INCLUDE_DIRECTORIES ${ARCORE_INCLUDE}
)

# Import the glm header file from the NDK.
add_library(glm INTERFACE)
set_target_properties(glm PROPERTIES
        INTERFACE_INCLUDE_DIRECTORIES
        ${ANDROID_NDK}/sources/third_party/vulkan/src/libs/glm
)

# This is the main app library.
add_library(hello_ar_native SHARED
        src/main/cpp/background_renderer.cc
        src/main/cpp/hello_ar_application.cc
        src/main/cpp/jni_interface.cc
        src/main/cpp/video_render.cc
        src/main/cpp/geometry_loader.cc
        src/main/cpp/plane_renderer.cc
        src/main/cpp/native_codec.cc
        src/main/cpp/point_cloud_renderer.cc
        src/main/cpp/frame_manager.cc
        src/main/cpp/safe_queue.cc
        src/main/cpp/stb_image.h
        src/main/cpp/util.cc)

add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION
        ${pathToProject}/app/src/main/jniLibs/${CMAKE_ANDROID_ARCH_ABI}/libopencv_java3.so)

target_include_directories(hello_ar_native PRIVATE
        src/main/cpp)

target_link_libraries(hello_ar_native
        ${log-lib}
        lib_opencv
        android
        log
        GLESv2
        glm
        mediandk
        arcore)
Usage:
You first need to create the streaming media player with this method:
NativeCodec::createStreamingMediaPlayer(pathToYourMP4file);
and then just use
NativeCodec::getNextFrame(imageData);
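For illustration, a minimal sketch of how those two calls could be tied together with a GL texture upload (the texture id and the per-tick call site are assumptions, not part of the original post; note the frame bytes are BGR as produced by the cvtColor call above):

#include <vector>
#include <GLES2/gl2.h>
#include "native_codec.h"

// Illustrative only: decode one frame per render tick and upload it to a 2D texture.
void uploadNextVideoFrame(NativeCodec &codec, GLuint textureId)
{
    std::vector<unsigned char> imageData;
    codec.getNextFrame(imageData);   // width*height*3 bytes, BGR order (see cvtColor above)
    if (imageData.empty())
        return;                      // no frame decoded this pass

    glBindTexture(GL_TEXTURE_2D, textureId);
    // GL_RGB upload of BGR data: either convert to RGB first or swizzle in the shader.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 codec.getFrameWidth(), codec.getFrameHeight(),
                 0, GL_RGB, GL_UNSIGNED_BYTE, imageData.data());
}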
Feel free to ask

Android native C function as variable?

So, I'm working with a library that uses a callback function that is configured and then called when needed. I need to access local variables of my C function from inside that callback, and I can't make them members of the parent class for other reasons.
So, essentially this is my setup:
callback.h
typedef void handler_func(uint8_t *data, size_t len);

typedef struct my_cfg {
    handler_func *handler;
} my_cfg;
otherfile.c
#include "callback.h"
void test() {
char *test = "This is a test";
my_cfg cfg = { 0 };
memset(&cfg, 0, sizeof(my_cfg));
my_cfg.handler = my_handler;
// This is just an example, basically
// elsewhere in the code the handler
// function will be called when needed.
load_config(my_cfg);
}
void my_handler(uint8_t *data, size_t len) {
// I need to access the `test` var here.
}
What I need is something like this:
#include "callback.h"
void test() {
final char *test = "This is a test";
my_cfg cfg = { 0 };
memset(&cfg, 0, sizeof(my_cfg));
// This is the type of functionality I need.
my_cfg.handler = void (uint8_t *data, size_t len) {
printf("I can now access test! %s", test);
};
// This is just an example, basically
// elsewhere in the code the handler
// function will be called when needed.
load_config(my_cfg);
}
Please keep in mind that I cannot change the header files that define handler_func, nor can I modify the my_cfg struct, nor the part of the code that calls the handler_func through my_cfg.handler. They are all internal to the library.
(Also note that there may be code errors above; this is all technically pseudocode. I'm not at my computer, just typing this out freehand on a tablet.)
Edit
From what I understand, nested functions would solve this issue. But it appears that clang doesn't support nested functions.
Reference: https://clang.llvm.org/docs/UsersManual.html#gcc-extensions-not-implemented-yet
clang does not support nested functions; this is a complex feature
which is infrequently used, so it is unlikely to be implemented
anytime soon.
Is there another workaround?
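One common workaround, sketched below under the question's constraints (the header and struct stay untouched): stash the would-be capture in a file-scope variable that the callback reads. The names and the load_config declaration are taken from the question's pseudocode and are illustrative; the obvious caveat is that this is not reentrant or thread-safe, so it only works while a single registration is live at a time:

#include "callback.h"   /* the library header from the question (unchanged) */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Provided by the library, per the question's pseudocode. */
void load_config(my_cfg cfg);

/* File-scope "context" slot standing in for the captured local.
   Workaround sketch only: not reentrant or thread-safe, and the pointed-to
   data must outlive the callback registration (a string literal does). */
static const char *g_test_ctx = NULL;

static void my_handler(uint8_t *data, size_t len) {
    (void)data;
    (void)len;
    /* The callback reads the stashed context instead of a captured local. */
    printf("I can now access test! %s\n", g_test_ctx);
}

void test(void) {
    const char *test = "This is a test";
    my_cfg cfg = { 0 };

    g_test_ctx = test;          /* stash the value the handler needs */
    cfg.handler = my_handler;

    /* Elsewhere in the code the library calls cfg.handler when needed. */
    load_config(cfg);
}

Other options in the same spirit are a thread-local variable (if several threads register callbacks) or, at the cost of a dependency, runtime-generated closures via libffi; but the static-context slot above is the usual minimal fix when the callback signature cannot carry a user-data pointer.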

Segmentation fault using a Native Window in Android

We're trying to port a C++ application to Android. We think that using NativeActivity should be the easiest approach, letting all the OpenGL/EGL work be done natively.
Right now, we're passing the ANativeWindow pointer that we get from the android_app struct in android_native_app_glue.h through the application so that it can be used when the window is initialized. Here are a few relevant lines from this code (stripped of debug code):
bool OpenGLWindowES::Initialize(EGLNativeWindowType wnd, EGLNativeDisplayType dsp,
    EGLint redSize, EGLint greenSize, EGLint blueSize, EGLint alphaSize, EGLint depthSize, bool bMultiSample)
{
    m_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (m_display == EGL_NO_DISPLAY)
    {
        return false;
    }

    EGLint iMajorVersion, iMinorVersion;
    if (!eglInitialize(m_display, &iMajorVersion, &iMinorVersion))
    {
        return false;
    }

    eglBindAPI(EGL_OPENGL_ES_API);

    bool ecc = eglChooseConfig(m_display, attribs, &m_config, 1, &iConfigs);
    if (!ecc || (iConfigs != 1))
    {
        return false;
    }

    EGLint format;
    eglGetConfigAttrib(m_display, m_config, EGL_NATIVE_VISUAL_ID, &format);
    ANativeWindow_setBuffersGeometry(wnd, 0, 0, format);

    m_windowSurface = eglCreateWindowSurface(m_display, m_config, wnd, NULL);
    //etc
}
This code proceeds with creating a context, making it current, etc., but we don't get that far. We get a segmentation fault on eglCreateWindowSurface, and since the display and config seem to be initialized correctly, this can only mean a problem with the ANativeWindow* (typedef'd to EGLNativeWindowType). Error message:
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 00000058
We also get a segmentation fault if we call for example:
ANativeWindow_getHeight(wnd);
So the question is, what can cause a segmentation fault at this point? wnd is not null (we've checked this beforehand), so it should have been initialized somehow and be ready to use. Did we miss something before calling this function, or can there be some problem with the pointer?
EDIT: We're currently wondering if this can have something to do with the APP_CMD_INIT_WINDOW command not being sent or received properly (we haven't implemented any command handling at all yet so we're looking into this).
Are you entirely sure that the window is correctly initialized?
The ANativeWindow pointer in the app struct is not automatically set when passed into the native entry point. It is provided by the main Android thread at a later point and will then be sent to the app via the callback system that app_glue has set up.
You need to handle the commands sent to the android_app yourself. In the native-activity example in the NDK they do it like this:
int ident;
int events;
struct android_poll_source* source;

// If not animating, we will block forever waiting for events.
// If animating, we loop until all events are read, then continue
// to draw the next frame of animation.
while ((ident = ALooper_pollAll(engine.animating ? 0 : -1, NULL, &events,
                                (void**)&source)) >= 0) {
    // Process this event.
    if (source != NULL) {
        source->process(state, source);
    }
    ....
}
You probably need something like this in your main rendering loop; it allows app_glue to set the window correctly once it receives it from the main thread.
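To connect this with the EDIT in the question: the window only becomes valid once APP_CMD_INIT_WINDOW has been delivered, so EGL setup has to wait for it. A minimal sketch of a command handler (the Initialize call site is illustrative, following the question's code):

#include <android_native_app_glue.h>

// Illustrative sketch: register an onAppCmd handler and delay EGL setup
// until the window actually exists.
static void handle_cmd(struct android_app* app, int32_t cmd) {
    switch (cmd) {
    case APP_CMD_INIT_WINDOW:
        // app->window is valid from here on; now it is safe to call
        // eglCreateWindowSurface() / OpenGLWindowES::Initialize(app->window, ...).
        break;
    case APP_CMD_TERM_WINDOW:
        // The window is about to be destroyed; release the EGLSurface here.
        break;
    }
}

void android_main(struct android_app* state) {
    state->onAppCmd = handle_cmd;
    // ... then run the ALooper_pollAll() loop shown above, which dispatches
    // commands through source->process() and eventually calls handle_cmd.
}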
The problem is wnd: pass state->window from the main function void android_main(struct android_app* state) to your bool OpenGLWindowES::Initialize(...) method as the first parameter:
void android_main(struct android_app* state) {
    Initialize(state->window /*, other arguments here... */);
}

How to get MJPG stream video from android IPWebcam using opencv

I am using the IP Webcam program on Android and receiving its stream on my PC over WiFi. What I want is to use OpenCV in Visual Studio (C++) to get that video stream; there is an option to get an MJPG stream at the following URL: http://MyIP:port/videofeed
How can I get it using OpenCV?
Old question, but I hope this can help someone (same as my answer here)
OpenCV expects a filename extension for its VideoCapture argument,
even though one isn't always necessary (like in your case).
You can "trick" it by passing in a dummy parameter which ends in the
mjpg extension:
So perhaps try:
VideoCapture vc;
vc.open("http://MyIP:port/videofeed/?dummy=param.mjpg")
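For completeness, a small self-contained sketch of that trick with the C++ API (the URL is the placeholder from the question, without the extra slash mentioned further down; the window name is arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    // The "?dummy=param.mjpg" suffix is only there so OpenCV/FFmpeg picks the MJPEG path.
    cv::VideoCapture cap("http://MyIP:port/videofeed?dummy=param.mjpg");
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))          // grabs and decodes the next MJPEG frame
    {
        cv::imshow("IP Webcam", frame);
        if (cv::waitKey(10) == 27)   // ESC to quit
            break;
    }
    return 0;
}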
Install IP Camera Adapter and configure it to capture the video stream. Then install ManyCam and you'll see "MPEG Camera" in the camera section. (You'll see the same instructions if you follow the link on how to set up IP Webcam for Skype.)
Now you can access your MJPG stream just like a webcam through OpenCV. I tried this with OpenCV 2.2 + Qt and it works well.
Hope this helps.
I did a dirty patch to make OpenCV work with the Android IP Webcam:
In the file OpenCV-2.3.1/modules/highgui/src/cap_ffmpeg_impl.hpp,
in the function bool CvCapture_FFMPEG::open( const char* _filename ),
replace:
int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
by
AVInputFormat* iformat = av_find_input_format("mjpeg");
int err = av_open_input_file(&ic, _filename, iformat, 0, NULL);
ic->iformat = iformat;
and comment out:
err = av_seek_frame(ic, video_stream, 10, 0);
if (err < 0)
{
    filename = (char*)malloc(strlen(_filename) + 1);
    strcpy(filename, _filename);
    // reopen videofile to 'seek' back to first frame
    reopen();
}
else
{
    // seek seems to work, so we don't need the filename,
    // but we still need to seek back to filestart
    filename = NULL;
    int64_t ts = video_st->first_dts;
    int flags = AVSEEK_FLAG_FRAME | AVSEEK_FLAG_BACKWARD;
    av_seek_frame(ic, video_stream, ts, flags);
}
That should work. Hope it helps.
This is the solution (I'm using IP Webcam on Android):
CvCapture* capture = 0;
capture = cvCaptureFromFile("http://IP:Port/videofeed?dummy=param.mjpg");
I am not able to comment, so I'm posting a new post. In the original answer there is an error: a / is used before dummy. Thanks for the solution.
A working example for me:
// OpenCVTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"

/**
 * @function main
 */
int main( int argc, const char** argv )
{
    // Open the video stream once, outside the loop
    CvCapture* capture = cvCaptureFromFile("http://192.168.1.129:8080/webcam.mjpeg");
    IplImage* frame = 0;

    // create a window to display the stream
    cvNamedWindow("Sample Program", CV_WINDOW_AUTOSIZE);

    while (true)
    {
        // Read the next frame of the video stream
        frame = cvQueryFrame( capture );
        if (!frame) break;

        // display it
        cvShowImage("Sample Program", frame);

        int c = cvWaitKey(10);
        if( (char)c == 27 ) { break; }
    }

    // clean up and release resources
    // (the frame returned by cvQueryFrame is owned by the capture, so don't release it)
    cvReleaseCapture(&capture);
    return 0;
}
Broadcast MJPEG from a webcam with VLC, as described at http://tumblr.martinml.com/post/2108887785/how-to-broadcast-a-mjpeg-stream-from-your-webcam-with
