Rendering VLC video output on Android OpenGL

I have been trying for weeks to render a VLC stream to OpenGL on Android.
I guess I am missing something. Here is what I have so far.
This is my custom class:
public class VLCVideoView extends SurfaceView implements SurfaceHolder.Callback, IVideoPlayer, GLSurfaceView.Renderer
My initialization on VLC:
try {
    // Create a new media player
    libvlc = LibVLC.getInstance();
    libvlc.setHardwareAcceleration(LibVLC.HW_ACCELERATION_FULL);
    libvlc.eventVideoPlayerActivityCreated(true);
    libvlc.setSubtitlesEncoding("");
    libvlc.setAout(LibVLC.AOUT_OPENSLES);
    libvlc.setVout(LibVLC.VOUT_OPEGLES2);
    //libvlc.setVout(LibVLC.VOUT_ANDROID_SURFACE);
    libvlc.setTimeStretching(true);
    libvlc.setChroma("RV32");
    libvlc.setVerboseMode(true);
    LibVLC.restart(context);

    holder.setFormat(PixelFormat.RGBA_8888);
    mSurface = holder.getSurface();
    libvlc.attachSurface(mSurface, this);
} catch (Exception e) {
    e.printStackTrace();
}
I implemented all of the GLSurfaceView.Renderer methods:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config)
@Override
public void onSurfaceChanged(GL10 gl, int width, int height)
@Override
public void onDrawFrame(GL10 gl)
As well as onDraw() for the VLCVideoView:
@Override
protected void onDraw(Canvas canvas)
{
if (mSurface != null)
{
try {
Canvas surfaceCanvas = mSurface.lockCanvas(null);
if(surfaceCanvas != null)
{
super.onDraw(surfaceCanvas);
mSurface.unlockCanvasAndPost(surfaceCanvas);
}
} catch (Surface.OutOfResourcesException excp) {
excp.printStackTrace();
}
}
}
I have a rotating cube being drawn, but instead of the streaming frames appearing as textures on the cube, the stream simply appears on a flat surface.
Any clue?
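For what it's worth, the usual way to get a video producer's frames onto OpenGL geometry is to have it render into a SurfaceTexture backed by a GL_TEXTURE_EXTERNAL_OES texture and sample that texture (samplerExternalOES) when drawing the cube. Below is a minimal sketch of that plumbing inside the GLSurfaceView.Renderer callbacks, using android.opengl.GLES20/GLES11Ext and android.graphics.SurfaceTexture; whether this libvlc build accepts such a Surface in attachSurface() is an assumption, so treat it as an outline rather than a drop-in fix:
private int oesTextureId;
private SurfaceTexture videoTexture;
private Surface videoSurface;

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Create an external OES texture that a SurfaceTexture can write into.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    oesTextureId = tex[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    videoTexture = new SurfaceTexture(oesTextureId);
    videoSurface = new Surface(videoTexture);
    // Assumption: the player renders into this Surface instead of the view's own one.
    // libvlc.attachSurface(videoSurface, this);
}

@Override
public void onDrawFrame(GL10 gl) {
    // Pull the latest decoded frame into the OES texture, then draw the cube
    // with a fragment shader that declares samplerExternalOES.
    videoTexture.updateTexImage();
    // ... draw the cube faces textured with oesTextureId ...
}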

Related

How to live stream video from camera using texture view and not surfaceview?

I have seen many examples of how to live stream video from the Android camera to an RTMP server using a SurfaceView. One is here: https://github.com/begeekmyfriend/yasea
But is it possible to stream the video from the camera to RTMP using a TextureView? If so, how?
TextureView mTextureView;
// inside onCreate
mTextureView = (TextureView) findViewById(R.id.texture_view);
mTextureView.setSurfaceTextureListener(AircraftControlActivity.this);
// outside onCreate
@Override
public void onSurfaceTextureAvailable(final SurfaceTexture surface, final int width, final int height) {
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
    return true; // must return a value; true lets the SurfaceTexture be released
}
@Override
public void onSurfaceTextureUpdated(final SurfaceTexture surface) {
}
What to do next?
Take a look at TextureView:
public class MainActivity extends AppCompatActivity implements TextureView.SurfaceTextureListener
{
Camera camera;
TextureView textureView;
ImageButton button ; //ignore this one
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textureView = (TextureView) findViewById(R.id.textureView);
button = (ImageButton)findViewById(R.id.imageButton);
textureView.setSurfaceTextureListener(this);
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height)
{
camera = Camera.open();
try
{
camera.setPreviewTexture(surface);
camera.startPreview();
}
catch (IOException ioe)
{
// Something bad happened
}
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height)
{
// Ignored, Camera does all the work for us
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface)
{
camera.stopPreview();
camera.release();
return true;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface)
{
// Invoked every time there's a new Camera preview frame
}
}
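The preview code above only displays the camera; to push frames to an RTMP server you still need an encoder, which yasea (or a similar library) provides. One simple, if inefficient, way to get at the TextureView's pixels is TextureView.getBitmap(); a minimal sketch, where sendFrameToRtmpEncoder() is a hypothetical hook into whatever publisher you use, not a yasea API:
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    // Called once per preview frame. getBitmap() copies the current frame;
    // convenient but slow, so a production pipeline would normally feed the
    // encoder through a Surface/MediaCodec path instead.
    Bitmap frame = textureView.getBitmap();
    if (frame != null) {
        sendFrameToRtmpEncoder(frame); // hypothetical hook
    }
}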

draw an android.media.Image to a surface

I have an Android application.
It uses a SurfaceView, from which I get the Surface through the SurfaceHolder.
It also uses ExoPlayer to stream videos. However, I have instantiated an ImageReader, getting its Surface and passing it to the ExoPlayer.
Now I am in ImageReader.OnImageAvailableListener#onImageAvailable and I access the latest Image.
I want to manipulate the Image and send the new data to the SurfaceView's Surface.
How can I "draw" an android.media.Image to an android.view.Surface?
The question is not clear to me.
One way to get an android.media.Image is through the Camera2 API, where you provide a Surface and the camera draws onto it. Please refer to the Camera2Video example.
Another way to get the Image object is from an ImageReader (while decoding video, for example). In this case you want to draw the Image, but you cannot provide a display Surface to the ImageReader (it has an internal Surface that is not displayed). Instead, you can draw the Image onto a SurfaceView.
Assuming this is the case, you need to convert the Image to a Bitmap object.
There is a discussion about how to perform this here.
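As a concrete illustration, if the ImageReader happens to be configured with ImageFormat.JPEG the conversion is just BitmapFactory over the single plane; for YUV_420_888 you would need an explicit YUV-to-RGB step instead. A minimal sketch inside onImageAvailable(), assuming surfaceHolder is the SurfaceView's holder:
Image image = reader.acquireLatestImage();
if (image != null) {
    // JPEG images carry all their bytes in plane 0.
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes);
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    image.close();

    // Draw the Bitmap onto the SurfaceView through its holder.
    Canvas canvas = surfaceHolder.lockCanvas(null);
    if (canvas != null) {
        canvas.drawBitmap(bitmap, 0, 0, null);
        surfaceHolder.unlockCanvasAndPost(canvas);
    }
}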
Possible duplicate of: how to draw image on surfaceview android
First get your Canvas by using lockCanvas() (see here); second, get your image and make it a drawable using:
my_bitmap = Bitmap.createBitmap(
MediaStore.Images.Media.getBitmap(getContentResolver(), uri),
0,0,90, 90);
drawable=new BitmapDrawable(my_bitmap);
After that you can draw the drawable onto the locked canvas and use unlockCanvasAndPost(canvas) to post the updated canvas back to the SurfaceView.
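Putting those two steps together, the drawing part might look roughly like this (surfaceHolder being the SurfaceView's SurfaceHolder):
Canvas canvas = surfaceHolder.lockCanvas();
if (canvas != null) {
    // Stretch the drawable over the whole surface, then post the frame.
    drawable.setBounds(0, 0, canvas.getWidth(), canvas.getHeight());
    drawable.draw(canvas);
    surfaceHolder.unlockCanvasAndPost(canvas);
}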
Here is an answer to your question.
MainActivity.java
public class MainActivity extends Activity {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mySurfaceView mySurfaceView = new mySurfaceView(getApplicationContext());
setContentView(mySurfaceView);
}
}
mySurfaceView.java
public class mySurfaceView extends SurfaceView implements
SurfaceHolder.Callback {
private TutorialThread _thread;
public mySurfaceView(Context context) {
super(context);
getHolder().addCallback(this);
_thread = new TutorialThread(getHolder(), this);
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
Bitmap _scratch = BitmapFactory.decodeResource(getResources(),
R.drawable.icon);
canvas.drawColor(Color.BLACK);
canvas.drawBitmap(_scratch, 10, 10, null);
}
public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
}
public void surfaceCreated(SurfaceHolder arg0) {
_thread.setRunning(true);
_thread.start();
}
public void surfaceDestroyed(SurfaceHolder arg0) {
boolean retry = true;
_thread.setRunning(false);
while (retry) {
try {
_thread.join();
retry = false;
} catch (InterruptedException e) {
}
}
}
class TutorialThread extends Thread {
private SurfaceHolder _surfaceHolder;
private mySurfaceView _panel;
private boolean _run = false;
public TutorialThread(SurfaceHolder surfaceHolder, mySurfaceView panel) {
_surfaceHolder = surfaceHolder;
_panel = panel;
}
public void setRunning(boolean run) {
_run = run;
}
@Override
public void run() {
Canvas c;
while (_run) {
c = null;
try {
c = _surfaceHolder.lockCanvas(null);
synchronized (_surfaceHolder) {
_panel.onDraw(c);
}
} finally {
if (c != null) {
_surfaceHolder.unlockCanvasAndPost(c);
}
}
}
}
}
}

onSurfaceCreated() called every time orientation changes

I'm implementing the GLSurfaceView.Renderer like so:
public class GL20Renderer implements GLSurfaceView.Renderer {
private static GL20Renderer mInstance = new GL20Renderer();
private GL20Renderer() {}
public static GL20Renderer getInstance() {
return mInstance;
}
@Override
public void onDrawFrame(GL10 gl) {
Log.e("App", "onDrawFrame()");
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
Log.e("App", "onSurfaceChanged()");
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
Log.e("App", "onSurfaceCreated()");
}
}
This class is implemented in the MainActivity:
public class MainActivity extends Activity {
private GLSurfaceView mGLView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN);
// Create a GLSurfaceView instance and set it as the ContentView for this Activity
mGLView = new GL20SurfaceView(this);
setContentView(mGLView);
}
@Override
protected void onPause() {
super.onPause();
mGLView.onPause();
}
@Override
protected void onResume() {
super.onResume();
mGLView.onResume();
}
}
GL20SurfaceView is:
public class GL20SurfaceView extends GLSurfaceView {
public GL20SurfaceView(Context context) {
super(context);
// Create an OpenGL ES 2.0 context.
setEGLContextClientVersion(2);
// Set the Renderer for drawing on the GLSurfaceView
setRenderer(GL20Renderer.getInstance());
}
}
Very simple as you can see.
When I start the app, the onSurfaceCreated() method is correctly called, followed by one call to onSurfaceChanged().
The problem is: whenever the device orientation changes, I get another call to onSurfaceCreated(), followed by onSurfaceChanged().
In my understanding, onSurfaceCreated() is called whenever a new surface needs to be created. My question is: why does that happen when I merely change the device orientation? Shouldn't a single onSurfaceChanged() call be sufficient to adjust the viewport?
Note that I don't put my device to sleep when changing the orientation.
Do it this way:
<activity
android:name=".MainActivity"
android:configChanges="keyboardHidden|orientation|screenSize"
/>
One of the advantages of OpenGL is that you draw relative to the screen size, which lets you handle all Android resolutions.
I'm not sure how it works with GL ES 2.0 (most likely the same as with GL ES 1.0).
As far as I know, onSurfaceChanged() provides the width and height you need to configure OpenGL for your screen.
For example, glViewport: it is necessary to call glViewport whenever the GL view dimensions are modified.
(Only when width equals height is it unnecessary, but that's another story.)
For example:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // prevent divide by zero
    if (height == 0) {
        height = 1;
    }
    screenWidth = width;
    screenHeight = height;
    ratio = (float) width / height;
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0, width, 0, height, -10f, 10f);
    gl.glViewport(0, 0, screenWidth, screenHeight);
}
If you want to avoid that, add this to your AndroidManifest.xml:
<activity android:name="Activity"
android:configChanges="screenSize|orientation">
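With configChanges declared like this, the Activity (and therefore the GLSurfaceView and its EGL context) is not recreated on rotation: you only receive onConfigurationChanged(), and the renderer sees onSurfaceChanged() with the new dimensions instead of a fresh onSurfaceCreated(). A minimal sketch of the corresponding Activity hook (android.content.res.Configuration):
@Override
public void onConfigurationChanged(Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    // Nothing GL-specific to do here: the surface survives the rotation and
    // onSurfaceChanged() will still be called with the new width/height.
}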

Antialiasing in TextureView

I tried to play the same video with a SurfaceView and a TextureView and noticed that the image rendered with the TextureView is more aliased (less 'smooth') than with the SurfaceView.
What is the reason for this? Is there any way to configure the rendering of the TextureView to look better?
The TextureView is used like this:
TextureView textureView = new TextureView(this);
textureView.setSurfaceTextureListener(new SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
Log.i("test", "onSurfaceTextureAvailable()");
MediaPlayer player = MediaPlayer.create(TestActivity.this, Uri.parse(VIDEO_URL));
Surface surface = new Surface(surfaceTexture);
player.setSurface(surface);
player.start();
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
Log.i("test", "onSurfaceTextureUpdated()");
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
Log.i("test", "onSurfaceTextureSizeChanged()");
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
Log.i("test", "onSurfaceTextureDestroyed()");
return false;
}
});
setContentView(textureView);
And for the SurfaceView:
SurfaceView surfaceView = new SurfaceView(this);
surfaceView.getHolder().addCallback(new Callback() {
@Override
public void surfaceCreated(SurfaceHolder holder) {
Log.i("test", "surfaceCreated()");
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
Log.i("test", "surfaceDestroyed()");
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Log.i("test", "surfaceChanged()");
MediaPlayer player = MediaPlayer.create(TestActivity.this, Uri.parse(VIDEO_URL));
player.setSurface(holder.getSurface());
player.start();
}
});
setContentView(surfaceView);
Well, it seems that applying a scale factor (other than 1) to the TextureView gives the 'smoothing' effect I'm looking for:
textureView.setScaleX(1.00001f);
That sounds like a strange hack... but it works. It would be interesting to dig into what the scaling changes in the rendering path.

How to load lots of bitmaps without crashing application in android

I am working on streaming a video from the web, where I decode the video/audio in native code and get the raw pixels for the video.
I create a bitmap in Java code with a SurfaceHolder and Canvas, update the pixels of the bitmap from native code, and then show the bitmaps as video. My problem is that the app crashes after a few seconds because of low memory.
I want to know what I need to do to avoid crashing the app and to keep memory usage low.
Here is my code.
public class CanvasThread extends Thread {
    private SurfaceHolder _surfaceHolder;
    private Panel _panel;
    private boolean _run = false;

    public CanvasThread(SurfaceHolder surfaceHolder, Panel panel) {
        _surfaceHolder = surfaceHolder;
        _panel = panel;
    }

    public void setRunning(boolean run) {
        _run = run;
    }

    @Override
    public void run() {
        Canvas c;
        while (_run) {
            c = null;
            try {
                c = _surfaceHolder.lockCanvas(null);
                synchronized (_surfaceHolder) {
                    _panel.onDraw(c);
                }
            } finally {
                if (c != null) {
                    _surfaceHolder.unlockCanvasAndPost(c);
                }
            }
        }
    }
}
public class Panel extends SurfaceView implements SurfaceHolder.Callback{
private CanvasThread canvasthread;
private static Bitmap mBitmap;
private static boolean ii=false;
public Panel(Context context, AttributeSet attrs) {
super(context, attrs);
getHolder().addCallback(this);
canvasthread = new CanvasThread(getHolder(), this);
setFocusable(true);
mBitmap=Bitmap.createBitmap(480, 320, Bitmap.Config.RGB_565);//bitmap created in constructor
}
public Panel(Context context) {
super(context);
getHolder().addCallback(this);
canvasthread = new CanvasThread(getHolder(), this);
setFocusable(true);
}
private static native void renderbitmap(Bitmap bitmap); //native function
@Override
public void onDraw(Canvas canvas) {
renderbitmap(mBitmap); // update pixels from native code
canvas.drawBitmap(mBitmap, 0, 0, null); // draw on canvas
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }
@Override
public void surfaceCreated(SurfaceHolder holder) {
canvasthread.setRunning(true);
canvasthread.start();
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
boolean retry = true;
canvasthread.setRunning(false);
while (retry) {
try {
canvasthread.join();
retry = false;
} catch (InterruptedException e) { }
}
}
}
You might check out Richard Quirk's glbuffer. I'm using it in a video player app.
In general you want to hold on to the video packets you're receiving and only decode them right before they are needed for display. It should be easy to integrate your code with glbuffer and bypass any Bitmap allocation in Java code.
There would be one decoded frame present in native code and in GL texture memory at any given time, plus several encoded packets that you keep around for buffering.
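A minimal sketch of that buffering idea, using a bounded java.util.concurrent queue of encoded packets that the render side drains one frame at a time (the packet handling and decodeIntoTexture() call are placeholders, not glbuffer API):
private final BlockingQueue<byte[]> packetQueue = new ArrayBlockingQueue<>(64);

// Network thread: block (or drop) when the buffer is full instead of
// allocating without bound.
void onPacketReceived(byte[] encodedPacket) throws InterruptedException {
    packetQueue.put(encodedPacket);
}

// Render thread: decode only the frame that is about to be displayed.
void drawNextFrame() throws InterruptedException {
    byte[] packet = packetQueue.take();
    // decodeIntoTexture(packet); // native decode straight into the GL texture
}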
That will probably not work, because you have a limited amount of memory: probably around 16 MB on the emulator. You should not store all the images in memory. You have two options (a sketch of the second follows below):
1. Fix your algorithm so it doesn't need everything in memory.
2. Keep only one image in memory and the rest on disk.
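For the second option applied to this player, one way to keep allocations flat is to allocate the Bitmap once and overwrite its pixels every frame, for example with copyPixelsFromBuffer(); a minimal sketch, assuming the ByteBuffer is filled with RGB565 data by your native decoder:
// Reuse one Bitmap and one pixel buffer for every frame instead of
// allocating new objects per frame.
private final Bitmap frame = Bitmap.createBitmap(480, 320, Bitmap.Config.RGB_565);
private final ByteBuffer pixels = ByteBuffer.allocateDirect(480 * 320 * 2);

void drawFrame(SurfaceHolder holder) {
    pixels.rewind();                     // native code has written RGB565 data here
    frame.copyPixelsFromBuffer(pixels);  // overwrite the existing Bitmap in place
    Canvas c = holder.lockCanvas(null);
    if (c != null) {
        c.drawBitmap(frame, 0, 0, null);
        holder.unlockCanvasAndPost(c);
    }
}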
From a quick scan of your code, it looks like your basic problem is that you're creating a new bitmap each time your onDraw method is called. Instead, comment that line out and create the bitmap for mBitmap just once at startup.
