I'm confused about something and would appreciate some comments. I was assuming that WebView creates a separate surface to draw into and does not use the activity's default surface. But in the SurfaceFlinger dump, I don't see a new surface getting created when using WebView.
When I do a similar experiment with VideoView, I do see a separate surface getting created.
In the WebView I also wanted to play a video, so I assumed a separate surface would be created and the surface resolution would therefore match the video resolution. But if it uses the application's surface, then the maximum resolution of the video is limited to the UI resolution.
In the Chromium code I see code for a separate surface, but in practice I could not see one getting created.
Can someone help clarify this?
Thank you.
If you look at the VideoView inheritance graph, you'll notice that it inherits from SurfaceView, while WebView does not; WebView could only achieve that by creating an external SurfaceView.
And if you search for usages of ExternalVideoSurface in the WebView part of the Chromium code, you will notice that it is only enabled when "video hole" is enabled, which is intended only for decoding encrypted videos, where WebView needs to do "hole punching". There is a system-API-level setting in WebView that enables this behaviour, but it has its own limitations and is thus not recommended for general use.
I am also curious why WebView does not show up in the SurfaceFlinger dump.
I think the reason is that WebView renders into the related activity's native window, so no separate surface exists in this situation.
But the situation seems to differ in the latest Android and WebView versions, depending on developer options.
I managed to successfully render a WebView onto a GLSurfaceView, thanks to this:
Is it possible to render an Android View to an OpenGL FBO or texture?
However, when I tried to load a website with a video, I realized that the video doesn't play on the GLSurfaceView (there is sound, however). I found out through googling that the WebView renders the video on a separate SurfaceView, which is probably why it doesn't appear on my GLSurfaceView.
My question:
How do I go about hacking the webview in order to access the extra video surface?
I consciously say hack... I am ready to use reflection and whatever else is needed, but I don't really know where to start. In the source provided by the SDK Manager, all I can find is the WebViewProvider interface without any implementation. Via grepcode I can find some kind of implementation (http://grepcode.com/file/repo1.maven.org/maven2/org.robolectric/android-all/4.1.2_r1-robolectric-0/android/webkit/HTML5VideoViewProxy.java), but not in the SDK-Manager-supplied sources. I am running KitKat 4.4 on a Xiaomi RedMi 2 phone.
How can I determine/use the relevant implementation?
Or do you think this isn't even feasible?
With a WebView I don't think it's possible, see this. I'm currently attempting something similar with a GeckoView because you can get a handle to the surface; see this note and this on how to embed a GeckoView into your project.
I'm using libvlc in an Android app to play a network stream; the native code draws the image on a SurfaceView.
Let's assume the video stops and I have no access to the native code: how can I detect whether the SurfaceView is still updating (i.e. the video is not frozen)?
I tried getViewTreeObserver().addOnDrawListener() and getViewTreeObserver().addOnPreDrawListener(), but they do not have the effect I'm looking for.
Thanks
You can't get that information from the SurfaceView, because the SurfaceView itself does not know.
The SurfaceView's job is to set up the Surface, and create a hole in the View layout that you can see through. Once it has done these things, it is no longer actively involved in the process of displaying video. The content flows from the decoder to the Surface, which is managed by SurfaceFlinger (the system graphics compositor).
This is, for example, how DRM video works. The app "playing" the video has no access to DRM-protected video frames. (Neither does SurfaceFlinger, actually, but that's a longer story.)
The best way to know if content is still arriving is to ask the video source if it is still sending you content. Another approach would be to change your SurfaceView to a TextureView, and provide an onSurfaceTextureUpdated() callback method.
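To make the TextureView suggestion concrete, here is a minimal sketch of that approach; the class and field names (FrameWatcher, lastFrameTimeMs) are my own, not from any library:

import android.graphics.SurfaceTexture;
import android.view.TextureView;

// onSurfaceTextureUpdated() fires once per delivered frame, so a timestamp
// recorded there can be polled from a timer to detect a stall.
public class FrameWatcher implements TextureView.SurfaceTextureListener {
    private volatile long lastFrameTimeMs;

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        // Called each time a new frame is posted to the TextureView.
        lastFrameTimeMs = System.currentTimeMillis();
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        // Hand this SurfaceTexture (wrapped in a Surface) to the video producer here.
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true; // let the TextureView release the SurfaceTexture
    }

    // Poll periodically: true if no frame has arrived within the timeout.
    public boolean looksFrozen(long timeoutMs) {
        return System.currentTimeMillis() - lastFrameTimeMs > timeoutMs;
    }
}

Attach it with textureView.setSurfaceTextureListener(new FrameWatcher()).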
I am not sure exactly what you are trying to achieve here, but you can track the SurfaceView's lifecycle by implementing an interface called SurfaceHolder.Callback, which gives you access to the following methods:
surfaceCreated() - called immediately after the surface is first created.
surfaceChanged() - called immediately after any structural change (format or size) has been made to the surface.
surfaceDestroyed() - called immediately before the surface is destroyed.
Take a look at the documentation for SurfaceView; for SurfaceHolder, take a look at this link. Registering the callback looks like the sketch below.
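A minimal registration sketch, assuming a SurfaceView with a hypothetical id of surface_view; note that these callbacks report the surface's lifecycle, not whether individual frames are arriving:

import android.view.SurfaceHolder;
import android.view.SurfaceView;

SurfaceView surfaceView = findViewById(R.id.surface_view); // hypothetical view id
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // Surface exists; safe to start the producer (player, camera, ...).
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Size or format of the surface changed.
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Stop rendering; the Surface is about to go away.
    }
});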
Can anyone explain to me what Android's SurfaceView is? I have been through the Android developer web site and read about it, and I still can't understand it.
Why or when is it used in Android application development? Maybe a good example, if possible.
Thank you
Android's SurfaceView is an object associated with a window (but sitting behind the window), through which you can directly manipulate the canvas and draw whatever you like.
What is interesting about the implementation of SurfaceView is that although it lies BEHIND the window, as long as it has content to show, the Android framework makes the corresponding pixels of that window transparent, thus making the surface view visible.
It is typically used for building a game or browser, where you want a graphics renderer to compute pixels for you while you use Java code to control the normal app logic.
If you are new to normal Android programming, chances are you do not need to know too much about it.
For further information, see this and the official documentation.
A custom View or SurfaceView comes into the picture when you need a custom design in your Android layout instead of using the existing widgets provided by Android.
The main difference between View and SurfaceView is the drawing thread: a View is drawn on the UI thread, while a SurfaceView can be drawn on a separate thread.
Therefore a SurfaceView is more appropriate when the UI must update rapidly or rendering takes too long (e.g. animations, video playback, camera preview, etc.).
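As a minimal sketch of that separate-thread rendering, using the SurfaceHolder canvas API (here holder is the SurfaceHolder of your SurfaceView and running is a volatile flag you manage; both are assumptions):

import android.graphics.Canvas;
import android.graphics.Color;

// Render loop on its own thread instead of the UI thread.
Thread renderThread = new Thread(new Runnable() {
    @Override
    public void run() {
        while (running) {
            Canvas canvas = holder.lockCanvas(); // may be null until the surface exists
            if (canvas == null) {
                continue;
            }
            try {
                canvas.drawColor(Color.BLACK);
                // ... draw the frame here ...
            } finally {
                holder.unlockCanvasAndPost(canvas); // hand the frame to the compositor
            }
        }
    }
});
renderThread.start();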
I'm trying to create an app where I am able to add filters to a recorded video. Basically, I want to replicate the functionality that exists in Instagram video, or Viddy.
I've done research and I can't piece it all together. I've looked into using GLSurfaceView to play the recorded video, and I know I could use the NDK to do the pixel manipulation and send the result back to the SurfaceView or save it somehow. The problem is, I don't know how to get at the pixel data, because there seems to be no function to access it. This idea came from the Camera callback onPreviewFrame(): it returns a byte array, allowing me to manipulate the pixels and display the result.
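For reference, the onPreviewFrame mechanism mentioned above looks like this with the old (now deprecated) Camera API; this is only a sketch, and a preview surface plus startPreview() are still required before frames flow:

import android.hardware.Camera;

// Each preview frame arrives as an NV21 byte array that can be read or modified.
Camera camera = Camera.open();
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // 'data' holds one frame in NV21 (YUV) layout; per-pixel work happens here.
    }
});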
Another idea is to use GLSurfaceView and OpenGL to render the filter. GLSurfaceView has a renderer you can set, but I'm not very familiar with OpenGL. Again, this comes back to actually getting the pixels of each video frame. I have also read about uploading each frame as a texture and then manipulating the texture in OpenGL, but the answers I've come across are not very detailed.
Lastly, I've looked into JavaCV, trying to use FFmpegFrameGrabber, but I haven't had much luck there either. I wanted to just grab one frame, but when I try to write the frame's ByteBuffer to an ImageView, I get a "buffer not large enough for pixels" error.
Any guidance would be great.
From Android 4.3 you can use a Surface as the input to your encoder. http://developer.android.com/about/versions/android-4.3.html#Multimedia
So you can use GLSurfaceView and apply the filters using fragment shaders.
You can find some good examples here. http://bigflake.com/mediacodec/
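As an illustration of the fragment-shader approach, here is a sketch of a grayscale filter over video frames delivered through a SurfaceTexture; the varying/uniform names follow the bigflake/grafika examples and must match your own vertex shader:

// Grayscale filter as a fragment shader. Video frames decoded into a
// SurfaceTexture are sampled through GL_TEXTURE_EXTERNAL_OES.
private static final String GRAYSCALE_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(sTexture, vTextureCoord);\n" +
        "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" + // BT.601 luma weights
        "    gl_FragColor = vec4(y, y, y, c.a);\n" +
        "}\n";

Swapping in a different shader body is what changes the filter; the surrounding GLSurfaceView/encoder plumbing stays the same.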
It is also worth looking at the ExoPlayer filter library, which will do this work, although merging the filtered layer back into the video requires extra work on your part.
Here is the link to the ExoPlayer filter library: ExoplayerFilter
You have to use ExoPlayer for this, but if you follow their instructions you'll be able to do the task. Ping me if something comes up.
Is it possible to send the Android camera preview to 2 different SurfaceView controls at the same time? I have seen apps that show effects in different previews in real time; how do they achieve that? I read about TextureView; is this the view to use? Where can I find examples of multiple simultaneous camera previews?
Thanks
Well, as they answered in this question, I downloaded the grafika project and revised the "texture from camera" example.
In the RenderThread there is a Sprite2d attribute called mRect.
I just made another instance called mRect2 and configured it with the same parameters that mRect has, except the rotation, which I set to double:
mRect.setRotation(rotAngle);
mRect2.setRotation(rotAngle*2);
This is the result
There is still a lot of code to understand, but it works and seems a very promising path to continue along.
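For context, the other half of the duplication is drawing both sprites on every frame; in grafika's RenderThread the draw calls look roughly like this (field and method names are from the "texture from camera" example, so verify them against your copy of the source):

// Render both sprites with the same shader program and projection matrix.
mRect.draw(mTexProgram, mDisplayProjectionMatrix);
mRect2.draw(mTexProgram, mDisplayProjectionMatrix);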
I don't think it is possible to independently open 2 camera previews at the same time, as the camera is treated as a shared resource. However, it is possible to draw to multiple SurfaceViews, which is what the apps you describe do.
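One concrete way to do that on API 21+ is the camera2 API, where a single repeating preview request can target several Surfaces at once. A sketch, assuming cameraDevice is already open and both Surfaces use supported preview sizes:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;
import java.util.Arrays;

// One camera, two preview targets: add both Surfaces to the same request.
void startDualPreview(CameraDevice cameraDevice, Surface surface1, Surface surface2)
        throws CameraAccessException {
    final CaptureRequest.Builder builder =
            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(surface1);
    builder.addTarget(surface2);

    cameraDevice.createCaptureSession(Arrays.asList(surface1, surface2),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        // Repeating request: every preview frame goes to both Surfaces.
                        session.setRepeatingRequest(builder.build(), null, null);
                    } catch (CameraAccessException e) {
                        // handle error
                    }
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) { }
            }, null);
}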