I've been looking at different ways of grabbing a YUV frame from a video stream, but most of what I've seen relies on getting the width and height from previewSize. However, a cell phone can shoot video at 720p while many phones can only display the preview at a lower resolution (e.g. 800x480), so is it possible to grab a frame that's closer to the full capture resolution (1280x720 if video is being shot at 720p)? Or am I forced to use the preview resolution (800x480 on some phones)?
Thanks
Yes, you can.*
* Conditions apply:
You need access to the middle layer, the media framework to be more precise.
No, it cannot be done through the application alone.
Now, if you want to do it at the media framework level, here are the steps:
Assuming you are using Froyo or above, the default media framework is StageFright.
In StageFright, go to the method onVideoEvent; after a buffer is read from mVideoSource, use mVideoBuffer to access the video frame at its original resolution.
Linking this with your application:
You will need a button in the application to trigger the screen capture.
Once the user presses this button, read the video frame from the location mentioned above and return that buffer to the Java layer.
From there you can use the JPEG encoder to convert the raw video frame to an image.
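For that last step, here is a minimal sketch using Android's YuvImage, assuming the frame comes back to Java as NV21 (the variable names are placeholders for whatever your capture path hands back):

    // Sketch: compress a raw NV21 frame to JPEG. `yuvBytes`, `width` and
    // `height` are placeholders for what the native layer returned.
    YuvImage yuv = new YuvImage(yuvBytes, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90 /* quality */, out);
    byte[] jpegBytes = out.toByteArray(); // save to disk or decode to a Bitmap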
EDIT:
Re-reading your question: you were asking about screen capture during recording, i.e. the camera path. Even for that there is no way to achieve it in the application alone; you will have to do something similar, but you will need access to CameraSource in the StageFright framework.
I am modifying the TF Lite sample app (Java) for object detection. It has a live video feed that shows boxes around common objects. It takes in ImageReader frames at 640x480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run two instances of ImageReader, one for the low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't pin the second one to any Surface for user preview; the picture has to be captured in the background.
In this medium article (https://link.medium.com/2oaIYoY58db) it says "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-resolution still images. Depending on the hardware, this switch may take time or be really quick.
In your case, I would run a preview capture session at the maximum resolution and shrink (resize) the frames to feed into TFLite when necessary.
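A minimal Camera2 sketch of that single high-resolution stream approach; the 4K size is an assumption about your sensor, and yuvToBitmap() is a hypothetical helper, not a real API:

    // One high-resolution YUV stream feeds both TFLite (after downscaling)
    // and the full-quality crops, so no second configuration is needed.
    ImageReader reader = ImageReader.newInstance(3840, 2160,
            ImageFormat.YUV_420_888, 3 /* maxImages */);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        Bitmap full = yuvToBitmap(image);  // hypothetical YUV_420_888 -> Bitmap helper
        Bitmap small = Bitmap.createScaledBitmap(full, 640, 480, true);
        // run TFLite detection on `small`, then crop the boxes out of `full`
        image.close();
    }, backgroundHandler);
    // When creating the capture session, reader.getSurface() is the only target:
    // device.createCaptureSession(Arrays.asList(reader.getSurface()), callback, handler);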
Does anyone know how to programmatically load a high-resolution video in Android, such as 3000x3000, and display only a portion of it, such as 1000x1000?
I tried using the official Android SDK MediaPlayer with a TextureView, but this approach seems to hit media size limitations: the video plays, but the TextureView stays black.
I appreciate the help.
Media size limitations exist for a reason: 3000x3000 video is huge for such a small device as a phone. Consider that you are not able to decode only a small portion of a video frame; that is not how video works. You would need to decode the whole big frame (which is reconstructed from an I-frame and the following P-frames), take a snapshot of it, crop the part you are interested in, and present it on your TextureView, all of this in real time. Think about the device's memory and CPU. In my opinion it's not possible with such limited resources.
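That said, if a particular device's decoder does accept the clip, a hedged sketch of showing only a sub-region is to let TextureView's transform do the cropping rather than copying pixels yourself (the numbers assume a 1000x1000 view over a 3000x3000 frame, and the layout id is a placeholder):

    // Show a 1000x1000 window into the decoded frame via the view transform.
    TextureView textureView = findViewById(R.id.texture); // assumed layout id
    Matrix m = new Matrix();
    m.setScale(3f, 3f);               // frame is 3x larger than the view
    m.postTranslate(-1000f, -1000f);  // choose which region stays visible
    textureView.setTransform(m);
    // Attach MediaPlayer to textureView's SurfaceTexture as usual.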
In one of my applications I need to record a video of my own screen.
I need to make it so that a user can take a video of the app itself. How is that possible?
First question: is it possible at all? If yes, then how? Any useful links or some help would be appreciated.
Thanks,
Pragna Bhatt
Yes, it is possible, but with certain limitations. I have done it in one of my projects.
Android 4.4 adds support for screen recording. See here
If you are targeting lower versions you can still achieve it, but it will be slow; there is no direct or easy way to do it. What you do is: create a drawable from your view/layout, convert that drawable to the YUV format and feed it to the camera path (see libraries that let you supply a custom YUV image to the camera), which will encode it like a movie that you can save to storage. Use threads to increase the frame rate (newer multi-core devices will reach a higher frame rate).
Only create the drawable (from the view) when there is a change in the view or its children; you can use a GlobalLayoutListener for that. Otherwise send the same YUV image again.
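A sketch of that view-to-YUV step, using the usual BT.601 integer approximation for the NV21 packing (assumes even view dimensions):

    // Render the view into an ARGB bitmap, then repack it as NV21 (YUV420SP).
    Bitmap bmp = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
            Bitmap.Config.ARGB_8888);
    view.draw(new Canvas(bmp));

    int w = bmp.getWidth(), h = bmp.getHeight();
    int[] argb = new int[w * h];
    bmp.getPixels(argb, 0, w, 0, 0, w, h);

    byte[] nv21 = new byte[w * h * 3 / 2];
    int yIndex = 0, uvIndex = w * h;
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            int p = argb[j * w + i];
            int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            if ((j & 1) == 0 && (i & 1) == 0) { // one VU pair per 2x2 block
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }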
Limitations:
You cannot create a video of more than one activity (at a time), or of their transitions, because you are creating images from a single view. (Work on it yourself; maybe you'll find a way.)
You cannot increase the frame rate past a certain point, because it depends on your device's hardware.
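For what it's worth, on API 21+ the MediaProjection API removes most of these limitations. A minimal sketch, with the request code and output path as placeholders, and error handling and audio omitted:

    // Ask the user for permission to capture the screen.
    MediaProjectionManager mpm =
            (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_SCREEN_CAPTURE);

    // Later, in onActivityResult, once the user has consented:
    MediaProjection projection = mpm.getMediaProjection(resultCode, data);
    MediaRecorder recorder = new MediaRecorder();
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    recorder.setVideoSize(1280, 720);
    recorder.setOutputFile("/sdcard/screen.mp4"); // placeholder path
    recorder.prepare();
    projection.createVirtualDisplay("screen", 1280, 720, 320 /* dpi */,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
            recorder.getSurface(), null, null);
    recorder.start();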
I am trying to develop an application in which a Beaglebone platform captures video images from a camera connected to it and then sends them (through an internet socket) to an Android application, so that the application can show the video images.
I have read that openCV may be a very good option to capture the images from a camera, but then I am not sure how the images can be sent through a socket.
On the other end, I think the video images received by the Android application could be treated as simple images. With this in mind, I think I can refresh the image every second or so.
I am not sure if I am in the right way for the implementation, so I really appreciate any suggestion and help you could provide.
Thanks in advance, Gus.
The folks at OpenROV have done something like what you've described. Instead of using a custom Android app, which is certainly possible, they simply used a web browser to display the captured images.
https://github.com/OpenROV/openrov-software
This application uses OpenCV to perform the capture and analysis, a Node.JS application to transmit the data over socket.io to the web browser and a web client to display the video. An architecture description on how this works is given here:
http://www.youtube.com/watch?v=uvnAYDxbDUo
You can also look at running something like mjpg-streamer:
http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1051&context=cpesp
Note that displaying the video stream as a set of images can have a big performance impact. For example, if you are not careful how you encode each frame, you can more than double the traffic between the two systems: ARGB takes 32 bits to encode a pixel while YUV takes 12 bits, so even accounting for frame compression you are still more than doubling the storage per frame. Also, rendering ARGB is much, much slower than rendering YUV, as most Android phones have hardware-optimized YUV rendering (as in, the GPU can directly blit the YUV into display memory). In addition, rendering separate frames as an approach usually makes one take the easy way and render a Bitmap on a Canvas, which works if you are content with something on the order of 10-15 fps, but can never get to 60 fps, and reaches a peak (not sustained) of 30 fps only on very few phones.
If you have a hardware MPEG encoder on the Beaglebone board, you should use it to encode and stream the video. This would allow you to directly pass the MPEG stream to the standard Android media player for rendering. Of course, using the standard media player will not allow you to process the video stream in real time, so depending on your scenario this might not be an option for you.
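For example, if the board serves the hardware-encoded stream over RTSP (the URL and layout id below are placeholders), the standard player path is only a few lines:

    // Let the stock media player handle decoding and rendering.
    VideoView video = findViewById(R.id.video); // assumed layout id
    video.setVideoURI(Uri.parse("rtsp://192.168.1.10:8554/stream")); // placeholder URL
    video.setOnPreparedListener(mp -> video.start());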
I am building an Android application where part of the functionality involves users taking images and recording video.
For the application there is a need to set a specific resolution for both the images and the video.
Is it possible to specify the resolution parameters and then use a camera intent to capture images and video or do I need to build my own camera activity?
Any advice would be greatly appreciated.
Edit: I did some additional research and had a look at http://developer.android.com/guide/topics/media/camera.html#intents.
If I understand correctly there is no option to specify resolution parameters when using the Image capture intent http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE.
For the video capture intent it seems I have the option to use the EXTRA_VIDEO_QUALITY parameter; however, that only gives me a choice between high quality and low quality (and I am not quite sure what those correspond to in terms of resolution): http://developer.android.com/reference/android/provider/MediaStore.html#EXTRA_VIDEO_QUALITY
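For reference, that intent route looks like this (the request-code constant is a placeholder):

    // EXTRA_VIDEO_QUALITY: 1 = highest available, 0 = lowest; the actual
    // resolutions are up to the vendor's camera app.
    Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
    startActivityForResult(intent, REQUEST_VIDEO_CAPTURE); // placeholder constant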
It seems I'd best get started developing my own image and video activities, then, unless I missed some other option with the image and video intents.
The camera intent starts an external camera application, which MAY use your hints (but MIGHT NOT). That activity/application is non-standard (phone-vendor dependent), as is the concrete implementation of the camera software.
You can also use the camera API (working examples are in this project: http://sourceforge.net/projects/javaocr/ ) which allows you to:
query supported image formats and resolutions (you guessed it - vendor dependent)
set up preview and capture resolutions and formats (but the camera software is free to ignore these settings, and some formats and resolutions can produce weird exceptions despite being advertised as supported)
Conclusion: cameras in Android devices are all different, and the camera API is an underdocumented mess. So be as defensive as possible.
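A minimal defensive sketch along those lines, using the old android.hardware.Camera API that the project above is built on:

    // Never assume a size: pick from what the vendor actually advertises,
    // and still expect setParameters() to throw on some devices.
    Camera camera = Camera.open();
    try {
        Camera.Parameters params = camera.getParameters();
        List<Camera.Size> sizes = params.getSupportedPictureSizes();
        Camera.Size best = sizes.get(0);
        for (Camera.Size s : sizes) {
            if (s.width * s.height > best.width * best.height) best = s;
        }
        params.setPictureSize(best.width, best.height);
        camera.setParameters(params);
        // ...go on to set up the preview and take the picture...
    } catch (RuntimeException e) {
        camera.release(); // some devices reject advertised combinations
    }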