Working with the Android camera

I'm new to Android and I've started working on a video streaming app (from device to PC).
Based on what I've seen so far, which is better:
1. Using the camera preview callbacks + encoding each frame to JPEG
2. Using MediaRecorder (i.e. H264 or something else) - I've seen that there are many problems with this API.
Thanks

I worked on a video streaming app and we tried both approaches - note that this was 2-3 years ago, when phones were significantly slower.
Option 1 is probably too slow to achieve good rates; we managed around 8-12 fps.
Option 2 is the approach we chose on Android. It's doable, but there are issues on certain phones.
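For context, option 1 looks roughly like this with the legacy android.hardware.Camera API (a minimal sketch; sendToPc() is a hypothetical stand-in for your own socket code):

    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.hardware.Camera;
    import java.io.ByteArrayOutputStream;

    // Grab each preview frame and compress it to JPEG before shipping it out.
    void startJpegStreaming(Camera camera) {
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                Camera.Size size = cam.getParameters().getPreviewSize();
                // Preview frames arrive as NV21 (a YUV layout) by default.
                YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                        size.width, size.height, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
                sendToPc(out.toByteArray()); // hypothetical: send the JPEG to the PC
            }
        });
        camera.startPreview();
    }

The per-frame JPEG compression is what caps the frame rate, which matches the 8-12 fps observation above.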

Related

Android - Utilise both front and back cameras in app

I have searched quite a bit about whether or not it is possible to utilise both front and back cameras simultaneously in an app. I found threads from several years ago saying it is possible on certain devices, and on all Samsung phones after something like the S4, but that the feature is locked to Samsung-developed applications only. I then looked into whether it is possible to switch rapidly between the two cameras to achieve the same goal, but apparently that would be extremely taxing on the hardware. I was wondering if anyone has more recent information (as of 2017) on whether developing an application that uses both front and back cameras simultaneously is viable?
I know this is way late, but here are two posts I made on this to help anyone who runs into it:
https://stackoverflow.com/a/28811277/1138878
https://stackoverflow.com/a/43445052/1138878
Short answer: it's possible, but it depends on the hardware/chipset (Snapdragon 801 and higher-level hardware).
What it boils down to is that you need a Camera object for each camera, each feeding its own SurfaceView. Also make sure to check the capabilities (resolution and image format) in code and use one of the supported formats/sizes.
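A minimal sketch of that setup, assuming backView and frontView are two SurfaceViews already laid out (hardware caveats as above; the second open() is where unsupported devices fail):

    import android.hardware.Camera;
    import android.view.SurfaceView;
    import java.io.IOException;

    // One Camera object per camera, each feeding its own SurfaceView.
    void openBothCameras(SurfaceView backView, SurfaceView frontView) {
        try {
            Camera back = Camera.open(0);
            Camera.Parameters params = back.getParameters();
            // Check capabilities in code and pick a supported preview size/format.
            Camera.Size size = params.getSupportedPreviewSizes().get(0);
            params.setPreviewSize(size.width, size.height);
            back.setParameters(params);
            back.setPreviewDisplay(backView.getHolder());
            back.startPreview();

            // On unsupported chipsets this second open() is what fails.
            Camera front = Camera.open(1);
            front.setPreviewDisplay(frontView.getHolder());
            front.startPreview();
        } catch (RuntimeException | IOException e) {
            // Dual-camera open is chipset-dependent; fall back to one camera.
        }
    }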

Android, Open Front and Back Cameras Simultaneously [duplicate]

This question already has answers here:
Using both front and back cameras simultaneously android
(3 answers)
Closed 5 years ago.
I am attempting to use 2 SurfaceHolder objects tied to 2 separate SurfaceViews.
For one set I am doing Camera.open(0) for the back camera, and for the other Camera.open(1) for the front.
I am able to get a perfect preview for whichever camera I open first, but I am unable to open both cameras at the same time, even though I am using separate SurfaceViews and SurfaceHolders for each Camera.
Is it just impossible to do this under Android? I have seen a couple of posts suggesting that it is either not possible or that it is phone-hardware dependent, but no concrete explanations as to why.
Could somebody explain why Android does not appear to support this?
If it is supported, could someone suggest the correct way of opening both cameras at the same time?
I have also seen suggestions that it should be possible using OpenCV. If so, could someone please provide a link to an example or similar?
Thanks and Regards,
Steed.
It is possible: I've done it on my Nexus 6, even recording video from both cameras simultaneously using the Camera1 APIs. However, it is very much limited to a few devices.
Any unsupported device should give an error during the second Camera.open() call. It seems each hardware manufacturer provides a different implementation of the Camera APIs, so you can fairly easily try/catch the exception if a device does not allow it.
This is possible on certain phones, and on pretty much all newer ones. I have found that devices using the Snapdragon 801 and higher chipsets support this (OnePlus One, HTC One M8, etc.). This was somewhere in 2014.
It all depends on the hardware/manufacturer, so you should test this on real devices.
Also be aware that the original Camera API (Camera1) outputs in YUV, so you will have to deal with converting it to another format if you want to use the images/video. You can display YUV fine on a SurfaceView in real time, but for saving pictures/video I suggest you save the raw YUV and convert it later on a separate thread (although you could save and convert single images in real time on a separate thread).
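The "save the YUV, convert later" advice might look like this (a sketch; the NV21 byte[] comes from onPreviewFrame(), and the conversion runs off the camera thread so the preview keeps going):

    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Copy the frame (the camera reuses its buffer) and convert on a worker thread.
    void convertLater(byte[] data, final int previewWidth, final int previewHeight) {
        final byte[] frame = data.clone();
        new Thread(new Runnable() {
            @Override
            public void run() {
                try (FileOutputStream out = new FileOutputStream("/sdcard/frame.jpg")) {
                    YuvImage yuv = new YuvImage(frame, ImageFormat.NV21,
                            previewWidth, previewHeight, null);
                    yuv.compressToJpeg(new Rect(0, 0, previewWidth, previewHeight), 90, out);
                } catch (IOException e) {
                    // handle the write failure
                }
            }
        }).start();
    }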
Sorry for the late answer, but I hope you or someone else can use this info!

create video from series of images android

I am trying to create a video from a series of images in Android.
I have come across three options: MediaCodec, ffmpeg via the NDK, and JCodec. Can someone let me know which of them is best and easiest? I didn't find any proper documentation, so could somebody please post a working example?
If you are talking about Android 4.3+ (API 18+), in general you need to get an input Surface from the encoder, copy your image onto the texture that comes with that Surface, set the correct timestamp, and send it back to the encoder. You do that
fps (frames per second) * resulting video duration in seconds
times. The encoded bitstream coming out of the encoder should then go to the muxer, which finally gives you an MP4 file.
It requires rather a lot of coding :)
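For a sense of what that coding involves, here is a rough sketch of the encoder + muxer plumbing (API 18+). It is not a complete implementation: real code interleaves rendering and draining, and the EGL/OpenGL step that draws each image onto the input Surface is omitted:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Sketch: H.264 encoder with Surface input, output drained into MediaMuxer.
    void encodeToMp4(String outputPath) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        // Render each image onto inputSurface via OpenGL ES/EGL with a timestamp
        // (omitted here), then call encoder.signalEndOfInputStream().

        MediaMuxer muxer = new MediaMuxer(outputPath,
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        ByteBuffer[] outputBuffers = encoder.getOutputBuffers();
        int track = -1;
        while (true) {
            int index = encoder.dequeueOutputBuffer(info, 10000);
            if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                track = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
            } else if (index == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                outputBuffers = encoder.getOutputBuffers();
            } else if (index >= 0) {
                // Skip codec config data; the muxer already got it via the format change.
                if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0 && info.size > 0) {
                    muxer.writeSampleData(track, outputBuffers[index], info);
                }
                encoder.releaseOutputBuffer(index, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    break;
                }
            }
        }
        muxer.stop();
        muxer.release();
        encoder.stop();
        encoder.release();
    }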
I suggest you try the free Intel Media Pack: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has a sample, JpegSubstituteEffect, which allows you to create videos from images. The idea is to take a dummy video (black frames and silent audio) and substitute every black frame by copying in an image. It could easily be enhanced to create a video from a series of images. I know a couple of applications on Google Play that do the same using the Media Pack.
I tried JCodec 1.7 for Android. It is very simple compared to the other two options, and it works. There is a SequenceEncoder class in the android package that accepts Bitmap instances and encodes them into video. I ended up cloning this class into my app to override some of the settings, e.g. fps. The problem with JCodec is that performance is dismal: encoding a single 720x480 frame takes about 45 seconds. I wanted to do timelapse videos, possibly at full HD, and initially assumed any encoder would do, since I was not expecting encoding a frame to take more than a second (the minimal interval between frames in my app is 3 seconds). As you can guess, at 45 seconds per frame JCodec is not a fit.
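For reference, the JCodec usage is roughly this (a sketch; constructor and method names vary between JCodec releases, so check your version):

    import android.graphics.Bitmap;
    import org.jcodec.api.android.SequenceEncoder;
    import java.io.File;
    import java.util.List;

    // Each Bitmap becomes one video frame; finish() writes out the container.
    void encodeTimelapse(List<Bitmap> frames, File output) throws Exception {
        SequenceEncoder encoder = new SequenceEncoder(output);
        for (Bitmap frame : frames) {
            encoder.encodeImage(frame);
        }
        encoder.finish();
    }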
I am monitoring your question for other answers that may be helpful.
The MediaCodec/MediaMuxer way seems OK, but it is insanely complex. I need to learn quite a bit about OpenGL ES, video formats and some Android voodoo to get this going. Oh, and it only works on the latest crop of phones (4.3+), which is a real shame given all of Google's claims to fame. I found some Stack Overflow discussions on the topic. Two sub-paths exist: the OpenGL way, which is device independent, and another way which involves transcoding your RGB Bitmap data to YUV. The catch with YUV is that there are three flavours of it depending on the device hardware - planar, semi-planar and semi-planar compressed (and I am not sure a fourth one isn't coming in the future, so...).
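For the non-OpenGL sub-path, the RGB-to-YUV transcode for the semi-planar (NV21) flavour looks roughly like this, using the common BT.601 integer approximation:

    // Convert ARGB pixels (e.g. from Bitmap.getPixels()) to NV21: a full Y plane
    // followed by interleaved V/U samples at quarter resolution.
    static byte[] argbToNv21(int[] argb, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uvIndex = width * height;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int c = argb[j * width + i];
                int r = (c >> 16) & 0xff, g = (c >> 8) & 0xff, b = c & 0xff;
                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                if (j % 2 == 0 && i % 2 == 0) { // chroma is subsampled 2x2
                    int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                    int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                    nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v)); // V first in NV21
                    nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
                }
            }
        }
        return nv21;
    }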
Here are a couple of useful links on the OpenGL way:
CTS test - https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java
Many CTS like tests - http://bigflake.com/mediacodec/#EncodeDecodeTest
Two of them seem to be relevant and useful
EncodeAndMuxTest - http://bigflake.com/mediacodec/EncodeAndMuxTest.java.txt
CameraToMpegTest - http://bigflake.com/mediacodec/CameraToMpegTest.java.txt (I believe this to be closest to what I need; I just need to understand all the OpenGL voodoo and get my Bitmap in as a texture filling the entire frame, i.e. projections, cameras and whatnot come into play)
The ffmpeg route does not seem direct enough either. It means something in C++ accepting data from Java... I guess some awkward conversion of the Bitmap to byte[]/ByteBuffer will be needed first - CPU intensive and slow. I actually have JPEG byte[] data, but I am not sure that will be of use to the ffmpeg library. I am also not sure whether ffmpeg takes advantage of the GPU or other hardware acceleration, so it may well end up at 10 seconds per frame and literally bake the phone.
FFmpeg can handle this task. You first need to compile the FFmpeg library for Android (refer to the article "How to use Vitamio with your own FFmpeg build").
You can then refer to the samples shipped with FFmpeg to figure out how to implement your task.
On Android, implement the task in C++, then use JNI to integrate the C++ code into your app.
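The Java side of that JNI integration is small. A hedged sketch (the library and method names here are hypothetical; the C++ implementation calling FFmpeg lives in your NDK module):

    // Loads your NDK-built shared library and declares the native entry point.
    public class FfmpegEncoder {
        static {
            System.loadLibrary("ffmpegencoder"); // hypothetical library name
        }

        // Implemented in C++: reads the JPEGs and muxes them into an MP4 at the given fps.
        public static native int encodeImages(String[] jpegPaths, String outputMp4, int fps);
    }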

How to take photo with front and rear camera simultaneously? [duplicate]

This question already has answers here:
Using both front and back cameras simultaneously android
(3 answers)
Closed 9 years ago.
I'm trying to make an app that takes a picture using the front and the rear camera in one picture.
This is kinda how it would look.
I've read http://developer.android.com/reference/android/hardware/Camera.html and searched, but really couldn't find anything on the subject. I know I have seen apps like this on the app store, though.
Mostly it depends on the hardware of the device, and most devices currently do not allow use of both cameras simultaneously. Even if some devices are able to use both cameras, if you want to make an application that runs flawlessly on as many devices as possible (and that is what you want), you should not implement this solution. Google says:
In order for your application to be compatible with more devices, you
should not make assumptions about the device camera specifications
What you can do, depending on the application you are writing, is take the pictures sequentially: once with the front camera and then with the back camera, as sketched below.
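A sketch of that sequential approach with the legacy Camera API (saveJpeg() is a hypothetical helper, and on most devices you must also attach a preview surface before takePicture() will work):

    import android.hardware.Camera;

    // Capture with the back camera, release it, then capture with the front.
    void captureBothSequentially() {
        final Camera back = Camera.open(0); // 0 is usually the back camera
        back.startPreview();
        back.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera cam) {
                saveJpeg(data, "back.jpg"); // hypothetical helper
                cam.release();              // free the hardware before the next open()
                Camera front = Camera.open(1); // 1 is usually the front camera
                front.startPreview();
                front.takePicture(null, null, new Camera.PictureCallback() {
                    @Override
                    public void onPictureTaken(byte[] data2, Camera cam2) {
                        saveJpeg(data2, "front.jpg");
                        cam2.release();
                    }
                });
            }
        });
    }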

Capture video of applications from an Android device [duplicate]

This question already has answers here:
How to capture video in Android?
(7 answers)
Closed 9 years ago.
I need to make some videos of Android applications for a presentation. All solutions I found in the Android SDK can only take pictures. Is there a way to create a video, for example on a rooted device?
Edit: I found the droidAtScreen project, but the frame rate isn't very good...
I work for lunarg.com, and we have a solution via a free SDK (in Beta 1 right now) that lets a developer create a "record" button that calls our library. We capture all of the OpenGL calls, so there is no overhead that could impact the performance of the device, for example during gameplay.
After the capture is done, the file is uploaded in the background via WiFi to our server, where it is converted to a video and posted to the YouTube or video destination the user specifies.
If you want, you can look at the results here: http://www.lunarg.com/seemegaming/
This has been discussed a few times on the android-developers Google Group, such as here and here.
