I need to make a demo video for an application I have developed. I know there are some experimental applications that use DDMS and achieve a framerate of 5-6 FPS. That framerate is completely insufficient for my purposes, since the application has smooth animations that I would like to show off. Is there a way to do real-time screen capture on Android? Or should I settle for capturing the emulator, or for filming the device with a real camera?
Among the ready-made solutions, the one that provides the highest quality is a BeagleBoard or some other board with DVI or S-Video out. Second best is the emulator.
Apparently, some phones provide TV video output. That is the case for my Galaxy S i9000, which has a "TV Out" setting that provides video output via the jack (TRRS) connector. Some HTC phones (e.g., the Droid Incredible) may also support such video output. Then, all that is needed is a small S-Video acquisition card to capture the output.
I haven't tested that yet, but it is reported to work, and it should allow demoing all features including multi-touch gestures, which would be hard to reproduce on a BeagleBoard with a mouse plugged in... Plus, the phones have everything installed out of the box, which saves time.
EDIT - 19 Sep 2011:
Unfortunately, using the Samsung Galaxy S GT-I9000 video output didn't give good results. I purchased the dedicated Samsung video cable plus a Terratec G3 USB video acquisition adapter, but the video was flickering and the image quality was quite bad, not good enough for creating a demo of my app, which relies on OpenGL.
So I purchased a JVC GZ-HM435 camcorder, which records in HD, and that was a lot better. By positioning the camcorder on a proper stand, I was able to create a pretty nice video with very acceptable quality. This method also demonstrates the interactivity of the application better, because one can see the fingers, the pinching and all that. It really shows how the app works.
Related
Dual-camera smartphones are relatively new on the market, but I was wondering whether a camera app could explicitly choose to use only one lens, or manually retrieve separate input from each lens.
I couldn't find any Android API documentation specifically designed for dual-lens phones, so I guess this is a hardware/OS-level implementation that would be difficult to override or bypass.
Android's Camera HAL documentation page doesn't mention dual-lens devices either, but it does seem to strengthen that assumption.
I don't have much experience with iOS, but I guess it wouldn't be any easier there.
So the question is: how, if at all, could such a task be accomplished on either Android or iOS?
Edit: it seems that on iOS it is possible, as explained here (thanks to the4kman for pointing this out in the comments). So I guess the question remains for Android only.
Different vendors provide dual cameras on their Android devices in the hope of improving photo quality for the average user, more often than not tuned specifically for special conditions such as challenging illumination or the distortions of selfie mode. Each vendor uses proprietary technology to handle the dual cameras, and they are not interested in disclosing the implementation details. The only public interface they support is a virtual single camera that is more or less compliant with Google's specs.
This does not mean that you cannot, by lucky chance, be able to unlock the cameras on some specific device. Playing with photo modes, scenes, or other parameters may accidentally give you a picture that passed through only one lens of the dual setup; see the sketch below.
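As a rough illustration, here is a minimal Java sketch of that kind of probing with the legacy Camera API. The class name is a placeholder of mine, and whether any of these modes actually routes through a single physical lens is entirely device-specific, not something the API guarantees:

```java
import android.hardware.Camera;
import java.util.List;

// Sketch: probe the modes a vendor exposes through the legacy API.
// On some dual-camera phones (e.g., the Huawei P9/P10 monochrome case
// mentioned below) a mode like EFFECT_MONO may happen to use only one
// of the two lenses.
final class LensModeProbe {
    static void tryMonoEffect(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        List<String> effects = params.getSupportedColorEffects();
        List<String> scenes = params.getSupportedSceneModes();
        // Vendors sometimes hide extra modes here as undocumented strings.
        android.util.Log.d("LensModeProbe",
                "effects=" + effects + " scenes=" + scenes);
        if (effects != null && effects.contains(Camera.Parameters.EFFECT_MONO)) {
            params.setColorEffect(Camera.Parameters.EFFECT_MONO);
            camera.setParameters(params);
        }
    }
}
```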
In some rare cases, documentation is actually present. E.g., HTC lets you control the stereo vs. mono setting on the circa-2011 Evo 3D device.
Another example: this review implies that for the Huawei P9 and P10, you will actually get a one-camera result if you choose monochrome mode.
A recent article gives some perspective on how much the approaches to dual cameras differ among manufacturers.
I have searched quite a bit about whether it is possible to utilise both front and back cameras simultaneously in an app. I found threads from several years ago saying it is possible on certain devices, and on all Samsung phones from around the S4 onwards, but that the feature is locked to Samsung-developed applications. I then looked into whether it is possible to switch rapidly between the two cameras to achieve the same goal, but apparently that would be extremely taxing on the hardware. Does anyone have more up-to-date information on this in 2017? Is developing an application that uses both front and back cameras simultaneously viable?
I know this is way late, but here are two posts I made on this to help anyone who runs into it:
https://stackoverflow.com/a/28811277/1138878
https://stackoverflow.com/a/43445052/1138878
Short answer: it's possible but depends on hardware/chipset (Snapdragon 801 and higher level hardware).
What it boils down to is that you need a Camera object for each camera, each feeding its own SurfaceView. Also make sure to check the capabilities (resolution and image format) in code and use one of the supported formats/sizes.
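A minimal sketch of that setup with the legacy android.hardware.Camera API might look like this (the class name is a placeholder; on unsupported hardware the second open() throws, which is why everything sits in one try block):

```java
import android.hardware.Camera;
import android.view.SurfaceHolder;

// Sketch (legacy API, pre-camera2): open both cameras and attach each
// to its own SurfaceView via its SurfaceHolder.
final class DualPreview {
    static void start(SurfaceHolder backHolder, SurfaceHolder frontHolder) {
        try {
            Camera back = Camera.open(0);   // usually the back camera
            Camera front = Camera.open(1);  // usually the front camera

            // Check capabilities in code and pick a supported preview size.
            Camera.Parameters params = back.getParameters();
            Camera.Size size = params.getSupportedPreviewSizes().get(0);
            params.setPreviewSize(size.width, size.height);
            back.setParameters(params);

            back.setPreviewDisplay(backHolder);
            front.setPreviewDisplay(frontHolder);
            back.startPreview();
            front.startPreview();
        } catch (Exception e) {
            // The second open() (or preview setup) failed: this device
            // cannot drive both cameras at once.
        }
    }
}
```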
We are developing our own Android-based hardware, and we wish to use Vuforia (developed via Unity3D) for certain applications. However, we are having problems making Vuforia work well with our current camera orientation settings.
On our hardware, everything works fine when the camera is mounted horizontally, that is, parallel to the display. However, we need to mount the camera vertically, in other words at a 90-degree angle to the display. These are hardware settings: our kernel is configured accordingly, and every other program that uses the camera works correctly with everything, including our IMU sensors. However, apps developed with Vuforia behave completely oddly when the camera is mounted vertically.
We assume the problem is related to Vuforia's processing of the raw camera data, but we are not sure, and we do not know how to fix it. In detail:
-When "Enable Video Background" is on, the projected image is distorted and no video feed is available. The AR projection appears on a black background with distorted dimensions.
-When "Enable Video Background" is on and the device is rotated, the black background is replaced by flickering solid colors.
-When "Enable Video Background" is off, the AR projection has normal dimensions (no distortion) however it is tracked with wrong axis settings. For example, when the target moves left in real world, the projection moves up.
-When "Enable Video Background" is off and the device is rotated, the AR projection is larger compared to its appearance when the device is in it's default state.
I will be glad to provide any more information you need.
Thank you very much, have a nice day.
PS: We have found that applications whose main purpose is the camera (camera apps, barcode scanners, etc.) work fine, while apps for which camera usage is secondary (such as some games) have the same problem as Vuforia. This makes me think that apps that access the camera directly work fine, whereas those that go through the Android API and classes fail for some reason.
First, understand that every platform deals with cameras differently, and beyond this, different Android phone manufacturers deal with them differently as well. In my testing WITHOUT Vuforia, I had to rotate the plane I cast the video feed onto by (0, -90, 90) for Android/iPhone and (-270, -90, 90) for the Windows Surface tablet. Beyond this, the iPhone rear camera was mirrored, as were the Android front camera and the Surface front camera. That is easy to account for, but a more annoying issue is that the Google Pixel and Samsung front cameras were mirrored across the y axis (as were ALL iOS back cameras), while the Nexus 6P was mirrored across the x axis. What I am getting at is that there are a LOT of devices to account for on Android, so try more than just that one device. Vuforia has so far handled my Pixel and four of my iOS devices just fine.
As for how to fix your problem:
Go into your Unity player settings and look at the orientation option. There are a few choices here; my application only uses portrait, so I force portrait, and it seems to work fine (none of the problems I had to account for in the scenario above). Vuforia previously did NOT support auto-rotation, so make sure you have the latest version, since it sounds like that is what you need. If auto-rotate is set and it is still not working right, you may have to account for that specific device (don't do this for all devices until after you have tested those devices). To account for a device, use an if statement (or a case statement if you hit this problem with several devices) and then reflect or translate as needed; see the sketch below. Cross-platform development systems (like Unity) don't always get everything perfect, since there is basically no standard. In these cases you have to account for the deviations directly, by creating a method with a case statement inside it, so you can cleanly and modularly handle all the affected devices. It is a pain, but it beats developing for each device separately.
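To illustrate the per-device dispatch idea only (your Unity script would be C#, and the correction values below are placeholders, not measured for real devices), a Java sketch of the structure could look like this:

```java
import android.os.Build;

// Sketch: map known-problematic device models to the mirror/rotation
// correction their camera feed needs, with a safe default for the rest.
final class CameraFeedCorrection {
    final boolean mirrorX, mirrorY;
    final float rotationDegrees;

    CameraFeedCorrection(boolean mirrorX, boolean mirrorY, float rotationDegrees) {
        this.mirrorX = mirrorX;
        this.mirrorY = mirrorY;
        this.rotationDegrees = rotationDegrees;
    }

    // Placeholder values: fill these in per device only after testing it.
    static CameraFeedCorrection forThisDevice() {
        switch (Build.MODEL) {
            case "Nexus 6P":
                return new CameraFeedCorrection(true, false, 0f);
            case "Pixel":
                return new CameraFeedCorrection(false, true, 0f);
            default:
                return new CameraFeedCorrection(false, false, 0f);
        }
    }
}
```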
One more thing: check the Vuforia configuration file, as it has settings such as camera mirroring and direction. These appear to be public settings, so you should also be able to script against them in your case statement in case you need "Flip Horizontally" for one phone but not another.
What I'm trying to achieve: access both front and back cameras at the same time.
What I've researched: I know the Android camera API doesn't support using multiple instances of Camera, and that you have to release one camera before using the other. I've read tens of questions about this, and I know that on some devices it's possible (like the Samsung S4, or other newer devices from them).
I've also found out that it's possible to have access to both of them in Android KitKat on SOME devices.
I also know that on API >= 21, using the camera2 API, it is supposedly possible to access both of them at the same time, because the API is thread-safe.
What I've got so far: an implementation that accesses the cameras one at a time in order to provide picture-in-picture.
I know it's not possible to implement simultaneous dual cameras on every device; I just want a way to make the feature available on the devices that can do it.
How can I test to see if the device is capable of accessing both of them?
I've also searched for a library that would allow me to do this, but I didn't find anything. Is there such a library?
I would like to make this feature available on as many devices as possible; on the others, I'll keep the current (one-by-one) behaviour.
Can anyone please help me, at least with some advice?
Thanks!
The Android camera APIs generally allow multiple cameras to be used at the same time, but most devices do not have enough hardware resources to support that in practice - for example, there's often only one camera image processor shared by both cameras.
There's no query included in the Android APIs that will tell you up front whether you can use multiple cameras at the same time.
The only way to tell is to try to open a second camera when you already have one open. If you can open the second camera, then you can do picture-in-picture, etc. If you get an exception trying to open the second camera, then that particular device doesn't support having both cameras open.
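A minimal probe along those lines with the legacy Camera API might look like this (a sketch; the class name is mine, and the exact exception behaviour varies somewhat by device):

```java
import android.hardware.Camera;

// Sketch: check whether a second camera can be opened while the first
// one is held. Camera.open() throws a RuntimeException when the camera
// service refuses the second open on single-pipeline hardware.
public final class DualCameraProbe {
    public static boolean supportsDualOpen() {
        if (Camera.getNumberOfCameras() < 2) return false;
        Camera first = null;
        Camera second = null;
        try {
            first = Camera.open(0);
            second = Camera.open(1);   // throws if unsupported
            return true;
        } catch (RuntimeException e) {
            return false;
        } finally {
            if (second != null) second.release();
            if (first != null) first.release();
        }
    }
}
```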
It is possible using the Android camera2 API, but as indicated above, most devices don't have the hardware support. If you have a Nexus 5X, Nexus 6, or Nexus 6P it will work, and you can test with this BothCameras app. I've implemented blitting to allow video recording as well (in addition to still pictures), using the hardware H.264 encoder.
You cannot access both cameras at the same time on all Android phones, due to hardware limitations. The best alternative is to use the two cameras one by one. For that you can use a single Camera object and switch the camera facing to take the second photo.
I have done this in one of my applications.
https://play.google.com/store/apps/details?id=com.ushaapps.bothie
I'd like to mention that in some cases just opening two cameras with the camera2 API is not enough to determine support.
Some devices don't throw an error during opening: the second camera opens correctly, but the first one gets its onCaptureFailed callback invoked.
So the most accurate check is to start both cameras, wait for frames from each of them, and verify that no capture failure errors occur; see the sketch below.
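One way to structure that check, sketched in Java, is a camera2 CaptureCallback per camera that records whether real frames arrive. The class name is a placeholder, and the session/surface setup (e.g., an ImageReader or SurfaceTexture target for each camera) is elided:

```java
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureFailure;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.TotalCaptureResult;

// Sketch: attach one of these to each camera's repeating request.
// Dual capture is supported only if both instances see frames and
// neither sees a failure.
final class CaptureHealthCheck extends CameraCaptureSession.CaptureCallback {
    volatile boolean sawFrame = false;
    volatile boolean sawFailure = false;

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        sawFrame = true;        // a real frame was delivered
    }

    @Override
    public void onCaptureFailed(CameraCaptureSession session,
                                CaptureRequest request,
                                CaptureFailure failure) {
        sawFailure = true;      // the first camera often fails here, not at open()
    }
}
```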
Long version:
I have a very particular issue. I'm a multimedia artist currently working together with an animator; we are trying to create an interactive animation that I want to make available online as a website and as a free app on the App Store and the Android Market.
But here's the key problem I am faced with now.
The output video of the actual animation will be massive in resolution - probably something like 4 or more times HD resolution - but for a reason. The idea is to let the viewer watch only a part of the video at a time, as when panning around in Google Maps or any other canvas-like view (e.g. MMORPGs or strategy computer games). So you only see a part of the whole image at any one time, and you can move around to see what's "behind the corner".
So the animation would be a Google Maps-alike canvas (panning and perhaps zooming if there's enough time to implement it) but with video instead of images.
The big problem that comes up is performance. I was thinking the best way to make it run would be to scale the video down for different devices accordingly. But even considering only desktop computers for now: scaling down so the visible region is 720p means the full canvas is about four times 720p in each dimension, i.e. 5120x2880, which is more than the 4K standard (4096x2160) and probably too much for an average computer to decode (Full HD is quite often already problematic). Anyhow, that is unacceptable.
But I reached the conclusion that there is really no point in decoding and rendering the parts of the video that are invisible to the user anyway (why waste CPU and GPU time on that), since we know that only about 1/6th of the full canvas would be visible at any given time.
This inspired an idea: maybe I could split the output video into blocks, something between 8 and 64 files stacked side by side like cells in a table, and keep a timecode timer in some global variable, enabling video blocks on demand. As the user drags the canvas to reveal a block, it would automatically start playback of that file at the timecode read from the global variable. Some heuristics could anticipate the user's movement and activate invisible blocks prematurely, to remove any delay caused by seeking within the video and starting playback. Blocks that are no longer visible could deactivate themselves after a certain amount of time. A sketch of this block-activation idea follows.
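Since an Android app is one of the targets, here is a minimal Java sketch of the idea using android.media.MediaPlayer. The class name, the 4x4 grid, and the muted always-running master-clock player are all assumptions of mine; hardware decoder limits will cap how many tiles can actually play at once:

```java
import android.graphics.Rect;
import android.media.MediaPlayer;

// Sketch: one MediaPlayer per tile of the oversized canvas. Start only
// the tiles intersecting the viewport, synced to a master clock player.
final class TiledVideoController {
    static final int COLS = 4, ROWS = 4;
    static final int TILE_W = 1280, TILE_H = 720;

    final MediaPlayer[] tiles = new MediaPlayer[COLS * ROWS]; // prepared elsewhere
    final MediaPlayer masterClock;                            // muted, always playing

    TiledVideoController(MediaPlayer masterClock) {
        this.masterClock = masterClock;
    }

    void onViewportChanged(Rect viewport) {
        int now = masterClock.getCurrentPosition(); // master timecode in ms
        for (int i = 0; i < tiles.length; i++) {
            MediaPlayer tile = tiles[i];
            if (tile == null) continue;
            Rect bounds = new Rect(
                    (i % COLS) * TILE_W, (i / COLS) * TILE_H,
                    (i % COLS + 1) * TILE_W, (i / COLS + 1) * TILE_H);
            if (Rect.intersects(bounds, viewport)) {
                if (!tile.isPlaying()) {
                    tile.seekTo(now);   // jump to the master timecode
                    tile.start();
                }
            } else if (tile.isPlaying()) {
                tile.pause();           // free the decoder for visible tiles
            }
        }
    }
}
```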
So my first attempt was to try and see what are my choices platform-wise and I really see it comes down to:
HTML5 with JavaScript (heavily using the <video> tag)
Adobe Flash (using Flash Builder to deploy the apps to all the different devices)
And HTML5 would really be more preferable.
So I did some research to see whether it would be at all possible to synchronize more than one video at a time in HTML5. Unfortunately, it's far from perfect: there are two available "hacks" that work well in Firefox but are buggy in WebKit (the videos often drift out of sync by more than a few frames, sometimes even up to half a second, which would be quite visible with a single video split into blocks/segments). Not to mention that I haven't even tried it on the mobile platforms (Android / iOS) yet.
These two methods/hacks are Rick Waldron's sync, as shown here:
http://weblog.bocoup.com/html5-video-synchronizing-playback-of-two-videos/
And the other one, also developed by Rick, is mediagroup.js (this one doesn't work in Chrome at all):
https://github.com/rwldrn/mediagroup.js
My test here: http://jsfiddle.net/NIXin/EQbAx/10/
(I've hidden the controller, because for some reason it always plays back earlier than the rest of the clips.)
So, having explained all that, I would really appreciate any feedback: what would be the best way of solving this problem, and on which platform? Is HTML5 mature enough?
Short version:
If I still haven't made it clear what I need: think of a video zoomed in to 600%, so that you can't see everything (some of it is off screen) and you need to pan around by dragging with your mouse (or flicking your finger on mobile devices) to see what's going on in different parts of the video. How could I do that (with the video running smoothly) across platforms, while retaining the high quality and resolution of the video?
Thanks a lot, let me know if you need any more details or any clarification of the matter.