I would like to use the mobile camera to develop a smart magnifier that can zoom and freeze-frame what we are viewing, so we don't have to keep holding the device steady while we read. It should also be able to change colors, as shown in the image at the link below.
https://lh3.ggpht.com/XhSCrMXS7RCJH7AYlpn3xL5Z-6R7bqFL4hG5R3Q5xCLNAO0flY3Fka_xRKb68a2etmhL=h900-rw
Since I'm new to Android, I have no idea how to start. Do you have any suggestions?
Thanks in advance for your help :)
I've done something similar and published it here. I have to warn you, though: this is not a task to start Android development with. Not because of the development skills required; the showstopper here is the need for a massive number of devices to test on.
Basically, two reasons:
The Camera API is quite complicated, and different hardware devices behave differently. Forget about using the emulator; you would need a bunch of real hardware devices.
There is a new API, Camera2, for platform 21 and higher, and the old Camera API is deprecated (kind of an 'in limbo' state) — see the sketch below.
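A minimal sketch of picking between the two APIs at runtime, assuming a plain Activity/Context is available (the class and method names here are just illustrative):

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraManager;
import android.os.Build;

public class CameraApiChooser {

    // Camera2 is only available on API level 21 (Lollipop) and higher.
    public static boolean useCamera2() {
        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP;
    }

    // Lists camera ids via Camera2 on newer devices, falls back to the legacy Camera API otherwise.
    public static void printCameraIds(Context context) {
        if (useCamera2()) {
            CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            try {
                for (String id : manager.getCameraIdList()) {
                    System.out.println("Camera2 id: " + id);
                }
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            int count = android.hardware.Camera.getNumberOfCameras();
            for (int i = 0; i < count; i++) {
                System.out.println("Legacy camera id: " + i);
            }
        }
    }
}
```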
I have posted some custom Camera code on GitHub here, to show some of the hurdles involved.
So the easiest way out in your situation would be to use the camera intent approach, and when you get your picture back (it is a JPEG file), just decompress it and zoom in to the center of the resulting bitmap.
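A minimal sketch of that approach, assuming the captured photo ends up in a file whose path you already know (EXTRA_OUTPUT handling and onActivityResult wiring are omitted; the zoom factor is just an example):

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.provider.MediaStore;

public class MagnifierHelper {

    static final int REQUEST_IMAGE_CAPTURE = 1;

    // Launch the stock camera app; the result arrives in onActivityResult().
    public static void takePicture(Activity activity) {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (intent.resolveActivity(activity.getPackageManager()) != null) {
            activity.startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
        }
    }

    // Decompress the returned JPEG and "zoom" by cropping the center of the bitmap.
    public static Bitmap zoomCenter(String jpegPath, float zoomFactor) {
        Bitmap full = BitmapFactory.decodeFile(jpegPath);
        int cropWidth = (int) (full.getWidth() / zoomFactor);
        int cropHeight = (int) (full.getHeight() / zoomFactor);
        int left = (full.getWidth() - cropWidth) / 2;
        int top = (full.getHeight() - cropHeight) / 2;
        // Display the cropped region scaled up to the view size to get the magnifier effect.
        return Bitmap.createBitmap(full, left, top, cropWidth, cropHeight);
    }
}
```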
Good Luck
I am in a situation where I need to use two cameras at the same time.
I have been looking on the internet for Camera2 API examples. Although I have not been successful in developing my own camera app the way I want for an Android phone, I found some examples which open the camera.
Now I have a situation. I would like to know if I can access two cameras simultaneously on an Android Things Odroid N2+ board. This is because I am working on an app that needs to open two cameras and display them at the same time. For processing the images, I am planning to use the OpenCV library.
Is this possible on Android/Odroid?
I also recommend using the UVC library to access the two cameras; Android Things is not a good fit for this problem. I think this example will help you: https://github.com/saki4510t/UVCCamera
I am trying to mirror cast from my own app to a Fire TV Stick that is connected to the television. It has an option to mirror the display. My phone can connect to the Fire TV Stick this way, but I would like to mirror something with a smaller resolution, and even if I change my phone's resolution using adb, I think it sends the native resolution anyway.
I looked into MediaRouter and MediaRouteProvider. I also downloaded the MediaRouter sample whose snippets are used in the documentation. The sample ran but didn't work, and this API is super complex and has so many things in it. I am not sure how to build a simple app that casts video (and later the phone's screen) to another device, either to the Amazon Fire TV Stick mirror display or at least to a client app I will also write.
I couldn't find compact enough samples to do what I want. Do you have any idea where there is a sample that works and is not a massive amount of code?
I couldn't make it work following the documentation.
Instead of finding something in the API to do the mirror cast for me, I was able to just read pixel data from a MediaProjection and VirtualDisplay and send it over sockets.
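A rough sketch of that capture path, assuming you already obtained the user's screen-capture consent via MediaProjectionManager.createScreenCaptureIntent() and have a Surface to render into (the names here are illustrative):

```java
import android.content.Context;
import android.content.Intent;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;
import android.view.Surface;

public class ScreenCaptureHelper {

    // Called from onActivityResult() with the data returned by createScreenCaptureIntent().
    public static VirtualDisplay startCapture(Context context, int resultCode, Intent data,
                                              Surface targetSurface, int width, int height, int dpi) {
        MediaProjectionManager manager =
                (MediaProjectionManager) context.getSystemService(Context.MEDIA_PROJECTION_SERVICE);
        MediaProjection projection = manager.getMediaProjection(resultCode, data);

        // The screen content is rendered into targetSurface at the requested (smaller) resolution.
        return projection.createVirtualDisplay(
                "mirror",
                width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                targetSurface,
                null /* callback */,
                null /* handler */);
    }
}
```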
It wasn't easy: I had to use a GLES11Ext.GL_TEXTURE_EXTERNAL_OES texture from the SurfaceTexture, render that into my own offscreen GL_TEXTURE_2D, and then read it back using glReadPixels on the attached framebuffer.
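And a sketch of the readback step, assuming the external OES texture has already been drawn into a regular GL_TEXTURE_2D of the given size (the shader pass itself is omitted; only the framebuffer attachment and glReadPixels part is shown):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PixelReader {

    // Reads back RGBA pixels from an offscreen GL_TEXTURE_2D via a framebuffer attachment.
    public static ByteBuffer readTexture(int textureId, int width, int height) {
        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, textureId, 0);

        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        // RGBA / UNSIGNED_BYTE is the combination guaranteed to be readable in GLES 2.0.
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glDeleteFramebuffers(1, fbo, 0);
        return pixels; // this buffer is what gets sent over the socket
    }
}
```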
I have searched quite a bit about whether or not it is possible to use both front and back cameras simultaneously in an app. I found threads from several years ago saying it is possible on certain devices, and on all Samsung phones after something like the S4; however, that feature is locked to Samsung-developed applications only. I then looked into whether it is possible to switch rapidly between the two cameras to achieve the same goal, but apparently that would be extremely taxing on the hardware. I was wondering whether anyone has more recent information about this (as of 2017) and whether developing an application that can use both front and back cameras simultaneously is viable.
I know this is way late, but here are two posts I made on this to help anyone who runs into it:
https://stackoverflow.com/a/28811277/1138878
https://stackoverflow.com/a/43445052/1138878
Short answer: it's possible, but it depends on the hardware/chipset (Snapdragon 801 and higher-level hardware).
What it boils down to is that you need a Camera object for each camera, each feeding its own SurfaceView. Also make sure to check the capabilities (resolution and image format) in code and use one of the supported formats/sizes.
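A minimal sketch of that setup with the legacy Camera API, assuming the chipset actually allows both cameras to be open at once and that you already have two SurfaceViews in your layout (the helper below is illustrative):

```java
import android.hardware.Camera;
import android.view.SurfaceHolder;
import java.io.IOException;
import java.util.List;

public class DualCameraHelper {

    // Opens one camera, picks a supported preview size, and starts preview on the given holder.
    public static Camera startCamera(int cameraId, SurfaceHolder holder) throws IOException {
        Camera camera = Camera.open(cameraId); // 0 = back, 1 = front on most devices
        Camera.Parameters params = camera.getParameters();

        // Always pick from the supported sizes/formats instead of hard-coding values.
        List<Camera.Size> sizes = params.getSupportedPreviewSizes();
        Camera.Size size = sizes.get(0);
        params.setPreviewSize(size.width, size.height);
        camera.setParameters(params);

        camera.setPreviewDisplay(holder);
        camera.startPreview();
        return camera;
    }

    // Call once per SurfaceHolder after its surface is created:
    // backCamera  = startCamera(0, backHolder);
    // frontCamera = startCamera(1, frontHolder);
}
```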
I've made a Camera App.
I want to add anti-shake functionality.
But I could not find a setting for anti-shake (image stabilization).
Please help me!
Usually image stabilization is a built-in camera feature, while OIS (Optical Image Stabilization) is a built-in hardware feature; as of now, relatively few devices support them.
If the device doesn't have a built-in feature, I don't think there is much you can do.
Android doesn't provide a direct API to manage image stabilization, but you may try:
if android.hardware.Camera.getParameters().getSupportedSceneModes() contains the steadyphoto keyword (see here), your device supports a kind of stabilization (usually it shoots when accelerometer data indicates a "stable" situation)
check android.hardware.Camera.getParameters().flatten() for an "OIS" or "image-stabilizer" keyword/value (or similar) to use in Parameters.set(key, value). For the Samsung Galaxy Camera you should use parameters.set("image-stabilizer", "ois"); // can be "ois" or "off" (see the sketch after this list)
if you are feeling really ambitious, you may try reading the accelerometer data and deciding to shoot when the device looks steady.
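A small sketch combining the first two checks, assuming an already-opened legacy Camera instance (the "image-stabilizer" key is the Samsung-specific one mentioned above and may not exist on other devices):

```java
import android.hardware.Camera;
import java.util.List;

public class StabilizationHelper {

    public static void tryEnableStabilization(Camera camera) {
        Camera.Parameters params = camera.getParameters();

        // 1) Standard scene mode: the camera shoots only when the device looks steady.
        List<String> modes = params.getSupportedSceneModes();
        if (modes != null && modes.contains(Camera.Parameters.SCENE_MODE_STEADYPHOTO)) {
            params.setSceneMode(Camera.Parameters.SCENE_MODE_STEADYPHOTO);
        }

        // 2) Vendor-specific key: visible in params.flatten() on devices that support it,
        //    e.g. the Samsung Galaxy Camera.
        if (params.flatten().contains("image-stabilizer")) {
            params.set("image-stabilizer", "ois"); // can be "ois" or "off"
        }

        camera.setParameters(params);
    }
}
```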
Good luck.
If you want to develop a software image stabilizer, OpenCV is a helpful library. The following is one way to stabilize the image using features.
First, you should extract features from the image using a feature extractor such as SIFT or SURF. In my case, FAST+ORB worked best. If you want more information, see this paper.
After you get the features in the images, you should find matching features between images. There are several matchers, but the brute-force matcher is not bad. If brute force is too slow on your system, you should use an algorithm like a KD-tree.
Finally, you should compute the geometric transformation matrix that minimizes the error of the transformed points. You can use the RANSAC algorithm in this process.
You can implement this whole process using OpenCV, and I have already done so on mobile devices. See this repository.
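A compact sketch of that pipeline using the OpenCV Java bindings (ORB features, a brute-force Hamming matcher, and a RANSAC-estimated homography); this is a simplified illustration, not the code from the linked repository:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

import java.util.ArrayList;
import java.util.List;

public class StabilizationPipeline {

    // Estimates the transform that maps the current frame onto the previous (reference) frame.
    public static Mat estimateTransform(Mat prevGray, Mat currGray) {
        ORB orb = ORB.create();

        // 1) Extract features (keypoints + descriptors) from both frames.
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        orb.detectAndCompute(prevGray, new Mat(), kp1, desc1);
        orb.detectAndCompute(currGray, new Mat(), kp2, desc2);

        // 2) Match descriptors with a brute-force Hamming matcher (suits ORB's binary descriptors).
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // 3) Collect matched point pairs and estimate a homography with RANSAC.
        List<Point> prevPts = new ArrayList<>(), currPts = new ArrayList<>();
        KeyPoint[] k1 = kp1.toArray(), k2 = kp2.toArray();
        for (DMatch m : matches.toArray()) {
            prevPts.add(k1[m.queryIdx].pt);
            currPts.add(k2[m.trainIdx].pt);
        }
        MatOfPoint2f src = new MatOfPoint2f(currPts.toArray(new Point[0]));
        MatOfPoint2f dst = new MatOfPoint2f(prevPts.toArray(new Point[0]));
        return Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0);
    }
}
```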
I'm working on an application for sharing a SurfaceView's drawing with other Android devices in real time (as a stream), so all the connected devices on the network should be able to see the same canvas, draw in real time, and see each other's modifications.
I haven't found a way to do that. I thought about capturing screenshots and sharing them, but that would not allow multi-user modification.
How can I manage that? Is there an API or something like that?
I couldn't figure out where to start.