I have a Sony camera sensor (IMX327) and I need to get video and still images from it on an Android phone based on the MT6761. Do I understand correctly that there is a chance to do this: add support for the sensor at the kernel (4.9) level in /drivers/media/i2c/imx327.c (copied from another distribution) by selecting Device Drivers -> Multimedia support -> Autoselect ancillary drivers... -> I2C Encoders, decoders... -> Sony IMX327..., add V4L2 support, and add the device tree bindings for the sensor to the phone's board?
Would it then be possible to get video data from this camera into a user-mode Java/Kotlin application?
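That is roughly the right shape of change. As an illustration of the device tree part, a sensor node might look like the sketch below; the I2C bus, register address, clock, and GPIO numbers are assumptions that would have to match the actual MT6761 board schematic:

```dts
&i2c2 {
    imx327: camera-sensor@1a {
        compatible = "sony,imx327";
        reg = <0x1a>;                    /* I2C address, board-specific */
        clocks = <&camera_clk>;          /* sensor master clock, board-specific */
        clock-names = "xclk";
        reset-gpios = <&pio 11 GPIO_ACTIVE_LOW>;  /* example reset line */

        port {
            imx327_out: endpoint {
                remote-endpoint = <&csi_in>;      /* MIPI CSI-2 receiver input */
                data-lanes = <1 2 3 4>;
            };
        };
    };
};
```

With a working V4L2 sensor driver and ISP pipeline, the frames become visible to userspace through the vendor's camera HAL, which is what the Java/Kotlin Camera2 API ultimately talks to.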
Related
I want to capture two camera feeds separately so that I can use them for stereo vision, but I am not able to find any resources.
Usually, the back cameras are arranged too close together to produce a meaningful stereo pair. The exceptions are devices originally designed for stereo shooting, like the HTC Evo 3D. The latter came with a dedicated SDK that allows third-party developers to take advantage of its stereo camera.
Technically, independent access to the individual lenses of the back camera array has been supported by the camera2 API since Android Pie, see https://source.android.com/devices/camera/multi-camera. But this support comes on a manufacturer-by-manufacturer, device-by-device basis. Some manufacturers don't provide access to the physical camera devices. Some don't because their hardware is not compatible with these APIs; others, because their software is not up to date.
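As a sketch of what that discovery step looks like (plain camera2 calls, API 28+; error handling omitted), an app can check each camera for the logical multi-camera capability and list its physical sub-cameras:

```java
import android.content.Context;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;

public class StereoProbe {
    /** Print every logical camera that exposes individual physical lenses. */
    static void listPhysicalCameras(Context ctx) throws Exception {
        CameraManager mgr = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);
        for (String id : mgr.getCameraIdList()) {
            CameraCharacteristics chars = mgr.getCameraCharacteristics(id);
            int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
            for (int c : caps) {
                if (c == CameraCharacteristics
                        .REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) {
                    // The physical IDs can then be streamed together in one
                    // capture session to get a synchronized stereo pair.
                    System.out.println("Logical camera " + id
                            + " -> physical IDs " + chars.getPhysicalCameraIds());
                }
            }
        }
    }
}
```

If `getPhysicalCameraIds()` returns an empty set on your device, the vendor has not exposed the lenses individually and there is no app-level workaround.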
I wanted to ask about the features that Tango devices have and that a normal Android device (Samsung GT-19505) doesn't have.
I am confused about the difference between the camera a Tango device has and that of a normal Android device like the Samsung GT-19505. If I use a Samsung GT-19505 with the Tango API, which capabilities of the library will I be able to leverage?
Google Project Tango has two special cameras and two emitters on the back face:
A fish-eye RGB camera for wide-angle video.
A Class-1 IR-laser emitter for highlighting surface points.
An IR camera for detecting said points (and thereby determining spatial position.)
A flash for low-light conditions. (Might perhaps also be useful for assisting mapping efforts in low-light situations.)
Like most modern phones, the Samsung GT-19505 has an RGB camera and a flash, but it lacks the emitter/receiver pair needed for 3D depth-sensing and area-mapping, rendering the Project Tango SDKs incompatible with it.
For my project I need to implement an HDR feature on a device running Android Jelly Bean. From the code I see that when HDR (High Dynamic Range) is selected, the application sends SCENE_MODE_HDR to the HAL layer. I am a developer on the camera HAL layer. What am I supposed to do when I get scene mode = SCENE_MODE_HDR? Do I need to request that the driver produce three images with different exposure compensation values, with the application taking care of stitching the images into the HDR image?
Or, as with panorama mode, can the Android application and framework layers take care of HDR by themselves?
The scene mode SCENE_MODE_HDR seems to have been introduced in Android Jelly Bean 4.2, and as far as I know, HDR here means hardware HDR, which is meant to be implemented by the camera vendor.
I think the driver needs to handle this: not only capture three images with different exposure compensation values, but also do the image composition and tone mapping.
So from the application's point of view, the camera app just sets the scene mode to SCENE_MODE_HDR and takes a picture; the finished HDR image is then delivered in the onPictureTaken() callback.
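For intuition about what "composition and tone mapping" involves, here is a minimal, vendor-neutral sketch of weighted exposure fusion for a single pixel. The hat-shaped weighting and the EV values are illustrative assumptions, not what any particular HAL actually does:

```java
public class HdrFusion {
    /** Weight favoring mid-range pixel values (a simple hat function):
     *  near-black and near-white samples carry little reliable information. */
    static double weight(int v) {
        return v <= 127 ? v + 1 : 256 - v;
    }

    /** Fuse the same scene point sampled in bracketed frames (values 0..255).
     *  Each sample is divided by its relative exposure so all frames vote in
     *  a common radiance domain, then combined with the weights above. */
    static double fusePixel(int[] values, double[] exposures) {
        double num = 0, den = 0;
        for (int i = 0; i < values.length; i++) {
            double w = weight(values[i]);
            num += w * (values[i] / exposures[i]); // back-project to radiance
            den += w;
        }
        return num / den; // fused radiance estimate for this pixel
    }

    public static void main(String[] args) {
        int[] px = {40, 120, 220};      // same point at roughly -2 / 0 / +2 EV
        double[] ev = {0.25, 1.0, 4.0}; // relative exposure times
        System.out.println(fusePixel(px, ev));
    }
}
```

A real pipeline would first align the frames (the hand moves between shots) and finish with a tone-mapping pass to squeeze the fused radiance back into 8 bits.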
Is there a camera (a point-and-shoot, or just a sensor) with an SDK that can be interfaced with Android or an Arduino?
I have seen a few things online, but all of them are hacks. I like the Flea3, but it seems like you really need a full system to run this camera, or multiple cameras.
If this Samsung camera had an open SDK for Android, that would be ideal. If it had any SDK that they released to control the camera remotely (USB or wireless), that would also be awesome.
I know GoPro has their new Wi-Fi backpack, but you can only use their application to control the camera. I want to be able to turn the camera on and off, take pictures at any interval I set, including as fast as possible, and control other camera settings that are available.
Most high-end cameras have an IR remote. An Arduino can be used to record and replay the IR codes. See here for an example that shows how to capture the codes and create an intervalometer.
I found something better: the Picture Transfer Protocol (PTP). CHDK has a PTP interface, and I could connect to the camera with an Arduino.
http://chdk.wikia.com/wiki/PTP_Extension
http://code.google.com/p/arduino-ptpusb/
I'm wondering whether I can develop an application where the main content is displayed on a big screen connected to my Android device via HDMI, while the device's touchscreen acts as a controller displaying different content.
So far the videos I've seen about Android's HDMI feature only mirror the phone's screen to the big screen.
You can use the Android Presentation API (API 17).
Works very well.
Your Presentation is connected to an Activity, which lets you display e.g. a live stream on the TV (through e.g. HDMI) and use the phone's display as a remote. I've done this in an app, and out of laziness also added a second app for a second phone, which is used as a Bluetooth remote control.
Hope this answers your question.
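A minimal sketch of that setup with the plain Presentation API (API 17+); the layout resource names are hypothetical placeholders:

```java
import android.app.Activity;
import android.app.Presentation;
import android.hardware.display.DisplayManager;
import android.os.Bundle;
import android.view.Display;

public class RemoteControlActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Controller UI shown on the phone's own touchscreen.
        setContentView(R.layout.controller);  // hypothetical layout

        DisplayManager dm = (DisplayManager) getSystemService(DISPLAY_SERVICE);
        Display[] external =
                dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
        if (external.length > 0) {
            // Different content rendered only on the HDMI-connected display.
            Presentation p = new Presentation(this, external[0]) {
                @Override
                protected void onCreate(Bundle b) {
                    super.onCreate(b);
                    setContentView(R.layout.big_screen);  // hypothetical layout
                }
            };
            p.show();
        }
    }
}
```

Registering a DisplayManager.DisplayListener as well lets the app react when the HDMI cable is plugged in or pulled out at runtime.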
SurfaceFlinger only sees two different kinds of graphics buffers: frame buffers for the normal UI display and overlay buffers for video and camera previews. So the frame buffers (or overlay buffers) should be routed to HDMI by the display controller when an HDMI cable is plugged in. But unfortunately there is no public API to control this kind of data flow; it is highly dependent on how the hardware overlay or HDMI device drivers are implemented by the chipset vendor or device manufacturer.
I don't think you can do this unless you develop for a device whose vendor published an HDMI API, as with some Motorola devices. The rest typically have an HDMI OS service (not accessible to apps) that uses ioctls and /dev/ access for HDMI control (again, not accessible to unsigned apps).
You can exploit the flaw in HDMI overlay handling to achieve this: the video input goes directly to the HDMI output, while you use another layout on the phone's screen that will not be visible over HDMI because of the overlay issue.