I am researching a way to do raw camera feed manipulation via OpenCL (or other hardware-assisted methods). The important part here is that I need to do it at a global level, so that every app that ever uses the camera benefits from this "global filter". For example, if we have a device with a fisheye camera, is there a documented way to dewarp the feed before any target app gets it?
In other words, is there a documented way to install global filters on the camera feed that pre-process the raw feed before it is delivered to any app requesting camera access (the camera app, Periscope, Ustream, etc.)?
If there are no such user-space installable filters, is there a documented way to do them as part of a custom Android OS distribution (e.g., kernel-side drivers)? Are any interfaces of this kind even available?
I have done some extensive googling regarding this, but I've failed to find anything. Any pointers are greatly appreciated.
Thanks
I don't think there is an easy way to directly manipulate the raw camera stream on the GPU from an end user's (or third-party developer's) perspective. The chip vendor does have interfaces between the camera pipeline and the GPU, and some of them may be open to phone vendors as well, but those interfaces are definitely for internal use only: they are used either in the driver or in system-level components, so I don't see how an application developer could access them directly. For a custom Android distribution, the natural place to hook in this kind of processing is the camera HAL, which is where vendors integrate their ISP pipelines, but that code is typically closed.
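What you can do without vendor access is per-app processing: render the camera preview into a SurfaceTexture and dewarp it in a fragment shader before display. Below is a minimal sketch under those assumptions (a standard GLSurfaceView + GL_TEXTURE_EXTERNAL_OES preview pipeline; only the shader is shown, and uK1 is a hypothetical single-coefficient radial distortion term you would have to calibrate for the actual lens):

    // App-level sketch only -- this cannot act as a global filter.
    public final class DewarpShader {
        public static final String FRAGMENT_SHADER =
                "#extension GL_OES_EGL_image_external : require\n"
                + "precision mediump float;\n"
                + "uniform samplerExternalOES uTexture;\n"
                + "uniform float uK1;\n"                  // e.g. -0.25 for mild fisheye
                + "varying vec2 vTexCoord;\n"
                + "void main() {\n"
                + "    vec2 c = vTexCoord - vec2(0.5);\n" // center the coordinates
                + "    float r2 = dot(c, c);\n"
                + "    vec2 src = c * (1.0 + uK1 * r2) + vec2(0.5);\n"
                + "    gl_FragColor = texture2D(uTexture, src);\n"
                + "}\n";
    }

The catch, of course, is that this only fixes the feed inside your own app; every other camera app still sees the warped stream.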
I have a MacBook and I would like to use it to monitor a Nest wireless security camera, including an approximately 1 TB archive of continuously updated video history (perhaps of motion-detected clips only). This can be done by subscribing to a Nest cloud account, but that can get expensive, especially for several cameras, so I'd rather do it myself.
Can anyone point me to open-source code that will handle this? If not, is there another type of camera that will allow me to do this over WiFi?
As promised above, I will update the status of this issue.
After a significant amount of work and also significant progress, I was able to connect to the live Nest camera feed programmatically, but I was never able to actually record the live stream into short videos, although this was easy for my MacBook webcam. My belief is that Nest has engineered this feed so that camera owners cannot directly access it, leaving no option but their "Nest Aware" monthly service. I do not want to do this, both because I do not want to pay for it and because I want to create options that Nest Aware does not offer.
Searching the web, it appears that this kind of thing might be done using another software package, "Blue Iris". I did not want to get this either, as I am sure flexibility would be sacrificed, and the camera would also need to be made publicly shared(!)
So I am giving up on Nest, although I like the hardware.
I did find an alternative. I also had an Arlo Q camera and I tried that, using an open source API on GitHub:
https://github.com/jeffreydwalter/arlo
I was able to access the camera and save motion detected videos to my disk within an hour of finding the above link. So, if you want to do this type of thing, I recommend Arlo over Nest.
I have a Sony Alpha 7R camera and am looking for information about the "built-in" application support. What are those apps -- are they Android-based? Is there public information on how to create and install your own camera app -- NOT talking about the remote API?
The few available apps are kind of primitive and limited; in particular, I'd like to create a more versatile "interval timer" app -- the built-in time-lapse app is too simple for my purposes.
To be specific: versatile bracketing, absolute start/stop times, and complex shooting programs with pre-programmed ISO, shutter speed, bracketing, etc. for programmed interval shooting, or simply shooting as fast as possible. As an example, I just lost valuable time shooting an eclipse because I had to reconfigure and switch modes.
Ideal would be a scenario where I could upload a shooting script to an app on the camera.
The real answer is that you can build applications against the Camera API using many different methods. When you create an application for the Camera API, you are just making API calls to the camera while your code is connected to the camera's WiFi network somehow. In the end, the easiest way to distribute your code is via smartphones, since that works for iOS, Windows, etc. as well as Android, but you are not limited to those technologies. Please let me know if I can provide more information.
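For a concrete picture of what such an API call looks like, here is a minimal sketch that fires the shutter through the Camera Remote API's JSON-RPC-over-HTTP protocol. The endpoint URL is an assumption for illustration: the real one is discovered via SSDP and varies by model.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Scanner;

    public class SonyRemoteShot {
        // Assumed endpoint; discover the real URL via SSDP (it varies by model).
        private static final String ENDPOINT = "http://10.0.0.1:10000/sony/camera";

        public static void main(String[] args) throws Exception {
            // JSON-RPC request for a single shutter release.
            String body = "{\"method\":\"actTakePicture\",\"params\":[],\"id\":1,\"version\":\"1.0\"}";
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
                System.out.println(in.useDelimiter("\\A").next()); // JSON-RPC response
            }
        }
    }

A shooting script like the one you describe would then just be a sequence of such calls (set ISO, set shutter speed, trigger) driven by a timer on the phone side.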
I want to control the aperture, shutter speed and ISO on my android phone. Is there a way in which I can access the hardware features?
I won't say it's impossible to do this, but it IS effectively impossible to do it in a way that's generalizable to all -- or even many -- Android phones. If you stray from the official path defined by the Android API, you're pretty much on your own, and this is basically an embedded hardware development project.
Let's start with the basics: you need a schematic of the camera subsystem and datasheets for everything in the image pipeline. For every phone you intend to support. In some cases, you might find a few phones with more or less identical camera subsystems (particularly when you're talking about slightly-different carrier-specific models sold in the US), and occasionally you might get lucky enough to have a lot of similarity between the phone you care about and a Nexus phone.
This is no small feat. As far as I know, not even Nexus phones have official schematics released. Popular phones (especially Samsung and HTC) usually get teardowns published, so everyone knows the broad details (camera module, video-encoding chipset, etc.), but there's still a lot of guesswork involved in figuring out how it's all wired together.
Make no mistake -- this isn't casual hacking territory. If terms like I2C, SPI, MMC, and iDCT mean nothing to you, you aren't likely to get very far. If you don't understand how CMOS image sensors are read out serially, and how Bayer arrays are used to produce RGB images, you're almost certainly in over your head.
That doesn't mean you should throw in the towel and give up... but it DOES mean that trying to hack the camera on a commercial Android phone probably isn't the best place to start. There's a lot of background knowledge you're going to need in order to pull off a project like this, and you really need to acquire that knowledge from a hardware platform that YOU control & have proper documentation for. Make no mistake... on the hierarchy of "hard" Android software projects, this ranks pretty close to the top of the list.
My suggestion (simplified and condensed a bit): buy a Raspberry Pi, and learn how to light up an LED from a GPIO pin. Then learn how to selectively light up 8 LEDs through a 74HC595 shift register. Then buy an SPI-addressed flash chip on a breakout board, and learn how to write to it. At some point, buy a video image sensor with a "serial" (fyi, "serial" != "rs232") interface from somebody like Sparkfun.com and learn how to read it one frame at a time, dumping the raw RGB data to flash. Learn how to use I2C to read and write the camera's control registers. At this point, you MIGHT be ready to tackle the camera in an Android phone for single photos.
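To make the very first step of that path concrete, here is a minimal sketch of blinking an LED from a GPIO pin via the legacy sysfs interface (pin 17 is an arbitrary choice, it needs root, and newer kernels prefer libgpiod, so treat it as an illustration only):

    import java.io.FileWriter;
    import java.io.IOException;

    public class BlinkLed {
        private static void write(String path, String value) throws IOException {
            try (FileWriter w = new FileWriter(path)) { w.write(value); }
        }

        public static void main(String[] args) throws Exception {
            write("/sys/class/gpio/export", "17");            // expose pin 17
            write("/sys/class/gpio/gpio17/direction", "out"); // set as output
            for (int i = 0; i < 10; i++) {
                write("/sys/class/gpio/gpio17/value", "1");   // LED on
                Thread.sleep(500);
                write("/sys/class/gpio/gpio17/value", "0");   // LED off
                Thread.sleep(500);
            }
            write("/sys/class/gpio/unexport", "17");          // clean up
        }
    }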
If you're determined to start with an Android phone, at least stick to Nexus devices for now, and don't buy the phone (if you don't already own it) until you have the schematics, datasheets, and source code in your possession. Don't buy the phone thinking you'll be able to trace the schematic yourself. You won't. At least, not unless you're a grad student with one hell of a graduate-level electronics lab (with X-ray capabilities) at your disposal. Most of these chips and modules are micro-BGA. You aren't going to trace them with a multimeter, and every Android camera I'm aware of has most of its low-level driver logic hidden in loadable kernel modules whose source isn't available.
That said, I'd dearly love to see somebody pull a project like this off. :-)
Android has published online training that contains all the information you need:
You can find it here - Media APIs
However, there are limitations: not all hardware supports every parameter.
And, if I recall correctly, the legacy Camera API doesn't let you control shutter speed or ISO directly; the newer Camera2 API (API 21+) does expose them on devices with manual sensor support.
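For reference, manual exposure in Camera2 looks roughly like the sketch below. It assumes you have already opened a CameraDevice and created a capture session targeting a hypothetical previewSurface, and it only works on devices whose characteristics advertise the MANUAL_SENSOR capability:

    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.view.Surface;

    public final class ManualExposure {
        static CaptureRequest buildManualRequest(CameraDevice camera, Surface previewSurface)
                throws Exception {
            CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(previewSurface);
            // Disable auto-exposure so the manual values below take effect.
            builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
            builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 8_000_000L); // 1/125 s in ns
            builder.set(CaptureRequest.SENSOR_SENSITIVITY, 400);          // ISO 400
            // Most phone lenses are fixed-aperture; where the lens supports it:
            builder.set(CaptureRequest.LENS_APERTURE, 2.0f);              // f/2.0
            return builder.build();
        }
    }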
I want to use an Android-based tablet - not a phone; I need a large screen and I don't need 3G.
The guy with the tablet will attach a webcam to it, and a software application on the Android tablet will stream the camera's feed to a web page (there may later be a need to stream video back to the Android tablet - TBD).
Additionally, I need two-way Voice over IP.
I may (TBD) need to use a TCP interface to a device, which might or might not be achievable through the Android.
With so much open: is there any open source that can handle that, either as a group or individually, or should I code my own? Since I don't normally do this kind of thing, what's the best approach in terms of protocols, etc.?
I'd like to demo something in a month or so. Sorry that this is vague - but so is the person asking for it (which might make me lean towards rolling my own simply because of shifting requirements; but I might build my own around off-the-shelf building blocks, for instance if I can find off-the-shelf open-source VoIP, etc.)
"is there any open source that can handle that, either as a group or individually, or should I code my own?"
AFAIK, there is virtually no "open source that can handle that" for Android. In fact, you will need hardware modifications and drivers to support webcams, let alone anything else on your to-do list.
There are a lot of mobile streaming services. Maybe they can help you with one half of your problem:
http://www.ustream.tv/
http://www.qik.com/
http://bambuser.com/
Instead of the webcam, you can use the integrated camera on the phone itself to capture and stream. And, yes, you'll have to develop something on your own, especially with changing requirements.
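If you do go the integrated-camera route, the capture half looks roughly like this sketch using the legacy android.hardware.Camera plus MediaRecorder pair. Note that this records to a local file; actual streaming to a web page needs an RTSP/RTP (or similar) layer on top, which the stock SDK does not provide:

    import android.hardware.Camera;
    import android.media.MediaRecorder;
    import android.view.Surface;

    public final class CameraCapture {
        static MediaRecorder startRecording(Surface previewSurface, String outputPath)
                throws Exception {
            Camera camera = Camera.open();
            camera.unlock();                  // hand the camera over to MediaRecorder
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            recorder.setOutputFile(outputPath);
            recorder.setPreviewDisplay(previewSurface);
            recorder.prepare();
            recorder.start();                 // caller must stop() and release() later
            return recorder;
        }
    }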
If I wanted to perform an iris scan, would I need any additional APIs, or can I just use what's readily available?
You don't need any new APIs to do biometrics as such, but specialized biometric APIs do exist. They're typically most helpful if you want to make inter-platform information sharing easier, skip some of the boring parts of writing image acquisition/storing programs, that sort of thing. Writing code that's compatible with all the relevant biometric standards can get pretty gory without the sort of guidance that a biometric API can provide. The BioAPI Consortium (http://www.bioapi.org/) hosts some specifications and other such things on their website if you're interested in possibly acquiring one, although I'm not deeply familiar with all the stuff they're up to.
In deciding whether or not to use a biometric API, I would check first to see how easy it is to make your acquisition device interoperate with your software. If you're planning to just take a few pictures for research purposes and download them onto your machine from a camera that you've already figured out, it may be less important to get one than if you're taking a bunch of pictures for access control purposes, which requires that you plug different cameras into different computers in different places.
One option is Neurotechnology VeriEye (not free, but I think they have a trial); they offer a pretty comprehensive SDK for working with iris images and doing verifications/identifications. The biggest challenge is getting usable images from your phone camera. I have seen reasonably successful iris recognition using iris images extracted from DSLR full-face images, so you should be able to get something usable out of your (rear) camera. As a ballpark figure, you typically need a 640x480 image that captures the complete eye.
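To illustrate that sizing rule, here is a small sketch that crops an eye region out of a full-face bitmap and enforces the rough 640x480 minimum before handing it to an iris SDK. The eye-center coordinates are hypothetical; in practice they would come from an eye/face detector:

    import android.graphics.Bitmap;

    public final class IrisCrop {
        static Bitmap cropEye(Bitmap fullFace, int eyeCenterX, int eyeCenterY) {
            int w = 640, h = 480;             // rough minimum usable size
            int left = Math.max(0, eyeCenterX - w / 2);
            int top  = Math.max(0, eyeCenterY - h / 2);
            w = Math.min(w, fullFace.getWidth() - left);
            h = Math.min(h, fullFace.getHeight() - top);
            if (w < 640 || h < 480) {
                throw new IllegalArgumentException("source too small for a usable iris crop");
            }
            return Bitmap.createBitmap(fullFace, left, top, w, h);
        }
    }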
I don't believe there are any libraries as such for biometrics. Also, unless you are using your own hardware, I don't think it's viable, as none of the current phones have the camera resolution to capture an iris in detail.