The Lenovo Mirage Solo VR headset has a stereo camera system in the front which is used by the built-in Google WorldSense tracking system.
According to the Google dev blog, support for AR/see-through might appear at some future point (or not, knowing Google).
However, since I'm interested in raw camera access anyway, I was wondering if anybody has already taken apart or rooted the Mirage Solo to see if the raw stereo camera data might be accessed?
Based on some digging with a device info app, it looks like the camera uses OV9282 sensors (monochrome, global shutter, 1 MP, up to 120 FPS), but that's all I was able to find out so far.
Any additional pointers would be much appreciated.
EDIT: I'd also be happy if someone could point me to a download link for the factory image files. I haven't been able to find those anywhere either.
Related
I have a Sony Alpha 7R camera and am looking for information about the "built-in" application support. What are those apps? Are they Android-based? Is there public information about how to create and install your own camera app? (NOT talking about the remote API.)
The few available apps are kind of primitive and limited; in particular I'd like to create a more versatile "interval timer" app, since the built-in time-lapse app is too simple for my purposes.
To be specific: versatile bracketing, absolute start/stop times, complex shooting programs with pre-programmed ISO, shutter speed, bracketing, etc. for a programmed series of interval shots, or simply shooting as fast as possible. As an example, I just lost valuable time shooting an eclipse because I had to reconfigure settings and switch modes.
Ideally I could upload a shooting script to an app on the camera.
The real answer is that you can build applications for the Camera API using many different methods. When you create an application for the Camera API, you are just making API calls to the camera while your code is connected to the camera's Wi-Fi somehow. In the end, the easiest way to distribute your code is via smartphones, since that works on iOS, Windows, etc. as well as Android, but you are not limited to these technologies. Please let me know if I can provide more information.
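For a concrete flavor: the remote calls are plain JSON-RPC over HTTP. Here's a minimal Python sketch; the endpoint URL and method names follow Sony's published Camera Remote API documentation, but they vary by model and firmware, so treat them as assumptions to check against your camera's docs:

```python
import json
from urllib import request

# Typical default endpoint when joined to the camera's Wi-Fi; check your model.
CAMERA_URL = "http://192.168.122.1:8080/sony/camera"

def rpc_payload(method, params=None, req_id=1, version="1.0"):
    """Build the JSON-RPC request body used by Sony's Camera Remote API."""
    return json.dumps({"method": method, "params": params or [],
                       "id": req_id, "version": version}).encode()

def call_camera(method, params=None):
    """POST one API call to the camera while connected to its Wi-Fi."""
    req = request.Request(CAMERA_URL, data=rpc_payload(method, params),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

# Usage (requires a connected, compatible camera):
# call_camera("setIsoSpeedRate", ["800"])
# call_camera("actTakePicture")
```

A script like this runs anywhere with Wi-Fi (phone, laptop, Raspberry Pi), which is the point the answer makes about not being limited to one platform.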
I am researching a way to do raw camera feed manipulation via OpenCL (or other hardware-assisted methods). The important part is that I need to do this at a global level, so that every app that uses the camera benefits from this "global filter". For example, if a device has a fisheye camera, is there a documented way to dewarp the feed before any target app receives it?
In other words, is there a documented way to install global, user-space filters on the camera feed that pre-process the raw frames before they are delivered to any app requesting camera access (the camera app, Periscope, Ustream, etc.)?
If there are no such user-space installable filters, is there a documented way to add them as part of a custom Android OS distribution (e.g. kernel-side drivers)? Are any interfaces of this kind even available?
I have done extensive googling on this but failed to find anything. Any pointers are greatly appreciated.
Thanks
I don't think there is an easy way to directly manipulate the raw camera stream on the GPU from an end user's (or app developer's) perspective. The chip vendor does have an interface between the camera pipeline and the GPU, and parts of it may be open to phone vendors as well, but those interfaces are definitely for internal use only. They are used either in the driver or in system-level applications; I don't see how a third-party developer could access them directly.
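Absent such a hook, each app ends up dewarping its own frames after capture. As a rough illustration of the per-pixel inverse mapping involved (exactly the kind of work an OpenCL kernel would parallelize), here is a pure-NumPy sketch assuming an equidistant fisheye model; the focal-length parameters are made-up placeholders, and real code would interpolate rather than take the nearest pixel:

```python
import numpy as np

def dewarp_equidistant(fisheye, f_out=200.0, f_fish=150.0):
    """Undistort an equidistant-model fisheye frame by inverse mapping.

    For each rectilinear output pixel we compute its viewing angle theta
    and sample the fisheye image at radius r = f_fish * theta
    (nearest-neighbour lookup -- illustration only).
    """
    h, w = fisheye.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r_out = np.hypot(dx, dy)
    theta = np.arctan2(r_out, f_out)      # angle from the optical axis
    r_fish = f_fish * theta               # equidistant projection radius
    scale = np.divide(r_fish, r_out, out=np.zeros_like(r_fish),
                      where=r_out > 0)
    src_x = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
    return fisheye[src_y, src_x]
```

The mapping is embarrassingly parallel, which is why it ports naturally to a GPU; the hard part, as the answer says, is getting it inserted into the system pipeline rather than into one app.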
Hi everybody!
I want to build a drone with an IP camera that streams video to an Android app, if possible over HTTP (as in a web page). The camera should be as small (and light) as possible. So, which IP camera would you advise?
Thank you guys!
If it were up to me to decide (keeping in mind that I don't know the exact size of the drone),
I would say that you can't go wrong with a Raspberry Pi and the official Pi camera module. It offers HD quality, which is vital if you want to see clearly while the wind is blowing the drone side to side.
An infrared version is coming soon, so there is an add-on to look forward to!
It is lightweight and can be mounted anywhere because it isn't restricted to a housing.
I have used the cam for many projects from motion detection to security cams.
It is a must-have: inexpensive and great for these projects, plus it can be programmed to do so much more.
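On the HTTP part of the question: most IP cameras (and a Pi-based one) serve video as MJPEG over a multipart/x-mixed-replace stream, which browsers and Android apps can consume directly. A minimal stdlib-only Python sketch of that framing follows; the frame source here is a stub, and on a Pi you would replace it with real JPEG frames from the camera:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG frame as a multipart/x-mixed-replace part."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode()
            + b"\r\n\r\n" + jpeg_bytes + b"\r\n")

def next_frame():
    """Stub frame source -- replace with real camera JPEG capture."""
    return b"\xff\xd8 fake jpeg payload \xff\xd9"

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One long-lived response; each part replaces the previous frame.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary="
                         + BOUNDARY.decode())
        self.end_headers()
        try:
            while True:
                self.wfile.write(mjpeg_part(next_frame()))
        except (BrokenPipeError, ConnectionResetError):
            pass  # client disconnected

# Uncomment to serve on port 8080:
# HTTPServer(("", 8080), StreamHandler).serve_forever()
```

MJPEG is inefficient compared to H.264, but it is the simplest thing an Android WebView or image viewer can display with no extra decoding work.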
I want to control the aperture, shutter speed and ISO on my android phone. Is there a way in which I can access the hardware features?
I won't say it's impossible to do this, but it IS effectively impossible to do it in a way that's generalizable to all -- or even many -- Android phones. If you stray from the official path defined by the Android API, you're pretty much on your own, and this is basically an embedded hardware development project.
Let's start with the basics: you need a schematic of the camera subsystem and datasheets for everything in the image pipeline. For every phone you intend to support. In some cases, you might find a few phones with more or less identical camera subsystems (particularly when you're talking about slightly-different carrier-specific models sold in the US), and occasionally you might get lucky enough to have a lot of similarity between the phone you care about and a Nexus phone.
This is no small feat. As far as I know, not even Nexus phones have official schematics released. Popular phones (especially Samsung and HTC) usually get teardowns published, so everyone knows the broad details (camera module, video-encoding chipset, etc.), but there's still a lot of guesswork involved in figuring out how it's all wired together.
Make no mistake -- this isn't casual hacking territory. If terms like I2C, SPI, MMC, and iDCT mean nothing to you, you aren't likely to get very far. If you don't literally understand how CMOS image sensors are read out serially, and how Bayer arrays are used to produce RGB images, you're almost certainly in over your head.
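To give a flavor of the Bayer point: each sensor pixel records only one colour, and full RGB is interpolated afterwards. Here's a toy nearest-neighbour demosaic of an RGGB mosaic in NumPy; real pipelines use far better interpolation, and this sketch exists only to show the structure:

```python
import numpy as np

def demosaic_rggb(mosaic):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic.

    Each 2x2 cell is [[R, G], [G, B]]; every output pixel in a cell
    simply copies that cell's R, (one) G and B samples.
    """
    h, w = mosaic.shape
    assert h % 2 == 0 and w % 2 == 0, "toy version needs even dimensions"
    r = mosaic[0::2, 0::2]   # top-left sample of each cell
    g = mosaic[0::2, 1::2]   # top-right green (second green ignored)
    b = mosaic[1::2, 1::2]   # bottom-right sample
    rgb = np.empty((h, w, 3), dtype=mosaic.dtype)
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    return rgb
```

If blocky 2x2 output from this sketch looks wrong to you, that's the intuition: everything better (bilinear, edge-aware demosaicing) is exactly the signal-processing background the answer is talking about.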
That doesn't mean you should throw in the towel and give up... but it DOES mean that trying to hack the camera on a commercial Android phone probably isn't the best place to start. There's a lot of background knowledge you're going to need in order to pull off a project like this, and you really need to acquire that knowledge from a hardware platform that YOU control & have proper documentation for. Make no mistake... on the hierarchy of "hard" Android software projects, this ranks pretty close to the top of the list.
My suggestion (simplified and condensed a bit): buy a Raspberry Pi, and learn how to light up an LED from a GPIO pin. Then learn how to selectively light up 8 LEDs through a 74HC595 shift register. Then buy an SPI-addressed flash chip on a breakout board, and learn how to write to it. At some point, buy a video image sensor with a "serial" (fyi, "serial" != "rs232") interface from somebody like Sparkfun.com and learn how to read it one frame at a time, dumping the raw data to flash. Learn how to use I2C to read and write the camera's control registers. At this point, you MIGHT be ready to tackle the camera in an Android phone for single photos.
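The shift-register exercise, for instance, boils down to clocking bits out one at a time, and it helps to model that in software before wiring anything. A pure-Python model of an MSB-first shift-out into a 74HC595 follows; pin handling and timing are omitted, and the output ordering is a convention chosen here, not a datasheet claim:

```python
def shift_out_msb_first(value, bits=8):
    """Yield the bit sequence a 74HC595 would receive, MSB first.

    On real hardware, each yielded bit is written to the DATA pin and
    the CLOCK pin is pulsed; after all bits, LATCH is pulsed to copy
    the register's contents to the parallel output pins.
    """
    for i in range(bits - 1, -1, -1):
        yield (value >> i) & 1

def register_outputs(value, bits=8):
    """Simulate the parallel outputs after shifting `value` in."""
    outputs = [0] * bits
    for bit in shift_out_msb_first(value, bits):
        # Every clock pulse, all stages shift along and the new bit enters.
        outputs = outputs[1:] + [bit]
    return outputs
```

Once the software model behaves as expected, translating each yielded bit into GPIO writes is a small, debuggable step.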
If you're determined to start with an Android phone, at least stick to Nexus devices for now, and don't buy the phone (if you don't already own it) until you have the schematics, datasheets, and source code in your possession. Don't buy the phone thinking you'll be able to trace the schematic yourself. You won't. At least, not unless you're a grad student with one hell of a graduate-level electronics lab (with X-ray capabilities) at your disposal. Most of these chips and modules are micro-BGA; you aren't going to trace them with a multimeter, and every Android camera I'm aware of has most of its low-level driver logic hidden in loadable kernel modules whose source isn't available.
That said, I'd dearly love to see somebody pull a project like this off. :-)
Android has published online training which contains all the information you need:
You can find it here - Media APIs
However, there are limitations: not all hardware supports every parameter.
And if I recall correctly, you can't control the shutter speed or ISO with the older Camera API. (The newer Camera2 API, added in Android 5.0, does expose manual exposure time and sensor sensitivity on devices whose hardware supports it.)
Many tablets and some smartphones use an array of microphones for things like noise cancellation. For example, the Motorola Droid X uses a three-microphone array and even allows you to set "audio scenes". An example is discussed here.
I want to be able to record from all the microphones available on the tablet/phone at the same time. I found that AudioSource lets us choose the default mic (I don't know which physical mic this is, but it might be the one facing the user) or the mic oriented with the video camera, but I could not find any way of accessing the other mics in the array. Any help that points me in the right direction would be great. Thanks in advance for your time.
It seems like you've verified that there isn't a standard Android API for accessing specific mics in an array. I couldn't find anything either.
As is usually the case with custom additions to the Android system, it's up to the manufacturer to release developer APIs. Motorola has done this before. I took a look at all the ones they have listed, and it seems they simply don't expose this one. Obviously, they have code somewhere that can do it (the "audio scenes" feature uses it).
So the quick answer: you're out of luck.
The more involved answer: you can go spelunking through the source code for the Droid X, since it's released as open source. If you can actually find the relevant code, understand that you'd be using an undocumented API which could change at any time. Plus, you'd have to do this for every device you want to support.