How to get feedback from camera? - android

For my next project I started analyzing apps that measure your pulse via the camera (you press a finger against the camera and get your pulse reading).
I concluded that these apps receive data from the camera with the help of the light. How do they achieve this? Can you direct me to any area I should investigate?
If anyone is in the mood to help by explaining how pulse-measuring apps work, please do - I cannot find ANY documentation on this topic.
Thanks in advance

To complement Robert's answer from a non-programming perspective (and since you asked for it), pulse-measuring apps are based on pulse oximetry.
The idea is to measure the absorbance of red light, which varies as oxygenated blood passes through your fingertip. Each time that happens there is a peak in absorbance; you only have to count the number of peaks registered and divide by the respective time frame to compute the cardiac frequency.
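As a toy illustration of that last step (not from the original answer, just basic arithmetic): once you have detected the absorbance peaks over a known time window, the heart rate is simply the peak count scaled to one minute.

```java
// Toy example: convert a peak count over a measurement window to beats per minute.
static double beatsPerMinute(int peakCount, double windowSeconds) {
    // e.g. 12 peaks observed over 10 s  ->  72 BPM
    return peakCount / windowSeconds * 60.0;
}
```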
IMHO, performing this on a mobile device is not very reliable, since it really calls for good lighting conditions and infra-red pulses, and there are several factors that make this task very difficult:
1) Some phones may not have the flash LED light right near the camera
2) Some phones may not have a flash light at all
3) You don't have access to infra-red data.
4) The phone has to be absolutely still, or the image will be constantly changing, making the brightness measurement unreliable.

AFAIK such apps use the preview mode of the Camera.
Using the method setPreviewCallback(..) (and of course startPreview()) you can register your own listener that continuously receives callbacks from the camera containing the currently visible picture:
onPreviewFrame(byte[] data, Camera camera)
The image data is contained in the data byte array. The format of the data can be set via setPreviewFormat(). Using this data you can, for example, process the image and reduce it to its brightness at a certain point in the image. Over time the brightness should show pulses.
I don't think the necessary image-processing algorithms are available by default in the Android runtime, so you will have to develop your own algorithms or look for 3rd-party libraries that can be used on Android.
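To make that concrete, here is a minimal sketch of the idea using the old Camera API. It assumes the default NV21 preview format (where the first width*height bytes are the luma plane) and omits the usual lifecycle and preview-surface handling; the averaging is deliberately naive.

```java
import android.hardware.Camera;

// Sketch only: average the luma (Y) plane of each preview frame.
// Over time this mean brightness should pulse with the blood flow.
void startBrightnessPreview() {
    final Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    params.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH); // illuminate the fingertip
    camera.setParameters(params);

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            Camera.Size size = cam.getParameters().getPreviewSize();
            int pixels = size.width * size.height;
            long sum = 0;
            for (int i = 0; i < pixels; i++) {
                sum += data[i] & 0xFF; // NV21: first width*height bytes are Y values
            }
            double meanLuma = (double) sum / pixels;
            // Feed meanLuma into a peak detector to count pulses over time.
        }
    });
    // A preview surface (e.g. a SurfaceHolder or dummy SurfaceTexture) is
    // usually required before startPreview() actually delivers frames.
    camera.startPreview();
}
```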

Android Camera2: How to implement a semi-automatic Shutter Speed Priority Mode

Goal
Capture images with Android smartphones attached to moving vehicles
frequency: 1 Hz
reference model: Google Pixel 3a
objects of interest: the road/way in front of the vehicle
picture usage: as input for machine learning (e.g. an RNN) to identify damage on the road/way surface
capture environment: outdoor, only on cloudy days
Current state
Capturing works (currently using JPEG instead of RAW because of the data size)
auto-exposure works
static focus distance works
Challenge
The surfaces of the ways/roads in the pictures are often blurry
The motion blur comes mostly from the shaking vehicle / rigidly mounted phone
To reduce the motion blur we want to use a "Shutter Speed Priority Mode"
i.e. minimize shutter speed => increase ISO (and accept the increased noise)
there is only one aperture (f/1.8) available
there is no "Shutter Speed Priority Mode" (short: Tv/S-Mode) available in the Camera2 API
the CameraX API does not (yet) offer what we need (static focus, Tv/S Mode)
Steps
Set the shutter speed to the shortest exposure time supported (easy)
Automatically adjust the ISO setting for auto-exposure (e.g. using this formula)
To calculate the ISO the only missing part is the light level (EV)
Question
How can I estimate the EV continuously during capturing to adjust the ISO automatically while using a fixed shutter speed?
Ideas so far:
If I could read out the "recommendations" from the Camera2 auto-exposure (AE) routine without actually enabling AE_MODE_ON, then I could easily calculate the EV. However, I have not found an API for this so far. I guess it's not possible without rooting the device.
If the ambient light sensor provided all the information needed to auto-expose (calculate EV), this would also be very easy. However, from my understanding it only measures the incident light, not the reflected light, so the measurement does not take the actual objects in the picture into account (how their surfaces reflect light).
If I could get the information from the pixels of the last captures, this would also be doable (if the calculation time fits into the time between two captures). However, from my understanding the pixel brightness is heavily dependent on the objects captured, i.e. if the brightness of the captured objects changes (many "black horses" or "white bears" at the side of the road/way) I would calculate bad EV values.
Capture auto-exposed images in between the actual captures and calculate the light levels from the auto-selected settings of those in-between captures. This would be a relatively "good" way from my understanding, but it is quite hard on resources - I am not sure the time available between two captures is enough for this.
Maybe I am just not seeing a simpler solution. Has anyone done something like this?
Yes, you need to implement your own auto-exposure algorithm.
All the 'real' AE has to go by is the image captured by the sensor as well, so in theory you can build something just as good at guessing the right light level.
In practice, it's unlikely you can match it, both because you have a longer feedback loop (the AE algorithm can cheat a bit on synchronization requirements and update sensor settings faster than an application can), and because the AE algorithm can use hardware statistics units (collect histograms and average values across the scene), which make it more efficient.
But a simple auto-exposure algorithm would be to average the whole scene (or a section of the scene, or every-tenth-pixel of the scene, etc) and if that average is below half max value, increase ISO, and if it's above, reduce. A basic feedback control loop, in other words. With all the issues about stability, convergence, etc, that apply. So a bit of control theory understanding can be quite helpful here. I'd recommend a low-resolution YUV output (640x480 maybe?) to an ImageReader from the camera to use as the source data, and then just look at the Y channel. Not a ton of data to churn through in that case.
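A rough sketch of such a loop, under the assumptions above (YUV_420_888 frames from an ImageReader, Y channel only); the target value, step size and ISO limits are placeholders that would need tuning:

```java
import android.media.Image;
import java.nio.ByteBuffer;

// Sketch of a minimal AE feedback loop: sample the Y plane, compare the mean
// to a mid-grey target, and nudge the sensitivity for the next capture request.
int currentIso = 400; // assumed starting sensitivity

int nextIso(Image image) {
    Image.Plane yPlane = image.getPlanes()[0]; // plane 0 of YUV_420_888 is luma
    ByteBuffer buf = yPlane.getBuffer();
    int rowStride = yPlane.getRowStride();
    int pixelStride = yPlane.getPixelStride();
    long sum = 0;
    int count = 0;
    for (int row = 0; row < image.getHeight(); row++) {
        for (int col = 0; col < image.getWidth(); col += 10) { // every tenth pixel
            sum += buf.get(row * rowStride + col * pixelStride) & 0xFF;
            count++;
        }
    }
    double mean = (double) sum / count;

    // Proportional-ish step towards a mid-grey target, with a small dead band.
    if (mean < 118) {
        currentIso = Math.min((int) (currentIso * 1.1), 3200);
    } else if (mean > 138) {
        currentIso = Math.max((int) (currentIso * 0.9), 50);
    }
    // Apply via CaptureRequest.SENSOR_SENSITIVITY on the next manual request.
    return currentIso;
}
```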
Or as hb0 mentioned, if you have a very limited set of outdoor conditions, you can try to hardcode values for each of them. But the range of outdoor brightness can be quite large, so this would require a decent bit of testing to make sure it'll work, plus manual selection of the right values each time.
When the pictures are only captured in specific light situations like "outdoor, cloudy":
Tabulated values can be used for the exposure value (EV) instead of using light measurements.
Example
EV100 (iso100) for Outdoor cloudy (OC) = 13
EV (dynamic iso) for OC = EV100 + log2(iso/100)
Using this formula together with those formulas we can calculate the ISO (see the sketch after this list) from:
aperture (fixed)
shutter speed (manually selected)
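A minimal sketch of that calculation, assuming the standard exposure relation log2(N²/t) = EV100 + log2(ISO/100), where the EV100 value is the tabulated one from above:

```java
// Sketch: derive the ISO needed for a fixed aperture and shutter speed from a
// tabulated scene EV at ISO 100. Values here are illustrative, not calibrated.
static int isoForScene(double ev100, double aperture, double shutterSeconds) {
    // log2(N^2 / t) = EV100 + log2(ISO / 100)  =>  ISO = 100 * (N^2 / t) / 2^EV100
    double iso = 100.0 * (aperture * aperture / shutterSeconds) / Math.pow(2, ev100);
    return (int) Math.round(iso);
}

// Example: "outdoor, cloudy" (EV100 ~ 13), f/1.8, 1/1000 s  ->  roughly ISO 40
int iso = isoForScene(13, 1.8, 1.0 / 1000);
```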
Additionally, we could add a UI option to choose a "light situation" such as:
outdoor, cloudy
outdoor, sunny
etc.
This is probably not the most accurate way but for now a first, easy way to continue prototyping.

Mobile App(iPhone / Android) app to read real time text from camera

I want to programmatically read numbers on a page using the mobile's camera in real time, instead of from an image, just like barcode scanning.
I know that we can read or scan barcodes, but is there any way to read numbers using the same strategy? I also know that we can read text or numbers from an image using OCR, but I don't want to take a photo/image and then process it - I only want to scan and get the result.
You mean to say that you don't want to take a picture and process it; instead you want to scan the text by just hovering the camera over it, am I right?
It could be accomplished using a technology called Optical Character Recognition (you mentioned OCR - I think this is what you meant). What it does is find patterns in images to detect text in printed documents.
As far as I know, existing tools process still images, so you will have to work around that to make them scan moving images.
Character recognition demands a significant amount of resources, so instead of processing moving pictures I would recommend you write a program that takes images less frequently from the hovering camera and processes them. Once the text, or numbers in your case, is detected, you could use a cheaper pattern-matching algorithm to track the motion of the numbers.
To date, the most powerful and popular software is Tesseract-OCR. You will find it on GitHub. You can use this to develop your mobile application.
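As a rough illustration of that workflow (not part of the original answer): using the third-party tess-two wrapper around Tesseract, a single grabbed frame could be run through OCR like this. The data path and the digit whitelist are assumptions you would adapt.

```java
import android.graphics.Bitmap;
import com.googlecode.tesseract.android.TessBaseAPI;

// Sketch with the tess-two library: OCR a single frame grabbed from the camera.
// Requires tessdata/eng.traineddata under the given data path (path is illustrative).
String recognizeDigits(Bitmap frame) {
    TessBaseAPI tess = new TessBaseAPI();
    tess.init("/sdcard/tesseract/", "eng");
    // Restrict recognition to digits since only numbers are of interest here.
    tess.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "0123456789");
    tess.setImage(frame);
    String text = tess.getUTF8Text();
    tess.end();
    return text;
}
```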

Quickly (100Hz or more) blinking of camera flashlight LED

I am trying to write a variable-brightness flashlight app using PWM (I might use it for communication later). For that I need fast switching of the camera LED (say 100-200 Hz), which is not possible through the Camera API's setParameters functionality (I guess the camera itself slows things down considerably).
Now - the LED is capable of switching rapidly, and there are apps doing something similar (HTC flashlight for example; unfortunately I couldn't find its source code), so it all comes down to controlling the LED without the camera.
Any thoughts or ideas?
I know this is 4 years later, but you'd need a lot more than 100-200 Hz for PWM to work properly without irritating the eye. You might get some control, but you won't be able to get 10% brightness without the pulses becoming noticeable, and even then the duration of those pulses is too long to fool the eye. Typically PWM is handled at the microsecond level, around 100 kHz. I would like this to be possible as well: if we could have, say, a 100 kHz carrier frequency in the flash, it would be possible to calculate the distance to a subject with dedicated pixels in the sensor, as well as reject all ambient light through demodulation, if all pixels could be scanned fast enough. Sadly it's not possible though.
Normally to do that there'll be a PWM peripheral in the processor that handles the rapid switching for you, but that would need driver support; it won't be accessible to user applications. Here's a question which uses the driver to do it: Set brightness of flash in Android
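For reference, the usual user-space way to switch the LED is to toggle the torch flash mode through the camera, as sketched below; every toggle goes through the camera stack, which is why this approach cannot get anywhere near 100-200 Hz.

```java
import android.hardware.Camera;

// The standard (slow) way to switch the LED from an app: toggle the torch mode.
// Each call round-trips through the camera stack, so the achievable switching
// rate stays far below what PWM dimming would need.
void setTorch(Camera camera, boolean on) {
    Camera.Parameters params = camera.getParameters();
    params.setFlashMode(on ? Camera.Parameters.FLASH_MODE_TORCH
                           : Camera.Parameters.FLASH_MODE_OFF);
    camera.setParameters(params);
}
```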

Would it be possible for a mobile app to detect a flashing light with its camera, i.e., visual morse code

Is the technology there for a smartphone camera to detect a flashing light and decode it as Morse code, at a distance of up to 100 m?
There's already at least one app in the iPhone App store that does this for some unknown distance. And the camera can detect luminance at a much greater distance, given enough contrast of the exposure between on and off light levels, a slow enough dot rate to not alias against the frame rate (remember about Nyquist sampling), and maybe a tripod to keep the light centered on some small set of pixels. So the answer is probably yes.
I think it's possible in ideal conditions: clear air and no other "light noise", as on a dark night in the mountains. The problem is that users would try to use it in the city, in discos, etc., where it would obviously fail.
If you can record a video of the light and easily visually decode it upon watching, then there's a fair chance you may be able to do so programmatically with enough work.
The first challenge would be finding the light in the background, especially if it's small and/or there's any movement of the camera or source. You might actually be able to leverage some kinds of video compression technology to help filter out the movement.
The second question is if the phone has enough horsepower and your algorithm enough efficiency to decode it in real time. For a slow enough signaling rate, the answer would be yes.
Finally there might be things you could do to make it easier. For example, if you could get the source to flash at exactly half the camera frame rate when it is on instead of being steady on, it might be easier to identify since it would be in every other frame. You can't synchronize that exactly (unless both devices make good use of GPS time), but might get close enough to be of help.
Yes, the technology is definitely there. I wrote an Android application for my "Advanced Internet Technology" class that does exactly what you describe.
The application still has problems with bright noise (when other light sources leave or enter the camera view while recording). The approach I'm using just relies on overall brightness changes to extract the Morse signal.
There are some more or less complicated algorithms in place to correct the auto exposure problem (the image darkens shortly after the light is "turned on") and to detect the thresholds for the Morse signal strength and speed.
Overall performance of the application is good. I tested it during the night in the mountains, and as long as the sending signal is strong enough, there is no problem. In the library (with different light sources around), it was less accurate. I had to be careful not to have additional light sources at the "edge" of the camera screen. The application required the length of a "short" Morse signal to be at least 300 ms.
The better approach would be to "search" the screen for the actual light source. For my project it turned out to be too much work, but with it you should get good detection in noisy environments.
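A simplified sketch of the brightness-threshold approach described above (thresholds and timings are illustrative; the app mentioned used roughly 300 ms as the minimum "short" signal):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: turn a per-frame brightness series into "on" run lengths and
// classify each run as a Morse dot or dash. Real code would also derive the
// threshold and unit length adaptively, as described above.
List<Character> decodeFlashes(double[] brightness, double frameIntervalMs,
                              double threshold) {
    List<Character> symbols = new ArrayList<>();
    int run = 0; // consecutive frames with the light "on"
    for (double b : brightness) {
        if (b > threshold) {
            run++;
        } else if (run > 0) {
            double durationMs = run * frameIntervalMs;
            if (durationMs >= 300 && durationMs < 900) {
                symbols.add('.'); // short flash -> dot
            } else if (durationMs >= 900) {
                symbols.add('-'); // long flash -> dash
            }
            run = 0;
        }
    }
    return symbols;
}
```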

Microphone input

I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that shows how the shots are distributed in time, and I use an HTC Tattoo for testing.
I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected; speech gives me values from getMaxAmplitude in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with a considerably higher level.
Does anyone know how these things work? Are there filters that are applied before the max amplitude is registered? If so, are they in hardware or software?
Thanks,
/George
It seems there's an AGC (Automatic Gain Control) filter in place. You should also be able to identify the shot by its frequency characteristics. I would expect it to show up across most of the audible spectrum, but get a spectrum analyzer (there are a few on the app market, like SpectralView) and try identifying the event by its frequency "signature" and amplitude. If you clap your hands, what do you get for max amplitude? You could also try covering the phone with something that muffles the sound, like a few layers of cloth.
It seems like the AGC is in the MediaRecorder. When I use AudioRecord I can detect shots using the amplitude, even though it sometimes reacts to sounds other than shots. This is not a problem, since the shooter usually doesn't make any other noise while shooting.
But I will do some FFT too to get it perfect :-)
Sounds like you figured out your AGC problem. One further suggestion: I'm not sure the FFT is the right tool for the job. You might get better detection and lower CPU use with a sliding power estimator.
e.g.
signal => square => moving average => peak detection
All of the above can be implemented very efficiently using fixed-point math, which fits well with mobile Android platforms.
You can find more info by searching for "Parseval's Theorem" and "CIC filter" (cascaded integrator comb)
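A minimal sketch of that chain on raw PCM samples (window length and threshold are placeholders that would need tuning against real shot recordings):

```java
// Sketch of the square -> moving average -> peak detection chain on 16-bit PCM
// samples, e.g. as read from AudioRecord. Uses only integer math, in the spirit
// of the fixed-point suggestion above.
boolean detectShot(short[] pcm, int windowSize, long threshold) {
    long windowSum = 0; // running sum of squared samples over the window
    for (int i = 0; i < pcm.length; i++) {
        windowSum += (long) pcm[i] * pcm[i];
        if (i >= windowSize) {
            windowSum -= (long) pcm[i - windowSize] * pcm[i - windowSize];
        }
        long avgPower = windowSum / Math.min(i + 1, windowSize);
        if (avgPower > threshold) {
            return true; // sharp power peak -> candidate shot
        }
    }
    return false;
}
```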
Sorry for the late response; I didn't see this question until I started searching for a different problem...
I have started an application to do what I think you're attempting. It's an audio-based lap timer (button to start/stop recording, and loud audio noises for lap marking). It's not finished, but it might provide you with a decent base to get started.
Right now, it allows you to monitor the signal volume coming from the mic, and set the ambient noise amount. It's also using the new BSD license, so feel free to check out the code here: http://code.google.com/p/audio-timer/. It's set up to use the 1.5 API to include as many devices as possible.
It's not finished, in that it has two main issues:
The audio capture doesn't currently work for emulated devices because of the unsupported frequency requested
The timer functionality doesn't work yet - I was focusing on getting the audio capture working first.
I'm looking into the frequency support, but Android doesn't seem to have a way to find out which frequencies are supported without trial and error per-device.
I also have some extra code on my local dev machine to create a layout for the ListView items to display "lap" information. I got sidetracked by the frequency problem, though. But since the display and audio capture are pretty much done, using the system time to fill in the display values for timing information should be relatively straightforward, and then it shouldn't be too difficult to add the ability to export the data table to a CSV file on the SD card.
Let me know if you want to join this project, or if you have any questions.
