I am trying to develop an app which transmits Morse code using the camera flash light on a phone. My transmitting part works fine: I turn the flash on for DOT or DASH and off for GAP, LETTER_GAP and WORD_GAP. Each of DOT, DASH, GAP, LETTER_GAP and WORD_GAP has a different duration for which the flash stays on or off.
I am having a difficult time figuring out how to decode this on the receiver side. I am using an OpenCV binary threshold to check whether or not a bright spot exists in the image. Based on the camera FPS I can count how many consecutive frames had the flash on or off, which determines dot/dash/gap. Here is an example.
Say the transmitter phone sends the string "abc xyz". On the receiver phone I get this string:
.-#-.*..#-.*-. -.*.-#-.*--#--*.*. where:
"." - DOT
"-" - DASH
"*" - GAP
"#" - LETTER GAP
" " - WORD GAP
This string exactly represents "abc xyz". The problem is that I cannot think of a way for the receiver phone to know where to start looking for a new message and when to stop, since everything is sent as light signals and there is no sync between transmitter and receiver. I mean there is no way for the receiver to identify a start or end signal, as I just process the raw camera frames provided by OpenCV. Is there any way I can impose these, or is there an alternative solution for the detection/decoding?
Please let me know if I am not clear. Thank You!
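For reference, once a symbol string in this format has been recovered, turning it into text is mechanical. A minimal sketch in plain Java (the class name and the Morse table below are my own additions, not part of the app):

    import java.util.HashMap;
    import java.util.Map;

    public class MorseSymbolDecoder {

        // Standard International Morse table for a-z.
        private static final Map<String, Character> MORSE = new HashMap<>();
        static {
            String[] letters = {
                ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
                ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
                "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."
            };
            for (int i = 0; i < letters.length; i++) {
                MORSE.put(letters[i], (char) ('a' + i));
            }
        }

        // '*' separates dots/dashes, '#' separates letters, ' ' separates words.
        public static String decode(String symbols) {
            StringBuilder text = new StringBuilder();
            for (String word : symbols.split(" ")) {
                if (word.isEmpty()) continue;
                for (String letter : word.split("#")) {
                    String code = letter.replace("*", "");   // drop intra-letter gaps
                    Character c = MORSE.get(code);
                    text.append(c != null ? c : '?');        // '?' for unknown codes
                }
                text.append(' ');
            }
            return text.toString().trim();
        }

        public static void main(String[] args) {
            // Prints "abc xyz" for the example string above.
            System.out.println(decode(".-#-.*..#-.*-. -.*.-#-.*--#--*.*."));
        }
    }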
Well, there can be multiple answers. First, you could ask for manual input on the receiver and analyze all frames for the first few seconds. You could also continuously monitor and set a threshold on the light pattern strength. Another option is a sync sequence in which the sender shines the light for exactly one second and then starts the transmission; that would be the handshake, and everything after it the message.
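As a sketch of the sync-sequence idea, assuming the receiver already produces one on/off decision per camera frame from the OpenCV threshold: count how long each on/off run lasts, treat an ON run of roughly one second as the handshake that arms the decoder, and classify the runs that follow as dots, dashes and gaps. All durations below are illustrative guesses, not values from the actual app:

    // Run-length classifier fed one boolean per camera frame (true = bright spot seen).
    public class MorseFrameClassifier {

        private final double frameRate;          // camera frames per second
        private final StringBuilder symbols = new StringBuilder();
        private boolean armed = false;           // set once the ~1 s handshake is seen
        private boolean lastOn = false;
        private int runLength = 0;

        public MorseFrameClassifier(double frameRate) {
            this.frameRate = frameRate;
        }

        public void onFrame(boolean lightOn) {
            if (lightOn == lastOn) {
                runLength++;
                return;
            }
            endRun(lastOn, runLength);
            lastOn = lightOn;
            runLength = 1;
        }

        private void endRun(boolean wasOn, int frames) {
            double seconds = frames / frameRate;
            if (wasOn) {
                if (seconds > 0.8) {                 // ~1 s handshake: new message starts
                    armed = true;
                    symbols.setLength(0);
                } else if (armed) {
                    symbols.append(seconds < 0.25 ? '.' : '-');   // dot vs. dash
                }
            } else if (armed) {
                if (seconds > 1.5) {                 // long darkness: message ended
                    armed = false;
                    // hand symbols.toString() to the decoder here
                } else if (seconds > 0.6) {
                    symbols.append(' ');             // word gap
                } else if (seconds > 0.25) {
                    symbols.append('#');             // letter gap
                } else {
                    symbols.append('*');             // intra-letter gap
                }
            }
        }
    }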
Great work and hopefully you make an app out of it.
Check out the approach Shivam Kalra used here: http://www.codeproject.com/Articles/46174/Computer-Vision-Decoding-a-Morse-Code-Flashing-LED
tl;dr: let the user set a coordinate in the picture frame and monitor the brightness of the pixel(s) underneath that coordinate.
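In code, that could look roughly like the following, assuming OpenCV's Java bindings and that each camera frame is already available as a grayscale Mat (class name, patch size and threshold are illustrative):

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Rect;

    // Monitors the mean brightness of a small patch around a user-chosen coordinate.
    public class SpotMonitor {

        private final Rect roi;               // region around the tapped point
        private final double onThreshold;     // brightness cut-off in the 0..255 range

        // Caller must make sure the patch stays inside the frame.
        public SpotMonitor(int x, int y, int radius, double onThreshold) {
            this.roi = new Rect(x - radius, y - radius, 2 * radius, 2 * radius);
            this.onThreshold = onThreshold;
        }

        // Call once per grayscale frame; returns true if the spot looks "on".
        public boolean isLightOn(Mat grayFrame) {
            Mat patch = grayFrame.submat(roi);
            double meanBrightness = Core.mean(patch).val[0];
            patch.release();
            return meanBrightness > onThreshold;
        }
    }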
I'm having a really hard time calculating the distance between two Android phones using sound.
- The main idea is to have the two phones synced to the same time. Mobile A sends a message to mobile B to let it know that it will play a sound soon; note that mobile A saves this time.
- Then mobile B sends "OK, you can go ahead" to mobile A while it starts recording for the next 1 second or so.
- Mobile A then receives the "OK" and starts playing a 1000 Hz sound.
- Mobile B detects that frequency and sends its current time to mobile A.
Now we have all the information needed to calculate the distance. The problem is that in theory this is all good, but when I implement it there is a lot of random latency added into the equation.
The main problem is that I can't pin down the ABSOLUTE time at which mobile B picked up the target frequency.
I tried recording not the whole 1000 ms but lots of "mini" chunks (12~24 ms each), but the time the phone spends in the recorder_.startRecording()/recorder_.read()/recorder_.stop() calls is too long, and I'm missing the frequency by many milliseconds (each millisecond corresponds to about 30 cm, so I can't afford much error...).
Can anyone tell me what I'm doing wrong, or point me to a better way of doing this?
The main issue is that the recording device can't pinpoint the actual time at which it recorded the wanted frequency.
Thanks in advance,
Ofer.
Please have a look at the new audio features introduced in API 19.
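Presumably that refers to android.media.AudioTimestamp (added in API 19), which ties an AudioTrack frame position to a System.nanoTime() value. A rough sketch of how it could be used to back-date when the 1000 Hz tone actually started coming out of the speaker; creating and starting the AudioTrack is assumed to happen elsewhere:

    import android.media.AudioTimestamp;
    import android.media.AudioTrack;

    public class ToneTimer {

        // Returns an estimate, in the System.nanoTime() time base, of when frame 0
        // of the tone was presented by the audio hardware, or -1 if no timestamp
        // is available yet.
        public static long estimateStartNanos(AudioTrack track) {
            AudioTimestamp ts = new AudioTimestamp();
            if (!track.getTimestamp(ts)) {       // requires API 19+
                return -1;
            }
            long playedNanos = ts.framePosition * 1_000_000_000L / track.getSampleRate();
            return ts.nanoTime - playedNanos;    // back-date to the first frame
        }
    }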
For my next project I started analyzing apps that measure pulse via camera (you press a finger against the camera and you get your pulse info).
I concluded that these apps receive data from the camera with the help of the light. How do they achieve this? Can you direct me to any area I should investigate?
Is anyone in the mood to help me by explaining how pulse-measuring apps work? I cannot find ANY documentation on the net about this topic.
Thanks in advance
To complement Robert's answer from a non-programming perspective (and since you asked for it), pulse-measuring apps are based on pulse oximetry.
The idea is to measure the absorbance of red light, which varies as oxygenated blood passes through your fingertip. Each heartbeat produces a peak in absorbance; you only have to count the number of peaks registered and divide by the corresponding time frame to compute the cardiac frequency (for example, 20 peaks in 15 seconds corresponds to 80 beats per minute).
IMHO, performing this on a mobile device is not very reliable, since proper pulse oximetry requires good lighting conditions and infra-red light, and there are several factors that make this task very difficult:
1) Some phones may not have the flash LED light right near the camera
2) Some phones may not have a flash light at all
3) You don't have access to infra-red data.
4) The phone has to be absolutely still, or the image will be constantly changing, making the brightness measurement unreliable.
AFAIK such apps are using the preview mode of the Camera.
Using the method setPreviewCallback(..) (and of course startPreview()) you can register your own listener that continuously receives calls from the camera containing the currently captured picture:
onPreviewFrame(byte[] data, Camera camera)
The image data is contained in the data byte array. The format of the data can be set via setPreviewFormat(). Using this data you can, for example, reduce the image to its brightness at a certain point. Over time the image brightness should show the pulses.
I don't think the necessary image-processing algorithms are available by default in the Android runtime, so you have to develop your own algorithms or look for third-party libraries that can be used on Android.
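A minimal sketch of the preview-callback approach: with the default NV21 preview format the first width*height bytes of data are the luminance (Y) plane, so averaging them gives one brightness value per frame, and the pulse shows up as a slow oscillation of that value. Smoothing and peak counting are left out; the class name is illustrative:

    import android.hardware.Camera;

    public class BrightnessCallback implements Camera.PreviewCallback {

        private final int width;
        private final int height;

        public BrightnessCallback(int width, int height) {
            this.width = width;
            this.height = height;
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // NV21: the first width*height bytes are the Y (luminance) plane.
            long sum = 0;
            int pixels = width * height;
            for (int i = 0; i < pixels; i++) {
                sum += data[i] & 0xFF;           // bytes are signed in Java
            }
            double meanBrightness = (double) sum / pixels;
            // Append meanBrightness to a time series; the heart rate is the number
            // of peaks in that series divided by the elapsed time.
        }
    }

    // Registration, e.g. after opening the camera:
    //   camera.setPreviewCallback(new BrightnessCallback(previewWidth, previewHeight));
    //   camera.startPreview();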
Is the technology there for the camera of a smartphone to detect a flashing light and decode it as Morse code, at a maximum distance of 100 m?
There's already at least one app in the iPhone App Store that does this over some unknown distance. And the camera can detect luminance at a much greater distance, given enough contrast between the on and off light levels, a dot rate slow enough not to alias against the frame rate (remember Nyquist sampling: keep the signaling rate below half the frame rate), and maybe a tripod to keep the light centered on some small set of pixels. So the answer is probably yes.
I think it's possible in ideal conditions: clear air and no other "light noise", like on a dark night in the mountains. The problem is that users would try to use it in the city, in discos, etc., where it would obviously fail.
If you can record a video of the light and easily visually decode it upon watching, then there's a fair chance you may be able to do so programmatically with enough work.
The first challenge would be finding the light against the background, especially if it's small and/or there's any movement of the camera or the source. You might actually be able to leverage some kinds of video compression technology to help filter out the movement.
The second question is if the phone has enough horsepower and your algorithm enough efficiency to decode it in real time. For a slow enough signaling rate, the answer would be yes.
Finally there might be things you could do to make it easier. For example, if you could get the source to flash at exactly half the camera frame rate when it is on instead of being steady on, it might be easier to identify since it would be in every other frame. You can't synchronize that exactly (unless both devices make good use of GPS time), but might get close enough to be of help.
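A sketch of how that property could be exploited, assuming OpenCV's Java bindings and grayscale frames: a light that toggles every frame stands out strongly in the difference between consecutive frames, while a static background largely cancels out:

    import org.opencv.core.Core;
    import org.opencv.core.Core.MinMaxLocResult;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;

    // Finds the pixel that changes most between consecutive grayscale frames.
    public class BlinkLocator {

        private Mat previous;

        // Returns the most-changing point, or null until two frames have been seen.
        public Point locate(Mat grayFrame) {
            if (previous == null) {
                previous = grayFrame.clone();
                return null;
            }
            Mat diff = new Mat();
            Core.absdiff(grayFrame, previous, diff);
            MinMaxLocResult mm = Core.minMaxLoc(diff);
            previous.release();
            previous = grayFrame.clone();
            diff.release();
            return mm.maxLoc;                    // candidate location of the blinking light
        }
    }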
Yes, the technology is definitely there. I wrote an Android application for my "Advanced Internet Technology" class which does exactly what you describe.
The application still has problems with bright noise (when other light sources leave or enter the camera view while recording). The approach I'm using just uses the overall brightness changes to extract the Morse signal.
There are some more or less complicated algorithms in place to correct the auto exposure problem (the image darkens shortly after the light is "turned on") and to detect the thresholds for the Morse signal strength and speed.
Overall performance of the application is good. I tested it at night in the mountains, and as long as the transmitted signal is strong enough there is no problem. In the library (with various light sources around) it was less accurate; I had to be careful not to have additional light sources at the "edge" of the camera frame. The application required a "short" Morse signal to be at least 300 ms long.
The better approach would be to "search" the frame for the actual light source. For my project it turned out to be too much work, but it should give you good detection in noisy environments.
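For anyone attempting that search, one simple way (a sketch assuming OpenCV's Java bindings; the blur kernel size is arbitrary) is to blur the grayscale frame and take its brightest point, then only track the brightness around that point:

    import org.opencv.core.Core;
    import org.opencv.core.Core.MinMaxLocResult;
    import org.opencv.core.Mat;
    import org.opencv.core.Point;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class LightSourceFinder {

        // Blurring first makes the result a bright region rather than a single noisy pixel.
        public static Point findBrightestSpot(Mat grayFrame) {
            Mat blurred = new Mat();
            Imgproc.GaussianBlur(grayFrame, blurred, new Size(11, 11), 0);
            MinMaxLocResult mm = Core.minMaxLoc(blurred);
            blurred.release();
            return mm.maxLoc;
        }
    }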
I am writing an application that will behave similarly to the existing voice recognition, but will send the sound data to a proprietary web service to perform the speech recognition part. I am using the standard MediaRecorder (which is AMR-NB encoded), which seems well suited to speech recognition. The only data it provides is the amplitude, via the getMaxAmplitude() method.
I am trying to detect when the person starts to talk so that, when the person stops talking for about 2 seconds, I can proceed to send the sound data to the web service. Right now I am using an amplitude threshold: if it goes over a value (e.g. 1500), I assume the person is speaking. My concern is that amplitude levels may vary by device (e.g. Nexus One vs. Droid), so I am looking for a more standard approach that can be derived from the amplitude values.
P.S.
I looked at graphing-amplitude but it doesn't provide a way to do it with just the amplitude.
Well, this might not be of much help, but how about starting by measuring the background noise captured by the device's microphone, and applying the threshold dynamically based on that? That way you would make it adaptable to different devices' microphones and also to the environment the user is in at a given time.
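A minimal sketch of that calibration idea, assuming the MediaRecorder setup from the question and that getMaxAmplitude() is polled on a timer; the sampling interval and multiplier are illustrative:

    import android.media.MediaRecorder;

    public class AdaptiveSpeechDetector {

        private static final double THRESHOLD_FACTOR = 2.5;   // illustrative multiplier
        private int noiseFloor = 0;

        // Call for about a second right after recording starts, while the user is
        // silent, to estimate the ambient noise level of this device/environment.
        public void calibrate(MediaRecorder recorder) throws InterruptedException {
            int samples = 10;
            long sum = 0;
            for (int i = 0; i < samples; i++) {
                sum += recorder.getMaxAmplitude();
                Thread.sleep(100);                             // sample every 100 ms
            }
            noiseFloor = (int) (sum / samples);
        }

        // True if the current amplitude is clearly above the measured noise floor.
        public boolean isSpeaking(MediaRecorder recorder) {
            return recorder.getMaxAmplitude() > noiseFloor * THRESHOLD_FACTOR;
        }
    }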
1500 is too low of a number. Measuring the change in amplitude will work better.
However, it will still result in missed detections.
I fear the only way to solve this problem is to figure out how to recognize a simple word or tone rather than simply detect noise.
There are now multiple VAD (voice activity detection) libraries designed for Android. One of them is:
https://github.com/gkonovalov/android-vad
Most smartphones come with a proximity sensor, and Android has an API for using these sensors. This would be adequate for the job you described: when the user moves the phone near their ear, you can code the app to start recording. It should be easy enough.
Sensor class for Android
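A sketch of that idea using the standard SensorManager API; startRecording()/stopRecording() are placeholders for your own recording code:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class ProximityRecorderTrigger implements SensorEventListener {

        private final SensorManager sensorManager;
        private final Sensor proximity;

        public ProximityRecorderTrigger(SensorManager sensorManager) {
            this.sensorManager = sensorManager;
            this.proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        }

        public void start() {
            sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
        }

        public void stop() {
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Most proximity sensors only report "near" (small value) or "far" (max range).
            boolean near = event.values[0] < proximity.getMaximumRange();
            if (near) {
                // startRecording();   // placeholder: phone is at the user's ear
            } else {
                // stopRecording();    // placeholder
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }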
I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that shows how the shots are distributed in time, and I use an HTC Tattoo for testing.
I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected: speech gives me getMaxAmplitude values in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with a considerably higher level.
Does anyone know how these things work? Are there filters applied before the max amplitude is registered? If so, are they in hardware or software?
Thanks,
/George
It seems there's an AGC (Automatic Gain Control) filter in place. You should also be able to identify the shot by its frequency characteristics. I would expect it to show up across most of the audible spectrum, but get a spectrum analyzer (there are a few on the app market, like SpectralView) and try identifying the event by its frequency "signature" and amplitude. What do you get for max amplitude if you clap your hands? You could also try covering the phone with something that muffles the sound, like a few layers of cloth.
It seems the AGC is in the MediaRecorder. When I use AudioRecord I can detect shots from the amplitude, even though it sometimes reacts to sounds other than shots. This is not a problem, since the shooter usually doesn't make any other noise while shooting.
But I will do some FFT too to get it perfect :-)
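For reference, an AudioRecord loop of the kind described might look like this minimal sketch (the sample rate matches the 8 kHz mentioned above; the threshold is illustrative and error handling is omitted):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class ShotDetector {

        private static final int SAMPLE_RATE = 8000;
        private static final int THRESHOLD = 20000;          // illustrative amplitude cut-off

        public void listen() {
            int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            short[] buffer = new short[bufferSize / 2];
            recorder.startRecording();
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    int read = recorder.read(buffer, 0, buffer.length);
                    int peak = 0;
                    for (int i = 0; i < read; i++) {
                        peak = Math.max(peak, Math.abs(buffer[i]));
                    }
                    if (peak > THRESHOLD) {
                        // shot candidate: record a timestamp here
                    }
                }
            } finally {
                recorder.stop();
                recorder.release();
            }
        }
    }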
Sounds like you figured out your AGC problem. One further suggestion: I'm not sure the FFT is the right tool for the job. You might get better detection and lower CPU use with a sliding power estimator.
e.g.
signal => square => moving average => peak detection
All of the above can be implemented very efficiently using fixed-point math, which fits mobile Android platforms well.
You can find more information by searching for "Parseval's theorem" and "CIC filter" (cascaded integrator-comb).
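A rough sketch of that pipeline (square, moving average, threshold), written here with integer arithmetic only in the spirit of the fixed-point suggestion; the window length and how the threshold is chosen are up to you:

    // signal => square => moving average => peak detection, using integer math only.
    public class SlidingPowerEstimator {

        private final long[] window;     // circular buffer of squared samples
        private int index = 0;
        private long sum = 0;            // running sum over the window

        public SlidingPowerEstimator(int windowLength) {
            this.window = new long[windowLength];
        }

        // Feed one 16-bit PCM sample; returns the current mean power over the window.
        public long process(short sample) {
            long squared = (long) sample * sample;
            sum += squared - window[index];      // replace the oldest squared sample
            window[index] = squared;
            index = (index + 1) % window.length;
            return sum / window.length;
        }
    }

    // Usage: feed every sample from AudioRecord; a shot shows up as a short burst
    // where process(...) jumps well above the recent background power.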
Sorry for the late response; I didn't see this question until I started searching for a different problem...
I have started an application to do what I think you're attempting. It's an audio-based lap timer (a button to start/stop recording, and loud audio noises for setting laps). It's not finished, but it might provide you with a decent base to get started.
Right now, it allows you to monitor the signal volume coming from the mic and set the ambient noise level. It's under the new BSD license, so feel free to check out the code here: http://code.google.com/p/audio-timer/. It's set up to use the 1.5 API to include as many devices as possible.
It's not finished, in that it has two main issues:
- The audio capture doesn't currently work on emulated devices because of the unsupported frequency requested.
- The timer functionality doesn't work yet - I was focusing on getting the audio capture working first.
I'm looking into the frequency support, but Android doesn't seem to have a way to find out which frequencies are supported without trial and error per-device.
I also have some extra code on my local dev machine that creates a layout for the ListView items to display "lap" information. I got sidetracked by the frequency problem, though. But since the display and audio capture are pretty much done, using the system time to fill in the display values for timing information should be relatively straightforward, and then it shouldn't be too difficult to add the ability to export the data table to a CSV file on the SD card.
Let me know if you want to join this project, or if you have any questions.