How to turn the display to HDR mode on Android

We are working on live SDR-to-HDR conversion on Android phones. Can anyone tell me how to trigger the HDR display mode on an Android phone when decoding an SDR video signal?

HDR means the PQ transfer function; neither 10-bit depth nor BT.2020 on its own has anything to do with HDR.
https://developer.android.com/training/wide-color-gamut
This talks only about WCG (wide color gamut).
To trigger HDR, SMPTE 2086 metadata should be used: https://developer.android.com/ndk/reference/struct/a-hdr-metadata-smpte2086
In Java, the following can be used to check whether the display supports HDR:
https://developer.android.com/reference/android/view/Display.HdrCapabilities?hl=en
See this question for further links: "What is the difference between Display.HdrCapabilities and configuration.isScreenHdr".
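As a minimal Java-side sketch: on a device you would call display.getHdrCapabilities().getSupportedHdrTypes() and inspect the returned int[]. The constant values below mirror the documented android.view.Display.HdrCapabilities constants; the helper class name is my own.

```java
// Sketch: interpret the int[] returned by
// Display.getHdrCapabilities().getSupportedHdrTypes().
// Constant values mirror android.view.Display.HdrCapabilities.
public class HdrCheck {
    static final int HDR_TYPE_DOLBY_VISION = 1;
    static final int HDR_TYPE_HDR10 = 2;
    static final int HDR_TYPE_HLG = 3;
    static final int HDR_TYPE_HDR10_PLUS = 4;

    static String name(int type) {
        switch (type) {
            case HDR_TYPE_DOLBY_VISION: return "Dolby Vision";
            case HDR_TYPE_HDR10:        return "HDR10";
            case HDR_TYPE_HLG:          return "HLG";
            case HDR_TYPE_HDR10_PLUS:   return "HDR10+";
            default:                    return "unknown";
        }
    }

    static boolean supportsHdr10(int[] supportedTypes) {
        for (int t : supportedTypes) {
            if (t == HDR_TYPE_HDR10) return true;
        }
        return false;
    }

    // On a device:
    // Display display = activity.getDisplay();
    // int[] types = display.getHdrCapabilities().getSupportedHdrTypes();
    // boolean hdr10 = supportsHdr10(types);
}
```

Note that this only tells you what the display *can* do; actually switching the panel into HDR mode is driven by the content pipeline (e.g. the HDR metadata attached to the surface), not by a direct toggle.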

Related

How to calibrate an external microphone device?

I have been working on a research project which involves making audio recordings to perform some digital signal processing analysis.
To aid me in my recording, my research supervisor has provided me with an i436 microphone. It looks like this. However, before making the recording he has asked me to calibrate the device.
I have a rather blurry idea of what calibration means. Since different microphones may have different sensitivities and recording conditions, they will generate different results for the same recording. Calibrating the microphone should bring its readings within an acceptable range of about 1 dB of a reference.
I searched through the internet and after digesting the seemingly indigestible concepts of using a hardware device to calibrate a microphone, I have come back here to ask for experts views on this.
If someone could explain in detail what calibrating a microphone means and how it can be achieved using a software on either an android device or a windows laptop, I would be grateful.
I don't want to purchase a new hardware device to calibrate the microphone, but would rather appreciate software that does the same.
Currently I have the following equipment: An android device (Moto g3), Windows 10 laptop, iPhone 5S (not mine but I can borrow it).
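In software, calibration usually amounts to recording a reference tone of known level (for example a 94 dB SPL calibrator at 1 kHz, or a level measured with a borrowed SPL meter), measuring its level in the digital domain, and storing the offset between the two. A sketch of that arithmetic, assuming 16-bit PCM and some known reference level (the class and method names are my own):

```java
// Sketch of software calibration: record a reference tone of known
// level, measure its RMS level in the digital domain (dBFS), and
// derive an offset that maps later dBFS readings to dB SPL.
public class MicCalibration {
    // Level in dB relative to full scale for a block of 16-bit PCM samples
    static double dbfs(short[] pcm) {
        double sumSq = 0;
        for (short s : pcm) sumSq += (double) s * s;
        double rms = Math.sqrt(sumSq / pcm.length);
        return 20.0 * Math.log10(rms / 32768.0);
    }

    // offset = known reference level minus the measured dBFS of that tone
    static double calibrationOffset(short[] referenceTone, double knownDbSpl) {
        return knownDbSpl - dbfs(referenceTone);
    }

    // Apply the stored offset to any later measurement
    static double dbSpl(short[] pcm, double offset) {
        return dbfs(pcm) + offset;
    }
}
```

The 1 dB target from the question then means: after applying the offset, your readings of a known source should land within about 1 dB of its true level. The weak point on phones is that AGC and unknown analog gain can invalidate the offset, so any automatic gain processing must be disabled or kept constant.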

How to view OpenGL renders generated from a PC's C++ code on an Android device connected via WiFi?

I'm working on an Augmented Reality (AR) demo in which high-quality OpenGL renders (in C++) will be generated on a PC and then streamed to an Android display device (running Android 2.2 or later). What is the easiest way to achieve this in real time (30 FPS on the Android device)?
I've looked into existing Android applications and have not found anything suitable so far. The best available were remote desktop applications (such as TeamViewer), but the frame rates were far too low and unreliable.
Possible solution A:
1) Encode the OpenGL window as H.264 video (natively supported by Android)
2) Stream the H.264 video via RTSP using a server
3) View the content in an Android browser (Android and PC connected via WiFi)
Possible solution B:
1) Encode the OpenGL window as an IP camera feed in C++ (is this possible?)
2) Use an IP camera viewer on the Android device (again connected via WiFi)
I'm not entirely sure if either or both of these approaches are viable and would like some reassurance before moving forward.
What is the resolution of the image (is it equal to the current screen resolution, larger, or smaller)? It is possible and efficient to transport an H.264 stream, but it also depends on the machine doing the encoding. Hardware encoders or GPU-accelerated encoders are your best bet.
Remember - if you choose to go with encoding, you will have latency due to buffering (on the encode and the decode side). It will be a constant time offset so if that's not a problem you should be ok.
The total system delay is composed of capture, encoding, transmission, decoding, and display delays.
Note that none of these delays can be fully measured directly in frame time; some depend on the frame data and/or the encoding/processing performed. But as a rough estimate, with fast GPU encoding and hardware decoding, I'd say a lag of around 5-20 frames. You'll have to measure the final latency per scenario. I did this by sending frames containing text (ticks) and, once the frames were steady, comparing them side by side. In fact, I even allowed the user to enter a "test mode" at any time to compensate for network traffic peaks, or to change the "quality" settings to tweak this delay.
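The tick-frame measurement above reduces to simple arithmetic: the sender stamps each frame with a counter, the receiver reads back the counter of the frame currently displayed, and the difference times the frame duration is the glass-to-glass latency. A tiny sketch (class and method names are my own):

```java
// Sketch of the tick-frame latency measurement: the sender stamps each
// frame with a monotonically increasing counter; the receiver compares
// the counter it is currently displaying with the sender's latest one.
public class LatencyProbe {
    // Latency expressed in whole frames
    static long latencyFrames(long senderTick, long displayedTick) {
        return senderTick - displayedTick;
    }

    // Convert the frame lag to milliseconds for a given frame rate
    static double latencyMillis(long senderTick, long displayedTick, double fps) {
        return latencyFrames(senderTick, displayedTick) * (1000.0 / fps);
    }
}
```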

How do I simultaneously record audio from every mic on a microphone array into a separate buffer in Android?

Many tablets and some smartphones use an array of microphones for things like noise cancellation. For example, the Motorola Droid X uses an array of three microphones and even allows you to set "audio scenes". An example is discussed here.
I want to be able to record from all the microphones that are available on the tablet/phone at the same time. I found that using AudioSource we can choose the default mic (I do not know which mic this is specifically, but it might be the one facing the user) or the mic that is in the same orientation as the video camera, but I could not find any way of accessing the other mics in the array. Any help that points me in the right direction would be great. Thanks in advance for your time.
It seems like you've verified that there isn't a standard Android API for accessing specific mics in an array. I couldn't find anything either.
As is the case with custom additions to the Android system, it's up to the manufacturer to release developer APIs. Motorola has done this before. I took a look at all of the ones they have listed, and it seems they simply don't expose it. Obviously, they have code somewhere which can do it (the "audio scenes" feature uses it).
So the quick answer: you're out of luck.
The more involved answer: you can go spelunking around the source code for the Droid X because it's released as open source. If you can actually find it, understand that you're using an undocumented API which could be changed at any time. Plus, you'll have to do this for every device you want to support.
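One partial workaround worth noting: some devices deliver two of the mics as a stereo capture stream (AudioRecord with CHANNEL_IN_STEREO). The samples arrive interleaved, so getting "a separate buffer per mic" is then just a deinterleave in plain Java. Whether the two channels actually map to distinct physical microphones is entirely device-dependent, so treat this as a sketch:

```java
// Sketch: split an interleaved PCM buffer (L,R,L,R,... for stereo)
// into one buffer per channel. Whether each channel corresponds to a
// distinct physical mic is device-dependent and undocumented.
public class Deinterleave {
    static short[][] split(short[] interleaved, int channels) {
        int frames = interleaved.length / channels;
        short[][] out = new short[channels][frames];
        for (int f = 0; f < frames; f++) {
            for (int c = 0; c < channels; c++) {
                out[c][f] = interleaved[f * channels + c];
            }
        }
        return out;
    }
}
```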

Hardware encoding video on Android phone

I'm looking for some information about encoding video on an Android phone using hardware acceleration. I know some phones (does anyone have a list?) support encoding for the camera, and I was hoping I could access the chip to encode a live feed supplied through, say, WiFi or USB.
Also, I'm interested in the latency any such chip would provide.
EDIT: Apparently Android uses PacketVideo; however, there is not much documentation to be found on encoding.
EDIT: Android documentation shows a video-encoder: http://developer.android.com/reference/android/media/MediaRecorder.VideoEncoder.html. However it does not say anything about hardware acceleration.
MediaCodec should fit your needs. The documentation says nothing about hardware acceleration, but according to logs and benchmarks, it relies on the OpenMAX library.
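On API 29+ you can ask MediaCodecInfo.isHardwareAccelerated() directly. On older devices a common heuristic is to look at the codec name returned by MediaCodecList: names starting with "OMX.google." (and, on newer builds, "c2.android.") are the software implementations, while vendor prefixes such as "OMX.qcom." are typically the hardware blocks. A sketch of that heuristic (the class name is my own, and this is a convention, not a guarantee):

```java
// Heuristic sketch: classify a MediaCodecList codec name as a
// software implementation by its well-known prefixes. Vendor-prefixed
// names (OMX.qcom., OMX.Exynos., ...) are typically hardware codecs.
public class CodecHeuristic {
    static boolean looksLikeSoftware(String codecName) {
        return codecName.startsWith("OMX.google.")
                || codecName.startsWith("c2.android.");
    }
}
```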

Microphone input

I'm trying to build a gadget that detects pistol shots using Android. It's a part of a training aid for pistol shooters that tells how the shots are distributed in time and I use a HTC Tattoo for testing.
I use MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected: speech gives me getMaxAmplitude values from 0 to about 25000, while pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with considerably higher levels.
Does anyone know how these things work? Are there filters applied before the max amplitude is registered? If so, are they in hardware or software?
Thanks,
/George
It seems there's an AGC (Automatic Gain Control) filter in place. You should also be able to identify the shot by its frequency characteristics. I would expect it to show up across most of the audible spectrum, but get a spectrum analyzer (there are a few on the app market, like SpectralView) and try identifying the event by its frequency "signature" and amplitude. If you clap your hands, what do you get for max amplitude? You could also try covering the phone with something to muffle the sound, like a few layers of cloth.
It seems the AGC is in MediaRecorder. When I use AudioRecord I can detect shots using the amplitude, even though it sometimes reacts to sounds other than shots. This is not a problem, since the shooter usually doesn't make any other noise while shooting.
But I will do some FFT too to get it perfect :-)
Sounds like you figured out your AGC problem. One further suggestion: I'm not sure the FFT is the right tool for the job. You might get better detection and lower CPU use with a sliding power estimator.
e.g.
signal => square => moving average => peak detection
All of the above can be implemented very efficiently using fixed-point math, which fits well with mobile Android platforms.
You can find more info by searching for "Parseval's theorem" and "CIC filter" (cascaded integrator-comb).
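The square → moving average → peak detection chain above can be sketched in a few lines of fixed-point Java. The running sum of squares is the one-stage integrator/comb idea from the CIC filter; the class name and threshold scheme are my own choices:

```java
// Sketch of the suggested detector: square each sample, smooth with a
// moving-average window (running sum of squares, i.e. a one-stage
// integrator/comb), and flag a shot when the smoothed power crosses a
// threshold. All arithmetic is fixed point (long), no FFT needed.
public class ShotDetector {
    static boolean[] detect(short[] pcm, int window, long threshold) {
        boolean[] hits = new boolean[pcm.length];
        long sum = 0; // running sum of squared samples over the window
        for (int i = 0; i < pcm.length; i++) {
            sum += (long) pcm[i] * pcm[i];                      // square
            if (i >= window) {
                sum -= (long) pcm[i - window] * pcm[i - window]; // comb
            }
            long avgPower = sum / Math.min(i + 1, window);       // moving average
            hits[i] = avgPower > threshold;                      // threshold/peak
        }
        return hits;
    }
}
```

The window length trades off responsiveness against robustness; for a shot lasting a few milliseconds at 8 kHz, a window of a few dozen samples is a reasonable starting point.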
Sorry for the late response; I didn't see this question until I started searching for a different problem...
I have started an application to do what I think you're attempting. It's an audio-based lap timer (a button to start/stop recording, and loud audio noises for lap setting). It's not finished, but it might provide you with a decent base to get started.
Right now, it allows you to monitor the signal volume coming from the mic, and set the ambient noise amount. It's also using the new BSD license, so feel free to check out the code here: http://code.google.com/p/audio-timer/. It's set up to use the 1.5 API to include as many devices as possible.
It's not finished, in that it has two main issues:
1) The audio capture doesn't currently work on emulated devices because of the unsupported frequency requested.
2) The timer functionality doesn't work yet - I was focusing on getting the audio capture working first.
I'm looking into the frequency support, but Android doesn't seem to have a way to find out which sample rates are supported other than trial and error on each device.
I also have on my local dev machine some extra code to create a layout for the listview items to display "lap" information. Got sidetracked by the frequency problem though. But since the display and audio capture are pretty much done, using the system time to fill in the display values for timing information should be relatively straightforward, and then it shouldn't be too difficult to add the ability to export the data table to a CSV on the SD card.
Let me know if you want to join this project, or if you have any questions.
