Android Kotlin Real-Time FFT and plot

I am trying to apply a real-time FFT to data from a sensor connected over Bluetooth LE.
The sensor's SDK delivers the data in Android through a Handler. I am using a ViewModel to distribute the sensor data to various parts of the app, where it is plotted with GraphView and the FFT is performed.
I am using JTransforms to perform the FFT (previously I used JDSP to perform an STFT).
Below is the code used to perform the FFT on the unfiltered raw sensor data with JTransforms:
var t = 0
val fs = 512                     // sampling rate in Hz
val sampleSize = 2 * fs          // FFT length (two seconds of samples)
val windowSize = sampleSize / 2  // samples kept between consecutive windows

private fun getFFT(sample: DoubleArray): Array<DataPoint> {
    val fft = DoubleFFT_1D(sampleSize.toLong())
    // In-place transform; sample now holds the packed spectrum
    // (interleaved real/imaginary parts).
    fft.realForward(sample)
    return analysed(sample)
}

private fun analysed(sample: DoubleArray): Array<DataPoint> {
    // Plots the raw packed FFT output against the array index
    // (not magnitude against frequency).
    return Array(sample.size) { i -> DataPoint(i.toDouble(), sample[i]) }
}
sensorViewModel.getRaw().observe(this) {
    if (t < sampleSize) {
        // Still filling the current window one sample at a time.
        sample[t] = it.toDouble()
        t++
    } else {
        // Slide the window: keep the newest windowSize samples and zero the
        // remainder, so the buffer stays at sampleSize.
        sample = sample.takeLast(windowSize).toDoubleArray()
            .plus(DoubleArray(windowSize) { 0.0 })
        t = windowSize
        // Plot FFT off the main thread.
        asyncTask.execute(onPreExecute = {
        }, doInBackground = {
            getFFT(sample)
        }, onPostExecute = {
            fftseries.resetData(it)
        })
    }
}
Although my code runs without crashing, I can see several problems with the app.
Rebuilding "sample" for every sliding window and then performing the FFT on the rebuilt array feels really inefficient. Can anyone suggest how to write this better, with finer control over the window size?
How can I make this FFT plot fast?
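One way to get that control is to keep the incoming samples in a fixed-size circular buffer and copy a window out only when a hop's worth of new samples has arrived, so no arrays are rebuilt per sample. Below is a minimal sketch assuming JTransforms' DoubleFFT_1D as above; the class name and both constructor parameters are mine, not from the sensor SDK.

import org.jtransforms.fft.DoubleFFT_1D

// Sketch: circular buffer plus hop counter instead of rebuilding the array per sample.
class SlidingFft(private val fftSize: Int, private val hopSize: Int) {
    private val ring = DoubleArray(fftSize)   // holds the latest fftSize samples
    private var writePos = 0                  // next write index in the ring
    private var newSamples = 0                // samples received since the last FFT
    private val fft = DoubleFFT_1D(fftSize.toLong())

    // Push one sample; returns the packed spectrum every hopSize samples, else null.
    fun push(sample: Double): DoubleArray? {
        ring[writePos] = sample
        writePos = (writePos + 1) % fftSize
        if (++newSamples < hopSize) return null
        newSamples = 0
        // Unroll the ring into time order, oldest sample first, then transform in place.
        val window = DoubleArray(fftSize) { i -> ring[(writePos + i) % fftSize] }
        fft.realForward(window)
        return window
    }
}

With this, the observer body collapses to something like slidingFft.push(it.toDouble())?.let { spectrum -> ... }, and the window size and overlap are controlled by the two constructor parameters rather than by array-copying logic.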

Related

Android Auto: How to return a large number of children on loadChildren() of MediaBrowserService?

I am currently trying to implement a MediaBrowserService to build a media app for Android Auto.
I followed the official Android Auto documentation (https://developer.android.com/training/cars/media#onLoadChildren) to implement the onLoadChildren function.
The following code snippet is what I tried in order to show the content on the Android Auto screen:
override fun onLoadChildren(parentId: String, result: Result<MutableList<MediaBrowserCompat.MediaItem>>) {
    ...
    if (parentId == NODE_LIBRARY_ALBUMS) {
        val items = mutableListOf<MediaBrowserCompat.MediaItem>()
        val albumList = LibraryManager.getAlbumList()
        for (it in albumList) {
            val descriptionBuilder = MediaDescriptionCompat.Builder()
                .setTitle(it.albumName)
            items.add(MediaBrowserCompat.MediaItem(descriptionBuilder.build(), MediaBrowserCompat.MediaItem.FLAG_BROWSABLE))
        }
        result.sendResult(items)
    }
    ...
}
This works pretty well when the number of items is small enough.
However, when the number of items is large (e.g., about 5,000), the following error appears:
E/JavaBinder: !!! FAILED BINDER TRANSACTION !!! (parcel size = 1339384)
I found that several other media apps (e.g., Samsung Music) that support Android Auto can show a large number of items.
Is there any way to return a large number of items from the onLoadChildren function, or is there another way to solve this issue?
Thanks!
You probably have to split the large data set into smaller pieces. For example, say you have a list of 5,000 items. Inside your MediaBrowserService, in onLoadChildren, you can do something like this:
override fun onLoadChildren(parentId: String, result: Result<MutableList<MediaBrowserCompat.MediaItem>>) {
    if (MEDIA_ID_ROOT == parentId || itemsList.size > 100) {
        fillMediaBrowsableResult(parentId, result)
    } else {
        fillMediaPlayableResult(parentId, result)
    }
}

// Split the large content list into browsable parts.
private fun fillMediaBrowsableResult(parentId: String, result: Result<MutableList<MediaBrowserCompat.MediaItem>>) {
    // Work out how many parts of up to 100 items are needed.
    var partsCount = itemsList.size / 100
    if (itemsList.size % 100 > 0) {
        partsCount++
    }
    val mediaItems = mutableListOf<MediaBrowserCompat.MediaItem>()
    // Create the parts in a loop.
    for (i in 0 until partsCount) {
        val mediaDescription = MediaDescriptionCompat.Builder()
            .setMediaId(i.toString()) // This becomes the parentId in onLoadChildren when clicked in AA
            .setTitle("Part ${i + 1}")
            .build()
        val mediaItem = MediaBrowserCompat.MediaItem(mediaDescription, MediaBrowserCompat.MediaItem.FLAG_BROWSABLE)
        mediaItems.add(mediaItem)
    }
    result.sendResult(mediaItems)
}

private fun fillMediaPlayableResult(parentId: String, result: Result<MutableList<MediaBrowserCompat.MediaItem>>) {
    val intParent = parentId.toInt()
    val startPosition = intParent * 100        // where this part starts
    var stopPosition = (intParent + 1) * 100   // where this part ends (exclusive)
    if (stopPosition > itemsList.size) {
        stopPosition = itemsList.size
    }
    val mediaItems = mutableListOf<MediaBrowserCompat.MediaItem>()
    for (i in startPosition until stopPosition) {
        // Generate playable content for this part.
        val item = itemsList[i]
        val mediaDescription = MediaDescriptionCompat.Builder()
            .setMediaId(item.id)
            .setTitle(item.albumTitle)
            .build()
        val mediaItem = MediaBrowserCompat.MediaItem(mediaDescription, MediaBrowserCompat.MediaItem.FLAG_PLAYABLE)
        mediaItems.add(mediaItem)
    }
    result.sendResult(mediaItems)
}
I didn't check this code, but I think the idea is clear.
If you look into the Android SDK documentation:
Note: Android Auto and Android Automotive OS have strict limits on how many media items they can display in each level of the menu. These limits minimize distractions for drivers and help them operate your app with voice commands. ...
So, I think the best approach is to limit the number of media items in the UX design. As for the other apps you saw working with lots of media items, they might have used Jetpack Media2 for pagination.
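On the pagination point: MediaBrowserCompat defines EXTRA_PAGE and EXTRA_PAGE_SIZE keys that a client can pass in an options Bundle when subscribing, and they are delivered to the three-argument onLoadChildren overload. A rough sketch of the service side, reusing itemsList from above; buildMediaItem() is a hypothetical mapper from your item type to MediaBrowserCompat.MediaItem:

// Sketch: serve one page per call when the client subscribes with paging options.
override fun onLoadChildren(
        parentId: String,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>,
        options: Bundle
) {
    val page = options.getInt(MediaBrowserCompat.EXTRA_PAGE, 0)
    val pageSize = options.getInt(MediaBrowserCompat.EXTRA_PAGE_SIZE, 100)
    val from = page * pageSize
    val to = minOf(from + pageSize, itemsList.size)
    val items = if (from < to) {
        itemsList.subList(from, to).map { buildMediaItem(it) }.toMutableList()
    } else {
        mutableListOf()
    }
    result.sendResult(items)
}

Note that this overload is only invoked when the client subscribes with those options; plain subscriptions still go through the two-argument version.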

How to set sound playback to the external speaker?

There is a kind of weird issue. I am using the oboe library (https://github.com/google/oboe) for sound playback. You can choose the sound playback output according to the Android settings (https://developer.android.com/reference/android/media/AudioDeviceInfo), so if I need a specific output channel I have to set it on the oboe library.
The output channel I need is TYPE_BUILTIN_SPEAKER, but on some devices (sometimes, not constantly) I hear the sound from TYPE_BUILTIN_EARPIECE.
Here is how I am doing it. I have this method to get the needed channel id:
fun findAudioDevice(app: Application,
                    deviceFlag: Int,
                    deviceType: Int): AudioDeviceInfo? {
    var result: AudioDeviceInfo? = null
    val manager = app.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    val adis = manager.getDevices(deviceFlag)
    for (adi in adis) {
        if (adi.type == deviceType) {
            result = adi
            break
        }
    }
    return result
}
How I use it:
val id = getAudioDeviceInfoId(getBuildInSpeakerInfo())

private fun getBuildInSpeakerInfo(): AudioDeviceInfo? {
    return com.tetavi.ar.basedomain.utils.Utils.findAudioDevice(
            getApplication<Application>(),
            AudioManager.GET_DEVICES_OUTPUTS,
            AudioDeviceInfo.TYPE_BUILTIN_SPEAKER
    )
}

private fun getAudioDeviceInfoId(info: AudioDeviceInfo?): Int {
    var result = -1
    if (info != null) {
        result = info.id
    }
    return result
}
Eventually I need to pass this id to the oboe library. The oboe library is a native library, so I pass the id over JNI and set it:
oboe::Result oboe_engine::createPlaybackStream()
{
    oboe::AudioStreamBuilder builder;
    const oboe::SharingMode sharingMode = oboe::SharingMode::Exclusive;
    const int32_t sampleRate = mBackingTrack->getSampleRate();
    const oboe::AudioFormat audioFormat = oboe::AudioFormat::Float;
    const oboe::PerformanceMode performanceMode = oboe::PerformanceMode::PowerSaving;

    builder.setSharingMode(sharingMode)
            ->setPerformanceMode(performanceMode)
            ->setFormat(audioFormat)
            ->setCallback(this)
            ->setSampleRate(sampleRate);

    if (m_output_playback_chanel_id != EMPTY_NUM)
    {
        // Set the output playback channel (internal or external speaker).
        builder.setDeviceId(m_output_playback_chanel_id); // <------------- THIS LINE
    }

    return builder.openStream(&mAudioStream);
}
So the issue is that on some devices (sometimes, not constantly) the sound still plays from the internal speaker, TYPE_BUILTIN_EARPIECE, even though I set TYPE_BUILTIN_SPEAKER directly.
I checked the flow a few times, from the moment I get this id (it is actually 3) up to the moment I set it as a parameter on the oboe library, but I still sometimes hear sound from the internal speaker.
So the question is: am I missing something here? Maybe some trick needs to be implemented, or something else?
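Not a confirmed fix, but since the final routing decision is the platform's, one workaround worth trying (an untested sketch of mine, not something from the oboe documentation) is to force speakerphone routing through AudioManager before the stream is opened:

// Untested sketch: ask Android to route output to the loudspeaker before
// the oboe stream is opened. Behaviour is device-dependent.
fun forceLoudspeaker(app: Application) {
    val manager = app.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    manager.mode = AudioManager.MODE_IN_COMMUNICATION
    manager.isSpeakerphoneOn = true
}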

How to properly blend two ImageViews using ScriptIntrinsicBlend

This is my current implementation, which produces output different from what I expected:
private fun multiplyBitmap(bitmap: Bitmap?): Bitmap {
    val mRenderScript = RenderScript.create(context!!)
    val bitmapMultiply = mOriginalBitmap!!.copy(mOriginalBitmap!!.config, true)
    val blend = ScriptIntrinsicBlend.create(mRenderScript, Element.U8_4(mRenderScript))
    val allocationIn = Allocation.createFromBitmap(mRenderScript, originalBackground)
    val allocationOut = Allocation.createFromBitmap(mRenderScript, bitmapMultiply)
    blend.forEachMultiply(allocationIn, allocationOut)
    return bitmapMultiply
}
Your code is missing a crucial part: it should call allocationOut.copyTo(bitmapMultiply) after the call to forEachMultiply.
The copyTo call ensures that the data is completely copied from GPU memory back into the data store backing the bitmap.
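Applied to the snippet above, the fixed function would look roughly like this (same names as in the question):

private fun multiplyBitmap(bitmap: Bitmap?): Bitmap {
    val mRenderScript = RenderScript.create(context!!)
    val bitmapMultiply = mOriginalBitmap!!.copy(mOriginalBitmap!!.config, true)
    val blend = ScriptIntrinsicBlend.create(mRenderScript, Element.U8_4(mRenderScript))
    val allocationIn = Allocation.createFromBitmap(mRenderScript, originalBackground)
    val allocationOut = Allocation.createFromBitmap(mRenderScript, bitmapMultiply)
    blend.forEachMultiply(allocationIn, allocationOut)
    // Copy the blend result back from the output allocation into the bitmap.
    allocationOut.copyTo(bitmapMultiply)
    return bitmapMultiply
}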

Google Calendar API v3 Android - Get all-day events for a FreeBusyRequest

I'm trying to get the free/busy data of other people within my Google organization using google-api-services-calendar:v3 for Android (using Kotlin). I'm getting the times just fine for events with a set duration, but all-day events don't show up in the list. Documentation on this is almost nowhere to be found, and what I find on developers.google.com contains code that was deprecated in 2013...
// ...
val busyTimesList = mutableListOf<AgendaPlotter.TimeSpan>()
SessionService.sharedInstance.getGoogleAccount(activity)
    .observeOn(Schedulers.io())
    .subscribe {
        mCredential!!.selectedAccount = it.account
        // Build a free/busy request covering one whole day.
        val request = FreeBusyRequest()
        val durationCal = Calendar.getInstance()
        durationCal.time = startDay.time
        Calendars.startOfDay(durationCal)
        request.timeMin = DateTime(durationCal.time)
        durationCal.add(Calendar.DATE, 1)
        request.timeMax = DateTime(durationCal.time)
        val requestItems = listOf(FreeBusyRequestItem().setId("email#from.colleague"))
        request.items = requestItems
        request.timeZone = TimeZone.getDefault().id
        val busyTimes: FreeBusyResponse
        try {
            val query = mService!!.freebusy().query(request)
            // Use a partial GET to retrieve only the needed fields.
            query.fields = "calendars"
            busyTimes = query.execute()
            busyTimes.calendars.forEach {
                it.toPair().second.busy.forEach { timeSpan ->
                    val busyTime = AgendaPlotter.TimeSpan()
                    busyTime.fromTime.timeInMillis = timeSpan.start.value
                    busyTime.toTime.timeInMillis = timeSpan.end.value
                    busyTimesList.add(busyTime)
                }
            }
            emitter.onNext(busyTimesList)
        } catch (e: IOException) {
            e.printStackTrace()
            // ...
        }
    }
// ...
So my question: how do I also obtain the whole-day events?
After some searching I noticed that there is actually nothing wrong with the API. Whether a whole-day event appears is controlled by the event's busy/free setting.
By default an all-day event is set to "free", which means it does not show up as a busy time, which makes sense. This goes unnoticed by a lot of people, so they appear "free" on that day.
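If those days need to show up regardless of that setting, one option (a sketch using the same v3 client; "primary" stands in for the colleague's calendar id, which you would need access to) is to query events.list over the same window and treat all-day events that are not marked "transparent" (i.e. free) as busy:

// Sketch: all-day events have start.date set instead of start.dateTime;
// transparency == "transparent" is the "free" setting that freebusy skips.
val events = mService!!.events().list("primary")
        .setTimeMin(request.timeMin)
        .setTimeMax(request.timeMax)
        .setSingleEvents(true)
        .execute()
for (event in events.items) {
    val isAllDay = event.start.date != null
    val isBusy = event.transparency != "transparent"
    if (isAllDay && isBusy) {
        // Treat the whole day as busy, e.g. add it to busyTimesList.
    }
}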

[AS3][Air for Android] Get streaming mic input?

Is there a way to get true mic streaming input?
The example code I have at the moment appears to take the mic input data, save it to a Sound object, and play it right away.
Is there a way to stream it properly?
If not, is there a way in my example to get the mic input data but mute the audio, as it is causing a feedback loop (despite setLoopBack being set to false)?
Code below:
import flash.display.*;
import flash.events.*;
import flash.media.*;
import flash.media.SoundTransform;
import flash.utils.*;

var _soundBytes:ByteArray = new ByteArray();
var _micBytes:ByteArray;
var son:Sound;
var sc:SoundChannel;
var pow:int = 0;
var myBar:Sprite;

stage.quality = "LOW";

// this code ended up muting the mic input oddly?
//SoundMixer.soundTransform = new SoundTransform(0);

init();

function init()
{
    myBar = new Sprite();
    micInit();
    soundInit();
    addEventListener(Event.ENTER_FRAME, drawLines);
}

function micInit()
{
    var mic:Microphone = Microphone.getMicrophone();
    if (mic != null) {
        //mic.setUseEchoSuppression(true);
        mic.setLoopBack(false);
        mic.setSilenceLevel(0);
        mic.rate = 44;
        mic.gain = 60;
        mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);
    }
}

function micSampleDataHandler(event:SampleDataEvent):void
{
    // Grab the captured samples and immediately play them back.
    _micBytes = event.data;
    sc = son.play();
}

function soundInit():void {
    son = new Sound();
    son.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
}

function soundSampleDataHandler(event:SampleDataEvent):void {
    // Feed the mic samples to the output, duplicated for left and right channels.
    for (var i:int = 0; i < 8192 && _micBytes.bytesAvailable > 0; i++) {
        var sample:Number = _micBytes.readFloat();
        event.data.writeFloat(sample);
        event.data.writeFloat(sample);
    }
}

function drawLines(e:Event):void {
    SoundMixer.computeSpectrum(_soundBytes, true);
    myBar.graphics.clear();
    myBar.graphics.lineStyle(2, 0xabc241);
    for (var i:int = 0; i < 256; i++) {
        pow = _soundBytes.readFloat() * 200;
        pow = Math.abs(pow);
        myBar.graphics.drawRect(i * 2, 0, 2, pow);
    }
    addChild(myBar);
}
Hope you can help!
To use acoustic echo cancellation, call Microphone.getEnhancedMicrophone() to get a reference to an enhanced Microphone object, and set the Microphone.enhancedOptions property to an instance of the MicrophoneEnhancedOptions class. There is an article about enhanced microphone options at Adobe that discusses it all.
EDIT: I spoke too soon. I have used the enhanced mic many times before, but I decided to reread the article to see if there was anything interesting I could learn from it... and I found this near the end:
AEC is computationally expensive. Currently, only desktop platforms are supported for Flash Player and AIR
Although, I just looked at the date... that was last year, so maybe give it a try and it is supported now?!?
