How to properly blend two ImageViews using ScriptIntrinsicBlend - Android

This is my current implementation, which produces output different from what I expect:
private fun multiplyBitmap(bitmap: Bitmap?): Bitmap {
    val mRenderScript = RenderScript.create(context!!)
    val bitmapMultiply = mOriginalBitmap!!.copy(mOriginalBitmap!!.config, true)
    val blend = ScriptIntrinsicBlend.create(mRenderScript, Element.U8_4(mRenderScript))
    val allocationIn = Allocation.createFromBitmap(mRenderScript, originalBackground)
    val allocationOut = Allocation.createFromBitmap(mRenderScript, bitmapMultiply)
    blend.forEachMultiply(allocationIn, allocationOut)
    return bitmapMultiply
}

Your code is missing a crucial part. It should call allocationOut.copyTo(bitmapMultiply) after the call to forEachMultiply.
The copyTo call ensures that the data is completely copied from GPU memory to the data store backing the bitmap.
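For reference, a minimal sketch of the corrected function, assuming the same fields (mOriginalBitmap, originalBackground) as in the question:

private fun multiplyBitmap(bitmap: Bitmap?): Bitmap {
    val rs = RenderScript.create(context!!)
    val bitmapMultiply = mOriginalBitmap!!.copy(mOriginalBitmap!!.config, true)
    val blend = ScriptIntrinsicBlend.create(rs, Element.U8_4(rs))
    val allocationIn = Allocation.createFromBitmap(rs, originalBackground)
    val allocationOut = Allocation.createFromBitmap(rs, bitmapMultiply)
    blend.forEachMultiply(allocationIn, allocationOut)
    // Copy the blended result back from the Allocation into the Bitmap
    allocationOut.copyTo(bitmapMultiply)
    // Release the native resources once done
    allocationIn.destroy()
    allocationOut.destroy()
    blend.destroy()
    rs.destroy()
    return bitmapMultiply
}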


Android Kotlin Real-Time FFT and plot

I am trying to apply a real-time FFT to sensor data received over a Bluetooth LE connection.
There is an SDK which delivers the sensor data on Android via a Handler. I am using a ViewModel to pass the sensor data to various parts of the app, to plot it with GraphView and to perform the FFT.
I am using JTransforms to perform the FFT; before that I was using JDSP to perform an STFT.
Below is the code used to perform the FFT on the unfiltered raw sensor data with JTransforms:
var t = 0
var fs = 512
var sampleSize = 2 * fs
val windowSize = sampleSize / 2

private fun getFFT(sample: DoubleArray): Array<DataPoint> {
    val fft = DoubleFFT_1D(sampleSize.toLong())
    fft.realForward(sample)
    return analysed(sample)
}

private fun analysed(sample: DoubleArray): Array<DataPoint> {
    val series: Array<DataPoint> = Array(sample.size) { DataPoint(0.0, 0.0) }
    sample.forEachIndexed { i, y ->
        series[i] = DataPoint(i.toDouble(), y)
    }
    return series
}

sensorViewModel.getRaw().observe(this) {
    if (t < sampleSize - 1) {
        sample[t] = it.toDouble()
        t++
    } else {
        sample = sample.takeLast(windowSize).toDoubleArray()
            .plus(DoubleArray(2 * windowSize) { 0.0 })
        t = windowSize
        // Plot FFT
        asyncTask.execute(onPreExecute = {
        }, doInBackground = {
            getFFT(sample)
        }, onPostExecute = {
            fftseries.resetData(it)
        })
    }
}
Although my code runs without crashing, I can see many problems with the app.
Using a sliding window to build sample and then performing the FFT on it feels really inefficient. Can anyone suggest how I could write this with better control over the window size?
How can I make the FFT plot faster?
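One way to tighten the windowing (a sketch only, not tied to the SDK above; the window and hop sizes are assumptions) is to keep the incoming samples in a fixed-size buffer and hand a snapshot to the FFT once per hop:

// Sketch: fixed-size window with a configurable hop (overlap) between FFT frames.
class SlidingWindow(private val windowSize: Int, private val hopSize: Int = windowSize / 2) {
    private val buffer = DoubleArray(windowSize)
    private var filled = 0          // number of valid samples in the buffer
    private var sinceLastFrame = 0  // samples received since the last emitted frame

    // Returns a copy of the current window when a new frame is due, otherwise null.
    fun add(value: Double): DoubleArray? {
        if (filled < windowSize) {
            buffer[filled++] = value
        } else {
            // Shift left by one and append; for large windows a circular head index
            // avoids this copy entirely.
            System.arraycopy(buffer, 1, buffer, 0, windowSize - 1)
            buffer[windowSize - 1] = value
        }
        sinceLastFrame++
        if (filled == windowSize && sinceLastFrame >= hopSize) {
            sinceLastFrame = 0
            return buffer.copyOf() // snapshot for the FFT; realForward works in place
        }
        return null
    }
}

The observer then reduces to slidingWindow.add(it.toDouble())?.let { frame -> getFFT(frame) } (run off the main thread), so only one array copy is made per frame and the FFT never modifies the live buffer.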

How to get buffer and sound info for an equalizer using MediaBrowser, MediaController, MediaSession and ExoPlayer?

I have an app which can play playlists, built by following the Google docs on how to create an audio app: https://developer.android.com/guide/topics/media-apps/audio-app/building-an-audio-app
I would like to add an equalizer like this one: https://github.com/Yalantis/Horizon, but I cannot find how to get the needed information. I have never worked with sound before, so I am a bit lost.
According to the docs I should first "initialize the Horizon object with params referring to your sound":
mHorizon = Horizon(
    glSurfaceView, ResourcesCompat.getColor(resources, R.color.grey2),
    RECORDER_SAMPLE_RATE, RECORDER_CHANNELS, RECORDER_ENCODING_BIT // Where do I get these 3 constants?
)
And then: "to update Horizon call updateView method with chunk of sound data to proceed:"
val buffer = ByteArray(//Where to get the bytes?)
mHorizon!!.updateView(buffer)
How could I get this data? I looked in the android documentation but couldn't find anything.
You need to add a custom RendererFactory to your ExoPlayer to get the audio bytes. See the code below:
val rendererFactory = RendererFactory(this, object : TeeAudioProcessor.AudioBufferSink {
    override fun flush(sampleRateHz: Int, channelCount: Int, encoding: Int) {
    }

    override fun handleBuffer(buffer: ByteBuffer) {
        // pass the bytes to your function
    }
})
exoPlayer = ExoPlayerFactory.newSimpleInstance(this, rendererFactory, DefaultTrackSelector())
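The RendererFactory used above is not a stock ExoPlayer class; one way to define it (a sketch, assuming an ExoPlayer 2.x version where DefaultRenderersFactory still exposes buildAudioProcessors()) is a small DefaultRenderersFactory subclass that inserts a TeeAudioProcessor:

// Sketch: a renderers factory that tees decoded audio to an AudioBufferSink.
// DefaultRenderersFactory, AudioProcessor and TeeAudioProcessor come from
// com.google.android.exoplayer2 / com.google.android.exoplayer2.audio.
class RendererFactory(
    context: Context,
    private val bufferSink: TeeAudioProcessor.AudioBufferSink
) : DefaultRenderersFactory(context) {

    override fun buildAudioProcessors(): Array<AudioProcessor> {
        // TeeAudioProcessor passes the audio through unchanged while copying
        // every buffer to the sink, so playback is unaffected.
        return arrayOf(TeeAudioProcessor(bufferSink))
    }
}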
You will get the bytes in a ByteBuffer. To convert it to a ByteArray, use the code below:
try {
    val arr = ByteArray(buffer.remaining())
    buffer.get(arr) // pass this array to the required function
} catch (exception: Exception) {
    // handle exception here
}
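Putting the two pieces together for the Horizon case (a sketch; mHorizon is assumed to be set up as in the question), the sink can feed updateView directly, and flush() is also the natural source for the three constructor parameters asked about above:

val bufferSink = object : TeeAudioProcessor.AudioBufferSink {
    override fun flush(sampleRateHz: Int, channelCount: Int, encoding: Int) {
        // Describes the stream: sample rate, channel count and PCM encoding,
        // i.e. the kind of values the Horizon constructor expects.
    }

    override fun handleBuffer(buffer: ByteBuffer) {
        // Note: this runs on ExoPlayer's audio processing thread, not the main thread.
        val chunk = ByteArray(buffer.remaining())
        buffer.get(chunk)           // copy the PCM data out of the ByteBuffer
        mHorizon?.updateView(chunk) // hand the chunk to the visualizer
    }
}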

How to convert a protobuf object to a ByteArray and then encode it with Base64 URL_SAFE in Swift?

In Android, I could convert an object to a ByteArray and then encode it to Base64 (URL_SAFE) as per the code below:
val myByteArrayObject = protobufObject.toByteArray()
val myEncodedObject = android.util.Base64.encodeToString(
    myByteArrayObject, android.util.Base64.DEFAULT).trim()
How could I achieve that in Swift?
Found the answer.
do {
    let protobufSerialized = try protobufObject.serializedData()
    let protobufEncoded = protobufSerialized.base64EncodedString()
    // Do whatever needs to be done with protobufEncoded
} catch { }
The main hidden function that is hard to find is serializedData(), which exists on SwiftProtobuf.Message.
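For reference, if URL-safe output is actually required on the Android side (as in the question's title), android.util.Base64 already has flags for it; a sketch:

// URL_SAFE uses - and _ instead of + and /; NO_WRAP omits line breaks.
val myByteArrayObject = protobufObject.toByteArray()
val myEncodedObject = android.util.Base64.encodeToString(
    myByteArrayObject,
    android.util.Base64.URL_SAFE or android.util.Base64.NO_WRAP
)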

Flutter loads resources faster than native Android

I'm trying to convert an image taken from resources to a ByteArray which will later be sent through a Socket. I've been measuring the time of this conversion.
I've done it on both Flutter and native Android (Kotlin). All of the tests were done on the same image, which is about 1-2 MB.
Flutter code:
sendMessage() async {
  if (socket != null) {
    Stopwatch start = Stopwatch()..start();
    final imageBytes = await rootBundle.load('assets/images/stars.jpg');
    final image = base64Encode(imageBytes.buffer.asUint8List(imageBytes.offsetInBytes, imageBytes.lengthInBytes));
    print('Converting took ${start.elapsedMilliseconds}');
    socket.emit("message", [image]);
  }
}
Kotlin code:
private fun sendMessage() {
    var message = ""
    val thread = Thread(Runnable {
        val start = SystemClock.elapsedRealtime()
        val bitmap = BitmapFactory.decodeResource(resources, R.drawable.stars)
        message = Base64.encodeToString(getBytesFromBitmap(bitmap), Base64.DEFAULT)
        Log.d("Tag", "Converting time was : ${SystemClock.elapsedRealtime() - start}")
    })
    thread.start()
    thread.join()
    socket.emit("message", message)
}

private fun getBytesFromBitmap(bitmap: Bitmap): ByteArray? {
    val stream = ByteArrayOutputStream()
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream)
    return stream.toByteArray()
}
I was actually expecting the native code to be much, much faster than Flutter's, but that's not the case: the conversion takes about 50 ms in Flutter and around 2000-3000 ms natively.
I thought threading might be the cause, so I tried to run the conversion on a background thread in the native code, but it didn't help.
Can you please tell me why there is such a difference in time, and how I can implement this better in native code? Is there a way to skip decoding to a Bitmap, etc.? Maybe that is what takes so long.
EDIT: Added the getBytesFromBitmap function.
The difference is that in the Flutter code you just read your data without any image decoding, while in Kotlin you first decode the resource to a Bitmap and then compress() it back to JPEG. If you want to speed it up, simply get an InputStream by calling Resources#openRawResource and read your image resource without any decoding.
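A minimal sketch of that approach, assuming the image sits behind a resource ID such as R.drawable.stars as in the question:

private fun sendMessageRaw() {
    // Read the resource bytes as-is: no Bitmap decode, no JPEG re-encode.
    val bytes = resources.openRawResource(R.drawable.stars).use { it.readBytes() }
    val message = Base64.encodeToString(bytes, Base64.DEFAULT)
    socket.emit("message", message)
}

This mirrors what the Flutter code does (rootBundle.load returns the undecoded file bytes), so the timings become comparable.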
It may have something to do with the way you convert it to bytes... Can you please post your getBytesFromBitmap function? Also, the conversion in native code really should be done on a background thread; please post your results for that case.

Stream video with bitmap as overlay

I'm new to Wowza and am working on a project to live-stream video captured from an Android device. I need to attach a (dynamic) image to the video stream so that the users watching the stream can see it. The code I have tried is given below (from the Wowza example source code):
// Read in a PNG file from the app resources as a bitmap
Bitmap overlayBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.overlay_logo);
// Initialize a bitmap renderer with the bitmap
mWZBitmap = new WZBitmap(overlayBitmap);
// Place the bitmap at top left of the display
mWZBitmap.setPosition(WZBitmap.LEFT, WZBitmap.TOP);
// Scale the bitmap initially to 75% of the display surface width
mWZBitmap.setScale(0.75f, WZBitmap.SURFACE_WIDTH);
// Register the bitmap renderer with the GoCoder camera preview view as a frame listener
mWZCameraView.registerFrameRenderer(mWZBitmap);
This works fine, but I don't want to show the image at the broadcasting end; the image should be visible only at the receiving end. Is there any way to get this done?
I managed to get this done by registering a frame renderer and setting the bitmap inside onWZVideoFrameRendererDraw.
The code snippet is given below (Kotlin):
private fun attachImageToBroadcast(scoreValue: ScoreUpdate) {
    bitmap = getBitMap(scoreValue)
    // Initialize a bitmap renderer with the bitmap
    mWZBitmap = WZBitmap(bitmap)
    // Position the bitmap in the display
    mWZBitmap!!.setPosition(WZBitmap.LEFT, WZBitmap.TOP)
    // Scale the bitmap initially
    mWZBitmap!!.setScale(0.37f, WZBitmap.FRAME_WIDTH)
    mWZBitmap!!.isVisible = false // as I don't want to show it initially
    mWZCameraView!!.registerFrameRenderer(mWZBitmap)
    mWZCameraView!!.registerFrameRenderer(VideoFrameRenderer())
}

private inner class VideoFrameRenderer : WZRenderAPI.VideoFrameRenderer {
    override fun onWZVideoFrameRendererRelease(p0: WZGLES.EglEnv?) {
    }

    override fun onWZVideoFrameRendererDraw(p0: WZGLES.EglEnv?, frameSize: WZSize?, p2: Int) {
        // The bitmap value is replaced whenever new values arrive. I check a flag
        // so setBitmap is only called when a new value has actually been obtained;
        // calling it on every frame causes flickering on the screen.
        mWZBitmap!!.setBitmap(bitmap)
    }

    override fun isWZVideoFrameRendererActive(): Boolean {
        return true
    }

    override fun onWZVideoFrameRendererInit(p0: WZGLES.EglEnv?) {
    }
}
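A minimal sketch of the flag guard mentioned in the comment above (the bitmapDirty and onScoreChanged names are hypothetical, not part of the GoCoder API):

// Only push a new bitmap to the renderer when one is actually pending;
// calling setBitmap on every frame causes flickering.
@Volatile private var bitmapDirty = false

private fun onScoreChanged(scoreValue: ScoreUpdate) {
    bitmap = getBitMap(scoreValue) // build the new overlay bitmap
    bitmapDirty = true             // let the renderer pick it up on the next frame
}

// Inside VideoFrameRenderer:
override fun onWZVideoFrameRendererDraw(p0: WZGLES.EglEnv?, frameSize: WZSize?, p2: Int) {
    if (bitmapDirty) {
        mWZBitmap!!.setBitmap(bitmap)
        mWZBitmap!!.isVisible = true
        bitmapDirty = false
    }
}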
In iOS, we can implement the WZVideoSink protocol to achieve this.
First, we need to update the scoreView with the latest score and then convert the view to an image.
Then we can embed this image into the captured frame using the WZVideoSink protocol method.
Sample code is given below.
// MARK: - WZVideoSink Protocol
func videoFrameWasCaptured(_ imageBuffer: CVImageBuffer, framePresentationTime: CMTime, frameDuration: CMTime) {
    if self.goCoder != nil && self.goCoder!.isStreaming {
        let frameImage = CIImage(cvImageBuffer: imageBuffer)
        var addCIImage: CIImage = CIImage()
        if let scoreImage = self.getViewAsImage() {
            // scoreImage is the image you want to embed.
            addCIImage = CIImage(cgImage: scoreImage.cgImage!)
        }
        let filter = CIFilter(name: "CISourceOverCompositing")
        filter?.setDefaults()
        filter?.setValue(addCIImage, forKey: kCIInputImageKey)
        filter?.setValue(frameImage, forKey: kCIInputBackgroundImageKey)
        if let outputImage: CIImage = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
            let context = CIContext(options: nil)
            context.render(outputImage, to: imageBuffer)
        } else {
            let context = CIContext(options: nil)
            context.render(frameImage, to: imageBuffer)
        }
    }
}

func getViewAsImage() -> UIImage? {
    // Convert scoreView to an image
    UIGraphicsBeginImageContextWithOptions(self.scoreView.bounds.size, false, 0.0)
    self.scoreView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let scoreImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return scoreImage
}
