Sepia Effect in iOS - Android

I am developing an Android application which has to apply a sepia effect to an uploaded image. The application already exists on iOS. I came across a function in its Swift code which applies the sepia effect:
func applySepia() -> UIImage? {
    filterValue = 1
    let image = self.processPixels(filterValue: filterValue)
    guard let cgimg = image?.cgImage else {
        print("imageView doesn't have an image!")
        return self
    }
    let value = -filterValue
    print("sliderValue = \(filterValue)")
    print("value = \(value)")
    let openGLContext = EAGLContext(api: .openGLES2)
    let context = CIContext(eaglContext: openGLContext!)
    let coreImage = CIImage(cgImage: cgimg)
    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    filter?.setValue(value, forKey: kCIInputIntensityKey)
    if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
        let cgimgresult = context.createCGImage(output, from: output.extent)
        let image = UIImage(cgImage: cgimgresult!)
        return image.applySharpness(filterValue: filterValue)
    }
    return self
}
The above function uses the "CISepiaTone" filter to apply the sepia tone. Here "kCIInputIntensityKey" is passed as -1, which I am unable to understand. As per the documentation, its value ranges between 0 and 1, so how is a negative value allowed? With this intensity value, the generated image looks like this:
In my opinion, after applying sepia it should look like this:
I am able to achieve the second image on Android (which is a true sepia tone) using https://github.com/StevenRudenko/ColorMartix/blob/master/src/com/sample/colormatrix/Main.java
However, I couldn't find any built-in method or class in Android which can be used to apply a sepia tone with a negative intensity, the way it is applied in the iOS Swift code. Here are my questions:
1. How is iOS allowing a negative value for kCIInputIntensityKey despite the fact that it should be in the range 0-1?
2. With the negative intensity value, the generated image does not look like a sepia tone.
3. How can I achieve the same effect in Android?
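For context, the ColorMatrix route on Android (roughly what the linked sample does; the matrix values below are illustrative guesses rather than the sample's exact numbers) boils down to a sketch like this:
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.ColorMatrix;
import android.graphics.ColorMatrixColorFilter;
import android.graphics.Paint;

// Desaturate to grayscale, then scale the channels towards warm tones.
static Bitmap sepia(Bitmap source) {
    Bitmap result = Bitmap.createBitmap(source.getWidth(), source.getHeight(), Bitmap.Config.ARGB_8888);
    ColorMatrix matrix = new ColorMatrix();
    matrix.setSaturation(0f);                   // remove colour first
    ColorMatrix warmTint = new ColorMatrix();
    warmTint.setScale(1f, 0.95f, 0.82f, 1f);    // keep red, reduce green and blue (illustrative values)
    matrix.postConcat(warmTint);
    Paint paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(matrix));
    new Canvas(result).drawBitmap(source, 0f, 0f, paint);
    return result;
}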

Related

FaceDetector.findFaces accuracy params

I have been implementing the FaceDetector.findFaces() feature in my app to find/recognize faces in a selected Bitmap, and I see that it only works for perfectly clear faces.
Is there a way to apply some kind of 'accuracy' parameter so that a partially visible face is still accepted?
In my app I want to restrict profile picture selection to images that show a face, and the code is plain and simple:
private boolean faceIsDetected(Bitmap image) {
    Bitmap image2 = image.copy(Bitmap.Config.RGB_565, false);
    FaceDetector faceDetector = new FaceDetector(image2.getWidth(), image2.getHeight(), 5);
    return faceDetector.findFaces(image2, new FaceDetector.Face[5]) > 0;
}
I have checked the generated bitmap and it follows the requirements: it is RGB_565 and it has an even width.
The Face object already includes a confidence() value. From the documentation:
Returns a confidence factor between 0 and 1. This indicates how
certain what has been found is actually a face. A confidence factor
above 0.3 is usually good enough.
You would need to define your own minimum value for this field, and decide what is good enough for you.
There is also a CONFIDENCE_THRESHOLD constant that defines the minimum confidence factor of good face recognition (which is 0.4F).
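For illustration, a minimal sketch of that check, reusing the helper from the question but requiring at least one face above a chosen confidence (the threshold used here is just an example; tune it to your needs):
private static final float MIN_CONFIDENCE = FaceDetector.Face.CONFIDENCE_THRESHOLD; // 0.4F by default, pick your own

private boolean faceIsDetected(Bitmap image) {
    Bitmap image2 = image.copy(Bitmap.Config.RGB_565, false);
    FaceDetector faceDetector = new FaceDetector(image2.getWidth(), image2.getHeight(), 5);
    FaceDetector.Face[] faces = new FaceDetector.Face[5];
    int found = faceDetector.findFaces(image2, faces);
    for (int i = 0; i < found; i++) {
        // Only accept detections the detector itself considers confident enough.
        if (faces[i] != null && faces[i].confidence() >= MIN_CONFIDENCE) {
            return true;
        }
    }
    return false;
}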
In my experiments this value typically oscillates between 0.5 and 0.53, and I never had anything outside that range. You are probably better off using ML Kit.

Android Flutter Analyze Audio Waveform

I want to create a music app that has a view resembling the one in SoundCloud, this one to be clear:
I thought of creating a class like this for each bar:
class Bar {
const Bar(this.alreadyPlayed, this.index, this.height);
final bool alreadyPlayed;
final int index;
final double height;
}
where alreadyPlayed is a bool that tells whether the bar should be colored or greyed out, index is the number of the bar, and height is, well, the height of the bar. The first two variables shouldn't be difficult to obtain; my problem is obtaining the height of the bar, i.e. the intensity of the music at that time. That alone would be enough, but it would be even better if someone also knows how to calculate the intensity of a specific frequency, for example 225 Hz; that could be useful.
But anyway, if it helps, I am adding what I'm trying to achieve in pseudocode:
// Obtain the mp3 file.
//
// Define a number of bars decided from the song length
// or from a default, for example, 80.
//
// In a loop that goes from 0 to the number of bars create
// a Bar Object with the default alreadyPlayed as 0, index
// as the index and the height as a 0.
//
// Obtain the intensity of the sound in a way like this:
// sound[time_in_milliseconds = song_length_in_milliseconds / num_of_bars],
// and then set the height of the bar as the just found intensity.
Is what I'm asking possible?
Looks like you're looking into generating waveform graphs from audio. Have you tried anything so far?
There's no short answer here though. You can start exploring with flutter_ffmpeg to generate waveform data from audio. It's up to you what format you use for your waveform data. Once you've got your data, you can draw waveform graphs in Flutter using CustomPaint. You can check the sample in this blog post. The waveform data used in the sample is in JSON.
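To make the "waveform data" step concrete: once you have decoded PCM samples (however you obtain them, e.g. via ffmpeg), one bar height is just an aggregate over one bucket of samples. A minimal sketch of that bucketing, shown in Java only for concreteness (the same loop ports directly to Dart):
// samples: decoded 16-bit PCM amplitudes; returns one normalised height (0..1) per bar.
static double[] barHeights(short[] samples, int numberOfBars) {
    double[] heights = new double[numberOfBars];
    int samplesPerBar = Math.max(samples.length / numberOfBars, 1);
    for (int bar = 0; bar < numberOfBars; bar++) {
        int start = bar * samplesPerBar;
        int end = Math.min(start + samplesPerBar, samples.length);
        double sumOfSquares = 0;
        for (int i = start; i < end; i++) {
            sumOfSquares += (double) samples[i] * samples[i];
        }
        // RMS of the bucket, normalised against 16-bit full scale.
        heights[bar] = Math.sqrt(sumOfSquares / Math.max(end - start, 1)) / Short.MAX_VALUE;
    }
    return heights;
}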
I'm looking for a way to listen to the microphone and do some audio analysis with Flutter, and I found something that may help you.
It is a two-article sequence that explains step by step how to draw waveforms with Flutter:
Generating Waveform Data - Audio Representation: https://matt.aimonetti.net/posts/2019-06-generating-waveform-data-audio-representation/
Drawing Waveforms in Flutter: https://matt.aimonetti.net/posts/2019-07-drawing-waveforms-in-flutter/
I hope this helps you.

How to fill gaps between each segment of the 7-segment character

I want to recognize digits from an odometer on a mobile device using the Tesseract library.
Source image:
Next step:
Now I need to fill the gaps between the segments of each digit.
Can you help me with how to do this?
(The English training data works better for me than https://github.com/arturaugusto/display_ocr.)
image processing:
func prepareImage(sourceImage: UIImage) -> UIImage {
    let avgLuminanceThresholdFilter = GPUImageAverageLuminanceThresholdFilter()
    avgLuminanceThresholdFilter.thresholdMultiplier = 0.67

    let adaptiveThresholdFilter = GPUImageAdaptiveThresholdFilter()
    adaptiveThresholdFilter.blurRadiusInPixels = 0.67

    let unsharpMaskFilter = GPUImageUnsharpMaskFilter()
    unsharpMaskFilter.blurRadiusInPixels = 4.0

    let stillImageFilter = GPUImageAdaptiveThresholdFilter()
    stillImageFilter.blurRadiusInPixels = 1.0

    let contrastFilter = GPUImageContrastFilter()
    contrastFilter.contrast = 0.75

    let brightnessFilter = GPUImageBrightnessFilter()
    brightnessFilter.brightness = -0.25

    // unsharpen
    var processingImage = unsharpMaskFilter.imageByFilteringImage(sourceImage)
    processingImage = contrastFilter.imageByFilteringImage(processingImage)
    processingImage = brightnessFilter.imageByFilteringImage(processingImage)

    // convert to binary black/white pixels
    processingImage = avgLuminanceThresholdFilter.imageByFilteringImage(processingImage)
    return processingImage
}
OCR:
let tesseract_eng = G8Tesseract()
tesseract_eng.language = "eng"
tesseract_eng.engineMode = .TesseractOnly
tesseract_eng.pageSegmentationMode = .Auto
tesseract_eng.maximumRecognitionTime = 60.0
tesseract_eng.setVariableValue("0123456789", forKey: "tessedit_char_whitelist")
tesseract_eng.image = prepareImage(image)
tesseract_eng.recognize()
OpenCV has some morphology methods which fill the (white) gaps between black pixels (like THIS or THIS). Pay attention to the morphological opening method; it should be the primary method for solving this, but do not be afraid to combine it with dilation if opening alone does not help. I am not sure what software you use for image processing; if it has similar methods, try them out. Otherwise I would highly recommend installing OpenCV, which (free of course) has many image-processing operations that run very fast. Also, you could experiment a bit with the threshold values to find the balance between how many corners it cuts off and how much shadow it removes; combined with the morphological operations, this should solve the issue for you.
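For reference, a hedged sketch of that morphological step using the OpenCV Java bindings (the same calls exist in the C++/Objective-C API you would use on iOS); the kernel size is an assumption to be tuned to the gap width between segments:
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Bridges the gaps between the dark segments of a binarised digit image.
static Mat closeSegmentGaps(Mat binary) {
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5)); // tune to gap width
    Mat result = new Mat();
    // With dark digits on a light background, "opening" the white background
    // closes the gaps in the dark segments; for light digits on dark, use MORPH_CLOSE.
    Imgproc.morphologyEx(binary, result, Imgproc.MORPH_OPEN, kernel);
    return result;
}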

Making an Android-desktop-style PageView

This is a very simple example of achieving the Android multi-screen wallpaper effect on iOS, written in Swift. Sorry for the poor and redundant English, as I am a non-native speaker and new to coding.
Many Android phones have a multi-screen desktop where, when you swipe between screens, the wallpaper moves horizontally along with your gesture. Usually the wallpaper moves on a smaller scale than the icons or search boxes to convey a sense of depth.
To do this, I use a UIScrollView instead of a UIPageViewController. I tried the latter at first, but it makes the view controller hierarchy very complex, and I couldn't use the touchesMoved method correctly, because when you pan between pages a new child view controller comes in and interrupts the method.
Instead, I tried UIScrollView, and here is what I have arrived at:
1. Drag a wallpaper imageView into the view controller, set the image, and set the frame to (0, 0, width of image, height of screen).
2. Set the view controller's simulated size to Freeform and set the width to theNumberOfPagesYouWantToDisplay * screenWidth (this only makes the next steps easier; the width of the view controller will not affect the final result).
3. Drag in a scrollView that fits the view controller.
4. Add the "content UIs" to the scrollView as its child views, or do it in code.
5. Add a pageControl if you would like.
6. In your ViewController.swift, add the following code:
import UIKit

class ViewController: UIViewController, UIScrollViewDelegate {

    let pagesToDisplay = 10 // You can modify this to test the result.
    let scrollViewWidth = CGFloat(320)
    let scrollViewHeight = CGFloat(568) // Modify these based on the screen size of your device; this example is for iPhone 5/5s.

    @IBOutlet weak var backgroundView: UIImageView!
    @IBOutlet weak var scrollView: UIScrollView!
    @IBOutlet weak var pageControl: UIPageControl!

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.delegate = self
        scrollView.pagingEnabled = true // Makes the scrollView feel like a pageViewController.
        scrollView.frame = CGRectMake(0, 0, scrollViewWidth, scrollViewHeight) // Remember to set the frame back to the screen size.
        scrollView.contentSize = CGSizeMake(CGFloat(pagesToDisplay) * scrollViewWidth, scrollViewHeight)
        pageControl.numberOfPages = pagesToDisplay
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    func scrollViewDidEndDecelerating(scrollView: UIScrollView) {
        let pageIndex = Int(scrollView.contentOffset.x / scrollView.frame.size.width)
        pageControl.currentPage = pageIndex
    }

    func scrollViewDidScroll(scrollView: UIScrollView) {
        // Move the background at a fraction of the scroll offset to create the parallax depth effect.
        let bgvMaxOffset = scrollView.frame.width - backgroundView.frame.width
        let svMaxOffset = scrollView.frame.width * CGFloat(pagesToDisplay - 1)
        if scrollView.contentOffset.x >= 0 &&
            scrollView.contentOffset.x <= svMaxOffset {
            backgroundView.frame.origin.x = scrollView.contentOffset.x * (bgvMaxOffset / svMaxOffset)
        }
    }
}
Feel free to try this on your device; it would be really appreciated if someone could give some advice on this. Thank you.

Correct Pixel Processing Logic for DICOM JPEG(RGB) for Applying Window Width and Level Filter

I am trying to apply a window width and level filter to a JPEG image which I extracted from a DICOM file.
Here is the logic I use to process each channel of the RGB image; for example, I manipulate the red channel with RenderScript on Android as in the code below.
Example code showing how I manipulate the red channel of the image (I do the same for the green and blue channels).
It does manipulate the window width and level of the JPEG image, but I am not sure if it is the correct way to manipulate DICOM JPEGs. If somebody knows the correct way to manipulate the window width and level of RGB JPEGs, with the correct pixel processing math, please help me, as my result differs somewhat (about 20%) from Windows-based DICOM viewers. (I know window level and width are meant for monochrome images only, but some DICOM viewers, such as "ShowCase", do apply such filters to RGB.)
displayMin = (windowLevel - windowWidth / 2);
displayMax = (windowLevel + windowWidth / 2);

/* Manipulate the red channel */
if (current.r < displayMin)
{
    current.r = 0;
}
else if (current.r > displayMax)
{
    current.r = 1;
}
Your current approach simply truncates the input data to fit the window, which can certainly be useful. However, it doesn't really let you see the benefit of the window/level, particularly on images with more than 8 bits per pixel, because it doesn't enhance any of the details.
You'd typically want to remap the windowed input range (displayMin to displayMax) onto the output range (0 to 1) in some way. I don't think there is a definitive 'correct' approach, although here's a simple linear mapping that I find useful:
if (current.r <= displayMin || displayMin == displayMax)
{
    current.r = 0;
}
else if (current.r >= displayMax)
{
    current.r = 1;
}
else
{
    current.r = (current.r - displayMin) / (displayMax - displayMin);
}
What that's doing is simply taking your restricted window, and expanding it to use the entire colour-space. You can think of it like zooming-in on the details.
(The displayMin == displayMax condition is simply guarding against division-by-zero.)
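If it helps, here is the same linear remap written as plain Java over an 8-bit RGB Bitmap rather than RenderScript; it assumes the window level/width are expressed in the same 0..255 units as the JPEG channels (a sketch of the idea, not a drop-in replacement for the kernel above):
import android.graphics.Bitmap;
import android.graphics.Color;

// Remaps each RGB channel linearly from [displayMin, displayMax] to [0, 255].
static Bitmap applyWindowLevel(Bitmap src, float windowLevel, float windowWidth) {
    float displayMin = windowLevel - windowWidth / 2f;
    float displayMax = windowLevel + windowWidth / 2f;
    float range = Math.max(displayMax - displayMin, 1e-6f); // guard against division by zero
    Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
    for (int y = 0; y < out.getHeight(); y++) {
        for (int x = 0; x < out.getWidth(); x++) {
            int p = out.getPixel(x, y);
            out.setPixel(x, y, Color.argb(
                    Color.alpha(p),
                    remap(Color.red(p), displayMin, range),
                    remap(Color.green(p), displayMin, range),
                    remap(Color.blue(p), displayMin, range)));
        }
    }
    return out;
}

// Clamp to the window, then stretch the window across the full 0..255 output range.
static int remap(int value, float displayMin, float range) {
    float v = (value - displayMin) / range;
    return Math.round(Math.min(Math.max(v, 0f), 1f) * 255f);
}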
