I am using the Android Philips Hue SDK and I am currently having an issue with converting the light bulb's XY value to RGB.
I have looked at code provided in a forum on the Philips Hue website, posted by someone from Hue Support.
I have the following function using this code from the forum:
public static int[] convertXYToRGB(float[] xy, String lightModel)
{
    int color = PHUtilities.colorFromXY(xy, lightModel);
    int r = Color.red(color);
    int g = Color.green(color);
    int b = Color.blue(color);
    return new int[] {r, g, b};
}
And I am calling it like:
int hue = lightState.getHue();
float[] xy = PHUtilities.calculateXY(hue, item.light.getModelNumber());
int[] rgb = Utilities.convertXYToRGB(xy, item.light.getModelNumber());
Looking at the RGB value I get back, it seems to be the wrong colour. For example, using the official app I set one of my light bulbs to red, but when I run my app the RGB value that comes back is a pale yellow.
Has anyone else experienced this or know how to resolve this issue?
I had a similar issue while programming a desktop application using the same Java SDK (login required). Interestingly, a plain red turned into a pale yellow, exactly as you describe. A possible solution is to use the xy values directly instead of converting from the hue value; that finally solved the problem for me. You can get the xy values from the PHLightState object using the methods .getX() and .getY(). After that, use colorFromXY as in your code to get the RGB values (as an Android color value, i.e. an int).
PHLightState s = light.getLastKnownLightState();
float[] xy = new float[] {s.getX(), s.getY()};
int combRGB = PHUtilities.colorFromXY(xy, light.getModelNumber());
On Android, convert combRGB as you already do. Make sure to include android.graphics.Color. If you are testing on non-Android systems you can use the following code:
Color theColor = new Color(combRGB);
int[] sepRGB = {theColor.getRed(), theColor.getGreen(), theColor.getBlue()};
Note: the lights can only address a certain colour gamut depending on the type. This is explained in detail here. The 'normal' bulbs with colour gamut B have quite a few limitations. For example: most greens turn into yellows and the blues contain a certain amount of red.
Example values: the following overall conversions were tested on my live system with LCT001 bulbs. I used PHUtilities.calculateXYFromRGB() to convert the input, then set the xy values of the new light state with .setX() and .setY() and finally sent it to the bridge. The values were then extracted from the light cache in the application as soon as it received the next update.
255 0 0 -> 254 0 0
0 255 0 -> 237 254 0
0 0 255 -> 90 0 254
200 0 200 -> 254 0 210
255 153 0 -> 254 106 0
255 153 153 -> 254 99 125
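For reference, here is a minimal sketch of that round-trip in code. The bridge and light variables and the updateLightState call are assumptions based on typical Hue SDK usage, not part of the original posts:
float[] xy = PHUtilities.calculateXYFromRGB(255, 0, 0, light.getModelNumber());
PHLightState newState = new PHLightState();
newState.setX(xy[0]);
newState.setY(xy[1]);
bridge.updateLightState(light, newState); // send the new state to the bridge
// later, once the light cache has been updated:
PHLightState s = light.getLastKnownLightState();
int combRGB = PHUtilities.colorFromXY(new float[] {s.getX(), s.getY()}, light.getModelNumber());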
Related
I am following this blog post and GitHub almost exactly:
Blog
Github
But when I run, take a picture and call these lines:
var outputs = new float[tfLabels.Count];
tfInterface.Feed("Placeholder", floatValues, 1, 227, 227, 3);
tfInterface.Run(new[] { "loss" });
tfInterface.Fetch("loss", outputs);
The app crashes on the .Run line and generates the error below in the output window:
04-04 17:39:12.575 E/TensorFlowInferenceInterface( 8017): Failed to
run TensorFlow inference with inputs:[Placeholder], outputs:[loss]
Unhandled Exception:
Java.Lang.IllegalArgumentException: Input to reshape is a tensor with
97556 values, but the requested shape requires a multiple of 90944
[[Node: block0_0_reshape0 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](block0_0_concat,
block0_0_reshape0/shape)]]
According to the posts I found while searching on this error, I understand that it is due to the image not exactly fitting the expected size. But in the example I am following, the image is resized to 227x227 every time and converted to float, as in these lines:
var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, 227, 227, false).Copy(Bitmap.Config.Argb8888, false);
var floatValues = new float[227 * 227 * 3];
var intValues = new int[227 * 227];
resizedBitmap.GetPixels(intValues, 0, 227, 0, 0, 227, 227);
for(int i = 0; i < intValues.Length; i++)
{
    var val = intValues[i];
    floatValues[i * 3 + 0] = ((val & 0xFF) - 104);
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - 117);
    floatValues[i * 3 + 2] = (((val >> 16) & 0xFF) - 123);
}
So, I don't understand what is causing this or how to fix it. Please help!
UPDATE: I found out the issue is with my model or my labels. I found this out by simply swapping in the model and label file from the sample/github above while leaving all my code the same. When I did this, I no longer get the error. HOWEVER, this still doesn't tell me much. The error is not very explanatory to point me in a direction of what could be wrong with my model. I assume it is the model because the labels file is simply just a text file with labels on each line. I used Custom Vision Service on Azure to create my model. It trained fine and tests just fine on the web portal. I then exported it as TensorFlow. So, I am not sure what I could have done wrong or how to fix it.
Thanks!
After no answers here and several days of searching and trial and error, I found the issue. In general, you can get this reshape error if you are feeding the model an image size other than the one it expects or is set up to receive.
The issue is that everything I had read said you must typically feed the model a 227 x 227 x 3 image. Then I started noticing that the size varies across posts: some people say 225 x 225 x 3, others say 250 x 250 x 3, and so on. I tried those sizes as well with no luck.
As you can see in my edit in the question, I did have a clue. When using somebody else's pretrained model, my code worked fine. However, when I used my custom model, which I created on the Microsoft Azure CustomVision.ai site, I got this error.
So, I decided to inspect the models to see what was different. I followed this post: Inspect a pre-trained model
When I inspected the model that works using TensorBoard, I see that the input is 227 x 227 x 3 which is what I expected. However, when I viewed my model, I noticed that it was 224 x 224 x 3! I changed my code to resize the image to that size and it works! Problem went away.
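For illustration, here is a sketch of that change using the Android Java TensorFlowInferenceInterface (the Xamarin calls in the question mirror these); bitmap, tfInterface and labelCount are assumed to come from the surrounding code, and the node names are the ones from the question:
Bitmap resized = Bitmap.createScaledBitmap(bitmap, 224, 224, false);
int[] intValues = new int[224 * 224];
float[] floatValues = new float[224 * 224 * 3];
resized.getPixels(intValues, 0, 224, 0, 0, 224, 224);
for (int i = 0; i < intValues.length; i++) {
    int val = intValues[i];
    floatValues[i * 3 + 0] = ((val & 0xFF) - 104);
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - 117);
    floatValues[i * 3 + 2] = (((val >> 16) & 0xFF) - 123);
}
float[] outputs = new float[labelCount];
tfInterface.feed("Placeholder", floatValues, 1, 224, 224, 3); // feed 224 x 224 x 3, not 227
tfInterface.run(new String[] {"loss"});
tfInterface.fetch("loss", outputs);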
So, to summarize: for some reason the Microsoft Custom Vision service generated a model that expects an image size of 224 x 224 x 3. I didn't see any documentation or setting for this, and I don't know if that number will change with each model. If you get a similar shape error, the first place I would check is the size of the image you are feeding your model versus what it expects as input. The good news is that you can check your model, even if pre-trained, using TensorBoard and the post I linked above; look at the input section, which shows the expected dimensions.
Hope this helps!
I am kind of stuck with this problem, and I know there are many questions about it on Stack Overflow, but in my case nothing gives the expected result.
The Context:
I am using Android OpenCV along with Tesseract so I can read the MRZ area of a passport. When the camera is started, I pass the input frame to an AsyncTask; the frame is processed and the MRZ area is extracted successfully. I then pass the extracted MRZ area to a function prepareForOCR(inputImage) that takes the MRZ area as a gray Mat and outputs a bitmap with the thresholded image that I will pass to Tesseract.
The problem:
The problem is that while thresholding the image, I use adaptive thresholding with blockSize = 13 and C = 15, but the result is not always the same; it depends on the lighting of the image and the general conditions under which the frame is taken.
What I have tried:
First, I am resizing the image to a specific size (871, 108) so the input image is always the same and not dependent on which phone is used.
After resizing, I try different blockSize and C values:
//toOcr contains the extracted MRZ area
Bitmap toOCRBitmap = Bitmap.createBitmap(bitmap);
Mat inputFrame = new Mat();
Mat toOcr = new Mat();
Utils.bitmapToMat(toOCRBitmap, inputFrame);
Imgproc.cvtColor(inputFrame, inputFrame, Imgproc.COLOR_BGR2GRAY);
TesseractResult lastResult = null;
for (int B = 11; B < 70; B++) {
    for (int C = 11; C < 70; C++) {
        if (IsPrime(B) && IsPrime(C)) {
            Imgproc.adaptiveThreshold(inputFrame, toOcr, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, B, C);
            Bitmap toOcrBitmap = OpenCVHelper.getBitmap(toOcr);
            TesseractResult result = TesseractInstance.extractFrame(toOcrBitmap, "ocrba");
            if (result.getMeanConfidence() > 70) {
                if (MrzParser.tryParse(result.getText())) {
                    Log.d("Main2Activity", "Best result with " + B + " : " + C);
                    return result;
                }
            }
        }
    }
}
Using the code above, the thresholded result image is a black-on-white image which gives a confidence greater than 70. I can't really post the whole image for privacy reasons, but here's a clipped one and a dummy passport one.
Using the MrzParser.tryParse function, which adds checks for each character's position and its validity within the MRZ, I am able to correct some occurrences, like a name containing an 8 instead of a B, and get a good result. But it takes a lot of time, which is normal because I am thresholding almost 255 images in the loop, plus the Tesseract call for each.
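As a hedged illustration of that kind of positional correction (MrzParser is the asker's own class; the helper below is hypothetical): in MRZ name fields only letters and the filler character are valid, so digits that OCR commonly confuses with letters can be mapped back before validation.
// Hypothetical helper: correct digit/letter confusions inside an alphabetic MRZ field
static String fixAlphaField(String field) {
    return field.replace('8', 'B')
                .replace('0', 'O')
                .replace('1', 'I')
                .replace('5', 'S');
}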
I already tried getting a list of the C and B values which occur most often, but the results differ.
The question:
Is there a way to define C and blockSize values so that the result is always the same, maybe by adding more OpenCV calls on the input image, like increasing the contrast and so on? I have searched the web for two weeks now and I can't find a viable solution; this is the only approach that gives accurate results.
You can use a clustering algorithm to cluster the pixels based on color. The characters are dark and there is a good contrast in the MRZ region, so a clustering method will most probably give you a good segmentation if you apply it to the MRZ region.
Here I demonstrate it with MRZ regions obtained from sample images that can be found on the internet.
I use color images, apply some smoothing, convert to the Lab color space, then cluster the a, b channel data using kmeans (k=2). The code is in Python but you can easily adapt it to Java. Due to the randomized nature of the kmeans algorithm, the segmented characters will have label 0 or 1. You can easily sort this out by inspecting the cluster centers: the cluster center corresponding to the characters should have a dark value in the color space you are using.
I just used the Lab color space here. You can use RGB, HSV or even GRAY and see which one is better for you.
After segmenting like this, I think you can even find good values for B and C of your adaptive-threshold using the properties of the stroke width of the characters (if you think the adaptive-threshold gives a better quality output).
import cv2
import numpy as np
im = cv2.imread('mrz1.png')
# smooth, then convert to Lab
lab = cv2.cvtColor(cv2.GaussianBlur(im, (3, 3), 1), cv2.COLOR_BGR2Lab)
# cluster on the a, b channels only
im32f = np.array(lab[:, :, 1:3], dtype=np.float32)
k = 2  # 2 clusters
term_crit = (cv2.TERM_CRITERIA_EPS, 30, 0.1)
ret, labels, centers = cv2.kmeans(im32f.reshape([im.shape[0]*im.shape[1], -1]),
                                  k, None, term_crit, 10, cv2.KMEANS_RANDOM_CENTERS)
# segmented image
labels = labels.reshape([im.shape[0], im.shape[1]]) * 255
Some results:
I want to theme an app in the same color as my CM theme, and the app only allows using a color picker. After looking in the shared preferences of the app I found something. That's the story, but not the question.
This is what I found:
<int name="color_main_window" value="-13162859" />
My question is: how can I generate this int from an RGB/hex value? What exactly is the way to go from RGB/hex to this int?
An int is a 32-bit signed integer, so -13162859 = 0xFF372695. A color is represented as an ARGB int, so
a = FF
r = 37
g = 26
b = 95
The Color class has utility methods that can convert int to rgb or argb and vice versa.
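For example, a quick sketch with android.graphics.Color, using the values from above:
int fromRgb = Color.rgb(0x37, 0x26, 0x95);   // == 0xFF372695 == -13162859
int fromHex = Color.parseColor("#372695");   // same value; alpha defaults to FF
int r = Color.red(fromRgb);                  // 0x37
int g = Color.green(fromRgb);                // 0x26
int b = Color.blue(fromRgb);                 // 0x95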
My project group and I are making a paintball-like Android app. When you shoot someone, it checks the following things:
Is the opponent's color in the cross-hair (center of screen)? (either fluor yellow or fluor orange vests)
Is there an opponent player in that direction (using the device compass)?
Is there an opponent in range in that direction (using GPS)?
The problem at the moment is with the first check. We plan to use the hue and/or HSV values, including lightness and saturation, via the Color.colorToHSV method in Android. However, we encounter problems when it is too dark (weather), and we would like some feedback on which method is most efficient to get the best possible results for our colored vests.
Based on some tests, we use the following ranges at the moment with the Color.colorToHSV method:
float[] currentHsv = new float[3];
Color.colorToHSV(Utils.findColor(myImageBitmap), currentHsv);
float hue = currentHsv[0];
float saturation = currentHsv[1];
float brightness = currentHsv[2];
// Fluor Yellow
if((hue >= 58 && hue <=128) && brightness > 0.40 && saturation <= 1.0){ // some code here }
// Fluor orange
else if((hue >= 4 && hue <=57) && brightness > 0.45 && saturation >= 0.62){ // some code here }
Does anyone know a more efficient way of doing this that works in almost any kind of weather, dark or light, inside or outside, below dark bridges or overhanging buildings, with dark or light colored clothing underneath it, etc.?
I am new to the Android NDK. I have started learning through the image processing example by ruckus and by the IBM blog. I am not getting a few lines below. Can anyone please help me understand the code snippet?
static void brightness(AndroidBitmapInfo* info, void* pixels, float brightnessValue){
    int xx, yy, red, green, blue;
    uint32_t* line;
    for(yy = 0; yy < info->height; yy++){
        line = (uint32_t*)pixels;
        for(xx = 0; xx < info->width; xx++){
            //extract the RGB values from the pixel
            red = (int) ((line[xx] & 0x00FF0000) >> 16);
            green = (int)((line[xx] & 0x0000FF00) >> 8);
            blue = (int) (line[xx] & 0x000000FF);
            //manipulate each value
            red = rgb_clamp((int)(red * brightnessValue));
            green = rgb_clamp((int)(green * brightnessValue));
            blue = rgb_clamp((int)(blue * brightnessValue));
            // set the new pixel back in
            line[xx] =
                ((red << 16) & 0x00FF0000) |
                ((green << 8) & 0x0000FF00) |
                (blue & 0x000000FF);
        }
        pixels = (char*)pixels + info->stride;
    }
}
static int rgb_clamp(int value) {
    if(value > 255) {
        return 255;
    }
    if(value < 0) {
        return 0;
    }
    return value;
}
How are the RGB values getting extracted, and what does rgb_clamp do? Why are we setting the new pixel back, and how does pixels = (char*)pixels + info->stride; work?
I am not a C/C++ guy and don't have much knowledge of image processing.
Thanks
First, let's talk about one pixel. As far as I can see, it is a composition of at least 3 channels: r, g and b, which are all stored in one uint32_t value in the format 0x00RRGGBB (32 bit / 4 channels = 8 bit per channel, and thus a value range of 0..255). So, to get the separated r, g and b values you need to mask them out, which is done in the three lines below //extract the RGB values. Take the red component as an example: with the mask 0x00FF0000 and the & operator, you set every bit to 0 except the bits of the red channel. But if you only mask them out, 0x00RRGGBB & 0x00FF0000 = 0x00RR0000, you would still get a very big number. To get a value between 0 and 255 you also have to shift the bits to the right, which is what the >> operator does. So for the red example: after applying the mask you get 0x00RR0000, and shifting this 16 bits right (>> 16) gives you 0x000000RR, which is a number between 0 and 255. The same happens with the green channel, but with an 8-bit right shift, and since the blue value already sits in the lowest bits, there is no need to shift it.
Second question: what rgb_clamp does is easy to explain. It ensures that your r, g or b value, multiplied by your brightness factor, never leaves the value range 0..255.
After the multiplication with the brightness factor, the new values are written back into memory. This happens in the reverse order of the extraction described above: this time each channel is shifted leftwards into its position, and the mask removes any bits we don't want.
After one line of your image is processed, info->stride is added to the pixel pointer, since for optimization purposes the memory is probably aligned to 32-byte boundaries and thus a single line can be longer than just the image width; the "rest" of the bytes is skipped by advancing the pointer by the full stride.
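For reference, here is the same extraction, clamping and re-packing written out for a single pixel in Java (just a sketch to mirror the C code above; the arithmetic is identical):
static int brightenPixel(int pixel, float brightnessValue) {
    int red   = (pixel >> 16) & 0xFF; // shift the red byte down, then mask it out
    int green = (pixel >> 8) & 0xFF;  // same for green
    int blue  = pixel & 0xFF;         // blue already sits in the lowest byte
    red   = clamp((int) (red * brightnessValue));
    green = clamp((int) (green * brightnessValue));
    blue  = clamp((int) (blue * brightnessValue));
    // shift each channel back to its position and combine
    return (red << 16) | (green << 8) | blue;
}
static int clamp(int value) {
    return Math.max(0, Math.min(255, value)); // keep the value in 0..255
}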
First and foremost I suggest you read the C book here: http://publications.gbdirect.co.uk/c_book/
Next I'll go through your questions.
How are the RGB values extracted
line is set to point to pixels parameter:
line = (uint32_t*)pixels;
That is, pixels is treated as an array of 32-bit unsigned integers.
Then, for each row and column of the bitmap, the RGB values are extracted using a combination of bitwise AND (&) and right bit shifts (>>).
Let's see how you get red:
red = (int) ((line[xx] & 0x00FF0000) >> 16);
Here we take the current pixel, then AND it with the mask 0x00FF0000; this keeps bits 23-16 of the value. Using RGB code #123456 as an example, we are left with 0x00120000 in the red variable. But it's still in the 23-16 bit position, so we shift right by 16 bits to bring it down to 0x00000012.
We do this for the green and blue values, adjusting the AND mask and number of bits to shift right.
More information on binary arithmetic can be found here: http://publications.gbdirect.co.uk/c_book/chapter2/expressions_and_arithmetic.html
What does rgb_clamp do
This function simply ensures the red, green and blue values stay within the range 0 to 255.
If the parameter to rgb_clamp is -20, it will return 0, which will be used to set the RGB value. If the parameter to rgb_clamp is 270, it will return 255.
RGB values for each colour must not exceed 255 or go below 0; 255 is the brightest and 0 the darkest value.
Why are we setting pixel back
It appears we are changing the brightness of the pixel and setting the value back, ready to be displayed.
How does pixels = (char*)pixels + info->stride; work?
Without knowing the structure of the info variable of type AndroidBitmapInfo, I would guess info->stride refers to the width of one row of the bitmap in bytes, so that on the next loop iteration line points at the next row.
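As a small illustration with made-up numbers: for an ARGB_8888 bitmap that is 100 pixels wide, one row holds 100 * 4 = 400 bytes of pixel data, but the buffer may pad each row, say to a stride of 416 bytes. Adding the stride (rather than width * 4) to the row pointer therefore always lands on the start of the next row:
// Hypothetical helper: byte offset of pixel (x, y) when each row is 'stride' bytes long
static int pixelOffset(int x, int y, int stride) {
    return y * stride + x * 4; // stride >= width * 4 because of row padding
}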
Hope that helps