I'm using CameraX to capture images on Android.
I want to implement a feature that analyzes a captured image's brightness/darkness level - whether the image is too dark or too bright.
Is there some elegant way of doing this? Maybe some lightweight library meant exactly for this?
My current approach is a code snippet found somewhere on Stack Overflow:
public static boolean isDark(Bitmap bitmap) {
    boolean dark = false;
    // Consider the image "dark" if at least 45% of its pixels are dark.
    float darkThreshold = bitmap.getWidth() * bitmap.getHeight() * 0.45f;
    int darkPixels = 0;
    int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : pixels) {
        int r = Color.red(pixel);
        int g = Color.green(pixel);
        int b = Color.blue(pixel);
        // Rec. 601 luma
        double luminance = 0.299 * r + 0.587 * g + 0.114 * b;
        if (luminance < 150) {
            darkPixels++;
        }
    }
    if (darkPixels >= darkThreshold) {
        dark = true;
    }
    return dark;
}
A second approach is to use the SensorManager with Sensor.TYPE_LIGHT. Any more ideas/approaches?
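For reference, a rough sketch of the sensor approach I have in mind (the 10-lux threshold is just a placeholder, not a tuned value):
// Sketch: read ambient illuminance (in lux) from the hardware light sensor.
SensorManager sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
Sensor lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT); // may be null if the device has no light sensor
SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        float lux = event.values[0];  // ambient light level in lux
        boolean tooDark = lux < 10f;  // placeholder threshold
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};
sensorManager.registerListener(listener, lightSensor, SensorManager.SENSOR_DELAY_NORMAL);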
A more efficient way would be to calculate the luminance without converting the output to a Bitmap.
private final ImageAnalysis.Analyzer mAnalyzer = image -> {
    // Plane 0 of a YUV_420_888 image is the Y (luma) plane, so its bytes
    // are already per-pixel luminance values.
    byte[] bytes = new byte[image.getPlanes()[0].getBuffer().remaining()];
    image.getPlanes()[0].getBuffer().get(bytes);
    int total = 0;
    for (byte value : bytes) {
        total += value & 0xFF;
    }
    if (bytes.length != 0) {
        final int luminance = total / bytes.length;
        // luminance is the value you need (0 = black, 255 = white).
    }
    image.close();
};
Source: CameraX test app source code
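For completeness, here is a sketch of how this analyzer might be wired into a CameraX session. The context, lifecycleOwner and cameraProvider names are assumptions from a typical CameraX setup, not part of the test app code:
// Sketch: bind the analyzer to an ImageAnalysis use case.
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context), mAnalyzer);
cameraProvider.bindToLifecycle(lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, imageAnalysis);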
In order to align the intensity values of two grayscale images (as a first step for further processing) I wrote a Java method that:
1) converts the bitmaps of the two images into two int[] arrays containing the bitmaps' intensities (I just take the red component here, since it's grayscale, i.e. r = g = b),
public static int[] bmpToData(Bitmap bmp) {
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int anzpixel = width * height;
    int[] pixels = new int[anzpixel];
    int[] data = new int[anzpixel];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < anzpixel; i++) {
        int p = pixels[i];
        int r = (p & 0xff0000) >> 16;
        //int g = (p & 0xff00) >> 8;
        //int b = p & 0xff;
        data[i] = r;
    }
    return data;
}
2) aligns the cumulated intensity distribution of Bitmap 2 to that of Bitmap 1,
// Aligns the intensity distribution of a moving grayscale picture (given by
// int[] data2) to the intensity distribution of a fixed reference picture
// (given by int[] data1).
public static int[] histMatch(int[] data1, int[] data2) {
    int anzpixel = data1.length;
    int[] histogram_fixed = new int[256];
    int[] histogram_moving = new int[256];
    int[] cumhist_fixed = new int[256];
    int[] cumhist_moving = new int[256];
    int i = 0;
    int j = 0;
    // read intensities of fixed and moving into the histograms
    for (int n = 0; n < anzpixel; n++) {
        histogram_fixed[data1[n]]++;
        histogram_moving[data2[n]]++;
    }
    // calc cumulated distributions
    cumhist_fixed[0] = histogram_fixed[0];
    cumhist_moving[0] = histogram_moving[0];
    for (i = 1; i < 256; ++i) {
        cumhist_fixed[i] = cumhist_fixed[i - 1] + histogram_fixed[i];
        cumhist_moving[i] = cumhist_moving[i - 1] + histogram_moving[i];
    }
    // look-up table lut[]. For each quantile i of the moving picture, search
    // the value j of the fixed picture where the quantile matches.
    int[] lut = new int[anzpixel];
    j = 0;
    for (i = 0; i < 256; ++i) {
        while (cumhist_fixed[j] < cumhist_moving[i]) {
            j++;
        }
        // check whether the next-lower intensity is even closer, and if so, take that value
        if ((j != 0) && ((cumhist_moving[i] - cumhist_fixed[j - 1]) < (cumhist_fixed[j] - cumhist_moving[i]))) {
            lut[i] = j - 1;
        } else {
            lut[i] = j;
        }
    }
    // apply the lut[] to the moving picture
    for (int n = 0; n < anzpixel; n++) {
        data2[n] = lut[data2[n]];
    }
    return data2;
}
3) converts the int[] arrays back to Bitmap.
public static Bitmap dataToBitmap(int[] data, int width, int height) {
    int index = 0;
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            index = y * width + x;
            int c = data[index];
            bmp.setPixel(x, y, Color.rgb(c, c, c));
        }
    }
    return bmp;
}
While the core procedure 2) is straightforward and fast, the conversion steps 1) and 3) are rather inefficient. It would be more than cool to do the whole thing in RenderScript. But honestly, I am completely lost in doing so because of the missing documentation, and while there are many impressive examples of what RenderScript COULD do, I don't see a way to benefit from these possibilities (no books, no docs). Any advice is highly appreciated!
As a starting point, use Android Studio to "Import Sample..." and select Basic Render Script. This will give you a working project that we will now modify.
First, let's add more Allocation references to MainActivity. We will use them to communicate image data, histograms and the LUT between Java and Renderscript.
private Allocation mInAllocation;
private Allocation mInAllocation2;
private Allocation[] mOutAllocations;
private Allocation mHistogramAllocation;
private Allocation mHistogramAllocation2;
private Allocation mLUTAllocation;
Then, in onCreate(), load another image, which you will also need to add to /res/drawable/.
mBitmapIn2 = loadBitmap(R.drawable.cat_480x400);
In createScript() create additional allocations:
mInAllocation2 = Allocation.createFromBitmap(mRS, mBitmapIn2);
mHistogramAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
mHistogramAllocation2 = Allocation.createSized(mRS, Element.U32(mRS), 256);
mLUTAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
And now the main part (in RenderScriptTask):
/*
 * Invoke histogram kernel for both images
 */
mScript.bind_histogram(mHistogramAllocation);
mScript.forEach_compute_histogram(mInAllocation);
mScript.bind_histogram(mHistogramAllocation2);
mScript.forEach_compute_histogram(mInAllocation2);

/*
 * Variables copied verbatim from your code.
 */
int[] histogram_fixed = new int[256];
int[] histogram_moving = new int[256];
int[] cumhist_fixed = new int[256];
int[] cumhist_moving = new int[256];
int i = 0;
int j = 0;

// copy computed histograms to the Java side
mHistogramAllocation.copyTo(histogram_fixed);
mHistogramAllocation2.copyTo(histogram_moving);

// your code again...
// calc cumulated distributions
cumhist_fixed[0] = histogram_fixed[0];
cumhist_moving[0] = histogram_moving[0];
for (i = 1; i < 256; ++i) {
    cumhist_fixed[i] = cumhist_fixed[i - 1] + histogram_fixed[i];
    cumhist_moving[i] = cumhist_moving[i - 1] + histogram_moving[i];
}
// look-up table lut[]. For each quantile i of the moving picture, search
// the value j of the fixed picture where the quantile matches.
int[] lut = new int[256];
j = 0;
for (i = 0; i < 256; ++i) {
    while (cumhist_fixed[j] < cumhist_moving[i]) {
        j++;
    }
    // check whether the next-lower intensity is even closer, and if so, take that value
    if ((j != 0) && ((cumhist_moving[i] - cumhist_fixed[j - 1]) < (cumhist_fixed[j] - cumhist_moving[i]))) {
        lut[i] = j - 1;
    } else {
        lut[i] = j;
    }
}
// copy the LUT to the Renderscript side
mLUTAllocation.copyFrom(lut);
mScript.bind_LUT(mLUTAllocation);

// Apply the LUT to the destination image
mScript.forEach_apply_histogram(mInAllocation2, mInAllocation2);

/*
 * Copy to bitmap and invalidate image view
 */
//mOutAllocations[index].copyTo(mBitmapsOut[index]);
// copy back to a Bitmap in preparation for viewing the results
mInAllocation2.copyTo(mBitmapsOut[index]);
A couple of notes:
In your part of the code I also fixed the LUT allocation size - only 256 entries are needed.
As you can see, I left the computation of the cumulative histograms and the LUT on the Java side. These are rather difficult to parallelize efficiently due to data dependencies and the small scale of the calculations, but given the latter, I don't think it's a problem.
Finally, the Renderscript code. The only non-obvious part is the use of rsAtomicInc() to increment values in the histogram bins - this is necessary because potentially many threads attempt to increment the same bin concurrently.
#pragma version(1)
#pragma rs java_package_name(com.example.android.basicrenderscript)
#pragma rs_fp_relaxed

int32_t *histogram;
int32_t *LUT;

void __attribute__((kernel)) compute_histogram(uchar4 in)
{
    volatile int32_t *addr = &histogram[in.r];
    rsAtomicInc(addr);
}

uchar4 __attribute__((kernel)) apply_histogram(uchar4 in)
{
    uchar val = LUT[in.r];
    uchar4 result;
    result.r = result.g = result.b = val;
    result.a = in.a;
    return result;
}
I want to convert an image from the Android camera to HSI format using OpenCV.
The problem occurs when I use the following method:
private Mat rgb2hsi(Mat rgbFrame) {
    Mat hsiFrame = rgbFrame.clone();
    for (int i = 0; i < rgbFrame.rows(); ++i) {
        for (int j = 0; j < rgbFrame.cols(); ++j) {
            double[] rgb = rgbFrame.get(i, j);
            Log.d(MAINTAG, "rgbFrame.get(i, j) array size = " + rgb.length);
            double colorR = rgb[0];
            double colorG = rgb[1];
            double colorB = rgb[2];
            double minRGB = min(colorR, colorG, colorB);
            double colorI = (colorR + colorG + colorB) / 3;
            double colorS = 0.0;
            if (colorI > 0) colorS = 1.0 - (minRGB / colorI);
            double colorH;
            double const1 = colorR - (colorG / 2) - (colorB / 2);
            double const2 = Math.sqrt(Math.pow(colorR, 2) + Math.pow(colorG, 2) + Math.pow(colorB, 2)
                    - (colorR * colorG) - (colorR * colorB) - (colorG * colorB));
            colorH = Math.toDegrees(Math.acos(const1 / const2));
            if (colorB > colorG) colorH = 360 - colorH;
            double[] hsi = {colorH, colorS, colorI};
            hsiFrame.put(i, j, hsi);
        }
    }
    return hsiFrame;
}
It shows an error
java.lang.UnsupportedOperationException: Provided data element number (3) should be multiple of the Mat channels count (4)
I searched for a while to figure out the cause of this error and found that I am putting in an array of size 3 instead of 4, as discussed here:
Android convert byte array from Camera API to color Mat object openCV
I wonder what type of image is received from the Android camera. Why do I get an array of size 4?
How do I convert an image received from the Android camera to HSI and preview it on the screen?
The following is the overridden method onCameraFrame:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat outputFrame = inputFrame.rgba();
    /* Get RGB color from the pixel at [index_row, index_column] */
    int index_row = 0;
    int index_column = 0;
    final double[] mRgb_pixel = outputFrame.get(index_row, index_column);
    /* Show the result */
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            int r = (int) mRgb_pixel[0];
            int g = (int) mRgb_pixel[1];
            int b = (int) mRgb_pixel[2];
            /* Set RGB color */
            mRred_textview.setText("Red\n" + Double.toString(mRgb_pixel[0]));
            mGreen_textview.setText("Green\n" + Double.toString(mRgb_pixel[1]));
            mBlue_textview.setText("Blue\n" + Double.toString(mRgb_pixel[2]));
            mColor_textview.setBackgroundColor(Color.rgb(r, g, b));
        }
    });

    if (mPreviewType == PreviewType.GB) {
        outputFrame.convertTo(outputFrame, CvType.CV_64FC3);
        return getGBColor(rgb2hsi(outputFrame));
    } else if (mPreviewType == PreviewType.HSI) {
        outputFrame.convertTo(outputFrame, CvType.CV_64FC3);
        return rgb2hsi(outputFrame);
    } else {
        return outputFrame;
    }
}
My MainActivity implements CameraBridgeViewBase.CvCameraViewListener2
[Edit]
I think the reason why it returns an array of size 4 is that the frame is in RGBA format, not RGB format.
Therefore, how do I convert RGBA to HSI and preview the frame on the screen?
The problem here is that your hsiFrame is a 4-channel image while your hsi array has only 3 values. You need to add one term corresponding to the alpha channel to your hsi array. Making either of the following changes should work for you:
1. double[] hsi = {colorH, colorS, colorI, rgb[3]};
2. Mat hsiFrame = new Mat(rgbFrame.size(), CvType.CV_8UC3);
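To make option 1 concrete, here is a minimal sketch of the end of the inner loop in rgb2hsi() with the alpha value carried through (variable names are taken from the question's code):
// rgb has 4 elements (R, G, B, A) because the Mat originates from inputFrame.rgba()
double[] hsi = {colorH, colorS, colorI, rgb[3]}; // keep alpha as the 4th channel
hsiFrame.put(i, j, hsi);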
Hope this helps.
I'm trying to implement camera preview image data processing using the camera2 API, as proposed here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks via onImageAvailableListener, but for further processing I need to obtain a bitmap from the YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you please suggest how to convert android.media.Image (YUV_420_888) to Bitmap, or is there perhaps a better way of listening for preview frames?
You can do this using the built-in Renderscript intrinsic, ScriptIntrinsicYuvToRGB. Code taken from Camera2 api Imageformat.yuv_420_888 results on rotated image:
@Override
public void onImageAvailable(ImageReader reader)
{
    // Get the YUV data
    final Image image = reader.acquireLatestImage();
    final ByteBuffer yuvBytes = this.imageToByteBuffer(image);

    // Convert YUV to RGB
    final RenderScript rs = RenderScript.create(this.mContext);
    final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
    final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
    final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
    allocationYuv.copyFrom(yuvBytes.array());
    ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    scriptYuvToRgb.setInput(allocationYuv);
    scriptYuvToRgb.forEach(allocationRgb);
    allocationRgb.copyTo(bitmap);

    // ... use the bitmap here (display or further processing) before releasing it

    // Release
    bitmap.recycle();
    allocationYuv.destroy();
    allocationRgb.destroy();
    rs.destroy();
    image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
    // Packs the three YUV_420_888 planes into a single NV21-style buffer:
    // the full Y plane first, then interleaved V/U samples.
    final Rect crop = image.getCropRect();
    final int width = crop.width();
    final int height = crop.height();

    final Image.Plane[] planes = image.getPlanes();
    final byte[] rowData = new byte[planes[0].getRowStride()];
    final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
    final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);

    int channelOffset = 0;
    int outputStride = 0;

    for (int planeIndex = 0; planeIndex < 3; planeIndex++)
    {
        if (planeIndex == 0)
        {
            channelOffset = 0;
            outputStride = 1;
        }
        else if (planeIndex == 1)
        {
            // U plane: interleaved into the odd positions after the Y plane
            channelOffset = width * height + 1;
            outputStride = 2;
        }
        else if (planeIndex == 2)
        {
            // V plane: interleaved into the even positions after the Y plane
            channelOffset = width * height;
            outputStride = 2;
        }

        final ByteBuffer buffer = planes[planeIndex].getBuffer();
        final int rowStride = planes[planeIndex].getRowStride();
        final int pixelStride = planes[planeIndex].getPixelStride();

        final int shift = (planeIndex == 0) ? 0 : 1;
        final int widthShifted = width >> shift;
        final int heightShifted = height >> shift;

        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));

        for (int row = 0; row < heightShifted; row++)
        {
            final int length;

            if (pixelStride == 1 && outputStride == 1)
            {
                length = widthShifted;
                buffer.get(output.array(), channelOffset, length);
                channelOffset += length;
            }
            else
            {
                length = (widthShifted - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);

                for (int col = 0; col < widthShifted; col++)
                {
                    output.array()[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }

            if (row < heightShifted - 1)
            {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
    }

    return output;
}
For a simpler solution see my implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.Image as input and creates three RenderScript allocations based on the y-, u- and v-planes. It follows the YUV_420_888 logic as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take these as three byte[], i.e. U8 allocations. The y-allocation has size width * height bytes, while the u- and v-allocations have size width * height / 4 bytes each, reflecting the fact that each u-byte covers 4 pixels (ditto each v-byte).
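As a rough sketch of that sizing (this is an outline of the approach, not the full linked implementation; rs and the yBytes/uBytes/vBytes arrays copied out of the three image planes are assumed to exist):
// One U8 allocation per plane; U and V are subsampled 2x2 in YUV_420_888,
// hence width * height / 4 bytes each.
Allocation yAlloc = Allocation.createSized(rs, Element.U8(rs), width * height);
Allocation uAlloc = Allocation.createSized(rs, Element.U8(rs), width * height / 4);
Allocation vAlloc = Allocation.createSized(rs, Element.U8(rs), width * height / 4);
yAlloc.copyFrom(yBytes); // byte[] taken from image.getPlanes()[0]
uAlloc.copyFrom(uBytes); // ... from image.getPlanes()[1]
vAlloc.copyFrom(vBytes); // ... from image.getPlanes()[2]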
I wrote some code for this: it previews the YUV data and converts it to JPEG data, which I can then save as a Bitmap, byte[], or anything else (see the Allocation class).
And SDK document says: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), Then obtain the Surface with getSurface()."
Here is the code, hope it will help you: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
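A minimal sketch of what that documentation paragraph describes; width and height here are assumptions and would come from getOutputSizes(Allocation.class):
// Sketch: a YUV Allocation that receives camera frames directly through its
// Surface, avoiding a byte[] round trip.
RenderScript rs = RenderScript.create(context);
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width)
        .setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation yuvAlloc = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface surface = yuvAlloc.getSurface(); // hand this Surface to the camera session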
I need to do color detection (ball tracking) for augmented reality. I want to use Qualcomm's Vuforia SDK for AR and OpenCV for image processing. I found a color detection algorithm that works on the desktop (OpenCV, C++) and tried to apply it to FrameMarkers (a Vuforia sample), but with no success yet.
I get a frame from Vuforia (I can only get RGB565 or GRAYSCALE frames), convert it to an OpenCV Mat object, and apply the same steps as in the desktop solution. But I get an error on the HSV conversion. Below is the code.
//HSV range for orange objects
const int H_MIN = 7;
const int S_MIN = 186;
const int V_MIN = 60;
const int H_MAX = 256;
const int S_MAX = 256;
const int V_MAX = 157;
const bool shouldUseMorphologicalOperators = true;
const int FRAME_WIDTH = 240;
const int FRAME_HEIGHT = 320;
const int MAX_NUM_OBJECTS = 50;
const int MIN_OBJECT_AREA = 20 * 20;
const int MAX_OBJECT_AREA = 320 * 240 / 1.5;
ObjectTracker::ObjectTracker()
{
    x = y = 0;
}

ObjectTracker::~ObjectTracker()
{
}
void ObjectTracker::track(QCAR::Frame frame)
{
    int nImages = frame.getNumImages();
    for (int i = 0; i < nImages; i++)
    {
        const QCAR::Image *image = frame.getImage(i);
        if (image->getFormat() == QCAR::RGB565)
        {
            Mat RGB565 = Mat(image->getHeight(), image->getWidth(), CV_8UC2, (unsigned char *) image->getPixels());
            Mat HSV;
            // I got an error here
            cvtColor(RGB565, HSV, CV_RGB2HSV);
            Mat thresholdedImage;
            inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), thresholdedImage);
            if (shouldUseMorphologicalOperators)
                applyMorphologicalOperator(thresholdedImage);
            trackFilteredObject(x, y, thresholdedImage, RGB565);
            //waitKey(30);
        }
    }
}
void ObjectTracker::applyMorphologicalOperator(Mat &thresholdedImage)
{
    // create structuring elements that will be used to "dilate" and "erode" the image;
    // the erode element chosen here is a 3px by 3px rectangle
    Mat erodeElement = getStructuringElement(MORPH_RECT, Size(3, 3));
    // dilate with a larger element to make sure the object is nicely visible
    Mat dilateElement = getStructuringElement(MORPH_RECT, Size(8, 8));
    erode(thresholdedImage, thresholdedImage, erodeElement);
    erode(thresholdedImage, thresholdedImage, erodeElement);
    dilate(thresholdedImage, thresholdedImage, dilateElement);
    dilate(thresholdedImage, thresholdedImage, dilateElement);
}
void ObjectTracker::trackFilteredObject(int &x, int &y, Mat &thresholdedImage, Mat &cameraFeed)
{
    Mat temp;
    thresholdedImage.copyTo(temp);
    // Two vectors needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    // find contours of the filtered image using the OpenCV findContours function
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    // use the moments method to find the filtered object
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0)
    {
        int nObjects = hierarchy.size();
        // if the number of objects is greater than MAX_NUM_OBJECTS, we have a noisy filter
        if (nObjects < MAX_NUM_OBJECTS)
        {
            for (int index = 0; index >= 0; index = hierarchy[index][0])
            {
                Moments moment = moments((cv::Mat) contours[index]);
                double area = moment.m00;
                // if the area is less than 20 px by 20 px, it is probably just noise;
                // if the area is about 2/3 of the image size, it's probably just a bad filter.
                // we only want the object with the largest area, so we save a reference area
                // each iteration and compare it to the area in the next iteration.
                if (area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea)
                {
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                }
                else
                    objectFound = false;
            }
            // let the user know an object was found
            if (objectFound == true)
            {
                LOG("Object found");
                highlightObject(x, y, cameraFeed);
            }
        }
        else
        {
            LOG("Too much noise");
        }
    }
    else
        LOG("Object not found");
}
void ObjectTracker::highlightObject(int x,int y,Mat &frame)
{
}
How do I do a proper conversion from RGB565 to HSV color space?
Convert it to RGB888 first using some code from this SO Question.
If you have RGB888 your conversion to HSV should work fine.
EDIT: As mentioned in the comments, in OpenCV you can do it like this:
Use cvtColor(BGR565, RGB, CV_BGR5652BGR) to convert from RGB565 to RGB, and then cvtColor(RGB, HSV, CV_RGB2HSV) to convert from RGB to HSV.
EDIT 2: It seems that you have to use BGR5652BGR, since there is no RGB5652RGB.
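For reference, a minimal sketch of that two-step conversion using OpenCV's Java bindings (the C++ constants are analogous; rgb565 is assumed to be a CV_8UC2 Mat):
Mat bgr = new Mat();
Mat hsv = new Mat();
Imgproc.cvtColor(rgb565, bgr, Imgproc.COLOR_BGR5652BGR); // CV_8UC2 -> CV_8UC3
Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);       // BGR -> HSV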
To get the RGB values of one image, I used the following code snippet:
int[] pix = new int[picw * pich];
bitmap.getPixels(pix, 0, picw, 0, 0, picw, pich);
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        int R = (pix[index] >> 16) & 0xff; // bitwise shifting
        int G = (pix[index] >> 8) & 0xff;
        int B = pix[index] & 0xff;
        // R, G, B - Red, Green, Blue
        // to restore the values after RGB modification, use the next statement
        pix[index] = 0xff000000 | (R << 16) | (G << 8) | B;
    }
}
I want to compare two images. I know that comparing pixel values directly would be expensive, and I also looked into the OpenCV library, but it doesn't fit my requirement.
Is there any algorithm to compare images using RGB values in Android? Or is there any other method to compare RGB values?
Thanks,
I'm not sure what your requirements are, but if all you want to do is compare the (RGB) color palettes of two images, you might want to use the PaletteFactory methods from Apache Commons Imaging (fka "Sanselan"):
The PaletteFactory methods build up collections (int[] and List<>) which can then be iterated over. I'm not sure just what kind of comparison you need to do, but a fairly simple case, using e.g. makeExactRgbPaletteSimple(), would be:
final File img1 = new File("path/to/image_1.ext");
final File img2 = new File("path/to/image_2.ext");
final PaletteFactory pf = new PaletteFactory();
final int MAX_COLORS = 256;
final Palette p1 = pf.makeExactRgbPaletteSimple(img1, MAX_COLORS);
final Palette p2 = pf.makeExactRgbPaletteSimple(img2, MAX_COLORS);
int matchPercent = 0;
// Palette objects are pre-sorted, afaik
if ((p1 != null) && (p2 != null)) {
    final ArrayList<Integer> matches = new ArrayList<Integer>(Math.max(p1.length(), p2.length()));
    if (p1.length() > p2.length()) {
        for (int i = 0; i < p1.length(); i++) {
            final int c1 = p1.getEntry(i);
            final int c2 = p2.getPaletteIndex(c1);
            if (c2 != -1) {
                matches.add(c1);
            }
        }
        matchPercent = (int) ((float) matches.size() / (float) p1.length() * 100);
    } else {
        for (int i = 0; i < p2.length(); i++) {
            final int c1 = p2.getEntry(i);
            final int c2 = p1.getPaletteIndex(c1);
            if (c2 != -1) {
                matches.add(c1);
            }
        }
        matchPercent = (int) ((float) matches.size() / (float) p2.length() * 100);
    }
}
This is just a minimal example which may or may not compile and is almost certainly not what you're looking for in terms of comparison logic.
Basically what it does is check if each member of p1 is also a member of p2, and if so, adds it to matches. Hopefully the logic is correct, no guarantees. matchPercent is the percentage of colors which exist in both Palettes.
This is probably not the comparison method you want. It is just a simple example.
You will definitely need to play around with the 2nd parameter to makeExactRgbPaletteSimple(), int max, as I chose 256 arbitrarily - remember, the method will (annoyingly, imo) return null if max is too small.
I would suggest building from source as the repos have not been updated for quite some time. The project is definitely not mature, but it is fairly small, reasonably fast for medium-sized images, and pure Java.
Hope this helps.