I am working on a project in which I need to be able to:
get a frame of a video
convert it to CMYK
try to render that frame
Do you guys have any suggestions?
To get the frames I think I can use ffmpeg.
I think the person who wrote
Android processing a video, YCrCb frames to video
can help me, but I'm open to any suggestions.
I appreciate all the help I can get, thank you
Assuming you know how to convert YUV to RGB, the remaining problem is RGB to CMYK. This can be very simple (C = 1 - R, M = 1 - G, Y = 1 - B) or very complex (a 3D LUT implementing a complex gamut mapping based on tone curves, ink coverages, etc.). You can assume the video has standard Rec. 709 gamma, but you'll need to better define what kind of CMYK you're looking for and whether you want linear values or some other mapping.
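If the simple mapping is enough, here is a minimal sketch in Java of the naive formula plus the usual black-generation step. It assumes r, g, b have already been recovered from the YUV frame and normalized to [0, 1]; cmykFromRgb is a made-up helper name, not part of any library.

// Naive RGB -> CMYK: no gamut mapping, no ICC profile, no ink limits.
static float[] cmykFromRgb(float r, float g, float b) {
    float c = 1f - r;
    float m = 1f - g;
    float y = 1f - b;
    // Optional black generation: pull the common grey component into K.
    float k = Math.min(c, Math.min(m, y));
    if (k >= 1f) {
        return new float[] {0f, 0f, 0f, 1f};   // pure black
    }
    return new float[] {(c - k) / (1f - k), (m - k) / (1f - k), (y - k) / (1f - k), k};
}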
Related
I want to render a waveform that shows the frequency data of an audio track.
I have data at 150 points/second.
I have rendered it using canvas, showing a line for each data value, so I show 150 lines for 1 second of the song. It displays correctly, but when we scroll the view, it lags.
Is there any library that can render the data points using OpenGL, canvas, or any other method that stays smooth while scrolling?
These are two waves. Each line represents one data point; the minimum value is zero and the maximum value is the highest value in the data set.
How can I render this wave in OpenGL or with any other library? It lags while scrolling when rendered with canvas.
Maybe you could show an example of what it looks like. How do you create the lines? Are the points scattered? Do you have to connect them, or do you have a fixed point?
Usually in OpenGL ES the process looks like this:
- read in your audio data
- sort the points so that OpenGL knows how to connect them
- upload them to your vertex shader
I would really recommend this tutorial. I don't know your OpenGL background, but it is a perfect place to start.
Your application shouldn't be too complicated, and the tutorial should offer you enough information for the case where you want to visualize each second with 150 points.
Just a small overview
Learn how to set up a window with OpenGL
You described a 2D application:
- define x values as e.g. -75 to 75
- define y values as your data
- define lines as (x, y) data sets
Then, to draw, use:
glBegin(GL_LINES);
glVertex2f(x, yLow);   // x value and low y value of each line
glVertex2f(x, yHigh);  // x value and high y value of each line
glEnd();
If you are targeting mobile graphics you need shaders, because OpenGL ES only supports rendering through GLSL shaders; the fixed-function glBegin/glEnd calls above are desktop-only (a GLES 2.0 sketch follows below).
define your OpenGL camera!
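For reference, a minimal GLES 2.0 sketch of the approach above, in Java: it packs two vertices per data point (baseline and amplitude) into a buffer and draws them all with a single GL_LINES call. This is only an illustration of the idea, not code from the asker or the answerer; WaveformLines, buildWaveformVertices, drawWaveform, aPosition and uMvp are made-up names, and shader compilation, program linking and the MVP matrix are assumed to be set up elsewhere.

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class WaveformLines {

    private static final String VERTEX_SHADER =
            "attribute vec2 aPosition;\n" +
            "uniform mat4 uMvp;\n" +
            "void main() { gl_Position = uMvp * vec4(aPosition, 0.0, 1.0); }\n";

    private static final String FRAGMENT_SHADER =
            "precision mediump float;\n" +
            "void main() { gl_FragColor = vec4(0.2, 0.7, 1.0, 1.0); }\n";

    // Two vertices (bottom and top) per data point, packed as x0, 0, x0, y0, x1, 0, x1, y1, ...
    public static FloatBuffer buildWaveformVertices(float[] data, float maxValue) {
        float[] verts = new float[data.length * 4];
        for (int i = 0; i < data.length; i++) {
            float x = -75f + i;              // e.g. x from -75 to 74 for 150 points
            float y = data[i] / maxValue;    // normalise amplitude to [0, 1]
            verts[i * 4]     = x;  verts[i * 4 + 1] = 0f;  // line start (baseline)
            verts[i * 4 + 2] = x;  verts[i * 4 + 3] = y;   // line end (amplitude)
        }
        FloatBuffer fb = ByteBuffer.allocateDirect(verts.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(verts).position(0);
        return fb;
    }

    // Draw all lines in one glDrawArrays call; vertexCount is data.length * 2.
    public static void drawWaveform(int program, int vertexCount, FloatBuffer vertices, float[] mvp) {
        GLES20.glUseProgram(program);
        int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
        int uMvp = GLES20.glGetUniformLocation(program, "uMvp");
        GLES20.glUniformMatrix4fv(uMvp, 1, false, mvp, 0);
        GLES20.glEnableVertexAttribArray(aPosition);
        GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 0, vertices);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, vertexCount);
        GLES20.glDisableVertexAttribArray(aPosition);
    }
}

Drawing a whole second as one vertex buffer (one glDrawArrays call for 150 lines) is usually far cheaper than issuing 150 separate canvas draw calls, which is why this tends to stay smooth while scrolling.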
I have been working on touchless biometrics. I want to extract fingerprints from an image captured by a normal mobile camera. I have achieved a good image, but it is not good enough to be verified by the government.
The lines need to be thicker and better connected.
What have I tried so far?
Below are the steps I took to extract a fingerprint from the image (a rough OpenCV sketch of these steps is shown after the list). The result is decent, but lines are disconnected or merged with neighbouring lines.
Changed contrast and brightness to 0.8 and 25 respectively
Converted from RGB to Gray
Applied histogram equalization
Normalized image
Applied adaptive (Gaussian C) threshold with a block size of 15 and a constant of 2
Smoothed the image to get rid of edges
Changed contrast and brightness again to 1.7 and -40 respectively
Applied Gaussian Blur
Added a weighted blend (alpha = 0.5, beta = -0.5 and gamma = 0)
Applied binary threshold (threshold = 10)
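For readers who want to reproduce this, the steps above could be chained roughly like this with the OpenCV Java bindings on Android. This is only my reconstruction of the list, not the asker's actual code: src is a placeholder for the input image, the smoothing call (medianBlur here), the Gaussian kernel size, and which two intermediates feed the weighted blend are guesses.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

Mat enhanced = new Mat(), gray = new Mat(), thresh = new Mat(),
    blurred = new Mat(), combined = new Mat(), result = new Mat();

// 1. Contrast/brightness: dst = src * 0.8 + 25
src.convertTo(enhanced, -1, 0.8, 25);
// 2. RGB -> grayscale
Imgproc.cvtColor(enhanced, gray, Imgproc.COLOR_RGB2GRAY);
// 3. Histogram equalization
Imgproc.equalizeHist(gray, gray);
// 4. Normalization to the full 0..255 range
Core.normalize(gray, gray, 0, 255, Core.NORM_MINMAX);
// 5. Adaptive Gaussian threshold, block size 15, constant 2
Imgproc.adaptiveThreshold(gray, thresh, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 15, 2);
// 6. Smooth to get rid of hard edges (exact call not stated; median blur as a stand-in)
Imgproc.medianBlur(thresh, thresh, 3);
// 7. Contrast/brightness again: dst = src * 1.7 - 40
thresh.convertTo(thresh, -1, 1.7, -40);
// 8. Gaussian blur
Imgproc.GaussianBlur(thresh, blurred, new Size(5, 5), 0);
// 9. Weighted blend of the two intermediates (alpha = 0.5, beta = -0.5, gamma = 0)
Core.addWeighted(thresh, 0.5, blurred, -0.5, 0, combined);
// 10. Final binary threshold at 10
Imgproc.threshold(combined, result, 10, 255, Imgproc.THRESH_BINARY);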
The original image would be like this (I no longer have the original of the processed image).
And the result is the image attached (processed image).
I need the lines to be more connected and better separated from other lines so that ridge endings and ridge bifurcations can easily be identified.
I also came across this link, but due to my very limited background in image processing, I am unable to understand it. Any guidance regarding this link would also help me a lot.
I am using OpenCV on Android.
Any help is highly appreciated.
I saw a video on YouTube about ray tracing in video game rendering, where I could see that the creator of the Q2VKPT engine uses a "temporal filter" across multiple frames to get a clean image (ASVGF).
https://www.youtube.com/watch?v=tbsudki8Sro
from 26:20 to 28:40
Maybe, if you capture three different images of the fingerprint and then use a similar approach (combining them), you could get a picture with less noise that works better.
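A minimal sketch of that idea with the OpenCV Java bindings, assuming the captures are already aligned with each other (registration is not handled here); averageFrames is just a made-up helper name:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import java.util.List;

// Average several aligned captures to reduce per-frame sensor noise.
static Mat averageFrames(List<Mat> frames) {
    Mat acc = Mat.zeros(frames.get(0).size(), CvType.CV_32FC(frames.get(0).channels()));
    Mat tmp = new Mat();
    for (Mat frame : frames) {
        frame.convertTo(tmp, acc.type());   // accumulate in float to avoid overflow
        Core.add(acc, tmp, acc);
    }
    Mat avg = new Mat();
    // Divide by N while converting back to the original depth.
    acc.convertTo(avg, frames.get(0).type(), 1.0 / frames.size());
    return avg;
}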
In OpenCV HSV images, the scale of Hue depends on the type of image. From this answer,
For HSV images it depends on the image type (see OpenCV doc):
8-bit images: H in [0, 180], S, V in [0, 255]
32-bit images: H in [0, 360], S, V in [0, 1]
I have two questions:
How do I know which type of image or frame (whether 8-bit or 32-bit) the camera is detecting, e.g. how would I know in this class in the Color Blob Detection sample application?
Is there a way to force the camera to take a 32-bit image? I really want the second scale, and if there is no way, I will have to find a way to convert the first scale to the second one, because Android's Color.HSVToColor method follows the second scale.
1) It uses CvType.CV_8U, so that means 8-bit unsigned.
2) Probably not; you would have to dive into the part of the code that actually controls the camera and grabs a frame to find that out. However, you can simply convert an 8-bit image to 32-bit by using the convertTo function (a small sketch follows).
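For example (only a sketch; hsv8, row and col are placeholder names, and hsv8 is assumed to be an 8-bit HSV Mat produced by Imgproc.cvtColor with COLOR_RGB2HSV). Note that convertTo only changes the depth, the values keep their 8-bit ranges, so to feed Color.HSVToColor you still have to rescale:

import org.opencv.core.CvType;
import org.opencv.core.Mat;

// Change the depth of the whole Mat to 32-bit float (values keep their 8-bit ranges):
Mat hsv32 = new Mat();
hsv8.convertTo(hsv32, CvType.CV_32FC3);

// Rescale a single 8-bit HSV pixel to the ranges Color.HSVToColor expects
// (H in [0, 360), S and V in [0, 1]):
double[] p = hsv8.get(row, col);        // {H in [0,180], S in [0,255], V in [0,255]}
float[] androidHsv = new float[] {
        (float) (p[0] * 2.0),           // 0..180  ->  0..360
        (float) (p[1] / 255.0),         // 0..255  ->  0..1
        (float) (p[2] / 255.0)
};
int argb = android.graphics.Color.HSVToColor(androidHsv);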
SOLUTION:
I had to train my own data to try it with the OCR. That seems to work well, but I don't know why the trained data from arturaugusto doesn't work for me =(
https://github.com/adri1992/Tesseract_sevenSegmentsLetsGoDigital.git
With my trained data, to get good results from the OCR, I went through these phases (all done with OpenCV):
First, convert the image to black & white
Second, apply a Gaussian blur to the image
Third, apply a threshold filter to the image
With this, the seven-segment digits are recognized (a rough sketch of these phases is shown below).
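For illustration, those three phases could look roughly like this with the OpenCV Java bindings; input is a placeholder for the source image, and the kernel size and the use of Otsu for the threshold are my own guesses, not values taken from the answer:

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

Mat gray = new Mat(), blurred = new Mat(), binary = new Mat();

// 1. Convert the image to grayscale ("black & white")
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGB2GRAY);
// 2. Gaussian blur to soften noise around the segments
Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);
// 3. Threshold to a clean binary image before handing it to Tesseract
Imgproc.threshold(blurred, binary, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);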
QUESTION:
I'm trying to do OCR through Tesseract on Android, and I'm testing the app with this image (via Text detection on Seven Segment Display via Tesseract OCR):
I'm using the data trained by arturaugusto (https://github.com/arturaugusto/display_ocr), but the OCR gives this wrong result:
884288
The zero is recognized as an eight, and I don't know why.
I'm applying to the image a Gaussian Blur and a threshold filter, via OpenCV, and the image processed is this:
Is there any other trained data, or do you know any way to solve the problem?
Try using erode to fill the gaps between the segments.
I think the problem is that Tesseract can't handle segmented fonts well.
With OpenCV-Python, I use cv2.erode(display, kernel, iterations=erosion_iters) to solve this problem.
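Since the question is about Android, the same step with the OpenCV Java bindings would look roughly like this. The 3x3 kernel and the iteration count are values to tune rather than values from the answer, and display/erosionIters just mirror the Python names; assuming dark digits on a light background, erosion grows the dark strokes and closes the gaps between segments:

import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Mat eroded = new Mat();
// Anchor (-1, -1) means the kernel centre; run several iterations until the segments touch.
Imgproc.erode(display, eroded, kernel, new Point(-1, -1), erosionIters);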
I want to get, in RGB (or anything I can convert to RGB later), the middle color value of a YUV image. So the color of the centre XY pixel.
There's nice code out there to convert the whole pixel array from an Android camera to RGB... but this seems a bit wasteful if I just want the center pixel.
Normally I'd just look at the loop and figure out where it's processing the middle pixel... but I don't understand YUV or the conversion code well enough to figure out where the data I need is.
Any help or pointers?
Cheers
-Thomas
Using this guide here:
stackoverflow.com/questions/5272388/extract-black-and-white-image-from-android-cameras-nv21-format
It explains the process fairly well.
However, it seems I was having a different problem than I expected, but it's different enough to repost as a separate question.
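In case it helps anyone else, here is a sketch of pulling just the centre pixel out of an NV21 buffer (the default Android camera preview format) and converting it to RGB, instead of converting the whole frame. rgbAtCenter is a made-up helper name, and the coefficients are the commonly used BT.601-style approximation:

// NV21 layout: width*height Y bytes, followed by interleaved V,U bytes at half resolution.
static int rgbAtCenter(byte[] nv21, int width, int height) {
    int x = width / 2;
    int y = height / 2;

    int luma = nv21[y * width + x] & 0xFF;
    int uvIndex = width * height + (y / 2) * width + (x & ~1);
    int v = (nv21[uvIndex] & 0xFF) - 128;
    int u = (nv21[uvIndex + 1] & 0xFF) - 128;

    int r = clamp((int) (luma + 1.402f * v));
    int g = clamp((int) (luma - 0.344f * u - 0.714f * v));
    int b = clamp((int) (luma + 1.772f * u));
    return 0xFF000000 | (r << 16) | (g << 8) | b;   // packed ARGB, like android.graphics.Color
}

static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}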