I have landmarks from MediaPipe, and I have an eyebrow image (a 512 x 512 PNG file). How can I plot the eyebrow at those particular landmarks (pixel coordinates)?
Any idea would be appreciated.
I'm currently trying to plot the eyebrow image at those landmarks with OpenCV for Android (Java). Please let me know if there is a good approach or idea.
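For the overlay itself, the usual approach is alpha compositing: scale (or warp) the 512 x 512 PNG to fit the span of the eyebrow landmarks, then blend it into the camera frame using the PNG's alpha channel. Below is a minimal sketch of the blending step in Python/NumPy; the per-pixel math carries over directly to OpenCV's Java API with Mat submats. It assumes the overlay has already been resized to the landmark region.

```python
import numpy as np

def overlay_rgba(frame, overlay, x, y):
    """Alpha-blend an RGBA overlay onto an RGB frame with its top-left at (x, y)."""
    h, w = overlay.shape[:2]
    fh, fw = frame.shape[:2]
    # Clip the overlay region to the frame boundaries.
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, fw), min(y + h, fh)
    if x0 >= x1 or y0 >= y1:
        return frame  # overlay lies entirely outside the frame
    ov = overlay[y0 - y:y1 - y, x0 - x:x1 - x]
    alpha = ov[:, :, 3:4].astype(np.float32) / 255.0  # per-pixel opacity
    roi = frame[y0:y1, x0:x1].astype(np.float32)
    blended = alpha * ov[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    frame[y0:y1, x0:x1] = blended.astype(np.uint8)
    return frame

# Example: paste a fully opaque 2x2 white patch at landmark (5, 5).
frame = np.zeros((10, 10, 3), dtype=np.uint8)
patch = np.full((2, 2, 4), 255, dtype=np.uint8)
overlay_rgba(frame, patch, 5, 5)
```

Alternatively, skip pixel-level work entirely and draw the PNG on a transparent overlay View positioned at the landmark coordinates.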
I am using the ARCore Android SDK. It provides the 468 3D face vertices relative to the center pose of the face, with units in meters.
But when I try to do something else on the corresponding CPU image via frame.acquireCameraImage(), I need the 2D face landmarks.
Could anyone point out whether there is an existing API to get the 2D face landmarks ([x, y] in pixels, like the face landmark TFLite model used by MediaPipe) for the current CPU image?
Thanks a lot!
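As far as I know there is no single call that returns the mesh in pixels; the usual recipe is to transform each vertex from the face's local frame into camera coordinates using the face and camera poses, then project with the pinhole intrinsics that ARCore exposes for the CPU image via Camera.getImageIntrinsics(). A hedged sketch of the projection step in Python/NumPy, assuming the points are already in camera coordinates with +Z in front of the camera (ARCore's camera frame looks down -Z in the OpenGL convention, so flip signs accordingly):

```python
import numpy as np

def project_to_pixels(points_cam, fx, fy, cx, cy):
    """Project 3D points in camera coordinates (meters) to 2D pixel coordinates
    with the pinhole model: u = fx * X/Z + cx, v = fy * Y/Z + cy."""
    pts = np.asarray(points_cam, dtype=np.float64)
    z = pts[:, 2]
    u = fx * pts[:, 0] / z + cx
    v = fy * pts[:, 1] / z + cy
    return np.stack([u, v], axis=1)

# A point 1 m ahead on the optical axis lands on the principal point;
# the intrinsics here are made-up values, not real ARCore output.
px = project_to_pixels([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]],
                       fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```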
Our company wants to develop an Android app to analyse a 2D camera's video frames. By integrating the OpenNI SDK, the app can get human body keypoints. With those keypoints, the app finds the hand keypoints and the hand's location in the frame. But after our research, we have found some problems:
OpenNI's repository on GitHub has not been maintained for 6 years.
Can we use OpenNI to get human body keypoints from a 2D picture or frame? If so, how?
If OpenNI can't, is there any other way to solve this on Android?
I am developing an Android app in which I want to track a 2D image/a piece of paper, analyze what the user writes/draws on it, and correctly display different 3D content on it.
I am working on the part that tracks the paper and displays simple 3D content, which can actually be achieved using SDKs like Vuforia and Wikitude. However, I am not using them for several reasons:
There are other analyses to be done on the image, e.g. analysis of the drawings.
The image may not be rich in features, e.g. paper with just lines or a few figures.
SDKs like Vuforia may not expose underlying functionality such as feature detection to developers.
Anyway, right now I only want to achieve the following result.
I have a piece of paper, probably with lines and figures on it. You can think of it as the kind of paper for children to practice writing or drawing on. Example: https://i.pinimg.com/236x/89/3a/80/893a80336adab4120ff197010cd7f6a1--dr-seuss-crafts-notebook-paper.jpg
I point my phone (the camera) at the paper while capturing the video frames.
I want to register the paper, track it and display a simple wire-frame cube on it.
I have been messing around with OpenCV, and have tried the following approaches.
Using homography:
Detect features in the 2D image (ORB, FAST etc.).
Describe the features (ORB).
Do the same in each video frame.
Match the features and find good matches.
Find the homography, use the homography and successfully draw a rectangle around the image in the video frame.
**I did not know how to use the homography decomposition (into rotations, translations and normals) to display a 3D object like a cube.**
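On that first problem: for a planar target you can recover a full pose directly from the homography, given the camera matrix K. Up to a common scale, the columns of K⁻¹H are the first two columns of R and the translation; the third rotation column is the cross product of the first two. A sketch with made-up intrinsics follows (OpenCV also provides decomposeHomographyMat, which handles the same problem and returns the candidate solutions):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t from a homography of a planar
    target, assuming H maps plane points (z = 0, world units) to pixels."""
    M = np.linalg.inv(K) @ H
    # Scale so the two rotation columns have (on average) unit norm.
    lam = 1.0 / ((np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1])) / 2.0)
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)
    t = lam * M[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize R via SVD, since noise breaks orthogonality.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t

# Round-trip check: build H = K [r1 r2 t] from a known pose and recover it.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R_true = np.eye(3)
t_true = np.array([0.1, -0.2, 2.0])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R, t = pose_from_homography(H, K)
```

With R and t in hand, drawing the cube is the same projection step used in the solvePnP approach.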
Using solvePnP:
Steps 1 to 4 are the same as the above.
Convert all 2D good match points in the image to 3D by assuming the image lies on the world's x-y plane, thus all having z = 0.
Use solvePnP with those 3D points and the 2D points in the current frame to retrieve the rotation and translation vectors, convert the rotation vector to a matrix with Rodrigues() in OpenCV, and build the projection matrix from [R|t].
Construct the 3D points of a cube.
Project them into the 2D image using the projection and the camera matrix.
**The issue is that the cube jumps around**, which I believe is due to the feature detection and matching not being stable and accurate, thus affecting solvePnP.
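For reference, the projection pipeline in that list looks like the sketch below (a NumPy stand-in for cv2.Rodrigues and cv2.projectPoints, with made-up intrinsics, ignoring lens distortion). As for the jumping: smoothing rvec/tvec across frames (e.g. an exponential moving average or a Kalman filter) and rejecting frames with too few inlier matches are the usual mitigations.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=np.float64) / theta
    Kx = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * (Kx @ Kx)

def project_points(pts3d, rvec, tvec, K):
    """Project 3D points into pixels: homogeneous uv = K (R X + t)."""
    R = rodrigues(rvec)
    cam = (R @ np.asarray(pts3d, dtype=np.float64).T).T + tvec
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

# Project a unit cube sitting on the z = 0 plane, 3 units in front of the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, -1)],
                dtype=np.float64)
uv = project_points(cube, rvec=np.zeros(3), tvec=np.array([0.0, 0.0, 3.0]), K=K)
```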
Using contours or corners:
I simply grayscale the camera frame, Gaussian-smooth it, dilate or erode it and try to find the biggest 4-edge contour so that I can track it using solvePnP etc. This, unsurprisingly, doesn't give good results, or I'm just doing it wrong.
So my questions are:
How can I solve the two bold problems mentioned above?
More generally, given the type of image target I want to track, what would be the optimal algorithm/solution/technique to track it?
What are the things that I can improve/change in my way of solving the problem?
Thank you very much.
I am looking for some kind of auto trim/crop functionality in Android that detects an object in a captured image and creates a square box around the object for cropping. I have found face detection APIs in Android, but my problem is that the captured images are documents/pages, not human faces. How can I detect documents or any other objects in a captured picture?
I am thinking of algorithms for object detection or some kind of color detection. Are there any APIs or libraries available for this?
I have tried the following links but have not found the desired output.
Find and Crop relevant image area automatically (Java / Android)
https://github.com/biokys/cropimage
Any small hint would also help me a lot. Thanks in advance.
That depends on what you intend to capture and crop, but there are many ways to achieve this. As littleimp suggested, you should use OpenCV for this.
I suggest you use an edge-detection algorithm, such as Sobel, and then transform the image with, for example, a threshold function that turns it into a binary (black-and-white) image. Afterwards, you can search the image for the geometric shape you want, using what's suggested here. Filter for the object you want by calculating the detected figure's area and aspect ratio.
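A rough sketch of that Sobel-plus-threshold step, written as a naive convolution in Python/NumPy purely for illustration (in OpenCV for Android you would use Imgproc.Sobel and Imgproc.threshold, then Imgproc.findContours on the binary image):

```python
import numpy as np

def sobel_magnitude(gray):
    """Approximate gradient magnitude with 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)  # horizontal gradient
            gy = np.sum(patch * ky)  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# Synthetic image: dark left half, bright right half -> one strong vertical edge.
img = np.zeros((8, 8), dtype=np.float64)
img[:, 4:] = 255.0
edges = sobel_magnitude(img)
binary = (edges > 100).astype(np.uint8)  # threshold to a binary edge map
```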
It would help a lot to know what you're trying to detect in an image. The methods I described were the ones I used for my specific case, which was developing an algorithm to detect and crop the license plate from a given vehicle image. It works close to perfectly, and it was all done using OpenCV.
If you have anything else you'd like to know, don't hesitate to ask. I'm watching this post :)
Use OpenCV for android.
You can use the Watershed (Imgproc.watershed) function to segment the image into foreground and background. Then you can crop around the foreground (which will be the document).
The watershed algorithm needs some markers pre-defining the regions. You can for example assume the document to be in the middle of the image, so create a marked region in the middle of the image to get the watershed algorithm started.
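A sketch of that marker setup (in Python/NumPy for brevity; on Android the markers go into a 32-bit signed Mat that you pass to Imgproc.watershed). The 25% margin for the assumed document region is an arbitrary choice, not anything prescribed by the algorithm:

```python
import numpy as np

def make_markers(h, w, margin=0.25):
    """Build a watershed seed image: border pixels -> label 1 (background),
    a centered rectangle -> label 2 (document), everything else 0 (unknown)."""
    markers = np.zeros((h, w), dtype=np.int32)
    markers[0, :] = markers[-1, :] = 1   # top and bottom border = background
    markers[:, 0] = markers[:, -1] = 1   # left and right border = background
    y0, y1 = int(h * margin), int(h * (1 - margin))
    x0, x1 = int(w * margin), int(w * (1 - margin))
    markers[y0:y1, x0:x1] = 2            # assumed document region = foreground
    return markers

m = make_markers(100, 100)
```

The watershed call then grows labels 1 and 2 into the unknown region, and the pixels labeled 2 give you the crop.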
I'm currently looking into creating an app that takes a picture using the phone's camera. The app will be used by clients who are pre-op for breast augmentation. The user will be able to take a picture of themselves, then place two spheres over the picture and manipulate the size of the spheres with a slider. This will give the user an idea of which cup size they would like. I understand that there is something in Photoshop that has the desired effect, and this is what I'm trying to replicate, but on the bitmap image.
Are there any suitable libraries out there that I can use? I'm looking into OpenGL, as that is for 3D graphics.
Any ideas and input would be appreciated.
Thanks, Mat.
Java does have its own 2D and 3D APIs. Hopefully you can use these to achieve your goal.
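For the two-sphere overlay specifically, no 3D engine is strictly needed: you can composite semi-transparent circles directly onto the bitmap's pixels, with the radius driven by the slider value. A sketch of that compositing in Python/NumPy (on Android you would instead draw on a Canvas over the Bitmap with Canvas.drawCircle and a Paint whose alpha is reduced):

```python
import numpy as np

def draw_circle(img, cx, cy, radius, color, alpha=0.5):
    """Composite a semi-transparent filled circle onto an RGB image array.
    `radius` would be driven by the slider value."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    img = img.astype(np.float64)  # work on a float copy
    img[mask] = (1 - alpha) * img[mask] + alpha * np.asarray(color, dtype=np.float64)
    return img.astype(np.uint8)

# Draw one translucent circle on a black "photo"; the second sphere is
# just another call with different center and radius.
photo = np.zeros((100, 100, 3), dtype=np.uint8)
out = draw_circle(photo, cx=50, cy=50, radius=20, color=(255, 200, 180), alpha=0.5)
```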