Capture image without saving - android

Based on this thread, is there a way to process an image from the camera in QML without saving it?
Starting from the example in the docs, the capture() function saves the image to the Pictures location.
What I would like to achieve is to process the camera image every second using onImageCaptured, but I don't want to save it to the drive.
I've tried to implement a cleanup operation using the onImageSaved signal, but it affects onImageCaptured too.

As explained in this answer you can bridge C++ and QML via the mediaObject. That can be done via objectName (as in the linked answer) or by using a dedicated Q_PROPERTY (more on that later). In either case you should end up with code like this:
QObject *source; // QML camera pointer obtained as described above
QMediaObject *cameraRef = qvariant_cast<QMediaObject*>(source->property("mediaObject"));
Once you have a handle to the camera, use it as the source of a QVideoProbe object, i.e.
QVideoProbe *probe = new QVideoProbe;
probe->setSource(cameraRef);
Connect the videoFrameProbed signal to an appropriate slot, i.e.
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
and that's it: you can now process your frames inside the processFrame function. An implementation of such a function looks like this:
void YourClass::processFrame(QVideoFrame frame)
{
    QVideoFrame cFrame(frame);
    cFrame.map(QAbstractVideoBuffer::ReadOnly);
    int w {cFrame.width()};
    int h {cFrame.height()};
    QImage::Format f;
    if((f = QVideoFrame::imageFormatFromPixelFormat(cFrame.pixelFormat())) == QImage::Format_Invalid)
    {
        QImage image(cFrame.size(), QImage::Format_ARGB32);
        // NV21 to ARGB32 conversion!!
        //
        // DECODING HAPPENS HERE on "image"
    }
    else
    {
        QImage image(cFrame.bits(), w, h, cFrame.bytesPerLine(), f);
        //
        // DECODING HAPPENS HERE on "image"
    }
    cFrame.unmap();
}
Two important implementation details here:
Android devices use a YUV format which is not currently supported by QImage and which must be converted by hand. I've made the strong assumption here that all the invalid formats are YUV. That would be better handled via #ifdef conditionals on the current OS.
The decoding can be quite costly, so you can skip frames (simply add a counter to this method, as in the sketch below) or offload the work to a dedicated thread. That also depends on the pace at which frames arrive. Reducing their size, e.g. taking only a portion of the QImage, can also greatly improve performance.
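A minimal sketch of the frame-skipping idea, assuming an int member m_frameCount is added to YourClass (the name is illustrative, not part of the original answer):
void YourClass::processFrame(QVideoFrame frame)
{
    // Only decode every 30th frame; the rest are dropped cheaply.
    if (++m_frameCount % 30 != 0)
        return;

    QVideoFrame cFrame(frame);
    cFrame.map(QAbstractVideoBuffer::ReadOnly);
    // ... decoding as shown above ...
    cFrame.unmap();
}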
For this use case I would avoid the objectName approach for fetching the mediaObject altogether and instead register a new type so that the Q_PROPERTY approach can be used. I'm thinking about something along the lines of this:
class FrameAnalyzer : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QObject* source READ source WRITE setSource)

    QObject *m_source;   // added for the sake of the READ function
    QVideoProbe probe;
    // ...
public:
    QObject *source() const;
    bool setSource(QObject *source);

public slots:
    void processFrame(QVideoFrame frame);
};
where the setSource is simply:
bool FrameAnalyzer::setSource(QObject *source)
{
    m_source = source;
    return probe.setSource(qvariant_cast<QMediaObject*>(source->property("mediaObject")));
}
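For completeness, the matching READ accessor declared by the Q_PROPERTY is trivial (a minimal sketch; the original answer leaves it out):
QObject *FrameAnalyzer::source() const
{
    return m_source;
}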
Once registered as usual, i.e.
qmlRegisterType<FrameAnalyzer>("FrameAnalyzer", 1, 0, "FrameAnalyzer");
you can directly set the source property in QML as follows:
// other imports
import FrameAnalyzer 1.0
Item {
    Camera {
        id: camera
        // camera stuff here
        Component.onCompleted: analyzer.source = camera
    }

    FrameAnalyzer {
        id: analyzer
    }
}
A great advantage of this approach is readability and the clearer coupling between the Camera code and the processing code. That comes at the expense of (slightly) more implementation effort.

Related

Why does Android use pinImages to upload textures of mutable bitmaps to the GPU?

While reading the Android source code for SkiaRecordingCanvas, I found that if a bitmap is marked as mutable, namely !isImmutable() == true, the bitmap is cached on the GPU by the function SkiaPipeline::pinImages, which calls the Skia interface SkImage_pinAsTexture. But after I commented out these lines in pinImages, recompiled, and pushed the build to the phone, I found that GIFs still displayed normally. The only difference is that the texture upload is delayed from prepareTree to renderFrameImpl. So why does Android use this method to cache the textures of mutable bitmaps?
bool SkiaPipeline::pinImages(std::vector<SkImage*>& mutableImages) {
    // for (SkImage* image : mutableImages) {
    //     if (SkImage_pinAsTexture(image, mRenderThread.getGrContext())) {
    //         mPinnedImages.emplace_back(sk_ref_sp(image));
    //     } else {
    //         return false;
    //     }
    // }
    return true;
}
The pinning and unpinning are designed to work even if the underlying bitmap changes.
Pinning causes the image to be eagerly uploaded to the GPU; unpinning allows the GPU-related memory to be freed. See the docs at https://github.com/google/skia/blob/dc3332a07906872f37ec3a592db7831178886527/src/core/SkImagePriv.h#L62-L86
By commenting out this code, you're making it so that the image won't (or at least, may not) get uploaded to the GPU until closer to when it is drawn, which may lead to more work being done at unfortunate times, especially on lower-end devices. It's also not clear to me what the consequence of calling unpin without a matching pin would be.

Android Camera2 take picture while processing frames

I am using the Camera2 API to create a Camera component that can scan barcodes and has the ability to take pictures during scanning. It is kinda working, but the preview is flickering - it seems like previous frames and sometimes green frames are interrupting the real-time preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as a new output and target for CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. Flickering disappears when I remove the JPEG reader's surface from outputs (not passing this into createCaptureSession).
There's quite a lot of code, so I created a gist: click - I tried to strip out all the irrelevant code.
Is the device you're testing on a LEGACY-level device?
If so, any captures targeting a JPEG output may be much slower since they can run a precapture sequence, and may briefly pause preview as well.
But it should not cause green frames, unless there's a device-level bug.
If anyone ever struggles with this: there is a table in the docs showing that if 3 targets are specified, the YUV ImageReader can only use images up to the preview size (at most 1920x1080). Reducing this helped!
Yes, you can. Assuming that you configure your preview to feed the ImageReader with YUV frames (you could also put JPEG there, check it out), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
You can then process those frames inside your OnImageAvailableListener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image mImage = reader.acquireNextImage();
    if (mImage == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(mImage);
        mImage.close();
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    }
}

Use Camera2 to Preview and Process camera data

Question
I want to do something similar to what's done on the Camera2Basic sample, that is:
Previewing images from the camera using a TextureView
Processing images from the camera using a ImageReader
With a few differences regarding 2:
I'm only interested on the gray channel (brightness) from the images to be processed. Their dimensions should be around 1000 x 1000 pixels (and not the highest resolution available)
When an image to be processed is available, a generic process(Image) method will be called instead of saving images to disk. What this method does is out of the scope of this question, but it takes around 50 ms to return
The image data should be processed periodically (around 10 FPS, but speed is not critical) instead of eventually
How can I accomplish this using the Camera2 API?
Observations
I've changed the way I'm creating the ImageReader instance, selecting smaller dimensions and a different format (YUV_420_888 instead of JPEG). The Y plane will be accessed in order to get the brightness data. Is there a more efficient format (since I'm simply ignoring the U and V planes)?
Both TextureView and ImageReader surfaces should be filled periodically, but at different rates. Since there can be only one repeating CameraRequest on a CameraCaptureSession (which can be set by calling setRepeatingRequest()), am I supposed to manually call capture() periodically (e.g. call setRepeatingRequest() with the preview request and call capture() periodically with the process request)?
Can the performance be improved by sending reprocessed requests to obtain the images to be processed from the preview images? If so, how can I do it?
I don't know how to help you with the gray channel; I suggest you study the planes of the YUV image format and try to get it from there.
Also check all the values that you can set in the CaptureRequest.Builder; maybe you can achieve your objective using SENSOR_TEST_PATTERN_MODE, COLOR_CORRECTION_MODE, or BLACK_LEVEL_LOCK. You can check all the info in the Android documentation.
To process just one of every 10 frames, simply discard the other frames in your process() method with something like:
if (result.getFrameNumber() % 10 != 0) return;
Finally, remember to close all the images that you receive in your ImageReader OnImageAvailableListener to avoid memory leaks and improve performance :P
@Override
public void onImageAvailable(ImageReader imageReader) {
    Image image = null;
    try {
        image = imageReader.acquireNextImage();
        // Do whatever you want with your Image
        if (image != null) {
            image.close();
        }
    } catch (IllegalStateException iae) {
        if (image != null) {
            image.close();
        }
    }
}
Hope that helps; let me know if I can help you with anything else!

YUV (NV21) to BGR conversion on mobile devices (Native Code)

I'm developing a mobile application that runs on Android and iOS. It's capable of real-time processing of a video stream. On Android I get the camera's preview video stream via android.hardware.Camera.PreviewCallback.onPreviewFrame. I decided to use the NV21 format, since it should be supported by all Android devices, whereas RGB isn't (or only RGB565).
For my algorithms, which are mostly for pattern recognition, I need grayscale images as well as color information. Grayscale is not a problem, but the color conversion from NV21 to BGR takes way too long.
As described, I use the following method to capture the images:
In the app, I override the onPreviewFrame handler of the Camera. This is done in CameraPreviewFrameHandler.java:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        AvCore.getInstance().onFrame(data, _prevWidth, _prevHeight, AvStreamEncoding.NV21);
    } catch (NativeException e) {
        e.printStackTrace();
    }
}
The onFrame function then calls a native function which fetches data from the Java objects as local references. This is then converted to an unsigned char* byte stream and passed to the following C++ function, which uses OpenCV to convert from NV21 to BGR:
void CoreManager::api_onFrame(unsigned char* rImageData, avStreamEncoding_t vImageFormat, int vWidth, int vHeight)
{
    // rImageData is a local JNI reference to the Java byte array "data" from onPreviewFrame
    Mat bgrMat;   // Holds the converted image
    Mat origImg;  // Holds the original image (OpenCV wrapper around rImageData)
    double ts;    // for profiling

    switch(vImageFormat)
    {
    // other formats
    case NV21:
        origImg = Mat(vHeight + vHeight/2, vWidth, CV_8UC1, rImageData); // fast, only creates a header around rImageData
        bgrMat = Mat(vHeight, vWidth, CV_8UC3); // Prepare Mat for target image
        ts = avUtils::gettime(); // PROFILING START
        cvtColor(origImg, bgrMat, CV_YUV2BGR_NV21);
        _onFrameBGRConversion.push_back(avUtils::gettime()-ts); // PROFILING END
        break;
    }

    [...APPLICATION LOGIC...]
}
As one might conclude from the comments in the code, I already profiled the conversion, and it turned out that it takes ~30 ms on my Nexus 4, which is unacceptably long for such a "trivial" pre-processing step. (My profiling methods are double-checked and work properly for real-time measurement.)
Now I'm desperately trying to find a faster implementation of this color conversion from NV21 to BGR. This is what I've already done:
1. Adopted the code "convertYUV420_NV21toRGB8888" to C++, as provided in this topic (a multiple of the conversion time)
2. Modified the code from 1 to use only integer operations (double the conversion time of the OpenCV solution)
3. Browsed through a couple of other implementations, all with similar conversion times
4. Checked the OpenCV implementation; they use a lot of bit-shifting to get performance. I guess I'm not able to do better on my own.
Do you have suggestions / know of good implementations, or even a completely different way to work around this problem? I somehow need to capture RGB/BGR frames from the Android camera, and it should work on as many Android devices as possible.
Thanks for your replies!
Thanks for your replies!
Did you try libyuv? I used it in the past; if you compile it with NEON support, it uses assembly code optimized for ARM processors. You can start from there and further optimize for your particular situation.
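As a rough illustration of that suggestion (not part of the original answer), an NV21-to-ARGB conversion with libyuv could look roughly like this. The names rImageData, vWidth and vHeight follow the question's code, and note that libyuv's "ARGB" is byte-ordered B-G-R-A in memory, so a final channel swap or drop of the alpha channel may still be needed to get a 3-channel BGR Mat:
#include <libyuv.h>
#include <opencv2/core.hpp>

cv::Mat nv21ToArgbWithLibyuv(unsigned char* rImageData, int vWidth, int vHeight)
{
    const uint8_t* src_y  = rImageData;                     // Y plane
    const uint8_t* src_vu = rImageData + vWidth * vHeight;  // interleaved VU plane (NV21)
    cv::Mat argb(vHeight, vWidth, CV_8UC4);

    // NEON-accelerated when libyuv is built with NEON support
    libyuv::NV21ToARGB(src_y, vWidth,
                       src_vu, vWidth,
                       argb.data, static_cast<int>(argb.step),
                       vWidth, vHeight);
    return argb;
}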

Keeping track of features in successive frames in OpenCV

I have written a program that uses goodFeaturesToTrack and calcOpticalFlowPyrLK to track features from frame to frame. The program works reliably and can estimate the optical flow in the preview image on an Android camera relative to the previous frame. Here are some snippets that describe the general process:
goodFeaturesToTrack(grayFrame, corners, MAX_CORNERS, quality_level,
min_distance, cv::noArray(), eig_block_size, use_harris, 0.06);
...
if (first_time == true) {
    first_time = false;
    old_corners = corners;
    safe_corners = corners;
    mLastImage = grayFrame;
} else {
    if (old_corners.size() > 0 && corners.size() > 0) {
        safe_corners = corners;
        calcOpticalFlowPyrLK(mLastImage, grayFrame, old_corners, corners,
                status, error, Size(21, 21), 5,
                TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30,
                        0.01));
    } else {
        //no features found, so let's start over.
        first_time = true;
    }
}
The code above runs over and over again in a loop where a new preview frame is grabbed at each iteration. safe_corners, old_corners, and corners are all of type vector<Point2f>. The above code works great.
Now, for each feature that I've identified, I'd like to be able to assign some information about the feature... number of times found, maybe a descriptor of the feature, who knows... My first approach to doing this was:
class Feature: public Point2f {
private:
    //things about a feature that I want to track
public:
    //getters and fetchers and of course:
    Feature() {
        Point2f();
    }
    Feature(float a, float b) {
        Point2f(a,b);
    }
};
Next, all of my output arrays are changed from vector<Point2f> to vector<Feature>, which in my own twisted world ought to work because Feature is defined to be a descendant class of Point2f. Polymorphism applied, I can't imagine any good reason why this should puke on me unless I did something else horribly wrong.
Here's the error message I get.
OpenCV Error: Assertion failed (func != 0) in void cv::Mat::convertTo(cv::OutputArray, int, double, double) const, file /home/reports/ci/slave50-SDK/opencv/modules/core/src/convert.cpp, line 1095
So, my question to the forum is: do the OpenCV functions truly require a Point2f vector, or will a descendant class of Point2f work just as well? The next step would be to get gdb working with mobile code on the Android phone and see more precisely where it crashes, but I don't want to go down that road if my approach is fundamentally flawed.
Alternatively, if a feature is tracked across multiple frames using the approach above, does the address in memory for each point change?
Thanks in advance.
The short answer is YES, OpenCV functions do require std::vector<cv::Point2f> as arguments.
Note that the vectors contain cv::Point2f objects themselves, not pointers to cv::Point2f, so there is no polymorphic behavior.
Additionally, having your Feature inherit from cv::Point2f is probably not an ideal solution. It would be simpler to use composition in this case, not to mention modeling the correct relationship (Feature has-a cv::Point2f).
Relying on an object's location in memory is also probably not a good idea. Rather, read up on your data structure of choice.
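A hedged sketch of that composition idea (illustrative names, not from the answer): Feature holds a cv::Point2f plus whatever metadata you need, while the OpenCV calls keep operating on a plain vector<cv::Point2f> kept in step with the metadata:
#include <opencv2/core.hpp>
#include <vector>

struct Feature {
    cv::Point2f point;  // position handed to goodFeaturesToTrack / calcOpticalFlowPyrLK
    int timesSeen = 0;  // example of per-feature bookkeeping
};

// corners[i] and features[i] describe the same feature; only `corners`
// is ever passed to the OpenCV functions.
std::vector<cv::Point2f> corners;
std::vector<Feature> features;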
I'm just getting into OpenCV myself, so I can't address that aspect of the code, but your problem might be a bug in your code that results in an uninitialized base class (or at least one not initialized as you might expect). Your code should look like this:
Feature()
    : Point2f()
{
}

Feature(float a, float b)
    : Point2f(a, b)
{
}
Your implementation creates a temporary Point2f object in each constructor. Those temporary objects do not initialize the Feature object's Point2f base class, and they are destroyed at the end of the constructor.
