I want to open an Android tablet's camera and get the frame data at the C level. I will then modify the data, and doing that in C should be efficient.
Right now I'm considering the V4L2 C API, but the V4L2 open function needs the camera's device name, such as '/dev/video0', and I can't find anything like that in my tablet's /dev folder. Besides, I'm not sure V4L2 is the right solution here at all.
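For reference, this is roughly what I expected the V4L2 path to look like (just a sketch; the /dev/video0 path is an assumption, and on a stock, non-rooted Android device such a node is usually missing or not readable by apps, which is exactly my problem):

// Minimal V4L2 open/capability query, assuming a node like /dev/video0
// exists and is readable -- usually NOT the case on stock Android.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

int main() {
    const char *dev = "/dev/video0";   // assumed device node
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror("open");                // fails on most Android devices
        return 1;
    }
    v4l2_capability cap = {};
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
        printf("driver: %s, card: %s\n", (const char*)cap.driver, (const char*)cap.card);
    }
    close(fd);
    return 0;
}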
Does anyone know anything about this?
On my device, "OpenCV for Android" does not provide the required performance in either 'native' or 'java' mode: it gives 2 FPS at 1920x1080, while the Java MediaRecorder can record 1920x1080 at 15 FPS.
I'm trying to solve it using the code from the Android Open Source Project that the stock Camera application uses:
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
You can always build a JNI method around the Java camera classes to get access from C (a minimal sketch of that approach is given below). Another way could be using OpenCV for Android: OpenCV4Android.
This gives you an interface to the camera, but as far as I remember, it currently has no support for Android 4.3+.
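Regarding the JNI route, a minimal sketch could look like the following. Note that the class and method names (com.example.camera.FrameProcessor.processFrame) and the idea of forwarding the NV21 byte[] from onPreviewFrame are placeholders of mine, not an existing API:

// Hypothetical JNI bridge: the Java side's Camera.PreviewCallback.onPreviewFrame()
// hands the NV21 byte[] to this native function for processing.
// Class/method names are placeholders.
#include <jni.h>

extern "C" JNIEXPORT void JNICALL
Java_com_example_camera_FrameProcessor_processFrame(JNIEnv *env, jobject /*thiz*/,
                                                    jbyteArray data, jint width, jint height)
{
    jbyte *bytes = env->GetByteArrayElements(data, nullptr);
    // ... modify the NV21 buffer here (width * height luma bytes come first) ...
    env->ReleaseByteArrayElements(data, bytes, 0);  // copy changes back, release the pin
    (void)width; (void)height;
}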
I am working with OpenCV4Android, and I am trying to use the VideoCapture class to open the Android camera and perform further processing on each captured frame.
Hi, I'm working on Android with OpenCV and I'm sorry to tell you that you cannot open a camera stream with OpenCV in C++. The Android NDK doesn't give any API to access the camera, so OpenCV cannot open any stream. I once saw an API for Android 4.4, if I remember well, but I did not succeed in opening anything with it.
Since the release of Android 7.0 you have access to C functions that let you take a picture; check out this header: camera/NdkCameraManager.h.
And if you want a beginning of code:
#include <camera/NdkCameraManager.h>
#include <android/log.h>

#define LOGI(...) ((void)__android_log_print(ANDROID_LOG_INFO, "gandoulf", __VA_ARGS__))
#define LOGW(...) ((void)__android_log_print(ANDROID_LOG_WARN, "gandoulf", __VA_ARGS__))

void AndroidCamera()
{
    ACameraIdList *cameraList;     // list of available cameras
    ACameraManager *cameraManager; // Android camera manager
    camera_status_t cameraStatus;  // error code returned by the camera calls

    cameraManager = ACameraManager_create(); // instantiate the camera manager

    // get the list of available cameras; returns a camera_status_t error code
    cameraStatus = ACameraManager_getCameraIdList(cameraManager, &cameraList);
    if (cameraStatus == ACAMERA_OK) {
        LOGI("cameraList ok\n");
        LOGI("num of cameras = %d", cameraList->numCameras);
        ACameraManager_deleteCameraIdList(cameraList); // release the list when done
    } else {
        LOGW("ERROR with cameraList\n");
    }

    ACameraManager_delete(cameraManager); // release the manager
}
With that you have the list of cameras, and you can normally take a picture with the functions you can find in that header.
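If it helps, here is a rough sketch of what opening the first camera from that list could look like. It assumes API 24+, the CAMERA permission already granted, and linking against libcamera2ndk; the state callbacks are just stubs:

#include <camera/NdkCameraManager.h>
#include <camera/NdkCameraDevice.h>

static void onDisconnected(void* /*ctx*/, ACameraDevice* /*dev*/) {}
static void onError(void* /*ctx*/, ACameraDevice* /*dev*/, int /*err*/) {}

void OpenFirstCamera()
{
    ACameraManager *manager = ACameraManager_create();
    ACameraIdList *cameraList = nullptr;

    if (ACameraManager_getCameraIdList(manager, &cameraList) == ACAMERA_OK &&
        cameraList->numCameras > 0) {
        ACameraDevice_StateCallbacks callbacks = {};
        callbacks.context = nullptr;
        callbacks.onDisconnected = onDisconnected;
        callbacks.onError = onError;

        ACameraDevice *device = nullptr;
        if (ACameraManager_openCamera(manager, cameraList->cameraIds[0],
                                      &callbacks, &device) == ACAMERA_OK) {
            // ... create a capture session / capture request here ...
            ACameraDevice_close(device);
        }
    }

    if (cameraList != nullptr)
        ACameraManager_deleteCameraIdList(cameraList);
    ACameraManager_delete(manager);
}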
Based on this thread, is there a way to process an image from the camera in QML without saving it?
Starting from the example in the docs, the capture() function saves the image to the Pictures location.
What I would like to achieve is to process the camera image every second using onImageCaptured, but I don't want to save it to the drive.
I've tried to implement a cleanup operation using the onImageSaved signal, but it's affecting onImageCaptured too.
As explained in this answer you can bridge C++ and QML via the mediaObject. That can be done via objectName (as in the linked answer) or by using a dedicated Q_PROPERTY (more on that later). In either case you should end up with code like this:
QObject *source; // QML Camera pointer obtained as described above
QMediaObject *cameraRef = qvariant_cast<QMediaObject*>(source->property("mediaObject"));
Once you've got the hook to the camera, use it as the source for a QVideoProbe object, i.e.
QVideoProbe *probe = new QVideoProbe;
probe->setSource(cameraRef);
Connect the videoFrameProbed signal to an appropriate slot, i.e.
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
and that's it: you can now process your frames inside the processFrame function. An implementation of such a function looks like this:
void YourClass::processFrame(QVideoFrame frame)
{
    QVideoFrame cFrame(frame);
    cFrame.map(QAbstractVideoBuffer::ReadOnly);

    int w {cFrame.width()};
    int h {cFrame.height()};

    QImage::Format f;
    if ((f = QVideoFrame::imageFormatFromPixelFormat(cFrame.pixelFormat())) == QImage::Format_Invalid)
    {
        QImage image(cFrame.size(), QImage::Format_ARGB32);
        // NV21 to ARGB32 conversion!!
        //
        // DECODING HAPPENS HERE on "image"
    }
    else
    {
        QImage image(cFrame.bits(), w, h, f);
        //
        // DECODING HAPPENS HERE on "image"
    }

    cFrame.unmap();
}
Two important implementation details here:
Android devices use a YUV format (typically NV21) which is currently not supported by QImage and which has to be converted by hand. I've made the strong assumption here that all invalid formats are YUV; that would be better managed via #ifdef conditionals on the current OS. A rough sketch of such a conversion is given right after these notes.
The decoding can be quite costly, so you can skip frames (simply add a counter to this method) or offload the work to a dedicated thread. That also depends on the pace at which frames arrive. Reducing their size, e.g. taking only a portion of the QImage, can also greatly improve performance.
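For reference, a hand-rolled NV21 to ARGB32 conversion could look roughly like this. It is only a sketch: it assumes the frame really is NV21, that the luma rows have no padding (otherwise account for bytesPerLine()), and it uses unoptimized integer BT.601 coefficients:

// Naive NV21 -> ARGB32 conversion sketch.
// 'src' is the mapped frame data, 'out' a pre-allocated QImage::Format_ARGB32 image.
#include <QImage>
#include <algorithm>

void nv21ToArgb32(const uchar *src, int width, int height, QImage &out)
{
    const uchar *yPlane  = src;
    const uchar *vuPlane = src + width * height;   // interleaved V/U pairs, half resolution

    for (int y = 0; y < height; ++y) {
        QRgb *line = reinterpret_cast<QRgb*>(out.scanLine(y));
        for (int x = 0; x < width; ++x) {
            int Y = yPlane[y * width + x];
            int vuIndex = (y / 2) * width + (x & ~1);
            int V = vuPlane[vuIndex]     - 128;
            int U = vuPlane[vuIndex + 1] - 128;

            // fixed-point BT.601-style coefficients (scaled by 1024)
            int r = Y + ((1436 * V) >> 10);
            int g = Y - ((352 * U + 731 * V) >> 10);
            int b = Y + ((1814 * U) >> 10);

            line[x] = qRgb(std::clamp(r, 0, 255),
                           std::clamp(g, 0, 255),
                           std::clamp(b, 0, 255));
        }
    }
}

You would call it from the invalid-format branch above, passing cFrame.bits() and the pre-allocated ARGB32 image.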
For that matter, I would avoid the objectName approach for fetching the mediaObject altogether and would instead register a new type so that the Q_PROPERTY approach can be used. I'm thinking of something along these lines:
class FrameAnalyzer : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QObject* source READ source WRITE setSource)

    QObject *m_source;   // kept for the sake of the READ function
    QVideoProbe probe;
    // ...

public:
    QObject *source() const { return m_source; }
    bool setSource(QObject *source);

public slots:
    void processFrame(QVideoFrame frame);
};
where the setSource is simply:
bool FrameAnalyzer::setSource(QObject *source)
{
    m_source = source;
    return probe.setSource(qvariant_cast<QMediaObject*>(source->property("mediaObject")));
}
Once registered as usual, i.e.
qmlRegisterType<FrameAnalyzer>("FrameAnalyzer", 1, 0, "FrameAnalyzer");
you can directly set the source property in QML as follows:
// other imports
import FrameAnalyzer 1.0
Item {
    Camera {
        id: camera
        // camera stuff here

        Component.onCompleted: analyzer.source = camera
    }

    FrameAnalyzer {
        id: analyzer
    }
}
A great advantage of this approach is readability and the tighter coupling between the Camera code and the processing code. That comes at the expense of a (slightly) higher implementation effort.
I want to record a Skype video call on an Android phone. When the call gets connected I start my app, which records the video, but it throws an error (my app can't start recording): "java.lang.RuntimeException: Fail to connect to camera service"
The camera can only be used by one application at a time.
As per the open() documentation:
Creates a new Camera object to access a particular hardware camera. If the same camera is opened by other applications, this will throw a RuntimeException.
http://developer.android.com/guide/topics/media/camera.html states the following:
Accessing cameras
If you have determined that the device on which your application is running has a camera, you must request to access it by getting an instance of Camera (unless you are using an intent to access the camera).
To access the primary camera, use the Camera.open() method and be sure
to catch any exceptions, as shown in the code below:
/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance(){
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}
So, simply put, the answer is no.
I have a small app I've been working on that uses the front camera. The way I've been obtaining the front camera seems to work on most phones, but users have been reporting trouble on the S3 and various other new devices. I've been accessing the front camera like so:
// Find the ID of the front camera
int numberOfCameras = Camera.getNumberOfCameras();
CameraInfo cameraInfo = new CameraInfo();
for (int i = 0; i < numberOfCameras; i++) {
    Camera.getCameraInfo(i, cameraInfo);
    if (cameraInfo.facing == CameraInfo.CAMERA_FACING_FRONT) {
        defaultCameraId = i;
        mCameraFound = true;
    }
}
if (!mCameraFound)
    displayDialog(8);
From some of the error reporting I've added into the app, I've noticed the S3 actually finds the front camera, but users report it only shows a blank screen? I have only been able to test on the devices I have (GNex and N7). I was hoping someone here may have some experience with this or may be able to help me solve this issue. If you want to try the app out on your S3, check the link below. Thanks in advance.
https://play.google.com/store/apps/details?id=com.wckd_dev.mirror
EDIT: I created a MirrorView object which contains a TextureView used for the preview. The MirrorView object implements a SurfaceTextureListener, and the preview is started within the onSurfaceTextureAvailable() method. I also created a method for restarting the preview after the app goes from hidden back to visible.
So this is called when the app is first started:
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
    try {
        if (mCamera != null) {
            Camera.Parameters parameters = mCamera.getParameters();
            parameters.setPreviewSize(mPreviewSize.height, mPreviewSize.width);
            requestLayout();
            mCamera.setParameters(parameters);
            mCamera.setPreviewTexture(surface);
            mCamera.startPreview();
        }
    }
    catch (RuntimeException e) {
        // Log.e(TAG, "RuntimeException caused by setPreviewTexture()", e);
    }
    catch (IOException e) {
        // Log.e(TAG, "IOException caused by setPreviewTexture()", e);
    }
}
The restartPreview call goes to an identical (but separate) method. From the debug data I've been collecting from users, I've noticed that the app finds two cameras on the S III and selects the ID matching CAMERA_FACING_FRONT. Also, this issue doesn't seem to be happening on all S IIIs; I have user feedback reporting as much. The latest report from a user experiencing this issue came from an AT&T S III. Any help would be appreciated!
Got some face time with an S3 tonight that was experiencing this issue with my app. Here's what was going on: TextureView relies on 2D hardware acceleration, which is supposed to be on by default (from what I understood) on 4.0+ devices. It wasn't turning on (for my app at least) on his phone. The fix was as simple as adding a single attribute in the manifest (under the application element):
android:hardwareAccelerated="true"
I'm porting an iPhone app to Android, and I need to use OpenGL framebuffers. I have a Nexus One, and a call to glGetString(GL_EXTENSIONS) shows that the Nexus One supports the same framebuffer extension as the iPhone. However, I can't seem to call functions related to the OpenGL extension in my GLSurfaceView. When I call a simple framebuffer get function, I get an UnsupportedOperationException.
I can't seem to resolve this issue, and I must have framebuffers to continue development. Do I need to pass some options in when the OpenGL context is created to get a fully capable OpenGL context object? Here's the block of code that I'm trying to run that determines the capabilities of the hardware. It claims to support the extension and my gl object is an instance of GL11ExtensionPack, but the call to glGetFramebufferAttachmentParameterivOES fails with an UnsupportedOperationException.
public void runEnvironmentTests()
{
    String extensions = gl.glGetString(GL11.GL_EXTENSIONS);
    Log.d("Layers Graphics", extensions);

    if (gl instanceof GL11ExtensionPack) {
        Log.d("Layers Graphics", "GL11 Extension Pack supported");
        GL11ExtensionPack g = (GL11ExtensionPack) gl;
        int[] r = new int[1];
        try {
            g.glGetFramebufferAttachmentParameterivOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                    GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL11.GL_TEXTURE_2D, r, 0);
            Log.d("Layers Graphics", "Framebuffers are supported");
        } catch (UnsupportedOperationException e) {
            e.printStackTrace();
            framebuffersSupported = false;
            Log.d("Layers Graphics", "Framebuffers are NOT supported");
        }
    }
}
If anyone has successfully used the GL_OES_framebuffer_object extension, please let me know. I'm beginning to think it may just not be implemented in the Java API!
There's currently a bug in Android in which the Java versions of these functions are unimplemented. The only way to use these extension functions at the moment is to use the NDK.
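For what it's worth, here is a minimal native-side sketch of those OES entry points as declared in <GLES/glext.h>. It assumes a current GLES 1.x context (created on the Java side as usual), that glGetString(GL_EXTENSIONS) reports GL_OES_framebuffer_object, and that the symbols are exported by libGLESv1_CM on your device (otherwise fetch them via eglGetProcAddress):

// Native-side sketch of the GL_OES_framebuffer_object entry points (NDK, GLES 1.x).
#define GL_GLEXT_PROTOTYPES
#include <GLES/gl.h>
#include <GLES/glext.h>

bool bindTextureFramebuffer(GLuint texture, GLuint *outFbo)
{
    glGenFramebuffersOES(1, outFbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, *outFbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, texture, 0);

    GLenum status = glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);  // restore the default framebuffer
        glDeleteFramebuffersOES(1, outFbo);
        return false;
    }
    return true;  // rendering now goes into 'texture' until the FBO is unbound
}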