I am using the CWAC-Camera library and want to implement a few additional features on top of it, viz.:
Is Pinch zoom possible?
I am using the camera as part of a ViewPager, and I am overriding setUserVisibleHint() to pause and resume the CameraView and reduce the memory footprint (roughly as sketched after these questions). Is there a better way of doing this?
Can touch to focus be implemented?
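For reference, this is roughly the pause/resume pattern I mean. It is only a sketch: the cameraView.onResume()/onPause() calls are my assumption about the view's lifecycle hooks (mirroring what the library's CameraFragment does), so substitute whatever your camera view actually exposes, and verify the import path for your library version.

import android.support.v4.app.Fragment;
import com.commonsware.cwac.camera.CameraView; // verify package for your library version

public class CameraPageFragment extends Fragment {
    private CameraView cameraView; // assigned in onCreateView() (omitted here)

    @Override
    public void setUserVisibleHint(boolean isVisibleToUser) {
        super.setUserVisibleHint(isVisibleToUser);
        if (cameraView == null) {
            return; // view hierarchy not created yet; nothing to pause or resume
        }
        if (isVisibleToUser) {
            cameraView.onResume();  // reacquire the camera when this page is shown
        } else {
            cameraView.onPause();   // release the camera when the page scrolls away
        }
    }
}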
Is Pinch zoom possible?
Not via the library.
Is there a better way of doing this?
I have never tried this.
Your need for a camera would seem to well exceed my objectives for the library (and its eventual replacement). Quoting the documentation:
Library Objectives
The #1 objective of this library is maximum compatibility with hardware. As such, this library will not be suitable for all use cases.
The targeted use case is an app that might otherwise have relied upon ACTION_IMAGE_CAPTURE, but needs greater reliability and somewhat greater control (e.g., capture images directly to internal storage).
If you are trying to write "a camera app" — an app whose primary job is to take pictures — this library may be unsuitable for you.
I'll even go so far as to say: if camera functionality is more than 1% of the value of your app, do not use my libraries. Use the camera APIs directly.
Related
I am developing an augmented-reality app to be used both on Google's Project Tango tablet and on ordinary Android devices. The AR on the normal devices is being powered by Vuforia, so its libraries are available in the development of the app.
While the Tango's capabilities offer a unique opportunity to create a marker-free AR system, its pose data has significant drift, which makes it difficult to justify Tango development.
When Vuforia was being researched for eventual inclusion into the app, I came across its Extended Tracking capabilities. It uses some advanced Computer Vision to provide tentative information on the device's location without having the AR marker onscreen. I tried out the demo, and it actually works great. Fairly accurate within reason, and minimal drift (especially when compared to the Tango's pose data!)
I would like to implement this extended tracking feature into the Tango version of the app, but after viewing the documentation it appears that the only way to take advantage of the extended tracking feature is to activate it while viewing an AR marker, and then the capability takes over once the marker disappears from view.
Is there any way to activate this Extended Tracking feature without requiring an AR marker to source its original position, and simply use it to stabilize and correct error in the Tango's pose data? This seems like the most realistic solution to the drift problem that I've come up with yet, and I'd really like to be able to take advantage of this technology.
This is my first answer on Stack Overflow, so I hope it can help!
I have asked myself the same question for Vuforia, because tracking can often be more stable with extended tracking than with a marker. For example, when far from a marker and/or viewing it at an angle, tracking can be unstable; if I then cover up the marker, forcing extended tracking to take over, it works better! I have not come across a way to use only extended tracking, but I haven't looked very far.
My suggestion is to look into using a UDT (user-defined target). The Vuforia samples show how to use UDTs: they let the user take a photo of whatever they like and use it as a target. What you could perhaps do is take that photo automatically, without user input, and then use the resulting UDT and the extended tracking it enables.
Just a suggestion I thought might be useful. Personally, I find the Tango's tracking amazing and much better than Vuforia's extended tracking (to be expected, given the extra sensors), but I suppose it all depends on the environment.
Good luck, I hope this suggestion works for you,
Beau
I am wondering if anyone knows the best way to implement filters using commonsware-cwac.
I want to update the camera preview each time the user chooses a different filter.
Andrew
If you are looking to use filters, do not use the CWAC-Camera library. You are trying to write a real camera app, and that is not what my library is for. Quoting the README:
The targeted use case is an app that might otherwise have relied upon ACTION_IMAGE_CAPTURE, but needs greater reliability and somewhat greater control (e.g., capture images directly to internal storage).
If you are trying to write "a camera app" — an app whose primary job is to take pictures — this library may be unsuitable for you.
Please use the Camera2 (Android 5.0+) and Camera (Android 4.4 and older) APIs directly.
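To give an idea of what that looks like with the classic Camera API, per-frame filtering generally means registering a preview callback, processing the raw NV21 frame yourself, and rendering the result. Below is a minimal sketch only; drawFiltered() is a placeholder for whatever renderer you supply.

import android.hardware.Camera;

public class FilterPreviewCallback implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // NV21: the first width * height bytes are the luminance (Y) plane,
        // which already gives a grayscale "filter" with no extra math.
        byte[] gray = new byte[size.width * size.height];
        System.arraycopy(data, 0, gray, 0, gray.length);
        drawFiltered(gray, size.width, size.height);
    }

    private void drawFiltered(byte[] luma, int width, int height) {
        // Upload to a Bitmap or GL texture and invalidate your preview view here.
    }
}

You would register it with camera.setPreviewCallback(new FilterPreviewCallback()) before calling startPreview(); on Camera2 the rough equivalent is pulling frames from an ImageReader.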
For my final year project at university, I am extending an application called Rviz for Android. This is an application for Android tablets that uses ROS (robot operating system) to display information coming from robots. The main intent of the project is essentially the opposite of traditional augmented reality - instead of projecting something digital onto a view of the real world, I am projecting a view of the real world, coming from the tablet's camera, onto a digital view of the world (an abstract map). The intended purpose of the application is that the view of the real world should move on the map as the tablet moves around.
To move the view of the camera feed on screen, I am using the tablet's accelerometer and calculating distance travelled. This is inherently flawed, and as such, the movement is far from accurate (this itself doesn't matter that much - it's great material for my report). To improve the movement of the camera feed, I wish to use markers placed at predefined positions in the real world, with the intent that, if a marker is detected, the view jumps to the position of the marker. Unfortunately, while there are many SDKs out there that deal with marker detection (such as the Qualcomm SDK), they are all geared towards proper augmented reality (that is to say, overlaying something on top of a marker).
So far, the only two frameworks I have identified that could be somewhat useful are OpenCV (which looks very promising indeed, though I'm not very experienced with C++) and AndAR, which again seems very focused on traditional AR uses, but I might be able to modify. Would either of these frameworks be appropriate here? Is there any other way I could implement a solution?
If it helps at all, this is the source code of the application I am extending, and this is the source code for ROS on Android (the code which uses the camera is in the "android_gingerbread_mr1" folder). I can also provide a link to my extensions to Rviz, if that would also help. Thank you very much!
Edit: the main issue I'm having at the moment is trying to integrate the two separate classes which access the camera (JavaCameraView in OpenCV, and CameraPreviewView in ROS). They both need to be active at the same time, but they do different things. I'm sure I can combine them. As previously mentioned, I'll link to/upload the classes in question if needed.
Have a look at the section about Template Matching in the OpenCV documentation. This thread may also be useful.
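As a rough illustration of what that involves with the OpenCV4Android Java bindings (names like frame and marker are just placeholders for the camera frame and a pre-loaded template image of the marker), template matching boils down to a single Imgproc call plus a min/max search:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class MarkerMatcher {
    /** Returns the top-left corner of the best match, or null if below the threshold. */
    public static Point findMarker(Mat frame, Mat marker, double threshold) {
        Mat result = new Mat();
        Imgproc.matchTemplate(frame, marker, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        return (mmr.maxVal >= threshold) ? mmr.maxLoc : null;
    }
}

Note that plain template matching is not scale- or rotation-invariant, so markers would need to appear at roughly the same size and orientation as the template.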
So I've managed to find a solution to my problem, and it's completely different from what I thought it would be. All of the image processing is offloaded onto a computer, and is performed by a ROS node. This node uses a library called ArUco to detect markers. The markers are generated by a separate program provided with the library, and each has its own unique ID.
When a marker is detected, a ROS message is published containing the marker's ID. The app on the tablet receives the message, and moves the real-world view according to which marker it receives. It works pretty well, though it's a bit unreliable, because I have to use a low image quality to make rendering and transport of the image quicker. And that's my solution! I may post my source code here once the project is completely finished.
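For anyone attempting something similar, the tablet side of such a setup is essentially a rosjava subscriber. The sketch below assumes the detection node publishes the marker ID as a std_msgs/Int32 on a topic named "aruco_marker_id"; both the topic name and the message type are assumptions, and moveViewToMarker() is a placeholder for the map update.

import org.ros.message.MessageListener;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Subscriber;

public class MarkerListenerNode extends AbstractNodeMain {
    @Override
    public GraphName getDefaultNodeName() {
        return GraphName.of("marker_listener");
    }

    @Override
    public void onStart(ConnectedNode connectedNode) {
        Subscriber<std_msgs.Int32> subscriber =
                connectedNode.newSubscriber("aruco_marker_id", std_msgs.Int32._TYPE);
        subscriber.addMessageListener(new MessageListener<std_msgs.Int32>() {
            @Override
            public void onNewMessage(std_msgs.Int32 message) {
                moveViewToMarker(message.getData()); // placeholder: reposition the map view
            }
        });
    }

    private void moveViewToMarker(int markerId) {
        // Look up the marker's predefined real-world position and jump the view there.
    }
}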
In a nutshell, this is what I want to do:
I want to load a 100x100 region (any part) of a 5-megapixel image into an Android Bitmap so that I can draw it onto a Canvas.
It's that simple. That's all I want to do. That's it. Nothing more. Nothing less. Sounds simple enough. So before you smirk, read on further down.
I understand that this question has been asked a million times already, and I have done my homework researching it. Unfortunately I have hit a dead end from all sides. Maybe I need to make my question clearer.
There is a limit on the amount of heap the Android VM lets you allocate, so loading a large bitmap, even one from the device's own camera (I own a Nexus S), is not possible using the following function:
BitmapFactory.decodeStream(InputStream is)
Yes, I could scale it down using BitmapFactory.Options, but how then would I zoom in?
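(For completeness, this is the standard inSampleSize approach I mean. It keeps memory bounded, but only by throwing resolution away, which is exactly what I cannot afford once the user zooms in.)

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class SampledDecode {
    public static Bitmap decodeSampled(String path, int reqWidth, int reqHeight) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;            // read dimensions only, no pixel allocation
        BitmapFactory.decodeFile(path, opts);

        int sample = 1;
        while (opts.outWidth / (sample * 2) >= reqWidth
                && opts.outHeight / (sample * 2) >= reqHeight) {
            sample *= 2;                           // powers of two decode fastest
        }
        opts.inJustDecodeBounds = false;
        opts.inSampleSize = sample;
        return BitmapFactory.decodeFile(path, opts);
    }
}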
Now I am trying to design an image viewer which can smoothly zoom in/out from the image. Obviously that's not possible if I can't even load the image.
In Gingerbread we have a new class, BitmapRegionDecoder, but I am designing my app for Froyo. All these classes have hooks into the native APIs, which use the Skia 2D graphics library. The Android NDK does not give access to these APIs either. Maybe there is a way to build Skia manually and use it to load a Java Bitmap, but I am not sure how.
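(For reference, this is what the Gingerbread-only class would let me do if I could target API level 10: decode just the 100x100 window without ever loading the whole file.)

import android.graphics.Bitmap;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;
import java.io.InputStream;

public final class RegionDecode {
    public static Bitmap decodeWindow(InputStream in, int left, int top) throws IOException {
        BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(in, false);
        try {
            // Only the requested 100x100 rectangle is decoded into Java heap memory.
            return decoder.decodeRegion(new Rect(left, top, left + 100, top + 100), null);
        } finally {
            decoder.recycle();
        }
    }
}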
It seems to have been solved already (http://blog.javia.org/how-to-work-around-androids-24-mb-memory-limit/) in this app - https://market.android.com/details?id=image.viewer - by using memory allocated in native C code with malloc/new, but I can't figure out how.
So, to make things clearer: what I want is something that can zoom in/out of an image up to full resolution. If possible it should be smooth, or at least doable in a separate thread.
If I need to use OpenGL, then please give me sample code as well.
I figured this out on my own. :)
It's not the best way, but it works wonderfully.
I created the bitmap in native code and accessed a subset of its region using the Skia graphics library. (This means I am accessing private APIs, and the functionality could easily break in newer Android versions.)
http://code.google.com/p/skia/
I had to link against this lib in my C++ code and access this functionality through JNI.
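The Java side of the JNI boundary looks roughly like the sketch below. The library and method names here are only illustrative, not the exact code; the native implementations decode the file with Skia, keep the full image on the native heap, and copy the requested region into a small Java Bitmap.

import android.graphics.Bitmap;

public final class NativeRegionLoader {
    static {
        System.loadLibrary("regionloader"); // hypothetical .so built against Skia
    }

    /** Decodes the file on the native heap; returns an opaque handle to the image. */
    public static native long nativeOpen(String path);

    /** Copies a window at (x, y), sized to match 'target', into the supplied Bitmap. */
    public static native void nativeCopyRegion(long handle, int x, int y, Bitmap target);

    /** Frees the native image. */
    public static native void nativeClose(long handle);
}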
I've been thinking about working on an application where you take a picture of something at a yard sale and it compares it against an image database.
For example, say you take a picture of a spoon; the app compares that image against the images in the database and returns the top 5 possible matches to the user.
Is this possible with current Android?
If so, point me in the right direction for the stuff I'd need.
Thanks,
abolbridge
Looking forward to you guys' feedback.
That is quite possible, but too CPU-intensive to run on the Android device itself. You'd have to build a server-side application for that.
It is going to be hard though. Quite.
Take a look at Google Goggles for an idea. The image processing is entirely made on the server side.
Check out OpenCV, as it contains a lot of useful object recognition functions and can be used on Android. However, this approach will push the limits of the phone's CPU and, even more so, its memory when using higher-resolution images. A server-side implementation may be more appropriate.
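One common way to sketch such a server-side matcher with OpenCV (3.x+ Java bindings; class and method names below are illustrative) is to extract ORB descriptors from the query photo and rank catalogue images by average descriptor distance:

import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

public class ImageScorer {
    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    /** Computes ORB descriptors for one image (query photo or catalogue entry). */
    public Mat describe(Mat image) {
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        orb.detectAndCompute(image, new Mat(), keypoints, descriptors);
        return descriptors;
    }

    /** Lower score = better match; rank catalogue images by this value. */
    public double score(Mat queryDescriptors, Mat candidateDescriptors) {
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(queryDescriptors, candidateDescriptors, matches);
        double total = 0;
        for (DMatch m : matches.toArray()) {
            total += m.distance;
        }
        return total / Math.max(1, matches.rows());
    }
}

In practice you would also want a ratio test or a geometric check (e.g. a homography fit) before trusting the top-5 ranking, since raw descriptor distances alone produce many false positives.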