What should I read to create a basic augmented reality app for Android?
I read the Android reference articles, and I learned that I could either use an Intent (relying on the built-in camera app) or build my own "customized" app (with direct camera access).
I wanted to know what I should read next, so that I can create something basic, like a shape on the screen.
By the way:
Can't I just see the current image from the camera without having to save it? All of the articles want me to save the captured files, but as you know, augmented reality (in my case) doesn't need to save anything; it works "on the fly". Am I correct?
You can see the preview using a SurfaceView while recording with a MediaRecorder. The preview is shown via the setPreviewDisplay function of MediaRecorder; it's pretty simple to use.
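If you only want to see frames without recording at all, Camera.setPreviewDisplay is enough on its own. A minimal sketch, assuming the (since-deprecated) android.hardware.Camera API and the CAMERA permission; the class name is just a placeholder:

```java
import android.app.Activity;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Live camera preview only: nothing is ever written to disk.
public class CameraPreviewActivity extends Activity implements SurfaceHolder.Callback {
    private Camera camera;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SurfaceView preview = new SurfaceView(this);
        preview.getHolder().addCallback(this);
        setContentView(preview);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
            camera.setPreviewDisplay(holder); // route frames to the surface
        } catch (java.io.IOException e) {
            e.printStackTrace();
        }
        camera.startPreview(); // frames flow "on the fly", no file involved
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.stopPreview();
        camera.release();
        camera = null;
    }
}
```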
I highly recommend you have a look at OpenCV. I have not used it with Android, but I know it to be a fairly painless and accessible way to do image processing.
Related
I want to make an Android app with the custom camera API which can take pictures with PNG files as frames (like some webcam apps on PCs). Also, first I want to take a picture of a ball (or something) which will act as the frame for the second photo I'm going to take. Anybody have an idea?
Most devices already have a camera application, which you can start for a result if that suits your requirements.
But if you have more extensive requirements, Android also allows you to control the camera directly. Directly controlling the camera is much more involved, and you should assess your requirements before deciding on either approach.
You can refer to the following developer guides for details on both:
http://developer.android.com/training/camera/photobasics.html
http://developer.android.com/training/camera/cameradirect.html
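For the Intent route the first guide covers, the essentials look roughly like this (a sketch meant to live inside an Activity; the request code is arbitrary):

```java
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;

// Launch the device's existing camera app and receive a small thumbnail back.
private static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePictureIntent() {
    Intent takePicture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePicture.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePicture, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        Bitmap photo = (Bitmap) data.getExtras().get("data"); // thumbnail only
        // ... hand the Bitmap on for compositing with your PNG frame
    }
}
```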
Once you get the Bitmap, you can use a Canvas to combine the two bitmaps.
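The compositing step might look like this (a sketch; it assumes the PNG frame has transparent regions where the photo should show through):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

// Draw a PNG frame overlay on top of a captured photo.
public static Bitmap composeWithFrame(Bitmap photo, Bitmap frame) {
    Bitmap result = Bitmap.createBitmap(photo.getWidth(), photo.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(result);
    canvas.drawBitmap(photo, 0, 0, null); // base layer
    Bitmap scaledFrame = Bitmap.createScaledBitmap(
            frame, photo.getWidth(), photo.getHeight(), true);
    canvas.drawBitmap(scaledFrame, 0, 0, null); // overlay, alpha-blended
    return result;
}
```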
I'm creating an Android app that uses OpenCV to implement augmented reality. One of the required features is that it saves the processed video. I can't seem to find any sample code on saving in real time while using OpenCV.
If the above scenario isn't possible, another option is to save the video first, have it post-processed by OpenCV, and save it back as a new file. But I can't find any sample code for that either.
Could someone be kind enough to point me in either direction, or offer an alternative? It's OK if the alternative doesn't use OpenCV.
The typical OpenCV flow is: you receive frames from the camera, convert them to RGB format, perform your matrix operations, then return the frame to the activity to display in a view. You can store the modified frames as images somewhere on the SD card and use jcodec to create your MP4 out of those images. See "Android make animated video from list of images".
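A sketch of the frame-dumping half of that flow, assuming OpenCV's Java bindings (Imgcodecs in OpenCV 3.x; older releases expose the same call as Highgui.imwrite) and a writable output directory; the path is a placeholder:

```java
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

// Inside your CvCameraViewListener2 implementation:
private int frameIndex = 0;

public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    // ... your AR drawing / matrix operations on rgba ...
    Mat bgr = new Mat();
    Imgproc.cvtColor(rgba, bgr, Imgproc.COLOR_RGBA2BGR); // imwrite expects BGR
    Imgcodecs.imwrite("/sdcard/ar_frames/frame_" + (frameIndex++) + ".png", bgr);
    bgr.release();
    return rgba; // still displayed in the view as usual
}
```

Once the frames are on disk, jcodec's SequenceEncoder (or a similar tool) can stitch them into an MP4.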
I have gone through all of the Wikitude samples. Is it possible to overlay the live camera feed with an image that was saved as a screenshot, and create an augmented image from it? If it is possible, what tracker image should I use? With a normal tracker image, I know in advance which image I am going to track; but if the image will only be taken in the future, how can I create a .wtc file for it, and how can I augment my camera feed? Is this possible in Wikitude?
I have to create an application using Wikitude, and I like the Wikitude SDK.
If I understand you correctly, you are looking for a way to create target images (which are used for recognition) on the device. This is currently not supported. However, if you have a valid business case, we can provide you with a server-based tool to create target images. For more information, please contact sales#wikitude.com.
Disclaimer: As you probably already guessed, I'm working for Wikitude.
I have an application where I use the Camera API to specify my own settings and do some frame processing.
I need my application to take photos with the same settings the original Android Camera app uses, but apparently I have no way to extract its procedures and intents. I have taken a look at the original Android Camera app's class file, but it was not helpful, since it uses native routines for the parameters part...
Is there any way I can obtain the parameters used by the original Camera app? And in which way does it save the images?
I know that I can write to a file stream as suggested in many posts, and I have done so, but how can I actually save the specific information the device puts in the files, such as information about the camera, density, and so on?
Thanks in advance
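On the last point: the camera and density details the stock app writes are EXIF metadata in the JPEG header, which you can write yourself after saving the file. A minimal sketch using android.media.ExifInterface; which tags you stamp is up to you:

```java
import android.media.ExifInterface;
import android.os.Build;
import java.io.IOException;

// After writing the JPEG bytes to photoPath, add basic EXIF metadata to it.
public static void writeBasicExif(String photoPath) throws IOException {
    ExifInterface exif = new ExifInterface(photoPath);
    exif.setAttribute(ExifInterface.TAG_MAKE, Build.MANUFACTURER);
    exif.setAttribute(ExifInterface.TAG_MODEL, Build.MODEL);
    exif.saveAttributes(); // rewrites the file with the new header
}
```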
I've currently made an Android app that can display a live preview from the camera, but I'm looking for a way to perform live pixel manipulation (i.e., make the image grayscale, sepia-toned, etc.). So far I haven't found any code from someone who's done this before.
Any help would be appreciated.
You could use Camera.Parameters to set the appropriate effect. Read more about it here.
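For example, something like this should apply a built-in sepia effect, assuming an already-opened android.hardware.Camera instance and a device that supports the effect:

```java
// Apply a built-in color effect to the live preview, if supported.
Camera.Parameters params = camera.getParameters();
if (params.getSupportedColorEffects() != null
        && params.getSupportedColorEffects().contains(Camera.Parameters.EFFECT_SEPIA)) {
    params.setColorEffect(Camera.Parameters.EFFECT_SEPIA);
    camera.setParameters(params);
}
```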
If you want to do the manipulation yourself, use the camera's onPreviewFrame callback. This gives you a raw byte[] in YUV format (that's the default; you can set other formats too; look here for setting the preview format).
You can then perform any pixel manipulation you want on the byte[].
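A sketch of the grayscale case: in NV21 (the default preview format) the first width*height bytes are the Y plane, i.e. the luminance, so grayscale comes almost for free:

```java
import android.hardware.Camera;

// NV21 layout: Y (luminance) plane first, then interleaved VU chroma bytes.
Camera.PreviewCallback grayscaleCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int pixelCount = size.width * size.height;
        int[] gray = new int[pixelCount];
        for (int i = 0; i < pixelCount; i++) {
            int y = data[i] & 0xFF;                          // luminance, 0..255
            gray[i] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque ARGB gray
        }
        // gray can now be passed to
        // Bitmap.createBitmap(gray, size.width, size.height, Bitmap.Config.ARGB_8888)
    }
};
```

Register it with camera.setPreviewCallback(grayscaleCallback).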
Hope this helps!
I have answered this question here. In short, the tutorial gives you probably the best way to achieve this (using OpenCV, a free computer vision library). You can download their example application from their website as well.