I have an application where I use the Camera API to specify my own settings and do some frame processing.
I need my application to take photos with the same settings the stock Android Camera app uses, but apparently I have no way to extract its procedures and intents. I have looked at the stock Camera app's class file, but it was not helpful, since it relies on native routines for the parameter handling...
Is there any way to obtain the parameters used by the stock Camera app? And how does it save its images?
I know that I can write to a file stream as suggested in many posts, and I have done so, but how can I actually save the device-specific information that goes into the files, such as information about the camera, density, and so on?
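Writing the JPEG to a file and then attaching metadata with android.media.ExifInterface is, I assume, roughly what the stock app does; this sketch is as far as I have gotten (the specific tags written here are just examples):

```java
import android.media.ExifInterface;
import java.io.FileOutputStream;
import java.io.IOException;

public class SaveWithExif {
    public static void save(byte[] jpeg, String path) throws IOException {
        // Write the raw JPEG bytes first...
        try (FileOutputStream out = new FileOutputStream(path)) {
            out.write(jpeg);
        }
        // ...then attach EXIF metadata to the saved file.
        ExifInterface exif = new ExifInterface(path);
        exif.setAttribute(ExifInterface.TAG_MAKE, android.os.Build.MANUFACTURER);
        exif.setAttribute(ExifInterface.TAG_MODEL, android.os.Build.MODEL);
        exif.saveAttributes();
    }
}
```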
Thanks in advance
Related
Is it possible to take a picture and get the image as byte array without saving it?
All I want is to take an in-memory picture; I don't want to save it. Also, I plan to use another camera activity to take the picture (i.e., I don't want to implement my own camera logic).
Please note, I know that I can take a picture, read the bytes, and delete the file. But I am looking for a way to avoid the saving and deleting and instead get the image directly as an in-memory byte array.
Is it possible to take a picture and get the image as byte array without saving it?
That is the behavior of the android.hardware.Camera API, the android.hardware.camera2 API, and third-party libraries that simplify those APIs (e.g., Fotoapparat, CameraKit-Android).
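With the (now deprecated) android.hardware.Camera API, for example, the JPEG callback hands you the image as a byte[] directly, with no file involved. A minimal sketch, assuming the preview is already running; error handling is omitted:

```java
import android.hardware.Camera;

public class InMemoryCapture {
    public static void capture(Camera camera) {
        camera.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera camera) {
                // 'data' is the full JPEG, entirely in memory -- nothing
                // is written to disk unless you choose to do so.
                handleJpeg(data);
                camera.startPreview(); // the preview stops after takePicture()
            }
        });
    }

    private static void handleJpeg(byte[] jpeg) {
        // e.g., BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length)
    }
}
```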
i plan to use other camera activity to take the picture
If by "other camera activity", you mean a third-party camera app (e.g., via ACTION_IMAGE_CAPTURE), that will be difficult and unreliable. At best, you could try to use a content Uri pointing to a ContentProvider that you write that only holds onto written streams in memory. I would expect you to run out of heap space. Plus, not all camera apps will support a content Uri for EXTRA_OUTPUT (though they should).
I am writing a custom camera application because I need to do real-time streaming and I need to access the byte array raw video data from a camera.
To simplify my work, I was wondering if I could get this raw data from the user's native camera application. I am guessing not because you must invoke it through an intent, and then you get a result. But, I need the real-time raw data, not a delayed result.
The reason for this request is that I just need to get the raw data and don't actually desire to do anything fancier than what the native camera can do.
If this is possible I'd be very grateful for any assistance!
Have you tried using setPreviewCallback()? It will continuously give you still images of the camera preview. The only caveat is that the frame rate will probably be very low, but I don't know your use case so it might be good enough.
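A minimal sketch of that approach, again assuming the deprecated android.hardware.Camera API with a preview that is already running:

```java
import android.hardware.Camera;

public class PreviewFrames {
    public static void attach(Camera camera) {
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera camera) {
                // 'data' holds one raw preview frame, by default in NV21
                // format; the callback fires for each displayed frame.
                Camera.Size size = camera.getParameters().getPreviewSize();
                process(data, size.width, size.height);
            }
        });
    }

    private static void process(byte[] nv21, int width, int height) {
        // Hand the frame to your streaming encoder here.
    }
}
```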
I was wondering if I could get this raw data from the user's native camera application
Not in general. There are thousands of "native camera applications" for Android. None are required to publish any sort of real-time raw data feed, to the extent that is even possible given the additional layer of IPC overhead. It's entirely possible that somebody happens to have written a camera app that does this, and you're welcome to hunt around for it, but that may take a fair bit of time.
I want to make an Android app with the custom camera API which can take pictures with some PNG files as frames (like some webcam apps on PCs). Also, I first want to take a picture of a ball (or something) which acts as the frame for the second photo that I am going to take. Does anybody have an idea?
Most devices already have a camera application, which you can start for a result if that suits your requirement.
But if you have more extensive requirements, Android also allows you to control the camera directly. Directly controlling the camera is much more involved, and you should assess your requirements before deciding on either approach.
You can refer to the following developer guides for details on both:
http://developer.android.com/training/camera/photobasics.html
http://developer.android.com/training/camera/cameradirect.html
Once you get the Bitmap, you can use a Canvas to combine the two bitmaps.
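Overlaying a PNG frame onto a captured photo could look roughly like this (a sketch; scaling the frame to cover the whole photo is an assumption about your layout):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

public class Overlay {
    // Draws 'frame' (e.g., a PNG with transparency) on top of 'photo'
    // and returns the combined bitmap.
    public static Bitmap combine(Bitmap photo, Bitmap frame) {
        Bitmap result = photo.copy(Bitmap.Config.ARGB_8888, true); // mutable copy
        Canvas canvas = new Canvas(result);
        // Scale the frame to the photo's dimensions, then draw it on top.
        Bitmap scaled = Bitmap.createScaledBitmap(
                frame, photo.getWidth(), photo.getHeight(), true);
        canvas.drawBitmap(scaled, 0, 0, null);
        return result;
    }
}
```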
I have an application that captures pictures by dispatching an intent. Upon return of control to the activity, I retrieve the intent, extract the data, transform it into a bitmap and use it in my application. Just some straight forward code.
The pictures taken are subject to some privacy obligations, and my application takes care to delete all data. The problem is that all pictures taken by the Camera application seem to be saved automatically to internal storage. I was successful in deleting screenshots taken on the device and in clearing all thumbnails. What remains to be done is keeping the Camera application from storing the picture in the first place.
The problem as I see it is that I relinquish control to the application and, hence, cannot influence it in any way. I tried launching ACTION_IMAGE_CAPTURE_SECURE, but that failed. I would much prefer keeping the application from storing the image over scanning the internal storage location for images and deleting them, as I tried that and failed.
Note: I have assigned all the right permissions. Should you need code, I will be happy to post it but there is nothing that cannot be found already in other threads. If there is no solution to my preferred approach, how would I go about deleting all images located in the internal storage at DCIM\Camera? Thank you!
If you need to override that behavior, don't take the pictures via intent. Take them yourself, using the android.hardware.Camera API.
What should I read for creating a basic augmented reality app for android?
I read the Android reference articles, and I learned that I could use an Intent (using the built-in app) or construct my own "customized" app (with the camera).
I wanted to know what I should read more about so that I could create something basic, like a shape on the screen.
By the way:
Can't I just see the current image given by the camera without needing to save it? (All of the articles want me to save the captured files, but as you know, augmented reality (in my case) does not need to save the file; it works "on the fly". Am I correct?)
You can see the preview using a SurfaceView while recording from a MediaRecorder.
The preview can be shown using MediaRecorder's setPreviewDisplay() function. It's pretty simple to use.
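A rough sketch of wiring that up (it assumes the SurfaceView's Surface is already valid; the output format and encoder chosen here are placeholders, and MediaRecorder is picky about call order):

```java
import android.media.MediaRecorder;
import android.view.SurfaceHolder;

public class RecorderPreview {
    public static MediaRecorder start(SurfaceHolder holder, String outputPath)
            throws Exception {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setOutputFile(outputPath);
        // Route the live camera preview into the SurfaceView's surface.
        recorder.setPreviewDisplay(holder.getSurface());
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}
```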
I highly recommend you have a look at OpenCV. I have not used it with Android, but I know it to be a fairly painless and accessible way to do image processing.