I am writing a custom camera application because I need to do real-time streaming, which means I need access to the raw video data from the camera as a byte array.
To simplify my work, I was wondering if I could get this raw data from the user's native camera application. I am guessing not, because you must invoke it through an intent and then wait for a result, whereas I need the real-time raw data, not a delayed result.
The reason for this request is that I only need the raw data and don't want to do anything fancier than what the native camera can already do.
If this is possible I'd be very grateful for any assistance!
Have you tried using setPreviewCallback()? It will continuously give you frames of the camera preview. The only caveat is that the frame rate will probably be very low, but I don't know your use case, so it might be good enough.
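For reference, a minimal sketch of that approach, assuming the android.hardware.Camera API (deprecated in later releases) and that the CAMERA permission is already granted; PreviewGrabber and the surface plumbing are placeholder names:

import android.hardware.Camera;
import android.view.SurfaceHolder;
import java.io.IOException;

public class PreviewGrabber {
    public void start(SurfaceHolder holder) throws IOException {
        Camera camera = Camera.open();
        camera.setPreviewDisplay(holder); // the preview must be bound to a surface
        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // data holds one raw preview frame (NV21 by default);
                // hand it to the streaming pipeline here
            }
        });
        camera.startPreview();
    }
}

If garbage collection becomes a problem at higher frame rates, setPreviewCallbackWithBuffer() together with addCallbackBuffer() avoids allocating a new byte array for every frame.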
I was wondering if I could get this raw data from the user's native camera application
Not in general. There are thousands of "native camera applications" for Android. None are required to publish any sort of real-time raw data feed, to the extent that is even possible given the additional layer of IPC overhead. It's entirely possible that somebody happens to have written a camera app that does this, and you're welcome to hunt around for it, but that may take a fair bit of time.
Is it possible to take a picture and get the image as byte array without saving it?
All I want is to take an in-memory picture; I don't want to save it. Also, I plan to use another camera activity to take the picture (I mean, I don't want to implement my own camera logic).
Please note, I know that I can take a picture, read the bytes, and delete the file. But I am asking whether I can avoid the saving and deleting and instead get the image directly as an in-memory byte array.
Is it possible to take a picture and get the image as byte array without saving it?
That is the behavior of the android.hardware.Camera API, the android.hardware.camera2 API, and third-party libraries that simplify those APIs (e.g., Fotoapparat, CameraKit-Android).
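For illustration, a minimal sketch with the android.hardware.Camera API; InMemoryCapture is a made-up name, and camera is assumed to be an opened Camera whose preview is running:

import android.hardware.Camera;

public class InMemoryCapture {
    public static void capture(Camera camera) {
        // shutter and raw callbacks are null; only the JPEG callback is used
        camera.takePicture(null, null, new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] jpegData, Camera cam) {
                // jpegData is the complete JPEG, entirely in memory;
                // nothing has been written to storage at this point
            }
        });
    }
}

The camera2 equivalent is an ImageReader: the captured Image's planes hand you the JPEG bytes in memory the same way.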
i plan to use other camera activity to take the picture
If by "other camera activity", you mean a third-party camera app (e.g., via ACTION_IMAGE_CAPTURE), that will be difficult and unreliable. At best, you could try to use a content Uri pointing to a ContentProvider that you write that only holds onto written streams in memory. I would expect you to run out of heap space. Plus, not all camera apps will support a content Uri for EXTRA_OUTPUT (though they should).
I guess small audio clips are necessary in many applications, so I would expect Qt to support playing MP3 data from memory. Decoding the MP3 data to WAV in memory might be one solution, but that requires decoding all the data first, which makes it a poor fit for a real-time application. It also doesn't make sense to store mp3_data in a file and ask QMediaPlayer to play that; the performance is unacceptable.
This is my code, after much searching on Google and Stack Overflow:
m_buffer.setBuffer(&mp3_data_in_memory);
m_player.setMedia(QMediaContent(), &m_buffer);
m_player.play();
where m_buffer is a QBuffer instance, mp3_data_in_memory is a QByteArray, and m_player is a QMediaPlayer instance.
I have seen reports that this code doesn't work on macOS and iOS, but I am running on Android.
Does anyone have a solution for Android system? Thanks a lot.
Your code won't work because the media property requires a valid QMediaContent instance:
Setting this property to a null QMediaContent will cause the player to discard all information relating to the current media source and to cease all I/O operations related to that media.
There's also no way of telling the QMediaPlayer what format the data is in; you're just dumping raw data on it. In principle QMediaResource can hold this information, but it requires a URL and is regarded as null without one.
As you may have guessed, QMediaPlayer and the related classes are high-level constructs not designed for this sort of thing. You need to use a QAudioDecoder to actually decode the raw data, and pipe the output to a QAudioOutput to hear it.
I'm creating an Android app that makes use of OpenCV to implement augmented reality. One of the required features is saving the processed video. I can't seem to find any sample code on saving in real time while using OpenCV.
If the above scenario isn't possible, another option is to save the video first and have it post-processed by OpenCV and saved back as a new file. But I can't find any sample code for this either.
Could someone be kind enough to point me to either direction, or give me an alternative? It's ok if the alternative doesn't use OpenCV.
The typical OpenCV flow is: you receive frames from the camera, convert them to RGB format, perform matrix operations, and then return to the activity to display the result in a View. You can store the modified frames as images on the sdcard and use jcodec to create your MP4 out of those images; see Android make animated video from list of images. A sketch of that encoding step follows.
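A minimal sketch of that second step, assuming jcodec's AndroidSequenceEncoder helper (from the jcodec-android artifact) and that frames already holds the processed frames converted to Bitmaps:

import android.graphics.Bitmap;
import org.jcodec.api.android.AndroidSequenceEncoder;
import java.io.File;
import java.io.IOException;
import java.util.List;

public class Mp4Writer {
    public static void write(File out, List<Bitmap> frames) throws IOException {
        // 25 fps is an assumption; match it to your capture rate
        AndroidSequenceEncoder encoder = AndroidSequenceEncoder.createSequenceEncoder(out, 25);
        for (Bitmap frame : frames) {
            encoder.encodeImage(frame); // each Bitmap becomes one video frame
        }
        encoder.finish(); // writes out the MP4 container
    }
}

Feeding Bitmaps straight to the encoder like this also lets you skip the intermediate image files on the sdcard, if memory allows.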
Hello sages of the Overflowing Stack, Android noob here..
I'm using CSipSimple and want to stream the call audio to another app in chunks of one second of audio data, so that it can process the raw PCM data.
The code that handles the audio in CSipSimple is native, so I prefer native approaches rather than calling back into Java.
I thought of a few ways of doing so:
Use audio streaming and let the other app get it.
Writing the data to a file and let the other app read it.
Calling a service in the other application (AIDL)
Using intents.
These are the considerations leading to my dilemma:
Streaming looks like the natural choice, but I couldn't find Android support for retrieving raw PCM data from an audio stream. The intent mechanism is flexible and convenient, but I don't think that's what intents are meant for. Using a file seems cumbersome, although it's well supported. Finally, using a service seems like a good option, but it seems less flexible and probably needs more error handling and thread management.
Can you guys point out the best alternative?
If you have another one, you're welcome to share it.
I do not know about streaming audio API support, so I'll not touch that case.
As for writing the data to a file and letting the other application read it: this is one possible way to solve your problem.
As for calling a service through AIDL or using intents, I do not think these are good solutions. The problem is that Binder limits the size of the data that can be passed in a single transaction (1 MB).
In my view, the best solution (especially since you're working in native code) is to use ashmem, a shared-memory driver developed specifically for Android. In your service you create a shared memory region and pass a reference to it to your client app, which then reads the information from that memory.
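On the Java side, a minimal sketch of that idea using android.os.SharedMemory, the framework wrapper over ashmem (API 27+; older releases have MemoryFile, and the NDK exposes ASharedMemory_create() for the native case). SharedMemory is Parcelable, so the region itself can be returned through your AIDL interface; only a file descriptor crosses the Binder, so the 1 MB transaction limit does not apply to the audio data. PcmShare and the sizing are placeholders:

import android.os.SharedMemory;
import android.system.ErrnoException;
import java.nio.ByteBuffer;

public final class PcmShare {
    // e.g. 16-bit mono at 16 kHz -> 32000 bytes per one-second chunk
    public static SharedMemory createRegion(int bytesPerSecond) throws ErrnoException {
        return SharedMemory.create("pcm_chunk", bytesPerSecond);
    }

    public static void writeChunk(SharedMemory region, byte[] pcm) throws ErrnoException {
        ByteBuffer buf = region.mapReadWrite(); // maps the ashmem region into this process
        buf.put(pcm);
        SharedMemory.unmap(buf); // the client maps the same region to read the chunk
    }
}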
I have an application where I make use of the Camera API to specify my own settings and do some frame processing.
I need my application to take photos with the same settings the original Android Camera app uses, but apparently I have no way to extract its procedures and intents. I have taken a look at the original Camera app's class file, but it was not helpful, since it uses native routines for the parameters part...
Is there any way I can obtain the parameters used by the original Camera app? And how does it save the images?
I know that I can write to a file stream as suggested in many posts, and I have done so, but how can I actually save the specific information the device puts in the files, such as the camera details, density, and so on?
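To show the kind of thing I mean, here is a minimal sketch of writing such metadata, assuming the AndroidX ExifInterface library; the tag values are placeholders for whatever the device actually reports:

import androidx.exifinterface.media.ExifInterface;
import java.io.IOException;

public class ExifTagger {
    public static void tag(String jpegPath) throws IOException {
        ExifInterface exif = new ExifInterface(jpegPath);
        exif.setAttribute(ExifInterface.TAG_MAKE, android.os.Build.MANUFACTURER);
        exif.setAttribute(ExifInterface.TAG_MODEL, android.os.Build.MODEL);
        exif.setAttribute(ExifInterface.TAG_X_RESOLUTION, "72/1"); // rational value
        exif.saveAttributes(); // rewrites the EXIF block of the existing file
    }
}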
Thanks in advance