How to add a frame around a video - Android

I'm developing an app in which I need to create a video with a frame around it. Basically, I get the video through the standard camera and then need to add a frame around it. In the picture, my video needs to go where the blue area is.
I have already read tons of information about video processing and post-processing, OpenCV, ffmpeg, etc. Does anyone know how I can achieve this?

After many hours I found only one solution: use ffmpeg. You can build it and use it through the Android JNI. In my case I used an executable ffmpeg file: in onCreate I install it from the raw resources and then call its functions. (There are many solutions on the internet and on Stack Overflow covering ffmpeg commands.)
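
For illustration, a minimal sketch of invoking the installed binary (the paths, the 540x960 size, the 90:160 offset and frame.png are all assumptions, not the original code):

private void addFrameAroundVideo(File dir) throws Exception {
    // dir is e.g. getFilesDir(); assumes the ffmpeg binary copied from
    // res/raw has already been made executable
    ProcessBuilder pb = new ProcessBuilder(
            new File(dir, "ffmpeg").getAbsolutePath(),
            "-i", new File(dir, "in.mp4").getAbsolutePath(),
            "-i", new File(dir, "frame.png").getAbsolutePath(),
            // scale the video to the blue area, then draw it onto the frame image
            "-filter_complex", "[0:v]scale=540:960[vid];[1:v][vid]overlay=90:160",
            "-c:a", "copy",
            new File(dir, "out.mp4").getAbsolutePath());
    pb.redirectErrorStream(true);
    pb.start().waitFor(); // blocks, so run this off the main thread
}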

This is very simple; try to follow what I have written below.
import processing.video.*; // needed for the Movie class

PImage frame;
Movie mov;

void setup()
{
  size(640, 480); // match the size of your frame image
  /* Display your frame here */
  frame = get(); // this will capture the screen
  mov = new Movie(this, "video.mp4"); // placeholder filename
  mov.play();
}

void movieEvent(Movie m)
{
  m.read();
  // the last four arguments are the destination x, y, width and height,
  // i.e. the blue rectangle the movie should fill
  frame.copy(m, 0, 0, m.width, m.height,
             Xbluestart, Ybluestart, blueWidth, blueHeight);
}

void draw()
{
  image(frame, 0, 0); // image(), not Image()
}
I think this should solve your problem.
P.S. Instead of Xbluestart, Ybluestart, blueWidth and blueHeight, write in the position and size of the blue rectangle.

Related

Get the bit depth or the color space of an MP4 file in Android

I'm currently working on a video player on Android. The video player should support 8-bit and 10-bit content. Because the flow in the app is different, I need to know before playing the video whether the content is 10-bit BT.2020. I've tried MediaMetadataRetriever, but there is no information about the bit depth, color space, color primaries, transfer characteristics, etc. I also got the same result using this project: https://github.com/wseemann/FFmpegMediaMetadataRetriever.
Is there a way to get more information about the color space or bit depth on Android? Something similar to the MediaInfo tool: https://mediaarea.net/en/MediaInfo
After some time I found out that I can use MediaExtractor and then get the information I needed from a MediaFormat object created with extractor.getTrackFormat(trackIndex). For HDR10 I check the color standard and the transfer function:
if (mediaFormat.containsKey(MediaFormat.KEY_COLOR_TRANSFER) &&
    mediaFormat.containsKey(MediaFormat.KEY_COLOR_STANDARD)
) {
    // HDR10 = SMPTE ST 2084 (PQ) transfer function + BT.2020 color standard
    if (mediaFormat.getInteger(MediaFormat.KEY_COLOR_TRANSFER) == MediaFormat.COLOR_TRANSFER_ST2084
        && mediaFormat.getInteger(MediaFormat.KEY_COLOR_STANDARD) == MediaFormat.COLOR_STANDARD_BT2020
    ) {
        return true
    }
}
return false
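
For context, a rough sketch of getting that MediaFormat in the first place (the path is a placeholder and error handling is omitted):

MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(path); // placeholder: a local file path
for (int i = 0; i < extractor.getTrackCount(); i++) {
    MediaFormat mediaFormat = extractor.getTrackFormat(i);
    String mime = mediaFormat.getString(MediaFormat.KEY_MIME);
    if (mime != null && mime.startsWith("video/")) {
        // this is the mediaFormat the HDR10 check above inspects
    }
}
extractor.release();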

android camera2 api - How can I add a company logo to the recorded video file

Hi everyone.
I am trying to add a small logo in the corner of the video I've recorded. I've tried adding the ImageView directly to the recording surface, but that is not the solution.
I guess I'll have to create another surface and merge them together, but I couldn't find any tutorial or code sample for such a thing.
I've found the option to add a foreground drawable, but it doesn't show the logo on the preview surface.
This is the code:
private void startRecording() {
    try {
        setupMediaRecorder();
        mTextureView.setForeground(getDrawable(R.drawable.toolbarlogo));
        SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
        surfaceTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        Surface previewSurface = new Surface(surfaceTexture);
        Surface recordSurface = mMediaRecorder.getSurface();
        mPreviewCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
        mPreviewCaptureRequestBuilder.addTarget(previewSurface);
        mPreviewCaptureRequestBuilder.addTarget(recordSurface);
        mCameraDevice.createCaptureSession(Arrays.asList(previewSurface, recordSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        try {
                            session.setRepeatingRequest(mPreviewCaptureRequestBuilder.build(), null, null);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                    }
                }, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Please help.
Thanks
Adding a separate surface will just show the logo while you are viewing it in your app. If you want the logo to be added to the file so it can be seen when you share or load the video, then you need to add it to the video itself.
Assuming you capture the video to your local device, one way you can do this is to use ffmpeg to add an image to the video - see this answer, which includes notes on how to position the image:
https://stackoverflow.com/a/10920872/334402
There are several ways to include ffmpeg in an Android project, but maybe the easiest is to use a well-supported ffmpeg wrapper project like this one:
https://github.com/WritingMinds/ffmpeg-android-java
The wrappers basically wrap an interface around the command-line ffmpeg tool, which has the advantage that you can use the same syntax as, e.g., the answer noted above, and leverage the support and Q&A on the web around it.
The disadvantage is that the command-line tool was not originally designed to be used this way, but if you use a well-supported wrapper you will likely find that a lot of the problems have been ironed out.
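
As a rough sketch of what that looks like with the wrapper above (exact signatures vary a little between versions, and all paths and logo.png are placeholders):

FFmpeg ffmpeg = FFmpeg.getInstance(context);
String[] cmd = {
        "-i", "/sdcard/in.mp4", "-i", "/sdcard/logo.png",
        "-filter_complex", "overlay=10:10", // logo 10px from the top-left corner
        "-codec:a", "copy", "/sdcard/out.mp4"
};
// execute() may throw FFmpegCommandAlreadyRunningException
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
    @Override public void onSuccess(String message) { /* video written */ }
    @Override public void onFailure(String message) { /* inspect message */ }
});
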
One thing to note: video processing has quite high CPU and hence battery requirements on a mobile device. If you are going to be uploading the video to a server to share it, it may make more sense to add the image there, where you have more CPU horsepower.

YUV (NV21) to BGR conversion on mobile devices (Native Code)

I'm developing a mobile application that runs on Android and iOS. It's capable of real-time processing of a video stream. On Android I get the preview video stream of the camera via android.hardware.Camera.PreviewCallback.onPreviewFrame. I decided to use the NV21 format, since it should be supported by all Android devices, whereas RGB isn't (or only RGB565 is).
For my algorithms, which mostly are for pattern recognition, I need grayscale images as well as color information. Grayscale is not a problem, but the color conversion from NV21 to BGR takes way too long.
As described, I use the following method to capture the images. In the app, I override the onPreviewFrame handler of the Camera. This is done in CameraPreviewFrameHandler.java:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        AvCore.getInstance().onFrame(data, _prevWidth, _prevHeight, AvStreamEncoding.NV21);
    } catch (NativeException e) {
        e.printStackTrace();
    }
}
The onFrame function then calls a native function that fetches the data from the Java objects as local references. This is converted to an unsigned char* byte stream and passed to the following C++ function, which uses OpenCV to convert from NV21 to BGR:
void CoreManager::api_onFrame(unsigned char* rImageData, avStreamEncoding_t vImageFormat, int vWidth, int vHeight)
{
    // rImageData is a local JNI reference to the java byte array "data" from onPreviewFrame
    Mat bgrMat;  // holds the converted image
    Mat origImg; // holds the original image (OpenCV wrapper around rImageData)
    double ts;   // for profiling
    switch(vImageFormat)
    {
    // other formats ...
    case NV21:
        origImg = Mat(vHeight + vHeight/2, vWidth, CV_8UC1, rImageData); // fast, only creates a header around rImageData
        bgrMat = Mat(vHeight, vWidth, CV_8UC3); // prepare Mat for the target image
        ts = avUtils::gettime(); // PROFILING START
        cvtColor(origImg, bgrMat, CV_YUV2BGR_NV21); // 3-channel BGR to match CV_8UC3
        _onFrameBGRConversion.push_back(avUtils::gettime()-ts); // PROFILING END
        break;
    }
    // [...APPLICATION LOGIC...]
}
As one might conclude from the comments in the code, I already profiled the conversion, and it turned out that it takes ~30 ms on my Nexus 4, which is unacceptably long for such a "trivial" pre-processing step. (My profiling methods are double-checked and work properly for real-time measurement.)
Now I'm desperately trying to find a faster implementation of this color conversion from NV21 to BGR. This is what I've already done:
1. Adapted the code "convertYUV420_NV21toRGB8888" to C++ as provided in this topic (a multiple of the OpenCV conversion time)
2. Modified the code from 1 to use only integer operations (double the conversion time of the OpenCV solution)
3. Browsed through a couple of other implementations, all with similar conversion times
4. Checked the OpenCV implementation; they use a lot of bit-shifting to get performance. I guess I'm not able to do better on my own
Do you have suggestions, know of good implementations, or even have a completely different way to work around this problem? I somehow need to capture RGB/BGR frames from the Android camera, and it should work on as many Android devices as possible.
Thanks for your replies!
Did you try libyuv? I used it in the past, and if you compile it with NEON support it uses assembly code optimized for ARM processors. You can start from there to further optimize for your particular situation.

Android Camera.takePicture - Possible to disable shutter sound and preview surface?

I am working on an app that will allow a user to take quick click-and-forget snapshots. Most of the app is done except for the camera working the way I would like. Right now I have the camera working, but I can't seem to find a way to disable the shutter sound, and I can't find a way to disable displaying the preview. I was able to cover the preview up with a control, but I would rather just not have it displayed at all.
To sum things up, these are the items that I would like to disable while utilizing the built-in Camera controls:
Shutter sound
Camera screen display
Image preview in onPictureTaken
Does anyone know of a resource that could point me in the right direction? I would greatly appreciate it. I have been following CommonsWare's example from this sample fairly closely.
Thank you.
This is actually a property in the build.prop of a phone, and I'm unsure if it's possible to change it, unless you completely override the camera and use your own camera code, using what is available in the SDK.
Take a look at this:
CameraService.cpp
. . .
CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
    mPreviewCallbackFlag = FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mOrientationChanged = false;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound(); // the shutter/record sounds are loaded here
    LOG1("Client::Client X (pid %d)", callingPid);
}

void CameraService::loadSound() {
    Mutex::Autolock lock(mSoundLock);
    LOG1("CameraService::loadSound ref=%d", mSoundRef);
    if (mSoundRef++) return;
    mSoundPlayer[SOUND_SHUTTER] = newMediaPlayer("/system/media/audio/ui/camera_click.ogg");
    mSoundPlayer[SOUND_RECORDING] = newMediaPlayer("/system/media/audio/ui/VideoRecord.ogg");
}
As can be seen, the click sound is started without your interaction.
This is the service used in the Gingerbread source code.
The reason they DON'T allow this is that it is illegal in some countries. The only way to achieve what you want is to have a custom ROM.
Update
If what is being said here: http://androidforums.com/t-mobile-g1/6371-camera-shutter-sound-effect-off.html
still applies, then you could write a timer that turns off the sound (silent mode) for a couple of seconds and then turns it back on each time you take a picture.
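
A rough sketch of that timer idea (assuming an Activity context, and that the shutter sound plays on the system stream, which varies by device):

final AudioManager audio = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audio.setStreamMute(AudioManager.STREAM_SYSTEM, true); // silence before capture
camera.takePicture(null, null, jpegCallback);
// restore the sound a couple of seconds later, as suggested above
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        audio.setStreamMute(AudioManager.STREAM_SYSTEM, false);
    }
}, 2000);
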
You can use the data from the preview callback and save it as a picture on some kind of trigger, such as a button with an OnClickListener, compressing the image to JPEG or PNG. This way there is no shutter callback to implement, and therefore you can play any sound you want, or none at all, when taking a picture.
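
A minimal sketch of that approach, assuming the default NV21 preview format (the width, height and trigger flag are placeholders):

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (!captureRequested) return; // set from a button's OnClickListener
    captureRequested = false;
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // compress the raw preview frame to JPEG; no shutter callback fires
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
    byte[] jpeg = out.toByteArray(); // write this to a file
}
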
You can effectively hide the preview surface by giving it dimensions of 1dp in the XML file (I found an example that said 0dp, but for some reason that was giving me errors).
It may be illegal to have a silent shutter in some places, but it doesn't appear that the US is such a place, as my HTC One gives me an option to silence it. In fact, since Android 4.2 you can do this:
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info); // fill in info for the camera you opened
if (info.canDisableShutterSound) {
    camera.enableShutterSound(false);
}

Grabbing consecutive frames in android using opencv

I am trying to grab consecutive frames from Android using the OpenCV VideoCapture class. Actually I want to implement optical flow on Android, for which I need two frames. I implemented optical flow in C first, where I grabbed the frames using cvQueryFrame, and everything worked fine. But in Android when I call
if (capture.grab()) {
    if (capture.retrieve(mRgba))
        Log.i(TAG, "first frame retrieved");
}
if (capture.grab()) {
    if (capture.retrieve(mRgba2))
        Log.i(TAG, "second frame retrieved");
}
and then subtract the matrices using Imgproc.subtract(mRgba, mRgba2, output) and display the output, it gives me a black image, indicating that mRgba and mRgba2 are image frames with the same data. Can anyone help me grab two different images? According to the OpenCV documentation, mRgba and mRgba2 should be different.
This question is an exact duplicate of
read successive frames OpenCV using cvQueryframe
You have to copy the image to another memory block, because the capture always returns the same pointer.
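
In OpenCV's Java API that copy can be a clone() (a hedged sketch; note that subtract lives in Core in the Java bindings):

if (capture.grab() && capture.retrieve(mRgba)) {
    mRgba2 = mRgba.clone(); // deep copy: mRgba2 now owns its own pixel buffer
}
if (capture.grab()) {
    capture.retrieve(mRgba); // the second frame overwrites the shared buffer
}
Core.subtract(mRgba, mRgba2, output); // now a real frame difference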
