Combine video playback + image target in Vuforia - Android

I am able to show two kinds of 3D objects on one marker in the ImageTargets sample, but how can I combine video playback with an image target on Android? The many parameters confuse me.
Any help will be appreciated.

First, carefully read both Vuforia SDK tutorials, ImageTargets and VideoPlayback.
Then open ImageTargets.cpp and update the function Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame() according to your needs; this is where you can add the video playback code.
Change this function:
JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *, jobject)
{
    // Clear the color and depth buffers
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();

    // Explicitly render the video background
    QCAR::Renderer::getInstance().drawVideoBackground();

#ifdef USE_OPENGL_ES_1_1
    // Set GL11 flags:
    // (the rest of the sample's renderFrame follows here)
#endif
}
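For illustration, here is a minimal sketch of how the trackable loop inside renderFrame() might dispatch between a 3D model and a video quad. The QCAR calls follow the ImageTargets sample (exact names vary between SDK versions), while renderVideoQuad() and renderModel() are hypothetical helpers standing in for the quad-rendering code of the VideoPlayback sample and the model-rendering code of the ImageTargets sample:

// Inside renderFrame(), after drawVideoBackground():
for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
    const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
    const QCAR::Trackable& trackable = result->getTrackable();

    QCAR::Matrix44F modelViewMatrix =
        QCAR::Tool::convertPose2GLMatrix(result->getPose());

    // Dispatch on the target's name: one target plays video,
    // the others render 3D models
    if (strcmp(trackable.getName(), "MyVideoTarget") == 0)
    {
        renderVideoQuad(modelViewMatrix);   // hypothetical helper
    }
    else
    {
        renderModel(modelViewMatrix);       // hypothetical helper
    }
}

// Mark the end of the rendering section
QCAR::Renderer::getInstance().end();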

Related

In Quickblox, how can we rotate a video (90 degrees, for example) in Android app (QBRTCSurfaceView)?

We are streaming a video chat using Quickblox and we'd like to be able to rotate it (90, 180, 270 degrees).
In the iOS SDK this seems possible, but with Android there doesn't seem to be a setting. How can we get around this and display the video rotated? Thanks!
You can use the View method void setRotation(float rotation), like below:
your_video_view.setRotation(90f);
or
your_video_view.setRotation(180f);
or
your_video_view.setRotation(270f);
This method is available from API level 11 (Honeycomb), so you should also guard the call with this condition:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    your_video_view.setRotation(180f);
}
This may be too late, but for anyone wondering how to rotate the video view: you can get the video track and call its addSink() function to receive each frame, then rotate the frames and call onFrame() on the video view. Please find the sample code below.
Remove any existing renderers before adding this code, and do not add any others, because the code below takes over rendering.
localVideoTrack?.track?.addSink { videoFrame ->
    // Re-wrap each incoming frame with a 90-degree rotation
    // and hand it to the view
    localVideoView.onFrame(VideoFrame(videoFrame.buffer, 90, -1))
}

How to replicate the Google Cardboard Photosphere App

I'm extremely new to Unity and VR and I've just been finishing up with Unity tutorials from YouTube. Unfortunately there isn't exactly much documentation on the process one should follow to make a VR app with Unity.
What I need is to be able to replicate the Photosphere App in the Cardboard App for Android. I need to do this using Unity if possible.
The photospheres have been taken with a Nexus 4's camera using the photosphere option, and look like the image below:
I tried following this really nice walkthrough, which attaches a cubemap skybox to the lighting. The problem is that the top and bottom parts of the cube don't seem to show the proper image.
I tried doing it with a six-sided skybox too, but I'm pretty lost about how to proceed with that, primarily because I've just got one photosphere image while the six-sided skybox has six input texture parameters.
I also tried following this link, but the information there is slightly overwhelming.
Any help or pointers in the right direction would be extremely appreciated!
Thank you :)
I also went through the tutorial at the link you provided, but it seems that they did it "manually".
Since you have Unity 3D and the Cardboard SDK for Unity, you don't have to configure the cameras yourself; please follow this tutorial.
There is an alternative approach: create a sphere and put the camera inside it:
http://zhvillues.tumblr.com/post/126331275376/creating-a-360-viewer-using-unity-3d
You have to apply a custom shader to the sphere to render the inside of it.
Shader "Custom/sphereShader" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_Color ("Main Color", Color) = (1,1,1,0.5)
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Front
CGPROGRAM
#pragma surface surf Lambert vertex:vert
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
float4 color : COLOR;
};
void vert(inout appdata_full v)
{
v.normal.xyz = v.normal * -1;
}
void surf (Input IN, inout SurfaceOutput o) {
fixed3 result = tex2D(_MainTex, IN.uv_MainTex);
o.Albedo = result.rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "Diffuse"
}
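Assuming a standard UV-mapped Unity sphere, usage would be: create a material from this shader, assign the photosphere image to its _MainTex slot, put the material on the sphere, and place the camera at the sphere's center. Since a photosphere is a single equirectangular image, it maps onto the sphere's UVs directly, which sidesteps the six-texture problem of the six-sided skybox.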

How to add a frame around a video

I'm developing an app in which I need to create a video with a frame around it. Basically, I get video through the standard camera and then I need to add a frame around it. In the picture, my video needs to go where the blue area is.
I have already read tons of information about video processing and post-processing, OpenCV, FFmpeg, etc. Does anyone know how I can achieve this?
After many hours I found only one solution: use FFmpeg. You can build it and use it through the Android JNI. In my case I used the ffmpeg executable file: in onCreate I install it from the raw resources and then invoke it with the appropriate commands. (There are many solutions on the internet and on Stack Overflow about FFmpeg commands.)
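For illustration, a sketch of the compositing with FFmpeg's overlay filter, assuming frame.png is a frame image of the same size as the output video, with a transparent window where the video should show through (the file names are placeholders):

ffmpeg -i input.mp4 -i frame.png -filter_complex "[0:v][1:v]overlay=0:0" output.mp4

The overlay filter draws the second input on top of the first; because frame.png is a single image, its only frame is repeated for the whole duration of the video.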
This is very simple; try to understand what I have written below. It uses Processing and its video library.
import processing.video.*;

PImage frame;

void setup()
{
    /* Display your frame image here */
    frame = get(); // this captures the screen
}

void movieEvent(Movie m)
{
    m.read();
    // Copy the current video frame into the blue area of the frame image
    frame.copy(m, 0, 0, m.width, m.height,
               Xbluestart, Ybluestart,
               Xblueend - Xbluestart, Yblueend - Ybluestart);
}

void draw()
{
    image(frame, 0, 0);
}
I think this should solve your problem.
P.S. In place of Xbluestart, Ybluestart, and so on, put the actual coordinates of the blue rectangle.

YUV (NV21) to BGR conversion on mobile devices (Native Code)

I'm developing a mobile application that runs on Android and iOS. It is capable of real-time processing of a video stream. On Android I get the camera's preview video stream via android.hardware.Camera.PreviewCallback.onPreviewFrame. I decided to use the NV21 format, since it has to be supported by all Android devices, whereas RGB isn't (or just RGB565).
For my algorithms, which are mostly for pattern recognition, I need grayscale images as well as color information. Grayscale is not a problem (in NV21 the first width × height bytes are simply the Y plane, i.e. a ready-made grayscale image), but the color conversion from NV21 to BGR takes way too long.
As described, I use the following method to capture the images.
In the app, I override the onPreviewFrame handler of the Camera. This is done in CameraPreviewFrameHandler.java:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        AvCore.getInstance().onFrame(data, _prevWidth, _prevHeight, AvStreamEncoding.NV21);
    } catch (NativeException e) {
        e.printStackTrace();
    }
}
The onFrame function then calls a native function which fetches the data from the Java objects as local references. This is converted to an unsigned char* byte stream and passed to the following C++ function, which uses OpenCV to convert from NV21 to BGR:
void CoreManager::api_onFrame(unsigned char* rImageData, avStreamEncoding_t vImageFormat, int vWidth, int vHeight)
{
    // rImageData is a local JNI reference to the Java byte array "data" from onPreviewFrame
    Mat bgrMat;  // Holds the converted image
    Mat origImg; // Holds the original image (OpenCV wrapper around rImageData)
    double ts;   // for profiling

    switch(vImageFormat)
    {
    // other formats
    case NV21:
        origImg = Mat(vHeight + vHeight/2, vWidth, CV_8UC1, rImageData); // fast, only creates a header around rImageData
        bgrMat = Mat(vHeight, vWidth, CV_8UC3); // Prepare Mat for the target image
        ts = avUtils::gettime(); // PROFILING START
        // Note: CV_YUV2BGR_NV21 produces the 3-channel BGR output that
        // bgrMat is declared for (CV_YUV2BGRA_NV21 would produce 4 channels)
        cvtColor(origImg, bgrMat, CV_YUV2BGR_NV21);
        _onFrameBGRConversion.push_back(avUtils::gettime()-ts); // PROFILING END
        break;
    }

    [...APPLICATION LOGIC...]
}
As one might conclude from the comments in the code, I have already profiled the conversion, and it turns out that it takes ~30 ms on my Nexus 4, which is unacceptably long for such a "trivial" pre-processing step. (My profiling methods are double-checked and work properly for real-time measurement.)
Now I'm desperately trying to find a faster implementation of this color conversion from NV21 to BGR. This is what I've already done:
1. Ported the code "convertYUV420_NV21toRGB8888" provided in this topic to C++ (a multiple of the OpenCV conversion time)
2. Modified the code from (1) to use only integer operations (double the conversion time of the OpenCV solution)
3. Browsed through a couple of other implementations, all with similar conversion times
4. Checked the OpenCV implementation; they use a lot of bit shifting to get performance. I guess I'm not able to do better on my own.
Do you have suggestions, know of good implementations, or even have a completely different way to work around this problem? I somehow need to capture RGB/BGR frames from the Android camera, and it should work on as many Android devices as possible.
Thanks for your replies!
Did you try libyuv? I used it in the past, and if you compile it with NEON support it uses assembly code optimized for ARM processors. You can start from there and further optimize for your particular situation.
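For illustration, a minimal sketch of what the conversion might look like with libyuv, assuming a little-endian ARM device. libyuv's NV21ToARGB is named after the word-order FOURCC, so its output bytes are B, G, R, A in memory, the same layout as OpenCV's BGRA:

#include "libyuv.h"
#include <opencv2/core/core.hpp>

// Convert an NV21 camera frame into a 4-channel BGRA cv::Mat.
cv::Mat nv21ToBgra(const unsigned char* nv21, int width, int height)
{
    cv::Mat bgra(height, width, CV_8UC4);
    const unsigned char* srcY  = nv21;                  // luma plane
    const unsigned char* srcVU = nv21 + width * height; // interleaved V/U plane
    libyuv::NV21ToARGB(srcY,  width,                    // Y plane and its stride
                       srcVU, width,                    // VU plane and its stride
                       bgra.data, static_cast<int>(bgra.step),
                       width, height);
    return bgra;
}

If you really need 3-channel BGR, a final cv::cvtColor(bgra, bgr, CV_BGRA2BGR) is still much cheaper than the full YUV conversion.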

OpenCV imwrite, the colors are wrong: Which argument for convertTo/cvtColor do I use?

Okay, so I've been trying and searching online, but I can't find this.
I have:
OpenCV4Android which I am using in a mixed fashion: Java and Native.
a Mat obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
This cannot be changed to native code because it is someone else's library, and it is entirely built in a non-native way.
Native methods to which I pass this Mat using mFrame.nativeObj, as in:
JNIEXPORT int JNICALL Java_com_... ( ...jlong addr ... )
{
    Mat& mrgba = *((Mat*)addr);
    // do stuff
    imwrite( ..., mrgba ); // imwrite takes the filename first, then the Mat
}
Now... I use this matrix and then write it with imwrite, all in this native part. Although imwrite does write a file, its colors are all wrong: red where they should be white, green where they should be black, and purple where they should be the color of my table, i.e. yellowish. Now, instead of blindly trying cvtColor and convertTo, I'd rather understand what is going on.
What is the number of channels, type, channel order and whatnot that I should know about a frame that was first retrieved with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
and then passed through JNI to native OpenCV? Effectively, what conversions do I need to do for native imwrite to behave?
For some reason, the image obtained with
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
needs to be converted like so:
cvtColor(mrgba, mbgr, CV_YCrCb2RGB, 4);
in order for imwrite to correctly output an image to the SD card.
I don't understand why (imwrite is supposed to accept BGR images), but at least this answers my question.
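As a side note, if the retrieved frame really were RGBA as the flag name suggests, the conventional conversion before imwrite (which expects BGR channel order) would be the following sketch; the output path is a placeholder:

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

// mrgba is the RGBA frame received from Java; imwrite expects BGR,
// so swap the channel order and drop the alpha channel before writing.
cv::Mat mbgr;
cv::cvtColor(mrgba, mbgr, CV_RGBA2BGR);
cv::imwrite("/sdcard/out.png", mbgr);

That the YCrCb constant is what works in practice suggests the frames delivered by capture.retrieve() are not actually in the format the flag name implies.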
Try
capture.retrieve(mFrame, Highgui.CV_CAP_ANDROID_COLOR_FRAME_BGRA);
because OpenCV stores images with blue, green, and red channel order instead of red, green, blue.
