Grafika TextureMovieEncoder - android

I have recently been modifying Grafika's TextureMovieEncoder to record what I display onscreen: two overlapping Sprite2ds. Using the CameraCaptureActivity example as a reference point, I effectively ported my rendering thread's code into the TextureMovieEncoder, but the output is jagged lines across the screen. I think I understand what's wrong, but I don't know how to fix it:
Some code:
private void prepareEncoder(EGLContext sharedContext, int width, int height, int bitRate,
        File outputFile) {
    try {
        mVideoEncoder = new VideoEncoderCore(width, height, bitRate, outputFile);
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
    // Wrap the encoder's input surface in an EGL window surface we can render into.
    mEglCore = new EglCore(sharedContext, EglCore.FLAG_RECORDABLE);
    mInputWindowSurface = new WindowSurface(mEglCore, mVideoEncoder.getInputSurface(), true);
    mInputWindowSurface.makeCurrent();
    textureProgram = new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT);
    backgroundDrawable = new Drawable2d(Drawable2d.Prefab.RECTANGLE);
    backgroundRect = new Sprite2d(backgroundDrawable);
    frontDrawable = new Drawable2d(Drawable2d.Prefab.RECTANGLE);
    frontRect = new Sprite2d(frontDrawable);
    backgroundRect.setTexture(backTextureId);
    frontRect.setTexture(frontTextureId);
    updateGeometry();
}
private void handleFrameAvailable(Transform transform, long timestampNanos) {
    if (VERBOSE) Log.d(TAG, "handleFrameAvailable tr=" + transform);
    mVideoEncoder.drainEncoder(false);
    backgroundRect.draw(textureProgram, transform.movieMatrix);
    frontRect.draw(textureProgram, transform.cameraMatrix);
    mInputWindowSurface.setPresentationTime(timestampNanos);
    mInputWindowSurface.swapBuffers();
}
I think the problem comes down to my lack of understanding of how to establish the right projection onto the WindowSurface for the VideoEncoder. The Grafika example uses FullFrameRect, which is easier since you can just use the identity matrix to stretch a given texture over the surface area. However, since I want to create the overlapping effect, I needed to use Sprite2d. Is the problem the shared EGLContext? Do I need to create a new one so that I can set the viewport to match the WindowSurface size? I'm a bit lost on where to go from here.

Turns out the functionality of the code above was fine. The problem was the interaction between the TextureMovieEncoder and the calling parent: I was initializing the member variables backTextureId and frontTextureId after prepareEncoder, so the encoder was recording garbage data into the output.
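In other words, the two texture IDs must already be valid GL texture names by the time prepareEncoder() reaches the setTexture() calls. A minimal sketch of the corrected ordering on the calling side (field and parameter names as in the code above; how the textures get their content is omitted):

// Generate the texture names first, with the shared EGL context current,
// so setTexture() inside prepareEncoder() receives valid IDs.
int[] tex = new int[2];
GLES20.glGenTextures(2, tex, 0);
backTextureId = tex[0];
frontTextureId = tex[1];
// ... bind each texture and upload/attach its content here ...

// Only now set up the encoder, which binds those IDs to the sprites.
prepareEncoder(sharedContext, width, height, bitRate, outputFile);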

Related

Is there a way to delay a video feed from a cell phone camera in Unity?

I am making a project where you are supposed to be able to change the delay with which the feed from the cell phone camera is shown, so that people can see how their brains handle the delay/latency. I have managed to show the camera feed on a canvas that follows the camera around and fills the whole view of the Google Cardboard, but I am wondering how I could delay this video feed. Perhaps by using an image array of some sort?
I have tried searching for solutions online, but I have come up short of an answer. I tried a Texture2D array, but the performance was really bad (I tried a modified version of this).
private bool camAvailable;
private WebCamTexture backCam;
private Texture defaultBackground;
public RawImage background;
public AspectRatioFitter fit;

// Start is called before the first frame update
void Start()
{
    defaultBackground = background.texture;
    WebCamDevice[] devices = WebCamTexture.devices;
    if (devices.Length == 0)
    {
        Debug.Log("No camera detected");
        camAvailable = false;
        return;
    }
    for (int i = 0; i < devices.Length; i++)
    {
        if (!devices[i].isFrontFacing)
        {
            backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height); // Used to find the correct camera
        }
    }
    if (backCam == null)
    {
        Debug.Log("Unable to find back camera");
        return;
    }
    backCam.Play();
    background.texture = backCam;
    camAvailable = true;
} // Tell me if this is not enough code; I don't have a lot of experience in Unity, so I am unsure of how much is required for a minimal reproducible example
Should I use some sort of frame buffer or image/texture array to delay the video? (Start "recording", wait a specified amount of time, then start playing the "video" on the screen.)
Thanks in advance!
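A fixed-size ring buffer is the usual structure for the idea described above: each new frame overwrites the oldest slot, and the displayed frame is read from the slot written delayFrames earlier. Below is a minimal, language-agnostic sketch of the bookkeeping (written in Java for illustration; DelayBuffer is a hypothetical name, and in Unity the slots would be Texture2D objects filled from the WebCamTexture each frame, e.g. via GetPixels32/SetPixels32):

// Minimal fixed-delay frame buffer: store the newest frame, return the one
// captured delayFrames earlier. Slots are plain byte[] here for illustration.
class DelayBuffer {
    private final byte[][] slots;
    private int writeIndex = 0;

    DelayBuffer(int delayFrames) {
        slots = new byte[delayFrames + 1][];
    }

    // Returns null until the buffer has filled once.
    byte[] pushAndGetDelayed(byte[] newest) {
        slots[writeIndex] = newest;
        writeIndex = (writeIndex + 1) % slots.length;
        return slots[writeIndex]; // this slot is exactly delayFrames behind
    }
}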

Rajawali Cardboard OBJ File

I just started using Rajawali and the Cardboard SDK (the integration you can find in Rajawali's repository). Based on the loader examples in the repository, and following the instructions to set up a new project, I created an example where I create a sphere (with a texture attached) and load an OBJ file. The odd thing is that I can see the sphere and its texture, but not the OBJ object. I created a similar example where the only difference is the class I'm extending: in one example I extend RajawaliRenderer (in this one I see the OBJ file), and in the other I extend RajawaliCardboardRenderer. I would really appreciate it if you could give me a hand or provide an example, because I'm stuck and have tried everything I can think of.
This is the content of my initScene method in both examples:
public void initScene() {
    directionalLight = new DirectionalLight(1f, .2f, -1.0f);
    directionalLight.setColor(1.0f, 1.0f, 1.0f);
    directionalLight.setPower(2);
    getCurrentScene().addLight(directionalLight);
    Material material = new Material();
    material.enableLighting(true);
    material.setDiffuseMethod(new DiffuseMethod.Lambert());
    material.setColor(0);
    Texture earthTexture = new Texture("Earth", R.drawable.earthtruecolor_nasa_big);
    try {
        material.addTexture(earthTexture);
    } catch (ATexture.TextureException error) {
        Log.d("DEBUG", "TEXTURE ERROR");
    }
    earthSphere = new Sphere(1, 24, 24);
    earthSphere.setMaterial(material);
    getCurrentScene().addChild(earthSphere);
    getCurrentCamera().setZ(14.2f);
    final LoaderOBJ loaderOBJ = new LoaderOBJ(mContext.getResources(), mTextureManager, R.raw.multiobjects_obj);
    loadModel(loaderOBJ, this, R.raw.multiobjects_obj);
}
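Since loadModel() is asynchronous, the parsed object only appears if it is added to the scene in the loader callback. A sketch of what that callback usually looks like in Rajawali's loader examples (assuming the renderer implements IAsyncLoaderCallback, which passing `this` to loadModel() suggests):

@Override
public void onModelLoadComplete(ALoader loader) {
    // Add the parsed OBJ hierarchy to the scene once async loading finishes.
    Object3D parsedObject = ((LoaderOBJ) loader).getParsedObject();
    getCurrentScene().addChild(parsedObject);
}

@Override
public void onModelLoadFailed(ALoader loader) {
    Log.d("DEBUG", "OBJ LOAD FAILED");
}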

How to record screen and take screenshots, using Android API?

Background
Android gained a new API on KitKat and Lollipop for capturing video of the screen. You can do it either via the ADB tool or via code (starting from Lollipop).
Ever since the new API came out, many apps have appeared that use this feature to record the screen, and Microsoft even made its own Google-Now-On-Tap competitor app.
Using ADB, you can run:
adb shell screenrecord /sdcard/video.mp4
You can even do it from within Android Studio itself.
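For reference, screenrecord also takes flags for resolution, bit rate, and maximum duration, which covers part of the customization question below, for example:

adb shell screenrecord --size 1280x720 --bit-rate 4000000 --time-limit 30 /sdcard/video.mp4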
The problem
I can't find any tutorial or explanation of how to do it using the API, meaning in code.
What I've found
The only place I've found is the documentation (here, under "Screen capturing and sharing"), which tells me this:
Android 5.0 lets you add screen capturing and screen sharing capabilities to your app with the new android.media.projection APIs. This functionality is useful, for example, if you want to enable screen sharing in a video conferencing app.

The new createVirtualDisplay() method allows your app to capture the contents of the main screen (the default display) into a Surface object, which your app can then send across the network. The API only allows capturing non-secure screen content, and not system audio. To begin screen capturing, your app must first request the user's permission by launching a screen capture dialog using an Intent obtained through the createScreenCaptureIntent() method.

For an example of how to use the new APIs, see the MediaProjectionDemo class in the sample project.
Thing is, I can't find any "MediaProjectionDemo" sample. Instead, I found the "Screen Capture" sample, but I don't understand how it works: when I ran it, all I saw was a blinking screen, and I don't think it saves the video to a file. The sample seems very buggy.
The questions
How do I perform those actions using the new API:
start recording, optionally including audio (mic/speaker/both).
stop recording
take a screenshot instead of video.
Also, how do I customize it (resolution, requested fps, colors, time...)?
The first step, which Ken White rightly suggested and which you may already have covered, is the officially provided example code.
I have used this API before. I agree that taking a screenshot is pretty straightforward, but screen recording works along similar lines.
I will answer your questions in three sections and wrap up with a link. :)
1. Start Video Recording
private void startScreenRecord(final Intent intent) {
    if (DEBUG) Log.v(TAG, "startScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer == null) {
            final int resultCode = intent.getIntExtra(EXTRA_RESULT_CODE, 0);
            // get MediaProjection
            final MediaProjection projection = mMediaProjectionManager.getMediaProjection(resultCode, intent);
            if (projection != null) {
                final DisplayMetrics metrics = getResources().getDisplayMetrics();
                final int density = metrics.densityDpi;
                if (DEBUG) Log.v(TAG, "startRecording:");
                try {
                    sMuxer = new MediaMuxerWrapper(".mp4"); // if you record audio only, ".m4a" is also OK.
                    if (true) {
                        // for screen capturing
                        new MediaScreenEncoder(sMuxer, mMediaEncoderListener,
                                projection, metrics.widthPixels, metrics.heightPixels, density);
                    }
                    if (true) {
                        // for audio capturing
                        new MediaAudioEncoder(sMuxer, mMediaEncoderListener);
                    }
                    sMuxer.prepare();
                    sMuxer.startRecording();
                } catch (final IOException e) {
                    Log.e(TAG, "startScreenRecord:", e);
                }
            }
        }
    }
}
2. Stop Video Recording
private void stopScreenRecord() {
    if (DEBUG) Log.v(TAG, "stopScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.stopRecording();
            sMuxer = null;
            // you should not wait here
        }
    }
}
2.5. Pause and Resume Video Recording
private void pauseScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.pauseRecording();
        }
    }
}

private void resumeScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.resumeRecording();
        }
    }
}
Hope the code helps. Here is the original link to the code I referred to, from which this implementation (video recording) is also derived.
3. Take a Screenshot Instead of Video
I think capturing the image in bitmap format is fairly straightforward by default. You can still go ahead with the MediaProjectionDemo example to capture a screenshot.
[EDIT]: Code excerpt for taking a screenshot
a. Create a virtual display matching the device width/height:
mImageReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 2);
mVirtualDisplay = sMediaProjection.createVirtualDisplay(SCREENCAP_NAME, mWidth, mHeight, mDensity, VIRTUAL_DISPLAY_FLAGS, mImageReader.getSurface(), null, mHandler);
mImageReader.setOnImageAvailableListener(new ImageAvailableListener(), mHandler);
b. Then start the screen capture from an intent or action:
startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
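The user's answer to that dialog comes back in onActivityResult(), which is where the MediaProjection itself is obtained (a minimal sketch; REQUEST_CODE, mProjectionManager, and sMediaProjection are the same names used in this section):

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK) {
        // Permission granted: this projection can now back createVirtualDisplay().
        sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
    }
}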
Stop the media projection:
sMediaProjection.stop();
c. Then convert the capture to an image:
// Process the media capture
Image image = mImageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * mWidth;
// Create bitmap (row padding makes it slightly wider than mWidth)
Bitmap bitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
image.close(); // always close the Image, or acquireLatestImage() will eventually fail
// Write bitmap to file in some path on the phone
FileOutputStream fos = new FileOutputStream(STORE_DIRECTORY + "/myscreen_" + IMAGES_PRODUCED + ".png");
bitmap.compress(CompressFormat.PNG, 100, fos);
fos.close();
There are several implementations (full code) of the Media Projection API available. Some other links that can help you in your development:
Video Recording with MediaProjectionManager - website
android-ScreenCapture - the Android developers' sample on GitHub :)
screenrecorder - github
Capture and Record Android Screen using MediaProjection APIs - website
Hope it helps :) Happy coding and screen recording!
PS: Can you please tell me the Microsoft app you are talking about? I have not used it. Would like to try it :)

Android: decode JPGs in loop: random freezes

Here's another question for you :)
Basically, I made a realtime streaming service that sends multiple JPEGs to my Android app, which decodes them as soon as it receives them.
// dIn is DataInputStream
// videoFeed is an ImageView
// bitmap is Bitmap
// hand is an Handler of the main thread
// CODE EXECUTED IN ANOTHER THREAD
byte[] inBuff = new byte[8];
byte[] imgBuff;
String inMsg;
while (socket.isConnected()) {
    dIn.readFully(inBuff);
    inMsg = new String(inBuff, "ASCII").trim();
    int size = Integer.parseInt(inMsg);
    imgBuff = new byte[size];
    dIn.readFully(imgBuff);
    out.write("SEND-NEXT-JPEG".getBytes("ASCII")); // out is the socket's OutputStream
    bitmap = BitmapFactory.decodeByteArray(imgBuff, 0, size);
    hand.post(setImage);
}

private Runnable setImage = new Runnable() {
    @Override
    public void run() {
        videoFeed.setImageBitmap(bitmap);
    }
};
The problem is that after about 10 or 20 JPEGs are decoded perfectly in real time, the app freezes for around 400 ms, then continues decoding another 10-20 JPEGs before freezing again...
I know that sending a sequence of JPEGs is not a good way to stream video, but I can only change the client (the Android app), not the server.
Do you have any ideas for getting fluid video and avoiding the freezes? Thanks!
Right now, you are using the three-parameter version of decodeByteArray(). Instead, switch to the four-parameter version, passing a BitmapFactory.Options as the last value. On it, set inBitmap to a Bitmap object that can be reused.
This requires you to maintain a small Bitmap object pool. It could be as simple as two Bitmap instances: the one that is presently being displayed and the one that you are preparing for the next "frame" of the video.
The catch is that, for API level 18 and below, the Bitmap needs to have the same resolution (height and width in pixels). In your case, that's probably not a problem, as I would imagine each of your frames has the same resolution.
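A minimal sketch of that two-bitmap pool (decodeByteArray() and BitmapFactory.Options are the standard android.graphics APIs; the read loop is abbreviated to the parts that change from the question's code):

// Two-bitmap pool: one is on screen, the other gets overwritten by each decode.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true;          // decoded bitmaps must be mutable to be reused
Bitmap displayed = null;        // currently shown by the ImageView
Bitmap spare = null;            // safe to overwrite
while (socket.isConnected()) {
    // ... read size and imgBuff exactly as in the question ...
    opts.inBitmap = spare;      // null on the first passes, then a recycled bitmap
    Bitmap frame = BitmapFactory.decodeByteArray(imgBuff, 0, size, opts);
    spare = displayed;          // the frame leaving the screen becomes reusable
    displayed = frame;          // post this one to the main thread, as before
}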

How to handle Out of Memory error when loading single Bitmap from Camera

My question is how to handle an Out of Memory error when decoding a byte array into a bitmap so I can rotate it. My code is below, and before you say it's a duplicate: I have tried using BitmapFactory.Options and setting the sample size to 2, but the quality loss was far too bad to be acceptable. It also appears to be happening on only one device, so maybe it's a one-off thing, but I'm inclined to believe that if it affects one device, there will be 25 more like it later. It happens on the FIRST photo taken, and this is the only bitmap work this activity does. I'm working in Monodroid, but Java answers are welcome too, as I can usually translate them to C# fairly easily.
public void GotImage(byte[] image)
{
    try
    {
        Android.Graphics.Bitmap thePicture = Android.Graphics.BitmapFactory.DecodeByteArray(image, 0, image.Length);
        Array.Clear(image, 0, image.Length);
        image = null;
        GC.Collect();
        Android.Graphics.Matrix m = new Android.Graphics.Matrix();
        m.PostRotate(90);
        Android.Graphics.Bitmap rotatedPicture = Android.Graphics.Bitmap.CreateBitmap(thePicture, 0, 0, thePicture.Width, thePicture.Height, m, true);
        thePicture.Dispose();
        thePicture = null;
        GC.Collect();
        using (MemoryStream ms = new MemoryStream())
        {
            rotatedPicture.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
            image = ms.ToArray();
        }
        rotatedPicture.Dispose();
        rotatedPicture = null;
        GC.Collect();
        listOfImages.Add(image);
        storeButton.Text = " Store " + listOfImages.Count + " Pages ";
        storeButton.Enabled = true;
        takePicButton.Enabled = true;
        gotImage = false;
        cameraPreviewArea.camera.StartPreview();
    }
    catch (Exception ex)
    {
        AlertDialog.Builder alertDialog = new AlertDialog.Builder(this);
        alertDialog.SetTitle("Error Taking Picture");
        alertDialog.SetMessage(ex.ToString());
        alertDialog.SetPositiveButton("OK", delegate { });
        alertDialog.Show();
    }
}
What does rotatedPicture.Dispose() do? Does it just set the reference to null? The best and quickest way to release a Bitmap's memory is via the recycle() method.
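For reference, the pattern that comment is pointing at looks like this in Java (in Monodroid, Dispose() only releases the managed wrapper, while Recycle() frees the native pixel buffer immediately); `source` stands in for the decoded camera bitmap from the question:

Matrix m = new Matrix();
m.postRotate(90);
Bitmap rotated = Bitmap.createBitmap(source, 0, 0,
        source.getWidth(), source.getHeight(), m, true);
// Free the source's native pixel memory right away instead of waiting for GC.
if (rotated != source) {
    source.recycle();
}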
Well, after a long day of learning, I discovered a fix/workaround: set the resolution of the picture before it is taken, instead of trying to scale it down after the fact. I also added a setting that lets the user try different resolutions until they find the one that works best for them.
// Ask the camera for a smaller capture size up front; the last entry in
// SupportedPictureSizes is often (though not guaranteed to be) the smallest.
Camera.Parameters parameters = camera.GetParameters();
parameters.SetPictureSize(parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Width,
    parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Height);
camera.SetParameters(parameters);
camera.StartPreview();
