I've launched my new libGDX game on the Play Store and I've started getting crash reports.
This problem started when I updated to a newer version of libGDX, 1.9.7.
Here is my crash report:
java.lang.IllegalStateException:
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer.build (GLFrameBuffer.java:233)
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer.<init> (GLFrameBuffer.java:87)
at com.badlogic.gdx.graphics.glutils.FrameBuffer.<init> (FrameBuffer.java:51)
at com.badlogic.gdx.graphics.glutils.GLFrameBuffer$FrameBufferBuilder.build (GLFrameBuffer.java:474)
at com.dui.Screens.DirectedGame.setScreen (DirectedGame.java:57)
at com.dui.DuiGame.create (DuiGame.java:20)
at com.badlogic.gdx.backends.android.AndroidGraphics.onSurfaceChanged (AndroidGraphics.java:311)
at android.opengl.GLSurfaceView$GLThread.guardedRun (GLSurfaceView.java:1528)
at android.opengl.GLSurfaceView$GLThread.run (GLSurfaceView.java:1249)
Here is my setScreen method:
public void setScreen(AbstractGameScreen screen, ScreenTransition screenTransition) {
int w = Gdx.graphics.getWidth();
int h = Gdx.graphics.getHeight();
if (!init) {
GLFrameBuffer.FrameBufferBuilder frameBufferBuilder = new GLFrameBuffer.FrameBufferBuilder(w, h);
frameBufferBuilder.addColorTextureAttachment(GL30.GL_RGB8, GL30.GL_RGB, GL30.GL_UNSIGNED_BYTE);
currFbo = frameBufferBuilder.build();
nextFbo = frameBufferBuilder.build();
batch = new SpriteBatch();
init = true;
}
// start new transition
nextScreen = screen;
nextScreen.show(); // activate next screen
nextScreen.resize(w, h);
nextScreen.render(0); // let next screen update() once
if (currScreen != null) currScreen.pause();
nextScreen.pause();
Gdx.input.setInputProcessor(null); // disable input
this.screenTransition = screenTransition;
t = 0;
}
On the libGDX news page there is an example of building a custom FrameBuffer, but I'm not sure whether I've implemented it correctly.
Can someone give me a solution to my problem?
Thanks
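For reference, the GL_RGB8 / GL_RGB color attachment requested by the builder above is a GLES 3.0 sized format, so one thing worth checking is whether the build fails on GLES 2.0 devices. The sketch below is only an illustration of the plain, non-builder FrameBuffer constructor that libGDX provides, not a confirmed fix:
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class TransitionBuffers {
    // Builds the two capture buffers with the simple constructor, which sets up
    // a GLES 2.0-compatible color attachment (RGB888, no depth buffer).
    public static FrameBuffer[] create() {
        int w = Gdx.graphics.getWidth();
        int h = Gdx.graphics.getHeight();
        FrameBuffer curr = new FrameBuffer(Pixmap.Format.RGB888, w, h, false);
        FrameBuffer next = new FrameBuffer(Pixmap.Format.RGB888, w, h, false);
        return new FrameBuffer[] { curr, next };
    }
}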
I am making a project where you are supposed to be able to change the delay with which the feed from the cell phone camera is shown, so that people can see how their brains handle the delay/latency. I have managed to show the camera feed on a canvas that follows the camera around and fills the whole view of the Google Cardboard, but I am wondering how I could delay this video feed. Perhaps by using an image array of some sort?
I have tried searching for solutions online, but I have come up short of an answer. I have tried a Texture2D array, but the performance was really bad (I tried a modified version of this).
private bool camAvailable;
private WebCamTexture backCam;
private Texture defaultBackground;
public RawImage background;
public AspectRatioFitter fit;
// Start is called before the first frame update
void Start()
{
defaultBackground = background.texture;
WebCamDevice[] devices = WebCamTexture.devices;
if (devices.Length == 0 )
{
Debug.Log("No camera detected");
camAvailable = false;
return;
}
for (int i = 0; i < devices.Length; i++)
{
if (!devices[i].isFrontFacing)
{
backCam = new WebCamTexture(devices[i].name, Screen.width, Screen.height); // Used to find the correct camera
}
}
if (backCam == null)
{
Debug.Log("Unable to find back camera");
return;
}
backCam.Play();
background.texture = backCam;
camAvailable = true;
}
Tell me if this is not enough code; I don't really have a lot of experience in Unity, so I am unsure of how much is required for a minimal reproducible example.
Should I use some sort of frame buffer or image/texture array for delaying the video? (Start "recording", wait a specified amount of time, then start playing the "video" on the screen.)
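One way to get that record-wait-play behaviour is a ring buffer of RenderTextures. The sketch below is only an assumption-laden illustration (the component name, the 60 fps estimate and delaySeconds are all made up for the example), not a tested solution:
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: copy each camera frame into a ring of RenderTextures
// and display the frame that was captured roughly delaySeconds ago.
public class DelayedCamFeed : MonoBehaviour
{
    public RawImage background;     // the same RawImage used for the live feed
    public WebCamTexture backCam;   // assigned from your existing script after backCam.Play()
    public float delaySeconds = 0.5f;

    private RenderTexture[] ring;
    private int writeIndex;
    private int size;

    void Start()
    {
        // Assume roughly 60 updates per second; size the ring to cover the delay.
        size = Mathf.Max(2, Mathf.CeilToInt(delaySeconds * 60f));
        ring = new RenderTexture[size];
        for (int i = 0; i < size; i++)
            ring[i] = new RenderTexture(Screen.width, Screen.height, 0);
    }

    void Update()
    {
        if (backCam == null || !backCam.isPlaying) return;

        // Write the newest frame into the slot we are about to overwrite...
        Graphics.Blit(backCam, ring[writeIndex]);

        // ...and show the oldest frame, which sits in the next slot of the ring.
        background.texture = ring[(writeIndex + 1) % size];

        writeIndex = (writeIndex + 1) % size;
    }
}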
Thanks in advance!
I want to know how I can display video from jpegs in Xamarin (all platforms).
My jpegs are being streamed over an HTTP client stream sent by a popular video surveillance management software.
My jpegs are in the form of byte[] and I get about 10 jpegs/second. This format is imposed.
I tried rapidly changing the Source on an Image, but it results in severe flickering on Android. It seems to work on Windows Phone, but performance is not great.
How can I create a video player from these frames? Unless I am wrong, the existing components cannot do this.
Best,
Thank you Jason! Works great, very fluid rendering!!
Simply add the SkiaSharp.Views.Forms with NuGet to the project and voila!
Here is what that would look like in code (shared project):
// Fields shared with the paint handler and UpdateFrame below
SKCanvasView videoCanvas;
byte[] lastFrame;

// Content page initialization
private void InitUI() {
Title = "Xamavideo";
var button = new Button
{
Text = "Connect!"
};
Label label = new Label
{
Text = ""
};
var scroll = new ScrollView();
scroll.BackgroundColor = Color.Black;
Content = scroll;
var stack = new StackLayout
{
Padding = 40,
Spacing = 10
};
//Add a SKCanvasView item to the stack (stored in the videoCanvas field)
videoCanvas = new SKCanvasView
{
HeightRequest = 400,
WidthRequest = 600,
};
videoCanvas.PaintSurface += OnCanvasViewPaintSurface;
stack.Children.Add(videoCanvas);
// attach the stack to the page content so the canvas is actually displayed
scroll.Content = stack;
}
//Create the event handler
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
SKImageInfo info = args.Info;
SKSurface surface = args.Surface;
if (lastFrame == null) return;
using (var canvas = surface.Canvas)
// use SKBitmap.Decode to decode the byte[] in jpeg format
using (var bitmap = SKBitmap.Decode(lastFrame))
using (var paint = new SKPaint())
{
// clear the canvas / fill with black
canvas.DrawColor(SKColors.Black);
canvas.DrawBitmap(bitmap, SKRect.Create(640, 480), paint);
}
}
void UpdateFrame(VideoClient client){
// Store the newest jpeg for the paint handler, then ask the canvas to repaint:
lastFrame = client.imageBytes; // assign the field (not a new local) so OnCanvasViewPaintSurface sees it
videoCanvas.InvalidateSurface();
}
Background
Android got a new API in KitKat and Lollipop for capturing video of the screen. You can do it either via the ADB tool or via code (starting from Lollipop).
Ever since the new API came out, many apps have appeared that use this feature to record the screen, and Microsoft even made its own Google-Now-On-Tap competitor app.
Using ADB, you can use:
adb shell screenrecord /sdcard/video.mp4
You can even do it from within Android Studio itself.
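The screenrecord tool also accepts a few flags for customizing the capture (documented options of the tool; exact availability depends on the Android version), for example:
adb shell screenrecord --size 1280x720 --bit-rate 4000000 --time-limit 30 /sdcard/video.mp4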
The problem
I can't find any tutorial or explanation about how to do it using the API, meaning in code.
What I've found
The only place I've found is the documentation (here, under "Screen capturing and sharing"), which tells me this:
Android 5.0 lets you add screen capturing and screen sharing
capabilities to your app with the new android.media.projection APIs.
This functionality is useful, for example, if you want to enable
screen sharing in a video conferencing app.
The new createVirtualDisplay() method allows your app to capture the
contents of the main screen (the default display) into a Surface
object, which your app can then send across the network. The API only
allows capturing non-secure screen content, and not system audio. To
begin screen capturing, your app must first request the user’s
permission by launching a screen capture dialog using an Intent
obtained through the createScreenCaptureIntent() method.
For an example of how to use the new APIs, see the MediaProjectionDemo
class in the sample project.
The thing is, I can't find any "MediaProjectionDemo" sample. Instead, I've found a "Screen Capture" sample, but I don't understand how it works: when I ran it, all I saw was a blinking screen, and I don't think it saves the video to a file. The sample seems very buggy.
The questions
How do I perform those actions using the new API:
start recording, optionally including audio (mic/speaker/both)
stop recording
take a screenshot instead of a video
Also, how do I customize it (resolution, requested fps, colors, time...)?
The first step, which Ken White rightly suggested and which you may have already covered, is the example code provided officially.
I have used their API before. I agree that taking a screenshot is pretty straightforward, but screen recording works along similar lines.
I will answer your questions in 3 sections and will wrap it up with a link. :)
1. Start Video Recording
private void startScreenRecord(final Intent intent) {
if (DEBUG) Log.v(TAG, "startScreenRecord:sMuxer=" + sMuxer);
synchronized(sSync) {
if (sMuxer == null) {
final int resultCode = intent.getIntExtra(EXTRA_RESULT_CODE, 0);
// get MediaProjection
final MediaProjection projection = mMediaProjectionManager.getMediaProjection(resultCode, intent);
if (projection != null) {
final DisplayMetrics metrics = getResources().getDisplayMetrics();
final int density = metrics.densityDpi;
if (DEBUG) Log.v(TAG, "startRecording:");
try {
sMuxer = new MediaMuxerWrapper(".mp4"); // if you record audio only, ".m4a" is also OK.
if (true) {
// for screen capturing
new MediaScreenEncoder(sMuxer, mMediaEncoderListener,
projection, metrics.widthPixels, metrics.heightPixels, density);
}
if (true) {
// for audio capturing
new MediaAudioEncoder(sMuxer, mMediaEncoderListener);
}
sMuxer.prepare();
sMuxer.startRecording();
} catch (final IOException e) {
Log.e(TAG, "startScreenRecord:", e);
}
}
}
}
}
2. Stop Video Recording
private void stopScreenRecord() {
if (DEBUG) Log.v(TAG, "stopScreenRecord:sMuxer=" + sMuxer);
synchronized(sSync) {
if (sMuxer != null) {
sMuxer.stopRecording();
sMuxer = null;
// you should not wait here
}
}
}
2.5. Pause and Resume Video Recording
private void pauseScreenRecord() {
synchronized(sSync) {
if (sMuxer != null) {
sMuxer.pauseRecording();
}
}
}
private void resumeScreenRecord() {
synchronized(sSync) {
if (sMuxer != null) {
sMuxer.resumeRecording();
}
}
}
Hope the code helps. Here is the original link to the code that I referred to, and from which this implementation (video recording) is derived.
3. Take a Screenshot Instead of Video
I think by default it's easy to capture the image in bitmap format. You can still go ahead with the MediaProjectionDemo example to capture a screenshot.
[EDIT]: Code excerpt for screenshot
a. Create a virtual display based on the device width / height:
mImageReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 2);
mVirtualDisplay = sMediaProjection.createVirtualDisplay(SCREENCAP_NAME, mWidth, mHeight, mDensity, VIRTUAL_DISPLAY_FLAGS, mImageReader.getSurface(), null, mHandler);
mImageReader.setOnImageAvailableListener(new ImageAvailableListener(), mHandler);
b. Then start the screen capture based on an intent or action:
startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
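The MediaProjection itself (the sMediaProjection used above) is obtained in onActivityResult once the user grants permission; a minimal sketch of that hand-off (mProjectionManager and REQUEST_CODE are the same names used above):
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode != REQUEST_CODE) return;
    if (resultCode != RESULT_OK) {
        // the user declined the screen capture permission dialog
        return;
    }
    // hand the consent result to MediaProjectionManager to get the projection
    sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
    // the virtual display from step (a) can be created from here
}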
Stop the media projection:
sMediaProjection.stop();
c. Then convert the capture to an image:
//Process the media capture
image = mImageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * mWidth;
//Create bitmap
bitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
// release the Image so the ImageReader can deliver the next frame
image.close();
//Write Bitmap to file in some path on the phone
fos = new FileOutputStream(STORE_DIRECTORY + "/myscreen_" + IMAGES_PRODUCED + ".png");
bitmap.compress(CompressFormat.PNG, 100, fos);
fos.close();
There are several implementations (full code) of the Media Projection API available.
Some other links that can help you in your development-
Video Recording with MediaProjectionManager - website
android-ScreenCapture - github as per android developer's observations :)
screenrecorder - github
Capture and Record Android Screen using MediaProjection APIs - website
Hope it helps :) Happy coding and screen recording!
PS: Can you please tell me the Microsoft app you are talking about? I have not used it. Would like to try it :)
I'm making an AS3 app for Android that uses the camera roll to load, select and then use an image.
The camera roll browser works fine, but when the image is selected, the app crashes almost every time (it has worked on a handful of occasions!?). We assumed a memory issue and attempted to close all other windows, use smaller photos in the camera roll, and try different devices, but we cannot recreate success consistently. We cannot find another ANE that works either.
It fails at the point where the photo has been selected, and RESTARTS the app...
Here's the relevant code; any help appreciated.
public function openGallery():void {
var cameraRoll:CameraRoll = new CameraRoll();
if(CameraRoll.supportsBrowseForImage) {
cameraRoll.addEventListener(MediaEvent.SELECT, imageSelected);
cameraRoll.addEventListener(flash.events.Event.CANCEL, browseCanceled);
cameraRoll.addEventListener(flash.events.ErrorEvent.ERROR, mediaError);
cameraRoll.browseForImage();
}
else { trace( "Image browsing is not supported on this device."); }
}
private function imageSelected(event:MediaEvent):void {
trace("Media selected...");
var imagePromise:MediaPromise = event.data as MediaPromise;
_dataSource = imagePromise.open();
if(imagePromise.isAsync) {
trace("Asynchronous media promise.");
var eventSource:IEventDispatcher = _dataSource as IEventDispatcher;
eventSource.addEventListener(flash.events.Event.COMPLETE, onMediaLoaded);
} else {
trace("Synchronous media promise.");
readMediaData();
}
}
private function onMediaLoaded(event:flash.events.Event):void{
trace("Media load complete");
_mediaBytes = new ByteArray();
_dataSource.readBytes(_mediaBytes);
_tempDir = File.createTempDirectory();
var now:Date = new Date();
var filename:String;
// concatenate as a String so the date parts are not summed numerically
filename = String(now.fullYear) + now.month + now.day + now.hours + now.minutes + now.seconds + ".JPG";
_file = _tempDir.resolvePath(filename);
//writing temporal file to display image
_stream = new FileStream();
_stream.open(_file,FileMode.WRITE);
_stream.writeBytes(_mediaBytes);
_stream.close();
if(_file.exists){
_imageLoader = new Loader();
_imageLoader.contentLoaderInfo.addEventListener(Event.COMPLETE,onMediaLoadedBitmapData);
_imageLoader.loadBytes(_mediaBytes);
}
}
private function onMediaLoadedBitmapData(event:Event):void{
trace("onMediaLoadedBitmapData");
var loaderInfo:LoaderInfo = LoaderInfo(event.target);
_bitmapData = new BitmapData(loaderInfo.width,loaderInfo.height,false,0xFFFFFF);
_bitmapData.draw(loaderInfo.loader);
addPictureToScreen();
}
I had a similar thing happen in AIR for iOS, but it was related to picking large images. I just checked the dimensions of the chosen image and then used a scaling matrix if they were past a certain point. It sounds like you've thought about this, but maybe look further into that?
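As an illustration only (not the original poster's code, and maxDim is an arbitrary threshold), that check-and-scale could look like this inside onMediaLoadedBitmapData:
// requires import flash.geom.Matrix;
private function onMediaLoadedBitmapData(event:Event):void {
    var loaderInfo:LoaderInfo = LoaderInfo(event.target);
    var maxDim:int = 1024; // arbitrary cap; tune for the target devices
    var scale:Number = Math.min(1, maxDim / Math.max(loaderInfo.width, loaderInfo.height));
    var m:Matrix = new Matrix();
    m.scale(scale, scale);
    _bitmapData = new BitmapData(int(loaderInfo.width * scale), int(loaderInfo.height * scale), false, 0xFFFFFF);
    // draw the loader content through the scaling matrix, with smoothing enabled
    _bitmapData.draw(loaderInfo.loader, m, null, null, null, true);
    addPictureToScreen();
}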
I have integrated Sygic into my Android application using a SurfaceView. I want to navigate in that Sygic view. I have used this code:
SWayPoint wp = new SWayPoint();
wp.Location = new LONGPOSITION(34, 35);
ApplicationAPI.StartNavigation(err, wp, 0, true, false, MAX);
But it is not working. Any ideas?
I once implemented Sygic in an app and this is basically what my code looks like (after hours of debugging, because the documentation was very poor...):
// surfaceView for displaying the "map"
SurfaceView mSygicSurface = (SurfaceView) findViewById(R.id.sygic_surface); // surface
// api status
int mSygicAPIStatus = -2;
// start the drive
ApplicationAPI.startDrive(new ApiCallback() {
public void onRunDrive() {
mSygicSurface.post(new Runnable() {
public void run() {
runDrive(mSygicSurface, getPackageName());
}
});
}
public void onInitApi() // gets called after runDrive();
{
mSygicAPIStatus = ApplicationAPI.InitApi(getPackageName(), true, new ApplicationHandler() { /* nothing relevant here */ }); // api initialization
if (mSygicAPIStatus != 1) {
// error
return;
}
}
});
Once you want to navigate somewhere:
GeoPoint point = new GeoPoint(/* ... */, /* ... */);
final SWayPoint wayPoint = new SWayPoint("", point.getLongitudeE6(), point.getLatitudeE6());
SError error = new SError();
final int returnCode = ApplicationAPI.StartNavigation(error, wayPoint, NavigationParams.NpMessageAvoidTollRoadsUnable, true, true, 0); // pass the SWayPoint built above
Carefully note that Sygic uses E6 coordinates.
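For illustration (a trivial hypothetical helper, not Sygic API): E6 simply means decimal degrees multiplied by 10^6 and stored as an int:
// Hypothetical helper: decimal degrees -> the E6 integers used above.
static int toE6(double degrees) {
    return (int) Math.round(degrees * 1e6);
}
// usage: int latE6 = toE6(48.14816); int lonE6 = toE6(17.10674);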
This is not an answer to the question, but for those searching for Sygic examples in 2017, I'll put it here:
ApiMaps.showCoordinatesOnMap(new Position((int)(-84.41949*100000.0),(int)(33.7455*100000.0)),1000,0);
// LONGITUDE first!!
// and multiply by 100,000
// https://developers.sygic.com/reference/java3d/html/classcom_1_1sygic_1_1sdk_1_1remoteapi_1_1_api_maps.html
P.S. This is for the standalone APK.