The subject is in the title. When loading textures, I make a cache:
private static Map<String, Texture> textureAlreadyLoad = new HashMap<String, Texture>();
The code that does the loading:
String location = tagMap.get("location");
Texture texture;
if (textureAlreadyLoad.containsKey(location)) {
    texture = textureAlreadyLoad.get(location);
} else {
    texture = new Texture(Gdx.files.internal(location));
    textureAlreadyLoad.put(location, texture);
}
graphic = new StaticGraphic(state, new TextureRegion(texture));
And a support class:
public class GraphicStorage {

    private Map<Pair<String, State>, ActorGraphic> actorsGraphic;

    public GraphicStorage() {
        actorsGraphic = ResourceProvider.getMapGraphic();
    }

    public ActorGraphic getGraphic(String className, State state) {
        return actorsGraphic.get(new Pair<String, State>(className, state));
    }
}
And then I just draw it with the SpriteBatch; the images are only 64x64 white rectangles (the size of the image). The point is that I use one image. Is it good practice to store Textures in a Map?
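One related point: Textures are native resources, so whoever owns this cache has to dispose them eventually. A minimal sketch of the cleanup I have in mind (disposeAll() is a hypothetical helper, called from libGDX's dispose()):

public static void disposeAll() {
    for (Texture texture : textureAlreadyLoad.values()) {
        texture.dispose(); // frees the native texture memory
    }
    textureAlreadyLoad.clear();
}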
I've created an Android application that shows about a hundred point objects using the libGDX ModelBuilder. It works fine and the models are rendered without any problem, but I needed to add a marker for every point model. To do that I've extended Actor, and in this class I create two textures representing the marker in its normal and selected modes (for when the user clicks an object). The following is the LabelActor class, which is instantiated for each point model.
public class LabelActor extends Actor {

    Texture texture;
    Texture selectedTexture;
    Sprite sprite;
    boolean selected;
    private Vector3 distVector = new Vector3();

    public LabelActor(String markerColor) {
        String path = "markers/point_marker_" + markerColor + ".png";
        String selectedPath = "markers/point_marker_select.png";
        texture = new Texture(Gdx.files.internal(path));
        selectedTexture = new Texture(Gdx.files.internal(selectedPath));
        sprite = new Sprite(texture);
        setBounds(sprite.getX(), sprite.getY(), sprite.getWidth(), sprite.getHeight());
    }

    @Override
    protected void positionChanged() {
        sprite.setPosition(getX(), getY());
        super.positionChanged();
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        if (selected) {
            batch.draw(selectedTexture, getX() - sprite.getWidth() / 2, getY());
        } else {
            batch.draw(texture, getX() - sprite.getWidth() / 2, getY());
        }
    }
}
The problem arises when using these markers, which cause a huge amount of memory usage. Loading a sphere model for each point object takes about 14 MB of graphics memory, but loading the texture markers takes 500 MB of memory on the device. I've used PNG icons located in the assets folder to create my textures, not a libGDX atlas. Is there any way to create markers for this number of point objects while consuming only a small amount of memory?
Below is an image showing the view of the markers and point models.
Edit 1:
I've used AssetManager to load the textures and created a HashMap of them in the main screen, then passed an array of two textures to each LabelActor, as Retron advised, but memory is still overloaded.
public class LabelActor extends Actor {

    Texture texture;
    Texture selectedTexture;
    Sprite sprite;
    boolean selected;
    private Vector3 distVector = new Vector3();

    public LabelActor(Texture[] textures) {
        texture = textures[0];
        selectedTexture = textures[1];
        sprite = new Sprite(texture);
        setBounds(sprite.getX(), sprite.getY(), sprite.getWidth(), sprite.getHeight());
    }

    @Override
    protected void positionChanged() {
        sprite.setPosition(getX(), getY());
        super.positionChanged();
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        if (selected) {
            batch.draw(selectedTexture, getX() - sprite.getWidth() / 2, getY());
            infoWindow.draw(batch, parentAlpha); // infoWindow is defined elsewhere in my code
        } else {
            batch.draw(texture, getX() - sprite.getWidth() / 2, getY());
        }
    }
}
Part of the main screen that loads the textures using AssetManager:
@Override
public void create() {
    ...
    environment = new Environment();
    environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1.0f));
    pointLight = new PointLight().set(0.8f, 0.8f, 0.8f, 2f, 0f, 0f, 500f);
    environment.add(pointLight);
    assetManager = new AssetManager();
    textureTitleList = new ArrayList<>();
    for (FileHandle f : Gdx.files.internal("markers").list()) {
        assetManager.load("markers/" + f.name(), Texture.class);
        textureTitleList.add(f.name());
    }
}

private void loadModel() {
    textureMap = new HashMap<>();
    for (String fName : textureTitleList) {
        textureMap.put(fName, assetManager.get("markers/" + fName, Texture.class));
    }
}

@Override
public void render() {
    if (!isLoaded && assetManager.update()) {
        loadModel();
        isLoaded = true;
    }
    ...
}
You must use AssetManager and load ONE texture from the assets, not create a new texture for every new actor.
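A minimal sketch of that pattern, using marker paths like the ones in the question (the exact file names are assumptions):

// Load each marker texture exactly once.
AssetManager assetManager = new AssetManager();
assetManager.load("markers/point_marker_red.png", Texture.class);
assetManager.load("markers/point_marker_select.png", Texture.class);
assetManager.finishLoading(); // or keep polling assetManager.update() in render()

// Every LabelActor shares the same two Texture instances.
Texture normal = assetManager.get("markers/point_marker_red.png", Texture.class);
Texture selected = assetManager.get("markers/point_marker_select.png", Texture.class);
for (int i = 0; i < pointCount; i++) { // pointCount is assumed
    stage.addActor(new LabelActor(new Texture[] { normal, selected }));
}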
I have a live broadcasting app based on grafika's examples, where I send my video feed over RTMP to be live broadcast.
I now want to watermark my video by overlaying text or a logo on the video stream. I know this can be done with GLSL filtering, but I have no idea how to implement it based on the sample that I linked.
I tried using alpha blending, but it seems the two texture formats are somehow incompatible (one being TEXTURE_EXTERNAL_OES and the other TEXTURE_2D) and I just get a black frame in return.
EDIT:
I based my code on the Kickflip API:
class CameraSurfaceRenderer implements GLSurfaceView.Renderer {

    private static final String TAG = "CameraSurfaceRenderer";
    private static final boolean VERBOSE = false;

    private CameraEncoder mCameraEncoder;
    private FullFrameRect mFullScreenCamera;
    private FullFrameRect mFullScreenOverlay; // For texture overlay
    private final float[] mSTMatrix = new float[16];
    private int mOverlayTextureId;
    private int mCameraTextureId;
    private boolean mRecordingEnabled;
    private int mFrameCount;

    // Keep track of selected filters + relevant state
    private boolean mIncomingSizeUpdated;
    private int mIncomingWidth;
    private int mIncomingHeight;
    private int mCurrentFilter;
    private int mNewFilter;

    boolean showBox = false;

    /**
     * Constructs CameraSurfaceRenderer.
     * <p>
     * @param recorder video encoder object
     */
    public CameraSurfaceRenderer(CameraEncoder recorder) {
        mCameraEncoder = recorder;
        mCameraTextureId = -1;
        mFrameCount = -1;
        SessionConfig config = recorder.getConfig();
        mIncomingWidth = config.getVideoWidth();
        mIncomingHeight = config.getVideoHeight();
        mIncomingSizeUpdated = true; // Force texture size update on next onDrawFrame
        mCurrentFilter = -1;
        mNewFilter = Filters.FILTER_NONE;
        mRecordingEnabled = false;
    }

    /**
     * Notifies the renderer that we want to stop or start recording.
     */
    public void changeRecordingState(boolean isRecording) {
        Log.d(TAG, "changeRecordingState: was " + mRecordingEnabled + " now " + isRecording);
        mRecordingEnabled = isRecording;
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        Log.d(TAG, "onSurfaceCreated");
        // Set up the texture blitter that will be used for on-screen display. This
        // is *not* applied to the recording, because that uses a separate shader.
        mFullScreenCamera = new FullFrameRect(
                new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
        // For texture overlay:
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        mFullScreenOverlay = new FullFrameRect(
                new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_2D));
        mOverlayTextureId = GlUtil.createTextureWithTextContent("hello!");
        mOverlayTextureId = GlUtil.createTextureFromImage(mCameraView.getContext(), R.drawable.red_dot);
        mCameraTextureId = mFullScreenCamera.createTextureObject();
        mCameraEncoder.onSurfaceCreated(mCameraTextureId);
        mFrameCount = 0;
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        Log.d(TAG, "onSurfaceChanged " + width + "x" + height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        if (VERBOSE) {
            if (mFrameCount % 30 == 0) {
                Log.d(TAG, "onDrawFrame tex=" + mCameraTextureId);
                mCameraEncoder.logSavedEglState();
            }
        }
        if (mCurrentFilter != mNewFilter) {
            Filters.updateFilter(mFullScreenCamera, mNewFilter);
            mCurrentFilter = mNewFilter;
            mIncomingSizeUpdated = true;
        }
        if (mIncomingSizeUpdated) {
            mFullScreenCamera.getProgram().setTexSize(mIncomingWidth, mIncomingHeight);
            mFullScreenOverlay.getProgram().setTexSize(mIncomingWidth, mIncomingHeight);
            mIncomingSizeUpdated = false;
            Log.i(TAG, "setTexSize on display Texture");
        }
        // Draw the video frame.
        if (mCameraEncoder.isSurfaceTextureReadyForDisplay()) {
            mCameraEncoder.getSurfaceTextureForDisplay().updateTexImage();
            mCameraEncoder.getSurfaceTextureForDisplay().getTransformMatrix(mSTMatrix);
            // Drawing texture overlay:
            mFullScreenOverlay.drawFrame(mOverlayTextureId, mSTMatrix);
            mFullScreenCamera.drawFrame(mCameraTextureId, mSTMatrix);
        }
        mFrameCount++;
    }

    public void signalVertialVideo(FullFrameRect.SCREEN_ROTATION isVertical) {
        if (mFullScreenCamera != null) mFullScreenCamera.adjustForVerticalVideo(isVertical, false);
    }

    /**
     * Changes the filter that we're applying to the camera preview.
     */
    public void changeFilterMode(int filter) {
        mNewFilter = filter;
    }

    public void handleTouchEvent(MotionEvent ev) {
        mFullScreenCamera.handleTouchEvent(ev);
    }
}
This is the code that renders the image on the screen (the GLSurfaceView), but it is not actually overlaid on the video. If I am not mistaken, that has to be done in CameraEncoder.
The thing is, replicating the code from CameraSurfaceRenderer into CameraEncoder (they both have similar code when it comes to filters) does not produce an overlaid text/image.
The texture object uses the GL_TEXTURE_EXTERNAL_OES texture target, which is defined by the GL_OES_EGL_image_external OpenGL ES extension. This limits how the texture may be used. Each time the texture is bound it must be bound to the GL_TEXTURE_EXTERNAL_OES target rather than the GL_TEXTURE_2D target. Additionally, any OpenGL ES 2.0 shader that samples from the texture must declare its use of this extension using, for example, an "#extension GL_OES_EGL_image_external : require" directive. Such shaders must also access the texture using the samplerExternalOES GLSL sampler type.
https://developer.android.com/reference/android/graphics/SurfaceTexture.html
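To make the requirement concrete, here is a minimal external-texture fragment shader, shown as a Java string constant in the style grafika's Texture2dProgram uses (the TEXTURE_2D overlay program would use a plain sampler2D and no extension directive):

// Fragment shader for sampling a SurfaceTexture (GL_TEXTURE_EXTERNAL_OES).
private static final String FRAGMENT_SHADER_EXT =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n" +
        "}\n";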
Post your code that you used to do alpha blending and I can probably fix it.
I would probably subclass Texture2dProgram and pass that to the FullFrameRect renderer. It has example code for rendering using the GL_TEXTURE_EXTERNAL_OES extension. Basically, override the draw function, call the base implementation, then bind your watermark and draw it.
That should happen between the camera and the video encoder.
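A minimal sketch of that overlay pass, applied at the end of onDrawFrame() (not the author's exact method; GlUtil.IDENTITY_MATRIX is grafika's identity constant, and the same pass would have to be repeated on the encoder's EGL surface):

// 1. Draw the camera frame with the external-OES program.
mFullScreenCamera.drawFrame(mCameraTextureId, mSTMatrix);

// 2. Blend the watermark on top with the plain TEXTURE_2D program.
//    The overlay is an ordinary 2D texture, so it must not use the
//    SurfaceTexture transform matrix; an identity matrix works instead.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
mFullScreenOverlay.drawFrame(mOverlayTextureId, GlUtil.IDENTITY_MATRIX);
GLES20.glDisable(GLES20.GL_BLEND);

Note the draw order: in the question's code the overlay is drawn before the camera frame, so the camera frame simply paints over it.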
Most of the libGDX tutorials I found show how to add 2D elements to a 3D world, but I would like to know how to do the opposite: adding 3D elements to a 2D Stage.
I tried adding a background image to the Stage, then adding to the Stage an Actor that renders the model batch and the 3D instances in its draw() method.
But instead, the image isn't drawn and part of the 3D object is hidden.
SimpleGame class
public class SimpleGame extends ApplicationAdapter {

    Stage stage;

    @Override
    public void create() {
        stage = new Stage();
        InputMultiplexer im = new InputMultiplexer(stage);
        Gdx.input.setInputProcessor(im);
        Image background = new Image(new Texture("badlogic.jpg"));
        background.setSize(stage.getWidth(), stage.getHeight());
        stage.addActor(background);
        setup();
    }

    private void setup() {
        SimpleActor3D group = new SimpleActor3D();
        group.setSize(stage.getWidth(), stage.getHeight());
        group.setPosition(0, 0);
        stage.addActor(group);
    }

    @Override
    public void render() {
        stage.act();
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        stage.draw();
    }
}
SimpleActor3D class
public class SimpleActor3D extends Actor {

    public Environment environment;
    public PerspectiveCamera camera;
    public ModelBatch modelBatch;
    public ModelInstance boxInstance;

    public SimpleActor3D() {
        environment = SimpleUtils.createEnvironment();
        camera = SimpleUtils.createCamera();
        boxInstance = SimpleUtils.createModelInstance(Color.GREEN);
        modelBatch = new ModelBatch();
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        Gdx.gl.glViewport((int) getX(), (int) getY(), (int) getWidth(), (int) getHeight());
        modelBatch.begin(camera);
        modelBatch.render(boxInstance, environment);
        modelBatch.end();
        super.draw(batch, parentAlpha);
    }
}
SimpleUtils class
public class SimpleUtils {

    public static Environment createEnvironment() {
        Environment environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
        DirectionalLight dLight = new DirectionalLight();
        Color lightColor = new Color(0.75f, 0.75f, 0.75f, 1);
        Vector3 lightVector = new Vector3(-1.0f, -0.75f, -0.25f);
        dLight.set(lightColor, lightVector);
        environment.add(dLight);
        return environment;
    }

    public static PerspectiveCamera createCamera() {
        PerspectiveCamera camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        camera.position.set(10f, 10f, 10f);
        camera.lookAt(0, 0, 0);
        camera.near = 1f;
        camera.far = 300f;
        camera.update();
        return camera;
    }

    public static ModelInstance createModelInstance(Color color) {
        ModelBuilder modelBuilder = new ModelBuilder();
        Material boxMaterial = new Material();
        boxMaterial.set(ColorAttribute.createDiffuse(color));
        int usageCode = VertexAttributes.Usage.Position + VertexAttributes.Usage.ColorPacked + VertexAttributes.Usage.Normal;
        Model boxModel = modelBuilder.createBox(5f, 5f, 5f, boxMaterial, usageCode);
        return new ModelInstance(boxModel);
    }
}
What I would like:
What I have instead:
I have tried rendering the model batch directly in the ApplicationAdapter render() method and it works perfectly, so the problem must lie somewhere with the Stage, but I can't find where.
I had the same problem, but I needed to render the 3D object only once, so I came up with the idea of rendering the 3D model as a Sprite. To do that, I rendered my model via the modelBatch to a frame buffer object instead of the default screen buffer, and then created a sprite from the FBO's color buffer.
Sample code below:
FrameBuffer frameBuffer = new FrameBuffer(Pixmap.Format.RGBA8888,
        Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);

Sprite renderModel(ModelInstance modelInstance) {
    frameBuffer.begin(); // Capture rendering to the frame buffer.
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT
            | (Gdx.graphics.getBufferFormat().coverageSampling ? GL20.GL_COVERAGE_BUFFER_BIT_NV : 0));
    modelBatch.begin(camera);
    modelBatch.render(modelInstance);
    modelBatch.end();
    frameBuffer.end();
    return new Sprite(frameBuffer.getColorBufferTexture());
}
You can always update your sprite's texture in the render loop using the sprite.setTexture() method. You can also create an Image from the texture with new Image(frameBuffer.getColorBufferTexture()) and use it in Scene2d.
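One caveat: in libGDX, a FrameBuffer's color texture comes out vertically flipped compared to a normal texture, so the resulting sprite usually needs a flip; a short sketch (boxInstance as in the question):

Sprite sprite = renderModel(boxInstance);
sprite.flip(false, true); // FBO color textures are y-flipped in libGDX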
I'm trying to add a simple sphere object to my VR app using the Rajawali library. My app shows a simple view (with a RajawaliCardboardRenderer: a sphere with an image as its texture).
The question is: can I simply add a sphere or a 3D object inside that sphere, and make it clickable?
Here's my code:
public class MyRenderer extends RajawaliCardboardRenderer {

    private Sphere sphere, sphere2;

    public MyRenderer(Context context) {
        super(context);
    }

    @Override
    protected void initScene() {
        sphere = createPhotoSphereWithTexture(new Texture("photo", R.drawable.panorama));
        sphere2 = new Sphere(24, 24, 24);
        Material m = new Material();
        m.setColor(0);
        try {
            m.addTexture(new Texture("photo", R.drawable.zvirbloniu_parkas));
        } catch (ATexture.TextureException e) {
            e.printStackTrace();
        }
        sphere2.setMaterial(m);
        getCurrentScene().addChild(sphere);
        getCurrentScene().addChild(sphere2);
        getCurrentCamera().setPosition(Vector3.ZERO);
        getCurrentCamera().setFieldOfView(100);
    }

    @Override
    protected void onRender(long ellapsedRealtime, double deltaTime) {
        super.onRender(ellapsedRealtime, deltaTime);
        sphere2.rotate(Vector3.Axis.Y, 1.0);
    }

    private static Sphere createPhotoSphereWithTexture(ATexture texture) {
        Material material = new Material();
        material.setColor(0);
        try {
            material.addTexture(texture);
        } catch (ATexture.TextureException e) {
            throw new RuntimeException(e);
        }
        Sphere sphere = new Sphere(50, 64, 32);
        sphere.setScaleX(-1);
        sphere.setMaterial(material);
        return sphere;
    }
}
Don't mind the form of the code; it is badly written and doesn't follow any pattern!
You need to attach sphere2 as a child of the "scenario" sphere in order to display it inside the scenario, something like this:
sphere = createPhotoSphereWithTexture(new Texture("photo", R.drawable.panorama));
sphere2 = new Sphere(24, 24, 24);
...
sphere.addChild(sphere2);
getCurrentScene().addChild(sphere);
The object picking mechanism of Rajawali will handle child objects. You could either add your new spheres directly to the scene, or to the photo sphere as children, whichever you prefer.
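For reference, a sketch of what the picking wiring could look like with Rajawali's ObjectColorPicker (API names from the library, but treat the exact signatures as assumptions; they vary slightly between versions):

private ObjectColorPicker mPicker;

// In initScene(), after creating the spheres:
mPicker = new ObjectColorPicker(this);
mPicker.setOnObjectPickedListener(new OnObjectPickedListener() {
    @Override
    public void onObjectPicked(Object3D object) {
        if (object == sphere2) {
            // sphere2 was clicked; react here.
        }
    }
});
mPicker.registerObject(sphere2);

// Forward the touch/trigger coordinates to the picker:
public void pickObject(float x, float y) {
    mPicker.getObjectAt(x, y);
}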
I am new to AndEngine. I am using the following code to display an image of a ball.
private ITextureRegion mBallTextureRegion;
private Sprite ball1;

@Override
public void onCreateResources() {
    ITexture ball = new BitmapTexture(this.getTextureManager(),
            new IInputStreamOpener() {
                @Override
                public InputStream open() throws IOException {
                    return getAssets().open("gfx/ball.png");
                }
            });
    ball.load(); // the texture must be loaded before extracting a region from it
    this.mBallTextureRegion = TextureRegionFactory.extractFromTexture(ball);
    ....................
    ....................
}
@Override
protected Scene onCreateScene() {
    final Scene scene = new Scene();
    scene.attachChild(backgroundSprite);
    ...........
    ball1 = new Sprite(192, 63, this.mBallTextureRegion, getVertexBufferObjectManager());
    scene.attachChild(ball1);
    ..............
    ...........
}
Now, depending on the game level, I want to add multiple balls of different sizes to the scene. Is it possible to add the ITextureRegion mBallTextureRegion multiple times at different sizes (scaling it differently each time)? If so, how? Please help me with some sample code.
If you want to resize a Sprite, AnimatedSprite, Text, etc.:
// the original image x2; the parameter is a float, hence 2f
yourSprite.setScale(2f);
If you use one texture region for several sprites:
Sprite yourSprite;
// call deepCopy() on your texture region so each sprite gets its own instance
yourSprite = new Sprite(0, 0, yourTexture.deepCopy(), mEngine.getVertexBufferObjectManager());
And if you want to generate a random position for each ball, use a Random variable.
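Putting both tips together, a minimal sketch (not from the original answer; ballCount, CAMERA_WIDTH and CAMERA_HEIGHT are assumed to exist in your activity):

Random random = new Random();
for (int i = 0; i < ballCount; i++) {
    float x = random.nextFloat() * (CAMERA_WIDTH - 64);
    float y = random.nextFloat() * (CAMERA_HEIGHT - 64);
    Sprite ball = new Sprite(x, y, mBallTextureRegion.deepCopy(),
            getVertexBufferObjectManager());
    ball.setScale(0.5f + random.nextFloat() * 1.5f); // random size, 0.5x to 2x
    scene.attachChild(ball);
}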
best regards.