I have a sequence of images/textures to display with three.js.
At first I loaded a lot of raw RGB data (Uint8Array) and fed it into THREE.DataTexture. That worked fine on PC and Android, but it seemed to hit a memory limit on iOS Safari and caused the page to reload.
var texture = new THREE.DataTexture(data, w, h, THREE.RGBFormat);
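For scale, a rough back-of-the-envelope estimate of what such a sequence costs in memory (assuming the driver pads RGB to RGBA at 4 bytes per pixel; the 1024x1024 size and 200-frame count are made-up illustration values):
// Rough GPU-memory estimate for a sequence of uncompressed textures
function estimateTextureBytes(width, height, frames) {
    return width * height * 4 * frames; // 4 bytes/px when RGB is padded to RGBA
}
console.log(estimateTextureBytes(1024, 1024, 200) / (1024 * 1024)); // 800 (MB)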
What is the best way to keep a lot of textures or texture data in memory for three.js to use?
It must be compatible with PC/Android/iOS.
I then converted all the data to compressed texture formats and used THREE.CompressedTexture, for example DXT1, to reduce memory usage. But DXT1 only works on PC.
// given texture_data, texture_width, texture_height
var texture = new THREE.CompressedTexture(null, texture_width, texture_height,
THREE.RGB_S3TC_DXT1_Format, THREE.UnsignedByteType, THREE.UVMapping,
THREE.ClampToEdgeWrapping, THREE.ClampToEdgeWrapping,
THREE.LinearFilter, THREE.LinearFilter);
var material = new THREE.MeshBasicMaterial({ map: texture });
// create mipmaps with data
var mipmap = {
"data": texture_data,
"width": texture_width,
"height": texture_height
};
var mipmaps = [];
mipmaps.push(mipmap);
// set data to texture and update
texture.mipmaps = mipmaps;
texture.needsUpdate = true;
material.needsUpdate = true;
Going further, I found that cross-platform compatibility of compressed texture formats was an issue. Each format requires a certain WebGL extension; for example, DXT1 requires the S3TC extension.
if (renderer.extensions.get('WEBGL_compressed_texture_s3tc'))
alert('WEBGL_compressed_texture_s3tc is supported');
else
alert('WEBGL_compressed_texture_s3tc is NOT supported');
There was no single format that worked on all three platforms (PC/iOS/Android). Details below:
ASTC/ETC/ETC1: only work on Android
ATC: doesn't work on any platform
PVRTC: only works on iOS
S3TC/S3TC_SRGB: only work on PC
Another option was to encode all the textures into an MP4, but decoding it back to raw textures was not easy. I tried Broadway; it worked, but the CPU load was heavy.
In the end, I encoded all the data to the Basis format and used BasisTextureLoader to load it in the browser. The compression ratio was similar to H.264 baseline, way better than DXT1.
BasisTextureLoader detects what the device supports at runtime and transcodes the data to a compressed texture format that three.js can use.
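A minimal sketch of how the loader is used, assuming the loader from the three.js examples, a WebGLRenderer named renderer, and hypothetical file paths:
var loader = new THREE.BasisTextureLoader();
loader.setTranscoderPath('basis/'); // folder serving basis_transcoder.js/.wasm
loader.detectSupport(renderer); // pick a transcode target this device supports
loader.load('textures/frame_0001.basis', function (texture) {
    material.map = texture;
    material.needsUpdate = true;
});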
Hope it helps.
Related
I'm currently trying to display a video frame using OpenGL. So far it works, but I have some color problem.
I'm using this as my reference for my logic.
I have this code:
//YUV420SP data
uint8_t *decodedBuff = AMediaCodec_getOutputBuffer(d->codec, status, &bufSize);
buildTexture(decodedBuff, decodedBuff+w*h, decodedBuff+w*h*5/4, w, h);
renderFrame();
but it displays with the wrong colors.
decodedBuff = Y
decodedBuff+w*h = U
decodedBuff+w*h*5/4 = V
but this separation formula is for YUV420P.
Do you guys happen to know what it is for YUV420SP?
Your help is very much appreciated
If you are doing it this way, you are doing it wrong: you should never manually read raw data from video surfaces in fragment shaders.
Generate a SurfaceTexture, bind it to an OpenGL ES texture, and use GL_OES_EGL_image_external to access the texture via an external image sampler.
This will give you direct access to the video data in your shader, including automatic handling of the memory format and color conversion, in many cases for "free" because it's backed by GPU hardware acceleration.
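A minimal sketch of that setup, assuming a GLES 2.0 context and a MediaCodec (or camera) producer; names like uVideoTex and vTexCoord are illustrative:
// One-time setup: create a GL texture and wrap it in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
// Hand new Surface(surfaceTexture) to the decoder/camera as its output target.
// Per frame, on the GL thread:
surfaceTexture.updateTexImage(); // latches the newest frame onto the texture
// Fragment shader: the driver does the YUV->RGB conversion (including NV21's
// interleaved VU plane) in hardware.
String fragmentShader =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES uVideoTex;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() { gl_FragColor = texture2D(uVideoTex, vTexCoord); }\n";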
I'm developing a mobile application that runs on Android and iOS. It's capable of real-time processing of a video stream. On Android I get the preview video stream of the camera via android.hardware.Camera.PreviewCallback.onPreviewFrame. I decided to use the NV21 format, since it should be supported by all Android devices, whereas RGB isn't (or only RGB565 is).
For my algorithms, which are mostly for pattern recognition, I need grayscale images as well as color information. Grayscale is not a problem, but the color conversion from NV21 to BGR takes way too long.
As described, I use the following method to capture the images: in the app, I override the onPreviewFrame handler of the Camera. This is done in CameraPreviewFrameHandler.java:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        AvCore.getInstance().onFrame(data, _prevWidth, _prevHeight, AvStreamEncoding.NV21);
    } catch (NativeException e) {
        e.printStackTrace();
    }
}
The onFrame function then calls a native function that fetches the data from the Java objects as local references, converts it to an unsigned char* byte stream, and calls the following C++ function, which uses OpenCV to convert from NV21 to BGR:
void CoreManager::api_onFrame(unsigned char* rImageData, avStreamEncoding_t vImageFormat, int vWidth, int vHeight)
{
// rImageData is a local JNI-reference to the java-byte-array "data" from onPreviewFrame
Mat bgrMat; // Holds the converted image
Mat origImg; // Holds the original image (OpenCV-Wrapping around rImageData)
double ts; // for profiling
switch(vImageFormat)
{
// other formats
case NV21:
origImg = Mat(vHeight + vHeight/2, vWidth, CV_8UC1, rImageData); // fast, only creates header around rImageData
bgrMat = Mat(vHeight, vWidth, CV_8UC3); // Prepare Mat for target image
ts = avUtils::gettime(); // PROFILING START
cvtColor(origImg, bgrMat, CV_YUV2BGR_NV21); // 3-channel BGR, matching bgrMat
_onFrameBGRConversion.push_back(avUtils::gettime()-ts); // PROFILING END
break;
}
[...APPLICATION LOGIC...]
}
As one might conclude from the comments in the code, I already profiled the conversion, and it turned out that it takes ~30 ms on my Nexus 4, which is unacceptably long for such a "trivial" pre-processing step. (My profiling methods are double-checked and work properly for real-time measurement.)
Now I'm trying desperately to find a faster implementation of this color conversion from NV21 to BGR. This is what I've already done:
1. Ported the "convertYUV420_NV21toRGB8888" code provided in this topic to C++ (several times the conversion time)
2. Modified the code from 1 to use only integer operations (double the conversion time of the OpenCV solution)
3. Browsed through a couple of other implementations, all with similar conversion times
4. Checked the OpenCV implementation; they use a lot of bit-shifting to get performance. I guess I'm not able to do better on my own
Do you have suggestions, know of good implementations, or even have a completely different way to work around this problem? I somehow need to capture RGB/BGR frames from the Android camera, and it should work on as many Android devices as possible.
Thanks for your replies!
Did you try libyuv? I used it in the past, and if you compile it with NEON support it uses assembly code optimized for ARM processors. You can start from there to further optimize for your particular situation.
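A minimal sketch of the call, assuming libyuv is available (note that libyuv's "ARGB" is word-order, i.e. B,G,R,A bytes in memory on little-endian, which is what OpenCV calls BGRA):
#include "libyuv.h"

// Convert one NV21 frame (Y plane followed by interleaved VU) to 32-bit BGRA.
void nv21ToBgra(const uint8_t* nv21, uint8_t* dst, int w, int h) {
    const uint8_t* srcY  = nv21;         // w*h luma bytes
    const uint8_t* srcVU = nv21 + w * h; // w*h/2 interleaved V/U bytes
    libyuv::NV21ToARGB(srcY, w,          // Y plane and its stride
                       srcVU, w,         // VU plane and its stride
                       dst, w * 4,       // destination and its stride in bytes
                       w, h);
}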
I am trying to load textures as follows:
private Texture mTexture;
...
public Textures(final BaseGameActivity activity, final Engine engine) {
this.mTexture = new Texture(2048, 1024,
TextureOptions.BILINEAR_PREMULTIPLYALPHA);
this.mBackgroundTextureRegion = TextureRegionFactory.createFromAsset(
this.mTexture, activity, "img/back.png", 0, 0);
this.mSwingBackTextureRegion = TextureRegionFactory.createFromAsset(
this.mTexture, activity, "img/player.png", 836, 0);
...
I want to load more than 200 textures, but the approach I am using now is far too long-winded.
Are there faster ways to do this?
I am working in GLES1.
The easiest way to do it is with Texture Packer, found here.
This allows you to pack multiple image files into one easy-to-load spritesheet. The engine loads this spritesheet into a texture and creates a class that lets you easily reference each image from that spritesheet. Turn 200 TextureRegions into 1 TexturePack.
I'm using GLES2 and I'm not sure where the source files are for GLES1. Poke around the forums and you should be able to find out how to use them; there has been plenty of talk about it.
There is a texture packer built into AndEngine which does this automagically. Try searching the AndEngine forum.
http://www.andengine.org/forums/
My scene in OpenGL ES requires several large-resolution textures, but they are grayscale, since I am using them just as masks. I need to reduce my memory use.
I have tried loading these textures with Bitmap.Config.ALPHA_8 and as RGB_565. ALPHA_8 actually seems to increase memory use.
Is there some way to get a texture loaded into OpenGL that uses less than 16 bits per pixel?
glCompressedTexImage2D looks like it might be promising, but from what I can tell, different phones offer different texture compression methods. Also, I don't know whether the compression actually reduces memory use at runtime. Is the solution to store my textures in both ATITC and PVRTC formats? If so, how do I detect which format the device supports?
Thanks!
GPU-native compressed texture formats (PVRTC, ATITC, S3TC, and so forth) should reduce memory usage and improve rendering performance.
For example (sorry, it's in C; you can implement the same thing using GL11.glGetString in Java):
const char *extensions = (const char *) glGetString(GL_EXTENSIONS);
int isPVRTCsupported = strstr(extensions, "GL_IMG_texture_compression_pvrtc") != 0;
int isATITCsupported = strstr(extensions, "GL_ATI_texture_compression_atitc") != 0;
int isS3TCsupported = strstr(extensions, "GL_EXT_texture_compression_s3tc") != 0;
if (isPVRTCsupported) {
    /* load PVRTC texture using glCompressedTexImage2D */
} else if (isATITCsupported) {
...
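The Java equivalent mentioned above might look like this (a sketch; gl is the GL10 instance passed to your Renderer callbacks):
// Query the extension string once the GL context exists.
String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
boolean isPVRTCsupported = extensions.contains("GL_IMG_texture_compression_pvrtc");
boolean isATITCsupported = extensions.contains("GL_ATI_texture_compression_atitc");
boolean isS3TCsupported = extensions.contains("GL_EXT_texture_compression_s3tc");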
Besides, you can filter which devices can install your app based on texture format support, using supports-gl-texture in AndroidManifest.xml.
The AndroidManifest.xml File - supports-gl-texture
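For example (a sketch of the manifest entry described there):
<supports-gl-texture android:name="GL_IMG_texture_compression_pvrtc" />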
EDIT:
MOTODEV - Understanding Texture Compression
With Imagination Technologies-based (aka PowerVR) systems, you should be able to use 4bpp PVRTC and (depending on the texture and quality requirements) maybe even the 2bpp PVRTC variant.
Also, though I'm not sure what is exposed on Android systems, PVRTexTool lists I8 (i.e. greyscale 8bpp) as a target texture format, which would give you a lossless option.
ETC1 texture compression is supported on all Android devices with Android 2.2 and up.
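A sketch of loading one with the platform helpers in android.opengl.ETC1Util (assets is an AssetManager, the file name is made up, error handling is omitted, and note that ETC1 has no alpha channel, so it can't replace ALPHA_8 masks directly):
if (ETC1Util.isETC1Supported()) {
    GLES10.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    // .pkm files can be produced with the SDK's etc1tool; the two GL format
    // arguments describe the uncompressed fallback used when ETC1 is absent.
    ETC1Util.loadTexture(GL10.GL_TEXTURE_2D, 0, 0,
            GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5,
            assets.open("masks/mask0.pkm"));
}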
I have an Android application that displays VGA (640x480) frames using OpenGL ES. The application reads each frame from a movie file and updates the texture accordingly.
My problem is that it takes almost 30 ms to draw each frame using OpenGL. A similar test using Canvas/drawBitmap took around 6 ms on the same device.
I'm following the same OpenGL calls that VLC Media Player uses, so I'm assuming those are optimized for this purpose.
I just wanted to hear your thoughts and ideas about it.
Are you sure that the bitmaps are loaded as RGB_565? Try this:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
bm = BitmapFactory.decodeByteArray(temp, 0, temp.length, opt);
Let me know!
Which calls are you using?
Make sure that you create the texture only once (glTexImage2D) and afterwards just update it with the new buffer, as sketched below. You can also disable other GL features like the depth buffer, stencil, accumulation, lighting, etc.
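A minimal sketch of allocate-once / update-per-frame for a 640x480 RGB_565 frame (textureId and pixels are illustrative names):
// One-time setup: allocate the texture storage with a null buffer.
GLES10.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
GLES10.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, 640, 480, 0,
        GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5, null);
// Per frame: reuse the storage and upload only the new pixels.
GLES10.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, 0, 0, 640, 480,
        GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5, pixels);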
If none of these helps, check your OpenGL implementation and make sure that it uses the hardware (GPU).