I want to record a video of my Android screen (what I am doing on the screen) programmatically.
Is there any good tutorial or help regarding this?
I have searched a lot, but all I found was how to capture the Android screen as a picture programmatically.
OK, fine: if I capture a lot of images every few milliseconds, how do I then make a video out of all those captured images programmatically in Android?
You can use the following approach for screen capturing in Android.
Please view this URL:
http://android-coding.blogspot.in/2011/05/create-custom-dialog-with-dynamic.html
As long as you have bitmaps, you can turn them into video using JCodec (http://jcodec.org).
Here's a sample image sequence encoder: https://github.com/jcodec/jcodec/blob/master/src/main/java/org/jcodec/api/SequenceEncoder.java. You can modify it for your purposes by replacing BufferedImage with Bitmap.
Use these helper methods:
import android.graphics.Bitmap;
import org.jcodec.common.model.ColorSpace;
import org.jcodec.common.model.Picture;

// Converts an Android Bitmap into a JCodec Picture with an interleaved RGB plane.
public static Picture fromBitmap(Bitmap src) {
    Picture dst = Picture.create(src.getWidth(), src.getHeight(), ColorSpace.RGB);
    fromBitmap(src, dst);
    return dst;
}

public static void fromBitmap(Bitmap src, Picture dst) {
    int[] dstData = dst.getPlaneData(0);
    int[] packed = new int[src.getWidth() * src.getHeight()];
    src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
        for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
            int rgb = packed[srcOff];
            dstData[dstOff] = (rgb >> 16) & 0xff;     // R
            dstData[dstOff + 1] = (rgb >> 8) & 0xff;  // G
            dstData[dstOff + 2] = rgb & 0xff;         // B
        }
    }
}

// Converts a JCodec Picture back into an Android Bitmap.
public static Bitmap toBitmap(Picture src) {
    Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
    toBitmap(src, dst);
    return dst;
}

public static void toBitmap(Picture src, Bitmap dst) {
    int[] srcData = src.getPlaneData(0);
    int[] packed = new int[src.getWidth() * src.getHeight()];
    for (int i = 0, dstOff = 0, srcOff = 0; i < src.getHeight(); i++) {
        for (int j = 0; j < src.getWidth(); j++, dstOff++, srcOff += 3) {
            // Repack R, G, B into a single ARGB int with full alpha.
            packed[dstOff] = 0xff000000 | (srcData[srcOff] << 16) | (srcData[srcOff + 1] << 8) | srcData[srcOff + 2];
        }
    }
    dst.setPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
}
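For illustration, here is a minimal sketch of how a capture loop could feed such an encoder. It assumes you have already adapted JCodec's SequenceEncoder to accept Picture frames built with the fromBitmap() helper above; the class name BitmapSequenceEncoder and its encodeFrame()/finish() methods are placeholders for whatever your adapted encoder exposes, not actual JCodec API:
// Hypothetical adapted encoder; replace with your Bitmap-based SequenceEncoder variant.
public static void encodeCapturedFrames(List<Bitmap> frames, File out) throws IOException {
    BitmapSequenceEncoder enc = new BitmapSequenceEncoder(out);
    for (Bitmap frame : frames) {
        // Convert each captured screen Bitmap to a JCodec Picture and append it as one video frame.
        enc.encodeFrame(fromBitmap(frame));
    }
    enc.finish(); // flush remaining frames and write the container metadata
}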
You can also wait for the JCodec team to implement full Android support; they are working on it according to this: http://jcodec.org/news/no_deps.html
You can use the following code for screen capturing in Android:
ImageView v1 = (ImageView)findViewById(R.id.mImage);
v1.setDrawingCacheEnabled(true);
Bitmap bm = v1.getDrawingCache();
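If you need frames at a regular interval (as the question asks), one rough way is to copy the view's drawing cache on a Handler timer. This is only a sketch: the drawing cache API is deprecated on newer Android versions, the 100 ms interval is an arbitrary choice, and capturing the whole activity assumes android.R.id.content is the view you want.
final View root = findViewById(android.R.id.content);       // whole activity content
final List<Bitmap> frames = new ArrayList<Bitmap>();
final Handler handler = new Handler(Looper.getMainLooper());
final long frameIntervalMs = 100;                            // roughly 10 fps

handler.post(new Runnable() {
    @Override
    public void run() {
        root.setDrawingCacheEnabled(true);
        Bitmap cache = root.getDrawingCache();
        if (cache != null) {
            frames.add(Bitmap.createBitmap(cache));          // copy it, the cache bitmap is reused
        }
        root.setDrawingCacheEnabled(false);
        handler.postDelayed(this, frameIntervalMs);          // schedule the next capture
    }
});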
For creating a video from images, visit this link.
I'm trying to reproduce TensorFlow object detection on Xamarin.
private MappedByteBuffer LoadModelFile()
{
    AssetFileDescriptor fileDescriptor = Assets.OpenFd("detect.tflite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.FileDescriptor);
    FileChannel fileChannel = inputStream.Channel;
    long startOffset = fileDescriptor.StartOffset;
    long declaredLength = fileDescriptor.DeclaredLength;
    return fileChannel.Map(FileChannel.MapMode.ReadOnly, startOffset, declaredLength);
}
View view = (View) sender;
MappedByteBuffer buffer = LoadModelFile();
Interpreter interpreter = new Interpreter(buffer);

var sr = new StreamReader(Assets.Open("labels.txt"));
var labels = sr.ReadToEnd()
    .Split('\n')
    .Select(s => s.Trim())
    .Where(s => !string.IsNullOrEmpty(s))
    .ToList();

var bitmap = BitmapFactory.DecodeResource(Resources, 2130837608);
var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, 1000, 750, false)
    .Copy(Bitmap.Config.Argb8888, false);

float[][][][] imgData = null;
imgData = new float[1][][][];
imgData[0] = new float[1000][][];
for (int i = 0; i < imgData[0].Length; i++)
{
    imgData[0][i] = new float[750][];
    for (int j = 0; j < imgData[0][i].Length; j++)
    {
        imgData[0][i][j] = new float[3];
    }
}

var intValuess = new int[1000 * 750];
resizedBitmap.GetPixels(intValuess, 0, 1000, 0, 0, 1000, 750);

int pixels = 0;
for (int i = 0; i < imgData[0].Length; i++)
{
    for (int j = 0; j < imgData[0][i].Length; j++)
    {
        var val = intValuess[pixels++];
        imgData[0][i][j][0] = (float)((val >> 16) & 0xFF);
        imgData[0][i][j][1] = (float)((val >> 8) & 0xFF);
        imgData[0][i][j][2] = (float)(val & 0xFF);
    }
}

var outputs = new float[labels.Count];
interpreter.Run(imgData, outputs);
But I get the error "cannot convert float[][][][] to Java.Lang.Object" on the line interpreter.Run(imgData, outputs).
How can I convert float[][][][] to Java.Lang.Object, or where can I find TensorFlow Lite examples for Xamarin?
I know it has been a while since you asked this question but maybe my response can be useful to someone.
I am also trying to use Xamarin with tflite, to run a simple CNN.
Here is my code:
private MappedByteBuffer LoadModelFile()
{
    var assets = Application.Context.Assets;
    AssetFileDescriptor fileDescriptor = assets.OpenFd("seed_model_no_qt.tflite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.FileDescriptor);
    FileChannel fileChannel = inputStream.Channel;
    long startOffset = fileDescriptor.StartOffset;
    long declaredLength = fileDescriptor.DeclaredLength;
    return fileChannel.Map(FileChannel.MapMode.ReadOnly, startOffset, declaredLength);
}

private string Classify(MediaFile mediaFile)
{
    var assets = Application.Context.Assets;
    Bitmap bp = BitmapFactory.DecodeStream(mediaFile.GetStream());
    var resizedBitmap = Bitmap.CreateScaledBitmap(bp, 1280, 1280, false).Copy(Bitmap.Config.Argb8888, false);
    var bufint = new int[1280 * 1280];
    resizedBitmap.GetPixels(bufint, 0, 1280, 0, 0, 1280, 1280);

    int pixels = 0;
    var input_buffer = new byte[4 * 1280 * 1280 * 3];
    for (int i = 0; i < 1280; i++)
    {
        for (int k = 0; k < 1280; k++)
        {
            int val = bufint[pixels++];
            Array.Copy(BitConverter.GetBytes(((val >> 16) & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12, 4);
            Array.Copy(BitConverter.GetBytes(((val >> 8) & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12 + 4, 4);
            Array.Copy(BitConverter.GetBytes((val & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12 + 8, 4);
        }
    }

    var bytebuffer = Java.Nio.ByteBuffer.Wrap(input_buffer);
    var output = Java.Nio.ByteBuffer.AllocateDirect(4 * 160 * 160);
    interpreter.Run(bytebuffer, output);

    var buffer = new byte[4 * 160 * 160];
    Marshal.Copy(output.GetDirectBufferAddress(), buffer, 0, 4 * 160 * 160);
    float sum = 0.0f;
    for (int i = 0; i < 160 * 160; i++)
    {
        sum += BitConverter.ToSingle(buffer, i * 4);
    }
    return "Count : " + ((int)(sum / 255)).ToString();
}
I reused your LoadModelFile() function as it is. The code takes an image from a MediaFile (coming from the phone camera), then resizes it to a 1280x1280 RGB image before feeding it to the CNN as an array of float32 values.
Your float[][][][] to Java.Lang.Object issue comes from the interpreter.Run() method expecting a Java object. Some people online solve it by passing a Java.Nio.ByteBuffer as the parameter instead of an array. It implies some bitwise manipulation, but the Run method does accept the ByteBuffer object.
When filling the ByteBuffer, I advise you not to use its methods such as PutFloat(), but to fill a byte[] buffer and then use the Java.Nio.ByteBuffer.Wrap() method as I did. Using the ByteBuffer's methods seemed to cause large performance issues in my case.
The same thing happens when manipulating the output of my CNN (a 160x160 heatmap of float32 values). Using the ByteBuffer.Get() method to access the values was very slow. Instead, use Marshal.Copy to store the values into a byte array, then get the float values back with BitConverter.ToSingle.
I need to use WebRTC for Android to send a specific cropped (face) video to the video channel. I was able to manipulate the Camera1Session class of WebRTC to get the face cropped. Right now I am setting it to an ImageView.
listenForBytebufferFrames() of Camera1Session.java
private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                mFrameProcessor.setNextFrame(data, callbackCamera);
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int) TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }

                ByteBuffer byteBuffer1 = ByteBuffer.wrap(data);
                Frame outputFrame = new Frame.Builder()
                        .setImageData(byteBuffer1,
                                Camera1Session.this.captureFormat.width,
                                Camera1Session.this.captureFormat.height,
                                ImageFormat.NV21)
                        .setTimestampMillis(mFrameProcessor.mPendingTimeMillis)
                        .setId(mFrameProcessor.mPendingFrameId)
                        .setRotation(3)
                        .build();
                int w = outputFrame.getMetadata().getWidth();
                int h = outputFrame.getMetadata().getHeight();
                SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);
                if (detectedFaces.size() > 0) {
                    Face face = detectedFaces.valueAt(0);
                    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
                    byte[] byteBuffer = byteBufferRaw.array();
                    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    // My crop logic to get face co-ordinates
                    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
                    final byte[] jpegArray = baos.toByteArray();
                    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);

                    Activity currentActivity = getActivity();
                    if (currentActivity instanceof CallActivity) {
                        ((CallActivity) currentActivity).setBitmapToImageView(bitmap); // face on ImageView is set just fine
                    }
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                } else {
                    Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                    Camera1Session.this.camera.addCallbackBuffer(data);
                }
            }
        }
    });
}
jpegArray is the final byte array that I need to stream via WebRTC, which I tried with something like this:
Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, jpegArray, (int) face.getWidth(), (int) face.getHeight(), Camera1Session.this.getFrameOrientation(), captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(jpegArray);
Setting them up like this gives me the following error:
../../webrtc/sdk/android/src/jni/androidvideotracksource.cc line 82
Check failed: length >= width * height + 2 * uv_width * ((height + 1) / 2) (2630 vs. 460800)
Which I assume is because androidvideotracksource does not get the byte array length that it expects, since the frame is cropped now.
Could someone point me in the direction of how to achieve this? Is this the correct way/place to manipulate the data and feed it into the videoTrack?
Edit: a bitmap created from the byteArray data does not give me a camera preview on the ImageView, unlike the byteArray jpegArray. Maybe because they are packed differently?
Can we use WebRTC's DataChannel to exchange custom data, i.e. the cropped face "image" in your case, and do the respective processing at the receiving end using a third-party library, e.g. OpenGL? The reason I am suggesting this is that the WebRTC video feed received from the channel is a real-time stream, not a byte array. By its inherent architecture, WebRTC video isn't meant to crop video. If we want to crop or augment video, we have to use an AR library to do that job.
We can always leverage WebRTC's data channel to exchange custom data. Using the video channel for this is not recommended, because it is a real-time stream, not a byte array. Please revert in case of any concern.
WebRTC in particular, and video streaming in general, presumes that the video has fixed dimensions. If you want to crop the detected face, your options are either to pad the cropped image with e.g. black pixels (WebRTC does not use transparency) and crop the video on the receiver side, or, if you don't have control over the receiver, to resize the cropped region to fill the expected width * height frame (you should also keep the expected aspect ratio).
Note that the JPEG compress/decompress that you use to crop the original is far from efficient. Some other options can be found in Image crop and resize in Android.
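For illustration, here is a minimal sketch of the padding option, assuming frameWidth/frameHeight match the fixed dimensions WebRTC expects (the helper name padToFrame is only for this sketch):
// Draw the cropped face centered on a black frame of the fixed capture size.
private Bitmap padToFrame(Bitmap face, int frameWidth, int frameHeight) {
    Bitmap frame = Bitmap.createBitmap(frameWidth, frameHeight, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(frame);
    canvas.drawColor(Color.BLACK);                  // WebRTC does not carry transparency
    int left = (frameWidth - face.getWidth()) / 2;  // center the crop horizontally
    int top = (frameHeight - face.getHeight()) / 2; // and vertically
    canvas.drawBitmap(face, left, top, null);
    return frame;
}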
Okay, this was definitely a problem with how the original byte[] data was packed versus how the byte[] jpegArray was packed. Changing the packing and scaling it as AlexCohn suggested worked for me. I found help in another post on Stack Overflow on how to pack it. This is the code for it:
private byte[] getNV21(int left, int top, int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, left, top, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}

private void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;

            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
I pass this byte[] data to onByteBufferFrameCaptured and to the callback:
Camera1Session.this.events.onByteBufferFrameCaptured(
Camera1Session.this,
data,
w,
h,
Camera1Session.this.getFrameOrientation(),
captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(data);
Prior to this, I had to scale the bitmap, which is pretty straightforward:
int width = bitmapToScale.getWidth();
int height = bitmapToScale.getHeight();
Matrix matrix = new Matrix();
matrix.postScale((float) newWidth / width, (float) newHeight / height); // cast to avoid integer division
Bitmap scaledBitmap = Bitmap.createBitmap(bitmapToScale, 0, 0, width, height, matrix, true);
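To tie the pieces together, the per-frame flow looks roughly like the sketch below: decode the cropped JPEG, scale it to the capture format's dimensions, convert to NV21, and hand it to the capture callback. scaleBitmap() is just a name standing in for the Matrix-based scaling shown above, and the buffer returned to the camera stays the original data array; treat this as an outline rather than the exact code:
int w = Camera1Session.this.captureFormat.width;
int h = Camera1Session.this.captureFormat.height;
Bitmap faceBitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
Bitmap scaled = scaleBitmap(faceBitmap, w, h);      // Matrix-based scaling from above
byte[] nv21 = getNV21(0, 0, w, h, scaled);          // whole scaled frame, already cropped
Camera1Session.this.events.onByteBufferFrameCaptured(
        Camera1Session.this, nv21, w, h,
        Camera1Session.this.getFrameOrientation(), captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(data); // give the original preview buffer back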
I am developing an application which includes filters and cropping. I am using a cropping library, and I use 8x8 LUTs like the sample LUT. I want to crop the filtered image (8x8 LUT).
Here is the logic to crop the image.
Bitmap cropbitmap = ivCropimageView.getCroppedImage();
Using this bitmap I generate a thumbnail bitmap like below.
Bitmap thumbImage = ThumbnailUtils.extractThumbnail(cropbitmap, 190, 250);
When I try to generate thumbnails for all the filters, the thumbnails come out very noisy, like this.
This result is what I get when I implement the answer from renderscript.
So if anyone has an idea, please help me.
I'm working on a LUT applier library which eases the use of LUT images in Android. Now it also guesses the color axes of the LUT:
https://github.com/dntks/easyLUT/wiki
It uses the algorithm I mentioned in the other post.
You can go through this; hope it will help you get the right process.
photo is the main bitmap here.
mLut3D is the array of LUT images stored in drawable.
RenderScript mRs;
Bitmap mLutBitmap, mBitmap;
ScriptIntrinsic3DLUT mScriptlut;
Bitmap mOutputBitmap;
Allocation mAllocIn;
Allocation mAllocOut;
Allocation mAllocCube;
int mFilter = 0;
mRs = RenderScript.create(yourActivity.this);
public Bitmap filterapply() {
    int redDim, greenDim, blueDim;
    int w, h;
    int[] lut;

    if (mScriptlut == null) {
        mScriptlut = ScriptIntrinsic3DLUT.create(mRs, Element.U8_4(mRs));
    }
    if (mBitmap == null) {
        mBitmap = photo;
    }
    mOutputBitmap = Bitmap.createBitmap(mBitmap.getWidth(),
            mBitmap.getHeight(), mBitmap.getConfig());

    mAllocIn = Allocation.createFromBitmap(mRs, mBitmap);
    mAllocOut = Allocation.createFromBitmap(mRs, mOutputBitmap);

    mLutBitmap = BitmapFactory.decodeResource(getResources(), mLut3D[mFilter]);
    w = mLutBitmap.getWidth();
    h = mLutBitmap.getHeight();
    redDim = w / h;
    greenDim = redDim;
    blueDim = redDim;

    int[] pixels = new int[w * h];
    lut = new int[w * h];
    mLutBitmap.getPixels(pixels, 0, w, 0, 0, w, h);

    int i = 0;
    for (int r = 0; r < redDim; r++) {
        for (int g = 0; g < greenDim; g++) {
            int p = r + g * w;
            for (int b = 0; b < blueDim; b++) {
                lut[i++] = pixels[p + b * h];
            }
        }
    }

    Type.Builder tb = new Type.Builder(mRs, Element.U8_4(mRs));
    tb.setX(redDim).setY(greenDim).setZ(blueDim);
    Type t = tb.create();
    mAllocCube = Allocation.createTyped(mRs, t);
    mAllocCube.copyFromUnchecked(lut);

    mScriptlut.setLUT(mAllocCube);
    mScriptlut.forEach(mAllocIn, mAllocOut);

    mAllocOut.copyTo(mOutputBitmap);
    return mOutputBitmap;
}
You increase the mFilter value to get a different filter effect with the different LUT images you have; check it out.
You can go through this link on GitHub for more help; I got the answer from there:
https://github.com/RenderScript/RsLutDemo
Hope it will help.
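To relate this back to the thumbnail question, a small sketch of how the method could be driven per filter is shown below: it selects each LUT resource, applies it, and builds a thumbnail the same way the question does with ThumbnailUtils (the thumbnails list is just a placeholder name):
List<Bitmap> thumbnails = new ArrayList<Bitmap>();
for (int f = 0; f < mLut3D.length; f++) {
    mFilter = f;                        // pick which LUT drawable to load
    Bitmap filtered = filterapply();    // full-resolution filtered bitmap
    thumbnails.add(ThumbnailUtils.extractThumbnail(filtered, 190, 250));
}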
How can I convert a byte array received over a socket?
The C++ client sends image data of type uchar.
On the Android side I receive this uchar array as byte[], which ranges from -128 to +127.
What I want to do is receive this data and display it. For that I tried converting it to a Bitmap using BitmapFactory.decodeByteArray(), but no luck; I get a null Bitmap. Am I doing this right, or is there another method available?
Thanks in advance.
From the comments to the answers above, it seems like you want to create a Bitmap object from a stream of RGB values, not from any image format like PNG or JPEG.
This probably means that you know the image size already. In this case, you could do something like this:
byte[] rgbData = ... // From your server
int nrOfPixels = rgbData.length / 3; // Three bytes per pixel.
int[] pixels = new int[nrOfPixels];
for (int i = 0; i < nrOfPixels; i++) {
    int r = rgbData[3 * i] & 0xFF;     // mask to undo the byte's sign extension
    int g = rgbData[3 * i + 1] & 0xFF;
    int b = rgbData[3 * i + 2] & 0xFF;
    pixels[i] = Color.rgb(r, g, b);
}
Bitmap bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
I've been using it like below in one of my projects and so far it's been pretty solid. I'm not sure how picky it is as far as it not being compressed as a PNG though.
byte[] bytesImage;
Bitmap bmpOld; // Contains original Bitmap
Bitmap bmpNew;
ByteArrayOutputStream baoStream = new ByteArrayOutputStream();
bmpOld.compress(Bitmap.CompressFormat.PNG, 100, baoStream);
bytesImage = baoStream.toByteArray();
bmpNew = BitmapFactory.decodeByteArray(bytesImage, 0, bytesImage.length);
Edit: I've adapted the code from this post to use RGB, so the code below should work for you. I haven't had a chance to test it yet, so it may need some adjusting.
byte[] bytesImage = {0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2};
int intByteCount = bytesImage.length;
int[] intColors = new int[intByteCount / 3];
int intWidth = 2;
int intHeight = 2;
final int intAlpha = 255;
if ((intByteCount / 3) != (intWidth * intHeight)) {
    throw new ArrayStoreException();
}
for (int intIndex = 0; intIndex < intByteCount - 2; intIndex = intIndex + 3) {
    // Mask each byte with 0xFF so sign extension does not corrupt the packed color.
    intColors[intIndex / 3] = (intAlpha << 24) | ((bytesImage[intIndex] & 0xFF) << 16)
            | ((bytesImage[intIndex + 1] & 0xFF) << 8) | (bytesImage[intIndex + 2] & 0xFF);
}
Bitmap bmpImage = Bitmap.createBitmap(intColors, intWidth, intHeight, Bitmap.Config.ARGB_8888);
InputStream is = new java.net.URL(urldisplay).openStream();
byte[] colors = IOUtils.toByteArray(is);
int nrOfPixels = colors.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int r = (int)(0xFF & colors[3*i]);
int g = (int)(0xFF & colors[3*i+1]);
int b = (int)(0xFF & colors[3*i+2]);
pixels[i] = Color.rgb(r,g,b);
}
imageBitmap = Bitmap.createBitmap(pixels, width, height,Bitmap.Config.ARGB_4444);
bmImage.setImageBitmap(imageBitmap );
I'm trying to decode a byte array into a bitmap to be used in Android.
The byte array that I use for decoding is generated by an OpenGL command named glReadPixels, and the data inside is correct.
MainActivity.dataInputStream.readFully(Image, 0, 256 * 256 * 4);

// converting from RGBA to ARGB
for (int i = 0; i < Image.length - 1; i = i + 4) {
    aux = (Image[i + 3] & 0xFF);
    IntImage[i + 3] = (int) (Image[i + 2] & 0xFF);
    IntImage[i + 2] = (int) (Image[i + 1] & 0xFF);
    IntImage[i + 1] = (int) (Image[i] & 0xFF);
    IntImage[i] = aux;
}
If I do this: bmp = Bitmap.createBitmap(IntImage, 256, 256, Config.ARGB_8888); or this:
bmp = Bitmap.createBitmap(256, 256, Config.ARGB_8888);
bmp.setPixels(IntImage, 0, 256, 0, 0, 256, 256);
then the resulting bitmap contains only 0 values.
Can somebody please tell me why that is?
Bitmap.createBitmap expects an array of 256*256 packed color ints, and you are giving it 256*256*4 ints that each hold only one channel, so it looks like 0x000000FF, 0x000000RR, 0x000000GG, ... if you understand what I mean.
Change your ARGB conversion into something like this:
int[] imageARGB = new int[256 * 256];
for (int i = 0; i < imageARGB.length; i++) {
    imageARGB[i] = Color.argb(yourAlpha, RR, GG, BB); // pack all four channels into one int per pixel
}
Hope this helps and enjoy your work
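For completeness, a minimal sketch of the whole conversion, assuming the buffer really is 256x256 tightly packed RGBA from glReadPixels (so the output needs one packed ARGB int per pixel, 256*256 of them):
byte[] image = new byte[256 * 256 * 4];                 // RGBA bytes from glReadPixels
MainActivity.dataInputStream.readFully(image, 0, image.length);

int[] imageARGB = new int[256 * 256];                   // one packed ARGB int per pixel
for (int p = 0; p < imageARGB.length; p++) {
    int r = image[4 * p]     & 0xFF;                    // mask off Java's sign extension
    int g = image[4 * p + 1] & 0xFF;
    int b = image[4 * p + 2] & 0xFF;
    int a = image[4 * p + 3] & 0xFF;
    imageARGB[p] = Color.argb(a, r, g, b);
}
Bitmap bmp = Bitmap.createBitmap(imageARGB, 256, 256, Bitmap.Config.ARGB_8888);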