I have integrated ML Kit face detection into my Android application. I referred to the URL below:
https://firebase.google.com/docs/ml-kit/android/detect-faces
The code for my face detection processor class is:
import android.support.annotation.NonNull;
import android.util.Log;

import com.google.android.gms.tasks.Task;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;

import java.io.IOException;
import java.util.List;

/** Face Detector Demo. */
public class FaceDetectionProcessor extends VisionProcessorBase<List<FirebaseVisionFace>> {

    private static final String TAG = "FaceDetectionProcessor";

    private final FirebaseVisionFaceDetector detector;

    public FaceDetectionProcessor() {
        FirebaseVisionFaceDetectorOptions options =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setTrackingEnabled(true)
                        .build();
        detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    }

    @Override
    public void stop() {
        try {
            detector.close();
        } catch (IOException e) {
            Log.e(TAG, "Exception thrown while trying to close Face Detector: " + e);
        }
    }

    @Override
    protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
        return detector.detectInImage(image);
    }

    @Override
    protected void onSuccess(
            @NonNull List<FirebaseVisionFace> faces,
            @NonNull FrameMetadata frameMetadata,
            @NonNull GraphicOverlay graphicOverlay) {
        graphicOverlay.clear();
        for (int i = 0; i < faces.size(); ++i) {
            FirebaseVisionFace face = faces.get(i);
            FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
            graphicOverlay.add(faceGraphic);
            faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        }
    }

    @Override
    protected void onFailure(@NonNull Exception e) {
        Log.e(TAG, "Face detection failed " + e);
    }
}
Here in "onSuccess" listener , we will get array of "FirebaseVisionFace" class objects which will have "Bounding Box" of face.
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
I want to know how to convert these FirebaseVisionFace objects into a Bitmap. I want to extract the face image and show it in an ImageView. Can anyone please help me? Thanks in advance.
Note: I have downloaded the sample source code of ML Kit for Android from the URL below:
https://github.com/firebase/quickstart-android/tree/master/mlkit
You created the FirebaseVisionImage from a bitmap. After detection returns, each FirebaseVisionFace describes a bounding box as a Rect that you can use to extract the detected face from the original bitmap, e.g. using Bitmap.createBitmap().
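A minimal sketch of that approach (originalBitmap is assumed to be the Bitmap you built the FirebaseVisionImage from; the box is clamped first because it can extend past the image edges):

Rect box = new Rect(face.getBoundingBox());
// The bounding box can extend beyond the bitmap, so clamp it first
box.intersect(0, 0, originalBitmap.getWidth(), originalBitmap.getHeight());
Bitmap faceBitmap = Bitmap.createBitmap(originalBitmap,
        box.left, box.top, box.width(), box.height());
imageView.setImageBitmap(faceBitmap);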
Since the accepted answer was not specific enough, I will try to explain what I did.
1.- Create an ImageView in LivePreviewActivity like this:
private ImageView imageViewTest;
2.- Declare it in the activity XML and link it to the Java file. I placed it right before the views the sample code already had, so it is visible on top of the camera feed.
3.- Where the sample creates the FaceDetectionProcessor, pass in the ImageView so the processor can set the cropped face on it:
FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);
4.- Change the constructor of FaceDetectionProcessor so it can receive the ImageView as a parameter, and create a global variable that saves that instance:
private ImageView imageView;

public FaceDetectionProcessor(ImageView imageView) {
    FirebaseVisionFaceDetectorOptions options =
            new FirebaseVisionFaceDetectorOptions.Builder()
                    .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                    .setTrackingEnabled(true)
                    .build();
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    this.imageView = imageView;
}
5.- I created a crop method that takes a bitmap and a Rect to focus only on the face. So go ahead and do the same.
public static Bitmap cropBitmap(Bitmap bitmap, Rect rect) {
    int w = rect.right - rect.left;
    int h = rect.bottom - rect.top;
    Bitmap ret = Bitmap.createBitmap(w, h, bitmap.getConfig());
    Canvas canvas = new Canvas(ret);
    canvas.drawBitmap(bitmap, -rect.left, -rect.top, null);
    return ret;
}
6.- Modify the detectInImage method to save an instance of the bitmap being detected in a global variable:
@Override
protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    // imageBitmap is another global variable of the processor
    imageBitmap = image.getBitmapForDebugging();
    return detector.detectInImage(image);
}
7.- Finally, modify the onSuccess method so it calls the cropping method and assigns the result to the ImageView:
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        croppedImage = cropBitmap(imageBitmap, face.getBoundingBox());
    }
    imageView.setImageBitmap(croppedImage);
}
This may help you if you're trying to use ML Kit to detect faces and OpenCV to perform image processing on the detected face. Note that in this particular example you need the original camera bitmap inside onSuccess.
I haven't found a way to do this without a bitmap and, truthfully, I'm still searching.
@Override
protected void onSuccess(@NonNull List<FirebaseVisionFace> faces, @NonNull FrameMetadata frameMetadata, @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);

        /* The original implementation has the original image. "Original image" represents the camera preview from the live camera. */

        // Create a Mat representing the live camera itself
        Mat rgba = new Mat(originalCameraImage.getHeight(), originalCameraImage.getWidth(), CvType.CV_8UC4);

        // The box on which an Imgproc effect is applied by OpenCV
        Mat rgbaInnerWindow;
        Mat mIntermediateMat = new Mat();

        // Make the box for Imgproc the size of the detected face
        int rows = (int) face.getBoundingBox().height();
        int cols = (int) face.getBoundingBox().width();

        int left = cols / 8;
        int top = rows / 8;
        int width = cols * 3 / 4;
        int height = rows * 3 / 4;

        // Create a new bitmap based on the live preview
        // which will show the actual image processing
        Bitmap newBitmap = Bitmap.createBitmap(originalCameraImage);

        // Bitmap to Mat
        Utils.bitmapToMat(newBitmap, rgba);

        // Imgproc stuff. In this example I'm doing edge detection.
        rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
        Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80, 90);
        Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
        rgbaInnerWindow.release();

        // After processing the image, back to bitmap
        Utils.matToBitmap(rgba, newBitmap);

        // Load the bitmap
        CameraImageGraphic imageGraphic = new CameraImageGraphic(graphicOverlay, newBitmap);
        graphicOverlay.add(imageGraphic);

        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay, face, null);
        graphicOverlay.add(faceGraphic);

        // In quickstart versions where FaceGraphic is constructed without the face,
        // use this instead (I can't speak for this):
        // FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        // graphicOverlay.add(faceGraphic);
        // faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
Actually, you can just read the ByteBuffer and then write the array to whatever file you want with an OutputStream. Of course, you can get the face region from getBoundingBox() too.
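As a rough sketch of that idea (the method name and file are hypothetical; buffer would be the frame's ByteBuffer):

void dumpFrame(ByteBuffer buffer, File outFile) throws IOException {
    byte[] bytes = new byte[buffer.remaining()];
    buffer.get(bytes); // copy the buffer contents into a plain array
    OutputStream os = new FileOutputStream(outFile);
    try {
        os.write(bytes);
    } finally {
        os.close();
    }
}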
Related
I am using the MediaRecorder and MediaProjection APIs for recording the screen in an Android app. I am not able to get each frame so that I can send it over an RTMP stream. For publishing over RTMP I am using the JavaCV Android library.
For example, in the case of live streaming through the camera, we can get each frame in the onPreviewFrame() callback. After getting each frame, I simply use the FFmpegFrameRecorder of the JavaCV library to stream each frame to the RTMP URL.
How can I achieve the same with screen recording?
Any help would be appreciated. Thanks in advance!
Firstly, there is no callback interface with MediaProjection that can give you a buffer of the captured screen frame, but it is quite possible with SurfaceTexture. Here is one implementation, ScreenCapturerAndroid, of screen capturing with SurfaceTexture:
An implementation of VideoCapturer to capture the screen content as a video stream. Capturing is done by MediaProjection on a SurfaceTexture. We interact with this SurfaceTexture using a SurfaceTextureHelper.
The SurfaceTextureHelper is created by the native code and passed to this capturer in VideoCapturer.initialize(). On receiving a new frame, this capturer passes it as a texture to the native code via CapturerObserver.onFrameCaptured(). This takes place on the HandlerThread of the given SurfaceTextureHelper. When done with each frame, the native code returns the buffer to the SurfaceTextureHelper.
But if you want to send a stream of your app's view, then the following screen capturer will help you.
public class ScreenCapturer {

    private boolean capturing = false;
    private int fps = 15;
    private int width, height;
    private Context context;
    private int[] frame;
    private Bitmap bmp;
    private Canvas canvas;
    private View contentView;
    private Handler mHandler = new Handler();

    public ScreenCapturer(Context context, View view) {
        this.context = context;
        this.contentView = view;
    }

    public void startCapturing() {
        capturing = true;
        mHandler.postDelayed(newFrame, 1000 / fps);
    }

    public void stopCapturing() {
        capturing = false;
        mHandler.removeCallbacks(newFrame);
    }

    private Runnable newFrame = new Runnable() {
        @Override
        public void run() {
            if (capturing) {
                int width = contentView.getWidth();
                int height = contentView.getHeight();
                if (frame == null ||
                        ScreenCapturer.this.width != width ||
                        ScreenCapturer.this.height != height) {
                    ScreenCapturer.this.width = width;
                    ScreenCapturer.this.height = height;
                    if (bmp != null) {
                        bmp.recycle();
                        bmp = null;
                    }
                    bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
                    canvas = new Canvas(bmp);
                    frame = new int[width * height];
                }
                canvas.saveLayer(0, 0, width, height, null);
                canvas.translate(-contentView.getScrollX(), -contentView.getScrollY());
                contentView.draw(canvas);
                bmp.getPixels(frame, 0, width, 0, 0, width, height);
                // frame[] is an RGB pixel array; compress it to YUV if needed and send over RTMP
                canvas.restore();
                mHandler.postDelayed(newFrame, 1000 / fps);
            }
        }
    };
}
Usage
...
// Use this in your activity
private View parentView;
parentView = findViewById(R.id.parentView);
capturer = new ScreenCapturer(this, parentView);
// To start capturing
capturer.startCapturing();
// To stop capturing
capturer.stopCapturing();
Using this you can send a view's contents to an RTMP stream. You may use the parent view of your activity to capture all the contents of the activity.
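To connect this to the JavaCV pipeline mentioned in the question, a sketch along these lines should work (untested; rtmpUrl is an assumption, FFmpegFrameRecorder and AndroidFrameConverter are from org.bytedeco.javacv, and record() can throw FrameRecorder.Exception, which must be handled):

FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(rtmpUrl, width, height);
recorder.setFormat("flv");   // RTMP servers commonly expect FLV
recorder.setFrameRate(fps);
recorder.start();

AndroidFrameConverter converter = new AndroidFrameConverter();
// Inside the capture Runnable, right after bmp.getPixels(...):
recorder.record(converter.convert(bmp)); // convert the captured Bitmap and push it

// When capturing stops:
recorder.stop();
recorder.release();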
On the one hand, I have a Surface class which, when instantiated, automatically initializes a new thread and starts grabbing frames from a streaming source via native code based on FFmpeg. Here are the main parts of the code for the aforementioned Surface class:
public class StreamingSurface extends Surface implements Runnable {

    ...

    public StreamingSurface(SurfaceTexture surfaceTexture, int width, int height) {
        super(surfaceTexture);
        screenWidth = width;
        screenHeight = height;
        init();
    }

    public void init() {
        mDrawTop = 0;
        mDrawLeft = 0;
        mVideoCurrentFrame = 0;
        this.setVideoFile();
        this.startPlay();
    }

    public void setVideoFile() {
        // Initialise FFmpeg
        naInit("");
        // Get the stream's video resolution
        int[] res = naGetVideoRes();
        mDisplayWidth = (int) (res[0]);
        mDisplayHeight = (int) (res[1]);
        // Prepare display
        mBitmap = Bitmap.createBitmap(mDisplayWidth, mDisplayHeight, Bitmap.Config.ARGB_8888);
        naPrepareDisplay(mBitmap, mDisplayWidth, mDisplayHeight);
    }

    public void startPlay() {
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        while (true) {
            while (2 == mStatus) {
                // paused
                SystemClock.sleep(100);
            }
            mVideoCurrentFrame = naGetVideoFrame();
            if (0 < mVideoCurrentFrame) {
                // success, redraw
                if (isValid()) {
                    Canvas canvas = lockCanvas(null);
                    if (null != mBitmap) {
                        canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
                    }
                    unlockCanvasAndPost(canvas);
                }
            } else {
                // failure, probably end of video; break
                naFinish(mBitmap);
                mStatus = 0;
                break;
            }
        }
    }
}
In my MainActivity class, I instantiate this class in the following way:
public void startCamera(int texture) {
    mSurface = new SurfaceTexture(texture);
    mSurface.setOnFrameAvailableListener(this);
    Surface surface = new StreamingSurface(mSurface, 640, 360);
    surface.release();
}
I read the following line in the Android developer page, regarding the Surface class constructor:
"Images drawn to the Surface will be made available to the SurfaceTexture, which can attach them to an OpenGL ES texture via updateTexImage()."
That is exactly what I want to do, and I have everything ready for further rendering. But definitely, with the above code, I never get the frames captured in the Surface class made available to the corresponding SurfaceTexture. I know this because the debugger, for instance, never calls the onFrameAvailable method associated with that SurfaceTexture.
Any ideas? Maybe the fact that I am using a thread to call the drawing functions is messing everything up? In that case, what alternatives do I have to grab the frames?
Thanks in advance
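For what it's worth, the pattern the quoted documentation implies is to treat onFrameAvailable() purely as a signal and to call updateTexImage() on the thread that owns the GL context the texture belongs to. A sketch (frameAvailable and lock are hypothetical fields):

mSurface.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        // Can fire on an arbitrary thread: just record that a frame arrived
        synchronized (lock) {
            frameAvailable = true;
        }
    }
});

// Later, on the GL thread, once per render pass:
synchronized (lock) {
    if (frameAvailable) {
        mSurface.updateTexImage(); // latches the newest frame into the GL texture
        frameAvailable = false;
    }
}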
Hello, I am new to Android. I am currently trying to print a receipt from my Android 4.4.2 tablet to my Zicox thermal receipt printer. I have been able to print text so far, but now I need to go a step further and print barcodes/QR codes. Unfortunately this is way beyond my knowledge; I have googled for solutions but have not found one for me yet.
These are the methods I use to generate my barcode:
/**************************************************************
* getting from com.google.zxing.client.android.encode.QRCodeEncoder
*
* See the sites below
* http://code.google.com/p/zxing/
* http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/EncodeActivity.java
* http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/QRCodeEncoder.java
*/
private static final int WHITE = 0xFFFFFFFF;
private static final int BLACK = 0xFF000000;

Bitmap encodeAsBitmap(String contents, BarcodeFormat format, int img_width, int img_height) throws WriterException {
    String contentsToEncode = contents;
    if (contentsToEncode == null) {
        return null;
    }
    Map<EncodeHintType, Object> hints = null;
    String encoding = guessAppropriateEncoding(contentsToEncode);
    if (encoding != null) {
        hints = new EnumMap<EncodeHintType, Object>(EncodeHintType.class);
        hints.put(EncodeHintType.CHARACTER_SET, encoding);
    }
    MultiFormatWriter writer = new MultiFormatWriter();
    BitMatrix result;
    try {
        result = writer.encode(contentsToEncode, format, img_width, img_height, hints);
    } catch (IllegalArgumentException iae) {
        // Unsupported format
        return null;
    }
    int width = result.getWidth();
    int height = result.getHeight();
    int[] pixels = new int[width * height];
    for (int y = 0; y < height; y++) {
        int offset = y * width;
        for (int x = 0; x < width; x++) {
            pixels[offset + x] = result.get(x, y) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}
private static String guessAppropriateEncoding(CharSequence contents) {
    // Very crude at the moment
    for (int i = 0; i < contents.length(); i++) {
        if (contents.charAt(i) > 0xFF) {
            return "UTF-8";
        }
    }
    return null;
}
This is my onClick method that starts the entire process:
// barcode data
String barcode_data = "123456";

// barcode image
ImageView iv = new ImageView(this);
try {
    barCode = encodeAsBitmap(barcode_data, BarcodeFormat.CODE_128, 300, 40);
    // bitmap.getRowBytes();
    iv.setImageBitmap(barCode);
} catch (WriterException e) {
    e.printStackTrace();
}
iv.setLayoutParams(new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT));
innerLayout.addView(iv);
So now I am basically able to generate and display the barcode; next I want to be able to print it on my receipts.
As I understand from your question, you basically want to print a bitmap that contains the QR code.
You can use the PrintHelper class for that. Here's a sample from the official documentation that shows how to use it:
private void doPhotoPrint() {
    PrintHelper photoPrinter = new PrintHelper(getActivity());
    photoPrinter.setScaleMode(PrintHelper.SCALE_MODE_FIT);
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.droids);
    photoPrinter.printBitmap("droids.jpg - test print", bitmap);
}
Also, it says :
After the printBitmap() method is called, no further action from your application is required. The Android print user interface appears, allowing the user to select a printer and printing options.
Update:
The above method prints only a bitmap. For printing a whole layout (like a receipt, as you said), instructions have been laid out here. I'll try to summarize them for you:
The first step is to extend the PrintDocumentAdapter class and override certain methods. In that adapter, there's a callback method known as onWrite() that is called for drawing content on the file to be printed. In this method, you'll have to use a Canvas object to draw the content. This object has all the helper methods to draw bitmaps, lines, text, etc.
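A minimal sketch of such an adapter (the class name CustomDocumentAdapter matches the call below; the single-page layout and the drawing calls in onWrite() are assumptions; PrintedPdfDocument comes from android.print.pdf):

public class CustomDocumentAdapter extends PrintDocumentAdapter {

    private final Context context;
    private PrintedPdfDocument pdfDocument;

    public CustomDocumentAdapter(Context context) {
        this.context = context;
    }

    @Override
    public void onLayout(PrintAttributes oldAttributes, PrintAttributes newAttributes,
                         CancellationSignal cancellationSignal,
                         LayoutResultCallback callback, Bundle extras) {
        pdfDocument = new PrintedPdfDocument(context, newAttributes);
        if (cancellationSignal.isCanceled()) {
            callback.onLayoutCancelled();
            return;
        }
        PrintDocumentInfo info = new PrintDocumentInfo.Builder("receipt.pdf")
                .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
                .setPageCount(1) // assuming a one-page receipt
                .build();
        callback.onLayoutFinished(info, true);
    }

    @Override
    public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
                        CancellationSignal cancellationSignal, WriteResultCallback callback) {
        PdfDocument.Page page = pdfDocument.startPage(0);
        Canvas canvas = page.getCanvas();
        // Draw the receipt content here, e.g. the barcode bitmap from the question:
        // canvas.drawBitmap(barCode, 40, 40, null);
        pdfDocument.finishPage(page);
        try {
            pdfDocument.writeTo(new FileOutputStream(destination.getFileDescriptor()));
        } catch (IOException e) {
            callback.onWriteFailed(e.toString());
            return;
        } finally {
            pdfDocument.close();
            pdfDocument = null;
        }
        callback.onWriteFinished(new PageRange[]{PageRange.ALL_PAGES});
    }
}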
Then, when the print is requested, obtain an instance of the PrintManager class as follows:
PrintManager printManager = (PrintManager) getActivity()
        .getSystemService(Context.PRINT_SERVICE);
Then, call the print method as:
printManager.print(jobName, new CustomDocumentAdapter(getActivity()), null);
Let me know if this helps.
I am trying to show a frame-by-frame animation by changing images in an ImageView. I tried an AnimationDrawable in XML and also changing the bitmap of the ImageView inside a Handler. I also tried to store only three bitmaps inside an ArrayList (to avoid out-of-memory errors) as a caching mechanism, but saw really little improvement. I need to iterate over 36 images for a full animation. The problem I am facing is that with all the methods I tried I cannot complete the animation in the given timeframe of 50 ms, so the fps of the animation is really low. The images range from 250 kB at the smallest to 540 kB at the largest. As the iOS version of the app is ready, I am constrained to show an animation consistent with the iOS version. I am a noob in RenderScript and OpenGL. Is there any way to show a smooth animation for large images in 50-60 ms? Any hints or suggestions are highly appreciated. Here's a snapshot of the animation:
Here's the link to my images for anyone interested.
I wrote a simple activity that does the most basic thing: it loads all the bitmaps in a thread, then posts a change to the ImageView every 40 ms.
package mk.testanimation;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.os.Handler;
import android.util.Log;
import android.widget.ImageView;
import java.util.ArrayList;
public class MainActivity extends Activity {

    ImageView mImageView;

    private int mImageRes[] = new int[]{
            R.drawable.s0,
            R.drawable.s1,
            R.drawable.s2,
            R.drawable.s3,
            R.drawable.s4,
            R.drawable.s5,
            R.drawable.s6,
            R.drawable.s7,
            R.drawable.s8,
            R.drawable.s9,
            R.drawable.s10,
            R.drawable.s11,
            R.drawable.s12,
            R.drawable.s13,
            R.drawable.s14,
            R.drawable.s15,
            R.drawable.s16,
            R.drawable.s17,
            R.drawable.s18,
            R.drawable.s19,
            R.drawable.s20,
            R.drawable.s21,
            R.drawable.s22,
            R.drawable.s23,
            R.drawable.s24,
            R.drawable.s25,
            R.drawable.s26,
            R.drawable.s27,
            R.drawable.s28,
            R.drawable.s29,
            R.drawable.s30,
            R.drawable.s31,
            R.drawable.s32,
            R.drawable.s33,
            R.drawable.s34,
            R.drawable.s35,
    };

    private ArrayList<Bitmap> mBitmaps = new ArrayList<Bitmap>(mImageRes.length);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final Handler handler = new Handler();
        mImageView = new ImageView(this);
        setContentView(mImageView);
        Thread important = new Thread() {
            @Override
            public void run() {
                long timestamp = System.currentTimeMillis();
                for (int i = 0; i < mImageRes.length; i++) {
                    mBitmaps.add(BitmapFactory.decodeResource(getResources(), mImageRes[i]));
                }
                Log.d("ANIM-TAG", "Loading all bitmaps took " + (System.currentTimeMillis() - timestamp) + "ms");
                for (int i = 0; i < mBitmaps.size(); i++) {
                    final int idx = i;
                    handler.postDelayed(new Runnable() {
                        @Override
                        public void run() {
                            mImageView.setImageBitmap(mBitmaps.get(idx));
                        }
                    }, i * 40);
                }
            }
        };
        important.setPriority(Thread.MAX_PRIORITY);
        important.start();
    }
}
This looked pretty decent on my Nexus 7, but it did take a little over 4s to load all the bitmaps.
Can you load the bitmaps in advance?
Also, it won't save a ton, but your PNGs have a bunch of padding around the transparent space. You can crop them and reduce the memory a bit. Otherwise, compressing the images will also help (like limiting the number of colors used).
Ideally, in the above solution, you'd recycle the bitmaps immediately after they're no longer used.
Also, if that's too memory-heavy, you could do as you mentioned and keep a bitmap buffer, but I'm pretty sure it'll need to be more than 3 images large.
Good luck.
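For the recycling idea mentioned above, a sketch of the change inside the posted Runnable (assuming frames are only ever shown in ascending order on the UI thread):

handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        mImageView.setImageBitmap(mBitmaps.get(idx));
        if (idx > 0) {
            // The previous frame can no longer be drawn, so free its pixels early
            mBitmaps.get(idx - 1).recycle();
        }
    }
}, i * 40);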
EDIT: Attempt 2.
First, I cropped all the images to 590x590. This shaved about 1 MB off the images. Then I created a new class, which is a bit "busy" and doesn't have a fixed frame rate, but renders the images as soon as they are ready:
package mk.testanimation;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.os.Handler;
import android.util.Log;
import android.widget.ImageView;
import java.util.ArrayList;
public class MainActivity extends Activity {

    ImageView mImageView;

    private int mImageRes[] = new int[]{R.drawable.s0, R.drawable.s1, R.drawable.s2, R.drawable.s3, R.drawable.s4, R.drawable.s5, R.drawable.s6, R.drawable.s7, R.drawable.s8, R.drawable.s9, R.drawable.s10, R.drawable.s11, R.drawable.s12, R.drawable.s13, R.drawable.s14, R.drawable.s15, R.drawable.s16, R.drawable.s17, R.drawable.s18, R.drawable.s19, R.drawable.s20, R.drawable.s21, R.drawable.s22, R.drawable.s23, R.drawable.s24, R.drawable.s25, R.drawable.s26, R.drawable.s27, R.drawable.s28, R.drawable.s29, R.drawable.s30, R.drawable.s31, R.drawable.s32, R.drawable.s33, R.drawable.s34, R.drawable.s35};

    private ArrayList<Bitmap> mBitmaps = new ArrayList<Bitmap>(mImageRes.length);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        final long timestamp = System.currentTimeMillis();
        final Handler handler = new Handler();
        mImageView = new ImageView(this);
        setContentView(mImageView);

        Thread important = new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < mImageRes.length; i++) {
                    mBitmaps.add(BitmapFactory.decodeResource(getResources(), mImageRes[i]));
                }
            }
        };
        important.setPriority(Thread.MAX_PRIORITY);
        important.start();

        Thread drawing = new Thread() {
            @Override
            public void run() {
                int i = 0;
                while (i < mImageRes.length) {
                    if (i >= mBitmaps.size()) {
                        Thread.yield();
                    } else {
                        final Bitmap bitmap = mBitmaps.get(i);
                        handler.post(new Runnable() {
                            @Override
                            public void run() {
                                mImageView.setImageBitmap(bitmap);
                            }
                        });
                        i++;
                    }
                }
                Log.d("ANIM-TAG", "Time to render all frames: " + (System.currentTimeMillis() - timestamp) + "ms");
            }
        };
        drawing.setPriority(Thread.MAX_PRIORITY);
        drawing.start();
    }
}
The above got the rendering to start nearly immediately, and it took less than 4s on my 2012 Nexus 7.
As a last effort, I converted all the images to 8-bit PNGs instead of 32-bit. This brought the rendering to under 2 seconds!
I'll bet any solution you end up with will benefit from making the images as small as possible.
Again -- Good Luck!
I was facing the same problem and I solved it by overriding AnimationDrawable.
So, if the problem is that you can't load all the images into an array because it is too big for memory to hold, then load each image when you need it.
My AnimationDrawable is this:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.graphics.RectF;
import android.graphics.drawable.AnimationDrawable;
import android.graphics.drawable.Drawable;
import android.os.Handler;

import java.io.IOException;

public abstract class MyAnimationDrawable extends AnimationDrawable {

    private Context context;
    private int current;
    private int reqWidth;
    private int reqHeight;
    private int totalTime;

    public MyAnimationDrawable(Context context, int reqWidth, int reqHeight) {
        this.context = context;
        this.current = 0;
        // In my case, the size of the screen, to scale the Drawable
        this.reqWidth = reqWidth;
        this.reqHeight = reqHeight;
        this.totalTime = 0;
    }

    @Override
    public void addFrame(Drawable frame, int duration) {
        super.addFrame(frame, duration);
        totalTime += duration;
    }

    @Override
    public void start() {
        super.start();
        new Handler().postDelayed(new Runnable() {
            public void run() {
                onAnimationFinish();
            }
        }, totalTime);
    }

    public int getTotalTime() {
        return totalTime;
    }

    @Override
    public void draw(Canvas canvas) {
        try {
            // Load the image from assets; you could load it from resources instead
            Bitmap bmp = BitmapFactory.decodeStream(context.getAssets().open("presentation/intro_000" + (current < 10 ? "0" + current : current) + ".jpg"));
            // Scale the image to fitCenter
            Matrix m = new Matrix();
            m.setRectToRect(new RectF(0, 0, bmp.getWidth(), bmp.getHeight()), new RectF(0, 0, reqWidth, reqHeight), Matrix.ScaleToFit.CENTER);
            bmp = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), m, true);
            // Calculate the start x and y at which to paint the Bitmap centered
            int x = (reqWidth - bmp.getWidth()) / 2;
            int y = (reqHeight - bmp.getHeight()) / 2;
            // Paint the Bitmap on the canvas
            canvas.drawBitmap(bmp, x, y, null);
            // Jump to the next frame
            current++;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    abstract void onAnimationFinish();
}
Now, to play the animation, you need to do the following:
// Get your ImageView
View image = MainActivity.this.findViewById(R.id.presentation);

// Create the AnimationDrawable
final AnimationDrawable animation = new MyAnimationDrawable(this, displayMetrics.widthPixels, displayMetrics.heightPixels) {
    @Override
    void onAnimationFinish() {
        // Do something when the animation finishes
    }
};
animation.setOneShot(true); // don't repeat the animation

// This is just to say that my AnimationDrawable has 72 frames with a 50 ms interval
try {
    // Always load the same bitmap; the right one is loaded in the draw() method of MyAnimationDrawable anyway
    Bitmap bmp = BitmapFactory.decodeStream(MainActivity.this.getAssets().open("presentation/intro_00000.jpg"));
    for (int i = 0; i < 72; i++) {
        animation.addFrame(new BitmapDrawable(getResources(), bmp), 50);
    }
} catch (IOException e) {
    e.printStackTrace();
}

// Set the AnimationDrawable on the ImageView
if (Build.VERSION.SDK_INT < 16) {
    image.setBackgroundDrawable(animation);
} else {
    image.setBackground(animation);
}

// Start the animation
image.post(new Runnable() {
    @Override
    public void run() {
        animation.start();
    }
});
That is all, and it works OK for me!
Try using an AnimationDrawable for your ImageView. See my answer here for example.
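For reference, a minimal programmatic AnimationDrawable (reusing the mImageRes array and ImageView from the accepted answer; note that this decodes every frame up front, which is exactly where the memory pressure comes from):

AnimationDrawable anim = new AnimationDrawable();
anim.setOneShot(false);
for (int res : mImageRes) {
    anim.addFrame(getResources().getDrawable(res), 40); // 40 ms per frame
}
mImageView.setImageDrawable(anim);
anim.start();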
The most efficient way to show frame-by-frame animation in Android is to use OpenGL with the NDK.
Create an ImageView programmatically, rotate it by an angle, then make another one and rotate that. Do this for the number of ImageViews you want to show; you only have to add one image for it.
You can rotate an image like this:
Matrix matrix = new Matrix();
imageView.setScaleType(ScaleType.MATRIX); // required
matrix.postRotate((float) angle, pivX, pivY);
imageView.setImageMatrix(matrix);
Maybe a stupid answer, but it may help you: make the animation using another tool, save it as a high-quality video, and then just play the video.
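A sketch of the playback side, assuming the exported video is bundled as a raw resource (R.raw.animation and R.id.videoView are hypothetical names):

VideoView videoView = (VideoView) findViewById(R.id.videoView);
videoView.setVideoURI(Uri.parse("android.resource://" + getPackageName() + "/" + R.raw.animation));
videoView.start();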
Try exporting your images as a GIF? Android does support decoding GIF files.
Have a look: Supported Media Formats.
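If you go that route, one way to render a GIF on older APIs is android.graphics.Movie (a sketch; "anim.gif" in assets is an assumption, and Movie is deprecated on newer API levels):

public class GifView extends View {

    private final Movie movie;
    private long start;

    public GifView(Context context) throws IOException {
        super(context);
        // Movie can decode GIF data directly from a stream
        movie = Movie.decodeStream(context.getAssets().open("anim.gif"));
        // Movie cannot draw on a hardware-accelerated canvas
        setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (start == 0) {
            start = SystemClock.uptimeMillis();
        }
        int duration = movie.duration() == 0 ? 1000 : movie.duration();
        movie.setTime((int) ((SystemClock.uptimeMillis() - start) % duration));
        movie.draw(canvas, 0, 0);
        invalidate(); // schedule the next frame
    }
}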
Loading the bitmaps from assets instead of resources saved 40% of the decoding time for me. You might want to try that.
This went from 5 seconds with resources to 3 seconds with assets, on your pictures, on my 2012 Nexus 7:
long time = System.currentTimeMillis();
try {
    for (int i = 0; i < 35; i++) {
        InputStream assetStream = getAssets().open("_step" + (i + 1) + ".png");
        try {
            Bitmap bitmap = BitmapFactory.decodeStream(assetStream);
            if (bitmap == null) {
                throw new RuntimeException("Could not load bitmap");
            }
            mBitmaps.add(bitmap);
        } finally {
            assetStream.close();
        }
    }
} catch (Exception e) {
    throw new RuntimeException(e);
}
Log.d("ANIM", "Loading bitmaps elapsed " + (System.currentTimeMillis() - time) + "ms");
I have two PNG image files that I would like my Android app to combine programmatically into one PNG image file, and I am wondering if it is possible to do so. If so, what I would like to do is just overlay them on each other to create one file.
The idea behind this is that I have a handful of PNG files, some with a portion of the image on the left and the rest transparent, and others with an image on the right and the rest transparent. Based on user input, the app will combine two of them to make one file to display. (And I can't just display the two images side by side; they need to be one file.)
Is this possible to do programmatically in Android, and how so?
I've been trying to figure this out for a little while now.
Here's (essentially) the code I used to make it work.
// Get your images from their files.
// Decoded bitmaps are immutable, so make a mutable copy of the one we draw on.
Bitmap bottomImage = BitmapFactory.decodeFile("myFirstPNG.png")
        .copy(Bitmap.Config.ARGB_8888, true);
Bitmap topImage = BitmapFactory.decodeFile("myOtherPNG.png");

// As described by Steve Pomeroy in a previous comment,
// use a canvas to combine them.
// Start with the first in the constructor...
Canvas comboImage = new Canvas(bottomImage);
// ...then draw the second on top of that
comboImage.drawBitmap(topImage, 0f, 0f, null);

// bottomImage is now a composite of the two.
// To write the file out to the SD card:
OutputStream os = null;
try {
    os = new FileOutputStream("/sdcard/DCIM/Camera/" + "myNewFileName.png");
    bottomImage.compress(CompressFormat.PNG, 50, os);
} catch (IOException e) {
    e.printStackTrace();
}
EDIT:
There was a typo, so I've changed
image.compress(CompressFormat.PNG, 50, os)
to
bottomImage.compress(CompressFormat.PNG, 50, os)
You can do blending. This is not particular to Android; it's just universal image processing.
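For example, a simple 50% alpha blend of two same-size bitmaps on Android (a minimal sketch; the helper name is arbitrary):

public static Bitmap blend(Bitmap bottom, Bitmap top) {
    // Work on a mutable copy so neither input is modified
    Bitmap result = bottom.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(result);
    Paint paint = new Paint();
    paint.setAlpha(128); // 0..255, so 128 is roughly 50% opacity
    canvas.drawBitmap(top, 0f, 0f, paint);
    return result;
}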
EDIT:
You may find these articles & samples & code useful:
http://www.jhlabs.com/ip/
http://kfb-android.blogspot.com/2009/04/image-processing-in-android.html
http://code.google.com/p/jjil/
Image Processing on Android
I use this code
private class PhotoComposition extends AsyncTask<Object, Void, Boolean> {

    private String pathSave; // path at which to save the combined image

    @Override
    protected Boolean doInBackground(Object... objects) {
        List<String> images = (List<String>) objects[0]; // list of image paths
        pathSave = (String) objects[1]; // path at which to save the combined image
        if (images.size() == 0) {
            return false;
        }
        List<Bitmap> bitmaps = new ArrayList<>();
        for (int i = 0; i < images.size(); i++) {
            bitmaps.add(BitmapFactory.decodeFile(images.get(i)));
        }
        int width = findWidth(bitmaps); // find the width of the composite image
        int height = findMaxHeight(bitmaps); // find the height of the composite image
        Bitmap combineBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888); // create the bitmap of the composite image
        combineBitmap.eraseColor(Color.parseColor("#00000000")); // background color of the composite image
        Bitmap mutableCombineBitmap = combineBitmap.copy(Bitmap.Config.ARGB_8888, true); // create a mutable bitmap for the canvas
        Canvas canvas = new Canvas(mutableCombineBitmap); // create a canvas to draw the bitmaps on
        float left = 0f;
        for (int i = 0; i < bitmaps.size(); i++) {
            canvas.drawBitmap(bitmaps.get(i), left, 0f, null); // place the photos horizontally
            left += bitmaps.get(i).getWidth(); // move right by the width of the previous photo
        }
        OutputStream outputStream = null;
        try {
            outputStream = new FileOutputStream(pathSave); // path of the composite image
            mutableCombineBitmap.compress(Bitmap.CompressFormat.PNG, 80, outputStream);
        } catch (IOException e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

    @Override
    protected void onPostExecute(Boolean isSave) {
        if (isSave) {
            // the image was saved at pathSave
            Log.i("PhotoComposition", "onPostExecute: " + pathSave);
        }
        super.onPostExecute(isSave);
    }

    private int findMaxHeight(List<Bitmap> bitmaps) {
        int maxHeight = Integer.MIN_VALUE;
        for (int i = 0; i < bitmaps.size(); i++) {
            if (bitmaps.get(i).getHeight() > maxHeight) {
                maxHeight = bitmaps.get(i).getHeight();
            }
        }
        return maxHeight;
    }

    private int findWidth(List<Bitmap> bitmaps) {
        int width = 0;
        for (int i = 0; i < bitmaps.size(); i++) {
            width += bitmaps.get(i).getWidth();
        }
        return width;
    }
}
USAGE
List<String> images = new ArrayList<>();
images.add("/storage/emulated/0/imageOne.png"); // path of an image in storage
images.add("/storage/emulated/0/imageTwo.png");
// images.add("/storage/emulated/0/imageThree.png");
// ... // add more images
String pathSaveCombinedImage = "/storage/emulated/0/CombinedImage.png"; // path at which to save the result image
new PhotoComposition().execute(images, pathSaveCombinedImage);
The result of using the above code is the input images combined horizontally into a single image.
You may wish to look into the Canvas object, which would make it easy to do other drawing operations as well. You can just draw your bitmaps onto a canvas where you want them, then save the resulting bitmap.
If they have transparent sections, then if you draw one on top of the other, only the non-transparent portions will overlap. It will be up to you to arrange the bitmaps however you like.
For the separate issue of re-saving your image as a PNG, use bitmap.compress().
Try this:
public Bitmap mergeBitmap(Bitmap frame, Bitmap img) {
    Bitmap bmOverlay = Bitmap.createBitmap(frame.getWidth(), frame.getHeight(), frame.getConfig());
    Canvas canvas = new Canvas(bmOverlay);
    canvas.drawBitmap(img, 0, 0, null);
    canvas.drawBitmap(frame, new Matrix(), null);
    return bmOverlay;
}
It returns the merged bitmap image.
Pass your two bitmap images to the function as shown below:
Bitmap img = mergeBitmap(imgone, imagetwo);
See the entire post, or see merge multiple images in android programmatically.