Hello, I am new to Android. I am currently trying to print a receipt from my Android 4.4.2 tablet to my Zicox thermal receipt printer. I have been able to print text so far, but now I need to go a step further and print barcodes/QR codes. Unfortunately this is beyond my knowledge; I have googled for solutions but have not found one yet.
These are the methods I use to generate my barcode:
/**************************************************************
* adapted from com.google.zxing.client.android.encode.QRCodeEncoder
*
* See the sites below
* http://code.google.com/p/zxing/
* http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/EncodeActivity.java
* http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/encode/QRCodeEncoder.java
*/
private static final int WHITE = 0xFFFFFFFF;
private static final int BLACK = 0xFF000000;
Bitmap encodeAsBitmap(String contents, BarcodeFormat format, int img_width, int img_height) throws WriterException {
    String contentsToEncode = contents;
    if (contentsToEncode == null) {
        return null;
    }
    Map<EncodeHintType, Object> hints = null;
    String encoding = guessAppropriateEncoding(contentsToEncode);
    if (encoding != null) {
        hints = new EnumMap<EncodeHintType, Object>(EncodeHintType.class);
        hints.put(EncodeHintType.CHARACTER_SET, encoding);
    }
    MultiFormatWriter writer = new MultiFormatWriter();
    BitMatrix result;
    try {
        result = writer.encode(contentsToEncode, format, img_width, img_height, hints);
    } catch (IllegalArgumentException iae) {
        // Unsupported format
        return null;
    }
    int width = result.getWidth();
    int height = result.getHeight();
    int[] pixels = new int[width * height];
    for (int y = 0; y < height; y++) {
        int offset = y * width;
        for (int x = 0; x < width; x++) {
            pixels[offset + x] = result.get(x, y) ? BLACK : WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    return bitmap;
}
private static String guessAppropriateEncoding(CharSequence contents) {
    // Very crude at the moment
    for (int i = 0; i < contents.length(); i++) {
        if (contents.charAt(i) > 0xFF) {
            return "UTF-8";
        }
    }
    return null;
}
This is my onClick method that starts the entire process:
// barcode data
String barcode_data = "123456";

// barcode image
ImageView iv = new ImageView(this);
try {
    barCode = encodeAsBitmap(barcode_data, BarcodeFormat.CODE_128, 300, 40);
    // bitmap.getRowBytes();
    iv.setImageBitmap(barCode);
} catch (WriterException e) {
    e.printStackTrace();
}
iv.setLayoutParams(new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT));
innerLayout.addView(iv);
So now I am able to generate and display the barcode; next I want to print it on my receipts.
As I understand from your question, you basically want to print a bitmap that contains the QR code.
You can use the PrintHelper class for that. Here's a sample from the official documentation that shows how to use it:
private void doPhotoPrint() {
    PrintHelper photoPrinter = new PrintHelper(getActivity());
    photoPrinter.setScaleMode(PrintHelper.SCALE_MODE_FIT);
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.droids);
    photoPrinter.printBitmap("droids.jpg - test print", bitmap);
}
Also, it says:
After the printBitmap() method is called, no further action from your application is required. The Android print user interface appears, allowing the user to select a printer and printing options.
Update:
The above method prints only a bitmap. For printing a whole layout (like a receipt, as you said), instructions have been laid out here. I'll try to summarize them for you:
The first step is to extend the PrintDocumentAdapter class and override certain methods. The adapter has a callback method, onWrite(), that is called to draw content onto the file to be printed. In this method you use a Canvas object to draw the content; Canvas has helper methods to draw bitmaps, lines, text, and so on.
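As a rough sketch, assuming a single-page receipt rendered into a PrintedPdfDocument (the drawing calls and the reuse of the barCode bitmap from the question are illustrative assumptions, not a definitive implementation):

public class CustomDocumentAdapter extends PrintDocumentAdapter {
    private final Context context;
    private PrintedPdfDocument pdfDocument;

    public CustomDocumentAdapter(Context context) {
        this.context = context;
    }

    @Override
    public void onLayout(PrintAttributes oldAttributes, PrintAttributes newAttributes,
                         CancellationSignal cancellationSignal,
                         LayoutResultCallback callback, Bundle extras) {
        // Create a PDF document sized according to the selected print attributes
        pdfDocument = new PrintedPdfDocument(context, newAttributes);
        if (cancellationSignal.isCanceled()) {
            callback.onLayoutCancelled();
            return;
        }
        PrintDocumentInfo info = new PrintDocumentInfo.Builder("receipt.pdf")
                .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
                .setPageCount(1)
                .build();
        callback.onLayoutFinished(info, true);
    }

    @Override
    public void onWrite(PageRange[] pages, ParcelFileDescriptor destination,
                        CancellationSignal cancellationSignal, WriteResultCallback callback) {
        PdfDocument.Page page = pdfDocument.startPage(0);
        Canvas canvas = page.getCanvas();
        // Draw the receipt content, e.g. the barcode bitmap generated earlier
        canvas.drawBitmap(barCode, 0, 0, null);
        canvas.drawText("Total: 12.34", 0, barCode.getHeight() + 40, new Paint());
        pdfDocument.finishPage(page);
        try {
            pdfDocument.writeTo(new FileOutputStream(destination.getFileDescriptor()));
        } catch (IOException e) {
            callback.onWriteFailed(e.toString());
            return;
        } finally {
            pdfDocument.close();
        }
        callback.onWriteFinished(new PageRange[]{PageRange.ALL_PAGES});
    }
}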
Then, when the print is requested, obtain an instance of the PrintManager class as follows:
PrintManager printManager = (PrintManager) getActivity()
.getSystemService(Context.PRINT_SERVICE);
Then, call the print method as:
printManager.print(jobName, new CustomDocumentAdapter(getActivity()),
null);
Let me know if this helps.
Related
I have integrated ML Kit face detection into my Android application. I referred to the URL below:
https://firebase.google.com/docs/ml-kit/android/detect-faces
The code for the face detection processor class is:
import android.support.annotation.NonNull;
import android.util.Log;

import com.google.android.gms.tasks.Task;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;

import java.io.IOException;
import java.util.List;

/** Face Detector Demo. */
public class FaceDetectionProcessor extends VisionProcessorBase<List<FirebaseVisionFace>> {

    private static final String TAG = "FaceDetectionProcessor";

    private final FirebaseVisionFaceDetector detector;

    public FaceDetectionProcessor() {
        FirebaseVisionFaceDetectorOptions options =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .setLandmarkType(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setTrackingEnabled(true)
                        .build();
        detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    }

    @Override
    public void stop() {
        try {
            detector.close();
        } catch (IOException e) {
            Log.e(TAG, "Exception thrown while trying to close Face Detector: " + e);
        }
    }

    @Override
    protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
        return detector.detectInImage(image);
    }

    @Override
    protected void onSuccess(
            @NonNull List<FirebaseVisionFace> faces,
            @NonNull FrameMetadata frameMetadata,
            @NonNull GraphicOverlay graphicOverlay) {
        graphicOverlay.clear();
        for (int i = 0; i < faces.size(); ++i) {
            FirebaseVisionFace face = faces.get(i);
            FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
            graphicOverlay.add(faceGraphic);
            faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        }
    }

    @Override
    protected void onFailure(@NonNull Exception e) {
        Log.e(TAG, "Face detection failed " + e);
    }
}
Here in "onSuccess" listener , we will get array of "FirebaseVisionFace" class objects which will have "Bounding Box" of face.
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
I want to know how to convert these FirebaseVisionFace objects into a Bitmap.
I want to extract the face image and show it in an ImageView. Can anyone please help me? Thanks in advance.
Note: I downloaded the ML Kit Android sample source code from the URL below:
https://github.com/firebase/quickstart-android/tree/master/mlkit
You created the FirebaseVisionImage from a bitmap. After detection returns, each FirebaseVisionFace describes a bounding box as a Rect that you can use to extract the detected face from the original bitmap, e.g. using Bitmap.createBitmap().
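For example, inside the onSuccess loop, a minimal sketch assuming originalBitmap is the Bitmap the FirebaseVisionImage was built from and imageView is where you want to show the face (the box is clamped first, because it can extend past the image edges):

Rect box = new Rect(face.getBoundingBox());
// Clamp the box to the bitmap bounds before cropping
box.intersect(0, 0, originalBitmap.getWidth(), originalBitmap.getHeight());
Bitmap faceBitmap = Bitmap.createBitmap(originalBitmap,
        box.left, box.top, box.width(), box.height());
imageView.setImageBitmap(faceBitmap);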
Since the accepted answer was not specific enough, I will try to explain what I did.
1.- Create an ImageView in LivePreviewActivity like this:
private ImageView imageViewTest;
2.- Create it in the activity's XML and link it to the Java file. I placed it right on top of the camera preview the sample code already had, so it is visible over the camera feed.
3.- Where the sample creates the FaceDetectionProcessor, pass in the ImageView instance so the processor can set the source image on it:
FaceDetectionProcessor processor = new FaceDetectionProcessor(imageViewTest);
4.- Change the constructor of FaceDetectionProcessor so it receives an ImageView as a parameter, and create a field that keeps that instance:
public FaceDetectionProcessor(ImageView imageView) {
    FirebaseVisionFaceDetectorOptions options =
            new FirebaseVisionFaceDetectorOptions.Builder()
                    .setClassificationType(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                    .setTrackingEnabled(true)
                    .build();
    detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
    this.imageView = imageView;
}
5.- I created a crop method that takes a bitmap and a Rect so it focuses only on the face. Go ahead and do the same:
public static Bitmap cropBitmap(Bitmap bitmap, Rect rect) {
    int w = rect.right - rect.left;
    int h = rect.bottom - rect.top;
    Bitmap ret = Bitmap.createBitmap(w, h, bitmap.getConfig());
    Canvas canvas = new Canvas(ret);
    canvas.drawBitmap(bitmap, -rect.left, -rect.top, null);
    return ret;
}
6.- Modify the detectInImage method to keep a reference to the bitmap being detected in a field:
@Override
protected Task<List<FirebaseVisionFace>> detectInImage(FirebaseVisionImage image) {
    imageBitmap = image.getBitmapForDebugging();
    return detector.detectInImage(image);
}
7.- Finally, modify the onSuccess method to call the cropping method and assign the result to the ImageView:
@Override
protected void onSuccess(
        @NonNull List<FirebaseVisionFace> faces,
        @NonNull FrameMetadata frameMetadata,
        @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);
        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay);
        graphicOverlay.add(faceGraphic);
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
        croppedImage = cropBitmap(imageBitmap, face.getBoundingBox());
    }
    imageView.setImageBitmap(croppedImage);
}
This may help you if you're trying to use ML Kit to detect faces and OpenCV to perform image processing on the detected face. Note that in this particular example you need the original camera bitmap inside onSuccess.
I haven't found a way to do this without a bitmap, and truthfully I'm still searching.
@Override
protected void onSuccess(@NonNull List<FirebaseVisionFace> faces, @NonNull FrameMetadata frameMetadata, @NonNull GraphicOverlay graphicOverlay) {
    graphicOverlay.clear();
    for (int i = 0; i < faces.size(); ++i) {
        FirebaseVisionFace face = faces.get(i);

        /* The original implementation has the original image, which represents
           the camera preview from the live camera. */
        // Create a Mat representing the live camera frame
        Mat rgba = new Mat(originalCameraImage.getHeight(), originalCameraImage.getWidth(), CvType.CV_8UC4);

        // The sub-window that the OpenCV Imgproc effect is applied to
        Mat rgbaInnerWindow;
        Mat mIntermediateMat = new Mat();

        // Size the Imgproc window to the detected face
        int rows = face.getBoundingBox().height();
        int cols = face.getBoundingBox().width();
        int left = cols / 8;
        int top = rows / 8;
        int width = cols * 3 / 4;
        int height = rows * 3 / 4;

        // Create a new bitmap based on the live preview,
        // which will show the actual image processing
        Bitmap newBitmap = Bitmap.createBitmap(originalCameraImage);

        // Bitmap to Mat
        Utils.bitmapToMat(newBitmap, rgba);

        // Imgproc stuff. In this example I'm doing edge detection.
        rgbaInnerWindow = rgba.submat(top, top + height, left, left + width);
        Imgproc.Canny(rgbaInnerWindow, mIntermediateMat, 80, 90);
        Imgproc.cvtColor(mIntermediateMat, rgbaInnerWindow, Imgproc.COLOR_GRAY2BGRA, 4);
        rgbaInnerWindow.release();

        // After processing the image, back to bitmap
        Utils.matToBitmap(rgba, newBitmap);

        // Load the bitmap
        CameraImageGraphic imageGraphic = new CameraImageGraphic(graphicOverlay, newBitmap);
        graphicOverlay.add(imageGraphic);

        FaceGraphic faceGraphic = new FaceGraphic(graphicOverlay, face, null);
        graphicOverlay.add(faceGraphic);
        // I can't speak for this
        faceGraphic.updateFace(face, frameMetadata.getCameraFacing());
    }
}
Actually, you can just read the ByteBuffer and get the byte array, then write it wherever you want with an OutputStream. Of course, you can get the face region from getBoundingBox() too.
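A minimal sketch of that idea, where faceBitmap and the output file name are assumptions for illustration:

// Copy the bitmap's pixels into a ByteBuffer, then write the backing
// array out through an OutputStream (raw ARGB bytes, no image header)
File outFile = new File(getFilesDir(), "face.raw");
ByteBuffer byteBuffer = ByteBuffer.allocate(faceBitmap.getByteCount());
faceBitmap.copyPixelsToBuffer(byteBuffer);
try (FileOutputStream out = new FileOutputStream(outFile)) {
    out.write(byteBuffer.array());
} catch (IOException e) {
    e.printStackTrace();
}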
I'm creating an app with a QR barcode. The barcode loads correctly, but somehow it loads a bit slowly, about 3-5 seconds after I tap/click the menu.
Can we make it faster, or is it normal for the page to take that long? Other pages load in 1 second or less, and the app is offline, so no internet connection is needed.
Here is my code to generate the QR barcode:
ImageView imageViewBarcode = (ImageView) findViewById(R.id.imageViewBarcode);
try {
    bitmap = TextToImageEncode(barcode_user);
    imageViewBarcode.setImageBitmap(bitmap);
} catch (WriterException e) {
    e.printStackTrace();
}
The code above is placed inside onCreate, so the barcode is generated when the page loads.
Here is the function that creates the barcode:
Bitmap TextToImageEncode(String Value) throws WriterException {
    BitMatrix bitMatrix;
    try {
        bitMatrix = new MultiFormatWriter().encode(
                Value,
                BarcodeFormat.QR_CODE,
                QRcodeWidth, QRcodeWidth, null
        );
    } catch (IllegalArgumentException Illegalargumentexception) {
        return null;
    }
    int bitMatrixWidth = bitMatrix.getWidth();
    int bitMatrixHeight = bitMatrix.getHeight();
    int[] pixels = new int[bitMatrixWidth * bitMatrixHeight];
    for (int y = 0; y < bitMatrixHeight; y++) {
        int offset = y * bitMatrixWidth;
        for (int x = 0; x < bitMatrixWidth; x++) {
            pixels[offset + x] = bitMatrix.get(x, y) ?
                    getResources().getColor(R.color.colorBlack) : getResources().getColor(R.color.colorWhite);
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(bitMatrixWidth, bitMatrixHeight, Bitmap.Config.ARGB_4444);
    bitmap.setPixels(pixels, 0, bitMatrixWidth, 0, 0, bitMatrixWidth, bitMatrixHeight);
    return bitmap;
}
You are calling getResources().getColor() inside a double loop, i.e. when your image is 100x100 pixels it will be called 10,000 times. Instead, assign the color values to variables outside the loops and use those variables inside the loops.
int color_black = getResources().getColor(R.color.colorBlack);
int color_white = getResources().getColor(R.color.colorWhite);
for (int y = 0; y < bitMatrixHeight; y++) {
    int offset = y * bitMatrixWidth;
    for (int x = 0; x < bitMatrixWidth; x++) {
        pixels[offset + x] = bitMatrix.get(x, y) ? color_black : color_white;
    }
}
EDIT: added code example
Found this: zxing generate QR on another thread, here. Generating the code off the main thread solved a similar problem for me.
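A minimal sketch of that approach, reusing TextToImageEncode and the views from the question:

final ImageView imageViewBarcode = (ImageView) findViewById(R.id.imageViewBarcode);
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // Do the expensive encoding off the UI thread
            final Bitmap qr = TextToImageEncode(barcode_user);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    imageViewBarcode.setImageBitmap(qr);
                }
            });
        } catch (WriterException e) {
            e.printStackTrace();
        }
    }
}).start();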
I am able to load a DICOM image using Imebra and want to change the colors of the image, but can't figure out a way. I want to achieve functionality like that of the Dicomite app.
Following is my code:
public void loadDCM() {
    com.imebra.DataSet loadedDataSet = com.imebra.CodecFactory.load(dicomPath.getPath());
    com.imebra.VOIs voi = loadedDataSet.getVOIs();
    com.imebra.Image image = loadedDataSet.getImageApplyModalityTransform(0);
    // com.imebra.Image image = loadedDataSet.getImage(0);
    String colorSpace = image.getColorSpace();
    long width = image.getWidth();
    long height = image.getHeight();

    com.imebra.TransformsChain chain = new com.imebra.TransformsChain();

    if (com.imebra.ColorTransformsFactory.isMonochrome(image.getColorSpace())) {
        // Allocate a VOILUT transform. If the DataSet does not contain any pre-defined
        // settings then we will find the optimal ones.
        VOILUT voilutTransform = new VOILUT();

        // Retrieve the VOIs (center/width pairs)
        com.imebra.VOIs vois = loadedDataSet.getVOIs();

        // Retrieve the LUTs
        List<LUT> luts = new ArrayList<LUT>();
        for (long scanLUTs = 0; ; scanLUTs++) {
            try {
                luts.add(loadedDataSet.getLUT(new com.imebra.TagId(0x0028, 0x3010), scanLUTs));
            } catch (Exception e) {
                break;
            }
        }

        if (!vois.isEmpty()) {
            voilutTransform.setCenterWidth(vois.get(0).getCenter(), vois.get(0).getWidth());
        } else if (!luts.isEmpty()) {
            voilutTransform.setLUT(luts.get(0));
        } else {
            voilutTransform.applyOptimalVOI(image, 0, 0, width, height);
        }

        chain.addTransform(voilutTransform);

        // Build the DrawBitmap around the chain that now contains the VOILUT transform
        com.imebra.DrawBitmap drawBitmap = new com.imebra.DrawBitmap(chain);

        // Ask for the size of the buffer (in bytes)
        long requestedBufferSize = drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, new byte[0]);

        byte buffer[] = new byte[(int) requestedBufferSize]; // Ideally you want to reuse this in subsequent calls to getBitmap()
        ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);

        // Now fill the buffer with the image data and create a bitmap from it
        drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, buffer);
        Bitmap renderBitmap = Bitmap.createBitmap((int) image.getWidth(), (int) image.getHeight(), Bitmap.Config.ARGB_8888);
        renderBitmap.copyPixelsFromBuffer(byteBuffer);
        image_view.setImageBitmap(renderBitmap);
    }
}
If you are dealing with a monochrome image and you want to modify the presentation luminosity/contrast, then you have to modify the parameters of the VOILUT transform (voilutTransform variable in your code).
You can get the center and width that the transform is applying to the image before calculating the bitmap to be displayed, then modify them before calling drawBitmap.getBitmap again.
E.g., to double the contrast:
voilutTransform.setCenterWidth(voilutTransform.getCenter(), voilutTransform.getWidth() / 2);
// Now fill the buffer with the image data and create a bitmap from it
drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, buffer);
See this answer for more details about the center/width.
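Putting it together, a sketch of the full re-render cycle using the variables from your loadDCM() method (rewinding byteBuffer before copying again is an assumption about its position after the first copy):

// Double the contrast by halving the window width
voilutTransform.setCenterWidth(voilutTransform.getCenter(), voilutTransform.getWidth() / 2);

// Re-render the image through the chain that holds the modified VOILUT
drawBitmap.getBitmap(image, drawBitmapType_t.drawBitmapRGBA, 4, buffer);

// Refresh the Android bitmap and the view
byteBuffer.rewind();
renderBitmap.copyPixelsFromBuffer(byteBuffer);
image_view.setImageBitmap(renderBitmap);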
While generating a QR code for Android using the ZXing library, is it possible to set the version number, like version 4 or any other version?
Any guidance or link would be appreciated.
Thank you.
Yes, check the EncodeHintType map:
private Bitmap stringToQRCode(String text, int width, int height) {
    BitMatrix bitMatrix;
    try {
        HashMap<EncodeHintType, Object> map = new HashMap<>();
        map.put(EncodeHintType.CHARACTER_SET, "utf-8");
        map.put(EncodeHintType.ERROR_CORRECTION, ErrorCorrectionLevel.M);
        map.put(EncodeHintType.QR_VERSION, 9); // (1-40)
        map.put(EncodeHintType.MARGIN, 2); // pixels
        bitMatrix = new MultiFormatWriter().encode(text, BarcodeFormat.QR_CODE, width, height, map);
        int bitMatrixWidth = bitMatrix.getWidth();
        int bitMatrixHeight = bitMatrix.getHeight();
        int[] pixels = new int[bitMatrixWidth * bitMatrixHeight];
        int colorWhite = 0xFFFFFFFF;
        int colorBlack = 0xFF000000;
        for (int y = 0; y < bitMatrixHeight; y++) {
            int offset = y * bitMatrixWidth;
            for (int x = 0; x < bitMatrixWidth; x++) {
                pixels[offset + x] = bitMatrix.get(x, y) ? colorBlack : colorWhite;
            }
        }
        Bitmap bitmap = Bitmap.createBitmap(bitMatrixWidth, bitMatrixHeight, Bitmap.Config.ARGB_4444);
        bitmap.setPixels(pixels, 0, bitMatrixWidth, 0, 0, bitMatrixWidth, bitMatrixHeight);
        return bitmap;
    } catch (Exception i) {
        i.printStackTrace();
        return null;
    }
}
No. There would be no real point to this. The version can't be lower than what is required to encode the data, and setting it higher just makes a denser QR code that's slightly harder to read.
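For instance, in this sketch (longText is a placeholder for content that needs more than version 2), ZXing throws a WriterException because the data does not fit the requested version:

HashMap<EncodeHintType, Object> map = new HashMap<>();
map.put(EncodeHintType.QR_VERSION, 2); // too small for the content below
try {
    BitMatrix matrix = new MultiFormatWriter().encode(
            longText, BarcodeFormat.QR_CODE, 300, 300, map);
} catch (WriterException e) {
    // the data does not fit in the requested version
}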
I want to make an animated video from a list of images by applying transition animations between consecutive images. I found many similar questions on SO, like:
Android Screen capturing or make video from images
Android- How to make video using set of images from sd card?
All the similar SO questions suggest using animations for that, but how can we store those animated images to a video file? Is there an Android library that supports making a video out of images?
Android supports neither AWT's BufferedImage nor AWTUtil; those are for Java SE. Currently the SequenceEncoder solution has been integrated into jcodec's Android version, and you can use it from the package org.jcodec.api.SequenceEncoder.
Here is a solution for generating an MP4 file from a series of Bitmaps using jcodec:
try {
    File file = this.GetSDPathToFile("", "output.mp4");
    SequenceEncoder encoder = new SequenceEncoder(file);
    // only 5 frames in total
    for (int i = 1; i <= 5; i++) {
        // getting bitmap from drawable path
        int bitmapResId = this.getResources().getIdentifier("image" + i, "drawable", this.getPackageName());
        Bitmap bitmap = this.getBitmapFromResources(this.getResources(), bitmapResId);
        encoder.encodeNativeFrame(this.fromBitmap(bitmap));
    }
    encoder.finish();
} catch (IOException e) {
    e.printStackTrace();
}

// get full SD path
File GetSDPathToFile(String filePatho, String fileName) {
    File extBaseDir = Environment.getExternalStorageDirectory();
    if (filePatho == null || filePatho.length() == 0 || filePatho.charAt(0) != '/')
        filePatho = "/" + filePatho;
    makeDirectory(filePatho);
    File file = new File(extBaseDir.getAbsoluteFile() + filePatho);
    return new File(file.getAbsolutePath() + "/" + fileName);
}

// convert from Bitmap to Picture (jcodec native structure)
public Picture fromBitmap(Bitmap src) {
    Picture dst = Picture.create(src.getWidth(), src.getHeight(), ColorSpace.RGB);
    fromBitmap(src, dst);
    return dst;
}

public void fromBitmap(Bitmap src, Picture dst) {
    int[] dstData = dst.getPlaneData(0);
    int[] packed = new int[src.getWidth() * src.getHeight()];
    src.getPixels(packed, 0, src.getWidth(), 0, 0, src.getWidth(), src.getHeight());
    for (int i = 0, srcOff = 0, dstOff = 0; i < src.getHeight(); i++) {
        for (int j = 0; j < src.getWidth(); j++, srcOff++, dstOff += 3) {
            int rgb = packed[srcOff];
            dstData[dstOff] = (rgb >> 16) & 0xff;    // red
            dstData[dstOff + 1] = (rgb >> 8) & 0xff; // green
            dstData[dstOff + 2] = rgb & 0xff;        // blue
        }
    }
}
In case you need to change the fps, you may customize the SequenceEncoder.
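For example, recent jcodec versions (0.2.x) expose a factory method that takes the frame rate; treat this as a sketch and verify the exact signature against the jcodec build you use:

import org.jcodec.api.SequenceEncoder;
import org.jcodec.common.io.NIOUtils;
import org.jcodec.common.model.Rational;

// 15 frames per second instead of the default
SequenceEncoder encoder = SequenceEncoder.createWithFps(
        NIOUtils.writableFileChannel(file.getAbsolutePath()), Rational.R(15, 1));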
You can use a pure Java solution called JCodec (http://jcodec.org). Here's a corrected simple class that does it using JCodec's low-level API:
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data (SPS, PPS) to be stored in a special place of MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));
        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}