In my app I want to stream images from a live server and show them in the UI in an ImageBackground component, so that it feels like a video even though it isn't: just a rapidly changing image, received as base64 over a WebSocket.
The problem is that I'm not sure about the solution I've made, because:
I'm pretty sure that a higher frame rate is going to make the app lag (it's 4-5 fps now; what happens at 30?)
When the image updates there's a flickering effect, and it doesn't feel good (example on video)
Check the video to see how it currently works: video here
Code (View):
const liveFrameRef = useRef<Image | null>(null);

return (
  <ImageBackground
    fadeDuration={0}
    style={styles.stream}
    source={require("../../../../assets/images/camera_preview.jpg")}
    imageRef={(image) => {
      liveFrameRef.current = image;
    }}
  >
  </ImageBackground>
);
Code (Stream in useEffect):
// data - my current image got from a socket as binary
if (liveFrameRef.current) {
  let bytes = new Uint8Array(data);
  let binary = '';
  let len = bytes.byteLength;
  for (let i = 0; i < len; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  const image = "data:image/jpg;base64," + btoa(binary); // b64 encoded JPG
  liveFrameRef.current.setNativeProps({
    src: [{uri: image}],
  });
}
So basically I get a ref to my Image and change its source on every frame using setNativeProps. That's how I did it in plain React on the web, and it worked fine; here it doesn't. Any better ideas, or a way to work around this?
Thanks.
Related
The photos captured with the device are big. I want to upload them to my backend server after resizing (scaling) them to more reasonable sizes (less than 800x800). I hoped to use the ImageEditor module's cropImage() function, but running it on a large image results in an OutOfMemory exception. I assume that since the module tries to decode the large image and store it in memory, the app crashes.
What I need is the following:
Input
{
width: 3100,
height: 2500,
uri: content://android/1 (some location in Android device)
}
Output
{
width: 800,
height: 650,
uri: content://android/1/resized (some location in Android device)
}
Then I can grab this uri to send the picture to my backend server, and delete the resized photo from the device.
I assume that I will have to write a NativeModule so I can resize an image without loading the full decoded image into memory. React Native's Image component uses Fresco to handle resizing before rendering, but I don't think it provides a way to resize an image and temporarily save it to the filesystem.
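For illustration, this is roughly what I imagine the native side would have to do, based on the inJustDecodeBounds / inSampleSize technique from the Android bitmap docs linked below (an untested sketch; the method name, quality and file naming are only placeholders):
private String resizeToFile(Context context, Uri uri, int maxSize) throws IOException {
    ContentResolver resolver = context.getContentResolver();

    // First pass: decode only the bounds, no pixel memory is allocated.
    BitmapFactory.Options bounds = new BitmapFactory.Options();
    bounds.inJustDecodeBounds = true;
    try (InputStream in = resolver.openInputStream(uri)) {
        BitmapFactory.decodeStream(in, null, bounds);
    }

    // Pick the largest power-of-two sample size that keeps both sides >= maxSize.
    int sample = 1;
    while (bounds.outWidth / (sample * 2) >= maxSize
            && bounds.outHeight / (sample * 2) >= maxSize) {
        sample *= 2;
    }

    // Second pass: decode subsampled, using roughly 1/sample^2 of the memory.
    // (Could be followed by Bitmap.createScaledBitmap for an exact target size.)
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inSampleSize = sample;
    Bitmap bitmap;
    try (InputStream in = resolver.openInputStream(uri)) {
        bitmap = BitmapFactory.decodeStream(in, null, opts);
    }

    // Compress to a temporary JPEG in the app cache dir and return its uri.
    File out = new File(context.getCacheDir(), "resized_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream fos = new FileOutputStream(out)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, fos);
    }
    bitmap.recycle();
    return Uri.fromFile(out).toString();
}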
Any help would be appreciated.
References:
https://developer.android.com/training/displaying-bitmaps/load-bitmap.html
http://frescolib.org/docs/resizing-rotating.html
https://facebook.github.io/react-native/docs/images.html
Memory efficient image resize in Android
The Expo library has an image manipulator that can resize images and more:
import { ImageManipulator } from 'expo';
...
const manipResult = await ImageManipulator.manipulate(
  imageUri,
  [{ resize: { width: 640, height: 480 } }],
  { format: 'jpg' }
);
manipResult will be an object containing the new uri, width and height.
Find it here:
https://docs.expo.io/versions/latest/sdk/imagemanipulator.html
In my app I use react-native-image-picker which works really well.
If you don't want to use it, have a look at the source of this function to see how the resizing is done.
Cheers.
Did you try react-native-image-resizer? It works pretty well for me; I'm using it with react-native-camera, and it looks like this:
fromCamera() {
  let newWidth = 800;
  let newHeight = 650;
  let rotation = 0; // no rotation
  this.refs.camera.capture()
    .then((data) => {
      ImageResizer.createResizedImage(data.path, newWidth, newHeight, 'JPEG', 100, rotation)
        .then((uri) => {
          // send the resized image to the backend
        });
    });
}
The original image is saved on the device at data.path, so there are no memory problems.
Check this out. It provides a detailed description of resizing images and uploading them to a backend server.
In case you want to send a Base64 string, you can check the code below.
For how to install the library and link it with Android, please check this link:
https://github.com/TBouder/react-native-asset-resize-to-base64
// this code is for resizing and pushing to the array only
let images = [];
NativeModules.RNAssetResizeToBase64.assetToResizedBase64(
  response.uri,
  newWidth,
  newHeight,
  (err, base64) => {
    if (base64 != null) {
      const imgSources = { uri: base64 };
      this.setState({
        image: this.state.image.concat(imgSources)
      });
      this.state.image.map((item, index) => {
        images.push(item);
      });
      if (err) {
        console.log(err, "errors");
      }
    }
  }
);
I want to recognize the digits on an odometer on a mobile device using the Tesseract library.
Source image:
Next step:
Now I need to fill the gaps between each segment.
Can you help me figure out how to do it?
(The English training data works better for me than https://github.com/arturaugusto/display_ocr.)
Image processing:
func prepareImage(sourceImage: UIImage) -> UIImage {
    let avgLuminanceThresholdFilter = GPUImageAverageLuminanceThresholdFilter()
    avgLuminanceThresholdFilter.thresholdMultiplier = 0.67
    let adaptiveThresholdFilter = GPUImageAdaptiveThresholdFilter()
    adaptiveThresholdFilter.blurRadiusInPixels = 0.67
    let unsharpMaskFilter = GPUImageUnsharpMaskFilter()
    unsharpMaskFilter.blurRadiusInPixels = 4.0
    let stillImageFilter = GPUImageAdaptiveThresholdFilter()
    stillImageFilter.blurRadiusInPixels = 1.0
    let contrastFilter = GPUImageContrastFilter()
    contrastFilter.contrast = 0.75
    let brightnessFilter = GPUImageBrightnessFilter()
    brightnessFilter.brightness = -0.25
    //unsharpen
    var processingImage = unsharpMaskFilter.imageByFilteringImage(sourceImage)
    processingImage = contrastFilter.imageByFilteringImage(processingImage)
    processingImage = brightnessFilter.imageByFilteringImage(processingImage)
    //convert to binary black/white pixels
    processingImage = avgLuminanceThresholdFilter.imageByFilteringImage(processingImage)
    return processingImage
}
OCR:
let tesseract_eng = G8Tesseract()
tesseract_eng.language = "eng"
tesseract_eng.engineMode = .TesseractOnly
tesseract_eng.pageSegmentationMode = .Auto
tesseract_eng.maximumRecognitionTime = 60.0
tesseract_eng.setVariableValue("0123456789", forKey: "tessedit_char_whitelist")
tesseract_eng.image = prepareImage(image)
tesseract_eng.recognize()
OpenCV has some morphology methods that can fill the gaps between black pixels (like THIS or THIS). Pay particular attention to the morphological opening method; that should be the primary tool for solving this, but don't be afraid to combine it with dilation if opening alone doesn't help. I am not sure what software you use for image processing; if it has similar methods, try them out. Otherwise I would highly recommend installing OpenCV, which is free and has a huge set of very fast image-processing operations. You could also experiment a bit with the threshold values to find the balance between how many corners get cut off and how much of the shadow is removed (combined with the morphological operations, this should solve the issue for you).
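If it helps, this is roughly what those calls look like in OpenCV. I'm showing the Java binding purely for illustration (the same functions exist in the C++ and Python APIs), and the kernel size is something you'd have to tune:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class FillSegmentGaps {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load the already-binarized odometer crop as a single-channel image.
        Mat src = Imgcodecs.imread("odometer_binary.png", Imgcodecs.IMREAD_GRAYSCALE);

        // The structuring element size controls how wide a gap gets bridged.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));

        // Morphological opening (erosion followed by dilation). With dark digits on
        // a light background, the erosion step removes the thin bright gaps between
        // segments, joining them into solid strokes.
        Mat opened = new Mat();
        Imgproc.morphologyEx(src, opened, Imgproc.MORPH_OPEN, kernel);

        // If that is not enough, combine it with dilation. Note that dilation grows
        // the bright regions, so invert first if the digits are dark and you want
        // the digits themselves to grow.
        Mat dilated = new Mat();
        Imgproc.dilate(opened, dilated, kernel);

        Imgcodecs.imwrite("odometer_filled.png", dilated);
    }
}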
I am trying to port libpng/APNG to the Android platform, using libpng with the APNG patch to read animated PNG files.
My question is that I couldn't find any 'skip' method declared in png.h. What I want is to jump directly to a specific frame, but I cannot get a correct result unless I read from the beginning and call png_read_frame_head() and png_read_image() for every preceding frame.
Is there any way to jump to a specific frame by specifying its index, without reading all the frame info/data that comes before it?
The following code is from the APNG sample at http://littlesvr.ca/apng/tutorial/x57.html. You can see that it reads the APNG file in a loop, and it seems you have to call png_read_frame_head() and png_read_image() to keep the internal state in png_ptr_read and info_ptr_read updated. So if there is a way to simply set these two structs to the correct state for reading a specific frame, my question is solved.
for (count = 0; count < png_get_num_frames(png_ptr_read, info_ptr_read); count++)
{
    sprintf(filename, "extracted-%02d.png", count);
    newImage = fopen(filename, "wb");
    if (newImage == NULL)
        fatalError("couldn't create png for writing");
    writeSetup(newImage, &png_ptr_write, &info_ptr_write);
    if (setjmp(png_ptr_write->jmpbuf))
        fatalError("something didn't work, jump 2");
    png_read_frame_head(png_ptr_read, info_ptr_read);
    if (png_get_valid(png_ptr_read, info_ptr_read, PNG_INFO_fcTL))
    {
        png_get_next_frame_fcTL(png_ptr_read, info_ptr_read,
                                &next_frame_width, &next_frame_height,
                                &next_frame_x_offset, &next_frame_y_offset,
                                &next_frame_delay_num, &next_frame_delay_den,
                                &next_frame_dispose_op, &next_frame_blend_op);
    }
    else
    {
        /* the first frame doesn't have an fcTL so it's expected to be hidden,
         * but we'll extract it anyway */
        next_frame_width = png_get_image_width(png_ptr_read, info_ptr_read);
        next_frame_height = png_get_image_height(png_ptr_read, info_ptr_read);
    }
    writeSetup2(png_ptr_read, info_ptr_read, png_ptr_write, info_ptr_write,
                next_frame_width, next_frame_height);
    png_write_info(png_ptr_write, info_ptr_write);
    png_read_image(png_ptr_read, rowPointers);
    png_write_image(png_ptr_write, rowPointers);
    png_write_end(png_ptr_write, NULL);
    png_destroy_write_struct(&png_ptr_write, &info_ptr_write);
    fclose(newImage);
    printf("extracted frame %d into %s\n", count, filename);
}
You can't. libpng was designed to treat PNG data as a stream, so it decodes chunks sequentially, one by one. I'm not sure why you need to skip APNG frames; just like in video formats, a frame might be stored as "what changed since the previous frame" rather than as a full frame, so you might need the previous frame(s) anyway.
These code examples might be useful:
https://sourceforge.net/projects/apng/files/libpng/examples/
Note:
Please bear with me, this (imho) is not a duplicate of the dozen questions asking about undoing in paint/draw scenarios.
Background:
I've been developing an image processing application using Processing for Android, and now I'm trying to implement a simple, one-step undo/redo functionality.
My initial (undo-friendly) idea was to apply the adjustments to the downsampled preview image only, keep an array of adjustment actions, and apply them at save time to the original image. I had to scrap this idea for two reasons:
some of the actions take a few seconds to finish, and if we have a few of these, it will make the already slow saving process tediously slower.
some actions (e.g. color-noise reduction) produce drastically different (wrong) results when applied to the downsampled image instead of the full-sized image. But anyway, this is the less serious problem...
So I decided to go with storing the before/after images.
Problem:
Unfortunately buffering the images in memory is not an option because of memory limitations. So what I'm doing at the moment is saving the before/after images to internal storage.
But that creates a performance/quality dilemma:
JPEG is fast (~500 ms to save on my Xperia Arc S) but degrades the quality beyond acceptability after two or three iterations.
PNG is of course lossless, but it is super slow (~7000 ms to save), which makes it impractical.
BMP would probably be fast, but Android does not encode BMP (I think Processing for Android saves "file.bmp" as TIFF).
TIFF has somewhat acceptable performance (~1500 ms to save), but Android does not decode TIFF.
I also tried writing the raw pixel array to a file using this function:
void writeData(String filename, int[] data) {
  try {
    DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(openFileOutput(filename, Context.MODE_PRIVATE)));
    for (int i = 0; i < data.length; i++) {
      dos.writeInt(data[i]);
    }
    dos.close();
  }
  catch (IOException e) {
    e.printStackTrace();
  }
}
but it takes over 2000 ms to finish, so I gave up on it for now.
Questions:
Is there a faster way of writing/reading the data for this purpose?
...or should I go back to the initial idea and try to solve its problems as much as possible?
Any other suggestions?
Update:
I came up with this method to write the raw data:
void saveRAW2(String filename) {
  byte[] bytes = new byte[orig.pixels.length*3];
  orig.loadPixels(); // orig = my original PImage, duh!
  int index = 0;
  for (int i = 0; i < bytes.length; i++) {
    bytes[i++] = (byte)((orig.pixels[index] >> 16) & 0xff);
    bytes[i++] = (byte)((orig.pixels[index] >> 8) & 0xff);
    bytes[i] = (byte)((orig.pixels[index]) & 0xff);
    index++;
  }
  saveBytes(filename, bytes);
}
...and it takes less than 1000ms to finish.
It runs 3 times faster than that if I write the file to my SD card, but I guess I can't count on that being the same on every phone, right?
Anyways, I'm using this method to read the saved data back into orig.pixels:
void loadRAW(String filename) {
  byte[] bytes = loadBytes(filename);
  int index = 0;
  int count = bytes.length/3;
  for (int i = 0; i < count; i++) {
    orig.pixels[i] =
      0xFF000000 |
      (bytes[index++] & 0xff) << 16 |
      (bytes[index++] & 0xff) << 8 |
      (bytes[index++] & 0xff);
  }
  orig.updatePixels();
}
This takes ~1500ms to finish. Any ideas for optimizing that?
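One idea I haven't measured yet (so just a sketch): store all 4 bytes of each ARGB pixel instead of 3, so both directions become a single bulk copy through java.nio instead of a per-pixel loop (the file gets about a third larger, though):
import java.nio.ByteBuffer;

void saveRAW3(String filename) {
  orig.loadPixels();
  ByteBuffer buf = ByteBuffer.allocate(orig.pixels.length * 4);
  buf.asIntBuffer().put(orig.pixels);   // one bulk copy of the whole pixel array
  saveBytes(filename, buf.array());
}

void loadRAW3(String filename) {
  byte[] bytes = loadBytes(filename);
  orig.loadPixels();
  ByteBuffer.wrap(bytes).asIntBuffer().get(orig.pixels);   // one bulk copy back
  orig.updatePixels();
}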
I'd recommend finding out what the Android "scratch disk" area is, processing your images as tiles, and caching them on that scratch disk. This might be a bit slower than working purely in memory, but it means you can do your image editing without running into memory limitations, and (provided Android's SDK has a sensible API) writing the tiles out to a full file shouldn't take terribly long. That said, you've kind of moved from Processing to plain Java, so the question isn't really about Processing anymore, and my answer is probably not as good as one from someone who's intimately familiar with the Android SDK.
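For what it's worth, the "scratch disk" on Android is basically the per-app cache directory (Context.getCacheDir()), which the system may clear when space runs low. A rough sketch of caching tiles there (the helper names are mine, and how you obtain the Context from a Processing sketch depends on the Processing-for-Android version):
File writeTile(Context context, byte[] tileBytes, int index) throws IOException {
    // One raw file per tile in the app's private cache directory.
    File tile = new File(context.getCacheDir(), "undo_tile_" + index + ".raw");
    try (FileOutputStream out = new FileOutputStream(tile)) {
        out.write(tileBytes);   // a single bulk write per tile
    }
    return tile;
}

byte[] readTile(File tile) throws IOException {
    byte[] bytes = new byte[(int) tile.length()];
    try (FileInputStream in = new FileInputStream(tile)) {
        int off = 0;
        while (off < bytes.length) {
            int n = in.read(bytes, off, bytes.length - off);
            if (n < 0) throw new EOFException("tile file truncated");
            off += n;
        }
    }
    return bytes;
}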
I'm trying to make an EPUB reader.
I want to do pagination the way FBReader does.
I have the source code of FBReader, but I can't find where it implements pagination.
I have my own implementation of the other features; all I need from FBReader is the pagination.
Has anyone done something similar?
Thanks for taking the time to read this question.
PS: by pagination I mean splitting an HTML file into pages, depending on the screen size and the font size (language is also a consideration); when the font size changes, the page count changes too. And EPUB file content is in HTML format.
It is fascinating code. I would love to see a translation of the original student project (but I presume the original document is in Russian). As this is a port of a C++ project it has an interesting style of coding in places.
The app keeps track of where you are in the book by using paragraph cursors (ZLTextParagraphCursor). This is comparable to database cursors and record pagination. The class responsible for serving up the current page and calculating the number of pages is ZLTextView.
As epubs are reflowable documents and not page-oriented there isn't really a concrete definition of a page - it just depends on where in the document you happen to be looking (paragraph, word, character) and with what display settings.
As McLaren says, FBReader doesn't implement pagination: It uses the ZLibrary, which is available from the same website as FBReader.
The original code uses this to calculate the current page number:
size_t ZLTextView::pageNumber() const {
    if (textArea().isEmpty()) {
        return 0;
    }
    std::vector<size_t>::const_iterator i = nextBreakIterator();
    const size_t startIndex = (i != myTextBreaks.begin()) ? *(i - 1) : 0;
    const size_t endIndex = (i != myTextBreaks.end()) ? *i :
        textArea().model()->paragraphsNumber();
    return (myTextSize[endIndex] - myTextSize[startIndex]) / 2048 + 1;
}
The Java version uses this function to compute the page number:
private synchronized int computeTextPageNumber(int textSize) {
    if (myModel == null || myModel.getParagraphsNumber() == 0) {
        return 1;
    }
    final float factor = 1.0f / computeCharsPerPage();
    final float pages = textSize * factor;
    return Math.max((int)(pages + 1.0f - 0.5f * factor), 1);
}
This is located in org.geometerplus.zlibrary.text.view.TextView
It's so simplistic, though, that you might as well implement your own.
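The underlying idea is just "total characters divided by an estimate of characters per page", where the estimate comes from the text area and the font metrics, so a home-grown version could be as small as this (a rough sketch, not FBReader's actual code):
// totalChars: length of the plain text extracted from the (X)HTML content.
// viewWidth/viewHeight: text area in pixels; avgCharWidth/lineHeight: font metrics.
static int estimatePageCount(long totalChars,
                             int viewWidth, int viewHeight,
                             float avgCharWidth, float lineHeight) {
    float charsPerLine = viewWidth / avgCharWidth;
    float linesPerPage = viewHeight / lineHeight;
    float charsPerPage = Math.max(1f, charsPerLine * linesPerPage);
    // Recompute whenever the font size (and therefore the metrics) changes,
    // which is why the page count changes with the font size.
    return Math.max(1, (int) Math.ceil(totalChars / charsPerPage));
}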
The way I understood it, it uses 3 bitmaps: previous, current and next. The text is written, stored, and read across these 3 bitmaps. On top of that, they calculate per-paragraph data about how long each paragraph is, which drives the scroll indicator you see in the other examples. You can start reverse engineering from the BitmapManager class in the android.view package. That should explain everything about how they do their paging.