I am a beginner in Android. I am working on porting FFmpeg to Android, and I am able to display the picture, but it looks really odd. I am providing links to the pictures; please advise me on what really went wrong in my case.
In the native code I call the sws_scale function to convert the image from its native format to RGB565, and I use this RGB565 frame to display it with a Canvas and Bitmap in the Java code.
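Roughly, the Java display side looks like this (a simplified sketch; the real variable names differ):

```java
import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.graphics.Canvas;

// Simplified sketch of the Java display side. The native code is assumed to
// fill 'frameBuffer' with exactly width * height * 2 bytes of RGB565 data;
// a size or stride mismatch here is one common cause of a skewed picture.
class FrameRenderer {
    private final Bitmap bitmap;

    FrameRenderer(int width, int height) {
        bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    }

    void drawFrame(Canvas canvas, ByteBuffer frameBuffer) {
        frameBuffer.rewind();
        bitmap.copyPixelsFromBuffer(frameBuffer);
        canvas.drawBitmap(bitmap, 0, 0, null);
    }
}
```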
I am guessing this is an interlacing problem, but I'm not sure. Need suggestions. Please help.
How are you calling FFmpeg? I definitely agree that it's an interlacing issue. Can you just pass deinterlace to it?
I need to get the frames of a video and do some modifications on them, like drawing something or writing some text. Then, on saving, I need the video with those modifications.
Please suggest the best way to do that. Any help is appreciated.
Please see the app below to understand my problem:
https://play.google.com/store/apps/details?id=com.techsmith.apps.coachseye.free
You can try INDE Media Pack - https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has transcoding/remuxing functionality in the MediaComposer class and several sample effects like Grayscale, TextOverlayEffect, etc. For example, the effect that overlays text: https://github.com/INDExOS/media-for-mobile/blob/master/Android/samples/effects/src/com/intel/inde/mp/effects/TextOverlayEffect.java. It can easily be extended with other effects.
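Whichever library handles the transcoding, the per-frame drawing itself is ordinary Canvas work. Here is a minimal sketch (this is not the INDE API; it just assumes each frame has already been decoded into a mutable Bitmap):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

// Sketch: draw a text overlay onto a single decoded frame.
// Assumes the frame is a mutable ARGB_8888 Bitmap.
class TextOverlay {
    static void apply(Bitmap frame, String text) {
        Canvas canvas = new Canvas(frame);           // draw directly into the frame
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.WHITE);
        paint.setTextSize(48f);
        canvas.drawText(text, 20f, frame.getHeight() - 20f, paint);
    }
}
```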
Given a JPEG-encoded byte array taken from a camera, I need to decode it and display its image (bitmap) in my application. From searching around, I've found that there are two primary ways to go about this: use an NDK JPEG decoding library or use BitmapFactory.decodeByteArray. This is for an experimental embedded device being developed that runs on Android with a built-in camera.
I would greatly prefer to develop the application with the SDK, not the NDK, if at all possible, but many people seem to handle this problem by going straight to the NDK, which bugged me a bit.
Is there some sort of inherent limitation in BitmapFactory.decodeByteArray that forces you to handle this problem by using libjpeg in the NDK? (Perhaps speed? Incompatibility?)
Performance isn't a big consideration, unless it takes, say, more than 45 seconds to decode and display the image.
This is an important decision I need to make upfront, so I'd appreciate any thoughtful answers. Thank you so much.
Here is a really good example and explanation of how you can decode images on the device efficiently without using the NDK: Displaying Bitmaps Efficiently. You have the option to decode the bitmap from a stream or a file, so it depends on your needs. But in most of my applications I use the same method and it works great, so I suggest you take a look at the SDK example.
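As a rough sketch of that approach (the two-pass decode from the guide; jpegData and the target size are placeholders):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Sketch: decode a camera JPEG byte[] with optional downsampling, as in the
// "Displaying Bitmaps Efficiently" guide. reqWidth/reqHeight are placeholders.
class JpegDecoder {
    static Bitmap decode(byte[] jpegData, int reqWidth, int reqHeight) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inJustDecodeBounds = true;                    // first pass: dimensions only
        BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, opts);

        int sample = 1;                                    // power-of-two scale factor
        while (opts.outWidth / (sample * 2) >= reqWidth
                && opts.outHeight / (sample * 2) >= reqHeight) {
            sample *= 2;
        }

        opts.inJustDecodeBounds = false;                   // second pass: real decode
        opts.inSampleSize = sample;
        return BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, opts);
    }
}
```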
Hope it helps.
I have an Android program which draws lines and text to a canvas. (These are all vector drawing operations.) Does anyone have any advice for exporting that canvas to a PDF? I've looked into changing the Bitmap.CompressFormat that the canvas is based upon, hoping there'd be a PDF (or some sort of vector) format, but no luck there.
My goal is to output some sort of Vector file suitable for printing.
I'd appreciate any advice. Thanks!
There is nothing in Android for this. You can take a shot at seeing if somebody has a PDF library for Java (e.g., iText) working on Android, but these libraries may be large.
A better solution may be for you to save in something simpler (e.g., SVG, an XML format) and have your server convert that to PDF or anything else desired.
That was very long ago. If anybody happens to find this question nowadays: Android has had this since KitKat, in the form of PrintedPdfDocument.
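A minimal sketch with the underlying android.graphics.pdf.PdfDocument class (API 19+); the page size and output file are placeholder values:

```java
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.pdf.PdfDocument;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Sketch: route the same Canvas drawing calls into a one-page PDF.
// Page size (points, roughly A4 at 72 dpi) and output file are placeholders.
class PdfExport {
    static void write(File outFile) throws IOException {
        PdfDocument doc = new PdfDocument();
        PdfDocument.PageInfo pageInfo =
                new PdfDocument.PageInfo.Builder(595, 842, 1).create();
        PdfDocument.Page page = doc.startPage(pageInfo);

        Canvas canvas = page.getCanvas();         // same Canvas API used on screen
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        canvas.drawLine(50, 50, 300, 50, paint);  // lines and text stay vector data
        canvas.drawText("Hello PDF", 50, 100, paint);

        doc.finishPage(page);
        doc.writeTo(new FileOutputStream(outFile));
        doc.close();
    }
}
```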
I've currently made an Android app that can display a live preview of the camera, but I'm looking for a way to perform live pixel manipulation (i.e., make the image grayscale, sepia-toned, etc.). As of yet I haven't found any code from anyone who's done this before.
Any help would be appreciated.
You could use the Camera.Parameters to set the appropriate effect. Read more about it here.
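For example, a sketch placed in your camera-setup code, assuming camera is an already-opened android.hardware.Camera instance (effect support varies by device):

```java
import android.hardware.Camera;
import java.util.List;

// Sketch: switch the live preview to a built-in color effect.
// 'camera' is assumed to be an already-opened Camera instance.
Camera.Parameters params = camera.getParameters();
List<String> effects = params.getSupportedColorEffects();
if (effects != null && effects.contains(Camera.Parameters.EFFECT_MONO)) {
    params.setColorEffect(Camera.Parameters.EFFECT_MONO);   // grayscale preview
    camera.setParameters(params);
}
```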
If you want to do the manipulation yourself, use the camera's onPreviewFrame callback. This gives you the raw byte[] in YUV format (that's the default; you can set it to other formats too. Look here for setting the preview format).
Now you can perform any pixel manipulation you want on the byte[].
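For example, a sketch that builds a grayscale Bitmap from the Y plane, assuming the default NV21 preview format:

```java
import android.graphics.Bitmap;
import android.hardware.Camera;

// Sketch: in NV21 (the default preview format), the first width * height bytes
// are the Y (luminance) plane, which is already a grayscale view of the frame.
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        Camera.Size size = cam.getParameters().getPreviewSize();
        int[] pixels = new int[size.width * size.height];
        for (int i = 0; i < pixels.length; i++) {
            int y = data[i] & 0xFF;                              // luminance only
            pixels[i] = 0xFF000000 | (y << 16) | (y << 8) | y;
        }
        Bitmap gray = Bitmap.createBitmap(
                pixels, size.width, size.height, Bitmap.Config.ARGB_8888);
        // draw 'gray' to an ImageView or a Canvas here
    }
});
```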
Hope this helps!
I have answered this question here. In short, this tutorial gives you probably the best way to achieve this (by using OpenCV, a free computer vision library). You can download their example application from their website as well.
Long-time reader, never posted until now.
I'm having some trouble with Android. I'm implementing a library called JJIL, an open-source imaging library.
My problem is this: I need to run some analysis on an image, and to do so I need to have it in jjil.core.image format. Once those processes are complete, I need to convert the changed image from jjil.core.image to java.awt.image.
I can't seem to find a method of doing this. Does anyone have any ideas, or any experience with this?
I would be grateful for any help.
Danny
You should let others know what you found... probably you found that android.graphics.Bitmap has a getPixel(x,y) method that returns an int representing the ARGB value?
http://developer.android.com/reference/android/graphics/Bitmap.html
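A rough sketch of working with those pixel values directly, here doing a simple binary threshold (the 128 cutoff is an arbitrary placeholder):

```java
import android.graphics.Bitmap;

// Sketch: manual binary threshold using getPixel()/setPixel().
// The cutoff value (128) is an arbitrary placeholder.
class Threshold {
    static Bitmap apply(Bitmap source) {
        Bitmap out = source.copy(Bitmap.Config.ARGB_8888, true);  // mutable copy
        for (int x = 0; x < out.getWidth(); x++) {
            for (int y = 0; y < out.getHeight(); y++) {
                int argb = out.getPixel(x, y);
                int r = (argb >> 16) & 0xFF;
                int g = (argb >> 8) & 0xFF;
                int b = argb & 0xFF;
                int v = (r + g + b) / 3 >= 128 ? 255 : 0;          // average luminance
                out.setPixel(x, y, 0xFF000000 | (v << 16) | (v << 8) | v);
            }
        }
        return out;
    }
}
```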
I have done some serious research into this and have found that I can extract the RGB values of a bitmap image with a similar method.
But the thresholding I was using the previous library for is now not possible, because I have had to abandon the use of that library.
Is there a way to use colour matrices to threshold the image? Or, better still, is there a built-in method that would do this automatically?
Thank you for any help.
Danny