I am developing a peer-to-peer collaborative drawing app on Android using the AllJoyn framework, similar to Chalkboard.
I am able to implement collaborative chat among peers. Now I want to implement canvas sharing, where everyone will be able to draw on a single canvas in real time.
How should I start with the canvas? What would its data structure be? Is there a specific image object I need to handle? Do I need to use JSON? Do I have to store the pixel values in a 2D array?
I only need a black-and-white screen: a white background with black as the drawing color.
I just want to know the approach behind it. Any reference will be helpful.
Thanks.
Canvas is really a bitmap.
You add/change pixels on the bitmap using drawing commands.
To do collaborative drawing, you wouldn't share the pixel values between all users with each change.
That would create bottlenecks in serializing, transport and deserializing. It would be too slow to work.
Instead, share the latest drawing commands between all users with each change.
If user#1 draws a line from [20,20] to [100,100], just serialize that command that drew the line and share that with all users.
The serialization might look like this: "L 20,20 100,100".
If you want an efficient serialization structure, take a look at the way SVG does its path data--very efficient for transporting to many users.
All other users would listen for incoming commands. Upon receipt, they would deserialize user#1's line and have it automatically drawn on their own canvas.
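As a minimal sketch of that command-based approach (the class name and wire format here are illustrative, not part of AllJoyn), serializing and deserializing a line command in Java could look like:

```java
// Illustrative line-drawing command: serialized as "L x1,y1 x2,y2"
// for transport to peers, parsed back into a drawable command on receipt.
public class LineCommand {
    public final float x1, y1, x2, y2;

    public LineCommand(float x1, float y1, float x2, float y2) {
        this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
    }

    // Serialize in an SVG-path-like form for the wire.
    public String serialize() {
        return String.format("L %.0f,%.0f %.0f,%.0f", x1, y1, x2, y2);
    }

    // Parse an incoming command string, e.g. "L 20,20 100,100".
    public static LineCommand deserialize(String s) {
        String[] parts = s.trim().split("[ ,]+"); // ["L","20","20","100","100"]
        return new LineCommand(
                Float.parseFloat(parts[1]), Float.parseFloat(parts[2]),
                Float.parseFloat(parts[3]), Float.parseFloat(parts[4]));
    }
}
```

On receipt, each peer would replay the command on its own bitmap-backed canvas, e.g. `canvas.drawLine(cmd.x1, cmd.y1, cmd.x2, cmd.y2, blackPaint)`.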
I am developing an app that captures a business card using a custom Android camera. I then need to auto-crop the unwanted space and store the image. I am using OpenCV for this. All the examples I am seeing are in Python; I need it in native Android.
You can probably try something like this:
1) Get an edge map of the image (perform edge detection)
2) Find contours on the edge map. The outermost contour should correspond to the boundaries of your business card (under the assumption that the business card is photographed against a solid background). This will help you extract the business card from the image.
3) Once extracted you can store the image separately without the unwanted space.
OpenCV will help you with points 1, 2 and 3. Use something like Canny edge detection for point 1. The findContours function will come in handy for point 2. Point 3 is basic image manipulation, which I guess you don't need help with.
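Point 3 can be as simple as scanning a binary mask for the card's bounding box and cropping to it. A minimal pure-Java sketch, assuming steps 1-2 already produced a row-major mask where non-zero marks card pixels:

```java
// Sketch of step 3, assuming steps 1-2 produced a binary mask
// (non-zero = card pixel, 0 = background), stored row-major.
// Returns {left, top, right, bottom} of the tightest box around the card.
public static int[] boundingBox(int[] mask, int width, int height) {
    int left = width, top = height, right = -1, bottom = -1;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (mask[y * width + x] != 0) {
                if (x < left)   left = x;
                if (x > right)  right = x;
                if (y < top)    top = y;
                if (y > bottom) bottom = y;
            }
        }
    }
    return new int[] { left, top, right, bottom };
}
```

With the box in hand, `Bitmap.createBitmap(src, left, top, right - left + 1, bottom - top + 1)` gives you the cropped card. (In OpenCV itself, `Imgproc.boundingRect` on the outer contour does the equivalent.)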
This might not be the most precise answer out there, but the question isn't very precise either, so I guess it is alright.
I am building a drawing application in Android using canvas. I'm painting directly on the canvas without using an offscreen bitmap/canvas. Everything works fine on a 512 MB device, but on slower devices the application lags after a few strokes. The application also allows the user to add cliparts, backgrounds and flood fill.
For all these different objects I am using an ArrayList with a bean class to store each paint object, its style and its bitmap.
In onDraw(), I iterate over this ArrayList each time and draw everything stored in it. I provide undo and redo, so I cannot use the drawing cache.
In short, I need to improve overall performance of the application. How can I achieve that? Is the application structure proper or anything that I should change? I can post some code if needed.
Thanks in advance.
I'm planning to create a tablet app and would like some guidance.
I have pictures in SVG format like this one.
With SVG it is easy: you just change the fill parameter to a different color. But as I understand it, there is no easy/stable SVG processing library to use with libgdx. I still want to use SVG files to create/store images for my app.
What processing path would you recommend?
Is there an easy way to convert SVG paths/shapes to com.badlogic.gdx.math.Bezier or polygon objects and then draw them on screen and detect user input (a tap) inside these shapes?
Or should I use different objects/paths?
Shapes could be grouped together; for example, I want the two windows of a house to change color at once.
LibGDX gives you a lower-level way to do this type of rendering, but it doesn't offer an out-of-the-box way to render SVG. It really depends on whether you are looking for performance or simply want to draw basic shapes.
To simply render shapes you could use something like ShapeRenderer, which gives you an interface very close to the Java2D way of drawing things. It might be handy for quickly drawing some basic stuff.
If you are wanting to do some more robust version of rendering you will probably want to look into using a Mesh and working with shaders for OpenGL ES. You can find examples of these in the LibGDX tests as well as searching online for examples/tutorials.
If you want to turn the SVG into a texture, look at Pixmap; you can then create a Texture from it and render it with a SpriteBatch. You will need to write the pixels you want to color into the Pixmap yourself. However, doing it this way will generate an unmanaged texture (i.e. you will have to rebuild it when the user comes back to the app after pressing back or putting the device to sleep).
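On the conversion question: an SVG "C" segment is a cubic Bezier, so you can sample it into polygon points yourself and hit-test taps with the even-odd rule. A plain-Java sketch of that math (in libgdx you could instead use com.badlogic.gdx.math.Bezier and Polygon.contains):

```java
// Flatten a cubic Bezier (the curve behind SVG "C" path segments and
// com.badlogic.gdx.math.Bezier) into polygon points [x0,y0, x1,y1, ...].
public static float[] flattenCubic(float x0, float y0, float x1, float y1,
                                   float x2, float y2, float x3, float y3,
                                   int segments) {
    float[] pts = new float[(segments + 1) * 2];
    for (int i = 0; i <= segments; i++) {
        float t = (float) i / segments, u = 1 - t;
        // Cubic Bernstein basis evaluated at t.
        pts[2 * i]     = u*u*u*x0 + 3*u*u*t*x1 + 3*u*t*t*x2 + t*t*t*x3;
        pts[2 * i + 1] = u*u*u*y0 + 3*u*u*t*y1 + 3*u*t*t*y2 + t*t*t*y3;
    }
    return pts;
}

// Even-odd rule: count crossings of a rightward ray from (px,py).
public static boolean contains(float[] poly, float px, float py) {
    boolean inside = false;
    for (int i = 0, j = poly.length / 2 - 1; i < poly.length / 2; j = i++) {
        float xi = poly[2*i], yi = poly[2*i+1];
        float xj = poly[2*j], yj = poly[2*j+1];
        if ((yi > py) != (yj > py)
                && px < (xj - xi) * (py - yi) / (yj - yi) + xi) {
            inside = !inside;
        }
    }
    return inside;
}
```

Grouped shapes (the two windows) then just share one fill color variable and each answer the same containment test.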
I'm planning to write an app for Android which performs a simple cell counting. The method I'm planning to use is a type of Blob analysis.
The steps of my procedure would be;
Histogramming to identify the threshold values for the thresholding step.
Thresholding to create a binary image where cells are white and the background is black.
Filtering to remove noise and excess particles.
Particle (blob) analysis to count cells.
I got this sequence from this site where functions from the software IMAQ Vision are used to perform those steps.
I'm aware that on Android I can use OpenCV's similar functions to replicate the above procedure. But I would like to know whether I'd be able to implement the histogramming, thresholding and blob analysis myself, writing the required algorithms without calling API functions. Is that possible? And how hard would it be?
It is possible. From a PNG image (e.g. from disk or camera), you can generate a Bitmap object. The Bitmap gives you direct access to the pixel color values. You can also create new Bitmap objects based on raw data.
Then it is up to you to implement the algorithms. Creating a histogram and thresholding should be easy; filtering and blob analysis are more difficult. It depends on your exposure to algorithms and data structures; however, a hands-on approach is not bad either.
Just make sure to downscale large images (Bitmap can do that too). This saves memory (which can be critical on Android) and gives better results.
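For the two easy steps, here is a minimal pure-Java sketch (the method names are mine; on Android you would fill `gray` from Bitmap.getPixels() after converting each ARGB pixel to luminance):

```java
// Step 1: histogram of grayscale values (0-255).
public static int[] histogram(int[] gray) {
    int[] hist = new int[256];
    for (int v : gray) hist[v]++;   // count occurrences of each level
    return hist;
}

// Step 2: thresholding into a binary image (1 = cell, 0 = background).
// Assumes cells are brighter than the chosen threshold t.
public static int[] threshold(int[] gray, int t) {
    int[] bin = new int[gray.length];
    for (int i = 0; i < gray.length; i++) {
        bin[i] = gray[i] >= t ? 1 : 0;
    }
    return bin;
}
```

Blob counting then becomes a connected-component pass (a flood fill over the binary array): each fill started on a not-yet-visited 1-pixel is one cell.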
This is a very theoretical question about general knowledge.
First of all, I don't yet have a lot of understanding of OpenGL, so please forgive me:
The best way to load a 3D model into Android is using Wavefront .obj files, yes?
I downloaded a sample model for SketchUp (a robot model with a lot of parts), and the .obj file is 3 MB. I loaded it into a vector of strings (almost 100k lines), and the application's RAM usage grew by more than 15 MB! So I am a bit concerned about this whole method and approach.
Once I load the model, is there a simple way of rotating and moving it? Will it be a single object in OpenGL, or do I need to multiply all the thousands of vertices by a matrix?
Is there anything else I should add to my understanding?
I can't answer all of your questions, but:
3) Yes. You can combine the Android framework's onTouchEvent() functionality with OpenGL. In OpenGL you can rotate things very easily with simple glRotatef() calls (which rotate everything for you), where the provided angle varies based on your touch interaction.
EDIT::
2) Why are you loading it into Strings? I don't know model formats very well, but I parse many files. You should load the data into the smallest-size variables you can--for instance, an array of floats or shorts. I don't know your data, but that is the best way. Consider parsing in portions if you have memory issues.
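As a sketch of that parsing advice (the helper is illustrative): pull the Wavefront OBJ "v" lines straight into a packed float array instead of keeping 100k Strings around--three floats per vertex cost 12 bytes, versus a whole String object per line:

```java
// Parse Wavefront OBJ vertex-position lines ("v x y z") into a packed
// float array. In a real app you would stream the file line by line
// rather than holding all lines in memory at once.
public static float[] parseVertices(java.util.List<String> lines) {
    java.util.ArrayList<Float> tmp = new java.util.ArrayList<>();
    for (String line : lines) {
        if (line.startsWith("v ")) {            // "v " = vertex position
            String[] tok = line.trim().split("\\s+");
            tmp.add(Float.parseFloat(tok[1]));  // x
            tmp.add(Float.parseFloat(tok[2]));  // y
            tmp.add(Float.parseFloat(tok[3]));  // z
        }
    }
    float[] verts = new float[tmp.size()];
    for (int i = 0; i < verts.length; i++) verts[i] = tmp.get(i);
    return verts;
}
```

Wrap the result in a FloatBuffer for OpenGL; glRotatef then transforms the whole model on the GPU, so you never multiply individual vertices yourself.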