As part of my project, I need to plot 2D and 3D functions in Android using Android Studio. I know how to plot 2D functions, but I'm struggling with 3D functions.
What is the best way to plot 3D functions? What do I need and where do I start?
I'm not looking for code or external libraries that do all the work, I just need to know what I need to learn to be able to do it myself.
Thanks in advance.
Since you already understand 2D and want to advance to 3D, there's a simple (non-optimal) method:
Decide on how much z depth you desire:
E.g.: currently your limits for x and y in your drawing functions are 1920 and 1080 (or even 4096x4096). If you want to save memory and accept things being a bit low resolution, use a size of 1920x1080x1000. That's going to use 1000x more memory and has the potential to increase the drawing time of some calls by 1000 times, but that's the cost of 3D.
A more practical limit is a matrix of 8192x8192x16384, but be aware that video games at that resolution need 4-6 GB graphics cards to work half decently (more being a bit better), so you'll be chewing up a lot of main memory starting at that size.
It's easy enough to implement a smaller drawing space and simply increase your allocation and limit variables later; not only does that test that future increases will go smoothly, but it also lets everything run faster while you're ironing out the bugs.
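A minimal sketch of such an allocation, assuming one byte per voxel (the class and variable names here are illustrative, not from any framework):

```java
// Minimal sketch: a flat byte array indexed as a 3D voxel buffer.
// Start with small limits while debugging and raise them later.
public class VoxelBuffer {
    final int xLimit, yLimit, zLimit;
    final byte[] voxels; // one intensity/colour byte per voxel

    VoxelBuffer(int xLimit, int yLimit, int zLimit) {
        this.xLimit = xLimit;
        this.yLimit = yLimit;
        this.zLimit = zLimit;
        // e.g. new VoxelBuffer(480, 270, 100) while testing
        this.voxels = new byte[xLimit * yLimit * zLimit];
    }

    int index(int x, int y, int z) {
        return (z * yLimit + y) * xLimit + x;
    }
}
```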
Add a 3rd dimension to the functions:
E.g.: instead of a function that is simply line_draw(x,y), change it to line_draw(x,y,z), and use the global variable z_limit (or whatever you decide to name it) to test that z doesn't exceed the limit.
You'll need to decide whether objects at the maximum distance are drawn as a dot or not visible at all. While testing, clamping all z values past the limit to the limit value (thus making them a visible dot) is useful. For the finished product, once a point goes past the limit you're implementing, it's best that it isn't visible.
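A sketch of that clamp-versus-cull decision, reusing the hypothetical VoxelBuffer above and a debug flag:

```java
// Sketch: plot one 3D point. While debugging, clamp z past the limit
// so the point stays visible as a dot; in the finished product, cull it.
void plot(VoxelBuffer buf, int x, int y, int z, byte colour, boolean debug) {
    if (x < 0 || x >= buf.xLimit || y < 0 || y >= buf.yLimit || z < 0) return;
    if (z >= buf.zLimit) {
        if (!debug) return;     // finished product: past the limit, invisible
        z = buf.zLimit - 1;     // testing: clamp so it shows as a visible dot
    }
    buf.voxels[buf.index(x, y, z)] = colour;
}
```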
Start by allocating the drawing buffer and implementing a single function first; there's no point (and possibly great frustration) in changing everything and hoping it's just going to work. It should, but if it doesn't, you'll have a lot on your plate if there's a common fault in every function.
Once you have this 3D buffer filled with an image (start with just a few 3D lines, such as a full-screen-sized "X" and "+"), you draw to your 2D screen (x, y) by starting at the largest value of z first (e.g. z=1000). Once you finish that layer, decrement z and continue, drawing each layer until you reach zero, the objects closest to you.
That's the simplest (and slowest) way to make certain that closest objects obscure the furthest objects.
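A back-to-front flatten of the buffer onto a 2D screen array might look like this sketch (zero is treated as an empty voxel):

```java
// Sketch: painter's algorithm by layer. Drawing the farthest layer
// first means nearer voxels overwrite (obscure) farther ones.
void flatten(VoxelBuffer buf, byte[] screen /* xLimit * yLimit */) {
    for (int z = buf.zLimit - 1; z >= 0; z--) {
        for (int y = 0; y < buf.yLimit; y++) {
            for (int x = 0; x < buf.xLimit; x++) {
                byte c = buf.voxels[buf.index(x, y, z)];
                if (c != 0) {                       // 0 = empty voxel
                    screen[y * buf.xLimit + x] = c;
                }
            }
        }
    }
}
```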
Now, does it look OK? You don't want distant objects to be the same size as (or larger than) the closest objects; make certain that you scale them with distance.
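One simple way to do that scaling is a perspective divide. This sketch assumes a made-up focal length f and a viewer at z = 0, with z increasing into the screen:

```java
// Sketch: project a 3D point to 2D with a perspective divide so that
// distant points shrink toward the screen centre (cx, cy). Assumes z >= 0.
int[] project(int x, int y, int z, double f, int cx, int cy) {
    double scale = f / (f + z);   // z = 0 gives 1.0; larger z gives smaller
    int sx = cx + (int) Math.round((x - cx) * scale);
    int sy = cy + (int) Math.round((y - cy) * scale);
    return new int[] { sx, sy };
}
```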
The reason to choose numbers such as 8192 is that, after writing your functions in C (or whichever language you choose), you'll want to optimize them with several versions each, written in assembly language and optimized for specific CPU and GPU architectures. Without specifically optimized versions, everything will be extremely slow.
I understand that you don't want to use a library, but looking at several should give you an idea of the work involved and how you might implement your own. No need to copy; improve instead.
There are similar questions and answers that might fill in the blanks:
Reddit - "I want to create a 3D engine from scratch. Where do I start?"
Davrous - "Tutorial series: learning how to write a 3D soft engine from scratch in C#, TypeScript or JavaScript"
GameDev.StackExchange - "How to write my own 3-D graphics library for Windows? [closed]"
Related
What is a suggested implementation approach for a real time scrolling raster plot on Android?
I'm not looking for a full source code dump or anything, just some implementation guidance or an outline of the "what" and "how".
what: Should I use built-in Android components for drawing, or go straight to OpenGL ES 2? Or maybe something else I haven't heard of. This is my first bout with graphics of any sort, but I'm not afraid to get a little dirty with OpenGL.
how: Given a certain set of drawing components how would I approach implementation? I feel like the plot is basically a texture that needs updating and translating.
Background
I need to design an Android application that, as part of its functionality, displays a real-time scrolling raster plot (i.e. a spectrogram or waterfall plot). The data will first come out of libUSB and pass through native C++, where signal processing will happen. Then, I assume, the plotting can happen either in C++ or Kotlin, depending on what is easier and whether passing the data over the JNI is a big enough bottleneck or not.
My main concern is drawing the base raster itself in real time and not so much extra things such as zooming, axes, or other added functionality. I'm trying to start simple.
Constraints
I'm limited to free software.
Platform: Android 7.0+ on a modern device
GPU hardware acceleration is preferred, as the CPU will be doing a good amount of number crunching to bring streaming data to the plot.
Thanks in advance!
Drawing on Android itself is a herculean task. My requirement now is to see how robustly I can draw at least 10 million points with different intensity levels.
Some methods I came across:
Android draws with Canvas and Bitmaps
SurfaceView with OpenGL
Using libGDX, the fastest drawing library
Custom view to refresh & update automatically
What is the best method to go about it? If I need to draw 10 million or more points, maybe on a static image, on Android, how can I improve it without degrading performance? Every second I need to refresh and draw another 10 million points. Is it possible, or is Android even capable of such a task?
As your question states 10 million points per second, I understand that you want them in real time, so OpenGL is the way to go, leaving you with options 2, 3 and 4.
You would definitely need to batch those calls.
You can think about using point sprites to reduce the amount of data you need to transfer to the GPU.
Android as an OS is capable of anything your machine can support. Your specific device may or may not have performance issues.
Don't optimize prematurely; try option 3 (libGDX) first. It's the easiest to set up to achieve your task. If it isn't performant enough, I'd think about rolling my own OpenGL-based solution.
https://gamedev.stackexchange.com/questions/11095/opengl-es-2-0-point-sprites-size
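As a rough illustration of batching with point sprites on ES 2.0 (plain GLES20 rather than libGDX; the shader strings and names are illustrative, and compiling/linking them into a program is omitted):

```java
import android.opengl.GLES20;
import java.nio.FloatBuffer;

// Sketch: draw N points in a single call instead of one call per point.
// gl_PointSize in the vertex shader sets the sprite size.
public class PointCloud {
    static final String VS =
        "attribute vec2 aPos;\n" +
        "void main() {\n" +
        "  gl_Position = vec4(aPos, 0.0, 1.0);\n" +
        "  gl_PointSize = 2.0;\n" +
        "}\n";
    static final String FS =
        "precision mediump float;\n" +
        "void main() { gl_FragColor = vec4(1.0); }\n";

    // 'program' is the compiled/linked VS+FS pair (setup not shown)
    void draw(int program, FloatBuffer positions, int count) {
        GLES20.glUseProgram(program);
        int loc = GLES20.glGetAttribLocation(program, "aPos");
        GLES20.glEnableVertexAttribArray(loc);
        GLES20.glVertexAttribPointer(loc, 2, GLES20.GL_FLOAT, false, 0, positions);
        GLES20.glDrawArrays(GLES20.GL_POINTS, 0, count); // one batched call
    }
}
```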
I need to detect objects in a scene (on an iPhone and Android). The environment is constrained in a way that should make the problem easier and more accurate:
the environment is small and known... users are exploring a single room or small outdoor area that I can take pictures of ahead of time to "train" or constrain the algorithm
the user's location within the space is often limited... even when the space is large, the user might be confined to specific paths within the space
the objects being detected are relatively static... they are part of the environment and don't move
BUT, making the problem harder:
I can't modify the environment by placing markers on objects, so I need to recognize the objects themselves
The objects are pretty similar looking, so we might have to use the surrounding scene as input, not just the individual items
For instance, imagine walking through a historic cemetery along a path (you're not allowed to walk on the grass). When a user points their phone at a headstone, I'd like to be able to identify the headstone and estimate where the user is relative to the headstone (so I can estimate the user's location on the path). Many of the headstones are pretty similar looking if you're looking at just the headstone. Ahead of time I can walk that path and take multiple pictures of the objects from a variety of angles.
Is there an algorithm or library suited to this type of object detection problem?
This is something you might be looking for: http://3dar.us/
They have their own library where you can get something close to what you want (locations of objects, your location, the distance, etc.). The only caveat is that it's iPhone-only right now. Good luck in your search!
If the surrounding scenes are sufficiently different from each other you might be able to differentiate between scenes using a simple fast technique like histogram matching. This could be used to determine which scene you are in, and narrow the search set. If you can distinguish the scenes, you can then switch to an object-detection mode that looks for a specific object expected in a specific scene. I imagine if the object is static and well documented you might be able to search against a pre-compiled set of the most recognisable feature descriptors, determine relative pose from them etc. The approach of PTAMM is broadly analogous to this (determine the scene, load the scene's feature points, track in the current scene).
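To illustrate the histogram-matching step, a sketch using OpenCV's Java bindings (the 64-bin grayscale histogram and correlation metric are assumed choices, not requirements):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.Arrays;

// Sketch: compare a query frame's histogram against stored per-scene
// histograms to guess which scene the user is currently in.
public class SceneMatcher {
    static Mat histogram(Mat gray) {
        Mat hist = new Mat();
        Imgproc.calcHist(Arrays.asList(gray), new MatOfInt(0), new Mat(),
                hist, new MatOfInt(64), new MatOfFloat(0f, 256f));
        Core.normalize(hist, hist, 0, 1, Core.NORM_MINMAX);
        return hist;
    }

    static int bestScene(Mat queryGray, Mat[] sceneHists) {
        Mat q = histogram(queryGray);
        int best = -1;
        double bestScore = -1;
        for (int i = 0; i < sceneHists.length; i++) {
            // correlation: higher means more similar
            double s = Imgproc.compareHist(q, sceneHists[i], Imgproc.HISTCMP_CORREL);
            if (s > bestScore) { bestScore = s; best = i; }
        }
        return best;
    }
}
```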
If your example (matching headstones) is what you're actually attempting, the problem becomes a lot more difficult (I assume that, at least superficially, most headstones and backgrounds will be very similar in geometry, colour, etc.). The path constraint means you may be able to narrow your search set according to bearing (unless all the headstones face the same direction). After that, you'll have to do the best you can with the remaining distinguishing features (text?).
I have an application where I want to track 2 objects at a time that are rather small in the picture.
This application should be running on Android and iPhone, so the algorithm should be efficient.
For my customer it is perfectly fine if we deliver, along with the software, some patterns that are attached to the objects to be tracked, so that there's a well-recognizable target.
This means that I can make up a pattern on my own.
As I am not that much into image processing yet, I don't know which objects are easiest to recognize in a picture even if they are rather small.
Color is also possible, although processing several planes separately is not desired because of the generated overhead.
Thank you for any advice!!
Best,
guitarflow
If I get this straight, your object should:
Be printable on an A4
Be recognizable up to 4 meters
Rotational invariance is not so important (I'm making the assumption that the user will hold the phone +/- upright)
I recommend printing a large checkerboard and using a combination of color matching and corner detection. Try different combinations to see what's faster and more robust at different distances.
Color: if you only want to work on one channel, you can print in red/green/blue*, and then work only on that respective channel. This will already filter a lot and increase contrast "for free".
Otherwise, a histogram backprojection is in my experience quite fast. See here.
Also, let's say you have only 4 squares with RGB+black (see image): it would be easy to get all red contours, then check whether each has the correct neighbouring colors, a patch of blue to its right and a patch of green below it, both of roughly the same area. This alone might be robust enough, and it is equivalent to working on one channel, since each step only accesses one specific channel (search for contours in red, check right in blue, check below in green).
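A sketch of that red-contour-plus-neighbour check with OpenCV's Java bindings (the thresholds and the offsets to the expected blue/green patches are assumptions to tune, not fixed values):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

// Sketch: find red blobs, then confirm each candidate by sampling the
// expected blue patch to its right and green patch below it.
public class PatternFinder {
    static List<Rect> findCandidates(Mat bgr) {
        List<Mat> ch = new ArrayList<>();
        Core.split(bgr, ch);                       // 0 = blue, 1 = green, 2 = red
        Mat redMask = new Mat();
        Imgproc.threshold(ch.get(2), redMask, 200, 255, Imgproc.THRESH_BINARY);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(redMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> hits = new ArrayList<>();
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            // sample the centres of where blue (right) and green (below) should be
            if (bright(ch.get(0), r.x + 3 * r.width / 2, r.y + r.height / 2)
                    && bright(ch.get(1), r.x + r.width / 2, r.y + 3 * r.height / 2)) {
                hits.add(r);
            }
        }
        return hits;
    }

    static boolean bright(Mat channel, int x, int y) {
        if (x < 0 || y < 0 || x >= channel.cols() || y >= channel.rows()) return false;
        return channel.get(y, x)[0] > 200;
    }
}
```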
If you're getting a lot of false positives, you can then use corners to filter your hits. In the example image you already have 9 corners, in fact even more if you separate channels, and if that isn't enough you can make a true checkerboard with several squares in order to have more corners. It will probably be sufficient to check how many corners are detected in the ROI in order to reject false positives; otherwise, you can also check that the spacing between detected corners in the x and y directions is uniform (i.e., they form a grid).
Corners: Detecting corners has been greatly explored and there are several methods here. I don't know how efficient each one is, but they are fast enough, and after you've reduced the ROIs based on color, this should not be an issue.
Perhaps the simplest is to erode/dilate with a cross-shaped kernel to find corners. See here.
You'll want to threshold the image first to create a binary map, probably based on color as mentioned above.
Other corner detectors, such as the Harris detector, are well documented.
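For the corner-count filter mentioned above, a sketch using OpenCV's goodFeaturesToTrack (the corner count, quality level, and minimum distance are assumed starting values):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

// Sketch: reject a candidate ROI unless it contains enough corners
// to plausibly be the printed checkerboard pattern.
public class CornerFilter {
    static boolean looksLikePattern(Mat gray, Rect roi, int minCorners) {
        Mat patch = gray.submat(roi);
        MatOfPoint corners = new MatOfPoint();
        // max 50 corners, quality 0.05, min distance 5 px (assumed values)
        Imgproc.goodFeaturesToTrack(patch, corners, 50, 0.05, 5);
        return corners.total() >= minCorners; // e.g. 9 for the 4-square pattern
    }
}
```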
Oh, and I don't recommend using Haar classifiers. They seem unnecessarily complicated and not so fast (though very robust for complex objects, i.e. if you can't use your own pattern), not to mention the huge amount of work required for training.
Haar training is your friend, mate.
This tutorial should get you started: http://note.sonots.com/SciSoftware/haartraining.html
Basically, you train something called a classifier based on sample images (2,000 or so of the object you want to track). OpenCV already has the tools required to build these classifiers, and functions in the library to detect objects.
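Once the classifier is trained, the detection side in OpenCV's Java bindings is short. A sketch (the cascade file name is hypothetical):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;

// Sketch: load a trained cascade once and detect object instances per frame.
public class HaarDetector {
    // "my_object_cascade.xml" is a placeholder for your trained classifier
    private final CascadeClassifier cascade =
            new CascadeClassifier("my_object_cascade.xml");

    public Rect[] detect(Mat grayFrame) {
        MatOfRect found = new MatOfRect();
        cascade.detectMultiScale(grayFrame, found);
        return found.toArray();
    }
}
```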
I'm trying to put a particle system together in Android, using OpenGL. I want a few thousand particles, most of which will probably be offscreen at any given time. They're fairly simple particles visually, and my world is 2D, but they will be moving and changing colour (not size; they're 2x2), and I need to be able to add and remove them.
I currently have an array which I iterate through, handling velocity changes, managing lifecycles (killing old ones, adding new ones), and plotting them using glDrawArrays. What OpenGL is pointing at for this call, though, is a single vertex; I glTranslatex to the relevant co-ords for each particle I want to plot, one at a time, set the colour with glColor4x, then glDrawArrays it. It works, but it's a bit slow and only works for a few hundred particles. I'm handling the clipping myself.
I've written a system to support static particles, which I load into a vertex/colour array and plot using glDrawArrays, but this approach only seems suitable for particles which never change relative location (i.e. I move all of them together using glTranslate) or colour, and where I don't need to add/remove particles. A few tests on my phone (HTC Desire) suggest that altering the contents of those arrays (which are ByteBuffers, pointed to by OpenGL) is extremely slow.
Perhaps there's some way of writing to the screen manually with the CPU. If I'm just plotting 1x1/2x2 dots on the screen, and I'm purely interested in writing rather than any blending/antialiasing, is this an option? Would it be quicker than whatever OpenGL is doing?
(200 or so particles on a 1 GHz machine with megs of RAM. This is way slower than I was getting 20 years ago on a 7 MHz machine with <500 KB of RAM! I appreciate I'm using Java here, but surely there must be a better solution. Do I have to use the NDK to get the power of C++, or is what I'm after possible?)
I've been hoping somebody might answer this definitively, as I'll be needing particles on Android myself. (I'm working in C++, though; currently using glDrawArrays(), but I haven't pushed particles to the limit yet.)
I found this thread on gamedev.stackexchange.com (not Android-specific), and nobody can agree on the best approach there, but you might want to try a few things out and see for yourself.
I was going to suggest glDrawArrays(GL_POINTS, ...) with glPointSize(), but the guy asking the question there seemed unhappy with it.
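For reference, the batched version of that suggestion under ES 1.x (matching the fixed-function calls in the question; the buffer layout is an assumption):

```java
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

// Sketch: draw all particles in one glDrawArrays(GL_POINTS) call, with
// per-particle positions and colours supplied as client-side buffers.
public class ParticleRenderer {
    public void draw(GL10 gl, FloatBuffer xy, FloatBuffer rgba, int count) {
        gl.glPointSize(2f);                            // 2x2 particles
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, xy);   // x, y per particle
        gl.glColorPointer(4, GL10.GL_FLOAT, 0, rgba);  // r, g, b, a per particle
        gl.glDrawArrays(GL10.GL_POINTS, 0, count);     // one call for everything
        gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}
```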
Let us know if you find a good solution!