I'm a newbie with OpenGL ES, and I just want to open an image to serve as my texture later.
Since most OpenGL tutorials on the internet are aimed at desktop development, they simply open the image like: unsigned char *data = stbi_load("pic.jpg", &width, &height, &nrChannels, 0); and put the picture in the same folder. But I then realized that on Android, setting a correct path is not that simple. If I put the picture in the same folder as my cpp file, it will not be found at runtime, since the app is now on a real phone.
So the question is: is it possible to use stbi_load on Android, or should I find another way? BTW, I have to use JNI and write the OpenGL logic in C++.
I figured it out myself, in case someone else runs into the same issue: on Android you need to read the image from storage (e.g. the SD card), not from the drawable resources. Remember to grant the storage permission.
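An alternative that avoids the storage permission entirely is to ship the image in the APK's assets/ folder, copy it to app-internal storage once, and pass that absolute path through JNI so stbi_load can open it. A rough Java-side sketch, where the "native-lib" library name and the loadTexture native method are placeholders for your own JNI entry points:

    import android.content.Context;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;

    public class TextureAssets {
        static {
            System.loadLibrary("native-lib"); // placeholder library name
        }

        // Placeholder native method; its C++ side would call stbi_load(path, ...).
        private static native void loadTexture(String absolutePath);

        // Copies assets/pic.jpg into app-internal storage on first run
        // (no runtime permission needed) and hands the path to native code.
        public static void loadFromAssets(Context context) throws Exception {
            File out = new File(context.getFilesDir(), "pic.jpg");
            if (!out.exists()) {
                try (InputStream in = context.getAssets().open("pic.jpg");
                     FileOutputStream os = new FileOutputStream(out)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        os.write(buf, 0, n);
                    }
                }
            }
            loadTexture(out.getAbsolutePath());
        }
    }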
We may be starting an Android project in which, after recognizing an image with the camera, we need to display content generated in Unity.
The easy part would be to use WebGL to display it, but there is the problem of devices that do not support it directly. My question is: from Android (and later iOS), is it possible to download Unity content, then load and display it at runtime?
Is it possible that I will have to direct all the effort toward generating that content as a .jar and then use something like dependency injection to load it?
I already have a Unity scene in an activity, but of course it is defined at build time, not loaded at runtime.
Any help or guidance would be welcome.
Unity builds levels into the final runtime executable, so adding a downloaded 'scene' directly is not possible. The best way around this is to create a 'generator' scene which can accept input from a downloaded text file, such as JSON, and use that to render the level.
However, this method does assume that all the possible objects that can be rendered are already in your game as prefabs. If you want to pull images from the net to load into textures, the WWW class might get you started down the right path:
https://docs.unity3d.com/ScriptReference/WWW.LoadImageIntoTexture.html
I've built an application that uses Tesseract (v3.03 rc1) to identify some specific text strings. These are, unfortunately, printed in a custom font, which requires that I build my own traineddata file. I've built the application on both iOS (using https://github.com/gali8/Tesseract-OCR-iOS for inspiration) and Android (using https://github.com/rmtheis/tess-two/ for inspiration as well).
The workflow for both platforms is as follows (a rough Android-side sketch appears after the list):
I select a bounding box on the preview screen around the relevant text, and crop the image accordingly.
I use OpenCV to get a binary image (using OpenCV's adaptive threshold function with the same parameters on both platforms).
I pass this binary image to Tesseract. Both platforms (Android and iOS) use the same traineddata file.
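In code, the Android side of this looks roughly as follows; the threshold parameters, the whitelist, and the "custom" language name are placeholders for my real values:

    import android.graphics.Bitmap;
    import com.googlecode.tesseract.android.TessBaseAPI;
    import org.opencv.android.Utils;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    public class OcrPipeline {
        // dataPath is the folder whose tessdata/ subfolder holds custom.traineddata.
        public static String recognize(Bitmap cropped, String dataPath) {
            // 1. Convert the cropped preview frame to an OpenCV Mat and grayscale it.
            Mat src = new Mat();
            Utils.bitmapToMat(cropped, src);
            Mat gray = new Mat();
            Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGBA2GRAY);

            // 2. Binarize with an adaptive threshold (block size and constant are example values).
            Mat binary = new Mat();
            Imgproc.adaptiveThreshold(gray, binary, 255,
                    Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, 10);
            Bitmap binaryBitmap = Bitmap.createBitmap(binary.cols(), binary.rows(),
                    Bitmap.Config.ARGB_8888);
            Utils.matToBitmap(binary, binaryBitmap);

            // 3. Hand the binary image to Tesseract, with the settings described below.
            TessBaseAPI tess = new TessBaseAPI();
            tess.init(dataPath, "custom"); // loads tessdata/custom.traineddata
            tess.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789");
            tess.setVariable("load_system_dawg", "F");
            tess.setVariable("load_type_dawg", "F");
            tess.setImage(binaryBitmap);
            String text = tess.getUTF8Text();
            tess.end();
            return text;
        }
    }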
And yet, iOS recognizes the text strings perfectly, while Android keeps misidentifying certain characters (6s for Ss, As for Hs).
On both platforms, I use the same white list string, I disable load_type_dawg and load_system_dawg, and also choose to save the blob choices.
Has anyone encountered this kind of situation before? Am I missing a setting on Android that's automatically handled in iOS? Is there something particular about Android that hasn't crossed my mind?
Any thoughts or advice would be greatly appreciated!
So, after a lot of work, I found out what was wrong with my Android application (thankfully, it wasn't an issue with Tesseract at all). As I'm more familiar with iOS apps than Android, I wasn't sure how I could load the traineddata file into the application without requiring the user to have the file on their external storage device. I found inspiration in this project (http://www.codeproject.com/Tips/840623/Android-Character-Recognition), as they autoload the trained data file.
However, I misunderstood how it worked. I originally thought that the TessDataManager did a file lookup in the project's local tesseract/tessdata folder to get the trained data file (as I do on iOS). That's not what it does, though. Rather, it checks the internal file structure (data/data/projectname/files/tesseract/tessdata/traineddatafilegoeshere) to see if the file exists, and if it doesn't, it copies over the trained data file it keeps in the Resources/Raw directory. In my case, it defaulted to the eng file, so it never read my custom font file.
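In code, that copy step looks roughly like this; the R.raw.custom resource name and the custom.traineddata filename are placeholders for my real file:

    import android.content.Context;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;

    public class TessDataInstaller {
        // Ensures files/tesseract/tessdata/custom.traineddata exists, copying it
        // out of res/raw on first run, and returns the path that TessBaseAPI.init()
        // expects (the parent of the tessdata folder).
        public static String ensureTessData(Context context) throws Exception {
            File tessDir = new File(context.getFilesDir(), "tesseract/tessdata");
            tessDir.mkdirs();
            File trained = new File(tessDir, "custom.traineddata");
            if (!trained.exists()) {
                try (InputStream in = context.getResources()
                             .openRawResource(R.raw.custom); // placeholder resource name
                     FileOutputStream out = new FileOutputStream(trained)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                }
            }
            return tessDir.getParent();
        }
    }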
Hopefully this helps someone else having similar issues. Thanks to Robin and RmTheis for all of your help!
I'm making an illustrated instruction guide for how to use an app; it will be needed for Android/iPhone.
I'm not much into coding for Android, and I thought the client just needed the illustration, but he asks:
"We will need the illustration saved to a file that we can run on mobile devices (iPhone/Android) as well as the source code."
Isn't a jpg enough? Is there some additional code that you Android programmers are aware of?
No, there's nothing extra. In Android you can just use a Drawable, which can be any of a number of different file formats, including your jpeg. It may be worth having a look at Android Asset Studio: with this tool you get a nice zip file covering all the different screen densities, and if you keep the file structure that Asset Studio outputs, Android will do all the heavy lifting for you.
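For instance, once an image named instruction.png (name assumed) sits in the res/drawable-* folders from that zip, showing it takes only a couple of lines:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.ImageView;

    public class InstructionActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            ImageView view = new ImageView(this);
            // Android picks the right density variant from res/drawable-* automatically.
            view.setImageResource(R.drawable.instruction); // placeholder resource name
            setContentView(view);
        }
    }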
It might also help you to know something about 9-patches. This is how Android knows how to resize and stretch your image. Asset Studio has an option to set this as well.
I have a problem with an image for an Android game. The problem is not with the code, because the code I use is taken from a book (Beginning Android 4 Games Developer).
The problem is this: I know the format I have to use on Android is png, but I don't know which settings for this format I should use (like RGB565...), because if I simply use a png, the images don't look right when I run the game. So I need someone to explain which settings to use for images in Android games.
P.S. The software I use is Photoshop. If there is better software for this purpose, please tell me.
I think there is a strong misconception in your understanding of Android and how it handles graphics. You are not constrained to .png for nearly any of your development; .png and .9.png are only strictly enforced for managing drawable constants.
Android uses Java and can utilize nearly any graphical format. In particular, native support for .bmp, .png, and .jpg is present on every device and Android OS version. You may even create your graphics in real time, byte by byte.
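To tie this to the RGB565 setting you mentioned: that is not a property of the .png file itself but of how it is decoded into memory. You can ask the decoder for that config directly; a small sketch, where the resource id is a placeholder:

    import android.content.res.Resources;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    public class SpriteLoader {
        // Decodes a bundled image into a 16-bit RGB565 bitmap, which halves memory
        // versus the default ARGB_8888 at the cost of color depth and alpha.
        public static Bitmap loadRgb565(Resources res, int resId) {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inPreferredConfig = Bitmap.Config.RGB_565;
            return BitmapFactory.decodeResource(res, resId, opts);
        }
    }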
As for a good image editor, there are a number out there. I often use a combination of GIMP and Photoshop myself.
Hope this helps,
FuzzicalLogic
So, largely for debugging purposes, I want to be able to write out an image at arbitrary points in my code and look at it later. I figured this would be easiest if I just wrote my bitmap to a file and read it back later, but I cannot seem to figure out where to find the file after I write it, or how to open an image that is not in res/drawable with a corresponding handle in R.
You can use openFileOutput() and openFileInput(). These open data streams that point to files in your app's internal directory, and they are (as far as I know) the suggested way to handle files that your app creates.
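A rough sketch of the round trip (the debug.png filename is arbitrary):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class DebugImages {
        // Writes the bitmap as a PNG into the app's private files directory.
        public static void save(Context context, Bitmap bitmap) throws IOException {
            try (FileOutputStream out = context.openFileOutput("debug.png", Context.MODE_PRIVATE)) {
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
            }
        }

        // Reads it back later; throws FileNotFoundException if it was never written.
        public static Bitmap load(Context context) throws IOException {
            try (FileInputStream in = context.openFileInput("debug.png")) {
                return BitmapFactory.decodeStream(in);
            }
        }
    }

The file ends up under your app's internal directory (data/data/your.package.name/files/), which you can inspect with adb shell run-as on a debuggable build.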