I think some of you may have heard about a new image format, BPG. Can I handle it on Android? Any ideas?
PS: BPG is a really interesting format; for example, just check out the difference with jpeg
UPDATE: more examples of the difference here and here
The BPG format is not supported by Android natively.
I've made a small application for Android that decompresses BPG images. You can have a look here: https://github.com/alexandruc/android-bpg, maybe you can reuse some of the code.
For use in the browser, there is a JavaScript decoder available in the source package.
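If you go the JNI route on Android, the Java side might look roughly like the sketch below. This is only an illustration: the nativeDecodeBpg method and the bpgdecoder library name are hypothetical stand-ins for whatever wrapper around libbpg you build or reuse (for example, based on the android-bpg project above); only Bitmap.createBitmap is the standard Android API.

```java
import android.graphics.Bitmap;

public final class BpgDecoder {

    static {
        // Hypothetical name for a JNI wrapper around libbpg.
        System.loadLibrary("bpgdecoder");
    }

    // Hypothetical native method: decodes a BPG byte stream into one ARGB int
    // per pixel and writes the image dimensions into outSize[0]/outSize[1].
    private static native int[] nativeDecodeBpg(byte[] bpgData, int[] outSize);

    /** Decodes BPG data into a Bitmap that can be shown in an ImageView. */
    public static Bitmap decode(byte[] bpgData) {
        int[] size = new int[2];
        int[] pixels = nativeDecodeBpg(bpgData, size);
        // Bitmap.createBitmap(int[], int, int, Config) is a real Android API.
        return Bitmap.createBitmap(pixels, size[0], size[1], Bitmap.Config.ARGB_8888);
    }
}
```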
I don't think this format is supported yet, according to the Supported Media Formats documentation.
But you should give it a try.
Note:
If you meant browser support, you should update your question to let everybody know.
Edit:
After some tests, I couldn't manage to make it work, even on Lollipop.
Conclusion: it is not supported yet.
I am trying to use a custom TensorFlow.js model in React Native using the React Native CLI.
It seems that @tensorflow/tfjs-react-native is not maintained anymore. When I try to install it based on the instructions on the official GitHub page, I run into many dependency conflicts that are impossible to resolve, which effectively renders the package useless.
Also, tflite is not an option either, because of the nature of my model (the details are irrelevant, though I would gladly explain them if it would help solve my problem). But even if I wanted to use that format, I doubt it would work, because the available package for it also looks deprecated, considering it was last updated around three years ago.
And for the same reason (the model's nature), despite having a fully updated package, ONNX is currently not an option either.
So that leaves me with TensorFlow.js. I should mention that I am aware of the hardware limitations involved in using it too.
So my question is: considering the points above, is there a way to use TensorFlow.js (meaning this package) in React Native and load a custom model to make predictions, or should I just give up? If there is a way, please share a simple working piece of code.
Also, my model is a Keras model saved in TensorFlow.js format. Finally, I also tried loading a model from a URL, but had no luck there either (note that I am not implying it is impossible, merely that I was not able to do it; it would be awesome if you could tell me a way to do that too, if possible).
Thanks in advance for your time. If anything is unclear, tell me and I will try to elaborate.
I want to open a .dwg file in my own Android app.
Does anyone know a good development tool that can help me?
I have been trying to find one, but I am not getting anything related to it. I think very few people work with this format.
Also, I want to know how apps that open CAD files like this actually work.
The main problem here is that .dwg files come in many different versions, and more importantly the format is proprietary and apparently not well documented, or not documented at all. And of course, it's not simple data we are talking about, so good luck reverse-engineering the format yourself.
A look at the Wikipedia page for .dwg seems to give some interesting information for your project:
There is already an open-source reader for this file format, namely LibDWG - free access to DWG
(...)
This is a library to allow reading data from a DWG file. That's a very important acquisition, which may greatly improve the ability of the free software community to develop more features in the field of computer technical drawing (CAD).
The DWG structure is very complicated; it seems to be crafted so that no one can easily understand it. That's a strong reason not to use it, and that's also why we do not provide a writing feature in the library. One should use LibDWG mainly to read such files, filtering them into some other format that is free and usable.
(...)
I think this is the development tool you wanted.
I found this answer, which gave me the idea that, instead of using the compiled TensorFlow graph, you might be able to use Kivy on your Android phone. That way you could talk to the TensorFlow graph directly using python-for-android.
A possible advantage would be training/adapting the model on the fly. As far as I know, otherwise you can only use the final trained model (but this is currently unanswered on Stack Overflow). Also, cross-compiling to Windows Phone might become possible, which it currently isn't (see here).
I don't know the technical limitations. Can anyone confirm that this is possible, and perhaps say what would be necessary?
Although Windows Phone could be a possibility in the future, at the moment almost no one cares, as there isn't much interest in porting it. However, there is work in progress on using ANGLE to translate OpenGL to DirectX, so it could become possible later. There's still that peculiar app packaging, though, so it will eat a lot of time.
Still, I think it might be possible to use one of the unofficial APK-to-Windows-Phone-app converters. Regarding TensorFlow: to me it looks like only a recipe is missing, so try writing one. :P
On iPhone it is possible to have a local substitution cache, substituting web content at load time.
I've done some searching, but I can't seem to find anything similar on Android. Is there an alternative for doing this on the Android platform?
Stefan
Depending on the API level you are targeting, HttpResponseCache may be what you are looking for. There is some background information here.
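A minimal sketch of installing it, following the pattern from the Android documentation (the cache directory name and the 10 MiB size here are arbitrary choices), could look like this:

```java
import android.app.Application;
import android.net.http.HttpResponseCache;
import android.util.Log;

import java.io.File;
import java.io.IOException;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        try {
            // API 13+: transparently caches responses made through HttpURLConnection.
            File httpCacheDir = new File(getCacheDir(), "http");
            long httpCacheSize = 10L * 1024 * 1024; // 10 MiB
            HttpResponseCache.install(httpCacheDir, httpCacheSize);
        } catch (IOException e) {
            Log.i("MyApplication", "HTTP response cache installation failed", e);
        }
    }
}
```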
Are there any Android APIs that implement the Fourier transform using the device's DSP? Or are there any APIs that permit using the device's DSP?
Thanks.
No, there is no public API for performing hardware accelerated FFT.
You can optimize native code by targeting the armeabi-v7a ABI in order to use the FPU. That's very useful for floating point FFT.
See CPU-ARCH-ABIS in the docs/ directory of the Android NDK.
Sorry this is a bit late, but if you want to calculate the Fourier transform on Android for real or complex input, you are better off using either JTransforms or libgdx's FFT. libgdx uses a KissFFT backend; I'm not quite sure what JTransforms uses.
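A minimal JTransforms sketch (note that older JTransforms builds use the edu.emory.mathcs.jtransforms.fft package instead of org.jtransforms.fft) might look like this:

```java
import org.jtransforms.fft.DoubleFFT_1D;

public class FftDemo {
    public static void main(String[] args) {
        int n = 8;
        double[] signal = new double[n];
        for (int i = 0; i < n; i++) {
            // One full cycle of a sine wave as test input.
            signal[i] = Math.sin(2 * Math.PI * i / n);
        }

        // In-place real-to-complex FFT; the spectrum is packed into the same array.
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(signal);

        for (double v : signal) {
            System.out.println(v);
        }
    }
}
```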
Check out this example on how to implement libgdx:
http://www.digiphd.com/android-java-simple-fft-libgdx/
Hope this helps.
Firstly, not all devices have a DSP. Most in fact just have a CPU and GPU.
As of today, you probably can't really do what you want without a custom ROM/firmware. The good news is that they are working on it. Look at RenderScript, which is available starting with Honeycomb. It currently only runs on CPUs (though it can use multiple cores), but the plan is for a future release to allow execution on the GPU (and maybe the DSP) as well, with little to no code changes on your part. See this post for more info.
The Visualizer class is said to be able to export an FFT of the audio that is playing:
http://developer.android.com/reference/android/media/audiofx/Visualizer.html
but it is probably not quite what you are looking for.
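For completeness, grabbing a single FFT snapshot of whatever is currently playing looks roughly like the sketch below (it needs the RECORD_AUDIO permission, and audio session 0 attaches the Visualizer to the global output mix):

```java
import android.media.audiofx.Visualizer;

public final class OutputMixFft {

    /** Returns one FFT capture of the audio currently playing on the output mix. */
    public static byte[] captureFft() {
        // Session 0 = global output mix; requires android.permission.RECORD_AUDIO.
        Visualizer visualizer = new Visualizer(0);
        try {
            // Use the largest capture size the device supports.
            visualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
            visualizer.setEnabled(true);

            byte[] fft = new byte[visualizer.getCaptureSize()];
            visualizer.getFft(fft); // packed real/imaginary values, per the class docs
            return fft;
        } finally {
            visualizer.setEnabled(false);
            visualizer.release();
        }
    }
}
```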