I have a single model and I want to change its image (texture).
With the code below I can render my model (sfb file):
ModelRenderable.builder()
.setSource(this, sfb_source)
.build()
.thenAccept(renderable -> this.renderable = renderable)
.exceptionally(
throwable -> {
Toast toast =
Toast.makeText(this, "Unable to load andy renderable", Toast.LENGTH_LONG);
toast.setGravity(Gravity.CENTER, 0, 0);
toast.show();
return null;
});
Now, my model is unique, but I have multiple images (all the same size).
Should I create multiple sfb files, or is there a way to load the images and change them at run time?
First, it is worth mentioning that Sceneform is deprecated, or more accurately 'open sourced and archived', in case you are building on it, which I think you are from your code - see the note here: https://developers.google.com/sceneform/develop.
You can still work with the older versions or the open-source version of Sceneform - this may be fine for your use case.
Either way, for your requirements it sounds like you have two options:
use a model which supports animation - Sceneform-based instructions here: https://developers.google.com/sceneform/develop/animation
delete and replace the model when you want to change it - this is not as slow as it sounds and can, in my experience, look fairly seamless when changing model colours etc. (at least in Sceneform-based apps - I have not tried it with OpenGL and ARCore); a minimal sketch of this approach follows below.
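A sketch of the delete-and-replace approach in Sceneform, assuming you keep a reference to the Node the model is attached to (the node, helper method and asset names here are illustrative, not from the question):

// Rebuild the renderable from another .sfb and swap it onto the existing node.
private void swapModel(Node modelNode, String sfbAssetName) {
    ModelRenderable.builder()
        .setSource(this, Uri.parse(sfbAssetName)) // e.g. "model_variant_2.sfb"
        .build()
        .thenAccept(modelNode::setRenderable)
        .exceptionally(throwable -> {
            Log.e(TAG, "Unable to load " + sfbAssetName, throwable);
            return null;
        });
}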
Recently I was able to dynamically load Material Design Icons in Flutter by using flutter_icons/flutter_icons.dart:
// First Approach
Icon icon = new Icon(Icons.settings);
// Second Approach
icon = new Icon(MaterialIcons.getIconData(Model.dynamic_icon_name));
This got me thinking whether I could do something similar using a native Android approach, with something like:
// Glide Theoretical Example
Glide.with(context.getApplicationContext())
.asBitmap()
.load(MaterialIcons.getIconData(Model.dynamic_icon_name))
.apply(new RequestOptions().fitCenter())
.into(iconView);
I think it would be interesting to know, since it could provide the advantage that a developer would not need to bundle those assets with the app.
There was no effective way of handling this natively on Android, so the solution I ended up going with was having the server host these icons so I could just serve them down using Glide, like this.
Step 1: Backend setup
Download the png assets for the icon of your choice, e.g. the thumb_up icon from this Material Icons link
Now you can go ahead and unzip the image assets into your server's assets folder. In my case this was the public folder of the server, since I was using a Laravel backend. In the end my icons resided at public/material-icons/thumb_up_black.png, and I made sure to keep a consistent naming convention, public/material-icons/{icon_name_color}.png. This was the vital point, since I needed to reference these assets again.
Now we can link the icon with the respective item
Server side
...
// Creating the item
Item::create([
'name' => 'Like-able item',
'icon' => 'thumb_up_black',
]);
...
List items API endpoint result
[
{
"name": "Like-able item",
"icon_url": "https//server-name.fancy/thumb_up_black.png"
},
{
"name": "Like-able item #2",
"icon_url": "https//server-name.fancy/thumb_up_black.png"
},
....
]
Step 2: Client Setup (Android app)
Once I have consumed the API data and am about to display the items, I can use Glide to accomplish this.
ItemModel item = new ItemModel(itemDataFromServer);
Glide.with(context.getApplicationContext())
.asBitmap()
.load(item.icon_url)
.apply(new RequestOptions().fitCenter())
.into(iconView);
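The ItemModel above is just a thin wrapper around the server payload; a minimal version might look like this (a sketch - Gson and the field names are assumptions based on the JSON shown earlier, not part of the original setup):

import com.google.gson.annotations.SerializedName;

// Minimal model matching the "List items" API payload above.
public class ItemModel {
    @SerializedName("name")
    public String name;

    @SerializedName("icon_url")
    public String icon_url;
}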
This was finally how I was able to solve this problem. The nice thing with this implementation was that from our three client apps (Android, iOS and Web) we could use the same, or at least a similar, method to display the icons.
I'm struggling to follow the examples to load my own model with ARCore. I found the following code:
ModelRenderable.builder()
// To load as an asset from the 'assets' folder ('src/main/assets/andy.sfb'):
.setSource(this, Uri.parse("andy.sfb"))
// Instead, load as a resource from the 'res/raw' folder ('src/main/res/raw/andy.sfb'):
//.setSource(this, R.raw.andy)
.build()
.thenAccept(renderable -> andyRenderable = renderable)
.exceptionally(
throwable -> {
Log.e(TAG, "Unable to load Renderable.", throwable);
return null;
});
However, I can't find the ModelRenderable class anywhere, nor how to import it. Also, the example app I'm building from loads models like this:
virtualObject.createOnGlThread(/*context=*/ this, "models/andy.obj", "models/andy.png");
virtualObject.setMaterialProperties(0.0f, 2.0f, 0.5f, 6.0f);
But my model has no png files, just obj and mtl. The automatic Sceneform import also created sfa and sfb files.
Which one is the right way to do it?
For reference, here is the official documentation about initialising a model: https://developers.google.com/ar/develop/java/sceneform#renderables
The ModelRenderable is part of the
com.google.ar.sceneform:core
library; you can add it by adding this dependency to your app-level build.gradle:
implementation 'com.google.ar.sceneform:core:1.13.0'
Make sure every other ARCore / Sceneform dependency is on the same version, in this case 1.13.0.
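For example, a dependency block with everything pinned to the same release might look like this (the exact artifact list is illustrative and depends on which parts of Sceneform you actually use):

dependencies {
    implementation 'com.google.ar:core:1.13.0'
    implementation 'com.google.ar.sceneform:core:1.13.0'
    implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.13.0'
}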
The sfa extension means SceneformAsset; it represents your model details in a human-readable form and is not packaged into your application (it should live in the sampledata folder, which is at the same hierarchy level as your src folder). The sfb is the SceneformBinary; this binary is generated from the sfa descriptor every time you modify something in the sfa and build the project. The sfb file should be in the assets folder of your project. For model loading, you should use the sfb file:
ModelRenderable.builder()
.setSource(context, Uri.parse("house.sfb"))
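Putting it together, a fuller loading-and-placing flow might look roughly like this (a sketch based on the Sceneform sample apps; the arFragment field and the tap-to-place listener are assumptions about your setup):

// Assumes an ArFragment called arFragment is already in the layout,
// as in the Sceneform hello-sceneform sample.
arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
    Anchor anchor = hitResult.createAnchor();
    ModelRenderable.builder()
        .setSource(this, Uri.parse("house.sfb"))
        .build()
        .thenAccept(renderable -> {
            // Attach the renderable to the scene at the tapped position.
            AnchorNode anchorNode = new AnchorNode(anchor);
            anchorNode.setParent(arFragment.getArSceneView().getScene());
            TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
            node.setParent(anchorNode);
            node.setRenderable(renderable);
        })
        .exceptionally(throwable -> {
            Log.e(TAG, "Unable to load house.sfb", throwable);
            return null;
        });
});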
About your sample code: if you aren't familiar with OpenGL I don't recommend following that sample; it is better to look at Sceneform. Here is a sample app: https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/solarsystem
I'm new to Android development and I would like to make a camera app. I found this library (this is the GitHub page).
But I don't know how to implement a library. I followed these steps (method 2) but I'm getting an error in a popup window called 'IDE Fatal Errors'. It says: 'To investigate / fix the problem IDE wants to attach following files to the bug report. We recommend to include all the files providing maximum information. Note: all the data you send will be kept private.' Then I can select a 'diagnostic.txt'. There is a section 'file content' where 'rootsChanged' is written. I can report the whole window to Google.
The following step is to configure the 'Fotoapparat' instance. What is an instance? When I search on Google I only find articles talking about making a library.
I'm sorry if these are stupid questions, but I am a beginner and I would like to learn more about Android development. Thanks in advance for your time and help.
Add this line in your build.gradle (Module: app) file:
dependencies {
//Your other dependencies...
implementation 'io.fotoapparat:fotoapparat:2.3.3'
}
And start using your code. The library is working fine.
EDIT:
You need to learn the basics of Java.
To set up an instance of the object you need to create a variable.
Hence, in your case:
Fotoapparat yourVariableName = Fotoapparat
.with(context)
.into(cameraView) // view which will draw the camera preview
.previewScaleType(ScaleType.CenterCrop) // we want the preview to fill the view
.photoResolution(ResolutionSelectorsKt.highestResolution()) // we want to have the biggest photo possible
.lensPosition(LensPositionSelectorsKt.back()) // we want back camera
.focusMode(SelectorsKt.firstAvailable( // (optional) use the first focus mode which is supported by device
FocusModeSelectorsKt.continuousFocusPicture(),
FocusModeSelectorsKt.autoFocus(), // in case if continuous focus is not available on device, auto focus will be used
FocusModeSelectorsKt.fixed() // if even auto focus is not available - fixed focus mode will be used
))
.flash(SelectorsKt.firstAvailable( // (optional) similar to how it is done for focus mode, this time for flash
FlashSelectorsKt.autoRedEye(),
FlashSelectorsKt.autoFlash(),
FlashSelectorsKt.torch()
))
.frameProcessor(myFrameProcessor) // (optional) receives each frame from preview stream
.logger(LoggersKt.loggers( // (optional) we want to log camera events in 2 places at once
LoggersKt.logcat(), // ... in logcat
LoggersKt.fileLogger(this) // ... and to file
))
.build();
And start using yourVariableName.
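For example, the instance is usually started and stopped with the activity lifecycle, roughly like this (a sketch; yourVariableName is the Fotoapparat instance built above):

// Start and stop the camera together with the activity.
@Override
protected void onStart() {
    super.onStart();
    yourVariableName.start();
}

@Override
protected void onStop() {
    super.onStop();
    yourVariableName.stop();
}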
In Android, we can import an SVG as vector XML, use it as a Drawable, change the colors of SVG icons, and add them to a button:
void setSvgIcnForBtnFnc(Button setBtnVar, int setSvgVar, int setClrVar, String PosVar)
{
Drawable DevDmjVar = getDrawable(setSvgVar);
DevDmjVar.setBounds(0,0,Dpx24,Dpx24);
DevDmjVar.setColorFilter(new PorterDuffColorFilter(setClrVar, PorterDuff.Mode.SRC_IN));
switch (PosVar)
{
case "Tit" : setBtnVar.setCompoundDrawables(null, DevDmjVar, null, null); break;
case "Rit" : setBtnVar.setCompoundDrawables(null, null, DevDmjVar, null); break;
case "Pit" : setBtnVar.setCompoundDrawables(null, null, null, DevDmjVar); break;
default: setBtnVar.setCompoundDrawables(DevDmjVar, null, null, null); break;
}
}
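A call site might look like this (the button, drawable resource and colour are illustrative; Dpx24 in the helper above is assumed to be a 24dp-in-pixels value defined elsewhere):

// Draw the tinted vector drawable to the right ("Rit") of the button text.
setSvgIcnForBtnFnc(myButton, R.drawable.ic_settings, Color.CYAN, "Rit");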
How do I do this in Swift for iPhones?
setBtnVar.setImage(<image: UIImage?>, forState: <UIControlState>)
UPD: Also see this UseYourLoaf blog post
I just found on Erica Sadun's blog post that on iOS 11 you can use Vector Assets.
What "Vector Assets" mean:
If you click that box, the vector data will be shipped with your
application. Which, on the one hand, makes your application a little
bit larger, because the vector data takes up some space. But on the
other hand, will give you the opportunity to scale these images, which
might be useful in a number of different situations. So, one is, if
you know that this particular image is going to be used at multiple
sizes. But that might be less obvious. So, one case is a symbolic
glyph that should resize with dynamic type. Since we're thinking about
dynamic type, you should also be thinking about having glyphs that are
appearing next to type resize appropriately. Another case that's
really not obvious, is tab bar images.
... there's a really great accessibility feature that we strongly
recommend supporting, that allows for user that have turned their
dynamic type size up. ... So, we really recommend doing that to increase the usability of your app across all users
How to use:
Convert your SVG file into PDF, e.g. on ZamZar.com
Add your pdf to Assets.xcassets
Click "Preserve Vector Data" for the imported pdf.
Create a UIImageView in your UIViewController and assign the pdf file as its UIImage.
Alternatively, Asset Catalog Creator, available in the Mac App Store, will do steps 1 and 2 with a simple drag and drop.
iOS < 11
There is no native way to use SVG images.
Take a look at Macaw.
Import the framework via CocoaPods:
pod "Macaw", "0.8.2"
Check their example project: this is how you render tiger.svg (located in the project directory, not in an Assets.xcassets file):
import UIKit
import Macaw
class SVGExampleView: MacawView {
required init?(coder aDecoder: NSCoder) {
super.init(node: SVGParser.parse(path: "tiger"), coder: aDecoder)
}
}
There are some other third-party libraries of course:
SwiftSVG
Snowflake
SVGKit Objective-C framework
After a nightmare I came up with this solution for using an SVG in a button with Swift.
This is for all who don't wish to struggle like me.
I used the simple SwiftSVG library to get a UIView from an SVG file.
Usage :
namBtnVar.setSvgImgFnc("ikn_sev", ClrVar: UIColor.cyanColor())
Install SwiftSVG Library
1) Use pod to install:
// For Swift 3
pod 'SwiftSVG'
// For Swift 2.3
pod 'SwiftSVG', '1.1.5'
2) Add framework
Go to App Settings
-> General Tab
-> Scroll down to Linked Frameworks and Libraries
-> Click on plus icon
-> Select SVG.framework
3) Add the code below anywhere in your project
extension UIButton
{
func setSvgImgFnc(svgImjFileNameVar: String, ClrVar: UIColor)
{
setImage((getSvgImgFnc(svgImjFileNameVar, ClrVar : ClrVar)), forState: .Normal)
}
}
func getSvgImgFnc(svgImjFileNameVar: String, ClrVar: UIColor) -> UIImage
{
let svgURL = NSBundle.mainBundle().URLForResource(svgImjFileNameVar, withExtension: "svg")
let svgVyuVar = UIView(SVGURL: svgURL!)
/* The width, height and viewPort are set to 100
<svg xmlns="http://www.w3.org/2000/svg"
width="100%" height="100%"
viewBox="0 0 100 100">
So we need to set UIView Rect also same
*/
svgVyuVar.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
for svgVyuLyrIdx in svgVyuVar.layer.sublayers!
{
for subSvgVyuLyrIdx in svgVyuLyrIdx.sublayers!
{
if(subSvgVyuLyrIdx.isKindOfClass(CAShapeLayer))
{
let SvgShpLyrIdx = subSvgVyuLyrIdx as? CAShapeLayer
SvgShpLyrIdx!.fillColor = ClrVar.CGColor
}
}
}
return svgVyuVar.getImgFromVyuFnc()
}
extension UIView
{
func getImgFromVyuFnc() -> UIImage
{
UIGraphicsBeginImageContext(self.frame.size)
self.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image!
}
}
You can use vector-based PDFs natively if you select Single Scale for Scale Factors after importing.
The dimensions of the PDF will be the 1x dimensions for the asset.
Xcode will generate the rasterized image for every scale. You can then use it like any other image.
I used Aleksey Potapov's answer. The conversion and everything is perfect!
However I had an issue where my image was too large for my application.
So use this to resize it to a good size for iOS development:
<svg xmlns="http://www.w3.org/2000/svg" height="30" width="30" viewBox="0 0 1000 1000" xmlns:xlink="http://www.w3.org/1999/xlink">
Check out my app: Speculid
It will automatically convert SVGs into PDFs or PNGs depending on how your asset library is set up.
I use the random forest estimator (TensorForestEstimator) implemented in TensorFlow to predict whether a text is English or not. I trained my model on a dataset with 2k samples and two class labels, 0/1 (Not English/English), and saved it using the following code (the train_input_fn function returns the features and class labels):
model_path = 'test/'
estimator = TensorForestEstimator(params, model_dir='model/')
estimator.fit(input_fn=train_input_fn, max_steps=1)
After running the above code, the graph.pbtxt and checkpoints are saved in the model folder. Now I want to use it on Android. I have 2 problems:
As the first step, I need to freeze the graph and checkpoints into a .pb file to use it on Android. I tried freeze_graph (I used the code here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py). When I call freeze_graph on my model, I get the following error and the code cannot create the final .pb graph:
File "/Users/XXXXXXX/freeze_graph.py", line 105, in freeze_graph
_ = tf.import_graph_def(input_graph_def, name="")
File "/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 258, in import_graph_def
op_def = op_dict[node.op]
KeyError: u'CountExtremelyRandomStats'
This is how I call freeze_graph:
def save_model_android():
checkpoint_state_name = "model.ckpt-1"
input_graph_name = "graph.pbtxt"
output_graph_name = "output_graph.pb"
checkpoint_path = os.path.join(model_path, checkpoint_state_name)
input_graph_path = os.path.join(model_path, input_graph_name)
input_saver_def_path = None
input_binary = False
output_node_names = "output"
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_graph_path = os.path.join(model_path, output_graph_name)
clear_devices = True
freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
input_binary, checkpoint_path,
output_node_names, restore_op_name,
filename_tensor_name, output_graph_path,
clear_devices, "")
I also tried freezing on the iris dataset in "tf.contrib.learn.datasets.load_iris". I get the same error, so I believe it is not related to the dataset.
As a second step, I need to use the .pb file on the phone to predict a text. I found the camera demo example by Google, but it contains a lot of code. I wonder if there is a step-by-step tutorial on how to use a TensorFlow model on Android by passing a feature vector and getting the class label.
Thanks in advance!
UPDATE
By using the recent version of TensorFlow (0.12), the problem is solved. However, now the problem is what I should pass to output_node_names. How can I find out what the output nodes in the graph are?
Re (1): it looks like you are running freeze_graph on a build of TensorFlow which does not have access to contrib ops. Maybe try explicitly importing tensorforest before calling freeze_graph?
Re (2): I don't know of a simpler example.
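For reference, the basic pattern with the TensorFlow Android inference library looks roughly like this (a sketch, not a full tutorial; the input/output node names, feature vector size and class count are assumptions that depend on your graph, and the library's API has changed between releases):

import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

// Load the frozen graph from the app's assets and run one prediction.
// NUM_FEATURES / NUM_CLASSES and the node names are assumptions.
TensorFlowInferenceInterface inference =
        new TensorFlowInferenceInterface(getAssets(), "output_graph.pb");

float[] features = new float[NUM_FEATURES];   // your feature vector
float[] scores = new float[NUM_CLASSES];      // e.g. 2: Not English / English

inference.feed("input", features, 1, NUM_FEATURES);
inference.run(new String[] {"output"});
inference.fetch("output", scores);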
CountExtremelyRandomStats is one of TensorForest's custom ops, and exists in tensorflow/contrib. As was pointed out, TF switched to including contrib ops by default at some point. I don't think there's an easy way to include the contrib custom ops in the global registry in the previous releases, because TensorForest uses the method of building a .so file that is included as a data file which is loaded at runtime (a method that was the standard when TensorForest was created, but may not be any longer). So there are no easily-included python build rules that will properly link in the C++ custom ops. You can try including tensorflow/contrib/tensor_forest:ops_lib as a dep in your build rule, but I don't think it will work.
In any case, you can try installing the nightly build of tensorflow. The alternative includes modifying how tensorforest custom ops are built, which is pretty nasty.