Android Fotoapparat library

I'm new to Android development and I would like to make a camera app. I found this library (this is the GitHub page), but I don't know how to add a library to my project. I followed these steps (method 2), but I'm getting an error in a popup window called 'IDE Fatal Errors'. It says: 'To investigate / fix the problem IDE wants to attach following files to the bug report. We recommend to include all the files providing maximum information. Note: all the data you send will be kept private.' Then I can select a 'diagnostic.txt'. There is a section 'file content' where 'rootsChanged' is written. I can report the whole window to Google.
The next step is to configure the 'Fotoapparat' instance. What is an instance? When I search on Google I only find articles about creating a library.
I'm sorry if these are stupid questions, but I am a beginner and I would like to learn more about Android development. Thanks in advance for your time and help.

Add this to your build.gradle (Module: app) file:

dependencies {
    // Your other dependencies...
    implementation 'io.fotoapparat:fotoapparat:2.3.3'
}

Then sync Gradle and start using your code. The library works fine.
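Note that a camera app also needs the CAMERA permission. It must be declared in AndroidManifest.xml, and on Android 6.0+ it must also be requested at runtime before the preview will start. A minimal sketch (the request code 10 is arbitrary; ContextCompat and ActivityCompat come from androidx.core):

// In AndroidManifest.xml:
// <uses-permission android:name="android.permission.CAMERA" />

// In your Activity, before starting the camera:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    // the result is delivered to onRequestPermissionsResult()
    ActivityCompat.requestPermissions(
            this, new String[]{Manifest.permission.CAMERA}, 10);
}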
EDIT:
You need to learn the basics of Java.
To set up an instance of an object, you need to create a variable of that type and assign the object to it.
Hence, in your case:
Fotoapparat yourVariableName = Fotoapparat
    .with(context)
    .into(cameraView)                                   // view which will draw the camera preview
    .previewScaleType(ScaleType.CenterCrop)             // we want the preview to fill the view
    .photoResolution(ResolutionSelectorsKt.highestResolution())  // we want to have the biggest photo possible
    .lensPosition(LensPositionSelectorsKt.back())       // we want the back camera
    .focusMode(SelectorsKt.firstAvailable(              // (optional) use the first focus mode supported by the device
        FocusModeSelectorsKt.continuousFocusPicture(),
        FocusModeSelectorsKt.autoFocus(),               // if continuous focus is not available on the device, auto focus will be used
        FocusModeSelectorsKt.fixed()                    // if even auto focus is not available, fixed focus mode will be used
    ))
    .flash(SelectorsKt.firstAvailable(                  // (optional) similar to focus mode, this time for flash
        FlashSelectorsKt.autoRedEye(),
        FlashSelectorsKt.autoFlash(),
        FlashSelectorsKt.torch()
    ))
    .frameProcessor(myFrameProcessor)                   // (optional) receives each frame from the preview stream
    .logger(LoggersKt.loggers(                          // (optional) log camera events in 2 places at once
        LoggersKt.logcat(),                             // ... in logcat
        LoggersKt.fileLogger(this)                      // ... and to a file
    ))
    .build();
And start using yourVariableName.
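To actually show a preview and take photos, you also have to start and stop the camera with the activity lifecycle. A minimal sketch, assuming an Activity whose layout contains an io.fotoapparat.view.CameraView with the id camera_view (the activity and layout names are placeholders; start(), stop(), takePicture() and saveToFile() are from the Fotoapparat API, but double-check them against the README of the version you use):

import java.io.File;

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

import io.fotoapparat.Fotoapparat;
import io.fotoapparat.result.PhotoResult;
import io.fotoapparat.view.CameraView;

public class CameraActivity extends AppCompatActivity {

    private Fotoapparat fotoapparat;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera);

        CameraView cameraView = findViewById(R.id.camera_view);
        fotoapparat = Fotoapparat
                .with(this)
                .into(cameraView)
                .build();
    }

    @Override
    protected void onStart() {
        super.onStart();
        fotoapparat.start();    // camera is active while the activity is visible
    }

    @Override
    protected void onStop() {
        super.onStop();
        fotoapparat.stop();     // always release the camera when leaving the screen
    }

    private void takePhoto() {
        PhotoResult photoResult = fotoapparat.takePicture();
        // saving happens asynchronously once the photo is ready
        photoResult.saveToFile(new File(getExternalFilesDir("photos"), "photo.jpg"));
    }
}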

Related

Xamarin Forms Maps MoveToLastRegionOnLayoutChange not working as expected

I have an issue with Xamarin.Forms Maps.
When I drag the map and then navigate to a different screen, the map has snapped back to the last location set with "MoveToRegion" when I return to the map screen. According to the documentation, setting MoveToLastRegionOnLayoutChange to False should prevent this behavior, but it doesn't.
I didn't specifically test this property for screen rotation, but the renderer doesn't change on app sleep and resume.
Tested with:
Xamarin Forms V16.6.000.1062
Android V8.1
Visual Studio 2019 V16.6.3
UPDATE: I realized that it is not merely a case of page resume; the page is rendered as below:

mapInstance = MapsPage.SelfInstance ?? new MapsPage();
Detail = new NavigationPage(mapInstance);

Here I store the page instance in SelfInstance the first time it is instantiated, so the page is not initialized again.
Here is the link to the code sample I created with the above result.
Reported this on the Xamarin.Forms GitHub page:
https://github.com/xamarin/Xamarin.Forms/issues/11488

PrintAttributes not working: not able to create a custom page size

I am struggling with the Google printing interface on a tablet. I want to print with a fixed page size, but PrintAttributes.Builder is not modifying the page and margin settings. How can I create a new custom/fixed page dimension for printing? Right now it shows ISO_A4 by default for an HP printer.
My code is below:
PrintAttributes.Builder builder = new PrintAttributes.Builder();
PrintAttributes.MediaSize custom = new PrintAttributes.MediaSize("VISIT_K", "VISIT_K", 86000, 139860);
custom.asPortrait();
builder.setMediaSize(custom);
printJob = printManager.print(jobName, adapter, builder.build());
Which Android version are you testing on? See this answer concerning a bug pre-Android 7.
Regardless, the attributes here serve only as "hints". From the Android documentation:
You can use this parameter to provide hints to the printing framework and pre-set options based on the previous printing cycle, thereby improving the user experience. You may also use this parameter to set options that are more appropriate to the content being printed, such as setting the orientation to landscape when printing a photo that is in that orientation.
I think that if they conflict with what the printer reports as supported/default, the printer's attributes may take precedence. This could be a feature/bug report to Android if it is not otherwise working.
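One genuine bug in the snippet above, independent of the hint semantics: MediaSize.asPortrait() does not modify the object it is called on, it returns a new MediaSize, so its result must be kept. A corrected sketch (the "VISIT_K" id/label and the dimensions, which are in mils, i.e. thousandths of an inch, are taken from the question):

PrintAttributes.Builder builder = new PrintAttributes.Builder();
// asPortrait() returns a NEW instance; calling it without reassigning does nothing
PrintAttributes.MediaSize custom =
        new PrintAttributes.MediaSize("VISIT_K", "VISIT_K", 86000, 139860)
                .asPortrait();
builder.setMediaSize(custom);
printJob = printManager.print(jobName, adapter, builder.build());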

Error in saving and using model of TensorForestEstimator for Android

I use the random forest estimator (TensorForestEstimator), implemented in TensorFlow, to predict whether a text is English or not. I saved my model (a dataset with 2k samples and two class labels, 0/1 for not English/English) using the following code (the train_input_fn function returns features and class labels):

model_path = 'test/'
estimator = TensorForestEstimator(params, model_dir='model/')
estimator.fit(input_fn=train_input_fn, max_steps=1)

After running the above code, graph.pbtxt and the checkpoints are saved in the model folder. Now I want to use it on Android. I have two problems:
First, I need to freeze the graph and checkpoints into a .pb file to use on Android. I tried freeze_graph (I used the code here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py). When I call freeze_graph on my model, I get the following error and the code cannot create the final .pb graph:

  File "/Users/XXXXXXX/freeze_graph.py", line 105, in freeze_graph
    _ = tf.import_graph_def(input_graph_def, name="")
  File "/anaconda/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 258, in import_graph_def
    op_def = op_dict[node.op]
KeyError: u'CountExtremelyRandomStats'
This is how I call freeze_graph:
def save_model_android():
    checkpoint_state_name = "model.ckpt-1"
    input_graph_name = "graph.pbtxt"
    output_graph_name = "output_graph.pb"
    checkpoint_path = os.path.join(model_path, checkpoint_state_name)
    input_graph_path = os.path.join(model_path, input_graph_name)
    input_saver_def_path = None
    input_binary = False
    output_node_names = "output"
    restore_op_name = "save/restore_all"
    filename_tensor_name = "save/Const:0"
    output_graph_path = os.path.join(model_path, output_graph_name)
    clear_devices = True
    freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
                              input_binary, checkpoint_path,
                              output_node_names, restore_op_name,
                              filename_tensor_name, output_graph_path,
                              clear_devices, "")
I also tried freezing with the iris dataset from tf.contrib.learn.datasets.load_iris and get the same error, so I believe it is not related to the dataset.
Second, I need to use the .pb file on the phone to make predictions on a text. I found the camera demo example by Google, but it contains a lot of code. Is there a step-by-step tutorial on how to use a TensorFlow model on Android by passing in a feature vector and getting back the class label?
Thanks in advance!
UPDATE
Using a recent version of TensorFlow (0.12), the problem is solved. However, now the problem is what I should pass to output_node_names. How can I find out what the output nodes in the graph are?
Re (1): it looks like you are running freeze_graph on a build of TensorFlow which does not have access to the contrib ops. Maybe try explicitly importing tensorforest before calling freeze_graph?
Re (2): I don't know of a simpler example.
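Not a full tutorial, but the Android side is fairly small once you have a frozen .pb in your assets. A rough sketch using the TensorFlow Android inference library (TensorFlowInferenceInterface comes from the later org.tensorflow:tensorflow-android artifact; older releases used a different initialization method, so check your TF version. The node names "input" and "output" and the output size are placeholders to replace with your graph's actual names and shapes):

import android.content.res.AssetManager;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

public class TextClassifier {

    private final TensorFlowInferenceInterface inference;

    public TextClassifier(AssetManager assets) {
        // expects the frozen graph at app/src/main/assets/output_graph.pb
        inference = new TensorFlowInferenceInterface(assets, "output_graph.pb");
    }

    // feeds one feature vector, runs the graph, returns the class scores
    public float[] predict(float[] features) {
        inference.feed("input", features, 1, features.length);
        inference.run(new String[]{"output"});
        float[] scores = new float[2];   // two classes: not English / English
        inference.fetch("output", scores);
        return scores;
    }
}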
CountExtremelyRandomStats is one of TensorForest's custom ops and lives in tensorflow/contrib. As was pointed out, TF switched to including contrib ops by default at some point. I don't think there's an easy way to include the contrib custom ops in the global registry in the previous releases, because TensorForest uses the method of building a .so file that is included as a data file and loaded at runtime (a method that was the standard when TensorForest was created, but may no longer be). So there are no easily included Python build rules that will properly link in the C++ custom ops. You can try including tensorflow/contrib/tensor_forest:ops_lib as a dep in your build rule, but I don't think it will work.
In any case, you can try installing the nightly build of TensorFlow. The alternative involves modifying how the tensorforest custom ops are built, which is pretty nasty.

Taking Screenshots Using Qt C++ on Android

Thanks for checking my question out!
I'm currently working on a project using Qt C++, which is designed to be multi-platform. I'm a bit of a newcomer to it; I've been asked to set up the ability to take screenshots from within the menu structure, and I'm having issues with the Android version of the companion app.
As a quick overview, it's a bit of software that sends the content of a host PC's screen to our app, and I've been able to take screenshots on the Windows version just fine, using QScreen and QPixmap, like so:
overlaywindow.cpp
{
    QPixmap screenSnapData = screenGrab->currentBackground();
}
screenGrabber.cpp
{
    QScreen *screen = QGuiApplication::primaryScreen();
    return screen->grabWindow( QApplication::desktop()->winId() );
}
Unfortunately, Android seems to reject QScreen, and with most results from past Google searches suggesting the now-deprecated QPixmap::grab(), I've gotten a little stuck.
What luck I have had is within the code for the menu itself, and QWidget, but that isn't without issue, of course!
QFile doubleCheckFile("/storage/emulated/0/Pictures/Testing/checking.png");
doubleCheckFile.open(QIODevice::ReadWrite);
QPixmap checkingPixmap = QWidget::grab();
checkingPixmap.save(&doubleCheckFile);
doubleCheckFile.close();
This code does take a screenshot, but only of the button strip currently implemented, not of the whole screen. I've also taken a 'screenshot' of just a white box with the screen's dimensions by using:
QDesktopWidget dw;
QWidget *screen=dw.screen();
QPixmap checkingPixmap = screen->grab();
Would anyone know whether there is an alternative to QScreen for taking a screenshot on Android, or whether there's a specific way to get it working compared to Windows? Or would QWidget be the right track? Any help's greatly appreciated!
As I read in the Qt docs: in your screenGrabber.cpp, replace
QScreen *screen = QGuiApplication::primaryScreen();
return screen->grabWindow( QApplication::desktop()->winId() );
with:
QScreen *screen = QGuiApplication::primaryScreen();
return screen->grabWindow( 0 ); // 0 is the id of the main screen
If you want to take a screenshot of your own widget, you can use the method QWidget::render (Qt Doc):
QPixmap pixmap(widget->size());
widget->render(&pixmap);
If you want to take a screenshot of an app or widget other than your own app, you will have to go through the Android API...

Cardboard Android

Is it possible to focus on an object and open up its information, just like we do on click with the magnet trigger, in Cardboard for Android?
Like the gaze ray in Unity, is there an alternative for Android?
I want to do something like the one shown in Chrome Experiments.
Finally solved!
In the treasure hunt sample, you can find the isLookingAtObject() method, which detects where the user is looking, and another method called onNewFrame, which performs some action on each frame.
My solution to our problem is: in the onNewFrame method I added this snippet:
if (isLookingAtObject()) {
    selecting++;   // selecting is an integer defined as a field with zero value!
} else {
    selecting = 0;
}
if (selecting == 100) {
    startYourFunction();   // edit it on your own
    selecting = 0;
}
So when the user gazes at the object for 100 frames, your function is called, and if the user's gaze ends before selecting reaches 100, selecting is reset to zero.
Hope that this also works for you.
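For context, the complete callback with the counter field might look like this (a sketch: onNewFrame(HeadTransform) is the StereoRenderer callback from the Cardboard SDK, isLookingAtObject() comes from the treasure hunt sample, and startYourFunction() is a placeholder for your own action):

private int selecting = 0;   // frames the user has been gazing at the object

@Override
public void onNewFrame(HeadTransform headTransform) {
    // ... the sample's existing per-frame work ...
    if (isLookingAtObject()) {
        selecting++;          // keep counting while the gaze stays on target
    } else {
        selecting = 0;        // gaze broke off, start over
    }
    if (selecting == 100) {   // ~100 consecutive frames of gaze
        startYourFunction();
        selecting = 0;
    }
}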
Hope this helps. (I did some research; fingers crossed that the link shared below directly answers your question.)
You could check GazeInputModule.cs from GoogleSamples' Cardboard-Unity on GitHub. As the documentation of that class says:
This script provides an implementation of Unity's BaseInputModule class, so that Canvas-based UI elements (uGUI) can be selected by looking at them and pulling the trigger or touching the screen. This uses the player's gaze and the magnet trigger as a raycast generator.
Please check a tutorial regarding Google Cardboard Unity here.
Please check the Google-Samples posted on GitHub here.
