I'm working with different versions of Android in an application that uses JNI. I followed this tutorial
First of all, I manage the camera from Java, send the frame data through JNI, store it in an OpenCV matrix, and then manipulate the data. My application works fine on versions before 4.1.2, but on newer devices it doesn't work.
The problem is that I make the camera's preview invisible in the UI. On newer versions, if the preview layer is not being drawn, the preview callback stops.
I have only found one solution, but I cannot make it work. Does anyone know how to solve the problem of the preview callback WITHOUT showing the image on the screen?
Is there another alternative, like opening the camera directly in native code and using the callbacks on another thread?
This is very interesting; try reverse engineering this code: HiddenCamera
Try to grab the logic from these references:
turn-your-android-phone-into-remote-spy
android-camera-capture-without
android-take-a-picture-without-displaying-a-preview
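In case it helps to see the core trick those projects rely on: since Android 3.0 you can point the preview at an off-screen SurfaceTexture instead of a visible SurfaceView, and the preview callback keeps firing even though nothing is drawn on screen. A minimal sketch (the class and field names are mine, and the legacy android.hardware.Camera API from your question is assumed):

// Sketch only: keeps preview callbacks alive without any visible preview view.
import android.graphics.SurfaceTexture;
import android.hardware.Camera;

public class HeadlessPreview implements Camera.PreviewCallback {

    private Camera camera;
    private SurfaceTexture dummyTexture;

    public void start() throws Exception {
        camera = Camera.open();
        // Target an off-screen SurfaceTexture instead of a SurfaceView,
        // so nothing has to be drawn in the UI.
        dummyTexture = new SurfaceTexture(10); // arbitrary GL texture name
        camera.setPreviewTexture(dummyTexture);
        camera.setPreviewCallback(this);
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Hand the NV21 buffer over to JNI / OpenCV here.
    }

    public void stop() {
        if (camera != null) {
            camera.setPreviewCallback(null);
            camera.stopPreview();
            camera.release();
            camera = null;
        }
    }
}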
I just finished my first Android Studio app with Kotlin. The design/UI of the app is pretty lame, but it works fine. Now I want to "upgrade" my UI by replacing/adding new components that look better. I started reading about Flutter, but I only see tutorials where they start a completely new app from scratch. Since I have a working app, how can I use only the "design" options of Flutter and leave my source code as it is?
I'm pretty new to this field. I have some C# experience but have never built an app, and since this is my diploma thesis, I want to use all my resources so the app also looks professional in terms of UI design.
You can add Flutter to an existing Android or iOS app; you can read more here.
how can I use only the "design" options of Flutter and leave my source code as it is?
Vanilla Android uses XML; however, as you are likely aware, that's not how it works in Flutter. Since the system is completely different, I wouldn't expect you to be able to just "change the look" without re-designing the UI entirely in Flutter.
If you wanted to use Flutter, I would assume you would need to add it incrementally (as mentioned in Gizmo1337's answer).
However, if all you want is to update how your app looks, there's nothing stopping you from changing your existing XML styles. I'd say switching to Flutter is probably overkill if all you want is a new look.
I have to create a script using Espresso to test my app in Firebase Test Lab. My app uses the camera to capture images, and I open the default camera app for that.
For testing on my own device I pass the camera app's package name for the device I am using. The issue is that the camera app's package name differs between Android devices, and we do not know all of them. It is also not good to hard-code package names.
I have searched but have not been able to find a solution.
Thanks in advance.
I don't think there's a good way to do this with the actual camera app, since the camera app often differs between device models and Android versions.
How about you fake this dependency in your tests? Either by abstracting the code that calls the camera app, or by adding your own fake camera activity that gets called and returns a picture the way it's supposed to happen.
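For the second option, the fake camera activity can be as small as an activity that immediately returns a canned result. A rough sketch (the class name is made up and the file handling is deliberately left out):

// Sketch of a fake "camera" activity used only in test builds.
// It immediately returns RESULT_OK, the way the real camera app would after a capture.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class FakeCameraActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent result = new Intent();
        // If your production code passes EXTRA_OUTPUT, copy a bundled test image
        // to that Uri here before finishing.
        setResult(RESULT_OK, result);
        finish();
    }
}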
You can use android.support.test.espresso.intent to stub the camera launch intent (a minimal sketch follows the links below):
https://guides.codepath.com/android/UI-Testing-with-Espresso#stubbing-out-the-camera
Also see:
http://www.qaautomated.com/2016/02/testing-camera-activity-using-intent.html
Also check the Google sample:
https://github.com/googlesamples/android-testing/tree/master/ui/espresso/IntentsAdvancedSample
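The core of that approach looks roughly like this (using the android.support.test.espresso.intent APIs mentioned above; MainActivity and the button interaction are placeholders for your own app):

import static android.support.test.espresso.intent.Intents.intending;
import static android.support.test.espresso.intent.matcher.IntentMatchers.hasAction;

import android.app.Activity;
import android.app.Instrumentation;
import android.content.Intent;
import android.provider.MediaStore;
import android.support.test.espresso.intent.rule.IntentsTestRule;

import org.junit.Rule;
import org.junit.Test;

public class CameraStubTest {

    // IntentsTestRule calls Intents.init()/release() around each test.
    @Rule
    public IntentsTestRule<MainActivity> rule = new IntentsTestRule<>(MainActivity.class);

    @Test
    public void capturePhoto_usesStubbedCamera() {
        // Whenever the app fires ACTION_IMAGE_CAPTURE, answer with a canned
        // RESULT_OK instead of launching whatever camera app the device has.
        Intent resultData = new Intent();
        Instrumentation.ActivityResult result =
                new Instrumentation.ActivityResult(Activity.RESULT_OK, resultData);
        intending(hasAction(MediaStore.ACTION_IMAGE_CAPTURE)).respondWith(result);

        // ... click the button that launches the camera and assert on the result ...
    }
}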
I've already spent a week on this, but I'm not an expert, so I may be missing something, or may even be trying to do something impossible.
I have a Unity project (with Daydream VR) and my goal is to get a WebView working on a 3D surface (as a texture) in VR mode (for Android).
I know it sounds complicated, and I bet it is.
This is where I am so far:
First of all I started with the samples from Oculus. They made a plugin that links an Android MediaPlayer (a native Android class) to a Unity texture. This is relatively easy because Android's media player can already render into a SurfaceTexture, so with the plugin they just created a bridge between the two environments.
I got this working easily.
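For context, the essence of that bridge (heavily simplified here, not the actual Oculus plugin code) is that on the Android side you attach the player to a Surface backed by a SurfaceTexture wrapping the GL texture that Unity created:

// Simplified illustration of the MediaPlayer -> SurfaceTexture bridge idea
// (not the Oculus plugin source; unityTextureId comes from the Unity side).
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.view.Surface;

public class VideoTextureBridge {

    private SurfaceTexture surfaceTexture;
    private Surface surface;
    private MediaPlayer mediaPlayer;

    public void attach(int unityTextureId, String videoPath) throws Exception {
        surfaceTexture = new SurfaceTexture(unityTextureId);
        surface = new Surface(surfaceTexture);

        mediaPlayer = new MediaPlayer();
        mediaPlayer.setDataSource(videoPath);
        mediaPlayer.setSurface(surface);
        mediaPlayer.prepare();
        mediaPlayer.start();
    }

    // Called once per frame, on the thread that owns the GL context,
    // so the texture picks up new video frames.
    public void update() {
        surfaceTexture.updateTexImage();
    }
}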
Then after I found this article: http://www.felixjones.co.uk/neo%20website/Android_View/
and so I deduced that it might be possible to use the same trick to get a native Android WebView into a Unity texture.
From the article, I thought that the only thing I needed was a custom Android WebView that renders to a SurfaceView (so it would be quite similar to the Android MediaPlayer case), and then the Oculus plugin would work too.
Of course nothing was like that...
Step 1/
I made an Android plugin for Unity to expose a custom WebView that (as far as I understand) should render to a SurfaceView (but I'm not sure this part is quite right)
see => JrmgxWebView.java
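The core idea I was going for, stripped down (this is not the exact content of JrmgxWebView.java, just the pattern I understood I needed): wrap the SurfaceTexture in a Surface and draw the WebView into it with lockCanvas/unlockCanvasAndPost.

// Stripped-down illustration of the pattern, not the actual JrmgxWebView.java:
// draw the WebView into a Surface that wraps the SurfaceTexture Unity reads from.
import android.graphics.Canvas;
import android.graphics.SurfaceTexture;
import android.view.Surface;
import android.webkit.WebView;

public class WebViewToSurface {

    private final WebView webView;
    private final Surface surface;

    public WebViewToSurface(WebView webView, SurfaceTexture surfaceTexture) {
        this.webView = webView;
        this.surface = new Surface(surfaceTexture);
    }

    // Call this whenever the WebView content changes (e.g. from onDraw or a timer).
    public void drawFrame() throws Surface.OutOfResourcesException {
        Canvas canvas = surface.lockCanvas(null);
        try {
            webView.draw(canvas);
        } finally {
            surface.unlockCanvasAndPost(canvas);
        }
    }
}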
Step 2/
I compiled the Oculus plugin, with minor changes
see => *.cpp
Step 3/
I adapted the Unity script that links the Android plugin and the Oculus plugin, and added it to a simple Daydream scene on a 3D cube (with the script attached)
see => WebviewPlayerSampleWeb.cs
I get logs, and from what I understand, the problem is that the SurfaceView never gets refreshed. I can see that my custom WebView is instantiated correctly, most of the code works, and there are no errors (nor crashes).
But the texture only changes from white (its start value) to black (when the plugin is initialized), and then nothing happens.
The textureId coming from the plugin gets stuck at either 0 or 29, and AndroidSurfaceTexture->GetNanoTimeStamp() never returns new values.
On the Java side I played with .invalidate() and used .loadUrl() and .loadData(); on the C++ side, I checked each method call to see whether everything was being called. I've spent so much time on this code that I can't even remember all the combinations I tried.
What is wrong with this?
I'm sure I'm missing a basic point here.
Most of the code is here: https://gist.github.com/jrmgx/019cba403769dda27a6d42e48c461b1b
PS: Be nice with me, it's been a week, and it is my first time with Unity, C++, JNI and C#.
Thank you in advance
I'm trying to call ffmpeg.c to trim a video, based on this code: 'video-trimmer'. When I run the activity (which loads and uses the native lib), the first time I click trim it works and I can trim the video, but when I try to run it again it crashes (and it only works again after the application restarts).
I have spent three days looking for a solution to this issue. Most of the answers say the problem is with the static variables in ffmpeg.c, and that creating a lib that loads and unloads the class fixes the issue (answer1, answer2). So I tried to apply the solution based on those answers and this GitHub repo to the video-trimmer project, but all my attempts failed.
Does anyone know of a fork of the 'video-trimmer' project that fixes the issue? Or can anybody provide a step-by-step answer on how to implement the solution in the 'video-trimmer' project? (I tried to follow all the solutions on the web and apply them to that project, but with no luck.)
The problem seems to be with the initialized values (some variables are declared as global static vars, presumably for ease of access, but this breaks OOP principles and causes problems like the one you're facing). However, there are a few ways around this that I can think of:
Write a quick function to manually set the static vars back to their correct initial values (quick and dirty, but it works). A list of methods that should not be allowed to fire off whenever they please follows:
avcodec_register_all(), avdevice_register_all(), av_register_all()
avcodec_find_encoder(), avcodec_find_decoder(), av_find_stream_info()
avcodec_open(), avcodec_close()
These could be wrapped in a boolean-controlled method, for example, so that if they have run previously they cannot run again (see the sketch after this list).
Another way to control things is to manually force the values of the variables that are being re-initialised on subsequent runs (using a class or struct to control the ffmpeg global vars). For example, when running the method that currently causes the code to fail, the first step could be to manually set the variables back to their default settings so that they run correctly. At the moment I suspect you have data remaining resident between iterations, and that is what is causing the problems.
You could use mutexes to ensure that the aforementioned methods behave more responsibly when used with threads.
Addendum:
Also (at the C level), use libffmpeginvoke in preference to libffmpeg if you are going to invoke main() multiple times.
Forcibly invoke garbage collection (yep, this is another 'ugly' fix) on the call to load the ffmpeg lib, which would then clean things up, allowing you to call another instance.
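As a sketch of the boolean-controlled guard mentioned above, done at the Java/JNI boundary (the native method name is a placeholder for whatever your binding actually exposes, not a real API):

// Sketch of a boolean-controlled guard so the one-time ffmpeg registration
// calls cannot fire twice in the same process.
public final class FfmpegGuard {

    private static boolean registered = false;

    // Hypothetical JNI entry point that would call av_register_all() etc. in C.
    private static native void nativeRegisterAll();

    public static synchronized void ensureRegistered() {
        if (!registered) {
            nativeRegisterAll();
            registered = true;
        }
    }

    private FfmpegGuard() {}
}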
Let me know if you need something more in-depth: I can try making a test framework to replicate your problems and see where I get, although that needs access to my home PC, as I have no Android SDK at work.
Help us help you: please provide your implemented code or a part of it. A crash log would also be helpful.
Hint: Initialise the ffmpeg object/thread.
Then use a callback interface. Once the VideoTrimmer finishes, fire a callback.
In that callback, destroy/kill the ffmpeg object/thread.
Maybe this link can help you.
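A bare-bones version of that callback pattern might look like this (the interface, method, and trimmer names are just placeholders, not a real API):

// Bare-bones sketch of the callback pattern: trim, then tear down ffmpeg
// in the completion callback.
public interface TrimCallback {
    void onTrimFinished(boolean success);
}

// Usage idea (trimmer is whatever object wraps your ffmpeg/JNI calls):
// trimmer.trim(inputPath, outputPath, new TrimCallback() {
//     @Override
//     public void onTrimFinished(boolean success) {
//         trimmer.destroy(); // release/kill the ffmpeg object or thread here
//     }
// });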
I have recently used the "android-ffmpeg-java" project from GitHub; it is a working library, I can guarantee. You just have to implement a wrapper (test application) that will do the work.
Check this link for the source: android-ffmpeg-java
Check this link for an example: android-ffmpeg-cmdline. See if you can solve it with this.
I am not sure if this will help, but C files typically have a header where you can use #ifndef include guards.
Please see the following:
http://www.cprogramming.com/reference/preprocessor/ifndef.html
Use that syntax to sandwich the declarations in the associated .h file to ensure multiple imports don't cause a crash in the importing code.
Good Luck!
Edit: OK, it looks like that would mean recompiling ffmpeg into the .so file. You should just verify that the codebase has a mechanism like the one described above, and confirm it isn't somehow being loaded twice.
While somewhat crude, a potential workaround could be to utilize/link to ffmpeg from a Service (you had better be doing that anyway) which is declared in the manifest to run in its own process rather than that of client Activities. Then have that process terminate itself - calling native exit() if needed - when the task is completely finished. Android won't particularly like that happening - it's not good practice - but you can probably make it work.
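A skeleton of that approach (the manifest entry for the service would need android:process=":ffmpeg"; the extras and the native call are placeholders):

// Skeleton of a Service that runs the ffmpeg work in its own process and then
// kills that process, so all native static state is thrown away between runs.
// Manifest: <service android:name=".FfmpegService" android:process=":ffmpeg" />
import android.app.IntentService;
import android.content.Intent;
import android.os.Process;

public class FfmpegService extends IntentService {

    public FfmpegService() {
        super("FfmpegService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Placeholder for the actual JNI call into the trimming code.
        runTrim(intent.getStringExtra("input"), intent.getStringExtra("output"));

        // IntentService would stop itself anyway; killing the whole process
        // guarantees the next request starts with a clean copy of the native lib.
        Process.killProcess(Process.myPid());
    }

    private void runTrim(String input, String output) {
        // native trim invocation goes here
    }
}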
Re-engineering the library to be able to reset itself to a fresh state (or even make it entirely contextual) would be better, but for a huge legacy codebase may prove to be a large project.
I want to reuse the Android 4.0.4 call screen, but I can't access some of the widget classes it uses. Can anyone suggest something? Thanks in advance.
The source to Android is available; nothing prevents you from downloading it, getting the source to the widget you're interested in, and including that source (and any resources it needs) in your project. People have been doing this since Android was introduced.