Switching from the main thread to multiple threads while analyzing a frame - Android

I have been working on a mobile application that analyzes each frame looking for specific objects. The processing was too heavy, and I kept getting
05-08 17:44:24.909: I/Choreographer(31606): Skipped 114 frames! The application may be doing too much work on its main thread.
So I switched the image processing to threads. It is now much faster, but I am not able to recognize any object: the data (the different frames) is not updating and I don't know why. Here is what I'm doing in pseudocode (SurfaceHolder.Callback, Camera.PreviewCallback and camera.addCallbackBuffer(data) are implemented):
public void onPreviewFrame(byte[] data, Camera camera)
{
    ImageProcessor np = new ImageProcessor(data);
    np.start();
    results = np.getResults();
}
From the debugging I have done so far I know that start() is analyzing the whole frame, but the data is not updating: it stays stuck at the very first frame. This does not happen if I do it on the main thread like this:
public void onPreviewFrame(byte[] data, Camera camera)
{
    ImageProcessor np = new ImageProcessor();
    np.process(data);
    results = np.getResults();
}
This works, but it forces me to skip many frames. The answer may be easy, but I could not find it online.
Forgive me if I am posting a very noob question.
Thanks in advance.

That's because in the single-threaded case, np.process() completes before you execute results = ..., but in the threaded case, results = ... runs immediately after the thread is started, before processing has finished. Unless getResults() waits for the thread to finish?
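The race can be reproduced and fixed in a minimal, self-contained sketch. The ImageProcessor below is a hypothetical stand-in for the asker's class (its "analysis" just counts non-zero bytes); the point is that the reader must join() the worker before calling getResults(). In a real preview callback you would instead deliver results through a callback so onPreviewFrame doesn't block.

```java
// Hypothetical stand-in for the asker's ImageProcessor; the names are
// illustrative, not from any real API.
public class ProcessorDemo {
    static class ImageProcessor extends Thread {
        private final byte[] data;
        private volatile int result; // written by the worker, read after join()

        ImageProcessor(byte[] data) { this.data = data; }

        @Override
        public void run() {
            // Pretend "analysis": count non-zero bytes in the frame.
            int count = 0;
            for (byte b : data) if (b != 0) count++;
            result = count;
        }

        int getResults() { return result; }
    }

    public static void main(String[] args) throws InterruptedException {
        ImageProcessor np = new ImageProcessor(new byte[] {1, 0, 2, 3, 0});
        np.start();
        // Without this join(), getResults() would race the worker thread --
        // exactly the bug described above.
        np.join();
        System.out.println(np.getResults());
    }
}
```

Blocking in onPreviewFrame with join() defeats the purpose of the thread, of course; the usual design is to hand each frame to a worker and have the worker post results back when done.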

Related

How to add a delay inside the Executor bound to CameraX's analyzer?

Inside my overridden analyze() I need to add some kind of throttle before performing an IO operation. Without the throttle, this operation gets executed immediately at each call of analyze() and it actually completes quickly, but apparently the calls are too fast and after a while the camera preview freezes for eternity (the app is still running because Logcat keeps displaying new messages).
I'm currently investigating whether it has something to do with my code, like forgetting to call imageProxy.close(). So far everything seems fine, and I'm afraid the device that performs the IO operation is raising too many interrupts for the CPU to handle, or something along those lines.
I've tried the good old Thread.sleep() but obviously it blocks the main thread and freezes the UI; I've seen some examples with Handler#postDelayed() but I don't think it does what I want; I've tried wrapping the IO call in a coroutine with a delay() at the beginning but again I don't think it does what I want. Basically I'd like to call some form of sleep() on the Executor thread itself, from within the code executed by it.
after a while the camera preview freezes for eternity
I've seen this issue occur many times, and it's usually due to an image that the Analyzer doesn't close. Are you seeing the issue even when the image analysis use case isn't used?
I've tried the good old Thread.sleep() but obviously it blocks the main thread and freezes the UI
Why's that? This shouldn't be the case if you're adding the call to Thread.sleep() inside Analyzer.analyze(), since it'll block the thread of the Executor you provided when calling ImageAnalysis.setAnalyzer(), which shouldn't be tied to the main thread.
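That point can be illustrated outside Android with plain Java (a sketch; the single-thread executor here stands in for the one you would pass to ImageAnalysis.setAnalyzer()):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorSleepDemo {
    public static void main(String[] args) throws InterruptedException {
        // A single background thread, like an analysis Executor.
        ExecutorService analysisExecutor = Executors.newSingleThreadExecutor();

        analysisExecutor.submit(() -> {
            try {
                Thread.sleep(200); // throttles only the executor's thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("analysis done");
        });

        // The main thread keeps running while the task sleeps.
        System.out.println("main thread not blocked");

        analysisExecutor.shutdown();
        analysisExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The main thread prints immediately while the sleeping task finishes 200 ms later, which is why sleeping inside analyze() does not freeze the UI as long as the Executor isn't backed by the main thread.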
One option to perform analysis fewer times is to drop images inside the Analyzer, something like the following:
private static final int ANALYSIS_DELAY_MS = 1_000;
private static final int INVALID_TIME = -1;

private long lastAnalysisTime = INVALID_TIME;

public void analyze(ImageProxy image) {
    final long now = SystemClock.uptimeMillis();
    // Drop the frame if an image was analyzed less than ANALYSIS_DELAY_MS ago
    if (lastAnalysisTime != INVALID_TIME && (now - lastAnalysisTime < ANALYSIS_DELAY_MS)) {
        image.close();
        return; // without this return, the dropped frame would be closed twice below
    }
    lastAnalysisTime = now;
    // Analyze image
    image.close();
}
There is one more way; this is how I am doing it in a local project. In my image analyzer, I rebind the camera using Handler#postDelayed. This causes a brief black screen in the preview, but it won't process the next image as quickly.
My use case was continuously scanning barcodes, but the scanning was too fast and kept scanning the same code many times, so I just needed a 100 ms wait. This works out for me.
try {
    cameraProvider.unbindAll()
    Handler(Looper.getMainLooper()).postDelayed({
        cameraProvider.bindToLifecycle(
            this, CameraSelector.DEFAULT_BACK_CAMERA, useCaseGroup.build())
    }, 100)
} catch (exc: Exception) {
    Log.e(TAG, "Use case binding failed", exc)
}

High frequency UI update - Android

I want to make 8 squares change colors between red/black periodically.
I accomplished this using timer.schedule with a period in milliseconds and it worked,
BUT then I realized that I need a smaller time between these transitions (for example, nanoseconds).
To accomplish that I wrote this code:
timerTask = new TimerTask() {
    public void run() {
        handler.post(new Runnable() {
            public void run() {
                // CODE OF THE TASK.
            }
        });
    }
};

// To schedule this task every 5 nanoseconds I use this!
exec = new ScheduledThreadPoolExecutor(1);
exec.scheduleAtFixedRate(timerTask, 0, 5, TimeUnit.NANOSECONDS);
But when I run this, the UI is not updating (it seems to be stuck), yet in Logcat all the logs are printing very fast. How can I make a task run periodically every x nanoseconds?
The entire Android UI runs at 60 Hz: 60 updates per second. This means the minimum time between redraws is about 16 ms. You cannot run it at a higher framerate, nor are human eyes capable of seeing changes at a much higher frequency than that.
iOS and most video game consoles also work at a 60 Hz refresh rate. You'd find very few to no systems that go faster.
I'm not sure what exactly you're trying to accomplish, but I'm fairly certain you're trying to do it the wrong way.
ALSO: I notice your timer task posts to a handler. That means your timer task is going to tell the main thread to run something, and the timer task is running every few nanoseconds. You're basically going to choke your main thread with "run this task" messages, then eventually crash with an OOM error when the event queue becomes so massive it can't add any more (which may take several minutes), because there's no way you're processing them fast enough given the thread-switching overhead.
After doing a lot of research, I realized that in order to get the view to refresh so quickly I needed a SurfaceView and a dedicated thread to redraw it very fast; I really had no knowledge of this. Thanks for the help.

cannot trace frames rendered from another thread

Our company develops several games for mobile platforms, including Android. We use OpenGL for all visual items, including UI (more technical details are below).
We have received some weird warnings from Google Play Console in Pre-launch report, like “Your app took 20764 ms to launch”. On the video provided with this report, the game took about a second to start.
After some investigation we found that Android Systrace cannot detect OpenGL draws made from another thread. So Pre-launch tests think (wrongly) that our game is super-slow.
Is there some method to notify the system that a frame is drawn? It seems that eglSwapBuffers() is not enough.
There’s a link to the same problem with Cocos2d: https://discuss.cocos2d-x.org/t/frozen-frames-warnings-by-google-play-pre-launch-report-for-3-17-cocos-demo-app/42894
Some details
When a new build is published to Google Play Console, some automated tests are performed on different devices. Results of these tests are available in Pre-launch report section of the Google Play Console.
Starting from the beginning of April we receive strange performance warnings on some devices (always the same ones). Two examples:
Startup time: Your app took 20764 ms to launch…
Frozen frames: 33.33% of the frames took longer than 700ms to render
Both problems would sound dreadful, were they true. But when we examined the videos of the testing, we could not see any problems. All games started pretty fast and ran without visual stuttering.
Systrace report
This is the picture of systrace showing 5 seconds of our game being started (rectangles were drawn by me).
systrace
As you can see, systrace found only 4 rendered frames (the pink rect), which were drawn from the RenderThread. But for some reason Android cannot detect our GL draw calls, which are performed in another thread (blue rects).
Pre-launch reports also display only 3 to 4 frames, each 300-400 ms long.
Initialization code
Our game engine runs all game logic and render code in a separate thread. This is simplified initialization code.
The worker thread is created from our Activity’s overridden onStart() method.
public class MyActivity extends Activity
{
    protected Thread worker = null;

    private native void Run();

    @Override
    protected void onStart()
    {
        super.onStart();
        if (worker == null)
        {
            worker = new Thread()
            {
                public void run()
                {
                    Run();
                }
            };
            worker.start();
        }
    }
}
The only thing the thread does is call the Run() native function, which resolves into something like this:
void MyActivity::Run()
{
    initApp();
    while (!destroyRequested())
    {
        // Process the game logic.
        if (activated && window != NULL)
        {
            time->process();
            input->process();
            sound->process();
            logic->process();
            graphics->draw();
        }
    }
    clearApp();
}
As you can see, the worker thread constantly spins the update-and-draw loop. Vsync keeps the loop from running ahead. Heavy operations like resource loading are done asynchronously to avoid freezes.
From the user's side this approach works just fine: games load fast and run smoothly.

Strange performance of avcodec_decode_video2

I am developing an Android video player. I use ffmpeg in native code to decode video frames. In the native code, I have a thread called decode_thread that calls avcodec_decode_video2():
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
}
I have another thread called display_thread that uses ANativeWindow to display a decoded frame on a SurfaceView.
The problem is that if I let the decode_thread run continuously without a delay, it significantly reduces the performance of avcodec_decode_video2(); sometimes it takes about 0.1 seconds to decode a frame. However, if I put a delay in the decode_thread, something like this:
int decode_thread(void *arg) {
    avcodec_decode_video2(codecCtx, pFrame, &frameFinished, pkt);
    usleep(20 * 1000);
}
then the performance of avcodec_decode_video2() is really good, about 0.001 seconds. However, putting a delay in the decode_thread is not a good solution because it affects playback. Could anyone explain the behavior of avcodec_decode_video2() and suggest a solution?
It looks impossible that the performance of the video decoding function would improve just because your thread sleeps. Most likely the video decoding thread gets preempted by another thread, and hence you see the increased timing (your thread was not actually working the whole time). When you add the call to usleep, it performs the context switch to the other thread, so when your decoding thread is scheduled again it starts with a full CPU slice and is not interrupted inside avcodec_decode_video2 anymore.
What should you do? You surely want to decode packets a little ahead of when you show them: the performance of avcodec_decode_video2 certainly isn't constant, and if you try to stay just one frame ahead, you might not have enough time to decode one of the frames.
I'd create a producer-consumer queue of decoded frames, with a top limit. The decoder thread is the producer; it should run until it fills up the queue and then wait until there's room for another frame. The display thread is the consumer; it takes frames from this queue and displays them.
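A sketch of that bounded producer-consumer arrangement using Java's ArrayBlockingQueue (the integer "frames" here are stand-ins for decoded frames; in the native player the same structure would be built on a mutex and condition variable, but the shape is identical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FrameQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Top limit of 4 decoded frames: put() blocks the decoder when the
        // queue is full, so it stays a few frames ahead without running away.
        BlockingQueue<Integer> decoded = new ArrayBlockingQueue<>(4);
        final int totalFrames = 10;

        Thread decoder = new Thread(() -> {
            for (int i = 0; i < totalFrames; i++) {
                try {
                    decoded.put(i); // producer: blocks while the queue is full
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        decoder.start();

        // Display side (here: the main thread) consumes frames in order.
        for (int i = 0; i < totalFrames; i++) {
            int frame = decoded.take(); // consumer: blocks while queue is empty
            System.out.println("display frame " + frame);
        }
        decoder.join();
    }
}
```

The bound on the queue replaces the arbitrary usleep(): the decoder is paced by the consumer's demand instead of a fixed delay, so it never starves the display thread of CPU and never decodes unboundedly far ahead.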

Android - Scheduling an Event to Occur Every 10ms?

I'm working on creating an app that allows very low bandwidth communication via high frequency sound waves. I've gotten to the point where I can create a frequency and do the Fourier transform (with the help of Moonblink's open source code for Audalyzer).
But here's my problem: I'm unable to get the code to run with the correct timing. Let's say I want a piece of code to execute every 10ms, how would I go about doing this?
I've tried using a TimerTask, but there is a huge delay before the code actually executes, sometimes up to 100 ms.
I also tried doing it manually by polling the current time and executing only once that time has elapsed, but there is still a delay problem. Do you guys have any ideas?
Thread analysis = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
        long executeTime = System.currentTimeMillis();
        manualAnalyzer.measureStart();
        while (FFTransforming)
        {
            if (System.currentTimeMillis() >= executeTime)
            {
                // Reset the timer to execute again in 10ms
                executeTime += 10;
                // Perform Fourier Transform
                manualAnalyzer.doUpdate(0);
                // TODO: Analyze the results of the transform here...
            }
        }
        manualAnalyzer.measureStop();
    }
});
analysis.start();
I would recommend a very different approach: Do not try to run your code in real time.
Instead, rely on only the low-level audio code running in real time, by recording (or playing) continuously for a period of time encompassing the events of interest.
Your code then runs somewhat asynchronously to this, decoupled by the audio buffers. Your code's sense of time is determined not by the system clock as it executes, but rather by the defined inter-sample interval of the audio data you work with (i.e., if you are using 48 ksps, then 10 ms later is 480 samples later).
You may need to modify the protocol governing interaction between the devices to widen the time window in which transmissions can be expected to occur. That is, you can have precise timing with respect to the actual modulation and symbols within a "packet", but you should not expect nearly the same order of precision in determining when a packet is sent or received; you will have to "find" it amidst a longer recording containing noise.
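As a concrete illustration of that sample-based timekeeping (plain Java with illustrative numbers, not tied to any particular audio API):

```java
public class SampleClockDemo {
    public static void main(String[] args) {
        final int sampleRate = 48_000; // samples per second
        final int intervalMs = 10;     // desired analysis period

        // 10 ms of audio at 48 kHz spans exactly this many samples:
        final int samplesPerInterval = sampleRate * intervalMs / 1000;
        System.out.println(samplesPerInterval);

        // An analysis window starting at recorded sample N corresponds to
        // N / sampleRate seconds into the recording, regardless of when the
        // analysis code actually runs on the CPU.
        long startSample = 3L * samplesPerInterval;
        double startTimeMs = 1000.0 * startSample / sampleRate;
        System.out.println(startTimeMs);
    }
}
```

Stepping a read cursor through the recorded buffer in 480-sample increments gives exact 10 ms spacing in signal time, even if the thread doing the stepping is scheduled erratically.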
Your thread/loop strategy is probably roughly as close as you're going to get. However, 10ms is not a lot of time, most Android devices are not super-powerful, and a Fourier transform is a lot of work to do. I find it unlikely that you'll be able to fit that much work in 10ms. I suspect you're going to have to increase that period.
I changed your code so that it takes the execution time of doUpdate() into account. The use of System.nanoTime() should also increase accuracy.
public void run() {
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_DISPLAY);
    long executeTime = 0;
    long nextTime = System.nanoTime();
    manualAnalyzer.measureStart();
    while (FFTransforming)
    {
        if (System.nanoTime() >= nextTime)
        {
            executeTime = System.nanoTime();
            // Perform Fourier Transform
            manualAnalyzer.doUpdate(0);
            // TODO: Analyze the results of the transform here...
            executeTime = System.nanoTime() - executeTime;
            // Guard against the case that doUpdate took longer than 10ms
            final long i = executeTime / 10000000;
            // Set the timer to execute again at the next full 10ms interval
            nextTime += 10000000 + i * 10000000;
        }
    }
    manualAnalyzer.measureStop();
}
What else could you do?
eliminate Garbage Collection
go native with the NDK (just an idea, this might as well give no benefit)
