Strange picture noise at the beginning of live stream - Android

I have a SurfaceView and I'm displaying a live stream (RTSP) in it.
Everything is fine except that, most of the time, during the first seconds or minutes of playback there is a gray noise/overlay which either disappears completely all at once or clears gradually, starting at the spots where some motion takes place in the picture (see the attached screenshots below).
I'm pretty sure it is not an Android issue, as the same thing happens even when I watch the stream using VLC on my PC, but judging from the way the noise clears, I have a feeling there should be a way to programmatically "clear/refresh" the picture.
Do you have an idea how that could be accomplished?
this is the stream: rtsp://193.40.133.138:80/live/juras-erglis
Here are some screenshots showing how the picture clears progressively:

Looks like you're missing an initial key frame, so the delta frames are being applied against the initial buffer contents. Once enough time passes or enough motion occurs, the encoder emits another key frame and you get in sync.
Satellite and digital cable TV systems typically send a key frame 2x per second, so that you never have to wait more than half a second to sync up with the video stream. I don't know if there's much you can do except put up a "waiting for sync" message.
Wikipedia has some background.
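If you're feeding the decoder yourself (e.g. demuxing the RTSP payload into MediaCodec) rather than using the platform player, you could suppress output until the first key frame arrives and show the "waiting for sync" message in the meantime. A minimal sketch, assuming H.264 with Annex B start codes; the class and method names are just for illustration:

    /**
     * Sketch: gate decoding on the first IDR (key) frame so delta frames are
     * never applied against uninitialized buffer contents. Assumes you are
     * demuxing the H.264 elementary stream yourself (Annex B framing); the
     * platform RTSP player does not expose this hook.
     */
    class KeyFrameGate {
        private boolean synced = false;

        /** NAL unit type 5 is a coded slice of an IDR picture (H.264 table 7-1). */
        private static boolean containsIdr(byte[] accessUnit) {
            for (int i = 0; i + 3 < accessUnit.length; i++) {
                // Scan for a 00 00 01 start code, then read the NAL header byte.
                if (accessUnit[i] == 0 && accessUnit[i + 1] == 0 && accessUnit[i + 2] == 1) {
                    if ((accessUnit[i + 3] & 0x1F) == 5) return true;
                }
            }
            return false;
        }

        /** Returns true once this unit (and everything after it) may be decoded. */
        boolean shouldDecode(byte[] accessUnit) {
            if (!synced && containsIdr(accessUnit)) synced = true;
            return synced; // while false, show a "waiting for sync" message instead
        }
    }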

Related

Recorded videos with MediaRecorder play only the first frame in Samsung devices

I have reports from several users on distinct Samsung devices (J6, S6, S7, ...) where recorded videos do not play and so appear to be corrupted.
The playback seems stuck/frozen at the first frame, while the audio plays correctly.
The issue happens with videos recorded using Android's MediaRecorder API.
The information I could gather is that it happens when a device goes into deep sleep, i.e. the screen is turned off and the device is left unused for several minutes. When the device becomes active again, then for some still unknown reason, a new recording produces an excessively large delta duration between the first and second frames, giving the impression on playback that the video is frozen or has only one frame.
I've found the issue reported on different sites around the internet, yet no proper solution. Has anyone found a workaround? Samsung doesn't seem to acknowledge the problem.
Further investigation has shown that the issue might be caused by a system bug in some Samsung models.
Inspecting the corrupted videos sent by some users, I could confirm that on all the affected devices the first frame has an exaggeratedly large delta duration.
So, with an incorrect delta time, the video gives the impression of being frozen, when it is actually just showing the first frame on screen for its defined delta duration, which for corrupted videos is tremendously long.
To fix these samples, I replaced the delta time of the first frame with the value from the second frame (only the first frame is affected). Then the video plays correctly as expected. I used IsoParser for this task.
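For reference, here is a minimal sketch of that repair with mp4parser/IsoParser; it assumes the bogus first-frame delta sits in its own single-sample stts entry (likely, since only the first frame has the bad value) and that trak[0] is the video track, and exact API names may vary between library versions:

    import com.coremedia.iso.IsoFile;
    import com.coremedia.iso.boxes.TimeToSampleBox;
    import com.googlecode.mp4parser.util.Path;

    import java.io.FileOutputStream;
    import java.util.List;

    // Sketch: overwrite the first frame's delta with the second frame's delta,
    // then write the repaired MP4 to a new file (the fix can't be done in place).
    public class FirstFrameDeltaFix {
        public static void fix(String inPath, String outPath) throws Exception {
            try (IsoFile isoFile = new IsoFile(inPath);
                 FileOutputStream fos = new FileOutputStream(outPath)) {
                // Assumption: trak[0] is the video track; a robust fix should
                // locate the video track by its handler type instead.
                TimeToSampleBox stts = Path.getPath(isoFile,
                        "/moov[0]/trak[0]/mdia[0]/minf[0]/stbl[0]/stts[0]");
                List<TimeToSampleBox.Entry> entries = stts.getEntries();
                if (entries.size() >= 2 && entries.get(0).getCount() == 1) {
                    // Corrupted files carry the bogus delta as a single-sample
                    // entry; replace it with the sane delta of the next entry.
                    entries.get(0).setDelta(entries.get(1).getDelta());
                }
                isoFile.getBox(fos.getChannel()); // serialize the fixed file
            }
        }
    }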
But this is not a proper solution, as it means having to check every video and repackage it if affected, since there is no way to fix it in place. The operation requires creating a new video file, copying the contents of the original with the corrected delta time, and replacing the original file with the fixed one.
The proper solution would be to find out how the MediaRecorder API computes delta times, and why in some scenarios on the affected devices it produces an invalid value for the first frame.
My only guess: if by any chance the MediaRecorder implementation uses the System.nanoTime clock, I've read in some StackOverflow posts that this system clock sometimes gives a bizarre value when coming back from device deep sleep. If that is the real issue, then the only real solution would be for Samsung to fix their implementation.
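If anyone wants to test that guess on an affected device, a cheap diagnostic is to log System.nanoTime() against SystemClock.elapsedRealtimeNanos() before and after a deep sleep; just a sketch, with the expected behavior spelled out in the comments:

    import android.os.SystemClock;
    import android.util.Log;

    // Sketch: elapsedRealtimeNanos() is documented to keep counting through
    // deep sleep, while nanoTime() (CLOCK_MONOTONIC) stops. So the offset
    // between them should grow by exactly the time spent asleep; any other
    // jump after wake-up would point at the nanoTime clock misbehaving.
    public final class ClockProbe {
        private static final String TAG = "ClockProbe";

        public static void logOffset(String label) {
            long nano = System.nanoTime();
            long elapsed = SystemClock.elapsedRealtimeNanos();
            Log.d(TAG, label + " offset(ns)=" + (elapsed - nano));
        }
    }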

Determine exact screen flip times on Android

I am attempting to determine (to within 1 ms) when particular screen flips happen on Android. Choreographer fires every time a frame flips, but gives no way of determining which frame is actually being displayed. According to https://source.android.com/devices/graphics/architecture.html, there are several layers in the process: the user-land buffer, which flips to a triple-buffered queue, which flips to SurfaceFlinger, which flips to the hardware. Each of these layers can potentially drop a frame, but at this point I have only determined how to monitor the user-land buffer. Is there a way to monitor the other buffers/flips (in real time, on a non-rooted, non-custom phone)?
I have observed unexpected frame delays on the HTC M8 (about 1 every 5 minutes), but the Nexus 7 does not appear to have this problem. I measure the delays by using a Cedrus StimTracker (http://cedrus.com/stimtracker/) with a photo sensor and the Lab Streaming Layer (https://github.com/sccn/labstreaminglayer). I have tried using eglPresentationTimeANDROID to control when screens are flipped, and that has not fixed the problem.
Note that I'm using the ndk, but I can usually use the JNI to get access to non-ndk features when I need to.
The reason I care is in order to use Android for psychological and neurological experiments, where 1 ms precision is highly desirable.
As far as accessible APIs go, it sounds like you've found the relevant bits and pieces. If you haven't yet, please read through this stackoverflow item.
Using Choreographer and extrapolation, you can guess at when the next display refresh will occur. Using eglPresentationTimeANDROID() on an Android 5.0+ device, you can tell SurfaceFlinger when you want a particular frame to be sent to the display. Assuming SurfaceFlinger is properly accounting for all latency (such as additional frames added by "smart" panels), that should get you reliable timing.
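A minimal sketch of that combination, assuming you drive your own GL render loop (EGLExt has exposed eglPresentationTimeANDROID() since API 18; the two-refresh lead time here is an illustrative choice, not a requirement):

    import android.opengl.EGL14;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLExt;
    import android.opengl.EGLSurface;
    import android.view.Choreographer;

    // Sketch: schedule each frame for a specific display refresh. frameTimeNanos
    // marks the vsync that woke us; asking for the frame to latch two refreshes
    // later keeps a full frame of slack. Kick it off with one postFrameCallback.
    class ScheduledSwapRenderer implements Choreographer.FrameCallback {
        private final EGLDisplay display;
        private final EGLSurface surface;
        private final long refreshPeriodNs; // ~16_666_667 ns on a 60 Hz panel

        ScheduledSwapRenderer(EGLDisplay display, EGLSurface surface, long refreshPeriodNs) {
            this.display = display;
            this.surface = surface;
            this.refreshPeriodNs = refreshPeriodNs;
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            drawFrame(); // issue the GL commands for this frame

            // Tell SurfaceFlinger when this buffer should latch (API 18+).
            long desiredPresentNs = frameTimeNanos + 2 * refreshPeriodNs;
            EGLExt.eglPresentationTimeANDROID(display, surface, desiredPresentNs);
            EGL14.eglSwapBuffers(display, surface);

            Choreographer.getInstance().postFrameCallback(this); // next frame
        }

        private void drawFrame() { /* GL drawing elided */ }
    }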
(Bear in mind that the timing is based on when the display latches the next frame, not when the next frame is fully visible on the display... the latency there will depend on the panel.)
Grafika's "scheduled swap" Activity uses this feature, but it sounds like you're already familiar.
The only way to get signaled by the display when it does the swap would be to dup() the display-retire fence fd from the previous frame, and wait on it. Some of the code in SurfaceFlinger does this, notably DispSync watches the retire fences to see if the software "VSYNC" is drifting. There is no public API for fences, and the user-space response time could certainly be more than 1ms anyway... it usually works out better to schedule ahead than it does to react. Your requirement for non-rooted non-custom devices makes this problematic.
If you're mostly seeing correct behavior, but occasionally seeing a miss, your best bet is to use systrace to track down the cause.

Android triple buffering - expected behavior?

I'm investigating the performance of my app since I noticed it dropping some frames while scrolling. I ran systrace (on a Nexus 4 running 4.3) and noticed an interesting section in the output.
Everything is fine at first. Zooming in on the left section, we can see that drawing starts on every vsync, finishes with time to spare, and waits until the next vsync. Since it's triple buffered, it should be drawing into a buffer that will be posted on the following vsync after it's done.
On the 4th vsync in the zoomed in screenshot, the app does some work and the draw operation doesn't finish in time for the next vsync. However, we don't drop any frames because the previous draws were working a frame ahead.
After this happens though, the draw operations don't make up for the missed vsync. Instead, only one draw operation starts per vsync, and now they're not drawing one frame ahead anymore.
Zooming in on the right section, the app does some more work and misses another vsync. Since we weren't drawing a frame ahead, a frame actually gets dropped here. After this, it goes back to drawing one frame ahead.
Is this expected behavior? My understanding was that triple buffering allowed you to recover if you missed a vsync, but this behavior looks like it drops a frame once every two vsyncs you miss.
Follow up questions
On the right side of this screenshot, the app is actually rendering buffers faster than the display is consuming them. During performTraversals #1 (labeled in the screenshot), let's say buffer A is being displayed and buffer B is being rendered. #1 finishes long before the vsync and puts buffer B in the queue. At this point, shouldn't the app be able to immediately start rendering buffer C? Instead, performTraversals #2 doesn't start until the next vsync, wasting the precious time in between.
In a similar vein, I'm a bit confused about the need for waitForever on the left side here. Let's say buffer A is being displayed, buffer B is in the queue, and buffer C is being rendered. When buffer C is finished rendering, why isn't it immediately added to the queue? Instead it does a waitForever until buffer B is removed from the queue, at which point it adds buffer C, which is why the queue seems to always stay at size 1 no matter how fast the app is rendering buffers.
The amount of buffering provided only matters if you keep the buffers full. That means rendering faster than the display is consuming them.
The labels don't appear in your images, but I'm guessing that the purple row above the green vsync row is the BufferQueue status. You can see that it generally has 0 or 1 full buffers at any time. At the very left of the "zoomed-in on the left" image you can see that it's got two buffers, but after that it only has one, and 3/4 of the way across the screen you see a very short purple bar that indicates it just barely rendered the frame in time.
See this post and this post for background.
Update for the added questions...
The detail in the other post barely scratched the surface. We must go deeper.
The BufferQueue count shown in systrace is the number of queued buffers, i.e. the number of buffers that have content in them. When SurfaceFlinger grabs a buffer for display, it releases the buffer immediately, changing its state to "free". This is particularly exciting when the buffer is being shown on an overlay, because the display is rendering directly from the buffer (as opposed to compositing into a scratch buffer and displaying that).
Let me say that again: the buffer from which the display is actively reading data for display on the screen is marked as "free" in the BufferQueue. The buffer has an associated fence that is initially "active". While it's active, nobody is allowed to modify the buffer contents. When the display no longer needs the buffer, it signals the fence.
So the reason why the code over on the left of your trace is in waitForever() is because it's waiting for the fence to signal. When VSYNC hits, the display switches to a different buffer, signals the fence, and your app can start using the buffer immediately. This eliminates the latency that would be incurred if you had to wait for SurfaceFlinger to wake up, see that the buffer was no longer in use, send an IPC through BufferQueue to release the buffer, etc.
Note that the calls to waitForever() only show up when you're not falling behind (left side and right side of the trace). I'm not sure offhand why it's happening at all when the queue has only 1 full buffer -- it should be dequeueing the oldest buffer, which should already have signaled.
The bottom line is that you'll never see the BufferQueue go above two for triple buffering.
Not all devices work as described above. Nexus 7 (2012) doesn't use the "explicit sync" mechanism, and pre-ICS devices don't have BufferQueues at all.
Going back to your numbered screenshot, yes, there's plenty of time between '1' and '2' where your app could run performTraversals(). It's hard to say for sure without knowing what your app is doing, but I would guess you've got a Choreographer-driven animation cycle that wakes up every VSYNC and does work. It doesn't run more often than that.
If you systrace Android Breakout you can see what it looks like when you render as fast as you can ("queue stuffing") and rely on BufferQueue back-pressure to regulate the game speed.
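For contrast with a Choreographer-driven cycle, queue stuffing is just a free-running loop; a minimal sketch, assuming the EGL display, surface, and context are already set up on the render thread:

    import android.opengl.EGL14;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLSurface;

    // Sketch: render as fast as possible and let BufferQueue back-pressure pace
    // the loop. When every buffer is queued, the swap (or the next dequeue,
    // depending on the driver, per the 4.3/4.4 note below) blocks until
    // SurfaceFlinger releases a buffer, so the loop settles at the refresh rate.
    class QueueStuffingLoop {
        private volatile boolean running = true;

        void run(EGLDisplay display, EGLSurface surface) {
            while (running) {
                updateGameState(); // no sleeping, no vsync callbacks
                drawFrame();       // issue GL commands for the frame
                EGL14.eglSwapBuffers(display, surface); // blocks when queue is full
            }
        }

        private void updateGameState() { /* game logic elided */ }
        private void drawFrame() { /* GL drawing elided */ }
    }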
It's especially interesting to compare N4 running 4.3 with N4 running 4.4. On 4.3, the trace is similar to yours, with the queue largely hovering at 1, with regular drops to 0 and occasional spikes to 2. On 4.4, the queue is almost always at 2 with an occasional drop to 1. In both cases it's sleeping in eglSwapBuffers(); in 4.3 the trace usually shows waitForever() below that, while in 4.4 it shows dequeueBuffer(). (I don't know the reason for this offhand.)
Update 2: The reason for the difference between 4.3 and 4.4 appears to be a Nexus 4 driver change. The 4.3 driver used the old dequeueBuffer call, which turns into dequeueBuffer_DEPRECATED() (Surface.cpp line 112). The old interface doesn't take the fence as an "out" parameter, so the call has to do the waitForever() itself. The newer interface just returns the fence to the GL driver, which does the wait when it needs to (which might not be right away).
Update 3: An even longer explanation is now available here.

Android automatically delete everything before last 2 minutes of video stream

I want to make an Android app that records a video stream and, as long as the user does not push a button, deletes everything before the last 120 seconds of the stream. This should run for hours, so only ~50 MB are in use at any time. Does anyone have an idea how to record video as a never-ending flow of data that lets me access certain points and delete everything before them?
I know this question is pretty general, but I find it very hard to access the Android camera close to the hardware.
You'll probably run into file size limitations if nothing else.
A better approach would be to just keep recording 30-second videos, and delete any that are more than two minutes old until the user presses the "record" button, at which time you start keeping them.
Then splice them together into one long video afterwards.
By the way, this will kill your battery. I assume you're equipped to deal with that.
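Here's a minimal sketch of that rotation scheme using MediaRecorder's max-duration callback; the segment length and two-minute window come from the question, while the file naming and the elided camera/preview setup are assumptions:

    import android.media.MediaRecorder;
    import java.io.File;

    // Sketch: record fixed 30 s segments; until the user hits "keep", delete any
    // segment older than 120 s so only a handful of clips exist at a time. The
    // surviving clips can be spliced afterwards (e.g. with MediaMuxer or mp4parser).
    class RollingRecorder implements MediaRecorder.OnInfoListener {
        private static final long SEGMENT_MS = 30_000;
        private static final long KEEP_WINDOW_MS = 120_000;

        private final File dir;
        private MediaRecorder recorder;
        private volatile boolean keepEverything = false; // set true on button press

        RollingRecorder(File dir) { this.dir = dir; }

        void startSegment() {
            recorder = new MediaRecorder();
            // ...audio/video sources, output format, encoders elided...
            recorder.setOutputFile(new File(dir, System.currentTimeMillis() + ".mp4").getPath());
            recorder.setMaxDuration((int) SEGMENT_MS);
            recorder.setOnInfoListener(this);
            // prepare() and start() elided (need a camera and a preview surface)
        }

        @Override
        public void onInfo(MediaRecorder mr, int what, int extra) {
            if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
                mr.stop();
                mr.release();
                if (!keepEverything) pruneOldSegments();
                startSegment(); // note: a small gap between segments is unavoidable
            }
        }

        private void pruneOldSegments() {
            long cutoff = System.currentTimeMillis() - KEEP_WINDOW_MS;
            File[] files = dir.listFiles();
            if (files == null) return;
            for (File f : files) {
                // Anything not touched since the cutoff goes; the active
                // segment is always newer, so it survives the sweep.
                if (f.lastModified() < cutoff) f.delete();
            }
        }
    }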

Multiple Android Camera preview buffers for motion detection?

I want to try to do motion detection by comparing consecutive camera preview frames, and I'm wondering if I'm interpreting the android docs correctly. Tell me if this is right:
If I want the camera preview to use buffers I allocate myself, I have to call addCallbackBuffer() at least twice, to get two separate buffers to compare.
Then I have to use the setPreviewCallbackWithBuffer() form of the callback so the preview will be filled in to the buffers I allocated.
Once I get to at least the 2nd callback, I can do whatever lengthy processing I like to compare the buffers, and the camera will leave me alone, not doing any more callbacks or overwriting my buffers, till I return the oldest buffer back to the camera by calling addCallbackBuffer() once again (and the newest buffer will sit around unchanged for me to use in the next callback for comparison).
That last one is the one I'm least clear on. I won't get errors or anything because it ran out of buffers, will I? It really will just silently drop preview frames and not do the callback?
Well, I went and implemented the above algorithms and they actually worked, so I guess I was interpreting the docs correctly :-).
If anyone wants to see my heavily modified CameraPreview code that does this, it is on my web page at:
http://home.comcast.net/~tomhorsley/hardware/scanner/android-scanner.html
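For anyone landing here, a minimal sketch of that setup with the legacy android.hardware.Camera API (the API the question and the linked code use); buffer sizing assumes the default NV21 preview format, and the comparison itself is elided:

    import android.graphics.ImageFormat;
    import android.hardware.Camera;

    // Sketch: two app-owned preview buffers; the camera only delivers a frame
    // when a free buffer is available, so holding both buffers during a slow
    // comparison simply makes it drop frames (no errors, no callbacks).
    class MotionDetector implements Camera.PreviewCallback {
        private byte[] previousFrame; // the buffer we are still comparing against

        void start(Camera camera) {
            Camera.Size s = camera.getParameters().getPreviewSize();
            int bytes = s.width * s.height
                    * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
            camera.addCallbackBuffer(new byte[bytes]); // buffer A
            camera.addCallbackBuffer(new byte[bytes]); // buffer B
            camera.setPreviewCallbackWithBuffer(this);
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            if (previousFrame != null) {
                compare(previousFrame, data);            // lengthy processing is OK
                camera.addCallbackBuffer(previousFrame); // return the older buffer
            }
            previousFrame = data; // keep the newest frame for the next comparison
        }

        private void compare(byte[] older, byte[] newer) { /* motion check elided */ }
    }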
