Need help using EGLImageKHR instead of glReadPixels - Android

We're building a simple Android app where a user can create a collage of images and videos and then export it as a single video. Our current code on API 16/17 is very slow: with six 1-minute videos, it takes about 15 minutes to combine them when they play one after another, and about 7 minutes when they play simultaneously. I want to bring this down to at most twice the total length of the video; for the example above, that means 2 minutes when playing simultaneously and 12 minutes when playing one after another.
We've tried software libraries such as FFmpeg without success. We should probably go the hardware-decoding route, which likely means working with the NDK. I found this article, https://vec.io/posts/faster-alternatives-to-glreadpixels-and-glteximage2d-in-opengl-es, which looks helpful, but I need some guidance on applying it.
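To make the target concrete: "at most twice the total length" translates into a fixed time budget per composited output frame. A small sketch of that arithmetic (the 30 fps figure is my assumption; the post doesn't state the frame rate):

```java
// Rough per-frame time budget for hitting a "2x total length" export target.
// Assumes 30 fps output; these numbers are illustrative, not from the post.
public class FrameBudget {
    static double budgetMsPerFrame(double videoSeconds, double targetFactor, int fps) {
        double totalFrames = videoSeconds * fps;          // output frames to produce
        double budgetSeconds = videoSeconds * targetFactor; // total wall-clock allowance
        return budgetSeconds * 1000.0 / totalFrames;       // ms per frame for decode+composite+encode
    }

    public static void main(String[] args) {
        // One minute of composited output at 2x realtime and 30 fps:
        // 120 s / 1800 frames ≈ 66.7 ms per frame, for the whole pipeline.
        System.out.println(budgetMsPerFrame(60, 2.0, 30));
    }
}
```

At ~67 ms per frame for six simultaneous decodes plus compositing plus encoding, a blocking glReadPixels readback (often tens of milliseconds per frame on devices of that era) can easily consume the entire budget, which is why the article's asynchronous alternatives matter.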

How to download a part of an mp3 from a server?

Use Case
My use case is roughly: add a 15-second clip of an mp3 to a ~1-minute video. All the transcoding/merging is done by FFmpeg-android, so that's not the concern right now.
The flow is as follows:
1. The user selects any 15 seconds of an mp3 (streamed with ExoPlayer; at 192 kbps / 44.1 kHz, 3 minutes is up to ~7 MB).
2. ONLY that 15-second part is downloaded and added to the video's audio stream (using FFmpeg).
3. The obtained output is used.
Tried solutions
Extracting fragment of audio from a url
RANGE_REQUEST - I have replicated the exact same algorithm/formula in Kotlin using the exact sample file provided, but the output is not accurate: it is off by ±1.5 s × c, where c grows in proportion to startTime.
How to crop a mp3 from x to x+n using ffmpeg?
FFMPEG_SS - This works flawlessly with remote URLs as input, but it has two downsides:
- as startTime increases, the number of bytes downloaded approaches the full size of the mp3;
- ffmpeg-android does not support the network module (at least the way we compiled it).
So neither solution has been fruitful, and currently I download the whole file and trim it locally, which is definitely bad UX.
I wonder how Instagram's music addition to story feature works because that's close to what I wanted to implement.
It is not possible the way you want to do it. MP3 files do not contain timestamps. If you jump to the middle of an mp3, look for the next frame-start marker, and begin decoding, you have no idea what time that frame corresponds to, because frames are variable in size. The only way to know is to count the number of frames before the current position, which means you need the whole file.
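The answer above can be made concrete with a little arithmetic. In an MPEG-1 Layer III stream every frame carries 1152 samples, and for constant bitrate (CBR) every frame also has a (nearly) fixed byte size, so a byte offset for a given start time can be estimated; with VBR that estimate breaks down, which is presumably why the range-request approach drifts as startTime grows. A sketch under the CBR assumption (the 192 kbps / 44.1 kHz figures come from the question; the helper names are mine, and the optional 1-byte frame padding is ignored):

```java
// CBR MP3 arithmetic: frame duration and estimated byte offset for a seek time.
public class Mp3Seek {
    static final int SAMPLES_PER_FRAME = 1152; // MPEG-1 Layer III

    // Duration of one frame in milliseconds.
    static double frameMs(int sampleRate) {
        return SAMPLES_PER_FRAME * 1000.0 / sampleRate;
    }

    // Frame size in bytes for CBR (ignoring the optional 1-byte padding slot).
    static int frameBytes(int bitrateBps, int sampleRate) {
        return 144 * bitrateBps / sampleRate;
    }

    // Estimated byte offset of the frame containing startSeconds (CBR only!).
    static long byteOffset(double startSeconds, int bitrateBps, int sampleRate) {
        long frameIndex = (long) (startSeconds * 1000.0 / frameMs(sampleRate));
        return frameIndex * frameBytes(bitrateBps, sampleRate);
    }

    public static void main(String[] args) {
        System.out.println(frameMs(44100));                   // ~26.12 ms per frame
        System.out.println(frameBytes(192_000, 44100));       // 626 bytes per frame
        System.out.println(byteOffset(15.0, 192_000, 44100)); // ~15 s worth of CBR bytes
    }
}
```

For a VBR file, frameBytes differs per frame, so the only exact mapping from time to byte offset is walking the frame headers from the start (or using a Xing/VBRI seek table when one is present), which is what the answer means by needing the whole file.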

Android - Generate a tone at specific decibels

I need to generate a tone starting at -10 decibels and increasing by 10 dB at each step, up to 100. Each step lasts 3 seconds.
The tone-generation part is done and works nicely; the remaining part is how to generate the tone at a specific decibel level.
While searching, I came across this thread: how to generate pure tones with different decibels in java?
They use the amplitude to calculate the dB. My question is: can I use the sound level instead of the amplitude?

android audio - calculating the distance between two devices

I'm having a really hard time calculating the distance between two Android phones using sound.
- The main idea: the two phones are synced to the same clock. Phone A sends a message to phone B to let it know it will play a sound soon; note that phone A saves this time.
- Phone B then sends "OK, you can go ahead" to phone A and starts recording for the next second or so.
- Phone A receives the "OK" and starts playing a 1000 Hz tone.
- Phone B detects that frequency and sends its current time to phone A.
Now we have all the information needed to calculate the distance. The problem is that in theory this is all good, but when I implement it, lots of random time gets added into the equation.
The main problem is that I can't pin down the ABSOLUTE time at which phone B picked up the target frequency.
I tried recording not one whole 1000 ms stretch but lots of "mini" chunks of 12-24 ms, but the time the phone spends in the recorder_.startRecording()/recorder_.read()/recorder_.stop() calls is too long, and I miss the frequency by many milliseconds (each millisecond corresponds to roughly 34 cm, so I can't afford much error).
Can anyone tell me what I'm doing wrong, or point me to a better way of doing this?
The main issue is that the recording device can't pin down the actual time at which it recorded the wanted frequency.
Thanks in advance,
Ofer.
Please have a look at the new audio features introduced in API 19.
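For reference, the arithmetic the protocol is trying to perform looks like this (343 m/s for the speed of sound is my assumption for room temperature; the unknown recorder latency is exactly the random term the question complains about):

```java
public class SoundRanging {
    static final double SPEED_OF_SOUND_M_PER_S = 343.0; // dry air, ~20 degrees C

    // One-way distance, given send and detect timestamps on a shared clock (ms).
    static double distanceMeters(long sentAtMs, long detectedAtMs) {
        double flightSeconds = (detectedAtMs - sentAtMs) / 1000.0;
        return flightSeconds * SPEED_OF_SOUND_M_PER_S;
    }

    public static void main(String[] args) {
        // 10 ms of flight time corresponds to ~3.43 m; a 1 ms timing error is ~34 cm,
        // which is why the recorder's unknown start/read latency ruins the estimate.
        System.out.println(distanceMeters(0, 10));
    }
}
```

This is why the API 19 pointer matters: AudioTimestamp-style facilities let you relate a buffer position to a system clock time, instead of inferring the detection time from when startRecording()/read() happened to return.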

Android display refresh rate

I'm developing an Android app which acts like a movie clapperboard/clapboard/slate. Is there any way in which I can set the display's refresh rate?
It is very important because when you edit the movie it's necessary to "land" on specific frames. The point is that if the timer is set to 25 frames per second, I need the display to update exactly 25 times per second, when the timer changes its value.
The problem on physical devices is that, for example, my Samsung Spica GT-I5700 reports a refresh rate of 62.016 Hz, which is totally inappropriate for a 25 fps timecode: when editing you see Frame1-Frame1-Frame2-Frame2 etc., or in-between combinations, when you should see exactly Frame1-Frame2 etc.
The point is that I would need the refresh rate to be in sync with the timecode. If the user sets 25 fps, then the display should refresh exactly 25 times per second.
Any ideas, please? Thank you!
Unfortunately, there is no way to set an arbitrary refresh rate on a given device, Android or not. Video circuitry is limited to the fidelity of its timers and, like all discrete systems, cannot operate with continuous precision. However, this shouldn't be too much of a problem.

If you're performing a dirty render (on demand), simply base your frame build around a 25 Hz system timer, and the video hardware will render it during the next raster pass - a 60 Hz raster pass will almost certainly occur within your 25 Hz interval. You'll also have to consider frame-dropping when the system is too busy to honour a 25 Hz interval.

If you're using fixed-rate rendering, simply use the elapsed time between renders to determine the appropriate frame for a given frame rate.
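The fixed-rate suggestion at the end boils down to deriving the frame to display from elapsed time, rather than trying to drive the panel at exactly 25 Hz. A minimal sketch of that mapping (the names are mine):

```java
public class Timecode {
    // Frame number to display for a given elapsed time at a given frame rate.
    static long frameIndex(long elapsedMs, int fps) {
        // Integer floor division: at 25 fps the frame advances every 40 ms.
        return elapsedMs * fps / 1000;
    }

    public static void main(String[] args) {
        // At 25 fps, frames 0, 1, 2 begin at 0 ms, 40 ms, 80 ms.
        System.out.println(frameIndex(39, 25));   // still frame 0
        System.out.println(frameIndex(40, 25));   // frame 1
        System.out.println(frameIndex(1000, 25)); // frame 25 after one second
    }
}
```

On a ~62 Hz panel each timecode frame is simply shown for two or three raster passes; duplicates on screen are unavoidable, but every frame still begins at the correct 40 ms boundary, which is what matters for syncing against a 25 fps timecode.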

how to set playback speed for android generated tones

I am using the AudioTrack class and have generated my own tones in my Android application. However, I want to be able to control the playback speed, and I can't figure out how.
I see the setLoopPoints method, but it doesn't seem to do what I want. (If anyone has used it and can explain that method to me, that would be great; the API documentation doesn't help me much.)
What I want to do:
As a point (here, a touch on the screen) gets closer to a target on the screen, I want to increase the speed of the tones I'm generating. For example, far away the tone might play once per second, but very close to the target, five times per second. I am struggling to find the best way to do this with Android sounds (generated tones, or even .wav files saved in res/raw).
Any help would be much appreciated!
Shani
You want to use the setPlaybackRate method for this:
http://developer.android.com/reference/android/media/AudioTrack.html
in conjunction with setLoopPoints. However, I believe there is probably a limit to how much you can speed up the file's "natural" playback rate; the limit is probably 48 kHz (I'm not sure, though, and it may be device-dependent).
So, if you have a file that was recorded at, say, 8000 Hz, to get the effect you want you would set the loop count to 4 (so that it plays 5 times in a row) and set the playback rate to 40,000 (5 * 8000).
Since there is (probably) an upper limit to playback rate, your best approach might be to instead record the original sound at a high frequency, and slow down the playback as necessary to achieve the effect you want.
Update: setLoopPoints lets you specify two arbitrary locations within the file, such that when playback reaches the end loop point, the audio engine wraps back around to the start loop point. To loop the entire file, set the start loop point to 0 and the end loop point to the last frame in the file. (The size of each frame depends on the file's format: a stereo file using 2 bytes per sample has a frame size of 4 bytes, so the last frame index is just the size of the audio data in bytes divided by 4.)
To get 5 consecutive plays of your file, you would set the loop count to 4 (loopcount of 0 means the file plays once; -1 means it will loop forever).
Update 2: I just read the docs some more - the upper limit for setPlaybackRate is documented as twice the rate returned by getNativeOutputSampleRate, which for most devices is probably 44,100 or 48,000 Hz. This means a standard CD-quality WAV file can only be played back at up to about twice its normal speed; a 22,050 Hz file could be played back at up to 4 times its normal speed, etc.
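Putting the answer's numbers together: the cap of 2 × getNativeOutputSampleRate comes from the docs as quoted above, and the 48 kHz native rate below is an assumed example. A sketch of choosing the rate to pass to setPlaybackRate:

```java
public class PlaybackSpeed {
    // Playback rate in Hz for a desired speed-up, clamped to the documented cap
    // of twice the native output sample rate.
    static int playbackRateHz(int fileSampleRate, double speed, int nativeOutputRate) {
        int desired = (int) Math.round(fileSampleRate * speed);
        return Math.min(desired, 2 * nativeOutputRate);
    }

    public static void main(String[] args) {
        // An 8000 Hz file at 5x speed needs 40000 Hz, well within a 96000 Hz cap.
        System.out.println(playbackRateHz(8000, 5.0, 48000));
        // A 44100 Hz (CD-quality) file at 5x would need 220500 Hz; it gets clamped.
        System.out.println(playbackRateHz(44100, 5.0, 48000));
    }
}
```

This is why the answer suggests recording the source at a high sample rate and slowing it down: slowing has no hard cap, whereas speeding up runs into the 2× native-rate clamp.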
