FFmpeg: How to create dynamic volume changes in an audio file? - android

I am working on a project in which I need to change the volume of an audio file dynamically.
Let's say I have an audio file named xyz.mp3 (a 20-second audio file).
I need to set the volume in it like this:
Time Range (in seconds) || Volume Percentage (in %)
------------------------||------------------------------------
 0 - 4                  || 100
 4 - 8                  || change from 100 to 20 (dynamically)
 8 - 12                 || 20
12 - 16                 || change from 20 to 100 (dynamically)
16 - 20                 || 100
Now, I know that to change the volume for a particular time range in the audio, I can use the following command:
ffmpeg -i in.mp3 -af volume=20:enable='between(t,8,12)' out.mp3
but when I use the volume filter, it does not change the volume dynamically. It just jumps straight from 100 to 20 instead of fading gradually.
And when I use the afade filter like:
ffmpeg -i in.mp3 -af afade=t=in:ss=4:d=8,afade=t=out:st=12:d=16 out.mp3
or
ffmpeg -i in.mp3 -af afade=enable='between(t,4,8)':t=in:ss=4:d=4,afade=enable='between(t,12,16)':t=out:st=12:d=4 out.mp3
it looks like afade does not work multiple times, even though I am using FFmpeg version 3.0.1.
Since afade only works a single time, I also tried splitting my audio into 4-second parts, adding fade effects to each part, and then combining them again, but a gap of a few milliseconds appears between the clips. Does anyone know a better way to do this? Please help me...
Update 1:
Here is the code I used:
"volume='" +
"between(t,0,8)+(1-0.8*(t-8)/4)*" + // full
"between(t,8.01,11.99)+0.1*" + // change from HIGH -> LOW
"between(t,12,16)+(0.1+0.8*(t-16)/4)*" + // low
"between(t,16.01,19.99)+1*" + // change from LOW -> HIGH -
"between(t,20,24)+(1-0.8*(t-24)/4)*" + // full
"between(t,24.01,27.99)+0.1*"+ // change from HIGH -> LOW -
"between(t,28,32)+(0.1+0.8*(t-32)/4)*" + // low
"between(t,32.01,35.99)+1*" + // change from LOW -> HIGH -
"between(t,36,40)+(1-0.8*(t-40)/4)*" + // full
"between(t,40.01,43.99)+0.1*"+ // change from HIGH -> LOW -
"between(t,44,48)+(0.1+0.8*(t-48)/4)*" + // low
"between(t,48.01,51.99)+" + // change from LOW -> HIGH -
"between(t,52,56)" + // high
"':eval=frame";
With this code, I get a small gap (a few milliseconds) at the points where the expression switches from one volume segment to the next.
Update 2
OK, I got it. I just needed to change the time values, e.g. 19.99 to 19.9999 and 16.01 to 16.0001, and that solved the problem. Thank you, Gyaan sir.

Use
volume='between(t,0,4)+(1-0.8*(t-4)/4)*between(t,4.01,7.99)+0.2*between(t,8,12)+(0.2+0.8*(t-12)/4)*between(t,12.01,15.99)+between(t,16,20)':eval=frame
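Each between(t,a,b) term evaluates to 1 inside its time range and 0 outside it, so exactly one segment of the sum is active at any moment; the (1-0.8*(t-4)/4) and (0.2+0.8*(t-12)/4) factors are the linear fade ramps, and the 4.01/7.99-style endpoints keep adjacent terms from both being active at a boundary frame. Applied to the file names from the question, the full command would look something like:
ffmpeg -i in.mp3 -af "volume='between(t,0,4)+(1-0.8*(t-4)/4)*between(t,4.01,7.99)+0.2*between(t,8,12)+(0.2+0.8*(t-12)/4)*between(t,12.01,15.99)+between(t,16,20)':eval=frame" out.mp3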

Related

Android - Get max safe stream volume

I have a use case to change the stream volume programmatically, but on newer Android versions, raising the volume above a certain limit (60% as per my observations, which corresponds to step 9 on most phones) results in a warning dialog:
Listening at high volume for a long time may damage your hearing. Tap OK to allow the volume to be increased above safe levels.
[Cancel] [OK]
I couldn't find any documentation about this in the Android developer portal; all I could find were some random articles citing the European regulations, like this one:
According to regulations set by the European Committee for Electrotechnical Standardisation (CENELEC), all electronic devices capable of media playback sold after February 2013 must have a default output volume level of a maximum 85 dB. Users can choose to override the warning to increase the volume to a maximum of 100 dB, but in doing so the warning must re-appear after 20 hours of music playback.
So I need to figure out reliably what that number is, so I never make a volume change that would show this dialog, but I also don't want to just use step 9 as the max volume and then find out that it's not the right value for another phone. Does the Android API expose the max safe stream volume anywhere? If not, do they at least document the step number that corresponds to it for different phones?
Thanks!
There's a resource which holds the safe volume step: config_safe_media_volume_index
// .../overlay/frameworks/base/core/res/res/values/config.xml
<integer name="config_safe_media_volume_index">7</integer>
It is defined HERE
And it is used HERE
You can get it dynamically via:
int safeVolumeStep;
int safeVolumeStepResourceId =
        getResources().getIdentifier("config_safe_media_volume_index", "integer", "android");

if (safeVolumeStepResourceId != 0) {
    safeVolumeStep = getResources().getInteger(safeVolumeStepResourceId);
} else {
    Log.w("TESTS", "Resource config_safe_media_volume_index not found. Setting a hardcoded value");
    // We probably won't get here, because config_safe_media_volume_index is defined in AOSP;
    // it's not a vendor-specific resource.
    // Just in case, fall back to 60% of the max volume as the safe step.
    AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    int maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
    safeVolumeStep = (int) (maxVolume * 0.6f);
}

Log.d("TESTS", "Safe Volume Step: " + safeVolumeStep +
        " Safe volume step resourceID: " + Integer.toHexString(safeVolumeStepResourceId));
I tested this on a Galaxy S10 and I'm getting 9.
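To avoid ever triggering the dialog, you can then clamp any programmatic volume change against that step. A minimal sketch inside an Activity, reusing the safeVolumeStep computed above (desiredVolume is a hypothetical target):
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
int desiredVolume = 12; // hypothetical step your app wants to set
// Never raise the volume above the safe step, so the warning dialog never appears.
int clampedVolume = Math.min(desiredVolume, safeVolumeStep);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, clampedVolume, 0);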

Explanation of how this MIDI lib for Android works

I'm using this library by LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always made studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);

Tempo tempo = new Tempo();
tempo.setBpm(228);

tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);

// Track 1 will have some notes in it
final int NOTE_COUNT = 80;

for (int i = 0; i < NOTE_COUNT; i++)
{
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;

    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration. Is it in milliseconds? That doesn't make sense if I keep duration = 120 and set my BPM to 60, for example. Nor do I understand the velocity.
MY GOAL
I want to insert notes of pitch X with duration Y.
Could anyone give me a clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
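So for your goal (notes of pitch X with duration Y), you pick the duration in ticks relative to the file's resolution. A minimal sketch reusing the noteTrack and insertNote signature from your snippet, assuming the default resolution of 480 ticks per quarter note:
final int PPQ = 480;              // ticks per quarter note (the default resolution)
int channel = 0;
int velocity = 100;               // how hard the key is struck (1-127)
int pitch = 60;                   // middle C
long tick = 0;                    // start of the track

// A quarter note, an eighth note and a sixteenth note, back to back:
long[] durations = { PPQ, PPQ / 2, PPQ / 4 };   // 480, 240 and 120 ticks

for (long duration : durations) {
    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
    tick += duration;             // the next note starts when this one ends
}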
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
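As a worked example of that tempo arithmetic, here is a small sketch converting a BPM value into the three data bytes of the Set Tempo meta event (FF 51 03 tt tt tt):
int bpm = 100;
int mpqn = 60_000_000 / bpm;      // microseconds per quarter note: 600000
byte[] tempoBytes = {
    (byte) ((mpqn >> 16) & 0xFF), // 0x09
    (byte) ((mpqn >> 8) & 0xFF),  // 0x27
    (byte) (mpqn & 0xFF)          // 0xC0
};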

Simulate touch, hold, move in android debug bridge

Rather than using a drag or swipe command in the Android Debug Bridge or AndroidViewClient like this:
device.drag((600,800),(600,1200), 1000)
device.shell('input touchscreen swipe 600 800 600 1200 1000')
Is there some way to simulate something like the following?
1. press down on some coordinates (eventType=DOWN)
2. sleep 2 seconds (i.e. keep holding there)
3. move to some other coordinates
4. sleep 2 seconds (i.e. keep holding there)
5. release (eventType=UP)
Basically, you touch, hold for a few seconds, drag while still holding, hold at the new position for a few seconds, then release.
If you take a look at AdbClient.longPress() you will see how the long press event is sent for some keys:
if name in KEY_MAP:
    self.shell('sendevent %s 1 %d 1' % (dev, KEY_MAP[name]))
    self.shell('sendevent %s 0 0 0' % dev)
    time.sleep(duration)
    self.shell('sendevent %s 1 %d 0' % (dev, KEY_MAP[name]))
    self.shell('sendevent %s 0 0 0' % dev)
You can do something similar for your case.
To get an idea of what you should write, perform the same sequence of events you described and analyze it using getevent.
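For example, here is a rough sketch of the down-hold-move-hold-up sequence using the standard Linux multitouch event codes (EV_ABS = 3, ABS_MT_POSITION_X = 53, ABS_MT_POSITION_Y = 54, ABS_MT_TRACKING_ID = 57). The device path /dev/input/event2 is an assumption and the exact codes differ between devices, so capture your own sequence with getevent first:
adb shell sendevent /dev/input/event2 3 57 0          # tracking id 0: finger down
adb shell sendevent /dev/input/event2 3 53 600        # x = 600
adb shell sendevent /dev/input/event2 3 54 800        # y = 800
adb shell sendevent /dev/input/event2 0 0 0           # SYN_REPORT: commit the touch
sleep 2                                               # keep holding there
adb shell sendevent /dev/input/event2 3 54 1200       # move to y = 1200, finger still down
adb shell sendevent /dev/input/event2 0 0 0           # SYN_REPORT: commit the move
sleep 2                                               # keep holding at the new position
adb shell sendevent /dev/input/event2 3 57 4294967295 # tracking id -1 (as unsigned): finger up
adb shell sendevent /dev/input/event2 0 0 0           # SYN_REPORT: commit the release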

Corona SDK: black screen on Composer transition

-- hide device status bar
display.setStatusBar( display.HiddenStatusBar )

-- require controller module
local composer = require( "composer" )

-- load first scene
local scrOptions =
{
    effect = "fromRight",
    time = 2000
}
composer.gotoScene( "game", scrOptions )

-- Display objects added below will not respond to storyboard transitions
local MemUsageDisplay = display.newText( "0", 400, 25, native.systemFont, 20 )
MemUsageDisplay:setFillColor( gray )

local monitorMem = function()
    local textMem = system.getInfo( "textureMemoryUsed" ) / 1000000
    collectgarbage()
    local date = os.date( "*t" )
    MemUsageDisplay.text = date.hour .. ":" .. date.min .. ":" .. date.sec .. " / Lua: " .. math.round(collectgarbage("count")) .. "K " .. "Tex: " .. math.round(textMem*10) * 0.1 .. "MB"
end

timer.performWithDelay( 500, monitorMem, 0 )
In the simulator everything is fine.
On the device, however, the splash screen flashes for less than a second, then the screen goes black for about 5 seconds, and then the game starts.
There is no transition.
I have to add that my game.lua contains a lot of code, but if I understand the docs correctly, all of that should be processed while the splash screen is visible? I also ran the app in debug mode (logcat...) and put some markers in the code to see how fast it executes. The whole game.lua is processed in less than a second.
Is this normal behavior?
What parameters does composer.gotoScene( "game", scrOptions ) require? You need to check that in the composer library.
Have you tried decreasing the time and changing the effect in your scrOptions table?
Just try this and let me know what you get, so I can investigate further.
It sounds to me like you are not creating your scene in the scene:create() event function but in the scene:show() event function. Your transition is set for 2 seconds, and if you are not creating anything in scene:create() then there won't be anything to transition, but the transition will still take place, ergo the screen going black for a couple of seconds.
Rob
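A minimal sketch of that split, following the standard composer scene template (background.png is just a placeholder asset):
local composer = require( "composer" )
local scene = composer.newScene()

function scene:create( event )
    local sceneGroup = self.view
    -- Build all display objects here: create() runs before the transition
    -- starts, so there is something on screen to transition in.
    local background = display.newImageRect( sceneGroup, "background.png", 320, 480 )
    background.x, background.y = display.contentCenterX, display.contentCenterY
end

function scene:show( event )
    if event.phase == "did" then
        -- Start timers, audio, animations etc. here, once the scene is fully on screen.
    end
end

scene:addEventListener( "create", scene )
scene:addEventListener( "show", scene )

return scene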
Add this code:
local scene = composer.newScene()
If everything works fine in the simulator, check whether you are using the proper file names (i.e. image names and scene names are correct), since the simulator treats Image.png and image.png as the same file, but on a device it will cause an error.

Audio that overlaps with acoustic echo is also cancelled when using WebRtc_Aecm on Android

As an example, here is the PCM captured by the microphone:
1, {2,3}, {4,5}, {6,7}, 8, 9
{A,B} means A is the audio data I really want to capture and B is the echo arriving at the same time; both are captured by the mic simultaneously.
The issue I encounter: audio 2, 4 and 6 are also cancelled while cancelling 3, 5 and 7.
This is my code:
WebRtcAecm_Create( &aecm );
WebRtcAecm_Init( aecm, 8000 );
while ( aecProcessing )
{
    WebRtcAecm_BufferFarend( aecm, speakerBuffer, 160 );
    WebRtcAecm_Process( aecm, micBuffer, NULL, aecBuffer, 160, 200 );
}
If you run a loopback test, the normal voice can be partially cancelled.
Don't use a constant delay like 200 ms, because this delay changes all the time; you should re-estimate it every second or more often.
EDIT
Please make clear what is echo and what is normal voice.
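To make the delay advice concrete, here is a rough sketch of processing one 160-sample frame with a per-frame delay estimate instead of a hard-coded 200 ms. The render/capture timestamps are assumptions; they have to come from your audio I/O layer:
#include <stdint.h>
#include "echo_control_mobile.h"  /* WebRtcAecm_* declarations; path varies by WebRTC version */

/* render_time_ms: when the far-end frame was handed to the sound card.
   capture_time_ms: when the near-end (mic) frame was captured. */
void process_frame( void *aecm,
                    const int16_t *speakerBuffer, const int16_t *micBuffer,
                    int16_t *aecBuffer,
                    long render_time_ms, long capture_time_ms )
{
    /* Queue the far-end (speaker) frame first. */
    WebRtcAecm_BufferFarend( aecm, speakerBuffer, 160 );

    /* Re-estimate the echo path delay for every frame instead of passing a
       constant: roughly, how long ago the frame now echoing into the mic
       was sent to the speaker. */
    int16_t delay_ms = (int16_t)( capture_time_ms - render_time_ms );

    WebRtcAecm_Process( aecm, micBuffer, NULL, aecBuffer, 160, delay_ms );
}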
