So, I'm trying to cut a 1-minute-29-second video into clips of 30 seconds each.
The expected output is 30 sec, 30 sec, 29 sec.
The actual result is 35 sec, 29 sec, 23 sec.
This is my code:
ArrayList<String> commandList = new ArrayList<>();
commandList.add("-ss");            // seek to the start of the input
commandList.add("00:00:00");
commandList.add("-i");
commandList.add(videoPath);
commandList.add("-c");             // stream copy: no re-encoding
commandList.add("copy");
commandList.add("-f");             // use the segment muxer
commandList.add("segment");
commandList.add("-segment_time");  // target length of each segment
commandList.add("00:00:30");
commandList.add(TEST.getAbsolutePath());
String[] command = commandList.toArray(new String[commandList.size()]);
execFFmpegBinary(command);
Any idea what I'm doing wrong? I read somewhere that if a keyframe doesn't exist at a particular position, FFmpeg seeks to the position of the nearest keyframe.
Any solution or guidance would help me. Thank you in advance.
FFmpeg will start segments on keyframes. When using -codec copy there is no transcode, so it must cut on the existing keyframes. There is no keyframe at exactly 30 s, so it cuts at the next one.
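If exact 30-second cuts matter more than speed, one option (a sketch of my own, not part of the original answer) is to drop -c copy and re-encode, forcing a keyframe at every segment boundary so the segment muxer can cut precisely; the output name is a pattern because the segment muxer numbers its files:

ffmpeg -i input.mp4 -c:v libx264 -force_key_frames "expr:gte(t,n_forced*30)" -f segment -segment_time 30 -reset_timestamps 1 out%03d.mp4

Re-encoding is slower than stream copy, but the cuts land where you ask for them.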
I've developed an Android app that allows the user to create a boomerang-like mp4 video. This video consists of 10 still images played back and forth quite fast. I know that such a video (the boomerang effect) can easily be looped from a single video file while playing it, but I really need to create an mp4 that already contains the prepared boomerang video. The output video can be downloaded and played by the user in any external player (over which I obviously don't have any control).
For that purpose I currently create a video from images in a loop. The loop starts at the 1st picture and goes to the 10th with a 0.25 s delay between frames, then goes back from the 10th to the 1st, again with the delay. There are 5 of those loops, which essentially means creating a single video from 5 * 10 * 2 = 100 images. I know it's kind of ridiculous, so the time it takes to prepare this video is ridiculous as well (around 1:40 min).
What solution could you recommend, assuming that the output video really has to consist of 5 back-and-forth loops? I've thought about creating a single-loop video (20 pictures) and then producing the final output by concatenating it 5 times. But would that be any good? I'm trying to find a way that is efficient yet understandable for a beginner Android programmer.
You can use FFmpeg to create a boomerang-like video. Below is a simple example:
ffmpeg -i input_loop.mp4 -filter_complex "[0]reverse[r];[0][r]concat,loop=5:250,setpts=N/55/TB" output_looped_video.mp4
The input is a 1.5-second video file named input_loop.mp4.
In loop=5:250, 5 is the number of loops and 250 is the frame rate times double the length of the clip, i.e. the number of frames to loop. The setpts filter is applied to avoid frame drops; its value should be replaced with the frame rate of the clip.
In setpts=N/<VALUE>/TB you can alter the value according to your needs:
increase the value to speed up the boomerang effect;
decrease the value to slow it down.
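On the asker's concat idea, a possible alternative (my own sketch, not part of this answer) is to render the single back-and-forth pass once and then repeat it without re-encoding, which avoids encoding 100 images:

ffmpeg -i input_loop.mp4 -filter_complex "[0]reverse[r];[0][r]concat" single_pass.mp4
ffmpeg -stream_loop 4 -i single_pass.mp4 -c copy output_looped_video.mp4

-stream_loop 4 plays the input 5 times in total, and with -c copy the repetition is a remux rather than a re-encode, so it should be fast.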
I was looking for a way to create a boomerang video and found a pretty cool example of how to do it on GitHub.
You create the video by using the org.bytedeco.javacpp-presets FFmpeg bindings to clone the frames.
https://github.com/trantrungduc/boomerang-android
This is the place in code in which you can customize the video loop:
for (int k = 0; k < 3; k++) {                    // number of back-and-forth passes
    for (Frame frame1 : loop) {                  // forward pass
        frecorder.record(frame1);
    }
    for (int i = loop.size() - 1; i >= 0; i--) { // reverse pass
        frecorder.record(loop.get(i));
    }
}
To be specific, I'll present my question in an example:
Say at t = 0 ms a frame was completed and became visible to the user on the screen. From that point on, I began the work to draw the next frame. However, the work took so long that I missed the frame's due time of t = 16 ms, and the next frame was finally ready at t = 23 ms. When would it actually become visible to the user: at t = 23 ms, or at t = 32 ms (at the next drawing "heartbeat", if any)?
And also, where in the Android source code can I find the answer myself?
You actually get 16.666 ms per frame (one vsync period on a 60 Hz display), so if you aren't ready to draw at that point, the next attempt is made at the following vsync, i.e. at roughly t = 33 ms in your example. Colt from Google has a good video explaining this: https://www.youtube.com/watch?v=HXQhu6qfTVU
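To observe this yourself (a minimal sketch of my own, not from the answer above), you can log the vsync-aligned timestamps that Choreographer hands to frame callbacks; the deltas sit near 16.6 ms and jump to a multiple of it whenever a frame misses its deadline. Choreographer is also a good place in the Android source to start reading:

import android.util.Log;
import android.view.Choreographer;

Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
    private long lastFrameTimeNanos = 0;

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0) {
            // ~16.6 on a 60 Hz display; ~33.3 when a frame slips one vsync
            Log.d("vsync", "delta ms: " + (frameTimeNanos - lastFrameTimeNanos) / 1e6);
        }
        lastFrameTimeNanos = frameTimeNanos;
        Choreographer.getInstance().postFrameCallback(this); // re-arm for the next frame
    }
});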
I'm having a hard time getting the right duration and the exact frame rate in JCodec.
My situation: I have an app that shows an array of bitmaps, where the user can choose the frame rate, e.g. 1 fps, 5 fps, 32 fps. All I did was 1000 / fps, so 1 fps shows 1 bitmap every second, 2 fps shows 2 bitmaps per second, and so on; in short, the user supplies the frame rate. I found this, but I can't work out the right formula from it.
And another thing, about the duration: if I want 1 fps and I have 16 bitmaps, JCodec should produce a 16-second video.
How can I achieve that, given that the bitmaps are dynamic? From what I understand, JCodec relies on a hard-coded duration, not on the number of frames it has encoded and converted to MP4.
Thanks in advance.
Had a hard time finding this myself. Took some searching through the API.
FileChooser fc = new FileChooser();
File file = fc.showOpenDialog(null);

// open the MP4 and pull out the video track's metadata
SeekableByteChannel bc = NIOUtils.readableFileChannel(file);
MP4Demuxer dm = new MP4Demuxer(bc);
DemuxerTrack vt = dm.getVideoTrack();

// total frames divided by total duration (in seconds) gives the frame rate
double frameRate = vt.getMeta().getTotalFrames() / vt.getMeta().getTotalDuration();
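For the encoding side of the question (getting a 16-second video from 16 bitmaps at 1 fps), here is a minimal sketch using JCodec's AndroidSequenceEncoder, assuming the jcodec-android artifact is on the classpath and that outputFile, bitmaps, and fps are your own variables: the encoder is created with the user-supplied fps, and the duration then falls out of the frame count.

import org.jcodec.api.android.AndroidSequenceEncoder;

// fps comes from the user; 16 bitmaps at 1 fps => a 16-second video
AndroidSequenceEncoder encoder = AndroidSequenceEncoder.createSequenceEncoder(outputFile, fps);
for (Bitmap bitmap : bitmaps) {
    encoder.encodeImage(bitmap); // each bitmap becomes one frame
}
encoder.finish(); // finalizes the MP4 so the duration matches the frame count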
I'm using the library by @LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always made studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);
Tempo tempo = new Tempo();
tempo.setBpm(228);
tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);
// Track 1 will have some notes in it
final int NOTE_COUNT = 80;
for(int i = 0; i < NOTE_COUNT; i++)
{
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;

    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration: is it in milliseconds? That wouldn't make sense if I keep duration = 120 and set my BPM to 60, for example. Nor do I understand the velocity.
MY SCOPE
I want to insert notes of X pitch with Y duration.
Could anyone give me a clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
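If you specifically want "X pitch with Y duration" in wall-clock terms, here is a minimal sketch (the conversion helper is my own, built from the tick math above, assuming the library's default of 480 ticks per quarter note):

// one quarter note = ppq ticks = 60000 / bpm milliseconds,
// so ticks = ms * bpm * ppq / 60000
static long msToTicks(long durationMs, int bpm, int ppq) {
    return durationMs * bpm * ppq / 60000L;
}

// insert middle C (pitch 60) at the start of the track, lasting 500 ms at 228 BPM
noteTrack.insertNote(0, 60, 100, 0, msToTicks(500, 228, 480));

Because the stored value is ticks, the note keeps its musical length if the tempo changes; only its wall-clock length changes.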
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 82 D0 (hexadecimal).
I'm writing a simple NDK OpenSL ES audio app that records the user's touches on a virtual piano keyboard and then plays them back forever over a set loop. After much experimenting and reading, I've settled on using a separate POSIX thread loop to achieve this. As you can see in the code, it subtracts the processing time from the sleep time to make each loop's interval as close as possible to the desired sleep interval (in this case 5,000,000 nanoseconds).
void init_timing_loop() {
    pthread_t fade_in;
    pthread_create(&fade_in, NULL, timing_loop, (void*)NULL);
}

void* timing_loop(void* args) {
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &timing.start_time_s);

        tic_counter();    // simple logic gates that cycle the current tic
        play_all_parts(); // for-loops through all parts and plays any notes
                          // (from an OpenSL buffer) that fall on the current tic

        clock_gettime(CLOCK_MONOTONIC, &timing.finish_time_s);
        timing.diff_time_s.tv_nsec =
            (5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));
        nanosleep(&timing.diff_time_s, NULL);
    }
    return NULL;
}
The problem is that even with this the results are better but still quite inconsistent: sometimes notes are delayed by perhaps as much as 50 ms at a time, which makes for very wonky playback.
Is there a better way of approaching this? To debug I ran the following code:
gettimeofday(&timing.curr_time, &timing.tzp);
__android_log_print(ANDROID_LOG_DEBUG, "timing_loop", "gettimeofday: %ld %ld",
                    timing.curr_time.tv_sec, timing.curr_time.tv_usec);
This gives a fairly consistent readout that doesn't reflect the playback inaccuracies at all. Are there other forces at work in Android preventing accurate timing? Or is OpenSL ES a potential issue? All the buffer data is loaded into memory; could there be bottlenecks there?
Happy to post more OpenSL code if needed, but at this stage I'm trying to figure out whether this thread loop is accurate or whether there's a better way to do it.
You should take the seconds field into account when using clock_gettime as well: timing.start_time_s.tv_nsec may be greater than timing.finish_time_s.tv_nsec, because tv_nsec starts again from zero whenever tv_sec increases. So instead of

timing.diff_time_s.tv_nsec =
    (5000000 - (timing.finish_time_s.tv_nsec - timing.start_time_s.tv_nsec));

try something like:

#define NS_IN_SEC 1000000000LL

long long elapsed_ns =
    (timing.finish_time_s.tv_sec * NS_IN_SEC + timing.finish_time_s.tv_nsec) -
    (timing.start_time_s.tv_sec  * NS_IN_SEC + timing.start_time_s.tv_nsec);

timing.diff_time_s.tv_nsec = 5000000 - elapsed_ns;
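Beyond that fix, a steadier approach (a sketch of my own, not from the answer above) is to sleep to an absolute deadline with clock_nanosleep, so scheduling jitter in one iteration doesn't accumulate into the next:

#include <time.h>

#define NS_IN_SEC 1000000000L
#define TICK_NS   5000000L

void* timing_loop(void* args) {
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    while (1) {
        tic_counter();
        play_all_parts();

        // advance the deadline by exactly one tick, carrying into seconds
        deadline.tv_nsec += TICK_NS;
        if (deadline.tv_nsec >= NS_IN_SEC) {
            deadline.tv_nsec -= NS_IN_SEC;
            deadline.tv_sec += 1;
        }
        // sleep until the absolute deadline; a late wake-up doesn't shift the grid
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
    }
    return NULL;
}

For truly glitch-free audio, though, the usual advice is to drive timing from the OpenSL buffer-queue callback rather than from a sleeping thread.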