I'm trying to convert images (JPEG) to a video (MP4), but looking through the ffmpeg docs I only found command lines like the following:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
Does anyone know how to do this with the C or C++ API? Thanks.
pseudo code:
Open output file;
Open output codec;
while (input files remain) {
    Open input file;
    Decode input frame;
    Give decoded frame a timestamp;
    Encode frame;
    Write frame to output file;
}
Flush encoder;
Close output file;
Do the steps in order. I'm sure you'll have questions as you implement this. When you do, open a new question on Stack Overflow with the code you have so far, the specific problem, and what you have already tried.
I have the following Dart code and I am trying to make reading the file buffered, just like Java's BufferedReader or C++'s ifstream. Is there such functionality? I cannot even find a buffer mentioned in file.dart or file_impl.dart. If I understood my debugging correctly, it seems that Dart is reading the whole file at once.
So could anybody help me make it buffered, or point me in the right direction to where the buffer is?
final file = File(join(documentsDirectory, "xxx.txt"));
final List<String> lines = await file.readAsLines(); // file.readAsLinesSync()
lines.forEach((line) {
  ....
});
Use file.openRead(). This will return a Stream of bytes. If you want to read it as characters, transform the stream using the appropriate decoder (probably utf8).
As the documentation says, you must read the stream to the end or cancel it.
I'm working on a rooted Android device. I'm trying to capture the screen and store the result in a Bitmap for later use.
String path = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS).getPath();
path += "/img.png";
Process sh = Runtime.getRuntime().exec("su", null, null);
OutputStream os = sh.getOutputStream();
os.write(("/system/bin/screencap -p " + path).getBytes("ASCII"));
os.flush();
os.close();
sh.waitFor();
final Bitmap x = BitmapFactory.decodeFile(path);
What I'm doing here is building a path for a new image and capturing the screen with the command /system/bin/screencap -p FILEPATH. Then I read the image stored in that file and decode it into the Bitmap.
The problem with my current code is that it's slow (not suitable for a real-time application). I'm now trying to make it faster: instead of saving the captured picture to a file and then reading it back, I want to read it directly from the result of Runtime.getRuntime().exec(...).
In the description of the screencap command, I found that I can use it without specifying an output file name; in that case the result is printed to stdout.
I tried several pieces of code to read the resulting byte array and use it directly, e.g.
final Bitmap x = BitmapFactory.decodeByteArray(resultArrayByte, 0, resultArrayByte.length);
but none of them worked for me.
How can I use sh's input/output streams to get the resulting byte array directly, without saving the output to a file and loading it again?
Take a look here; at this link you can find a library called ASL.
There are a lot of questions in this post, so I'm a bit confused :)
I hope this link is useful for your requirements.
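For the stdout approach described in the question, a minimal, untested sketch might look like this (it assumes, as the question says, that screencap -p with no output path prints the PNG to stdout, and that the su shell passes the binary data through unmodified):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public static Bitmap captureScreen() throws IOException, InterruptedException {
    Process sh = Runtime.getRuntime().exec("su", null, null);

    // No output path: screencap -p writes the PNG to stdout.
    OutputStream os = sh.getOutputStream();
    os.write("/system/bin/screencap -p\n".getBytes("ASCII"));
    os.write("exit\n".getBytes("ASCII"));
    os.flush();
    os.close();

    // Drain the shell's stdout into a byte array instead of a file.
    InputStream is = sh.getInputStream();
    ByteArrayOutputStream png = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    int n;
    while ((n = is.read(chunk)) != -1) {
        png.write(chunk, 0, n);
    }
    sh.waitFor();

    byte[] bytes = png.toByteArray();
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}

If decodeByteArray returns null, compare the first bytes of the array with the PNG signature (0x89 'P' 'N' 'G'); some shells mangle binary output (e.g. LF-to-CRLF translation), which would corrupt the stream.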
I want to cut or trim an audio song in Android programmatically. I have found the FFMPEG solution, but I do not understand the steps behind cutting an audio song; if there is any other way, please help me.
Most people give me this type of answer:
ffmpeg -t 30 -i inputfile.mp3 -acodec copy outputfile.mp3
What is this, and how do I use it in Android code to cut audio?
Please help me.
Thank you
Basically follow the steps described here: http://writingminds.github.io/ffmpeg-android-java/
1.) You have to include the ffmpeg library in your project!
Put this in your build.gradle file:
dependencies { compile 'com.writingminds:FFmpegAndroid:0.3.2' }
2.) Before using the lib, copy the binary file from assets to the device:
FFmpeg ffmpeg = FFmpeg.getInstance(context);
ffmpeg.loadBinary(new LoadBinaryResponseHandler() {...}
3.) When this is finished you can start executing commands and listening for the results, like this:
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {...}
where cmd is the array of arguments:
String[] command1 = new String[8];
command1[0] = "-t";
command1[1] = "30";
...
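Putting steps 2.) and 3.) together, here is a rough sketch of a 30-second cut that mirrors the ffmpeg command from the question (the paths, handler bodies, and exception names are assumptions based on the library's documentation; adjust them to your library version):

import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.LoadBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegNotSupportedException;

final FFmpeg ffmpeg = FFmpeg.getInstance(context); // context: your Activity/Application
try {
    ffmpeg.loadBinary(new LoadBinaryResponseHandler() {
        @Override
        public void onSuccess() {
            // Mirrors: ffmpeg -t 30 -i inputfile.mp3 -acodec copy outputfile.mp3
            String[] cmd = {"-t", "30", "-i", "/sdcard/inputfile.mp3",
                            "-acodec", "copy", "/sdcard/outputfile.mp3"};
            try {
                ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                    @Override
                    public void onSuccess(String message) {
                        // trimmed file was written successfully
                    }

                    @Override
                    public void onFailure(String message) {
                        // inspect ffmpeg's output in 'message'
                    }
                });
            } catch (FFmpegCommandAlreadyRunningException e) {
                // only one ffmpeg command can run at a time
            }
        }
    });
} catch (FFmpegNotSupportedException e) {
    // the device's CPU architecture is not supported by the bundled binary
}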
BUT: I actually found that the ffmpeg build for Android does not work as expected for cutting or trimming audio files! (I got errors for commands that work fine on the Linux command line...) I hope you will figure it out.
There's a simple hack (sketched below):
1.) figure out the bytes per second or millisecond: file.length() / duration
2.) get the start and end positions where you want to cut the audio.
3.) read the file as bytes and save the portion between startPos and endPos in a separate file.
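A minimal sketch of that hack (the helper method is hypothetical; it assumes a constant-bitrate file and ignores headers/ID3 tags, so the cut points are only approximate):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

// Copies the bytes corresponding to [startMs, endMs] into a separate file.
static void cutAudio(File src, File dst, long durationMs,
                     long startMs, long endMs) throws IOException {
    double bytesPerMs = (double) src.length() / durationMs; // step 1
    long startPos = (long) (startMs * bytesPerMs);          // step 2
    long endPos   = (long) (endMs * bytesPerMs);

    RandomAccessFile in = new RandomAccessFile(src, "r");   // step 3
    FileOutputStream out = new FileOutputStream(dst);
    try {
        in.seek(startPos);
        byte[] buf = new byte[8192];
        long remaining = endPos - startPos;
        int n;
        while (remaining > 0
                && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
            out.write(buf, 0, n);
            remaining -= n;
        }
    } finally {
        in.close();
        out.close();
    }
}

Because the payload is copied verbatim, the result only plays cleanly for formats whose frames are self-contained, such as CBR MP3; for anything else, prefer a real tool like ffmpeg.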
I have an Android application. I want to apply effects to video using a Photoshop .acv curves file and preview the result. I am having problems with the preview. Does anyone have a solution for applying .acv filters to video? Thanks.
I used this library, and the following is the command:
String[] complexCommand = {"ffmpeg", "-y", "-i", "file path of input video", "-strict", "experimental", "-vf", "curves=psfile=filter acv file path", "-b", "2097k", "-vcodec", "mpeg4", "-ab", "48000", "-ac", "2", "-ar", "22050", "file path of the output video"};
for example:
String[] complexCommand = {"ffmpeg", "-y", "-i", "/storage/emulated/0/vk2/in.mp4", "-strict", "experimental", "-vf", "curves=psfile=/storage/emulated/0/videokit/sepia.acv", "-b", "2097k", "-vcodec", "mpeg4", "-ab", "48000", "-ac", "2", "-ar", "22050", "/storage/emulated/0/videokit/out.mp4"};
The given link is a forum where you may ask your questions regarding ffmpeg.
I need to capture frames, one by one, from a video stored on the SD card of the Android device (in this case my emulator). I am using Android and OpenCV through the NDK. I pushed the file "SinglePerson.avi" to the sdcard manually through the file explorer of DDMS (Eclipse) and used the code below to read the file:
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial4_Sample4Mixed_VideoProcessing(JNIEnv*, jobject)
{
    LOGI("INSIDE VideoProcessing ");
    CvCapture* capture = cvCaptureFromAVI("/mnt/sdcard/SinglePerson.avi");
    IplImage* img = 0;
    if (!cvGrabFrame(capture)) {           // capture a frame
        LOGI("Inside the if");
        printf("Could not grab a frame\n\7");
        exit(0);
    }
    img = cvRetrieveFrame(capture);        // retrieve the captured frame
    cvReleaseCapture(&capture);
}
The problem is that cvGrabFrame(capture) always returns false.
Any suggestions on how to correctly open the video and grab frames?
Thanks in advance.
Some versions of OpenCV (in the opencv2 package) are built without video support. If that is your case, you have to enable "-D WITH_FFMPEG=ON" in the package's Makefile and recompile.
Look at the "Displaying AVI Video using OpenCV" tutorial:
"You may need to ensure that ffmpeg has been successfully installed in order to allow video encoding and video decoding in different formats. Not having the ffmpeg functionality may cause problems when trying to run this simple example and produce compilation errors."
Also check that the path passed to cvCaptureFromAVI is correct.
Hope this will help!
The behavior you are observing is probably due to cvCaptureFromAVI() failing. You need to code defensively and check the return value of the calls you make:
CvCapture* capture = cvCaptureFromAVI("/mnt/sdcard/SinglePerson.avi");
if (!capture)
{
    printf("!!! Failed to open video\n\7");
    exit(0);
}
This function usually fails for two reasons:
1.) it's unable to access the file (due to wrong filesystem permissions);
2.) codecs are missing from the system (or the video format is not supported by OpenCV).
If you are new to OpenCV, I suggest you test your OpenCV code on a desktop (PC) first.