Hi, can anyone tell me how to take a snapshot of a video being played in a VideoView? The snapshot gives a blank image. Is there any method to grab the current frame of the video? I want to pick the current frame. Any help is appreciated.
The most efficient solution is MediaMetadataRetriever.
First of all, call
setDataSource(FileDescriptor fd)
to set the video you want to work with. It's quite a tricky method and sometimes needs a roundabout setup. Here is an example:
public void setVideo(Uri videoPath) {
    videoUri = videoPath;
    File file = new File(videoPath.getPath());
    try {
        // Open the file read-only; we only need to read frames from it.
        ParcelFileDescriptor pfd =
                ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
        FileDescriptor fd = pfd.getFileDescriptor();
        metadataRetriever.setDataSource(fd);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}
Secondly, use this method:
getFrameAtTime(long timeUs)
to extract a frame. Keep in mind that the parameter is in microseconds, not milliseconds.
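For example, to grab the frame three seconds into the video (a small sketch; metadataRetriever is the retriever set up above):

long positionMs = 3000;
// getFrameAtTime expects microseconds, so convert from milliseconds first.
Bitmap frame = metadataRetriever.getFrameAtTime(positionMs * 1000);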
I have a job that rotates and trims video files.
I managed to trim the video, but I couldn't rotate it.
I use the following code snippet to rotate, but the resulting video is the same as the source video. Also, there is no error message.
videoPath = Environment.getExternalStorageDirectory().getAbsolutePath() + "/download/cvbenim/islenecek.mp4";
try {
    String rotatedPath = videoPath.replace(".mp4", "cvbenim_is_ilanı_rotated.mp4");
    Movie result = MovieCreator.build(videoPath);
    File file = new File(rotatedPath);
    if (file.exists()) {
        file.delete();
    }
    Container out = new DefaultMp4Builder().build(result);
    // Set the rotation matrix in the movie header box.
    MovieHeaderBox mvhd = Path.getPath(out, "moov/mvhd");
    mvhd.setMatrix(Matrix.ROTATE_90);
    FileOutputStream fos = new FileOutputStream(rotatedPath);
    out.writeContainer(fos.getChannel());
    fos.close();
    playVideoFromPath(rotatedPath);
} catch (Exception e) {
    e.printStackTrace();
}
I appreciate any help.
You don't say how you present the video in playVideoFromPath, but as https://stackoverflow.com/a/17395134/3233251 explains:
When you play back the video on Android with the help of the VideoView, you might notice that the matrix is not taken into account. I'm not entirely sure whether this is done on purpose or not, but the workaround is to use a TextureView, which applies the transformation.
So you should try using a TextureView as recommended, if you are not already doing so.
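If it helps, here is a minimal sketch of playing the rotated file on a TextureView (R.id.texture_view is a TextureView assumed to be in your layout, and rotatedPath is the path from your code; this is only an illustration, not your playVideoFromPath):

TextureView textureView = (TextureView) findViewById(R.id.texture_view);
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        try {
            // A TextureView applies the mp4 rotation matrix, unlike VideoView.
            MediaPlayer player = new MediaPlayer();
            player.setDataSource(rotatedPath);
            player.setSurface(new Surface(surface));
            player.prepare();
            player.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
});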
My application captures video footage and saves it as an .mp4 file. I would like to extract one frame from this video and also save it to a file. Since I haven't found anything better, I've decided to use MediaMetadataRetriever.getFrameAtTime() for that. It happens inside a class that inherits from AsyncTask. Here is what my code looks like in doInBackground():
Bitmap bitmap1 = null;
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(src);
    bitmap1 = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
    if (Utils.saveBitmap(bitmap1, dst)) {
        Log.d(TAG, "doInBackground Image export OK");
    } else {
        Log.d(TAG, "doInBackground Image export FAILED");
    }
} catch (RuntimeException ex) {
    Log.d(TAG, "doInBackground Image export FAILED");
} finally {
    retriever.release();
    if (bitmap1 != null) {
        bitmap1.recycle();
    }
}
And the saveBitmap() method:
File file = new File(filepath);
boolean result;
try {
result = file.createNewFile();
if (!result) {
return false;
}
FileOutputStream ostream = new FileOutputStream(file);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, ostream);
ostream.close();
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
Now, the problem is that the quality of the exported image is noticeably worse than the video quality. I don't think this should happen with the PNG output format. You can see the difference below:
The first image was extracted from the video using ffmpeg on my desktop. The second one was extracted using the code above on a Samsung Galaxy S6. The result looks pretty much the same on every Android device I've tried.
Can someone tell me how I can improve the quality of the exported picture?
I found another solution to the issue. You can use bigflake's example to build a mechanism for extracting a video frame. The one thing you will have to add is a seeking mechanism. This works well, preserves the exact quality, and does not require any third-party libraries. The only downside I've noticed so far is that it results in a longer execution time than the original approach.
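For reference, the seeking step could look roughly like this (a sketch only; the track selection and the MediaCodec decode loop are exactly as in bigflake's ExtractMpegFramesTest, and targetTimeUs is the position of the frame you want):

MediaExtractor extractor = new MediaExtractor();
try {
    extractor.setDataSource(videoPath);
    // ... select the video track and configure the decoder as in the example ...
    // The added step: jump to the nearest sync (key) frame before the target.
    extractor.seekTo(targetTimeUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
    // Then run the example's extract/decode loop, discarding every decoded
    // frame whose presentation time is below targetTimeUs and saving the
    // first one at or past it.
} catch (IOException e) {
    e.printStackTrace();
} finally {
    extractor.release();
}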
My Android app downloads a bunch of photos and videos from a server, and I want to cache this data. I've used the DiskLruCache library to cache the images and it works fine, but now I want to cache the videos as well.
I've tried something like this, but it doesn't seem to work: I can't find anything in the cache directory for the videos:
private boolean writeVideoToFile(String videoUri, DiskLruCache.Editor editor) throws IOException, FileNotFoundException {
    OutputStream out = null;
    FileOutputStream fos;
    try {
        out = new BufferedOutputStream(editor.newOutputStream(0), Utils.IO_BUFFER_SIZE);
        File videoFile = Utils.createFile(Utils.TYPE_VIDEO_FILE);
        fos = new FileOutputStream(videoFile);
        fos.write(videoUri.getBytes());
        fos.close();
        return true;
    } finally {
        if (out != null) {
            out.close();
        }
    }
}
Can anyone give me an idea of how I can accomplish this?
Not enough info, but I suspect the call to getBytes() is the problem, possibly causing an OutOfMemory error.
Don't read the entire video file into memory (calling getBytes). Use a small intermediate buffer instead, writing/caching the video file chunk by chunk.
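Something along these lines should work (a sketch, assuming videoUri is the HTTP URL of the video and Utils.IO_BUFFER_SIZE is the constant from your code):

private boolean writeVideoToFile(String videoUri, DiskLruCache.Editor editor) throws IOException {
    InputStream in = null;
    OutputStream out = null;
    try {
        // Stream the video body straight into the cache entry, one
        // buffer-sized chunk at a time, instead of writing the URI string.
        in = new BufferedInputStream(new URL(videoUri).openStream(), Utils.IO_BUFFER_SIZE);
        out = new BufferedOutputStream(editor.newOutputStream(0), Utils.IO_BUFFER_SIZE);
        byte[] buffer = new byte[Utils.IO_BUFFER_SIZE];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        return true;
    } finally {
        if (in != null) in.close();
        if (out != null) out.close();
    }
}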
You are calling getBytes() on the String videoUri, which writes the text of the URI rather than the video data.
Is that really what you meant to do?
I have a custom file format (similar to a zip file) that packs small files (small images, mp3 files) into one physical file. My Android app downloads this file and displays one image from it. The user can touch the image, and the app starts to play one of the small mp3 "files" inside the packed file. The user can also swipe left or right, and the app displays the previous or next image.
To make things smoother, I hold three "cards" in memory: the one currently displayed, plus the previous and the next one. This way, when the user swipes, I can immediately show the next image. To do this, I preload the images and the mp3 into the MediaPlayer. The problem is that this makes the code multi-threaded, since the preloading is done in the background. I have a bug: when I start to play the mp3 and swipe while it's playing, the image I preload is cut off in the middle. After lots of debugging, I found the reason: while I load the image, the MediaPlayer moves the file pointer in the file descriptor, which causes the next read to start from the middle of the mp3 instead of the image.
Here's the code:
InputStream imageStream = myPackedFile.getBaseStream("cat.jpg"); // this returns an InputStream representing "cat.jpg" from my packed file (which is based on a RandomAccessFile)
Drawable image = Drawable.createFromStream(imageStream, imagePath);
FileDescriptor fd = myPackedFile.getFD();
long pos = myPackedFile.getPos("cat.mp3");
long len = myPackedFile.getLength("cat.mp3");
player.setDataSource(fd, pos, len);
player.prepare();
This is what worked for me: instead of creating a RandomAccessFile and holding on to it, I create a File, and every time I need to access it as a RandomAccessFile I create a new one:
public class PackagedFile {
    private File file;

    PackagedFile(String filename) {
        file = new File(filename);
    }

    public RandomAccessFile getRandomAccessFile() {
        RandomAccessFile raf = null;
        try {
            raf = new RandomAccessFile(file, "r");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        return raf;
    }
}
and the above code became:
InputStream imageStream = myPackedFile.getBaseStream("cat.jpg"); // this returns an InputStream representing "cat.jpg" from my packed file (which is based on a RandomAccessFile)
Drawable image = Drawable.createFromStream(imageStream, imagePath);
FileDescriptor fd = myPackedFile.getRandomAccessFile().getFD();
long pos = myPackedFile.getPos("cat.mp3");
long len = myPackedFile.getLength("cat.mp3");
player.setDataSource(fd, pos, len);
player.prepare();
For API level 13 and above, one can consider ParcelFileDescriptor.dup to duplicate the file descriptor. For more information, see this link: http://androidxref.com/4.2.2_r1/xref/frameworks/base/core/java/android/app/ActivityThread.java#864
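In the scenario above, that could look something like this (a sketch only; ParcelFileDescriptor.dup requires API 13+):

try {
    // Hand MediaPlayer its own duplicate of the descriptor, so its reads
    // don't interfere with the descriptor used for loading images.
    ParcelFileDescriptor dupFd = ParcelFileDescriptor.dup(myPackedFile.getRandomAccessFile().getFD());
    player.setDataSource(dupFd.getFileDescriptor(), pos, len);
    player.prepare();
} catch (IOException e) {
    e.printStackTrace();
}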
I want to add an audio file to an image, so that when someone gets the image they can extract the sound from it. I think we can add additional data to the image header, but I don't know how to do that sort of header processing in Android. Can you please guide me...
Using the code below you can add a small comment to the JPEG header in Android. It only works for JPEG.
final static String EXIF_TAG = "UserComment";

public static boolean setComment(String imagePath, String comment) {
    try {
        ExifInterface exif = new ExifInterface(imagePath);
        exif.setAttribute(EXIF_TAG, comment);
        exif.saveAttributes();
    } catch (IOException e) {
        e.printStackTrace();
        return false; // report failure instead of always returning true
    }
    return true;
}
But the problem is that you can't put an audio file there, because the data stream is far too big to fit into the header.