I am wondering if it is possible to reuse a single MediaMetadataRetriever object to get metadata from multiple files?
If yes, should I call release() after each file, or just set a different data source each time and call release() after all files have been processed?
The API reference is not precise about that :/
Thanks :)
Yes, you can reuse the object. Your code would look something like this:
MediaMetadataRetriever mmr = new MediaMetadataRetriever();
for (int i = 0; i < files.length; i++) {
    mmr.setDataSource(files[i]);
    mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ALBUM);
    mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ARTIST);
    Bitmap b = mmr.getFrameAtTime(2000000, MediaMetadataRetriever.OPTION_CLOSEST); // frame at 2 seconds
    byte[] artwork = mmr.getEmbeddedPicture();
}
mmr.release(); // all done, release the object
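If an exception can escape the loop, it may be safer to guard the release() in a finally block. A minimal sketch of the same idea, just made exception-safe:

MediaMetadataRetriever mmr = new MediaMetadataRetriever();
try {
    for (String file : files) {
        mmr.setDataSource(file);
        String album = mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ALBUM);
        String artist = mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_ARTIST);
        // use the metadata here
    }
} finally {
    mmr.release(); // runs even if setDataSource() throws on a bad file
}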
Related
Is there any way to read frames from an mp4 video using JavaCV in parallel?
I know that we can grab frames using FFmpegFrameGrabber, but is there a more efficient method, like using FrameGrabber.Array? I tried the code below, but it's not working.
frames = new Frame[grabber.getLengthInFrames()];
frameGrabbers = new FFmpegFrameGrabber[grabber.getLengthInFrames()];
/*
for (FFmpegFrameGrabber grabber : frameGrabbers) {
    grabber = new FFmpegFrameGrabber(path);
}
*/
for (int i = 0; i < grabber.getLengthInFrames(); i++) {
    frameGrabbers[i] = new FFmpegFrameGrabber(path);
}
grabberArray = grabber.createArray(frameGrabbers);
grabberArray.start();
frames = grabberArray.grab();
grabberArray.release();
The app crashes when I call grabberArray.start().
Thanks.
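For comparison, the plain sequential approach the question alludes to looks roughly like this (a sketch; the grabAllFrames name is illustrative):

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;

void grabAllFrames(String path) throws FrameGrabber.Exception {
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(path);
    grabber.start();
    Frame frame;
    while ((frame = grabber.grab()) != null) {
        // process the frame here; the grabber reuses its Frame instance,
        // so call frame.clone() if you need to keep a copy
    }
    grabber.stop();
    grabber.release();
}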
I am trying to extract all frames from a video.
With the following code I wanted to fetch the first 30 frames of a video, but I get only the first frame 30 times.
private ArrayList<Bitmap> getFrames(String path) {
    try {
        ArrayList<Bitmap> bArray = new ArrayList<Bitmap>();
        bArray.clear();
        MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
        mRetriever.setDataSource("/sdcard/myvideo.mp4");
        for (int i = 0; i < 30; i++) {
            bArray.add(mRetriever.getFrameAtTime(1000 * i,
                    MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
        }
        return bArray;
    } catch (Exception e) {
        return null;
    }
}
Now, how can I get all frames from a video?
Video support in the Android SDK is limited, and frame extraction for H264-encoded videos is only possible for key frames. To extract an arbitrary frame, you'll need a library like FFmpegMediaMetadataRetriever, which uses native code to extract data from the video. It is very fast, comes with precompiled binaries (for ARM and x86) so you don't need to delve into C++ and makefiles, is licensed under Apache 2.0, and comes with a demo Android app.
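A minimal sketch of pulling a non-key frame with it (an assumption on my part: the API mirrors the platform MediaMetadataRetriever, with timestamps in microseconds):

FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
try {
    retriever.setDataSource(path);
    // OPTION_CLOSEST returns the frame nearest to the timestamp,
    // not just the nearest key frame
    Bitmap frame = retriever.getFrameAtTime(2000000,
            FFmpegMediaMetadataRetriever.OPTION_CLOSEST); // 2 seconds
} finally {
    retriever.release();
}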
There is also a pure Java library, JCodec, but it's slower, and when I used it last year the colors of the extracted frames were distorted.
You have to pass the path to this method. Perfectly working code! Hope it's helpful.
Gradle:
implementation 'com.github.wseemann:FFmpegMediaMetadataRetriever-core:1.0.15'
public void VideoToGif(String uri) {
    Uri videoFileUri = Uri.parse(uri);
    FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
    retriever.setDataSource(uri);
    List<Bitmap> rev = new ArrayList<Bitmap>();
    MediaPlayer mp = MediaPlayer.create(GitToImage.this, videoFileUri);
    int millis = mp.getDuration(); // duration in milliseconds
    // getFrameAtTime() takes microseconds: grab a frame every 200 ms
    for (int i = 100000; i <= millis * 1000; i += 100000 * 2) {
        Bitmap bitmap = retriever.getFrameAtTime(i, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
        rev.add(bitmap);
    }
    mp.release();
    retriever.release();
    GiftoImage((ArrayList) rev);
}
getFrameAtTime() takes its timestamp in microseconds, but your loop advances by only 1000 µs (1 ms) per iteration, so every call snaps to the same sync frame.
for (int i = 1000000; i < millis * 1000; i += 1000000) { // 1 second = 1,000,000 µs
    bArray.add(mRetriever.getFrameAtTime(i,
            MediaMetadataRetriever.OPTION_CLOSEST_SYNC));
}
Change it as shown above. I also answered it here.
Starting with Android 9.0 (API level 28), MediaMetadataRetriever has a getFrameAtIndex(int frameIndex) method, which accepts the zero-based index of the frame you want and returns a Bitmap.
See https://developer.android.com/reference/android/media/MediaMetadataRetriever.html#getFrameAtIndex(int)
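A minimal sketch of walking every frame with it (API 28+; assumes the video reports METADATA_KEY_VIDEO_FRAME_COUNT, which was also added in API 28):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(path);
    int frameCount = Integer.parseInt(retriever.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_VIDEO_FRAME_COUNT));
    for (int i = 0; i < frameCount; i++) {
        Bitmap frame = retriever.getFrameAtIndex(i); // decodes exactly frame i
        // use the bitmap here
    }
} finally {
    retriever.release();
}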
I am using JNI code in an Android project in which the JNI native function requires a short[] argument. However, the original data is stored in a ByteBuffer. I'm trying to convert the data as follows.
ByteBuffer rgbBuf = ByteBuffer.allocate(size);
...
short[] shortArray = (short[]) rgbBuf.asShortBuffer().array().clone();
But I encounter the following problem when running the second line of code shown above:
E/AndroidRuntime(23923): Caused by: java.lang.UnsupportedOperationException
E/AndroidRuntime(23923):     at java.nio.ShortToByteBufferAdapter.protectedArray(ShortToByteBufferAdapter.java:169)
Could anyone suggest a means to implement the conversion?
The way to do this is a bit odd, actually. You can do it as below; setting the byte order is important before converting to a short array.
short[] shortArray = new short[size/2];
rgbBuf.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortArray);
Additionally, you may have to use allocateDirect instead of allocate.
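Putting both points together, a small sketch (assuming the buffer holds little-endian 16-bit samples and size is even):

ByteBuffer rgbBuf = ByteBuffer.allocateDirect(size);
// ... fill rgbBuf with the byte data ...
rgbBuf.rewind(); // read from the beginning
short[] shortArray = new short[size / 2];
rgbBuf.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortArray);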
I had the same error with anything that used asShortBuffer(). Here's a way around it (adapted from 2 bytes to short java):
short[] shortArray = new short[rgbBuf.capacity() / 2];
for (int i = 0; i < shortArray.length; i++) {
    ByteBuffer bb = ByteBuffer.allocate(2);
    bb.order(ByteOrder.LITTLE_ENDIAN);
    bb.put(rgbBuf.get(2 * i));
    bb.put(rgbBuf.get(2 * i + 1));
    shortArray[i] = bb.getShort(0);
}
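Alternatively, ByteBuffer can assemble the two bytes itself via the absolute getShort(int) accessor, which avoids allocating a temporary buffer per sample (same little-endian assumption):

rgbBuf.order(ByteOrder.LITTLE_ENDIAN);
short[] shortArray = new short[rgbBuf.capacity() / 2];
for (int i = 0; i < shortArray.length; i++) {
    shortArray[i] = rgbBuf.getShort(2 * i); // absolute read, no temporary buffer
}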
All I need is to convert a byte[] to a String, do something with that String, and convert it back to a byte[]. But in this test I just convert the byte[] to a String and back, and the result is different.
I convert the byte[] to a String using this:
byte[] byteEntity = EntityUtils.toByteArray(entity);
String s = new String(byteEntity,"UTF-8");
Then I tried:
byte[] byteTest = s.getBytes("UTF-8");
Then I compared them:
if (byteEntity.equals(byteTest)) Log.i("test","equal");
else Log.i("test","diff");
The result is that they are different.
I searched on Stack Overflow about this, but nothing matched my case. The point is that my data is a .png picture, so the converted String is unreadable. Thanks in advance.
Solved
Using something like this:
byte[] mByteEntity = EntityUtils.toByteArray(entity);
byte[] mByteDecrypted = clip_xor(mByteEntity, "your_key".getBytes());
baos.write(mByteDecrypted); // baos is a ByteArrayOutputStream
InputStream in = new ByteArrayInputStream(baos.toByteArray());
and this is the clip_xor function:
protected byte[] clip_xor(byte[] data, byte[] key) {
    int num_key = key.length;
    int num_data = data.length;
    try {
        if (num_key > 0) {
            for (int i = 0, j = 0; i < num_data; i++, j = (j + 1) % num_key) {
                data[i] ^= key[j];
            }
        }
    } catch (Exception ex) {
        Log.i("error", ex.toString());
    }
    return data;
}
Hope this will be useful for someone facing the same problem. Thank you all for helping me solve this.
Special thanks to P'krit_s.
Primitive arrays are actually Objects (that's why they have an .equals method), but they do not implement the equality contract (hashCode and equals) needed for content comparison, so .equals on an array only compares identity. You also cannot use ==, since according to the docs .getBytes returns a new byte[] instance. You should use Arrays.equals(byteEntity, byteTest) to test equality.
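For example:

import java.util.Arrays;

if (Arrays.equals(byteEntity, byteTest)) Log.i("test", "equal");
else Log.i("test", "diff");

Note that even with a correct comparison the arrays can still differ here, because decoding arbitrary binary data (a .png) as UTF-8 is lossy; to round-trip raw bytes through a String, an encoding such as Base64 is the usual choice.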
Have a look at the answer here.
In that case my goal was to transform a png image into a byte stream to display it in an embedded browser (a particular case where the browser did not show the png directly).
You may use the logic of that solution to convert the png to bytes and then to a String.
Then reverse the order of operations to get back to the original file.
Does anyone know of any useful links for learning audio DSP for Android?
Or a sound library?
I'm trying to make a basic mixer for playing wav files but realised I don't know enough about DSP, and I can't find anything at all for Android.
I have a wav file loaded into a byte array and an AudioTrack on a short loop.
How can I feed the data in?
I expect this post will be ignored but it's worth a try.
FileInputStream is = new FileInputStream(filePath);
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);

int i = 0;
while (dis.available() > 0) {
    byteData[i] = dis.readByte();
    i++;
}

final int minSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT);
track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minSize, AudioTrack.MODE_STREAM);
track.play();
bRun = true;

new Thread(new Runnable() {
    public void run() {
        track.write(byteData, 0, minSize);
    }
}).start();
I'll give this a shot just because I was in your position a few months ago...
If you already have the wav file's audio samples in a byte array, you simply need to pass the samples to the AudioTrack object (look up the write() methods).
To mix audio together you simply add the samples from each track, index by index: add the first sample of track 1 to the first sample of track 2, add the second sample of track 1 to the second sample of track 2, and so on. The end result would ideally be a third array containing the summed samples, which you pass to the write() method of your AudioTrack instance.
You must be mindful of clipping here. If your data type is short, then the maximum value allowed is 32767 (and the minimum is -32768). A simple way to ensure your summed samples do not exceed those limits is to perform the addition in a variable of a larger type (e.g. int), clamp the result to the valid range, and cast it back to a short.
int result = track1[i] + track2[i];
if (result > 32767) {        // Short.MAX_VALUE
    result = 32767;
}
else if (result < -32768) {  // Short.MIN_VALUE
    result = -32768;
}
mixedAudio[i] = (short) result;
Notice how the snippet above also clamps at the minimum range of a short.
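Pulling the pieces together, a minimal mixing sketch (names are illustrative; assumes both tracks are arrays of 16-bit PCM samples):

// Mix two 16-bit PCM tracks into one, clamping to the short range.
short[] mix(short[] track1, short[] track2) {
    short[] mixedAudio = new short[Math.min(track1.length, track2.length)];
    for (int i = 0; i < mixedAudio.length; i++) {
        int result = track1[i] + track2[i];
        if (result > Short.MAX_VALUE) result = Short.MAX_VALUE;   // 32767
        if (result < Short.MIN_VALUE) result = Short.MIN_VALUE;   // -32768
        mixedAudio[i] = (short) result;
    }
    return mixedAudio;
}

// then feed the result to the AudioTrack:
// track.write(mixedAudio, 0, mixedAudio.length);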
Apologies for the lack of formatting here, I'm on my mobile phone on a train :-)
Good luck.