I have a video file on the SD card, and I want to encode the part of it from minute 3 to minute 5. In other words,
let's say the complete video file is 10 minutes long; I need to retrieve the video data from minute 3 to minute 5, encode it, and then do the next processing step.
I have done this before, but only for the complete video file, and I have no idea how to encode the video at a specific time range. Please help me with this. Below is the code I use to encode the complete video file:
public void loadVideo(){
    // read the whole file into a byte array
    File file = new File("/sdcard/videooutput.mp4");
    byte[] fileData = new byte[(int) file.length()];
    FileInputStream in;
    try {
        in = new FileInputStream(file);
        try {
            // a single read() may return fewer bytes than requested,
            // so keep reading until the array is full
            int offset = 0;
            for (int readNum; offset < fileData.length
                    && (readNum = in.read(fileData, offset, fileData.length - offset)) != -1;) {
                offset += readNum;
            }
        } catch (IOException e) {
            Toast.makeText(this, "IO exc READ file", Toast.LENGTH_LONG).show();
            e.printStackTrace();
        }
        try {
            in.close();
        } catch (IOException e) {
            Toast.makeText(this, "IO exc CLOSE file", Toast.LENGTH_LONG).show();
            e.printStackTrace();
        }
        String encoded = Base64.encodeToString(fileData, Base64.DEFAULT);
        hantar(encoded);
    } catch (FileNotFoundException e) {
        Toast.makeText(this, "FILE NOT FOUND", Toast.LENGTH_LONG).show();
        e.printStackTrace();
    }
}
Please guide me with this. Thanks in advance.
You can try to use this class. It crops a video from a start to an end position and saves the cropped video to a file in your file system. You can also use it as a basis for your own solution.
There is an already-answered question about cutting or trimming video on Android:
Android sdk cut/trim video file
You can try INDE Media for Mobile - https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has transcoding/remuxing functionality in its MediaComposer class, and the possibility to select segments for the resulting files.
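If re-encoding is not strictly required, another option is to remux just the 3:00-5:00 segment with MediaExtractor and MediaMuxer (available since API 18). A minimal sketch, assuming the input path from the question and a made-up output path; real code needs proper error handling:
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource("/sdcard/videooutput.mp4");
// "/sdcard/videotrimmed.mp4" is a hypothetical output path
MediaMuxer muxer = new MediaMuxer("/sdcard/videotrimmed.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
// mirror every source track in the muxer
int trackCount = extractor.getTrackCount();
int[] muxerTrack = new int[trackCount];
for (int i = 0; i < trackCount; i++) {
    extractor.selectTrack(i);
    muxerTrack[i] = muxer.addTrack(extractor.getTrackFormat(i));
}
long startUs = 3 * 60 * 1000000L; // minute 3, in microseconds
long endUs = 5 * 60 * 1000000L;   // minute 5
extractor.seekTo(startUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
muxer.start();
while (true) {
    info.offset = 0;
    info.size = extractor.readSampleData(buffer, 0);
    if (info.size < 0 || extractor.getSampleTime() > endUs) break; // no more samples, or past minute 5
    info.presentationTimeUs = extractor.getSampleTime() - startUs;
    info.flags = extractor.getSampleFlags();
    muxer.writeSampleData(muxerTrack[extractor.getSampleTrackIndex()], buffer, info);
    extractor.advance();
}
muxer.stop();
muxer.release();
extractor.release();
The trimmed file can then be read into a byte array and Base64-encoded exactly as in loadVideo() above.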
Related
I am using MediaRecorder to record audio and then upload it to Firebase by reading the output file's content into a byte array. The problem is that the recorded file is correctly saved and plays fine, while the copy in Firebase Storage does not work. So I checked the size of the file saved on my phone against the one I am converting to a byte array (which should be the same) and found that length() returns the wrong size! Here is the code.
That's the code I am using to record:
recorder.setAudioSource(android.media.MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setOutputFile(Environment.getExternalStorageDirectory().getAbsolutePath()+"/"+lectureTitle.getText().toString()+".mp4");
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
try {
recorder.prepare();
} catch (IOException e) {
e.printStackTrace();
}
recorder.start();
}
The code I am using to convert the file into a byte array so I can upload it:
File lectureRecorded = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/"+lectureTitle.getText().toString()+".mp4");
try {
BufferedInputStream input = new BufferedInputStream(new FileInputStream(lectureRecorded));
if(lectureRecorded.length() > 10*1024*1024){
Toast.makeText(getActivity(),"File is too big MAX (10MB)\n a loss will occur",Toast.LENGTH_LONG).show();
}else {
stream = new byte[(int)lectureRecorded.length()];
Log.d("AUDIOS",""+lectureRecorded.length());
input.read(stream, 0, (int) lectureRecorded.length());
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
recorder.stop();
recorder.release();
Silly mistake: I just should have read the length after calling release() -_-
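For reference, a minimal sketch of the corrected order; outputPath is a placeholder for the same path passed to setOutputFile(). The file length is only reliable after the recorder has been stopped and released:
// finalize the recording first so the container header and length get written
recorder.stop();
recorder.release();
File lectureRecorded = new File(outputPath); // placeholder: same path as setOutputFile()
byte[] stream = new byte[(int) lectureRecorded.length()];
try (BufferedInputStream input = new BufferedInputStream(new FileInputStream(lectureRecorded))) {
    // a single read() may return fewer bytes than requested, so loop until full
    int read = 0;
    while (read < stream.length) {
        int n = input.read(stream, read, stream.length - read);
        if (n == -1) break;
        read += n;
    }
} catch (IOException e) {
    e.printStackTrace();
}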
In my Android app I want to extract video frames. I am using MediaMetadataRetriever for this.
How I set the data source:
Log.d("DEBUG", videoPathUri.getPath());
metadataRetriever.setDataSource(mContext, videoPathUri);
Here is the log output:
/storage/emulated/0/Android/data/com.live.hootout/files/HootVideos/10701.mp4
How can I load a video stored in the Android data folder into MediaMetadataRetriever?
Try this...
File file = new File(context.getDataDir(), filename);
Here is how I did it.
File file = new File(videoPathUri.getPath());
try {
    FileInputStream inputStream = new FileInputStream(file.getAbsolutePath());
    metadataRetriever.setDataSource(inputStream.getFD());
    inputStream.close(); // the retriever keeps its own copy of the descriptor, so it is safe to close here
} catch (FileNotFoundException e) {
    Log.d("DEBUG", "FileNotFoundException", e);
} catch (IOException ea) {
    Log.d("DEBUG", "IOException", ea);
}
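Once the data source is set this way, frames can be pulled with getFrameAtTime(); a minimal sketch (the 3-second timestamp is just an example; times are in microseconds):
// grab the sync frame closest to the 3-second mark
Bitmap frame = metadataRetriever.getFrameAtTime(3000000, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
metadataRetriever.release(); // free the retriever when done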
I'm trying to send a PNG file from my Android server to my Python client.
The PNG image I am sending is a screenshot, around 4 MB at most, usually under 2 MB.
Android code (sending):
File myFile = new File(imagePath);
FileInputStream fis = null;
try {
fis = new FileInputStream(myFile);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
BufferedInputStream bis = new BufferedInputStream(fis);
Log.i("service", "sending file len");
try {
out.write("" +myFile.length());
out.flush();
} catch (Exception e) {
e.printStackTrace();
}
Log.i("service:", "waiteing for ok");
try {
msg = in.readLine();
} catch (Exception e) {
e.printStackTrace();
}
Log.i("service", "sending file");
byte[] outBuffer = new byte[(int) myFile.length()];
try {
bis.read(outBuffer, 0, outBuffer.length);
os = client.getOutputStream();
os.write(outBuffer, 0, outBuffer.length);
} catch (IOException e) {
e.printStackTrace();
}
Python code (receiving):
print "waiting for responce's length"
MSGLEN = int(sock.recv(bufferLen))
print MSGLEN
sock.sendall("ok" +"\n")
chunks = []
bytes_recd = 0
while bytes_recd < MSGLEN:
chunk = sock.recv(min(MSGLEN - bytes_recd, bufferLen))
chunks.append(chunk)
bytes_recd = bytes_recd + len(chunk)
dataRecived = ''.join(chunks)
print 'data receieved'
print 'writing data to file'
fileout = open("D:\shots.png", 'w')
fileout.write(dataRecived)
fileout.close()
The file transfers from the Android device to my PC, but the file is corrupted.
When I compare it with the original image, almost everything is identical,
except for some empty lines here and there (not missing information, just an empty line as if someone added a \n) and one or two big chunks of lines (15 or so) that are missing.
Here you can see a comparison between the two files (left: original, right: file after sending).
I don't know why the file arrives corrupted; please help me.
Try encoding it as Base64 and sending a simple string. Those missing lines are also part of the image data - remember that it is binary.
Base64 or a raw byte stream is what you need.
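For illustration, a minimal sketch of the Base64 route on the Android side, assuming out is the same character writer used for the length handshake in the question (android.util.Base64):
// read the file fully, then send it as one Base64 line
byte[] fileBytes = new byte[(int) myFile.length()];
try (DataInputStream dis = new DataInputStream(new FileInputStream(myFile))) {
    dis.readFully(fileBytes); // blocks until the whole array is filled
    String encoded = Base64.encodeToString(fileBytes, Base64.NO_WRAP);
    out.write(encoded + "\n");
    out.flush();
} catch (IOException e) {
    e.printStackTrace();
}
On the Python side, reading up to the newline and decoding with base64.b64decode recovers the bytes; writing them with open(path, 'wb') also matters, since a text-mode 'w' on Windows translates \n into \r\n - exactly the kind of stray empty lines described above.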
I'm working on a test app to integrate SoundTouch (an open source audio processing library) on Android.
My test app can already receive input from the mic, pass the audio through SoundTouch, and output the processed audio to an AudioTrack instance.
My question is: how can I change the output from AudioTrack to a new File on my device?
Here's the relevant code in my app (where I'm processing the output of SoundTouch into the input for AudioTrack):
// the following code is a stripped-down version of my code;
// it is not supposed to compile or work, it is here for reference purposes
// pre-conditions
// parameters - input : byte[]
soundTouchJNIInstance.putBytes(input);
int bytesReceived = soundTouchJNIInstance.getBytes(input);
audioTrackInstance.write(input, 0, bytesReceived);
Any ideas on how to approach this problem? Thanks!
I hope you are already getting the input voice from the microphone and saving it to a file.
First, load the JNI libraries in your onCreate method:
System.loadLibrary("soundtouch");
System.loadLibrary("soundstretch");
SoundStretch library:
public class AndroidJNI extends SoundStretch{
public final static SoundStretch soundStretch = new SoundStretch();
}
Now you need to call soundStretch.process with the input file path and the desired output file path (where the processed voice is stored) as parameters:
AndroidJNI.soundStretch.process(dataPath + "inputFile.wav", dataPath + "outputFile.wav", tempo, pitch, rate);
File sound = new File(dataPath + "outputFile.wav");
File sound2 = new File(dataPath + "inputFile.wav");
Uri soundUri = Uri.fromFile(sound);
The soundUri can be provided as a source to a MediaPlayer for playback:
MediaPlayer mediaPlayer = MediaPlayer.create(this, soundUri);
mediaPlayer.start();
Also note that the recording sample rate should be selected dynamically by declaring an array of sample rates:
int[] sampleRates = { 44100, 22050, 11025, 8000 };
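A minimal sketch of what selecting the rate dynamically could look like, assuming a plain AudioRecord setup (the loop keeps the first rate this device supports):
int chosenRate = -1;
for (int rate : sampleRates) {
    int bufSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (bufSize > 0) { // a positive value means this configuration is supported
        chosenRate = rate;
        break;
    }
}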
The best way to write a byte array is this:
public void writeToFile(byte[] array)
{
    try
    {
        String path = "Your path.mp3";
        File file = new File(path);
        if (!file.exists()) {
            file.createNewFile();
        }
        FileOutputStream stream = new FileOutputStream(path);
        stream.write(array);
        stream.close(); // flush and release the file handle
    } catch (IOException e1) // also covers FileNotFoundException
    {
        e1.printStackTrace();
    }
}
I am not aware of SoundTouch at all, and the link I am providing nowhere deals with JNI code, but you can have a look at it if it helps you in any way: http://i-liger.com/article/android-wav-audio-recording
I think the best way to achieve this is to convert that audio to a byte[] array. Assuming you have already done that (if not, comment and I'll provide an example), the code below should work. It assumes you're saving to the external SD card, in a new directory called AudioRecording, as audiofile.mp3.
final File soundFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/AudioRecording/");
soundFile.mkdirs();
final File outFile = new File(soundFile, "audiofile.mp3");
try {
final FileOutputStream output = new FileOutputStream(outFile);
output.write(yourByteArrayWithYourAudioFileConverted);
output.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
The mkdirs() method will try to create all the missing parent directories, so if you're planning to store the file two or more directory levels deep, it will create the whole structure.
I use a simple test code snippet to write my audio byte arrays:
public void saveAudio(byte[] array, String pathAndName) throws IOException
{
    FileOutputStream stream = new FileOutputStream(pathAndName);
    try {
        stream.write(array);
    } finally {
        stream.close(); // always release the file handle
    }
}
You will probably need to add some exception handling if you are going to use this in a production environment, but I use the above to save audio whenever I am in the development phase or for personal non-release projects.
Addendum
After some brief thought I have changed my snippet to the following slightly more robust format:
public void saveAudio(byte[] array, String pathAndName)
{
    // try-with-resources closes the stream automatically, even on failure
    try (FileOutputStream stream = new FileOutputStream(pathAndName)) {
        stream.write(array);
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
}
You can use SequenceInputStream for this; in my app I merge MP3 files into one and play the result using the JNI library MPG123, but I also tested the merged file with MediaPlayer without problems.
This code isn't the best, but it works...
private void mergeSongs(File mergedFile,File...mp3Files){
FileInputStream fisToFinal = null;
FileOutputStream fos = null;
try {
fos = new FileOutputStream(mergedFile);
fisToFinal = new FileInputStream(mergedFile);
for(File mp3File:mp3Files){
if(!mp3File.exists())
continue;
FileInputStream fisSong = new FileInputStream(mp3File);
SequenceInputStream sis = new SequenceInputStream(fisToFinal, fisSong); // note: the copy loop below reads fisSong directly; sis is only closed, never read
byte[] buf = new byte[1024];
try {
for (int readNum; (readNum = fisSong.read(buf)) != -1;)
fos.write(buf, 0, readNum);
} finally {
if(fisSong!=null){
fisSong.close();
}
if(sis!=null){
sis.close();
}
}
}
} catch (IOException e) {
e.printStackTrace();
}finally{
try {
if(fos!=null){
fos.flush();
fos.close();
}
if(fisToFinal!=null){
fisToFinal.close();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
All, I am using MediaRecorder for recording audio.
Case 1: On devices running Android 2.2, my recorded audio files are combined and play well.
Case 2: On devices running Android 1.6, I am not able to play the combined audio file.
It plays only the very first recorded audio; the subsequent recordings stay empty, with no sound.
I am also not getting any error in Logcat.
I used the following code for recording audio:
mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
mRecorder.setOutputFile(main_record_file);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mRecorder.prepare();
mRecorder.start();
I also tried mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
Code for combining the audio files:
public void createCombineRecFile(){
    combined_file_stored_path = getFilename_combined_raw(); // file path (String) for the combined audio
    byte fileContent[] = null;
    FileInputStream ins;
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(combined_file_stored_path, true); // append mode
    } catch (FileNotFoundException e1) {
        e1.printStackTrace();
        return; // no output file to write to
    }
    for (int i = 0; i < audNames.size(); i++) {
        try {
            File f = new File(audNames.get(i));
            Log.v("Record Message", "File Length=========>>>" + f.length());
            fileContent = new byte[(int) f.length()];
            ins = new FileInputStream(audNames.get(i));
            int r = ins.read(fileContent); // reads the file content as bytes from the list
            ins.close();
            Log.v("Record Message", "Number Of Bytes Read=====>>>" + r);
            fos.write(fileContent); // write the bytes into the combined file
            Log.v("Record Message", "File=======" + i + " is Appended");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    try {
        fos.close();
        Log.v("Record Message", "===== Combine File Closed =====");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Let me know if any details are needed. Thanks.
Every audio file has its own header (which includes information about length/samples etc.). By combining the files the way you do, the resulting file has multiple headers, one per source file (depending on the exact format, file offsets etc.).
Thus the resulting file is NOT correct in terms of the file format spec.
The newer Android versions are more permissive and play files with "multiple headers" present... the older versions do not.
To create a correctly combined audio file you must conform to the spec, which among other things means creating one new header that describes all of the included audio.
Use a different approach to combine the audio files - for example via ffmpeg (see this for how to build ffmpeg for Android).
Foreword: I haven't tested this, but I don't see why it shouldn't work.
Provided the headers ARE the cause of this problem, you can solve it quite easily. With the code you've given, the encoding is AMR-NB. According to this document, the AMR header is simply the first 6 bytes: 0x23, 0x21, 0x41, 0x4D, 0x52, 0x0A. If the headers in subsequent files are causing the issue, simply omit those bytes from the subsequent files, e.g.:
write all bytes of first file
write from byte[6] -> byte[end] of subsequent files
Let me know how it goes.
EDIT: As requested, change the try block to:
try{
    File f = new File(audNames.get(i));
    Log.v("Record Message", "File Length=========>>>" + f.length());
    fileContent = new byte[(int) f.length()];
    ///////////////new bit////////
    // same as you had, this opens a byte stream to the file
    ins = new FileInputStream(audNames.get(i));
    // reads fileContent.length bytes
    int r = ins.read(fileContent);
    // now fileContent contains the entire audio file - in bytes.
    if (i > 0) {
        // we are not writing the first audio recording, but subsequent ones,
        // so we don't want the header included in the write:
        // copy the entire file, but not the first 6 bytes
        byte[] headerlessFileContent = new byte[fileContent.length - 6];
        for (int j = 6; j < fileContent.length; j++) {
            headerlessFileContent[j - 6] = fileContent[j];
        }
        fileContent = headerlessFileContent;
    }
    ////////////////////////////
    Log.v("Record Message", "Number Of Bytes Read=====>>>" + r);
    fos.write(fileContent); // write the bytes into the combined file
    Log.v("Record Message", "File=======" + i + " is Appended");
}