I'm using libGDX and face the problem that background music does not loop flawlessly on various Android devices (a Nexus 7 running Lollipop, for example). Whenever the track loops (i.e. jumps from the end back to the start), a clearly noticeable gap is audible. How can the background music be played in a loop without this disturbing gap?
I've already tried various approaches, such as:
Ensuring the number of samples in the track is an exact multiple of the track's sample rate (as mentioned somewhere here on SO).
Various audio formats like .ogg, .m4a, .mp3 and .wav (.ogg seems to be the format of choice here on SO, but unfortunately it does not work in my case).
Using Android's MediaPlayer with setLooping(true) instead of the libGDX Music class.
Using Android's MediaPlayer.setNextMediaPlayer(). The code looks like the following, and it plays the two tracks without a gap in between; but unfortunately, as soon as the second MediaPlayer finishes, the first does not start again!
/* initialization */
afd = context.getAssets().openFd(filename);
firstBackgroundMusic.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
firstBackgroundMusic.prepare();
firstBackgroundMusic.setOnCompletionListener(this);
secondBackgroundMusic.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
secondBackgroundMusic.prepare();
secondBackgroundMusic.setOnCompletionListener(this);
firstBackgroundMusic.setNextMediaPlayer(secondBackgroundMusic);
secondBackgroundMusic.setNextMediaPlayer(firstBackgroundMusic);
firstBackgroundMusic.start();
@Override
public void onCompletion(MediaPlayer mp) {
    mp.stop();
    try {
        mp.prepare();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Any ideas what's wrong with the code snippet?
Just for the record:
It turned out to be unsolvable for us. In the end, we looped the background music several times inside the audio file itself, so that the gap occurs less frequently. It's no real solution to the problem, but the best workaround we could find.
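For future readers who want to keep experimenting with the setNextMediaPlayer route: one untested guess is that the next-player link is consumed at the hand-off, so onCompletion would also need to re-chain the players. A sketch, not a verified fix:

@Override
public void onCompletion(MediaPlayer mp) {
    // Untested sketch: after re-preparing the completed player, attach it
    // again as the "next" of the player that is currently running.
    MediaPlayer other = (mp == firstBackgroundMusic) ? secondBackgroundMusic : firstBackgroundMusic;
    mp.stop();
    try {
        mp.prepare();
        other.setNextMediaPlayer(mp);
    } catch (IOException e) {
        e.printStackTrace();
    }
}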
This is an old question, but I will give my solution in case anyone has the same problem.
The solution requires the libGDX audio extension (deprecated, but it works just fine); if you can't find the link online, here are the jars that I am using. It also requires some external storage space.
In outline, the approach is the following:
Extract the raw music data with a decoder (the VorbisDecoder class for ogg, or Mpg123Decoder for mp3) and save it to external storage. (You can check whether the extracted file already exists, so the extraction only has to happen once, because it takes some time.)
Create a RandomAccessFile backed by the file you just saved to external storage.
While playing, set the RandomAccessFile pointer to the correct spot in the file and read a segment of data.
Play that data segment with the AudioDevice class.
Here is some code.
Extract the music file and save it to external storage. file is the FileHandle of the internal music file; this one is an ogg, which is why we use the VorbisDecoder:
FileHandle external = Gdx.files.external("data/com.package.name/music/" + file.name());
file.copyTo(external); // the decoder needs the ogg as a real file on external storage
VorbisDecoder decoder = new VorbisDecoder(external);
FileHandle extractedDataFile = Gdx.files.external("data/com.package.name/music/" + file.nameWithoutExtension() + ".mdata");
if (extractedDataFile.exists()) extractedDataFile.delete();
short[] samples = new short[2048];                // decode buffer (the size is a free choice)
byte[] shortBytes = new byte[samples.length * 2]; // the same samples viewed as bytes
ShortBuffer sbuffer = ByteBuffer.wrap(shortBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
while (true) {
    if (LogoScreen.shouldBreakMusicLoad) break;   // app-specific abort flag
    int num = decoder.readSamples(samples, 0, samples.length);
    if (num <= 0) break;                          // end of stream
    sbuffer.position(0);
    sbuffer.put(samples, 0, num);                 // shorts -> little-endian bytes
    extractedDataFile.writeBytes(shortBytes, 0, num * 2, true); // append raw PCM
}
external.delete(); // the copied ogg is no longer needed once decoded
Create a RandomAccessFile pointing to the file we just created:
if (extractedDataFile.exists()) {
    try {
        raf = new RandomAccessFile(Gdx.files.external(extractedDataFile.path()).file(), "r");
        raf.seek(0);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Create a buffer so we can translate the bytes read from the file into a short array that gets fed to the AudioDevice:
public int length = 2048; // chunk size in samples (an assumed value; the original does not show it)
public byte[] rafbufferBytes = new byte[length * 2]; // raw bytes read from the file
public short[] rafbuffer = new short[length];        // the same data as 16-bit samples
public ShortBuffer sBuffer = ByteBuffer.wrap(rafbufferBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
When we want to play the file, we create an AudioDevice and a new thread in which we constantly read from the RandomAccessFile and feed the data to the AudioDevice:
device = Gdx.audio.newAudioDevice((int) rate /* sample rate in Hz, e.g. 44100 */, MusicPlayer.mono /* true if the track is mono */);
currentBytes = 0; // start from the beginning of the file
playbackThread = new Thread(new Runnable() {
    @Override
    public synchronized void run() {
        while (playing) {
            if (raf != null) {
                try {
                    int length = raf.read(rafbufferBytes);
                    if (length <= 0) {
                        // End of file: notify the completion listener (ocl), which is
                        // expected to seek back to the start for looping, then read again.
                        ocl.onCompletion(DecodedMusic.this);
                        length = raf.read(rafbufferBytes);
                    }
                    sBuffer.position(0);
                    sBuffer.get(rafbuffer); // little-endian bytes -> 16-bit samples
                    if (length > 20) {
                        try {
                            device.writeSamples(rafbuffer, 0, length / 2); // blocks until written
                            fft.spectrum(rafbuffer, spectrum);             // app-specific FFT visualizer
                            currentBytes += length;
                        } catch (com.badlogic.gdx.utils.GdxRuntimeException ex) {
                            // if writing fails, recreate the AudioDevice and keep going
                            ex.printStackTrace();
                            device = Gdx.audio.newAudioDevice((int) rate, MusicPlayer.mono);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
});
playbackThread.setDaemon(true);
playbackThread.start();
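To stop playback cleanly afterwards, here is a minimal sketch, assuming playing is a volatile flag checked by the loop above:

playing = false;            // makes the thread's while loop exit
try {
    playbackThread.join();  // wait for the last buffer write to finish
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
device.dispose();           // release the underlying audio resources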
And when we want to seek to a position (pos in seconds):
public void seek(float pos) {
    currentBytes = (int) (rate * pos); // seconds -> frame index (rate frames per second)
    try {
        raf.seek(currentBytes * 4); // 4 bytes per frame (16-bit samples x 2 channels)
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Related
I'm working on an app that records video in the background and sends it to a server in parts, by reading bytes and storing them in a byte array. For now the algorithm is pretty simple:
start recording;
read a part of the video file into a byte array;
send the byte array via POST (with the help of Retrofit).
The problem occurs if the connection is somehow interrupted and the last part isn't sent. The server just can't produce a readable video file, because the moov atom is only written after recording stops. My question: is it somehow possible to make complete video files from the byte-array parts, or in any other way? I can change the video codec if that would solve the problem.
P.S. I can only send data via POST.
P.P.S. I can't change anything on the server side, including streaming the video directly to the server.
SOLUTION
I decided to record small chunks of video recursively. The following solution is suitable for the first version of the Camera API; if you're using Camera2 or something else, you can try the same algorithm.
In the service class that records the video, make sure the MediaRecorder is configured like this:
mediaRecorder.setMaxDuration(10000); // maximum duration of one chunk, in milliseconds
// or
mediaRecorder.setMaxFileSize(10000); // maximum size of one chunk, in bytes
Then you need to set an OnInfoListener, like this:
mediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            // Use this condition instead if you decided to use a max file size:
            // if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED)
            stopRecording();
            setRecordingStatus(false);
            startRecording(surfaceHolder); // immediately start recording the next chunk
        }
    }
});
Don't forget to pass the surfaceHolder instance on to the next iteration, otherwise you can get an "Application lost surface" error.
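A minimal sketch of what startRecording() might look like (the field names and the CamcorderProfile are assumptions; only the reuse of the SurfaceHolder and the timestamp-based file name, which the FileObserver below relies on, come from this answer):

private void startRecording(SurfaceHolder holder) {
    try {
        camera.unlock();
        mediaRecorder = new MediaRecorder();
        mediaRecorder.setCamera(camera);
        mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
        // Name the chunk by timestamp so the FileObserver can parse it as a Long:
        mediaRecorder.setOutputFile(pathToFolder + "/" + System.currentTimeMillis() + ".mp4");
        mediaRecorder.setMaxDuration(10000);
        mediaRecorder.setPreviewDisplay(holder.getSurface()); // reuse the same surface
        mediaRecorder.setOnInfoListener(infoListener);        // the listener shown above
        mediaRecorder.prepare();
        mediaRecorder.start();
        setRecordingStatus(true);
    } catch (IOException e) {
        e.printStackTrace();
    }
}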
The next thing you need to do is declare a FileObserver in the onCreate method:
FileObserver fileObserver = new FileObserver(pathToFolder, FileObserver.CLOSE_WRITE) {
    // The FileObserver.CLOSE_WRITE mask means this observer fires when the
    // file system reports that writing to a file has finished.
    @Override
    public void onEvent(int event, String path) {
        // here 'path' is the name of the file (with extension), not the full path
        if (event == FileObserver.CLOSE_WRITE && path.endsWith(".mp4")) {
            String name = String.valueOf(Long.parseLong(path.substring(0, path.length() - 4)) / 1000);
            sendNewVideo(pathToFolder + "/" + path, name);
        }
    }
};
And in the onStartCommand method:
fileObserver.startWatching();
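sendNewVideo() itself is not shown in this answer. A minimal sketch of what it could look like with Retrofit 2 (the service interface, endpoint path and form-field names are invented for illustration):

public interface UploadService {
    @Multipart
    @POST("upload") // hypothetical endpoint
    Call<ResponseBody> uploadChunk(@Part MultipartBody.Part video,
                                   @Part("name") RequestBody name);
}

private void sendNewVideo(String filePath, String name) {
    File file = new File(filePath);
    RequestBody videoBody = RequestBody.create(MediaType.parse("video/mp4"), file);
    MultipartBody.Part part = MultipartBody.Part.createFormData("video", file.getName(), videoBody);
    RequestBody nameBody = RequestBody.create(MediaType.parse("text/plain"), name);
    uploadService.uploadChunk(part, nameBody).enqueue(new Callback<ResponseBody>() {
        @Override
        public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
            // the chunk arrived; it can be deleted locally if desired
        }
        @Override
        public void onFailure(Call<ResponseBody> call, Throwable t) {
            // keep the file and retry later, so an interrupted connection
            // only delays a chunk instead of losing it
        }
    });
}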
I'm using Cordova to build my mobile app and I need to record sounds.
I'm using the media-capture plugin, which launches the default Android recorder app via this function:
private void captureAudio() {
    Intent intent = new Intent(android.provider.MediaStore.Audio.Media.RECORD_SOUND_ACTION);
    this.cordova.startActivityForResult((CordovaPlugin) this, intent, CAPTURE_AUDIO);
}
The problem is that after I get the file path and try to call getAudioVideoData (which collects information like "duration"), the audio recording format (which defaults to .amr) apparently cannot be parsed, and an exception is thrown.
private JSONObject getAudioVideoData(String filePath, JSONObject obj, boolean video) throws JSONException {
    MediaPlayer player = new MediaPlayer();
    try {
        player.setDataSource(filePath);
        player.prepare();
        obj.put("duration", player.getDuration() / 1000);
        if (video) {
            obj.put("height", player.getVideoHeight());
            obj.put("width", player.getVideoWidth());
        }
    } catch (IOException e) {
        Log.d(LOG_TAG, "Error: loading video file");
    }
    return obj;
}
I know that the problem is the media format, because on my older Android device running 4.4.4 the Sound Recorder app has settings where I can change the file type, and if I set it to .wav, then getAudioVideoData works!
I have tried to add the following inside captureAudio(), before startActivityForResult():
intent.putExtra(android.provider.MediaStore.Audio.Media.ENTRY_CONTENT_TYPE, "audio/aac");
intent.putExtra(android.provider.MediaStore.Audio.Media.MIME_TYPE, "audio/aac");
intent.putExtra(android.provider.MediaStore.Audio.Media.CONTENT_TYPE, "audio/aac");
...but with no success.
I couldn't find a way to influence the output of the Sound Recorder app via the intent, but I solved the main problem, which was that I couldn't read the recorded audio file's metadata (the duration property).
Fixed with this PR: https://github.com/apache/cordova-plugin-media-capture/pull/50
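For reference, MediaMetadataRetriever may cope better with .amr files than MediaPlayer when you only need metadata. A sketch of an alternative body for getAudioVideoData (not necessarily what the PR above does):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(filePath);
    // METADATA_KEY_DURATION comes back as a string holding milliseconds
    String durationMs = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    if (durationMs != null) {
        obj.put("duration", Long.parseLong(durationMs) / 1000);
    }
} finally {
    retriever.release();
}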
I have a link to a video on an S3 server and I am playing this video in a VideoView. The video plays properly, but the problem is that it first downloads the entire video and only then plays it.
I want it to buffer instead: once part of the video (say 20%) has downloaded, it should start playing while the rest keeps downloading (like on YouTube). Here is what I have done:
FFmpegMediaMetadataRetriever mediaMetadataRetriever = new FFmpegMediaMetadataRetriever();
AWSCredentials myCredentials = new BasicAWSCredentials(
        "AKIAIGOIY4LLB7EMACGQ",
        "7wNQeY1JC0uyMaGYhKBKc9V7QC7X4ecBtyLimt2l");
AmazonS3 s3client = new AmazonS3Client(myCredentials);
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("mgvtest", videoUrl);
URL objectURL = s3client.generatePresignedUrl(request); // note: objectURL is never used; videoUrl is passed below

try {
    mediaMetadataRetriever.setDataSource(videoUrl);
} catch (Exception e) {
    utilDialog.showDialog("Unable to load this video", utilDialog.ALERT_DIALOG);
    pb.setVisibility(View.INVISIBLE);
}

videoView.setVideoURI(Uri.parse(videoUrl));
MediaController myMediaController = new MediaController(this);
// myMediaController.setMediaPlayer(videoView);
videoView.setMediaController(myMediaController);
videoView.setOnCompletionListener(myVideoViewCompletionListener);
videoView.setOnPreparedListener(MyVideoViewPreparedListener);
videoView.setOnErrorListener(myVideoViewErrorListener);
videoView.requestFocus();
videoView.start();
Listeners
MediaPlayer.OnCompletionListener myVideoViewCompletionListener = new MediaPlayer.OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer arg0) {
        // Toast.makeText(PlayRecordedVideoActivity.this, "End of Video",
        //         Toast.LENGTH_LONG).show();
    }
};

MediaPlayer.OnPreparedListener MyVideoViewPreparedListener = new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer mp) {
        pb.setVisibility(View.INVISIBLE);
        imgScreenshot.setVisibility(View.VISIBLE);
        tvScreenshot.setVisibility(View.VISIBLE);
        // final Animation in = new AlphaAnimation(0.0f, 1.0f);
        // in.setDuration(3000);
        // tvScreenshot.startAnimation(in);
        Animation animation = AnimationUtils.loadAnimation(
                getApplicationContext(), R.anim.zoom_in);
        tvScreenshot.startAnimation(animation);
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                tvScreenshot.setVisibility(View.INVISIBLE);
            }
        }, 3000);
    }
};

MediaPlayer.OnErrorListener myVideoViewErrorListener = new MediaPlayer.OnErrorListener() {
    @Override
    public boolean onError(MediaPlayer mp, int what, int extra) {
        // Toast.makeText(PlayRecordedVideoActivity.this, "Error!!!",
        //         Toast.LENGTH_LONG).show();
        return true;
    }
};
To be able to start playing an mp4 video before it has fully downloaded, the video has to have its metadata at the start of the file rather than at the end; unfortunately, with standard mp4 the default is usually to have it at the end.
The metadata lives in an 'atom' or 'box' (basically a data structure within the mp4 file) and can be moved to the start. This is usually referred to as faststart, and tools such as ffmpeg will allow you to do this. The following is an extract from the ffmpeg documentation:
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location (written at the end of the file, it can be moved to the start for better playback by adding faststart to the movflags, or using the qt-faststart tool).
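For example, the relocation can typically be done without re-encoding:

ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4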
There are other tools and software which will allow you to do this also, e.g. the one mentioned in the ffmpeg extract above:
http://multimedia.cx/eggs/improving-qt-faststart/
If you actually want full streaming, where the server breaks the file into chunks that are downloaded one by one by the client, then you probably want one of the adaptive bit rate protocols (Apple's HLS, Microsoft's Smooth Streaming, Adobe's HTTP Dynamic Streaming, or the newer open standard DASH). These also let you offer different bit rates to allow for different network conditions. You will need a server that supports this functionality. It may be overkill if you just want a simple site with a single video and will not have much traffic.
Actually, you have to set up CloudFront in front of S3; then you can stream S3 videos.
Check out this link for more information:
http://www.miracletutorials.com/s3-streaming-video-with-cloudfront/
I'm currently writing a CSV file importer for my app, but I'm having difficulties writing tests for it. What I'm trying to do is import a sample CSV file and compare the results to the database.
public class CSVImportTest extends ProviderTestCase2<MyProvider> {
    @Override
    protected void setUp() throws Exception {
        super.setUp();
        mContentResolver = getMockContentResolver();
        setContext(new IsolatedContext(mContentResolver, getContext()));
        mContext = getContext();
        mCSVImport = new CSVImportParker(mContext);
    }

    public void read() {
        try {
            // Fails here with "File not found."
            InputStream input = mContext.getResources()
                    .openRawResource(my.package.R.raw.file);
            ...
        } catch (Exception e) {
            e.printStackTrace();
            fail();
        }
        ...
    }
}
The test file is never found, although it is available at the correct location.
The issue is that resources in the raw directory are compressed unless they have an .ogg or .mp3 file extension. See this description:
Proguard breaking audio file in assets or raw
and from the docs:
This function only works for resources that are stored in the package as uncompressed data, which typically includes things like mp3 files and png images.
So, the easiest way to solve the issue is to add the .mp3 or .ogg file extension to your raw assets. It's not clean or pretty, but it works.
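For example (a sketch; the resource name is hypothetical): after renaming res/raw/file.csv to res/raw/file.mp3, the same call succeeds, because the resource ID ignores the extension and aapt now stores the data uncompressed:

InputStream input = mContext.getResources()
        .openRawResource(my.package.R.raw.file); // still the CSV data, despite the .mp3 extension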
I want to understand the working principles of the HTC Evo 3D's 3D display; however, the code and HTCDev's tutorial do not help with this. It is said that the SEI FPA bit in the header overrides the choice made by hand, as in:
public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) {
    holder = surfaceholder;
    enableS3D(true, holder.getSurface()); // note: the SEI FPA flag in the content
                                          // overrides this
}
The video playback code:
private void playVideo() {
    release();
    fileName = "HTCDemo.mp4";
    try {
        mediaPlayer = new MediaPlayer();
        final AssetFileDescriptor afd = getAssets().openFd(fileName);
        mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(),
                afd.getLength());
        mediaPlayer.setDisplay(holder);
        mediaPlayer.prepare();
        mediaPlayer.setOnPreparedListener(this);
        mediaPlayer.setOnVideoSizeChangedListener(this);
        mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
    } catch (Exception e) {
        Log.e(TAG, Log.getStackTraceString(e));
    }
}
At this point, I could not track down where it reads the SEI FPA bit from the header. I need help finding the relevant part of the code. Thanks in advance.
Do you need to parse the SEI FPA bit in the header itself? That's out of the scope of this API. As noted, the codec parses it to enable (and override) the S3D setting.
In the overview, it's mentioned how you can use third-party tools like x264 to add this bit to existing content.
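For instance, x264 can stamp the frame-packing SEI onto a stream. A sketch (the value 3 means side-by-side, per the H.264 frame_packing_arrangement values):

x264 --frame-packing 3 -o output_3d.264 input.y4m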
I'd recommend looking at the x264 source code for help in parsing the file header, if that's what you need to do at runtime.