How to change the value of an audio file's Pitch - android

I'm developing an Android app to change recorded audio files from a male voice to a female voice.
I found a way to change the pitch of an audio file via PlaybackParams on MediaPlayer.
Here's my code for changing Pitch value:
mediaPlayer = new MediaPlayer();
mediaPlayer.setDataSource(url);
mediaPlayer.prepare();
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.M) {
    PlaybackParams params = new PlaybackParams();
    params.setPitch(1.6f);
    mediaPlayer.setPlaybackParams(params);
}
It works well, but the problem is that PlaybackParams is only available from API 23 (Android 6.0) upward.
Does anyone know another solution for that?
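One possible fallback for devices below API 23, where PlaybackParams does not exist: decode the file to PCM and play it through AudioTrack, whose setPlaybackRate() has been available since API 3. Raising the playback sample rate raises the pitch, though unlike setPitch() it also speeds the audio up; true pitch-only shifting needs a DSP library such as SoundTouch. A minimal sketch, assuming pcmData already holds decoded 16-bit mono PCM at 44100 Hz:

int srcRate = 44100;
int bufferSize = AudioTrack.getMinBufferSize(srcRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, srcRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
// Playing back at 1.6x the source rate shifts the pitch up by the same factor.
track.setPlaybackRate((int) (srcRate * 1.6f));
track.play();
track.write(pcmData, 0, pcmData.length); // pcmData: short[] of decoded samples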

Related

For Exoplayer's AdaptiveTrackSelection, should I switch to a single track with multiple bitrates instead of four tracks with separate bitrates?

Currently, I have a server that streams four RTMP MediaSources: one with a 720p video source, one with 360p, one with 180p, and one audio-only. If I want to switch resolutions, I have to stop the ExoPlayer instance, prepare the track I want to switch to, then play.
The code I use to prepare the ExoPlayer instance:
TrackSelection.Factory adaptiveTrackSelectionFactory =
        new AdaptiveTrackSelection.Factory(bandwidthMeter);
TrackSelector trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new AVControlExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
//noinspection deprecation
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        2000, // max buffer
        1000, // playback
        1000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addAnalyticsListener(mAnalyticsListener);
With createSource() being:
private void createSource() {
    factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_BOTH_AV);
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180()));
    mMediaSource180.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource180"));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360()));
    mMediaSource360.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource360"));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720()));
    mMediaSource720.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource720"));
    factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_AUDIO_ONLY);
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL()));
    mMediaSourceAudio.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSourceAudio"));
}

private void releaseSource() {
    mMediaSource180.releaseSource(null);
    mMediaSource360.releaseSource(null);
    mMediaSource720.releaseSource(null);
    mMediaSourceAudio.releaseSource(null);
}
And the code I currently use to switch between these MediaSources is:
private void changeTrack(MediaSource source) {
    if (currentMediaSource == source) return;
    try {
        this.currentMediaSource = source;
        mPlayer.stop(true);
        mPlayer.prepare(source, true, true);
        mPlayer.setPlayWhenReady(true);
        if (source == mMediaSourceAudio) {
            if (!audioOnly) {
                try {
                    TransitionManager.beginDelayedTransition(rootView);
                } catch (Exception ignored) {
                }
                layAudioOnly.setVisibility(View.VISIBLE);
                vwExoPlayer.setVisibility(View.INVISIBLE);
                audioOnly = true;
                try {
                    GameQnAFragment fragment = findFragment(GameQnAFragment.class);
                    if (fragment != null) {
                        fragment.signAudioOnly();
                    }
                } catch (Exception e) {
                    Trace.e(e);
                }
                try {
                    GamePollingFragment fragment = findFragment(GamePollingFragment.class);
                    if (fragment != null) {
                        fragment.signAudioOnly();
                    }
                } catch (Exception e) {
                    Trace.e(e);
                }
            }
        } else {
            if (audioOnly) {
                TransitionManager.beginDelayedTransition(rootView);
                layAudioOnly.setVisibility(View.GONE);
                vwExoPlayer.setVisibility(View.VISIBLE);
                audioOnly = false;
            }
        }
    } catch (Exception ignore) {
    }
}
I wanted to implement a seamless switching between these MediaSources so that I don't need to stop and re-prepare, but it appears that this feature is not supported by ExoPlayer.
In addition, logging each MediaSource structure with the following code:
MappingTrackSelector.MappedTrackInfo info =
        ((DefaultTrackSelector) trackSelector).getCurrentMappedTrackInfo();
if (info != null) {
    for (int i = 0; i < info.getRendererCount(); i++) {
        TrackGroupArray trackGroups = info.getTrackGroups(i);
        if (trackGroups.length != 0) {
            for (int j = 0; j < trackGroups.length; j++) {
                TrackGroup tg = trackGroups.get(j);
                for (int k = 0; k < tg.length; k++) {
                    Log.i("track_info_" + i + "-" + j + "-" + k, tg.getFormat(k) + "");
                }
            }
        }
    }
}
Just nets me 1 video format and 1 audio format each.
My current workaround is to prepare another ExoPlayer instance in the background, replace the currently running instance with it once preparation is complete, and release the old instance. That reduces the lag between the MediaSources somewhat, but doesn't come close to the seamless resolution changes YouTube achieves.
Should I implement my own TrackSelector and pack all four sources into it, implement another MediaSource that handles all four sources, or simply ask the colleague who maintains the streams to switch to a single RTMP MediaSource with a manifest that lists all available resolutions for AdaptiveTrackSelection to switch between?
Adaptive bit rate streaming is designed to allow easy switching between different bit rate streams, but it requires the streams to be segmented and the player to download the video segment by segment.
That way the player can decide which bit rate to choose for the next segment depending on the current network conditions (and the device display size and type), and it can move from one bit rate to another seamlessly, apart from the change in quality itself.
See here for some more info: https://stackoverflow.com/a/42365034/334402
All the above relies on a delivery protocol which supports this segmentation and different bit rate streams. The most common ones today are HLS and MPEG-DASH.
The easiest way to support what I think you are looking for would be for your colleague who supplies the stream to deliver it using HLS and/or DASH.
Note that at the moment both HLS and DASH are required, as Apple devices require HLS while other devices tend to default to DASH. Traditionally HLS used TS as the container for the video segments and DASH used fragmented MP4, but both are now moving to CMAF, which is essentially fragmented MP4.
So in theory a single set of bit rate videos can now serve both HLS and DASH. In practice this will depend on whether your content is encrypted, as HLS and Apple historically used one encryption mode and everyone else another. This is also changing, but it will take time before all devices support the new approach where every device can use the same encryption mode, so if your streams are encrypted this is an added complication at the moment.
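For illustration, if the stream were delivered as an HLS manifest, the four RTMP sources above would collapse into a single adaptive MediaSource on the ExoPlayer side. A minimal sketch, assuming the ExoPlayer HLS extension is on the classpath and using a hypothetical manifest URL (bandwidthMeter and mPlayer are the objects from the question):

DataSource.Factory dataSourceFactory =
        new DefaultHttpDataSourceFactory("my-app", bandwidthMeter);
MediaSource hlsSource = new HlsMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/game/master.m3u8"));
// One prepare() call: AdaptiveTrackSelection now sees every rendition
// listed in the manifest and switches between them seamlessly as the
// bandwidth estimate changes.
mPlayer.prepare(hlsSource);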

Android - set Sound Recorder mime type from Intent

I'm using Cordova to build my mobile app and I need to record sounds.
I'm using the media-capture plugin, which launches the default Android recorder app via this function:
private void captureAudio() {
    Intent intent = new Intent(android.provider.MediaStore.Audio.Media.RECORD_SOUND_ACTION);
    this.cordova.startActivityForResult((CordovaPlugin) this, intent, CAPTURE_AUDIO);
}
The problem is that after I get the file path and try to call getAudioVideoData (which extracts information such as "duration"), the default recording format (.amr) apparently cannot be parsed and an exception is thrown.
private JSONObject getAudioVideoData(String filePath, JSONObject obj, boolean video) throws JSONException {
    MediaPlayer player = new MediaPlayer();
    try {
        player.setDataSource(filePath);
        player.prepare();
        obj.put("duration", player.getDuration() / 1000);
        if (video) {
            obj.put("height", player.getVideoHeight());
            obj.put("width", player.getVideoWidth());
        }
    } catch (IOException e) {
        Log.d(LOG_TAG, "Error: loading video file");
    }
    return obj;
}
I know the problem is the media format, because on my older Android 4.4.4 device the Sound Recorder app has a setting to change the file type; if I set it to .wav, getAudioVideoData works!
I have tried to add the following inside captureAudio() before startActivityForResult():
intent.putExtra(android.provider.MediaStore.Audio.Media.ENTRY_CONTENT_TYPE, "audio/aac");
intent.putExtra(android.provider.MediaStore.Audio.Media.MIME_TYPE, "audio/aac");
intent.putExtra(android.provider.MediaStore.Audio.Media.CONTENT_TYPE, "audio/aac");
...but with no success.
I couldn't find a way to influence the output of the Sound Recorder app via the intent, but I solved the main problem, which was that I couldn't read the recorded audio file's metadata (the duration property).
Fixed with this PR: https://github.com/apache/cordova-plugin-media-capture/pull/50
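For reference, the general shape of such a fix is to read the metadata with MediaMetadataRetriever instead of MediaPlayer, since it copes with .amr recordings that MediaPlayer.prepare() rejects. A sketch (whether it matches the linked PR exactly is not guaranteed):

private long getDurationMs(String filePath) {
    // Reads container metadata without fully preparing a player.
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
        retriever.setDataSource(filePath);
        String ms = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
        return ms != null ? Long.parseLong(ms) : -1;
    } finally {
        retriever.release();
    }
}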

vlc-android-sdk - cannot view RTSP live video

I've been working on an Android application that shows live streaming video via RTSP.
Assuming I have a well-functioning RTSP server that serves H.264 packets, and that the stream is viewed by connecting to rtsp://1.2.3.4:5555/stream.
So I tried the native MediaPlayer/VideoView, but no luck (the video got stuck after 2-3 seconds of playback), so I loaded mrmaffen's vlc-android-sdk and used the following code:
ArrayList<String> options = new ArrayList<String>();
options.add("--no-drop-late-frames");
options.add("--no-skip-frames");
options.add("-vvv");
videoVlc = new LibVLC(options);
newVideoMediaPlayer = new org.videolan.libvlc.MediaPlayer(videoVlc);
final IVLCVout vOut = newVideoMediaPlayer.getVLCVout();
vOut.addCallback(this);
vOut.setVideoView(videoView); //videoView is a pre-defined view which is part of the layout
vOut.attachViews();
newVideoMediaPlayer.setEventListener(this);
Media videoMedia = new Media (videoVlc, Uri.parse(mVideoPath));
newVideoMediaPlayer.setMedia(videoMedia);
newVideoMediaPlayer.play();
The problem is that I see a blank screen.
Keep in mind that when I use an RTSP link with an audio-only stream, it works fine.
Is anyone familiar with this SDK and has an idea about this issue?
Thanks in advance
Try adding this option:
--rtsp-tcp
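In the snippet from the question, that is one more entry in the options list passed to the LibVLC constructor:

options.add("--rtsp-tcp"); // force RTSP over TCP; lost UDP packets often show up as a blank or frozen picture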
I play RTSP streams with the following code:
try {
    Uri rtspUri = Uri.parse("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov");
    final MediaWrapper mw = new MediaWrapper(rtspUri);
    mw.removeFlags(MediaWrapper.MEDIA_FORCE_AUDIO);
    mw.addFlags(MediaWrapper.MEDIA_VIDEO);
    MediaWrapperListPlayer.getInstance().getMediaList().add(mw);
    VLCInstance.getMainMediaPlayer().setEventListener(this);
    VLCInstance.get().setOnHardwareAccelerationError(this);
    final IVLCVout vlcVout = VLCInstance.getMainMediaPlayer().getVLCVout();
    vlcVout.addCallback(this);
    vlcVout.setVideoView(mSurfaceView);
    vlcVout.attachViews();
    final SharedPreferences pref = PreferenceManager.getDefaultSharedPreferences(this);
    final String aout = VLCOptions.getAout(pref);
    VLCInstance.getMainMediaPlayer().setAudioOutput(aout);
    MediaWrapperListPlayer.getInstance().playIndex(this, 0);
} catch (Exception e) {
    Log.e(TAG, e.toString());
}
When you get the playing event, you need to enable the video track:
private void onPlaying() {
    stopLoadingAnimation();
    VLCInstance.getMainMediaPlayer().setVideoTrackEnabled(true);
}
This may be helpful for you

libGDX/Android: How to loop background music without the dreaded gap?

I'm using libGDX and face the problem that background music does not loop flawlessly on various Android devices (a Nexus 7 running Lollipop, for example). Whenever the track loops (i.e. jumps from the end back to the start), a clearly audible gap can be heard. How can background music be played in a loop without this disturbing gap?
I've already tried various approaches like:
Ensuring the number of samples in the track is an exact multiple of the track's sample rate (as mentioned somewhere here on SO).
Various audio formats like .ogg, .m4a, .mp3 and .wav (.ogg seems to be the solution of choice here on SO, but unfortunately it does not work in my case).
Using Android's MediaPlayer with setLooping(true) instead of the libGDX Music class.
Using Android's MediaPlayer.setNextMediaPlayer(). The code looks like the following, and it plays the two tracks without a gap in between, but unfortunately, as soon as the second MediaPlayer finishes, the first does not start again!
/* initialization */
afd = context.getAssets().openFd(filename);
firstBackgroundMusic.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
firstBackgroundMusic.prepare();
firstBackgroundMusic.setOnCompletionListener(this);
secondBackgroundMusic.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
secondBackgroundMusic.prepare();
secondBackgroundMusic.setOnCompletionListener(this);
firstBackgroundMusic.setNextMediaPlayer(secondBackgroundMusic);
secondBackgroundMusic.setNextMediaPlayer(firstBackgroundMusic);
firstBackgroundMusic.start();

@Override
public void onCompletion(MediaPlayer mp) {
    mp.stop();
    try {
        mp.prepare();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Any ideas what's wrong with the code snippet?
Just for the record: it turned out to be unsolvable. In the end, we looped the background music several times inside the file, so the gap appears less frequently. It's no real solution to the problem, but the best workaround we could find.
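One thing worth double-checking for anyone trying the setNextMediaPlayer() route from the question: the next-player links are established only once, and after onCompletion() stops and re-prepares a player, the player that is now running is left without a valid successor. A hedged, untested sketch of re-arming the chain on every hand-over (as noted above, it may still not be perfectly gapless in practice):

@Override
public void onCompletion(MediaPlayer mp) {
    mp.stop();
    try {
        mp.prepare();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Re-register the re-prepared player as the successor of the one that
    // is playing now; otherwise the chain only survives a single hand-over.
    MediaPlayer nowPlaying = (mp == firstBackgroundMusic)
            ? secondBackgroundMusic : firstBackgroundMusic;
    nowPlaying.setNextMediaPlayer(mp);
}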
This is an old question, but I will give my solution in case anyone has the same problem.
The solution requires the libGDX audio extension (deprecated, but it works just fine; if you can't find the link online, here are the jars that I am using). It also needs some external storage space.
The outline is the following:
1. Extract the raw music data with a decoder (the VorbisDecoder class for .ogg, or Mpg123Decoder for .mp3) and save it to external storage (you can check whether the file already exists, so it only has to be extracted once, because it takes some time).
2. Create a RandomAccessFile over the file you just saved to external storage.
3. While playing, set the RandomAccessFile pointer to the correct spot in the file and read a data segment.
4. Play that data segment with the AudioDevice class.
Here is some code.
Extract the music file and save it to external storage. file is the FileHandle of the internal music file; here it is an .ogg, which is why we use VorbisDecoder:
// samples/shortBytes are the decode buffers (declared elsewhere), e.g.:
// short[] samples = new short[2048];
// byte[] shortBytes = new byte[samples.length * 2];
FileHandle external = Gdx.files.external("data/com.package.name/music/" + file.name());
file.copyTo(external);
VorbisDecoder decoder = new VorbisDecoder(external);
FileHandle extreactedDataFile = Gdx.files.external("data/com.package.name/music/" + file.nameWithoutExtension() + ".mdata");
if (extreactedDataFile.exists()) extreactedDataFile.delete();
ShortBuffer sbuffer = ByteBuffer.wrap(shortBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
while (true) {
    if (LogoScreen.shouldBreakMusicLoad) break;
    int num = decoder.readSamples(samples, 0, samples.length);
    sbuffer.put(samples, 0, num);
    sbuffer.position(0);
    extreactedDataFile.writeBytes(shortBytes, 0, num * 2, true);
    if (num <= 0) break;
}
external.delete();
external.delete();
Create a RandomAccessFile pointing to the file we just created:
if (extreactedDataFile.exists()) {
    try {
        raf = new RandomAccessFile(Gdx.files.external(extreactedDataFile.path()).file(), "r");
        raf.seek(0);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Create a buffer so we can translate the bytes read from the file into a short array that gets fed to the AudioDevice (length here is the segment size in samples):
public byte[] rafbufferBytes=new byte[length*2];
public short[] rafbuffer=new short[length];
public ShortBuffer sBuffer=ByteBuffer.wrap(rafbufferBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
When we want to play the file, we create an AudioDevice and a new thread in which we constantly read from the RandomAccessFile and feed the data to the AudioDevice:
device = Gdx.audio.newAudioDevice((int) rate /* the Hz of the music, e.g. 44100 */, isMono /* whether the track is mono */);
currentBytes = 0; // set the file to the beginning
playbackThread = new Thread(new Runnable() {
    @Override
    public synchronized void run() {
        while (playing) {
            if (raf != null) {
                int length = raf.read(rafbufferBytes);
                if (length <= 0) {
                    // end of data: notify the completion listener, then read again
                    // (looping assumes the listener seeks back to the start)
                    ocl.onCompletion(DecodedMusic.this);
                    length = raf.read(rafbufferBytes);
                }
                sBuffer.get(rafbuffer);
                sBuffer.position(0);
                if (length > 20) {
                    try {
                        device.writeSamples(rafbuffer, 0, length / 2);
                        fft.spectrum(rafbuffer, spectrum);
                        currentBytes += length;
                    } catch (com.badlogic.gdx.utils.GdxRuntimeException ex) {
                        ex.printStackTrace();
                        device = Gdx.audio.newAudioDevice((int) rate, MusicPlayer.mono);
                    }
                }
            }
        }
    }
});
playbackThread.setDaemon(true);
playbackThread.start();
And when we want to seek to a position:
public void seek(float pos) {
    currentBytes = (int) (rate * pos);
    try {
        // 4 bytes per sample frame, i.e. 16-bit stereo; presumably * 2 for mono data
        raf.seek(currentBytes * 4);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

How does Android MediaPlayer decide 3D Display by looking SEI FPA?

I want to understand the working principles of the HTC Evo 3D's 3D display; however, the code and HTCDev's tutorial do not help with this. It is said that the SEI FPA bit in the header overrides the choice made by hand, as in:
public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) {
    holder = surfaceholder;
    enableS3D(true, holder.getSurface()); // note: the SEI FPA flag in the content overrides this
}
The play video code:
private void playVideo() {
    release();
    fileName = "HTCDemo.mp4";
    try {
        mediaPlayer = new MediaPlayer();
        final AssetFileDescriptor afd = getAssets().openFd(fileName);
        mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
        mediaPlayer.setDisplay(holder);
        mediaPlayer.prepare();
        mediaPlayer.setOnPreparedListener(this);
        mediaPlayer.setOnVideoSizeChangedListener(this);
        mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
    } catch (Exception e) {
        Log.e(TAG, Log.getStackTraceString(e));
    }
}
At this point, I could not track down where it reads the SEI FPA bit from the header. I would appreciate a pointer to the relevant part of the code. Thanks in advance.
Do you need to parse the SEI FPA bit in the header yourself? That's out of scope for this API. As noted, the codec parses it to enable (and override) the S3D setting.
The overview mentions how you can use third-party tools like x264 to add this bit to existing content.
I'd recommend looking at the code for x264 for help in parsing the file header if that's what you need to do at runtime.
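For example, x264 can stamp the frame-packing SEI onto existing content with its --frame-packing flag (3 = side by side, per the H.264 frame_packing_arrangement SEI); the file names here are placeholders:

x264 --frame-packing 3 -o output-3d.h264 input-sbs.y4m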
