Changing wave header in mp3 file - android

Is it possible to change the wave header in an existing mp3 file? I need it to be 8 or 16 bits per sample.
Any tips on how to proceed? I am totally stuck.
Thank you very much.

Which wave header are you referring to? Are you talking about the frame header? In any case, it doesn't contain a bits-per-sample field because such information would be meaningless to store in the compressed mp3 stream.
It's up to the decoder/player to decide how many bits to use per sample when generating an uncompressed PCM stream from the mp3.
In case you mean that you've slapped a RIFF header onto an mp3 file to be able to import it into various old programs, you can find the format of that header here.
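For reference, that RIFF header describes uncompressed PCM, and its BitsPerSample field only means something for PCM data. Here is a minimal Java sketch of writing the canonical 44-byte header (my own illustration, not from the answer above; field names follow the usual "canonical WAVE format" description, and putting such a header in front of mp3 data will not change how the mp3 itself is decoded):

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: the canonical 44-byte RIFF/WAVE header for uncompressed PCM.
final class WavHeader {
    static void write(DataOutputStream out, int sampleRate, int channels,
                      int bitsPerSample, int dataLen) throws IOException {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        out.writeBytes("RIFF");
        intLE(out, 36 + dataLen);                    // ChunkSize
        out.writeBytes("WAVE");
        out.writeBytes("fmt ");
        intLE(out, 16);                              // Subchunk1Size (16 for PCM)
        shortLE(out, 1);                             // AudioFormat = 1 (PCM)
        shortLE(out, channels);
        intLE(out, sampleRate);
        intLE(out, byteRate);
        shortLE(out, channels * bitsPerSample / 8);  // BlockAlign
        shortLE(out, bitsPerSample);                 // 8 or 16, as the question asks
        out.writeBytes("data");
        intLE(out, dataLen);                         // Subchunk2Size
    }

    // RIFF numbers are little-endian; DataOutputStream writes big-endian,
    // so emit the bytes manually.
    static void intLE(DataOutputStream o, int v) throws IOException {
        o.write(v); o.write(v >> 8); o.write(v >> 16); o.write(v >> 24);
    }
    static void shortLE(DataOutputStream o, int v) throws IOException {
        o.write(v); o.write(v >> 8);
    }
}
```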

Related

Playing two different streams (files) simultaneously with proper sync on Android

I'm using two MediaPlayer instances to send two different streams to each channel. Half the time it works fine, but sometimes there is lag between the left and right channels which is clearly audible. Is there any alternative except SoundPool in Android to play multiple audio files simultaneously and in sync? SoundPool is not suitable for my application since the audio files are large (approx. 20 MiB each). The audio file format in question is FLAC.
I found that there are no proper built-in mixing capabilities in the Android API. I ended up using WAV files instead of FLAC and mixing them on the fly as needed. Here is a higher-level description of how I achieved it:
Read both WAV files and save the data portions in byte arrays (don't forget to strip out the header bytes)
Mix them sample by sample to generate a unified WAV file
In my use case, I just needed to mix the left and right channels, but one can do all sorts of transformations as needed.
Create a temporary file to hold the mixed data
Play the temporary file with MediaPlayer
One can also use AudioTrack to play without storing the resulting byte array in a temporary file, but I chose MediaPlayer for its built-in seekTo functionality.
Hope this approach is helpful; a sketch of it follows.
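A minimal sketch of those steps, assuming both inputs are canonical 44-byte-header WAVs with the same sample rate and 16-bit PCM data. It sums the samples with clamping; the answer's actual use case routed each file to one channel instead, and all names here are illustrative:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

final class WavMixer {
    private static final int HDR = 44;               // canonical WAV header size

    static File mix(File first, File second) throws IOException {
        byte[] a = Files.readAllBytes(first.toPath());
        byte[] b = Files.readAllBytes(second.toPath());
        int n = Math.min(a.length, b.length);

        byte[] out = new byte[n];
        System.arraycopy(a, 0, out, 0, HDR);         // reuse the first file's header

        for (int i = HDR; i + 1 < n; i += 2) {       // 16-bit little-endian samples
            short sa = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            short sb = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            int mixed = sa + sb;                     // add amplitudes...
            mixed = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, mixed)); // ...and clamp
            out[i] = (byte) mixed;
            out[i + 1] = (byte) (mixed >> 8);
        }

        int dataLen = n - HDR;                       // patch header sizes to match
        putIntLE(out, 4, 36 + dataLen);              // ChunkSize
        putIntLE(out, 40, dataLen);                  // Subchunk2Size

        File tmp = File.createTempFile("mixed", ".wav");
        Files.write(tmp.toPath(), out);
        return tmp;                                  // hand to MediaPlayer.setDataSource()
    }

    private static void putIntLE(byte[] buf, int off, int v) {
        buf[off] = (byte) v;             buf[off + 1] = (byte) (v >> 8);
        buf[off + 2] = (byte) (v >> 16); buf[off + 3] = (byte) (v >> 24);
    }
}
```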

How to merge two Sound objects into one Sound object? (AIR for Android) [duplicate]

How can I merge two sounds and save them as a new file? One sound is a loaded mp3 file and the other comes from the microphone. Then I need to upload this sound to a server. Is this possible?
This can all be done, but if you're looking for a simple example with just a few methods to call, I'm afraid it's not that easy.
You can extract bytes from a sound with Sound.extract(). The data consists of sound amplitudes as 32-bit floats (normalized to the range -1..1), left and right channels interleaved. Use ByteArray.readFloat() to get them.
Microphone data can be captured with SampleDataEvent.SAMPLE_DATA, see example here. To mix it with the song, just add the sound amplitudes and write the result into a third array (see the sketch below). The result will essentially be WAV-format (headerless), uncompressed sound data. You can upload it raw, or search for an "as3 mp3 encoder", but these things are rare and written by enthusiasts, so you may or may not get them to work. Also, to mix the sounds correctly, the sample rates of the mic data and the sound file must be equal.
And the upload part: if this were a file on disk, it would be easy with FileReference.upload(). But there's only data in memory, so you can look into the Socket class to send it.
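The mixing step itself is just per-sample addition. The question is AS3, but the arithmetic is language-agnostic; here it is in Java, with a clamp added (my addition) since two full-scale signals can exceed the valid -1..1 range:

```java
// Mix two float sample buffers (each value in -1..1) into a third array.
static float[] mix(float[] song, float[] mic) {
    int n = Math.min(song.length, mic.length);
    float[] out = new float[n];
    for (int i = 0; i < n; i++) {
        float s = song[i] + mic[i];                  // add amplitudes
        out[i] = Math.max(-1f, Math.min(1f, s));     // clamp to avoid overflow
    }
    return out;
}
```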

Android ffmpeg save and append h264 streamed videos

I need to save a video file generated by two video streams coming from two different sources. I'm using rtsp over tcp/ip, and the videos are encoded with h264.
I need to first record the video from the first source and then continue with the second source.
So what I tried was to declare two AVFormatContext instances, initialize both with avformat_open_input(&context, "rtsp://......",NULL,&options)
and then read frames with av_read_frame(context,&packet)
and write them to the video file with av_write_frame(oc,&packet);
It works fine saving the video from the first source, but if, for example, I saved y frames from the first context, then when I try reading and saving frames from the second context into the same file, for the first y frames I try to save, av_write_frame(oc,&packet2);
returns -22 and does not add the frame to the file.
I think the problem is that the context remembers how many frames were read and gives every read packet an identification number to make sure it isn't written twice. When I use a new context, those identification numbers reset, but the AVOutputFormat or the AVFormatContext also retains the id of the packet it expects to receive next, and will not write anything until it receives a packet with that id.
Now I'm wondering how I could solve this inconvenience. I can't find any setter for that id, or any way to reuse the same context. I thought about modifying the ffmpeg sources, but they are pretty complex and I couldn't find what I was looking for.
An alternative would be to save the two videos in two different files, but I don't know how to append them afterwards, since ffmpeg can only append videos encoded with mpeg, and re-encoding the video isn't really an option, as it would take too much time. Also, I couldn't find any other functional way to append two mp4 videos encoded with h264.
I'll be happy to hear any kind of usable idea for this problem.
If you are saving raw h.264 streams, why not simply store two separate streams and then concatenate the file chunks with a system command: system("cat file1 file2 > finalfile")
If your output is one of the following, you can append directly using cat:
Transport stream [ts] with the same codecs
.mpg files
raw h.264 files
raw MPEG-4 files which have exactly the same encoding headers [same dimensions, profile, and toolsets mentioned in the header]
H.263 streams
You cannot directly concatenate mp4 or 3gpp files.
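If you'd rather not shell out, the same byte-level concatenation is easy to do in Java. This sketch (the names are mine) only makes sense for the headerless formats listed above:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

final class StreamConcat {
    // Equivalent of `cat file1 file2 > finalfile` for raw elementary streams.
    static void concat(String outPath, String... inputs) throws IOException {
        try (FileOutputStream dst = new FileOutputStream(outPath)) {
            byte[] buf = new byte[8192];
            for (String in : inputs) {
                try (FileInputStream src = new FileInputStream(in)) {
                    int n;
                    while ((n = src.read(buf)) != -1) {
                        dst.write(buf, 0, n);  // raw h.264 has no container header to skip
                    }
                }
            }
        }
    }
}
```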

Embedding Metadata to H.264 encoded file

I am currently developing an application which produces certain metadata with respect to preview frames coming from the camera. I can see this metadata being produced properly and I have no problems here.
However, I have to embed this metadata into these frames of interest (the frames are processed by a native algorithm to produce the metadata). I am using ffmpeg with x264 to encode the frames into H.264. I have checked x264.h and some documentation but failed to find what I'm looking for.
My question is; is there any unused portion of H.264 syntax that I can embed my metadata to encoded frames?
I hope I was clear enough. Thanks in advance.
Most video elementary streams have a provision for "user data". In h.264 this is part of the SEI NAL unit. You can add one before every frame you want to associate the metadata with. I don't think that x264 has support for adding user data from the outside.
Two choices:
Modify x264/ffmpeg to add the SEI message wherever you want, taking input in some form you like.
Create your stream and your metadata, then write a small separate program that reads your metadata, parses the file, and pushes an SEI NAL unit before the frame you want (sketched below).
For the SEI syntax you should be able to google it, but the best place to look is the H.264 standard. An easier way is to just look at the code in x264, which inserts one user-data SEI at the beginning of the stream (containing the encoding parameters).
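To make the second option concrete, here is a rough Java sketch of building a user_data_unregistered SEI message (payload type 5) as an Annex-B NAL unit, per the H.264 spec. The UUID value and all names are illustrative; you would splice the returned bytes in front of the target frame's first NAL:

```java
import java.io.ByteArrayOutputStream;

final class SeiUserData {
    // Any fixed 16-byte UUID identifying your metadata (illustrative value).
    private static final byte[] UUID16 = {
        0x4d, 0x59, 0x2d, 0x41, 0x50, 0x50, 0x2d, 0x4d,
        0x45, 0x54, 0x41, 0x2d, 0x30, 0x30, 0x30, 0x31
    };

    static byte[] build(byte[] userData) {
        ByteArrayOutputStream rbsp = new ByteArrayOutputStream();
        rbsp.write(5);                                 // payloadType = 5 (user_data_unregistered)
        int size = UUID16.length + userData.length;
        while (size >= 255) { rbsp.write(0xFF); size -= 255; }  // ff-chained payloadSize
        rbsp.write(size);
        rbsp.write(UUID16, 0, UUID16.length);
        rbsp.write(userData, 0, userData.length);
        rbsp.write(0x80);                              // rbsp_trailing_bits

        byte[] escaped = escape(rbsp.toByteArray());   // emulation prevention

        ByteArrayOutputStream nal = new ByteArrayOutputStream();
        nal.write(0); nal.write(0); nal.write(0); nal.write(1);  // Annex-B start code
        nal.write(0x06);                               // nal_unit_type = 6 (SEI)
        nal.write(escaped, 0, escaped.length);
        return nal.toByteArray();
    }

    // Insert emulation_prevention_three_byte so no 0x000000..0x000003 run appears.
    private static byte[] escape(byte[] in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int zeros = 0;
        for (byte b : in) {
            if (zeros >= 2 && (b & 0xFF) <= 3) { out.write(3); zeros = 0; }
            out.write(b);
            zeros = (b == 0) ? zeros + 1 : 0;
        }
        return out.toByteArray();
    }
}
```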

How to edit or append an audio recording?

I am creating an application in Android. My application records audio, stores it on the SD card,
and shows the recordings in a list. I have all of this working, but I also need to edit the audio: suppose I open a recording,
then I need to resume that recording from its end and append more audio to it.
For example: suppose my recording "MyRecord01" is 04.06 sec long and I want to add more audio to it;
then the new audio must start from 04.07.
I have searched a lot but didn't find anything relevant. Please direct me to any link or reference,
or give me any hint.
Thanks in advance.
Here is the code you need. Hope it works for you.
If I've identified your problem incorrectly, feel free to comment and tell me.
This is not too difficult. The key is to understand how the audio you've recorded is formatted. It's easiest if you use an uncompressed format, like WAV or AIFF. Here is one (of many) explanations of the WAV file format: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Most WAV files can be easily appended to if the data chunk is last (this may be a requirement of the format, I can't recall for sure). If it is not last, you'll first have to copy the file and modify it such that the data chunk is last. This can be a time-consuming step if the file is large.
Once that's done, you simply append the new audio data to the data chunk and update a few pieces of data elsewhere in the file: the data chunk size (the Subchunk2 size, in the data chunk header) and the overall chunk size (in the RIFF descriptor). (That will all make more sense to you once you read the explanation; a sketch follows.) You may want to stage those updates while you are appending so that the file is easy to fix in case your app crashes during the append -- you don't want to corrupt the user's data.
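A minimal sketch of that append-and-patch step, assuming the canonical 44-byte header with the data chunk last (offsets per the linked explanation; not crash-safe as written, so stage the size updates carefully in a real app):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

final class WavAppender {
    // Append raw PCM (matching the file's existing fmt chunk) and fix the sizes.
    static void append(String wavPath, byte[] newPcm) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(wavPath, "rw")) {
            long oldLen = f.length();
            f.seek(oldLen);
            f.write(newPcm);                     // grow the data chunk

            long newLen = oldLen + newPcm.length;
            f.seek(4);                           // ChunkSize in the RIFF descriptor
            intLE(f, (int) (newLen - 8));
            f.seek(40);                          // Subchunk2Size in the data chunk header
            intLE(f, (int) (newLen - 44));
        }
    }

    static void intLE(RandomAccessFile f, int v) throws IOException {
        f.write(v & 0xFF); f.write((v >> 8) & 0xFF);
        f.write((v >> 16) & 0xFF); f.write((v >> 24) & 0xFF);
    }
}
```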
The process is similar for AIFFs, but there are differences in the details.
For MP3s, I am not 100% sure off the top of my head how this would work. If memory serves, the process is conceptually easier because MP3 files are structured as a series of independent chunks, each with its own mini header. Theoretically, you can just append more chunks. In practice it will be more complex, though, because of the compression and things like ID3 tags.
