I'm trying to create a screen-capture app for Android. I already have the WebRTC portion set up with a video capturer that uses the Camera2Enumerator library. How can I modify this to capture from a pre-recorded video instead of the camera?
Thanks!
Just wanted to give an update that I have solved this. I'm unable to share the entire code but here's a process that might help:
Acquire one frame of your pre-recorded file and store it in a byte array (it must be in YUV format; NV21 in my case, to match the NV21Buffer below).
Replace the VideoCapturer() with the following:
fun onGetFrame(p0: ByteArray?) {
    // capture timestamp in nanoseconds, as VideoFrame expects
    val timestampNS = java.util.concurrent.TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime())
    // wrap the NV21 bytes; 288 x 352 is the width/height of my pre-recorded frames
    val buffer = NV21Buffer(p0, 288, 352, null)
    val videoFrame = VideoFrame(buffer, 0, timestampNS)
    localVideoSource.capturerObserver.onFrameCaptured(videoFrame)
    videoFrame.release()
}
where p0 is the byte array with the frame
Call this function in startLocalVideoCapture() using a timer (every few milliseconds; I used a 10 ms interval) https://developer.android.com/reference/android/os/CountDownTimer (see the sketch after the next step).
Then remove this call in startLocalVideoCapture():
VideoCapturer.initialize(
    surfaceTextureHelper,
    localVideoOutput.context,
    localVideoSource.capturerObserver)
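A minimal sketch of that timer, assuming frameBytes holds the NV21 data of the current frame (the names are placeholders; CountDownTimer's tick interval is in milliseconds):

// Sketch: push a frame to onGetFrame() every 10 ms until cancelled.
val frameTimer = object : CountDownTimer(Long.MAX_VALUE, 10) {
    override fun onTick(millisUntilFinished: Long) = onGetFrame(frameBytes)
    override fun onFinish() { /* effectively never reached */ }
}
frameTimer.start()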
I have an mp4 video on my device whose frames I'm trying to extract using FFmpegMediaMetadataRetriever. I am able to retrieve a frame by
mmr.getFrameAtTime(time*1000000L)
I am trying to retrieve multiple frames in a time range using
for (i in 0..frames) {
    val frame = mmr.getFrameAtTime(i * 1000000L / fps!!, FFmpegMediaMetadataRetriever.OPTION_CLOSEST)
    if (frame != null)
        bitMapList.add(frame)
}
Here, all the frames come back as null when I use FFmpegMediaMetadataRetriever.OPTION_CLOSEST.
When I omit the option, I do get frames, but they are all identical.
The fps is retrieved using
fps = mmr.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_FRAMERATE).toInt()
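For reference, getFrameAtTime() expects the position in microseconds, so what I'm aiming for is roughly the following (a sketch with null checks; variable names are placeholders):

// Sketch: step through the clip one frame interval at a time (times in microseconds).
val durationMs = mmr.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_DURATION)?.toLong() ?: 0L
val frameRate = mmr.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_FRAMERATE)?.toInt() ?: 30
val frameIntervalUs = 1_000_000L / frameRate
var tUs = 0L
while (tUs < durationMs * 1000L) {
    mmr.getFrameAtTime(tUs, FFmpegMediaMetadataRetriever.OPTION_CLOSEST)?.let { bitMapList.add(it) }
    tUs += frameIntervalUs
}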
Can someone please guide me on how to do this correctly?
Recently, I was working with CameraX to record video with the front camera only, but I ran into an issue where the video is mirrored after saving.
Currently, I am using a library (Mp4Composer-android) to flip the video back after recording, which adds processing time. I noticed that Snapchat and Instagram produce their output without this extra processing.
I also noticed that the native camera application provides an option to choose whether the video should be mirrored or not.
The configuration I have added to CameraX:
videoCapture = VideoCapture.Builder()
    .apply {
        setBitRate(2000000)
        setVideoFrameRate(24)
    }
    .build()
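For context, the use case is bound to the front camera roughly like this (a sketch; cameraProvider, lifecycleOwner and previewUseCase stand in for my actual wiring):

// Sketch: bind the VideoCapture use case to the front camera.
val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA
cameraProvider.unbindAll()
cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, previewUseCase, videoCapture)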
How can I make my camera not mirror the video?
Temporary Solution:
I used this library as a temporary solution. The issue with it was that I had to process the video after recording, which took a considerable amount of time. I used this code:
Add this to gradle:
//Video Composer
implementation 'com.github.MasayukiSuda:Mp4Composer-android:v0.4.1'
Code for flipping:
Mp4Composer(videoPath, video)
    .flipHorizontal(true)
    .listener(object : Mp4Composer.Listener {
        override fun onProgress(progress: Double) {}
        override fun onCurrentWrittenVideoTime(timeUs: Long) {}
        override fun onCompleted() {}
        override fun onCanceled() {}
        override fun onFailed(exception: Exception?) {}
    })
    .start()
Note: this will also compress your video. Look into the library documentation for more details.
An answer that was given to me by a senior developer who has worked with video in the NDK for a long time:
Think of the frames given out by the camera as travelling along a dedicated highway. There is a way to capture all the frames going through that highway:
Capture the frames coming through that highway
Flip the pixels of each frame
Give out the frames through that same highway
He didn't specify how to capture and release the frames.
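For step 2 on its own, flipping the pixels of a frame is straightforward once it is available as a Bitmap; the unspecified capture-and-dispatch parts are the hard ones. A minimal Kotlin sketch of just the flip:

import android.graphics.Bitmap
import android.graphics.Matrix

// Flip one frame horizontally; capturing and re-emitting frames is the expensive part.
fun flipHorizontally(src: Bitmap): Bitmap {
    val m = Matrix().apply { preScale(-1f, 1f) }
    return Bitmap.createBitmap(src, 0, 0, src.width, src.height, m, true)
}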
Why I didn't use that solution (the issue):
If we have to perform this action in real time, we have to do it very efficiently. Depending on the quality of the camera, we have to capture anywhere from 24 to 120 frames per second, process them, and dispatch them.
To do that, we need NDK developers and a lot of engineering, which most startups can't afford.
I am developing an application using Android OpenCV.
This app offers two operations:
1. The frame read from the camera is passed to JNI via Mat.getNativeObjAddr(), and the new image is returned through JavaCameraView's onCameraFrame() function.
2. It reads a video clip from storage, processes each frame the same way as operation 1, and returns the resulting image via the onCameraFrame() function.
The first operation is implemented simply, as follows, and works normally:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame)
{
    if (inputFrame != null) {
        Detect(inputFrame.rgba().getNativeObjAddr(), boardImage.getNativeObjAddr());
    }
    return boardImage;
}
However, the problem occurs with the second operation.
As far as I know, files in device storage cannot be read directly from JNI.
I already tried FFmpegMediaPlayer and MediaMetadataRetriever, which I found through a Google search. However, the getFrameAtTime() function provided by MediaMetadataRetriever took an average of 170 ms to grab a bitmap of a specific frame from a 1920x1080 video. What I have to develop must show the video results in real time at 30 fps. In operation 1, the native function Detect() takes about 2 ms to process one frame.
For these reasons, I want to do the following: Java sends the video's path (e.g. /storage/emulated/0/download/video.mp4) to JNI, native functions process the video one frame at a time, and the resulting image is displayed via onCameraFrame().
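Concretely, the split I have in mind might look like this (a Kotlin sketch with made-up native function names, just to illustrate the Java/JNI boundary):

// Hypothetical JNI declarations: the native side opens the file once and
// then fills the given Mat with the next processed frame on each call.
external fun openVideo(path: String): Boolean
external fun readNextProcessedFrame(outMatAddr: Long): Boolean

// onCameraFrame() would then just hand back the Mat filled by native code:
override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame): Mat {
    readNextProcessedFrame(boardImage.nativeObjAddr)
    return boardImage
}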
Is there a proper way? I look forward to your reply. Thank you!
Is it possible to compare a voice with an already recorded voice on the phone? Based on the comparison, we could give a rating like Good, Very Good, Excellent, etc., with the closest-sounding voice getting the highest rating.
Does anybody know whether this is possible in Android?
Help is highly appreciated.
For a general audio processing library I can recommend marsyas. Unfortunately the official home page is currently down.
Marsyas even provides a sample android application. After getting a proper signal analysis framework, you need to analyse your signal. For example, the AimC implementation for marsyas can be used to compare voice.
I recommend installing marsyas on your computer and fiddle with the python example scripts.
For your voice analysis, you could use a network like this:
vqNetwork = ["Series/vqlizer", [
    "AimPZFC/aimpzfc",
    "AimHCL/aimhcl",
    "AimLocalMax/aimlocalmax",
    "AimSAI/aimsai",
    "AimBoxes/aimBoxes",
    "AimVQ/vq",
    "Gain/g",
]]
This network takes your audio data and transforms it the way it would be processed by a human ear. After that, it uses vector quantization to reduce the many possible vectors to very specific codebooks with 200 entries. You can then translate the output of the network into readable characters (UTF-8, for example), which you can then compare using something like string edit distance (e.g. Levenshtein distance).
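As an illustration of that last step, a plain Levenshtein distance between two such symbol strings could be computed like this (a generic sketch, not tied to marsyas):

// Minimal Levenshtein (edit) distance via dynamic programming.
fun levenshtein(a: String, b: String): Int {
    val dp = Array(a.length + 1) { IntArray(b.length + 1) }
    for (i in 0..a.length) dp[i][0] = i
    for (j in 0..b.length) dp[0][j] = j
    for (i in 1..a.length) {
        for (j in 1..b.length) {
            val cost = if (a[i - 1] == b[j - 1]) 0 else 1
            dp[i][j] = minOf(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
        }
    }
    return dp[a.length][b.length]
}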
Another possibility is to use MFCCs (Mel Frequency Cepstral Coefficients), which marsyas supports as well, and then use something like Dynamic Time Warping to compare the outputs. This document describes the process pretty well.
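A bare-bones Dynamic Time Warping comparison over two MFCC sequences might look like this (a sketch, assuming each frame is a DoubleArray of coefficients; a lower cost means the utterances are more alike):

import kotlin.math.sqrt

// Euclidean distance between two MFCC frames.
fun frameDist(x: DoubleArray, y: DoubleArray): Double =
    sqrt(x.indices.sumOf { val d = x[it] - y[it]; d * d })

// Classic DTW cost between two sequences of frames.
fun dtw(s: List<DoubleArray>, t: List<DoubleArray>): Double {
    val dp = Array(s.size + 1) { DoubleArray(t.size + 1) { Double.POSITIVE_INFINITY } }
    dp[0][0] = 0.0
    for (i in 1..s.size) {
        for (j in 1..t.size) {
            val cost = frameDist(s[i - 1], t[j - 1])
            dp[i][j] = cost + minOf(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
        }
    }
    return dp[s.size][t.size]
}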
Using the 'Musicg' library, you can compare two voice files (.wav format).
Use a Wave object to load each wave file, then use the FingerprintSimilarity function: you pass in the pre-recorded wav files and it gives you the output.
But you should know that the "musicg" library deals only with .wav files, so if you have an .mp3 file, for example, you need to convert it to a wave file first.
Android Gradle dependency:
implementation group: 'com.github.fracpete', name: 'musicg', version: '1.4.2.2'
for more:
https://github.com/loisaidasam/musicg
sample code:
private void compareTempFile(String str) {
    Wave w1 = new Wave(Environment.getExternalStorageDirectory().getAbsolutePath() + "/sample1.wav");
    Wave w2 = new Wave(Environment.getExternalStorageDirectory().getAbsolutePath() + "/sample2.wav");
    System.out.println("Wave 1 = " + w1.getWaveHeader());
    System.out.println("Wave 2 = " + w2.getWaveHeader());
    FingerprintSimilarity fpsc1 = w2.getFingerprintSimilarity(w1);
    float scorec = fpsc1.getScore();
    float simc = fpsc1.getSimilarity();
    tvSim.setText(" Similarity = " + simc + "\nScore = " + scorec);
    System.out.println("Score = " + scorec);
    System.out.println("Similarity = " + simc);
}
I have some audio data (raw AAC) inside a byte array for playback. During playback, I need to get its volume/amplitude to draw (something like an audio wave when playing).
What I'm thinking now is to get the volume/amplitude of the current audio every 200 milliseconds and use that for drawing (using a canvas), but I'm not sure how to do that.
** 2011/07/13: added the following **
Sorry, I was delayed by another project until now.
What I tried was running the following code in a thread while playing my AAC audio:
while (playing) {   // pseudo-condition: loop while the audio is playing
    // int v = audio.getStreamVolume(AudioManager.MODE_NORMAL);
    // int v = audio.getStreamVolume(AudioManager.STREAM_MUSIC);
    int v = audio.getStreamVolume(AudioManager.STREAM_DTMF);
    // Tried the 3 settings above
    Log.i(HiCardConstants.TAG, "Volume - " + v);
    try { Thread.sleep(200); } catch (InterruptedException ie) { }
}
But I only get a fixed value back, not a dynamic volume...
And I also found a class named Visualizer, but unfortunately, my target platform is Android 2.2 ... :-(
Any suggestions are welcome :-)
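One note on why that loop prints a constant: AudioManager.getStreamVolume() returns the user's volume setting for the stream, not the signal level, so the amplitude has to be computed from the decoded PCM samples. A minimal RMS sketch, assuming the AAC has already been decoded to 16-bit PCM shorts (a hypothetical helper, not part of the code above):

import kotlin.math.sqrt

// RMS amplitude of one window of 16-bit PCM samples (e.g. the samples covering ~200 ms).
fun rmsAmplitude(pcm: ShortArray): Double {
    if (pcm.isEmpty()) return 0.0
    var sum = 0.0
    for (s in pcm) sum += s.toDouble() * s.toDouble()
    return sqrt(sum / pcm.size)   // roughly 0..32768; scale as needed for drawing
}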
After days and nights, I found that an Android app project called ringdroid can solve my problem.
It helps me get an array of audio gain values, which I can use to draw my sound wave.
BTW, in my experience, some .AMR or .MP3 files can't be parsed correctly due to too low a bitrate...
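Such a gain array can then be drawn with a Canvas roughly like this (a sketch; gains and paint are placeholders):

// Draw each gain value as a vertical line around the vertical centre of the canvas.
fun drawWave(canvas: Canvas, gains: IntArray, paint: Paint) {
    if (gains.isEmpty()) return
    val midY = canvas.height / 2f
    val stepX = canvas.width.toFloat() / gains.size
    val maxGain = (gains.maxOrNull() ?: 1).coerceAtLeast(1)
    gains.forEachIndexed { i, g ->
        val h = g.toFloat() / maxGain * midY
        canvas.drawLine(i * stepX, midY - h, i * stepX, midY + h, paint)
    }
}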