I want my Android app to recognize sounds. For example, I want to know whether the sound from the microphone is a clap, a knock, or something else.
Do I need to use math, or can I just use some library for that?
If there are any libraries for sound analysis, please let me know. Thanks.
The musicg library is useful for whistle detection. For claps I wouldn't recommend it, because it reacts to every loud sound (even speech).
For detecting claps and other percussive sounds I recommend TarsosDSP. It has a simple API with rich functionality (pitch detection and so on). For clap detection you can use something like this (if you use TarsosDSPAndroid-v3):
MicrophoneAudioDispatcher mDispatcher = new MicrophoneAudioDispatcher((int) SAMPLE_RATE, BUFFER_SIZE, BUFFER_OVERLAP);
double threshold = 8;
double sensitivity = 20;
mPercussionDetector = new PercussionOnsetDetector(22050, 1024,
        new OnsetHandler() {
            @Override
            public void handleOnset(double time, double salience) {
                Log.d(TAG, "Clap detected!");
            }
        }, sensitivity, threshold);
mDispatcher.addAudioProcessor(mPercussionDetector);
new Thread(mDispatcher).start();
You can tune your detector by adjusting sensitivity (0-100) and threshold (0-20).
Good luck!
There is an API that works very well for your needs, in my opinion:
http://code.google.com/p/musicg/
Good luck!
You don't need math and you don't need AudioRecord. Just check MediaRecorder.getMaxAmplitude() every 1000 milliseconds.
This code and this code might be helpful.
Here is some code you will need.
public class Clapper
{
    private static final String TAG = "Clapper";

    private static final long DEFAULT_CLIP_TIME = 1000;
    private long clipTime = DEFAULT_CLIP_TIME;

    private AmplitudeClipListener clipListener;

    private boolean continueRecording;

    /**
     * how much louder is required to hear a clap; 10000, 18000, 25000 are good
     * values
     */
    private int amplitudeThreshold;

    /**
     * requires only a little noise by the user to trigger; background noise may
     * trigger it
     */
    public static final int AMPLITUDE_DIFF_LOW = 10000;
    public static final int AMPLITUDE_DIFF_MED = 18000;

    /**
     * requires a lot of noise by the user to trigger; background noise isn't
     * likely to be this loud
     */
    public static final int AMPLITUDE_DIFF_HIGH = 25000;

    private static final int DEFAULT_AMPLITUDE_DIFF = AMPLITUDE_DIFF_MED;

    private MediaRecorder recorder;

    private String tmpAudioFile;

    public Clapper() throws IOException
    {
        this(DEFAULT_CLIP_TIME, "/tmp.3gp", DEFAULT_AMPLITUDE_DIFF, null, null);
    }

    public Clapper(long snipTime, String tmpAudioFile,
            int amplitudeDifference, Context context, AmplitudeClipListener clipListener)
            throws IOException
    {
        this.clipTime = snipTime;
        this.clipListener = clipListener;
        this.amplitudeThreshold = amplitudeDifference;
        this.tmpAudioFile = tmpAudioFile;
    }

    public boolean recordClap()
    {
        Log.d(TAG, "record clap");
        boolean clapDetected = false;

        try
        {
            recorder = AudioUtil.prepareRecorder(tmpAudioFile);
        }
        catch (IOException io)
        {
            Log.d(TAG, "failed to prepare recorder ", io);
            throw new RecordingFailedException("failed to create recorder", io);
        }

        recorder.start();
        int startAmplitude = recorder.getMaxAmplitude();
        Log.d(TAG, "starting amplitude: " + startAmplitude);

        do
        {
            Log.d(TAG, "waiting while recording...");
            waitSome();
            int finishAmplitude = recorder.getMaxAmplitude();
            if (clipListener != null)
            {
                clipListener.heard(finishAmplitude);
            }

            int ampDifference = finishAmplitude - startAmplitude;
            if (ampDifference >= amplitudeThreshold)
            {
                Log.d(TAG, "heard a clap!");
                clapDetected = true;
            }
            Log.d(TAG, "finishing amplitude: " + finishAmplitude + " diff: "
                    + ampDifference);
        } while (continueRecording || !clapDetected);

        Log.d(TAG, "stopped recording");
        done();

        return clapDetected;
    }

    private void waitSome()
    {
        try
        {
            // wait a while
            Thread.sleep(clipTime);
        } catch (InterruptedException e)
        {
            Log.d(TAG, "interrupted");
        }
    }

    /**
     * need to call this when completely done with recording
     */
    public void done()
    {
        Log.d(TAG, "stop recording");
        if (recorder != null)
        {
            if (isRecording())
            {
                stopRecording();
            }
            // now stop and release the recorder
            recorder.stop();
            recorder.release();
        }
    }

    public boolean isRecording()
    {
        return continueRecording;
    }

    public void stopRecording()
    {
        continueRecording = false;
    }
}
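If it helps, here is a minimal usage sketch of the class above (my own, untested addition; it assumes the AudioUtil helper and AmplitudeClipListener interface from the same source are available). Since recordClap() blocks, keep it off the UI thread:

// Hypothetical usage of the Clapper class above; recordClap() blocks,
// so call it from a background thread, never from the UI thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            Clapper clapper = new Clapper(1000, "/tmp.3gp",
                    Clapper.AMPLITUDE_DIFF_MED, null, null);
            boolean heard = clapper.recordClap(); // loops until a clap is detected
            Log.d("ClapperDemo", "clap detected: " + heard);
        } catch (IOException e) {
            Log.e("ClapperDemo", "failed to set up recorder", e);
        }
    }
}).start();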
I realize this is a year old, but I stumbled across it. I'm pretty sure that general, open-domain sound recognition is not a solved problem. So, no, you're not going to find any kind of library to do what you want on Android, because such code doesn't exist anywhere yet. If you pick some restricted domain, you could train a classifier to recognize the kinds of sounds you're interested in, but that would require lots of math, and lots of examples of each of the potential sounds. It would be pretty cool if the library you wanted existed, but as far as I know, the technology just isn't there yet.
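To make "lots of math" slightly more concrete, here is an illustrative sketch (mine, not from any library) of two classic per-frame features, RMS energy and zero-crossing rate, that a restricted-domain classifier could consume. Capturing PCM frames (e.g. via AudioRecord) and the classifier itself are left out:

// Illustrative only: two simple per-frame features for a sound classifier.
// 'frame' is a buffer of 16-bit PCM samples from AudioRecord (not shown).
static double rmsEnergy(short[] frame) {
    double sum = 0;
    for (short s : frame) {
        sum += (double) s * s;
    }
    return Math.sqrt(sum / frame.length);
}

static double zeroCrossingRate(short[] frame) {
    int crossings = 0;
    for (int i = 1; i < frame.length; i++) {
        if ((frame[i - 1] >= 0) != (frame[i] >= 0)) {
            crossings++;
        }
    }
    return (double) crossings / frame.length;
}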
I am looking for ideas on how to handle envelope re-triggering of new notes in a monophonic sampler setup, which causes clicks if the previous note's envelope hasn't finished. In the current setup the previous note's instance is killed on the spot when a new note is triggered (the synth.stop method call), causing a click because the envelope doesn't get a chance to finish and reach 0 volume. Any hints are welcome.
I have also included in the code below my own unsatisfactory solution: setting the gain of the voice to 0 and then putting the voice to sleep for 70 ms. This introduces 70 ms of latency to the user interaction but gets rid of any clicks. Any value below 70 ms in the sleep doesn't solve the clicking.
The variables are public static at the moment just so I can still play around with where I'm calling them.
Here is my listener code:
buttonNoteC1Get.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            buttonNoteC1Get.setBackgroundColor(myColorWhite); // reset gui color
            if (sample.getSustainBegin() > 0) { // trigger release for looping sample
                ampEnv.dataQueue.queue(ampEnvelope, 3, 1); // release called
            }
            limit = 0; // reset action down limiter
            return true;
        }
        if (limit == 0) { // respond only to first touch event
            if (samplePlayer != null) { // check if a previous note exists
                synth.stop(); // stop instance of previous note
            }
            buttonNoteC1Get.setBackgroundColor(myColorGrey); // key pressed gui color
            samplePitch = octave * 1; // set samplerate multiplier
            Sampler.player(); // call setup code for new note
            Sampler.play(); // play new note
            limit = 1; // prevent stacking of action down touch events
        }
        return false;
    }
}); // end listener
Here is my Sampler code:
public class Sampler {

    public static VariableRateDataReader samplePlayer;
    public static LineOut lineOut;
    public static FloatSample sample;
    public static SegmentedEnvelope ampEnvelope;
    public static VariableRateMonoReader ampEnv;
    public static MixerMonoRamped mixerMono;
    public static double[] ampData;
    public static FilterStateVariable mMainFilter;

    public static Synthesizer synth = JSyn.createSynthesizer(new JSynAndroidAudioDevice());

    // load the chosen sample, called by instrument select spinner
    static void loadSample() {
        SampleLoader.setJavaSoundPreferred(false);
        try {
            sample = SampleLoader.loadFloatSample(sampleFile);
        } catch (IOException e) {
            e.printStackTrace();
        }
    } // end load sample

    // initialize sampler voice
    static void player() {
        // Create an amplitude envelope and fill it with data.
        ampData = new double[] {
            envA, 0.9,   // pair 0, "attack"
            envD, envS,  // pair 1, "decay"
            0, envS,     // pair 2, "sustain"
            envR, 0.0,   // pair 3, "release"
            /* 0.04, 0.0 // pair 4, "silence" */
        };
        // initialize voice
        ampEnvelope = new SegmentedEnvelope(ampData);
        synth.add(ampEnv = new VariableRateMonoReader());
        synth.add(lineOut = new LineOut());
        synth.add(mixerMono = new MixerMonoRamped(2));
        synth.add(mMainFilter = new FilterStateVariable());
        // connect signal flow
        mixerMono.output.connect(mMainFilter.input);
        mMainFilter.output.connect(0, lineOut.input, 0);
        mMainFilter.output.connect(0, lineOut.input, 1);
        // set control values
        mixerMono.amplitude.set(sliderVal / 100.0f);
        mMainFilter.amplitude.set(0.9);
        mMainFilter.frequency.set(mainFilterCutFloat);
        mMainFilter.resonance.set(mainFilterResFloat);
        // initialize and connect sampler voice
        if (sample.getChannelsPerFrame() == 1) {
            synth.add(samplePlayer = new VariableRateMonoReader());
            ampEnv.output.connect(samplePlayer.amplitude);
            samplePlayer.output.connect(0, mixerMono.input, 0);
            samplePlayer.output.connect(0, mixerMono.input, 1);
        } else if (sample.getChannelsPerFrame() == 2) {
            synth.add(samplePlayer = new VariableRateStereoReader());
            ampEnv.output.connect(samplePlayer.amplitude);
            samplePlayer.output.connect(0, mixerMono.input, 0);
            samplePlayer.output.connect(1, mixerMono.input, 1);
        } else {
            throw new RuntimeException("Can only play mono or stereo samples.");
        }
    } // end player

    // play the sample
    public static void play() {
        if (samplePlayer != null) {
            samplePlayer.dataQueue.clear();
            samplePlayer.rate.set(sample.getFrameRate() * samplePitch); // set pitch
        }
        // start the synth engine
        synth.start();
        lineOut.start();
        ampEnv.start();
        // play one-shot sample
        if (sample.getSustainBegin() < 0) {
            samplePlayer.dataQueue.queue(sample);
            ampEnv.dataQueue.queue(ampEnvelope);
        // play sustaining sample
        } else {
            samplePlayer.dataQueue.queueOn(sample);
            ampEnv.dataQueue.queue(ampEnvelope, 0, 3);
            ampEnv.dataQueue.queueLoop(ampEnvelope, 1, 2);
        }
    }
}
Here is the unsatisfactory solution that introduces 70 ms of latency, changing the action-down handling of a previous note to this:
if (limit == 0) {
    if (samplePlayer != null) {
        mixerMono.amplitude.set(0);
        try {
            synth.sleepFor(0.07);
            synth.stop(); // stop instance of previous note
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
You should not call synth.start() and synth.stop() for every note. Think of it like powering on a physical synthesizer. Just start the synth and the lineOut once. If the ampEnv is connected indirectly to something else that is start()ed then you do not need to start() the ampEnv.
Then just queue your samples and envelopes when you want to start a note.
When you are all done playing notes then call synth.stop().
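Applied to the Sampler code above, that could look roughly like this sketch (untested against the full project; startEngine() is just a name I made up). It keeps the engine running across notes instead of stopping it per note:

// Sketch: start the engine once, e.g. after player() has built the unit graph.
static void startEngine() {
    synth.start();
    lineOut.start();
}

// play() then only re-queues data; no synth.start()/synth.stop() per note.
public static void play() {
    samplePlayer.dataQueue.clear();
    samplePlayer.rate.set(sample.getFrameRate() * samplePitch); // set pitch
    if (sample.getSustainBegin() < 0) {
        // one-shot sample
        samplePlayer.dataQueue.queue(sample);
        ampEnv.dataQueue.queue(ampEnvelope);
    } else {
        // sustaining sample: queue attack/decay, loop the sustain pair
        samplePlayer.dataQueue.queueOn(sample);
        ampEnv.dataQueue.queue(ampEnvelope, 0, 3);
        ampEnv.dataQueue.queueLoop(ampEnvelope, 1, 2);
    }
}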
I'm in the final stages of creating my Foley app! (It's my first ever app, created during my internship; I learned how to make it myself, so sorry if any of my questions are stupid!)
My game is finished and working (also thanks to many of you!) and now I am trying to make the second part.
The idea is that it will show a video without sound. People then will have the opportunity to use our Foley room to record sounds over the video.
So far so good, but I still have a few issues.
First, I can't seem to find a way to stop the video/audio and replay it. When you take all the correct steps there is no issue, but as most of you will know, you want to make your code as waterproof as possible, so it would be nice to find a solution for this.
Second, I'd like to know whether it's possible to export the combination of the video and the recorded audio to YouTube. (I found code for uploading, but haven't tried whether my idea is possible; does anyone have experience with this?)
//<----------------------------------- Variable declarations --------------->
MediaRecorder _recorder;
MediaPlayer _player;
Button _start;
Button _stop;
public static bool recorderplaying = false;

protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    SetContentView(Resource.Layout.VideoActie);

    // video variables
    var videoView = FindViewById<VideoView>(Resource.Id.PlaceHolder);
    Button play = FindViewById<Button>(Resource.Id.Play);
    Button stop = FindViewById<Button>(Resource.Id.stop);

    // recording variables
    _start = FindViewById<Button>(Resource.Id.record);
    _stop = FindViewById<Button>(Resource.Id.stop);

    //<----------------------------------- Recording audio ---------------->
    // path for saving the recording
    string path = $"{Android.OS.Environment.ExternalStorageDirectory.AbsolutePath}/test.3gpp";

    // request access to the phone's storage
    if (Build.VERSION.SdkInt > BuildVersionCodes.M)
    {
        if (CheckSelfPermission(Manifest.Permission.WriteExternalStorage) != Android.Content.PM.Permission.Granted
            || CheckSelfPermission(Manifest.Permission.RecordAudio) != Android.Content.PM.Permission.Granted)
        {
            RequestPermissions(new[] { Manifest.Permission.WriteExternalStorage, Manifest.Permission.RecordAudio }, 0);
        }
    }

    //<----------------------------------- Playing video -------------->
    // video source
    videoView.SetMediaController(new MediaController(this));
    videoView.SetVideoPath($"android.resource://{PackageName}/{Resource.Raw.puppy}");
    videoView.RequestFocus();

    //<----------------------------------- Buttons --------------------->
    // start recording
    _start.Click += delegate
    {
        _stop.Enabled = !_stop.Enabled;
        _start.Enabled = !_start.Enabled;
        _recorder.SetAudioSource(AudioSource.Mic);
        _recorder.SetOutputFormat(OutputFormat.ThreeGpp);
        _recorder.SetAudioEncoder(AudioEncoder.AmrNb);
        _recorder.SetOutputFile(path);
        _recorder.Prepare();
        _recorder.Start();
        videoView.Start();
        recorderplaying = true;
    };

    // stop recording
    stop.Click += delegate
    {
        _stop.Enabled = !_stop.Enabled;
        videoView.Pause();
        _player.Stop();
        if (recorderplaying == true)
        {
            _recorder.Stop();
            _recorder.Reset();
            recorderplaying = false;
        }
        else
        {
            int pass = 0;
        }
    };

    play.Click += delegate
    {
        _stop.Enabled = !_stop.Enabled;
        _player.SetDataSource(path);
        _player.Prepare();
        _player.Start();
        videoView.Start();
    };
}

//<----------------------------------- OnResume, OnPause ---------------->
protected override void OnResume()
{
    base.OnResume();
    _recorder = new MediaRecorder();
    _player = new MediaPlayer();
    _player.Completion += (sender, e) =>
    {
        _player.Reset();
        _start.Enabled = !_start.Enabled;
        _stop.Enabled = !_stop.Enabled;
    };
}

protected override void OnPause()
{
    base.OnPause();
    _player.Release();
    _recorder.Release();
    _player.Dispose();
    _recorder.Dispose();
    _player = null;
    _recorder = null;
}
}
}
This is what I have so far.
I also tried making a global variable for stopping the video, since that worked fine for the audio in the game, but sadly that didn't work.
Note: don't mind the puppy video, it's a placeholder, haha!
If anyone has any idea whether and how this is possible, that would be amazing!
I am trying to scan an image taken from resources using a Recognizer with a RegexParserSettings inside a fragment. The problem is that the BaseRecognitionResult obtained through the onScanningDone callback is always null. I have tried to set up the RecognitionSettings with MRTDRecognizer and it worked fine, so I think the library is properly integrated. This is the source code I am using:
@Override
public void onAttach(Context context) {
    ...
    try {
        mRecognizer = Recognizer.getSingletonInstance();
        mRecognizer.setLicenseKey(context, LICENSE_KEY);
    } catch (FeatureNotSupportedException | InvalidLicenceKeyException e) {
        Log.d(TAG, e.getMessage());
    }
    buildRecognitionSettings();
    mRecognizer.initialize(context, mRecognitionSettings, new DirectApiErrorListener() {
        @Override
        public void onRecognizerError(Throwable t) {
            // Handle exception
        }
    });
}

private void buildRecognitionSettings() {
    mRecognitionSettings = new RecognitionSettings();
    mRecognitionSettings.setRecognizerSettingsArray(setupSettingsArray());
}

private RecognizerSettings[] setupSettingsArray() {
    RegexParserSettings regexParserSettings = new RegexParserSettings("[A-Z0-9]{17}");
    BlinkOCRRecognizerSettings sett = new BlinkOCRRecognizerSettings();
    sett.addParser("myRegexParser", regexParserSettings);
    return new RecognizerSettings[] { sett };
}
I scan the image like this:
mRecognizer.recognizeBitmap(bitmap, Orientation.ORIENTATION_PORTRAIT, FragMicoblink.this);
And this is the callback handled in the fragment:
@Override
public void onScanningDone(RecognitionResults results) {
    BaseRecognitionResult[] dataArray = results.getRecognitionResults();
    // dataArray is null
    for (BaseRecognitionResult baseResult : dataArray) {
        if (baseResult instanceof BlinkOCRRecognitionResult) {
            BlinkOCRRecognitionResult result = (BlinkOCRRecognitionResult) baseResult;
            if (result.isValid() && !result.isEmpty()) {
                String parsedAmount = result.getParsedResult("myRegexParser");
                if (parsedAmount != null && !parsedAmount.isEmpty()) {
                    Log.d(TAG, "Result: " + parsedAmount);
                }
            }
        }
    }
}
Thanks in advance!
Hello Spirrow.
The difference between your code and SegmentScanActivity is that your code uses the DirectAPI, which can process only the single bitmap image you send for processing, while SegmentScanActivity processes camera frames as they arrive from the camera. While doing so, it can exploit time-redundant information to improve the OCR quality, i.e. it combines consecutive OCR results from multiple video frames to obtain a better-quality OCR result.
This feature is not available via DirectAPI - you need to use either SegmentScanActivity, or custom scan activity with our camera management.
You can also find out more here:
https://github.com/BlinkID/blinkid-android/issues/54
Regards
I'm developing an application and I don't know how I can get the CPU usage and temperature, for example into a TextView.
I tried TYPE_AMBIENT_TEMPERATURE to get the temperature, but it doesn't work.
The logs say that I don't have the sensor, but other apps from the Play Store do show me the temperature and frequency.
The code works fine if I use another sensor like TYPE_GYROSCOPE, so I don't understand why TYPE_AMBIENT_TEMPERATURE doesn't work.
Sorry for my bad English, and please help me.
For the CPU frequency you can use:
public static int[] getCPUFrequencyCurrent() throws Exception {
    int[] output = new int[getNumCores()];
    for (int i = 0; i < getNumCores(); i++) {
        output[i] = readSystemFileAsInt("/sys/devices/system/cpu/cpu" + String.valueOf(i) + "/cpufreq/scaling_cur_freq");
    }
    return output;
}
You might want to use this one too:
public static int getNumCores() {
    // Private class to display only CPU devices in the directory listing
    class CpuFilter implements FileFilter {
        @Override
        public boolean accept(File pathname) {
            // Check if filename is "cpu", followed by a single digit number
            if (Pattern.matches("cpu[0-9]+", pathname.getName())) {
                return true;
            }
            return false;
        }
    }

    try {
        // Get directory containing CPU info
        File dir = new File("/sys/devices/system/cpu/");
        // Filter to only list the devices we care about
        File[] files = dir.listFiles(new CpuFilter());
        // Return the number of cores (virtual CPU devices)
        return files.length;
    } catch (Exception e) {
        // Default to return 1 core
        return 1;
    }
}
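As a simpler alternative for the core count, the standard Java API also works, with the caveat that it may report fewer cores when some are offline at call time:

// Simpler core count; may undercount when cores are hot-plugged offline.
int cores = Runtime.getRuntime().availableProcessors();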
For the temperature (from https://stackoverflow.com/a/11931903/1031297):
public class TempSensorActivity extends Activity implements SensorEventListener {
    private final SensorManager mSensorManager;
    private final Sensor mTempSensor;

    public TempSensorActivity() {
        mSensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        mTempSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_AMBIENT_TEMPERATURE);
    }

    protected void onResume() {
        super.onResume();
        mSensorManager.registerListener(this, mTempSensor, SensorManager.SENSOR_DELAY_NORMAL);
    }

    protected void onPause() {
        super.onPause();
        mSensorManager.unregisterListener(this);
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }

    public void onSensorChanged(SensorEvent event) {
    }
}
PS: You first need to check whether the sensor is present. If it isn't, there's nothing that can be done; I guess some of those apps lie.
PS2: You can always reverse engineer an app to see how it displays the temperature ;)
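For reference, the presence check mentioned in the PS is a one-liner:

// getDefaultSensor() returns null when the sensor doesn't exist on the device.
Sensor temp = mSensorManager.getDefaultSensor(Sensor.TYPE_AMBIENT_TEMPERATURE);
if (temp == null) {
    Log.w(TAG, "no ambient temperature sensor on this device");
}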
This is because your device simply doesn't have a thermometer. A lot of older devices don't have one.
The Android documentation says devices "MAY" have these sensors built in, but are not "required" to have them.
The other apps that show you a temperature are calculating it by their own methods (which may all differ to certain extents).
I am currently in the process of making my own app that measures CPU temperature.
As OWADVL answered, you can also read the temperature from a system file, like this:
int temperature = readSystemFileAsInt("/sys/class/thermal/thermal_zone0/temp");
Note that readSystemFileAsInt is not a system call; I found the implementation here:
private static int readSystemFileAsInt(final String pSystemFile) throws Exception {
    InputStream in = null;
    try {
        final Process process = new ProcessBuilder(new String[] { "/system/bin/cat", pSystemFile }).start();
        in = process.getInputStream();
        final String content = readFully(in);
        return Integer.parseInt(content);
    } catch (final Exception e) {
        throw new Exception(e);
    }
}
private static final String readFully(final InputStream pInputStream) throws IOException {
    final StringBuilder sb = new StringBuilder();
    final Scanner sc = new Scanner(pInputStream);
    while (sc.hasNextLine()) {
        sb.append(sc.nextLine());
    }
    return sb.toString();
}
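Side note: spawning /system/bin/cat is a workaround for devices where the app can't open the file directly; when permissions allow, a plain read is simpler. A sketch (readSystemFileDirectly is a hypothetical name):

// Hypothetical alternative: read the sysfs file directly instead of spawning cat.
private static int readSystemFileDirectly(final String path) throws IOException {
    BufferedReader reader = new BufferedReader(new FileReader(path));
    try {
        return Integer.parseInt(reader.readLine().trim());
    } finally {
        reader.close();
    }
}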
For the CPU usage (load), you can look at this question and an already-working solution by Souch on his GitHub here.
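If you'd rather not pull in external code, overall CPU load is conventionally computed from two snapshots of the first line of /proc/stat, as in this sketch (note that recent Android versions restrict app access to /proc/stat):

// Sketch: read "cpu  user nice system idle iowait irq softirq ..." from /proc/stat.
// Sample twice with a delay; load = 1 - (idleDelta / (double) totalDelta).
static long[] readCpuTicks() throws IOException {
    BufferedReader reader = new BufferedReader(new FileReader("/proc/stat"));
    try {
        String[] fields = reader.readLine().split("\\s+");
        long idle = Long.parseLong(fields[4]); // 4th value after "cpu" is idle time
        long total = 0;
        for (int i = 1; i < fields.length; i++) {
            total += Long.parseLong(fields[i]);
        }
        return new long[] { idle, total };
    } finally {
        reader.close();
    }
}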
I'm trying to use MediaCodec and MediaMuxer, and I have run into some trouble.
Here are the errors from logcat:
12-13 11:59:58.238: E/AndroidRuntime(23218): FATAL EXCEPTION: main
12-13 11:59:58.238: E/AndroidRuntime(23218): java.lang.RuntimeException: Unable to resume activity {com.brendon.cameratompeg/com.brendon.cameratompeg.CameraToMpeg}: java.lang.IllegalStateException: Can't stop due to wrong state.
12-13 11:59:58.238: E/AndroidRuntime(23218): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2918)
The code fails at mStManager.awaitNewImage(), which is in the onResume function, and logcat says "camera frame wait timed out".
mStManager is an instance of the class SurfaceTextureManager, and "camera frame wait timed out" comes from the awaitNewImage() function. I've added that class to my post.
Part of my code looks like this (the onCreate and onResume functions):
@Override
protected void onCreate(Bundle savedInstanceState) {
    // arbitrary but popular values
    int encWidth = 640;
    int encHeight = 480;
    int encBitRate = 6000000; // 6 Mbps
    Log.d(TAG, MIME_TYPE + " output " + encWidth + "x" + encHeight + " @" + encBitRate);
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera_to_mpeg);

    prepareCamera(encWidth, encHeight);
    prepareEncoder(encWidth, encHeight, encBitRate);
    mInputSurface.makeCurrent();
    prepareSurfaceTexture();

    mCamera.startPreview();
}
@Override
public void onResume() {
    try {
        long startWhen = System.nanoTime();
        long desiredEnd = startWhen + DURATION_SEC * 1000000000L;
        SurfaceTexture st = mStManager.getSurfaceTexture();
        int frameCount = 0;

        while (System.nanoTime() < desiredEnd) {
            // Feed any pending encoder output into the muxer.
            drainEncoder(false);

            // Switch up the colors every 15 frames. Besides demonstrating the use of
            // fragment shaders for video editing, this provides a visual indication of
            // the frame rate: if the camera is capturing at 15fps, the colors will change
            // once per second.
            if ((frameCount % 15) == 0) {
                String fragmentShader = null;
                if ((frameCount & 0x01) != 0) {
                    fragmentShader = SWAPPED_FRAGMENT_SHADER;
                }
                mStManager.changeFragmentShader(fragmentShader);
            }
            frameCount++;

            // Acquire a new frame of input, and render it to the Surface. If we had a
            // GLSurfaceView we could switch EGL contexts and call drawImage() a second
            // time to render it on screen. The texture can be shared between contexts by
            // passing the GLSurfaceView's EGLContext as eglCreateContext()'s share_context
            // argument.
            mStManager.awaitNewImage();
            mStManager.drawImage();

            // Set the presentation time stamp from the SurfaceTexture's time stamp. This
            // will be used by MediaMuxer to set the PTS in the video.
            if (VERBOSE) {
                Log.d(TAG, "present: " +
                        ((st.getTimestamp() - startWhen) / 1000000.0) + "ms");
            }
            mInputSurface.setPresentationTime(st.getTimestamp());

            // Submit it to the encoder. The eglSwapBuffers call will block if the input
            // is full, which would be bad if it stayed full until we dequeued an output
            // buffer (which we can't do, since we're stuck here). So long as we fully drain
            // the encoder before supplying additional input, the system guarantees that we
            // can supply another frame without blocking.
            if (VERBOSE) Log.d(TAG, "sending frame to encoder");
            mInputSurface.swapBuffers();
        }

        // send end-of-stream to encoder, and drain remaining output
        drainEncoder(true);
    } catch (Exception e) {
        Log.d(TAG, e.getMessage());
        // release everything we grabbed
        releaseCamera();
        releaseEncoder();
        releaseSurfaceTexture();
    }
}
And here is the class in the code that is relevant to the error:
private static class SurfaceTextureManager
        implements SurfaceTexture.OnFrameAvailableListener {
    private SurfaceTexture mSurfaceTexture;
    private CameraToMpeg.STextureRender mTextureRender;
    private Object mFrameSyncObject = new Object(); // guards mFrameAvailable
    private boolean mFrameAvailable;

    /**
     * Creates instances of TextureRender and SurfaceTexture.
     */
    public SurfaceTextureManager() {
        mTextureRender = new CameraToMpeg.STextureRender();
        mTextureRender.surfaceCreated();
        if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender.getTextureId());
        mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());

        // This doesn't work if this object is created on the thread that CTS started for
        // these test cases.
        //
        // The CTS-created thread has a Looper, and the SurfaceTexture constructor will
        // create a Handler that uses it. The "frame available" message is delivered
        // there, but since we're not a Looper-based thread we'll never see it. For
        // this to do anything useful, OutputSurface must be created on a thread without
        // a Looper, so that SurfaceTexture uses the main application Looper instead.
        //
        // Java language note: passing "this" out of a constructor is generally unwise,
        // but we should be able to get away with it here.
        mSurfaceTexture.setOnFrameAvailableListener(this);
    }

    public void release() {
        // this causes a bunch of warnings that appear harmless but might confuse someone:
        // W BufferQueue: [unnamed-3997-2] cancelBuffer: BufferQueue has been abandoned!
        //mSurfaceTexture.release();
        mTextureRender = null;
        mSurfaceTexture = null;
    }

    /**
     * Returns the SurfaceTexture.
     */
    public SurfaceTexture getSurfaceTexture() {
        return mSurfaceTexture;
    }

    /**
     * Replaces the fragment shader.
     */
    public void changeFragmentShader(String fragmentShader) {
        mTextureRender.changeFragmentShader(fragmentShader);
    }

    /**
     * Latches the next buffer into the texture. Must be called from the thread that created
     * the OutputSurface object.
     */
    public void awaitNewImage() {
        final int TIMEOUT_MS = 2500;
        synchronized (mFrameSyncObject) {
            while (!mFrameAvailable) {
                try {
                    // Wait for onFrameAvailable() to signal us. Use a timeout to avoid
                    // stalling the test if it doesn't arrive.
                    mFrameSyncObject.wait(TIMEOUT_MS);
                    if (!mFrameAvailable) {
                        // TODO: if "spurious wakeup", continue while loop
                        throw new RuntimeException("Camera frame wait timed out");
                    }
                } catch (InterruptedException ie) {
                    // shouldn't happen
                    throw new RuntimeException(ie);
                }
            }
            mFrameAvailable = false;
        }
        // Latch the data.
        mTextureRender.checkGlError("before updateTexImage");
        mSurfaceTexture.updateTexImage();
    }

    /**
     * Draws the data from SurfaceTexture onto the current EGL surface.
     */
    public void drawImage() {
        mTextureRender.drawFrame(mSurfaceTexture);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        if (VERBOSE) Log.d(TAG, "new frame available");
        synchronized (mFrameSyncObject) {
            if (mFrameAvailable) {
                throw new RuntimeException("mFrameAvailable already set, frame could be dropped");
            }
            mFrameAvailable = true;
            mFrameSyncObject.notifyAll();
        }
    }
}
Does anyone have any ideas? Thank you!
I encountered this issue as well. The reason is that your code is running on a thread that has a Looper. You have to make sure the code runs on a thread that does not have a Looper; otherwise SurfaceTexture.OnFrameAvailableListener will deliver the "frame available" message to the waiting thread itself, rather than sending the message to the handler on the main thread, and you'll get stuck.
Bigflake's examples provide a detailed description of this:
/**
 * Wraps testEditVideo, running it in a new thread. Required because of the way
 * SurfaceTexture.OnFrameAvailableListener works when the current thread has a Looper
 * configured.
 */
private static class VideoEditWrapper implements Runnable {
    private Throwable mThrowable;
    private DecodeEditEncodeTest mTest;

    private VideoEditWrapper(DecodeEditEncodeTest test) {
        mTest = test;
    }

    @Override
    public void run() {
        try {
            mTest.videoEditTest();
        } catch (Throwable th) {
            mThrowable = th;
        }
    }

    /** Entry point. */
    public static void runTest(DecodeEditEncodeTest obj) throws Throwable {
        VideoEditWrapper wrapper = new VideoEditWrapper(obj);
        Thread th = new Thread(wrapper, "codec test");
        th.start();
        th.join();
        if (wrapper.mThrowable != null) {
            throw wrapper.mThrowable;
        }
    }
}
As Florian correctly explained, the issue is that your code is running on a thread that has a Looper. You need to make sure the code runs on a thread that does not have a Looper.
The way I solved it was by modifying the setup() method in OutputSurface and ensuring that setOnFrameAvailableListener() is attached to a separate handler thread.
Here is the code:
class OutputSurface implements SurfaceTexture.OnFrameAvailableListener {
    private static final String TAG = "OutputSurface";
    private static final boolean VERBOSE = false;

    private EGLDisplay mEGLDisplay = EGL14.EGL_NO_DISPLAY;
    private EGLContext mEGLContext = EGL14.EGL_NO_CONTEXT;
    private EGLSurface mEGLSurface = EGL14.EGL_NO_SURFACE;

    private SurfaceTexture mSurfaceTexture;
    private Surface mSurface;

    private Object mFrameSyncObject = new Object();
    private boolean mFrameAvailable;

    private TextureRender mTextureRender;
    private HandlerThread mHandlerThread;
    private Handler mHandler;

    public OutputSurface(int width, int height) {
        if (width <= 0 || height <= 0) {
            throw new IllegalArgumentException();
        }
        eglSetup(width, height);
        makeCurrent();
        setup();
    }

    public OutputSurface() {
        setup();
    }

    private void setup() {
        mTextureRender = new TextureRender();
        mTextureRender.surfaceCreated();
        mHandlerThread = new HandlerThread("callback-thread");
        mHandlerThread.start();
        mHandler = new Handler(mHandlerThread.getLooper());

        // Even if we don't access the SurfaceTexture after the constructor returns, we
        // still need to keep a reference to it. The Surface doesn't retain a reference
        // at the Java level, so if we don't either then the object can get GCed, which
        // causes the native finalizer to run.
        if (VERBOSE) Log.d(TAG, "textureID=" + mTextureRender.getTextureId());
        mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());

        // This doesn't work if OutputSurface is created on the thread that CTS started for
        // these test cases.
        //
        // The CTS-created thread has a Looper, and the SurfaceTexture constructor will
        // create a Handler that uses it. The "frame available" message is delivered
        // there, but since we're not a Looper-based thread we'll never see it. For
        // this to do anything useful, OutputSurface must be created on a thread without
        // a Looper, so that SurfaceTexture uses the main application Looper instead.
        //
        // Java language note: passing "this" out of a constructor is generally unwise,
        // but we should be able to get away with it here.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
            mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
        } else {
            mSurfaceTexture.setOnFrameAvailableListener(this);
        }

        mSurface = new Surface(mSurfaceTexture);
    }
}
The rest of the OutputSurface class can remain the same.