TensorFlow Android demo: no output - android

I am trying to use a retrained Inception-v3 model in the TensorFlow Android demo app, but no output is shown.
What I have done
Trained the model per the retrain inception description. After training (only five classes), I tested the graph using
bazel build tensorflow/examples/label_image:label_image &&
bazel-bin/tensorflow/examples/label_image/label_image \
--output_layer=final_result \
--labels=/tf_files/retrained_labels.txt \
--image=/home/hannan/Desktop/images.jpg \
--graph=/tf_files/retrained_graph.pb
and the following is the output:
I tensorflow/examples/label_image/main.cc:206] shoes (3): 0.997833
I tensorflow/examples/label_image/main.cc:206] chair (1): 0.00118802
I tensorflow/examples/label_image/main.cc:206] door lock (2): 0.000544737
I tensorflow/examples/label_image/main.cc:206] bench (4): 0.000354453
I tensorflow/examples/label_image/main.cc:206] person (0): 7.93592e-05
Ran the optimization for inference using
bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tf_files/retrained_graph.pb \
--output=/tf_files/optimized_graph.pb \
--input_names=Mul \
--output_names=final_result
and tested the output graph again; it works fine.
Finally, ran the following strip_unused.py command:
python strip_unused.py \
--input_graph=/tf_files/optimized_graph.pb \
--output_graph=/tf_files/stripped_graph.pb \
--input_node_names="Mul" \
--output_node_names="final_result" \
--input_binary=true
Tested the graph again; it works fine.
Android app changes in ClassifierActivity:
private static final int NUM_CLASSES = 5;
private static final int INPUT_SIZE = 229;
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128;
private static final String INPUT_NAME = "Mul:0";
private static final String OUTPUT_NAME = "final_result:0";
private static final String MODEL_FILE = "file:///android_asset/optimized_graph.pb";
private static final String LABEL_FILE = "file:///android_asset/retrained_labels.txt";
Built & ran the project.
Logcat output:
D/tensorflow: CameraActivity: onCreate org.tensorflow.demo.ClassifierActivity#adfa77e
W/ResourceType: For resource 0x0103045b, entry index(1115) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030249, entry index(585) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030248, entry index(584) is beyond type entryCount(1)
W/ResourceType: For resource 0x01030247, entry index(583) is beyond type entryCount(1)
D/PhoneWindowEx: [PWEx][generateLayout] setLGNavigationBarColor : colors=0xff000000
I/PhoneWindow: [setLGNavigationBarColor] color=0x ff000000
D/tensorflow: CameraActivity: onStart org.tensorflow.demo.ClassifierActivity#adfa77e
D/tensorflow: CameraActivity: onResume org.tensorflow.demo.ClassifierActivity#adfa77e
D/OpenGLRenderer: Use EGL_SWAP_BEHAVIOR_PRESERVED: false
D/PhoneWindow: notifyNavigationBarColor, color=0x: ff000000, token: android.view.ViewRootImplAO$WEx#5d35dc4
I/OpenGLRenderer: Initialized EGL, version 1.4
I/CameraManagerGlobal: Connecting to camera service
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1440
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1088
I/tensorflow: CameraConnectionFragment: Adding size: 1920x1080
I/tensorflow: CameraConnectionFragment: Adding size: 1280x720
I/tensorflow: CameraConnectionFragment: Adding size: 960x720
I/tensorflow: CameraConnectionFragment: Adding size: 960x540
I/tensorflow: CameraConnectionFragment: Adding size: 800x600
I/tensorflow: CameraConnectionFragment: Adding size: 864x480
I/tensorflow: CameraConnectionFragment: Adding size: 800x480
I/tensorflow: CameraConnectionFragment: Adding size: 720x480
I/tensorflow: CameraConnectionFragment: Adding size: 640x480
I/tensorflow: CameraConnectionFragment: Adding size: 480x368
I/tensorflow: CameraConnectionFragment: Adding size: 480x320
I/tensorflow: CameraConnectionFragment: Not adding size: 352x288
I/tensorflow: CameraConnectionFragment: Not adding size: 320x240
I/tensorflow: CameraConnectionFragment: Not adding size: 176x144
I/tensorflow: CameraConnectionFragment: Chosen size: 480x320
I/TensorFlowImageClassifier: Reading labels from: retrained_labels.txt
I/TensorFlowImageClassifier: Read 5, 5 specified
I/native: tensorflow_inference_jni.cc:97 Native TF methods loaded.
I/TensorFlowInferenceInterface: Native methods already loaded.
I/native: tensorflow_inference_jni.cc:85 Creating new session variables for 7e135ad551738da4
I/native: tensorflow_inference_jni.cc:113 Loading Tensorflow.
I/native: tensorflow_inference_jni.cc:120 Session created.
I/native: tensorflow_inference_jni.cc:126 Acquired AssetManager.
I/native: tensorflow_inference_jni.cc:128 Reading file to proto: file:///android_asset/optimized_graph.pb
I/native: tensorflow_inference_jni.cc:132 GraphDef loaded from file:///android_asset/optimized_graph.pb with 515 nodes.
I/native: stat_summarizer.cc:38 StatSummarizer found 515 nodes
I/native: tensorflow_inference_jni.cc:139 Creating TensorFlow graph from GraphDef.
I/native: tensorflow_inference_jni.cc:151 Initialization done in 931.7ms
I/tensorflow: ClassifierActivity: Sensor orientation: 90, Screen orientation: 0
I/tensorflow: ClassifierActivity: Initializing at size 480x320
I/CameraManager: Using legacy camera HAL.
I/tensorflow: CameraConnectionFragment: Opening camera preview: 480x320
I/CameraDeviceState: Legacy camera service transitioning to state CONFIGURING
I/RequestThread-0: Configure outputs: 2 surfaces configured.
D/Camera: app passed NULL surface
I/[MALI][Gralloc]: dlopen libsec_mem.so fail
I/Choreographer: Skipped 89 frames! The application may be doing too much work on its main thread.
I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy#a9290d7 time:114073819
I/CameraDeviceState: Legacy camera service transitioning to state IDLE
I/RequestQueue: Repeating capture request set.
W/LegacyRequestMapper: convertRequestMetadata - control.awbRegions setting is not supported, ignoring value
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
W/LegacyRequestMapper: Only received metering rectangles with weight 0.
E/Camera: Unknown message type -2147483648
I/CameraDeviceState: Legacy camera service transitioning to state CAPTURING
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
D/tensorflow: CameraActivity: Initializing buffer 0 at size 153600
D/tensorflow: CameraActivity: Initializing buffer 1 at size 38400
D/tensorflow: CameraActivity: Initializing buffer 2 at size 38400
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value
When I use the app to identify an object, no output is shown. The following also appears in the logs:
I/native: tensorflow_inference_jni.cc:228 End computing. Ran in 4639ms (4639ms avg over 1 runs)
E/native: tensorflow_inference_jni.cc:233 Error during inference: Invalid argument: computed output size would be negative
[[Node: pool_3 = AvgPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 8, 8, 1], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/cpu:0"](mixed_10/join)]]
E/native: tensorflow_inference_jni.cc:170 Output [final_result] not found, aborting!
I/[MALI][Gralloc]: lock_ycbcr: videobuffer_status is invalid, use default value

I have figured it out. There is a typo in ClassifierActivity:
private static final int INPUT_SIZE = 229;
should be
private static final int INPUT_SIZE = 299;
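This also explains the AvgPool error. As a back-of-the-envelope check (not from the original post): with VALID padding, a pooling op computes its output size as

out = floor((in - ksize) / stride) + 1

pool_3 uses an 8x8 kernel with stride 1 (see the node in the error message), so it needs a feature map of at least 8x8 to give out >= 1. A 299x299 input reaches mixed_10 at exactly 8x8, while a 229x229 input leaves it smaller than the kernel, so the computed output size goes negative, which is precisely the logged error.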

Related

Extract video frames on Android with ExtractMpegFramesTest and running a computationally intensive function instead of saving as images

On Android, I want to extract all frames of a video and run an object-tracking function from boofcv on each frame. For this I am using the ExtractMpegFramesTest example, slightly adjusted to run the tracking on each frame instead of saving the frames as PNG. That is, instead of calling outputSurface.saveFrame() I call outputSurface.processFrame(), which I implemented as follows:
public void processFrame() {
    mPixelBuf.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
            mPixelBuf);
    Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
    mPixelBuf.rewind();
    bmp.copyPixelsFromBuffer(mPixelBuf);
    GrayU8 img = ConvertBitmap.bitmapToGray(bmp, (GrayU8) null, null);
    Log.d(TAG, "Running tracker...");
    tracker.process(img, location);
    barPath.add(getCenterOf(location));
    Log.d(TAG, "Finished running tracker.");
    bmp.recycle();
}
As soon as I comment out the tracker.process(img, location) line, the code works flawlessly. As soon as I include the tracking, I receive no error message, but nothing happens either: the app gets stuck tracking the first image. The logcat output looks like this:
2021-10-19 13:01:05.778 32449-32449/org.boofcv.android D/ExtractMpegFrames: Initialized
2021-10-19 13:01:05.780 32449-32449/org.boofcv.android I/Choreographer: Skipped 30 frames! The application may be doing too much work on its main thread.
2021-10-19 13:01:05.789 32449-32449/org.boofcv.android W/Looper: Slow Looper main: doFrame is 514ms late because of 3 msg, msg 2 took 510ms (seq=242 running=153ms runnable=2ms late=18ms h=android.view.ViewRootImpl$ViewRootHandler c=android.view.View$PerformClick)
2021-10-19 13:01:05.793 32449-32726/org.boofcv.android D/ExtractMpegFrames: Extractor selected track 0 (video/avc): {track-id=1, file-format=video/mp4, level=8192, mime=video/avc, frame-count=624, profile=8, language=, color-standard=1, display-width=1920, csd-1=java.nio.HeapByteBuffer[pos=0 lim=9 cap=9], color-transfer=3, durationUs=10395844, display-height=1080, width=1920, color-range=2, rotation-degrees=90, max-input-size=1555201, frame-rate=60, height=1080, csd-0=java.nio.HeapByteBuffer[pos=0 lim=23 cap=23]}
2021-10-19 13:01:05.794 32449-32726/org.boofcv.android D/ExtractMpegFrames: Video size is 1920x1080
2021-10-19 13:01:05.824 32449-32726/org.boofcv.android D/ExtractMpegFrames: textureID=1
2021-10-19 13:01:05.830 32449-32728/org.boofcv.android I/OMXClient: IOmx service obtained
2021-10-19 13:01:05.888 32449-32727/org.boofcv.android D/SurfaceUtils: connecting to surface 0x7864f16010, reason connectToSurface
2021-10-19 13:01:05.888 32449-32727/org.boofcv.android I/MediaCodec: [OMX.qcom.video.decoder.avc] setting surface generation to 33227777
2021-10-19 13:01:05.888 32449-32727/org.boofcv.android D/SurfaceUtils: disconnecting from surface 0x7864f16010, reason connectToSurface(reconnect)
2021-10-19 13:01:05.888 32449-32727/org.boofcv.android D/SurfaceUtils: connecting to surface 0x7864f16010, reason connectToSurface(reconnect)
2021-10-19 13:01:05.890 32449-32728/org.boofcv.android I/ExtendedACodec: setupVideoDecoder()
2021-10-19 13:01:05.892 32449-32728/org.boofcv.android I/ExtendedACodec: Decoder will be in frame by frame mode
2021-10-19 13:01:05.929 32449-32728/org.boofcv.android D/SurfaceUtils: set up nativeWindow 0x7864f16010 for 1920x1080, color 0x7fa30c06, rotation 90, usage 0x20002900
2021-10-19 13:01:05.939 32449-32728/org.boofcv.android W/Gralloc3: allocator 3.x is not supported
2021-10-19 13:01:05.939 32449-32726/org.boofcv.android D/ExtractMpegFrames: loop
2021-10-19 13:01:05.946 32449-32726/org.boofcv.android D/ExtractMpegFrames: submitted frame 0 to dec, size=148096
2021-10-19 13:01:05.958 32449-32726/org.boofcv.android D/ExtractMpegFrames: no output from decoder available
2021-10-19 13:01:05.958 32449-32726/org.boofcv.android D/ExtractMpegFrames: loop
2021-10-19 13:01:05.959 32449-32726/org.boofcv.android D/ExtractMpegFrames: submitted frame 1 to dec, size=14112
2021-10-19 13:01:05.970 32449-32726/org.boofcv.android D/ExtractMpegFrames: no output from decoder available
2021-10-19 13:01:05.970 32449-32726/org.boofcv.android D/ExtractMpegFrames: loop
2021-10-19 13:01:05.972 32449-32726/org.boofcv.android D/ExtractMpegFrames: submitted frame 2 to dec, size=17632
2021-10-19 13:01:05.974 32449-32726/org.boofcv.android D/ExtractMpegFrames: decoder output format changed: {crop-right=1919, color-format=2141391878, slice-height=1088, mime=video/raw, hdr-static-info=java.nio.HeapByteBuffer[pos=0 lim=25 cap=25], stride=1920, color-standard=2, color-transfer=3, crop-bottom=1079, crop-left=0, width=1920, color-range=1, crop-top=0, height=1088}
2021-10-19 13:01:05.974 32449-32726/org.boofcv.android D/ExtractMpegFrames: loop
2021-10-19 13:01:05.975 32449-32726/org.boofcv.android D/ExtractMpegFrames: submitted frame 3 to dec, size=30208
2021-10-19 13:01:05.976 32449-32726/org.boofcv.android D/ExtractMpegFrames: surface decoder given buffer 16 (size=8)
2021-10-19 13:01:05.976 32449-32726/org.boofcv.android D/ExtractMpegFrames: awaiting decode of frame 0
2021-10-19 13:01:05.978 32449-32449/org.boofcv.android D/ExtractMpegFrames: new frame available
2021-10-19 13:01:05.992 32449-32726/org.boofcv.android D/ExtractMpegFrames: Running tracker...
2021-10-19 13:01:06.003 32449-32727/org.boofcv.android D/SurfaceUtils: disconnecting from surface 0x7864f16010, reason disconnectFromSurface
The tracker.process() function itself works flawlessly, e.g. when using MediaMetadataRetriever to extract the frames (MediaMetadataRetriever is too slow, however, which is why I am using ExtractMpegFrames). On my device the tracker.process() call usually takes about 30-40 ms per frame, whereas just extracting the frames with ExtractMpegFrames as bitmaps, without further processing, takes about 3-4 ms per frame.
So I guess the problem might have to do with threading? I would be thankful for any help.
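For comparison, the slow-but-working MediaMetadataRetriever path looks roughly like this (a sketch; videoPath, durationUs, and frameIntervalUs are hypothetical placeholders, while tracker and location are the same fields used in processFrame() above):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(videoPath); // hypothetical path to the video file
for (long t = 0; t < durationUs; t += frameIntervalUs) {
    // Decodes a single frame per call; this per-frame seek is what made it too slow here.
    Bitmap bmp = retriever.getFrameAtTime(t, MediaMetadataRetriever.OPTION_CLOSEST);
    GrayU8 img = ConvertBitmap.bitmapToGray(bmp, (GrayU8) null, null);
    tracker.process(img, location);
    bmp.recycle();
}
retriever.release();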
Edit
On my UI thread I am calling new ExtractMpegFrames().run(), while the ExtractMpegFrames class looks like this:
public class ExtractMpegFrames {
    public void run() throws Throwable {
        this.initialize();
        ExtractMpegFramesWrapper.execute(this);
    }

    private static class ExtractMpegFramesWrapper implements Runnable {
        private Throwable mThrowable;
        private ExtractMpegFrames mExtractor;

        private ExtractMpegFramesWrapper(ExtractMpegFrames test) {
            mExtractor = test;
        }

        @Override
        public void run() {
            try {
                mExtractor.extractMpegFrames();
            } catch (Throwable th) {
                mThrowable = th;
            }
        }

        public static void execute(ExtractMpegFrames obj) throws Throwable {
            ExtractMpegFramesWrapper wrapper = new ExtractMpegFramesWrapper(obj);
            Thread th = new Thread(wrapper, "main");
            th.start();
            //th.join();
            if (wrapper.mThrowable != null) {
                throw wrapper.mThrowable;
            }
        }
    }

    // ...
    // The rest (except for the processFrame() function above)
    // looks the same as in ExtractMpegFramesTest
    // See here:
    // https://bigflake.com/mediacodec/#ExtractMpegFramesTest
}
Edit 2
I still did not resolve this issue. I tried to replace the boofcv tracker with an OpenCV one (org.opencv.tracking.TrackerMOSSE); however, this leads to a deadlock as well (same logcat output as above). Lastly, I tried to replace the tracking function with this OpenCV template-matching algorithm. That works and does not lead to a deadlock, but it is far too slow (about one second per frame on my device).

Google Cloud Vision Object Detection Model Crashes on Android

I recently trained an object detection model on Google Cloud Vision. I exported the trained model's metadata JSON file, label text file, and tflite file, and I intend to run it on Android. However, I cannot run this model using the Android demo app, as it crashes every time.
The demo app is compatible with a locally trained and converted tflite model, but not with the one exported from Google Cloud.
What might be wrong here and how can it be solved?
Thanks
Reference:
Demo App: https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection
Partial Log:
2020-01-24 11:29:11.628 18071-18071/org.tensorflow.lite.examples.detection E/libc: Access denied finding property "persist.camera.privapp.list"
2020-01-24 11:29:11.732 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: CameraConnectionFragment: Opening camera preview: 640x480
2020-01-24 11:29:11.769 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/android.hardware.graphics.mapper#2.0-impl.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.770 18071-18102/org.tensorflow.lite.examples.detection D/vndksupport: Loading /vendor/lib/hw/gralloc.msm8937.so from current namespace instead of sphal namespace.
2020-01-24 11:29:11.803 18071-18071/org.tensorflow.lite.examples.detection I/Timeline: Timeline: Activity_idle id: android.os.BinderProxy#5ab1c5e time:332335506
2020-01-24 11:29:12.198 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 0 at size 307200
2020-01-24 11:29:12.201 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 1 at size 153599
2020-01-24 11:29:12.203 18071-18101/org.tensorflow.lite.examples.detection D/tensorflow: CameraActivity: Initializing buffer 2 at size 153599
2020-01-24 11:29:12.204 18071-18101/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
2020-01-24 11:29:12.311 18071-18100/org.tensorflow.lite.examples.detection I/tensorflow: DetectorActivity: Running detection on image 1
2020-01-24 11:29:12.475 18071-18100/org.tensorflow.lite.examples.detection E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.examples.detection, PID: 18071
java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 307200 bytes and a Java Buffer with 270000 bytes.
at org.tensorflow.lite.Tensor.throwIfShapeIsIncompatible(Tensor.java:332)
at org.tensorflow.lite.Tensor.throwIfDataIsIncompatible(Tensor.java:305)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:123)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:148)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:296)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:193)
at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:183)
at android.os.Handler.handleCallback(Handler.java:790)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:175)
at android.os.HandlerThread.run(HandlerThread.java:65)
=========================================================
Update: Now we know that it is because the image fed to the model and the input shape of the model do not match. The input/output shapes of models trained on Google Cloud Vision don't seem to be consistent. I recently got one with [1 320 320 3] in and [1 20 4] out, and another with [1 512 512 3] in and [1 20 4] out.
The demo app is built to handle models with [1 300 300 3] in and [1 10 4] out.
How do I set the shapes of a model before training on Google Cloud Vision, or how do I make the demo app capable of handling a model of a specific shape?
=========================================================
As an attempt to enable the demo app to handle a model of a specific shape, I changed TF_OD_API_INPUT_SIZE from 300 to 320, which seems to have solved the input-shape issue. However, problems arise on the output side.
The new error log says:
java.lang.IllegalArgumentException: Cannot copy between a TensorFlowLite tensor with shape [1, 20, 4] and a Java object with shape [1, 10, 4].
Changing TEXT_SIZE_DIP from 10 to 20 doesn't help.
The cause of the crash is that the input shape does not match that of the model; after solving that, another crash is caused by a mismatch of the output shape.
The solution is to adjust the I/O shapes in the demo application according to the model metadata provided by AutoML on Google Cloud.
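For example, for a model with [1 320 320 3] in and [1 20 4] out, a hedged sketch of the two constants to touch (names taken from the demo sources; verify against your copies of DetectorActivity.java and TFLiteObjectDetectionAPIModel.java):

// DetectorActivity.java: match the model's input size from the metadata.
private static final int TF_OD_API_INPUT_SIZE = 320; // was 300

// TFLiteObjectDetectionAPIModel.java: match the number of detections, so the
// output buffers are allocated as [1, 20, 4] instead of [1, 10, 4].
private static final int NUM_DETECTIONS = 20; // was 10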

java.lang.IllegalArgumentException: Output error: Shape of output target [1, 1917, 4] does not match with the shape of the Tensor [1, 1917, 1, 4]

I've trained my own model for object detection with TensorFlow, and I got it working with TensorFlow Mobile for Android. Now that TensorFlow Lite has been released and is going to replace Mobile in the future, I wanted to start working with it. The TensorFlow team provided a TFLite demo for object detection (you can find it here). So I tried to get it working with my model, but I got the error in the title. Here's the logcat:
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Camera orientation relative to screen canvas: 90
05-17 11:18:50.624 25688-25688/? I/tensorflow: DetectorActivity: Initializing at size 640x480
05-17 11:18:50.628 25688-25688/? I/tensorflow: MultiBoxTracker: Initializing ObjectTracker: 640x480
05-17 11:18:50.637 25688-25688/? I/tensorflow: DetectorActivity: Preparing image 1 for detection in bg thread.
05-17 11:18:50.689 25688-25707/? I/tensorflow: DetectorActivity: Running detection on image 1
05-17 11:18:52.496 25688-25707/? E/AndroidRuntime: FATAL EXCEPTION: inference
Process: org.tensorflow.lite.demo, PID: 25688
java.lang.IllegalArgumentException: Output error: Shape of output target [1, 1917, 4] does not match with the shape of the Tensor [1, 1917, 1, 4].
at org.tensorflow.lite.Tensor.copyTo(Tensor.java:44)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:154)
at org.tensorflow.demo.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:222)
at org.tensorflow.demo.DetectorActivity$3.run(DetectorActivity.java:242)
at android.os.Handler.handleCallback(Handler.java:761)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:156)
at android.os.HandlerThread.run(HandlerThread.java:61)
Note: as a checkpoint to train the model I used ssd_mobilenet_v1_coco_2017_11_17, and the only thing I changed in the code is this (TFLiteObjectDetectionAPIModel.java):
private static final int NUM_CLASSES = 3;
because I only have two objects to detect. Any help or information would be much appreciated.
I also had the same problem, but here is how I solved it with a small hack:
In the "TFLiteObjectDetectionAPIModel.java" file, create a new array variable:
float[][][][] temp1 = new float[1][NUM_RESULTS][1][4];
and then for your "outputMap" object replace:
outputMap.put(0, outputLocations);
with:
outputMap.put(0, temp1);
This will solve the shape-mismatch problem. Also be sure to set the correct number of classes. For example, I had only one class, but in the .txt file the first class is listed as "???" and the second one is my actual class. Hence I had:
private static final int NUM_CLASSES = 2;
even though I only have one class. But those two hacks seem to solve the problem.
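Put together, the relevant part of recognizeImage() would look roughly like this (a sketch assuming the demo's NUM_RESULTS = 1917 and its existing outputClasses array and inputArray; adapt to your copy of the file):

// The extra singleton dimension matches the model's [1, 1917, 1, 4] output tensor.
float[][][][] temp1 = new float[1][NUM_RESULTS][1][4];

Map<Integer, Object> outputMap = new HashMap<>();
outputMap.put(0, temp1);          // was outputLocations, shaped [1, 1917, 4]
outputMap.put(1, outputClasses);  // unchanged
tfLite.runForMultipleInputsOutputs(inputArray, outputMap);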
P.S.: The TFLite version of the frozen model seems to run even slower than the .pb version (on my Samsung Galaxy S8, Android API 26).

Why is video made with MediaCodec garbled for Samsung Galaxy S7?

When I encode a video via Surface -> MediaCodec -> MediaMuxer, I get a very strange result when testing on the Samsung Galaxy S7. On the other devices I tested (an emulator with Marshmallow and an HTC Desire), the video comes out correctly, but on this device it is garbled.
Using MediaCodec to save series of images as Video had similarly garbled output, but I don't see how that solution could apply here, because I am using a Surface as input and set the color format to COLOR_FormatSurface.
I also tried messing with the video resolution (settled on 1280 x 720) per MediaCodec Encoded video has green bar at bottom and chrominance screwed up, but that didn't solve the problem either (cf. Nexus 7 2013 mediacodec video encoder garbled output).
Does anyone have suggestions for what I might try to get the video formatted correctly?
Here is part of the log from the encoding:
D/ViewRootImpl: #1 mView = android.widget.LinearLayout{1dc79f2 V.E...... ......I. 0,0-0,0 #102039c android:id/toast_layout_root}
I/ACodec: [] Now uninitialized
I/OMXClient: Using client-side OMX mux.
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded
W/ACodec: [OMX.qcom.video.encoder.avc] storeMetaDataInBuffers (output) failed w/ err -1010
W/ACodec: do not know color format 0x7fa30c06 = 2141391878
W/ACodec: do not know color format 0x7fa30c04 = 2141391876
W/ACodec: do not know color format 0x7fa30c08 = 2141391880
W/ACodec: do not know color format 0x7fa30c07 = 2141391879
W/ACodec: do not know color format 0x7f000789 = 2130708361
D/ViewRootImpl: MSG_RESIZED_REPORT: ci=Rect(0, 0 - 0, 0) vi=Rect(0, 0 - 0, 0) or=1
I/ACodec: setupVideoEncoder succeeded
W/ACodec: do not know color format 0x7f000789 = 2130708361
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded->Idle
I/ACodec: [OMX.qcom.video.encoder.avc] Now Idle->Executing
I/ACodec: [OMX.qcom.video.encoder.avc] Now Executing
I/MPEG4Writer: setStartTimestampUs: 0
I/MPEG4Writer: Earliest track starting time: 0
The 5th unrecognized color format seems to be COLOR_FormatSurface... Is that a problem?
Other details:
MIME: video/avc
Resolution: 1280 x 720
Frame rate: 30
IFrame interval: 2
Bitrate: 8847360
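For context, those settings correspond to an encoder setup roughly like this (a sketch; the actual configuration code was not shown in the post):

MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 8847360);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // must be drawn with a hardware-accelerated API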
Per the Android docs for MediaCodec.createInputSurface():
The Surface must be rendered with a hardware-accelerated API, such as
OpenGL ES. lockCanvas(android.graphics.Rect) may fail or produce
unexpected results.
I must have missed (or ignored) that when writing the code. Since I was using lockCanvas() to get a canvas upon which to draw my video frames, the code broke. I have put a quick fix on the problem by using lockHardwareCanvas() when the API level is >= 23 (it is unavailable prior to that, and the code ran fine on API level 19).
Long term, however (for me and anyone else who might stumble across this), I may have to get into more OpenGL for a more permanent and stable solution. It's not worth going that route, though, unless I find a device that does not work with my quick fix.
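The quick fix, roughly (a sketch; inputSurface is the Surface from MediaCodec.createInputSurface(), and drawFrame() is a hypothetical helper that does the actual drawing):

Canvas canvas;
if (Build.VERSION.SDK_INT >= 23) {
    // A hardware canvas renders through the GPU, which the encoder's input Surface expects.
    canvas = inputSurface.lockHardwareCanvas();
} else {
    canvas = inputSurface.lockCanvas(null); // worked on API 19, garbled on the S7
}
try {
    drawFrame(canvas); // hypothetical: draw the current video frame
} finally {
    inputSurface.unlockCanvasAndPost(canvas);
}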
If you are still looking for an example of rendering bitmaps to an input Surface, I was able to get this to work. Look at my answers here:
https://stackoverflow.com/a/49331192/7602598
https://stackoverflow.com/a/49331352/7602598
https://stackoverflow.com/a/49331295/7602598

camera2 will crash if SurfaceView size is not one of the supported sizes

When using a SurfaceHolder/SurfaceView to configure a CaptureSession, I expected that the SurfaceView could have any layout size, as long as I set a suitable preview size with the same aspect ratio via surfaceView.getHolder().setFixedSize(preview_width, preview_height). The result should be that the incoming preview buffer is scaled down to the layout size.
But in camera2 (hardware level LEGACY), configuring a CaptureSession only works if I use a SurfaceView whose layout size exactly matches one of the sizes in the list returned by streamConfigurationMap.getOutputSizes(SurfaceHolder.class). If not, the image is not scaled down; instead, configuration fails with an error.
/**
 * Prerequisites:
 * - The device must be opened.
 * - The surface view must be ready.
 */
protected void init() {
    // ...
    try {
        CameraCaptureSession.StateCallback cb = new CameraCaptureSession.StateCallback() {
            // ...
        };
        // The following line results in an error*, if the viewfinder does not have the right size:
        cameraDevice.createCaptureSession(
                Arrays.asList(viewfinder.getHolder().getSurface(), imageReaderSmall.getSurface()),
                cb, null);
    } catch (CameraAccessException e) {
        // ...
    }
}
From the log (Samsung Galaxy A3 '14, SDK v21):
05-12 ...: Output sizes for SurfaceHolder.class: [1440x1080, 1280x720, 960x720, 880x720, 960x540, 720x540, 800x480, 720x480, 640x480, 528x432, 352x288, 320x240, 176x144]
...
05-12 ... I/CameraManager: Using legacy camera HAL.
...
05-12 ... I/OpenGLRenderer: Initialized EGL, version 1.4
05-12 ... D/OpenGLRenderer: Get maximum texture size. GL_MAX_TEXTURE_SIZE is 4096
05-12 ... D/OpenGLRenderer: Enabling debug mode 0
05-12 ....CameraActivity: Surface created
05-12 ....CameraActivity: Surface changed 4 540x405
*) 05-12 ... E/CameraDevice-0-LE: Surface with size (w=540, h=405) and format 0x4 is not valid, size not in valid set: [1440x1080, 1280x720, 960x720, 880x720, 960x540, 720x540, 800x480, 720x480, 640x480, 528x432, 352x288, 320x240, 176x144]
05-12 ... W/CameraDevice-JV-0: Stream configuration failed
05-12 ... E/CameraCaptureSession: Session 0: Failed to create capture session; configuration failed
...
05-12 ....CameraActivity: Configure failed!
Using a Nexus 5X (SDK v23) and waiting for the surfaceChanged() call after surfaceHolder.setFixedSize(), there is no error with a preview size that is not in the list of supported output sizes, but the preview does not start. From the log:
05-12 08:47:10.052 ....CameraActivity: Surface created
05-12 08:47:10.053 ....CameraActivity: Surface changed 4 1455x1080
05-12 08:47:10.054 ....CameraActivity: Find preview size for 1455x1080 (1.347424:1) px
05-12 08:47:10.054 ....CameraActivity: Preview size 1600x1200 px
05-12 08:47:10.070 ....CameraActivity: Surface changed 4 1600x1200
05-12 08:47:10.110 ....CameraActivity: Session started
05-12 08:47:10.163 ....CameraActivity: Surface: Surface(name=null)/#0xec338e5
Result: the preview does not start; I can give the surface view a background color to demonstrate it.
How can I solve this problem and still use a SurfaceView, which is more performant and backwards-compatible than using a SurfaceTexture?
After calling setFixedSize(), you need to wait for the surfaceChanged() callback to fire again before creating the camera capture session.
setFixedSize() queues up the necessary SurfaceView changes, but they don't take effect immediately.
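A minimal sketch of that ordering (previewWidth, previewHeight, and createCaptureSession() are placeholders for your own size choice and session setup):

final SurfaceHolder holder = surfaceView.getHolder();
holder.addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder h) {}

    @Override
    public void surfaceChanged(SurfaceHolder h, int format, int width, int height) {
        // Only configure the session once the fixed size has actually taken effect.
        if (width == previewWidth && height == previewHeight) {
            createCaptureSession(h.getSurface()); // hypothetical session setup
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder h) {}
});
holder.setFixedSize(previewWidth, previewHeight); // queues the change; wait for the callback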
