I try to use GStreamer on Android via Qt and C++. I already use GStreamer on these platforms, but now I have an issue with the plugins:
G_BEGIN_DECLS
GST_PLUGIN_STATIC_DECLARE(coreelements);
GST_PLUGIN_STATIC_DECLARE(audioconvert);
GST_PLUGIN_STATIC_DECLARE(playback);
G_END_DECLS
void MainWindow::play(){
GST_PLUGIN_STATIC_REGISTER(coreelements);
GST_PLUGIN_STATIC_REGISTER(audioconvert);
GST_PLUGIN_STATIC_REGISTER(playback);
GstElement *pipeline;
GError *error = NULL;
pipeline = gst_parse_launch("playbin uri=http://docs.gstreamer.com/media/sintel_trailer-368p.ogv", &error);
if (!pipeline) {
ui->label->setText("error");
return;
}
if (error != NULL) {
qDebug("GST error: %s", error->message);
g_error_free(error);
} else {
qDebug("GST without errors");
}
gst_element_set_state(pipeline, GST_STATE_READY);
gst_video_overlay_set_window_handle(GST_VIDEO_OVERLAY(pipeline), (guintptr)this->ui->playback_widget->winId());
gst_element_set_state(pipeline, GST_STATE_PLAYING);
ui->label->setText("Playing...");
}
After executing this code I get neither video in the playback_widget nor audio, yet the error variable stays clear (equal to NULL) and the label is set to "Playing...". So, maybe I missed something?
I have been working on this problem for 2 weeks now. I have integrated C++ code into my VoIP call recording app; the code is supposed to forcefully set the input source of MediaRecorder to the same one used by the VoIP call (in my case input_source=7 / VOICE_COMMUNICATION).
To achieve my goal I load the shared library libaudioflinger.so and attempt to reach the setParameters function, as can be seen from the snippet below:
handleLibAudioFlinger = dlopen("libaudioflinger.so", RTLD_LAZY | RTLD_GLOBAL);
if (handleLibAudioFlinger != NULL) {
// I do not know the mangled name of the setParameters function
func = dlsym(handleLibAudioFlinger, "setParameters");
if (func != NULL) {
__android_log_print(ANDROID_LOG_ERROR, "TRACKERS", "%s", "Function is not null");
result = 0;
} else {
__android_log_print(ANDROID_LOG_ERROR, "TRACKERS", "%s", "Function is null");
result = -1;
}
audioSetParameters = (lasp) func;
}
dlopen does not return NULL, but dlsym does. The reason is that I need the exact mangled name of the setParameters function from AudioFlinger.cpp as it appears in the Android source code.
I am new to handling Android C++ code and dealing with shared libraries, etc. Can someone tell me step by step how to get the correct mangled name for the function I need?
I am new to TensorFlow. I built the TensorFlow Lite libraries from source. I am trying to use TensorFlow for face recognition as part of my project, and I have to use GPU memory for input/output, e.g. input data: OpenGL texture, output data: OpenGL texture. Unfortunately, this information is outdated: https://www.tensorflow.org/lite/performance/gpu_advanced. I tried to use gpu::gl::InferenceBuilder to build a gpu::gl::InferenceRunner, and I have a problem: I don't understand how to get the model in GraphFloat32 (Model) format and the TfLiteContext.
Example of my experimental code:
using namespace tflite::gpu;
using namespace tflite::gpu::gl;
const TfLiteGpuDelegateOptionsV2 options = {
.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED,
.is_precision_loss_allowed = 1 // FP16
};
tfGPUDelegate = TfLiteGpuDelegateV2Create(&options);
if (interpreter->ModifyGraphWithDelegate(tfGPUDelegate) != kTfLiteOk) {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "GPU Delegate hasn't been created");
return ;
} else {
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "GPU Delegate has been created");
}
InferenceEnvironmentOptions envOption;
InferenceEnvironmentProperties properties;
auto envStatus = NewInferenceEnvironment(envOption, &env, &properties);
if (envStatus.ok()){
__android_log_print(ANDROID_LOG_INFO, "Tensorflow", "Inference environment has been created");
} else {
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Inference environment hasn't been created");
__android_log_print(ANDROID_LOG_ERROR, "Tensorflow", "Message: %s", envStatus.error_message().c_str());
}
InferenceOptions builderOptions;
builderOptions.usage = InferenceUsage::SUSTAINED_SPEED;
builderOptions.priority1 = InferencePriority::MIN_LATENCY;
builderOptions.priority2 = InferencePriority::AUTO;
builderOptions.priority3 = InferencePriority::AUTO;
//The last part requires a model
// GraphFloat32* graph;
// TfLiteContext* tfLiteContex;
//
// auto buildStatus = BuildModel(tfLiteContex, delegate_params, &graph);
// if (buildStatus.ok()){}
You may look at the function BuildFromFlatBuffer (https://github.com/tensorflow/tensorflow/blob/6458d346470158605ecb5c5ba6ad390ae0dc6014/tensorflow/lite/delegates/gpu/common/testing/tflite_model_reader.cc). It creates an Interpreter and builds the graph from it.
Mediapipe also uses InferenceRunner; you may find these files useful:
https://github.com/google/mediapipe/blob/master/mediapipe/calculators/tflite/tflite_inference_calculator.cc
https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/util/tflite/tflite_gpu_runner.cc
I am working on a Cordova app. I want to implement a QR code reader. I tried the plugins available for Cordova, but they are all buggy, and some do not provide a preview of the scanner/video on the same screen.
So I decided to use Instascan, which is a JS-based library meant to be used with webcams. I implemented it in a simple Cordova app and it works.
I see the preview of my scan (the camera video currently being scanned) and it scans perfectly.
But later I merged that code with my actual Cordova app, which uses Vue CLI. Now I am getting:
Error: Cannot access video stream (NotReadableError)
This error is probably (as I read) due to Chrome's HTTPS policy. But the problem is that Cordova uses a WebView, and the other Cordova app, a basic Cordova instance with only this plugin, works perfectly.
My implementation:
mounted: function () {
var _this = this;
this.$ons.ready(function () { // this is ready event fired by Onsen UI when cordova's native APIs are loaded
var scanner = new Instascan.Scanner({
continuous: true,
mirror: false,
video: document.getElementById('scannerPreview'),
});
scanner.addListener('scan', function (content) {
alert('scan' + content);
});
Instascan.Camera.getCameras().then(function (cameras) {
if (cameras.length > 0) {
// Prefer the second (usually rear) camera when more than one is available.
if (cameras.length === 1) {
scanner.start(cameras[0]);
} else {
scanner.start(cameras[1]);
}
} else {
alert('No cameras found.');
}
}).catch(function (e) {
alert(e);
});
});
},
First add the permissions plugin:
cordova plugin add cordova-plugin-android-permissions
Then you must request the camera permission:
var permissions = cordova.plugins.permissions;
permissions.requestPermission(permissions.CAMERA, success, error);
function error() {
return false;
}
function success( status ) {
if( !status.hasPermission ) error();
return true;
}
I am trying to create a 1v3 or 4v4 conferencing (whatever you call it) Android app. I have successfully connected 4 people together using WebRTC and socket.io.
But when I disconnect one of the users I get a native WebRTC crash:
Fatal signal 11 (SIGSEGV), code 1, fault addr 0xb8 in tid 17650 (Thread-648)
The same code works on a Moto C but crashes on other devices.
WebRTC version used: compile 'io.pristine:libjingle:9694@aar'
onDisconnect(){
if (null != peerConnection2) {
peerConnection2.removeStream(localMediaStream);
peerConnection2.close();
peerConnection2 = null;
}
if (null != peerConnection3) {
peerConnection3.dispose();
peerConnection3 = null;
}
if (null != localVideoSource) {
localVideoSource.dispose();
localVideoSource = null;
}
if (null != peerConnectionFactory) {
audioManager.setMode(AudioManager.MODE_NORMAL);
audioManager.setSpeakerphoneOn(false);
peerConnectionFactory.dispose(); // <-- this is where the app crashes
peerConnectionFactory = null;
}
}
I am not sure if updating the version will help me with this bug. Even if I do update to a new version, I am unable to find any proper documentation or blog related to it. It would be great if you could point me to any known link (blog/documentation) for the latest version of libjingle.
I try to use GStreamer on Android via Qt and C++. I already use GStreamer on these platforms, but now I have an issue with the plugins:
G_BEGIN_DECLS
GST_PLUGIN_STATIC_DECLARE(coreelements);
GST_PLUGIN_STATIC_DECLARE(audiotestsrc);
GST_PLUGIN_STATIC_DECLARE(audioconvert);
GST_PLUGIN_STATIC_DECLARE(androidmedia);
GST_PLUGIN_STATIC_DECLARE(playback);
G_END_DECLS
void MainWindow::play(){
GST_PLUGIN_STATIC_REGISTER(coreelements);
GST_PLUGIN_STATIC_REGISTER(audiotestsrc);
GST_PLUGIN_STATIC_REGISTER(audioconvert);
GST_PLUGIN_STATIC_REGISTER(androidmedia);
GST_PLUGIN_STATIC_REGISTER(playback);
GstElement *pipeline;
GError *error = NULL;
pipeline = gst_parse_launch("audiotestsrc ! audioconvert ! androidmedia", &error);
if (!pipeline) {
ui->label->setText("error");
return;
}
if (error != NULL) {
qDebug("GST error: %s", error->message);
qDebug("GST end.");
}else{
qDebug("GST without errors");
}
gst_element_set_state(pipeline, GST_STATE_PLAYING);
ui->label->setText("Playing...");
}
With this variant of the code I get
undefined reference to 'gst_preset_get_type'
and I don't know which library I need to link my app against.
How can I solve this problem?
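For context: gst_preset_get_type belongs to the GStreamer core library (libgstreamer-1.0), so this error usually means the core library is missing from the link line, or comes before the static plugin libraries that reference it. A hypothetical qmake fragment, assuming GSTREAMER_ROOT points at the GStreamer Android prebuilt (paths and the exact set of plugin libraries are illustrative, not taken from the question):

```
# Hypothetical path; adjust for your GStreamer Android prebuilt and ABI.
GSTREAMER_ROOT = /path/to/gstreamer-1.0-android-universal/armv7

# Static plugin libraries first...
LIBS += -L$$GSTREAMER_ROOT/lib/gstreamer-1.0 \
        -lgstcoreelements -lgstaudiotestsrc -lgstaudioconvert \
        -lgstandroidmedia -lgstplayback

# ...then the core libraries they depend on (order matters for static linking).
LIBS += -L$$GSTREAMER_ROOT/lib \
        -lgstreamer-1.0 -lgstbase-1.0 -lgstaudio-1.0 \
        -lgobject-2.0 -lglib-2.0
```

With static libraries the linker resolves symbols left to right, so putting -lgstreamer-1.0 after the plugin libraries is what makes references like gst_preset_get_type resolve.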