I'm building a plugin in Java for Unity. I set up the device camera myself in Java, which is working great. However, passing the camera preview data to Unity is proving difficult.
I have tested that everything works using the ARToolKit library, which has a function to pass camera preview data to Unity.
However, Unity itself also has such a function for Android camera support, which I would rather use. This function is called
private final native void nativeVideoFrameCallback(int var1, byte[] var2, int var3, int var4);
in the UnityPlayer class, in classes.jar.
You can download the classes.jar for inspection from here: https://github.com/PlayFab/Unity3d_Login_Example_Project/blob/master/Assets/Facebook/Editor/android/android-libs/unity-classes.jar (press the 'Raw' button).
As you can see, it is declared private, so I have no way of calling it directly.
Original use by UnityPlayer
nativeVideoFrameCallback is originally called by Unity in:
public void onCameraFrame(final com.unity3d.player.a var1, final byte[] var2) {
final int var3 = var1.a();
final Size var4 = var1.b();
this.a(new UnityPlayer.c((byte)0) {
public final void a() {
UnityPlayer.this.nativeVideoFrameCallback(var3, var2, var4.width, var4.height);
var1.a(var2);
}
});
}
which is public, but takes a parameter of the non-public type "com.unity3d.player.a var1", which I can't instantiate.
A possible solution
My solution was to declare a new native method link for nativeVideoFrameCallback in my own class, but calling it leads to a fatal exception. I do not get this exception when I don't call my own nativeVideoFrameCallback link, so Unity's own call does succeed. Presumably JNI resolves a native method against the class that declares it, so a declaration in my own class never matches the implementation Unity registers for com.unity3d.player.UnityPlayer:
UnsatisfiedLinkError: No implementation found for ...package...UnityPlayer.nativeVideoFrameCallback(int, byte[], int, int).
My UnityPlayer class:
public class UnityPlayer extends com.unity3d.player.UnityPlayer {
private final ConcurrentLinkedQueue<Runnable> jobs = new ConcurrentLinkedQueue<Runnable>();
public UnityPlayer(ContextWrapper contextWrapper) {
super(contextWrapper);
}
public void addJob(final Camera camera, final int cam, final byte[] data, final int width, final int height) {
jobs.add(new Runnable() {
@Override
public void run() {
nativeVideoFrameCallback(cam, data, width, height);
camera.addCallbackBuffer(data);
}
});
}
private final native void nativeVideoFrameCallback(int var1, byte[] var2, int var3, int var4);
static {
try {
System.loadLibrary("main"); // Is this still required? I would think not, as Unity already loads it
} catch (UnsatisfiedLinkError var1) {
Log.d(Constants.TAG, "Unable to find " + "main");
} catch (Exception var2) {
Log.d(Constants.TAG, "Unknown error " + var2);
}
}
@Override
protected void executeGLThreadJobs() {
super.executeGLThreadJobs();
Runnable job = jobs.poll();
if (job != null) {
job.run();
}
}
}
which requires a copy of UnityNativeActivity that instantiates the UnityPlayer above instead of com.unity3d.player.UnityPlayer.
I got it to work using reflection. It's not optimal to use reflection, so if someone knows a solution without reflection, I will accept that as the answer.
For anyone else looking to manage their own camera on Android with Unity:
public class UnityPlayer extends com.unity3d.player.UnityPlayer {
private final ConcurrentLinkedQueue<Runnable> jobs = new ConcurrentLinkedQueue<Runnable>();
public UnityPlayer(ContextWrapper contextWrapper) {
super(contextWrapper);
}
public void addJob(final Camera camera, final int cam, final byte[] data, final int width, final int height) { // execute on opengl thread using jobs
jobs.add(new Runnable() {
@Override
public void run() {
videoFrameCallback(cam, data, width, height);
camera.addCallbackBuffer(data);
}
});
}
@Override
protected int[] initCamera(int var1, int var2, int var3, int var4) {
return new int[]{640, 480}; // return width and height of camera
}
// private final native void nativeVideoFrameCallback(int var1, byte[] var2, int var3, int var4); ==> camera id (0 back, 1 front), imagedata, width, height
private void videoFrameCallback(int var1, byte[] var2, int var3, int var4) {
try {
Method m = com.unity3d.player.UnityPlayer.class.getDeclaredMethod("nativeVideoFrameCallback", Integer.TYPE, byte[].class, Integer.TYPE, Integer.TYPE);
m.setAccessible(true);
m.invoke(this, var1, var2, var3, var4);
} catch (Exception e) {
Log.d(Constants.TAG, e.toString());
}
}
@Override
protected void executeGLThreadJobs() {
super.executeGLThreadJobs();
Runnable job = jobs.poll();
if (job != null) {
job.run();
}
}
}
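To feed frames in, the camera's preview callback just hands each buffer to addJob, which then runs the native callback on the GL thread. A rough sketch of that wiring (the mUnityPlayer field and the previewWidth/previewHeight variables are placeholders, not part of the code above):
// Assumes the camera was opened elsewhere and preview buffers were registered
// with camera.addCallbackBuffer() beforehand.
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // 0 = back camera; width/height must match the configured preview size
        mUnityPlayer.addJob(camera, 0, data, previewWidth, previewHeight);
    }
});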
Related
My end goal is to visualize the remote audio in a one-on-one call using the Agora API. The Agora API and the available examples are quite vast, but I did not find an example that lets me access the audio as it streams in, so that I can get the samples' max amplitude and send it to a visualizer. The byte array would do just fine.
I have looked through the examples provided at https://github.com/AgoraIO/API-Examples, which seemed promising, but I have not been able to solve this. Any help is appreciated.
(Within the API-Example on GitHub, I have attempted to implement ProcessRawData and AudioRecordService.)
Update: The APIExample allows me to grab the raw data as it flows through, and that is what I am looking for. The issue arises when I try to duplicate the "ProcessRawData" class in a new project: the callback for the audio observer is never called. I have gone through my code and it matches everything in the example. The only thing I can think of is that the way I imported the "lib-raw-data" folder was incorrect. I simply copied the entire 'lib-raw-data' folder from the example project into my own, then added the library module to the settings.gradle file as well as the app-level build.gradle file. Outside of that, I simply made sure the code matches the example provided.
Below is the most basic form of my application with the imported library "lib-raw-data" as described. I have no errors within Android Studio, so I don't know where to look. The example in the GitHub repo above works, but the same code below does not.
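For reference, the Gradle wiring mentioned above is roughly the following (assuming the copied lib-raw-data folder sits in the project root; this is a sketch, not my exact files):
// settings.gradle
include ':lib-raw-data'

// build.gradle (app module)
dependencies {
    implementation project(':lib-raw-data')
}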
MainActivity.java
public class MainActivity extends AppCompatActivity {
private SessionVideoCall videoCall;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initVideoCall();
}
private void initVideoCall(){
videoCall = new SessionVideoCall(this);
videoCall.setChannelName(#CHANNEL_NAME);
videoCall.attachView();
videoCall.startCall();
}
}
SessionVideoCall.java
public class SessionVideoCall implements MediaDataAudioObserver {
private Handler handler;
private final String TAG = this.getClass().getSimpleName();
private static final int PERMISSION_REQ_ID = 44;
private static final String[] REQUESTED_PERMISSIONS = {
Manifest.permission.RECORD_AUDIO,
Manifest.permission.CAMERA,
android.Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.WRITE_EXTERNAL_STORAGE
};
private final Activity ACTIVITTY;
private MediaDataObserverPlugin mediaDataObserverPlugin;
private String channelName;
private FrameLayout mLocalContainer;
private FrameLayout mRemoteContainer;
private VideoCanvas mLocalVideo;
private VideoCanvas mRemoteVideo;
private Timer timer;
boolean isVisualizerAttached = true;
public static RtcEngine engine;
// set up engine
public SessionVideoCall(Activity activity){
this.ACTIVITTY = activity;
handler = new Handler(Looper.getMainLooper());
}
// set channel name
public void setChannelName(String channelName){
this.channelName = channelName;
}
public void attachView(){
mLocalContainer = ACTIVITTY.findViewById(R.id.local_video_view_container);
mRemoteContainer = ACTIVITTY.findViewById(R.id.remote_video_view_container);
}
// start call
public void startCall(){
if(hasPermissions())
initEngineAndJoinChannel();
}
// end call
public void endCall(){
removeFromParent(mLocalVideo);
mLocalVideo = null;
removeFromParent(mRemoteVideo);
mRemoteVideo = null;
if (mediaDataObserverPlugin != null) {
mediaDataObserverPlugin.removeAudioObserver(this);
mediaDataObserverPlugin.removeAllBuffer();
}
MediaPreProcessing.releasePoint();
leaveChannel();
}
private void initEngineAndJoinChannel() {
initializeEngine();
setupObserver();
setupAudioConfig();
setupVideoConfig();
setupLocalVideo();
joinChannel();
}
private void initializeEngine() {
try {
engine = RtcEngine.create(ACTIVITTY.getApplicationContext(), ACTIVITTY.getString(R.string.agora_app_id), mRtcEventHandler);
} catch (Exception e) {
Log.e(TAG, Log.getStackTraceString(e));
throw new RuntimeException("NEED TO check rtc sdk init fatal error\n" + Log.getStackTraceString(e));
}
}
private void setupObserver(){
mediaDataObserverPlugin = MediaDataObserverPlugin.the();
MediaPreProcessing.setCallback(mediaDataObserverPlugin);
MediaPreProcessing.setAudioPlayByteBuffer(mediaDataObserverPlugin.byteBufferAudioPlay);
mediaDataObserverPlugin.addAudioObserver(this);
}
private void setupAudioConfig(){
engine.setChannelProfile(Constants.CHANNEL_PROFILE_LIVE_BROADCASTING);
engine.setClientRole(IRtcEngineEventHandler.ClientRole.CLIENT_ROLE_BROADCASTER);
engine.setDefaultAudioRoutetoSpeakerphone(false);
engine.setEnableSpeakerphone(false);
engine.setPlaybackAudioFrameParameters(4000, 1, RAW_AUDIO_FRAME_OP_MODE_READ_ONLY, 1024);
}
private void setupVideoConfig() {
engine.enableVideo();
engine.setVideoEncoderConfiguration(new VideoEncoderConfiguration(
VideoEncoderConfiguration.VD_640x360,
VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_15,
VideoEncoderConfiguration.STANDARD_BITRATE,
VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_FIXED_PORTRAIT));
}
private void setupLocalVideo() {
SurfaceView view = RtcEngine.CreateRendererView(ACTIVITTY);
view.setZOrderMediaOverlay(true);
mLocalContainer.addView(view);
mLocalVideo = new VideoCanvas(view, VideoCanvas.RENDER_MODE_HIDDEN, 0);
engine.setupLocalVideo(mLocalVideo);
}
private void joinChannel() {
String token = ACTIVITTY.getString(R.string.agora_access_token);
if (TextUtils.isEmpty(token) || TextUtils.equals(token, "#YOUR ACCESS TOKEN#")) {
token = null; // default, no token
}
engine.joinChannel(token, channelName, "Extra Optional Data", 0);
}
private void leaveChannel(){
if (mediaDataObserverPlugin != null) {
mediaDataObserverPlugin.removeAudioObserver(this);
mediaDataObserverPlugin.removeAllBuffer();
}
MediaPreProcessing.releasePoint();
engine.leaveChannel();
if(timer != null)
timer.cancel();
}
private void removeFromParent(VideoCanvas canvas) {
if (canvas != null) {
ViewParent parent = canvas.view.getParent();
if (parent != null) {
ViewGroup group = (ViewGroup) parent;
group.removeView(canvas.view);
//return group;
}
}
//return null;
}
private void setupRemoteVideo(int uid) {
ViewGroup parent = mRemoteContainer;
if (parent.indexOfChild(mLocalVideo.view) > -1) {
parent = mLocalContainer;
}
if (mRemoteVideo != null) {
return;
}
SurfaceView view = RtcEngine.CreateRendererView(ACTIVITTY);
view.setZOrderMediaOverlay(parent == mLocalContainer);
parent.addView(view);
mRemoteVideo = new VideoCanvas(view, VideoCanvas.RENDER_MODE_HIDDEN, uid);
// Initializes the video view of a remote user.
engine.setupRemoteVideo(mRemoteVideo);
}
private final IRtcEngineEventHandler mRtcEventHandler = new IRtcEngineEventHandler() {
@Override
public void onJoinChannelSuccess(String channel, int uid, int elapsed) {
super.onJoinChannelSuccess(channel, uid, elapsed);
setupTimer();
Log.d(TAG,"onJoinChannelSuccess: ");
}
@Override
public void onFirstRemoteVideoDecoded(final int uid, int width, int height, int elapsed) {
ACTIVITTY.runOnUiThread(new Runnable() {
@Override
public void run() {
Log.d(TAG,"First remote video decoded, uid: " + (uid & 0xFFFFFFFFL));
setupRemoteVideo(uid);
}
});
}
@Override
public void onUserOffline(final int uid, int reason) {
super.onUserOffline(uid, reason);
// when remote user logs off
}
@Override
public void onUserJoined(int uid, int elapsed) {
super.onUserJoined(uid, elapsed);
Log.i(TAG, "onUserJoined->" + uid);
Log.d(TAG, "user has joined call: " + uid);
handler.post(() ->
{
if (mediaDataObserverPlugin != null) {
mediaDataObserverPlugin.addDecodeBuffer(uid);
}
});
}
@Override
public void onRemoteAudioStateChanged(int uid, int state, int reason, int elapsed)
{
super.onRemoteAudioStateChanged(uid, state, reason, elapsed);
Log.i(TAG, "onRemoteAudioStateChanged->" + uid + ", state->" + state + ", reason->" + reason);
}
};
private boolean hasPermissions(){
return (checkSelfPermission(REQUESTED_PERMISSIONS[0]) &&
checkSelfPermission(REQUESTED_PERMISSIONS[1]) &&
checkSelfPermission(REQUESTED_PERMISSIONS[2]) &&
checkSelfPermission(REQUESTED_PERMISSIONS[3]));
}
private boolean checkSelfPermission(String permission) {
if (ContextCompat.checkSelfPermission(ACTIVITTY, permission) !=
PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(ACTIVITTY, REQUESTED_PERMISSIONS, PERMISSION_REQ_ID);
return false;
}
return true;
}
private void setupTimer(){
timer = new Timer();
timer.schedule(new TimerTask() {
@Override
public void run() {
if(maxAmplitude > 0)
Log.e(TAG, "Amplitude Greater than 0: " + maxAmplitude);
}
},0,50);
}
@Override
public void onRecordAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
Log.e(TAG, "onRecordAudioFrame: ");
}
private int maxAmplitude = 0;
@Override
public void onPlaybackAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
if(isVisualizerAttached) {
short[] rawAudio = new short[data.length/2];
ByteBuffer.wrap(data).order(ByteOrder.BIG_ENDIAN).asShortBuffer().get(rawAudio);
short amplitude = 0;
for(short num: rawAudio){
if(num > amplitude)
amplitude = num;
}
Log.e(TAG, "onPlaybackAudioFrame: Supposedly we have data -> max: " + amplitude);
}
Log.e(TAG, "onPlaybackAudioFrame:");
}
@Override
public void onPlaybackAudioFrameBeforeMixing(int uid, byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
Log.e(TAG, "onPlaybackAudioFrameBeforeMixing: ");
}
@Override
public void onMixedAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
Log.e(TAG, "onMixedAudioFrame: ");
}
}
Did you add the permissions in the manifest? Agora does not show any runtime error when you don't add them; instead, your voice call goes blank, and the microphone and speaker buttons, if added, won't do anything. I ran into this one day dealing with some Agora sample code from GitHub (that is, it did not take care of the manifest file).
According to the Agora documentation, the following permissions are required:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.BLUETOOTH" />
I have discovered where in the API Example (provided by the GitHub repo linked above) to interact with the remote audio data: in the CustomAudioSource class, within the AsyncTask object at the bottom of the file. I had to run on a physical device to take advantage of it.
Now I need to find the max amplitude of the audio byte array. The formula given within this example is "lengthInByte = sampleRate/1000 × 2 × channels × audio duration (ms)".
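A minimal sketch of what I mean by max amplitude, assuming the buffer holds 16-bit PCM samples (whether the Agora frames are little- or big-endian is an assumption here; swap the ByteOrder if needed):
// Uses java.nio.ByteBuffer, ByteOrder and ShortBuffer
private static int maxAmplitude(byte[] data) {
    ShortBuffer samples = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    int max = 0;
    while (samples.hasRemaining()) {
        // take the absolute value so negative half-waves count too
        max = Math.max(max, Math.abs((int) samples.get()));
    }
    return max;
}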
I'm currently creating an Android app for school, but I still want to give it my best.
I'm pretty new to Android development and coding in general. The app is supposed to be a stock market game. (By the way, I'm German, so there might be some German variable names.)
I want to sort my RecyclerView containing shares. It works alphabetically, but not by worth.
I can guarantee that the key name "worth" for the double in the JSONObject is correct. What am I doing wrong?
public class CompanyAdapter extends RecyclerView.Adapter<CompanyAdapter.viewHolder> implements Filterable {
private CustomFilter filter;
private ArrayList<JSONObject> jObjList;
private final String keyName;
private final String keyWorth;
private final String keyChange;
public final static int SORT_ALPHABETICALLY = 0;
public final static int SORT_ALPHABETICALLY_REVERSE = 1;
public final static int SORT_BY_WORTH = 2;
public final static int SORT_BY_WORTH_REVERSE = 3;
public CompanyAdapter(Context context, ArrayList<JSONObject> jObjList) {
this.jObjList = jObjList;
Context c = context;
keyName = c.getResources().getString(R.string.nameCompany);
keyWorth = c.getResources().getString(R.string.worthCompany);
keyChange = c.getResources().getString(R.string.changeCompany);
sort(SORT_ALPHABETICALLY);
}
//left out some unnecessary code
public void sort (int sorting) {
if (jObjList.size()>1) {
switch (sorting) {
case SORT_ALPHABETICALLY:
sortAlphabetically();
break;
case SORT_ALPHABETICALLY_REVERSE:
sortAlphabeticallyReverse();
break;
case SORT_BY_WORTH:
sortByWorth();
break;
case SORT_BY_WORTH_REVERSE:
sortByWorthReverse();
break;
}
}
}
private void sortAlphabetically () {
Collections.sort(jObjList, new Comparator<JSONObject>() {
@Override
public int compare(JSONObject j1, JSONObject j2) {
try {
return j1.getString(keyName).compareToIgnoreCase(j2.getString(keyName));
} catch (JSONException e) {
return 0;
}
}
});
}
private void sortAlphabeticallyReverse () {
sortAlphabetically();
Collections.reverse(jObjList);
}
private void sortByWorth () {
Collections.sort(jObjList, new Comparator<JSONObject>() {
@Override
public int compare(JSONObject j1, JSONObject j2) {
try {
return Double.compare(j1.getDouble(keyWorth), j2.getDouble(keyWorth));
} catch (JSONException e) {
Log.e("JSONException", e.getMessage());
return 0;
}
}
});
}
private void sortByWorthReverse () {
sortByWorth();
Collections.reverse(jObjList);
}
}
Try to replace
return Double.compare(j1.getDouble(keyWorth), j2.getDouble(keyWorth));
with
System.out.print("VALUE1: " + j1.getDouble(keyWorth));
System.out.print("VALUE2: " + j2.getDouble(keyWorth));
return (int) (j1.getDouble(keyWorth) - j2.getDouble(keyWorth));
to make sure of the values; printing them lets you debug what is actually being compared.
And after sortByWorth(), add notifyDataSetChanged(), as sketched below.
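For example, a sketch of the adapter's sort method with that extra call (same names as in the question):
public void sort(int sorting) {
    if (jObjList.size() > 1) {
        switch (sorting) {
            case SORT_BY_WORTH:
                sortByWorth();
                break;
            case SORT_BY_WORTH_REVERSE:
                sortByWorthReverse();
                break;
            // ... other cases as before
        }
        // tell the RecyclerView that the backing list changed
        notifyDataSetChanged();
    }
}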
Have you checked the values of the objects you are comparing in the console?
Since you are reading the values in as strings, perhaps they will not give the result you expect.
Furthermore, what operation is the compare function performing?
Replace:
return Double.compare(j1.getDouble(keyWorth), j2.getDouble(keyWorth));
in the sortByWorth method with:
return Double.valueOf(j1.getDouble(keyWorth)).compareTo(j2.getDouble(keyWorth));
Try it.
I forgot the notifyDataSetChanged(). Sorry, that's a stupid error.
I am replacing my SQLite database with an online database (Firestore). With Firestore, each answer from the database comes back to me through a callback.
The problem is that I have several calls to the database in a loop that fills an array, and the array is not accessible from the callbacks unless I declare it final or move it outside the function, and therefore I cannot change it.
So I'm looking for a way to fill this array without completely modifying the code that already exists. I saw ArrayBlockingQueue, but I wonder whether a simpler solution exists.
If possible I would like to keep all the variables inside the function, but I have not yet found a solution for that.
I know that for this example we do not necessarily need an array, but I want to keep it because it's just an example ;)
Before (SQLite)
public int player_in_x_game(int idPlayer) {
int gamesWherePlayerIsHere = 0;
ArrayList<Game> gamesArray = Database.getGamesArray();
for (Game game : gamesArray)
if (Utils.isPlayerPresentInGame(game.getId(), idPlayer))
gamesWherePlayerIsHere++;
return gamesWherePlayerIsHere;
}
After (with callbacks)
private static int counter= 0;
private static int resultNP = 0;
private static ArrayBlockingQueue<Integer> results;
public static void numberGamesWherePlayerIsPresent(final long idPlayer, final Callbacks.IntCallback callback){
Thread thread = new Thread() {
@Override
public void run() {
games(new Callbacks.ListGameCallback() {
@Override
public void onCallback(ArrayList<Game> gameArrayList) {
counter = gameArrayList.size();
results = new ArrayBlockingQueue<>(gameArrayList.size());
for (Game game: gameArrayList){
Utils.isPlayerPresentInGame(game.getId(), idPlayer, new Callbacks.BooleanCallback() {
@Override
public void onCallback(boolean bool) {
if (bool)
results.add(1);
else
results.add(0);
}
});
}
int result;
try {
while (counter > 0) {
result = results.take();
counter--;
resultNP += result;
}
}catch (InterruptedException ie){
ie.fillInStackTrace();
Log.e(TAG,"results.take() failed");
}
callback.onCallback(resultNP);
}
});
}
};
thread.setName("Firestore - numberGamesWherePlayerIsPresent()");
thread.start();
}
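For completeness, this is roughly how the callback version is then consumed (the shape of Callbacks.IntCallback is inferred from the code above):
numberGamesWherePlayerIsPresent(idPlayer, new Callbacks.IntCallback() {
    @Override
    public void onCallback(int numberOfGames) {
        // invoked once every isPlayerPresentInGame() result has been counted
        Log.d(TAG, "Player is present in " + numberOfGames + " games");
    }
});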
I need to load images from my database; I store them in blobs, just like Android does. Each image is represented by my custom URI. How can I make this work with Glide?
I want to benefit from Glide's caching and fast loading.
Is there a proper way of doing this?
You can register a custom ModelLoader class with Glide by calling the Glide.get(context).register() method. In your ModelLoader, you tell Glide how to load image resources from your database by implementing the getResourceFetcher method and returning a custom DataFetcher instance.
Here's an example:
DBImageUri class:
public class DBImageUri {
private String uriString;
public DBImageUri(String uriString){
this.uriString = uriString;
}
@Override
public String toString(){
return uriString;
}
}
DBDataFetcher class:
public class DBDataFetcher implements DataFetcher<InputStream> {
private DBImageUri uri;
private int width;
private int height;
private InputStream stream;
public DBDataFetcher(DBImageUri uri, int width, int height){
this.uri = uri;
this.width = width;
this.height = height;
}
@Override
public InputStream loadData(Priority priority){
String uriString = this.uri.toString();
stream = //**load image based on uri, and return InputStream for this image. this is where you do the actual image from database loading process**;
return stream;
}
@Override
public String getId(){
//width & height should be ignored if you return same image resources for any resolution (return uri.toString();)
return uri.toString() + "_" + width + "_" + height;
}
@Override
public void cleanup(){
if (stream != null) {
try {
stream.close();
} catch (IOException e) {
// Ignore
}
}
}
@Override
public void cancel(){
}
}
DBModelLoader class:
public class DBModelLoader implements ModelLoader<DBImageUri, InputStream> {
@Override
public DataFetcher<InputStream> getResourceFetcher(DBImageUri model, int width, int height){
return new DBDataFetcher(model, width, height);
}
public static class Factory implements ModelLoaderFactory<DBImageUri, InputStream>{
@Override
public ModelLoader<DBImageUri, InputStream> build(Context context, GenericLoaderFactory factories){
return new DBModelLoader();
}
@Override
public void teardown(){
}
}
}
and then you add the ModelLoader to the Glide registry by calling:
Glide.get(context).register(DBImageUri.class, InputStream.class, new DBModelLoader.Factory());
now you can load your database images:
Glide.with(context).load(new DBImageUri(/*your unique id string for database image*/)).into(imageview);
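The loadData body above is left as a placeholder. A minimal sketch of what it could look like, assuming the images live in an SQLite table called images with an _id column and an image blob column, and a hypothetical MyDatabase singleton (all of these names are examples, not part of the answer above):
@Override
public InputStream loadData(Priority priority) {
    SQLiteDatabase db = MyDatabase.getInstance().getReadableDatabase();
    Cursor cursor = db.query("images", new String[]{"image"},
            "_id = ?", new String[]{uri.toString()}, null, null, null);
    try {
        if (cursor.moveToFirst()) {
            // wrap the blob so Glide can decode it like any other stream
            stream = new ByteArrayInputStream(cursor.getBlob(0));
        }
    } finally {
        cursor.close();
    }
    return stream;
}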
I am trying to generate a depth map from the point cloud. I know that I can project the point cloud onto the image plane; however, there is already a function (ScreenCoordinateToWorldNearestNeighbor) in the TangoSupport script that finds the XYZ point for a given screen coordinate.
I am unable to get this support function to work, and it seems that one or more of my inputs are invalid. I am updating my depth map texture in the OnTangoDepthAvailable event.
public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
{
_depthAvailable = true;
Matrix4x4 ccWorld = _Camera.transform.localToWorldMatrix;
bool isValid = false;
Vector3 colorCameraPoint = new Vector3();
for (int i = 0; i < _depthMapSize; i++)
{
for (int j = 0; j < _depthMapSize; j++)
{
if (TangoSupport.ScreenCoordinateToWorldNearestNeighbor(
_PointCloud.m_points, _PointCloud.m_pointsCount,
tangoDepth.m_timestamp,
_ccIntrinsics,
ref ccWorld,
new Vector2(i / (float)_depthMapSize, j / (float)_depthMapSize),
out colorCameraPoint, out isValid) == Common.ErrorType.TANGO_INVALID)
{
_depthTexture.SetPixel(i, j, Color.red);
continue;
}
if (isValid)
{
//_depthTexture.SetPixel(i, j, new Color(colorCameraPoint.z, colorCameraPoint.z, colorCameraPoint.z));
_depthTexture.SetPixel(i, j,
new Color(0,UnityEngine.Random.value,0));
}
else
{
_depthTexture.SetPixel(i, j, Color.white);
}
}
}
_depthTexture.Apply();
_DepthMapQuad.material.mainTexture = _depthTexture;
}
If I had to guess, I would say that I am passing in the wrong matrix (ccWorld). Here is what the documentation says about the matrix parameter:
Transformation matrix of the color camera with respect to the Unity
world frame.
The result is a white depth map, which means that the function is returning successfully; however, isValid is false, meaning that it couldn't find any nearby point cloud point after projection.
Any ideas? Also, I noticed that the performance is pretty bad, even when my depth map is 8x8. Should I not be updating the depth map whenever new depth data is available (inside OnTangoDepthAvailable)?
Edit:
I was able to make the function return successfully; however, it now doesn't find a nearby point cloud point after projection. The resulting depth map is always white. I am printing out all the arguments and they all look correct, so I think I am passing in the wrong matrix.
You should update your SDK and Project Tango Dev Kit. Here is an example of getting a depth map on Android; perhaps it gives you a hint for Unity:
public class MainActivity extends AppCompatActivity {
private Tango mTango;
private TangoConfig mTangoConfig;
private TangoPointCloudManager mPointCloudManager;
private AtomicBoolean tConnected = new AtomicBoolean(false);
Random rand = new Random();
private ImageView imageDepthMap;
private static final ArrayList<TangoCoordinateFramePair> framePairs = new ArrayList<TangoCoordinateFramePair>();
{
framePairs.add(new TangoCoordinateFramePair(
TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH,
TangoPoseData.COORDINATE_FRAME_DEVICE));
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
//initialize the imageView
imageDepthMap = (ImageView)findViewById(R.id.imageView);
//initialize pointCloudManager
mPointCloudManager = new TangoPointCloudManager();
}
@Override
protected void onResume(){
super.onResume();
//obtain the tango configuration
if(tConnected.compareAndSet(false, true)) {
try {
setTango();
} catch (TangoOutOfDateException tE) {
tE.printStackTrace();
}
}
}
@Override
protected void onPause(){
super.onPause();
if(tConnected.compareAndSet(true, false)) {
try {
//disconnect Tango service so other applications can use it
mTango.disconnect();
} catch (TangoException e) {
e.printStackTrace();
}
}
}
private void setTango(){
mTango = new Tango(MainActivity.this, new Runnable() {
@Override
public void run() {
TangoSupport.initialize();
mTangoConfig = new TangoConfig();
mTangoConfig = mTango.getConfig(TangoConfig.CONFIG_TYPE_CURRENT);
mTangoConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true); //activate depth sensing
mTango.connect(mTangoConfig);
mTango.connectListener(framePairs, new Tango.OnTangoUpdateListener() {
@Override
public void onPoseAvailable(TangoPoseData tangoPoseData) {
}
@Override
public void onXyzIjAvailable(TangoXyzIjData pointCloud) {
// Log.d("gDebug", "xyZAvailable");
//TangoXyzIjData pointCloud = mPointCloudManager.getLatestXyzIj();
// Update current camera pose
if (pointCloud.ijRows * pointCloud.ijCols > 0){
try {
// Calculate the last camera color pose.
TangoPoseData lastFramePose = TangoSupport.getPoseAtTime(0,
TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR,
TangoSupport.TANGO_SUPPORT_ENGINE_OPENGL, 0);
if (pointCloud != null) {
//obtain depth info per pixel
TangoSupport.DepthBuffer depthBuf = TangoSupport.upsampleImageNearestNeighbor(pointCloud, mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR), lastFramePose);
//create Depth map
int[] intBuff = convertToInt(depthBuf.depths, depthBuf.width, depthBuf.height);
final Bitmap Image = Bitmap.createBitmap(intBuff, depthBuf.width, depthBuf.height, Bitmap.Config.ARGB_8888);
runOnUiThread(new Runnable() {
@Override
public void run() {
imageDepthMap.setImageBitmap(Image);
}
});
}
} catch (TangoErrorException e) {
Log.e("gDebug", "Could not get valid transform");
}
}
}
@Override
public void onFrameAvailable(int i) {
//Log.d("gDebug", "Frame Available from " + i);
}
@Override
public void onTangoEvent(TangoEvent tangoEvent) {
}
});
}
});
}
private int[] convertToInt(FloatBuffer pointCloudData, int width, int height){
double mulFact = 255.0/5.0;
int byteArrayCapacity = width * height;
int[] depthMap = new int[byteArrayCapacity];
int grayPixVal = 0;
pointCloudData.rewind();
for(int i =0; i < byteArrayCapacity; i++){
//obtain grayscale representation
grayPixVal = (int)(mulFact * (5.0- pointCloudData.get(i)));
depthMap[i] = Color.rgb(grayPixVal, grayPixVal, grayPixVal);
}
return depthMap;
}
}
I extracted this code from my already working version, so try to fix any config-related errors. The code assumes a depth sensing range of 0.4 m to 5 m for the depth estimation. Mapping zero to 255 lets regions that were not estimated (value of zero) appear white.
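For example, with mulFact = 255/5, a depth of 0 m (no estimate) maps to 255 (white), 2.5 m maps to about 127 (mid gray), and 5 m maps to 0 (black).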