How to run pose estimation on a single image with TensorFlow Lite? - android

I recently used this great TensorFlow Lite sample on Android.
I can use the project correctly, but I want to estimate poses on single images too (not just in real-time mode). So I tried to reach my goal, but unfortunately I couldn't, and here is the code that doesn't work:
private fun runOnSimpleImage() {
    val detector = MoveNet.create(this, device, ModelType.Lightning)
    detector.let { detector ->
        simpleDetector = detector
    }
    simpleDetector?.estimatePoses(templateBitmap)?.let { persons ->
        VisualizationUtils.drawBodyKeypoints(
            templateBitmap,
            persons, false
        )
    }
    showOutputBitmap(templateBitmap)
}
I also searched and found this, but I couldn't solve my problem yet.
My result looks something like this:

Fortunately my code is not wrong; it works correctly and you can use it.
The problem was in the method I used to convert my drawable image to a Bitmap.
I used to use this code:
val drawable = ResourcesCompat.getDrawable(resources, R.drawable.resized, theme)
templateBitmap = (drawable as BitmapDrawable).bitmap
But when I changed it to something like this:
templateBitmap = BitmapFactory.decodeResource(resources, R.drawable.resized)
my problem was solved.
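For reference, here is the whole method with only that change applied. This is just a sketch that mirrors the code from the question; simpleDetector, templateBitmap, device and showOutputBitmap are assumed to be declared elsewhere as in the question:
private fun runOnSimpleImage() {
    // decodeResource() is the conversion that worked here,
    // instead of casting to BitmapDrawable and taking its bitmap
    templateBitmap = BitmapFactory.decodeResource(resources, R.drawable.resized)

    simpleDetector = MoveNet.create(this, device, ModelType.Lightning)
    simpleDetector?.estimatePoses(templateBitmap)?.let { persons ->
        VisualizationUtils.drawBodyKeypoints(templateBitmap, persons, false)
    }
    showOutputBitmap(templateBitmap)
}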

Related

Minimized video call and shared view with Zoom SDK

I'm trying to integrate Zoom SDK meetings into an Android app. I've struggled for a while now with using the custom meeting UI and learning how to use Zoom's video view, MobileRTCVideoView. Here's the interface I would like to create:
What I've tried:
Studied Zoom's sample apps on Github.
Studied Zoom's documentation for customized meeting ui.
Asked on the developer forum.
Read related threads on the developer forum.
However, I still don't understand how to implement it, and I would very much appreciate some explanation of how to use MobileRTCVideoView and how to achieve the meeting UI illustrated in the image. The meetings should only hold up to two users at a time.
I initialize the Zoom SDK with API Key and Secret, and use email login. I enable the custom meeting UI with:
zoomSDK!!.meetingSettingsHelper.isCustomizedMeetingUIEnabled = true
I start an instant meeting with:
val meetingService = zoomSDK!!.meetingService
val opts = InstantMeetingOptions()
opts.no_driving_mode = true
opts.no_invite = false
opts.no_meeting_end_message = false
opts.no_titlebar = false
opts.no_bottom_toolbar = false
opts.no_dial_in_via_phone = true
opts.no_dial_out_to_phone = true
opts.no_disconnect_audio = true
meetingService.startInstantMeeting(this, opts)
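For context, with the customized meeting UI enabled the SDK shows no meeting screen of its own, so the app has to notice when the meeting has actually started and show its own activity. Zoom's sample apps do this with a MeetingServiceListener. A minimal sketch, assuming the old Key/Secret SDK generation used in this question (the helper name and Context parameter are only illustrative, and newer SDK versions may add further callbacks to the listener):
import android.content.Context
import android.content.Intent
import us.zoom.sdk.MeetingServiceListener
import us.zoom.sdk.MeetingStatus
import us.zoom.sdk.ZoomSDK

// Register once (e.g. right after SDK initialization) so the app knows
// when to hand control over to the custom meeting UI.
fun listenForMeetingStart(context: Context) {
    ZoomSDK.getInstance().meetingService.addListener(object : MeetingServiceListener {
        override fun onMeetingStatusChanged(
            status: MeetingStatus?, errorCode: Int, internalErrorCode: Int
        ) {
            if (status == MeetingStatus.MEETING_STATUS_INMEETING) {
                // Launch the custom meeting activity shown further down.
                context.startActivity(
                    Intent(context, CustomMeetingActivity::class.java)
                        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
                )
            }
        }
    })
}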
I've tried to follow the sample apps by creating another activity for the custom meetings, but apparently the class and the code are not complete:
class CustomMeetingActivity : FragmentActivity() {
    private var zoomSDK: ZoomSDK? = null
    private var inflater: LayoutInflater? = null
    private var normal_view: View? = null
    private var video_view: MobileRTCVideoView? = null
    private var video_manager: MobileRTCVideoViewManager? = null
    private var meeting_service: MeetingService? = null
    private var in_meeting_service: InMeetingService? = null
    private var share_view: MobileRTCShareView? = null
    private var meeting_video_view: FrameLayout? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        zoomSDK = ZoomSDK.getInstance()
        meeting_service = ZoomSDK.getInstance().meetingService
        in_meeting_service = ZoomSDK.getInstance().inMeetingService
        if (meeting_service == null || in_meeting_service == null) {
            finish()
            return
        }
        setContentView(R.layout.custom_meeting_layout)
        inflater = layoutInflater
        normal_view = inflater!!.inflate(R.layout.meeting_content_normal, null)
        meeting_video_view = findViewById<View>(R.id.meetingVideoView) as FrameLayout
        share_view = findViewById<View>(R.id.sharingView) as MobileRTCShareView
        video_view = normal_view!!.findViewById<View>(R.id.videoView) as MobileRTCVideoView
    }
}
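Judging by Zoom's MyMeetingActivity sample, the part that appears to be missing after this is attaching normal_view to the FrameLayout and driving the video through the MobileRTCVideoViewManager. A rough sketch of how onCreate could continue; the class and method names follow the sample app, and the render percentages are only illustrative for a two-person call:
// Attach the inflated video layout and wire up the video manager,
// following the pattern in Zoom's MyMeetingActivity sample.
meeting_video_view!!.addView(
    normal_view,
    FrameLayout.LayoutParams(
        ViewGroup.LayoutParams.MATCH_PARENT,
        ViewGroup.LayoutParams.MATCH_PARENT
    )
)
video_manager = video_view!!.videoViewManager

// Video units are positioned and sized in percent of the video view.
// Here the active speaker fills the screen; once the second user joins and
// their userId is known (e.g. via InMeetingServiceListener), a small thumbnail
// can be added with addAttendeeVideoUnit(userId, renderInfo).
video_manager?.addActiveVideoUnit(MobileRTCVideoUnitRenderInfo(0, 0, 100, 100))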
Added the activity in the manifest:
<activity
    android:name="com.mypackage.appname.CustomMeetingActivity"
    android:configChanges="orientation|keyboardHidden|screenSize"
    android:theme="@style/ZMTheme.SubWindow">
</activity>
The solid advice I can give is:
Override or reuse their existing sample to get started (though their sample app looks like it was done in a rush).
Don't use their styles directly; override their styles and use those.
Scan/study MyMeetingActivity. Most of the heavy lifting is already done in it.
Check both of their samples. If you cannot figure out sharedView from MyMeetingActivity, then you haven't studied it hard enough.
I have worked a lot on this over the last few weeks. The customized UI is working well, and I am now looking at building the gallery view. We have loads of features and functionality that we added and reused. Overall it was a bumpy ride, but it went smoothly once I spent time on it.
I don't understand why this question is not yet answered. Unfortunately I am too busy to actually write the code out for you, especially since I am not even developing in Kotlin. Sorry. Hope you figure it out. If I implement the gallery view, then maybe I can come back and give you some pointers. Good luck!

AndroidX ExifInterface can read camera make/model but not lens make/model

I'm building an app which reads EXIF data from images and overlays that data on the image, so you can share your camera settings with a nice graphic rather than manually typing them out (e.g. "F/1.4 at 1/200 ISO400").
I'm using AndroidX ExifInterface 1.1.0-beta01, and the below code works to get every piece of data, except that LensMake and LensModel are always null.
I've tried reverting to ExifInterface 1.0.0 and that made no difference; it still behaves identically.
I note that the documentation for ExifInterface refers to LensMake and LensModel as returning an "ASCII String", whereas Camera Make and Camera Model just return a "String", so I've tried different variations of getAttribute without success.
These exact files work fine on the iOS version of the app I've previously built, and I've tried files from multiple different cameras (Fuji X-T3, Canon 5D III).
var stream: InputStream? = null
try {
    stream = contentResolver.openInputStream(uri)
    val exifInterface = ExifInterface(stream!!)
    FS = exifInterface.getAttribute(ExifInterface.TAG_F_NUMBER)!!
    SS = exifInterface.getAttribute(ExifInterface.TAG_EXPOSURE_TIME)!!
    ISO = exifInterface.getAttribute(ExifInterface.TAG_PHOTOGRAPHIC_SENSITIVITY)!!
    val LensMake = exifInterface.getAttribute(ExifInterface.TAG_LENS_MAKE) // THIS APPEARS TO BE ALWAYS NULL :(
    val LensModel = exifInterface.getAttribute(ExifInterface.TAG_LENS_MODEL) // THIS APPEARS TO BE ALWAYS NULL :(
    val CameraMake = exifInterface.getAttribute(ExifInterface.TAG_MAKE)
    val CameraModel = exifInterface.getAttribute(ExifInterface.TAG_MODEL)
} finally {
    stream?.close()
}
I'd like to be able to read the lens information; I know it's in the file, but this library doesn't seem to want to expose it.
There is an open bug filed on the issue tracker, which states that:
Although the constants are available for LensMake and LensModel, the getter does not return the actual values from the file. It seems like proper support is missing. I think the reason is that ExifTag[] IFD_EXIF_TAGS does not contain an array item for lens make and model. Adding the following lines in the right place in the aforementioned array seems to fix things:
new ExifTag(TAG_LENS_MAKE, 42035, IFD_FORMAT_STRING),
new ExifTag(TAG_LENS_MODEL, 42036, IFD_FORMAT_STRING),
Not sure how reliable this is, but it is at least a solution approach.
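Until a fix like that lands in a release, the practical app-side option is simply to tolerate the missing tags. A minimal Kotlin sketch; the helper name and fallback text are purely illustrative, not part of the library:
import androidx.exifinterface.media.ExifInterface

// Illustrative helper: prefer the lens tags, and fall back to the camera
// make/model (which ExifInterface does return) when they come back null.
fun lensDescription(exif: ExifInterface): String {
    val lens = listOfNotNull(
        exif.getAttribute(ExifInterface.TAG_LENS_MAKE),
        exif.getAttribute(ExifInterface.TAG_LENS_MODEL)
    ).joinToString(" ")
    if (lens.isNotEmpty()) return lens

    return listOfNotNull(
        exif.getAttribute(ExifInterface.TAG_MAKE),
        exif.getAttribute(ExifInterface.TAG_MODEL)
    ).joinToString(" ").ifEmpty { "Unknown lens" }
}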

How to capture a screenshot?

I know that there are a lot of questions about capturing screenshots, and I have checked most of them. They have the same answer (with small code variations).
I have the following method for capturing a screenshot:
@NonNull
public static Bitmap takeScreenShot(Window window) throws IOException {
    final View rootView = window.getDecorView().getRootView();
    final boolean drawingCacheEnabled = rootView.isDrawingCacheEnabled();
    rootView.setDrawingCacheEnabled(true);
    try {
        return Bitmap.createBitmap(rootView.getDrawingCache());
    } finally {
        rootView.setDrawingCacheEnabled(drawingCacheEnabled);
    }
}
And you can use it like this: takeScreenShot(getActivity().getWindow())
However, this approach has several limitations:
1. If you have dialogs on the screen, they will not be captured in the screenshot.
2. Will it work with hardware-accelerated views? According to the documentation:
When hardware acceleration is turned on, enabling the drawing cache has no effect on rendering because the system uses a different mechanism for acceleration which ignores the flag
3. The screenshot contains black boxes instead of GL views (e.g. when your app has maps). This seems to be a result of the 2nd point.
So my question is, is there any solution without rooting that can solve at least some of my issues?
Check out the following GitHub repo (not mine!): https://github.com/AndroidDeveloperLB/ScreenshotSample
Also, the following will be useful reading:
How to properly take a screenshot, globally?
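If you can require API 26+, one alternative worth mentioning (not from the linked repo, just a commonly used approach) is PixelCopy. It reads from the window's surface rather than the drawing cache, so hardware-accelerated views and SurfaceViews such as maps are included; dialogs still live in their own windows, though, so they are still missed. A minimal sketch:
import android.graphics.Bitmap
import android.os.Handler
import android.os.Looper
import android.view.PixelCopy
import android.view.Window

// Asynchronously copies the window's current frame into a Bitmap (API 26+).
fun takeScreenshot(window: Window, onCaptured: (Bitmap?) -> Unit) {
    val rootView = window.decorView.rootView
    val bitmap = Bitmap.createBitmap(rootView.width, rootView.height, Bitmap.Config.ARGB_8888)
    PixelCopy.request(window, bitmap, { result ->
        // Deliver the bitmap only if the copy succeeded.
        onCaptured(if (result == PixelCopy.SUCCESS) bitmap else null)
    }, Handler(Looper.getMainLooper()))
}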

Not Getting Thumb with Genres - Universal Music Player

I am using the UMP example provided by Google. I have not made any changes to the code and have not tried anything out of the box; I just imported the project into my workspace and checked it on my device, and found that I am not getting a thumb with Genres (Songs by genre) or the list of genres...
Whereas I am supposed to get the thumb from our JSON. Here is what I have tried (but with no success):
holder.mImageView.setImageBitmap(description.getIconBitmap());
UPDATE #1, as suggested by @NageshSusarla here:
holder.mTitleView.setText(description.getTitle());
holder.mDescriptionView.setText(description.getSubtitle());
AlbumArtCache cache = AlbumArtCache.getInstance();
Bitmap art = cache.getIconImage(url);
if (art == null) {
    cache.fetch(url, new AlbumArtCache.FetchListener() {
        @Override
        public void onFetched(String artUrl, Bitmap bitmap, Bitmap icon) {
            if (artUrl.equals(url)) {
                holder.mImageView.setImageBitmap(icon);
            }
        }
    });
} else {
    holder.mImageView.setImageBitmap(bitmap);
}
holder.mImageView.setImageBitmap(description.getIconBitmap());
and I'm getting Cannot resolve symbol 'url'.
The icon bitmap may not have been set. It's best to use the AlbumArtCache to fetch the icon and then set it on the ImageView. The url to be passed to AlbumArtCache.getInstance().fetch(url, ..) is description.getIconUri().toString().
The reason you may not be seeing it in uAmp is the tint being applied to it. You can remove the tint from media_list_item.xml to try out the changes.
Aside: this is indeed by design, and the icon is only shown at the bottom when a user selects the item to be played.
This is by design. To keep the navigation cleaner, we decided to not show the MediaItem icon on the local browsing UI. Other browsing UIs may show it, like Android Auto and Android Wear.
I think you should check the issues of android-UniversalMusicPlayer. Take a look at the comment given by mangini here.
If you want to change uAmp to show the MediaDescription.getIconUri, set the holder.mImageView at this point.

How to import a SpriteFont into MonoGame

I'm porting a simple Tetris-like XNA app to Android, using Mono for Android and MonoGame; I have followed the suggested steps in this link and so far everything compiles well and no relevant warnings fire up. However, upon loading the content, a null parameter exception breaks the program at the point below:
protected override void LoadContent() {
    // ...
    _font = Content.Load<Microsoft.Xna.Framework.Graphics.SpriteFont>("SpriteFont1");
    // ...
}
The content root directory is set in the game constructor class:
public Game2() {
    Content.RootDirectory = "Content";
    Content.RootDirectory = "Assets/Content"; // TEST.
    // ...
}
And I have tried several combinations, all to no avail.
I have also tried setting the .xnb files as Content as well as Android Assets in the Build Action property; having them linked, copied always, copied only if newer... etc.
Either way, my problem is that I don't really understand WHY and HOW I should do this. I'm rather new to the platform and to XNA as well, so this may very well be a newbie question, but the truth is that after several hours banging my head and fists against the monitor/keyboard I feel stuck and need your help.
I have a library that supports variable-width fonts (generated by BMFont) on MonoGame. Unfortunately it is a renderer and so has other code around it. However, the basic idea is very simple. You can take a look at the loader here and the mesh builder (given a string) here. This builder supports fonts that spread characters across multiple pages, too.
Hope this helps!
MonoGame (2.5.1) throws a NotImplementedException in ContentManager.Load for the SpriteFont type. I have the same unresolved problem, so for now I'm trying to avoid using DrawString.
For loading textures in Win32 application I use:
Content.RootDirectory = @"../../Content";
var sampleTexture = Content.Load<Texture2D>("Sample.png");
You don't even have to add it to the solution.
For an Android (MonoDroid) application you must add the "Content" folder to your solution and set "Android Asset" in the "Sample.png" properties:
Content.RootDirectory = "Content";
var sampleTexture = Content.Load<Texture2D>("Sample.png");
See also:
http://monogame.codeplex.com/discussions/360468
http://monogame.codeplex.com/discussions/267900
