I noticed there are a large number of player guides on YouTube that show a "heat map" or visual of typical user interaction with a touchscreen application, like this example:
http://www.youtube.com/watch?v=H5mVS1sEAZI
I have an Android application being used for research purposes, and we are already tracking (in a SQLite database) when and where users touch / interact with a video.
We would love to create a visualization of where and when users are touching the screen.
Are there any tools, APIs, etc. out there that anyone has seen for generating this kind of data visualization?
If not, is there any good way to take screenshots of the video / application at a moment in time when users touch the application?
For iOS you can use the https://heatma.ps SDK.
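For Android I'm not aware of an off-the-shelf equivalent, but since you are already logging touch coordinates in SQLite, you can render a basic heat map yourself. Below is a minimal Kotlin sketch, not tied to any SDK: it assumes a hypothetical TouchPoint list already read from your database and draws translucent radial blobs onto a transparent bitmap that you can overlay on a screenshot or a video frame.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RadialGradient
import android.graphics.Shader

// Hypothetical row shape: one record per touch, as stored in the SQLite table.
data class TouchPoint(val x: Float, val y: Float)

/**
 * Draws a translucent "heat" blob for every touch point onto a transparent
 * bitmap the size of the screen (or video frame). Overlapping blobs
 * accumulate, so frequently touched areas appear "hotter".
 */
fun renderHeatmap(points: List<TouchPoint>, width: Int, height: Int, radius: Float = 60f): Bitmap {
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG)

    for (p in points) {
        // Red centre fading to transparent at the edge of each blob.
        paint.shader = RadialGradient(
            p.x, p.y, radius,
            Color.argb(80, 255, 0, 0), Color.TRANSPARENT,
            Shader.TileMode.CLAMP
        )
        canvas.drawCircle(p.x, p.y, radius, paint)
    }
    return bitmap
}
```

For the "screenshot at the moment of touch" part, one simple option is to draw the root view into a bitmap-backed Canvas (view.draw(canvas)) inside your touch listener and save it with the same timestamp as the touch row, so the overlay and the frame can be matched up later.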
I have an architecture question. I have 10 Android apps and I want to create a cross-promotion system across them, which means that every time a user opens one of those apps, they will see an interstitial ad that promotes another app.
In a very basic architecture, all I did was create an AWS database that contains the URLs of the other apps and the ads in mp4 format.
Then, when a user opens an app, a class randomly chooses an ad from the AWS database and shows it to the user: it loads the mp4 video and displays it using Android's SurfaceView class.
I'm currently facing two major issues:
The buffering before the ad plays is very long, unlike ads I see in other apps, which load a video ad within seconds.
The bandwidth used from AWS is very high, because the video is downloaded every time the user opens an app.
Does anyone have suggestions for how I can improve my architecture and solve these two problems?
I'm no expert on video or AWS, so I can't give specific advice there.
Some random advice, in no particular order:
Can you preload the ads onto the devices, so that the app just picks an ad and displays it without having to stream it live (see the sketch after this list)? Your problem then becomes getting the ads onto the devices, but without the same user-impacting pressure. You could also ship a couple of default videos with the app install, so that first views are covered while the more up-to-date ads are acquired.
Are the other ads you are comparing to actually mp4?
Have you tried testing using different devices, networks, etc?
Have you tried hosting the video on non-AWS platforms / locations?
Is there a reference architecture or implementation you can use to validate your approach?
Have you done background research into how best to stream/download mp4 content to devices and play it efficiently? E.g. formats, sizes, quality settings, etc., for AWS, the devices your app runs on, the tech stack, and the player you're using. I'm thinking here (not related to your issue) of the kind of advice YouTube gives about video quality for uploading and processing.
Is your SurfaceView set up to start playing as soon as it has buffered enough, or is it doing a full download first (maybe you have a misconfiguration)?
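On the preloading point (first item above), here is a rough Kotlin sketch of a "download once, play from local storage" cache. Everything in it is illustrative: the class and file names are made up, and a production version would want retries, expiry and a background scheduler, but it shows the idea that each mp4 is fetched from AWS at most once per device and then played as a local file, so there is no streaming buffer at ad time.

```kotlin
import java.io.File
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical helper: returns a local copy of the ad, downloading it only
// if it is not already cached. Call this off the main thread.
class AdCache(private val cacheDir: File) {

    fun localCopyOf(adUrl: String): File {
        val cached = File(cacheDir, adUrl.hashCode().toString() + ".mp4")
        if (cached.exists()) return cached   // already on the device: no AWS traffic

        val tmp = File(cached.path + ".part")
        val connection = URL(adUrl).openConnection() as HttpURLConnection
        try {
            connection.inputStream.use { input ->
                tmp.outputStream().use { output -> input.copyTo(output) }
            }
            tmp.renameTo(cached)   // only publish the file once fully downloaded
        } finally {
            connection.disconnect()
        }
        return cached
    }
}
```

Usage sketch: resolve the ad to a local file off the main thread when the app starts, then point your existing player at it (e.g. MediaPlayer with setDataSource(file.path), rendering into your SurfaceView); playback starts almost immediately because nothing is streamed over the network at display time.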
Background
I am building an Optical Character Recognition (OCR) tool that makes sense of photographed Forms.
Arguably the most complicated part of the pipeline is to get the target Document into perspective; basically what is attempted in this Tutorial.
Because the data is often acquired in very poor conditions, i.e.:
Uncontrolled Brightness
Covered or removed corners
Background containing more texture than the Target Document
Shadows
Overlapped Documents
I have "solved" the Problem using Instance + Semantic Segmentation.
Situation
The images are uploaded by Users via an App that captures images as is. There are versions of the App for both Android and iOS.
Question
Would it be possible to force the App to use the Documents mode of the user's phone camera (if present) prior to acquiring the photo?
The objective is to simplify the pipeline.
In effect, at a high level, the App would have to do three things:
1 - Activate the Documents mode
2 - Outline the Target Document; if possible even showing the yellow frame.
3 - Upload the processed file to the server. Orientation and extension are not important.
iOS
This isn't a "mode" for the native camera app.
Android
There isn't a way to have the "documents mode" automatically selected. It isn't available on all Android devices, either, so even if you could, it wouldn't be reliable.
Your best bet is to follow the documentation for Building a camera app, rather than using the native camera, if special document scanning is essential. This won't come out of the box on either platform for you.
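If you do build your own capture screen, the "outline the Target Document" step (point 2 in the question) can be approximated on-device with the OpenCV Android SDK. The Kotlin sketch below is only a rough illustration of the classic contour approach (grayscale, blur, edges, largest four-cornered contour); the thresholds and preprocessing would need tuning for the poor conditions you describe, and it assumes OpenCV is already initialised in the app.

```kotlin
import org.opencv.core.Mat
import org.opencv.core.MatOfPoint
import org.opencv.core.MatOfPoint2f
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc

/**
 * Returns the four corner points of the largest quadrilateral contour,
 * or null if no document-like shape is found. Input is a BGR Mat.
 */
fun findDocumentOutline(image: Mat): MatOfPoint2f? {
    val gray = Mat()
    Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY)
    Imgproc.GaussianBlur(gray, gray, Size(5.0, 5.0), 0.0)

    val edges = Mat()
    Imgproc.Canny(gray, edges, 75.0, 200.0)

    val contours = ArrayList<MatOfPoint>()
    Imgproc.findContours(edges, contours, Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE)

    var best: MatOfPoint2f? = null
    var bestArea = 0.0
    for (contour in contours) {
        val curve = MatOfPoint2f(*contour.toArray())
        val approx = MatOfPoint2f()
        val epsilon = 0.02 * Imgproc.arcLength(curve, true)
        Imgproc.approxPolyDP(curve, approx, epsilon, true)

        // Keep only 4-cornered shapes; pick the largest by area.
        if (approx.total() == 4L) {
            val area = Imgproc.contourArea(approx)
            if (area > bestArea) {
                bestArea = area
                best = approx
            }
        }
    }
    return best   // feed these corners into getPerspectiveTransform / warpPerspective
}
```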
I have been assigned a task to build an Android application for a mobile phone that has an Augmented Reality keyboard.
When a user points the phone towards a surface, they should be able to view this keyboard on their phone and should be able to type on it.
This data will then be displayed to the user on the screen.
Any idea how this can be achieved?
Mixed Reality Toolkit has a similar solution implemented for HoloLens. You can start by looking into it: https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_SystemKeyboard.html
You can download the project and check the source code from here.
Additionally, ARFoundation coupled with Unity can give you a jump start: most of the AR functionality, like detecting a ground plane to put your keyboard on, is accessible through its API.
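If you go the native Android route instead of Unity, the equivalent "find a surface to anchor the keyboard to" step with ARCore looks roughly like the sketch below. This is only the plane-detection part; session setup, rendering and the keyboard model itself are omitted, and it assumes an ARCore Session is already configured and producing frames.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

/**
 * Looks through the planes ARCore is currently tracking and anchors the
 * virtual keyboard to the first horizontal, upward-facing one it finds.
 * Returns null while no suitable surface has been detected yet.
 */
fun anchorKeyboardToSurface(frame: Frame): Anchor? {
    val planes = frame.getUpdatedTrackables(Plane::class.java)
    for (plane in planes) {
        if (plane.trackingState == TrackingState.TRACKING &&
            plane.type == Plane.Type.HORIZONTAL_UPWARD_FACING
        ) {
            // The anchor keeps the keyboard fixed to the real-world surface
            // as tracking improves; render the keyboard at anchor.pose.
            return plane.createAnchor(plane.centerPose)
        }
    }
    return null
}
```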
I'm trying to develop an app that can recognize an active object (for example: a memory) that touches the smartphone display. Before I start developing, I have to know whether there are any objects that a touchscreen display can recognize. Which devices can be recognized by a smartphone display? I'm interested in this for both iPhone and Android phones.
I found this app, and you can see that with a card I can interact with a mobile device. Now I'm asking whether anyone knows how to build this kind of app for an iPhone or an Android phone.
Does anyone know how to do that? Is there a library (iOS or Android) to recognize objects that I put on the display?
Volumique is the company that develops the Monopoly card technology you are talking about. However, I will suggest two things.
For Android devices you can use NFC. It's similar to what you are doing right now, but you just need to bring your object close to the screen; there is no need to actually touch it.
For iOS, there is no NFC or RFID technology available. However, you can build hardware with active capacitors arranged in a pattern on it, so that when you bring the object close to the iOS screen, the touch controller recognizes the pattern of the capacitors and reports it to the main controller, which can identify the object with the help of your code. Basically, the capacitive touchscreens used in iPhones are just an array of capacitors arranged in a grid. When you touch the screen with your finger, you change the capacitance of one or two capacitors and the controller works out the location of the change. However, if you change the capacitance of, say, 5 or 6 sensors at the same time in a particular arrangement, like a pentagon, you can write software so that if the locations of the sensors whose capacitance has been changed by the external object form the shape of a pentagon, the app reports that it is a $5 card (just an example). This is one way I can think of doing this.
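To give a feel for the capacitive-pattern idea on Android, where you control the app and can read the raw touch points yourself, here is a rough Kotlin sketch. It assumes the object presses several conductive feet onto the screen at once, so each foot shows up as its own pointer in the MotionEvent; the actual shape matching (pentagon, triangle, etc.) is left to a callback you would implement.

```kotlin
import android.graphics.PointF
import android.view.MotionEvent
import android.view.View

// Collects all simultaneous touch points; with an object that has several
// conductive feet, each foot appears as its own pointer.
class ObjectPatternListener(
    private val onPattern: (List<PointF>) -> Unit
) : View.OnTouchListener {

    override fun onTouch(v: View, event: MotionEvent): Boolean {
        if (event.actionMasked == MotionEvent.ACTION_DOWN ||
            event.actionMasked == MotionEvent.ACTION_POINTER_DOWN
        ) {
            // Need at least 3 simultaneous contacts to describe a shape.
            if (event.pointerCount >= 3) {
                val points = (0 until event.pointerCount)
                    .map { PointF(event.getX(it), event.getY(it)) }
                // Hand the raw contact points to whatever shape-matching
                // logic identifies the object (e.g. distances between feet).
                onPattern(points)
            }
        }
        return true
    }
}
```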
Thanks
I am thinking of using Flex4/Air for developing an Android app. I want the app to have a hover-to-play-like ability. I mean if a video thumbnail is selected from a list of videos but not clicked, it should play a 5-second clip, just like on bing.com/videos. I am assuming this is the closest we come to "hover" on Android devices - please correct me if this is not the case.
Does Flex4/Air have this capability? If not, can we implement hover-to-play on Android devices?
Appreciate any help/pointers.
Does Flash Builder 4.5 have this capability? No! I think you meant to ask whether Flex 4.5 has this capability, but the question would be best stated: "Does AIR for Android have this capability? If so, how can I access it in Flex?"
When developing code for mobile devices, I would take great care when implementing functionality based on a "hover" approach. However, you can take a look at these touch events:
TOUCH_OVER
TOUCH_ROLL_OVER
I thought a Long Press / Long Touch event may be what you need, but I couldn't find any documented one supported by AIR's touch API.
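Outside of Flex/AIR, for the "otherwise, can we implement this on Android" part of the question, the usual native approximation of hover-to-play is a long press. A minimal Kotlin sketch, assuming hypothetical startPreview/stopPreview callbacks that play and stop the 5-second clip (e.g. in a small VideoView laid over the thumbnail):

```kotlin
import android.view.MotionEvent
import android.view.View

// Hypothetical hooks: startPreview plays the 5-second clip for this thumbnail;
// stopPreview stops and hides it again.
fun attachHoverToPlay(
    thumbnail: View,
    startPreview: () -> Unit,
    stopPreview: () -> Unit
) {
    // A long press stands in for "hover": the thumbnail is selected
    // (finger resting on it) but not yet clicked.
    thumbnail.setOnLongClickListener {
        startPreview()
        true
    }
    // Stop the preview when the finger lifts or the gesture is cancelled;
    // returning false leaves the normal click handling untouched.
    thumbnail.setOnTouchListener { _, event ->
        if (event.actionMasked == MotionEvent.ACTION_UP ||
            event.actionMasked == MotionEvent.ACTION_CANCEL
        ) {
            stopPreview()
        }
        false
    }
}
```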