I have an architecture question. I have 10 Android apps and I want to build a cross-promotion system across them, meaning every time a user opens one of those apps, they see an interstitial ad that promotes another of my apps.
In my very basic architecture, all I did was create an AWS database that contains the URLs of the other apps and the ads in MP4 format.
Then, when a user opens an app, a class randomly chooses an ad from the AWS DB and shows it to the user: it loads the MP4 video and displays it using Android's SurfaceView.
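Roughly, the playback side boils down to something like this (a minimal Kotlin sketch with made-up names, not my real code):

    // Minimal sketch of the current flow: pick a random ad record and stream its MP4
    // straight from AWS into a MediaPlayer rendering onto the SurfaceView.
    import android.media.MediaPlayer
    import android.view.SurfaceHolder

    data class CrossPromoAd(val appUrl: String, val videoUrl: String)

    class InterstitialAdPlayer(private val holder: SurfaceHolder) {

        private var player: MediaPlayer? = null

        fun show(ads: List<CrossPromoAd>) {
            val ad = ads.random()                    // randomly chosen from the records in the AWS DB
            player = MediaPlayer().apply {
                setDisplay(holder)                   // render onto the SurfaceView
                setDataSource(ad.videoUrl)           // stream the MP4 over the network
                setOnPreparedListener { it.start() } // start once prepared
                prepareAsync()                       // don't block the UI thread
            }
        }

        fun release() {
            player?.release()
            player = null
        }
    }

So the MP4 is streamed from AWS every single time an interstitial is shown.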
I'm currently facing two major issues:
The buffering before the ad plays is very long, unlike the ads I see in other apps, which load a video ad within seconds.
The bandwidth being used from AWS is very high, because every time a user opens an app the video is downloaded again.
Does anyone have suggestions on how I can improve my architecture and solve these two problems?
I'm no expert on video or AWS, so I can't give specific advice on those.
Some general advice, in no particular order:
Can you preload the ads onto the devices, so that the app just picks an ad and displays it without having to stream it live? Your problem then becomes one of getting the ads onto the devices, but you don't have the same "user impacting" pressures. You could perhaps ship a couple of default videos with the app install, so that first views can be handled while the more up-to-date ads are acquired. (A rough sketch of this idea follows after these questions.)
Are the other ads you are comparing to actually mp4?
Have you tried testing using different devices, networks, etc?
Have you tried hosting the video on non-AWS platforms / locations?
Is there a reference architecture or implementation you can refer to to validate your approach?
Have you done the background research into how best to stream/download mp4 content to devices, and play it most efficiently? E.g. formats, sizes, quality settings, etc - for AWS, the devices your app is on, the tech stack you're using, the player you're using? I'm thinking here (not related to your issue) about the kind of advice that YouTube gives in terms of video quality for uploading & processing, etc.
Is your SurfaceView set up to play as soon as it has a buffer ready to go, or is it doing a full download first (maybe there is a misconfiguration)?
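To make the first point (preloading) concrete, here is a rough Kotlin sketch of the idea; the class and names are invented, and a real version would want retries, cache eviction, versioning of the ads, and probably a download manager or WorkManager instead of a bare thread:

    // Rough sketch: download each ad once into the app's cache directory in the background,
    // then play the local copy instead of streaming from AWS on every app open.
    import java.io.File
    import java.net.HttpURLConnection
    import java.net.URL
    import kotlin.concurrent.thread

    class AdPrefetcher(private val cacheDir: File) {

        /** Returns a local copy of the ad, downloading it only if it isn't cached yet. */
        fun localCopyOf(videoUrl: String): File {
            val target = File(cacheDir, videoUrl.hashCode().toString() + ".mp4")
            if (!target.exists()) {
                val connection = URL(videoUrl).openConnection() as HttpURLConnection
                connection.inputStream.use { input ->
                    target.outputStream().use { output -> input.copyTo(output) }
                }
                connection.disconnect()
            }
            return target
        }

        /** Fetch everything ahead of time (e.g. shortly after app start), off the UI thread. */
        fun prefetchAll(videoUrls: List<String>) {
            thread { videoUrls.forEach { runCatching { localCopyOf(it) } } }
        }
    }

When the interstitial is shown you would then pass localCopyOf(ad.videoUrl).path to the player instead of the remote URL, so playback starts immediately and the AWS bandwidth is paid at most once per ad per device. A player with a built-in caching data source (ExoPlayer, for example) could get you something similar with less hand-rolled code.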
Related
I need to develop an app that can make a video call between client A and client B, while client B also sends 4K video to a server to be processed by some computer vision algorithms (which need really high-quality video to perform correctly).
connection diagram
As far as I understand, and from my experience working with cameras on a PC, when the camera is being used by one process the resource gets locked and cannot be used by another. So I was thinking about creating two fake cameras, which I know how to do on a computer, but I'm not sure it can be done on an Android phone, especially without root permission.
Also, while I would prefer to develop in a framework that lets me write code in JavaScript, like React Native or Ionic, I'm not constrained to a specific technology, so I'm open to developing it in Kotlin if that gives me more control over the camera API.
Edit 1: (trying to give more detail, as @Vidz requested)
I want to send video (from client B) to just one service that estimates some variables from the video. But at the same time, I need to use the same video (at a lower quality) for a video call between this user (B) and another type of user (client A) who doesn't send video to the server. (I was thinking about something like WebRTC for the video call between the clients.)
Please see the image above for a clearer picture of the problem.
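From what I've read so far, it might not need two fake cameras on Android: a single Camera2 capture session can apparently feed two output surfaces from the same sensor, one high-resolution surface for the encoder that uploads to the vision service and one lower-resolution surface for the call. Whether a given phone can actually sustain 4K plus a second stream depends on its supported stream combinations. A rough, hypothetical Kotlin sketch of what I have in mind (surface creation and the upload/WebRTC plumbing are left out):

    // Rough sketch: one Camera2 capture session feeding two surfaces.
    // Requires the CAMERA runtime permission; all names are made up.
    import android.content.Context
    import android.hardware.camera2.CameraCaptureSession
    import android.hardware.camera2.CameraDevice
    import android.hardware.camera2.CameraManager
    import android.os.Handler
    import android.view.Surface

    class DualStreamCamera(private val context: Context) {

        fun start(
            highResEncoderSurface: Surface, // e.g. MediaCodec.createInputSurface() configured for 4K
            callSurface: Surface,           // e.g. the lower-resolution surface the call SDK captures from
            handler: Handler
        ) {
            val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
            val cameraId = manager.cameraIdList.first() // in practice, pick by CameraCharacteristics

            manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
                override fun onOpened(camera: CameraDevice) {
                    val targets = listOf(highResEncoderSurface, callSurface)
                    camera.createCaptureSession(targets, object : CameraCaptureSession.StateCallback() {
                        override fun onConfigured(session: CameraCaptureSession) {
                            val request = camera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
                                addTarget(highResEncoderSurface)
                                addTarget(callSurface)
                            }
                            // Both surfaces now receive frames from the same sensor.
                            session.setRepeatingRequest(request.build(), null, handler)
                        }
                        override fun onConfigureFailed(session: CameraCaptureSession) { /* handle error */ }
                    }, handler)
                }
                override fun onDisconnected(camera: CameraDevice) = camera.close()
                override fun onError(camera: CameraDevice, error: Int) = camera.close()
            }, handler)
        }
    }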
I have a server that will be producing images, and I would like the phone to subscribe to the API, so to speak, and update its background / lock-screen image whenever the API serves a new image, on a live, ongoing basis, e.g. every five seconds.
No interactivity. Just showing each static image live as it is produced.
I imagine this cannot be done natively within the OS (?), but could an app, authorised by Apple and Google, be created to serve that purpose?
Spotify does something like this when playing music, displaying the current album cover. Ideally the solution would have no visible buttons: just the current image, updated.
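On Android at least, I imagine the app would boil down to something like the rough sketch below (the endpoint URL is made up; as far as I know iOS does not let a third-party app change the lock-screen image, so this would only cover the Android half):

    // Rough sketch: poll the image endpoint and push each new image to the home and lock screens.
    // Needs the SET_WALLPAPER permission; the lock-screen flag requires API 24+.
    import android.app.WallpaperManager
    import android.content.Context
    import android.graphics.BitmapFactory
    import java.net.URL
    import kotlin.concurrent.thread

    class LiveWallpaperPoller(private val context: Context) {

        @Volatile private var running = false

        fun start(imageEndpoint: String = "https://example.com/latest.jpg", intervalMs: Long = 5_000) {
            running = true
            thread {
                val manager = WallpaperManager.getInstance(context)
                while (running) {
                    runCatching {
                        val bytes = URL(imageEndpoint).readBytes()
                        val bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
                        manager.setBitmap(
                            bitmap, null, true,
                            WallpaperManager.FLAG_SYSTEM or WallpaperManager.FLAG_LOCK
                        )
                    }
                    Thread.sleep(intervalMs)
                }
            }
        }

        fun stop() { running = false }
    }

A real app would presumably use a foreground service or a push channel rather than a bare polling thread, but the core of it is just "download bitmap, call setBitmap".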
I noticed there are a large number of player guides on YouTube that show a "heat map" or visual of typical user interaction with a touchscreen application, like this example:
http://www.youtube.com/watch?v=H5mVS1sEAZI
I have an Android application being used for research purposes, and we are already tracking (in an SQLite database) when and where users touch / interact with a video.
We would love to create a visualization of where and when users are touching the screen.
Are there any tools, APIs, etc. out there that anyone has seen for generating this kind of data visualization?
If not, is there any good way to take screenshots of the video / application at a moment in time when users touch the application?
For iOS you can use the https://heatma.ps SDK.
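For Android, if no off-the-shelf tool turns up, one fairly simple option is to render the overlay yourself from the touch points you already log in SQLite: draw each touch as a translucent circle onto a transparent bitmap and composite it over a screenshot or video frame. A rough Kotlin sketch (the table and column names are hypothetical):

    // Rough sketch: build a translucent "heat" overlay from logged touch points.
    // Overlapping low-alpha circles accumulate, so frequently touched areas come out denser.
    import android.database.sqlite.SQLiteDatabase
    import android.graphics.Bitmap
    import android.graphics.Canvas
    import android.graphics.Color
    import android.graphics.Paint

    fun buildTouchHeatOverlay(db: SQLiteDatabase, width: Int, height: Int): Bitmap {
        val overlay = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
        val canvas = Canvas(overlay)
        val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
            color = Color.RED
            alpha = 40                      // low alpha so repeated touches add up
        }

        db.rawQuery("SELECT x, y FROM touches", null).use { cursor ->
            while (cursor.moveToNext()) {
                canvas.drawCircle(cursor.getFloat(0), cursor.getFloat(1), 48f, paint)
            }
        }
        return overlay                      // composite this over a screenshot or video frame
    }

Adding a timestamp filter to the query would let you build per-scene or per-time-window overlays, which gets close to the YouTube-style heat maps in the linked video.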
Long version:
I have a very particular issue. I'm a multimedia artist currently working together with an animator; we are trying to create an interactive animation that I want to make available online as a website and as a free app on the App Store and the Android Market.
But here's the key problem I am faced with now.
The output video of the actual animation will be massive in resolution - probably something like four or more times HD resolution, but for a reason. The idea is to let the viewer watch only a part of the video at any one time - like panning around in Google Maps or any other canvas-like view (e.g. an MMORPG or strategy game). So you only see a part of the whole image at one time, and you can then move around to see what's "behind the corner".
So the animation would be a Google Maps-alike canvas (panning and perhaps zooming if there's enough time to implement it) but with video instead of images.
The big problem that comes up is performance. I was thinking the best way to make it run would be to scale the video down for different devices accordingly. But even just considering desktop computers for now, scaling down to 720p for an HD screen means a total of about four times 720p in resolution, which is probably too much for an average computer to decode (Full HD is quite often already problematic), and the output resolution would be more than the 4K standard (5120 by 2880, whilst 4K is 4096x2160). Anyhow, that is unacceptable.
But I reached the conclusion that there is really no point in decoding and rendering the parts of the video which are invisible to the user anyway (why waste the CPU+GPU time for that) - since we know that only about 1/6th of the full canvas would be visible at any given time.
This inspired the idea of splitting the output video into blocks - somewhere between 8 and 64 files arranged side by side like cells in a table - then keeping a master timecode running in a shared variable and enabling the video blocks on demand. As the user drags a block into view, it would automatically start playback of that file at the timecode read from the global variable. Some heuristics could anticipate the user's movement and activate not-yet-visible blocks early, to hide any delay caused by seeking within a video and starting playback. Blocks that are no longer visible could deactivate themselves after a certain amount of time.
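To make the block idea more concrete, this is roughly what I imagine the controller doing, sketched in Kotlin as if it were the native app version (all names, the visibility test and the per-tile surface wiring are invented; the point is just the shared timecode):

    // Conceptual sketch: one master clock, many per-tile players that are only created,
    // seeked to the master timecode and started while their tile is (about to be) visible.
    import android.media.MediaPlayer
    import android.os.SystemClock

    class VideoTileController(private val tileUrls: Map<Int, String>) {

        private val activePlayers = mutableMapOf<Int, MediaPlayer>()
        private var masterStartMs = 0L

        fun startMasterClock() { masterStartMs = SystemClock.elapsedRealtime() }

        /** The global timecode, in milliseconds, shared by every tile. */
        private fun masterTimecodeMs(): Int = (SystemClock.elapsedRealtime() - masterStartMs).toInt()

        /** Call whenever the viewport moves; include tiles predicted to become visible soon. */
        fun updateVisibleTiles(visibleTiles: Set<Int>) {
            // Activate newly visible tiles at the current master timecode.
            (visibleTiles - activePlayers.keys).forEach { tile ->
                val player = MediaPlayer()
                player.setDataSource(tileUrls.getValue(tile))
                player.setOnPreparedListener {
                    it.seekTo(masterTimecodeMs())
                    it.start()
                }
                player.prepareAsync()
                activePlayers[tile] = player
            }
            // Deactivate tiles that scrolled out of view (a real version might delay this a bit).
            (activePlayers.keys - visibleTiles).forEach { tile ->
                activePlayers.remove(tile)?.release()
            }
        }
    }

I assume the keyframe interval of each block would need to be fairly short so that seeking to the master timecode lands close enough, but that is a separate encoding question.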
So my first step was to see what my platform choices are, and it really comes down to:
HTML5 with JavaScript (heavily using the <video> tag)
Adobe Flash (using Flash Builder to deploy the apps to all the different devices)
And HTML5 would really be preferable.
So I did some research to see whether it is at all possible to synchronize more than one video at a time in HTML5. Unfortunately it's far from perfect: there are two available "hacks" which work well in Firefox but are buggy in WebKit (the videos often get out of sync by more than a few frames, sometimes even up to half a second, which would be quite visible if it were a single video split into blocks/segments). Not to mention that I have not even tried it on mobile platforms (Android / iOS) yet.
These two methods/hacks are Rick Waldron's sync as shown here:
http://weblog.bocoup.com/html5-video-synchronizing-playback-of-two-videos/
And the other one, also developed by Rick is the mediagroup.js (this one doesn't work in Chrome at all):
https://github.com/rwldrn/mediagroup.js
My test here: http://jsfiddle.net/NIXin/EQbAx/10/
(I've hidden the controller, because it always plays back earlier than the rest of the clips for some reason.)
So, after explaining all that, I would really appreciate any feedback: what would be the best way of solving this problem, and on which platform? Is HTML5 mature enough?
Short version:
If I still haven't made it clear what I need: think of a video zoomed in to 600%, so that you can't see everything (some parts are off-screen) and you need to pan around by dragging with your mouse (or flicking your finger on mobile devices) to see what's going on in different places of the video. How could I do that (and have the video run smoothly) across platforms, while retaining the high quality and resolution of the video?
Thanks a lot, let me know if you need any more details or any clarification of the matter.
Does Android have the software capability, if a phone has video-out, to open or push content solely to the video-out?
So, for example, if the user clicks a YouTube link, the app, instead of opening the content on the main screen over the app, would push it to the video-out, so the YouTube video would display on their connected display and they could continue to browse.
I know Motorola phones have the Webtop software, and that idea is similar to what I am trying to accomplish, but on a much more basic level. It's more similar to Apple's AirPlay, but again much less complex (without a network/external player - just video-out).
Or, if even that is too complex, an even simpler solution: have the video-out keep outputting even when the phone is locked. Currently the video-out mirroring on both my HTC Incredible and Galaxy Nexus stops when the phone is locked.
EDIT:
I've noticed while using my phone that when playing a video through the Google Videos app, the controls (play, pause, the seek bar, and the soft buttons) are overlaid on the phone's screen, but the video-out display (television) plays the video continuously/seamlessly without any of the controls overlaid. This is a very primitive example of what I'm ultimately alluding to, but it does show a real-world example of an Android device (with no third-party manufacturer software) doing video-out that isn't exactly mirroring.
Well... I hate to furnish this as an answer, but it really looks like there's simply nothing in the API for it at all...
http://groups.google.com/group/android-developers/browse_thread/thread/9e3bcd1eea2c379
which just redirects to:
https://groups.google.com/forum/#!topic/android-developers/Jxp_9ZtzL60
I'm definitely going to favorite this question, and hope someone chimes in with something more useful than "doesn't look like it's possible", though that does appear to be the correct answer at this time.
Much like Dr.Dredel has mentioned, there is currently nothing for multiple displays in terms of display 1 showing 'A' and display 2 showing 'B'.
There is support for multiple screen sizes per the following:
http://developer.android.com/guide/practices/screens_support.html#support
This will be the case for a little while longer until someone creates the support for it.