Does Android have the software capability, if a phone has video-out, to open or push content solely to the video out?
For example, if the user is in an app and clicks on a YouTube link, then instead of opening the content on the main screen over the app, it would be pushed to the video out, so the YouTube video would display on the connected display while the user continues to browse.
I know Motorola phones have the Webtop software, and that idea is similar to what I am trying to accomplish, but on a much more basic level. It's more similar to Apple's AirPlay, but again much less complex (no network or external player - just video out).
Or, if even that is too complex, an even simpler solution would be having the video out still output while the phone is locked. Currently, video-out mirroring on both my HTC Incredible and Galaxy Nexus stops when the phone is locked.
EDIT:
I've noticed that when playing a video through the Google Videos app, the controls (play, pause, the seek bar, and the soft buttons) overlay on the phone's screen, but the video-out display (the television) plays the video continuously and seamlessly without any of the controls overlaid. This is a very primitive example of what I'm ultimately alluding to, but it does show a real-world case of an Android device (with no third-party manufacturer software) doing video out that isn't exactly mirroring.
Well... I hate to furnish this as an answer, but it really looks like there's simply nothing in the API for it at all...
http://groups.google.com/group/android-developers/browse_thread/thread/9e3bcd1eea2c379
which just redirects to:
https://groups.google.com/forum/#!topic/android-developers/Jxp_9ZtzL60
I'm definitely going to favorite this question, and hope someone chimes in with something more useful than "doesn't look like it's possible", though that does appear to be the correct answer at this time.
As Dr.Dredel has mentioned, there is currently nothing for multiple displays in terms of display 1 showing 'A' and display 2 showing 'B'.
There is support for multiple screen sizes per the following:
http://developer.android.com/guide/practices/screens_support.html#support
This will be the case for a little while longer until someone creates the support for it.
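For anyone finding this later: Android 4.2 (API 17) eventually added exactly this kind of support through the android.app.Presentation class, which renders its own content on a secondary display while the activity stays interactive on the phone screen. A minimal sketch (the video URI is a placeholder for whatever content you want to push out):

```java
import android.app.Presentation;
import android.content.Context;
import android.net.Uri;
import android.os.Bundle;
import android.view.Display;
import android.widget.VideoView;

// Requires API 17+: plays a video on the external display only,
// leaving the phone screen free for the rest of the app.
public class VideoOutPresentation extends Presentation {
    private final Uri videoUri; // placeholder for your own content

    public VideoOutPresentation(Context outerContext, Display display, Uri videoUri) {
        super(outerContext, display);
        this.videoUri = videoUri;
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        VideoView videoView = new VideoView(getContext());
        videoView.setVideoURI(videoUri);
        setContentView(videoView);
        videoView.start();
    }
}
```

You would then pick the external display from your activity via DisplayManager, along the lines of:

```java
// In your Activity (API 17+), assuming videoUri points at your content:
DisplayManager dm = (DisplayManager) getSystemService(Context.DISPLAY_SERVICE);
Display[] external = dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION);
if (external.length > 0) {
    new VideoOutPresentation(this, external[0], videoUri).show();
}
```

This doesn't help on the pre-4.2 devices mentioned above, of course, where mirroring remains the only behaviour.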
In Android Studio we can both capture and record the screen on our devices. When capturing the screen, we have the option to frame the screenshot directly in device art, or to use the online tool: http://developer.android.com/distribute/tools/promote/device-art.html
When recording the screen, there is no option to frame it in device art, and Google doesn't seem to provide an online option for that either.
What would be the fastest and easiest way for developers to showcase a screen recording framed in device art, i.e. to produce a new video with the device art wrapped around the screen recording?
Share your best tip. I would love a fast, free online service that solves this, and I'm guessing there is one out there.
I've spent a substantial amount of time researching this, but could not find a service for it either.
I have, however, written a guide on how to record the screen, add a device frame, and convert the result to a GIF, which you can find here:
http://appdictive.dk/blog/how-to/2015/04/20/google_plus_gif/
The short version of it is:
Record the screen
Get a device frame picture
Use Photoshop or other video editing software to add the frame as a static picture over the video (the guide briefly shows how to do this in Photoshop)
Hope this helps others searching for a solution to this as well.
There are very few services I've come across on the web that let you simply upload a video into a device art frame, especially free ones. However, if you have video editing software such as Adobe After Effects, you could import images and videos and create this yourself, albeit at added cost and time compared with the option above.
Despite the above, I've managed to utilise Google's Device Art Generator page to extract the device frames it offers quite easily. All you need to do is create a solid background colour in Adobe Photoshop (say, bright green), upload that to the generator, and download the resulting image with the device art applied around the solid fill colour. Then use the Magic Wand tool in Photoshop to delete the solid colour, and finally insert the video into the now-transparent space using After Effects. This is even a bit overkill, as you only need to overlay the video within the screen dimensions of the generated image, but it gives you an idea of how you could achieve what you want.
I had the same problem and ended up writing an app to do that:
https://play.google.com/store/apps/details?id=de.mobilej.recapp
It works completely on the device - no plugin or desktop version yet. It needs Android 7.0+.
I'm the author of the app, so apologies for referring to my own app.
Since ICS, it has become really easy to take a screenshot from within Android. In its presentations, Google goes further than that by streaming/saving the whole content shown on the screen, including sound, directly from the device. How can I achieve that? Do I need one of those extra HDMI-certified screen-grabbing cards that consoles require?
I am working on an Android tablet app for displaying sheet music, giving it some basic editing and playing capabilities.
My first hurdle is how to display the music in the first place.
The majority of desktop apps use music fonts, which makes zooming and resizing quite "simple". However, it would appear that I would need to be able to access the individual glyphs to do this, which is not possible in Android as far as I understand.
Should I stick with a music font and try to find a way to get the information I need?
Should I abandon that and look at using SVG files or transparent PNGs instead?
Can anyone advise here, please?
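One data point while deciding: Android can draw individual characters from a bundled font at arbitrary sizes via Canvas and a custom Typeface, which covers the basic scalable-glyph case even without outline-level access. A minimal sketch, assuming a hypothetical notation font shipped at assets/fonts/music.ttf (the code point shown is the SMuFL treble clef, as an example):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Typeface;
import android.view.View;

// Draws one glyph from a bundled music font, scaled to the view height;
// zooming then amounts to changing the text size.
public class GlyphView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public GlyphView(Context context) {
        super(context);
        // Hypothetical font path: bundle whichever notation font you license.
        paint.setTypeface(Typeface.createFromAsset(context.getAssets(), "fonts/music.ttf"));
        paint.setColor(Color.BLACK);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        paint.setTextSize(getHeight() * 0.8f);
        // U+E050 is the G clef in SMuFL-mapped fonts; substitute your font's mapping.
        canvas.drawText("\uE050", 0f, getHeight() * 0.8f, paint);
    }
}
```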
Long version:
I have a very particular issue. I'm a multimedia artist working together with an animator at the moment - we are trying to create an interactive animation that I want to make available online as a website and as a free app on the App Store and the Android Market.
But here's the key problem I am faced with now.
The output video of the actual animation will be massive in resolution - probably something like four or more times HD resolution - but for a reason. The idea is to let the viewer watch only a part of the video at one time - like panning around in Google Maps or any other canvas-like view (e.g. an MMORPG or a strategy game). So you only see a part of the whole image at any given moment, and you can move around to see what's "behind the corner".
So the animation would be a Google Maps-like canvas (panning, and perhaps zooming if there's enough time to implement it), but with video instead of images.
The big problem that comes up is performance. I was thinking that the best way to make it run would be to scale the video down for different devices accordingly. But even just considering desktop computers for now, scaling down to 720p for an HD screen means a total of about four times 720p in resolution, which is probably too much for an average computer to decode (Full HD is quite often already problematic) - and the output resolution would be more than the 4K standard (5120x2880, while 4K is 4096x2160). Anyhow, that is unacceptable.
But I reached the conclusion that there is really no point in decoding and rendering the parts of the video that are invisible to the user anyway (why waste the CPU and GPU time on that), since we know that only about 1/6th of the full canvas is visible at any given time.
This inspired the idea of splitting the output video into blocks - something between 8 and 64 files stacked side by side like cells in a table - then keeping a timecode timer in some variable and enabling the video blocks on demand. As the user drags the canvas, any block that becomes visible would automatically start playback at the timecode read from the global variable. Some heuristics could anticipate the user's movement and activate the invisible blocks early, to hide any delay caused by seeking within a video and starting playback. Blocks that are no longer visible could deactivate themselves after a certain amount of time.
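To make the bookkeeping concrete, here is a rough sketch of the visible-block computation (written in Java purely for concreteness - the same arithmetic ports directly to JavaScript or ActionScript; the grid and margin parameters are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// The canvas is a gridCols x gridRows table of equally sized video tiles.
// Given the current viewport, compute which tile indices must be playing.
public class TileGrid {
    final int gridCols, gridRows, tileWidth, tileHeight;

    TileGrid(int gridCols, int gridRows, int tileWidth, int tileHeight) {
        this.gridCols = gridCols;
        this.gridRows = gridRows;
        this.tileWidth = tileWidth;
        this.tileHeight = tileHeight;
    }

    // margin = extra rings of tiles around the viewport to pre-activate;
    // a cheap stand-in for the "anticipate the user's movement" heuristic.
    List<Integer> activeTiles(int viewX, int viewY, int viewW, int viewH, int margin) {
        int firstCol = Math.max(0, viewX / tileWidth - margin);
        int lastCol  = Math.min(gridCols - 1, (viewX + viewW - 1) / tileWidth + margin);
        int firstRow = Math.max(0, viewY / tileHeight - margin);
        int lastRow  = Math.min(gridRows - 1, (viewY + viewH - 1) / tileHeight + margin);
        List<Integer> tiles = new ArrayList<>();
        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                tiles.add(row * gridCols + col); // tiles not in this list can pause/unload
            }
        }
        return tiles;
    }
}
```

Each activated tile would then be seeked to the shared timecode before being shown, and anything outside the list paused after a grace period.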
So my first attempt was to see what my choices are platform-wise, and it really comes down to:
HTML5 with JavaScript (heavily using <video> tag)
Adobe Flash (using Flash Builder to deploy the apps to all the different devices)
And HTML5 would really be preferable.
So I did some research to see if it would be at all possible to synchronize more than one video at a time in HTML5. Unfortunately, it's far from perfect: there are two available "hacks" which work well in Firefox but are buggy in WebKit (the videos often get out of sync by more than a few frames, sometimes even up to half a second, which would be quite visible for a single video split into blocks/segments). Not to mention that I haven't even tried it on mobile platforms (Android/iOS) yet.
These two methods/hacks are Rick Waldron's sync as shown here:
http://weblog.bocoup.com/html5-video-synchronizing-playback-of-two-videos/
And the other one, also developed by Rick, is mediagroup.js (this one doesn't work in Chrome at all):
https://github.com/rwldrn/mediagroup.js
My test here: http://jsfiddle.net/NIXin/EQbAx/10/
(I've hidden the controller, because it always plays back earlier than the rest of the clips for some reason.)
So after explaining all that, I would really appreciate any feedback from you guys: what would be the best way of solving this problem, and on which platform? Is HTML5 mature enough?
Short version:
If I still haven't made it clear what I need: think of a video zoomed in at 600%, so that you can't see everything (some parts are off screen), and you need to pan around by dragging with your mouse (or flicking your finger on mobile devices) to see what's going on in different places of the video. How could I do that (and have the video run smoothly) across platforms, while retaining the high quality and resolution of the video?
Thanks a lot, let me know if you need any more details or any clarification of the matter.
I'm writing my first Android game, and though the game itself is working well, I'm not too sure about some of the Android integration aspects of it. Specifically:
Should I provide an in-game volume control?
Should I hide the status bar?
Is the Menu button generally used to pause the game, or should I provide an on-screen control for this?
etc.
Basically I just want my game to do everything the "standard" way. I don't want to frustrate users. Is there some resource (official or not) that lists recommendations for such things? Alternatively, can anyone give me a few important guidelines?
There are no official guidelines for this, but some 'Android common sense' is advisable.
As usual, there is more than one way to do anything, but most apps seem to follow these principles:
full screen games (especially ones in landscape mode) seem to hide the status bar most of the time
you should override the menu button, so it does not get pressed accidentally, but provide a quick way to leave the game
back button usually pauses the app
you do not need an in-game volume control, since all existing Android devices include a volume rocker; but do make turning the volume off available as soon as the game (splash screen) starts, preferably giving the player a few moments to turn it off before the music starts (a 'would you like to turn the music down?' dialog would be nice)
and (as usual on Android) don't count on anything: specify any special game requirements (trackpad support, minimum screen size, ...) in the manifest file
Hopefully you can find some more resources online.
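To make a few of these concrete, here is a minimal sketch of an Activity applying the full-screen, volume-rocker, and pause-on-back conventions (the game view and pause logic are hypothetical stand-ins for your own code):

```java
import android.app.Activity;
import android.media.AudioManager;
import android.os.Bundle;
import android.view.View;
import android.view.Window;
import android.view.WindowManager;

public class GameActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Full screen: remove the title bar and hide the status bar.
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);

        // Route the hardware volume rocker to the game's audio stream,
        // so no in-game volume control is needed.
        setVolumeControlStream(AudioManager.STREAM_MUSIC);

        View gameView = new View(this); // stand-in for your actual rendering view
        setContentView(gameView);
    }

    @Override
    public void onBackPressed() {
        // Pause rather than quit outright; offer a quick exit from the pause menu.
        pauseGame();
    }

    private void pauseGame() {
        // hypothetical: stop the game loop and show your pause menu here
    }
}
```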