I am working on an Android TV app project. I need to split one video across 2, 3 or X screens in equal parts. Each screen has an Android TV stick plugged into it, running my app.
For example:
If we have 2 screens each one will show 50% of the video played.
If we have 3 screens each one will show 33.33% of the video played.
If we have 4 screens each one will show 25% of the video played.
Here is an image to give a better understanding of my expectations:
The video is played simultaneously on each screen of the wall, and I have already thought about this point: one screen will be the NTP (Network Time Protocol) master and the other screen(s) will be the slave(s), to keep the players synchronized.
My first idea is to have the complete video in each app, play it, and keep visible only the part that I need. How can I achieve that? Is it possible?
Thank you in advance for your help.
I'm not clear how you'll handle the height (e.g., if you have a 1080p video but span it across four screens, you're going to have to cut off 3/4 of the pixels to "zoom in" on it across the screens), but some thoughts:
If you don't have to worry about HDCP, an HDMI splitter might work. If not, but it's for a one-off event (e.g., setting up a kiosk for a trade show), then it's probably least risky and easiest to create separate video files with them actually split how you'd want. If this has to be more flexible/robust, then it's going to be a bit of a journey with some options.
Simplest
You should be able to set up a SurfaceView as large as you need with the offsets adjusted for each device. For example, screen 2 might have a SurfaceView set with a width of #_of_screens * 1920 (or whatever the appropriate resolution is) and an X starting position of -1920. The caveat is that I don't know how large of a SurfaceView this could support. For example, this might work great for just two screens but not work for ten screens.
You can try using VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING to scale the video output based on how big you need it to display.
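A minimal sketch of that layout math with a plain MediaPlayer, assuming 1920x1080 panels, a full-screen FrameLayout parent, and that screenIndex / screenCount come from your own wall configuration (hypothetical names):

import android.media.MediaPlayer;
import android.view.SurfaceView;
import android.widget.FrameLayout;

// Called once the SurfaceView and MediaPlayer exist.
// screenIndex is 0-based: screen 0 shows the leftmost slice, screen 1 the next, etc.
void positionForPanel(SurfaceView surfaceView, MediaPlayer mediaPlayer,
                      int screenIndex, int screenCount) {
    int panelWidth = 1920;   // assumed per-panel resolution
    int panelHeight = 1080;

    // Make the SurfaceView as wide as the whole wall...
    surfaceView.setLayoutParams(
            new FrameLayout.LayoutParams(screenCount * panelWidth, panelHeight));

    // ...then shift it left so only this panel's slice stays on screen:
    // screen 0 -> 0, screen 1 -> -1920, screen 2 -> -3840, and so on.
    surfaceView.setTranslationX(-screenIndex * panelWidth);

    // Fill (and crop) the oversized surface with the video.
    mediaPlayer.setVideoScalingMode(MediaPlayer.VIDEO_SCALING_MODE_SCALE_TO_FIT_WITH_CROPPING);
    mediaPlayer.setDisplay(surfaceView.getHolder());
}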
For powerful devices
If the devices you're working with are powerful enough, you may be able to render to a SurfaceTexture off screen then copy the portion of the texture to a GLSurfaceView. If this is DRMed content, you'll also have to check for the EGL_EXT_protected_content extension.
For Android 10+
If the devices are running Android 10 or above, SurfaceControl may work for you. You can use a SurfaceControl.Transaction to manipulate the SurfaceControl, including the way the buffer coordinates are mapped. The basic code ends up looking like this:
new SurfaceControl.Transaction()
.setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
.apply();
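In a wall setup the source rectangle would be this panel's vertical slice of the video and the destination rectangle the local display; a rough sketch, where screenIndex, screenCount and the resolutions are assumptions and surfaceControl is the SurfaceControl backing your output surface:

// Hypothetical wall: screenCount panels side by side, this device is screenIndex (0-based).
int videoWidth = 3840, videoHeight = 1080;   // full video resolution
int sliceWidth = videoWidth / screenCount;

// Source: this panel's slice of the video buffer.
Rect sourceRect = new Rect(screenIndex * sliceWidth, 0,
        (screenIndex + 1) * sliceWidth, videoHeight);

// Destination: the whole local display (assumed 1920x1080 here).
Rect destRect = new Rect(0, 0, 1920, 1080);

new SurfaceControl.Transaction()
        .setGeometry(surfaceControl, sourceRect, destRect, Surface.ROTATION_0)
        .apply();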
There's also a SurfaceControl sample in the ExoPlayer v2 demos: https://github.com/google/ExoPlayer/tree/release-v2/demos/surface
Related
I am creating a multiplayer game in Unity and, as with a lot of multiplayer games, I want the user to have a top-down view of both his own game map and the other player's game map. I want the world to be in 3D, and I'm having trouble figuring out how to get both players to see the same things due to different screen resolutions. I obviously don't mind - and kind of expect - that I will need some black bordering.
The first screen resolution issue comes when rendering the view to a texture, and then the second screen resolution issue comes when rendering the texture to the UI. This is currently blowing my mind.
Is this possible with Unity and what would be the best approach? Am I barking up the wrong tree by using render to texture at all?
EDIT:
For clarity: it's probably best described as a top-down split screen, something like the following image, except it will make use of full 3D and be at a slight angle.
It sounds like you're trying to keep the game "fair", so one player with higher resolution settings doesn't have an advantage over the other.
If that's the case, and you're happy with black borders, you can probably just use a mask in front of the camera. As long as the mask is the same for both players, the resolution wouldn't make any difference other than quality.
I'm making a mobile game in Unity for Android and iOS. I already have my assets designed for a 480*800 pixel device, but when I launch my game on a bigger screen, the image looks horrible.
I tried different techniques to scale the camera and the images... without success. On Android there are multiple drawable sizes (hdpi, xxhdpi...) and on iOS the images can't be the same for an iPad and an iPhone, so I don't understand how I can load multiple images to fit every screen resolution.
Or how should I make my assets to handle that? Can you explain, please?
480*800 pixels is incredibly small for game images these days; we rarely get or use anything less than 4K now.
One thing: if it is actual "pixel art" (so, "retro" pixel art), you must use "point" (nearest-neighbor) filtering when you enlarge it, to keep the "pixel shape".
Note that pretty much everyone uses 2DToolkit with Unity for 2D projects. It creates sprite sheets for you. BUT it also has the concept of different sprite sheet sets for different screen sizes, if you are working on a pixel-perfect concept (as much as that has any meaning today).
Unity itself does not contain any "different sprites for different devices" concept, and this is one of the main reasons 2DToolkit remains so popular.
Finally, note that if you make computer games, dealing with different screen ratios is extremely difficult.
Say you are making a side scroller: what does it "mean" that some players have a wider screen than others? What should you "see" on a wide screen versus a normal screen in, say, GTA?
This can involve a huge amount of conceptual work and programming. This affects everyone who makes games, from the kid on the corner to Nintendo. There is no "simple solution".
I need a live stream or some method to see what is currently being displayed on screen.
I do not want to save a screenshot to a file, or record a video etc.
I do not want to stream the screen to another device (I'm not trying to screencast/Miracast); I need the screen contents locally in an Android application.
Why would I need that?
I want to write an Android application to manage a lightpack by adapting the lights to what is being displayed on the android device.
Requiring root is fine (I expected root to be required).
Needing additional lower level binaries is also acceptable.
The quality of the image does not have to exactly match what is on screen; a much lower resolution will still be acceptable.
If you say it can't be done, you're wrong. If you download the Prismatik app for Android you'll see exactly what I want: it does screen grabbing and manages my lightpack without any performance issues while grabbing the screen. I just want to create my own application that does something similar, except I will make it an open source GitHub project...
edit: I think using /dev/graphics/fb0 could be part of an answer. I don't know what the performance of using this will be like, though.
I have found an answer to my own question.
Android (Linux too) has a framebuffer file (/dev/graphics/fb0 on Android) which you can read and which contains the current framebuffer of the entire screen contents. In most cases it will actually contain 2 frames for double buffering... This file constantly changes as the contents of the screen change (for what I want to use it for, that is fine; I don't need the contents exact to the millisecond, I only need to know roughly what colors are displayed where on screen).
This file contains the graphics in a raw format. On my Galaxy S3 it contained 4 bytes per pixel.
If you need to know how many bytes you should read to get your current screen (only 1 frame), you need to do the math. My screen is 720x1280 pixels.
Therefore I need to read:
720 x 1280 pixels = 921,600 pixels
921,600 pixels * 4 bytes per pixel = 3,686,400 bytes
From this you can extract each color channel, and do the calculations you need.
The method to read the data is really up to you. You can write a native bit of code to read the buffer and do manipulations.
Or you can read the buffer from Android code with an Input Stream.
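For example, a rough sketch of the InputStream approach in Java (the 720x1280 geometry and 4-bytes-per-pixel RGBA layout are assumptions taken from my own device; yours may differ):

import java.io.FileInputStream;
import java.io.IOException;

// Reads one frame from the framebuffer and averages the color channels.
static int[] averageColor() throws IOException {
    int width = 720, height = 1280, bytesPerPixel = 4;
    byte[] frame = new byte[width * height * bytesPerPixel];   // 3,686,400 bytes

    try (FileInputStream fb = new FileInputStream("/dev/graphics/fb0")) {
        int read = 0;
        while (read < frame.length) {
            int n = fb.read(frame, read, frame.length - read);
            if (n < 0) break;                                  // stop if the buffer ends early
            read += n;
        }
    }

    long r = 0, g = 0, b = 0;
    for (int i = 0; i < frame.length; i += bytesPerPixel) {
        r += frame[i] & 0xFF;       // note: the channel order may be BGRA on some devices
        g += frame[i + 1] & 0xFF;
        b += frame[i + 2] & 0xFF;
    }
    int pixels = width * height;
    return new int[] { (int) (r / pixels), (int) (g / pixels), (int) (b / pixels) };
}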
Depending on the ROM loaded on your device you might need root to access this file (in most cases you will need root).
I am building a mobile application that will target iPhone/iPad and Android phones. The application will involve users taking photos and uploading to my server and later on the users will be able to look at those photos on their mobile devices (although not necessarily their own photos so an Android user might be looking at a photo taken with an iPhone).
Which sizes should I save the photos to be able to cover the most use cases? iPads are 1.333 W/H, most mobile phones are 1.5 or 1.333 W/H with some rare 1.666 W/H. Specifically:
iPad: 1024x768, iPad3: 2048x1536, iPhone and some other phones: 960x640, 480x320, 800x480.
To be able to keep it manageable, I need to decide on a few certain image sizes and save the photos in those sizes. I am not really looking for help on the technical side. I can do image scaling on the server side etc. I am looking for recommendations / best practices / lessons learned about image sizes before I go too far into building it.
Which sizes should I save the photos in to cover the most use cases?
Do you recommend any client side scaling before uploading to server to save on transfer time (for example scaling down 2048x1536 iPad photos) or should I always transfer originals?
How should I handle incompatible image sizes (showing a picture taken with an iPad on an Android device for example)? Should I pre-cut those images on my server before sending to client or should I let the client phone handle image resizing?
There is also the issue of UI. There will be other things on the page other than the photo maybe a button or two for navigation. Should I go for something smaller than the full screen size while keeping the same aspect ratio when saving pictures?
I know some of these questions don't have one answer and the answers are relative but I wanted to get some opinions. Thanks.
For Android, I think the best place for you to start would be here; it has a lot of information, including standard screen sizes and how to display images while keeping them at the best possible quality.
http://developer.android.com/guide/practices/screens_support.html
I'd also suggest doing as much image manipulation as possible on your server. Images are a pain to work with on Android due to memory constraints and fragmentation. Two phones may store pictures taken the same way with different orientations, and there is no simple way to handle rotations, though it can be done (thankfully, I've yet to encounter a phone that incorrectly records Exif data, but I wouldn't be surprised if they existed...). The more you rely on the phone to do, the more chances you have for error due to manufacturers putting wrappers around and otherwise customizing how it handles media.
As for how to display, ideally if your back end is already doing a bunch of different resizes, you can include your screen density when you request the images and send the best size based on the dev guide. If you want to keep differences to a minimum, at least support medium or high density for phones, and extra-high density for tablets.
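As a rough illustration of the request side, you could map the device's density bucket to a size suffix when building the image URL (the suffix names and URL scheme here are made up):

import android.content.Context;
import android.util.DisplayMetrics;

// Picks an image variant to request based on the screen density.
// The "_small" / "_medium" / "_large" suffixes are hypothetical server-side sizes.
static String imageUrlFor(Context context, String baseUrl) {
    int dpi = context.getResources().getDisplayMetrics().densityDpi;
    if (dpi <= DisplayMetrics.DENSITY_MEDIUM) {
        return baseUrl + "_small.jpg";
    } else if (dpi <= DisplayMetrics.DENSITY_HIGH) {
        return baseUrl + "_medium.jpg";
    } else {
        return baseUrl + "_large.jpg";   // xhdpi and above (tablets, newer phones)
    }
}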
Just my two cents, I'm sure you'll have a lot of opinions. Good luck.
I don't have a full answer for you, but I do have some thoughts...
1) I'd suggest reducing the image sizes before uploading. If I were using your application and had to upload a 4 MB photo every time I wanted to use it, I'd probably pass. And as we venture forward, we're hitting much better camera-phone technology; Nokia has released a 41-megapixel camera, which I'm guessing will create rather large images. Having users download a 4-6 MB image is also not a great idea. Just some thoughts from a user's point of view (see the downscaling sketch after this list).
2) I wouldn't cut the images. You don't necessarily know what parts of the image aren't important, so how would you know where to crop it? Let the phone size the pictures accordingly and rely on the ability to zoom into pictures to see things at a larger size.
3) You could try to make a UI that hides buttons. If you have something really simple (like just going forward or backwards) you could rely on gesture controls (swiping) to move around your application. You can implement sliding drawers and menus that take up space temporarily, when in use, but give you the space back when you want to look at the main content (pictures, in your case). I've typically found that hiding buttons doesn't work well and people seem to want/search for buttons that let them navigate, but the Android gallery works just fine with menu + swiping only, so who really knows.
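On the first point about shrinking uploads, here is a minimal sketch using BitmapFactory's inSampleSize to decode a reduced-size photo before sending it (the path and target size are placeholders, and inSampleSize only works in powers of two, so the result can be up to twice the requested size):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Decodes a photo at roughly maxDimension on its longest side instead of full size.
static Bitmap decodeScaled(String photoPath, int maxDimension) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;              // read dimensions only, no pixel data
    BitmapFactory.decodeFile(photoPath, opts);

    int largestSide = Math.max(opts.outWidth, opts.outHeight);
    int sampleSize = 1;
    while (largestSide / (sampleSize * 2) >= maxDimension) {
        sampleSize *= 2;
    }

    opts.inJustDecodeBounds = false;
    opts.inSampleSize = sampleSize;              // e.g. 4 turns 3264x2448 into 816x612
    return BitmapFactory.decodeFile(photoPath, opts);
}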
Long version:
I have a very particular issue. I'm a multimedia artist working at the moment together with an animator - we are trying to create an interactive animation that I want to make available online as a website and as a free app on the App Store and the Android Market.
But here's the key problem I am faced with now.
The output video of the actual animation will be massive in resolution - probably something like 4 or more times the HD resolution, but for a reason. The idea is to let the viewer watch only a part of the video at any one time - like when panning around in Google Maps or any other canvas-like view (e.g., MMORPGs or strategy computer games). So you only see a part of the whole image at one time, and then you can move around to see what's "behind the corner".
So the animation would be a Google Maps-alike canvas (panning and perhaps zooming if there's enough time to implement it) but with video instead of images.
The big problem that comes up is performance. I was thinking the best way to make it run would be to scale the video down for different devices accordingly. But even just considering desktop computers for now - scaling down to 720p for an HD screen still means a total of about 4 times 720p in each dimension, which is probably too much for an average computer to decode (Full HD is quite often already problematic) - and the output resolution would be more than the 4K standard (5120 by 2880, whilst 4K is 4096x2160). Anyhow, that is unacceptable.
But I reached the conclusion that there is really no point in decoding and rendering the parts of the video which are invisible to the user anyway (why waste the CPU+GPU time for that) - since we know that only about 1/6th of the full canvas would be visible at any given time.
This inspired an idea: maybe I could split the output video into blocks - something between 8 and 64 files stacked together side by side like cells in a table - then have a timecode timer running in some variable and enable the video blocks on demand. As the user drags the canvas so that a block becomes visible, it would automatically start playback of that file at the timecode read from the global variable. There could be some heuristics anticipating the user's movement and prematurely activating the invisible blocks, to remove any delay caused by seeking within a video and starting playback. Blocks which are no longer visible could then deactivate themselves after a certain amount of time.
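Whatever the platform ends up being, the activation test itself is just a viewport-intersection check; a small sketch of that logic (written in Java here only for illustration; the grid and viewport values are assumptions):

import java.util.ArrayList;
import java.util.List;

// Returns the [row, col] of every grid cell that overlaps the current viewport.
// All coordinates are in full-canvas pixels.
static List<int[]> visibleBlocks(int canvasW, int canvasH, int rows, int cols,
                                 int viewX, int viewY, int viewW, int viewH) {
    int cellW = canvasW / cols, cellH = canvasH / rows;
    List<int[]> visible = new ArrayList<>();
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            int x = col * cellW, y = row * cellH;
            boolean overlaps = x < viewX + viewW && x + cellW > viewX
                    && y < viewY + viewH && y + cellH > viewY;
            if (overlaps) visible.add(new int[] { row, col });   // activate this block's player
        }
    }
    return visible;
}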
So my first step was to see what my choices are platform-wise, and it really comes down to:
HTML5 with JavaScript (heavily using <video> tag)
Adobe Flash (using Flash Builder to deploy the apps to all the different devices)
And HTML5 would really be more preferable.
So I did some research to see if it would be at all possible to synchronize more than one video at a time in HTML5. Unfortunately it's far from perfect: there are two available "hacks" which work well in Firefox but are buggy in WebKit (the videos often get out of sync by more than a few frames, sometimes even by up to half a second, which would be quite visible if it were a single video split into blocks/segments). Not to mention that I have not even tried it on mobile platforms (Android / iOS).
These two methods/hacks are Rick Waldron's sync as shown here:
http://weblog.bocoup.com/html5-video-synchronizing-playback-of-two-videos/
The other one, also developed by Rick, is mediagroup.js (this one doesn't work in Chrome at all):
https://github.com/rwldrn/mediagroup.js
My test here: http://jsfiddle.net/NIXin/EQbAx/10/
(I've hidden the controller, because it always plays back earlier than the rest of the clips for some reason)
So after explaining all that I would really appreciate any feedback from you guys - what would be the best way of solving this problem and on which platform. Is HTML5 mature enough?
Short version:
If I still haven't made it clear as to what I need - think of a video zoomed in at 600% so that you can't see everything (some bits are off screen) and you need to pan around by dragging with your mouse (or flicking your finger on mobile devices) to see what's going on in different places of the video. How could I do that (have the video run smoothly) across platforms, while retaining the high quality and resolution of the video?
Thanks a lot, let me know if you need any more details or any clarification of the matter.