Is it possible to do some postprocessing on the video data that gets sent to the display driver in Android?
For context, what I would like to do is to be able to apply effects such as blurring, sharpening, and increasing or decreasing contrast to the entire screen output, regardless of what is running.
I would like to know if there is some way to grab the actual video data before it gets shown on screen, process it, and then send it to the screen (a fairly low-level operation, which I don't believe is provided by the Android API; however, I am only a beginner and do not know how hard it would actually be), or if there is any way I could simulate this kind of behavior.
It may work in software in theory; however, performance may be the big issue. It cannot be done in the Java app layer. Normally it's done by hardware (on the Qualcomm platform, there is a specific device called the MDP, which is mostly for video post-processing).
I have a MacBook and I would like to use it to monitor a Nest wireless security camera, including an approximately 1 TB archive of continuously updated video history (perhaps of motion-detected clips only). This can be done by subscribing to a Nest cloud account, but that can get expensive, especially for several cameras, so I'd rather do it myself.
Can anyone point me to open-source code that will handle this? If not, is there another type of camera that will allow me to do this over wifi?
As promised above, I will update the status of this issue.
After a significant amount of work and also significant progress, I was able to connect to the live Nest camera feed programmatically, but I was never able to actually record the live stream into short videos, although this was easy for my MacBook webcam. My belief is that Nest has engineered this feed such that camera owners cannot directly access it, leaving no option but to use their "Nest Aware" monthly service. I do not want to do this because I do not want to pay for it and because I want to create options that Nest Aware does not offer.
Searching the web, it appears that this kind of thing might be done by using another software package, Blue Iris. I did not want to get this either, as I am sure that flexibility would be sacrificed and the camera would also need to be made publicly shared(!)
So I am giving up on Nest, although I like the hardware.
I did find an alternative. I also had an Arlo Q camera and I tried that, using an open source API on GitHub:
https://github.com/jeffreydwalter/arlo
I was able to access the camera and save motion-detected videos to my disk within an hour of finding the above link. So, if you want to do this type of thing, I recommend Arlo over Nest.
I'm currently working on an app with the end goal of being roughly analogous to an Android version of AirPlay for iDevices.
Streaming media and all that is easy enough, but I'd like to be able to include games as well. The problem with that is that to do so I'd have to stream the screen.
I've looked around at various things about taking screenshots (this question and the derivatives from it in particular), but I'm concerned about the frequency/latency. When gaming, anything less than 15-20 fps simply isn't going to cut it, and I'm not certain such is possible with the methods I've seen so far.
Does anyone know if such a thing is plausible, and if so what it would take?
Edit: To make it more clear, I'm basically trying to create a more limited form of "remote desktop" for Android. Essentially, capture what the device is currently doing (movie, game, whatever) and replicate it on another device.
My initial thoughts are to simply grab the audio buffer and the frame buffer and pass them through a socket to the other device, but I'm concerned that the methods I've seen for capturing the frame buffer are too slow for the intended use. I've seen people throwing around comments of 3 FPS limits and whatnot on some of the more common ways of accessing the frame buffer.
What I'm looking for is a way to get at the buffer without those limitations.
I am not sure what you are trying to accomplish when you refer to "streaming" a video game.
But if you are trying to mimic AirPlay, all you need to do is connect to a device via a Bluetooth or internet connection and allow sound, then save the results or handle them accordingly.
But video games do not "stream" a screen, because the mobile device will not handle much of a workload. There are other problems too, like how you will handle the game if the person loses their internet connection while playing. On top of that, this would require a lot of servers and bandwidth to support the game workload on the backend.
But if you are trying to create an online game, essentially all you need to do is send and receive messages from a server. That is simple. If you want to "stream" to another device, simply connect the mobile device to speakers or a TV. Just about all mobile video games and applications just send simple messages via JSON or something similar. This reduces overhead, uses simple syntax, and can be used across multiple platforms.
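As a rough illustration of that kind of JSON messaging, here is a minimal Java sketch; the host, port, and message fields are hypothetical, and org.json ships with the Android SDK:

```java
import java.io.PrintWriter;
import java.net.Socket;

import org.json.JSONObject;

public class GameMessageSender {
    // Hypothetical server address; replace with your own game server.
    private static final String SERVER_HOST = "example.com";
    private static final int SERVER_PORT = 9000;

    // Builds a small JSON message and writes it over a plain TCP socket.
    // On Android, run this off the main thread.
    public static void sendPlayerMove(int x, int y) throws Exception {
        JSONObject msg = new JSONObject();
        msg.put("type", "playerMove");
        msg.put("x", x);
        msg.put("y", y);

        Socket socket = new Socket(SERVER_HOST, SERVER_PORT);
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            out.println(msg.toString()); // one JSON object per line
        } finally {
            socket.close();
        }
    }
}
```

The one-message-per-line framing and the field names are just one possible convention; the point is that the payload is a few bytes of text rather than a video stream.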
It sounds like you should take a look at this (repost):
https://stackoverflow.com/questions/2885533/where-to-start-game-programming-for-android
If not, this is more of an open question about how to implement a video game.
Hi, I was researching the possibility of transporting the "not rendered" rendering calls to a second screen from Android. While researching, I found out that Skia is behind SurfaceFlinger and the Canvas.draw() method. So my question is now: what would be the best interception point to branch off the calls in order to use them for a second screen/machine? The second device need not be a pure replay device; it can be another Android device.
At first I used VNC for that concept, but quickly found out that it performs badly due to the double-buffering effect. It is also possible to manipulate the Android code so that it omits the double buffering, but it is still of interest to actually use the pre-rendered calls on a second, possibly scaled, device.
Thanks
I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage.
Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK).
Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
Is there any way I can use the API to do something like this?

No.

Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
It is possible you can find a Java library that lets you assemble movies out of stills and annotations, but I would be rather surprised if it met your needs, would run on Android, and would run acceptably on mobile phone hardware.
IMHO, the best route is to use a Web service. Use the device for data collection, use the server to do all the heavy lifting of assembling the movie out of the parts.
If you have to do it on-device, the NDK seems like the only practical route.
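To make the web-service route a bit more concrete, here is a hedged sketch of the device-side half: each still gets POSTed to a server that does the actual movie assembly. The endpoint URL, the index query parameter, and the FrameUploader class are assumptions for illustration, not an existing API.

```java
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FrameUploader {
    // Hypothetical endpoint; the server is assumed to assemble the movie from uploaded frames.
    private static final String UPLOAD_URL = "https://example.com/frames";

    // POSTs one JPEG still to the server; run this off the main thread on Android.
    public static void uploadFrame(String jpegPath, int frameIndex) throws Exception {
        URL url = new URL(UPLOAD_URL + "?index=" + frameIndex);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");

        FileInputStream in = new FileInputStream(jpegPath);
        OutputStream out = conn.getOutputStream();
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.flush();
        } finally {
            in.close();
            out.close();
        }

        int code = conn.getResponseCode();
        conn.disconnect();
        if (code != HttpURLConnection.HTTP_OK) {
            throw new RuntimeException("Upload failed with HTTP " + code);
        }
    }
}
```

The annotations could be applied either on the device before upload or on the server; the server-side assembly of the stills into a movie is out of scope here.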
Do you just want to create movie files or do you want to display them on the phone?
If you just want to display the post-processed, annotated images as a movie, then it's possible. What is the format of your images? Currently, I'm able to display MJPEG video on a Nexus One (running 2.1) without any noticeable lag and without using the NDK. In my case the images are coming from the network.
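A minimal sketch of that approach (decode each JPEG frame and push it to an ImageView) might look like this; the MjpegRenderer class and the assumption that frames arrive as complete JPEG byte arrays from the network are illustrative, not the exact code used on the Nexus One:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.widget.ImageView;

public class MjpegRenderer {
    private final ImageView target;

    public MjpegRenderer(ImageView target) {
        this.target = target;
    }

    // Call once per JPEG frame pulled off the network stream (from a background thread).
    // Decoding happens here; View.post() hands the finished bitmap to the UI thread.
    public void onFrame(byte[] jpegBytes) {
        final Bitmap frame = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
        if (frame == null) {
            return; // skip frames that fail to decode
        }
        target.post(new Runnable() {
            @Override
            public void run() {
                target.setImageBitmap(frame);
            }
        });
    }
}
```

Splitting the MJPEG stream into individual JPEGs (scanning for the 0xFFD8/0xFFD9 markers, or reading the multipart boundaries if it comes over HTTP) is the other half of the job.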
On the other hand, if you just want to create movie files and store them on the phone or some other place, then CommonsWare's idea of "delegating" this to a server makes more sense, since you will have more processing power and storage on the server. This will require that you have access to a network and don't mind sending all the images from the phone to the server and then downloading the movie file back to the phone.
I noticed that Flash allows you to insert cues into a video file (FLV). Is something like this possible on Android? I have a video that runs locally in my Android app, and I would like to insert cues into the video that will give me callbacks when a certain portion of the video has been reached. If this is not possible, are there any other methods to do something similar? I have to be pretty precise about where the cue is located.
Thanks
Note:
I just found this same question on Stack Overflow. Can anyone verify that this is still the case (that it is not possible, except by polling the video continually)? I did know of this approach, but it's not the most accurate way if you need to be precise and stitch dynamic pieces of video together seamlessly.
Android VideoView - Detect point of time in video
I'm working on this as well, together with a kind of cue/action script. For tutorials and instructional videos, I need to keep track of the current position in order to serve, for example, questions and navigation menus appropriate for that point in time. It's easy when it's sufficient to act in response to user input, but otherwise firing up a thread to poll at some decent interval is the way to go. Accuracy might be acceptable and can be calibrated by sensing the actual position.
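As a minimal sketch of that polling idea, assuming a Handler on the UI thread, cue times supplied in milliseconds, and a hypothetical CueListener callback (there is no built-in cue API here; VideoView only exposes getCurrentPosition()):

```java
import android.os.Handler;
import android.widget.VideoView;

public class CuePoller {
    private static final int POLL_INTERVAL_MS = 100; // accuracy vs. overhead trade-off

    public interface CueListener {
        void onCue(int cueTimeMs);
    }

    private final VideoView videoView;
    private final int[] cueTimesMs;   // cue points in milliseconds, sorted ascending
    private final CueListener listener;
    private final Handler handler = new Handler();
    private int nextCue = 0;

    public CuePoller(VideoView videoView, int[] cueTimesMs, CueListener listener) {
        this.videoView = videoView;
        this.cueTimesMs = cueTimesMs;
        this.listener = listener;
    }

    private final Runnable poll = new Runnable() {
        @Override
        public void run() {
            int pos = videoView.getCurrentPosition();
            // Fire every cue whose time has been passed since the last poll.
            while (nextCue < cueTimesMs.length && pos >= cueTimesMs[nextCue]) {
                listener.onCue(cueTimesMs[nextCue]);
                nextCue++;
            }
            if (nextCue < cueTimesMs.length) {
                handler.postDelayed(this, POLL_INTERVAL_MS);
            }
        }
    };

    // Start polling; call after videoView.start().
    public void start() {
        handler.post(poll);
    }

    public void stop() {
        handler.removeCallbacks(poll);
    }
}
```

Shrinking POLL_INTERVAL_MS improves accuracy at the cost of more frequent wakeups; the worst-case error stays around one polling period, which is the calibration trade-off mentioned above.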