How to stream live webcam video in AndEngine?

Hi, I'm using the game engine AndEngine, and I want to stream live video from a webcam on a robot to my Android app. The reason I'm using AndEngine is that I need game controls to drive the robot. However, I have no idea how to stream video when using AndEngine (or even when not using it, for that matter). The controls and the video feed need to be on the same screen, unless there's absolutely no other way. My question is: how would one put a video stream on top of an AndEngine scene, and how would one position that feed so it doesn't obscure the controls? (They're oriented in the bottom left and top right of the screen, which is a pain, I know, but I don't think I can change that due to some multi-touch problems with my device.)
Thanks.

Look at the Augmented Reality example on GitHub:
https://github.com/nicolasgramlich/AndEngineExamples
It could be of use to you. I should note that this example was problematic and didn't work when I tried it, but maybe you'll have more luck.
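In case it helps, here is a minimal sketch of one way to layer an ordinary Android VideoView over the AndEngine scene. It assumes the GLES2 branch of AndEngine, where BaseGameActivity lets you override onSetContentView(); the stream URL, view size and position are placeholders to adapt to your robot and your control layout:

    import android.net.Uri;
    import android.view.Gravity;
    import android.widget.FrameLayout;
    import android.widget.VideoView;
    import org.andengine.opengl.view.RenderSurfaceView;

    // Inside your BaseGameActivity subclass:
    @Override
    protected void onSetContentView() {
        final FrameLayout frame = new FrameLayout(this);

        // AndEngine's GL surface fills the screen and draws the scene and controls.
        this.mRenderSurfaceView = new RenderSurfaceView(this);
        this.mRenderSurfaceView.setRenderer(this.getEngine(), this);
        frame.addView(this.mRenderSurfaceView, new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT,
                FrameLayout.LayoutParams.MATCH_PARENT));

        // An ordinary VideoView layered on top; the URL is a placeholder for the
        // robot's stream, and the fixed size plus CENTER gravity keep it away
        // from controls in the bottom-left and top-right corners.
        final VideoView video = new VideoView(this);
        video.setVideoURI(Uri.parse("rtsp://192.168.1.42:8554/live")); // placeholder
        frame.addView(video, new FrameLayout.LayoutParams(480, 320, Gravity.CENTER));
        video.start();

        this.setContentView(frame, new FrameLayout.LayoutParams(
                FrameLayout.LayoutParams.MATCH_PARENT,
                FrameLayout.LayoutParams.MATCH_PARENT));
    }

Because the VideoView is just another child of the FrameLayout, you can size and position it however you like so it never covers the controls.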

Related

Android Reverse Playback and Frame by Frame

I am trying to create an app with the following features:
normal video playback
slower video playback
frame by frame
reverse video playback (normal, slower, frame by frame)
seekable to specific times
video scrubbing
no video sound needed
video is recorded via the device's camera
The closest comparisons would be the Ubersense Coach app for iOS and Coach's Eye on Android, though there are a few others, and they all have these features.
I have looked into several options so far:
First, the built-in Android MediaPlayer, which can't really do anything I need.
Then MediaExtractor with a decoder, looking through the code for Grafika (https://github.com/google/grafika), which can't play backwards.
I tried to pull out each frame as needed with MediaMetadataRetriever, which is too slow (about 100 ms per frame), and the GC gets in the way.
I looked for a library that could potentially solve the issue, without luck so far.
With MediaExtractor I already have the ability to play video forward, frame by frame or at full speed. But I don't have that luxury in reverse, and seeking takes some time, since I need it to be free of artifacts.
When trying to go in reverse, even seeking to a previous sync frame and advancing to the frame just before the current one is not doable without huge lag (as expected).
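For concreteness, the seek-to-sync-and-advance step I described looks roughly like this (a trimmed sketch: error handling and track selection by MIME type are omitted, track 0 is assumed to be video, and getInputBuffer() needs API 21+):

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import android.media.MediaFormat;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Render the frame at targetUs by seeking back to the nearest earlier sync
    // frame and decoding forward; every intermediate frame must be decoded and
    // dropped, which is exactly where the lag comes from.
    static void renderFrameAt(String path, long targetUs, Surface surface) throws IOException {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        MediaFormat format = extractor.getTrackFormat(0);
        extractor.selectTrack(0);
        extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);

        MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, surface, null, 0);
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean rendered = false;
        while (!rendered) {
            int in = decoder.dequeueInputBuffer(10000);
            if (in >= 0) {
                ByteBuffer buf = decoder.getInputBuffer(in);
                int size = extractor.readSampleData(buf, 0);
                if (size >= 0) {
                    decoder.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                } else {
                    decoder.queueInputBuffer(in, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                }
            }
            int out = decoder.dequeueOutputBuffer(info, 10000);
            if (out >= 0) {
                // Drop every frame before the target; render the first one at or after it.
                boolean render = info.presentationTimeUs >= targetUs;
                decoder.releaseOutputBuffer(out, render);
                rendered = render || (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            }
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }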
I am wondering if there is a better codec I could use, or a library I have yet to stumble upon. I would rather avoid having to write something custom in native code if possible.
Thanks in advance
Michael

Screencast from Google Glass

I'm looking for an efficient way to stream what Google Glass's camera sees in real time. I know there are several alternatives, including MyGlass, but I'd like to build my own app since it will have several extra features.
My first approach was to send the frames to the server one by one and let the server side compose the video, but I'm guessing there must be a more efficient approach. Can anyone point me in the right direction?
MyGlass does not deliver live video casting, nothing even close to it. I easily built an RTSP streaming app around the wonderful libstreaming library, which has recently been updated to support Google Glass. It worked smoothly (though with some lag) over WiFi at 320x240 resolution with the H264 codec.
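For reference, the basic wiring with libstreaming looks roughly like this (a sketch following the library's README; the layout resource and view id are hypothetical, and details vary by library version):

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import net.majorkernelpanic.streaming.SessionBuilder;
    import net.majorkernelpanic.streaming.gl.SurfaceView;
    import net.majorkernelpanic.streaming.rtsp.RtspServer;
    import net.majorkernelpanic.streaming.video.VideoQuality;

    public class GlassStreamActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_stream); // hypothetical layout with a libstreaming SurfaceView

            SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface); // hypothetical id

            // Describe the stream: 320x240 H264, no audio, as in the answer above.
            SessionBuilder.getInstance()
                    .setContext(getApplicationContext())
                    .setSurfaceView(surfaceView)
                    .setAudioEncoder(SessionBuilder.AUDIO_NONE)
                    .setVideoEncoder(SessionBuilder.VIDEO_H264)
                    .setVideoQuality(new VideoQuality(320, 240, 20, 500000));

            // Start libstreaming's RTSP server; a player can then connect to
            // rtsp://<glass-ip>:8086 (8086 is the library's default port).
            startService(new Intent(this, RtspServer.class));
        }
    }

Remember to declare the RtspServer service and the CAMERA/INTERNET permissions in the manifest.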
See this example for doing this generically on Android: http://www.mattakis.com/blog/kisg/20090708/broadcasting-video-with-android-without-writing-to-the-file-system

Video chat implementation with an H264 encoder

Hi guys, I am trying to develop a video chat application and I am using the H264 encoder for video, but I am facing an issue: the video seems a bit unclear. I have attached an image below in which you can see some shading above the eyebrows. Can anybody tell me what the possible problem could be? It occurs only with the front camera and works fine with the rear camera.
Hoping for a response.
The encoding world is huge: there are a lot of ways to store, manipulate and transfer data, and when it comes to imaging the options simply multiply.
Your problem reminds me of this topic; it also looks like there are some shadows, as if that bluish shadow were part of a previous frame with the shape in a different position.
Also remember that H264 is a patented codec, and you may have to pay license fees to use it.

See several videos from RTSP on an Android device?

I just want to know if I can watch several videos from the network at once using RTSP. I'm trying to write an Android app similar to video surveillance, and I need to see several videos on the screen at the same time. I tried MediaPlayer and then VideoView, but in both cases the videos sometimes appear and sometimes give me an error saying that one or more videos can't be played. What can I do to make this work well?
What Cruceo said is correct. It's better to mux all the streams (FFmpeg is really good at this) into one high-resolution stream with a low bitrate and framerate, then write a program that displays it and zooms in when you select a viewpoint.
Another option would be to convert the video streams into JPEG images. That way is a lot easier, because you can use a web browser to display them, and with JavaScript (ProcessingJS is very easy and good at this) you can build an image viewer with a lot of functionality.

Processing Android video frame by frame while recording

What I am attempting to do is create an application that adds effects to videos while recording. Is there any way to have a callback method receive each frame, apply an effect to it, and have the result recorded? There is currently an application on the Android Market (Videocam Illusion) that claims to be the only app that can do this. Does anyone know how Videocam Illusion does it, or have links to tutorials on video processing for Android?
This is a similar question that is unanswered:
Android preview processing while video recording
Unfortunately (unless I'm unaware of some other method provided by the API), the way this is done is by taking a direct stream from the camera and manipulating it with native code. I've done something similar before when I was working on an eye tracker, so I'll tell you roughly how it works:
Open a stream using the NDK (or possibly the Java API, depending on the implementation).
Modify the bytes of the stream; each frame is sent as a separate packet, so you have to grab each packet from the camera and modify it. You can replace colors or translate pixels, and you can also use OpenGL to transform the image entirely, adding things like glass effects.
Flatten the images back out.
Send each image over to the view to be displayed.
One thing you have to be mindful of is that grabbing, modifying and sending each packet and image has to happen in about 1/30th of a second per frame, so the code has to be extremely optimized.
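For the Java side of that pipeline, here is a minimal sketch of receiving and modifying preview frames with the android.hardware.Camera API of that era (now deprecated); the class name and the toy effect are illustrative, and feeding the modified frames to an encoder for recording is not shown:

    import android.graphics.ImageFormat;
    import android.hardware.Camera;

    // Receives each NV21 preview frame, modifies it in place, and returns the
    // buffer to the camera. Set this up before calling camera.startPreview().
    public final class FrameEffectSketch implements Camera.PreviewCallback {

        public FrameEffectSketch(Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            // NV21 uses 12 bits per pixel; preallocating one reusable buffer
            // avoids per-frame allocations and GC pressure.
            int bufferSize = size.width * size.height
                    * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
            camera.addCallbackBuffer(new byte[bufferSize]);
            camera.setPreviewCallbackWithBuffer(this);
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // 'data' holds one NV21 frame. Apply the effect here, or hand the
            // buffer to native code / an encoder. This toy effect inverts the
            // luma plane, i.e. produces a negative image.
            Camera.Size size = camera.getParameters().getPreviewSize();
            int lumaBytes = size.width * size.height;
            for (int i = 0; i < lumaBytes; i++) {
                data[i] = (byte) (255 - (data[i] & 0xFF));
            }
            // Return the buffer so the camera can fill it with the next frame.
            camera.addCallbackBuffer(data);
        }
    }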
