I am developing an Android application that provides instruction on various topics. Within my application, I would like to have a "talking head" (or even a full-body person) that talks, with moving lips synchronized (or at least close) to the spoken output. Ideally the head/body would also move while the speech is occurring: eyes blinking, arms (if it has a body) moving, and so on.
I know how to do all the speech parts, but I've never developed animation before. I'm using Eclipse. I'm really only looking for advice to get me started down the right path. Is there a framework, add-on purchase, etc. that will make my life easier? There has to be a better method than animating/rotating open/closed mouth images during the speech output. I do NOT want JibJab-style animation!
Thank you in advance for any starting advice you can give me!
Xface may be a solution. You need SMIL scripts for the audio.
I created a React Cordova app that listens to the mic using the Web Audio API (https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API) and shows a visualization of the incoming sound.
I can see a delay of about a second, maybe more, between the sound coming out of the speakers in my room and the graph shown in the Android app.
I can imagine a few possible reasons:
- React is not fast enough for such an app, which I doubt.
- I am analyzing too many frequency bins of the audio data, and should restrict the analysis to the frequencies I'm actually interested in.
- My phone is too slow.
Do you have any suggestions to reduce the delay in this specific app?
Have a skim through this article and see if you can avoid the listed sources of performance problems. When debugging, you can also use console.log to record when a method has finished running (using the Date class to get the current time) and compare timings to see which methods are being cumbersome.
https://reactnative.dev/docs/performance
If you are still having trouble, be sure to share the code snippets that may be causing issues so the lovely Stack Overflow community can have a look; maybe someone will find something :)
Skip the first two paragraphs if you're not interested in why I'm asking this question.
Here is the situation: I'm using a Moto Z Play with the Projector Mod, which is really cool and lets me literally project my phone screen onto the wall. I've been writing a personal assistant program that helps me with my daily life, e.g. sorting Gmail, reminding me of calendar events, keeping track of anything I want it to remember and reminding me of those things when I've asked it to, and much more. It's basically a personal secretary.
One new feature I just added is a habit tracker. I created a small graphical interface on my phone using Tasker that emails my "assistant", which then records the habit and creates a really nice graph showing my past habit record, as well as using a neural network to predict the next day's habit. The only problem is that the graph got really intricate really fast. I want to show a month's worth of habits (16 habits total), which can amount to a 16 x 31 floating-point graph with labels. My laptop screen is just not big enough to display all of that without it becoming a mess! I really want to display the graph via the projector mod; the entire wall will definitely be big enough to show all that data.
OK, now my question (thanks for hanging in there, I know that was a lot):
Is there any way that I can display an image on my phone from a Python program without creating a standalone app? Even if my phone needs to be plugged into my computer to stream the data through a cable.
I could use a framework like Kivy to create a standalone app, but then it wouldn't be hooked up to my assistant, which completely defeats the purpose.
I'm not looking for anything like a notification; I really want to draw over the entire screen of my phone. This is something I did with Processing (a Java library) a while back, but now I'm using Python because it's more machine-learning friendly.
I've looked into a lot of services, but nothing seems to be able to do this. Remember that I don't need to send anything back from my phone; I simply need to display an image on the screen until the desktop-side program tells it to stop.
Not my area of expertise, but if I needed to do something like that I would turn the Python app into a web service using Django and open the URL on my phone. I don't know if that helps...
Regardless of "how" or "what", the answer is, you will always need some software running on the Android to capture the stream of data (images) and display it in the screen.
The point is, you don't have to write this software yourself. The obvious example that come to mind is use any DLNA compatible software, VLC for example, and have your python to generate a h264 stream and point VLC to it. Another way would be use some http service from your python and simply load it in the browser.
hope it helps.
I am trying to animate an object (mouth, eyebrows and some other expressions) to move in accordance with incoming sound. I was thinking of reading the sound's modulation, detecting changes, and animating the objects' movement accordingly.
Is this a good approach?
If not, how should I approach coding such a feature? Does this have to be done in OpenGL, or can I use the Android SDK and its animations?
I assume you mean frequency modulation, as opposed to amplitude modulation? Perhaps a combination of both? That might be pretty neat. I don't think there's any reason NOT to do this...
As to whether it's a "good" approach or not? I think it could be pretty cool.
The Android SDK has pretty robust animation support built in at this point. You should be able to do the animations using the SDK just fine. If you get to the point where you really want to go wild with it, you might need to step down a layer (e.g. to OpenGL) for performance.
Look at:
https://stackoverflow.com/questions/17163446/what-is-the-best-2d-game-engine-for-android/17166794#17166794
for pretty robust coverage of 2D games on Android, which might point you in the right direction.
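To give a feel for how far the stock animation framework gets you, here is a minimal sketch that scales an ImageView (assumed to hold a mouth graphic) toward an "openness" level between 0 and 1. The view, the source of the level, and the numeric constants are all assumptions, not anything from the original posts; call it from the UI thread.

    import android.animation.ObjectAnimator;
    import android.animation.PropertyValuesHolder;
    import android.view.animation.LinearInterpolator;
    import android.widget.ImageView;

    public class MouthAnimator {
        private final ImageView mouthView;   // hypothetical ImageView holding the mouth graphic

        public MouthAnimator(ImageView mouthView) {
            this.mouthView = mouthView;
        }

        /**
         * Animate the mouth toward an openness level between 0.0 (closed) and 1.0 (fully open).
         * Call this (on the UI thread) whenever a new loudness value arrives from the audio analysis.
         */
        public void setOpenness(float level) {
            float clamped = Math.max(0f, Math.min(1f, level));
            float scaleY = 0.2f + 0.8f * clamped;   // arbitrary mapping: never fully flat, never over 1.0
            ObjectAnimator anim = ObjectAnimator.ofPropertyValuesHolder(
                    mouthView,
                    PropertyValuesHolder.ofFloat("scaleY", scaleY));
            anim.setDuration(50);                   // short steps so the mouth tracks the audio closely
            anim.setInterpolator(new LinearInterpolator());
            anim.start();
        }
    }

If 50 ms steps turn out to be too coarse, a ValueAnimator driving a custom View's onDraw() gives finer control without dropping down to OpenGL.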
Very interesting idea. I think there are two parts to the problem: input (sound) and output (graphics). For the graphics side, I don't think you need OpenGL per se. You could check out this guide to game development: http://www.techrepublic.com/blog/software-engineer/the-abcs-of-android-game-development-prepare-the-canvas/2157/
I think it is relevant because it deals with moving graphics in real time.
For the sound, it would be useful to derive a single numeric value from the frequency and amplitude of the signal. For analyzing the sound, perhaps this library could help you out: http://code.google.com/p/musicg/
It can:
- Read amplitude-time domain data
- Read frequency-time domain data
Then, you could progress through the amplitude data of a sound in real time and update the graphics accordingly.
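For a live signal (rather than a recorded file), one rough sketch uses the platform's android.media.AudioRecord instead of musicg: read a buffer of samples, compute a root-mean-square loudness, and hand a normalized 0..1 level to the graphics code. The LevelListener interface and the normalization factor are made-up names/values, and RECORD_AUDIO permission is required.

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class AmplitudeReader implements Runnable {

        /** Hypothetical callback: receives a normalized 0..1 loudness level per buffer. */
        public interface LevelListener {
            void onLevel(float level);
        }

        private static final int SAMPLE_RATE = 44100;
        private final LevelListener listener;
        private volatile boolean running = true;

        public AmplitudeReader(LevelListener listener) {
            this.listener = listener;
        }

        @Override
        public void run() {
            int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            short[] buffer = new short[bufferSize];
            record.startRecording();
            while (running) {
                int read = record.read(buffer, 0, buffer.length);
                if (read <= 0) continue;
                // Root-mean-square of the samples gives a rough loudness estimate.
                double sum = 0;
                for (int i = 0; i < read; i++) {
                    sum += (double) buffer[i] * buffer[i];
                }
                float rms = (float) Math.sqrt(sum / read);
                listener.onLevel(Math.min(1f, rms / 8000f));   // 8000 is an arbitrary normalization factor
            }
            record.stop();
            record.release();
        }

        public void stop() {
            running = false;
        }
    }

Run it on a background thread and marshal the level back to the UI thread (e.g. via runOnUiThread) before touching any views.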
I am taking this crazy class on Mobile Programming. We have to do a final project, and I would like to do some sort of simple guitar processor app.
I wanted to do this in iOS, but it seems like the learning curve for iOS is too steep to be practical for a short class.
No offense to anyone, but Android is easier to program, at least for me. However, I am not sure whether you can even get guitar input from a jack (not the mic), do some processing on the input, and feed it to the output.
I'm aware of latency, which may or may not be a big deal for a class project.
Does anyone know if Android can do anything like this? If so, are there any articles or somewhere to start? I know that with iOS you can at least buy a jack adapter, and it seems to have tons of open-source processing code, but I can't seem to find anything for Android. All I have seen is "Ghetto Amp" for guitar stuff.
Any ideas?
Thanks
You may want to look at this project:
http://code.google.com/p/moonblink/wiki/Audalyzer
It should be pretty useful :)
However, the core class you will be using to capture and examine raw audio is android.media.AudioRecord: http://developer.android.com/reference/android/media/AudioRecord.html
I wrote a MIDI guitar for a college project a long time ago, in assembly for a Texas Instruments DSP. As long as you just played exactly one note, and were really careful about it, it could tell what you'd played.
Not much amplification was needed. In fact, I could get some notes even on an unamplified signal. I had oscilloscopes and a fairly general-purpose ADC to work with; you might have to amplify the signal... but if you do, be careful not to fry your input. Start low... and really, the more you can read up on the tolerances, the better.
Looks like they never made any hi-fi micro-USB 24-bit ADCs or wrote drivers for them. I guess there's no market. :) But if you're doing a school project and not producing the latest Muse album, get a path from your guitar to the headset line in:
http://androidforums.com/android-media/194740-questions-about-audio-recording-droid.html
I'd probably just sacrifice a cheap or broken headset to get the headset plug. (Maybe they sell appropriate tips at Radio Shack, but I've learned not to assume such things anymore :-/ ) After building a cable, I'd feed it an amplified signal from the guitar so I could control the gain level to whatever I wanted.
Depending on your latency requirements you can use either Java or the NDK. Note this answer:
Need help about sound processing
(I have one of the original Droids sitting around in a drawer, I'm sure I could use it for something but I just haven't figured out what!)
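In plain Java (no NDK), a minimal pass-through chain looks roughly like the sketch below: read from the input, apply a crude gain-plus-clipping "distortion", and write to an AudioTrack. The gain factor is arbitrary, the latency will be very audible, and this is only a sketch of the shape of the thing, not a polished effect; RECORD_AUDIO permission is required. When a wired headset with a mic is plugged in, the MIC source normally routes from that input, which is what the cable hack above relies on.

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioRecord;
    import android.media.AudioTrack;
    import android.media.MediaRecorder;

    public class GuitarLoop implements Runnable {

        private static final int SAMPLE_RATE = 44100;
        private volatile boolean running = true;

        @Override
        public void run() {
            int inSize = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            int outSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

            AudioRecord in = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, inSize);
            AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, outSize,
                    AudioTrack.MODE_STREAM);

            short[] buffer = new short[inSize];
            in.startRecording();
            out.play();
            while (running) {
                int read = in.read(buffer, 0, buffer.length);
                for (int i = 0; i < read; i++) {
                    // Crude "distortion": boost the signal and hard-clip it to 16-bit range.
                    int sample = buffer[i] * 4;   // arbitrary gain of 4
                    buffer[i] = (short) Math.max(Short.MIN_VALUE,
                                     Math.min(Short.MAX_VALUE, sample));
                }
                out.write(buffer, 0, read);
            }
            in.stop();  in.release();
            out.stop(); out.release();
        }

        public void stop() { running = false; }
    }

If the round-trip delay is unacceptable, that is the point where dropping to the NDK (OpenSL ES) starts to pay off.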
I noticed that Flash allows you to insert cues into a video file (FLV). Is something like this possible on Android? I have a video that plays locally in my Android app, and I would like to insert cues into the video that give me callbacks when a certain portion of the video has been reached. If this is not possible, are there any other methods to do something similar? I have to be pretty precise about where the cue is located.
Thanks
Note:
I just found this same question on Stack Overflow. Can anyone verify that this is still the case (that it is not possible, other than by polling the video continually)? I did know of that approach, but it's not the most accurate way if you need to be precise and stitch dynamic pieces of video together seamlessly.
Android VideoView - Detect point of time in video
I'm working on this as well, along with a kind of cue/action script. For tutorials and instructional videos I need to keep track of the current position in order to serve, for example, questions and navigation menus appropriate for that point in time. It's easy when it's sufficient to act in response to user input, but otherwise firing up a thread to poll at some reasonable interval is the way to go. The accuracy might be acceptable, and it can be calibrated against the actual reported position.
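For what it's worth, the polling approach that keeps coming up looks roughly like the sketch below: a Handler re-posts a Runnable every ~100 ms, compares VideoView.getCurrentPosition() against a sorted list of cue times, and fires a callback for each cue that has been passed. The CueListener interface is a made-up name, and the accuracy is limited by the polling interval, so this will not give frame-accurate stitching.

    import android.os.Handler;
    import android.widget.VideoView;
    import java.util.List;

    public class CuePoller {

        /** Hypothetical callback fired when playback passes a cue point. */
        public interface CueListener {
            void onCue(int cueMs);
        }

        private static final int POLL_INTERVAL_MS = 100;

        private final VideoView videoView;
        private final List<Integer> cuesMs;   // cue positions in milliseconds, sorted ascending
        private final CueListener listener;
        private final Handler handler = new Handler();
        private int nextCue = 0;

        public CuePoller(VideoView videoView, List<Integer> cuesMs, CueListener listener) {
            this.videoView = videoView;
            this.cuesMs = cuesMs;
            this.listener = listener;
        }

        private final Runnable poll = new Runnable() {
            @Override
            public void run() {
                int position = videoView.getCurrentPosition();
                // Fire every cue we have passed since the last poll.
                while (nextCue < cuesMs.size() && position >= cuesMs.get(nextCue)) {
                    listener.onCue(cuesMs.get(nextCue));
                    nextCue++;
                }
                handler.postDelayed(this, POLL_INTERVAL_MS);
            }
        };

        public void start() { handler.post(poll); }

        public void stop()  { handler.removeCallbacks(poll); }
    }

A tighter interval improves accuracy at the cost of more wakeups, which matches the conclusion in the linked answer: without real embedded cue support, polling is the practical option.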