Fetching YouTube video list in Titanium - android

I am a newbie in Titanium and am trying to fetch the video list of a particular channel from YouTube using THIS tutorial.
The problem is, every time I get the "No videos were found for this search" message (used inside the catch block), and from the Chrome console I get the exception message:
"No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin localhost:8020 is therefore not allowed access. Failed to load resource: the server responded with a status of 503 (Service Unavailable)"
Can anyone point out what the solution to this problem might be? From some googling, I see that the problem is not inside the code; it's a server-side problem. So how can I get around this YouTube response problem?
I also checked that the line var doc = this.responseXML.documentElement; always evaluates to null, which is why no videos are found. I used this instead:
var doc;
if (!this.responseXML) {
    // not XML -- convert the raw response text to XML first
    doc = Titanium.XML.parseString(this.responseText).documentElement;
} else {
    // already XML -- just use the parsed document
    doc = this.responseXML.documentElement;
}
Still, doc is null every time! Thanks in advance for any suggestions.

Well, I solved the problem. It was in fact a browser issue: when I tested on the desktop, the browser used localhost:8020 as the origin, and YouTube didn't respond for that address.
Later, I tested it on a real Android device and voila, it works! var doc = this.responseXML.documentElement clearly contains the responseXML, and the video information is then extracted with var items = doc.getElementsByTagName("entry").
Hope that helps someone someday!

Related

Unable to find exact Class name for finding Comments of a URL using jsoup

I am working in Android and using Jsoup for crawling some data from the internet. I am unable to find the exact class name under which the comments lie in the code below. I tried disqus_thread, dsq-content, ul-dsq-comments and dsq-comment-body by going to the source page of the URL, but none of them returned the comments.
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public static void main(String[] args) {
    Document d;
    Elements lin = null;
    String url = "http://blogs.tribune.com.pk/story/39090/i-hate-materialistic-people-beta-but-i-love-my-designer-clothes/";
    try {
        d = Jsoup.connect(url).timeout(20 * 1000).userAgent("Chrome").get();
        lin = d.getElementsByClass("dsq-comment-body");
        System.out.println(lin);
    } catch (IOException e) {
        e.printStackTrace();
        return; // lin is still null here, so don't fall through to the loop
    }
    int i = 0;
    for (Element l : lin) {
        System.out.println(i + " : " + l.text());
        i++;
    }
}
That's because the HTML that makes up the comments is generated dynamically, after the page has been loaded, using JavaScript. When the page is first loaded the comment HTML doesn't exist yet, so Jsoup cannot retrieve it.
To get hold of the comments you have three options:
1) Use a web crawler that can execute JavaScript. Selenium WebDriver (http://www.seleniumhq.org/projects/webdriver/) and PhantomJS (http://phantomjs.org/) are popular options here (see the sketch after this list). The former works by hooking into a browser implementation (e.g. Mozilla Firefox) and driving the browser programmatically. The latter does not open a browser and executes the JavaScript using WebKit instead.
2) Intercept the network traffic when opening the site (your browser's built-in network tab should do) and find the request that fetches the comments. Make this request yourself and extract the relevant data in your application. Bear in mind that this will not work if the server serving the comments requires some kind of authentication.
3) If the comments are served by a specialized provider with an openly accessible API, it might be possible to extract them through that API. The site you linked to uses Disqus to handle the comment section, so it might be possible to hook into their API and fetch the comments that way.
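For option 1, here is a minimal sketch using Selenium WebDriver with Firefox. The 15-second implicit wait is an arbitrary choice, and the dsq-comment-body class name is simply carried over from the question:

import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CommentScraper {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        // give the Disqus JavaScript time to inject the comment HTML
        driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);
        driver.get("http://blogs.tribune.com.pk/story/39090/i-hate-materialistic-people-beta-but-i-love-my-designer-clothes/");
        // if Disqus renders the thread inside an iframe, you would first need
        // driver.switchTo().frame(...) before the elements become reachable
        List<WebElement> comments = driver.findElements(By.className("dsq-comment-body"));
        int i = 0;
        for (WebElement comment : comments) {
            System.out.println(i++ + " : " + comment.getText());
        }
        driver.quit();
    }
}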

How to check if an image/image link is present in text in android

I am creating an Android application. I get the response from the server in JSON format and I am parsing it. The content may contain an image or video link. How can I check whether an image or video link is present in the content, then download the corresponding image or video and display it in my application? I know how to download and display images, but I am not aware of how to check for the link.
My response is in the following format:
<p class='para-inside-post'> cool panda <a class='handler_name' href='/#12'>#12</a> </p><img class="post-img-tag" postcreatorid="56332edfad441746cbd15000" src="https://image.jpg" height="430px" width="430px">
I am parsing the text as shown below:
postContentSplit = Html.fromHtml(content).toString();
Similarly, how can I do the same for images and videos?
All suggestions are welcome. Please help me resolve this issue.
Use Patterns to check URL validity:
Patterns.WEB_URL.matcher(potentialUrl).matches()
It will return true if the URL is valid and false if it is invalid.
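Putting the two together, here is a minimal sketch that parses the HTML fragment with Jsoup and validates each img src with Patterns; the class and method names are placeholders, and a similar loop over doc.select("a[href]") or doc.select("video[src]") would cover video links:

import android.util.Patterns;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class MediaLinkChecker {

    // Returns the first valid image URL found in the HTML content, or null.
    public static String findImageUrl(String content) {
        Document doc = Jsoup.parse(content);
        for (Element img : doc.select("img[src]")) {
            String src = img.attr("src");
            // Patterns.WEB_URL is a prebuilt Android regex for web URLs
            if (Patterns.WEB_URL.matcher(src).matches()) {
                return src; // hand this to your image downloader
            }
        }
        return null;
    }
}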

Selecting paragraph tagged with itemprop=recipeInstructions using jsoup on android

I've tried my query in http://try.jsoup.org/ and it works fine. However, when I try it on Android (4.2.2), it returns a zero-sized array.
The query I want is [itemprop=recipeInstructions].
The website I'm testing on is http://www.foodnetwork.co.uk/recipes/real-meatballs-and-spaghetti-674.html
My Android code looks like:
Document doc = Jsoup.connect("http://www.foodnetwork.co.uk/recipes/real-meatballs-and-spaghetti-674.html").get();
Elements recipe = doc.select("[itemprop=recipeInstructions]");
// recipe is a zero sized array :(
I'm linking against jsoup-1.7.3.jar
My Android code works fine on the website http://www.foodnetwork.com/recipes/ina-garten/broccoli-and-bow-ties-recipe.html, so I suspect it's a bug in the HTML, or in how Jsoup parses the HTML of the first site.
Try adding a user agent:
Document doc = Jsoup.connect(url).userAgent("Mozilla/4.0").get();
The server may return a different page depending on the browser identification.
Try something like this:
Elements recipe = doc.select("p[itemprop=recipeInstructions]");
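Combining the two suggestions, a minimal sketch; the user agent string and the timeout value are arbitrary assumptions:

Document doc = Jsoup.connect("http://www.foodnetwork.co.uk/recipes/real-meatballs-and-spaghetti-674.html")
        .userAgent("Mozilla/5.0")  // identify as a desktop browser
        .timeout(10 * 1000)        // assumed 10-second timeout
        .get();
// match any element carrying the microdata attribute, not just <p> tags
Elements recipe = doc.select("[itemprop=recipeInstructions]");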

retrieving images from Picasa on android

Hello, I am trying to get images from a Picasa album in an Android app. I tried creating an album and it was created successfully; however, something goes wrong when I try to get the images in the album. Please find my code below:
PicasawebService myService = new PicasawebService("myApp");
myService.setUserCredentials("username", "password");
URL url = new URL("https://picasaweb.google.com/data/feed/api/user/myusername/albumid/myalbumid");
AlbumFeed feed = myService.getFeed(url, AlbumFeed.class);
List<MediaContent> l;
for (PhotoEntry photo : feed.getPhotoEntries()) {
    l = photo.getMediaContents();
    return l.get(0).getUrl().toString();
}
The for loop is not being entered, but when I check the size of the feed, it shows me the correct number of images in the album. I got the code from the Google developers guide: (https://developers.google.com/picasa-web/docs/2.0/developers_guide_java#listalbums)
NB: I tried the exact same code in a desktop app and it worked perfectly.
Thank you
EDIT: The problem is that the AlbumFeed is returning entries of class GphotoEntry and not PhotoEntry. I searched online and the solution was to include gdata-photos-meta.jar in my libraries, but it was already included... Any idea?
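For what it's worth, a workaround sometimes suggested for this symptom is to iterate the raw entries and adapt each one to a PhotoEntry yourself instead of relying on getPhotoEntries(). This is only a sketch, and it assumes the PhotoEntry adaptation constructor from the desktop gdata library behaves the same here:

// adapt each generic entry to a PhotoEntry manually
for (GphotoEntry entry : feed.getEntries()) {
    PhotoEntry photo = new PhotoEntry(entry); // adaptation constructor
    List<MediaContent> contents = photo.getMediaContents();
    if (!contents.isEmpty()) {
        return contents.get(0).getUrl().toString();
    }
}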
Google's GData library doesn't work with Android. You should use the Google APIs Client Library instead; they say there is a working sample for Picasa on Android.

Can not Download video from youtube

I am using http://www.youtube.com/get_video_info?video_id=*VIDEO_ID*, and from the data I get back I parse the url_encoded_fmt_stream_map, which gives me URLs like
http://blah.youtube.com/videoplayback?blah
Earlier I could download the videos using these URLs, but now I am not able to download them anymore. Does anyone have a clue why?
Here is the code to return the video URLs:
All credit goes to youtube-dl; I only copied the part of their script that you need for extracting the URLs.
import socket
import httplib
import urllib2
from urlparse import parse_qs

video_id = "yourvideoid"
for el_type in ['&el=embedded', '&el=detailpage', '&el=vevo', '']:
    video_info_url = ('http://www.youtube.com/get_video_info?&video_id=%s%s&ps=default&eurl=&gl=US&hl=en'
                      % (video_id, el_type))
    request = urllib2.Request(video_info_url)
    try:
        video_info_webpage = urllib2.urlopen(request).read()
        video_info = parse_qs(video_info_webpage)
        if 'token' in video_info:
            break
    except (urllib2.URLError, httplib.HTTPException, socket.error), err:
        print('ERROR: unable to download video info webpage: %s' % str(err))

# split the stream map into per-format entries and keep only usable ones
url_data_strs = video_info['url_encoded_fmt_stream_map'][0].split(',')
url_data = [parse_qs(uds) for uds in url_data_strs]
url_data = filter(lambda ud: 'itag' in ud and 'url' in ud, url_data)
url_map = dict((ud['itag'][0], ud['url'][0] + '&signature=' + ud['sig'][0]) for ud in url_data)
print(str(url_map))
No clue as to why, but it seems to be affecting all downloader extensions, so it's almost certainly on YouTube's side. I'm assuming it has something to do with intellectual property. YouTube is "intended" to be a streaming site, not a video file repository.
Shutaro at addons.mozilla.com has discovered a workaround that entails forcing YouTube to revert to delivering the older .webm format.
I am having the same problem, and from what I understand from someone else who fixed it, we need to add a signature to the video link (the mp4 or 3gp links that are returned)... I'm looking into this and will update. I hope you can do the same if you discover anything.
