I am making an application for Android, and an element of its functionality is to return results from an online search of a library's catalogue. The application needs to display the results of the search, which is carried out by way of a custom HTML form, in a manner in keeping with the rest of the application. That is, the results of the search need to be parsed and the useful elements displayed. I was just wondering if/how this could be achieved in Android?
You would use an HTML parser. One that I use and that works very well is Jsoup.
That is where you will need to begin with parsing HTML. Apache Jericho is another good one.
You would retrieve the HTML document as a DOM, and use Jsoup's select() method to pick out any elements you would like, whether by tag, id, or class.
Solution
Use the Jsoup.connect(String url) method:
Document doc = Jsoup.connect("http://example.com/").get();
This connects to the page at the given URL, fetches it, and stores it as the Document doc. You can then read from it using the select() method.
Description
The connect(String url) method creates a new Connection, and get()
fetches and parses an HTML file. If an error occurs while fetching the
URL, it will throw an IOException, which you should handle
appropriately.
The Connection interface is designed for method chaining to build
specific requests:
Document doc = Jsoup.connect("http://example.com")
        .userAgent("Mozilla")
        .timeout(3000)
        .get();
If you read through the documentation on Jsoup you should be able to achieve this.
EDIT: Here is how you would use the selector method
// Once the Document is retrieved as above, use these selector methods to
// extract the data you want by tag, id, or CSS class
Elements links = doc.select("a[href]");                // a with href
Elements pngs = doc.select("img[src$=.png]");          // img with src ending .png
Element masthead = doc.select("div.masthead").first(); // div with class=masthead
Elements resultLinks = doc.select("h3.r > a");         // direct a after h3
EDIT: Using Jsoup you could get attributes, text, and HTML like this:
Document doc = Jsoup.connect("http://example.com").get();
Element link = doc.select("a").first();
String text = doc.body().text();      // e.g. "An example link"
String linkHref = link.attr("href");  // e.g. "http://example.com/"
String linkText = link.text();        // e.g. "example"
String linkOuterH = link.outerHtml(); // e.g. "<a href="http://example.com/"><b>example</b></a>"
String linkInnerH = link.html();      // e.g. "<b>example</b>"
You can use XmlPullParser for parsing XML.
For example, refer to http://developer.android.com/reference/org/xmlpull/v1/XmlPullParser.html
Since the search results are HTML, and HTML is a markup language, you can use Android's XmlPullParser to parse the results. Note, though, that XmlPullParser expects well-formed markup, so this only works reliably if the page is valid XHTML.
Related
I am making an Android app that displays stored HTML data using a WebView. Now, the problem I am trying to overcome is how to ignore HTML/CSS tags and elements when searching for some user-input string. My DB is already 110 MB, and I think adding another field containing only the text with no HTML would just add more size to the DB. Regex would be expensive too and may not be reliable.
Is there any other way to do it?
Maybe you can do additional filtering in your program on the queried records. You can use an HTML parser like Jsoup to strip HTML tags, then search in the remaining text. A simple Java example with Jsoup:
List<String> records = ... // your queried records - potential results
List<String> results = new ArrayList<String>();
for (String r : records) {
    Document d = Jsoup.parse(r);  // parse HTML
    String text = d.text();       // extract text
    if (text.contains(searchTerm)) { // or do your search here
        results.add(r);
    }
}
return results; // you got real results here
It may not be the best solution, but it is an option. I think it's expensive too, but more reliable than regular expressions (which you are trying to avoid).
Update: the regex way
I think the only way to strip HTML tags while querying is to use a regex in SQLite. For example, the following pattern should match the search term only when it appears outside HTML tags:
(^|>)[^<]*(searchterm)[^<]*(<|$)
In the following example text it will match only the 1st, 3rd and 4th searchterm and not the 2nd:
searchterm <tag searchterm> searchterm </tag> searchterm
You can see it in action here.
In SQLite you can use regular expressions this way (note that SQLite defines the REGEXP operator but does not ship a default regexp() implementation, so on Android you would normally have to apply the pattern in Java instead):
WHERE column-name REGEXP 'regular-expression'
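Since the REGEXP operator is usually unavailable on Android, the same pattern from above can be applied in plain Java with java.util.regex after querying. A minimal sketch (the class and method names are just illustrative; Pattern.quote protects against metacharacters in the search term):

```java
import java.util.regex.Pattern;

public class TagAwareSearch {

    // True if `term` occurs outside HTML tags, using the pattern
    // from above: (^|>)[^<]*(term)[^<]*(<|$)
    static boolean containsOutsideTags(String html, String term) {
        Pattern p = Pattern.compile(
                "(^|>)[^<]*(" + Pattern.quote(term) + ")[^<]*(<|$)");
        return p.matcher(html).find();
    }

    public static void main(String[] args) {
        String sample = "searchterm <tag searchterm> searchterm </tag> searchterm";
        System.out.println(containsOutsideTags(sample, "searchterm"));             // true
        System.out.println(containsOutsideTags("<tag searchterm>", "searchterm")); // false
    }
}
```

Running this over the queried records in a loop gives you the same tag-aware filtering the SQL version would, at the cost of pulling the rows into Java first.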
OK, what I want to achieve is to write each result Jsoup fetches into a separate String. Is this somehow possible? I can get the first and last with a function, but then the rest is lost.
Right now I have this in my doInBackground():
// Connect to the web site
Document document = Jsoup.connect(url).get();
// Using Elements to get the Meta data
Elements titleElement = document.select("h2[property=schema:name]");
// Locate the content attribute
date1 = titleElement.toString();
Log.e("Date", String.valueOf(Html.fromHtml(date1)));
With this I get a list of results, which is nice, but I'd like to have every result in a separate String.
Thanks in advance; if you need anything more, please ask :)
I read through the documentation carefully again and found this:
elements.eq(n).text()
The n defines which position to get, and .text() strips all the HTML and returns readable text. Looping n from 0 to the size of the Elements collection therefore gives you each result in its own String.
This is a simple scenario which I have tried multiple times, but I do not receive the data I am after. I am using an imported library called Jsoup, which parses HTML.
I collect the webpage html document:
// url - The URL of the HTML document:
Document document = Jsoup.connect(url).get();
From there, I know you parse data from tags. I want the data inside this tag:
<pre>
Example scenario:
<pre> This is the String data inside this tag I wish to collect </pre>
If anyone could help me, I would be grateful (-:
Thanks all (-:
Firstly, you should check the source code of the URL you are accessing to confirm that the pre tag actually exists.
Then you can use Jsoup's select method to extract the pre tag. The sample code looks like this:
Document doc = Jsoup.connect(url).get();
Elements eles = doc.select("pre");
for (Element ele : eles) {
    System.out.println(ele.text()); // ele.text() gives just the String inside the tag
}
I want to show in my Android app a single item from an RSS feed each day.
Now, I don't want to build a complex RSS reader with a ListView and so on; I just want to get a single item each day. Is that possible?
Thanks
This is a very general question, but I'll do my best to respond to it. You'll need to do three things: get the XML, parse the XML, and then display it.
For getting the XML, I recommend using an AsyncTaskLoader and an HttpURLConnection to download it.
For parsing the XML you have a few options. You can use a SAX parser, which is the way I like to do it; or, if you prefer, you can use a DOM parser instead. There are tutorials for both.
Once it's been parsed, you can just set the contents of a TextView to whatever you parsed.
I don't have an example of all this because it is basically an entire Android app. But with what I've given you, you should have enough to get started and build one on your own.
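To make the SAX step above concrete, here is a minimal sketch using only the JDK's javax.xml.parsers. The feed string is an inline stand-in for whatever your HttpURLConnection downloaded; the handler stops collecting after the first item's title, which is exactly the single entry you want to display in a TextView:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class FirstItemTitle {

    // Returns the <title> of the first <item> in an RSS document.
    static String firstItemTitle(String rssXml) throws Exception {
        final StringBuilder title = new StringBuilder();
        DefaultHandler handler = new DefaultHandler() {
            boolean inItem, inTitle, done;
            @Override public void startElement(String uri, String local,
                    String qName, Attributes attrs) {
                if ("item".equals(qName)) inItem = true;
                if (inItem && "title".equals(qName)) inTitle = true;
            }
            @Override public void characters(char[] ch, int start, int len) {
                if (inTitle && !done) title.append(ch, start, len);
            }
            @Override public void endElement(String uri, String local, String qName) {
                if (inTitle && "title".equals(qName)) { inTitle = false; done = true; }
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(rssXml.getBytes("UTF-8")), handler);
        return title.toString();
    }

    public static void main(String[] args) throws Exception {
        String feed = "<rss><channel>"
                + "<item><title>Today's item</title></item>"
                + "<item><title>Yesterday's item</title></item>"
                + "</channel></rss>";
        System.out.println(firstItemTitle(feed)); // Today's item
    }
}
```

The same handler works unchanged on Android, since the SAX classes are part of the platform.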
Fetching the feed over HTTP and parsing the XML can easily be done with a third-party library.
Here's code that I used to fetch the Picasa featured-photos feed, and parse each RSS item to extract the title and the content URL.
public void xml_ajax() {
    String url = "https://picasaweb.google.com/data/feed/base/featured?max-results=8";
    aq.ajax(url, XmlDom.class, this, "picasaCb");
}

public void picasaCb(String url, XmlDom xml, AjaxStatus status) {
    showResult(xml, status);
    if (xml == null) return;

    List<XmlDom> entries = xml.tags("entry");
    List<String> titles = new ArrayList<String>();
    String imageUrl = null;

    for (XmlDom entry : entries) {
        titles.add(entry.text("title"));
        imageUrl = entry.tag("content", "type", "image/jpeg").attr("src");
    }

    showTextResult(titles);
}
I also blogged about this here: http://blog.androidquery.com/2011/09/simpler-and-easier-xml-parsing-in.html
I need to extract information from an unstructured web page in Android. The information I want is embedded in a table that doesn't have an id.
<table>
<tr><td>Description</td><td></td><td>I want this field next to the description cell</td></tr>
</table>
Should I use
Pattern Matching?
Use BufferedReader to extract the information?
Or is there a faster way to get that information?
I think in this case it makes no sense to look for a fast way to extract the information, as there is virtually no performance difference between the methods already suggested in other answers when you compare it to the time it takes to download the HTML.
So assuming that by fastest you mean the most convenient, readable and maintainable code, I suggest you use a DocumentBuilder to parse the relevant HTML and extract data using XPath expressions:
Document doc = DocumentBuilderFactory.newInstance()
.newDocumentBuilder().parse(new InputSource(new StringReader(html)));
XPathExpression xpath = XPathFactory.newInstance()
.newXPath().compile("//td[text()=\"Description\"]/following-sibling::td[2]");
String result = (String) xpath.evaluate(doc, XPathConstants.STRING);
If you happen to retrieve invalid HTML, I recommend isolating the relevant portion (e.g. using substring(indexOf("<table")..)) and, if necessary, correcting remaining HTML errors with String operations before parsing. If this gets too complex, however (i.e. very bad HTML), just go with the hacky pattern-matching approach suggested in other answers.
Remarks
XPath is available since API level 8 (Android 2.2). If you develop for lower API levels, you can use DOM methods and conditionals to navigate to the node you want to extract.
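On those lower API levels, the same lookup can be done by walking the DOM directly, with no XPath involved. A sketch against the table markup from the question (the class and method names are illustrative; the i + 2 offset mirrors the following-sibling::td[2] step in the XPath version):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class TableLookup {

    // Finds the <td> whose text is "Description" and returns the text
    // of the cell two positions later in document order.
    static String fieldAfterDescription(String html) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(html)));
        NodeList cells = doc.getElementsByTagName("td");
        for (int i = 0; i < cells.getLength(); i++) {
            if ("Description".equals(cells.item(i).getTextContent())
                    && i + 2 < cells.getLength()) {
                return cells.item(i + 2).getTextContent();
            }
        }
        return null; // marker cell not found
    }

    public static void main(String[] args) throws Exception {
        String html = "<table><tr><td>Description</td><td></td>"
                + "<td>I want this field</td></tr></table>";
        System.out.println(fieldAfterDescription(html)); // I want this field
    }
}
```

This assumes the HTML fragment is well-formed enough for the XML parser, which is the same assumption the XPath version makes.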
The fastest way is to parse the specific information yourself. You seem to know the HTML structure precisely beforehand. BufferedReader, String, and StringBuilder methods should suffice. Here's a kickoff example which displays the first paragraph of your own question:
public static void main(String... args) throws Exception {
    URL url = new URL("http://stackoverflow.com/questions/2971155");
    BufferedReader reader = null;
    StringBuilder builder = new StringBuilder();

    try {
        reader = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        for (String line; (line = reader.readLine()) != null;) {
            builder.append(line.trim());
        }
    } finally {
        if (reader != null) try { reader.close(); } catch (IOException logOrIgnore) {}
    }

    String start = "<div class=\"post-text\"><p>";
    String end = "</p>";
    String part = builder.substring(builder.indexOf(start) + start.length());
    String question = part.substring(0, part.indexOf(end));
    System.out.println(question);
}
Parsing is in practically all cases faster than pattern matching. Pattern matching is easier, but there is a certain risk that it yields unexpected results, certainly when using complex regex patterns.
You can also consider using a more flexible third-party HTML parser instead of writing one yourself. It will not be as fast as parsing it yourself with the structure known beforehand, but it will be more concise and flexible. With decent HTML parsers the difference in speed is pretty negligible. I strongly recommend Jsoup for this. It supports jQuery-like CSS selectors. Extracting the first paragraph of your question would then be as easy as:
public static void main(String... args) throws Exception {
    Document document = Jsoup.connect("http://stackoverflow.com/questions/2971155").get();
    String question = document.select("#question .post-text p").first().text();
    System.out.println(question);
}
It's unclear what web page you're talking about, so I can't give a more detailed example of how to select the specific information from that page using Jsoup. If you still can't figure it out on your own using Jsoup and CSS selectors, feel free to post the URL in a comment and I'll suggest how to do it.
When you scrape an HTML web page, there are two things you can do: use a regex, or use an HTML parser.
Using a regex is not generally recommended, because it can easily produce wrong matches at runtime.
Using an HTML parser is more involved, and you cannot always be sure the proper output will come out; in my experience it can also throw runtime exceptions on malformed pages.
So it is better to convert the response of the URL to an XML file; XML parsing is easy and effective.
Why don't you just write
int start = data.indexOf("Description");
and then take the required substring?
Why don't you create a script that does the scraping with cURL and Simple HTML DOM Parser and just grab the value you need from that page? These tools work with PHP, but similar tools exist for any language you need.
One way of doing this is to put the HTML into a String and then manually search and parse through it. If you know that the tags come in a specific order, you should be able to crawl through it and find the data. This, however, is kinda sloppy, so it's a question of: do you want it to work now, or work well?
int position = html.indexOf("<table>"); // html being the String holding the HTML code
// each nested indexOf has to start past the previous match (+ 1) to reach a later cell
String field = html.substring(
        html.indexOf("<td>", html.indexOf("<td>", position) + 1) + 4,
        html.indexOf("</td>", html.indexOf("</td>", position) + 1));
Like I said... really sloppy. But if you're only doing this once and you need it to work, this just might do the trick.
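If you do go this route, wrapping the indexOf/substring juggling in one small helper keeps the sloppiness contained. A sketch (the helper name and the table markup are just illustrative):

```java
public class Scrape {

    // Returns the text between the first occurrence of `open` (at or after
    // index `from`) and the next occurrence of `close`, or null if missing.
    static String between(String html, String open, String close, int from) {
        int start = html.indexOf(open, from);
        if (start < 0) return null;
        start += open.length();
        int end = html.indexOf(close, start);
        return end < 0 ? null : html.substring(start, end);
    }

    public static void main(String[] args) {
        String html = "<table><tr><td>Description</td><td></td>"
                + "<td>wanted</td></tr></table>";
        int pos = html.indexOf("<table>");
        String first = between(html, "<td>", "</td>", pos); // "Description"
        // skip past the first two cell closers to reach the third <td>
        String third = between(html, "<td>", "</td>",
                html.indexOf("</td>", html.indexOf("</td>", pos) + 1));
        System.out.println(first + " / " + third); // Description / wanted
    }
}
```

It is still brittle against markup changes, which is exactly the trade-off the answer above describes.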