I'm facing a particular problem while parsing XML: the parser works fine until it encounters a <Url/> tag which contains no value. I have already added a test in my code:
static final String ImageHotel = "Url";
...
else if (name.equalsIgnoreCase("ImageHotel")) {
    message.setHotelImage(property.getFirstChild().getNodeValue());
    if (!marchand.getImgHtlUrl().equalsIgnoreCase("")) {
        message.setHotelImageLink(new URL(marchand.getImgHtlUrl() + property.getFirstChild().getNodeValue()));
    } else {
        message.setHotelImageLink(new URL("http://localhost/noimage.jpg"));
    }
}
This keeps throwing an error even when I tried to bypass it by surrounding the call with try/catch.
Any help is welcome.
Thanks.
Houssem.
Here is a reference parser for this kind of file; try it with your own data and read the tag values from it.
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.Vector;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
public class NewsParsing {
NewsBeen Objnewsbeen;
Vector<NewsBeen> vectParse;
public NewsParsing() {
System.out.println("Constructor is calling Now ...");
try {
vectParse = new Vector<NewsBeen>();
// http://www.npr.org/rss/rss.php?id=1001
URL url = new URL(
"http://www.npr.org/rss/rss.php?id=1001");
URLConnection con = url.openConnection();
System.out.println("Connection is : " + con);
BufferedReader reader = new BufferedReader(new InputStreamReader(
con.getInputStream()));
System.out.println("Reader :" + reader);
String inputLine;
String fullStr = "";
while ((inputLine = reader.readLine()) != null)
fullStr = fullStr.concat(inputLine + "\n");
InputStream istream = url.openStream();
DocumentBuilder builder = DocumentBuilderFactory.newInstance()
.newDocumentBuilder();
System.out.println("Builder : " + builder);
Document doc = builder.parse(istream);
System.out.println("Doc is : " + doc);
doc.getDocumentElement().normalize();
System.out.println("After Normlize : " + doc);
System.out.println("Root is : "
+ doc.getDocumentElement().getNodeName());
System.out
.println("-------------------------------------------------------------------------------------------------------------");
Element element = doc.getDocumentElement();
parseFile(element);
for (int index1 = 0; index1 < vectParse.size(); index1++) {
NewsBeen ObjNB = (NewsBeen) vectParse.get(index1);
System.out.println("Item No : " + index1);
System.out.println();
System.out.println("Title is : " + ObjNB.title);
System.out.println("Description is : " + ObjNB.description);
System.out.println("Pubdate is : " + ObjNB.pubdate);
System.out.println("Link is : " + ObjNB.link);
System.out.println("Guid is : " + ObjNB.guid);
System.out.println();
System.out
.println("-------------------------------------------------------------------------------------------------------------");
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void parseFile(Node node) {
NodeList nodelist = node.getChildNodes();
for (int index = 0; index < nodelist.getLength(); index++) {
Node nodefromList = nodelist.item(index);
if (nodefromList.getNodeType() == Node.ELEMENT_NODE) {
// System.out.println("node.getNodeType() : " +
// nodefromList.getNodeType());
// System.out.println("Node is : " + node.getNodeName());
if (nodefromList.getNodeName().equalsIgnoreCase("item")) {
Objnewsbeen = new NewsBeen();
vectParse.addElement(Objnewsbeen);
}
if (nodefromList.hasChildNodes()) {
if (nodefromList.getChildNodes().item(0).getNodeName()
.equals("#text")) {
if (!nodefromList.getChildNodes().item(0)
.getNodeValue().trim().equals("")
&& Objnewsbeen != null)
if (nodefromList.getNodeName().equalsIgnoreCase(
"title")) {
Objnewsbeen.title = nodefromList
.getChildNodes().item(0).getNodeValue();
} else if (nodefromList.getNodeName()
.equalsIgnoreCase("description")) {
Objnewsbeen.description = nodefromList
.getChildNodes().item(0).getNodeValue();
} else if (nodefromList.getNodeName()
.equalsIgnoreCase("pubDate")) {
Objnewsbeen.pubdate = nodefromList
.getChildNodes().item(0).getNodeValue();
} else if (nodefromList.getNodeName()
.equalsIgnoreCase("link")) {
Objnewsbeen.link = nodefromList.getChildNodes()
.item(0).getNodeValue();
} else if (nodefromList.getNodeName()
.equalsIgnoreCase("guid")) {
Objnewsbeen.guid = nodefromList.getChildNodes()
.item(0).getNodeValue();
} else {
// System.out.println();
}
}
parseFile(nodefromList);
}
}
}
}
public static void main(String[] args) {
new NewsParsing();
}
}
The NewsBeen class is:
public class NewsBeen {
public String title;
public String description;
public String pubdate;
public String link;
public String guid;
}
I said above that surrounding the call with try/catch did not work; now it does, because I had to catch NullPointerException specifically instead of a plain Exception, like this:
try {
    message.setHotelImage(property.getFirstChild().getNodeValue());
} catch (NullPointerException nEx) {
    marchand.setImgHtlUrl("");
}
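For reference, the same guard can also be written without relying on the exception; this is just a sketch, assuming the same property, message and marchand objects as in the snippet above (Node here is org.w3c.dom.Node):
// Sketch: check for the empty <Url/> element instead of catching the NPE
Node first = property.getFirstChild();
if (first != null && first.getNodeValue() != null) {
    message.setHotelImage(first.getNodeValue());
} else {
    marchand.setImgHtlUrl("");
}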
Thank you guys, and sorry for the false alarm ;)
I want to open EPUB files either in a WebView or by some other means, but clickable content, images and videos should all work.
- I have tried using a WebView, but images can't be displayed in it.
textView.setText(Html.fromHtml(data, new Html.ImageGetter() {
    @Override
    public Drawable getDrawable(String source) {
        String imageAsStr = source.substring(source.indexOf(";base64,") + 8);
        byte[] imageAsBytes = Base64.decode(imageAsStr, Base64.DEFAULT);
        Bitmap imageAsBitmap = BitmapFactory.decodeByteArray(imageAsBytes, 0, imageAsBytes.length);
        int imageWidthStartPx = (pxScreenWidth - imageAsBitmap.getWidth()) / 2;
        int imageWidthEndPx = pxScreenWidth - imageWidthStartPx;
        Drawable imageAsDrawable = new BitmapDrawable(getResources(), imageAsBitmap);
        imageAsDrawable.setBounds(imageWidthStartPx, 0, imageWidthEndPx, imageAsBitmap.getHeight());
        return imageAsDrawable;
    }
}, null));
- I also tried this (the code above); with it the images are displayed, but the content clicks are not there.
Use this code, provided you have the necessary libraries and sampleepubfile.epub in your assets:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.List;
import nl.siegmann.epublib.domain.Book;
import nl.siegmann.epublib.domain.TOCReference;
import nl.siegmann.epublib.epub.EpubReader;
import android.app.Activity;
import android.content.res.AssetManager;
import android.os.Bundle;
import android.text.Html;
import android.util.Log;
import android.webkit.WebView;
public class EPubDemo extends Activity {
WebView webview;
String line, line1 = "", finalstr = "";
int i = 0;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
webview = (WebView) findViewById(R.id.webview);
AssetManager assetManager = getAssets();
try {
// find InputStream for book
InputStream epubInputStream = assetManager
.open("sampleepubfile.epub");
// Load Book from inputStream
Book book = (new EpubReader()).readEpub(epubInputStream);
// Log the book's authors
Log.i("author", " : " + book.getMetadata().getAuthors());
// Log the book's title
Log.i("title", " : " + book.getTitle());
/* Log the book's coverimage property */
// Bitmap coverImage =
// BitmapFactory.decodeStream(book.getCoverImage()
// .getInputStream());
// Log.i("epublib", "Coverimage is " + coverImage.getWidth() +
// " by "
// + coverImage.getHeight() + " pixels");
// Log the table of contents
logTableOfContents(book.getTableOfContents().getTocReferences(), 0);
} catch (IOException e) {
Log.e("epublib exception", e.getMessage());
}
String javascrips = "";
try {
// InputStream input = getResources().openRawResource(R.raw.lights);
InputStream input = this.getAssets().open(
"poe-fall-of-the-house-of-usher.epub");
int size;
size = input.available();
byte[] buffer = new byte[size];
input.read(buffer);
input.close();
// byte buffer into a string
javascrips = new String(buffer);
} catch (IOException e) {
e.printStackTrace();
}
// String html = readFile(is);
webview.loadDataWithBaseURL("file:///android_asset/", javascrips,
"application/epub+zip", "UTF-8", null);
}
#SuppressWarnings("unused")
private void logTableOfContents(List<TOCReference> tocReferences, int depth) {
if (tocReferences == null) {
return;
}
for (TOCReference tocReference : tocReferences) {
StringBuilder tocString = new StringBuilder();
for (int i = 0; i < depth; i++) {
tocString.append("\t");
}
tocString.append(tocReference.getTitle());
Log.i("TOC", tocString.toString());
try {
InputStream is = tocReference.getResource().getInputStream();
BufferedReader r = new BufferedReader(new InputStreamReader(is));
while ((line = r.readLine()) != null) {
// line1 = Html.fromHtml(line).toString();
Log.v("line" + i, Html.fromHtml(line).toString());
// line1 = (tocString.append(Html.fromHtml(line).toString()+
// "\n")).toString();
line1 = line1.concat(Html.fromHtml(line).toString());
}
finalstr = finalstr.concat("\n").concat(line1);
// Log.v("Content " + i, finalstr);
i++;
} catch (IOException e) {
}
logTableOfContents(tocReference.getChildren(), depth + 1);
}
webview.loadDataWithBaseURL("", finalstr, "text/html", "UTF-8", "");
}
}
I have tried this and it's working for me.
I am relatively new to programming in Java and Android, so I wanted to ask you guys for a simple and understandable way of filtering two tables and their h3 headings out of this website, possibly even caching the result, and loading it into a transparent WebView so it doesn't look like a website. I thought of regex. I do this to keep the content up to date without having to maintain it by hand.
With "simple and understandable" I mean comments, and possibly show what are just var names, method names or other custom names. And many explanations, comments and other things... Of course you can also just bomb the code in there, that would also work but I probably could not understand all of it.. ;)
Here's some code I tried:
package com.mrousavy.gemeindemuckendorfwipfing;
import android.os.AsyncTask;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* Created by Marc on 15.10.2015.
*/
public class Table {
// found on stackoverflow
public static boolean exists2(String url) {
try {
URL u = new URL(url);
HttpURLConnection connection = (HttpURLConnection) u.openConnection();
connection.setRequestMethod("HEAD");
connection.connect();
return connection.getResponseCode() == HttpURLConnection.HTTP_OK;
} catch (Exception ex) {
return false;
}
}
/**
* must NOT be called in main thread!!!
*/
public static String getHTML2(String url) throws Exception {
try {
URL u = new URL(url);
BufferedReader in = new BufferedReader(new InputStreamReader(u.openStream()));
String tmp, html = "";
while ((tmp = in.readLine()) != null) {
html += tmp;
try {
Thread.sleep(10);
} catch (Exception e) {
}
}
return html;
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
/**
* must NOT be called in main thread!!!
*/
public static List<String> getUrlsFromHTML2(String html) throws Exception {
List<String> urls = new ArrayList();
//init Patterns
Pattern divsPattern = Pattern.compile("<h3>.</table>");
//Pattern urlPattern = Pattern.compile("<a href=\"\\./files/(.*?)\"");
//search for right divs
Matcher divs = divsPattern.matcher(html);
while (divs.find()) {
//search for links
String innerDiv = divs.group(1);
Matcher url = urlPattern.matcher(innerDiv);
if (url.find()) {
if (!urls.contains(url.group(1)))
urls.add(url.group(1));
}
try {
Thread.sleep(10);
} catch (Exception e) {
}
}
return urls;
}
public static List<News> getNewsFromHTML(String html) {
List<News> ret = new ArrayList();
Pattern firstNewsPattern = Pattern.compile("<h3><strong>Aktuelle Meldungen</strong></h3>(.*?)<hr />");
Pattern newsPattern = Pattern.compile("<hr />(.*?)<hr />");
Pattern newsHeaderPattern = Pattern.compile("<h4>(.*?)</h4>");
Pattern hrefPattern = Pattern.compile("href=\"(.*?)\"");
Matcher newsHeader = null;
Matcher href = null;
Matcher firstNews = firstNewsPattern.matcher(html);
if(firstNews.find()) {
String content = firstNews.group(1).replace("./", "http://www.muckendorf-wipfing.at/");
href = hrefPattern.matcher(content);
while(href.find()) {
String url = href.group(1);
if(!url.contains("/")) {
content = content.replace("href=\"" + url + "\"", "href=\"" + "http://www.muckendorf-wipfing.at/" + url + "\"");
}
}
newsHeader = newsHeaderPattern.matcher(content);
if(newsHeader.find())
ret.add(new News(newsHeader.group(1).replaceAll("<(.*?)>", "").replaceAll("&#\\d{4};", ""), content));
}
Matcher news = newsPattern.matcher(html);
while(news.find()) {
String content = news.group(1).replace("./", "http://www.muckendorf-wipfing.at/");
href = hrefPattern.matcher(content);
while(href.find()) {
String url = href.group(1);
if(!url.contains("/")) {
content = content.replace("href=\"" + url + "\"", "href=\"" + "http://www.muckendorf-wipfing.at/" + url + "\"");
}
}
newsHeader = newsHeaderPattern.matcher(content);
if(newsHeader.find())
ret.add(new News(newsHeader.group(1).replaceAll("<(.*?)>", "").replaceAll("&#\\d{4};", ""), content));
}
return ret;
}
public static String listToString(List<String> list) {
String ret = "";
for(String str : list) {
ret += str + "§";
}
ret = ret.substring(0, ret.length()-1);
return ret;
}
public static List<String> stringToList(String str) {
String[] arr = str.split("§");
List <String> ret = new ArrayList();
for(String s : arr) {
if(!s.trim().equals(""))
ret.add(s);
}
return ret;
}
public static String extractContentFromHTML(String html) {
Pattern regex = Pattern.compile("<div id=\"content\">((.*?(<div.*?<\\/div>)*.*?)*)<\\/div>");
Pattern hrefPattern = Pattern.compile("href=\"(.*?)\"");
Matcher match = regex.matcher(html);
if(match.find()) {
String content = match.group(1).replace("./", "http://www.muckendorf-wipfing.at/");
Matcher href = hrefPattern.matcher(content);
while(href.find()) {
String url = href.group(1);
if(!url.contains("/")) {
content = content.replace("href=\"" + url + "\"", "href=\"" + "http://www.muckendorf-wipfing.at/" + url + "\"");
}
}
return content;
}
return "";
}
}
I hope someone can help me out! :)
Thank you! ^^
Don't use regex to parse HTML/XML; it's error-prone. Try using a specialized library such as the excellent Jsoup:
import org.jsoup.Jsoup;
import org.jsoup.Connection;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
[...]
final String url = "http://www.muckendorf-wipfing.at/25-0-Wirtschaft+und+Gastronomie.html";
String tablesHtml = parseHTML(url);
[...]
String parseHTML(String url) throws IOException {
    //Retrieve html of {url} via GET
    Connection.Response response = Jsoup.connect(url).method(Connection.Method.GET).execute();
    //Parse html
    Document doc = response.parse();
    //Select the div with id="content", where both tables are stored
    Element contentDiv = doc.select("div#content").first();
    //return the inner html of <div id="content"> selected above
    return contentDiv.html();
}
The syntax of the select function can be found in the Jsoup selector-syntax documentation.
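On Android, note that the network call inside parseHTML must not run on the main thread, or it will throw a NetworkOnMainThreadException. A minimal usage sketch follows (webView is assumed to be the transparent WebView from the question; url is the final String shown above; this is not part of the original answer):
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // network and parsing happen off the UI thread
            final String tablesHtml = parseHTML(url);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    // show only the extracted content in the WebView
                    webView.loadDataWithBaseURL(url, tablesHtml, "text/html", "UTF-8", null);
                }
            });
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();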
UPDATE: I've updated the code to parse the content of the div too, creating a Table class that stores the <h3> text and the table both as HTML and as a two-dimensional String array. It also has a toString() method that is useful to see what you get.
NB: The trick is in the Jsoup select statement "h3:contains(" + h3Text + ") ~ table": it selects the tables after an h3 tag that contains h3Text (the title of the table). Later we take only the first table of that list, so we can be sure we're selecting the table coupled with the h3 title.
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
//[...]
/* //CASE 1: if you have to download the html
* String url = "http://www.muckendorf-wipfing.at/25-0-Wirtschaft+und+Gastronomie.html";
* //Retrieve html of "url" via GET
* Connection.Response response = Jsoup.connect(url).method(Connection.Method.GET).execute();
* //Parse html
*Document doc = response.parse();
*/
//CASE 2: If you already have the html in a String called htmlString
Document doc = Jsoup.parse(htmlString);
//Select the div with id="content", where both tables are stored
Element contentDiv = doc.select("div#content").first();
//Create a list for the data
List<Table> tables = new ArrayList<Table>();
//Loop on h3 titles and get the coupled table below
for ( Element h3 : contentDiv.select("h3") )
{
//get the text inside <h3> tag
String h3Text = h3.text();
//jsoup select statement to get the table
//immediately after the <h3></h3>
String select = "h3:contains(" + h3Text + ") ~ table";
//Actually get the jsoup table element jTable
Element jTable = contentDiv.select(select).first();
//Load the data on the list
tables.add(new Table(h3Text,jTable));
}
//print them
for ( Table t : tables )
System.out.println(t);
//[...]
class Table
{
String h3Title;
String htmlTable;
String[][] splittedTable;
Table(String h3Title, Element jTable)
{
this.h3Title = h3Title;
this.htmlTable = jTable.html();
this.splittedTable = getSplittedTable(jTable);
}
String[][] getSplittedTable(Element jTable)
{
//Get all the rows of the jTable
Elements trs = jTable.select("tr");
//Get the number of rows
int rows = trs.size();
//Get the columns of the first row (the same of all the rows)
int columns = trs.first().select("td").size();
//Allocate new bidimensional array table
String[][] table = new String[rows][columns];
int i = 0; int j = 0;
for ( Element tr : trs ) {
for ( Element td : tr.select("td") ) {
table[i][j++] = td.text();
}
j = 0; //reset column cursor
i++; //increment row cursor
}
return table;
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
String ln = System.lineSeparator();
sb.append(h3Title + ln);
sb.append("--" + ln);
sb.append(this.htmlTable + ln);
sb.append("--" + ln);
for (int i = 0; i < splittedTable.length; i++) {
for (int j = 0; j < splittedTable[i].length; j++) {
sb.append(splittedTable[i][j] + " | ");
}
sb.append(ln + "--" + ln);
}
return sb.toString();
}
}
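One caveat I would add (not from the original answer): if a heading has no table following it, contentDiv.select(select).first() returns null, and the Table constructor would then throw a NullPointerException on jTable.html(). A defensive version of the loop body could simply skip such headings:
//Only build a Table when a matching <table> was actually found
Element jTable = contentDiv.select(select).first();
if (jTable != null) {
    tables.add(new Table(h3Text, jTable));
}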
I'm having a problem updating my views with the parsed JSON information. There is no problem with getting the JSON text from the internet; that works fine, and I believe the parsing also looks good. The problem is with displaying the contents in the views.
package rev.app.revlearningdemo;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import org.json.JSONException;
import org.json.JSONObject;
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import android.widget.TextView;
public class JsonFromInternet extends Activity {
String jsonText, name, weather_main, description, lon, lat, temp, pressure,
humidity, temp_min, temp_max, speed, deg;
long dt, sunrise, sunset;
TextView area, info, main;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.weather);
area = (TextView) findViewById(R.id.weatherAreaName);
info = (TextView) findViewById(R.id.weatherInfo);
main = (TextView) findViewById(R.id.weatherMain);
Thread net = new Thread() {
public void run() {
URL url;
BufferedReader buff = null;
StringBuilder build = new StringBuilder();
try {
url = new URL(
"http://api.openweathermap.org/data/2.5/weather?q=Madurai,in");
buff = new BufferedReader(new InputStreamReader(
url.openStream(), "UTF-8"));
for (String l; (l = buff.readLine()) != null;)
build.append(l.trim());
} catch (IOException e1) {
Log.e("REV DEMO", "ERROR");
} finally {
if (buff != null)
try {
buff.close();
} catch (IOException e) {
Log.e("REV DEMO", "ERROR");
}
jsonText = build.toString();
interrupt();
}
};
};
net.start();
if (net.getState()==Thread.State.TERMINATED) {
parseJsonText(jsonText);
area.setText(name);
info.setText(description + "\n" + "Longitude, Lattitude " + lon
+ ", " + lat + "\nTemperature " + temp + "\n(Min:"
+ temp_min + ", Max:" + temp_max + ")\nPressure "
+ pressure + "\nHumidity " + humidity + "\nWind Speed "
+ speed + ", direction " + deg);
main.setText(weather_main);
}
}
private void parseJsonText(String jsonFromInternet) {
JSONObject root, coord, main, wind, sys, weather;
try {
root = new JSONObject(jsonFromInternet);
coord = root.getJSONObject("coord");
lon = coord.optString("lon");
lat = coord.optString("lat");
main = root.getJSONObject("main");
temp = main.optString("temp");
pressure = main.optString("pressure");
humidity = main.optString("humidity");
temp_min = main.optString("temp_min");
temp_max = main.optString("temp_max");
wind = root.getJSONObject("wind");
speed = wind.optString("speed");
deg = wind.optString("deg");
sys = root.getJSONObject("sys");
sunrise = sys.optLong("sunrise");
sunset = sys.optLong("sunset");
dt = root.optLong("dt");
name = root.optString("name");
weather = root.optJSONArray("weather").getJSONObject(0);
weather_main = weather.optString("main");
description = weather.optString("description");
} catch (JSONException e) {
e.printStackTrace();
}
}
}
You start an asynchronous Thread to download the JSON, which means your views are updated immediately (while your jsonText is not initialized yet).
Move the view updates to the end of the run() method and update the views only after you have finished parsing the data, not earlier. Warning: to call setText from a background thread you have to use the runOnUiThread method.
Thread net = new Thread() {
public void run() {
URL url;
BufferedReader buff = null;
StringBuilder build = new StringBuilder();
try {
url = new URL(
"http://api.openweathermap.org/data/2.5/weather?q=Madurai,in");
buff = new BufferedReader(new InputStreamReader(
url.openStream(), "UTF-8"));
for (String l; (l = buff.readLine()) != null;)
build.append(l.trim());
// parse and update views now:
jsonText = build.toString();
parseJsonText(jsonText);
runOnUiThread(new Runnable() {
@Override
public void run() {
area.setText(name);
info.setText(description + "\n" + "Longitude, Lattitude " + lon
+ ", " + lat + "\nTemperature " + temp + "\n(Min:"
+ temp_min + ", Max:" + temp_max + ")\nPressure "
+ pressure + "\nHumidity " + humidity + "\nWind Speed "
+ speed + ", direction " + deg);
main.setText(weather_main);
}
});
} catch (IOException e1) {
Log.e("REV DEMO", "ERROR");
} finally {
if (buff != null)
try {
buff.close();
} catch (IOException e) {
Log.e("REV DEMO", "ERROR");
}
interrupt();
}
};
};
I am getting JSON data from a web service and would like to display a progress bar while the data is downloading. All the examples I have seen use a StringBuilder like so:
//Set up the initial connection
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setRequestMethod("GET");
connection.setDoOutput(true);
connection.setReadTimeout(10000);
connection.connect();
InputStream stream = connection.getInputStream();
//read the result from the server
reader = new BufferedReader(new InputStreamReader(stream));
StringBuilder builder = new StringBuilder();
String line = "";
while ((line = reader.readLine()) != null) {
builder.append(line + '\n');
}
result = builder.toString();
I got the ProgressBar to work by downloading the data as a byte array and then converting the byte array to a String, but I'm wondering if there is a more correct way to do this. Since I've found no other way, the following class can also serve as a working example; it seems a bit of a hack, but it does work well.
package com.royaldigit.newsreader.services;
import android.os.AsyncTask;
import android.util.Log;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import com.royaldigit.newsreader.controller.commands.CommandInterface;
import com.royaldigit.newsreader.model.data.SearchResultDO;
import com.royaldigit.newsreader.model.data.SearchTermDO;
/**
* Gets news results from Feedzilla based on the search term currently stored in model.searchTermDO
*
* Sends progress update and returns results to the CommandInterface command reference:
* * command.onProgressUpdate(progress);
* * command.serviceComplete(results);
*
*
*/
public class FeedzillaSearchService {
private static final String TAG = "FeedzillaSearchService";
private static final String SERVICE_URI = "http://api.feedzilla.com/v1/categories/26/articles/search.json?q=";
private static final int STREAM_DIVISIONS = 10;
private CommandInterface command;
private SearchTermDO currentSearchTermDO;
private Integer maximumResults;
private DownloadTask task;
private ArrayList<SearchResultDO> results;
public Boolean isCanceled = false;
public void getData(CommandInterface cmd, SearchTermDO termDO, Integer maxResults){
command = cmd;
currentSearchTermDO = termDO;
//Feedzilla only allows count to be 100 or less, anything over throws an error
maximumResults = (maxResults > 100)? 100 : maxResults;
results = new ArrayList<SearchResultDO>();
task = new DownloadTask();
task.execute();
}
public void cancel() {
isCanceled = true;
if(task != null) task.cancel(true);
}
/**
* Handle GET request
*
*/
private class DownloadTask extends AsyncTask<Void, Integer, String> {
@Override
protected String doInBackground(Void...voids) {
String result = "";
if(currentSearchTermDO == null || currentSearchTermDO.term.equals("")) return result;
BufferedReader reader = null;
publishProgress(0);
try {
String path = SERVICE_URI + URLEncoder.encode(currentSearchTermDO.term, "UTF-8") + "&count=" + maximumResults;
Log.d(TAG, "path = "+path);
URL url = new URL(path);
//Set up the initial connection
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setRequestMethod("GET");
connection.setDoOutput(true);
connection.setReadTimeout(10000);
connection.connect();
int length = connection.getContentLength();
InputStream stream = connection.getInputStream();
byte[] data = new byte[length];
int bufferSize = (int) Math.ceil(length / STREAM_DIVISIONS);
int progress = 0;
for(int i = 1; i < STREAM_DIVISIONS; i++){
int read = stream.read(data, progress, bufferSize);
progress += read;
publishProgress(i);
}
stream.read(data, progress, length - progress);
publishProgress(STREAM_DIVISIONS);
result = new String(data);
} catch (Exception e) {
Log.e(TAG, "Exception "+e.toString());
} finally {
if(reader != null){
try {
reader.close();
} catch(IOException ioe) {
ioe.printStackTrace();
}
}
}
return result;
}
protected void onProgressUpdate(Integer... progress) {
int currentProgress = progress[0] * 100/STREAM_DIVISIONS;
if(!this.isCancelled()) command.onProgressUpdate(currentProgress);
}
@Override
protected void onPostExecute(String result){
if(!this.isCancelled()) downloadTaskComplete(result);
}
}
/**
*
* @param data
*/
private void downloadTaskComplete(Object data){
if(!isCanceled){
try {
Log.d(TAG, data.toString());
JSONObject obj = new JSONObject(data.toString());
JSONArray array = obj.getJSONArray("articles");
for(int i = 0; i < array.length(); i++){
SearchResultDO dataObj = new SearchResultDO();
dataObj.title = array.getJSONObject(i).getString("title");
dataObj.url = array.getJSONObject(i).getString("url");
dataObj.snippet = array.getJSONObject(i).getString("summary");
dataObj.source = array.getJSONObject(i).getString("source");
dataObj.date = array.getJSONObject(i).getString("publish_date");
dataObj.termId = currentSearchTermDO.id;
//Reformat date
SimpleDateFormat format1 = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z");
try {
Date date = format1.parse(dataObj.date);
SimpleDateFormat format2 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
dataObj.date = format2.format(date);
} catch(ParseException pe) {
Log.e(TAG, pe.getMessage());
}
results.add(dataObj);
}
command.serviceComplete(results);
} catch(JSONException e){
Log.e(TAG, e.toString());
command.serviceComplete(results);
}
}
}
}
UPDATE: Here is the finished version of the class using the suggestions from Nikolay. I ended up using the StringBuilder after all. The previous version would break because sometimes connection.getContentLength() returns -1. This version degrades gracefully in that case. I have tested this implementation quite a bit and it seems bulletproof.
package com.royaldigit.newsreader.services;
import android.os.AsyncTask;
import android.util.Log;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import com.royaldigit.newsreader.controller.commands.CommandInterface;
import com.royaldigit.newsreader.model.data.SearchResultDO;
import com.royaldigit.newsreader.model.data.SearchTermDO;
/**
* Gets news results from Feedzilla based on the search term currently stored in model.searchTermDO
*
* Sends progress update and returns results to the CommandInterface command reference:
* * command.onProgressUpdate(progress);
* * command.serviceComplete(results);
*
*/
public class FeedzillaSearchService implements SearchServiceInterface {
private static final String TAG = "FeedzillaSearchService";
private static final String SERVICE_URI = "http://api.feedzilla.com/v1/categories/26/articles/search.json?q=";
private CommandInterface command;
private SearchTermDO currentSearchTermDO;
private Integer maximumResults;
private DownloadTask task;
private ArrayList<SearchResultDO> results;
private Boolean isCanceled = false;
public void getData(CommandInterface cmd, SearchTermDO termDO, Integer maxResults){
command = cmd;
currentSearchTermDO = termDO;
//Feedzilla only allows count to be 100 or less, anything over throws an error
maximumResults = (maxResults > 100)? 100 : maxResults;
results = new ArrayList<SearchResultDO>();
task = new DownloadTask();
task.execute();
}
public void cancel() {
isCanceled = true;
if(task != null) task.cancel(true);
}
/**
* Handle GET request
*
*/
private class DownloadTask extends AsyncTask<Void, Integer, String> {
@Override
protected String doInBackground(Void...voids) {
String result = "";
if(currentSearchTermDO == null || currentSearchTermDO.term.equals("")) return result;
BufferedReader reader = null;
publishProgress(0);
try {
String path = SERVICE_URI + URLEncoder.encode(currentSearchTermDO.term, "UTF-8") + "&count=" + maximumResults;
Log.d(TAG, "path = "+path);
URL url = new URL(path);
//Set up the initial connection
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setRequestMethod("GET");
connection.setDoOutput(true);
connection.setReadTimeout(20000);
connection.connect();
//connection.getContentType() should return something like "application/json; charset=utf-8"
String[] values = connection.getContentType().toString().split(";");
String charset = "";
for (String value : values) {
value = value.trim();
if (value.toLowerCase().startsWith("charset=")) {
charset = value.substring("charset=".length());
break;
}
}
//Set default value if charset not set
if(charset.equals("")) charset = "utf-8";
int contentLength = connection.getContentLength();
InputStream stream = connection.getInputStream();
reader = new BufferedReader(new InputStreamReader(stream));
StringBuilder builder = new StringBuilder();
/**
* connection.getContentLength() can return -1 on some connections.
* If we have the content length calculate progress, else just set progress to 100 and build the string all at once.
*
*/
if(contentLength>-1){
//Odd byte array sizes don't always work, tried 512, 1024, 2048; 1024 is the magic number because it seems to work best.
byte[] data = new byte[1024];
int totalRead = 0;
int bytesRead = 0;
while ((bytesRead = stream.read(data)) > 0) {
try {
builder.append(new String(data, 0, bytesRead, charset));
} catch (UnsupportedEncodingException e) {
Log.e(TAG, "Invalid charset: " + e.getMessage());
//Append without charset (uses system's default charset)
builder.append(new String(data, 0, bytesRead));
}
totalRead += bytesRead;
int progress = (int) (totalRead * (100/(double) contentLength));
//Log.d(TAG, "length = " + contentLength + " bytesRead = " + bytesRead + " totalRead = " + totalRead + " progress = " + progress);
publishProgress(progress);
}
} else {
String line = "";
while ((line = reader.readLine()) != null) {
builder.append(line + '\n');
publishProgress(100);
}
}
result = builder.toString();
} catch (Exception e) {
Log.e(TAG, "Exception "+e.toString());
} finally {
if(reader != null){
try {
reader.close();
} catch(IOException ioe) {
ioe.printStackTrace();
}
}
}
return result;
}
protected void onProgressUpdate(Integer... progress) {
if(!this.isCancelled()) command.onProgressUpdate(progress[0]);
}
@Override
protected void onPostExecute(String result){
if(!this.isCancelled()) downloadTaskComplete(result);
}
}
/**
*
* @param data
*/
private void downloadTaskComplete(Object data){
if(!isCanceled){
try {
Log.d(TAG, data.toString());
JSONObject obj = new JSONObject(data.toString());
JSONArray array = obj.getJSONArray("articles");
for(int i = 0; i < array.length(); i++){
SearchResultDO dataObj = new SearchResultDO();
dataObj.title = array.getJSONObject(i).getString("title");
dataObj.url = array.getJSONObject(i).getString("url");
dataObj.snippet = array.getJSONObject(i).getString("summary");
dataObj.source = array.getJSONObject(i).getString("source");
dataObj.date = array.getJSONObject(i).getString("publish_date");
dataObj.termId = currentSearchTermDO.id;
//Reformat date
SimpleDateFormat format1 = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z");
try {
Date date = format1.parse(dataObj.date);
SimpleDateFormat format2 = new SimpleDateFormat(SearchResultDO.DATE_FORMAT_STRING);
dataObj.date = format2.format(date);
} catch(ParseException pe) {
Log.e(TAG, pe.getMessage());
}
results.add(dataObj);
}
} catch(JSONException e){
Log.e(TAG, e.toString());
}
command.serviceComplete(results);
}
}
}
Well, since content length is reported in bytes, there is really no other way. If you want to use a Reader you could take the length of each line you read and calculate the total bytes read to achieve the same thing. Also, the regular idiom is to check the return value of read() to determine whether you have reached the end of the stream. If, for some reason, the content length is wrong, your code may read more or less data than is available. Finally, when converting a byte blob to a string, you should explicitly specify the encoding. When dealing with HTTP, you can get that from the 'charset' parameter of the 'Content-Type' header.
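As a rough illustration of the line-based idea above, progress can be estimated from the bytes of each line. This is only a sketch: stream, charset, contentLength and publishProgress are taken from the examples earlier in this post, and the per-line byte count is an estimate because readLine() drops the original line terminators.
// Estimate progress while reading line by line (java.nio.charset.Charset)
Charset cs = Charset.forName(charset);
BufferedReader reader = new BufferedReader(new InputStreamReader(stream, cs));
StringBuilder builder = new StringBuilder();
long totalBytes = 0;
String line;
while ((line = reader.readLine()) != null) {
    builder.append(line).append('\n');
    totalBytes += line.getBytes(cs).length + 1; // +1 for the dropped newline
    if (contentLength > 0) {
        publishProgress((int) (totalBytes * 100 / contentLength));
    }
}
String result = builder.toString();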
I had a similar problem. I tried the solution from Jeremy C, but it was inaccurate, because the value of the "Content-Length" header can be very different from the real data size.
My solution is:
Send a custom HTTP header from the server (PHP):
$string = json_encode($data, JSON_PRETTY_PRINT | JSON_FORCE_OBJECT);
header("X-Size: ".strlen($string)); //for example with name: "X-Size"
print($string);
Read the correct value from the "X-Size" HTTP header into the contentLength variable before reading from the stream:
protected String doInBackground(URL... urls) {
if (General.DEBUG) Log.i(TAG, "WebAsyncTask(doInBackground)");
String result = "";
BufferedReader reader = null;
try {
HttpURLConnection conn = (HttpURLConnection) urls[0].openConnection();
conn.setConnectTimeout(General.TIMEOUT_CONNECTION);
conn.setReadTimeout(General.TIMEOUT_SOCKET);
conn.setRequestMethod("GET");
conn.connect();
if (General.DEBUG) Log.i(TAG, "X-Size: "+conn.getHeaderField("X-Size"));
if (General.DEBUG) Log.i(TAG, "getHeaderField: "+conn.getHeaderFields());
if(conn.getResponseCode() != General.HTTP_STATUS_200)
return General.ERR_HTTP;
int contentLength = -1;
try {
contentLength = Integer.parseInt(conn.getHeaderField("X-Size"));
} catch (Exception e){
e.printStackTrace();
}
InputStream stream = conn.getInputStream();
reader = new BufferedReader(new InputStreamReader(stream));
StringBuilder builder = new StringBuilder();
//If we know the length:
if(contentLength > -1){
byte[] data = new byte[16]; //TODO
int totalRead = 0;
int bytesRead = 0;
while ((bytesRead = stream.read(data)) > 0){
Thread.sleep(100); //DEBUG TODO
try {
builder.append(new String(data, 0, bytesRead, "UTF-8"));
} catch (UnsupportedEncodingException e) {
Log.i(TAG, "Invalid charset: " + e.getMessage());
//Append without charset (uses system's default charset)
builder.append(new String(data, 0, bytesRead));
}
totalRead += bytesRead;
int progress = (int) (totalRead * (100/(double) contentLength));
Log.i(TAG, "length = " + contentLength + " bytesRead = " + bytesRead + " totalRead = " + totalRead + " progress = " + progress);
publishProgress(progress);
}
} else {
String line = "";
while ((line = reader.readLine()) != null) {
builder.append(line + '\n');
publishProgress(100);
}
}
result = builder.toString();
} catch (SocketException | SocketTimeoutException e){
if (General.DEBUG) Log.i(TAG, "SocketException or SocketTimeoutException");
e.printStackTrace();
return General.HTTP_TIMEOUT;
} catch (Exception e){
e.printStackTrace();
return General.ERR_HTTP;
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return result;
}
I am working on an Android app in which I have to fetch data from an RSS feed. I am able to read the title and link, but I am facing a problem with the description. If it is in this format:
<description>
worsening of developments in......
</description>
I am able to read it, but some RSS feeds also have this format:
<description>
<p><a href="http://news.yahoo.com/ap-sources
</description>
I am not getting this text.
This is the RSS feed URL: http://news.yahoo.com/rss/politics.
How do I read this description?
package com.samir.XMLParser;
import java.io.*;
import java.net.*;
import java.util.*;
import javax.xml.parsers.*;
import org.w3c.dom.*;
public class HTMLRemoverParser {
HTMLRemoverBean objBean;
Vector<HTMLRemoverBean> vectParse;
int mediaThumbnailCount;
boolean urlflag;
int count = 0;
public HTMLRemoverParser() {
try {
vectParse = new Vector<HTMLRemoverBean>();
URL url = new URL("http://news.yahoo.com/rss/politics");
URLConnection con = url.openConnection();
System.out.println("Connection is : " + con);
BufferedReader reader = new BufferedReader(new InputStreamReader(
con.getInputStream()));
System.out.println("Reader :" + reader);
String inputLine;
String fullStr = "";
while ((inputLine = reader.readLine()) != null)
fullStr = fullStr.concat(inputLine + "\n");
InputStream istream = url.openStream();
DocumentBuilder builder = DocumentBuilderFactory.newInstance()
.newDocumentBuilder();
Document doc = builder.parse(istream);
doc.getDocumentElement().normalize();
NodeList nList = doc.getElementsByTagName("item");
System.out.println();
for (int temp = 0; temp < nList.getLength(); temp++) {
Node nNode = nList.item(temp);
if (nNode.getNodeType() == Node.ELEMENT_NODE) {
Element eElement = (Element) nNode;
objBean = new HTMLRemoverBean();
vectParse.add(objBean);
objBean.title = getTagValue("title", eElement);
objBean.description = getTagValue("description", eElement);
String noHTMLString = objBean.description.replaceAll("\\<.*?\\>", "");
objBean.description=noHTMLString;
objBean.link = getTagValue("link", eElement);
objBean.pubdate = getTagValue("pubDate", eElement);
}
}
for (int index1 = 0; index1 < vectParse.size(); index1++) {
HTMLRemoverBean ObjNB = (HTMLRemoverBean) vectParse
.get(index1);
System.out.println("Item No : " + index1);
System.out.println();
System.out.println("Title is : " + ObjNB.title);
System.out.println("Description is : " + ObjNB.description);
System.out.println("Link is : " + ObjNB.link);
System.out.println("Pubdate is : " + ObjNB.pubdate);
System.out.println();
System.out
.println("-------------------------------------------------------------------------------------------------------------");
}
} catch (Exception e) {
e.printStackTrace();
}
}
private String getTagValue(String sTag, Element eElement) {
NodeList nlList = eElement.getElementsByTagName(sTag).item(0)
.getChildNodes();
Node nValue = (Node) nlList.item(0);
return nValue.getNodeValue();
}
public static void main(String[] args) {
new HTMLRemoverParser();
}
}
And the bean is:
package com.samir.XMLParser;
public class HTMLRemoverBean {
public String title;
public String description;
public String link;
public String pubdate;
}
When you detect that the block of text is HTML, open it in a WebView instead of a TextView. My solution looks like this:
WebView wv = (WebView) v.findViewById(R.id.feed_entry_detail);
wv.loadData(mContentFromFeed, "text/html; charset=utf-8", null);
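If characters such as '#' or '%' in the feed HTML render incorrectly with loadData, one common workaround (my addition, not part of the original answer) is to Base64-encode the content and pass "base64" as the encoding:
// Sketch, assuming android.util.Base64, java.nio.charset.StandardCharsets
// and the same mContentFromFeed string as above
String encoded = Base64.encodeToString(mContentFromFeed.getBytes(StandardCharsets.UTF_8), Base64.NO_PADDING);
wv.loadData(encoded, "text/html; charset=utf-8", "base64");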