I am doing JSON parsing for a particular web service with different IDs. The parsing returns fields like "Description", "unitcost", "saleprice", "summary", etc. In the "Description" field I get data in HTML format, but the HTML structure is not the same for every ID.
These are the URLs I am using:
http://demo.s2commerce.net/DesktopModules/S2Commerce/S2Commerce.svc/rest/ProductID/8/Portal/0
http://demo.s2commerce.net/DesktopModules/S2Commerce/S2Commerce.svc/rest/ProductID/5/Portal/0
And the data I am getting in the "Description" field for these URLs is below:
1."Description":" <\/p>\u000d\u000a\u000d\u000aThis exclusive edition is another striking symbol of cooperation between Acer and Ferrari -- two progressive companies with proud heritages built on passion, innovation, power and success<\/p>\u000d\u000a<\/div>\u000d\u000a\u000d\u000aAcer has flawlessly designed the Ferrari 3200, instilling it with exceptional performance, brilliant graphics, and lightning-fast connectivity. This exclusive edition is another striking symbol of cooperation between Acer and Ferrari -- two progressive companies with proud heritages built on passion, innovation, power and success.<\/p>\u000d\u000a<\/div>\u000d\u000a <\/p>",
2."Description":"\u000d\u000aA technically sophisticated point-and-shoot camera offering a number of pioneering technologies such as Dual Image Stabilization, Bright Capture Technology, and TruePic Turbo, as well as a powerful 5x optical zoom.<\/p>\u000d\u000a<\/div>\u000d\u000a\u000d\u000aOlympus continues to innovate with the launch of the Stylus 750 digital camera, a technically sophisticated point-and-shoot camera offering a number of pioneering technologies such as Dual Image Stabilization, Bright Capture Technology, and TruePic Turbo, as well as a powerful 5x optical zoom that tucks away into a streamlined metal, all-weather body design. The camera is distinguished by a number of premium features, including:<\/p>\u000d\u000a* An advanced combination of the mechanical CCD-shift Image Stabilization and Digital Image Stabilization work together to ensure the clearest pictures possible in any situation;\u000d\u000a* A 5x optical zoom lens with a newly developed lens element to maintain a small compact size;\u000d\u000a* A 2.5-inch LCD and Bright Capture Technology dramatically improve composition, capture and review of images in low-light situations;\u000d\u000a* Olympus' exclusive TruePic Turbo Image Processing engine is coupled with a 7.1-megapixel image sensor to produce crisp, high-quality p<\/p>\u000d\u000a<\/div>
I want to get only the paragraphs between the paragraph tags.
Can anyone suggest how to do this?
Thanks in advance.
You can use regular expressions. Something like this:
// Requires: import java.util.regex.Matcher; and import java.util.regex.Pattern; (plus android.util.Log)
// Note: in the JSON the closing tag arrives escaped as <\/p>, hence the sample below.
String description = "test <p> some \n string <\\/p> skip this <p> another <\\/p> not in range";
...
if (!"".equals(description)) {
    // Non-greedy .*? stops at the first closing tag; DOTALL lets '.' also match newlines.
    // The \\? makes the backslash before the slash optional, so both </p> and <\/p> match.
    Pattern p = Pattern.compile("<p>.*?<\\\\?/p>", Pattern.DOTALL);
    Matcher m = p.matcher(description);
    while (m.find()) {
        String ptag = m.group();
        Log.d("regex", ptag);
    }
}
This will find every part of the text between <p> and <\/p>. You may need some modifications. See all supported regex constructs in the documentation.
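If the HTML in the Description field gets more complex (nested tags, attributes, punctuation inside the paragraphs), a real HTML parser is more robust than a regular expression. Here is a minimal sketch using the Jsoup library, assuming you add the jsoup dependency to your project and unescape the JSON value first:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import java.util.ArrayList;
import java.util.List;

// Returns the plain text of every <p> element found in the HTML fragment.
public static List<String> extractParagraphs(String html) {
    List<String> paragraphs = new ArrayList<>();
    Document doc = Jsoup.parse(html);
    for (Element p : doc.select("p")) {
        paragraphs.add(p.text());
    }
    return paragraphs;
}
Jsoup also copes with the unbalanced markup in your samples (the stray closing </div> and </p> tags), which a regex cannot.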
Just see this link:
Is it possible to have multiple styles inside a TextView?
You just need to set the string data parsed from the JSON into that TextView.
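For example, a minimal sketch, assuming descriptionHtml holds the already-unescaped HTML string from the JSON and textView is your TextView:
// Render the HTML markup inside the TextView.
// Html.fromHtml(String) is deprecated since API 24, hence the version check.
Spanned styled;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
    styled = Html.fromHtml(descriptionHtml, Html.FROM_HTML_MODE_LEGACY);
} else {
    styled = Html.fromHtml(descriptionHtml);
}
textView.setText(styled);
This keeps the paragraph breaks and any inline styling from the Description field without you having to strip the tags yourself.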
I'm attempting to extract the white balance parameters from the auto white balance algorithm on the S9. On every other device I've tested, it gives meaningful parameters back (the numbers have a floating-point precision of around 6 digits and are constantly changing), but the S9 appears to round its result parameters to the nearest whole number, which ends up giving some very poor results in terms of color balance. Here's the code I am using to do this:
// COLOR_CORRECTION_GAINS holds the per-channel (R, G_even, G_odd, B) white balance gains.
if (result.get(CaptureResult.COLOR_CORRECTION_GAINS) != null) {
    channelVector = result.get(CaptureResult.COLOR_CORRECTION_GAINS);
}
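To see the rounding directly, the four channel gains can be logged individually; a minimal sketch, assuming channelVector is the android.hardware.camera2.params.RggbChannelVector obtained above:
// Log each gain at full float precision; on the S9 these come back as whole numbers.
if (channelVector != null) {
    Log.d("AWB", "R=" + channelVector.getRed()
            + " Geven=" + channelVector.getGreenEven()
            + " Godd=" + channelVector.getGreenOdd()
            + " B=" + channelVector.getBlue());
}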
Has anybody else run into this issue, and if so, are there any solutions out there?
Consider working with the custom Samsung Camera API; these days it is based on camera2.
Specifically, it provides its own COLOR_CORRECTION_GAINS. They also explain that
… the camera device may do additional processing but android.colorCorrection.gains and android.colorCorrection.transform will still be provided by the camera device (in the results) and be roughly correct.
(the emphasis is mine)
I received the following JSON from the server. It contains Unicode escape sequences; I want to convert it into the correct web content and then display that content in a WebView. Please, can anyone help me with that?
content":"\u003Cstrong\u003EHas there ever been a more open looking Epsom
Derby?\u003C\/strong\u003E\r\n\u003Cp\u003EThe fact that two
fillies - both
unlikely to run for obvious reasons - are highly prominent in the
betting market tells it's own story.\u003C\/p\u003E\r\n\u003Cp\u003ESo
far at least this
has not exactly been a vintage year for the classic colts division.\u003C\/p\u003E\r\n\u003Cp\u003EAir
Force Blue never kicked
into gear
at all in the 2,000 Guineas while the Derby market has been in constant
turmoil.\u003C\/p\u003E\r\n\u003Cp\u003EUS Army Ranger going
to the top of the betting for winning a glorified slow bicycle
race on horrible ground at The Curragh was only the start of the fun."
Try this:
String result = removeUTFCharacters(unicodeString).toString();

public static StringBuffer removeUTFCharacters(String data) {
    // Matches escape sequences of the form \uXXXX (four hex digits).
    Pattern p = Pattern.compile("\\\\u(\\p{XDigit}{4})");
    Matcher m = p.matcher(data);
    StringBuffer buf = new StringBuffer(data.length());
    while (m.find()) {
        // Convert the hex code point to its character and substitute it in place.
        String ch = String.valueOf((char) Integer.parseInt(m.group(1), 16));
        m.appendReplacement(buf, Matcher.quoteReplacement(ch));
    }
    m.appendTail(buf);
    return buf;
}
The result will be:
<strong>Has there ever been a more open looking Epsom
Derby?<\/strong>\r\n<p>The fact that two
fillies - both
unlikely to run for obvious reasons - are highly prominent in the
betting market tells it's own story.<\/p>\r\n<p>So
far at least this
has not exactly been a vintage year for the classic colts division.<\/p>\r\n<p>Air
Force Blue never kicked
into gear
at all in the 2,000 Guineas while the Derby market has been in constant
turmoil.<\/p>\r\n<p>US Army Ranger going
to the top of the betting for winning a glorified slow bicycle
race on horrible ground at The Curragh was only the start of the fun.
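To then display the decoded content in a WebView, something like this should work; a sketch assuming webView is your WebView and result is the decoded string from above:
// The decoded text still contains escaped slashes (<\/p>), so unescape them first.
String html = result.replace("\\/", "/");
// Load the HTML straight into the WebView (no base URL or history entry needed).
webView.loadDataWithBaseURL(null, html, "text/html", "UTF-8", null);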
Is there a known API or way to SCAN the text from a card without actually manually saving (and uploading) a picture? (iOS and Android)
Then I would need to know whether that API can determine the marquee within the camera view that should be scanned.
I want behaviour similar to that of QR scanners or augmented reality apps, where the user just points the camera and the action occurs.
I have printed cards with a redeem code in text, and including a QR code would require changing the current card production.
The text is inside a white box, which may make it easier to recognise:
On iOS, you would use CIDetector with an AVCaptureSession. It can process the capture session's output buffers as they come in from the camera, without having to take a picture, and provide text detection.
For text detection, using CIDetector with CIDetectorTypeText will return areas that are likely to contain text, but you would have to perform additional processing for optical character recognition.
You could also use OpenCV for a solution that is not out of the box.
You can try this: https://github.com/gali8/Tesseract-OCR-iOS
Usage:
// Specify the image Tesseract should recognize on
tesseract.image = [[UIImage imageNamed:@"image_sample.jpg"] g8_blackAndWhite];
// Optional: Limit the area of the image Tesseract should recognize on to a rectangle
tesseract.rect = CGRectMake(20, 20, 100, 100);
// Optional: Limit recognition time to a few seconds
tesseract.maximumRecognitionTime = 2.0;
// Start the recognition
[tesseract recognize];
Also trying to get access to the color data bytes from the color camera of the Tango, I was stuck with the Java API: I could connect the Tango camera to a surface for display, but that was only good for display, with no easy access to the raw data or the timestamp. So I finally switched to the C API in native code (latest Fermat library and headers) and followed a recommendation I found on Stack Overflow, registering a derived sample callback with connectOnFrameAvailable() (I started from the PointCloudActivity sample for this test).
The first problem I found is somewhat a side effect of registering that callback: it usually works fine (the callback fires regularly), but another callback I also registered, to get the XYZ point clouds, then starts failing to fire. As in the sample code I mentioned, the clouds are delivered through an onXYZijAvailable() callback that the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback does not always fail to fire, but it does about half the time during my tests. An awful workaround is to send the app to the background and then bring it to the foreground again... This is curious; is this "recovery" related to low-level onPause/onResume handling? If someone has clues...
By the way, with the Java API I observed the same side effect once the camera texture was connected for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring the YV12 color data from the camera:
I register with TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable)
and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
...
LOGI("OnFrameAvailable(): Cam frame data received");
// Check if data format of expected type : YV12 , i.e.
// TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
// i.e. = 0x32315659 // YCrCb 4:2:0 Planar
//LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
....
}
The problem is that the width, height and stride fields of the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time and is never the expected magic number (here 0x32315659)...
Am I doing something wrong there? (The other fields are OK...)
Also, there is apparently only one data format defined here (YV12), but judging from the fisheye images in the demo app, they appear to be grey-level images. Does the fisheye camera use the same (color) low-level capture format as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did: the image data is only available through the C API.
2) Regarding the image format: I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote the JPEG support. The format is naked, i.e. it is purely an organizational structure and has no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here.
3) Regarding point cloud returns:
Please note this information is anecdotal and, to some degree, the product of superstition: what works for me only does so sometimes, and may not work at all for you.
Tango does seem to have a remarkable knack for simply stopping producing point clouds. I think a lot of it has to do with very sensitive timing internally (I wonder if anyone mentioned that Linux isn't an RTOS when this was first crafted).
Almost all of the issues I encounter can be attributed to screwing up that timing, where:
A. Debugging at the C level may make point clouds stop coming.
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can cause point clouds to stop coming.
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases, the silvery pattern goes away, and point clouds come back; more commonly, the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At that point at least a restart of the app is required for me, and a full tablet reboot every third time or so.
Summarizing, those are my suspicions and countermeasures, but they're based entirely on personal experience.
Can anyone elaborate on which mobile browsers support multitouch, in particular the ability to press multiple "buttons" (or joysticks) at the same time? It is required for a game I'm making and I would like to know, so I can switch to a native app instead if necessary, though I'd prefer not to.
If the answer is that it is generally not supported, does anyone happen to know a library/framework which can create screens from XML or a similar format (like HTML), cross-platform and cross-resolution?
// Get the coordinates of up to 3 simultaneous touches
function touch(e) {
    var touch1 = {x: e.changedTouches[0].clientX, y: e.changedTouches[0].clientY};
    var touch2 = {x: e.changedTouches[1].clientX, y: e.changedTouches[1].clientY};
    var touch3 = {x: e.changedTouches[2].clientX, y: e.changedTouches[2].clientY};
}
This way you can get the coordinates of each touch.
Multitouch is supported on Android Honeycomb (3.0) and higher, and on iOS 4 and higher.