Print strings on Apex3 using printer command language - android

How do I print strings on Apex3 using its printer command language or SDK?
I created a library for an Android application that prints Bitmap images on mobile thermal printers
(Apex3, 3nStar, PR3, Bixolon, Sewoo LK P30) using the appropriate SDKs.
It works, but it is pretty slow: a 30 cm ticket takes 20-40 seconds to print, depending on the printer.
For 3nStar and Apex3 I started making improvements to speed up printing.
For 3nStar I developed a class which prints a header logo and formatted text with Spanish characters
(different alignments and different fonts for different parts of a ticket),
so the ticket looks very similar to the Bitmap image, but printing takes only 6 seconds.
Now I have to do the same for Apex3, and here I got stuck.
This is how I do it on 3nStar for strings:
I write bytes to the outputStream which are commands telling the printer what to do:
outputStream.write(some_bytes)
The first command is always
{0x1b, 0x74, 40} // Esc t ( -- [ISO8859-15 (Latin9)]
so that Spanish characters print correctly.
Then, in a loop, for n strings:
I choose a font
{0x1B, 0x21, 0x00} // Esc ! 0 -- 0 is normal, 8 is bold, etc.
where changing the last byte selects different fonts: normal, bold, double height, double width, and combinations.
Next I choose an alignment
{0x1b, 0x61, 48} // Esc a -- 48 for left, 49 for center, 50 for right
Then I convert the string to bytes using ISO_8859_1 (to print the Spanish characters) and write it to the outputStream:
outputStream.write(messageString.getBytes(StandardCharsets.ISO_8859_1))
The last byte to send is
{0x0a} // move to the next line
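Put together, the per-line part of that loop looks roughly like this (a minimal Kotlin sketch of the commands described above; outputStream is the stream to the printer, and the ESC t character-set command is assumed to have been sent once beforehand):

import java.io.OutputStream

// Minimal sketch of the per-line 3nStar sequence described above.
fun printLine3nStar(
    outputStream: OutputStream,
    text: String,
    fontByte: Byte = 0x00,   // 0 = normal, 8 = bold, etc.
    alignByte: Byte = 48     // 48 = left, 49 = center, 50 = right
) {
    outputStream.write(byteArrayOf(0x1B, 0x21, fontByte))       // ESC ! n  -> font
    outputStream.write(byteArrayOf(0x1B, 0x61, alignByte))      // ESC a n  -> alignment
    outputStream.write(text.toByteArray(Charsets.ISO_8859_1))   // the text itself
    outputStream.write(byteArrayOf(0x0A))                       // LF -> move to the next line
}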
The approach above doesn't work with Apex3, and I also had no luck with
http://www.old.adtech.pl/upload/Extech_Printer_Command_Language%20Rev_H_05062009.pdf
even though page 1 of that manual says it applies to Apex3.
I think I am missing something. I have started looking at how to do it with the SDK features of Android_SDK_ESC_V1.01.17.01PRO.jar,
but I would prefer to do it by writing bytes directly.

The answers I found from this
manual
In short, the differences from the approach I described for 3nStar are:
1) Before printing, set a character set:
ESC F 1 // ESC t -- for 3nStar
2) Set a font for the text, for example:
ESC K 1 3 CR // ESC ! -- for 3nStar
3) Send a line of text with alignment.
For 3nStar I can use an alignment command before sending the text, like
ESC a 49 // centering
But for Apex3 I have to know the line length, which depends on the font, as well as the length of the string being printed.
Then I compute
freeSpace = lineLength - printingString.length
and put the spaces at the beginning of the line (right alignment),
at the end (left alignment), or divide them between both sides (centering).
So for both types of printers I use the same logic, which differs in only 3 places.
This is a simplified explanation, as the real code consists of several classes with hundreds of lines of code.
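A sketch of that padding step in Kotlin (illustrative only; it assumes lineLength is already known for the selected font, and the names are mine, not from the Apex3 SDK):

enum class Alignment { LEFT, CENTER, RIGHT }

// Emulate alignment by padding with spaces, as described in step 3.
// `lineLength` is assumed to be the number of characters per line for the current font.
fun alignForApex3(printingString: String, lineLength: Int, alignment: Alignment): String {
    val freeSpace = (lineLength - printingString.length).coerceAtLeast(0)
    return when (alignment) {
        Alignment.LEFT -> printingString + " ".repeat(freeSpace)     // spaces at the end
        Alignment.RIGHT -> " ".repeat(freeSpace) + printingString    // spaces at the beginning
        Alignment.CENTER -> {
            val left = freeSpace / 2                                 // divide between both sides
            " ".repeat(left) + printingString + " ".repeat(freeSpace - left)
        }
    }
}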

Related

IOException: write failed: EINVAL (Invalid argument) on UVC FileOutputStream in Kotlin

I'm trying to write Android Camera stream frames to the UVC Buffer using FileOutputStream. For context: the UVC Driver is working on the device and it has a custom built kernel.
I get 24 frames per second using imageAnalyzer:
imageAnalyzer = ImageAnalysis.Builder()
    .setTargetAspectRatio(screenAspectRatio)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888)
    ...
imageAnalysis.setAnalyzer(cameraExecutor) { image ->
    val buffer = image.planes[0].buffer
    val data = buffer.toByteArray()
    ...
}
Then, based on the UVC specifications, I build the frame header:
val header = ByteBuffer.allocate(26)
val frameSize = image.width * image.height * ImageFormat.getBitsPerPixel(image.format) / 8
val EOH = 0x01
val ERR = 0x00
val STI = 0x01
val REST = 0x00
val SRC = 0x00
val PTS = (System.currentTimeMillis() - referenceTime) * 10000
val endOfFrame = 0x01
val FID = (frameId).toByte()
Then I add all of the above to the header:
header.putInt(frameSize)
header.putShort(image.width.toShort())
header.putShort(image.height.toShort())
header.put(image.format.toByte())
header.put(((EOH shl 7) or (ERR shl 6) or (STI shl 5) or (REST shl 4) or SRC).toByte())
header.putLong(PTS)
header.put(endOfFrame.toByte())
header.put(FID)
Then I open the FileOutputStream and try to write the header and the image:
val uvcFileOutputStream = FileOutputStream("/dev/video3", true)
uvcFileOutputStream.write(header.toByteArray() + data)
uvcFileOutputStream.close()
I have tried tweaking the header/payload, but I'm still getting the same error:
java.io.IOException: write failed: EINVAL (Invalid argument)
at libcore.io.IoBridge.write(IoBridge.java:654)
at java.io.FileOutputStream.write(FileOutputStream.java:401)
at java.io.FileOutputStream.write(FileOutputStream.java:379)
What could I be doing wrong? Is the header format wrong?
I don't know the answer directly, but I was curious to take a look, and I have some findings. I focused on the Kotlin part, as I don't know about UVC and because I suspect the problem is there.
Huge assumption
Since there's no link to the specification I just found this source:
https://www.usb.org/document-library/video-class-v15-document-set
within the ZIP I looked at USB_Video_Payload_Frame_Based_1.5.pdf
Page 9, Section 2.1 Payload Header
I'm basing all my findings on this, so if I got this wrong, everything else is too. It could still lead to a solution, though, if you validate the same things.
Finding 1: HLE is wrong
HLE is the length of the header, not of the image data; you're putting the whole image size there (all the RGB byte data). Table 2-1 describes how the PTS and SCR bits control whether the PTS and SCR fields are present, which means that if they're 0 in BFH, the header is shorter. This is why HLE is either 2, 6, or 12.
A confirmation source, plus the fact that the field is 1 byte long (each row of Table 2-1 is 1 byte / 8 bits), means the header can only be up to 255 bytes long.
Finding 2: the whole header is misaligned
Since you're putting HLE with putInt, you're writing 4 bytes; from this point on everything in the header is misaligned, the flags end up depending on the image size, and so on.
Finding 3: SCR and PTS flag inconsistencies
Assuming I was wrong about findings 1 and 2, you're still setting the SCR (your SRC) and PTS flags to 0, but pushing the PTS as a long (8 bytes).
Finding 4: wrong source
Actually, something is really off at this point, so I looked at your referenced GitHub ticket and found a better example of what your code represents.
Sadly, I was unable to match up your header structure with it, so I'm going to assume that you are implementing something very similar to what I was looking at, because all the PDFs had pretty much the same header table.
Finding 5: HLE is wrong or missing
Assuming you do need to start with the image size, the HLE is still wrong, because it's derived from the image format's type rather than from the SCR and PTS flags.
Finding 6: BFH is missing fields
If you're following one of these specs, the BFH is always one byte with 8 bit flags. This is confirmed by how the shls in your code put it together and by the description of each flag (a flag is true/false, 1/0).
Finding 7: PTS is wrong
Multiplying something that is millisecond-precise by 10000 looks strange. The doc says "at most 450 microseconds"; if you're trying to convert between ms and us, the multiplier would be just 1000. Either way, the field is only an int (4 bytes) large, definitely not a long.
Finding 8: coding assistant?
After reading all this, I have a feeling that Copilot, ChatGPT or some other generator wrote your original code. This seems to be confirmed by the fact that you're looking for a reputable source.
Finding 9: reputable source example
If I were you, I would try to find a working example of this on GitHub, using a keyword search like this: https://github.com/search?q=hle+pts+sti+eoh+fid+scr+bfh&type=code The language doesn't really matter, since these are binary file/stream formats, so regardless of language they should be produced and read the same way.
Finding 10: bit order
Have a look at big-endian / little-endian byte order. If you look at Table 2-1 in the PDF I linked, you can see which bit should map to which byte. You can easily specify the order you need on the buffer BEFORE writing to it; by the looks of the PDF it is header.order(ByteOrder.LITTLE_ENDIAN). I think conventionally 0 is the lowest bit and 31 is the highest (I can't cite a source on this; I seem to remember it from uni). Bit 0 should be the 2^0 component (1) and bit 7 the 2^7 component (128); reversing it would make things much harder to compute and comprehend. So PTS [7:0] means that byte is the lowest 8 bits of the 32-bit PTS number.
If you link to your specification source, I can revise what I wrote, but likely will find very similar guesses.
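To illustrate findings 1, 2, 6, 7 and 10 together, a header built along those lines might look roughly like this. This is only a sketch based on my reading of USB_Video_Payload_Frame_Based_1.5.pdf (flag layout, 4-byte PTS, little-endian fields), not something I have tested against your driver:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Rough sketch of a 6-byte payload header: HLE + BFH + 4-byte PTS.
// SCR is omitted here, so its flag stays 0 and the header length is 6, not 12.
fun buildPayloadHeader(ptsTicks: Int, frameId: Int, endOfFrame: Boolean): ByteArray {
    val hle = 6                                  // header length: 2 + 4 (PTS present, no SCR)
    var bfh = frameId and 0x01                   // bit 0: FID, toggles every frame
    if (endOfFrame) bfh = bfh or (1 shl 1)       // bit 1: EOF
    bfh = bfh or (1 shl 2)                       // bit 2: PTS present
    // bits 3 (SCR), 4 (RES), 5 (STI), 6 (ERR) left at 0 in this sketch
    bfh = bfh or (1 shl 7)                       // bit 7: EOH
    return ByteBuffer.allocate(hle)
        .order(ByteOrder.LITTLE_ENDIAN)          // multi-byte fields are little-endian
        .put(hle.toByte())
        .put(bfh.toByte())
        .putInt(ptsTicks)                        // PTS is 4 bytes, not a long
        .array()
}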

EOS POS printer Font size - Android

I am trying to decrease the font size for printing but I am unable to do it. I have used this for a small font size:
val FONT_SMALL = byteArrayOf(0x1B, 0x21, 0x01)
But I want to decrease the font size even more.
It depends on the vendor and model of the printer you are using.
For example, on EPSON printers, specifying Font C with the following command will result in a smaller font,
but whether it is supported depends on the printer model.
ESC M
In response to the comment:
This is a command that changes the printer font for subsequent prints.
After sending this command, if you send the character string data you want to print, it will be printed with the changed font size.
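As a sketch, assuming an ESC/POS-compatible EPSON printer and an already-opened output stream (ESC M n selects font A/B/C with n = 0/1/2; support varies by model):

import java.io.OutputStream

// ESC M n -- select character font (0 = Font A, 1 = Font B, 2 = Font C).
// Font C, where supported, is usually the smallest of the three.
fun selectFontC(printer: OutputStream) {
    printer.write(byteArrayOf(0x1B, 0x4D, 0x02))   // ESC M 2
    printer.write("small text\n".toByteArray())    // subsequent text uses the new font
}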

Parse only a specific part of image with Tesseract

I am trying to use Tesseract OCR on Android to read the state of a gas meter when you take a picture of it:
This is the output when I parse this image:
vb"
22% BK-G4T ||||||||I||||I|||ii\|||\
’ 64 2007
22?: 06.0"! 'm'lm Mm. 23212274 ,
v 2,0 dm’ 1
pmn 0_5 bar tm ~25°C v‘40"(1 I
1amp é 0_o1m’ sb15°cl :Sp 20°c l
'I ELSTEQ~I¢¢>>InstrogwnSs HB Z _ 18 _ 1013 . ‘
a, 069373593435- 3 I
i'23212214 Y _ w w V'
g
The idea is to extract the first 5 digits of the state of the gas meter (06937 in this image).
My question is: is there a way to train Tesseract to parse only this part of the image? Absolute coordinates are not an option since every picture will be different. I am guessing the best logic would be something like: parse only white numbers on a black background.
By changing the page segmentation mode (psm), tesseract 4.00.00 alpha is able to read the meter line correctly as 06937598-m3, separately from the other characters.
The command used is:
tesseract meter.jpg output --psm 11 -l eng
--psm 11 means "Sparse text. Find as much text as possible in no particular order".
Here is the output file showing all the ASCII control characters.
If --psm 11 works on other meter images, then you could just search for -m3 at the end of a line to extract the whole meter line. With that, you can get the first 5 digits right away.
Hope this helps.
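On Android, the same idea could look roughly like this, assuming the tess-two library's TessBaseAPI and trained data already on the device; the line/regex handling is only illustrative:

import com.googlecode.tesseract.android.TessBaseAPI

// Sketch: run OCR in sparse-text mode (the --psm 11 equivalent) and pull the
// first 5 digits of the line that ends in "m3".
fun readMeter(dataPath: String, photo: android.graphics.Bitmap): String? {
    val tess = TessBaseAPI()
    tess.init(dataPath, "eng")                                      // dataPath contains tessdata/eng.traineddata
    tess.setPageSegMode(TessBaseAPI.PageSegMode.PSM_SPARSE_TEXT)
    tess.setImage(photo)
    val text = tess.getUTF8Text()
    tess.end()
    val meterLine = text.lines().firstOrNull { it.trim().endsWith("m3") } ?: return null
    return Regex("\\d").findAll(meterLine).take(5).joinToString("") { it.value }
}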

CDMA PDU parsing on Android

I have written a program to decode a CDMA 3GPP2 point-to-point SMS message. I tested it on a couple of CDMA PDU hex strings I found on the internet, and it works perfectly. However, when I try to apply it to incoming text messages on the Android platform, it always fails.
I took a look at the incoming PDU, and it doesn't seem to follow the same pattern I am used to seeing. Can anyone explain what format this PDU is in, or what I am missing to correctly decode it? Are there additional headers or fields I am not taking into account?
Example PDU pulled from an incoming text message on my phone:
000000000000100200000000000000000A36373839313031363734000000000000000000001B000310864D000306120624205611010B104C2CF9F3F5EBD73E7000
All of the CDMA PDUs I found and tested my parser on look more like:
00000210020207028CE95DCC65800601FC08150003168D3001061024183060800306101004044847
Carrier: Verizon
Phone: Samsung Galaxy S Fascinate running Android 2.3.3
See the javadoc from $SDK/sources/android-16/com/android/internal/telephony/cdma/SmsMessage:
/**
* Creates byte array (pseudo pdu) from SMS object.
* Note: Do not call this method more than once per object!
*/
...so it's not following any particular CDMA standard. You can decode it nevertheless; so, in fine ASCII art:
00000000                        messageType
00001002                        teleService
00000000                        serviceCategory
00                              digitMode
00                              numberMode
00                              ton
00                              npi
0A                              len (number of digits)
36373839313031363734            src ("6789101674")
00000000                        bearerReply
00                              replySeqNo
00                              errorClass
00                              causeCode
0000001B                        bearerDataLength
000310864D                      bearer data: messageID (id, len, data)
00                              XX
0306120624205611                bearer data: msts (id, len, data) -> 2012/06/24 20:56:11
010B104C2CF9F3F5EBD73E7000      bearer data: userdata (id, len, data)
Note that I think you made a cut/paste error in your message: the 00 byte marked 'XX' shouldn't be there. Luckily it's easy to spot the date and work backwards. So this is a message from 6789101674 with userdata
104C2CF9F3F5EBD73E7000, the first five bits of which show that it's 7-bit encoded (0x02). Having shifted the remainder of the userdata 5 bits to the left, we're left with:
09859f3e7ebd7ae7ce00
--                    len (septets): 9 septets == 63 bits, so we expect 8 bytes of body
  ----------------    7bit-body
So your 7bit-body decoded is "Bggguuugg".
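For reference, the MSB-first septet extraction described above can be sketched like this in Kotlin (my own illustration, not the Android-internal decoder):

// Decode MSB-first packed 7-bit ASCII, as used in the userdata above.
// `packed` starts at the body bytes (after the septet count), `septets` is that count.
fun decode7BitAscii(packed: ByteArray, septets: Int): String {
    val sb = StringBuilder()
    for (i in 0 until septets) {
        val bitOffset = i * 7
        var value = 0
        for (b in 0 until 7) {                    // collect 7 consecutive bits
            val bit = bitOffset + b
            val byteIndex = bit / 8
            val bitInByte = 7 - (bit % 8)         // MSB first within each byte
            value = (value shl 1) or ((packed[byteIndex].toInt() shr bitInByte) and 1)
        }
        sb.append(value.toChar())
    }
    return sb.toString()
}

// Example with the body from above: 85 9f 3e 7e bd 7a e7 ce -> "Bggguuugg"
fun main() {
    val body = byteArrayOf(0x85.toByte(), 0x9f.toByte(), 0x3e, 0x7e,
                           0xbd.toByte(), 0x7a, 0xe7.toByte(), 0xce.toByte())
    println(decode7BitAscii(body, 9))
}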

How do you send an extended-ascii AT-command (CCh) from Android bluetooth to a serial device?

This one really has me banging my head. I'm sending alphanumeric data from an Android app, through the BluetoothChatService, to a serial bluetooth adaptor connected to the serial input of a radio transceiver.
Everything works fine except when I try to configure the radio on-the-fly with its AT-commands. The AT+++ (enter command mode) is received OK, but the problem comes with the extended-ascii characters in the next two commands: Changing the radio destination address (which is what I'm trying to do) requires CCh 10h (plus 3 hex radio address bytes), and exiting the command mode requires CCh ATO.
I know the radio can be configured OK because I've done it on an earlier prototype with serial commands from PIC BASIC, and it can also be configured by entering the commands directly from HyperTerminal. Both of these methods somehow convert that pesky CCh into a form the radio understands.
I've tried just about everything an Android noob could possibly come up with to finagle the encoding, such as:
private void command_address() {
    byte[] addrArray = {(byte) 0xCC, 16, 36, 65, 21, 13};
    CharSequence addrvalues = EncodingUtils.getString(addrArray, "UTF-8");
    sendMessage((String) addrvalues);
}
but no matter what, I can't seem to get that high-order byte (CCh/204/-52) to behave as it should. All other (< 127) bytes, command or data, transmit with no problem. Any help here would be greatly appreciated.
-Dave
Well... it turns out the BluetoothChat code re-creates the byte array with message.getBytes() before sending it to the service (after all, being chat code, it would normally only handle regular ASCII strings). As others on this site have pointed out, getBytes() can create encoding issues in some cases. So, for the purpose of sending these extended-ASCII commands, I don't mess with strings and just send the byte array to the service with
private void sendCommand(byte[] cmd) {
    mChatService.write(cmd);
}
The so-called command array is first initialized with placeholders for the hex radio address elements
byte[] addrArray = {(byte) 0xCC, 16, 0, 0, 0, 13};
and then filled in with the help of the conversion method
radioArray = HexStringToByteArray(radioAddr1);
which can be found here: HexStringToByteArray#stackoverflow
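For reference, an equivalent of that conversion plus the fill-in step could be sketched in Kotlin like this (the helper in the linked answer is Java; "244115" is just the example address bytes 36, 65, 21 from above):

// Convert a hex string such as "244115" into bytes and drop them into the
// placeholder slots of the command array {0xCC, 16, 0, 0, 0, 13}.
fun hexStringToByteArray(hex: String): ByteArray =
    ByteArray(hex.length / 2) { i ->
        hex.substring(i * 2, i * 2 + 2).toInt(16).toByte()
    }

fun buildAddressCommand(radioAddr: String): ByteArray {
    val cmd = byteArrayOf(0xCC.toByte(), 16, 0, 0, 0, 13)   // address placeholders at indices 2..4
    hexStringToByteArray(radioAddr).copyInto(cmd, destinationOffset = 2)
    return cmd
}

// e.g. sendCommand(buildAddressCommand("244115"))  // same bytes as {0xCC, 16, 36, 65, 21, 13}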
