IOException: write failed: EINVAL (Invalid argument) on UVC FileOutputStream in Kotlin - android

I'm trying to write Android camera stream frames to the UVC buffer using FileOutputStream. For context: the UVC driver is working on the device, which has a custom-built kernel.
I get 24 frames per second using imageAnalyzer:
imageAnalyzer = ImageAnalysis.Builder()
.setTargetAspectRatio(screenAspectRatio)
.setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888)
...
imageAnalysis.setAnalyzer(cameraExecutor) { image ->
val buffer = image.planes[0].buffer
val data = buffer.toByteArray()
...
}
Then, based on the UVC specification, I build the frame header:
val header = ByteBuffer.allocate(26)
val frameSize = image.width * image.height * ImageFormat.getBitsPerPixel(image.format) / 8
val EOH = 0x01
val ERR = 0x00
val STI = 0x01
val REST = 0x00
val SRC = 0x00
val PTS = (System.currentTimeMillis() - referenceTime) * 10000
val endOfFrame = 0x01
val FID = (frameId).toByte()
Add all of the above to the header:
header.putInt(frameSize)
header.putShort(image.width.toShort())
header.putShort(image.height.toShort())
header.put(image.format.toByte())
header.put(((EOH shl 7) or (ERR shl 6) or (STI shl 5) or (REST shl 4) or SRC).toByte())
header.putLong(PTS)
header.put(endOfFrame.toByte())
header.put(FID)
Open the FileOutputStream and try to write the header and the image:
val uvcFileOutputStream = FileOutputStream("/dev/video3", true)
uvcFileOutputStream.write(header.toByteArray() + data)
uvcFileOutputStream.close()
I tried tweaking the header/payload but I'm still getting the same error:
java.io.IOException: write failed: EINVAL (Invalid argument)
at libcore.io.IoBridge.write(IoBridge.java:654)
at java.io.FileOutputStream.write(FileOutputStream.java:401)
at java.io.FileOutputStream.write(FileOutputStream.java:379)
What could I be doing wrong? Is the header format wrong?

I don't know the answer directly, but I was curious enough to look and have some findings. I focused on the Kotlin part, as I don't know about UVC, and because I suspect the problem is there.
Huge assumption
Since there's no link to the specification, I just found this source:
https://www.usb.org/document-library/video-class-v15-document-set
Within the ZIP I looked at USB_Video_Payload_Frame_Based_1.5.pdf,
Page 9, Section 2.1 Payload Header.
I'm basing all my findings on this, so if I got this wrong, everything else is too. It could still lead to a solution, though, if you validate the same things.
Finding 1: HLE is wrong
HLE is the length of the header, not of the image data. You're putting the whole image size there (all the image byte data). Table 2-1 describes how the PTS and SCR bits control whether the PTS and SCR fields are present; if they're 0 in BFH, the header is shorter. This is why HLE is either 2, 6, or 12.
Confirming this: the field is 1 byte long (each row of Table 2-1 is 1 byte / 8 bits), which means the header can only be up to 255 bytes long.
Finding 2: the whole header is misaligned
Since you're putting HLE with putInt, you're writing 4 bytes; from this point on everything in the header is misaligned: the flags end up depending on the image size, and so on.
Finding 3: SCR and PTS flag inconsistencies
Assuming I was wrong about 1 and 2: you're still setting the SCR (your SRC) and PTS bits to 0, yet pushing a long (8 bytes) for the PTS.
Finding 4: wrong source
Actually, something is really off at this point, so I looked at the GitHub ticket you referenced and found a better example of what your code represents.
Sadly, I was unable to match up your header structure with it, so I'm going to assume that you are implementing something very similar to what I was looking at, because all the PDFs had pretty much the same header table.
Finding 5: HLE is wrong or missing
Assuming you do need to start with the image size, the HLE is still wrong, because what ends up in it is the image format's type rather than a header length tied to the SCR and PTS flags.
Finding 6: BFH is missing fields
If you're following one of these specs, the BFH is always one byte of 8 flag bits. This is confirmed by how the shl operations assemble it in your code and by the descriptions of each flag (flag = true/false, i.e. 1/0).
Finding 7: PTS is wrong
Multiplying something that is millisecond-precise by 10000 looks strange. The doc says "at most 450 microseconds"; if you're trying to convert between ms and µs, the multiplier would be just 1000. Either way, the field is only an int (4 bytes) wide, definitely not a long.
Finding 8: coding assistant?
After reading all this, I have a feeling that Copilot, ChatGPT or another generator wrote your original code. This seems to be confirmed by the fact that you're looking for a reputable source.
Finding 9: reputable source example
If I were you, I would try to find a working example of this on GitHub, using a keyword search like this: https://github.com/search?q=hle+pts+sti+eoh+fid+scr+bfh&type=code. The language doesn't really matter, since these are binary file/stream formats; regardless of the language, they should be produced and read the same way.
Finding 10: bit order
Have a look at big-endian vs little-endian byte order. If you look at Table 2-1 in the PDF I linked, you can see which bit should map to which byte. You can easily specify the order you need on the buffer BEFORE writing to it; by the looks of the PDF it is header.order(ByteOrder.LITTLE_ENDIAN). Conventionally, bit 0 is the lowest bit and bit 31 is the highest (I can't cite a source on this, I seem to remember it from uni): bit 0 is the 2^0 component (1) and bit 7 is the 2^7 component (128). Reversing that would make things much harder to compute and comprehend. So PTS [7:0] means that byte is the lowest 8 bits of the 32-bit PTS number.
If you link to your specification source, I can revise what I wrote, but I will likely arrive at very similar guesses.
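In the meantime, pulling findings 1, 3 and 10 together, here is a minimal sketch of what I think the payload header could look like with PTS present and SCR absent (so HLE = 6). The function name and parameters are mine, and I'm assuming the bit layout of Table 2-1 (D0 = FID up to D7 = EOH), so treat it as a starting point, not a verified implementation:
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun buildPayloadHeader(frameId: Int, endOfFrame: Boolean, pts: Long): ByteArray {
    // 1 byte HLE + 1 byte BFH + 4 bytes PTS (SCR absent) = 6 bytes
    val header = ByteBuffer.allocate(6).order(ByteOrder.LITTLE_ENDIAN)
    val bfh = (1 shl 7) or                          // D7 EOH: header is complete
              (0 shl 6) or                          // D6 ERR: no error
              (0 shl 5) or                          // D5 STI: not a still image
              (0 shl 4) or                          // D4 RES: reserved
              (0 shl 3) or                          // D3 SCR: absent, so no 6-byte SCR field
              (1 shl 2) or                          // D2 PTS: present, so a 4-byte PTS follows
              ((if (endOfFrame) 1 else 0) shl 1) or // D1 EOF: last payload of the frame
              (frameId and 0x01)                    // D0 FID: toggles on every new frame
    header.put(6.toByte())                          // HLE: length of this header, not of the image
    header.put(bfh.toByte())
    header.putInt(pts.toInt())                      // PTS: 4 bytes, in the device clock's units
    return header.array()
}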


BLE Heart Rate Sensor Value Interpretation

I have an Android App where I get Heart Rate Measurements from a Polar H10 Device.
I'm totally lost on how to interpret the heart rate. Various links to the bluetooth.com site are resulting in 404 errors unfortunately.
The characteristic's value is, for example:
[16, 59, 83, 4]
From what I understood, the second byte (59) is the heart rate in BPM. But this does not seem to be decimal, as the value goes up to 127 and then continues with -127, -126, -125, ... It is not hex either.
I tried (in kotlin)
characteristic.value[1].toUInt()
characteristic.value[1].toInt()
characteristic.value[1].toShort()
characteristic.value[1].toULong()
characteristic.value[1].toDouble()
All values freak out as soon as the -127 appears.
Do I have to convert the 59 to binary (59=111011) and see it in there? Please give me some insight.
### Edit (12th April 2021) ###
What I do to get those values is a BluetoothDevice.connectGatt().
Then hold the GATT.
In order to get heart rate values I look for
Service 0x180d and its
characteristic 0x2a37 and its only
descriptor 0x2902.
Then I enable notifications by setting 0x01 on the descriptor. I then get ongoing events in the GattClientCallback.onCharacteristicChanged() callback. I will add a screenshot below with all data.
From what I understood, the response should be 6 bytes long instead of 4, right? What am I doing wrong?
In the screenshot you can see the characteristic at the very top. It is linked to the service 180d, and the characteristic holds the 4-byte value at the bottom.
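For reference, the notification-enabling step described above usually looks roughly like this against the standard Android BluetoothGatt API (the function name is mine, the UUIDs are the standard 16-bit ones expanded, and error/null handling is omitted):
import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattDescriptor
import java.util.UUID

fun enableHeartRateNotifications(gatt: BluetoothGatt) {
    val service = gatt.getService(UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb"))
    val hrMeasurement = service.getCharacteristic(UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb"))
    gatt.setCharacteristicNotification(hrMeasurement, true)          // enable locally
    val cccd = hrMeasurement.getDescriptor(UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"))
    cccd.value = BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE   // the 0x01, 0x00 payload
    gatt.writeDescriptor(cccd)                                       // enable on the remote side
}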
See Heart Rate Value in BLE for the links to the documents. As in that answer, here's the decode:
Byte 0 - Flags: 16 (0001 0000)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bit 1-2 - Sensor Contact Status: 00 => Not supported or detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
So the byte after the flags is the heart rate in UINT8 format, and the next two bytes are an RR interval.
To read this in Kotlin:
characteristic.getIntValue(FORMAT_UINT8, 1)
This returns a heart rate of 59 bpm.
And ignore the other two bytes unless you want the RR.
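Putting that together, here is a minimal Kotlin sketch of the decode (the helper name and the raw ByteArray input are my own; it assumes the Energy Expended field is absent, as in the flags above):
fun parseHeartRate(value: ByteArray): Pair<Int, List<Int>> {
    val flags = value[0].toInt() and 0xFF
    val hrIsUint16 = (flags and 0x01) != 0   // bit 0: 0 = UINT8, 1 = UINT16
    val rrPresent = (flags and 0x10) != 0    // bit 4: RR-Intervals follow
    var offset = 1
    val bpm: Int
    if (hrIsUint16) {
        bpm = (value[offset].toInt() and 0xFF) or ((value[offset + 1].toInt() and 0xFF) shl 8)
        offset += 2
    } else {
        bpm = value[offset].toInt() and 0xFF
        offset += 1
    }
    val rrIntervals = mutableListOf<Int>()
    if (rrPresent) {
        while (offset + 1 < value.size) {
            // each RR value is UINT16, little-endian, in units of 1/1024 s
            rrIntervals += (value[offset].toInt() and 0xFF) or ((value[offset + 1].toInt() and 0xFF) shl 8)
            offset += 2
        }
    }
    return bpm to rrIntervals
}
For the [16, 59, 83, 4] value above this would give 59 bpm and a single RR interval of 1107, i.e. 1107/1024 ≈ 1.08 s.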
It seems I found a way, by retrieving the value as follows:
val hearRateDecimal = characteristic.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 1)
Two things are important:
first - the UINT8 format (although I don't know when to use UINT8 and when UINT16; I actually thought I needed UINT16, since the first byte is 16 - see the question above)
second - the offset parameter of 1
What I now get is an Integer even beyond 127 -> 127, 128, 129, 130, ...

Print strings on Apex3 using printer command language

How to print strings on Apex3 using printer command language or SDK?
I created a library for an Android application to print Bitmap images on mobile thermal printers:
Apex3, 3nStar, PR3, Bixolon, Sewoo LK P30, using the appropriate SDKs.
It works fine but is pretty slow: every ticket of 30 cm length takes 20-40 seconds to print (depending on the type of printer).
For 3nStar and Apex3 I started making improvements to speed up printing.
For 3nStar I developed a class which prints a header logo and formatted text with Spanish characters
(different alignments and different fonts for different parts of the ticket),
so the ticket looks very similar to the Bitmap image but printing takes only 6 seconds.
Now I have to do the same for Apex3, and here I got stuck.
How I do it on 3nStar for strings:
I write bytes to the outputStream, which are commands telling the printer what to do:
outputStream.write(some_bytes)
The first command is always
{0x1b, 0x74, 40} // ESC t ( -- [ISO8859-15 (Latin9)]
to print Spanish characters.
Then, in a loop, for n strings:
I choose a font
{0x1B, 0x21, 0x00} // ESC ! 0 -- 0 is normal, 8 is bold, etc.
where by changing the last byte I print different fonts: normal, bold, double height, double width, and combinations.
Next I choose an alignment
{0x1b, 0x61, 48} //Esc a 48 for left, 49 for center, 50 for right
Then I convert the string to bytes using ISO_8859_1 (to print Spanish characters) and also write it to the outputStream.
outputStream.write(messageString.getBytes(StandardCharsets.ISO_8859_1))
And the last byte to send is
{0x0a} // Move to the next line
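For reference, the whole 3nStar sequence above wrapped into one hedged Kotlin sketch (the function name and parameters are mine; the byte values are the ones quoted above):
import java.io.OutputStream
import java.nio.charset.StandardCharsets

fun printLine(out: OutputStream, text: String, font: Byte, alignment: Byte) {
    out.write(byteArrayOf(0x1B, 0x21, font))        // ESC ! n -- select font (0 normal, 8 bold, ...)
    out.write(byteArrayOf(0x1B, 0x61, alignment))   // ESC a n -- 48 left, 49 center, 50 right
    out.write(text.toByteArray(StandardCharsets.ISO_8859_1))
    out.write(byteArrayOf(0x0A))                    // LF -- move to the next line
}

// Once per ticket, before the loop over the lines:
// out.write(byteArrayOf(0x1B, 0x74, 40))           // ESC t ( -- ISO8859-15 (Latin9)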
The above approach doesn't work with Apex3. I also failed using
http://www.old.adtech.pl/upload/Extech_Printer_Command_Language%20Rev_H_05062009.pdf
even though page 1 of that document says it applies to Apex3.
I think I'm missing something. I have started looking at how to do it using some SDK feature of Android_SDK_ESC_V1.01.17.01PRO.jar,
but I would prefer to do it by writing bytes directly.
I found the answers in this manual.
Briefly, the differences from the approach I described for 3nStar are:
1) Before printing, set a character set
ESC F 1 // ESC t - for 3nStar
2) Set a font for the text, for example
ESC K 1 3 CR // ESC F 1 - for 3nStar
3) Send a line of text with alignment.
For 3nStar I can use an alignment command before sending the text, like
ESC a 49 // Centering
But for Apex3 I have to know the line length (which depends on the type of font) and the length of the string being printed;
then I compute
freeSpace = lineLength - printingString.length
and put spaces at the beginning of the line (right alignment),
at the end (left alignment), or divide them between both ends (centering), as sketched below.
So for both types of printers I use the same logic, which differs only in these 3 places.
This is a simplified explanation, as the real code consists of several classes with hundreds of lines of code.
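The alignment-by-padding logic from point 3 might look roughly like this in Kotlin (lineLength, the helper name and the alignment constants are my own assumptions, not Apex3 commands):
fun padForAlignment(text: String, lineLength: Int, alignment: Int): String {
    val freeSpace = (lineLength - text.length).coerceAtLeast(0)
    return when (alignment) {
        0 -> text + " ".repeat(freeSpace)                        // left: spaces at the end
        2 -> " ".repeat(freeSpace) + text                        // right: spaces at the beginning
        else -> {                                                // center: divide the free space
            val left = freeSpace / 2
            " ".repeat(left) + text + " ".repeat(freeSpace - left)
        }
    }
}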

Assertion failed (scn == 3 || scn == 4) in void cv::cvtColor(cv::InputArray, cv::OutputArray, int, int), file /../Linux/./../src/color.cpp, line 8000

I am trying to learn OpenCV with native code and I am taking reference from here.
I successfully built the project using ndk-build.
Now I want to make a change in the scan.cpp file, which is responsible for getting the points of the image, cropping it, scanning it, and setting the color.
I want to pass a different argument on line 321 of the file, which is
cvtColor(mbgra, dst, CV_BGR2GRAY);
Can I pass any other argument instead of CV_BGR2GRAY?
If yes HOW? If no WHY?
Please guide me and tell me if I am missing anything.
Thank you.
Yes, you can give any argument you want. Would you get a reasonable output? It depends. CV_BGR2GRAY expects BGR (3 channel) input and will output gray (1 channel).
If your input is 3-channel BGR (and even if it isn't really BGR, OpenCV won't care) you can use any 3-channel conversion, for example CV_BGR2HSV, which would result in a 3-channel HSV output.
If your input is 1 channel, then you obviously won't be able to use BGR2GRAY.
Mat bgraImage = imread("BGRA_IMAGE.png", -1); // 4 channel input image
Mat grayImage = imread("GRAY_IMAGE.png", CV_LOAD_IMAGE_GRAYSCALE); // 1 channel input image
Mat result;
cvtColor(bgraImage, result, CV_BGRA2GRAY); // CORRECT, input 4 channel, output will be 1 channel
cvtColor(bgraImage, result, CV_BGR2GRAY); // ALSO CORRECT
cvtColor(grayImage, result, CV_BGR2GRAY); // INCORRECT & will crash, input is 1 channel, expecting 3 or 4
cvtColor(grayImage, result, CV_GRAY2BGR); // CORRECT, input is 1 channel, output is 3 channel
You can see all possible color conversions here and read more about them here.

How can I visualize PCM data

I have a PCM data file that I know is valid. I can play it, edit it into pieces, etc., and it will always play, as will the individual pieces.
But when I try to translate the bytes into shorts with
bytes[i] | (bytes[i+1] << 8)
I don't see anything that visually looks like a waveform. The file is 16-bit, single channel, 44100 Hz sampling.
As a test I recorded mostly silence with one very loud sound in the middle. Still, the chart I made from my data looked like every other chart when I try this. Am I somehow doing this wrong? Or am I misunderstanding what I'm reading/attempting?
All I'm looking to do is detect a very low threshold to find a word gap.
Thanks
My psychic powers suggest this is a big-endian vs little-endian thing.
If the source file stores samples in big-endian, this is likely what you want:
(bytes[i] << 8) | (bytes[i+1])
For what it's worth, WAV files are little-endian.
Other possibilities include:
I can't see your code, but maybe it is only incrementing i by 1 instead of 2 on every loop iteration (a common mistake I've made in my own code).
Signed types or casting. Be explicit about how you do the bit operations with respect to signed vs. unsigned. I'm not sure whether "bytes" is an array of "unsigned char" or "char", nor whether "char" defaults to signed or unsigned on your platform. This might be better:
unsigned char b1 = (unsigned char)(bytes[i]);
unsigned char b2 = (unsigned char)(bytes[i+1]);
short sample = (short)((b1 << 8) | (b2));
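For the little-endian case (which is what raw PCM taken from a WAV would normally be), the same conversion as a small Kotlin sketch (the helper name is mine):
fun toSamplesLittleEndian(bytes: ByteArray): ShortArray {
    val samples = ShortArray(bytes.size / 2)
    for (i in samples.indices) {
        val lo = bytes[2 * i].toInt() and 0xFF     // mask the low byte so it doesn't sign-extend
        val hi = bytes[2 * i + 1].toInt()          // the high byte carries the sign
        samples[i] = ((hi shl 8) or lo).toShort()
    }
    return samples
}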

Why does Android Shift-JIS encoding of Yen (U+00A5) symbol produce -4,-4 ?

Running the following code seems to generate the wrong values:
byte[] data = "\u00a5".getBytes("Shift_JIS");
It produces [ -4, -4 ], but I expect [ 0x5c ]
I've tried various alternative names, "Shift-JIS", "shift_jis", "cp932" and all produce the same result.
When I feed the resulting data into the Shift-JIS decoder, I get an exception: java.nio.charset.UnmappableCharacterException: Length: 2
That is, with the decoder configured as follows:
Charset charset = Charset.forName("Shift_JIS");
CharsetDecoder decoder = charset.newDecoder()
.onMalformedInput(CodingErrorAction.REPORT)
.onUnmappableCharacter(CodingErrorAction.REPORT);
But given the output of the encoder looks wrong, my guess is that the decoder is irrelevant. My point is that regardless of the actual bytes, the encoder generates data that it can't decode.
The full width Yen (U+FFE5) encodes to [ -127 (0x81), -113 (0x8F) ], and decodes correctly.
Strangely, if I try to decode [ 92 (0x5C) ], which is what I think the Shift-JIS encoding of the single-width Yen is, the Android/Java decoder produces a backslash, leaving the character as 92.
If the encoder didn't support a given character, I would expect a replacement character such as '?'. But -4 (0xFC) doesn't even seem to be valid Shift-JIS. It's not even the Unicode replacement character U+FFFD.
Using the following line I can see that the encoder seems to be configured to use [-4, -4]:
Charset.forName("Shift_JIS").newEncoder().replacement()
So why isn't the single width Yen mapped in Shift-JIS?
Is [-4, -4] a sensible encoder replacement?
Why doesn't the decoder support 0x5C mapping to Yen (U+00A5)?
If 0x5C is not the correct encoding, what is?
A partial answer: back when Microsoft created its east-Asian code pages for Windows, like the Japanese code page 932 and Korean 949, they made the byte 0x5C render as a currency symbol (either a Yen sign or Won sign respectively) while still syntactically acting as a backslash character in file paths (so that a file path on a Japanese system might look like
C:¥Documents¥something.doc
). Thus the byte was in a sense a Yen sign, but also in a sense a backslash; the same byte was even rendered as a different one of these symbols depending upon the font when on a Japanese system, according to http://archives.miloush.net/michkap/archive/2005/09/17/469941.html.
The lack of a consistent meaning of the symbol within the encoding means that while a Shift-JIS encoder can sensibly map both \ and ¥ to the byte 0x5C, a decoder trying to map a Shift-JIS-encoded string to a sequence of unicode code points has no way of knowing whether to convert the byte 0x5C to a backslash or to a yen sign; Japanese users used to make that choice via their font selection (if they were able to make it at all).
In the face of this unfixable ambiguity, all decoders seem to choose to decode 0x5C to a backslash. (At least, Python does this, and the WhatWG have a spec that dictates it.)
As for the details of what Java/Android in particular are doing when asked to encode a Yen sign in shift_jis, I'm afraid I don't know.
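If you just need the 0x5C byte out of the encoder, one hedged workaround (my own suggestion, not something the Charset API does for you) is to map U+00A5 to the ASCII backslash yourself before encoding, since that is the byte Japanese fonts render as a Yen sign:
import java.nio.charset.Charset

fun encodeShiftJis(text: String): ByteArray {
    val shiftJis = Charset.forName("Shift_JIS")
    // Pre-map the half-width Yen sign to backslash so it encodes as 0x5C.
    return text.replace('\u00A5', '\\').toByteArray(shiftJis)
}

// encodeShiftJis("\u00a5") should then give [ 0x5C ] instead of the [ -4, -4 ] replacement bytes.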
