To learn ARM NEON on Android, I tried to run some sample code, but I got an error message.
#include <arm_neon.h>

uint16_t in[8] = {0, 1, 2, 3, 4, 5, 6, 7};
uint16_t out[8];
uint16x8_t r;

r = vld1q_u16(&in[0]);
**vst1q_u16(&out[0], r);** <-- Here comes an error message
The error message is "Invalid Arguments".
I don't understand what the problem is.
vld1q_u16 works correctly and the value of r is also correct, but vst1q_u16 doesn't work.
You should use
r = vld1q_u16(in);
vst1q_u16(out, r);
Note that in and &in[0] are equivalent expressions in C, so this change alone shouldn't matter to the compiler. If the error persists, it is most likely a false positive from the IDE's static analyzer (Eclipse CDT is known to flag NEON intrinsics as "Invalid Arguments" even when the NDK toolchain compiles the code fine).
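For reference, here's a minimal self-contained round trip that should compile with the NDK (build with -mfpu=neon on armeabi-v7a; the printf is just to verify the lanes come back in order):

#include <arm_neon.h>
#include <stdio.h>

int main(void) {
    uint16_t in[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    uint16_t out[8];
    uint16x8_t r = vld1q_u16(in); /* load 8 lanes (128 bits) from in[0..7] */
    vst1q_u16(out, r);            /* store all 8 lanes to out, lane 0 first */
    for (int i = 0; i < 8; i++)
        printf("%u ", out[i]);    /* prints: 0 1 2 3 4 5 6 7 */
    printf("\n");
    return 0;
}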
I'm trying to write Android Camera stream frames to the UVC buffer using FileOutputStream. For context: the UVC driver is working on the device, and it has a custom-built kernel.
I get 24 frames per second using imageAnalyzer:
imageAnalyzer = ImageAnalysis.Builder()
    .setTargetAspectRatio(screenAspectRatio)
    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888)
    ...

imageAnalysis.setAnalyzer(cameraExecutor) { image ->
    val buffer = image.planes[0].buffer
    val data = buffer.toByteArray() // extension that copies the ByteBuffer into a ByteArray
    ...
}
Then, based on the UVC specification, I build the frame header:
val header = ByteBuffer.allocate(26)
val frameSize = image.width * image.height * ImageFormat.getBitsPerPixel(image.format) / 8
val EOH = 0x01
val ERR = 0x00
val STI = 0x01
val REST = 0x00
val SRC = 0x00
val PTS = (System.currentTimeMillis() - referenceTime) * 10000
val endOfFrame = 0x01
val FID = (frameId).toByte()
I add all of the above to the header:
header.putInt(frameSize)
header.putShort(image.width.toShort())
header.putShort(image.height.toShort())
header.put(image.format.toByte())
header.put(((EOH shl 7) or (ERR shl 6) or (STI shl 5) or (REST shl 4) or SRC).toByte())
header.putLong(PTS)
header.put(endOfFrame.toByte())
header.put(FID)
Finally, I open the FileOutputStream and try to write the header and the image:
val uvcFileOutputStream = FileOutputStream("/dev/video3", true)
uvcFileOutputStream.write(header.toByteArray() + data)
uvcFileOutputStream.close()
I tried tweaking the header/payload, but I'm still getting the same error:
java.io.IOException: write failed: EINVAL (Invalid argument)
at libcore.io.IoBridge.write(IoBridge.java:654)
at java.io.FileOutputStream.write(FileOutputStream.java:401)
at java.io.FileOutputStream.write(FileOutputStream.java:379)
What could I be doing wrong? Is the header format wrong?
I don't know the answer outright, but I was curious enough to dig in and have some findings. I focused on the Kotlin part, since I don't know UVC and I suspect the problem is there.
Huge assumption
Since there's no link to the specification I just found this source:
https://www.usb.org/document-library/video-class-v15-document-set
within the ZIP I looked at USB_Video_Payload_Frame_Based_1.5.pdf
Page 9, Section 2.1 Payload Header
I'm basing all my findings on this, so if I got this wrong, everything else is too. It could still lead to a solution, though, if you validate the same things.
Finding 1: HLE is wrong
HLE is the length of the header, not of the image data, yet you're putting the whole image size there (all the RGB byte data). Table 2-1 describes how the PTS and SCR bits control whether the PTS and SCR fields are present: if they're 0 in BFH, the header is shorter. This is why HLE is 2, 6, or 12.
Confirming this: the field is 1 byte long (each row of Table 2-1 is 1 byte / 8 bits), which means the header can only be up to 255 bytes long.
Finding 2: the whole header is misaligned
Since you're putting HLE with putInt, you're writing 4 bytes instead of 1. From this point on everything in the header is shifted: the flag byte ends up depending on the image size, and so on.
Finding 3: SCR and PTS flag inconsistencies
Assuming I was wrong about 1 and 2: you're still setting the SCR (SRC in your code) and PTS bits to 0, yet pushing the PTS as a long (8 bytes).
Finding 4: wrong source
Actually, something is really off at this point, so I looked at your referenced GitHub ticket and found a better example of what your code represents.
Sadly, I was unable to match your header structure to it, so I'm going to assume that you're implementing something very similar to what I was looking at, because all the PDFs have pretty much the same header table.
Finding 5: HLE is wrong or missing
Assuming you do need to start with the image size, the HLE is still wrong: what you're writing there is the image format's type, and its value isn't derived from the SCR and PTS flags the way the spec requires.
Finding 6: BFH is missing fields
If you're following one of these specs, the BFH is always one byte of 8 flag bits. This is confirmed by how the shls in your code put it together and by the description of each flag (true/false, i.e. 1/0).
Finding 7: PTS is wrong
Multiplying something that is millisecond-precise by 10000 looks strange. The doc says "at most 450 microseconds"; if you're trying to convert between ms and µs, the multiplier would be just 1000. Either way, the field is only an int (4 bytes), definitely not a long.
Finding 8: coding assistant?
After reading all this, I have a feeling that Copilot, ChatGPT or another generator wrote your original code. This seems confirmed by the fact that you're looking for a reputable source.
Finding 9: reputable source example
If I were you I would try to find a working example of this in GitHub, using keyword search like this: https://github.com/search?q=hle+pts+sti+eoh+fid+scr+bfh&type=code the languages don't really matter since these are binary file/stream formats, so regardless of language they should be produced and read the same way.
Finding 10: bit order
Have a look at big-endian vs. little-endian byte order. If you look at Table 2-1 in the PDF I linked, you can see which bit should map to which byte. You can easily specify the order you need on the buffer BEFORE writing to it; by the looks of the PDF it is header.order(ByteOrder.LITTLE_ENDIAN). Conventionally, bit 0 is the lowest bit and bit 31 is the highest (I can't cite a source on this; I seem to remember it from uni): bit 0 is the 2^0 component (1) and bit 7 is the 2^7 component (128). Reversing that would make things much harder to compute and comprehend. So "PTS [7:0]" means that byte is the lowest 8 bits of the 32-bit PTS number.
If you link to your specification source, I can revise what I wrote, but I'll likely arrive at very similar guesses.
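To make Findings 1, 2, 3 and 10 concrete, here's a minimal sketch of a 12-byte header (PTS and SCR bits set) as I read Table 2-1. The bit positions and the buildPayloadHeader helper are my assumptions, not verified against hardware, so check them against your copy of the spec:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Assumed Table 2-1 bit layout for BFH: 7=EOH, 6=ERR, 5=STI, 4=RES,
// 3=SCR, 2=PTS, 1=EOF, 0=FID. HLE counts the header itself: 2, 6 or 12.
fun buildPayloadHeader(ptsTicks: Int, scr: ByteArray, fid: Int, endOfFrame: Boolean): ByteArray {
    require(scr.size == 6) { "SCR is a 6-byte field" }
    val bfh = (1 shl 7) or                          // EOH: header is complete
              (1 shl 3) or                          // SCR field present
              (1 shl 2) or                          // PTS field present
              ((if (endOfFrame) 1 else 0) shl 1) or // EOF: last payload of this frame
              (fid and 0x01)                        // FID: toggles on each new frame
    val header = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN)
    header.put(12.toByte())  // HLE: length of this header, NOT the image size
    header.put(bfh.toByte())
    header.putInt(ptsTicks)  // PTS: 4 bytes, little-endian
    header.put(scr)          // SCR: 6 bytes
    return header.array()
}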
I have the following code (from a class member function of mine):
this->mLengOfPath = mFirst->mLengOfPath + mSecond->mLengOfPath;
unsigned short* data = mMiddle->mPathContainer;
mMiddle->mLengOfPath = 0;
for (int index = 0; index < mMiddle->mSize; index++) { //crash here
if (index % 2 == 1 && index > 2) {
mMiddle->mLengOfPath +=
GestureUtils::distance(data[index - 3], data[index - 2],
data[index - 1], data[index]);
}
}
In most cases, this code doesn't crash, but Crashlytics told me that my code "sometimes" crashes at line 4, and I don't understand why. If mMiddle were nullptr, it should have crashed at line 2 (I already dereference mMiddle there).
But Crashlytics consistently reports line 4 as the problem. Does anyone know how my code can go wrong at line 4?
Yes, it's UB if the pointer is invalid. But we have a particular platform and compiler in mind while talking about the Android NDK, plus a statistical tool. With native ARM code, lines 1, 2 and 4 may or may not crash on an invalid pointer; only a write through a null pointer is a near-guaranteed failure.
Line 3 will always fail if mMiddle is null, but may or may not fail if it's a stale pointer that still lands in a mapped data segment. A statistical tool will highlight line 4 because it is the most frequently executed one: the comparison expression is evaluated on every iteration, so failures on the other lines can get lost as statistical noise.
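If the suspicion is a stale or null mMiddle rather than a logic error, a defensive check before the loop would at least turn the mystery crash into something diagnosable (hypothetical sketch; I don't know the class's invariants):

// Hypothetical guard: bail out and log instead of letting the loop fault.
if (mMiddle == nullptr || mMiddle->mPathContainer == nullptr) {
    // log here (e.g. to Crashlytics) to learn which pointer was bad
    return;
}
if (mMiddle->mSize < 4) {
    return; // the distance call reads data[index - 3] .. data[index]
}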
I am trying to read and write the DACR on an ARM device running Linux (Android on a Nexus 5 :)). I have a kernel module. The relevant instructions are as follows:
MRC p15, 0, <Rd>, c3, c0, 0 ; Read DACR
MCR p15, 0, <Rd>, c3, c0, 0 ; Write DACR
I am using C code in the module with inline assembly. I wrote the following to read the current DACR value:
unsigned int x = 0;
__asm__("MRC p15, 0, r1, c3, c0, 0;" : "=r" (x));
printk(KERN_INFO "DACR read - value = %u", x);
The above didn't crash the kernel, and the value read out was 3920437248.
I am not able to get the DACR write instruction right. I was trying to follow this question and did the following (writing all 1s to the DACR as a test), but the device crashed and rebooted:
__asm__("MVN r1, #0;");
__asm__("MCR p15, 0, r1, c3, c0, 0;");
Can anyone advise how to write to the DACR correctly?
Also, how do I parameterize the above instruction? E.g., to initialize the DACR with the value of x, would the following be correct:
__asm__("MCR p15, 0, %0, c3, c0, 0;" :: "r" (x));
Oh, you're writing the register correctly alright.
The trouble is, the question is like this:
I am trying to engage reverse gear on my car driving on the motorway. I was trying to follow the directions in the handbook and moved the gear lever firmly into the "R" position, but my gearbox is now in bits all over the road. Can anyone advise how to engage reverse gear correctly?
You're on a live system. The kernel is already using domains. It needs access permissions to work correctly. If you declare open season by marking everything as Manager and removing all permission checks, copy-on-write no longer works; every process starts trashing the zero page via its initial mappings instead of triggering the allocation of real backing pages; cats and dogs live together; chaos.
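If you only want to experiment, a less destructive pattern (sketch only: it assumes domain 15 is unused by your particular kernel, which you must verify first) is to read-modify-write a single domain's two bits rather than setting every domain to Manager:

unsigned int dacr;

/* read the current DACR so the kernel's own domain settings survive */
__asm__ volatile("MRC p15, 0, %0, c3, c0, 0" : "=r" (dacr));

dacr &= ~(3u << (15 * 2)); /* clear domain 15's two-bit field */
dacr |= (1u << (15 * 2));  /* set domain 15 to Client (0b01)  */

/* write it back and make sure the change takes effect */
__asm__ volatile("MCR p15, 0, %0, c3, c0, 0" : : "r" (dacr));
__asm__ volatile("ISB");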
Is there any known problem with using time(NULL) on Android?
I tried running the following piece of code:
int32_t now1 = time(NULL);
int64_t now1_6 = (int64_t)time(NULL);
int32_t nt = (time_t)-1;
int64_t nt6 = (int64_t)-1;
And then log the result using the following format:
"Now1 is %d. Now1_6 is %lld. NT is %d. NT6 is %lld.\n", now1, now1_6, nt, nt6
This is the output I got:
01-05 19:10:15.354: I/SMOS(11738): Now1 is 1533390320. Now1_6 is 6585861276402981128. NT is 0. NT6 is 283493768396.
There were other problems as well, such as getting the same time values on different loop iterations.
I encountered these issues on both a virtual and a physical device running Android 4.0.3 (API 15), both of which were configured with the correct time. The output above is from the physical device.
I am led to believe there is a problem with this particular POSIX function in Bionic, but I could not find any reference to such, either online or in the Bionic docs.
You are passing an int64_t where a long long is expected, and the two are not guaranteed to be the same type on Android. If you want to print int64_t values with printf and %lld, either cast the number to long long first, or use the correct format macro from <inttypes.h>:
printf("%lld", (long long)now1_6);
or
printf("%" PRId64, now1_6);
As for time(NULL) giving false times, try gettimeofday instead:
struct timeval tm;
gettimeofday(&tm, NULL); /* tm.tv_sec holds the epoch seconds */
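Putting the two printing fixes together in one self-contained check (plain printf here; the same format rules apply to __android_log_print):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    int64_t now = (int64_t)time(NULL);

    /* both lines should print the same epoch-seconds value */
    printf("%lld\n", (long long)now); /* explicit cast matches %lld */
    printf("%" PRId64 "\n", now);     /* format macro from <inttypes.h> */
    return 0;
}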
I am using the official Android port of SDL 1.3, and using it to set up the GLES2 renderer. It works for most devices, but for one user, it is not working. Log output shows the following error:
error of type 0x500 glGetIntegerv
I looked up 0x500, and it refers to GL_INVALID_ENUM. I've tracked the problem down to the following code inside the SDL library (the full source is quite large and I cut out logging and basic error-checking lines, so let me know if I haven't included enough information here):
GLint nFormats = 0;
GLboolean hasCompiler = GL_FALSE;

glGetIntegerv( GL_NUM_SHADER_BINARY_FORMATS, &nFormats );
glGetBooleanv( GL_SHADER_COMPILER, &hasCompiler );
if( hasCompiler )
    ++nFormats;
rdata->shader_formats = (GLenum *) SDL_calloc( nFormats, sizeof( GLenum ) );
rdata->shader_format_count = nFormats;
glGetIntegerv( GL_SHADER_BINARY_FORMATS, (GLint *) rdata->shader_formats );
Immediately after the last line (the glGetIntegerv for GL_SHADER_BINARY_FORMATS), glGetError() returns GL_INVALID_ENUM.
The problem is that the GL_ARB_ES2_compatibility extension is not properly supported on that user's system.
The GL_INVALID_ENUM means the driver does not know the GL_NUM_SHADER_BINARY_FORMATS and GL_SHADER_BINARY_FORMATS enums, which are part of said extension.
In contrast, GL_SHADER_COMPILER was recognized, which is strange.
You can try the GL_ARB_get_program_binary extension and use these two enums instead:
#define GL_NUM_PROGRAM_BINARY_FORMATS 0x87fe
#define GL_PROGRAM_BINARY_FORMATS 0x87ff
Note that these are different from:
#define GL_SHADER_BINARY_FORMATS 0x8df8
#define GL_NUM_SHADER_BINARY_FORMATS 0x8df9
But they should do pretty much the same thing.
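A sketch of that fallback (the enum values come from the extension registry; the glGetError-based probe is a standard GL pattern, but verify it on the affected device):

#define GL_NUM_PROGRAM_BINARY_FORMATS 0x87fe
#define GL_PROGRAM_BINARY_FORMATS     0x87ff

GLint nFormats = 0;
glGetIntegerv( GL_NUM_SHADER_BINARY_FORMATS, &nFormats );
if( glGetError() == GL_INVALID_ENUM ) {
    /* ES2-compatibility enums not supported: query program binary formats */
    glGetIntegerv( GL_NUM_PROGRAM_BINARY_FORMATS, &nFormats );
}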