I'm running a custom Android app that scans for nearby Bluetooth Low Energy devices, and I am noticing that some devices have longer advertisement packets than the specification provides for. The scanning device is a Nexus 5 running Android 6.0.
I am calling result.getScanRecord().getBytes() on a ScanResult object to get the byte array.
I know that the ScanRecord's byte array is actually constructed from the advertising data (MAC address not included) and the scan response, so I expect 31 bytes each, for a total of 62 bytes in the array. This is the size of the array that I receive, but it looks like the advertising data spills over into the response portion of the array. The format follows the specification: the first byte of each AD structure (GAP entry) is the length, the next byte is the type, and the next length-1 bytes are the data.
But with this format, the devices in question have data fields that extend into the response portion. Here's an example of the array in hex, with each entry on a separate line:
02 01 06 (flags)
0D FF DF 00 57 30 46 30 30 33 43 45 56 5A (manufacturer specific data)
11 07 6D 69 73 66 69 74 A6 34 4A 7D 7F 95 01(<-expected end of advertise) 00 DA 3D (UUID, 128-bit)
07 09 46 6F 73 73 69 6C (device name)
03 03 12 18 (UUID, 16-bit)
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 (leftover bytes)
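The length/type/data layout described above can be walked with a short sketch (the helper name is mine; the byte array is the one from the dump, and the loop shows the 128-bit UUID entry crossing the 31-byte boundary):

```python
# Parse BLE advertising data into AD structures ("GAP entries" in the
# question): [length, type, data...], where length covers the type byte
# plus the data. A zero length byte means the rest is padding.
def parse_ad_structures(raw):
    entries = []
    i = 0
    while i < len(raw):
        length = raw[i]
        if length == 0:
            break
        entries.append((i, raw[i + 1], bytes(raw[i + 2:i + 1 + length])))
        i += 1 + length
    return entries

# The 62-byte array from the question (advertising data + scan response).
raw = bytes.fromhex(
    "020106"                                # flags
    "0dffdf005730463030334345565a"          # manufacturer specific data
    "11076d6973666974a6344a7d7f950100da3d"  # 128-bit UUID
    "0709466f7373696c"                      # device name "Fossil"
    "03031218"                              # 16-bit UUID
    + "00" * 15                             # leftover padding
)

for offset, ad_type, data in parse_ad_structures(raw):
    end = offset + 2 + len(data)            # first byte after this entry
    print("type 0x%02X spans bytes %d..%d" % (ad_type, offset, end - 1))
# The 128-bit UUID entry spans bytes 17..34, crossing the 31-byte
# advertising-data boundary that ends at byte 30.
```

Running this over the dump reproduces the observation in the question: the third entry's declared length pushes it past the advertising-data portion.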
I don't really care about the scan response data, but I am confused as to how a BLE device could send advertising packets larger than 31 bytes if it is on the 4.2 specification (I know Bluetooth 5 allows larger packets, but the manufacturer states it uses 4.2). I can also see the name of the device showing up as it should, but in the response portion.
Would anyone know why this is the case here? Thanks.
Probably those devices simply don't follow the specification (which requires that an AD structure not cross the packet boundary). So your device might send
02 01 06 0D FF DF 00 57 30 46 30 30 33 43 45 56 5A 11 07 6D 69 73 66 69 74 A6 34 4A 7D 7F 95
as the Advertising Data and
01 00 DA 3D 07 09 46 6F 73 73 69 6C 03 03 12 18
as the Scan Response Data. You can check in the HCI log, for example, whether this is correct. If so, you should complain to the manufacturer of that BLE device.
I need to stream the screen of my Windows PC to Android. I intend to use FFmpeg to capture the screen and encode using H.264 codec, send the stream through RTP and finally use MediaCodec to decode the video and display it on a SurfaceView.
I tried the following FFmpeg command:
ffmpeg -f gdigrab -i desktop -an -video_size 1920x1080 -f rtp rtp://192.168.0.12:23000
However, all the NAL units that result seem to be corrupted, because:
The forbidden_zero_bit (most significant bit) of the NAL unit header is 1. For example, header of the NAL unit shown below (the byte right after 0x00 0x00 0x01) is 0xB6, so clearly the most significant bit is equal to 1.
A lot of bytes in the NAL unit are equal to 0xFF. I don't actually know if they are supposed to be like this; they just seem weird to me.
This is the beginning of one of the NAL units outputted by FFmpeg, captured with Wireshark:
0000 00 00 01 b6 56 5a bc 7c fd de ea e7 72 ff ff ff
0010 ff ff ff ef 7d d7 ff bd 6f 5f ff ee d7 ba bf ff
0020 fd df bd 7b a5 ff ff ff ff ff fd d7 78 bf fd e2
0030 ff ff ff ff ff ff 7b fe eb ff ff ff ff ff ff ff
0040 fe f5 ff ff ff ff fd b4 c6 17 45 ba 7e f4 e9 fb
0050 d7 ef 7f de ff ff ff ff fd d7 ff 79 ff bc ff ff
0060 ff ff ff ff ff ba ff ff ff ff ff ff ff 7b ff f7
0070 27 ff ff ff de ff ff ff ff ff ff ff fe ef fd c7
0080 de ef 6f 7b db dd db 74 de dd 37 bd ef ff ff ff
0090 ff ff ff ff 77 bb ff 75 ee ee bf ff ff fb dd df
00a0 ee d7 79 5e 5f ff ff ff fb 9b ff fb d7 ff ff ff
00b0 de bf ff ff ff ff ff ff ff ff fb 9d ef bd df 00
00c0 00 8f 03 ef ff ff ff ff ff ff ff 7b f7 03 1f fd
00d0 ed e5 ba ef 5d d5 cc 5f ff ff ff ff ff ff ff ff
00e0 ff ff ff ee 06 37 be f4 f6 eb ff ff ff ff ff ff
00f0 ff ff ff ff ff ff ff ba 5f f7 af ff ff ff ff ff
0100 ff ff ff ff ff ff ff ff fd d3 fb c2 ef 1b dd ed
...
...
...
Screenshot from Wireshark (same NAL unit)
I also tried specifying the video codec explicitly in FFmpeg, like this:
ffmpeg -f gdigrab -i desktop -an -vcodec libx264 -f rtp rtp://192.168.0.12:23000
In this case, I don't get Annex B style NAL units, but AVCC style ones (without the 0x00 0x00 0x01 separators, but preceded by their length, as described here).
With AVCC NAL units, I don't really understand where one ends and the next begins, nor where the "extradata" mentioned in the question linked above is.
In summary, what I want to know is as follows:
Why are the NAL units outputted by the first command corrupted?
From what I understand (from here), you have to feed separate NAL units to MediaCodec for decoding. So, how do I separate NAL units in AVCC format from one another?
Can I somehow force FFmpeg to output Annex B style NAL units instead of AVCC ones while specifying the video codec as libx264?
Is there a more straightforward way of capturing the screen on Windows, encoding, sending the stream to the Android device and displaying the video in my app? (maybe a library or an API that is escaping my notice)
1) Why are the NAL units outputted by the first command corrupted?
They are not getting corrupted. An RTP packet contains information other than the raw H.264 data, and that information may happen to contain the byte sequence 00 00 01 without signalling that a NALU follows.
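To illustrate that overhead: every RTP packet begins with a fixed 12-byte header plus optional CSRC entries (RFC 3550) that must be stripped before interpreting the payload. A minimal sketch (field names are mine):

```python
import struct

# Minimal RTP fixed-header parser (RFC 3550). Everything before the payload
# is transport overhead; the byte sequence 00 00 01 may appear in it by
# coincidence without starting a NAL unit.
def parse_rtp(packet):
    v_p_x_cc, m_pt, seq = struct.unpack_from("!BBH", packet, 0)
    timestamp, ssrc = struct.unpack_from("!II", packet, 4)
    cc = v_p_x_cc & 0x0F                      # number of 32-bit CSRC entries
    header = {
        "version": v_p_x_cc >> 6,
        "marker": m_pt >> 7,
        "payload_type": m_pt & 0x7F,
        "seq": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
    }
    return header, packet[12 + 4 * cc:]       # (header fields, payload)
```

Note this only removes the RTP header; the H.264 payload itself still has to go through the RTP payload-format depacketization before whole NAL units come out.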
2) From what I understand (from here), you have to feed separate NAL
units to MediaCodec for decoding. So, how do I separate NAL units in
AVCC format from one another?
You parse the stream, including the protocol overhead.
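A sketch of that parsing for the AVCC framing itself, assuming the common 4-byte big-endian length prefixes (the real size is lengthSizeMinusOne + 1 from the avcC extradata):

```python
# Split an AVCC buffer into NAL units: each unit is preceded by a
# big-endian length field instead of an Annex B 00 00 01 start code.
# length_size is (lengthSizeMinusOne + 1) from the avcC extradata,
# almost always 4.
def split_avcc(buf, length_size=4):
    nalus = []
    i = 0
    while i + length_size <= len(buf):
        n = int.from_bytes(buf[i:i + length_size], "big")
        i += length_size
        nalus.append(buf[i:i + n])
        i += n
    return nalus

# Two toy "NAL units" framed AVCC-style:
buf = (2).to_bytes(4, "big") + b"\x67\x42" + (3).to_bytes(4, "big") + b"\x65\x88\x80"
print(split_avcc(buf))  # [b'gB', b'e\x88\x80']
```

Each resulting NAL unit can then be fed to MediaCodec individually.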
3) Can I somehow force FFmpeg to output Annex B style NAL units
instead of AVCC ones while specifying the video codec as libx264?
That would not conform to the RTP specification. If you use RTP, you get AVCC-style length values instead of Annex B start codes.
4) Is there a more straightforward way of capturing the screen on
Windows, encoding, sending the stream to the Android device and
displaying the video in my app? (maybe a library or an API that is
escaping my notice)
This is a pretty opinionated question. RTP is pretty straightforward, but there are many other choices, each with pros and cons.
I have an audio recording app on android, and I've encountered a strange issue on a friend's phone.
One of the recordings did not work and throws:
E/MediaPlayer-JNI(5996): QCMediaPlayer mediaplayer NOT present
E/MediaPlayer(5996): Unable to create media player
E/com.audioRec.player.MediaPlayer(5996): setDataSourceFD failed.: status=0x80000000
E/com.audioRec.player.MediaPlayer(5996): java.io.IOException: setDataSourceFD failed.: status=0x80000000
but it plays successfully in Windows Media Player, VLC, etc.!
Could someone take a look over the header of the "RecordingNotOk.wav" file?
Here are both recordings
For the RecordingNotOk.wav file, it looks like the specified size of the data chunk is bigger than the file itself:
00 F8 0A 00 in little endian is 718848 bytes, while the whole file is only 716844 bytes and the actual data available for this chunk is only 716800 bytes.
This number can be found here (3rd line after 'data'):
52 49 46 46 24 F0 0A 00 57 41 56 45 66 6D 74 20   RIFF$...WAVEfmt 
10 00 00 00 01 00 02 00 80 3E 00 00 00 FA 00 00   .........>......
04 00 10 00 64 61 74 61 00 F8 0A 00 00 00 00 00   ....data........
Basically it seems that your file has been truncated. Some players might ignore this and play whatever is available up to the end of the file. I guess that on Android it first tries to read the whole file according to the expected size, hence the IOException.
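A minimal sketch of this check (the chunk-walking helper is mine; the sizes in the demo are synthetic, not taken from the actual recording):

```python
# Walk the RIFF chunks of a WAV file and compare the declared 'data'
# chunk size with the bytes actually present; a declared size larger
# than what remains means the file is truncated.
def check_wav_data_chunk(data):
    assert data[0:4] == b"RIFF" and data[8:12] == b"WAVE"
    pos = 12
    while pos + 8 <= len(data):
        chunk_id = data[pos:pos + 4]
        chunk_size = int.from_bytes(data[pos + 4:pos + 8], "little")
        if chunk_id == b"data":
            return chunk_size, len(data) - (pos + 8)  # declared vs available
        pos += 8 + chunk_size + (chunk_size & 1)      # chunks are word-aligned
    return None

# Synthetic truncated file: 'data' declares 200 bytes but only 100 remain.
wav = (b"RIFF" + (136).to_bytes(4, "little") + b"WAVE"
       + b"fmt " + (16).to_bytes(4, "little") + bytes(16)
       + b"data" + (200).to_bytes(4, "little") + bytes(100))
declared, available = check_wav_data_chunk(wav)
print(declared, available)  # 200 100
```

Run against RecordingNotOk.wav, this would report 718848 declared versus 716800 available, matching the numbers above.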
My Galaxy 5 shows strange behaviour when I emulate a card on the ACR122U.
I think the problem occurred when I ran an Android update. When the application isn't in the foreground, my intent filters don't catch the tag anymore, because the emulated tag is seen as a JIS X 6319-4 tag instead of an ISO/IEC 14443-4 tag.
The sequence I get when the application isn't in the foreground OR is in the foreground and running with foregroundDispatch:
TgInitAsTarget
> FF 00 00 00 27 D4 8C 04 04 00 01 23 45 20 000000000000000000000000000000000000000000000000000000000000
< D5 8D 08 E0 80 90 00
TgGetData
> FF 00 00 00 02 D4 86
Target has been released error
< D5 87 29 90 00
I loop this 5 times, but none of the TgInitAsTarget attempts work. When I'm using enableReaderMode (without NDEF skip), I get the correct sequence:
...
> FF 00 00 00 02 D4 86 //TgGetData
< D5 87 00 00 A4 04 00 07 D2 76 00 00 85 01 01 00 90 00 //SELECT command
> FF 00 00 00 05 D4 8E 02 6A 82 //file or application not found
< D5 8F 00 90 00 //Ack
> FF 00 00 00 02 D4 86 //TgGetData
< D5 87 00 00 A4 04 00 07 D2 76 00 00 85 01 00 90 00 //SELECT command
> FF 00 00 00 05 D4 8E 02 6A 82 //file or application not found
< D5 8F 00 90 00 //Ack
> FF 00 00 00 02 D4 86 //TgGetData
< D5 87 00 00 A4 04 00 07 D2 76 00 00 85 01 00 90 00 //SELECT command
> FF 00 00 00 05 D4 8E 02 6A 82 //file or application not found
< D5 8F 00 90 00 //Ack
TgGetData
> FF 00 00 00 02 D4 86
// Receiving data
Question 1
Why does Android send nothing back when the application isn't in the foreground or with enableForegroundDispatch? It's very strange, because this always worked before; it looks like the update changed the NFC behaviour.
Question 2
Is it normal that the behaviour of enableReaderMode (without NDEF skip) is different from the behaviour of enableForegroundDispatch?
Note that reader mode is enabled with the following command:
nfcAdapter.enableReaderMode(this, this, NfcAdapter.FLAG_READER_NFC_A, null);
Regarding the ACR122U in host card emulation mode being detected as FeliCa (JIS X 6319-4) rather than ISO-DEP (ISO/IEC 14443-4):
This seems to be a known issue with the PN532 NFC controller. So far I have not found any solution to it. It's the same problem that you already discovered in this question of yours.
Regarding question 1: Why does Android send nothing back when the application isn't on the foreground or with enableForegroundDispatch?
Well, as you already found out, the Android device shows that it detects the emulated card as FeliCa. The response to the tgInitAsTarget command (D5 8D 08 E0 80 90 00) indicates, however, that the PN532 was activated as ISO-DEP though. Consequently, it seems that the Android device initiated the communication with the emulated ISO-DEP card but must have immediately dropped it without ever sending command frames (hence you receive the error in response to the tgGetData command). Instead, the Android device must have detected (and possibly talked to) the emulated FeliCa (actually NFCIP-1) card (which relates to the problem in the first part of my answer).
As this was working before, the update must have introduced some changes to the polling/peer-discovery algorithm of your Android device.
Regarding question 2: Is it normal that the behaviour of enableReaderMode (without NDEF skip) is different from the behaviour of enableForegroundDispatch?
That depends on what you consider "normal behavior". As you enabled Android's reader-mode with the command
nfcAdapter.enableReaderMode(this, this, NfcAdapter.FLAG_READER_NFC_A, null);
you explicitly instruct Android to behave differently than with the default tag/peer discovery mechanism (that's used with enableForegroundDispatch and the normal tag dispatch system).
The default polling will try to discover all different tag technologies (NfcA, NfcB, NfcF (fast), NfcF (slow), NfcV (possible with multiple modes), NFCIP-1 active mode, NfcBarcode; usually not in this order), hence it can discover the ACR122U in FeliCa/NFC-DEP mode.
With your enableReaderMode command, you explicitly instruct Android to only poll for NfcA. Hence, your device will properly activate the ACR122U in ISO-DEP mode and, consequently, it will start the NDEF discovery procedure.
Based on this article, I'm trying to emulate a MIFARE card by handling APDUs on Android. According to the APDU received, my application should answer with the right APDU, thus simulating the MIFARE behaviour.
With rfidiot.py, reading a real MIFARE card gives me:
> FF CA 00 00 00
< CD EA 7D 2B 90 0
Tag ID: CDEA7D2B
ATR: 3B8F8001804F0CA000000306030001000000006A
Setting Mifare Key A: FFFFFFFFFFFF
Authenticating to sector 00 with Mifare Key A (FFFFFFFFFFFF)
> FF 82 20 00 06 FF FF FF FF FF FF
< [] 90 0
> FF 88 00 00 60 00
< [] 90 0
OK
Dumping data blocks 01 to 01:
> FF 88 00 01 60 00
< [] 90 0
> FF B0 00 01 01
< [] 6C 10
> FF B0 00 01 10
< 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 90 0
01: 00000000000000000000000000000000 ................
With my app simulating the card, I get the wrong behaviour:
> FF CA 00 00 00
< 08 F0 82 65 90 0
Tag ID: 08F08265
ATR: 3B80800101
Setting Mifare Key A: FFFFFFFFFFFF
Authenticating to sector 00 with Mifare Key A (FFFFFFFFFFFF)
> FF 82 20 00 06 FF FF FF FF FF FF
< [] 90 0
> FF 88 00 00 60 00
< [] 90 0
OK
Dumping data blocks 01 to 01:
> FF 88 00 01 60 00
< [] 90 0
> FF B0 00 01 01
< [] 69 81
Failed: Command incompatible with file structure
An error appears on the FF B0 00 01 01 APDU command, and I don't know where the 6981 status word comes from.
Can someone help me with this "bug"?
What you are trying to do is not possible... What @NikolayElenkov has done is emulate an ISO 7816-4 compliant card. MIFARE Classic is not ISO 7816-4 compliant (it does not use APDU commands and responses for communication). In fact, it is not even ISO 14443-4 compliant: it uses proprietary encryption on top of ISO 14443-3.
The fact that the communication looks like APDUs from the reader side is because your reader strips off all encryption before it passes the data on, and wraps that data inside "virtual" APDUs. That way, MIFARE cards can be used with software that can only deal with ISO 7816-4 compliant cards.
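To illustrate the wrapping: the CLA=0xFF commands in the traces are reader-side pseudo-APDUs, not anything the card sees. A hypothetical decoder for the instructions appearing above (labels are mine, based on the PC/SC 2.01 storage-card command set and on the trace itself):

```python
# Hypothetical decoder for the CLA=0xFF pseudo-APDUs in the traces above.
# The reader translates these wrappers into native (encrypted) MIFARE
# Classic commands on the RF side; the card itself never sees APDUs.
def decode_pseudo_apdu(apdu):
    cla, ins, p1, p2 = apdu[0], apdu[1], apdu[2], apdu[3]
    if cla != 0xFF:
        return ("not a pseudo-APDU",)
    if ins == 0xCA:
        return ("GET DATA (UID)",)
    if ins == 0x82:                      # FF 82: load key into the reader
        return ("LOAD KEY", apdu[5:5 + apdu[4]].hex().upper())
    if ins == 0x88:                      # FF 88: authenticate to a block
        return ("AUTHENTICATE", "block", p2)
    if ins == 0xB0:                      # FF B0: read binary
        return ("READ BINARY", "block", p2, "length", apdu[4])
    return ("unknown INS 0x%02X" % ins,)

# The failing command from the question: read 1 byte from block 1.
print(decode_pseudo_apdu(bytes.fromhex("FFB0000101")))
```

This is why an Android HCE app cannot answer these: the translation from pseudo-APDU to native MIFARE command (and the Crypto1 encryption) happens inside the reader, below the APDU layer that HCE exposes.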
I'm capturing the framebuffer on my Android device, but decoding does not work at the correct resolution.
I found I can get the bpp and screen resolution using:
ioctl -rl 28 /dev/graphics/fb0 17920
This command returns:
return buf: f0 00 00 00 40 01 00 00 f0 00 00 00 80 02 00 00 00 00 00 00 00 00 00 00 20 00 00 00
In little-endian format I have:
The first four bytes are the screen width: 0xF0 = 240
Bytes 5 to 8 are the screen height: 0x0140 = 320
The last four bytes are the bpp: 0x20 = 32
I tried to decode the framebuffer (Galaxy 5) using the following command:
./ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -s 240x320 -i fb0 -f image2 -vcodec png image%d.png
And I got this warning:
Invalid buffer size, packet size 40960 < expected length 307200
Error while decoding stream #0:0: Invalid argument
and these two images:
My raw file has 655,360 bytes but the expected value is 614,400 bytes using this equation:
fileSize = xres * yres * bpp/8 * numberOfFrames
fileSize = 240 * 320 * 32/8 * 2 (Android uses a double framebuffer) = 614,400
To my surprise, I changed the width in ffmpeg to 256 to match 655,360 bytes and it worked (kind of: there are 16 extra pixels on the right side!).
I got the following images:
So my question is: WHY do I have to use a width of 256 if my screen resolution is 240? And how do I discover this magic number for other resolutions?
You should use line_length to calculate the size of a line.
+-------------------------+----+
| | |
| | |
|<-------- XRES --------->| | = Xres is display resolution
| | |
| | |
|<------- LINE LENGTH -------->| = Memory Size per line
| | |
| | |
+-------------------------+----+
^ ^
| |
display on screen --+ +----> This is stride
The right padding is called "stride" (stride = line_length in pixels - width). Many devices have this stride in the framebuffer if the display resolution is not a multiple of 8.
So the formula is:
fileSize = line_length * yres * numberOfFrames
Don't multiply it by bpp/8, because line_length is already a size in bytes (not in pixels).
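As a quick check of this formula against the numbers in the question (the 256-pixel padded line width is inferred from the question, not read from a device):

```python
# Size check for the formula above: line_length (bytes per line, stride
# included) replaces xres * bpp/8 from the naive formula.
def fb_size(line_length_bytes, yres, num_buffers):
    return line_length_bytes * yres * num_buffers

xres, yres, bpp = 240, 320, 32

naive = fb_size(xres * bpp // 8, yres, 2)   # 614400: what the question expected
padded = fb_size(256 * bpp // 8, yres, 2)   # 655360: what the capture contains
stride_px = 256 - xres                      # the 16 extra pixels per line
print(naive, padded, stride_px)             # 614400 655360 16
```

The 40,960-byte difference is exactly the stride: 16 pixels * 4 bytes * 320 lines * 2 buffers.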
To retrieve the line_length you should use FBIOGET_FSCREENINFO (0x4602 = 17922) rather than FBIOGET_VSCREENINFO (0x4600 = 17920), like this:
ioctl -rl 50 /dev/graphics/fb0 17922
My Galaxy Nexus returns this:
return buf: 6f 6d 61 70 66 62 00 00 00 00 00 00 00 00 00 00 00 00 a0 ac 00 00 00
01 00 00 00 00 00 00 00 00 02 00 00 00 01 00 01 00 00 00 00 00 80 0b 00 00 00 00
^_________^
My Galaxy Nexus has line_length 2944 (0xB80).