I am trying to encrypt a message on Android and decrypt it on a Node.js server.
Android code:
SecretKeySpec secretkeyspec = new SecretKeySpec("password".getBytes(), "AES");
Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, secretkeyspec);
byte[] encoded = cipher.doFinal(s.getBytes());
System.out.println(Arrays.toString(encoded));
Node.js code:
var crypto = require('crypto');
var CIPHER_ALGORITHM = 'aes-128-cbc';
var key = 'password';
var ivBuffer = new Buffer(16);
ivBuffer.fill(0);
var cipher = crypto.createCipheriv(CIPHER_ALGORITHM, new Buffer(key, 'utf-8'), ivBuffer);
var encryptedBuffer = cipher.update(plainText, 'utf-8');
var fBuf = new Int8Array(Buffer.concat([encryptedBuffer, cipher.final()]));
console.log(fBuf);
When I print the buffers I get different values between Android and Node.js.
Node buffer:
[26, 116, 2, -56, -70, 121, -44, 66, 101, 84, -46, 127, -70, -42, 67, 31, 124, -104, -24, 88, 74, 4, -22, -70, -39, 48, -120, -21, 37, -15, -24, -30]
Android buffer:
[26, 116, 2, -56, -70, 121, -44, 66, 101, 84, -46, 127, -70, -42, 67, 31, -92, 97, 16, -101, -45, -68, 108, 89, -125, 17, -71, 53, 2, -13, 31, -79]
Could someone tell me the Node.js decryption code equivalent to Android's default AES configuration?
I finally found the answer. Android's Cipher.getInstance("AES") defaults to AES/ECB/PKCS5Padding, so the Node.js side has to use ECB (which takes no IV) instead of CBC:
var CIPHER_ALGORITHM = 'aes-128-ecb';
var cipher = crypto.createCipheriv(CIPHER_ALGORITHM, Buffer.from(key, 'utf-8'), null);
var encryptedBuffer = cipher.update(plainText, 'utf-8');
var finalEncryptedBuffer = new Int8Array(Buffer.concat([encryptedBuffer, cipher.final()]));
console.log(finalEncryptedBuffer);
You can actually see the mismatch in the buffers above: the first 16 bytes agree, because the first CBC block with an all-zero IV is identical to the first ECB block, and only the later blocks diverge. Make sure the mode of operation (for example, CBC) and the padding (for example, PKCS5) match between both implementations.
When I scan an NFC tag with my iOS app I get this result:
04ea1b72835c80
I think this is the UID.
Now I am programming an NFC reader for Android with Flutter and the nfc_manager package.
When I scan the same NFC tag I get this information:
{nfca: {identifier: [4, 234, 27, 114, 131, 92, 128], atqa: [68, 0], maxTransceiveLength: 253, sak: 0, timeout: 618}, mifareultralight: {identifier: [4, 234, 27, 114, 131, 92, 128], maxTransceiveLength: 253, timeout: 618, type: 2}, ndef: {identifier: [4, 234, 27, 114, 131, 92, 128], isWritable: true, maxSize: 492, canMakeReadOnly: true, cachedMessage: null, type: org.nfcforum.ndef.type2}}
I use this code in Flutter:
void _tagRead() {
  NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
    var mytag = tag.data;
    result.value = tag.data;
    NfcManager.instance.stopSession();
  });
}
I tried to parse the identifier in different ways, but I didn't get the same result as on iOS.
Does anyone know the right way?
Change this:
void _tagRead() {
  NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
    var mytag = tag.data;
    result.value = tag.data;
    NfcManager.instance.stopSession();
  });
}
to:
void _tagRead() {
  NfcManager.instance.startSession(onDiscovered: (NfcTag tag) async {
    var mytag = tag.data["mifareultralight"]["identifier"]
        .map((e) => e.toRadixString(16).padLeft(2, '0'))
        .join('');
    result.value = mytag;
    NfcManager.instance.stopSession();
  });
}
and I got 04ea1b72835c80.
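The conversion is easy to verify outside Flutter; here is the identical byte-to-hex logic in plain JavaScript (chosen just for illustration), using the identifier bytes from the question:

```javascript
// Convert an NFC identifier byte array to the lowercase hex UID string
// that iOS reports: each byte becomes two zero-padded hex digits.
const identifier = [4, 234, 27, 114, 131, 92, 128];

const uid = identifier
  .map((b) => b.toString(16).padStart(2, '0'))
  .join('');

console.log(uid); // "04ea1b72835c80"
```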
I am trying to write an automation app in Python, and I have to check the color of a pixel on a mobile device as fast as possible. I tried taking a screenshot and reading the pixel from the image, but it is too slow.
Thanks!
It depends on how fast is fast enough for your use case. However, here is a starting idea using AndroidViewClient:
#! /usr/bin/env python3
import random
import time

from com.dtmilano.android.viewclient import ViewClient

device, serialno = ViewClient.connectToDeviceOrExit()
w = device.getProperty('display.width')
h = device.getProperty('display.height')
for n in range(20):
    x = random.randint(0, w)
    y = random.randint(0, h)
    start = time.time()
    p = device.takeSnapshot(reconnect=True).getpixel((x, y))
    print(f'{time.time() - start:.4f}: #({x:4}, {y:4}) -> {p}')
The results obtained were:
0.4690: #( 795, 596) -> (245, 166, 194, 255)
0.5251: #( 330, 1580) -> (192, 102, 144, 255)
0.3421: #( 64, 1582) -> (202, 110, 152, 255)
0.3729: #( 219, 869) -> (248, 174, 201, 255)
0.3395: #( 794, 1871) -> (113, 41, 113, 255)
0.3349: #( 620, 388) -> (243, 169, 198, 255)
0.3432: #( 154, 827) -> (249, 178, 203, 255)
0.2958: #( 336, 956) -> (244, 165, 196, 255)
0.3586: #( 20, 1613) -> (200, 105, 149, 255)
0.3397: #( 119, 1692) -> (199, 89, 136, 255)
0.3915: #( 825, 942) -> (236, 151, 188, 255)
0.3343: #( 654, 306) -> (244, 170, 202, 255)
0.3416: #( 24, 1661) -> (202, 97, 144, 255)
0.4386: #( 788, 1273) -> (223, 134, 171, 255)
0.3927: #( 378, 128) -> (217, 158, 186, 255)
0.3229: #( 273, 1261) -> (235, 152, 183, 255)
0.2288: #( 19, 1567) -> (208, 116, 157, 255)
0.2286: #( 0, 1522) -> (220, 130, 164, 255)
0.3781: #( 728, 1323) -> (222, 132, 168, 255)
0.3954: #( 793, 1755) -> (154, 62, 118, 255)
There's a conversion to a PIL image that could still be removed to improve performance a bit.
Also, using CulebraTester2-public can speed things up if the previous results are not enough.
I am developing a React Native app that gets the weight value from a Bluetooth scale (MI SCALE2). (I have no knowledge of Bluetooth.)
// version
"react-native": "0.66.1",
"react-native-ble-plx": "https://github.com/below/react-native-ble-plx",
I was able to get these values when I stepped on the scale.
# feature
{"data": [33, 0, 0, 0], "type": "Buffer"}
# Weight
{"data": [2, 156, 74, 178, 7, 1, 7, 22, 33, 1], "type": "Buffer"}
{"data": [2, 156, 74, 178, 7, 1, 7, 22, 33, 1], "type": "Buffer"}
{"data": [2, 156, 74, 178, 7, 1, 7, 22, 33, 1], "type": "Buffer"}
{"data": [2, 156, 74, 178, 7, 1, 7, 22, 33, 2], "type": "Buffer"}
{"data": [2, 156, 74, 178, 7, 1, 7, 22, 33, 2], "type": "Buffer"}
{"data": [34, 156, 74, 178, 7, 1, 7, 22, 33, 2], "type": "Buffer"}
{"data": [162, 156, 74, 178, 7, 1, 7, 22, 33, 6], "type": "Buffer"}
After reading Q&As in several places, I know that it is necessary to combine the value of the feature characteristic with the values in the weight array.
I want to know how to get weight values like "94.9kg, 95.5kg, ..." from my result.
Below is the code I wrote.
manager.startDeviceScan(null, null, (error, device) => {
  if (error) {
    console.log('error : ' + error);
    return;
  }
  console.log(device.name);
  if (device.name === 'MI SCALE2') {
    console.log('detected!!');
    manager.stopDeviceScan();
    device
      .connect()
      .then(device => {
        return device.discoverAllServicesAndCharacteristics();
      })
      .then(device => {
        return device.services();
      })
      .then(services => {
        const result = services.filter(id => id.uuid.indexOf('181d') != -1); // 181d is Weight Scale -> org.bluetooth.service.weight_scale
        return result[0].characteristics();
      })
      .then(characters => {
        const resultDateObject = characters.filter(
          data => data.uuid.indexOf('2a2b') != -1, // 2a2b is Current Time -> org.bluetooth.characteristic.current_time
        );
        const resultWeightFeature = characters.filter(
          data => data.uuid.indexOf('2a9e') != -1, // 2a9e is Weight Scale Feature -> org.bluetooth.characteristic.weight_scale_feature
        );
        const resultWeight = characters.filter(
          data => data.uuid.indexOf('2a9d') != -1, // 2a9d is Weight Measurement -> org.bluetooth.characteristic.weight_measurement
        );
        const resultPosition2D = characters.filter(
          data => data.uuid.indexOf('2a2f') != -1, // 2a2f is Position 2D -> org.bluetooth.characteristic.position_2d
        );
        // const DeviceID = resultWeightFeature[0].deviceID;
        // const ServiceUUID = resultWeightFeature[0].serviceUUID;
        // const DateCharacterUUID = resultDateObject[0].uuid;
        // const WeightFeatureCharacterUUID = resultWeightFeature[0].uuid;
        // const WeightCharacterUUID = resultWeight[0].uuid;
        // const PositionCharacterUUID = resultPosition2D[0].uuid;
        resultWeight[0].monitor((error, characteristic) => {
          if (error) {
            console.log('error:::::', error);
            return;
          }
          let your_bytes = Buffer.from(characteristic.value, 'base64');
          console.log(your_bytes);
        });
        return resultWeightFeature[0].read();
      })
      .then(feature => {
        let feature_bytes = Buffer.from(feature.value, 'base64');
        console.log('feature.value');
        console.log(feature_bytes);
      });
  }
});
As far as I understand your code, your weight scale uses the Bluetooth Weight Scale Profile and the Weight Scale Service.
The data you find in the corresponding characteristics needs to be interpreted as described in the Personal Health Devices Transcoding white paper.
Edit:
You can find more information on the data structure here:
GATT Specification Supplement 5
Example:
Feature([33, 0, 0, 0]) => 0x00000021 => ...00 0010 0001
1 => Time Stamp Supported: True
0 => Multiple Users Supported: False
0 => BMI Supported: False
0100 => Weight Measurement Resolution: Resolution of 0.05 kg or 0.1 lb
000 => Height Measurement Resolution: Not specified
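The bit extraction above can be sketched in JavaScript (chosen here only because your app already works with Node-style Buffers; the same masking works in Kotlin or Dart). The feature bytes arrive little-endian, so [33, 0, 0, 0] becomes 0x00000021:

```javascript
// Decode the Weight Scale Feature bits described above.
const feature = Buffer.from([33, 0, 0, 0]).readUInt32LE(0); // 0x00000021

const timeStampSupported = (feature & 0x01) !== 0;     // bit 0
const multipleUsersSupported = (feature & 0x02) !== 0; // bit 1
const bmiSupported = (feature & 0x04) !== 0;           // bit 2
const weightResolution = (feature >> 3) & 0x0f;        // bits 3-6; 4 => 0.05 kg / 0.1 lb
const heightResolution = (feature >> 7) & 0x07;        // bits 7-9; 0 => not specified

console.log(timeStampSupported, weightResolution); // true 4
```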
Weight = [34, 156, 74, 178, 7, 1, 7, 22, 33, 2]
=> 0x22 0x9c 0x4a 0xb2 0x07 0x01 0x07 0x16 0x21 0x02
The first byte is a flags field => 0x22 => 0010 0010
0 => Measurement Units: SI
1 => Time Stamp present: True
0 => User ID present: False
0 => BMI and Height present: False
0010 => Reserved for Future Use
Weight in kilograms with a resolution of 0.005 (uint16, little-endian) => 0x4a9c = 19100 => 19100 * 0.005 = 95.5 kg
Time Stamp 0xb2 0x07 0x01 0x07 0x16 0x21 0x02
year(uint16) => 0x07b2 => 1970
month(uint8) => 0x01 => 1
day(uint8) => 0x07 => 7
hours(uint8) => 0x16 => 22
minutes(uint8) => 0x21 => 33
seconds(uint8) => 0x02 => 2
date 1970-01-07T22:33:02
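Putting the layout above together, here is a sketch of a decoder for the notification bytes (the helper name is hypothetical; the field offsets assume the SI, time-stamp-present layout shown above, and the input matches the Buffer your monitor callback already produces):

```javascript
// Decode a Weight Measurement value laid out as described above:
// flags (uint8), weight (uint16 LE, 0.005 kg resolution),
// then year (uint16 LE), month, day, hours, minutes, seconds (uint8 each).
function parseWeightMeasurement(bytes) {
  const buf = Buffer.from(bytes);
  return {
    flags: buf.readUInt8(0),
    weightKg: buf.readUInt16LE(1) * 0.005,
    year: buf.readUInt16LE(3),
    month: buf.readUInt8(5),
    day: buf.readUInt8(6),
    hours: buf.readUInt8(7),
    minutes: buf.readUInt8(8),
    seconds: buf.readUInt8(9),
  };
}

const m = parseWeightMeasurement([34, 156, 74, 178, 7, 1, 7, 22, 33, 2]);
console.log(m.weightKg); // 95.5
console.log(`${m.year}-${m.month}-${m.day}`); // 1970-1-7
```

A production decoder should branch on the flags byte first (imperial units use a different resolution, and the time stamp fields are only present when bit 1 is set).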
https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_ssd_tflite_v1.zip
I am making an Android object detection app with GPU delegate support.
The link above points to a TensorFlow Lite object detection float model.
There is no documentation available for it. I want to know the input and output shapes and types of this tflite model so that I can feed it to the interpreter for GPU delegation.
Thanks in advance!
I use Colaboratory, so I use the code below to determine the inputs and outputs:
import tensorflow as tf
interpreter = tf.lite.Interpreter('mobilenet_ssd.tflite')
print(interpreter.get_input_details())
print(interpreter.get_output_details())
So unzip the archive, find the file, and load it with the above code. I did that and the result was:
[{'name': 'Preprocessor/sub', 'index': 165, 'shape': array([ 1, 300, 300, 3], dtype=int32), 'shape_signature': array([ 1, 300, 300, 3], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
[{'name': 'concat', 'index': 172, 'shape': array([ 1, 1917, 4], dtype=int32), 'shape_signature': array([ 1, 1917, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'concat_1', 'index': 173, 'shape': array([ 1, 1917, 91], dtype=int32), 'shape_signature': array([ 1, 1917, 91], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Also, inside Android you can do:
// Initialize interpreter
@Throws(IOException::class)
private suspend fun initializeInterpreter(app: Application) = withContext(Dispatchers.IO) {
    // Load the TF Lite model from the asset folder and initialize the TF Lite interpreter without NNAPI enabled.
    val assetManager = app.assets
    val model = loadModelFile(assetManager, "mobilenet_ssd.tflite")
    val options = Interpreter.Options()
    options.setUseNNAPI(false)
    interpreter = Interpreter(model, options)

    // Read type and shape of input and output tensors, respectively.
    val imageTensorIndex = 0
    val inputShape: IntArray =
        interpreter.getInputTensor(imageTensorIndex).shape() // {1, 300, 300, 3} here
    Log.e("INPUT_TENSOR_WHOLE", Arrays.toString(inputShape))
    val imageDataType: DataType =
        interpreter.getInputTensor(imageTensorIndex).dataType()
    Log.e("INPUT_DATA_TYPE", imageDataType.toString())

    // modelInputSize indicates how many bytes of memory we should allocate to store the input for our TensorFlow Lite model.
    // FLOAT_TYPE_SIZE indicates how many bytes our input data type will require. We use float32, so it is 4 bytes.
    // PIXEL_SIZE indicates how many color channels there are in each pixel. Our input image is a colored image, so we have 3 color channels.
    inputImageWidth = inputShape[1]
    inputImageHeight = inputShape[2]
    modelInputSize = FLOAT_TYPE_SIZE * inputImageWidth * inputImageHeight * PIXEL_SIZE

    val probabilityTensorIndex = 0
    outputShape =
        interpreter.getOutputTensor(probabilityTensorIndex).shape() // {1, 1917, 4} here
    Log.e("OUTPUT_TENSOR_SHAPE", outputShape.contentToString())
    val probabilityDataType: DataType =
        interpreter.getOutputTensor(probabilityTensorIndex).dataType()
    Log.e("OUTPUT_DATA_TYPE", probabilityDataType.toString())
    isInitialized = true
    Log.e(TAG, "Initialized TFLite interpreter.")

    // Inputs outputs
    /*val inputTensorModel: Int = interpreter.getInputIndex("input_1")
    Log.e("INPUT_TENSOR", inputTensorModel.toString())*/
}

@Throws(IOException::class)
private fun loadModelFile(assetManager: AssetManager, filename: String): MappedByteBuffer {
    val fileDescriptor = assetManager.openFd(filename)
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
If you need any help tag me.
I have the following string, and inside it you can see a series of numbers: "10, 20, 30, 40, 30, 20, 10, 5, 20, 30, 20, 30"
What I would like to do is split this string into individual strings and convert them into an integer array.
For example: String array ["10", "20", "30", "40", "30", "20", "10", "5", "20", "30", "20", "30"] -> Integer array [10, 20, 30, 40, 30, 20, 10, 5, 20, 30, 20, 30].
If you want to convert a String array to an Int array:
val stringArray = arrayOf("10", "20", "30", "40", "30", "20", "10", "5", "20", "30", "20", "30")
convert it to an Int list using map:
val intArray = stringArray.map { it.toInt() }
If you want to print them:
println(stringArray.contentToString())
println(intArray)
Or, if you want to convert a String to an Int array, you need to split it and map it:
val inputString = "10, 20, 30, 40, 30, 20, 10, 5, 20, 30, 20, 30"
val intArray = inputString.split(", ").map { it.toInt() }
Assuming you have a string with an array in it, the following code should give you what you want:
val array = "[20, 30, 40, 30, 20, 10, 5, 20, 30, 20, 30]"
val items = array.replace("\\[".toRegex(), "")
    .replace("\\]".toRegex(), "")
    .replace("\\s".toRegex(), "")
    .split(",".toRegex())
    .dropLastWhile { it.isEmpty() }
    .toTypedArray()
val results = IntArray(items.size)
for (i in items.indices) {
    results[i] = Integer.parseInt(items[i])
}