I'm following the JavaCV face detection/recognition code, and I'm confused about the recognition part. What I'm doing is (sorry if it sounds stupid, but I'm stuck):
1) Detect a face, crop it, save it to the SD card, and put its path in a learn.txt file (the learning part).
2) Detect a face, crop it, and look it up among the existing faces to see whether it exists or not, but it always returns the nearest match even if the face doesn't exist among the sample faces.
What am I doing wrong?
// Method, I'm using to recognize face
public Integer recognizeFace(Bitmap face, Context context) {
Log.i(TAG, "===========================================");
Log.i(TAG, "recognizeFace (single face)");
float[] projectedTestFace;
float confidence = 0.0f;
int nearest = -1; // closest match -- -1 for nothing.
int iNearest;
if (trainPersonNumMat == null) {
return null;
}
Log.i(TAG, "NUMBER OF EIGENS: " + nEigens);
// project the test images onto the PCA subspace
projectedTestFace = new float[nEigens];
// Start timing recognition
long startTime = System.nanoTime();
testFaceImg = bmpToIpl(face); // convert the Bitmap to an IplImage
// project the test image onto the PCA subspace
cvEigenDecomposite(testFaceImg, // obj
nEigens, // nEigObjs
new PointerPointer(eigenVectArr), // eigInput (Pointer)
0, // ioFlags
null, // userData
pAvgTrainImg, // avg
projectedTestFace); // coeffs
Log.i(TAG, "projectedTestFace\n" + floatArrayToString(projectedTestFace));
final FloatPointer pConfidence = new FloatPointer(confidence);
iNearest = findNearestNeighbor(projectedTestFace, pConfidence);
confidence = pConfidence.get();
nearest = trainPersonNumMat.data_i().get(iNearest); // result
// get endtime and calculate time recognition process takes
long endTime = System.nanoTime();
long duration = endTime - startTime;
double seconds = (double) duration / 1000000000.0;
Log.i(TAG, "recognition took: " + String.valueOf(seconds));
Log.i(TAG, "nearest = " + nearest + ". Confidence = " + confidence);
Toast.makeText(context, "Nearest: "+nearest+" Confidence: "+confidence, Toast.LENGTH_LONG).show();
//Save the IplImage so we can see what it looks like
Random generator = new Random();
int n = 10000;
n = generator.nextInt(n);
String fname = "/sdcard/saved_images/" + nearest + " " + String.valueOf(seconds) + " " + String.valueOf(confidence) + " " + n + ".jpg";
Log.i(TAG, "Saving image as: " + fname);
cvSaveImage(fname, testFaceImg);
return nearest;
} // end of recognizeFace
EDIT: The confidence is always negative!
Thanks in advance
Related
Can I get some examples about point clouds with ARCore? I have really been searching for days.
Currently I am working on an application similar to this one: This app.
It has the feature to view the point cloud and save files in .ply format.
Thanks
The HelloARSample app renders the point cloud in a scene. You can get the coordinates of each point and save them manually in .ply format.
To get a point cloud, you can create a float buffer and add the points from each frame:
FloatBuffer myPointCloud = FloatBuffer.allocate(capacity);
Session session = new Session(context);
Frame frame = session.update();
try (PointCloud ptcld = frame.acquirePointCloud()) {
myPointCloud.put(ptcld.getPoints());
}
The float buffer stores the points as [x1, y1, z1, confidence1, x2, y2, z2, confidence2, ...].
I haven't looked at the .ply file structure, but if you want to save to a .pcd file, you must create a header and then write one point per line. Here is a detailed explanation of how to do it.
I did it like this
private boolean savePointCloudFile() {
    String fileName = "pointCloud";
    int points = 0;
    StringBuilder data = new StringBuilder();
    // Write the point cloud data by iterating over each point:
    for (int i = 0; i < pointCloud.position(); i += 4) {
        data.append(pointCloud.get(i)).append(" ")        // x
            .append(pointCloud.get(i + 1)).append(" ")    // y
            .append(pointCloud.get(i + 2)).append(" ")    // z
            .append(pointCloud.get(i + 3)).append("\n");  // confidence
        points = i;
    }
    points = points / 4 - 10; // drop the last 10 points to prevent errors in case some points were lost
    // Build the file header (confidence is stored in the rgb field)
    String header = "# .PCD v.7 - Point Cloud Data file format\n" +
            "VERSION .7\n" +
            "FIELDS x y z rgb\n" +
            "SIZE 4 4 4 4\n" +
            "TYPE F F F F\n" + // all floats
            "COUNT 1 1 1 1\n" +
            "WIDTH " + points + "\n" +
            "HEIGHT 1\n" +
            "VIEWPOINT 0 0 0 1 0 0 0\n" +
            "POINTS " + points + "\n" +
            "DATA ascii\n";
    try {
        // FileOutputStream creates the file if it doesn't exist yet
        File file = new File(context.getExternalFilesDir(null), fileName + ".pcd");
        FileOutputStream stream = new FileOutputStream(file);
        stream.write((header + data).getBytes());
        stream.close();
        Log.i("SUCCESS", "File saved successfully in " + file.getAbsolutePath());
        return true;
    } catch (IOException e) {
        Log.e("Exception", "File write failed: " + e.toString());
        return false;
    }
}
You should save the file from a separate thread; writing this many points takes long enough that doing it on the UI thread can cause a timeout (ANR).
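For example, a minimal sketch of pushing the write onto a background thread (the log tag and message are just illustrative):
new Thread(new Runnable() {
    @Override
    public void run() {
        // run the blocking file write off the UI thread
        boolean saved = savePointCloudFile();
        Log.i("PointCloud", "save finished: " + saved);
    }
}).start();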
You should get a file similar to this
# .PCD v.7 - Point Cloud Data file format
VERSION .7
FIELDS x y z rgb
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 3784
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3784
DATA ascii
0.068493545 -0.18897545 -0.6662081 0.007968704
0.26833203 -0.18425867 -1.5039357 0.02365576
0.19286658 -0.2141684 -1.58289 0.038087178
0.070703566 -0.17931458 -0.69418937 0.016636848
0.044586033 -0.18726173 -0.6926071 0.024707714
0.04002113 -0.20350328 -0.68689686 0.018577512
0.029185327 -0.18594348 -0.73340106 0.12292312
0.0027626567 -0.20299685 -1.5578543 0.15424652
-0.031320766 -0.20478198 -0.70128816 0.13745676
-0.06351853 -0.20185146 -0.61755043 0.15234329
-0.08655308 -0.19128543 -0.6776818 0.170851
1.0159657 -0.41043654 -6.8713074 0.05946503
-0.031778865 -0.20536968 -1.5218562 0.15976532
-0.09223208 -0.19543779 -0.61643535 0.12331226
0.02384475 -0.20319816 -1.7497014 0.15273231
-0.10013421 -0.19931296 -0.5924832 0.16186734
0.49137634 -0.09052197 -5.7263794 0.16080469
To visualize the point cloud you can use pcl_viewer or MATLAB. In MATLAB I just typed
ptCloud = pcread('pointCloud.pcd');
pcshow(ptCloud);
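If you do want the .ply format the question asks about, the ASCII header is similar in spirit. A minimal sketch, using the same point count as the example above (naming the fourth property "confidence" is my own choice, not part of any standard):
ply
format ascii 1.0
element vertex 3784
property float x
property float y
property float z
property float confidence
end_header
0.068493545 -0.18897545 -0.6662081 0.007968704
...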
I have created a simple Android application that takes a photo and stores the device's GPS info in the EXIF tags of the JPG file. The following code shows this process (I know it's messy):
Android.Locations.Location loc = await client.GetLastLocationAsync();
ByteBuffer buffer = mImage.GetPlanes()[0].Buffer;
byte[] bytes = new byte[buffer.Remaining()];
buffer.Get(bytes);
using (var output = new FileOutputStream(mFile))
{
try
{
output.Write(bytes);
output.Flush();
ExifInterface exif = new ExifInterface(mFile.AbsolutePath);
string[] degMinSec = Location.Convert(loc.Latitude, Format.Seconds).Split(':');
string dms = degMinSec[0] + "/1," + degMinSec[1] + "/1" + degMinSec[2] + "/1000";
string[] degMinSec1 = Location.Convert(loc.Longitude, Format.Seconds).Split(':');
string dms1 = degMinSec1[0] + "/1," + degMinSec1[1] + "/1" + degMinSec1[2] + "/1000";
exif.SetAttribute(ExifInterface.TagGpsLatitude, dms);
exif.SetAttribute(ExifInterface.TagGpsLatitudeRef, loc.Latitude < 0?"S":"N");
exif.SetAttribute(ExifInterface.TagGpsLongitude, dms1);
exif.SetAttribute(ExifInterface.TagGpsLongitudeRef, loc.Longitude < 0 ? "W" : "E");
exif.SaveAttributes();
}
...
So now to the problem:
When I take a picture and debug the loc variable, it looks like this:
As you can see, the latitude is 48.4080605 and the longitude is 15.6257273.
When I debug the converted values dms & dms1, they show these values:
dms represents the latitude and has the value 48° 24' 29.0178''; dms1 represents the longitude and has the value 15° 37' 32.61828''.
When I look at the picture's EXIF data on metapicz.com, it shows these values:
Can anyone explain what is going on and what I'm doing wrong?
I can't figure out why it shows a different location than it should.
dms = degMinSec[0] + "/1," + degMinSec[1] + "/1" + degMinSec[2] + "/1000";
Should that not be
dms = degMinSec[0] + "/1," + degMinSec[1] + "/1," + degMinSec[2] + "/1000";
? The comma after the minutes component is missing, so the minutes and seconds run together into a single malformed rational. Note that the dms1 line for the longitude has the same problem.
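For reference, a small helper in plain Android Java (the Xamarin binding mirrors this API) that builds the rational string with all three separators. This is only a sketch with a name of my own choosing; it also scales the fractional seconds to match the /1000 denominator, which the original string skips:
// Hypothetical helper: decimal degrees -> EXIF "d/1,m/1,s*1000/1000" rational string
private static String toExifRational(double coordinate) {
    String[] dms = Location.convert(Math.abs(coordinate), Location.FORMAT_SECONDS).split(":");
    // multiply the seconds by 1000 so the /1000 denominator preserves the fraction
    long secTimes1000 = Math.round(Double.parseDouble(dms[2]) * 1000);
    return dms[0] + "/1," + dms[1] + "/1," + secTimes1000 + "/1000";
}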
I'm using MediaMetadataRetriever to retrieve thumbnails at a specific time in video. This is how I achieve this:
MediaMetadataRetriever metadataRetriever = new MediaMetadataRetriever();
try {
metadataRetriever.setDataSource(MainActivity.this, Uri.parse("android.resource://packageName/raw/"+"test"));
String duration=metadataRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
long time = Long.valueOf(duration)/3;
Bitmap bitmap1 = metadataRetriever.getFrameAtTime(time,MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
imgone.setImageBitmap(bitmap1);
}catch (Exception ex) {
Toast.makeText(MainActivity.this, String.valueOf(ex), Toast.LENGTH_SHORT).show();
}
This returns a bitmap/thumbnail as expected, the problem is that if I want to get multiple thumbnails at different times in the video like this:
MediaMetadataRetriever metadataRetriever = new MediaMetadataRetriever();
try {
metadataRetriever.setDataSource(MainActivity.this, Uri.parse("android.resource://packageName/raw/"+"test"));
String duration=metadataRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
long time = Long.valueOf(duration)/3;
long time2 = time+time;
long time3 = time+time+time;
Bitmap bitmap1 = metadataRetriever.getFrameAtTime(time,MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
Bitmap bitmap2 = metadataRetriever.getFrameAtTime(time2,MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
Bitmap bitmap3 = metadataRetriever.getFrameAtTime(time3,MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
imgone.setImageBitmap(bitmap1);
imgtwo.setImageBitmap(bitmap2);
imgthree.setImageBitmap(bitmap3);
}catch (Exception ex) {
Toast.makeText(MainActivity.this, String.valueOf(ex), Toast.LENGTH_SHORT).show();
}
Then it still only returns the same thumbnail. I'm not sure whether that is because there is only one thumbnail available for the video, but I've tried different video files with the same result.
I've tried changing MediaMetadataRetriever.OPTION_CLOSEST_SYNC to all the available options, but still the same result.
I'm not sure if FFmpeg would be a better option for this?
Exactly a year later, I noticed that I never provided an answer.
In the original question I wanted to retrieve 3 thumbnails; I ended up retrieving 5. I also mentioned that I wasn't sure whether FFmpeg would be a suitable option, and that's exactly what I used.
So, in OnCreate, I make sure that FFmpeg is supported and then I do the following:
if (FFmpeg.getInstance(getApplication()).isSupported()) {
@SuppressLint("SimpleDateFormat")
//ffmpeg expects the time format to be "00:00:00"
Format formatter = new SimpleDateFormat("00:" + "mm:ss.SS");
//Get the duration of the video
long duration = player.getDuration();
//Since I want 5 thumbnails, I divide the duration by 6 to get the first thumbnail position
long img1 = duration / 6;
//I format the first thumbnail time since ffmpeg expects "00:00:00" format
String firstTumbTime = formatter.format(img1);
//Scale the size of the thumbnail output (this can be improved/changed to your liking)
String scaledSize = displayMetrics.widthPixels / 7 + ":" + displayMetrics.heightPixels / 7;
//Set ffmpeg command (notice that I set vframes to one, since I only want 1 thumbnail/image)
String[] a = {"-ss", firstTumbTime, "-i", mStringFilePath, "-vframes", "1", "-s", scaledSize, imageThumbsDirectory + "/" + "thumb1.bmp"};
//start ffmpeg asynctask for the first thumbnail
ExecuteThumbFFMPEG(a);
} else {
Toast.makeText(TestNewPlayer.this, "Your device doesn't support FFMPEG...", Toast.LENGTH_SHORT).show();
}
The comments in the code above explain everything; now here is my ExecuteThumbFFMPEG method.
public void ExecuteThumbFFMPEG(String[] command) {
ffmpegImages = FFmpeg.getInstance(this).execute(command, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
//ffmpeg started
}
@Override
public void onProgress(String message) {
//get ffmpeg progress
}
@Override
public void onFailure(String message) {
//ffmpeg failed
}
@Override
public void onSuccess(String message) {
//first thumbnail saved successfully, now to get the other 4
//Scale the thumbnail output (Same as above)
String scaledSize = displayMetrics.widthPixels / 7 + ":" + displayMetrics.heightPixels / 7;
try {
//I first set the path/name for each thumbnail, this will also be used to check if the thumbnail is available or if we should get it
String imgPath1 = imageThumbsDirectory + "/" + "thumb1.bmp";
String imgPath2 = imageThumbsDirectory + "/" + "thumb2.bmp";
String imgPath3 = imageThumbsDirectory + "/" + "thumb3.bmp";
String imgPath4 = imageThumbsDirectory + "/" + "thumb4.bmp";
String imgPath5 = imageThumbsDirectory + "/" + "thumb5.bmp";
//Set the format again (same as above)
@SuppressLint("SimpleDateFormat")
Format formatter = new SimpleDateFormat("00:" + "mm:ss.SS");
//Get the length of the video
long duration = player.getDuration();
//Divide the length of the video by 6 (same as above)
long time = duration / 6;
//Since I want 5 thumbnails evenly distributed throughout the video
//I use the video length divided by 6 to accomplish that
long img2 = time + time;
long img3 = time + time + time;
long img4 = time + time + time + time;
long img5 = time + time + time + time + time;
//Format the time (calculated above) for each thumbnail I want to retrieve
String Img2Timeformat = formatter.format(img2);
String Img3Timeformat = formatter.format(img3);
String Img4Timeformat = formatter.format(img4);
String Img5Timeformat = formatter.format(img5);
//Get reference to the thumbnails (to see if they have been created before)
File fileimgPath1 = new File(imgPath1);
File fileimgPath2 = new File(imgPath2);
File fileimgPath3 = new File(imgPath3);
File fileimgPath4 = new File(imgPath4);
File fileimgPath5 = new File(imgPath5);
//If thumbnail 1 exists and thumbnail 2 doesn't, then we need to get thumbnail 2
if (fileimgPath1.exists() && !fileimgPath2.exists()) {
//Get/decode bitmap from the first thumbnail path to be able to set it to our ImageView that should hold the first thumbnail
Bitmap bmp1 = BitmapFactory.decodeFile(imgPath1);
//Set the first thumbnail to our first ImageView
imgone.setImageBitmap(bmp1);
//Set the ffmpeg command to retrieve the second thumbnail
String[] ffmpegCommandForThumb2 = {"-ss", Img2Timeformat, "-i", mStringFilePath, "-vframes", "1", "-s", scaledSize, imageThumbsDirectory + "/" + "thumb2.bmp"};
//Start ffmpeg again, this time we will be getting thumbnail 2
ExecuteThumbFFMPEG(ffmpegCommandForThumb2);
}
//If thumbnail 2 exists and thumbnail 3 doesn't, then we need to get thumbnail 3
if (fileimgPath2.exists() && !fileimgPath3.exists()) {
//Get/decode bitmap from the second thumbnail path to be able to set it to our ImageView that should hold the second thumbnail
Bitmap bmp2 = BitmapFactory.decodeFile(imgPath2);
//Set the second thumbnail to our second ImageView
imgTwo.setImageBitmap(bmp2);
//Set the ffmpeg command to retrieve the third thumbnail
String[] ffmpegCommandForThumb3 = {"-ss", Img3Timeformat, "-i", mStringFilePath, "-vframes", "1", "-s", scaledSize, imageThumbsDirectory + "/" + "thumb3.bmp"};
//Start ffmpeg again, this time we will be getting thumbnail 3
ExecuteThumbFFMPEG(ffmpegCommandForThumb3);
}
//If thumbnail 3 exists and thumbnail 4 doesn't, then we need to get thumbnail 4
if (fileimgPath3.exists() && !fileimgPath4.exists()) {
//Get/decode bitmap from the third thumbnail path to be able to set it to our ImageView that should hold the third thumbnail
Bitmap bmp3 = BitmapFactory.decodeFile(imgPath3);
//Set the third thumbnail to our third ImageView
imgThree.setImageBitmap(bmp3);
//Set the ffmpeg command to retrieve the fourth thumbnail
String[] ffmpegCommandForThumb4 = {"-ss", Img4Timeformat, "-i", mStringFilePath, "-vframes", "1", "-s", scaledSize, imageThumbsDirectory + "/" + "thumb4.bmp"};
//Start ffmpeg again, this time we will be getting thumbnail 4
ExecuteThumbFFMPEG(ffmpegCommandForThumb4);
}
//If thumbnail 4 exists and thumbnail 5 doesn't, then we need to get thumbnail 5
if (fileimgPath4.exists() && !fileimgPath5.exists()) {
//Get/decode bitmap from the fourth thumbnail path to be able to set it to our ImageView that should hold the fourth thumbnail
Bitmap bmp4 = BitmapFactory.decodeFile(imgPath4);
//Set the fourth thumbnail to our fourth ImageView
imgFour.setImageBitmap(bmp4);
//Set the ffmpeg command to retrieve the last thumbnail
String[] ffmpegCommandForThumb5 = {"-ss", Img5Timeformat, "-i", mStringFilePath, "-vframes", "1", "-s", scaledSize, imageThumbsDirectory + "/" + "thumb5.bmp"};
//Start ffmpeg again, this time we will be getting thumbnail 5
ExecuteThumbFFMPEG(ffmpegCommandForThumb5);
}
//If thumbnail 5 exists, then we are done and we need to set it to our ImageView
if (fileimgPath5.exists()) {
Bitmap bmp5 = BitmapFactory.decodeFile(imgPath5);
imgFive.setImageBitmap(bmp5);
}
} catch (Exception ex) {
Toast.makeText(Player.this, String.valueOf(ex), Toast.LENGTH_SHORT).show();
}
}
@Override
public void onFinish() {
//ffmpeg is done
}
});
}
When the user backs out of the Activity or onDestroy gets called, all the thumbnails should be deleted; I do this by calling the following method:
DeleteThumbs.deleteAllThumbnails(getBaseContext());
Here is the DeleteThumbs class for deleting all the thumbnails/images
class DeleteThumbs {
@SuppressWarnings("unused")
static void deleteAllThumbnails(Context baseContext){
//Directory where all the thumbnails are stored
File imageThumbsDirectory = baseContext.getExternalFilesDir("ThumbTemp");
//Path to each thumbnail
File f1 = new File(imageThumbsDirectory + "/" + "thumb1.bmp");
File f2 = new File(imageThumbsDirectory + "/" + "thumb2.bmp");
File f3 = new File(imageThumbsDirectory + "/" + "thumb3.bmp");
File f4 = new File(imageThumbsDirectory + "/" + "thumb4.bmp");
File f5 = new File(imageThumbsDirectory + "/" + "thumb5.bmp");
boolean d1 = f1.delete();
boolean d2 = f2.delete();
boolean d3 = f3.delete();
boolean d4 = f4.delete();
boolean d5 = f5.delete();
}
}
Since we know the name of each thumbnail, it's easy to delete them all at once.
This provides me with 5 thumbnail images that are scaled to reduce loading time into the ImageView's. Because I divided the duration of the video by 6, I get 5 images that are evenly "distributed" throughout the video.
NOTE:
This can be improved by caching the images in memory, or by using a library like Picasso or Glide to handle the image loading for us; a small sketch with Glide follows.
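For instance, a minimal sketch with Glide (assuming the Glide dependency is added; imgPath1 and imgone are the thumbnail path and ImageView from the code above):
// Let Glide decode, cache, and set the thumbnail for us
Glide.with(this).load(new File(imgPath1)).into(imgone);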
Try this one
public void detectBitmapFromVideo(int secondcount, int framecount, String videoPath) {
    int delta_time = secondcount * 1000000; // step between frames, in microseconds
    MediaMetadataRetriever mediaMetadataRetriever = new MediaMetadataRetriever();
    mediaMetadataRetriever.setDataSource(videoPath);
    // METADATA_KEY_DURATION is in milliseconds; getFrameAtTime expects microseconds
    long duration = Long.parseLong(mediaMetadataRetriever.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_DURATION)) * 1000;
    ArrayList<Frame> frames = new ArrayList<Frame>();
    for (long i = 0; i <= duration; i += delta_time) {
        Bitmap bmFrame = mediaMetadataRetriever.getFrameAtTime(i); // time in microseconds
        if (bmFrame == null) {
            continue;
        }
        frames.add(new Frame.Builder().setBitmap(bmFrame).build());
    }
}
I am implementing fall detection using the accelerometer sensor, and created the code below.
public void onSensorChanged(SensorEvent foEvent) {
if (foEvent.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
double loX = foEvent.values[0];
double loY = foEvent.values[1];
double loZ = foEvent.values[2];
double loAccelerationReader = Math.sqrt(Math.pow(loX, 2)
+ Math.pow(loY, 2)
+ Math.pow(loZ, 2));
mlPreviousTime = System.currentTimeMillis();
Log.i(TAG, "loX : " + loX + " loY : " + loY + " loZ : " + loZ);
if (loAccelerationReader <= 6.0) {
moIsMin = true;
Log.i(TAG, "min");
}
if (moIsMin) {
i++;
Log.i(TAG, " loAcceleration : " + loAccelerationReader);
if (loAccelerationReader >= 30) {
long llCurrentTime = System.currentTimeMillis();
long llTimeDiff = llCurrentTime - mlPreviousTime;
Log.i(TAG, "loTime :" + llTimeDiff);
if (llTimeDiff >= 10) {
moIsMax = true;
Log.i(TAG, "max");
}
}
}
if (moIsMin && moIsMax) {
Log.i(TAG, "loX : " + loX + " loY : " + loY + " loZ : " + loZ);
Log.i(TAG, "FALL DETECTED!!!!!");
Toast.makeText(this, "FALL DETECTED!!!!!", Toast.LENGTH_LONG).show();
i = 0;
moIsMin = false;
moIsMax = false;
}
if (i > 5) {
i = 0;
moIsMin = false;
moIsMax = false;
}
}
}
It gives me fall detection, but if I am riding or running it also gives me a fall alert.
If I throw the device from 6 inches, the alert is shown.
I also see that the sensitivity is device specific:
when I test a Moto E and a Mi 4 from the same height,
the Moto E returns a maximum loAccelerationReader value of about 32,
while the Mi 4 returns about 60.
Can anyone help me out with the correct way?
I found a solution; I'm not sure it works for everyone, but I am using the code below and it is working for me.
if (foEvent.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
double loX = foEvent.values[0];
double loY = foEvent.values[1];
double loZ = foEvent.values[2];
double loAccelerationReader = Math.sqrt(Math.pow(loX, 2)
+ Math.pow(loY, 2)
+ Math.pow(loZ, 2));
DecimalFormat precision = new DecimalFormat("0.00");
double ldAccRound = Double.parseDouble(precision.format(loAccelerationReader));
if (ldAccRound > 0.3d && ldAccRound < 0.5d) {
// the magnitude is close to zero, i.e. the device is (nearly) in free fall
//Do your stuff
}
}
You are on the right track: it can detect a fall, but it also detects other non-fall events. My suggestion is, instead of single-point thresholding (e.g. magnitude > 30), to look at a time interval of accelerometer readings (e.g. 1 second). The readings for a fall, running, and driving will be very different statistically (e.g. in mean and variance); a sketch of this idea follows. I hope this can serve as a starting point for the next iteration of your detection algorithm.
It is very likely that the readings will differ from device to device, since the accelerometers they use are different and may have different sensitivities.
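A minimal sketch of that idea (the window size and both thresholds are illustrative only and must be tuned per device):
private final ArrayDeque<Double> window = new ArrayDeque<Double>();
private static final int WINDOW_SIZE = 50; // roughly 1 s at SENSOR_DELAY_GAME

private void addReading(double magnitude) {
    window.addLast(magnitude);
    if (window.size() > WINDOW_SIZE) window.removeFirst();
    if (window.size() < WINDOW_SIZE) return;
    double mean = 0;
    for (double m : window) mean += m;
    mean /= WINDOW_SIZE;
    double variance = 0;
    for (double m : window) variance += (m - mean) * (m - mean);
    variance /= WINDOW_SIZE;
    // A fall shows a near-free-fall dip followed by a large spike, so the
    // window has a lowered mean and a very high variance; running and
    // driving look periodic or smooth by comparison.
    if (mean < 9.0 && variance > 60.0) {
        // possible fall -- run further checks here
    }
}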
In my application I need to display a calendar with some events on it, like in the image below.
I've read this topic, but the libraries described there don't fit my needs. I've also found this lib: Extended Calendar View, but it works only on devices with API level 14 or higher, while I need to support API level 10.
Maybe someone knows a library that solves this problem? Any help will be highly appreciated.
I referred to this link to create something like that: link
/**
* NOTE: YOU NEED TO IMPLEMENT THIS PART Given the YEAR, MONTH, retrieve
* ALL entries from a SQLite database for that month. Iterate over the
* List of All entries, and get the dateCreated, which is converted into
* day.
*
* @param year
* @param month
* @return map from day of month to number of events
*/
private HashMap<String, Integer> findNumberOfEventsPerMonth(int year, int month)
{
HashMap<String, Integer> map = new HashMap<String, Integer>();
// DateFormat dateFormatter2 = new DateFormat();
//
// String day = dateFormatter2.format("dd", dateCreated).toString();
//
// if (map.containsKey(day))
// {
// Integer val = (Integer) map.get(day) + 1;
// map.put(day, val);
// }
// else
// {
// map.put(day, 1);
// }
return map;
}
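To make the part that the NOTE asks you to implement concrete, here is a hedged sketch; loadEventsForMonth and the Event entity with its dateCreated field are assumptions to be replaced by your own SQLite query and model:
// Hypothetical sketch of findNumberOfEventsPerMonth's body
private HashMap<String, Integer> findNumberOfEventsPerMonth(int year, int month)
{
    HashMap<String, Integer> map = new HashMap<String, Integer>();
    SimpleDateFormat dayFormat = new SimpleDateFormat("dd");
    // loadEventsForMonth is assumed to run the SQLite query for that month
    for (Event event : loadEventsForMonth(year, month)) {
        String day = dayFormat.format(event.dateCreated); // dateCreated is a java.util.Date
        Integer count = map.get(day);
        map.put(day, count == null ? 1 : count + 1);
    }
    return map;
}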
In the above code, build the HashMap keyed by date, and replace the if condition with this (make changes according to your needs):
if ((mEventsPerMonthMap != null) && (!mEventsPerMonthMap.isEmpty())) {
    Set<String> keys = mEventsPerMonthMap.keySet();
    for (String key : keys) {
        String paddedKey = String.format("%02d", Integer.parseInt(theday)) + "/" + monthInNo + "/" + theyear;
        if (key.equals(theday + "/" + monthInNo + "/" + theyear)
                && mEventsPerMonthMap.containsKey(paddedKey)) {
            datewiseEventmap.put(theday + "/" + monthInNo + "/" + theyear,
                    (ArrayList<Appointment>) mEventsPerMonthMap.get(paddedKey));
        }
    }
}
Changing each grid cell
if (datewiseEventmap != null && datewiseEventmap.size() > 0) {
    mNumEvents = (ArrayList<Appointment>) datewiseEventmap
            .get(theday + "/" + monthInNo + "/" + theyear);
    eventName.setText(sbf); // sbf holds the event title(s) built elsewhere
    eventName.setBackgroundResource(R.drawable.appointment_name_bg);
    eventCount.setVisibility(View.VISIBLE);
    if (mNumEvents.size() > 1) {
        eventCount.setVisibility(View.VISIBLE);
        eventCount.setText("+ " + String.valueOf(mNumEvents.size()));
    }
}