mp4parser cannot cut a video at the exact time - Android

My original video is 10.3 seconds long.
I want to cut from second 2.7 to second 5.7.
public static void startTrim(@NonNull File src, @NonNull String dst, long startMs, long endMs, @NonNull OnTrimVideoListener callback) throws IOException {
    final String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US).format(new Date());
    final String fileName = "MP4_" + timeStamp + ".mp4";
    final String filePath = dst;
    File file = new File(filePath);
    file.getParentFile().mkdirs();
    Log.d(TAG, "Generated file path " + filePath);
    genVideoUsingMp4Parser(src, file, startMs, endMs, callback);
}
private static void genVideoUsingMp4Parser(@NonNull File src, @NonNull File dst, long startMs, long endMs, @NonNull OnTrimVideoListener callback) throws IOException {
    // NOTE: Switched to using FileDataSourceViaHeapImpl since it does not use memory mapping (VM).
    // Otherwise we get OOM with large movie files.
    Movie movie = MovieCreator.build(new FileDataSourceViaHeapImpl(src.getAbsolutePath()));
    List<Track> tracks = movie.getTracks();
    movie.setTracks(new LinkedList<Track>());
    // remove all tracks; we will create new tracks from the old ones
    double startTime1 = startMs; // 2.7
    double endTime1 = endMs;     // 5.7
    boolean timeCorrected = false;
    // Here we try to find a track that has sync samples. Since we can only start decoding
    // at such a sample, we SHOULD make sure that the start of the new fragment is exactly
    // such a frame.
    for (Track track : tracks) {
        if (track.getSyncSamples() != null && track.getSyncSamples().length > 0) {
            if (timeCorrected) {
                // This exception here could be a false positive in case we have multiple tracks
                // with sync samples at exactly the same positions. E.g. a single movie containing
                // multiple qualities of the same video (Microsoft Smooth Streaming file)
                throw new RuntimeException("The startTime has already been corrected by another track with SyncSample. Not Supported.");
            }
            startTime1 = correctTimeToSyncSample(track, startTime1, false);
            endTime1 = correctTimeToSyncSample(track, endTime1, true);
            timeCorrected = true;
        }
    }
    for (Track track : tracks) {
        long currentSample = 0;
        double currentTime = 0;
        double lastTime = -1;
        long startSample1 = -1;
        long endSample1 = -1;
        for (int i = 0; i < track.getSampleDurations().length; i++) {
            long delta = track.getSampleDurations()[i];
            if (currentTime > lastTime && currentTime <= startTime1) {
                // current sample is still before the new start time
                startSample1 = currentSample;
            }
            if (currentTime > lastTime && currentTime <= endTime1) {
                // current sample is after the new start time and still before the new end time
                endSample1 = currentSample;
            }
            lastTime = currentTime;
            currentTime += (double) delta / (double) track.getTrackMetaData().getTimescale();
            currentSample++;
        }
        movie.addTrack(new AppendTrack(new CroppedTrack(track, startSample1, endSample1)));
    }
    dst.getParentFile().mkdirs();
    if (!dst.exists()) {
        dst.createNewFile();
    }
    Container out = new DefaultMp4Builder().build(movie);
    FileOutputStream fos = new FileOutputStream(dst);
    FileChannel fc = fos.getChannel();
    out.writeContainer(fc);
    fc.close();
    fos.close();
    if (callback != null)
        callback.getResult(Uri.parse(dst.toString()));
}
But after correctTimeToSyncSample finishes, startTime1 gets the value 2.08... and endTime1 gets the value 5.18...:
startTime1 = 2.0830555555555557
endTime1 = 5.182877777777778
private static double correctTimeToSyncSample(@NonNull Track track, double cutHere, boolean next) {
    double[] timeOfSyncSamples = new double[track.getSyncSamples().length];
    long currentSample = 0;
    double currentTime = 0;
    for (int i = 0; i < track.getSampleDurations().length; i++) {
        long delta = track.getSampleDurations()[i];
        if (Arrays.binarySearch(track.getSyncSamples(), currentSample + 1) >= 0) {
            // samples always start with 1 but we start with zero therefore +1
            timeOfSyncSamples[Arrays.binarySearch(track.getSyncSamples(), currentSample + 1)] = currentTime;
        }
        currentTime += (double) delta / (double) track.getTrackMetaData().getTimescale();
        currentSample++;
    }
    double previous = 0;
    for (double timeOfSyncSample : timeOfSyncSamples) {
        if (timeOfSyncSample > cutHere) {
            if (next) {
                return timeOfSyncSample;
            } else {
                return previous;
            }
        }
        previous = timeOfSyncSample;
    }
    return timeOfSyncSamples[timeOfSyncSamples.length - 1];
}
The video is saved successfully, but not at the exact times I wanted.
Can anyone please help me with this?

Video can only be cut at keyframes (called sync samples in MP4). There is usually a keyframe every 1 to 10 seconds. To cut at an exact frame, you need to transcode using a tool like ffmpeg.

You can add edts/elst/stss/stsh/sdtp boxes to do it.
Add an edts/elst box to indicate the media-time and segment-duration. In your case, the media-time of the 'elst' box is set to the media time of 2.7 s, and the segment-duration is set to 3 seconds, expressed in the time-scale of the movie.
Of course, you also need to add the Sync Sample Box, Shadow Sync Sample Box and the Independent and Disposable Samples Box to specify the dependencies of your first frame if it is not a keyframe.
A compliant MP4 player will find the sync sample that the frame at your start time depends on, decode all of the frames edited out by means of the edit list (they are needed to decode your first dependent frame), but not present them until the first frame you specified.
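To make the numbers concrete, the elst arithmetic for this question's 2.7 s to 5.7 s cut would look roughly like the sketch below (the variable names and timescale values are illustrative; the two timescales stand for the values read from the mvhd and mdhd boxes and are not tied to a specific mp4parser accessor):
// 'elst' entry values for a 2.7 s -> 5.7 s presentation window.
// media-time is expressed in the media (track) timescale, segment-duration in the movie timescale.
long mediaTimescale = 90000; // illustrative value from the track's mdhd box
long movieTimescale = 1000;  // illustrative value from the movie's mvhd box
long mediaTime = Math.round(2.7 * mediaTimescale);               // where presentation starts inside the track
long segmentDuration = Math.round((5.7 - 2.7) * movieTimescale); // 3 seconds of presented media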

Related

Trimming audio using mp4Parser. Or is there any alternative?

I am trying to trim an audio file using mp4Parser.
Is that possible?
I tried ffmpeg, which is quite time consuming.
Our app requires both video and audio processing.
Any suggestions on this?
ffmpeg is fast. What command did you use? Maybe you made ffmpeg transcode, which is slow?
What about
ffmpeg -i audiofile.mp3 -ss 3 -t 5 -acodec copy outfile.mp3
This remuxes starting from second 3 for 5 seconds into outfile.mp3, very fast.
Yes, it is possible with mp4Parser, and with FFmpeg as well.
With FFmpeg you have to use the right command options (e.g. stream copy, as in the -acodec copy example above) for faster processing.
You could take a look at this library:
k4l-video-trimmer
Yes, it's possible to trim audio with mp4parser.
It should look something like the following:
private CroppedTrack getCroppedTrack(Track track, int startTimeMs, int endTimeMs) {
    long currentSample = 0;
    double currentTime = 0;
    long startSample = -1;
    long endSample = -1;
    double startTime = startTimeMs / 1000d; // divide by 1000d to avoid integer division
    double endTime = endTimeMs / 1000d;
    for (int i = 0; i < track.getSampleDurations().length; i++) {
        if (currentTime <= startTime) {
            // current sample is still before the new start time
            startSample = currentSample;
        }
        if (currentTime <= endTime) {
            // current sample is after the new start time and still before the new end time
            endSample = currentSample;
        } else {
            // current sample is after the end of the cropped video
            break;
        }
        currentTime += (double) track.getSampleDurations()[i] / (double) track.getTrackMetaData().getTimescale();
        currentSample++;
    }
    return new CroppedTrack(track, startSample, endSample);
}
Use the above method to trim a track (or tracks) to your desired start and end points:
movie.addTrack(getCroppedTrack(track, 0, 8000));
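For completeness, a minimal end-to-end sketch (assuming the getCroppedTrack() method above and the same mp4parser classes already used in the video-trimming question; srcFile and dstFile are placeholders):
Movie movie = MovieCreator.build(new FileDataSourceViaHeapImpl(srcFile.getAbsolutePath()));
List<Track> tracks = movie.getTracks();
movie.setTracks(new LinkedList<Track>());
for (Track track : tracks) {
    // keep the first 8 seconds of every track (audio and video)
    movie.addTrack(getCroppedTrack(track, 0, 8000));
}
Container out = new DefaultMp4Builder().build(movie);
FileOutputStream fos = new FileOutputStream(dstFile);
out.writeContainer(fos.getChannel());
fos.close();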

Integrating the im2txt model on an Android phone

I am new to TensorFlow and cannot find the solution to these questions.
How can I retrain the im2txt model on my new dataset so that the dataset the im2txt model was originally trained on does not get lost and my new dataset is added to the MSCOCO dataset to caption the new images (i.e. training dataset = MSCOCO dataset + my new dataset)? Could someone please share the detailed procedure and the problems I might face while retraining?
I have found the TensorFlow tutorial for running the Inception V3 model on Android with real-time data. Can this method also be applied to the im2txt model, i.e. can it be made to caption an image taken from a mobile phone in real time? Could someone please share the detailed steps for how to do this?
After weeks of struggle I was able to run the im2txt model on Android.
Since I found the solutions in different blogs and in different questions and answers, I felt it might be useful to have the whole (or at least most of the) solution in one place, so I am sharing the steps I followed.
You need to clone the TensorFlow project (https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0) in order to freeze the graph and for some other utilities.
Download the im2txt model from https://github.com/KranthiGV/Pretrained-Show-and-Tell-model
Following the steps described in the above link, I was able to run inference and generate captions on a Linux desktop, after renaming some variables in the graph (to overcome errors of the type NotFoundError (see above for traceback): Key lstm/basic_lstm_cell/bias not found in checkpoint).
Now we need to freeze the existing model to obtain a frozen graph that can be used in Android/iOS.
With freeze_graph.py (tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) from the cloned TensorFlow project, one can freeze the graph of any model by giving the following command.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb \
--output_node_names=softmax \
--input_binary=true
We need to supply all the output_node_names required to run the model. From "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_wrapper.py" we can list the output node names as 'softmax', 'lstm/initial_state' and 'lstm/state'.
When I ran the freeze_graph command supplying the output node names 'softmax', 'lstm/initial_state' and 'lstm/state', I got the error "AssertionError: softmax is not in the graph".
From the answers to How to freeze an im2txt model? by Steph and Jeff Tang:
The current model ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt should be loaded in inference mode (see InferenceWrapper in im2txt). It builds a graph with the correct names 'softmax', 'lstm/initial_state' and 'lstm/state'. You save this graph (with the same ckpt format) and then you can apply the freeze_graph script to obtain the frozen model.
To do this, in Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\inference_wrapper.base.py, just add something like saver.save(sess, "model/ckpt4") after saver.restore(sess, checkpoint_path) in def _restore_fn(sess):. Then rebuild and run_inference, and you'll get a model that can be frozen, transformed, and optionally memmapped, to be loaded by iOS and Android apps.
Now I run the command below:
python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/ckpt4.meta \
--input_checkpoint=/tmp/ckpt4 \
--output_graph=/tmp/ckpt4_frozen.pb \
--output_node_names="softmax,lstm/initial_state,lstm/state" \
--input_binary=true
Then I loaded the obtained ckpt4_frozen.pb file into the Android application and got the error
"java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: decode/DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]"
From https://github.com/tensorflow/tensorflow/issues/2883
Since DecodeJpeg isn't supported as part of the Android TensorFlow core, you'll need to strip it out of the graph first:
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=ckpt4_frozen.pb \
--output_graph=ckpt4_frozen_stripped_graph.pb \
--input_node_names=convert_image/Cast,input_feed,lstm/state_feed \
--output_node_names=softmax,lstm/initial_state,lstm/state \
--input_binary=true
When I tried to load ckpt4_frozen_stripped_graph.pb on Android I faced errors, so I followed Jeff Tang's answer (Error using Model after using optimize_for_inference.py on frozen graph).
Instead of tools:strip_unused I used the graph transform tool:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/tmp/ckpt4_frozen.pb \
--out_graph=/tmp/ckpt4_frozen_transformed.pb \
--inputs="convert_image/Cast,input_feed,lstm/state_feed" \
--outputs="softmax,lstm/initial_state,lstm/state" \
--transforms='
strip_unused_nodes(type=float, shape="1,299,299,3")
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms'
I was able to load the obtained ckpt4_frozen_transformed.pb on Android successfully.
I supply the input as a float array of RGB image pixels to the input node "convert_image/Cast" and fetch the output from the "lstm/initial_state" node successfully.
Now the challenge is to understand the beam search in "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\caption_generator.py", and the same has to be implemented on the Android side.
If you look at the Python script caption_generator.py at
softmax, new_states, metadata = self.model.inference_step(sess, input_feed, state_feed)
input_feed is an int32 array and state_feed is a multidimensional float array.
On the Android side I tried feeding an int32 array for "input_feed". Since there is no Java API to feed a multidimensional array, I fed the float array to lstm/state_feed as it is, fetched previously from the "lstm/initial_state" node.
I got two errors: one is that input_feed expects int64, and the other is
"java.lang.IllegalArgumentException: -input rank(-1) <= split_dim < input rank (1), but got 1" at lstm/state_feed.
For the first error, I changed the input_feed data type from int32 to int64.
For the second error, it expects a rank-2 tensor.
If you look at the TensorFlow Java sources, the float array we are feeding is converted to a rank-1 tensor; we should feed the data in such a way that a rank-2 tensor is created, but at present I didn't find any API there to feed a multidimensional float array.
While browsing the TensorFlow Java sources I found an API, not exposed as an Android API, with which we can create a rank-2 tensor. So I rebuilt both libtensorflow_inference.so and libandroid_tensorflow_inference_java.jar, enabling the rank-2 tensor creation call (for the build process refer to https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc).
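Note: the TensorFlowInferenceInterface.feed(String, float[], long...) overload used elsewhere in this code accepts an explicit shape, so a rank-2 [batch, stateSize] tensor may also be feedable from a flattened float array without rebuilding the native library. A rough sketch (the sizes are illustrative, not values from the actual model):
int batch = 1;          // illustrative: number of partial captions fed in this step
int stateSize = 1024;   // illustrative: length of one LSTM state vector
float[][] stateFeed2d = new float[batch][stateSize]; // one state row per partial caption
float[] stateFeedFlat = new float[batch * stateSize];
for (int b = 0; b < batch; b++) {
    System.arraycopy(stateFeed2d[b], 0, stateFeedFlat, b * stateSize, stateSize);
}
// the trailing dims arguments define the tensor shape, here [batch, stateSize]
inferenceInterface.feed("lstm/state_feed", stateFeedFlat, batch, stateSize);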
Now I am able to run inference on Android and get one caption for the image, but the accuracy is very low.
The reason for being limited to one caption is that I didn't find a way to fetch the outputs as a multidimensional array, which is required for generating a larger number of captions for a single image.
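One possible direction (again only a sketch with illustrative sizes, not something the steps above implement) is to fetch the flat softmax output into a FloatBuffer of size batch * numClasses and reshape it into a float[][] in Java:
int batch = 3;            // illustrative: number of partial captions fed in this step
int numClasses = 12000;   // illustrative: vocabulary size of the softmax output
FloatBuffer flatSoftmax = FloatBuffer.allocate(batch * numClasses);
inferenceInterface.fetch("softmax", flatSoftmax);
float[][] perCaptionProbs = new float[batch][numClasses];
for (int b = 0; b < batch; b++) {
    flatSoftmax.position(b * numClasses);
    flatSoftmax.get(perCaptionProbs[b], 0, numClasses);
}
The inference setup I actually used is below: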
String actualFilename = labelFilename.split("file:///android_asset/")[1];
vocab = new Vocabulary(assetManager.open(actualFilename));
inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
final Graph g = c.inferenceInterface.graph();
final Operation inputOperation = g.operation(inputName);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + inputName + "'");
}
final Operation outPutOperation = g.operation(outputName);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + outputName + "'");
}
// The shape of the output is [N, NUM_CLASSES], where N is the batch size.
int numClasses = (int) inferenceInterface.graph().operation(outputName)
.output(0).shape().size(1);
Log.i(TAG, "Read " + vocab.totalWords() + " labels, output layer size is " + numClasses);
// Ideally, inputSize could have been retrieved from the shape of the input operation. Alas,
// the placeholder node for input in the graphdef typically used does not specify a shape, so it
// must be passed in as a parameter.
this.inputSize = inputSize;
// Pre-allocate buffers.
outputNames = new String[]{outputName + ":0"};
outputs = new float[numClasses];
inferenceInterface.feed(inputName + ":0", pixels, inputSize, inputSize, 3);
inferenceInterface.run(outputNames, runStats);
inferenceInterface.fetch(outputName + ":0", outputs);
startIm2txtBeamSearch(outputs);
//Implemented Beam search in JAVA
private void startIm2txtBeamSearch(float[] outputs) {
int beam_size = 1;
//TODO:Prepare vocab ids from file
ArrayList<Integer> vocab_ids = new ArrayList<>();
vocab_ids.add(1);
int vocab_end_id = 2;
float lenth_normalization_factor = 0;
int maxCaptionLength = 20;
Graph g = inferenceInterface.graph();
//node input feed
String input_feed_node_name = "input_feed";
Operation inputOperation = g.operation(input_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_feed_node_name + "'");
}
String output_feed_node_name = "softmax";
Operation outPutOperation = g.operation(output_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_feed_node_name + "'");
}
int output_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_feed_node_name + ", output layer size is " + output_feed_node_numClasses);
FloatBuffer output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
//float [][] output_feed_output = new float[numClasses][];
//node state feed
String input_state_feed_node_name = "lstm/state_feed";
inputOperation = g.operation(input_state_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_state_feed_node_name + "'");
}
String output_state_feed_node_name = "lstm/state";
outPutOperation = g.operation(output_state_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_state_feed_node_name + "'");
}
int output_state_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_state_feed_node_name + ", output layer size is " + output_state_feed_node_numClasses);
FloatBuffer output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
//float[][] output_state_output= new float[numClasses][];
String[] output_nodes = new String[]{output_feed_node_name, output_state_feed_node_name};
Caption initialBean = new Caption(vocab_ids, outputs, (float) 0.0, (float) 0.0);
TopN partialCaptions = new TopN(beam_size);
partialCaptions.push(initialBean);
TopN completeCaption = new TopN(beam_size);
captionLengthLoop:
for (int i = maxCaptionLength; i >= 0; i--) {
List<Caption> partialCaptionsList = new LinkedList<>(partialCaptions.extract(false));
partialCaptions.reset();
long[] input_feed = new long[partialCaptionsList.size()];
float[][] state_feed = new float[partialCaptionsList.size()][];
for (int j = 0; j < partialCaptionsList.size(); j++) {
Caption curCaption = partialCaptionsList.get(j);
ArrayList<Integer> senArray = curCaption.getSentence();
input_feed[j] = senArray.get(senArray.size() - 1);
state_feed[j] = curCaption.getState();
}
//feeding
inferenceInterface.feed(input_feed_node_name, input_feed, new long[]{input_feed.length});
inferenceInterface.feed(input_state_feed_node_name, state_feed, new long[]{state_feed.length});
//run
inferenceInterface.run(output_nodes, runStats);
//fetching
inferenceInterface.fetch(output_feed_node_name, output_feed_output);
inferenceInterface.fetch(output_state_feed_node_name, output_state_output);
float[] word_probabilities = new float[partialCaptionsList.size()];
float[] new_state = new float[partialCaptionsList.size()];
for (int k = 0; k < partialCaptionsList.size(); k++) {
word_probabilities = output_feed_output.array();
//output_feed_output.get(word_probabilities[k]);
new_state = output_state_output.array();
//output_feed_output.get(state[k]);
// For this partial caption, get the beam_size most probable next words.
Map<Integer, Float> word_and_probs = new LinkedHashMap<>();
//key is index of probability; value is index = word
for (int l = 0; l < word_probabilities.length; l++) {
word_and_probs.put(l, word_probabilities[l]);
}
//sorting
// word_and_probs = word_and_probs.entrySet().stream()
// .sorted(Map.Entry.comparingByValue())
// .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e1, LinkedHashMap::new));
word_and_probs = MapUtil.sortByValue(word_and_probs);
//considering first (beam size probabilities)
LinkedHashMap<Integer, Float> final_word_and_probs = new LinkedHashMap<>();
for (int key : word_and_probs.keySet()) {
final_word_and_probs.put(key, word_and_probs.get(key));
if (final_word_and_probs.size() == beam_size)
break;
}
for (int w : final_word_and_probs.keySet()) {
float p = final_word_and_probs.get(w);
if (p < 1e-12) {//# Avoid log(0).
Log.d(TAG, "p is < 1e-12");
continue;
}
Caption partialCaption = partialCaptionsList.get(k);
ArrayList<Integer> sentence = new ArrayList<>(partialCaption.getSentence());
sentence.add(w);
float logprob = (float) (partialCaption.getPorb() + Math.log(p));
float scroe = logprob;
Caption beam = new Caption(sentence, new_state, logprob, scroe);
if (w == vocab_end_id) {
completeCaption.push(beam);
} else {
partialCaptions.push(beam);
}
}
if (partialCaptions.getSize() == 0)//run out of partial candidates; happens when beam_size = 1.
break captionLengthLoop;
}
//clear buffer retrieve sub sequent output
output_feed_output.clear();
output_state_output.clear();
output_feed_output = null;
output_state_output = null;
output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
Log.d(TAG, "----" + i + " Iteration completed----");
}
Log.d(TAG, "----Total Iterations completed----");
LinkedList<Caption> completeCaptions = completeCaption.extract(true);
for (Caption cap : completeCaptions) {
ArrayList<Integer> wordids = cap.getSentence();
StringBuffer caption = new StringBuffer();
boolean isFirst = true;
for (int word : wordids) {
if (!isFirst)
caption.append(" ");
caption.append(vocab.getWord(word));
isFirst = false;
}
Log.d(TAG, "Cap score = " + Math.exp(cap.getScore()) + " and Caption is " + caption);
}
}
//Vocab
public class Vocabulary {
String TAG = Vocabulary.class.getSimpleName();
String start_word = "<S>", end_word = "</S>", unk_word = "<UNK>";
ArrayList<String> words;
public Vocabulary(File vocab_file) {
loadVocabsFromFile(vocab_file);
}
public Vocabulary(InputStream vocab_file_stream) {
words = readLinesFromFileAndLoadWords(new InputStreamReader(vocab_file_stream));
}
public Vocabulary(String vocab_file_path) {
File vocabFile = new File(vocab_file_path);
loadVocabsFromFile(vocabFile);
}
private void loadVocabsFromFile(File vocabFile) {
try {
this.words = readLinesFromFileAndLoadWords(new FileReader(vocabFile));
//Log.d(TAG, "Words read from file = " + words.size());
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
private ArrayList<String> readLinesFromFileAndLoadWords(InputStreamReader file_reader) {
ArrayList<String> words = new ArrayList<>();
try (BufferedReader br = new BufferedReader(file_reader)) {
String line;
while ((line = br.readLine()) != null) {
// process the line.
words.add(line.split(" ")[0].trim());
}
br.close();
if (!words.contains(unk_word))
words.add(unk_word);
} catch (IOException e) {
e.printStackTrace();
}
return words;
}
public String getWord(int word_id) {
if (words != null)
if (word_id >= 0 && word_id < words.size())
return words.get(word_id);
return "No word found, Maybe Vocab File not loaded";
}
public int totalWords() {
if (words != null)
return words.size();
return 0;
}
}
//MapUtil
public class MapUtil {
public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
List<Map.Entry<K, V>> list = new ArrayList<>(map.entrySet());
list.sort(new Comparator<Map.Entry<K, V>>() {
@Override
public int compare(Map.Entry<K, V> o1, Map.Entry<K, V> o2) {
if (o1.getValue() instanceof Float && o2.getValue() instanceof Float) {
Float o1Float = (Float) o1.getValue();
Float o2Float = (Float) o2.getValue();
return o1Float >= o2Float ? -1 : 1;
}
return 0;
}
});
Map<K, V> result = new LinkedHashMap<>();
for (Map.Entry<K, V> entry : list) {
result.put(entry.getKey(), entry.getValue());
}
return result;
}
}
//Caption
public class Caption implements Comparable<Caption> {
private ArrayList<Integer> sentence;
private float[] state;
private float porb;
private float score;
public Caption(ArrayList<Integer> sentence, float[] state, float porb, float score) {
this.sentence = sentence;
this.state = state;
this.porb = porb;
this.score = score;
}
public ArrayList<Integer> getSentence() {
return sentence;
}
public void setSentence(ArrayList<Integer> sentence) {
this.sentence = sentence;
}
public float[] getState() {
return state;
}
public void setState(float[] state) {
this.state = state;
}
public float getPorb() {
return porb;
}
public void setPorb(float porb) {
this.porb = porb;
}
public float getScore() {
return score;
}
public void setScore(float score) {
this.score = score;
}
@Override
public int compareTo(@NonNull Caption oc) {
if (score == oc.score)
return 0;
if (score < oc.score)
return -1;
else
return 1;
}
}
//TopN
public class TopN {
//Maintains the top n elements of an incrementally provided set.
int n;
LinkedList<Caption> data;
public TopN(int n) {
this.n = n;
this.data = new LinkedList<>();
}
public int getSize() {
if (data != null)
return data.size();
return 0;
}
//Pushes a new element
public void push(Caption x) {
if (data != null) {
if (getSize() < n) {
data.add(x);
} else {
data.removeLast();
data.add(x);
}
}
}
//Extracts all elements from the TopN. This is a destructive operation.
//The only method that can be called immediately after extract() is reset().
//Args:
//sort: Whether to return the elements in descending sorted order.
//Returns: A list of data; the top n elements provided to the set.
public LinkedList<Caption> extract(boolean sort) {
if (sort) {
Collections.sort(data);
}
return data;
}
//Returns the TopN to an empty state.
public void reset() {
if (data != null) data.clear();
}
}
Even though the accuracy is very low, I am sharing this because it might be useful for someone who needs to load the Show and Tell model on Android.

How can I get the exposure time as a rational from photo properties in Android?

I'm writing a gallery app, but I get a double when I use exifInterface.getAttribute(ExifInterface.TAG_EXPOSURE_TIME); it should be a rational (fraction). If I open the system gallery, it is shown as a rational. Please help me. Thanks.
To get precise/correct values use the new ExifInterface support library instead of the old ExifInterface.
You must add this to your Gradle file:
compile "com.android.support:exifinterface:25.1.0"
And then ensure you use the new android.support.media.ExifInterface library instead of the old android.media.ExifInterface.
import android.support.media.ExifInterface;
String getExposureTime(final ExifInterface exif) {
    String exposureTime = exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME);
    if (exposureTime != null) {
        exposureTime = formatExposureTime(Double.valueOf(exposureTime));
    }
    return exposureTime;
}
public static String formatExposureTime(final double value) {
    String output;
    if (value < 1.0f) {
        output = String.format(Locale.getDefault(), "%d/%d", 1, (int) (0.5f + 1 / value));
    } else {
        final int integer = (int) value;
        final double time = value - integer;
        output = String.format(Locale.getDefault(), "%d''", integer);
        if (time > 0.0001f) {
            output += String.format(Locale.getDefault(), " %d/%d", 1, (int) (0.5f + 1 / time));
        }
    }
    return output;
}
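For example (values just for illustration):
String fast = formatExposureTime(0.008); // -> "1/125"
String slow = formatExposureTime(2.5);   // -> "2'' 1/2"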

If the user changes the date and time on the device, how can I get the actual current date and time in Android?

If a user changes the date and time on his/her device, how can I get the current date and time that are actually in effect? I am working on a project that uses the current date as the start date, so if the user changes the date as he/she wants, that custom date would become my start date. Instead, I want the real current date, whether the device date has been set before or after it.
Example: the current date is 8 Aug 2015 and the user changes his/her device date to 20 Aug 2015. How can I get the actual date, which is 8 Aug 2015? Thank you in advance.
Connection to NTP Server:
import android.os.AsyncTask;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
public class NTPUTCTime {
private static final String TAG = "SntpClient";
private static final int RECEIVE_TIME_OFFSET = 32;
private static final int TRANSMIT_TIME_OFFSET = 40;
private static final int NTP_PACKET_SIZE = 48;
private static final int NTP_PORT = 123;
private static final int NTP_MODE_CLIENT = 3;
private static final int NTP_VERSION = 3;
// Number of seconds between Jan 1, 1900 and Jan 1, 1970
// 70 years plus 17 leap days
private static final long OFFSET_1900_TO_1970 = ((365L * 70L) + 17L) * 24L * 60L * 60L;
private long mNtpTime;
public boolean requestTime() {
try {
return new AsyncTask<Void, Void, Boolean>() {
@Override
protected Boolean doInBackground(Void... voids) {
try {
DatagramSocket socket = new DatagramSocket();
socket.setSoTimeout(1000);
InetAddress address = InetAddress.getByName("pool.ntp.org");
byte[] buffer = new byte[NTP_PACKET_SIZE];
DatagramPacket request = new DatagramPacket(buffer, buffer.length, address, NTP_PORT);
buffer[0] = NTP_MODE_CLIENT | (NTP_VERSION << 3);
writeTimeStamp(buffer, TRANSMIT_TIME_OFFSET);
socket.send(request);
// read the response
DatagramPacket response = new DatagramPacket(buffer, buffer.length);
socket.receive(response);
socket.close();
mNtpTime = readTimeStamp(buffer, RECEIVE_TIME_OFFSET);
} catch (Exception e) {
// if (Config.LOGD) Log.d(TAG, "request time failed: " + e);
e.printStackTrace();
return false;
}
return true;
}
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR).get();
} catch (Exception e) {
// if (Config.LOGD) Log.d(TAG, "request time failed: " + e);
e.printStackTrace();
return false;
}
}
public long getNtpTime() {
return mNtpTime;
}
/**
* Reads an unsigned 32 bit big endian number from the given offset in the buffer.
*/
private long read32(byte[] buffer, int offset) {
byte b0 = buffer[offset];
byte b1 = buffer[offset + 1];
byte b2 = buffer[offset + 2];
byte b3 = buffer[offset + 3];
// convert signed bytes to unsigned values
int i0 = ((b0 & 0x80) == 0x80 ? (b0 & 0x7F) + 0x80 : b0);
int i1 = ((b1 & 0x80) == 0x80 ? (b1 & 0x7F) + 0x80 : b1);
int i2 = ((b2 & 0x80) == 0x80 ? (b2 & 0x7F) + 0x80 : b2);
int i3 = ((b3 & 0x80) == 0x80 ? (b3 & 0x7F) + 0x80 : b3);
return ((long) i0 << 24) + ((long) i1 << 16) + ((long) i2 << 8) + (long) i3;
}
/**
* Reads the NTP time stamp at the given offset in the buffer and returns
* it as a system time (milliseconds since January 1, 1970).
*/
private long readTimeStamp(byte[] buffer, int offset) {
long seconds = read32(buffer, offset);
long fraction = read32(buffer, offset + 4);
return ((seconds - OFFSET_1900_TO_1970) * 1000) + ((fraction * 1000L) / 0x100000000L);
}
/**
* Writes 0 as NTP starttime stamp in the buffer. --> Then NTP returns Time OFFSET since 1900
*/
private void writeTimeStamp(byte[] buffer, int offset) {
int ofs = offset++;
for (int i = ofs; i < (ofs + 8); i++)
buffer[i] = (byte) (0);
}
}
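A minimal usage sketch (the SNTP query runs on a background thread, but requestTime() waits for its result):
NTPUTCTime sntpClient = new NTPUTCTime();
if (sntpClient.requestTime()) {
    // getNtpTime() returns milliseconds since January 1, 1970, independent of the device clock
    java.util.Date trustedNow = new java.util.Date(sntpClient.getNtpTime());
    Log.d("SntpClient", "Network time: " + trustedNow);
}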
Or you can use this library
https://github.com/instacart/truetime-android
This is similar to How can I get the actual date and time if the device date and time are inaccurate?
Use a time server; that will be the best approach. Or, with whatever backend you have, expose an API to get the server time when the application launches; then you can keep track of the time during that entire session using a Timer if you need hh:mm:ss in addition to just the date.
The answer from user2629865 is mostly correct. You will need to use something like SNTP (Simple Network Time Protocol) to connect to a trusted Internet time server.
The problem comes if the user has also changed the time zone to an incorrect one.
The reason this is a problem in your case is that NTP and SNTP retrieve time in UTC, which is good in one way but obviously won't give you the 'local' time based on the device's geographical location. At this point you need to look at geo-location services to try to identify the location and time zone in order to get an accurate local date/time.
In short, there's no easy one-step answer to this, and ultimately it comes down to WHY you need to do this. If a user wishes to mess about with their date/time/time-zone settings on their own device then that's their choice.

How can I insert a lot of items quickly with ContentResolver?

I'm trying to add words to the user dictionary in bulk (tens of thousands of words).
I've tried this:
Scanner scanner = new Scanner(file);
int counter = 0;
while (scanner.hasNextLine()) {
    String word = scanner.nextLine();
    UserDictionary.Words.addWord(getBaseContext(), word, 255, null, Locale.getDefault());
    setProgress((float) counter++ * 100 / linesCount);
}
and I've also tried bulkInsert:
ArrayList<String> words = new ArrayList<String>(capacity);
int counter = 0;
while (scanner.hasNextLine()) {
    words.add(scanner.nextLine());
    if (words.size() >= capacity) {
        MyUserDictionary.addWords(getBaseContext(), words, Locale.getDefault());
        words.clear();
        setProgress((float) counter * 100 / linesCount);
    }
    counter++;
}
if (!words.isEmpty()) {
    MyUserDictionary.addWords(getBaseContext(), words, Locale.getDefault());
}
And I've tried applyBatch.
All of these methods result in very long running times.
I know that SQLite can do better with transactions.
Do you know a faster way to do it?
EDIT1:
final ContentResolver resolver = context.getContentResolver();
final int COLUMN_COUNT = 5;
List<ContentValues> valueses = new ArrayList<ContentValues>(words.size());
for (String word : words) {
    ContentValues values = new ContentValues(COLUMN_COUNT);
    values.put(UserDictionary.Words.WORD, word);
    values.put(UserDictionary.Words.FREQUENCY, DEFAULT_FREQUENCY);
    values.put(UserDictionary.Words.LOCALE, null == locale ? null : locale.toString());
    values.put(UserDictionary.Words.APP_ID, 0); // TODO: Get App UID
    values.put(UserDictionary.Words.SHORTCUT, "");
    valueses.add(values);
}
resolver.bulkInsert(UserDictionary.Words.CONTENT_URI, valueses.toArray(new ContentValues[valueses.size()]));
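For reference, the applyBatch attempt looks roughly like this (a sketch using the standard ContentProviderOperation API with the same columns as above; the checked exceptions thrown by applyBatch are not handled here). Like the other approaches, it was still slow for me:
ArrayList<ContentProviderOperation> operations = new ArrayList<ContentProviderOperation>(words.size());
for (String word : words) {
    operations.add(ContentProviderOperation.newInsert(UserDictionary.Words.CONTENT_URI)
            .withValue(UserDictionary.Words.WORD, word)
            .withValue(UserDictionary.Words.FREQUENCY, DEFAULT_FREQUENCY)
            .withValue(UserDictionary.Words.LOCALE, null == locale ? null : locale.toString())
            .withValue(UserDictionary.Words.APP_ID, 0) // TODO: Get App UID
            .withValue(UserDictionary.Words.SHORTCUT, "")
            .build());
}
// UserDictionary.AUTHORITY is the authority of the user dictionary content provider
resolver.applyBatch(UserDictionary.AUTHORITY, operations);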
