First of all, please pardon any missing information, as I am very new to programming. Below is my code for opening Physicaloid to connect my Android device to an Arduino over serial communication. Right now I am receiving analog signals from my Arduino, and they appear in tvRead via the array "buf". However, I am stuck trying to plot the array "buf" using AndroidPlot. Please advise. Thank you!
if (mPhysicaloid.open()) {
setEnabledUi(true);
if (cbAutoscroll.isChecked()) {
tvRead.setMovementMethod(new ScrollingMovementMethod());
}
mPhysicaloid.addReadListener(new ReadLisener() {
@Override
public void onRead(int size) {
byte[] buf = new byte[size];
Number[] numarray = new Number[size];
mPhysicaloid.read(buf,size);
//convert buf to int array
for(int i=0;i<size;i++) {
numarray[i]= buf[i];
}
tvAppend(tvRead, Html.fromHtml("<font color=blue>" + new String (buf) + "</font>"));
XYSeries series1 = new SimpleXYSeries(
Arrays.asList(numarray),SimpleXYSeries.ArrayFormat.Y_VALS_ONLY,"Series1");
LineAndPointFormatter series1Format = new LineAndPointFormatter(
Color.rgb(0, 200, 0), // line color
Color.rgb(0, 100, 0), // point color
null, // fill color (none)
new PointLabelFormatter(Color.WHITE));
mySimpleXYPlot.addSeries(series1, series1Format);
}
});
} else {
Toast.makeText(this, "Cannot open", Toast.LENGTH_LONG).show();
}
}
It's not clear from the code snippet at which point in the Activity lifecycle this gets called but it seems likely that it happens well after onCreate via a background thread.
You'll likely need to add a call to plot.redraw() at the end of your onRead() implementation. Also, depending on how many times this method gets called, what you currently have will result in adding an additional series to the plot on each call, which you probably do not want.
Instead, you probably want to create your XYSeries outside of the callback and simply update it from onRead().
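For example, something along these lines (a rough sketch only; the field names, the onCreate() placement, and the history limit of 300 are illustrative, and it assumes java.util.LinkedList plus the AndroidPlot classes you already import):
// Fields of the Activity: one series and one backing list, created once.
private final LinkedList<Number> history = new LinkedList<Number>();
private SimpleXYSeries series1;
// In onCreate(), after looking up mySimpleXYPlot, register the series a single time.
series1 = new SimpleXYSeries("Series1");
mySimpleXYPlot.addSeries(series1, new LineAndPointFormatter(
        Color.rgb(0, 200, 0), Color.rgb(0, 100, 0), null,
        new PointLabelFormatter(Color.WHITE)));
// In onRead(int size), only append the new samples and request a repaint.
byte[] buf = new byte[size];
mPhysicaloid.read(buf, size);
for (int i = 0; i < size; i++) {
    if (history.size() > 300) {   // arbitrary history limit
        history.removeFirst();
    }
    history.addLast(buf[i]);
}
series1.setModel(history, SimpleXYSeries.ArrayFormat.Y_VALS_ONLY);
mySimpleXYPlot.redraw();          // repaint with the updated model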
I have implemented an algorithm to draw text on pages using the PDFBox library for Android. The problem is that whenever I add a new page, the text overlaps (the lines are drawn on top of one another, as shown in the image below). I am sure I am using the PDPageContentStream.newLine() method, but the result is not what I expected.
Am I missing something else?
Here is my code snippet:
PDPage page1 = new PDPage();
getInstance().getAnnexe().addPage(page1);
PDPageContentStream contentStream1 = new
PDPageContentStream(getInstance().getAnnexe(), page1, true, true);
contentStream1.beginText();
contentStream1.newLineAtOffset(100F, 650F);
contentStream1.setFont(font, fontSize);
printMultipleLines(subSet, contentStream1);
contentStream1.endText();
contentStream1.close();
And this is the printMultipleLines() method:
private void printMultipleLines(ArrayList<String> lines, PDPageContentStream contentStream) {
try {
for (String line :
lines) {
if (line.length() > 110) {
// Print line as 2 lines
contentStream.showText(line.substring(0, 90));
contentStream.newLine();
contentStream.showText(line.substring(90, line.length()));
} else
// Print line as a whole
contentStream.showText(line);
// Print Carriage Return
contentStream.newLine();
}
} catch (IOException e) {
e.printStackTrace();
}
}
Thanks to @TilmanHausherr, the problem was with the TL operator. Each newly created page had a text leading (TL) of zero user-space units, so newLine() did not move the cursor down at all. I just had to set the text leading.
Here is the updated code:
PDPage page1 = new PDPage();
getInstance().getAnnexe().addPage(page1);
PDPageContentStream contentStream1 = new
PDPageContentStream(getInstance().getAnnexe(), page1, true, true);
// Set the Text Leading (TL operator) here!!!!
contentStream1.setLeading(12);
contentStream1.beginText();
contentStream1.newLineAtOffset(100F, 650F);
contentStream1.setFont(font, fontSize);
printMultipleLines(subSet, contentStream1);
contentStream1.endText();
contentStream1.close();
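As a small side note (my own habit, not part of the fix above): the leading is usually derived from the font size rather than hard-coded, for example:
// roughly 1.2x the font size is a common line spacing
contentStream1.setLeading(1.2f * fontSize);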
All thanks and credit go to @TilmanHausherr for his fast and accurate answer.
I have recently been modifying Grafika's TextureMovieEncoder to create a recording of what I display onscreen: two overlapping Sprite2ds. Using the CameraCaptureActivity example as a reference point, I effectively ported what I had created for my rendering thread into the TextureMovieEncoder, but the output is jagged lines across the screen. I think I understand what's wrong, but I don't know how to fix it:
Some code:
private void prepareEncoder(EGLContext sharedContext, int width, int height, int bitRate,
File outputFile) {
try {
mVideoEncoder = new VideoEncoderCore(width, height, bitRate, outputFile);
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
mEglCore = new EglCore(sharedContext, EglCore.FLAG_RECORDABLE);
mInputWindowSurface = new WindowSurface(mEglCore, mVideoEncoder.getInputSurface(), true);
mInputWindowSurface.makeCurrent();
textureProgram = new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT);
backgroundDrawable = new Drawable2d(Drawable2d.Prefab.RECTANGLE);
backgroundRect = new Sprite2d(backgroundDrawable);
frontDrawable = new Drawable2d(Drawable2d.Prefab.RECTANGLE);
frontRect = new Sprite2d(frontDrawable);
backgroundRect.setTexture(backTextureId);
frontRect.setTexture(frontTextureId);
updateGeometry();
}
private void handleFrameAvailable(Transform transform, long timestampNanos) {
if (VERBOSE) Log.d(TAG, "handleFrameAvailable tr=" + transform);
mVideoEncoder.drainEncoder(false);
backgroundRect.draw(textureProgram, transform.movieMatrix);
frontRect.draw(textureProgram, transform.cameraMatrix);
mInputWindowSurface.setPresentationTime(timestampNanos);
mInputWindowSurface.swapBuffers();
}
I think the problem comes down to my lack of understanding of how to establish the right projection onto the WindowSurface for the VideoEncoder. In the Grafika example, FullFrameRect is used, which is easier since you can just use the identity matrix to stretch a given texture over the surface area. However, since I want to create the overlapping effect, I needed to use Sprite2d. Is the problem the shared EGLContext? Do I need to create a new one so that I can set the viewport to match the WindowSurface size? I'm a bit lost on where to go from here.
It turns out the functionality of the code above was fine. The problem was the interaction between the TextureMovieEncoder and the calling parent.
I was initializing the member variables backTextureId and frontTextureId after prepareEncoder(), so the encoder was recording garbage texture data into the output.
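In other words, the fix was purely an ordering change on the calling side, roughly like this (the setTextures() setter and field names are illustrative, not my exact code; EncoderConfig and createTextureObject() are Grafika's own):
// Create the texture objects first, so the ids handed to the encoder are valid...
backTextureId = mTextureProgram.createTextureObject();
frontTextureId = mTextureProgram.createTextureObject();
// ...and only then pass them to the encoder and start recording.
mEncoder.setTextures(backTextureId, frontTextureId);   // illustrative setter
mEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
        outputFile, width, height, bitRate, EGL14.eglGetCurrentContext()));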
I'm working on a cocos2d-x project and I'm stuck. I want to call a function, delay, and then call the same function again. I'm using cocos2d-x 2.2.5 and developing for Android.
This is what I got so far:
CCArray *arr = CCArray::create();
arr->addObject(pSprite);
arr->addObject(pGetal);
CCFiniteTimeAction *fun1 = CCCallFuncO::create(this, callfuncO_selector(GameLayer::animateFlip), arr);
CCDelayTime *delay = CCDelayTime::create(1.0);
pSprite->runAction(CCSequence::create(fun1, delay, fun1, NULL));
The method I want to call:
void GameLayer::animateFlip(CCObject *pSObj, CCObject *pGObj){
CCSprite *pSprite = (CCSprite *) pSObj;
CCLabelTTF *pGetal = (CCLabelTTF *) pSObj;
...
...
}
The function is in the same class and requires two arguments. I've tried putting both arguments (the CCSprite and the CCLabelTTF) in an array, but it crashes at runtime...
When I call the function just like this no errors occur:
this->animateFlip(sprite1, getal1);
Does anyone have any idea?
Thanks for your answers. I've created a struct for my buttons like Joachim suggested, which holds the sprite and the label. All the buttons are put in an array. I also added the functions animateFlip and showFlip to it. animateFlip works fine and I'm able to call the function from the game layer for each individual button.
void GameLayer::ccTouchesBegan(cocos2d::CCSet *pTouches, cocos2d::CCEvent *pEvent)
{
CCTouch *touch = (CCTouch *)pTouches->anyObject();
CCPoint location = touch->getLocationInView();
location = CCDirector::sharedDirector()->convertToGL(location);
for(int i = 0; i < 12; i++){
if(butArr[i].sprite->boundingBox().containsPoint(location)){
butArr[i].animateFlip();
break;
}
}
}
The struct:
struct Button
{
cocos2d::CCSprite *sprite;
cocos2d::CCLabelTTF *getal;
int getalINT;
void showFlip();
void animateFlip();
};
But, as rule number one of programming tells us, where one problem is solved, two shall arise; I've stumbled upon a new problem:
void Button::showFlip()
{
CCFiniteTimeAction *fun1 = CCCallFunc::create(this, callfunc_selector(Button::animateFlip));
CCFiniteTimeAction *fun2 = CCCallFunc::create(this, callfunc_selector(Button::animateFlip));
CCDelayTime *delay = CCDelayTime::create(1.0f);
this->runAction(CCSequence::create(fun1, delay, fun2, NULL));
}
CCCallFunc::create() requires a target object (I guess?), which is usually 'this', but 'this' in my struct points to the Button. How can I get the context of the game layer?
Thanks again!
My question is how to handle an out-of-memory error when decoding a byte array into a bitmap so I can rotate it. My code is below, and before you say it's a duplicate: I have tried using BitmapFactory.Options and setting the sample size to 2, but the quality loss was far too bad to be acceptable. It also appears to be happening on only one device, so maybe it's a one-off thing; however, I'm inclined to believe that if it affects one device, there will be 25 more like it later. Also, this is happening on the FIRST photo taken, and this is the only work this activity does with bitmaps. And while I'm working in Monodroid, Java answers are welcome too, as I can usually translate them to C# fairly easily.
public void GotImage(byte[] image)
{
try
{
Android.Graphics.Bitmap thePicture = Android.Graphics.BitmapFactory.DecodeByteArray(image, 0, image.Length);
Array.Clear(image, 0, image.Length);
image = null;
GC.Collect();
Android.Graphics.Matrix m = new Android.Graphics.Matrix();
m.PostRotate(90);
Android.Graphics.Bitmap rotatedPicture = Android.Graphics.Bitmap.CreateBitmap(thePicture, 0, 0, thePicture.Width, thePicture.Height, m, true);
thePicture.Dispose();
thePicture = null;
GC.Collect();
using (MemoryStream ms = new MemoryStream())
{
rotatedPicture.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
image = ms.ToArray();
}
rotatedPicture.Dispose();
rotatedPicture = null;
GC.Collect();
listOfImages.Add(image);
storeButton.Text = " Store " + listOfImages.Count + " Pages ";
storeButton.Enabled = true;
takePicButton.Enabled = true;
gotImage = false;
cameraPreviewArea.camera.StartPreview();
}
catch (Exception ex)
{
AlertDialog.Builder alertDialog = new AlertDialog.Builder(this);
alertDialog.SetTitle("Error Taking Picture");
alertDialog.SetMessage(ex.ToString());
alertDialog.SetPositiveButton("OK", delegate { });
alertDialog.Show();
}
}
What does rotatedPicture.Dispose() do? Does it just set the reference to null? The best and quickest way to release a Bitmap's memory is via the recycle() method.
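For reference, a minimal Java sketch of the recycle() approach (variable names are illustrative; the same calls exist in the Monodroid bindings as Recycle()):
// Decode, rotate, then immediately release the source bitmap's native pixel
// memory instead of waiting for the garbage collector.
Bitmap source = BitmapFactory.decodeByteArray(image, 0, image.length);
Matrix m = new Matrix();
m.postRotate(90);
Bitmap rotated = Bitmap.createBitmap(source, 0, 0,
        source.getWidth(), source.getHeight(), m, true);
if (rotated != source) {
    source.recycle();   // frees the pixel data of the original bitmap
}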
Well, after a long day of learning, I discovered a fix/workaround. It involved setting the resolution of the picture in the camera parameters before the picture was taken, instead of trying to scale it down after the fact. I also added an option in settings for the user to try different resolutions until they find one that works best for them.
Camera.Parameters parameters = camera.GetParameters();
parameters.SetPictureSize(parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Width,
parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Height);
camera.SetParameters(parameters);
camera.StartPreview();
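Note that the order of the supported-size list is not guaranteed, so taking the last entry is a device-specific assumption; a safer variant (sketched in Java, since Java answers were welcomed above) picks the smallest size explicitly:
// Choose the smallest supported picture size by pixel area rather than by list position.
Camera.Parameters params = camera.getParameters();
Camera.Size smallest = null;
for (Camera.Size s : params.getSupportedPictureSizes()) {
    if (smallest == null || s.width * s.height < smallest.width * smallest.height) {
        smallest = s;
    }
}
params.setPictureSize(smallest.width, smallest.height);
camera.setParameters(params);
camera.startPreview();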
I've been asking questions regarding my Android project that continually plots Bluetooth data in real-time.
Basically, what I've already done is create a first version of my app by cobbling together two pieces of open-source code, Blueterm and OrientationSensorExample.
It's been suggested that I add a thread, a handler, a Service, or use Async Task, or AIDL, etc. But I don't know how to use any of these and would appreciate an explanation.
Here's a description of the Blueterm open-source code I started with (see link above). Blueterm is basically a terminal emulator that communicates over Bluetooth. It consists of several activities, with Blueterm being the most important. It discovers, pairs, and connects with a remote Bluetooth device that supports SPP/RFCOMM. Once connected, I can use Blueterm to configure the remote device by sending it commands to turn on sampling, change the number of channels to sample (to one channel), change the format of the incoming data (I like comma-separated data), etc.
Here's a description of the OrientationSensorExample open-source code I started with (see link above). It's basically an example application for the AndroidPlot library. The OrientationSensor activity implements SensorEventListener, which includes overriding onSensorChanged(); that method is called whenever new orientation sensor data arrives, and it redraws the graph.
Having cobbled together these two open source projects (Blueterm and OrientationSensorExample) into one application (Blueterm), here's a description of how the overall application works. When I start Blueterm, the whole screen emulates a nice blue terminal. From the Options Menu I discover, pair with, connect to, and configure a remote Bluetooth device as described above. Once I have configured the remote device, I go to the Options Menu again and select "Plot data", which launches the Plot activity. The terminal emulator goes away, and a nice scrolling real-time plot from the Plot activity shows up.
As far as I can tell there is a background thread that calls an update() method as follows:
/**
* Look for new input from the ptty, send it to the terminal emulator.
*/
private void update() {
int bytesAvailable = mByteQueue.getBytesAvailable();
int bytesToRead = Math.min(bytesAvailable, mReceiveBuffer.length);
try {
int bytesRead = mByteQueue.read(mReceiveBuffer, 0, bytesToRead);
append(mReceiveBuffer, 0, bytesRead);
//VTR use existing handler that calls update() to get data into plotting activity
Plot.plotData(mReceiveBuffer, 0, bytesRead);
} catch (InterruptedException e) {
//VTR OMG they're swallowing this exception
}
}
In the update() method I found it convenient to call my Plot.plotData() method and pass it the same data that is passed to the append() method, in order to plot the data. NOTE: This only works if plotData() is a static method. No one has been able to explain why.
Anyway, plotData() is a static method, and here's how it and its helper methods look now:
private static StringBuffer strData = new StringBuffer("");
public static void plotData(byte[] buffer, int base, int length) {
Log.i("Entering: ", "plotData()");
/*
byte[] buffer = (byte[]) msg.obj;
int base = msg.arg1;
int length = msg.arg2;
*/
for (int i = 0; i < length; i++) {
byte b = buffer[base + i];
try {
if (true) {
char printableB = (char) b;
if (b < 32 || b > 126) {
printableB = ' ';
}
Log.w("Log_plotData", "'" + Character.toString(printableB)
+ "' (" + Integer.toString(b) + ")");
strData.append(Character.toString(printableB));
if (b == 10)
{
Log.i("End of line: ", "processBlueData()");
Log.i("strData", strData.toString());
splitData(strData);
strData = new StringBuffer("");
}
}
} catch (Exception e) {
Log.e("Log_plotData_exception", "Exception while processing character "
+ Integer.toString(i) + " code "
+ Integer.toString(b), e);
}
}
Log.i("Leaving: ", "plotData()");
}
private static void splitData(StringBuffer strBuf) {
String strDash = strBuf.toString().trim();
String[] strDashSplit = strDash.split("-");
for (int ndx = 0; ndx < strDashSplit.length; ndx++)
{
if (strDashSplit[ndx].length() > 0)
Log.i("strDashSplit", ndx + ":" + strDashSplit[ndx]);
String strComma = strDashSplit[ndx].trim();
String[] strCommaSplit = strComma.split(",");
for (int mdx = 0; mdx < strCommaSplit.length; mdx++)
{
if (strCommaSplit[mdx].length() > 0)
Log.i("strCommaSplit", mdx + ":" + strCommaSplit[mdx]);
if (mdx == 1)
{
int raw = Integer.parseInt(strCommaSplit[1],16);
Log.i("raw", Integer.toString(raw));
float rawFloat = raw;
Log.i("rawFloat", Float.toString(rawFloat));
float ratio = (float) (rawFloat/65535.0);
Log.i("ratio", Float.toString(ratio));
float voltage = (float) (5.0*ratio);
Log.i("voltage", Float.toString(voltage));
nowPlotData(voltage);
}
}
}
}
public static void nowPlotData(float data) {
// get rid of the oldest sample in history:
if (plotHistory.size() > HISTORY_SIZE) {
plotHistory.removeFirst();
}
// add the latest history sample:
plotHistory.addLast(data);
// update the plot with the updated history Lists:
plotHistorySeries.setModel(plotHistory, SimpleXYSeries.ArrayFormat.Y_VALS_ONLY);
//VTR null pointer exception?
if (plotHistoryPlot == null)
Log.i("aprHistoryPlot", "null pointer exception");
// redraw the Plots:
plotHistoryPlot.redraw();
}
If it is strongly recommended that plotData() not be a static method and that I should do something else instead, please explain what and how. Thanks!
This might be a question much better suited for Code Review, rather than here. Perhaps you can reformulate to post it there, or trim it a lot to repost it here.
Furthermore, to answer "It's been suggested that I add a thread, a handler, a Service, or use Async Task, or AIDL, etc. But I don't know how to use any of these and would appreciate an explanation.": the best advice would be to point you to a book about Android, such as http://commonsware.com/Android/ . Chapters 35 and 36 deal with services, while chapter 20 is about threads. You will never get an answer here as complete as those chapters.
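As a very rough illustration of the thread/handler idea (the method names below are made up, not from Blueterm): the static call most likely "just works" because the reader thread has no reference to a Plot instance, only to the class. If the reader thread is given a reference to the activity, it can post the bytes to the main thread through a Handler, and plotData() can then be a normal instance method:
// Inside the Plot activity (illustrative sketch, assumes android.os.Handler/Looper
// and java.util.Arrays are imported):
private final Handler uiHandler = new Handler(Looper.getMainLooper());
public void postData(byte[] buffer, int base, int length) {
    // Copy the bytes, because mReceiveBuffer is reused by the reader thread.
    final byte[] copy = Arrays.copyOfRange(buffer, base, base + length);
    uiHandler.post(new Runnable() {
        @Override
        public void run() {
            plotData(copy, 0, copy.length);  // now an instance method, run on the UI thread
        }
    });
}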