On Windows we can call MyThread.WaitFor several times on the same thread. If the thread has already terminated there is no problem: the call does not raise any exception and returns immediately (the normal behaviour).
On Android it's different: if we call MyThread.WaitFor twice, the second call raises an exception with "No such process".
function TThread.WaitFor: LongWord;
{$ELSEIF Defined(POSIX)}
var
  X: Pointer;
  ID: pthread_t;
begin
  if FExternalThread then
    raise EThread.CreateRes(@SThreadExternalWait);
  ID := pthread_t(FThreadID);
  if CurrentThread.ThreadID = MainThreadID then
    while not FFinished do
      CheckSynchronize(1000);
  FThreadID := 0;
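  // (annotation, not part of the RTL source: once FThreadID has been cleared here,
  //  a later call to WaitFor passes 0 to pthread_join, which fails with ESRCH, "No such process")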
  X := @Result;
  CheckThreadError(pthread_join(ID, X));
end;
{$ENDIF POSIX}
The error occurs because the call to WaitFor sets FThreadID := 0, so of course any further call will fail.
I think it should be written like this:
function TThread.WaitFor: LongWord;
{$ELSEIF Defined(POSIX)}
begin
  if FThreadID = 0 then Exit;
  ...
end;
{$ENDIF POSIX}
What do you think? Should I open a bug report with Embarcadero?
The documentation for pthread_join says:
Joining with a thread that has previously been joined results in undefined behavior.
This explains why TThread takes steps to avoid invoking undefined behavior.
Is there a defect in the design? That's debatable. If we are going to consider the design of this class, let's broaden the discussion, as the designers must. A Windows thread can be waited on by multiple different threads. That's not the case for pthreads. The linked documentation also says:
If multiple threads simultaneously try to join with the same thread, the results are undefined.
So I don't think Embarcadero could reasonably implement the same behaviour on Posix platforms as already exists on Windows. For sure they could special case repeated waits from the same thread, as you describe. Well, they'd have to persist the thread return value so that WaitFor could return it. But that would only get you part way there, and wouldn't be very useful anyway. After all, why would you wait again from the same thread?
I suspect that FThreadID is set to 0 in an effort to avoid the undefined behaviour and fail in a more robust way. However, if multiple threads call WaitFor then there is a data race so undefined behaviour is still possible.
If we were trying to be charitable, then we could view setting FThreadID to 0 as a pragmatic, best-effort mitigation rather than a design flaw.
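To make the special-casing idea concrete, here is a minimal sketch of what it might look like; it is not the RTL implementation, and the names TJoinOnceThread, FWaited and FExitCode are purely illustrative. It performs the real WaitFor (and hence pthread_join) exactly once and caches the exit code for later calls:

type
  TJoinOnceThread = class(TThread)
  private
    FWaited: Boolean;
    FExitCode: LongWord;
  protected
    procedure Execute; override;
  public
    function WaitForOnce: LongWord;
  end;

procedure TJoinOnceThread.Execute;
begin
  // ... the real work goes here ...
end;

function TJoinOnceThread.WaitForOnce: LongWord;
begin
  if not FWaited then
  begin
    FExitCode := WaitFor;  // the underlying pthread_join happens exactly once
    FWaited := True;
  end;
  Result := FExitCode;     // later calls just return the cached exit code
end;

Note that this only helps repeated waits from a single thread; if several threads call WaitForOnce concurrently there is still a data race on FWaited, which is exactly the caveat above.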
Leaving those specific details to one side, it is clear that if WaitFor is implemented by calling pthread_join then differing behaviour across platforms is inevitable. Embarcadero have tried to align the TThread implementations for each platform, but they cannot be perfectly equivalent because the platform functionality differs. Windows offers a richer set of threading primitives than pthreads.
If Embarcadero had chosen a different path they could have aligned the platforms perfectly but would have needed to work much harder on Posix. It is possible to replicate the Windows behaviour there, but this particular method would have to be implemented with something other than pthread_join.
Facing reality, though, I think you will have to adapt to the different functionality of pthreads. In pthreads the ability to wait on a thread is included merely as a convenience. You would do better to wait on an event or a condition variable instead, if you really do want to support repeated waits. On the other hand you might simply restructure your code to ensure you only wait once.
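For example, here is a minimal sketch using a manual-reset TEvent from System.SyncObjs; the class TSignallingThread and its Done field are illustrative names, not part of the RTL:

uses
  System.Classes, System.SyncObjs;

type
  TSignallingThread = class(TThread)
  protected
    procedure Execute; override;
  public
    Done: TEvent;
    constructor Create;
    destructor Destroy; override;
  end;

constructor TSignallingThread.Create;
begin
  inherited Create(True);                      // create suspended
  Done := TEvent.Create(nil, True, False, ''); // manual reset, initially unsignalled
end;

destructor TSignallingThread.Destroy;
begin
  Done.Free;
  inherited;
end;

procedure TSignallingThread.Execute;
begin
  try
    // ... the real work goes here ...
  finally
    Done.SetEvent; // remains signalled, so it can be waited on any number of times
  end;
end;

Any thread can then call Done.WaitFor as often as it likes, before or after the worker has finished, and the behaviour is the same on Windows and the POSIX platforms.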
So, to summarise, you should probably raise an issue with Embarcadero, if there isn't one already. It is possible that they might consider supporting your scenario. And it's worth having an issue in the system. But don't be surprised if they choose to do nothing and justify that because of the wider platform differences that cannot be surmounted, and the extra complexity needed in the class to support your somewhat pointless use case. One thing I expect we can all agree on though is that the Delphi documentation for TThread.WaitFor should cover these issues.
I have a working app which I need to speed up. I set up profiling (see here for details), which appears to report how much time each function takes. I cannot find a way to discover anything about the time consumed in different sub-parts of functions.
I then inserted the keyword "inline" in the declarations of some frequently accessed small functions hoping for some speedup. But when I profiled again, I saw the same list of functions, including the ones I'd made inline. This made me suspicious as to whether the inline keyword had just been ignored.
I have a vague recollection that with some compilers the inline keyword was something that the compiler could optionally ignore, depending on things like the amount of memory available.
So is there some check I could do to confirm whether or not the "inline" keyword has actually done its job?
You could try:
examining the compiler's assembly or machine code output (whether disassembling or just checking for the function symbol with nm or whatever android has), or stepping through with a debugger
using a compiler pragma/attribute to force inlining (if available, for example GCC has a function attribute always_inline), if your profiling results aren't affected then presumably the compiler was already inlining
checking your profiling docs to make sure that however you're doing profiling doesn't inhibit inlining
As you recalled, inline (and member functions defined inside their class, which are implicitly inline) is just a hint for the compiler. Some people argue it's just a convenient way to manage One Definition Rule issues, but you'd have to check individual C++ compilers' code to see whether the keyword really is that meaningless these days. The compiler might use all sorts of metrics to work out when to inline, including the optimisation flags in effect, the size of the out-of-line function, the number of calls to the function (e.g. if there's only one, why not inline even a large function?), etc.
On the Windows platform, TCriticalSection is implemented by calling the Windows API functions EnterCriticalSection/LeaveCriticalSection. Microsoft's documentation explicitly says that after a thread has ownership of a critical section, it can make additional calls to EnterCriticalSection.
So far so good.
But what about the behavior under the other platforms Delphi supports such as OSX, iOS and Android?
The other platforms seem to make use of TMonitor, so the question could be rewritten in terms of TMonitor.
The implementation of TCriticalSection on platforms other than Windows simply uses TMonitor, so the answer to your question reduces to the behaviour of TMonitor.Enter. At least the documentation states that TMonitor.Enter is reentrant.
This part of the documentation would suggest the answer to your question is "yes":
Prohibits the access of all other threads but the calling one to the specified object.
The relevant code part of TMonitor is in TMonitor.TryEnter:
function TMonitor.TryEnter: Boolean;
begin
  if FOwningThread = GetCurrentThreadId then // check for recursion
  begin
    ...
    Result := True;
    ...
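So the recursion check above means the same thread can acquire a TCriticalSection more than once on the non-Windows platforms too, as long as each Enter is balanced by a Leave. A minimal sketch (assuming System.SyncObjs) that should run to completion rather than deadlocking:

program RecursiveEnterDemo;

uses
  System.SyncObjs;

var
  CS: TCriticalSection;

begin
  CS := TCriticalSection.Create;
  try
    CS.Enter;
    CS.Enter;   // second Enter from the same thread does not block
    // ... protected work ...
    CS.Leave;
    CS.Leave;   // each Enter must be balanced by a matching Leave
  finally
    CS.Free;
  end;
end.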
In the Android SDK documentation, the page entitled "Using DDMS" has the following comment under the subheading "How DDMS Interacts with a Debugger":
Known debugging issues with Dalvik - Debugging an application in the Dalvik VM should work the same as it does in other VMs. However, when single-stepping out of synchronized code, the "current line" cursor may jump to the last line in the method for one step.
In this context, I've two questions:
a) I'm not sure what "synchronized code" refers to. Are we talking about "debug" code, code using the "synchronized" keyword, or something else? I'm lacking a definition on the page, and "synchronized" is a generic term, so it's not clear to me where the limitation actually lies.
b) Depending on the answer to (a), my second question is: what does stepping "out" of synchronized code mean?
Your help in explaining this would be appreciated with thanks.
I believe they simply meant "synchronous code". Asynchronous code might jump to other threads as the scheduler sees fit, but synchronous code should proceed in order. They have mentioned a known peculiarity with the Dalvik debugger, that it makes a seemingly inexplicable jump when it should step from one line of execution to the next. That issue has actually confused me once or twice...
synchronized is a keyword you may use either on methods or on blocks. It is helpful when using threads.
Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, all reads or writes to that object's variables are done through synchronized methods.
http://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html
I want to trace system calls for some specific lines of code in an Android application. Using strace or system call hooking, I can get the list of system calls made by an APK.
I was wondering whether there is a function call, or anything else, for which we already know the exact number of system calls it makes, so that I can put it before and after the lines of code I'm interested in?
The trouble with what you're trying to do is that you're one step removed from the underlying system by the Java VM. It can be doing all sorts of system-call-related work under your feet that your app has no control over.
Outside the Android world the most often used system call is gettimeofday, as it is never cached by the library functions and is reasonably easy to see happen. However, as you noted in your previous question, this is a lot harder here because the VM makes gettimeofday calls of its own accord (most likely for accounting purposes). Therefore you want to choose an Android function that results in a system call but isn't likely to be called normally by the VM. Some candidates, looking through the API, include:
statfs
socket
Good luck.
You could find out yourself. Count the system calls in a "hello world" app, add a function call to it, count the calls again, and the difference is the number of calls made by the added function.
I have an Android project with a native component. I'm using a third-party library where I suspect there's a bug with an uninitialized or un-reset variable. The same sequence of calls (which should be equivalent according to the interface definition) yields different results.
I've got the sources to the library, but I don't want to dig deep into them (it's really big and convoluted). Is there a way to leverage something like GDB to compare two runs of a piece of code - to see if the variable state diverges at any point? It should not - the code is completely in-memory, with no I/O or randomness involved.