I have my CI on TeamCity. It currently triggers builds on various events (Pull Request creation, merges into the Develop branch, etc.). However, I was wondering whether it is possible to trigger a specific build by writing a specific comment on, or applying a label to, a Pull Request.
The goal is to run a set of automated UI tests once a Pull Request has been approved (from a code-correctness point of view) and its branch is ready to be merged into Develop. I don't want to run that suite on every commit to the branch, as it takes around an hour, and I don't want to run it only after the PR is merged, so that we avoid merging anything that breaks the UI tests into Develop.
The desired flow would be to write a special comment on the PR, such as run_UI_test, or to label the PR with a custom label, so that the tests are executed on CI and the feedback is displayed on the PR on GitHub.
Thank you very much in advance.
I don't believe TeamCity has any awareness of comments on GitHub, since the comments themselves aren't stored in the branches. My assumption is that you have a VCS Root similar to this:
+:refs/heads/master
+:refs/heads/develop
+:...
+:refs/pull/*/head
It is possible to access the comments of a pull request through the GitHub Issues API; however, I think this would add unnecessary complexity to your build process and would obscure how the builds are getting triggered.
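If you did want to explore that route anyway, a minimal sketch might look like this (repo, PR number, build configuration ID, branch name, and tokens are all placeholders, and the substring check on the raw JSON is deliberately crude):

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    fun main() {
        val client = HttpClient.newHttpClient()

        // GitHub's Issues API exposes PR comments under /issues/{number}/comments.
        val comments = client.send(
            HttpRequest.newBuilder(URI("https://api.github.com/repos/acme/app/issues/42/comments"))
                .header("Authorization", "Bearer <github-token>")
                .build(),
            HttpResponse.BodyHandlers.ofString()
        ).body()

        // Crude check: queue the UI-test build if anyone commented "run_UI_test".
        if ("run_UI_test" in comments) {
            // The logical branch name depends on your branch specification.
            val build = """<build branchName="pull/42"><buildType id="App_UiTests"/></build>"""
            client.send(
                HttpRequest.newBuilder(URI("https://teamcity.example.com/app/rest/buildQueue"))
                    .header("Authorization", "Bearer <teamcity-token>")
                    .header("Content-Type", "application/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(build))
                    .build(),
                HttpResponse.BodyHandlers.ofString()
            )
        }
    }

You would still need something to run this periodically or from a webhook, which is exactly the kind of hidden machinery that makes the triggering hard to follow.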
My suggestion would be to follow a more traditional CI process and create another layer of "integration" through a new branch. So basically your flow would look like this:
master (merge pull requests for release)
integration (Run UI automation test)
develop (Run basic unit tests and other stuff)
All of your development happens on develop or some other "feature" branch. When all the basic tests are passing and you are ready to "promote", you merge into the integration branch, which then triggers your UI tests. I also want to point out that this "integration" branch could in fact just be pull requests; no static branch is actually required, depending on how your development flow is set up.
It's much easier to set up trigger rules and branch filters in TeamCity than it would be to write a custom REST API script against the GitHub Issues API.
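For example, the UI-test build configuration only needs a VCS trigger whose branch filter matches the integration branch (name illustrative):

+:integration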
You can achieve similar functionality by setting up a separate build configuration which utilizes a specific VCS Trigger to run the UI tests. This build configuration would also have different build steps which would include the commands to execute your tests.
For example, in Triggers you can add a new VCS trigger with +:comment=presubmit:**. This would look for any commit message that contains "presubmit" and would trigger your UI test suite.
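As I read the TeamCity docs, the general shape of these rules is +|-:[user=<name>;][root=<VCS root ID>;][comment=<wildcard>:]<path wildcard>, so you could also scope the rule to a single root, e.g. (App_MainRoot is a made-up VCS root ID):

+:root=App_MainRoot;comment=presubmit:**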
I have seen a few repositories use tools such as https://danger.systems to write custom rules that look for text in GitHub comments and trigger interactions based on the comment.
I want to implement a Git interface in my Flutter app: I need it to have basic functionality (commit, push, pull, merge) on an internal-path repo (no external storage access). There is this git package for Dart, but it is only a CLI interface, i.e. not compatible with Android, which is the main issue I'm facing, as I need my app to run on both Android and Windows.
I happen to use GitJournal, an Android note-taking app that uses Git to sync. I found out that it is also built in Flutter -- it does essentially the same things I need out of this Git implementation, so eureka, maybe?
It seems to be using libgit2, along with both(?) this custom git_bindings Dart package (related pub.dev page) and this other dart-git implementation written fully in Dart. The latter apparently isn't fully featured, is experimental, and is maybe only used alongside the aforementioned git_bindings package.
The last thing I found: one could write custom ffigen-libgit2 bindings, but I'm positive that some of the things I listed above already use this in some way. Plus, I would have no clue where to start with that.
I don't require a bullet-proof implementation, meaning that I'd be willing to play around with an experimental / bleeding-edge solution, as long as it does the job and doesn't require me to reinvent the wheel.
I have no familiarity with bindings, interfaces between languages and such trickeries, but I'd be willing to do some learning if necessary. Needless to say, the simpler the solution, the better -- I chose Flutter for a reason, after all :D
Thanks in advance
We have a repo that contains the base code for a React Native app (SaaS).
In the same repo, we have multiple branches, one for each client (a separate app).
Master contains the main base code (we push any new features/fixes to it).
Now we have issues when we want to push new features/fixes to the other branches! The issues are about the package name, icons, etc. (native stuff) when we open a PR to take updates from master => client-1.
So, do you have any tips to help us manage all fixes/features in the base code and apply them to the other branches without being affected by the iOS/Android-specific things?
This is potentially an insufficiently specific question, but I'll try to help.
The way I'd approach this is to have just develop, staging, and main branches, to minimize the potential for conflicts. In what I assume is a monorepo, I'd then propagate base changes to the client apps in an orchestrated manner via local package management, using Lerna or Yalc, plus a CI/CD step. In effect, any change to base would cause a version bump of "base" in the dependent clients, and a rebuild + release of those clients.
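As a rough sketch of that last part (a hedged illustration, not a prescription; assuming Lerna with conventional commits), the CI step after a merge that touches base might be little more than:

    npx lerna version --conventional-commits --yes   # bump base and its dependents
    npx lerna run build                              # rebuild the affected clients

Yalc gets you something similar for local development, via yalc publish in base and yalc update in each client.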
I've got this Android project, with the default Gradle tasks. In the project there is an androidTest package, containing integrationTests and uiTests. I also have two Kotlin classes, each containing a suite of test classes to be called.
However, ./gradlew connectedAndroidTest runs both integrationTests and uiTests, and I want to separate them. I came up with multiple possible solutions:
Android Studio's run configurations. However, these aren't checked into VCS, so we have no access to them on our Jenkins build server.
Adding Gradle tasks in Groovy; however, calling another Gradle task from within a new task is discouraged.
So I'm looking for a way to only test either integrationTests or uiTests. How can I do this?
I'm going to give you a cheap and cheerful answer now. Maybe someone else will be able to provide a fuller one.
Since all the tests appear to be part of the same source set, you need to distinguish between them some other way. The most appropriate solution is whatever mechanism your testing library has for grouping, which you can then utilise from Gradle.
Alternatively, use something like a naming convention to distinguish between UI tests and integration tests.
What you do then depends on how you want the build to deal with these different categories. The main options include:
Using test filtering from the command line (via the --tests option) to run just the integration or UI tests. Note that filtering only works on class names, so you'd have to go with the naming convention approach.
Configuring the appropriate Test task (is that connectedAndroidTest?) so that if you set a particular project property, it runs either the integration tests or the UI tests based on that property's value. This involves an if condition in the configuration, and it works with both filtering and grouping.
Adding two extra Test tasks, one that executes the integration tests and one that executes the UI tests, leaving connectedAndroidTest untouched. This is my preferred approach, but it requires a bit more code than the others.
This answer is missing a lot of detail on how to implement those solutions, but I'm afraid filling that detail out is too time-consuming for me right now. As I said, hopefully someone will come along with a fuller answer.
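For what it's worth, here is a minimal sketch of the second option, assuming you group the tests by package (names invented) and lean on AndroidJUnitRunner's standard package argument:

    // app/build.gradle.kts (sketch only; package names are made up)
    android {
        defaultConfig {
            // Pass -PtestGroup=ui or -PtestGroup=integration on the command line.
            // With no property set, connectedAndroidTest runs everything as before.
            when (project.findProperty("testGroup")) {
                "ui" ->
                    testInstrumentationRunnerArguments["package"] = "com.example.app.uitests"
                "integration" ->
                    testInstrumentationRunnerArguments["package"] = "com.example.app.integrationtests"
            }
        }
    }

Then ./gradlew connectedAndroidTest -PtestGroup=ui should run only the UI tests, because the runner's package argument restricts the run to that package.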
I'm using Jenkins for my Android continuous integration. I have some isolated, independent Robotium UI tests that currently take 12 minutes to run serially against a single emulator. Can anybody recommend a good way to run them in parallel so it will take only 6 minutes (or less)?
I know about various ways to run the full test suite in parallel on multiple devices/emulators, e.g. see the Multi-configuration (matrix) job section of the Jenkins Android Emulator Plugin, Spoon, or cloud testing companies like AppThwack.
I know how to run a specific subset of my tests, by using JUnit annotations, or apparently Spoon supports a similar function (see my question about it).
I'm now using Spoon to run my full test suite (mostly to take advantage of the lovely HTML output with screenshots). If anybody has tips on the best way to split my tests and run them in parallel, that would be great.
I assume I could achieve this by splitting the tests into two separate CI jobs, but it sounds like a pain to maintain two separate jobs and combine the results.
Update: I've added another answer which I think gives a cleaner and more concise Jenkins configuration, and is based more directly on Spoon.
I've just discovered the Jenkins MultiJob Plugin which allows you to run multiple jobs in parallel within a Phase.
Below is my working, but slightly fragile, approach using the Fork plugin. I use manually configured regular expressions to partition the tests (this was the main reason I tried Fork: it supports regexes).
The MultiJob runs multiple downstream jobs in parallel within a single Phase.
Main job configuration
Here's how my "Android Multi Job" is configured:
Downstream job configuration
The downstream "Android Phase N" jobs are configured identically, except that each one passes a different android.test.classes regular expression to Fork.
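In practice, the build step of each downstream job just runs Fork with its own slice, something like the following (the Gradle task name here is illustrative; the property is the android.test.classes one mentioned above):

    ./gradlew fork -Pandroid.test.classes="com.example.tests.ui.*"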
Gotchas
Fork currently fails to run on Gradle v1.0.0, as per fork plugin issue #6.
If you want a Fork regex to match multiple different packages, you need to comma separate your regex. This isn't very well documented in the Fork project, but their TestClassFilter source shows you how they interpret the regex.
Any abstract test classes need to be named Abstract*, otherwise Fork will try to run them as tests, creating annoying failures. Their TestClassScanner controls this, and issue #5 tracks changing this.
IIRC, you need to have the Fingerprint Plugin installed for the "Aggregate downstream test results" option to work. If you don't have it installed you'll see this error: "Fingerprinting not enabled on this build. Test aggregation requires fingerprinting."
Limitations
Test results are aggregated, but only using the JUnit XML test reports. This means you need to click through to each downstream job to view nice HTML results.
Manually partitioning your tests based on regular expressions can be tedious and error prone. If you use this approach I recommend you still have a nightly/weekly Jenkins job to run your full test suite in a single place, to make sure you don't accidentally lose any tests.
This MultiJob approach requires you to manually configure each downstream job, one for each slave node you want to use. We've prototyped a better approach using a Matrix job, where you only have to configure everything in a single Jenkins job. We'll try to write that up in the next couple of weeks.
Futures
We've also prototyped a way of extending Spoon (the output is prettier than Fork's) to automatically split the whole test suite across N downstream jobs. We still need to enhance it to aggregate all those results back into a single HTML page in the upstream job, but unfortunately a bug in the Jenkins "Copy To Slave" plugin is blocking this from working at the moment.
You can perform this in 3 steps:
1. Create 2 nodes pointing to the single target machine (which satisfies your condition of running the tests on the same machine).
2. In the job, during execution, use the Jenkins environment variable $NODE_NAME to assign a different set of tests to each node (you may need the NodeLabel Parameter Plugin); see the sketch below.
3. After execution you will have 2 report files, luckily on the same machine. You can either merge them into one, if they are text files, or create XML similar to the PerfPublisher plugin format, which gives you a detailed report.
This means you can actually execute 2 sets of tests on the same machine (2 nodes pointing to it) using a single job. Obtaining a single report would be tricky, but if I get to know the format I can help.
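To illustrate step 2, the job's shell build step could branch on $NODE_NAME along these lines (node names, test classes, and the Gradle invocation are placeholders for whatever your setup needs):

    # Same job, running on whichever of the two nodes picked it up
    case "$NODE_NAME" in
      target-node-a) CLASS=com.example.tests.SuiteA ;;
      target-node-b) CLASS=com.example.tests.SuiteB ;;
    esac
    ./gradlew connectedAndroidTest \
        -Pandroid.testInstrumentationRunnerArguments.class=$CLASS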
Hope this is useful
This answer is an improvement on my previous MultiJob answer.
The best way I've found to do this is to use a Jenkins Matrix job (a.k.a. "Multi-configuration project"). This is really convenient because you can configure everything in a single Jenkins job.
Spoon now supports a --e option which allows you to pass arguments directly to the instrumentation runner. I've updated their README file with a section on Test Sharding.
That README should give you what you need, but here are other highlights from our Jenkins job configuration, in case that helps.
The User-defined Axis sets the number of slave nodes we want to run. We have to set the label to android so our cloud provider can launch an appropriate slave.
We have a run-build.sh script which invokes Spoon with the correct parameters. We need to pass in the total node count (in this case 6), and the index of the specific slave node being run (automatically present in the node_index variable).
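The interesting part of run-build.sh boils down to something like this (APK paths are illustrative; numShards and shardIndex are the standard instrumentation-runner sharding arguments, forwarded through Spoon's --e option):

    java -jar spoon-runner.jar \
        --apk app/build/outputs/apk/app-debug.apk \
        --test-apk app/build/outputs/apk/app-debug-androidTest.apk \
        --e numShards=6 \
        --e shardIndex=$node_index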
The post build steps shouldn't be any different to your existing ones. In future we'll probably need to add something to collect up the results onto the master node (this has been surprisingly difficult to figure out). For now you can still click through to results on the slaves.
You can use, for example, the Jenkins MultiJob Plugin and the Testdroid API to send your APKs to real devices. That's probably the easiest way to do it. Ref guide here.
I'm trying to set up continuous integration with an Android project and Cucumber.
The idea is to write tests in Cucumber and run the tests on my Android build via Cuke4Duke and NativeDriver for Android.
When I have this running, I plan to use Maven and a Jenkins server to automate the testing, so it is run every time I commit to the Subversion repo.
Has this been done before? Is there a good guide somewhere? Or is it a bad idea to do it this way?
We're doing exactly what you're planning to do with Maven, Jenkins, and Git. The missing ingredient is the android/cucumber integration from lesspainful.com.
I don't think that what you've planned is a bad idea. But I don't know of anyone that's doing Android CI with that particular setup.
You may also want to take a look at Robotium; it's like Selenium for Android and offers a very rich DSL that will help with your cuke4duke step implementations.
In my company we use a slightly different setup (but you will probably have to solve similar challenges): Jenkins + Jenkins Android Plugin + Robotium + Ant. We found that Ant is hard to maintain when you try to use it for anything more complicated than a simple build, and we are rewriting our scripts in Gradle.
It works quite well, however you should be aware of two potential problems:
1. The emulator is slow (even on a fast server); you can consider attaching a physical device to your server.
2. You probably have to set up a lock (or use only one executor) for the emulator, since using multiple emulator instances is hard/tricky.
What we have done is write a test instrumentation engine on top of Robotium. This engine is mainly a state machine that reads keywords from a text file and converts them into Robotium API calls. We initially noticed that the inputs and outputs were always the same: the user taps on the screen, and a new screen or new text is displayed.
That allows us to implement keyword-driven testing, though it runs on the device, so not remotely.
It is 20% of the effort for 80% of the benefit: it's easy to write/add new tests, and they're readable by anybody. Of course there are limitations, but our goal was achieved.
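To give a flavour of the idea, a stripped-down engine might look like this (a sketch only, not our actual implementation; the keywords and script format are invented, and it drives Robotium's Solo API):

    import com.robotium.solo.Solo

    // Interprets script lines such as "tap Login", "type 0 hello", "expect Welcome".
    class KeywordEngine(private val solo: Solo) {
        fun run(script: List<String>) {
            for (line in script) {
                val parts = line.split(" ", limit = 2)
                when (parts[0]) {
                    "tap" -> solo.clickOnText(parts[1])
                    "type" -> {
                        val (index, text) = parts[1].split(" ", limit = 2)
                        solo.enterText(index.toInt(), text)
                    }
                    "expect" -> check(solo.waitForText(parts[1])) { "Missing text: ${parts[1]}" }
                    else -> error("Unknown keyword: ${parts[0]}")
                }
            }
        }
    }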
Cheers
Ch